QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
76,383,242
| 6,500,048
|
Dictionary Comprehension within pandas dataframe column
|
<p>I'm trying to match a dictionary key against a string value from another column.
Sample data:</p>
<pre><code>df = A B
0 'a' {'a': '2', 'b': '5'}
1 'c' {'a': '2', 'b': '16', 'c': '32'}
2 'a' {'a': '6', 'd': '23'}
3 'd' {'b': '4', 'd': '76'}
</code></pre>
<p>I'm trying to get the following out:</p>
<pre><code>Df = A B
0 'a' {'a': '2'}
1 'c' {'c': '32'}
2 'a' {'a': '6'}
3 'd' {'d': '76'}
</code></pre>
<p>I got this far not inside a dataframe:</p>
<pre><code>d = {k: v for k, v in my_dict.items() if k == 'a'}
</code></pre>
<p>for a single dictionary, but I couldn't get the following to work inside the dataframe (to be fair, I didn't expect it to work directly, but was hoping I was close):</p>
<pre><code>Test_df['B'] = {k: v for k, v in test_df['B'].items() if k == test_df['A']}
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What do I need to do to get this to work, or is there a better more efficient way?</p>
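A minimal sketch of one way around that error, using the sample data from the question: apply the comprehension row-wise so each row's `A` is a scalar rather than a whole Series (the ambiguous-truth-value error comes from comparing against the Series).

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["a", "c", "a", "d"],
    "B": [
        {"a": "2", "b": "5"},
        {"a": "2", "b": "16", "c": "32"},
        {"a": "6", "d": "23"},
        {"b": "4", "d": "76"},
    ],
})

# apply row-wise so each comprehension sees that row's scalar 'A'
df["B"] = df.apply(
    lambda row: {k: v for k, v in row["B"].items() if k == row["A"]},
    axis=1,
)
print(df)
```

`df.apply(..., axis=1)` is not vectorized, but dictionaries in object columns rule out vectorized operations anyway.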
|
<python><pandas><dictionary-comprehension>
|
2023-06-01 14:58:22
| 3
| 1,279
|
iFunction
|
76,383,226
| 5,713,709
|
How to consume GraphQL api using spring boot Resttemplate
|
<p>I want to consume a GraphQL endpoint in a Spring Boot application using <code>RestTemplate</code>.</p>
<p>Whenever I make a POST request with the query and variables, I always receive the same error: <code>{"errors":[{"message":"Bad request"}]}</code></p>
<p>Below is the sample query, I want to send,</p>
<pre><code>{
    "query": "query ($filter: SearchFilter!, $limit: Int!, $sorting: SearchSorting!) { search (filter: $filter, limit: $limit, sorting: $sorting) { cursor totalCount documents { id title products { name primaryCode } count Date type { ids } } } }",
    "variables": {
        "filter": {
            "date": {
                "dt": "2023-01-01"
            },
            "types": {
                "ids": ["1"]
            },
            "products": {
                "include": ["aaa"],
                "available": True
            },
            "category": {
                "ids": ["A12"]
            }
        },
        "sorting": {
            "by": "DATE"
        },
        "limit": 500
    }
}
</code></pre>
<p>But from Python I have no issues consuming the endpoint, and I get a response.</p>
<p>Python code:</p>
<pre><code>params = {
    "filter": {
        "date": {
            "dt": "2023-01-01"
        },
        "types": {
            "ids": ["1"]
        },
        "products": {
            "include": ["aaa"],
            "primaryOnly": True
        },
        "category": {
            "ids": ["A12"]
        }
    },
    "sorting": {
        "field": "DATE",
        "direction": "DESC"
    },
    "limit": 500
}
query = '''
query ($filter: SearchFilter!, $limit: Int!, $sorting: SearchSorting!) { search (filter: $filter, limit: $limit, sorting: $sorting)
{ cursor totalCount documents { id title products { name primaryCode } count releasedDt type { ids } } } }'''
result = json.loads(requests.post(uri, headers=getHeaders(), json={'query': query, "variables": params}).text)
</code></pre>
<p>Please help me understand how I can send the query and variables using Spring <strong>RestTemplate</strong>.</p>
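One thing worth checking before the RestTemplate side: the raw payload shown above contains Python's `True`, which is not valid JSON (`true` is), and the Python version only works because `requests` with `json=` serializes the dict. A sketch of that serialization difference (the query string here is a placeholder):

```python
import json

# placeholder variables dict modeled on the question's payload
variables = {"products": {"include": ["aaa"], "available": True}}
payload = json.dumps({"query": "query { search { totalCount } }", "variables": variables})

print(payload)  # Python's True is emitted as valid JSON: "available": true
```

In Java, the equivalent is to build the body as a `Map` and let RestTemplate's JSON message converter (e.g. Jackson) serialize it, rather than concatenating the JSON string by hand.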
|
<python><java><spring-boot><graphql><resttemplate>
|
2023-06-01 14:56:49
| 1
| 432
|
Babu
|
76,383,183
| 330,816
|
recover original YUY2 buffer from BMP
|
<p>I have a camera that gives me a stream of YUYV data. It is actually a stereo camera: Y encodes one sensor, and U and V the other. I'm using OpenCV to get this data and saved it into a BMP file using this simplistic script:</p>
<pre><code>import cv2
import os

cap = cv2.VideoCapture(0)  # index of camera

# Check if the webcam is opened correctly
if not cap.isOpened():
    raise IOError("Cannot open webcam")

i = 0
folder = os.path.dirname(os.path.realpath(__file__))
while True:
    ret, frame = cap.read()
    frame2 = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    cv2.imshow('Input', frame2)
    c = cv2.waitKey(1)
    if c == 27:
        break
    if c == 13:
        cv2.imwrite(os.path.join(folder, ("%04d" % (i,)) + ".bmp"), frame)
        i = i + 1
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>I thought that saving as BMP would preserve the raw buffer values, but I can't recover the images. I tried loading the BMP as <code>img = cv2.imread(filename, cv2.IMREAD_UNCHANGED)</code> and displaying its first channel, and also reading it as <code>img = cv2.imread(filename, cv2.IMREAD_COLOR)</code> and converting it back to YUV, but there are always artifacts of the other image in the channels. Can you give me a hint on how to recover the original YUYV buffer representation?</p>
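The frame handed to `imwrite` has already been through OpenCV's YUYV-to-BGR conversion, so the original buffer is gone by the time the BMP is written. A sketch under the assumption that you instead grab the unconverted frame (e.g. `cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)`) and save it losslessly with `np.save`; deinterleaving the raw bytes then recovers the planes:

```python
import numpy as np

# YUY2/YUYV packs pixels as Y0 U0 Y1 V0 | Y2 U1 Y3 V1 ...; a dummy 2x4-pixel
# frame stands in here for a raw capture saved with np.save / np.load
h, w = 2, 4
raw = np.arange(h * w * 2, dtype=np.uint8).reshape(h, w * 2)

y = raw[:, 0::2]   # luma plane, full resolution    -> (h, w)
u = raw[:, 1::4]   # chroma U, half horizontal res  -> (h, w // 2)
v = raw[:, 3::4]   # chroma V, half horizontal res  -> (h, w // 2)
print(y.shape, u.shape, v.shape)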
|
<python><image><opencv><image-processing>
|
2023-06-01 14:51:56
| 1
| 688
|
Cookie
|
76,383,101
| 2,100,039
|
Reshaping a Dataframe with repeating column names
|
<p>I have data that looks like this:</p>
<pre><code> dataframe_1:
week SITE LAL SITE LAL
0 1 BARTON CHAPEL 1.1 PENASCAL I 1
1 2 BARTON CHAPEL 1.1 PENASCAL I 1
2 3 BARTON CHAPEL 1.1 PENASCAL I 1
</code></pre>
<p>And I need the final dataframe to look like this:</p>
<pre><code> dataframe_2:
week SITE LAL
0 1 BARTON CHAPEL 1.1
1 2 BARTON CHAPEL 1.1
2 3 BARTON CHAPEL 1.1
3 1 PENASCAL I 1
4 2 PENASCAL I 1
5 3 PENASCAL I 1
</code></pre>
<p>I've tried using <code>melt</code>, but I cannot get the desired result. Perhaps I'm using the wrong approach?
Thank you.</p>
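One sketch that avoids `melt` entirely: slice the frame into its repeating (SITE, LAL) pairs and stack the slices, which works because `pd.concat` aligns on the (identical) column names of each slice. The fixture below rebuilds the sample from the question.

```python
import pandas as pd

# rebuild the sample frame; pandas allows duplicate column labels
df = pd.DataFrame(
    [[1, "BARTON CHAPEL", 1.1, "PENASCAL I", 1],
     [2, "BARTON CHAPEL", 1.1, "PENASCAL I", 1],
     [3, "BARTON CHAPEL", 1.1, "PENASCAL I", 1]],
    columns=["week", "SITE", "LAL", "SITE", "LAL"],
)

# take "week" plus one (SITE, LAL) pair at a time, then stack the slices
pairs = [df.iloc[:, [0, i, i + 1]] for i in range(1, df.shape[1], 2)]
long_df = pd.concat(pairs, ignore_index=True)
print(long_df)
```

This generalizes to any number of repeated (SITE, LAL) pairs, as long as they come in adjacent twos after the `week` column.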
|
<python><pandas><dataframe><reshape><melt>
|
2023-06-01 14:42:48
| 3
| 1,366
|
user2100039
|
76,382,996
| 15,010,874
|
algorithm to calculate speeds to move in order to arrive in x turns
|
<p>Given a <code>distance</code> and a number of <code>turns</code>, calculate how many turns to spend at each speed (1, 2, 4, and 8) so as to complete the distance on the last turn.</p>
<p>You start at speed 1, and on each turn you can accelerate to the next speed or do nothing (1 -> 2, 2 -> 4, 4 -> 8); once you accelerate you can't slow back down.</p>
<p>Each turn you move <code>speed</code> steps (distance -= speed).</p>
<p>It's also OK to go more than <code>distance</code> steps, but only if it happens on the last turn.</p>
<p>For example: distance = 25, turns = 10 -> speed 1: 1 turn, speed 2: 5 turns, speed 4: 4 turns. The total distance is 1 * 1 + 2 * 5 + 4 * 4 = 27 steps, but we reach 25 steps on the last turn, which is what we need.</p>
<p>I need help writing a function that will calculate that.</p>
<pre><code>def calc_speeds(distance: int, arrive_in_x_turns: int) -> dict[int, int]:
</code></pre>
<p>So far I've used the formula <code>turns_till_arrival = ((turns_till_arrival - (speed // 2)) // speed) + (speed // 2) + 1</code> in a for loop over each <code>speed</code>: if <code>turns_till_arrival</code> equals <code>turns</code>, I accelerate up to <code>speed</code> without spending extra turns at the intermediate speeds (only the 1 necessary turn at each, because <strong>I can only accelerate once per turn</strong>). But in many cases this fails, because a valid plan must spend more than 1 turn at other speeds, and I can't figure out how to calculate that.</p>
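A brute-force sketch of the requested function (my own formulation of the validity check, not the formula from the question): try every split of the turns across the four speeds, keeping only splits reachable through the acceleration chain, and accept a split when the distance is first reached on the final turn.

```python
from typing import Optional

def calc_speeds(distance: int, turns: int) -> Optional[dict[int, int]]:
    """Brute-force split of `turns` across speeds 1, 2, 4 and 8."""
    for t1 in range(1, turns + 1):            # you always start at speed 1
        for t2 in range(turns - t1 + 1):
            for t4 in range(turns - t1 - t2 + 1):
                t8 = turns - t1 - t2 - t4
                # each speed is only reachable through the previous one
                if (t4 or t8) and not t2:
                    continue
                if t8 and not t4:
                    continue
                total = t1 + 2 * t2 + 4 * t4 + 8 * t8
                last = 8 if t8 else 4 if t4 else 2 if t2 else 1
                # arrive (or overshoot) only on the very last turn
                if total >= distance and total - last < distance:
                    return {1: t1, 2: t2, 4: t4, 8: t8}
    return None  # not reachable in `turns` turns
```

For distance = 25, turns = 10 this returns `{1: 1, 2: 5, 4: 3, 8: 1}` (31 total steps, step 25 reached only on the final, speed-8 turn); it differs from the example's split but satisfies the same conditions. The triple loop is O(turns³), which is fine for game-sized inputs.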
|
<python><python-3.x><algorithm>
|
2023-06-01 14:31:14
| 1
| 567
|
Omer Dagry
|
76,382,989
| 10,491,381
|
Functions intervals
|
<p>I have 3 functions; how can I plot each of them over a different interval?</p>
<p>This is my code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
y = 2*x+1
k = 3*x+2
i = 2*x+2
plt.plot(x, y, '-r', label='y=2x+1')
plt.plot(x, k, '-r', label='k =3x+2')
plt.plot(x, i, '-r', label='i =2x+2')
plt.title('3 functions on 3 intervals')
plt.xlabel('x', color='#1C2833')
plt.ylabel('y', color='#1C2833')
plt.legend(loc='upper left')
plt.grid()
plt.show()
</code></pre>
<p>Wanted style : 3 intervals, 3 linear functions :</p>
<p><a href="https://i.sstatic.net/MzJdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MzJdT.png" alt="enter image description here" /></a></p>
<p>This is what I get :
<a href="https://i.sstatic.net/aL1mY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aL1mY.jpg" alt="enter image description here" /></a></p>
<p>I want to draw the first function, 2*x+1, on the interval from x = 2 to x = 8.</p>
<p>The second function, 3*x+2, has to be plotted on the interval from x = 8 to x = 12.</p>
<p>The third function, 2*x+2, has to be plotted on the interval from x = 12 to x = 20.</p>
<p>Is it possible?</p>
<p>Edit :
Ended up with this :</p>
<pre><code>x = np.linspace(-5,0,100)
t = np.linspace(0,5,100)
m = np.linspace(5,10,100)
y = 2*x+1
k = 3*x-2
i = 2*x+2
plt.plot(x, y, '-r', label='y=2x+1')
plt.plot(t, k, '-r', label='k =3x-2')
plt.plot(m, i, '-r', label='i =2x+2')
</code></pre>
<p>Result :</p>
<p><a href="https://i.sstatic.net/5mk4B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5mk4B.png" alt="enter image description here" /></a></p>
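The edit is close; a sketch that ties each label to its own function and uses the exact intervals asked for ([2, 8], [8, 12], [12, 20]):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; harmless to drop when running locally
import matplotlib.pyplot as plt

# one x-range per function, matching the intervals in the question
segments = [
    (np.linspace(2, 8, 100), lambda x: 2 * x + 1, "y=2x+1"),
    (np.linspace(8, 12, 100), lambda x: 3 * x + 2, "k=3x+2"),
    (np.linspace(12, 20, 100), lambda x: 2 * x + 2, "i=2x+2"),
]
for x, f, label in segments:
    plt.plot(x, f(x), label=label)
plt.title("3 functions on 3 intervals")
plt.legend(loc="upper left")
plt.grid()
```

The key difference from the edit above is that each function is evaluated on its own x-array, so the curve and its interval always agree; finish with `plt.show()` for an interactive window.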
|
<python><matplotlib>
|
2023-06-01 14:30:05
| 1
| 347
|
harmonius cool
|
76,382,922
| 7,496,406
|
Dask map_partitions does not use all workers on client
|
<p>I have very CPU heavy process and would like to use as many workers are possible in Dask.</p>
<p>When I read the csv file using the <code>read_csv</code> from <code>dask</code> and then process the dataframe using <code>map_partitions</code> only one worker is used. If I use <code>read_csv</code> from <code>pandas</code> and then convert the file to a Dask dataframe, all my workers are used. See code below.</p>
<p>Could someone explain the difference in behavior?</p>
<p>Ideally, I would like to use <code>read_csv</code> from <code>Dask</code> so that I don't need a conversion step. Could anyone help me with that?</p>
<pre><code>import dask as d
import pandas as pd

def fWrapper(x):
    p = doSomething(x.ADDRESS, param)
    return pd.DataFrame(p, columns=["ADDRESS", "DATA", "TOKEN", "CLASS"])

# only uses 1 worker instead of the available 8
dask_df = d.dataframe.read_csv('path\to\file')
dask_df.set_index("UID", npartitions=8, drop=False)
ddf2 = dask_df.map_partitions(fWrapper, meta={"ADDRESS": object, "DATA": object, "TOKEN": object, "CLASS": object}).compute()

# uses all 8 workers
df = pd.read_csv('path\to\file')
df.set_index('UID', drop=False)
dask_df2 = d.dataframe.from_pandas(df, npartitions=dask_params['df_npartitions'], sort=True)
ddf3 = dask_df2.map_partitions(fWrapper, meta={"ADDRESS": object, "DATA": object, "TOKEN": object, "CLASS": object}).compute()
</code></pre>
|
<python><dask><distributed><dask-distributed><dask-dataframe>
|
2023-06-01 14:22:15
| 1
| 1,371
|
Jrakru56
|
76,382,889
| 778,508
|
Kubernetes Logging Wrong Severity
|
<p>I deployed a Python process into a Kubernetes pod. The process makes use of a simple logger:</p>
<pre><code>import sys
import logging
import logging.handlers
from pathlib import Path

import coloredlogs

from app.core.config import settings

CONFIG_PATH = Path.joinpath(BASE_PATH, '.configrc')
LOG_FORMAT = f'%(asctime)s.[{settings.APP_NAME}] %(levelname)s %(message)s'
LOG_LEVEL = settings.dict().get('LOG_LEVEL', logging.INFO)

LOGGER = logging.getLogger(__name__)
coloredlogs.install(level=LOG_LEVEL, fmt=LOG_FORMAT, logger=LOGGER)

LOGGER.info("INFO Creating main objects...")
</code></pre>
<p>However, when I inspect the logs in k8, it always complains that those logs are errors:</p>
<pre><code>ERROR 2023-06-01T13:54:37.742688222Z [resource.labels.containerName: myapp] 2023-06-01 15:54:37.[myapp] INFO Creating main objects...
{
insertId: "8xj2l2hwts2f45gt"
labels: {4}
logName: "projects/proj-iot-poc/logs/stderr"
receiveTimestamp: "2023-06-01T13:54:38.596712564Z"
resource: {2}
severity: "ERROR"
textPayload: "2023-06-01 15:54:37.[myapp] INFO Creating main objects..."
timestamp: "2023-06-01T13:54:37.742688222Z"
}
</code></pre>
<p>Needless to say, I just used <code>LOGGER.info("Creating main objects...")</code> and I expect the log entry to be an INFO, not an ERROR...</p>
<p>The manifest is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "11"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"myapp"},"name":"myapp","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"myapp"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"myapp","version":"1.6.1rc253"}},"spec":{"containers":[{"env":[{"name":"COMMON_CONFIG_COMMIT_ID","value":"1b2e6669140391d680ff0ca34811ddc2553f15f7"},{"name":"OWN_CONFIG_COMMIT_ID","value":"52a142ca003ade39a0fd96faffbe5334facc3463"}],"envFrom":[{"configMapRef":{"name":"myapp-config"}}],"image":"europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253","lifecycle":{"preStop":{"exec":{"command":["/bin/bash","-c","sleep 5"]}}},"name":"myapp","resources":null}],"restartPolicy":"Always"}}}}
creationTimestamp: "2023-05-26T07:49:37Z"
generation: 13
labels:
name: myapp
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:name: {}
f:spec:
f:progressDeadlineSeconds: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:version: {}
f:spec:
f:containers:
k:{"name":"myapp"}:
.: {}
f:env:
.: {}
k:{"name":"COMMON_CONFIG_COMMIT_ID"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"OWN_CONFIG_COMMIT_ID"}:
.: {}
f:name: {}
f:value: {}
f:envFrom: {}
f:image: {}
f:imagePullPolicy: {}
f:lifecycle:
.: {}
f:preStop:
.: {}
f:exec:
.: {}
f:command: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-client-side-apply
operation: Update
time: "2023-05-26T07:49:37Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2023-06-03T06:41:24Z"
name: myapp
namespace: default
resourceVersion: "537412667"
uid: 375a536e-e39c-4001-a234-e47e812f0bee
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: myapp
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
app: myapp
version: 1.6.1rc253
spec:
containers:
- env:
- name: COMMON_CONFIG_COMMIT_ID
value: 1b2e6669140391d680ff0ca34811ddc2553f15f7
- name: OWN_CONFIG_COMMIT_ID
value: 52a142ca003ade39a0fd96faffbe5334facc3463
envFrom:
- configMapRef:
name: myapp-config
image: europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 5
name: myapp
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2023-05-26T07:49:37Z"
lastUpdateTime: "2023-06-01T14:33:00Z"
message: ReplicaSet "myapp-f9bbb5f6d" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2023-06-03T06:41:24Z"
lastUpdateTime: "2023-06-03T06:41:24Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 13
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p><strong>EDIT</strong></p>
<p>Maybe this is related to:
<a href="https://stackoverflow.com/questions/54147483/google-cloud-functions-python-logging-issue">GCP and Python Logging</a></p>
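A likely cause rather than a certainty: GKE's logging agent labels everything a container writes to stderr as severity ERROR (note `logName: .../stderr` in the payload above), and both logging's default `StreamHandler` and coloredlogs write to stderr. A sketch that routes the logger to stdout instead:

```python
import logging
import sys

# GKE tags stderr output as ERROR; send application logs to stdout,
# which the agent maps to INFO severity
LOGGER = logging.getLogger("myapp")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s [myapp] %(levelname)s %(message)s"))
LOGGER.addHandler(handler)
LOGGER.setLevel(logging.INFO)

LOGGER.info("Creating main objects...")
```

With coloredlogs specifically, passing `stream=sys.stdout` to `coloredlogs.install(...)` should have the same effect; emitting structured JSON logs with a `severity` field is the heavier but more precise alternative.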
|
<python><kubernetes>
|
2023-06-01 14:18:45
| 1
| 8,046
|
gdm
|
76,382,888
| 12,403,550
|
Partially flatten nested JSON and pivot longer
|
<p>I have many JSON files with the following structure:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>{
  "requestId": "test",
  "executionDate": "2023-05-10",
  "executionTime": "12:02:22",
  "request": {
    "fields": [{
      "geometry": {
        "type": "Point",
        "coordinates": [-90, 41]
      },
      "colour": "blue",
      "bean": "blaCk",
      "birthday": "2021-01-01",
      "arst": "111",
      "arstg": "rst",
      "fct": {
        "start": "2011-01-10",
        "end": "2012-01-10"
      }
    }]
  },
  "response": {
    "results": [{
      "geom": {
        "type": "geo",
        "coord": [-90, 41]
      },
      "md": {
        "type": "arstat",
        "mdl": "trstr",
        "vs": "v0",
        "cal": {
          "num": 4,
          "comment": "message"
        },
        "bean": ["blue", "green"],
        "result_time": 12342
      },
      "predictions": [{
        "date": "2004-05-19",
        "day": 0,
        "count": 0,
        "eating_stage": "trt"
      }, {
        "date": "2002-01-20",
        "day": 1,
        "count": 0,
        "eating_stage": "arstg"
      }, {
        "date": "2004-05-21",
        "day": 2,
        "count": 0,
        "eating_stage": "strg"
      }, {
        "date": "2004-05-22",
        "day": 3,
        "count": 0,
        "eating_stage": "rst"
      }]
    }]
  }
}</code></pre>
</div>
</div>
</p>
<p>The predictions part can be very deep. I want to convert this JSON to a CSV with the following structure:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>requestId</th>
<th>executionDate</th>
<th>executionTime</th>
<th>colour</th>
<th>predictions_date</th>
<th>predictions_day</th>
<th>predictions_count</th>
<th>predictions_eating_stage</th>
</tr>
</thead>
<tbody>
<tr>
<td>test</td>
<td>2023-05-10</td>
<td>12:02:22</td>
<td>blue</td>
<td>2004-05-19</td>
<td>0</td>
<td>0</td>
<td>trt</td>
</tr>
<tr>
<td>test</td>
<td>2023-05-10</td>
<td>12:02:22</td>
<td>blue</td>
<td>2002-01-20</td>
<td>1</td>
<td>0</td>
<td>astrg</td>
</tr>
<tr>
<td>test</td>
<td>2023-05-10</td>
<td>12:02:22</td>
<td>blue</td>
<td>2004-05-21</td>
<td>2</td>
<td>0</td>
<td>strg</td>
</tr>
<tr>
<td>test</td>
<td>2023-05-10</td>
<td>12:02:22</td>
<td>blue</td>
<td>2004-05-22</td>
<td>3</td>
<td>0</td>
<td>rst</td>
</tr>
</tbody>
</table>
</div>
<p>I tried the following code:</p>
<pre><code>flat_json = pd.DataFrame(
    flatten(json_data), index=[0]
)
</code></pre>
<p>The code results in every data point becoming a column, and I am not sure how to pivot longer at the <code>predictions</code> key using JSON functions in Python. I recognise that at this stage I could pivot longer using column names, but I feel like there is a cleaner way to achieve this.</p>
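A sketch with `pd.json_normalize`, which does this pivot directly: `record_path` walks down to `predictions` (one output row per prediction) and `meta` carries the top-level fields along; `colour` sits on a different branch (`request.fields`), so it is attached afterwards. The fixture below is a trimmed copy of the question's JSON:

```python
import pandas as pd

# trimmed copy of the question's JSON (predictions shortened to two entries)
data = {
    "requestId": "test",
    "executionDate": "2023-05-10",
    "executionTime": "12:02:22",
    "request": {"fields": [{"colour": "blue"}]},
    "response": {"results": [{"predictions": [
        {"date": "2004-05-19", "day": 0, "count": 0, "eating_stage": "trt"},
        {"date": "2002-01-20", "day": 1, "count": 0, "eating_stage": "arstg"},
    ]}]},
}

df = pd.json_normalize(
    data,
    record_path=["response", "results", "predictions"],    # one row per prediction
    meta=["requestId", "executionDate", "executionTime"],  # repeated onto each row
    record_prefix="predictions_",
)
df["colour"] = data["request"]["fields"][0]["colour"]      # lives on another branch
print(df)
```

`df.to_csv(path, index=False)` then writes exactly the table shown above; looping this over the many files and `pd.concat`-ing the results scales naturally.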
|
<python><json><pandas>
|
2023-06-01 14:18:36
| 2
| 433
|
prayner
|
76,382,887
| 1,485,926
|
How to fix the line ending style (either CRLF or LF) in Python when written a text file?
|
<p>I have the following little program in Python</p>
<pre><code>from pathlib import Path
filename = Path("file.txt")
content = "line1\nline2\nline3\n"
with filename.open("w+", encoding="utf-8") as file:
file.write(content)
</code></pre>
<p>After running it I get the following file (as expected)</p>
<pre><code>line1
line2
line3
</code></pre>
<p>However, depending on where the program runs, line ending is different.</p>
<p>If I run it in Windows, I get CRLF line termination:</p>
<pre><code>$ file -k file.txt
file.txt: ASCII text, with CRLF line terminators
</code></pre>
<p>If I run it in Linux, I get LF line termination:</p>
<pre><code>$ file -k file.txt
file.txt: ASCII text
</code></pre>
<p>So, I understand that Python is using the default from the system in which it runs, which is fine most of the times. However, in my case I'd like to fix the line ending style, no matter the system where I run the program.</p>
<p>How could this be done?</p>
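The `newline` argument of `open` controls exactly this: a sketch that always writes LF regardless of platform (written to a temp directory here):

```python
import tempfile
from pathlib import Path

filename = Path(tempfile.mkdtemp()) / "file.txt"
content = "line1\nline2\nline3\n"

# newline="\n" disables the platform's newline translation, so every "\n"
# in `content` is written as a bare LF on any OS; newline="\r\n" forces CRLF
with filename.open("w", encoding="utf-8", newline="\n") as file:
    file.write(content)

print(filename.read_bytes())  # b'line1\nline2\nline3\n' on every platform
```

Passing `newline=""` also disables translation, writing whatever terminators the string itself contains.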
|
<python><line-endings>
|
2023-06-01 14:18:31
| 1
| 12,442
|
fgalan
|
76,382,642
| 10,437,110
|
Resampling Rows minute wise not working in for Even Minutes in Python DataFrame
|
<p>I have a df which has 5 columns. A column named date holds minute-wise data for a few days; the data starts at <code>9:15</code> and ends at <code>15:29</code>. There are four other columns, named first, max, min, and last, which hold numerical values.</p>
<p>I wrote code that takes <code>x</code> minutes as a variable. It resamples the rows and produces rows of x minutes.</p>
<p>The 'first' of the resampled row will be the 'first' of its first row. <br>
The 'last' of the resampled row will be the 'last' of its last row. <br>
The 'max' will be the highest value across the rows of the max column. <br>
The 'min' will be the lowest value across the rows of the min column.
And the date column will hold datetimes at x-minute intervals.</p>
<p>My problem is that for some values of minutes the code works perfectly, but for others I get the wrong time in the first row.</p>
<p>Instead of the resampled data starting from <code>9:15</code>, it starts at some other minute.</p>
<p>Code:</p>
<pre><code>def resample_df(df, x_minutes='15T'):
    df.set_index('date', inplace=True)
    resampled_df = df.resample(x_minutes).agg({
        'first': 'first',
        'max': 'max',
        'min': 'min',
        'last': 'last'
    })
    resampled_df.reset_index(inplace=True)
    return resampled_df
</code></pre>
<p>Input:</p>
<pre><code> date first max min last
0 2023-06-01 09:15:00 0.014657 0.966861 0.556195 0.903073
1 2023-06-01 09:16:00 0.255174 0.607714 0.845804 0.039933
2 2023-06-01 09:17:00 0.956839 0.881803 0.876322 0.552568
</code></pre>
<p>Output: when x_minutes = '6T'</p>
<pre><code> date first max min last
0 2023-06-01 09:12:00 0.014657 0.966861 0.556195 0.552568
1 2023-06-01 09:18:00 0.437867 0.988005 0.162957 0.897419
2 2023-06-01 09:24:00 0.296486 0.370957 0.013994 0.108506
</code></pre>
<p>The data shows 9:12 but I don't have 9:12. Why is it giving me the wrong data?</p>
<p>Note: It works perfectly when the minutes entered are odd, e.g. x_minutes = '15T'.</p>
<p>Code to create a dummy df:</p>
<pre><code>import pandas as pd
import random
from datetime import datetime, timedelta

# Define the number of days for which data is generated
num_days = 5

# Define the start and end times for each day
start_time = datetime.strptime('09:15', '%H:%M').time()
end_time = datetime.strptime('15:30', '%H:%M').time()

# Create a list of all the timestamps for the specified days
timestamps = []
current_date = datetime.now().replace(hour=start_time.hour, minute=start_time.minute, second=0, microsecond=0)
end_date = current_date + timedelta(days=num_days)
while current_date < end_date:
    current_time = current_date.time()
    if start_time <= current_time <= end_time:
        timestamps.append(current_date)
    current_date += timedelta(minutes=1)

# Generate random data for each column
data = {
    'date': timestamps,
    'first': [random.random() for _ in range(len(timestamps))],
    'max': [random.random() for _ in range(len(timestamps))],
    'min': [random.random() for _ in range(len(timestamps))],
    'last': [random.random() for _ in range(len(timestamps))]
}

# Create the DataFrame
df = pd.DataFrame(data)

# Display the resulting DataFrame
display(df)
</code></pre>
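The bins are not wrong, just anchored elsewhere: by default `resample` uses `origin="start_day"`, so 6-minute bins count from midnight and 09:15 falls into the 09:12 bin. `'15T'` only looks correct because 09:15 is 555 minutes past midnight, which is a multiple of 15 but not of 6 (it has nothing to do with odd versus even). A sketch with `origin="start"`, which anchors the bins at the first timestamp:

```python
import pandas as pd

idx = pd.date_range("2023-06-01 09:15", periods=12, freq="1min")
df = pd.DataFrame({"first": range(12)}, index=idx)

# origin="start" anchors the 6-minute bins at the first row, not at midnight
out = df.resample("6min", origin="start").agg({"first": "first"})
print(out.index[0])  # 2023-06-01 09:15:00
```

In `resample_df` this is a one-line change: `df.resample(x_minutes, origin="start")`.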
|
<python><python-3.x><pandas><datetime>
|
2023-06-01 13:53:14
| 1
| 397
|
Ash
|
76,382,589
| 7,674,028
|
Python paramiko ssh.connect specify network interface to bind
|
<p>With <code>paramiko.SSHClient()</code>, how can I specify the network interface to bind to. I'm referring to this argument in the ssh man page:</p>
<pre><code> -B bind_interface
Bind to the address of bind_interface before attempting to
connect to the destination host. This is only useful on
systems with more than one address.
</code></pre>
<p>In other words, I want to ssh like this <code>ssh -B en0 username@192.168.1.1</code> using <code>paramiko.SSHClient()</code>.</p>
|
<python><ssh><paramiko>
|
2023-06-01 13:48:31
| 1
| 565
|
none
|
76,382,314
| 2,729,831
|
Code disappeared after Deploying google cloud function second generation
|
<p>The source code disappeared after I deployed a second-generation Google Cloud Function.
I am getting the error:</p>
<pre><code>Archive not found in the storage location
</code></pre>
<p>I can download the zip file with the source code, but I cannot use the embedded editor.
What is going on?
The error appears after a deploy in europe-west3; if I deploy to another location, it works.</p>
|
<python><google-cloud-platform><google-cloud-functions>
|
2023-06-01 13:16:49
| 0
| 473
|
blob
|
76,382,308
| 8,753,169
|
Apply libcst codemod and skip test files
|
<p>I am writing a <a href="https://libcst.readthedocs.io/en/latest/codemods_tutorial.html" rel="nofollow noreferrer">codemod with libcst</a> which inherits from <code>VisitorBasedCodemodCommand</code>. It works fine but is rather slow. One simple trick would be to skip all test files which start with <code>test_</code> by convention. However I haven't been able to find a place to add such logic in my codemod.</p>
<p>I saw a <a href="https://libcst.readthedocs.io/en/latest/codemods.html#libcst.codemod.SkipFile" rel="nofollow noreferrer"><code>SkipFile</code> exception</a> but I don't know from where I could trigger it.</p>
<p>How can I ignore my test files?</p>
|
<python><libcst>
|
2023-06-01 13:16:26
| 1
| 1,043
|
Martin Faucheux
|
76,382,246
| 16,984,466
|
Prefetch and merge two reverse foreign key of same model in queryset
|
<p>I have a problem; maybe someone has a solution for me.</p>
<p>I have two models:</p>
<pre class="lang-py prettyprint-override"><code>class Task(models.Model):
    class Meta:
        db_table = "task"

class Rule(models.Model):
    to_task = models.ForeignKey(Task, related_name="to_task_rules", on_delete=models.CASCADE)
    from_task = models.ForeignKey(Task, related_name="from_task_rules", on_delete=models.CASCADE)

    class Meta:
        db_table = "rule"
</code></pre>
<p>I want to loop over tasks and display all the distinct rules concerning the current task, by prefetching <code>to_task_rules</code> and <code>from_task_rules</code> into a field <code>task_rules</code>, but without success.</p>
<p>Django does not seem to provide a way to do that. I tried to prefetch with to_attr="task_rules", but we cannot use the same name twice.</p>
<pre class="lang-py prettyprint-override"><code># This solution does not work: to_attr cannot have the same name twice.
# In this case, I do not know how to filter to get distinct results.
Task.objects.prefetch_related(
    Prefetch("from_task_rules", queryset=Rule.objects.all(), to_attr="task_rules"),
    Prefetch("to_task_rules", queryset=Rule.objects.filter(...), to_attr="task_rules")
)
</code></pre>
<p>Does anybody have an idea, other than prefetching inside the loop, which would cause N+1 queries?</p>
<p>Thanks</p>
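One workaround sketch: prefetch the two reverse relations under two distinct `to_attr` names (the uniqueness restriction is per attribute, not per queryset) and merge them per task in Python, which still costs a constant number of queries. The attribute names `outgoing_rules`/`incoming_rules` and the helper are my own:

```python
def tasks_with_rules():
    # hypothetical helper built on the Task/Rule models from the question
    from django.db.models import Prefetch

    tasks = Task.objects.prefetch_related(
        Prefetch("from_task_rules", queryset=Rule.objects.all(), to_attr="outgoing_rules"),
        Prefetch("to_task_rules", queryset=Rule.objects.all(), to_attr="incoming_rules"),
    )
    for task in tasks:
        # merge both directions, de-duplicating by primary key
        task.task_rules = list(
            {r.pk: r for r in task.outgoing_rules + task.incoming_rules}.values()
        )
    return tasks
```

This issues three queries in total (tasks plus one per prefetch), with the merge done in memory, so the N+1 pattern is avoided.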
|
<python><django><django-models><django-queryset><django-orm>
|
2023-06-01 13:09:48
| 0
| 1,869
|
Lucas Grugru
|
76,382,044
| 14,649,310
|
Factory class in Python with a mapping dictionary returns TypeError
|
<p>I made something like this dummy class:</p>
<pre><code>class CreateCaseFactory:
    @classmethod
    def create(cls, user_id: uuid.UUID, type_: str) -> str:
        creator = cls.CASE_TO_METHOD_MAP.get(type_)
        if creator:
            return creator(user_id)
        else:
            raise Exception("Invalid type")

    @classmethod
    def _create_case_1(cls, user_id: uuid.UUID) -> str:
        result = f"Dummy Use Case 1 created for user {user_id}"
        return result

    @classmethod
    def _create_case_2(cls, user_id: uuid.UUID) -> str:
        result = f"Dummy Use Case 2 created for user {user_id}"
        return result

    CASE_TO_METHOD_MAP = {
        "case_1": _create_case_1,
        "case_2": _create_case_2,
    }
</code></pre>
<p>but I get an error when I try to run it:</p>
<pre><code> if creator:
> return creator(user_id)
E TypeError: 'classmethod' object is not callable
</code></pre>
<p>How can I make this factory class work?</p>
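Inside the class body, `_create_case_1` is still a raw `classmethod` object, which is not itself callable; one fix is to store method *names* in the map and resolve them with `getattr` at call time, since attribute access on the class performs the binding. A sketch of the full class:

```python
import uuid

class CreateCaseFactory:
    # map type names to method *names*; the methods themselves are still
    # unbound classmethod objects while the class body executes
    CASE_TO_METHOD_MAP = {
        "case_1": "_create_case_1",
        "case_2": "_create_case_2",
    }

    @classmethod
    def create(cls, user_id: uuid.UUID, type_: str) -> str:
        try:
            creator = getattr(cls, cls.CASE_TO_METHOD_MAP[type_])
        except KeyError:
            raise ValueError(f"Invalid type: {type_!r}") from None
        return creator(user_id)  # getattr on the class already bound it

    @classmethod
    def _create_case_1(cls, user_id: uuid.UUID) -> str:
        return f"Dummy Use Case 1 created for user {user_id}"

    @classmethod
    def _create_case_2(cls, user_id: uuid.UUID) -> str:
        return f"Dummy Use Case 2 created for user {user_id}"
```

An alternative with the original map of function objects is `creator.__func__(cls, user_id)`, but the name-based map keeps the call site ordinary.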
|
<python><factory>
|
2023-06-01 12:49:50
| 2
| 4,999
|
KZiovas
|
76,382,022
| 5,386,216
|
Create Literal type from constant variables
|
<p>I want to use "constant" variables in my typing definitions, something like this:</p>
<pre class="lang-py prettyprint-override"><code>FOO = "foo"
BAR = "bar"

@dataclass
class Event():
    name: Literal[FOO, BAR]
</code></pre>
<p>But that is illegal code for mypy.</p>
<p>This works, but then I cannot use the variables in my code:</p>
<pre class="lang-py prettyprint-override"><code>FOO = Literal["foo"]
BAR = Literal["bar"]

@dataclass
class Event():
    name: FOO | BAR

Event(FOO)  # gives: Event(name=typing.Literal['foo'])
</code></pre>
<p>Is there a way to get this to work without defining FOO and BAR twice?</p>
<hr />
<p>Implementing Mark's answer (<a href="https://stackoverflow.com/a/76382060/5386216">https://stackoverflow.com/a/76382060/5386216</a>) works, but then mypy is not able to distinguish the types based on the event name:</p>
<pre class="lang-py prettyprint-override"><code>class FooBar(Enum):
    FOO = "foo"
    BAR = "bar"

@dataclass
class StringEvent:
    name: Literal[FooBar.FOO]
    value: str

@dataclass
class NumberEvent:
    name: Literal[FooBar.BAR]
    value: int

def handle_event(event: StringEvent | NumberEvent):
    if event.name == FooBar.FOO:
        event.value.upper()  # should not give a mypy error
    if event.name == FooBar.BAR:
        event.value.upper()  # should give a mypy error
</code></pre>
<p>Both cases of <code>event.value.upper()</code> give the following mypy error: <code>Item "int" of "Union[str, int]" has no attribute "upper"</code></p>
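Regarding the narrowing problem: equality narrowing on enum members varies across mypy versions, but an identity check (`is`) against the enum member acts as the discriminator for tagged unions. A runnable sketch (return values added so both branches can be exercised):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Literal, Union

class FooBar(Enum):
    FOO = "foo"
    BAR = "bar"

@dataclass
class StringEvent:
    name: Literal[FooBar.FOO]
    value: str

@dataclass
class NumberEvent:
    name: Literal[FooBar.BAR]
    value: int

def handle_event(event: Union[StringEvent, NumberEvent]) -> str:
    if event.name is FooBar.FOO:    # identity check narrows the tagged union
        return event.value.upper()  # event is StringEvent here
    return str(event.value)         # event is NumberEvent here
```

With `is`, mypy treats `name` as the tag and narrows `event` to `StringEvent` in the first branch, so `.upper()` type-checks there and would be rejected on the `NumberEvent` side.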
|
<python><mypy><python-typing>
|
2023-06-01 12:46:48
| 2
| 381
|
Quint van Dijk
|
76,381,747
| 10,309,712
|
A data resampler based on support vectors
|
<p>I am working on a data resampler based on <code>support vectors</code>. The idea is to fit an <code>SVM</code> classifier, get the <code>support vector</code> points of the classes, then balance the data by selecting only data points near the support vector points of each class, such that the classes have an equal number of examples, ignoring all other points (those far from the support vector points).</p>
<p>I am doing this in a multi-class setting. So, I needed to resample the classes pairwise (i.e. <code>one-against-one</code>). I know that in <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow noreferrer">sklearn's SVM</a> <em>"...internally, one-vs-one ('ovo') is always used as a multi-class strategy to train models"</em>. However, since I am not sure how to change the training behaviour of sklearn's SVM so that it resamples each pair during training, I implemented a custom class to do that.</p>
<p>Currently, the custom class works fine. However, my implementation has a bug (logic error) that changes each pair of class labels into <code>0</code> and <code>1</code>, thereby messing up my class labels. In the code below, I illustrate this with an <code>MWE</code>:</p>
<pre class="lang-py prettyprint-override"><code># required imports
import random
from collections import Counter
from math import dist

import numpy as np
from sklearn.svm import SVC
from sklearn.utils import check_random_state
from sklearn.multiclass import OneVsOneClassifier
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

np.random.seed(7)
random.seed(7)

# resampler class
class DataUndersampler():
    def __init__(self, random_state=None):
        self.random_state = random_state
        print('DataUndersampler()')

    def fit_resample(self, X, y):
        random_state = check_random_state(self.random_state)
        # class distribution
        counter = Counter(y)
        print(f'Original class distribution: {counter}')
        maj_class = counter.most_common()[0][0]
        min_class = counter.most_common()[-1][0]
        # number of minority examples
        num_minority = len(X[y == min_class])
        #num_majority = len(X[y == maj_class])  # check on with maj now
        svc = SVC(kernel='rbf', random_state=32)
        svc.fit(X, y)
        # majority class support vectors
        maj_sup_vectors = svc.support_vectors_[maj_class]
        #min_sup_vectors = svc.support_vectors_[min_class]  # minority sup vect
        # compute distances to support vectors' point
        distances = []
        for i, x in enumerate(X[y == maj_class]):
            #input(f'sv: {maj_sup_vectors}, x: {x}')  # check value passed
            d = dist(maj_sup_vectors, x)
            distances.append((i, d))
        # sort distances (reverse=False -> ascending)
        distances.sort(reverse=False, key=lambda tup: tup[1])
        index = [i for i, d in distances][:num_minority]
        X_ds = np.concatenate((X[y == maj_class][index], X[y == min_class]))
        y_ds = np.concatenate((y[y == maj_class][index], y[y == min_class]))
        print(f"Resampled class distribution ('ovo'): {Counter(y_ds)} \n")
        return X_ds, y_ds
</code></pre>
<p>So, working with this:</p>
<pre class="lang-py prettyprint-override"><code># synthetic data
X, y = make_classification(n_samples=10_000, n_classes=5, weights=[22.6, 3.7, 16.4, 51.9],
n_informative=4)
# actual class distribution
Counter(y)
Counter({0: 9924, 1: 22, 2: 15, 3: 13, 4: 26})
resampler = DataUndersampler(random_state=234)
rf_clf = RandomForestClassifier()
pipeline = Pipeline([('sampler', resampler), ('clf', rf_clf)])
classifier = OneVsOneClassifier(estimator=pipeline)
DataUndersampler()
classifier.fit(X, y)
Original class distribution: Counter({0: 9924, 1: 22})
Resampled class distribution ('ovo'): Counter({0: 22, 1: 22})
Original class distribution: Counter({0: 9924, 1: 15}) # this should be {0: 9924, 2: 15}
Resampled class distribution ('ovo'): Counter({0: 15, 1: 15}) # should be-> {0: 9924, 2: 15}
Original class distribution: Counter({0: 9924, 1: 13}) # should be -> {0: 9924, 3: 13}
Resampled class distribution ('ovo'): Counter({0: 13, 1: 13}) # -> {0: 9924, 3: 13}
Original class distribution: Counter({0: 9924, 1: 26}) # should be-> {0: 9924, 4: 26}
Resampled class distribution ('ovo'): Counter({0: 26, 1: 26}) # -> {0: 9924, 4: 26}
Original class distribution: Counter({0: 22, 1: 15}) # should be > {1: 22, 2: 15}
Resampled class distribution ('ovo'): Counter({0: 15, 1: 15}) # -> {1: 22, 2: 15}
Original class distribution: Counter({0: 22, 1: 13}) # -> {1: 22, 3: 13}
Resampled class distribution ('ovo'): Counter({0: 13, 1: 13}) ## -> {1: 22, 3: 13}
Original class distribution: Counter({1: 26, 0: 22}) # -> {4: 26, 1: 22}
Resampled class distribution ('ovo'): Counter({1: 22, 0: 22}) # -> {4: 26, 1: 22}
Original class distribution: Counter({0: 15, 1: 13}) # -> {2: 15, 3: 13}
Resampled class distribution ('ovo'): Counter({0: 13, 1: 13}) # -> {2: 15, 3: 13}
Original class distribution: Counter({1: 26, 0: 15}) # -> {4: 26, 2: 15}
Resampled class distribution ('ovo'): Counter({1: 15, 0: 15}) # -> {4: 26, 2: 15}
Original class distribution: Counter({1: 26, 0: 13}) # -> {4: 26, 3: 13}
Resampled class distribution ('ovo'): Counter({1: 13, 0: 13}) # -> {4: 26, 3: 13}
</code></pre>
<p>How do I fix this?</p>
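For what it's worth, the relabelling is most likely not caused by the resampler at all: scikit-learn's <code>OneVsOneClassifier</code> internally re-encodes each class pair as <code>0</code>/<code>1</code> before calling the inner estimator's <code>fit</code> (in <code>_fit_ovo_binary</code>). A small sketch that demonstrates this, using a hypothetical <code>Recorder</code> estimator as a stand-in for the pipeline:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.multiclass import OneVsOneClassifier

class Recorder(BaseEstimator, ClassifierMixin):
    """Dummy classifier that records the labels it receives in fit()."""
    seen = []  # class-level, so it survives sklearn's clone()

    def fit(self, X, y):
        Recorder.seen.append(sorted(set(y)))
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        return np.zeros(len(X), dtype=int)

X = np.arange(30, dtype=float).reshape(15, 2)
y = np.array([0] * 5 + [3] * 5 + [7] * 5)  # deliberately non-contiguous labels

OneVsOneClassifier(Recorder()).fit(X, y)
print(Recorder.seen)  # every pairwise sub-problem sees labels [0, 1]
```

So the <code>Counter({0: ..., 1: ...})</code> output is expected behaviour of the wrapper, not of <code>DataUndersampler</code> itself.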
|
<python><machine-learning><scikit-learn><svm><multiclass-classification>
|
2023-06-01 12:14:33
| 0
| 4,093
|
arilwan
|
76,381,508
| 9,488,023
|
Masking a pandas column based on another column with slightly different values
|
<p>So what I have is two Pandas dataframes in Python with a large number of xyz-coordinates. One of them will be used to mask/remove some coordinates in the other one, but the problem is that the coordinates are very slightly different so that I cannot simply remove duplicates. As an example, let's say they look like this:</p>
<pre><code>df1 = pd.DataFrame(data=None, columns=['x', 'y', 'z'])
df1.x = [104245, 252355, 547364, 135152]
df1.y = [842714, 135812, 425328, 124912]
df1.z = [125125, 547574, 364343, 346372]
df2 = pd.DataFrame(data=None, columns=['x', 'y', 'z'])
df2.x = [104230, 547298]
df2.y = [842498, 424989]
df2.z = [124976, 364001]
</code></pre>
<p>What I then want is for the first and second xyz-rows in df2 to remove the first and third row in df1. My idea was to create new columns with rounded values, compare those, and remove based on those. It would look something like this:</p>
<pre><code>df1['id'] = np.linspace(0,len(df1)-1,len(df1))
df2['id'] = np.linspace(0,len(df2)-1,len(df2))
df3 = df1.round({'x': -3, 'y': -3, 'z': -3})
df4 = df2.round({'x': -3, 'y': -3, 'z': -3})
df5 = df3.merge(df4, on=['x', 'y', 'z'], how='inner')
df6 = df1[~df1.index.isin(df5.id_x)]
</code></pre>
<p>This works fine for removing some of the values, but the coordinates often round to different values. I was hoping for help with a better method that masks the rows which are simply closest in all three coordinates. Ideally it would find the closest xyz-pair between df1 and df2 and mask those pairs. If anyone has any ideas I would really appreciate it!</p>
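One possible approach (a sketch; fine for moderately sized frames, and the tolerance value is an assumption to tune): skip rounding entirely and, for each row of df2, find the nearest row of df1 by Euclidean distance, dropping matches within the tolerance:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'x': [104245, 252355, 547364, 135152],
                    'y': [842714, 135812, 425328, 124912],
                    'z': [125125, 547574, 364343, 346372]})
df2 = pd.DataFrame({'x': [104230, 547298],
                    'y': [842498, 424989],
                    'z': [124976, 364001]})

a = df1[['x', 'y', 'z']].to_numpy(float)
b = df2[['x', 'y', 'z']].to_numpy(float)

# pairwise distances, shape (len(df2), len(df1)), via broadcasting
dists = np.linalg.norm(b[:, None, :] - a[None, :, :], axis=2)

tol = 1000  # assumed tolerance; tune to your data
drop_idx = df1.index[dists.argmin(axis=1)][dists.min(axis=1) <= tol]
df1_masked = df1.drop(drop_idx)
print(df1_masked.index.tolist())  # [1, 3]
```

For very large frames, a spatial index such as `scipy.spatial.cKDTree` would do the same nearest-neighbour lookup without materialising the full distance matrix.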
|
<python><pandas><dataframe><compare>
|
2023-06-01 11:42:16
| 3
| 423
|
Marcus K.
|
76,381,414
| 5,547,553
|
How to exclude linebreaks from a regex match in python?
|
<p>How can I make the below regex exclude matches that span across lines?</p>
<pre><code>import re
reg = re.compile(r'\b(apple)(?:\W+\w+){0,4}?\W+(tree|plant|garden)')
reg.findall('my\napple tree in the garden')
reg.findall('apple\ntree in the garden')
</code></pre>
<p>The first one should match, the second one should not.<br>
(Currently both match...)</p>
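A sketch of one possible fix: <code>\W</code> matches any non-word character, including <code>\n</code>, so replacing it with a class that excludes the newline, e.g. <code>[^\w\n]</code>, keeps matches on a single line:

```python
import re

# \W matches "\n" too; [^\w\n] is "non-word but not newline"
reg = re.compile(r'\b(apple)(?:[^\w\n]+\w+){0,4}?[^\w\n]+(tree|plant|garden)')

print(reg.findall('my\napple tree in the garden'))  # [('apple', 'tree')]
print(reg.findall('apple\ntree in the garden'))     # []
```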
|
<python><regex>
|
2023-06-01 11:31:51
| 1
| 1,174
|
lmocsi
|
76,381,105
| 1,980,208
|
Find unique date from existing dataframe and make a new CSV with corresponding column values
|
<p>I have a time series, with one value every minute, which looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Time</th>
<th>Volume every minute</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-05-25T00:00:00Z</td>
<td>284</td>
</tr>
<tr>
<td>2023-05-25T00:01:00Z</td>
<td>421</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>2023-05-27T23:58:00Z</td>
<td>894</td>
</tr>
<tr>
<td>2023-05-27T23:59:00Z</td>
<td>357</td>
</tr>
</tbody>
</table>
</div>
<p>I have to make new CSV by iterating Time column finding unique date and making new columns with corresponding values of volume every minute. For example desired output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Date</th>
<th style="text-align: center;">min1</th>
<th style="text-align: right;">min2</th>
<th style="text-align: right;">...</th>
<th style="text-align: right;">min1440</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2023-05-25</td>
<td style="text-align: center;">284</td>
<td style="text-align: right;">421</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">578</td>
</tr>
<tr>
<td style="text-align: left;">2023-05-26</td>
<td style="text-align: center;">512</td>
<td style="text-align: right;">645</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">114</td>
</tr>
<tr>
<td style="text-align: left;">2023-05-27</td>
<td style="text-align: center;">894</td>
<td style="text-align: right;">357</td>
<td style="text-align: right;">...</td>
<td style="text-align: right;">765</td>
</tr>
</tbody>
</table>
</div>
<p>I am able to fetch the unique dates, but after that I am clueless. Please find my sample code:</p>
<pre><code>import pandas as pd
train_data = pd.read_csv('date25to30.csv')
print(pd.to_datetime(train_data['time']).dt.date.unique())
</code></pre>
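One possible continuation (a sketch; the column names <code>min1</code>... and the output filename are my own choices): parse the timestamps once, then pivot with the date as index and minute-of-day as columns:

```python
import pandas as pd

# small stand-in for the CSV from the question
df = pd.DataFrame({
    'time': ['2023-05-25T00:00:00Z', '2023-05-25T00:01:00Z',
             '2023-05-26T00:00:00Z', '2023-05-26T00:01:00Z'],
    'volume': [284, 421, 512, 645],
})

ts = pd.to_datetime(df['time'])
df['date'] = ts.dt.date
df['minute'] = ts.dt.hour * 60 + ts.dt.minute + 1  # minute of day, 1..1440

wide = df.pivot(index='date', columns='minute', values='volume')
wide.columns = [f'min{m}' for m in wide.columns]
wide.to_csv('volume_by_day.csv')  # one row per date, min1..min1440 columns
print(wide)
```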
|
<python><pandas>
|
2023-06-01 10:47:43
| 1
| 439
|
prem
|
76,381,056
| 18,972,785
|
How to construct a graph using a list of tuples in python in networkX?
|
<p>I am trying to make a graph from a list of tuples stored in a variable. I found <code>G.add_edges_from(e)</code> for making a graph from a list of tuples, but the problem is that this does not seem to work: when I try to, for example, print the graph, it returns <code>None</code>. I would appreciate answers that solve my problem. I use the code below to make the graph:</p>
<pre><code>import networkx as nx
e = [(1,2),(1,3),(2,3)]
G = nx.Graph()
g1 = G.add_edges_from(e)
print(g1)
</code></pre>
<p><strong>Update:</strong>
I tested this code but it again gives <code>None</code> when trying to print:</p>
<pre><code>e = [[(1,2),(1,3),(2,3)],[(10,20),(10,30),(20,30)]]
graph_list = []
for i in e:
graph_list.append(nx.Graph().add_edges_from(i))
print(graph_list[0].nodes)
</code></pre>
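The key detail here is that <code>add_edges_from</code> mutates the graph in place and returns <code>None</code> (like <code>list.append</code>), so it is the graph object itself, not the return value, that should be kept. A sketch:

```python
import networkx as nx

e = [[(1, 2), (1, 3), (2, 3)], [(10, 20), (10, 30), (20, 30)]]

graph_list = []
for edges in e:
    g = nx.Graph()           # keep a reference to the graph itself
    g.add_edges_from(edges)  # mutates g in place, returns None
    graph_list.append(g)

print(list(graph_list[0].nodes))  # [1, 2, 3]
```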
|
<python><graph><networkx>
|
2023-06-01 10:41:27
| 1
| 505
|
Orca
|
76,380,952
| 16,171,413
|
sqlite3.OperationalError: no such table: auth_user
|
<p>I'm developing a simple app using Django. I am done with the development and I'm trying to push to the Heroku server. I get this error when I try to create a superuser:</p>
<pre><code>You have 21 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, journpys, sessions.
Run 'python manage.py migrate' to apply them.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
    return self.cursor.execute(sql, params)
  File "/app/.heroku/python/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 328, in execute
    return super().execute(query, params)
sqlite3.OperationalError: no such table: auth_user
</code></pre>
<p>I need help in fixing this error. Thanks.</p>
<p>All migrations have been done locally and using the Heroku run python manage.py migrate command with no issues. I have also looked at some articles on solutions to this error but none seemed to work.</p>
<p><strong>Update</strong>: <em>I resolved this by switching to PostgreSQL. This has been finally resolved.</em></p>
|
<python><django><heroku><sqlite3-python>
|
2023-06-01 10:25:19
| 1
| 5,413
|
Uchenna Adubasim
|
76,380,853
| 4,399,016
|
Downloading a Zip file and extracting its content in Python
|
<p>I have this code to download a zip file from a URL and extract the contents.
But the name of the excel file changes every month. This would result in duplicates getting created. And it is not possible to predict the names each time new data gets published in the URL.</p>
<pre><code>zip_file_url = "https://www.insee.fr/en/statistiques/series/xlsx/famille/102391902"
import requests, zipfile, io
r = requests.get(zip_file_url)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
</code></pre>
<p>In the end I need to load the spreadsheet in Pandas. Can it be done without knowing the name of the spreadsheet within the zip file?</p>
<p>Is it possible to rename the Spreadsheet every time so that the file is overwritten and no duplicates are created? Also, how to load the spreadsheet in pandas without knowing the file name in future?</p>
<p>So, the best way would be to extract the file and save under same file name and overwrite the previous version. This means we also know the name of the spreadsheet to be loaded in pandas.</p>
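A sketch of one way to handle the changing name: inspect <code>ZipFile.namelist()</code>, pick the spreadsheet member whatever it is called, and read it straight into pandas without ever extracting to disk, so no duplicates can accumulate (function names here are my own):

```python
import io
import zipfile

import pandas as pd

def pick_spreadsheet_name(z: zipfile.ZipFile) -> str:
    """Return the first .xlsx/.xls member, whatever it happens to be called."""
    return next(n for n in z.namelist() if n.endswith(('.xlsx', '.xls')))

def load_excel_from_zip(zip_bytes: bytes) -> pd.DataFrame:
    """Load the spreadsheet straight from the archive; nothing hits the disk."""
    z = zipfile.ZipFile(io.BytesIO(zip_bytes))
    return pd.read_excel(io.BytesIO(z.read(pick_spreadsheet_name(z))))

# combined with the code from the question, this would become:
#   r = requests.get(zip_file_url)
#   df = load_excel_from_zip(r.content)
```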
|
<python><python-requests><io><python-zipfile>
|
2023-06-01 10:13:14
| 1
| 680
|
prashanth manohar
|
76,380,813
| 10,755,782
|
Installing python from source in a remote machine with SSL support without sudo access
|
<p>I'm trying to install Python 3.11 on a remote machine running CentOS Linux 7, building from the source. However, after installation, the <code>pip</code> is not able to install any modules as it exits with the following error</p>
<pre><code>WARNING: pip is configured with locations that require TLS/SSL,
however, the SSL module in Python is not available.
</code></pre>
<p>How can I re-install python with SSL support?</p>
<p>I tried</p>
<pre><code>./configure --with-ssl
</code></pre>
<p>However, I got the error:</p>
<pre><code>configure: WARNING: unrecognized options: --with-ssl
</code></pre>
<p>The OpenSSL version installed on the machine is (OpenSSL 1.1.1k FIPS 25 Mar 2021).</p>
<p>Update:
<code>./configure --help | grep -Fi ssl</code>
returns:</p>
<pre><code>--with-openssl=DIR root of the OpenSSL directory
--with-openssl-rpath=[DIR|auto|no]
Set runtime library directory (rpath) for OpenSSL
libraries, no (default): dont set rpath, auto:
auto-detect rpath from --with-openssl and
pkg-config, DIR: set an explicit rpath
--with-ssl-default-suites=[python|openssl|STRING]
override default cipher suites string, python: use
Pythons preferred selection (default), openssl:
leave OpenSSLs defaults untouched, STRING: use a
custom string, python and STRING also set TLS 1.2 as
minimum TLS version
</code></pre>
|
<python><ssl><pip><openssl><centos>
|
2023-06-01 10:07:41
| 0
| 660
|
brownser
|
76,380,677
| 7,236,309
|
Airflow Dag using python operator to copy files from one to other folder in local system
|
<pre><code>import os
import shutil
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
def copy_tdsx_hyper_template_if_not_exists(source_files, destination_folder):
for source_file in source_files:
file_name = os.path.basename(source_file)
destination_path = os.path.join(destination_folder, file_name)
if not os.path.exists(destination_path):
shutil.copyfile(source_file, destination_path)
print(f"Template File '{file_name}' copied successfully.")
</code></pre>
<h1>Define the DAG</h1>
<pre><code>default_args = {
'owner': 'test',
'retries': 5,
'retry_delay': timedelta(minutes=2)
}
with DAG(
default_args=default_args,
dag_id="HyperAPI_extracts_dag_test",
description="DAG Operation Using Python Operator",
start_date=datetime(2023, 5, 28),
schedule='@daily'
) as dag:
# Define the task
    parent_dir = 'C:\\localfolder\\'  # backslashes doubled: a trailing '\' would escape the closing quote
Tableau_extract_name = 'TestExtract'
my_source_files = [
parent_dir + 'Tdsx_Hyper_Templates\\' + Tableau_extract_name + '.hyper',
parent_dir + 'Tdsx_Hyper_Templates\\' + Tableau_extract_name + '.tdsx'
]
my_destination_folder = parent_dir
copy_task = PythonOperator(
task_id='copy_tdsx_hyper_temp_task',
python_callable=copy_tdsx_hyper_template_if_not_exists,
op_args=[my_source_files, my_destination_folder],
dag=dag,
)
# Set task dependencies
copy_task
</code></pre>
<p>I am getting an error like the one below in Airflow:
<code>FileNotFoundError: [Errno 2] No such file or directory</code></p>
|
<python><airflow>
|
2023-06-01 09:52:02
| 2
| 2,536
|
Sreenu131
|
76,380,556
| 1,585,507
|
Intersection of circle and polygons
|
<p>I have a few polygons representing streets (the blue stuff on the image below). I also have a circle (a point with a buffer, the orange stuff on the picture). I would like to find all the portions of streets inside the circle.</p>
<p><a href="https://i.sstatic.net/wOWzr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wOWzr.png" alt="map1" /></a></p>
<p>So far, I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>from myproject import geometry as geo
...
# Load the streets of my city and build a tree
# The city is just a bunch of polygons
city = geo.load_geojson("my_city.geojson")
tree = shapely.STRtree(city)
# Generate the circle
center = shapely.Point(x, y)
circle = geo.generate_circle_around_point(center, radius)
# Find the streets intersecting the circle, then compute the intersections
shapes = [city[idx].intersection(circle) for idx in tree.query(circle)]
</code></pre>
<p>However, this code only gives me intersections of poor quality (you can see I'm missing some pieces of the streets that I would expect to be within the circle):</p>
<p><a href="https://i.sstatic.net/l57Dy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l57Dy.png" alt="enter image description here" /></a></p>
<p>Any idea what could cause this phenomenon?</p>
<p>EDIT:</p>
<p>Here is how I generate the orange circle. I have to do it this way because I use GPS coordinates, so I need to convert the radius from meters to degrees:</p>
<pre class="lang-py prettyprint-override"><code>def generate_circle_around_point(point: shapely.Point, radius: int) -> shapely.Polygon:
# Define a projection using a local projected coordinate system
local_projection = pyproj.Proj(proj="utm", zone=18, ellps="WGS84")
# Define a function to convert GPS coordinates to the local projected coordinate
# system
to_local_projection = partial(
pyproj.transform, pyproj.CRS("EPSG:4326"), local_projection
)
# Transform the center point to the local projected coordinate system
center_point_local = shapely.ops.transform(to_local_projection, point)
# Create a circle buffer around the center point with the given radius
circle_buffer = center_point_local.buffer(radius, resolution=200)
# Transform the circle buffer back to GPS coordinates
to_gps_projection = partial(
pyproj.transform, local_projection, pyproj.CRS("EPSG:4326")
)
circle_buffer_gps = shapely.ops.transform(to_gps_projection, circle_buffer)
return circle_buffer_gps
</code></pre>
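One thing worth checking: with pyproj 2+, <code>pyproj.transform</code>/<code>Proj</code> applies the CRS-defined axis order, so <code>EPSG:4326</code> is treated as latitude/longitude; geometries built from lon/lat points then get silently distorted, which is a common cause of exactly this kind of ragged intersection. A sketch using the modern <code>Transformer</code> API with <code>always_xy=True</code> (UTM zone 18 kept from the question; adjust to your area):

```python
import pyproj
from shapely.geometry import Point
from shapely.ops import transform as shp_transform

def circle_around(lon: float, lat: float, radius_m: float):
    """Buffer a lon/lat point by radius_m metres, returned in EPSG:4326."""
    utm = "EPSG:32618"  # UTM zone 18N, as in the question
    fwd = pyproj.Transformer.from_crs("EPSG:4326", utm, always_xy=True).transform
    inv = pyproj.Transformer.from_crs(utm, "EPSG:4326", always_xy=True).transform
    centre_local = shp_transform(fwd, Point(lon, lat))       # lon/lat -> metres
    return shp_transform(inv, centre_local.buffer(radius_m, resolution=64))
```

Note that the deprecated <code>partial(pyproj.transform, ...)</code> pattern from the question does not take <code>always_xy</code>, which is why the <code>Transformer</code> API is worth the switch.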
|
<python><geospatial><shapely>
|
2023-06-01 09:38:35
| 0
| 5,739
|
JPFrancoia
|
76,380,548
| 2,517,880
|
Python - DOCX table comparison
|
<p>I have two MS Word documents which contain tables. The tables in file-2 are derived from file-1, with in-place updates made in file-2. For example:</p>
<p>Table from file-2:</p>
<p><a href="https://i.sstatic.net/rMRh8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rMRh8.png" alt="Table from file-2" /></a></p>
<p>Table from file-1:</p>
<p><a href="https://i.sstatic.net/q8PNu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q8PNu.png" alt="Table from file-1" /></a></p>
<p>To find the difference between the two tables, I'm thinking of first extracting the tables from each document, then converting them into JSON format, then creating a pandas dataframe, and finally performing a diff (compare) operation on it.</p>
<p><em>Note: I'm not sure if it's the correct approach.</em></p>
<p>I've been able to extract data in JSON format with python simplify_docx.</p>
<pre><code>import docx
import pandas as pd
from simplify_docx import simplify
file1= 'file1.docx'
file2='file2.docx'
# read in a document
my_doc = docx.Document(file1)
# coerce to JSON using the standard options
my_doc_as_json_std = simplify(my_doc)
# print (json.dumps(my_doc_as_json_std, sort_keys=True, indent=4))
data = pd.DataFrame.from_dict(my_doc_as_json_std)
display(data)
</code></pre>
<p><strong>Extracted JSON</strong></p>
<pre><code>{
"TYPE": "document",
"VALUE": [
{
"TYPE": "body",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "Sample heading"
}
]
},
{
"TYPE": "table",
"VALUE": [
{
"TYPE": "table-row",
"VALUE": [
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "Column1"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "Column2"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "Column3"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "Column4"
}
]
}
]
}
]
},
{
"TYPE": "table-row",
"VALUE": [
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "A1"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "A2"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "A3"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "A4"
}
]
}
]
}
]
},
{
"TYPE": "table-row",
"VALUE": [
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "B1"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "B2"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "B3"
}
]
}
]
},
{
"TYPE": "table-cell",
"VALUE": [
{
"TYPE": "paragraph",
"VALUE": [
{
"TYPE": "text",
"VALUE": "B4"
}
]
}
]
}
]
}
]
}
]
}
]
}
</code></pre>
<p><strong>Output</strong></p>
<p><a href="https://i.sstatic.net/4Haqa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Haqa.png" alt="enter image description here" /></a></p>
<p>I think I need to iterate over the whole JSON file and create the dataframe cell by cell. How can I fix this output dataframe in a simple way?</p>
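One possible way to finish the job (a sketch over the simplified-docx JSON shown above; key names are taken from that output, function names are my own): walk table → rows → cells, join the text runs in each cell, and let the first row become the header:

```python
import pandas as pd

def cell_text(cell: dict) -> str:
    # a table-cell holds paragraphs, which hold text runs
    return " ".join(t["VALUE"]
                    for para in cell["VALUE"]
                    for t in para["VALUE"]
                    if t["TYPE"] == "text")

def table_to_df(table: dict) -> pd.DataFrame:
    rows = [[cell_text(c) for c in row["VALUE"]] for row in table["VALUE"]]
    return pd.DataFrame(rows[1:], columns=rows[0])  # first row -> header

def tables_from_doc(doc_json: dict) -> list:
    body = doc_json["VALUE"][0]["VALUE"]
    return [table_to_df(el) for el in body if el["TYPE"] == "table"]
```

With one tidy DataFrame per table in each file, `df.compare()` (pandas 1.1+) can then show the cell-level differences.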
|
<python><pandas><dataframe>
|
2023-06-01 09:37:45
| 1
| 1,114
|
Vaibhav
|
76,380,541
| 18,018,869
|
Center one or both axis to center the "view" around a specific plot
|
<p>Let's say the <strong>point (10, 50)</strong> divides my plot of a 2d coordinate system in four sections:</p>
<ul>
<li>top left, top right</li>
<li>bottom left, bottom right</li>
</ul>
<p>This point is fixed and I know its location in advance.</p>
<p>Task: Always keep this point in the middle of the by matplotlib displayed coordinate system.</p>
<p>Sure I can achieve this by for example setting</p>
<pre class="lang-py prettyprint-override"><code>axs.set_xlim(0, 20)
axs.set_ylim(0, 100)
</code></pre>
<p>This perfectly centers mentioned point in the middle of the graph. The problem with this approach is, that I do not know in advance about my dynamic scatter data that I also want to visualize within this plot.</p>
<p>Sometimes it might be</p>
<pre class="lang-py prettyprint-override"><code>x_data = (2, 5, 15)
y_data = (25, 30, 75)
</code></pre>
<p>This would work with my previously set <code>xlim</code> and <code>ylim</code> but sometimes the data might be:</p>
<pre class="lang-py prettyprint-override"><code>x_data = (-10, 0, 20)
y_data = (-2, 101, 205)
</code></pre>
<p>This data would not show up because it is outside of set <code>xlim</code> and <code>ylim</code>.</p>
<p><strong>Question:</strong> Can I keep the default dynamic expanding of the axis so every point of my data is included and still somehow center the point in "the middle" of the graph?</p>
<p>Different phrasing: Can I specify the center of an axis without specifying its min and max and still keep the auto scaling so "every point" that is to scatter is included?</p>
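A sketch of one way to get both behaviours: let matplotlib autoscale as usual, then widen each axis symmetrically around the fixed point ((10, 50) here, hard-coded as in the question; the helper name is my own):

```python
import matplotlib
matplotlib.use("Agg")  # assumption: headless rendering for this sketch
import matplotlib.pyplot as plt

def center_view(ax, cx, cy):
    """Expand the autoscaled limits so (cx, cy) sits exactly in the middle."""
    x0, x1 = ax.get_xlim()
    y0, y1 = ax.get_ylim()
    half_x = max(cx - x0, x1 - cx)  # widest half-span needed on either side
    half_y = max(cy - y0, y1 - cy)
    ax.set_xlim(cx - half_x, cx + half_x)
    ax.set_ylim(cy - half_y, cy + half_y)

fig, ax = plt.subplots()
ax.scatter((-10, 0, 20), (-2, 101, 205))  # autoscale still includes every point
center_view(ax, 10, 50)                   # then re-centre on the fixed point
```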
|
<python><matplotlib><axis>
|
2023-06-01 09:37:04
| 3
| 1,976
|
Tarquinius
|
76,380,435
| 21,169,587
|
How to store custom object in Variable similar to Tkinter's Variable object
|
<p>In Tkinter, I attempted to use the <code>Variable</code> object to store custom objects. However, retrieving from the variable always returns a <code>str</code> object of the custom object's <code>__repr__()</code> function as can be seen from the snippet below.</p>
<pre><code>class Test:
def __init__(self):
pass
def __repr__(self):
return "repr"
import tkinter as tk
root = tk.Tk()
variable = tk.Variable()
variable.set(Test())
print(variable.get())
print(type(variable.get()))
#output
repr
<class 'str'>
</code></pre>
<p>As I am using the Variable with <code>trace_add</code> and a callback function which then utilises the object I stored in the <code>Variable</code>, I would need it to store the variable rather than just the string repr. However, tkinter's existing <code>Var</code> objects do not include one for objects. Is there an existing way to accomplish this, or will this require subclassing <code>Variable</code> or using a separate storage mechanism and utilising trace to trigger access?</p>
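Since tkinter's <code>Variable</code> ultimately stores Tcl strings (hence the <code>repr</code> round-trip), a separate pure-Python storage mechanism with its own trace callbacks is one possible route. A minimal sketch — the class name <code>ObjectVar</code> and its API are my own invention, not a tkinter API:

```python
class ObjectVar:
    """Hold an arbitrary Python object and notify callbacks on writes."""

    def __init__(self, value=None):
        self._value = value
        self._callbacks = []

    def trace_add(self, callback):
        # mirrors the spirit of Variable.trace_add('write', ...)
        self._callbacks.append(callback)

    def set(self, value):
        self._value = value
        for cb in self._callbacks:
            cb(value)

    def get(self):
        return self._value  # the object itself, not its repr()

class Test:
    def __repr__(self):
        return "repr"

var = ObjectVar()
var.set(Test())
print(isinstance(var.get(), Test))  # True
```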
|
<python><tkinter>
|
2023-06-01 09:23:50
| 1
| 867
|
Shorn
|
76,380,306
| 9,676,849
|
How to import all the modules from the current directory
|
<p>I would like to import automatically, and dynamically, all the files from the current directory. I have, for example the structure as follow:</p>
<pre><code>/ my_project
|- GeneralDriverAbstract.py
|- A_Driver.py # Child of Abstract class
|- A_DriverAlpha.py
|- A_DriverBeta.py
|- B_Driver.py # Child of Abstract class
|- B_DriverAlpha.py
|- B_DriverBeta.py
|- Suffix.py
|- other_files.py
</code></pre>
<p>In <code>GeneralDriverAbstract</code> I would like to import all the classes that are in the same directory. Indeed, when the user creates a driver (A, B or another in the future), it will pass to the <code>init</code> of the abstract class its name (<code>A_Driver</code>, <code>B_Driver</code>...) and the suffix parameter (<code>alpha</code>, <code>beta</code>...) that is defined in the enumeration in <code>Suffix.py</code>. Then it will make the proper configurations by creating the object matching deviceDriver+suffix.</p>
<p>How can I import all the modules? I want my code to be dynamic, so if we need to add more devices, we will just need to create files and add suffix inside the enumeration, without modifying the abstract class.</p>
<p>I tried two approaches. First, writing an <code>__init__.py</code> file with this inside:</p>
<pre class="lang-py prettyprint-override"><code>from os.path import dirname, basename, isfile, join
import glob
modules = glob.glob(join(dirname(__file__), "*.py"))
__all__ = [ basename(f)[:-3] for f in modules if isfile(f) and not f.endswith('__init__.py')]
</code></pre>
<p>And then using <code>from . import *</code> but it is not working. I got name error, for example:
<code>test = A_Driver()</code> gives <code>NameError: name 'A_Driver' is not defined</code>.</p>
<p>The second approach was to move the code of the init file inside the abstract class as follow:</p>
<pre class="lang-py prettyprint-override"><code>import glob
import importlib
from os.path import dirname, basename, isfile, join
modules = glob.glob(join(dirname(__file__), "*.py"))
module_files = [ basename(f)[:-3] for f in modules if isfile(f) and not f.endswith(('__init__.py', __file__))]
print(module_files)
# Import each module dynamically
for file in module_files:
importlib.import_module(file)
</code></pre>
<p>But even if I remove the abstract file from the list, it looks like I have a recursive inclusion, maybe from the imports of the drivers' classes (the <code>module_files</code> array is printed twice).
I also get a <code>NameError</code>. I also tried with <code>__import__</code> but got the same behaviour.</p>
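One detail that may explain the <code>NameError</code>: <code>importlib.import_module('A_Driver')</code> imports the module object, but nothing inside it ever enters the importing module's namespace, so the bare name <code>A_Driver</code> stays undefined unless you bind it yourself. A sketch of binding each module's class into a registry keyed by name (assuming, as the question implies, that each file defines a class of the same name; the helper name is mine):

```python
import importlib
import pkgutil

def load_driver_classes(package):
    """Import every module in `package` and collect the same-named classes."""
    registry = {}
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f".{info.name}", package.__name__)
        cls = getattr(module, info.name, None)  # class named like the file
        if cls is not None:
            registry[info.name] = cls
    return registry

# usage sketch from inside the project:
#   import my_project
#   drivers = load_driver_classes(my_project)
#   obj = drivers['A_Driver' + suffix]()
```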
|
<python><import>
|
2023-06-01 09:06:20
| 1
| 301
|
Dark Patate
|
76,380,291
| 19,504,610
|
Cython: Unable to cimport from `.pxd` file
|
<p>I have a simple project directory and some simple files which failed to compile.</p>
<p>Directory structure:</p>
<pre><code>cythonize: ROOT
|___ cythonize
|___ __init__.pxd
|___ __init__.py
|___ first.pxd
|___ first.pyx
|___ second.pxd
|___ second.pyx
|___ README.md
|___ setup.py
</code></pre>
<p>Let me show what are the exact content in every file.</p>
<p><code>__init__.pxd</code>:</p>
<pre><code><EMPTY FILE>
</code></pre>
<p><code>__init__.py</code>:</p>
<pre><code><EMPTY FILE>
</code></pre>
<p><code>first.pxd</code>:</p>
<pre><code>cdef class MyClass:
cdef str good
cdef str bad
cdef str say(self, str x, str y)
</code></pre>
<p><code>first.pyx</code>:</p>
<pre><code>cdef class MyClass:
cdef str say(self, str x, str y):
return x
</code></pre>
<p><code>second.pxd</code>:</p>
<pre><code>from . cimport first # removing this does not help
</code></pre>
<p><code>second.pyx</code>:</p>
<pre><code>#cython language_level=3
from . cimport first
cdef first second(str a, str b):
return first(a, b)
</code></pre>
<h2>Objective</h2>
<p>I am simply trying to <code>cimport</code> <code>first</code> from <code>first.pxd</code> into <code>second.pyx</code> in order to use <code>first</code> in <code>second.pyx</code>.</p>
<h2>Compilation Errors</h2>
<pre><code>>>> cythonize -i -k -3 cythonize/second.pyx
Compiling C:\...\cythonize\cythonize\second.pyx because it changed.
[1/1] Cythonizing C:\...\cythonize\cythonize\second.pyx
Error compiling Cython file:
------------------------------------------------------------
...
#cython language_level=3
from . cimport first
cdef first second(str a, str b):
^
------------------------------------------------------------
cythonize\second.pyx:5:5: 'first' is not a type identifier
Error compiling Cython file:
------------------------------------------------------------
...
#cython language_level=3
from . cimport first
cdef first second(str a, str b):
    return first(a, b)
           ^
------------------------------------------------------------
cythonize\second.pyx:6:11: 'first' is not a constant, variable or function identifier
Failed compilations: cythonize.second
</code></pre>
<p>Maybe someone can show a minimal viable example that makes this work?</p>
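For what it's worth, the error message itself hints at the cause: <code>from . cimport first</code> cimports the *module*, so the name <code>first</code> is a module, not a type. The class must be cimported from the module (or referenced as <code>first.MyClass</code>). A sketch of how <code>second.pyx</code> might look under that reading (untested here; note also that the directive comment syntax is <code># cython: language_level=3</code>, with a space and colon):

```cython
# second.pyx (sketch)
# cython: language_level=3
from .first cimport MyClass

cdef MyClass second(str a, str b):
    # MyClass declares no __init__ taking arguments in first.pyx,
    # so the attributes are set after construction here
    cdef MyClass obj = MyClass()
    obj.good = a
    obj.bad = b
    return obj
```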
|
<python><cython><cythonize><python-extensions>
|
2023-06-01 09:04:43
| 1
| 831
|
Jim
|
76,379,924
| 5,386,216
|
Create dataclass instance from union type based on string literal
|
<p>I'm trying to strongly type our code base. A big part of the code is handling events that come from external devices and forwarding them to different handlers. These events all have a value attribute, but this value can have different types. The value type is mapped per event name: a temperature event always has an int value, and a register event always has a <code>RegisterInfo</code> as its value.</p>
<p>So I would like to map the event name to the value type. But we are struggling with implementation.</p>
<p>This setup comes the closest to what we want:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Any, Literal, TypeAlias

@dataclass
class EventBase:
name: str
value: Any
value_type: str
@dataclass
class RegisterEvent(EventBase):
value: RegisterInfo
name: Literal["register"]
value_type: Literal["RegisterInfo"] = "RegisterInfo"
@dataclass
class NumberEvent(EventBase):
value: float | int
name: Literal["temperature", "line_number"]
value_type: Literal["number"] = "number"
@dataclass
class StringEvent(EventBase):
value: str
name: Literal["warning", "status"]
value_type: Literal["string"] = "string"
Events: TypeAlias = RegisterEvent | NumberEvent | StringEvent
</code></pre>
<p>With this setup mypy will flag incorrect code like:</p>
<pre class="lang-py prettyprint-override"><code>def handle_event(event: Events):
if event.name == "temperature":
event.value.upper()
</code></pre>
<p>(It sees that a temperature event should have value type int, and that doesn't have an <code>upper()</code> method)</p>
<p>But creating the events becomes ugly this way. I don't want a big if statement that maps each event name to a specific event class. We have lots of different event types, and this mapping info is already inside these classes.</p>
<p>Ideally I would like it to look like this:</p>
<pre class="lang-py prettyprint-override"><code>def handle_device_message(message_info):
event_name = message_info["event_name"]
event_value = message_info["event_value"]
event = Events(event_name, event_value)
</code></pre>
<p>Is a "one-liner" like this possible?</p>
<p>I feel like we are kind of walking into a wall here; could it be that the code is architecturally wrong?</p>
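A sketch of one possible near-one-liner: since the name→class mapping is already encoded in the <code>name: Literal[...]</code> annotations, a factory can be derived from them with <code>typing.get_type_hints</code>/<code>get_args</code>. Shown here with small self-contained classes instead of the full hierarchy; the factory name <code>make_event</code> is my own:

```python
from dataclasses import dataclass
from typing import Literal, get_args, get_type_hints

@dataclass
class EventBase:
    name: str
    value: object

@dataclass
class NumberEvent(EventBase):
    name: Literal["temperature", "line_number"]
    value: float

@dataclass
class StringEvent(EventBase):
    name: Literal["warning", "status"]
    value: str

# name -> concrete event class, read off the Literal annotations
_BY_NAME = {
    literal: cls
    for cls in EventBase.__subclasses__()
    for literal in get_args(get_type_hints(cls)["name"])
}

def make_event(name: str, value):
    return _BY_NAME[name](name=name, value=value)

event = make_event("temperature", 21.5)  # a NumberEvent
```

This keeps the mapping information in exactly one place (the classes), so adding a new event type needs no extra dispatch code.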
|
<python><mypy><python-typing><python-dataclasses>
|
2023-06-01 08:16:34
| 1
| 381
|
Quint van Dijk
|
76,379,760
| 12,055,667
|
Shell capture python return value, not print value
|
<p>I have a python script that both prints and returns some value. Like:</p>
<pre><code>def main():
    print("hello")
    return 1
</code></pre>
<p>I'd like the shell script to capture the value returned (1) not the value printed (hello).</p>
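For context: a Python function's <code>return</code> value never reaches the shell. The shell only sees the process's stdout (capturable with <code>$(...)</code>) and its exit status (<code>$?</code>), so the script must route the value through <code>sys.exit</code>/<code>SystemExit</code>. A sketch demonstrating the difference by running a child interpreter from Python:

```python
import subprocess
import sys

# hypothetical child script: prints "hello" to stdout, exits with status 1
child = "print('hello'); raise SystemExit(1)"

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)

print(result.stdout.strip())  # hello  -> what out=$(python script.py) captures
print(result.returncode)      # 1      -> what $? captures after the call
```

In shell terms: <code>out=$(python script.py); code=$?</code>.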
|
<python><shell>
|
2023-06-01 07:52:28
| 2
| 345
|
instant501
|
76,379,758
| 7,026,806
|
Why can't I overlay a violinplot and a lineplot?
|
<p>It appears that Seaborn does a lot of fiddling with the displayed axis labels, since I can't overlay "regular" matplotlib objects on top. How can I fix the below behavior?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
fig, ax = plt.subplots(figsize=(10, 6))
data = pd.DataFrame()
data["value"] = np.random.normal(0, 1, 1000)
data["week"] = np.random.randint(20, 30, 1000)
# make a violin plot, and put a line on top of it
sns.violinplot(data=data, x="week", y="value", scale="width", linewidth=0.5, palette="viridis")
# fit a line to the data
x = data["week"].values
y = data["value"].values
m, b = np.polyfit(x, y, 1)
y_hat = m * x + b
# plot the line
ax.plot(x, y_hat, color="black", linewidth=2)
</code></pre>
<p><a href="https://i.sstatic.net/pvRmE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pvRmE.png" alt="enter image description here" /></a></p>
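The likely culprit: <code>sns.violinplot</code> treats <code>week</code> as categorical and draws the violins at x-positions 0..N-1, while <code>ax.plot</code> uses the raw week values 20..29, pushing the line off to the right of the violins. A sketch of mapping the fitted line onto the categorical positions — written with plain matplotlib/numpy so it stands alone; with seaborn, the same <code>positions</code> array goes into <code>ax.plot</code>:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # assumption: headless rendering
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
week = rng.integers(20, 30, 1000)
value = rng.normal(0, 1, 1000)

m, b = np.polyfit(week, value, 1)

# categorical x-positions: week 20 -> 0, week 21 -> 1, ...
weeks = np.sort(np.unique(week))
positions = np.arange(len(weeks))
y_hat = m * weeks + b  # evaluate the fit at the week values themselves

fig, ax = plt.subplots()
ax.violinplot([value[week == w] for w in weeks], positions=positions)
ax.plot(positions, y_hat, color="black", linewidth=2)
ax.set_xticks(positions)
ax.set_xticklabels(weeks)
```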
|
<python><matplotlib><seaborn><line-plot><violin-plot>
|
2023-06-01 07:52:15
| 1
| 2,020
|
komodovaran_
|
76,379,743
| 3,417,134
|
Visualizing users by applications based on their transactions
|
<p>I have a data set which consists of user email, the application they accessed, and the amount of data transferred during that transaction. I wanted to visualize this data as a network chart where users accessing a certain app would appear closer to that application's node than other users.
Here is the sample data:</p>
<pre><code>d = pd.DataFrame({'Employee Email':['abc@xyz.com','abc@xyz.com','abc@xyz.com','def@xyz.com','lmn@xyz.com','abc@xyz.com'],
'Application':['SAP','SFDC','SAP','SFDC','Tableau','Tableau'],
'Transactions':[10,20,50,78,90,22]
})
</code></pre>
<p>I was able to create a network chart but would like to make it interactive and add the above mentioned functionality of resizing edges based on transaction amount. Following is my sample code:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
user_app_transactions = d.groupby(['Employee Email','Application'])['Transactions'].sum().reset_index()
G = nx.Graph()
# Add nodes for users
users = user_app_transactions['Employee Email'].unique()
G.add_nodes_from(users, node_color='lightblue')
# Add nodes for applications
applications = user_app_transactions['Application'].unique()
G.add_nodes_from(applications, node_color='lightgreen')
# Add edges connecting users and applications
edges = [(user, app) for user, app in user_app_transactions[['Employee Email', 'Application']].values]
G.add_edges_from(edges)
# Set node positions for users and applications
pos = nx.spring_layout(G, seed=42)
# Draw nodes and edges
nx.draw_networkx_nodes(G, pos, node_color='lightblue', node_size=200, label='Users')
nx.draw_networkx_nodes(G, pos, nodelist=applications, node_color='lightgreen', node_size=300, label='Applications')
nx.draw_networkx_edges(G, pos, alpha=0.5)
# Label nodes
nx.draw_networkx_labels(G, pos, font_size=8)
# Set plot title and legend
plt.title('Adjacency Relationship: Users and Applications')
plt.legend()
# Show the plot
plt.axis('off')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Ik49M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ik49M.png" alt="enter image description here" /></a></p>
<p>Any suggestions are highly appreciated.</p>
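A sketch of the edge-weight part (the width scaling factor is invented; for the interactive requirement, the same graph object can reportedly be handed to pyvis or a plotly scatter-based renderer, but those lines are not shown here):

```python
import pandas as pd
import networkx as nx

d = pd.DataFrame({
    "Employee Email": ["abc@xyz.com", "abc@xyz.com", "def@xyz.com", "lmn@xyz.com"],
    "Application": ["SAP", "SFDC", "SFDC", "Tableau"],
    "Transactions": [60, 20, 78, 90],
})

G = nx.Graph()
# One weighted edge per (user, app) pair, summing repeat transactions.
for (user, app), total in d.groupby(["Employee Email", "Application"])["Transactions"].sum().items():
    G.add_edge(user, app, weight=total)

# Edge widths proportional to transaction volume; spring_layout also treats
# 'weight' as spring strength, pulling heavy users nearer their apps.
widths = [G[u][v]["weight"] / 20 for u, v in G.edges()]
# nx.draw_networkx_edges(G, pos, width=widths, alpha=0.5)
print(sorted(widths))
```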
|
<python><plotly><visualization><networkx><bokeh>
|
2023-06-01 07:50:22
| 2
| 632
|
SAL
|
76,379,667
| 13,891,321
|
Python code works as standalone but not in PYQGIS
|
<p>I have a working python code that runs within the SPYDER IDE, but doesn't work when run from inside the Python console in QGIS. I have checked the QGIS installs and I can see both pandas and shapely.
The code opens a .csv (extension .mcsv by the creator) and creates a Polygon from it.</p>
<pre><code>from PyQt5 import QtWidgets
import pandas as pd
from shapely.geometry import Polygon
import_file_path, _ = QtWidgets.QFileDialog.getOpenFileName(
None, "Select File", "", "Line Files (*.mcsv)")
if import_file_path == '':
msgBox = QtWidgets.QMessageBox()
msgBox.setText("No file selected")
msgBox.setStandardButtons(QtWidgets.QMessageBox.Ok)
msgBox.setIcon(QtWidgets.QMessageBox.Warning)
msgBox.exec_()
data = pd.read_csv(import_file_path)
df = pd.DataFrame(data, columns=['Easting', 'Northing'])
df.drop(df.tail(1).index, inplace=True) # drop -9999 row
df['Easting'] = df['Easting'].astype("string")
df['Northing'] = df['Northing'].astype("string")
p = Polygon(list(zip(df["Easting"], df["Northing"])))
print(str(p))
</code></pre>
<p>The .mcsv looks like this:</p>
<pre><code>Map,Name,Easting,Northing
1,1,416213.873,6507344.904
1,1,424283.100,6509982.331
1,1,428632.580,6496675.102
1,1,418176.907,6493257.665
1,1,415338.984,6501940.281
1,1,416213.873,6507344.904
-9999,-9999,-9999,-9999
</code></pre>
<p>and the SPYDER generated output looks like:</p>
<pre><code>POLYGON ((416213.873 6507344.904, 424283.1 6509982.331, 428632.58 6496675.102, 418176.907 6493257.665, 415338.984 6501940.281, 416213.873 6507344.904))
</code></pre>
<p>And this is the QGIS error message it get:</p>
<pre><code>exec(Path('C:/Users/client/AppData/Local/Temp/tmpq5jtxh23.py').read_text())
Traceback (most recent call last):
File "C:\PROGRA~1\QGIS32~1.2\apps\Python39\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "<string>", line 24, in <module>
File "C:\PROGRA~1\QGIS32~1.2\apps\Python39\lib\site-packages\shapely\geometry\polygon.py", line 261, in __init__
ret = geos_polygon_from_py(shell, holes)
File "C:\PROGRA~1\QGIS32~1.2\apps\Python39\lib\site-packages\shapely\geometry\polygon.py", line 539, in geos_polygon_from_py
ret = geos_linearring_from_py(shell)
File "C:\PROGRA~1\QGIS32~1.2\apps\Python39\lib\site-packages\shapely\geometry\polygon.py", line 502, in geos_linearring_from_py
lgeos.GEOSCoordSeq_setX(cs, i, coords[0])
ctypes.ArgumentError: argument 4: <class 'TypeError'>: wrong type
</code></pre>
<p>QGIS 3.24.2 - Tisler
Python version 3.9.5</p>
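For reference, a sketch of the likely fix (assumption: the shapely bundled with QGIS passes coordinates through ctypes, which rejects <code>str</code> values, so the <code>astype("string")</code> casts are the culprit). Keeping Easting/Northing numeric avoids the <code>ArgumentError</code>:

```python
import pandas as pd

df = pd.DataFrame({
    "Easting": [416213.873, 424283.100, 428632.580],
    "Northing": [6507344.904, 6509982.331, 6496675.102],
})

# Keep coordinates numeric: older shapely's ctypes layer rejects str values.
coords = list(zip(df["Easting"].astype(float), df["Northing"].astype(float)))
# p = Polygon(coords)   # should now work in both Spyder and the QGIS console
print(coords[0])
```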
|
<python><python-3.x><qgis><pyqgis>
|
2023-06-01 07:40:16
| 1
| 303
|
WillH
|
76,379,662
| 5,357,095
|
Manipulating a json file containing a dict of json string
|
<p>I have a json file consisting of a dict of String:String</p>
<p>I tried to first load the file using json.load and then update the dict but when I dump the updated dict I lose the key. Unable to figure out how to update a value.</p>
<p>I'm trying to update the value of <strong>sampleID</strong> to something else and write back to the same file.</p>
<pre><code>{"key":"{\"url\":\"DEFAULT\",\"id\":\"tzz22s6a\",\"sampleID\":\"jahsdioadhao\",\"isPassRequired\":false,\"isKeyRequired\":true,\"materialType\":\"SATIN\",\"clothType\":\"DEFAULT\"}"}
</code></pre>
<p>Here is what I tried so far; it updates the value of sampleID, but the format of the file is changed and I also lose the outer key, i.e. <strong>"key"</strong>:</p>
<pre><code>with open(os.path.join(root, file_name)) as jsonFile:
d = json.load(jsonFile)
for values in d:
data = json.loads(d[values])
data['sampleID'] = 'newValue'
with open(os.path.join(root, file_name), 'w') as jsonFile:
json.dump(data, jsonFile, indent=4)
</code></pre>
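A sketch that keeps the outer <strong>"key"</strong> by writing the whole outer dict back, re-serializing the inner dict to a JSON string (payload shortened for illustration):

```python
import json

raw = {"key": "{\"url\": \"DEFAULT\", \"sampleID\": \"jahsdioadhao\", \"isKeyRequired\": true}"}

# Update the inner JSON string in place, keeping the outer dict intact.
for outer_key, inner_json in raw.items():
    inner = json.loads(inner_json)          # parse the nested JSON string
    inner["sampleID"] = "newValue"
    raw[outer_key] = json.dumps(inner)      # re-serialize to keep the format

# json.dump(raw, jsonFile) would then write the outer dict back to disk.
print(raw["key"])
```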
|
<python><json>
|
2023-06-01 07:39:11
| 2
| 877
|
Alex Bloomberg
|
76,379,616
| 9,431,952
|
Django ORM get data by pagination in query level?
|
<p>I'm using the Django ORM to get Employee model data, but I have 1000-2000 employee rows in that table. I want to fetch only one page of data at a time; the raw query for the third page looks like:</p>
<pre><code>SELECT * FROM employee LIMIT 30 OFFSET 60;
</code></pre>
<p>and its ORM version is like this:</p>
<pre><code>page_number = 3
records_per_page = 30
offset = (page_number - 1) * records_per_page
employees = Employee.objects.all().order_by('id')[offset:offset+records_per_page]
</code></pre>
<p>I want to know if this is correct to use with a large amount of data. Does it first fetch all rows via <code>Employee.objects.all()</code> and then apply the <code>offset</code>, or does it work differently?</p>
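For what it's worth, the queryset slice is lazy: slicing an unevaluated queryset should compile straight to SQL <code>LIMIT</code>/<code>OFFSET</code>, so <code>all()</code> does not load every row first. A sketch of the same arithmetic on a plain list standing in for the table:

```python
# QuerySets are lazy: Employee.objects.all()[offset:offset+n] does NOT load
# every row; Django translates the slice into LIMIT n OFFSET offset.
employees = list(range(1, 2001))           # ids 1..2000, stand-in for rows

def page(rows, page_number, per_page):
    offset = (page_number - 1) * per_page
    return rows[offset:offset + per_page]  # -> LIMIT per_page OFFSET offset

print(page(employees, 3, 30)[:3])
```

One caveat worth checking for very large tables: a big <code>OFFSET</code> still makes the database skip all earlier rows, so keyset pagination (<code>filter(id__gt=last_seen_id)[:30]</code>) can be faster for deep pages.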
|
<python><sql><django><database><orm>
|
2023-06-01 07:32:56
| 1
| 334
|
BikashSaud
|
76,379,499
| 3,896,008
|
Vectorising logsum using numpy
|
<p>I need to compute the <code>np.sum(np.exp(L))</code> where L is a long list of numbers which can be as high as 3000. np.float64 will overflow around exp(700). However, <code>np.logaddexp</code> is immune to overflow but only works for two elements.</p>
<p>I wrote the following recursive function to compute it for a list.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from smartprint import smartprint as sprint
def logsum(l):
if len(l) == 1:
return l[0]
return logsum([np.logaddexp(l[0], l[1])] + l[2:])
# odd length
l = [-10, 1, 2, 45, 100]
assert (logsum(l) == np.log(np.sum(np.exp(l))))
sprint (logsum(l))
# even length
l = [-10, 1, 2, 43]
assert (logsum(l) == np.log(np.sum(np.exp(l))))
sprint (logsum(l))
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>logsum(l) : 100.0
logsum(l) : 43.0
</code></pre>
<p>Additionally,
<code>logsum([1,2,3,4,50000])</code> outputs <code>50000</code> as expected whereas the <code>np.exp</code> would overflow. Now, the problem is that my list L is huge and has upto 10 million elements and I need to compute the logsum at least 1000 times. So, I wonder if there is a way to vectorize using the <code>np.logaddexp</code> so that I can use it for a longer list instead of just two elements.</p>
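A vectorized sketch using the standard max-shift trick (equivalent, I believe, to <code>scipy.special.logsumexp</code>), which avoids both the recursion and the overflow:

```python
import numpy as np

def logsum(l):
    # log(sum(exp(l))) computed stably: factor out the max so the largest
    # exponent becomes exp(0) = 1 and nothing can overflow.
    a = np.asarray(l, dtype=np.float64)
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

print(logsum([-10, 1, 2, 45, 100]))   # ~100.0
print(logsum([1, 2, 3, 4, 50000]))    # 50000.0, no overflow
```

This is a single pass over the array, so a 10-million-element list repeated 1000 times stays in plain vectorized NumPy.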
|
<python><numpy>
|
2023-06-01 07:15:32
| 1
| 1,347
|
lifezbeautiful
|
76,379,440
| 11,725,056
|
How to see the Embedding of the documents with Chroma (or any other DB) saved in Lang Chain?
|
<p>I can see everything but the embeddings of the documents when I use <code>Chroma</code> with <code>Langchain</code> and <code>OpenAI</code> embeddings. It always shows me <code>None</code> for them.</p>
<p>Here is the code:</p>
<pre><code>for db_collection_name in tqdm(["class1-sub2-chap3", "class2-sub3-chap4"]):
documents = []
doc_ids = []
for doc_index in range(3):
cl, sub, chap = db_collection_name.split("-")
content = f"This is {db_collection_name}-doc{doc_index}"
doc = Document(page_content=content, metadata={"chunk_num": doc_index, "chapter":chap, "class":cl, "subject":sub})
documents.append(doc)
doc_ids.append(str(doc_index))
# # Initialize a Chroma instance with the original document
db = Chroma.from_documents(
collection_name=db_collection_name,
documents=documents, ids=doc_ids,
embedding=embeddings,
persist_directory="./data")
db.persist()
</code></pre>
<p>when I do <code>db.get()</code>, I see everything as expected except <code>embedding</code> is <code>None</code>.</p>
<pre><code>{'ids': ['0', '1', '2'],
'embeddings': None,
'documents': ['This is class1-sub2-chap3-doc0',
'This is class1-sub2-chap3-doc1',
'This is class1-sub2-chap3-doc2'],
'metadatas': [{'chunk_num': 0,
'chapter': 'chap3',
'class': 'class1',
'subject': 'sub2'},
{'chunk_num': 1, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'},
{'chunk_num': 2, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'}]}
</code></pre>
<p>My <code>embeddings</code> is also working fine as it returns:</p>
<pre><code>len(embeddings.embed_documents(["EMBED THIS"])[0])
>> 1536
</code></pre>
<p>Also, my <code>./data</code> directory contains an embeddings file, <code>chroma-embeddings.parquet</code>.</p>
<hr />
<p>I also tried the example given in the documentation, but it shows <code>None</code> too:</p>
<pre><code># Import Document class
from langchain.docstore.document import Document
# Initial document content and id
initial_content = "This is an initial document content"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
new_db = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=OpenAIEmbeddings(), # using the same embeddings as before
ids=[document_id],
)
</code></pre>
<p>Here also <code>new_db.get()</code> gives me <code>None</code></p>
|
<python><nlp><openai-api><langchain><chromadb>
|
2023-06-01 07:07:40
| 2
| 4,292
|
Deshwal
|
76,379,197
| 4,847,250
|
How do I know which line I clicked on in a LineCollection?
|
<p>I would like to know which line I clicked on in a pick_event from a LineCollection.
Usually I plot signals line by line and can access each line's information through event.artist._label, but in the case of a LineCollection it is not that simple.
If I click on the second segment, I would like to find a variable that contains 1, for instance.
Is there a way to do that?</p>
<p>Here's a minimal example:</p>
<pre><code>from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import rand
def onpick1(event):
print('event')
if isinstance(event.artist, LineCollection):
print('LineCollection')
thoselines = event.artist
print('Which line did I clicked on ? ')
def linecoll(N):
x = np.random.rand(N, 3)
y = np.random.rand(N, 3)
C = np.random.rand(N, 3)
L = [str(i) for i in range(N)]
data = np.stack((x, y), axis=2)
fig, ax = plt.subplots()
fig.canvas.mpl_connect('pick_event', onpick1)
lines = ax.add_collection(LineCollection(data,colors=C ))
lines.set_picker(2)
lines.labels = L
plt.show()
linecoll(10)
</code></pre>
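A sketch of the index-to-label mapping (the fake event object only stands in for the <code>PickEvent</code> matplotlib delivers; for collections that event should carry the indices of the hit segments in <code>event.ind</code>):

```python
from types import SimpleNamespace

# Pick events on a collection carry event.ind: the indices of the segments
# that were hit. Mapping index -> label only needs the labels list.
labels = [str(i) for i in range(10)]

def picked_labels(event):
    # event.artist is the LineCollection, event.ind the hit segment indices
    return [labels[i] for i in event.ind]

# Stand-in for the PickEvent matplotlib would deliver for segment 1:
fake_event = SimpleNamespace(artist=None, ind=[1])
print(picked_labels(fake_event))
```

In <code>onpick1</code> this would become <code>print(event.ind)</code> (or a lookup into <code>event.artist.labels</code>, since the labels list was attached to the collection).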
|
<python><matplotlib>
|
2023-06-01 06:27:30
| 1
| 5,207
|
ymmx
|
76,379,142
| 18,904,265
|
Directly pass / return image from http response to web interface (streamlit)
|
<p>I am getting an image (as png) back from an API call. This is then saved as a png file and opened back up to display it in streamlit. This works fine, parts of the code are:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
import requests
import shutil
# in function:
response.raw.decode_content = True
with open("image.png", "wb") as out_file:
shutil.copyfileobj(response.raw, out_file)
# in streamlit:
image = Image.open("image.png")
st.image(image, caption="This is a caption.")
</code></pre>
<p>Is there a way to directly return the binary data to the st.image() function? My guess would be, that it's way more performant to just keep the image in memory instead of writing it to a local file. I am imagining something like:</p>
<pre class="lang-py prettyprint-override"><code>def get_image():
#post request etc...
response.raw.decode_content = True
return response.raw
image = get_image()
st.image(image)
</code></pre>
<p>I am completely new to handling binary image data in python, so I would appreciate any pointers to helpful resources :)</p>
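A sketch using <code>io.BytesIO</code> (the PNG bytes below are a stand-in for <code>response.content</code>; as far as I can tell <code>st.image</code> also accepts raw bytes or a file-like object directly, so no disk round-trip is needed):

```python
import io

# response.content holds the full PNG bytes; wrap them in BytesIO (a
# file-like object) instead of round-tripping through a file on disk.
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16   # stand-in for response.content

buffer = io.BytesIO(png_bytes)
# st.image(buffer)      # streamlit accepts file-like objects (and raw bytes)
# Image.open(buffer)    # so does PIL, if the image needs inspecting first
print(buffer.getvalue() == png_bytes)
```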
|
<python><python-imaging-library><streamlit>
|
2023-06-01 06:15:52
| 2
| 465
|
Jan
|
76,379,139
| 3,909,896
|
Write and read a pyarrow schema from file
|
<p>I'm transforming 120 JSON tables (of type <code>List[Dict]</code> in python in-memory) of varying schemata to <code>Arrow</code> to write it to <code>.parquet</code> files on ADLS, utilizing the <code>pyarrow</code> package.</p>
<p>I want to store the schema of each table in a separate file so I don't have to hardcode it for the 120 tables. As I iterate over my tables, I want to load each schema from file and transform the JSON data to Arrow by passing the schema.</p>
<pre><code>import pyarrow as pa
data = [
{"col1": 1, "col2": "a"},
{"col1": 2, "col2": "b"},
{"col1": 3, "col2": "c"},
{"col1": 4, "col2": "d"},
{"col1": 5, "col2": "e"}
]
# How to load the schema from file and parse it into a `pa.schema`?
my_schema = pa.schema([
    pa.field('col1', pa.int64()),
    pa.field('col2', pa.string())]
)
arrow_table = pa.Table.from_pylist(data, schema=my_schema)
# How to write this schema to file?
arrow_table.schema
</code></pre>
<p>I could write a custom file format for the schema and write a parser that reads the (e.g. txt) file, transforming its content into the <code>pa.datatype()</code> stuff, but I hope there is an easier, "official" solution to this?</p>
|
<python><pyarrow>
|
2023-06-01 06:15:04
| 1
| 3,013
|
Cribber
|
76,379,113
| 18,192,997
|
Uploading files to Google Drive using python with Docker - googleapiclient.errors.UnknownFileType
|
<p>I am encountering an error when attempting to upload files to Google Drive using the Google Drive API in a Docker environment. The file upload works perfectly fine outside of Docker, but when running the code within a Docker container, I receive the following error:</p>
<pre><code>Traceback (most recent call last):
File "/app/apis/gtest.py", line 60, in <module>
media = drive_service.files().create(body=file_metadata, media_body=file_path).execute()
File "/usr/local/lib/python3.9/site-packages/googleapiclient/discovery.py", line 1143, in method
raise UnknownFileType(media_filename)
googleapiclient.errors.UnknownFileType: upload/2d0a3e62-f442-4b2f-a816-f00e03a6f4db/someexcelfile.xlsx
</code></pre>
<p>My overall goal is to upload files to Google Drive, where the file type can be either XLSX or JSONL, depending on the file's MIME type. While the upload works flawlessly when running the code on a Windows machine, the issue arises only within the Docker container.</p>
<p>The provided code performs the necessary checks and generates the correct file metadata. For JSONL files, it sets the MIME type to "application/octet-stream" and uses the <strong>MediaFileUpload</strong> object for the upload process. However, when attempting to upload an XLSX file, the error occurs.</p>
<p>How can I ensure that Docker recognizes the correct MIME type for the file upload, allowing it to work properly within the container?</p>
<p>Here is the code for reference:</p>
<pre><code>import os
from datetime import date
from google.oauth2 import service_account
from googleapiclient.discovery import build
# Define the Google Drive folder ID
folder_id = 'somegoogledrivefolderid'
# Define the file name with the correct file extension
file_path = 'upload/2d0a3e62-f442-4b2f-a816-f00e03a6f4db/someexcelfile.xlsx'
file_name = os.path.basename(file_path) # Use the base name of the file path
# Check if the file extension is 'jsonl'
file_ext = os.path.splitext(file_path)[1] # Extract the file extension
if file_ext == '.jsonl':
print("The file is a jsonl file")
# Add your code here to handle jsonl files
else:
print("The file is not a jsonl file")
# Add your code here to handle non-jsonl files
# Generate the folder name based on the current date
current_date = date.today().strftime('%m/%d/%Y')
folder_name = os.path.join(current_date, '').replace("\\", "")
# Authenticate and create a Google Drive service
credentials = service_account.Credentials.from_service_account_file('scripts/gs_credentials.json')
drive_service = build('drive', 'v3', credentials=credentials)
# List folders in the specified folder
folder_query = f"'{folder_id}' in parents and mimeType = 'application/vnd.google-apps.folder'"
response = drive_service.files().list(q=folder_query).execute()
# Check if folder already exists
existing_folder_id = None
for folder in response.get('files', []):
if folder['name'] == folder_name:
existing_folder_id = folder['id']
break
if existing_folder_id:
print(f"Folder already exists with ID: {existing_folder_id}")
folder_id = existing_folder_id
else:
# Create the folder
folder_metadata = {
'name': folder_name,
'parents': [folder_id],
'mimeType': 'application/vnd.google-apps.folder'
}
folder = drive_service.files().create(body=folder_metadata, fields='id').execute()
folder_id = folder.get('id')
print(f"New folder created with ID: {folder_id}")
# Upload the file to the generated folder
file_metadata = {
'name': file_name,
'parents': [folder_id]
}
media = drive_service.files().create(body=file_metadata,media_mime_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", media_body=file_path).execute()
print(f'Successfully saved the file: {media["name"]}')
</code></pre>
<p>How can I make docker recognize the mimetype so the upload will work properly?</p>
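A sketch of one workaround (assumption: the slim Debian image lacks <code>/etc/mime.types</code>, so mimetype guessing fails inside the container; <code>MediaFileUpload</code> accepts an explicit <code>mimetype</code>, and the commented Drive lines below are untested):

```python
import mimetypes

# Explicit fallback map for the two types this pipeline uploads.
EXPLICIT = {
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    ".jsonl": "application/octet-stream",
}

def resolve_mimetype(path):
    ext = "." + path.rsplit(".", 1)[-1].lower()
    guessed, _ = mimetypes.guess_type(path)
    return EXPLICIT.get(ext, guessed or "application/octet-stream")

# With the mimetype pinned, the upload no longer depends on the container's
# (possibly missing) /etc/mime.types:
#   media = MediaFileUpload(file_path, mimetype=resolve_mimetype(file_path))
#   drive_service.files().create(body=file_metadata, media_body=media).execute()
print(resolve_mimetype("someexcelfile.xlsx"))
```

Installing the <code>media-types</code> (or older <code>mailcap</code>) package in the image may also restore guessing, but I have not verified that against this Dockerfile.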
|
<python><docker><google-drive-api>
|
2023-06-01 06:10:55
| 1
| 537
|
PythonKiddieScripterX
|
76,379,108
| 1,652,954
|
Module can not be imported
|
<p>As illustrated in the code below, and referring to the paths shown in the attached screenshot, I am trying to import the class <code>ProofOfConcept_1</code>,
but at run time I receive the following error:</p>
<pre><code>PS M:\projects\python\flask apps\openRoutService\apps\app5\app5-test> & C:/Python310/python.exe "m:/projects/python/flask apps/openRoutService/apps/app5/app5-test/multiprocessingTests/PoolExexOnPoolExecConcept/start.py"
Traceback (most recent call last):
File "m:\projects\python\flask apps\openRoutService\apps\app5\app5-test\multiprocessingTests\PoolExexOnPoolExecConcept\start.py", line 1, in <module>
from multiprocessingTests.PoolExexOnPoolExecConcept.ProofOfConcept_1 import ProofOfConcept_1
ModuleNotFoundError: No module named 'multiprocessingTests'
</code></pre>
<p>Please let me know how I should correctly import the class <code>ProofOfConcept_1</code>.</p>
<p><strong>start.py</strong></p>
<pre><code>from multiprocessingTests.PoolExexOnPoolExecConcept.ProofOfConcept_1 import ProofOfConcept_1
if __name__ == '__main__':
ProofOfConcept_1.parallel()
</code></pre>
<p><strong>ProofOfConcept_1.py</strong>:</p>
<pre><code>import time
import logging
import logging.config
import configparser
from multiprocessingTests.PoolExexOnPoolExecConcept.TreatmentPoolExec import TreatmentPoolExec
from multiprocessingTests.PoolExexOnPoolExecConcept.BufferPoolExec import BufferPoolExec
from utils.PoolUtils import PoolUtils
from Multiprocessing.KeyAndNoneKeyGridCells.pool.GridCellsProfiles.VarTasksDispatcher import VarTasksDispatcher
logging.basicConfig(level=logging.DEBUG)
logging.config.dictConfig({
'version': 1,
'disable_existing_loggers': True,
})
config = configparser.ConfigParser()
config.read('configs.ini')
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
class ProofOfConcept_1(object):
@staticmethod
def parallel():
strtTime = time.time()
processesCount, procsRem = PoolUtils.getProcessesCountPerPool(numOfPools=2,processesCount=int(config['MULTIPROCESSING']['processes_count']))
cpuCount, cpuRem = PoolUtils.getCpuCountPerPool(numOfPools=2,cpuCount=int(config['MULTIPROCESSING']['cpu_count']))
calcETRRiskInTreatmentForResourcesPoolExec = TreatmentPoolExec(processCount=processesCount+procsRem,cpuCount=cpuCount+cpuRem)
calcETRRiskInTreatmentForResourcesPoolExec.setIterables([0])
calcETRRiskInBufferForResourcesPoolExec = BufferPoolExec(processCount=processesCount,cpuCount=cpuCount)
calcETRRiskInBufferForResourcesPoolExec.setIterables([0])
calcETRRiskPoolExecList = []
calcETRRiskPoolExecList.append(calcETRRiskInTreatmentForResourcesPoolExec)
calcETRRiskPoolExecList.append(calcETRRiskInBufferForResourcesPoolExec)
varTasksDispatcher = VarTasksDispatcher()
'''no results to be sent back'''
res = varTasksDispatcher.dispatch(calcETRRiskPoolExecList)
</code></pre>
<p><strong>path</strong>:</p>
<p><a href="https://i.sstatic.net/7sAPs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7sAPs.png" alt="enter image description here" /></a></p>
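A sketch of one fix (the hard-coded root is taken from the traceback, for illustration only). Running a file as a script puts the script's own folder, not the project root containing <code>multiprocessingTests</code>, on <code>sys.path</code>; running <code>python -m multiprocessingTests.PoolExexOnPoolExecConcept.start</code> from the project root should achieve the same without code changes:

```python
import os
import sys

# start.py was run from inside the package, so sys.path[0] is
# .../PoolExexOnPoolExecConcept and 'multiprocessingTests' is invisible.
# One fix: put the project root on sys.path before the package imports.
project_root = os.path.abspath("m:/projects/python/flask apps/openRoutService/apps/app5/app5-test")
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# from multiprocessingTests.PoolExexOnPoolExecConcept.ProofOfConcept_1 import ProofOfConcept_1
print(project_root in sys.path)
```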
|
<python>
|
2023-06-01 06:09:45
| 1
| 11,564
|
Amrmsmb
|
76,378,946
| 1,851,302
|
Python/RegEx : Converting bad paragraph to good paragraph
|
<p>Here's the code I used to extract PDF content using <code>pdfminer.six</code></p>
<pre><code>from pdfminer.high_level import extract_text
import pyttsx3
text = extract_text(pdf_file_path, page_numbers =[1,3])
# text content is shown below
# this text need to applied RegEX to convert into proper paragraphs
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()
</code></pre>
<p><code>text</code> content is shown below :</p>
<pre><code>Introduction
The Book of Secrets became an Osho “classic” shortly after it was first
published. And no wonder — it contains not only a comprehensive
overview of Osho’s unique, contemporary take on the eternal human quest
for meaning, but also the most comprehensive set of meditation
techniques available to help find that meaning within our own lives.
As Osho explains in the first chapter:
These are the oldest, most ancient techniques. But you can call them
the latest, also, because nothing can be added to them. They have taken in
all the possibilities, all the ways of cleaning the mind, transcending the
mind. Not a single method could be added to [these] one hundred and
twelve methods. It is the most ancient and yet the latest, yet the newest.
Old like old hills — the methods seem eternal — and they are new like a
dewdrop before the sun, because they are so fresh. These one hundred and
twelve methods constitute the whole science of transforming mind.
</code></pre>
<p><strong>Issue</strong>: <code>engine.say(text)</code> does its work, BUT while speaking it gives <strong>long pauses</strong> after each carriage return (e.g. after "first", "comprehensive", "quest", ...) for an interval that matches the pause after a full stop (.). So, in order to <em>read smoothly</em>, I first want to convert these paragraphs into a proper format.</p>
<p><strong>Solution approaches</strong>: Since the reader makes equal pauses at both the end of a sentence and the end of a paragraph, we can choose one of the following approaches:</p>
<ol>
<li>Either convert entire one paragraph as single sentence and pass to reader.</li>
<li>Or, pass every single sentence(between two fullstops) to the reader.</li>
</ol>
<p> Expected text (approach 1 - Preferable) : </p>
<pre><code>Introduction
The Book of Secrets became an Osho “classic” shortly after it was first published. And no wonder — it contains not only a comprehensive overview of Osho’s unique, contemporary take on the eternal human quest for meaning, but also the most comprehensive set of meditation techniques available to help find that meaning within our own lives.
As Osho explains in the first chapter:
These are the oldest, most ancient techniques. But you can call them the latest, also, because nothing can be added to them. They have taken in all the possibilities, all the ways of cleaning the mind, transcending the mind. Not a single method could be added to [these] one hundred and twelve methods. It is the most ancient and yet the latest, yet the newest. Old like old hills — the methods seem eternal — and they are new like a dewdrop before the sun, because they are so fresh. These one hundred and twelve methods constitute the whole science of transforming mind.
</code></pre>
<p>Expected text (approach 2 - Preferable) :</p>
<pre><code>Introduction
The Book of Secrets became an Osho “classic” shortly after it was first published.
And no wonder — it contains not only a comprehensive overview of Osho’s unique, contemporary take on the eternal human quest for meaning, but also the most comprehensive set of meditation techniques available to help find that meaning within our own lives.
As Osho explains in the first chapter:
These are the oldest, most ancient techniques. But you can call them the latest, also, because nothing can be added to them.
They have taken in all the possibilities, all the ways of cleaning the mind, transcending the mind.
Not a single method could be added to [these] one hundred and twelve methods. It is the most ancient and yet the latest, yet the newest.
Old like old hills — the methods seem eternal — and they are new like a dewdrop before the sun, because they are so fresh.
These one hundred and twelve methods constitute the whole science of transforming mind.
</code></pre>
<p>I am fairly new to <code>RegEx</code> and unable to come up with a RegEx that removes newline but retains paragraph structure.</p>
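A sketch of approach 1 (the heuristic here, which joins a newline unless the previous line ends a sentence or heading, is an assumption that may need tuning for real PDF text with hyphenation or quotes):

```python
import re

text = (
    "The Book of Secrets became an Osho \u201cclassic\u201d shortly after it was first\n"
    "published. And no wonder \u2014 it contains not only a comprehensive\n"
    "overview of Osho\u2019s unique take on the eternal human quest\n"
    "for meaning.\n"
    "As Osho explains in the first chapter:\n"
    "These are the oldest, most ancient techniques."
)

# Approach 1: turn a newline into a space unless the previous line ended a
# sentence or heading (., :, ?, !) -- those newlines mark real breaks.
joined = re.sub(r"(?<![.:?!])\n", " ", text)
print(joined)
```

For approach 2, a second pass such as <code>re.sub(r"(?<=[.?!])\s+", "\n", joined)</code> would then put each sentence on its own line.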
|
<python><python-3.x><regex><pdf-reader>
|
2023-06-01 05:39:59
| 2
| 2,534
|
KNU
|
76,378,873
| 2,813,114
|
how to plot a single line in plotly with multiple colors according to a categorical variable
|
<p>How can I get a single connected line in <code>plotly</code> with different colors?</p>
<p>The plot below shows an attempt at a solution. However, the line has an ugly break between point 10 and point 90. How can I have a single line with multiple colors according to a categorical variable without breaking?<a href="https://i.sstatic.net/eelrG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eelrG.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import random
import pandas as pd
import plotly.express as px
white_noise = np.array([random.gauss(mu=0.0, sigma=1.0) for x in range(100)])
rw = white_noise.cumsum()
rw_df = pd.DataFrame({
'random_walk': rw,
'color': 10*['black'] + 50 * ['blue'] + 40*['black'],
'x': range(100)
})
fig = px.line(rw_df, x='x', y='random_walk', color='color')
fig.show()
</code></pre>
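A sketch of one approach (untested with plotly itself): <code>px.line(..., color=...)</code> makes one trace per colour, so the runs don't connect; splitting the frame into contiguous colour runs and repeating each boundary point makes the traces join up.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(1)
rw_df = pd.DataFrame({
    "random_walk": rng.normal(size=100).cumsum(),
    "color": 10 * ["black"] + 50 * ["blue"] + 40 * ["black"],
    "x": range(100),
})

# Label contiguous runs of the same colour, then overlap one point per run.
run_id = (rw_df["color"] != rw_df["color"].shift()).cumsum()
segments = []
for _, seg in rw_df.groupby(run_id):
    if segments:
        seg = pd.concat([segments[-1].tail(1), seg])  # repeat boundary point
    segments.append(seg)
# fig = go.Figure([go.Scatter(x=s["x"], y=s["random_walk"],
#                             line_color=s["color"].iloc[-1]) for s in segments])
print([len(s) for s in segments])
```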
|
<python><plotly>
|
2023-06-01 05:26:05
| 1
| 1,099
|
ichbinallen
|
76,378,818
| 19,157,137
|
Converting formula variables to variable names with regex operations
|
<p>I am trying to convert the entries of <code>Formula_bit</code> into variable-like names that are lowercase with words separated by <code>_</code>. My process is: split the right-hand side by operators (+, -, *, /) or x (multiplication), convert the resulting items to lowercase, replace spaces with underscores, and remove opening and closing parentheses. Finally I remove any leading and trailing underscores. However my <code>output</code> and <code>expected output</code> don't match. What could I do to fix this?</p>
<pre><code>import re
Formula_bit = ['Σ (Dividends)', 'Dividend Payout Ratio * eps']
# Process the right-hand side of each formula to extract parameters
params = [
re.split(r'\s*[+\-*/]\s*| x ', re.sub(r'[+\-*/]', ',', item))[0] # Split the right-hand side by operators (+, -, *, /) or 'x' (multiplication)
.lower() # Convert to lowercase
.replace(" ", "_") # Replace spaces with underscores
.replace("(", "") # Remove opening parentheses
.replace(")", "") # Remove closing parentheses
for item in Formula_bit
]
# Remove leading and trailing underscores from each item and strip whitespace
params = [item.lstrip('_').rstrip('_').strip() for item in params]
</code></pre>
<p>Output:</p>
<pre><code>['ฯ_dividends', 'dividend_payout_ratio_,_eps']
</code></pre>
<p>Expected output:</p>
<pre><code>['ฯ_dividends', 'dividend_payout_ratio', 'eps']
</code></pre>
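A sketch that splits first and cleans afterwards (substituting <code>,</code> for the operators before splitting, and keeping only element <code>[0]</code> of the split, is what leaves the comma in the current output and drops the extra parameters):

```python
import re

formula_bit = ["Σ (Dividends)", "Dividend Payout Ratio * eps"]

params = []
for item in formula_bit:
    # Split on +, -, *, / or a standalone 'x', THEN clean every piece.
    for part in re.split(r"\s*[+*/-]\s*|\s+x\s+", item):
        cleaned = (part.lower()
                       .replace(" ", "_")
                       .replace("(", "")
                       .replace(")", "")
                       .strip("_"))
        if cleaned:
            params.append(cleaned)

print(params)
```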
|
<python><regex><list><split><python-re>
|
2023-06-01 05:10:20
| 1
| 363
|
Bosser445
|
76,378,670
| 2,866,298
|
pandas dataframe query not working with where
|
<p>I am new to pandas. I have this data frame:</p>
<p><code>df['educ1']</code></p>
<p>which gives</p>
<pre><code>1 4
2 3
3 3
4 4
5 1
..
28461 3
28462 2
28463 3
28464 2
28465 4
Name: educ1, Length: 28465, dtype: int64
</code></pre>
<p>when I try querying with</p>
<pre><code>dt=df[df.educ1 > 1]
</code></pre>
<p>It's working fine returning multiple rows, but when I try</p>
<pre><code>college_grad_mask=(df.educ1 > 1)
df.where(college_grad_mask).dropna().head()
</code></pre>
<p>It gives 0 rows, I wonder what is wrong here?</p>
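A toy reproduction of what I believe is happening (column names invented): <code>where</code> keeps the full shape and NaN-fills non-matching rows across every column, so plain <code>dropna()</code>, which drops a row on any NaN, also removes rows whose other columns already contained NaN, as survey data often does.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "educ1": [4, 3, 1, 2],
    "other": [1.0, np.nan, 5.0, np.nan],   # NaNs elsewhere, as in survey data
})

mask = df.educ1 > 1
# where() NaN-fills the non-matching ROWS in every column, so dropna()
# also discards rows whose other columns were already NaN.
print(len(df.where(mask).dropna()))                   # only fully non-NaN rows
print(len(df.where(mask).dropna(subset=["educ1"])))   # matches df[mask]
```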
|
<python><pandas><dataframe>
|
2023-06-01 04:29:28
| 1
| 1,906
|
osama yaccoub
|
76,378,441
| 1,444,483
|
Pandas merge is exploding in memory
|
<p>I am trying to merge 2 dataframes and for some reason the memory usage blows out of proportion.
Those 2 dataframes are relatively large but nothing out of the ordinary. I am using a strong machine with 128GB of RAM.</p>
<pre><code>x = b.merge(a,left_on='id1',right_on='id',how='left')
</code></pre>
<p>This is the output:</p>
<pre><code>MemoryError: Unable to allocate 1.58 TiB for an array with shape (216639452968,) and data type int64
</code></pre>
<p>I am probably doing something wrong here but can't understand why it gets to a 1.6 TB memory requirement.</p>
<p>Here is some info on the dataframes:</p>
<pre><code>print(a.info())
print(a.memory_usage(deep=True))
print(b.info())
print(b.memory_usage(deep=True))
</code></pre>
<pre><code>
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10092079 entries, 0 to 10092078
Data columns (total 2 columns):
# Column Dtype
--- ------ -----
0 id object
1 date datetime64[ns]
dtypes: datetime64[ns](1), object(1)
memory usage: 154.0+ MB
None
Index 128
id 654665935
date 80736632
dtype: int64
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 15000000 entries, 0 to 14999999
Data columns (total 2 columns):
# Column Dtype
--- ------ -----
0 id1 object
1 id2 object
dtypes: object(2)
memory usage: 228.9+ MB
None
Index 128
id1 965676606
id2 718661312
dtype: int64
</code></pre>
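A toy sketch of the usual cause: duplicate join keys multiply, so the output row count is the sum over keys of (left count × right count), which can reach the 216-billion-row shape in the error; <code>validate</code> is a documented <code>merge</code> parameter that can catch this before the blow-up.

```python
import pandas as pd

a = pd.DataFrame({"id": ["k1", "k1", "k1"],
                  "date": pd.to_datetime(["2023-01-01"] * 3)})
b = pd.DataFrame({"id1": ["k1", "k1"], "id2": ["x", "y"]})

# Duplicate join keys multiply: 2 left rows x 3 right rows = 6 output rows.
x = b.merge(a, left_on="id1", right_on="id", how="left")
print(len(x))   # 6

# Diagnose before merging: the per-key product of counts bounds the output.
top = b["id1"].value_counts().head(1) * a["id"].value_counts().head(1)
print(int(top.iloc[0]))
# b.merge(a, left_on="id1", right_on="id", how="left",
#         validate="many_to_one")  # raises if 'id' is not unique in a
```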
|
<python><pandas><merge><out-of-memory>
|
2023-06-01 03:11:20
| 1
| 381
|
Gal
|
76,378,374
| 9,040,520
|
docker python3.x-minimal error after ubuntu update
|
<p>My app was working fine. Now I want to migrate my server to a new EC2 instance, from Ubuntu 20 to 22.</p>
<p>This problem occurs not only on the new Ubuntu (22) but also on 20 after an automatic Ubuntu update.</p>
<p>This is my Dockerfile:</p>
<pre><code># pull official base image
FROM python:3.9.13-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt update \
&& apt install -y build-essential gcc python3-dev musl-dev libffi-dev libssl-dev postgresql-server-dev-all cargo git tk libmagic1
# lint
RUN pip install filemagic
RUN pip install --upgrade pip
RUN pip install flake8
COPY . /usr/src/app/
# install dependencies
COPY ./requirements.txt .
RUN pip install wheel
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.13-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
# RUN addgroup -S app && adduser -S app -G app
RUN adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apt update && apt install -y tk
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chmod +x $APP_HOME/entrypoint.sh
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
</code></pre>
<p>and this is the error</p>
<pre><code>#0 26.68 Preparing to unpack .../libpython3.9-minimal_3.9.2-1_amd64.deb ...
#0 26.70 Unpacking libpython3.9-minimal:amd64 (3.9.2-1) ...
#0 27.21 Selecting previously unselected package python3.9-minimal.
#0 27.21 Preparing to unpack .../python3.9-minimal_3.9.2-1_amd64.deb ...
#0 27.23 Unpacking python3.9-minimal (3.9.2-1) ...
#0 27.79 Setting up libpython3.9-minimal:amd64 (3.9.2-1) ...
#0 27.82 Setting up python3.9-minimal (3.9.2-1) ...
#0 28.16 Traceback (most recent call last):
#0 28.17 File "/usr/lib/python3.9/py_compile.py", line 215, in <module>
#0 28.17 sys.exit(main())
#0 28.17 File "/usr/lib/python3.9/py_compile.py", line 207, in main
#0 28.17 compile(filename, doraise=True)
#0 28.17 File "/usr/lib/python3.9/py_compile.py", line 172, in compile
#0 28.17 importlib._bootstrap_external._write_atomic(cfile, bytecode, mode)
#0 28.17 File "<frozen importlib._bootstrap_external>", line 126, in _write_atomic
#0 28.18 PermissionError: [Errno 13] Permission denied: '/usr/lib/python3.9/__pycache__/__future__.cpython-39.pyc.140199513461120'
#0 28.18 dpkg: error processing package python3.9-minimal (--configure):
#0 28.18 installed python3.9-minimal package post-installation script subprocess returned error exit status 1
#0 28.21 Errors were encountered while processing:
#0 28.21 python3.9-minimal
#0 28.37 E: Sub-process /usr/bin/dpkg returned an error code (1)
------
failed to solve: executor failed running [/bin/sh -c apt update && apt install -y build-essential gcc python3-dev musl-dev libffi-dev libssl-dev postgresql-server-dev-all cargo git tk libmagic1]: exit code: 100
service "web" is not running container #1
</code></pre>
<p>I already tried updating the python slim image from 3.9 to 3.10.x and 3.11.x, but the problem still exists.</p>
|
<python><docker><ubuntu><ubuntu-22.04>
|
2023-06-01 02:49:38
| 1
| 366
|
Baltschun Ali
|
76,378,373
| 4,175,822
|
What are some alternatives to using classmethod and property decorators together?
|
<p>What are some alternatives to using classmethod and property decorators together?</p>
<p>In python 3.11 and later, combining them is no longer supported per: <a href="https://docs.python.org/3.11/library/functions.html#classmethod" rel="nofollow noreferrer">https://docs.python.org/3.11/library/functions.html#classmethod</a></p>
<p>I have code like so:</p>
<pre><code>class Bike:
@classmethod
@property
def tire_type(cls) -> tire_type.Road:
return tire_type.Road
from . import tire_type
</code></pre>
<p>The <code>tire_type</code> import must be last because it has a cyclic dependency on the current module.
What are some options for providing the <code>tire_type</code> property in the <code>Bike</code> class that do not combine the two decorators?</p>
<p>I also want the tire_type type hint to show up correctly in vscode.</p>
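<p>One commonly suggested alternative is a small custom descriptor (often called <code>classproperty</code>). This is a minimal sketch using a string stand-in for the real <code>tire_type.Road</code>, not a drop-in for the asker's module layout:</p>

```python
class classproperty:
    """Descriptor combining @classmethod and @property behaviour."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, owner):
        # called on both instance and class access; owner is always the class
        return self.func(owner)

class Bike:
    @classproperty
    def tire_type(cls):
        return "Road"  # stand-in for the real tire_type.Road

assert Bike.tire_type == "Road"    # works on the class...
assert Bike().tire_type == "Road"  # ...and on instances
```

<p>Whether an IDE infers the type hint through a custom descriptor varies; annotating <code>__get__</code>'s return type can help.</p>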
|
<python><decorator><circular-dependency>
|
2023-06-01 02:49:15
| 2
| 2,821
|
spacether
|
76,378,321
| 11,720,193
|
Parse a CSV File and Rename the columns
|
<p>I have a csv file which has spaces in the column names in the header record. The source system is unable to rename the fields before sending - so, we have to handle this at our end before ingestion.</p>
<p>The column-names (of the src csv file) will be stored in another file on S3 along with the column-name it has to be changed to. So the program has to be generic so that it can handle an arbitrary number of columns based on the config file.</p>
<p>Config File:</p>
<pre><code>src_column-name | target column-name
S.No | SNo
Count of lines visited | LinesVisited
Revenue | RevenueGenerated
No. of clicks | NumberofClicks
</code></pre>
<p>Hence, is there a way to read the config file line by line and rename the source column-names (Column #1) to the target column-names (Column #2), then read the data of the actual csv file with the new column-names and save it back to S3 as a csv file with a different name?</p>
<p>For e.g. -</p>
<p>Expected:</p>
<pre><code>SNo | LinesVisited | RevenueGenerated | Numberofclicks
</code></pre>
<p>We can use Pyspark or Python. Please let me know if additional information is required.</p>
<p>Thanks.</p>
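<p>For reference, a minimal pandas sketch of the rename step. The in-memory strings below stand in for the S3 files (the real code would read from S3 instead), and the delimiter is assumed to be a comma:</p>

```python
import io
import pandas as pd

# stand-ins for the S3 config and data files
config_csv = ("src,target\n"
              "S.No,SNo\n"
              "Count of lines visited,LinesVisited\n"
              "Revenue,RevenueGenerated\n"
              "No. of clicks,NumberofClicks\n")
data_csv = "S.No,Count of lines visited,Revenue,No. of clicks\n1,10,100,5\n"

# build a {source_name: target_name} mapping from the config file
mapping = dict(pd.read_csv(io.StringIO(config_csv))
                 .itertuples(index=False, name=None))

# rename handles an arbitrary number of columns via the mapping
df = pd.read_csv(io.StringIO(data_csv)).rename(columns=mapping)
assert list(df.columns) == ["SNo", "LinesVisited",
                            "RevenueGenerated", "NumberofClicks"]
```

<p>The same mapping dict also works with PySpark via <code>df.withColumnsRenamed(mapping)</code> on recent Spark versions.</p>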
|
<python><pyspark>
|
2023-06-01 02:31:22
| 1
| 895
|
marie20
|
76,378,181
| 11,332,693
|
Merging multiple dataframes in loop based on same suffix in variable names
|
<p>I want to merge dataframes from demand_dataframe_list with supply_dataframe_list when the suffix is identical.</p>
<pre class="lang-py prettyprint-override"><code>demand_dataframe_list = [data_Market1, data_Market2]
supply_dataframe_list = [df_supply2_Market1, df_supply2_Market2]
</code></pre>
<p>For example, <code>data_Market1</code> should be merged with <code>df_supply2_Market1</code> and <code>data_Market2</code> should be merged with <code>df_supply2_Market2</code>.</p>
<p>Here the Market1 and Market2 suffixes should be used to pair the dataframes, and the merge itself is based on the common columns present in each dataframe, which are 'Col1' and 'Col2'.</p>
<p>Below is my attempt. I am getting an empty dataframe with this code. Appreciate your help!</p>
<pre class="lang-py prettyprint-override"><code>merged_dataframes = []
for demand_df, supply_df in zip(demand_dataframe_list, supply_dataframe_list):
print(demand_df)
demand_suffix = demand_df.name.split('_')[-1] # Extract the suffix from the demand dataframe name
supply_suffix = supply_df.name.split('_')[-1] # Extract the suffix from the supply dataframe name
merged_df = pd.merge(demand_df, supply_df, how="inner", on=['Col1', 'Col2'])
merged_dataframes.append(merged_df)
</code></pre>
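<p>For reference, a runnable sketch of the pairing-and-merge loop on toy data (column values are assumptions). Note that plain DataFrames have no <code>.name</code> attribute unless you assign one, so the <code>zip</code> order can do the suffix pairing instead; an empty inner merge usually means the key values or dtypes don't actually match:</p>

```python
import pandas as pd

data_Market1 = pd.DataFrame({"Col1": [1, 2], "Col2": ["a", "b"],
                             "demand": [10, 20]})
df_supply2_Market1 = pd.DataFrame({"Col1": [1, 2], "Col2": ["a", "b"],
                                   "supply": [5, 6]})

demand_dataframe_list = [data_Market1]
supply_dataframe_list = [df_supply2_Market1]

merged_dataframes = []
for demand_df, supply_df in zip(demand_dataframe_list, supply_dataframe_list):
    # zip already pairs Market1 with Market1, Market2 with Market2,
    # as long as both lists are built in the same suffix order
    merged = pd.merge(demand_df, supply_df, how="inner", on=["Col1", "Col2"])
    merged_dataframes.append(merged)

assert len(merged_dataframes[0]) == 2
assert "supply" in merged_dataframes[0].columns
```

<p><code>pd.merge(..., indicator=True)</code> with <code>how="outer"</code> can help diagnose which side's keys fail to match when the inner merge comes back empty.</p>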
|
<python><pandas><dataframe><merge>
|
2023-06-01 01:41:33
| 1
| 417
|
AB14
|
76,378,174
| 19,968,680
|
Exempt domain for slowapi rate limiter
|
<p>Is there a way to exempt certain domains from rate limiting using the slowapi extension for Python FastAPI? I want the frontend (my_domain.com) to be exempt from rate-limiting, but any other requests should be rate limited. For example, I am looking for something like this:</p>
<pre class="lang-py prettyprint-override"><code>def my_key_func(request):
"""Set up a key function that exempts my_domain.com"""
    if "my_domain.com" in request.client.host:
        ...  # Exempt from limiting
    else:
        ...  # Do limiting
limiter = Limiter(key_func=my_key_func)
app = FastAPI(lifespan=lifespan)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
@app.get("/limited-route")
@limiter.limit("10/minute")
async def function():
return {"success": 200}
</code></pre>
<p>Any ideas? Thanks</p>
|
<python><backend><fastapi><rate-limiting><slowapi>
|
2023-06-01 01:38:14
| 0
| 322
|
gbiz123
|
76,378,138
| 11,098,908
|
Conda proxy problem when using home wifi network
|
<p>I installed anaconda in my work laptop and used the solution in this <a href="https://stackoverflow.com/questions/58797984/how-to-solve-an-error-that-appears-in-conda-proxy-configuration">post</a> to solve the proxy problem encountered when using the company's wifi network to update/install packages with <code>conda</code> or <code>pip</code>.</p>
<p>However, that solution (adding proxy to the condarc file) didn't work when I tried to update/install packages when using my home's wifi network.</p>
<p>This is a typical output when trying to install packages on the home network, regardless of whether the proxy setting is present in the <code>condarc</code> file or the proxy is disabled in the internet connection settings:</p>
<pre><code>(base) C:\>conda install -c conda-forge keras
Collecting package metadata (current_repodata.json): failed
ProxyError: Conda cannot proceed due to an error in your proxy configuration.
Check for typos and other configuration errors in any '.netrc' file in your home directory,
any environment variables ending in '_PROXY', and any other system-wide proxy
configuration settings.
</code></pre>
<p><a href="https://i.sstatic.net/9Octm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Octm.png" alt="enter image description here" /></a></p>
<p>Could someone please explain what happened here? Thanks.</p>
|
<python><proxy><anaconda><http-proxy>
|
2023-06-01 01:22:32
| 0
| 1,306
|
Nemo
|
76,377,964
| 7,516,523
|
Speed up Gekko when minimizing many equations with interactive variables
|
<p>I am using <code>gekko</code> to solve for 14 variables by minimizing around 10,000 equations with <code>IMODE=3</code>.</p>
<p>Each equation is the squared error between a response <code>y</code> and the output of a polynomial model at row <code>i</code> in the training data.</p>
<p><code>eq[i] = (y[i] - model[i]) ** 2</code></p>
<p>In each row, the polynomial model has around 10 to 100 terms, where the 14 optimized variables are found. The variables are very interactive in the model, meaning that multiple variables are multiplied together multiple times.</p>
<p><strong>Question:</strong> What strategies can I employ to speed up the solving time?</p>
<p>Here is a much simpler reproducible example where the model tries to fit a straight line:</p>
<pre><code>from gekko import GEKKO
import numpy as np
m = GEKKO() # instantiate gekko model
# instantiate free variables
a = m.FV(lb=0, ub=2)
a.STATUS = 1
b = m.FV(lb=0, ub=2)
b.STATUS = 1
c = m.FV(lb=0, ub=2)
c.STATUS = 1
n_eqs1 = 1000 # number of equations in dataset1
n_eqs2 = 500 # number of equations in dataset2
n_terms = 12 # number of terms in each equation
noise_scl = 1 # amount of noise represented as the std of the normal distributions
# training datasets
x = {
"dataset1": np.arange(n_eqs1)[:, np.newaxis]
+ np.random.normal(loc=0, scale=noise_scl, size=(n_eqs1, n_terms)),
"dataset2": np.arange(n_eqs2)[:, np.newaxis]
+ np.random.normal(loc=0, scale=noise_scl, size=(n_eqs2, n_terms)),
}
# response
y = np.arange(n_eqs1)
for x_ds in x.values():
for i in range(x_ds.shape[0]):
# minimize equations
m.Minimize(
(
y[i]
- (
x_ds[i, 0] * a
+ x_ds[i, 1] * a**2
+ x_ds[i, 2] * a * b
+ x_ds[i, 3] * a * (b**2)
+ x_ds[i, 4] * (a**2) * b
+ x_ds[i, 5] * (a**2) * (b**2)
+ x_ds[i, 6] * c
+ x_ds[i, 7] * (c**2)
+ x_ds[i, 8] * c * b
+ x_ds[i, 9] * c * (b**2)
+ x_ds[i, 10] * (c**2) * b
+ x_ds[i, 11] * (c**2) * (b**2)
)
/ n_terms
)
** 2
)
m.options.IMODE = 3
m.solve(disp=True)
# depending on the amount of noise, the optimized values should tend towards 1
print(f"a = {a.value[0]:3f}\n" f"b = {b.value[0]:3f}\n" f"c = {c.value[0]:3f}")
</code></pre>
|
<python><machine-learning><optimization><polynomials><gekko>
|
2023-06-01 00:10:47
| 1
| 345
|
Florent H
|
76,377,868
| 5,787,188
|
Regular expression for capturing all text starting at one pattern and ending at another
|
<p>I am scraping text data off a pdf using python. There is a common pattern that contains the data I need that begins with a numerical pattern and ends with a string pattern. I need to capture all the text, including the patterns using a regular expression.</p>
<p>I have a regular expression that works when I import the data by going pdf to txt and reading the text in. When I use PyPDF2 to extract the text from the pdf pages, the regular expression fails.</p>
<p>The data stream looks like this</p>
<pre><code>Filed: 8/21/2022\nEntered: 10/21/2022\nDischarged: 01/23/2023\nClosed: 01/30/2023\n17-55018- \nQRTbk 7 Windows PC\n OS:xxx\nRole: AdminHubertson
</code></pre>
<p>The start point is the <code>17-55018-</code> string which I have a regex that works:</p>
<pre><code>[0-9]{2}-[0-9]{5}-
</code></pre>
<p>The end point is the <code>Role: Admin</code> which is unique enough to compile.</p>
<p>I have tried a number of capture methods using lookaheads to get the text I need. I have tested these methods on regex101, where they work, but I cannot get them to work in my Python code.</p>
<p>Some patterns I have tried:</p>
<pre><code>[0-9]{2}-[0-9]{5}-\s(\n(?!Role)(.*))*Role: Admin
[0-9]{2}-[0-9]{5}-\.(.*?)Role: Admin
[0-9]{2}-[0-9]{5}-.*(?=Role).*Role: Admin
</code></pre>
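<p>One difference worth checking: on regex101 the <code>s</code> flag may be enabled, but in Python <code>.</code> does not match newlines unless <code>re.DOTALL</code> is passed. A minimal sketch on the sample stream from the question:</p>

```python
import re

text = ("Filed: 8/21/2022\nEntered: 10/21/2022\nDischarged: 01/23/2023\n"
        "Closed: 01/30/2023\n17-55018- \nQRTbk 7 Windows PC\n OS:xxx\n"
        "Role: AdminHubertson")

# re.DOTALL lets '.' cross the \n line breaks inside the captured block
match = re.search(r"[0-9]{2}-[0-9]{5}-.*?Role: Admin", text, re.DOTALL)
assert match is not None
assert match.group(0).startswith("17-55018-")
assert match.group(0).endswith("Role: Admin")
```

<p>The lazy <code>.*?</code> keeps the match from running past the first <code>Role: Admin</code> when several records appear in one page of text.</p>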
|
<python><regex><regex-lookarounds>
|
2023-05-31 23:35:38
| 1
| 381
|
DataNoob
|
76,377,862
| 16,496,244
|
Pyglet not registering key presses unless I import keyboard module in my Python Chip-8 emulator even though it's not being used
|
<h2>Background</h2>
<p>I'm working on a Python chip-8 emulator <a href="https://github.com/harshkaso/Chipy" rel="nofollow noreferrer">Chipy</a>.
And I'm using Pyglet to display sprites and handle keypresses.</p>
<p>The way I implemented the screen and keyboard functions are in separate files but in the same module, as follow.</p>
<p><strong>screen.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from pyglet.window import Window
window = Window(args, kwargs)
"""
All the functions related to displaying things on screen
"""
</code></pre>
<p><strong>keyboard.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from .screen import window
@window.event
def on_key_press(symbol, modifiers):
    ...  # handle key presses
@window.event
def on_key_release(symbol, modifiers):
    ...  # handle key releases
</code></pre>
<h2>Issue</h2>
<p>The actual opcodes are being handled in a different file, which uses the <code>screen.py</code> module for some functionalities but not the <code>keyboard.py</code> module. But if I do not import the keyboard along with the screen, pyglet won't register keypresses.</p>
<p><strong>control_unit.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from chipy.peripherals import screen, keyboard
# Handles all the opcodes
# Uses screen
# keyboard is an unused import
</code></pre>
<p>Everything works fine with above file, but If I were to remove the unused import <code>keyboard</code>, pyglet will stop registering keypresses. I don't know what is causing this behaviour or is there a work around to make the code more organized?</p>
<p>I skimmed through the keyboard events in the Pyglet documentation but didn't find anything that could help understand this behaviour.</p>
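<p>The side effect of importing <code>keyboard</code> is what runs the <code>@window.event</code> decorators and registers the handlers. One way to make that explicit (so there is no unused-looking import) is to wrap registration in a function that <code>control_unit.py</code> calls deliberately. A sketch with a fake window so it runs without pyglet; the function name is an assumption:</p>

```python
class FakeWindow:
    """Stand-in for pyglet.window.Window, just recording registered handlers."""
    def __init__(self):
        self.handlers = {}

    def event(self, func):
        # mimics pyglet's @window.event decorator
        self.handlers[func.__name__] = func
        return func

def register_keyboard(window):
    # keyboard.py could expose this instead of registering at import time
    @window.event
    def on_key_press(symbol, modifiers):
        return ("press", symbol)

    @window.event
    def on_key_release(symbol, modifiers):
        return ("release", symbol)

window = FakeWindow()
register_keyboard(window)  # explicit call replaces the bare import
assert set(window.handlers) == {"on_key_press", "on_key_release"}
```

<p>With this shape, <code>control_unit.py</code> imports <code>keyboard</code> and calls <code>keyboard.register_keyboard(screen.window)</code>, making the dependency visible rather than accidental.</p>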
|
<python><keyboard><emulation><pyglet><chip-8>
|
2023-05-31 23:33:23
| 1
| 901
|
Harsh Kasodariya
|
76,377,750
| 9,795,817
|
Parallelize an operation applied to a list (PySpark)
|
<p>At one point in my program, a function receives a list and repeats an operation using each of its items.</p>
<p>For the sake of this example, suppose my program starts off with a list <code>l</code> and counts the rows of some generic pyspark dataframe <code>df</code> that meet a condition.</p>
<pre class="lang-py prettyprint-override"><code>l = [2, 4, 5]
res = []
for x in l:
    val = df.where((col('id') == x) | (col('id2') == x)).count()
res.append(val)
</code></pre>
<p>Is it possible to have multiple workers calculate <code>val</code> simultaneously? That is, could each worker calculate its own <code>val</code> and append it to <code>l</code> individually?</p>
<p><a href="https://stackoverflow.com/questions/49023893/replace-for-loop-to-parallel-process-in-pyspark">This post</a> suggests using <code>foreach</code>, but since I'm iterating over a list (not an RDD or dataframe), I cannot use that method.</p>
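<p>Since actions like <code>count()</code> are driver-side calls, one option that is sometimes suggested is submitting them from several driver threads so the resulting Spark jobs can run concurrently on the cluster. A sketch with a plain function standing in for the <code>df.where(...).count()</code> call:</p>

```python
from concurrent.futures import ThreadPoolExecutor

l = [2, 4, 5]

def count_matching(x):
    # stand-in for: df.where((col('id') == x) | (col('id2') == x)).count()
    # each call submitted from its own driver thread becomes a separate
    # Spark job that the scheduler can run in parallel
    return x * 10

with ThreadPoolExecutor(max_workers=len(l)) as pool:
    res = list(pool.map(count_matching, l))  # results keep the order of l

assert res == [20, 40, 50]
```

<p>Whether the jobs actually overlap depends on the cluster's free resources and the scheduler configuration.</p>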
|
<python><pyspark><parallel-processing>
|
2023-05-31 22:56:49
| 2
| 6,421
|
Arturo Sbr
|
76,377,693
| 1,181,065
|
Error while finding module in VS code debugger
|
<p>I have tried different solutions to enable me to debug a python file in my project.
Here is my file structure:</p>
<pre><code>project-main
  ns/
  examples/
</code></pre>
<p>and several Python files in examples and ns. Basically, all example files are using ns or other imported modules. I also have a conda environment that I have already activated in vs code terminal. And here is my launch.json file:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python Module",
"type": "python",
"request": "launch",
"module": "project-main.examples",
}
]
}
</code></pre>
<p>I select Start debugging from the Run menu but I keep getting this error:</p>
<pre><code>E+00000.063: Error determining module path for sys.argv
Traceback (most recent call last):
File "c:\Users\myuser\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 296, in run_module
spec = find_spec(options.target)
File "C:\Users\myuser\AppData\Local\Programs\Python\Python310\lib\importlib\util.py", line 94, in find_spec
parent = __import__(parent_name, fromlist=['__path__'])
ModuleNotFoundError: No module named 'project-main'
</code></pre>
<p>.
.
.</p>
<pre><code> File "c:\Users\myuser\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\lib\python\debugpy/..\debugpy\common\log.py", line 215, in swallow_exception
_exception(format_string, *args, **kwargs)
C:\Users\myuser\AppData\Local\Programs\Python\Python310\python.exe: Error while finding module specification for 'project-main.examples' (ModuleNotFoundError: No module named 'project-main')
</code></pre>
|
<python><visual-studio-code><module><vscode-debugger>
|
2023-05-31 22:36:54
| 2
| 539
|
Hanna
|
76,377,597
| 3,130,747
|
How to create a primary key using sqlalchemy of type 'generated always as identity'
|
<p>I'm using postgres, and want to create the SQLA model for the following DDL:</p>
<pre class="lang-sql prettyprint-override"><code>create table tbl(
tbl_id INT generated always as identity PRIMARY KEY,
tbl_x INT
)
</code></pre>
<p>How do I create this table using SQLAlchemy?</p>
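<p>For reference, SQLAlchemy 1.4+ provides an <code>Identity</code> construct for exactly this. A sketch using the declarative API; compiling the table against the PostgreSQL dialect shows the generated DDL:</p>

```python
from sqlalchemy import Column, Identity, Integer
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class Tbl(Base):
    __tablename__ = "tbl"
    # Identity(always=True) renders as GENERATED ALWAYS AS IDENTITY
    tbl_id = Column(Integer, Identity(always=True), primary_key=True)
    tbl_x = Column(Integer)

ddl = str(CreateTable(Tbl.__table__).compile(dialect=postgresql.dialect()))
assert "GENERATED ALWAYS AS IDENTITY" in ddl
```

<p><code>Identity()</code> without arguments gives the <code>GENERATED BY DEFAULT AS IDENTITY</code> variant instead.</p>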
|
<python><postgresql><sqlalchemy><ddl>
|
2023-05-31 22:09:43
| 1
| 4,944
|
baxx
|
76,377,578
| 1,540,660
|
pandas.read_xml() unexpected behaviour
|
<p>I am trying to understand why the code:</p>
<pre class="lang-python prettyprint-override"><code>import pandas
xml = '''
<ROOT>
<ELEM atr="anything">1</ELEM>
<ELEM atr="anything">2</ELEM>
<ELEM atr="anything">3</ELEM>
<ELEM atr="anything">4</ELEM>
<ELEM atr="anything">5</ELEM>
<ELEM atr="anything">6</ELEM>
<ELEM atr="anything">7</ELEM>
<ELEM atr="anything">8</ELEM>
<ELEM atr="anything">9</ELEM>
<ELEM atr="anything">10</ELEM>
</ROOT>
'''
df = pandas.read_xml(xml, xpath='/ROOT/ELEM')
print(df.to_string())
</code></pre>
<p>... works as expected and prints:</p>
<pre>
atr ELEM
0 anything 1
1 anything 2
2 anything 3
3 anything 4
4 anything 5
5 anything 6
6 anything 7
7 anything 8
8 anything 9
9 anything 10
</pre>
<p>Yet the following code:</p>
<pre class="lang-python prettyprint-override"><code>import pandas
xml = '''
<ROOT>
<ELEM>1</ELEM>
<ELEM>2</ELEM>
<ELEM>3</ELEM>
<ELEM>4</ELEM>
<ELEM>5</ELEM>
<ELEM>6</ELEM>
<ELEM>7</ELEM>
<ELEM>8</ELEM>
<ELEM>9</ELEM>
<ELEM>10</ELEM>
</ROOT>
'''
df = pandas.read_xml(xml, xpath='/ROOT/ELEM')
print(df.to_string())
</code></pre>
<p>results in the error:</p>
<pre>ValueError: xpath does not return any nodes or attributes. Be sure to
specify in `xpath` the parent nodes of children and attributes to
parse. If document uses namespaces denoted with xmlns, be sure to
define namespaces and use them in xpath.</pre>
<p>I have read the documentation here:
<a href="https://pandas.pydata.org/docs/reference/api/pandas.read_xml.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.read_xml.html</a></p>
<p>And also checked my xpath here (code above is just a minimal example, actual XML I use is more complex):
<a href="https://freeonlineformatter.com/xpath-validator/" rel="nofollow noreferrer">https://freeonlineformatter.com/xpath-validator/</a></p>
<p>In a nutshell, I need to read a list of XML child elements at a known xpath into a pandas dataframe. The child elements have no attributes but all have text values. I want to get a dataframe with one column containing these values. What am I doing wrong?</p>
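<p>Whatever <code>read_xml</code> does on a given pandas version, a version-independent fallback is to extract the text values directly and build the frame yourself. A minimal sketch:</p>

```python
import xml.etree.ElementTree as ET
import pandas as pd

xml = "<ROOT>" + "".join(f"<ELEM>{i}</ELEM>" for i in range(1, 11)) + "</ROOT>"

# pull the text value of every ELEM node, regardless of attributes
values = [elem.text for elem in ET.fromstring(xml).iter("ELEM")]
df = pd.DataFrame({"ELEM": values})

assert list(df["ELEM"]) == [str(i) for i in range(1, 11)]
assert df.shape == (10, 1)
```

<p>Note the values come back as strings; cast with <code>df["ELEM"].astype(int)</code> if numeric dtypes are needed.</p>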
|
<python><pandas><xpath><readxml>
|
2023-05-31 22:06:09
| 1
| 336
|
Art Gertner
|
76,377,475
| 2,141,839
|
Why does Python cryptography and xmlsec1 produce different signatures
|
<p>I am working on an integration with an xml based API. I am able to sign xml requests successfully using xmlsec1 like this...</p>
<p><code>xmlsec1 --sign --lax-key-search --privkey-pem test_privkey.pem,test_cert.pem --output xml/signed.xml xml/template.xml</code></p>
<p>template.xml:
<code><root><Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><SignedInfo><CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"></CanonicalizationMethod><SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"></SignatureMethod><Reference URI=""><Transforms><Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"></Transform><Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"></Transform></Transforms><DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"></DigestMethod><DigestValue></DigestValue></Reference></SignedInfo><SignatureValue/></Signature></root></code></p>
<p>(I've removed the actual content in an attempt to simplify/debug the issue explained below)</p>
<p>This works fine, but now I'd like to do this in Python...</p>
<pre><code># Load the private key
with open(KEY_PATH, "rb") as key_file:
private_key = load_pem_private_key(key_file.read(), None)
# Load the XML file
tree = etree.parse(f"xml/{XML_FILENAME}")
root = tree.getroot()
# Create a Signature element
signature_element = etree.Element("Signature")
signature_element.attrib["xmlns"] = "http://www.w3.org/2000/09/xmldsig#"
# Create a SignedInfo element
signed_info_element = etree.SubElement(signature_element, "SignedInfo")
# Create a CanonicalizationMethod element
canonicalization_method_element = etree.SubElement(signed_info_element, "CanonicalizationMethod")
canonicalization_method_element.attrib["Algorithm"] = "http://www.w3.org/TR/2001/REC-xml-c14n-20010315"
# Create a SignatureMethod element
signature_method_element = etree.SubElement(signed_info_element, "SignatureMethod")
signature_method_element.attrib["Algorithm"] = "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
# Create a Reference element
reference_element = etree.SubElement(signed_info_element, "Reference")
reference_element.attrib["URI"] = ""
# Create Transforms
transforms_element = etree.SubElement(reference_element, "Transforms")
transform_element1 = etree.SubElement(transforms_element, "Transform")
transform_element1.attrib["Algorithm"] = "http://www.w3.org/2000/09/xmldsig#enveloped-signature"
transform_element2 = etree.SubElement(transforms_element, "Transform")
transform_element2.attrib["Algorithm"] = "http://www.w3.org/TR/2001/REC-xml-c14n-20010315"
# Create DigestMethod
digest_method_element = etree.SubElement(reference_element, "DigestMethod")
digest_method_element.attrib["Algorithm"] = "http://www.w3.org/2001/04/xmlenc#sha256"
# Compute the digest value for the entire XML document and add it to the DigestValue element
digest = hashlib.sha256(etree.tostring(root, method="c14n")).digest()
digest_value_element = etree.SubElement(reference_element, "DigestValue")
digest_value_element.text = b64encode(digest).decode()
# Canonicalize the SignedInfo XML
c14n_signed_info = etree.tostring(signed_info_element, method="c14n")
# Create a SHA256 digest of the SignedInfo
digest = hashlib.sha256(c14n_signed_info).digest()
# Sign the digest
signature = private_key.sign(
digest,
padding.PKCS1v15(),
hashes.SHA256()
)
# Embed the SignatureValue in the Signature
signature_value_element = etree.SubElement(signature_element, "SignatureValue")
signature_value_element.text = b64encode(signature).decode()
# Add the Signature element to the root of the document
root.append(signature_element)
# Save the signed XML
tree = etree.ElementTree(root)
with open('xml/signed.xml', 'wb') as f:
tree.write(f)
</code></pre>
<p>XML_FILENAME (simplified for debugging):
<code><root></root></code></p>
<p>When I submit the contents of signed.xml to the API it fails and this API provider is not able to help. In troubleshooting, I noticed that the SignatureValue is different between the Python and xmlsec1 versions (the digest value is the same).</p>
<p>Any ideas on why the SignatureValue would differ, or any other differences you see that would explain why it works with xmlsec1 and not with the Python code. I also tried the signxml Python library with the same results.</p>
<p>Thanks.</p>
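<p>One concrete difference worth checking in the Python code above: it computes <code>hashlib.sha256(...)</code> over SignedInfo and then calls <code>private_key.sign(digest, ..., hashes.SHA256())</code>, which hashes the already-hashed digest a second time. The <code>cryptography</code> library's <code>utils.Prehashed</code> wrapper is the way to sign a precomputed digest. A sketch with a throwaway key; since PKCS#1 v1.5 signing is deterministic, the two correct paths produce identical bytes:</p>

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signed_info = b"<SignedInfo>...</SignedInfo>"  # placeholder bytes

# signing the raw bytes (the library hashes them once)...
sig_direct = key.sign(signed_info, padding.PKCS1v15(), hashes.SHA256())

# ...equals signing the precomputed digest, IF wrapped in Prehashed
digest = hashlib.sha256(signed_info).digest()
sig_prehashed = key.sign(digest, padding.PKCS1v15(),
                         utils.Prehashed(hashes.SHA256()))
assert sig_direct == sig_prehashed

# without Prehashed the digest gets hashed again -> a different signature
sig_double = key.sign(digest, padding.PKCS1v15(), hashes.SHA256())
assert sig_double != sig_direct
```

<p>This is not necessarily the only discrepancy (SignedInfo canonicalization in its enveloping namespace context also matters for XML-DSig), but the double hash alone is enough to make the SignatureValue differ from xmlsec1's.</p>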
|
<python><xml><xml-signature><xmlsec><xmlsec1>
|
2023-05-31 21:40:46
| 1
| 781
|
Bafsky
|
76,377,343
| 4,613,042
|
Applying Tensorflow TextVectorization and StringIndexers to Dask Partitions on Parallel
|
<p>I am trying to build a pipeline to parallelize writing tfrecords files for datasets that are too large to fit into memory. I have successfully used dask to do this many times in the past, but I have a new dataset requiring that TextVectorization and StringIndexers be applied outside of the model (running them inside the model was choking the GPUs). Ultimately I'm trying to apply the vectorizers/string indexers in series within a single partition and then process the partitions in parallel using Dask's computation engine. I have tried about 100 different ways to write the functions and have attempted to apply tf.keras.layers.deserialize and tf.keras.layers.serialize to the vectorizers without luck.</p>
<p>Here is the pipeline as it stands:</p>
<pre class="lang-py prettyprint-override"><code>
from dask.distributed import LocalCluster, Client
import logging
import dask
import json
import os
import pickle
import tensorflow as tf
from tensorflow.keras.layers.experimental.preprocessing import StringLookup, TextVectorization
cluster = LocalCluster()
client = Client(cluster)
# Setup logging
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO)
def load_and_broadcast_vectorizers(model_dir):
logging.info("Loading and broadcasting vectorizers...")
path = os.path.join(model_dir, 'vectorizers_saved')
vectorizers = {}
for file in os.listdir(path):
if file.endswith("_vectorizer_config.json"):
key = file[:-23]
with open(os.path.join(path, f'{key}_vectorizer_config.json'), 'r') as f:
config = json.load(f)
vectorizer_class = config['vectorizer_class']
config.pop('vectorizer_class', None) # Remove the 'vectorizer_class' key
if vectorizer_class == 'StringLookup':
vectorizer = StringLookup.from_config(config)
elif vectorizer_class == 'TextVectorization':
vectorizer = TextVectorization.from_config(config)
else:
raise ValueError(f"Unknown vectorizer class: {vectorizer_class}")
vectorizers[key] = vectorizer
logging.info("Vectorizers loaded and broadcasted successfully.")
return client.scatter(vectorizers, broadcast=True)
def apply_vectorizers_to_partition(partition, vectorizers):
logging.info("Applying vectorizers to partition...")
for vectorizer_name, vectorizer in vectorizers.items():
partition[vectorizer_name] = vectorizer.transform(partition[vectorizer_name])
return partition
def apply_vectorizers_in_parallel(vectorizers, df):
logging.info("Applying vectorizers in parallel...")
return df.map_partitions(apply_vectorizers_to_partition, vectorizers)
def write_partition_to_tfrecord(partition, output_path, partition_label, partition_id):
logging.info(f"Writing {partition_label} partition {partition_id} to TFRecord...")
file_name_tfrecord = f'{partition_label}_partition_{partition_id}.tfrecord'
output_file_path_tfrecord = os.path.join(output_path, file_name_tfrecord)
with tf.io.TFRecordWriter(output_file_path_tfrecord) as writer:
for row in partition.itertuples():
# Extract features and label from each row
features = {
'input_1': tf.train.Feature(int64_list=tf.train.Int64List(value=[row['input_1']])),
'input_2': tf.train.Feature(float_list=tf.train.FloatList(value=[row['input_2']])),
'input_3': tf.train.Feature(int64_list=tf.train.Int64List(value=row['input_3'])),
'input_4': tf.train.Feature(int64_list=tf.train.Int64List(value=[row['input_4']])),
'input_5': tf.train.Feature(int64_list=tf.train.Int64List(value=[row['input_5']]))
}
label = tf.train.Feature(float_list=tf.train.FloatList(value=[row['label_col']]))
example = tf.train.Example(features=tf.train.Features(feature={**features, **{'label': label}}))
writer.write(example.SerializeToString())
logging.info(f"{partition_label} partition {partition_id} written to TFRecord.")
def write_partition_to_parquet(partition, output_path, partition_label, partition_id):
logging.info(f"Writing {partition_label} partition {partition_id} to Parquet...")
selected_columns = partition[['personuuid', 'claimnum', 'claimrowid']]
file_name_parquet = f'{partition_label}_partition_{partition_id}.parquet.snappy'
output_file_path_parquet = os.path.join(output_path, file_name_parquet)
selected_columns.to_parquet(output_file_path_parquet, compression='snappy')
logging.info(f"{partition_label} partition {partition_id} written to Parquet.")
def write_vectorized_partitions_to_files(vectorizers, df, output_path, partition_label):
logging.info(f"Writing {partition_label} vectorized partitions to files...")
dask_tasks = []
for i, partition in enumerate(df.to_delayed()):
tfrecord_task = dask.delayed(write_partition_to_tfrecord)(partition, output_path, partition_label, i)
parquet_task = dask.delayed(write_partition_to_parquet)(partition, output_path, partition_label, i)
dask_tasks.extend([tfrecord_task, parquet_task])
dask.compute(*dask_tasks)
logging.info(f"{partition_label} vectorized partitions written to files successfully.")
def process_data(model_dir, df, output_path):
logging.info("Processing data...")
vectorizers = load_and_broadcast_vectorizers(model_dir)
if vectorizers is None:
logging.error("Data processing failed due to missing vectorizers.")
return
train_df, test_df = df.random_split([0.8, 0.2], random_state=42)
train_df_vectorized = apply_vectorizers_in_parallel(vectorizers, train_df)
test_df_vectorized = apply_vectorizers_in_parallel(vectorizers, test_df)
write_vectorized_partitions_to_files(vectorizers, train_df_vectorized, os.path.join(output_path, 'train'), 'train')
write_vectorized_partitions_to_files(vectorizers, test_df_vectorized, os.path.join(output_path, 'test'), 'test')
logging.info("Data processed successfully.")
</code></pre>
<p>Error Message:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: ('Could not serialize object of type StringLookup', '<keras.layers.preprocessing.string_lookup.StringLookup object at 0x7f7f1d79e9a0>')
</code></pre>
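<p>The TypeError says Dask could not pickle the <code>StringLookup</code> object when scattering it. One pattern that sidesteps this is broadcasting only the plain-dict config (which pickles fine) and rebuilding the layer inside each task. A pure-Python stand-in sketch, where <code>FakeVectorizer</code> plays the role of <code>StringLookup</code>/<code>TextVectorization</code>:</p>

```python
class FakeVectorizer:
    """Stand-in for a Keras preprocessing layer with get_config/from_config."""
    def __init__(self, vocab):
        self.index = {word: i for i, word in enumerate(vocab)}

    def get_config(self):
        return {"vocab": list(self.index)}  # a plain dict pickles fine

    @classmethod
    def from_config(cls, config):
        return cls(config["vocab"])

    def transform(self, values):
        return [self.index[v] for v in values]

def apply_in_partition(config, values):
    # runs on the worker: rebuild the layer from its picklable config
    vectorizer = FakeVectorizer.from_config(config)
    return vectorizer.transform(values)

config = FakeVectorizer(["a", "b", "c"]).get_config()  # built on the driver
assert apply_in_partition(config, ["c", "a"]) == [2, 0]
```

<p>For real Keras layers the config alone is not enough: a fitted <code>TextVectorization</code> or <code>StringLookup</code> also needs its vocabulary/weights shipped alongside the config and restored after <code>from_config</code>.</p>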
|
<python><tensorflow><dask><dask-delayed>
|
2023-05-31 21:12:49
| 0
| 4,359
|
scribbles
|
76,377,236
| 1,028,379
|
Setting xticks moves all bars to the left side of the figure
|
<p>I'm trying to set the x-ticks labels for every 10 years. However, when the x-ticks range is set, all of the bars are compressed to the left side of the figure.</p>
<h2>DataFrame Sample</h2>
<pre class="lang-none prettyprint-override"><code> Temperature Year
0 82 1900
1 52 1901
2 33 1902
3 91 1903
4 44 1904
</code></pre>
<h2>Code</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
# random sample data
np.random.seed(365)
df = pd.DataFrame({'Temperature': np.random.randint(0, 100, 124), 'Year': range(1900, 2024)})
# setting xticks based on the min and max year
sns.barplot(data=df, x='Year', y='Temperature')
_ = plt.xticks(range(df.Year.min(), df.Year.max(), 10))
</code></pre>
<p><a href="https://i.sstatic.net/vqBzQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vqBzQ.png" alt="enter image description here" /></a></p>
<p>How can I fix the barplot so the x-axis labels have the correct range, and the bars are positioned correctly?</p>
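<p>The cause is that <code>sns.barplot</code> draws on a categorical axis, so the bars sit at positions 0..n-1, while <code>plt.xticks(range(1900, ...))</code> places ticks at data positions 1900+, far to the right of the bars. Tick <em>positions</em> should be indices and tick <em>labels</em> the years. A sketch with bare matplotlib on the headless Agg backend:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

years = np.arange(1900, 2024)
temps = np.random.default_rng(365).integers(0, 100, len(years))

fig, ax = plt.subplots()
ax.bar(range(len(years)), temps)           # bars at positions 0..n-1
ax.set_xticks(range(0, len(years), 10))    # tick POSITIONS are indices...
ax.set_xticklabels(years[::10])            # ...tick LABELS are the years

labels = [str(t.get_text()) for t in ax.get_xticklabels()]
assert labels[:2] == ["1900", "1910"]
```

<p>The same two-argument form works after a seaborn barplot: <code>plt.xticks(range(0, len(df), 10), df.Year[::10])</code>.</p>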
|
<python><matplotlib><seaborn><bar-chart><x-axis>
|
2023-05-31 21:00:49
| 1
| 931
|
Sonny Parlin
|
76,376,899
| 2,986,042
|
How to keep two bash terminals session alive in python script?
|
<p>I want to automate a process which will run some Python scripts. I need to perform the steps below:</p>
<blockquote>
<ul>
<li>Step 1: Open bash terminal (terminal_1) and run "Source script1.sh" => If this script successfully executed, then step 2</li>
<li>Step 2: Run the another script "python script2.py" in the same bash terminal (terminal_1) => if this script is successfully run, then step 3</li>
<li>Step 3: Open another bash terminal (terminal_2) and run "python script3.py" => if this script is successful, then step 4</li>
<li>Step 4: Run another script "python script4.py" in the same terminal (terminal_2) => If this script successful. then, step 5</li>
<li>Step 5: Run final script "python script5.py" in the first terminal (terminal_1 and I need to keep the session of first terminal.), if successful the end the script.</li>
</ul>
</blockquote>
<p>This is my requirement, so I have searched for methods to do this in a Python script. I am using a Windows environment with Cygwin.</p>
<p><strong>Method 1:</strong></p>
<p>Using</p>
<pre><code>subprocess.Popen(["bash"], stderr=subprocess.PIPE,shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
</code></pre>
<p>and read the output values using <code>Thread</code> and <code>stdout.readline()</code></p>
<p><strong>Method 2:</strong></p>
<p>Using <code>pexpect.spawn</code> in python</p>
<p><strong>Method 3:</strong></p>
<p>Run the first script and write its status to a file => if it succeeded, run the next script. This will take more time.</p>
<p>I would like to know which method is suitable for my requirements. One problem here is to keep two bash terminal alive. Also please suggest if any other way is suitable here. Some examples would be much appreciated.</p>
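A minimal stdlib-only sketch of Method 1 — one persistent `bash` process per "terminal", so sourced variables and state survive across commands. The `__RC__` sentinel is an invented convention for detecting when each command finished (assumes a `bash` executable on PATH):

```python
import subprocess

class BashSession:
    """A persistent shell session; commands share state (cwd, env, sourced vars)."""

    def __init__(self):
        self.proc = subprocess.Popen(
            ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT, text=True)

    def run(self, cmd):
        # Append an exit-status sentinel so we know when the command finished.
        self.proc.stdin.write(cmd + '; echo "__RC__$?"\n')
        self.proc.stdin.flush()
        out = []
        for line in self.proc.stdout:
            if line.startswith("__RC__"):
                return int(line[len("__RC__"):]), "".join(out)
            out.append(line)

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

Two `BashSession` objects would give the two live terminals the requirements describe; checking the returned exit code before moving to the next step covers the "if successful, then" chaining.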
|
<python><bash><stdin><popen>
|
2023-05-31 20:06:17
| 0
| 1,300
|
user2986042
|
76,376,898
| 9,363,181
|
How to use git with lambda function
|
<p>I am trying to import the <code>git</code> library needed to clone the repo from gitlab. A lot of research suggested that the best way would be to add a <code>layer</code> to the lambda function. So, I created a lambda layer with the below commands:</p>
<pre><code>mkdir lambda_layers
cd lambda_layers
mkdir python
cd python
pip install gitpython -t ./
cd .. && zip -r python_modules.zip .
</code></pre>
<p>When I added the zip file to the lambda function and tried running the function with nothing but just an import statement i.e <code>import git</code> and <code>print("hello world")</code> it gave me the below <code>error</code>:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': Failed to initialize: Bad git executable.
The git executable must be specified in one of the following ways:
- be included in your $PATH
- be set via $GIT_PYTHON_GIT_EXECUTABLE
- explicitly set via git.refresh()
All git commands will error until this is rectified.
This initial warning can be silenced or aggravated in the future by setting the
$GIT_PYTHON_REFRESH environment variable. Use one of the following values:
- quiet|q|silence|s|none|n|0: for no warning or exception
- warn|w|warning|1: for a printed warning
- error|e|raise|r|2: for a raised exception
Example:
export GIT_PYTHON_REFRESH=quiet
</code></pre>
<p>So what am I missing here? My zip structure is the same as the <a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-upload" rel="nofollow noreferrer">official documentation</a> still the error.</p>
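The error is raised by GitPython at import time because a pure-Python layer ships no `git` binary. One hedged sketch of the environment variables the error message mentions — they must be set before `import git`, and the `/opt/bin/git` path is a hypothetical layout for a layer that actually bundles a git executable:

```python
import os

# These must be set BEFORE "import git"; GitPython reads them at import time.
os.environ["GIT_PYTHON_REFRESH"] = "quiet"  # don't raise if git is missing at import
# Hypothetical path: a separate layer would need to ship the git binary here.
os.environ["GIT_PYTHON_GIT_EXECUTABLE"] = "/opt/bin/git"

# import git  # would now initialize without "Bad git executable"
```

Note that silencing the warning is not enough on its own: cloning still requires a real git executable somewhere on the Lambda filesystem.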
|
<python><amazon-web-services><git><aws-lambda>
|
2023-05-31 20:06:07
| 1
| 645
|
RushHour
|
76,376,828
| 1,476,285
|
How do I run a PyQt ProgressBar dialog window in a multi-threaded environment?
|
<p>I've been working on this for the last couple of weeks. I've tried every configuration I can think of with little success. The basic structure of my actual application starts with a QSystemTrayIcon that houses QActions that run multiple scripts. Some of those scripts take 10-20 seconds to complete their tasks. It is fully functional, but I decided I wanted to try to incorporate a QDialog with a QProgressBar and later other widgets. But for the time being, I'm just starting with a clean QApplication structure to understand how to implement this.</p>
<p>I have created a worker.py containing a countdown. I would like this to run inside its own QThread. It'll send out pyqtSignals that I connect to which updates the progress window which runs in its own QThread.</p>
<p>worker.py - has two classes because using the QThreadPool needed a QRunnable, which couldn't emit signals, so I created a QObject as my emitter.</p>
<pre><code>from time import sleep
from PyQt5.QtCore import pyqtSignal, QRunnable, QObject
class Emitter(QObject):
signal_worker_count = pyqtSignal(int)
signal_finished = pyqtSignal()
class WorkerClass(QRunnable):
def __init__(self):
super().__init__()
self.max_count = None
self.emitter = Emitter()
def countdown(self):
for i in range(self.max_count, 0, -1):
self.emitter.signal_worker_count.emit(i)
sleep(1)
self.emitter.signal_finished.emit()
def run(self):
self.countdown()
</code></pre>
<p>progress.py has two classes because later I may want a single class, Progress, running multiple potential dialog designs.</p>
<p>progress.py</p>
<pre><code>import sys
from time import sleep
from PyQt5.QtCore import Qt, QRunnable, QObject
from PyQt5.QtWidgets import QApplication, QDialog, QProgressBar, QPushButton, QLabel
class Progress(QRunnable):
def __init__(self):
super().__init__()
self.bar = None
self.max_value = None
def increment_progress(self, progress_value):
print(self.bar.progress.value() + progress_value)
self.bar.increment_progress(progress_value)
def run(self):
print('starting progress bar')
new_bar = ProgressDialog()
new_bar.setup(10)
self.bar = new_bar
print('continuing progress bar')
self.bar.show()
print('should be showing progress bar')
def update_progress(self, progress_value):
self.bar.update_progress(progress_value)
class ProgressDialog(QDialog):
def __init__(self):
super().__init__()
self.progress = None
self.close_button = None
self.message_label = None
self.init_ui()
def init_ui(self):
self.setWindowTitle('Progress Bar')
self.resize(436, 150)
self.setWindowFlag(Qt.FramelessWindowHint)
self.progress = QProgressBar(self)
self.progress.setObjectName('ProgressBar')
self.progress.setGeometry(10, 10, 416, 30)
self.message_label = QLabel(self)
self.message_label.setObjectName('MessageLabel')
self.message_label.setGeometry(10, 50, 416, 30)
self.message_label.setText('Calculating...')
self.close_button = QPushButton('Close', self)
self.close_button.setObjectName('CloseButton')
self.close_button.setGeometry(300, 100, 75, 30)
self.close_button.setEnabled(True)
self.close_button.clicked.connect(self.hide)
def set_message(self, message):
self.message_label.setText(message)
def setup(self, max_value):
self.progress.maximum = max_value
def increment_progress(self, progress_value):
self.progress.setValue(self.progress.value() + progress_value)
def update_progress(self, progress_value):
self.progress.setValue(progress_value)
if __name__ == "__main__":
app = QApplication(sys.argv)
progress = Progress()
progress.run()
for i in range(10):
progress.increment_progress(10)
sleep(0.5)
app.exec_()
</code></pre>
<p>Finally, my main application, and the script that is run, is my QSystemTray object. It does expect an icon.png file.</p>
<p>system_tray.py</p>
<pre><code>import sys
from PyQt5.QtCore import QThreadPool, QThread
from PyQt5.QtWidgets import (QApplication,
QSystemTrayIcon,
QMenu,
QAction,
qApp)
from PyQt5.QtGui import QIcon
from worker import WorkerClass
from progress import Progress
class SystemTray(QSystemTrayIcon):
# Override the class constructor
def __init__(self):
self.worker_pool = QThreadPool()
self.worker_pool.setMaxThreadCount(2)
self.progress_pool = QThreadPool()
self.progress_pool.setMaxThreadCount(2)
self.progress = None
QSystemTrayIcon.__init__(self, QIcon('icon.png'))
no_progress_action = QAction("Without progress bar", self)
progress_action = QAction("With progress bar", self)
quit_action = QAction("Exit", self)
no_progress_action.triggered.connect(lambda: self.perform("no_progress"))
progress_action.triggered.connect(lambda: self.perform("progress"))
quit_action.triggered.connect(lambda: self.perform("quit"))
tray_menu = QMenu()
tray_menu.addAction(no_progress_action)
tray_menu.addSeparator()
tray_menu.addAction(progress_action)
tray_menu.addSeparator()
tray_menu.addAction(quit_action)
self.setContextMenu(tray_menu)
self.show()
def perform(self, req):
if req == "no_progress":
no_progress_worker = WorkerClass()
no_progress_worker.max_count = 10
no_progress_worker.emitter.signal_worker_count.connect(lambda current_count: print(current_count))
self.worker_pool.start(no_progress_worker)
elif req == "progress":
progress_worker = Progress()
self.progress_pool.start(progress_worker)
elif req == "quit":
self.hide()
qApp.quit()
def update_progress(self, current_count):
print(current_count)
if __name__ == "__main__":
app = QApplication(sys.argv)
main_window = SystemTray()
sys.exit(app.exec())
</code></pre>
<p>Running the system_tray.py file puts an icon in the Windows System Tray. Right-clicking yields a "without progress" option which runs the worker.py file countdown successfully. Running the "with progress" option begins the process, but freezes when the Progress Dialog tries to open.</p>
<p>Running the progress.py file on its own mostly works: the Dialog tries to show, and we see a countdown of sorts, but the window doesn't actually open until the countdown completes.</p>
<p>I realize that the full code isn't done yet in terms of the worker being linked fully to the progress dialog, but I ran into this problem before I got to that point.</p>
<p>I also tried configuring this with individual QThreads instead of a QThreadPool, with similar results. Why is my Progress Dialog not showing while my worker countdown() is printing?</p>
|
<python><multithreading><pyqt5><threadpool>
|
2023-05-31 19:52:51
| 1
| 413
|
pedwards
|
76,376,676
| 4,913,254
|
ModuleNotFoundError: No module named 'torch' when imported in a Docker container
|
<p>I want to create a Docker container for this <a href="https://github.com/broadinstitute/SpliceAI-lookup" rel="nofollow noreferrer">tool</a>. The app works locally on my machine with a Conda environment I created. To build the container, I created a new Conda env so that the installation steps match in the new env and in the container. In the new environment, I have a requirements file with all the Python libraries needed, and I use this to build the container:</p>
<pre><code>FROM python:3.6
WORKDIR /Volumes/MANUEL/SpliceAI-lookup
ADD . .
RUN pip install -r requirements.txt
# pyvcf 0.6.8 may not work if your setuptools>=58
# pip install --upgrade pip setuptools==57.5.0
CMD ./start_local_server.sh
</code></pre>
<p>Running the app on my machine the app runs ok. However, when running the image I got this</p>
<pre><code>docker run spliceai-lookup:v0
+ NUM_THREADS=1
+ HOST=127.0.0.1
+ PORT=8080
+ TIMEOUT=1800
+ gunicorn -w 1 -t 1800 -b 127.0.0.1:8080 server:app
[2023-05-31 19:13:05 +0000] [8] [INFO] Starting gunicorn 20.1.0
[2023-05-31 19:13:05 +0000] [8] [INFO] Listening at: http://127.0.0.1:8080 (8)
[2023-05-31 19:13:05 +0000] [8] [INFO] Using worker: sync
[2023-05-31 19:13:05 +0000] [11] [INFO] Booting worker with pid: 11
[2023-05-31 19:13:06 +0000] [11] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.6/site-packages/gunicorn/util.py", line 359, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Volumes/MANUEL/SpliceAI-lookup/server.py", line 18, in <module>
from pangolin.model import torch, Pangolin, L, W, AR
File "/usr/local/lib/python3.6/site-packages/pangolin/model.py", line 2, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>Why is torch not found in the image but found on my machine, when I am using the same requirements file to create the env both on my machine and in the container?</p>
|
<python><docker><pytorch><pangolin>
|
2023-05-31 19:24:01
| 1
| 1,393
|
Manolo Dominguez Becerra
|
76,376,575
| 8,749,168
|
Python setuptools exclude dependencies when installing
|
<p>Python setuptools allows you to <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies" rel="nofollow noreferrer">specify optional dependencies</a>, but does it allow you to do something in the inverse?</p>
<p>For example, let's say I have a list of dependencies in <code>my_package</code> like below:</p>
<ul>
<li>numpy</li>
<li>pandas</li>
</ul>
<p>So if I installed the package with <code>pip install my_package</code>, it would also install these two dependencies.</p>
<p>However, for certain use cases, a user may not need <code>pandas</code>. So I would want to do something like <code>pip install my_package[~pandas]</code> to instruct pip not to install pandas.</p>
<p>Is this something that is currently supported?</p>
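pip has no "negative extras" syntax like <code>my_package[~pandas]</code>; the supported way to get the inverse effect is a minimal core plus opt-in extras. A sketch of the declaration using the package names from the question (shown as the keyword arguments that would be passed to <code>setuptools.setup()</code> in a real setup.py):

```python
# Hypothetical setup() keyword arguments: pandas becomes opt-in instead of excludable.
setup_kwargs = dict(
    name="my_package",
    install_requires=["numpy"],             # always installed
    extras_require={"pandas": ["pandas"]},  # installed only via my_package[pandas]
)
# In a real setup.py: from setuptools import setup; setup(**setup_kwargs)
```

Users who need pandas run `pip install my_package[pandas]`; everyone else gets only numpy.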
|
<python><pip><setuptools>
|
2023-05-31 19:08:47
| 2
| 1,088
|
pythonweb
|
76,376,568
| 7,903,749
|
How to generate a fake HTTP request object for unit test in Python?
|
<p>In a Python Django project, we want a unit test to feed an HTTP POST request into the function under test. The <code>cURL</code> sample command below shows our test settings.</p>
<p>The <code>cURL</code> approach starts from the web server's endpoint and triggers the entire logic path, but we want the unit test to focus on a specific function taking in the <code>request</code> argument. And, we got hint from <a href="https://gist.github.com/majgis/4164503" rel="nofollow noreferrer">this post</a> with the below sample Python source code to fake a HTTP request for testing.</p>
<h4>Our Question:</h4>
<p>See the <code>-d</code> argument of the <code>cURL</code> command, we wonder how to feed the request body in the faked object.</p>
<h4>Technical Details:</h4>
<ul>
<li><strong>The <code>cURL</code> command:</strong></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>curl http://127.0.0.1:8000/api/transaction/ --insecure \
--verbose \
-d '<env:Envelope ... >
<env:Header>
...
</env:Header>
<env:Body>
...
</env:Body>
</env:Envelope>
'
</code></pre>
<ul>
<li><strong>The Python sample source code for faking a HTTP request:</strong></li>
</ul>
<pre class="lang-py prettyprint-override"><code>from django.core.handlers.wsgi import WSGIRequest
from io import StringIO
from django.contrib.auth.models import AnonymousUser
def GetFakeRequest(path='/', user=None):
""" Construct a fake request(WSGIRequest) object"""
req = WSGIRequest({
'REQUEST_METHOD': 'GET',
'PATH_INFO': path,
'wsgi.input': StringIO()})
req.user = AnonymousUser() if user is None else user
return req
</code></pre>
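One way to carry the <code>-d</code> body is to put it in <code>wsgi.input</code> as bytes and set the content headers. A stdlib-only sketch of the environ dict; in the real test this dict would be passed to <code>django.core.handlers.wsgi.WSGIRequest</code> (assumption: Django installed, as in the snippet above), whose <code>.body</code> would then equal the payload:

```python
from io import BytesIO

def make_post_environ(path="/api/transaction/", body=b"<env:Envelope>...</env:Envelope>"):
    """Build a minimal WSGI environ carrying a POST body.

    Passing this dict to WSGIRequest would yield a request whose
    .body equals `body` (assumption: Django installed)."""
    return {
        "REQUEST_METHOD": "POST",
        "PATH_INFO": path,
        "CONTENT_TYPE": "text/xml",
        "CONTENT_LENGTH": str(len(body)),
        "wsgi.input": BytesIO(body),
    }
```

Note the body must be bytes (hence `BytesIO` rather than the `StringIO` in the GET example), since Django reads raw bytes from `wsgi.input`.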
|
<python><django><unit-testing><httprequest>
|
2023-05-31 19:07:31
| 1
| 2,243
|
James
|
76,376,456
| 3,656,916
|
After docker run python starts automatically. how to stop that?
|
<p>I assemble docker image with a dockerfile:</p>
<pre><code>FROM python:3.9.13-slim
RUN mkdir /app
COPY . /app
RUN apt-get update && apt-get install -y libglib2.0-0 libgl1-mesa-glx && \
rm -rf /var/lib/apt/lists/* && \
pip install opencv-python==4.5.3.56 && \
pip install pandas==1.4.3 && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
</code></pre>
<p>but after <code>docker run -v /home/user/work/test_images/:/app -ti name:tag</code>
python3 starts instead of a shell. How do I stop that?</p>
|
<python><docker><shell>
|
2023-05-31 18:44:32
| 1
| 507
|
DDR
|
76,376,455
| 12,690,313
|
Transformers tokenizer attention mask for pytorch
|
<p>In my code I have:</p>
<pre class="lang-py prettyprint-override"><code>output = self.decoder(output, embedded, tgt_mask=attention_mask)
</code></pre>
<p>where</p>
<pre class="lang-py prettyprint-override"><code>decoder_layer = TransformerDecoderLayer(embedding_size, num_heads, hidden_size, dropout, batch_first=True)
self.decoder = TransformerDecoder(decoder_layer, 1)
</code></pre>
<p>I generate the attention mask using a huggingface's tokenizer:</p>
<pre class="lang-py prettyprint-override"><code>batch = tokenizer(example['text'], return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
inputs = batch['input_ids']
attention_mask = batch['attention_mask']
</code></pre>
<p>Running it through the models fails on</p>
<p><code>AssertionError: only bool and floating types of attn_mask are supported</code></p>
<p>Changing the attention mask to <code>attention_mask = batch['attention_mask'] .bool()</code></p>
<p>Causes</p>
<p><code>RuntimeError: The shape of the 2D attn_mask is torch.Size([4, 1024]), but should be (1024, 1024)</code></p>
<p>Any idea how I can use a huggingface tokenizer with my own pytorch module?</p>
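Both errors point at a shape/semantics mismatch: the tokenizer's `attention_mask` is a per-token padding mask of shape `(batch, seq)`, which belongs in `tgt_key_padding_mask`, while `tgt_mask` must be a square `(seq, seq)` matrix such as a causal mask. A stdlib sketch of the square mask's semantics, assuming PyTorch's bool convention where `True` means "do not attend":

```python
def causal_mask(size):
    # True marks positions a query may NOT attend to (strict upper triangle),
    # matching PyTorch's bool attn_mask convention.
    return [[j > i for j in range(size)] for i in range(size)]

mask = causal_mask(4)
```

In torch this would be `torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)` passed as `tgt_mask`, with `(batch['attention_mask'] == 0)` passed separately as `tgt_key_padding_mask` — hedged, since the exact call depends on the decoder configuration.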
|
<python><pytorch><huggingface-transformers><huggingface>
|
2023-05-31 18:44:27
| 1
| 1,341
|
Tamir
|
76,376,381
| 1,496,554
|
Why is the bump not showing up on my torus shape in 3D visualization using 'open3d' in Python?
|
<p>The code is supposed to create a bump at different angles on a 3D torus shape. I can see the torus but not the bump when visualizing with <code>open3d</code>. The expected outcome and what I got are shown in the following figure.</p>
<p><a href="https://i.sstatic.net/O8puB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O8puB.png" alt="enter image description here" /></a></p>
<p>The code is as following:</p>
<pre><code>import os, sys
from math import ceil
import numpy as np
import ipywidgets
from skimage import measure
from scipy.ndimage import zoom
from scipy.interpolate import interpn
from IPython.display import display
from einops import rearrange
import igl
from tqdm import tqdm
from sklearn.preprocessing import MinMaxScaler
from scipy import stats
import matplotlib.pyplot as plt
import pandas
import open3d as o3d
#signed distance function for torus
def sdf_torus(x, radius, thickness):
q = np.stack([np.linalg.norm(x[[0, 1]], axis=0) - radius, x[2]])
return np.linalg.norm(q, axis=0) - thickness
# Crop an n-dimensional image with a centered cropping region
def center_crop(img, shape):
start = [a // 2 - da // 2 for a, da in zip(img.shape, shape)]
end = [a + b for a, b in zip(start, shape)]
slices = tuple([slice(a, b) for a, b in zip(start, end)])
return img[slices]
# Add noise to coordinates
def gradient_noise(x, scale, strength, seed=None):
shape = [ceil(s / scale) for s in x.shape[1:]]
if seed:
np.random.seed(seed)
scalar_noise = np.random.randn(*shape)
scalar_noise = zoom(scalar_noise, zoom=scale)
scalar_noise = center_crop(scalar_noise, shape=x.shape[1:])
vector_noise = np.stack(np.gradient(scalar_noise))
return vector_noise * strength
# Generating and saving the shapes
radius=0.25
thickness=0.10
noise_scale=20
noise_strength=15
seed=50
bump_width=5
bump_height=30
for idx, bump_angle in tqdm(enumerate(np.linspace(-1, 1, 2))):
coords = np.linspace(-1, 1, 100)
x = np.stack(np.meshgrid(coords, coords, coords))
sdf = sdf_torus(x, radius, thickness)
verts, faces, normals, values = measure.marching_cubes(sdf, level=0)
print(np.min(verts), np.max(verts))
#add noise
x_warp = gradient_noise(x, noise_scale, noise_strength, seed)
#print(np.min(x_warp), np.max(x_warp))
#bump angle
angle = np.pi * bump_angle
gaussian_center = np.array([np.sin(angle), 0., np.cos(angle)]) * radius
x_dist = np.linalg.norm((x - gaussian_center[:, None, None, None]), axis=0)
x_bump = bump_height * np.e ** -(1. / bump_width * x_dist ** 2)
print(np.min(x_bump), np.max(x_bump))
x_warp += -np.stack(np.gradient(x_bump))
#print(np.min(x_warp), np.max(x_warp))
x_warp = rearrange(x_warp, 'v h w d -> h w d v')
vertex_noise = interpn([np.arange(100) for _ in range(3)], x_warp, verts)
verts += vertex_noise
print(np.min(verts), np.max(verts))
igl.write_triangle_mesh(f"torus_bump_500/torus_bump_{idx}.ply", verts, faces)
</code></pre>
<p>For visualizing I use the following code:</p>
<pre><code>pcd.compute_vertex_normals()
pcd.paint_uniform_color([0.8, 0.8, 0.8])
pcd = o3d.io.read_triangle_mesh('torus_bump_500/torus_bump_1.ply')
o3d.visualization.draw_geometries([pcd])
</code></pre>
<p>I tried using different width and height of the bump but it is not showing up. It seems like the <code>x_bump</code> is not adding any effect.</p>
<p><strong>Important Note: I need torus shapes with bumps at different angles (one bump per torus). The shapes should have the same number of vertices and faces. I could have implemented different libraries to get the desired shapes using union but I need the desired shape only by distorting some vertices from the torus.</strong></p>
<p>I can visualize using <code>meshplot library</code> but interactive change is not working. I could not find the reason. If it would have been worked I could have seen the effects of the change of various parameters. The code using <code>meshplot</code> is like following:</p>
<pre><code># I have used jupyter notebook to run the code.
import meshplot as mp
# Meshplot left an annoying print statement in their code. Using this context manager to supress it...
class HiddenPrints:
def __enter__(self):
self._original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
def __exit__(self, exc_type, exc_val, exc_tb):
sys.stdout.close()
sys.stdout = self._original_stdout
plot=None
@mp.interact(
radius=(0, 0.5, 0.01),
thickness=(0.01, 0.25, 0.01),
noise_scale=(0.0, 20, 1),
noise_strength=(0.0, 5, 1),
seed=(1, 100),
bump_angle=(-1., 1., 0.01),
bump_width=(0.01, 0.02, 0.001),
bump_height=(0.01, 50.),
)
def show(radius, thickness, noise_scale, noise_strength, seed, bump_angle, bump_width, bump_height):
global plot
coords = np.linspace(-1, 1, 100)
x = np.stack(np.meshgrid(coords, coords, coords))
sdf = sdf_torus(x, radius, thickness)
verts, faces, normals, values = measure.marching_cubes(sdf, level=0)
x_warp = gradient_noise(x, noise_scale, noise_strength, seed)
print(x_warp.shape)
angle = np.pi * bump_angle
gaussian_center = np.array([np.sin(angle), 0., np.cos(angle)]) * radius
print(gaussian_center.shape)
x_dist = np.linalg.norm((x - gaussian_center[:, None, None, None]), axis=0)
print(x_dist.shape)
x_bump = bump_height * np.e ** -(1. / bump_width * x_dist ** 2)
print(x_bump.shape)
x_warp += -np.stack(np.gradient(x_bump))
x_warp = rearrange(x_warp, 'v h w d -> h w d v')
vertex_noise = interpn([np.arange(100) for _ in range(3)], x_warp, verts)
verts += vertex_noise
if plot is None:
plot = mp.plot(verts, faces, return_plot=True)
else:
with HiddenPrints():
plot.update_object(vertices=verts, faces=faces)
display(plot._renderer)
</code></pre>
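One thing worth checking is how flat the Gaussian term is at these parameters. A small stdlib sketch of the bump profile used above (`bump_height=30`, `bump_width=5`):

```python
import math

def bump(dist, height=30.0, width=5.0):
    # Same expression as x_bump in the script: height * e^(-(1/width) * dist^2)
    return height * math.exp(-(1.0 / width) * dist ** 2)

# Over the coordinate range [-1, 1] the distance to the bump centre never
# exceeds ~2, so with width=5 the profile barely decays -- and its gradient
# (the actual displacement applied to the vertices) stays correspondingly small.
ratio = bump(2.0) / bump(0.0)   # e^(-0.8), i.e. the bump only falls to ~45%
```

If that is the issue, a much smaller `bump_width` (so the Gaussian is narrow relative to the torus) should make the displacement gradient, and hence the bump, visible — hedged, since the interplay with the marching-cubes vertex coordinates also matters.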
|
<python><graphics><3d><shapes><mesh>
|
2023-05-31 18:30:07
| 1
| 481
|
Jakaria Rabbi
|
76,376,344
| 960,400
|
SqlAlchemy concat two columns for `ilike` query
|
<p>I'm trying to recreate the following (PostgreSQL) query in SqlAlchemy:</p>
<pre><code>select
u.first_name,
    u.last_name
from users u
where ((u.first_name ||' '|| u.last_name) ilike '%Mark Zuckerberg%')
</code></pre>
<p>Essentially I'm concatenating the two columns then searching by the full name, which in my case is passed in via a user query.</p>
<p>Is there a way to do this in SqlAlchemy?</p>
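In SQLAlchemy the concatenation can be written directly on the columns, e.g. `.filter((User.first_name + ' ' + User.last_name).ilike(f'%{term}%'))` — hedged, since the `User` model name is assumed from the question. The underlying SQL pattern is easy to verify with the stdlib's sqlite3, whose `LIKE` is case-insensitive for ASCII, much like PostgreSQL's `ilike`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT, last_name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Mark", "Zuckerberg"), ("Ada", "Lovelace")],
)
# Concatenate the two columns, then match the full name case-insensitively.
rows = conn.execute(
    "SELECT first_name, last_name FROM users "
    "WHERE (first_name || ' ' || last_name) LIKE ?",
    ("%mark zuckerberg%",),
).fetchall()
```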
|
<python><flask><sqlalchemy><flask-sqlalchemy>
|
2023-05-31 18:24:02
| 1
| 661
|
James Rasmussen
|
76,376,233
| 1,946,418
|
make a web request to "about:internet" via cmdline (ping/curl/powershell, etc.)
|
<p>So we are seeing a lot of web requests sent to "about:internet" in our product, but have no idea how they are generated.</p>
<p>I mapped a legit IP address to <code>about:internet</code> in the <code>hosts</code> file, but <code>ping about:internet</code> doesn't seem to work.</p>
<pre><code>C:\Users\admin>ping about:internet
Ping request could not find host about:internet. Please check the name and try again.
</code></pre>
<p>but if I ping the IP address directly, it works good</p>
<pre><code>C:\Users\admin>ping 100.705.033.940
Pinging 100.705.033.940 with 32 bytes of data:
Reply from 100.705.033.940: bytes=32 time=2ms TTL=128
Reply from 100.705.033.940: bytes=32 time<1ms TTL=128
Reply from 100.705.033.940: bytes=32 time<1ms TTL=128
Reply from 100.705.033.940: bytes=32 time<1ms TTL=128
Ping statistics for 100.705.033.940:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 2ms, Average = 0ms
</code></pre>
<p>Hoping someone would know how to send a web request [via cmdline] to <code>about:internet</code>. TIA</p>
|
<python><powershell><curl><ping>
|
2023-05-31 18:06:57
| 1
| 1,120
|
scorpion35
|
76,376,207
| 5,140,756
|
How to structure a Python project to have Unit Tests
|
<p>I am facing difficulties getting the module references to resolve in the unit test files.</p>
<p>This is my level 1:</p>
<pre><code>$ backend-app
├── Dockerfile
├── __init__.py
├── app
├── credentials-dev.json
├── requirements.txt
├── tests
└── venv
</code></pre>
<p>This is my Unit <code>tests</code> directory:</p>
<pre><code>$ backend-app.tests
├── __init__.py
├── integration
│   ...
└── unit
    ├── __init__.py
    └── util
        ├── __init__.py
        └── test_convert_utils.py
</code></pre>
<p>This is the module (<code>convert_utils.py</code>) file I want to access to test:</p>
<pre><code>$ backend-app.app
├── __init__.py
├── main.py
├── resources
│   ├── config-dev.yaml
│   └── credentials-dev.json
└── src
    ├── impl
    │   ├── __init__.py
    │   ├── __pycache__
    │   ├── transaction_impl.py
    │   └── transaction_validation.py
    ├── model
    │   ├── __init__.py
    │   ├── __pycache__
    │   └── transaction.py
    ├── service
    │   ...
    └── util
        ├── __init__.py
        └── convert_utils.py
</code></pre>
<p><strong>Question</strong>: how to reference the <code>convert_utils.py</code> inside the <code>test_convert_utils.py</code> by relative or absolute path? As it is, I haven't found any way to achieve that.</p>
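One common fix is to put the project root on <code>sys.path</code> before importing (a <code>conftest.py</code> at the repo root, or installing the package in editable mode, are cleaner long-term alternatives). A sketch of the path arithmetic: <code>tests/unit/util/test_convert_utils.py</code> sits four directory levels below <code>backend-app</code>:

```python
import os
import sys

def project_root(test_file, levels_up=4):
    """Walk up from a test file to the directory that contains app/ and tests/."""
    path = os.path.abspath(test_file)
    for _ in range(levels_up):
        path = os.path.dirname(path)
    return path

# Hypothetical usage at the top of test_convert_utils.py:
# sys.path.insert(0, project_root(__file__))
# from app.src.util.convert_utils import some_function  # assumed function name
```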
|
<python><pytest><python-packaging><pythonpath>
|
2023-05-31 18:02:43
| 0
| 4,283
|
Augusto
|
76,376,203
| 1,971,246
|
List Comprehension Using any() Creating Multiple Entries in Pandas
|
<p>I have a scenario where I have created a list of keywords, and I'm iterating over the rows of a dataframe to determine a column value based on whether another column contains any words from my keyword list. Here is an example:</p>
<pre><code>kwrds = ['dog', 'puppy', 'golden retriever']
df = pd.DataFrame({
'description': ['This is a puppy', 'This is a dog', 'This is a golden retriever type dog', 'This is a cat', 'this is a kitten'],
'name': ['Rufus', 'Dingo', 'Rascal', 'MewMew', 'Jingles'],
    'species': [''] * 5})  # placeholder column, same length as the others
for i,r in df.iterrows():
if any([x in r['description'] for x in kwrds]):
df.at[i, 'species'] = 'Canine'
else:
df.at[i, 'species'] = 'Feline'
</code></pre>
<p>The looping itself seems to work fine, however I am running into an issue where sometimes the species column will end up with multiple entries like</p>
<pre><code>CanineCanineCanineCanine
</code></pre>
<p>Where other times it will work fine.</p>
<p>From what I understand the list comprehension itself should only return a true or false value. It almost seems like the row is getting iterated over multiple times, but with the same index, so the entry is created over and over.</p>
<p>The problem I'm thinking with that thought though is that it is not happening for every row in the dataframe. Only some, and generally always towards the end of the dataframe.</p>
<p>I'm not even sure where to start on trying to diagnose this issue.</p>
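For what it's worth, the `any(...)` check itself does return a single bool per row; a pure-Python sketch of the per-row logic (a generator is enough — no list needed):

```python
kwrds = ['dog', 'puppy', 'golden retriever']

def classify(description, keywords=kwrds):
    # any() short-circuits over the generator and yields exactly one bool
    return 'Canine' if any(k in description for k in keywords) else 'Feline'

labels = [classify(d) for d in [
    'This is a puppy', 'This is a cat', 'This is a golden retriever type dog']]
```

If the duplicated `CanineCanineCanine...` persists, one hedged guess is that the same label is being written repeatedly to rows sharing a non-unique index; checking `df.index.is_unique` before the loop is a quick diagnostic.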
|
<python><pandas><list-comprehension>
|
2023-05-31 18:01:54
| 1
| 415
|
William
|
76,376,199
| 1,946,418
|
Change log file half way thru execution
|
<p>Tech: Python 3.11.3</p>
<pre class="lang-py prettyprint-override"><code>import logging
logging.basicConfig(
level=logging.DEBUG,
filename="logFilePath.log",
format=loggingFormat,
datefmt="%Y-%m-%d %I:%M:%S %p",
style="%",
)
</code></pre>
<p>have a <code>my_logger.py</code> class that has this snippet in its <code>__init__</code>, and I made sure to initialize <code>my_logger</code> at the very beginning of the module's entry point. I use this <code>logger</code> in several classes and it writes to file and console. Life is good.</p>
<p>But now I want to separate the log files for each "task" that the script does. Something like this</p>
<pre class="lang-py prettyprint-override"><code>for task in ["task1", "task2", "task3"]:
logger.info(f"this line is written to {task}.log file")
</code></pre>
<p>so there will be 3 different log files - <code>task1.log</code>, <code>task2.log</code>, <code>task3.log</code>, and so on</p>
<p>but can't really figure out how to set this up. I tried setting just <code>logging.basicConfig(filename=task)</code> at the beginning of each for loop, but that doesn't seem to work. Any ideas anyone?</p>
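`logging.basicConfig` only configures the root logger once; subsequent calls are silently ignored, which is why resetting `filename` per loop iteration does nothing. One sketch attaches a fresh `FileHandler` per task instead (the log-directory layout is an assumption):

```python
import logging
import os

def get_task_logger(task, log_dir="."):
    """Return a logger writing to <log_dir>/<task>.log (one handler per task)."""
    logger = logging.getLogger(task)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # avoid stacking handlers on repeated calls
        handler = logging.FileHandler(os.path.join(log_dir, f"{task}.log"))
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger

# Hypothetical usage matching the loop in the question:
# for task in ["task1", "task2", "task3"]:
#     get_task_logger(task).info(f"this line is written to {task}.log")
```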
|
<python><python-3.x>
|
2023-05-31 18:01:09
| 1
| 1,120
|
scorpion35
|
76,376,188
| 188,473
|
How to generate uuid3 or uuid5 in spark/databricks
|
<p>What is the preferred (i.e. performant) method to generate UUID3 (or uuid5) strings in an Apache Spark context? In particular, this is within a pyspark structured streaming job, though alternatives to that could be entertained if needs be.</p>
<p>So far I've been able to generate UUIDs with the <a href="https://docs.databricks.com/sql/language-manual/functions/uuid.html" rel="nofollow noreferrer">databricks runtime's builtin method</a>, but it only provides the UUID4 (?), random version. I now have a use-case where I want to be able to consistently generate the same UUID from a given <code>(namespace, name)</code> pair, which is conveniently <a href="https://docs.python.org/3/library/uuid.html#uuid.uuid3" rel="nofollow noreferrer">supported by python</a>.</p>
<p>Is doing this with a pandas UDF likely the best bet? How should I think about the options here. I'd like to balance introducing new components with performance, so something like a Scala UDF seems like a last resort at this point.</p>
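Python's stdlib `uuid.uuid3`/`uuid.uuid5` are deterministic and pure-Python, so wrapping them in a pandas UDF (or even a plain `udf`) is a reasonable starting point. A sketch of the deterministic core — the namespace value is an assumption; any fixed UUID works as long as it is stable across jobs:

```python
import uuid

# Deterministic: the same (namespace, name) pair always yields the same UUID.
NS = uuid.uuid5(uuid.NAMESPACE_DNS, "my-pipeline.example.com")  # hypothetical namespace

def stable_id(name: str) -> str:
    return str(uuid.uuid5(NS, name))

# Hypothetical pyspark wrapper (untested sketch):
# from pyspark.sql.functions import udf
# stable_id_udf = udf(stable_id)

a = stable_id("order-123")
b = stable_id("order-123")
```

Because the function is deterministic and side-effect free, it is safe under Spark task retries — the same input row always gets the same ID.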
|
<python><apache-spark><databricks><uuid>
|
2023-05-31 18:00:08
| 0
| 5,107
|
Ben
|
76,376,183
| 2,597,213
|
Python 3d line interpolation to increase the line resolution
|
<p>I'm building a Python module to model wind turbine blades. The blades are defined as 2D profiles in <code>X, Y</code> space along a <code>Z</code> axis; I have already done that, and each profile has the same number of points. The problem is that I want to create an <code>STL</code> file for the blade. My idea is to generate a surface from the profiles using triangulation (I don't know if this is the best solution), but the airfoil profiles are too far from each other for triangulation to work well, so I want to increase the resolution of the blade by adding points along <code>Z</code>.</p>
<p>in this picture you can see the position of profiles:</p>
<p><a href="https://i.sstatic.net/u1ZrV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u1ZrV.png" alt="enter image description here" /></a></p>
<p>And here I reformat the data to connect profile points in the Z direction:</p>
<p><a href="https://i.sstatic.net/tHcVG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tHcVG.png" alt="enter image description here" /></a></p>
<p>I think I can use some 3D interpolation to add <code>n_points</code> to each line in the <code>Z</code> direction (picture above). To do this I've tried <code>splprep</code> and <code>splev</code>; here is the code I use:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splprep, splev
edge = np.array(edge)
x = edge[:, 0]
y = edge[:, 1]
z = edge[:, 2]
z_points = np.linspace(z[0], z[-1], n_points)
tck, u = splprep([x, y, z], s=0)
x_knots, y_knots, z_knots = splev(u, tck)
new_points = splev(z_points, tck)
fig2 = plt.figure(2)
ax3d = fig2.add_subplot(111, projection="3d")
ax3d.plot(new_points[0], new_points[1], new_points[2], "g")
ax3d.plot(x, y, z, "bo")
fig2.show()
plt.show()
</code></pre>
<p>but the result is something like this for every line, where the spline goes far from the limits:</p>
<p><a href="https://i.sstatic.net/YILI3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YILI3.png" alt="enter image description here" /></a></p>
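<p>One likely cause of the stray curve (a hypothesis, not a confirmed diagnosis): <code>splev</code> expects values of the spline <em>parameter</em> <code>u</code>, which <code>splprep</code> maps onto <code>[0, 1]</code>, not raw <code>z</code> coordinates — so passing <code>z_points</code> evaluates the spline far outside its domain. A hedged sketch of resampling the parameter instead, on a stand-in polyline:</p>

```python
import numpy as np
from scipy.interpolate import splprep, splev

# A short 3D polyline standing in for one blade edge.
t = np.linspace(0.0, 1.0, 8)
x, y, z = np.cos(t), np.sin(t), 10.0 * t

tck, u = splprep([x, y, z], s=0)
# Resample the spline *parameter* u (which splprep maps to [0, 1]),
# not the raw z coordinates.
u_new = np.linspace(0.0, 1.0, 50)
x_new, y_new, z_new = splev(u_new, tck)
```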
<p>A sample data can be found here in <a href="https://drive.google.com/file/d/1J7eeJbIEftTJ8680UvkjBiocMHSvdUJB/view?usp=sharing" rel="nofollow noreferrer">this link</a> that can be imported and viewed with the following code that produces the 2 picture:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
edges = np.load("edges.npy")
ax = fig.add_subplot(111, projection="3d")
for edge in edges:
ax.plot(edge[:, 0], edge[:, 1], edge[:, 2], "-.")
ax.set_aspect("equal")
ax.set_axis_off()
plt.show()
</code></pre>
|
<python><interpolation>
|
2023-05-31 17:59:16
| 1
| 4,885
|
efirvida
|
76,376,127
| 2,562,058
|
plt.show() causes the jupyter console to freeze if it is run from a script
|
<p>I am not sure whether this is a bug or a misunderstanding on my part.
If it is the former, please let me know and I will use the appropriate issue tracker.</p>
<p>Consider the following <code>./myscript.py</code> :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
tf = 20
f0 = 1
fs = 4*np.pi*f0
t = np.arange(0, tf, 1/fs)
fig, ax = plt.subplots()
ax.plot(t, np.sin(2*np.pi*f0*t))
plt.show()
</code></pre>
<p>If I run the above script through a <code>jupyter console</code> through the following command:</p>
<pre><code>In [1]: run -i ./myscript.py
</code></pre>
<p>then the console freezes and I have to press <code><c-c></code> and restart the kernel before I can continue working.
If I modify the script by removing the last line and run the following:</p>
<pre><code>In [1]: run -i ./myscript.py
In [2]: plt.show()
</code></pre>
<p>then things work perfectly, but it is very annoying to manually call <code>plt.show()</code> as shown in the latter example.</p>
<p>How to solve the problem?</p>
<p>EDIT: More details. After I stopped the kernel with <code><c-c></code> and I try to run any command from jupyter console, I get the following error message:</p>
<pre><code>/opt/homebrew/Caskroom/miniconda/base/envs/manim_ce/lib/python3.11/site-packages/jupyter_console/ptshell.py:787: UserWarning: The kernel did
not respond to an is_complete_request. Setting `use_kernel_is_complete` to False.
warn('The kernel did not respond to an is_complete_request. '
</code></pre>
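<p>One workaround worth trying (a sketch, not a guaranteed fix for the kernel hang): make the call non-blocking in the script, either with <code>plt.show(block=False)</code> or by enabling interactive mode with <code>plt.ion()</code>, and enable GUI event-loop integration in the console first with the <code>%matplotlib</code> magic:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere; drop this line in the console
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(0, 20, 1 / (4 * np.pi))
fig, ax = plt.subplots()
ax.plot(t, np.sin(2 * np.pi * t))
plt.show(block=False)  # returns immediately instead of blocking the kernel
```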
|
<python><matplotlib><jupyter><jupyter-console>
|
2023-05-31 17:50:49
| 1
| 1,866
|
Barzi2001
|
76,376,097
| 9,776,699
|
Extract value from a string based on certain key value pairs
|
<p>I have some data I am pulling from JIRA that has data in the below format.</p>
<pre><code>comment text is: [{'type': 'paragraph', 'content': [{'type': 'text', 'text': 'In conversation with the customer '}, {'type': 'mention', 'attrs': {'id': '04445152', 'text': '@Kev', 'accessLevel': ''}}, {'type': 'text', 'text': ' Text 123}]}]
comment text is: [{'type': 'paragraph', 'content': [{'type': 'text', 'text': '@xyz Text abc'}]}]
comment text is: [{'type': 'paragraph', 'content': [{'type': 'mention', 'attrs': {'id': '3445343', 'text': '@Hey', 'accessLevel': ''}}, {'type': 'text', 'text': ' FYI'}]}]
comment text is: [{'content': [{'text': 'Output: ', 'type': 'text'}, {'type': 'hardBreak'}, {'type': 'hardBreak'}, {'text': "New Text goes here", 'type': 'text'}], 'type': 'paragraph'}]
</code></pre>
<p>I would like to extract all data that have key value of text and also concat if there are multiple such values in the same row. Given below is the expected output</p>
<p>Expected output:</p>
<pre><code>In conversation with the customer @Kev Text 123
@xyz Text abc
@Hey FYI
Output: New Text goes here
</code></pre>
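<p>Since the rows are nested lists and dicts rather than flat strings, a recursive walk that collects every value stored under a <code>'text'</code> key (including the ones inside <code>'attrs'</code>) is simpler than a regex. A sketch, shown on the simpler sample rows:</p>

```python
def collect_text(node):
    """Recursively gather every string stored under a 'text' key."""
    out = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "text" and isinstance(value, str):
                out.append(value)
            else:
                out.extend(collect_text(value))
    elif isinstance(node, list):
        for item in node:
            out.extend(collect_text(item))
    return out

comment = [{'type': 'paragraph',
            'content': [{'type': 'text', 'text': '@xyz Text abc'}]}]
print("".join(collect_text(comment)))  # @xyz Text abc
```

<p>If the column stores these structures as string literals rather than Python objects, <code>ast.literal_eval</code> can parse them first.</p>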
|
<python><pandas><regex>
|
2023-05-31 17:46:28
| 1
| 1,571
|
Kevin Nash
|
76,376,033
| 1,185,242
|
How do I differentiate self-intersection vs. self-touching in shapely?
|
<p>How can I determine if a polygon is self-touching(!) like the blue one, but not self-intersecting like the red one</p>
<p><a href="https://i.sstatic.net/T6CbV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T6CbV.png" alt="enter image description here" /></a></p>
<p>Vertices for the two polygons shown in the images is here:</p>
<pre><code>from shapely.geometry import Polygon
from shapely.affinity import translate
import pylab
pylab.figure()
points = [[0, 0], [0, 100], [50, 80], [0, 60], [50, 30]]
polygon = Polygon(points)
bounds = polygon.bounds
x,y = polygon.exterior.xy
pylab.plot(x,y, 'b')
pylab.plot(0, 60, 'ko')
points = [[0, 0], [0, 100], [50, 80], [-10, 60], [50, 30]]
polygon = Polygon(points)
polygon = translate(polygon, xoff=70)
x,y = polygon.exterior.xy
pylab.plot(x,y, 'r')
pylab.axis('equal')
pylab.grid(True)
pylab.show()
</code></pre>
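<p>One way to probe the difference (a sketch; the exact wording of the validity message comes from GEOS and may vary by version): both polygons fail OGC validity, but <code>shapely.validation.explain_validity</code> typically reports "Ring Self-intersection" for a ring that merely touches itself at a point and "Self-intersection" for edges that actually cross:</p>

```python
from shapely.geometry import Polygon
from shapely.validation import explain_validity

# The question's two rings: one touches itself at (0, 60), one crosses.
touching = Polygon([[0, 0], [0, 100], [50, 80], [0, 60], [50, 30]])
crossing = Polygon([[0, 0], [0, 100], [50, 80], [-10, 60], [50, 30]])

print(explain_validity(touching))
print(explain_validity(crossing))
```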
|
<python><computational-geometry><shapely>
|
2023-05-31 17:36:00
| 1
| 26,004
|
nickponline
|
76,376,023
| 653,397
|
Flask Application on Azure App Service throwing "405 Method Not Allowed" error
|
<p>I have a simple Flask app, shown below, which is modularized using Flask's Blueprint functionality. On local deployment and testing the app runs without any issue and I can see the expected output, but when the app is deployed on the Azure App Service and the request is sent from either Postman or Python, I get the error below. Can anyone tell me what I am missing?</p>
<pre><code><!doctype html>
<html lang=en>
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
</code></pre>
<p>Below are relevant data</p>
<p><strong>App structure</strong></p>
<pre><code>/root
|- app.py
|- routes
| |- test.py
|- send_request.py
|- requirements.txt
</code></pre>
<p><strong>test.py</strong></p>
<pre><code>from flask import Blueprint, request, jsonify

route = Blueprint("test", __name__, url_prefix="/test")

@route.route("/", methods=["POST"])
def test():
    data = request.json["test"]
    print(data)
    return jsonify(data)
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask

from routes import test

app = Flask(__name__)
app.register_blueprint(test.route)

if __name__ == "__main__":
    app.run(debug=True)
</code></pre>
<p><strong>send_request.py</strong></p>
<pre><code>import requests
import json
url = "https://<app>.azurewebsites.net/test"
payload = json.dumps({
"test": "Hello World!"
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
</code></pre>
|
<python><azure><flask><azure-web-app-service>
|
2023-05-31 17:34:18
| 1
| 1,930
|
Atinesh Singh
|
76,375,993
| 3,826,115
|
Get rolling average of a Pandas DataFrame with hourly values, while taking into account cyclical nature of days
|
<p>Lets say I have a dataframe with a multiindex, constructed as follows:</p>
<pre><code>import numpy as np
import pandas as pd
ids = ['a', 'b', 'c']
hours = np.arange(24)
data = np.random.random((len(ids),len(hours)))
df = pd.concat([pd.DataFrame(index = [[id]*len(hours), hours], data = {'value':data[ind]}) for ind, id in enumerate(ids)])
df.index.names = ['ID', 'hour']
</code></pre>
<p>Which looks like this:</p>
<pre><code> value
ID hour
a 0 0.020479
1 0.059987
2 0.053100
3 0.406198
4 0.452231
...
c 19 0.150493
20 0.617098
21 0.377062
22 0.196807
23 0.954401
</code></pre>
<p>What I want to do is get a new 24-hour timeseries for each station, but calculated with a 5-hour rolling average.</p>
<p>I know I can do something like <code>df.rolling(5, center = True, on = 'hour')</code>, but the problem with this is that it doesn't take into account the fact that the hours are cyclical - i.e., the rolling average for hour 0 should be the average of hours 22, 23, 0, 1, and 2.</p>
<p>What is a good way to do this?</p>
<p>Thanks!</p>
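<p>One approach (a sketch): pad each station's 24 values with the wrap-around hours from both ends, take the ordinary centred rolling mean, then trim the padding back off:</p>

```python
import numpy as np
import pandas as pd

def cyclic_rolling_mean(s: pd.Series, window: int = 5) -> pd.Series:
    half = window // 2
    # Prepend the last `half` hours and append the first `half`, so the
    # window centred on hour 0 sees hours 22, 23, 0, 1, 2.
    padded = pd.concat([s.iloc[-half:], s, s.iloc[:half]])
    return padded.rolling(window, center=True).mean().iloc[half:-half]

hours = np.arange(24)
s = pd.Series(np.sin(2 * np.pi * hours / 24), index=hours)
smoothed = cyclic_rolling_mean(s)
```

<p>Applied per station, this would look like <code>df.groupby('ID')['value'].apply(cyclic_rolling_mean)</code>.</p>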
|
<python><pandas>
|
2023-05-31 17:29:45
| 2
| 1,533
|
hm8
|
76,375,738
| 11,143,781
|
Tensorflow - Found an unshardable source dataset in image_dataset_from_directory()
|
<p>I am training a CNN model as follows:</p>
<pre><code>train_ds = image_dataset_from_directory(
self.data_dir,
validation_split=self.val,
subset="training",
seed=1,
image_size=(self.height, self.width),
batch_size=BATCH_SIZE)
val_ds = image_dataset_from_directory(
self.data_dir,
validation_split=self.val,
subset="validation",
seed=1,
image_size=(self.height, self.width),
batch_size=BATCH_SIZE)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
model = Sequential([
Conv2D(32, 3, padding='same'),
PReLU(),
MaxPooling2D(),
Flatten(),
Dense(64),
PReLU(),
Dense(num_classes)])
optimizer = Adam()
model.compile(optimizer=optimizer,
loss=SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=100,
)
</code></pre>
<p>However, I get the following error, and the script terminates:</p>
<pre><code>W tensorflow/core/grappler/optimizers/data/auto_shard.cc:786] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_1"
op: "TensorSliceDataset"
</code></pre>
<p>I found <a href="https://stackoverflow.com/questions/72740907/tensorflow-cant-apply-sharing-policy-file-when-using-mirrored-strategy">this</a> question, but the answer works for <code>tf.data.Datasets</code>, and I wonder if there is a solution for <code>image_dataset_from_directory()</code>. <a href="https://stackoverflow.com/questions/63458668/tensorflow-image-dataset-from-directory-for-input-dataset-and-output-dataset">This</a> answer proposes using <code>from_tensor_slices()</code>, but I don't know how to apply it to a multi-class directory. Thanks in advance for your help.</p>
|
<python><tensorflow><multi-gpu>
|
2023-05-31 16:48:54
| 0
| 316
|
justRandomLearner
|
76,375,697
| 20,122,390
|
Why is this python regular expression not ignoring accents?
|
<p>I am using the following regular expression for a filter of an application that connects to a MongoDB database:</p>
<pre><code>{"$regex": re.compile(r'\b' + re.escape(value) + r'\b', re.IGNORECASE | re.UNICODE)}
</code></pre>
<p>The regular expression meets my search criteria however I have a problem and that is that it does not ignore accents. For example:</p>
<p>The database entry is: "Escobar, el <strong>patrรณn</strong> del mal Colombia historia".</p>
<p>And I search for "El <strong>patron</strong>".</p>
<p>I do not get any result because the accent on the letter "o" prevents the match. How can I fix this? I thought the re.UNICODE flag would take care of it.</p>
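<p>For reference, <code>re.UNICODE</code> only affects how character classes such as <code>\w</code> are interpreted; it does not make <code>o</code> match <code>ó</code>. One stdlib approach (a sketch) is to strip accents before comparing — although for MongoDB the cleaner fix is usually a diacritic-insensitive collation on the query itself:</p>

```python
import unicodedata

def strip_accents(text: str) -> str:
    # NFKD decomposes 'ó' into 'o' plus a combining accent mark,
    # which is then dropped.
    return "".join(
        ch for ch in unicodedata.normalize("NFKD", text)
        if not unicodedata.combining(ch)
    )

print(strip_accents("Escobar, el patrón del mal"))  # Escobar, el patron del mal
```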
|
<python><regex><unicode>
|
2023-05-31 16:44:15
| 1
| 988
|
Diego L
|
76,375,649
| 1,259,374
|
Sum of numeric values and +/- signs as string algorithm
|
<p>So I have this string:</p>
<pre><code>s = 'one+one-two-one+two'
</code></pre>
<p>And I need to calculate the sum of this.</p>
<p>This is what I have tried:</p>
<pre><code>s = 'one+one-two-one+two'
d = {'one': 1, 'two': 2}
add = s.split('+')
result = 0
for i in range(len(add)):
    val = d.get(add[i], None)
    if val:
        result += val
    else:
        sub = add[i].split('-')
        for j in range(len(sub)):
            if j > 0:
                result -= d[sub[j]]
            else:
                result += d[sub[j]]
print(result)
</code></pre>
<p>If the input changes from <code>'one+one-two-one+two'</code> to <code>'one+-one-two-one+two'</code>, my approach no longer works.</p>
<p>Any suggestions ?</p>
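<p>One compact alternative (a sketch): tokenise the string into (signs, word) pairs with a regex, and let an odd number of minus signs flip the sign, which also handles mixed runs like <code>+-one</code>:</p>

```python
import re

def word_sum(s: str, values: dict) -> int:
    total = 0
    # Each token is an optional run of +/- signs followed by a word;
    # an odd number of minus signs flips the sign (so '+-one' is -1).
    for signs, word in re.findall(r'([+-]*)(\w+)', s):
        sign = -1 if signs.count('-') % 2 else 1
        total += sign * values[word]
    return total

d = {'one': 1, 'two': 2}
print(word_sum('one+one-two-one+two', d))   # 1
print(word_sum('one+-one-two-one+two', d))  # -1
```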
|
<python>
|
2023-05-31 16:36:14
| 2
| 1,139
|
falukky
|
76,375,607
| 1,714,385
|
How to impute missing rows with mean in pandas?
|
<p>I have a dataframe with datetimes separated by 1 hour as indexes. However, sometimes a row is missing. Something like the example below, where there is no row for the datetime <code>2019-01-01 04:00:00</code>:</p>
<pre><code> price_eur
datetime
2019-01-01 00:00:00 51.0
2019-01-01 01:00:00 46.27
2019-01-01 02:00:00 39.78
2019-01-01 03:00:00 20.0
2019-01-01 05:00:00 22.0
</code></pre>
<p>I want to impute the missing rows by taking the average of the elements of the rows immediately surrounding the missing one, i.e. I want to obtain the following dataframe:</p>
<pre><code> price_eur
datetime
2019-01-01 00:00:00 51.0
2019-01-01 01:00:00 46.27
2019-01-01 02:00:00 39.78
2019-01-01 03:00:00 20.0
2019-01-01 04:00:00 21.0
2019-01-01 05:00:00 22.0
</code></pre>
<p>I know that I could use the resample method to impute the missing value either with the value preceding the missing one or the value following it, like so,</p>
<pre><code>prices_df.resample('1H').fillna('pad',limit=1)
</code></pre>
<p>but I'm not sure how to impute with the mean. Can anybody help?</p>
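<p>A sketch using the question's data: resample to an hourly grid so the missing rows appear as NaN, then linearly interpolate — for a single missing hour that is exactly the mean of its two neighbours:</p>

```python
import pandas as pd

idx = pd.to_datetime([
    "2019-01-01 00:00", "2019-01-01 01:00", "2019-01-01 02:00",
    "2019-01-01 03:00", "2019-01-01 05:00",
])
prices_df = pd.DataFrame({"price_eur": [51.0, 46.27, 39.78, 20.0, 22.0]}, index=idx)

# asfreq inserts the missing 04:00 row as NaN; linear interpolation then
# fills it with the average of the 03:00 and 05:00 values.
filled = prices_df.resample("1h").asfreq().interpolate(method="linear")
```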
|
<python><pandas>
|
2023-05-31 16:30:35
| 1
| 4,417
|
Ferdinando Randisi
|
76,375,592
| 2,171,348
|
Which launch config does the debug button at the top-right with the VS Code Python extension use?
|
<p>When I open a .py file in VS Code, a debug button shows up at the top-right corner of the window.</p>
<p><a href="https://i.sstatic.net/xP76q.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xP76q.jpg" alt="enter image description here" /></a></p>
<p>I have a multi-root workspace, and one launch config defined. When I start debugging/running with the F5 key, the one launch config is automatically selected, and everything works fine. But if I use the debug button at the top-right corner as highlighted, VS Code complains it cannot find the modules imported in the workspace.
What debug/run environment does this button use?</p>
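<p>For reference, the top-right run/debug button of the Python extension does not automatically reuse an arbitrary launch.json entry; it falls back to a default "Python File" configuration unless a configuration opts in via the <code>purpose</code> field. A sketch (launch.json allows comments; the <code>env</code> entry is an assumption addressing the module-not-found symptom):</p>

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "cwd": "${workspaceFolder}",
      // Assumed workaround so sibling workspace modules are importable.
      "env": { "PYTHONPATH": "${workspaceFolder}" },
      // The top-right run/debug button only picks up configurations
      // that declare this purpose.
      "purpose": ["debug-in-terminal"]
    }
  ]
}
```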
|
<python><visual-studio-code><vscode-debugger>
|
2023-05-31 16:28:28
| 2
| 481
|
H.Sheng
|
76,375,446
| 10,498,616
|
Issue with rigidity of line plot
|
<p>I am plotting a simple dataframe in Python matplotlib using the following:</p>
<pre><code># Load the data
df = pd.read_csv('equityplot.csv')
df['Date'] = pd.to_datetime(df['Date'])  # ensure date column is datetime

# Plotting
fig, ax = plt.subplots(figsize=(10, 6))

# Iterate over each of the variables and plot
for variable in ['SP', 'RP', 'NRP', 'HRP', 'MVO', 'EW']:
    ax.plot(df['Date'], df[variable], label=variable)

# Formatting x-axis
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=3))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
plt.xticks(rotation=90)

# Formatting y-axis to percentage
def to_percent(y, position):
    # Ignore the passed-in position. This has the effect of scaling the
    # default tick locations.
    s = str(100 * y)
    # The percent symbol needs escaping in latex
    if plt.rcParams['text.usetex'] is True:
        return s + r'$\%$'
    else:
        return s + '%'

formatter = FuncFormatter(to_percent)
ax.yaxis.set_major_formatter(formatter)

# Add legend
ax.legend()
plt.tight_layout()
plt.show()
</code></pre>
<p>The resulting plot looks like the following:</p>
<p><a href="https://i.sstatic.net/HqQ0v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HqQ0v.png" alt="enter image description here" /></a></p>
<p>Instead, in Excel these weird breaks do not exist:</p>
<p><a href="https://i.sstatic.net/d8Wtu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d8Wtu.png" alt="enter image description here" /></a></p>
<p>The dataset as simple as the following:</p>
<ul>
<li>first column: dates</li>
<li>other columns: float values (the cumulative profit and loss function of trading strategies)</li>
</ul>
<p>Can anyone explain where this "rigidity" in the Python plot comes from?</p>
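<p>One hypothesis worth checking (a sketch with illustrative data, not the real file): matplotlib connects points in row order, so rows that are not chronologically sorted draw back-and-forth segments that look like breaks, whereas Excel's chart hides this. Sorting by date before plotting would remove them:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2020-01-03", "2020-01-01", "2020-01-02"]),
    "SP": [1.2, 1.0, 1.1],  # illustrative values, not the real data
})

# Sort rows chronologically so ax.plot draws a monotone line.
df = df.sort_values("Date").reset_index(drop=True)
```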
|
<python><matplotlib>
|
2023-05-31 16:09:25
| 0
| 305
|
Vitomir
|
76,375,411
| 7,413,446
|
Django optimize waiting for a response
|
<p>I'm trying to speed up my Django server, which runs, say, 4 worker processes. For some views it makes a request to another server that performs a long computation and can take around 5 minutes to respond. As a result, after 4 such requests in a short period my server gets stuck. How can I optimize this? I believe it is due to busy-waiting on the request: is it possible to keep serving other requests and finish replying to this one once the response from the other server arrives? I believe the solution is an async view in Django, but do I have to convert the whole view, or can I convert just the function calling the external server?</p>
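<p>The concurrency gain in question can be sketched with plain asyncio (a stand-in for the Django view; the sleep substitutes for the slow upstream server). In Django itself, only the view needs to be <code>async def</code> — a synchronous helper can be bridged with <code>asgiref.sync.sync_to_async</code>, so the whole call chain does not have to be converted:</p>

```python
import asyncio

async def slow_upstream():
    # Stands in for the external server that takes minutes to respond.
    await asyncio.sleep(0.1)
    return "done"

async def handle_request(n):
    # While one request awaits the upstream, the event loop is free to
    # serve the other requests instead of blocking a worker process.
    result = await slow_upstream()
    return f"request {n}: {result}"

async def main():
    # Four "concurrent" requests finish in ~0.1 s total, not 0.4 s.
    return await asyncio.gather(*(handle_request(i) for i in range(4)))

results = asyncio.run(main())
print(results)
```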
|
<python><django><asynchronous><django-views><python-asyncio>
|
2023-05-31 16:05:26
| 1
| 314
|
Jakub Swistak
|
76,375,382
| 7,268,601
|
How to properly import llama-index classes?
|
<p>Recently I have been making some PoCs using Llama Index.</p>
<p>I'm following the <a href="https://gpt-index.readthedocs.io/en/latest/examples/query_engine/sub_question_query_engine.html" rel="nofollow noreferrer">documentation</a> in order to use routing features for different indexes. I made two indexes and I want to use SubQuestionQueryEngine to route across the query engines that I have created.</p>
<p>In the tutorial, we have the following example of imports:</p>
<pre><code>from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.tools import QueryEngineTool, ToolMetadata
from llama_index.query_engine import SubQuestionQueryEngine
</code></pre>
<p>But when I run my example, I'm not able to import ToolMetadata neither SubQuestionQueryEngine. I got the following error:</p>
<pre><code>ImportError: cannot import name 'SubQuestionQueryEngine' from 'llama_index.query_engine'
</code></pre>
<p>Does anybody know what the problem could be? The other imports work normally and I was able to run examples using the GPTVectorStoreIndex class.</p>
<p>I'm using Python 3.10.11</p>
|
<python><python-3.x><llama-index>
|
2023-05-31 16:01:20
| 2
| 621
|
Thauany Moedano
|
76,375,307
| 7,483,211
|
How to make typer traceback look normal
|
<p>When using <a href="https://typer.tiangolo.com/" rel="noreferrer">typer</a> to parse CLI arguments, I get very verbose and colorful error messages. How can I get a normal Python traceback?</p>
<p>See screenshot for an example traceback (just the first few lines) for illustration of the verbose style:</p>
<pre class="lang-py prettyprint-override"><code>❯ python scripts/add_priors.py
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/corneliusromer/code/nextclade_data_workflows/sars-cov-2/scripts/add_priors.py:26 in main  │
│                                                                                                  │
│   23 │   import polars as pl                                                                     │
│   24 │                                                                                           │
│   25 │   priors = (                                                                              │
│ ❱ 26 │   │   pl.scan_ndjson(ndjson, infer_schema_length=10000)                                   │
│   27 │   │   .select(                                                                            │
│   28 │   │   │   [                                                                               │
│   29 │   │   │   │   pl.col("nearestNodes"),                                                     │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ json = <module 'json' from                                                                   │ │
│                                                                                                  │
</code></pre>
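<p>Typer's rich-formatted tracebacks can be disabled per app (a sketch; the option names below come from recent Typer versions and may not exist in older ones):</p>

```python
import typer

# pretty_exceptions_enable=False restores plain Python tracebacks;
# pretty_exceptions_show_locals=False is the softer option that only
# hides the verbose locals panel.
app = typer.Typer(pretty_exceptions_enable=False)

@app.command()
def main(name: str = "world") -> None:
    print(f"hello {name}")

if __name__ == "__main__":
    app()
```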
|
<python><command-line-interface><traceback><typer>
|
2023-05-31 15:51:36
| 2
| 10,272
|
Cornelius Roemer
|
76,375,219
| 903,521
|
Conditional mocking of a function based on the parameter
|
<p>I want to mock a function only under a certain condition, and use the actual implementation for the rest.</p>
<pre><code># script_1.py

def retrieve_pod_size(pod_number):
    if pod_number == 3:
        return 'an object with size 3'
    elif pod_number == 5:
        return 'an object with size 5'
    else:
        return 'default object'
</code></pre>
<p>In my test I want to mock the call only when the application flow hits the condition pod_number == 5; for other values it should still execute the actual implementation.</p>
<p>In my test script I have</p>
<pre><code>def test_pod(self):
    def get_mock_pod(pod_number):
        if pod_number == 5:
            return 'an object with size 6'

    with mock.patch('application.triggers.retrieve_pod_size', get_mock_pod):
        ...
</code></pre>
<p>I also tried side_effect, but I still have no idea how to mix the mock and the real implementation based on the parameters/conditions.</p>
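<p>The usual pattern for this is a <code>side_effect</code> callable that keeps a reference to the real function and defers to it for the non-intercepted cases. A self-contained sketch (the stand-in module replaces <code>application.triggers</code> so the example runs on its own):</p>

```python
import types
from unittest import mock

# Stand-in module so the sketch is self-contained; in the real test the
# patch target would be 'application.triggers.retrieve_pod_size'.
script_1 = types.ModuleType("script_1")

def retrieve_pod_size(pod_number):
    if pod_number == 3:
        return 'an object with size 3'
    elif pod_number == 5:
        return 'an object with size 5'
    return 'default object'

script_1.retrieve_pod_size = retrieve_pod_size

def selective(pod_number):
    # Intercept only pod_number == 5; defer to the real code otherwise.
    if pod_number == 5:
        return 'an object with size 6'
    return retrieve_pod_size(pod_number)

with mock.patch.object(script_1, "retrieve_pod_size", side_effect=selective):
    print(script_1.retrieve_pod_size(5))  # an object with size 6
    print(script_1.retrieve_pod_size(3))  # an object with size 3
```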
|
<python><python-3.x>
|
2023-05-31 15:41:07
| 0
| 3,628
|
syv
|
76,375,119
| 223,992
|
AWS Lambda - not understanding returned data
|
<p>I am writing my first Lambda script. It is a very simple HTTP to email gateway. While it is parsing the request and generating the email, I get a 502 error at the front end (via Cloudfront):</p>
<blockquote>
<p>The Lambda function returned an invalid entry in the headers object: Each header entry in the headers object must be an array. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.</p>
</blockquote>
<p>To create the returned object, I am simply decorating the event object passed to lambda_handler():</p>
<pre><code>def lambda_handler(event, context):
    returnAddr = do_clever_stuff(event)
    response = {
        "headers": {"location": [{"key": "Location", "value": returnAddr}]},
        "status": 303,
        "statusDescription": "See Other"
    }
    event['Records'][0]['cf']['response'] = response
    return event
</code></pre>
<p>The entire data returned in the Lambda editor / test console is shown below.</p>
<p>I <em>think</em> that my response conforms to the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-response-origin" rel="nofollow noreferrer">requirements</a>. What am I missing?</p>
<pre><code>{
"Records": [
{
"cf": {
"config": {
"distributionDomainName": "redacted.cloudfront.net",
"distributionId": "EDFDVBD6EXAMPLE",
"eventType": "origin-request",
"requestId": "4TyzHTaYWb1GX1qTfsHhEqV6HUDd_BzoBZnwfnvQc_1oF26ClkoUSEQ=="
},
"request": {
"clientIp": "203.0.113.178",
"headers": {
"x-forwarded-for": [
{
"key": "X-Forwarded-For",
"value": "203.0.113.178"
}
],
"user-agent": [
{
"key": "User-Agent",
"value": "Amazon CloudFront"
}
],
"via": [
{
"key": "Via",
"value": "2.0 2afae0d44e2540f472c0635ab62c232b.cloudfront.net (CloudFront)"
}
],
"host": [
{
"key": "Host",
"value": "example.org"
}
],
"cache-control": [
{
"key": "Cache-Control",
"value": "no-cache"
}
]
},
"method": "POST",
"origin": {
"custom": {
"customHeaders": {},
"domainName": "example.org",
"keepaliveTimeout": 5,
"path": "",
"port": 443,
"protocol": "https",
"readTimeout": 30,
"sslProtocols": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
]
}
},
"querystring": "",
"uri": "/",
"body": {
"data": "<<redacted>>"
}
},
"response": {
"headers": {
"location": [
{
"key": "Location",
"value": "/"
}
]
},
"status": 303,
"statusDescription": "See Other"
}
}
}
]
}
</code></pre>
<p><strong>Update</strong> Looking at <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-generated-response-examples" rel="nofollow noreferrer">other examples</a> in the AWS documentation, these do not return a decorated event object but rather just the response object; however, amending the code above to <code>return response</code> does not change the behaviour.</p>
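<p>For comparison, a sketch of a minimal generated response (the redirect target is a placeholder). One detail that may matter: in the documented generated-response examples, <code>status</code> is a string, not an integer:</p>

```python
def lambda_handler(event, context):
    # Return only the response object for a generated response, not the
    # event wrapper; note "status" is a string here.
    return {
        "status": "303",
        "statusDescription": "See Other",
        "headers": {
            "location": [{"key": "Location", "value": "/"}],
        },
    }
```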
|
<python><aws-lambda><amazon-cloudfront>
|
2023-05-31 15:28:12
| 2
| 48,387
|
symcbean
|
76,375,099
| 8,372,455
|
open model zoo multi_camera_multi_target_tracking_demo
|
<p>I am trying the <a href="https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/multi_camera_multi_target_tracking_demo/python" rel="nofollow noreferrer">multi_camera_multi_target_tracking_demo</a> with test video files, running the demo on Ubuntu with:</p>
<pre><code>$ python3.9 multi_camera_multi_target_tracking_demo.py -i ./test_video/test1.mp4 ./test_video/test1.mp4 --m_detector intel/person-detection-retail-0013.xml --m_reid intel/person-reidentification-retail-0277.xml
</code></pre>
<p>But I encounter an error:</p>
<pre><code>RuntimeError: Check 'false' failed at src/inference/src/core.cpp:100:
[ NETWORK_NOT_READ ] Unable to read the model: intel/person-detection-retail-0013.xml Please check that model format: xml is supported and the model is correct. Available frontends: paddle pytorch tflite tf ir onnx
</code></pre>
<p>From what I understand, the script wants the ONNX format and I am using the XML format. Can someone give me a tip on how to re-download the model in ONNX format?</p>
<p>when I cloned the <a href="https://github.com/openvinotoolkit/open_model_zoo/tree/master" rel="nofollow noreferrer">open model zoo</a> repo I used the directions <code>omz_downloader --all</code> and <code>omz_converter --all</code></p>
|
<python><computer-vision><openvino>
|
2023-05-31 15:26:26
| 2
| 3,564
|
bbartling
|