QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,359,017
| 15,215,859
|
FileNotFoundError: [Errno 2] No such file or directory: while exporting a parquet file from pandas dataframe
|
<p>I am basically trying to export a parquet file into a GCS cloud bucket, as shown in my code below, which runs in a GCP Cloud Function. I am getting an error on the line <code>chunk.to_parquet(parquet_file_path, engine='fastparquet', compression='snappy')</code> saying "No such file or directory: 'new_folder_20230206_065500/table1-20230206_065638.parquet'". The folder is created successfully inside the bucket, but I am not sure why the parquet file is not generated inside it.</p>
<pre><code>import mysql.connector
import pandas as pd
from google.cloud import storage
from datetime import datetime, timedelta
import os

def extract_data_to_gcs(request):
    connection = mysql.connector.connect(
        host=os.getenv('..'),
        user=os.getenv('...'),
        password=os.getenv('...'),
        database='....'
    )
    cursor = connection.cursor(buffered=True)
    tables = ["table1", "table2", "table3"]
    client = storage.Client()
    bucket = client.bucket('data-lake-archive')

    # Create a timestamp-based folder name
    now = datetime.now()
    folder_name = now.strftime("new_folder_%Y%m%d_%H%M%S")
    folder_path = f"{folder_name}/"

    # Create the folder in the GCS bucket
    blob = bucket.blob(folder_path)
    blob.upload_from_string("", content_type="application/octet-stream")

    for table in tables:
        cursor.execute("SELECT * FROM {}".format(table))
        chunks = pd.read_sql_query("SELECT * FROM {}".format(table), connection, chunksize=5000000)
        for i, chunk in enumerate(chunks):
            chunk.columns = [str(col) for col in chunk.columns]
            ingestion_timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            parquet_file_path = folder_path + f"{table}-{i}.parquet"
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            # parquet_file_path = folder_path + f'abc.parquet'
            print(f'folder path is {folder_path}')
            print(f'parquet file path is {parquet_file_path}')
            chunk.to_parquet(parquet_file_path, engine='fastparquet', compression='snappy')
            # blob = bucket.blob(folder_path + f'{table}-{i}.parquet')
            # blob.upload_from_filename(folder_path + f'{table}-{i}.parquet')

        cursor.execute("SELECT table_name, column_name FROM information_schema.key_column_usage WHERE referenced_table_name = '{}'".format(table))
        referenced_tables = cursor.fetchall()
        for referenced_table in referenced_tables:
            chunks = pd.read_sql_query("SELECT * FROM {}".format(referenced_table[0]), connection, chunksize=5000000)
            for i, chunk in enumerate(chunks):
                chunk.columns = [str(col) for col in chunk.columns]
                ingestion_timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                chunk.to_parquet(f"{folder_path}{referenced_table[0]}-{ingestion_timestamp}-{i}.parquet", engine='fastparquet', compression='snappy')
                blob = bucket.blob(folder_path + f'{referenced_table[0]}-{ingestion_timestamp}-{i}.parquet')
                blob.upload_from_filename(folder_path + f'{referenced_table[0]}-{ingestion_timestamp}-{i}.parquet')

    return 'Data extracted and uploaded to GCS'
</code></pre>
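<p>A minimal local reproduction of the failure mode, assuming the cause is that <code>to_parquet()</code> with a relative path writes to the Cloud Function's local filesystem, while the timestamped folder exists only as a blob in GCS and not as a local directory:</p>

```python
import os
import tempfile

# Writing into a directory that was never created on the local filesystem
# fails exactly like the to_parquet() call in the question.
caught = None
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "new_folder_20230206_065500", "table1-0.parquet")
    try:
        with open(target, "wb") as fh:
            fh.write(b"")
    except FileNotFoundError as err:
        caught = type(err).__name__
print(caught)  # FileNotFoundError
```
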
|
<python><python-3.x><google-cloud-platform><pyspark>
|
2023-02-06 08:40:40
| 1
| 317
|
Tushaar
|
75,358,989
| 736,312
|
PySimpleGUI - Binding shortcut key to one element of a sub-layout
|
<p>Thank you for reading this.</p>
<p>I am searching for a standard solution but couldn't find one.<br>
I have a simple layout made out of two layouts and the first layout is the one used to create the window.</p>
<p><a href="https://i.sstatic.net/rgarN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rgarN.png" alt="MainGUI" /></a></p>
<p>I want to bind shortcut keys to only the right-hand buttons.<br>
They are not receiving the events at all; only the first InputText element always does.<br>
I also need the InputText elements to receive events, and I have achieved that already.<br>
I need that kind of layout architecture for a more complex GUI (Tab element and so on).<br></p>
<p>I have prepared one script that reproduces my problem.</p>
<pre><code>import PySimpleGUI as Sg

input_folder_layout = [Sg.Text("input folder",
                               size=(20, 1),
                               background_color="#64778D"),
                       Sg.InputText(default_text="",
                                    key="input_folder",
                                    size=(65, 1),
                                    enable_events=True),
                       Sg.FolderBrowse("...",
                                       size=(6, 1),
                                       key="input_folder_browse",
                                       enable_events=True)]

output_folder_layout = [Sg.Text("output folder", size=(20, 1),
                                background_color="#64778D"),
                        Sg.InputText(default_text="",
                                     key="output_folder",
                                     size=(65, 1),
                                     enable_events=True),
                        Sg.FolderBrowse("...",
                                        size=(6, 1),
                                        key="output_folder_browse",
                                        enable_events=True)]

ui_layout = [input_folder_layout, output_folder_layout]

Sg.ChangeLookAndFeel("DarkBlue2")
window = Sg.Window("UI Test", ui_layout, finalize=True)
window.bind("<Ctrl_R><i>", "CTRL-i")
window.bind("<Ctrl_R><o>", "CTRL-o")

while True:
    event, values = window.read(timeout=100)
    if event != "__TIMEOUT__" and event is not None:
        if event in ("input_folder_browse", "CTRL-i"):
            print("input folder browse event found")
        if event in ("output_folder_browse", "CTRL-o"):
            print("output folder browse event found")
    if event == Sg.WINDOW_CLOSED:
        break
    window.refresh()

window.close()
window = None
</code></pre>
<p>I am using PySimpleGUI==4.60.3 and Python==3.9.13.</p>
<p>Thank you.</p>
|
<python><pysimplegui>
|
2023-02-06 08:36:37
| 1
| 796
|
Toyo
|
75,358,678
| 1,897,151
|
opencv detecting website form image for certain icon to determine the field
|
<p>I am trying to use OpenCV in Python, with this as a starting point:</p>
<pre><code>import cv2

img = cv2.imread("image2.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    # bounding box and aspect ratio of each contour
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / float(h)
    # text field
    if aspect_ratio > 2 and aspect_ratio < 9 and h > 20:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('canny', edges)
cv2.imshow("results", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>I managed to detect the text fields, but some text fields have certain icons on the right-hand side that indicate the type of field, such as a drop-down, date/time, or even a spyglass or spectacles icon indicating that the field has custom search options.</p>
<p><a href="https://i.sstatic.net/V1k12.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V1k12.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/dqrPm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dqrPm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/sxAV1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sxAV1.png" alt="enter image description here" /></a></p>
<p>I would like to know how I can detect the icons on the right side and decide what type of field each one is.</p>
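<p>One common approach is template matching: crop the right-hand end of each detected field and compare it against small reference images of the known icons (OpenCV provides <code>cv2.matchTemplate</code> for this). The idea can be sketched with plain NumPy using a sum-of-absolute-differences score; the region and icon arrays below are made-up stand-ins for real image crops:</p>

```python
import numpy as np

def match_icon(region, icon):
    """Brute-force template match: slide `icon` over `region` and return the
    position with the lowest sum of absolute differences."""
    rh, rw = region.shape
    ih, iw = icon.shape
    best_pos, best_score = None, None
    for r in range(rh - ih + 1):
        for c in range(rw - iw + 1):
            score = np.abs(region[r:r + ih, c:c + iw] - icon).sum()
            if best_score is None or score < best_score:
                best_pos, best_score = (r, c), score
    return best_pos, best_score

region = np.zeros((10, 10))
region[4:7, 5:8] = 1.0          # paste a fake 3x3 "icon" into the strip
icon = np.ones((3, 3))
pos, score = match_icon(region, icon)
print(pos, score)  # (4, 5) 0.0
```

<p>With one reference image per icon type, the icon whose best score falls under a threshold decides the field type.</p>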
|
<python><selenium><opencv>
|
2023-02-06 08:01:02
| 0
| 503
|
user1897151
|
75,358,527
| 15,358,800
|
Pandas reversing Data Frames giving strange results
|
<p>Let's say I've a df</p>
<pre><code> value
2023-02-01 a
2023-02-02 b
2023-02-03 c
2023-02-04 d
</code></pre>
<p>When I reverse <code>df</code> and then reverse it back again (intending to do some operations in between), it does not show the complete df:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {"value": {0: "a", 1: "b", 2: "c", 3: "d"}}
)
df.index = pd.date_range(start='2023-02-01', periods=4, freq='D')
new = df.loc[::-1]    # Reversing 1 time
new2 = new.loc[::-1]  # Reversing back again
print(new2)
</code></pre>
<p>This gives me the following, even though I haven't done any operations on df:</p>
<pre><code>2023-02-04 d
</code></pre>
<p>Expected</p>
<pre><code> value
2023-02-01 a
2023-02-02 b
2023-02-03 c
2023-02-04 d
</code></pre>
<p>Version details:</p>
<pre><code>Pandas 1.3.5
Python 3.9
</code></pre>
|
<python><pandas><dataframe><reverse>
|
2023-02-06 07:41:32
| 0
| 4,891
|
Bhargav
|
75,358,506
| 1,308,967
|
How do you incrementally add lexeme/s to an existing Django SearchVectorField document value through the ORM?
|
<p>You can add to an existing Postgresql <code>tsvector</code> value using <code>||</code>, for example:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE acme_table
SET my_tsvector = my_tsvector ||
to_tsvector('english', 'some new words to add to existing ones')
WHERE id = 1234;
</code></pre>
<p>Is there any way to access this functionality via the Django ORM? I.e. incrementally add to an existing <code>SearchVectorField</code> value rather than reconstruct from scratch?</p>
<p>The issue I'm having is the <code>SearchVectorField</code> property returns the <code>tsvector</code> as a string. So when I use the <code>||</code> operator as <code>+</code>, eg:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.postgres.search import SearchVector

instance.my_tsvector_prop += SearchVector(
    ["new", "fields"],
    weight="A",
    config='english'
)
</code></pre>
<p>I get the error:</p>
<p><code>TypeError: SearchVector can only be combined with other SearchVector instances, got str.</code></p>
<p>Because:</p>
<pre class="lang-py prettyprint-override"><code>type(instance.my_tsvector_prop) == str
</code></pre>
<p>A fix to this open <a href="https://code.djangoproject.com/ticket/30637" rel="nofollow noreferrer">Django bug</a> whereby a <code>SearchVectorField</code> property returns a <code>SearchVector</code> instance would probably enable this, if possible. (Although less efficient than combining in the database. In our case the update will run asynchronously so performance is not too important.)</p>
<pre class="lang-py prettyprint-override"><code>MyModel.objects
    .filter(pk=1234)
    .update(my_tsvector_prop=
        F("my_tsvector_prop") +
        SearchVector(
            ["new_field_name"],
            weight="A",
            config='english')
    )
</code></pre>
<p>Returns:</p>
<p><code>FieldError: Cannot resolve expression type, unknown output_field</code></p>
<p>Another solution would be to run a raw SQL <code>UPDATE</code>, although I'd rather do it through the Django ORM if possible as our <code>tsvector</code> fields often reference values many joins away, so it'd be nice to find a sustainable solution.</p>
|
<python><django><postgresql><full-text-search>
|
2023-02-06 07:39:12
| 0
| 6,522
|
Chris
|
75,358,443
| 10,722,752
|
How to get original string values of encoded categorical columns in Lime graph
|
<p>I am trying to work on local explainability using Lime graph. Before building the model, I encode some of the categorical variables.</p>
<p>Sample Data and code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

df = pd.DataFrame({'customer_id' : np.arange(1,21),
                   'gender' : np.random.choice(['male','female'], 20),
                   'age' : np.random.randint(19,50, 20),
                   'salary' : np.random.randint(20000,95000, 20),
                   'purchased' : np.random.choice([0,1], 20, p = [.8,.2])})
</code></pre>
<p>Preprocessing:</p>
<pre><code>df['gender'] = df['gender'].map({'female' : 0, 'male' : 1})
df['age'] = df['age'].map(lambda x : 'young' if x<=35 else 'middle aged')
df['age'] = df['age'].map({'young' : 0, 'middle aged' : 1})
bins = [0, df['salary'].quantile(q=.33),df['salary'].quantile(q=.66),df['salary'].quantile(q=1)+1]
labels = ['low salary', 'medium salary', 'high salary']
df['salary'] = pd.cut(df['salary'], bins = bins, labels=labels)
from sklearn import preprocessing
l_encoder={}
label_encoder = preprocessing.LabelEncoder()
df['salary']= label_encoder.fit_transform(df['salary'])
df
customer_id gender age salary purchased
0 1 0 0 1 0
1 2 0 0 0 0
2 3 0 1 2 0
3 4 1 0 0 0
4 5 1 1 2 0
5 6 0 1 1 0
6 7 1 0 2 0
7 8 1 1 0 0
8 9 1 1 1 0
9 10 1 0 0 0
10 11 0 1 0 0
11 12 0 0 1 0
12 13 1 1 1 0
13 14 1 1 1 0
14 15 1 1 2 1
15 16 1 1 0 0
16 17 1 1 1 0
17 18 0 0 0 0
18 19 0 0 2 0
19 20 0 0 2 0
# input
x = df.iloc[:, :-1]
# output
y = df.iloc[:, 4]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 0)
</code></pre>
<p>Separating the <code>customer_id</code> column:</p>
<pre><code>X_train_cust = X_train.pop('customer_id')
X_test_cust = X_test.pop('customer_id')
</code></pre>
<p>Fitting a logistic regression model:</p>
<pre><code>from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
</code></pre>
<p>Building a <code>lime</code> chart:</p>
<pre><code>import lime
import lime.lime_tabular

explainer = lime.lime_tabular.LimeTabularExplainer(np.array(X_train),
                                                   feature_names=X_train.columns,
                                                   verbose=True, mode='classification')
exp = explainer.explain_instance(X_test.iloc[0], classifier.predict_proba)
exp.as_pyplot_figure()
</code></pre>
<p><a href="https://i.sstatic.net/kGRKa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kGRKa.png" alt="enter image description here" /></a></p>
<p>The lime chart displays the encoded feature/column values, but I need the original values. For example, if the lime chart says 0, I need to display it as <code>female</code>. Could someone please let me know how to fix this?</p>
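<p>As far as I know, <code>LimeTabularExplainer</code> also accepts <code>categorical_features</code> and <code>categorical_names</code> arguments for exactly this purpose (see the lime documentation). Independently of lime, the decoding can be sketched with plain inverse mappings reconstructed from the preprocessing steps above:</p>

```python
# Inverse mappings reconstructed from the preprocessing above.
# Note: LabelEncoder assigns codes in alphabetical order of the labels.
inverse_maps = {
    "gender": {0: "female", 1: "male"},
    "age": {0: "young", 1: "middle aged"},
    "salary": {0: "high salary", 1: "low salary", 2: "medium salary"},
}

def decode(feature, code):
    """Translate an encoded value back to its original string label."""
    return inverse_maps.get(feature, {}).get(code, code)

print(decode("gender", 0))  # female
print(decode("age", 1))     # middle aged
```

<p>Applying <code>decode</code> to the feature/value pairs from <code>exp.as_list()</code> would then let the chart labels be rewritten in terms of the original strings.</p>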
|
<python><pandas><scikit-learn><lime>
|
2023-02-06 07:29:41
| 1
| 11,560
|
Karthik S
|
75,358,422
| 10,867,713
|
Is there any command that points me to the kubeconfig file in k8s/openshift
|
<p>Is there any command that points me to the path where the kubeconfig file is present?
I am working with the Python k8s/OpenShift client, and I am looking for a Linux or Python command or library that can print the path of the kubeconfig file.</p>
<p>By default the kubeconfig is in the home directory most of the time, but this may vary across deployment types.</p>
<p>I am looking forward to any suggestions/concerns.</p>
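<p>For reference, <code>kubectl</code> and (as far as I know) the Python kubernetes client resolve the config from the <code>KUBECONFIG</code> environment variable first and fall back to <code>~/.kube/config</code>. A dependency-free sketch of that lookup (the <code>/tmp</code> path below is a demo value only):</p>

```python
import os

def kubeconfig_path():
    """Mimic kubectl's lookup: $KUBECONFIG wins; otherwise ~/.kube/config."""
    env = os.environ.get("KUBECONFIG")
    if env:
        # KUBECONFIG may hold a path list; take the first entry
        return env.split(os.pathsep)[0]
    return os.path.join(os.path.expanduser("~"), ".kube", "config")

os.environ["KUBECONFIG"] = "/tmp/demo-kubeconfig"   # demo value only
print(kubeconfig_path())  # /tmp/demo-kubeconfig
```
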
|
<python><linux><kubernetes><openshift>
|
2023-02-06 07:27:25
| 2
| 409
|
majid asad
|
75,358,285
| 8,696,281
|
How to generate numeric mapping for categorical columns in pandas?
|
<p>I want to manipulate categorical data using pandas data frame and then convert them to <code>numpy</code> array for model training.</p>
<p>Say I have the following data frame in pandas.</p>
<pre><code>import pandas as pd
df2 = pd.DataFrame({"c1": ['a','b',None], "c2": ['d','e','f']})
>>> df2
c1 c2
0 a d
1 b e
2 None f
</code></pre>
<p>And now I want "compress the categories" horizontally as the following:</p>
<pre><code> compressed_categories
0 c1-a, c2-d <--- this could be a string, ex. "c1-a, c2-d" or array ["c1-a", "c2-d"] or categorical data
1 c1-b, c2-e
2 c1-nan, c2-f
</code></pre>
<p>Next I want to generate a dictionary/vocabulary based on the unique occurrences plus "nan" columns in <code>compressed_categories</code>, ex:</p>
<pre><code>volcab = {
"c1-a": 0,
"c1-b": 1,
"c1-c": 2,
"c1-nan": 3,
"c2-d": 4,
"c2-e": 5,
"c2-f": 6,
"c2-nan": 7,
}
</code></pre>
<p>So I can further numerically encode them as follows:</p>
<pre><code> compressed_categories_numeric
0 [0, 4]
1 [1, 5]
2 [3, 6]
</code></pre>
<p>So my ultimate goal is to make it easy to convert them to <code>numpy</code> array for each row and thus I can further convert it to tensor.</p>
<pre><code>input_data = np.asarray(df['compressed_categories_numeric'].tolist())
</code></pre>
<p>then I can train my model using <code>input_data</code>.</p>
<p>Can anyone please show me an example how to make this series of conversion? Thanks in advance!</p>
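<p>A sketch of one possible way to do the whole chain in pandas. This covers only the categories that actually occur, plus NaN; a full vocabulary including unseen categories like <code>c1-c</code> and <code>c2-nan</code> would need to be supplied separately:</p>

```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({"c1": ['a', 'b', None], "c2": ['d', 'e', 'f']})

# 1. "compress" each cell into a "column-value" token
tokens = df2.apply(lambda s: s.name + "-" + s.fillna("nan").astype(str))

# 2. vocabulary over the observed tokens
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens.values.ravel())))}

# 3. numeric encoding, row by row, ready for np.asarray / tensors
encoded = tokens.apply(lambda s: s.map(vocab))
input_data = np.asarray(encoded.values.tolist())
print(input_data.tolist())  # [[0, 3], [1, 4], [2, 5]]
```
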
|
<python><pandas><dataframe><torch><categorical-data>
|
2023-02-06 07:09:09
| 1
| 783
|
noobie2023
|
75,358,246
| 12,331,179
|
Nested Json Using pyspark
|
<p>We have to build a nested JSON in PySpark using the structure below; I have added the data that needs to be fed into it.</p>
<p>Input Data structure</p>
<p><a href="https://i.sstatic.net/Jep4n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jep4n.png" alt="enter image description here" /></a></p>
<p>Data</p>
<p><a href="https://i.sstatic.net/NAeDl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NAeDl.png" alt="enter image description here" /></a></p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
a1=["DA_STinf","DA_Stinf_NA","DA_Stinf_city","DA_Stinf_NA_ID","DA_Stinf_NA_ID_GRANT","DA_country"]
a2=["data.studentinfo","data.studentinfo.name","data.studentinfo.city","data.studentinfo.name.id","data.studentinfo.name.id.grant","data.country"]
columns = ["data","action"]
df = spark.createDataFrame(zip(a1, a2), columns)
#Input data for json structure
a1=["Pune"]
a2=["YES"]
a3=["India"]
col=["DA_Stinf_city","DA_Stinf_NA_ID_GRANT","DA_country"]
data=spark.createDataFrame(zip(a1, a2,a3), col)
</code></pre>
<p>Expected result based on above data</p>
<pre><code>{
"data": {
"studentinfo": {
"city": "Pune",
"name": {
"id": {
"grant": "YES"
}
}
},
"country": "india"
}
}
</code></pre>
<p>We have tried using the F.struct function manually, but we need to find a dynamic way to build this JSON from the df dataframe, which has the data and action columns:</p>
<pre><code>data.select(
    F.struct(
        F.struct(
            F.struct(F.col("DA_Stinf_city")).alias("city"),
            F.struct(
                F.struct(F.col("DA_Stinf_NA_ID_GRANT")).alias("id")
            ).alias("name"),
        ).alias("studentinfo"),
        F.struct(F.col("DA_country")).alias("country")
    ).alias("data")
)
</code></pre>
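<p>The dynamic part can be prototyped independently of Spark: walk each dot-separated path from the action column and build the nested structure with plain dicts (in Spark the same tree would then be translated into nested <code>F.struct</code> calls). A minimal sketch using the sample values above:</p>

```python
def build_nested(mapping, row):
    """mapping: {column: dotted.path}; row: {column: value} -> nested dict."""
    out = {}
    for col, path in mapping.items():
        if col not in row:
            continue
        parts = path.split(".")
        node = out
        for p in parts[:-1]:
            node = node.setdefault(p, {})   # descend, creating levels on demand
        node[parts[-1]] = row[col]
    return out

mapping = {"DA_Stinf_city": "data.studentinfo.city",
           "DA_Stinf_NA_ID_GRANT": "data.studentinfo.name.id.grant",
           "DA_country": "data.country"}
row = {"DA_Stinf_city": "Pune", "DA_Stinf_NA_ID_GRANT": "YES", "DA_country": "India"}
nested = build_nested(mapping, row)
print(nested)
```
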
|
<python><json><pyspark><nested-loops><nested-json>
|
2023-02-06 07:02:38
| 1
| 386
|
Amol
|
75,358,230
| 4,732,175
|
How to upgrade pypy itself?
|
<p>I've installed <code>pypy</code> by <code>conda</code>:</p>
<pre><code>conda create -n pypy37 pypy python=3.7
</code></pre>
<p>and pypy version is:</p>
<pre><code>Python 3.7.12 | packaged by conda-forge | (44db2626, Oct 29 2021, 16:19:11)
[PyPy 7.3.7 with GCC Clang 11.1.0]
</code></pre>
<p>Now I want to upgrade <code>pypy</code> itself, not the <code>python</code> version. Is there any command that can achieve this? Thanks!</p>
|
<python><pypy>
|
2023-02-06 07:01:14
| 1
| 11,212
|
Zhang Buzz
|
75,358,101
| 7,784,797
|
Could not use `streamlit` to annotate a dataset of multiple labels
|
<p>I am trying to build an annotation interface using <code>streamlit</code>.</p>
<p>In my dataset, each data point may have multiple labels (i.e. <code>labels</code> in the code below). However, I can only select one label using <code>st.multiselect()</code> rather than the expected multiple selection: every time I click one of the choices, the page updates and the next data point pops up.</p>
<p>I am not sure what went wrong after getting stuck in this for hours. Could anyone provide any pointers for me?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import streamlit as st

df = pd.read_pickle("unlabeled.pkl")
records = df.to_dict("records")

if "annotations" not in st.session_state:
    st.session_state.records = records
    st.session_state.current_record = records[0]

annotated_data = list()

if st.session_state.records:
    labels = st.session_state.current_record["labels"]
    example = st.session_state.current_record["example"]
    text = st.session_state.current_record["text"]
    demo = "\n".join(["- {}".format(ee) for ee in example])
    text = "- {}".format(text)
    st.write(f"# Example\n{demo}\n# Output\n{text}")
    labels = st.multiselect(
        label="Select Labels",
        options=labels
    )
    st.write('You Selected:', labels)
    if st.button("Save"):
        st.session_state.records.remove(st.session_state.current_record)
        st.session_state.current_record = st.session_state.records[0]
        annotated_data.append(
            {
                **st.session_state.current_record,
                "label": labels
            }
        )
        if len(annotated_data) % 50 == 0:
            save_data(annotated_data)

save_data(annotated_data)
</code></pre>
|
<python><frontend><streamlit>
|
2023-02-06 06:44:55
| 1
| 349
|
Mr.Robot
|
75,357,968
| 3,650,087
|
How to use read-only, shared memory (as NumPy arrays) in multiprocessing
|
<p>Following the documentation for shared memory <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory" rel="nofollow noreferrer">here</a>, I have implemented a minimal example of accessing NumPy arrays backed with shared memory in a function called by a worker process in a pool. My assumption is that this code should produce minimal memory overhead for each additional worker (there is some overhead to copy the interpreter and non-shared variables, but the 16GB of memory should not be copied.)</p>
<pre><code>import numpy as np
from multiprocessing import Pool, shared_memory
from itertools import product
from tqdm import tqdm

if __name__ == "__main__":
    a_shared_memory = shared_memory.SharedMemory(create=True, size=8_000_000_000)
    a = np.ndarray((20, 100, 100, 100, 100), np.float32, buffer=a_shared_memory.buf)
    b_shared_memory = shared_memory.SharedMemory(create=True, size=8_000_000_000)
    b = np.ndarray((20, 100, 100, 100, 100), np.float32, buffer=b_shared_memory.buf)

    def test_func(args):
        a[args] + b[args[:-1]]

    with tqdm(total=20 * 100 * 100 * 100) as pbar:
        with Pool(16) as pool:
            for _ in pool.imap_unordered(test_func,
                                         product(range(20), range(100), range(100), range(100)),
                                         chunksize=16):
                pass
</code></pre>
<p>However, in practice when running this code memory usage grows in each process over time, both in the <code>RES</code> memory metric as well as the <code>SHR</code> memory metric as reported by top. (The rate of accumulation of memory can be modified with the size of the arrays being selected inside the <code>test_func</code> function.)</p>
<p>This behavior is confusing to me – these arrays are in shared memory, and I would therefore assume that a view of them shouldn't incur any memory allocation (I am testing on linux, so no copying should occur only with reading.) Further, I don't even store the results of this computation anywhere, so it is unclear why memory is being allocated.</p>
<p>Two further notes:</p>
<ol>
<li><p>According to <a href="https://stackoverflow.com/a/38135787/3650087">this answer</a>, even reading / accessing an array from shared memory will force a copy + write, since the refcount must be updated. However this should only affect the header memory page, which should be about 4kb. Why does memory continue to grow?</p>
</li>
<li><p>If I simply change the code in the following way:</p>
</li>
</ol>
<pre><code>def test_func(args):
    a[args], b[args[:-1]]
</code></pre>
<p>the issues resolve: there is no memory overhead (i.e. memory is shared) and no increasing memory allocation over time.</p>
<p>I've tried to present the simplest, most intuitive application of the documentation to multiprocessing with shared memory, yet it remains very unclear to me how and why it isn't working as expected. I would like to perform some simple calculations in the <code>test_func</code>, including viewing the shared memory, adding, matrix - vector multiplication etc. Any help in getting a better grasp of how to use shared memory correctly would be very appreciated.</p>
<p><strong>Update:</strong>
When I change the <code>test_func</code> code to <code>a[0, 0, 0, 0] + b[0, 0, 0]</code> the issue disappears. Does this mean that there is some reference counter in the middle of the NumPy arrays? Such that when <code>args</code> is changing, different parts of the array are accessed and memory increases, but if the indexes are always the same, the memory doesn't increase.</p>
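<p>For reference, the aliasing that <code>shared_memory</code> provides can be shown in a single process: two ndarray views attached to the same named block see each other's writes with no copying. A minimal sketch (unrelated to the 16GB arrays above; note the views must be dropped before closing, else <code>close()</code> raises <code>BufferError</code>):</p>

```python
import numpy as np
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=10 * 4)  # room for 10 float32
a = np.ndarray((10,), dtype=np.float32, buffer=shm.buf)
a[:] = 1.0

# attach a second view by name, as a worker process would
shm2 = shared_memory.SharedMemory(name=shm.name)
b = np.ndarray((10,), dtype=np.float32, buffer=shm2.buf)
b[0] = 5.0

seen = float(a[0])   # the write through b is visible through a
print(seen)          # 5.0

del a, b             # release the views before closing the buffers
shm2.close()
shm.close()
shm.unlink()
```
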
|
<python><numpy><multiprocessing><shared-memory>
|
2023-02-06 06:23:04
| 1
| 2,666
|
Acoop
|
75,357,888
| 4,117,975
|
How to scrape the content from all the div tags which also contain another tag?
|
<p>The website I am trying to scrape has all of its content laid out under the same <code>div</code> class type: <code>mock-div</code>. Upon inspecting its HTML, the relevant content is only present under those <code>div</code> tags which also contain the <code>figure</code> tag. What should be the correct XPath?</p>
<p>I tried to see if the following would work</p>
<pre><code>response.xpath("//figure~//").getall()
</code></pre>
<p>but it returns <code>ValueError: XPath error: Invalid expression in //figure~//</code> and rightly so.</p>
<pre><code><div class="mock-div">
    <h2 class="mock-h2" id="id1"> hello world </h2>
    <figure class="mock-fig"><img src="file.jpg" alt="filename">
        <figcaption>file caption</figcaption> </figure>
    <p>text1</p>
    <p>text2</p>
</div>
...
<div class="mock-div">
    <h2 class="mock-h2" id="id2"> footer </h2>
    <p> end of the webpage </p>
</div>
</code></pre>
<p>From the HTML above, we want to extract the following information from all the matching <code>div</code> tags:</p>
<ol>
<li><code><h2></code> tag: hello world</li>
<li><code><p></code> tag: text1, text2</li>
<li>src value from <code>img</code> tag: file.jpg</li>
<li>alt value from <code>img</code> tag: filename</li>
<li><code>figcaption</code> tag: file caption</li>
</ol>
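<p>For what it's worth, XPath expresses "a div that contains a figure" with a predicate rather than a sibling operator; in Scrapy that would be <code>response.xpath('//div[@class="mock-div"][figure]')</code>. A minimal stdlib check of the predicate, assuming well-formed markup (the snippet below is a cleaned-up version of the sample HTML):</p>

```python
import xml.etree.ElementTree as ET

doc = """<root>
  <div class="mock-div">
    <h2 class="mock-h2" id="id1">hello world</h2>
    <figure class="mock-fig"><img src="file.jpg" alt="filename" />
      <figcaption>file caption</figcaption></figure>
    <p>text1</p><p>text2</p>
  </div>
  <div class="mock-div">
    <h2 class="mock-h2" id="id2">footer</h2>
    <p>end of the webpage</p>
  </div>
</root>"""

root = ET.fromstring(doc)
# only divs that have a <figure> child match the [figure] predicate
hits = root.findall(".//div[@class='mock-div'][figure]")
texts = [d.find("h2").text for d in hits]
print(texts)  # ['hello world']
```
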
|
<python><web-scraping><scrapy>
|
2023-02-06 06:12:32
| 2
| 1,258
|
Amogh Mishra
|
75,357,850
| 8,168,294
|
Pandas select columns ordered at the beginning and the rest remain unchanged
|
<p>For example, I have a dataframe with many columns; the exact number of columns is not fixed, e.g. between 10 and 20.</p>
<p>The column name in the follows:</p>
<p><code>RecordID, price, company, date, feature1, return, some_inf, feature2, feature3, ... </code></p>
<p>Sample data:</p>
<pre><code>column_names = ["RecordID", "price", "company", "date", "feature1", "return", "some_inf", "feature2", "feature3"]
values = [1, 9.99, "ABC", 20230101, 888, 0.666, "happy_everyday", "helloworld", "test"]
df = pd.DataFrame(values).T
df.columns = column_names
</code></pre>
<p>Among all these columns, I would like to pick out some columns (if they exist) and put them at the beginning, with the rest of the columns following in their original order. For example, if I want to select <code>date, volume, price, return</code>,</p>
<p>then the output (with re-ordered columns) will be</p>
<p><code>date, price, return, RecordID, company, feature1, some_inf, feature2, feature3, ...</code></p>
<p>The <code>volume</code> column does not exist in the original dataframe, so it should not be in the final output either. I.e., the output dataframe should start with the columns from the selection list (those that exist in the original dataframe), followed by the columns not in this list, with order unchanged.</p>
<p>Is there a fast way to implement this?</p>
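<p>A sketch of one way to do this; the column names follow the sample above, and nonexistent columns in the wish list are simply dropped:</p>

```python
import pandas as pd

column_names = ["RecordID", "price", "company", "date", "feature1",
                "return", "some_inf", "feature2", "feature3"]
values = [1, 9.99, "ABC", 20230101, 888, 0.666,
          "happy_everyday", "helloworld", "test"]
df = pd.DataFrame([values], columns=column_names)

wanted = ["date", "volume", "price", "return"]      # "volume" does not exist
front = [c for c in wanted if c in df.columns]      # keep only existing ones
rest = [c for c in df.columns if c not in front]    # original order preserved
out = df[front + rest]
print(list(out.columns))
```
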
|
<python><pandas><dataframe>
|
2023-02-06 06:07:19
| 1
| 382
|
XYZ
|
75,357,816
| 6,403,044
|
Getting an error when tried to scrape data from instagram
|
<p>I am trying to use <code>instagramy</code> python package to scrape some instagram data, by following this tutorial: <a href="https://pypi.org/project/instagramy/" rel="nofollow noreferrer">https://pypi.org/project/instagramy/</a></p>
<p>I used the following lines of code (I have used a fake session id in this post):</p>
<pre><code>from instagramy import InstagramUser
s_id="55449%3APUiRY9UGd7JMJO%3A2uFFQSOlJinJd3dGKGGsAOvBNzTg"
profile = InstagramUser('google',sessionid=s_id)
</code></pre>
<p>But I got following error:</p>
<p><a href="https://i.sstatic.net/Xtjao.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xtjao.png" alt="enter image description here" /></a></p>
<p>Could anybody help me to figure out what is causing this error?</p>
<p>Thank you.</p>
|
<python><web-scraping><instagram>
|
2023-02-06 06:01:51
| 1
| 1,012
|
student_R123
|
75,357,643
| 13,132,728
|
Query/Filter a pandas df using a dict of lists
|
<h2><strong>My problem</strong></h2>
<p>I have a dict <code>d</code> that can be of varying length consisting of the following format:</p>
<pre><code>d = {
"foo": [
50,
100
],
"bar": [
5,
10
]
}
</code></pre>
<p>Here each key is a column name and the value is a two-element list giving the min and max for that column, used to filter a dataframe <code>df</code>. Thus, given the input above, I'd like to filter <code>df.foo</code> between 50-100 and <code>df.bar</code> between 5-10.</p>
<h2><strong>What I have tried</strong></h2>
<p>Of course, I could just hard code it like so:</p>
<pre><code>df.loc[(df[list(d.items())[0][0]] > list(d.items())[0][1][0]) & (df[list(d.items())[0][0]] < list(d.items())[0][1][1]) ...]
</code></pre>
<p>etc, but the number of keys (columns to filter on) may vary and also this just incredibly ugly code. Is there a cleaner/vectorized way to do this?</p>
<h2><strong>Context</strong></h2>
<p>I am building a streamlit app where a user can create n min max filters on a dataframe, and the format listed above is the format <a href="https://docs.streamlit.io/library/api-reference/widgets/st.slider" rel="nofollow noreferrer">streamlit's slider</a> returns</p>
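<p>One compact way to build the combined mask for any number of keys; note <code>between</code> is inclusive at the bounds, unlike the strict <code>&lt;</code>/<code>&gt;</code> in the hard-coded version, and the sample frame here is made up:</p>

```python
import pandas as pd

d = {"foo": [50, 100], "bar": [5, 10]}
df = pd.DataFrame({"foo": [49, 60, 75, 101],   # made-up sample frame
                   "bar": [6, 4, 7, 8]})

mask = pd.Series(True, index=df.index)
for col, (lo, hi) in d.items():
    mask &= df[col].between(lo, hi)   # inclusive bounds
print(df[mask].index.tolist())  # [2]
```
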
|
<python><pandas>
|
2023-02-06 05:31:55
| 2
| 1,645
|
bismo
|
75,357,630
| 11,901,732
|
Compare <class 'pandas._libs.tslibs.timestamps.Timestamp'>, str and datetime64[ns] dates in Python
|
<p>I need to query using dates of various data types, the data and their corresponding data types are listed below:</p>
<pre><code>last_month_year: <class 'str'> ** Used `pd.to_datetime()` and got `<class 'pandas._libs.tslibs.timestamps.Timestamp'>` format
current_month_year: <class 'str'>
df['Year_Month']: object
</code></pre>
<p>The query:</p>
<pre><code>df[(df['Year_Month'] == current_month_year) | (df['Year_Month'] == last_month_year)]
</code></pre>
<p>The dates consist of "year" and "month" and are of the format "Year_Month", e.g., <code>"2020-01"</code>.</p>
<p>I had a few attempts at converting them into the same data type but there are always certain issues. What's the best data type to convert these three data types into to compare them? Thanks.</p>
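<p>One option, sketched here with made-up values, is to normalise both sides to monthly <code>Period</code>s before comparing, which sidesteps the str/Timestamp mismatch entirely:</p>

```python
import pandas as pd

last_month_year = "2023-01"      # sample values
current_month_year = "2023-02"
df = pd.DataFrame({"Year_Month": ["2022-12", "2023-01", "2023-02"]})

# normalise both sides to monthly Periods before comparing
targets = pd.PeriodIndex([last_month_year, current_month_year], freq="M")
ym = pd.PeriodIndex(df["Year_Month"], freq="M")
matched = df[ym.isin(targets)]["Year_Month"].tolist()
print(matched)  # ['2023-01', '2023-02']
```
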
|
<python><pandas><date>
|
2023-02-06 05:30:27
| 1
| 5,315
|
nilsinelabore
|
75,357,468
| 9,249,533
|
matplotlib supylabel on second axis of multiplot
|
<p>I'm not finding it possible to add a second supylabel for a right-hand y-axis of a multiplot.</p>
<p>Can anyone please confirm 1) whether or not it can be done and/or 2) provide guidance on how?</p>
<p>I am trying to achieve this:
<a href="https://i.sstatic.net/FMLxz.png" rel="noreferrer"><img src="https://i.sstatic.net/FMLxz.png" alt="enter image description here" /></a></p>
<p>Because there are a variable number of subplots (sometimes an odd number, sometimes even) across the broader project, using subplot-level labelling to label the "middle" subplot would be problematic.</p>
<p>I'm presently accomplishing this with figure-level text, which looks fine within Python, but the right label gets cut off by savefig. I can only get it to work if I dummy-in null ax-level y-labels " \n".</p>
<pre><code>nrows = len(dftmp.GroupingCol.unique())
ncols = 1
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(14,10), constrained_layout=True,
                       sharex=True)

for e, ep in enumerate(dftmp.GroupingCol.unique(), start=1):
    # define a figure axis and plot data
    ax = plt.subplot(nrows, ncols, e)
    dftmp["ValueCol"].loc[dftmp["GroupingCol"]==ep].plot(ax=ax, kind="bar", color=barcolor_lst)  #, use_index=False)
    # horizontal reference line (zero change)
    zero_line = plt.axhline(0, color='k', linewidth=0.8)
    # y-axis extent limits
    ax.set_ylim([50*(-1.1), 50*1.1])
    # create right-hand y-axis
    ax2 = ax.twinx()
    # y-axis extent limits
    ax2.set_ylim([200*(-1), 200])
    # null y-label placeholder to accommodate fig-level pseudo-supylabel
    ax2.set_ylabel(" \n")  # requires space and newline to work

# create supylabel for left-axis
supy_left = fig.supylabel("Left-hand y-axis super label", fontweight="bold")  #, pad = 7)#, fontdict=fontdict) #fontweight='bold')
# use fig-level text as pseudo-supylabel for right-axis
fig.text(x=0.97, y=0.5, s="Right-hand y-axis super label\n\n", size=13, fontweight='bold', rotation=270, ha='center', va='center')
# create super-label for x-axis
supx = fig.supxlabel("Bottom super label", fontweight="bold")
</code></pre>
<p>In the absence of the fig.text line I tried naming a second supylabel as a different object and the code runs, but doesn't produce the label.</p>
<pre><code>supy_right = fig.supylabel("Cumulative net change (m^3)", fontweight="bold", position=(0.9,0.5))
</code></pre>
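For reference, one common cause of the cut-off is that savefig clips to the figure's fixed bounding box, while saving with bbox_inches="tight" sizes the saved area around all artists, including figure-level text. A minimal sketch with dummy data (not the original GroupingCol frame; requires matplotlib ≥ 3.4 for supylabel):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, for saving only
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(8, 6), constrained_layout=True)
for ax in axes:
    ax.bar([0, 1, 2], [1, -2, 3])
    ax.twinx().set_ylim(-200, 200)  # right-hand y-axis per subplot

fig.supylabel("Left-hand y-axis super label", fontweight="bold")
fig.text(x=0.99, y=0.5, s="Right-hand y-axis super label", size=13,
         fontweight="bold", rotation=270, ha="center", va="center")

# bbox_inches="tight" expands the saved region to include the fig-level text:
fig.savefig("out.png", bbox_inches="tight")
```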
|
<python><matplotlib><axis-labels><yaxis>
|
2023-02-06 05:01:15
| 1
| 2,399
|
CreekGeek
|
75,357,151
| 6,494,707
|
How to convert the distance between the object and the camera from pixels to meter?
|
<p>I need to convert the distance between the object and the camera from <code>pixels</code> to <code>meter/cm</code> frame by frame and then calculate the speed of the moving object. In the first frame, the distance of the object from the camera is <code>4 meter</code>, and the Focal length FL = 8mm. The width of the object is 0.0373 meters.</p>
<p>What I did first to calculate the projection of the object:</p>
<pre><code>focal_length_px = 8000/4.8 #Focal length(px): 8mm / (4.8 µm / px) = 1667px =: f
Object :
real_obj_width = 0.0373 # Width: 0.0373m
distance = 4 # Distance: 4m
obj_width_px = (real_obj_width/distance) * focal_length_px # Projection of object(px): (0.0373 m / 4 m) * focal_length_px = 15.5 px
</code></pre>
<p>After object detection, I want to calculate the distance. I used the <code>distance_finder()</code> code in <a href="https://www.section.io/engineering-education/approximating-the-speed-of-an-object-and-its-distance/" rel="nofollow noreferrer">this link</a>.</p>
<pre><code>def distance_finder(focal_length, real_obj_width, obj_width_in_frame):
distance = (real_obj_width * focal_length) / obj_width_in_frame
return distance
</code></pre>
<p>and the output is:</p>
<pre><code>obj_width_in_frame = 17
obj_dst = distance_finder(focal_length_px, obj_width_px, obj_width_in_frame) # obj_dst = 1523.6928104575165
</code></pre>
<p>My question is: how to convert the object distance in the frame from the camera (<strong>obj_dst = 1523.69</strong>) from pixels to meters/cm?</p>
<p>and then calculate the speed of the object in the frame.</p>
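For reference, under the pinhole model the result of distance_finder is already in the units of its second argument; passing the real width in meters gives a distance in meters, whereas passing the projected width in pixels (obj_width_px) is what yields a pixel-scaled number. A sketch using the numbers from the question:

```python
focal_length_px = 8000 / 4.8   # 8 mm focal length / 4.8 um per pixel ≈ 1667 px
real_obj_width = 0.0373        # real object width in meters

def distance_finder(focal_length_px, real_width_m, width_in_frame_px):
    # Pinhole model: distance [m] = real width [m] * focal length [px] / width [px]
    return real_width_m * focal_length_px / width_in_frame_px

obj_width_in_frame = 17        # detected width in the current frame, in pixels
dist_m = distance_finder(focal_length_px, real_obj_width, obj_width_in_frame)
print(round(dist_m, 3))  # 3.657 (meters)
```

The speed can then be estimated as the change in this distance between consecutive frames multiplied by the camera's frame rate.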
|
<python><object-detection><camera-calibration><opencv>
|
2023-02-06 03:40:24
| 2
| 2,236
|
S.EB
|
75,357,092
| 233,347
|
Nested dictionary or list using given unindent data using Python
|
<p>Hi, I have the data below, but it is unindented; only by using the "Total" keyword can I find the right nodes and build the tree structure.
Input:</p>
<pre><code>Current Assets
Cash
Checking 583961
Savings 224600
Petty Cash 89840
Total Cash 898402
Accounts Receivable 3593607
Work in Process 589791
Other Current Assets
Prepaid Rent 164593
Prepaid Liability Insurance 109728
Total Other Current Assets 274321
Total Current Assets 274321
</code></pre>
<p>I am looking for below Output:</p>
<pre><code>{
"Current Assets": {
"Cash": {
"Checking": 583961,
"Savings": 224600,
"Petty Cash": 89840,
"Total Cash": 898402
},
"Accounts Receivable": 3593607,
"Work in Process": 589791,
"Other Current Assets": {
"Prepaid Rent": 164593,
"Prepaid Liability Insurance": 109728,
"Total Other Current Assets": 274321
},
"Total Current Assets": 5356121
}
}
</code></pre>
<p>I tried recursion and a node-based approach, but nothing worked. It would be great if someone could help me achieve this using Python.</p>
<p>Rules:</p>
<p>As an example:
<code>Work in Process</code> is not a sub-item of <code>Accounts Receivable</code>; it is an item of <code>Current Assets</code> only.
As "Work in Process" has a digit at its end, it has no children.</p>
<p>As per the input data, "Cash" does not have a numeric value at the end; such entries will have one or more children.</p>
<p>The "Cash" group ends once "Total Cash" appears with a numeric value.</p>
<p>"Work in Process" and "Accounts Receivable" will not have any children, as they end with a numeric value.</p>
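For reference, the "Total" keyword alone is enough to drive a stack-based parser without recursion. A minimal sketch under the rules above (the name parse_accounts is illustrative; it assumes every leaf line ends in an integer and every group is closed by a matching "Total ..." line):

```python
def parse_accounts(text):
    root = {}
    stack = [root]  # stack of open dict groups; the top is the current group
    for line in text.strip().splitlines():
        parts = line.split()
        if parts[-1].isdigit():
            # Leaf entry: "Name 12345"
            name = " ".join(parts[:-1])
            stack[-1][name] = int(parts[-1])
            if name.startswith("Total"):
                stack.pop()  # a "Total ..." line closes the current group
        else:
            # Group header with no trailing number: open a nested dict
            child = {}
            stack[-1][line.strip()] = child
            stack.append(child)
    return root
```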
|
<python><list><dictionary><tree>
|
2023-02-06 03:29:36
| 1
| 5,165
|
prashant thakre
|
75,357,031
| 3,351,474
|
Python requirements.txt restrict dependency to be installed only on atom processors
|
<p>I'm using TensorFlow inside an x86_64 environment, but the processor is an Intel Atom processor. This processor lacks the AVX extension, and since the pre-built wheels for TensorFlow are compiled with the AVX extension, TensorFlow does not work and exits. Hence I had to build my own wheel, and I host it on GitHub as a released file.</p>
<p>The problem I have is downloading this pre-built wheel only on Atom-based processors. I was able to achieve this previously using a <code>setup.py</code> file, where this can easily be detected, but I have migrated to <code>pyproject.toml</code>, which is very poor when it comes to customization and scripted installation support.</p>
<p>Is there anything similar in addition to <code>platform_machine=='x86_64'</code> which checks for the processor type? Or has the migration to <code>pyproject.toml</code> killed here my flexibility?</p>
<p>The current <code>requirements.txt</code> is:</p>
<pre><code>confluent-kafka @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/confluent_kafka-1.9.2-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow-2.8.4-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow-addons @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow_addons-0.17.1-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
tensorflow-text @ https://github.com/HandsFreeGadgets/python-wheels/releases/download/v0.1/tensorflow_text-2.8.2-cp38-cp38-linux_aarch64.whl ; platform_machine=='aarch64'
rasa==3.4.2
SQLAlchemy==1.4.45
phonetics==1.0.5
de-core-news-md @ https://github.com/explosion/spacy-models/releases/download/de_core_news_md-3.4.0/de_core_news_md-3.4.0-py3-none-any.whl
</code></pre>
<p>For <code>platform_machine=='aarch64'</code> I need something similar for x86_64 but only executed on Atom processor environments.</p>
<p>The old <code>setup.py</code> was:</p>
<pre><code>import platform
import subprocess
import os
from setuptools import setup
def get_requirements():
requirements = []
if platform.machine() == 'x86_64':
command = "cat /proc/cpuinfo"
all_info = subprocess.check_output(command, shell=True).strip()
# AVX extension is the missing important information
if b'avx' not in all_info or ("NO_AVX" in os.environ and os.environ['NO_AVX']):
requirements.append(f'tensorflow @ file://localhost/'+os.getcwd()+'/pip-wheels/amd64/tensorflow-2.3.2-cp38-cp38-linux_x86_64.whl')
elif platform.machine() == 'aarch64':
...
requirements.append('rasa==3.3.3')
requirements.append('SQLAlchemy==1.4.45')
requirements.append('phonetics==1.0.5')
requirements.append('de-core-news-md @ https://github.com/explosion/spacy-models/releases/download/de_core_news_md-3.4.0/de_core_news_md-3.4.0-py3-none-any.whl')
return requirements
setup(
...
install_requires=get_requirements(),
...
)
</code></pre>
<p>The line <code>if b'avx' not in all_info or ("NO_AVX" in os.environ and os.environ['NO_AVX'])</code> does the necessary differentiation.</p>
<p>If a <code>pyproject.toml</code> approach does not fit my needs, what is recommended for Python with more installation power that is not marked as legacy? Maybe there is something for Python, similar to what Gradle is for building projects in the Java world (introduced to overcome the limitations of <code>XML</code> by providing a complete scripting language), that I'm not aware of?</p>
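For reference, PEP 508 environment markers can only express static facts (platform_machine, python_version, etc.); they cannot inspect CPU feature flags, so the AVX check cannot live in requirements.txt alone. A sketch of the detection logic from the old setup.py, factored into a helper a pre-install wrapper script could use to pick the wheel URL (the name needs_no_avx_wheel is illustrative; assumes Linux with /proc/cpuinfo, as in the original):

```python
import os
import platform

def needs_no_avx_wheel():
    """True on x86_64 when the CPU lacks the AVX flag (e.g. Intel Atom),
    or when the NO_AVX environment variable is set, mirroring the old setup.py."""
    if platform.machine() != "x86_64":
        return False
    if os.environ.get("NO_AVX"):
        return True
    try:
        with open("/proc/cpuinfo") as f:
            cpuinfo = f.read()
    except OSError:
        return False  # no /proc/cpuinfo (not Linux); assume a standard wheel is fine
    return "avx" not in cpuinfo
```

A wrapper script could then invoke pip with either the custom wheel URL or the stock package based on this check, keeping requirements.txt itself marker-only.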
|
<python><pip><python-packaging><requirements.txt><pyproject.toml>
|
2023-02-06 03:13:53
| 1
| 6,458
|
k_o_
|
75,357,022
| 3,247,006
|
How to automatically fill all info in the 2nd payment but not 3rd or 4th payments in Stripe?
|
<p>With the Django's code below, I'm testing <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_method_options-card-setup_future_usage" rel="nofollow noreferrer">payment_method_options.card.setup_future_usage</a> in <strong>Stripe Checkout</strong> in <code>test</code> mode:</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
def test(request): # Here
customer = stripe.Customer.search(query="email:'mytest@gmail.com'", limit=1)
checkout_session = stripe.checkout.Session.create(
customer=customer["data"][0]["id"] if customer.has_more else None,
line_items=[
{
"price_data": {
"currency": "USD",
"unit_amount_decimal": 1000,
"product_data": {
"name": "T-shirt",
"description": "Good T-shirt",
},
},
"quantity": 2,
}
],
payment_method_options={ # Here
"card": {
"setup_future_usage": "on_session",
},
},
mode='payment',
success_url='http://localhost:8000',
cancel_url='http://localhost:8000'
)
return redirect(checkout_session.url, code=303)
</code></pre>
<p>For the 1st payment with <code>mytest@gmail.com</code>, I need to manually fill all info as shown below:</p>
<p><a href="https://i.sstatic.net/39Y7N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/39Y7N.png" alt="enter image description here" /></a></p>
<p>But even for the 2nd and 3rd payments with <code>mytest@gmail.com</code>, I still need to manually fill in all the info; nothing is automatically filled, as shown below:</p>
<p><a href="https://i.sstatic.net/Zoxy8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zoxy8.png" alt="enter image description here" /></a></p>
<p>Finally, for the 4th payment with <code>mytest@gmail.com</code>, all info is automatically filled as shown below:</p>
<p><a href="https://i.sstatic.net/Tb6aq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tb6aq.png" alt="enter image description here" /></a></p>
<p>So, how to automatically fill all info in the 2nd payment but not 3rd or 4th payments in <code>test</code> and <code>live</code> modes?</p>
|
<python><django><django-views><stripe-payments><checkout>
|
2023-02-06 03:12:26
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,356,848
| 12,458,212
|
Regex - removing everything after first word following a comma
|
<p>I have a column of name variations that I'd like to clean up. I'm having trouble with the regex to remove everything after the first word following a comma.</p>
<pre><code>d = {'names':['smith,john s','smith, john', 'brown, bob s', 'brown, bob']}
x = pd.DataFrame(d)
</code></pre>
<p>Tried:</p>
<pre><code>x['names'] = [re.sub(r'/.\s+[^\s,]+/','', str(x)) for x in x['names']]
</code></pre>
<p>Desired output:</p>
<pre><code>['smith,john','smith, john', 'brown, bob', 'brown, bob']
</code></pre>
<p>Not sure why my regex isn't working, but any help would be appreciated.</p>
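For reference, one problem is that the /.../ delimiters in the pattern are JavaScript syntax; in Python's re module they are literal slash characters, so the pattern never matches. A sketch of a pattern that instead keeps everything up to the first word after the comma (assumes each name contains exactly one comma):

```python
import re

names = ['smith,john s', 'smith, john', 'brown, bob s', 'brown, bob']

# Capture up to and including the first word after the comma; drop the rest.
cleaned = [re.sub(r'^([^,]+,\s*\w+).*$', r'\1', n) for n in names]
print(cleaned)  # ['smith,john', 'smith, john', 'brown, bob', 'brown, bob']
```

In the dataframe the same pattern can be applied with x['names'].str.replace(..., regex=True).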
|
<python><regex>
|
2023-02-06 02:27:11
| 2
| 695
|
chicagobeast12
|
75,356,723
| 1,358,829
|
Tensorflow.data.Dataset.rejection_resample modifies my dataset's element_spec
|
<p>I am trying to use <code>tf.data.Dataset.rejection_resample</code> to balance my dataset, but I am running into an issue in which the method modifies the <code>element_spec</code> of my dataset, making it incompatible with my models.</p>
<p>The original element spec of my dataset is:</p>
<pre class="lang-py prettyprint-override"><code>({'input_A': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None),
'input_B': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None)},
TensorSpec(shape=(None, 1, 1), dtype=tf.int64, name=None))
</code></pre>
<p>This is the element spec after batching.</p>
<p>However, if I run <code>rejection_resample</code> (before batching), the element spec at the end becomes:</p>
<pre class="lang-py prettyprint-override"><code>(TensorSpec(shape=(None,), dtype=tf.int64, name=None),
({'input_A': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None),
'input_B': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None)},
TensorSpec(shape=(None, 1, 1), dtype=tf.int64, name=None)))
</code></pre>
<p>So <code>rejection_resample</code> is adding another <code>tf.int64</code> tensor at the beginning of my data, and I can't figure out what it is for. My problem is that this breaks compatibility between the input data and my model, since the model depends on the original input tuple.</p>
<p>Furthermore, it also causes an inconsistency between the training and validation data. I was expecting to apply <code>rejection_resample</code> only on training data, but if I do that, the training dataset will have the added tensor, while the validation one won't.</p>
<p>So my question is what is this added tensor to the element spec, and if there is any way to <em>drop</em> an element from the dataset after building it. Thank you.</p>
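For reference, rejection_resample yields (class_label, original_element) pairs: the extra int64 tensor is the class label returned by the class_func, and it can be dropped again with a map. A minimal sketch on a toy dataset (shapes here are illustrative, not the original pipeline's; the Dataset.rejection_resample method requires TF ≥ 2.7):

```python
import tensorflow as tf

# Toy (features, label) dataset with an imbalanced int64 label.
features = tf.random.normal([100, 4])
labels = tf.constant([0] * 90 + [1] * 10, dtype=tf.int64)
ds = tf.data.Dataset.from_tensor_slices((features, labels))

resampled = ds.rejection_resample(
    class_func=lambda x, y: y,   # which class each element belongs to
    target_dist=[0.5, 0.5])      # desired class balance

# rejection_resample yields (class_label, original_element);
# drop the label to restore the original element_spec before batching:
restored = resampled.map(lambda extra_label, original: original)
```

Applying this map only to the resampled training dataset also restores consistency with an un-resampled validation dataset.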
|
<python><tensorflow><machine-learning><tensorflow-datasets>
|
2023-02-06 01:50:46
| 2
| 1,232
|
Alb
|
75,356,710
| 6,004,338
|
Duck Typing Annotations in Python3
|
<p>I am trying to add a type annotation to a function input argument that is a <code>dataclass</code> with attributes that overlap with another <code>dataclass</code>, which actually gets passed in as an input argument.</p>
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import TypeVar
@dataclass
class Foo:
a: str
zar: str
@dataclass
class Car(Foo):
b: str
@dataclass
class CarInterface:
a: str
b: str
mar = TypeVar("mar", bound=CarInterface)
def blah(x: mar):
print(x.a)
car_instance = Car(a="blah blah", zar="11", b="bb")
blah(car_instance)
</code></pre>
<p>In this example, I'm trying to create my own type annotation <code>mar</code> which is bound by <code>CarInterface</code>. I want to check that whatever class is passed into <code>blah()</code> at least has <code>a</code> and <code>b</code> attributes (don't care if the class has other attributes such as <code>zar</code>). I want to do it this way because class <code>Car</code> (which actually gets passed in) is one of many classes that will be written in the future and passed into this function.</p>
<p>I also want it to be very easy to define a new <code>Car</code>, so I would like to avoid abstract classes as I don't think the added complexity is worth mypy being happy.</p>
<p>So I'm trying to create <code>mar</code> which uses duck typing to say that <code>Car</code> satisfies the interface of <code>CarInterface</code>.</p>
<p>However, I get two type-checker errors.</p>
<p>The first is on the <code>mar</code> annotation in <code>def blah</code></p>
<pre><code>TypeVar "mar" appears only once in generic function signaturePylancereportInvalidTypeVarUse
</code></pre>
<p>And the other is where I pass <code>car_instance</code> into <code>blah()</code></p>
<pre><code>Argument of type "Car" cannot be assigned to parameter "x" of type "bar@blah" in function "blah"
Type "Car" cannot be assigned to type "CarInterface"
"Car" is incompatible with "CarInterface" Pylance(reportGeneralTypeIssues)
</code></pre>
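For reference, the standard way to express this kind of structural ("duck") typing to static checkers is typing.Protocol rather than a bound TypeVar: any class with the required attributes matches, with no inheritance needed. A sketch mirroring the code above (the protocol name HasAB is illustrative; blah returns instead of prints so the behavior is easy to check):

```python
from dataclasses import dataclass
from typing import Protocol

class HasAB(Protocol):
    # Any object with string attributes `a` and `b` matches,
    # regardless of extra attributes like `zar`.
    a: str
    b: str

def blah(x: HasAB) -> str:
    return x.a

@dataclass
class Car:
    a: str
    zar: str
    b: str

print(blah(Car(a="blah blah", zar="11", b="bb")))  # blah blah
```

Future classes then need no registration or base class; they satisfy HasAB simply by having `a` and `b`.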
|
<python><python-3.x><duck-typing>
|
2023-02-06 01:48:01
| 1
| 690
|
Sean
|
75,356,685
| 5,363,621
|
Python delete rows for each group after first occurance in a column
|
<p>I Have a dataframe as follows:</p>
<pre><code>df = pd.DataFrame({'Key':[1,1,1,1,2,2,2,4,4,4,5,5],
'Activity':['A','A','H','B','B','H','H','A','C','H','H','B'],
'Date':['2022-12-03','2022-12-04','2022-12-06','2022-12-08','2022-12-03','2022-12-06','2022-12-10','2022-12-03','2022-12-04','2022-12-07','2022-12-03','2022-12-13']})
</code></pre>
<p><a href="https://i.sstatic.net/GUxAP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GUxAP.png" alt="example input df" /></a></p>
<p>I need to count the activities for each 'Key' that occur before 'Activity' == 'H' as follows:</p>
<p><strong>Required Output</strong></p>
<p><a href="https://i.sstatic.net/Mrw8T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mrw8T.png" alt="expected output" /></a></p>
<p><strong>My Approach</strong></p>
<ol>
<li><p>Sort df by Key & Date ( Sample input is already sorted)</p>
</li>
<li><p>drop the rows that occur after 'H' Activity in each group as follows:</p>
<p><a href="https://i.sstatic.net/jVXI7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jVXI7.png" alt="enter image description here" /></a></p>
</li>
<li><p>Groupby df.groupby(['Key', 'Activity']).count()</p>
</li>
</ol>
<p>Is there a better approach , if not then help me in code for dropping the rows that occur after 'H' Activity in each group.</p>
<p>Thanks in advance !</p>
|
<python><pandas>
|
2023-02-06 01:39:54
| 3
| 915
|
deega
|
75,356,644
| 6,534,818
|
Python Async what is causing the memory leak?
|
<p>I am downloading zip files and looking inside them to check their contents for a few million items, but I am constantly accruing memory and I will eventually go OOM, even with small semaphores.</p>
<p>Consider the block:</p>
<pre><code> async def zip_reader(self, blobFileName, blobEndPoint, semaphore):
try:
# access blob
async with ClientSecretCredential(TENANT, CLIENTID, CLIENTSECRET) as credential:
async with BlobServiceClient(account_url="https://blob1.blob.core.windows.net/", credential=credential, max_single_get_size=64 * 1024 * 1024, max_chunk_get_size=32 * 1024 * 1024) as blob_service_client:
async with blob_service_client.get_blob_client(container=blobEndPoint, blob=blobFileName) as blob_client:
async with semaphore:
logger.info(f"Starting: {blobFileName}, {blobEndPoint}")
# open bytes
writtenbytes = io.BytesIO()
# write file to it
stream = await blob_client.download_blob(max_concurrency=25)
stream = await stream.readinto(writtenbytes)
# zipfile
f = ZipFile(writtenbytes)
# file list
file_list = [s for s in f.namelist()]
# send to df
t_df = pd.DataFrame({'fileList': file_list})
# add fileName
t_df['blobFileName'] = blobFileName
t_df['blobEndPoint'] = blobEndPoint
if semaphore.locked():
await asyncio.sleep(1)
logger.info(f"Completed: {blobFileName}")
# clean up here; also tried del on objs here as well
self.cleanup()
return t_df
async def cleanup(self):
gc.collect()
await asyncio.sleep(1)
async def async_file_as_bytes_generator(self, blobFileName, blobEndPoint, semaphore):
"""
main caller
"""
semaphore = asyncio.Semaphore(value=semaphore)
        return await asyncio.gather(*[self.zip_reader(fn, ep, semaphore) for fn, ep in zip(blobFileName, blobEndPoint)])  # also tried attaching here
</code></pre>
|
<python><python-asyncio>
|
2023-02-06 01:29:31
| 1
| 1,859
|
John Stud
|
75,356,619
| 14,109,040
|
Scraping table of data from webpage with inconsistently nested html tags
|
<p>I am trying to scrape some data off of the tables in <a href="https://www.ptv.vic.gov.au/footer/data-and-reporting/network-performance/daily-performance/" rel="nofollow noreferrer">https://www.ptv.vic.gov.au/footer/data-and-reporting/network-performance/daily-performance/</a>.
Specifically, I want to scrape the 'Metropolitan tram' table. However, the HTML elements aren't structured well, and I am unsure how to identify the table by name and scrape its content.</p>
<p>This is what I have tried:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
URL = "https://www.ptv.vic.gov.au/footer/data-and-reporting/network-performance/daily-performance/"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
tables = soup.find_all("div", class_="mceTmpl table__wrapper")
for table in tables:
print("NEXT-------------------------------------------")
print(table, end="\n"*2)
</code></pre>
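For reference, when a table has no distinctive id or class, a common pattern is to find the heading text and then walk forward to the nearest <table>. A sketch on inline HTML (the live page's markup may differ, so the heading tags used here are an assumption):

```python
from bs4 import BeautifulSoup

html = """
<h3>Metropolitan train</h3><table><tr><td>train data</td></tr></table>
<h3>Metropolitan tram</h3>
<table>
  <tr><th>Date</th><th>Punctuality</th></tr>
  <tr><td>1 Feb</td><td>92%</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Locate the heading by its text, then take the next table element.
heading = soup.find(lambda t: t.name in ("h2", "h3") and "Metropolitan tram" in t.get_text())
table = heading.find_next("table")
rows = [[c.get_text(strip=True) for c in tr.find_all(["th", "td"])]
        for tr in table.find_all("tr")]
print(rows)  # [['Date', 'Punctuality'], ['1 Feb', '92%']]
```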
|
<python><web-scraping><beautifulsoup><html-table>
|
2023-02-06 01:22:53
| 1
| 712
|
z star
|
75,356,413
| 3,810,748
|
How does pytorch automatically know what are my model's parameters?
|
<p>I have defined the custom class as follows:</p>
<pre><code>class MLPClassifier(nn.Module):
"""
A basic multi-layer perceptron classifier with 3 layers.
"""
def __init__(self, input_size, hidden_size, num_classes):
"""
The constructor for the MLPClassifier class.
"""
super(MLPClassifier, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size) # weights & biases for the input-to-hidden layer
self.ac1 = nn.ReLU() # non-linear activation for the input-to-hidden layer
self.fc2 = nn.Linear(hidden_size, num_classes) # weights & biases for the hidden-to-output layer
self.ac2 = nn.Softmax(dim=1) # non-linear activation for the hidden-to-output layer
</code></pre>
<p>When I run the following script I get this:</p>
<pre><code>hyper_param_input_size = 4
hyper_param_hidden_size = 64
hyper_param_num_classes = 3
model = MLPClassifier(hyper_param_input_size, hyper_param_hidden_size, hyper_param_num_classes)
for p in model.parameters():
print(p.shape)
>>> torch.Size([64, 4])
>>> torch.Size([64])
>>> torch.Size([3, 64])
>>> torch.Size([3])
</code></pre>
<p>How on earth does PyTorch automatically know about my internally defined attributes when I never explicitly told it? Does it loop through everything in the class and check if <code>isinstance(self, nn.Layer)</code> or something?</p>
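For reference, nn.Module does not scan the class afterwards: it overrides __setattr__, so every assignment like self.fc1 = nn.Linear(...) is intercepted and recorded in internal registries (self._parameters, self._modules) at assignment time, and parameters() then walks those registries. A torch-free sketch of the mechanism (heavily simplified; the real version also handles buffers and nested modules):

```python
class Param:
    """Stand-in for torch.nn.Parameter."""
    def __init__(self, shape):
        self.shape = shape

class TinyModule:
    """Stand-in for torch.nn.Module's registration machinery."""
    def __init__(self):
        # Bypass our own __setattr__ while creating the registry itself.
        object.__setattr__(self, "_params", {})

    def __setattr__(self, name, value):
        # Intercept every attribute assignment; record parameters as they appear.
        if isinstance(value, Param):
            self._params[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        return list(self._params.values())

m = TinyModule()
m.fc1_weight = Param((64, 4))   # registered automatically on assignment
m.name = "mlp"                  # plain attribute, not registered
print([p.shape for p in m.parameters()])  # [(64, 4)]
```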
|
<python><pytorch>
|
2023-02-06 00:18:35
| 1
| 6,155
|
AlanSTACK
|
75,356,400
| 6,760,729
|
How to parallel for loop in Sagemaker Processing job
|
<p>I'm running Python code in a SageMaker Processing job, specifically an SKLearnProcessor. The code runs a for-loop 200 times (each iteration is independent), and each iteration takes 20 minutes.
For example, script.py:</p>
<pre><code>for i in list:
run_function(i)
</code></pre>
<p>I'm kicking off the job from a notebook:</p>
<pre><code>sklearn_processor = SKLearnProcessor(
framework_version="1.0-1", role=role,
instance_type="ml.m5.4xlarge", instance_count=1,
sagemaker_session = Session()
)
out_path = 's3://' + os.path.join(bucket, prefix,'outpath')
sklearn_processor.run(
code="script.py",
outputs=[
ProcessingOutput(output_name="load_training_data",
                         source = '/opt/ml/processing/output',
destination = out_path),
],
arguments=["--some-args", "args"]
)
</code></pre>
<p>I want to parallelize this code so the SageMaker Processing job uses its full capacity to run as many concurrent tasks as possible.
How can I do that?</p>
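For reference, since the iterations are independent, one option that stays on a single instance is to fan the loop out across the instance's vCPUs inside script.py (an ml.m5.4xlarge has 16 vCPUs); run_function below is a placeholder for the real 20-minute task:

```python
import os
from multiprocessing import Pool

def run_function(i):
    # Placeholder for the real, independent 20-minute workload.
    return i * i

if __name__ == "__main__":
    items = list(range(200))
    # One worker per vCPU; each worker pulls the next item when it finishes.
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(run_function, items)
    print(len(results))
```

Note that raising instance_count alone does not split the loop: each instance runs the same script unless the work is sharded, e.g. by reading this instance's position from the resource config SageMaker mounts into the container.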
|
<python><amazon-web-services><distributed-computing><amazon-sagemaker><embarrassingly-parallel>
|
2023-02-06 00:15:48
| 1
| 585
|
SKSKSKSK
|
75,356,395
| 14,224,000
|
Cannot create weak reference to 'Weakcallableproxy' object in Pytorch Module
|
<p>When I run my project on my system it runs fine, but when I build it as an nvidia-docker2 container and run it, I get the following error:</p>
<p>I ensured that my PyTorch and CUDA versions are almost the same in both environments, whereas the Python version differs: 3.10 on my system, 3.8 in the Docker container.</p>
<p>Docker container Details :</p>
<pre><code>➜ Face-Recognition-From-Crowd ⚡ 3 hours ago ( master)▶ sudo docker run --gpus all --device /dev/video0 --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -it face-crowd bash
=============
== PyTorch ==
=============
NVIDIA Release 22.11 (build 48503342)
PyTorch Version 1.13.0a0+936e930
Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2022 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
root@a08973389041:/app# python3 run.py --source live
</code></pre>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2548, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2528, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/app/app.py", line 57, in video_feed
video = Process(os.path.abspath('./temp'),
File "/app/process.py", line 45, in __init__
self.recognizer = Predictor(file=False, label=True)
File "/app/scripts/FaceRecognition.py", line 31, in __init__
self.model = SingleShotLearningFR(pretrained=True)
File "/app/scripts/FRMethods/SingleShotLearningFR.py", line 29, in __init__
super(SingleShotLearningFR, self).__init__()
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/core/module.py", line 124, in __init__
self._register_sharded_tensor_state_dict_hooks_if_available()
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/core/module.py", line 2022, in _register_sharded_tensor_state_dict_hooks_if_available
self.__class__._register_load_state_dict_pre_hook(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _register_load_state_dict_pre_hook
self._load_state_dict_pre_hooks[handle.id] = _WrappedHook(hook, self if with_module else None)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 54, in __init__
self.module: weakref.ReferenceType["Module"] = weakref.ref(module)
TypeError: cannot create weak reference to 'weakcallableproxy' object
</code></pre>
<p>Note: ask me for any details you need and I will add them as an edit.
Please also mention possible mistakes that could cause this error.</p>
|
<python><pytorch><garbage-collection><nvidia-docker>
|
2023-02-06 00:14:27
| 1
| 931
|
NavinKumarmMNK
|
75,356,215
| 420,827
|
DataFrame groupby on each item within a column of lists
|
<p>I have a dataframe (<code>df</code>):</p>
<pre class="lang-markdown prettyprint-override"><code>| A | B | C |
| --- | ----- | ----------------------- |
| CA | Jon | [sales, engineering] |
| NY | Sarah | [engineering, IT] |
| VA | Vox | [services, engineering] |
</code></pre>
<p>I am trying to group by each item in the <code>C</code> column list (sales, engineering, IT, etc.).</p>
<p>Tried:</p>
<pre><code>df.groupby('C')
</code></pre>
<p>but got a "list is not hashable" error, which is expected. I came across another <a href="https://stackoverflow.com/questions/49434712/pandas-groupby-on-a-column-of-lists">post</a> where it was recommended to convert the <code>C</code> column to tuples, which are hashable, but I need to group by each item, not the combination.</p>
<p>My goal is to get the count of each row in the <code>df</code> for each item in the <code>C</code> column list. So:</p>
<pre><code>sales: 1
engineering: 3
IT: 1
services: 1
</code></pre>
<p>While there is probably a simpler way to obtain this than using <code>groupby</code>, I am still curious if <code>groupby</code> can be used in this case.</p>
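For reference, groupby can indeed be used once the lists are flattened with explode, which emits one row per list element; a sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["CA", "NY", "VA"],
    "B": ["Jon", "Sarah", "Vox"],
    "C": [["sales", "engineering"], ["engineering", "IT"], ["services", "engineering"]],
})

# One row per department, then an ordinary groupby count:
counts = df.explode("C").groupby("C").size()
print(counts.to_dict())  # {'IT': 1, 'engineering': 3, 'sales': 1, 'services': 1}
```

Each exploded row keeps its original A and B values, so the same pattern supports any further per-department aggregation.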
|
<python><pandas>
|
2023-02-05 23:25:34
| 1
| 551
|
Manas
|
75,356,081
| 1,893,275
|
Print whole numbers as integers in Pandas LaTeX conversion
|
<p>I'm trying to write a small script to print LaTeX tables based on CSV.
A lot of the functionality formerly in e.g. <code>matrix2latex</code> has now been included in Pandas proper, which is cool.</p>
<p>However, no matter what I do (I tried a number of the other suggestions on here; it ended up becoming a convoluted mess which in effect changed nothing), the output keeps coming out like this:</p>
<pre><code>[deco]/tmp/table ❱ python lala.py
Dataframe:
Unnamed: 0 Treatment Implant Transgenic Untreated Mice Transgenic Treated Mice Wildtype Mice Total Mice
0 P1 Armodafinil VTA 20+15 20.0 5.0 60
1 P2 NaN LC 50 NaN 10.0 60
2 P3 Escitalopram DR 20 20.0 NaN 40
3 P4 Reboxetine LC 20 20.0 NaN 40
LaTeX Table Conversion:
\begin{tabular}{lllllrrr}
& Unnamed: 0 & Treatment & Implant & Transgenic Untreated Mice & Transgenic Treated Mice & Wildtype Mice & Total Mice \\
0 & P1 & Armodafinil & VTA & 20+15 & 20.000000 & 5.000000 & 60 \\
1 & P2 & nan & LC & 50 & nan & 10.000000 & 60 \\
2 & P3 & Escitalopram & DR & 20 & 20.000000 & nan & 40 \\
3 & P4 & Reboxetine & LC & 20 & 20.000000 & nan & 40 \\
\end{tabular}
[deco]/tmp/table ❱ cat lala.py
import pandas as pd
df = pd.read_csv("table.csv")
print("\n")
print("Dataframe:")
print("")
print(df)
tex = df.style.to_latex()
print("\n")
print("LaTeX Table Conversion:")
print("")
print(tex)
[deco]/tmp/table ❱ cat table.csv
,Treatment,Implant,Transgenic Untreated Mice,Transgenic Treated Mice,Wildtype Mice,Total Mice
P1,Armodafinil,VTA,20+15,20,5,60
P2,N/A,LC,50,,10,60
P3,Escitalopram,DR,20,20,,40
P4,Reboxetine,LC,20,20,,40
</code></pre>
<p>Is there any way to make sure that whole numbers are always displayed as integers?</p>
|
<python><pandas><csv><types><latex>
|
2023-02-05 22:57:26
| 3
| 18,114
|
TheChymera
|
75,356,060
| 15,011,362
|
How can I get quantity with SP API Python
|
<p>I need a product's units of stock (quantity). Is this possible with the SP API, and if so, how can I get it?</p>
<p>Note: I can get it by SKU as in the following code, but the product is <strong>not listed</strong> by my sellers.</p>
<pre><code>from sp_api.api import Inventories
quantity = Inventories(credentials=credentials, marketplace=Marketplaces.FR).get_inventory_summary_marketplace(**{
"details": False,
"marketplaceIds": ["A13V1IB3VIYZZH"],
"sellerSkus": ["MY_SKU_1" , "MY_SKU_2"]
})
print(quantity)
</code></pre>
|
<python><amazon-selling-partner-api>
|
2023-02-05 22:54:14
| 1
| 452
|
Melisa
|
75,356,032
| 654,019
|
error in installing ultralytics in python as they have conflicting dependencies
|
<p>I am trying to install ultralytics in a virtual environment, and I am getting this error message:</p>
<pre><code>pip install ultralytics
Collecting ultralytics
Using cached ultralytics-8.0.6-py3-none-any.whl (251 kB)
Collecting hydra-core>=1.2.0
Using cached hydra_core-1.3.1-py3-none-any.whl (154 kB)
Collecting matplotlib>=3.2.2
Using cached matplotlib-3.6.3-cp311-cp311-win_amd64.whl (7.2 MB)
Collecting numpy>=1.18.5
Using cached numpy-1.24.2-cp311-cp311-win_amd64.whl (14.8 MB)
Collecting opencv-python>=4.1.1
Using cached opencv_python-4.7.0.68-cp37-abi3-win_amd64.whl (38.2 MB)
Collecting Pillow>=7.1.2
Using cached Pillow-9.4.0-cp311-cp311-win_amd64.whl (2.5 MB)
Collecting PyYAML>=5.3.1
Using cached PyYAML-6.0-cp311-cp311-win_amd64.whl (143 kB)
Collecting requests>=2.23.0
Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting scipy>=1.4.1
Using cached scipy-1.10.0-cp311-cp311-win_amd64.whl (42.2 MB)
Collecting ultralytics
Using cached ultralytics-8.0.5-py3-none-any.whl (248 kB)
Using cached ultralytics-8.0.4-py3-none-any.whl (248 kB)
Using cached ultralytics-8.0.3-py3-none-any.whl (247 kB)
Using cached ultralytics-8.0.2-py3-none-any.whl (224 kB)
Using cached ultralytics-8.0.1-py3-none-any.whl (225 kB)
Using cached ultralytics-8.0.0-py3-none-any.whl (219 kB)
Using cached ultralytics-0.0.44-py3-none-any.whl (16 kB)
Collecting GitPython>=3.1.24
Using cached GitPython-3.1.30-py3-none-any.whl (184 kB)
Collecting ultralytics
Using cached ultralytics-0.0.43-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.42-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.41-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.40-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.39-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.38-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.37-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.36-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.35-py3-none-any.whl (16 kB)
Using cached ultralytics-0.0.34-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.33-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.32-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.31-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.30-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.29-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.28-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.27-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.26-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.25-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.24-py3-none-any.whl (15 kB)
Using cached ultralytics-0.0.23-py3-none-any.whl (14 kB)
Using cached ultralytics-0.0.22-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.21-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.20-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.19-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.18-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.17-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.16-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.15-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.14-py3-none-any.whl (13 kB)
Using cached ultralytics-0.0.13-py3-none-any.whl (13 kB)
ERROR: Cannot install ultralytics==0.0.13, ultralytics==0.0.14, ultralytics==0.0.15, ultralytics==0.0.16, ultralytics==0.0.17, ultralytics==0.0.18, ultralytics==0.0.19, ultralytics==0.0.20, ultralytics==0.0.21, ultralytics==0.0.22, ultralytics==0.0.23, ultralytics==0.0.24, ultralytics==0.0.25, ultralytics==0.0.26, ultralytics==0.0.27, ultralytics==0.0.28, ultralytics==0.0.29, ultralytics==0.0.30, ultralytics==0.0.31, ultralytics==0.0.32, ultralytics==0.0.33, ultralytics==0.0.34, ultralytics==0.0.35, ultralytics==0.0.36, ultralytics==0.0.37, ultralytics==0.0.38, ultralytics==0.0.39, ultralytics==0.0.40, ultralytics==0.0.41, ultralytics==0.0.42, ultralytics==0.0.43, ultralytics==0.0.44, ultralytics==8.0.0, ultralytics==8.0.1, ultralytics==8.0.2, ultralytics==8.0.3, ultralytics==8.0.4, ultralytics==8.0.5 and ultralytics==8.0.6 because these package versions have conflicting dependencies.
The conflict is caused by:
ultralytics 8.0.6 depends on torch>=1.7.0
ultralytics 8.0.5 depends on torch>=1.7.0
ultralytics 8.0.4 depends on torch>=1.7.0
ultralytics 8.0.3 depends on torch>=1.7.0
ultralytics 8.0.2 depends on torch>=1.7.0
ultralytics 8.0.1 depends on torch>=1.7.0
ultralytics 8.0.0 depends on torch>=1.7.0
ultralytics 0.0.44 depends on torch>=1.7.0
ultralytics 0.0.43 depends on torch>=1.7.0
ultralytics 0.0.42 depends on torch>=1.7.0
ultralytics 0.0.41 depends on torch>=1.7.0
ultralytics 0.0.40 depends on torch>=1.7.0
ultralytics 0.0.39 depends on torch>=1.7.0
ultralytics 0.0.38 depends on torch>=1.7.0
ultralytics 0.0.37 depends on torch>=1.7.0
ultralytics 0.0.36 depends on torch>=1.7.0
ultralytics 0.0.35 depends on torch>=1.7.0
ultralytics 0.0.34 depends on torch>=1.7.0
ultralytics 0.0.33 depends on torch>=1.7.0
ultralytics 0.0.32 depends on torch>=1.7.0
ultralytics 0.0.31 depends on torch>=1.7.0
ultralytics 0.0.30 depends on torch>=1.7.0
ultralytics 0.0.29 depends on torch>=1.7.0
ultralytics 0.0.28 depends on torch>=1.7.0
ultralytics 0.0.27 depends on torch>=1.7.0
ultralytics 0.0.26 depends on torch>=1.7.0
ultralytics 0.0.25 depends on torch>=1.7.0
ultralytics 0.0.24 depends on torch>=1.7.0
ultralytics 0.0.23 depends on torch>=1.7.0
ultralytics 0.0.22 depends on torch>=1.7.0
ultralytics 0.0.21 depends on torch>=1.7.0
ultralytics 0.0.20 depends on torch>=1.7.0
ultralytics 0.0.19 depends on torch>=1.7.0
ultralytics 0.0.18 depends on torch>=1.7.0
ultralytics 0.0.17 depends on torch>=1.7.0
ultralytics 0.0.16 depends on torch>=1.7.0
ultralytics 0.0.15 depends on torch>=1.7.0
ultralytics 0.0.14 depends on torch>=1.7.0
ultralytics 0.0.13 depends on torch>=1.7.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>When I tried to install PyTorch manually, I got this error:</p>
<pre><code>pip install torch
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
</code></pre>
<p>How can I fix this issue?</p>
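As a side note on diagnosing this class of pip failure: "No matching distribution found ... (from versions: none)" means pip found no wheel matching the running interpreter's version tag at all. A minimal check of the interpreter (the 3.11 cutoff mentioned in the comment reflects torch's wheel availability around the time of this question and is an assumption worth re-verifying):

```python
import sys

# pip resolves wheels against the running interpreter's version tag.
# "(from versions: none)" means no published wheel matched that tag --
# for torch on Windows in early 2023, CPython 3.11 had no wheels yet.
major, minor = sys.version_info[:2]
print(f"Running CPython {major}.{minor}")
if (major, minor) >= (3, 11):
    print("torch likely has no wheels for this interpreter version")
```

If that is the cause, installing into a 3.10 environment (or waiting for 3.11 wheels) sidesteps the resolver conflict entirely.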
|
<python><pytorch>
|
2023-02-05 22:48:06
| 1
| 18,400
|
mans
|
75,356,007
| 20,959,773
|
How to keep JavaScript values between multiple pages in one session?
|
<p>I have this code:</p>
<pre><code>from selenium.common import TimeoutException, WebDriverException
from selenium import webdriver
import sys
class Main:
    def __init__(self, page_url):
        self.driver = webdriver.Chrome()
        self.element_list = []
        self.page_url = page_url

    def javascript(self):
        self.driver.get(self.page_url)
        js_script = """
        // Callback function
        var done = arguments[arguments.length - 1];
        // Take all the events
        var array_events = []
        var registerOuterHtml = (e) => {
            array_events.push(e.target.outerHTML)
        }
        var whole = (e) => {
            array_events.push(document.documentElement.outerHTML)
        }
        var quit = (key) => {
            console.log(array_events);
            (key.keyCode == 27) ? done(JSON.parse(JSON.stringify(array_events))) : undefined
        }
        // Listen to the clicks
        getElementHtml = document.addEventListener("click", registerOuterHtml, true)
        getDOMHtml = document.addEventListener("click", whole, true)
        // Listen to the key "esc" which means user has gathered all needed events
        getKey = document.addEventListener("keydown", quit, true)
        """
        self.driver.set_script_timeout(10000)
        try:
            try:
                response = self.driver.execute_async_script(js_script)
                print(len(response) / 2)  # should print the number of times you have clicked, for testing purposes
            except TimeoutException:
                print('Program closed after 10,000 seconds!')
                self.driver.quit()
                sys.exit()
        except WebDriverException:
            print(WebDriverException)
            print('Driver may be closed incorrectly or unknown error occurred!')
            return False
</code></pre>
<p>This Python code starts a Chrome browser with Selenium. The user then clicks some elements on the page, where each click gathers a value via the JavaScript function executed by driver.execute_async_script; the function finishes when the 'esc' key is pressed, closing the window and returning the values as well.
It all works fine, but if the user clicks some elements on the original page and then navigates to a different page and clicks some more, the values from the previous page don't get returned (the DOM changes, so the new DOM with the fresh script obviously knows nothing about what happened before).</p>
<p>I have tried storing the variable array_events in localStorage or sessionStorage, but it comes back invalid, so I haven't been able to fix this.</p>
<p>I want the answer to be the JavaScript code correct and ready to be able to handle this task.</p>
|
<javascript><python><html><selenium>
|
2023-02-05 22:42:51
| 0
| 347
|
RifloSnake
|
75,355,852
| 8,696,281
|
How do I get columns that are generated by pandas.get_dummies()?
|
<p>I have the following dataframe:</p>
<pre><code>>>> df
n1 n2 dense c1 c2 c3
0 1 4 [1, 4] a h1 tt
1 2 5 [2, 5] b bbw ebay
2 3 6 [3, 6] c we yahoo
</code></pre>
<p>If I want to create a one-hot encoding columns for <code>c1, c2, c3</code> columns:</p>
<pre><code>>>> df_updated = pd.get_dummies(df, prefix_sep='_', dummy_na=True, columns=['c1', 'c2', 'c3'])
>>> df_updated
n1 n2 dense c1_a c1_b c1_c c1_nan c2_bbw c2_h1 c2_we c2_nan c3_ebay c3_tt c3_yahoo c3_nan
0 1 4 [1, 4] 1 0 0 0 0 1 0 0 0 1 0 0
1 2 5 [2, 5] 0 1 0 0 1 0 0 0 1 0 0 0
2 3 6 [3, 6] 0 0 1 0 0 0 1 0 0 0 1 0
</code></pre>
<p>But how can I get a list of columns that is generated by <code>get_dummies()</code>?</p>
<p>Ex. <code>['c1_a', 'c1_b', 'c1_c', 'c1_nan', 'c2_bbw', 'c2_h1', 'c2_we', 'c2_nan', 'c3_ebay', 'c3_tt', 'c3_yahoo', 'c3_nan']</code></p>
<p>I know one way of doing that is <code>list(set(df_updated.columns) - set(df.columns))</code> but is there a better way?</p>
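A set-free sketch of the same comparison, using only documented pandas Index methods: `Index.difference` avoids the two `set()` conversions and returns the new names in a deterministic (sorted) order rather than arbitrary set order.

```python
import pandas as pd

df = pd.DataFrame({"n1": [1, 2, 3], "c1": ["a", "b", "c"]})
df_updated = pd.get_dummies(df, prefix_sep="_", columns=["c1"])

# Columns present in the encoded frame but not in the original.
new_cols = df_updated.columns.difference(df.columns).tolist()
print(new_cols)  # ['c1_a', 'c1_b', 'c1_c']
```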
|
<python><pandas><dataframe>
|
2023-02-05 22:07:32
| 1
| 783
|
noobie2023
|
75,355,711
| 482,819
|
Modify an attribute of an already defined class in Python (and run its definition again)
|
<p>I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.</p>
<p>For example, consider this class:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
x = 1
y = 2 * x
# Other attributes and methods might follow
assert Base.x == 1
assert Base.y == 2
</code></pre>
<p>I would like to change <code>x</code> to <code>2</code>, making it equivalent to this.</p>
<pre class="lang-py prettyprint-override"><code>class Base:
x = 2
y = 2 * x
assert Base.x == 2
assert Base.y == 4
</code></pre>
<p>But I would like to make it in the following way:</p>
<pre class="lang-py prettyprint-override"><code>Base = injector(Base, x=2)
</code></pre>
<p>Is there a way to achieve this WITHOUT recompile the original class source code?</p>
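Not a literal `injector` (that would need the original class body's source to re-execute), but one common restructuring — shown here only as a sketch, with the factory name chosen for illustration — is to wrap the class body in a factory so dependent attributes are recomputed on every call:

```python
def make_base(x_val=1):
    # The class body runs each time the factory is called,
    # so y is recomputed from the injected value.
    class Base:
        x = x_val
        y = 2 * x  # class-body reads see earlier class attributes
    return Base

Base = make_base()
print(Base.x, Base.y)  # 1 2

Base = make_base(x_val=2)  # "inject" a new x; y follows automatically
print(Base.x, Base.y)  # 2 4
```

The trade-off is that this requires touching the declaration once; in exchange, no source recompilation or metaclass tricks are needed afterwards.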
|
<python><class><metaclass><inject>
|
2023-02-05 21:40:08
| 2
| 6,143
|
Hernan
|
75,355,708
| 17,696,880
|
Why are these negative-lookaheads failing to constrain a regex pattern that decides when to perform substitutions with the re.sub() function?
|
<pre class="lang-py prettyprint-override"><code>import re
#Example input:
input_text = "en la montaña a las (2023_-_02_-_04(19:00 pm)) o a las 2023_-_02_-_04 19:00 pm aasassa 2023_-_02_-_04 sdshdhshj 19:00 pm 2023_-_02_-_04 fgfg"
date_format_00 = r"(\d*)_-_(\d{2})_-_(\d{2})" # Y*_-_MM_DD
identificate_hours = r"(?:(\d{1,2})|)(?:(?:\:| )(\d{1,2})|)\s*(?:(am)|(pm)|)"
#I use a negative lookahead "(?!\()" to avoid matches
date_format_00_blocking = r"(?!\d*_-_\d{2}_-_\d{2}\s*)"
identificate_hours_blocking = r"(?!\d{1,2}(?:\:| )\d{1,2}\s*(?:am|pm|))"
#Both re.sub() should achieve replacements of the type: "2023_-_02_-_04" --> "(2023_-_01_-_11(00:00 am))"
#However, only the first of the re.sub() (in the order of the reading of the code that the interpreter does) will do it, since after that it will already be labeled with the parentheses
#Only replacements of the type: "2023_-_02_-_04 19:00" --> "(2023_-_01_-_11(19:00 pm))"
input_text = re.sub(r"(?!\()" + identificate_hours_blocking + r"\s*" + date_format_00 + r"\s*(?:a\s*la(?:s| )|)\s*" + identificate_hours + r"\s*" + date_format_00_blocking + r"\s*(?!\))",
lambda m: (f"({m[1]}_-_{m[2]}_-_{m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'}))"),
input_text, flags = re.IGNORECASE)
print(repr(input_text)) # --> print after first re.sub()
#Only replacements of the type: "19:00 2023_-_02_-_04" --> "(2023_-_01_-_11(19:00 pm))"
input_text = re.sub(r"(?!\()" + date_format_00_blocking + r"\s*" + identificate_hours + r"\s*(?:del|de\s*el|de |)\s*" + date_format_00 + r"\s*" + identificate_hours_blocking + r"\s*(?!\))",
lambda m: (f"({m[5]}_-_{m[6]}_-_{m[7]}({m[1] or '00'}:{m[2] or '00'} {m[3] or m[4] or 'am'}))"),
input_text, flags = re.IGNORECASE)
print(repr(input_text)) # --> print after second re.sub()
</code></pre>
<p>Given the input received in the <code>input_text</code> variable, 2 replacements must be performed using the <code>re.sub()</code> function (or similar), but these replacements must be performed only under certain conditions. I have tried many times to restrict these possibilities so that unwanted replacements are not made prematurely within the code.</p>
<p>Neither of these two <code>re.sub()</code> calls should modify date-times <code>Y*_MM_DD hh:ss am or pm</code> that are already protected by parentheses, like this: <code>(Y*_MM_DD(hh:ss am or pm))</code>. For this reason I have placed these restrictions at the beginning, <code>(?!\()</code>, and at the end, <code>(?!\))</code>.</p>
<p>The first <code>re.sub()</code> should not replace dates if a <code>hour:minutes</code> ahead is specified.</p>
<p>The second <code>re.sub()</code> should not replace dates if a <code>hour:minutes</code> is specified behind.</p>
<p>For some unknown reason <strong>my negative lookaheads are not working</strong> and I get this <strong>wrong output</strong>:</p>
<pre><code>#wrong output print after first re.sub()
'en la montaña a las ((2023_-_02_-_04(00:00 am))(19:00 pm)) o a las(2023_-_02_-_04(19:00 pm))aasassa(2023_-_02_-_04(00:00 am))sdshdhshj 19:00 pm(2023_-_02_-_04(00:00 am))fgfg'
#wrong output print after second re.sub()
'en la montaña a las ((2023_-_02_-_04(00:00 am))(19:00 pm)) o a las(2023_-_02_-_04(19:00 pm))aasassa(2023_-_02_-_04(00:00 am))sdshdhshj 19:00 pm(2023_-_02_-_04(00:00 am))fgfg'
</code></pre>
<p>And this is the <strong>correct output</strong> I'm trying to get</p>
<pre><code>#correct output print after first re.sub()
'en la montaña a las (2023_-_02_-_04(19:00 pm)) o a las (2023_-_02_-_04(19:00 pm)) aasassa (2023_-_02_-_04(00:00 am)) sdshdhshj 19:00 pm 2023_-_02_-_04 fgfg'
#correct output print after second re.sub()
'en la montaña a las (2023_-_02_-_04(19:00 pm)) o a las (2023_-_02_-_04(19:00 pm)) aasassa (2023_-_02_-_04(00:00 am)) sdshdhshj (2023_-_02_-_04(19:00 pm)) fgfg'
</code></pre>
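A side note on the debugging itself, independent of the full patterns above: a leading negative lookahead is a weaker guard than it looks, because it only vetoes the match attempt starting at that exact position — the engine is free to retry one character later. A minimal illustration:

```python
import re

# (?!a) only fails the attempt that starts right before an "a";
# the scan simply resumes at the next position, so "bc" still matches.
print(re.findall(r"(?!a)\w+", "abc"))    # ['bc']

# Anchoring the guard (here with a word boundary) rejects the whole token.
print(re.findall(r"\b(?!a)\w+", "abc"))  # []
```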
|
<python><python-3.x><regex><regex-group><regexp-replace>
|
2023-02-05 21:39:27
| 0
| 875
|
Matt095
|
75,355,572
| 7,376,511
|
poetry + watchmedo + uwsgi: unrecognized arguments
|
<p>I have a uWSGI application that I want to monitor with watchmedo, and is under a poetry environment.</p>
<p>Unfortunately, the following does not work:</p>
<pre><code>poetry run watchmedo auto-restart --directory=./ --pattern="*.py;*.yml;*.html" --recursive -- uwsgi --ini=uwsgi.ini
</code></pre>
<p>This command raises:</p>
<pre><code>watchmedo: error: unrecognized arguments: --ini=uwsgi.ini
</code></pre>
<p>it seems that poetry is incapable of understanding that I'm passing parameters to uwsgi, not to watchmedo. I tried multiple permutations of this command and I could not get it to work. What am I missing? Even running this as <code>poetry run bash -c</code> still raises the same error.</p>
|
<python><uwsgi><python-poetry>
|
2023-02-05 21:16:37
| 1
| 797
|
Some Guy
|
75,355,560
| 14,676,485
|
Visualizing density function - difference between displot() and plot()
|
<p>I visualize a density function (PDF) using two plotting approaches: <code>displot()</code> and <code>plot()</code>. I don't understand why <code>displot()</code> doesn't produce a normally distributed plot whereas <code>plot()</code> does this perfectly. The density plots should look alike, but they don't. What's wrong with <code>displot()</code> here?</p>
<pre><code>from scipy.stats import norm
import seaborn as sns
import numpy as np
data_x= np.arange(-4, 4, 0.001)
norm_pdf = norm.pdf(data_x)
sns.displot(data = norm_pdf, x = data_x, kind='kde')
</code></pre>
<p><a href="https://i.sstatic.net/Ud8K8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ud8K8.png" alt="enter image description here" /></a></p>
<pre><code>from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np
data_x= np.arange(-4, 4, 0.001)
plt.plot(data_x, norm.pdf(data_x))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/rD3ap.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rD3ap.png" alt="enter image description here" /></a></p>
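A sketch of the underlying distinction, using scipy only so it runs without a plotting backend: `kind='kde'` estimates the distribution of the values you pass as a *sample*, whereas `plt.plot` simply draws y against x. The first snippet above therefore asks for the density of the pdf *values* (numbers between 0 and about 0.4), not the normal curve. Passing actual draws from the normal gives a Gaussian KDE (the same estimator family seaborn's KDE uses) that tracks the true pdf:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
draws = rng.normal(size=50_000)   # samples *from* N(0, 1)
kde = gaussian_kde(draws)         # density estimate of the sample

# Evaluated at the mode, the KDE of real draws is close to the true density.
print(float(kde(0.0)[0]), norm.pdf(0.0))
```

So `sns.displot(data=sample, kind='kde')` with `sample = rng.normal(size=...)` would produce the expected bell curve.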
|
<python><statistics><seaborn><data-science>
|
2023-02-05 21:15:06
| 1
| 911
|
mustafa00
|
75,355,269
| 2,716,241
|
Installing GDAL for python in Google Cloud Functions -- error when deploying
|
<p>I've been unsuccessful in using GDAL in a Google Cloud Function with Python 3.9. I've included gdal in the "requirements.txt" file:</p>
<pre><code>numpy
pygrib
requests
google-cloud-storage
gdal
</code></pre>
<p>But get the following error when deploying the function:</p>
<pre><code>Build failed: .../setuptools/command/egg_info.py", line 541, in run
self.add_defaults()
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/setuptools/command/egg_info.py", line 578, in add_defaults
sdist.add_defaults(self)
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 228, in add_defaults
self._add_defaults_ext()
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 311, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/layers/google.python.pip/pip/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/tmp/pip-install-d9gag00y/gdal_e01a421a21914f03a3c89fd3914501b0/setup.py", line 255, in finalize_options
gdaldir = self.get_gdal_config('prefix')
File "/tmp/pip-install-d9gag00y/gdal_e01a421a21914f03a3c89fd3914501b0/setup.py", line 194, in get_gdal_config
raise gdal_config_error(traceback_string + '\n' + msg)
__main__.gdal_config_error: Traceback (most recent call last):
File "/tmp/pip-install-d9gag00y/gdal_e01a421a21914f03a3c89fd3914501b0/setup.py", line 87, in fetch_config
p = subprocess.Popen([command, args], stdout=subprocess.PIPE)
File "/layers/google.python.runtime/python/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/layers/google.python.runtime/python/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gdal-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pip-install-d9gag00y/gdal_e01a421a21914f03a3c89fd3914501b0/setup.py", line 188, in get_gdal_config
return fetch_config(option, gdal_config=self.gdal_config)
File "/tmp/pip-install-d9gag00y/gdal_e01a421a21914f03a3c89fd3914501b0/setup.py", line 90, in fetch_config
raise gdal_config_error(e)
gdal_config_error: [Errno 2] No such file or directory: 'gdal-config'
Could not find gdal-config. Make sure you have installed the GDAL native library and development headers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.; Error ID: c84b3231
</code></pre>
<p>Reviewing other similar posts, e.g. <a href="https://gis.stackexchange.com/questions/28966/python-gdal-package-missing-header-file-when-installing-via-pip">GDAL package missing</a>, it seems that the GDAL native library and development headers need to be installed (<code>sudo apt-get install libgdal-dev</code>) before <code>pip install gdal</code> will work. Given that I can't run apt-get for a cloud function, does anyone have a fix or workaround?</p>
|
<python><installation><pip><google-cloud-functions><gdal>
|
2023-02-05 20:22:12
| 1
| 545
|
TheGeographer
|
75,355,268
| 15,724,084
|
python tkinter waiting for an input to be inputted by user
|
<p>I have a code snippet. Entry widget creates itself with text, then waits seconds then destroys itself.</p>
<pre><code> entry_var_temporary = tk.StringVar()
entry_var_temporary.set(varsoundTitle_usernameHeroContainer)
entry_shtname_temp=tk.Entry(canvas2,width=30,textvariable=entry_var_temporary)
entry_shtname_temp.pack()
entry_shtname_temp.focus_set()
root.update()
time.sleep(10)
entry_shtname_temp.destroy()
root.update()
</code></pre>
<p>I wait 10 seconds so the user can modify the text if they want to. But as far as I can see, time.sleep blocks the GUI, so the Entry widget cannot be modified during the wait.
How can I get around this problem?
With <code>wait_window()</code> I realized I can edit the text inside the widget, but my problem is that editing is not compulsory: if the user doesn't put any text in it, the widget still needs to be destroyed after 10 seconds.</p>
|
<python><tkinter>
|
2023-02-05 20:22:05
| 2
| 741
|
xlmaster
|
75,355,213
| 3,484,568
|
How can I update a linearmodels PanelResults object with custom bootstrap estimates?
|
<p>I built my own class to implement an estimation procedure (call it <code>EstimationProcedure</code>). To run the procedure, the user calls method <code>fit</code>. First, this fits a Pooled OLS model using the <a href="https://bashtage.github.io/linearmodels/panel/panel/linearmodels.panel.model.PooledOLS.fit.html#linearmodels.panel.model.PooledOLS.fit" rel="nofollow noreferrer"><code>fit</code></a> method of the <a href="https://bashtage.github.io/linearmodels/panel/panel/linearmodels.panel.model.PooledOLS.html" rel="nofollow noreferrer"><code>PooledOLS</code></a> class from the <a href="https://bashtage.github.io/linearmodels/" rel="nofollow noreferrer"><code>linearmodels</code></a> package. This returns a <a href="https://bashtage.github.io/linearmodels/panel/panel/linearmodels.panel.results.PanelEffectsResults.html#linearmodels.panel.results.PanelEffectsResults" rel="nofollow noreferrer"><code>PanelResults</code></a> object which I store in variable <code>model</code>. Second, my <code>fit</code> method estimates, e.g., standard errors, t-statistics, p-values, etc. (using a custom bootstrapping method I wrote) whose results are stored in local variables, e.g., <code>std_errors</code>, <code>tstats</code>, <code>pvalues</code>, etc. My method shall now return a <code>PanelResults</code> object that combines information from the initial estimation and my own estimates (because I want to use <code>linearmodel</code>'s capabilities to compare multiple regressions and produce latex output).</p>
<p>To this end, I need to create a new <code>PanelResults</code> object. However, the necessary information is not accessible through attributes of <code>model</code>.</p>
<p>Conceptually, what would I need to do to implement this? Or is there a smarter way to achieve this? I suppose that this is rather a question on OOP which I have no experience with.</p>
<p>The following code illustrates the structure of my class:</p>
<pre class="lang-py prettyprint-override"><code>from linearmodels.panel import PooledOLS
from linearmodels.panel.results import PanelResults
class EstimationProcedure:
def __init__(self, data):
self.data = data
def fit(self):
# estimate Pooled OLS
model = PooledOLS(self.data)
# construct my own results using a bootstrap procedure
# this requires the result from an initial PooledOLS estimation
std_errors, tstats, pvalues = self.bootstrap(self.data)
# to create and return a new PanelResults object, I need
# to pass a number of results, say `res`, from the initial
# pooled OLS estimation along with my own results to the
# constructor. However, `PooledOLS` prepares
# estimation results required by `PanelResults`'s
# constructor internally without making them accessible
# through attributes. Hence, I cannot "recreate" it.
res = dict()
return PanelResults(res)
# data is stored in some dataframe
df = pd.DataFrame()
# usage of my estimation procedure
model = EstimationProcedure(df)
model.fit()
</code></pre>
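This is not linearmodels-specific (its `PanelResults` constructor takes an internal results bundle that `PooledOLS` does not expose), but the conceptual OOP pattern being asked for is delegation: wrap the fitted results object and override only the attributes the bootstrap replaces. A generic sketch, with class and attribute names chosen purely for illustration:

```python
class BootstrapResults:
    """Delegates attribute access to a fitted results object,
    substituting bootstrap estimates where provided."""

    def __init__(self, base, **overrides):
        self._base = base
        self._overrides = overrides

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. for everything
        # except _base and _overrides themselves.
        if name in self._overrides:
            return self._overrides[name]
        return getattr(self._base, name)


# Stand-in for a fitted results object (hypothetical attributes).
class _Fitted:
    params = [1.0, 2.0]
    std_errors = [0.30, 0.40]

res = BootstrapResults(_Fitted(), std_errors=[0.25, 0.35])
print(res.params)      # [1.0, 2.0]   -- passed through
print(res.std_errors)  # [0.25, 0.35] -- bootstrap override
```

One caveat: linearmodels' own comparison and summary helpers may type-check their inputs, so full interoperability with `compare()` or LaTeX output could still require subclassing the real results class rather than wrapping it.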
|
<python><oop><linearmodels>
|
2023-02-05 20:13:51
| 2
| 618
|
Jhonny
|
75,354,898
| 7,895,542
|
Python get meta data for hidden files on windows
|
<p>I have been using the replies from <a href="https://stackoverflow.com/questions/12521525/reading-metadata-with-python">here</a> to read out the metadata of files on Windows.
However, I noticed that it just ignores hidden files.</p>
<p>How can one also include hidden files in this approach?</p>
|
<python><winapi><pywin32>
|
2023-02-05 19:21:34
| 1
| 360
|
J.N.
|
75,354,886
| 7,318,120
|
copying modules from python 3.10 to 3.11 (does not work)
|
<p>I am trying to copy modules from python <code>3.10</code> to <code>3.11</code>.
I am using <code>windows 11</code>.</p>
<ul>
<li>My understanding is that one just downloads and install the new version of python.</li>
<li>I make sure that <code>python is added to path</code>.</li>
</ul>
<p>i follow this instruction: <a href="https://stackoverflow.com/questions/74190755/copying-modules-from-python-3-10-to-3-11">copying modules from python 3.10 to 3.11</a></p>
<p>i then do this:</p>
<pre><code>python3.10 -m pip freeze > requirements.txt
python3.11 -m pip install -r requirements.txt
</code></pre>
<p>but it throws an error message:</p>
<pre><code>'python3.10' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>So i do this:</p>
<pre><code>where python
</code></pre>
<p>to get this:</p>
<pre><code>C:\Users\admin\AppData\Local\Programs\Python\Python311\python.exe
C:\Users\admin\AppData\Local\Programs\Python\Python310\python.exe
C:\Users\admin\AppData\Local\Programs\Python\Python39\python.exe
C:\Users\admin\AppData\Local\Microsoft\WindowsApps\python.exe
</code></pre>
<p>I note the guidance here: <a href="https://pip.pypa.io/en/stable/cli/pip_freeze/" rel="nofollow noreferrer">https://pip.pypa.io/en/stable/cli/pip_freeze/</a>
which states this:</p>
<pre><code>env1\bin\python -m pip freeze > requirements.txt
env2\bin\python -m pip install -r requirements.txt
</code></pre>
<p>So my question is, with my paths and the above instruction, how do I implement the correct command so that all the packages are successfully updated in the new python version?</p>
<p><strong>update</strong>:</p>
<p>is this the correct implementation ?</p>
<pre><code>C:\Users\admin\AppData\Local\Programs\Python\Python310\python -m pip freeze > requirements.txt
C:\Users\admin\AppData\Local\Programs\Python\Python311\python -m install -r requirements.txt
</code></pre>
<p>And if so do i need to copy the requirements.txt file to the new path ?</p>
|
<python><pip><upgrade>
|
2023-02-05 19:19:46
| 1
| 6,075
|
darren
|
75,354,878
| 106,140
|
Python making tuple of an array with a step 2
|
<pre><code>els = [1, 2, 3, 4, ]
print([(v, els[idx + 1]) for idx, v in enumerate(els[::2])])
</code></pre>
<p>Why does Python output <code>[(1, 2), (3, 3)]</code> instead of <code>[(1, 2), (3, </code><strong>4</strong><code>)]</code>?</p>
<p>PS: I know I could do this: <code>[(els[i], els[i + 1]) for i in range(0,len(els),2)]</code> I'm not asking for a solution, I'm asking <em>why is this</em>?</p>
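To make the mismatch visible: `enumerate` numbers the *sliced* list, not the original, so `idx` advances half as fast as the positions `els[idx + 1]` is meant to read.

```python
els = [1, 2, 3, 4]

# enumerate sees only the slice [1, 3]; idx is 0, 1 -- slice positions.
print(list(enumerate(els[::2])))        # [(0, 1), (1, 3)]
# So els[idx + 1] reads els[1] and els[2], i.e. 2 and 3 -> (3, 3).

# Pairing two offset slices with zip needs no index arithmetic at all:
print(list(zip(els[::2], els[1::2])))   # [(1, 2), (3, 4)]
```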
|
<python><loops>
|
2023-02-05 19:18:26
| 1
| 15,858
|
Olivier Pons
|
75,354,825
| 9,843,081
|
Creating Python Chart withThree Axis
|
<p>Suppose I have the following data for five different categories:
<a href="https://i.sstatic.net/rLcZi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rLcZi.png" alt="enter image description here" /></a></p>
<p>I would like to create a figure showing the mean (and the 1-std. band around the return) in something like the graph below (mean and std on the y-axis, the different categories on the x-axis). Would it also be possible to add the Sharpe as a red/orange triangle in the same graph (with a secondary y-axis on the right)?</p>
<p><a href="https://i.sstatic.net/iXlOb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iXlOb.png" alt="enter image description here" /></a></p>
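A sketch of the layout being described (the category names and numbers below are made up for illustration): matplotlib's `twinx()` creates the secondary right-hand y-axis, and `errorbar` draws the mean with a ±1-std band.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

cats = ["A", "B", "C", "D", "E"]           # placeholder categories
mean = np.array([0.05, 0.08, 0.03, 0.10, 0.06])
std = np.array([0.10, 0.12, 0.07, 0.15, 0.09])
sharpe = mean / std

fig, ax1 = plt.subplots()
ax1.errorbar(cats, mean, yerr=std, fmt="o", capsize=4, label="mean ± 1 std")
ax1.set_ylabel("return")

ax2 = ax1.twinx()                          # secondary y-axis on the right
ax2.scatter(cats, sharpe, marker="^", color="tab:orange", label="Sharpe")
ax2.set_ylabel("Sharpe")
```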
|
<python><pandas><matplotlib><seaborn><figure>
|
2023-02-05 19:12:27
| 0
| 643
|
John
|
75,354,820
| 13,440,165
|
Extracting from an array of strings, strings that contain a substring in them (Python)
|
<p>A question in Python (3.9.5) and Pandas:</p>
<p>Suppose I have an array of strings <code>x</code> and I want to extract all the elements that contain a certain substring, e.g. <code>feb05</code>. Is there a Pythonic way to do it in one line, possibly using a Pandas function?</p>
<p>Example for what I mean:</p>
<pre><code>x = ["2023_jan05", "2023_jan_27", "2023_feb04", "2023_feb05", "2024_feb05"]
must_contain = "feb05"
desired_output = ["2023_feb05", "2024_feb05"]
</code></pre>
<p>I can run a loop,</p>
<pre><code>import numpy as np
import pandas as pd
desired_output = []
indices_bool = np.zeros(len(x))
for idx, test in enumerate(x):
if must_contain in test:
desired_output.append(test)
indices_bool[idx] = 1
</code></pre>
<p>but I seek for a more Pythonic way to do it.</p>
<p>In my application <code>x</code> is a column in a Pandas dataframe, so answers with Pandas functions will also be welcomed. The goal is to filter all the rows that has <code>must_contain</code> in the field <code>x</code> (e.g. <code>x = df["names"]</code>).</p>
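A sketch of both one-liners, plain Python and pandas; `regex=False` keeps the pandas version a literal substring test:

```python
import pandas as pd

x = ["2023_jan05", "2023_jan_27", "2023_feb04", "2023_feb05", "2024_feb05"]
must_contain = "feb05"

# List comprehension: the idiomatic one-line filter for a plain list.
desired_output = [s for s in x if must_contain in s]
print(desired_output)  # ['2023_feb05', '2024_feb05']

# Same filter on a DataFrame column via the .str accessor.
df = pd.DataFrame({"names": x})
filtered = df[df["names"].str.contains(must_contain, regex=False)]
print(filtered["names"].tolist())  # ['2023_feb05', '2024_feb05']
```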
|
<python><pandas><string>
|
2023-02-05 19:11:44
| 2
| 883
|
Triceratops
|
75,354,780
| 15,341,457
|
How can I make this Indexing algorithm more efficient?
|
<p>I've got a Dataframe (deriving from a csv file with various columns) with 172033 rows. I've created a custom indexing function that blocks pairs of records that haven't got similar 'name' attributes. The problem resides in the efficiency of the algorithm. Just to get to the 10th iteration it takes about a minute. Therefore indexing the whole dataset would take way too much time. How can I make my algorithm more efficient?</p>
<pre><code>class CustomIndex(BaseIndexAlgorithm):
def _link_index(self, df_a, df_b):
indici1=[]
indici2=[]
for i in range(0, 173033):
if(i%2 == 0):
print(i) #keeps track of the iteration
for j in range(i, 173033):
if(similar(df_a.loc[i, 'name'], df_a.loc[j, 'name'])>0.5):
indici1.append(i)
indici2.append(j)
indici = [indici1, indici2]
return pd.MultiIndex.from_arrays(indici, names=('first', 'second'))
</code></pre>
<p>I want to obtain a MultiIndex object, which would be an array of tuples containing the indexes of the pairs of records that are similar enough not to be blocked.</p>
<pre><code>[MultiIndex([( 0, 0),
( 0, 22159),
( 0, 67902),
( 0, 67903),
( 1, 1),
( 1, 1473),
( 1, 5980),
( 1, 123347),
( 2, 2),
...
</code></pre>
<p>Here's the code for the similarity function:</p>
<pre><code>from difflib import SequenceMatcher
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
</code></pre>
<p>Here's an example of the dataframe I have as input:</p>
<pre><code> name
0 Amazon
1 Walmart
2 Apple
3 Amazon.com
4 Walmart Inc.
</code></pre>
<p>I would like the resulting MultiIndex to contain tuple links between 0 and 3, 1 and 4 and all the repetitions (0 and 0, 1 and 1 etc.)</p>
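A few sketch-level optimizations that keep the blocking logic intact: pull the column into a plain list once (scalar `df.loc` lookups inside a double loop are expensive), pin the reused string as `seq2` so difflib can cache its analysis, and gate the costly `ratio()` behind difflib's documented cheap upper bounds `real_quick_ratio()` and `quick_ratio()` — being upper bounds, they can only skip pairs that would fail anyway:

```python
from difflib import SequenceMatcher

def similar_pairs(names, threshold=0.5):
    out = []
    m = SequenceMatcher()
    for i, a in enumerate(names):
        # difflib caches detailed analysis of seq2, so fix the reused
        # string there and only swap seq1 in the inner loop.
        m.set_seq2(a)
        for j in range(i, len(names)):
            m.set_seq1(names[j])
            # Cheap upper bounds first; ratio() runs only when needed.
            if (m.real_quick_ratio() > threshold
                    and m.quick_ratio() > threshold
                    and m.ratio() > threshold):
                out.append((i, j))
    return out

names = ["Amazon", "Walmart", "Apple", "Amazon.com", "Walmart Inc."]
pairs = similar_pairs(names)
print(pairs)
```

For 172k records the quadratic pair count itself remains the dominant cost, so a real speedup would additionally need true blocking (e.g. grouping by a cheap key before comparing), but the changes above remove most of the per-pair overhead.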
|
<python><pandas><performance><for-loop><indexing>
|
2023-02-05 19:04:20
| 4
| 332
|
Rodolfo
|
75,354,767
| 6,611,672
|
Why is dataclasses.as_dict a function instead of a method
|
<p>I know I can convert a Python dataclass to a dictionary using the <code>asdict</code> function:</p>
<pre><code>from dataclasses import asdict, dataclass
@dataclass
class Point:
x: int
y: int
point = Point(1, 2)
asdict(point)
# {'x': 1, 'y': 2}
</code></pre>
<p>Why is <code>asdict</code> a separate function instead a method on the object? The method feels more Pythonic / intuitive and prevents what seems like an unnecessary import:</p>
<pre><code>point.asdict()
# {'x': 1, 'y': 2}
</code></pre>
<p>Is there a specific reason for this design or did the authors just choose a function for no good reason? I'm curious because I'm wondering if there are any key design takeaways.</p>
<p><strong>Hypothesis</strong></p>
<p>One potential reason may be that injecting the method may override an existing method. For example, if <code>Point.asdict()</code> already exists on the vanilla class:</p>
<pre><code>@dataclass
class Point:
x: int
y: int
def asdict(self):
return {"x": self.x, "y": self.y, "sum": self.x + self.y}
</code></pre>
<p>In this case, injecting <code>asdict</code> as a method would override the existing functionality.</p>
|
<python><python-dataclasses>
|
2023-02-05 19:01:23
| 0
| 5,847
|
Johnny Metz
|
75,354,503
| 5,203,628
|
Azure Functions Python V2 Timer Trigger Does Not Deploy but Status Success in VSCode
|
<p>I am deploying a very basic Azure Functions App to demonstrate a few key features.</p>
<p>I have two functions, one demonstrating an HTTP Trigger and the other demonstrating a Timer Trigger. Both run perfectly on local instance.</p>
<pre><code>import azure.functions as func
import os
import datetime
import logging

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="keyvaulttest")
def test_function(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    utc_timestamp = datetime.datetime.utcnow().replace(
        tzinfo=datetime.timezone.utc).isoformat()
    test_phrase = os.getenv("TestEnvFromKeyVault")
    logging.info(f'TestEnvFromKeyVault: {test_phrase}')
    logging.info('Python HTTP trigger function ran at %s', utc_timestamp)
    return func.HttpResponse(
        test_phrase,
        status_code=200
    )

@app.function_name(name="TestTimer")
@app.schedule(schedule="0 */5 * * * *", arg_name="test_timer", use_monitor=False)
def test_function(test_timer: func.TimerRequest) -> None:
    utc_timestamp = datetime.datetime.utcnow().replace(
        tzinfo=datetime.timezone.utc).isoformat()
    test = os.getenv("TestEnvFromKeyVault")
    if test_timer.past_due:
        logging.info('The timer is past due!')
    logging.info(f'TestEnvFromKeyVault: {test}')
    logging.info('Python timer trigger function ran at %s', utc_timestamp)
</code></pre>
<p>When I attempt to deploy using the VSCode Azure Function extension command "Azure Functions: Deploy to FunctionApp" it says it deployed successfully. My HTTP Trigger function is deployed and works, but my Timer Trigger function is not deployed.</p>
<pre><code>12:13:48 PM testapp: Deployment successful. deployer = ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract zip. Remote build.
</code></pre>
<p><a href="https://i.sstatic.net/vsdvv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vsdvv.png" alt="enter image description here" /></a></p>
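One detail worth ruling out, independent of the deployment pipeline: both handlers are bound to the same Python name `test_function`, so the second `def` rebinds the module attribute. Whether Azure's function indexer is affected by that is only a hypothesis here, but the name shadowing itself is plain Python, illustrated with a stand-in decorator:

```python
registered = []

def register(f):
    # stand-in for the @app.* decorators: sees the function at definition time
    registered.append(f.__name__)
    return f

@register
def test_function():
    return "http"

@register
def test_function():
    return "timer"

print(registered)       # both definitions passed through the decorator
print(test_function())  # but the module name now refers only to the second
```

Giving the timer handler a distinct Python name would rule this out quickly.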
|
<python><visual-studio-code><azure-functions><timer-trigger>
|
2023-02-05 18:19:13
| 2
| 1,146
|
Rob S.
|
75,354,384
| 12,671,057
|
Why is b.pop(0) over 200 times slower than del b[0] for bytearray?
|
<p>Letting them compete three times (a million pops/dels each time):</p>
<pre><code>from timeit import timeit

for _ in range(3):
    t1 = timeit('b.pop(0)', 'b = bytearray(1000000)')
    t2 = timeit('del b[0]', 'b = bytearray(1000000)')
    print(t1 / t2)
</code></pre>
<p>Time ratios (<a href="https://tio.run/##hZDBasMwDIbvfoqfXpxA6NyWwij0tvNeYIzhMCUxc@JMVjb89JnS9bDbdBGW@b5f9lxkSNPpceZ17TiNkDBSEIRxTiz3kzEzh0kqa@1zErpgt@TFx1h2kIHAXkJCyCZ9EePoHAZiatAugpxG2iTZ@IyYvqHt7JqtbWgmVsaEjC76j7LHU6I8WQHT5sfoRfS@S4yx4HOhrFFTo2xa@mHTmLPTaOg8RqirIHruaa@71uYGviFMuuPUU3WqLwZacsD1/rbKtvs5zZWrbQPb6rwtQp7Zl@rgblWr6UYd/1DvFNG@uNd/qd@v08QHFdTr@gM" rel="noreferrer" title="Python 3.8 (pre-release) – Try It Online">Try it online!</a>):</p>
<pre><code>274.6037053753368
219.38099365582403
252.08691226683823
</code></pre>
<p>Why is <code>pop</code> that much slower at doing the same thing?</p>
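For reference (this shows what each call does, not why the timing gap is so large): `pop(0)` removes the first byte *and* returns it as a new `int` object, while `del b[0]` only removes it:

```python
b = bytearray(b'hello')

v = b.pop(0)   # removes the first byte and returns it as an int
print(v)       # 104, i.e. ord('h')

del b[0]       # removes the (new) first byte, returns nothing
print(b)       # bytearray(b'llo')
```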
|
<python><arrays><performance><cpython><python-internals>
|
2023-02-05 18:04:17
| 2
| 27,959
|
Kelly Bundy
|
75,354,311
| 15,365,513
|
How can I convert a tensor with the shape of [1, 3, 64, 64] to [1, 4, 64, 64] with the newly added layer being the same as the previous?
|
<p>I have a PyTorch tensor with the shape of <code>[1, 3, 64, 64]</code>, and I want to convert it to the shape <code>[1, 4, 64, 64]</code> while setting the value of the newly added layer to be the same as the previous layer in the same dimension (e.g. <code>newtensor[0][3] = oldtensor[0][2]</code>)</p>
<p>Note that my tensor has <code>requires_grad=True</code>, so I cannot use <code>resize_()</code></p>
<p>How can I do this?</p>
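A sketch of one way to do this with `torch.cat`, which is a differentiable operation (so `requires_grad` is preserved and no in-place resize is needed); the shapes follow the question:

```python
import torch

old = torch.randn(1, 3, 64, 64, requires_grad=True)

# concatenate a copy of the last channel along the channel dimension
new = torch.cat([old, old[:, 2:3]], dim=1)

print(new.shape)                          # torch.Size([1, 4, 64, 64])
print(torch.equal(new[0][3], old[0][2]))  # True: layer 3 mirrors layer 2
print(new.requires_grad)                  # True: gradients still flow
```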
|
<python><pytorch><tensor>
|
2023-02-05 17:52:10
| 2
| 691
|
raspiduino
|
75,354,216
| 14,193,915
|
How to insert a row into a multiindex dataframe so that rows sum to the total?
|
<p>I have a multi index dataframe that I want to insert a row into.</p>
<pre><code>>>>import numpy as np
>>>import pandas as pd
>>> date = pd.date_range('2023-01-01', periods=3)
>>> size = ['s','m','total']
>>> arrays = [date, size]
>>> index = pd.MultiIndex.from_product(arrays, names=['date','size'])
>>> volume = [4,15,47,8,12,46,4,14,48]
>>> hours = [2,1,13,4,4,10,1,2,10]
>>> df = pd.DataFrame({'volume':volume, 'hours':hours}, index=index)
>>> df
volume hours
date size
2023-01-01 s 4 2
m 15 1
total 47 13
2023-01-02 s 8 4
m 12 4
total 46 10
2023-01-03 s 4 1
m 14 2
total 48 10
</code></pre>
<p>How can I insert a row that is labeled as 'l' for each date and is the difference between the 'total' row and the sum of the 's' and 'm' rows? The desired output is as follows:</p>
<pre><code> volume hours
date size
2023-01-01 s 4 2
m 15 1
l 28 10
total 47 13
2023-01-02 s 8 4
m 12 4
l 26 2
total 46 10
2023-01-03 s 4 1
m 14 2
l 30 7
total 48 10
</code></pre>
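One possible approach, sketched on the example data (the row order within each date comes from `sort_index`, so `l` sorts before `m`; an explicit ordering would be needed to match the listing above exactly): compute `total` minus the `s`/`m` sum per date, label it `'l'`, and concatenate back:

```python
import pandas as pd

date = pd.date_range('2023-01-01', periods=3)
index = pd.MultiIndex.from_product([date, ['s', 'm', 'total']], names=['date', 'size'])
df = pd.DataFrame({'volume': [4, 15, 47, 8, 12, 46, 4, 14, 48],
                   'hours': [2, 1, 13, 4, 4, 10, 1, 2, 10]}, index=index)

total = df.xs('total', level='size')                       # the total row per date
sm_sum = df.drop('total', level='size').groupby(level='date').sum()
l_rows = total - sm_sum                                    # the remainder per date
l_rows['size'] = 'l'
l_rows = l_rows.set_index('size', append=True)

out = pd.concat([df, l_rows]).sort_index()
print(out.loc[(pd.Timestamp('2023-01-01'), 'l')])  # volume 28, hours 10
```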
|
<python><pandas><dataframe><multi-index>
|
2023-02-05 17:35:22
| 1
| 831
|
jgg
|
75,354,115
| 7,713,770
|
How to filter with rest api django?
|
<p>I have this model:</p>
<pre><code>class Item(models.Model):
    category = models.CharField(max_length=255)
    subcategory = models.CharField(max_length=255)
    name = models.CharField(max_length=255)
    amount = models.PositiveIntegerField()

    def __str__(self) -> str:
        return self.name
</code></pre>
<p>serializer:</p>
<pre><code>class ItemSerializer(serializers.ModelSerializer):
    class Meta:
        model = Item
        fields = ('category', 'subcategory', 'name', 'amount')
</code></pre>
<p>views.py:</p>
<pre><code>@api_view(['GET'])
def view_items(request):
    queryset = Item.objects.all()
    serializer = ItemSerializer(queryset, many=True)

    # checking for the parameters from the URL
    if request.query_params:
        items = Item.objects.filter(**request.query_params.dict())
    else:
        items = queryset

    # if there is something in items else raise error
    if items:
        return Response(serializer.data)
    else:
        return Response(status=status.HTTP_404_NOT_FOUND)


@api_view(['GET'])
def ApiOverview(request):
    api_urls = {
        'all_items': '/',
        'Search by Category': '/?category=category_name',
        'Search by Subcategory': '/?subcategory=subcategory_name',
    }
    return Response(api_urls)
</code></pre>
<p>urls.py:</p>
<pre><code>urlpatterns = [
    path('', CategoryViewSet.ApiOverview, name='home'),
    path('all/', views.view_items, name='view_items'),
]
</code></pre>
<p>So if I go to: <a href="http://127.0.0.1:8000/djangoadmin/all/" rel="nofollow noreferrer">http://127.0.0.1:8000/djangoadmin/all/</a></p>
<pre><code>[
{
"category": "food",
"subcategory": "vegetaries",
"name": "potato",
"amount": 4
},
{
"category": "food",
"subcategory": "vegetaries",
"name": "ananas",
"amount": 5
},
{
"category": "food",
"subcategory": "fruit",
"name": "apple",
"amount": 3
}
]
</code></pre>
<p>So that works.</p>
<p>But now I want to return only the items where subcategory=vegetaries.</p>
<p>So I try like: <a href="http://127.0.0.1:8000/djangoadmin/all/?subcategory=vegetaries" rel="nofollow noreferrer">http://127.0.0.1:8000/djangoadmin/all/?subcategory=vegetaries</a></p>
<p>But then it returns all items:</p>
<pre><code>[
{
"category": "food",
"subcategory": "vegetaries",
"name": "potato",
"amount": 4
},
{
"category": "food",
"subcategory": "vegetaries",
"name": "ananas",
"amount": 5
},
{
"category": "food",
"subcategory": "fruit",
"name": "apple",
"amount": 3
}
]
</code></pre>
<p>Question: how to filter by subcategory?</p>
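Note that the filter *is* applied, but the response is built from `serializer`, which was constructed from the unfiltered queryset before the filtering runs, so `items` is computed and never serialized (serializing `items` inside the `if` branch is the likely fix). The mistake in miniature, in plain Python:

```python
data = [{'subcategory': 'vegetaries', 'name': 'potato'},
        {'subcategory': 'fruit', 'name': 'apple'}]

snapshot = list(data)  # like ItemSerializer(queryset, many=True).data: taken before filtering
items = [d for d in data if d['subcategory'] == 'vegetaries']

print(len(snapshot))  # 2 -- this is what the view returns
print(len(items))     # 1 -- the filtered result is computed but never used
```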
|
<python><django><django-rest-framework>
|
2023-02-05 17:21:35
| 0
| 3,991
|
mightycode Newton
|
75,354,106
| 12,439,119
|
Python unittest that at least one exception is raised
|
<p>Is there a way to get <code>unittest</code> standard library to check for multiple exceptions?</p>
<p>Obviously <code>assertRaises</code> works for a single exception: <a href="https://stackoverflow.com/questions/129507/how-do-you-test-that-a-python-function-throws-an-exception">How do you test that a Python function throws an exception?</a></p>
<p>But I want to test whether <strong>at least one</strong> error is raised. This feels right, but is not correct:</p>
<pre class="lang-py prettyprint-override"><code>with self.assertRaises(StatisticsError, ZeroDivisionError): # Test one or the other?
    my_list_mean([])
</code></pre>
<hr />
<p>Full MRE: a "mean" function may raise a <code>ZeroDivisionError</code> or a <code>StatisticsError</code> depending on the implementation. I want to assert that this raises one or the other:</p>
<pre class="lang-py prettyprint-override"><code>from statistics import mean, StatisticsError
import unittest

def my_list_mean(lof):
    # return sum(lof) / len(lof)  # ZeroDivisionError
    return mean(lof)  # StatisticsError

class TestMultipleWaysToComputeMean(unittest.TestCase):
    def test_zero_division_or_statistics_error(self):
        with self.assertRaises(ZeroDivisionError):
            _ = my_list_mean([])

if __name__ == "__main__": unittest.main()
</code></pre>
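`assertRaises` does accept several exception classes, but as a single tuple argument rather than separate positional arguments; the assertion passes if *any* of the listed exceptions is raised. A runnable sketch against the `statistics`-based implementation:

```python
import unittest
from statistics import mean, StatisticsError

def my_list_mean(lof):
    return mean(lof)  # raises StatisticsError on an empty list

class TestMean(unittest.TestCase):
    def test_zero_division_or_statistics_error(self):
        # a tuple of exception types: passes if either one is raised
        with self.assertRaises((StatisticsError, ZeroDivisionError)):
            my_list_mean([])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMean)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```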
|
<python><unit-testing><error-handling><python-unittest>
|
2023-02-05 17:20:07
| 1
| 4,303
|
Alexander L. Hayes
|
75,354,104
| 15,479,269
|
python web scraping for emails
|
<p>I wrote this code to scrape email addresses from google search results or websites depending on the URL given. However, the output is always blank.</p>
<p>The only thing in the excel sheet is the column name. I'm still new to python so not sure why that's happening.</p>
<p>What am I missing here?</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
url ="https://www.google.com/search?q=solicitor+bereavement+wales+%27email%27&rlz=1C1CHBD_en-GBIT1013IT1013&sxsrf=AJOqlzWelf5qGpc4uqy_C2cd583OKlSEcQ%3A1675616694195&ei=tuHfY83MC-aIrwSQ3qxY&ved=0ahUKEwjN_9jO7v78AhVmxIsKHRAvCwsQ4dUDCBA&uact=5&oq=solicitor+bereavement+wales+%27email%27&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIFCAAQogQyBwgAEB4QogQyBwgAEB4QogQyBwgAEB4QogQyBwgAEB4QogQ6CggAEEcQ1gQQsANKBAhBGABKBAhGGABQrAxY7xRg1xZoAXABeACAAdIBiAGmBpIBBTEuNC4xmAEAoAEByAEIwAEB&sclient=gws-wiz-serp"
response = requests.get(url)
html_content = response.text
soup = BeautifulSoup(html_content, 'html.parser')
email_addresses = []

for link in soup.find_all('a'):
    if 'mailto:' in link.get('href'):
        email_addresses.append(link.get('href').replace('mailto:', ''))
df = pd.DataFrame(email_addresses, columns=['Email Addresses'])
df.to_excel('email_addresses_.xlsx',index=False)
</code></pre>
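Two things are worth checking, independent of Google's bot protection (which likely returns a page containing no `mailto:` links at all): `soup.find_all('a')` also yields anchors without an `href`, and `link.get('href')` then returns `None`, so the `in` test raises `TypeError` (passing `href=True` to `find_all` avoids this). A stdlib-only sketch of the extraction logic on a made-up HTML snippet, so the behaviour is verifiable without hitting Google:

```python
import re

html = ('<a>no href</a> <a href="https://example.com">site</a> '
        '<a href="mailto:jane@example.com">mail</a>')

# equivalent guard with BeautifulSoup would be: soup.find_all('a', href=True)
emails = re.findall(r'href="mailto:([^"]+)"', html)
print(emails)  # ['jane@example.com']
```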
|
<python><pandas><web-scraping><beautifulsoup>
|
2023-02-05 17:19:48
| 2
| 703
|
someone
|
75,353,986
| 2,652,254
|
How can I place a pdf asset into a empty PDF page?
|
<p>I have multiple PDF files with small sizes (e.g. 3cm x 2 cm) exported from Adobe Indesign.
I want to compose many of these into one new PDF which has the size of a whole page.
The small PDFs contain a plotter line in a special color which would get lost if I convert them into images.</p>
<p>How can I place these PDFs (at given positions) using python and without losing the special color.</p>
<p>I tried to read into pypdf, pypdf2 and reportlab but I got lost and the examples I found did not work. I do not need the full code, a hint into the right direction would be enough (even with another language if necessary).</p>
<p>Thanks</p>
|
<python><pdf><pdfbox><pypdf>
|
2023-02-05 17:00:46
| 2
| 460
|
Iarwa1n
|
75,353,844
| 2,023,397
|
Select a button with Selenium
|
<p>I have a webpage I am trying to fill in. In the middle of the page there is a button i need to click, whose info in the inspection are as follow:</p>
<pre><code><label class="btn btn-default col-md-6 ng-binding active btn-success" ng-class="{'btn-success active': paziente.consenso_informato == 1 }" style="">
<input type="radio" ng-model="paziente.consenso_informato" name="consenso_informato" ng-required="true" ng-value="1" class="ng-not-empty ng-dirty ng-valid-parse ng-valid ng-valid-required ng-touched" value="1" required="required" style=""> Si
</label>
</code></pre>
<p>So I tried this code but there is no way I can click it:</p>
<pre><code> consent_xpath = "//label[@ng-class='{\"btn-success active\": paziente.consenso_informato == 1 }']/input[@value='1']"
# Wait for the element to be visible
wait = WebDriverWait(wd, 1)
element = wait.until(EC.visibility_of_element_located((By.XPATH, consent_xpath)))
# Scroll down to the element and click on it
wd.execute_script("arguments[0].scrollIntoView();", element)
element.click()
</code></pre>
<p>Also tried this:</p>
<pre><code>try:
element = wd.find_element_by_xpath("//label[@ng-class='{'btn-success active': paziente.consenso_informato == 1 }']")
if element.is_displayed():
element.click()
print('found')
break
except:
wd.execute_script("window.scrollBy(0, 100);")
</code></pre>
|
<python><selenium>
|
2023-02-05 16:35:49
| 2
| 397
|
Gloria Dalla Costa
|
75,353,644
| 10,535,123
|
How does PySpark work behind the scene when using a Python module which should load files?
|
<p>Let's say I have the two following python projects -</p>
<pre><code>PROJECT A

class FeatureBuilder:
    def __init__(self):
        self.artifact = read_artifacts_from_s3()

    def create_features(self):
        # do something with artifact
</code></pre>
<pre><code>PROJECT B

from pyspark.sql import DataFrame
from builder import FeatureBuilder

def pandas_udf(df: DataFrame):
    feature_builder = FeatureBuilder()

    def create_features(pdf):
        feature_vector = feature_builder.create_features(pdf)
        return feature_vector

    return df.groupby("id").applyInPandas(create_features, df)
</code></pre>
<p>In this example, in project B, I'm calling to <code>create_features</code> function, which uses the <code>FeatureBuilder</code> object I imported from project A (which I can't change), and <code>FeatureBuilder</code> reads the file it needs from S3 (or any other location).</p>
<p>Project A is not a "PySpark" project - by this I mean it has no code related to the PySpark package, Spark session or Spark context at all.</p>
<p>What will happen in this case? Will every machine in the cluster read the file from S3?</p>
<p>If yes and let's say I can change project A, is there any way to optimize it? Maybe load the file from project B, broadcast it, and pass it to the object in project A?
Or maybe can I broadcast the <code>FeatureBuilder</code> object itself?</p>
<p>I'm not sure what is the right way to do that <strong>under the constraint that I can't add any Spark code to project A anyway</strong>.</p>
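Depending on where the object ends up being constructed, the artifact is either fetched once on the driver and then re-shipped inside the serialized closure with every task, or fetched again by each worker process. One mitigation that needs no Spark code in project A is to construct the instance lazily inside the UDF and cache it per Python worker process; the sketch below uses a counting stub in place of the real `FeatureBuilder` so the effect is observable:

```python
from functools import lru_cache

class FeatureBuilder:                       # stand-in for project A's class
    constructions = 0

    def __init__(self):
        FeatureBuilder.constructions += 1   # pretend this is the S3 read

@lru_cache(maxsize=1)
def get_feature_builder():
    # called from inside create_features: constructed once per worker
    # process, so the artifact is fetched once per executor, not per group
    return FeatureBuilder()

for _ in range(5):                          # simulate repeated UDF invocations
    fb = get_feature_builder()

print(FeatureBuilder.constructions)  # 1
```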
|
<python><pandas><apache-spark><pyspark>
|
2023-02-05 16:04:31
| 1
| 829
|
nirkov
|
75,353,556
| 6,027,879
|
python module ZipFile get base folder using regex
|
<p>Assume this zip file "acme_example.zip" contains below content of the files/folders :</p>
<pre><code>acme/one.txt
acme/one1.txt
acme/one2.txt
acme/one3.txt
acme/one4.txt
__MACOSX
.DS_Store
</code></pre>
<p>And i am using this below script</p>
<pre><code> output_var = []
skip_st = '__MACOSX'
with ZipFile('acme_example.zip','r') as ZipObj:
listfFiles = ZipObj.namelist()
for elm in listfFiles:
p = Path(elm).parts[0]
if p not in output_var:
output_var.append(p)
return re.sub(skip_st, '', ''.join(str(item) for item in output_var))
</code></pre>
<p>The script above will exclude "__MACOSX", but is there a way to also exclude ".DS_Store" so that we only return "acme" as the folder name?</p>
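Rather than substituting names out of the joined string afterwards, it may be simpler to keep a set of entries to skip and filter before collecting; a sketch with the archive listing from the question inlined in place of a real zip file:

```python
from pathlib import Path

names = ['acme/one.txt', 'acme/one1.txt', 'acme/one2.txt',
         'acme/one3.txt', 'acme/one4.txt', '__MACOSX', '.DS_Store']
skip = {'__MACOSX', '.DS_Store'}

roots = []
for name in names:  # with a real archive this would be ZipObj.namelist()
    root = Path(name).parts[0]
    if root not in skip and root not in roots:
        roots.append(root)

print(roots)  # ['acme']
```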
|
<python><python-re>
|
2023-02-05 15:52:03
| 1
| 406
|
hare krshn
|
75,353,529
| 14,790,056
|
how to count unique values after groupby ID
|
<p>I have the following pandas dataframe <strong>df</strong></p>
<pre><code>ID from to
A 0x 0c
A 0x 0f
A 0f 0n
B 0f 0c
B 0c 0f
C 0k 0j
C 0j 0k
C 0k 0a
</code></pre>
<p>First I want to group by <code>id</code> and only keep groups if the number of unique values from <code>from</code> and <code>to</code> combined is at most 3 (as in the desired output, where B has 2 unique values and C has 3).</p>
<p>so the desired df will be</p>
<pre><code>B 0f 0c
B 0c 0f
C 0k 0j
C 0j 0k
C 0k 0a
</code></pre>
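A sketch with `groupby().filter`, which keeps whole groups for which the predicate is true; here the predicate counts the unique values across both columns:

```python
import pandas as pd

df = pd.DataFrame({'ID': list('AAABBCCC'),
                   'from': ['0x', '0x', '0f', '0f', '0c', '0k', '0j', '0k'],
                   'to':   ['0c', '0f', '0n', '0c', '0f', '0j', '0k', '0a']})

# keep groups whose combined unique from/to values number at most 3
out = df.groupby('ID').filter(
    lambda g: pd.unique(g[['from', 'to']].values.ravel()).size <= 3)

print(sorted(out['ID'].unique()))  # ['B', 'C'] -- group A has 4 unique values
```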
|
<python><pandas><dataframe>
|
2023-02-05 15:47:39
| 1
| 654
|
Olive
|
75,353,488
| 21,092,961
|
ModuleNotFoundError: No module named 'numpy' But numpy module already installed
|
<p>Error:
<a href="https://i.sstatic.net/zJaIe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zJaIe.png" alt="enter image description here" /></a>
I have already installed numpy module(<code>pip show numpy</code>):</p>
<p><a href="https://i.sstatic.net/TwZjy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TwZjy.png" alt="enter image description here" /></a>
this is how it shows when I try to install numpy again
<a href="https://i.sstatic.net/2EdUp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2EdUp.png" alt="enter image description here" /></a></p>
<p>I tried to import numpy module which is already installed but it throws ModuleNotFoundError</p>
|
<python><numpy><modulenotfounderror>
|
2023-02-05 15:43:43
| 2
| 659
|
Amith A G
|
75,353,413
| 1,864,294
|
subprocess returns different output than shell
|
<p>Short context: Github's <code>gh</code> client adds ANSI colors to its JSON output, which is nice but makes post process hard. So if you pipe the output, gh makes a slightly different and regularly encoded output, for example:</p>
<pre><code>gh api organizations | cat
</code></pre>
<p>prints something like</p>
<pre><code>[{"login":"errfree","id":44,"node_id":"MDEyOk9yZ2FuaXphdGlvbjQ0","url":"https://api.github.com/orgs/errfree","repos_url":"https://api.github.com/orgs/errfree/repos","events_url":"https://api.github.com/orgs/errfree/events","hooks_url":"https://api.github.com/orgs/errfree/hooks","issues_url":"https://api.github.com/orgs/errfree/issues","members_url":"https://api.github.com/orgs/errfree/members{/member}","public_members_url":"https://api.github.com/orgs/errfree/public_members{/member}","avatar_url":"https://avatars.githubusercontent.com/u/44?v=4","description":null},
</code></pre>
<p>However, if I do the same with subprocess via</p>
<pre><code>p = subprocess.run('gh api organizations | cat', shell=True, capture_output=True)
print(p.stdout.decode())
</code></pre>
<p>the result is completely different:</p>
<p><a href="https://i.sstatic.net/A0IJ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A0IJ9.png" alt="enter image description here" /></a></p>
<p>Why? And: Is there an option to force the command (in my case <code>gh</code>) to behave like it would be piped?</p>
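Judging from the screenshot, the captured bytes may still carry ANSI escape sequences (some CLIs decide on styling from environment variables rather than only from whether stdout is a TTY). Whatever triggers it, the escapes can be stripped after capture with a stdlib regex; the coloured string below is a made-up stand-in for `gh`'s output:

```python
import re

ansi_escape = re.compile(r'\x1b\[[0-9;]*m')  # CSI colour/style sequences

colored = '\x1b[1;32m[{"login":"errfree","id":44}]\x1b[0m'
plain = ansi_escape.sub('', colored)
print(plain)  # [{"login":"errfree","id":44}]
```

Setting `NO_COLOR=1` in the subprocess environment is also worth trying, since many modern CLIs honour that convention.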
|
<python><shell><subprocess><pipe><github-cli>
|
2023-02-05 15:35:12
| 0
| 20,605
|
Michael Dorner
|
75,353,367
| 10,377,640
|
psycopg.OperationalError: connection failed: Connection refused in Docker
|
<p>So I tried to connect my Docker app (python-1) to another Docker container (postgres). But it is giving me this error:</p>
<pre><code>psycopg.OperationalError: connection failed: Connection refused
python-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
python-1 | TCP/IP connections on port 25432?
</code></pre>
<p>I've tried using <code>condition: service_healthy</code> but it doesn't work. In fact, I already make sure my database is running before <code>python-1</code> is trying to connect. But the problem seems not about the database hasn't turned on yet. I already use <code>0.0.0.0</code> or postgres container's IP using <code>postgres</code> on the host and it also doesn't work.</p>
<p>Here is my <code>docker-compose.yml</code></p>
<pre><code>version: "3.8"

services:
  postgres:
    image: postgres:14.6
    ports:
      - 25432:5432
    healthcheck:
      test: ["CMD-SHELL", "PGPASSWORD=${DB_PASSWORD}", "pg_isready", "-U", "${DB_USERNAME}", "-d", "${DB_NAME}"]
      interval: 30s
      timeout: 60s
      retries: 5
      start_period: 80s
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}

  python:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      postgres:
        condition: service_healthy
    command: flask --app app init-db && flask --app app run -h 0.0.0.0 -p ${PORT}
    ports:
      - ${PORT}:${PORT}
    environment:
      DB_HOST: localhost
      DB_PORT: 25432
      DB_NAME: ${DB_NAME}
      DB_USERNAME: ${DB_USERNAME}
      DB_PASSWORD: ${DB_PASSWORD}
<p>And this is my Dockerfile:</p>
<pre><code># syntax=docker/dockerfile:1
FROM python:3.10
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
</code></pre>
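One thing stands out as a likely cause (offered as a hypothesis, not a certainty): inside the `python` container, `localhost` is the `python` container itself, and `25432` is only published on the *host*. Containers on the same compose network reach each other by service name and the container-internal port, so the environment would look like:

```yaml
  python:
    environment:
      DB_HOST: postgres   # the compose service name, resolvable on the shared network
      DB_PORT: 5432       # the port inside the postgres container, not the published 25432
```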
|
<python><docker><docker-compose>
|
2023-02-05 15:28:32
| 1
| 931
|
alramdein
|
75,353,111
| 7,713,770
|
how to get relationship with one-to-many on same table?
|
<p>I am using django and rest api. And I have two models:</p>
<pre><code>class Category(models.Model):
    name = models.CharField(max_length=100)
    slug = models.SlugField(max_length=100)
    images = models.ImageField(upload_to="photos/categories")
    category = models.ForeignKey("Category", on_delete=models.CASCADE, related_name='part_of', blank=True, null=True)
    date_create = models.DateTimeField(auto_now_add=True)
    date_update = models.DateTimeField(auto_now=True)
    description = models.TextField(max_length=1000, blank=True)
    legislation = models.TextField(max_length=1000, blank=True)
    review = models.TextField(max_length=1000, blank=True)
    eaza = models.TextField(max_length=1000, blank=True)

    class Meta:
        verbose_name = "category"
        verbose_name_plural = "categories"

    def __str__(self):
        return self.name


class Animal(models.Model):
    name = models.CharField(max_length=100)
    slug = models.SlugField(max_length=100)
    images = models.ImageField(upload_to="photos/categories")
    category = models.ForeignKey(Category, on_delete=models.CASCADE, related_name='animals')
    date_create = models.DateTimeField(auto_now_add=True)
    date_update = models.DateTimeField(auto_now=True)
    description = models.TextField(max_length=1000, blank=True)
    legislation = models.TextField(max_length=1000, blank=True)
    review = models.TextField(max_length=1000, blank=True)
    eaza = models.TextField(max_length=1000, blank=True)

    class Meta:
        verbose_name = "animal"
        verbose_name_plural = "animals"

    def __str__(self):
        return self.name
</code></pre>
<p>And my serializer looks:</p>
<pre><code>class AnimalSerializer(serializers.ModelSerializer):
    class Meta:
        model = Animal
        fields = ['id','name', 'description']


class CategorySerializer(serializers.ModelSerializer):
    animals = AnimalSerializer(many=True)

    class Meta:
        model = Category
        fields = ['id','category_id','name', 'description', 'animals']
</code></pre>
<p>and views.py:</p>
<pre><code>class CategoryViewSet(viewsets.ModelViewSet):
    serializer_class = CategorySerializer
    queryset = Category.objects.all()

    @action(methods=['get'], detail=False)
    def mainGroups(self, request):
        mainGroups = Category.objects.filter(category_id__isnull=True)
        serializer = self.get_serializer(mainGroups, many=True)
        return Response(serializer.data)
</code></pre>
<p>and urls.py:</p>
<pre><code>router = routers.DefaultRouter()
router.register('groups', CategoryViewSet)

urlpatterns = [
    path('', include(router.urls))
]
</code></pre>
<p>So if I go to: <a href="http://127.0.0.1:8000/djangoadmin/groups/" rel="nofollow noreferrer">http://127.0.0.1:8000/djangoadmin/groups/</a></p>
<p>I get as output:</p>
<pre><code>[
{
"id": 11,
"category_id": null,
"name": "zoogdieren",
"description": "hoi",
"animals": []
},
{
"id": 12,
"category_id": null,
"name": "amfibieen",
"description": "kujhkjh",
"animals": []
},
{
"id": 13,
"category_id": null,
"name": "vogels",
"description": "kljhkjh",
"animals": []
},
{
"id": 16,
"category_id": 13,
"name": "roofvogels",
"description": "kljhkljjl",
"animals": []
},
{
"id": 17,
"category_id": 12,
"name": "kikkers",
"description": "kjhkjh",
"animals": []
},
{
"id": 21,
"category_id": null,
"name": "reptielen",
"description": "reptielen",
"animals": []
},
{
"id": 22,
"category_id": 21,
"name": "slangen",
"description": "slangen",
"animals": []
},
{
"id": 24,
"category_id": 11,
"name": "honden",
"description": "hhhh",
"animals": []
},
{
"id": 25,
"category_id": 11,
"name": "katten",
"description": "kjhkjh",
"animals": []
},
{
"id": 26,
"category_id": 11,
"name": "olifanten",
"description": "kjhkjhkjh",
"animals": []
},
{
"id": 27,
"category_id": 21,
"name": "krokodillen",
"description": "l;l;'ll;;'l",
"animals": []
},
{
"id": 28,
"category_id": 22,
"name": "cobra",
"description": "cobra",
"animals": [
{
"id": 4,
"name": "indian cobra",
"description": "cobra"
},
{
"id": 5,
"name": "cape cobra",
"description": "cape cobra"
},
{
"id": 6,
"name": "Chinese cobra",
"description": "Chinese cobra"
}
]
},
{
"id": 29,
"category_id": 16,
"name": "valken",
"description": "valken",
"animals": []
},
{
"id": 30,
"category_id": 16,
"name": "gieren",
"description": "Gieren",
"animals": []
},
{
"id": 31,
"category_id": 21,
"name": "aligatoren",
"description": "aligatoren",
"animals": []
},
{
"id": 32,
"category_id": 13,
"name": "meeuwen",
"description": "meeuwen",
"animals": []
},
{
"id": 33,
"category_id": 22,
"name": "droppel slangen",
"description": "droppel slangen",
"animals": []
}
]
</code></pre>
<p>So for example zoogdieren with id 11 has many linked categories:</p>
<ul>
<li>honden category_id = 11</li>
<li>katten category_id = 11</li>
</ul>
<p>Question:
How can I make a query that filters on, for example, the name zoogdieren and then returns the categories whose category_id references it?</p>
<p>So for example you fill in: <a href="http://127.0.0.1:8000/djangoadmin/groups?name=zoogdieren" rel="nofollow noreferrer">http://127.0.0.1:8000/djangoadmin/groups?name=zoogdieren</a> and as output:</p>
<pre><code>{
"id": 24,
"category_id": 11,
"name": "honden",
"description": "hhhh",
"animals": []
},
{
"id": 25,
"category_id": 11,
"name": "katten",
"description": "kjhkjh",
"animals": []
},
</code></pre>
<p>if I do this: <a href="http://127.0.0.1:8000/djangoadmin/groups/11/" rel="nofollow noreferrer">http://127.0.0.1:8000/djangoadmin/groups/11/</a></p>
<p>I get the main category:</p>
<pre><code>{
"id": 11,
"category_id": null,
"name": "zoogdieren",
"description": "hoi",
"animals": []
}
</code></pre>
<p>But I want to have the related entities with category_id 11</p>
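What `/groups/11/` returns is just the single `Category` row; the children live in *other* rows whose `category` foreign key points at id 11. In Django terms this would be a lookup across the self-referencing foreign key, e.g. `Category.objects.filter(category__name='zoogdieren')` (or exposing the reverse relation via the `part_of` related_name). The relationship in miniature, with plain dictionaries standing in for the table:

```python
categories = [
    {'id': 11, 'category_id': None, 'name': 'zoogdieren'},
    {'id': 24, 'category_id': 11, 'name': 'honden'},
    {'id': 25, 'category_id': 11, 'name': 'katten'},
    {'id': 13, 'category_id': None, 'name': 'vogels'},
]
by_id = {c['id']: c for c in categories}

# "children of the category named zoogdieren"
children = [c for c in categories
            if c['category_id'] is not None
            and by_id[c['category_id']]['name'] == 'zoogdieren']

print([c['name'] for c in children])  # ['honden', 'katten']
```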
|
<python><django><django-rest-framework>
|
2023-02-05 14:47:03
| 1
| 3,991
|
mightycode Newton
|
75,353,032
| 73,137
|
How to pass a variable sized array to TensorFlow Lite model
|
<p>I am trying to find how I can pass a dynamic-sized array (not fixed size) into my TensorFlow model.</p>
<p>I am building an Android App to read Accelerometer values and predict an activity. I have built a TensorFlow model and am able to successfully import <code>.tflite</code> file into my Android.</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# save the model
with open("model-v2.tflite", "wb") as f:
    f.write(tflite_model)
</code></pre>
<p>In my case, the number of Accelerometer X, Y, Z values I would be passing to my TensorFlow model will vary each time. I could pass a series of 10 values or 100 values. So I am trying to find how I can make the TensorFlow model accept a dynamic-sized array instead of a fixed size.</p>
<p>I am new to TensorFlow. So is this something that can be easily achieved?</p>
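TFLite can resize inputs at inference time (`Interpreter.resize_tensor_input` before `allocate_tensors` in Python, `Interpreter.resizeInput` on Android), but a simpler and very common alternative for sensor models is to train on a fixed window and pad or truncate incoming readings to it. A NumPy sketch of that preprocessing step (the window size of 100 is an arbitrary choice here):

```python
import numpy as np

def to_fixed_window(samples, size=100):
    """Zero-pad or truncate an (n, 3) series of X/Y/Z readings to (size, 3)."""
    samples = np.asarray(samples, dtype=np.float32)
    window = np.zeros((size, 3), dtype=np.float32)
    n = min(len(samples), size)
    window[:n] = samples[:n]
    return window

print(to_fixed_window(np.ones((10, 3))).shape)   # (100, 3) -- short series padded
print(to_fixed_window(np.ones((250, 3))).shape)  # (100, 3) -- long series truncated
```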
|
<python><android><tensorflow><accelerometer><tensorflow-lite>
|
2023-02-05 14:34:03
| 1
| 9,986
|
SyncMaster
|
75,352,822
| 12,494,765
|
I get UnicodeDecodeError while running Flask
|
<p>I get <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x99 in position 0: invalid start byte</code> when I try to start a Flask server.</p>
<p>The following is the code</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
app = Flask(__name__)
app.run(debug=True, port=5000)
</code></pre>
<p>This generates the following error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nero/.local/lib/python3.10/site-packages/flask/app.py", line 1142, in run
cli.load_dotenv()
File "/home/nero/.local/lib/python3.10/site-packages/flask/cli.py", line 709, in load_dotenv
dotenv.load_dotenv(path, encoding="utf-8")
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 332, in load_dotenv
return dotenv.set_as_environment_variables()
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 90, in set_as_environment_variables
for k, v in self.dict().items():
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 74, in dict
self._dict = OrderedDict(resolve_variables(raw_values, override=self.override))
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 222, in resolve_variables
for (name, value) in values:
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 82, in parse
for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 24, in with_warn_for_invalid_lines
for mapping in mappings:
File "/usr/lib/python3/dist-packages/dotenv/parser.py", line 180, in parse_stream
reader = Reader(stream)
File "/usr/lib/python3/dist-packages/dotenv/parser.py", line 71, in __init__
self.string = stream.read()
File "/usr/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x99 in position 0: invalid start byte
</code></pre>
<p>This is bare-bones code that should not generate any errors, but it does.</p>
<blockquote>
<p>Attaching a screenshot for reference</p>
</blockquote>
<p><a href="https://i.sstatic.net/nyzEx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nyzEx.png" alt="enter image description here" /></a></p>
<p>My Environment settings are</p>
<blockquote>
<p>Python 3.10.6</p>
<p>Ubuntu 22.04 - Linux [Tested on a Windows machine also]</p>
<p>Flask 2.2.2</p>
</blockquote>
<p>Thanks in Advance</p>
<p>NB :</p>
<ul>
<li>This is not a platform-specific issue: tried on Linux and Windows, in the Python shell and by executing a Python file.</li>
<li>This is not related to crypto issues; there may be other questions with the same heading, but they are not related to Flask.</li>
</ul>
<p>I tried to run a Flask server with the default configuration, expecting it to start normally.</p>
|
<python><python-dotenv>
|
2023-02-05 14:01:55
| 1
| 388
|
Danwand N S
|
75,352,810
| 11,006,089
|
How to web scrap Economic Calendar data from TradingView and load into Dataframe?
|
<p>How can I load the Economic Calendar data from the TradingView link below into a DataFrame?</p>
<pre><code>Link: https://in.tradingview.com/economic-calendar/
Filter-1: Select Data for India and United States
Filter-2: Data for This Week
</code></pre>
<p><a href="https://i.sstatic.net/Ftb28.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ftb28.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><pandas><selenium><web-scraping>
|
2023-02-05 14:00:27
| 1
| 465
|
Rohit
|
75,352,799
| 11,963,167
|
Creating an empty dataframe in pandas with column of type datetime64[ns, Europe/Paris]
|
<p>I want to create an empty dataframe in pandas with a single column 'time'. I also want it to be of type <code>datetime64[ns, 'Europe/Paris']</code>, ie. to be able to store timezone-aware timestamps.</p>
<p>I actually need to return an empty dataframe under certain conditions, but I still want to be able to perform some basic operations that require the type to be defined (for instance, merging it with other similar dataframes / performing group by using the column, and so on...).</p>
<p>For now, the simple <code>pd.DataFrame(columns=['time'])</code> creates a column of type <code>object</code>.
I tried to use <code>pd.DataFrame({'time': pd.Series(dtype=np.datetime64)})</code>, but I get <code>ValueError: The 'datetime64' dtype has no unit. Please pass in 'datetime64[ns]' instead.</code> (which I cannot pass by the way). Plus, it would not provide me the appropriate timezone.</p>
<p>Any idea how to do that ?</p>
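The dtype string from the title can be passed directly: pandas parses `'datetime64[ns, Europe/Paris]'` into a timezone-aware `DatetimeTZDtype` (the equivalent object form is `pd.DatetimeTZDtype(tz='Europe/Paris')`). A minimal sketch:

```python
import pandas as pd

# an empty frame whose 'time' column is already tz-aware
df = pd.DataFrame({'time': pd.Series(dtype='datetime64[ns, Europe/Paris]')})

print(len(df))           # 0 -- the frame is empty
print(df['time'].dtype)  # datetime64[ns, Europe/Paris]
```

Because the dtype is set up front, merges and group-bys against other tz-aware frames should behave consistently even when this frame is empty.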
|
<python><pandas><dataframe><numpy><python-datetime>
|
2023-02-05 13:58:35
| 1
| 496
|
Clej
|
75,352,728
| 11,079,284
|
How to get a high-quality image from an IIIFv21 API in Python?
|
<p>I have an <code>info.json</code> file in the IIIFv21 format, and I want to use it to retrieve high-quality image tiles. We can use, for example, the file that is available <a href="https://iiif.nli.org.il/IIIFv21/FL202979482/info.json" rel="nofollow noreferrer">here</a> and looked like:</p>
<pre><code>> {
"@context" : "http://iiif.io/api/image/2/context.json",
"@id" : "https://iiif.nli.org.il/IIIFv21/FL202979482",
"protocol" : "http://iiif.io/api/image",
"width" : 5414,
"height" : 3763,
"sizes" : [ {
"width" : 84,
"height" : 58
}, {
"width" : 169,
"height" : 117
}, {
"width" : 338,
"height" : 235
}, {
"width" : 676,
"height" : 470
}, {
"width" : 1353,
"height" : 940
}, {
"width" : 2707,
"height" : 1881
} ],
"tiles" : [ {
"width" : 1024,
"height" : 1024,
"scaleFactors" : [ 1, 2, 4, 8, 16, 32, 64 ]
} ],
"profile" : [ "http://iiif.io/api/image/2/level1.json", {
"formats" : [ "jpg" ],
"qualities" : [ "native", "color", "gray", "bitonal" ],
"supports" : [ "regionByPct", "regionSquare", "sizeByForcedWh", "sizeByWh", "sizeAboveFull", "rotationBy90s", "mirroring" ],
"maxWidth" : 526,
"maxHeight" : 526,
"maxArea" : 111111
} ],
"rights" : "http://web.nli.org.il/sites/NLI/Hebrew/library/items-terms-of-use/Pages/nli-copying-prohibited.aspx"
}
</code></pre>
<p>I have been trying to follow the <a href="https://iiif.io/api/image/2.1/" rel="nofollow noreferrer">IIIF specification</a>, but I'm struggling to retrieve any tile. I've tried a few different approaches, but they haven't been successful.</p>
<p>Some of my attempts are:</p>
<ol>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0,0,1024,1024/1024,/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0,0,1024,1024/1024,/0/default.jpg</a></li>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0,0,1024,1024/1024,1024/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0,0,1024,1024/1024,1024/0/default.jpg</a></li>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0,0/1024,1024/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0,0/1024,1024/0/default.jpg</a></li>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0,1024,1024/1024,/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0,1024,1024/1024,/0/default.jpg</a></li>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0,1024,1024/1024,1024/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0,1024,1024/1024,1024/0/default.jpg</a></li>
<li><a href="https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0/1024,1024/0/default.jpg" rel="nofollow noreferrer">https://iiif.nli.org.il/IIIFv21/FL202979451/0/0,0/1024,1024/0/default.jpg</a></li>
</ol>
<p>You can change the values in the <code>1024</code>s to match the desired tile location, but this is the top left tile.</p>
<p>Thank you in advance for any assistance you can provide.</p>
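<p>For what it's worth, attempts 1 and 2 already match the canonical IIIF Image API 2.1 order, <code>{id}/{region}/{size}/{rotation}/{quality}.{format}</code>, so the failure may come from the server's advertised limits (<code>maxWidth</code>/<code>maxHeight</code> 526 and <code>maxArea</code> 111111 in the profile) rather than from URL syntax — a guess, untested against this server. A small URL builder (using the identifier from the <code>info.json</code> shown; the attempts above use a different identifier) keeps the syntax straight:</p>

```python
def iiif_tile_url(base_id, x, y, w, h, size_w=None):
    """Build an IIIF Image API 2.1 URL: {id}/{region}/{size}/{rotation}/{quality}.{format}.

    size_w=None requests the region at "full" size. Note the server's
    profile advertises maxWidth/maxHeight 526 and maxArea 111111, so a
    large requested size may be rejected regardless of URL syntax.
    """
    region = f"{x},{y},{w},{h}"
    size = f"{size_w}," if size_w else "full"
    return f"{base_id}/{region}/{size}/0/default.jpg"

# e.g. the top-left 1024x1024 region, scaled to 256px wide (within maxArea)
url = iiif_tile_url("https://iiif.nli.org.il/IIIFv21/FL202979482", 0, 0, 1024, 1024, 256)
```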
|
<python><image><web-scraping><iiif>
|
2023-02-05 13:48:00
| 1
| 1,052
|
Yanirmr
|
75,352,480
| 7,665,821
|
NameError when putting variable declaration in if __name__ == '__main__':
|
<p>I have a Python file named <code>main.py</code>. I am running it on Python 3.9.13 on Windows.</p>
<pre><code>import uvicorn
from fastapi import FastAPI

app = FastAPI()
@app.post('/c')
async def c(b: str):
print(a)
if __name__ == '__main__':
a = load_embeddings('embeddings')
uvicorn.run('main:app', host='127.0.0.1', port=80)
</code></pre>
<p>Running this, then invoking POST /c, causes a 500 error: <code>NameError: name 'a' is not defined</code>.</p>
<p>However, it is obvious that <code>a</code> will be defined before the server is run. If I move <code>a</code> outside of the <code>if __name__ == '__main__':</code> block then it works, but it causes <code>load_embeddings</code> to be run multiple times (3 times, to be exact) for reasons I don't understand. Since <code>load_embeddings</code> takes a long time for me, I do not want the duplicate execution.</p>
<p>I wish to look for either of these as a solution to my issue: stop whatever outside <code>if __name__ == '__main__':</code> from executing multiple times, OR make <code>a</code> defined globally when it is being defined under <code>if __name__ == '__main__':</code>.</p>
<p>Note: variable names are intentionally renamed for ease of reading. Please do not advise me anything on coding style/naming conventions. I know the community is helpful but that's not the point here, thanks.</p>
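<p>A likely explanation, hedged since the full setup isn't shown: <code>uvicorn.run('main:app', ...)</code> imports <code>main</code> again as a module, and in that imported copy <code>__name__</code> is <code>'main'</code>, not <code>'__main__'</code> — so the guarded block never runs in the process that serves requests. Conversely, module-level code runs once per import, and the reloader/worker machinery can import the module several times. One common pattern is to make the expensive load lazy and memoized; <code>load_embeddings</code> below is a stand-in stub, not the real loader:</p>

```python
from functools import lru_cache

def load_embeddings(name):
    # Stand-in stub for the questioner's expensive loader
    return {"loaded": name}

@lru_cache(maxsize=1)
def get_embeddings(name):
    # The expensive load runs at most once per process; the endpoint
    # would call get_embeddings('embeddings') instead of reading `a`
    return load_embeddings(name)
```

<p>With multiple worker processes, each process still pays the load once; that is inherent to process-based servers.</p>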
|
<python><variables><fastapi><uvicorn>
|
2023-02-05 13:08:08
| 1
| 381
|
Billy Cao
|
75,352,393
| 5,141,652
|
Python tkinter dynamic checkbutton method
|
<p>I have a settings page with lots of checkbuttons on it, so I am trying to reduce the code, but I am struggling to get the checkbutton values to work. So far I have:</p>
<pre><code>def _create_checkbox(self, label, index, state=0):
x = label.replace(" ", "-").lower()
self.settings_list[x] = state
ttk.Label(self.settings_frame, text=label).grid(
row=index, column=0)
ttk.Checkbutton(
self.settings_frame, variable=self.settings_list[x]
).grid(row=index, column=1)
</code></pre>
<p>The idea was to put the checkbutton names in a dict and then update the dict with each value, but it is not working as planned: with my code, all checkbutton values update as if they were one.</p>
<p>example list:</p>
<pre><code>self.settings_list = {"force-gamemode": "0", "allow-cheats": "1"}
</code></pre>
<p>Edit to show a minimal working example. I did originally try to use variables (<code>IntVar</code>) but it failed (I can't remember why), which is why I then switched to a dict:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("tkinter dynamic checkbox example")
self.geometry("700x450")
self.settings_list = {"force-gamemode": "0", "allow-cheats": "1"}
self.settings_frame = tk.Frame(self)
self.settings_frame.grid(row=0, column=0)
# create settings content
self._create_checkbox("Force Gamemode", 0, 0)
tk.Label(
self.settings_frame, text="Label to show content between checkboxes"
).grid(row=1, column=0)
self._create_checkbox("Allow Cheats", 2, 0)
tk.Button(
self.settings_frame,
text="Create Properties File",
command=self._create_properties,
).grid(row=3, column=0, sticky="ew")
def _create_checkbox(self, label, index, state=0):
x = label.replace(" ", "-").lower()
self.settings_list[x] = state
ttk.Label(self.settings_frame, text=label).grid(
row=index, column=0, padx=5, pady=5, sticky="w"
)
ttk.Checkbutton(self.settings_frame, variable=self.settings_list[x]).grid(
row=index, column=1, padx=5, pady=5, sticky="w"
)
def _create_properties(self):
print(self.settings_list["force-gamemode"])
print(self.settings_list["allow-cheats"])
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-02-05 12:55:00
| 2
| 1,037
|
Chris
|
75,351,935
| 2,749,397
|
BoundaryNorm, unexpected behavior
|
<p><em>My code:</em></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
x = y = np.linspace(0, 10, 51)
X, Y = np.meshgrid(x, y)
Z = X+Y # Z.min() => 0, Z.max() => 20
cf = plt.contourf(X, Y, Z,
levels=[5, 10, 15],
norm=colors.BoundaryNorm([5, 10, 15], 256, extend='both'))
cb = plt.colorbar(cf, extend='both')
plt.show()
</code></pre>
<p><em>Its output:</em></p>
<p><a href="https://i.sstatic.net/Nehmv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nehmv.png" alt="enter image description here" /></a></p>
<p><em>My expectations:</em></p>
<ul>
<li>in the main plot, a dark blue lower triangle in place of the white one,</li>
<li>ditto, a bright yellow upper triangle,</li>
<li>the colorbar decorated with an upper bright yellow triangle and a lower dark blue triangle.</li>
</ul>
<p><strong>My question:</strong></p>
<p>What have I done wrong?</p>
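<p>One likely explanation, offered as a guess: <code>contourf</code> only fills <em>between</em> the levels you pass; the <code>extend</code> on the norm controls the color mapping, but filling the below-minimum and above-maximum regions has to be requested via <code>contourf</code>'s own <code>extend</code> argument (the explicit <code>BoundaryNorm</code> may then be unnecessary, since <code>contourf</code> builds one from the levels itself). A sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import matplotlib.pyplot as plt

x = y = np.linspace(0, 10, 51)
X, Y = np.meshgrid(x, y)
Z = X + Y

# extend='both' on contourf itself adds the below-min and above-max bands
cf = plt.contourf(X, Y, Z, levels=[5, 10, 15], extend='both')
cb = plt.colorbar(cf)  # the colorbar inherits the extension triangles from cf
```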
|
<python><matplotlib><colorbar><colormap><normalize>
|
2023-02-05 11:37:57
| 1
| 25,436
|
gboffi
|
75,351,892
| 6,810,805
|
How can i add several audio tracks in moviepy to support different languages?
|
<p>I currently have moviepy code that sets an audio track on my CompositeVideoClip:</p>
<pre><code>video = CompositeVideoClip([sequence])
video = video.set_audio(audio)
</code></pre>
<p>I would like to localize my video by adding additional soundtracks to it, something like this:</p>
<pre><code>video = CompositeVideoClip([sequence])
languages = ["en", "es", "it"]
for language in languages:
# imagine i have localized_audios dict prepared
soundtrack_name = language
video = video.set_audio(localized_audios[language], soundtrack_name)
</code></pre>
<p>Is there a way to achieve this? Thank you in advance.</p>
|
<python><localization><video-processing><moviepy>
|
2023-02-05 11:30:16
| 0
| 1,758
|
Mike Kovetsky
|
75,351,823
| 3,735,871
|
PySpark- How to handle source data schema change
|
<p>I'm trying to use PySpark to read from Avro file into dataframe, do some transformations and write the dataframe out to HDFS as hive tables using the code below. The file format for the hive tables is parquet.</p>
<pre><code>df.write.mode("overwrite").format("hive").insertInto("mytable")
# this writes a partition every day; when re-run, it overwrites that day's partition
</code></pre>
<p>The problem is that when the source data has a schema change, like an added column, the job fails with an error saying the source file structure does not match the existing table schema. How should I handle this case programmatically? Many thanks for your help.</p>
<p>Edit: I want the new schema changes to be reflected in the target table. I'm looking for a programmatic way to do this.</p>
|
<python><dataframe><apache-spark><pyspark><hive>
|
2023-02-05 11:17:41
| 3
| 367
|
user3735871
|
75,351,791
| 10,666,587
|
Numpy close range equivalent to `x <values < z`
|
<p>If I am looking for a boolean array whether my array is in a close range, I can use the next code:</p>
<pre><code>import numpy as np
vals = np.arange(10)
in_range = (vals < 8) & (vals > 3)  # equiv. to np.logical_and(vals < 8, vals > 3)
</code></pre>
<p>For the code above, I wish I could instead write <code>in_range = 3 < vals < 8</code>.</p>
<p>In my opinion it looks more elegant, and it is not ambiguous.</p>
<p>I guess I am not the first to ask, but I swear I couldn't find an answer online. I would be glad for a reference, or a new answer explaining why this is not a good design for the <code>numpy</code> lib.</p>
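<p>For context on why no library can offer the chained form: Python expands <code>3 < vals < 8</code> to <code>(3 < vals) and (vals < 8)</code>, and <code>and</code> must collapse the first comparison (an array) to a single bool, which numpy refuses as ambiguous. That step is wired into the language, so numpy never sees the chain. The <code>&</code> spelling (note the fixed <code>logical_and</code> call signature, which takes two arrays, not a list) is the idiomatic equivalent:</p>

```python
import numpy as np

vals = np.arange(10)

in_range = (vals > 3) & (vals < 8)               # operator form
in_range2 = np.logical_and(vals > 3, vals < 8)   # function form, same result

assert np.array_equal(in_range, in_range2)
```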
|
<python><numpy>
|
2023-02-05 11:08:55
| 0
| 328
|
Shaq
|
75,351,688
| 1,184,717
|
ruamel.yaml dump explicit/dereferenced data
|
<p>Using <code>ruamel.yaml</code>, how do I (1) produce output with dereferenced values, and (2) reproduce exactly the same output as the input (same order, comments, references, aliases, anchors)?</p>
<p>For instance, given the following code</p>
<pre class="lang-py prettyprint-override"><code>import sys
import ruamel.yaml
yaml_input = """\
shape: &shape
color: blue
square: &square
a: 5
rectangle:
<<: *shape
<<: *square
b: 3
color: green
"""
yaml = ruamel.yaml.YAML()
yaml.allow_duplicate_keys = True
data = yaml.load(yaml_input)
yaml.dump(data, sys.stdout)
</code></pre>
<p>Its output is</p>
<pre class="lang-yaml prettyprint-override"><code>shape: &shape
color: blue
square:
a: 5
rectangle:
<<: *shape
b: 3
color: green
</code></pre>
<ol>
<li>How to produce a dereferenced output, like</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>shape:
color: blue
square:
a: 5
rectangle:
b: 3
a: 5
color: green
</code></pre>
<ol start="2">
<li>How to produce the same output as the input itself (with implicit data, that is, without dereferencing values), like</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>shape: &shape
color: blue
square: &square
a: 5
rectangle:
<<: *shape
<<: *square
b: 3
color: green
</code></pre>
|
<python><yaml>
|
2023-02-05 10:50:29
| 1
| 10,220
|
Mr.
|
75,351,618
| 9,640,238
|
Export DataFrame timedelta column to timestamp Excel column
|
<p>I have a DataFrame that contains a <code>datetime64</code> and a <code>timedelta64</code>. Unfortunately, I can't export the latter to a properly formatted <code>hh:mm:ss</code> column in an Excel file:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
"date": [
"2023-02-05",
"2023-02-05",
"2022-12-02",
"2022-11-29",
"2022-11-18",
],
"duration": [
"01:07:48",
"05:23:06",
"02:41:58",
"00:35:11",
"02:00:20",
],
}
df = pd.DataFrame(data)
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
df['duration'] = pd.to_timedelta(df['duration'])
with pd.ExcelWriter(
"df.xlsx",
datetime_format="YYYY-MM-DD",
engine="xlsxwriter",
) as writer:
workbook = writer.book
time_format = workbook.add_format({"num_format": "HH:MM:SS"})
df.to_excel(writer, sheet_name="sheet", index=False)
worksheet = writer.sheets["sheet"]
worksheet.set_column("A:A", 20)
worksheet.set_column("B:B", 50, cell_format=time_format)
</code></pre>
<p>The resulting Excel file will display like this:</p>
<p><a href="https://i.sstatic.net/nTJ7m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nTJ7m.png" alt="Excel file" /></a></p>
<p>So, the <code>datetime_format</code> setting on the <code>ExcelWriter</code> object is applied correctly to column A, as is the width setting for column B, but the number formatting isn't working.</p>
<p>What am I doing wrong?</p>
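<p>One likely cause, stated as a hypothesis: Excel has no native timedelta type — it stores times as fractions of a 24-hour day — and pandas' writer does not translate <code>timedelta64</code> into that representation for you. Converting the column to day fractions before writing gives plain floats that the <code>HH:MM:SS</code> number format can then display:</p>

```python
import pandas as pd

df = pd.DataFrame({"duration": pd.to_timedelta(["01:07:48", "05:23:06"])})

# Excel serial times: 1.0 == 24 hours, so divide by one day before writing;
# the column becomes floats (e.g. 01:07:48 -> 4068/86400 of a day) that the
# worksheet's {"num_format": "HH:MM:SS"} format renders as hh:mm:ss
df["duration"] = df["duration"] / pd.Timedelta(days=1)
```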
|
<python><excel><pandas><dataframe><xlsxwriter>
|
2023-02-05 10:37:21
| 2
| 2,690
|
mrgou
|
75,351,547
| 9,279,753
|
Equivalent C structure in Python
|
<p>I have the following in C:</p>
<pre><code>typedef struct {
short Whole;
unsigned short Frac;
} FirstStruct, FAR *pFirstStruct;
typedef struct {
char FirstArr[3];
    FirstStruct SecondArr[3][3];
} SecStruct, FAR * pSecStruct;
</code></pre>
<p>I would like to do something similar in Python. Found <a href="https://stackoverflow.com/a/45384034/9279753">this answer</a> explaining how to use <code>ctypes</code> for this purpose, but I am having problems with <code>SecondArr[3][3]</code>. Here's the code in Python:</p>
<pre><code>class FirstStruct(ctypes.Structure):
_pack_ = 2
_fields = [("Whole", ctypes.c_short),
("Fract", ctypes.c_ushort)]
class SecStruct(ctypes.Structure):
class _A(ctypes.Array):
_type_ = ctypes.c_char
_length_ = 3
class _B(ctypes.Array):
_type_ = FirstStruct
_length_ = 3
class _C(ctypes.Array):
_type_ = _B
_length_ = 3
_pack_ = 2
_fields = [("FirstArr", _A),
("SecondArr", _C)]
</code></pre>
<p>By doing that, Pylance complains that <code>"_B" is not defined</code>, and I'm not completely sure it will work, nor if it is safe to mix two subclasses in that way to create a new C structure.</p>
<p>Is this the correct way of doing it even if Pylance complains about it, or is there any other way to convert the structure mentioned?</p>
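<p>A sketch of the more idiomatic spelling: ctypes arrays are usually created with the <code>*</code> operator rather than by subclassing <code>ctypes.Array</code>, which sidesteps the Pylance scoping complaint entirely. Note also that the attribute ctypes actually reads is <code>_fields_</code> (with a trailing underscore); the <code>_fields</code> in the snippet above would silently define no members.</p>

```python
import ctypes

class FirstStruct(ctypes.Structure):
    _pack_ = 2
    _fields_ = [("Whole", ctypes.c_short),    # note the trailing underscore on _fields_
                ("Fract", ctypes.c_ushort)]

class SecStruct(ctypes.Structure):
    _pack_ = 2
    _fields_ = [("FirstArr", ctypes.c_char * 3),        # char FirstArr[3]
                ("SecondArr", (FirstStruct * 3) * 3)]   # FirstStruct SecondArr[3][3]
```

<p>Elements are then addressed as <code>s.SecondArr[i][j].Whole</code>, matching the C two-dimensional layout.</p>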
|
<python><c><ctypes>
|
2023-02-05 10:26:29
| 2
| 599
|
Jose Vega
|
75,351,526
| 3,146,582
|
Can't start PyGame Zero (pgzrun) from cmd
|
<p>I can't start the PyGame Zero window from cmd. According to a book I bought for my kid, I am supposed to start it with</p>
<pre><code>pgzrun test.py
</code></pre>
<p>Unfortunately, what I get is:</p>
<pre><code>'pgzrun' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>As a Python beginner, I started by checking my Python and pygame lib:</p>
<pre><code>C:\Users\mikol\OneDrive\python\examples\asteroids>python --version
Python 3.10.9
C:\Users\mikol\OneDrive\python\examples\asteroids>pip show pygame
Name: pygame
Version: 2.1.2
Summary: Python Game Development
Home-page: https://www.pygame.org
Author: A community project.
Author-email: pygame@pygame.org
License: LGPL
Location: c:\users\mikol\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages
</code></pre>
<p>I have also double-checked adding</p>
<pre><code>c:\users\mikol\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages
</code></pre>
<p>to PATH variable (using Win 11). Am I missing something?</p>
|
<python><pgzero>
|
2023-02-05 10:22:15
| 3
| 751
|
Tomek
|
75,351,398
| 196,489
|
python-gnupg always returning empty decryption result
|
<p>I am using this piece of code to generate a key, encrypt some content and then decrypt it</p>
<pre><code>import tempfile
from io import StringIO

import gnupg

with tempfile.TemporaryDirectory() as tmpdir:
gpg = gnupg.GPG(gnupghome=str(tmpdir), verbose=True)
gpg.encoding = 'utf-8'
input_data = gpg.gen_key_input(
key_type='RSA',
key_length=2048,
name_real='Atfinity',
name_comment='Generated by Atfinity',
name_email='no-reply@atfinity.ch',
passphrase='test',
)
key = gpg.gen_key(input_data)
content = 'Encrypt this please'
encrypted = gpg.encrypt_file(
StringIO(content),
recipients=key.fingerprint,
always_trust=True,
passphrase='test',
sign=False
)
decrypted = gpg.decrypt(str(encrypted), always_trust=True, passphrase='test')
self.assertEqual(content, str(decrypted))
</code></pre>
<p>However, <code>decrypted</code> is always an empty string. What am I doing wrong?</p>
|
<python><gnupg><python-gnupgp>
|
2023-02-05 09:57:38
| 0
| 12,904
|
Thorben Croisé
|
75,351,387
| 4,053,840
|
Passing azure secret variables to pytest in pipeline?
|
<p>We are running integration tests, written in Python, in Azure Pipeline. These tests access a database, and the credentials for accessing the database are stored in a variable group in Azure, including secret variables. This is the part of the yaml file, where the integration tests are started:</p>
<pre><code>jobs:
  - job: IntegrationTests
    variables:
      - group: <some_variable_group>
    steps:
      - script: |
          pdm run pytest \
            --variables "$VARIABLE_FILE" \
            --test-run-title="$TEST_TITLE" \
            --napoleon-docstrings \
            --doctest-modules \
            --color=yes \
            --junitxml=junit/test-results.xml \
            integration
        env:
          DB_USER: $(SMDB_USER)
          DB_PASSWORD: $(SMDB_PASSWORD)
          DB_HOST: $(SMDB_HOST)
          DB_DATABASE: $(SMDB_DATABASE)
</code></pre>
<p>The problem is that we cannot read the value of SMDB_PASSWORD, as it is a secret variable. To use secret variables, it is advised to pass them as arguments to a PythonScript task (like here: <a href="https://stackoverflow.com/questions/57374248/passing-arguments-to-python-script-in-azure-devops">Passing arguments to python script in Azure Devops</a>), but I am not sure how to rewrite this script as a PythonScript task, as it relies on pdm.</p>
|
<python><azure><environment-variables><pipeline>
|
2023-02-05 09:54:44
| 1
| 483
|
Ivajlo Iliev
|
75,351,259
| 3,735,871
|
Write multiple Avro files from pyspark to the same directory
|
<p>I'm trying to write a PySpark dataframe out as Avro files to the HDFS path <code>/my/path/</code>, partitioned by the column 'partition', so under <code>/my/path/</code> there should be the following sub-directory structure:</p>
<pre><code>partition= 20230101
partition= 20230102
....
</code></pre>
<p>Under these sub-directories there should be the Avro files. I'm trying to use</p>
<pre><code>df1.select("partition","name","id").write.partitionBy("partition").format("avro").save("/my/path/")
</code></pre>
<p>It succeeded the first time, but when I tried to write another df with a new partition, it failed with the error: path /my/path/ already exists. How should I achieve this? Many thanks for your help. The df format is as below:</p>
<pre><code>partition   name   id
20230101    aa     10    --- this row is the content of the first df
20230102    bb     20    --- this row is the content of the second df
</code></pre>
|
<python><apache-spark><pyspark><hdfs><avro>
|
2023-02-05 09:27:06
| 1
| 367
|
user3735871
|
75,351,252
| 887,651
|
Link Django models together via GenericKey?
|
<p>I have the following models:</p>
<pre><code>class Team(models.Model):
users = models.ManyToManyField("User")
class User(AbstractUser):
...
class Subscription(models.Model):
team = models.ForeignKey("Team", on_delete=models.CASCADE)
name = models.CharField(max_length=64)
class Package(models.Model):
name = models.CharField(max_length=64) # packageA, packageB
max_activation_number = models.PositiveIntegerField(default=1)
class Activation(models.Model):
subscription = models.ForeignKey("Subscription", on_delete=models.CASCADE)
package = models.ForeignKey("Package", on_delete=models.CASCADE)
    created = models.DateTimeField()
class PackageA(models.Model):
...
class PackageB(models.Model):
...
</code></pre>
<p>A team has one subscription, through which it can activate one or more packages; the same package can be activated more than once (the number of times is specified by <code>max_activation_number</code>).</p>
<p><strong>Example:</strong></p>
<p>A team has a subscription called <em><strong>Suite</strong></em>, and the available packages are <em><strong>EmailAccount</strong></em> and <em><strong>Calendar</strong></em>. The team chooses to activate 3 EmailAccounts and 2 Calendars (packages are not tied to each other).</p>
<p>For that reason the team could activate the same package more times.</p>
<p>For every activation I need to create a new instance of <strong>PackageA</strong> or <strong>PackageB</strong> (depending on the choice the team made) and then "link" to that instance somehow.</p>
<p>Should I use a <em><strong>GenericForeignKey</strong></em> field inside the Activation model? I need not only the name of the chosen package, but also which specific instance it is.</p>
|
<python><django><database><django-models>
|
2023-02-05 09:26:16
| 2
| 4,644
|
Dail
|
75,350,843
| 3,415,077
|
What is the recommended async Oracle driver for SQLAlchemy 2.0?
|
<p>We are developing an asynchronous Python-based server using SQLAlchemy 2. So far, asynchronous access to PostgreSQL, MySQL and SQLite work fine. However, we cannot find an async driver for Oracle.</p>
|
<python><oracle-database><sqlalchemy>
|
2023-02-05 08:01:59
| 1
| 2,242
|
Juergen Zimmermann
|
75,350,817
| 3,507,584
|
Plotly - Remove axis ticks and numbers but keep label
|
<p>In this MWE, I have a plot with ticks and labels.</p>
<pre><code>fig = go.Figure(data=go.Scatter(x=[2.3], y=[5.3], mode='markers'))
fig.update_xaxes(range=[0,10], constrain="domain",title_text="Some x label",title_font={"size":22,"color":"black"}, showgrid=False)
fig.update_yaxes(scaleanchor="x",scaleratio = 1, range=[0,10], title_text="Some y label",title_font={"size":22,"color":"black"}, showgrid=False)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/hzkku.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hzkku.png" alt="With ticks" /></a></p>
<p>If I remove the grid and the ticks, the x label becomes the plot title and the y label stays far away from the plot. How can I just remove the grid and numbers while keeping the axis labels where they are (or even getting them a bit closer to the axis lines as there are no numbers now)?</p>
<pre><code>fig = go.Figure(data=go.Scatter(x=[2.3], y=[5.3], mode='markers'))
fig.update_xaxes(range=[0,10], constrain="domain",showgrid=False,showticklabels=False,rangemode="nonnegative",
title_text="Some x label",title_font={"size":22,"color":"black"})
fig.update_yaxes(scaleanchor="x",scaleratio = 1, range=[0,10], showgrid=False,showticklabels=False,rangemode="nonnegative",
title_text="Some y label",title_font={"size":22,"color":"black"})
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/6Hab3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Hab3.png" alt="Without ticks" /></a></p>
|
<python><plotly><axis><axis-labels>
|
2023-02-05 07:55:18
| 2
| 3,689
|
User981636
|
75,350,689
| 305,135
|
Pandas merging value of two rows in columns of a single row
|
<p>I have data like this; it's the output of a <strong>groupby</strong>:</p>
<pre><code>numUsers = df.groupby(["user","isvalid"]).count()
count
user isvalid
5 0.0 1336
1.0 387
</code></pre>
<p>But I need to have count of <strong>count_valid</strong> and <strong>count_invalid</strong> columns for each user, like this:</p>
<pre><code> count_valid count_invalid
user
5 387 1336
</code></pre>
<p>How can I do this in an optimized way in Pandas?</p>
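<p>One way to sketch this (column names taken from the question): count with <code>size()</code> (which counts rows, unlike <code>count()</code>, which counts non-null values per column), pivot the inner <code>isvalid</code> level into columns with <code>unstack</code>, then rename:</p>

```python
import pandas as pd

df = pd.DataFrame({"user":    [5, 5, 5, 5, 6],
                   "isvalid": [0.0, 0.0, 1.0, 1.0, 1.0]})

counts = (df.groupby(["user", "isvalid"]).size()
            .unstack(fill_value=0)                 # one column per isvalid value
            .rename(columns={0.0: "count_invalid", 1.0: "count_valid"}))
```

<p><code>fill_value=0</code> covers users that have only valid or only invalid rows.</p>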
|
<python><pandas><aggregate>
|
2023-02-05 07:26:23
| 2
| 19,540
|
AVEbrahimi
|
75,350,612
| 1,800,459
|
How to get user's IP address using Amazon API Gateway and FastAPI?
|
<p>I am using Amazon API Gateway that forwards requests to a FastAPI server (I am not using nginx). I am trying to get the user's IP address in a FastAPI endpoint, but it does not seem to be working (<code>x_forwarded_for</code> is empty).</p>
<p>Here is a snippet of my FastAPI backend:</p>
<pre><code>@app.post("/upload")
async def runUpload(file: UploadFile,request: Request, x_forwarded_for: str = Header(None)):
userIp = x_forwarded_for.split(',')[0].strip() if x_forwarded_for else request.client.host
</code></pre>
<p>This is how I start my server:</p>
<pre><code>if __name__ == '__main__':
uvicorn.run(app,port=PORT,host='0.0.0.0',proxy_headers=True,forwarded_allow_ips="*")
</code></pre>
<p>When I print <code>request.headers</code> I see the following (with the real numbers, of course), so the data is there:</p>
<pre><code>'forwarded': 'by=1.2.3.4;for=5.6.7.8;host=555.execute-api.us-east-1.amazonaws.com;proto=https'
</code></pre>
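<p>Since the header API Gateway appears to be sending is the RFC 7239 <code>Forwarded</code> header rather than <code>X-Forwarded-For</code>, one sketch is to read that header and pull the <code>for=</code> element out. The parsing below is deliberately simple and ignores quoted/bracketed IPv6 edge cases:</p>

```python
import re
from typing import Optional

def client_ip_from_forwarded(forwarded: str) -> Optional[str]:
    # RFC 7239 example: by=1.2.3.4;for=5.6.7.8;host=example.com;proto=https
    match = re.search(r'for="?\[?([0-9a-fA-F.:]+)', forwarded)
    return match.group(1) if match else None
```

<p>In the endpoint this could be used as something like <code>ip = client_ip_from_forwarded(request.headers.get("forwarded", "")) or request.client.host</code> — an untested sketch against your gateway setup.</p>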
|
<python><amazon-web-services><fastapi><aws-api-gateway><x-forwarded-for>
|
2023-02-05 07:06:21
| 0
| 1,134
|
AJ222
|
75,350,533
| 722,553
|
How do I validate using pytest that my Python code is not logging any stack traces?
|
<p>I would like to use pytest to check that my function is not generating any stack trace data into logs, e.g. via <a href="https://docs.python.org/3/library/logging.html#logging.exception" rel="nofollow noreferrer">logging.exception()</a>, but the <code>caplog</code> object (<a href="https://docs.pytest.org/en/7.1.x/how-to/logging.html" rel="nofollow noreferrer">docs</a>) doesn't contain any information about the stack trace in the <code>records</code> or <code>record_tuples</code> attributes. What could I do?</p>
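<p>The stack trace indeed never appears in <code>record_tuples</code> (that holds only logger name, level, and message), but each <code>LogRecord</code> in <code>caplog.records</code> still carries <code>exc_info</code> (and, after formatting, possibly <code>exc_text</code>) when <code>logging.exception()</code> was used. So one sketch is to assert that no captured record has either attribute set:</p>

```python
import logging

def records_with_tracebacks(records):
    """Return the log records that carry exception info.

    logging.exception() (and logging.error(..., exc_info=True)) attach
    the exception to the LogRecord as exc_info; formatted handlers may
    also populate exc_text.
    """
    return [r for r in records if r.exc_info or r.exc_text]

# In a pytest test this could look like (caplog is the built-in fixture):
#
# def test_no_stack_traces(caplog):
#     with caplog.at_level(logging.DEBUG):
#         function_under_test()
#     assert not records_with_tracebacks(caplog.records)
```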
|
<python><pytest><python-logging>
|
2023-02-05 06:43:54
| 1
| 3,593
|
Dawngerpony
|
75,350,508
| 305,135
|
Pandas equivalent of SQL Group By while concatenating columns
|
<p>Suppose I have a table with the following columns:
firstname, surname, tel</p>
<p>something like this :</p>
<pre><code>firstname surname tel
alex topol 1234
jim jimix 2312
alex topol 2344
</code></pre>
<p>Now I want to find the number of tel entries per person and sort, so I write this in SQL:</p>
<pre><code>select concat(firstname,' ',surname),count(*) from wp_eqra_requests group by concat(firstname,' ',surname) order by count(*) desc
</code></pre>
<p>But how do I write this in Python Pandas? I tried using <strong>groupby</strong> but had no success concatenating the two columns:</p>
<pre><code>numUsers = df.groupby(by=["firstname", "surname")["tel"].count()
</code></pre>
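<p>A direct translation sketch (data taken from the question): build the concatenated name as a Series and let <code>value_counts</code> do the group-count-sort in one step, since it sorts descending by default:</p>

```python
import pandas as pd

df = pd.DataFrame({"firstname": ["alex", "jim", "alex"],
                   "surname":   ["topol", "jimix", "topol"],
                   "tel":       [1234, 2312, 2344]})

out = ((df["firstname"] + " " + df["surname"])
       .value_counts()                      # groups and sorts descending
       .rename_axis("fullname")
       .reset_index(name="count"))
```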
|
<python><sql><pandas><dataframe>
|
2023-02-05 06:33:13
| 1
| 19,540
|
AVEbrahimi
|
75,350,446
| 14,154,784
|
TemplateDoesNotExist at /accounts/login/
|
<p>I have checked the answers <a href="https://stackoverflow.com/questions/65199789/how-to-change-default-template-auth-login-in-django-this-is-not-working">here</a>, <a href="https://stackoverflow.com/questions/46846923/django-loginview-doesnt-override-attributes">here</a>, <a href="https://stackoverflow.com/questions/52789021/django-templatedoesnotexist-at-accounts-login">here</a>, and <a href="https://stackoverflow.com/questions/41722008/templatedoesnotexist-at-accounts-login-error">here</a>, and though I am getting the same error apparently I have a different root cause since those answers do not work for me.</p>
<p>Here's the problem: I am using Django's LoginView, overriding the template name, but it only sometimes works. <strong>If I get to the login screen from the navbar, it works great, but the same url when gotten to from a different method throws a <code>template does not exist</code> error.</strong> My URLs file:</p>
<pre><code>from django.urls import path
from django.contrib.auth import views as auth_views
from . import views
app_name = "accounts"
urlpatterns = [
path(
"login",
auth_views.LoginView.as_view(template_name="accounts/login.html"),
name="login",
),
path("logout", auth_views.LogoutView.as_view(), name="logout"),
path("signup", views.SignUp.as_view(), name="signup"),
]
</code></pre>
<p>I have a nav item for the login screen, and it works great. The relevant section in the template:</p>
<pre><code>{% else %}
<li class="nav-item"><a href="{% url 'groups:all' %}" class="nav-link">Groups</a></li>
<li class="nav-item"><a class="nav-link" href="{% url 'accounts:login' %}">Login</a></li>
<li class="nav-item"><a class="nav-link" href="{% url 'accounts:signup' %}">Sign Up</a></li>
{% endif %}
</code></pre>
<p>If someone clicks the <code>login</code> link in the navbar, it takes you to <code>http://127.0.0.1:8000/accounts/login</code> and works great.</p>
<p>BUT: I have another section of code where you need to be logged in for a link to work, and if you're not it redirects you to the login screen. The login URL looks good to me: <code>http://127.0.0.1:8000/accounts/login/?next=/groups/join/test-group-1</code>, but this time the login is met with this error instead of the login screen:</p>
<pre><code>TemplateDoesNotExist at /accounts/login/
registration/login.html
Request Method: GET
Request URL: http://127.0.0.1:8000/accounts/login/?next=/groups/join/test-group-1
Django Version: 4.1.4
Exception Type: TemplateDoesNotExist
Exception Value:
registration/login.html
Exception Location: /opt/homebrew/anaconda3/envs/py311django/lib/python3.11/site-packages/django/template/loader.py, line 47, in select_template
Raised during: django.contrib.auth.views.LoginView
Python Executable: /opt/homebrew/anaconda3/envs/py311django/bin/python
Python Version: 3.11.0
Python Path:
['/Users/brendenmillstein/Dropbox '
'(Personal)/BSM_Personal/Coding/Udemy/full_stack_django_tutorial/BSM_materials/python/social_network/simplesocial',
'/opt/homebrew/anaconda3/envs/py311django/lib/python311.zip',
'/opt/homebrew/anaconda3/envs/py311django/lib/python3.11',
'/opt/homebrew/anaconda3/envs/py311django/lib/python3.11/lib-dynload',
'/opt/homebrew/anaconda3/envs/py311django/lib/python3.11/site-packages']
Server time: Sun, 05 Feb 2023 06:01:34 +0000
</code></pre>
<p>I already added all the apps to the INSTALLED_APPS list in settings,py, as well as added</p>
<pre><code>BASE_DIR = Path(__file__).resolve().parent.parent
TEMPLATE_DIR = Path.joinpath(BASE_DIR, "templates")
</code></pre>
<p>as well as added the TEMPLATE_DIR to the TEMPLATES list:</p>
<pre><code>TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [TEMPLATE_DIR],
</code></pre>
<p><strong>The fact that login works when I get to it from the navbar, but doesn't work other times is confusing me.</strong></p>
<p>What am I missing? I don't understand why the same URL is working if you get to it by clicking on the Nav, but not working if you get to it by clicking on a link that you need to be logged in to use. Is something wrong in the <code>?next=</code> portion of the URL? I thought that shouldn't matter.</p>
<p>Send help, I'm going nuts over here trying to make this work.</p>
|
<python><django><django-views><django-templates><django-urls>
|
2023-02-05 06:14:22
| 1
| 2,725
|
BLimitless
|
75,350,417
| 1,112,406
|
How do instances of Andrej Karpathy's BigramLanguageModel run as functions with no `__call__` function?
|
<p>Andrej Karpathy's nanoGPT defines <a href="https://colab.research.google.com/drive/1JMLa53HDuA-i7ZBmqV7ZnA3c_fvtXnx-?usp=sharing" rel="nofollow noreferrer"><code>BigramLanguageModel</code></a> as follows.</p>
<pre><code>class BigramLanguageModel(nn.Module):
def __init__(self):
super().__init__()
...
def forward(self, x):
...
</code></pre>
<p>It then runs the following.</p>
<pre><code>m = BigramLanguageModel(vocab_size)
logits, loss = m(xb, yb)
</code></pre>
<p>The call to <code>m()</code> runs the <code>forward()</code> method as if there were a <code>__call__</code> function that called <code>forward()</code>. But no <code>__call__</code> function is visible. How does this work?</p>
<p>Thanks.</p>
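<p>The short answer, sketched below with a toy stand-in: <code>nn.Module</code> itself supplies the call behavior — in recent PyTorch versions <code>__call__</code> is assigned as an alias for <code>_call_impl</code> (or <code>_wrapped_call_impl</code>), which runs registered hooks and then dispatches to <code>self.forward</code>. Because <code>BigramLanguageModel</code> subclasses <code>nn.Module</code>, instances inherit that, even though no literal <code>def __call__</code> appears anywhere. A dependency-free imitation of the pattern:</p>

```python
class MiniModule:
    """Toy imitation of the nn.Module call protocol (no hooks, no torch)."""
    def _call_impl(self, *args, **kwargs):
        # the real nn.Module runs forward/backward hooks around this dispatch
        return self.forward(*args, **kwargs)

    # nn.Module does essentially this: __call__ is an alias, not a def
    __call__ = _call_impl

class Bigram(MiniModule):
    def forward(self, x):
        return x * 2

m = Bigram()
assert m(21) == 42  # calling the instance dispatches to forward()
```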
|
<python><call>
|
2023-02-05 06:00:18
| 1
| 2,758
|
RussAbbott
|
75,350,395
| 19,504,610
|
How should we manage datetime fields in SQLModel in python?
|
<p>Let's say I want to create an API with a <code>Hero</code> SQLModel, below are minimum viable codes illustrating this:</p>
<pre><code>from typing import Optional
from sqlmodel import Field, Relationship, SQLModel
from datetime import datetime
from sqlalchemy import Column, TIMESTAMP, text
class HeroBase(SQLModel): # essential fields
name: str = Field(index=True)
secret_name: str
age: Optional[int] = Field(default=None, index=True)
created_datetime: datetime = Field(sa_column=Column(TIMESTAMP(timezone=True),
nullable=False, server_default=text("now()")))
updated_datetime: datetime = Field(sa_column=Column(TIMESTAMP(timezone=True),
nullable=False, server_onupdate=text("now()")))
team_id: Optional[int] = Field(default=None, foreign_key="team.id")
class Hero(HeroBase, table=True): # essential fields + uniq identifier + relationships
id: Optional[int] = Field(default=None, primary_key=True)
team: Optional["Team"] = Relationship(back_populates="heroes")
class HeroRead(HeroBase): # uniq identifier
id: int
class HeroCreate(HeroBase): # same and Base
pass
class HeroUpdate(SQLModel): # all essential fields without datetimes
name: Optional[str] = None
secret_name: Optional[str] = None
age: Optional[int] = None
team_id: Optional[int] = None
class HeroReadWithTeam(HeroRead):
team: Optional["TeamRead"] = None
</code></pre>
<p>My question is, what should the <code>SQLModel</code> for <code>HeroUpdate</code> look like?</p>
<ol>
<li>Does it include the <code>created_datetime</code> and <code>updated_datetime</code> fields?</li>
<li>How do I delegate the responsibility of creating these fields to the database instead of using the <code>app</code> to do so?</li>
</ol>
|
<python><fastapi><pydantic><sqlmodel>
|
2023-02-05 05:54:14
| 2
| 831
|
Jim
|
75,350,113
| 19,716,381
|
Delete numpy arrays from memory after loading into tensorflow
|
<p>I have 4 numpy arrays <code>x_train</code>, <code>x_test</code>, <code>y_train</code>, <code>y_test</code> which consume about 5GB of memory. I have loaded these into tensorflow with the following code.</p>
<pre class="lang-py prettyprint-override"><code>train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
</code></pre>
<p><code>train_dataset</code> and <code>test_dataset</code> together use about 8GB of memory. The problem is that I am running out of memory and I no longer have any use of the numpy arrays. How can I free those variables from memory?</p>
<p>I tried <code>del &lt;variable_name&gt;</code> in Python, but it seems to delete only the reference without freeing the memory.</p>
<p>Setting the variables to <code>0</code> also doesn't work.</p>
<p>Here is the code if that could help.
<a href="https://colab.research.google.com/drive/1-nv_JRQnC3YBfyoacdufCnB6LRJacPCt?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1-nv_JRQnC3YBfyoacdufCnB6LRJacPCt?usp=sharing</a></p>
<p>The dataset is
<a href="https://www.kaggle.com/datasets/theoviel/rsna-breast-cancer-256-pngs" rel="nofollow noreferrer">https://www.kaggle.com/datasets/theoviel/rsna-breast-cancer-256-pngs</a></p>
<p>and, here is the train.csv
<a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/data?select=train.csv" rel="nofollow noreferrer">https://www.kaggle.com/competitions/rsna-breast-cancer-detection/data?select=train.csv</a></p>
|
<python><numpy><tensorflow>
|
2023-02-05 04:25:54
| 3
| 484
|
berinaniesh
|
75,350,111
| 11,280,068
|
dagster can you trigger a job to run via an api?
|
<p>I have been looking all over for the answer, but can't seem to find what I'm looking for.</p>
<p>I want to create an api endpoint that can pass information to the dagster assets and trigger a run. For example, I have the following asset in dagster</p>
<pre><code>@asset
def player_already_registered(player_name: str):
q = text(
f'''
SELECT
COUNT(*)
FROM
`player_account_info`
WHERE
summonerName = :player_name
'''
)
result = database.conn.execute(q, player_name=player_name).fetchone()[0]
return bool(result)
</code></pre>
<p>Say that I have an endpoint already made where I can pass the <code>player_name</code> via a get-parameter. How can I pass the parameter to the asset and then run the job itself?</p>
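<p>One hedged sketch of an approach: submit the run through Dagster's GraphQL API via the <code>dagster-graphql</code> client, threading the GET parameter in through <code>run_config</code>. The job name <code>player_job</code>, the webserver address, and the op/config keys below are hypothetical placeholders to adapt to your deployment:</p>

```python
# Sketch: trigger a Dagster run from an API handler (names are placeholders).
def build_run_config(player_name: str) -> dict:
    # Thread the request parameter through run_config so the op can read it.
    return {"ops": {"player_already_registered":
                    {"config": {"player_name": player_name}}}}

def trigger_run(player_name: str) -> str:
    # Deferred import: only needed when actually submitting a run.
    from dagster_graphql import DagsterGraphQLClient
    client = DagsterGraphQLClient("localhost", port_number=3000)
    return client.submit_job_execution(
        "player_job", run_config=build_run_config(player_name))
```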
|
<python><python-3.x><rest><pipeline><dagster>
|
2023-02-05 04:24:37
| 1
| 1,194
|
NFeruch - FreePalestine
|
75,350,107
| 11,969,592
|
How to pull the total balance on an SPL token for a wallet address in solana (python)
|
<p>I'm looking to get the token balance for an SPL token in solana based on the:</p>
<ul>
<li>Wallet address of the token holder</li>
<li>The token address</li>
</ul>
<p>How can I do this?</p>
<p>I thought it would be something like:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import os
url = os.getenv("SOLANA_RPC_URL")
MY_WALLET_ADDRESS = "XXXXX"
MY_TOKEN_ADDRESS = "XXX"
MINTER = "XXXX"
TOKEN_PROGRAM_ID = "TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb"
payload = {
"id": 1,
"jsonrpc": "2.0",
"method": "getTokenAccountsByOwner",
"params": [
        MY_WALLET_ADDRESS,
{"programId": TOKEN_PROGRAM_ID},
{"encoding": "jsonParsed"},
],
}
headers = {"accept": "application/json", "content-type": "application/json"}
response = requests.post(url, json=payload, headers=headers)
print(response.text)
</code></pre>
<p>But I keep getting a blank response for what seems to be valid addresses:</p>
<pre><code>{"jsonrpc":"2.0","result":{"context":{"apiVersion":"1.13.5","slot":176104484},"value":[]},"id":1}
</code></pre>
<p>I'm using <a href="https://www.alchemy.com/" rel="nofollow noreferrer">alchemy</a> as my node for Solana.</p>
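<p>One hedged sketch: <code>getTokenAccountsByOwner</code> also accepts a <code>mint</code> filter, which avoids guessing the right token program (the id in the snippet above is the Token-2022 program, while most SPL tokens live under the classic token program <code>TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA</code>). Addresses below are placeholders:</p>

```python
# Sketch: build a getTokenAccountsByOwner request filtered by mint address.
def build_payload(owner: str, mint: str) -> dict:
    return {
        "id": 1,
        "jsonrpc": "2.0",
        "method": "getTokenAccountsByOwner",
        "params": [
            owner,
            {"mint": mint},              # filter by the token's mint address
            {"encoding": "jsonParsed"},  # parsed data includes tokenAmount.uiAmount
        ],
    }

# usage: requests.post(url, json=build_payload(MY_WALLET_ADDRESS, MY_TOKEN_ADDRESS), headers=headers)
```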
|
<python><solana>
|
2023-02-05 04:23:27
| 1
| 6,207
|
Patrick Collins
|
75,350,097
| 5,869,383
|
Python array: remove one dimension from a tuple that only has one 1D list
|
<p>I have a tuple containing a single inner tuple, <code>a = ((3, 2, 2, 2, 2), )</code>, whose length is <code>1</code>; can I remove one dimension from it to get <code>(3, 2, 2, 2, 2)</code>, whose length is <code>5</code>? Thank you!</p>
<pre><code>>>> a1 = ((3, 2, 2, 2, 2),)
>>> print(len(a1))
1
>>> a2 = (3, 2, 2, 2, 2) # How can I convert from a1 to a2?
>>> print(len(a2))
5
</code></pre>
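<p>Since the outer tuple has exactly one element, indexing it gives the inner tuple directly:</p>

```python
a1 = ((3, 2, 2, 2, 2),)
a2 = a1[0]      # take the single inner tuple
print(len(a2))  # -> 5
```

<p>Tuple unpacking, <code>(a2,) = a1</code>, does the same and additionally raises an error if the outer tuple ever has more than one element.</p>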
|
<python>
|
2023-02-05 04:19:53
| 1
| 644
|
debug_all_the_time
|
75,350,072
| 2,205,916
|
`pipreqs` generating blank requirements.txt in Docker Container
|
<p>I am using Python in a Jupyter Lab notebook in a Docker container. I have the following code in one cell:</p>
<pre><code>import numpy as np
import os
import pandas as pd
</code></pre>
<p>Then I run the following cell:</p>
<pre><code>!pipreqs /app/loaded_reqs
</code></pre>
<p>and get:</p>
<pre><code>INFO: Successfully saved requirements file in /app/loaded_reqs/requirements.txt
</code></pre>
<p>But when I open the <code>requirements.txt</code>, it shows up empty/blank. I expected <code>numpy</code>, <code>os</code> and <code>pandas</code> to be in this <code>requirements.txt</code> file. Why might it not be working?</p>
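<p>A possible cause, sketched below: <code>pipreqs</code> scans <code>.py</code> source files, so imports that only exist inside a notebook's <code>.ipynb</code> JSON can be missed. One workaround is to dump the notebook's code cells to a throwaway script first and then rerun <code>pipreqs</code> (paths and file names below are hypothetical):</p>

```python
# Sketch: extract code cells from a notebook into a plain .py file that
# pipreqs can scan. (Alternatively: jupyter nbconvert --to script nb.ipynb)
import json
import pathlib

def notebook_to_script(nb_path: str, out_path: str) -> None:
    nb = json.loads(pathlib.Path(nb_path).read_text())
    code = "\n".join(
        "".join(cell["source"])
        for cell in nb["cells"] if cell["cell_type"] == "code")
    pathlib.Path(out_path).write_text(code)

# notebook_to_script("/app/loaded_reqs/analysis.ipynb", "/app/loaded_reqs/_nb.py")
# then: !pipreqs /app/loaded_reqs --force
```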
|
<python><docker>
|
2023-02-05 04:12:03
| 1
| 3,476
|
user2205916
|