| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,150,031
| 13,359,498
|
ValueError: Shapes (None, 1) and (None, 4) are incompatible
|
<p>I am trying to implement StratifiedKFold. I am using ResNet50 for classification. I have 4 classes, so I used the softmax activation function in the last layer.
Code snippet:</p>
<pre><code>from sklearn.model_selection import StratifiedKFold
# create the stratified k-fold cross-validator
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_index, val_index in skf.split(X_train, y_train):
    X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]
    y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]
    resnet152v2 = model.fit(X_train_fold, y_train_fold, batch_size=16, epochs=10, verbose=1)
    # val_loss, val_acc = model.evaluate(X_val_fold, y_val_fold, verbose=0)
    # print("Validation Loss: ", val_loss, "Validation Accuracy: ", val_acc)
</code></pre>
<p>Error: <code>ValueError: Shapes (None, 1) and (None, 4) are incompatible</code></p>
<p><strong>p.s:</strong></p>
<p>X_train_fold.shape = (492, 224, 224, 3)</p>
<p>y_train_fold.shape = (492,)</p>
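<p>A likely cause (my reading, not stated in the original post): the model was compiled with <code>categorical_crossentropy</code>, which expects one-hot targets of shape <code>(None, 4)</code>, while <code>y_train_fold</code> holds integer labels of shape <code>(None, 1)</code>. Either switch the loss to <code>sparse_categorical_crossentropy</code>, or one-hot encode the labels. A minimal sketch of the encoding (plain NumPy, so it runs without TensorFlow; the 4-class count comes from the question):</p>

```python
import numpy as np

n_classes = 4
# Stand-in for the integer labels from the question, shape (492,)
y_train_fold = np.random.randint(0, n_classes, size=492)

# One-hot encode: equivalent to keras.utils.to_categorical(y_train_fold, 4)
y_one_hot = np.eye(n_classes)[y_train_fold]

print(y_one_hot.shape)  # (492, 4) -- now compatible with a 4-unit softmax output
```

<p>With the one-hot labels, <code>model.fit(X_train_fold, y_one_hot, ...)</code> should no longer hit the shape mismatch; with <code>sparse_categorical_crossentropy</code>, the original integer labels can be kept as-is.</p>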
|
<python><tensorflow><keras><cross-validation>
|
2023-01-17 17:21:11
| 0
| 578
|
Rezuana Haque
|
75,149,926
| 4,822,772
|
Pandas merge by interval with missing intervals
|
<p>The question can be summarized in this image</p>
<ul>
<li>I have a table (df1) with ID column, and dates</li>
<li>Another table (df2) with date intervals</li>
<li>I want a final table to indicate if each date is in one of the intervals.</li>
</ul>
<p><a href="https://i.sstatic.net/7X5UY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7X5UY.png" alt="enter image description here" /></a></p>
<pre><code>print(df1)
ID Date
0 1 2022-02-01
1 1 2022-02-02
2 1 2022-02-03
3 1 2022-02-04
4 1 2022-02-05
5 1 2022-02-06
6 1 2022-02-07
7 2 2022-02-01
8 2 2022-02-02
9 2 2022-02-03
10 2 2022-02-04
11 2 2022-02-05
12 2 2022-02-06
13 2 2022-02-07
14 2 2022-02-08
</code></pre>
<p>Here is df2:</p>
<pre><code>print(df2)
ID Start End
0 1 2022-02-02 2022-02-04
1 2 2022-02-04 2022-02-06
</code></pre>
<p>I tried one solution posted <a href="https://stackoverflow.com/questions/46525786/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range">here</a>,</p>
<pre><code>idx = pd.IntervalIndex.from_arrays(df_2['Start'], df_2['End'], closed='both')
df_2.index=idx
df_1['event']=df_2.loc[df_1.Date,'event'].values
</code></pre>
<p>But I got an error</p>
<pre><code>KeyError: "[Timestamp('2022-02-01 00:00:00'), Timestamp('2022-02-07 00:00:00'), Timestamp('2022-02-08 00:00:00')] not in index"
</code></pre>
<p>Since not all the dates are in the index, this solution does not work.</p>
<p>So I would like to know how to solve this problem.</p>
<p>EDIT: there can be multiple intervals for each ID in df2, for example</p>
<pre><code>data_df2 = StringIO("""
ID;Start;End
1;2022-02-02;2022-02-04
1;2022-02-06;2022-02-07
2;2022-02-04;2022-02-06
"""
)
</code></pre>
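<p>One possible approach (a sketch of mine, not from the original post): left-merge <code>df1</code> with <code>df2</code> on <code>ID</code>, flag dates falling inside each interval with <code>Series.between</code>, then collapse back with <code>groupby(...).any()</code> so multiple intervals per ID are handled:</p>

```python
import pandas as pd
from io import StringIO

# Rebuild the sample data from the question
df1 = pd.DataFrame({
    "ID": [1] * 7 + [2] * 8,
    "Date": pd.to_datetime(
        [f"2022-02-0{d}" for d in range(1, 8)]
        + [f"2022-02-0{d}" for d in range(1, 9)]
    ),
})
df2 = pd.read_csv(
    StringIO("ID;Start;End\n1;2022-02-02;2022-02-04\n2;2022-02-04;2022-02-06\n"),
    sep=";", parse_dates=["Start", "End"],
)

# Pair every date with every interval of the same ID, flag membership,
# then reduce: a date is "in_interval" if it falls inside ANY interval.
merged = df1.merge(df2, on="ID", how="left")
merged["in_interval"] = merged["Date"].between(merged["Start"], merged["End"])
result = merged.groupby(["ID", "Date"], as_index=False)["in_interval"].any()
print(result)
```

<p>Because the reduction is <code>.any()</code>, the EDIT case with several intervals per ID works without changes; dates matching no interval simply come back <code>False</code>.</p>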
|
<python><pandas><merge>
|
2023-01-17 17:09:37
| 1
| 1,718
|
John Smith
|
75,149,830
| 10,710,625
|
KeyError(key) in get_loc_level after using .transform() or apply()
|
<p>I have a large grouped data frame with multiple groups where I'm trying to filter rows within each group. To keep things simple, I will share a reduced data frame with one group where I'm getting the error. df5 is grouped by <code>"Detail", "ID", "Year"</code></p>
<pre><code>import pandas as pd

data2 = {"Year":["2012","2012","2012","2012","2012","2012","2012","2012","2012"],
"Country":['USA','USA','USA','USA','USA','USA','USA','CANADA',"CANADA"],
"Country_2": ["", "", "", "", "", "", "", "USA", "USA"],
"ID":["AF12","A15","BU14","DU157","L12","N10","RU156","DU157","RU156"],
"Detail":[1,1,1,1,1,1,1,1,1],
"Second_country_available":[False,False,False,False,False,False,False,True,True],
}
df5 = pd.DataFrame(data2)
df5_true = df5["Second_country_available"] == True
Country_2_gr = df5[df5_true].groupby(["Detail", "ID", "Year"])['Country_2'].agg(
'|'.join)
Country_2_gr
grouped_df5 = (df5.groupby(["Detail", "ID", "Year"], group_keys=False)['Country'])
filtered = grouped_df5.transform(lambda g: g.str.fullmatch(Country_2_gr[g.name]))
filtered
</code></pre>
<p>The error would be:</p>
<pre><code>return (self._engine.get_loc(key), None)
File "pandas\_libs\index.pyx", line 774, in pandas._libs.index.BaseMultiIndexCodesEngine.get_loc
KeyError: (1, 'A15', '2012')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "packages\pandas\core\indexes\.py", line 3045, in _get_loc_level
raise KeyError(key) from err
KeyError: (1, 'A15', '2012')
</code></pre>
<p>The code is working for most of the cases, so I don't want to radically change it. I would like to have a fix where in a similar case to the one I showed, the rows would be dropped.</p>
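<p>One way to keep the existing approach while dropping the groups that have no entry in <code>Country_2_gr</code> (a sketch based on my reading of the problem): guard the lookup inside the lambda and return all-<code>False</code> for missing keys instead of letting the <code>KeyError</code> propagate:</p>

```python
import pandas as pd

# Rebuild the sample frame from the question
data2 = {
    "Year": ["2012"] * 9,
    "Country": ["USA"] * 7 + ["CANADA"] * 2,
    "Country_2": [""] * 7 + ["USA", "USA"],
    "ID": ["AF12", "A15", "BU14", "DU157", "L12", "N10", "RU156", "DU157", "RU156"],
    "Detail": [1] * 9,
    "Second_country_available": [False] * 7 + [True, True],
}
df5 = pd.DataFrame(data2)
Country_2_gr = (
    df5[df5["Second_country_available"]]
    .groupby(["Detail", "ID", "Year"])["Country_2"]
    .agg("|".join)
)

grouped_df5 = df5.groupby(["Detail", "ID", "Year"], group_keys=False)["Country"]
# Guard against groups absent from Country_2_gr: return False for every row
# of such a group instead of raising KeyError.
filtered = grouped_df5.transform(
    lambda g: g.str.fullmatch(Country_2_gr[g.name])
    if g.name in Country_2_gr.index
    else pd.Series(False, index=g.index)
)
print(filtered)
```

<p>Rows belonging to groups like <code>(1, 'A15', '2012')</code>, which never appear in <code>Country_2_gr</code>, now come back <code>False</code> and can be dropped with <code>df5[filtered]</code>.</p>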
|
<python><pandas><apply>
|
2023-01-17 17:02:48
| 1
| 739
|
the phoenix
|
75,149,732
| 12,140,406
|
color seaborn swarmplot points with additional metadata beyond hue in boxplot
|
<p>Say I have data that I want to box plot and overlay with a swarm plot in seaborn, whose colors of the points add additional information on the data.</p>
<p>Question: <strong>How can I get box plots to be close to each other for a given x axis value (as is done in hue) without refactorizing x to the hue value and the x axis value?</strong></p>
<p>For example, here I want to overlay the points to the box plot and want the points further colored by <code>‘sex’</code>. Example:</p>
<pre><code>plt.figure(figsize = (5, 5))
sns.boxplot(x = 'class', y = 'age',
hue = 'embarked', dodge = True, data = df)
sns.swarmplot(x = 'class', y = 'age',
dodge = True,
color = '0.25',
hue = 'sex', data = df)
plt.legend(bbox_to_anchor = (1.5, 1))
</code></pre>
<p>EDIT:
The idea would be to have something that looks like the 'S' box for 'Third' in the plot (I made a fake example in powerpoint, so <code>hue</code> in both <code>boxplot</code> and <code>swarmplot</code> are the same to overlay the points on the appropriate boxes).</p>
<p><a href="https://i.sstatic.net/8Ykw4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Ykw4.png" alt="enter image description here" /></a></p>
<p><strong>Is there a way to make this plot without first refactorizing the x-axis to ‘first-S’, ‘first-C’, ‘first-Q’, ‘second-S’, etc and then add <code>hue</code> by <code>’sex’</code> in both plots?</strong></p>
|
<python><seaborn><visualization><boxplot><swarmplot>
|
2023-01-17 16:54:02
| 1
| 365
|
wiscoYogi
|
75,149,654
| 17,653,423
|
How to get data from nested list in response.json()
|
<p>There is a json response from an API request in the following schema:</p>
<pre><code>[
{
"id": "1",
"variable": "x",
"unt": "%",
"results": [
{
"classification": [
{
"id": "1",
"name": "group",
"category": {
"555": "general"
}
}
],
"series": [
{
"location": {
"id": "1",
"level": {
"id": "n1",
"name": "z"
},
"name": "z"
},
"serie": {
"202001": "0.08",
"202002": "0.48",
"202003": "0.19"
}
}
]
}
]
}
]
</code></pre>
<p>I want to transform the data from the <code>"serie"</code> key into a pandas DataFrame.</p>
<p>I can do that explicitly:</p>
<pre><code>content = val[0]["results"][0]["series"][0]["serie"]
df = pd.DataFrame(content.items())
df
0 1
0 202001 0.08
1 202002 0.48
2 202003 0.19
</code></pre>
<p>But if there is more than one record, that would get only the data from the first element because of the positional arguments <code>[0]</code>.</p>
<p>Is there a way to retrieve that data not considering the positional arguments?</p>
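<p>One option (a sketch, not from the original post): walk every level of the structure with nested loops, so all records, results, and series are collected no matter how many there are:</p>

```python
import pandas as pd

# Sample payload mirroring the schema in the question
val = [
    {
        "id": "1",
        "variable": "x",
        "unt": "%",
        "results": [
            {
                "classification": [
                    {"id": "1", "name": "group", "category": {"555": "general"}}
                ],
                "series": [
                    {
                        "location": {"id": "1", "name": "z"},
                        "serie": {"202001": "0.08", "202002": "0.48", "202003": "0.19"},
                    }
                ],
            }
        ],
    }
]

rows = []
for record in val:                       # every top-level record
    for result in record["results"]:     # every result in the record
        for series in result["series"]:  # every series in the result
            for period, value in series["serie"].items():
                rows.append(
                    {"id": record["id"], "period": period, "value": float(value)}
                )

df = pd.DataFrame(rows)
print(df)
```

<p><code>pandas.json_normalize</code> can do similar flattening via <code>record_path</code>, but since <code>"serie"</code> is a dict of period-to-value pairs rather than a list, the explicit loop is arguably clearer here.</p>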
|
<python><pandas><python-requests>
|
2023-01-17 16:48:00
| 1
| 391
|
Luiz
|
75,149,539
| 20,054,635
|
How to check if file exists before reading the data into data frame using Pyspark?
|
<p>My requirement is to check whether a file matching a specific pattern exists in the data lake storage directory; if it exists, read the file into a PySpark dataframe, otherwise exit the notebook execution.</p>
<p>I have following working code below just to read the file into dataframe.</p>
<pre><code>kaka_adls_path = "/mnt/kaka/pre/Source_Files/"
kaka_fac_address = "*FacKaka*.txt"
kaka_fac_address_df = spark.read.text(f"{kaka_adls_path}/{kaka_fac_address}")
</code></pre>
<p>where the <code>kaka_adls_path</code> variable contains the location of the files,
and the <code>kaka_fac_address</code> variable contains the actual file name; it contains the wildcard pattern <code>*FacKaka*.txt</code> so that it matches the specific file I need.</p>
<p>Now I want to check if the same file exists before reading the file into data frame.</p>
<p>Kindly help.</p>
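<p>The existence check can be done before the read by listing the directory and matching the pattern yourself. A generic sketch with <code>fnmatch</code> below; on Databricks you would list with <code>dbutils.fs.ls(kaka_adls_path)</code> and stop the run with <code>dbutils.notebook.exit(...)</code> instead (those are the standard Databricks utilities, not taken from the question):</p>

```python
import fnmatch
import os
import tempfile

def files_matching(directory: str, pattern: str) -> list:
    """Return the file names in `directory` that match the wildcard `pattern`."""
    return [name for name in os.listdir(directory) if fnmatch.fnmatch(name, pattern)]

# Demo with a temporary directory standing in for the mount point
with tempfile.TemporaryDirectory() as mount:
    for fname in ("20230117FacKaka01.txt", "unrelated.txt"):
        open(os.path.join(mount, fname), "w").close()

    matches = files_matching(mount, "*FacKaka*.txt")
    if matches:
        print("found:", matches)   # here you would call spark.read.text(...)
    else:
        print("no matching file")  # here you would exit the notebook
```
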
|
<python><dataframe><pyspark><azure-databricks><azure-data-lake>
|
2023-01-17 16:38:53
| 1
| 369
|
Anonymous
|
75,149,469
| 2,546,099
|
Processing numpy array in parallel significantly slows down process
|
<p>For my work I have to process rather large numpy arrays (> 2e6 entries) by cutting them into small windows, and then calculating several features for each of those windows. To speed up this process I intended to take advantage of the multiprocessing module. To test my approach, I wrote the following test code (applying a lowpass filter to each window):</p>
<pre><code>#!/usr/bin/env python3
# test_vectorized_lowpass_filter.py
import time
from typing import Callable
import multiprocessing

import numpy as np
import scipy.signal as signal


def apply_lowpass_filter(
    signal_window: np.ndarray, lowpass_frequency: float, sample_rate: int
) -> np.ndarray:
    """Design a Butterworth lowpass filter and apply it to one window."""
    lowpass_freq = lowpass_frequency / (sample_rate / 2)  # Normalize the frequency
    b_val, a_val = signal.butter(5, lowpass_freq, "low")
    return signal.filtfilt(b_val, a_val, signal_window)


def generate_lowpass_filter(
    lowpass_frequency: float, sample_rate: int
) -> tuple[np.ndarray, np.ndarray]:
    """Design the Butterworth filter once so it can be reused for every window."""
    lowpass_freq = lowpass_frequency / (sample_rate / 2)
    return signal.butter(5, lowpass_freq, "low")


def apply_generated_lowpass_filter(
    signal_window: np.ndarray, filtfilt_vals: tuple[np.ndarray, np.ndarray]
) -> np.ndarray:
    """Apply a pre-generated filter to one window."""
    return signal.filtfilt(filtfilt_vals[0], filtfilt_vals[1], signal_window)


def unpacking_apply_along_axis(all_args) -> np.ndarray:
    """Unpack the argument tuple and forward it to np.apply_along_axis."""
    (func1d, axis, arr, args, kwargs) = all_args
    return np.apply_along_axis(func1d, axis, arr, *args, **kwargs)


def parallel_apply_along_axis(
    func1d: Callable, axis: int, arr: np.ndarray, *args, **kwargs
) -> np.ndarray:
    """Like np.apply_along_axis, but distributes chunks over a process pool."""
    effective_axis = 1 if axis == 0 else axis
    if effective_axis != axis:
        arr = arr.swapaxes(axis, effective_axis)
    n_workers = min(multiprocessing.cpu_count(), arr.shape[0])
    # Chunks for the mapping (only a few chunks):
    chunks = [
        (func1d, effective_axis, sub_arr, args, kwargs)
        for sub_arr in np.array_split(arr, n_workers)
    ]
    pool = multiprocessing.Pool(n_workers)
    individual_results = pool.map(unpacking_apply_along_axis, chunks)
    # Freeing the workers:
    pool.close()
    pool.join()
    return np.concatenate(individual_results)


def test_vectorized_lowpass_filter() -> None:
    lower_test_frequency = 10.0
    pass_frequency = 50.0
    window_size = 1000
    sampling_rate = int(1.25e6)
    time_vec = np.linspace(0, 2, sampling_rate, endpoint=False)
    lower_signal_vec = np.sin(2 * np.pi * lower_test_frequency * time_vec)
    filtered_lower_signal_vec = np.zeros(
        (int(len(lower_signal_vec) / window_size), window_size)
    )
    reshaped_test_frequency = np.reshape(
        lower_signal_vec,
        newshape=(int(len(lower_signal_vec) / window_size), window_size),
    )

    t_0 = time.time()
    for window_num, window in enumerate(reshaped_test_frequency):
        filtered_lower_signal_vec[window_num] = apply_lowpass_filter(
            signal_window=window,
            lowpass_frequency=pass_frequency,
            sample_rate=sampling_rate,
        )
    t_1 = time.time()
    ravelled_test_frequency = np.ravel(filtered_lower_signal_vec)

    time.sleep(0.1)
    t_2 = time.time()
    simple_test_frequency = apply_lowpass_filter(
        signal_window=lower_signal_vec,
        lowpass_frequency=pass_frequency,
        sample_rate=sampling_rate,
    )
    t_3 = time.time()

    time.sleep(0.1)
    t_4 = time.time()
    all_rows_test_frequency = np.apply_along_axis(
        apply_lowpass_filter,
        1,
        reshaped_test_frequency,
        lowpass_frequency=pass_frequency,
        sample_rate=sampling_rate,
    ).ravel()
    t_5 = time.time()

    time.sleep(0.1)
    t_6 = time.time()
    pregenerated_filt_data = generate_lowpass_filter(
        lowpass_frequency=pass_frequency, sample_rate=sampling_rate
    )
    all_rows_pregenerated_frequency = parallel_apply_along_axis(
        func1d=apply_generated_lowpass_filter,
        axis=1,
        arr=reshaped_test_frequency,
        filtfilt_vals=pregenerated_filt_data,
    ).ravel()
    t_7 = time.time()

    time.sleep(0.1)
    t_8 = time.time()
    all_rows_parallel_frequency = parallel_apply_along_axis(
        func1d=apply_lowpass_filter,
        axis=1,
        arr=reshaped_test_frequency,
        lowpass_frequency=pass_frequency,
        sample_rate=sampling_rate,
    ).ravel()
    t_9 = time.time()

    print(f"Elapsed time for loop: {(t_1 - t_0) * 10000}")
    print(f"Elapsed time for full window: {(t_3 - t_2) * 10000}")
    print(f"Elapsed time for internal loop: {(t_5 - t_4) * 10000}")
    print(f"Elapsed time for parallel loop: {(t_9 - t_8) * 10000}")
    print(f"Elapsed time for pregenerated parallel loop: {(t_7 - t_6) * 10000}")
    print(
        f"Data is equal: {np.allclose(a = ravelled_test_frequency, b = all_rows_test_frequency)}"
    )
    print(
        f"Data is equal: {np.allclose(a = ravelled_test_frequency, b = all_rows_pregenerated_frequency)}"
    )
    print(
        f"Data is equal: {np.allclose(a = ravelled_test_frequency, b = all_rows_parallel_frequency)}"
    )
    print(
        f"Shape of target data: {simple_test_frequency.shape}, shape of received data: {ravelled_test_frequency.shape}"
    )


if __name__ == "__main__":
    test_vectorized_lowpass_filter()
</code></pre>
<p>My output is:</p>
<pre><code>Elapsed time for loop: 3431.6301345825195
Elapsed time for full window: 197.4964141845703
Elapsed time for internal loop: 3295.004367828369
Elapsed time for parallel loop: 35048.134326934814
Elapsed time for pregenerated parallel loop: 36795.69482803345
Data is equal: True
Data is equal: True
Data is equal: True
Shape of target data: (1250000,), shape of received data: (1250000,)
</code></pre>
<p>I.e. data is as expected, but the multiprocessing-approach takes significantly longer than the simple loop (which in turn is slower than just applying the lowpass on the entire data set at once).<br />
What could be the issue here? Potentially the same as here: <a href="https://stackoverflow.com/questions/64655998/issues-with-parallelizing-processing-of-numpy-array">Issues with parallelizing processing of numpy array</a>? Or something else?</p>
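<p>A plausible explanation (my reading, in line with the linked question): each call to <code>pool.map</code> pickles the argument chunks, ships them to freshly started worker processes, and pickles the results back. For a per-window <code>filtfilt</code> this cheap, that serialization plus pool start-up dominates the ~0.2 s the fully vectorized call needs. A quick way to see the order of magnitude of the shipping cost alone:</p>

```python
import pickle
import time

import numpy as np

# Same shape as the reshaped signal in the question: 1250 windows of 1000 samples
arr = np.random.rand(1250, 1000)

t0 = time.perf_counter()
blob = pickle.dumps(arr)       # what multiprocessing does to send the data
restored = pickle.loads(blob)  # ...and to receive results back
elapsed = time.perf_counter() - t0

print(f"round-trip serialization of {arr.nbytes / 1e6:.0f} MB: {elapsed * 1e3:.1f} ms")
```

<p>Multiplied by pool start-up and done per call, this overhead easily swamps the computation. Since <code>scipy.signal.filtfilt</code> accepts an <code>axis</code> argument, <code>signal.filtfilt(b, a, reshaped_test_frequency, axis=1)</code> filters all windows in one vectorized call and avoids both the Python loop and the pool entirely.</p>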
|
<python><numpy><parallel-processing><python-multiprocessing>
|
2023-01-17 16:32:59
| 0
| 4,156
|
arc_lupus
|
75,149,460
| 6,903,605
|
Setting xtick labels of an sns.heatmap subplot
|
<p>I am trying to put 3 subplots of <code>sns.heatmap</code> with custom <code>xticklabels</code> and <code>yticklabels</code>. You can see that the labels I enter overlap with some default labels. How can I avoid this?</p>
<pre><code>glue = sns.load_dataset("glue").pivot("Model", "Task", "Score")
xlabels=['a','b','c','d','e','f','g','h']
ylabels=['AA','BB','CC','DD','EE','FF','GG','HH']
fig, (ax1,ax2,ax3) = plt.subplots(nrows=1, ncols=3,figsize=(24,8))
ax1 = fig.add_subplot(1,3,1)
# figure(figsize=(6, 4), dpi=80)
ax1=sns.heatmap(glue, annot=True,cmap='Reds', cbar=False, linewidths=0.2, linecolor='white',xticklabels=xlabels, yticklabels=ylabels)
xlabels=['p','q','r','s','t','u','v','z']
ylabels=['PP','QQ','RR','SS','TT','UU','VV','ZZ']
ax2 = fig.add_subplot(1,3,2)
ax2=sns.heatmap(glue, annot=True,cmap='Blues', cbar=False, linewidths=0.2, linecolor='white',xticklabels=xlabels, yticklabels=ylabels)
xlabels=['10','20','30','40','50','60','70','80']
ylabels=['11','21','31','41','51','61','71','81']
ax3 = fig.add_subplot(1,3,3)
ax3=sns.heatmap(glue, annot=True,cmap='Greens', cbar=False, linewidths=0.2, linecolor='white',xticklabels=xlabels, yticklabels=ylabels)
</code></pre>
<p><a href="https://i.sstatic.net/Jp64L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp64L.png" alt="enter image description here" /></a></p>
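<p>The overlap comes from creating axes twice: <code>plt.subplots</code> already makes three axes, and each <code>fig.add_subplot</code> then stacks a second axes (with seaborn's default labels) on top of the first. Passing the existing axes to <code>sns.heatmap</code> via <code>ax=</code> avoids that. A sketch with random data standing in for the glue dataset:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

data = np.random.rand(8, 8)  # stand-in for the pivoted glue dataset
panels = [
    (list("abcdefgh"), ["AA", "BB", "CC", "DD", "EE", "FF", "GG", "HH"], "Reds"),
    (list("pqrstuvz"), ["PP", "QQ", "RR", "SS", "TT", "UU", "VV", "ZZ"], "Blues"),
    ([str(n) for n in range(10, 90, 10)],
     [str(n) for n in range(11, 91, 10)], "Greens"),
]

fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(24, 8))
for ax, (xlabels, ylabels, cmap) in zip(axes, panels):
    # Draw into the pre-made axes instead of calling fig.add_subplot again
    sns.heatmap(data, annot=True, cmap=cmap, cbar=False,
                linewidths=0.2, linecolor='white',
                xticklabels=xlabels, yticklabels=ylabels, ax=ax)
```
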
|
<python><matplotlib><heatmap><subplot><yticks>
|
2023-01-17 16:32:18
| 1
| 306
|
volkan g
|
75,149,389
| 14,598,633
|
matplotlib legend not showing correctly
|
<p>I am trying to plot some data from a csv file. I used Pandas to load the csv file. I am using <code>sns.lineplot()</code> to plot the lines. But one of the legend entries is always faulty: it shows a square around one of the entries.</p>
<pre class="lang-py prettyprint-override"><code>plt.figure(dpi=150)
lin1 = sns.lineplot(x = "Training time", y = "Relative L2 error", data=df[df["Activation"]=="tanh"])
lin2 = sns.lineplot(x = "Training time", y = "Relative L2 error", data=df[df["Activation"]=="silu"])
lin3 = sns.lineplot(x = "Training time", y = "Relative L2 error", data=df[df["Activation"]=="swish"])
plt.xlabel("Training time (sec)")
plt.legend(("tanh", "silu", "swish"))
plt.yscale('log',base=10)
</code></pre>
<p>I used 3 separate calls because there are more <code>Activation</code> values. This is the resulting plot.</p>
<p><a href="https://i.sstatic.net/HAWLH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HAWLH.png" alt="enter image description here" /></a></p>
<p>The plot is looking correct but the legend is creating problems. Here are versions of the plotting tools that I am using.</p>
<pre class="lang-py prettyprint-override"><code>Python 3.9.12
matplotlib 3.6.1
matplotlib-inline 0.1.6
seaborn 0.12.1
</code></pre>
<p>I could not find the same issue on the Internet. A kernel restart isn't helping. Please let me know if more information is needed.</p>
|
<python><matplotlib><seaborn><legend>
|
2023-01-17 16:26:21
| 2
| 866
|
Prakhar Sharma
|
75,149,281
| 3,130,926
|
Pandas apply returns function instead of value
|
<p>Puzzling pandas apply behavior</p>
<pre class="lang-py prettyprint-override"><code>data = {'date_col_1': ['2020-01-24',
'2020-03-24' ],
'date_col_2': ['2017-03-08',
'2020-01-24']}
testdf = pd.DataFrame(data)
</code></pre>
<p>Then try to convert the columns to datetime,</p>
<pre><code>>>>testdf.apply(lambda x: pd.to_datetime, axis=0)
0 <function to_datetime at 0x1170c2f80>
1 <function to_datetime at 0x1170c2f80>
dtype: object
</code></pre>
<p>Why is <code>apply</code> returning the function instead of its return value?</p>
<p><code>>>>pd.__version__</code> : 1.5.2</p>
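<p>The lambda never calls the function: <code>lambda x: pd.to_datetime</code> returns the function object itself for every column, which is exactly what the output shows. Calling it, or passing <code>pd.to_datetime</code> directly, gives the converted columns:</p>

```python
import pandas as pd

data = {'date_col_1': ['2020-01-24', '2020-03-24'],
        'date_col_2': ['2017-03-08', '2020-01-24']}
testdf = pd.DataFrame(data)

# Either call the function inside the lambda...
converted = testdf.apply(lambda x: pd.to_datetime(x), axis=0)
# ...or, more simply, pass it directly:
converted = testdf.apply(pd.to_datetime)

print(converted.dtypes)  # both columns are now datetime64[ns]
```
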
|
<python><pandas><datetime>
|
2023-01-17 16:18:01
| 1
| 14,217
|
muon
|
75,149,191
| 7,437,143
|
Ignoring mypy attr-defined with triple quotes
|
<p>The following code:</p>
<pre class="lang-py prettyprint-override"><code># type: ignore[attr-defined]
timeit.template = """ # type: ignore[attr-defined]
def inner(_it, _timer{init}):
{setup}
_t0 = _timer()
for _i in _it:
retval = {stmt}
_t1 = _timer()
return _t1 - _t0, retval
""" # type: ignore[attr-defined]
# type: ignore[attr-defined]
</code></pre>
<p>Yields error:</p>
<pre><code>src/snncompare/Experiment_runner.py:45: error: Module has no attribute "template" [attr-defined]
Found 1 error in 1 file (checked 97 source files)
</code></pre>
<p>How can I ensure mypy ignores only this one instance of the creation of the attribute?</p>
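<p>A <code>type: ignore</code> comment only takes effect on the first physical line of the statement it should silence, and inside a triple-quoted string it is just text. Wrapping the right-hand side in parentheses keeps the comment on the assignment line while staying outside the string (a sketch of that layout):</p>

```python
import timeit

timeit.template = (  # type: ignore[attr-defined]
    """
def inner(_it, _timer{init}):
    {setup}
    _t0 = _timer()
    for _i in _it:
        retval = {stmt}
    _t1 = _timer()
    return _t1 - _t0, retval
"""
)
```

<p>Only this one assignment is silenced; other uses of undeclared attributes elsewhere in the file still get flagged.</p>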
|
<python><mypy>
|
2023-01-17 16:10:02
| 1
| 2,887
|
a.t.
|
75,149,181
| 5,195,209
|
Expanding a Scribunto module that doesn't have a function
|
<p>I want to get the return value of this Wikimedia <a href="https://cs.wiktionary.org/wiki/Modul:Languages" rel="nofollow noreferrer">Scribunto module</a> in Python. Its source code is roughly like this:</p>
<pre class="lang-lua prettyprint-override"><code>local Languages = {}
Languages = {
["aa"] = {
name = "afarština",
dir = "ltr",
name_attr_gen_pl = "afarských"
},
-- More languages...
["zza"] = {
name = "zazaki",
dir = "ltr"
}
}
return Languages
</code></pre>
<p>In the Wiktextract library, there is already <a href="https://github.com/tatuylonen/wiktextract/blob/master/get_languages.py" rel="nofollow noreferrer">Python code</a> to accomplish similar tasks:</p>
<pre class="lang-py prettyprint-override"><code>def expand_template(sub_domain: str, text: str) -> str:
import requests
# https://www.mediawiki.org/wiki/API:Expandtemplates
params = {
"action": "expandtemplates",
"format": "json",
"text": text,
"prop": "wikitext",
"formatversion": "2",
}
r = requests.get(f"https://{sub_domain}.wiktionary.org/w/api.php",
params=params)
data = r.json()
return data["expandtemplates"]["wikitext"]
</code></pre>
<p>This works for languages like French because there the Scribunto module has a well-defined function that returns a value, as an example here:</p>
<p>Scribunto module:</p>
<pre class="lang-lua prettyprint-override"><code>p = {}
function p.affiche_langues_python(frame)
-- returns the needed stuff here
end
</code></pre>
<p>The associated Python function:</p>
<pre class="lang-py prettyprint-override"><code>def get_fr_languages():
# https://fr.wiktionary.org/wiki/Module:langues/analyse
json_text = expand_template(
"fr", "{{#invoke:langues/analyse|affiche_langues_python}}"
)
json_text = json_text[json_text.index("{") : json_text.index("}") + 1]
json_text = json_text.replace(",\r\n}", "}") # remove tailing comma
data = json.loads(json_text)
lang_data = {}
for lang_code, lang_name in data.items():
lang_data[lang_code] = [lang_name[0].upper() + lang_name[1:]]
save_json_file(lang_data, "fr")
</code></pre>
<p>But in our case we don't have a function to call.
So if we try:</p>
<pre class="lang-py prettyprint-override"><code>def get_cs_languages():
# https://cs.wiktionary.org/wiki/Modul:Languages
json_text = expand_template(
"cs", "{{#invoke:Languages}}"
)
print(json_text)
</code></pre>
<p>we get <code><strong class="error"><span class="scribunto-error" id="mw-scribunto-error-0">Chyba skriptu: Musíte uvést funkci, která se má zavolat.</span></strong></code> (translated: "Script error: You have to specify a function to call."). But when I enter a function name as a parameter, as in the French example, it complains that that function does not exist.</p>
<p>What could be a way to solve this?</p>
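<p>One workaround (my own sketch, not from the question): skip <code>expandtemplates</code> entirely, download the raw Lua source of the module (MediaWiki serves any page's source at <code>...?action=raw</code>), and parse the flat table literal with regular expressions. Assuming the module stays as simple as the excerpt above — flat string fields, no nested tables — a rough parser looks like this (the network fetch is omitted; a sample string stands in):</p>

```python
import re

def parse_language_table(lua_source: str) -> dict:
    """Very rough parser for a flat Lua table literal of the shape shown above.
    Assumption: each entry is ["code"] = { key = "value", ... } with no nesting."""
    langs = {}
    for code, body in re.findall(r'\["(\w+)"\]\s*=\s*\{(.*?)\}', lua_source, re.S):
        fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', body))
        langs[code] = fields
    return langs

# Stand-in for requests.get(".../Modul:Languages?action=raw").text
sample = '''
local Languages = {}
Languages = {
    ["aa"] = { name = "afarština", dir = "ltr" },
    ["zza"] = { name = "zazaki", dir = "ltr" }
}
return Languages
'''
langs = parse_language_table(sample)
print(langs["aa"])
```

<p>For anything beyond this flat layout (nested tables, comments with braces), a real Lua parser such as the <code>lupa</code> bindings would be the safer route.</p>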
|
<python><lua><wikitext><scribunto>
|
2023-01-17 16:08:59
| 1
| 587
|
Pux
|
75,149,145
| 11,622,712
|
Filter rows conditionally in PySpark: Aggregate/Window/Generate expressions are not valid in where clause of the query
|
<p>I have the following dataframe in PySpark:</p>
<pre><code>from pyspark.sql import SparkSession
# Create a SparkSession
spark = SparkSession.builder.appName("myApp").getOrCreate()
# Create a list of rows for the DataFrame
rows = [("2011-11-13 11:00", 1, "2011-11-13 11:06", "2011-11-14 11:00"),
("2011-11-13 11:01", 1, "2011-11-13 11:06", "2011-11-14 11:00"),
("2011-11-13 11:04", 1, "2011-11-13 11:06", "2011-11-14 11:00"),
("2011-11-13 11:15", 1, "2011-11-13 11:06", "2011-11-14 11:00"),
("2011-12-15 15:00", 2, "2011-12-15 15:05", "2012-01-02 15:00"),
("2011-12-15 15:06", 2, "2011-12-15 15:05", "2012-01-02 15:00"),
("2011-12-15 15:08", None, None, None),
("2011-12-17 16:00", None, None, None)]
# Create a DataFrame from the rows
df = spark.createDataFrame(rows, ["timestamp", "myid", "start_timestamp", "end_timestamp"])
# Show the DataFrame
df.show()
</code></pre>
<p>The printed output:</p>
<pre><code>+-------------------+----+-------------------+---------------------+
| timestamp|myid| start_timestamp | end_timestamp |
+-------------------+----+-------------------+---------------------+
|2011-11-13 11:00:00| 1|2011-11-13 11:06:00|2011-11-14 11:00:00 |
|2011-11-13 11:01:00| 1|2011-11-13 11:06:00|2011-11-14 11:00:00 |
|2011-11-13 11:04:00| 1|2011-11-13 11:06:00|2011-11-14 11:00:00 |
|2011-11-13 11:15:00| 1|2011-11-13 11:06:00|2011-11-14 11:00:00 |
|2011-12-15 15:00:00| 2|2011-12-15 15:05:00|2012-01-02 15:00:00 |
|2011-12-15 15:06:00| 2|2011-12-15 15:05:00|2012-01-02 15:00:00 |
|2011-12-15 15:08:00|null| null| null |
|2011-12-17 16:00:00|null| null| null |
+-------------------+----+-------------------+---------------------+
</code></pre>
<p>I need to only select the rows that:</p>
<ol>
<li>have null values in "start_timestamp" and "end_timestamp", or</li>
<li>have the closest "timestamp" to "start_timestamp" column values.</li>
</ol>
<p>For the above example, the expected result is this one:</p>
<pre><code>+--------------------+------------+----------------------+---------------------+
|timestamp | myid | start_timestamp | end_timestamp
+--------------------+------------+----------------------+---------------------+
|2011-11-13 11:04 | 1 | 2011-11-13 11:06 | 2011-11-14 11:00
|2011-12-15 15:06 | 2 | 2011-12-15 15:05 | 2012-01-02 15:00
|2011-12-15 15:08 | null | null | null
|2011-12-17 16:00:00 | null | null | null
+--------------------+------------+----------------------+---------------------+
</code></pre>
<p>This is my current code, but it gives a wrong result:</p>
<p><a href="https://i.sstatic.net/jJCkC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jJCkC.png" alt="enter image description here" /></a></p>
<p>Moreover, in case of my real dataset this code fails with the error <code>Aggregate/Window/Generate expressions are not valid in where clause of the query</code>:</p>
<pre><code>from pyspark.sql import Window
from pyspark.sql.functions import abs, col, min
# Create a window to partition the data by "myid" and order by "timestamp"
window = Window.partitionBy("myid").orderBy("timestamp")
# Add a new column "time_diff" that calculates the absolute difference between "timestamp" and "start_timestamp"
df = df.withColumn("time_diff", abs(col("timestamp").cast("long") - col("start_timestamp").cast("long")))
# Add a new column "min_time_diff" that contains the minimum "time_diff" for each "myid"
df = df.withColumn("min_time_diff", min("time_diff").over(window))
# Select the rows that have null values in "start_timestamp" and "end_timestamp"
# or have the minimum value in "time_diff" for each "myid"
df = df.filter((col("start_timestamp").isNull() & col("end_timestamp").isNull()) |
(col("time_diff") == col("min_time_diff")))
</code></pre>
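<p>The selection logic itself looks sound; the <code>Aggregate/Window/Generate expressions are not valid in where clause</code> error typically appears when a window expression ends up directly inside <code>filter</code>/<code>where</code>, so materializing <code>min_time_diff</code> with <code>withColumn</code> first (as above) and only filtering on the plain column afterwards is the usual way out. As a small runnable check of the selection rule itself, here is the same logic in pandas (pandas stands in for Spark here; this is not the original code):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2011-11-13 11:00", "2011-11-13 11:01", "2011-11-13 11:04",
        "2011-11-13 11:15", "2011-12-15 15:00", "2011-12-15 15:06",
        "2011-12-15 15:08",
    ]),
    "myid": [1, 1, 1, 1, 2, 2, None],
    "start_timestamp": pd.to_datetime([
        "2011-11-13 11:06", "2011-11-13 11:06", "2011-11-13 11:06",
        "2011-11-13 11:06", "2011-12-15 15:05", "2011-12-15 15:05",
        None,
    ]),
})

is_null = df["start_timestamp"].isna()
diff = (df["timestamp"] - df["start_timestamp"]).abs()
# Materialize the per-group minimum first, then compare plain columns --
# the same order of operations that avoids the window-in-where error in Spark.
min_diff = diff.where(~is_null).groupby(df["myid"]).transform("min")
result = df[is_null | (diff == min_diff)]
print(result)
```
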
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-01-17 16:06:29
| 1
| 2,998
|
Fluxy
|
75,149,094
| 1,709,475
|
Display the output of datatypes from two dataframes
|
<p>I would like to display the output of datatypes from two dataframes.</p>
<p><code>column_name, dbf datatype, df datatype</code> should be displayed.</p>
<p>Working code</p>
<pre><code>import csv
import pandas as pd
from dbfread import DBF
csv_file = "bridges.csv"
dbf_file = "bridges.dbf"
def dbf_to_csv(path_to_dbf):
    '''Convert to .csv file, display DBF and CSV column types'''
    csv_fn = path_to_dbf[:-4] + ".csv"
    print('\tCreating {}'.format(csv_fn))
    table = DBF(path_to_dbf)
    with open(csv_fn, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(table.field_names)
        print('\t\tWriting converted data to {}'.format(csv_fn))
        for record in table:
            writer.writerow(list(record.values()))
    print('\n\n\t\tClosing converted data to {}\n\n'.format(csv_fn))

def main():
    path_to_dbf = "./bridges.dbf"
    print('\n\tPrinting the head of the .dbf file: {}'.format(dbf_file))
    dbf = DBF(dbf_file)
    dbf = pd.DataFrame(dbf)
    print(dbf.head(5))
    print('\n\tPrinting the head of the .csv file: {}'.format(csv_file))
    df = pd.read_csv(csv_file)
    print(df.head(5))
    ## 2. read the column datatype and display
    print('Printing column name and column datatype:')
    for name, dbf_type in dbf.dtypes.iteritems():
        print('\t{}\t\t{}'.format(name, dbf_type))
    print('Printing .csv column name and column datatype:')
    for name, dtype in df.dtypes.iteritems():
        print('\t{}\t\t{}'.format(name, dtype))

if __name__ == "__main__":
    main()
</code></pre>
<p>After searching SO I found something along the lines of</p>
<pre><code> ## 2. read the column datatype and display
print('Printing column name and column datatype:')
dbf_type = dbf.dtypes.iteritems()
df_type = df.dtypes.iteritems()
for name, in zip(dbf.dtypes.iteritems(), df.dtypes.iteritems()):
print('\t{}\t\t{}\t\t{}'.format(name, df_type, dbf_type))
</code></pre>
<p>This produces the error</p>
<pre><code>ValueError: too many values to unpack (expected 1)
</code></pre>
<p>Can anyone offer a solution or alternative method, please.</p>
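<p>The error comes from the unpacking target: <code>zip</code> yields a pair of <code>(name, dtype)</code> tuples per iteration, so <code>for name, in zip(...)</code> tries to unpack two tuples into a single name. Unpacking both pairs (and using <code>.items()</code>, since <code>.iteritems()</code> is removed in pandas 2.x) fixes it. A self-contained sketch with two small stand-in frames (the column names are invented for illustration):</p>

```python
import pandas as pd

# Stand-ins for the DBF-backed and CSV-backed frames from the question
dbf = pd.DataFrame({"bridge_id": [1], "span_m": [120]})
df = pd.DataFrame({"bridge_id": [1.0], "span_m": ["120"]})

print('column name\tdbf datatype\tdf datatype')
rows = []
# zip pairs up the two dtype listings; unpack each (name, dtype) tuple
for (name, dbf_type), (_, df_type) in zip(dbf.dtypes.items(), df.dtypes.items()):
    print('\t{}\t\t{}\t\t{}'.format(name, dbf_type, df_type))
    rows.append((name, str(dbf_type), str(df_type)))
```

<p>This assumes both frames share the same column order, which holds when the CSV was written from the DBF's field names as in the conversion function above.</p>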
|
<python><pandas><dataframe>
|
2023-01-17 16:02:20
| 1
| 326
|
Tommy Gibbons
|
75,149,076
| 7,800,760
|
Select rows of dataframe whose column values amount to a given sum
|
<p>I need to find out how many of the first N rows of a dataframe make up (just over) 50% of the sum of values for that column.</p>
<p>Here's an example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(10, 1), columns=list("A"))
0 0.681991
1 0.304026
2 0.552589
3 0.716845
4 0.559483
5 0.761653
6 0.551218
7 0.267064
8 0.290547
9 0.182846
</code></pre>
<p>therefore</p>
<pre><code>sum_of_A = df["A"].sum()
</code></pre>
<p>4.868260213425804</p>
<p>and with this example I need to find, starting from row 0, how many rows I need to get a sum of at least 2.43413 (approximating 50% of sum_of_A).</p>
<p>Of course I could iterate through the rows and sum and break when I get over 50%, but is there a more concise/Pythonic/efficient way of doing this?</p>
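<p>A cumulative sum gives this without an explicit loop (a sketch; <code>searchsorted</code> also works, because a cumulative sum of non-negative values is monotone):</p>

```python
import numpy as np
import pandas as pd

# The sample values from the question
df = pd.DataFrame({"A": [0.681991, 0.304026, 0.552589, 0.716845, 0.559483,
                         0.761653, 0.551218, 0.267064, 0.290547, 0.182846]})

half = df["A"].sum() / 2
cumulative = df["A"].cumsum()

# Count how many leading rows stay strictly below half, then add one
# to include the row that pushes the running total past 50%.
n_rows = int((cumulative < half).sum()) + 1

# Equivalent, via binary search on the monotone cumulative sums:
n_rows_alt = int(np.searchsorted(cumulative.to_numpy(), half)) + 1

print(n_rows)
```
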
|
<python><pandas><dataframe>
|
2023-01-17 16:01:00
| 1
| 1,231
|
Robert Alexander
|
75,148,953
| 5,947,182
|
mouse.up() not working after mouse.move()
|
<p>I'm writing a test in Playwright Python and pytest to see if automated mouse movements can be simulated to look more like a real user's. I use a local HTML canvas written in HTML and JavaScript; the code is from <a href="https://www.onlywebpro.com/2013/01/10/create-html5-canvas-drawing-board-within-5-minutes/" rel="nofollow noreferrer">here</a>. <strong>The mouse is supposed to move to point (400,50) in the browser before the HTML canvas file is requested</strong> (in the real function, the starting point will instead be randomized; otherwise, it would always start at (0,0), which would make it look more like a bot). <strong>When the canvas is open, it's supposed to draw lines from left to right using the <a href="https://ben.land/post/2021/04/25/windmouse-human-mouse-movement/" rel="nofollow noreferrer">WindMouse algorithm</a> with the same x-values for the start and end points, respectively. There shouldn't be any lines connected between the lines, except for the one from the starting point to the first line.</strong> This is because after holding down the mouse's left button with <code>page.mouse.down()</code>, and then actually drawing with <code>page.mouse.move()</code> from <code>x=100</code> to <code>x=1200</code> with different y-values in the range 100 to 1000, the mouse should release outside of the loop with <code>page.mouse.up()</code>.</p>
<p>As seen in the image below, that's not what happened. Instead the <strong><code>page.mouse.up()</code> doesn't seem to execute after <code>page.mouse.down()</code> and <code>page.mouse.move()</code></strong>. I have researched and found that it might be because when the mouse left button has been held down for a certain amount of time, the browser will recognize the action as a mouse drag instead. If this is the case, how do you disable the browser's ability to automatically switch mouse action recognition; in this case, it would be to disable it from automatically recognizing <code>page.mouse.down()</code> and <code>page.mouse.move()</code> after a certain amount of time as a mouse drag? And if this is not the case, <strong>how do you fix this problem with Playwright <code>page.mouse.up()</code>?</strong>
<a href="https://i.sstatic.net/1fiGm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1fiGm.png" alt="enter image description here" /></a></p>
<p>Please have a look at the code:</p>
<pre><code>import os
from playwright.sync_api import sync_playwright
# `wm` is the WindMouse implementation from the article linked above

def test_drawing_board():
    rel_path = r"/mats/drawing_board.html"
    file_path = "".join([r"file://", os.getcwd(), rel_path])

    with sync_playwright() as playwright:
        # Fetch drawing board
        browser = playwright.chromium.launch(headless=False, slow_mo=0.1)
        page = browser.new_page()
        page.mouse.move(400, 50)  # place mouse at a random position in the browser before fetching the page
        page.goto(file_path)

        # Move mouse
        start_point = 100
        x = 1200
        for y in range(100, 1000, 100):
            # Generate mouse points
            points = []
            wm(start_point, y, x, y, M_0=15, D_0=12,
               move_mouse=lambda x, y: points.append([x, y]))
            # Draw
            page.mouse.down()
            for point in points:
                page.mouse.move(point[0], point[1])
            page.mouse.up()
</code></pre>
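<p>For clarity, here is the per-stroke event sequence I expect the loop above to produce, expressed with a small stand-in recorder (a sketch only; <code>MouseRecorder</code> is a hypothetical stub, not Playwright itself):</p>

```python
# Toy stand-in for page.mouse: records the event sequence each stroke produces.
class MouseRecorder:
    def __init__(self):
        self.events = []

    def down(self):
        self.events.append("down")

    def move(self, x, y):
        self.events.append(("move", x, y))

    def up(self):
        self.events.append("up")

mouse = MouseRecorder()
for y in range(100, 300, 100):       # two strokes, like the y-loop above
    mouse.down()                     # press at the start of the stroke
    for x in (100, 600, 1200):       # path points (WindMouse output in the real code)
        mouse.move(x, y)
    mouse.up()                       # release after each stroke's moves
```

<p>If <code>up</code> were dropped or deferred, every <code>move</code> between strokes would still be drawn with the button held, which matches the connected lines in the screenshot.</p>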
|
<python><mouseevent><playwright><mouseup>
|
2023-01-17 15:51:09
| 2
| 388
|
Andrea
|
75,148,949
| 20,700,631
|
IfxPy connection to Informix DB problem - user with added hostname
|
<p>I am currently trying to establish a connection from a server (64-bit Windows) to the Avaya CMS DB (an Informix engine) using IfxPy. IfxPy 3.0.5 comes with the HCL ONEDB ODBC DRIVER (version 1.0.0.0), which is copied into site-packages.
I think I've properly installed the IfxPy module, pointed the ODBC registry entries to iclit09b.dll, and set the INFORMIXDIR system environment variable. I am able to import IfxPy, but IfxPy.connect returns an error from the DB server. The error I got suggests that the ODBC driver appends the server hostname with "@" to the user ID, and Informix does not understand that.
Is there anybody with experience connecting to the Avaya CMS DB from Python?</p>
<pre><code>import os

if 'INFORMIXDIR' in os.environ:  # add DLL lookup folder (the BIN folder)
    os.add_dll_directory(os.path.join(os.environ['INFORMIXDIR'], "bin"))

import IfxPy

conStr = "SERVER=cms_net;HOST=10.10.10.10;SERVICE=50001;PROTOCOL=olsoctcp;DATABASE=cms;UID=myuser;PWD=mypassw;CLIENT_LOCALE=en_US.UTF8;"
conn = IfxPy.connect(conStr, "", "")
IfxPy.close(conn)
</code></pre>
<p>Exception I get: [OneDB][OneDB ODBC Driver][OneDB]Incorrect password or user <strong>myuser@myserver[myserver full domain path]</strong> is not known on the database server. SQLCODE=-951</p>
<p>Any ideas?
I worked with our Avaya admin to add the user to the proper group and grant DB access, but we still cannot figure this one out.
NB: the DB is set to accept the olsoctcp protocol on port 50001.
Thank you.</p>
<p>I researched on the internet without much luck. I am considering using the Informix Client SDK from IBM; hopefully that will work.
I've also tested establishing the connection from the 64-bit Windows ODBC administrator: it allows proper setup, but the connection test fails with exactly the same error.</p>
|
<python><odbc><informix><avaya>
|
2023-01-17 15:50:49
| 1
| 372
|
bracko
|
75,148,938
| 9,944,937
|
tensorflow map_fn with a list of list of strings
|
<p>I'm trying to pass a list of arguments to a function using <code>tensorflow</code> function: <code>tf.map_fn</code>. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

def my_func(a, v, c, d):
    print(a, v, c, d)

if __name__ == '__main__':
    tf.config.set_visible_devices(tf.config.list_physical_devices('GPU')[0], 'GPU')
    iterable = [['a', 'b', 'c', 's'], ['s', 'e', 'f', 'c']]
    tensor = tf.convert_to_tensor(iterable)
    dataset = tf.data.Dataset.from_tensor_slices(tensor)
    tf.map_fn(lambda x: my_func(*x), dataset)
</code></pre>
<p>But I'm getting this error I can't really decipher:</p>
<pre><code>Traceback (most recent call last):
File "/Volumes/WorkSSD/Notebooks/01 Convert Raw EDF to Raw CSV copy.py", line 134, in <module>
tf.map_fn(lambda x: my_func(*x),dataset)
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 629, in new_func
return func(*args, **kwargs)
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
return func(*args, **kwargs)
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/ops/map_fn.py", line 640, in map_fn_v2
return map_fn(
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
return func(*args, **kwargs)
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/ops/map_fn.py", line 392, in map_fn
result_flat_signature = [
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/ops/map_fn.py", line 393, in <listcomp>
_most_general_compatible_type(s)._unbatch() # pylint: disable=protected-access
File "/Users/fabiomagarelli/.pyenv/versions/3.10.9/lib/python3.10/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4587, in _unbatch
raise ValueError("Slicing dataset elements is not supported for rank 0.")
ValueError: Slicing dataset elements is not supported for rank 0.
</code></pre>
<p>What am I doing wrong and how do I fix it?</p>
|
<python><python-3.x><tensorflow><tensor>
|
2023-01-17 15:50:05
| 1
| 1,101
|
Fabio Magarelli
|
75,148,936
| 1,551,974
|
Cancel repeating thread in python if it takes longer than 15 seconds
|
<p>I have a function that calls itself again after completion with a delay. The goal is that the function <code>doSomething</code> runs roughly every 5 seconds. However, if the function needs more than 15 seconds, the thread should be cancelled.
How can I add the condition that the thread gets killed if it takes too long? If it takes more than 15 seconds, I want it to cancel and try again.</p>
<p>Here is the code that I have but I was not able to add a canceling condition yet.</p>
<pre><code>import threading
import time

timerSeconds = 5  # target interval between runs

def repeatProcess():
    # do no tracking if it was set to offline
    start = time.time()
    doSomething()
    print(f"Execution time {time.time() - start}")
    delayTimer = threading.Timer(timerSeconds, repeatProcess)
    delayTimer.start()

def doSomething():
    print("do some stuff that may take time")

if __name__ == '__main__':
    trackingThread = threading.Thread(name='repeatThread', target=repeatProcess)
    trackingThread.daemon = True
    trackingThread.start()
</code></pre>
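<p>For reference, a minimal sketch (plain stdlib, hypothetical names and placeholder timings) of the usual pattern: since Python threads cannot be force-killed, run the work in a worker thread, wait up to the timeout, and abandon/retry if it has not finished:</p>

```python
import threading
import time

def run_with_timeout(func, timeout):
    """Run func in a daemon thread; return True if it finished within timeout.

    A timed-out worker is simply abandoned (threads cannot be killed), so in
    real code func should also check a flag and exit cooperatively."""
    done = threading.Event()

    def worker():
        func()
        done.set()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)            # wait at most `timeout` seconds
    return done.is_set()

fast = run_with_timeout(lambda: time.sleep(0.01), timeout=1.0)   # finishes in time
slow = run_with_timeout(lambda: time.sleep(0.5), timeout=0.05)   # abandoned
```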
|
<python><python-3.x><multithreading>
|
2023-01-17 15:49:47
| 0
| 2,278
|
Silve2611
|
75,148,921
| 2,290,763
|
Merging multiple dataframes with applying different operation on each column
|
<p>I have several daily CSV files with structure similar to this:</p>
<pre><code>| resource | start_date | end_date | total_usage | usage_per_hour | last_read |
|----------|------------|------------|-------------|----------------|------------|
| s3 | 2023-01-01 | 2023-01-01 | 22333 | 930,54 | 2023-01-01 |
| s3 | 2023-01-02 | 2023-01-02 | 11233 | 468,04 | 2023-01-01 |
| s3 | 2023-01-03 | 2023-01-03 | 6356 | 264,83 | 2023-01-03 |
| s3 | 2023-01-04 | 2023-01-04 | 757547 | 31564,46 | 2023-01-03 |
| ec2 | 2023-01-01 | 2023-01-01 | 222 | 9,25 | 2022-12-31 |
| s3 | 2023-01-05 | 2023-01-05 | 8765 | 365,21 | 2023-01-05 |
| rds | 2023-01-01 | 2023-01-01 | 111 | 4,63 | 2023-01-01 |
| rds | 2023-01-02 | 2023-01-02 | 7576 | 315,67 | 2023-01-02 |
| rds | 2023-01-03 | 2023-01-03 | 444 | 18,5 | 2023-01-02 |
| ec2 | 2023-01-02 | 2023-01-02 | 6664 | 277,67 | 2023-01-02 |
| ec2 | 2023-01-03 | 2023-01-03 | 4543 | 189,29 | 2023-01-02 |
</code></pre>
<p>I want to merge/concatenate them using pandas based on resource, but for each column I want to apply a different operation, for example:</p>
<ul>
<li>start_date and end_date should be set for the first day and last_day of the given period</li>
<li>total_usage should be a sum of all daily usages for the given period</li>
<li>usage_per_hour should be total_usage divided by all hours in the given period</li>
<li>last_read should be the latest date from all csv files</li>
</ul>
<p>I'm new to pandas. How should I approach this kind of data manipulation?</p>
<p>Sample output:</p>
<pre><code>| resource | start_date | end_date | total_usage | usage_per_hour | last_read |
|----------|------------|------------|-------------|----------------|------------|
| s3 | 2023-01-01 | 2023-01-05 | 806234 | 6718,62 | 2023-01-05 |
| ec2 | 2023-01-01 | 2023-01-03 | 11429 | 158,74 | 2023-01-02 |
| rds | 2023-01-01 | 2023-01-03 | 8131 | 112,94 | 2023-01-02 |
</code></pre>
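<p>One possible sketch of the per-column aggregation (hypothetical mini-data; the hours calculation assumes inclusive whole days, which may not match your actual billing rules):</p>

```python
import pandas as pd

# Hypothetical stand-in for the concatenated daily CSVs
df = pd.DataFrame({
    "resource": ["s3", "s3", "ec2"],
    "start_date": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-01"]),
    "end_date": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-01"]),
    "total_usage": [100, 200, 50],
    "last_read": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-01"]),
})

# Named aggregation: a different operation for each column
agg = df.groupby("resource", as_index=False).agg(
    start_date=("start_date", "min"),
    end_date=("end_date", "max"),
    total_usage=("total_usage", "sum"),
    last_read=("last_read", "max"),
)

# Derived column: total usage divided by hours in the period (inclusive day count)
hours = ((agg["end_date"] - agg["start_date"]).dt.days + 1) * 24
agg["usage_per_hour"] = agg["total_usage"] / hours
```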
|
<python><pandas>
|
2023-01-17 15:48:41
| 0
| 1,649
|
Forin
|
75,148,810
| 5,197,034
|
Create custom sized bins of datetime Series in Pandas
|
<p>I have multiple Pandas Series of datetime64 values that I want to bin into groups using arbitrary bin sizes.</p>
<p>I've found the <code>Series.to_period()</code> function which does exactly what I want except that I need more control over the chosen bin size. <code>to_period</code> allows me to bin by full years, months, days, etc. but I also want to bin by 5 years, 6 hours or 15 minutes. Using a syntax like <code>5Y</code>, <code>6H</code> or <code>15min</code> works in other corners of Pandas but apparently not here.</p>
<pre><code>s = pd.Series(["2020-02-01", "2020-02-02", "2020-02-03", "2020-02-04"], dtype="datetime64[ns]")
# Output as expected
s.dt.to_period("M").value_counts()
2020-02 4
Freq: M, dtype: int64
# Output as expected
s.dt.to_period("W").value_counts()
2020-01-27/2020-02-02 2
2020-02-03/2020-02-09 2
Freq: W-SUN, dtype: int64
# Output as expected
s.dt.to_period("D").value_counts()
2020-02-01 1
2020-02-02 1
2020-02-03 1
2020-02-04 1
Freq: D, dtype: int64
# Output unexpected (and wrong?)
s.dt.to_period("2D").value_counts()
2020-02-01 1
2020-02-02 1
2020-02-03 1
2020-02-04 1
Freq: 2D, dtype: int64
</code></pre>
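<p>For fixed-size bins, <code>Series.dt.floor</code> does accept offsets like <code>15min</code> or <code>6H</code> (a sketch; calendar-sized bins like <code>5Y</code> are not fixed-width, so they would need, e.g., integer arithmetic on <code>dt.year</code>):</p>

```python
import pandas as pd

s = pd.Series(pd.to_datetime(["2020-02-01 00:07",
                              "2020-02-01 00:12",
                              "2020-02-01 06:30"]))

# Round each timestamp down to its 15-minute bin, then count per bin
binned = s.dt.floor("15min")
counts = binned.value_counts()

# Calendar-sized bins (e.g. 5 years) via plain integer arithmetic
five_year_bin = (s.dt.year // 5) * 5
```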
|
<python><pandas><datetime><binning>
|
2023-01-17 15:39:57
| 1
| 2,603
|
pietz
|
75,148,665
| 19,826,650
|
Insert list into MySQL database using Python
|
<p>I am doing classification using k-nearest neighbors with Python. The data I am using is of float type (<code>latitude</code> and <code>longitude</code>), and I get output from it like this:</p>
<p>prediction</p>
<pre><code>preds2 = model.predict(new_list)
print(type(preds2))
print(preds2)
print("-----------")
print(new_list)
print(type(new_list))
print("-----------")
</code></pre>
<p>Output (predictions not yet merged):</p>
<pre><code><class 'numpy.ndarray'>
['Taman Suaka Margasatwa Angke' 'Taman Suaka Margasatwa Angke']
-----------
[[-6.1676580997222, 106.9042428], [-6.1676580997222, 106.9042428]]
<class 'list'>
</code></pre>
<p>Merge them together</p>
<pre><code>mergedata = [[*new_list, preds2] for new_list, preds2 in zip(new_list, preds2)]
print(mergedata)
</code></pre>
<p>output from <code>mergedata</code></p>
<pre><code>[[-6.1676580997222, 106.9042428, 'Taman Suaka Margasatwa Angke'], [-6.1676580997222, 106.9042428, 'Taman Suaka Margasatwa Angke']]
</code></pre>
<p>I want to store the output from <code>mergedata</code> to database mysql so this is the code that I've tried</p>
<pre><code>import mysql.connector

connection = mysql.connector.connect(host='localhost',
                                     database='destinasi',
                                     user='root',
                                     password='')
mycursor = connection.cursor()

myArray = mergedata
arraySize = len(myArray)

for r in range(0, arraySize):
    try:
        mycursor.execute(
            """INSERT INTO hasilclass (Latitude,Longitude,Class) VALUES (%s,%s,%s)""",
            (",".myArray[r], )
        )
        connection.commit()
    except:
        print("Error")
        connection.rollback()
</code></pre>
<p>The output is:</p>
<pre><code>Error
Error
</code></pre>
<p>This is the table definition in MySQL:</p>
<pre><code>CREATE TABLE `hasilclass` (
`Latitude` float(15,11) NOT NULL,
`Longitude` float(15,11) NOT NULL,
`Class` varchar(55) NOT NULL
)
</code></pre>
<p>It seems that <code>mycursor.execute</code> is not executing the MySQL statement, and I don't know the correct way to do it. Any suggestions on how to correct it? Is the statement wrong? All the data comes from MySQL as well, but I'm using pymysql to fetch it and do all the work until I need to store the results back in the database.</p>
<p>The reference I found on the internet uses mysql.connector to store a two-dimensional list from Python, so I followed it, and it fails with the error shown above.</p>
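<p>For illustration, the parameter-binding pattern can be sketched with the stdlib <code>sqlite3</code> module (a stand-in only; with <code>mysql.connector</code> the placeholder is <code>%s</code> instead of <code>?</code>, and each row list is passed directly as the parameter sequence, e.g. <code>mycursor.execute(sql, tuple(myArray[r]))</code>):</p>

```python
import sqlite3

merged = [[-6.16765, 106.90424, "Taman Suaka Margasatwa Angke"],
          [-6.16765, 106.90424, "Taman Suaka Margasatwa Angke"]]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hasilclass (Latitude REAL, Longitude REAL, Class TEXT)")

# Each inner list supplies the three placeholders of one row
conn.executemany(
    "INSERT INTO hasilclass (Latitude, Longitude, Class) VALUES (?, ?, ?)",
    merged,
)
conn.commit()
inserted = conn.execute("SELECT COUNT(*) FROM hasilclass").fetchone()[0]
```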
|
<python><mysql-python>
|
2023-01-17 15:27:08
| 1
| 377
|
Jessen Jie
|
75,148,659
| 4,056,181
|
Using SeedSequence to convert any random seed into a good seed
|
<p>In NumPy 1.17, the <code>random</code> module was given an overhaul. Among the new additions were <a href="https://numpy.org/doc/stable/reference/random/bit_generators/generated/numpy.random.SeedSequence.html#numpy.random.SeedSequence" rel="nofollow noreferrer"><code>SeedSequence</code></a>. In the section about <a href="https://numpy.org/doc/stable/reference/random/parallel.html" rel="nofollow noreferrer">parallel random number generation in the docs</a>, <code>SeedSequence</code> is said to</p>
<blockquote>
<p>ensure that low-quality seeds are turned into high quality initial states</p>
</blockquote>
<p>which supposedly means that one can safely use a seed of e.g. <code>1</code>, as long as this is processed through a <code>SeedSequence</code> prior to seeding the PRNG. However, the rest of the documentation page goes on to describe how <code>SeedSequence</code> can turn one seed into several, independent PRNGs. What if we only want one PRNG, but want to be able to safely make use of small/bad seeds?</p>
<p>I have written the below test code, which uses the Mersenne twister <code>MT19937</code> to draw normal distributed random numbers. The basic seeding (without using <code>SeedSequence</code>) is <code>MT19937(seed)</code>, corresponding to <code>f()</code> below. I also try <code>MT19937(SeedSequence(seed))</code>, corresponding to <code>g()</code>, though this results in exactly the same stream. Lastly, I try using the <code>spawn</code>/<code>spawn_key</code> functionality of <code>SeedSequence</code>, which does alter the stream (corresponding to <code>h()</code> and <code>i()</code>, which produce identical streams).</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

def f():
    return np.random.Generator(np.random.MT19937(seed))

def g():
    return np.random.Generator(np.random.MT19937(np.random.SeedSequence(seed)))

def h():
    return np.random.Generator(np.random.MT19937(np.random.SeedSequence(seed).spawn(1)[0]))

def i():
    return np.random.Generator(np.random.MT19937(np.random.SeedSequence(seed, spawn_key=(0,))))

seed = 42  # low seed, contains many 0s in binary representation
n = 100

for func, ls in zip([f, g, h, i], ['-', '--', '-', '--']):
    generator = func()
    plt.plot([generator.normal(0, 1) for _ in range(n)], ls)

plt.show()
plt.show()
</code></pre>
<h4>Question</h4>
<p>Are <code>h()</code> and <code>i()</code> really superior to <code>f()</code> and <code>g()</code>? If so, why is it necessary to invoke the spawn (parallel) functionality just to convert a (possibly bad) seed into a good seed? To me these seem like they ought to be disjoint features.</p>
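<p>For what it's worth, the hashing step can be observed directly (a small probe, assuming NumPy ≥ 1.17):</p>

```python
import numpy as np

# SeedSequence hashes the user-supplied seed into high-entropy state words
ss = np.random.SeedSequence(42)
state = ss.generate_state(4)      # 4 uint32 words derived from seed 42

# Spawning produces a child with a distinct (but reproducible) stream
child = ss.spawn(1)[0]
```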
|
<python><python-3.x><numpy><random><random-seed>
|
2023-01-17 15:26:44
| 1
| 13,201
|
jmd_dk
|
75,148,585
| 98,080
|
Union of dict with typed keys not compatible with an empty dict
|
<p>I'd like to type a dict as either a dictionary where the all the keys are integers or a dictionary where all the keys are strings.</p>
<p>However, when I run mypy (v0.991) on the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union, Any
special_dict: Union[dict[int, Any], dict[str, Any]]
special_dict = {}
</code></pre>
<p>I get an <code>Incompatible types</code> error.</p>
<pre><code>test_dict_nothing.py:6: error: Incompatible types in assignment (expression has type "Dict[<nothing>, <nothing>]", variable has type "Union[Dict[int, Any], Dict[str, Any]]") [assignment]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>How do I express my typing intent?</p>
|
<python><mypy><python-typing>
|
2023-01-17 15:20:56
| 1
| 3,241
|
fgregg
|
75,148,551
| 6,067,741
|
http 1.1 with requests and proxy
|
<p>I can't seem to find a way to send an HTTP/1.1 CONNECT using Python requests with a proxy. It sends 1.0, and I don't want my infra to accept HTTP 1.0 connections.</p>
<pre class="lang-py prettyprint-override"><code>import requests

proxy = {'https': f'http://{xyz}:10000'}

requests.get(
    f'https://{url}/health',
    timeout=10,
    proxies=proxy,
)
</code></pre>
<p>this results in something like: <code>HTTP/1.1 426 Upgrade Required</code></p>
<p>using curl as a workaround for now</p>
<p><strong>edit:</strong> python 3.10.9, requests==2.26.0<br />
<strong>edit:</strong> found this line: <a href="https://github.com/python/cpython/blob/940763140f7519a125229782ca7a095af01edda4/Lib/http/client.py#L907" rel="nofollow noreferrer">https://github.com/python/cpython/blob/940763140f7519a125229782ca7a095af01edda4/Lib/http/client.py#L907</a></p>
<p>so it's literally hardcoded to <code>1.0</code> for tunneling scenarios?</p>
|
<python><python-3.x><python-requests><proxy><request>
|
2023-01-17 15:18:36
| 0
| 71,429
|
4c74356b41
|
75,148,449
| 12,883,297
|
Remove the rows of dataframe based on date and flag condition in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A","13-02-2022","B","FALSE"],["A","13-02-2022","C","FALSE"],["A","14-02-2022","D","FALSE"],
["A","14-02-2022","E","FALSE"],["A","16-02-2022","A","TRUE"],["A","16-02-2022","F","FALSE"],
["A","17-02-2022","G","FALSE"],["A","17-02-2022","H","FALSE"],["A","18-02-2022","I","FALSE"],
["A","18-02-2022","J","FALSE"]],columns=["id1","date","id2","flag"])
</code></pre>
<pre><code>id1 date id2 flag
A 13-02-2022 B FALSE
A 13-02-2022 C FALSE
A 14-02-2022 D FALSE
A 14-02-2022 E FALSE
A 16-02-2022 A TRUE
A 16-02-2022 F FALSE
A 17-02-2022 G FALSE
A 17-02-2022 H FALSE
A 18-02-2022 I FALSE
A 18-02-2022 J FALSE
</code></pre>
<p>I want to remove all the rows for the previous working day, the next working day, and the day where the flag is TRUE.</p>
<p>For example, here the flag is TRUE on 16th Feb, so remove all rows for the previous working day (14th Feb), the next working day (17th Feb), and 16th Feb itself. If TRUE falls on the last day of the month, e.g. 28th Feb, where there is no next working day, then remove only the rows for the TRUE-flag day and the previous working day.</p>
<p><strong>Expected Output:</strong></p>
<pre><code>df_out = pd.DataFrame([["A","13-02-2022","B","FALSE"],["A","13-02-2022","C","FALSE"],["A","18-02-2022","I","FALSE"],
["A","18-02-2022","J","FALSE"]],columns=["id1","date","id2","flag"])
</code></pre>
<pre><code>id1 date id2 flag
A 13-02-2022 B FALSE
A 13-02-2022 C FALSE
A 18-02-2022 I FALSE
A 18-02-2022 J FALSE
</code></pre>
<p>How to do it?</p>
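<p>One possible sketch of the logic (hypothetical mini-frame; "previous/next working day" is taken to mean the nearest dates actually present in the data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2022-02-13", "2022-02-14", "2022-02-16",
                            "2022-02-17", "2022-02-18"]),
    "flag": [False, False, True, False, False],
})

# Working days actually present, in chronological order
days = df["date"].drop_duplicates().sort_values().reset_index(drop=True)
flagged = set(df.loc[df["flag"], "date"])

drop = set()
for i, d in enumerate(days):
    if d in flagged:
        # the flagged day plus its previous/next present working day
        drop.update(days.iloc[max(i - 1, 0):i + 2])

out = df[~df["date"].isin(drop)]
```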
|
<python><python-3.x><pandas><dataframe><datetime>
|
2023-01-17 15:11:15
| 2
| 611
|
Chethan
|
75,148,441
| 5,371,102
|
Sqlalchemy creating group to group relationship
|
<p>I have the following models, where my two models share an <code>outside_id</code> referring to some table I don't have access to, but it is via that table that they are connected. So they are not in a traditional many-to-many relationship. The only solution I have found is to create a table with primary key <code>outside_id</code> and use that to bridge the two relationships.</p>
<p>It adds a lot of complexity, because the only thing I need is to be able to eagerly load data from B in A and access it when I dump it to JSON. Is there a simpler way to do it?</p>
<pre><code>class B(model):
    id = Column(Integer(), primary_key=True, nullable=False)
    outside_id = Column(String())

class A(model):
    id = Column(Integer(), primary_key=True, nullable=False)
    outside_id = Column(String())
    b = relationship(
        "B",
        foreign_keys=[outside_id],
        primaryjoin="A.outside_id==B.outside_id",
        viewonly=True,
    )
</code></pre>
|
<python><sql><postgresql><sqlalchemy>
|
2023-01-17 15:10:35
| 1
| 1,989
|
Peter Mølgaard Pallesen
|
75,148,235
| 4,040,743
|
Why aren't identical datetimes equal?
|
<p>I'm working on a simple Python3 script that considers data in five-minute increments. <a href="https://stackoverflow.com/questions/75112399/how-to-round-down-a-datetime-to-the-nearest-5-minutes">Thanks to this post</a>, I have code which takes any Python datetime object and then rounds it down to the nearest five minutes. (:00, :05, :10, :15, etc.) Note that I cannot use pandas.</p>
<p>Now I need to be able to compare that "rounded-down" datetime with other datetimes, and here I'm running into a problem. Consider this test code:</p>
<pre><code>import sys
from datetime import datetime
from datetime import timedelta

def roundDownDateTime(dt):
    # Arguments:
    #   dt    datetime object
    delta = timedelta(minutes=1) * (dt.minute % 5)
    return dt - delta

def testAlarm(testDate):
    # Arguments:
    #   testDate    datetime object
    # currDate is a datetime object, rounded down to 5 minutes
    currDate = roundDownDateTime(datetime.now())
    print("currDate: " + currDate.strftime("%Y%m%d%H%M"))
    print("testDate: " + testDate.strftime("%Y%m%d%H%M"))
    if currDate == testDate:
        print("ALARM!!!!")

def main():
    testDate = datetime.strptime(sys.argv[1], "%Y%m%d%H%M")
    testAlarm(testDate)

if __name__ == "__main__":
    main()
</code></pre>
<p>The code does all of the following:</p>
<ul>
<li>The <code>main()</code> function takes a string you enter on the command line,
then converts it into a <code>"%Y%m%d%H%M"</code> datetime</li>
<li>Your datetime is rounded down to the last five minute increment</li>
<li>In <code>testAlarm()</code>, your date is compared with the current date, also in
<code>"%Y%m%d%H%M"</code> format, also rounded down five minutes.</li>
<li>If the current date matches the cmd line argument, you should get an
<code>ALARM!!!</code> in the output.</li>
</ul>
<p>Here's the actual output, run on my Ubuntu machine:</p>
<pre><code>me@unbuntu1$ date
Tue Jan 17 14:27:41 UTC 2023
me@unbuntu1$
me@unbuntu1$ python3 toy04.py 202301171425
currDate: 202301171425
testDate: 202301171425
me@unbuntu1$
</code></pre>
<p>Okay: Although I'm rounding down my date to match the "rounded-down" version of the current date, the <code>if(currDate == testDate):</code> line of code is still evaluating to <code>False</code>. While both datetimes <em><strong>appear</strong></em> equal in the <code>"%Y%m%d%H%M"</code> format, they are somehow not equal.</p>
<p>My first thought was that maybe the "rounded down" datetime still retained some residual seconds or microseconds even after the rounding part? So I modified my function to this:</p>
<pre><code>def roundDownDateTime(dt):
    # Arguments:
    #   dt    datetime object
    delta = timedelta(minutes=1) * (dt.minute % 5)
    dt = dt - delta
    dt.replace(second=0, microsecond=0)
    return dt
</code></pre>
<p>But that makes no difference; I still get the exact same output as before.</p>
<p>Normally, you would only care if <code>currDate > testDate</code> for alarming purposes. But in my case, I must be able to compare datetimes for equality after one (or more) of them has been through the <code>roundDownDateTime()</code> function. What am I missing? Is my <code>roundDownDateTime()</code> function faulty? Thank you.</p>
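<p>For what it's worth, a self-contained check of the rounding logic (stdlib only, synthetic timestamp). Note that <code>datetime.replace</code> returns a <em>new</em> object rather than mutating in place, so its result has to be captured for the seconds/microseconds to actually be zeroed:</p>

```python
from datetime import datetime, timedelta

def round_down(dt):
    # drop to the previous 5-minute mark, then zero seconds/microseconds;
    # replace() returns a NEW datetime, so its result must be used
    dt = dt - timedelta(minutes=dt.minute % 5)
    return dt.replace(second=0, microsecond=0)

a = round_down(datetime(2023, 1, 17, 14, 27, 41, 123456))
b = datetime.strptime("202301171425", "%Y%m%d%H%M")
```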
|
<python><python-3.x><datetime><equality>
|
2023-01-17 14:52:57
| 1
| 1,599
|
Pete
|
75,148,114
| 275,002
|
Python: How to efficiently fetch the latest record from multiple tables in MySQL?
|
<p>Let me give you some background.</p>
<p>The earlier version of the Python script fetched data from an API every second and created MySQL tables on the fly, based on the value of a JSON field returned from the API, and then inserted data into them. I was returning the API data as a list of tuples in this format:</p>
<pre><code>[('Name1',9.0),('Name2',1.0)]
</code></pre>
<p>Then a routine creates tables at runtime based on the <code>Name1</code> and <code>Name2</code> text and eventually <code>INSERT</code>s the list of records into the respective tables. In the above case, there will be two tables, <code>Name1</code> and <code>Name2</code>, with the following row each:</p>
<ul>
<li>Name1,9.0</li>
<li>Name2,1.0</li>
</ul>
<p>Now the data ingestion part has been separated, so instead of having the latest record from the API each second, I have to query these tables and fetch the latest record from each one; in the above example, two records should be returned, one from each table.</p>
<p>Having said all that, what is the efficient way to return the latest records from all tables (<em>in the current example above, both tables</em>)? By efficient, I mean I want to avoid multiple requests to the server. If I send a query like the one below, which returns two result sets, is that efficient, or is there some other way to get better performance and reduce latency?</p>
<pre><code>SELECT * FROM `Name1` ORDER BY id DESC LIMIT 1;SELECT * FROM `Name2` ORDER BY id DESC LIMIT 1
</code></pre>
<p><strong>Update</strong></p>
<p>I was told this question is a duplicate of an <a href="https://stackoverflow.com/questions/68291328/sql-join-multiple-tables-with-the-same-column-names">existing one</a>. I had seen and tried this and it does not solve my problem.</p>
<p>The attached question returns ALL the records from the unified tables. All I need is the latest record of each table. I tried the following (<em>thanks to ChatGPT</em>) and it does work:</p>
<pre><code>Explain SELECT * FROM `Name1`
UNION
SELECT * FROM `Name2`
ORDER BY id DESC
LIMIT 2;
</code></pre>
<p>The LIMIT is 2 here because there are two tables in this example, but in reality there could be 30+ tables at any instant.</p>
<p>So the question remains: Is this the right way and is it efficient, especially when two different scripts pull and store records each second?</p>
<p><strong>Update</strong></p>
<p>Table Schema</p>
<pre><code>CREATE TABLE `mytable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`price` float DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
</code></pre>
<p>Other tables (e.g. mytable1...mytableN) will have an identical schema.
Thanks</p>
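<p>For illustration, the "latest row per table in one round trip" idea can be sketched with the stdlib <code>sqlite3</code> module (a stand-in for MySQL; table and column names follow the example above):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("Name1", "Name2"):
    conn.execute(
        f"CREATE TABLE {name} (id INTEGER PRIMARY KEY, name TEXT, price REAL)"
    )
conn.executemany("INSERT INTO Name1 (name, price) VALUES (?, ?)",
                 [("Name1", 8.0), ("Name1", 9.0)])
conn.execute("INSERT INTO Name2 (name, price) VALUES (?, ?)", ("Name2", 1.0))

# One statement: each wrapped subquery picks that table's newest row by id
sql = """
SELECT * FROM (SELECT * FROM Name1 ORDER BY id DESC LIMIT 1)
UNION ALL
SELECT * FROM (SELECT * FROM Name2 ORDER BY id DESC LIMIT 1)
"""
rows = conn.execute(sql).fetchall()
```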
|
<python><mysql>
|
2023-01-17 14:43:57
| 0
| 15,089
|
Volatil3
|
75,148,057
| 11,645,617
|
django set image delete old reference and prevent delete default
|
<p>Although many modern websites employ an OSS (object storage service) for serving images, I still want to build a backend to manage small thumbnails locally.</p>
<p>However, Django's image field is a bit tricky.</p>
<p>There are three places where I may change the image reference:</p>
<ul>
<li><code>models.py</code></li>
<li><code>views.py</code></li>
<li><code>forms.py</code></li>
</ul>
<p>I used to do it simply by:</p>
<p><code>forms.py</code></p>
<pre class="lang-py prettyprint-override"><code>request.user.profile.image = self.files['image']
</code></pre>
<p>and I always have a default</p>
<p><code>models.py</code></p>
<pre class="lang-py prettyprint-override"><code>class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    image = ProcessedImageField(
        default='profile/default.jpg',
        upload_to='profile',
        processors=[ResizeToFill(*THUMBNAIL_SIZE)],
        format='JPEG',
        options={'quality': THUMBNAIL_QUALITY},
    )
</code></pre>
<p>After a lot of testing, I found that it always results in one of the following issues:</p>
<ul>
<li><p>default image file get deleted.</p>
</li>
<li><p>if the image has been set before, it holds a value that is not the default; when I reset it, the old referenced file is not deleted and occupies disk storage.</p>
</li>
</ul>
<p>To handle this properly, I decided to write a global function to import; <strong>whenever I want to set an image, I call it</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from django.conf import settings

def setImage(instance, attr, file):
    """ instance will be saved """
    if file:
        ifield = getattr(instance, attr)
        # old reference, can be default
        iurl = ifield.url
        # default
        durl = settings.MEDIA_URL + instance._meta.get_field(attr).get_default()
        if iurl != durl:
            # old reference is not default
            # delete old reference and free up space
            ifield.delete(save=True)
        # set new file
        setattr(ifield, attr, file)
        instance.save()
</code></pre>
<p>Pretty straightforward. However, in testing, I found the image is never set. The following are possible causes I have already eliminated:</p>
<ul>
<li>form <code>multipart</code> <code>enctype</code> attribute</li>
<li>ajax <code>processData</code>, <code>contentType</code> is correctly set</li>
<li><code>save</code> in model class is not overriden</li>
</ul>
<p>If all of that is OK, where did it go wrong? I logged all the values.</p>
<pre class="lang-py prettyprint-override"><code>setImage(self.user.profile, 'image', self.files['image'])
# self.files['image'] has valid value and is passed
# to setImage, which I think, is not garbage collected
</code></pre>
<pre class="lang-py prettyprint-override"><code>def setImage(instance, attr, file):
    """ instance will be saved """
    print('======')
    print(file)
    if file:
        ifield = getattr(instance, attr)
        iurl = ifield.url
        durl = settings.MEDIA_URL + instance._meta.get_field(attr).get_default()
        print(iurl)
        print(durl)
        if iurl != durl:
            ifield.delete(save=True)
            print(f'--- del {iurl}')
        setattr(ifield, attr, file)
        print('----res')
        print(getattr(ifield, attr))
        print(ifield.image)
        print('--- ins')
        print(instance)
        instance.save()
        print('--- after save')
        print(instance.image.url)
        print(getattr(instance, attr))
</code></pre>
<p>The field has a default value, and I uploaded the screenshot during testing.</p>
<pre><code>======
Screen Shot 2022-11-03 at 10.59.41 pm.png
/media/profile/default.jpg
/media/profile/default.jpg
----res
Screen Shot 2022-11-03 at 10.59.41 pm.png
Screen Shot 2022-11-03 at 10.59.41 pm.png
--- ins
tracey
--- after save
/media/profile/default.jpg
profile/default.jpg
</code></pre>
<hr />
<p>Why is the image not being set? Does anybody have any ideas?</p>
<hr />
<h2>Solutions</h2>
<p>After tons of testing, it turned out only one line was wrong, and it was simply a wrong variable.</p>
<pre class="lang-py prettyprint-override"><code>def setImage(instance, attr, file):
    """ instance will be saved """
    if file:
        ifield = getattr(instance, attr)
        iurl = ifield.url
        durl = settings.MEDIA_URL + instance._meta.get_field(attr).get_default()
        if iurl != durl:
            ifield.delete(save=True)
        setattr(instance, attr, file)
        instance.save()
</code></pre>
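<p>The pitfall reproduces with plain classes (a toy sketch, no Django required): setting the attribute on the field object leaves the model instance untouched, while setting it on the instance rebinds it:</p>

```python
class FileStub:
    def __init__(self, name):
        self.name = name

class ProfileStub:
    def __init__(self):
        self.image = FileStub("default.jpg")

p = ProfileStub()
field = p.image

# wrong target: this hangs a new attribute off the field object
setattr(field, "image", FileStub("new.jpg"))
wrong_result = p.image.name          # still the default

# right target: rebind the attribute on the instance itself
setattr(p, "image", FileStub("new.jpg"))
```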
<hr />
<p>For form update validation, here's a method I've worked out and verified to work:</p>
<pre class="lang-py prettyprint-override"><code>def clean_image(form, attr: str, field: str):
    """
    form.instance.attr -> ImageFile
    form.cleaned_data[field] -> ImageFile
    """
    upload_img = form.cleaned_data[field]
    if form.instance:
        # condition: update
        # create does not matter
        ifield = getattr(form.instance, attr)
        iurl = ifield.url
        if isinstance(upload_img, InMemoryUploadedFile):
            # upload new file
            # else, the file is not changed
            durl = settings.MEDIA_URL + form.instance._meta.get_field(attr).get_default()
            if iurl != durl:
                ifield.delete(save=True)
    return upload_img
</code></pre>
<p>e.g. you can call it simply like:</p>
<pre class="lang-py prettyprint-override"><code>class Project(models.Model):
    image = ProcessedImageField(
        default='projects/default.jpg',
        upload_to='projects',
        processors=[ResizeToFill(*THUMBNAIL_SIZE)],
        options={'quality': THUMBNAIL_QUALITY},
        format='JPEG',
    )

class ProjectCreateForm(forms.ModelForm):
    class Meta:
        model = Project
        fields = ['name', 'image']

    def clean_image(self):
        return clean_image(self, 'image', 'image')
</code></pre>
|
<python><django><django-models><imagefield>
|
2023-01-17 14:39:04
| 1
| 3,177
|
Weilory
|
75,148,018
| 2,998,077
|
To count in how many list, certain elements appeared
|
<p>I have several names, and I want to count in how many lists each of them appears.</p>
<pre><code>four_in_one = [['David','Ellen','Ken'],['Peter','Ellen','Joe'],['Palow','Ellen','Jack'],['Lily','Elain','Ken']]

for name in ['David','Ken','Kate']:
    for each_list in four_in_one:
        i = 0
        if name in each_list:
            i += 1
            print (name, i)
</code></pre>
<p>Output:</p>
<pre><code>David 1
Ken 1
Ken 1
</code></pre>
<p>How can I get the output below instead?</p>
<pre><code>David 1
Kate 0
Ken 2
</code></pre>
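<p>For reference, one way to get that output is to finish counting across <em>all</em> sub-lists before printing (a short stdlib sketch):</p>

```python
four_in_one = [['David', 'Ellen', 'Ken'], ['Peter', 'Ellen', 'Joe'],
               ['Palow', 'Ellen', 'Jack'], ['Lily', 'Elain', 'Ken']]

counts = {}
for name in ['David', 'Ken', 'Kate']:
    # sum of booleans: one per sub-list containing the name
    counts[name] = sum(name in sub for sub in four_in_one)
    print(name, counts[name])
```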
|
<python><list><loops>
|
2023-01-17 14:36:00
| 2
| 9,496
|
Mark K
|
75,148,002
| 1,176,573
|
Populate pods CPU limits using Kubernetes Python Client for Azure AKS cluster
|
<p>I need to use <a href="https://github.com/Azure/azure-sdk-for-python" rel="nofollow noreferrer">Azure Python SDK</a> and <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> to list the Pods CPU limits for a cluster running in AKS.</p>
<p>Although it is straightforward using the CLI/PowerShell, I need to use Python exclusively.
I must not use <a href="https://stackoverflow.com/questions/53535855/how-to-get-kubectl-configuration-from-azure-aks-with-python">subprocess calls</a>.</p>
<p>Here is snippet that gets <code>KubeConfig</code> object after authentication with Azure:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = "XXX"
resource_group_name= 'MY-SUB'
cluster_name = "my-aks-clustername"
container_service_client = ContainerServiceClient(credential, subscription_id)
kubeconfig = container_service_client.managed_clusters. \
list_cluster_user_credentials(resource_group_name, cluster_name). \
kubeconfigs[0]
</code></pre>
<p>But I am unsure how to put this to be used by K8s Python client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config() ## How to pass?
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
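One commonly suggested pattern (a sketch only; the exact attribute holding the raw kubeconfig bytes on the Azure SDK result object — `kubeconfig.value` below — is an assumption and should be checked against your SDK version) is to write the kubeconfig to a temporary file and pass its path to `config.load_kube_config(config_file=...)`:

```python
import tempfile

def kubeconfig_to_file(kubeconfig_bytes):
    """Write raw kubeconfig bytes to a temp file and return its path."""
    with tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) as fh:
        fh.write(kubeconfig_bytes)
        return fh.name

# Hypothetical usage with the snippets from the question:
#   path = kubeconfig_to_file(kubeconfig.value)   # .value is an assumption
#   config.load_kube_config(config_file=path)
#   v1 = client.CoreV1Api()
```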
|
<python><azure><kubernetes><kubectl><azure-aks>
|
2023-01-17 14:34:42
| 3
| 1,536
|
RSW
|
75,147,819
| 1,141,818
|
Extract text before two underscores with regex
|
<p>I feel I am not far from a solution but I still struggle to extract some text from variables with Regex. The conditions are:</p>
<ul>
<li>The text can only contain upper case characters or integers</li>
<li>The text can contain underscores BUT not two consecutive ones</li>
</ul>
<p>Examples:</p>
<pre><code>test_TEST_TEST_1_TEST_13DAHA bfd --> TEST_TEST_1_TEST_13DAHA
test_TEST_TEST_1_TEST__13DAHA bfd --> TEST_TEST_1_TEST
test__TEST_TEST --> TEST_TEST
test_TEST__DHJF --> TEST
test_TEST__Ddsa --> TEST
test__TEST --> TEST
</code></pre>
<p>So far, I got</p>
<p><code>_([0-9A-Z_]+)</code> works for the first one but not the second<br />
<code>_([0-9A-Z_]+)(?:__.*)+</code> works for the second one but not the first</p>
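For reference, one pattern that handles all six examples (a sketch, verified only against the samples above): capture runs of `[0-9A-Z]+` joined by *single* underscores, so a double underscore naturally ends the match.

```python
import re

# runs of uppercase/digits, joined by single underscores only
pattern = re.compile(r'_([0-9A-Z]+(?:_[0-9A-Z]+)*)')

samples = {
    'test_TEST_TEST_1_TEST_13DAHA bfd': 'TEST_TEST_1_TEST_13DAHA',
    'test_TEST_TEST_1_TEST__13DAHA bfd': 'TEST_TEST_1_TEST',
    'test__TEST_TEST': 'TEST_TEST',
    'test_TEST__DHJF': 'TEST',
    'test_TEST__Ddsa': 'TEST',
    'test__TEST': 'TEST',
}
for text, expected in samples.items():
    print(text, '->', pattern.search(text).group(1))
```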
|
<python><regex>
|
2023-01-17 14:18:56
| 2
| 3,575
|
GuillaumeA
|
75,147,699
| 10,590,609
|
Multiprocessing queue closing signal
|
<p>Suppose I have a number of items that I put in a queue for other processes to deal with. The items are rather large in memory, therefore I limit the queue size. At some point I will have no more things to put in the queue. How can I signal the other processes that the queue is closed?</p>
<p>One option would be to close the child processes when the queue is empty, but this relies on the queue being emptied slower than it is being filled.</p>
<p>The documentation of <code>multiprocessing.Queue</code> talks about the following method:</p>
<blockquote>
<p><strong>close()</strong></p>
<p>Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.</p>
</blockquote>
<p>Is it safe to call close while there are still items in the queue? Are these items guaranteed to be processed? How can another process know that the queue is closed?</p>
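A commonly recommended alternative to relying on <code>close()</code> is a sentinel value: the producer puts a special marker on the queue meaning "no more items". A minimal sketch of the pattern — shown with threads and <code>queue.Queue</code> for brevity, but the same idea applies to <code>multiprocessing.Queue</code>:

```python
import queue
import threading

SENTINEL = None  # unique marker meaning "no more items will ever arrive"

def worker(q, results):
    while True:
        item = q.get()
        if item is SENTINEL:
            q.put(SENTINEL)  # re-put so sibling workers also see it
            break
        results.append(item * 2)

q = queue.Queue(maxsize=4)  # bounded, like the memory-limited queue in the question
results = []
threads = [threading.Thread(target=worker, args=(q, results)) for _ in range(2)]
for t in threads:
    t.start()
for i in range(5):
    q.put(i)
q.put(SENTINEL)  # signal "queue is closed"
for t in threads:
    t.join()
print(sorted(results))
```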
|
<python><multiprocessing><queue>
|
2023-01-17 14:09:26
| 3
| 332
|
Izaak Cornelis
|
75,147,684
| 12,193,952
|
Check if variable exists and if exists whether is not a nan using Python
|
<p>I would like to improve my coding skill and I struggle with finding a proper solution. I have a variable named <code>foo</code>. If <code>foo</code> has a numeric value, I need to do some action. However <code>foo</code> can be either:</p>
<ul>
<li>not set (<code>NoneType</code>)</li>
<li>float with value nan (<code>math.nan</code>)</li>
<li>float with numeric value (<code>12.3, 16.9</code> etc.)</li>
</ul>
<p>I would like to create a <strong>simple</strong> structure that match all cases and best I can think of is this code:</p>
<pre class="lang-py prettyprint-override"><code># Validate that foo exists
if foo is not None:
# if exists, check if it's not a math.nan
if not math.isnan(foo):
# do my desired action
        print("Doing desired action")
...
</code></pre>
<p>What is the best (in terms of Python philosophy) approach? Thanks 🙏</p>
<hr />
<p>Findings so far</p>
<ul>
<li>Validating both conditions inside single if fails because <code>foo</code> is sometimes not a number <code>TypeError: must be real number, not NoneType</code></li>
<li>Another approach (more EAFP) is using <code>try</code> and <code>except</code>, is it more "Pythonic" solution?</li>
</ul>
<pre class="lang-py prettyprint-override"><code>try:
if not math.isnan(foo):
# do desired action
print("Doing desired action")
except TypeError:
print("foo not yet set")
</code></pre>
<hr />
<p>Note: If you ask why <code>foo</code> is so variable, it's because it's variable taken from a Dataframe. The column has some floats in it, so the column type is <code>float</code>. However missing values are displayed as nan (but type is still float). Before loading the dataframe the value is <code>None</code> and after load it's <code>math.nan</code></p>
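One compact idiom (a sketch): rely on <code>and</code> short-circuiting, so <code>math.isnan</code> is only evaluated when <code>foo</code> is not <code>None</code> — this covers all three cases in a single condition.

```python
import math

def has_numeric_value(foo):
    # short-circuit: isnan() is never called on None
    return foo is not None and not math.isnan(foo)

for foo in (None, math.nan, 12.3):
    if has_numeric_value(foo):
        print("Doing desired action with", foo)
```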
|
<python><validation><variables><types>
|
2023-01-17 14:08:27
| 0
| 873
|
FN_
|
75,147,682
| 9,422,346
|
How to interpret HAF sensor data?
|
<p>My sensor i2c - (flow sensor) on raspberry pi gives a reading <code>b'\x06g'</code>. How can I possibly interpret this?
Code snippet used (this code is based on - <a href="https://github.com/stripemsu/HoneywellFlow" rel="nofollow noreferrer">https://github.com/stripemsu/HoneywellFlow</a>)</p>
<pre><code>import io, fcntl
I2C_SLAVE=0x0703
I2C_BUS=1
HAFAddr=0x49
maxflow=100 #100 sccm
toHex = lambda x: ''.join([hex(ord(c))[2:].zfill(2) for c in x])
i2c_r=io.open("/dev/i2c-"+str(I2C_BUS),"rb",buffering=0)
i2c_w=io.open("/dev/i2c-"+str(I2C_BUS),"wb",buffering=0)
fcntl.ioctl(i2c_r, I2C_SLAVE,HAFAddr)
print(i2c_r.read(2))
i2c_r.close()
</code></pre>
<p>I have uploaded the datasheet of the sensor <a href="https://easyupload.io/s555l7" rel="nofollow noreferrer">HERE</a> <a href="https://i.sstatic.net/dHAPG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dHAPG.png" alt="snapshot" /></a></p>
|
<python><raspberry-pi><sensors><i2c>
|
2023-01-17 14:08:21
| 1
| 407
|
mrin9san
|
75,147,507
| 7,074,969
|
Validating adb2c jwt is throwing Invalid authorization token: InvalidSignatureError in Python
|
<p>I have been trying to validate a JWT received from ADB2C in Python over the last few days. For that, I use the <code>azure_ad_verify_token</code> library and have followed a tutorial on their docs page. As they say, I define</p>
<pre><code>azure_ad_app_id = "app_id"
azure_ad_issuer = f"https://login.***.com/{tenant_id}/v2.0"
azure_ad_jwks_uri = f"https://login.***.com/{tenant_id}/discovery/v2.0/keys"
token = "eyJ0...."
payload = verify_jwt(
token=token,
valid_audiences=[azure_ad_app_id],
issuer=azure_ad_issuer,
jwks_uri=azure_ad_jwks_uri,
verify=True,
)
</code></pre>
<p>However, this code throws me an error that says</p>
<blockquote>
<p>Invalid authorization token: InvalidSignatureError in Python</p>
</blockquote>
<p>I'm not sure why this error happens, because the same token is validated successfully in .NET but fails in Python.</p>
<p>However, another thing I noticed is that if I paste the jwt at <a href="https://jwt.io/" rel="nofollow noreferrer">https://jwt.io/</a>, I get a message at the end that says <strong>Invalid Signature</strong>. I went through the Internet and found that I need to pass my public key but even after passing it, I still get that same <strong>Invalid Signature</strong> message.</p>
<p>Has someone ever stumbled upon an error like this? Does this seem like a problem with the configuration of the token in b2c?</p>
|
<python><jwt><azure-ad-b2c>
|
2023-01-17 13:54:40
| 1
| 1,013
|
anthino12
|
75,147,455
| 1,818,713
|
Running dash app in azure functions python model v2 but only getting default site up page
|
<p>I'm using the v2 python programming model and trying to launch a dash app similar to the example of a Flask app <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=wsgi%2Capplication-level&pivots=python-mode-decorators#alternative-entry-point" rel="nofollow noreferrer">here</a></p>
<p>my function_app.py is as follows:</p>
<pre><code>import dash
from dash import dcc
from dash import html
import azure.functions as func
dashapp = dash.Dash()
colors = {
'background': '#111111',
'text': '#7FDBFF'
}
dashapp.layout = html.Div(
style={'backgroundColor': colors['background']},
children=[
html.H1(
children='Hello Dash',
style={
'textAlign': 'center',
'color': colors['text']
}
),
html.Div(children='Dash: A web application framework for Python.', style={
'textAlign': 'center',
'color': colors['text']
}),
dcc.Graph(
id='Graph1',
figure={
'data': [
{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'},
{'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'},
],
'layout': {
'plot_bgcolor': colors['background'],
'paper_bgcolor': colors['background'],
'font': {
'color': colors['text']
}
}
}
)
]
)
app = func.WsgiFunctionApp(app=dashapp.server.wsgi_app,
http_auth_level=func.AuthLevel.ANONYMOUS)
</code></pre>
<p>My host.json is:</p>
<pre><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.15.0, 4.0.0)"
},
"functionTimeout": "00:10:00",
"extensions":
{
"http":
{
"routePrefix": ""
}
}
}
</code></pre>
<p>When I run locally it works as expected with my app at <code>http://localhost:7072/</code>, but when I deploy to Azure Functions and go to <code>myapp.azurewebsites.net</code> I just get the default "your app is up and running" page. My guess is that Azure is serving that page at the root address regardless of my app, while the local deployment doesn't, but I don't know how to verify that or, more importantly, change that behavior.</p>
|
<python><azure-functions><wsgi>
|
2023-01-17 13:51:11
| 1
| 19,938
|
Dean MacGregor
|
75,147,385
| 4,913,660
|
Running __main__.py script with python . (dot)
|
<p>I am trying to understand why a <code>__main__.py</code> file can be run from the shell by issuing
<code>python .</code> within the folder where said file is located.</p>
<p>How does this work? What does the <code>.</code> stand for, and why is the file name not needed?</p>
<p>Where is the dot coming from? Given a bash script, say</p>
<pre><code>#!/bin/bash
#Dummy bash script
echo "This is a dummy bash script"
</code></pre>
<p>saved as <code> dummy_bash_script</code>.
I could run it, opening a terminal from the folder where it is located, issuing</p>
<pre><code>$ ./dummy_bash_script
</code></pre>
<p>providing hence an explicit path (<code>./</code>). Are the two dots related? Does the dot in <code> python .</code> indicate anything about the current directory? If yes, where is the backslash gone?</p>
<p>Another use for the <code>.</code> (dot) in bash is as a shortcut for <code>source</code>, but I fail to see any connection.</p>
<p>And why is the file name not needed in <code>python .</code>? I understand that when a file is run as a script, "somewhere under the bonnet" it is assigned the name "<strong>main</strong>". That is, the special variable <code>__name__</code> gets an assignment, <code>__name__ = "__main__"</code>.</p>
<p>But how exactly does this work when a <code>__main__.py</code> file is executed as a script using the dot, i.e. how can this file be run with <code>python .</code>, without a file name?</p>
<p>Thanks</p>
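A small experiment illustrates the mechanism: `python <directory>` adds the directory to `sys.path` and executes the `__main__.py` found inside it; the dot is simply the relative path to the current directory (no leading `./` needed, since it is an argument to `python`, not a command name), and it is unrelated to bash's `source` dot.

```shell
mkdir -p demo_pkg
printf 'print("running __main__.py")\n' > demo_pkg/__main__.py
python3 demo_pkg        # same mechanism as: cd demo_pkg && python3 .
```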
|
<python><bash>
|
2023-01-17 13:46:36
| 0
| 414
|
user37292
|
75,147,334
| 6,372,859
|
Evaluate scalar function on numpy array with conditionals
|
<p>I have a numpy array <code>r</code> and I need to evaluate a scalar function, let's say <code>np.sqrt(1-x**2)</code> on each element <code>x</code> of this array. However, I want to return the value of the function as zero, whenever <code>x>1</code>, and the value of the function on <code>x</code> otherwise.
The final result should be a numpy array of scalars.</p>
<p>How could I write this the most pythonic way?</p>
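A vectorised sketch using <code>np.where</code>; note that both branches are evaluated over the whole array, so the argument is clipped to keep <code>sqrt</code> away from negative values on the <code>x > 1</code> entries (whose results are discarded anyway):

```python
import numpy as np

r = np.array([0.0, 0.6, 0.8, 1.0, 1.5, 2.0])

# clip keeps 1 - x**2 non-negative, avoiding sqrt warnings for x > 1
result = np.where(r > 1, 0.0, np.sqrt(np.clip(1 - r**2, 0, None)))
print(result)
```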
|
<python><arrays><numpy><if-statement>
|
2023-01-17 13:43:14
| 3
| 583
|
Ernesto Lopez Fune
|
75,147,116
| 14,353,779
|
conditional flagging in pandas
|
<p>I have a dataframe <code>df</code> :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">1F_col</th>
<th style="text-align: center;">2F_col</th>
<th style="text-align: center;">3F_col</th>
<th style="text-align: center;">4G_col</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">5</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">6</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
</tbody>
</table>
</div>
<p>I have 2 types of column names: one which has <code>F</code> and another which has <code>G</code> in them.</p>
<p>F_Type & G_Type: if at least one of the <code>F</code> columns is 1, I want to flag 1, and likewise if at least one of the <code>G</code> columns is 1, then I want to flag 1 (F_Type & G_Type are the new column names). Like below:</p>
<p>Comm1:
If both F_Type and G_Type are 1, I want to show the string <code>Good</code>.
If F_Type is 1 and G_Type is 0, then the string <code>4G</code>.
If F_Type is 0 and G_Type is 1, then the string <code>1F</code>.
If both F_Type and G_Type are 0, then the string <code>1F</code>.</p>
<p>Comm2: If both F_Type and G_Type are 0, then the string <code>Hard</code>; else <code>Good</code> if Comm1 is <code>Good</code>; else <code>Soft</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">F_Type</th>
<th style="text-align: center;">G_Type</th>
<th style="text-align: center;">Comm1</th>
<th style="text-align: center;">Comm2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">Good</td>
<td style="text-align: center;">Good</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">4G</td>
<td style="text-align: center;">soft</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">Good</td>
<td style="text-align: center;">Good</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1F</td>
<td style="text-align: center;">Soft</td>
</tr>
<tr>
<td style="text-align: center;">5</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1F</td>
<td style="text-align: center;">Hard</td>
</tr>
<tr>
<td style="text-align: center;">6</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">Good</td>
<td style="text-align: center;">Good</td>
</tr>
</tbody>
</table>
</div>
<p>I have a huge number of <code>ID</code> records (1 million); what would be the best way to achieve this in the least time?</p>
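One fully vectorised sketch (column names taken from the tables above; avoiding a row-wise <code>apply</code> is what keeps this fast on ~1M rows): select the F/G columns by name, reduce with <code>max(axis=1)</code>, then map the conditions with <code>np.select</code>.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4, 5, 6],
                   '1F_col': [0, 0, 1, 0, 0, 1],
                   '2F_col': [1, 1, 1, 0, 0, 1],
                   '3F_col': [1, 0, 0, 0, 0, 1],
                   '4G_col': [1, 0, 1, 1, 0, 1]})

f_cols = [c for c in df.columns if c.endswith('F_col')]
g_cols = [c for c in df.columns if c.endswith('G_col')]
df['F_Type'] = df[f_cols].max(axis=1)   # 1 if any F column is 1
df['G_Type'] = df[g_cols].max(axis=1)   # 1 if any G column is 1

df['Comm1'] = np.select(
    [(df['F_Type'] == 1) & (df['G_Type'] == 1),
     (df['F_Type'] == 1) & (df['G_Type'] == 0)],
    ['Good', '4G'],
    default='1F')                        # both remaining cases map to 1F

df['Comm2'] = np.select(
    [(df['F_Type'] == 0) & (df['G_Type'] == 0),
     df['Comm1'] == 'Good'],
    ['Hard', 'Good'],
    default='Soft')
print(df[['ID', 'F_Type', 'G_Type', 'Comm1', 'Comm2']])
```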
|
<python><pandas><numpy>
|
2023-01-17 13:26:03
| 3
| 789
|
Scope
|
75,146,906
| 6,494,707
|
ValueError: Length of values (586) does not match length of index (521)
|
<p>I have two data frames and I am comparing two of their columns, <code>df_luad['Tumor_Sample_Barcode']</code> and <code>df_tmb['Patient ID']</code>. If the values from the two columns are equal, then a column from the second dataframe, <code>df_tmb['TMB (nonsynonymous)']</code>, is added to a new third dataframe <code>df1</code> as the column <code>df1['tmb_value']</code>.</p>
<pre><code>df1['tmb_value'] = np.where(df_luad['Tumor_Sample_Barcode'].eq(df_tmb['Patient ID']), 'True' , df_tmb['TMB (nonsynonymous)'])
</code></pre>
<p>However, I am getting this error:</p>
<pre><code>*** ValueError: Length of values (586) does not match length of index (521)
</code></pre>
<p>which is related to the row numbers. <code>df_luad</code> has 521 rows and <code>df_tmb</code> has 586 rows. How do I add the values of <code>df_tmb['TMB (nonsynonymous)']</code> only for the matching rows (records) in <code>df_luad</code>?</p>
<p>The following is the <code>df_tmb</code> data:
<a href="https://i.sstatic.net/xYjKm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYjKm.png" alt="enter image description here" /></a></p>
<p>and this is <code>df_luad</code>:</p>
<p><a href="https://i.sstatic.net/E8Gbe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E8Gbe.png" alt="enter image description here" /></a></p>
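One approach worth sketching (with made-up patient IDs standing in for the real data): build a Series indexed by <code>Patient ID</code> and <code>.map()</code> it onto the barcode column. Alignment happens by value, so only matching rows get a TMB value and the rest become <code>NaN</code> — the length mismatch never arises.

```python
import pandas as pd

# Hypothetical small frames standing in for the real data (IDs are made up)
df_luad = pd.DataFrame({'Tumor_Sample_Barcode': ['P1', 'P2', 'P3']})
df_tmb = pd.DataFrame({'Patient ID': ['P1', 'P3', 'P4'],
                       'TMB (nonsynonymous)': [1.2, 3.4, 5.6]})

# Series indexed by Patient ID; .map() aligns on the barcode values,
# so non-matching rows simply become NaN (no length mismatch)
tmb_by_patient = df_tmb.set_index('Patient ID')['TMB (nonsynonymous)']
df1 = df_luad.copy()
df1['tmb_value'] = df1['Tumor_Sample_Barcode'].map(tmb_by_patient)
print(df1)
```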
|
<python><pandas><dataframe>
|
2023-01-17 13:07:39
| 1
| 2,236
|
S.EB
|
75,146,792
| 1,542,093
|
Get all required fields of a nested Pydantic model
|
<p>My nested Pydantic model is defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
from pydantic import BaseModel
class Location(BaseModel):
city: Optional[str]
state: str
country: str
class User(BaseModel):
id: int
name: str = "Gandalf"
age: Optional[int]
location: Location
</code></pre>
<p>I would like to get all required fields for the <code>User</code> model.
For the above example, the expected output is <code>["id", "state", "country"]</code>.</p>
<p>Any help greatly appreciated.</p>
|
<python><python-3.6><pydantic>
|
2023-01-17 12:57:17
| 1
| 1,213
|
lordlabakdas
|
75,146,595
| 19,580,067
|
Unable to install pypiwin32 library
|
<p>I tried to install <code>pypiwin32</code> via <code>pip install pypiwin32</code> in Google Colab for reading Outlook emails, but the installation keeps failing.</p>
<p><a href="https://i.sstatic.net/wIXxI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wIXxI.png" alt="enter image description here" /></a></p>
<p>I tried downgrading the Python version to 3.9 as well, but it didn't work.</p>
<p>Any suggestions for fixing the issue?</p>
|
<python><machine-learning><pywin32>
|
2023-01-17 12:43:03
| 2
| 359
|
Pravin
|
75,146,476
| 20,281,672
|
Using Scipy's LowLevelCallable and numba cfunc to optimise a time based ode that takes multiple arrays as input
|
<p>I want to optimise the solving of a system of time-based ODEs by combining Numba's <code>cfunc</code> and scipy's <code>LowLevelCallable</code> functionality to greatly speed up <code>odeint()</code>. However, I am having trouble finding out the exact syntax for the correct solution.</p>
<p>My (non-optimised) code that works looks something like this:</p>
<pre><code>import numpy as np
import numba as nb
from numba import types
from matplotlib import pyplot as plt
import scipy.integrate as si
# hours in the simulation
T_sim = 72
# simulate fluctuating outside temp for 3 days
T_out = np.sin(np.linspace(0, T_sim, T_sim)/3.8) * 8 + 18
# heat capacity of the air and floor
cap_air, cap_flr = 1, 10
def func(T, t):
T_air, T_flr= T[0], T[1]
t_ = int(t)
H_airOut = 4 * U_vent[t_] * (T_air - T_out[t_])
H_blowAir = U_blow[t_]
H_airFlr = (T_air - T_flr)
dT_air = ((H_blowAir
- H_airOut
- H_airFlr) / cap_air)
dT_flr = H_airFlr / cap_flr
return dT_air, dT_flr
T0 = [16, 18] # starting temperature for T_air and T_flr
t_eval = np.linspace(0, T_sim-1, T_sim-1)
U_blow = np.full(T_sim, 0.1) # control for the heating
U_vent = np.full(T_sim, 0.3) # control for the ventilation
result = si.odeint(func,T0,t_eval)
# this piece of code is a standin for the real code, where func() is run
# tens of thousands of times. All variables are constant, except for the
# two arrays U_blow and U_vent
for _ in range(10):
U_blow = np.random.rand(T_sim)
U_vent = np.random.rand(T_sim)
result = si.odeint(func,T0,t_eval)
plt.plot(result[:, 0], label='T_air')
plt.plot(result[:, 1], label='T_flr')
plt.plot(T_out[:-1], label='T_out')
plt.legend()
</code></pre>
<p>Based on other SO posts, I'm pretty sure that the solution should look something like the code below, but I can't get it to work myself. The problem would have been much easier if only some scalar variables changed between runs, but this is not the case, as I need to be able to vary the arrays <code>U_blow</code> & <code>U_vent</code> between runs.</p>
<pre><code>import scipy as sp
from numba.types import float64, CPointer, intc
def jit_integrand_function(integrand_function):
jitted_function = nb.jit(integrand_function, nopython=True)
@nb.cfunc(float64(intc, CPointer(float64)))
def wrapped(n, xx):
return jitted_function(xx[0], xx[1])
return sp.LowLevelCallable(wrapped.ctypes)
@jit_integrand_function
def func(T, t):
T_air, T_flr= T[0], T[1]
t_ = int(t)
H_airOut = 4 * U_vent[t_] * (T_air - T_out[t_])
H_blowAir = U_blow[t_]
H_airFlr = (T_air - T_flr)
dT_air = ((H_blowAir
- H_airOut
- H_airFlr) / cap_air)
dT_flr = H_airFlr / cap_flr
return dT_air, dT_flr
T0 = [16, 18] # starting temperature for T_air and T_flr
t_eval = np.linspace(0, T_sim-1, T_sim-1)
for _ in range(10):
U_blow = np.random.rand(T_sim)
U_vent = np.random.rand(T_sim)
result = si.odeint(func,T0,t_eval, (U_blow, U_vent))
plt.plot(result[:, 0], label='T_air')
plt.plot(result[:, 1], label='T_flr')
plt.plot(T_out[:-1], label='T_out')
plt.legend()
</code></pre>
<p>As a sidenote, my full model is not numerically stable enough for the forward Euler integration method to work, so that is not an option.</p>
|
<python><optimization><scipy><numba><ode>
|
2023-01-17 12:33:48
| 1
| 312
|
Rafnus
|
75,146,416
| 10,065,556
|
Impact of idle connections on max_connections in postgres
|
<p>I've been getting <code>'M': 'sorry, too many clients already'</code> lately when my FastAPI endpoint is called, throwing 500.</p>
<p>I tried running this script:</p>
<pre><code>select pid as process_id,
usename as username,
datname as database_name,
client_addr as client_address,
application_name,
backend_start,
state,
state_change
from pg_stat_activity;
</code></pre>
<p>And I could see ~50 connections with state <code>idle</code> (which I believe means connections that are established but not making any transactions).</p>
<p>My postgres has a limit of 100 <code>max_connections</code> (in .conf).</p>
<p>Other than eating RAM, is there any impact of these "idle" connections on <code>max_connections</code> or postgres in general?</p>
<p>Is there something my FastAPI app is doing wrong:</p>
<p><code>utils.py</code></p>
<pre class="lang-py prettyprint-override"><code>def get_db_instance():
try:
db = SessionLocal()
yield db
finally:
db.close()
</code></pre>
<p><code>crud.py</code></p>
<pre class="lang-py prettyprint-override"><code>from utils import get_db_instance
def add(obj, db: Session):
db.add(obj)
db.commit()
</code></pre>
|
<python><postgresql><fastapi>
|
2023-01-17 12:28:24
| 1
| 994
|
ScriptKiddieOnAComputer
|
75,146,250
| 7,714,681
|
E1101 (no-member) for code that works well
|
<p>I have a structure similar to below, in my code:</p>
<pre><code>class A():
def __init__(
self,
<SOME_VARIABLES>
    ):
self.matrix = self._get_matrix()
class B(A):
def __init__(
self,
<SOME_VARIABLES>
    ):
super().__init__(
<SOME_VARIABLES>
)
def _get_matrix(self):
<DO SOMETHING>
class C(A):
def __init__(
self,
<SOME_VARIABLES>
    ):
super().__init__(
<SOME_VARIABLES>
)
def _get_matrix(self):
<DO SOMETHING>
</code></pre>
<p>The code works fine. However, Pylint returns an <code>E1101(no-member)</code> error. How can I change my code so I don't get this error?</p>
<p>The <code>_get_matrix()</code> methods in classes B and C work differently, so I cannot place them in A.</p>
|
<python><object><oop><constructor><pylint>
|
2023-01-17 12:13:08
| 1
| 1,752
|
Emil
|
75,146,221
| 19,633,374
|
compare python list with a string array column in Pyspark
|
<p>I have a python list</p>
<pre><code>my_list = ["AAA","BBB", "CCC"]
</code></pre>
<p>I have to compare this list with the array column in df</p>
<pre><code># dataframe dummy
df = spark.createDataFrame([('1A', '3412asd','value-1', ['XXX', 'YYY', 'AAA']),
('2B', '2345tyu','value-2', ['DDD', 'YFFFYY', 'GGG', '1']),
('3C', '9800bvd', 'value-3', ['AAA']),
('3C', '9800bvd', 'value-1', ['AAA', 'YYY', 'CCCC'])],
('ID', 'Company_Id', 'value' ,'array_column'))
df.show()
+---+----------+-------+--------------------+
| ID|Company_Id| value| array_column |
+---+----------+-------+--------------------+
| 1A| 3412asd|value-1| [XXX, YYY, AAA] |
| 2B| 2345tyu|value-2|[DDD, YFFFYY, GGG, 1]|
| 3C| 9800bvd|value-3| [AAA] |
| 3C| 9800bvd|value-1| [AAA, YYY, CCCC] |
+---+----------+-------+---------------------+
</code></pre>
<p>I am trying to filter the dataframe by checking if any element of <code>my_list</code> matches with any element in <code>array_column</code></p>
<p><strong>Code I tried</strong></p>
<pre><code> final_df = df.filter(
F.arrays_overlap(F.col("array_column"),F.array(*[F.lit(x) for x in my_list])))
</code></pre>
<ul>
<li>I am getting the expected output, but <code>arrays_overlap</code> returns <code>True</code> when both <code>my_list</code> and <code>array_column</code> are <code>None</code></li>
</ul>
<p><strong>Cases to be included in the above filter:</strong></p>
<pre><code>None, None => False # with the above code its returning True
None, SomeVal => False #But returning None
SomeValue, None => False #But returning None
</code></pre>
<p>explanation :</p>
<ul>
<li>if <code>my_list = [None, "AAA"]</code> and array_column has a list <code>["BBB", None]</code> - it should return False</li>
<li>if <code>my_list = [None, "AAA"]</code> and array_column has a list <code>["BBB", None, "AAA"]</code> - it should return True</li>
<li>Also, if <code>my_list = None</code> and array_column has some list <code>["XXX","YY"]</code>, then it should also return False; the vice-versa case should be False as well</li>
</ul>
<p>Can anyone help me with this? How can I change the filter such that the above three cases return False?</p>
|
<python><arrays><list><dataframe><pyspark>
|
2023-01-17 12:10:19
| 0
| 642
|
Bella_18
|
75,146,100
| 913,098
|
Execute a Python script post install using setuptools
|
<p>This is exactly the same as <a href="https://stackoverflow.com/q/17806485/913098">this question</a>.</p>
<p>The <a href="https://stackoverflow.com/a/18159969/913098">accepted (and only) answer</a> uses <code>distutils</code> <a href="https://stackoverflow.com/a/14753678/913098">which is deprecated</a>.</p>
<p>Simply replacing <code>from distutils.core import setup</code> with <code>from setuptools import setup</code></p>
<p>and <code>from setuptools.command import install as _install</code> with <code>from distutils.command.install import install as _install</code></p>
<p>is not enough, because api differs.</p>
<hr />
<p>I am looking for a solution using setuptools.</p>
|
<python><installation><pip><setuptools><distutils>
|
2023-01-17 11:59:36
| 0
| 28,697
|
Gulzar
|
75,146,086
| 860,233
|
My access to Google Sheets API is being blocked to a URI Redirect Mismatch
|
<p>I am unable to access my Google sheet, here is the error I get:</p>
<pre><code>Error 400: redirect_uri_mismatch
You can't sign in to this app because it doesn't comply with Google's OAuth 2.0 policy.
If you're the app developer, register the redirect URI in the Google Cloud Console.
Request details: redirect_uri=http://localhost:57592/
</code></pre>
<p>Below I'll share my code, relevant parts of the credentials file, as well as the Google Cloud console configurations I have in place. Just to note, from my inexperienced eyes (as far as Python and the Google console are concerned), it seems as if I've checked the right boxes based on what I've found on Stack Overflow.</p>
<p>Python:</p>
<pre><code>from __future__ import print_function
from datetime import datetime
import pickle
import os.path
import pyodbc
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
# connecting to the database
connection_string = 'DRIVER={driver};PORT=1433;SERVER={server};DATABASE={database};UID={username};PWD={password}; MARS_Connection={con}'.format(driver= 'ODBC Driver 17 for SQL Server', server= server, database= database, username=username, password=password, con='Yes')
conn = pyodbc.connect(connection_string)
print("Connected to the database successfully")
# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']
# The ID and range of a sample spreadsheet.
# https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0
SPREADSHEET_ID = spreadSheetID
RANGE_NAME = 'CAPS!A2:K'
#def dataUpload():
def dataUpload():
creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
print ("we going here")
flow = InstalledAppFlow.from_client_secrets_file(
'ApplicantList/credentials.json', SCOPES)
flow.redirect_uri = 'http://127.0.0.1:8000' #added this
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
service = build('sheets', 'v4', credentials=creds)
</code></pre>
<p>In Google console I have both a Service account and an OAUTH 2.0 ClientID.</p>
<p>I have the following URI's registered there:</p>
<p><a href="https://i.sstatic.net/3tNo4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3tNo4.png" alt="enter image description here" /></a></p>
<p>Each time the error comes up, I add the URI to the redirect list, but then I get the same error with a new URI port (i.e. http://localhost:[new port]).</p>
<p>And here is my credentials file:</p>
<pre><code>{"web":{"client_id":"xxxxx","project_id":"xxxx","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://oauth2.googleapis.com/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","redirect_uris":["http://127.0.0.1:8000"],"javascript_origins":["http://127.0.0.1:8000"]}}
</code></pre>
<p>What am I doing wrong?</p>
|
<python><django><google-sheets><google-cloud-platform>
|
2023-01-17 11:58:34
| 1
| 930
|
Glenncito
|
75,145,735
| 9,778,828
|
PyMongo - get 20% (random or not) of the collection
|
<p>I have a big MongoDB collection - 16 GB, 130M rows.</p>
<p>I need to query the DB and get only 20% of the data.</p>
<p>The best option would be to only get every 5th row, but also a random 20% choosing could work.</p>
<p><a href="https://www.mongodb.com/docs/upcoming/reference/operator/aggregation/sample/#pipe._S_sample" rel="nofollow noreferrer">Sample</a> is not a good option, as duplicates are very likely to happen.</p>
<p>Any suggestions? How do I do that?</p>
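<p>One way to sketch the every-5th-row idea without materializing the whole collection is to step over the cursor lazily with itertools.islice. The <code>range(20)</code> below is only a stand-in for a real pymongo <code>collection.find()</code> cursor (any iterable of documents works the same way):</p>

```python
from itertools import islice

def every_nth(cursor, n=5):
    # keep only every n-th document from an iterable cursor;
    # islice is lazy, so the 130M rows are never loaded at once
    return islice(cursor, 0, None, n)

# stand-in for e.g. `collection.find({}, batch_size=1000)` from pymongo
fake_cursor = range(20)
print(list(every_nth(fake_cursor, 5)))  # → [0, 5, 10, 15]
```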
|
<python><pandas><mongodb><pymongo>
|
2023-01-17 11:27:07
| 1
| 505
|
AlonBA
|
75,145,428
| 14,353,779
|
How to achieve this concatenation in Pandas?
|
<p>I have a dataframe <code>df</code> :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">tray</th>
<th style="text-align: center;">bag</th>
<th style="text-align: center;">ball</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I want to add a column <code>Presence</code> to the dataframe <code>df</code>, with the values separated by commas, this way :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">tray</th>
<th style="text-align: center;">bag</th>
<th style="text-align: center;">ball</th>
<th style="text-align: center;">Presence</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">bag,ball</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">bag</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">tray,bag</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">ball</td>
</tr>
<tr>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">No Presence</td>
</tr>
</tbody>
</table>
</div>
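<p>A sketch of one way to build that column (the dataframe literal below just recreates the sample above):</p>

```python
import pandas as pd

df = pd.DataFrame({'tray': [0, 0, 1, 0, 0],
                   'bag':  [1, 1, 1, 0, 0],
                   'ball': [1, 0, 0, 1, 0]})

# row-wise: join the names of the columns holding 1; an empty join
# (no column set) falls back to 'No Presence'
df['Presence'] = df.apply(
    lambda row: ','.join(col for col, v in row.items() if v == 1) or 'No Presence',
    axis=1,
)
print(df['Presence'].tolist())
# → ['bag,ball', 'bag', 'tray,bag', 'ball', 'No Presence']
```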
|
<python><pandas>
|
2023-01-17 11:02:57
| 1
| 789
|
Scope
|
75,145,424
| 9,947,412
|
FastAPI/Starlette: How to handle exceptions inside background tasks?
|
<p>I developed some API endpoints using FastAPI. These endpoints are allowed to run <code>BackgroundTasks</code>. Unfortunately, I do not know how to handle unpredictable issues from these tasks.</p>
<p>An example of my API is shown below:</p>
<pre class="lang-py prettyprint-override"><code># main.py
from fastapi import BackgroundTasks, FastAPI, Request
import uvicorn
app = FastAPI()
def test_func(a, b):
raise ...
@app.post("/test", status_code=201)
async def test(request: Request, background_task: BackgroundTasks):
background_task.add_task(test_func, a, b)
return {
"message": "The test task was successfully sent.",
}
if __name__ == "__main__":
uvicorn.run(
app=app,
host="0.0.0.0",
port=8000
)
# python3 main.py to run
# fastapi == 0.78.0
# uvicorn == 0.16.0
</code></pre>
<p>Can you help me to handle any type of exception from such a background task?
Should I add any <code>exception_middleware</code> from Starlette, in order to achieve this?</p>
|
<python><exception><fastapi><background-task><starlette>
|
2023-01-17 11:02:34
| 1
| 907
|
PicxyB
|
75,145,410
| 4,451,315
|
Display DataFrame in Jupyter Notebook along with other attributes
|
<p>If I just construct a pandas DataFrame in a Jupyter notebook, then the output looks nice:</p>
<pre class="lang-py prettyprint-override"><code>frame = pd.DataFrame({'a': [1,2,3]})
frame
</code></pre>
<p><a href="https://i.sstatic.net/6l9Bz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6l9Bz.png" alt="enter image description here" /></a></p>
<p>However, if I make my own class whose <code>repr</code> is a <code>pandas.DataFrame</code>, then that formatting is lost:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self, frame, bar):
self.frame = frame
self.bar = bar
def __repr__(self):
return repr(self.frame)
</code></pre>
<p><a href="https://i.sstatic.net/ncr8n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ncr8n.png" alt="enter image description here" /></a></p>
<p>How can I define the <code>repr</code> of my class <code>Foo</code>, such that when I display it in a Jupyter Notebook, it'll look just like it would if it was a <code>pandas.DataFrame</code>?</p>
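<p>Assuming the goal is the rich HTML table rather than the plain-text repr, one sketch is to delegate Jupyter's HTML hook to the wrapped frame; <code>_repr_html_</code> is the method IPython looks for before falling back to <code>__repr__</code>, and pandas DataFrames expose one themselves:</p>

```python
import pandas as pd

class Foo:
    def __init__(self, frame, bar):
        self.frame = frame
        self.bar = bar

    def __repr__(self):
        return repr(self.frame)

    def _repr_html_(self):
        # Jupyter calls this (when present) to render the cell output,
        # so the wrapper displays exactly like the underlying frame
        return self.frame._repr_html_()

foo = Foo(pd.DataFrame({'a': [1, 2, 3]}), bar=None)
print('<table' in foo._repr_html_())  # → True
```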
|
<python><pandas><jupyter-notebook>
|
2023-01-17 11:01:26
| 1
| 11,062
|
ignoring_gravity
|
75,145,244
| 9,770,831
|
How to add extra data to pydantic schemas in FastAPI response?
|
<p>This is my Response model -</p>
<pre><code>class BillBase(BaseModel):
bill_no: int
amount: int
about: str
class ShowSalesBill(BillBase):
id: int
left_amount: int
class Config:
orm_mode = True
</code></pre>
<p>This <code>left_amount</code> int field is not in the Bills model; I want to calculate it based on some condition. My question is: can we return extra fields from the response model, the way a Django serializer lets us compute custom data in a <code>get_&lt;field_name&gt;</code> method? Is there something similar in FastAPI?</p>
|
<python><fastapi><pydantic>
|
2023-01-17 10:44:39
| 0
| 657
|
mdhv_kothari
|
75,145,058
| 12,752,172
|
Why tkinter filedialog not opening file dialog box when the code is inside the function in python?
|
<p>I'm creating a Python console menu app. I want to let the user select a CSV file from their file system, so I need to ask for the file location. I'm trying it with tkinter filedialog. It works fine when the code is outside of a function, but when I put the same lines inside a function, the file dialog box does not open.</p>
<p><strong>This is what I tried,</strong></p>
<pre><code>import csv
import sys
from datetime import datetime
import tkinter as tk
from tkinter import filedialog
def menu():
print(" Please select item from the menu ")
choice = int(input("""
1: Upload new file
2. Read file
0: Exit
Please enter your choice: """))
if choice == 1:
upload_file()
elif choice == 2:
print("Please try again")
#read_file()
elif choice == 0:
print("Exit.")
        sys.exit()
else:
print("Please try again")
menu()
def upload_file():
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
print(file_path)
menu()
</code></pre>
<p>It does not show any error. I also need to check whether the user selected a CSV file or not; if it is not a CSV file, the user should be asked to select the correct file.
Please help me to solve this problem.</p>
|
<python><tkinter>
|
2023-01-17 10:27:34
| 1
| 469
|
Sidath
|
75,145,023
| 4,056,181
|
Advancing PRNGs in NumPy and general distributions
|
<p>I would like to use the <a href="https://numpy.org/doc/stable/reference/random/generator.html" rel="nofollow noreferrer">random number generation of NumPy</a> to draw numbers from different distributions. For a given generator, seed and distribution, I would like to be able to draw the <code>i</code>'th number in the sequence, without having to draw all <code>i - 1</code> numbers before it. The three <a href="https://numpy.org/doc/stable/reference/random/bit_generators/index.html" rel="nofollow noreferrer">bit generators</a> <code>PCG64</code>, <code>PCG64DXSM</code> and <code>Philox</code> all have an <a href="https://numpy.org/doc/stable/reference/random/bit_generators/generated/numpy.random.PCG64DXSM.advance.html#numpy.random.PCG64DXSM.advance" rel="nofollow noreferrer"><code>advance()</code></a> method, which seems to promise the functionality I am after.</p>
<p>For example, with <code>PCG64DXSM</code>, I would do</p>
<pre class="lang-py prettyprint-override"><code>seed = 42
bit_generator = np.random.PCG64DXSM(seed)
generator = np.random.Generator(bit_generator)
# Draw n uniform numbers between 0 and 1
n = 20
nums = [generator.uniform(0, 1) for _ in range(n)]
# Only draw the last number
bit_generator = np.random.PCG64DXSM(seed)
generator = np.random.Generator(bit_generator)
bit_generator.advance(n - 1)
num = generator.uniform(0, 1)
assert num == nums[-1]
</code></pre>
<p>This works great for the uniform distribution, though not for other distributions in general. In particular, I am also interested in the normal and Rayleigh distributions.</p>
<p>To show how the <code>advance()</code> method fails for these distributions, I have written the test code below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
def plot(ax, rand, n=20):
BitGenerator = [np.random.PCG64DXSM, np.random.PCG64, np.random.Philox][0]
seed = 42
bit_generator = BitGenerator(seed)
generator = np.random.Generator(bit_generator)
x = [rand(generator) for _ in range(n)]
ax.plot(x, 'C0-')
for i in range(n):
bit_generator = BitGenerator(seed)
generator = np.random.Generator(bit_generator)
bit_generator.advance(i)
x = rand(generator)
ax.plot(i, x, 'C1.')
fig, axes = plt.subplots(3)
plot(axes[0], lambda generator: generator.uniform(0, 1))
plot(axes[1], lambda generator: generator.normal(0, 1))
plot(axes[2], lambda generator: generator.rayleigh(1))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/BYf0P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BYf0P.png" alt="test code" /></a></p>
<p>The three panels in the produced figure correspond to the uniform, normal and Rayleigh distributions. The blue line is the random sequence drawn sequentially from a single generator, while the orange dots are individually obtained from fresh generators using <code>advance()</code>. Using <code>advance()</code> fails for the normal and Rayleigh distributions after a while. How long this takes depends on the <code>seed</code> and which bit generator is used.</p>
<h4>Questions</h4>
<ul>
<li>Why does <code>advance()</code> work perfectly well for the uniform distribution, yet not for other distributions generally?
<ul>
<li>Why <strong>does</strong> it work for the first few random numbers in each sequence?</li>
</ul>
</li>
<li>Is this "intentionally" / known behaviour?</li>
<li><strong>How can I get around it?</strong></li>
</ul>
|
<python><python-3.x><numpy><random>
|
2023-01-17 10:24:35
| 1
| 13,201
|
jmd_dk
|
75,144,956
| 3,605,534
|
How to delete icons from comments in csv files using pandas
|
<p>I am trying to delete an icon which appears in many rows of my CSV file. When I create a dataframe object using pd.read_csv it shows a green squared check icon, but if I open the CSV using Excel I see ✅ instead. I tried to delete it using the split function, because the verification status is separated from the comment by |:</p>
<pre><code>df['reviews'] = df['reviews'].apply(lambda x: x.split('|')[1])
</code></pre>
<p>I noticed it didn't detect the "|" separator when the review contains the icon mentioned above.</p>
<p><a href="https://i.sstatic.net/gXDV2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gXDV2.png" alt="enter image description here" /></a></p>
<p>I am not sure if it is an encoding problem. I tried to add encoding='utf-8' in pandas read_csv but it didn't solve the problem.</p>
<p>Thanks in advance.</p>
<p>I would like to add, this is a pic when I open the csv file using Excel.</p>
<p><a href="https://i.sstatic.net/6cmH1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6cmH1.png" alt="enter image description here" /></a></p>
|
<python><pandas><csv>
|
2023-01-17 10:19:12
| 2
| 945
|
GSandro_Strongs
|
75,144,935
| 1,186,624
|
How to simulate a heat diffusion on a rectangular ring with FiPy?
|
<p>I am new to solving a PDE and experimenting with a heat diffusion on a copper body of a rectangular ring shape using FiPy.</p>
<p>And this is a plot of the simulation result at some point in time.
<a href="https://i.sstatic.net/kAlJS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kAlJS.png" alt="enter image description here" /></a></p>
<p>I am using the <code>Grid2D()</code> for a mesh and the <code>CellVariable.constrain()</code> to specify boundary conditions. The green dots are centers of exterior faces where <em>T</em> = 273.15 + 25 (<em>K</em>), and blue dots are centers of interior faces where <em>T</em> = 273.15 + 30 (<em>K</em>).</p>
<p>Obviously, I am doing something wrong, because the temperature goes down to 0<em>K</em>. How should I specify boundary conditions correctly?</p>
<p>This is the code.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import fipy
def get_mask_of_rect(mesh, x, y, w, h):
def left_id(i, j): return mesh.numberOfHorizontalFaces + i*mesh.numberOfVerticalColumns + j
def right_id(i, j): return mesh.numberOfHorizontalFaces + i*mesh.numberOfVerticalColumns + j + 1
def bottom_id(i, j): return i*mesh.nx + j
def top_id(i, j): return (i+1)*mesh.nx + j
j0, i0 = np.floor(np.array([x, y]) / [mesh.dx, mesh.dy]).astype(int)
n, m = np.round(np.array([w, h]) / [mesh.dx, mesh.dy]).astype(int)
mask = np.zeros_like(mesh.exteriorFaces, dtype=bool)
for i in range(i0, i0 + n):
mask[left_id(i, j0)] = mask[right_id(i, j0 + m-1)] = True
for j in range(j0, j0 + m):
mask[bottom_id(i0, j)] = mask[top_id(i0 + n-1, j)] = True
return mask
mesh = fipy.Grid2D(Lx = 1, Ly = 1, nx = 20, ny = 20) # Grid of size 1m x 1m
k_over_c_rho = 3.98E2 / (3.85E2 * 8.96E3) # The thermal conductivity, specific heat capacity, and density of Copper in MKS
dt = 0.1 * (mesh.dx**2 + mesh.dy**2) / (4*k_over_c_rho)
T0 = 273.15 # 0 degree Celsius in Kelvin
T = fipy.CellVariable(mesh, name='T', value=T0+25)
mask_e = mesh.exteriorFaces
T.constrain(T0+25., mask_e)
mask_i = get_mask_of_rect(mesh, 0.25, 0.25, 0.5, 0.5)
T.constrain(T0+30, mask_i)
eq = fipy.TransientTerm() == fipy.DiffusionTerm(coeff=k_over_c_rho)
viewer = fipy.MatplotlibViewer(vars=[T], datamin=0, datamax=400)
plt.ioff()
viewer._plot()
plt.plot(*mesh.faceCenters[:, mask_e], '.g')
plt.plot(*mesh.faceCenters[:, mask_i], '.b')
def update():
for _ in range(10):
eq.solve(var=T, dt=dt)
viewer._plot()
plt.draw()
timer = plt.gcf().canvas.new_timer(interval=50)
timer.add_callback(update)
timer.start()
plt.show()
</code></pre>
|
<python><simulation><fipy>
|
2023-01-17 10:17:08
| 2
| 5,019
|
relent95
|
75,144,910
| 11,586,653
|
Python multiple substring index in string
|
<p>Given the following list of sub-strings:</p>
<pre><code>sub = ['ABC', 'VC', 'KI']
</code></pre>
<p>is there a way to get the index of these sub-string in the following string if they exist?</p>
<pre><code>s = 'ABDDDABCTYYYYVCIIII'
</code></pre>
<p>so far I have tried:</p>
<pre><code>import re

for i in re.finditer('VC', s):
    print(i.start(), i.end())
</code></pre>
<p>However, re.finditer does not take multiple arguments.</p>
<p>thanks</p>
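<p>It does take a single pattern, though, so one sketch is to join the substrings into one alternation; <code>re.escape</code> guards against any regex metacharacters hiding in the substrings:</p>

```python
import re

sub = ['ABC', 'VC', 'KI']
s = 'ABDDDABCTYYYYVCIIII'

# build one pattern matching any of the substrings: ABC|VC|KI
pattern = '|'.join(map(re.escape, sub))

hits = [(m.group(), m.start(), m.end()) for m in re.finditer(pattern, s)]
print(hits)  # → [('ABC', 5, 8), ('VC', 13, 15)]
```

Substrings absent from <code>s</code> (here <code>'KI'</code>) simply produce no matches.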
|
<python><string><loops>
|
2023-01-17 10:15:38
| 6
| 471
|
Chip
|
75,144,827
| 10,829,044
|
Pandas filter list of list values in a dataframe column
|
<p>I have a dataframe like as below</p>
<pre><code>sample_df = pd.DataFrame({'single_proj_name': [['jsfk'],['fhjk'],['ERRW'],['SJBAK']],
'single_item_list': [['ABC_123'],['DEF123'],['FAS324'],['HSJD123']],
'single_id':[[1234],[5678],[91011],[121314]],
'multi_proj_name':[['AAA','VVVV','SASD'],['QEWWQ','SFA','JKKK','fhjk'],['ERRW','TTTT'],['SJBAK','YYYY']],
'multi_item_list':[[['XYZAV','ADS23','ABC_123'],['ABC_123','ADC_123']],['XYZAV','DEF123','ABC_123','SAJKF'],['QWER12','FAS324'],['JFAJKA','HSJD123']],
'multi_id':[[[2167,2147,29481],[5432,1234]],[2313,57567,2321,7898],[1123,8775],[5237,43512]]})
</code></pre>
<p>I would like to do the below</p>
<p>a) Pick the value from <code>single_item_list</code> for each row</p>
<p>b) search that value in <code>multi_item_list</code> column of the same row. Please note that it could be <code>list of lists</code> for some of the rows</p>
<p>c) If match found, keep only that matched values in <code>multi_item_list</code> and remove all other non-matching values from <code>multi_item_list</code></p>
<p>d) Based on the position of the match item, look for corresponding value in <code>multi_id</code> list and keep only that item. Remove all other position items from the list</p>
<p>So, I tried the below but it doesn't work for nested list of lists</p>
<pre><code>for a, b, c in zip(sample_df['single_item_list'],sample_df['multi_item_list'],sample_df['multi_id']):
for i, x in enumerate(b):
print(x)
print(a[0])
if a[0] in x:
print(x.index(a[0]))
pos = x.index(a[0])
print(c[pos-1])
</code></pre>
<p>I expect my output to be like below. In the real world, I will have more cases like the 1st input row (nested lists with multiple levels)</p>
<p><a href="https://i.sstatic.net/XO7Oj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XO7Oj.png" alt="enter image description here" /></a></p>
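<p>For the nested case, one sketch is a small recursive helper that yields the full index path of each match, so the same path can then be used to walk <code>multi_id</code> (the helper names here are made up for illustration; the data is the first row of the sample above):</p>

```python
def find_paths(needle, nested, path=()):
    # walk arbitrarily nested lists; yield the index path of every match
    for i, item in enumerate(nested):
        if isinstance(item, list):
            yield from find_paths(needle, item, path + (i,))
        elif item == needle:
            yield path + (i,)

def get_by_path(nested, path):
    # follow an index path into the parallel nested structure
    for i in path:
        nested = nested[i]
    return nested

multi_items = [['XYZAV', 'ADS23', 'ABC_123'], ['ABC_123', 'ADC_123']]
multi_ids = [[2167, 2147, 29481], [5432, 1234]]

paths = list(find_paths('ABC_123', multi_items))
print(paths)                                       # → [(0, 2), (1, 0)]
print([get_by_path(multi_ids, p) for p in paths])  # → [29481, 5432]
```

Because the paths mirror the structure of <code>multi_item_list</code>, the same code handles flat rows and arbitrarily deep nesting.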
|
<python><pandas><list><dataframe><transformation>
|
2023-01-17 10:08:09
| 2
| 7,793
|
The Great
|
75,144,814
| 3,062,781
|
Generate SSH ed25519 private key with embedded comment
|
<p>So I've been looking at modifying some existing code to make it compatible with Windows as well as Linux/MacOS. There is a generator which currently shells out to <code>ssh-keygen</code> via subprocess to produce an SSH private key, and there is also an encoder for a public key which also leverages <code>ssh-keygen</code> to extract the comment from the private key and appends it to public key as per the <code>~/.ssh/authorized_keys</code> convention.</p>
<p>I'd like to change both to use the <code>cryptography</code> library, but am also open to using something like <code>pycryptodome</code>.</p>
<p>The generator and encoder classes are derived from abstract base classes, and since there are obviously other generators and encoders structured this way, I want to avoid modifying the attributes and inputs if possible.</p>
<p>My question is simply:</p>
<ul>
<li>Is it possible to construct an SSH private key with a comment (like provided by <code>ssh-keygen -C <.....></code>) via <code>cryptography.hazmat.primitives</code> or <code>pycryptodome</code>? Or, alternatively, is there a means to modify the private key constructed via this mechanism to embed the comment without having to shell out to <code>ssh-keygen</code>?</li>
</ul>
<p>i.e.</p>
<pre><code>In [77]: from cryptography.hazmat.primitives import serialization
...: from cryptography.hazmat.primitives.asymmetric import ed25519
...:
...: private_key = ed25519.Ed25519PrivateKey.generate()
...: public_key = private_key.public_key()
...: private_bytes = private_key.private_bytes(
...: encoding=serialization.Encoding.PEM,
...: format=serialization.PrivateFormat.OpenSSH,
...: encryption_algorithm=serialization.NoEncryption()
...: )
...: public_bytes = public_key.public_bytes(
...: encoding=serialization.Encoding.OpenSSH,
...: format=serialization.PublicFormat.OpenSSH
...: )
In [78]: private_bytes
Out[78]: b'-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZWQyNTUx\nOQAAACDNXoGj2N3kqzpVkB8swYpri6aO+uat2dpLhlwIoBw4WQAAAIivLJWaryyVmgAAAAtzc2gt\nZWQyNTUxOQAAACDNXoGj2N3kqzpVkB8swYpri6aO+uat2dpLhlwIoBw4WQAAAEAqLwgM46xyVLjk\n/oVJteyl3mxKzZXDiUwlHjZ9OMM4YM1egaPY3eSrOlWQHyzBimuLpo765q3Z2kuGXAigHDhZAAAA\nAAECAwQF\n-----END OPENSSH PRIVATE KEY-----\n'
In [79]: public_bytes
Out[79]: b'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM1egaPY3eSrOlWQHyzBimuLpo765q3Z2kuGXAigHDhZ'
</code></pre>
<p>If I check the contents of the key it's generated without a comment:</p>
<pre><code>% ssh-keygen -lf /tmp/id_py_crypto
256 SHA256:jC+9/pxq3fZgeh93c5cFTSZXtrdlcRxQsD46KANsvSY /tmp/id_py_crypto.pub (ED25519)
% ssh-keygen -lf /tmp/id_py_crypto.pub
256 SHA256:jC+9/pxq3fZgeh93c5cFTSZXtrdlcRxQsD46KANsvSY no comment (ED25519)
</code></pre>
|
<python><ssh><cryptography><ssh-keys>
|
2023-01-17 10:07:24
| 0
| 765
|
264nm
|
75,144,722
| 2,081,152
|
AWS Glue Pyspark Python UDFRunner timing info total/boot/init/finish
|
<p>I am running a Pyspark AWS Glue Job that includes a Python UDF. In the logs I see this line repeated.</p>
<pre><code>INFO [Executor task launch worker for task 15765] python.PythonUDFRunner (Logging.scala:logInfo(54)):
Times: total = 268103, boot = 21, init = 2187, finish = 265895
</code></pre>
<p>Does anyone know what this logInfo (total/boot/init/finish) means??</p>
<p>I have looked at the Spark code and am none the wiser, and there is no mention of this info anywhere else I have looked</p>
|
<python><apache-spark><pyspark><user-defined-functions><aws-glue>
|
2023-01-17 10:00:50
| 1
| 703
|
mamonu
|
75,144,634
| 16,852,041
|
Python | Redis Connects but Stuck on Set or Get
|
<p>Goal: Successfully <code>set()</code> and <code>get()</code> key-value pairs to local <strong>Redis</strong> via <strong>Python</strong>.</p>
<p>I can connect to Redis. A possible issue is a firewall has closed port <code>6379</code>. However, it is open.</p>
<p>Connection works with or without parameters: <code>redis_db</code>, <code>redis_max_connections</code>.</p>
<p>I suspect the issue is with Python, as I'm able to <code>SET</code> and <code>GET</code> via the Terminal:</p>
<pre><code>127.0.0.1:6379> SET my-key TESTVAL
OK
127.0.0.1:6379> GET my-key
"TESTVAL"
127.0.0.1:6379> DEL my-key
(integer) 1
127.0.0.1:6379> GET my-key
(nil)
</code></pre>
<hr />
<p><strong>Code</strong></p>
<pre><code>import redis
redis_host = 'localhost' # 127.0.0.1
redis_port = 6379
redis_password = '' # 'your-redis-password'
redis_db = 2
redis_max_connections = 100
client = redis.Redis(host=redis_host, port=redis_port, password=redis_password, ssl=True, db=redis_db,
max_connections=redis_max_connections)
print('Connected!')
key = 'KEY'
value = 'VALUE'
client.set(key, value)
print('Data stored on Redis with key: ', key)
data = client.get(key)
print('Data retrieved from Redis with key: ', key)
print(data)
</code></pre>
<p><strong>Runtime</strong></p>
<pre><code>(venv) me@laptop:~/GitHub/project$ python3 foo/bar/minimal_working_example.py
Connected!
|
</code></pre>
<hr />
<p>Redis server is live:</p>
<pre><code>(base) me@laptop:~$ redis-server
4965:C 17 Jan 2023 09:36:56.119 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
4965:C 17 Jan 2023 09:36:56.119 # Redis version=7.0.7, bits=64, commit=00000000, modified=0, pid=4965, just started
4965:C 17 Jan 2023 09:36:56.119 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
4965:M 17 Jan 2023 09:36:56.119 * Increased maximum number of open files to 10032 (it was originally set to 1024).
4965:M 17 Jan 2023 09:36:56.119 * monotonic clock: POSIX clock_gettime
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 7.0.7 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 4965
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | https://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
4965:M 17 Jan 2023 09:36:56.120 # Server initialized
4965:M 17 Jan 2023 09:36:56.120 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
4965:M 17 Jan 2023 09:36:56.120 * Ready to accept connections
</code></pre>
<p>Tested in a new Terminal:</p>
<pre><code>(base) me@laptop:~$ redis-cli
127.0.0.1:6379> ping
PONG
</code></pre>
<p>Port <code>6379</code> is open:</p>
<pre><code>(base) me@laptop:~$ nc -z localhost 6379
(base) me@laptop:~$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
</code></pre>
|
<python><redis><get><set><port>
|
2023-01-17 09:53:39
| 1
| 2,045
|
DanielBell99
|
75,144,592
| 13,238,456
|
Extracting tables from PDF using tabula-py fails to properly detect rows
|
<h4>Problem</h4>
<p>I want to extract a 70-page vocabulary table from a PDF and turn it into a CSV to use in [any vocabulary learning app].
Tabula-py and its read_pdf function are a popular solution to extract the tables, and they detected the columns ideally without any fine-tuning. However, they had difficulties with the multi-line rows, splitting each line into a separate row.</p>
<p>E.g., in the PDF you will have columns 2 and 3. The table markup on Stack Overflow doesn't seem to allow multi-line content either, so I added row numbers. Just merge the row-1 entries in your head.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Row number</th>
<th>German</th>
<th>Latin</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>First word</td>
<td>Translation for first word</td>
</tr>
<tr>
<td>1</td>
<td>with many lines of content</td>
<td>[phonetic vocabulary thingy]</td>
</tr>
<tr>
<td>1</td>
<td>and more lines</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>Second word</td>
<td>Translation for second word</td>
</tr>
</tbody>
</table>
</div>
<p>Instead of fine-tuning the read_pdf parameters, are there ways around that?</p>
|
<python><pandas><pdf><tabula-py>
|
2023-01-17 09:50:37
| 2
| 493
|
Dustin
|
75,144,422
| 6,670,900
|
Handle multiple request from Flask with Python Selenium
|
<p>I have used Flask to receive request and then run the desired actions with Selenium.</p>
<p>My problem is that <code>driver = webdriver.Chrome(executable_path="chromedriver")</code> adds a lot of time to every request. Is there a way to keep the Chrome window open?</p>
<p>The results are as below:</p>
<p><a href="https://i.sstatic.net/8bHVF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bHVF.png" alt="enter image description here" /></a></p>
<p>If yes, how can I handle multiple requests from Flask if I have one Chrome window open and there are 5 requests coming in together?</p>
<p>Below is the loading time from running the <code>fbCreate()</code> function with Flask from the browser.</p>
<p>Can anyone advise on what I can do?</p>
<p><a href="https://i.sstatic.net/RoJsm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RoJsm.png" alt="enter image description here" /></a></p>
<p>I am trying to do actions like:</p>
<ol>
<li>Create account at Facebook</li>
<li>Get account's details from Facebook</li>
</ol>
<p>Above actions are just a example for you to understand what I am trying to do.</p>
<pre><code>from flask import Flask
from selenium import webdriver
import time
app = Flask(__name__)
@app.route('/fb/create')
def fbCreate():
s_driver_time = time.time()
driver = webdriver.Chrome(executable_path="chromedriver")
start_time = time.time()
driver.get('https://facebook.com') #example
print("--- %.2f seconds ---" % (time.time() - start_time))
print("--- %.2f driver seconds ---" % (time.time() - s_driver_time))
driver.quit()
return "done"
@app.route('/fb/account')
def fbAccount():
driver = webdriver.Chrome(executable_path="chromedriver")
driver.get('https://facebook.com/accounts') #example
driver.quit()
return "scrap data"
if __name__ == "__main__":
app.run(host="0.0.0.0", port="8080", debug=True)
</code></pre>
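<p>For the reuse question, a common sketch is to create the driver once at startup and serialize access to it with a lock, since one browser window can only service one request at a time; concurrent Flask requests then queue on the lock. Selenium itself is left out below so the locking idea stands alone (the real app would create <code>driver</code> once where the comment indicates):</p>

```python
import threading

# created once at startup in the real app, e.g.:
# driver = webdriver.Chrome(executable_path="chromedriver")
driver_lock = threading.Lock()

def with_driver(action):
    # only one request at a time may touch the single shared browser;
    # others block here until the lock is released
    with driver_lock:
        return action()

print(with_driver(lambda: "done"))  # → done
```

For true parallelism you would need a pool of drivers (one lock-protected browser per worker) rather than a single window.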
|
<python><selenium><flask>
|
2023-01-17 09:37:02
| 1
| 443
|
Dave Cruise
|
75,144,399
| 4,974,431
|
Split string in Python using two conditions (one delimiter and one "contain")
|
<p>Considering the following string:</p>
<pre><code>my_text = """
My favorites books of all time are:
Harry potter by JK Rowling,
Dune (first book) by Frank Herbert;
and Le Petit Prince by Antoine de Saint Exupery (I read it many times).
"""
</code></pre>
<p>I want to extract the name books and authors, so expected output is:</p>
<pre><code>output = [
['Harry Potter', 'JK Rowling'],
['Dune (first book)', 'Frank Herbert'],
['and Le Petit Prince', 'Antoine de Saint Exupery']
]
</code></pre>
<p>The basic 2-step approach would be:</p>
<ul>
<li>Use re.split to split on a list of non ascii characters ((),;\n etc) to extract sentences or at least pieces of sentences.</li>
<li>Keep only strings containing 'by' and use split again on 'by' to separate title and author.</li>
</ul>
<p>While this method would cover 90% of cases, <strong>the main issue is the consideration of brackets ()</strong>: I want to keep them in book titles (like Dune), but use them as delimiters after authors (like Saint Exupery).</p>
<p>I suspect a powerful regex would cover both, but I am not sure how exactly.</p>
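<p>A sketch of one such regex: the title is everything before the first <code>' by '</code> on a line (so parentheses there survive), while a parenthesised remark after the author is treated as disposable before the closing <code>,;.</code> punctuation. Note the titles keep the source's casing, so 'Harry potter' stays lowercase here:</p>

```python
import re

my_text = """
My favorites books of all time are:
Harry potter by JK Rowling,
Dune (first book) by Frank Herbert;
and Le Petit Prince by Antoine de Saint Exupery (I read it many times).
"""

# group 1: title (lazy, up to the first ' by ');
# group 2: author (lazy), optionally followed by a (...) remark,
# then one of , ; . at the end of the line
pattern = r'^(.*?) by (.*?)(?:\s*\([^)]*\))?[,;.]$'
output = [list(m) for m in re.findall(pattern, my_text, flags=re.M)]
print(output)
# → [['Harry potter', 'JK Rowling'],
#    ['Dune (first book)', 'Frank Herbert'],
#    ['and Le Petit Prince', 'Antoine de Saint Exupery']]
```

The lazy author group means the optional trailing <code>(...)</code> is stripped whenever one is present, while <code>(first book)</code> stays in the title because it sits before <code>' by '</code>.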
|
<python><string><split>
|
2023-01-17 09:35:28
| 6
| 1,624
|
Vincent
|
75,144,389
| 859,141
|
Reverse Inlines in Django Admin for Many to Many
|
<p>Apologies if this has been asked before and my searches have not uncovered the solution. I'm looking to include admin inlines for two models. Django version is 4.1.4</p>
<pre><code>class Book(models.Model):
title = models.CharField(max_length=100)
author_series = models.CharField(max_length=100, blank=True, null=True)
series = models.ManyToManyField(Series)
...
def __str__(self):
return self.title + " (" + str(self.pk) + ")"
class Series(models.Model):
name = models.CharField(max_length=100, blank=True, null=True)
category = models.CharField(max_length=50)
publisher = models.CharField(max_length=100,blank=True, null=True)
...
def __str__(self):
return self.publisher + " " + self.name
</code></pre>
<p>I'm able to complete the forward relationship successfully using Model.field.through format:</p>
<pre><code>class SeriesInline(admin.TabularInline):
model = Book.series.through
class BookAdmin(admin.ModelAdmin):
inlines = [SeriesInline,]
</code></pre>
<p>but would like the reverse as well.</p>
<pre><code>class InlineSeriesBooks(admin.TabularInline):
model = Book.series_set.through
... Remainder is commented out ...
class SeriesAdmin(admin.ModelAdmin):
list_display = ['name', 'category', 'publisher', 'link', 'created', 'modified']
fields = ['name', 'category', 'publisher', 'link', 'created', 'modified']
readonly_fields = ["created", "modified"]
inlines = [InlineSeriesBooks,]
</code></pre>
<p>All the links I've found suggest the former <code>through</code> solution, such as <a href="https://stackoverflow.com/questions/48372252/django-admin-accessing-reverse-many-to-many">Django Admin, accessing reverse many to many</a>, but even with the spelling corrected it gets stuck in a loop. If I remove the <code>inlines</code> statement, the page loads.</p>
<p>I have confirmed that I can follow the relationship on both directions in the front end with:</p>
<pre><code>{% for series in book.series.all %}..
</code></pre>
<p>and</p>
<pre><code>{% for book in series.book_set.all %}
</code></pre>
<p>Thanks in advance.</p>
|
<python><django><django-models><django-admin>
|
2023-01-17 09:34:38
| 1
| 1,184
|
Byte Insight
|
75,144,145
| 1,530,405
|
Python Client/Server hangs
|
<p>I have an Android app, written in Javascript (source not available) that allows the creation of Buttons and sending Button presses.</p>
<p>The Server is running on a Win10 desktop, and the code is as follows:</p>
<pre><code># Try.py
import socket
CRLF = '\r\n'
count = 0
port = 6666
print('Server Waiting on ' + str(port))
host = socket.gethostname()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((host, port)) # bind host address and port together
sock.listen(5)
conn, address = sock.accept() # accept new connection
print("Connection from: " + str(address))
data = conn.recv(1024).decode()
# the Client sends a message containing the Client's Version NO
# and the screen size
print(data)
while True:
count = count + 1
conn.send(('AAA=Button~Text=' + str(count) + CRLF).encode())
# this command sends a message to the client to create a Button
# named AAA (if it does not exists) and changes the the label = count
data = conn.recv(1024).decode()
# reply from the Client ("40=40" if message was OK)
if data == '':
break
print(data)
conn.send(('40' + CRLF).encode())
print(count)
# this command passes control to the client
# until the Button is pressed
data = conn.recv(1024).decode()
# Client replies after the button press with "40=40~AAA=1"
# "40=40" means the message received was correct
# AAA is the button ID, and 1 means a short press
print(data)
conn.close() # close the connection
</code></pre>
<p>The code works for a while (for a few minutes) after which it just hangs.</p>
<p>Here is the dump of the packets (using SmartSniff):</p>
<pre><code>[1/17/2023 8:00:27 PM:508]
EasyGUI_V1.0 ScreenSize: (1440x2412)
[1/17/2023 8:00:27 PM:508]
AAA=Button~Text=1
[1/17/2023 8:00:27 PM:514]
40=40
[1/17/2023 8:00:27 PM:514]
40
[1/17/2023 8:00:30 PM:049]
40=40~AAA=1
[1/17/2023 8:00:30 PM:049]
AAA=Button~Text=2
[1/17/2023 8:00:30 PM:068]
40=40
[1/17/2023 8:00:30 PM:070]
40
[1/17/2023 8:00:33 PM:590]
40=40~AAA=1
[1/17/2023 8:00:33 PM:591]
AAA=Button~Text=3
[1/17/2023 8:00:33 PM:597]
40=40
[1/17/2023 8:00:33 PM:597]
40
</code></pre>
<p>Is there something wrong with my server code?
Is it a timing problem?</p>
|
<python><sockets>
|
2023-01-17 09:12:15
| 0
| 455
|
user1530405
|
75,144,127
| 14,065,992
|
ROC Curve area is nan CNN model
|
<p>I have implemented a CNN-based classification of image datasets, but the problem is that the ROC curve's area comes out as nan.
Here is the code:</p>
<pre><code>#Package Initilize
import numpy as np
from sklearn import metrics
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
train_datagen = image.ImageDataGenerator(
rescale=1/255,
shear_range = 0.3,
zoom_range = 0.3,
horizontal_flip = True,
)
validation_datagen = image.ImageDataGenerator(
rescale = 1/255
)
target_size = (100,100,3)
train = train_datagen.flow_from_directory(
'Dataset/Train',
target_size = target_size[:-1],
batch_size = 32,
class_mode = 'categorical'
)
validation = validation_datagen.flow_from_directory(
'Dataset/Validation',
target_size = target_size[:-1],
batch_size = 32,
class_mode = 'categorical'
)
test = validation_datagen.flow_from_directory(
'Dataset/Test',
target_size = target_size[:-1],
batch_size = 32,
shuffle = False,
class_mode = 'categorical'
)
input_layer = keras.layers.Input(shape=target_size)
#Model Define
conv2d_1 = keras.layers.Conv2D(filters=64, kernel_size=(3,3), strides=1, padding='same',
activation='relu', kernel_initializer='he_normal')(input_layer)
batchnorm_1 = keras.layers.BatchNormalization()(conv2d_1)
maxpool1=keras.layers.MaxPool2D(pool_size=(2,2))(batchnorm_1)
conv2d_2 = keras.layers.Conv2D(filters=32, kernel_size=(3,3), strides=1, padding='same',
activation='relu',kernel_initializer='he_normal')(maxpool1)
batchnorm_2 = keras.layers.BatchNormalization()(conv2d_2)
maxpool2=keras.layers.MaxPool2D(pool_size=(2,2))(batchnorm_2)
flatten = keras.layers.Flatten()(maxpool2)
dense_1 = keras.layers.Dense(256, activation='relu')(flatten)
dense_2 = keras.layers.Dense(n_classes, activation='softmax')(dense_1)
dense_3 = keras.layers.Dense(n_classes, activation='softmax')(dense_2)
model = keras.models.Model(input_layer, dense_3)
#Compile Define
model.compile(optimizer=keras.optimizers.Adam(0.001),
loss='categorical_crossentropy',
metrics=['acc'])
model.summary()
#Fit the model
history = model.fit_generator(generator=train, validation_data=validation,
epochs=2)
#ROC Curve Define
x, y = validation.next()
prediction = model.predict(x)
predict_label1 = np.argmax(prediction, axis=-1)
true_label1 = np.argmax(y, axis=-1)
y = np.array(true_label1)
scores = np.array(predict_label1)
fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=9)
roc_auc = metrics.auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (ROC)')
plt.legend(loc="lower right")
plt.show()
</code></pre>
<p>The problematic ROC curve is shown in the attached image; please check it.
<a href="https://i.sstatic.net/cJ0kG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cJ0kG.png" alt="ROC_Curve area is nan" /></a></p>
|
<python><scikit-learn><deep-learning><conv-neural-network><roc>
|
2023-01-17 09:10:18
| 1
| 1,855
|
Imdadul Haque
|
75,144,064
| 10,829,044
|
Pandas pick an item from list column and search in other list column
|
<p>I have a dataframe like the one shown below:</p>
<pre><code>sample_df = pd.DataFrame({'single_proj_name': [['jsfk'],['fhjk'],['ERRW'],['SJBAK']],
'single_item_list': [['ABC_123'],['DEF123'],['FAS324'],['HSJD123']],
'single_id':[[1234],[5678],[91011],[121314]],
'multi_proj_name':[['AAA','VVVV','SASD'],['QEWWQ','SFA','JKKK','fhjk'],['ERRW','TTTT'],['SJBAK','YYYY']],
'multi_item_list':[['XYZAV','ADS23','ABC_123'],['XYZAV','DEF123','ABC_123','SAJKF'],['QWER12','FAS324'],['JFAJKA','HSJD123']],
'multi_id':[[2167,2147,29481],[2313,57567,2321,7898],[1123,8775],[5237,43512]]})
</code></pre>
<p>I would like to do the below</p>
<p>a) Pick the value from <code>single_item_list</code> for each row</p>
<p>b) search that value in <code>multi_item_list</code> column of the same row</p>
<p>c) If match found, keep only that value in <code>multi_item_list</code> and remove all other non-matching values from <code>multi_item_list</code></p>
<p>d) Based on the position of the match item, look for corresponding value in <code>multi_id</code> list and keep only that item. Remove all other position items from the list</p>
<p>So, I tried the below but it doesn't work</p>
<pre><code>def func(df):
return list(set(sample_df['single_item_list']) - set(sample_df['multi_item_list']))
sample_df['col3'] = sample_df.apply(func, axis = 1)
</code></pre>
<p>I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/o36u2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o36u2.png" alt="enter image description here" /></a></p>
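<p>To make the intended row-wise logic explicit, here is a straightforward (if not fully vectorized) version of what I'm trying to achieve, shown on a cut-down copy of the data:</p>

```python
import pandas as pd

sample_df = pd.DataFrame({
    'single_item_list': [['ABC_123'], ['DEF123']],
    'multi_item_list': [['XYZAV', 'ADS23', 'ABC_123'],
                        ['XYZAV', 'DEF123', 'ABC_123', 'SAJKF']],
    'multi_id': [[2167, 2147, 29481], [2313, 57567, 2321, 7898]]})

def keep_match(row):
    # Take the single item, locate it in the multi list, and keep only
    # that item plus the id at the same position.
    target = row['single_item_list'][0]
    if target in row['multi_item_list']:
        pos = row['multi_item_list'].index(target)
        row['multi_item_list'] = [target]
        row['multi_id'] = [row['multi_id'][pos]]
    return row

out = sample_df.apply(keep_match, axis=1)
print(out['multi_id'].tolist())  # [[29481], [57567]]
```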
|
<python><pandas><list><dataframe><transformation>
|
2023-01-17 09:04:09
| 1
| 7,793
|
The Great
|
75,144,059
| 12,215,366
|
Python Playwright start maximized window
|
<p>I have a problem starting Playwright in Python with a maximized window. I found some articles for other languages, but their advice doesn't work in Python, and nothing is written about maximizing the window in Python in the official documentation.</p>
<p>I tried <code>browser = p.chromium.launch(headless=False, args=["--start-maximized"])</code></p>
<p>And it starts maximized but then automatically restores back to the default small window size.</p>
<p>Any ideas?
Thanks</p>
|
<python><playwright><playwright-python>
|
2023-01-17 09:03:34
| 4
| 375
|
bbfl
|
75,143,895
| 14,632,651
|
Cyrillic Encoding in Urllib Python with lower cases
|
<p>My goal is to encode this dict, which contains Cyrillic text:</p>
<pre><code>target_dict = {"Animal": "Cat", "city": "Москва"}
</code></pre>
<p>To this (Cyrillic letters encoded with <code>lowercase</code> hex escapes):</p>
<pre><code>Animal=Cat&city=%d0%9c%d0%be%d1%81%d0%ba%d0%b2%d0%b0
</code></pre>
<p>By default Python encodes with <code>UPPER CASE</code> hex; this is my code:</p>
<pre><code>import urllib.parse
target_dict = {"Animal": "Cat", "city": "Москва"}
encoded = urllib.parse.urlencode(target_dict)
print(encoded)
</code></pre>
<p>It returns <code>city</code> with <code>UPPER CASE</code> escapes; I need the city in lowercase:</p>
<pre><code>Animal=Cat&city=%D0%9C%D0%BE%D1%81%D0%BA%D0%B2%D0%B0
</code></pre>
<p>I need to get <code>city</code> (Cyrillic text) encoded with <code>lower case</code> escapes. The two results decode to the same value, but the service I am trying to connect to requires exactly <code>lowercase</code>.</p>
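<p>For illustration, the exact transformation I'm after can be expressed by post-processing the encoded string so that only the percent-escapes are lowercased (this is just to show the expected mapping, not necessarily the approach I have to use):</p>

```python
import re
import urllib.parse

target_dict = {"Animal": "Cat", "city": "Москва"}
encoded = urllib.parse.urlencode(target_dict)

# Lowercase only the %XX escape sequences, leaving literal characters alone
lowered = re.sub(r"%[0-9A-F]{2}", lambda m: m.group(0).lower(), encoded)
print(lowered)  # Animal=Cat&city=%d0%9c%d0%be%d1%81%d0%ba%d0%b2%d0%b0
```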
|
<python><url><urllib><urlencode>
|
2023-01-17 08:51:42
| 1
| 1,303
|
oruchkin
|
75,143,826
| 10,311,377
|
What is the best way to wrap Thread/Process in magic method __await__ to create awaitable class?
|
<p>I would like to create "awaitable" class instances. It will be useful when I have to use some db connectors which do not have <code>asyncio</code> wrappers. Also it is just interesting for me to better understand <code>asyncio</code> and <code>generators</code>.</p>
<p>I came to the following solutions:</p>
<p><strong>Solution #1</strong></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import threading
import time
async def ticker():
"""Just some kind of async ticker function."""
while True:
await asyncio.sleep(1)
print("tick tack sleep...")
class ThreadAwaitableWithToThread:
"""Illustrates how to wrap Thread in awaitable object."""
def do_work(self):
"""Some target function for another Thread."""
cur_th_name = threading.current_thread().getName()
for i in range(5):
print(f"Do work-{i} in {cur_th_name}")
time.sleep(2)
return "well done result!"
def __await__(self):
"""Magic method to create awaitable object."""
# to_thread is wrapper around run_in_executor
result = yield from asyncio.to_thread(self.do_work).__await__()
# The same without asyncio.to_thread:
# result = yield from loop.run_in_executor(None, self.do_work).__await__()
return result
async def amain():
"""Main asyncio entrypoint of the app."""
t = asyncio.create_task(ticker()) # "daemon" task
res = await ThreadAwaitableWithToThread()
t.cancel() # cancel "daemon" task
print(f"Result: {res}")
if __name__ == '__main__':
asyncio.run(amain())
</code></pre>
<p>The solution is based on <code>loop.run_in_executor</code>; it is probably the best solution, but I would like something based on plain generators rather than <code>asyncio</code> library functionality.</p>
<h1>Solution #2</h1>
<pre class="lang-py prettyprint-override"><code>import asyncio
import threading
import time
async def ticker():
"""Just some kind of async ticker function."""
while True:
await asyncio.sleep(1)
print("tick tack sleep...")
class ThreadAwaitable:
"""Illustrates how to wrap Thread in awaitable object."""
def __init__(self):
self.done = threading.Event() # sync between this and another thread
def do_work(self):
"""Some target function for another Thread."""
cur_th_name = threading.current_thread().getName()
for i in range(5):
print(f"Do work-{i} in {cur_th_name}")
time.sleep(2)
self.done.set()
def __await__(self):
"""Magic method to create awaitable object."""
th = threading.Thread(target=self.do_work)
th.start() # start another tread
while True:
if self.done.is_set(): # sync with another thread
print(f"Work in {self.__class__.__name__} is done!")
return "well done result!"
# make loop switch between tasks while waiting another Thread to finish work.
yield from asyncio.sleep(0.1).__await__()
async def amain():
"""Main asyncio entrypoint of the app."""
t = asyncio.create_task(ticker()) # "daemon" task
res = await ThreadAwaitable()
t.cancel() # cancel "daemon" task
print(f"Result: {res}")
if __name__ == '__main__':
asyncio.run(amain())
</code></pre>
<p>The solution is based on some kind of long polling with the help of <code>asyncio.sleep</code>, so the solution requires additional resources.</p>
<p>################################################################################</p>
<p>I would like to find a more elegant solution. I know about the <code>select</code> and <code>selectors</code> libraries, which allow "listening to" sockets; if we attach Threads/Processes to sockets, we can switch between coroutines/generators through them. But the docs of those libraries are not very detailed, and it would amount to a self-made <code>asyncio</code> (David Beazley, PyCon 2015). Are there any better solutions?</p>
|
<python><python-3.x><multithreading><multiprocessing><python-asyncio>
|
2023-01-17 08:45:15
| 0
| 3,906
|
Artiom Kozyrev
|
75,143,677
| 12,275,675
|
Reindexing Pandas based on daterange
|
<p>I am trying to reindex the dates in pandas. This is because there are dates which are missing, such as weekends or national holidays.</p>
<p>To do this I am using the following code:</p>
<pre><code>import pandas as pd
import yfinance as yf
import datetime
start = datetime.date(2015,1,1)
end = datetime.date.today()
df = yf.download('F', start, end, interval ='1d', progress = False)
df.index = df.index.strftime('%Y-%m-%d')
full_dates = pd.date_range(start, end)
df.reindex(full_dates)
</code></pre>
<p>This code is producing this dataframe:</p>
<pre><code> Open High Low Close Adj Close Volume
2015-01-01 NaN NaN NaN NaN NaN NaN
2015-01-02 NaN NaN NaN NaN NaN NaN
2015-01-03 NaN NaN NaN NaN NaN NaN
2015-01-04 NaN NaN NaN NaN NaN NaN
2015-01-05 NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
2023-01-13 NaN NaN NaN NaN NaN NaN
2023-01-14 NaN NaN NaN NaN NaN NaN
2023-01-15 NaN NaN NaN NaN NaN NaN
2023-01-16 NaN NaN NaN NaN NaN NaN
2023-01-17 NaN NaN NaN NaN NaN NaN
</code></pre>
<p>Could you please advise why it is not reindexing the data and is showing NaN values instead?</p>
<p>===Edit ===</p>
<p>Could it be a python version issue? I ran the same code in python 3.7 and 3.10</p>
<p>In python 3.7</p>
<p><a href="https://i.sstatic.net/ivJyn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ivJyn.png" alt="enter image description here" /></a></p>
<p>In python 3.10</p>
<p><a href="https://i.sstatic.net/vzkaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vzkaa.png" alt="enter image description here" /></a></p>
<p>In python 3.10 - It is datetime as you can see from the image.
<a href="https://i.sstatic.net/zDlXH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zDlXH.png" alt="enter image description here" /></a></p>
<p>Getting datetime after <code>yf.download('F', start, end, interval ='1d', progress = False)</code> without <code>strftime</code></p>
<p><a href="https://i.sstatic.net/RigeC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RigeC.png" alt="enter image description here" /></a></p>
|
<python><pandas><datetime><reindex>
|
2023-01-17 08:31:11
| 2
| 1,220
|
Slartibartfast
|
75,143,537
| 997,832
|
Tensorflow text classification with subject for each text
|
<p>I want to classify texts with an additional input: each text's subject. I acquire these subjects from Wikidata 'instance of' properties. I designed a neural net model as below. The network takes texts and subjects as input. Texts are encoded into vectors, and then these vectors get pooled. The other input, 'subjects', is taken as integers. These are then one-hot encoded with CategoryEncoding, and the result is concatenated with the result of the text-processing flow. Is this model okay for such a classification task? I'm not sure whether one-hot encoding of the subjects makes sense.</p>
<p>Code:</p>
<pre><code>MAX_TOKENS_NUM = 5000 # Maximum vocab size.
MAX_SEQUENCE_LEN = 40 # Sequence length to pad the outputs to.
EMBEDDING_DIMS = 100
text_input = tf.keras.Input(shape=(1,), dtype=tf.string)
subject_input = tf.keras.Input(shape=(1,), dtype=tf.int32)
text_layer = vectorize_layer(text_input)
text_layer = tf.keras.layers.Embedding(MAX_TOKENS_NUM + 1, EMBEDDING_DIMS)(text_layer)
text_layer = tf.keras.layers.GlobalAveragePooling1D()(text_layer)
subject_layer = tf.keras.layers.CategoryEncoding(
num_tokens=len(subjects), output_mode='one_hot', sparse=False
)(subject_input)
concatenated = tf.keras.layers.Concatenate(axis=1)([text_layer, subject_layer])
output = tf.keras.layers.Dense(len(labels))(concatenated)
model = tf.keras.models.Model(inputs=[text_input, subject_input], outputs=output)
model.summary()
model.compile(loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.SparseCategoricalAccuracy())
</code></pre>
<p>Model Plot:</p>
<p><a href="https://i.sstatic.net/oUPA8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oUPA8.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><deep-learning><neural-network><text-classification>
|
2023-01-17 08:17:55
| 0
| 1,395
|
cuneyttyler
|
75,143,522
| 1,581,090
|
How to read data from a serial port in Windows using python?
|
<p>In Windows 10 I am trying to read the output of an attached serial device.</p>
<p>Using <code>hterm</code> I am able to see the data on serial port <code>COM5</code>. So the serial port works fine.</p>
<p>Now using WSL2 (Ubuntu 20.04.3) I am running the following python script</p>
<pre><code>import serial
ser = serial.Serial("COM5", baudrate=115200)
</code></pre>
<p>which fails with the error</p>
<pre><code>Traceback (most recent call last):
File "test1.py", line 6, in <module>
ser = serial.Serial("COM5", baudrate=115200)
File "/usr/lib/python3/dist-packages/serial/serialutil.py", line 240, in __init__
self.open()
File "/usr/lib/python3/dist-packages/serial/serialposix.py", line 268, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 2] could not open port COM5: [Errno 2] No such file or directory: 'COM5'
</code></pre>
<p>I also tried to use the suggestions posted in <a href="https://devblogs.microsoft.com/commandline/connecting-usb-devices-to-wsl/" rel="nofollow noreferrer">this article</a> at attach the USB port on which the serial device is connected to, to WSL.</p>
<p>The command <code>usbipd wsl list</code> in a windows powershell shows then</p>
<pre><code>8-3 0403:6001 USB Serial Converter Attached - Ubuntu-20.04
</code></pre>
<p>But when then running the same python code in WSL2 gives the same error.</p>
<p>So how can I fix this problem so that I am able to read all data from the serial port with Python?</p>
|
<python><windows><serial-port><windows-subsystem-for-linux>
|
2023-01-17 08:15:55
| 4
| 45,023
|
Alex
|
75,143,435
| 14,194,418
|
Web Driver Wait is not working when page load strategy is set to none
|
<p><strong>Selenium Version: 4.7.2</strong></p>
<p>I only want to wait for a specific page to load, so I want to disable the driver's default page load strategy behavior.</p>
<p>I disable it with the following code.</p>
<pre><code>options = webdriver.ChromeOptions()
options.page_load_strategy = "none"
</code></pre>
<p>Now for the page I want to wait for it to load, I use the following code.</p>
<pre><code>WebDriverWait(web, seconds).until(
lambda _: web.execute_script("return document.readyState") == "complete"
)
</code></pre>
<p>The problem is that when <code>page_load_strategy</code> is <code>none</code>. The <em>waiting code</em> doesn't work i.e. it doesn't wait for the page <code>readyState</code> to be <code>complete</code>.</p>
|
<python><selenium><selenium-webdriver><selenium-chromedriver><pageloadstrategy>
|
2023-01-17 08:05:23
| 2
| 2,551
|
Ibrahim Ali
|
75,143,190
| 15,416,614
|
Is it possible to close a window in macOS with Python?
|
<p>Actually, I wanna do some automated operations on macOS, like closing a specific window.</p>
<p>I have read some threads, and I got to know about <code>Appkit</code> and <code>Quartz</code> from <a href="https://stackoverflow.com/questions/53237266/how-can-i-minimize-maximize-windows-in-macos-with-the-cocoa-api-from-a-python-sc">How can I minimize/maximize windows in macOS with the Cocoa API from a Python script?</a>.</p>
<p>Here below is my current progress:</p>
<pre><code>import AppKit
for app in AppKit.NSWorkspace.sharedWorkspace().runningApplications():
if app.localizedName() == 'Google Chrome':
app.hide()
</code></pre>
<p>With <code>AppKit</code>, I can hide a specific application successfully. But there are two issues for me with this method: first, it seems that <code>AppKit</code> can only manage <strong>Applications</strong>, not <strong>Windows</strong> (i.e., the above code hides all <em>Google Chrome</em> windows at once); besides, <code>AppKit</code> seems to be able only to <strong>Hide</strong> an application, not to <strong>Quit</strong> or <strong>Close</strong> it.</p>
<p>I also tried <code>Quartz</code>. With the below code, I can successfully find the specific windows that I wanna control, especially with the characteristic <code>kCGWindowNumber</code>. But I would like to ask, is there any module that can allow me to close (or hide) the window, maybe like with the <code>kCGWindowNumber</code>?</p>
<pre><code>import Quartz
for window in Quartz.CGWindowListCopyWindowInfo(Quartz.kCGWindowListOptionOnScreenOnly, Quartz.kCGNullWindowID):
if window['kCGWindowOwnerName'] == "Google Chrome" and window['kCGWindowLayer'] == 0:
print(window)
</code></pre>
|
<python><python-3.x><macos>
|
2023-01-17 07:37:30
| 0
| 387
|
Gordon Hui
|
75,143,185
| 997,832
|
Tensorflow Data cardinality is ambigous with multiple inputs
|
<p>I have a model with two inputs for text classification, the additional input being the text's 'subject'. One of my inputs is for text - it gets vectorized by a vectorization layer. The other is the 'subject' as an int. These are concatenated later. In my code below, <code>x_train_text</code> is simply a list of texts, <code>x_train_subject</code> is a list of integers, and these two and <code>y_train_int</code> have the same size. However, even though the sizes are the same (4499), I get the following error:</p>
<pre><code>ValueError: Data cardinality is ambiguous:
x sizes: 2
y sizes: 4499
Make sure all arrays contain the same number of samples.
</code></pre>
<p>CODE:</p>
<pre><code>MAX_TOKENS_NUM = 5000 # Maximum vocab size.
MAX_SEQUENCE_LEN = 40 # Sequence length to pad the outputs to.
EMBEDDING_DIMS = 100
text_input = tf.keras.Input(shape=(1,), dtype=tf.string)
subject_input = tf.keras.Input(shape=(1,), dtype=tf.int32)
text_layer = vectorize_layer(text_input)
text_layer = tf.keras.layers.Embedding(MAX_TOKENS_NUM + 1, EMBEDDING_DIMS)(text_layer)
text_layer = tf.keras.layers.GlobalAveragePooling1D()(text_layer)
subject_layer = tf.keras.layers.CategoryEncoding(
num_tokens=len(subjects), output_mode='one_hot', sparse=False
)(subject_input)
concatenated = tf.keras.layers.Concatenate(axis=1)([text_layer, subject_layer])
output = tf.keras.layers.Dense(len(labels))(concatenated)
model = tf.keras.models.Model(inputs=[text_input, subject_input], outputs=output)
model.summary()
model.compile(loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.SparseCategoricalAccuracy())
</code></pre>
<p>FIT FUNCTION :</p>
<pre><code>epochs = 100
history = model.fit(
[x_train_text,x_train_subject],
y=y_train_int,
epochs=epochs)
</code></pre>
<p>What could be the solution?</p>
|
<python><tensorflow><deep-learning>
|
2023-01-17 07:36:44
| 1
| 1,395
|
cuneyttyler
|
75,143,062
| 5,521,699
|
Performing realtime Google Cloud speech recognition with PyAudio + VB-Audio virtual cable
|
<p>I am trying to perform streaming speech recognition with the realtime Google Cloud Speech API. I have used the second code snippet from here (<a href="https://cloud.google.com/speech-to-text/docs/transcribe-streaming-audio" rel="nofollow noreferrer">https://cloud.google.com/speech-to-text/docs/transcribe-streaming-audio</a>) to successfully transcribe audio from the microphone.</p>
<p>I want to perform the same, but using audio coming from a different source, such as an Internet socket. For this, I have used VB-Cable and PyAudio in order to simulate input coming from the microphone: I take the input from the socket and forward it to a virtual device acting as a microphone. The main parts of the Python code are below:</p>
<pre><code># ----- client.py -----
s.connect(('127.0.0.1', 30000))
wf = wave.open('test.wav', 'rb')
rate = wf.getframerate()
data = wf.readframes(48000)
while data != b'':
bytes_sent = s.send(data)
print('Sent {} bytes'.format(bytes_sent))
data = wf.readframes(48000)
# ----- server.py -----
s = socket.socket()
s.bind(('', 30000))
s.listen()
while True:
c, addr = s.accept()
print ('Got connection from {}'.format(addr))
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
channels=1,
rate=48000,
output=True,
output_device_index=3) # VB-Cable output device ID
while True:
data = c.recv(48000)
print('Received {} bytes'.format(len(data)))
if data == b'':
break
stream.write(data)
# ----- mic-simulator.py -----
client, streaming_config = setup_microphone() # performs streaming config initialization, same as the first few lines in `main()`, in the link to Google Cloud Speech example
while True:
with MicrophoneStream(48000, 4800) as stream:
audio_generator = stream.generator()
requests_ = (
speech.StreamingRecognizeRequest(audio_content=content)
for content in audio_generator
)
responses = client.streaming_recognize(streaming_config, requests_)
# Now, put the transcription responses to use.
listen_print_loop(responses) # same as the function in the link to the Google Cloud Speech example
</code></pre>
<p>The problem with the code above is that sometimes I get the transcription results from Google Cloud Speech, but sometimes not. I have checked the VB-Audio virtual cable parameters, and whenever I forward the data coming from the socket, the input interface actually gets the data. Is it possible that the problem is some kind of rate-limiting imposed by Google Cloud Speech? Does anyone have a solution for this? Thanks in advance.</p>
|
<python><python-3.x><virtual><pyaudio><google-cloud-speech>
|
2023-01-17 07:23:02
| 0
| 710
|
Polb
|
75,142,908
| 8,723,790
|
eval vs string.split when getting values from .env (environment variable)
|
<p>In a Python project I have a list of values stored in a .env file as environment variables. I wonder whether it is better to use eval or string.split to get the values.</p>
<ol>
<li>eval</li>
</ol>
<p><em>.env</em></p>
<pre><code>ANIMALS=["Cat","Dog"]
</code></pre>
<p><em>code.py</em></p>
<pre><code>animal_list=eval(os.getenv("ANIMALS"))
</code></pre>
<ol start="2">
<li>string.split</li>
</ol>
<p><em>.env</em></p>
<pre><code>ANIMALS=Cat,Dog
</code></pre>
<p><em>code.py</em></p>
<pre><code>animal_list=os.getenv("ANIMALS").split(",")
</code></pre>
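<p>For completeness, a third variant I have seen suggested (it is an assumption on my part that it fits this use case) is <code>ast.literal_eval</code>, which parses Python literals without executing arbitrary code the way <code>eval</code> does:</p>

```python
import ast
import os

os.environ["ANIMALS"] = '["Cat","Dog"]'  # simulating the .env variable

# literal_eval accepts only Python literals (lists, strings, numbers, ...),
# so a malicious value cannot run arbitrary code
animal_list = ast.literal_eval(os.environ["ANIMALS"])
print(animal_list)  # ['Cat', 'Dog']
```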
|
<python><config><eval><strsplit><.env>
|
2023-01-17 07:02:18
| 0
| 301
|
Paul Chuang
|
75,142,751
| 7,177,478
|
sqlalchemy check foreign key refer data exist before insert?
|
<p>I'm new to SQLAlchemy and FastAPI. I wonder whether there is any way to check automatically that referenced data exists before inserting. For example, I want to make sure that profile.user_id exists before adding a new profile, but I don't want to check it myself. Is that possible? Below are my table definitions.</p>
<pre><code>class User(Base):
__tablename__ = "user"
id = Column("id", Integer, primary_key=True, autoincrement=True)
email = Column(String, unique=True, nullable=False)
hashed_password = Column(String, nullable=False)
create_time = Column(DateTime, nullable=False, default=func.now())
login_time = Column(DateTime, nullable=False, default=func.now())
class Profile(Base):
__tablename__ = "user_profile"
id = Column(Integer, primary_key=True, autoincrement=True)
user_id = Column(Integer, ForeignKey("user.id"), nullable=False)
name = Column(String, )
age = Column(Integer)
country = Column(Integer)
photo = Column(String, )
</code></pre>
|
<python><sql><sqlite><sqlalchemy><fastapi>
|
2023-01-17 06:38:55
| 1
| 420
|
Ian
|
75,142,692
| 9,377,382
|
Boto 3 AWS data quality unable to get data_quality result?
|
<p>I am running an AWS Glue job with a data quality check. Using boto3, I am trying to get the data quality result with the following snippet, but I am unable to get it. The result ID I am referring to seems to be correct, and my credentials are valid, since I can invoke jobs from my code.</p>
<pre><code>import boto3
client = boto3.client('glue')
response = client.get_data_quality_result(
ResultId='jr_b3f3fb689d0967c1d8af7a992d17709fd82fcbc35ee1e60133fe3ae4179ea3e3'
)
print(response)
</code></pre>
<p>Error : botocore.errorfactory.EntityNotFoundException: An error occurred (EntityNotFoundException) when calling the GetDataQualityResult operation: Cannot find Data Quality Result in account XXXXXXX with id jr_b3f3fb689d0967c1d8af7a992d17709fd82fcbc35ee1e601.</p>
<p><a href="https://i.sstatic.net/uUwmM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uUwmM.png" alt="Image for quality result" /></a></p>
|
<python><amazon-web-services><boto3><aws-glue>
|
2023-01-17 06:30:11
| 0
| 385
|
Dhivakhar Venkatachalam
|
75,142,623
| 5,278,594
|
Using preorder traversal go binary tree to store elements in a list
|
<p>I am trying to write code to implement pre-order traversal of a binary tree and store the elements in a list. I am using a helper function called <code>pre_util</code>, defined below:</p>
<pre><code>def pre_util(root, L):
if root is None:
return
L.append(root)
pre_util(root.left, L)
pre_util(root.right, L)
return L
def preorder(root):
L = []
res = pre_util(root, L)
return res
</code></pre>
<p>My only question is regarding returning values in the <code>pre_util</code> function recursive call. Will it make a difference if we replace</p>
<p><code>pre_util(root.left, L)</code> with
<code>L = pre_util(root.left, L)</code>?
To my knowledge, if we use <code>L = pre_util(root.left, L)</code> then the returned value overwrites the value of the variable <code>L</code>. But I am not sure why many solutions use just <code>pre_util(root.left, L)</code>. I'd appreciate some feedback/explanation.</p>
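<p>To make the aliasing behind my question concrete, here is a minimal standalone example of a function that mutates a list in place and also returns it:</p>

```python
def add_one(lst):
    lst.append(1)  # mutates the caller's list in place
    return lst     # returns the very same object

a = []
b = add_one(a)
print(a is b)  # True: both names refer to one list
print(a)       # [1]
```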
|
<python><recursion><tree>
|
2023-01-17 06:21:49
| 1
| 1,483
|
jay
|
75,142,546
| 4,321,525
|
plotting a boolean array as a translucent overlay over a graph with matplotlib
|
<p>I want to plot the <code>True</code> parts of a boolean array as translucent boxes over another plot.</p>
<p>This sketch illustrates what I envision. I know I could do that with Asymptote, but I (among other reasons) need to verify that the data I work with is concise. I can supply example code of a graph and a boolean array if that helps; I don't yet have an idea how to realize the overlays, though. Asymptote might still be the best option for producing plots for later publication.</p>
<p><a href="https://i.sstatic.net/sqMxg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sqMxg.png" alt="sketch of a graph with boolean overlay" /></a></p>
|
<python><matplotlib>
|
2023-01-17 06:11:29
| 1
| 405
|
Andreas Schuldei
|
75,142,308
| 10,844,937
|
How to merge dataframe and series?
|
<p>I have a <code>dataframe</code> and a <code>series</code>, which are as follows.</p>
<pre><code>_df = pd.DataFrame({'a': [1, '', '', 4], 'b': ['apple', 'banana', 'orange', 'pear']}, columns=['a', 'b'])
_series = pd.Series(['', 2, 3, ''], name="a")
</code></pre>
<p>Here I would like to merge the <code>dataframe</code> and <code>series</code> along column <code>a</code> to get rid of all the blanks. This is the result I want.</p>
<pre><code> a b
0 1 apple
1 2 banana
2 3 orange
3 4 pear
</code></pre>
<p>Here is how I do it.</p>
<pre><code>for i in range(len(_df.iloc[:, 0].to_list())):
if _df.iloc[i, 0] == '':
        _df.iloc[i, 0] = _series[i]
</code></pre>
<p>The problem is that this can be very slow if the <code>dataframe</code> is big. Does anyone know how I can do this in a more efficient way?</p>
|
<python><pandas>
|
2023-01-17 05:38:50
| 2
| 783
|
haojie
|
75,142,270
| 5,901,318
|
how to call form method from template for dynamic form in django
|
<p>I have a model in which one of the fields is a CharField, used to store JSON (a list of dictionaries) called 'schema':</p>
<pre><code>SCHEMA_DATA_TYPES=(
(1, 'Integer'),
(2, 'String'),
(3, 'Date'),
)
class MailTemplate(models.Model):
name = models.CharField(max_length=20)
tfile = models.FileField(upload_to='uploads/mailtemplate', null=True, blank=True)
schema = models.CharField(max_length=256, blank=True, null=True, editable=False)
def __str__(self) -> str:
return self.name
</code></pre>
<p>Currently, it only has one record, and the 'schema' value is:</p>
<pre><code>[{"name": "date", "label": null, "type": 2, "format": ""}, {"name": "invoiceNumber", "label": null, "type": 2, "format": ""}, {"name": "company", "label": null, "type": 2, "format": ""}, {"name": "total", "label": null, "type": 2, "format": ""}, {"name": "items", "label": null, "type": 2, "format": ""}]
</code></pre>
<p>I need to build a dynamic form that has additional fields constructed from that 'schema' field.</p>
<p>On my first try I used htmx. The form showed up perfectly, but the additional fields couldn't be read by forms.py.</p>
<p>As suggested by SO user @vinkomlacic on my post <a href="https://stackoverflow.com/q/75130444/5901318">How to read additional data posted by form in django forms.py</a>, I searched the docs for how to override a form's <code>__init__</code> to do it.</p>
<p>I found <a href="https://www.caktusgroup.com/blog/2018/05/07/creating-dynamic-forms-django/" rel="nofollow noreferrer">https://www.caktusgroup.com/blog/2018/05/07/creating-dynamic-forms-django/</a>, which looks informative.</p>
<p>so my admin.py is</p>
<pre><code>class MailTemplateAdmin(admin.ModelAdmin):
    change_form_template = 'mailtemplate_form.html'
    form = MailTemplateForm

admin.site.register(MailTemplate, MailTemplateAdmin)
</code></pre>
<p>and mailtemplate_form.html is</p>
<pre><code>{% extends "admin/change_form.html" %}

{% block after_field_sets %}
  {% if not add %}
    <div id="schema_head">
      <b>Schema</b>
    </div>
    <div id="additional_fields">
      {% for schema_line in form.get_schemas %}
        {{ schema_line }}
      {% endfor %}
    </div>
  {% endif %}
{% endblock %}
</code></pre>
<p>and forms.py have</p>
<pre><code>class MailTemplateForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        schema = self.instance.schema.strip()
        # print(f'INSTANCE:{schema}')
        schema = json.loads(schema)
        for i in range(len(schema)):
            self.fields[f'schema_{i}_name'] = forms.CharField(max_length=20, initial=schema[i]['name'])
            self.fields[f'schema_{i}_name'].widget.attrs["readonly"] = True
            self.fields[f'schema_{i}_label'] = forms.CharField(max_length=20, required=True, initial=schema[i]['label'])
            self.fields[f'schema_{i}_type'] = forms.ChoiceField(choices=SCHEMA_DATA_TYPES, initial=schema[i]['type'])
            self.fields[f'schema_{i}_format'] = forms.CharField(max_length=20, required=True, initial=schema[i]['format'])
        # print(f'FIELDS: {self.fields}')

    def get_schemas(self):
        print('get_schemas CALLED!')
        for s in self.fields:
            if s.startswith('schema_'):
                yield self.fields[s]
</code></pre>
<p>The problems are:</p>
<ol>
<li>The additional fields don't show up on the web page.</li>
<li>The console shows no indication that <code>MailTemplateForm.get_schemas</code> is ever called.</li>
</ol>
<p>Please give me any clue on how to call a form's method(s) from a template.</p>
<p>Sincerely<br />
-bino-</p>
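<p>One thing worth checking (an assumption on my part, not verified against this Django version): in the admin's <code>change_form.html</code> the context variable is <code>adminform</code> (an <code>AdminForm</code> wrapper), not <code>form</code>, so <code>form.get_schemas</code> may resolve to nothing and the loop silently iterates an empty value. A sketch going through the wrapper instead:</p>

```
{% for schema_line in adminform.form.get_schemas %}
    {{ schema_line }}
{% endfor %}
```

<p>Relatedly, yielding <code>self[s]</code> (a <code>BoundField</code>, which knows how to render its widget) rather than the raw <code>self.fields[s]</code> in <code>get_schemas</code> tends to display better.</p>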
|
<python><django>
|
2023-01-17 05:31:35
| 0
| 615
|
Bino Oetomo
|
75,142,234
| 10,829,044
|
pandas merge using list columns and contains operation
|
<p>I have two dataframes that are given below</p>
<pre><code>multi_df = pd.DataFrame({'multi_project_ID': ["Combo_1","Combo_2","Combo_3","Combo_4"],
'multi_items':[['Chips','Biscuits','Chocolates'],['Alcoholic Drinks','Juices','Fruits'],['Plants','Veggies','Chips'],['Cars']],
'multi_labels':[[1,2,3],[4,5,6],[8,9,10],[11]]})
single_df = pd.DataFrame({'single_project_ID': ["ABC_1","DEF_2","JKL_3","MNO_3"],
'single_items':[['Chips'],['Alcoholic Drinks'],['Biscuits'],['Smoking']],
'single_labels':[[1],[4],[8],[9]]})
</code></pre>
<p>I would like to do the below</p>
<p>a) Check whether items of <code>single_items</code> list is present under the items of <code>multi_items</code> list.</p>
<p>b) If yes, then extract the <code>multi_labels</code> and <code>multi_project_id</code> for the corresponding matching item</p>
<p>c) If no item is present/matching, then put <code>NA</code></p>
<p>So, I tried the below but it doesn't work. I don't know how and where to start.</p>
<pre><code>print(single_df.groupby('single_labels').sum()['single_items'].apply(lambda x: list(set(x))).reset_index())
</code></pre>
<p>I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/zYNON.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYNON.png" alt="enter image description here" /></a></p>
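<p>A sketch of one possible approach (my own attempt, assuming each <code>single_items</code> list holds exactly one item, and pandas &ge; 1.3 for multi-column <code>explode</code>): explode the multi lists into one row per (item, label) pair, then left-merge. Note that an item appearing in several combos, like <code>'Chips'</code>, produces one output row per match:</p>

```python
import pandas as pd

multi_df = pd.DataFrame({
    'multi_project_ID': ["Combo_1", "Combo_2", "Combo_3", "Combo_4"],
    'multi_items': [['Chips', 'Biscuits', 'Chocolates'],
                    ['Alcoholic Drinks', 'Juices', 'Fruits'],
                    ['Plants', 'Veggies', 'Chips'], ['Cars']],
    'multi_labels': [[1, 2, 3], [4, 5, 6], [8, 9, 10], [11]]})
single_df = pd.DataFrame({
    'single_project_ID': ["ABC_1", "DEF_2", "JKL_3", "MNO_3"],
    'single_items': [['Chips'], ['Alcoholic Drinks'], ['Biscuits'], ['Smoking']],
    'single_labels': [[1], [4], [8], [9]]})

# One row per (item, label) pair of each combo.
exploded = multi_df.explode(['multi_items', 'multi_labels'])

# Pull each single item out of its one-element list, then left-merge:
# unmatched items (e.g. 'Smoking') get NaN for the multi columns.
out = (single_df.assign(item=single_df['single_items'].str[0])
       .merge(exploded, left_on='item', right_on='multi_items', how='left'))
print(out)
```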
|
<python><pandas><list><dataframe><merge>
|
2023-01-17 05:25:57
| 1
| 7,793
|
The Great
|
75,142,154
| 2,079,764
|
Refrehing the Django model after save and 5 second sleep get me old state, what's wrong?
|
<p>I was able to save data to a Django model without any errors, but the data is not reflected in the DB: after a short sleep, refreshing the instance returns the old value again. What might be causing this?</p>
<p>I suspect the use of the Google API, but I was able to print the data before performing the save operation.</p>
<pre><code>def update_channel():
    client = Client.objects.get(name="name")
    print(f"Existing channel: {client.channel_id}")  # 123
    # fetch channel data from google api
    data = google_drive.subscribe_new_channel()
    client.channel_id = data["id"]
    client.channel_resource_id = data["resourceId"]
    client.save()
    client.refresh_from_db()
    print(f"New channel: {data['id']}")  # 456
    print(f"New channel in db: {client.channel_id}")  # 456
    time.sleep(5)
    client.refresh_from_db()
    print(f"channel in db: {client.channel_id}")  # 123
</code></pre>
<p>Sample Output:</p>
<pre><code>Existing channel: 123
New channel: 456
New channel in db: 456
channel in db: 123
</code></pre>
|
<python><django><google-api>
|
2023-01-17 05:05:41
| 1
| 1,226
|
Rashi
|
75,142,131
| 17,696,880
|
Identify and replace using regex some strings, stored within a list, within a string that may or may not contain them
|
<pre class="lang-py prettyprint-override"><code>import re
#list of names to identify in input strings
result_list = ['Thomas Edd', 'Melissa Clark', 'Ada White', 'Louis Pasteur', 'Edd Thomas', 'Clark Melissa', 'White Eda', 'Pasteur Louis', 'Thomas', 'Melissa', 'Ada', 'Louis', 'Edd', 'Clark', 'White', 'Pasteur']
result_list.sort() # sorts normally by alphabetical order (optional)
result_list.sort(key=len, reverse=True) # sorts by descending length
#example 1
input_text = "Melissa went for a walk in the park, then Melisa Clark went to the cosmetics store. There Thomas showed her a wide variety of cosmetic products. Edd Thomas is a great salesman, even so Thomas Edd is a skilled but responsible salesman, as Edd is always honest with his customers. White is a new client who came to Edd's business due to the good social media reviews she saw from Melissa, her co-worker."
#In this example 2, it is almost the same however, some of the names were already encapsulated
# under the ((PERS)name) structure, and should not be encapsulated again.
input_text = "((PERS)Melissa) went for a walk in the park, then Melisa Clark went to the cosmetics store. There Thomas showed her a wide variety of cosmetic products. Edd Thomas is a great salesman, even so ((PERS)Thomas Edd) is a skilled but responsible salesman, as Edd is always honest with his customers. White is a new client who came to Edd's business due to the good social media reviews she saw from Melissa, her co-worker." # example 2
for i in result_list:
    input_text = re.sub(r"\(\(PERS\)" + r"(" + str(i) + r")" + r"\)",
                        lambda m: f"((PERS){m[1]})",
                        input_text)

print(repr(input_text))  # --> output
</code></pre>
<p>Note that the names must meet certain conditions to be identified: they must sit between whitespace (<code>\s*the searched name\s*</code>), or be at the beginning (<code>(?:(?<=\s)|^)</code>) and/or at the end of the input string.</p>
<p>It may also be the case that a name is followed by a comma, for example <code>"Ada White, Melissa and Louis went shopping"</code>, or that spaces are accidentally omitted, as in <code>"Ada White,Melissa and Louis went shopping"</code>.
For this reason it is important that a name can still be matched immediately after <code>[.,;]</code>.</p>
<p>Cases where the names should NOT be encapsulated, would be for example...</p>
<p><code>"the Edd's business"</code></p>
<p><code>"The whitespace"</code></p>
<p><code>"the pasteurization process takes time"</code></p>
<p><code>"Those White-spaces in that text are unnecessary"</code></p>
<p>, since in these cases the name is followed or preceded by another word that should not be part of the name that is being searched for.</p>
<p>For examples 1 and 2 (note that example 2 is the same as example 1 but already has some encapsulated names and you have to prevent them from being encapsulated again), you should get the following output.</p>
<pre><code>"((PERS)Melissa) went for a walk in the park, then ((PERS)Melisa Clark) went to the cosmetics store. There ((PERS)Thomas) showed her a wide variety of cosmetic products. ((PERS)Edd Thomas) is a great salesman, even so ((PERS)Thomas Edd) is a skilled but responsible salesman, as ((PERS)Edd) is always honest with his customers. ((PERS)White) is a new client who came to Edd's business due to the good social media reviews she saw from ((PERS)Melissa), her co-worker."
</code></pre>
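<p>For what it's worth, here is a sketch of one way (my own, with a simplified name list) using a single alternation with lookarounds: longest names first so full names win over single names, a lookbehind so text already wrapped as <code>((PERS)Name)</code> is skipped, and lookarounds so names glued to word characters, apostrophes, or hyphens (<code>Edd's</code>, <code>pasteurization</code>, <code>White-spaces</code>) are left alone:</p>

```python
import re

names = ['Thomas Edd', 'Edd Thomas', 'Melissa', 'Pasteur', 'White', 'Edd']
names.sort(key=len, reverse=True)  # longest first so "Edd Thomas" beats "Edd"

pattern = re.compile(
    r"(?<!\(PERS\))"    # not already wrapped as ((PERS)Name)
    r"(?<![\w'-])"      # not glued to the end of another word
    r"(" + "|".join(map(re.escape, names)) + r")"
    r"(?![\w'-])"       # not followed by 's, -spaces, ization, ...
)

text = ("((PERS)Melissa) went out, then Edd Thomas arrived. White liked "
        "Edd's business and the pasteurization process.")
print(pattern.sub(r"((PERS)\1)", text))
```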
|
<python><python-3.x><regex><replace><regex-group>
|
2023-01-17 04:59:59
| 1
| 875
|
Matt095
|
75,141,938
| 4,095,108
|
Spacy incorrectly identifying pronouns
|
<p>When I try this code using Spacy, I get the desired result:</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_sm")
# example 1
test = "All my stuff is at to MyBOQ"
doc = nlp(test)
for word in doc:
    if word.pos_ == 'PRON':
        print(word.text)
</code></pre>
<p>The output shows <code>All</code> and <code>my</code>. However, if I add a question mark:</p>
<pre><code>test = "All my stuff is at to MyBOQ?"
doc = nlp(test)
for word in doc:
    if word.pos_ == 'PRON':
        print(word.text)
</code></pre>
<p>now it also identifies <code>MyBOQ</code> as a pronoun. It should be recognized as an organization name instead (note that <code>ORG</code> is an entity label, i.e. <code>ent.label_ == 'ORG'</code>, not a <code>pos_</code> value).</p>
<p>How do I tell Spacy not to classify MyBOQ as a pronoun? Should I just remove all punctuation before checking for pronouns?</p>
|
<python><nlp><spacy>
|
2023-01-17 04:15:14
| 1
| 1,685
|
jmich738
|
75,141,898
| 4,309,170
|
HTTP Apis for MT4
|
<p>I'm exploring options to create RESTful APIs for MT4 without setting up EA. As an example, <a href="http://mt4.mtapi.be/index.html" rel="nofollow noreferrer">http://mt4.mtapi.be/index.html</a> - is just what I want to create.</p>
<p>However, the problem is that I'm not entirely sure if it's possible to do so without setting up an EA inside the MT4 terminal.
I read this <a href="https://www.mql5.com/en/blogs/post/716643" rel="nofollow noreferrer">post</a> and it seems like ZeroMQ can be used.</p>
<p>Any help will be appreciated. Thank you.</p>
|
<python><zeromq><metatrader4><mt4>
|
2023-01-17 04:04:37
| 1
| 628
|
Han
|
75,141,825
| 10,200,497
|
Groupby streak of numbers and a mask
|
<p>This is my pandas dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [10, 20, 1, 55, 66, 333, 444, 1, 2, 10], 'b': [1,1, 1, -1, -1, -1, -1, 1, 1, -1]})
</code></pre>
<p>This is how I need it grouped after using <code>groupby</code>. I want all of the 1s in <code>b</code> plus the two -1s that follow each streak of 1s. For example, the first group is all of the consecutive 1s and then, after the streak ends, the next two -1s. If the streak of -1s is shorter than two, just take what is there (the first -1), which is group two in the example:</p>
<pre><code> a b
0 10 1
1 20 1
2 1 1
3 55 -1
4 66 -1
a b
7 1 1
8 2 1
9 10 -1
</code></pre>
<p>I know that I need a mask. I have tried a few, but they didn't work. These are some of my tries:</p>
<pre><code>df.groupby(df.b.diff().cumsum().eq(1))
df.groupby(df['b'].ne(df['b'].shift()).cumsum())
</code></pre>
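<p>A sketch of one possible approach (mine, not verified beyond this example): label runs of equal <code>b</code> values, keep full runs of 1 plus at most the first two rows of each run of -1, and start a new group wherever a 1 follows a -1:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 20, 1, 55, 66, 333, 444, 1, 2, 10],
                   'b': [1, 1, 1, -1, -1, -1, -1, 1, 1, -1]})

run_id = df['b'].ne(df['b'].shift()).cumsum()          # label runs of equal b
keep = (df['b'] == 1) | (df.groupby(run_id).cumcount() < 2)

# A new group starts at each 1 that follows a -1 (or at the very first row).
group_key = ((df['b'] == 1) & (df['b'].shift(fill_value=-1) == -1)).cumsum()

groups = [g for _, g in df[keep].groupby(group_key[keep])]
for g in groups:
    print(g, end='\n\n')
```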
|
<python><pandas><group-by>
|
2023-01-17 03:48:44
| 1
| 2,679
|
AmirX
|
75,141,805
| 15,298,943
|
Python - having trouble selecting single value from json data
|
<p>I have the following code from which I want to select a singular piece of data from the JSON.</p>
<pre><code> j = {
"data": [
{
"astronomicalDawn": "2023-01-16T04:58:21+00:00",
"astronomicalDusk": "2023-01-16T17:00:31+00:00",
"civilDawn": "2023-01-16T06:38:18+00:00",
"civilDusk": "2023-01-16T15:20:34+00:00",
"moonFraction": 0.36248449454701365,
"moonPhase": {
"closest": {
"text": "Third quarter",
"time": "2023-01-14T22:34:00+00:00",
"value": 0.75
},
"current": {
"text": "Waning crescent",
"time": "2023-01-16T06:00:00+00:00",
"value": 0.7943440617174506
}
},
"moonrise": "2023-01-16T01:01:55+00:00",
"moonset": "2023-01-16T09:53:57+00:00",
"nauticalDawn": "2023-01-16T05:46:36+00:00",
"nauticalDusk": "2023-01-16T16:12:16+00:00",
"sunrise": "2023-01-16T07:28:07+00:00",
"sunset": "2023-01-16T14:30:45+00:00",
"time": "2023-01-16T06:00:00+00:00"
},
{
"astronomicalDawn": "2023-01-17T04:57:26+00:00",
"astronomicalDusk": "2023-01-17T17:02:07+00:00",
"civilDawn": "2023-01-17T06:37:07+00:00",
"civilDusk": "2023-01-17T15:22:26+00:00",
"moonFraction": 0.26001046334874545,
"moonPhase": {
"closest": {
"text": "Third quarter",
"time": "2023-01-14T21:31:00+00:00",
"value": 0.75
},
"current": {
"text": "Waning crescent",
"time": "2023-01-17T06:00:00+00:00",
"value": 0.8296778757434323
}
},
"moonrise": "2023-01-17T02:38:30+00:00",
"moonset": "2023-01-17T10:01:03+00:00",
"nauticalDawn": "2023-01-17T05:45:35+00:00",
"nauticalDusk": "2023-01-17T16:13:58+00:00",
"sunrise": "2023-01-17T07:26:40+00:00",
"sunset": "2023-01-17T14:32:54+00:00",
"time": "2023-01-17T06:00:00+00:00"
}
],
"meta": {
"cost": 1,
"dailyQuota": 10,
"lat": 58.7984,
"lng": 17.8081,
"requestCount": 1,
"start": "2023-01-16 06:00"
}
}
print(j['data']['moonPhase'])
</code></pre>
<p>Which gives me this error;</p>
<p><code>TypeError: list indices must be integers or slices, not str</code></p>
<p>That error refers to the very last line of the code. Changing that line to <code>print(j['data'])</code> works, though.</p>
<p>What am I doing wrong? I am trying to select the <code>moonPhase</code> data. Thank you.</p>
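<p>For the record, a sketch of the indexing (with the payload trimmed to the relevant shape): <code>j['data']</code> is a <em>list</em> of dicts, so pick an element by position (or iterate) before indexing by key:</p>

```python
# Trimmed copy of the structure above, keeping only the relevant keys.
j = {"data": [
    {"time": "2023-01-16T06:00:00+00:00",
     "moonPhase": {"current": {"text": "Waning crescent", "value": 0.794}}},
    {"time": "2023-01-17T06:00:00+00:00",
     "moonPhase": {"current": {"text": "Waning crescent", "value": 0.830}}},
]}

print(j['data'][0]['moonPhase'])    # first day's moonPhase dict
phases = [d['moonPhase']['current']['text'] for d in j['data']]
```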
|
<python><json><python-requests>
|
2023-01-17 03:44:56
| 1
| 475
|
uncrayon
|
75,141,692
| 1,715,153
|
How to get all fish shell commands from a python script?
|
<p>When I run <code>complete -C</code> from my regular terminal fish shell I get a list of ~4k commands, which is great. I want this to happen from my python script. I have the following but it doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess

command = "fish -c 'complete -C'"
output = (
    subprocess.run(command, shell=True, capture_output=True).stdout.decode().strip()
)
</code></pre>
<p>The output <code>stdout</code> is just an empty string. The <code>stderr</code> is showing:</p>
<pre><code>complete: -C: option requires an argument
Standard input (line 1):
complete -C
(Type 'help complete' for related documentation)"
</code></pre>
<p>How can I get this to work? Is there something outside of python subprocess I can use for this?</p>
|
<python><command><command-line-interface><fish><completion>
|
2023-01-17 03:20:34
| 2
| 1,622
|
Ariel Frischer
|
75,141,642
| 10,200,497
|
Groupby streak of numbers and a mask to groupby two rows after each group
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [20, 1, 55, 333, 444, 1, 2, 10], 'b': [20, 20, 21, 21, 21, 22, 22, 22]})
</code></pre>
<p>I want to group them by column <code>b</code>, with each group extended by the two rows that follow it.
This is the output that I need:</p>
<pre><code> a b
0 20 20
1 1 20
2 55 21
3 333 21
a b
2 55 21
3 333 21
4 444 21
5 1 22
6 2 22
a b
5 1 22
6 2 22
7 10 22
</code></pre>
<p>I know that I need a mask. I have tried a few, but they didn't work. This is one of my tries:</p>
<pre><code>df.groupby(df.b.diff().cumsum().eq(1))
</code></pre>
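<p>A sketch of one way (my own, assuming the default RangeIndex as in the example): take each group's index span and extend it by two positions with <code>.loc</code>, whose end bound is inclusive, clipping at the end of the frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [20, 1, 55, 333, 444, 1, 2, 10],
                   'b': [20, 20, 21, 21, 21, 22, 22, 22]})

chunks = []
for _, idx in df.groupby('b').groups.items():
    stop = min(idx.max() + 2, df.index.max())   # extend by two rows, clipped
    chunks.append(df.loc[idx.min():stop])       # .loc end is inclusive

for chunk in chunks:
    print(chunk, end='\n\n')
```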
|
<python><pandas><group-by>
|
2023-01-17 03:07:01
| 1
| 2,679
|
AmirX
|
75,141,639
| 143,397
|
Python: how to manage a variable's lifetime beyond a single function?
|
<p>I'm looking to better understand how Python can be used to guarantee resource release when the lifetime of the resource extends beyond a single function.</p>
<p>The go-to answer for the general problem of ensuring a resource is released, is to use a <code>with</code> statement, and sometimes to create a context manager to wrap the resource and manage its lifetime. However <em>in every single example of this</em>, the resource is used immediately and then is no longer required. It all takes place in a single function, and there's never any demonstration of saving, returning, or storing the resource.</p>
<p>The <code>with</code> approach does not seem to handle some important cases, such as when resource creation occurs in a different place from the resource usage. Compared with the RAII concept from C++, it seems deficient.</p>
<p>Aside: I've come to understand that the <code>__del__</code> function is not suitable as a place to release resources, as there's no guarantee that it will even run, and if it does, it may not be until the very end of the program, and this may never happen.</p>
<p>For example, consider a class that creates or opens a resource (such as a file handle) in the initialiser function, <em>which is then saved to be used later</em>, at a time controlled by the owner of the <code>Foo</code> instance:</p>
<pre><code>class Foo:
    def __init__(self):
        self._f = open("/tmp/foo", "w")

    def close(self):
        if self._f:
            self._f.close()

    def do_something(self):
        some_func(self._f)
</code></pre>
<p>This could be turned into a context manager with:</p>
<pre><code>class Foo:
    def __init__(self):
        self._f = None

    def __enter__(self):
        self._f = open("/tmp/foo", "w")
        return self

    def __exit__(self, type, value, traceback):
        self.close()

    def close(self):
        if self._f:
            self._f.close()

    def do_something(self):
        some_func(self._f)
</code></pre>
<p>Then callers can ensure the file is closed, e.g. in the event of exceptions, if they remember to do:</p>
<pre><code>with Foo() as foo:
    foo.do_something()
</code></pre>
<p>But what if the instance of Foo, and its open file object, is to have a long lifetime? For example:</p>
<pre><code>class Bar:
    def __init__(self):
        self._foo = Foo()

    def later(self):
        self._foo.do_something()

bar = Bar()
# time passes...
bar.later()
# program eventually ends but nobody cleaned up bar._foo!
</code></pre>
<p>What do we do here? We can't use a <code>with</code> in <code>__init__</code> because this simply doesn't work - the context manager releases Foo's resource when the <code>with</code> block completes:</p>
<pre><code>class Bar:
    def __init__(self):
        with Foo() as foo:
            self._foo = foo
        # foo.__exit__() has been called! Thus self._foo is not useful.
</code></pre>
<p>So we also have to make <code>Bar</code> a context manager that manages the <code>Foo</code> resource:</p>
<pre><code>class Bar:
    def __init__(self):
        self._foo = None

    def __enter__(self):
        self._foo = Foo()
        return self

    def __exit__(self, type, value, traceback):
        if self._foo:
            self._foo.close()

    def later(self):
        self._foo.do_something()
</code></pre>
<p>This seems to propagate the context manager pattern up through the structure, and it gets a lot worse when you start composing multiple classes that handle their own resources.</p>
<pre><code># perhaps not so bad with two classes:
with Foo() as foo, Bar(foo) as bar:
    bar.later()

# terrible with more (these all need to be context managers!):
with Foo() as foo, Bar(foo) as bar, Baz(bar) as baz, Goo(foo, baz) as goo:
    goo.some_function()
</code></pre>
<p>Since "whoever opens the file must close the file", this also seems to suggest that all file objects could be created at the top level and injected instead, but not all resources are file objects and it makes more sense to have classes that encapsulate them.</p>
<p>Other than having a top-level multi-part <code>with</code> statement and dependency-injecting every single instance of anything that manages a resource, what is a good way to deal with the requirement that all of these resources are eventually cleaned up, ideally without requiring the programmer to remember to write complex <code>try/except/finally</code> blocks that call <code>close</code> or equivalent?</p>
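<p>One common pattern worth noting (not from the question itself): give the long-lived owner a <code>contextlib.ExitStack</code>. Resources are registered on the stack as they are acquired, wherever in the object's lifetime that happens, and a single <code>close()</code> (or using the owner as a context manager) releases them all in reverse order, so the context-manager pattern does not have to propagate through every composed class. A minimal sketch:</p>

```python
import contextlib

class Bar:
    def __init__(self):
        # The stack owns every resource this instance acquires.
        self._stack = contextlib.ExitStack()
        self._f = self._stack.enter_context(open("/tmp/foo", "w"))

    def close(self):
        self._stack.close()          # releases everything, in reverse order

    # Optional: let callers use `with Bar() as bar:` too.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

bar = Bar()
# time passes...
bar._f.write("hello")
bar.close()                          # the file is guaranteed closed here
```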
|
<python><resources><lifetime><with-statement><raii>
|
2023-01-17 03:06:33
| 0
| 13,932
|
davidA
|
75,141,589
| 19,009,577
|
Why does multiprocessing not speed up my code
|
<p>I was trying to test the extent to which multiprocessing could speed up code, so I did this:</p>
<pre><code>from pathos.multiprocessing import Pool

def wm():
    l = []
    for i in range(1000):
        for j in range(1000):
            l.append(i + j)
    return l

def m():
    with Pool() as pool:
        return pool.starmap(lambda x, y: x + y,
                            ((i, j) for j in range(1000) for i in range(1000)))
</code></pre>
<pre><code>%%timeit
wm()
79.1 ms ± 1.01 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<pre><code>%%timeit
m()
11.8 s ± 5.12 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>It seems like multiprocessing is not making my code faster, but significantly slower. While it was running I also checked task manager and the cpu readings were quite low...</p>
<p>PS: I used pathos.multiprocessing cause the normal multiprocessing had some pickle error and it was suggested to use pathos.multiprocessing from another SO question.</p>
|
<python><optimization><multiprocessing>
|
2023-01-17 02:55:56
| 0
| 397
|
TheRavenSpectre
|
75,141,543
| 10,829,044
|
Pandas filter a list of items present in a column
|
<p>I have a dataframe like as below</p>
<pre><code>df = pd.DataFrame({'text': ["Hi how","I am fine","Ila say Hi"],
'tokens':[['Hi','how'],['I','am','fine'],['Ila','say','Hi']],
'labels':[['A','B'],['C','B','A'],['D','B','A']]})
</code></pre>
<p>I would like to do the below</p>
<p>a) Filter the df using <code>tokens</code> AND <code>labels</code> column</p>
<p>b) Filter based on the values <code>Hi</code>, <code>Ila</code> for tokens column</p>
<p>c) Filter based on the values <code>A</code> and <code>D</code> for labels column</p>
<p>So, I tried the below</p>
<pre><code>df[((df['tokens']==['Hi'])&(df['tokens']==['Ila']))&((df['labels']==['A'])&(df['labels']==['D']))]
</code></pre>
<p>However, this doesn't work. Since my column has values in <code>list format</code>, how do I filter them whether the list has only one item or multiple items?</p>
<p>I expect my output to be like as below</p>
<pre><code>text tokens labels
Ila say Hi [Ila, say, Hi] [D, B, A]
</code></pre>
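<p>One way to sketch it (mine): treat the wanted values as sets and test <code>issubset</code> against each row's list, which works whether the list has one item or many:</p>

```python
import pandas as pd

df = pd.DataFrame({'text': ["Hi how", "I am fine", "Ila say Hi"],
                   'tokens': [['Hi', 'how'], ['I', 'am', 'fine'], ['Ila', 'say', 'Hi']],
                   'labels': [['A', 'B'], ['C', 'B', 'A'], ['D', 'B', 'A']]})

want_tokens = {'Hi', 'Ila'}
want_labels = {'A', 'D'}

# set.issubset accepts any iterable, so it applies to the list cells directly.
mask = (df['tokens'].apply(want_tokens.issubset)
        & df['labels'].apply(want_labels.issubset))
result = df[mask]
print(result)
```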
|
<python><pandas><list><dataframe><filter>
|
2023-01-17 02:46:57
| 1
| 7,793
|
The Great
|