| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,185,375
| 5,684,405
|
How to prepare PIL Image.Image for tf.image.decode_image
|
<p>For a file read with:</p>
<pre><code>import PIL
import tensorflow as tf
from keras_preprocessing.image import array_to_img
path_image = "path/cat_960_720.jpg"
read_image = PIL.Image.open(path_image)
# read_image.show()
image_decode = tf.image.decode_image(read_image)
print("This is the size of the Sample image:", image_decode.shape, "\n")
print("This is the array for Sample image:", image_decode)
resize_image = tf.image.resize(image_decode, (32, 32))
print("This is the Shape of resized image", resize_image.shape)
print("This is the array for resize image:", resize_image)
to_img = array_to_img(resize_image)
to_img.show()
</code></pre>
<p>I keep getting an error for this line <code>tf.image.decode_image(read_image)</code>:</p>
<blockquote>
<p>ValueError: Attempt to convert a value
(<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=960x586 at
0x11AA49F40>) with an unsupported type (<class
'PIL.JpegImagePlugin.JpegImageFile'>) to a Tensor.</p>
</blockquote>
<p>How can I pass an image read with PIL to TensorFlow so that I can decode and resize this big picture to <code>32x32x3</code>?</p>
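<p>A likely cause: <code>tf.image.decode_image</code> expects the <em>encoded</em> bytes of the file, not an already-decoded <code>PIL.Image.Image</code>. Two common fixes are (a) read the raw bytes yourself and let TensorFlow decode them, or (b) skip the decode step entirely and convert the PIL image with <code>np.asarray</code>. A minimal sketch of the byte-reading half, using only the standard library (the file content here is a dummy stand-in, not a real JPEG):</p>

```python
import os
import tempfile

# tf.image.decode_image expects the *encoded* file bytes, not a PIL.Image.Image.
# Reading those bytes is plain file I/O (tf.io.read_file does the same thing).
fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16  # JPEG magic bytes + dummy payload

with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    f.write(fake_jpeg)
    path_image = f.name

with open(path_image, "rb") as f:  # stdlib equivalent of tf.io.read_file(path_image)
    image_bytes = f.read()

print(image_bytes[:2])  # the JPEG signature that decode_image looks for
os.unlink(path_image)
```

<p>With real TensorFlow available, <code>tf.io.decode_image(image_bytes)</code> should then work; alternatively, since PIL has already decoded the JPEG, <code>tf.image.resize(np.asarray(read_image), (32, 32))</code> avoids the decode call altogether.</p>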
|
<python><tensorflow><python-imaging-library>
|
2023-01-20 14:29:03
| 1
| 2,969
|
mCs
|
75,185,306
| 9,759,263
|
python3 finds some programs in a folder, but not others
|
<p>I have the following directory:</p>
<pre><code>~/Library/Python/3.9/bin
</code></pre>
<p>Inside that folder, for example, appear these two programs:</p>
<p><a href="https://i.sstatic.net/z9GW0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z9GW0.png" alt="enter image description here" /></a></p>
<p>Executing:</p>
<p><code>python3 -m submit50</code></p>
<p>gives...</p>
<pre><code>usage: submit50 [-h] [--logout] [--log-level {debug,info,warning,error}] [-V]
slug
...
</code></pre>
<p>However, executing:</p>
<p><code>python3 -m youtube-dl</code></p>
<p>gives...</p>
<pre><code>/Library/Developer/CommandLineTools/usr/bin/python3: No module named youtube-dl
</code></pre>
<p>What's happening here?</p>
<ul>
<li>Why isn't the program <code>youtube-dl</code> being located, and</li>
<li>why is the path <code>/Library/Developer/CommandLineTools/usr/bin/</code> appearing there?</li>
</ul>
<p><strong>OS: macOS Ventura 13.1</strong></p>
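<p>One explanation worth checking: <code>python3 -m name</code> imports a Python <em>module</em> called <code>name</code>, and module names must be valid identifiers, so a name containing a hyphen can never be imported that way (the installed module is likely <code>youtube_dl</code>, while <code>submit50</code> happens to be both a script name and a valid module name). A quick stdlib-only check:</p>

```python
# "python3 -m X" imports module X; a module name must be a valid identifier.
print("youtube-dl".isidentifier())  # hyphen makes it unimportable as a module
print("youtube_dl".isidentifier())  # the actual module name, importable
print("submit50".isidentifier())    # why "python3 -m submit50" works
```

<p>As for the path: the error message is printed by whichever <code>python3</code> your shell resolves, which here appears to be Apple's Command Line Tools interpreter. The files in <code>~/Library/Python/3.9/bin</code> are console scripts installed by <code>pip install --user</code>, so running <code>youtube-dl</code> directly (after adding that directory to <code>PATH</code>) is the usual approach.</p>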
|
<python><shell>
|
2023-01-20 14:22:43
| 0
| 1,311
|
F. Zer
|
75,185,226
| 15,233,108
|
How to copy file from directory A to directory B using a list
|
<p>I'm trying to copy files from directory A to directory B, based on a txt file containing the list of files to be extracted, located in directory B. I referred to this code: <a href="https://stackoverflow.com/questions/66408075/how-to-extract-files-from-a-particular-folder-with-filename-stored-in-a-python-l">How to extract files from a particular folder with filename stored in a python list?</a></p>
<p>but it doesn't seem to enter the if branch (where I have put the 'in here' printout). Could someone tell me what I am doing wrong?</p>
<p>This is the code:</p>
<pre><code>import os
import shutil

def read_input_file():
    my_file = open("/mnt/d/Downloads/TSU/remaining_files_noUSD_19Jan.txt", "r")
    # reading the file
    data = my_file.read()
    data_into_list = data.split("\n")
    #print(data_into_list)
    my_file.close()
    return data_into_list

def filter_data(list_of_files):
    path = "/mnt/e/Toyota Smarthome/Untrimmed/Videos_mp4"
    path_to_be_moved = "/mnt/d/Downloads/TSU"
    #print(list_of_files)
    for file in os.listdir(path):
        #print(file)
        if file in list_of_files:
            print("in here")
            print(file)
            shutil.copytree(path, path_to_be_moved)
            #os.system("mv "+path+file+" "+path_to_be_moved)

if __name__ == "__main__":
    list = read_input_file()
    filter_data(list)
</code></pre>
<p>I am using python3 via WSL.</p>
<p>The mp4 folder contains multiple videos, and the output of <code>read_input_file</code> is as follows:</p>
<p><a href="https://i.sstatic.net/ZdmcY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZdmcY.png" alt="output for read input file" /></a></p>
<p>Thank you!</p>
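<p>Two likely problems in the code above: lines read from the txt file usually keep a trailing newline, so <code>file in list_of_files</code> never matches; and <code>shutil.copytree</code> copies the whole directory rather than one file. A hedged stdlib-only sketch (the paths here are throwaway temp directories, not the original ones):</p>

```python
import os
import shutil
import tempfile

# Build a throwaway source dir, dest dir, and file list to demonstrate.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for name in ("a.mp4", "b.mp4", "c.mp4"):
    open(os.path.join(src, name), "w").close()

# Simulate a list file whose lines carry trailing newlines.
wanted_raw = ["a.mp4\n", "c.mp4\n"]

# Strip whitespace when reading the list -- the usual reason the `if` never fires.
wanted = {line.strip() for line in wanted_raw if line.strip()}

copied = []
for file in os.listdir(src):
    if file in wanted:
        # copy2 copies ONE file; copytree would duplicate the whole directory.
        shutil.copy2(os.path.join(src, file), os.path.join(dst, file))
        copied.append(file)

print(sorted(copied))
print(sorted(os.listdir(dst)))
```

<p>In the original code the same idea would be <code>wanted = {line.strip() for line in read_input_file() if line.strip()}</code> plus <code>shutil.copy2(os.path.join(path, file), path_to_be_moved)</code>.</p>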
|
<python>
|
2023-01-20 14:16:09
| 1
| 582
|
Megan Darcy
|
75,185,209
| 5,868,293
|
Generating new data using VAE in keras
|
<p>I have built the following function which takes as input some data and runs a VAE on them:</p>
<pre><code>def VAE(data, original_dim, latent_dim, test_size, epochs):
    x_train, x_test = train_test_split(data, test_size=test_size, random_state=42)

    # Define the VAE architecture
    # Encoder
    encoder_inputs = tf.keras.Input(shape=(original_dim,))
    x = layers.Dense(64, activation='relu')(encoder_inputs)
    x = layers.Dense(32, activation='relu')(x)
    x = layers.Dense(8, activation='relu')(x)

    # --- Custom latent space layer
    z_mean = layers.Dense(units=latent_dim, name='Z-Mean', activation='linear')(x)
    z_log_sigma = layers.Dense(units=latent_dim, name='Z-Log-Sigma', activation='linear')(x)
    z = layers.Lambda(sampling, name='Z-Sampling-Layer')([z_mean, z_log_sigma, latent_dim])  # Z sampling layer

    # Instantiate the encoder
    encoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_sigma, z], name='encoder')

    # Decoder
    latent_inputs = tf.keras.Input(shape=(latent_dim,))
    x = layers.Dense(8, activation='relu')(latent_inputs)
    x = layers.Dense(32, activation='relu')(x)
    x = layers.Dense(64, activation='relu')(x)
    decoder_outputs = layers.Dense(1, activation='relu')(x)

    # Instantiate the decoder
    decoder = tf.keras.Model(latent_inputs, decoder_outputs, name='decoder')

    # Instantiate a VAE model by specifying how the encoder and decoder are linked
    vae = tf.keras.Model(inputs=encoder_inputs, outputs=decoder(encoder(encoder_inputs)[2]), name='vae')

    # Reconstruction loss compares inputs and outputs and tries to minimise the difference
    r_loss = original_dim * tf.keras.losses.mse(encoder_inputs, decoder(encoder(encoder_inputs)[2]))  # use MSE

    # KL divergence loss compares the encoded latent distribution Z with the standard normal distribution
    kl_loss = -0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)

    # VAE total loss
    vae_loss = K.mean(r_loss + kl_loss)

    # Add the loss to the model and compile it
    vae.add_loss(vae_loss)
    vae.compile(optimizer='adam')

    # Train the model
    vae.fit(x_train, x_train, epochs=epochs, validation_data=(x_test, x_test))
</code></pre>
<p>where</p>
<pre><code>def sampling(args):
    z_mean, z_log_sigma, latent_dim = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=1., seed=42)
    return z_mean + K.exp(z_log_sigma) * epsilon
</code></pre>
<p>My question is: if I want to generate new data using the VAE above, how can I achieve that?</p>
<p>If I want to sample 100 new data, should I use this</p>
<pre><code> latent_mean = tf.math.reduce_mean(encoder(x_train)[2], axis=0)
latent_std = tf.math.reduce_std(encoder(x_train)[2], axis=0)
latent_sample = tf.random.normal(shape=(100, latent_dim), mean=latent_mean,
stddev=latent_std)
generated_data = decoder(latent_sample)
</code></pre>
<p>or</p>
<pre><code> latent_mean = tf.math.reduce_mean(encoder(x_train)[0], axis=0)
latent_std = tf.math.reduce_mean(tf.math.exp(encoder(x_train))[1], axis=0)
latent_sample = tf.random.normal(shape=(100, latent_dim), mean=latent_mean,
stddev=latent_std)
generated_data = decoder(latent_sample)
</code></pre>
<p>?</p>
<p>Basically, should I infer <code>z_mean</code> and <code>z_log_sigma</code> from <code>z</code>, or should I use <code>z_mean</code> and <code>z_log_sigma</code> directly? What is the difference?</p>
<p>Moreover, I have seen that <code>tf.random.normal</code> is always used to generate new data from the latent space. Why not use a lognormal distribution, for instance? Is it because of the KL divergence?</p>
<p>The end-goal is the distribution of the <code>generated_data</code> to be as close as possible to the distribution of the original <code>data</code>.</p>
|
<python><machine-learning><keras><deep-learning><autoencoder>
|
2023-01-20 14:15:11
| 1
| 4,512
|
quant
|
75,185,014
| 9,367,543
|
Groupby dataframe to count values
|
<p>I have a dataframe that I want to groupby and get the count per value.</p>
<pre><code>date Severity
01-12-22 Sev1
05-12-22 Sev5
22-12-22 Sev1
01-01-23 Sev4
21-01-23 sev4
30-01-23 sev3
</code></pre>
<p>And I want to count each Severity by month-year:</p>
<pre><code>date Sev1 Sev2 Sev3 Sev4 Sev5
12-22 2 0 0 0 1
01-23 0 0 1 2 0
</code></pre>
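<p>One way to get that shape, assuming pandas and that the mixed case (<code>Sev4</code> vs <code>sev4</code>) should be unified: parse the dates, derive a month-year label, normalize the case, and use <code>pd.crosstab</code>. A sketch on the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["01-12-22", "05-12-22", "22-12-22", "01-01-23", "21-01-23", "30-01-23"],
    "Severity": ["Sev1", "Sev5", "Sev1", "Sev4", "sev4", "sev3"],
})

df["date"] = pd.to_datetime(df["date"], format="%d-%m-%y")
df["Severity"] = df["Severity"].str.capitalize()  # unify 'sev4' -> 'Sev4'
df["month"] = df["date"].dt.strftime("%m-%y")     # month-year label

# Rows = month-year, columns = severity, values = counts.
out = pd.crosstab(df["month"], df["Severity"])
print(out)
```

<p>Severities that never occur (here <code>Sev2</code>) get no column; to force all five, follow up with <code>out.reindex(columns=[f"Sev{i}" for i in range(1, 6)], fill_value=0)</code>.</p>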
|
<python><pandas>
|
2023-01-20 13:59:29
| 1
| 338
|
welu
|
75,184,989
| 7,211,014
|
python argparse default with nargs won't work
|
<p>Here is my code:</p>
<pre><code>from argparse import ArgumentParser, RawTextHelpFormatter

example_text = "test"

parser = ArgumentParser(description='my script.',
                        epilog=example_text,
                        formatter_class=RawTextHelpFormatter)
parser.add_argument('host', type=str, default="10.10.10.10",
                    help="Device IP address or Hostname.")
parser.add_argument('-j', '--json_output', type=str, default="s", nargs='?', choices=["s", "l"],
                    help="Print GET statement in json form.")

# mutually exclusive required settings supplying the key
settingsgroup = parser.add_mutually_exclusive_group(required=True)
settingsgroup.add_argument('-k', '--key', type=str,
                           help="the api-key to use. WARNING take care when using this, the key specified will be in the user's history.")
settingsgroup.add_argument('--config', type=str,
                           help="yaml config file. All parameters can be placed in the yaml file. Parameters provided from the command line will take priority.")

args = parser.parse_args()
print(args.json_output)
</code></pre>
<p>my output:</p>
<pre><code>None
</code></pre>
<p>Everything I am reading online says this should work, but it doesn't. Why?</p>
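<p>With <code>nargs='?'</code> there are three cases: flag absent entirely gives <code>default</code>; flag present with a value gives that value; flag present <em>without</em> a value gives <code>const</code>, which is <code>None</code> when unset. The <code>None</code> output suggests the script was run with a bare <code>-j</code>, so setting <code>const</code> fixes it. A minimal sketch:</p>

```python
from argparse import ArgumentParser

parser = ArgumentParser()
# const= is what you get for a bare "-j"; default= is what you get with no -j at all.
parser.add_argument("-j", "--json_output", nargs="?", choices=["s", "l"],
                    default="s", const="s")

print(parser.parse_args([]).json_output)           # flag absent  -> default
print(parser.parse_args(["-j"]).json_output)       # bare flag    -> const
print(parser.parse_args(["-j", "l"]).json_output)  # explicit value
```
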
|
<python><argparse><default>
|
2023-01-20 13:58:01
| 1
| 1,338
|
Dave
|
75,184,846
| 781,938
|
How do I change the default figure config in the python plotly library?
|
<p>Plotly (python) supports specifying a <code>config</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px

config = {
    'toImageButtonOptions': {
        'format': 'svg',  # one of png, svg, jpeg, webp
    }
}

fig = px.bar(x=[1, 2, 3], y=[1, 3, 1])
fig.show(config=config)
</code></pre>
<p>My question is: is there a way to set a default value for this?</p>
<p>Maybe I have to create a custom template and set <code>pio.templates.default</code>?</p>
|
<python><plotly>
|
2023-01-20 13:46:56
| 0
| 6,130
|
william_grisaitis
|
75,184,817
| 2,196,409
|
Create a list of dictionary keys whose string values are empty, using comprehension
|
<p>How would the following be achieved using comprehension so that <code>bad_keys</code> only contains the keys where the length of the associated <code>value</code> is 0?</p>
<pre><code>def _check_data_for_length(self) -> []:
    """
    checks the lengths of the values contained within the dictionary of members
    returns a list of the keys containing data of length 0.
    :return: [] keys of empty values
    """
    bad_keys = []
    for (key, value) in vars(self).items():
        if len(value) == 0:
            bad_keys.append(key)
    return bad_keys
</code></pre>
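<p>The loop translates directly into one comprehension. A standalone sketch with a plain class standing in for the original object (the attribute names here are invented for illustration); note also that <code>-> []</code> is not a valid return annotation, <code>-> list</code> is:</p>

```python
class Record:
    def __init__(self):
        self.name = "alice"
        self.tags = []    # empty -> should be reported
        self.notes = ""   # empty -> should be reported

    def _check_data_for_length(self) -> list:
        """Return the keys whose values have length 0."""
        return [key for key, value in vars(self).items() if len(value) == 0]

print(Record()._check_data_for_length())
```
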
|
<python><list-comprehension><dictionary-comprehension>
|
2023-01-20 13:44:31
| 1
| 1,173
|
eklektek
|
75,184,736
| 7,697,375
|
add a column to numpy array for groupby rolling count
|
<p>I have a dataframe with 4 columns. I want to add a column that will give a running total by group. I know how to do that in pandas (i.e. <code>cumcount() + 1</code> with <code>groupby</code>), but I want to move the data into a numpy array and vectorize the operation. Example input data:</p>
<p><a href="https://i.sstatic.net/Xr9pV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xr9pV.png" alt="enter image description here" /></a></p>
<p>Example output data with total # of complaints:</p>
<p><a href="https://i.sstatic.net/ZJuLV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZJuLV.png" alt="enter image description here" /></a></p>
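<p>Since the screenshots are not reproduced here, the group column below is made-up stand-in data. A vectorized numpy equivalent of <code>groupby().cumcount() + 1</code>, using a stable sort so no Python loop is needed:</p>

```python
import numpy as np

# Group keys as they appear row by row (stand-in for the screenshot's column).
groups = np.array(["a", "b", "a", "a", "b", "c"])

order = np.argsort(groups, kind="stable")   # rows clustered by group
sg = groups[order]
new_group = np.r_[True, sg[1:] != sg[:-1]]  # True where a new group starts

# Index of each row's group start within the sorted view.
start_idx = np.maximum.accumulate(np.where(new_group, np.arange(len(sg)), 0))
counts = np.arange(len(sg)) - start_idx + 1  # 1-based running count per group

running = np.empty(len(groups), dtype=int)
running[order] = counts                      # scatter back to original row order
print(running)
```

<p>For the example above this yields <code>[1 1 2 3 2 1]</code>: the first "a" is 1, the second "a" is 2, and so on, matching pandas' <code>df.groupby(col).cumcount() + 1</code>.</p>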
|
<python><numpy>
|
2023-01-20 13:38:01
| 0
| 305
|
Pawan Tolani
|
75,184,683
| 11,254,837
|
How to export a dictionary as a .properties file in python
|
<p>How can a dictionary in python be exported to the <code>.properties</code> file format?</p>
<p>What I have:</p>
<p><code>dict = {"key1":"sentence1", "key2":"sentence2"}</code></p>
<p>The object <code>dict</code> should be saved as a <code>.properties</code> file; the output should look like this:</p>
<pre><code>key1=sentence1
key2=sentence2
</code></pre>
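<p>For simple string values this is just writing <code>key=value</code> lines (full .properties escaping of <code>=</code>, <code>:</code>, and non-Latin-1 characters is not handled in this sketch). The variable is renamed <code>settings</code> to avoid shadowing the built-in <code>dict</code>:</p>

```python
import os
import tempfile

def dict_to_properties(d, path):
    """Write a dict as key=value lines, the basic .properties layout."""
    with open(path, "w", encoding="utf-8") as f:
        for key, value in d.items():
            f.write(f"{key}={value}\n")

settings = {"key1": "sentence1", "key2": "sentence2"}
path = os.path.join(tempfile.mkdtemp(), "out.properties")
dict_to_properties(settings, path)

print(open(path, encoding="utf-8").read())
```
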
|
<python><dictionary><properties-file>
|
2023-01-20 13:33:16
| 2
| 800
|
pikachu
|
75,184,649
| 1,169,091
|
Why is cross_val_score not producing consistent results?
|
<p>When this code executes the results are not consistent.
Where is the randomness coming from?</p>
<pre><code>from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

seed = 42
iris = datasets.load_iris()
X = iris.data
y = iris.target

pipeline = Pipeline([('std', StandardScaler()),
                     ('pca', PCA(n_components=4)),
                     ('Decision_tree', DecisionTreeClassifier())],
                    verbose=False)

kfold = KFold(n_splits=10, random_state=seed, shuffle=True)
results = cross_val_score(pipeline, X, y, cv=kfold)
print(results.mean())
</code></pre>
<p>Results from five separate runs:</p>
<pre><code>0.9466666666666667
0.9266666666666665
0.9466666666666667
0.9400000000000001
0.9266666666666665
</code></pre>
|
<python><scikit-learn><cross-validation>
|
2023-01-20 13:29:47
| 1
| 4,741
|
nicomp
|
75,184,615
| 774,575
|
How to create a 2D array of scaled+shifted unit impulses?
|
<p>I'm looking for an efficient way to get a 2D array like this:</p>
<pre><code>array([[ 2., -0., -0., 0., -0., -0., 0., 0., -0., 0.],
[ 0., -1., -0., 0., -0., -0., 0., 0., -0., 0.],
[ 0., -0., -5., 0., -0., -0., 0., 0., -0., 0.],
[ 0., -0., -0., 2., -0., -0., 0., 0., -0., 0.],
[ 0., -0., -0., 0., -5., -0., 0., 0., -0., 0.],
[ 0., -0., -0., 0., -0., -1., 0., 0., -0., 0.],
[ 0., -0., -0., 0., -0., -0., 0., 0., -0., 0.],
[ 0., -0., -0., 0., -0., -0., 0., 2., -0., 0.],
[ 0., -0., -0., 0., -0., -0., 0., 0., -5., 0.],
[ 0., -0., -0., 0., -0., -0., 0., 0., -0., 4.]])
</code></pre>
<p>Only the diagonal elements contain values.
My current attempt:</p>
<pre><code>import numpy as np
N = 10
k = np.random.randint(-5, 5, size=N) # weights
xk = k * np.identity(N) # scaled+shifted unit impulses
</code></pre>
<p>Is there a way to get <code>k * np.identity(N)</code> directly, perhaps in <code>scipy</code>, as this type of array is common in DSP?</p>
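<p><code>np.diag</code> builds the matrix directly from the weight vector, without materializing an identity matrix and multiplying. A sketch:</p>

```python
import numpy as np

N = 10
rng = np.random.default_rng(0)
k = rng.integers(-5, 5, size=N)  # weights

xk = np.diag(k)  # N x N matrix with k on the diagonal, zeros elsewhere

print(xk.shape)
```

<p>If N grows large and the matrix stays diagonal downstream, <code>scipy.sparse.diags(k)</code> stores only the diagonal (assuming sparse storage is acceptable to whatever consumes the array).</p>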
|
<python><arrays><scipy><signal-processing>
|
2023-01-20 13:27:12
| 1
| 7,768
|
mins
|
75,184,572
| 1,512,250
|
IndexError: Index contains null values when adding dataframe to featuretools EntitySet
|
<p>I have my dataframe which I want to add to EntitySet:</p>
<pre><code> Unnamed: 0 Year name Pos Age Tm G GS \
24672 24672 2017.0 Troy Williams SF 22.0 TOT 30.0 16.0
24675 24675 2017.0 Kyle Wiltjer PF 24.0 HOU 14.0 0.0
24688 24688 2017.0 Stephen Zimmerman C 20.0 ORL 19.0 0.0
24689 24689 2017.0 Paul Zipser SF 22.0 CHI 44.0 18.0
24690 24690 2017.0 Ivica Zubac C 19.0 LAL 38.0 11.0
MP PER ... FT% ORB DRB TRB AST STL BLK TOV \
24672 557.0 8.9 ... 0.656 15.0 54.0 69.0 25.0 27.0 10.0 33.0
24675 44.0 6.7 ... 0.500 4.0 6.0 10.0 2.0 3.0 1.0 5.0
24688 108.0 7.3 ... 0.600 11.0 24.0 35.0 4.0 2.0 5.0 3.0
24689 843.0 6.9 ... 0.775 15.0 110.0 125.0 36.0 15.0 16.0 40.0
24690 609.0 17.0 ... 0.653 41.0 118.0 159.0 30.0 14.0 33.0 30.0
PF PTS
24672 60.0 185.0
24675 4.0 13.0
24688 17.0 23.0
24689 78.0 240.0
24690 66.0 284.0
</code></pre>
<p>When I try to add dataframe to featuretools EntitySet, like this:</p>
<pre><code>entity_set.add_dataframe(dataframe_name="season_stats",
dataframe=season_stats,
index='name'
)
</code></pre>
<p>I receive such an error:</p>
<pre><code> /usr/local/lib/python3.8/dist-packages/woodwork/table_accessor.py in _check_index(dataframe, index)
1694
1695 if dataframe[index].isnull().any():
-> 1696 raise IndexError("Index contains null values")
1697
1698
IndexError: Index contains null values
</code></pre>
<p>What am I doing wrong?</p>
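<p>Per the traceback, woodwork rejects any index column that contains nulls. So some rows of <code>season_stats</code> likely have a missing <code>name</code>; either drop/fill those rows first, or let featuretools create its own index. A pandas-only sketch of the check and the cleanup (the data below is invented to show the failure mode):</p>

```python
import pandas as pd

season_stats = pd.DataFrame({
    "name": ["Troy Williams", None, "Paul Zipser"],  # a null name breaks the index
    "PTS": [185.0, 13.0, 240.0],
})

# The same test woodwork runs before raising IndexError:
print(season_stats["name"].isnull().any())  # True -> add_dataframe would fail

clean = season_stats.dropna(subset=["name"]).reset_index(drop=True)
print(clean["name"].isnull().any())  # False -> safe to use as the index
```

<p>Alternatively, <code>entity_set.add_dataframe(..., index="row_id", make_index=True)</code> sidesteps the issue by generating a fresh index column. Note also that player names can repeat across rows, so a genuinely unique index may be preferable anyway.</p>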
|
<python><pandas><featuretools>
|
2023-01-20 13:23:00
| 1
| 3,149
|
Rikki Tikki Tavi
|
75,184,483
| 5,368,122
|
Connect to azure sql with managed identity python
|
<p>I have a compute instance in Azure ML that I am using for development. I am trying to connect to an Azure SQL database with managed identity, but I am unable to do so, as it returns the error:</p>
<pre><code>Traceback (most recent call last):
File "active_monitoring/dbtester.py", line 8, in <module>
err_mart_conn.open_connection(local=False)
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/ourrehman2/code/Users/ourrehman/Sweden_cashflow_forecasting_aml/ml_logic/active_monitoring/db_manager.py", line 47, in open_connection
self.conn = pyodbc.connect(self.conn_str)
pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
</code></pre>
<p>Here is the <strong>user managed identity</strong>:
<a href="https://i.sstatic.net/Jokmc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jokmc.png" alt="enter image description here" /></a></p>
<p>and it is linked to my compute as such:
<a href="https://i.sstatic.net/c3TvS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c3TvS.png" alt="enter image description here" /></a></p>
<p>The user was created on sql side as such:</p>
<pre><code>CREATE USER [cluster-xxxxxxxxxx-dev] FROM EXTERNAL PROVIDER
EXEC sp_addrolemember 'db_datareader', 'cluster-xxxxxxxxxx-dev'
EXEC sp_addrolemember 'db_datawriter', 'cluster-xxxxxxxxxx-dev'
</code></pre>
<p>Also, <strong>on the sql side, we have firewalls, but exceptions are made for any azure resource trying to connect. My compute is on AML, so it should count as an azure resource, I believe.</strong></p>
<p>I have installed sql driver 18 using the following code:</p>
<pre><code>if ! [[ "18.04 20.04 22.04" == *"$(lsb_release -rs)"* ]];
then
echo "Ubuntu $(lsb_release -rs) is not currently supported.";
exit;
fi
sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
# optional: for bcp and sqlcmd
sudo ACCEPT_EULA=Y apt-get install -y mssql-tools18
echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc
source ~/.bashrc
# optional: for unixODBC development headers
sudo apt-get install -y unixodbc-dev
</code></pre>
<p>I have the following class to <strong>connect to the database</strong>:</p>
<pre><code>class DBManager:
    def __init__(self, server: str, database: str, driver='{ODBC Driver 18 for SQL Server}'):
        self.server = server
        self.database = database
        self.conn_str = f"Driver={{ODBC Driver 18 for SQL Server}};Server={server};Database={database};Authentication=ActiveDirectoryMsi"
        self.logging = Logger().getLogger(__name__)
        self.conn = None
        self.cursor = None

    def open_connection(self, local=True):
        if local:
            # open connection to local database
            pass
        else:
            print(self.conn_str)
            self.conn = pyodbc.connect(self.conn_str)
            self.cursor = self.conn.cursor()
            try:
                self.logging.info('Verifying the connection...')
                self.cursor.execute("SELECT getdate()")
                _ = self.cursor.fetchone()
                self.logging.info("Connection successful")
            except Exception as e:
                self.logging.error("Unable to connect: %s", str(e))
                raise e

    def execute_query(self, query):
        if query is None:
            self.logging.info('Empty query passed.')
<p>The calling code is:</p>
<pre><code>from db_manager import DBManager
server = 'myservername' # parametrize this
database = 'mydatabasename' # parametrize this
err_mart_conn = DBManager(server, database)
err_mart_conn.open_connection(local=False)
</code></pre>
|
<python><azure><odbc><azure-sql-database>
|
2023-01-20 13:12:47
| 1
| 844
|
Obiii
|
75,184,391
| 16,622,985
|
Windowing Strategy for Unbounded Side Input
|
<p>I have a pipeline which streams IoT log data. In addition, it has a bounded side input, containing the initial configurations of the devices. This configuration changes over time and has to be updated by specific logs (ConfigLogs) coming from the main PubSub source. All remaining logs (PayloadLogs) need to consume at some point the updated configuration.</p>
<p>In order to have access to the latest configuration, I came up with the following pipeline design:</p>
<p><a href="https://i.sstatic.net/1Bo1g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Bo1g.png" alt="enter image description here" /></a></p>
<p>However, I was unable to get this to work. In particular, I struggle with the correct window/trigger strategy for the side input of <code>Use Config Side Input</code>.</p>
<p>Here is a pseudocode example</p>
<pre><code>import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.transforms.trigger import AccumulationMode, Repeatedly, AfterCount


class UpdateConfig(beam.DoFn):
    # currently dummy logic
    def process(self, element, outdated_config_list):
        outdated_config = outdated_config_list[0]
        print(element)
        print(outdated_config)
        yield {**outdated_config, **{element[0]: element[1]}}


class UseConfig(beam.DoFn):
    # currently dummy logic
    def process(self, element, latest_config_list):
        print(element)
        print(latest_config_list)


with beam.Pipeline() as pipeline:
    # Bounded side input
    outdated_config = (
        pipeline
        | "Config BQ" >> beam.Create([{3: 'config3', 4: 'config4', 5: 'config5'}])
        | "Window Side Input" >> beam.WindowInto(window.GlobalWindows())
    )

    # Unbounded source
    config_logs, payload_logs = (
        pipeline
        | "IoT Data" >>
        beam.io.ReadFromPubSub(subscription="MySub").with_output_types(bytes)
        | "Decode" >> beam.Map(lambda x: eval(x.decode('utf-8')))
        | "Partition Config/Payload Logs" >>
        beam.Partition(lambda x, nums: 0 if type(x[0]) == int else 1, 2)
    )

    # Update of bounded side input with part of the unbounded source
    latest_config = (
        config_logs
        | "Update Batch Config" >>
        beam.ParDo(UpdateConfig(), outdated_config_list=beam.pvalue.AsList(outdated_config))
        | "LatestConfig Window/Trigger" >>
        beam.WindowInto(
            window.GlobalWindows(),
            trigger=Repeatedly(AfterCount(1)),
            accumulation_mode=AccumulationMode.DISCARDING
        )
    )

    # Use updated side input
    (
        payload_logs
        | "Use Config Side Input" >>
        beam.ParDo(UseConfig(), latest_config_list=beam.pvalue.AsList(latest_config))
    )
</code></pre>
<p>which I feed with the following type of dummy data</p>
<pre><code># ConfigLog: (1, 'config1')
# PayloadLog: ('1', 'value1')
</code></pre>
<p>The prints of <code>Update Batch Config</code> are always executed, however Dataflow waits indefinitely in <code>Use Config Side Input</code>, even though I added an <code>AfterCount</code> trigger in <code>LatestConfig Window/Trigger</code>.</p>
<p>When I switch to a <code>FixedWindow</code> (and also add one for the <code>PayloadLogs</code>), I do get the prints in <code>Use Config Side Input</code>, but the side input is mostly empty, since the <code>FixedWindow</code> is triggered after a fixed amount of time, irrespectively if I had an incoming <code>ConfigLog</code> or not.</p>
<p>What is the correct windowing/triggering strategy when using an unbounded side input?</p>
|
<python><google-cloud-dataflow><apache-beam>
|
2023-01-20 13:03:20
| 0
| 1,176
|
CaptainNabla
|
75,184,383
| 2,715,216
|
dbx execute install from azure artifacts / private pypi
|
<p>I would like to use dbx execute to run a task/job on an Azure Databricks cluster.
However, I cannot make it install my code.</p>
<p>More Details on the situation:</p>
<ul>
<li>Project A with a setup.py depends on Project B</li>
<li>Project B is also Python-based and is released as an Azure DevOps artifact</li>
<li>I can successfully install A via an init script on an Azure Databricks cluster that git-clones both projects and then pip-installs B and A in editable mode</li>
<li>It also works when I create a pip.conf in the init script that configures a token for my artifacts feed</li>
<li>So dbx deploy/launch works fine, as my clusters use the init script</li>
<li>However, dbx execute always fails, telling me that it cannot find and install Project B</li>
</ul>
<p>Does anyone know how to configure the pip that is used during the dbx execute installation process? Somehow it seems to ignore any conf which was set with init scripts.</p>
<p>I searched through lots of documentation such as <a href="https://docs.databricks.com/libraries/index.html" rel="nofollow noreferrer">https://docs.databricks.com/libraries/index.html</a> and
<a href="https://dbx.readthedocs.io/en/latest/reference/deployment/#advanced-package-dependency-management" rel="nofollow noreferrer">https://dbx.readthedocs.io/en/latest/reference/deployment/#advanced-package-dependency-management</a> but with no luck</p>
<p>When I look into the dbx package, it seems there is no option to set a pip.conf :(
<a href="https://github.com/databrickslabs/dbx/blob/main/dbx/commands/execute.py" rel="nofollow noreferrer">https://github.com/databrickslabs/dbx/blob/main/dbx/commands/execute.py</a></p>
|
<python><azure><databricks><azure-databricks><dbx>
|
2023-01-20 13:02:29
| 1
| 371
|
thompson
|
75,184,357
| 16,829,292
|
What are the best practices to structure a Django project for scale?
|
<p>Django is great. But as we add new features and as our dev team grows, while the software needs to stay stable in production, things can get quite messy.</p>
<p>We are going to want some common patterns, derived from experience, on how to structure your Django project for scale and longevity.</p>
<p>What is your suggestion?</p>
<p>Indeed, there are many models and patterns for development, but we want to know your experiences.</p>
<p>Edit: I found this and I think it is helpful, but my question is: what does experience say?
<a href="https://maktabkhooneh.org/mag/wp-content/uploads/2022/04/Django-Design-Patterns-and-Best-Practices.pdf" rel="nofollow noreferrer">https://maktabkhooneh.org/mag/wp-content/uploads/2022/04/Django-Design-Patterns-and-Best-Practices.pdf</a></p>
|
<python><django><django-rest-framework>
|
2023-01-20 13:00:16
| 1
| 323
|
Amin Zayeromali
|
75,184,221
| 12,403,550
|
Convert nested dict to dataframe, syntax error?
|
<h4>Problem</h4>
<p>I am converting multiple nested dicts to dataframes. I have a slightly different dict that I haven't been able to convert to a dataframe using my attempted solution. I am providing a shortened copy of my dict with dummy values as the reprex.</p>
<h4>Reprex dict:</h4>
<pre><code>{'metrics': [{'metric': 'DatasetCorrelationsMetric',
'result': {'current': {'stats': {'pearson': {'target_prediction_correlation': None,
'abs_max_features_correlation': 0.1},
'cramer_v': {'target_prediction_correlation': None,
'abs_max_features_correlation': None}}},
'reference': {'stats': {'pearson': {'target_prediction_correlation': None,
'abs_max_features_correlation': 0.7},
'cramer_v': {'target_prediction_correlation': None,
'abs_max_features_correlation': None}}}}}]}
</code></pre>
<h4>My attempted solution</h4>
<p>Code is based on similar dict wrangling problems that I had, but I am not sure how to apply it for this specific dict.</p>
<pre><code>data = {}
for result in reprex_dict['metrics']:
    data[result['result']] = {
        **{f"ref_{key}": val for key, val in result['result']['reference'].items()},
        **{f"cur_{key}": val for key, val in result['result']['current'].items()}
    }
</code></pre>
<h4>Expected dataframe format:</h4>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">cur_pearson_target_prediction_correlation</th>
<th style="text-align: right;">cur_pearson_abs_max_features_correlation</th>
<th style="text-align: right;">cur_cramer_v_target_prediction_correlation</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">None</td>
<td style="text-align: right;">0.1</td>
<td style="text-align: right;">None</td>
</tr>
</tbody>
</table>
</div><h4>Error message</h4>
<p>I am currently getting this error too.</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [403], line 7
5 data = {}
6 for result in corr_matrix_dict['metrics']:
----> 7 data[result['result']] = {
8 **{f"ref_{key}": val for key, val in result['result']['reference']['stats'].items()},
9 **{f"cur_{key}": val for key, val in result['result']['current']['stats'].items()}
10 }
TypeError: unhashable type: 'dict'
</code></pre>
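<p>The <code>TypeError</code> comes from using <code>result['result']</code> (a dict, which is unhashable) as a dictionary <em>key</em>. Since only <code>current</code>/<code>reference</code> need prefixing and the nesting below them is uniform, a small manual flatten produces the wanted column names; a sketch on the reprex dict:</p>

```python
import pandas as pd

reprex_dict = {"metrics": [{"metric": "DatasetCorrelationsMetric",
    "result": {"current": {"stats": {"pearson": {"target_prediction_correlation": None,
                                                 "abs_max_features_correlation": 0.1},
                                     "cramer_v": {"target_prediction_correlation": None,
                                                  "abs_max_features_correlation": None}}},
               "reference": {"stats": {"pearson": {"target_prediction_correlation": None,
                                                   "abs_max_features_correlation": 0.7},
                                       "cramer_v": {"target_prediction_correlation": None,
                                                    "abs_max_features_correlation": None}}}}}]}

rows = []
for m in reprex_dict["metrics"]:
    flat = {}
    for side, prefix in (("current", "cur"), ("reference", "ref")):
        stats = m["result"][side]["stats"]
        for method, vals in stats.items():      # pearson / cramer_v
            for key, val in vals.items():
                flat[f"{prefix}_{method}_{key}"] = val
    rows.append(flat)                           # one row per metric entry

df = pd.DataFrame(rows)
print(df["cur_pearson_abs_max_features_correlation"].iloc[0])
```

<p><code>pd.json_normalize(reprex_dict["metrics"], sep="_")</code> is an alternative, though its column names then carry the full <code>result_current_stats_...</code> prefix rather than the short <code>cur_</code>/<code>ref_</code> ones.</p>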
|
<python><dataframe><dictionary>
|
2023-01-20 12:45:51
| 1
| 433
|
prayner
|
75,184,131
| 12,078,893
|
Why does loc[[]] not work for a single column?
|
<p>In the official documentation for the loc function in pandas, it is written that using double brackets, loc[[]], returns a dataframe.</p>
<blockquote>
<p>Single tuple. Note using [[]] returns a DataFrame.
<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html#" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html#</a></p>
</blockquote>
<p>But this seems to not work when I am trying to convert a single column.</p>
<p>So for example, for a dataframe df</p>
<pre><code>df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
index=['cobra', 'viper', 'sidewinder'],
columns=['max_speed', 'shield'])
</code></pre>
<p>I get code results as follows</p>
<p><a href="https://i.sstatic.net/kpKva.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kpKva.png" alt="enter image description here" /></a></p>
<p>In the last case, it seems that using double brackets on a single column gives a syntax error. I do not understand this, as using a single bracket worked for a single column, and using double brackets worked for a single row. Can someone explain to me why this happens?</p>
<p>Thanks in advance</p>
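<p>Since the screenshot is not reproduced here, the failing call is presumably <code>df.loc[['max_speed']]</code>. That fails not as a syntax error but because the first (and only) positional argument to <code>.loc</code> selects <em>rows</em>, so <code>'max_speed'</code> is looked up among the row labels and raises a <code>KeyError</code>. Columns go after a comma:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
                  index=['cobra', 'viper', 'sidewinder'],
                  columns=['max_speed', 'shield'])

print(df.loc[['cobra']])         # double brackets on a ROW label: 1x2 DataFrame
print(df.loc[:, ['max_speed']])  # all rows, one column, as a DataFrame
print(df[['max_speed']].shape)   # plain [[...]] also selects columns
```
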
|
<python><pandas><syntax-error>
|
2023-01-20 12:37:31
| 2
| 349
|
김예군
|
75,184,039
| 16,607,067
|
Overriding django admin get_queryset()
|
<p>I have two models, one of which is a proxy model.
In the admin I registered both and overrode the <code>get_queryset()</code> method, but it is not working as expected.<br />
<strong>admin.py</strong></p>
<pre class="lang-py prettyprint-override"><code>@admin.register(Category)
class CategoryAdmin(admin.ModelAdmin):
    def get_queryset(self, request):
        qs = super().get_queryset(request)
        return qs.filter(language='en')


@admin.register(ProxyCategory)
class ProxyCategoryAdmin(CategoryAdmin):
    def get_queryset(self, request):
        qs = super().get_queryset(request)
        return qs.filter(language='zh')
</code></pre>
<p>In the admin page, <em>ProxyCategoryAdmin</em> shows no objects. If I remove <code>get_queryset()</code> from <em>CategoryAdmin</em> it works, but I want to filter both of them.
Thanks in advance</p>
|
<python><django><django-models><inheritance><django-admin>
|
2023-01-20 12:28:53
| 2
| 439
|
mirodil
|
75,183,965
| 3,968,048
|
Is it possible to manually correct a POS tag in Spacy, and also refetch the lemma after updating the POS?
|
<p>Let's say I have two sentences. In english <code>I took a fall last month.</code>, and in spanish <code>Tomé una caída el mes pasado.</code></p>
<p>Spacy gives me the following:</p>
<pre><code>I took a fall last month .
"PRON", "VERB", "DET", "NOUN", "ADJ", "NOUN", "PUNCT"
</code></pre>
<p>And</p>
<pre><code>Tomé una caída el mes pasado .
"PROPN", "DET", "NOUN", "DET", "NOUN", "ADJ", "PUNCT"
</code></pre>
<p>Spacy sees <code>Tomé</code> as a name rather than as a verb, but I know it's a verb because of the English POS tagging. My issue is less with the incorrect POS itself (though it would be nice if it were correct, I can fix that); the problem is that, because the POS tag is incorrect, the lemma for <code>Tomé</code> is <code>Tomé</code> instead of <code>tomar</code>.</p>
<p>If I do the following <code>doc[0].pos_ = "VERB"</code>, it updates the <code>pos_</code> field to be a verb, but the <code>doc[0].lemma_</code> still shows <code>Tomé</code>. Is there a way to refetch the lemma after updating the POS. Or even better, or in addition, is there a way to access lemma outside the context of a sentence. E.g. just passing in <code>Tomé</code> and <code>VERB</code> to get back <code>tomar</code>?</p>
<p>Interestingly, this is using <code>es_core_news_lg</code>. If I use <code>es_core_news_sm</code>, it correctly tags <code>Tomé</code> as a <code>VERB</code>, however in practice trying to figure out at runtime which model is better for a given sentence would be infeasible, so I'm defaulting to using <code>*_lg</code></p>
|
<python><spacy>
|
2023-01-20 12:20:48
| 1
| 3,536
|
Peter R
|
75,183,958
| 4,620,679
|
Why pandas DataFrame allows to set column using too large Series?
|
<p>Is there a reason why pandas raises a ValueError exception when setting a DataFrame column using a list, but doesn't do the same when using a Series? This results in superfluous Series values being silently ignored (e.g. the 7 in the example below).</p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> df = pd.DataFrame([[1],[2]])
>>> df
0
0 1
1 2
>>> df[0] = [5,6,7]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python310\lib\site-packages\pandas\core\frame.py", line 3655, in __setitem__
self._set_item(key, value)
File "D:\Python310\lib\site-packages\pandas\core\frame.py", line 3832, in _set_item
value = self._sanitize_column(value)
File "D:\Python310\lib\site-packages\pandas\core\frame.py", line 4529, in _sanitize_column
com.require_length_match(value, self.index)
File "D:\Python310\lib\site-packages\pandas\core\common.py", line 557, in require_length_match
raise ValueError(
ValueError: Length of values (3) does not match length of index (2)
>>>
>>> df[0] = pd.Series([5,6,7])
>>> df
0
0 5
1 6
</code></pre>
<p>Tested using python 3.10.6 and pandas 1.5.3 on Windows 10.</p>
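<p>A short sketch of the two behaviours side by side — the Series assignment silently aligns on the index and drops the extra label, while the list has no index to align on, so its length must match exactly (behaviour as observed on pandas 1.5.x; assumed stable across recent versions):</p>

```python
import pandas as pd

df = pd.DataFrame([[1], [2]])

# a Series is aligned on the index: labels 0 and 1 are kept, label 2 is dropped
df[0] = pd.Series([5, 6, 7])
print(df[0].tolist())  # [5, 6]

# a plain list cannot be aligned, so pandas checks the length instead
try:
    df[0] = [5, 6, 7]
except ValueError as err:
    print(type(err).__name__)  # ValueError
```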
|
<python><pandas><dataframe><series>
|
2023-01-20 12:20:10
| 1
| 501
|
michalmonday
|
75,183,956
| 3,251,645
|
Import file from subdirectory into file in another subdirectory
|
<p>I have a python project a folder structure like this:</p>
<pre><code>main_directory
main.py
drivers
__init__.py
xyz.py
utils
__init__.py
connect.py
</code></pre>
<p>I want to import <code>connect.py</code> into <code>xyz.py</code> and here's my code:</p>
<pre><code>from utils import connect as dc
</code></pre>
<p>But I keep getting this error no matter what I do, please help:</p>
<pre><code>ModuleNotFoundError: No module named 'utils'
</code></pre>
<p><strong>Update</strong>: People are telling me to set the path or directory; I don't understand why I need to do this just to import a file. This is something that should work automatically.</p>
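<p>It only works automatically when Python can find <code>main_directory</code> on <code>sys.path</code> — running <code>xyz.py</code> directly puts <code>drivers/</code>, not the project root, on the path. A self-contained sketch of the usual fix (the directory layout is rebuilt in a temp folder purely so the example runs on its own):</p>

```python
import os
import sys
import tempfile

# rebuild the project layout in a temp dir just to make the example runnable
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "utils"))
open(os.path.join(root, "utils", "__init__.py"), "w").close()
with open(os.path.join(root, "utils", "connect.py"), "w") as fh:
    fh.write("GREETING = 'hello from connect'\n")

# the fix: make sure the *project root* is on sys.path before importing;
# in the real project this would be the directory containing main.py
sys.path.insert(0, root)

from utils import connect as dc
print(dc.GREETING)  # hello from connect
```

<p>Alternatively, always run the project via <code>python main.py</code> from <code>main_directory</code> and import the drivers as a package from there: the script's own directory is added to <code>sys.path</code> automatically, which is why running <code>xyz.py</code> directly cannot see <code>utils</code>.</p>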
|
<python><python-import><python-packaging>
|
2023-01-20 12:20:01
| 4
| 2,649
|
Amol Borkar
|
75,183,616
| 12,936,009
|
neovim: pwntools process automatically stops when trying to invoke interactive shell
|
<p>I'm using python pwntools.
I'm using <em><strong>python 3.10.x</strong></em></p>
<p>This line of code should open a shell for me:
<code>io.interactive()</code></p>
<p>But when running this file from vim using
<code>!./%</code>, the interactive shell doesn't open as it is supposed to; instead, the process stops.</p>
<p>The code:</p>
<pre><code>#!/usr/bin/python3
from pwn import *
elf = context.binary = ELF("house_of_force")
libc = elf.libc
gs = '''
continue
'''
def start():
if args.GDB:
return gdb.debug(elf.path, gdbscript=gs)
else:
return process(elf.path)
# Select the "malloc" option, send size & data.
def malloc(size, data):
io.send("1")
io.sendafter("size: ", f"{size}")
io.sendafter("data: ", data)
io.recvuntil("> ")
# Calculate the "wraparound" distance between two addresses.
def delta(x, y):
return (0xffffffffffffffff - x) + y
io = start()
# This binary leaks the address of puts(), use it to resolve the libc load address.
io.recvuntil("puts() @ ")
libc.address = int(io.recvline(), 16) - libc.sym.puts
# This binary leaks the heap start address.
io.recvuntil("heap @ ")
heap = int(io.recvline(), 16)
io.recvuntil("> ")
io.timeout = 0.1
# =============================================================================
# =-=-=- EXAMPLE -=-=-=
# The "heap" variable holds the heap start address.
log.info(f"heap: 0x{heap:02x}")
# Program symbols are available via "elf.sym.<symbol name>".
log.info(f"target: 0x{elf.sym.target:02x}")
# The malloc() function chooses option 1 from the menu.
# Its arguments are "size" and "data".
malloc(24, b"Y"*24)
# The delta() function finds the "wraparound" distance between two addresses.
log.info(f"delta between heap & main(): 0x{delta(heap, elf.sym.main):02x}")
# =============================================================================
io.interactive()
</code></pre>
<p><em><strong>Error:</strong></em></p>
<pre><code>/home/pegasus/Documents/Courses/HeapLAB-main/house_of_force/./exploit.py:19: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
io.sendafter("size: ", f"{size}")
/home/pegasus/.local/lib/python3.10/site-packages/pwnlib/tubes/tube.py:813: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
res = self.recvuntil(delim, timeout=timeout)
/home/pegasus/Documents/Courses/HeapLAB-main/house_of_force/./exploit.py:21: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
io.recvuntil("> ")
[*] delta between heap & main(): 0xfffffffffebd9816
[*] Switching to interactive mode
[*] Stopped process '/home/pegasus/Documents/Courses/HeapLAB-main/house_of_force/house_of_force' (pid 7496)
</code></pre>
|
<python><python-interactive><interactive-shell><pwntools>
|
2023-01-20 11:48:03
| 0
| 847
|
NobinPegasus
|
75,183,571
| 4,332,480
|
DRF: Which if better to create custom structure of response in Serializer/ModelSerializer?
|
<p>I am currently making a simple <code>CRUD</code> application on <code>Django Rest Framework</code>.</p>
<p>I need to return a response to the client for any request in a specific structure.</p>
<p>For example, if a client makes a <code>POST</code> request to create a new record and it was executed successfully, then API needs to return such structure:</p>
<pre><code>{
"data": [
{
"id": 1,
"email": "bobmarley@gmail.com",
}
],
"error": {}
}
</code></pre>
<p>Let's say the problem is related to the model field. In this case, the API should return such a structure:</p>
<pre><code>{
"data": [],
"error": {
"email": [
"This field is required."
]
}
}
</code></pre>
<p>If the problem is not related to the model field, then it is necessary to return to the client such a structure where there would be a description of the error:</p>
<pre><code>{
"data": [],
"error": {
"non_field_errors": [
"Description of the error."
]
}
}
</code></pre>
<p>Depending on the error, I also have to return different statuses in the query responses.</p>
<p><strong>openapi-schema.js:</strong></p>
<pre><code> /clients:
post:
summary: Create New Client
operationId: post-clients
responses:
'200':
description: Client Created
content:
application/json:
schema:
$ref: '#/components/schemas/Result'
examples: {}
'400':
description: Missing Required Information
'409':
description: Email Already Taken
</code></pre>
<p>My current code returns an incorrect structure. Should I configure all this at the serialization level?</p>
<pre><code>{
"data": [],
"error": {
"non_field_errors": [
"{'email': [ErrorDetail(string='person with this email already exists.', code='unique')]}"
]
}
}
</code></pre>
<p><strong>models.py:</strong></p>
<pre><code>class Client(models.Model):
id = models.AutoField(primary_key=True)
email = models.EmailField(unique=True)
class Meta:
db_table = "clients"
def __str__(self):
return self.email
</code></pre>
<p><strong>serializers.py:</strong></p>
<pre><code>class ClientSerializer(serializers.ModelSerializer):
class Meta:
        model = Client
        fields = "__all__"
</code></pre>
<p><strong>views.py:</strong></p>
<pre><code>class ClientView(APIView):
def post(self, request):
data = []
error = {}
result = {"data": data, "error": error}
try:
client_serializer = ClientSerializer(data=request.data)
client_serializer.is_valid(raise_exception=True)
client_serializer.save()
data.append(client_serializer.data)
return Response(result, status=status.HTTP_201_CREATED)
except Exception as err:
error['non_field_errors'] = [str(err)]
return Response(result, status=status.HTTP_200_OK)
</code></pre>
|
<python><django><django-rest-framework><django-views><django-serializer>
|
2023-01-20 11:43:47
| 2
| 5,276
|
Nurzhan Nogerbek
|
75,183,494
| 2,635,863
|
scale within groups with scikit-learn
|
<pre><code>from sklearn.preprocessing import scale
df = pd.DataFrame({'x':['a','a','a','a','b','b','b','b'], 'y':[1,1,2,2,1,1,2,2], 'z':[12,32,14,64,24,67,44,33]})
</code></pre>
<p>I'm trying to scale column <code>z</code> for each combination of <code>x</code> and <code>y</code>:</p>
<pre><code> x y z z2
0 a 1 1 -1.22
1 a 1 2 0
2 a 1 3 1.22
3 a 2 3 -1.07
4 a 2 4 -0.27
5 a 2 6 1.34
6 b 1 4 0.71
7 b 1 2 -1.41
8 b 1 4 0.71
9 b 2 6 1.34
10 b 2 4 -0.27
11 b 2 3 -1.07
</code></pre>
<p>I tried</p>
<pre><code>df['z2'] = df.groupby(['x','y'])['z'].apply(scale)
</code></pre>
<p>but this returns the error</p>
<pre><code>TypeError: incompatible index of inserted column with frame index
</code></pre>
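<p>The error comes from <code>apply</code> returning arrays whose index no longer lines up with the frame. A sketch of the usual workaround with <code>transform</code>, using a hand-rolled z-score (population std, i.e. <code>ddof=0</code>, which is what <code>sklearn.preprocessing.scale</code> uses) so the example runs without scikit-learn:</p>

```python
import pandas as pd

df = pd.DataFrame({'x': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
                   'y': [1, 1, 2, 2, 1, 1, 2, 2],
                   'z': [12, 32, 14, 64, 24, 67, 44, 33]})

# transform returns a result aligned with the original index,
# so it can be assigned straight back to a new column
df['z2'] = (df.groupby(['x', 'y'])['z']
              .transform(lambda s: (s - s.mean()) / s.std(ddof=0)))

print(df)
```

<p>Passing <code>scale</code> itself to <code>transform</code> should behave the same way, since <code>transform</code> accepts callables that return an array of the same length.</p>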
|
<python><pandas><scikit-learn>
|
2023-01-20 11:36:28
| 1
| 10,765
|
HappyPy
|
75,183,426
| 649,920
|
Rolling median of deviation from the current value
|
<p>Let's say I have a dataframe in pandas indexed by timestamps. For each point I need to compute median of absolute differences between the future values in the next 5 seconds and the current one. For example,</p>
<pre><code>df = pd.DataFrame({'A': [2, 1, 3, 4, -1, 2, -3, 5, 4, -10]},
index=pd.date_range('2022-01-01', periods=10, freq='s'))
</code></pre>
<p>In the output the first entry will have index <code>2022-01-01 00:00:00</code> and the value of <code>1</code>. How was this <code>1</code> computed? For the next 5 seconds we have values <code>2,1,3,4,-1,2</code>, the absolute differences with the first element are hence <code>0, 1, 1, 2, 3, 0</code>, which gives a median of <code>1</code>.</p>
<p>If that's gonna be simpler, I'm also happy to know the answer in case the window is backward-looking. It seems that using forward-looking windows is a bit cumbersome.</p>
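<p>A straightforward (if unvectorised) sketch that matches the worked example above — for each timestamp, slice the next 5 seconds and take the median of the absolute differences; the window bounds are treated as inclusive, as in the example:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [2, 1, 3, 4, -1, 2, -3, 5, 4, -10]},
                  index=pd.date_range('2022-01-01', periods=10, freq='s'))

def forward_median_abs_dev(s: pd.Series, window: str = '5s') -> pd.Series:
    out = []
    delta = pd.Timedelta(window)
    for t, v in s.items():
        win = s.loc[t:t + delta]          # label slicing is inclusive on both ends
        out.append((win - v).abs().median())
    return pd.Series(out, index=s.index)

res = forward_median_abs_dev(df['A'])
print(res.iloc[0])  # 1.0, as in the worked example
```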
|
<python><pandas>
|
2023-01-20 11:29:30
| 1
| 357
|
SBF
|
75,183,361
| 1,259,374
|
Python: initiated Logger with dataclass field param
|
<p>This is my <code>Logger</code> class:</p>
<pre><code>import logging
import os
import datetime
class Logger:
_logger = None
def __new__(cls, user: str, *args, **kwargs):
if cls._logger is None:
cls._logger = super().__new__(cls, *args, **kwargs)
cls._logger = logging.getLogger("crumbs")
cls._logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('[%(asctime)s] [%(levelname)s] [%(filename)s] [%(funcName)s] [%(lineno)d]: %(message)s')
now = datetime.datetime.now()
directory_name = f'./log/{now.strftime("%Y-%m-%d")}'
base_log_name = f'/{user}'
log_file_extension = '.log'
if not os.path.isdir(directory_name):
os.mkdir(directory_name)
file_handler = logging.FileHandler(f'{directory_name}{base_log_name}{now.strftime("%d-%m-%Y")}{log_file_extension}', 'w', 'utf-8')
stream_handler = logging.StreamHandler()
file_handler.setFormatter(formatter)
stream_handler.setFormatter(formatter)
cls._logger.addHandler(file_handler)
cls._logger.addHandler(stream_handler)
return cls._logger
</code></pre>
<p>And this is my <code>class</code> that accept user <code>argument</code> and I want my <code>log</code> file to be created with my user name in the <code>file name</code>:</p>
<pre><code>@dataclass(kw_only=True)
class RunningJobManager:
user: str = field(init=True)
password: str = field(init=True)
logging: Logger = field(init=False, default_factory=Logger(user=user))
</code></pre>
<p>So currently the <code>user</code> value received inside the <code>Logger</code> class is of type <code>dataclasses.Field</code> instead of <code>str</code>.
I also tried using <code>default</code> instead of <code>default_factory</code>.</p>
<p>And I got this error:</p>
<blockquote>
<p>Currently my code crashes with OSError, [Errno 22] Invalid argument:
'G:\my_project\log\2023-01-20\Field(name=None,type=None,default=<dataclasses._MISSING_TYPE
object at
0x0000017B1E78DB10>,default_factory=<dataclasses._MISSING_TYPE object
at
0x0000017B1E78DB10>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=<dataclasses._MISSING_TYPE
object at 0x0000017B1E78DB10>,_field_type=None)20-01-2023.log'</p>
</blockquote>
<p>At this line:</p>
<pre><code>file_handler = logging.FileHandler(f'{directory_name}{base_log_name}{now.strftime("%d-%m-%Y")}{log_file_extension}', 'w', 'utf-8')
</code></pre>
<p><strong>EDIT</strong></p>
<pre><code>"stdout_handler": {
"formatter": "std_out",
"class": "logging.StreamHandler",
"level": "DEBUG"
}
</code></pre>
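<p>The root cause: <code>default_factory=Logger(user=user)</code> is evaluated at class-definition time, when <code>user</code> is still the <code>dataclasses.Field</code> descriptor rather than an instance value (and <code>default_factory</code> expects a zero-argument callable, not an already-constructed object). A minimal sketch of the usual fix with <code>__post_init__</code> — <code>make_logger</code> below is only a stand-in for the real <code>Logger(user=...)</code> call, and <code>kw_only=True</code> is dropped just to keep the sketch runnable on Python < 3.10:</p>

```python
from dataclasses import dataclass, field

def make_logger(user: str) -> str:
    # stand-in for the real Logger(user=...) singleton
    return f"logger-for-{user}"

@dataclass
class RunningJobManager:
    user: str
    password: str
    logging: object = field(init=False, default=None)

    def __post_init__(self):
        # here self.user is the actual string passed by the caller,
        # not a dataclasses.Field object
        self.logging = make_logger(self.user)

mgr = RunningJobManager(user="falukky", password="secret")
print(mgr.logging)  # logger-for-falukky
```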
|
<python><logging><field><python-dataclasses>
|
2023-01-20 11:22:04
| 2
| 1,139
|
falukky
|
75,183,258
| 9,182,743
|
Add second y-axis with percentage in bar chart plotly
|
<p>Given df:</p>
<pre><code> names values pct
0 a 10 0.1
1 b 20 0.2
2 c 70 0.7
</code></pre>
<p>Return a bar chart with secondary y-axis as a percentage (col pct)</p>
<p>Current code:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'names':['a', 'b', 'c'], 'values':[10,20,70], 'pct':[0.1, 0.2, 0.7]})
# the figure
fig = px.bar(df, x='names', y='values')
fig.show()
</code></pre>
<p>expected outcome:
<a href="https://i.sstatic.net/Jp3MH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp3MH.png" alt="enter image description here" /></a></p>
<p><strong>EDIT</strong>: the solution provided works. Is there a way to also add color to the bars, whilst not messing up the look (bar distance/alignment with xticks)?</p>
<p>Here is my attempt modifying the solution provided:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.express as px
from plotly.subplots import make_subplots
df = pd.DataFrame({'names':['a', 'b', 'c'], 'values':[10,20,70], 'pct':[0.1, 0.2, 0.7]})
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig1 = px.bar(df, x='names', y='values', color='names' ) # added color
fig2 = px.bar(df, x='names', y='pct', color='names')
# plot singulalry all the bars provided.
for i in range(len(fig1.data)):
fig.add_trace(fig1.data[i], secondary_y=False)
    fig.add_trace(fig2.data[i], secondary_y=True)
fig.update_layout(yaxis2=dict(tickvals=[0.1,0.2,0.7], tickformat='.1%', title_text='Secondary y-axis percentage'))
fig.update_layout(xaxis=dict(title_text='name'), yaxis=dict(title_text='values'))
fig.update_layout(bargap=0.0)
fig.show()
</code></pre>
<p>However, it produces duplicate legend entries and misaligned x-axis labels.</p>
<p>OUT:</p>
<p><a href="https://i.sstatic.net/lHLKU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHLKU.png" alt="enter image description here" /></a></p>
|
<python><pandas><plotly>
|
2023-01-20 11:13:52
| 1
| 1,168
|
Leo
|
75,183,093
| 15,476,955
|
python: clean dynamoDB AWS item?
|
<p>With AWS DynamoDB calls, we sometimes get a complex item structure in which the type of every element is spelled out.
It can be useful, but it's a mess to extract the data.</p>
<pre><code>{
"a": {
"S": "AAAA"
},
"myList": {
"L": [
{
"S": "T1"
},
{
"S": "T2"
},
{
"S": "TH"
}
]
},
"checkList": {
"L": [
{
"M": {
"id": {
"S": "telesurveyor"
},
"check": {
"BOOL": true
}
}
}
]
},
"createdAt": {
"N": "1672842152365"
},
}
</code></pre>
<p>We need to transform it to this:</p>
<pre><code>{
"a": "AAAA",
"myList": ["T1","T2","TH"],
"checkList": [
{
"id": "telesurveyor",
"check": true
}
],
"createdAt": 1672842152365,
}
</code></pre>
<p>Is there a boto3 way to do this?
If so, what is it? If not, how can it be done manually?</p>
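<p>boto3 does ship a helper for exactly this — <code>boto3.dynamodb.types.TypeDeserializer</code>, whose <code>deserialize()</code> method unwraps one attribute value at a time (note that numbers come back as <code>Decimal</code>). A dependency-free sketch of the same idea, so the example runs without boto3:</p>

```python
def from_ddb(value):
    """Unwrap one DynamoDB attribute value like {'S': 'AAAA'}."""
    (tag, inner), = value.items()
    if tag in ("S", "BOOL"):
        return inner
    if tag == "N":
        return float(inner) if "." in inner else int(inner)
    if tag == "NULL":
        return None
    if tag == "L":
        return [from_ddb(v) for v in inner]
    if tag == "M":
        return {k: from_ddb(v) for k, v in inner.items()}
    return inner  # SS, NS, B, ... left untouched in this sketch

item = {
    "a": {"S": "AAAA"},
    "myList": {"L": [{"S": "T1"}, {"S": "T2"}, {"S": "TH"}]},
    "checkList": {"L": [{"M": {"id": {"S": "telesurveyor"},
                               "check": {"BOOL": True}}}]},
    "createdAt": {"N": "1672842152365"},
}

plain = {k: from_ddb(v) for k, v in item.items()}
print(plain["createdAt"])  # 1672842152365
```

<p>With boto3 the equivalent would be roughly <code>{k: TypeDeserializer().deserialize(v) for k, v in item.items()}</code> — not exercised here, so verify against your boto3 version.</p>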
|
<python><amazon-web-services><dictionary><amazon-dynamodb>
|
2023-01-20 10:57:41
| 2
| 1,168
|
Utopion
|
75,183,054
| 2,783,767
|
Is there a way to get title of a google drive file in python using its google drive file id?
|
<p>I have Google Drive IDs of many files that I want to download.
However, the APIs for downloading Google Drive files also want the filename, in order to save the file under that name.</p>
<p>Is there a way in python to get the title/name of the file from the google drive file ID in python?</p>
<p>If so please help to share a sample code.</p>
|
<python><google-drive-api>
|
2023-01-20 10:53:37
| 1
| 394
|
Granth
|
75,182,976
| 7,714,681
|
Store a dictionary variable from Google Colab locally
|
<p>In Google Colab, I have created a <code>dict</code> file names <code>list_dict</code>. How can pickle this and store it locally?</p>
<pre><code>PATH = <LOCAL_PATH>
pickle_out = open(PATH +'<FILE_NAME>.pickle', 'wb')
pickle.dump(list_dict, pickle_out)
pickle_out.close()
</code></pre>
<p>However, this returns:</p>
<pre><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-64-439362c7678e> in <module>
----> 1 pickle_out = open(PATH , 'wb')
2 pickle.dump(list_dict, pickle_out)
3 pickle_out.close()
FileNotFoundError: [Errno 2] No such file or directory:
</code></pre>
<p>How can I store this variable locally?</p>
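<p>The <code>FileNotFoundError</code> usually just means the directory part of <code>PATH</code> does not exist in the Colab VM. A sketch that creates the directory first and round-trips the dict (the dict contents and paths here are purely illustrative):</p>

```python
import os
import pickle
import tempfile

list_dict = {"a": [1, 2], "b": [3]}          # stand-in for the real variable

path = os.path.join(tempfile.gettempdir(), "colab_demo", "list_dict.pickle")
os.makedirs(os.path.dirname(path), exist_ok=True)   # avoid FileNotFoundError

with open(path, "wb") as fh:
    pickle.dump(list_dict, fh)

with open(path, "rb") as fh:
    restored = pickle.load(fh)

print(restored == list_dict)  # True
```

<p>Note that this writes to the Colab VM's filesystem, not the local machine; to pull the file down, the usual Colab-only helper is <code>from google.colab import files; files.download(path)</code>.</p>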
|
<python><dictionary><local-storage><google-colaboratory><pickle>
|
2023-01-20 10:46:39
| 2
| 1,752
|
Emil
|
75,182,825
| 386,861
|
How to plot multiple times series using pandas and seaborn
|
<p>I've got a dataframe of data that people have helpfully collated.</p>
<p>It looks like this (ignore the index, I'm just sampling):</p>
<pre><code> uni score year
18 Arden University Limited 78.95 2020
245 The University of Manchester 71.35 2022
113 Darlington College 93.33 2020
94 City of Wolverhampton College 92 2017
345 The Royal Veterinary College 94 2018
118 Darlington College 62 2018
</code></pre>
<p>There is more data - <a href="https://github.com/elksie5000/uni_data/blob/main/uni_data_combined.csv" rel="nofollow noreferrer">https://github.com/elksie5000/uni_data/blob/main/uni_data_combined.csv</a> - but my view is to set_index on year and then filter by uni as well as larger groups, aggregated by mean/median.</p>
<p>The ultimate aim is to look at a group of universities and track the metric over time.</p>
<p>I've managed to create a simple function to plot the data, thus:</p>
<pre><code>#Create a function to plot the data
def plot_uni(df, uni, query):
print(query)
df['query'] = df[uni].str.contains(query)
subset = df[df['query']].set_index("year")
subset.sort_index().plot()
</code></pre>
<p><a href="https://i.sstatic.net/n7X7F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n7X7F.png" alt="enter image description here" /></a>
I can also plot the overall mean using:</p>
<pre><code>df.groupby("year").mean()['score'].plot()
</code></pre>
<p>What I want to be able to do is plot both together.</p>
<p><a href="https://i.sstatic.net/Y3tcI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y3tcI.png" alt="enter image description here" /></a>
Ideally, I'd also like to be able to plot multiple lines in one plot and specify the colour. So for instance say the national score is in red and a particular line was say blue, while other plots were gray.</p>
<p>Any ideas?</p>
<p>UPDATE:</p>
<p>The answers from @Corralien and @Johannes Schöck both worked; I just don't know how to change the legend.</p>
|
<python><pandas><matplotlib><seaborn>
|
2023-01-20 10:34:15
| 2
| 7,882
|
elksie5000
|
75,182,812
| 8,916,474
|
How in tox run bash script and reuse returned value?
|
<p>When I create a tox environment, some libraries are installed under different paths depending on the environment that I use to trigger tox:</p>
<pre><code># Tox triggered inside virtual env
.tox/lib/site-packages
</code></pre>
<p>Sometimes</p>
<pre><code># Tox triggered inside docker
.tox/lib/python3.8/site-packages
</code></pre>
<p>I need to reuse this path in further steps inside the tox env, so I decided to create a bash script that finds the path of the installed libraries and runs inside the tox env. I thought I could pass the found path to tox and reuse it in one of the next commands. Is it possible to do such a thing?</p>
<p>I tried:</p>
<p><strong>tox.ini</strong></p>
<pre><code>[tox]
envlist =
docs
min_version = 4
skipsdist = True
allowlist_externals = cd
passenv =
HOMEPATH
PROGRAMDATA
basepython = python3.8
[testenv:docs]
changedir = docs
deps =
-r some_path/library_name/requirements.txt
commands =
my_variable=$(bash ../docs/source/script.sh)
sphinx-apidoc -f -o $my_variable/source $my_variable
</code></pre>
<p>But apparently this doesn't work with tox:</p>
<blockquote>
<p>docs: commands[0] docs> my_variable=$(bash ../docs/source/script.sh)
docs: exit 2 (0.03 seconds) docs>
my_variable=$(bash../docs/source/script.sh) docs: FAIL code 2
(0.20=setup[0.17]+cmd[0.03] seconds) evaluation failed :( (0.50
seconds)</p>
</blockquote>
<p>Bash script</p>
<p><strong>script.sh</strong></p>
<pre><code>#!/bin/bash
tox_env_path="../.tox/docs/"
conf_source="source"
tox_libs=$(find . $tox_env_path -type d -name "<name of library>")
sudo mkdir -p $tox_libs/docs/source
cp $conf_source/conf.py $conf_source/index.rst $tox_libs/docs/source
echo $tox_libs
</code></pre>
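<p>The <code>exit 2</code> happens because tox does not run <code>commands</code> through a shell, so <code>my_variable=$(...)</code> is handed to the OS as a literal program name. One workaround is to pass the whole pipeline to bash explicitly — a hedged sketch (paths as in the files above, untested against this exact project; note that <code>allowlist_externals</code> belongs in the testenv section):</p>

```ini
[testenv:docs]
changedir = docs
allowlist_externals = bash
deps =
    -r some_path/library_name/requirements.txt
commands =
    bash -c 'my_variable=$(bash ../docs/source/script.sh); sphinx-apidoc -f -o "$my_variable/source" "$my_variable"'
```

<p>Alternatively, move the <code>sphinx-apidoc</code> call into <code>script.sh</code> itself, so tox only has to run one external command with no shell substitution at all.</p>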
|
<python><bash><tox>
|
2023-01-20 10:32:58
| 1
| 504
|
QbS
|
75,182,688
| 8,601,920
|
Get values from current till last column values in pandas groupby
|
<p>Image following pandas dataframe:</p>
<pre><code>+----+------+-------+
| ID | Name | Value |
+----+------+-------+
| 1 | John | 1 |
+----+------+-------+
| 1 | John | 4 |
+----+------+-------+
| 1 | John | 10 |
+----+------+-------+
| 1 | John | 50 |
+----+------+-------+
| 1 | Adam | 6 |
+----+------+-------+
| 1 | Adam | 3 |
+----+------+-------+
| 2 | Jen | 9 |
+----+------+-------+
| 2 | Jen | 6 |
+----+------+-------+
</code></pre>
<p>I want to apply a groupby and create a new column that stores the <code>Value</code> entries as a list running from the current row to the last row of the group.</p>
<p>Like that:</p>
<pre><code>+----+------+-------+----------------+
| ID | Name | Value | NewCol |
+----+------+-------+----------------+
| 1 | John | 1 | [1, 4, 10, 50] |
+----+------+-------+----------------+
| 1 | John | 4 | [4, 10, 50] |
+----+------+-------+----------------+
| 1 | John | 10 | [10, 50] |
+----+------+-------+----------------+
| 1 | John | 50 | [50] |
+----+------+-------+----------------+
| 1 | Adam | 6 | [6, 3] |
+----+------+-------+----------------+
| 1 | Adam | 3 | [3] |
+----+------+-------+----------------+
| 2 | Jen | 9 | [9, 6] |
+----+------+-------+----------------+
| 2  | Jen  | 6     | [6]            |
+----+------+-------+----------------+
</code></pre>
<p>Is this anyhow possible using pandas groupby function?</p>
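<p>One way, sketched with a plain loop over the groups — it sidesteps <code>transform</code>'s expectations about scalar results and assigns each suffix list back by index (for the last Jen row this yields <code>[6]</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({'ID':    [1, 1, 1, 1, 1, 1, 2, 2],
                   'Name':  ['John', 'John', 'John', 'John',
                             'Adam', 'Adam', 'Jen', 'Jen'],
                   'Value': [1, 4, 10, 50, 6, 3, 9, 6]})

new_col = pd.Series(index=df.index, dtype=object)
for _, grp in df.groupby(['ID', 'Name'])['Value']:
    vals = grp.tolist()
    for pos, idx in enumerate(grp.index):
        new_col[idx] = vals[pos:]        # suffix from current row to group end

df['NewCol'] = new_col
print(df['NewCol'].tolist())
```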
|
<python><pandas><group-by>
|
2023-01-20 10:23:06
| 1
| 557
|
adama
|
75,182,628
| 9,794,068
|
Optimal ways to search for elements in sorted list
|
<p>Whenever I needed to check whether an item belongs to a given group I used to store the elements in a list and use <code>in</code> to make the check:</p>
<pre><code>def is_in_list(element, l):
if element in l:
print('yes')
else:
print('no')
is_in_list(2, [1,2,3])
</code></pre>
<p>This solution does not require the elements to be sorted, or the list to be made exclusively of numbers. I guess that in the specific case where I already know the elements are all integers and sorted, there must be a more efficient way to do this.</p>
<p>What is the best way to organize my data so that it will be quick to verify whether a given number belongs to a list?</p>
<p>(Ps. I call it "list", but it is not important for it to be a list specifically, and if it is recommended, I can very well use arrays or dictionaries instead).</p>
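<p>Two standard options, sketched: a <code>set</code> gives average O(1) membership (order irrelevant, costs extra memory), while <code>bisect</code> exploits the sorted order for O(log n) lookups with no extra structure:</p>

```python
import bisect

def in_sorted(element, sorted_values):
    """O(log n) membership test on an already-sorted list."""
    i = bisect.bisect_left(sorted_values, element)
    return i < len(sorted_values) and sorted_values[i] == element

data = [1, 2, 3, 10, 50]

print(in_sorted(2, data))    # True
print(in_sorted(7, data))    # False

# if extra memory is acceptable, a set is usually the fastest option
data_set = set(data)
print(2 in data_set)         # True
```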
|
<python>
|
2023-01-20 10:17:58
| 0
| 530
|
3sm1r
|
75,182,398
| 4,451,521
|
import __main__ with pickles
|
<p>Can someone explain to me the role of <code>import __main__</code> in Python? I read a similar question but could not understand it. (This is the general question first.)</p>
<p>Then, in particular: when a pickle of an object is made in a script from within an <code>if __name__ == "__main__":</code> block, and I try to read it from another script, I have to use <code>import __main__</code> in order to avoid the error</p>
<pre><code> AttributeError: Can't get attribute 'theObject' on <module '__main__'
</code></pre>
<p>as suggested <a href="https://stackoverflow.com/a/65318623/4451521">in this answer</a>. (From the answer I understand <em>why</em> the error happens, but not why <code>import __main__</code> solves it.)</p>
|
<python><pickle>
|
2023-01-20 09:57:12
| 0
| 10,576
|
KansaiRobot
|
75,182,377
| 304,215
|
Electron forge distributable showing "error spawn python ENOENT" when starting the app
|
<p>I have an Electron app that uses a Python Flask server, with AngularJS for the HTML front end. In the dev environment it works fine, as I expect. But after creating a distributable with Electron Forge, when I try to open the <code>.exe</code> file I get this error: <code>Error spawn python ENOENT</code>.</p>
<p>The distributable-creation process with Electron Forge completes without any issue, but just after getting the <code>.exe</code> file, when I try to open the app by clicking on it, I get this error.</p>
<p><strong>Error Screenshot -</strong></p>
<p><a href="https://i.sstatic.net/Amvat.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Amvat.png" alt="enter image description here" /></a></p>
<p><strong>My Electron app files</strong></p>
<p><strong>main.js</strong></p>
<pre><code>const { app, BrowserWindow } = require('electron');
const path = require('path');
const url = require('url');
const { spawn } = require('child_process')
const createWindow = () => {
const win = new BrowserWindow({
width: 800,
height: 600,
webPreferences: {
preload: path.join(__dirname, 'preload.js'),
},
});
win.loadFile('./dist/angular-flask-electron-app/index.html');
};
app.whenReady().then(() => {
const status = spawn("./flask_server/.venv/Scripts/python", ["./flask_server/main.py"])
status.stdout.on('data', (data) => {
console.log(`python process stdout: ${data}`);
});
status.stderr.on('data', (data) => {
console.error(`python process stderr: ${data}`);
});
createWindow();
app.on('activate', () => {
if (BrowserWindow.getAllWindows().length === 0) {
createWindow();
}
});
});
app.on('window-all-closed', () => {
if (process.platform !== 'darwin') {
app.quit();
}
});
</code></pre>
<p><strong>Flask route file - main.py</strong></p>
<pre><code>import sys
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route("/")
def hello():
return "Hello World from Flask!"
if __name__ == "__main__":
app.run(host='127.0.0.1', port=5000)
</code></pre>
<p><strong>Folder structure of electron app</strong></p>
<p><a href="https://i.sstatic.net/VfbIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VfbIS.png" alt="enter image description here" /></a></p>
<p>My guess is that the distributable somehow fails to package Python inside it. That's why, when the app opens, it cannot connect to Python and fails to create the Python environment to run the Flask route file.</p>
<p>But I don't know what exactly to do to rectify this, so I need some help.</p>
<p><strong>-----UPDATE-----</strong></p>
<p>First I tried Electron Forge to get the distributable, but when I started getting this error I thought maybe Electron Forge was somehow not working properly. So I then tried Electron Builder to get the distributable.</p>
<p>I followed this article to do that - <a href="https://medium.com/jspoint/packaging-and-distributing-electron-applications-using-electron-builder-311fc55178d9" rel="nofollow noreferrer">https://medium.com/jspoint/packaging-and-distributing-electron-applications-using-electron-builder-311fc55178d9</a></p>
<p>The <code>out</code> folder structure is like this now -</p>
<p><a href="https://i.sstatic.net/mwDeJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mwDeJ.png" alt="enter image description here" /></a></p>
<p><strong>Screenshot for "dist" folder</strong></p>
<p><a href="https://i.sstatic.net/BQEq9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BQEq9.png" alt="enter image description here" /></a></p>
|
<python><node.js><electron><electron-builder><electron-forge>
|
2023-01-20 09:55:32
| 0
| 5,999
|
Suresh
|
75,182,338
| 5,684,405
|
The call method does not see the instance field values when subclassing keras.Model
|
<p>For subclassing to create a Tensorflow model with the following code:</p>
<pre><code>class MyClass(keras.Model):
def __int__(
self,
input_shape: tuple,
classes_count: int = 10,
model_name: str = 'model_name',
**kwargs,
):
super(MyClass, self).__init__(name=self.__class__.__name__, **kwargs)
self.my_info = "foo"
def call(self, inputs):
x = self.my_info
return x
var = MyClass((2, 2))
print(var.call("asd"))
</code></pre>
<p>The <code>call</code> method can't see the <code>self</code> field value:</p>
<blockquote>
<pre><code> x = self.my_info
AttributeError: 'MyClass' object has no attribute 'my_info'
</code></pre>
</blockquote>
<p>What am I doing wrong, and how do I get access to <code>self</code> attributes?</p>
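<p>One detail worth noting in the snippet: the constructor is spelled <code>__int__</code>, not <code>__init__</code>, so Python never calls it during construction and <code>my_info</code> is never set. A framework-free sketch of the effect (no Keras needed to reproduce it):</p>

```python
class Broken:
    def __int__(self):            # typo: this is the int() conversion hook,
        self.my_info = "foo"      # not the constructor, so it never runs here

class Fixed:
    def __init__(self):           # correct dunder: runs on construction
        self.my_info = "foo"

print(hasattr(Broken(), "my_info"))  # False
print(Fixed().my_info)               # foo
```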
|
<python><tensorflow><subclassing>
|
2023-01-20 09:51:45
| 1
| 2,969
|
mCs
|
75,182,302
| 9,528,575
|
Doing group calculations with two separate dataframes in python
|
<p>I have two pandas dataframes like this:</p>
<pre><code>df1= pd.DataFrame({'sub-group':['2020','2030','2040','2030','2040','2030','2040'],
'group':['a', 'a', 'a', 'b', 'b', 'c', 'c'],
'value1':[12,11,41,33,66,22,20]})
sub-group group value1
2020 a 12
2030 a 11
2040 a 41
2030 b 33
2040 b 66
2030 c 22
2040 c 20
df2= pd.DataFrame({'sub-group':['2020','2030','2040', '2020', '2030','2040','2030','2040'],
'group':['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c'],
'value2':[10,20,30,15,45,60,12,36]})
sub-group group value2
2020 a 10
2030 a 20
2040 a 30
2020 b 15
2030 b 45
2040 b 60
2030 c 12
2040 c 36
</code></pre>
<p>I want to find <code>value1/value2</code> for each group and sub-group. Note that the number of observations might not match between the two dataframes; for example, we have 2020/b in df2 but not in df1. In those cases a NaN or 0 would work.</p>
<p>I was thinking that it should be possible with <code>pd.groupby</code>, but I don't know how it works with two dataframes.
Thanks.</p>
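<p>A sketch with an outer <code>merge</code> on the two key columns — combinations present in only one frame (such as 2020/b) come out as <code>NaN</code> in the ratio, and <code>fillna(0)</code> could turn those into 0 if preferred:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'sub-group': ['2020', '2030', '2040', '2030', '2040', '2030', '2040'],
                    'group': ['a', 'a', 'a', 'b', 'b', 'c', 'c'],
                    'value1': [12, 11, 41, 33, 66, 22, 20]})
df2 = pd.DataFrame({'sub-group': ['2020', '2030', '2040', '2020', '2030', '2040', '2030', '2040'],
                    'group': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c'],
                    'value2': [10, 20, 30, 15, 45, 60, 12, 36]})

# align the two frames on both keys, keeping rows that exist in either one
merged = df1.merge(df2, on=['sub-group', 'group'], how='outer')
merged['ratio'] = merged['value1'] / merged['value2']
print(merged)
```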
|
<python><pandas><dataframe><group-by>
|
2023-01-20 09:48:51
| 3
| 361
|
Novic
|
75,182,268
| 1,512,250
|
Create integer unique keys in 3 dataframes for rows with same names to generate automatic features using featuretools
|
<p>I have three different data frames with basketball players' data.</p>
<p>In all three dataframes there are basketball players' names.
I want to join all three dataframes into one EntitySet to use automatic feature generation using featuretools.</p>
<p>As I understand it, I need to create an integer key in the 3 dataframes, which would be used to join all three; the same players should get the same unique integer ids.</p>
<p>How can I create unique integer keys for 3 different datasets, ensuring that the same players have the same ids?</p>
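<p>A sketch of one way to get consistent ids — pool the names from all frames, assign each unique name an integer, then map that id back into each frame (the player names here are made up; this assumes names are spelled identically across frames, so any normalisation would have to happen first):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'player': ['LeBron James', 'Stephen Curry']})
df2 = pd.DataFrame({'player': ['Stephen Curry', 'Kevin Durant']})
df3 = pd.DataFrame({'player': ['LeBron James', 'Kevin Durant']})

# one id per unique name across all three frames
all_names = pd.concat([df1['player'], df2['player'], df3['player']])
name_to_id = {name: i for i, name in enumerate(pd.unique(all_names))}

for df in (df1, df2, df3):
    df['player_id'] = df['player'].map(name_to_id)

print(df1)
```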
|
<python><pandas><featuretools>
|
2023-01-20 09:46:32
| 1
| 3,149
|
Rikki Tikki Tavi
|
75,182,208
| 9,234,100
|
Connect to cloudSQL db using service account with pymysql or mysql.connector
|
<p>I have a CloudSQL instance running in another VPC and an nginx proxy to allow cross-VPC access.
I can access the db using a built-in user, but how can I access the DB using a Google Service Account?</p>
<pre><code>import google.auth
import google.auth.transport.requests
import mysql.connector
from mysql.connector import Error
import os
creds, project = google.auth.default()
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)
connection = mysql.connector.connect(host=HOST,
database=DB,
user=SA_USER,
password=creds.token)
if connection.is_connected():
db_Info = connection.get_server_info()
print("Connected to MySQL Server version ", db_Info)
cur = connection.cursor()
cur.execute("""SELECT now()""")
query_results = cur.fetchall()
print(query_results)
</code></pre>
<p>When using mysql.connector, I get this error:</p>
<pre><code>DatabaseError: 2059 (HY000): Authentication plugin 'mysql_clear_password' cannot be loaded: plugin not enabled
</code></pre>
<p>Then I tried using pymysql</p>
<pre><code>import pymysql
import google.auth
import google.auth.transport.requests
import os
creds, project = google.auth.default()
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)
try:
conn = pymysql.connect(host=ENDPOINT, user=SA_USER, passwd=creds.token, port=PORT, database=DBNAME)
cur = conn.cursor()
cur.execute("""SELECT now()""")
query_results = cur.fetchall()
print(query_results)
except Exception as e:
print("Database connection failed due to {}".format(e))
</code></pre>
<pre><code>Database connection failed due to (1045, "Access denied for user 'xx'@'xxx.xxx.xx.xx' (using password: YES)"
</code></pre>
<p>I guess these errors are all related to the token.
Can anyone suggest a proper way to get a SA token to access the CloudSQL DB?</p>
<p><strong>PS: Using cloudsql auth proxy is not a good option for our architecture.</strong></p>
|
<python><google-cloud-platform><google-cloud-sql><mysql-connector><pymysql>
|
2023-01-20 09:40:40
| 3
| 365
|
Max
|
75,182,045
| 17,884,397
|
Access the `b` parameter in the SVC object of scikit-learn
|
<p>According to <a href="https://scikit-learn.org/stable/modules/svm.html#mathematical-formulation" rel="nofollow noreferrer">scikit learn's mathematical model of the SVC</a> there is a parameter <code>b</code> (constant value of the model):</p>
<p><a href="https://i.sstatic.net/IvLx0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IvLx0.png" alt="enter image description here" /></a></p>
<p>Yet in the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow noreferrer">SVC class documentation</a> I don't see how can one access its value.</p>
<p>Is there a way to access it? Is it really used?<br />
I wonder if I need to manually add a column of <code>1</code> to the data or the model has it built in.</p>
|
<python><scikit-learn><data-science><documentation>
|
2023-01-20 09:25:27
| 1
| 736
|
Eric Johnson
|
75,181,924
| 12,129,443
|
How to interpret torch.where() output when x , y are not given?
|
<p>I understand the output of torch.where() as per the content mentioned in the <a href="https://pytorch.org/docs/stable/generated/torch.where.html" rel="nofollow noreferrer">documentation</a>.
However, I do not understand the output it produces when x and y are not given, as shown below (the dimensionality of this output varies even though the shape of x remains the same). Can someone help me understand?</p>
<pre><code>y = torch.ones(3, 2)
x = torch.randn(3, 2)
print(x)
----------------------------
tensor([[-0.0022, 0.4871],
[ 0.0788, 0.2937],
[ 0.1909, -2.1636]])
----------------------------
print(torch.where(x > 0, x, y))
----------------------------
tensor([[1.0000, 0.4871],
[0.0788, 0.2937],
[0.1909, 1.0000]])
----------------------------
print(torch.where(x > 0))
(tensor([0, 1, 1, 2]), tensor([1, 0, 1, 0]))
</code></pre>
|
<python><pytorch>
|
2023-01-20 09:15:55
| 2
| 668
|
Srinivas
|
75,181,880
| 14,073,111
|
How to pass a parameter to a class scope fixture
|
<p>Let's say I have a fixture like this:</p>
<pre><code>@pytest.fixture(scope="class")
def some_fixture(some_parameters):
do_something
yield
do_something
</code></pre>
<p>And i want to use it in this way:</p>
<pre><code>@pytest.mark.usefixtures('some_fixtures')
class TestSomething:
code....
</code></pre>
<p>Is it possible to pass the parameter some_parameters using the @pytest.mark.usefixtures decorator? Or is there any other way to pass that parameter?</p>
|
<python><python-3.x><pytest><fixtures>
|
2023-01-20 09:11:47
| 1
| 631
|
user14073111
|
75,181,821
| 8,771,201
|
Python mysql insert when record does not exist
|
<p>I have a table with articles like this:</p>
<p>Table: artPerBrand</p>
<p>Colums: ID (auto increment), BrandID, ArtCat, ArtNrShort, ArtNrLong, Active</p>
<p>I want to insert new articles (using a Python script), but only if the same ArtNrLong does not already exist.</p>
<p>I cannot do this by making ArtNrLong unique, because there are scenarios where it is legitimate to have the same ArtNrLong in the table more than once.</p>
<p>So I created this sql statement based on this input (<a href="https://www.geeksforgeeks.org/python-mysql-insert-record-if-not-exists-in-table/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-mysql-insert-record-if-not-exists-in-table/</a>).</p>
<pre><code> cur.execute ("INSERT INTO artPerBrand (BrandID, ArtCat, ArtNrShort, ArtNrLong, Active)\
select * from (Select" + varBrandID, varArtCat, varArtNrShort, varArtNrLong,1 +") as temp \
where not exists (Select ArtNrLong from artPerBrand where ArtNrLong="+varArtNrLong+") LIMIT 1")
</code></pre>
<p>I also tried this option:</p>
<pre><code> sql = "INSERT INTO artPerBrand(BrandID, ArtCat, ArtNrShort, ArtNrLong, Active) VALUES (%s, %s, %s, %s, %s) WHERE NOT EXISTS (SELECT * FROM artPerBrand WHERE ArtNrLong = %s)"
val = (varBrandID, varArtCat, varArtNrShort, varArtNrLong, 1, varArtNrLong)
cur.execute(sql, val)
</code></pre>
<p>I get a general error telling me the query is wrong.
Am I getting some quotes wrong here, or is it something else?</p>
<p>Combining the help in the comments brought me the solution:</p>
<pre><code> sql = "INSERT INTO artPerBrand(BrandID, ArtCat, ArtNrShort, ArtNrLong, Active) SELECT %s, %s, %s, %s, %s FROM DUAL WHERE NOT EXISTS (SELECT * FROM artPerBrand WHERE ArtNrLong = %s)"
val = (varBrandID, varArtCat, varArtNrShort, varArtNrLong, 1, varArtNrLong)
cur.execute(sql, val)
</code></pre>
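For reference, the working <code>INSERT … SELECT … WHERE NOT EXISTS</code> pattern with placeholders can be demonstrated on an in-memory SQLite table (a sketch; the syntax is close to MySQL's, but SQLite needs neither <code>FROM DUAL</code> nor <code>%s</code> placeholders — it uses <code>?</code>):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE artPerBrand (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    BrandID INTEGER, ArtCat TEXT, ArtNrShort TEXT, ArtNrLong TEXT, Active INTEGER)""")

sql = """INSERT INTO artPerBrand (BrandID, ArtCat, ArtNrShort, ArtNrLong, Active)
         SELECT ?, ?, ?, ?, ?
         WHERE NOT EXISTS (SELECT 1 FROM artPerBrand WHERE ArtNrLong = ?)"""

cur.execute(sql, (1, "cat", "A1", "A1-LONG", 1, "A1-LONG"))   # inserted
cur.execute(sql, (1, "cat", "A1", "A1-LONG", 1, "A1-LONG"))   # skipped: already exists
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM artPerBrand").fetchone()[0])
```

Passing the values as a parameter tuple instead of concatenating strings also avoids the quoting problems of the first attempt.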
|
<python><mysql>
|
2023-01-20 09:06:44
| 2
| 1,191
|
hacking_mike
|
75,181,796
| 4,125,774
|
Poetry package modulenotfound when installing a local package
|
<p>I am new to packaging and am now facing a problem.
I have created a Poetry project from the CLI:</p>
<pre><code> poetry new mypackage_test
</code></pre>
<p>and have a file structure as follows:</p>
<p><a href="https://i.sstatic.net/Dj9Dg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dj9Dg.png" alt="enter image description here" /></a></p>
<p>here both my <strong>init</strong>.py are empty</p>
<p>pyproject.toml</p>
<pre><code>[tool.poetry]
name = "mypackage-test"
version = "0.1.0"
description = ""
authors = ["my name <my@name.com>"]
readme = "README.md"
packages = [{include = "mypackage_test"}]
[tool.poetry.dependencies]
python = "^3.10"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>helloWorld.py</p>
<pre><code>from pk1 import mypack
def helloworld():
print("hello world")
print(mypack.printMypk1())
if __name__ == "__main__":
helloworld()
</code></pre>
<p>mypack.py</p>
<pre><code>def printMypk1():
print("in mypackage 1")
</code></pre>
<p>When I run this script, e.g.:</p>
<pre><code>python helloWorld.py
</code></pre>
<p>then it works as expected, but if I install this as a package and do this in Python:</p>
<pre><code>>>> from mypackage_test import helloWorld
</code></pre>
<p>i get this error:</p>
<blockquote>
<p>Traceback (most recent call last):
File "", line 1, in
File "C:\Programs\Python\Python3102\lib\site-packages\mypackage_test\helloWorld.py", line 1, in
from pk1 import mypack
ModuleNotFoundError: No module named 'pk1'</p>
</blockquote>
<p>Am I doing this wrong? Any suggestions?</p>
<p>Note: if I unzip the zipped file in the dist folder, I can see that the files are packed/zipped as expected.</p>
|
<python><python-import><python-packaging><python-poetry>
|
2023-01-20 09:04:39
| 1
| 307
|
KapaA
|
75,181,692
| 17,561,414
|
change a struct type to map type pyspark
|
<p>I'm trying to change the <code>Struct</code> type to the <code>Map</code> type. I found this solution:</p>
<pre><code>from pyspark.sql.functions import col,lit,create_map
df = df.withColumn("propertiesMap",create_map(
lit("salary"),col("properties.salary"),
lit("location"),col("properties.location")
)).drop("properties")
</code></pre>
<p>but in my case I would need to write a lot of <code>lit</code> calls, which I tried to avoid with a for loop. That does not seem to work: when I try to print the schema, I get the error <code>'list' object has no attribute 'printSchema'</code></p>
<p>my code is here:</p>
<pre><code>field_names = [field.name for field in next(field for field in df4.schema.fields if field.name=="values").dataType.fields]
df5= [df4.withColumn("valuesMap",
create_map(lit(field_name),
col(f"values.{field_name}"))) for field_name in field_names]
</code></pre>
|
<python><pyspark><dtype>
|
2023-01-20 08:53:50
| 0
| 735
|
Greencolor
|
75,181,599
| 11,159,734
|
Airflow how to catch errors from an Operator outside of the operator
|
<p>Maybe the question isn't phrased in the best way. Basically, I want to build a DAG that iterates over a list of SQL files and uses the BigQueryOperator() to execute them.</p>
<p>However, there will be SQL files in the list whose corresponding <strong>tables do not exist in BQ</strong>, and <strong>I want to catch these kinds of errors</strong>: they should not be printed in the log and the task should not be marked as failed. Instead, the errors should be added to a dictionary and shown by a different task that runs at the end.</p>
<p>As you can see in the code below, I <strong>tried to catch the error from the BigQueryOperator with try &amp; except</strong>, but this does not work. The task gets executed correctly and runs the SQLs fine, but if there is an error it is immediately printed in the log, the task is marked as failed, and <strong>the try &amp; except clause is completely ignored</strong>. Also, the last task print_errors() does not print anything, as the dictionary is empty. So it looks like I cannot influence an Airflow operator once it is called, since it ignores the Python logic wrapped around it.</p>
<p>My current code looks as follows:</p>
<p>Importing some libraries:</p>
<pre><code>import airflow
from airflow import models
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.operators.dummy_operator import DummyOperator
from datetime import datetime, timedelta
</code></pre>
<p>Get some variables + the list of sql files (hardcoded for now but will be fetched from GCP storage later on)</p>
<pre><code>BQ_PROJECT = models.Variable.get('bq_datahub_project_id').strip()
BQ_DATASET_PREFIX = models.Variable.get('datasetPrefix').strip()
AIRFLOW_BUCKET = models.Variable.get('airflow_bucket').strip()
CODE = models.Variable.get('gcs_code').strip()
COMPOSER = '-'.join(models.Variable.get('airflow_bucket').strip().split('-')[2:-2])
</code></pre>
<p>DAG</p>
<pre><code>with models.DAG(dag_id='run-sql-files',
schedule_interval='0 */3 * * *',
user_defined_macros={"COMPOSER": COMPOSER},
default_args=default_dag_args,
concurrency=2,
max_active_runs=1,
catchup=False) as dag:
def print_errors():
if other_errors:
for error, files in other_errors.items():
print("Warning: " + error + " for the following SQL files:")
for file in files:
print(file)
t0 = DummyOperator(
task_id='Start'
)
t2 = DummyOperator(
task_id='End',
trigger_rule='all_done'
)
other_errors = {}
for i, sql_file in enumerate(sql_files):
try:
full_path_sql = AIRFLOW_BUCKET + sql_file
t1 = BigQueryOperator(
task_id='sql_'+str(i),
params={"datahubProject": BQ_PROJECT, "datasetPrefix": BQ_DATASET_PREFIX,},
sql=sql_file,
use_legacy_sql=False,
location='europe-west3',
labels={ "composer_id": COMPOSER, "dag_id": "{{ dag.dag_id }}", "task_id": "{{ task.task_id }}"},
dag=dag
)
t0 >> t1 >> t2
except Exception as e:
other_errors[str(e)] = other_errors.get(str(e), []) + [sql_file]
t3 = PythonOperator(
task_id='print_errors',
python_callable=print_errors,
provide_context=True,
dag=dag)
t2 >> t3
</code></pre>
|
<python><google-cloud-platform><airflow>
|
2023-01-20 08:45:07
| 1
| 1,025
|
Daniel
|
75,181,393
| 3,352,254
|
Unable to fit custom model with lmfit - ValueError: The model function generated NaN values and the fit aborted
|
<p>I have this data:</p>
<pre><code>y=[2.103402,2.426855,1.011672,1.595371,1.861879,2.492542,2.567561,4.685010,4.452643,5.321630,6.637233,
6.109260,6.220958,5.928408,5.654726,5.498096,5.468448,6.128418,6.071376,6.487270,6.609533,6.907320,
7.626838,8.432065,9.749410,8.976752,8.742036,8.779956,8.212357,8.578200,9.170012,9.134267,9.199465,
9.094945,9.342948,9.802524,10.959913,10.488497,10.892593,10.673570,10.608582,10.036824,9.741473]
x=[300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,
2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,3700,3800,3900,4000,4100,4200,4300,4400,4500]
</code></pre>
<p>The data looks like this (the fit shown is manually adjusted):</p>
<p><a href="https://i.sstatic.net/zbKVZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zbKVZ.png" alt="enter image description here" /></a></p>
<p>I want to fit this custom log function:</p>
<pre><code>def log_n_func(x, a, b, c, n):
return a*(np.log(b+x)/np.log(n))+c
</code></pre>
<p>I tried two approaches:</p>
<pre><code>import lmfit
def log_n_func(x, a, b, c, n):
return a*(np.log(b+x)/np.log(n))+c
regressor = lmfit.Model(log_n_func)
initial_guess = dict(a=3.61, b=443.86, c=-34, n=2)
results = regressor.fit(data=y, x=x, **initial_guess)
y_fit = results.best_fit
</code></pre>
<p>and</p>
<pre><code>from lmfit import Model, Parameters
model = Model(log_n_func, independent_vars=['x'], param_names=["a", "b", "c", "n"])
params = Parameters()
params.add("a", value=3.6)
params.add("b", value=440)
params.add("c", value=-34)
params.add("n", value=2)
result = model.fit(data=y, params=params, x=x)
</code></pre>
<p>but both lead to the same error:
<code>ValueError: The model function generated NaN values and the fit aborted! Please check your model function and/or set boundaries on parameters where applicable. In cases like this, using "nan_policy='omit'" will probably not work.</code></p>
<p>What did I do wrong?</p>
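A likely cause (an assumption, not verified against your full data): during the fit the optimizer steps into a region where <code>b + x</code> becomes negative or <code>n</code> drops to ≤ 1, so <code>np.log</code> returns NaN/inf and lmfit aborts. Constraining the parameters with lmfit's <code>min</code>/<code>max</code> bounds (e.g. <code>params.add("b", value=440, min=-299)</code> and <code>params.add("n", value=2, min=1.001)</code>) usually avoids this. The sketch below shows how the NaNs arise and how bounded parameter values keep the model finite:

```python
import numpy as np

def log_n_func(x, a, b, c, n):
    return a * (np.log(b + x) / np.log(n)) + c

x = np.array([300.0, 400.0, 500.0])

# If the optimizer tries b < -min(x), log() sees a negative argument -> NaN
with np.errstate(invalid="ignore"):
    bad = log_n_func(x, 3.6, -500.0, -34.0, 2.0)
print(np.isnan(bad).any())      # NaNs like these are what abort the fit

# With b > -min(x) and n > 1 every evaluation stays finite
good = log_n_func(x, 3.6, 440.0, -34.0, 2.0)
print(np.isfinite(good).all())
```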
|
<python><lmfit>
|
2023-01-20 08:21:53
| 1
| 825
|
smaica
|
75,181,179
| 10,722,752
|
Azure ML pipeline fails with ImportError: cannot import name 'time_ns' error
|
<p>I am trying to run an ML pipeline in Azure. I use the <code>pandarallel</code> module, but the pipeline fails with the error below.</p>
<pre><code> from pandarallel import pandarallel
File "/azureml-envs/azureml_76aa2eb12d27af119fdfef634eaaf565/lib/python3.6/site-packages/pandarallel/__init__.py", line 1, in <module>
from .core import pandarallel
File "/azureml-envs/azureml_76aa2eb12d27af119fdfef634eaaf565/lib/python3.6/site-packages/pandarallel/core.py", line 26, in <module>
from .progress_bars import ProgressBarsType, get_progress_bars, progress_wrapper
File "/azureml-envs/azureml_76aa2eb12d27af119fdfef634eaaf565/lib/python3.6/site-packages/pandarallel/progress_bars.py", line 8, in <module>
from time import time_ns
ImportError: cannot import name 'time_ns'
</code></pre>
<p>In the dependencies YAML file, I hard-coded the <code>pandarallel</code> version, trying everything from <code>1.3.2</code> to <code>1.6.4</code>, but every version fails with either the <code>time_ns</code> error, <code>ImportError: cannot import name 'PlasmaStoreFull'</code>, or <code>ModuleNotFoundError: No module named 'ipywidgets'</code>.</p>
<p>I am not using <code>time_ns</code> anywhere in my code. Could someone please let me know how to fix this issue?</p>
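The root cause is most likely the Python version rather than your code (an assumption based on the traceback): the paths contain <code>python3.6</code>, and <code>time.time_ns</code> only exists from Python 3.7 onward, which recent pandarallel releases rely on. A probable fix is to build the AzureML environment on Python ≥ 3.7 (or pin a pandarallel release that still supported 3.6). The version gate can be checked like this:

```python
import sys
import time

# time.time_ns was added in Python 3.7; the traceback paths show a python3.6 env
if sys.version_info >= (3, 7):
    now_ns = time.time_ns()
else:
    now_ns = int(time.time() * 1e9)   # rough 3.6-compatible fallback
print(isinstance(now_ns, int))
```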
|
<python><pandas><azure>
|
2023-01-20 07:55:53
| 1
| 11,560
|
Karthik S
|
75,181,173
| 4,806,787
|
Non-monotonic evolution of runtime with increasing parallelization
|
<p>I'm running some runtime tests to understand what I can gain from parallelization and how it affects runtime (does it scale linearly?).
For a given integer <code>n</code> I successively compute the <code>n</code>-th Fibonacci number and vary the degree of parallelization, computing the Fibonacci numbers <code>i</code> in <code>{0,1,...,n}</code> with up to 16 parallel processes.</p>
<pre><code>import pandas as pd
import time
import multiprocessing as mp
# n-th Fibonacci number
def f(n: int):
if n in {0, 1}:
return n
return f(n - 1) + f(n - 2)
if __name__ == "__main__":
K = range(1, 16 + 1)
n = 100
N = range(n)
df_dauern = pd.DataFrame(index=K, columns=N)
for _n in N:
_N = range(_n)
print(f'\nn = {_n}')
for k in K:
start = time.time()
pool = mp.Pool(k)
pool.map(f, _N)
pool.close()
pool.join()
ende = time.time()
dauer = ende - start
m, s = divmod(dauer, 60)
h, m = divmod(m, 60)
h, m, s = round(h), round(m), round(s)
df_dauern.loc[k, _n] = f'{h}:{m}:{s}'
print(f'... k = {k:02d}, Dauer: {h}:{m}:{s}')
df_dauern.to_excel('Dauern.xlsx')
</code></pre>
<p>In the following DataFrame I display the duration (h:m:s) for <code>n</code> in <code>{45, 46, 47}</code>.</p>
<pre><code> 45 46 47
1 0:9:40 0:15:24 0:24:54
2 0:7:24 0:13:23 0:22:59
3 0:5:3 0:9:37 0:19:7
4 0:7:18 0:7:19 0:15:29
5 0:7:21 0:7:17 0:15:35
6 0:3:41 0:9:34 0:9:36
7 0:3:40 0:9:46 0:9:34
8 0:3:41 0:9:33 0:9:33
9 0:3:39 0:9:33 0:9:33
10 0:3:39 0:9:32 0:9:32
11 0:3:39 0:9:34 0:9:45
12 0:3:40 0:6:4 0:9:37
13 0:3:39 0:5:54 0:9:32
14 0:3:39 0:5:55 0:9:32
15 0:3:40 0:5:53 0:9:33
16 0:3:39 0:5:55 0:9:33
</code></pre>
<p>In my opinion the results are odd in two respects. First, the duration is not monotonically decreasing with increasing parallelization, and second, the runtime does not decrease linearly (that is, double the processes, half the runtime).</p>
<ul>
<li>Is this behavior to be expected?</li>
<li>Is this behavior due to the chosen example of computing Fibonacci numbers?</li>
<li>How is it even possible that runtime increases with increasing parallelization (e.g. always when moving from 2 to 3 parallel processes)?</li>
<li>How come it does not make a difference whether I use 6 or 16 parallel processes?</li>
</ul>
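Two points may explain part of the shape (assumptions about your setup, not a full analysis): the cost of <code>f(i)</code> grows exponentially with <code>i</code>, so the single largest task <code>f(n-1)</code> puts a floor on the total runtime no matter how many workers exist, and <code>Pool.map</code> splits the iterable into chunks by default, so with unlucky chunk boundaries one worker can receive several of the most expensive tail tasks at once. Passing <code>chunksize=1</code> hands tasks out one at a time:

```python
import multiprocessing as mp

def f(n: int) -> int:
    if n in {0, 1}:
        return n
    return f(n - 1) + f(n - 2)

if __name__ == "__main__":
    with mp.Pool(4) as pool:
        # chunksize=1 avoids bundling several expensive tail tasks into one chunk
        results = pool.map(f, range(25), chunksize=1)
    print(results[-1])
```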
|
<python><parallel-processing><multiprocessing>
|
2023-01-20 07:54:46
| 1
| 313
|
clueless
|
75,181,125
| 16,626,322
|
How can I get current text from TextArea in Gradio
|
<pre><code>import gradio as gr
input1 = gr.TextArea(label="Text (100 words max)", value=example,
elem_id=f"input{i}")
</code></pre>
<p>I made a TextArea and want to get its current text value.</p>
<p><code>input1.value</code> just returns the default value that I assigned (in my case, <code>example</code>).</p>
<p>How can I get the current value of the TextArea?</p>
|
<python><gradio>
|
2023-01-20 07:49:01
| 1
| 539
|
sooyeon
|
75,181,057
| 3,164,492
|
Replace an If block with a FunctionDef while keeping the rest of the code untouched
|
<p>I need to replace the if-else section with a function enclosing the same if-else. For ex:</p>
<p>Following is the code with if-else condition</p>
<pre><code>x = 5 # comments to be retained
# above new line as well
if foo() == 'bar':
y = 10
print('foo is bar')
else:
print('foo is not bar')
z =100
</code></pre>
<p>Now I want to change this</p>
<pre><code>x = 5 # comments to be retained
# above new line as well
def if_encapsulated():
if foo() == 'bar':
y = 10
print('foo is bar')
else:
print('foo is not bar')
if_encapsulated()
z = 100
</code></pre>
<p>I am using <code>ast</code> to parse the code and <code>ast.NodeTransformer</code> to replace the <code>if</code> with an <code>ast.FunctionDef</code>, but when I use <code>ast.unparse</code>/<code>astor.code_gen.to_source</code> it doesn't preserve indentation, newlines, or comments. I want to replace the <code>if</code> block while keeping the rest of the code as it is.</p>
<p>Here is the code for parsing and replacing if:</p>
<pre><code>import ast
import astor
class IfReplacer(ast.NodeTransformer):
def visit_If(self, if_node):
function_node = ast.FunctionDef(name = "if_encapsulated", args=ast.arguments(posonlyargs=[], args=[], kwonlyargs=[], defaults=[]), body=[if_node], decorator_list=[], returns = None)
return function_node
code = """
x = 5 # comments to be retained
# above new line as well
if foo() == 'bar':
y = 10
print('foo is bar')
else:
print('foo is not bar')
z =100
"""
tree = ast.parse(code)
IfReplacer().visit(tree)
print(astor.code_gen.to_source(tree))
# ast.unparse(tree) doesn't work for FunctionDef as it raise error - no attribute called lineno
</code></pre>
<p>Outputs:</p>
<pre><code>x = 5
def if_encapsulated():
if foo() == 'bar':
y = 10
print('foo is bar')
else:
print('foo is not bar')
z = 100
</code></pre>
<p>I have tried the <code>asttokens</code> library, which seems to address this kind of issue, but using it I ran into another problem. I also explored <code>astor</code> again and faced some issues. Please help.</p>
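One stdlib-only alternative (a sketch; assumes Python 3.8+ for <code>end_lineno</code>): use <code>ast</code> only to locate the <code>If</code> node, then splice the raw source lines, so comments and blank lines outside the node survive untouched:

```python
import ast
import textwrap

code = """x = 5  # comments to be retained
# above new line as well
if foo() == 'bar':
    y = 10
    print('foo is bar')
else:
    print('foo is not bar')
z = 100
"""

tree = ast.parse(code)
if_node = next(n for n in tree.body if isinstance(n, ast.If))

lines = code.splitlines(keepends=True)
start, end = if_node.lineno - 1, if_node.end_lineno   # 0-based slice bounds
body = textwrap.indent("".join(lines[start:end]), "    ")
wrapped = "def if_encapsulated():\n" + body + "if_encapsulated()\n"

# Only the if/else lines are rewritten; everything around them is kept verbatim
new_code = "".join(lines[:start]) + wrapped + "".join(lines[end:])
print(new_code)
```

For heavier rewriting with full formatting preservation, a round-trip CST library such as libcst is the usual choice.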
|
<python><tokenize><abstract-syntax-tree>
|
2023-01-20 07:40:40
| 1
| 1,805
|
Devavrata
|
75,180,971
| 10,924,836
|
Row selection on specific text
|
<p>I want to extract the specific rows whose names contain certain keywords. Below you can see my data:</p>
<pre><code>import numpy as np
import pandas as pd
data = {
'Names': ['Store (007) Total amount of Sales ',
'Store perc (65) Total amount of sales ',
'Mall store, aid (005) Total amount of sales',
'Increase in the value of sales / Additional seling (22) Total amount of sales',
'Dividends (0233) Amount of income tax',
'Other income (098) Total amount of Sales',
'Other income (0245) Amount of Income Tax',
],
'Sales':[10,10,9,7,5,5,5],
}
df = pd.DataFrame(data, columns = ['Names',
'Sales',
])
df
</code></pre>
<p>This data has some specific rows that I need selected into a separate data frame. Keywords for this selection are the words <code>Total amount of Sales</code> or <code>Total amount of sales</code>. These words are placed after the closing bracket <code>)</code>. Also please take into account that the text is not trimmed, so trailing spaces are possible.</p>
<p><a href="https://i.sstatic.net/wtj8Z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wtj8Z.jpg" alt="enter image description here" /></a></p>
<p>So can anybody help me solve this?</p>
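One way (a sketch; it assumes the keyword always directly follows the closing bracket): strip the whitespace and match a case-insensitive regex anchored at the end of the string:

```python
import pandas as pd

df = pd.DataFrame({
    'Names': ['Store (007) Total amount of Sales ',
              'Store perc (65) Total amount of sales ',
              'Mall store, aid (005) Total amount of sales',
              'Increase in the value of sales / Additional seling (22) Total amount of sales',
              'Dividends (0233) Amount of income tax',
              'Other income (098) Total amount of Sales',
              'Other income (0245) Amount of Income Tax'],
    'Sales': [10, 10, 9, 7, 5, 5, 5]})

# Match ") Total amount of sales" at the end of the (stripped) name, any case
mask = df['Names'].str.strip().str.contains(r'\)\s*Total amount of sales$', case=False)
df_sales = df[mask]
print(df_sales)
```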
|
<python><pandas>
|
2023-01-20 07:32:04
| 1
| 2,538
|
silent_hunter
|
75,180,805
| 1,146,785
|
do asyncio tasks have to be async all the way down?
|
<p>I'm having problems wrapping an external task to parallelize it. I'm a newbie with asyncio so maybe I'm doing something wrong:</p>
<p>I have an <code>animate</code> method that I have declared as async,
but it calls an external library that uses various iterators etc.
I'm wondering whether something in a library can block asyncio at the top level?</p>
<p><code>animate(item)</code> is the problem. I define it as an async task so it can run multiple calls concurrently and 'gather' them later.</p>
<p>So am I doing it wrong, or could the library be written in such a way that it can't simply be parallelized with asyncio?
I also tried wrapping the call to <code>animate</code> with another async method, without luck.</p>
<pre class="lang-py prettyprint-override"><code>MAX_JOBS = 1 # how long for
ITEMS_PER_JOB = 4 # how many images per job/user request eg for packs
async def main():
for i in range(0, MAX_JOBS):
clogger.info('job index', i)
job = get_next()
await process_job(job)
async def process_job(job):
batch = generate_batch(job)
coros = [animate(item) for idx, item in enumerate(batch)]
asyncio.gather(*coros)
asyncio.run(main())
</code></pre>
<p>the <code>animate</code> func has some internals and like</p>
<pre><code>async def animate(options):
for frame in tqdm(animator.render(), initial=animator.start_frame_idx, total=args.max_frames):
pass
</code></pre>
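Essentially yes: an ordinary (blocking) call inside an <code>async def</code> still runs on the event loop thread, so <code>asyncio.gather</code> cannot overlap it — nothing yields control until the synchronous loop finishes. (Note also that <code>asyncio.gather(*coros)</code> in <code>process_job</code> must itself be awaited.) For blocking library code, a common pattern (a sketch, assuming Python 3.9+ for <code>asyncio.to_thread</code>; CPU-bound work would use a <code>ProcessPoolExecutor</code> via <code>loop.run_in_executor</code> instead) is:

```python
import asyncio
import time

def blocking_render(item):
    time.sleep(0.2)              # stands in for the library's synchronous render loop
    return item * 2

async def animate(item):
    # push the blocking call off the event loop so gather() can overlap the calls
    return await asyncio.to_thread(blocking_render, item)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(animate(i) for i in range(4)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))   # the four 0.2 s sleeps overlap instead of serializing
```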
|
<python><asynchronous><python-asyncio>
|
2023-01-20 07:13:22
| 1
| 12,455
|
dcsan
|
75,180,687
| 12,242,085
|
How to create new columns with the names of the columns with the highest value per ID, separated by commas if needed, in Python Pandas?
|
<p>I have Pandas DataFrame like below (I can add that my DataFrame is definitely bigger, so I need to do below aggregation only for selected columns):</p>
<pre><code>ID | COUNT_COL_A | COUNT_COL_B | SUM_COL_A | SUM_COL_B
-----|-------------|-------------|-----------|------------
111 | 10 | 10 | 320 | 120
222 | 15 | 80 | 500 | 500
333 | 0 | 0 | 110 | 350
444 | 20 | 5 | 0 | 0
555 | 0 | 0 | 0 | 0
666 | 10 | 20 | 60 | 50
</code></pre>
<p>Requirements:</p>
<ul>
<li><p>I need to create new column "TOP_COUNT_2" where will be name of column (COUNT_COL_A or COUNT_COL_B) with the highest value per each ID,</p>
<ul>
<li>if some ID has same values in all "COUNT_" columns take to "TOP_COUNT_2" all columns names with prefix "COUNT_" mentioned after the decimal point</li>
</ul>
</li>
<li><p>I need to create new column "TOP_SUM_2" where will be name of column (SUM_COL_A or SUM_COL_B) with the highest value per each ID,</p>
<ul>
<li>if some ID has same values in all "SUM_" columns take to "TOP_SUM_2" all columns names with prefix "COUNT_" mentioned after the decimal point</li>
</ul>
</li>
<li><p>If there is 0 in both columns with prefix COUNT_ then give NaN in column TOP_COUNT</p>
</li>
<li><p>If there is 0 in both columns with prefix SUM_ then give NaN in column TOP_SUM</p>
</li>
</ul>
<p>Desire output:</p>
<pre><code>ID | CONT_COL_A | CNT_COL_B | SUM_COL_A | SUM_COL_B | TOP_COUNT_2 | TOP_SUM_2
-----|-------------|-------------|-----------|------------|----------------------|-----------
111 | 10 | 10 | 320 | 120 | CNT_COL_A, CNT_COL_B | SUM_COL_A
222 | 15 | 80 | 500 | 500 | COUNT_COL_B | SUM_COL_A, SUM_COL_B
333 | 0 | 0 | 110 | 350 | NaN | SUM_COL_B
444 | 20 | 5 | 0 | 0 | COUNT_COL_A | NaN
555 | 0 | 0 | 0 | 0 | NaN | NaN
666 | 10 | 20 | 60 | 50 | COUNT_COL_B | SUM_COL_A
</code></pre>
<p>How can I do that in Python Pandas?</p>
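One possible sketch (using the actual column names from the input frame; the desired-output table abbreviates them as CNT_): for each row, join every column name whose value equals the row maximum, and emit NaN when all values are 0:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [111, 222, 333, 444, 555, 666],
                   'COUNT_COL_A': [10, 15, 0, 20, 0, 10],
                   'COUNT_COL_B': [10, 80, 0, 5, 0, 20],
                   'SUM_COL_A': [320, 500, 110, 0, 0, 60],
                   'SUM_COL_B': [120, 500, 350, 0, 0, 50]})

def top_names(sub):
    def row_top(row):
        if (row == 0).all():
            return np.nan                                # all zeros -> NaN
        return ', '.join(row.index[row == row.max()])    # all column names at the max
    return sub.apply(row_top, axis=1)

df['TOP_COUNT_2'] = top_names(df[['COUNT_COL_A', 'COUNT_COL_B']])
df['TOP_SUM_2'] = top_names(df[['SUM_COL_A', 'SUM_COL_B']])
print(df[['ID', 'TOP_COUNT_2', 'TOP_SUM_2']])
```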
|
<python><pandas><dataframe><numpy>
|
2023-01-20 06:56:22
| 1
| 2,350
|
dingaro
|
75,180,598
| 11,402,025
|
TypeError: Object of type 'type' is not JSON serializable
|
<p>The code works fine in Postman and provides a valid response but fails to generate the OpenAPI/Swagger UI automatic docs.</p>
<pre class="lang-py prettyprint-override"><code>class Role(str, Enum):
Internal = "internal"
External = "external"
class Info(BaseModel):
id: int
role: Role
class AppInfo(Info):
info: str
@app.post("/api/v1/create", status_code=status.HTTP_200_OK)
async def create(info: Info, apikey: Union[str, None] = Header(str)):
if info:
alias1 = AppInfo(info="Portal Gun", id=123, role=info.role)
        alias2 = AppInfo(info="Plumbus", id=123, role=info.role)
info_dict.append(alias1.dict())
info_dict.append(alias2.dict())
return {"data": info_dict}
else:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Please provide the input"
)
</code></pre>
<p>Error received:</p>
<pre><code>TypeError: Object of type 'type' is not JSON serializable
</code></pre>
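A likely cause (an assumption based on the code shown): <code>Header(str)</code> passes the type object <code>str</code> itself as the header's default value, and generating the OpenAPI schema then tries to JSON-serialize that type. Declaring <code>apikey: Union[str, None] = Header(default=None)</code> should fix the docs. The core failure can be reproduced without FastAPI:

```python
import json

default = str   # Header(str) stores the *type* str itself as the default value
try:
    json.dumps(default)
except TypeError as exc:
    message = str(exc)
print(message)   # the same "Object of type type is not JSON serializable" failure
```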
|
<python><swagger><fastapi><swagger-ui><openapi>
|
2023-01-20 06:43:30
| 2
| 1,712
|
Tanu
|
75,180,457
| 12,242,085
|
How to create 2 new columns in a DataFrame based on the highest values in the rest of the columns with the appropriate prefix in Python Pandas?
|
<p>I have Pandas DataFrame like below (I can add that my DataFrame is definitely bigger, so I need to do below aggregation only for selected columns):</p>
<pre><code>ID | COUNT_COL_A | COUNT_COL_B | SUM_COL_A | SUM_COL_B
-----|-------------|-------------|-----------|------------
111 | 10 | 10 | 320 | 120
222 | 15 | 80 | 500 | 500
333 | 0 | 0 | 110 | 350
444 | 20 | 5 | 0 | 0
555 | 0 | 0 | 0 | 0
666  | 10          | 20          | 60        | 50
</code></pre>
<p>Requirements:</p>
<ul>
<li><p>I need to create new column "TOP_COUNT" where will be name of column (COUNT_COL_A or COUNT_COL_B) with the highest value per each ID,</p>
<ul>
<li>if some ID has same values in both "COUNT_" columns take to "TOP_COUNT" column name which has higher value in its counterpart with prefix SUM_ (SUM_COL_A or SUM_COL_B)</li>
</ul>
</li>
<li><p>I need to create new column "TOP_SUM" where will be name of column (SUM_COL_A or SUM_COL_B) with the highest value per each ID,</p>
<ul>
<li>if some ID has same values in both "SUM_" columns take to "TOP_SUM" column name which has higher value in its counterpart with prefix COUNT_ (COUNT_COL_A or COUNT_COL_B)</li>
</ul>
</li>
<li><p>If there is 0 in both columns with prefix COUNT_ then give NaN in column TOP_COUNT</p>
</li>
<li><p>If there is 0 in both columns with prefix SUM_ then give NaN in column TOP_SUM</p>
</li>
</ul>
<p>Desire output:</p>
<pre><code>ID | COUNT_COL_A | COUNT_COL_B | SUM_COL_A | SUM_COL_B | TOP_COUNT | TOP_SUM
-----|-------------|-------------|-----------|------------|-------------|---------
111 | 10 | 10 | 320 | 120 | COUNT_COL_A | SUM_COL_A
222 | 15 | 80 | 500 | 500 | COUNT_COL_B | SUM_COL_B
333 | 0 | 0 | 110 | 350 | NaN | SUM_COL_B
444 | 20 | 5 | 0 | 0 | COUNT_COL_A | NaN
555 | 0 | 0 | 0 | 0 | NaN | NaN
666 | 10 | 20 | 60 | 50 | COUNT_COL_B | SUM_COL_A
</code></pre>
<p>How can I do that in Python Pandas?</p>
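A hedged sketch (the epsilon tie-breaking trick is an assumption that works because the counterpart values here are far smaller than 1e9): add a tiny fraction of the counterpart columns so <code>idxmax</code> resolves ties in their favour, and blank out rows where the main pair is all zero:

```python
import pandas as pd

df = pd.DataFrame({'ID': [111, 222, 333, 444, 555, 666],
                   'COUNT_COL_A': [10, 15, 0, 20, 0, 10],
                   'COUNT_COL_B': [10, 80, 0, 5, 0, 20],
                   'SUM_COL_A': [320, 500, 110, 0, 0, 60],
                   'SUM_COL_B': [120, 500, 350, 0, 0, 50]})

def top(frame, main_pref, tie_pref):
    main = frame[[main_pref + 'COL_A', main_pref + 'COL_B']]
    tie = frame[[tie_pref + 'COL_A', tie_pref + 'COL_B']]
    # a tiny fraction of the counterpart columns breaks ties in their favour
    score = main + tie.to_numpy() * 1e-9
    winner = score.idxmax(axis=1)
    return winner.mask(main.eq(0).all(axis=1))   # all-zero rows -> NaN

df['TOP_COUNT'] = top(df, 'COUNT_', 'SUM_')
df['TOP_SUM'] = top(df, 'SUM_', 'COUNT_')
print(df[['ID', 'TOP_COUNT', 'TOP_SUM']])
```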
|
<python><pandas><dataframe><numpy><aggregate>
|
2023-01-20 06:26:05
| 1
| 2,350
|
dingaro
|
75,180,456
| 8,040,369
|
openpyxl(): Reading the cell value not the cell formula
|
<p>I am using openpyxl to copy the contents of one Excel file to another.</p>
<p>My original Excel file has an empty column in it, so I skip that column when writing to the new file.</p>
<p>Because of this, the formulas that were being used get messed up.
For example,
Original file:</p>
<pre><code>A B C D E
(empty 1 2 3 =SUM(B1:D1)
column)
</code></pre>
<p>New File:</p>
<pre><code>A B C D
1 2 3 =SUM(B1:D1)
</code></pre>
<p>So because of this my calculations in D column gets changed.</p>
<p>My code:</p>
<pre><code>ORG_EXCEL_FILE = openpyxl.load_workbook("workbook.xlsx")
ORG_EXL_SHEET_NAMES = ORG_EXCEL_FILE.sheetnames
NEW_EXCEL_FILE = openpyxl.load_workbook("TEST.xlsx")
NEW_EXCEL_FILE_WS = NEW_EXCEL_FILE.active
ORG_FILE_SHEET = ORG_EXCEL_FILE.get_sheet_by_name("Sheet1")
NEW_FILE_SHEET = NEW_EXCEL_FILE.get_sheet_by_name("Sheet1")
for i,row in enumerate(ORG_FILE_SHEET.iter_rows()):
for j,col in enumerate(row):
NEW_FILE_SHEET.cell(row=i+1,column=j+1).value = col.value
NEW_EXCEL_FILE.save("TEST.xlsx")
</code></pre>
<p>When I run the above code, col.value and col.internal_value both give the cell formula.</p>
<p>I tried using <strong>openpyxl.load_workbook("workbook.xlsx", data_only=True)</strong>; it gives me <strong>None</strong> for cell.value and cell.internal_value.</p>
<p>Is there a way to get the actual cell value instead of the formula or a None value?</p>
<p>Thanks,</p>
|
<python><python-3.x><openpyxl>
|
2023-01-20 06:25:57
| 1
| 787
|
SM079
|
75,180,344
| 8,002,010
|
localstack s3 can't be accessed via boto3
|
<p>I am able to run localstack via docker and my docker-compose file looks like:</p>
<pre><code>
services:
localstack:
image: localstack/localstack:latest
network_mode: host
environment:
- SERVICES=s3
- AWS_DEFAULT_REGION=eu-west-1
- HOSTNAME_EXTERNAL=localhost
- DEBUG=1
ports:
- '4566-4583:4566-4583'
</code></pre>
<p>I am able to create a bucket and upload a file via <a href="https://github.com/localstack/awscli-local" rel="nofollow noreferrer">awslocal</a> like:</p>
<pre><code>create bukcet:
awslocal s3 mb s3://test
> make_bucket: test
upload test file to s3
awslocal s3 cp test.txt s3://test
> upload: ./test.txt to s3://test/test.txt
check if its uploaded:
awslocal s3 ls s3://test
> 2022-12-25 22:18:44 10 test.txt
</code></pre>
<p>All I am trying next is to connect via code. I wrote a simple boto3 Python script, but it fails with <code>Unable to locate credentials</code>. I tried <code>aws configure</code>, but since I have no idea what my access key and secret key for the LocalStack S3 are, it feels like a dead end. The Python code:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
from botocore.exceptions import ClientError
import os
ddb1 = boto3.client('s3', endpoint_url='http://localhost.localstack.cloud:4566')
def upload_file(file_name, bucket, object_name=None):
"""
Upload a file to a S3 bucket.
"""
try:
if object_name is None:
object_name = os.path.basename(file_name)
response = ddb1.upload_file(
file_name, bucket, object_name)
except ClientError:
print('Could not upload file to S3 bucket.')
raise
else:
return response
upload_file("testdata/test.txt", "sample")
</code></pre>
<p>Any help on how to connect via code without <code>awslocal</code> would be appreciated.
[1]: <a href="https://github.com/localstack/awscli-local" rel="nofollow noreferrer">https://github.com/localstack/awscli-local</a></p>
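For reference, LocalStack accepts any credentials; boto3 just needs its credential chain to find <em>something</em>. A hedged sketch (the literal value "test" and the endpoint are placeholders for whatever your setup uses; the boto3 import is guarded in case it isn't installed):

```python
import os

# LocalStack accepts ANY credentials, but boto3's credential chain must
# find some value; "test" below is an arbitrary placeholder.
os.environ["AWS_ACCESS_KEY_ID"] = "test"
os.environ["AWS_SECRET_ACCESS_KEY"] = "test"
os.environ["AWS_DEFAULT_REGION"] = "eu-west-1"

try:
    import boto3

    # Equivalent explicit form, independent of the environment variables:
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:4566",  # LocalStack edge port
        aws_access_key_id="test",
        aws_secret_access_key="test",
        region_name="eu-west-1",
    )
except ImportError:  # keep the sketch importable without boto3
    s3 = None
```

With either form in place, the <code>Unable to locate credentials</code> error should go away, since the client no longer has to fall back to the (empty) shared credentials file.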
|
<python><amazon-s3><boto3><localstack>
|
2023-01-20 06:11:01
| 2
| 664
|
talkdatatome
|
75,180,332
| 10,020,470
|
How to sum second list from list of list in Python
|
<p>I would like to sum from list of list as below</p>
<pre><code>array([[[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5]],
[[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5]],
[[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5]]]
</code></pre>
<p>What I want is to sum as below</p>
<pre><code> [1,1,1]+[1,1,1]+[1,1,1] = 9
[2,2,2]+[2,2,2]+[2,2,2] = 18
.... = 27
= 36
= 45
</code></pre>
<p>And return a list like below as the final list:</p>
<pre><code>[9,18,27,36,45]
</code></pre>
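Assuming the data is a numpy array of shape (3, 5, 3), one way to express the desired row-group sums is a single <code>sum</code> over the outer and inner axes:

```python
import numpy as np

block = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]])
a = np.array([block, block, block])   # shape (3, 5, 3)

# Sum away axis 0 (the three copies) and axis 2 (the three elements of
# each row), keeping axis 1 (the five rows).
result = a.sum(axis=(0, 2))
print(result)  # [ 9 18 27 36 45]
```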
|
<python><arrays><python-3.x><numpy>
|
2023-01-20 06:09:22
| 4
| 437
|
foy
|
75,180,152
| 2,339,664
|
How can I create a Python class with a member named 'class'
|
<p>I want to create a Python class that represents a dictionary <strong>my_dict</strong> that I want to convert as follows</p>
<pre><code>my_class = MyClass(**my_dict)
</code></pre>
<p>The issue is that one of the dict's keys is called 'class'.</p>
<p>So when I do the following, Python complains:</p>
<pre><code>class MyClass(BaseModel):
name: str
address: str
id: int
class: str <-- Python does not like me using the keyword class
</code></pre>
<p>How can I get around this?</p>
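If <code>MyClass</code> is a pydantic model, the usual route (as I understand pydantic's API) is a non-keyword field name with an alias, e.g. <code>class_: str = Field(alias="class")</code>, plus populate-by-alias configuration. As a library-free illustration of the underlying trick, attributes whose names are keywords are perfectly reachable through <code>setattr</code>/<code>getattr</code>, because those take the name as a plain string (the <code>Record</code> class below is my own sketch):

```python
class Record:
    """Store arbitrary keys, including Python keywords like 'class'."""

    def __init__(self, **fields):
        for key, value in fields.items():
            # setattr works even when the name is a keyword, because it
            # takes the attribute name as a plain string.
            setattr(self, key, value)


my_dict = {"name": "ada", "address": "x", "id": 1, "class": "A"}
rec = Record(**my_dict)

# 'rec.class' is a syntax error, but getattr reaches it fine:
print(getattr(rec, "class"))  # A
```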
|
<python><python-3.x>
|
2023-01-20 05:39:39
| 1
| 4,917
|
Harry Boy
|
75,179,944
| 10,308,255
|
How to use `groupby` to aggregate columns into dictionary so that new column contains that dictionary?
|
<p>I have a dataframe that contains a <code>person</code>, <code>year</code>, and a bunch of <code>flag</code> containing columns, like below:</p>
<pre><code># sample dataframe
data = [["John Doe", 2018, True, False, True], ["Jane Doe", 2019, True, False, False]]
df = pd.DataFrame(data, columns=["person", "year", "flag_1", "flag_2", "flag_3"])
df
</code></pre>
<pre><code> person year flag_1 flag_2 flag_3
0 John Doe 2018 True False True
1 Jane Doe 2019 True False False
</code></pre>
<p>I would like my final output to be a <code>groupby</code> where the <code>person</code> and <code>year</code> are retained, and a <em>new</em> column containing a dictionary of all the flag results is stored.</p>
<p>Something kind of like, but not quite like, this:</p>
<p>First: reshape</p>
<pre><code>reshaped_df = pd.melt(
df, id_vars=["person", "year"], value_vars=["flag_1", "flag_2", "flag_3"]
)
</code></pre>
<pre><code> person year variable value
0 John Doe 2018 flag_1 True
1 Jane Doe 2019 flag_1 True
2 John Doe 2018 flag_2 False
3 Jane Doe 2019 flag_2 False
4 John Doe 2018 flag_3 True
</code></pre>
<p>Second: Create dictionary</p>
<pre><code>reshaped_df.set_index(["person", "year", "variable"]).T.to_dict("list")
</code></pre>
<pre><code>{('John Doe', 2018, 'flag_1'): [True],
('Jane Doe', 2019, 'flag_1'): [True],
('John Doe', 2018, 'flag_2'): [False],
('Jane Doe', 2019, 'flag_2'): [False],
('John Doe', 2018, 'flag_3'): [True],
('Jane Doe', 2019, 'flag_3'): [False]}
</code></pre>
<p><strong>except</strong> I want my output to look like this:</p>
<pre><code> person year flag_dict
0 John Doe 2018 {'flag_1': True, 'flag_2': False, 'flag_3': True}
1 Jane Doe 2019 {'flag_1': True, 'flag_2': False, 'flag_3': False}
</code></pre>
<p>Is this possible? If so how can it be done? Thank you!</p>
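One melt-free route worth sketching: <code>to_dict("records")</code> already produces one dict per row, so the list of dicts can be assigned straight into a new column:

```python
import pandas as pd

data = [["John Doe", 2018, True, False, True], ["Jane Doe", 2019, True, False, False]]
df = pd.DataFrame(data, columns=["person", "year", "flag_1", "flag_2", "flag_3"])

flag_cols = ["flag_1", "flag_2", "flag_3"]

# to_dict("records") yields one dict per row, which can be assigned
# straight into a new column; no melt/groupby round-trip needed.
out = df[["person", "year"]].copy()
out["flag_dict"] = df[flag_cols].to_dict("records")

print(out.loc[0, "flag_dict"])  # {'flag_1': True, 'flag_2': False, 'flag_3': True}
```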
|
<python><pandas><dataframe><dictionary>
|
2023-01-20 01:59:41
| 1
| 781
|
user
|
75,179,880
| 19,950,360
|
plotly dash chained callback
|
<p>This is my code</p>
<pre><code>app = Dash(__name__)
app.layout = html.Div([
dcc.Location(id='url', refresh=False),
dcc.Dropdown(options=['bar', 'pie'], id='dropdown', multi=False, value='bar', placeholder='Select graph type'),
html.Div(id='page-content'),
])
@app.callback(
Output('see1', 'options'),
Input('url', 'search')
)
def ex_projecKey(search):
return re.search('project_key=(.*)', search).group(1)
@app.callback(
Output('page-content', 'children'),
Input('see1', 'options'),
Input('dropdown', 'value')
)
def update_page(options, value):
return f'{options}.{value}'
if __name__ == '__main__':
app.run_server(debug=True, port=4444)
</code></pre>
<p>I receive a URL and extract the project_key from its query string; the user then chooses a graph type (bar or pie) from the dropdown menu.</p>
<p>Then I have two objects (project_key, graph_type).</p>
<p>But two error occur</p>
<pre><code>Property "options" was used with component ID:
"see1"
in one of the Input items of a callback.
This ID is assigned to a dash_html_components.Div component
in the layout, which does not support this property.
This ID was used in the callback(s) for Output(s):
page-content.children
Property "options" was used with component ID:
"see1"
in one of the Output items of a callback.
This ID is assigned to a dash_html_components.Div component
in the layout, which does not support this property.
This ID was used in the callback(s) for Output(s):
see1.options
</code></pre>
<p>The first callback outputs to <code>see1.options</code>.</p>
<p>Then the second callback's Input should receive that options value, shouldn't it?</p>
|
<python><plotly><plotly-dash>
|
2023-01-20 01:42:53
| 1
| 315
|
lima
|
75,179,732
| 3,169,603
|
Pythonic method for stacking np.array's of different row length
|
<p>Assume I have following multiple numpy <code>np.array</code> with different number of rows but same number of columns:</p>
<pre><code>a=np.array([[10, 20, 30],
[40, 50, 60],
[70, 80, 90]])
b=np.array([[1, 2, 3],
[4, 5, 6]])
</code></pre>
<p>I want to combine them to have following:</p>
<pre><code>result=np.array([[10, 20, 30],
[40, 50, 60],
[70, 80, 90],
[1, 2, 3],
[4, 5, 6]])
</code></pre>
<p>Here's what I do using a <code>for</code> loop, but I don't like it. Is there a pythonic way to do this?</p>
<pre><code>c=[a,b]
num_row=sum([x.shape[0] for x in c])
num_col=a.shape[1] # or b.shape[1]
result=np.zeros((num_row,num_col))
k=0
for s in c:
for i in s:
        result[k]=i
k+=1
result=
array([[10, 20, 30],
[40, 50, 60],
[70, 80, 90],
[1, 2, 3],
[4, 5, 6]])
</code></pre>
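The usual idiom for this is row-wise stacking: <code>np.vstack</code> (or <code>np.concatenate(..., axis=0)</code>) handles differing row counts as long as the column counts agree:

```python
import numpy as np

a = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
b = np.array([[1, 2, 3], [4, 5, 6]])

# vstack stacks row-wise; only the number of columns must match.
result = np.vstack([a, b])
print(result.shape)  # (5, 3)
```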
|
<python><numpy>
|
2023-01-20 01:08:12
| 1
| 1,093
|
doubleE
|
75,179,517
| 327,026
|
A build-system independent way to get the version from a package source directory
|
<p>There are many different ways that Python packages manage their version data. Is there a build-system independent way to extract the version from a package source directory?</p>
<p>I'm aware of the <a href="https://peps.python.org/pep-0517/" rel="nofollow noreferrer">PEP 517</a> compatible Python package builder <a href="https://github.com/pypa/build" rel="nofollow noreferrer"><code>build</code></a> which does this internally. For instance, in an example source directory for a Python package <code>my_pkg</code>:</p>
<pre><code>$ python -m build --sdist
...
Successfully built my_pkg-1.2.0.tar.gz
</code></pre>
<p>so is there a clever way to just extract the version number without building the distribution?</p>
|
<python><versioning><python-packaging><pep517>
|
2023-01-20 00:11:52
| 1
| 44,290
|
Mike T
|
75,179,258
| 11,999,684
|
Python Exception Message Escaping <=> character
|
<p>My colleagues and I are working on some code to produce SQL merge strings for users of a library we're building in Python to be run in the Azure Databricks environment. These functions provide the SQL string through a custom exception that we've written called DebugMode. The issue that we've encountered and I can't find a satisfactory answer to is why when the DebugMode string is printed do the <=> characters get removed? This can be replicated with a simpler example below where I've tossed various items into the Exception string to see what would get printed and what wouldn't.</p>
<pre class="lang-py prettyprint-override"><code>raise Exception('this is a string with the dreaded spaceship <=> < > <= >= `<=>` - = + / <random> \<rand2>')
</code></pre>
<p>This snippet results in the following:
<a href="https://i.sstatic.net/G6jrp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G6jrp.png" alt="result of raised exception" /></a></p>
<p>What I don't understand is why the <=> character is missing in the Exception printout at the top but is present when you expand the Exception. Is there a way to get the first string to include the <=> character?</p>
<hr />
<p>I've also included the custom DebugMode class we're using.</p>
<pre class="lang-py prettyprint-override"><code>class DebugMode(Exception):
'''
Exception raised in the event of debug mode being enabled on any of the merge functions. It is intended to halt the merge and provide
the SQL merge string for manual review.
Attributes:
sql_string (str): The sql merge string produced by the merge function.
Methods:
None
'''
def __init__(self, sql_string, message='Debug mode was enabled, the SQL operation has halted to allow manual review of the SQL string below.'):
self.sql_string = sql_string
self.message = message
        super().__init__(self.message)  # overwrite the Exception base class's message
def __str__(self):
return f'{self.message}\n{self.sql_string}'
</code></pre>
|
<python><databricks>
|
2023-01-19 23:21:12
| 1
| 323
|
FoxHound
|
75,179,210
| 214,526
|
Reading key/value pair text file efficiently with correct by construction
|
<p>I have few thousands of text files where each file is of following form:</p>
<pre><code>Some Key: value1
Some Other Key: value2
Another Key: value3
... < 300+ such entries > ...
</code></pre>
<p>I want to read each of these files as dictionary and populate as a row in pandas dataframe. Though at this moment I am not sure if all the files have exact same keys or not but I'm hoping that there will not be too much variations as these files are logs from some tool.</p>
<p>What is the easiest way to read each file as a dictionary so that it's correct by construction?
As of now, my simple code is like following:</p>
<pre><code>with open(log_data_file, mode="r") as txt_file:
for line in txt_file:
        keyval = line.strip().split(sep=":", maxsplit=1)
if len(keyval) != 2:
# some debug print
continue
data[keyval[0]] = keyval[1]
</code></pre>
<p>Possibly, I can add some logic to handle a line if that satisfies a regular expression. But beyond that, is there package in python where I can specify the grammar for the file and handle [iterate over list of (key, value)] the file only if the grammar is satisfied and file read is successful?</p>
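Short of a full grammar library, a compiled regex per line plus a hard failure on mismatch gets most of the way there. A sketch (the helper name <code>parse_log</code> is my own; for a real grammar with error recovery, libraries such as <code>pyparsing</code> exist, but for <code>key: value</code> lines a regex is usually enough):

```python
import re

# One "rule" of the grammar: a key (no colon) followed by ":" and a value.
LINE_RE = re.compile(r"^(?P<key>[^:]+):\s*(?P<value>.*)$")


def parse_log(text):
    """Parse key/value lines; raise ValueError on any malformed line."""
    record = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue
        m = LINE_RE.match(line)
        if m is None:
            raise ValueError(f"line {lineno} does not match the grammar: {line!r}")
        record[m["key"].strip()] = m["value"].strip()
    return record


rows = [parse_log("Some Key: value1\nSome Other Key: value2")]
# Each dict becomes one DataFrame row, e.g. pd.DataFrame(rows);
# keys missing from some files simply become NaN in those rows.
```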
|
<python><python-3.x>
|
2023-01-19 23:14:32
| 1
| 911
|
soumeng78
|
75,179,038
| 9,428,990
|
PyJWT validate custom claims
|
<p>Been using authlib for a while and it has been real easy to validate both the existence of a claim but also its value. According to the example:</p>
<pre><code>claims_options = {
"iss": { "essential": True, "value": "https://idp.example.com" },
"aud": { "essential": True, "value": "api1" },
"email": { "essential": True, "value": "user1@email.com" },
}
claims = jwt.decode(token, jwk, claims_options=claims_options)
claims.validate()
</code></pre>
<p>However, with PyJWT I find it to be a bit unclear. I only seem to be able to check for the existence of a claim but not its value (aud and iss obviously works):</p>
<pre><code>decoded_token = jwt.decode(
token,
key,
audience="api1",
issuer="issuer"
algorithms=["RS256"],
options={"require": ["exp", "iss", "aud", "email"]}
)
</code></pre>
<p>This is even mentioned in the <a href="https://pyjwt.readthedocs.io/en/2.1.0/api.html#:%7E:text=Does%20NOT%20verify%20that%20the%20claims%20are" rel="nofollow noreferrer">documentation</a>. However, the documentation seem incomplete. Simply put, is it possible to validate <strong>custom</strong> claim values or do I simply need to manually parse the decoded token and look for my desired values?</p>
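As far as I can tell, PyJWT's <code>options={"require": [...]}</code> only asserts presence, so custom claim <em>values</em> do need a manual pass after <code>jwt.decode</code>. A minimal hedged helper (the names below are my own, not part of PyJWT):

```python
class InvalidClaimError(Exception):
    pass


def validate_claims(decoded, expected):
    """Check that each expected claim exists and carries the expected value."""
    for claim, value in expected.items():
        if claim not in decoded:
            raise InvalidClaimError(f"missing claim: {claim}")
        if decoded[claim] != value:
            raise InvalidClaimError(f"bad value for {claim}: {decoded[claim]!r}")


# decoded would normally come from jwt.decode(...)
decoded = {"iss": "issuer", "aud": "api1", "email": "user1@email.com"}
validate_claims(decoded, {"email": "user1@email.com"})
```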
|
<python><python-3.x><jwt><pyjwt>
|
2023-01-19 22:42:37
| 1
| 719
|
Frankster
|
75,179,010
| 388,951
|
How can I disable pylint's missing-module-docstring for unit tests?
|
<p>I'm a big fan of pylint's built-in docstring checker. I'm very happy to require docstrings on all my classes, all my functions, and all my modules.</p>
<p>What I don't like, however, is that pylint also wants docstrings on all my test modules and all my <code>pytest</code> test functions. This leads to low-value docstrings like the following:</p>
<pre class="lang-py prettyprint-override"><code>"""Tests for foo.py"""
from foo import bar
def test_bar():
"""Tests for bar"""
assert bar(1) == 2
</code></pre>
<p>I've been able to disable the function-level docstring requirement using <code>no-docstring-rgx</code> in my <a href="https://www.codeac.io/documentation/pylint-configuration.html" rel="noreferrer"><code>.pylintrc</code> file</a>:</p>
<pre><code>[MASTER]
no-docstring-rgx=^(_|test_)
</code></pre>
<p>This takes care of the <a href="https://pylint.pycqa.org/en/latest/user_guide/messages/convention/missing-function-docstring.html" rel="noreferrer"><code>missing-function-docstring</code></a> / <code>C0116</code> error.</p>
<p>But I haven't been able to find a way to disable the <a href="https://pylint.pycqa.org/en/latest/user_guide/messages/convention/missing-module-docstring.html" rel="noreferrer"><code>missing-module-docstring</code></a> / <code>C0114</code> error just for files ending with <code>_test.py</code>. Is this possible with pylint?</p>
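One workaround that is definitely supported is an inline pragma at the top of each test module, which scopes the disable to that file only:

```python
# pylint: disable=missing-module-docstring
def test_bar():
    assert 1 + 1 == 2


test_bar()
```

Whether a second config file dropped into the tests directory is honored depends on pylint's config discovery for your version, so I'd treat that route as something to verify against the docs; the inline pragma is per-file and unambiguous.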
|
<python><pytest><pylint><pylintrc>
|
2023-01-19 22:38:18
| 1
| 17,142
|
danvk
|
75,178,894
| 5,032,387
|
Specifying complex truncated beta distribution
|
<p>I'd like to specify a truncated beta distribution such that the support is [0.1, 0.4], and allow for the probability density at the lower bound to be higher than very near 0.</p>
<p>Here's what I have so far. I realize that the parameters I specify here may not give me the distribution exactly like I described, but I'm just trying to get a truncated beta distribution to work:</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
from scipy import stats
def get_ab(mean, stdev):
kappa = (mean*(1-mean)/stdev**2 - 1)
return mean*kappa, (1-mean)*kappa
class truncated_beta(stats.rv_continuous):
def _pdf(self, x, alpha, beta, a, b):
return stats.beta.pdf(x, alpha, beta) / (stats.beta.cdf(b, alpha, beta) - stats.beta.cdf(a, alpha, beta))
def _cdf(self, x, alpha, beta, a, b):
return (stats.beta.cdf(x, alpha, beta) - stats.beta.cdf(a, alpha, beta)) / (stats.beta.cdf(b, alpha, beta) - stats.beta.cdf(a, alpha, beta))
def plot_beta_distr(mu, stdev, lb, ub):
alpha_, beta_ = get_ab((mu-lb)/ub, stdev/ub)
dist = truncated_beta(a=lb, b=ub, alpha = alpha_, beta = beta_, name='truncated_beta')
x = np.linspace(lb, ub, 100000)
y = dist.pdf(x, alpha = alpha_, beta = beta_, a = lb, b = ub)
fig = px.line(x = x, y = y)
return fig
mu = 0.02
stdev = 0.005
lb = 0.1
ub = 0.4
plot_beta_distr(mu, stdev, lb, ub)
</code></pre>
<p>I get an error at the truncated_beta step:</p>
<p><code>TypeError: __init__() got an unexpected keyword argument 'alpha'</code></p>
<p><strong>Update:</strong>
The reason why the above doesn't work is very likely that I'm specifying a mu that's outside of my bounds.</p>
<p>Here is a simpler formulation that I generated from the advice on <a href="https://stackoverflow.com/questions/11491032/truncating-scipy-random-distributions">this</a> post.</p>
<pre><code>mean = .35
std = .1
lb = 0.1
ub = 0.5
def get_ab(mu, stdev):
kappa = (mu*(1-mu)/stdev**2 - 1)
return mu*kappa, (1-mu)*kappa
alpha_, beta_ = get_ab((mean - lb) / ub, std/ub)
norm = stats.beta.cdf(ub, alpha_, beta_) - stats.beta.cdf(lb, alpha_, beta_)
yr=stats.uniform.rvs(size=1000000)*norm+stats.beta.cdf(lb, alpha_, beta_)
xr=stats.beta.ppf(yr, alpha_, beta_)
xr.min() # 0.100
xr.max() # 0.4999
xr.std() # 0.104
xr.mean() # 0.341
</code></pre>
<p>Everything lines up here except the mean, which is off. It's clear that I'm misspecifying something.</p>
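If I read the update right, the likely misspecification is scaling by <code>ub</code> instead of the interval width <code>ub - lb</code> when mapping onto [0, 1]. And once the scaling is right, scipy's <code>loc</code>/<code>scale</code> parameters give a beta whose support is exactly [lb, ub], so no explicit truncation is needed at all. A sketch (tolerances and sample size are arbitrary):

```python
import numpy as np
from scipy import stats

mean, std, lb, ub = 0.35, 0.1, 0.1, 0.5


def get_ab(mu, sd):
    kappa = mu * (1 - mu) / sd**2 - 1
    return mu * kappa, (1 - mu) * kappa


# Map [lb, ub] onto [0, 1] using the interval WIDTH ub - lb, not ub.
width = ub - lb
a, b = get_ab((mean - lb) / width, std / width)

# loc/scale rescales the beta so its support is exactly [lb, ub].
dist = stats.beta(a, b, loc=lb, scale=width)
samples = dist.rvs(size=100_000, random_state=0)
print(samples.mean(), samples.std())  # ~0.35, ~0.1
```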
|
<python><scipy><beta-distribution>
|
2023-01-19 22:21:53
| 1
| 3,080
|
matsuo_basho
|
75,178,854
| 496,289
|
How do I exclude multiple folders and/or file-patterns from pre-commit analysis?
|
<p>I have a Python project and I'm trying to set up pre-commit checks using <a href="https://pre-commit.com/" rel="nofollow noreferrer">pre-commit</a>. I want to exclude some folders and some files (by pattern) from the analysis. The <code>exclude</code> tag in the config file only supports a string, not an array.</p>
<p>E.g. in following project structure, I want to exclude <code>poc</code> and <code>tests</code> folders and <code>conftest.py</code> file.</p>
<pre><code>root
├── poetry.lock
├── pyproject.toml
├── resources
├── src
│ ├── mylib
│ │ └── functions.py
│ └── poc
│ └── some_junk.py
├── conftest.py
└── tests
├── entry_point.py
└── test_functions.py
</code></pre>
<p>I can exclude a folder or file using exclude tag, e.g. to exclude <code>poc</code> I do this:</p>
<pre class="lang-yaml prettyprint-override"><code>exclude: poc
repos:
- repo: https://github.com/pycqa/isort
rev: 5.11.4
hooks:
- id: isort
- id: ...
</code></pre>
<p>... but how do I exclude multiple files and folders?</p>
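Since <code>exclude</code> is a single Python regular expression, multiple paths are combined with alternation; the multi-line verbose-regex form keeps it readable. A sketch against the tree above (adjust the paths if your layout differs):

```yaml
exclude: |
  (?x)^(
      src/poc/.*
    | tests/.*
    | conftest\.py
  )$
repos:
  - repo: https://github.com/pycqa/isort
    rev: 5.11.4
    hooks:
      - id: isort
```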
|
<python><git><continuous-integration><pre-commit-hook><pre-commit.com>
|
2023-01-19 22:16:53
| 1
| 17,945
|
Kashyap
|
75,178,705
| 3,853,537
|
How to Get Path Similarity of Stopwords like "and"
|
<p>I'm trying to get the synsets of words to get their similarity matrix. However, one of the words is "and." I realized that it is a stopword in nltk and thus may not have a synset. For example,</p>
<pre class="lang-py prettyprint-override"><code>wn.synsets('and')
</code></pre>
<p>simply returns <code>[]</code>.</p>
<p>Is there a way to get Synset for stopwords like <code>Synset('and')</code> and I can thus get the path similarity between 'and' and another word?</p>
|
<python><nltk><wordnet>
|
2023-01-19 21:55:58
| 1
| 1,484
|
Zhanwen Chen
|
75,178,696
| 8,696,281
|
Why can Pyarrow read additional index column while Pandas dataframe cannot?
|
<p>I have the following code:</p>
<pre><code>import pandas as pd
import dask.dataframe as da
from pyarrow.parquet import ParquetFile
df = pd.DataFrame([1, 2, 3], columns=["value"])
my_dataset = da.from_pandas(df, chunksize=3)
save_dir = './local/'
my_dataset.to_parquet(save_dir)
pa = ParquetFile("./local/part.0.parquet")
print(pa.schema.names)
df2 = pd.read_parquet("./local/part.0.parquet")
print(df2.columns)
</code></pre>
<p>The output is:</p>
<pre><code>['value', '__null_dask_index__']
Index(['value'], dtype='object')
</code></pre>
<p>Just curious, why did the Pandas <code>DataFrame</code> ignore the <code>__null_dask_index__</code> column name? Or is <code>__null_dask_index__</code> not considered a column?</p>
|
<python><pandas><dask><parquet><pyarrow>
|
2023-01-19 21:55:06
| 1
| 783
|
noobie2023
|
75,178,632
| 4,114,325
|
scikit-learn RandomForestClassifier list all variables of an estimator tree?
|
<p>I train a <code>RandomForestClassifier</code> as</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
X, y = make_classification()
clf = RandomForestClassifier()
clf.fit(X,y)
</code></pre>
<p>where <code>X</code> and <code>y</code> are some feature vectors and labels.</p>
<p>Once the fit is done, I can e.g. list the depth of all trees grown for each estimator in the forest as follows:</p>
<pre><code>[estimator.tree_.max_depth for estimator in clf.estimators_]
</code></pre>
<p>Now I would like to find out all other public variables (apart from <code>max_depth</code>) a <code>tree_</code> within an <code>estimator</code> stores. So I tried:</p>
<pre><code>vars(clf.estimators_[0].tree_)
</code></pre>
<p>but unfortunately this does not work and returns the error</p>
<blockquote>
<p><code>TypeError: vars() argument must have __dict__ attribute</code></p>
</blockquote>
<p>What syntax can I use to successfully list all public variables in a <code>estimator.tree_</code>?</p>
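<code>vars()</code> requires a <code>__dict__</code>, which Cython extension types such as sklearn's <code>Tree</code> don't expose; <code>dir()</code>, by contrast, works on any object. Illustrated with a float, which fails <code>vars()</code> in exactly the same way:

```python
# vars() requires a __dict__; many extension types (floats here, or
# sklearn's Cython Tree) don't have one, but dir() works on any object.
x = 1.0
try:
    vars(x)
    raise AssertionError("unexpected: floats have no __dict__")
except TypeError:
    pass  # same TypeError as vars(clf.estimators_[0].tree_)

public_attrs = [name for name in dir(x) if not name.startswith("_")]
print(public_attrs)
```

Applied to the forest, the same idiom would be <code>[n for n in dir(clf.estimators_[0].tree_) if not n.startswith('_')]</code>, which should surface public attributes such as <code>max_depth</code>, <code>node_count</code>, and <code>children_left</code>.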
|
<python><scikit-learn><random-forest><class-variables>
|
2023-01-19 21:47:58
| 1
| 1,023
|
Kagaratsch
|
75,178,621
| 10,963,057
|
how to get the x-axis in german letter format in plotly
|
<p>i try to get instead of the american letters german letters. As example: May = Mrz; Oct = Okt; Dec = Dez</p>
<pre><code>import plotly.express as px
df = px.data.stocks()
fig = px.line(df, x="date", y=df.columns,
title='custom tick labels')
fig.update_xaxes(dtick="M1", tickformat="%b %y")
fig.show()
</code></pre>
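Plotly's <code>%b</code> names come from a JavaScript locale file, which is awkward to load from Python, so one hedged workaround is to compute explicit <code>tickvals</code>/<code>ticktext</code> with a German lookup table (the abbreviation spellings below are a choice, e.g. "Mrz" vs "Mär"; the years are placeholders for your data's range):

```python
# German month abbreviations, indexed 1..12. The spelling is a choice:
# some style guides prefer "Mär" over "Mrz".
GERMAN_MONTHS = [None, "Jan", "Feb", "Mrz", "Apr", "Mai", "Jun",
                 "Jul", "Aug", "Sep", "Okt", "Nov", "Dez"]


def german_label(year, month):
    """Format like plotly's '%b %y', but with German month names."""
    return f"{GERMAN_MONTHS[month]} {year % 100:02d}"


# One tick per month; pass these to
# fig.update_xaxes(tickvals=tickvals, ticktext=ticktext)
years = (2018, 2019)
tickvals = [f"{y}-{m:02d}-01" for y in years for m in range(1, 13)]
ticktext = [german_label(y, m) for y in years for m in range(1, 13)]
print(ticktext[2], ticktext[21])  # Mrz 18 Okt 19
```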
|
<python><plotly><date-formatting>
|
2023-01-19 21:46:49
| 1
| 1,151
|
Alex
|
75,178,603
| 14,676,485
|
How to loop over all columns and check data distribution using Fitter library?
|
<p>I need to check the data distributions of all the numeric columns in a dataset. I chose the <code>Fitter</code> library to do so. I loop over all columns but get only one plot and summary table as the outcome. What is wrong with my code?</p>
<pre><code>from fitter import Fitter
import numpy as np
df_numeric = df.select_dtypes(include=np.number).sample(n=5000)
num_cols = df_numeric.columns.tolist()
distr = ['cauchy',
'chi2',
'expon',
'exponpow',
'gamma',
'beta',
'lognorm',
'logistic',
'norm',
'powerlaw',
'rayleigh',
'uniform']
for col in num_cols:
modif_col = df_numeric[col].fillna(0).values
dist_fitter = Fitter(modif_col, distributions=distr)
dist_fitter.fit()
dist_fitter.summary()
</code></pre>
<p><a href="https://i.sstatic.net/xfw8U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xfw8U.png" alt="enter image description here" /></a></p>
<p>Maybe there is another approach to check distributions in a loop?</p>
|
<python><statistics><data-science>
|
2023-01-19 21:45:08
| 1
| 911
|
mustafa00
|
75,178,550
| 392,086
|
Multiple gunicorn workers prevents flask app from making https calls
|
<p>A simple flask app accepts requests and then makes calls to https endpoints. Using gunicorn with multiple worker processes leads to ssl failures.</p>
<p>Using <code>flask run</code> works perfectly, albeit slowly.</p>
<p>Using <code>gunicorn --preload --workers 1</code> also works perfectly, albeit slowly.</p>
<p>Changing to <code>gunicorn --preload --workers 10</code> very frequently fails with <code>[SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC]</code> which leads me to think that there's some per-connection state that is being messed up. But, gunicorn is supposed to fork before beginning service of requests.</p>
<p>Ideas?</p>
|
<python><flask><openssl><gunicorn>
|
2023-01-19 21:38:27
| 1
| 1,092
|
MJZ
|
75,178,538
| 19,369,393
|
Why -1//2 = -1 but int(-1/2) = 0?
|
<p>I found that <code>-1 // 2</code> is equal to -1 (Why not 0?), but <code>int(-1 / 2)</code> is equal to 0 (as I expected).
It's not the case with 1 instead of -1, so both <code>1 // 2</code> and <code>int(1 / 2)</code> is equal to 0.</p>
<p>Why are the results different for -1?</p>
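The short version: <code>//</code> floors (rounds toward negative infinity) while <code>int()</code> truncates (rounds toward zero); the two only disagree for negative results. A few checks:

```python
import math

# // floors (rounds toward negative infinity); int() truncates
# (rounds toward zero). They agree for non-negative results.
assert -1 // 2 == math.floor(-0.5) == -1
assert int(-1 / 2) == math.trunc(-0.5) == 0
assert 1 // 2 == int(1 / 2) == 0

# The invariant behind floor division: (a // b) * b + a % b == a,
# with a % b taking the sign of b.
assert (-1 // 2) * 2 + (-1 % 2) == -1
assert -1 % 2 == 1
```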
|
<python>
|
2023-01-19 21:37:03
| 1
| 365
|
g00dds
|
75,178,436
| 12,417,488
|
How to correct MLFlow UI FileNotFoundError: [WinError 2]?
|
<p>I created an MLFlow tracking folder and logged metrics/parameters in the following way:</p>
<pre><code>dir_parent = os.getcwd()
dir_mlflow = 'file:' + os.sep + os.path.join(dir_parent, 'mlflow_test')
mlflow.set_tracking_uri(dir_mlflow)
# log parameters in mlflow
mlflow.log_param('batch_size',args.batch_size)
mlflow.log_param('epochs',args.epochs)
mlflow.log_param('learning_rate',args.learning_rate)
# set up a callback function to track model results in training
class LogMetricsCallback(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
mlflow.log_metric('training_loss', logs['loss'], epoch)
mlflow.log_metric('training_accuracy', logs['accuracy'], epoch)
# get test metrics
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
mlflow.log_metric('test_loss', test_loss)
mlflow.log_metric('test_accuracy', test_acc)
# save model to mlflow
mlflow.keras.log_model(model, artifact_path = 'keras-model')
</code></pre>
<p>This creates a folder named <code>mlflow_test</code> which houses 3 folders: <code>.trash</code>, <code>0</code>, and <code>mlruns</code></p>
<p>How do I read and interact with the contents of this folder? How do I see my runs?</p>
<p>I tried navigating to the folder from the command line and using both the <code>mlflow ui</code> and <code>mlflow ui --host 0.0.0.0</code> commands but I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\mlflow\__main__.py", line 3, in <module>
cli.main()
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\mlflow\cli.py", line 399, in server
_run_server(
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\mlflow\server\__init__.py", line 161, in _run_server
_exec_cmd(full_command, extra_env=env_map, capture_output=False)
File "C:\Users\PJ\AppData\Roaming\Python\Python39\site-packages\mlflow\utils\process.py", line 95, in _exec_cmd
process = subprocess.Popen(
File "C:\Program Files\Python39\lib\subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Program Files\Python39\lib\subprocess.py", line 1420, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>MLFlow version: 2.1.1</p>
<p>Python version: 3.9.5 64bit</p>
<p>Windows 10 OS</p>
|
<python><machine-learning><command-line><mlflow>
|
2023-01-19 21:25:39
| 0
| 663
|
PJ_
|
75,178,435
| 558,619
|
Python: Event-Handler for background task complete
|
<p>I have a <code>_global_variable = Big_Giant_Class()</code>. <code>Big_Giant_Class</code> takes a long time to run, but it also has constantly refreshing 'live-data' behind it, so I always want as new an instance of it as possible. It's not <code>IO-bound</code>, just a load of CPU computations.</p>
<p>Further, my program has a number of functions that reference that <code>global</code> instance of <code>Big_Giant_Class</code>.</p>
<p>I'm trying to figure out a way to create <code>Big_Giant_Class</code> in an endless loop (so I always have the latest-and-greatest!), but without it being blocking to all the other functions that reference <code>_global_variable</code>.</p>
<p>Conceptually, I kind of figure the code would look like:</p>
<pre><code>import time
class Big_Giant_Class():
def __init__(self, val, sleep_me = False):
self.val = val
if sleep_me:
time.sleep(10)
def print_val(self):
print(self.val)
async def run_loop():
while True:
new_instance_value = await asyncio.run(Big_Giant_Class(val = 1)) # <-- takes a while
# somehow assign new_instance_value to _global_variable when its done!
def do_stuff_that_cant_be_blocked():
global _global_variable
return _global_variable.print_val()
_global_variable = Big_Giant_Class(val = 0)
if __name__ == "__main__":
asyncio.run(run_loop()) #<-- maybe I have to do this somewhere?
for i in range(20):
do_stuff_that_cant_be_blocked()
time.sleep(1)
Conceptual Out:
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
1
1
1
1
1
1
1
</code></pre>
<p>The kicker is, I have a number of functions [ie, <code>do_stuff_that_cant_be_blocked</code>] that can't be blocked.</p>
<p>I simply want them to use the last <code>_global_variable</code> value (which gets periodically updated by some unblocking...thing?). Thats why I figure I can't <code>await</code> the results, because that would block the other functions?</p>
<p>Is it possible to do something like that? I've done very little <code>asyncio</code>, so apologies if this is basic. I'm open to any other packages that might be able to do this (although I dont think <code>Trio</code> works, because I have incompatible required packages that are used)</p>
<p>Thanks for any help in advance!</p>
|
<python><async-await><python-asyncio>
|
2023-01-19 21:25:38
| 1
| 3,541
|
keynesiancross
|
75,178,182
| 1,578,210
|
Python NAND function
|
<p>How can I do a logical NAND on two numbers in python? Simple example. Let's say I have a number (0xFF) and I want a logical NAND with a mask value of 0x5.</p>
<pre><code>number = 0xFF = 0b1111 1111
mask = 0x05 = 0b0000 0101
---------------------------
desired= 0xFA = 0b1111 1010
</code></pre>
<p>I'm not reinventing the wheel here, this seems like it should be easily accomplished, but I'm stumped and I cannot find any solutions online. I can loop through the number and do a "not (number & mask)" at each bit position and reassemble the value I want, but that seems like more work than is needed here.</p>
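No bit loop needed: NAND is just <code>~(a & b)</code>, with one wrinkle. Python ints are unbounded, so <code>~</code> alone produces a negative number; AND the result with a width mask to get back into the intended bit width:

```python
number = 0xFF
mask = 0x05

# NAND is NOT(AND). Python ints are unbounded, so ~ alone yields a
# negative number; AND the result with a width mask (8 bits here).
WIDTH_MASK = 0xFF
result = ~(number & mask) & WIDTH_MASK
print(hex(result))  # 0xfa
```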
|
<python><bit-manipulation>
|
2023-01-19 20:54:06
| 1
| 437
|
milnuts
|
75,178,105
| 2,662,901
|
Why is DataFrame int column value sometimes returned as float?
|
<p>I add a calculated column <code>c</code> to a DataFrame that only contains integers.</p>
<pre><code>df = pd.DataFrame(data=list(zip(*[np.random.randint(1,3,5), np.random.random(5)])), columns=['a', 'b'])
df['c'] = np.ceil(df.a/df.b).astype(int)
df.dtypes
</code></pre>
<p>The DataFrame reports that the column type of <code>c</code> is indeed <code>int</code>:</p>
<pre><code>a int64
b float64
c int32
dtype: object
</code></pre>
<p>If I access a value from <code>c</code> like this then I get an int:</p>
<pre><code>df.c.values[0] # Returns "3"
type(df.c.values[0]) # Returns "numpy.int32"
</code></pre>
<p>But if I access the same value using <code>loc</code> I get a float:</p>
<pre><code>df.iloc[0].c # Returns "3.0"
type(df.iloc[0].c) # Returns "numpy.float64"
</code></pre>
<p>Why is this?</p>
<p>I would like to be able to access the value using indexes without having to cast it (again) to an int.</p>
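What seems to be happening: <code>df.iloc[0]</code> materializes the row as a Series spanning mixed dtypes, which upcasts everything to float64. Scalar accessors that go through the column first (<code>df.at[0, 'c']</code> or <code>df.loc[0, 'c']</code>) keep the column's int dtype:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [2], "b": [0.7]})
df["c"] = np.ceil(df.a / df.b).astype(int)  # c is an integer column

row_then_col = df.iloc[0]["c"]  # row Series upcast to float64 -> float
col_then_row = df.at[0, "c"]    # column dtype preserved -> int

print(type(row_then_col).__name__, type(col_then_row).__name__)
```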
|
<python><pandas><dataframe>
|
2023-01-19 20:46:27
| 2
| 3,497
|
feetwet
|
75,178,102
| 3,785,426
|
Python & Selenium: WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1 in google colab
|
<p>I use Selenium in Python(Google Colab) for scraping.</p>
<p>Error occurred even though nothing was changed.</p>
<p>It worked fine yesterday.</p>
<p>Sometimes it works, sometimes it doesn't.</p>
<p>Why does this happen?</p>
<p><strong>ERROR</strong></p>
<pre><code>---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-2-f1882ef81cb2> in <module>
15 options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36')
16
---> 17 driver = webdriver.Chrome('chromedriver',options=options)
18 driver.maximize_window()
19
3 frames
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in assert_process_still_running(self)
117 return_code = self.process.poll()
118 if return_code:
--> 119 raise WebDriverException(f"Service {self.path} unexpectedly exited. Status code was: {return_code}")
120
121 def is_connectable(self) -> bool:
WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1
</code></pre>
<p><strong>CODE</strong></p>
<pre><code>!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
!pip install selenium
</code></pre>
<pre><code>import time
import random
import datetime
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36')
driver = webdriver.Chrome('chromedriver',options=options) #Error here
driver.maximize_window()
#skip the rest
</code></pre>
<p>It would be appreciated if you could give me some hint.</p>
|
<python><selenium><selenium-webdriver><selenium-chromedriver><google-colaboratory>
|
2023-01-19 20:46:25
| 3
| 861
|
SamuraiBlue
|
75,177,922
| 692,658
|
Passing variables to a script over ssh using gcloud command -- all variables treated as a single string?
|
<p>I'm trying to set up a system to run some commands on VMs in Google Cloud; in my case we want to run a tcpdump at a certain time using the 'at' command. Right now I'm just trying to execute any command successfully when I have to pass arguments along with it, and I'm getting confusing behaviour: the command and its arguments appear to be executed as a single long command instead of separate arguments.</p>
<p>I first tried in bash, and thinking my issue was one of quoting, I moved to using python to hopefully make things easier to understand, but I appear to be hitting the same issue and figure I must be doing something wrong.</p>
<p>I have the following functions defined in Python, and call them as follows:</p>
<pre><code>def execute(cmd):
popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
for stdout_line in iter(popen.stdout.readline, ""):
yield stdout_line
popen.stdout.close()
return_code = popen.wait()
if return_code:
raise subprocess.CalledProcessError(return_code, cmd)
def runCapture(project, instance, zone, time, duration):
## Run capture against server
print ("Running capture against Project: " + project + ", Instance: " + instance + ", Zone: " + zone, "at: " + time, "for " + str(duration) + " minutes")
## First connect, schedule capture
## Connect again, schedule upload of capture at capture time + duration time + some overrun.
## gcloud compute ssh --project=${PROJECT} ${INSTANCE} --zone="${ZONE}" --command="...do stuff..." --tunnel-through-iap
## CMD=\${1:-"/usr/sbin/tcpdump -nn -i ens4 -G \$(( ${DURATION}*60 )) -W 1 -w ./\$(uname -n)-%Y-%m-%d_%H.%M.%S.pcap"}
total_time=str(duration*60)
command="/bin/bash -c 'echo \"hello world\"'"
for path in execute(["/usr/bin/gcloud", "compute", "ssh", instance, "--project="+project, "--zone="+zone, "--tunnel-through-iap", "--command=\""+command+"\"", ]):
print(path, end="")
</code></pre>
<p>The resulting errors are as follows:</p>
<pre><code>bash: /bin/bash -c 'echo hello: No such file or directory
Traceback (most recent call last):
File "./ingressCapture.py", line 79, in <module>
results = runCapture(project, instance, zone, time, duration)
File "./ingressCapture.py", line 33, in runCapture
for path in execute(["/usr/bin/gcloud", "compute", "ssh", instance, "--project="+project, "--zone="+zone, "--tunnel-through-iap", "--command=\""+command+"\"", ]):
File "./ingressCapture.py", line 17, in execute
raise subprocess.CalledProcessError(return_code, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcloud', 'compute', 'ssh', 'tbtst-test3-app-egress-nztw', '--project=devops-tb-sandbox-250222', '--zone=europe-west1-b', '--tunnel-through-iap', '--command="/bin/bash -c \'echo "hello world"\'"']' returned non-zero exit status 127.
</code></pre>
<p>It appears to me that instead of invoking the bash shell and running the echo command, it is invoking a command that includes the bash shell and all the arguments too. I have a bash shell when I log in normally via SSH, and can run the commands manually (and they work). Why are the arguments from --command="....." getting combined like this, and how do I prevent it?</p>
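A hedged sketch of the likely cause: with <code>subprocess</code> there is no shell sitting between Python and gcloud, so each list element is passed verbatim. The backslash-escaped quotes added around <code>--command</code> therefore become literal characters of the remote command, which is why the remote shell reports "No such file or directory". Dropping the extra quotes should be enough:

```python
command = "/bin/bash -c 'echo \"hello world\"'"

# There is no shell between Python and gcloud, so manual quoting is
# taken literally: the escaped-quote version makes the remote shell
# look for a program whose name begins with a double-quote character.
wrong = '--command="' + command + '"'
right = '--command=' + command

print(wrong)
print(right)
```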
|
<python><bash><gcloud>
|
2023-01-19 20:28:06
| 1
| 316
|
djsmiley2kStaysInside
|
75,177,824
| 8,372,455
|
how to reshape array to predict with LSTM
|
<p>I made an LSTM model based on <a href="https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/" rel="nofollow noreferrer">this tutorial</a> where the model input batch shape is:</p>
<pre><code>print(config["layers"][0]["config"]["batch_input_shape"])
</code></pre>
<p>returns:</p>
<pre><code>(None, 1, 96)
</code></pre>
<p>Can someone give me a tip on how to change my testing data to this array shape to match the model input batch size?</p>
<pre><code>testday = read_csv('./data.csv', index_col=[0], parse_dates=True)
testday_scaled = scaler.fit_transform(testday.values)
print(testday_scaled.shape)
</code></pre>
<p>returns</p>
<pre><code>(96, 1)
</code></pre>
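Not from the tutorial, but a hedged NumPy sketch under the assumption that the 96 scaled values form the 96 features of a single one-timestep sample, which is what a <code>(None, 1, 96)</code> input shape implies:

```python
import numpy as np

# Stand-in for testday_scaled, which has shape (96, 1)
testday_scaled = np.random.rand(96, 1)

# Flatten the 96 values into one feature vector, then add the batch
# and timestep axes to get (batch=1, timesteps=1, features=96):
batch = testday_scaled.reshape(1, 1, 96)
print(batch.shape)   # (1, 1, 96)

# model.predict(batch) would then accept this array.
```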
|
<python><tensorflow><keras><lstm>
|
2023-01-19 20:18:15
| 1
| 3,564
|
bbartling
|
75,177,795
| 7,796,833
|
Conda install script installs wrong version and then doesn't update
|
<p>Conda tells me that there is an update available, but when I try to update, it doesn't work. I run <code>conda update -n base -c defaults conda</code> and get the following notification:</p>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.10.3
latest version: 22.11.1
Please update conda by running
$ conda update -n base -c defaults conda
# All requested packages already installed.
</code></pre>
<p>I cannot find a GitHub repo for Anaconda. What is the issue here? Running <code>conda --version</code> confirms 4.10.3 is installed, yet I installed Anaconda through the <code>Anaconda3-2022.10-Linux-x86_64.sh</code> installer, which should definitely not ship conda 4.10.3.</p>
|
<python><anaconda><conda><anaconda3>
|
2023-01-19 20:14:43
| 1
| 618
|
Carol Eisen
|
75,177,744
| 557,406
|
Unicode not consistently working with Alembic
|
<p>I have a migration that is running some custom code that depends on unicode characters. I am currently using SQLAlchemy 1.1.9 and Alembic 1.0.2.</p>
<p>I can see my database and table have all the right settings:</p>
<pre><code>mysql> SELECT @@character_set_database, @@collation_database;
+--------------------------+----------------------+
| @@character_set_database | @@collation_database |
+--------------------------+----------------------+
| utf8mb4 | utf8mb4_general_ci |
+--------------------------+----------------------+
</code></pre>
<p>and</p>
<pre><code>mysql> SHOW TABLE STATUS where name like 'mytable';
+---------+-----+--------------------+----------+----------------+---------+
| Name | ... | Collation | Checksum | Create_options | Comment |
+---------+-----+--------------------+----------+----------------+---------+
| mytable | ... | utf8mb4_unicode_ci | NULL | | |
+---------+-----+--------------------+----------+----------------+---------+
</code></pre>
<p>I have inserted a string, <code>Nguyễn Johñ</code> (note that the e and n are both unicode characters). When I have my flask application load the row, it properly loads. But when I run the migration, I see alembic debug logs showing <code>Nguy?n Johñ</code> and my own debug logs printing the same thing.</p>
<p>Why are some Unicode characters converted to a question mark? (Note: testing other characters, I see some rendered in the terminal, some escaped, such as <code>"\xa0"</code>, and others as <code>"?"</code>.)</p>
<p>The following might be significant too.</p>
<ul>
<li>The URL sent to <code>engine = create_engine()</code> has the utf8 charset</li>
<li>I have the following code for running the migration:</li>
</ul>
<pre><code>from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Boolean, Date, Unicode
MyTable = table('mytable',
column('id', Integer),
column('test1', Unicode(collation='utf8mb4_unicode_ci')),
column('test2', Unicode),
)
...
def upgrade():
...
bind = op.get_bind()
session = orm.Session(bind=bind)
rows = session.query(MyTable).all()
print(rows)
</code></pre>
<ul>
<li>The debug logs also show the following, but I am not sure if this is just alembic's own feature detection code:</li>
</ul>
<pre><code>INFO [sqlalchemy.engine.base.Engine] show collation where `Charset` = 'utf8' and `Collation` = 'utf8_bin'
INFO [sqlalchemy.engine.base.Engine] ()
DEBUG [sqlalchemy.engine.base.Engine] Col ('Collation', 'Charset', 'Id', 'Default', 'Compiled', 'Sortlen')
DEBUG [sqlalchemy.engine.base.Engine] Row ('utf8_bin', 'utf8', 83, '', 'Yes', 1)
</code></pre>
|
<python><unicode><sqlalchemy><alembic>
|
2023-01-19 20:09:53
| 1
| 6,395
|
Charles L.
|
75,177,719
| 304,215
|
Python Flask server not running inside electron app
|
<p>I'm working on an Electron app in which I'm running a Python Flask server. The code is shared below. When I run the Electron app with the <code>npm run start</code> command, the app itself works. But whenever I try to access my Flask route, it shows an error like "(failed) net::ERR_CONNECTION_REFUSED".</p>
<p><strong>Electron app folder structure</strong> -</p>
<p><a href="https://i.sstatic.net/8Sc6Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Sc6Z.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/jBlc1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jBlc1.png" alt="enter image description here" /></a></p>
<p><strong>Electron App "main.js"</strong></p>
<pre><code>const { app, BrowserWindow } = require('electron');
const path = require('path');
const { spawn } = require('child_process')
const createWindow = () => {
const win = new BrowserWindow({
width: 800,
height: 600,
webPreferences: {
preload: path.join(__dirname, 'preload.js'),
},
});
win.loadFile('index.html');
};
app.whenReady().then(() => {
spawn("python", ["./flask_server/main.py"])
createWindow();
app.on('activate', () => {
if (BrowserWindow.getAllWindows().length === 0) {
createWindow();
}
});
});
app.on('window-all-closed', () => {
if (process.platform !== 'darwin') {
app.quit();
}
});
</code></pre>
<p><strong>Electron App "index.html"</strong></p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'" />
<meta http-equiv="X-Content-Security-Policy" content="default-src 'self'; script-src 'self'" />
<title>My Test App</title>
</head>
<body>
<h1>Hello from Electron renderer!</h1>
<p>👋</p>
<p id="info"></p>
<br><a href="http://127.0.0.1:5000/">Go</a>
</body>
<script src="./renderer.js"></script>
</html>
</code></pre>
<p><strong>Python Flask route file - "main.py"</strong></p>
<pre><code>import sys
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route("/")
def hello():
return "Hello World from Flask!"
if __name__ == "__main__":
app.run(host='127.0.0.1', port=5000)
</code></pre>
<p>I'm thinking my Flask server is somehow not starting. But I don't know how to see the status of the Flask server inside the Electron app.</p>
<p>I'm using "child_process" to run the Flask server with the following piece of code:</p>
<pre><code>spawn("python", ["./flask_server/main.py"])
</code></pre>
<p>Any idea what's going wrong? I need some help to fix this.</p>
<p>Thanks</p>
|
<python><node.js><flask><electron>
|
2023-01-19 20:06:41
| 1
| 5,999
|
Suresh
|
75,177,494
| 2,878,298
|
parse xlsx file having merged cells using python or pyspark
|
<p>I want to parse an xlsx file. Some of the cells in the file are merged and act as a header for the values underneath.<br />
I don't know which approach I should take to parse the file.</p>
<ol>
<li>Should I convert the file from xlsx to JSON format and then perform the pivoting or transformation of the dataset?
OR</li>
<li>Should I proceed with the xlsx format directly and try to read specific cell values? I believe this approach will not make the code scalable and dynamic.</li>
</ol>
<p>I tried to parse the file and convert it to JSON, but it did not load all the records. Unfortunately, it does not throw any exception.</p>
<pre><code>
from json import dumps
from xlrd import open_workbook
# load excel file
wb = open_workbook('/dbfs/FileStore/tables/filename.xlsx')
# get sheet by using sheet name
sheet = wb.sheet_by_name('Input Format')
# get total rows
total_rows = sheet.nrows
# get total columns
total_columns = sheet.ncols
# convert each row of sheet name in Dictionary and append to list
lst = []
for i in range(0, total_rows):
row = {}
for j in range(0, total_columns):
if i + 1 < total_rows:
column_name = sheet.cell(rowx=0, colx=j)
row_data = sheet.cell_value(rowx=i+1, colx=j)
row.update(
{
column_name.value: row_data
}
)
if len(row):
lst.append(row)
# convert into json
json_data = dumps(lst)
print(json_data)
</code></pre>
<p>After executing the above code I received the following type of output:</p>
<pre><code> {
"Analysis": "M000000000000002001900000000000001562761",
"KPI": "FELIX PARTY.MIX",
"": 2.9969042460942
},
{
"Analysis": "M000000000000002001900000000000001562761",
"KPI": "FRISKIES ESTERILIZADOS",
"": 2.0046260994622
},
</code></pre>
<p>Once the data is in good shape, Spark on Databricks will be used for the transformation.<br />
I tried multiple approaches but failed :(
Hence I'm seeking help from the community.</p>
<p>For more clarity on the question, I have added sample input/output screenshots below.
<strong>Input dataset:</strong>
<a href="https://i.sstatic.net/fzBcQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzBcQ.png" alt="enter image description here" /></a></p>
<p><strong>Expected Output1:</strong><br />
<a href="https://i.sstatic.net/H4571.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H4571.png" alt="enter image description here" /></a></p>
<p><strong>You can download the actual dataset and expected output from the following link</strong>
<a href="https://drive.google.com/drive/folders/1ghHMI03bp-i7E2-epQZQyCfQsLOmYIcs" rel="nofollow noreferrer">Dataset</a></p>
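On the merged-cell point specifically, a small self-contained sketch (toy data, not the actual file): most spreadsheet readers report a merged cell's value once and leave the covered cells empty, so forward-filling the header row recovers a usable flat header before any pivoting.

```python
import pandas as pd

# Toy stand-in: a merged header "Analysis" spans two columns, so the
# reader sees its label once, followed by an empty covered cell.
raw = pd.DataFrame([['Analysis', None, 'KPI'],
                    ['col1', 'col2', 'col3'],
                    ['M0001', 'M0002', 'FELIX PARTY.MIX']])

header = raw.iloc[0].ffill()       # propagate merged labels rightwards
print(header.tolist())             # ['Analysis', 'Analysis', 'KPI']
```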
|
<python><pandas><openpyxl><azure-databricks><xlrd>
|
2023-01-19 19:43:33
| 1
| 1,268
|
venus
|
75,177,101
| 4,133,188
|
Getting pixel coordinates and pixel values in image region bounded by an ellipse using opencv python
|
<p>I would like to draw an arbitrary ellipse on an opencv image in python and then return two arrays: (1) The pixel coordinates of all pixels bounded by the ellipse, both on the ellipse line and inside the ellipse, (2) the pixel values of each of the pixels from array (1).</p>
<p>I looked at this <a href="https://stackoverflow.com/questions/71851682/opencv-get-pixels-on-an-ellipse">answer</a>, but it only considers the points on the ellipse contour and not the region inside.</p>
|
<python><opencv>
|
2023-01-19 19:00:32
| 1
| 771
|
BeginnersMindTruly
|
75,177,041
| 11,666,502
|
How to find the longest continuous stretch of matching elements in 2 lists
|
<p>I have 2 lists:</p>
<pre><code>a = [
'Okay. ',
'Yeah. ',
'So ',
'my ',
'thinking ',
'is, ',
'so ',
'when ',
"it's ",
'set ',
'up ',
'just ',
'one ',
'and ',
"we're ",
'like ',
'next ',
'to ',
'each ',
'other '
]
b = [
'Okay. ',
'Yeah. ',
'Everything ',
'as ',
'normal ',
'as ',
'possible. ',
'Yeah. ',
'Yeah. ',
'Okay. ',
'Is ',
'that ',
'better? ',
'Yeah. ',
'So ',
'my ',
'thinking ',
'is, ',
'so ',
'when '
]
</code></pre>
<p>Each list is slightly different. However, there will be moments when a stretch of continuous elements in <code>a</code> will match a stretch of continuous elements in <code>b</code>.</p>
<p>For example:</p>
<p>The first 2 elements in both lists match. The matching list would be <code>['Okay.', 'Yeah.']</code>. This is only 2 elements long.</p>
<p>There is a longer stretch of matching words. You can see that each contains the following continuous set:
<code>['Yeah. ','So ','my ','thinking ','is, ','so ','when '] </code>
This continuous matching sequence has 7 elements. This is the longest sequence.</p>
<p>I want the index of where this sequence starts for each list. For <code>a</code>, this should be 1 and for <code>b</code> this should be 13.</p>
<p>I understand that I can generate every possible contiguous sequence in <code>a</code>, starting with the longest, and check for a match in <code>b</code>, stopping once I get a match. However, this seems inefficient.</p>
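The standard library already covers this case: <code>difflib.SequenceMatcher.find_longest_match</code> returns the start index in each list plus the length of the longest contiguous match. A sketch on shortened versions of the lists above:

```python
from difflib import SequenceMatcher

a = ['Okay. ', 'Yeah. ', 'So ', 'my ', 'thinking ', 'is, ', 'so ', 'when ']
b = ['Okay. ', 'Yeah. ', 'Everything ', 'Yeah. ', 'So ', 'my ',
     'thinking ', 'is, ', 'so ', 'when ']

# autojunk=False disables the popular-element heuristic, which can
# otherwise skip frequent items in long sequences.
m = SequenceMatcher(None, a, b, autojunk=False).find_longest_match(
    0, len(a), 0, len(b))
print(m.a, m.b, m.size)          # start in a, start in b, match length
print(a[m.a:m.a + m.size])
```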
|
<python><list>
|
2023-01-19 18:54:24
| 2
| 1,689
|
connor449
|
75,176,951
| 15,171,387
|
Python add weights associated with values of a column
|
<p>I am working with an extremely large dataframe. Here is a sample:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'ID': ['A', 'A', 'A', 'X', 'X', 'Y'],
})
ID
0 A
1 A
2 A
3 X
4 X
5 Y
</code></pre>
<p>Now, given the frequency of each value in column '''ID''', I want to calculate a weight using the function below and add a column that has the weight associated with each value in '''ID'''.</p>
<pre><code>def get_weights_inverse_num_of_samples(label_counts, power=1.):
no_of_classes = len(label_counts)
weights_for_samples = 1.0/np.power(np.array(label_counts), power)
weights_for_samples = weights_for_samples/ np.sum(weights_for_samples)*no_of_classes
return weights_for_samples
freq = df.value_counts()
print(freq)
ID
A 3
X 2
Y 1
weights = get_weights_inverse_num_of_samples(freq)
print(weights)
[0.54545455 0.81818182 1.63636364]
</code></pre>
<p>So, I am looking for an efficient way to get a dataframe like this given the above weights:</p>
<pre><code> ID sample_weight
0 A 0.54545455
1 A 0.54545455
2 A 0.54545455
3 X 0.81818182
4 X 0.81818182
5 Y 1.63636364
</code></pre>
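One efficient way (a sketch reusing the weight formula from the question): keep the weights as a Series indexed by class label and broadcast them back onto the rows with <code>Series.map</code>, which is vectorized.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': ['A', 'A', 'A', 'X', 'X', 'Y']})

freq = df['ID'].value_counts()                     # A: 3, X: 2, Y: 1
inv = 1.0 / freq.to_numpy(dtype=float)
weights = pd.Series(inv / inv.sum() * len(freq), index=freq.index)

# map() looks each row's ID up in the label-indexed weights Series:
df['sample_weight'] = df['ID'].map(weights)
print(df)
```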
|
<python><pandas><dataframe>
|
2023-01-19 18:45:17
| 2
| 651
|
armin
|
75,176,912
| 20,589,275
|
How to split a number into three sets
|
<p>I have got a number, like 5, and I need to split the numbers from 1 to n into 3 sets with equal sums, printing each set's size followed by its elements, like:</p>
<pre><code>2
1 4
2
2 3
1
5
</code></pre>
<p>Or the number 8:</p>
<pre><code>2
8 4
2
7 5
4
1 2 3 6
</code></pre>
<p>I tried:</p>
<pre><code>from bisect import bisect_right  # this import was missing

def partition(n):
    if n < 5:
        return -1
    s = n * (n + 1) // 2
    if s % 3 != 0:
        return -1
    s //= 3
lst, result = [i for i in range(1, n + 1)], []
for _ in range(2):
subset, s_current = [], s
while s_current > 0:
idx_max = bisect_right(lst, s_current) - 1
subset.append(lst[idx_max])
s_current -= lst[idx_max]
lst.pop(idx_max)
result.append(subset)
result.append(lst)
return result
</code></pre>
<p>If it can't make 3 sets, it should return -1.
But it doesn't do what I want.
Please help.</p>
|
<python>
|
2023-01-19 18:42:56
| 0
| 650
|
Proger228
|
75,176,822
| 11,616,106
|
Write changing text to new pdf
|
<p>I'm trying to open abc.pdf, find google.com, and replace it with the input words. The text changes, but when I write the new text to output.pdf it stays the same as abc.pdf. How can I solve this?</p>
<pre><code>import PyPDF2
import fitz
from PyPDF2 import PdfReader, PdfWriter
import requests
# # Replace with the URL of the PDF you want to download
# pdf_url = input("Enter the URL of the pdf file to download: ")
#
# # Replace with the link you want to replace the original links with
new_link = input("Enter the link you want to replace the original links with: ")
#
# # Download the PDF file
# response = requests.get(pdf_url)
# with open("abc.pdf", "wb") as f:
# f.write(response.content)
with open('abc.pdf', 'rb') as file:
reader = PyPDF2.PdfReader(file)
writer = PyPDF2.PdfWriter()
# pdf dosyasının tüm sayfalarını oku
for page in range(len(reader.pages)):
text = reader.pages[page].extract_text()
print(text)
# aranacak stringi bul
if "google.com" in text:
# stringi değiştir
text = text.replace("google.com", new_link)
# pdf dosyasını yeniden yaz
print(text)
writer.add_page(reader.pages[page])
with open('output.pdf', 'wb') as output:
writer.write(output)
file.close()
output.close()
</code></pre>
<p>I also tried with fitz, but when I search the links they have to start with http://, so I couldn't change a link like google.com.</p>
<pre><code>import fitz
import requests
# Replace with the URL of the PDF you want to download
pdf_url = input("Enter the URL of the pdf file to download: ")
# Replace with the link you want to replace the original links with
new_link = input("Enter the link you want to replace the original links with: ")
old_link = input("Enter the link you want to replace ")
# Download the PDF file
response = requests.get(pdf_url)
with open("file.pdf", "wb") as f:
f.write(response.content)
# Open the PDF and modify the links
pdf_doc = fitz.open("file.pdf")
for page in pdf_doc:
for link in page.links():
print(link)
if "uri" in link and link["uri"] == old_link:
print("Found one")
link["uri"] = new_link
# Save the modified PDF to the desktop
pdf_doc.save("test2.pdf")
pdf_doc.close()
</code></pre>
<p>And another attempt:</p>
<pre><code>import PyPDF2
import fitz
from PyPDF2 import PdfReader, PdfWriter
import requests
# # Replace with the URL of the PDF you want to download
# pdf_url = input("Enter the URL of the pdf file to download: ")
#
# # Replace with the link you want to replace the original links with
new_link = input("Enter the link you want to replace the original links with: ")
#
# # Download the PDF file
# response = requests.get(pdf_url)
# with open("abc.pdf", "wb") as f:
# f.write(response.content)
# Open the original PDF file
# with open('abc.pdf', 'rb') as file:
doc = fitz.open('abc.pdf')
print(doc)
p = fitz.Point(50, 72) # start point of 1st line
for page in doc:
print(page)
text = page.get_text()
text = text.replace("google.com", new_link).encode("utf8")
rc = page.insert_text(p, # bottom-left of 1st char
text, # the text (honors '\n')
fontname="helv", # the default font
fontsize=11, # the default font size
rotate=0, # also available: 90, 180, 270
) # print(text)
# page.set_text(text)
# doc.insert_pdf(text,to_page=0)
doc.save("output.pdf")
doc.close()
</code></pre>
|
<python>
|
2023-01-19 18:33:44
| 0
| 521
|
hobik
|
75,176,798
| 107,832
|
Is there a good way to use C# (.NET Framework) and Python code together?
|
<p>We have a solution written in C#/.NET Framework 4.7. It has a lot of infrastructure code related to environment configurations, database access, logging, exception handling etc.
Our co-workers are eager to contribute to the project with Python code that performs a lot of specialized calculations. Ideally we want to pass configuration plus a large amount of input data to their code and get back a large amount of results without resorting to database integration. Is there a viable way to do so? The main goals are: 1) not to rewrite the Python code in C#, and 2) not to duplicate the configuration/database-related code in Python, to make future maintenance easier.</p>
|
<python><c#><.net><integration>
|
2023-01-19 18:31:01
| 2
| 547
|
Dmitry Duginov
|
75,176,745
| 3,352,254
|
Pandas assign series to another Series based on index
|
<p>I have three Pandas Dataframes:</p>
<p><code>df1</code>:</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
</code></pre>
<p><code>df2</code>:</p>
<pre><code>0 1
3 7
6 5
9 2
</code></pre>
<p><code>df3</code>:</p>
<pre><code>1 2
4 6
7 6
</code></pre>
<p>My goal is to assign the values of <code>df2</code> and <code>df3</code> to <code>df1</code> based on the index.
<code>df1</code> should then become:</p>
<pre><code>0 1
1 2
2 NaN
3 7
4 6
5 NaN
6 5
7 6
8 NaN
9 2
</code></pre>
<p>I tried simple assignment:</p>
<pre><code>df1.loc[df2.index] = df2.values
</code></pre>
<p>or</p>
<pre><code>df1.loc[df2.index] = df2
</code></pre>
<p>but this gives me a ValueError:
<code>ValueError: Must have equal len keys and value when setting with an iterable</code></p>
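A hedged sketch (the column name <code>0</code> below is an assumption, since the question doesn't show the frames' column names): <code>DataFrame.update</code> aligns on the index and overwrites only the rows that match, which sidesteps the length mismatch raised by positional assignment.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({0: [np.nan] * 10})
df2 = pd.DataFrame({0: [1, 7, 5, 2]}, index=[0, 3, 6, 9])
df3 = pd.DataFrame({0: [2, 6, 6]}, index=[1, 4, 7])

# update() aligns on index labels, so only rows 0,3,6,9 and 1,4,7
# change; everything else stays NaN.
df1.update(df2)
df1.update(df3)
print(df1)
```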
<p>Thanks for your help!</p>
|
<python><pandas><indexing><assign>
|
2023-01-19 18:26:20
| 1
| 825
|
smaica
|
75,176,489
| 12,224,591
|
Converting Python List to .NET IEnumerable?
|
<p>I'm attempting to call a C# function from a Python script, via the <code>clr</code> module from the <code>PythonNet</code> library.</p>
<p>One of the arguments that this C# function takes is of the type <code>System.Collections.Generic.IEnumerable</code>. Simply supplying a list of the required data types to the first argument results in a <code>'list' value cannot be converted to 'System.Collections.Generic.IEnumerable'</code> error.</p>
<p>From searching online, it seems the relevant .NET data types should already be available from my Python installation, and I should be able to access them. However, I'm unsure how.</p>
<p>Doing:</p>
<pre><code>from System.Collections.Generic import *
</code></pre>
<p>Fails because the module isn't found, while doing:</p>
<pre><code>import collections
</code></pre>
<p>Doesn't have a namespace <code>IEnumerable</code>.</p>
<p>How would I go about converting a Python list to the <code>System.Collections.Generic.IEnumerable</code> data type?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
|
<python><ienumerable><python.net>
|
2023-01-19 18:02:52
| 2
| 705
|
Runsva
|
75,176,315
| 18,183,907
|
how to skip backslash followed by integer?
|
<p>I have this regex: <a href="https://regex101.com/r/2H5ew6/1" rel="nofollow noreferrer">https://regex101.com/r/2H5ew6/1</a></p>
<pre><code>(\!|\@)(1)
Hello!1 World
</code></pre>
<p>I want to capture the first mark (! or @) and change the number <code>1</code> to another number, <code>2</code>.
I tried:</p>
<pre><code>{\1}2_
\1\\2_
</code></pre>
<p>but it adds extra text, and I just want to change the number.</p>
<p>I expect the result to be:</p>
<pre><code>Hello!2_World
</code></pre>
<p>and if using @:</p>
<pre><code>Hello@2_World
</code></pre>
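A sketch with Python's <code>re</code> (assuming, from the expected output, that the space after the digit should also become an underscore): the replacement <code>\g<1></code> keeps whichever mark was captured, and writing the group reference in this unambiguous form avoids <code>\12</code> being parsed as a reference to group 12, which is where the "extra text" surprises come from.

```python
import re

pattern = r'([!@])1 '

# \g<1> re-inserts the captured mark, then the literal "2_" follows.
print(re.sub(pattern, r'\g<1>2_', 'Hello!1 World'))   # Hello!2_World
print(re.sub(pattern, r'\g<1>2_', 'Hello@1 World'))   # Hello@2_World
```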
|
<python><regex>
|
2023-01-19 17:47:17
| 4
| 487
|
yvgwxgtyowvaiqndwo
|