| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
4,800
| 33,137,764
|
Dynamically adding dictionary values based on row count from pandas dataframe
|
<p>I am rewriting some code of mine and feel there must be a better, more dynamic way to do the below. Currently, as you can see, I am creating a condition based directly on the row count and adding values from there. However, I don't want to have to make static conditions for multiple values (<code>if row_count == 3:</code>, <code>if row_count == 4:</code>, etc.). I'm positive there must be a more efficient way to achieve this. Any pointers would be appreciated.</p>
<pre><code>for root, dirs, files in os.walk(main):
    filters = '*specificname*.csv'
    for filename in fnmatch.filter(files, filters):
        df = pd.read_csv(os.path.join(root, filename), error_bad_lines=False)
        row_count = len(df.index)
        device_dic = collections.defaultdict()
        if row_count == 2:
            device_dic[df.iloc[0][1]] = {}
            device_dic[df.iloc[0][1]]['item1'] = df.iloc[0][2]
            device_dic[df.iloc[0][1]]['item2'] = df.iloc[0][3]
            device_dic[df.iloc[1][1]] = {}
            device_dic[df.iloc[1][1]]['item1'] = df.iloc[1][2]
            device_dic[df.iloc[1][1]]['item2'] = df.iloc[1][3]
            for key in device_dic.iterkeys():
                device.append(key)
</code></pre>
|
<pre><code>def func1(device_dict):
    device_dict[df.iloc[0][1]] = {}
    device_dict[df.iloc[0][1]]['item1'] = df.iloc[0][2]
    device_dict[df.iloc[0][1]]['item2'] = df.iloc[0][3]
    device_dict[df.iloc[1][1]] = {}
    device_dict[df.iloc[1][1]]['item1'] = df.iloc[1][2]
    device_dict[df.iloc[1][1]]['item2'] = df.iloc[1][3]
    for key in device_dict.iterkeys():
        device.append(key)
    # Or whatever you want to return
    return device

def func2(device_dict):
    # your code here
    pass

# Store each function in a dict
process_map = {2: func1, 3: func2, 4: func2, ...}

for root, dirs, files in os.walk(main):
    filters = '*specificname*.csv'
    for filename in fnmatch.filter(files, filters):
        df = pd.read_csv(os.path.join(root, filename), error_bad_lines=False)
        row_count = len(df.index)
        device_dic = collections.defaultdict()
        # Could also use get() to provide a default processing func
        process_func = process_map[row_count]
        result = process_func(device_dic)
</code></pre>
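<p>A different angle (a sketch, assuming the CSV layout from the question, i.e. the key in column 1 and the two items in columns 2 and 3): you can drop the per-row-count branching entirely by looping over however many rows the file has.</p>
<pre><code># Hypothetical generalisation: build the dict for any row count
device_dic = {}
for i in range(len(df.index)):
    key = df.iloc[i, 1]
    device_dic[key] = {
        'item1': df.iloc[i, 2],
        'item2': df.iloc[i, 3],
    }
device = list(device_dic.keys())
</code></pre>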
|
python|pandas
| 0
|
4,801
| 66,366,889
|
How to change all values to the left of a particular cell in every row
|
<p>I have an array which contains 1's and 0's. A very small section of it looks like this:</p>
<pre><code>arr=[[0,0,0,0,1],
[0,0,1,0,0],
[0,1,0,0,0],
[1,0,1,0,0]]
</code></pre>
<p>I want to change the value of every cell to 1, if it is to the left of a cell with a value of 1. I want all other cells to keep their value of 0, i.e:</p>
<pre><code>arrOut=[[1,1,1,1,1],
        [1,1,1,0,0],
        [1,1,0,0,0],
        [1,1,1,0,0]]
</code></pre>
<p>Some rows have >1 cell with a value =1.</p>
<p>I have managed to do this using a very ugly double for-loop:</p>
<pre><code>for i in range(len(arr)):
    for j in range(len(arr[i])):
        if arr[i][j]==1:
            arrOut[i][0:j]=1
</code></pre>
<p>Does anyone know of another way to do this without using for loops? I'm relatively comfortable with numpy and pandas, but also open to other libraries.</p>
<p>Thanks!</p>
|
<p>You can flip the array and use <code>np.cumsum</code>:</p>
<pre><code>>>> arr[:, ::-1].cumsum(axis=1)[:, ::-1]
array([[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 0, 0]], dtype=int32)
</code></pre>
<p>Or the same using <code>np.fliplr</code>,</p>
<pre><code>>>> np.fliplr(np.fliplr(arr).cumsum(axis=1))
array([[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 0, 0]], dtype=int32)
</code></pre>
<p>Using <code>np.where</code>:</p>
<pre><code>>>> np.where(arr.cumsum(1)==0, 1, arr)
array([[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 0, 0]], dtype=int32)
</code></pre>
<p>If array has more than one <code>1</code>, use <code>np.clip</code>:</p>
<pre><code>>>> arr
array([[0, 0, 0, 0, 1],
[0, 0, 1, 0, 0],
[0, 1, 0, 1, 0]])
>>> np.clip(arr[:, ::-1].cumsum(axis=1)[:, ::-1], 0, 1)
array([[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 1, 1, 0]], dtype=int32)
# If you want to make all 0s before the leftmost 1 to 1:
>>> np.where(arr.cumsum(1)==0, 1, arr)
array([[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 1, 0]])
</code></pre>
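<p>For instance (a small sketch, assuming <code>arr</code> is already a NumPy array), the reversed cumulative sum can also be turned straight into 0/1 output without <code>np.clip</code> by comparing against zero:</p>
<pre><code>import numpy as np

arr = np.array([[0, 0, 0, 0, 1],
                [0, 0, 1, 0, 0],
                [0, 1, 0, 1, 0]])

# True wherever a 1 appears at, or to the right of, the cell
out = (arr[:, ::-1].cumsum(axis=1)[:, ::-1] > 0).astype(int)
print(out)
# [[1 1 1 1 1]
#  [1 1 1 0 0]
#  [1 1 1 1 0]]
</code></pre>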
|
python|numpy
| 3
|
4,802
| 66,588,756
|
Is there a better way to create a Multi Index with columns preceding the multi-index data?
|
<p><a href="https://i.stack.imgur.com/o3482.png" rel="nofollow noreferrer">This</a> is my current output and what I'd like to improve.</p>
<p>Here is the code:</p>
<pre><code>df = pd.DataFrame(np.random.rand(8, 3),
index=[['Fund Name', 'Jerry Partners','', '', 'Fund Name','Boris LTD','',''],
['$Bln AUM','2Bln','', '', '$Bln AUM','6Bln','',''],
['Count', '21', ' ', ' ','Count','11', ' ', ' '],
['ticker1', 'ticker2', 'ticker3', 'ticker4', 'ticker1', 'ticker2', 'ticker3', 'ticker4']],
columns=['%Own','Purchase Price','Tot Value'])
df
</code></pre>
<p>Is there any other way to place regular columns before the multi-index?
I'd rather not repeat "Fund Name", "$Bln AUM", etc. as well as complicate my DataFrame construction.</p>
<p>EDIT:</p>
<p>Here is some more information on the data that I'm wrangling. I hope this is sufficient. I have a collection of 71 funds and the tickers of their respective 10 largest investments.</p>
<p>Some of these funds have less than 10 investments, which is where I believe things get complicated.</p>
<p>I also have the assets under management of the fund, and for each ticker I have the amount owned, and the price at which the stock was purchased.
Given this information, I have a dictionary where Keys are Fund Names and Values are a List of the Tickers, like so:</p>
<p><code>{'AKO Capital': [array(['LIN', 'BKNG', 'EBAY', 'V', 'EL', 'GOOG', 'NKE', 'RACE', 'OTIS','PG'], dtype=object)], 'Ackman Trust': [array(['BRK.B', 'WM', 'CAT', 'CNI', 'WMT', 'ECL', 'CCI', 'FDX', 'UPS','SDGR'], dtype=object)]}</code>
And so forth. The % owned and price at purchase are in separate arrays. What I would like to create here is the following multi index (dots denote in-between rows):</p>
<pre><code>Manager |Ticker |% Owned
AKO Capital|LIN |25%
|BKNG |11%
|EBAY |13%
...........
|OTIS |5%
|PG |3.5%
AckmanTrust|BRK.B |5%
|WM |15%
|CAT |12%
............
|UPS |5%
|SGDR |7%
</code></pre>
<p>I would be happy to just have the Manager/Ticker levels working and figure out the rest myself.</p>
<p>Thank you.</p>
|
<p>Your data is in a good form to get into a dataframe with the right indices.</p>
<pre><code>import pandas as pd
managers_tickers = {
"AKO Capital": {
"LIN": 0.25,
"BKNG": 0.11,
"EBAY": 0.13,
"OTIS": 0.05,
"PG": 0.035,
},
"Ackman Trust": {
"BRK.B": 0.05,
"WM": 0.15,
"CAT": 0.12,
"UPS": 0.05,
"SDGR": 0.07,
},
}
df = pd.DataFrame.from_dict(managers_tickers, orient="index").stack()
print(df)
</code></pre>
<p>The above prints:</p>
<pre><code>AKO Capital LIN 0.250
BKNG 0.110
EBAY 0.130
OTIS 0.050
PG 0.035
Ackman Trust BRK.B 0.050
WM 0.150
CAT 0.120
UPS 0.050
SDGR 0.070
dtype: float64
</code></pre>
<p>You can get a single entry with:</p>
<pre><code>df["Ackman Trust"]["WM"]
# prints 0.15
</code></pre>
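<p>If your data is still in the form shown in the question (fund name mapped to an array of tickers, with the % owned held in separate arrays in the same order), a small sketch of getting it into that nested-dict shape first might look like this (the <code>pct_owned</code> structure below is a hypothetical stand-in for however you actually store those values):</p>
<pre><code>import numpy as np
import pandas as pd

tickers = {
    "AKO Capital": [np.array(["LIN", "BKNG", "EBAY"], dtype=object)],
    "Ackman Trust": [np.array(["BRK.B", "WM", "CAT"], dtype=object)],
}
# Hypothetical: % owned arrays aligned with the ticker arrays above
pct_owned = {
    "AKO Capital": [np.array([0.25, 0.11, 0.13])],
    "Ackman Trust": [np.array([0.05, 0.15, 0.12])],
}

# zip each fund's tickers with its percentages to build the nested dict
nested = {
    fund: dict(zip(tickers[fund][0], pct_owned[fund][0]))
    for fund in tickers
}
df = pd.DataFrame.from_dict(nested, orient="index").stack()
print(df)
</code></pre>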
|
python|pandas|dataframe|multi-index
| 1
|
4,803
| 66,749,664
|
Speeding up numpy
|
<p>Is there a way to speed up the following code snippet? This is a function that accepts lidar points and converts them to a Range View image. Any suggestions would be appreciated. I tried using numba but didn't get much improvement.</p>
<pre><code>def lidar_rv_projection(points, proj_H=32, proj_W=2048, proj_fov_up=10, proj_fov_down=-30.0):
    v_fov_up = proj_fov_up / 180.0 * np.pi
    v_fov_down = proj_fov_down / 180.0 * np.pi
    v_fov_total = abs(v_fov_down) + abs(v_fov_up)

    depth = np.linalg.norm(points[:, :3], 2, axis=1)
    x_points = points[:, 0]
    y_points = points[:, 1]
    z_points = points[:, 2]

    x_img = np.arctan2(y_points, x_points) * -1
    y_img = np.arcsin(z_points / depth)

    proj_x = 0.5 * (x_img / np.pi + 1.0)
    proj_y = 1.0 + (y_img + abs(v_fov_down)) * -1 / v_fov_total
    proj_x *= proj_W
    proj_y *= proj_H

    proj_x = np.floor(proj_x)
    proj_x = np.minimum(proj_W - 1, proj_x)
    proj_x = np.maximum(0, proj_x).astype(np.int32)  # in [0,W-1]
    proj_y = np.floor(proj_y)
    proj_y = np.minimum(proj_H - 1, proj_y)
    proj_y = np.maximum(0, proj_y).astype(np.int32)  # in [0,H-1]

    order = np.argsort(depth)[::-1]
    depth = depth[order]
    points = points[order]
    proj_y = proj_y[order]
    proj_x = proj_x[order]

    proj_rv_img = np.full((4, proj_H, proj_W), -1, dtype=np.float64)
    proj_rv_img[0, proj_y, proj_x] = depth         # range
    proj_rv_img[1, proj_y, proj_x] = points[:, 2]  # height z
    proj_rv_img[2, proj_y, proj_x] = points[:, 3]  # intensity r
    proj_rv_img[3, proj_y, proj_x] = 1             # binary mask
    return proj_rv_img, proj_x, proj_y, points
</code></pre>
|
<p>If you're considering other numerical packages, using <code>torch</code> or <code>tensorflow</code> (especially if you have access to a GPU) may help substantially. Fortunately, most <code>torch</code> and <code>tensorflow</code> functions are implemented in a similar way to <code>numpy</code>, so you probably wouldn't have to change too many functions.</p>
<p>Switching to <code>torch</code> or <code>tensorflow</code> will likely help with speeding up these operations if your dataset is considerably large, e.g. > 5000 points (and, since you're using lidar returns, I'm guessing somewhere between 3 and 6 dimensions).</p>
<p>Additionally, I noticed that you cast some variables to <code>np.int32</code> and <code>np.float32</code>. If it is possible to cast this to a condensed representation, this may help as well. E.g.</p>
<ol>
<li><code>np.int16</code>: Limited to [-32768, 32767]. 16-bit signed representation.</li>
<li><code>np.uint16</code>: Limited to [0, 65535]. 16-bit unsigned representation.</li>
<li><code>np.int8</code>: Limited to [-128, 127]. 8-bit signed representation.</li>
<li><code>np.uint8</code>: Limited to [0, 255]. 8-bit unsigned representation</li>
</ol>
<p>You can also try reducing the precision of your floating-point arithmetic, e.g. in half-precision (<code>np.float16</code>) rather than single-precision (<code>np.float32</code>). Note that this will lead to an increase in errors, though the degree depends on the scale of the data and the operations performed. For more information on NumPy data types, please see this link <a href="https://numpy.org/doc/stable/user/basics.types.html" rel="nofollow noreferrer">here</a>.</p>
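<p>As a rough sketch of the memory impact (this assumes float32 precision is acceptable for your range image, which you would want to verify against your data):</p>
<pre><code>import numpy as np

proj_H, proj_W = 32, 2048

# float32 instead of float64 halves the memory for the range image
rv64 = np.full((4, proj_H, proj_W), -1, dtype=np.float64)
rv32 = np.full((4, proj_H, proj_W), -1, dtype=np.float32)
print(rv64.nbytes, rv32.nbytes)      # 2097152 vs 1048576 bytes

# pixel indices up to 2047 fit comfortably in uint16
idx = np.array([0, 1023, 2047])
print(idx.astype(np.uint16).dtype)   # uint16
</code></pre>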
|
python|performance|numpy|time|numba
| 0
|
4,804
| 57,516,662
|
ValueError: Error when checking input: expected input to have 4 dimensions, but got array with shape (859307, 1)
|
<p>I'm creating a convolutional autoencoder that takes in 16x16 images but I keep getting the following error: </p>
<pre><code>Traceback (most recent call last):
File "WTApruning.py", line 69, in <module>
validation_data=(x_test, x_test))
File "/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 709, in fit
shuffle=shuffle)
File "/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2651, in _standardize_user_data
exception_prefix='input')
File "/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py", line 376, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input to have 4 dimensions, but got array with shape (859307, 1)
</code></pre>
<p>From other stack overflow posts such as this <a href="https://stackoverflow.com/questions/57222126/error-when-checking-input-expected-conv2d-6-input-to-have-4-dimensions-but-got">one</a>, it seems like I need to add another dimension for the colour channel but what would the other dimension I add be? </p>
<p>Code</p>
<pre><code>path = "..."
CATEGORIES = ["x_train", "x_test"]
count = 0
data = []
x_test, x_train = [], []

for img in os.listdir(path):
    img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
    data.append(img_array)

x_train, x_test = train_test_split(data, test_size = 0.1)
x_train, x_test = train_test_split(data, test_size = 0.1)
x_train = np.array(x_train)
x_test = np.array(x_test)

# just updated
x_train = x_train.reshape(x_train,(len(x_train),16,16,1))
x_test = x_test.reshape(x_test,(len(x_test),16,16,1))

# ENCODER
encoder_img = tf.keras.layers.Input(shape=(16,16,1), name="input")
x = tf.keras.layers.Conv2D(1024, 1, activation='relu', kernel_initializer=keras.initializers.RandomUniform)(encoder_img)
x = tf.keras.layers.MaxPooling2D(1)(x)
x = tf.keras.layers.Conv2D(512, 1, activation='relu')(x)
x = tf.keras.layers.MaxPooling2D(1)(x)
encoder_output = tf.keras.layers.Conv2D(256, 3, activation='relu')(x)

# DECODER
x = tf.keras.layers.Conv2DTranspose(512, 1, activation='relu')(encoder_output)
x = tf.keras.layers.UpSampling2D(1)(x)
x = tf.keras.layers.Conv2DTranspose(1024, 1, activation='relu')(x)
x = tf.keras.layers.UpSampling2D(1)(x)
decoder_output = tf.keras.layers.Conv2DTranspose(1, 3, activation='relu')(x)

# COMPILE
autoencoder = tf.keras.Model(inputs=encoder_img, outputs=decoder_output, name='autoencoder')
autoencoder.summary()
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

autoencoder.fit(x_train, x_train,
                epochs=20,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test))

decoded_imgs = autoencoder.predict(x_test)
</code></pre>
<p>New Error after reshaping is added:</p>
<pre><code>Traceback (most recent call last):
File "WTApruning.py", line 43, in <module>
x_train = x_train.reshape(x_train,(len(x_train),16,16,1))
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>Error without reshaping:</p>
<pre><code>Traceback (most recent call last):
File "WTApruning.py", line 68, in <module>
validation_data=(x_test, x_test))
File "/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 709, in fit
shuffle=shuffle)
File "/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2651, in _standardize_user_data
exception_prefix='input')
"/PycharmProjects/predictivemodel/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py", line 376, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input to have 4 dimensions, but got array with shape (859307, 1)
</code></pre>
|
<p>Your input <code>x_train</code> is not a 4D input. You should reshape it before feeding it into the network. Best.</p>
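<p>A quick sketch of how that reshape might look (this assumes every loaded image really is a single-channel 16x16 array; the TypeError in your edit also suggests the current call passes the array itself as the first argument instead of the new shape):</p>
<pre><code>import numpy as np

x_train = np.asarray(x_train)                       # e.g. shape (N, 16, 16)
x_train = x_train.reshape(len(x_train), 16, 16, 1)  # add the channel dimension
x_test = np.asarray(x_test)
x_test = x_test.reshape(len(x_test), 16, 16, 1)
</code></pre>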
|
python|tensorflow|keras|conv-neural-network|dimension
| 1
|
4,805
| 43,661,189
|
How to get count from groupby operation into new column with Python Pandas?
|
<p>I am trying to figure out how to count all unique barcodes (2 in this case) in this groupby operation. Then I would like to write the count value into a new column into my dataframe. I am banging my head against the wall, trying all kinds of things without success so far. Any help is greatly appreciated.</p>
<pre><code>parcelno barcode product
01565115935496 1234567890123 DPD CLASSIC NP (Europa) count 1
unique 1
top 1234567890123
freq 1
Dieselzuschlag count 1
unique 1
top 1234567890123
freq 1
Maut count 1
unique 1
top 1234567890123
freq 1
Sicherheitsgebuhr count 1
unique 1
top 1234567890123
freq 1
Verzollungsabwicklung count 1
unique 1
top 1234567890123
freq 1
0987654321097 DPD CLASSIC NP (Europa) count 1
unique 1
top 0987654321097
freq 1
Dieselzuschlag count 1
unique 1
top 0987654321097
freq 1
Maut count 1
unique 1
top 0987654321097
freq 1
Sicherheitsgebuhr count 1
unique 1
top 0987654321097
freq 1
Verzollungsabwicklung count 1
unique 1
top 0987654321097
freq 1
</code></pre>
|
<p>You can list the unique values of a column in any pandas dataframe using:</p>
<pre><code>dataframe.column_name.unique()
</code></pre>
<p>In your case, it would be</p>
<pre><code>df.barcode.unique()
</code></pre>
<p>or</p>
<pre><code> df["barcode"].unique()
</code></pre>
<p>where df is the dataframe and barcode is the column.</p>
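<p>Since you want the count written into a new column, a sketch building on that (assuming the count should be per <code>parcelno</code> group, as in your groupby):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "parcelno": ["01565115935496"] * 4,
    "barcode": ["1234567890123", "1234567890123", "0987654321097", "0987654321097"],
})

# number of distinct barcodes per parcel, broadcast back onto every row
df["barcode_count"] = df.groupby("parcelno")["barcode"].transform("nunique")
print(df)
</code></pre>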
|
python|pandas
| 0
|
4,806
| 72,863,343
|
ValueError: ssd_mobilenet_v2_fpn_keras is not supported for tf version 1. See `model_builder.py`
|
<p>Hi, I am facing a problem in a Jupyter notebook. I use Python 3.7.13 and TensorFlow 1.15.5.</p>
<pre><code># Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
detection_model = model_builder.build(model_config=configs['model'], is_training=False)

# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(CHECKPOINT_PATH, 'ckpt-7')).expect_partial()

@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections
</code></pre>
<pre><code>ValueError                                Traceback (most recent call last)
C:\Temp\ipykernel_6932\3048604568.py in <module>
      1 # Load pipeline config and build a detection model
      2 configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
----> 3 detection_model = model_builder.build(model_config=configs['model'], is_training=False)
      4
      5 # Restore checkpoint

~\AppData\Roaming\Python\Python37\site-packages\object_detection\builders\model_builder.py in build(model_config, is_training, add_summaries)
   1251   build_func = META_ARCH_BUILDER_MAP[meta_architecture]
   1252   return build_func(getattr(model_config, meta_architecture), is_training,
-> 1253                     add_summaries)

~\AppData\Roaming\Python\Python37\site-packages\object_detection\builders\model_builder.py in _build_ssd_model(ssd_config, is_training, add_summaries)
    400   """
    401   num_classes = ssd_config.num_classes
--> 402   _check_feature_extractor_exists(ssd_config.feature_extractor.type)
    403
    404   # Feature extractor

~\AppData\Roaming\Python\Python37\site-packages\object_detection\builders\model_builder.py in _check_feature_extractor_exists(feature_extractor_type)
    268     '{} is not supported for tf version {}. See model_builder.py for '
    269     'features extractors compatible with different versions of '
--> 270     'Tensorflow'.format(feature_extractor_type, tf_version_str))
    271
    272

ValueError: ssd_mobilenet_v2_fpn_keras is not supported for tf version 1. See model_builder.py for features extractors compatible with different versions of Tensorflow
</code></pre>
<p><a href="https://i.stack.imgur.com/MRWHY.png" rel="nofollow noreferrer">enter image description here</a></p>
<p><a href="https://i.stack.imgur.com/91a6a.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>I see that <a href="https://github.com/tensorflow/models/tree/r1.13.0/research/object_detection/models" rel="nofollow noreferrer">tensorflow model version 1.x</a> does not have the ssd_mobilenet_v2_fpn_keras model. It is only supported from version 2.x. So you should install the TensorFlow Object Detection API version 2 and try again.</p>
|
python|tensorflow|jupyter-notebook|object-detection|checkpoint
| 0
|
4,807
| 72,903,381
|
How to find the average of a row in pandas barring one column?
|
<p>I'm trying to find the average of each row without taking into account the "unnamed column" which is the year.<br />
Currently I have:</p>
<pre><code>print(df.mean(axis=0))
</code></pre>
<p>But this just finds the average WITH the year which obviously is a huge outlier and skews the data.</p>
<p>Below is the dataset I am using:</p>
<p><img src="https://i.stack.imgur.com/6vfnL.png" alt="enter image description here" /></p>
|
<p>Try excluding the first column with <code>.iloc</code>:</p>
<pre><code>print(df.iloc[:, 1:].mean(axis=0))
</code></pre>
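<p>If the year column has a name you can also drop it explicitly, and use <code>axis=1</code> if you really want one average per row rather than per column (a sketch; <code>"Year"</code> is a hypothetical column name):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"Year": [2019, 2020], "a": [1.0, 2.0], "b": [3.0, 4.0]})

row_means = df.drop(columns=["Year"]).mean(axis=1)  # one value per row
print(row_means)
</code></pre>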
|
python|pandas|data-science|data-cleaning
| 0
|
4,808
| 10,321,036
|
python numpy recarray join
|
<p>Is there no "join" function in numpy recarrays? I see matplotlib has something and there is a concatenate but this is not a solution. I want a fast join in numpy/scipy or understand why it is not there. </p>
|
<p>After some digging I found this slightly buried library. I think it might be doing what I need ... curious to hear other answers as well. If this is the best solution it is NOT very well documented. I'm not sure how to contribute docs:</p>
<pre><code>import numpy as np
import numpy.lib.recfunctions as rfn
import numpy.random as random
a = random.randn(4,2)
b = random.randn(4,2)
a[1, 0] = 12
b[1, 0] = 12
print(a)
print(b)
a = np.rec.fromrecords(a, names='a,b')
b = np.rec.fromrecords(b, names='a,c')
print(a['a'])
print(b['a'])
c = rfn.join_by('a',a,b,jointype='outer')
print('')
print(c)
</code></pre>
|
python|numpy|join|recarray
| 0
|
4,809
| 70,490,204
|
How to make batch with pictures of different sizes for model in PyTorch?
|
<p>I want to use GlobalAveragePooling in my PyTorch model and not to resize, crop or pad the image. I can train my model using only one image every iteration (not batch). But it is too slow and I don't know how to use several images of different sizes as one input for Model.
Example of model code:</p>
<pre><code>class GAPModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3),
            nn.ReLU(inplace=True),
        )
        self.linear = nn.Sequential(
            nn.Linear(in_features=16, out_features=1),
            nn.ReLU(),
        )

    def forward(self, image):
        return self.linear(self.conv(image).mean([2, 3]))
</code></pre>
|
<p>One idea is to build each batch only from images of the same size (a sketch is shown after this list). A few caveats:</p>
<ol>
<li>shuffling: you can group indexes/image ids by size and then shuffle within each group.</li>
<li>last batch: do something similar to <code>drop_last</code> if necessary (see the torch DataLoader).</li>
<li>there may be more work beyond that...</li>
</ol>
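<p>A minimal sketch of that idea, assuming the dataset can report each image's size cheaply (the <code>sizes</code> list here is a hypothetical stand-in for however you obtain them):</p>
<pre><code>import random
from collections import defaultdict
from torch.utils.data import DataLoader

class SameSizeBatchSampler:
    """Yields batches of indices whose images all share one (H, W)."""
    def __init__(self, sizes, batch_size, shuffle=True):
        self.groups = defaultdict(list)
        for idx, size in enumerate(sizes):
            self.groups[size].append(idx)
        self.batch_size = batch_size
        self.shuffle = shuffle

    def __iter__(self):
        batches = []
        for indices in self.groups.values():
            if self.shuffle:
                random.shuffle(indices)
            for i in range(0, len(indices), self.batch_size):
                batches.append(indices[i:i + self.batch_size])
        if self.shuffle:
            random.shuffle(batches)
        yield from batches

    def __len__(self):
        # ceil division per size group
        return sum(-(-len(v) // self.batch_size) for v in self.groups.values())

# Usage (sizes would come from your dataset's metadata):
# sizes = [(h, w) for each image in the dataset]
# loader = DataLoader(dataset, batch_sampler=SameSizeBatchSampler(sizes, batch_size=16))
</code></pre>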
|
python|neural-network|pytorch|conv-neural-network
| 0
|
4,810
| 70,731,467
|
Function not callable anymore after one try
|
<p>I'm coding a function right now which has a really weird problem. When I define the function <code>Psi(t)</code> and call it to be plotted, it works fine. But, when you call it again to be plotted, it sends an error <code>'numpy.ndarray' object is not callable</code>. When you click play (on Jupyter notebook) on <code>Psi(t)</code> to define it again then call it to be plotted, it works fine again. You'd have to define it again if you wanna change a parameter then plot <code>Psi(t)</code> again. I don't know if it's with the code or the software that I use for python (VS code). Anyhow, here's the code:</p>
<pre><code># Constants
m = 9.109e-31      # mass of electron in kg
L = 1e-8           # length of box in m
hbar = 1.0546e-34  # hbar in J/s
x0 = L/2           # midpoint of box
sigma = 1e-10      # width of wave packet in m
kappa = 5e10       # wave number in 1/m
N = 1000           # number of grid slices

def psi0(x):
    return np.exp( -(x - x0)**2/(2*sigma**2) )*np.exp(-1j*kappa*x)

# Discrete sine transform
def dst(y):
    N = len(y)
    y2 = np.empty(2*N,float)
    y2[0] = y2[N] = 0.0
    y2[1:N] = y[1:]
    y2[:N:-1] = -y[1:]
    a = -np.imag(rfft(y2))[:N]
    a[0] = 0.0
    return a

# Inverse discrete sine transform
def idst(a):
    N = len(a)
    c = np.empty(N+1,complex)
    c[0] = c[N] = 0.0
    c[1:N] = -1j*a[1:]
    y = irfft(c)[:N]
    y[0] = 0.0
    return y

x_n = np.zeros(N, complex)
xgrid = range(N)
for i in range(N):
    x_n[i] = psi0(i*L/N)

alpha = dst(np.real(x_n))
eta = dst(np.imag(x_n))

def Psi(t):
    k = np.arange(1, N+1)
    energy_k = (k**2*np.pi**2*hbar)/(2*m*L**2)
    cos, sin = np.cos(energy_k*t), np.sin(energy_k*t)
    re_psi = alpha*cos - eta*sin
    im_psi = eta*cos + alpha*sin
    psi = re_psi + im_psi
    return idst(psi)

Psi = Psi(2e-16)
plt.plot(xgrid,Psi)
plt.show()
</code></pre>
<p>I'm hoping someone can help.</p>
|
<p>On the third-to-last line:</p>
<pre><code>Psi = Psi(2e-16)
</code></pre>
<p>You are updating the reference <code>Psi</code> from the function to its return value. Once you do that, <code>Psi</code> can no longer be called as a function. It is advisable never to reuse the names of functions or classes as variable names in your code. The solution is to either rename the variable, or rename the function and the function call.</p>
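<p>For example, a minimal fix (reusing the code from the question, with a hypothetical variable name):</p>
<pre><code>psi_values = Psi(2e-16)   # keep the function name Psi intact
plt.plot(xgrid, psi_values)
plt.show()
</code></pre>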
|
python|arrays|numpy|callable
| 1
|
4,811
| 70,719,806
|
Outlier removal techniques from an array
|
<p>I know there are a ton of resources online for outlier removal, but I haven't yet managed to obtain exactly what I want, so I'm posting here. I have an array (or DF) of <code>4</code> columns. Now I want to remove rows from the DF based on one column's outlier values. The following is what I have tried, but it is not perfect.</p>
<pre><code>def outliers2(data2, m = 4.5):
    c = []
    data = data2[:,1]                   # choosing the column
    d = np.abs(data - np.median(data))  # deviation computation
    mdev = np.median(d)                 # median deviation
    for i in range(len(data)):
        if (abs(data[i] - mdev) < m * np.std(data)):
            c.append(data2[i])
    return c

x = pd.DataFrame(outliers2(np.array(b)))
column = ['t','orig_w','filt_w','smt_w']
x.columns = column

#Plot
plt.rcParams['figure.figsize'] = [10,8]
plt.plot(b.t,b.orig_w,'o',label='Original',alpha=0.8)               # Original
plt.plot(x.t,x.orig_w,'.',c='r',label='Outlier removed',alpha=0.8)  # After outlier removal
plt.legend()
</code></pre>
<p>The plot illustrates how the result looks: red points after the outlier treatment plotted over the blue original points. I would really like to get rid of that vertical group of points around the x~0 mark. What should I do?</p>
<p>A link to the data file is provided here : <a href="https://drive.google.com/file/d/1aYPX31zE4P-LW5Hva6fdqNUf4fwYHpJa/view?usp=sharing" rel="nofollow noreferrer">Full data</a>
<a href="https://i.stack.imgur.com/vn7T9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vn7T9.png" alt="enter image description here" /></a>
The green circles show the points I would typically like to get rid of.
<a href="https://i.stack.imgur.com/wsDEu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wsDEu.png" alt="enter image description here" /></a></p>
|
<p>You could use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.median_filter.html" rel="nofollow noreferrer">scipy's median_filter</a>:</p>
<pre><code>import pandas as pd
from matplotlib import pyplot as plt
from scipy.ndimage import median_filter
b = pd.read_csv("test.csv")
x = b.copy()
x.orig_w = median_filter(b.orig_w, size=15)
#Plot
plt.rcParams['figure.figsize'] = [10,8]
#Original
plt.plot(b.t,b.orig_w,'o',label='Original',alpha=0.8)
# After outlier removal
plt.plot(x.t,x.orig_w,'.',c='r',label='Outlier removed',alpha=0.8)
plt.legend()
plt.show()
</code></pre>
<p>Sample output:
<a href="https://i.stack.imgur.com/YOmhI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YOmhI.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy|scipy|outliers
| 2
|
4,812
| 42,788,713
|
Pandas mapping to TRUE/FALSE as String, not Boolean
|
<p>When I try to convert some columns in a pandas dataframe from '0' and '1' to 'TRUE' and 'FALSE', pandas automatically detects dtype as boolean. I want to keep dtype as string, with the strings 'TRUE' and 'FALSE'.</p>
<p>See code below:</p>
<pre><code>booleanColumns = pandasDF.select_dtypes(include=[bool]).columns.values.tolist()
booleanDictionary = {'1': 'TRUE', '0': 'FALSE'}
pandasDF.to_string(columns = booleanColumns)

for column in booleanColumns:
    pandasDF[column].map(booleanDictionary)
</code></pre>
<p>Unfortunately, python automatically converts dtype to boolean with the last operation. How can I prevent this?</p>
|
<p>If need replace <code>boolean</code> values <code>True</code> and <code>False</code>:</p>
<pre><code>booleandf = pandasDF.select_dtypes(include=[bool])
booleanDictionary = {True: 'TRUE', False: 'FALSE'}

for column in booleandf:
    pandasDF[column] = pandasDF[column].map(booleanDictionary)
</code></pre>
<p>Sample:</p>
<pre><code>pandasDF = pd.DataFrame({'A':[True,False,True],
                         'B':[4,5,6],
                         'C':[False,True,False]})

print (pandasDF)
       A  B      C
0   True  4  False
1  False  5   True
2   True  6  False

booleandf = pandasDF.select_dtypes(include=[bool])
booleanDictionary = {True: 'TRUE', False: 'FALSE'}

#loop by df is loop by columns, same as for column in booleandf.columns:
for column in booleandf:
    pandasDF[column] = pandasDF[column].map(booleanDictionary)

print (pandasDF)
       A  B      C
0   TRUE  4  FALSE
1  FALSE  5   TRUE
2   TRUE  6  FALSE
</code></pre>
<p>EDIT:</p>
<p>Simplier solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="noreferrer"><code>replace</code></a> by <code>dict</code>:</p>
<pre><code>booleanDictionary = {True: 'TRUE', False: 'FALSE'}
pandasDF = pandasDF.replace(booleanDictionary)
print (pandasDF)
A B C
0 TRUE 4 FALSE
1 FALSE 5 TRUE
2 TRUE 6 FALSE
</code></pre>
|
python|pandas|dictionary|replace
| 28
|
4,813
| 43,011,713
|
Rotate a numpy.array one bit to the right
|
<p>I have a <code>numpy.array</code> and would like to rotate its content one bit to the right. I want to perform this as efficient (in terms of execution speed) as possible. Also, please note that every element of the array is an 8-bit number (<code>np.uint8</code>). The rotation assumes that the array stores one big number which is split into chunks of size 8-bit, i.e., I'm not interested in rotating every 8-bit element by itself, but the whole array together. </p>
<p>Here is an example to remove any confusion:</p>
<pre><code>a = numpy.array([0b00000000, 0b00000001])
# rotation should be performed globally
# i.e., the result should be
# rotate(a) == numpy.array([0b10000000, 0b00000000])
</code></pre>
<p><strong>How I tried solving the problem?</strong></p>
<p><strong>Method #1:</strong> Convert the input array to binary representation and catenate the binary strings of the elements into one big string. Then pop the least significant bit and insert it ahead of the most significant bit. Finally, chop the big string into 8-bit chunks, convert every chunk into <code>np.uint8</code>, and store it in the corresponding position of the rotation result. I guess this solution is correct, but not efficient, especially if the input array is huge. </p>
<p><strong>Method #2:</strong> I found it hard to explain the idea in words, so I'll just try convey it using the code fragment below:</p>
<pre><code># Let w be the input array
# read the least significant bits of every element in w
# and store them into another array lsb
mask = np.ones(shape=w.shape, dtype=np.int8)
lsb = np.bitwise_and(w,mask)
shiftedLsb = np.left_shift(np.roll(lsb, 1), 7)
rotW = np.right_shift(w,1)
rotationResult = np.bitwise_or(shiftedLsb, rotw)
</code></pre>
<p><strong>My question:</strong> Is there a better way, in terms of execution speed, to implement this kind of rotation?</p>
<p>Thank you all.</p>
|
<p>You can speed up your "Method #2" by reducing the amount of memory allocation for temporaries:</p>
<pre><code>def method2a(w):
    rotW = np.right_shift(w, 1)
    lsb = np.bitwise_and(w, 1)
    np.left_shift(lsb, 7, lsb)
    rotW[0] |= lsb[-1]
    rotW[1:] |= lsb[:-1]
    return rotW
</code></pre>
<p>On my system, with a 1MB input array, this is twice as fast as your original, and produces the same results.</p>
<p>If you're willing to destroy the input, you could eliminate one of the two remaining allocations (perhaps by adding an optional <code>out</code> argument, as NumPy does in e.g. <code>left_shift()</code>).</p>
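<p>For illustration, a sketch of that destructive variant (an assumption on my part that clobbering the input <code>w</code> is acceptable):</p>
<pre><code>import numpy as np

def method2b(w):
    """Rotate the whole array right by one bit, overwriting w."""
    lsb = np.bitwise_and(w, 1)   # the bits that will wrap around
    np.left_shift(lsb, 7, lsb)   # move them to the MSB position
    np.right_shift(w, 1, w)      # shift the input in place
    w[0] |= lsb[-1]              # the last element's old LSB wraps to the front
    w[1:] |= lsb[:-1]
    return w

a = np.array([0b00000000, 0b00000001], dtype=np.uint8)
print([bin(v) for v in method2b(a)])   # ['0b10000000', '0b0']
</code></pre>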
|
python|arrays|numpy|bit-manipulation
| 2
|
4,814
| 42,948,719
|
TensorFlow HVX Acceleration support
|
<p>I successfully built and ran the test application from <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/hvx" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/hvx</a>. I'd now like to benchmark HVX against the CPU implementation of <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark</a>, and if possible, the Android camera demo, to see how much it would help, but I wasn't able to find any documentation describing how to build said apps with HVX support (my builds run on the CPU). I'm testing on the Open-Q 820 development board with Android 7.0.</p>
<p>Is utilizing HVX acceleration outside the HVX test application, preferably with the benchmark and maybe the Android camera demos supported yet? If so, could someone please point me in the right direction? Thanks!</p>
|
<p>Currently, the Android demo app does not support the HVX runtime. But I'm sure that you can use the runtime with the Android demo app by replacing the .so file with the HVX version. If you can wait for the official support, that should be happening soon, but no promises. Let me know if you have any questions :)</p>
|
tensorflow|hexagon-dsp
| 2
|
4,815
| 25,182,421
|
Overlay two numpy arrays treating fourth plane as alpha level
|
<p>I have two numpy arrays of shape (256, 256, 4). I would like to treat the fourth 256 x 256 plane as an alpha level, and export an image where these arrays have been overlayed.</p>
<p>Code example:</p>
<pre><code>import numpy as np
from skimage import io
fg = np.ndarray((256, 256, 4), dtype=np.uint8)
one_plane = np.random.standard_normal((256, 256)) * 100 + 128
fg[:,:,0:3] = np.tile(one_plane, 3).reshape((256, 256, 3), order='F')
fg[:, :, 3] = np.zeros((256, 256), dtype=np.uint8)
fg[0:128, 0:128, 3] = np.ones((128, 128), dtype=np.uint8) * 255
fg[128:256, 128:256, 3] = np.ones((128, 128), dtype=np.uint8) * 128
bg = np.ndarray((256, 256, 4), dtype=np.uint8)
bg[:,:,0:3] = np.random.standard_normal((256, 256, 3)) * 100 + 128
bg[:, :, 3] = np.ones((256, 256), dtype=np.uint8) * 255
io.imsave('test_fg.png', fg)
io.imsave('test_bg.png', bg)
</code></pre>
<p>This creates two images, fg:</p>
<p><img src="https://i.stack.imgur.com/RpWvX.png" alt="test_fg.png"></p>
<p>and bg:</p>
<p><img src="https://i.stack.imgur.com/uwq7k.png" alt="test_bg">:</p>
<p>I would like to be able to overlay the fg onto the bg. That is, the final image should have grey in the top left (because there the alpha of fg is 1), a blend of grey and colour noise in the bottom right, and pure colour noise in the other quadrants. I am looking for something like an add function that gives me a new np array.</p>
<p>Note that I don't think this is the same as <a href="https://stackoverflow.com/questions/10127284/overlay-imshow-plots-in-matplotlib">this answer</a>, which uses matplotlib.pyplot.plt to overlay the images and fiddles around with colour maps. I don't think I should need to fiddle with colour maps here, but maybe the answer is I do.</p>
<p>The reason I would like a new np.array returned by the operation is because I want to do this iteratively with many images, overlayed in order. </p>
|
<p><a href="http://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending" rel="noreferrer">Alpha blending</a> is usually done using the Porter & Duff equations:</p>
<p><img src="https://i.stack.imgur.com/iCSV2.png" alt="enter image description here"></p>
<p>where <em>src</em> and <em>dst</em> would correspond to your foreground and background images, and the <em>A</em> and <em>RGB</em> pixel values are assumed to be floating point, in the range <em>[0, 1]</em>.</p>
<p>For your specific example:</p>
<pre><code>src_rgb = fg[..., :3].astype(np.float32) / 255.0
src_a = fg[..., 3].astype(np.float32) / 255.0
dst_rgb = bg[..., :3].astype(np.float32) / 255.0
dst_a = bg[..., 3].astype(np.float32) / 255.0

out_a = src_a + dst_a*(1.0-src_a)
out_rgb = (src_rgb*src_a[..., None]
           + dst_rgb*dst_a[..., None]*(1.0-src_a[..., None])) / out_a[..., None]

out = np.zeros_like(bg)
out[..., :3] = out_rgb * 255
out[..., 3] = out_a * 255
</code></pre>
<p>Output:</p>
<p><img src="https://i.stack.imgur.com/hdldU.png" alt="enter image description here"></p>
|
python|image|numpy|alphablending|scikit-image
| 10
|
4,816
| 25,087,769
|
RuntimeWarning: Divide by Zero error: How to avoid? PYTHON, NUMPY
|
<p>I am running into RuntimeWarning: Invalid value encountered in divide.</p>
<pre><code> import numpy
a = numpy.random.rand((1000000, 100))
b = numpy.random.rand((1,100))
dots = numpy.dot(b,a.T)/numpy.dot(b,b)
norms = numpy.linalg.norm(a, axis =1)
angles = dots/norms ### Basically I am calculating angle between 2 vectors
</code></pre>
<p>There are some vectors in my <code>a</code> which have a norm of 0, so calculating the angles gives a runtime warning.</p>
<p>Is there a one line pythonic way to compute angles while taking into account norms which are 0?</p>
<pre><code>angles =[i/j if j!=0 else -2 for i,j in zip(dots, norms)] # takes 10.6 seconds
</code></pre>
<p>But it takes a lot of time. Since all angles will have values between 1 and -1 and I need only 10 max values this will help me. This takes around 10.6 seconds which is insane.</p>
|
<p>You can ignore the warnings with the <code>np.errstate</code> context manager and later replace the nans/infs with what you want:</p>
<pre><code>import numpy as np

angle = np.arange(-5., 5.)
norm = np.arange(10.)

with np.errstate(divide='ignore', invalid='ignore'):
    print np.where(norm != 0., angle / norm, -2)

# or:
with np.errstate(divide='ignore', invalid='ignore'):
    res = angle/norm
res[~np.isfinite(res)] = -2   # catches nan (0/0) as well as +/-inf (x/0)
</code></pre>
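<p>Applied to your own arrays, a sketch (smaller than your stated size for illustration; the -2 sentinel keeps the zero-norm rows out of the top 10, since valid angle values lie in [-1, 1]):</p>
<pre><code>import numpy as np

a = np.random.rand(100000, 100)
b = np.random.rand(1, 100)

dots = (a @ b.ravel()) / (b.ravel() @ b.ravel())
norms = np.linalg.norm(a, axis=1)

with np.errstate(divide='ignore', invalid='ignore'):
    angles = dots / norms
angles[~np.isfinite(angles)] = -2

top10 = np.argpartition(angles, -10)[-10:]   # indices of the 10 largest values
print(angles[top10])
</code></pre>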
|
python|numpy
| 15
|
4,817
| 26,823,556
|
Mask array entries when column index is greater than a certain cutoff that is unique to each row
|
<p>I want to efficiently mask a large array with several hundred thousand rows and ~500 columns wherever column index is greater than <code>cutoff[i]</code>, 0 <= <code>i</code> < number of rows.</p>
<p>Here is an example:</p>
<pre><code>In [2]: x = np.random.randint(0,100,size=24).reshape((6,4)); x
Out[2]:
array([[86, 50, 19, 49],
[41, 80, 94, 50],
[36, 58, 66, 50],
[67, 45, 76, 18],
[80, 14, 10, 30],
[25, 90, 44, 38]])
In [3]: cutoff = np.array([10,4,3,2,1,0]); cutoff
Out[3]: array([10, 4, 3, 2, 1, 0])
</code></pre>
<p>Now I basically want to create a <code>mask</code> with <code>mask[:, cutoff:]=True</code>, but of course this gives me error. End result that I'm looking for:</p>
<pre><code>[[86 50 19 49]
[41 80 94 50]
[36 58 66 --]
[67 45 -- --]
[80 -- -- --]
[-- -- -- --]]
</code></pre>
<p>Is there a faster way to do the following?</p>
<pre><code>In [4]: mask = np.zeros_like(x)
In [5]: for row in range(x.shape[0]):
...: mask[row, cutoff[row]:] = 1
...:
In [6]: mask
Out[6]:
array([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 1, 1],
[0, 1, 1, 1],
[1, 1, 1, 1]])
In [7]: np.ma.array(x, mask=mask)
Out[7]:
masked_array(data =
[[86 50 19 49]
[41 80 94 50]
[36 58 66 --]
[67 45 -- --]
[80 -- -- --]
[-- -- -- --]],
mask =
[[False False False False]
[False False False False]
[False False False True]
[False False True True]
[False True True True]
[ True True True True]],
fill_value = 999999)
</code></pre>
<p>I'm open to using pandas if it's easier or faster.</p>
|
<p>You can use broadcasting:</p>
<pre><code>cutoff = np.array([10,4,3,2,1,0])
np.arange(4)[None, :] >= cutoff[:, None]
</code></pre>
<p>Here is the result:</p>
<pre><code>array([[False, False, False, False],
[False, False, False, False],
[False, False, False, True],
[False, False, True, True],
[False, True, True, True],
[ True, True, True, True]], dtype=bool)
</code></pre>
<p>Or use <code>outer()</code> method of ufunc:</p>
<pre><code>np.less_equal.outer(cutoff, np.arange(4))
</code></pre>
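<p>Either boolean array can then be passed straight to <code>np.ma.array</code> as the mask, e.g.:</p>
<pre><code>import numpy as np

x = np.array([[86, 50, 19, 49],
              [41, 80, 94, 50],
              [36, 58, 66, 50],
              [67, 45, 76, 18],
              [80, 14, 10, 30],
              [25, 90, 44, 38]])
cutoff = np.array([10, 4, 3, 2, 1, 0])

masked = np.ma.array(x, mask=np.arange(x.shape[1])[None, :] >= cutoff[:, None])
print(masked)
</code></pre>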
|
python|numpy|pandas
| 2
|
4,818
| 26,572,664
|
Set each individual column to have its own datatype
|
<p>How can I set a particular datatype for every single column?</p>
<p>I opened a <code>.txt</code> file which has 236 columns with Pandas. </p>
<p>For example I have a column with values called "System Time", another one called "Temperature", another one called "Alarm", ...</p>
<p>For "System Time" I want to use <code>DateTime</code>, for "Temperature" I want to use <code>float</code> because the values have decimal numbers (e.g. <code>24.4</code>) and for the "Alarm" I want to use <code>string</code>. </p>
<p>Can anybody help me?</p>
|
<p>When Pandas reads your file (e.g. using <code>pd.read_csv</code>) to construct the DataFrame it will automatically select the appropriate datatype (<code>dtype</code>) to hold the data on a column by column basis. This means that a column of decimal numbers will have the <code>float64</code> type, and so on.</p>
<p>If you have as many as 236 columns, it's probably easiest to let Pandas figure out the best datatypes.</p>
<p>Dates can be trickier to handle, so you might want to be more explicit about which columns Pandas should parse to the <code>datetime</code> type. You could do this after you've built the DataFrame with <code>pd.to_datetime(df["System Time"])</code>.</p>
<hr />
<p>If however you'd like to control the datatype of each column during the construction, many Pandas methods allow you pass in a list or dictionary of columns names and what their types should be.</p>
<p>For example, if you use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html" rel="nofollow noreferrer"><code>pd.read_csv</code></a>, you can make use of the <code>dtype</code> keyword argument:</p>
<blockquote>
<p><code>dtype</code> : <em>Type name or dict of column -> type</em></p>
<p>Data type for data or columns. E.g. <code>{'a': np.float64, 'b': np.int32}</code></p>
</blockquote>
<p>For instance, you might choose to construct you DataFrame in a manner similar to this:</p>
<pre><code>df = pd.read_csv('file.txt', names=["Temperature", "Alarm"],
                 dtype={"Temperature": np.float64, "Alarm": object})
</code></pre>
<p>N.B. There is no <code>string</code> datatype available in Pandas; such values usually have the datatype <code>object</code>.</p>
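<p>For the date column specifically, <code>read_csv</code> can also parse it during loading (a sketch, assuming the file is comma-separated and has a header row with these column names):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.read_csv("file.txt",
                 dtype={"Temperature": np.float64, "Alarm": object},
                 parse_dates=["System Time"])
print(df.dtypes)
</code></pre>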
|
python|python-2.7|pandas|dataframe|types
| 0
|
4,819
| 26,518,673
|
Format data for survival analysis using pandas
|
<p>I'm trying to figure out the quickest way to get survival analysis data into a format that will allow for time varying covariates. Basically this would be a python implementation of <code>stsplit</code> in Stata. To give a simple example, with the following set of information:</p>
<pre><code>id start end x1 x2 exit
1 0 18 12 11 1
</code></pre>
<p>This tells us that an observation started at time 0, and ended at time 18. Exit tells us that this was a 'death' rather than right censoring. x1 and x2 are variables that are constant over time.</p>
<pre><code>id t age
1 0 30
1 7 40
1 17 50
</code></pre>
<p>I'd like to get:</p>
<pre><code>id start end x1 x2 exit age
1 0 7 12 11 0 30
1 7 17 12 11 0 40
1 17 18 12 11 1 50
</code></pre>
<p>Exit is only 1 at the end, signifying that t=18 is when the death occurred.</p>
|
<p>Assuming:</p>
<pre><code>>>> df1
id start end x1 x2 exit
0 1 0 18 12 11 1
</code></pre>
<p>and:</p>
<pre><code>>>> df2
id t age
0 1 0 30
1 1 7 40
2 1 17 50
</code></pre>
<p>You can do:</p>
<pre><code>df = df2.copy()                                  # start with df2
df['x1'] = df1.loc[0, 'x1']                      # x1 column
df['x2'] = df1.loc[0, 'x2']                      # x2 column
df.rename(columns={'t': 'start'}, inplace=True)  # start column
df['end'] = df['start'].shift(-1)                # end column
df.loc[len(df)-1, 'end'] = df1.loc[0, 'end']
df['exit'] = 0                                   # exit column
df.loc[len(df)-1, 'exit'] = 1
df = df[['id', 'start', 'end', 'x1', 'x2', 'exit', 'age']]  # reorder columns
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>>>> df
id start end x1 x2 exit age
0 1 0 7 12 11 0 30
1 1 7 17 12 11 0 40
2 1 17 18 12 11 1 50
</code></pre>
|
python|pandas|stata|survival-analysis
| 1
|
4,820
| 26,563,745
|
Python Pandas : how to set 2 colums at the same time?
|
<p>I posted something simpler because I thought it would be easier to understand but, judging from your comments, I was wrong, so I have edited this question:</p>
<p>So here is the code. I want to do it without a loop, should it be done in pandas ?</p>
<pre><code>import pandas as pd

myval = [0.0,1.1, 2.2, 3.3, 4.4, 5.5,6.6, 7.7, 8.8,9.9]
s1 = [0,0,1,1,0,0,1,1,0,1]
s2 = [0,0,1,0,1,0,1,0,1,1]
posin = [10,0,0,0,0,0,0,0,0,0]
posout = [0,0,0,0,0,0,0,0,0,0]
sig = ['-']

d = {'myval' : myval, 's1' : s1, 's2' : s2}
d = pd.DataFrame(d)

'''
Normally the dataframe should have the 6 columns,
but I can't make the part below work inside the df (THAT is the problem!!).
The real df is 5000+ rows, and this should be done for 100+ sets of values,
so this way is not eligible. Too slow.
'''
for i in xrange(1,len(myval)) :
    if (s1[i]== 1) & (s2[i] == 1) & (posin[i-1] != 0 ) :
        posin[i]= 0
        posout[i]= posin[i-1] / myval[i]
        sig.append( 'a')
    elif (s1[i] == 0) & (s2[i] == 1) & (posin[i-1] == 0) :
        posin[i]= posout[i-1] * myval[i]
        posout[i] = 0
        sig.append( 'v')
    else :
        posin[i] = posin[i-1]
        posout[i] = posout[i-1]
        sig.append('-')

d2 = pd.DataFrame({'posin' : posin , 'posout' : posout , 'sig' : sig })
d = d.join(d2)

#the result wanted :
print d
   myval  s1  s2      posin    posout sig
0    0.0   0   0  10.000000  0.000000   -
1    1.1   0   0  10.000000  0.000000   -
2    2.2   1   1   0.000000  4.545455   a
3    3.3   1   0   0.000000  4.545455   -
4    4.4   0   1  20.000000  0.000000   v
5    5.5   0   0  20.000000  0.000000   -
6    6.6   1   1   0.000000  3.030303   a
7    7.7   1   0   0.000000  3.030303   -
8    8.8   0   1  26.666667  0.000000   v
9    9.9   1   1   0.000000  2.693603   a
</code></pre>
<p>Any help ?</p>
<p>Thanks for it !!</p>
|
<p>I was hoping that something like the following might work (as suggested in the comments), however (surprisingly?) this use of np.where raises a <code>ValueError: shape mismatch: objects cannot be broadcast to a single shape</code> (using a 1D to select from a 2D):</p>
<pre><code>np.where(df.s1 & df.s2,
pd.DataFrame({"bin": 0, "bout": df.bin.diff() / df.myval}),
np.where(df.s1,
pd.DataFrame({"bin": df.bout.diff() * df.myval, "bout": 0}),
pd.DataFrame({"bin": df.bin.diff(), "bout": df.bout.diff()})))
</code></pre>
<hr>
<p>As an alternative to using where, I would construct this in stages:</p>
<pre><code>res = pd.DataFrame({"bin": 0, "bout": df.bin.diff() / df.myval})
res.update(pd.DataFrame({"bin": df.bout.diff() * df.myval,
"bout": 0}).loc[(df.s1 == 1) & (df.s2 == 0)])
res.update(pd.DataFrame({"bin": df.bin.diff(),
"bout": df.bout.diff()}).loc[(df.s1 == 0) & (df.s2 == 0)])
</code></pre>
<p>Then you can assign this to the two columns in df:</p>
<pre><code>df[["bin", "bout"]] = res
</code></pre>
|
python|pandas
| 0
|
4,821
| 39,075,173
|
Pandas: create a table using groupby
|
<p>I have dataframe </p>
<pre><code>ID subdomain search_engine search_term code category term_code
0120bc30e78ba5582617a9f3d6dfd8ca yandex.ru 0 None 1 поисковая машина 1
0120bc30e78ba5582617a9f3d6dfd8ca my-shop.ru 0 None 5 интернет-магазин 1
0120bc30e78ba5582617a9f3d6dfd8ca ru.tele2.ru 0 None 10 Телеком-провайдеры 1
0120bc30e78ba5582617a9f3d6dfd8ca yandex.com yandex алиэкспресс 1 поисковая машина 6
0120bc30e78ba5582617a9f3d6dfd8ca fb.ru 0 None 3 информационный ресурс 1
0120bc30e78ba5582617a9f3d6dfd8ca shopotam.ru 0 None 4 интернет-агрегатор 1
031ce36695306ac09ae905927a753f33 ya.ru 0 None 1 поисковая машина 1
031ce36695306ac09ae905927a753f33 cyberforum.ru 0 None 3 информационный ресурс 1
031ce36695306ac09ae905927a753f33 fixim.ru 0 None 8 запчасти и ремонт 1
031ce36695306ac09ae905927a753f33 microsoft.com 0 None 9 сайты производителей с возможностью купить 1
031ce36695306ac09ae905927a753f33 market.yandex.ru 0 None 4 интернет-агрегатор 1
</code></pre>
<p>I need to get </p>
<pre><code>ID, path
0120bc30e78ba5582617a9f3d6dfd8ca, поисковая машина -> интернет-магазин -> Телеком-провайдеры -> поисковая машина -> информационный ресурс -> интернет-агрегатор
031ce36695306ac09ae905927a753f33, поисковая машина -> информационный ресурс -> запчасти и ремонт -> сайты производителей с возможностью купить -> интернет-агрегатор
</code></pre>
<p>I mean I want to take the <code>term_code</code> column and convert it to a string with the delimiter <code>-></code> for every <code>ID</code>. How can I do that?</p>
|
<p>try this:</p>
<pre><code>df.groupby('ID')['category'].apply(lambda x: ' -> '.join(list(x)))
</code></pre>
<p>Demo:</p>
<pre><code>In [14]: df.groupby('ID')['category'].apply(lambda x: ' -> '.join(list(x)))
Out[14]:
ID
0120bc30e78ba5582617a9f3d6dfd8ca поисковая машина -> интернет-магазин -> Телеком-провайдеры -> 6 -> информационный ресурс
-> интернет-агрегатор
031ce36695306ac09ae905927a753f33 поисковая машина -> информационный ресурс -> запчасти и ремонт -> сайты производителей с возможностью купить
-> интернет-агрегатор
Name: category, dtype: object
</code></pre>
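<p>To get it back as a two-column frame with <code>ID</code> and <code>path</code>, a small follow-up sketch:</p>
<pre><code># df is the dataframe from above
out = (df.groupby('ID')['category']
         .agg(' -> '.join)
         .reset_index(name='path'))
print(out)
</code></pre>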
|
python|pandas
| 1
|
4,822
| 39,030,164
|
pandas: Add row from one data frame to another data frame?
|
<p>I have two dataframes with identical column headers.</p>
<p>I am iterating over the rows in df1, splitting one of the columns, and then using those split columns to create multiple rows to add to the other dataframe.</p>
<pre><code>for index, row in df1.iterrows():
    curr_awards = row['AWARD'].split(" ")
    for award in curr_awards:
        new_line = row
        new_line['AWARD'] = award.strip()
        #insert code here to add new_line to df2
</code></pre>
<p>What is the pandas method for adding a row to another data frame?</p>
|
<p>I figured it out.</p>
<pre><code>for index, row in df1.iterrows():
    curr_awards = row['AWARD'].split(" ")
    for award in curr_awards:
        new_line = row
        new_line['AWARD'] = award.strip()
        df2.loc[len(df2)] = new_line
</code></pre>
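<p>A vectorised alternative (a sketch, assuming a reasonably recent pandas with <code>DataFrame.explode</code>): split the column into lists and let pandas expand one row per award instead of appending row by row.</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({"AWARD": ["gold silver", "bronze"], "YEAR": [2015, 2016]})

df2 = (df1.assign(AWARD=df1["AWARD"].str.split())
          .explode("AWARD")
          .reset_index(drop=True))
print(df2)
#     AWARD  YEAR
# 0    gold  2015
# 1  silver  2015
# 2  bronze  2016
</code></pre>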
|
python|python-3.x|pandas
| 4
|
4,823
| 29,297,033
|
Numpy filter 2D array by two masks
|
<p>I have a 2D array and two masks, one for columns, and one for rows. If I try to simply do <code>data[row_mask,col_mask]</code>, I get an error saying <code>shape mismatch: indexing arrays could not be broadcast together with shapes ...</code>. On the other hand, <code>data[row_mask][:,col_mask]</code> works, but is not as pretty. Why does it expect indexing arrays to be of the same shape?</p>
<p>Here's a specific example:</p>
<pre><code>import numpy as np
data = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
row_mask = np.array([True, True, False, True])
col_mask = np.array([True, True, False])
print(data[row_mask][:,col_mask]) # works
print(data[row_mask,col_mask]) # error
</code></pre>
|
<p>Use the <code>ix_</code> function:</p>
<pre><code>>>> data[np.ix_(row_mask,col_mask)]
array([[ 1, 2],
[ 4, 5],
[10, 11]])
</code></pre>
<blockquote>
<p>Combining multiple Boolean indexing arrays or a Boolean with an integer indexing array can best be understood with the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.nonzero.html#numpy.ndarray.nonzero" rel="nofollow">obj.nonzero()</a> analogy. The function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html#numpy.ix_" rel="nofollow">ix_</a> also supports boolean arrays and will work without any surprises.</p>
</blockquote>
|
python|numpy|scipy
| 3
|
4,824
| 33,851,716
|
Installing numpy and pandas for python 3.5
|
<p>I've been trying to install numpy and pandas for python 3.5 but it keeps telling me that I have an issue. </p>
<p>Could it be because numpy can't run on python 3.5 yet? </p>
|
<p>This is the result of a Numpy distutils bug (which is already fixed in the development branch).</p>
<p>If you have brew:</p>
<pre><code>brew install homebrew/python/numpy --with-python3
</code></pre>
<p>If you don't:</p>
<pre><code>pip3 install git+https://github.com/numpy/numpy.git
</code></pre>
|
python|numpy
| 2
|
4,825
| 23,543,836
|
Generating repetitive data in numpy / pandas in a fast vectorized way
|
<p>Suppose I want to generate a 1D array like this:</p>
<pre><code>1 1 1 1 2 2 2 3 3 4
</code></pre>
<p>In general I am looking for something with this form:</p>
<pre><code>Element N-repetition
1 n-0
2 n-1
3 n-2
4 n-3
. .
. .
. .
n n-(n-1)=1
</code></pre>
<p>This is of course possible by combining arrays of
sizes n, n-1, n-2, ..., but I am wondering if there is
a better, vectorized way of doing this?</p>
|
<p>It's very simple with NumPy's <code>repeat</code>:</p>
<pre><code>n = 4
a = np.arange(1,n+1)
</code></pre>
<p>The array <code>a</code> looks like:</p>
<pre><code>array([1, 2, 3, 4])
</code></pre>
<p>And you basically want to repeat it with the reverse of <code>a</code>, so:</p>
<pre><code>np.repeat(a, a[::-1])
</code></pre>
<p>Gives:</p>
<pre><code>array([1, 1, 1, 1, 2, 2, 2, 3, 3, 4])
</code></pre>
|
python|numpy|pandas
| 5
|
4,826
| 29,570,394
|
Solving a differential equation in python with odeint
|
<p>I am trying to solve the differential equation R·(dq/dt) + (q/C) = Vi·sin(w·t), so I have this code:</p>
<pre><code>import numpy as np
from numpy import *
import matplotlib.pyplot as plt
from math import pi, sin
from scipy.integrate import odeint

C=10e-9
R=1000 #Ohmios
Vi=10 #V
w=2*pi*1000 #Hz
fc=1/(2*pi*R*C)
print fc

def der (q,t,args=(C,R,Vi,w)):
    I=((Vi/R)*sin(w*t))-(q/(R*C))
    return I
</code></pre>
<p>OK, so I have this; now I am going to integrate it with odeint:</p>
<pre><code>a, b, ni = 0.0, 10.0, 1
tp=np.linspace(a, b, ni+1)
p=odeint(der, 0.0, tp)
print p
</code></pre>
<p>But I think there is something wrong. My main goal is to get q(t) and then divide it by C to get Vc.</p>
<pre><code>Vc=p/C
print Vc
</code></pre>
|
<p>There are a few issues here:</p>
<p>Division in python2, as commenters have noted, doesn't cast integers to floats during division. So you'll want to make sure your parameters are floats:</p>
<pre><code> C=10e-9
R=1000.0 #Ohmios
Vi=10.0 #V
w=2*pi*1000.0 #Hz
</code></pre>
<p>Now, you don't really need, for what you're doing, to have those parameters as arguments. Let's start simpler, and just have the derivative use these as global variables:</p>
<pre><code>def der (q,t):
    I=((Vi/R)*sin(w*t))-(q/(R*C))
    return I
</code></pre>
<p>Next, your space only has the start and end points, because ni is 1! Let's set ni higher, to say, 1000:</p>
<pre><code>a, b, ni = 0.0, 10.0, 1000
tp=np.linspace(a, b, ni+1)
p=odeint(der, 0, tp)
plt.plot(tp,p) # let's plot instead of print; print p
</code></pre>
<p>You'll notice this still doesn't work very well. Your driving signal is 1000 Hz, so you'll need even more points, or a smaller data range. Let's try from 0 to 0.02, with 1000 points, giving us 50 points per oscillation:</p>
<pre><code>a, b, ni = 0.0, 0.02, 1000
tp=np.linspace(a, b, ni+1)
p=odeint(der, 0, tp)
plt.plot(tp,p) # let's plot instead of print; print p
</code></pre>
<p>But this is still awkwardly unstable! Why? Because most ode solvers like this have both relative and absolute error tolerances. In numpy, these are both set by default to around 1.4e-8. Thus the absolute error tolerance is frighteningly close to your normal <code>q</code>/<code>p</code> values. You'll want to either use units that will have your values be larger (probably the better solution), or set the absolute tolerance to a correspondingly smaller value (the easier solution, shown here):</p>
<pre><code>a, b, ni = 0.0, 0.02, 1000
tp=np.linspace(a, b, ni+1)
p=odeint(der, 0, tp, atol=1e-16)
plt.plot(tp,p)
</code></pre>
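<p>And since your end goal is the capacitor voltage, dividing the solution by C (a short sketch continuing from the variables defined above) gives:</p>
<pre><code>Vc = p / C            # q(t) / C, the capacitor voltage
plt.plot(tp, Vc)
plt.xlabel("t [s]")
plt.ylabel("Vc [V]")
plt.show()
</code></pre>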
|
python|numpy|differential-equations|integrate|odeint
| 2
|
4,827
| 62,167,953
|
How to keep values for nonexistent categories while subtracting dataframes?
|
<p>I have 12 dataframes with cumulative values, and I want to transform them to non-cumulative one.</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.DataFrame({
"ADMIN": [1, 2],
"FIN_SOURCE": ["A", "B"],
"PROG": [150, 155],
"FUNC": [1, 2],
"ECON": [30, 50],
"VALUE": [5, 10]
})
df2 = pd.DataFrame({
"ADMIN": [1, 2, 1],
"FIN_SOURCE": ["A", "B", "A"],
"PROG": [150, 155, 160],
"FUNC": [1, 2, 1],
"ECON": [30, 50, 50],
"VALUE": [10, 15, 50]
})
</code></pre>
<p>Each dataset has columns pointing to different categories (<code>ADMIN</code>, <code>FIN_SOURCE</code>, <code>PROG</code>, <code>FUNC</code>, <code>ECON</code>), and for dataset+1 there are more unique values within each category. </p>
<p>What I tried to do: </p>
<pre class="lang-py prettyprint-override"><code>indxs = ["ADMIN", "FIN_SOURCE", "PROG", "FUNC", "ECON"]
(df2.set_index(indxs) - df1.set_index(indxs)).reset_index()
ADMIN FIN_SOURCE PROG FUNC ECON VALUE
0 1 A 150 1 30 5.0
1 1 A 160 1 50 NaN #<- I want to keep it as 50 (value in df2 before sub)
2 2 B 155 2 50 5.0
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sub.html" rel="nofollow noreferrer"><code>DataFrame.sub</code></a> with <code>fill_value=0</code> parameter:</p>
<pre><code>indxs = ["ADMIN", "FIN_SOURCE", "PROG", "FUNC", "ECON"]
df = df2.set_index(indxs).sub(df1.set_index(indxs), fill_value=0).reset_index()
print (df)
ADMIN FIN_SOURCE PROG FUNC ECON VALUE
0 1 A 150 1 30 5.0
1 1 A 160 1 50 50.0
2 2 B 155 2 50 5.0
</code></pre>
|
python|pandas
| 1
|
4,828
| 62,361,162
|
Is A PyTorch Dataset Accessed by Multiple DataLoader Workers?
|
<p>When using more than 1 DataLoader workers in PyTorch, does every worker access the same Dataset instance? Or does each DataLoader worker have their own instance of Dataset?</p>
<pre><code>from torch.utils.data import DataLoader, Dataset

class NumbersDataset(Dataset):
    def __init__(self):
        self.samples = list(range(1, 1001))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

dataset = NumbersDataset()
train_loader = DataLoader(dataset, num_workers=4)
</code></pre>
|
<p>It seems like they are accessing the same instance. I have tried adding a static (class-level) variable to the dataset class and incrementing it every time a new instance is created. The code can be found below.</p>
<pre><code>from torch.utils.data import DataLoader, Dataset
class NumbersDataset(Dataset):
i = 0
def __init__(self):
NumbersDataset.i += 1
self.samples = list(range(1, 1001))
def __len__(self):
return len(self.samples)
def __getitem__(self, idx):
return self.samples[idx]
dataset_1 = NumbersDataset()
train_loader = DataLoader(dataset_1, num_workers=4)
for i, data in enumerate(train_loader):
pass
dataset_2 = NumbersDataset()
train_loader = DataLoader(dataset_2, num_workers=4)
for i, data in enumerate(train_loader):
pass
print(NumbersDataset.i)
</code></pre>
<p>The output is 2. Hope it helps :D</p>
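<p>A complementary probe (a sketch, not part of the original test) is <code>torch.utils.data.get_worker_info()</code>, which reports which worker process served each sample. Note that with <code>num_workers > 0</code> each worker process operates on its own copy of the dataset object, so in-place changes made inside a worker are not visible in the main process:</p>
<pre><code>from torch.utils.data import DataLoader, Dataset, get_worker_info

class NumbersDataset(Dataset):
    def __init__(self):
        self.samples = list(range(1, 1001))
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        info = get_worker_info()          # None in the main process, worker info inside a worker
        worker = info.id if info else -1
        return self.samples[idx], worker  # also return which worker served the sample

loader = DataLoader(NumbersDataset(), num_workers=4, batch_size=100)
for values, workers in loader:
    print(workers.unique())               # which worker produced this batch
</code></pre>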
|
python|python-3.x|deep-learning|neural-network|pytorch
| 2
|
4,829
| 62,368,138
|
Pandas giving error to read txt data file
|
<p>This is the code that I am using and it is not working for some reason. Please help me out.</p>
<p>I have created a text file with some data and when I try to use Pandas to read the data it is not working.</p>
<pre><code>dF = pd.read_csv("PandasLongSample.txt", delimiter='/t')
print(dF)
</code></pre>
<p>This is the error:</p>
<pre><code>C:\Users\SVISHWANATH\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
"""Entry point for launching an IPython kernel.
</code></pre>
<p>Thanks for the help!</p>
|
<p>Use a backslash (\) instead of a slash (/) for the tab delimiter.</p>
<pre><code>dF = pd.read_csv("PandasLongSample.txt", delimiter='\t')
print(dF)
</code></pre>
|
python|pandas
| 3
|
4,830
| 62,154,772
|
pandas printing maximum value found in a column with an f string
|
<p>If I have some made up data... How do I print just the index (date & time stamp) of the maximum value found in a column named <code>Temperature</code>?</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(11)
rows,cols = 50000,2
data = np.random.rand(rows,cols)
tidx = pd.date_range('2019-01-01', periods=rows, freq='H')
df = pd.DataFrame(data, columns=['Temperature','Value'], index=tidx)
maxy = df.Temperature.max()
maxDate = df.loc[df['Temperature'].idxmax()]
</code></pre>
<p>If I print <code>maxDate</code> there is a lot of unwanted info:</p>
<pre><code>print(maxy)
print(maxDate)
</code></pre>
<p>This outputs:</p>
<pre><code>0.9999949674183947
Temperature 0.999995
Value 0.518413
Name: 2023-01-06 02:00:00, dtype: float64
</code></pre>
<p>Ultimately I am hoping to create an <code>f</code> string that just prints <code>maximum temperature recorded in the dataset is 0.999995 found on 2023-01-06 02:00:00</code>, Without the extra information such as <code>Name:</code> and <code>, dtype: float64</code> and the <code>Value 0.518413</code> column... Thanks for any tips & tricks</p>
|
<p>You f string could be:</p>
<pre><code>f'maximum temperature recoded in the dataset is {maxy} found on {maxDate.name}'
</code></pre>
<p>Or without <code>maxy</code>:</p>
<pre><code>f'maximum temperature recoded in the dataset is {maxDate.Temperature} found on {maxDate.name}'
</code></pre>
<p>Output:</p>
<pre><code>'maximum temperature recoded in the dataset is 0.9999949674183947 found on 2023-01-06 02:00:00'
</code></pre>
|
python|pandas|string-formatting
| 0
|
4,831
| 51,241,471
|
F tensorflow/core/common_runtime/device_factory.cc:77] Duplicate registration of device factory for type GPU with the same priority 210
|
<p>When I execute my binary C++ file, this error happens.
I compile my C++ file against tensorflow_cc.so using make all, and the version of TensorFlow is 1.8.
Has anyone met this problem?</p>
|
<p>We recently encountered this error. It happened when we inadvertently linked against both libtensorflow.so (<code>-ltensorflow</code>) and libtensorflow_cc.so (<code>-ltensorflow_cc</code>). It went away when we picked one.</p>
|
c++|tensorflow
| 0
|
4,832
| 51,449,242
|
Search through a concatenated dataframe for exact match and then pull minimum date
|
<p>So I am new to pandas and am punching above my weight here. I have two csv files: one is a list of authors I am interested in (data frame 1) and the second file is a total list of authors for the publishing company and their publication date (data frame 2).</p>
<p>I need to use data frame 1 to see if there is an exact name match in data frame 2. If there is a match (there can be more than 1 match) I want to pull the minimum date. Ex:) For Jake Smith in df 1 there may be 2 matches in df 2 and i want to add the oldest publication date to data frame 1.</p>
<p>df </p>
<p><code>first name|last name |</code></p>
<p>df 2</p>
<p><code>first name|last name| publication date</code></p>
<p>desired</p>
<p>if author is in df1 then add the lowest publication date to df1</p>
<p>So heres what I did. I created the data frames from the csv files and concatenated all the author files to create df2. I then did an inner join on first and last name because I thought that would be the best way to name match. I keep getting an error. And then I used a group by to try and get the minimum date. </p>
<pre><code>import pandas as pd
files_path= 'C:'
df_1 = pd.read_csv( files_path + '/author_desired.csv', sep="|")
df_merged= pd.read_csv(files_path +'/master_list.csv', sep="|")
df_final= pd.join(df_1, df_merged, on= ['LAST_NAME' , 'FIRST_NAME'], how='inner')
df_final.groupby(['FIRST_NAME', 'LAST_NAME']).max()['FIRST_PUB_DATE']
df_final.to_csv(files_path + "/merged_file.csv")
</code></pre>
<p>PLEASE HELP</p>
|
<pre><code>lis1=[{'FIRST_NAME':'James','Last_Name':'Cameran','City':'NYC'},{'FIRST_NAME':'Samuel','Last_Name':'Smith','City':'London'},{'FIRST_NAME':'Kane','Last_Name':'Win','City':'NYC'}]
lis2=[{'FIRST_NAME':'James','Last_Name':'Cameran','Pub. Year':2011},{'FIRST_NAME':'Kane','Last_Name':'Win','Pub. Year':2010},{'FIRST_NAME':'James','Last_Name':'Cameran','Pub. Year':2018},{'FIRST_NAME':'Kane','Last_Name':'Win','Pub. Year':2014}]
import pandas as pd
df1=pd.DataFrame(lis1)
df2=pd.DataFrame(lis2)
print(df1)
print(df2)
df1['Full_Name']=df1.FIRST_NAME+" "+df1.Last_Name
df2['Full_Name']=df2.FIRST_NAME+" "+df2.Last_Name
merged=pd.merge(df1,df2)[['Full_Name','Pub. Year']]
df1['Pub. Year']=[merged[merged.Full_Name==fullname]['Pub. Year'].min() for fullname in df1.Full_Name]
print(df1)
</code></pre>
<p>Output:</p>
<pre><code> City FIRST_NAME Last_Name
0 NYC James Cameran
1 London Samuel Smith
2 NYC Kane Win
FIRST_NAME Last_Name Pub. Year
0 James Cameran 2011
1 Kane Win 2010
2 James Cameran 2018
3 Kane Win 2014
City FIRST_NAME Last_Name Full_Name Pub. Year
0 NYC James Cameran James Cameran 2011.0
1 London Samuel Smith Samuel Smith NaN
2 NYC Kane Win Kane Win 2010.0
</code></pre>
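<p>The per-row list comprehension above works, but on large frames a vectorised variant of the same idea may be preferable. A sketch reusing the <code>Full_Name</code> helper column created above:</p>
<pre><code>min_years = df2.groupby('Full_Name', as_index=False)['Pub. Year'].min()
df1 = df1.merge(min_years, on='Full_Name', how='left')  # authors with no match keep NaN
print(df1)
</code></pre>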
|
python|pandas|csv|dataframe|data-science
| 0
|
4,833
| 48,158,460
|
Distributed tensorflow monopolizes GPUs after running server.__init__
|
<p>I have two computers with two GPUs each. I am trying to start with distributed tensorflow and very confused about how it all works. On computer A I would like to have one <code>ps</code> tasks (I have the impression this should go on the CPU) and two <code>worker</code> tasks (one per GPU). And I would like to have two 'worker' tasks on computer B. Here's how I have tried to implement this, in <code>test.py</code></p>
<pre><code>import tensorflow as tf
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--job_name', required = True, type = str)
parser.add_argument('--task_idx', required = True, type = int)
args, _ = parser.parse_known_args()
JOB_NAME = args.job_name
TASK_INDEX = args.task_idx
ps_hosts = ["computerB-i9:2222"]
worker_hosts = ["computerA-i7:2222", "computerA-i7:2223", "computerB-i9:2223", "computerB-i9:2224"]
cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
server = tf.train.Server(cluster, job_name = JOB_NAME, task_index = TASK_INDEX)
if JOB_NAME == "ps":
server.join()
elif JOB_NAME == "worker":
is_chief = (TASK_INDEX == 0)
with tf.device(tf.train.replica_device_setter(
worker_device = "/job:worker/task:%d" % FLAGS.task_index, cluster = cluster)):
a = tf.constant(8)
b = tf.constant(9)
with tf.Session(server.target) as sess:
sess.run(tf.multiply(a, b))
</code></pre>
<p><strong>What I am finding by running <code>python3 test.py --job_name ps --task_idx 0</code> on computer A is that both GPUs on computer A are immediately reserved by the script and that computer B shows no activity.</strong> This is not what I expected. I thought that since for the <code>ps</code> job I simply run <code>server.join()</code> this should not use the GPU. However I can see by setting <code>pdb</code> break points that as soon as the server is initialized, the GPUs are taken. This leaves me with several questions:</p>
<ul>
<li><strong>Why does the server immediately take all the GPU capacity?</strong></li>
<li><strong>How am I supposed to allocate GPUs and launch the different processes?</strong></li>
<li><strong>Does my original plan even make sense?</strong> (I am still a little confused by tasks vs. clusters vs. servers etc...)</li>
</ul>
<p>I have watched the Tensorflow Developer Summit 2017 video on distributed Tensorflow and I have also been looking around on Github and blogs. I have not been able to find a working code example using the latest or even relatively recent distributed tensorflow functions. Likewise, I notice that many questions on Stack Overflow are not answered, so I have read related questions but not any that resolve my questions. I would appreciate any guidance or recommendations about other resources. Thanks!</p>
|
<p>I found that the following will work when invoking from command line:</p>
<p><code>CUDA_VISIBLE_DEVICES="" python3 test.py --job_name ps --task_idx 0 --dir_name TEST</code></p>
<p>Since I found this in a lot of code examples it seems like this may be the standard way to control an individual server's access to GPU resources. </p>
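<p>If you prefer to control this from inside the script instead of the shell, a minimal sketch (the values shown are assumptions for a two-GPU machine) is to set the variable before TensorFlow is imported:</p>
<pre><code>import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''   # ps task: hide all GPUs; worker 0 would use '0', worker 1 '1'
import tensorflow as tf                   # must be imported after the variable is set
</code></pre>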
|
python|tensorflow
| 0
|
4,834
| 48,417,950
|
pandas dataframe - merging rows by substituting values with column value
|
<p>Apologies for the ambiguous title.</p>
<p>I have a dataset of students and I want to run a clustering algorithm on the students.</p>
<p>The dataset is structured such that there are more than one row per student, each with age, grade (9th, 10th, etc) a single class the student is taking and the final score in that class.</p>
<p>In pre-processing I apply pd.get_dummies to get one column for each class students are taking with a boolean value and the score column stays as is.</p>
<p>I want to merge the rows such that for each student I only have one row (because I want to cluster over students, not each row) and instead of 1 or 0 for each class, I want the final score of that class to appear in the class column and then eliminate the score column.</p>
<p>I will try to present an example:</p>
<pre><code>Name, Age, Grade, Class, Score
John, 16, 9, Biology, 98
John, 16, 9, Algebra, 95
John, 16, 9, French, 96
</code></pre>
<p>Applying pd.get_dummies results in the following columns: </p>
<pre><code>Name, Age, Grade, Class_Biology, Class_Algebra, Class_French, Score
</code></pre>
<p>I am interested in the following result:</p>
<pre><code>Name, Age, Grade, Class_Biology, Class_Algebra, Class_French
John, 16, 9, 98, 95, 96
</code></pre>
<p>Is there a more efficient way than iterating over the rows and manually creating a new row in the dataframe for each student?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>add_prefix</code></a>:</p>
<pre><code>df = (df.set_index(['Name','Age','Grade', 'Class'])['Score']
.unstack()
.add_prefix('Class_')
.reset_index()
.rename_axis(None, axis=1))
print (df)
Name Age Grade Class_Algebra Class_Biology Class_French
0 John 16 9 95 98 96
</code></pre>
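<p>An equivalent sketch using <code>pivot_table</code> (assuming at most one score per student/class pair, so no real aggregation happens):</p>
<pre><code>df = (df.pivot_table(index=['Name', 'Age', 'Grade'], columns='Class', values='Score')
        .add_prefix('Class_')
        .reset_index()
        .rename_axis(None, axis=1))
</code></pre>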
|
python|python-2.7|pandas
| 2
|
4,835
| 47,989,741
|
Python - Array becomes scalar variable when passed into function
|
<p>I am trying to create a simple python script that when given a photo, first converts it to greyscale and then will band it into a number of colors. For example if the number of colours passed in is 2, the greyscale image will be changed so that each pixel is either pitch black (0) or bright white (255). </p>
<p>However, when calling my function 'getGreyscaleValue' used to determine the greyscale value of each pixel, I am getting an error. It seems that when passing the arrays, 'bandWidthArray' and 'colorsArray' into the function, they change from arrays to a scalar variable '0.0'. Running the following script and observing the printed values, should replicate the problem:</p>
<pre><code>import numpy as np
from PIL import Image
numberOfColors = 2;
greyscaleRange=255;
col = Image.open("IMG_5525.JPG")
gray = col.convert('L') # Make grayscale
y=np.asarray(gray.getdata(),dtype=np.float64).reshape((gray.size[1],gray.size[0]))
def getGreyScaleValue(x, bandWidthArray, colorsArray):
print(bandWidthArray)
print(colorsArray)
for i in range(1, bandWidthArray.len):
if(int(round(x))<int(round(bandWidthArray[i]))):
return colorsArray[i-1]
return 255
bandWidthArray = np.linspace(0, greyscaleRange, numberOfColors+1)
colorsArray = np.linspace(0, greyscaleRange, numberOfColors)
getGreyScaleValue = np.vectorize(getGreyScaleValue)
print(bandWidthArray)
print(colorsArray)
y = getGreyScaleValue(y, bandWidthArray, colorsArray)
y=np.asarray(y,dtype=np.uint8) #if values still in range 0-255!
w=Image.fromarray(y,mode='L')
w.save('out.jpg')
</code></pre>
<p>Stack trace is as follows: </p>
<pre><code>PS C:\python\pythonimages> python imgChange1.py
[ 0. 127.5 255. ]
[ 0. 255.]
0.0
0.0
Traceback (most recent call last):
File "imgChange1.py", line 27, in <module>
y = getGreyScaleValue(y, bandWidthArray, colorsArray)
File "C:\Users\Jack\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\lib\function_base.py", line 2734, in __call__
return self._vectorize_call(func=func, args=vargs)
File "C:\Users\Jack\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\lib\function_base.py", line 2804, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "C:\Users\Jack\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\lib\function_base.py", line 2764, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "imgChange1.py", line 15, in getGreyScaleValue
for i in range(1, bandWidthArray.len):
AttributeError: 'numpy.float64' object has no attribute 'len'
</code></pre>
|
<p>Change:</p>
<pre><code>for i in range(1, bandWidthArray.len):
</code></pre>
<p>to:</p>
<pre><code>for i in range(1, len(bandWidthArray)):
</code></pre>
<p>NumPy arrays don't have a <code>len</code> method.</p>
<p>Furthermore, don't vectorize your function. Remove this line:</p>
<pre><code>getGreyScaleValue = np.vectorize(getGreyScaleValue)
</code></pre>
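<p>If you do want to keep <code>np.vectorize</code> for the element-wise mapping over the image, a sketch (not part of the original fix) is its <code>excluded</code> parameter, which passes the second and third arguments through whole instead of broadcasting them:</p>
<pre><code>getGreyScaleValue = np.vectorize(getGreyScaleValue, excluded={1, 2})  # keep args 1 and 2 un-vectorized
y = getGreyScaleValue(y, bandWidthArray, colorsArray)
</code></pre>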
|
python|numpy
| 0
|
4,836
| 48,654,403
|
How do I know the maximum number of threads per block in python code with either numba or tensorflow installed?
|
<p>Is there any Python code for this, using either numba or tensorflow? For example, if I want to know the GPU memory info, I can simply use:</p>
<pre><code>from numba import cuda
gpus = cuda.gpus.lst
for gpu in gpus:
with gpu:
meminfo = cuda.current_context().get_memory_info()
print("%s, free: %s bytes, total, %s bytes" % (gpu, meminfo[0], meminfo[1]))
</code></pre>
<p>in numba. But I cannot find any code that gives me the maximum-threads-per-block info.
I would like the code to detect the maximum number of threads per block and then calculate the required number of blocks in each direction.</p>
|
<pre><code>from numba import cuda
gpu = cuda.get_current_device()
print("name = %s" % gpu.name)
print("maxThreadsPerBlock = %s" % str(gpu.MAX_THREADS_PER_BLOCK))
print("maxBlockDimX = %s" % str(gpu.MAX_BLOCK_DIM_X))
print("maxBlockDimY = %s" % str(gpu.MAX_BLOCK_DIM_Y))
print("maxBlockDimZ = %s" % str(gpu.MAX_BLOCK_DIM_Z))
print("maxGridDimX = %s" % str(gpu.MAX_GRID_DIM_X))
print("maxGridDimY = %s" % str(gpu.MAX_GRID_DIM_Y))
print("maxGridDimZ = %s" % str(gpu.MAX_GRID_DIM_Z))
print("maxSharedMemoryPerBlock = %s" % str(gpu.MAX_SHARED_MEMORY_PER_BLOCK))
print("asyncEngineCount = %s" % str(gpu.ASYNC_ENGINE_COUNT))
print("canMapHostMemory = %s" % str(gpu.CAN_MAP_HOST_MEMORY))
print("multiProcessorCount = %s" % str(gpu.MULTIPROCESSOR_COUNT))
print("warpSize = %s" % str(gpu.WARP_SIZE))
print("unifiedAddressing = %s" % str(gpu.UNIFIED_ADDRESSING))
print("pciBusID = %s" % str(gpu.PCI_BUS_ID))
print("pciDeviceID = %s" % str(gpu.PCI_DEVICE_ID))
</code></pre>
<p>These appear to be all the attributes that are currently supported. I found the list <a href="https://github.com/numba/numba/blob/master/numba/cuda/cudadrv/enums.py#L303" rel="noreferrer">here</a>, which matches the enum values in the CUDA docs, so extending that is fairly trivial. I added <code>CU_DEVICE_ATTRIBUTE_TOTAL_CONSTANT_MEMORY = 9</code>, for example, and that now works as expected.</p>
<p>If I find time this weekend I'll try to round those out, get the docs updated, and submit a PR.</p>
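<p>To go from the reported limit to a launch configuration (the second half of the question), a minimal sketch, where <code>array_size</code> is a hypothetical problem size:</p>
<pre><code>threads_per_block = gpu.MAX_THREADS_PER_BLOCK            # e.g. 1024 on most current GPUs
array_size = 1_000_000                                    # hypothetical problem size
blocks_per_grid = (array_size + threads_per_block - 1) // threads_per_block  # ceiling division
print(threads_per_block, blocks_per_grid)
</code></pre>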
|
python|tensorflow|cuda|numba
| 7
|
4,837
| 48,497,449
|
python pandas check column contains item from a list
|
<p>I have two data frames like </p>
<pre><code>vid vbull
1125 RHSA:2017:3200
1127 RHSA:2017:3205
1128 RHSA:2017:3208
1129 RHSA:2017:3209
kbid vdesc
2401 This contains details for RHSA:2017:3205
2402 This contains details for RHSA:2017:3206
2403 This contains details forRHSA:2017:3207
2404 This contains details for RHSA:2017:3208
2405 This contains details for RHSA:2017:3200
</code></pre>
<p>Need output from df1,df2 for matching vbull in vdesc like :</p>
<pre><code>vid vbull kbid vdesc
1125 RHSA:2017:3200 2405 This contains details for RHSA:2017:3200
1127 RHSA:2017:3207 2403 This contains details for RHSA:2017:3207 ...
</code></pre>
<p>Tried this to get the matched items but not sure how to get the matched item also in the output </p>
<pre><code>df2[df2.vdesc.str.contains('|'.join(df1.vbull))]
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>extract</code></a> for values from <code>vbull</code> first:</p>
<pre><code>df2['extracted'] = df2.vdesc.str.extract('(' + '|'.join(df1.vbull) + ')', expand=False)
print (df2)
kbid vdesc extracted
0 2401 This contains details for RHSA:2017:3205 RHSA:2017:3205
1 2402 This contains details for RHSA:2017:3206 NaN
2 2403 This contains details for RHSA:2017:3207 NaN
3 2404 This contains details for RHSA:2017:3208 RHSA:2017:3208
4 2405 This contains details for RHSA:2017:3200 RHSA:2017:3200
</code></pre>
<p>Then filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df3 = df2[df2['extracted'].notnull()].copy()
print (df3)
kbid vdesc extracted
0 2401 This contains details for RHSA:2017:3205 RHSA:2017:3205
3 2404 This contains details for RHSA:2017:3208 RHSA:2017:3208
4 2405 This contains details for RHSA:2017:3200 RHSA:2017:3200
</code></pre>
<p>And last add values of <code>vid</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a>:</p>
<pre><code>df3['new'] = df3['extracted'].map(df1.set_index('vbull')['vid'])
print (df3)
kbid vdesc extracted new
0 2401 This contains details for RHSA:2017:3205 RHSA:2017:3205 1127
3 2404 This contains details for RHSA:2017:3208 RHSA:2017:3208 1128
4 2405 This contains details for RHSA:2017:3200 RHSA:2017:3200 1125
</code></pre>
|
python|list|pandas|merge|extract
| 0
|
4,838
| 48,568,712
|
DataFrame: add same data for different column and merge the whole file
|
<p>I have a DataFrame which looks like this:</p>
<pre><code>Name Year Jan Feb Mar Apr
Bee 1998 26 23 22 19
Cee 1999 43 23 43 23
</code></pre>
<p>I want to change the DataFrame into something like this:</p>
<pre><code>Name Year Mon Val
Bee 1998 1 26
Bee 1998 2 23
Bee 1998 3 22
Bee 1998 4 19
Cee 1999 1 43
Cee 1999 2 23
Cee 1999 3 43
Cee 1999 4 23
</code></pre>
<p>How do i acquire this in Python with Pandas or any other library? </p>
|
<p>First, reshape your DataFrame with <code>pd.DataFrame.melt</code>:</p>
<pre><code>df = df.melt(id_vars=['Name', 'Year'], var_name='Mon', value_name='Value')
</code></pre>
<p>...and then convert your <code>Mon</code> values to datetime values, and extract the month number:</p>
<pre><code>df.loc[:, 'Mon'] = pd.to_datetime(df['Mon'], format='%b').dt.month
# Name Year Mon Value
# 0 Bee 1998 1 26
# 1 Cee 1999 1 43
# 2 Bee 1998 2 23
# 3 Cee 1999 2 23
# 4 Bee 1998 3 22
# 5 Cee 1999 3 43
# 6 Bee 1998 4 19
# 7 Cee 1999 4 23
</code></pre>
|
python|pandas|dataframe
| 0
|
4,839
| 48,533,286
|
Why does my data change into NaN in Task4?
|
<p>Why does my data change into NaN in task 4? I also tried using .loc[], but that still doesn't work. I need to be able to use the numbers.</p>
<pre><code>dec6 = pd.read_csv('coinmarketcap_06122017.csv', header=0)
market_cap_raw = dec6[['id', 'market_cap_usd']]
print(market_cap_raw.describe())
#print(market_cap_raw)
market_cap_raw.count()
#Task 3
cap = market_cap_raw.query('market_cap_usd > 0')
cap.count()
print(cap.describe())
#Task 4
cap10 = cap.head(10).reindex(index=cap['id'])
print(cap10.describe())
</code></pre>
<p>Result: </p>
<pre><code> market_cap_usd
count 1.144000e+03
mean 4.861599e+08
std 6.713982e+09
min 1.200000e+01
25% 7.513858e+05
50% 6.856627e+06
75% 4.043108e+07
max 1.862130e+11
market_cap_usd
count 1.144000e+03
mean 4.861599e+08
std 6.713982e+09
min 1.200000e+01
25% 7.513858e+05
50% 6.856627e+06
75% 4.043108e+07
max 1.862130e+11
market_cap_usd
count 0.0
mean NaN
std NaN
min NaN
25% NaN
50% NaN
75% NaN
max NaN
</code></pre>
<p>The last print results in NaN. </p>
|
<p>The <code>reindex()</code> call is what turns the data into NaN: the values in <code>cap['id']</code> are not labels in <code>cap</code>'s integer index, so every reindexed row gets filled with NaN. If the goal is just to label the rows by id, use <code>set_index('id')</code> instead.</p>
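<p>A sketch of Task 4 without that problem, assuming the goal was simply to label the top-10 rows by their id:</p>
<pre><code>cap10 = cap.head(10).set_index('id')
print(cap10.describe())
</code></pre>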
|
python|pandas
| 0
|
4,840
| 48,479,571
|
While Import python. ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
|
<p>I have tried many solutions like installing from different sources official google link <code>Google.api...</code>, <code>pypi</code> and also building from git repo.</p>
<p>But every time I face the same problem <code>ImportError: libcublas.so.9.0:</code></p>
<p>OS: <code>Linux Arch</code>, tensorflow: <code>tensorflow-gpu</code> version <code>1.5</code></p>
<p>Nvidia: <code>Cuda 9.1 and Cudnn 7.0.5</code></p>
<blockquote>
<p>Note: tensorflow cpu is working fine</p>
</blockquote>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/usr/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
|
<p>Your error message indicates that Tensorflow is looking for CUDA 9.0, while the default download is CUDA 9.1. I suggest downgrading to CUDA 9.0. I just installed TF prebuilt binaries with CUDA 9.0 and the corresponding cudnn 7.05 and everything ran fine. From <a href="https://github.com/tensorflow/tensorflow/issues/15656" rel="nofollow noreferrer">here</a> and <a href="https://github.com/tensorflow/tensorflow/issues/15140" rel="nofollow noreferrer">here</a> it seems there are some issues with CUDA 9.1 that are still to be worked out.</p>
<p>Note also that currently the <a href="https://www.tensorflow.org/versions/r1.5/install/install_linux" rel="nofollow noreferrer">TF 1.5 install guide</a> seems to be incorrect since it specifies CUDA 8.0 and cudnn 6.0 for the prebuilt TF while <a href="https://github.com/tensorflow/tensorflow/releases" rel="nofollow noreferrer">the release notes</a> specify cuda 9 and cudnn 7</p>
|
python|tensorflow|archlinux
| 5
|
4,841
| 48,625,687
|
replace a chained method with a variable in python
|
<p>I have a function from the <code>simple_salesforce</code> package,</p>
<pre><code>sf = Salesforce(username, pass, key)
</code></pre>
<p>In order to update an object in the salesforce database, you call sf by:</p>
<pre><code>sf.bulk.object.update(data)
</code></pre>
<p>For example, <code>account</code> is the native customer account object in salesforce, so you would feed <code>data</code> for updating accounts like this:</p>
<pre><code>sf.bulk.account.update(data)
</code></pre>
<p>I was wondering if there is a way in python to set that specific piece of the chain as an argument.</p>
<p>what I would like to do:</p>
<pre><code>def update_sf(object, data):
sf.bulk.object.update(data)
</code></pre>
<p>That way I could call:</p>
<pre><code>update_sf('account', data)
</code></pre>
<p>The only other way I can think of doing this is to create a dictionary with the dozens of values for objects in the instance</p>
<pre><code>{'account':sf.bulk.account.update(),
'contact':sf.bulk.contact.update()}
</code></pre>
<p>Is there a way to do this?</p>
|
<p>You can use builtin function <a href="https://docs.python.org/3/library/functions.html#getattr" rel="nofollow noreferrer"><code>getattr</code></a> to fetch your desired entity:</p>
<pre><code>>>> getattr(sf.bulk, object).update(data)
</code></pre>
<p>To also able to dynamically select operation(insert, update, delete) you can chain <code>getattr</code> </p>
<pre><code>>>> getattr(getattr(sf.bulk, object),operation)
</code></pre>
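<p>Put together, the wrapper described in the question could look like this (a sketch using the names from the question):</p>
<pre><code>def update_sf(object_name, data):
    # resolve e.g. sf.bulk.account at call time, then run its update()
    return getattr(sf.bulk, object_name).update(data)

update_sf('account', data)
</code></pre>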
|
python|pandas|dictionary|methods|salesforce
| 2
|
4,842
| 70,786,169
|
How to measure training time per batches during Deep Learning in Tensorflow?
|
<p>I want to measure training time per batch during deep learning in TensorFlow.
There are several ways to measure training time per epoch, but I cannot find how to measure training time per batch.</p>
<p>I tried Tensorboard, but I don't know how to add some kind of 'execution time' scalars in tensorboard callbacks.</p>
<p>And I also tried to override the function 'train_step' but it didn't work the way I want.</p>
|
<p>You can do this by creating a keras custom callback. The code below will print out the time it takes to process each batch, the training accuracy for that batch and the loss for that batch</p>
<pre><code>import time
from tensorflow import keras

class batch_timer(keras.callbacks.Callback):
def __init__(self ):
super(batch_timer, self).__init__()
def on_train_batch_begin(self, batch, logs=None):
self.start_time=time.time()
def on_train_batch_end(self, batch, logs=None):
stop_time=time.time()
duration =stop_time-self.start_time
acc=logs.get('accuracy')* 100 # get training accuracy
loss=logs.get('loss')
msg='processing batch {0:6s} duration= {1:12.4f} accuracy= {2:8.3f} loss: {3:8.5f}'.format(str(batch), duration, acc, loss)
print(msg )
</code></pre>
<p>Now you are going to get tons of data . When you run model.fit include</p>
<pre><code>history = model.fit(etc.... , verbose=0, callbacks=[batch_timer()])
</code></pre>
<p>I set verbose=0 because otherwise the printout gets a bit messy: model.fit's own progress output gets interleaved with the callback's printing.</p>
|
python|tensorflow|deep-learning
| 0
|
4,843
| 70,889,776
|
while saving model: list index (0) out of range
|
<p>this is my face_landmark.py</p>
<pre><code>import cv2
import numpy as np
import tensorflow as tf
from tensorflow import keras
def get_landmark_model(saved_model="models/pose_model"):
model = keras.models.load_model(saved_model)
return model
</code></pre>
<p>and in camera.py, I am importing it and using its function</p>
<pre><code>from face_detector import get_face_detector, find_faces
from face_landmarks import get_landmark_model, detect_marks
face_model = get_face_detector()
landmark_model = get_landmark_model()
</code></pre>
<p>In face_landmarks.py, <strong>the error is raised when I try to load the model</strong>.</p>
|
<p>While loading the model, provide the file extension as well. For example:</p>
<pre><code>def get_landmark_model(saved_model="models/pose_model.h5"):
model = keras.models.load_model(saved_model)
return model
</code></pre>
<p>Here <strong>h5</strong> is the extension of the model file.</p>
|
python|tensorflow|keras
| 0
|
4,844
| 70,926,249
|
Barplot with twinx and two bars per month
|
<p>Given a df where the index is the month and there are two columns, 'Ta' and 'G_Bn'.</p>
<p><a href="https://i.stack.imgur.com/qQiVw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qQiVw.png" alt="df" /></a></p>
<p>Those columns shall be plotted against the month with seaborn to get a barplot with two y-axes: one for 'Ta' and one for 'G_Bn'. The bars have to be next to each other. My idea:</p>
<pre><code>#Create combo chart
fig, ax1 = plt.subplots(figsize=(20,10))
ax1 = sns.barplot(data=dataMonthly, color ='#2b83ba')
ax1.tick_params(axis='y')
ax1.set_xlabel('Time')
ax1.set_ylabel('Value1')
ax2 = ax1.twinx()
ax2 = sns.barplot(x = dataMonthly.index, y = "Ta", data=dataMonthly, color ="#404040")
ax2.set_ylabel('Value2')
ax2.tick_params(axis='y')
plt.show()
</code></pre>
<p>The problem is that the bars always lie on top of each other because I plot on ax1 and then ax2. How do I solve this? Any ideas?</p>
|
<p>The following approach creates a dummy categorical variable to serve as <code>hue</code>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dataMonthly = pd.DataFrame({'G_Bn': np.random.uniform(30, 40, 5).cumsum(),
'Ta': np.random.uniform(5, 10, 5).cumsum()})
fig, ax1 = plt.subplots(figsize=(20, 10))
ax2 = ax1.twinx()
palette = ['#2b83ba', "#404040"]
categories = ["G_Bn", "Ta"]
for ax, cat_val in zip([ax1, ax2], categories):
sns.barplot(x=dataMonthly.index, y=dataMonthly[cat_val],
hue=pd.Categorical([cat_val] * len(dataMonthly), categories), palette=palette, ax=ax)
ax1.set_xlabel('Time')
ax2.legend_.remove() # a legend was created for ax1 as well as ax2
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/DZIPd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZIPd.png" alt="sns.barplot on twinx ax, using dummy hue" /></a></p>
<p>PS: Note that if you create the <code>ax</code> before calling a seaborn function, it is strongly recommended to use the <code>ax=...</code> parameter and to not catch the return-value of seaborn's function.</p>
|
python|pandas|seaborn|bar-chart
| 0
|
4,845
| 51,573,241
|
TypeError: while_loop() got an unexpected keyword argument 'maximum_iterations' In Jupyter Azure
|
<p>I am setting up my recurrent neural network in Azure:</p>
<pre><code>model = Sequential()
model.add(GRU(units=512,
return_sequences=True,
input_shape=(None, x1,)))
model.add(Dense(y1, activation='sigmoid'))
</code></pre>
<p>But I am getting the error:</p>
<pre><code>TypeError: while_loop() got an unexpected keyword argument 'maximum_iterations'
</code></pre>
<p>I am not certain, but I believe I may be doing something that is now deprecated in current versions of TensorFlow & Keras, as in <a href="https://github.com/keras-team/keras/issues/10440" rel="nofollow noreferrer">this</a> example a similar error occurred and such deprecations were pointed out. I am using Python 3.6 on the Jupyter Azure platform, which also means I am unsure what version of Keras and TensorFlow I am using.</p>
<p>My full tracback error message is: </p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-7-e6bcba2d0346> in <module>()
205 model.add(GRU(units=512, return_sequences=True,
--> 207 input_shape=(None,x1,)))
208
~/anaconda3_501/lib/python3.6/site-packages/keras/engine/sequential.py in add(self, layer)
164 # and create the node connecting the current layer
165 # to the input layer we just created.
--> 166 layer(x)
167 set_inputs = True
168 else:
~/anaconda3_501/lib/python3.6/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
498
499 if initial_state is None and constants is None:
--> 500 return super(RNN, self).__call__(inputs, **kwargs)
501
502 # If any of `initial_state` or `constants` are specified and are Keras
~/anaconda3_501/lib/python3.6/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
458 # Actually call the layer,
459 # collecting output(s), mask(s), and shape(s).
--> 460 output = self.call(inputs, **kwargs)
461 output_mask = self.compute_mask(inputs, previous_mask)
462
~/anaconda3_501/lib/python3.6/site-packages/keras/layers/recurrent.py in call(self, inputs, mask, training, initial_state)
1587 mask=mask,
1588 training=training,
-> 1589 initial_state=initial_state)
1590
1591 @property
~/anaconda3_501/lib/python3.6/site-packages/keras/layers/recurrent.py in call(self, inputs, mask, training, initial_state, constants)
607 mask=mask,
608 unroll=self.unroll,
--> 609 input_length=timesteps)
610 if self.stateful:
611 updates = []
~/anaconda3_501/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length)
2955 parallel_iterations=32,
2956 swap_memory=True,
-> 2957 maximum_iterations=input_length)
2958 last_time = final_outputs[0]
2959 output_ta = final_outputs[1]
TypeError: while_loop() got an unexpected keyword argument 'maximum_iterations'
</code></pre>
<p>I have also learned from <a href="https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/23_Time-Series-Prediction.ipynb" rel="nofollow noreferrer">this</a> tutorial that <code>WARNING:tensorflow:From keep_dims is deprecated, use keepdims instead</code>. If this is valid, how could I do this? </p>
<p>This maybe something quite straight forward but I am quite confused, Help with this would be really appreciated. </p>
|
<pre><code>!pip uninstall -y keras
!pip install keras==2.1.2
</code></pre>
<p>And now it works. Keras 2.1.3+ began passing the <code>maximum_iterations</code> argument to <code>tf.while_loop</code>, which older TensorFlow releases do not accept, so pinning Keras to 2.1.2 (or upgrading TensorFlow) removes the mismatch.</p>
|
python|azure|tensorflow|keras|jupyter-notebook
| 6
|
4,846
| 51,733,128
|
Your kernel may have been built without NUMA support
|
<p>I have Jetson TX2, python 2.7, Tensorflow 1.5, CUDA 9.0</p>
<p>TensorFlow seems to be working, but every time I run the program I get this warning:</p>
<pre><code>with tf.Session() as sess:
    print (sess.run(y,feed_dict))
...
</code></pre>
<pre><code>2018-08-07 18:07:53.200320: E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:881] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2018-08-07 18:07:53.200427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.66GiB freeMemory: 1.79GiB
2018-08-07 18:07:53.200474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2018-08-07 18:07:53.878574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:859] Could not identify NUMA node of /job:localhost/replica:0/task:0/device:GPU:0, defaulting to 0. Your kernel may not have been built with NUMA support.
</code></pre>
<p>Should I be worried ? Or is it something negligible ?</p>
|
<p>It shouldn't be a problem for you, since you don't need NUMA support for this board (it has only one memory controller, so memory accesses are uniform).</p>
<p>Also, I found <a href="https://devtalk.nvidia.com/default/topic/1027507/jetson-tx2/numa-error-running-tensorflow-on-jetson-tx2/" rel="noreferrer">this post</a> on nvidia forum that seems to confirm this.</p>
|
tensorflow|linux-kernel|numa
| 5
|
4,847
| 51,702,572
|
How to find common elements in several dataframes
|
<p>I have the following dataframes:</p>
<pre><code>df1 = pd.DataFrame({'col1': ['A','M','C'],
'col2': ['B','N','O'],
# plus many more
})
df2 = pd.DataFrame({'col3': ['A','A','A','B','B','B'],
'col4': ['M','P','Q','J','P','M'],
# plus many more
})
</code></pre>
<p>Which look like these:</p>
<p><strong>df1:</strong></p>
<pre><code>col1 col2
A B
M N
C O
#...plus many more
</code></pre>
<p><strong>df2:</strong></p>
<pre><code>col3 col4
A M
A P
A Q
B J
B P
B M
#...plus many more
</code></pre>
<p>The objective is to create a dataframe containing all elements in <code>col4</code> for each <code>col3</code> that occurs in one row in <code>df1</code>. For example, let's look at row 1 of <code>df1</code>. We see that <code>A</code> is in <code>col1</code> and <code>B</code> is in <code>col2</code>. Then, we go to <code>df2</code>, and check what <code>col4</code> is for <code>df2[df2['col3'] == 'A']</code> and <code>df2[df2['col3'] == 'B']</code>. We get, for <code>A</code>: <code>['M','P','Q']</code>, and for <code>B</code>, <code>['J','P','M']</code>. The intersection of these is<code>['M', 'P']</code>, so what I want is something like this</p>
<pre><code>col1 col2 col4
A B M
A B P
....(and so on for the other rows)
</code></pre>
<p>The naive way to go about this is to iterate over rows and then get the intersection, but I was wondering if it's possible to solve this via merging techniques or other faster methods. So far, I can't think of any way how.</p>
|
<p>This should achieve what you want, using a combination of <code>merge</code>, <code>groupby</code> and set intersection:</p>
<pre><code># Getting tuple of all col1=col3 values in col4
df3 = pd.merge(df1, df2, left_on='col1', right_on='col3')
df3 = df3.groupby(['col1', 'col2'])['col4'].apply(tuple)
df3 = df3.reset_index()
# Getting tuple of all col2=col3 values in col4
df3 = pd.merge(df3, df2, left_on='col2', right_on='col3')
df3 = df3.groupby(['col1', 'col2', 'col4_x'])['col4_y'].apply(tuple)
df3 = df3.reset_index()
# Taking set intersection of our two tuples
df3['col4'] = df3.apply(lambda row: set(row['col4_x']) & set(row['col4_y']), axis=1)
# Dropping unnecessary columns
df3 = df3.drop(['col4_x', 'col4_y'], axis=1)
</code></pre>
<p></p>
<pre><code>print(df3)
col1 col2 col4
0 A B {P, M}
</code></pre>
<p>If required, see <a href="https://stackoverflow.com/questions/27263805/pandas-when-cell-contents-are-lists-create-a-row-for-each-element-in-the-list">this answer</a> for examples of how to 'melt' <code>col4</code>.</p>
|
python|pandas
| 1
|
4,848
| 41,843,600
|
how to install cudnn5.1 with cuda 8.0 support for tensorflow?
|
<p>I am unable to find the cudnn-8.0-linux-x64-v5.1.tgz file. The download link has only .deb file and when I install it using
sudo dpkg -i /path/to/deb/file
I get libcudnn.so.5.1.5 file and not the headers (cudnn.h). Where can I get the .tgz file with all the .so and .h files? I am looking for libcudnn.so.5.1</p>
|
<p>You can download cuDNN from <a href="https://developer.nvidia.com/cudnn" rel="nofollow noreferrer">here</a>, you need to have an NVIDIA developer account which is free of cost. After downloading it, extract the contents and copy the files to appropriate locations:</p>
<pre><code>$ sudo cp -P include/cudnn.h /usr/include
$ sudo cp -P lib64/libcudnn* /usr/lib/x86_64-linux-gnu/
$ sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcudnn*
</code></pre>
|
cuda|tensorflow|cudnn
| 1
|
4,849
| 42,065,870
|
How can i use my own images to train my CNN neural network in tensorFlow
|
<p>I am currently working on a program that uses a CNN TensorFlow neural network, and I want to use my own images to train and test it. Please, I would like some advice because I am new to deep learning.</p>
<p>Thanks.</p>
|
<p>Download the ronnie package from python: [{pythonpath}\scripts\pip3 ronnie].
Construct the training data structure as [label, image pixel] and execute the code below to create the training data.</p>
<pre><code>dataFileLoc = os.path.join(dir, "./data/digitalRec/train.csv")
orgDF = collector.initial(dataFileLoc)
# The return object from buildFromDF holds ImgXs and LabelYs in a dict structure:
# {meta.DATA_INPUTXs: imgXs, meta.DATA_LABELYs: labelYs}
trainData = collector.buildFromDF(orgDF, batchNum=1, batchSize=30000)
</code></pre>
|
tensorflow
| 0
|
4,850
| 64,454,408
|
Performing Calculations From A Pandas Data Frame with Multiple Conditions
|
<p>Forgive the question as I'm a science major, not computer science and I'm teaching myself Python to help with a class project.</p>
<p>I have a Pandas data frame that I've imported from a .csv that looks like:</p>
<pre><code>Item_ID Event_ID Value
27 83531 2533501.8
28 83531 1616262
31 83531 269829
32 83531 55.8
33 83531 269829
34 83531 4882
35 83531 269829
36 83531 4882
37 83531 55.8
38 83531 55.8
27 83532 7137904.8
28 83532 5873877.6
31 83532 497381
32 83532 55.7
33 83532 497381
34 83532 7568
35 83532 497381
36 83532 7568
37 83532 55.7
38 83532 55.7
</code></pre>
<p>This data is from a manual entry that is done multiple times daily, where Item_ID is the type of measurement, Event_ID is the unique identifier for each "data entry event" by the user, and Value is the value of the measurement.</p>
<p>I need to perform a number of calculations on each unique Event_Id.</p>
<pre><code>Calc1 = ([28]/[27])*(([31]*[32])/[28])*(([33]-[34])/[33])
Calc2 = [36]/[35]
Calc3 = ([35]-[113])/[35]
Calc4 = [37]
Calc5 = [38]
</code></pre>
<p>Each number in the above formulas represents an Item_ID. I want to replace the Item_ID in the formula with the value from the same row for each Event_ID.</p>
<p>This project was started a month ago and will run for 6 more weeks. By then, there will be too many data points to perform the calculations by hand.</p>
<p>As these calculations cannot be performed across Event_IDs, the formula for Event_ID 85831 would look like:</p>
<pre><code>Calc1_Data = ([1616262]/[2533501.8])*(([269829]*[55.8])/[1616262])*(([269829]-[4882])/[269829])
Calc2_Data = [4882]/[269829]
Calc3_Data = ([497381]-[0])/[497381] ***0 is placed here as Item_ID 113 does not exist for this Event_ID
Calc4_Data = [55.7]
Calc5_Data = [55.7]
</code></pre>
<p>The results would then be put into a new data frame that I could then perform my analysis on.</p>
<pre><code>Event_ID Clac1_Result Calc2_Result Calc3_Result Calc4_Result Calc5_Result
85829
85830
85331 RESULTS HERE
85332 RESULTS HERE
85833
85834
</code></pre>
<p>This is my first go at asking a question here since I've been able to find all of my other answers in the library docs or previously asked questions. If I didn't provide enough information let me know and I'll clarify if possible.</p>
<p>Thanks</p>
|
<p>You can use <code>groupby</code> followed by <code>agg</code> methods to do that.</p>
<p>First, define your calculations as functions:</p>
<pre><code># Define calculations
def Calc1(x):
return (x[28]/x[27])*((x[31]*x[32])/x[28])*((x[33]-x[34])/x[33])
def Calc2(x):
return x[36]/x[35]
# Calc3 = lambda x: (x[35]-x[113])/x[35] # commenting out because there's no 113 in the provided example
def Calc4(x):
return x[37]
def Calc5(x):
return x[38]
</code></pre>
<p>Then, perform the calculations using the <code>groupby</code> and <code>agg</code>:</p>
<pre><code>df = df.set_index('Item_ID') # set 'Item_ID' to index so that we can use fewer code inside the functions
df = df.groupby('Event_ID').agg([Calc1, Calc2, Calc4, Calc5]) # group by Event_ID, and perform the set of specified calculations
df.columns = df.columns.droplevel(0) # reset column names
</code></pre>
<p>Output:</p>
<pre><code> Calc1 Calc2 Calc4 Calc5
Event_ID
83531 5.835418 0.018093 55.8 55.8
83532 3.822212 0.015216 55.7 55.7
</code></pre>
|
python-3.x|pandas
| 1
|
4,851
| 64,444,993
|
How to filter pandas dataframe between a negative and a positive value (-0.2 to 0.2), and removing the rows that meet the condition?
|
<p>I have a Pandas dataframe with the following columns: Position, Control, Patient, REFGENE, REFGROUP, and very many rows of data (methylation data). I show the first row of the dataframe here:</p>
<pre><code>Position Controls Patients REFGENE REFGROUP
16:53468112 0.598153 0.422916 gene_name TSS1500
</code></pre>
<p>I want to investigate the difference in methylation between control and patient so I create a new column for the difference:</p>
<pre><code>df['diff_methylation'] = (df['Patient']) - (df['Control'])
</code></pre>
<p>Here comes my problem. I want to create a new column <strong>without values between -0.2 and 0.2</strong> from the "diff_methylation" column. The statement below should say True if the values are below -0.2 or higher than 0.2 but all values come back False in the entire dataframe. I wonder if it has something to do with the negative value? Maybe there is an easier way than creating another column and just remove the rows directly from the dataframe that come back as True?</p>
<pre><code>df['Results'] = (df['diff_methylation'] > float(0.2)) & (df['diff_methylation'] < float(-0.2))
</code></pre>
<p>I know I have values above 0.2 atleast, so some rows should come out as True.</p>
<p>I have searched the web but cannot find anyone filtering with range between a positive and a negative value in pandas dataframe.</p>
<p>Kindly</p>
<p>idama</p>
|
<p>Your approach is almost correct; however, "&" corresponds to "and", and a value can never be both greater than 0.2 and less than -0.2 at the same time, which is why every row comes back False. Use "|" ("or") instead:</p>
<pre><code>df['Results']= (df['diff_methylation'] > float(0.2)) | (df['diff_methylation'] < float(-0.2))
</code></pre>
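<p>If the goal is to drop those rows directly without a helper column, a small sketch using the same column name:</p>
<pre><code>df_filtered = df[df['diff_methylation'].abs() > 0.2]   # keep only differences outside [-0.2, 0.2]
</code></pre>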
|
pandas|filter|range|negative-number
| 1
|
4,852
| 64,290,589
|
Simple Neural Network numpy error missing "exe"
|
<p>I am trying to run this code for simple Neural Network in python however an error is prompted saying " module 'numpy' has no attribute 'exe' ". I tried searching online but couldn't figure out where the problem is, here is the code:</p>
<pre><code>import numpy as np
x=np.array([ [0,0,1],
[0,1,1],
[1,0,1],
[1,1,1] ])
y=np.array([ [1,0,0,1]]).T
class NeuralNetwork(object):
def __init__(self):
#parameters
self.inputsize= 3
self.outputsize= 1
self.hiddensize=4
self.learning_rate=0.005
#(3x4) weight matrix from input layer to hidden layer
self.w0= np.random.randn(self.inputsize, self.hiddensize)
#(4x1) weight matrix from hidden layer to output layer
self.w1=np.random.randn(self.hiddensize, self.outputsize)
def feedforward(self, x):
#forward propegation through the network
self.z = np.dot(x, self.w0) #dot product with input and first set of weights
self.z2= self.sigmoid(self.z) #activation function
self.z3= np.dot(self.z2, self.w1) #dot product with hidden layer and second set of weights
output= self.sigmoid(self.z3)
return output
def sigmoid(self, s, deriv=False):
if (deriv==True):
return s*(1-s)
return 1/(1+np.exe(-s))
def backward(self, x, y, output):
#backward propegation through the network
self.output_error= y - output #error in output
self.output_delta= self.output_error * self.sigmoid(output, deriv=True)
#hidden layer error & delta
self.z2_error=self.output_delta.dot(self.w1.T)
self.z2_delta=self.z2_error * self.sigmoid(self.z2, deriv=True)
#updating weights
self.w0 += self.learning_rate*(x.T.dot(self.z2_delta))
self.w1 += self.learning_rate*(self.z2.T.dot(self.output_delta))
def train(self, x, y):
output=self.feedforward(x)
self.backward(x,y, output)
</code></pre>
<p>so far no errors, but when I run the loop</p>
<pre><code>NN=NeuralNetwork()
for i in range(1000):
NN.train(x,y)
print("predicted output: " + str(NN.feedforward(x)))
</code></pre>
<p>the error prompted is</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-42-f9adb58b2d65> in <module>
1 NN=NeuralNetwork()
2 for i in range(1000):
----> 3 NN.train(x, y)
<ipython-input-39-83a1cb894a8f> in train(self, x, y)
39
40 def train(self, x, y):
---> 41 output=self.feedforward(x)
42 self.backward(x,y, output)
43
<ipython-input-39-83a1cb894a8f> in feedforward(self, x)
15 #forward propegation through the network
16 self.z = np.dot(x, self.w0) #dot product with input and first set of weights
---> 17 self.z2= self.sigmoid(self.z) #activation function
18 self.z3= np.dot(self.z2, self.w1) #dot product with hidden layer and second set of weights
19 output= self.sigmoid(self.z3)
<ipython-input-39-83a1cb894a8f> in sigmoid(self, s, deriv)
22 if (deriv==True):
23 return s*(1-s)
---> 24 return 1/(1+np.exe(-s))
25
26 def backward(self, x, y, output):
~\anaconda3\lib\site-packages\numpy\__init__.py in __getattr__(attr)
217 return Tester
218 else:
--> 219 raise AttributeError("module {!r} has no attribute "
220 "{!r}".format(__name__, attr))
221
AttributeError: module 'numpy' has no attribute 'exe'
</code></pre>
<p>This is my first post here so I apologize if there are any mistakes</p>
|
<p>In Your <code>sigmoid</code> function you are using <code>np.exe</code> where it should be <code>np.exp</code></p>
<p><code>Numpy</code> doesn't have any function named <code>exe</code> so you are getting <code>AttributeError</code></p>
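<p>A sketch of the corrected line:</p>
<pre><code>return 1/(1 + np.exp(-s))   # exp, not exe
</code></pre>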
|
python|python-3.x|numpy|deep-learning|neural-network
| 1
|
4,853
| 47,959,059
|
Performing pct_change() that only considers the prior year in a time-series dataframe?
|
<p>I have a sample dataframe "df":</p>
<pre><code>df = pd.DataFrame({'Year': [2000, 2002, 2003, 2004],
'Name': ['A'] * 4,
'Value': [4, 1, 1, 3]})
</code></pre>
<p>When I perform pct_change() i.e.</p>
<pre><code>df['change'] = df['Value'].pct_change()
</code></pre>
<p>The computed "change" value for row Year = 2002 is -0.75. How can I get Pandas to return a N/A for 2002 since data for 2001 is missing as I only wish to consider the immediate prior year in a time-series?</p>
<p>Cheers.</p>
|
<p>Use <code>set_index</code> + <code>reindex</code> + <code>pct_change</code> with <code>fill_method=None</code> - </p>
<ol>
<li>First, set <code>Year</code> as the index</li>
<li>Get a range of years from the minimum to maximum, and use this range to reindex the dataframe. Missing years are now added in as <code>NaN</code>s</li>
<li>Call <code>pct_change</code> on <code>Value</code> without padding <code>NaN</code>s. </li>
</ol>
<pre><code>r = np.arange(df.Year.min(), df.Year.max() + 1)
df = df.set_index('Year').reindex(r)
</code></pre>
<pre><code>v = df['Value'].pct_change(fill_method=None)
df = df.assign(Change=v).dropna(how='all').reset_index()
df
Year Name Value Change
0 2000 A 4.0 NaN
1 2002 A 1.0 NaN
2 2003 A 1.0 0.0
3 2004 A 3.0 2.0
</code></pre>
|
python|pandas|dataframe
| 2
|
4,854
| 48,896,516
|
I want to specify an array's axes and their index values and get the sub array back
|
<p>So I know how to do this on a case by case basis, but I would like to write a class and method, or have a code snippet to do this in general. For instance, typically to extract a sub-array I would do:</p>
<pre><code>my_array = np.array(range(81)).reshape((3,3,3,3))
sub_array = my_array[0, 1, :, 0]
</code></pre>
<p>which would return a vector, or 1D array. I would like to generically have a list of which axes I have index values for, and then supply them to the array (possibly through a new class method) and get back the sub-array. e.g. for the above:</p>
<pre><code>axis_list = [0, 1, 3]
axis_idx_value = [0, 1, 0]
</code></pre>
<p>or zipped <code>axes_idx_values = [(0,0), (1, 1), (3, 0)]</code> and ideally use these in some broadcasting way or some other incredibly slick numpy syntax that already exists. I tried looking around but I couldn't find anything built in yet, and I'm at a loss on how to first call up a specific (or many) axis and then assign it a index value. Any help would be appreciated, thanks.</p>
|
<pre><code>my_array[0, 1, :, 0]
</code></pre>
<p>can also be written as</p>
<pre><code>my_array[(0, 1, slice(None), 0)]
</code></pre>
<p>or</p>
<pre><code>idx = (0, 1, slice(None), 0)
my_array[idx]
</code></pre>
<p>So you could construct that <code>idx</code> from your</p>
<pre><code>axis_list = [0, 1, 3]
axis_idx_value = [0, 1, 0]
</code></pre>
<p>Some of the numpy functions that take an axis parameter construct such an indexing tuple. Some may start with a list or array of the right size, fill in the values, and then convert it to <code>tuple</code> for indexing purposes.</p>
<p>Anyways, constructing such a <code>idx</code> tuple is straight forward Python, even if it looks a bit messy. You can hide the mess in a function.</p>
<hr>
<pre><code>In [8]: idx = np.full((4,),slice(None))
In [9]: idx[axis_list] = axis_idx_value
In [10]: tuple(idx)
Out[10]: (0, 1, slice(None, None, None), 0)
In [11]: my_array = np.array(range(81)).reshape((3,3,3,3))
In [12]: my_array[Out[10]]
Out[12]: array([ 9, 12, 15])
</code></pre>
|
python|numpy|indexing|slice
| 0
|
4,855
| 49,142,561
|
Change contrast in Numpy
|
<p>I want to write a pure Numpy function to change the contrast of an RGB image (that is represented as a Numpy uint8 array), however, the function I wrote doesn't work and I don't understand why.</p>
<p>Here is an example image:</p>
<p><a href="https://i.stack.imgur.com/Ceppy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ceppy.png" alt="enter image description here"></a></p>
<p>And here is a function that uses PIL and works fine:</p>
<pre><code>def change_contrast(img, factor):
def contrast(pixel):
return 128 + factor * (pixel - 128)
return img.point(contrast)
from PIL import Image
img = Image.fromarray(img.astype(np.uint8))
img1 = change_contrast(img, factor=2.0)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/cQM7N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cQM7N.png" alt="enter image description here"></a></p>
<p>Now here is a pure Numpy function, which, in my opinion, does the exact same thing as the other function above, but it doesn't work at all:</p>
<pre><code>def change_contrast2(img, factor):
return 128 + factor * (img - 128)
img1 = change_contrast2(img, factor=2.0)
</code></pre>
<p>where <code>img</code> is a Numpy array. The output is this:</p>
<p><a href="https://i.stack.imgur.com/17SXU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/17SXU.png" alt="enter image description here"></a></p>
<p>I don't understand what's going on and would be happy about any hints!</p>
|
<p>What you are seeing is underflow of unsigned integers:</p>
<pre><code>>>> a = np.array((64, 128, 192), dtype=np.uint8)
>>> a
array([ 64, 128, 192], dtype=uint8)
>>> a-128
array([192, 0, 64], dtype=uint8) # note the "wrong" value at pos 0
</code></pre>
<p>One way of avoiding this is coercion or type promotion:</p>
<pre><code>factor = float(factor)
np.clip(128 + factor * img - factor * 128, 0, 255).astype(np.uint8)
</code></pre>
<p>As factor is a float the dtype of the product <code>factor * img</code> is promoted to float. As floats can handle negative numbers this eliminates the underflow.</p>
<p>To be able to convert back to <code>uint8</code> we clip to the range that can be expressed by this type.</p>
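<p>Put together, a corrected pure-NumPy version could look roughly like this (a sketch along the lines above, not benchmarked against PIL):</p>
<pre><code>def change_contrast2(img, factor):
    # promote to float so negative intermediate values don't wrap around,
    # then clip back into the valid uint8 range
    out = 128 + float(factor) * (img.astype(np.float64) - 128)
    return np.clip(out, 0, 255).astype(np.uint8)

img1 = change_contrast2(img, factor=2.0)
</code></pre>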
|
python|numpy
| 5
|
4,856
| 58,930,339
|
more efficient way to get proportion of ones using groupby in pandas
|
<p>I have the following pandas DataFrame:</p>
<pre><code>import pandas as pd
i1 = ["AA", "AA", "AA", "BB", "BB", "BB"]
i2 = ["B1", "B1", "B1", "A1", "A1", "A1"]
col1 = [1, 1, 1, 0, 1, 0]
col2 = [0, 0, 0, 1, 1, 0]
col3 = [1, 1, 0, 0, 0, 0]
df = pd.DataFrame({"I1": i1,
"I2": i2,
"Col_1":col1,
"Col_2":col2,
"Col_3":col3})
</code></pre>
<p>What I would like to do is to get the proportion of 1s (ones) for each i1 and i2 for each column. For example the value for <code>I1=AA</code> and <code>I2=B1</code> should be <code>Col_1=1,Col_2=0, Col_3=0.66</code>.</p>
<p>I am getting the required output using the following code:</p>
<pre><code>df.groupby(["I1", "I2"])[["Col_1", "Col_2", "Col_3"]].sum()/df.groupby(["I1", "I2"])[["Col_1", "Col_2", "Col_3"]].count()
</code></pre>
<p>However I don't think this is the best way to do it. Any help will be appreciated.</p>
|
<p>Use <code>mean</code> if there are only <code>1</code> and <code>0</code> values, because <code>mean</code> by definition is <code>sum / count</code>:</p>
<pre><code>#mean of all numeric columns (without I1, I2)
df1 = df.groupby(["I1", "I2"]).mean()
#if need specify columns names
#df1 = df.groupby(["I1", "I2"])["Col_1", "Col_2", "Col_3"].mean()
print (df1)
Col_1 Col_2 Col_3
I1 I2
AA B1 1.000000 0.000000 0.666667
BB A1 0.333333 0.666667 0.000000
</code></pre>
|
python|pandas
| 3
|
4,857
| 59,020,566
|
how to concatenate two cells in a pandas column based on some conditions?
|
<p>Hello I have this pandas dataframe:</p>
<pre><code>
Key Predictions
C10D1 1
C11D1 8
C11D2 2
C12D1 2
C12D2 8
C13D1 3
C13D2 9
C14D1 4
C14D2 9
C15D1 8
C15D2 3
C1D1 5
C2D1 7
C3D1 4
C4D1 1
C4D2 9
C5D1 3
C5D2 2
C6D1 1
C6D2 0
C7D1 8
C7D2 6
C8D1 3
C8D2 3
C9D1 5
C9D2 1
</code></pre>
<p>I want to concatenate the cells from the "Predictions" column wherever the "Key" values match on their first (up to 4) characters.
For example, in the "Key" column I have "C11D1" and "C11D2"; as they both contain "C11", I would like to concatenate the "Predictions" values of the rows that have "C11D1" and "C11D2" as index.
Thus the result should be:</p>
<pre><code> Predictions
Key
C10 1
C11 82
C12 28
and so on
</code></pre>
|
<p><strong><em>EDIT:</em></strong> Since the OP wants to concatenate the values that share the same key, adding that solution here.</p>
<pre><code>df.groupby(df['Key'].replace(regex=True,to_replace=r'(C[0-9]+).*',value=r'\1'))\
['Predictions'].apply(lambda x: ','.join(map(str,x)))
</code></pre>
<p>The above will concatenate them with <code>,</code>; you could change it to an empty string or a space, as per your need, in the <code>lambda x: ','.join(...)</code> part.</p>
<hr>
<hr>
<p>Could you please try the following.</p>
<pre><code>df.groupby(df['Key'].replace(regex=True,to_replace=r'(C[0-9]+).*',value=r'\1')).sum()
</code></pre>
<p>OR with resetting index try:</p>
<pre><code>df.groupby(df['Key'].replace(regex=True,to_replace=r'(C[0-9]+).*',value=r'\1')).sum()\
.reset_index()
</code></pre>
<p><strong><em>Explanation:</em></strong> Adding explanation for above code.</p>
<pre><code>df.groupby(df['Key'].replace(regex=True,to_replace=r'(C[0-9]+).*',value=r'\1')).sum()
df.groupby: means group df by whatever key values are passed to it.
df['Key'].replace(regex=True,to_replace=r'(C[0-9]+).*',value=r'\1'): means, in df's Key column, use a regex to replace everything after the leading C and digits with nothing, as per OP's question.
.sum(): means get the total sum for each of the resulting groups.
</code></pre>
|
python|pandas|data-science|data-analysis
| 1
|
4,858
| 58,754,293
|
Convert only rows that list length equals 1 to string
|
<p>I have a DataFrame in Python where every row from <code>tags</code> column is a list:</p>
<pre><code>df
>>> name tags
>>> alice | [a]
>>> bruce | [a, b, c]
</code></pre>
<p>I want to convert only rows that have list <code>length = 1</code> to string.
Expected result</p>
<pre><code>df
>>> name tags
>>> alice | a
>>> bruce | [a, b, c]
</code></pre>
|
<p>You can use:</p>
<pre><code>c=df['tags'].str.len().eq(1)
df['tags']=np.where(c,df['tags'].str[0],df['tags'])
print(df) #df.to_csv('file.txt',sep='|',index=False)
</code></pre>
<hr>
<pre><code> name tags
0 alice a
1 bruce [a, b, c]
</code></pre>
|
python|string|pandas|list
| 3
|
4,859
| 58,731,643
|
Find number of rows in a given week in PySpark
|
<p>I have a PySpark dataframe, a small portion of which is given below:</p>
<pre><code>+------+-----+-------------------+-----+
| name| type| timestamp|score|
+------+-----+-------------------+-----+
| name1|type1|2012-01-10 00:00:00| 11|
| name1|type1|2012-01-10 00:00:10| 14|
| name1|type1|2012-01-10 00:00:20| 2|
| name1|type1|2012-01-10 00:00:30| 3|
| name1|type1|2012-01-10 00:00:40| 55|
| name1|type1|2012-01-10 00:00:50| 10|
| name5|type1|2012-01-10 00:01:00| 5|
| name2|type2|2012-01-10 00:01:10| 8|
| name5|type1|2012-01-10 00:01:20| 1|
|name10|type1|2012-01-10 00:01:30| 12|
|name11|type3|2012-01-10 00:01:40| 512|
+------+-----+-------------------+-----+
</code></pre>
<p>For a chosen time window (say windows of <code>1 week</code>) , I want to find out how many values of <code>score</code> (say <code>num_values_week</code>) are there for every <code>name</code>. That is, how many values of <code>score</code> are there for <code>name1</code> between <code>2012-01-10 - 2012-01-16</code> , then between <code>2012-01-16 - 2012-01-23</code> and so forth (and same for all other names, like <code>name2</code> and so on.) </p>
<p>I want to cast this information into a new PySpark data frame that will have the columns <code>name</code>, <code>type</code>, <code>num_values_week</code>. How can I do this?</p>
<p>The PySpark dataframe given above can be created using the following code snippet:</p>
<pre><code>from pyspark.sql import *
import pyspark.sql.functions as F
df_Stats = Row("name", "type", "timestamp", "score")
df_stat1 = df_Stats('name1', 'type1', "2012-01-10 00:00:00", 11)
df_stat2 = df_Stats('name2', 'type2', "2012-01-10 00:00:00", 14)
df_stat3 = df_Stats('name3', 'type3', "2012-01-10 00:00:00", 2)
df_stat4 = df_Stats('name4', 'type1', "2012-01-17 00:00:00", 3)
df_stat5 = df_Stats('name5', 'type3', "2012-01-10 00:00:00", 55)
df_stat6 = df_Stats('name2', 'type2', "2012-01-17 00:00:00", 10)
df_stat7 = df_Stats('name7', 'type3', "2012-01-24 00:00:00", 5)
df_stat8 = df_Stats('name8', 'type2', "2012-01-17 00:00:00", 8)
df_stat9 = df_Stats('name1', 'type1', "2012-01-24 00:00:00", 1)
df_stat10 = df_Stats('name10', 'type2', "2012-01-17 00:00:00", 12)
df_stat11 = df_Stats('name11', 'type3', "2012-01-24 00:00:00", 512)
df_stat_lst = [df_stat1 , df_stat2, df_stat3, df_stat4, df_stat5,
df_stat6, df_stat7, df_stat8, df_stat9, df_stat10, df_stat11]
df = spark.createDataFrame(df_stat_lst)
</code></pre>
|
<p>Something like this:</p>
<pre><code>from pyspark.sql.functions import weekofyear, count
df = df.withColumn( "week_nr", weekofyear(df.timestamp) ) # create the week number first
result = df.groupBy(["week_nr","name"]).agg(count("score")) # for every week see how many rows there are
</code></pre>
|
python|pandas|pyspark|pyspark-sql|pyspark-dataframes
| 1
|
4,860
| 58,966,448
|
Webscraping an entire website pandas word cloud
|
<p>I'm attempting to create a wordcloud based off the <strong>scraped</strong> text from a specific website. The issue I'm having is with the webscraping portion of this. I've attempted two different ways, and both attempts I get stuck on how to proceed further.</p>
<p>First method:
<strong>Scrape</strong> the data for each specific tag into its own data frame</p>
<pre><code>main_content= soup.find("div", attrs= {"class" : "col-md-4"})
main_content2= soup.find("article", attrs= {"class" : "col-lg-7 mid_info"})
comp_service= soup.find("div", attrs= {"class" : "col-md-6 col-lg-4"})
</code></pre>
<p>Here I'm stuck on how to add the three dataframes together in order to create the word cloud. This works fine if I use only one of the DFs and add it into 'lists', but I'm unsure how to add the other two into a single DF to then run the rest of the code. The following is the rest of the code for the word cloud portion:</p>
<pre><code>str = ""
for list in lists:
info= list.text
str+=info
mask = np.array(Image.open("Desktop/big.png"))
color= ImageColorGenerator(mask)
wordcloud = WordCloud(width=1200, height=1000,
max_words=400,mask=mask,
stopwords=STOPWORDS,
background_color="white",
random_state=42).generate(str)
plt.imshow(wordcloud.recolor(color_func=color),interpolation="bilinear")
plt.axis("off")
plt.show()
</code></pre>
<p>Attempt 2:
I found a piece of code that will extract all of the data from specific tags and put it into text:</p>
<pre><code>i = 0
for lists in soup.find_all(['article','div']):
print (lists.text)
</code></pre>
<p>However, when I attempt to run the rest of the code,</p>
<pre><code>mask = np.array(Image.open("Desktop/big.png"))
color= ImageColorGenerator(mask)
wordcloud = WordCloud(width=1200, height=1000,
max_words=400,mask=mask,
stopwords=STOPWORDS,
background_color="white",
random_state=42).generate(str)
plt.imshow(wordcloud.recolor(color_func=color),interpolation="bilinear")
plt.axis("off")
plt.show()
</code></pre>
<p>I get 'ValueError: We need at least 1 word to plot a word cloud, got 0.' after running the wordcloud DF code.</p>
<p>I'm essentially just trying to pull all of the data from a website, store that information into a text file, then transform that data into a word cloud.</p>
<p>Please let me know any suggestions or clarifications I can provide.</p>
<p>Thank you.</p>
|
<p>This ended up working for me</p>
<pre><code>lists = soup.find_all(['article','div'])
str = ""
for list in lists:
info= list.text
str+=info
</code></pre>
|
python|pandas|web-scraping|jupyter-notebook|word-cloud
| 0
|
4,861
| 59,016,143
|
Why doesn't iLocation based boolean indexing work?
|
<p>I was trying to filter a Dataframe and thought that if a <code>loc</code> takes a boolean list as an input to filter, it should also work in the case for <code>iloc</code>. Eg. </p>
<pre><code>import pandas as pd
df = pd.read_csv('https://query.data.world/s/jldxidygjltewualzthzkaxtdrkdvq')
df.iloc[[True,False,True]] #works
df.loc[[True,False,True]] #works
df.loc[df['PointsPerGame'] > 10.0] #works
df.iloc[df['PointsPerGame'] > 10.0] # DOES NOT WORK
</code></pre>
<p>The documentation states that both <code>loc</code> and <code>iloc</code> accept a boolean array as an argument. </p>
<p><strong>For iloc</strong>
<a href="https://i.stack.imgur.com/D8GsN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D8GsN.png" alt="iloc"></a></p>
<p><strong>For loc</strong>
<a href="https://i.stack.imgur.com/PN3u2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PN3u2.png" alt="loc"></a></p>
<p>So, I believe this does not work purely because it was not implemented, or is this because of some other reason which I'm failing to understand? </p>
|
<p>It is not <a href="https://github.com/pandas-dev/pandas/issues/17454#issuecomment-327645521" rel="noreferrer">bug</a>:</p>
<blockquote>
<p>this is by-definition not allowed. <strong>.iloc</strong> is purely positional, so it doesn't make sense to align with a passed Series (which is what all indexing operations do).</p>
</blockquote>
<p>You need to convert mask to numpy array for <code>iloc</code>, for <code>loc</code> it working too, but not necessary:</p>
<pre><code>df.iloc[(df['PointsPerGame'] > 10.0).values]
</code></pre>
|
python|pandas
| 5
|
4,862
| 58,732,447
|
Unable to resize image with cv2
|
<p>I'm trying to resize cifar10 image set from 32x32 to 96x96.</p>
<pre><code>(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images_reshaped = np.array((50000, 96, 96, 3,))
for a in range(len(train_images)):
train_images_reshaped[a] = cv2.resize(train_images[a], dsize=(96, 96), interpolation=cv2.INTER_CUBIC)
</code></pre>
<p>But am getting the error </p>
<pre><code>ValueError: setting an array element with a sequence.
</code></pre>
<p>What's going wrong? Any alternatives besides this to achieve my goal?</p>
|
<p>I think you meant to do </p>
<pre><code>train_images_reshaped = np.zeros((50000, 96, 96, 3,))
</code></pre>
<p>instead of </p>
<pre><code>train_images_reshaped = np.array((50000, 96, 96, 3,))
</code></pre>
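<p><code>np.array((50000, 96, 96, 3,))</code> only builds a 1-D array holding those four numbers, so assigning a 96x96x3 image into one of its elements fails. A minimal sketch of the full loop with a pre-allocated target (the <code>uint8</code> dtype is an assumption, matching the CIFAR-10 images):</p>
<pre><code>train_images_reshaped = np.zeros((len(train_images), 96, 96, 3), dtype=np.uint8)
for a in range(len(train_images)):
    # cv2.resize keeps the channel dimension and the uint8 dtype
    train_images_reshaped[a] = cv2.resize(train_images[a], dsize=(96, 96),
                                          interpolation=cv2.INTER_CUBIC)
</code></pre>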
|
python|numpy|keras|cv2
| 1
|
4,863
| 58,860,589
|
Cloud9 deploy hitting size limit for numpy, pandas
|
<p>I'm building in Cloud9 to deploy to Lambda. My function works fine in Cloud9 but when I go to deploy I get the error</p>
<blockquote>
<p>Unzipped size must be smaller than 262144000 bytes</p>
</blockquote>
<p>Running <code>du -h | sort -h</code> shows that my biggest offenders are:</p>
<ul>
<li><code>/debug</code> at 291M</li>
<li><code>/numpy</code> at 79M</li>
<li><code>/pandas</code> at 47M</li>
<li><code>/botocore</code> at 41M</li>
</ul>
<p>My function is extremely simple: it calls a service, uses pandas to format the response, and sends it on. </p>
<ol>
<li>What is in debug and how do I slim it down/eliminate it from the deploy package?</li>
<li>How do others use libraries at all if they eat up most of the memory limit?</li>
</ol>
|
<p><strong>A brief background to understand the problem root-cause</strong></p>
<p>The problem is not with your function but with the size of the zipped packages. As per the AWS <a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html" rel="nofollow noreferrer">documentation</a>, the overall size of the zipped package must not exceed 3MB. That said, if the package size is greater than 3MB (which happens inevitably, as a library can have many dependencies), then consider uploading the zipped package to an <code>AWS S3 bucket</code>. Note: even with S3 there is a size limit of <code>262MB</code> unzipped. Ensure that your package does not exceed this limit. The error message that you have posted, <code>Unzipped size must be smaller than 262144000 bytes</code>, is referring to the size of the deployment package, i.e. the libraries. </p>
<p>Now, Understand some facts when working with AWS, </p>
<ol>
<li><em>AWS Containers are empty</em>. </li>
<li><em>AWS containers have a linux kernel</em></li>
<li>AWS Cloud9 is only an IDE like RStudio or Pycharm. And it uses S3 bucket for saving the installed packages.</li>
</ol>
<p>This means you'll need to know the following:</p>
<ol>
<li><p>the package and its related dependencies</p></li>
<li><p>extract the linux-compiled packages from cloud9 and save to a folder-structure like, <code>python/lib/python3.6/site-packages/</code></p></li>
</ol>
<p><strong>Possible/Workable solution to overcome this problem</strong></p>
<p>Overcome this problem by reducing the package size. See below.</p>
<p><strong>Reducing the deployment package size</strong></p>
<ul>
<li><p>Manual method: delete the files and folders within each library folder that are named <code>*.info</code> and <code>__pycache__</code>. You'll need to manually look into each folder for the above names and delete them.</p></li>
<li><p>Automatic method: I have yet to figure out the command; work in progress.</p></li>
</ul>
<p><strong>Use Layers</strong></p>
<p>In AWS go to Lambda and create a layer</p>
<p>Attach the S3 bucket link containing the python package folder. Ensure the lambda function IAM role has permission to access S3 bucket. </p>
<p>Make sure the un-zipped folder size is less than 262MB. Because if its >260 MB then it cannot be attached to AWS Layer. You'll get an error, <code>Failed to create layer version: Unzipped size must be smaller than 262144000 bytes</code></p>
|
pandas|aws-lambda|aws-cloud9
| 3
|
4,864
| 58,792,218
|
How to ensure no singleton expansion in numpy is made
|
<p>Coming from MATLAB to NumPy, the distinction between a 2-dimensional array where one of the dimensions equals 1 and a 1D array is annoying.</p>
<p>For example:</p>
<pre><code>>>>import numpy as np
>>>x1 = np.array([[1],[2],[3]])
>>>x2 = np.array([1,2,3])
>>>x1.shape
(3, 1)
>>>x2.shape
(3,)
</code></pre>
<p>so when using element wise product I am getting 3X3 matrix:</p>
<pre><code>>>>x1 * x2
array([[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
</code></pre>
<p>But what I really want is </p>
<pre><code>>>>np.squeeze(x1) * x2
array([1, 4, 9])
</code></pre>
<p>Any other way of doing this besides calling <code>np.squeeze()</code> on each vector?</p>
|
<p>What you are getting is the result of broadcasting, which <code>numpy</code> implemented long before MATLAB. Even Octave had it before MATLAB.</p>
<p>You have a (3,1) and a (3,). A leading dimension is added to the lower dim, producing (1,3). Together those broadcast to (3,3), and do the math.</p>
<p>If you could somehow turn off broadcasting (you can't), I'd expect incompatible dimensions error.</p>
<p>You want a (3,) result, so you have to somehow remove the trailing dimension of (3,1) - <code>squeeze</code>, <code>reshape</code> or <code>[:,0]</code> index do that.</p>
<p>In Octave:</p>
<pre><code>>> x1 = [1;2;3]; # (3,1)
>> y1 = [1,2,3]; # (1,3)
>> x1 .* y1 # (3,3)
ans =
1 2 3
2 4 6
3 6 9
</code></pre>
<p>From Octave docs:</p>
<blockquote>
<p>A note on terminology: “broadcasting” is the term popularized by the<br>
Numpy numerical environment in the Python programming language. In
other programming languages and environments, broadcasting may also be
known as <em>binary singleton expansion</em> (BSX, in MATLAB, and the origin of
the name of the ‘bsxfun’ function), <em>recycling</em> (R programming<br>
language), <em>single-instruction multiple data</em> (SIMD), or <em>replication</em>.</p>
</blockquote>
<p>Turn on a warning about Octave extensions:</p>
<pre><code>>> warning ("on","Octave:language-extension")
>> x1 .* y1
warning: performing `product' automatic broadcasting
ans =
1 2 3
2 4 6
3 6 9
</code></pre>
|
python|numpy
| 1
|
4,865
| 58,922,742
|
JupyterLab notebook on Google AI Platform superslow when making predictions
|
<p>I have uploaded a trained tensorflow v2 model onto the Google AI Platform to make predictions on unseen data.
This data is stored in Google Cloud Storage in shards, each c 300 MB large.</p>
<p>I am using a notebook to preprocess the data, which works fine.
When making predictions on the preprocessed data, it works but it is superslow, around 90 minutes for just a file of 300 MB. I got quite a few of these shards so I have to find a way to speed things up.</p>
<p>I have tried different notebook configurations in terms of cpu, RAM and even gpu but it does not make a difference on the prediction runtime.</p>
<p>Am I missing something? Any ideas are much appreciated!</p>
|
<p>When trying to optimize a model for serving, the most important considerations are</p>
<p>1: Model size</p>
<p>2: Prediction speed</p>
<p>3: Prediction throughput</p>
<p>Several techniques exist in TensorFlow that will allow you to shrink the size of a model and improve prediction latency. As per the official documentation:</p>
<p>"Freezing:
Convert the variables stored in a checkpoint file of the SavedModel into<br>
constants stored directly in the model graph. This reduces the overall size of the model.</p>
<p>Pruning:
Strip unused nodes in the prediction path and the outputs of the graph, merging duplicate nodes, as well as cleaning other node ops like summary, identity, etc.</p>
<p>Constant folding:
Look for any sub-graphs within the model that always evaluate to constant expressions, and replace them with those constants.</p>
<p>Folding batch norms:
Fold the multiplications introduced in batch normalization into the weight multiplications of the previous layer.</p>
<p>Quantization:
Convert weights from floating point to lower precision, such as 16 or 8 bits.</p>
<p>The actual process would be something like this</p>
<ol>
<li>Freeze the SavedModel:
SavedModel ⇒ GraphDef</li>
<li>Optimize the frozen model:
GraphDef ⇒ GraphDef</li>
<li>Convert the optimized frozen model back to SavedModel:
GraphDef ⇒ SavedModel</li>
</ol>
<p>A very good post detailing the entire optimization for serving can be found here
Optimizing TensorFlow Models for Serving <a href="https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf" rel="nofollow noreferrer">1</a></p>
|
python-3.x|google-cloud-platform|tensorflow2.0|jupyter-lab|gcp-ai-platform-notebook
| 0
|
4,866
| 58,853,889
|
Creating two shifted columns in grouped pandas data-frame
|
<p>I have looked all over and I still can't find an example of how to create two shifted columns in a Pandas Dataframe within its groups. </p>
<p>I have done it with one column as follows:</p>
<pre><code>data_frame['previous_category'] = data_frame.groupby('id')['category'].shift()
</code></pre>
<p>But I have to do it with 2 columns, shifting one upwards and the other downwards. </p>
<p>Any ideas?</p>
|
<p>It is possible by custom function with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a>, because one column need shift down and second shift up:</p>
<pre><code>df = pd.DataFrame({
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
'F':list('aaabbb')
})
def f(x):
x['B'] = x['B'].shift()
x['C'] = x['C'].shift(-1)
return x
df = df.groupby('F').apply(f)
print (df)
B C F
0 NaN 8.0 a
1 4.0 9.0 a
2 5.0 NaN a
3 NaN 2.0 b
4 5.0 3.0 b
5 5.0 NaN b
</code></pre>
<p>If want shift same way only specify all columns in lists:</p>
<pre><code>df[['B','C']] = df.groupby('F')['B','C'].shift()
print (df)
B C F
0 NaN NaN a
1 4.0 7.0 a
2 5.0 8.0 a
3 NaN NaN b
4 5.0 4.0 b
5 5.0 2.0 b
</code></pre>
|
python|pandas|dataframe
| 2
|
4,867
| 70,105,227
|
Numpy: How to find the ratio between multiple channels and assign index of a particular channel if it's ratio is higher than a threshold value
|
<p>I have a numpy array (dimension: N * X * Y) of N channels. I need to find the ratio between the N channels for each (X, Y). Then create a new array (dimension: X * Y) and assign the index of a particular channel (say N = 1) if its ratio is greater than a threshold value, else assign the index of the maximum value. Say there are 2 channels: if the ratio at an
(X, Y) point of channel 1 is greater than 0.3, I need to assign 1 to that (X, Y) of the new array; if less than 0.4, then assign the max channel index. Please advise, thanks.</p>
|
<p><code>channel1 = arr[0, :, :]</code> will get the values in channel 1 as an X by Y array.
<code>channel2 = arr[1, :, :]</code> will get the values in channel 2. Then <code>channel1 / channel2 > 0.4</code> will give you an X by Y array with True in any spot where the ratio is more than 0.4. You can convert this to <code>int</code> to get 1s and 0s.</p>
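<p>A minimal sketch of the whole thing for two channels, assuming a 0.4 threshold and that "assign 1" means writing the literal value 1 (adjust it to whichever channel index you actually want):</p>
<pre><code>import numpy as np

arr = np.random.rand(2, 4, 5)             # N x X x Y, here N = 2 (made-up data)

ratio = arr[0] / arr[1]                   # ratio between the two channels at each (X, Y)
max_idx = arr.argmax(axis=0)              # index of the channel with the largest value
new = np.where(ratio > 0.4, 1, max_idx)   # 1 where the ratio exceeds the threshold,
                                          # otherwise the max-value channel index
</code></pre>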
|
numpy
| 0
|
4,868
| 56,181,198
|
following line take lot of time to update since it has nearly 2.5l records are present
|
<p>In one dataframe I took the group counts that are more than one, and need to update specific column values at those indices. Since there are nearly 2.5l records, it is failing with a memory error. Is there any fast solution for it?</p>
<pre><code>gl_no=primary.groupby('GL Account').filter(lambda x:len(x)>1)
primary_index=primary[primary['GL Account'].isin(gl_no['GL Account'])].index
primary.loc[primary_index]['Cost Element']='01'
primary.loc[primary_index]['GL Acc Type']='P'
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> and comparing for boolean mask and set new values by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>primary = pd.DataFrame({
'Cost Element':list('abcdef'),
'GL Acc Type':list('abcdef'),
'GL Account':list('aadbbc')
})
print (primary)
Cost Element GL Acc Type GL Account
0 a a a
1 b b a
2 c c d
3 d d b
4 e e b
5 f f c
mask=primary.groupby('GL Account')['GL Account'].transform('size') > 1
primary.loc[mask, ['Cost Element','GL Acc Type']] = ['01', 'P']
print (primary)
Cost Element GL Acc Type GL Account
0 01 P a
1 01 P a
2 c c d
3 01 P b
4 01 P b
5 f f c
</code></pre>
|
pandas
| 0
|
4,869
| 56,154,199
|
How can I simplify adding columns with certain values to my dataframe?
|
<p>I have a big dataframe (more than 900000 rows) and want to add some columns depending on the first column (Timestamp with date and time). My code works, but I guess it's far too complicated and slow. I'm a beginner so help would be appreciated! Thanks!</p>
<pre><code>df['seconds_midnight'] = 0
df['weekday'] = 0
df['month'] = 0
def date_to_new_columns(date_var, i):
sec_after_midnight = dt.timedelta(hours=date_var.hour, minutes=date_var.minute, seconds=date_var.second).total_seconds()
weekday = dt.date.isoweekday(date_var)
month1 = date_var.month
df.iloc[i, 24] = sec_after_midnight
df.iloc[i, 25] = weekday
df.iloc[i, 26] = month1
return
for i in range(0, 903308):
date_to_new_columns(df.timestamp.iloc[i], i)
</code></pre>
|
<p>If the column is a datetime64/Timestamp column you can use the <a href="https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html#dt-accessor" rel="nofollow noreferrer">.dt accessor</a>:</p>
<pre><code>In [11]: df = pd.DataFrame(pd.date_range('2019-01-23', periods=3), columns=['date'])
In [12]: df
Out[12]:
date
0 2019-01-23
1 2019-01-24
2 2019-01-25
In [13]: df.date - df.date.dt.normalize() # timedelta since midnight
Out[13]:
0 0 days
1 0 days
2 0 days
Name: date, dtype: timedelta64[ns]
In [14]: (df.date - df.date.dt.normalize()).dt.seconds # seconds since midnight
Out[14]:
0 0
1 0
2 0
Name: date, dtype: int64
In [15]: df.date.dt.day_name()
Out[15]:
0 Wednesday
1 Thursday
2 Friday
Name: date, dtype: object
In [16]: df.date.dt.month_name()
Out[16]:
0 January
1 January
2 January
Name: date, dtype: object
</code></pre>
|
python|pandas
| 0
|
4,870
| 56,345,215
|
How to convert stacked columns with duplicate indices into multiple unique columns with pandas?
|
<p>I'm working with a cryptocurrency time-series data-set which has all of the different currencies vertically stacked. It has 3 columns for the date, currency and price. The date ranges are also different for each currency.</p>
<p>i.e.</p>
<pre><code>>>> df
Currency Date Price
0 0x 2017-08-16 0.111725
1 0x 2017-08-17 0.211486
2 0x 2017-08-18 0.283789
3 0x 2017-08-19 0.511434
4 0x 2017-08-20 0.429522
... ... ... ...
657311 zurcoin 2018-02-04 0.003254
657312 zurcoin 2018-02-05 0.002774
657313 zurcoin 2018-02-06 0.001986
657314 zurcoin 2018-02-09 0.002684
657315 zurcoin 2018-02-10 0.002325
</code></pre>
<p>I need to instead have a column for each currency's price and the date as the index with only unique dates. There will be plenty of null values which I intend to replace with 0's.</p>
<p>i.e</p>
<pre><code>date 0x_price 10mtoken_price 1337coin_price ...
2017-08-16 1 4 (NaN)->0 ...
2017-08-17 2 5 (NaN)->0 ...
2017-08-18 3 6 7 ...
... ... ... ... ...
</code></pre>
<p>I've tried to iterate over the dataframe with a groupby as shown:</p>
<pre><code>df2 = pd.DataFrame()
df2["date"] = df["Date"].unique()
df2.set_index("date", inplace=True)
for currency, group in df.groupby("Currency"):
df2.loc[df2.index.isin(group.Date), f"{currency}_price"] = group["Price"]
</code></pre>
<p>This returned the desired column names and shape but the dataframe was filled with NaN's.</p>
<p>i.e.</p>
<pre><code>date 0x_price 10mtoken_price 1337coin_price ...
2017-08-16 NaN NaN NaN ...
2017-08-17 NaN NaN NaN ...
2017-08-18 NaN NaN NaN ...
... ... ... ... ...
</code></pre>
<p>I also tried to achieve the same thing with df.join() as shown:</p>
<pre><code>df2 = pd.DataFrame()
df2["date"] = df["Date"].unique()
df2.set_index("date", inplace=True)
for currency, group in df.groupby("Currency"):
df2 = df2.join(group.set_index("Date")[["Price"]].rename(columns={"Price": f"{currency}_price"}))
</code></pre>
<p>This didn't get to finish executing before freezing up my computer. Perhaps it's inefficient and I'm working with around 650,000 entries?</p>
<p>I haven't been able to find the same type of problem here and I haven't been able to figure out a solution after checking the documentation. I've probably missed something but hopefully I've described the problem sufficiently. Thanks in advance.</p>
|
<p>Pandas <code>pivot_table</code> could help here. I would use:</p>
<pre><code>resul = df.pivot_table(index=['Date'], columns=['Currency'], values=['Price']).fillna(0)
</code></pre>
<p>With your example data, it gives:</p>
<pre class="lang-none prettyprint-override"><code> Price
Currency 0x zurcoin
Date
2017-08-16 0.111725 0.000000
2017-08-17 0.211486 0.000000
2017-08-18 0.283789 0.000000
2017-08-19 0.511434 0.000000
2017-08-20 0.429522 0.000000
2018-02-04 0.000000 0.003254
2018-02-05 0.000000 0.002774
2018-02-06 0.000000 0.001986
2018-02-09 0.000000 0.002684
2018-02-10 0.000000 0.002325
</code></pre>
|
python|pandas
| 2
|
4,871
| 56,302,920
|
Reverse Binary Encoding with Pandas
|
<p>I have a Pandas Dataframe and wish to reverse the binary encoding (i.e. <code>get_dummies()</code>) of three columns. The encoding is left-to-right: </p>
<pre><code> a b c
0 0 1 1
1 0 0 1
2 1 1 1
3 1 0 0
</code></pre>
<p>would result in a new categories column <code>C</code> taking values <code>0-7</code>: </p>
<pre><code> C
1 6
2 4
3 7
4 1
</code></pre>
<p>I am not sure why this line is giving me a syntax error, near <code>axis=1</code>: </p>
<pre><code>df['C'] = df.apply(lambda x: (x['a']==1 ? 1:0)+(x['b']==1 ? 2:0)+(x['c']==1 ? 4:0), axis=1)
</code></pre>
|
<p>Use numpy if performance is important - first convert DataFrame to numpy array and then use <a href="https://stackoverflow.com/a/15506055/2901002">bitwise shift</a>:</p>
<pre><code>a = df.values
#pandas 0.24+
#a = df.to_numpy()
df['C'] = a.dot(1 << np.arange(a.shape[-1]))
print (df)
a b c C
0 0 1 1 6
1 0 0 1 4
2 1 1 1 7
3 1 0 0 1
</code></pre>
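<p>As for the syntax error itself: Python has no C-style <code>cond ? a : b</code> operator; the conditional expression is written <code>a if cond else b</code>, so the original (slower) <code>apply</code> approach would be:</p>
<pre><code>df['C'] = df.apply(lambda x: (1 if x['a'] == 1 else 0)
                           + (2 if x['b'] == 1 else 0)
                           + (4 if x['c'] == 1 else 0), axis=1)
</code></pre>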
|
python|pandas
| 2
|
4,872
| 56,014,057
|
Error while trying to load tensorflowJS model from local, Fetch API cannot load downloads://model. URL scheme must be"https" for CORS request error
|
<p>I am trying to load a tensorflow js model that is saved in downloads directory as mentioned in the tutorials of tensorflowjs. But I am facing cors error please find the image below.</p>
<p>Code:</p>
<pre><code><html>
<head>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js"></script>
<script>
async function app() {
const t = await tf.loadLayersModel('downloads://model');
console.log("done");
console.log(t);
}
app();
</script>
<script>
</script>
</head>
<body>
</body>
</html>
</code></pre>
<p><a href="https://i.stack.imgur.com/gPyTo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gPyTo.png" alt="enter image description here"></a></p>
<p>Any pointers on how to resolve this.</p>
<p>Tried disabling cors for chrome but still didnt work.</p>
|
<p>The protocol you specified looks invalid. You can specify <code>file://</code>, or just omitting it should also work. You also need to specify the path to the <code>model.json</code> file created by <a href="https://github.com/tensorflow/tfjs-converter" rel="nofollow noreferrer">tfjs-converter</a>. So overall, the code to load the model may look like this:</p>
<pre class="lang-js prettyprint-override"><code>const t = await tf.loadLayersModel('file:///path/to/downloads/model.json');
</code></pre>
|
google-chrome|tensorflow.js
| 0
|
4,873
| 55,705,361
|
Filter multiindex df based on std
|
<pre><code>df.groupby(['name','cat'])['valtocount'].agg('count')
</code></pre>
<p>through the above I get the following multindex df:</p>
<pre><code>name cat count
abc a 1
b 1
def a 1
c 2
</code></pre>
<p>I want to keep only the names where the std of the counts is > 0.
Do you have any suggestions?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <code>std</code> or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.SeriesGroupBy.nunique.html" rel="nofollow noreferrer"><code>SeriesGroupBy.nunique</code></a> and filtering by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>s = df.groupby(['name','cat'])['valtocount'].agg('count')
s1 = s[s.groupby(level=0).transform('std') > 0]
print (s1)
name cat
def a 1
c 2
Name: valtocount, dtype: int64
</code></pre>
<hr>
<pre><code>s1 = s[s.groupby(level=0).transform('nunique') != 1]
</code></pre>
|
python|pandas|multi-index
| 1
|
4,874
| 55,926,313
|
Duplicating rows in a DataFrame based on column value
|
<p>Below is a set of sample data I am working with:</p>
<pre><code>sample_dat = pd.DataFrame(
np.array([[1,0,1,1,1,5],
[0,0,0,0,1,3],
[1,0,0,0,1,1],
[1,0,0,1,1,1],
[1,0,0,0,1,1],
[1,1,0,0,1,1]]),
columns=['var1','var2','var3','var4','var5','cnt']
)
</code></pre>
<p>I need to change the data so the rows are duplicated according to the value in the last column. Specifically I wish for it to do be duplicated based on the value in the <code>cnt</code> column.</p>
<p>My search yielded lots of stuff about melts, splits, and other stuff. I think what I am looking for is very basic, hopefully. Please also note that I will likely have some kind of an id in the first column that will be either an integer or string.</p>
<p>For example, the first record will be duplicated 4 more times. The second record will be duplicated twice more.</p>
<p>An example of what the <code>DataFrame</code> would look like if I were manually doing it with syntax is below:</p>
<pre><code>sample_dat2 = pd.DataFrame(
np.array([[1,0,1,1,1,5],
[1,0,1,1,1,5],
[1,0,1,1,1,5],
[1,0,1,1,1,5],
[1,0,1,1,1,5],
[0,0,0,0,1,3],
[0,0,0,0,1,3],
[0,0,0,0,1,3],
[1,0,0,0,1,1],
[1,0,0,1,1,1],
[1,0,0,0,1,1],
[1,1,0,0,1,1]]),
columns=['var1','var2','var3','var4','var5','cnt']
)
</code></pre>
|
<p>Create an empty dataframe then iterate over your data, appending each row to the new dataframe x amount of times where x is the number in the 'cnt' column.</p>
<pre><code>df =pd.DataFrame()
for index, row in sample_dat.iterrows():
for x in range(row['cnt']):
df = df.append(row, ignore_index=True)
</code></pre>
<h1>Output</h1>
<pre><code>>>> df
cnt var1 var2 var3 var4 var5
0 5.0 1.0 0.0 1.0 1.0 1.0
0 5.0 1.0 0.0 1.0 1.0 1.0
0 5.0 1.0 0.0 1.0 1.0 1.0
0 5.0 1.0 0.0 1.0 1.0 1.0
0 5.0 1.0 0.0 1.0 1.0 1.0
1 3.0 0.0 0.0 0.0 0.0 1.0
1 3.0 0.0 0.0 0.0 0.0 1.0
1 3.0 0.0 0.0 0.0 0.0 1.0
2 1.0 1.0 0.0 0.0 0.0 1.0
3 1.0 1.0 0.0 0.0 1.0 1.0
4 1.0 1.0 0.0 0.0 0.0 1.0
5 1.0 1.0 1.0 0.0 0.0 1.0
</code></pre>
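<p>If speed matters on a larger frame, the same duplication can also be done without an explicit loop; a vectorized sketch using <code>Index.repeat</code>:</p>
<pre><code>df = sample_dat.loc[sample_dat.index.repeat(sample_dat['cnt'])].reset_index(drop=True)
</code></pre>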
|
python|pandas|numpy
| 0
|
4,875
| 55,972,556
|
ValueError: Cannot feed value of shape 'x' for Tensor 'y', which has shape 'z
|
<p>A complete rookie here, trying to run the code. The problem is that my shapes' dimensions do not coincide. Does anyone know which variables' dimensions should be changed?</p>
<p>I tried changing x or y dimensions right after assigning values to x and y but I still keep getting the error</p>
<pre class="lang-py prettyprint-override"><code>np.expand_dims(x, axis=1)
</code></pre>
<p>The main method:</p>
<pre class="lang-py prettyprint-override"><code>def main():
#tf.reset.default.graph()
sess = tf.Session()
x = tf.placeholder(tf.float32, shape=[None, HEIGHT, WIDTH], name="input")
y = tf.placeholder(tf.float32, shape=[None, NUM_LABELS], name="labels")
dropout = tf.placeholder(tf.float32, name="dropout")
np.expand_dims(input, axis=1)
logits = get_model(x, dropout)
with tf.name_scope('loss'):
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y), name=None)
tf.summary.scalar('loss', loss)
with tf.name_scope('train'):
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)
with tf.name_scope('accuracy'):
predicted = tf.argmax(logits, 1)
truth = tf.argmax(y, 1)
correct_prediction = tf.equal(predicted, truth)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
confusion_matrix = tf.confusion_matrix(truth, predicted, num_classes=NUM_LABELS)
tf.summary.scalar('accuracy', accuracy)
summ = tf.summary.merge_all()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter(LOGDIR)
writer.add_graph(sess.graph)
test_writer = tf.summary.FileWriter(TEST_LOGDIR)
print('Starting training\n')
batch = get_batch(BATCH_SIZE, PATH_TRAIN)
start_time = time.time()
for i in range(1, ITERATIONS + 1):
X, Y = next(batch)
if i % EVAL_EVERY == 0:
[train_accuracy, train_loss, s] = sess.run([accuracy, loss, summ], feed_dict={x: X, y: Y, dropout:0.5})
acc_and_loss = [i, train_loss, train_accuracy * 100]
print('Iteration # {}. Train Loss: {:.2f}. Train Acc: {:.0f}%'.format(*acc_and_loss))
writer.add_summary(s, i)
if i % (EVAL_EVERY * 20) == 0:
train_confusion_matrix = sess.run([accuracy, sum], feed_dict={x: X, y: Y, dropout:1.0})
header = LABEL_TO_INDEX_MAP.keys()
df = pd.DataFrame(np.reshape(train_confusion_matrix, (NUM_LABELS, NUM_LABELS)), index=i)
print('\nConfusion Matrix:\n {}\n'.format(df))
saver.save(sess, os.path.join(LOGDIR, "model.ckpt"), i)
sess.run(train_step, feed_dict={x: X, y: Y, dropout:0.5})
print('\nTotal training time {:0f} seconds\n'.format(time.time() - start_time))
batch = get_batch(BATCH_SIZE, PATH_TEST)
total_accuracy = 0
for i in range(ITERATIONS_TEST):
X, Y = next(batch, PATH_TEST)
test_accuracy, s = sess.run([accuracy, summ], feed_dict={x: X, y: Y, dropout:1.0})
print('Iteration # {}. Test Accuracy {:.0f}%'.format(i+1, test_accuracy * 100))
total_accuracy += (test_accuracy / ITERATIONS_TEST)
test_writer.add_summary(s, i)
print('\nFinal Test Accuracy: {:.0f}%').format(total_accuracy * 100)
if __name__ == '__main__':
init(PATH_TRAIN)
main()
</code></pre>
<p>The Result I get:</p>
<pre class="lang-py prettyprint-override"><code>ValueError: Cannot feed value of shape (100,) for Tensor 'input_19:0', which has shape '(?, 20, 44)'
</code></pre>
|
<p>It seems like it is complaining about feeding <code>X</code> which has shape (100,) into <code>x</code> which is required to have shape (anything, 20, 44). This variable has the name “input” noted in the error.</p>
<p><code>x</code> and <code>y</code> are tensorflow placeholders rather than numpy arrays, and their shapes are not changed in that way. A placeholder tells tensorflow to expect numpy arrays (in your case perhaps <code>X</code> and <code>Y</code>) in the specified shape. Since the shapes mismatch, you might be using the wrong data, so simply reshaping <code>X</code> might give you wrong results.</p>
<p>You will have to figure out what the shapes of <code>X</code> and <code>Y</code> actually are, and where the 20x44 data should be coming from in your dataset (or, if it should not be requiring 20x44 data, what it should be requiring instead).</p>
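<p>A quick, purely diagnostic way to see the mismatch (a sketch, assuming <code>X</code> comes straight out of <code>get_batch</code>) is to print both shapes just before the <code>sess.run</code> call:</p>
<pre><code>import numpy as np

print(np.asarray(X).shape)   # currently (100,), i.e. one value per example
print(x.shape)               # (?, 20, 44): each example must be a 20x44 array
</code></pre>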
|
python|numpy|tensorflow|neural-network|reshape
| 1
|
4,876
| 55,772,649
|
bazel build tensorflow/tools/graph_transforms:transform_graph ERROR
|
<p>I have a problem when I want to transform my model, using bazel to convert it.<br>
I try this cmd: </p>
<pre><code>$ bazel build tensorflow/tools/graph_transforms:transform_graph
</code></pre>
<p>here are the error:</p>
<blockquote>
<p>ERROR: Skipping 'tensorflow/tools/graph_transforms:transform_graph': no such package 'tensorflow/tools/graph_transforms': BUILD file not found on package path</p>
</blockquote>
<p>Can anybody tell me how to use it?
Thanks a lot.</p>
<p>Then I tried this cmd: </p>
<pre><code>$ bazel build tensorflow/tools/graph_transforms:vgg_16_10000.pb
</code></pre>
<p>I do have <code>graph_transforms</code> in the tools directory and put <code>vgg_16_10000.pb</code> in <code>graph_transforms</code>,
and I still get the same problem.</p>
|
<p>In order to run that first command (<code>bazel build</code>) you must have a BUILD file at the specified directory, which in this case is <strong><em>tensorflow/tools/graph_transforms</em></strong>. Based on only the info you've provided, chances are you haven't actually cloned or downloaded the Tensorflow repository. You cannot simply run that command if you only have installed Tensorflow using <strong>pip</strong> or <strong>conda install</strong>, since those do not include the source code.</p>
<p>That should be enough information for you to decide whether you actually want to build sources or use the python API, as shown <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/summary/graph" rel="nofollow noreferrer">here</a>.</p>
|
python|tensorflow|model|bazel
| 1
|
4,877
| 55,999,420
|
Merge small pandas dataframe into larger, copying values by rule
|
<p>There are two dataframes, both with datetime objects in either 5 min, <code>df_05min</code>, or 15 min, <code>df_15min</code>, increments.</p>
<pre><code>df_05min = pd.DataFrame({'dt':['2008-10-2404:12:30',
'2008-10-2404:12:35',
'2008-10-2404:12:40',
'2008-10-2404:12:45',
'2008-10-2404:12:50',
'2008-10-2404:13:00',
'2008-10-2404:13:05']})
df_15min = pd.DataFrame([['2008-10-2404:12:15', 'L'],
['2008-10-2404:12:30', 'r'],
['2008-10-2404:12:45', 'S' ],
['2008-10-2404:13:00', 'L'],
['2008-10-2404:13:15', 'L' ]], columns=['dt','col'])
</code></pre>
<p>The goal is to merge the <code>df_15min</code> dataframe into the <code>df_05min</code> dataframe on the datetime column, <code>dt</code>, copying some accompanying data into the appropriate rows. This is instead of an outer merge where non-matching values get <code>NaN</code>. For example, in <code>df_15min</code> '2008-10-2404:12:30' has a value of <code>'r'</code> that I would like to copy to the 5 minute values belonging to that 15 min interval in <code>df_05min</code>. This means 12:30, 12:35, and 12:40 will all have the value <code>'r'</code>. </p>
<p>The desired end product looks like this:</p>
<pre><code>df_desired = pd.DataFrame(['2008-10-2404:12:15', 'L',
'2008-10-2404:12:30', 'r',
'2008-10-2404:12:35', 'r',
'2008-10-2404:12:40', 'r',
'2008-10-2404:12:45', 'S',
'2008-10-2404:12:50', 'S',
'2008-10-2404:13:00', 'L',
'2008-10-2404:13:15', 'L'])
</code></pre>
|
<p>Here need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> with outer join, what is not implemented, so possible solution is <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a>, sort by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a>, forward filling missing values and last create default index by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p>
<pre><code>df_05min = pd.DataFrame({'dt':['2008-10-24 04:12:30',
'2008-10-24 04:12:35',
'2008-10-24 04:12:40',
'2008-10-24 04:12:45',
'2008-10-24 04:12:50',
'2008-10-24 04:13:00',
'2008-10-24 04:13:05']})
df_15min = pd.DataFrame([['2008-10-24 04:12:15', 'L'],
['2008-10-24 04:12:30', 'r'],
['2008-10-24 04:12:45', 'S' ],
['2008-10-24 04:13:00', 'L'],
['2008-10-24 04:13:15', 'L' ]], columns=['dt','col'])
df_05min['dt'] = pd.to_datetime(df_05min['dt'])
df_15min['dt'] = pd.to_datetime(df_15min['dt'])
df=pd.merge(df_05min, df_15min, how='outer').sort_values('dt').ffill().reset_index(drop=True)
print (df)
dt col
0 2008-10-24 04:12:15 L
1 2008-10-24 04:12:30 r
2 2008-10-24 04:12:35 r
3 2008-10-24 04:12:40 r
4 2008-10-24 04:12:45 S
5 2008-10-24 04:12:50 S
6 2008-10-24 04:13:00 L
7 2008-10-24 04:13:05 L
8 2008-10-24 04:13:15 L
</code></pre>
|
python|pandas
| 1
|
4,878
| 64,668,703
|
Iterate over list (2 dataframes in 1 list)
|
<p>I am importing 2 data frames at the same time:</p>
<pre><code>import pandas as pd
import numpy as np
import time
import glob
import os
msci_folder = 'C:/Users/Mike/Desktop/docs'
mscifile = glob.glob(msci_folder + "/*.csv")
dfs = []
for file in mscifile:
df = pd.read_csv(file)
dfs.append(df)
</code></pre>
<p>Now I want to apply the code which I was using on every individual data frame, but I get an error:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'loc'</p>
</blockquote>
<p>I try:</p>
<pre><code>for i, df in enumerate(dfs):
dfs = dfs.loc[dfs['URI'] == '/ID']
dfs.TIMESTAMP = dfs.TIMESTAMP.apply(lambda x: '%.3f' % x)
dfs.insert(0, 'Date', 0)
dfs['Date'] = [x[:8] for x in dfs['TIMESTAMP']]
dfs.to_csv('C:/Users/Mike/Desktop/docs/test.csv', index=False)
</code></pre>
|
<p>In your second <code>for</code> loop:</p>
<pre><code>for i, df in enumerate(dfs):
dfs = dfs.loc[dfs['URI'] == '/ID']
dfs.TIMESTAMP = dfs.TIMESTAMP.apply(lambda x: '%.3f' % x)
dfs.insert(0, 'Date', 0)
dfs['Date'] = [x[:8] for x in dfs['TIMESTAMP']]
dfs.to_csv('C:/Users/Mike/Desktop/docs/test.csv', index=False)
</code></pre>
<p>you used <code>dfs</code>, which is the initial list you created, hence the error. You should change every instance of <code>dfs</code> inside that <code>for</code> loop with <code>df</code>.</p>
<pre><code>for i, df in enumerate(dfs):
# dfs = dfs.loc[dfs['URI'] == '/ID']
df = df.loc[df['URI'] == '/ID']
# ... and so on
# save to different files
df.to_csv(f'C:/Users/Mike/Desktop/docs/test_{i}.csv', index=False)
</code></pre>
|
python|pandas|list|iteration
| 2
|
4,879
| 64,787,228
|
read excel with password in python without column number
|
<p>Hi I would like to read an Excel file with Password in Python, I found the solution here:</p>
<p><a href="https://davidhamann.de/2018/02/21/read-password-protected-excel-files-into-pandas-dataframe/" rel="nofollow noreferrer">https://davidhamann.de/2018/02/21/read-password-protected-excel-files-into-pandas-dataframe/</a></p>
<pre><code>import pandas as pd
import xlwings as xw
PATH = '/Users/me/Desktop/xlwings_sample.xlsx'
wb = xw.Book(PATH)
sheet = wb.sheets['sample']
df = sheet['A1:C4'].options(pd.DataFrame, index=False, header=True).value
df
</code></pre>
<p>which works well. However, the number of data rows will grow, and the last rows are currently empty. So how can I modify the code so that it reads the data by itself without my entering the number of rows?</p>
|
<p>I think the best way to do this is to determine which is the last non-empty row and the last non-empty column. You could do it this way:</p>
<p>In my example I use a chart df from Spotify</p>
<pre><code>import xlwings as xw
filename_read = 'C:/Users/k_sego/spotifymall.xlsx'
wb = xw.Book(filename_read)
ws = wb.sheets["spotify"]
col = ws.range('A1').end('right').column
row = ws.range('A1').end('down').row
</code></pre>
<p>Here <code>col</code> and <code>row</code> are the last non-empty column and row. Use these in your code, as in the sketch below.</p>
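<p>A sketch of how the detected corner could be plugged back into the original pattern (assuming the data starts in <code>A1</code> with a header row):</p>
<pre><code>import pandas as pd

# read everything from A1 down to the detected last row/column into a DataFrame
df = ws.range((1, 1), (row, col)).options(pd.DataFrame, index=False, header=True).value
</code></pre>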
|
python|excel|pandas|csv
| 1
|
4,880
| 64,716,913
|
Change column values in pandas df under different conditions / invert answers on 4-point likert-scale
|
<p>In a df I have some columns with answers on a 4-point Likert scale which I need to invert, meaning I need to flip the values in each row of these columns: 1->4, 2->3, 3->2, 4->1</p>
<p>I've tried this:</p>
<pre><code>questDay1Df.loc[questDay1Df['STAI_State_01'] == 4] = 1
questDay1Df.loc[questDay1Df['STAI_State_01'] == 3] = 2
questDay1Df.loc[questDay1Df['STAI_State_01'] == 2] = 3
questDay1Df.loc[questDay1Df['STAI_State_01'] == 1] = 4
</code></pre>
<p>The problem is that since it's not in the same command, the lines are of course run separately. If for example 3 was changed to 2, in the next line it will be changed to 3 again. Any ideas on how to prevent this problem / how to write a command which does all of these conditional changes at once?</p>
<p>Thank you very much in advance!</p>
|
<p>You could use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer">.replace()</a>, but I think this solution is more fun :)</p>
<pre><code>questDay1Df['STAI_State_01'] = np.abs(questDay1Df['STAI_State_01'] - 5)
</code></pre>
<br>
<p>The <code>.replace()</code> possibility is more widely applicable:
<br></p>
<pre><code>questDay1Df['STAI_State_01'] = questDay1Df['STAI_State_01'].replace({
1: 4,
2: 3,
3: 2,
4: 1,
})
</code></pre>
|
python|pandas|dataframe
| 1
|
4,881
| 64,708,946
|
Pivoting a repeating Time Series Data
|
<p><a href="https://i.stack.imgur.com/ygdhh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ygdhh.png" alt="enter image description here" /></a>I am trying to pivot this data in such a way that I get columns like eg: AK_positive AK_probableCases, AK_negative, AL_positive.. and so on.</p>
<p>You can get the data here, df = pd.read_csv('https://covidtracking.com/api/states/daily.csv')<a href="https://i.stack.imgur.com/AVpyN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AVpyN.png" alt="This is what the data looks like before pivoting (originally)" /></a></p>
|
<p>Just flatten the original MultiIndex column into tuples using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.to_flat_index.html" rel="nofollow noreferrer">.to_flat_index()</a>, and rearrange tuple elements into a new column name.</p>
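<p>(For context, <code>df_pivoted</code> is assumed here to come from a reshape along the following lines; the exact column names are guesses based on the output shown further down.)</p>
<pre><code>df = pd.read_csv('https://covidtracking.com/api/states/daily.csv')
df_pivoted = (df.set_index(['date', 'state'])[['positive', 'negative', 'grade']]
                .unstack('state'))
</code></pre>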
<pre><code>df_pivoted.columns = [f"{i[1]}_{i[0]}" for i in df_pivoted.columns.to_flat_index()]
</code></pre>
<p>Result:</p>
<pre><code># start from April
df_pivoted[df_pivoted.index >= 20200401].head(5)
AK_positive AL_positive AR_positive ... WI_grade WV_grade WY_grade
date ...
20200401 133.0 1077.0 584.0 ... NaN NaN NaN
20200402 143.0 1233.0 643.0 ... NaN NaN NaN
20200403 157.0 1432.0 704.0 ... NaN NaN NaN
20200404 171.0 1580.0 743.0 ... NaN NaN NaN
20200405 185.0 1796.0 830.0 ... NaN NaN NaN
</code></pre>
|
python-3.x|pandas|pivot|pandas-groupby|data-manipulation
| 0
|
4,882
| 40,020,270
|
Pandas groupby countif with dynamic columns
|
<p>I have a dataframe with this structure:</p>
<pre><code>time,10.0.0.103,10.0.0.24
2016-10-12 13:40:00,157,172
2016-10-12 14:00:00,0,203
2016-10-12 14:20:00,0,0
2016-10-12 14:40:00,0,200
2016-10-12 15:00:00,185,208
</code></pre>
<p>It details the number of events per IP address for a given 20 minute period. I need a dataframe of how many 20 minute periods per miner had 0 events, from which I need to derive IP 'uptime' as a percent. The number of IP addresses is dynamic. Desired output:</p>
<pre><code>IP,noEvents,uptime
10.0.0.103,3,40
10.0.0.24,1,80
</code></pre>
<p>I have tried with groupby, agg and lambda to no avail. What is the best way of doing a 'countif' by dynamic columns?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html" rel="nofollow"><code>mean</code></a> of boolean mask by condition <code>df == 0</code>. Last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> both <code>Series</code>:</p>
<pre><code>df.set_index('time', inplace=True)
mask = (df == 0)
print (mask)
10.0.0.103 10.0.0.24
time
2016-10-12 13:40:00 False False
2016-10-12 14:00:00 True False
2016-10-12 14:20:00 True True
2016-10-12 14:40:00 True False
2016-10-12 15:00:00 False False
noEvents = mask.sum()
print (noEvents)
10.0.0.103 3
10.0.0.24 1
dtype: int64
uptime = 100 * (1 - mask.mean())   # share of periods that did have events
print (uptime)
10.0.0.103    40.0
10.0.0.24     80.0
dtype: float64
print (pd.concat([noEvents, uptime], axis=1, keys=('noEvents','uptime'))
.reset_index()
.rename(columns={'index':'IP'}))
IP noEvents uptime
0 10.0.0.103 3 40.0
1 10.0.0.24 1 80.0
</code></pre>
|
python|pandas|sum|multiple-columns|mean
| 3
|
4,883
| 44,222,313
|
pandas more efficient way to create dictionary object from csv to send as a post request
|
<p>Sample data:</p>
<pre><code>data = {'account': {0: 'ted',
1: 'ned',
2: 'bed',
3: 'fred',
4: 'med'},
'account_type': {0: 'Enterprise',
1: 'Enterprise',
2: 'Enterprise',
3: '',
4: 'Mid-Market'},
'rep': {0: 'bob', 1: 'sam', 2: 'sam', 3: 'bob', 4: 'tim'},
'id': {0: 5542, 1: 7118, 2: 5510, 3: 5872, 4: 5766},
'industry': {0: 'Electronics', 1: 'Retail', 2: '', 3: 'Books', 4: ''}}
df = pd.DataFrame(data=data)
</code></pre>
<p>I created my desired output by doing the following:</p>
<pre><code>properties = {'app_id':'12345','users':[]}
for i in df.index:
_id = np.asscalar(np.int64(df.loc[i,'id']))
properties['users'].append(
{
'id': _id,
'properties': {
'account': df.loc[i, 'account'],
'rep': df.loc[i, 'rep'],
'account_type': df.loc[i, 'account_type'],
'industry': df.loc[i, 'industry']
}
}
)
</code></pre>
<p>I feel like this is incredibly uninspiring and would like to know what would go into a more elegant solution that doesn't necessarily require a loop. </p>
|
<p>A bit more succinct solution using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>pandas.DataFrame.apply()</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html" rel="nofollow noreferrer"><code>pandas.Series.to_dict()</code></a>:</p>
<pre><code>def build_users(row):
properties = row.to_dict()
del properties['id']
return dict(id=row.id, properties=properties)
properties = {
'app_id': '12345',
'users': list(df.apply(build_users, axis=1)),
}
</code></pre>
<p>And using comprehensions:</p>
<pre><code>properties = {
'app_id': '12345',
'users': [dict(
id=i[0],
properties=dict(
account=i[1],
rep=i[2],
account_type=i[3],
industry=i[4],
)
) for i in zip(df.id, df.account, df.rep, df.account_type, df.industry)]
}
</code></pre>
|
python|pandas
| 1
|
4,884
| 69,394,979
|
translate docker run command (tensorflow-serving) into docker-compose
|
<p>Here is the docker run command:</p>
<pre><code>docker run -p 8501:8501 \
--name tfserving_classifier \
-e MODEL_NAME=img_classifier \
-t tensorflow/serving
</code></pre>
<p>Here is what I tried but I am not able to get the MODEL_NAME to work</p>
<pre><code> tensorflow-servings:
container_name: tfserving_classifier
ports:
- 8501:8501
command:
- -e MODEL_NAME=img_classifier
- -t tensorflow/serving
</code></pre>
|
<pre><code> tensorflow-servings:
container_name: tfserving_classifier
image: tensorflow/serving
environment:
- MODEL_NAME=img_classifier
ports:
- 8501:8501
</code></pre>
|
docker|tensorflow|tensorflow-serving
| 1
|
4,885
| 69,336,227
|
How to convert string representation list with mixed values to a list?
|
<p>How can I convert a string that contains values that are both strings and numeric, given that the strings within the list are not in quotes?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col_1': ['[2, A]', '[5, BC]']})
print(df)
col_1
0 [2, A]
1 [5, BC]
col_1 [2, A]
Name: 0, dtype: object
</code></pre>
<p>My aim is to use the list in another function, so I tried to transform the string with built-in functions such as eval() or ast.literal_eval(), however in both cases I need to add quotes around the strings, so it is "A" and "BC".</p>
|
<p>You can first use a regex to add quotes around the potential strings (here I used letters + underscore), then use <code>literal_eval</code> (for some reason I have an error with <code>pd.eval</code>)</p>
<pre><code>from ast import literal_eval
df['col_1'].str.replace(r'([a-zA-Z_]+)', r'"\1"', regex=True).apply(literal_eval)
</code></pre>
<p>output (lists):</p>
<pre><code>0 [2, A]
1 [5, BC]
</code></pre>
|
python|pandas
| 1
|
4,886
| 54,014,341
|
unable to change dtype pandas python
|
<p>I'm working with a dataframe in <code>pandas</code> and I have a column with an <code>int64</code> data type. I need to convert this data type to a string so that I can slice the characters, taking the first 3 chars of the 5 character column. The code is as follows: </p>
<pre><code>trainer_pairs[:, 'zip5'] = trainer_pairs.zip5.astype(dtype='object')
trainer_pairs.zip5.dtype
dtype('O')
</code></pre>
<p>I have confirmed the data type is an <code>object</code>, but when i try to use <code>str.slice()</code> on the column, I still get this: </p>
<pre><code>0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
</code></pre>
<p>How can I successfully update the data type so that I can run this string method? </p>
|
<p>Here you should use <code>astype(str)</code></p>
<pre><code>trainer_pairs['zip5'] = trainer_pairs.zip5.astype(str)
</code></pre>
<hr>
<p>About your error: </p>
<pre><code>df=pd.DataFrame({'zip':[1,2,3,4,5]})
df.zip.astype(object)
Out[4]:
0 1
1 2
2 3
3 4
4 5
Name: zip, dtype: object
</code></pre>
<p>Even after converting to object, the values are still <code>int</code>; slicing values of type <code>int</code> or <code>float</code> will return <code>NaN</code>. Please check: </p>
<pre><code>df.zip.astype(object).apply(type)
Out[5]:
0 <class 'int'>
1 <class 'int'>
2 <class 'int'>
3 <class 'int'>
4 <class 'int'>
Name: zip, dtype: object
df.zip.astype(str).apply(type)
Out[6]:
0 <class 'str'>
1 <class 'str'>
2 <class 'str'>
3 <class 'str'>
4 <class 'str'>
Name: zip, dtype: object
</code></pre>
|
string|pandas|slice
| 1
|
4,887
| 66,277,350
|
pandas: I need to slice single column into two column separated by commas
|
<p>I have a pandas data frame which includes a column with two values in it. That looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Co-Ordinates</th>
</tr>
</thead>
<tbody>
<tr>
<td>23.821352807207695, 90.40975987926335</td>
</tr>
<tr>
<td>23.812076990866696, 90.43087907325717</td>
</tr>
</tbody>
</table>
</div>
<p>I want to make extra two columns from this existing column using its values which will be looked like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Co-Ordinates</th>
<th>Lat</th>
<th>Long</th>
</tr>
</thead>
<tbody>
<tr>
<td>23.821352807207695, 90.40975987926335</td>
<td>23.821352807207695</td>
<td>90.40975987926335</td>
</tr>
<tr>
<td>23.812076990866696, 90.43087907325717</td>
<td>23.812076990866696</td>
<td>90.43087907325717</td>
</tr>
</tbody>
</table>
</div>
<p>I have searched for this problem but didn't find any solution. Or maybe I don't know the exact term to search for. I need help to solve this problem with pandas.</p>
|
<p>Use this code:
<a href="https://www.w3resource.com/pandas/series/series-str-split.php" rel="nofollow noreferrer">Str.split()</a></p>
<pre><code>df = pd.DataFrame(data = ['23.821352807207695, 90.40975987926335','23.812076990866696, 90.43087907325717'], columns = ['Coordinates'])
df[['Lat','Long']] = df['Coordinates'].str.split(',',expand=True)
df
</code></pre>
<p>Output:</p>
<pre><code> Coordinates Lat Long
0 23.821352807207695, 90.40975987926335 23.821352807207695 90.40975987926335
1 23.812076990866696, 90.43087907325717 23.812076990866696 90.43087907325717
</code></pre>
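<p>If the coordinates are needed as numbers rather than strings (the split produces strings), an optional extra cast can be added:</p>
<pre><code>df[['Lat', 'Long']] = df[['Lat', 'Long']].astype(float)
</code></pre>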
|
python|pandas|dataframe
| 1
|
4,888
| 66,160,791
|
Merging excel sheets using pandas
|
<p>I have a quick script using Python and pandas that's supposed to compare two Excel sheets, grab the information that I need, and create a new file. However, when it creates the new file, or if I just print it for testing, one of the columns comes back empty depending on how I merge (left or right).</p>
<pre><code>import pandas as pd

base_data = pd.read_excel("UpdatedList.xls")    # this sheet has Names and clock number
today_data = pd.read_excel("LocationUP.xlsx")   # this sheet has Names and where employees are working
# merge the two frames into a new one, using the "Names" column as the merging key
merge_data = base_data[["Names", "Clock Number"]].merge(today_data[["Names", "Job"]], on="Names", how="right")
# merge_data.to_excel("EmployeeLocationInner.xlsx", index=False)
print(merge_data.to_string())
</code></pre>
|
<p>I haven't double-checked your merge, but rather than sending your merged data to a string, you should write it out with <code>DataFrame.to_excel()</code>.</p>
<p>something like:</p>
<pre><code>merge_data.to_excel('merged_data.xlsx', sheet_name='merged')
</code></pre>
<p>If you need to save multiple sheets, look at the documentation - there are directions there.
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html</a></p>
<p>As for merges, the join type (left, right, inner, outer) determines which rows are kept, so check that you are using the one that keeps the data you want. It can help to try each type and compare the results (see the small example below), and to read up on the different join types. Here is a nice reference, or just search for SQL join types:
<a href="https://www.w3schools.com/sql/sql_join.asp" rel="nofollow noreferrer">https://www.w3schools.com/sql/sql_join.asp</a></p>
<p>For reference, pd.merge()
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html</a></p>
|
python|excel|pandas
| 1
|
4,889
| 66,030,470
|
Plotly: How to show other values than counts for marginal histogram?
|
<p>I am trying to create a linked marginal plot above the original plot, with the same x axis but with a different y axis.</p>
<p>I've seen that in <code>plotly.express</code> package there are 4 options in which you can create marginal_x plot on a scatter fig, but they are all based on the same columns as x and y.</p>
<p>In my case, I have a date on my x-axis and rate of something on my y-axis, and I am trying to produce a marginal histogram of the samples that this rate is based on (located in the samples column of the df).</p>
<p>I'm simplifying what I've tried without lessening any important details:</p>
<pre><code>import pandas as pd
import plotly.express as px
df = pd.DataFrame(
{
"date": [pd.Timestamp("20200102"), pd.Timestamp("20200103")],
"rate": [0.88, 0.96],
"samples": [130, 1200])
}
)
fig = px.scatter(df, x='date', y='rate', marginal_x='histogram')
fig.show()
</code></pre>
<p>The documentation I based on: <a href="https://plotly.com/python/marginal-plots/" rel="nofollow noreferrer">https://plotly.com/python/marginal-plots/</a></p>
<p>My desired result:
<a href="https://i.stack.imgur.com/VMOHo.png" rel="nofollow noreferrer">Example:</a></p>
<p><a href="https://i.stack.imgur.com/sF6oZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sF6oZ.png" alt="enter image description here" /></a></p>
<p>The difference is that I use an aggregated df, so my count is just 1, instead of being the number of samples.</p>
<p>Any ideas?</p>
<p>Thanks!</p>
|
<p>I'm understanding your statement</p>
<blockquote>
<p>[...] and rate of something on my y-axis</p>
</blockquote>
<p>... to mean that you'd like to display a value on your histogram that is <em>not</em> count.</p>
<p><code>marginal_x='histogram'</code> in <code>px.scatter()</code> seems to default to showing counts <em>only</em>, meaning that there is no straightforward way to show values of individual observations. But if you're willing to use <code>fig = make_subplots()</code> in combination with <code>go.Scatter()</code> and <code>go.Bar()</code>, then you can easily build this:</p>
<h3>Plot</h3>
<p><a href="https://i.stack.imgur.com/pslLj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pslLj.png" alt="enter image description here" /></a></p>
<h3>Complete code:</h3>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime, timedelta
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=2, cols=1,
row_heights=[0.2, 0.8],
vertical_spacing = 0.02,
shared_yaxes=False,
shared_xaxes=True)
df = pd.DataFrame(
{
"date": [pd.Timestamp("20200102"), pd.Timestamp("20200103")],
"rate": [0.88, 0.96],
"samples": [130, 1200]
}
)
fig.add_trace(go.Bar(x=df['date'], y=df['rate'], name = 'rate'), row = 1, col = 1)
fig.update_layout(bargap=0,
bargroupgap = 0,
)
fig.add_trace(go.Scatter(x=df['date'], y=df['samples'], name = 'samples'), row = 2, col = 1)
fig.update_traces(marker_color = 'rgba(0,0,250, 0.3)',
marker_line_width = 0,
selector=dict(type="bar"))
fig.show()
</code></pre>
|
python|pandas|plotly|plotly-python|plotly.graph-objects
| 4
|
4,890
| 52,784,378
|
Pandas reading a specific line in python with a csv file
|
<p>I want to read a specific line in a csv file with pandas in Python. Here on this image I want to read the 19010101 date <a href="https://i.stack.imgur.com/lBuTQ.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You can specify the column you want to use</p>
<pre><code>import pandas as pd

df = pd.read_csv('yourcsv.csv')
dates = df['DATE'].values.tolist()
print(dates[0])
</code></pre>
<p>Bunch of ways to do this, just depends on your requirements.
<a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">Iloc</a> function might be of use to you too. </p>
|
python|pandas
| 1
|
4,891
| 52,698,072
|
Dynamically building indexes to classify records in pandas
|
<p>I am trying to write a simple record classifier. I want to add a column whose value classifies a record. I want to codify my classification rules in a yaml, or similar file for maintenance purposes.</p>
<p>I am using Pandas as that seems to be the best way to do this with csv records in python. I am open to other suggestions. I am new to pandas and my python skills are politely described as "why does this look like perl?"</p>
<p>I've gotten a dataframe (trans) and I want to apply my rules as follows:</p>
<p><code>trans['class'][(trans['foo'] > 5) & (trans['bar'].str.contains(re.compile('baz|one|two', re.I))] = 'Record Type 1'</code></p>
<p>This works interactively. I would like to be able to generate the classifying index, <code>"(trans['foo'] > 5) & (trans['bar'].str.contains(re.compile('baz|one|two', re.I))"</code> dynamically from each rule in my yaml file. I have successfully built strings such that I have things like:</p>
<p><code>slice = "(trans['foo'] > 5) & (trans['bar'].str.contains(re.compile('baz|one|two', re.I))"
trans['class'][slice] = 'Record Type 1'</code></p>
<p>This doesn't work. What should I be doing instead?</p>
|
<p>Some points to note:</p>
<ol>
<li>Quotation marks denote strings in Python. Don't use them to surround calculation of Boolean masks.</li>
<li>Don't use chained indexing. It's <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">explicitly discouraged</a> in the docs and can lead to unexpected side-effects, or ambiguity as to whether you are modifying a view or a copy. You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>pd.DataFrame.loc</code></a> instead.</li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>pd.Series.str.contains</code></a> already supports regex and defaults to <code>regex=True</code>, you don't need to use the <code>re</code> module.</li>
</ol>
<p>For readability, you can split and combine masks. Here's an example:</p>
<pre><code>m1 = trans['foo'] > 5
m2 = trans['bar'].str.contains('baz|one|two', case=False)
trans.loc[m1 & m2, 'class'] = 'Record Type 1'
</code></pre>
<p>The usually expensive part, calculation of <code>m2</code>, can be optimized by resorting to specialist algorithms, see <a href="https://stackoverflow.com/a/48600345/9209546">this answer</a> for details.</p>
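<p>To generate the masks dynamically from a rules file, one option is to keep the rule parameters as plain data and build the Boolean masks in code rather than as strings. A minimal sketch, assuming the YAML deserializes into a list of dicts shaped like <code>rules</code> below (all key names here are invented for illustration):</p>
<pre><code>from functools import reduce
import operator

# hypothetical rule spec, as it might come out of a YAML file
rules = [
    {'class': 'Record Type 1',
     'numeric': {'column': 'foo', 'op': 'gt', 'value': 5},
     'pattern': {'column': 'bar', 'regex': 'baz|one|two'}},
]

ops = {'gt': operator.gt, 'lt': operator.lt, 'eq': operator.eq}

for rule in rules:
    num, pat = rule['numeric'], rule['pattern']
    masks = [
        ops[num['op']](trans[num['column']], num['value']),
        trans[pat['column']].str.contains(pat['regex'], case=False),
    ]
    trans.loc[reduce(operator.and_, masks), 'class'] = rule['class']
</code></pre>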
|
python|string|pandas|indexing|series
| 2
|
4,892
| 52,476,573
|
selenium pandas dataframe constructor not properly called
|
<p>The purpose of this code is to scrape a web page and extract data from a table, then convert it to a pandas data frame.</p>
<p>The scraping and data extracting went well.</p>
<p>The output is like this:</p>
<pre><code>Release Date         Time   Actual  Forecast  Previous
Sep 09, 2018 (Aug)   21:30   0.7%    0.5%      0.3%
Aug 08, 2018 (Jul)   21:30   0.3%    0.2%     -0.1%
Jul 09, 2018 (Jun)   21:30  -0.1%    0.1%     -0.2%
Jun 08, 2018 (May)   21:30  -0.2%   -0.1%     -0.2%
May 09, 2018 (Apr)   21:30  -0.2%   -0.1%     -1.1%
Apr 10, 2018 (Mar)   21:30  -1.1%   -0.5%      1.2%
Mar 08, 2018 (Feb)   21:30   1.2%    0.8%      0.6%
Feb 08, 2018 (Jan)   21:30   0.6%    0.7%      0.3%
</code></pre>
<p>But when I tried to convert it to data frame I got an error.</p>
<p>Here is the code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
url = 'https://www.investing.com/economic-calendar/chinese-cpi-743'
driver = webdriver.Chrome(r"D:\Projects\Tutorial\Driver\chromedriver.exe")
driver.get(url)
wait = WebDriverWait(driver,10)
while True:
try:
item = wait.until(EC.visibility_of_element_located((By.XPATH,'//*[contains(@id,"showMoreHistory")]/a')))
driver.execute_script("arguments[0].click();", item)
except Exception:break
for table in wait.until(EC.visibility_of_all_elements_located((By.XPATH,'//*[contains(@id,"eventHistoryTable")]//tr'))):
data = [item.text for item in table.find_elements_by_xpath(".//*[self::td or self::th]")]
for data in data:
df = pd.DataFrame(data.strip(), columns=['Release Date', 'Time', 'Actual', 'Forecast', 'Previous'])
print(df)
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
  File "D:/Projects/Tutorial/ff.py", line 22, in <module>
    df = pd.DataFrame(data.strip(), columns=['Release Date', 'Time', 'Actual', 'Forecast', 'Previous'])
  File "C:\Users\Sayed\Anaconda3\lib\site-packages\pandas\core\frame.py", line 422, in __init__
    raise ValueError('DataFrame constructor not properly called!')
ValueError: DataFrame constructor not properly called!
</code></pre>
|
<p>Just make changes to the last part</p>
<pre><code>df = pd.DataFrame(columns=['Release Date', 'Time', 'Actual', 'Forecast', 'Previous'])
pos = 0
for table in wait.until(EC.visibility_of_all_elements_located((By.XPATH,'//*[contains(@id,"eventHistoryTable")]//tr'))):
data = [item.text for item in table.find_elements_by_xpath(".//*[self::td]")]
if data:
df.loc[pos] = data[0:5]
pos+=1
print(df)
</code></pre>
|
python|pandas|selenium
| 1
|
4,893
| 46,348,826
|
Using SQL commands in Python
|
<p>I used the following code in iPython in order to get some information from a database's table in the form of a pandas dataframe.</p>
<pre><code>import sqlite3
import pandas as pd

con = sqlite3.connect('-----.db')
a = pd.read_sql('SELECT * FROM table1', con)
c = con.cursor()
</code></pre>
<p>I have table 1 as a dataframe named a. However, I need to carry out a number of inner joins between different tables from the database. My question would be how to use SQL commands within iPython using these dataframes? I tried c.execute(''' sql command for inner join''') but the error says that the dataframes mentioned are not tables.
Any help?</p>
|
<p>You just write the full sql command directly using read_sql.</p>
<pre><code>sql = """
select col1 from
tablea inner join tableb
on tablea.col2 = tableb.col2
where tablea.col3 < 10
limit 10
"""
a = pd.read_sql(sql, con)
</code></pre>
|
sqlite|pandas
| 0
|
4,894
| 46,269,491
|
Tensorflow Serving of Saved Model ssd_mobilenet_v1_coco
|
<p>I have looked at several posts on Stack Overflow and have been at it for a few days now, but alas, I'm not able to properly serve an object detection model through TensorFlow Serving. </p>
<p>I have visited the following links:
<a href="https://stackoverflow.com/questions/45362726/how-to-properly-serve-an-object-detection-model-from-tensorflow-object-detection">How to properly serve an object detection model from Tensorflow Object Detection API?</a></p>
<p>and </p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/11863" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/11863</a></p>
<p>Here's what I have done.</p>
<p>I have downloaded the ssd_mobilenet_v1_coco_11_06_2017.tar.gz, which contains the following files:</p>
<pre><code>frozen_inference_graph.pb
graph.pbtxt
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
</code></pre>
<p>Using the following script, I was able to successfully convert the frozen_inference_graph.pb to a SavedModel (under directory ssd_mobilenet_v1_coco_11_06_2017/saved)</p>
<pre><code>import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
import ipdb
# Specify version 1
export_dir = './saved/1'
graph_pb = 'frozen_inference_graph.pb'
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.gfile.GFile(graph_pb, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
sigs = {}
with tf.Session(graph=tf.Graph()) as sess:
# name="" is important to ensure we don't get spurious prefixing
tf.import_graph_def(graph_def, name="")
g = tf.get_default_graph()
ipdb.set_trace()
inp = g.get_tensor_by_name("image_tensor:0")
outputs = {}
outputs["detection_boxes"] = g.get_tensor_by_name('detection_boxes:0')
outputs["detection_scores"] = g.get_tensor_by_name('detection_scores:0')
outputs["detection_classes"] = g.get_tensor_by_name('detection_classes:0')
outputs["num_detections"] = g.get_tensor_by_name('num_detections:0')
output_tensor = tf.concat([tf.expand_dims(t, 0) for t in outputs], 0)
# or use tf.gather??
# out = g.get_tensor_by_name("generator/Tanh:0")
sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
tf.saved_model.signature_def_utils.predict_signature_def(
{"in": inp}, {"out": output_tensor} )
sigs["predict_images"] = \
tf.saved_model.signature_def_utils.predict_signature_def(
{"in": inp}, {"out": output_tensor} )
builder.add_meta_graph_and_variables(sess,
[tag_constants.SERVING],
signature_def_map=sigs)
builder.save()
</code></pre>
<p>I get the following error:</p>
<pre><code>bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
--port=9000 --model_base_path=/serving/ssd_mobilenet_v1_coco_11_06_2017/saved
2017-09-17 22:33:21.325087: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:268] No versions of servable default found under base path /serving/ssd_mobilenet_v1_coco_11_06_2017/saved/1
</code></pre>
<p>I understand I will need a client to connect to the server to do the prediction. However, I'm not even able to serve the model properly. </p>
|
<p>You need to change the export signature somewhat from what the original post did. This script does the necessary changes for you:</p>
<pre><code>OBJECT_DETECTION_CONFIG=object_detection/samples/configs/ssd_mobilenet_v1_pets.config

python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path ${OBJECT_DETECTION_CONFIG} \
    --trained_checkpoint_prefix ${YOUR_LOCAL_CHK_DIR}/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory ${YOUR_LOCAL_EXPORT_DIR}
</code></pre>
<p>For more details on what the program is doing, see:</p>
<p><a href="https://cloud.google.com/blog/big-data/2017/09/performing-prediction-with-tensorflow-object-detection-models-on-google-cloud-machine-learning-engine" rel="nofollow noreferrer">https://cloud.google.com/blog/big-data/2017/09/performing-prediction-with-tensorflow-object-detection-models-on-google-cloud-machine-learning-engine</a></p>
|
tensorflow|object-detection|tensorflow-serving|google-cloud-ml
| 4
|
4,895
| 58,555,782
|
Using generators on networks ending with tensorflow probability layer
|
<p>I have a network which ends with a probability layer, something like this: </p>
<pre class="lang-py prettyprint-override"><code>model = tfk.Sequential([
tfkl.InputLayer(10),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(2)),
tfpl.MultivariateNormalTriL(2)])
</code></pre>
<p>And I am creating my dataset from a generator:</p>
<pre class="lang-py prettyprint-override"><code>data_generator = tf.data.Dataset.from_generator(
data_generator,
output_types=(tf.float32, tf.float32)
).batch(batch_size)
</code></pre>
<p>I am able to fit the model with <code>fit_generator</code> but how can I do the same for prediction? (I have tried <code>predict_generator</code>, but no success)</p>
<p>Moreover, if <code>X</code> is a numpy array I can just do <code>model(X).sample(100)</code>. I want to do the same with my generator, something like <code>model(data_generator).sample(100)</code>. Any idea? </p>
|
<p>Maybe you could add one more layer:</p>
<pre><code>model = tfk.Sequential([
tfkl.InputLayer(10),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(2)),
tfpl.MultivariateNormalTriL(2),
tfkl.Lambda(lambda x: tf.transpose(x.sample(100), perm=[1,0,2]))])
</code></pre>
<p>and then use </p>
<pre><code>model.predict_generator(data_generator, steps=steps)
</code></pre>
<p>Or do you need to do more with the resulting distribution than just sample?</p>
<p>Edit: because in that case, maybe you could just create a generator that feeds batches to the model as numpy arrays and yields the resulting distributions?</p>
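<p>A minimal sketch of that last idea (assuming the dataset yields <code>(features, targets)</code> batches, as in the question):</p>
<pre><code>def distribution_generator(dataset, model):
    # iterate the tf.data pipeline and call the model on each batch;
    # each call returns a tfp distribution that can be sampled, e.g. dist.sample(100)
    for x_batch, _ in dataset:
        yield model(x_batch)
</code></pre>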
|
python|tensorflow|tensorflow-datasets|tf.keras|tensorflow-probability
| 0
|
4,896
| 58,398,772
|
numpy array: efficient way to compute argmax within a fixed window at a set of rows and columns given as input
|
<p>Let's assume we have a 2D array <code>arr</code>, a set of positions defined by <code>rows</code> and <code>cols</code>, and a window of shape <code>(5, 1)</code>. For <code>(i, j)</code>, we need the index of the max value within <code>arr[i-2:i+2, j]</code>. We want to repeat this for all input <code>(i, j)</code> pairs.</p>
<pre><code>import numpy as np
def get_max_index(arr, i, j, stride):
nr, nc = arr.shape
i_low = max(0, i - stride)
i_up = min(i + stride, nr)
idx = np.argmax(arr[i_low:i_up, j])
# transform back to original index.
return i_low + idx, j
# Given numpy array
arr = np.array([
[1,2,3,6,7],
[4,5,6,15,8],
[7,8,9,24,9],
[1,1,1,3,10],
[2,2,2,6,11],
[3,3,3,9,12],
[4,4,4,4,42]
])
# Rows and columns at which the windows will be centered.
rows = np.array([2, 4, 6, 6])
cols = np.array([1, 1, 3, 4])
# Measure corresponding to window of size 5
stride = 2
# Apply the function on the input rows and cols.
res = [get_max_index(arr, i, j, stride) for i, j in zip(rows, cols)]
assert res == [(2, 1), (2, 1), (5, 3), (6, 4)]
</code></pre>
<p>I was curious if there's a faster <code>numpy</code> way of doing this instead of using list comprehension.</p>
<p>It has some semblance to "<a href="https://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.dilation" rel="nofollow noreferrer">morphological dilation</a>" but here it's on a subset of the array cells and we want the indices.</p>
|
<p>We can leverage <a href="http://www.scipy-lectures.org/advanced/advanced_numpy/#indexing-scheme-strides" rel="nofollow noreferrer"><code>np.lib.stride_tricks.as_strided</code></a> based <a href="http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_windows" rel="nofollow noreferrer"><code>scikit-image's view_as_windows</code></a> to get sliding windowed views into a minimum-value-padded version of the input (to account for boundary cases) and then get argmax values in those and offset against the given rows. </p>
<p>Hence, the implementation would be -</p>
<pre><code>from skimage.util.shape import view_as_windows
def windowed_argmax(arr, rows, cols, stride):
# Get full window extent
W = 2*stride+1
# Pad with minimum value, so that on boundaries we will skip those
a = np.pad(arr,((stride,stride),(0,0)),'constant',constant_values=arr.min()-1)
# Get sliding windows
w = view_as_windows(a,(W,1))[...,0]
# Index into those specific rows, cols positions; get argmax, offset back
return np.c_[rows+w[rows,cols].argmax(1)-stride,cols]
</code></pre>
<p>Sample run -</p>
<pre><code>In [75]: arr
Out[75]:
array([[ 1, 2, 3, 6, 7],
[ 4, 5, 6, 15, 8],
[ 7, 8, 9, 24, 9],
[ 1, 1, 1, 3, 10],
[ 2, 2, 2, 6, 11],
[ 3, 3, 3, 9, 12],
[ 4, 4, 4, 4, 42]])
In [76]: rows
Out[76]: array([2, 4, 6, 6])
In [77]: cols
Out[77]: array([1, 1, 3, 4])
In [78]: windowed_argmax(arr, rows, cols, stride=2)
Out[78]:
array([[2, 1],
[2, 1],
[5, 3],
[6, 4]])
</code></pre>
|
python|arrays|numpy
| 1
|
4,897
| 58,547,216
|
Trouble importing Excel fields into Python via Pandas - index out of bounds error
|
<p>I'm not sure what happened, but my code worked earlier today; now it won't. I have an Excel spreadsheet of projects I want to individually import and put into lists. However, I'm getting an "IndexError: index 8 is out of bounds for axis 0 with size 8" error, and Google searches have not resolved this for me. Any help is appreciated. I have the following fields in my Excel sheet: id, funding_end, keywords, pi, summaryurl, htmlabstract, abstract, project_num, title. Not sure what I'm missing... </p>
<pre><code>import pandas as pd
dataset = pd.read_excel('new_ahrq_projects_current.xlsx',encoding="ISO-8859-1")
df = pd.DataFrame(dataset)
cols = [0,1,2,3,4,5,6,7,8]
df = df[df.columns[cols]]
tt = df['funding_end'] = df['funding_end'].astype(str)
tt = df.funding_end.tolist()
for t in tt:
allenddates.append(t)
bb = df['keywords'] = df['keywords'].astype(str)
bb = df.keywords.tolist()
for b in bb:
allkeywords.append(b)
uu = df['pi'] = df['pi'].astype(str)
uu = df.pi.tolist()
for u in uu:
allpis.append(u)
vv = df['summaryurl'] = df['summaryurl'].astype(str)
vv = df.summaryurl.tolist()
for v in vv:
allsummaryurls.append(v)
ww = df['htmlabstract'] = df['htmlabstract'].astype(str)
ww = df.htmlabstract.tolist()
for w in ww:
allhtmlabstracts.append(w)
xx = df['abstract'] = df['abstract'].astype(str)
xx = df.abstract.tolist()
for x in xx:
allabstracts.append(x)
yy = df['project_num'] = df['project_num'].astype(str)
yy = df.project_num.tolist()
for y in yy:
allprojectnums.append(y)
zz = df['title'] = df['title'].astype(str)
zz = df.title.tolist()
for z in zz:
alltitles.append(z)
</code></pre>
|
<blockquote>
<p>"IndexError: index 8 is out of bounds for axis 0 with size 8" </p>
</blockquote>
<pre><code>cols = [0,1,2,3,4,5,6,7,8]
</code></pre>
<p>should be <code>cols = [0,1,2,3,4,5,6,7]</code>.</p>
<p>I think you have 8 columns, but your <code>cols</code> list contains 9 column indices.</p>
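<p>If the intent is simply to keep every column, the positions do not need to be hard-coded at all (a small optional sketch):</p>
<pre><code>cols = list(range(df.shape[1]))  # adapts to however many columns the sheet has
df = df[df.columns[cols]]
</code></pre>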
|
python|excel|pandas|numpy|text
| 3
|
4,898
| 69,072,900
|
How to create a dictionary from a xls file with multiple rows/columns
|
<p>I'm trying to create a dictionary of dictionaries from a file with multiple columns.
Basically what I want is to have the first column as the key and the remaining columns as values.
I am stuck on the following:</p>
<pre><code>import pandas as pd
from openpyxl import load_workbook
from pandas import DataFrame
epitope_dicty = pd.read_excel('file.xlsx', engine='openpyxl', sheet_name=None, header=1)
dicty_of = {}
for name, sheet in epitope_dicty.items():
dicty_sheet = {}
sheet['sheet'] = name
first_column_sheet = sheet.iloc[:, 0].dropna()
# first_column_sheet_list = first_column_sheet.tolist()
# dicty_sheet[name] = first_column_sheet_list
</code></pre>
<p>That is the excel file that I have:</p>
<pre><code>Name Col_1 Col_2 Col_3 Col_4 Col_4
row_1 2.001557961 4.068187426 4.03822587 -0.081289848 -0.029738309
row_2 2.075372353 4.055443241 5.082902764 2.01773175 1.06956111
row_3 2.037347173 3.012477155 2.085767079 0.081567704 0.035155619
row_4 2.088449675 4.083901034 4.045767022 1.047520556 1.023808368
row_5 2.029925701 4.042756058 5.07873749 0.039559598 0.021102551
</code></pre>
<p>What I want is a dictionary of dictionaries like that:</p>
<pre><code>[out]:
'Col_1':{'row_1': '2.001557961','row_2':'2.075372353',...},'Col_2':{'row_1': '4.068187426','row_2':'4.055443241',...},...,'Col_4':{'row_1': '-0.029738309',...,'row_5':'0.021102551'}
</code></pre>
<p>Does any of you have a clue on how I can achieve this dict of dicts?
Thanks!</p>
|
<p>There is a method that does exactly that, <em>DataFrame.to_dict</em>.</p>
<p>Check this: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html</a></p>
<pre><code>>>> df = pd.DataFrame({'col1': [1, 2],
'col2': [0.5, 0.75]},
index=['row1', 'row2'])
>>> df
col1 col2
row1 1 0.50
row2 2 0.75
>>> df.to_dict()
{'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}}
</code></pre>
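<p>Applied to the sheets in the question, this might look roughly as follows (assuming each sheet's first column is read in under the header <code>Name</code>, as in the sample):</p>
<pre><code>dicty_of = {name: sheet.set_index('Name').to_dict()
            for name, sheet in epitope_dicty.items()}
</code></pre>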
|
python|pandas|dictionary|openpyxl
| 0
|
4,899
| 69,073,297
|
Pandas: Compare rows within groups in a dataframe and create summary rows to mark / highlight different entries in group
|
<p>I have a pandas dataframe with approx. 1200 rows, where some of the rows are duplicated multiple times. The df looks like this:</p>
<pre><code>ID Serial Age Grade Chem Bio Math Phy
M001 2 52 37 1 1 1 1
M001 2 55 37 2 1 0 1
M001 3 51 36,5 1 1 1 0
M001 3 51 46,5 1 0 1 1
M041 2 52 36,1 1 1 0 0
M041 2 51 36,1 2 1 2 4
M041 2 52 36,1 1 1 0
M041 2 52 36,1 1 1 1
M010 5 58 37,4 0 1 1 3
M010 5 55 39,4 1 2 1 1
M010 5 58 37,4 1 1 1 1
</code></pre>
<p>The duplicates in the dataframe are supposed to be identified by the ID and Serial columns, and I was able to do that. However, I would like to compare each group of ID+Serial rows to find where they differ. This is a bit tricky as sometimes there are only 2 rows to compare within a group and sometimes there are more than 2.</p>
<p>I am interested in a solution where I can group the dataframe by the ID and Serial columns and then compare rows within each group. If there is a difference between two or more cells, it needs to be recorded (e.g., in a new row with an X below the conflicting cells, or perhaps by highlighting the cells in red). The resulting dataframe should look something like this:</p>
<pre><code>ID Serial Age Grade Chem Bio Math Phy
M001 2 52 37 1 1 1 1
M001 2 55 37 2 1 0 1
M001 2 X X X
M001 3 51 36,5 1 1 1 0
M001 3 51 46,5 1 0 1 1
M001 3 X X X
M041 2 52 36,1 1 1 0 0
M041 2 51 36,1 2 1 2 4
M041 2 52 36,1 1 1 0
M041 2 52 36,1 1 1 1
M041 2 X X X X
M010 5 58 37,4 0 1 1 3
M010 5 55 39,4 1 2 1 1
M010 5 58 37,4 1 1 1 1
M010 5 X X X X X
</code></pre>
<p>Can someone help with this issue?</p>
|
<p>You can create a comparison marking table by grouping on <code>ID</code> and <code>Serial</code> using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>.groupby()</code></a> and get the number of unique entries for each column by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.nunique.html" rel="nofollow noreferrer"><code>.nunique()</code></a>. Mark with <code>'X'</code> if number of unique entries > 1 or blank otherwise.</p>
<p>Finally, concat the newly created comparison marking table back to the original dataframe by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat()</code></a>. Sort by <code>ID</code> and <code>Serial</code> columns by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>.sort_values()</code></a> to bring back the related entries together.</p>
<pre><code>df_mark = (df.groupby(['ID', 'Serial'])
.nunique()
.gt(1)
.replace({True: 'X', False: ''})
.reset_index()
)
(pd.concat([df, df_mark])
.sort_values(['ID', 'Serial'])
.reset_index(drop=True)
)
</code></pre>
<p>Result:</p>
<pre><code> ID Serial Age Grade Chem Bio Math Phy
0 M001 2 52 37 1 1 1 1.0
1 M001 2 55 37 2 1 0 1.0
2 M001 2 X X X
3 M001 3 51 36,5 1 1 1 0.0
4 M001 3 51 46,5 1 0 1 1.0
5 M001 3 X X X
6 M010 5 58 37,4 0 1 1 3.0
7 M010 5 55 39,4 1 2 1 1.0
8 M010 5 58 37,4 1 1 1 1.0
9 M010 5 X X X X X
10 M041 2 52 36,1 1 1 0 0.0
11 M041 2 51 36,1 2 1 2 4.0
12 M041 2 52 36,1 1 1 0 NaN
13 M041 2 52 36,1 1 1 1 NaN
14 M041 2 X X X X
</code></pre>
|
python|pandas|dataframe
| 2
|