Columns: id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64)
9,200
| 68,131,987
|
Unify different separators in column
|
<p>I have a dataframe <code>df</code> with ~450000 rows and 4 columns, one of which is "HK", as in the example:</p>
<pre><code>df = pd.DataFrame(
    {
        "HK": [
            "19000000-ac-;ghj-;qrs",
            "19000000- abcd-",
            "19000000 -abc;klm-",
            "19000000 - abc-;",
            "19000000 a-",
        ]
    }
)
df.head()
| HK
| -------------
| 19000000-ac-;ghj-;qrs
| 19000000- abcd-
| 19000000 -abc;klm-
| 19000000 - abc-;
| 19000000 a-
</code></pre>
<p>I always have 8 digits followed by a value. The digits and the value are separated by different forms of "-" (no whitespace between digits and value, whitespace on the left, whitespace on the right, whitespace on both sides, or only a whitespace without a "-").</p>
<p>I would like to get a unified presentation with "$digits$ - $value$" so that my column looks like this:</p>
<pre><code>| HK
| -------------
| 19000000 - ac-;ghj-;qrs
| 19000000 - abcd-
| 19000000 - abc;klm-
| 19000000 - abc-;
| 19000000 - a-
</code></pre>
|
<p>Using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>pd.Series.str.replace</code></a> with a regular expression:</p>
<pre><code>>>> df['HK'].str.replace(r'(?<=\d{8})[\s-]+(?=\w)', ' - ', regex=True)
0 19000000 - ac-;ghj-;qrs
1 19000000 - abcd-
2 19000000 - abc;klm-
3 19000000 - abc-;
4 19000000 - a-
Name: HK, dtype: object
</code></pre>
<p>Explaining the regular expression: there is a lookbehind <code>(?<=\d{8})</code> requiring that there are eight digits immediately before the main section. The main section, <code>[\s-]+</code>, matches one or more characters which are whitespace or hyphens. Then a lookahead <code>(?=\w)</code> requires that a word character (in this case, something like <code>a</code>) follows immediately.</p>
|
python|pandas
| 1
|
9,201
| 59,443,435
|
Stacked bar plot - percentage
|
<p>I want to represent this information in a stacked bar plot, in percentages.
On the x axis I want the age groups, and on the y axis the values that represent the percentage of each Gender in each age group.
Age is represented by bins in the dataset.
This is what I have so far:
<a href="https://i.stack.imgur.com/QKIum.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QKIum.png" alt="enter image description here"></a></p>
<p>This is my code:</p>
<pre><code>c = ds.groupby(['Age','Gender'])['Gender'].count()
d=(((c /c.groupby(level=0).sum())*100).round()).astype('int64')
d
</code></pre>
|
<p>I created a test data frame:</p>
<pre><code> df = pd.DataFrame({'Gender': ['F','M','F','F','F','M','M','M','F','F','M','F','F','M','M','M','M','F','F','M','M','M'], 'Age': [17,10,20,51,53,15,50,60,43,28,35,67,33,17,20,40,43,47,48,51,53,54]})
</code></pre>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer">pandas.cut</a> function to segment the age into proper intervals:</p>
<pre><code>import numpy as np

bins = pd.IntervalIndex.from_tuples([(0,17),(17,25),(25,35),(35,46),(46,50),(50,55),(55,np.inf)])
df['Age_interval'] = pd.cut(df['Age'], bins=bins)
df = df.groupby(['Age_interval', 'Gender']).size().unstack().fillna(0)
total = df['F'] + df['M']       # per-age-group totals, so each bar sums to 100%
df['F'] = df['F'] / total * 100
df['M'] = df['M'] / total * 100
df['Age'] = ['0-17', '18-25', '26-35', '36-46', '47-50', '51-55', '55-']
df.plot(kind='bar', x='Age', title='Gender distribution in Age groups', rot=0, figsize=(10,5), color=['turquoise','brown'], stacked=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/CAF4E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CAF4E.png" alt="enter image description here"></a></p>
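<p>As a possibly simpler alternative (a sketch using the same test data and bins as above), <code>pd.crosstab</code> with <code>normalize='index'</code> computes the per-age-group percentages directly, without manual division:</p>

```python
import numpy as np
import pandas as pd

# Same test frame and bins as in the answer above
df = pd.DataFrame({'Gender': ['F','M','F','F','F','M','M','M','F','F','M',
                              'F','F','M','M','M','M','F','F','M','M','M'],
                   'Age': [17,10,20,51,53,15,50,60,43,28,35,67,33,17,20,40,43,47,48,51,53,54]})
bins = pd.IntervalIndex.from_tuples([(0,17),(17,25),(25,35),(35,46),(46,50),(50,55),(55,np.inf)])

# normalize='index' divides each row by its own total -> per-group percentages
pct = pd.crosstab(pd.cut(df['Age'], bins), df['Gender'], normalize='index') * 100
```

<p>Each row of <code>pct</code> sums to 100, so <code>pct.plot(kind='bar', stacked=True)</code> gives the 100%-stacked chart.</p>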
|
pandas|stacked-chart
| 0
|
9,202
| 57,239,426
|
Pandas showing only the unique instances of a value in a dataframe for a given id
|
<p>This is the dataframe I'm working with.</p>
<pre><code>df = pd.DataFrame({'id' : ['45', '45', '45', '45', '46', '46'],
'description' : ['credit score too low', 'credit score too low', 'credit score too low', 'high risk of fraud', 'address not verified', 'address not verified']})
print(df)
</code></pre>
<p>I'm trying to modify the dataframe such that, for a given id, there are no duplicates of a description. The dataframe below is the desired output.</p>
<pre><code>newdf = pd.DataFrame({'id' : ['45', '45', '46'],
'description' : ['credit score too low', 'high risk of fraud', 'address not verified']})
print(newdf)
</code></pre>
|
<p>You can remove the duplicates with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><strong><code>.drop_duplicates()</code></strong> [pandas-doc]</a>. For example:</p>
<pre><code>>>> df
id description
0 45 credit score too low
1 45 credit score too low
2 45 credit score too low
3 45 high risk of fraud
4 46 address not verified
5 46 address not verified
>>> df.drop_duplicates()
id description
0 45 credit score too low
3 45 high risk of fraud
4 46 address not verified
</code></pre>
<p>You thus can set <code>df</code> to the new dataframe, like:</p>
<pre><code>df = df<b>.drop_duplicates()</b></code></pre>
|
python|pandas|dataframe
| 2
|
9,203
| 50,913,520
|
How can I use multiple datasets with one model in Keras?
|
<p>I am trying Forex prediction with Keras and TensorFlow using an LSTM network.
I of course want it to train on many days of trading, but to do that I would have to give it sequential data with big jumps and phases without movement when the market is closed. This isn't ideal, as the model gets "confused" by these jumps and flat phases. Alternatively I can use one day of minute-by-minute data, but that way I have very limited training data and the model won't be very good.</p>
<p>Do you have ideas on how to fix this?
Here is my current code:</p>
<p><a href="https://de.scribd.com/document/382021571/Minutely" rel="nofollow noreferrer">CODE</a></p>
<p>Thanks</p>
|
<p>If you plan on fitting multiple datasets as data slices, sequentially, something like this would work:</p>
<pre><code>for _ in range(10):
#somehow cut the data into slices and fit them one by one
model.fit(data_slice, label_slice ......)
</code></pre>
<p>Successive calls to <strong>fit</strong> will train the single model incrementally.</p>
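<p>A minimal self-contained sketch of the idea — <code>MockModel</code>, <code>data</code> and <code>labels</code> are stand-ins for your own objects, not part of the original code; <code>np.array_split</code> is one possible way to cut the data into slices:</p>

```python
import numpy as np

class MockModel:
    """Stand-in for a Keras model; it just records each incremental fit call."""
    def __init__(self):
        self.fit_calls = 0

    def fit(self, x, y, **kwargs):
        self.fit_calls += 1  # a real model would update its weights here

model = MockModel()
data = np.random.rand(1000, 8)
labels = np.random.rand(1000, 1)

# Ten successive fit calls, one per slice -> the single model trains incrementally
for data_slice, label_slice in zip(np.array_split(data, 10),
                                   np.array_split(labels, 10)):
    model.fit(data_slice, label_slice, epochs=1)
```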
|
tensorflow|neural-network|keras|artificial-intelligence|forecasting
| 1
|
9,204
| 50,904,053
|
Tensorflow Intel MKL Optimization with NHWC data format
|
<p>When <strong>TensorFlow</strong> is compiled with the <strong>Intel MKL</strong> optimizations, many operations will be optimized and support NCHW.</p>
<p>Can someone please explain, why does Intel MKL support NCHW format more than NHWC?</p>
|
<p>TensorFlow's default NHWC format is not the most efficient data layout for the CPU, and it results in some additional conversion overhead. Hence, Intel MKL supports the NCHW format.</p>
|
tensorflow|intel-mkl|intel-tensorflow|intel-ai-analytics
| 2
|
9,205
| 66,415,572
|
"ValueError: No gradients provided for any variable" Custom function in layer present
|
<p>I'm facing some problems with my code. I'm giving the NN 2 coordinates, and it has to convert these to 3 variables (angles). These angles are used to calculate the position of some vectors that should end in the given point. When I try to run the code, I get the above described error.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, losses
import numpy as np
from tensorflow.keras import Model
from main import robot_arm

r = robot_arm()
r.set_joints([1, 2, 2])

train_points = []
test_points = []

for i in range(1000000):
    element = []
    for i in range(2):
        element.append(np.random.uniform(-5, 5))
    train_points.append(element)
train_points = np.array(train_points)
print(train_points.shape)

for i in range(10000):
    element = []
    for i in range(2):
        element.append(np.random.uniform(-5, 5))
    test_points.append(element)
test_points = np.array(test_points)
print(test_points.shape)


class robot_arm_ai(Model):
    def __init__(self):
        super(robot_arm_ai, self).__init__()
        self.decoder = tf.keras.Sequential([
            layers.Dense(100, activation='linear'),
            layers.Dense(100, activation='linear'),
            layers.Dense(100, activation='linear'),
            layers.Dense(100, activation='linear'),
            layers.Dense(3),
        ])

    def call(self, x):
        x = self.decoder(x)
        return r.calculate_point(x)


model = robot_arm_ai()
model.compile(optimizer="adam", loss=losses.MeanSquaredError(), run_eagerly=True)

model.fit(train_points, train_points,
          epochs=10,
          shuffle=True,
          batch_size=256,
          validation_data=(test_points, test_points))
</code></pre>
<p>this is the code used to describe the robotic arm</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf


class joint():
    def __init__(self, j_len=1, j_rot=0):
        self.rot = j_rot
        self.len = j_len
        self.x_component = 0
        self.y_component = 0
        self.set_vector()

    def print(self):
        print("Length: " + str(self.len))
        print("Rotation: " + str(self.rot))

    def set_vector(self):
        self.y_component = self.len * np.sin(self.rot * (2 * np.pi))
        self.x_component = self.len * np.cos(self.rot * (2 * np.pi))

    def random_rotation(self):
        self.rot = np.random.uniform(-360, 360)

    def set_rot(self, rot):
        self.rot = rot

    def random_rotation_set(self):
        self.random_rotation()

    def get_x(self):
        return self.x_component

    def get_y(self):
        return self.y_component


class robot_arm():
    def __init__(self):
        self.joints = []
        self.x = 0
        self.y = 0

    def print(self):
        depth = 0
        for i in self.joints:
            print("- " * 30)
            print("Element " + str(depth))
            i.print()
            depth = depth + 1

    def set_joints(self, lens):
        for i in lens:
            self.joints.append(joint(j_len=i))

    def random_rotation(self):
        for i in self.joints:
            i.random_rotation_set()
        self.set_endpoint()

    def get_point(self):
        return self.x, self.y

    def calculate_point(self, rot):
        res = []
        rot = rot.numpy().tolist()
        for i in rot:
            self.set_rot(i)
            self.set_endpoint()
            res.append(self.get_point())
        return tf.convert_to_tensor(res)

    def set_endpoint(self):
        self.x = 0
        self.y = 0
        for i in self.joints:
            i.set_vector()
            self.x = self.x + i.get_x()
            self.y = self.y + i.get_y()

    def set_rot(self, rots):
        if len(rots) == len(self.joints):
            for r, j in zip(rots, self.joints):
                j.set_rot(r)
            self.set_endpoint()
        else:
            print("Failure. Passed Rotations unequal to length of joint chain. Rotation not completed.")
            print(rots.shape)

    def draw_plot(self):
        last_x = 0
        last_y = 0
        for i in self.joints:
            x_val = i.get_x() + last_x
            y_val = i.get_y() + last_y
            plt.plot([last_x, x_val], [last_y, y_val])
            last_y = y_val
            last_x = x_val
        plt.show()
</code></pre>
<p>I think it could have something to do with passing the layer output through a function (<code>return r.calculate_point(x)</code>), but I'm not sure.</p>
|
<p>This is probably related to the fact that the method <code>robot_arm.calculate_point</code> does its calculations outside of TensorFlow, so backpropagation cannot be done: the gradient tape cannot "keep track" of the calculations to go from <code>x</code> to <code>r.calculate_point(x)</code>. See <a href="https://www.tensorflow.org/guide/autodiff#getting_a_gradient_of_none" rel="nofollow noreferrer">https://www.tensorflow.org/guide/autodiff#getting_a_gradient_of_none</a>.</p>
<p>Since there is nothing to train in the <code>robot_arm.calculate_point</code> method, I would move that operation outside of the <code>robot_arm_ai.call</code> method. Then you should be able to train your model.</p>
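<p>A rough numpy sketch of what a tape-friendly, batched <code>calculate_point</code> could look like (the function name and batched layout are assumptions; rewriting it with the corresponding <code>tf</code> ops — <code>tf.cos</code>, <code>tf.sin</code>, <code>tf.reduce_sum</code>, <code>tf.stack</code> — instead of numpy would keep the whole computation differentiable for the gradient tape):</p>

```python
import numpy as np

# Joint lengths from r.set_joints([1, 2, 2]) in the question
joint_lengths = np.array([1.0, 2.0, 2.0])

def calculate_point_batch(rot):
    # rot: (batch, 3) rotations in turns, mirroring joint.set_vector above
    angles = rot * 2.0 * np.pi
    x = (joint_lengths * np.cos(angles)).sum(axis=1)
    y = (joint_lengths * np.sin(angles)).sum(axis=1)
    return np.stack([x, y], axis=1)
```

<p>With zero rotations every joint points along the x axis, so the endpoint is (5, 0).</p>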
|
python|tensorflow|tensorflow2.0
| 0
|
9,206
| 57,693,908
|
find column value in dataframe
|
<p>I have 2 dataframes.
The first has 1 column:</p>
<p><code>test1: 1,2,3,4,5</code></p>
<p>The second has 2 columns:</p>
<p><code>test2: 0 1 1 1 1. test3: 2 2 3 3 4</code></p>
<p>I need to create a new column in the first dataframe that checks whether each row's value exists anywhere in dataframe 2 (like a simple Ctrl+F).
As a result I need to get:
test1: 1,2,3,4,5
check: yes,yes,yes,yes,no</p>
<p>UPD:
below is code I found, but it shows a good result only for the first row; I don't know if that makes sense.</p>
<pre><code>first['check'] = second.eq(first['test1'],0).any(1).astype(int)
</code></pre>
|
<p>You can check with <code>isin</code>, with the values flattened:</p>
<pre><code>test1['col2']=test1['col1'].isin(test2.values.ravel())
</code></pre>
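<p>A quick check against the sample data from the question (column names are assumed), mapping the boolean result to the requested yes/no labels:</p>

```python
import pandas as pd

# Sample data from the question (column names assumed)
test1 = pd.DataFrame({'col1': [1, 2, 3, 4, 5]})
test2 = pd.DataFrame({'test2': [0, 1, 1, 1, 1],
                      'test3': [2, 2, 3, 3, 4]})

# isin against all values of test2, flattened, then map to yes/no
test1['check'] = (test1['col1']
                  .isin(test2.values.ravel())
                  .map({True: 'yes', False: 'no'}))
```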
|
python|pandas|dataframe|vlookup
| 1
|
9,207
| 57,419,094
|
Keras: display model shape in Jupyter Notebook
|
<p>I have the following code which I used to view my Network architecture. </p>
<p><a href="https://i.stack.imgur.com/D2fpi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/D2fpi.png" alt="enter image description here"></a></p>
<p>However, I also want to see the shape of each layer, so I tried to use the following:</p>
<pre><code>from keras.utils import plot_model
#plot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png')
plot_model(model, show_shapes=True, show_layer_names=True)
</code></pre>
<p>The output file 'model.png' looks fine. But I am unable to make it display in the Jupyter Notebook. Any idea what I missed? Thanks!</p>
|
<p>Since the resulting image is not an SVG file anymore, you should replace <code>SVG</code> with <code>Image</code>:</p>
<pre><code>from IPython.display import Image
...
plot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png')
Image('model.png')
</code></pre>
|
python-3.x|tensorflow|keras
| 7
|
9,208
| 73,145,527
|
How to drop one level of a multiindex in pandas
|
<p>I'm trying to drop a level of my multiindex using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer">drop</a>, but I can't get it to work.</p>
<pre><code>iterables = [["bar", "baz"], ["one", "two"]]
index = pd.MultiIndex.from_product(iterables, names=["first", "second"])
df = pd.Series([1, 2, 3, 4], index=index)
df
</code></pre>
<p>df looks like this:</p>
<pre><code>first second
bar one 1
two 2
baz one 3
two 4
</code></pre>
<p>I want to drop the <code>second</code> level of the multi index, so that I would have as result:</p>
<pre><code>first
bar 1
bar 2
baz 3
baz 4
</code></pre>
<p>However, <code>df.drop(index=['second'])</code> and <code>df.drop(index=['second'], level=1)</code> don't work, so I'm a little confused about how to get it done.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.droplevel.html#pandas.MultiIndex.droplevel" rel="nofollow noreferrer">pd.MultiIndex.droplevel</a>:</p>
<pre><code>df = df.droplevel(level=1)
# or equivalently
df = df.droplevel('second')
df
</code></pre>
<pre><code>first
bar 1
bar 2
baz 3
baz 4
dtype: int64
</code></pre>
|
pandas
| 1
|
9,209
| 70,565,880
|
Adding a moving formula to excel using python
|
<p>I am writing multiple <code>dfs</code> to excel and I am trying to add a formula to cells. The problem is that my assigned formula is static for the whole row, for example:</p>
<pre><code># df
2019 2020 2021 2022
A 40 40 51 58
B 5 40 54 97
C 0.3 0.5 0.5 0.8
D 2000 40 200 300
E 0.02 1 0.25 0.19
</code></pre>
<p>And then adding a formula:</p>
<pre><code>df.loc['test'] = '=SUM(sheet_1!D5:sheet_1!D10)'
</code></pre>
<p>Does work but now the result looks like this:</p>
<pre><code># df
2019 2020 2021 2022
A 40 40 51 58
B 5 40 54 97
C 0.3 0.5 0.5 0.8
D 2000 40 200 300
E 0.02 1 0.25 0.19
test 1058 1058 1058 1058
</code></pre>
<p>I am trying to make the rolling window of <code>'=SUM(sheet_1!D5:sheet_1!D10)'</code>, so that each column would have a moving formula:</p>
<pre><code>2019 - '=SUM(sheet_1!D5:sheet_1!D10)'
2020 - '=SUM(sheet_1!D6:sheet_1!D11)'
2021 - '=SUM(sheet_1!D7:sheet_1!D12)'
# and so on
</code></pre>
<p>How could I achieve such result?</p>
|
<p>You can make the formula depend on the column number. The most direct way is a comprehension over the columns:</p>
<pre><code>df.loc['test'] = [f'=SUM(sheet_1!D{c+5}:sheet_1!D{c+10})' for c in range(len(df.columns))]
</code></pre>
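<p>To sanity-check the pattern the question asks for (D5:D10 for the first column, D6:D11 for the second, and so on), the generated formulas for a four-column frame would be:</p>

```python
# One shifted SUM range per 0-based column index c
formulas = [f'=SUM(sheet_1!D{c+5}:sheet_1!D{c+10})' for c in range(4)]
```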
|
python|excel|pandas
| 1
|
9,210
| 70,701,975
|
I already have a CUDA toolkit installed, why is conda installing CUDA again?
|
<p>I have installed cuda version 11.2 and CUDNN version 8.1 in ubuntu</p>
<pre><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
</code></pre>
<p>When I installed tensorflow-gpu in conda environment, it is again installing cuda and cudnn.</p>
<ul>
<li>Why is this happening?</li>
<li>How to stop conda from installing cuda and cudnn again?</li>
<li>Can I just use cuda and cudnn that I have already installed? If yes, how?</li>
</ul>
<p><a href="https://i.stack.imgur.com/ycVff.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ycVff.png" alt="enter image description here" /></a></p>
|
<blockquote>
<ol>
<li>Why is it happening?</li>
</ol>
</blockquote>
<p>Conda expects to manage any packages you install <em>and</em> all their dependencies. The intention is that you literally never have to install anything else by hand for any packages they distribute in their own channel. If a GPU accelerated package requires a CUDA runtime, conda will try to select and install a correctly versioned CUDA runtime for the version of the Python package it has selected for installation.</p>
<blockquote>
<ol start="2">
<li>How to stop conda from installing cuda and cudnn again?</li>
</ol>
</blockquote>
<p>You probably can't, or at least can't without winding up with a non-functional Tensorflow installation. But see <a href="https://stackoverflow.com/q/61533291/681865">here</a> -- what conda installs is only the necessary, correctly versioned CUDA runtime components to make their GPU accelerated packages work. All they don't/can't install is a GPU driver for the hardware.</p>
<blockquote>
<ol start="3">
<li>Can I just use cuda and cudnn that I have already installed?</li>
</ol>
</blockquote>
<p>You say you installed CUDA 11.2. If you look at the conda output, you can see that it wants to install a CUDA 10.2 runtime. As you are now <a href="https://stackoverflow.com/a/70692914/681865">fully aware</a>, versioning is critical to Tensorflow and a Tensorflow build requiring CUDA 10.2 won't work with CUDA 11.2. So even if you were to stop conda from performing the dependency installation, there is a version mismatch so it wouldn't work.</p>
<blockquote>
<ol start="4">
<li>If yes, how?</li>
</ol>
</blockquote>
<p>See above.</p>
|
tensorflow|cuda|conda|tensorflow2.0|cudnn
| 3
|
9,211
| 70,592,132
|
Update/merge dataframes based values in each row
|
<p>I have two dataframes:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
"id": [1,2,3,4,5,6], "c1": [1,2,3,4,5,6], "c2": [1,2,3,4,5,6], "c3": [1,2,3,4,5,6]
})
</code></pre>
<p>and</p>
<pre><code>df2 = pd.DataFrame({
"id": [1,2,3,4,5,6], "column": ["c1", "c2", "c3", "c1", "c2", "c3"], "new-value": [10,20,30,40,50,60]
})
</code></pre>
<p>I would like to update df1 based on information from df2 so that the result is:</p>
<pre><code>df3 = pd.DataFrame({
"id": [1,2,3,4,5,6], "c1": [10,2,3,40,5,6], "c2": [1,20,3,4,50,6], "c3": [1,2,30,4,5,60]
})
</code></pre>
<ol>
<li>Is it possible to do this using pandas?</li>
<li>Are update/merge viable options for this?</li>
</ol>
|
<p>We can reshape <code>df2</code> using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>, then use it to substitute values in <code>df1</code></p>
<pre><code>df1.replace(df2.pivot(*df2.columns)).fillna(df1)
</code></pre>
<hr />
<pre><code> id c1 c2 c3
0 1 10.0 1.0 1.0
1 2 2.0 20.0 2.0
2 3 3.0 3.0 30.0
3 4 40.0 4.0 4.0
4 5 5.0 50.0 5.0
5 6 6.0 6.0 60.0
</code></pre>
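<p>If the pivot-and-replace trick feels too clever, an explicit (slower, but easy to read) alternative sketch writes each new value into place with <code>.loc</code>, and keeps the integer dtypes:</p>

```python
import pandas as pd

# The two frames from the question
df1 = pd.DataFrame({"id": [1,2,3,4,5,6], "c1": [1,2,3,4,5,6],
                    "c2": [1,2,3,4,5,6], "c3": [1,2,3,4,5,6]})
df2 = pd.DataFrame({"id": [1,2,3,4,5,6],
                    "column": ["c1", "c2", "c3", "c1", "c2", "c3"],
                    "new-value": [10,20,30,40,50,60]})

# For every (id, column, new-value) triple, write the value into df1
for _, row in df2.iterrows():
    df1.loc[df1['id'] == row['id'], row['column']] = row['new-value']
```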
|
python|pandas|dataframe
| 2
|
9,212
| 51,382,175
|
Setting values of a tensor at the indices given by tf.where()
|
<p>I am attempting to add noise to a tensor that holds the greyscale pixel values of an image. I want to set a random subset of the pixel values to 255.</p>
<p>I was thinking something along the lines of:</p>
<pre><code>random = tf.random_normal(tf.shape(input))
mask = tf.where(tf.greater(random, 1))
</code></pre>
<p>and am then trying to figure out how to set the pixel values of <code>input</code> to 255 for every index of <code>mask</code>.</p>
|
<p><code>tf.where()</code> can be used for this too, assigning the noise value where mask elements are <code>True</code>, else the original <code>input</code> value:</p>
<pre><code>import tensorflow as tf
input = tf.zeros((4, 4))
noise_value = 255.
random = tf.random_normal(tf.shape(input))
mask = tf.greater(random, 1.)
input_noisy = tf.where(mask, tf.ones_like(input) * noise_value, input)
with tf.Session() as sess:
print(sess.run(input_noisy))
# [[ 0. 255. 0. 0.]
# [ 0. 0. 0. 0.]
# [ 0. 0. 0. 255.]
# [ 0. 255. 255. 255.]]
</code></pre>
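<p>For comparison, a sketch of the same masking idea in plain numpy — <code>np.where</code> has the identical <code>(condition, value_if_true, value_if_false)</code> signature (and in TF 2.x, <code>tf.random.normal</code> replaces <code>tf.random_normal</code> and no session is needed):</p>

```python
import numpy as np

img = np.zeros((4, 4))          # stand-in for the greyscale image tensor
noise_value = 255.0
rng = np.random.default_rng(0)
mask = rng.standard_normal(img.shape) > 1.0  # True where noise should go
img_noisy = np.where(mask, noise_value, img)
```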
|
python|tensorflow|image-processing|noise
| 2
|
9,213
| 70,759,976
|
pyspark- how to add a column to spark dataframe from a list
|
<p>I'm looking for a way to add a new column to a Spark DF from a list. In the pandas approach it is very easy to deal with, but in Spark it seems to be relatively difficult. Please find an example below:</p>
<pre><code>#pandas approach
list_example = [1,3,5,7,8]
df.new_column = list_example
#spark ?
</code></pre>
<p>Could you please help me resolve this (with the easiest possible solution)?</p>
|
<p>You could try something like the following — note that this creates an array column holding the whole list in every row, rather than one list element per row:</p>
<pre><code>import pyspark.sql.functions as F
list_example = [1,3,5,7,8]
new_df = df.withColumn("new_column", F.array( [F.lit(x) for x in list_example] ))
new_df.show()
</code></pre>
|
python|pandas|apache-spark|pyspark|rdd
| 0
|
9,214
| 70,880,328
|
Pandas dataframe manipulation/re-sizing of a single-column count file
|
<p>I have a file that looks like this:</p>
<pre><code>gRNA_A
gene_a
140626
gene_b
227598
gene_c
115781
gRNA_B
gene_a
125003
gene_b
102000
gene_c
200300
</code></pre>
<p>I want to read this into a pandas dataframe and re-shape it so that it looks like this:</p>
<pre><code> gene_a gene_b gene_c
gRNA_A 140626 227598 115781
gRNA_B 125003 102000 200300
</code></pre>
<p>Is this possible? If so, how?</p>
<p>Notes: it will not always be this size, so the solution needs to be size-independent. The input file will be at most ~200 gRNAs x 20 genes. There will be gRNA names with various letter combos, but the genes will not be named gene_lettercombo; each gene will be the name of an actual gene (like GAPDH, ACTB, etc.).</p>
|
<p>Not sure if this is the cleanest way, but this works for the given example.</p>
<p>I created a file <code>data.txt</code> with provided sample.</p>
<p>I assumed the count is always a number.</p>
<pre><code>import pandas as pd

def file_parser(f_path):
    data_dict = {}
    my_gRNA = None
    my_gene = None
    with open(f_path, "r") as f:
        for each in f:
            each = each.strip()
            if not each:        # skip blank lines
                continue
            if each.startswith("gRNA"):
                if each not in data_dict:
                    data_dict[each] = {}
                my_gRNA = each
            elif each.isnumeric():
                data_dict[my_gRNA][my_gene] = each
            else:               # anything else is a gene name
                my_gene = each
    return data_dict

df = pd.DataFrame.from_dict(file_parser("data.txt"), orient='index')
</code></pre>
<pre><code>df.head()
gene_a gene_b gene_c
gRNA_A 140626 227598 115781
gRNA_B 125003 102000 200300
</code></pre>
<p><em>Note: This answer is very similar to the one by <em>mozway</em>. The only difference is in the parser, where I explicitly check for numeric types.</em></p>
|
python|pandas|dataframe|reshape
| 1
|
9,215
| 71,067,544
|
CNN Transfer Learning Takes Too Much Time
|
<p>I'm trying to train my model with transfer learning from VGG16 via Google Colab (using a GPU), but it takes too much time, and the validation and test accuracy are low.
Additional information: the train data is 16057 samples, the test data 4000, and the validation data 2000, with differently sized RGB images. The classes are facial mood expressions (Happy, Sad, Energetic, Neutral). Any suggestions?</p>
<pre><code># source root directory and destination root directory
train_src = "/content/drive/MyDrive/Affectnet/train_class/"
val_src = "/content/drive/MyDrive/Affectnet/val_class/"
test_src = "/content/drive/MyDrive/Affectnet/test_classs/"

train_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)
train_generator = train_datagen.flow_from_directory(
    train_src,
    target_size=(224, 224),
    batch_size=32,
    shuffle=True
)
validation_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)
validation_generator = validation_datagen.flow_from_directory(
    val_src,
    target_size=(224, 224),
    batch_size=32,
)

conv_base = tensorflow.keras.applications.VGG16(weights='imagenet',
                                                include_top=False,
                                                input_shape=(224, 224, 3))
for layer in conv_base.layers:
    layer.trainable = False

# An empty model is created.
model = tensorflow.keras.models.Sequential()
# VGG16 is added as the convolutional base.
model.add(conv_base)
# Feature maps are flattened from matrices to a vector.
model.add(tensorflow.keras.layers.Flatten())
# Our classification layer is added.
model.add(tensorflow.keras.layers.Dense(4, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(lr=1e-5),
              metrics=['acc'])

history = model.fit_generator(
    train_generator,
    epochs=50,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=5)
</code></pre>
</code></pre>
<p><a href="https://i.stack.imgur.com/Y3RDB.png" rel="nofollow noreferrer">Training Part</a></p>
<p>EDIT: I set <code>workers=8</code> and the model started training faster, but I got 0.69 test accuracy after 30 epochs. Any suggestions?</p>
|
<p>The issue is most likely that <code>ImageDataGenerator</code> is the bottleneck; try passing <code>workers=8</code> to your <code>fit_generator</code> call so batches are prepared in parallel.</p>
|
tensorflow|keras|conv-neural-network|transfer-learning
| 1
|
9,216
| 51,843,209
|
Use fields with NA values for model training with tensorflow
|
<p>I am trying to create a machine learning model with tensorflow using the dataset available at
<a href="https://www.kaggle.com/imnikhilanand/heart-attack-prediction" rel="nofollow noreferrer">https://www.kaggle.com/imnikhilanand/heart-attack-prediction</a></p>
<p>The csv file looks like below (please note I have replaced <code>?</code> with <code>.</code> so that it is easier to parse with pandas)</p>
<pre><code>age,sex,cp,trestbps,chol,fbs,restecg,thalach,exang,oldpeak,slope,ca,thal,num
28,1,2,130,132,0,2,185,0,0,.,.,.,0
29,1,2,120,243,0,0,160,0,0,.,.,.,0
29,1,2,140,.,0,0,170,0,0,.,.,.,0
</code></pre>
<p>When I parse this with panda, the <code>.</code> is read as <code>NaN</code> in the dataframe of pandas</p>
<p>Feeding this data is creating problems since any operation with <code>NaN</code> will be <code>NaN</code>. </p>
<p>My problem is the unavailable data: is there a way I can feed this kind of data to the model and still get results?</p>
<p>One of the solutions I found was to replace it with some number (like 0), but doing that wrecks the accuracy of the model; I want to avoid that.</p>
|
<p>Looking at the <a href="https://www.kaggle.com/imnikhilanand/heart-attack-prediction" rel="nofollow noreferrer">data</a>, and realizing that the <em>vast majority</em> of the values in the three attributes <code>slope</code>, <code>ca</code>, and <code>thal</code> are missing, the most certain thing you should do is to remove these attributes (columns) completely from your modeling.</p>
<p>There is a (huge...) area of machine learning dealing with data imputation, but it is most certainly not applicable here - it is usually applicable when only <em>some</em> (i.e. a small minority) of your values are missing, which is far from being the case here.</p>
<blockquote>
<p>One of the solution I found was to replace it with some number (like 0) but doing that wrecks the accuracy of the model, I want to avoid that.</p>
</blockquote>
<p>Understandably so; if you do insist on experimenting in this direction, try replacing the missing values with the mean (or median) of the respective attribute (this is one of the simplest data imputation approaches).</p>
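<p>A minimal sketch of that simple imputation with pandas (column names taken from the dataset, values illustrative):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'trestbps': [130.0, 120.0, 140.0],
                   'chol': [132.0, 243.0, np.nan]})
# Replace each NaN with the mean of its own column
df_imputed = df.fillna(df.mean())
```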
<p>In case you intend to use a deep learning model here, notice also that a dataset of only 294 samples is rather small for such an application...</p>
|
pandas|tensorflow|machine-learning
| 0
|
9,217
| 51,657,128
|
How to access the adjacent cells of each elements of matrix in python?
|
<p>Here two cells are considered adjacent if they share a boundary.
For example :</p>
<pre><code>A = 5 6 4
2 1 3
7 9 8
</code></pre>
<p>Here the elements adjacent to index [0,0] are at indices [0,1] and [1,0], and for index [1,1] the adjacent elements are at indices [0,1], [1,0], [2,1] and [1,2].</p>
|
<p>Suppose you have an <code>m</code>x<code>n</code> matrix, and you want to find the adjacent indices of the cell (<code>i</code>, <code>j</code>):</p>
<pre><code>def get_adjacent_indices(i, j, m, n):
    adjacent_indices = []
    if i > 0:
        adjacent_indices.append((i-1, j))
    if i+1 < m:
        adjacent_indices.append((i+1, j))
    if j > 0:
        adjacent_indices.append((i, j-1))
    if j+1 < n:
        adjacent_indices.append((i, j+1))
    return adjacent_indices
</code></pre>
|
python|python-3.x|algorithm|numpy|implementation
| 6
|
9,218
| 36,015,409
|
moving average with time offset pandas
|
<p>I am looking for a vectorized solution to calculating a moving average with a date offset. I have an irregularly spaced time series of costs for a product, and for each value I would like to calculate the mean of the previous three values, with a date offset of 45 days. For example, if this were my input dataframe:</p>
<pre><code> In [1]: df
Out [1]:
ActCost OrDate
0 8 2015-01-01
1 5 2015-02-04
2 10 2015-02-11
3 1 2015-02-11
4 10 2015-03-11
5 18 2015-03-15
6 20 2015-05-18
7 25 2015-05-23
8 8 2015-06-11
9 5 2015-10-09
10 15 2015-11-02
12 18 2015-12-20
</code></pre>
<p>The output would be:</p>
<pre><code> In[2]: df
Out[2]:
ActCost OrDate EstCost
0 8 2015-01-01 NaN
1 5 2015-02-04 NaN
2 10 2015-02-11
3 1 2015-02-11 NaN
4 10 2015-03-11 NaN
5 18 2015-03-15 NaN
6 20 2015-05-18 9.67 # mean(index 3:5)
7 25 2015-05-23 9.67 # mean(index 3:5)
8 8 2015-06-11 9.67 # mean(index 3:5)
9 5 2015-10-09 17.67 # mean(index 6:8)
10 15 2015-11-02 17.67 # mean(index 6:8)
12 18 2015-12-20 12.67 # mean(index 7:9)
</code></pre>
<p>My current solution is the following:</p>
<pre><code>for index, row in df.iterrows():
    orDate = row['OrDate']
    costsLanded = orDate - timedelta(45)
    if costsLanded <= np.min(df.OrDate):
        df.loc[index, 'EstCost'] = np.nan
        break
    if len(dfID[df.OrDate <= costsLanded]) < 3:
        df.loc[index, 'EstCost'] = np.nan
        break
    df.loc[index, 'EstCost'] = np.mean(df['ActShipCost'][df.OrDate <= costsLanded].head(3))
</code></pre>
<p>My code works, but is rather slow, and I have millions of these time series. I'm hoping that someone can give me some advice on how to speed this process up. I imagine that the best thing to do would be to vectorize the operation, but I'm not sure how to implement that.
Thanks so much for the help!!</p>
|
<p>Try something like this:</p>
<pre><code>#Set up DatetimeIndex (easier to just load in data with index as OrDate)
df = df.set_index('OrDate', drop=True)
df.index = pd.DatetimeIndex(df.index)
df.index.name = 'OrDate'
#Save original timestamps for later
idx = df.index
#Make timeseries with regular daily interval
df = df.resample('d').first()
#Take the moving mean with window size of 45 days
df = df.rolling(window=45, min_periods=0).mean()
#Grab the values for the original timestamp and put the index back
df = df.loc[idx].reset_index()
</code></pre>
|
python|pandas|time-series
| 0
|
9,219
| 35,917,281
|
Iterate over 'zipped' ranges of a numpy array
|
<p>I often work with numpy arrays representing critical times in a time series. I then want to iterate over the ranges and run operations on them. Eg:</p>
<pre><code>rngs = [0, 25, 36, 45, ...]
output = []
for left, right in zip(rngs[:-1], rngs[1:]):
throughput = do_stuff(array[left:right])...
output.append(throughput)
</code></pre>
<p>Is there a less awkward way to do this?</p>
|
<p>You might use <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow">enumerate</a> generator</p>
<pre><code>rngs = [0, 25, 36, 45, ...]
output = []
for index, _ in enumerate(rngs[:-1]):
    throughput = do_stuff(array[rngs[index]:rngs[index + 1]])
    output.append(throughput)
</code></pre>
<p>in one line with comprehension list:</p>
<pre><code>rngs = [0, 25, 36, 45, ...]
output = [do_stuff(array[rngs[index]:rngs[index + 1]]) for index, _ in enumerate(rngs[:-1])]
</code></pre>
|
python|arrays|numpy
| 0
|
9,220
| 35,940,114
|
Write a pandas df into Excel and save it into a copy
|
<p>I have a pandas dataframe and I want to open an existing excel workbook containing formulas, copying the dataframe in a specific set of columns (lets say from column A to column H) and save it as a new file with a different name.</p>
<p>The idea is to update an existing template, populate it with the dataframe in a specified set of column and then save a copy of the Excel file with a different name.</p>
<p>Any idea?</p>
<p>What I have is:</p>
<pre><code> import pandas
from openpyxl import load_workbook
book = load_workbook('Template.xlsx')
writer = pandas.ExcelWriter('Template.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df.to_excel(writer)
writer.save()
</code></pre>
|
<p>The below should work, assuming that you are happy to copy into column A. I don't see a way to write into the sheet starting in a different column (without overwriting anything).</p>
<p>The below incorporates @MaxU's suggestion of copying the template sheet before writing to it (having just lost a few hours' work on my own template workbook to pd.to_excel)</p>
<pre><code>import pandas as pd
from openpyxl import load_workbook
from openpyxl.utils.dataframe import dataframe_to_rows
from shutil import copyfile
template_file = 'Template.xlsx' # Has a header in row 1 already
output_file = 'Result.xlsx' # What we are saving the template as
# Copy Template.xlsx as Result.xlsx
copyfile(template_file, output_file)
# Read in the data to be pasted into the template
df = pd.read_csv('my_data.csv')
# Load the workbook and access the sheet we'll paste into
wb = load_workbook(output_file)
ws = wb['Existing Result Sheet']  # get_sheet_by_name() is deprecated in newer openpyxl
# Selecting a cell in the header row before writing makes append()
# start writing to the following line i.e. row 2
ws['A1']
# Write each row of the DataFrame
# In this case, I don't want to write the index (useless) or the header (already in the template)
for r in dataframe_to_rows(df, index=False, header=False):
ws.append(r)
wb.save(output_file)
</code></pre>
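<p>For completeness: openpyxl <em>can</em> write starting at an arbitrary cell with <code>ws.cell(row=..., column=...)</code>, it just takes an explicit loop instead of <code>append()</code>. A sketch on an in-memory workbook with a hypothetical start position (cell C2):</p>

```python
import pandas as pd
from openpyxl import Workbook

# Hypothetical frame to paste starting at cell C2
df = pd.DataFrame({"x": [1, 2], "y": [3, 4]})

wb = Workbook()
ws = wb.active

start_row, start_col = 2, 3  # row 2, column C
for i, row in enumerate(df.itertuples(index=False)):
    for j, value in enumerate(row):
        # cell() writes at an absolute position, leaving everything else intact
        ws.cell(row=start_row + i, column=start_col + j, value=value)
```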
|
python|excel|pandas
| 2
|
9,221
| 37,445,334
|
Pandas Cannot Create a DataFrame from a Numpy Array Of Timestamps
|
<p>I have a numpy array of Pandas Timestamps:</p>
<pre><code>array([[Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T')],
[Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T')],
[Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T')]], dtype=object)
</code></pre>
<p>I cannot create a DataFrame from this array, as attempting to do so throws the following error:</p>
<pre><code>AssertionError: Number of Block dimensions (1) must equal number of axes (2)
</code></pre>
<p>You can see that the array is clearly 2 dimensional, which i verified by using <code>ndim</code>.</p>
<p>Why can't I create a DataFrame?</p>
|
<p>I think you can use <code>list</code> comprehension:</p>
<pre><code>import pandas as pd
import numpy as np
a =np.array([[pd.Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T')],
[pd.Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T')],
[pd.Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
pd.Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T')]], dtype=object)
df = pd.DataFrame([x for x in a], columns=['a','b','c'])
print (df)
a b \
0 2016-05-02 15:50:00+00:00 2016-05-02 15:50:00+00:00
1 2016-05-02 17:10:00+00:00 2016-05-02 17:10:00+00:00
2 2016-05-02 20:25:00+00:00 2016-05-02 20:25:00+00:00
c
0 2016-05-02 15:50:00+00:00
1 2016-05-02 17:10:00+00:00
2 2016-05-02 20:25:00+00:00
</code></pre>
<p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_records.html" rel="nofollow"><code>DataFrame.from_records</code></a>:</p>
<pre><code>print (pd.DataFrame.from_records(a, columns=['a','b','c']))
a b \
0 2016-05-02 15:50:00+00:00 2016-05-02 15:50:00+00:00
1 2016-05-02 17:10:00+00:00 2016-05-02 17:10:00+00:00
2 2016-05-02 20:25:00+00:00 2016-05-02 20:25:00+00:00
c
0 2016-05-02 15:50:00+00:00
1 2016-05-02 17:10:00+00:00
2 2016-05-02 20:25:00+00:00
</code></pre>
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/dsintro.html#alternate-constructors" rel="nofollow">alternate constructors of df</a>.</p>
|
python|arrays|numpy|pandas|dataframe
| 1
|
9,222
| 42,119,434
|
ipython notebook view wide pandas dataframe vertically
|
<p>In Pandas 0.18.1, say I have a dataframe like so:</p>
<pre><code>df = pd.DataFrame(np.random.randn(100,200))
df.head()
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
</code></pre>
<p>What if I wanted to view this vertically like so:</p>
<pre><code>0 1 2 3 4 5
6 7 8 9 10 11
</code></pre>
<p>the docs point to:</p>
<pre><code>pd.set_option('expand_frame_repr', True)
df
0 1 2 3 4 5 6 \
0 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690
1 0.404705 0.577046 -1.715002 -1.039268 -0.370647 -1.157892 -1.344312
2 1.643563 -1.469388 0.357021 -0.674600 -1.776904 -0.968914 -1.294524
3 -0.013960 -0.362543 -0.006154 -0.923061 0.895717 0.805244 -1.206412
4 -1.170299 -0.226169 0.410835 0.813850 0.132003 -0.827317 -0.076467
7 8 9
0 0.113648 -1.478427 0.524988
1 0.844885 1.075770 -0.109050
2 0.413738 0.276662 -0.472035
3 2.565646 1.431256 1.340309
4 -1.187678 1.130127 -1.436737
</code></pre>
<p>Yet I can't seem to get that same result; what am I missing?</p>
<p>Previous questions seem to revolve around viewing all rows within the slider (<code>pd.set_option('display.max_columns', None)</code> sort of thing)</p>
|
<p>If you want to see just a few records, try this.</p>
<pre><code>df.head(3).transpose()
</code></pre>
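<p>If you want the wrapped multi-block view from the docs rather than a transpose, the likely missing piece is <code>display.width</code>: <code>expand_frame_repr</code> only wraps when the frame is wider than that setting. A sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 20))

pd.set_option('display.expand_frame_repr', True)
pd.set_option('display.max_columns', None)  # don't truncate columns with '...'
pd.set_option('display.width', 80)          # wrap the repr at 80 characters
print(df)  # columns now continue in blocks below, marked with a trailing backslash
```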
|
python|pandas|jupyter-notebook
| 12
|
9,223
| 37,797,325
|
Using Pandas, How to replace the last word of the string with an empty strings without distorting rest of the string?
|
<p>I can't share the actual data. So I am taking an example.
Suppose I have a list of suffixes - </p>
<pre><code>Suffix_List = ["Ltd.", "Inc.", "Limited", "Corp.", "AG"]
</code></pre>
<p>I have a data frame with a column containing company names. I want to replace the suffixes of the company name with an empty string. This should not distort the rest of company name. For example: Say the company name is "CAGE AG". "AG" should just be removed from the suffix not from the company name. So the result should be just "CAGE". Also, suffix should only be removed if it is present in the Suffix_List.</p>
<p>Right now I am using - </p>
<pre><code>for suffix in Suffix_List:
df['company_name'] = df['company_name'].str.replace( suffix,"")
</code></pre>
<p>But this distorts the actual company name too.</p>
<p>Sample company names could be - CAGE AG, Wage Limited, Tage Ltd. , Sage Inc</p>
|
<p>You can use regex to substitute out the suffix:</p>
<pre><code>In [11]: re.sub("\s?(" + "|".join(Suffix_List) + ")$", "", "CAGE AG")
Out[11]: 'CAGE'
</code></pre>
<p><em>This looks whether any (<code>|</code>) of the suffixes ends (<code>$</code>) the string.</em></p>
<p>On the Series/column you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>:</p>
<pre><code>In [21]: df = pd.DataFrame([["CAGE AG"], ["Stack Exchange Inc."]], columns=["company"])
In [22]: df
Out[22]:
               company
0              CAGE AG
1  Stack Exchange Inc.
In [23]: df["company"] = df["company"].str.replace("\s?(" + "|".join(Suffix_List) + ")$", "")
In [24]: df
Out[24]:
company
0 CAGE
1 Stack Exchange
</code></pre>
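<p>One caveat worth noting: several suffixes ("Ltd.", "Inc.", "Corp.") contain a dot, which is a regex wildcard, so the unescaped pattern would also strip things like "LtdX". Escaping each suffix with <code>re.escape</code> avoids that. A sketch:</p>

```python
import re

Suffix_List = ["Ltd.", "Inc.", "Limited", "Corp.", "AG"]
# re.escape turns "Ltd." into "Ltd\.", so the dot is matched literally
pattern = r"\s?(" + "|".join(map(re.escape, Suffix_List)) + r")$"

print(re.sub(pattern, "", "CAGE AG"))    # suffix stripped
print(re.sub(pattern, "", "Acme LtdX"))  # left alone: "LtdX" is not "Ltd."
```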
|
python|pandas
| 2
|
9,224
| 37,733,225
|
Explaining the difference between sklearn's scale() and multiplying by STD and adding the mean
|
<p>I'm running sklearn's PCA <code>fit_transform()</code> function on some data I'm looking to analyze, and I'm having trouble figuring out how exactly I need to transform scaled data back into numbers that make sense within the context of what I'm running. More specifically, when I run:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.preprocessing import scale
X_scaled = scale(otr_df)
X_scaled2 = otr_df.sub(otr_df.mean())
X_scaled2 = X_scaled2.div(otr_df.std())
# Should print all zeroes
print (X_scaled - X_scaled2)/X_scaled
"""
The above prints the following:
Date Index1 Index2 Index3 Index4
2016-05-11 0.000706 0.000706 0.000706 0.000706 ...
2016-05-10 0.000706 0.000706 0.000706 0.000706 ...
2016-05-09 0.000706 0.000706 0.000706 0.000706 ...
2016-05-06 0.000706 0.000706 0.000706 0.000706 ...
. . . .
. . . .
. . . .
"""
</code></pre>
<p>Instead of zero (as I would expect), I am getting constant values of 0.000706 for every column when the bottom line of the above code is printed. Although small, it doesn't seem like that is trivial if I am multiplying by several thousand to get back to the original scale (which I am in some cases). My guess is that it has to do with dividing by (N - 1) instead of N or something along those lines. However, after too much time reading sklearn and pandas docs with nothing to show for it, I figured I would ask here if anyone had any idea.</p>
|
<p>sklearn uses zero degrees of freedom in their standard deviation calculation:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.preprocessing import scale
np.random.seed([3,1415])
otr_df = pd.DataFrame(np.random.rand(10, 10))
X_scaled = scale(otr_df)
X_scaled2 = otr_df.sub(otr_df.mean())
X_scaled2 = X_scaled2.div(otr_df.std(ddof=0))
# ^
# Specify ddof here |
# Should print all zeroes
print (X_scaled - X_scaled2)/X_scaled
0 1 2 3 4 5 6 7 8 9
0 -0.0 -0.0 -0.0 -0.0 0.0 -0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 -0.0 -0.0 -0.0
2 -0.0 0.0 -0.0 0.0 0.0 0.0 0.0 -0.0 -0.0 -0.0
3 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0
4 0.0 -0.0 0.0 -0.0 -0.0 0.0 0.0 0.0 0.0 0.0
5 -0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0
6 0.0 -0.0 0.0 0.0 -0.0 -0.0 0.0 0.0 0.0 0.0
7 -0.0 -0.0 -0.0 0.0 -0.0 -0.0 -0.0 0.0 0.0 -0.0
8 0.0 0.0 -0.0 -0.0 -0.0 0.0 -0.0 -0.0 0.0 -0.0
9 0.0 0.0 -0.0 -0.0 0.0 -0.0 -0.0 -0.0 -0.0 -0.0
</code></pre>
|
python|numpy|pandas|scikit-learn
| 2
|
9,225
| 31,655,929
|
Does spark dataframe have a "row name" for each row like pandas?
|
<p>I am trying to use Spark DataFrames to operate on two DataFrames indexing by row name. In pandas, we can do </p>
<pre><code>df.loc(['aIndex', 'anotherIndex'])
</code></pre>
<p>to select two rows in the df by the index (or name of the row). How to achieve this in Spark DataFrame? Thanks. </p>
|
<p>No, there is no row indexing in Spark. Spark Data Frames are more like tables in relational database so if you want to access specific row you have to filter:</p>
<pre><code>df = sqlContext.createDataFrame(
[("Bob", 5), ("Alice", 6), ("Chuck", 4)], ("name", "age"))
df.where("name in ('Bob', 'Alice')")
df.where((df.name == "Bob") | (df.name == "Alice"))
</code></pre>
|
python|pandas|apache-spark|pyspark|apache-spark-sql
| 4
|
9,226
| 64,447,637
|
Creating masks for duplicate elements in Tensorflow/Keras
|
<p>I am trying to write a custom loss function for a person-reidentification task which is trained in a multi-task learning setting along with object detection. The filtered label values are of the shape (batch_size, num_boxes). I would like to create a mask such that only the values which repeat in dim 1 are considered for further calculations. How do I do this in TF/Keras-backend?</p>
<p><strong>Short Example</strong>:</p>
<pre><code>Input labels = [[0,0,0,0,12,12,3,3,4], [0,0,10,10,10,12,3,3,4]]
Required output: [[0,0,0,0,1,1,1,1,0],[0,0,1,1,1,0,1,1,0]]
</code></pre>
<p>(Basically I want to filter out only duplicates and discard unique identities for the loss function).</p>
<p>I guess a combination of tf.unique and tf.scatter could be used but I do not know how.</p>
|
<p>This code works:</p>
<pre><code>x = tf.constant([[0,0,0,0,12,12,3,3,4], [0,0,10,10,10,12,3,3,4]])
def mark_duplicates_1D(x):
y, idx, count = tf.unique_with_counts(x)
comp = tf.math.greater(count, 1)
comp = tf.cast(comp, tf.int32)
res = tf.gather(comp, idx)
mult = tf.math.not_equal(x, 0)
mult = tf.cast(mult, tf.int32)
res *= mult
return res
res = tf.map_fn(fn=mark_duplicates_1D, elems=x)
</code></pre>
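<p>For reference, the same masking logic in plain NumPy, which is handy for sanity-checking the TF version against the expected output from the question (this assumes, as the TF code does, that 0 is a background label to be excluded even though it repeats):</p>

```python
import numpy as np

def mark_duplicates_1d(row):
    # 1 where the label occurs more than once and is not the 0 background label
    vals, counts = np.unique(row, return_counts=True)
    dup = np.isin(row, vals[counts > 1]) & (row != 0)
    return dup.astype(int)

labels = np.array([[0, 0, 0, 0, 12, 12, 3, 3, 4],
                   [0, 0, 10, 10, 10, 12, 3, 3, 4]])
mask = np.array([mark_duplicates_1d(r) for r in labels])
```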
|
python|tensorflow|keras|deep-learning
| 1
|
9,227
| 58,875,944
|
how can I write for loop in python for copy pandas dataframe
|
<p>I'm a newbie in data engineering. Now I am trying to write Python code to duplicate data from a pandas dataframe. For example,
data: </p>
<pre><code> A B C D E F G E
1 2 3 4 0 1 0 1
5 6 7 8 0 1 1 0
9 1 2 3 0 1 0 1
</code></pre>
<p>I need to copy dataframe to</p>
<pre><code>dfE = A B C D E
1 2 3 4 0
5 6 7 8 0
9 1 2 3 0
dfF = A B C D F
1 2 3 4 1
5 6 7 8 1
9 1 2 3 1
dfG...
</code></pre>
<p>Help me please...</p>
|
<p>Hi piyaphong welcome to stackoverflow,</p>
<p>Basically, pandas allows selecting columns by their names; the code below should solve your case.</p>
<pre><code>from io import StringIO
import pandas as pd
data = """
A B C D E F G E
1 2 3 4 0 1 0 1
5 6 7 8 0 1 1 0
9 1 2 3 0 1 0 1
"""
df = pd.read_csv(StringIO(data), sep=' ')
dfE = df[['A', 'B', 'C', 'D', 'E']]
dfF = df[['A', 'B', 'C', 'D', 'F']]
dfG = df[['A', 'B', 'C', 'D', 'G']]
</code></pre>
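<p>Since the question asks for a loop: with many value columns, a dict comprehension scales better than writing each selection out by hand. A sketch (using unique column names for simplicity):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 1], 'C': [3, 7, 2], 'D': [4, 8, 3],
                   'E': [0, 0, 0], 'F': [1, 1, 1], 'G': [0, 1, 0]})

common = ['A', 'B', 'C', 'D']
# one sub-frame per value column: frames['E'] is A-D plus E, and so on
frames = {col: df[common + [col]] for col in ['E', 'F', 'G']}
```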
|
python|pandas
| 0
|
9,228
| 70,347,202
|
converting float number into datetime format
|
<p>I have a file that contains DateTime in float format<br />
example <code>14052020175648.000000</code> I want to convert this into <code>14-05-2020</code> and leave the timestamp value.</p>
<p>input ==> <code>14052020175648.000000</code></p>
<p>expected output ==> <code>14-05-2020</code></p>
|
<p>Use <code>pd.to_datetime</code>:</p>
<pre><code>df = pd.DataFrame({'Timestamp': ['14052020175648.000000']})
df['Date'] = pd.to_datetime(df['Timestamp'].astype(str).str[:8], format='%d%m%Y')
print(df)
# Output:
Timestamp Date
0 14052020175648.000000 2020-05-14
</code></pre>
<p>I used <code>astype(str)</code> in case where <code>Timestamp</code> is a float number and not a string, so it's not mandatory if your column already contains strings.</p>
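<p>If you need the output literally as the string <code>14-05-2020</code> (rather than a datetime displayed as <code>2020-05-14</code>), add a <code>strftime</code> step:</p>

```python
import pandas as pd

df = pd.DataFrame({'Timestamp': ['14052020175648.000000']})
df['Date'] = pd.to_datetime(df['Timestamp'].astype(str).str[:8], format='%d%m%Y')
df['Date'] = df['Date'].dt.strftime('%d-%m-%Y')  # back to a day-month-year string
```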
|
python|pandas|datetime
| 2
|
9,229
| 70,204,990
|
NumPy filling values inside given bounding box coordinates for a large array
|
<p>I have a very large 3d array</p>
<pre class="lang-py prettyprint-override"><code>large = np.zeros((2000, 1500, 700))
</code></pre>
<p>Actually, <code>large</code> is an image but for each coordinate, it has 700 values. Also, I have 400 bounding boxes. Bounding boxes <strong>do not</strong> have a fixed shape. I store a tuple of lower and upper bound coordinates for each box as follows</p>
<pre class="lang-py prettyprint-override"><code>boxes_y = [(y_lower0, y_upper0), (y_lower1, y_upper1), ..., (y_lower399, y_upper399)]
boxes_x = [(x_lower0, x_upper0), (x_lower1, x_upper1), ..., (x_lower399, x_upper399)]
</code></pre>
<p>Then, for each box, I want to fill the corresponding region in <code>large</code> array with a vector of size 700. Specifically, I have an <code>embeddings</code> array for each box</p>
<pre class="lang-py prettyprint-override"><code>embeddings = np.random.rand(400, 700) # In real case, these are not random. Just consider the shape
</code></pre>
<p>What I want to do is</p>
<pre class="lang-py prettyprint-override"><code>for i in range(400):
large[boxes_y[i][0]: boxes_y[i][1], boxes_x[i][0]: boxes_x[i][1]] = embeddings[i]
</code></pre>
<p>This works but it is too slow for such a large <code>large</code> array. I am looking for vectorizing this computation.</p>
|
<p>One big problem is that the input is really <em>huge</em> (~15.6 GiB). Another is that it is traversed up to 400 times in the worst case (resulting in up to 6240 GiB written to RAM). The issue is that <em>overlapping regions</em> are written <em>multiple times</em>.</p>
<p>A better solution is to iterate over the first two dimensions (those of the "image") to find which bounding box must be copied, as proposed by @dankal444. This is similar to what <a href="https://en.wikipedia.org/wiki/Z-buffering" rel="nofollow noreferrer"><strong>Z-buffer</strong></a>-based algorithms do in computer graphics.</p>
<p>Based on this, an even better solution is to use a <a href="https://en.wikipedia.org/wiki/Scanline_rendering" rel="nofollow noreferrer"><strong>scanline-rendering</strong></a> algorithm. In your case, the algorithm is much simpler than the traditional one since you are working with bounding boxes and not complex polygons. For each scanline (2000 here), you can quickly filter the bounding boxes that touch the scanline and then iterate over them. The classical algorithm is a bit too complex for your simple case: for each scanline, iterating over the filtered bounding boxes and overwriting their indices in each pixel is enough. This operation can be done in <strong>parallel</strong> using <strong>Numba</strong>. It is very fast because the computation is mainly performed in the CPU cache.</p>
<p>The final operation is to perform the actual data writes based on the previous indices (still using Numba in parallel). This operation is still <strong>memory bound</strong>, but <strong>the output array is only written once</strong> (only 15.6 GiB of RAM will be written in the worst case, and 7.8 GiB for <code>float32</code> items). This should take a fraction of a second on most machine. If this is not enough, you could try to use dedicated <em>GPUs</em> since the GPU RAM is often significantly faster than the main RAM (typically about an order of magnitude faster).</p>
<p>Here is the implementation:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
import numpy as np

# Assume the last dimension of `large` and `embeddings` is contiguous in memory
@nb.njit('void(float32[:,:,::1], float32[:,::1], int_[:,::1], int_[:,::1])', parallel=True)
def fastFill(large, embeddings, boxes_y, boxes_x):
n, m, l = large.shape
boxCount = embeddings.shape[0]
assert embeddings.shape == (boxCount, l)
assert boxes_y.shape == (boxCount, 2)
assert boxes_x.shape == (boxCount, 2)
imageBoxIds = np.full((n, m), -1, dtype=np.int16)
for y in nb.prange(n):
# Filtering -- A sort is not required since the number of bounding-box is small
boxIds = np.where((boxes_y[:,0] <= y) & (y < boxes_y[:,1]))[0]
for k in boxIds:
lower, upper = boxes_x[k]
imageBoxIds[y, lower:upper] = k
# Actual filling
for y in nb.prange(n):
for x in range(m):
boxId = imageBoxIds[y, x]
if boxId >= 0:
large[y, x, :] = embeddings[boxId]
</code></pre>
<p>Here is the benchmark:</p>
<pre class="lang-py prettyprint-override"><code>large = np.zeros((1000, 750, 700), dtype=np.float32) # 8 times smaller in memory
boxes_y = np.cumsum(np.random.randint(0, large.shape[0]//2, size=(400, 2)), axis=1)
boxes_x = np.cumsum(np.random.randint(0, large.shape[1]//2, size=(400, 2)), axis=1)
embeddings = np.random.rand(400, 700).astype(np.float32)
# Called many times
for i in range(400):
large[boxes_y[i][0]:boxes_y[i][1], boxes_x[i][0]:boxes_x[i][1]] = embeddings[i]
# Called many times
fastFill(large, embeddings, boxes_y, boxes_x)
</code></pre>
<p>Here are results on my machine:</p>
<pre><code>Initial code: 2.71 s
Numba (sequential): 0.13 s
Numba (parallel): 0.12 s (x22 times faster than the initial code)
</code></pre>
<p>Note that the first run is slower because of <a href="https://stackoverflow.com/questions/67270937/why-is-numpy-much-faster-at-creating-a-zero-array-compared-to-replacing-the-valu/67271140">virtual zero-mapped memory</a>. The Numba version is still about 10 times faster in this case.</p>
|
python|arrays|numpy|performance|numpy-ndarray
| 4
|
9,230
| 56,180,685
|
How to load such [[a,b,c],[d,e,f]..........]data into python from csv file?
|
<p>Hi I have some date in the follwing format [[a,b,c],[d,e,f],.........] in a csv file.</p>
<p>Its a 3x100 array. Please suggest me how to load the data to numpy arrray and I also want to perform one hot encoding upon it.</p>
|
<p>You have not shared the csv file contents exactly, so here is my best guess.</p>
<p>First, read the data from the file with a plain file-read operation, then use the <code>json</code> module to convert it to a list:</p>
<pre><code>import json
a= '[[1,11,1],[7,7,77],[5,6,7]]'
a = json.loads(a)
</code></pre>
<p>It will give you a list of lists:
<code>[[1, 11, 1], [7, 7, 77], [5, 6, 7]]</code></p>
<p>Convert it to a pandas DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_records(a, columns=['col1','col2','col3'])
</code></pre>
<p>Note that pandas <code>Categorical</code> only converts the column to a categorical dtype; for an actual one-hot encoding use <code>pd.get_dummies</code>:</p>
<pre><code>df['col2'] = pd.Categorical(df['col2'])
one_hot = pd.get_dummies(df['col2'], prefix='col2')
</code></pre>
|
python|machine-learning|numpy-ndarray
| 4
|
9,231
| 39,494,056
|
Progress bar for pandas.DataFrame.to_sql
|
<p>I want to migrate data from a large csv file to sqlite3 database.</p>
<p>My code on Python 3.5 using pandas:</p>
<pre><code>con = sqlite3.connect(DB_FILENAME)
df = pd.read_csv(MLS_FULLPATH)
df.to_sql(con=con, name="MLS", if_exists="replace", index=False)
</code></pre>
<p>Is it possible to print current status (progress bar) of execution of to_sql method? </p>
<p>I looked the article about <a href="https://github.com/tqdm/tqdm" rel="noreferrer">tqdm</a>, but didn't find how to do this.</p>
|
<p>Unfortunately <code>DataFrame.to_sql</code> does not provide a chunk-by-chunk callback, which is what tqdm needs to update its status. However, you can process the dataframe chunk by chunk:</p>
<pre><code>import sqlite3
import pandas as pd
from tqdm import tqdm
DB_FILENAME='/tmp/test.sqlite'
def chunker(seq, size):
# from http://stackoverflow.com/a/434328
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
def insert_with_progress(df, dbfile):
con = sqlite3.connect(dbfile)
chunksize = int(len(df) / 10) # 10%
with tqdm(total=len(df)) as pbar:
for i, cdf in enumerate(chunker(df, chunksize)):
replace = "replace" if i == 0 else "append"
cdf.to_sql(con=con, name="MLS", if_exists=replace, index=False)
pbar.update(chunksize)
df = pd.DataFrame({'a': range(0,100000)})
insert_with_progress(df, DB_FILENAME)
</code></pre>
<p>Note I'm generating the DataFrame inline here for the sake of having a complete workable example without dependency.</p>
<p>The result is quite stunning:</p>
<p><a href="https://i.stack.imgur.com/Alz8X.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Alz8X.png" alt="enter image description here" /></a></p>
|
python|sqlite|pandas|dataframe|tqdm
| 34
|
9,232
| 39,706,277
|
Numpy: Why is difference of a (2,1) array and a vertical matrix slice not a (2,1) array
|
<p>Consider the following code:</p>
<pre><code>>>x=np.array([1,3]).reshape(2,1)
array([[1],
[3]])
>>M=np.array([[1,2],[3,4]])
array([[1, 2],
[3, 4]])
>>y=M[:,0]
>>x-y
array([[ 0, 2],
[-2, 0]])
</code></pre>
<p>I would intuitively feel this should give a (2,1) vector of zeros.</p>
<p>I am not saying, however, that this is how it should be done and everything else is stupid. I would simply love if someone could offer some logic that I can remember so things like this don't keep producing bugs in my code.</p>
<p>Note that I am not asking how I can achieve what I want (I could reshape y), but I am hoping to get some deeper understanding of why Python/Numpy works as it does. Maybe I am doing something conceptually wrong?</p>
|
<p><code>numpy.array</code> indexes such that a single value in any position collapses that dimension, while slicing retains it, even if the slice is only one element wide. This is completely consistent, for any number of dimensions:</p>
<pre><code>>> A = numpy.arange(27).reshape(3, 3, 3)
>> A[0, 0, 0].shape
()
>> A[:, 0, 0].shape
(3,)
>> A[:, :, 0].shape
(3, 3)
>> A[:1, :1, :1].shape
(1, 1, 1)
</code></pre>
<p>Notice that every time a single number is used, that dimension is dropped.</p>
<p>You can obtain the semantics you expect by using <code>numpy.matrix</code>, where two single indexes return an order-0 array and all other types of indexing return matrices:</p>
<pre><code>>> M = numpy.asmatrix(numpy.arange(9).reshape(3, 3))
>> M[0, 0].shape
()
>> M[:, 0].shape # This is different from the array
(3, 1)
>> M[:1, :1].shape
(1, 1)
</code></pre>
<p>Your example works as you expect when you use <code>numpy.matrix</code>:</p>
<pre><code>>> x = numpy.matrix([[1],[3]])
>> M = numpy.matrix([[1,2],[3,4]])
>> y = M[:, 0]
>> x - y
matrix([[0],
[0]])
</code></pre>
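<p>With plain arrays (which newer NumPy releases recommend over <code>numpy.matrix</code>), you can keep the column dimension by slicing instead of indexing with a single integer:</p>

```python
import numpy as np

x = np.array([1, 3]).reshape(2, 1)
M = np.array([[1, 2], [3, 4]])

y = M[:, :1]   # a slice of width one keeps the dimension: shape (2, 1)
# M[:, [0]] (indexing with a list) keeps the dimension as well
print(x - y)   # now broadcasts to the expected (2, 1) column of zeros
```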
|
python|arrays|numpy
| 1
|
9,233
| 39,646,480
|
numpy.savetxt() -- tuple index out of range when writing string to text file
|
<p>I am trying to write two strings into a .txt file using numpy.savetxt(). I want the strings to be on consecutive lines. However, when I run my code, I get the following error:</p>
<pre><code>ncol = X.shape[1]
IndexError: tuple index out of range
</code></pre>
<p>Which occurs at the first line I call np.savetxt(). My code is below:</p>
<pre><code>import numpy as np
data=np.loadtxt('data.txt')
name1 = 'James'
name2 = 'James 2'
hi = 'Hello {}'.format(name1)
bye = 'Goodbye {}'.format(name2)
np.savetxt('greet.txt', hi, fmt="%s", newline='\n')
np.savetxt('greet.txt', bye, fmt="%s")
</code></pre>
<p>I've tried it without fmt, changing '%s' to other things, but they all give me the same error. Can anyone tell me what I'm doing wrong?</p>
|
<p>According to the @moses-koledoye comment and @jvnna's answer, the following is the correct answer.</p>
<p>The <code>hi</code> and <code>bye</code> variables are of type <code>string</code></p>
<pre><code>hi = 'Hello {}'.format(name1)
bye = 'Goodbye {}'.format(name2)
</code></pre>
<p>You are trying to save them to a txt file with the numpy function <code>np.savetxt</code>, according to the documentation:</p>
<blockquote>
<p>Save an array to a text file.</p>
</blockquote>
<p>So the <code>hi</code> and <code>bye</code> variables <strong>must</strong> be of type <code>np.array</code> in order to be handled with that function. Also, some reshape is needed.</p>
<p>A snippet code that does the trick could be:</p>
<pre><code>hi = 'Hello {}'.format(name1)
bye = 'Goodbye {}'.format(name2)
hi = np.array(hi).reshape(1, ) # This does the trick for hi
bye = np.array(bye).reshape(1, ) # This does the trick for bye
# Write both in a single call; two separate savetxt calls on the same
# file would overwrite it, leaving only the second greeting
np.savetxt('greet.txt', np.concatenate([hi, bye]), fmt="%s")
</code></pre>
|
python|numpy
| 0
|
9,234
| 39,675,085
|
Generating normal distribution in order python, numpy
|
<p>I am able to generate random samples of normal distribution in numpy like this.</p>
<pre><code>>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
</code></pre>
<p>But they are in random order, obviously. How can I generate numbers in order, that is, values should rise and fall like in a normal distribution.</p>
<p>In other words, I want to create a curve (gaussian) with mu and sigma and <code>n</code> number of points which I can input.</p>
<p>How to do this?</p>
|
<p>This will do the trick: (1) generate a random sample of x-coordinates of size n from the normal distribution, (2) evaluate the normal density at those x-values, and (3) sort the x-values by the magnitude of the density at their positions:</p>
<pre><code>import numpy as np
mu,sigma,n = 0.,1.,1000
def normal(x,mu,sigma):
return ( 2.*np.pi*sigma**2. )**-.5 * np.exp( -.5 * (x-mu)**2. / sigma**2. )
x = np.random.normal(mu,sigma,n) #generate random list of points from normal distribution
y = normal(x,mu,sigma) #evaluate the probability density at each point
x,y = x[np.argsort(y)],np.sort(y) #sort according to the probability density
</code></pre>
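<p>If the goal is simply a smooth bell curve with a given mu, sigma and n points (rather than a random sample reordered), an evenly spaced grid is more direct. A sketch, assuming a +/- 4 sigma plotting range:</p>

```python
import numpy as np

mu, sigma, n = 0.0, 1.0, 1000
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, n)  # ordered x grid
y = (2 * np.pi * sigma ** 2) ** -0.5 * np.exp(-0.5 * (x - mu) ** 2 / sigma ** 2)
# y rises to its maximum at x = mu and falls off symmetrically
```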
|
python|numpy|normal-distribution
| 1
|
9,235
| 44,021,986
|
Write ndarray with strings in first row followed by a matrix of numbers
|
<p>Let's say that I create a big matrix with np.vstack with a vector of strings as a first row followed by a matrix with numbers. How can I save/write in to a file? and in a nice aligned way?</p>
<p>Simplifying:</p>
<pre><code>names = np.array(['NAME_1', 'NAME_2', 'NAME_3'])
floats = np.array([ 0.1234 , 0.5678 , 0.9123 ])
# 1) In order to vstack them, do I need to expand dimensions?
np.expand_dims(floats, axis=0)
np.expand_dims(names, axis=0)
Output = np.vstack((names,floats)) # so I get the following matrix
NAME_1 NAME_2 NAME_3
0.1234 0.5678 0.9123
# 2) How can a save/print into a file being able to modify the format of the numbers?
# And keep the columns aligned?
# Something like this: (taking into account that I have a lot of columns)
NAME_1 NAME_2 NAME_3
1.23e-1 5.67e-1 9.12e-1
# I tried with:
np.savetxt('test.txt', Matrix, fmt=' %- 1.8s' , delimiter='\t')
# But I can't change the format of the numbers.
</code></pre>
<p>Thanks in advance!!</p>
|
<p>Apparently I found a solution following the kazemakase comments. It feels quite inefficient for big matrices but it does the job:</p>
<pre><code>names = np.array(['NAME_1', 'NAME_2', 'NAME_3'])
floats = np.array([[ 0.1234 , 0.5678 , 0.9123 ],
[ 0.1234 , -0.5678 , 0.9123 ]])
with open('test.txt', 'w+') as f:
for i in range(names.shape[0]) :
f.write( '{:^15}'.format(names[i]))
f.write( '{}'.format('\n'))
for i in range(floats.shape[0]) :
for j in range(floats.shape[1]) :
f.write( '{:^ 15.4e}'.format(floats[i,j]))
f.write( '{}'.format('\n'))
</code></pre>
<p>Giving the desired output:</p>
<pre><code> NAME_1 NAME_2 NAME_3
1.2340e-01 5.6780e-01 9.1230e-01
1.2340e-01 -5.6780e-01 9.1230e-01
</code></pre>
<p>Thank you!</p>
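<p>As an alternative to the explicit loop, a single <code>np.savetxt</code> call with its <code>header</code> argument also produces aligned columns (right-justified here, since %-style formatting cannot center). A sketch:</p>

```python
import numpy as np

names = ['NAME_1', 'NAME_2', 'NAME_3']
floats = np.array([[0.1234, 0.5678, 0.9123],
                   [0.1234, -0.5678, 0.9123]])

# Right-justify both the header fields and the numbers to width 15;
# an empty delimiter keeps the fixed-width columns lined up
header = ''.join('{:>15}'.format(n) for n in names)
np.savetxt('test.txt', floats, fmt='%15.4e', delimiter='',
           header=header, comments='')  # comments='' drops the leading '# '
```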
|
numpy|python-3.6
| 2
|
9,236
| 69,452,672
|
Does "tf.keras.losses.SparseCategoricalCrossentropy()" work for all classification problems?
|
<p>For <code>tf.keras.losses.SparseCategoricalCrossentropy()</code>, the documentation of TensorFlow says</p>
<p><code>"Use this crossentropy loss function when there are two or more label classes."</code></p>
<p>Since it covers two or more labels, including binary classification, then does it mean I can use this loss function for any classification problem? When do I <strong>have to</strong> use those binary loss such as <code>tf.keras.losses.BinaryCrossentropy</code> and similar ones?</p>
<p>I am using TensorFlow 2.3.1</p>
|
<p><code>BinaryCrossentropy</code> is essentially a special case of <code>CategoricalCrossentropy</code> with 2 classes, but it is more efficient to compute.
With a categorical crossentropy loss the output layer needs 2 units, while with <code>BinaryCrossentropy</code> a single unit is enough. That means you can halve the weights of the last layer by using the <code>BinaryCrossentropy</code> loss.</p>
|
python|tensorflow|keras
| 1
|
9,237
| 69,386,912
|
Is there any way to use `.pkl` sklearn model in Pyspark DataFrame
|
<p>If dataframe using pandas, here's what I did</p>
<pre><code>import joblib
lgbm_v5 = joblib.load('model.pkl')
b = lgbm_v5.predict_proba(X_test)
</code></pre>
<p>Is there any way to use <code>.pkl</code> sklearn model in Pyspark DataFrame?</p>
|
<p>Here is an example with a simple sentiment analysis to get you started.</p>
<p>Once you get the hang of it, this can become much more elaborate for even better performance (pandas_udf, predictions over vectorized numpy arrays).</p>
<pre><code>from functools import partial

import joblib
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, FloatType


def load_model(name):
    return joblib.load(...)  # load your model from disk or wherever


class SentimentModel:
    def __init__(self):
        self.tokenizer = load_model("tokenizer")
        self.vectorizer = load_model("vectorizer")
        self.classifier = load_model("classifier")


def predict_text(text, model):
    try:
        tokens = model.tokenizer(text)
        vectors = model.vectorizer.transform([t for t in tokens])
        sentiment = model.classifier.predict_proba(vectors)
    except ValueError:
        sentiment = [0.5, 0.5]
    return sentiment


model = SentimentModel()
partial_transform = partial(predict_text, model=model)
prediction_udf = udf(partial_transform, ArrayType(FloatType()))

# df is your Spark DataFrame with a "text" column
predictions_df = df.withColumn("probabilities", prediction_udf(df.text))
</code></pre>
|
python|pandas|dataframe|pyspark
| 1
|
9,238
| 69,544,827
|
Data frame: get row and update it
|
<p>I want to select a row based on a condition and then update it in dataframe.</p>
<p>One solution I found is to update <code>df</code> based on the condition, but then I must repeat the condition for every column. What is a better solution, so that I get the desired row once and change it?</p>
<pre><code>df.loc[condition, "top"] = 1
df.loc[condition, "pred_text1"] = 2
df.loc[condtion, "pred1_score"] = 3
</code></pre>
<p>something like:</p>
<pre><code>row = df.loc[condition]
row["top"] = 1
row["pred_text1"] = 2
row["pred1_score"] = 3
</code></pre>
|
<p>Extract the boolean mask and set it as a variable.</p>
<pre><code>m = condition
df.loc[m, 'top'] = 1
df.loc[m, 'pred_text1'] = 2
df.loc[m, 'pred1_score'] = 3
</code></pre>
<p>but the shortest way is:</p>
<pre><code>df.loc[condition, ['top', 'pred_text1', 'pred1_score']] = [1, 2, 3]
</code></pre>
<p><strong>Update</strong></p>
<blockquote>
<p>Wasn't it possible to retrieve the index of row and then update it by that index?</p>
</blockquote>
<pre><code>idx = df[condition].index
df.loc[idx, 'top'] = 1
df.loc[idx, 'pred_text1'] = 2
df.loc[idx, 'pred1_score'] = 3
</code></pre>
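<p>A runnable sketch of the multi-column assignment (the <code>key</code> column and all values are made up for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'key': ['a', 'b'],
                   'top': [0, 0],
                   'pred_text1': ['', ''],
                   'pred1_score': [0.0, 0.0]})

condition = df['key'] == 'a'

# assign all three columns for the matching rows in one statement
df.loc[condition, ['top', 'pred_text1', 'pred1_score']] = [1, 'x', 3.0]
print(df)
```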
|
pandas|dataframe
| 1
|
9,239
| 54,207,414
|
Is there a better way to deal with row dependency within a group when performing some calculations on a specific column?
|
<p>I have a dataframe in which, for some columns, a row's value is conditional on the previous row's value. This dependence also only holds within a group identified by, for example, 'gid'.</p>
<p>What I did is basically create another dataframe and then transpose the column(s) used for calculations. The steps I used in the attached code are the following.</p>
<ol>
<li>This is the original dataframe:</li>
</ol>
<pre><code> gid id x y
0 1 0 1.624345 0.876389
1 1 1 -0.611756 0.894607
2 1 2 -0.528172 0.085044
3 1 3 -1.072969 0.039055
4 1 4 0.865408 0.169830
5 2 0 -2.301539 0.878143
6 2 1 1.744812 0.098347
7 2 2 -0.761207 0.421108
8 2 3 0.319039 0.957890
9 2 4 -0.249370 0.533165
10 3 0 1.462108 0.691877
11 3 1 -2.060141 0.315516
12 3 2 -0.322417 0.686501
13 3 3 -0.384054 0.834626
14 3 4 1.133769 0.018288
</code></pre>
<ol start="2">
<li>This is the second dataframe I created after transposing:</li>
</ol>
<pre><code> x0 x1 x2 x3 x4 y0 y1 \
gid
1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
y2 y3 y4
gid
1 0.085044 0.039055 0.169830
2 0.421108 0.957890 0.533165
3 0.686501 0.834626 0.018288
</code></pre>
<ol start="3">
<li>Then I merge two to so I have same number of rows on the second dataframe.</li>
</ol>
<pre><code> gid x0 x1 x2 x3 x4 y0 y1 \
0 1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
1 1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
2 1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
3 1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
4 1 1.624345 -0.611756 -0.528172 -1.072969 0.865408 0.876389 0.894607
5 2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
6 2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
7 2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
8 2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
9 2 -2.301539 1.744812 -0.761207 0.319039 -0.249370 0.878143 0.098347
10 3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
11 3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
12 3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
13 3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
14 3 1.462108 -2.060141 -0.322417 -0.384054 1.133769 0.691877 0.315516
y2 y3 y4 id
0 0.085044 0.039055 0.169830 0
1 0.085044 0.039055 0.169830 1
2 0.085044 0.039055 0.169830 2
3 0.085044 0.039055 0.169830 3
4 0.085044 0.039055 0.169830 4
5 0.421108 0.957890 0.533165 0
6 0.421108 0.957890 0.533165 1
7 0.421108 0.957890 0.533165 2
8 0.421108 0.957890 0.533165 3
9 0.421108 0.957890 0.533165 4
10 0.686501 0.834626 0.018288 0
11 0.686501 0.834626 0.018288 1
12 0.686501 0.834626 0.018288 2
13 0.686501 0.834626 0.018288 3
14 0.686501 0.834626 0.018288 4
</code></pre>
<ol start="4">
<li>Then I use a for loop to calculate columns 'x1' - 'x4' ('x0' is the initial value, so no change is needed), then take the last row for each group and stack it.</li>
</ol>
<pre><code> gid output id
0 1 1.624345 0
1 1 2.518952 1
2 1 2.603996 2
3 1 2.643051 3
4 1 2.812881 4
5 2 -2.301539 0
6 2 1.744812 1
7 2 2.165919 2
8 2 3.123809 3
9 2 3.656974 4
10 3 1.462108 0
11 3 1.777624 1
12 3 2.464124 2
13 3 3.298750 3
14 3 3.317038 4
</code></pre>
<ol start="5">
<li>Final step is to merge them and get what I want</li>
</ol>
<pre><code> gid id x y output
0 1 0 1.624345 0.876389 1.624345
1 1 1 -0.611756 0.894607 2.518952
2 1 2 -0.528172 0.085044 2.603996
3 1 3 -1.072969 0.039055 2.643051
4 1 4 0.865408 0.169830 2.812881
5 2 0 -2.301539 0.878143 -2.301539
6 2 1 1.744812 0.098347 1.744812
7 2 2 -0.761207 0.421108 2.165919
8 2 3 0.319039 0.957890 3.123809
9 2 4 -0.249370 0.533165 3.656974
10 3 0 1.462108 0.691877 1.462108
11 3 1 -2.060141 0.315516 1.777624
12 3 2 -0.322417 0.686501 2.464124
13 3 3 -0.384054 0.834626 3.298750
14 3 4 1.133769 0.018288 3.317038
</code></pre>
<p>However I think there must be some better ways to achieve the same goal. I am thinking to use groupby and then apply by using the id column, but have not figured out how to do that. Any help is appreciated.</p>
<p>The complete code is attached.</p>
<pre><code>import numpy as np
import pandas as pd

# 1.
df = pd.DataFrame({'gid': np.repeat([1, 2, 3], 5),
                   'id': [0, 1, 2, 3, 4] * 3,
                   'x': np.random.randn(15),
                   'y': np.random.random(15)})

# 2.
columns = ['gid', 'id', 'x', 'y']
_df = df[columns].set_index(['gid', 'id']).unstack()
_df.columns = _df.columns.map(lambda x: '{}{}'.format(x[0], x[1]))

# 3.
_df = _df.join(df.set_index('gid')['id'],
               how='left').reset_index().set_index(df.index)

# 4.
for i in range(1, 5):
    _df['x' + str(i)] = np.fmax(_df['x' + str(i)],
                                _df['x' + str(i - 1)] + _df['y' + str(i)])

columns = pd.Index([column for column in _df.columns
                    if column.find('x') >= 0], name='x')
_df = _df.reindex(columns=columns).groupby(_df['gid']).last()
_df = _df.stack().reset_index().rename(columns={0: 'output'}).drop('x', axis=1)
_df['id'] = _df.groupby('gid').cumcount()

# 5.
df = df.join(_df[['output']])
</code></pre>
|
<p>I think the following code will do the job and is much simpler.</p>
<pre><code>def myfunc(df, id=0, column='x'):
    return np.fmax(df.loc[df['id'] == id, column],
                   np.add(df.loc[df['id'] == id - 1, column],
                          df.loc[df['id'] == id, 'y']))


for id in range(1, 5):
    df_1.loc[df_1['id'] == id, 'x'] = \
        df_1.groupby('gid').apply(myfunc, id=id).values
</code></pre>
|
python|pandas|numpy
| 0
|
9,240
| 53,824,556
|
How to install Numpy and Pandas for AWS Lambdas?
|
<p><strong>Problem:</strong>
I wanted to use Numpy and Pandas in my AWS lambda function. I am working on Windows 10 with PyCharm. My function compiles and works fine on local machine, however, as soon as package it up and deploy on AWS, it breaks down giving errors in importing the numpy and pandas packages. I tried reinstalling both packages and then redeploying however, error remained the same. </p>
<p><strong>StackOverFlow Solutions:</strong>
Other people are having similar issues and fellow users have suggested that this is mainly compatibility issue, because Python libraries are compiled on Windows whereas, AWS Lambda runs on linux machines.</p>
<p><strong>Question:</strong>
What's the best way to create a deployment package for AWS on windows 10? Is there a way I can specify targeted platform while installing packages through PIP. Apparently there is an option in pip with tag --platform but I cannot figure out how to use it. Any helps? </p>
|
<p>As is often the case, there is more than one way to arrive at a solution.</p>
<p>The preferred way imho is to use AWS lambda layers, because it separates the functional code from the dependencies. The basics are explained <a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html" rel="noreferrer">here</a>.</p>
<ol>
<li>Get all your dependencies. Like you mentioned correctly, pandas and numpy have to be compiled for the AMI Linux. This can be done with the tool: "serverless python requirements" or with a docker container based on this <a href="https://github.com/lambci/docker-lambda" rel="noreferrer">image</a>. A more detailed instruction can be found <a href="https://medium.com/@qtangs/creating-new-aws-lambda-layer-for-python-pandas-library-348b126e9f3e" rel="noreferrer">here</a>.</li>
<li>Put the dependencies in a folder called <code>python</code>.</li>
<li>zip the whole folder e.g. with the preinstalled windows zipping tool.</li>
<li>Upload the zip file to AWS as a layer: Go to AWS Lambda, from the left choose Layers and "Create a new layer". </li>
<li>After you saved the layer, go to your Lambda Function and choose "Layers". Click "Add a layer" choose your newly created layer and click on save. Now your function should not get import errors anymore.</li>
</ol>
|
python|pandas|amazon-web-services|numpy|aws-lambda
| 8
|
9,241
| 53,892,742
|
ndarray array2string output format
|
<p>I know this is a simple question, but can only find part of the answer on SO, and can't figure out how to do from Python or Numpy documentation. I'm sure it's documented, I just don't understand. </p>
<p>I need to print/write using a fixed format (6 fields at 13.7e). The array I need to print might have 4, 8, 12, or more values. I found <code>np.array2string</code>, which is <strong>almost</strong> what I need.</p>
<ul>
<li>Is there a way to eliminate the leading and trailing <code>[</code> and <code>]</code>?</li>
<li>Likewise, is there a way to avoid the indentation on 2nd and
following lines?</li>
</ul>
<p>Solution does not have to use <code>array2string</code>. It was the simplest thing I found to control print/write formatting for <code>ndarray</code>. I am open to any solution. :-) </p>
<p>Here is a simple example to demonstrate the behavior I want, and what I get:</p>
<pre><code>>>> import numpy as np
>>> foo = np.arange(4.0)
>>> # this shows desired output with 4 values
>>> print( ('%13.7e'*4) % (foo[0], foo[1], foo[2], foo[3]) )
0.0000000e+001.0000000e+002.0000000e+003.0000000e+00
>>> print ( np.array2string(foo, separator='', formatter={'float_kind':'{:13.7e}'.format}) )
[0.0000000e+001.0000000e+002.0000000e+003.0000000e+00]
>>> foo = np.arange(12.0)
>>> print ( np.array2string(foo, separator='', max_line_width=80, formatter={'float_kind':'{:13.7e}'.format}) )
[0.0000000e+001.0000000e+002.0000000e+003.0000000e+004.0000000e+005.0000000e+00
6.0000000e+007.0000000e+008.0000000e+009.0000000e+001.0000000e+011.1000000e+01]
>>> # this shows desired output with 12 values
>>> print( ('%13.7e'*6) % (foo[0], foo[1], foo[2], foo[3], foo[4], foo[5]) )
0.0000000e+001.0000000e+002.0000000e+003.0000000e+004.0000000e+005.0000000e+00
>>> print( ('%13.7e'*6) % (foo[6], foo[7], foo[8], foo[9], foo[10], foo[11]) )
6.0000000e+007.0000000e+008.0000000e+009.0000000e+001.0000000e+011.1000000e+01
</code></pre>
|
<p>The output of <code>np.array2string</code> is just a string. You can format it using normal string methods. For instance, you can strip off the lead/tail brackets and replace spaces with nothing using:</p>
<pre><code>foo = np.arange(12.)
s = (np.array2string(foo,
separator='',
formatter={'float_kind':'{:13.7e}'.format},
max_line_width=80).strip('[]').replace(' ', ''))
print(s)
# prints:
0.0000000e+001.0000000e+002.0000000e+003.0000000e+004.0000000e+005.0000000e+00
6.0000000e+007.0000000e+008.0000000e+009.0000000e+001.0000000e+011.1000000e+01
</code></pre>
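<p>If the fixed wrapping matters more than <code>array2string</code> itself, it can be simpler to build the string directly by slicing the array into groups of six values (a sketch):</p>

```python
import numpy as np

foo = np.arange(12.0)

# format six values per line, each in a fixed 13.7e field
lines = [''.join('%13.7e' % v for v in foo[i:i + 6])
         for i in range(0, len(foo), 6)]
print('\n'.join(lines))
# prints:
# 0.0000000e+001.0000000e+002.0000000e+003.0000000e+004.0000000e+005.0000000e+00
# 6.0000000e+007.0000000e+008.0000000e+009.0000000e+001.0000000e+011.1000000e+01
```

<p>This sidesteps <code>max_line_width</code> entirely, so the line breaks never depend on how wide the formatted values happen to be.</p>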
|
python|numpy|numpy-ndarray
| 3
|
9,242
| 66,153,246
|
A str value changed over time & Customer dimension(s) in Pandas
|
<p>I have some customer data over dates, and I want to see if, for example, customers switch to another product over time. Ideally, I'd like to record the product change in a new column on the row where it occurred.</p>
<p>So, if I had a table like</p>
<pre><code>period, Customer , product
2020-01, Cust1, 12 TS
2020-02, Cust1, 12 TS
2020-03, Cust1, 14 SLM
2020-01, Cust2, 12 SLM
2020-02, Cust2, 12 TS
2020-03, Cust2, 14 SLM
</code></pre>
<p>So Cust1 went over time from TS to SLM, whereas Cust2 went from SLM to TS and then back again.
The final column should look like this:</p>
<pre><code>period, Customer , product , change
2020-01, Cust1, 12 TS , NAN
2020-02, Cust1, 12 TS , NAN
2020-03, Cust1, 14 SLM, from TS to SLM
2020-01, Cust2, 12 SLM, NAN
2020-02, Cust2, 12 TS, from SLM to TS
2020-03, Cust2, 14 SLM, from TS to SLM
</code></pre>
<p>I have look in many solutions avaliable like <a href="https://stackoverflow.com/questions/42959330/how-to-tell-if-a-value-changed-over-dimensions-in-pandas">here</a>, but I couldn't manage to do it the way I wanted.</p>
|
<p>We can do this in a number of ways, I would suggest using <code>shift</code> and <code>groupby</code> to find the max record and then <code>.loc</code> to filter your query set appropriately.</p>
<h3>setup.</h3>
<pre><code>from io import StringIO
import pandas as pd
d = """period, Customer, quantity , product
2020-01, Cust1, 12, TS
2020-02, Cust1, 12, TS
2020-03, Cust1, 14, SLM
2020-01, Cust2, 12, SLM
2020-02, Cust2, 12, TS
2020-03, Cust2, 14, SLM"""
</code></pre>
<hr />
<pre><code>df = pd.read_csv(StringIO(d),sep=',',parse_dates=['period'])
# as you have spaces in your csv above.
#df.columns = df.columns.str.strip()
#create a record end date.
df['period_end_date'] = df.groupby('Customer')['period'].shift(-1)
#find the previous product.
df.loc[df['period_end_date'].isna(),
'previous_product'] = df.groupby('Customer')['product'].shift(1)
</code></pre>
<hr />
<p>the current record here will be where the <code>preiod_end_date</code> is null.</p>
<pre><code>print(df)
period Customer quantity product period_end_date previous_product
0 2020-01-01 Cust1 12 TS 2020-02-01 NaN
1 2020-02-01 Cust1 12 TS 2020-03-01 NaN
2 2020-03-01 Cust1 14 SLM NaT TS
3 2020-01-01 Cust2 12 SLM 2020-02-01 NaN
4 2020-02-01 Cust2 12 TS 2020-03-01 NaN
5 2020-03-01 Cust2 14 SLM NaT TS
</code></pre>
<p><em>if</em> you need it in a pre-defined format as you've outlined above.</p>
<pre><code>df.loc[df['period_end_date'].isna(),
'previous_product'] = ("FROM "
+ df.groupby('Customer')['product'].shift(1)
+ " TO "
+ df['product'] )
period Customer quantity product period_end_date previous_product
0 2020-01-01 Cust1 12 TS 2020-02-01 NaN
1 2020-02-01 Cust1 12 TS 2020-03-01 NaN
2 2020-03-01 Cust1 14 SLM NaT FROM TS TO SLM
3 2020-01-01 Cust2 12 SLM 2020-02-01 NaN
4 2020-02-01 Cust2 12 TS 2020-03-01 NaN
5 2020-03-01 Cust2 14 SLM NaT FROM TS TO SLM
</code></pre>
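<p>If you want exactly the <code>change</code> column from the question (a label on each row whose product differs from the customer's previous row), a <code>groupby</code>/<code>shift</code> comparison is enough; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'period': ['2020-01', '2020-02', '2020-03'] * 2,
                   'Customer': ['Cust1'] * 3 + ['Cust2'] * 3,
                   'product': ['TS', 'TS', 'SLM', 'SLM', 'TS', 'SLM']})

# previous product per customer, NaN on each customer's first row
prev = df.groupby('Customer')['product'].shift()

# label only the rows where the product actually changed
df['change'] = ('from ' + prev + ' to ' + df['product']).where(
    prev.notna() & (prev != df['product']))
print(df)
```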
|
python|pandas|dataframe|logic
| 1
|
9,243
| 66,248,368
|
How create a pandas dataframe for encoding nltk frequency-distributions
|
<p>Hej, I'm an absolute beginner in Python (a linguist by training) and don't know how to put the Twitter data, which I scraped with Twint (stored in a <code>csv-file</code>), into a <code>DataFrame</code> in Pandas so that I can compute <code>nltk frequency-distributions</code>.
Actually, I'm not even sure whether it is important to create a test file and a train file, as I did (see code below). I know it's a very basic question. However, some help would be great! Thank you.</p>
<p>This is what I have so far:</p>
<pre><code>import pandas as pd
data = pd.read_csv("test_newtest90.csv")
data = pd.read_csv("train_newtest90.csv")
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import string
import nltk
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
%matplotlib inline
train = pd.read_csv("train_newtest90.csv")
test = pd.read_csv("test_newtest90.csv")
combi = train.append(test, ignore_index=True)
</code></pre>
<p>If I check:</p>
<pre><code>combi["tidy_tweet"].dtypes
</code></pre>
<p>I get this:</p>
<pre><code>dtype('O')
</code></pre>
|
<p>You do not need to split your csv in a train and a test set. That's only needed if you are going to train a model, which is not the case. So simply load the original unsplit csv file:</p>
<pre><code>import pandas as pd
df = pd.read_csv("filename.csv")
</code></pre>
<p>The next step is to clean the tweets to get rid of hashtags, urls, etc:</p>
<pre><code>import re

# use RegEx to clean tweets
def cleaningTweets(twt):
    twt = re.sub(r'@[A-Za-z0-9]+', '', twt)
    twt = re.sub(r'#', '', twt)
    twt = re.sub(r'https?://\S+', '', twt)
    return twt
# apply previous function to the current df, assuming the relevant column name is "tweets"
df.tweets = df.tweets.apply(cleaningTweets)
# make all words lowercase
df.tweets = df.tweets.str.lower()
</code></pre>
<p>Now you can start doing fun stuff like word frequency counts. But in order to so it is advised to first remove stop words and punctuation marks:</p>
<pre><code>import nltk
import string
# load a list of stopwords from nltk to clean the tweets from stop words
nltk.download('stopwords')
stopwords = nltk.corpus.stopwords.words('english')
#remove stopwords and punctuation
df['no_stopwords'] = df.tweets.apply(lambda x: ' '.join([i for i in x.split(" ") if i not in string.punctuation if i not in stopwords]))
# count word frequency
df_word_freq = df.no_stopwords.str.split(expand=True).stack().value_counts()
# save the top 50 to csv
df_word_freq.head(50).to_csv('word_count.csv')
#df_word_freq.to_csv('word_count.csv') for saving the entire df
# create bar chart of top 50
df_word_freq.head(50).plot.bar()
</code></pre>
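<p>If you specifically want an nltk-style frequency distribution object, the same count can be built with <code>nltk.FreqDist</code> or, equivalently, with the standard library's <code>collections.Counter</code>, which has the same core interface (a sketch with made-up tweets):</p>

```python
from collections import Counter

tweets = ["great day great vibes", "bad day"]

# count every word across all tweets (nltk.FreqDist(words) behaves the same)
freq = Counter(word for tweet in tweets for word in tweet.split())
print(freq.most_common(2))  # [('great', 2), ('day', 2)]
```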
<p>Or you could make a word cloud:</p>
<pre><code>from wordcloud import WordCloud
# first merge all tweets into one string
whole_words = " ".join([tweets for tweets in df.tweets])
# feed the string to WordCloud
word_cloud = WordCloud(width = 700, height = 500, random_state = 1, min_font_size = 10, stopwords = stopwords).generate(whole_words)
# save wordcloud as png file
word_cloud.to_file('wordcloud.png')
</code></pre>
<p>I hope this gets you started!</p>
|
python|pandas|nltk
| 0
|
9,244
| 66,308,078
|
How to solve an Attribute Error -- no attribute 'register_op_list' in tensorflow?
|
<p>I'm trying to run this in jupyter and I get an AttributeError</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12,8)
import numpy as np
import tensorflow as tf
import keras
import pandas as pd
from keras_tqdm import TQDMNotebookCallback
</code></pre>
<p>I did a search and someone asked a similar question with no answer here: <a href="https://stackoverflow.com/questions/61426378/attributeerror-tensorflow">AttributeError tensorflow</a> (someone else asked similarly, here <a href="https://stackoverflow.com/questions/61830033/attributeerror-module-tensorflow-python-framework-op-def-registry-has-no-attr">AttributeError: module 'tensorflow.python.framework.op_def_registry' has no attribute 'register_op_list</a>). I'm hoping to improve on the question by following a comment that was left on the question by giving my full traceback:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-b19e750c758e> in <module>
4 import numpy as np
5 import tensorflow as tf
----> 6 import keras
7 import pandas as pd
8 from keras_tqdm import TQDMNotebookCallback
~/env/py37/lib/python3.7/site-packages/keras/__init__.py in <module>
2
3 from . import utils
----> 4 from . import activations
5 from . import applications
6 from . import backend
~/env/py37/lib/python3.7/site-packages/keras/activations.py in <module>
4 from . import backend as K
5 from .utils.generic_utils import deserialize_keras_object
----> 6 from .engine import Layer
7
8
~/env/py37/lib/python3.7/site-packages/keras/engine/__init__.py in <module>
6 from .topology import Layer
7 from .topology import get_source_inputs
----> 8 from .training import Model
~/env/py37/lib/python3.7/site-packages/keras/engine/training.py in <module>
23 from .. import metrics as metrics_module
24 from ..utils.generic_utils import Progbar
---> 25 from .. import callbacks as cbks
26 from ..legacy import interfaces
27
~/env/py37/lib/python3.7/site-packages/keras/callbacks.py in <module>
24 if K.backend() == 'tensorflow':
25 import tensorflow as tf
---> 26 from tensorflow.contrib.tensorboard.plugins import projector
27
28
~/env/py37/lib/python3.7/site-packages/tensorflow/contrib/__init__.py in <module>
21 # Add projects here, they will show up under tf.contrib.
22 from tensorflow.contrib import bayesflow
---> 23 from tensorflow.contrib import cloud
24 from tensorflow.contrib import compiler
25 from tensorflow.contrib import copy_graph
~/env/py37/lib/python3.7/site-packages/tensorflow/contrib/cloud/__init__.py in <module>
20
21 # pylint: disable=line-too-long,wildcard-import
---> 22 from tensorflow.contrib.cloud.python.ops.bigquery_reader_ops import *
23 # pylint: enable=line-too-long,wildcard-import
24
~/env/py37/lib/python3.7/site-packages/tensorflow/contrib/cloud/python/ops/bigquery_reader_ops.py in <module>
19 from __future__ import print_function
20
---> 21 from tensorflow.contrib.cloud.python.ops import gen_bigquery_reader_ops
22 from tensorflow.python.framework import ops
23 from tensorflow.python.ops import io_ops
~/env/py37/lib/python3.7/site-packages/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py in <module>
191
192
--> 193 _op_def_lib = _InitOpDefLibrary()
~/env/py37/lib/python3.7/site-packages/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py in _InitOpDefLibrary()
94 op_list = _op_def_pb2.OpList()
95 _text_format.Merge(_InitOpDefLibrary.op_list_ascii, op_list)
---> 96 _op_def_registry.register_op_list(op_list)
97 op_def_lib = _op_def_library.OpDefLibrary()
98 op_def_lib.add_op_list(op_list)
AttributeError: module 'tensorflow.python.framework.op_def_registry' has no attribute 'register_op_list'
</code></pre>
<p>I'm using tensorflow 1.2.1. I know it's old, but I'm trying to run some code on a github that was written three years ago. It's been a pain trying to get all the versions correct to hopefully run the project.</p>
<p>Thanks!</p>
|
<p>Solution:</p>
<pre><code>!pip install tensorflow==2.6.0
!pip install tensorflow-addons
!pip install -q "tqdm>=4.36.1"
import tensorflow as tf
import tensorflow_addons as tfa
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12,8)
import numpy as np
import tensorflow as tf
import tensorflow.keras as k
import pandas as pd
#from keras_tqdm import TQDMNotebookCallback
tqdm_callback = tfa.callbacks.TQDMProgressBar()
</code></pre>
<p>providing <a href="https://colab.research.google.com/gist/mohantym/cd2ec3dd007bd610b17d03a4251327ab/stack_66308078.ipynb" rel="nofollow noreferrer">gist</a> for reference
Reference:
<a href="https://www.tensorflow.org/addons/tutorials/tqdm_progress_bar" rel="nofollow noreferrer">https://www.tensorflow.org/addons/tutorials/tqdm_progress_bar</a></p>
|
python|python-3.x|tensorflow|keras
| 0
|
9,245
| 52,822,548
|
How to convert a column with null values to datetime format?
|
<p>How do I convert the column to the pandas datetime datatype (with null values)?</p>
<pre><code>datet = pd.DataFrame(['2018-09-07 00:00:00','2017-09-15 00:00:00',''],columns=['Mycol'])
datet['Mycol'] = datet['Mycol'].apply(lambda x:
dt.datetime.strptime(x,'%Y-%m-%d %H:%M:%S'))
</code></pre>
<p>But it returns an error:
ValueError: time data '' does not match format '%Y-%m-%d %H:%M:%S'</p>
<p>How can I resolve that error? (keep the null as blank)</p>
<p>Thanks!</p>
|
<p>Just do:</p>
<pre><code>datet['Mycol'] = pd.to_datetime(datet['Mycol'], errors='coerce')
</code></pre>
<p>This will automatically convert the null to an <code>NaT</code> value.</p>
<p>Output:</p>
<pre><code>0 2018-09-07
1 2017-09-15
2 NaT
</code></pre>
|
python|pandas|datetime|type-conversion
| 10
|
9,246
| 52,728,812
|
How to Animate multiple columns as dots with matplotlib from pandas dataframe with NaN in python
|
<p>I have a question that has maybe been asked before, but I couldn't find any post describing my problem.</p>
<p>I have two pandas dataframes, each with the same index, representing x coordinates in one dataframe and y coordinates in the other. Each column represents a car that started at a specific timestep and logged its position every step until it arrived, then stopped logging.</p>
<p>Every time a car starts on its route, a column is added to each dataframe and the coordinates of each step are added to each frame (every step it moves through space and therefore has new x,y coordinates), (see example for the x coordinates dataframe)</p>
<p><a href="https://i.stack.imgur.com/Z9k6n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z9k6n.png" alt="enter image description here"></a></p>
<p>But I am trying to animate the tracks of each car by plotting the coordinates in an animated graph, but I cannot seem to get it worked. My code:</p>
<pre><code> %matplotlib notebook
from matplotlib import animation
from JSAnimation import IPython_display
from IPython.display import HTML
fig = plt.figure(figsize=(10,10))
#ax = plt.axes()
#nx.draw_networkx_edges(w_G,nx.get_node_attributes(w_G, 'pos'))
n_steps = simulation.x_df.index
def init():
graph, = plt.plot([],[],'o')
return graph,
def get_data_x(i):
return simulation.x_df.loc[i]
def get_data_y(i):
return simulation.y_df.loc[i]
def animate(i):
x = get_data_x(i)
y= get_data_y(i)
graph.set_data(x,y)
return graph,
animation.FuncAnimation(fig, animate, frames=100, init_func = init, repeat=True)
plt.show()
</code></pre>
<p>It does not plot anything, so any help would be very much appreciated. </p>
<hr>
<p><strong>EDIT</strong>: Minimal, Complete, and Verifiable example!</p>
<p>So two simple examples of the x and y dataframes that I have. Each has the same index.</p>
<pre><code>import random
import geopandas as gpd
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import math
import pandas as pd
from shapely.geometry import Point
from matplotlib import animation
from JSAnimation import IPython_display
%matplotlib inline
[IN]: df_x = pd.DataFrame(data=np.array([[np.NaN, np.NaN, np.NaN, np.NaN], [4, np.nan, np.NaN,np.NaN], [7, 12, np.NaN,np.NaN], [6, 18, 12,9]]), index= [1, 2, 3, 4], columns=[1, 2, 3, 4])
</code></pre>
<p>gives:</p>
<pre><code>[OUT]
1 2 3 4
1 NaN NaN NaN NaN
2 4.0 NaN NaN NaN
3 7.0 12.0 NaN NaN
4 6.0 18.0 12.0 9.0
</code></pre>
<p>And the y coordinate dataframe:</p>
<pre><code>[IN] df_y = pd.DataFrame(data=np.array([[np.NaN, np.NaN, np.NaN, np.NaN], [6, np.nan, np.NaN,np.NaN], [19, 2, np.NaN,np.NaN], [4, 3, 1,12]]), index= [1, 2, 3, 4], columns=[1, 2, 3, 4])'
</code></pre>
<p>gives:</p>
<pre><code>[OUT]
1 2 3 4
1 NaN NaN NaN NaN
2 6.0 NaN NaN NaN
3 19.0 2.0 NaN NaN
4 4.0 3.0 1.0 12.0
</code></pre>
<p>Now I want to create an animation, by creating a frame by plotting the x coordinate and the y coordinate of each column per each row of both dataframes. In this example, frame 1 should not contain any plot. Frame 2 should plot point (4.0 , 6.0) (of column 1). Frame 3 should plot point (7.0,19.0) (column1) and point (12.0,2.0) (column 2). Frame 4 should plot point (6.0, 4.0) (column 1), point (18.0,3.0) (column 2), point (12.0,1.0) (column 3) and (9.0, 12.0) column 4. Therefore I wrote the following code:</p>
<p>I tried writing the following code to animate this:</p>
<pre><code> [IN] %matplotlib notebook
from matplotlib import animation
from JSAnimation import IPython_display
from IPython.display import HTML
fig = plt.figure(figsize=(10,10))
#ax = plt.axes()
graph, = plt.plot([],[],'o')
def get_data_x(i):
return df_x.loc[i]
def get_data_y(i):
return df_y.loc[i]
def animate(i):
x = get_data_x(i)
y= get_data_y(i)
graph.set_data(x,y)
return graph,
animation.FuncAnimation(fig, animate, frames=4, repeat=True)
plt.show()
</code></pre>
<p>But this does not give any output. Any suggestions?</p>
|
<p>I've reformatted your code, but I think your main issue was that your dataframes start with a index of 1, but when you're calling your animation with <code>frames=4</code>, it's calling <code>update()</code> with <code>i=[0,1,2,3]</code>. Therefore when you do <code>get_data_x(0)</code> you raise a <code>KeyError: 'the label [0] is not in the [index]'</code></p>
<p><a href="https://matplotlib.org/api/_as_gen/matplotlib.animation.FuncAnimation.html" rel="nofollow noreferrer">As per the documentation</a>, <code>frames=</code> can be passed an iterable instead of an int. Here, I simply pass the index of your dataframe, and the function will iterate and call <code>update()</code> with each value. Actually, I decided to pass the intersection of your two dataframe indexes, that way, if there is one index present in one dataframe but not the other, it will not raise an Error. If you are garanteed that your two indexes are the same, then you could just do <code>frames=df_x.index</code></p>
<pre><code>from io import StringIO

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation

x_ = """ 1 2 3 4
1 NaN NaN NaN NaN
2 4.0 NaN NaN NaN
3 7.0 12.0 NaN NaN
4 6.0 18.0 12.0 9.0
"""
y_ = """1 2 3 4
1 NaN NaN NaN NaN
2 6.0 NaN NaN NaN
3 19.0 2.0 NaN NaN
4 4.0 3.0 1.0 12.0"""
df_x = pd.read_table(StringIO(x_), sep='\s+')
df_y = pd.read_table(StringIO(y_), sep='\s+')
fig, ax = plt.subplots(figsize=(5, 5))
graph, = ax.plot([],[], 'o')
# either set up sensible limits here that won't change during the animation
# or see the comment in function `update()`
ax.set_xlim(0,20)
ax.set_ylim(0,20)
def get_data_x(i):
return df_x.loc[i]
def get_data_y(i):
return df_y.loc[i]
def update(i):
x = get_data_x(i)
y = get_data_y(i)
graph.set_data(x,y)
# if you don't know the range of your data, you can use the following
# instructions to rescale the axes.
#ax.relim()
#ax.autoscale_view()
return graph,
# Creating the Animation object
ani = animation.FuncAnimation(fig, update,
frames=pd.Index.intersection(df_x.index,df_y.index),
interval=500, blit=False)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/r0pVm.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r0pVm.gif" alt="enter image description here"></a></p>
|
python-3.x|pandas|dataframe|animation|matplotlib
| 1
|
9,247
| 52,720,467
|
Maximum between an array and a number
|
<p>When using:</p>
<pre><code>import numpy as np
A = np.array([1,2,-3,-1, 0,3,-1])
print [max(A[j], 0) for j in range(len(A))]
</code></pre>
<p>we get <code>[1, 2, 0, 0, 0, 3, 0]</code>, as desired.</p>
<p><strong>How to get the same directly with a numpy function, such as <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.maximum.html" rel="nofollow noreferrer"><code>np.max</code></a>?</strong></p>
<pre><code>print max(A, 0) # ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
print np.max(A, 0) # 3
print np.max(A, 0, axis=0) # argument axis not working
print np.amax(A, 0) # 3
</code></pre>
|
<p>It's just <code>np.maximum(A, 0)</code>. Unlike the <code>np.max</code> function, it accepts <em>two</em> arguments and compares them element-wise. In your case, since the second argument is a scalar, the comparison happens by <em>broadcasting</em> it.</p>
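<p>A quick check (the scalar <code>0</code> is broadcast against every element of <code>A</code>):</p>

```python
import numpy as np

A = np.array([1, 2, -3, -1, 0, 3, -1])
print(np.maximum(A, 0).tolist())  # [1, 2, 0, 0, 0, 3, 0]
```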
|
python|arrays|numpy|max
| 2
|
9,248
| 46,503,087
|
Change Index and Reindex Pandas DataFrame
|
<p>I have a Python dictionary and I created a panda data frame like below:</p>
<p><a href="https://i.stack.imgur.com/DH5Kg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DH5Kg.png" alt="enter image description here"></a></p>
<p>I want to change the name of index column to <code>date</code> . But I couldn't do this with <code>data.set_index('date')</code> . How can I do this? Any advice would be appreciated.</p>
|
<p><code>data.set_index('date')</code> only assigns the column named 'date' as the index of the dataframe <code>data</code>; it does not rename the existing index.</p>
<p>You can set the name of the index with <code>data.index.name = 'date'</code>.</p>
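A small sketch of the difference, on a toy frame (the frame and values are assumptions, since the original data isn't shown):

```python
import pandas as pd

# a frame whose index holds the dates but has no name yet
data = pd.DataFrame({'value': [10, 20]}, index=['2017-01-01', '2017-01-02'])

data.index.name = 'date'  # names the index itself; no column is reassigned
print(data.index.name)    # → date
```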
|
pandas
| 0
|
9,249
| 58,504,332
|
how to add a variable in a dictionary with several dataframes?
|
<p>I have a dictionary with several dataframes that looks like this:</p>
<pre><code>dataframes = {'Df_20100101': DataFrame, 'Df_20100102': DataFrame, 'Df_20100103': DataFrame}
</code></pre>
<p>The key name for each dataframe is composed of Df_ followed by the date: 2010 [year], 01 [month] and 01 [day].</p>
<p>For each dataframe I want to add a new variable/column with the date [of course in the date format] that corresponds to its key.</p>
<p>I am kind of new learning to use dictionaries, so I would be really thankful if you can help me.</p>
<p>I tried with the following code, but it is pretty basic for what I want.</p>
<pre><code>for key, val in dataframes.items():
    val['Key']==k
</code></pre>
<p>Thanks in advance!</p>
|
<p>Use dictionary comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>:</p>
<pre><code>dataframes = {key:val.assign(Key = pd.to_datetime(key.split('_')[1]))
              for key, val in dataframes.items()}
</code></pre>
<p>Your code should be changed for select <code>DataFrame</code> by <code>key</code>s:</p>
<pre><code>for key, val in dataframes.items():
    dataframes[key]['Key'] = pd.to_datetime(key.split('_')[1])
</code></pre>
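An end-to-end sketch with two toy frames (the column `x` and its values are assumptions; the key names follow the question's Df_YYYYMMDD pattern):

```python
import pandas as pd

dataframes = {'Df_20100101': pd.DataFrame({'x': [1, 2]}),
              'Df_20100102': pd.DataFrame({'x': [3]})}

# attach the date parsed from each key as a new column
dataframes = {key: val.assign(Key=pd.to_datetime(key.split('_')[1]))
              for key, val in dataframes.items()}

print(dataframes['Df_20100101']['Key'].dt.strftime('%Y-%m-%d').tolist())
# → ['2010-01-01', '2010-01-01']
```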
|
python|pandas|dictionary|for-loop|generate
| 1
|
9,250
| 58,459,189
|
Is there a minimum term length required for features in Sklearn TfidfVectorizer
|
<p>I have a pandas dataframe with sentences that I'm trying to calculate Tfidf on: </p>
<pre><code>df['sentence'] = ['buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'buy donuts', 'purchase donuts', 'purchase donuts', 'purchase donuts', 'purchase donuts', 'purchase donuts', 'buy donut', 'buy a donut', 'buy 2 donuts', 'buy 2 donuts', 'buy 2 donuts', 'buy 12 donuts', 'buy 12 donuts', 'buy 12 donuts', 'purchase 2 donuts', 'purchase 12 donuts', 'i want to buy 2 donuts', 'i want to buy 12 donuts', 'i want to buy donuts', 'i want to buy some donuts', 'buy some donuts', 'buy two donuts', 'buy two donuts', 'buy two donuts', 'buy twelve donuts', 'buy twelve donuts', 'buy twelve donuts', 'purchase two donuts', 'purchase twelve donuts', 'i want to buy two donuts', 'i want to buy twelve donuts']
</code></pre>
<p>I first lemmatize these sentences (code below) and then feed the lemmatized list to sklearn's tfidfvectorizer.</p>
<p>However, I'm noticing a weird anomaly where it is not including some of the terms as features, even though min_df and max_df are set to their default value to include all terms. When I run get_feature_names(), every term is listed as a feature except 'i', 'a', and '2':</p>
<pre><code>['12', 'buy', 'donut', 'purchase', 'some', 'to', 'twelve', 'two', 'want']
</code></pre>
<p>I am not removing stopwords. For my purposes, "2" is very distinguishing, is there a minimum term length for features in tfidfvectorizer? How can I get these terms included as features?</p>
<pre><code>nlp = spacy.load("en", disable=['ner'])
vect = TfidfVectorizer(binary=True)
## Load in data
df = pd.read_csv('buy donuts.csv', encoding='utf-8')
df.columns = df.columns.str.lower()
## Normalize sentences
df['sentence'] = df['sentence'].str.replace(r"[^\w\s']", '').str.lower().str.strip().replace('', np.nan)
df = df.dropna(subset=['unit name', 'sentence'])
## Get lemmas for tfidf
def lemmas(x):
    docs = nlp(x)
    sents_lemma = [token.lemma_ for token in docs]
    return ' '.join(sents_lemma)
df['lemmas'] = df.index.map(df['sentence'].apply(lemmas))
## Get tfidf and calculate scores
tfidf = vect.fit_transform(df.lemmas.values.tolist())
scores = ((tfidf * tfidf.T).A).mean(axis=0)
print(vect.get_feature_names())
</code></pre>
|
<p>Check out the regular expression used by your TfidfVectorizer which you do not explicitly set. This can be accessed (or changed) with the <code>token_pattern</code> parameter.</p>
<p>It is <code>(?u)\b\w\w+\b</code> which you can see would prevent <code>2</code> or <code>I</code> from being recognized.</p>
<p>I am not sure which regex is most appropriate for your use case, so perhaps play around with it a little. However, the following would capture the case of <code>buy 2 donut</code>:</p>
<pre><code>(?u)\b\w+\b
</code></pre>
<p>So anyway to answer the general question, you could craft your <code>token_pattern</code> in such a way to enforce a minimum length (or even a maximum length) I suppose.</p>
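The effect of the two patterns can be checked with plain `re`, independent of scikit-learn:

```python
import re

default = r'(?u)\b\w\w+\b'  # sklearn's default token_pattern: 2+ word chars
relaxed = r'(?u)\b\w+\b'    # also keeps single-character tokens

text = 'i want to buy 2 donuts'
print(re.findall(default, text))  # → ['want', 'to', 'buy', 'donuts']
print(re.findall(relaxed, text))  # → ['i', 'want', 'to', 'buy', '2', 'donuts']
```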
|
python|python-3.x|pandas|sklearn-pandas|tfidfvectorizer
| 0
|
9,251
| 58,389,161
|
Find and compare the last and the row before in a group it if it matches the criteria in python
|
<p>I have a dataframe that contains game data. In this game, 2 players play a 90-minute game; however, the game can last longer than 90 minutes for various reasons. What I want is to find games that player 1 won after the 90th-minute mark. So I'd like to compare the end score of a game that:</p>
<ul>
<li>lasted more than 90 minutes, and</li>
<li>player 1 won,</li>
</ul>
<p>with the previous score for time &lt; 90 minutes.</p>
<p><a href="https://i.stack.imgur.com/TJpMj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TJpMj.png" alt="games dataframe"></a><br>
<sub>(source: <a href="https://cdn1.imggmi.com/uploads/2019/10/15/5599929a88b1a266ad6476488094fccd-full.png" rel="nofollow noreferrer">imggmi.com</a>)</sub> </p>
<pre class="lang-py prettyprint-override"><code># Games dataframe
games = pd.DataFrame({'game_id': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 3,
                                  8: 4, 9: 4, 10: 4, 11: 5, 12: 5, 13: 5},
                      'time': {0: 1, 1: 45, 2: 95, 3: 56, 4: 80, 5: 1, 6: 95, 7: 95,
                               8: 96, 9: 107, 10: 108, 11: 15, 12: 95, 13: 97},
                      'player 1': {0: 1, 1: 1, 2: 2, 3: 1, 4: 1, 5: 0, 6: 1, 7: 2,
                                   8: 0, 9: 1, 10: 2, 11: 1, 12: 1, 13: 1},
                      'player 2': {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1,
                                   8: 1, 9: 1, 10: 1, 11: 0, 12: 1, 13: 2}})

# Find the rows with the ending scores of games
a = games.drop_duplicates(["game_id"], keep='last')

# Find games that player 1 wins and time > 90
b = games[((games["player 1"] - games["player 2"]) > 0) & (games["time"] > 90)]
</code></pre>
<p>For example, in game 1 Player 1 won at the 95th minute; before that, the scores were even. Overall: before the 90th minute the situation was a tie, or Player 1 was losing, but after the 90-minute mark the final situation is that Player 1 won. How can I filter for this case?</p>
|
<p>Edit:
How about this?</p>
<pre><code>endScores = games.loc[(games['time'] > 90) & (games['player 1'] > games['player 2'])].groupby('game_id').nth(-1)
beforeScores = games.loc[(games['time'] <= 90) & (games['player 1'] <= games['player 2'])].groupby('game_id').nth(-1)
compareGames = beforeScores.join(endScores, rsuffix='_end').dropna()
</code></pre>
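As a sanity check against the toy `games` frame from the question — note this sketch uses `.last()` in place of `.nth(-1)`, since recent pandas versions changed `nth` to keep the original row index rather than the group key. With that data, games 1 and 3 are the ones where player 1 was tied or behind before minute 90 and winning after it:

```python
import pandas as pd

games = pd.DataFrame({'game_id': [1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
                      'time': [1, 45, 95, 56, 80, 1, 95, 95, 96, 107, 108, 15, 95, 97],
                      'player 1': [1, 1, 2, 1, 1, 0, 1, 2, 0, 1, 2, 1, 1, 1],
                      'player 2': [0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 2]})

# last post-90 row where player 1 leads, per game
endScores = games.loc[(games['time'] > 90) & (games['player 1'] > games['player 2'])].groupby('game_id').last()
# last pre-90 row where player 1 is tied or behind, per game
beforeScores = games.loc[(games['time'] <= 90) & (games['player 1'] <= games['player 2'])].groupby('game_id').last()
compareGames = beforeScores.join(endScores, rsuffix='_end').dropna()

print(sorted(compareGames.index.tolist()))  # → [1, 3]
```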
|
python|pandas|dataframe|comparison
| 2
|
9,252
| 68,933,053
|
Why try/exception method is not working on my python?
|
<pre><code>import os.path
from os import path
import pandas as pd

class ImportFiles:
    def __init__(self, pathname, file):
        self.pathname = pathname
        self.file = file

    def check(self):
        try:
            os.path.exists(self.pathname+'/'+self.file)
            print(f"'{self.pathname}/{self.file}': this file is valid to use")
        except OSError:
            print(f"Operating system raised an error: check if the file {self.file} exists or the name is correct. Check your path that contains this file.")

    def import_csv(self):
        df = pd.read_csv(f"{self.pathname}/{self.file}")
        return df

if __name__ == "__main__":
    table = ImportFiles("C:/Users/..s", "....csv")
    table.check()
</code></pre>
<p>This returns</p>
<pre><code>'C:/Users/..s/....csv' : this file is valid to use
</code></pre>
<p>But when I execute the next command</p>
<pre><code>table.import_csv()
</code></pre>
<p>It returns</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/..s/....csv'
</code></pre>
<p>I'm not sure why the <code>OSError</code> wasn't raised the first time?</p>
<p>Edit: Sorry, I simply put print method after except OSError</p>
<pre><code>print(f"Operating system raised an error: check if the file {self.file} exsits or the name is correct. Check your path that contains this file.")
</code></pre>
|
<p><a href="https://docs.python.org/3/library/os.path.html#os.path.exists" rel="nofollow noreferrer">os.path.exists</a> returns True/False. It does not raise an exception. "this file is valid to use" will print regardless of whether the file exists because no exception stopped it.</p>
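Easy to verify:

```python
import os.path

# exists() reports a missing path as False rather than raising OSError
print(os.path.exists('/definitely/not/a/real/path'))  # → False
```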
|
python|pandas|operating-system
| 3
|
9,253
| 44,480,208
|
Calling from a xlsx file I get attribute error but I do not when I create the dataframe pandas
|
<p>I am trying to do this with two dataframes:</p>
<pre><code>df1 = df.copy()
df1['emails'] = df1.emails.apply(lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
df1 = df1[df1.emails != '']
</code></pre>
<p>When I create the dataframes with the same information myself, returning the same datatypes, it works. For example, if I create a dataframe that looks like this:</p>
<pre><code>blacklisted=pd.DataFrame(columns=['email'],
data=[['smith.john@hotmail.com'],['earl.bob@jpmorgan.com'],['banana.star@csu.edu'], ['london.flag@wholefoods.com'],
['soft.pretzel@utz.com']])
blacklisted.head()
email
0 smith.john@hotmail.com
1 earl.bob@jpmorgan.com
2 banana.star@csu.edu
3 london.flag@wholefoods.com
4 soft.pretzel@utz.com
</code></pre>
<p>and another dataframe that looks like this:</p>
<pre><code>df=pd.DataFrame(columns=['customerId','full name','emails'],
data=[['208863338', 'Brit Spear', 'star.shine@cw.com'],['086423367', 'Justin Bob', 'bob.love@gem.com,ruby.blue@yahoo.com'],['902626998', 'White Ice', 'iceblue@starr.com,ice@msn.com'], ['1000826799', 'Bear Lou', 'lou.bear@visa.com'],
['1609813339', 'Ariel Do', 'ariel.d@fire.com, ariel@yahoo.com']])
print(df)
customerId full name emails
0 208863338 Brit Spear star.shine@cw.com
1 086423367 Justin Bob bob.love@gem.com,ruby.blue@yahoo.com
2 902626998 White Ice iceblue@starr.com,ice@msn.com
3 1000826799 Bear Lou lou.bear@visa.com
4 1609813339 Ariel Do ariel.d@fire.com, ariel@yahoo.com
</code></pre>
<p>the above code works but when I try to call the same information from two files instead using code like this:</p>
<pre><code>blacklisted = df1 = pd.read_excel(r'C:/Users/Administrator/Documents/sfiq/blacklisted.xlsx')
df = pd.read_excel(r'C:/Users/Administrator/Documents/customers.xlsx')
</code></pre>
<p>with the exact same information as the two dataframes I created above, it won't work; I get an attribute error:</p>
<pre><code>df1['emails'] = df1.emails.apply(lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
</code></pre>
<p>the error returned is:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-439d1f152f33> in <module>()
----> 1 df1['emails'] = df1.emails.apply(lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
C:\Program Files\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
2218 else:
2219 values = self.asobject
-> 2220 mapped = lib.map_infer(values, f, convert=convert_dtype)
2221
2222 if len(mapped) and isinstance(mapped[0], Series):
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:62658)()
<ipython-input-22-439d1f152f33> in <lambda>(x)
----> 1 df1['emails'] = df1.emails.apply(lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
AttributeError: 'float' object has no attribute 'split'
</code></pre>
|
<p>Suppose you have:</p>
<p>In <code>blacklisted.xlsx</code>:</p>
<p><a href="https://i.stack.imgur.com/pLmZL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pLmZL.jpg" alt="enter image description here"></a></p>
<p>In <code>customers.xlsx</code>:</p>
<p><a href="https://i.stack.imgur.com/RXOxi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RXOxi.jpg" alt="enter image description here"></a></p>
<p>Use <code>astype</code> before apply function like this:</p>
<pre><code>blacklisted = pd.read_excel(r'blacklisted.xlsx')
df = pd.read_excel(r'customers.xlsx')
df['emails'] = df.emails.astype(str).apply(lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
df
</code></pre>
<p><code>df</code> will be:</p>
<pre><code> customerId full name emails
0 208863338 Brit Spear star.shine@cw.com
1 86423367 Justin Bob ruby.blue@yahoo.com,bob.love@gem.com
2 902626998 White Ice ice@msn.com,iceblue@starr.com
3 1000826799 Bear Lou lou.bear@visa.com
4 1609813339 Ariel Do ariel@yahoo.com,ariel.d@fire.com
</code></pre>
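The same fix can be exercised without the Excel files, using in-memory frames (the toy values below are assumptions): `astype(str)` turns the float NaN that `read_excel` produces for empty cells into the string `'nan'`, so `.split` no longer fails:

```python
import pandas as pd

blacklisted = pd.DataFrame({'email': ['earl.bob@jpmorgan.com']})
df = pd.DataFrame({'emails': ['bob.love@gem.com,earl.bob@jpmorgan.com',
                              float('nan')]})  # NaN stands in for an empty cell

cleaned = df.emails.astype(str).apply(
    lambda x: ','.join(set(map(str.strip, x.split(','))) - set(blacklisted.email)))
print(cleaned.tolist())  # → ['bob.love@gem.com', 'nan']
```

In real cleanup you would likely `dropna` first rather than keep the `'nan'` strings.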
|
python|csv|pandas|ipython
| 1
|
9,254
| 61,051,999
|
Dataframe value comparison
|
<p>I have data-frame like this</p>
<p><a href="https://i.stack.imgur.com/PTwOy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PTwOy.png" alt="enter image description here"></a></p>
<p>I want to compare a with c and b with d.
When there is a nan or empty value, it will be considered as 0.
<a href="https://i.stack.imgur.com/0NSjj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0NSjj.png" alt="enter image description here"></a></p>
<p>I tried to use list comprehension but receive <code>The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<pre><code>df['bVsd']=["True" if df['b']==df['d'] else "False"]
</code></pre>
|
<p><strong>ADDED ANSWER FOR YOUR 2ND QUESTION</strong></p>
<p>To achieve what you want to do, simply compare the columns directly:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'a':[1,3,5,7,9],
'b':[0,0,0,0,0],
'c':[1,3,5,7,9],
'd':[0,np.nan,np.nan,0,np.nan]})
# Fill the nan and empty cells with 0
df = df.fillna(0)
# To do the comparison you desire
df['aVsc'] = (df['a'] == df['c'])
df['bVsd'] = (df['b'] == df['d'])
</code></pre>
<p>The reason why you are getting the error is because <code>df['b'] == df['d']</code> returns you a series:</p>
<pre><code> 0
0 True
1 True
2 True
3 True
4 True
</code></pre>
<p>and thus it is ambiguous to evaluate the boolean value of a series unless you specify <code>any</code> or <code>all</code>, neither of which would do what you want here either.</p>
<p>And lastly, on a separate note, that was not the correct way to write a list comprehension. It needs an iterable to loop over, something like this: <code>[True if i == 'something' else False for i in iterator]</code>.</p>
<p><strong>2nd Question</strong></p>
<p>If you want <code>df['aVsc']</code> to be 0 when <code>df['a'] == df['c']</code>, and <code>df['aVsc'] == df['a']</code> otherwise, you can use <code>np.where</code>:</p>
<pre><code>df['aVsc'] = np.where(df['a'] == df['c'], 0, df['a'])
</code></pre>
<p>in which the <code>np.where</code> function means check if condition <code>df['a'] == df['c']</code> is <code>True</code>, if it is, assign the value of <code>0</code> else, assign the value of <code>df['a']</code>.</p>
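A compact check of that `np.where` pattern on a toy frame (the values are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 3, 5], 'c': [1, 0, 5]})
# 0 where a == c, otherwise keep a
df['aVsc'] = np.where(df['a'] == df['c'], 0, df['a'])
print(df['aVsc'].tolist())  # → [0, 3, 0]
```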
|
python|pandas|dataframe
| 3
|
9,255
| 60,831,188
|
Attribute 'getvalue' Error in Python (Pandas)
|
<p>I try to read a .csv file using Pandas. When my program starts to read the file, it raises an attribute-not-found error.
Output:</p>
<p><a href="https://i.stack.imgur.com/KDFma.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDFma.png" alt="**This is my error**"></a></p>
<p>here is my code.</p>
<pre><code>in line 23: __location = r'/Users/A/Documents/Co/1/Information/CSV/image_tags.csv'
in line 25: __image_tags= DataFrame()
...
line 35: self.__image_tags=pd.read_csv(self.__location)
...
line 73: name_row=self.__image_tags.columns.get_values().tolist()
</code></pre>
<p>Does anyone know how to solve?</p>
|
<p>It is most likely that you are asking pandas to read an empty or non-existent file. I face that problem in OpenCV too when I type in the wrong file name. Check your file name.</p>
<p>The logic works like this:
<code>variable = some non-existent file</code>, so <code>variable</code> is a <code>NoneType</code> object, i.e. nothing.
Then you do something to the variable,
e.g. <code>variable += 1</code>, i.e. nothing = nothing + 1.
Output: <code>Error</code></p>
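That failure mode can be reproduced in a few lines, with `None` standing in for the value a failed read leaves behind:

```python
variable = None  # what a failed lookup/read often leaves you with
err = None
try:
    variable += 1  # "nothing = nothing + 1"
except TypeError as exc:
    err = type(exc).__name__
print(err)  # → TypeError
```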
|
python|python-3.x|pandas|csv|read.csv
| 0
|
9,256
| 71,641,351
|
Re-calibrate an existing regression model
|
<p>I have a pandas dataframe shown below (snapshot), and I am trying to re-calibrate a regression equation:
<code>Loss = (EXP(-1.01 + (-0.08 x mob)) x Price)</code>.
I want to fit the regression on the newly available data, but I am not sure how to input the new coefficients into the existing equation. Loss is my target variable; mob and Price are my independent variables.</p>
<p>For e.g.: <code>New Loss equation = (EXP(b1 + (b2 x mob)) x Price)</code>. Please let me know how to achieve this. Thanks in advance!</p>
<pre><code>Months RefID Price Loss mob
1/11/2019 100 4.00 3.43 2.00
1/11/2019 101 10.00 8.58 3.00
1/11/2019 102 20.00 17.16 1.00
1/12/2019 100 44.00 37.74 3.00
1/12/2019 101 66.00 56.61 4.00
1/12/2019 102 7.00 6.00 2.00
1/12/2019 103 9.00 7.72 1.00
...
</code></pre>
<p>I think the person who built the model used an interaction term with linear regression, but I am not sure. I am doing the below, but I'm not sure if this is the correct approach:</p>
<pre><code>X = stage2[["Price" , "Interact_Price_mob"]] # Interact_Price_mob = Price*mob
y= stage2[["Loss"]]
reg = LinearRegression(fit_intercept=False).fit(X,y)
print(reg.coef_)
</code></pre>
|
<p>You really have two choices about how to do the regression. One is to convert the exponential to a linear equation and solve with linear regression. (Note that your equation can be rewritten as <code>log(loss/price) = b1 + b2 * mob</code>, if you are unsure how, review your logarithm rules and prove it to yourself.) The other is to do a nonlinear least squares fit to the exponential. For both, I use <code>scipy</code> below. For the exponential: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html#scipy.optimize.curve_fit" rel="nofollow noreferrer"><code>scipy.optimize.curve_fit</code></a>. For the linear: <a href="https://scipy-cookbook.readthedocs.io/items/LinearRegression.html" rel="nofollow noreferrer"><code>scipy.polyfit</code> with a first-order polynomial</a>. Note that there are other tools to do a linear regression (including from first principles) and this is likely not the most efficient, though I haven't checked.</p>
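A numeric spot check of that log-linear equivalence, using the coefficients from the original equation (the mob and price values are arbitrary):

```python
import numpy as np

b1, b2 = -1.01, -0.08
mob, price = 2.0, 4.0

loss = np.exp(b1 + b2 * mob) * price
# dividing out price and taking the log recovers the linear form b1 + b2*mob
lhs = np.log(loss / price)
rhs = b1 + b2 * mob
print(np.isclose(lhs, rhs))  # → True
```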
<p>The values are just the ones in your example above, so it's a very small data set but good enough for demonstration.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
from scipy import polyfit
# mob, price, and loss defined from values given above
# nonlinear least squares fit
def f(x, b1, b2):
    mob = x[0, :]
    price = x[1, :]
    return np.exp(b1 + b2 * mob) * price
x = np.array([mob, price])
b, cov = curve_fit(f, xdata = x, ydata = loss)
# linear regression (polynomial fit of order 1)
(b2, b1) = polyfit(mob, np.log(loss/price), 1)
# Comparison
print('Regression\tb1\t\t\tb2')
print(f'Linear\t\t{b1}\t{b2}')
print(f'Exponential\t{b[0]}\t{b[1]}')
</code></pre>
<pre><code>Regression b1 b2
Linear -0.15350910350019625 1.7615305296685054e-06
Exponential -0.15324071098380723 -6.040694002336243e-05
</code></pre>
<p><a href="https://i.stack.imgur.com/63FI1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63FI1.png" alt="fit comparison" /></a></p>
<p>These are clearly similar but not identical. This is because both methods use a least squares fit but on different metrics.</p>
<ul>
<li>In the linear regression, a difference between a data point of 1 and a prediction of 2 has the same weight as the difference between a data point of 10 and a prediction of 20 because we have taken a logarithm of your actual equation and <code>log(2)-log(1) = log(2/1) = log(20/10) = log(20)-log(10)</code>.</li>
<li>In the exponential regression, a difference between a data point of 1 and a prediction of 2 has the same weight as the difference between a data point of 10 and a prediction of 11.</li>
</ul>
<p>Depending on your application and the range in values of your dataset, this difference could be important and one choice may be better than the other. In your case, I would want to try to reproduce the results that were used previously and see if either regression model does so, then decide if you simply want to update parameters or also update the regression model used.</p>
|
python|pandas|regression
| 1
|
9,257
| 42,280,506
|
How to replace items with their indices in a pandas series
|
<h3>Problem Statement</h3>
<p>I have file containing a graph. The file is a csv with two columns, source name and target name. I'd like to generate a file/dataframe with source and target being numerical IDs instead of strings.</p>
<p>I know this can be done in python, with something like</p>
<pre><code>node_names = list(set(source_node_names) | set(target_node_names))
names_to_ids = {name: i for i, name in enumerate(node_names)}  # invert the enumerate mapping
# followed by some sort of replace operation using this dictionary
</code></pre>
<p>But I'm trying to learn pandas, and felt this would be a good opportunity to do so. </p>
<h3>Some questions:</h3>
<ul>
<li><p>Does it make sense to use a <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html" rel="nofollow noreferrer">Categorical</a> for this problem? I didn't think so, since each of my nodes is not a category, but my googling was steering me in that direction. </p></li>
<li><p>Right now I have my names in a series. <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.factorize.html" rel="nofollow noreferrer">Series.factorize</a> seemed promising, but I'm not entirely clear on what the return is, largely because I'm not clear on what a pandas Index is.</p></li>
<li><p><em>Meta-question #1</em>: Is there a good explanation of pandas Index somewhere? I can't find a "pandas Index tutorial" probably in part because pandas has a tutorial called <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer">Indexing and Selecting Data</a> which I think answers most people's questions, but assumes some knowledge I don't have:</p>
<blockquote>
<p>The pandas Index class and its subclasses can be viewed as implementing an <em>ordered multiset</em>.</p>
</blockquote>
<p>=0</p></li>
<li><p><em>Meta-question #2</em>: Are there any excellent resources for learning pandas? So far I've been doing a lot of googling for docs and stackoverflowing, but I may want to learn pandas properly. Are there large tutorials or books people trust?</p></li>
</ul>
<h3>My best idea so far:</h3>
<p>I think I could take my series of node names, somehow explicitly make it a dataframe such that one column is the indices and another is the name, then do two <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merges</a> with the graph dataframe, once on source and once on target, such that I have source IDs and target IDs, then just hold on to those two fields. It feels to me like there must be a better way than two merges.</p>
<h3>Some sample data</h3>
<p>As requested by @Cleb. Input:</p>
<pre><code># I have And I want:
RNF14 VDR 0 1
RNF14 SMAD 0 2
RNF14 UBE2D4 0 3
RNF14 EIF2B5 0 4
RNF14 UBE2D2 0 5
RNF14 SMAD 0 6
RNF14 UBE2D1 0 7
RNF14 UBE2D3 0 8
RNF14 IST1 0 9
RNF14 EXOSC3 0 10
RNF14 EXOSC5 0 11
RNF14 SMURF1 0 12
RNF14 SMURF2 0 13
</code></pre>
<p>Obviously this is a trivial case. I have about a million edges in my graph, for maybe 100k nodes. </p>
<h3>Update #1:</h3>
<p>It seems like factorizing might be what I want, but I want to factorize two columns of a dataframe in the same index space, which seems non-obvious. </p>
<p>I've built an index from names to IDs, I just don't know how to replace my original dataframe with the IDs. This would be some kind of "merge" operation I'm not familiar with. </p>
|
<p>I would opt for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow noreferrer"><code>pd.factorize()</code></a> across columns.</p>
<pre><code>df.apply(lambda col: pd.factorize(col)[0]+1)
</code></pre>
<p>If you want the ids to share a single space across columns, you can stack first, factorize, then unstack.</p>
<pre><code>stacked = df.stack()
pd.DataFrame(stacked.factorize()[0], index=stacked.index).unstack()
</code></pre>
<p><strong>Demo</strong></p>
<pre><code>>>> df = pd.DataFrame(dict(const=['things']*12,
unqs=['foo']*4+['bar']*3+['baz']*5))
>>> df
const unqs
0 things foo
1 things foo
2 things foo
3 things foo
4 things bar
5 things bar
6 things bar
7 things baz
8 things baz
9 things baz
10 things baz
11 things baz
>>> stacked = df.stack()
>>> pd.DataFrame(stacked.factorize()[0], index=stacked.index).unstack()
0
const unqs
0 0 1
1 0 1
2 0 1
3 0 1
4 0 2
5 0 2
6 0 2
7 0 3
8 0 3
9 0 3
10 0 3
11 0 3
</code></pre>
|
pandas
| 1
|
9,258
| 43,306,931
|
pandas: retrieve values post group by sum
|
<p>Pandas data frame(<strong>"df"</strong>) looks like :</p>
<pre><code> name id time
1095 One 1 12:03:37.230812
1096 Two 2 10:56:29.314745
1097 Three 3 10:58:18.897624
1098 Three 3 09:45:38.755116
1099 Two 2 09:02:59.472508
1100 One 1 12:28:38.341024
</code></pre>
<p>On this, i did an operation which is</p>
<pre><code>df = df.groupby(by=['id'])[['time']].transform(sum).sort('time', ascending=False)
</code></pre>
<p>On the resulting <strong>df</strong> I want to iterate and get the response as name and total time. How can I achieve that from the last df (from the groupby/transform response)? So the result should look something like this:</p>
<pre><code>name time
One 24:03:37.230812
Two 19:56:29.314745
Three 19:58:18.897624
</code></pre>
|
<p>I think you need to convert column <code>time</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a> first.</p>
<p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> by column <code>name</code> or <code>id</code> and aggregate <code>sum</code>: </p>
<pre><code>df.time = pd.to_timedelta(df.time)
df = df.groupby('name', as_index=False)['time'].sum().sort_values('time', ascending=False)
print (df)
name time
0 One 1 days 00:32:15.571836
1 Three 0 days 20:43:57.652740
2 Two 0 days 19:59:28.787253
</code></pre>
<hr>
<pre><code>df = df.groupby('id', as_index=False)['time'].sum().sort_values('time', ascending=False)
print (df)
id time
0 1 1 days 00:32:15.571836
2 3 0 days 20:43:57.652740
1 2 0 days 19:59:28.787253
</code></pre>
<p>Lastly, it is possible to convert the timedeltas to <code>seconds</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer"><code>total_seconds</code></a>; other conversions are described <a href="http://pandas.pydata.org/pandas-docs/stable/timedeltas.html#frequency-conversion" rel="nofollow noreferrer">here</a>:</p>
<pre><code>df.time = df.time.dt.total_seconds()
print (df)
id time
0 1 88335.571836
2 3 74637.652740
1 2 71968.787253
</code></pre>
|
python|pandas
| 1
|
9,259
| 43,044,303
|
Using index arrays vs iterating in numpy
|
<p>I have many numpy arrays of about 50K elements. I want to compare them, using only certain positions of them (10% of them in the average), and performance matters. This looks like a good use case for index arrays. I can write this code:</p>
<pre class="lang-py prettyprint-override"><code>def equal_1(array1, array2, index):
    return (array1[index] == array2[index]).all()
</code></pre>
<p>That is fast in practice, but it iterates for all indexes once per array.</p>
<p>I can use this other approach too:</p>
<pre class="lang-py prettyprint-override"><code>def equal_2(array1, array2, index):
    for i in index:
        if array1[i] != array2[i]:
            return False
    return True
</code></pre>
<p>This only iterates arrays until a difference is found.</p>
<p>I benchmarked both approaches for my use case.</p>
<p>In arrays that are equal, or where differences are at the end, the index array function is about 30 times faster. When there are differences at the beginning of the array, the second function is about 30 times faster.</p>
<p>Is there a way to get the best of both worlds (numpy speed + second function laziness)?</p>
|
<p>For your purposes, you may want to use the just-in-time compiler <code>@jit</code> from <code>numba</code>. </p>
<pre><code>import numpy as np
from numba import jit

a1 = np.arange(50000)
a2 = np.arange(50000)

# set some values so the comparison evaluates as False
a2[40000:45000] = 1

indices = np.random.choice(np.arange(50000), replace=False, size=5000)
indices.sort()

def equal_1(array1, array2, index):
    return (array1[index] == array2[index]).all()

def equal_2(array1, array2, index):
    for i in index:
        if array1[i] != array2[i]:
            return False
    return True

@jit  # just add this decorator to your function
def equal_3(array1, array2, index):
    for i in index:
        if array1[i] != array2[i]:
            return False
    return True
</code></pre>
<p>testing:</p>
<pre><code>In [44]: %%timeit -n10 -r1
...: equal_1(a1,a2,indices)
...:
10 loops, best of 1: 72.6 µs per loop
In [45]: %%timeit -n10 -r1
...: equal_2(a1,a2,indices)
...:
10 loops, best of 1: 657 µs per loop
In [46]: %%timeit -n10 -r1
...: equal_3(a1,a2,indices)
...:
10 loops, best of 1: 7.65 µs per loop
</code></pre>
<p>Just by adding <code>@jit</code> you can get a ~100x speed up in your python operation.</p>
|
python|numpy
| 1
|
9,260
| 72,245,113
|
How do I turn dataframe#1 into something i can properly graph, like dataframe#2?
|
<p>I have the following dataframe, based on data i pulled from my database:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">event_type</th>
<th style="text-align: right;">count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2022-05-10</td>
<td style="text-align: center;">page_view</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-11</td>
<td style="text-align: center;">cart_add</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-11</td>
<td style="text-align: center;">page_view</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-12</td>
<td style="text-align: center;">cart_add</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-12</td>
<td style="text-align: center;">cart_remove</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-12</td>
<td style="text-align: center;">page_view</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-13</td>
<td style="text-align: center;">cart_remove</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-13</td>
<td style="text-align: center;">page_view</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-14</td>
<td style="text-align: center;">cart_add</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-14</td>
<td style="text-align: center;">page_view</td>
<td style="text-align: right;">5</td>
</tr>
</tbody>
</table>
</div>
<p>Basically I am tracking 3 things on my website:</p>
<ol>
<li>when a user views a product page</li>
<li>when a user adds a product to their cart</li>
<li>when a user removes a product from their cart</li>
</ol>
<p>I'm tracking how often each of these events happens in a day and I want to then graph them all on a single line chart. In order to do that, I think I need to make it look something more like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">page_views</th>
<th style="text-align: center;">cart_adds</th>
<th style="text-align: right;">cart_removes</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2022-05-10</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-11</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-12</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-13</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2022-05-14</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I am very new to pandas and not even sure if this library is what I should be using. So forgive my cluelessness, but how do I make dataframe1 look like dataframe2?</p>
|
<pre><code>df.pivot(columns='event_type', index='date').fillna(0)
</code></pre>
<p>Output:</p>
<pre><code> count
event_type cart_add cart_remove page_view
date
2022-05-10 0.0 0.0 3.0
2022-05-11 2.0 0.0 2.0
2022-05-12 1.0 1.0 2.0
2022-05-13 0.0 2.0 1.0
2022-05-14 2.0 0.0 5.0
</code></pre>
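<p>One note on the result shape: the pivot above leaves a <code>count</code> level in the column index. Passing <code>values='count'</code> and resetting the index gives flat columns matching dataframe #2. A self-contained sketch (a few rows of the sample data reproduced inline):</p>

```python
import pandas as pd

# Sample data reproducing a few rows of dataframe #1 above
df = pd.DataFrame({
    'date': ['2022-05-10', '2022-05-11', '2022-05-11'],
    'event_type': ['page_view', 'cart_add', 'page_view'],
    'count': [3, 2, 2],
})

# values='count' avoids the extra 'count' level in the columns
wide = df.pivot(columns='event_type', index='date', values='count').fillna(0)
# Flatten back to ordinary columns so it matches dataframe #2
wide = wide.rename_axis(columns=None).reset_index()
print(wide)
```

From here, <code>wide.plot(x='date')</code> draws all event types on a single line chart.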
|
pandas|dataframe
| 0
|
9,261
| 72,186,680
|
Matplotlib and Pandas Plotting amount of numbers in certain range
|
<p>I have pandas Dataframe that looks like this:</p>
<p><a href="https://i.stack.imgur.com/sCBp0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCBp0.png" alt="DataFrame" /></a></p>
<p>I have been asked to create this kind of plot for every year [1...10] with the Score range of [1...10].
<strong>This means that for every year, the plot will present:</strong></p>
<pre><code>how many values between [0-1] there are in year 1
how many values between [2-3] there are in year 1
how many values between [4-5] there are in year 1
.
.
.
.
.
how many values between [6-7] there are in year 10
how many values between [8-9] there are in year 10
how many values between [10] there are in year 10
</code></pre>
<p><a href="https://i.stack.imgur.com/AYPw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYPw9.png" alt="Plot Example" /></a></p>
<p>Need some help, Thank you!</p>
|
<p>The following code works perfectly:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns

def visualize_yearly_score_distribution(ds, year):
    sns.set_theme(style="ticks")
    first_range = 0
    second_range = 0
    third_range = 0
    fourth_range = 0
    fifth_range = 0
    six_range = 0
    seven_range = 0
    eight_range = 0
    nine_range = 0
    last_range = 0
    score_list = []
    for index, row in ds.iterrows():
        if row['Publish Date'] == year:
            # Inclusive lower bounds so integer scores are not skipped
            if 0 <= row['Score'] < 1:
                first_range += 1
            elif 1 <= row['Score'] < 2:
                second_range += 1
            elif 2 <= row['Score'] < 3:
                third_range += 1
            elif 3 <= row['Score'] < 4:
                fourth_range += 1
            elif 4 <= row['Score'] < 5:
                fifth_range += 1
            elif 5 <= row['Score'] < 6:
                six_range += 1
            elif 6 <= row['Score'] < 7:
                seven_range += 1
            elif 7 <= row['Score'] < 8:
                eight_range += 1
            elif 8 <= row['Score'] < 9:
                nine_range += 1
            elif 9 <= row['Score'] <= 10:
                last_range += 1
    score_list.append(first_range)
    score_list.append(second_range)
    score_list.append(third_range)
    score_list.append(fourth_range)
    score_list.append(fifth_range)
    score_list.append(six_range)
    score_list.append(seven_range)
    score_list.append(eight_range)
    score_list.append(nine_range)
    score_list.append(last_range)
    range_list = ['0-1', '1-2', '2-3', '3-4', '4-5', '5-6', '6-7', '7-8', '8-9', '9-10']
    plt.pie([x * 100 for x in score_list], labels=range_list, autopct='%0.1f', explode=None)
    plt.title(f"Yearly Score Distribution for {str(year)}")
    plt.tight_layout()
    plt.legend()
    plt.show()
</code></pre>
<p>Thank you all for the kind comments :)
This case is closed.</p>
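<p>For reference, the per-bin counting can be done much more compactly with <code>pd.cut</code>. A sketch, with the bin edges and the year filter (something like <code>df.loc[df['Publish Date'] == year, 'Score']</code>) assumed from the question:</p>

```python
import pandas as pd

# Stand-in for df.loc[df['Publish Date'] == year, 'Score']
scores = pd.Series([0.5, 1.2, 1.8, 4.4, 7.3, 9.9])

labels = [f'{i}-{i + 1}' for i in range(10)]
# right=False gives half-open bins [0,1), [1,2), ..., [9,10)
counts = (pd.cut(scores, bins=range(0, 11), labels=labels, right=False)
          .value_counts()
          .sort_index())
print(counts)
```

The resulting <code>counts</code> series can be passed directly to <code>plt.pie</code>.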
|
python|pandas|matplotlib|plot
| 0
|
9,262
| 50,663,135
|
Anchor boxes and offsets in SSD object detection
|
<p>How do you calculate anchor box offsets for object detection in SSD? As far as I understand, anchor boxes are the boxes in the 8x8 feature map, the 4x4 feature map, or any other feature map in the output layer.</p>
<p>So what are the offsets?</p>
<p>Is it the distance between the centre of the bounding box and the centre of a particular box in say a 4x4 feature map?</p>
<p>If I am using a 4x4 feature map as my output, then my output should be of the dimension:</p>
<pre><code>(4x4, n_classes + 4)
</code></pre>
<p>where 4 is for my anchor box co-ordinates.
This 4 co-ordinates can be something like:</p>
<pre><code>(xmin, xmax, ymin, ymax)
</code></pre>
<p>This will correspond to the top-left and bottom-right corners of the bounding box.
So why do we need offsets and if so how do we calculate them?</p>
<p>Any help would be really appreciated!</p>
|
<p>We need offsets because they are what the network actually predicts relative to the default anchor boxes. In SSD, every feature map cell has a predefined number of anchor boxes at different scales and aspect ratios; in the paper this number is 6.</p>
<p>Now, because this is a detection problem, we also have ground-truth bounding boxes. Roughly, we compare the IoU of each anchor box with the GT box and, if it is greater than a threshold (say 0.5), we predict the box offsets relative to that anchor box.</p>
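<p>To make the offset idea concrete, here is a sketch of the center-size encoding used by SSD-style detectors (boxes assumed to be in <code>(cx, cy, w, h)</code> form; many implementations additionally divide these values by "variance" constants, which are omitted here):</p>

```python
import math

def encode_offsets(gt, anchor):
    """Center-size offsets of a ground-truth box w.r.t. an anchor box.

    Both boxes are (cx, cy, w, h). Returns (tx, ty, tw, th), the values
    the network is trained to regress for a matched anchor.
    """
    gcx, gcy, gw, gh = gt
    acx, acy, aw, ah = anchor
    tx = (gcx - acx) / aw   # center shift, normalized by anchor size
    ty = (gcy - acy) / ah
    tw = math.log(gw / aw)  # log-scale size ratio
    th = math.log(gh / ah)
    return tx, ty, tw, th

# A GT box that coincides with its anchor has all-zero offsets
print(encode_offsets((10, 10, 4, 4), (10, 10, 4, 4)))
```

Decoding at inference time inverts these four equations to recover a box from a predicted offset and its anchor.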
|
tensorflow|keras|deep-learning|object-detection|pytorch
| 1
|
9,263
| 50,498,525
|
python Tensorflow ImportError
|
<p>Please help me with this error:</p>
<blockquote>
<p>import tensorflow as tf Traceback (most recent call last): File
"/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py",
line 41, in
from tensorflow.python.pywrap_tensorflow_internal import * File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File
"/home/user/anaconda3/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file) File "/home/user/anaconda3/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec) ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last): File "", line 1, in
File
"/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/__init__.py",
line 24, in
from tensorflow.python import * File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/__init__.py",
line 49, in
from tensorflow.python import pywrap_tensorflow File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py",
line 52, in
raise ImportError(msg) ImportError: Traceback (most recent call last): File
"/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py",
line 41, in
from tensorflow.python.pywrap_tensorflow_internal import * File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File
"/home/user/anaconda3/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file) File "/home/user/anaconda3/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec) ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory</p>
<p>Failed to load the native TensorFlow runtime.</p>
<p>See
<a href="https://www.tensorflow.org/install/install_sources#common_installation_problems" rel="noreferrer">https://www.tensorflow.org/install/install_sources#common_installation_problems</a></p>
<p>for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.</p>
</blockquote>
|
<p>You could try the following:</p>
<ol>
<li><p>Confirm the Python interpreter path: is it the Anaconda one or the system Python?</p></li>
<li><p>Export the library path (<code>export PYTHONPATH=xxxxx:$PYTHONPATH</code>) before executing the script in a terminal, so that libraries that could not be found are resolved.</p></li>
</ol>
<p>Following these two steps should resolve the trouble above.</p>
|
python|tensorflow
| 1
|
9,264
| 50,363,253
|
Insert array from csv using pymongo
|
<p>I have a csv file with array in string format as below:</p>
<pre><code>date,name,criteria
2018-05-16,John,"[{'age':35},{'birthyear':1983}]"
2018-05-16,Jane,"[{'age':36},{'birthyear':1982}]"
</code></pre>
<p>I am using Python with pandas and numpy for processing this</p>
<p>I need to import this file into MongoDB collection in following format : </p>
<pre><code>{
"date":'2018-05-16',
"name":"John",
"criteria" : [
{"age":35},
{"birthyear" : 1983}
]
},
{
"date":'2018-05-16',
"name":"Jane",
"criteria" : [
{"age":36},
{"birthyear" : 1982}
]
}
</code></pre>
<p>I tried using a JSON formatter, but after insertion into MongoDB the array is the same string as in the csv file.</p>
<p>I have tried the following approaches:</p>
<pre><code>#Approach 1
import pymongo
from pymongo import MongoClient
import pandas as pd
import numpy as np
import json
from datetime import datetime
df = pd.read_csv("file.csv")
records = json.loads(df.T.to_json()).values()
db.tmp_collection.insert_many(data.to_dict('record'))
#Approach 2
import pymongo
from pymongo import MongoClient
import pandas as pd
import numpy as np
import json
from datetime import datetime
df = pd.read_csv("file.csv")
data_json = json.loads(df.to_json(orient='records'))
db.tmp_collection.insert_many(data_json)
</code></pre>
<p>Both give following output in Mongodb collection : </p>
<pre><code>{
"date" : "2018-05-16",
"name" : "John",
"criteria" : "[{age:35},{birthyear:1983}]"
}
</code></pre>
<p>Can you suggest some better way?
P.S. I am new to Python.</p>
<p>Thanks in advance.</p>
|
<p>As noted, the main issue is that the JSON-like data within the string for <code>criteria</code> lacks (double) quotes around the keys. With quotes properly in place you can parse the string into a list structured how you want.</p>
<p>You could actually run <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow noreferrer"><code>map</code></a> and <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow noreferrer"><code>re.sub()</code></a> over the existing list and replace the <code>criteria</code> with the parsed version.</p>
<p>Given the source data in the form you stated:</p>
<pre><code>date,name,criteria
2018-05-16,John,"[{age:35},{birthyear:1983}]"
2018-05-16,Jane,"[{age:36},{birthyear:1982}]"
</code></pre>
<p>Then the important parts are:</p>
<pre><code>df = pd.read_csv("file.csv")
records = json.loads(df.to_json(orient='records'))
pattern = r"({|,)(?:\s*)(?:')?([A-Za-z_$\.][A-Za-z0-9_ \-\.$]*)(?:')?(?:\s*):"
records = map(lambda x:
    dict(x.items() +
        {
            'criteria': json.loads(
                re.sub(pattern, "\\1\"\\2\":", x['criteria'])
            )
        }.items()
    ),
    records
)
</code></pre>
<p>Which is basically going through each item in the list which came out earlier and doing a substitution on the "string" to quote the keys in the objects. Then of course parses the now valid JSON string into a list of dictionary objects.</p>
<p>That would make data like:</p>
<pre><code>[{u'criteria': [{u'age': 35}, {u'birthyear': 1983}],
u'date': u'2018-05-16',
u'name': u'John'},
{u'criteria': [{u'age': 36}, {u'birthyear': 1982}],
u'date': u'2018-05-16',
u'name': u'Jane'}]
</code></pre>
<p>Which you can then pass to the <a href="https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert_many" rel="nofollow noreferrer"><code>insert_many()</code></a> to create the documents in the collection keeping that format that you want.</p>
<pre><code>db.tmp_collection.insert_many(records)
</code></pre>
<p>Attribution to <a href="https://stackoverflow.com/a/25146786/2313887">regular expression to add double quotes around keys in javascript</a> for the regular expression pattern used here.</p>
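<p>As a side note, the quoting step can be checked in isolation. The snippet below is Python 3, whereas the <code>dict(x.items() + ...)</code> / <code>iteritems()</code> idiom used above is Python 2:</p>

```python
import json
import re

pattern = r"({|,)(?:\s*)(?:')?([A-Za-z_$\.][A-Za-z0-9_ \-\.$]*)(?:')?(?:\s*):"
raw = "[{age:35},{birthyear:1983}]"

# Quote the bare keys so the string becomes valid JSON
quoted = re.sub(pattern, r'\1"\2":', raw)
print(quoted)  # [{"age":35},{"birthyear":1983}]
print(json.loads(quoted))
```

The same pattern also tolerates single-quoted keys, as in the question's <code>"[{'age':35},...]"</code> form.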
<p>Personally I would take that a little further and at least parse to <code>datetime</code>:</p>
<pre><code>records = map(lambda x:
    dict(x.items() +
        {
            'date': datetime.strptime(x['date'], '%Y-%m-%d'),
            'criteria': json.loads(
                re.sub(pattern, "\\1\"\\2\":", x['criteria'])
            )
        }.items()
    ),
    records
)
</code></pre>
<p>MongoDB will use a BSON Date when inserted into the collection, and that's a lot more useful than a string.</p>
<p>And again, "personally" I would not be using "named keys" inside a list for MongoDB. Instead I would rather "remap" to something more standard like <code>"k"</code> and <code>"v"</code> as in:</p>
<pre><code>records = map(lambda x:
    dict(x.items() +
        {
            'date': datetime.strptime(x['date'], '%Y-%m-%d'),
            'criteria':
                [i for s in map(lambda y: [{'k': k, 'v': v} for k, v in y.iteritems()], json.loads(
                    re.sub(pattern, "\\1\"\\2\":", x['criteria'])
                )) for i in s]
        }.items()
    ),
    records
)
</code></pre>
<p>Which gives a structure like:</p>
<pre><code>[{u'criteria': [{'k': u'age', 'v': 35}, {'k': u'birthyear', 'v': 1983}],
u'date': datetime.datetime(2018, 5, 16, 0, 0),
u'name': u'John'},
{u'criteria': [{'k': u'age', 'v': 36}, {'k': u'birthyear', 'v': 1982}],
u'date': datetime.datetime(2018, 5, 16, 0, 0),
u'name': u'Jane'}]
</code></pre>
<p>With the main reason being that "querying" with MongoDB is going to be a lot more useful if the path is more consistent like that.</p>
|
python-3.x|mongodb|pandas|numpy
| 0
|
9,265
| 50,563,215
|
Zero all values except max in Tensorflow
|
<p>I have an array <code>[0.3, 0.5, 0.79, 0.2, 0.11].</code></p>
<p>I want to convert all values to zero except the max value. So the resulting array would be:
<code>[0, 0, 0.79, 0, 0]</code></p>
<p>What would be the best way to do this in a Tensorflow graph?</p>
|
<p>If you want to keep all occurences of the maximum, you could use</p>
<pre><code>cond = tf.equal(a, tf.reduce_max(a))
a_max = tf.where(cond, a, tf.zeros_like(a))
</code></pre>
<p>If you want to keep only one occurrence of the maximum, you could use</p>
<pre><code>argmax = tf.argmax(a)
a_max = tf.scatter_nd([[argmax]], [a[argmax]], tf.to_int64(tf.shape(a)))
</code></pre>
<p>However according to <a href="https://www.tensorflow.org/api_docs/python/tf/argmax" rel="nofollow noreferrer">the doc of <code>tf.argmax</code></a>,</p>
<blockquote>
<p>Note that in case of ties the identity of the return value is not guaranteed</p>
</blockquote>
<p>As I understand it, the maximum that is kept may not be the first or the last -- and may not even be the same if run twice on the same array.</p>
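<p>For comparison, the same keep-all-maxima masking in plain NumPy, with <code>np.where</code> playing the role of <code>tf.where</code>:</p>

```python
import numpy as np

a = np.array([0.3, 0.5, 0.79, 0.2, 0.11])

# Keep every occurrence of the maximum, zero out the rest
a_max = np.where(a == a.max(), a, 0.0)
print(a_max)  # [0.   0.   0.79 0.   0.  ]
```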
|
python|tensorflow
| 4
|
9,266
| 45,492,678
|
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 3: invalid continuation byte
|
<p>I'm trying to load a csv file using <code>pd.read_csv</code> but I get the following unicode error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 3: invalid continuation byte
</code></pre>
|
<p>Unfortunately, CSV files have no built-in method of signalling character encoding.</p>
<p><code>read_csv</code> defaults to guessing that the bytes in the CSV file represent text encoded in the UTF-8 encoding. This results in <code>UnicodeDecodeError</code> if the file is using some other encoding that results in bytes that don't happen to be a valid UTF-8 sequence. (If they by luck did also happen to be valid UTF-8, you wouldn't get the error, but you'd still get wrong input for non-ASCII characters, which would be worse really.)</p>
<p>It's up to you to specify what encoding is in play, which requires some knowledge (or guessing) of where it came from. For example if it came from MS Excel on a western install of Windows, it would probably be Windows code page 1252 and you could read it with:</p>
<pre><code>pd.read_csv('../filename.csv', encoding='cp1252')
</code></pre>
|
pandas|csv|unicode|load|python-unicode
| 18
|
9,267
| 62,592,434
|
Is it possible to replace specific piece of strings for a range of columns in Python?
|
<p>Here are the first <a href="https://i.stack.imgur.com/jy0f0.png" rel="nofollow noreferrer">five rows</a> of a dataset:
<code>df.head()</code></p>
<p>All numbers here are objects and I want to convert them to numeric. However, I want to change all columns at once with a minimum of code. Here's what I did:</p>
<pre><code>df.loc[:,'median household income':'number of households'].str.replace(' ','')
df.loc[:,'median household income':'number of households'].str.replace(',','')
df.loc[:,'median household income':'number of households']=pd.to_numeric(df.loc[:,'median household income':'number of households'])
</code></pre>
<p>As a result, it shows the following after running the cell:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'str'
</code></pre>
|
<p>Assuming we have the following data set:</p>
<pre><code>df = pd.DataFrame({
    'median household income': dict(a='1,000', b='2,000'),
    'number of households': dict(a='3,500', b='1,200')
})
</code></pre>
<p>You can write a function that will be applied to all elements of the dataframe using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html" rel="nofollow noreferrer">DataFrame.applymap</a> method:</p>
<pre><code>def replace_thousand_separators(item):
    return item.replace(',', '').replace(' ', '')
</code></pre>
<p>After that you can apply it together with the type transformation:</p>
<pre><code>df.applymap(replace_thousand_separators).astype(float)
</code></pre>
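<p>For completeness, <code>DataFrame.replace</code> with a regex does the same cleanup in a single vectorized pass (sample data reproduced from above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'median household income': {'a': '1,000', 'b': '2,000'},
    'number of households': {'a': '3,500', 'b': '1,200'},
})

# Strip thousands separators and spaces everywhere, then convert the dtype
cleaned = df.replace(regex={r'[,\s]': ''}).astype(float)
print(cleaned)
```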
|
python|pandas|jupyter-notebook|jupyter
| 0
|
9,268
| 62,588,946
|
how to remove image refrences stored in numpy array as string based on some condition
|
<p>I wanted to avoid using for loops so I switched to numpy array...</p>
<pre><code>#SOURCE is the path where images are actually stored
content = np.array(os.listdir(SOURCE))
# content contains array of element type str_
</code></pre>
<p>I want to apply a condition on <code>content</code> and get new arrays containing the removed labels and the remaining labels.</p>
<pre><code>
condition: os.path.getsize(os.path.join(SOURCE, content)) > 0
# When I say content it means all the values in the content array
</code></pre>
<p>How can I implement this using vectorized approach ...</p>
|
<p>You can use <code>filter</code> for this.</p>
<h3>Demonstrated below for both a <code>list</code> and a <code>numpy array</code></h3>
<pre><code>import os
import numpy as np
SOURCE = "test"
</code></pre>
<p>Get the list of files using <code>os.listdir</code></p>
<pre><code>## files is list object
files = os.listdir(SOURCE)
## files_arr is numpy array object
files_arr = np.array(os.listdir(SOURCE))
</code></pre>
<h2>Custom function to check the file size</h2>
<p>Make sure the entry is a file, not a folder, using <code>os.path.isfile()</code></p>
<pre><code>def check_file_size(file_name):
    file_abs_path = os.path.join(SOURCE, file_name)
    if os.path.isfile(file_abs_path) and os.path.getsize(file_abs_path) > 0:
        return True
    return False
</code></pre>
<h2>Filter the list and the array using the built-in <code>filter</code> function. This works on both data types.</h2>
<pre><code>list(filter(check_file_size, files))
list(filter(check_file_size, files_arr))
</code></pre>
<p>You will get the same result in both cases.</p>
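<p>A quick self-contained check of the approach (using a temporary directory, so <code>SOURCE</code> here is a stand-in for the real path):</p>

```python
import os
import tempfile

SOURCE = tempfile.mkdtemp()
# One empty file, one non-empty file, one subdirectory
open(os.path.join(SOURCE, 'empty.png'), 'w').close()
with open(os.path.join(SOURCE, 'ok.png'), 'w') as f:
    f.write('data')
os.mkdir(os.path.join(SOURCE, 'subdir'))

def check_file_size(file_name):
    path = os.path.join(SOURCE, file_name)
    return os.path.isfile(path) and os.path.getsize(path) > 0

kept = list(filter(check_file_size, os.listdir(SOURCE)))
print(kept)  # ['ok.png']
```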
|
python|arrays|numpy
| 0
|
9,269
| 62,478,528
|
Pandas - populate new column based on existing column values
|
<p>I have the following dataframe <code>df_shots</code>:</p>
<pre><code> TableIndex MatchID GameWeek Player ... ShotPosition ShotSide Close Position
ShotsDetailID ...
6 5 46605 1 Roberto Firmino ... very close range N/A close very close rangeN/A
8 7 46605 1 Roberto Firmino ... the box the centre not close the boxthe centre
10 9 46605 1 Roberto Firmino ... the box the left not close the boxthe left
17 16 46605 1 Roberto Firmino ... the box the centre close the boxthe centre
447 446 46623 2 Roberto Firmino ... the box the centre close the boxthe centre
... ... ... ... ... ... ... ... ... ...
6656 6662 46870 27 Roberto Firmino ... very close range N/A close very close rangeN/A
6666 6672 46870 27 Roberto Firmino ... the box the right not close the boxthe right
6674 6680 46870 27 Roberto Firmino ... the box the centre not close the boxthe centre
6676 6682 46870 27 Roberto Firmino ... the box the left not close the boxthe left
6679 6685 46870 27 Roberto Firmino ... outside the box N/A not close outside the boxN/A
</code></pre>
<p>For the sake of clarity, all possible 'Position' values are:</p>
<pre><code>positions = ['a difficult anglethe left',
'a difficult anglethe right',
'long rangeN/A',
'long rangethe centre',
'long rangethe left',
'long rangethe right',
'outside the boxN/A',
'penaltyN/A',
'the boxthe centre',
'the boxthe left',
'the boxthe right',
'the six yard boxthe left',
'the six yard boxthe right',
'very close rangeN/A']
</code></pre>
<p>Now I would to map the following x/y values to each 'Position' name, storing the value under a new 'Position XY' column:</p>
<pre><code> the_boxthe_center = {'y':random.randrange(25,45), 'x':random.randrange(0,6)}
the_boxthe_left = {'y':random.randrange(41,54), 'x':random.randrange(0,16)}
the_boxthe_right = {'y':random.randrange(14,22), 'x':random.randrange(0,16)}
very_close_rangeNA = {'y':random.randrange(25,43), 'x':random.randrange(0,4)}
six_yard_boxthe_left = {'y':random.randrange(33,43), 'x':random.randrange(4,6)}
six_yard_boxthe_right = {'y':random.randrange(25,33), 'x':random.randrange(4,6)}
a_diffcult_anglethe_left = {'y':random.randrange(43,54), 'x':random.randrange(0,6)}
a_diffcult_anglethe_right = {'y':random.randrange(14,25), 'x':random.randrange(0,6)}
penaltyNA = {'y':random.randrange(36), 'x':random.randrange(8)}
outside_the_boxNA = {'y':random.randrange(14,54), 'x':random.randrange(16,28)}
long_rangeNA = {'y':random.randrange(0,68), 'x':random.randrange(40,52)}
long_rangethe_centre = {'y':random.randrange(0,68), 'x':random.randrange(28,40)}
long_rangethe_right = {'y':random.randrange(0,14), 'x':random.randrange(0,24)}
long_rangethe_left = {'y':random.randrange(54,68), 'x':random.randrange(0,24)}
</code></pre>
<hr>
<p>I tried:</p>
<pre><code>if df_shots['Position']=='very close rangeN/A':
    df_shots['Position X/Y']==very_close_rangeNA
    ...# and so on
</code></pre>
<p>But I get:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<hr>
<p>How do I do this?</p>
|
<p>It's bad form to store so many related variables outside of a container; let's use a dictionary that we map onto your dataframe.</p>
<pre><code>data_dict = {
    'the boxthe centre': {'y': random.randrange(25, 45), 'x': random.randrange(0, 6)},
    # ... one entry per position string, using the ranges defined above
}
df['Position'] = df['Position'].map(data_dict)
print(df['Position'])
6 {'y': 35, 'x': 2}
8 {'y': 32, 'x': 1}
10 {'y': 44, 'x': 11}
17 {'y': 32, 'x': 1}
447 {'y': 32, 'x': 1}
... NaN
6656 {'y': 35, 'x': 2}
6666 {'y': 15, 'x': 11}
6674 {'y': 32, 'x': 1}
6676 {'y': 44, 'x': 11}
6679 {'y': 37, 'x': 16}
Name: Position, dtype: object
</code></pre>
|
python|pandas
| 1
|
9,270
| 62,776,121
|
How to apply an operation on a vector with offsets
|
<p>Consider the following <code>pd.DataFrame</code></p>
<pre><code>import numpy as np
import pandas as pd
start_end = pd.DataFrame([[(0, 3), (4, 5), (6, 12)], [(7, 10), (11, 90), (91, 99)]])
values = np.random.rand(1, 99)
</code></pre>
<p>The <code>start_end</code> is a <code>pd.DataFrame</code> of shape <code>(X, Y)</code> where each value inside is a tuple of <code>(start_location, end_location)</code> in the <code>values</code> vector. Put another way, the values referenced by a particular cell form a vector of varying length.</p>
<p><strong>Question</strong></p>
<p>If I want to find the mean (for example) of the vector values for each of the cells in the <code>pd.DataFrame</code>, how can I do this in a cost effective way?</p>
<p>I managed to achieve this with an <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>.apply</code></a> function, but it's quite slow.</p>
<p>I guess I need to find some way to present it in a <code>numpy</code> array and then map it back to the 2d data-frame, but I can't figure out how.</p>
<p><strong>Notes</strong></p>
<ul>
<li>The distance between start end can varies and outliers can exist.</li>
<li>The cell start / end is always non-overlapping with the other cells (it will be interest to see if this prerequisite affect speed of solution).</li>
</ul>
<p><strong>The generalized problem</strong></p>
<p>More generally speaking, I see this as a recurring problem of how to reduce a 3d array, where one of the dimensions is not of equal length, to a 2d matrix via some transformation function (mean, min, etc.)</p>
|
<h3>Prospective approach</h3>
<p>Looking at your sample data :</p>
<pre><code>In [64]: start_end
Out[64]:
0 1 2
0 (1, 6) (4, 5) (6, 12)
1 (7, 10) (11, 12) (13, 19)
</code></pre>
<p>It is indeed non-overlapping for each row, but not across the entire dataset.</p>
<p>Now, we have <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>np.ufunc.reduceat</code></a> that gives us ufunc reduction for each slice :</p>
<pre><code>ufunc(ar[indices[i]: indices[i + 1]])
</code></pre>
<p>as long as <code>indices[i] < indices[i+1]</code>.</p>
<p>So, with <code>ufunc(ar, indices)</code>, we would get :</p>
<pre><code>[ufunc(ar[indices[0]: indices[1]]), ufunc(ar[indices[1]: indices[2]]), ..]
</code></pre>
<p>In our case, for each tuple <code>(x,y)</code>, we know <code>x<y</code>. With stacked version, we have :</p>
<pre><code>[(x1,y1), (x2,y2), (x3,y3), ...]
</code></pre>
<p>If we flatten, it would be :</p>
<pre><code>[x1,y1,x2,y2,x3,y3, ...]
</code></pre>
<p>So, we might not have <code>y1<x2</code>, but that's okay, because we don't need the ufunc reduction for that pair, and similarly for <code>y2,x3</code>; those results are simply skipped with a stepsize slicing of the final output.</p>
<p>Thus, we would have :</p>
<pre><code># Inputs : a (1D array), start_end (2D array of shape (N,2))
lens = start_end[:,1]-start_end[:,0]
out = np.add.reduceat(a, start_end.ravel())[::2]/lens
</code></pre>
<p><code>np.add.reduceat()</code> part gives us the sliced summations. We needed the division by <code>lens</code> for the average computations.</p>
<p>Sample run -</p>
<pre><code>In [47]: a
Out[47]:
array([0.49264042, 0.00506412, 0.61419663, 0.77596769, 0.50721381,
0.76943416, 0.83570173, 0.2085408 , 0.38992344, 0.64348176,
0.3168665 , 0.78276451, 0.03779647, 0.33456905, 0.93971763,
0.49663649, 0.4060438 , 0.8711461 , 0.27630025, 0.17129342])
In [48]: start_end
Out[48]:
array([[ 1, 3],
[ 4, 5],
[ 6, 12],
[ 7, 10],
[11, 12],
[13, 19]])
In [49]: [np.mean(a[i:j]) for (i,j) in start_end]
Out[49]:
[0.30963037472653104,
0.5072138121177008,
0.5295464559328862,
0.41398199978967815,
0.7827645134019902,
0.5540688880441684]
In [50]: lens = start_end[:,1]-start_end[:,0]
...: out = np.add.reduceat(a, start_end.ravel())[::2]/lens
In [51]: out
Out[51]:
array([0.30963037, 0.50721381, 0.52954646, 0.413982 , 0.78276451,
0.55406889])
</code></pre>
<p>For completeness, referring back to given sample, the conversion steps were :</p>
<pre><code># Given start_end as df and values as a 2D array
start_end = np.vstack(np.concatenate(start_end.values))
a = values.ravel()
</code></pre>
<p>For other ufuncs that have <code>reduceat</code> method, we will just replace <code>np.add.reduceat</code></p>
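<p>A compact, self-contained check of the recipe against the naive loop (seeded RNG so it is reproducible):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(20)
start_end = np.array([[1, 3], [4, 5], [6, 12]])

lens = start_end[:, 1] - start_end[:, 0]
out = np.add.reduceat(a, start_end.ravel())[::2] / lens

# Matches the per-slice means computed the slow way
assert np.allclose(out, [a[i:j].mean() for i, j in start_end])
print(out)
```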
|
pandas|numpy|offset
| 5
|
9,271
| 54,316,729
|
Extremely long time (over 10 minutes) to load TensorRT-optimized TensorFlow graphs from .pb file
|
<p>I'm experiencing extremely long load times for TensorFlow graphs optimized with TensorRT. Non-optimized ones load quickly but loading optimized ones takes <strong>over 10 minutes</strong> by the very same code:</p>
<pre><code>trt_graph_def = tf.GraphDef()
with tf.gfile.GFile(pb_path, 'rb') as pf:
    trt_graph_def.ParseFromString(pf.read())
</code></pre>
<p>I'm on an NVIDIA Drive PX 2 device (if that matters), with TensorFlow 1.12.0 built from source, CUDA 9.2 and TensorRT 4.1.1.
Due to the fact that it gets stuck on ParseFromString() I'm suspecting protobuf so here's its config:</p>
<pre><code>$ dpkg -l | grep protobuf
ii libmirprotobuf3:arm64 0.26.3+16.04.20170605-0ubuntu1.1 arm64 Display server for Ubuntu - RPC definitions
ii libprotobuf-dev:arm64 2.6.1-1.3 arm64 protocol buffers C++ library (development files)
ii libprotobuf-lite9v5:arm64 2.6.1-1.3 arm64 protocol buffers C++ library (lite version)
ii libprotobuf9v5:arm64 2.6.1-1.3 arm64 protocol buffers C++ library
ii protobuf-compiler 2.6.1-1.3 arm64 compiler for protocol buffer definition files
$ pip3 freeze | grep protobuf
protobuf==3.6.1
</code></pre>
<p>And here's the way I convert non-optimized models to TRT ones:</p>
<pre><code>def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.FastGFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def
print("Load frozen graph from disk")
frozen_graph = get_frozen_graph(DATA_DIR + MODEL + '.pb')
print("Optimize the model with TensorRT")
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 26,
    precision_mode='FP16',
    minimum_segment_size=2
)
print("Write optimized model to the file")
with open(DATA_DIR + MODEL + '_fp16_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
</code></pre>
<p>Tested on ssd_mobilenet_v1_coco, ssd_mobilenet_v2_coco and ssd_inception_v2_coco from the model zoo, all behave it the same way - downloaded pb file loads in seconds, TRT-optimized - well over 10 minutes.
What's wrong? Has anyone experienced the same and has any hints how to fix it?</p>
|
<p>OK, I think I got it sorted out. I left protobuf 2.6.1 almost untouched, just installed 3.6.1 from source with the cpp implementation next to it, and set the symlinks so that 3.6.1 is the default. Now after:</p>
<pre><code>export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
</code></pre>
<p>all models load in a fraction of a second.</p>
<p>Here are the exact steps I made, for reference:</p>
<pre><code># Check current version
$ protoc --version
libprotoc 2.6.1
# Create a backup of the current config, just in case
mkdir protobuf
cd protobuf/
mkdir backup_originals
mkdir backup_originals/protoc
cp /usr/bin/protoc backup_originals/protoc/
tar cvzf backup_originals/libprotobuf.tgz /usr/lib/aarch64-linux-gnu/libprotobuf*
# Original include files located at: /usr/include/google/protobuf/
# I did not back them up
# Original configuration of the libraries
$ ls -l /usr/lib/aarch64-linux-gnu/libprotobuf*
-rw-r--r-- 1 root root 2464506 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf.a
-rw-r--r-- 1 root root 430372 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf-lite.a
lrwxrwxrwx 1 root root 25 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf-lite.so -> libprotobuf-lite.so.9.0.1
lrwxrwxrwx 1 root root 25 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf-lite.so.9 -> libprotobuf-lite.so.9.0.1
-rw-r--r-- 1 root root 199096 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf-lite.so.9.0.1
lrwxrwxrwx 1 root root 20 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf.so -> libprotobuf.so.9.0.1
lrwxrwxrwx 1 root root 20 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf.so.9 -> libprotobuf.so.9.0.1
-rw-r--r-- 1 root root 1153872 Oct 24 2015 /usr/lib/aarch64-linux-gnu/libprotobuf.so.9.0.1
# Fetch and unpack the sources of version 3.6.1
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-python-3.6.1.zip
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protoc-3.6.1-linux-aarch_64.zip
unzip protoc-3.6.1-linux-aarch_64.zip -d protoc-3.6.1
unzip protobuf-python-3.6.1.zip
# Update the protoc
sudo cp protoc-3.6.1/bin/protoc /usr/bin/protoc
$ protoc --version
libprotoc 3.6.1
# BUILD AND INSTALL THE LIBRARIES
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
cd protobuf-3.6.1/
./autogen.sh
./configure
make
make check
sudo make install
# Remove unnecessary links to the old version
sudo rm /usr/lib/aarch64-linux-gnu/libprotobuf.a
sudo rm /usr/lib/aarch64-linux-gnu/libprotobuf-lite.a
sudo rm /usr/lib/aarch64-linux-gnu/libprotobuf-lite.so
sudo rm /usr/lib/aarch64-linux-gnu/libprotobuf.so
# Move old version of the libraries to the same folder where the new ones have been installed, for clarity
sudo cp -d /usr/lib/aarch64-linux-gnu/libproto* /usr/local/lib/
sudo rm /usr/lib/aarch64-linux-gnu/libproto*
sudo ldconfig # Refresh shared library cache
# Check the updated version
$ protoc --version
libprotoc 3.6.1
# Final configuration of the libraries after the update
$ ls -l /usr/local/lib/libproto*
-rw-r--r-- 1 root root 77064022 Feb 9 11:07 /usr/local/lib/libprotobuf.a
-rwxr-xr-x 1 root root 978 Feb 9 11:07 /usr/local/lib/libprotobuf.la
-rw-r--r-- 1 root root 9396522 Feb 9 11:07 /usr/local/lib/libprotobuf-lite.a
-rwxr-xr-x 1 root root 1013 Feb 9 11:07 /usr/local/lib/libprotobuf-lite.la
lrwxrwxrwx 1 root root 26 Feb 9 11:07 /usr/local/lib/libprotobuf-lite.so -> libprotobuf-lite.so.17.0.0
lrwxrwxrwx 1 root root 26 Feb 9 11:07 /usr/local/lib/libprotobuf-lite.so.17 -> libprotobuf-lite.so.17.0.0
-rwxr-xr-x 1 root root 3722376 Feb 9 11:07 /usr/local/lib/libprotobuf-lite.so.17.0.0
lrwxrwxrwx 1 root root 25 Feb 9 11:19 /usr/local/lib/libprotobuf-lite.so.9 -> libprotobuf-lite.so.9.0.1
-rw-r--r-- 1 root root 199096 Feb 9 11:19 /usr/local/lib/libprotobuf-lite.so.9.0.1
lrwxrwxrwx 1 root root 21 Feb 9 11:07 /usr/local/lib/libprotobuf.so -> libprotobuf.so.17.0.0
lrwxrwxrwx 1 root root 21 Feb 9 11:07 /usr/local/lib/libprotobuf.so.17 -> libprotobuf.so.17.0.0
-rwxr-xr-x 1 root root 30029352 Feb 9 11:07 /usr/local/lib/libprotobuf.so.17.0.0
lrwxrwxrwx 1 root root 20 Feb 9 11:19 /usr/local/lib/libprotobuf.so.9 -> libprotobuf.so.9.0.1
-rw-r--r-- 1 root root 1153872 Feb 9 11:19 /usr/local/lib/libprotobuf.so.9.0.1
-rw-r--r-- 1 root root 99883696 Feb 9 11:07 /usr/local/lib/libprotoc.a
-rwxr-xr-x 1 root root 994 Feb 9 11:07 /usr/local/lib/libprotoc.la
lrwxrwxrwx 1 root root 19 Feb 9 11:07 /usr/local/lib/libprotoc.so -> libprotoc.so.17.0.0
lrwxrwxrwx 1 root root 19 Feb 9 11:07 /usr/local/lib/libprotoc.so.17 -> libprotoc.so.17.0.0
-rwxr-xr-x 1 root root 32645760 Feb 9 11:07 /usr/local/lib/libprotoc.so.17.0.0
lrwxrwxrwx 1 root root 18 Feb 9 11:19 /usr/local/lib/libprotoc.so.9 -> libprotoc.so.9.0.1
-rw-r--r-- 1 root root 991440 Feb 9 11:19 /usr/local/lib/libprotoc.so.9.0.1
# Reboot, just in case :)
sudo reboot
# BUILD AND INSTALL THE PYTHON-PROTOBUF MODULE
cd protobuf-3.6.1/python/
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
# Fix setup.py to force compilation with c++11 standard
vim setup.py
$ diff setup.py setup.py~
205,208c205,208
< #if v:
< # extra_compile_args.append('-std=c++11')
< #elif os.getenv('KOKORO_BUILD_NUMBER') or os.getenv('KOKORO_BUILD_ID'):
< extra_compile_args.append('-std=c++11')
---
> if v:
> extra_compile_args.append('-std=c++11')
> elif os.getenv('KOKORO_BUILD_NUMBER') or os.getenv('KOKORO_BUILD_ID'):
> extra_compile_args.append('-std=c++11')
# Build, test and install
python3 setup.py build --cpp_implementation
python3 setup.py test --cpp_implementation
sudo python3 setup.py install --cpp_implementation
# Make the cpp backend a default one when user logs in
sudo sh -c "echo 'export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp' >> /etc/profile.d/protobuf.sh"
</code></pre>
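<p>To confirm after reboot that Python is actually using the C++ backend, here is a quick hedged sanity check (it relies on <code>api_implementation.Type()</code>, which is an internal but commonly used protobuf helper, and falls back gracefully if protobuf is not importable):</p>

```python
# Hedged sanity check: report which protobuf backend Python is using.
# 'cpp' means the fast C++ implementation built above is active.
def protobuf_backend():
    try:
        from google.protobuf.internal import api_implementation
        return api_implementation.Type()  # e.g. 'cpp' or 'python'
    except ImportError:
        return 'protobuf not installed'

print(protobuf_backend())
```

<p>If this prints <code>cpp</code>, the slow pure-Python parsing path is no longer in use and large frozen graphs should load quickly.</p>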
<p>I found that this update tends to break pip, so I simply updated it with:</p>
<pre><code>wget http://se.archive.ubuntu.com/ubuntu/pool/universe/p/python-pip/python3-pip_9.0.1-2_all.deb
wget http://se.archive.ubuntu.com/ubuntu/pool/universe/p/python-pip/python-pip-whl_9.0.1-2_all.deb
sudo dpkg -i *.deb
</code></pre>
|
python|tensorflow|protocol-buffers|tensorrt
| 4
|
9,272
| 54,644,113
|
Opencv fitellipse draws the wrong contour
|
<p>I want to draw the ellipse contour around the given figure appended below. I am not getting the correct result since the figure consists of two lines.</p>
<p><strong><em>I have tried the following:-</em></strong></p>
<ol>
<li>Read the Image</li>
<li>Convert the BGR to HSV</li>
<li>Define the Range of color blue</li>
<li>Create the inRange Mask to capture the value of between lower and upper blue</li>
<li>Find the contour & Draw the fit ellipse.</li>
</ol>
<p><strong><em>Here is the source code-</em></strong></p>
<pre><code>import cv2
import numpy as np
image=cv2.imread('./source image.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_blue= np.array([75, 0, 0])
upper_blue= np.array([105, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res=cv2.bitwise_and(image,image,mask=mask)
_,contours,_=cv2.findContours(mask,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(max(contours,key=cv2.contourArea))
cv2.ellipse(image,ellipse,(0,255,0),2)
cv2.imshow('mask',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p><strong><em>The figure/Image below show the Expected & Actual Output-</em></strong></p>
<p><a href="https://i.stack.imgur.com/TOWw0.jpg" rel="nofollow noreferrer">Expected & Actual display image</a></p>
<p><strong><em>Source Image</em></strong>
<a href="https://i.stack.imgur.com/h8gx3.jpg" rel="nofollow noreferrer">Source Image</a></p>
<p><strong><em>Output Contour Array</em></strong>
<a href="https://drive.google.com/open?id=1bQyx2dkiabBIX7ufCY3D1mU3YQVwo_lp" rel="nofollow noreferrer">Contour file</a></p>
|
<p>I tried running your code in C++ and added erosion, dilation and convexHull for the resulting contour:</p>
<pre><code>auto DetectEllipse = [](cv::Mat rgbImg, cv::Mat hsvImg, cv::Scalar fromColor, cv::Scalar toColor)
{
cv::Mat threshImg;
cv::inRange(hsvImg, fromColor, toColor, threshImg);
cv::erode(threshImg, threshImg, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 2);
cv::dilate(threshImg, threshImg, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 2);
std::vector<std::vector<cv::Point> > contours;
cv::findContours(threshImg, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
int areaThreshold = (rgbImg.cols * rgbImg.rows) / 100;
std::vector<cv::Point> allContours;
allContours.reserve(10 * areaThreshold);
for (size_t i = 0; i < contours.size(); i++)
{
if (contours[i].size() > 4)
{
auto area = cv::contourArea(contours[i]);
if (area > areaThreshold)
{
allContours.insert(allContours.end(), contours[i].begin(), contours[i].end());
}
}
}
if (allContours.size() > 4)
{
std::vector<cv::Point> hull;
cv::convexHull(allContours, hull, false);
cv::ellipse(rgbImg, cv::fitEllipse(hull), cv::Scalar(255, 0, 255), 2);
}
};
cv::Mat rgbImg = cv::imread("h8gx3.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
DetectEllipse(rgbImg, hsvImg, cv::Scalar(75, 0, 0), cv::Scalar(105, 255, 255));
DetectEllipse(rgbImg, hsvImg, cv::Scalar(10, 100, 20), cv::Scalar(25, 255, 255));
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
</code></pre>
<p>Result looks correct:
<a href="https://i.stack.imgur.com/CQ4vv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CQ4vv.png" alt="enter image description here"></a></p>
|
python-3.x|numpy|opencv|computer-vision|cv2
| -1
|
9,273
| 73,720,191
|
concat strings from different columns and get unique values in pandas df
|
<p>I have an issue while trying to concat and use <code>set</code> on multiple columns.</p>
<p>This is an example df:</p>
<pre><code>df = pd.DataFrame({'customer id':[1,2,3,4,5],
'email1':['ex11@email.com',np.nan,'ex31@email.com',np.nan, np.nan],
'email2':['ex11@email.com' ,np.nan,'Ex3@email.com','ex4@email.com', np.nan],
'email3':['ex12@email.com',np.nan,'ex3@email.com','ex4@email.com', 'ex5@email.com']})
</code></pre>
<p>df:</p>
<pre><code> customer id email1 email2 email3
0 1 ex11@email.com ex11@email.com ex12@email.com
1 2 NaN NaN NaN
2 3 ex31@email.com Ex3@email.com ex3@email.com
3 4 NaN ex4@email.com ex4@email.com
4 5 NaN NaN ex5@email.com
</code></pre>
<p>I would like to create a new column with unique values from all columns (email1, email2 & email3), so that the created column holds a set of unique emails per customer; some emails have different cases (upper, lower, etc.)</p>
<p>This is what I have done so far:</p>
<pre><code>df['ALL_EMAILS'] = df[['email1','email2','email3']].apply(lambda x: ', '.join(x[x.notnull()]), axis = 1)
</code></pre>
<p>This took about <strong>3 minutes</strong> on a df of > 500K customers!</p>
<p>then I created a function to handle the output and get the unique values if the cell is not null:</p>
<pre><code>def checkemail(x):
if x:
#to_lower
lower_x = x.lower()
y= lower_x.split(',')
return set(y)
</code></pre>
<p>then applies it to the column:</p>
<pre><code>df['ALL_EMAILS'] = df['ALL_EMAILS'].apply(checkemail)
</code></pre>
<p>but I got the wrong output under the ALL_EMAILS column!</p>
<pre><code> ALL_EMAILS
0 { ex11@email.com, ex11@email.com, ex12@email.com}
1 None
2 { ex3@email.com, ex31@email.com}
3 { ex4@email.com, ex4@email.com}
4 {ex5@email.com}
</code></pre>
|
<p>Try working on the values directly instead of joining them and then splitting again:</p>
<pre><code>df['ALL_EMAILS'] = df.filter(like='email').apply(lambda x: set(x.dropna().str.lower()) or None, axis=1)
</code></pre>
<p>Output:</p>
<pre><code> customer id email1 email2 email3 ALL_EMAILS
0 1 ex11@email.com ex11@email.com ex12@email.com {ex12@email.com, ex11@email.com}
1 2 NaN NaN NaN None
2 3 ex31@email.com Ex3@email.com ex3@email.com {ex31@email.com, ex3@email.com}
3 4 NaN ex4@email.com ex4@email.com {ex4@email.com}
4 5 NaN NaN ex5@email.com {ex5@email.com}
</code></pre>
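<p>An alternative that avoids the row-wise <code>apply</code> altogether (my own sketch, not part of the answer above) is to stack the e-mail columns into one long Series first; note that rows with no e-mail at all come back as <code>NaN</code> here rather than <code>None</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'customer id': [1, 2, 3, 4, 5],
    'email1': ['ex11@email.com', np.nan, 'ex31@email.com', np.nan, np.nan],
    'email2': ['ex11@email.com', np.nan, 'Ex3@email.com', 'ex4@email.com', np.nan],
    'email3': ['ex12@email.com', np.nan, 'ex3@email.com', 'ex4@email.com', 'ex5@email.com'],
})

# Long format: one row per (customer row, e-mail column) pair, NaNs dropped
emails = df.filter(like='email').stack().dropna().str.lower()

# Collect the unique lowercased e-mails back per original row index
df['ALL_EMAILS'] = emails.groupby(level=0).agg(set)
```

<p>Because the assignment aligns on the original row index, customers with no e-mail columns simply get a missing value in <code>ALL_EMAILS</code>.</p>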
|
python|python-3.x|pandas|concatenation
| 1
|
9,274
| 73,619,615
|
fill missing datetime pandas
|
<p>I have a following problem. I have this df with 10Min interval:</p>
<pre><code>df_dict = {"value" : [1, 1, 2, 3], "datetime" : ["2022-09-05 07:20:00", "2022-09-05 07:30:00", "2022-09-05 07:20:00", "2022-09-05 07:20:00"],
"expedice" : ["A", "A", "B", "C"] }
df = pd.DataFrame(df_dict)
</code></pre>
<p>I would like to fill missing datetime to have:</p>
<pre class="lang-py prettyprint-override"><code>df_dict = {"value" : [1, 1, 2, 0, 3, 0], "datetime" : ["2022-09-05 07:20:00", "2022-09-05 07:30:00", "2022-09-05 07:20:00", "2022-09-05 07:30:00", "2022-09-05 07:20:00", "2022-09-05 07:30:00"],
"expedice" : ["A", "A", "B", "B", "C", "C"] }
df = pd.DataFrame(df_dict)
</code></pre>
<p>I tried</p>
<pre class="lang-py prettyprint-override"><code>df.datetime = pd.to_datetime(df.datetime)
df.set_index(
['datetime', 'expedice']
).unstack(
fill_value=0
).asfreq(
"10Min", fill_value=0
).stack().sort_index(level=1).reset_index()
</code></pre>
<p>But I got an error <code>TypeError: Cannot change data-type for object array.</code>. How can I fix it please?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with DatetimeIndex created by minimal and maximal datetime:</p>
<pre><code>df1 = df.set_index(['expedice', 'datetime'])
df1 = (df1.reindex(pd.MultiIndex.from_product([df1.index.levels[0],
pd.date_range(df1.index.levels[1].min(),
df1.index.levels[1].max(),
freq='10Min')],
names=df1.index.names), fill_value=0)
.reset_index())
print (df1)
expedice datetime value
0 A 2022-09-05 07:20:00 1
1 A 2022-09-05 07:30:00 1
2 B 2022-09-05 07:20:00 2
3 B 2022-09-05 07:30:00 0
4 C 2022-09-05 07:20:00 3
5 C 2022-09-05 07:30:00 0
</code></pre>
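<p>As a side note, the reshape route from the question can also be made to work; the following is my own hedged sketch of that idea (not the answer above), which goes wide with <code>pivot_table</code>, regularises the time axis with <code>asfreq</code>, then stacks back to long format:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'value': [1, 1, 2, 3],
    'datetime': ['2022-09-05 07:20:00', '2022-09-05 07:30:00',
                 '2022-09-05 07:20:00', '2022-09-05 07:20:00'],
    'expedice': ['A', 'A', 'B', 'C'],
})
df['datetime'] = pd.to_datetime(df['datetime'])

# Wide table (one column per expedice), fill the time axis, then go long again
out = (df.pivot_table(index='datetime', columns='expedice',
                      values='value', fill_value=0)
         .asfreq('10min', fill_value=0)
         .stack()
         .rename('value')
         .reset_index())
```

<p>The row order here differs from the reindex-based answer (it is sorted by datetime first), but the filled values are the same.</p>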
|
python|pandas|dataframe|datetime
| 1
|
9,275
| 73,790,671
|
Take dataframe with column that has values from 0 to 360 and wrap around to -180 to 180
|
<p>I have a dataframe <code>df</code> with a column <code>'lon'</code> that has values which range from 0 - 15 and 250 - 360. I would like to wrap the values around from -180 to 180 without a for loop. So, the input is this:</p>
<pre><code>df['lon'] = [1,10,15,250,360]
</code></pre>
<p>and the output should look like this</p>
<pre><code>df['new_lon'] = [1,10,15,-110,0]
</code></pre>
<p>This would be easy to do with a for loop, but I would like to do it without a loop. In particular, I don't know how to deal with a dataframe where I want the values less than 15 to stay the same but the values greater than 15 to be changed.</p>
|
<p>Technically, you can add 180, get the modulo 360, subtract 180:</p>
<pre><code>df['lon'].add(180).mod(360).sub(180)
</code></pre>
<p>Other approach:</p>
<pre><code>df['lon'].mask(df['lon'].gt(180), df['lon'].sub(360))
</code></pre>
<p>In place modification:</p>
<pre><code>df.loc[df['lon'].gt(180), 'lon'] -= 360
</code></pre>
<p>output:</p>
<pre><code>0 1
1 10
2 15
3 -110
4 0
Name: lon, dtype: int64
</code></pre>
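<p>The first approach is easy to sanity-check without pandas; a minimal pure-Python sketch:</p>

```python
lon = [1, 10, 15, 250, 360]

# Shift into [0, 360), wrap with modulo, then shift back into [-180, 180)
wrapped = [(x + 180) % 360 - 180 for x in lon]
print(wrapped)  # [1, 10, 15, -110, 0]
```

<p>Note that with this formula 180 itself maps to -180, i.e. the output follows the half-open convention [-180, 180).</p>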
|
python|pandas
| 5
|
9,276
| 73,732,700
|
What is the best way to transform unmerged cells to merged cells with Pandas?
|
<p>I have an excel file (top image) where I'm trying to essentially go from individual rows to merged rows and retain the order (bottom). I was thinking about using the groupby function but I don't really need to calculate anything just transform it. Any ideas would be greatly appreciated.</p>
<p>Sample Code:</p>
<pre><code>{'ID': {0: 80, 1: 80, 2: 80, 3: 80, 4: 80},
'FINANCIAL CLASS RESTR': {0: 'BLUE CROSS',
1: 'COMMERCIAL',
2: 'COMMERCIAL',
3: 'MEDICARE',
4: 'COMMERCIAL'},
'PAYOR RESTR': {0: 200001, 1: 100001, 2: 100009, 3: 400001, 4: 100060},
'PLAN RESTR': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan},
'LOCATION RESTR': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan},
'EFFECTIVE FROM DATE': {0: 36526, 1: 34906, 2: 37469, 3: 36526, 4: 35065},
'ACTIVE FLAG': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}}
</code></pre>
<p><a href="https://i.stack.imgur.com/IKQkr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKQkr.png" alt="enter image description here" /></a></p>
|
<p>This is 95% of the way there.</p>
<p>I just forced everything to be a string, built a list, then "\n".join()ed the list.</p>
<p>This would not function as merged cells (merged cells give me chills), but would be human-readable in a similar way. Good luck.</p>
<pre class="lang-py prettyprint-override"><code>newdf = {
"ID": [],
"FINANCIAL CLASS RESTR": [],
"PAYOR RESTR": [],
"PLAN RESTR": [],
"LOCATION RESTR": [],
"EFFECTIVE FROM DATE": [],
"ACTIVE FLAG": [],
}
for idval in df["ID"].unique():
newdf["ID"].append(idval)
q_val = df["ID"] == idval
temp = df.query("@q_val").drop(columns="ID")
for col in temp.columns:
temp[col] = temp[col].fillna("")
temp[col] = temp[col].astype("str")
col_vals = []
for x in temp[col]:
col_vals.append(x)
new_col_val = "\n".join(col_vals)
newdf[col].append(new_col_val)
</code></pre>
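<p>For comparison, the same newline-joining can be sketched more compactly with <code>groupby</code>/<code>agg</code>; this is my own condensed variant, using a trimmed version of the question's data (so the columns here are a subset of the real ones):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID': [80, 80, 80, 80, 80],
    'FINANCIAL CLASS RESTR': ['BLUE CROSS', 'COMMERCIAL', 'COMMERCIAL',
                              'MEDICARE', 'COMMERCIAL'],
    'PAYOR RESTR': [200001, 100001, 100009, 400001, 100060],
    'PLAN RESTR': [np.nan] * 5,
})

# NaN -> '' so it joins cleanly, everything to str, then newline-join per ID
merged = (df.fillna('')
            .astype(str)
            .groupby('ID', as_index=False)
            .agg('\n'.join))
```

<p>Row order within each ID is preserved, since <code>groupby</code> only sorts the group keys, not the rows inside each group.</p>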
|
python|excel|pandas|dataframe
| 1
|
9,277
| 71,227,954
|
Aggregating by two different columns
|
<p>I'm trying to aggregate this using Python pandas. I'm trying to find the sum of spend and visitors for each network, but only aggregate them if the months are the same.</p>
<p>for example</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">month</th>
<th style="text-align: center;">network</th>
<th style="text-align: right;">spend</th>
<th style="text-align: right;">visitors</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">CNBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">BBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">BBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">CNBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">CNBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
</tbody>
</table>
</div>
<p>should result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">month</th>
<th style="text-align: center;">network</th>
<th style="text-align: right;">spend</th>
<th style="text-align: right;">visitors</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">CNBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: center;">BBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">CNBC</td>
<td style="text-align: right;">20</td>
<td style="text-align: right;">4</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: center;">BBC</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>how would I be able to do this?</p>
|
<p>You can group your pandas dataframe by <strong>network</strong> and by <strong>month</strong> and then call the <code>sum</code> method.</p>
<pre><code>df.groupby(['network', 'month']).sum()
</code></pre>
<p>Returns:</p>
<pre><code>network month spend visitors
BBC 9 10 2
BBC 10 10 1
CNBC 9 10 2
CNBC 10 20 4
</code></pre>
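<p>If you want <code>month</code> and <code>network</code> back as ordinary columns, as in the expected output, you can pass <code>as_index=False</code>. A runnable sketch with the question's data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'month':    [9, 10, 9, 10, 10],
    'network':  ['CNBC', 'BBC', 'BBC', 'CNBC', 'CNBC'],
    'spend':    [10, 10, 10, 10, 10],
    'visitors': [2, 1, 2, 2, 2],
})

# as_index=False keeps month/network as ordinary columns instead of an index
result = df.groupby(['month', 'network'], as_index=False)[['spend', 'visitors']].sum()
```

<p>By default <code>groupby</code> sorts by the group keys, so the month 9 rows come first; pass <code>sort=False</code> to keep the original order of appearance instead.</p>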
|
python|sql|pandas
| 3
|
9,278
| 71,416,293
|
Potential bug in GCP regarding public access settings for a file
|
<p>I was conversing with someone from GCS support, and they suggested that there may be a bug and that I post what's happening to the support group.</p>
<p><strong>Situation</strong></p>
<p>I'm trying to adapt this Tensorflow demo ...
<a href="https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization" rel="nofollow noreferrer">https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization</a>
... to something I can use with images stored on my GCP account. Substituting one of my images to run through the process.</p>
<p>I have the bucket set for <code>allUsers</code> to have public access, with a Role of <code>Storage Object Viewer</code>.</p>
<p>However, the demo still isn't accepting my files stored in GCS.</p>
<p>For example, this file is being rejected:
<a href="https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg" rel="nofollow noreferrer">https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg</a></p>
<p>That file was downloaded from the examples in the demo, and then uploaded to my GCS and the link used in the demo. But it's not being accepted. I'm using the URL from the <code>Copy URL</code> link.</p>
<p><strong>Re: publicly accessible data</strong></p>
<p>I've been following the instructions on making data publicly accessible.
<a href="https://cloud.google.com/storage/docs/access-control/making-data-public#code-samples_1" rel="nofollow noreferrer">https://cloud.google.com/storage/docs/access-control/making-data-public#code-samples_1</a></p>
<p>I've performed all the above operations from the console, but the bucket still doesn't indicate <code>public access</code> for the bucket in question. So I'm not sure what's going on there.</p>
<p>Please see the attached screen of my bucket permissions settings.</p>
<p><a href="https://i.stack.imgur.com/187tn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/187tn.jpg" alt="screen of bucket permissions" /></a></p>
<p>So I'm hoping you can clarify if those settings look good for those files being publicly accessible.</p>
<p><strong>Re: Accessing the data from the demo</strong></p>
<p>I'm also following this related article on 'Accessing public data'
<a href="https://cloud.google.com/storage/docs/access-public-data#storage-download-public-object-python" rel="nofollow noreferrer">https://cloud.google.com/storage/docs/access-public-data#storage-download-public-object-python</a></p>
<p>There are 2 things I'm not clear on:</p>
<ol>
<li>If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?</li>
<li>If I do need to add this to the code from the demo, can you tell me how I can find these 2 parts of the code:
a. source_blob_name = "storage-object-name"
b. destination_file_name = "local/path/to/file"</li>
</ol>
<p>I know the path of the file above (<code>01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg</code>), but don't understand whether that's the <code>storage-object-name</code> or the <code>local/path/to/file</code>.</p>
<p>And if it's either one of those, then how do I find the other value?</p>
<p>And furthermore, to make a bucket public, why would I need to state an individual file? That's making me think that code isn't necessary.</p>
<p>Thank you for clarifying any issues or helping to resolve my confusion.</p>
<p>Doug</p>
|
<blockquote>
<p>If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?</p>
</blockquote>
<p>No, you don't need to. I actually did some testing and I was able to pull images from GCS whether they were set to public or not.</p>
<p>As we discussed in this <a href="https://stackoverflow.com/questions/71314872/problem-with-unknown-image-file-format-error-for-gcs-image-in-tensorflow-style">thread</a>, what's happening in your project is that the image you are trying to pull from GCS has a <code>.jpeg</code> extension but is not actually <code>.jpeg</code>. The actual image is in <code>.jpg</code>, causing TensorFlow to be unable to load it properly.</p>
<p>See this test following the demo you mentioned, using the image from your bucket. Note that I used <code>.jpg</code> as the image's extension.</p>
<pre><code>content_urls = dict(
test_public='https://storage.cloud.google.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpg'
)
</code></pre>
<p><a href="https://i.stack.imgur.com/ychhx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ychhx.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/kjeqo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kjeqo.jpg" alt="enter image description here" /></a></p>
<p>Also tested another image from your bucket and it was successfully loaded in TensorFlow.</p>
<p><a href="https://i.stack.imgur.com/0FX8k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0FX8k.jpg" alt="enter image description here" /></a></p>
|
tensorflow|google-cloud-platform|google-cloud-storage
| 1
|
9,279
| 52,184,478
|
math_ops.floor equivalent in Keras
|
<p>I'm trying to implement a custom layer in Keras where I need to convert a tensor of floats <code>[a, 1+a)</code> to a binary tensor for masking. I can see that Tensorflow has a <code>floor</code> function that can do that, but Keras doesn't seem to have it in <code>keras.backend</code>. Any idea how I can do this?</p>
|
<p>As requested by OP, I will mention the answer I gave in my comment and elaborate more:</p>
<p><strong>Short answer:</strong> you won't encounter any major problems if you use <code>tf.floor()</code>.</p>
<p><strong>Long answer:</strong> Using Keras backend functions (i.e. <code>keras.backend.*</code>) is necessary in those cases when 1) there is a need to pre-process or augment the argument(s) passed to the actual function of the TensorFlow or Theano backend, or to post-process the returned results. For example, the <a href="https://github.com/keras-team/keras/blob/f9210387088fe91b5bc8999cf0cb41a0fe9eacf6/keras/backend/tensorflow_backend.py#L1368" rel="nofollow noreferrer"><code>mean</code></a> method in the backend can also work with boolean tensors as input, whereas the <a href="https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean" rel="nofollow noreferrer"><code>reduce_mean</code></a> method in TF expects numerical types as input; or 2) you want to write a model that works across all the Keras-supported backends.</p>
<p>Otherwise, it is fine to use most of the real backend functions directly; however, if the function has been defined in the <code>keras.backend</code> module, then it is recommended to use that instead.</p>
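<p>To illustrate why <code>floor</code> yields the binary mask the question asks for: for values in <code>[a, 1+a)</code> with <code>0 <= a < 1</code>, the floor is 0 below 1 and 1 at or above it. A minimal NumPy sketch with the same semantics as <code>tf.floor</code> (the sample values are assumptions for the demo):</p>

```python
import numpy as np

# Values drawn from [a, 1 + a) with a = 0.2 (a is an assumption for the demo)
x = np.array([0.2, 0.7, 0.99, 1.0, 1.19])

# floor sends anything below 1 to 0 and anything in [1, 2) to 1: a binary mask
mask = np.floor(x)
print(mask.tolist())  # [0.0, 0.0, 0.0, 1.0, 1.0]
```

<p>The same expression with <code>tf.floor(x)</code> inside a custom Keras layer produces the equivalent masking tensor.</p>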
|
tensorflow|keras|keras-layer|floor
| 5
|
9,280
| 60,357,469
|
Increase brightness of specific pixels in an image using python
|
<p>I would like to increase the brightness/vividness of the purple color in the following image:</p>
<p><a href="https://i.stack.imgur.com/vf1Gn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vf1Gn.png" alt="enter image description here"></a></p>
<p>Here is the color palette</p>
<p><a href="https://i.stack.imgur.com/JbDeN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JbDeN.png" alt="enter image description here"></a></p>
<p>Here is what I tried: but this increases the brightness of the whole image:</p>
<pre><code>def increase_brightness(img, value=20):
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
h, s, v = cv2.split(hsv)
lim = 255 - value
v[v > lim] = 255
v[v <= lim] += value
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
plt.imsave('img_new.png', img)
return img
</code></pre>
<p>how to create a mask to modify brightness only the pixels in the input that corresponds to purple?</p>
|
<p>Note that you converted the image from RGB (to HSV), but OpenCV loads images in BGR order, so you need to convert it from BGR (to HSV).</p>
<p>If you only want to increase the brightness of the purple, then use cv2.inRange() for the purple color to create a mask. Then modify the input image everywhere with your current method. Then use the mask to combine the input and modified images so as to only show the enhancement for the purple colors corresponding to the white in the mask.</p>
<p>So this is one way to do that in Python/OpenCV.</p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/GDsgi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GDsgi.png" alt="enter image description here"></a></p>
<pre><code>import cv2
import numpy as np
# read image
img = cv2.imread('purple.png')
# set value
value = 20
# convert image to hsv colorspace
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
# create mask on purple color and also its inverted mask
low_range = (80,160,50)
high_range = (150,230,120)
mask = cv2.inRange(hsv,low_range,high_range)
inv_mask = cv2.bitwise_not(mask)
mask = cv2.merge([mask,mask,mask])
inv_mask = cv2.merge([inv_mask,inv_mask,inv_mask])
# enhance the value channel of the hsv image
lim = 255 - value
v[v > lim] = 255
v[v <= lim] += value
# convert it back to BGR colors
final_hsv = cv2.merge((h, s, v))
bgr = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
# use bit_wise_and and its inverse to combine the original and enhanced versions
bgr = cv2.bitwise_and(bgr,mask)
img = cv2.bitwise_and(img,inv_mask)
result = cv2.add(bgr,img)
# display IN and OUT images
cv2.imshow('IMAGE', img)
cv2.imshow('HSV', hsv)
cv2.imshow('MASK', mask)
cv2.imshow('RESULT', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save output image
cv2.imwrite('purple_enhanced.png', result)
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/bLbFa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bLbFa.png" alt="enter image description here"></a></p>
<p>If you alternate viewing of the input and output, you will see that the output is brighter everywhere.</p>
|
python|image|numpy|opencv
| 2
|
9,281
| 60,357,990
|
Are these images too 'noisy' to be correctly classified by a CNN?
|
<p>I'm attempting to build an image classifier to identify between 2 types of images on property sites. I've split my dataset into 2 categories: [Property, Room]. I'm hoping to be able to differentiate between whether the image is of the outside of some property or a room inside the property.</p>
<p>Below are 2 examples of the types of image I am using. My dataset consists of 800 images for each category, plus a test set of an additional 160 images for each category (not present in the training set).</p>
<p>I always seem to get reasonable results in training, but then when I test against some real samples it usually ends up classifying all of the images into a single category.</p>
<p>Below you can see the model I am using:</p>
<pre><code>train_datagen = ImageDataGenerator(
rescale=1./255,
width_shift_range=0.1,
height_shift_range=0.1,
rotation_range=10,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
) # set validation split
validate_datagen = ImageDataGenerator(rescale=1./255)
IMG_HEIGHT = IMG_WIDTH = 128
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (11,11), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), padding='same'),
tf.keras.layers.MaxPooling2D(11, 11),
# tf.keras.layers.Dropout(0.5),
# Second convolutional layer
tf.keras.layers.Conv2D(64, (11, 11), padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(11, 11),
# tf.keras.layers.Dropout(0.5),
# Flattening
tf.keras.layers.Flatten(),
# Full connection
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(
optimizer=RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy']
)
# now train the model
history = model.fit_generator(
train_generator,
validation_data=validation_generator,
steps_per_epoch=75, #100
epochs=5, # 15, or 20, and 100 steps per epoch
validation_steps=50,
verbose=1
)
# Predict image
def load_image(img_path, show=False):
test_image = image.load_img(img_path, target_size=(IMG_HEIGHT, IMG_WIDTH))
test_image = image.img_to_array(test_image)
test_image /= 255.
test_image = np.expand_dims(test_image, axis = 0)
return test_image
def predict_image(img_path, show=False):
loaded_img = load_image(img_path, show)
pred = model.predict(loaded_img)
return 'property' if pred[0][0] == 0.0 else 'room'
print('Prediction is...')
print(predict_image('path/to/my/img'))
</code></pre>
<p>Can anyone suggest the possible reasons for this? I've tried using different epochs and batch sizes, augmenting the images further, changing the Conv2D and Pooling layer size but nothing seems to help.</p>
<p>Do I perhaps not have enough data, or are they bad images to begin with? This is my first foray into ML so apologies if any of questions seem obvious.</p>
<p><a href="https://i.stack.imgur.com/9RdzD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9RdzD.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/VQNvN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQNvN.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/J4RzW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J4RzW.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/R0zTM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0zTM.jpg" alt="enter image description here"></a></p>
|
<p>You are not post-processing the output of the classifier correctly: it outputs a probability in [0, 1], with <code>values < 0.5</code> corresponding to the first class and <code>values >= 0.5</code> to the second class. You should change the code accordingly.</p>
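<p>A minimal sketch of the corrected post-processing (the threshold and labels follow the question; the helper name is illustrative, and <code>pred</code> stands for a single sigmoid output such as <code>model.predict(img)[0][0]</code>):</p>

```python
def predict_label(pred, threshold=0.5):
    """Map a sigmoid output in [0, 1] to a class label.

    Values below the threshold belong to the first class ('property'),
    values at or above it to the second class ('room').
    """
    return 'property' if pred < threshold else 'room'

# Example raw sigmoid outputs, e.g. from model.predict(loaded_img)[0][0]
print(predict_label(0.12))  # -> property
print(predict_label(0.87))  # -> room
```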
|
tensorflow|machine-learning|image-processing|keras|neural-network
| 2
|
9,282
| 72,766,842
|
I am trying to find the time duration within a column with reference to change in activity status of a key column in python
|
<pre><code>Activity Schedule
Activity Status Activity Date
1 Inactive 06/25/22
1 Inactive 06/21/22
1 Active 06/19/22
1 Inactive 06/18/22
2 Active 05/26/22
2 Active 05/23/22
2 Active 05/20/22
2 Inactive 04/14/22
3 Inactive 03/05/22
3 Inactive 02/28/22
3 Inactive 02/23/22
3 Active 02/02/22
3 Active 02/01/22
</code></pre>
<p>I want to find out the cumulative time gap of "inactivity" from a recent "active" status, grouped by the activity codes. This is what I have come up with so far, but I also need the time lag of inactivity from the latest day of activity.</p>
<pre><code>def diff(x):
x = x.reset_index(drop=True)
dif = []
dif.append(x[0] - x[0])
dif.extend([x[i] - x[i-1] for i in range(1,len(x))])
return dif
df['diff'] = df.groupby('Activity')['Activity Date'].transform(diff)
df['Duration'] =df.sort_values(['Activity','Activity Date']).groupby(["Status"])["diff"].transform('cumsum')
</code></pre>
<p>I am looking for results like this:</p>
<pre><code>Activity Status Activity Date Change
1 Inactive 06/18/22 0
1 Active 06/19/22 1
1 Inactive 06/21/22 2
1 Inactive 06/25/22 6
2 Inactive 04/14/22 0
2 Active 05/20/22 36
2 Active 05/23/22 39
3 Active 02/01/22 0
3 Active 02/02/22 1
3 Inactive 02/23/22 22
3 Inactive 02/28/22 27
3 Inactive 03/05/22 32
</code></pre>
|
<p>First, we have to sort the Dataframe values:</p>
<pre><code>df = df.sort_values(["Activity", "Activity Date"])
</code></pre>
<p><strong>result:</strong></p>
<pre><code>Activity Status Activity Date
1 Inactive 06/18/22
1 Active 06/19/22
1 Inactive 06/21/22
1 Inactive 06/25/22
2 Inactive 04/14/22
2 Active 05/20/22
2 Active 05/23/22
2 Active 05/26/22
3 Active 02/01/22
3 Active 02/02/22
3 Inactive 02/23/22
3 Inactive 02/28/22
3 Inactive 03/05/22
</code></pre>
<p>Then, create a function that returns the day difference between dates if the rows belong to the same 'Activity' and the previous date has a different Status; otherwise it returns zero.</p>
<pre><code>def gap(x, y):
def last_diff_status(df_temp):
x2 = df_temp.shift().loc[x.name]
if x['Status'] == x2['Status']:
return last_diff_status(df_temp.shift())
return x2
x1 = y.loc[x.name]
if x['Activity'] == x1['Activity']:
if x['Status'] != x1['Status']:
return (pd.to_datetime(x['Activity_Date']) - pd.to_datetime(x1['Activity_Date'])).days
else:
return (pd.to_datetime(x['Activity_Date']) - pd.to_datetime(last_diff_status(y.loc[df['Activity'] == x['Activity']])['Activity_Date'])).days
else:
return 0
</code></pre>
<p>Create the new column 'Change' by applying the function to each row together with the shifted dataframe:</p>
<pre><code>df['Change'] = df.apply(lambda x, y=df.shift().fillna(0): gap(x, y), axis=1)
</code></pre>
<p><strong>result:</strong></p>
<pre><code>Activity Status Activity_Date Change
1 Inactive 06/18/22 0.0
1 Active 06/19/22 1.0
1 Inactive 06/21/22 2.0
1 Inactive 06/25/22 6.0
2 Inactive 04/14/22 0.0
2 Active 05/20/22 36.0
2 Active 05/23/22 39.0
2 Active 05/26/22 42.0
3 Active 02/01/22 0.0
3 Active 02/02/22 0.0
3 Inactive 02/23/22 21.0
3 Inactive 02/28/22 26.0
3 Inactive 03/05/22 31.0
</code></pre>
|
python|pandas
| 0
|
9,283
| 72,768,530
|
Transform Pandas DataFrame to custom JSON format
|
<p>I currently have a Python Pandas DataFrame that I would like to convert to a custom JSON structure. Does anyone know how I can achieve this? I'd love to hear it, thanks in advance!</p>
<p>Current Pandas DataFrame structure:</p>
<pre><code>ArticleNumber | Brand | Stock | GroupCode
-----------------------------------
1 | Adidas | 124 | 20.0
</code></pre>
<p>I would like to transform the above dataframe into a JSON structure below:</p>
<pre><code>{
    "Attributes": [
        {
            "Key": "ArticleNumber",
            "Values": [
                "1"
            ]
        },
        {
            "Key": "Brand",
            "Values": [
                "Adidas"
            ]
        },
        {
            "Key": "Stock",
            "Values": [
                "124"
            ]
        },
        {
            "Key": "GroupCode",
            "Values": [
                "20.0"
            ]
        }
    ]
}
</code></pre>
|
<p>Use a list comprehension with nested dict comprehensions to build the custom format:</p>
<pre><code>import json
L = df.to_dict('records')
df['DICT'] = [{"Attributes":[{'Key':k,'Values':[v]}
for k, v in x.items()]} for x in L]
df['JSON'] = [json.dumps({"Attributes":[{'Key':k,'Values':[v]}
for k, v in x.items()]}) for x in L]
print (df)
ArticleNumber Brand Stock GroupCode \
0 1 Adidas 124 20.00
1 2 Adidas1 1241 20.01
DICT \
0 {'Attributes': [{'Key': 'ArticleNumber', 'Valu...
1 {'Attributes': [{'Key': 'ArticleNumber', 'Valu...
JSON
0 {"Attributes": [{"Key": "ArticleNumber", "Valu...
1 {"Attributes": [{"Key": "ArticleNumber", "Valu...
</code></pre>
|
python|json|pandas
| 1
|
9,284
| 72,574,512
|
Is there a way to write a python function that will create 'N' arrays? (see body)
|
<p>I have a numpy array of shape (20, 3). (So, twenty 3-by-1 arrays; correct me if I'm wrong, I am still pretty new to Python.)</p>
<p>I need to separate it into 3 arrays of shape 20,1 where the first array is 20 elements that are the 0th element of each 3 by 1 array. Second array is also 20 elements that are the 1st element of each 3 by 1 array, etc.</p>
<p>I am not sure if I need to write a function for this. Here is what I have tried:
Essentially I'm trying to create an array of 3 20 by 1 arrays that I can later index to get the separate 20 by 1 arrays.</p>
<pre><code>a = np.load() #loads file
num=20 #the num is if I need to change array size
num_2=3
for j in range(0,num):
for l in range(0,num_2):
array_elements = np.zeros(3)
array_elements[l] = a[j:][l]
</code></pre>
<p>This gives the following error:</p>
<pre><code>ValueError: setting an array element with a sequence
</code></pre>
<p>
I have also tried making it a dictionary and making the dictionary values lists that are appended, but it only gives the first or last value of the 20 that I need.</p>
|
<p>Your array has shape (20, 3), this means it's a 2-dimensional array with 20 rows and 3 columns in each row.</p>
<p>You can access data in this array by indexing using numbers or ':' to indicate ranges. You want to split this in to 3 arrays of shape (20, 1), so one array per column. To do this you can pick the column with numbers and use ':' to mean 'all of the rows'. So, to access the three different columns: <code>a[:, 0]</code>, <code>a[:, 1]</code> and <code>a[:, 2]</code>.</p>
<p>You can then assign these to separate variables if you wish e.g. <code>arr = a[:, 0]</code> but this is just a reference to the original data in array a. This means any changes in arr will also be made to the corresponding data in a.</p>
<p>If you want to create a new array so this doesn't happen, you can easily use the <code>.copy()</code> function. Now if you set <code>arr = a[:, 0].copy()</code>, arr is completely separate to <code>a</code> and changes made to one will not affect the other.</p>
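<p>A short sketch of the slicing and copying behaviour described above (the array contents are illustrative):</p>

```python
import numpy as np

a = np.arange(60).reshape(20, 3)  # shape (20, 3): 20 rows, 3 columns

col0 = a[:, 0]          # a *view* of the first column, shape (20,)
col1 = a[:, 1].copy()   # an independent copy of the second column

col0[0] = -1            # writing through the view changes a as well
print(a[0, 0])          # -> -1

col1[0] = -99           # the copy is detached from a
print(a[0, 1])          # -> 1 (unchanged)
```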
|
python|arrays|numpy
| 2
|
9,285
| 59,701,178
|
randomly select position in array with condition
|
<p>I have an array like this:</p>
<pre><code>A = [[1,0,2,3],
[2,0,1,1],
[3,1,0,0]]
</code></pre>
<p>and I want to get the position of one of the cells with the value == 1 such as <code>A[0][0]</code> or <code>A[1][2]</code> and so on ...</p>
<p>So far I did this:</p>
<pre><code>A = np.array([[1,0,2,3],
[2,0,1,1],
[3,1,0,0]])
B = np.where(A == 1)
C = []
for i in range(len(B[0])):
Ca = [B[0][i], B[1][i]]
C.append(Ca)
D = random.choice(C)
</code></pre>
<p>But now I want to reuse D for getting a cell value back. Like:</p>
<p><code>A[D]</code> (which does not work) should return the same as <code>A[1][2]</code></p>
<p>Does someone how to fix this or knows even a better solution?</p>
|
<p>This should work for you.</p>
<pre><code>A = np.array([[1,0,2,3],
[2,0,1,1],
[3,1,0,0]])
B = np.where(A == 1)
C = []
for i in range(len(B[0])):
Ca = [B[0][i], B[1][i]]
C.append(Ca)
D = random.choice(C)
print(A[D[0]][D[1]])
</code></pre>
<p>This gives the output.</p>
<pre><code>>>> print(A[D[0]][D[1]])
1
</code></pre>
<p>Since the value of D would be of the sort <code>[X,Y]</code>, the value could be obtained from the matrix as <code>A[D[0]][D[1]]</code></p>
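<p>As a small aside (my addition, not part of the original answer), NumPy also accepts a tuple of indices, so the (row, col) pair can be used in a single indexing step:</p>

```python
import random

import numpy as np

A = np.array([[1, 0, 2, 3],
              [2, 0, 1, 1],
              [3, 1, 0, 0]])

# Collect (row, col) pairs where the value is 1, then pick one at random
coords = list(zip(*np.where(A == 1)))
D = random.choice(coords)

# Indexing with a tuple selects a single element: A[(r, c)] == A[r][c]
print(A[tuple(D)])  # -> 1
```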
|
python|numpy
| 1
|
9,286
| 59,663,251
|
Why is SimpleImputer's fit_transform not working for dataframe in google colab?
|
<pre><code>imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
weather_test = imp.fit_transform(weather_test)
</code></pre>
<p>The above code throws an error in Google Colab when weather_test is a pandas dataframe,
but when I change weather_test to a numpy array, it works:</p>
<pre><code>imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent', verbose=0)
weather_test = imp.fit_transform(np.array(weather_test))
</code></pre>
|
<p>Scikit-learn's API methods generally assume the input will be a numpy array rather than a pandas dataframe. For an example of how to use this functionality with a dataframe, see <a href="https://stackoverflow.com/questions/35723472/how-to-use-sklearn-fit-transform-with-pandas-and-return-dataframe-instead-of-num">How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?</a></p>
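<p>To keep the column labels when using the imputer, a minimal sketch (the weather values here are made up for illustration):</p>

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

weather_test = pd.DataFrame({'temp': [20.0, np.nan, 20.0],
                             'wind': [np.nan, 5.0, 5.0]})

imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')

# fit_transform returns a numpy array; rebuild the DataFrame around it
imputed = pd.DataFrame(imp.fit_transform(weather_test),
                       columns=weather_test.columns,
                       index=weather_test.index)
print(imputed.isnull().sum().sum())  # -> 0
```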
|
pandas|scikit-learn|google-colaboratory
| 0
|
9,287
| 59,686,320
|
Filtering and Format dataframe from Webscrape
|
<p>I am new to Python, but know R decently. I am trying to webscrape stock price data from yahoo. I successfully retrieved the price data and able to create a dataframe. However, yahoo includes when dividends are paid out. For now, I would like to ignore dividends, but I am having trouble filtering the dataframe to remove when dividends are paid out. Also, I would like to change the format of the <code>Date</code> column, for example, from <code>Mar 14, 2000</code> to <code>%Y-%m-%d</code>.</p>
<p>From webscrape:</p>
<pre class="lang-none prettyprint-override"><code>Date Open Close
Dec 23, 2019 0.611 Dividend None
Dec 01, 2019 88.38 88.90
</code></pre>
<p>First, I tried do filter on the <code>'None'</code>, but that is an empty dataframe: <code>df.loc[df.Close=='None']</code></p>
<p>Second, I tried to replace the <code>Dividend</code> part of the <code>Open</code> column with a function similar to gsub in R, but I may have done it incorrectly. The idea is that I can remove the value in that cell, replace it with a new value, <code>toRemove</code>, and then filter on this new value:</p>
<pre><code>re.sub('Dividend','Remove',df.Open,flags=re.I)
</code></pre>
<p>Within R, I know you can use <code>str(df)</code> to get the structure of a dataframe and Python uses <code>df.dtypes</code>, but this returned <code>object</code> for me, which I didn't know what to do with in order to fix the date issue.</p>
<p>Code used for Webscrape:</p>
<pre><code>import pandas as pd
import bs4 as bs
import urllib.request
url = 'https://finance.yahoo.com/quote/VT/history?period1=1547078400&period2=1607558400&interval=1mo&filter=history&frequency=1mo'
source = urllib.request.urlopen(url).read()
soup =bs.BeautifulSoup(source,'lxml')
tr = soup.find_all('tr')
data = []
# formats price data
for table in tr:
td = table.find_all('td')
row = [i.text for i in td]
data.append(row)
# labels columns
columns = ['Date', 'Open', 'High', 'Low', 'Close', 'AdjClose', 'Volume']
data = data[1:-2]
df = pd.DataFrame(data)
df.columns = columns
</code></pre>
|
<p><a href="https://stackoverflow.com/questions/38067704/how-to-change-the-datetime-format-in-pandas/38067805">This answer</a> should answer your date question. As for filtering, you should probably learn to use the <code>df.loc[]</code> functionality. <a href="https://www.kaggle.com/learn/pandas" rel="nofollow noreferrer">Kaggle</a> has an excellent resource for learning dataframe manipulation in Pandas. Granted, I do not use <code>loc</code> in this solution.</p>
<p>Anyways, using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply</a> and <a href="https://stackabuse.com/lambda-functions-in-python/" rel="nofollow noreferrer">lambda functions</a>, we can quickly iterate over every row and make the changes to your <code>Open</code> column as follows.</p>
<pre><code>df['Open'] = df.apply(lambda row: float(row['Open'].split()[0]), axis=1)
</code></pre>
<p>I tested this on your dataframe and it works. In this case, <code>df.apply()</code> with <code>axis=1</code> will apply some sort of function to every row. Here, we have chosen to use a lambda function. It's worth noting you can name 'row' whatever you want here, but basically it takes in a row named row, and then you can apply any operations you wish to it.</p>
<p>I chose to pull the <code>Open</code> column value for each row with <code>row['Open']</code>, then split that string on spaces using <code>.split()</code>, and from there you can take the first string (which we know to be the number) using indexing with <code>[0]</code>. Finally, I wrapped that in a <code>float()</code> cast to make sure it was a float and not a string.</p>
<p>Learning to use <code>apply()</code> and lambda functions together is extremely valuable in pandas. Also that kaggle site would be worth checking out at least for the pandas tutorials.</p>
|
python|pandas|dataframe|web-scraping
| 1
|
9,288
| 59,603,794
|
filtering pandas .isnull().any() output
|
<p>(This question can probably be generalized to filtering any Boolean Pandas series, but nothing that I can find on that subject addresses my issue.)</p>
<p>Given this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': (1, None, 3), 'b': (4, 5, 6), 'c': (7, 8, None), 'd': (10, 11, 12)})
df
a b c d
0 1.0 4 7.0 10
1 NaN 5 8.0 11
2 3.0 6 NaN 12
</code></pre>
<p>I need to get a list of <strong>column names</strong> that have NaN values in them (my real dataset has 80+ columns and for cleaning purposes I only want to focus on anything with NaN for the time being). This will give me a full Boolean list:</p>
<pre class="lang-py prettyprint-override"><code>df.isnull().any()
a True
b False
c True
d False
dtype: bool
</code></pre>
<p>Ideally I only want:</p>
<pre class="lang-py prettyprint-override"><code>a True
c True
</code></pre>
<p>I cannot figure out how to do that. A mask is close, but is applied to the row:</p>
<pre class="lang-py prettyprint-override"><code>mask = df.isnull().values
df[mask]
a b c d
1 NaN 5 8.0 11
2 3.0 6 NaN 12
</code></pre>
<p>Is there a way to apply them to the column axis instead, or is there a better way to do what I'm looking for?</p>
|
<p>You can perform indexing on the columns with your mask:</p>
<pre><code>>>> df.columns[df.isnull().any()]
Index(['a', 'c'], dtype='object')
</code></pre>
<p>Or if you want to show the data for the given columns:</p>
<pre><code>>>> df[df.columns[df.isnull().any()]]
a c
0 1.0 7.0
1 NaN 8.0
2 3.0 NaN
</code></pre>
|
python|pandas
| 4
|
9,289
| 59,721,654
|
Create a list where the even indexes have a path to a positive category and odd indexes have a path to the negative category
|
<p>I'm basically sorting my CNN images into a list with even and odd indexing. Even index will have positive images and odd index will have negative images. Here's my code so far:</p>
<pre><code>from PIL import Image
import matplotlib.pyplot as plt
import os
import glob
import torch
from torch.utils.data import Dataset
def show_data(data_sample, shape = (28, 28)):
plt.imshow(data_sample[0].numpy().reshape(shape), cmap='gray')
plt.title('y = ' + data_sample[1])
directory="/resources/data"
negative='Negative'
negative_file_path=os.path.join(directory,negative)
negative_files=[os.path.join(negative_file_path,file) for file in os.listdir(negative_file_path) if file.endswith(".jpg")]
negative_files.sort()
negative_files[0:3]
positive="Positive"
positive_file_path=os.path.join(directory,positive)
positive_files=[os.path.join(positive_file_path,file) for file in os.listdir(positive_file_path) if file.endswith(".jpg")]
positive_files.sort()
positive_files[0:3]
n = len(negative_files)
p = len(positive_files)
number_of_samples = n + p
print(number_of_samples)
Y=torch.zeros([number_of_samples])
Y=Y.type(torch.LongTensor)
Y.type()
Y[::2]=1
Y[1::2]=0
</code></pre>
|
<p>Replace the code with:</p>
<pre><code>directory="resources/data/"
</code></pre>
|
pytorch
| 0
|
9,290
| 59,565,756
|
Merge and delete rows according to two columns value
|
<p>I have a data frame with times and locations, and I want to merge rows with the same date and location so that the maximum time moves to the "to" column and the row whose time value was used is removed.</p>
<p>Also, if the time difference is longer than 3 hours, the merge shouldn't happen.</p>
<pre><code>date from location to
01 16:25 A
02 17:15 B
02 19:11 C
02 19:19 C
02 17:48 B
03 16:20 F
05 08:30 G
05 09:09 D
05 09:11 G
</code></pre>
<p>expected output:</p>
<pre><code>date from location to
01 16:25 A 16:25
02 17:15 B 17:48
02 19:11 C 19:19
02 19:19 C #this line will delete
02 17:48 B #this line will delete
03 16:20 F 16:20
05 08:30 G 08:30
05 09:09 D 09:09
05 09:11 G 09:11
</code></pre>
<p>I tried it with a double for loop but I'm sure there is a better pythonic way.
Any ideas?</p>
|
<pre class="lang-py prettyprint-override"><code># sample dataframe
df = pd.DataFrame(
{
"date": ["01", "01", "01", "01", "02", "02"],
"time": ["01:02", "02:03", "04:05", "06:07", "08:09", "12:10"],
"location": ["A", "A", "B", "B", "C", "C"],
}
)
# convert time column to datetime
df["time"] = pd.to_datetime(df["time"], format="%H:%M")
# aggregate by date and location
df = df.groupby(["date", "location"]).agg(["min", "max"]).reset_index()
# rename columns
df.columns = ["date", "location", "from", "to"]
# keep only groups where the time difference is at most 3 hours
df = df[(df['to'] - df['from']).abs() <= pd.Timedelta(hours=3)]
# convert 'from' and 'to' to datetime.time
df['from'] = df['from'].dt.time
df['to'] = df['to'].dt.time
</code></pre>
<pre><code>  date location      from        to
0   01        A  01:02:00  02:03:00
1   01        B  04:05:00  06:07:00
</code></pre>
|
python|pandas
| -2
|
9,291
| 40,717,614
|
Multiplication of tensors in a python list with a constant variable in tensorflow
|
<p>I have a 1D python list called <code>x</code>, of shape <code>(1000)</code> which contains tensor elements of shape <code>(3, 600)</code>. I also have a tensorflow variable <code>w</code> of shape <code>(600, 1)</code> which I would like to multiply to each tensor element of <code>x</code>. The result of each operation would be a tensor of shape <code>(3, 1)</code>.</p>
<p>Is there any way to efficiently apply <code>w</code> to each element of <code>x</code>? The logic using a python loop would be:</p>
<pre><code>for i in range(1000):
x[i] = tf.matmul(x[i], w)
</code></pre>
<p>I already tried the following:</p>
<pre><code>w = [w] * 1000
result = tf.mul(x, w)
</code></pre>
<p>But I got the following error:</p>
<pre><code>ValueError: Dimensions must be equal, but are 3 and 600 for 'Mul' (op: 'Mul') with input shapes: [1000,3,600], [1000,600,1]
</code></pre>
<p>Thanks!</p>
|
<p>Look into using <code>tf.map_fn</code>, which maps a function along the first axis of a tensor. In your case, <code>x</code> is a tensor of shape (1000, 3, 600). It does not matter that the first dim is a list; it will just act as a tensor.</p>
<p><code>tf.map_fn(lambda x_: tf.matmul(x_, W), x)</code></p>
<p>You could also use the tf.batch_matmul operation as follows.</p>
<p><code>tf.batch_matmul(x, [w] * 1000)</code></p>
<p>However, I would use <code>tf.tile</code> instead of <code>[w] * 1000</code>.</p>
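<p><code>tf.batch_matmul</code> comes from the old TensorFlow API; batched matrix products with a broadcast second operand are now standard (modern <code>tf.matmul</code> broadcasts batch dimensions the same way). A NumPy sketch of the same computation, with the shapes from the question and random data for illustration:</p>

```python
import numpy as np

x = np.random.rand(1000, 3, 600)  # stack of 1000 (3, 600) matrices
w = np.random.rand(600, 1)

# matmul broadcasts w across the leading batch dimension
result = np.matmul(x, w)          # shape (1000, 3, 1)
print(result.shape)               # -> (1000, 3, 1)

# Equivalent to the explicit Python loop from the question
looped = np.stack([x[i] @ w for i in range(1000)])
print(np.allclose(result, looped))  # -> True
```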
|
matrix|tensorflow|product
| 0
|
9,292
| 18,626,709
|
How do I use k-means on time series data that has nans?
|
<p>I have a number of time series records that overlap at some times and don't necessarily have the same start and end dates. Each row represents a different time series. I made them all the same length to maintain the actual time of data collection.</p>
<p>For example, at t(1,2,3,4,5,6):</p>
<pre><code>Station 1: nan, nan, 2, 4, 5, 10
Station 2: nan, 1, 4, nan, 10, 8
Station 3: 1, 9, 4, 7, nan, nan
</code></pre>
<p>I am trying to run a cluster analysis in Python to group the stations with similar behavior, where the timing of the behavior is important, so I can't just get rid of the nans. (That I know of).</p>
<p>Any ideas?</p>
|
<p>K-means is not the best algorithm for this kind of data.</p>
<p>K-means is designed to minimize within-cluster variance (= sum of squares, WCSS).</p>
<p>But how do you compute variance with NaNs? And how meaningful is variance here anyway?</p>
<p>Instead, you may want to use</p>
<ul>
<li>a similarity measure designed for time series, such as DTW, threshold crossing distances etc.</li>
<li>a distance based clustering algorithm. If you only have a few series, hierarchical clustering should be fine.</li>
</ul>
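<p>As an illustration of the second bullet (my sketch, not part of the original answer), a simple NaN-aware distance (a root-mean-square difference over the time points both series observed) fed into SciPy's hierarchical clustering:</p>

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

stations = np.array([
    [np.nan, np.nan, 2, 4, 5, 10],
    [np.nan, 1, 4, np.nan, 10, 8],
    [1, 9, 4, 7, np.nan, np.nan],
])

def nan_distance(a, b):
    """RMS difference over the time points where both series are observed."""
    both = ~np.isnan(a) & ~np.isnan(b)
    if not both.any():
        return np.inf  # no overlap: treat as maximally dissimilar
    return np.sqrt(np.mean((a[both] - b[both]) ** 2))

n = len(stations)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = nan_distance(stations[i], stations[j])

# Hierarchical clustering on the condensed distance matrix
Z = linkage(squareform(dist), method='average')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```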
|
python|numpy|time-series|cluster-analysis
| 2
|
9,293
| 18,569,408
|
Delete the elements of a array with a mask
|
<p>I want to delete the elements of an array with a mask. For example:</p>
<pre><code>row = 24
col = 24
size = row * col
a = numpy.ones((size))
mask = numpy.empty((col), dtype=numpy.bool)
</code></pre>
<p>The values of the <code>mask</code> are <code>False</code> or <code>True</code>.
If <code>mask[x] = True</code>, Then the element of <code>a[x * row:(x + 1) * row]</code> should be deleted.<br>
<strong>PS: In my case one index value corresponds one block elements of <code>a</code></strong></p>
|
<p>With the following syntax you can delete elements of an array:</p>
<pre><code>smaller_array =np.delete(array,index)
</code></pre>
<p>Here, <code>array</code> is the input array and <code>index</code> gives the position(s) of the elements to delete.</p>
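<p>For the block layout described in the question, one possible sketch (my addition): expand the per-block mask to per-element flags with <code>np.repeat</code>, then keep the unmasked elements via boolean indexing rather than <code>np.delete</code>:</p>

```python
import numpy as np

row, col = 24, 24
a = np.ones(row * col)

mask = np.zeros(col, dtype=bool)
mask[[0, 5]] = True  # mark blocks 0 and 5 for deletion

# Repeat each block flag 'row' times so it lines up with a's elements
# (block x covers a[x * row:(x + 1) * row]), then keep unmasked elements.
keep = ~np.repeat(mask, row)
smaller_array = a[keep]
print(smaller_array.size)  # -> (24 - 2) * 24 = 528
```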
|
python|numpy
| 1
|
9,294
| 61,917,777
|
3D object detection for custom objects; dataset creation
|
<p>How can I create a custom dataset for 3D object detection, I want to use the "Stanford3dDataset" or "Scannet" as baseline and add my object of interest in the dataset. I have the PCD files captured from the 3D camera [Realsense] and for 3D object detection, I am using the Pointnet model. </p>
<p>I see the dataset takes text files as input instead of PCD or PLY format. How do I convert PLY/PCD files to text files?</p>
|
<pre><code>import open3d as o3d
import numpy as np

# Load saved point cloud
pcd_load = o3d.io.read_point_cloud("try.ply")

# Convert PointCloud to numpy array
xyz_load = np.asarray(pcd_load.points)

# Save points into a text file
np.savetxt('test.txt', xyz_load)
</code></pre>
<p>More information is available in the link - <a href="http://www.open3d.org/docs/release/tutorial/Basic/working_with_numpy.html" rel="nofollow noreferrer">NumPy <-> open3d.PointCloud</a></p>
|
3d|object-detection|tensorflow-datasets|point-cloud-library|point-clouds
| 0
|
9,295
| 62,023,067
|
Python/Numpy - return slope for simple linear regression in a new array
|
<p>I'm searching for an answer for the following problem:</p>
<p>I want to create a numpy array where all the intercepts and slopes are stored. The slope is the increase of <code>means</code> over the <code>years</code>. I have found multiple ways to calculate the intercept/slope, but I really miss the link to get them in a new array (I'm new to Numpy so the logic is slowly getting there but I've now been stuck for a day..)</p>
<p>So.. I have an array that is structured like this:</p>
<pre><code>x = np.array([(2000, 'A', '1',5), (2001, 'A', '1', 10),
(2003, 'A', '1',15), (2004, 'A', '1', 20),
(2000, 'A', '2',1), (2001, 'A', '2', 2),
(2002, 'A', '2', 3), (2003, 'A', '2', 4)],
dtype=[('year', 'i4'), ('group1', 'U2'), ('group2', 'U2'), ('means', 'i2')])
</code></pre>
<p>And I would like to end up with an array like this:</p>
<pre><code>>desired_array
array([('A', '1', 5, 5),
('A', '2', 1, 1)],
dtype=[('group1', '<U2'), ('group2', '<U2'), ('intercept', '<i2'), ('slope', '<i2')])
</code></pre>
<p>I have gotten to this point:</p>
<pre><code>ans, indices = np.unique(x[['group1', 'group2']], return_inverse=True)
desired_array = np.empty(2, dtype=[('group1', 'U2'), ('group2', 'U2'), ('intercept', '<f8'),
('slope', '<f8')])
desired_array['group1'] = ans['group1']
desired_array['group2'] = ans['group2']
x = x[x['year'] == 2000]
desired_array['intercept'] = x['means']
</code></pre>
<p>it's a bit rough which I can still improve but the main question for me where I get stuck is how to add the slope per regression line to the array.</p>
<p>Would be great is someone could help me out :)</p>
|
<p>You can simply calculate your slopes and intercepts in lists and add them in.</p>
<pre><code>x = np.array([(2000, 'A', '1',5), (2001, 'A', '1', 10),
(2002, 'A', '1',15), (2003, 'A', '1', 20),
(2000, 'A', '2',1), (2001, 'A', '2', 2),
(2002, 'A', '2', 3), (2003, 'A', '2', 4)],
dtype=[('year', 'i4'), ('group1', 'U2'), ('group2', 'U2'), ('means', 'i2')])
</code></pre>
<p>Note that I've changed the year value of rows 3 and 4 to 2002 & 2003 as opposed to 2003 & 2004, as it wouldn't be a straight line then. I'm considering years as the x axis and means as the y axis in this example. Naturally then <code>slope, m = (y2-y1)/(x2-x1)</code> and the intercept would be <code>c = y - m*x</code> for any (x,y) pair in the corresponding line. Store the slope and intercept in two lists while going through each unique group pair.</p>
<pre><code>unique_groups = np.unique(x[['group1', 'group2']])
slopes, intercepts = [],[]
for group in unique_groups:
current_group = x[x[['group1', 'group2']]==group]
x_g = current_group['year']
y_g = current_group['means']
slope = (y_g.max()-y_g.min())/(x_g.max()-x_g.min())
intercept = y_g[0]-slope*x_g[0]
slopes.append(slope)
intercepts.append(intercept)
</code></pre>
<p>Simply plug in the calculated values into the desired array.</p>
<pre><code>desired_array = np.empty(len(unique_groups), dtype=[('group1', 'U2'), ('group2', 'U2'), ('intercept', '<f8'),
('slope', '<f8')])
desired_array['group1'] = unique_groups['group1']
desired_array['group2'] = unique_groups['group2']
desired_array['intercept'] = intercepts
desired_array['slope'] = slopes
</code></pre>
|
python|arrays|numpy|linear-regression
| 1
|
9,296
| 62,018,408
|
Applying a Numba guvectorize function over time dimension of an 3D-Array with Xarray's apply_ufunc
|
<p>I have some problems getting this to work properly and I'm also open to other suggestions as I'm not 100% sure if I'm going the right way with this.</p>
<p>Here is some simple dummy data:</p>
<pre><code>times = pd.date_range(start='2012-01-01',freq='1W',periods=25)
x = np.array([range(0,20)]).squeeze()
y = np.array([range(20,40)]).squeeze()
data = np.random.randint(3, size=(25,20,20))
ds = xr.DataArray(data, dims=['time', 'y', 'x'], coords = {'time': times, 'y': y, 'x': x})
</code></pre>
<p>For each x,y-coordinate, I want to return the longest sequence of 1s or 2s over time. So my input array is 3D (time, x, y) and my output 2D (x, y). The code in 'seq_gufunc' is inspired by <a href="https://stackoverflow.com/questions/38161606/find-the-start-position-of-the-longest-sequence-of-1s">this thread</a>.
My actual dataset is much larger (with landuse classes instead of 1s, 2s, etc) and this is only a small part of a bigger workflow, where I'm also using dask for parallel processing. So in the end this should run fast and efficiently, which is why I ended up trying to figure out how to get numba's @guvectorize and Xarray's apply_ufunc to work together: </p>
<pre><code>
@guvectorize(
"(int64[:], int64[:])",
"(n) -> (n)", target='parallel', nopython=True
)
def seq_gufunc(x, out):
f_arr = np.array([False])
bool_stack = np.hstack((f_arr, (x == 1) | (x == 2), f_arr))
# Get start, stop index pairs for sequences
idx_pairs = np.where(np.diff(bool_stack))[0].reshape(-1, 2)
# Get length of longest sequence
longest_seq = np.max(np.diff(idx_pairs))
out[:] = longest_seq
## Input for dim would be: 'time'
def apply_seq_gufunc(data, dim):
return xr.apply_ufunc(seq_gufunc,
data,
input_core_dims=[[dim]],
exclude_dims=set((dim,)),
dask="allowed")
</code></pre>
<p>There are probably some very obvious mistakes that hopefully someone can point out. I have a hard time understanding what actually goes on in the background and how I should set up the layout-string of @guvectorize and the parameters of apply_ufunc so that it does what I want. </p>
<hr>
<p>EDIT2:
This is the working solution. See @OriolAbril 's answer for more information about the parameters of <code>apply_ufunc</code> and <code>guvectorize</code>. It was also necessary to implement the <code>if...else...</code> clause in case no values match and to avoid the ValueError that would be raised. </p>
<pre><code>@guvectorize(
"(int64[:], int64[:])",
"(n) -> ()", nopython=True
)
def seq_gufunc(x, out):
f_arr = np.array([False])
bool_stack = np.hstack((f_arr, (x == 1) | (x == 2), f_arr))
if np.sum(bool_stack) == 0:
longest_seq = 0
else:
# Get start, stop index pairs for sequences
idx_pairs = np.where(np.diff(bool_stack))[0].reshape(-1, 2)
# Get length of longest sequence
longest_seq = np.max(np.diff(idx_pairs))
out[:] = longest_seq
def apply_seq_gufunc(data, dim):
return xr.apply_ufunc(seq_gufunc,
data,
input_core_dims=[[dim]],
dask="parallelized",
output_dtypes=['uint8']
)
</code></pre>
|
<p>I'd point you to <a href="https://stackoverflow.com/questions/58719696/how-to-apply-a-xarray-u-function-over-netcdf-and-return-a-2d-array-multiple-new/62012973">How to apply a xarray u_function over NetCDF and return a 2D-array (multiple new variables) to the DataSet</a>; the immediate goal is not the same, but the detailed description and examples should clarify the issue.</p>
<p>In particular, you are right in using <code>time</code> as <code>input_core_dims</code> (in order to make sure it is moved to the last dimension) and it is correctly formatted as a list of lists; however, you do not need <code>exclude_dims</code>, but rather <code>output_core_dims=[["time"]]</code>.</p>
<p>The output has the same shape as the input, however, as explained in the link above, <code>apply_ufunc</code> expects it will have same shape as <em>broadcasted dims</em>. <code>output_core_dims</code> is needed to get <code>apply_ufunc</code> to expect output with dims <code>y, x, time</code>.</p>
|
python|numpy|numba|python-xarray|numpy-ufunc
| 1
|
9,297
| 61,904,939
|
numpy:Change values in array by randomly differently selecting index
|
<p>I am new to numpy, and I am having trouble with simple management of numpy arrays.</p>
<p>I am working on a task that requires randomly selecting 12 different items (by index) from a numpy array each day and changing their values.</p>
<pre><code>import numpy as np
import random
N = 20
s = np.zeros([N])
for t in range(12):
randomindex = random.randint(0,len(s)-1)
s[randomindex] = 10
</code></pre>
<p>Thanks for answering. I'm sorry for my description; I'm not good at describing Python problems in English. I will give more detailed information.</p>
<pre><code>e.g. s=(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)
</code></pre>
<p>I randomly choose an item from the array by its index:</p>
<p>randomindex = random.randint(0, len(s)-1)</p>
<p>so randomindex will be 0-19,</p>
<p>and s[randomindex] = 10. If randomindex is 2, that means s[2] becomes 10:
s = (1, 2, 10, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20).</p>
<p>If I want to choose 3 items, I loop 3 times; how can I choose a different index each time when changing values in the array?
'Daily' means that each day I sum the new s and assign it to a new array R, like:</p>
like:</p>
<pre><code>import numpy as np
import random
N = 20
s = np.zeros([N])
T=10 #DAY
R = np.zeros([T])
for t in range(T-1):
    R[t+1] = R[t] + R[t]*3
    for i in range(12):
        randomindex = random.randint(0, len(s)-1)
        s[randomindex] = 10
    R[t] = np.sum(s)
</code></pre>
|
<p>I'm having a little difficulty understanding what you're asking, but I think you want a way to select different values in a randomized order. The problem with the above code is that you may be getting duplicates.</p>
<p>I have two solutions. One, you can use the <a href="https://coderslegacy.com/python/libraries-in-python/python-random/" rel="nofollow noreferrer">Python random library</a>. The Python random.shuffle function will randomly shuffle all the values in an iterable (such as a list or array). You can then proceed to access them sequentially as you normally would. Here's an example of Random Shuffle.</p>
<pre><code>import random

list1 = ["Apple", "Grapes", "Bananas", "Grapes"]
random.shuffle(list1)  # shuffles in place
print(list1)
</code></pre>
<p>The second solution doesn't involve a library. Create a new empty list, and every time you draw a random index, check whether it is already in the list. If it is (say you draw the value 5 a second time), discard it and draw again; otherwise, append it to the list and use it. This guarantees that all the chosen indices are different.</p>
<p>You can append to a list with the following code.</p>
<pre><code>newlist = ["Python", "Java","HTML","CSS"]
newlist.append("Ruby")
print(newlist)
</code></pre>
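<p>The duplicate-check idea described above can be sketched as follows (the name <code>picks</code> is illustrative; N = 20 matches the array from the question):</p>

```python
import random

N = 20       # size of the array, as in the question
picks = []   # distinct indices chosen so far

# keep drawing until we have 12 distinct indices
while len(picks) < 12:
    idx = random.randint(0, N - 1)
    if idx not in picks:   # reject duplicates and draw again
        picks.append(idx)

print(picks)  # 12 distinct indices in [0, 19], in random order
```

<p>For what it's worth, the standard library can replace the whole rejection loop with one call: <code>random.sample(range(N), 12)</code> returns 12 distinct indices directly.</p>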
|
python|arrays|numpy
| 0
|
9,298
| 61,735,110
|
Including file names with CSV bad-line errors
|
<p>I'm using the code below to capture bad-line errors when reading CSVs through pandas. I'm having trouble getting the filename included. I tried appending to a list during the loop, but that resulted in every file showing an error instead of just the files with errors.</p>
<p>How can I get the filename included?</p>
<pre><code>import os
import glob
import sys
from io import StringIO
import pandas as pd
from pathlib import Path

UnzipFilePoint = Path(str(os.getcwd()) + '/Unzipped/')

def FindBadLines(zipPath):
    old_stderr = sys.stderr
    result = StringIO()
    sys.stderr = result
    for f in glob.glob(zipPath):
        df = pd.read_csv(f, dtype=str, encoding="ISO-8859-1", error_bad_lines=False)
        result_string = result.getvalue()
        f_name = os.path.basename(f)
        if len(result_string) > 1:
            with open('bad_lines.txt', 'w') as bad_lines:
                for line in result_string.split(r'\n'):
                    if len(line) > 5:
                        bad_lines.write(line.replace('\n', '').replace('b', '').replace("'", ''))
                        bad_lines.write('\n')
    sys.stderr = old_stderr

zipPath = UnzipFilePoint / "*"
FindBadLines(str(zipPath))
</code></pre>
|
<p>I was able to get the following code working:</p>
<pre><code>import os
import sys
import glob
import pandas as pd
from io import StringIO
from pathlib import Path

UnzipFilePoint = Path(str(os.getcwd()) + '/Unzipped/')

def FindBadLines(zipPath):
    mylist = []
    for f in glob.glob(zipPath):
        f_name = os.path.basename(f)
        old_stderr = sys.stderr
        result = StringIO()
        sys.stderr = result
        df = pd.read_csv(f, dtype=str, encoding="ISO-8859-1",
                         error_bad_lines=False, warn_bad_lines=True)
        result_string = result.getvalue()
        sys.stderr = old_stderr
        if len(result_string) > 5:
            mylist.append([result_string, f_name])
    mynewlist = []
    for i in mylist:
        i[0] = i[0].replace('b', '').replace("'", '')
        for x in i[0].replace('\n', '').split('\\n'):
            if len(x) > 1:
                mynewlist.append([x, i[1]])
    df = pd.DataFrame(mynewlist, columns=['Error', 'File'])
    print(df)

zipPath = UnzipFilePoint / "*"
FindBadLines(str(zipPath))
</code></pre>
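<p>As an alternative sketch that ties each bad line to its source file without capturing stderr, the standard-library <code>csv</code> module can flag rows whose column count is wrong. (The helper name and the in-memory "files" below are illustrative stand-ins for the globbed CSVs.)</p>

```python
import csv
import io

def find_bad_lines(name_to_text, expected_cols):
    """Return (filename, line_number, row) for rows with the wrong column count."""
    bad = []
    for fname, text in name_to_text.items():
        reader = csv.reader(io.StringIO(text))
        for lineno, row in enumerate(reader, start=1):
            if row and len(row) != expected_cols:
                bad.append((fname, lineno, row))
    return bad

# in-memory stand-ins for the real CSV files
files = {"a.csv": "x,y\n1,2\n1,2,3\n", "b.csv": "x,y\n3,4\n"}
print(find_bad_lines(files, expected_cols=2))  # → [('a.csv', 3, ['1', '2', '3'])]
```

<p>Note also that on newer pandas versions (1.3+), <code>error_bad_lines</code>/<code>warn_bad_lines</code> are replaced by the <code>on_bad_lines</code> argument, which with the Python engine also accepts a callable, so bad lines can be collected per file directly instead of parsing stderr.</p>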
|
python|pandas|csv|dataframe
| 0
|
9,299
| 57,950,035
|
FFT of np.cos has two peaks
|
<p>For some reason, the following snippet produces an FFT graph with two negative peaks, despite the fact that <code>wave</code> only consists of one <code>cos</code> and nothing else.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
scale1 = 1.0
scale2 = 1.0e-1
N = 10000
mind = 50
maxd = 250
constantDistance = 200.0
wavelength = 400*scale2
k = 2*np.pi/wavelength
x = np.linspace(mind, maxd, N)
xCalc = (x+constantDistance)*scale1
wave = np.cos(k*xCalc)
fftwave = np.fft.fft(wave)
xfft = np.fft.fftshift(xCalc)
plt.clf()
plt.plot(xfft,fftwave,linewidth=0.25)
plt.show()
</code></pre>
<p>I've tried <code>rfft</code> and it produces one peak (because of the asymmetry), but I need all later outputs (square/triangle window, their FFT and their convolution) to be symmetrical about the "center". Also, is it possible to connect the gap that forms when using <code>np.fft.fftshift</code>?</p>
|
<p>There are a couple of small mistakes in your script and your expectation:</p>
<ol>
<li><p>A cosine has two peaks in the FT, at +freq and -freq, because <code>cos(fx) = (exp(ifx) + exp(-ifx)) / 2</code></p></li>
<li><p>Where you are using <code>fftshift</code> you should be using <code>fftfreq</code> instead, since you are plotting in the freq domain</p></li>
</ol>
<pre><code>xfft = np.fft.fftfreq(N, maxd-mind)
</code></pre>
<ol start="3">
<li><p>Even though your signal is real, its FT is still complex, so you should plot its absolute value and/or angle</p></li>
<li><p>To get rid of the little gap you can use <code>np.roll</code></p></li>
</ol>
<pre><code>xfft = np.roll(xfft, N//2)
fftwave = np.roll(fftwave, N//2)
</code></pre>
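<p>Point 1 above can be verified with a tiny pure-Python DFT: the energy of a real cosine splits evenly between a positive-frequency bin and its mirror-image negative-frequency bin. (The window length N = 64 and the 8-cycle signal below are arbitrary illustrative choices.)</p>

```python
import cmath
import math

N = 64
# a cosine completing exactly 8 cycles over the window
wave = [math.cos(2 * math.pi * 8 * n / N) for n in range(N)]

# naive DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n / N)
spectrum = [abs(sum(x * cmath.exp(-2j * math.pi * k * n / N)
                    for n, x in enumerate(wave)))
            for k in range(N)]

# the two largest bins are k = 8 and k = N - 8 (the negative frequency)
peaks = sorted(range(N), key=lambda k: spectrum[k])[-2:]
print(sorted(peaks))  # → [8, 56]
```

<p>Bin <code>N - 8</code> is the alias of frequency <code>-8</code>, which is exactly why <code>fftfreq</code>/<code>fftshift</code> are needed to plot the spectrum symmetrically about zero.</p>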
|
python-3.x|numpy|fft
| 0
|