Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64)
|---|---|---|---|---|---|---|
7,600
| 65,522,467
|
Find if there are two columns with different names but identical values using pandas
|
<p>I have a table with 30 columns, mainly numerical, with 500k rows. I would like to check whether there are two columns inside this table that have the same values for all rows.
For example:</p>
<p>I have this table:</p>
<pre><code>>>> num1 num2 num3 num4
0 5.1 2.3 7 5.1
1 2.2 4.4 3.1 2.2
2 3.7 11.1 5.9 3.7
3 4.2 1.5 0.3 4.2
</code></pre>
<p>so in this case I would like to drop column "num4" because it is identical to column "num1".</p>
<p>Until now I have only seen ways to check whether two columns contain the same values or have the same name, but not whether the two columns are identical.</p>
<p>My end goal: to get rid of duplicated columns (by values and not by name)</p>
|
<p>Try <code>duplicated</code></p>
<pre><code>out = df.loc[:,~df.T.duplicated()]
Out[397]:
num1 num2 num3
0 5.1 2.3 7.0
1 2.2 4.4 3.1
2 3.7 11.1 5.9
3 4.2 1.5 0.3
</code></pre>
<p>Or</p>
<pre><code>out = df.T.drop_duplicates().T
Out[399]:
num1 num2 num3
0 5.1 2.3 7.0
1 2.2 4.4 3.1
2 3.7 11.1 5.9
3 4.2 1.5 0.3
</code></pre>
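<p>For a wide frame with 500k rows, transposing can be slow. A minimal sketch of an alternative that compares columns directly without the transpose (assuming the columns hold hashable values; NaNs would need extra care):</p>
<pre><code>seen = set()
keep = []
for col in df.columns:
    key = tuple(df[col])      # hashable snapshot of the column's values
    if key not in seen:
        seen.add(key)
        keep.append(col)      # keep only the first column with these values
out = df[keep]
</code></pre>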
|
python|pandas|duplicates
| 2
|
7,601
| 63,469,435
|
Draw line chart and highlight minimum point in matplotlib or seaborn and with the help of pandas
|
<p>I have a data frame as shown below</p>
<p>df</p>
<pre><code>Threshold Total_cost
0.7 150040
0.8 150843
0.9 149410
1 148981
1.1 149163
1.2 150017
</code></pre>
<p>Using the above df, I would like to plot a line graph in Python with the y-axis as Total_cost and the x-axis as Threshold.</p>
<p>The y-axis range could be <code>148000</code> to <code>151000</code>.
I would also like to highlight the point where Total_cost is minimum,
which in this case is <code>(1, 148981)</code>.</p>
|
<p>You can use <code>idxmin</code> to locate the row with minimum cost:</p>
<pre><code>ax = df.plot(x='Threshold')
(df.loc[[df['Total_cost'].idxmin()]]
.plot.scatter(x='Threshold', y='Total_cost',
color='r', ax=ax)
)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/Kt2Ju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kt2Ju.png" alt="enter image description here" /></a></p>
|
python-3.x|pandas|dataframe|matplotlib|seaborn
| 2
|
7,602
| 53,549,408
|
Modifying values in pandas dataframe with a condition
|
<p>Given this dataframe;</p>
<pre><code>df = pd.DataFrame({'col1': ['apple','lemon','orange','grape'],
'col2':['franceCNTY','italy','greeceCNTY','spain']})
</code></pre>
<p>I'd like to change the values in col2 with this rule:
if the value contains CNTY, leave it as it is;
otherwise set the value to NaN.</p>
<p>So, the final dataframe will contain the below values;</p>
<pre><code>df2 = pd.DataFrame({'col1': ['apple','lemon','orange','grape'],
'col2':['franceCNTY',np.nan,'greeceCNTY',np.nan]})
</code></pre>
<p>How can I change these values?
Thanks</p>
|
<h3><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.where.html" rel="nofollow noreferrer"><code>where</code></a></h3>
<p>You can use <code>where</code> in-place or not in-place:</p>
<pre><code>df['col2'] = df['col2'].where(df['col2'].str.contains('CNTY'))
print(df)
col1 col2
0 apple franceCNTY
1 lemon NaN
2 orange greeceCNTY
3 grape NaN
# in place version
df['col2'].where(df['col2'].str.contains('CNTY'), inplace=True)
</code></pre>
|
python|pandas|dataframe
| 0
|
7,603
| 53,439,773
|
Extract data from a pandas series if the values are in a dictionary-like format
|
<p>I try the solution in <a href="https://stackoverflow.com/questions/45927936/extracting-dictionary-values-from-a-pandas-dataframe">Extracting dictionary values from a pandas dataframe</a> But it didn't work.</p>
<p>I have a pandas.core.series.Series with the following general format:</p>
<pre><code>0 {'hashtags': [], 'symbols': [], 'user_mentions...
1 {'hashtags': [], 'symbols': [], 'user_mentions...
2 {'hashtags': [], 'symbols': [], 'user_mentions...
3 {'hashtags': [], 'symbols': [], 'user_mentions...
...
</code></pre>
<p>the specific format of each one is similar to the following:</p>
<pre><code>{'hashtags': [],
'symbols': [],
'user_mentions': [{'screen_name': 'jose_m',
'id_str': '132',
'name': 'Jose',
'indices': [0, 10],
'id': 103},
{'screen_name': 'paul',
'id_str': '243403',
'name': 'Jorge',
'indices': [50, 64],
'id': 2423}],
'urls': []}
</code></pre>
<p>I get that by placing the index zero on the variable, <code>entities[0]</code> (the index may change).</p>
<p>I need to extract all the <strong>screen_name</strong> and <strong>name</strong> values inside user_mentions. Thanks :)</p>
|
<p>Here is an example with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply</a>, which for each entry of <code>entities</code> returns a list with a tuple for each <code>user_mention</code>:</p>
<pre><code>def find_user_mention(user_mention):
return (user_mention['screen_name'], user_mention['name'])
df['entities'].apply(lambda x: [find_user_mention(user_mention) for user_mention in x['user_mentions']])
</code></pre>
<p>Example output with random data: </p>
<pre><code>0 [(NunkMasKKs, SUSHIPLANERO )]
1 [(leobilanski, Leo Bilanski)]
2 [(romerodiario, El Profe Romero)]
3 [(HugoYasky, Hugo Yasky)]
4 [(marianorecalde, Mariano Recalde)]
5 [(cyngarciaradio, Cynthia García)]
</code></pre>
|
python|pandas
| 0
|
7,604
| 53,414,785
|
Delete 1st and 3rd row of Df while keeping 2nd row as header
|
<p>I started learning this stuff today, so please forgive my ignorance.</p>
<p>My data is in csv and as described in the title, I would like to exclude the first and third row while keeping the second row as headers. The csv looks like this:</p>
<pre><code>"Title"
Date, time, count, hours, average
"empty row"
</code></pre>
<p>The data set starts in the row following empty row.</p>
|
<p>Using the <code>skiprows</code> parameter of <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>pd.read_csv</code></a>:</p>
<pre><code>from io import StringIO
x = StringIO("""Title
Date, time, count, hours, average
2018-01-01, 15:23, 16, 10, 5.5
2018-01-02, 16:33, 20, 5, 12.25
""")
# replace x with 'file.csv'
df = pd.read_csv(x, skiprows=[0, 2])
print(df)
Date time count hours average
0 2018-01-01 15:23 16 10 5.50
1 2018-01-02 16:33 20 5 12.25
</code></pre>
<p>In fact, <code>skiprows=[0]</code> suffices as empty rows are excluded by default, i.e. default behavior is <code>skip_blank_lines=True</code>.</p>
|
python|pandas|csv
| 3
|
7,605
| 72,019,016
|
Subtract number from dataframe depending on a corresponding number in the dataframe
|
<p>I have a dataframe of 2 columns, say df:</p>
<pre><code> year cases
1.1 12
1.2 14
1.4 19
1.6 23
1.6 14
2.1 26
2.5 27
2.7 35
3.1 21
3.3 24
3.8 28
</code></pre>
<p>and a list of false cases, say f</p>
<pre><code> f = [3,4,8]
</code></pre>
<p>I want to write a code so that for every +1 year, the number of cases is subtracted by its respective 'false cases'.</p>
<p>So for example, whilst 1 < year < 2, I want: cases - 3</p>
<p>Then when 2 < year < 3, I want: cases - 4</p>
<p>and when 3 < year < 4, I want: cases - 8</p>
<p>and so on</p>
<p>so that a new column, say actual cases is:</p>
<pre><code> year actual cases
1.1 9 (12-3)
1.2 11 (14-3)
1.4 16 (19-3)
1.6 20 (23-3)
1.6 11 (14-3)
2.1 22 (26-4)
2.5 23 (27-4)
2.7 31 (35-4)
3.1 13 (21-8)
3.3 16 (24-8)
3.8 20 (28-8)
</code></pre>
<p>I tried something along the lines of</p>
<pre><code> for i in range(0,df[["year"]:
if int(df[["year"][i]) > int(df[["year"][i+1]):
df[["cases"][i] - f[i]
</code></pre>
<p>But this is clearly wrong and I am not sure what to do.</p>
|
<p>You can do something like this:</p>
<pre><code>df['cases'] - (df['year']//1).astype(int).map({e:i for e, i in enumerate(f, 1)})
</code></pre>
<p>or</p>
<pre><code>df['cases'] - pd.Series(f).reindex(df['year']//1-1).to_numpy()
</code></pre>
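<p>To store the result as the new column from the expected output, either expression can simply be assigned (a sketch reusing the first one):</p>
<pre><code>df['actual cases'] = df['cases'] - (df['year']//1).astype(int).map({e:i for e, i in enumerate(f, 1)})
</code></pre>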
|
python|pandas|dataframe|loops
| 1
|
7,606
| 71,942,740
|
How to iterate rows and check for multiple changes in column?
|
<p>I have multiple columns in my data frame but three columns are important.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>Date</th>
<th>Answer</th>
</tr>
</thead>
<tbody>
<tr>
<td>12222</td>
<td>2020-05-01</td>
<td>N</td>
</tr>
<tr>
<td>12222</td>
<td>2020-05-02</td>
<td>Y</td>
</tr>
<tr>
<td>12222</td>
<td>2020-05-03</td>
<td>N</td>
</tr>
<tr>
<td>12222</td>
<td>2020-05-04</td>
<td>Y</td>
</tr>
<tr>
<td>12223</td>
<td>2020-05-06</td>
<td>Y</td>
</tr>
<tr>
<td>12224</td>
<td>2020-05-07</td>
<td>Y</td>
</tr>
<tr>
<td>12224</td>
<td>2020-05-08</td>
<td>Y</td>
</tr>
<tr>
<td>12225</td>
<td>2020-05-09</td>
<td>N</td>
</tr>
<tr>
<td>12225</td>
<td>2020-05-09</td>
<td>Y</td>
</tr>
</tbody>
</table>
</div>
<p>For each id that has multiple changes in its answer, I need to find and count those changes in a separate column. N -> Y is a change, as is Y -> N, but it has to be for the same id.</p>
<p>Therefore N->Y->N->Y would be 3 changes.
N->Y->N->N is only two changes.</p>
<p>I have sorted the data frame in ascending order using the below code and tried to do a count, but I am only getting a count of the number of values corresponding to that id rather than of the changes.</p>
<pre><code>df_changes = df.sort_values(by=['id', 'Date'], ascending =[True,True])
df_changes_2 = df_changes.groupby(['id','answer']).size().reset_index(name="Count")
</code></pre>
<p>Possible Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>Date</th>
<th>Answer</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>12222</td>
<td>2020-05-01</td>
<td>N</td>
<td>3</td>
</tr>
<tr>
<td>12223</td>
<td>2020-05-06</td>
<td>Y</td>
<td>0</td>
</tr>
<tr>
<td>12224</td>
<td>2020-05-07</td>
<td>Y</td>
<td>0</td>
</tr>
<tr>
<td>12225</td>
<td>2020-05-09</td>
<td>N</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can try <code>transform</code> then <code>drop_duplicates</code></p>
<pre class="lang-py prettyprint-override"><code>df['Count'] = df.groupby('id', as_index=False)['Answer'].transform(lambda col: col.ne(col.shift()).sum() - 1)
</code></pre>
<pre><code>print(df)
id Date Answer Count
0 12222 2020-05-01 N 3
1 12222 2020-05-02 Y 3
2 12222 2020-05-03 N 3
3 12222 2020-05-04 Y 3
4 12223 2020-05-06 Y 0
5 12224 2020-05-07 Y 0
6 12224 2020-05-08 Y 0
7 12225 2020-05-09 N 1
8 12225 2020-05-09 Y 1
</code></pre>
<pre><code>df = df.drop_duplicates('id', keep='first')
</code></pre>
<pre><code>print(df)
id Date Answer Count
0 12222 2020-05-01 N 3
4 12223 2020-05-06 Y 0
5 12224 2020-05-07 Y 0
7 12225 2020-05-09 N 1
</code></pre>
|
python|pandas
| 0
|
7,607
| 55,422,449
|
Save pandas.DataFrame.hist with multiple axes in one figure
|
<p>I have a pandas dataframe with 24 columns, and I use the function <code>pandas.DataFrame.hist</code> to generate a figure with some subplots.</p>
<pre><code>plot = df.hist(figsize = (20, 15))
plot
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x0000000018D47EB8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001C1200B8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001C1EADD8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001C20A4A8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B61AB38>],
[<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B61AB70>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B671898>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B698F28>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B6C85F8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B6F2C88>],
[<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B723358>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B74A9E8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B77B0B8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B7A1748>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B7C8DD8>],
[<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B7F84A8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B821B38>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B853208>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B87B898>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B8A2F28>],
[<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B8D35F8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B8FAC88>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B92C358>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B9549E8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000001B9850B8>]],
dtype=object)
</code></pre>
<p>The problem is that when I try to save this figure to a single PNG file I get an error:</p>
<pre><code>plot.savefig(os.path.join(folder_wd, folder_output, folder_dataset,'histogram.png'))
</code></pre>
<blockquote>
<p>AttributeError: 'numpy.ndarray' object has no attribute 'savefig'</p>
</blockquote>
<p>None of the articles that I have checked so far has offered a solution</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html" rel="nofollow noreferrer">Pandas visualisation guide</a>
<a href="https://stackoverflow.com/questions/19555525/saving-plots-axessubplot-generated-from-python-pandas-with-matplotlibs-savefi">StackOverflow</a></p>
|
<p><code>savefig</code> is not a method of the <code>plot</code> object returned by <code>df.hist</code>. Try the following:</p>
<pre><code>import matplotlib.pyplot as plt
# rest of your code
plot = df.hist(figsize = (20, 15))
plt.savefig(os.path.join(folder_wd, folder_output, folder_dataset,'histogram.png'))
</code></pre>
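<p>Alternatively, the figure can be retrieved from any one of the returned axes and saved from there (a sketch, using the array of subplots already returned above):</p>
<pre><code>fig = plot[0][0].get_figure()
fig.savefig(os.path.join(folder_wd, folder_output, folder_dataset, 'histogram.png'))
</code></pre>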
|
python-3.x|pandas|matplotlib
| 1
|
7,608
| 56,655,197
|
can't Ordinal Encode my data using fit_transform of category_encoders
|
<p>I'm trying to fit_transform the <code>OrdinalEncoder</code> of <code>category_encoders</code> to one of the columns of my <a href="https://www.kaggle.com/stevezhenghp/airbnb-price-prediction" rel="nofollow noreferrer">data</a>.<br>
What I've tried, following the <a href="http://contrib.scikit-learn.org/categorical-encoding/ordinal.html" rel="nofollow noreferrer">documentation</a>, is: </p>
<pre><code> import category_encoders as ce
# here i'm defining mapping for OrdinalEncoder
property_ordinal_mapping_1 = [{"col":"property_type", "mapping": [('Apartment', 1),('House', 2),('Condominium', 3),
('Townhouse', 4),('Loft', 5),('Other', 6),
('Guesthouse', 7),('Bed & Breakfast', 8),
('Bungalow', 9),('Villa', 10),('Dorm', 11),
('Guest suite', 12),('Camper/RV', 13),
('Timeshare', 14),('Cabin', 15),('In-law', 16),
('Hostel', 17),('Boutique hotel', 18),('Boat', 19),
('Serviced apartment', 20),('Tent', 21),('Castle', 22),
('Vacation home', 23),('Yurt', 24),('Hut', 25),
('Treehouse', 26),('Chalet', 27),('Earth House', 28),
('Tipi', 29),('Train', 30),('Cave', 31),
('Casa particular', 32),('Parking Space', 33),
('Lighthouse', 34),('Island', 35)
]
},
]
# preparing the OrdinalEncoder for fitting and transforming
property_encoder_1 = ce.OrdinalEncoder(mapping = property_ordinal_mapping_1, return_df = True, cols=["property_type"])
</code></pre>
<p>The problem arises when I try to <code>fit_transform</code>:<br>
<code>df_train = property_encoder_1.fit_transform(air_cat_2)</code><br>
the error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-96-9cea1713182c> in <module>()
----> 1 df_train = property_encoder_1.fit_transform(air_cat_2)
/usr/local/lib/python3.6/dist-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
551 if y is None:
552 # fit method of arity 1 (unsupervised transformation)
--> 553 return self.fit(X, **fit_params).transform(X)
554 else:
555 # fit method of arity 2 (supervised transformation)
/usr/local/lib/python3.6/dist-packages/category_encoders/ordinal.py in fit(self, X, y, **kwargs)
139 cols=self.cols,
140 handle_unknown=self.handle_unknown,
--> 141 handle_missing=self.handle_missing
142 )
143 self.mapping = categories
/usr/local/lib/python3.6/dist-packages/category_encoders/ordinal.py in ordinal_encoding(X_in, mapping, cols, handle_unknown, handle_missing)
288 for switch in mapping:
289 column = switch.get('col')
--> 290 X[column] = X[column].map(switch['mapping'])
291
292 try:
/usr/local/lib/python3.6/dist-packages/pandas/core/series.py in map(self, arg, na_action)
3380 """
3381 new_values = super(Series, self)._map_values(
-> 3382 arg, na_action=na_action)
3383 return self._constructor(new_values,
3384 index=self.index).__finalize__(self)
/usr/local/lib/python3.6/dist-packages/pandas/core/base.py in _map_values(self, mapper, na_action)
1216
1217 # mapper is a function
-> 1218 new_values = map_f(values, mapper)
1219
1220 return new_values
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
TypeError: 'list' object is not callable
</code></pre>
<p><code>sklearn.preprocessing.OrdinalEncoder</code> gave a similar error.<br>
What am I doing wrong and how do I solve this? I've double-checked the class names of my column and rewrote the whole code, and nothing seemed to help. Or is there any alternate way I can do this?<br>
Please don't mark my question as duplicate.</p>
|
<p>The documentation is incorrect. The syntax for <code>mapping</code> is now different. See here: <a href="https://github.com/scikit-learn-contrib/categorical-encoding/issues/193" rel="nofollow noreferrer">https://github.com/scikit-learn-contrib/categorical-encoding/issues/193</a></p>
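<p>In practice this means the <code>mapping</code> value should be a dict (or a <code>pandas.Series</code>) of category-to-integer pairs rather than a list of tuples, since it is passed straight to <code>Series.map</code> (as the traceback shows). A shortened sketch of the corrected mapping, truncated to a few of the categories from the question:</p>
<pre><code>property_ordinal_mapping_1 = [{"col": "property_type",
                               "mapping": {"Apartment": 1, "House": 2, "Condominium": 3,
                                           # ... remaining property types ...
                                           "Island": 35}}]
property_encoder_1 = ce.OrdinalEncoder(mapping=property_ordinal_mapping_1,
                                       return_df=True, cols=["property_type"])
df_train = property_encoder_1.fit_transform(air_cat_2)
</code></pre>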
|
python|pandas|machine-learning|categorical-data|ordinal
| 1
|
7,609
| 56,471,928
|
How to use only one GPU for tensorflow session?
|
<p>I have two GPUs.
My program uses TensorRT and Tensorflow.</p>
<p>When I run only the TensorRT part, it is fine.
When I run it together with the Tensorflow part, I get an error:</p>
<pre><code>[TensorRT] ERROR: engine.cpp (370) - Cuda Error in ~ExecutionContext: 77 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
</code></pre>
<p>The issue occurs when the Tensorflow session starts, as follows:</p>
<pre><code>self.graph = tf.get_default_graph()
self.persistent_sess = tf.Session(graph=self.graph, config=tf_config)
</code></pre>
<p>It loads two GPUs as </p>
<pre><code>2019-06-06 14:15:04.420265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6965 MB memory) -> physical GPU (device: 0, name: Quadro P4000, pci bus id: 0000:04:00.0, compute capability: 6.1)
2019-06-06 14:15:04.420713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7159 MB memory) -> physical GPU (device: 1, name: Quadro P4000, pci bus id: 0000:05:00.0, compute capability: 6.1)
</code></pre>
<p>I tried to load only one GPU by:</p>
<p>(1) Putting this at the top of the Python code:</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
</code></pre>
<p>(2)</p>
<pre><code>with tf.device('/device:GPU:0'):
self.graph = tf.get_default_graph()
self.persistent_sess = tf.Session(graph=self.graph, config=tf_config)
</code></pre>
<p>Both don't work.</p>
<p>How to solve the problem?</p>
|
<p>I managed to load only one GPU by placing the following lines at the top of the Python code.</p>
<pre><code>import sys, os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
</code></pre>
|
python|tensorflow|cuda|tensorrt
| 11
|
7,610
| 66,928,491
|
Why the two outputs aren't equal in Pytorch derivation?
|
<p>I thought that, from the viewpoint of the derivative of composite functions (chain rule), these two representations should be equal.</p>
<pre><code>import torch
x = torch.rand(3, requires_grad=True)
y = x + 2
z = 2 * y * y
z = z.mean()
z.backward()
print(4 * (x + 2))
print(x.grad)
</code></pre>
<p>But the outputs are as follows:</p>
<pre><code>tensor([ 8.5011, 8.6625, 11.9508], grad_fn=<MulBackward0>)
tensor([2.8337, 2.8875, 3.9836])
</code></pre>
<p>Why?</p>
|
<p>They are the same, just off by a constant factor of 3. That is because when you take mean of <code>z</code>, that essentially sums each element of <code>z</code> and divides by 3 (because here your tensor is a 3 element vector). So essentially a 1/3 factor appears in the gradient, which is multiplied upstream.</p>
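<p>A quick check: dividing the analytic gradient by the 3 elements the mean averages over reproduces <code>x.grad</code>.</p>
<pre><code>print(4 * (x + 2) / 3)   # matches x.grad
</code></pre>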
|
python|pytorch|derivative
| 0
|
7,611
| 47,134,210
|
Pandas and Dictionary: Convert Dict to DataFrame and use inner keys in values as DataFrame column headers
|
<p>I have the following dictionary: </p>
<pre><code>{
0: [{1: 0.0}, {2: 0.0}, {3: 0.0}, {4: 0.0}, {5: 0.0}, {6: 0.0}, {7: 0.0}, {8: 0.0}],
1: [{1: 0.0}, {2: 0.0}, {3: 0.0}, {4: 0.0}, {5: 0.0}, {6: 0.0}, {7: 0.0}, {8: 0.0}],
2: [{1: 0.21150571615476177}, {2: 0.20021993193784904}, {3: 0.24673408701244148}, {4: 0.26073319330403394}, {5: 0.0}, {6: 0.27012912297379343}, {7: 0.0}, {8: 0.0}],
3: [{1: 0.2786416467397351}, {2: 0.2006495239101905}, {3: 0.21600480247194567}, {4: 0.25724906204967557}, {5: 0.0}, {6: 0.26817162148227375}, {7: 0.0}, {8: 0.0}],
4: [{1: 0.2755030949011681}, {2: 0.20315735111595443}, {3: 0.21705903867972787}, {4: 0.2564000954604151}, {5: 0.0}, {6: 0.26903863724054405}, {7: 0.0}, {8: 0.0}],
5: [{1: 0.27334751895045045}, {2: 0.2012256178641117}, {3: 0.22266330432504813}, {4: 0.25925509529304697}, {6: 0.27562843736621906}],
6: [{1: 0.27739942084587565}, {2: 0.198682325880847}, {3: 0.2169017627591854}, {4: 0.25843774856843105}, {6: 0.26996683786070946}],
7: [{1: 0.2726461255684456}, {2: 0.19778567408338052}, {3: 0.2197858176643358}, {4: 0.26053721842016453}, {6: 0.26812789513005875}]
}
</code></pre>
<p>How do I convert this dictionary to a <strong>Pandas DataFrame</strong> and make sure that the inner keys in each value are the column headers for the corresponding row-value?<br>
Note that in rows 5, 6 and 7 the values for inner-keys 5, 7 and 8 are missing, which means that I want a DataFrame in the following manner: </p>
<pre><code> 1 2 3 4 5 6 7 8
0 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.0 0.0
1 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.0 0.0
2 0.211651 0.202256 0.244509 0.256969 0.000000 0.275521 0.0 0.0
3 0.273670 0.199995 0.222494 0.256303 0.000000 0.275037 0.0 0.0
4 0.280948 0.200235 0.218654 0.256737 0.000000 0.276424 0.0 0.0
5 0.281718 0.197531 0.217461 0.256043 NaN 0.271181 NaN NaN
6 0.279024 0.200089 0.218020 0.261419 NaN 0.272113 NaN NaN
7 0.278222 0.203448 0.219254 0.261846 NaN 0.269600 NaN NaN
</code></pre>
<p>(The values are arbitrary and it doesn't matter what they are).<br>
I have no starting point except that I know how to output a DataFrame to a CSV file using <code>pd.to_csv()</code>.<br>
Any help is appreciated. Thanks in advance.<br>
(Using Ubuntu 14.04 32-Bit VM and Python 2.7) </p>
<p><strong>P.S.</strong> a similar question was left unanswered since it had other users confused for not framing the sentences properly. It has since been deleted.<br>
I hope that this question is clear and precise.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with list comprehension and then little hack - sum of all columns by second level, what join all non <code>NaN</code>s column:</p>
<pre><code>df = pd.concat({k: pd.DataFrame(v) for k,v in d.items()}, 1).stack().T.sum(level=1, axis=1)
print (df)
1 2 3 4 5 6 7 8
0 0.000000 0.000000 0.000000 0.000000 0.0 0.000000 0.0 0.0
1 0.000000 0.000000 0.000000 0.000000 0.0 0.000000 0.0 0.0
2 0.211506 0.200220 0.246734 0.260733 0.0 0.270129 0.0 0.0
3 0.278642 0.200650 0.216005 0.257249 0.0 0.268172 0.0 0.0
4 0.275503 0.203157 0.217059 0.256400 0.0 0.269039 0.0 0.0
5 0.273348 0.201226 0.222663 0.259255 NaN 0.275628 NaN NaN
6 0.277399 0.198682 0.216902 0.258438 NaN 0.269967 NaN NaN
7 0.272646 0.197786 0.219786 0.260537 NaN 0.268128 NaN NaN
</code></pre>
<p>Detail:</p>
<pre><code>print (pd.concat({k: pd.DataFrame(v) for k,v in d.items()}, 1).stack().T)
0 1 2 3 4 5 6 7
1 2 3 4 5 6 6 7 8
0 0.000000 0.000000 0.000000 0.000000 0.0 NaN 0.000000 0.0 0.0
1 0.000000 0.000000 0.000000 0.000000 0.0 NaN 0.000000 0.0 0.0
2 0.211506 0.200220 0.246734 0.260733 0.0 NaN 0.270129 0.0 0.0
3 0.278642 0.200650 0.216005 0.257249 0.0 NaN 0.268172 0.0 0.0
4 0.275503 0.203157 0.217059 0.256400 0.0 NaN 0.269039 0.0 0.0
5 0.273348 0.201226 0.222663 0.259255 NaN 0.275628 NaN NaN NaN
6 0.277399 0.198682 0.216902 0.258438 NaN 0.269967 NaN NaN NaN
7 0.272646 0.197786 0.219786 0.260537 NaN 0.268128 NaN NaN NaN
</code></pre>
|
python|pandas|dictionary
| 1
|
7,612
| 68,439,695
|
Pandas highlight specific number with different color in dataframe
|
<p>I'm trying to highlight specific number with different color in my dataframe below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[10,3,1], [3,7,2], [2,4,4]], columns=list("ABC"))
</code></pre>
<p>I can highlight a specific number by one color, for example:</p>
<pre><code>def HIGHLIGHT_COLOR(x):
criteria = x == 4
return ['background-color: green' if i else '' for i in criteria]
df.style.apply(HIGHLIGHT_COLOR)
</code></pre>
<p>What I need is to highlight every individual number; here's my code, but it doesn't work:</p>
<pre><code>def HIGHLIGHT_COLOR(x):
if x == 4:
color = green
elif x == 2:
color = yellow
elif x == 3:
color = grey
elif x == 7:
color = purple
elif x == 10:
color = black
return f'color: {color}'
df.style.apply(HIGHLIGHT_COLOR)
</code></pre>
<p>Can anyone assist with this? Thank you!</p>
|
<p>An option with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer">Series.map</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer">Series.fillna</a>:</p>
<pre><code>def HIGHLIGHT_COLOR(x):
return ('background-color: ' + x.map({
4: 'green',
2: 'yellow',
3: 'grey',
7: 'purple',
10: 'black'
})).fillna('')
df.style.apply(HIGHLIGHT_COLOR)
</code></pre>
<p><a href="https://i.stack.imgur.com/5ojgt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ojgt.png" alt="styled" /></a></p>
|
python|pandas|pandas-styles
| 2
|
7,613
| 68,378,928
|
how to reset index during `ngroups` to display group by column?
|
<p>I have a dataframe like as shown below</p>
<pre><code>df = pd.DataFrame({'sub_id': [101,101,101,102,102,103,104,104,105],
'test_id':['A1','A1','C1','A1','B1','D1','E1','A1','F1']})
</code></pre>
<p>I am numbering each of the unique groups of <code>sub_id</code> and <code>test_id</code> using below code (thanks to this <a href="https://stackoverflow.com/questions/68285984/how-to-generate-sequence-but-avoid-numbering-duplicates-in-groups">post</a>)</p>
<pre><code>df.groupby(['sub_id','test_id'],sort=False).ngroup()+1
</code></pre>
<p>but I want to display the groupby columns as well in the output. So I tried the below:</p>
<pre><code>df.set_index(['sub_id','test_id']).groupby(['sub_id','test_id'],sort=False).ngroup()+1
df.set_index(['sub_id','test_id']).groupby(['sub_id','test_id'],sort=False,as_index=False).ngroup()+1
</code></pre>
<p>Both options display the groupby columns, but I am unable to <code>reset the index</code> to make it uniform.</p>
<p>Currently the output looks like below which is incorrect</p>
<p><a href="https://i.stack.imgur.com/ypZ9b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ypZ9b.png" alt="enter image description here" /></a></p>
<p><strong>but I want to reset the index (in the same line of code) and display it correctly</strong></p>
<p>Can help me with this please?</p>
|
<p>Wrap the code in parentheses <code>()</code>, then use <code>reset_index()</code>:</p>
<pre><code>out=(df.set_index(['sub_id','test_id']).groupby(['sub_id','test_id'],sort=False).ngroup()+1).reset_index()
</code></pre>
<p>OR</p>
<p>Instead of <code>+1</code>, use the <code>add(1)</code> method, then use <code>reset_index()</code>:</p>
<pre><code>out=df.set_index(['sub_id','test_id']).groupby(['sub_id','test_id'],sort=False).ngroup().add(1).reset_index()
</code></pre>
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 1
|
7,614
| 68,069,432
|
Pandas - Converting columns in percentage based on first columns value
|
<p>There is a data frame with totals and counts:</p>
<pre><code>pd.DataFrame({
'categorie':['a','b','c'],
'total':[100,1000,500],
'x':[10,100,5],
'y':[100,1000,500]
})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>categorie</th>
<th>total</th>
<th>x</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>100</td>
<td>10</td>
<td>100</td>
</tr>
<tr>
<td>b</td>
<td>1000</td>
<td>100</td>
<td>1000</td>
</tr>
<tr>
<td>c</td>
<td>500</td>
<td>5</td>
<td>500</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to convert the counted columns into percentages based on the totals:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>categorie</th>
<th>total</th>
<th>x%</th>
<th>y%</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>100</td>
<td>10</td>
<td>100</td>
</tr>
<tr>
<td>b</td>
<td>1000</td>
<td>10</td>
<td>100</td>
</tr>
<tr>
<td>c</td>
<td>500</td>
<td>1</td>
<td>100</td>
</tr>
</tbody>
</table>
</div>
<p>Following will work for a series:</p>
<pre><code>(100 * df['x'] / df['total']).round(1)
</code></pre>
<p>How to apply this for all columns in the data frame?</p>
|
<p>Try via the <code>div()</code>, <code>mul()</code> and <code>astype()</code> methods:</p>
<pre><code>df[['x%','y%']]=df[['x','y']].div(df['total'],axis=0).mul(100).astype(int)
</code></pre>
<p>output of <code>df</code>:</p>
<pre><code> categorie total x y x% y%
0 a 100 10 100 10 100
1 b 1000 100 1000 10 100
2 c 500 5 500 1 100
</code></pre>
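<p>If decimals should be kept, as in the question's single-series example, <code>round(1)</code> can replace the integer cast (a sketch):</p>
<pre><code>df[['x%','y%']] = df[['x','y']].div(df['total'], axis=0).mul(100).round(1)
</code></pre>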
|
pandas|percentage
| 0
|
7,615
| 68,132,300
|
Select most recent example of multiindex dataframe
|
<p>I have a similar problem as in <a href="https://stackoverflow.com/questions/37905060/getting-the-last-element-of-a-level-in-a-multiindex">Getting the last element of a level in a multiindex</a>. In the mentioned question the multiindex dataframe has, for each group, a start number which is always the same.</p>
<p>However, my problem is slightly different. I again have two columns: one column with an integer (in the MWE below it is a bool) and a second column with a datetime index. Similarly to the above example, I want to select, for each unique value in the first column, the last row. In my example that means the value with the most recent timestamp. The solution from the question above does not work, since I have no fixed start value for the second column.</p>
<p>MWE:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(range(10), index=pd.date_range(pd.Timestamp("2020.01.01"), pd.Timestamp("2020.01.01") + pd.Timedelta(hours=50), 10))
mask = (df.index.hour > 1) & (df.index.hour < 9)
df.groupby(mask)
df = df.groupby(mask).rolling("4h").mean()
</code></pre>
<p>The resulting dataframe looks like:</p>
<pre><code> 0
False 2020-01-01 00:00:00 0.0
2020-01-01 11:06:40 2.0
2020-01-01 16:40:00 3.0
2020-01-01 22:13:20 4.0
2020-01-02 09:20:00 6.0
2020-01-02 14:53:20 7.0
2020-01-02 20:26:40 8.0
True 2020-01-01 05:33:20 1.0
2020-01-02 03:46:40 5.0
2020-01-03 02:00:00 9.0
</code></pre>
<p>Now, I want to get for each value in the first column the row with the most recent time stamp. I.e., I would like to get the following dataframe:</p>
<pre><code> 0
False 2020-01-02 20:26:40 8.0
True 2020-01-03 02:00:00 9.0
</code></pre>
<p>I would really appreciate ideas like in the mentioned link which do this.</p>
|
<p>Assuming values in level 1 are sorted try with <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.tail.html" rel="nofollow noreferrer"><code>groupby tail</code></a>:</p>
<pre><code>out = df.groupby(level=0).tail(1)
</code></pre>
<p><code>out</code>:</p>
<pre><code> 0
False 2020-01-02 20:26:40 8.0
True 2020-01-03 02:00:00 9.0
</code></pre>
<p>If not <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a> first:</p>
<pre><code>out = df.sort_index(level=1).groupby(level=0).tail(1)
</code></pre>
<p><code>out</code>:</p>
<pre><code> 0
False 2020-01-02 20:26:40 8.0
True 2020-01-03 02:00:00 9.0
</code></pre>
|
python|pandas
| 1
|
7,616
| 59,098,078
|
"AttributeError:" After converting script to TF2
|
<p>I used the automatic update script but am still running into some issues. This area seems to be the part which is causing the issues; any help would be appreciated.</p>
<p>Error issued:" AttributeError: 'BatchDataset' object has no attribute 'output_types' "</p>
<pre><code># network parameters
n_hidden_1 = 50
n_hidden_2 = 25
ds_train = tf.data.Dataset.from_tensor_slices((X_placeholder, Y_placeholder)).shuffle(buffer_size=round(len(X_train) * 0.3)).batch(batch_size_placeholder)
ds_test = tf.data.Dataset.from_tensor_slices((X_placeholder, Y_placeholder)).batch(batch_size_placeholder)
ds_iter = tf.compat.v1.data.make_one_shot_iterator(ds_train.output_types, ds_train.output_shapes)
next_x, next_y = ds_iter.get_next()
train_init_op = ds_iter.make_initializer(ds_train)
test_init_op = ds_iter.make_initializer(ds_test)
</code></pre>
|
<p>Replace:</p>
<pre><code>ds_train.output_types
</code></pre>
<p>with:</p>
<pre><code>tf.compat.v1.data.get_output_types(ds_train)
</code></pre>
<p>similarly you may need <code>tf.compat.v1.data.get_output_shapes(ds_train)</code></p>
|
tensorflow|machine-learning|neural-network|tensorflow2.0
| 2
|
7,617
| 59,449,797
|
tensorflow working on the anaconda prompt but not on the command prompt
|
<p><a href="https://i.stack.imgur.com/iskOL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iskOL.png" alt="Example"></a></p>
<p>I've just installed tensorflow on my machine and I decided to install it with anaconda. It works just fine there, but when I try to run it from the command prompt it acts as if it didn't exist.
I am having a hard time getting tensorflow to work.</p>
|
<p>From the anaconda window you activated your <code>env</code>, <code>base</code> in your case.
When you open command prompt you have to activate that <code>env</code></p>
<p><code>activate base</code></p>
<p>then run <code>python</code></p>
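<p>On newer conda versions the command is <code>conda activate base</code> instead; after activating, running <code>python</code> from that same prompt should find the tensorflow installed in the environment.</p>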
|
tensorflow|installation|anaconda
| 0
|
7,618
| 56,946,779
|
Pandas: Comparing a row value and modify next column's row values
|
<p>I have this Pandas Dataframe:</p>
<pre><code> A B
0 xyz Lena
1 NaN J.Brooke
2 NaN B.Izzie
3 NaN B.Rhodes
4 NaN J.Keith
.....
</code></pre>
<p>I want to compare the values of column B such that if a row value begins with B, then "new" should be written in its adjacent row of column A, and similarly "old" if it begins with J. Below is what I'm expecting:</p>
<pre><code> A B
0 xyz Lena
1 old J.Brooke
2 new B.Izzie
3 new B.Rhodes
4 old J.Keith
.....
</code></pre>
<p>I'm unable to understand how I can do this. To begin with I can use <code>startswith()</code>, but then how do I compare one row value and place the required value right in the adjacent row of another column?
This is a small case; I'm trying a lot of messier things... Pandas is indeed powerful!</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>Series.str.startswith</code></a> if need set new values by conditions:</p>
<pre><code>m1 = df['B'].str.startswith('B')
m2 = df['B'].str.startswith('J')
</code></pre>
<p>If need also test missing values chain conditions by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p>
<pre><code>m1 = df['B'].str.startswith('B') & df['A'].isna()
m2 = df['B'].str.startswith('J') & df['A'].isna()
</code></pre>
<hr>
<pre><code>df['A'] = np.select([m1, m2], ['new','old'], df['A'])
print (df)
A B
0 xyz Lena
1 old J.Brooke
2 new B.Izzie
3 new B.Rhodes
4 old J.Keith
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>df.loc[m1, 'A'] = 'new'
df.loc[m2, 'A'] = 'old'
</code></pre>
|
python|pandas|csv
| 2
|
7,619
| 57,163,222
|
pandas list of dataframes to group by
|
<p>Having a list of pandas dataframes, how do I concat them together into a single groupby object so that I can run vectorized calculations on them?</p>
<p>The dfs are similar and there is no way to group them after concatenation.</p>
<p>group n:</p>
<pre><code>index some_values
0 2
1 3
2 2
3 2
</code></pre>
<p>group n+1:</p>
<pre><code>index some_values
6 1
7 4
8 4
</code></pre>
<p>I could loop through the list to add an identifier, but because this operation is part of another loop, I have to avoid this inner loop.</p>
<p>To ask the question another way: how do I add an identifier with cumsum to the rows of the DFs in the list, avoiding a loop operation?</p>
<p>The story of how I actually got there:</p>
<p>First I had a DF of booleans to split on <code>Trues</code> and group <code>Falses</code>:</p>
<p><code>initial_df</code>:</p>
<pre><code>index boolean
0 False
1 False
2 False
3 True
4 True
5 False
6 False
7 False
8 False
9 False
</code></pre>
<p>I used this snippet to get the <code>groups</code> of <code>dfs</code> I needed:</p>
<p><code>https://stackoverflow.com/questions/57132096/pandas-how-to-groupby-based-on-series-pattern</code></p>
<pre><code>x = listing_calendar[~listing_calendar["available"]].index.values
groups = np.split(x, np.where(np.diff(x)>1)[0]+1)
grouped_dfs = [listing_calendar.iloc[gr, :] for gr in groups]
</code></pre>
<p><code>grouped_dfs[0]</code>:</p>
<pre><code> index boolean
0 False
1 False
2 False
</code></pre>
<p><code>grouped_dfs[2]</code>:</p>
<pre><code> index boolean
5 False
6 False
7 False
8 False
9 False
</code></pre>
<p>the expected df to further <code>groupby</code>:</p>
<pre><code>index boolean group_id
0 False 0
1 False 0
2 False 0
3 True
4 True
5 False 1
6 False 1
7 False 1
8 False 1
9 False 1
</code></pre>
<p>or a <code>groupby</code> object instead of <code>grouped_dfs</code> to work with.</p>
<p>Thank you!</p>
|
<p>You can use something like:</p>
<pre><code>s=np.where(~df.boolean,df.boolean.ne(df.boolean.shift()).cumsum(),np.nan)
final=df.assign(group=pd.Series(pd.factorize(s)[0]+1).replace(0,np.nan))
</code></pre>
<hr>
<pre><code> index boolean group
0 0 False 1.0
1 1 False 1.0
2 2 False 1.0
3 3 True NaN
4 4 True NaN
5 5 False 2.0
6 6 False 2.0
7 7 False 2.0
8 8 False 2.0
9 9 False 2.0
</code></pre>
<p><strong><em>Details:</em></strong></p>
<p>Use <a href="https://www.google.com/search?q=np+where&rlz=1C1GCEU_enIN822IN823&oq=np+where&aqs=chrome..69i57j69i60j69i59j0l3.1456j0j4&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer"><code>np.where()</code></a> using invert <code>~</code> and assign values with a comparision on the shifted values on the same series using <code>df.boolean.ne(df.boolean.shift()).cumsum()</code>:</p>
<pre><code>np.where(~df.boolean,df.boolean.ne(df.boolean.shift()).cumsum(),np.nan)
#array([ 1., 1., 1., nan, nan, 3., 3., 3., 3., 3.])
</code></pre>
<p>Then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.factorize.html" rel="nofollow noreferrer"><code>factorize()</code></a> which returns -1 for <code>NaN</code>. As we are doing a +1 after that we then replace 0 with <code>np.nan</code>.</p>
|
pandas|pandas-groupby
| 3
|
7,620
| 57,287,934
|
Construct DataFrame from list of dicts
|
<p>Trying to construct pandas DataFrame from list of dicts</p>
<p>List of dicts:</p>
<pre><code>a = [{'1': 'A'},
{'2': 'B'},
{'3': 'C'}]
</code></pre>
<p>Pass list of dicts into pd.DataFrame():</p>
<pre><code>df = pd.DataFrame(a)
Actual results:
1 2 3
0 A NaN NaN
1 NaN B NaN
2 NaN NaN C
</code></pre>
<pre><code>pd.DataFrame(a, columns=['Key', 'Value'])
Actual results:
Key Value
0 NaN NaN
1 NaN NaN
2 NaN NaN
</code></pre>
<p>Expected results:</p>
<pre><code> Key Value
0 1 A
1 2 B
2 3 C
</code></pre>
|
<p>Something like this with a list comprehension:</p>
<pre><code>pd.DataFrame(([(x, y) for i in a for x, y in i.items()]),columns=['Key','Value'])
</code></pre>
<hr>
<pre><code> Key Value
0 1 A
1 2 B
2 3 C
</code></pre>
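<p>An equivalent sketch that merges the one-item dicts first and then builds the frame (assuming the keys are unique across the dicts):</p>
<pre><code>merged = {k: v for d in a for k, v in d.items()}
df = pd.DataFrame(list(merged.items()), columns=['Key', 'Value'])
</code></pre>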
|
pandas
| 1
|
7,621
| 56,964,557
|
bazel tensorflow target name misunderstanding
|
<p>I am trying to understand the bazel dependency tree in the tensorflow 2.0 project.<br>
In tensorflow/tensorflow/BUILD:598 there is a target: </p>
<pre><code>tf_cc_shared_object(
name = "tensorflow_cc",
</code></pre>
<p>When I try to query it with bazel</p>
<pre><code>bazel query //tensorflow:libtensorflow_cc --output location
</code></pre>
<p>I get this error:</p>
<pre><code>ERROR: no such target '//tensorflow:libtensorflow_cc': target 'libtensorflow_cc' not declared in package 'tensorflow' (did you mean 'libtensorflow.so'?) defined by projects/tensorflow/tensorflow/BUILD
</code></pre>
<p>Why does the missing-target message refer to another target that I don't find in the BUILD file?</p>
|
<p>Shouldn't that be:</p>
<pre class="lang-sh prettyprint-override"><code>bazel query //tensorflow:tensorflow_cc --output location
</code></pre>
<p>which produces the following output for me:</p>
<pre><code>[...]/tensorflow/tensorflow/BUILD:611:1: filegroup rule //tensorflow:tensorflow_cc
Loading: 0 packages loaded
</code></pre>
|
tensorflow|build|bazel
| 0
|
7,622
| 66,372,469
|
Is there a way to fix this import error for geopandas?
|
<p>I am trying to get GemGIS to work. During the installation process I installed the libraries <em><strong>geopandas</strong></em> and <em><strong>gemgis</strong></em> in my environment through:</p>
<p><code>conda install -c conda-forge geopandas</code></p>
<p>and</p>
<p><code>pip install gemgis</code></p>
<p>I used the anaconda powershell and everything seemed okay. When I started jupyter lab and tried to import those packages I got the following error code:</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-24-a62d01c1d62e> in <module>
----> 1 import geopandas as gpd
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\geopandas\__init__.py in <module>
----> 1 from geopandas._config import options # noqa
2
3 from geopandas.geoseries import GeoSeries # noqa
4 from geopandas.geodataframe import GeoDataFrame # noqa
5 from geopandas.array import points_from_xy # noqa
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\geopandas\_config.py in <module>
124 use_pygeos = Option(
125 key="use_pygeos",
--> 126 default_value=_default_use_pygeos(),
127 doc=(
128 "Whether to use PyGEOS to speed up spatial operations. The default is True "
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\geopandas\_config.py in _default_use_pygeos()
110
111 def _default_use_pygeos():
--> 112 import geopandas._compat as compat
113
114 return compat.USE_PYGEOS
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\geopandas\_compat.py in <module>
147 RTREE_GE_094 = False
148 try:
--> 149 import rtree # noqa
150
151 HAS_RTREE = True
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\rtree\__init__.py in <module>
7 __version__ = '0.9.7'
8
----> 9 from .index import Rtree, Index # noqa
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\rtree\index.py in <module>
4 import pprint
5
----> 6 from . import core
7
8 import pickle
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\rtree\core.py in <module>
73
74 # load the shared library by looking in likely places
---> 75 rt = finder.load()
76
77 rt.Error_GetLastErrorNum.restype = ctypes.c_int
G:\Programme\anaconda3\envs\gemgis\lib\site-packages\rtree\finder.py in load()
65 finally:
66 os.environ['PATH'] = oldenv
---> 67 raise OSError("could not find or load {}".format(lib_name))
68
69 elif os.name == 'posix':
OSError: could not find or load spatialindex_c-64.dll
</code></pre>
<p>Does anyone know how to fix this? I looked up the paths and saw that the core.py file had similar code as requested, but the <strong>64</strong> was missing at the end of spatialindex_c-.dll.</p>
|
<p>Your installation of <code>rtree</code> is corrupted. You can either try reinstalling it or using <code>pygeos</code> instead.</p>
<p>The recommended way (since I see that you're using conda):</p>
<pre><code>conda install rtree -c conda-forge
</code></pre>
<p>or</p>
<pre><code>conda install pygeos -c conda-forge
</code></pre>
<p>With pygeos, rtree is not used, hence won't raise an error.</p>
|
python|dll|importerror|geopandas
| 0
|
7,623
| 66,387,188
|
Is there an easy way to extract rows from pandas DataFrame from a boolean expression?
|
<p>I'm currently struggling trying to extract rows from a DataFrame using vectorization. I'm pretty sure there's an easy way, expression or function to achieve this, but I couldn't find it.
I have this dataframe (from a mysql database):</p>
<pre class="lang-py prettyprint-override"><code> date_taux taux taux_min taux_max
0 2021-02-15 13:55:00 2.1166 2.1155 2.1232
1 2021-02-15 14:00:00 2.1256 2.1166 2.1300
2 2021-02-15 14:05:00 2.1312 2.1206 2.1348
3 2021-02-15 14:10:00 2.1174 2.1166 2.1416
4 2021-02-15 14:15:00 2.1103 2.1060 2.1253
5 2021-02-15 14:20:00 2.1269 2.1143 2.1277
6 2021-02-15 14:25:00 2.1239 2.1115 2.1300
7 2021-02-15 14:30:00 2.0880 2.0879 2.1299
8 2021-02-15 14:35:00 2.0827 2.0827 2.1060
9 2021-02-15 14:40:00 2.0747 2.0718 2.0996
10 2021-02-15 14:45:00 2.0846 2.0779 2.0861
11 2021-02-15 14:50:00 2.0826 2.0806 2.0894
12 2021-02-15 14:55:00 2.0350 2.0350 2.0857
13 2021-02-15 15:00:00 2.0796 2.0350 2.0797
14 2021-02-15 15:05:00 2.0717 2.0587 2.0800
15 2021-02-15 15:10:00 2.0762 2.0705 2.0819
16 2021-02-15 15:15:00 2.0793 2.0650 2.0884
17 2021-02-15 15:20:00 2.1005 2.0831 2.1064
18 2021-02-15 15:25:00 2.1164 2.1017 2.1206
19 2021-02-15 15:30:00 2.1199 2.1176 2.1300
</code></pre>
<p>And I also have this numpy array:</p>
<pre class="lang-py prettyprint-override"><code>[2. 2.01694915 2.03389831 2.05084746 2.06779661 2.08474576
2.10169492 2.11864407 2.13559322 2.15254237 2.16949153 2.18644068
2.20338983 2.22033898 2.23728814 2.25423729 2.27118644 2.28813559
2.30508475 2.3220339 2.33898305 2.3559322 2.37288136 2.38983051
2.40677966 2.42372881 2.44067797 2.45762712 2.47457627 2.49152542
2.50847458 2.52542373 2.54237288 2.55932203 2.57627119 2.59322034
2.61016949 2.62711864 2.6440678 2.66101695 2.6779661 2.69491525
2.71186441 2.72881356 2.74576271 2.76271186 2.77966102 2.79661017
2.81355932 2.83050847 2.84745763 2.86440678 2.88135593 2.89830508
2.91525424 2.93220339 2.94915254 2.96610169 2.98305085 3. ]
</code></pre>
<p>My goal is to add a column to the dataframe with the count of numbers in the array that fall between <code>taux_min</code> and <code>taux_max</code>. An expected result would be:</p>
<pre class="lang-py prettyprint-override"><code> date_taux taux taux_min taux_max amount_lines
0 2021-02-15 13:55:00 2.1166 2.1155 2.1232 1
1 2021-02-15 14:00:00 2.1256 2.1166 2.1300 1
2 2021-02-15 14:05:00 2.1312 2.1206 2.1348 0
3 2021-02-15 14:10:00 2.1174 2.1166 2.1416 2
4 2021-02-15 14:15:00 2.1103 2.1060 2.1253 1
5 2021-02-15 14:20:00 2.1269 2.1143 2.1277 1
6 2021-02-15 14:25:00 2.1239 2.1115 2.1300 1
7 2021-02-15 14:30:00 2.0880 2.0879 2.1299 2
8 2021-02-15 14:35:00 2.0827 2.0827 2.1060 2
9 2021-02-15 14:40:00 2.0747 2.0718 2.0996 1
10 2021-02-15 14:45:00 2.0846 2.0779 2.0861 1
...
</code></pre>
<p>I tried using this code:</p>
<pre class="lang-py prettyprint-override"><code>sql = dbm.MySQL()
data = sql.pdselect("SELECT date_taux, taux, taux_min, taux_max FROM binance_rates_grid WHERE action = %s AND date_taux > %s ORDER BY date_taux ASC", "TOMOUSDT", datetime.utcnow()-timedelta(days=11))
print(data)
print("==================")
grids = np.linspace(2, 4, 60)
data["lignes"] = len(grids[(data["taux_min"] < grids) & (data["taux_max"] < grids)])
print(data)
</code></pre>
<p>But I get this error: <code>ValueError: ('Lengths must match to compare', (2868,), (60,))</code></p>
<p>I'm pretty sure I'm missing something here, but I cannot tell what.</p>
|
<p>Let us try <code>numpy</code> broadcasting:</p>
<pre><code>x, y = df[['taux_min', 'taux_max']].values.T       # two 1-D arrays of lower/upper bounds
mask = (x[:, None] <= arr) & (arr <= y[:, None])    # broadcast to an (n_rows, n_grid) boolean mask
df['amount_lines'] = mask.sum(1)                    # count grid values inside each interval
</code></pre>
<hr />
<pre><code> date_taux taux taux_min taux_max amount_lines
0 2021-02-15 13:55:00 2.1166 2.1155 2.1232 1
1 2021-02-15 14:00:00 2.1256 2.1166 2.1300 1
2 2021-02-15 14:05:00 2.1312 2.1206 2.1348 0
3 2021-02-15 14:10:00 2.1174 2.1166 2.1416 2
4 2021-02-15 14:15:00 2.1103 2.1060 2.1253 1
5 2021-02-15 14:20:00 2.1269 2.1143 2.1277 1
6 2021-02-15 14:25:00 2.1239 2.1115 2.1300 1
7 2021-02-15 14:30:00 2.0880 2.0879 2.1299 2
8 2021-02-15 14:35:00 2.0827 2.0827 2.1060 2
9 2021-02-15 14:40:00 2.0747 2.0718 2.0996 1
10 2021-02-15 14:45:00 2.0846 2.0779 2.0861 1
11 2021-02-15 14:50:00 2.0826 2.0806 2.0894 1
12 2021-02-15 14:55:00 2.0350 2.0350 2.0857 3
13 2021-02-15 15:00:00 2.0796 2.0350 2.0797 2
14 2021-02-15 15:05:00 2.0717 2.0587 2.0800 1
15 2021-02-15 15:10:00 2.0762 2.0705 2.0819 0
16 2021-02-15 15:15:00 2.0793 2.0650 2.0884 2
17 2021-02-15 15:20:00 2.1005 2.0831 2.1064 2
18 2021-02-15 15:25:00 2.1164 2.1017 2.1206 1
19 2021-02-15 15:30:00 2.1199 2.1176 2.1300 1
</code></pre>
|
python|pandas|numpy|vectorization|numpy-ndarray
| 2
|
7,624
| 66,488,155
|
Captioning a pandas dataframe from sqlite and outputing with a web browser with Python
|
<p>I have a function in my Python script which picks values from an sqlite database into a pandas (pd) dataframe to be outputted in a web browser.</p>
<p>I want the outputted table to display a caption for the table in the browser.</p>
<p>The caption should look like</p>
<blockquote>
<p>"This table shows the rate of collections for the month of month.get()" in the year year.get()</p>
</blockquote>
<p>My function code:</p>
<pre><code>def all_collectors_info():
Database()
a = cursor.execute("SELECT * FROM `collectors` WHERE Month = ? AND Year = ?", (MONTH.get(), YEAR.get(),))
fetch = cursor.fetchall()
z = [x for x in fetch]
cols = [column[0] for column in a.description]
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
df= pd.DataFrame.from_records(data = fetch, columns = cols, index = list(range(1, (len(z)+1))))
html = df.to_html()
text_file = open("index.html", "w")
text_file.write(html)
text_file.close()
</code></pre>
|
<p>There is no direct option to add <code><caption></code> tag to table from pandas, but you can just append heading(<h#> tag) like</p>
<pre class="lang-py prettyprint-override"><code>heading_template = '<h3>This table shows the rate of collections for the month of {month} in the year {year}</h3>\n'
html = df.to_html()
html = heading_template.format(month=MONTH.get(), year=YEAR.get()) + html
</code></pre>
<p>So your final function would be</p>
<pre class="lang-py prettyprint-override"><code>def all_collectors_info():
Database()
a = cursor.execute("SELECT * FROM `collectors` WHERE Month = ? AND Year = ?", (MONTH.get(), YEAR.get(),))
fetch = cursor.fetchall()
z = [x for x in fetch]
cols = [column[0] for column in a.description]
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
df= pd.DataFrame.from_records(data = fetch, columns = cols, index = list(range(1, (len(z)+1))))
heading_template = '<h3>This table shows the rate of collections for the month of {month} in the year {year}</h3>\n'
html = df.to_html()
html = heading_template.format(month=MONTH.get(), year=YEAR.get()) + html
text_file = open("index.html", "w")
text_file.write(html)
text_file.close()
</code></pre>
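<p>Depending on the pandas version, <code>Styler.set_caption</code> may also be an option for adding a real table caption (a sketch; <code>Styler.to_html</code> needs a reasonably recent pandas, older versions expose <code>Styler.render</code> instead):</p>
<pre class="lang-py prettyprint-override"><code>caption = "This table shows the rate of collections for the month of {} in the year {}".format(MONTH.get(), YEAR.get())
html = df.style.set_caption(caption).to_html()
</code></pre>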
|
python|html|pandas|sqlite
| 0
|
7,625
| 66,399,668
|
which object detection algorithms can extract the QR code from an image efficiently
|
<p>I am new to Object Detection. For now, I want to detect the QR code in images: extract the QR code from the image, predict only the QR code without the background information, and finally decode the exact number the QR code represents. Since I am using PyTorch, is there any object detection algorithm compatible with PyTorch that I could apply to this task?
(By extracting I mean that the raw input is an image, and I want to reduce that input down to just the QR code in the image.)</p>
|
<p>There are two ways for this task:</p>
<p><strong>Computer Vision based approach:</strong></p>
<p>The OpenCV library's <code>QRCodeDetector()</code> can <strong>detect</strong> and <strong>read</strong> QR codes easily. It returns the <strong>data</strong> in the QR code and the bounding box of the QR code:</p>
<pre><code>import cv2
detector = cv2.QRCodeDetector()
data, bbox, _ = detector.detectAndDecode(img)
</code></pre>
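<p>The returned bounding box can then be used to crop the QR region out of the image, which is the "extract" step the question asks about (a sketch, assuming <code>bbox</code> holds the corner points returned above and <code>img</code> is the original image):</p>
<pre><code>import numpy as np
if bbox is not None:
    x, y, w, h = cv2.boundingRect(bbox.astype(np.int32).reshape(-1, 2))
    qr_crop = img[y:y+h, x:x+w]
</code></pre>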
<p><strong>Deep learning based approach:</strong></p>
<p>Using a common object detection framework such as Yolo (<a href="https://github.com/ultralytics/yolov5" rel="nofollow noreferrer">Yolo v5</a> in PyTorch), you can achieve your target. However, you need data to train it. With the computer vision based approach, you don't need to do any training or data collection.</p>
<p>You may consider reading these two.</p>
|
pytorch|qr-code|object-detection
| 1
|
7,626
| 66,669,168
|
unsupported operand type(s) for +: 'float' and 'numpy.str_'
|
<p>I have the following 2 lists.</p>
<pre><code>['4794447', '1132804', '1392609', '9512999', '2041520', '7233323', '2853077', '4297617', '1321426', '2155664', '13310447', '6066387', '3551036', '4098927', '1865298', '20153634', '1323783', '6070500', '4661537', '2342299', '1302946', '6657982', '2807002', '3032171', '5928040', '2463431', '6131977', '778489']
[0.7142857142857143, 0.35714285714285715, 0.5138888888888888, 0.4583333333333333, 0.6, 0.5675675675675675, 0.589041095890411, 0.43478260869565216, 0.47368421052631576, 0.68, 0.622894633764199, 0.5945945945945946, 0.6338028169014085, 0.42028985507246375, 0.7464788732394366, 0.47593226788432264, 0.39436619718309857, 0.6176470588235294, 0.4142857142857143, 0.618421052631579, 0.5070422535211268, 0.625, 0.5789473684210527, 0.7012987012987013, 0.6533333333333333, 0.43661971830985913, 0.6533333333333333, 0.7222222222222222]
</code></pre>
<p>And I need to calculate the correlation so I did this:</p>
<pre><code> population_by_region = result['Population'].tolist()
win_loss_by_region = result['wl_ratio'].tolist()
corr, val = stats.pearsonr(population_by_region, win_loss_by_region)
</code></pre>
<p>But I get this error which is not very clear:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-37cd29ba1516> in <module>
66 print(win_loss_by_region)
67 #print(cities)
---> 68 corr, val = stats.pearsonr(population_by_region, win_loss_by_region)
69
70 print(corr)
/opt/conda/lib/python3.7/site-packages/scipy/stats/stats.py in pearsonr(x, y)
3403 # that the data type is at least 64 bit floating point. It might have
3404 # more precision if the input is, for example, np.longdouble.
-> 3405 dtype = type(1.0 + x[0] + y[0])
3406
3407 if n == 2:
TypeError: unsupported operand type(s) for +: 'float' and 'numpy.str_'
</code></pre>
<p>Both lists are the same length!</p>
|
<p>I think both need to be numeric, so use:</p>
<pre><code>population_by_region = result['Population'].astype(int).tolist()
</code></pre>
<p>Also, converting to lists is not necessary; pass both columns directly:</p>
<pre><code>corr, val = stats.pearsonr(result['Population'].astype(int), result['wl_ratio'])
print (corr, val)
-0.04027318804589655 0.8387661496942489
</code></pre>
|
python|pandas|numpy
| 3
|
7,627
| 57,434,435
|
"Could not compute output" error using tf.keras merge layers in Tensorflow 2
|
<p>I'm trying to use a merge layer in tf.keras but getting <code>AssertionError: Could not compute output Tensor("concatenate_3/Identity:0", shape=(None, 10, 8), dtype=float32)</code>. Minimal (not)working example: </p>
<pre><code>import tensorflow as tf
import numpy as np
context_length = 10
input_a = tf.keras.layers.Input((context_length, 4))
input_b = tf.keras.layers.Input((context_length, 4))
#output = tf.keras.layers.concatenate([input_a, input_b]) # same error
output = tf.keras.layers.Concatenate()([input_a, input_b])
model = tf.keras.Model(inputs = (input_a, input_b), outputs = output)
a = np.random.rand(3, context_length, 4).astype(np.float32)
b = np.random.rand(3, context_length, 4).astype(np.float32)
pred = model(a, b)
</code></pre>
<p>I get the same error with other merge layers (e.g. <code>add</code>). I'm on TF2.0.0-alpha0 but get the same with 2.0.0-beta1 on colab. </p>
|
<p>Ok well the error message was not helpful but I eventually stumbled upon the solution: the input to <code>model</code> needs to be an iterable of tensors, i.e. </p>
<pre><code>pred = model((a, b))
</code></pre>
<p>works just fine. </p>
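<p>For completeness, the tail of the minimal example with the fix applied (same <code>a</code> and <code>b</code> as above):</p>
<pre><code>pred = model((a, b))   # one tuple containing both inputs
print(pred.shape)      # (3, 10, 8): the two (3, 10, 4) inputs concatenated on the last axis
</code></pre>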
|
tensorflow|keras|tensorflow2.0|tf.keras
| 5
|
7,628
| 73,048,757
|
python pandas regex find pattern from another row
|
<p>I have a python pandas dataframe with the following pattern:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>file_path</th>
</tr>
</thead>
<tbody>
<tr>
<td>/home</td>
</tr>
<tr>
<td>/home/folder1</td>
</tr>
<tr>
<td>/home/folder1/file1.xlsx</td>
</tr>
<tr>
<td>/home/folder1/file2.xlsx</td>
</tr>
<tr>
<td>/home/folder2</td>
</tr>
<tr>
<td>/home/folder2/date</td>
</tr>
<tr>
<td>/home/folder2/date/dates.txt</td>
</tr>
<tr>
<td>/home/folder3</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to get the parent path in a new column, if there is no parent then call it "ROOT"</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>file_path</th>
<th>parent_path</th>
</tr>
</thead>
<tbody>
<tr>
<td>/home</td>
<td>ROOT</td>
</tr>
<tr>
<td>/home/folder1</td>
<td>/home</td>
</tr>
<tr>
<td>/home/folder1/file1.xlsx</td>
<td>/home/folder1</td>
</tr>
<tr>
<td>/home/folder1/file2.xlsx</td>
<td>/home/folder1</td>
</tr>
<tr>
<td>/home/folder2</td>
<td>/home</td>
</tr>
<tr>
<td>/home/folder2/date</td>
<td>/home/folder2</td>
</tr>
<tr>
<td>/home/folder2/date/dates.txt</td>
<td>/home/folder2/date</td>
</tr>
<tr>
<td>/home/folder3</td>
<td>/home</td>
</tr>
</tbody>
</table>
</div>
<p>My attempt:</p>
<pre><code>import re
import pandas as pd
df = pd.DataFrame(["/home", "/home/folder1", "/home/folder1/file1.xlsx",
"/home/folder1/file1.xlsx", "/home/folder1/file2.xlsx", "/home/folder2",
"/home/folder2/date", "/home/folder2/date/dates.txt", "/home/folder3"], columns=["file_path"])
# Get list
file_paths = df.file_path.unique()
def match_parent(x, file_paths):
x = x.split('/')
levels = len(x)
# Check that parent contains all elements of x and the length is 1 less
</code></pre>
<p>I was thinking to make a function that:</p>
<ol>
<li><p>For each row, compute its length and match those that are 1 length less than the current row AND,</p>
</li>
<li><p>All previous items match (are exactly the same)</p>
</li>
</ol>
<p>How can I do that?</p>
|
<p>Use <a href="https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.parent" rel="nofollow noreferrer"><code>pathlib.Path.parent</code></a> to extract the parent, as follows:</p>
<pre><code>import pandas as pd
import pathlib
df = pd.DataFrame(["/home", "/home/folder1", "/home/folder1/file1.xlsx",
"/home/folder1/file1.xlsx", "/home/folder1/file2.xlsx", "/home/folder2",
"/home/folder2/date", "/home/folder2/date/dates.txt", "/home/folder3"], columns=["file_path"])
df["parent"] = df["file_path"].apply(lambda x: pathlib.Path(x).parent)
print(df)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> file_path parent
0 /home /
1 /home/folder1 /home
2 /home/folder1/file1.xlsx /home/folder1
3 /home/folder1/file1.xlsx /home/folder1
4 /home/folder1/file2.xlsx /home/folder1
5 /home/folder2 /home
6 /home/folder2/date /home/folder2
7 /home/folder2/date/dates.txt /home/folder2/date
8 /home/folder3 /home
</code></pre>
<p>To match the exact output in the question:</p>
<pre><code>df["parent"] = df["file_path"].apply(lambda x: res if (res := pathlib.Path(x).parent) != pathlib.Path("/") else "ROOT")
print(df)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> file_path parent
0 /home ROOT
1 /home/folder1 /home
2 /home/folder1/file1.xlsx /home/folder1
3 /home/folder1/file1.xlsx /home/folder1
4 /home/folder1/file2.xlsx /home/folder1
5 /home/folder2 /home
6 /home/folder2/date /home/folder2
7 /home/folder2/date/dates.txt /home/folder2/date
8 /home/folder3 /home
</code></pre>
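<p>If you are on a Python version older than 3.8 (no walrus operator), an equivalent sketch with a plain helper function:</p>
<pre><code>def parent_or_root(path):
    parent = pathlib.Path(path).parent
    return "ROOT" if parent == pathlib.Path("/") else parent

df["parent"] = df["file_path"].apply(parent_or_root)
</code></pre>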
|
python|pandas|string|path
| 2
|
7,629
| 73,168,950
|
incompatible shapes error using datagen.flow_from_directory() in a functional API keras model
|
<p>I am a super n00b attempting to learn TF and keras. I would like to create a model using the Functional API and fed by ImageDataGenerator() and flow_from_directory(). I am limited to using spyder (5.1.5) and python 3.7, keras 2.8.0, tensorflow 2.8.0.</p>
<p>I have organized sample patches into labelled folders to support flow_from_directory(). There are 7 classes and each patch is a small .png image, size is supposed to be 128 x 128 x 3.</p>
<p>However, when I attempt to call model.fit() I receive a ValueError:</p>
<pre><code>Traceback (most recent call last):
File ~\.spyder-py3\MtP_treeCounts\shape_error_code.py:129 in <module>
history = model.fit(ds_train,
File ~\Anaconda3\envs\tf28\lib\site-packages\keras\utils\traceback_utils.py:67 in error_handler
raise e.with_traceback(filtered_tb) from None
File ~\Anaconda3\envs\tf28\lib\site-packages\tensorflow\python\framework\func_graph.py:1147 in autograph_handler
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\training.py", line 1021, in train_function *
return step_function(self, iterator)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\training.py", line 918, in compute_loss
return self.compiled_loss(
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\losses.py", line 1789, in categorical_crossentropy
return backend.categorical_crossentropy(
File "C:\Users\jlovitt\Anaconda3\envs\tf28\lib\site-packages\keras\backend.py", line 5083, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, None) and (None, 128, 128, 1) are incompatible
</code></pre>
<p>I don't think my generator is generating anything. I assume the issue is linked to my model being fed something like [50,7] (where batch size is 50 and 7 is the number of classes) instead of [50,128,128,3] which would be 50 individual patches pulled randomly from across the class labelled folders. So it's not actually training anything.</p>
<p>Here is the code:</p>
<pre><code># set up
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras.layers import Input, Conv2D,Conv1D, UpSampling2D, concatenate,Dense, Flatten, Dropout,BatchNormalization, MaxPooling2D
from tensorflow.keras.models import Model, Sequential, load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
K.clear_session()
del model
</code></pre>
<pre><code>#build generator & train set
datagen = ImageDataGenerator(
rotation_range=40,
zoom_range=(0.95,0.95),
width_shift_range=0.2,
height_shift_range=0.2,
dtype = np.float32,
rescale=1/255,
shear_range=0.2,
horizontal_flip=True,
fill_mode='nearest',
data_format = "channels_last",
)
image_height = 128
image_width = 128
batch_size = 50
ds_train = datagen.flow_from_directory(
directory=r"C:/Users/jlovitt/Pyworking/for_CNN_5/RGB_aerial/patches/train/rgb/organized/",
target_size=(image_height,image_width),
batch_size = batch_size,
color_mode="rgb",
class_mode = 'categorical',
shuffle=True,
seed =42,
#subset='training',
)
</code></pre>
<pre><code>#set params
# STEP_SIZE_TRAIN = round(int(ds_train.n//ds_train.batch_size),-1)
STEP_SIZE_TRAIN = 180
# STEP_SIZE_VALID = round(int(ds_validation.n//ds_validation.batch_size),-1)
STEP_SIZE_VALID = 20
lr = 0.001
</code></pre>
<pre><code>#define model
def U_model():
in1 = Input(shape=(256,256,3))
conv1 = Conv2D(32,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(in1)
conv1 = Dropout(0.1)(conv1)
conv1 = Conv2D(32,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(conv1)
pool1 = MaxPooling2D((2,2))(conv1)
conv2 = Conv2D(64,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(pool1)
conv2 = Dropout(0.1)(conv2)
conv2 = Conv2D(64,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(conv2)
pool2 = MaxPooling2D((2,2))(conv2)
conv3 = Conv2D(128,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(pool2)
conv3 = Dropout(0.1)(conv3)
conv3 = Conv2D(128,(3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(conv3)
pool3 = MaxPooling2D((2,2))(conv3)
conv4 = Conv2D(128, 3, activation='relu', kernel_initializer='he_normal', padding='same')(pool3)
conv4 = Dropout(0.1)(conv4)
conv4 = Conv2D(128, 3, activation='relu', kernel_initializer='he_normal', padding='same')(conv4)
up1 = concatenate([UpSampling2D((2,2))(conv4),conv3],axis=-1)
conv5 = Conv2D(64,(3,3), activation='relu', kernel_initializer='he_normal', padding='same')(up1)
conv5 = Dropout(0.1)(conv5)
conv5 = Conv2D(64,(3,3), activation='relu', kernel_initializer='he_normal', padding='same')(conv5)
up2 = concatenate([UpSampling2D((2,2))(conv5), conv2], axis=-1)
conv6 = Conv2D(64, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(up2)
conv6 = Dropout(0.1)(conv6)
conv6 = Conv2D(64, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(conv6)
up3 = concatenate([UpSampling2D((2,2))(conv6), conv1], axis=-1)
conv7 = Conv2D(32, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(up3)
conv7 = Dropout(0.1)(conv7)
conv7 = Conv2D(32, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(conv7)
out1 = keras.layers.Dense(7)(conv7)
#defining inputs and outputs of model
model = Model(inputs=[in1], outputs=[out1])
model.compile(loss="categorical_crossentropy", optimizer =keras.optimizers.SGD(learning_rate=lr,momentum=0.9),metrics=[tf.keras.metrics.MeanSquaredError(),tf.keras.metrics.MeanAbsoluteError()])
return model
model = U_model()
model.summary()
</code></pre>
<pre><code>#train model
history = model.fit(ds_train,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=ds_validation,
validation_steps=STEP_SIZE_VALID,
epochs=10)
</code></pre>
|
<p>As it turns out, I solved the issue with the following two changes:</p>
<ul>
<li>changed the optimizer to Adam in the compile call, and</li>
<li>added a <code>Flatten()</code> layer prior to my final <code>Dense(7)</code> output.</li>
</ul>
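<p>For reference, a rough sketch of what the changed tail of the model could look like - the <code>softmax</code> activation and the metric are my assumptions; only the <code>Flatten</code> layer and the Adam optimizer come from the fix above:</p>
<pre><code>flat = Flatten()(conv7)                       # assumption: flatten before the final Dense(7)
out1 = Dense(7, activation='softmax')(flat)   # assumption: softmax to go with categorical_crossentropy
model = Model(inputs=[in1], outputs=[out1])
model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(learning_rate=lr),
              metrics=['accuracy'])           # assumption: accuracy instead of the MSE/MAE metrics
</code></pre>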
|
python|tensorflow|keras
| 0
|
7,630
| 70,503,950
|
creating and filling empty dates with zeroes
|
<p>I have a dataframe <code>df</code></p>
<pre><code>df=pd.read_csv('https://raw.githubusercontent.com/amanaroratc/hello-world/master/x_restock.csv')
df
</code></pre>
<p><a href="https://i.stack.imgur.com/9ZivM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ZivM.png" alt="enter image description here" /></a></p>
<p>I want to fill the missing dates for each <code>Product_ID</code> with <code>restocking_events=0</code>. To start, I have created a date_range dataframe using <code>dfdate=pd.DataFrame({'Date':pd.date_range(simple.Date.min(), simple.Date.max())})</code> where <code>simple</code> is some master dataframe and min and max dates are '2021-11-13' and '2021-11-30'.</p>
<p><a href="https://i.stack.imgur.com/dkMFA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dkMFA.png" alt="enter image description here" /></a></p>
|
<p>Use:</p>
<pre><code>#added parse_dates for datetimes
df=pd.read_csv('https://raw.githubusercontent.com/amanaroratc/hello-world/master/x_restock.csv',
parse_dates=['Date'])
</code></pre>
<p>The first solution adds the complete range of datetimes, from the minimal to the maximal datetime, using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with a <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p>
<pre><code>mux = pd.MultiIndex.from_product([df['Product_ID'].unique(),
pd.date_range(df.Date.min(), df.Date.max())],
names=['Product_ID','Dates'])
df1 = df.set_index(['Product_ID','Date']).reindex(mux, fill_value=0).reset_index()
print (df1)
Product_ID Dates restocking_events
0 1004746 2021-11-13 0
1 1004746 2021-11-14 0
2 1004746 2021-11-15 0
3 1004746 2021-11-16 1
4 1004746 2021-11-17 0
... ... ...
3379 976460 2021-11-26 1
3380 976460 2021-11-27 0
3381 976460 2021-11-28 0
3382 976460 2021-11-29 0
3383 976460 2021-11-30 0
[3384 rows x 3 columns]
</code></pre>
<p>Another idea uses a helper DataFrame built with <code>itertools.product</code>:</p>
<pre><code>from itertools import product
dfdate=pd.DataFrame(product(df['Product_ID'].unique(),
pd.date_range(df.Date.min(), df.Date.max())),
columns=['Product_ID','Date'])
print (dfdate)
Product_ID Date
0 1004746 2021-11-13
1 1004746 2021-11-14
2 1004746 2021-11-15
3 1004746 2021-11-16
4 1004746 2021-11-17
... ...
3379 976460 2021-11-26
3380 976460 2021-11-27
3381 976460 2021-11-28
3382 976460 2021-11-29
3383 976460 2021-11-30
[3384 rows x 2 columns]
</code></pre>
<pre><code>df = dfdate.merge(df, how='left').fillna({'restocking_events':0}, downcast='int')
print (df)
Product_ID Date restocking_events
0 1004746 2021-11-13 0
1 1004746 2021-11-14 0
2 1004746 2021-11-15 0
3 1004746 2021-11-16 1
4 1004746 2021-11-17 0
... ... ...
3379 976460 2021-11-26 1
3380 976460 2021-11-27 0
3381 976460 2021-11-28 0
3382 976460 2021-11-29 0
3383 976460 2021-11-30 0
[3384 rows x 3 columns]
</code></pre>
<hr />
<p>Or, if you need consecutive datetimes per group (each group filled only between its own first and last date), use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.asfreq.html" rel="nofollow noreferrer"><code>DataFrame.asfreq</code></a>:</p>
<pre><code>df2 = (df.set_index('Date')
.groupby('Product_ID')['restocking_events']
.apply(lambda x: x.asfreq('d', fill_value=0))
.reset_index())
print (df2)
Product_ID Date restocking_events
0 112714 2021-11-15 1
1 112714 2021-11-16 1
2 112714 2021-11-17 0
3 112714 2021-11-18 1
4 112714 2021-11-19 0
... ... ...
2209 3630918 2021-11-25 0
2210 3630918 2021-11-26 0
2211 3630918 2021-11-27 0
2212 3630918 2021-11-28 0
2213 3630918 2021-11-29 1
[2214 rows x 3 columns]
</code></pre>
|
python|pandas|dataframe|group-by
| 2
|
7,631
| 70,506,146
|
Does pandas loc generates a copy in itself but it is a view with assignment?
|
<h1>Question</h1>
<p>Does <code>.loc</code> generate both view and copy depending on the context?</p>
<h2>Background</h2>
<p>I am a bit confused by pandas <code>.loc</code> behaviour, as I had thought it should generate a view. However, it looks like it generates a copy in the example below.</p>
<pre><code>import numpy as np
import pandas as pd
df_for_view = pd.DataFrame(np.random.choice(10, (3, 5)), columns=list('ABCDE'))
back_for_view = df_for_view
print(back_for_view)
---
A B C D E
0 1 5 3 7 3
1 3 3 8 3 2
2 1 7 6 8 4
</code></pre>
<p>Use <code>.loc</code> which generates a copy.</p>
<pre><code>df_for_view = df_for_view.loc[:, ['B']]
print(df_for_view.values.base is back_for_view.values.base)
---
False # <--- .loc has generated a copy because the base is different
</code></pre>
<p>Updating the <code>.loc</code>-generated frame does not reflect back in the original.</p>
<pre><code>df_for_view.loc[:, :] = -1
print(back_for_view)
---
A B C D E
0 1 5 3 7 3
1 3 3 8 3 2
2 1 7 6 8 4
</code></pre>
<p>On the other hand, if the update assignment occurs in the same line, it looks like <code>.loc</code> generates a view.</p>
<pre><code>df_for_view = pd.DataFrame(np.random.choice(10, (3, 5)), columns=list('ABCDE'))
back_for_view = df_for_view
print(f"df_for_view is \n{df_for_view}\n")
print("updating...")
df_for_view.loc[:, ['B']] = -1
print(f"df_for_view is \n{df_for_view}\n")
print(f"back_for_view is \n{back_for_view}\n")
print(df_for_view.values.base is back_for_view.values.base)
print(df_for_view.loc[:, ['B']].values.base is back_for_view.values.base)
</code></pre>
<p>Result:</p>
<pre><code>df_for_view is
A B C D E
0 7 7 0 5 1
1 7 3 7 3 6
2 3 0 5 4 8
updating...
df_for_view is
A B C D E
0 7 -1 0 5 1
1 7 -1 7 3 6
2 3 -1 5 4 8
back_for_view is
A B C D E
0 7 -1 0 5 1
1 7 -1 7 3 6
2 3 -1 5 4 8
True
False
</code></pre>
<p>So, whether <code>.loc</code> generates a view or a copy depends on the context in which it happens?</p>
|
<p>Interesting question; it seems this is more about assignment vs. copy.
<code>df_for_view.loc[:, ['B']]</code> is essentially a new dataframe, as is proved by</p>
<pre><code>print(df_for_view.loc[:, ['B']].values.base is back_for_view.values.base)
False
</code></pre>
<p>so <code>df_for_view = df_for_view.loc[:, ['B']]</code></p>
<p>could have been any other df like</p>
<pre><code>df_for_view = df_any
</code></pre>
<p>which is a dataframe copy.</p>
<p>Now, this is assigning -1, an integer, to every element in the df:</p>
<pre><code>df_for_view.loc[:, :] = -1
</code></pre>
<p>it could have been</p>
<pre><code>df_for_view.iloc[:, :] = -1
</code></pre>
<p>and have the same effect, or even something like:</p>
<pre><code>df_for_view.loc[:,:] = df_for_view.loc[2:2]
</code></pre>
<p>In summary, I think the difference is that in the first case <code>loc</code> generates a new dataframe that the name is then rebound to, and in the second case it is used to reference (and assign into) every element of the existing dataframe.</p>
|
python|pandas
| 0
|
7,632
| 70,737,867
|
Pandas Conditional value fill, dependent on previous row's value
|
<pre><code>Current Input:
import pandas as pd
import numpy as np
# initialize list of lists
data = [
['2017-08-17 04:00:00', 1 ],
['2017-08-17 04:01:00', 2 ],
['2017-08-17 04:02:00', None ],
['2017-08-17 04:03:00', None ],
['2017-08-17 04:04:00', None ],
['2017-08-17 04:05:00', 3 ],
['2017-08-17 04:06:00', 4 ],
['2017-08-17 04:07:00', 10 ],
['2017-08-17 04:08:00', 11 ],
['2017-08-17 04:09:00', None ],
['2017-08-17 04:10:00', 11 ],
['2017-08-17 04:10:00', 11 ],
['2017-08-17 04:11:00', None ],
['2017-08-17 04:12:00', 12 ],
['2017-08-17 04:13:00', 11 ]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['date', 'price'])
</code></pre>
<pre><code>Desired Output:
data = [ date price entry
['2017-08-17 04:00:00', 1 ],
['2017-08-17 04:01:00', 2 ],
['2017-08-17 04:02:00', None ],
['2017-08-17 04:03:00', None ],
['2017-08-17 04:04:00', None ],
['2017-08-17 04:05:00', 3 3 ],
['2017-08-17 04:06:00', 4 ],
['2017-08-17 04:07:00', 10 ],
['2017-08-17 04:08:00', 11 ],
['2017-08-17 04:09:00', None ],
['2017-08-17 04:10:00', 11 11 ],
['2017-08-17 04:10:00', 11 ],
['2017-08-17 04:11:00', None ],
['2017-08-17 04:12:00', 12 12 ],
['2017-08-17 04:13:00', 11 ]]
</code></pre>
<p>I am trying to make it so that a 3rd column named "entry" takes the value of column "price" if the price in the previous row is None and the current row's price is not None. I have tried the code below, but it does not work; all it does is make the whole "entry" column None.</p>
<pre><code>condition1 = (df['price'].shift(1) is None) & (df['price'] is not None)
df['entry'] = np.where(condition1, df['price'] , None)
</code></pre>
|
<p>Your column seems to contain strings, maybe this should work:</p>
<pre><code>df['entry'] = df.loc[df['price'].ne('') & df['price'].shift(fill_value='').eq(''),
'price'].reindex(df.index).fillna('')
print(df)
# Output
date price entry
0 2017-08-17 04:00:00 1
1 2017-08-17 04:01:00 2
2 2017-08-17 04:02:00
3 2017-08-17 04:03:00
4 2017-08-17 04:04:00
5 2017-08-17 04:05:00 3 3
6 2017-08-17 04:06:00 4
7 2017-08-17 04:07:00 10
8 2017-08-17 04:08:00 11
9 2017-08-17 04:09:00
10 2017-08-17 04:10:00 11 11
11 2017-08-17 04:10:00 11
12 2017-08-17 04:11:00
13 2017-08-17 04:12:00 12 12
14 2017-08-17 04:13:00 11
</code></pre>
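<p>If the column actually holds numbers with missing values (NaN), as in the sample data built with <code>None</code>, a comparable sketch using <code>isna()</code> (assuming the same <code>df</code> as in the question):</p>
<pre><code>prev_missing = df['price'].isna().shift(fill_value=False)  # fill_value keeps the first row out
cond = prev_missing & df['price'].notna()
df['entry'] = df['price'].where(cond)
</code></pre>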
|
python|pandas|dataframe|conditional-statements|rows
| 0
|
7,633
| 70,975,406
|
Split regex strings into new column in pandas
|
<p>I have a redirect file in a pandas dataframe with a number of regex "or" expressions.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>regex_no</th>
<th>regex</th>
</tr>
</thead>
<tbody>
<tr>
<td>Regex4</td>
<td>/shop/accessories/jewellery/necklaces/(brand-|)jon-richard/</td>
</tr>
<tr>
<td>Regex5</td>
<td>/shop/accessories/jewellery/(bracelets|necklaces|)/brand-simply-silver-by-jon-richard/</td>
</tr>
<tr>
<td>Regex245</td>
<td>/shop/(fashion/dresses/occasion-dresses|)/bridesmaid/</td>
</tr>
</tbody>
</table>
</div>
<p>I'm looking to build a testUrl column which builds both versions of the regex in a test url to run automated tests. It would look like this.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>regex_no</th>
<th>regex</th>
<th>testUrl</th>
</tr>
</thead>
<tbody>
<tr>
<td>Regex4</td>
<td>/shop/accessories/jewellery/necklaces/(brand-|)jon-richard/</td>
<td>/shop/accessories/jewellery/necklaces/brand-jon-richard/</td>
</tr>
<tr>
<td>Regex4</td>
<td>/shop/accessories/jewellery/necklaces/(brand-|)jon-richard/</td>
<td>/shop/accessories/jewellery/necklaces/jon-richard/</td>
</tr>
<tr>
<td>Regex5</td>
<td>/shop/accessories/jewellery/(bracelets|necklaces|)/brand-simply-silver-by-jon-richard/</td>
<td>/shop/accessories/jewellery/bracelets/brand-simply-silver-by-jon-richard/</td>
</tr>
<tr>
<td>Regex5</td>
<td>/shop/accessories/jewellery/(bracelets|necklaces|)/brand-simply-silver-by-jon-richard/</td>
<td>/shop/accessories/jewellery/brand-simply-silver-by-jon-richard/</td>
</tr>
<tr>
<td>Regex5</td>
<td>/shop/accessories/jewellery/(bracelets|necklaces|)/brand-simply-silver-by-jon-richard/</td>
<td>/shop/accessories/jewellery/necklaces/brand-simply-silver-by-jon-richard/</td>
</tr>
<tr>
<td>Regex245</td>
<td>/shop/(fashion/dresses/occasion-dresses/|)bridesmaid/</td>
<td>/shop/fashion/dresses/occasion-dresses/bridesmaid/</td>
</tr>
<tr>
<td>Regex245</td>
<td>/shop/(fashion/dresses/occasion-dresses/|)bridesmaid/</td>
<td>/shop/bridesmaid/</td>
</tr>
</tbody>
</table>
</div>
<p>Unfortunately, I've no code to show how I would approach this, as it's slightly out of my knowledge capability. Thanks</p>
|
<p>You can iterate through the rows of the dataframe like <a href="https://stackoverflow.com/a/39370553/5229301">this</a>, then use <code>exrex</code> to generate each possible string your regex can match.<br />
You would need to construct a new dataframe, adding a new row for every possible result that exrex generates.<br />
It might look something like this (an untested sketch, with <code>originalDataFrame</code> standing in for your dataframe):</p>
<pre><code>import pandas as pd
import exrex
rows = []
for i in range(0, len(originalDataFrame)):
    for url in exrex.generate(originalDataFrame.iloc[i]['regex']):
        row = originalDataFrame.iloc[i].to_dict()   # keeps regex_no and regex
        row['testUrl'] = url
        rows.append(row)
df2 = pd.DataFrame(rows, columns=['regex_no', 'regex', 'testUrl'])
</code></pre>
|
python|pandas|dataframe
| 0
|
7,634
| 70,755,391
|
In numpy, how to efficiently build a mapping from each unique value to its indices, without using a for loop
|
<p>In numpy, how to efficiently build a mapping from each unique value to its indices, without using a for loop</p>
<p>I considered the following alternatives, but they are not efficient enough for my use case because I use large arrays.</p>
<p>The first alternative requires traversing the array with a <code>for</code> loop, which may be slow for large numpy arrays.</p>
<pre><code>import numpy as np
from collections import defaultdict
a = np.array([1, 2, 6, 4, 2, 3, 2])
inv = defaultdict(list)
for i, x in enumerate(a):
inv[x].append(i)
</code></pre>
<p>The second alternative is inefficient because it requires traversing the array multiple times:</p>
<pre><code>import numpy as np
a = np.array([1, 2, 6, 4, 2, 3, 2])
inv = {}
for x in np.unique(a):
inv[x] = np.flatnonzero(a == x)
</code></pre>
<p>EDIT: My numpy array consists of integers and the usage is for image segmentation. I was also looking for a method in skimage, but did not find any.</p>
|
<p>I advise you to check out <code>numba</code>, which can speed up <code>numpy</code> code in Python significantly - it supports <code>numpy.invert()</code> and <code>numpy.unique()</code> - <a href="https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html#" rel="nofollow noreferrer">documentation</a></p>
<p>Here is a good video explaining how to use <code>numba</code> from youtube - <a href="https://youtu.be/x58W9A2lnQc" rel="nofollow noreferrer">Make Python code 1000x Faster with Numba</a></p>
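<p>As a pure-numpy alternative (not the numba route above, just a sketch that avoids the Python-level loop by sorting once):</p>
<pre><code>import numpy as np

a = np.array([1, 2, 6, 4, 2, 3, 2])
order = np.argsort(a, kind='stable')                # indices that sort a
sorted_a = a[order]
boundaries = np.flatnonzero(np.diff(sorted_a)) + 1  # positions where the sorted value changes
groups = np.split(order, boundaries)                # one index array per unique value
inv = {a[grp[0]]: grp for grp in groups}
</code></pre>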
|
python|arrays|numpy|image-processing|image-segmentation
| 2
|
7,635
| 51,761,961
|
Dropping Columns in python pandas
|
<p>I'm trying to drop all the columns in a pandas dataframe, except for these few, but when I run this code all the columns are dropped. The dataset is so big, that it would be tedious to list them all, any ideas?:</p>
<pre><code>for columns in df:
if not columns == 'Carbohydrates' or columns == 'Description' or columns == '1st Household Weight' or columns == 'Sugar Total' or columns == 'Kilocalories':
df = df.drop(columns, axis = 1)
</code></pre>
|
<p>Just <strong>select the columns</strong> that you want to keep:</p>
<pre><code>df = df[['Carbohydrates','Description','1st Household Weight','Sugar Total','Kilocalories']]
</code></pre>
|
python|pandas
| 1
|
7,636
| 51,977,881
|
pd.to_datetime inconsistent with time.mktime
|
<pre><code>import time
import pandas as pd
x=pd.to_datetime('2017/01/01',yearfirst=True)
print('x:', x)
y=pd.to_datetime(time.mktime(x.timetuple()),unit='s')
print('y:', y)
</code></pre>
<p>The result is: </p>
<pre><code>x: 2017-01-01 00:00:00
y: 2016-12-31 16:00:00
</code></pre>
<p>I'd expect them to be the same, since we usually transform a Timestamp to seconds and then transform it back.
I understand that this has something to do with the timezone, but how can I eliminate the timezone effect?</p>
<p>Edit:
Package information: Python 3.6, Pandas 0.23.1 and dateutil 2.7.3
Local timezone: UTC+8</p>
|
<p>I have found out that the problem is that time.mktime assumes the timetuple is a local time. Since I am at UTC+8, time.mktime first subtracts 8 hours to convert it to UTC, then converts it to seconds.</p>
<p>Thanks to @abarnert, I found that we can just use x.value (it is in nanoseconds), and if we need the seconds version, just use x.value / 10**9.</p>
<p>In addition, in case anyone wants a timezone-neutral version, there is calendar.timegm(x.timetuple()), which does a similar job to time.mktime() but is timezone neutral.</p>
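<p>A minimal sketch of the timezone-neutral round trip (assuming the same <code>x</code> as in the question):</p>
<pre><code>import calendar
seconds = calendar.timegm(x.timetuple())   # interprets the tuple as UTC, unlike time.mktime
y = pd.to_datetime(seconds, unit='s')
print(x == y)                              # True
</code></pre>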
|
python|pandas|datetime|time
| 1
|
7,637
| 36,220,200
|
is the year-month part of a datetime variable still a time object in Pandas?
|
<p>consider this</p>
<pre><code>df=pd.DataFrame({'A':['20150202','20150503','20150503'],'B':[3, 3, 1],'C':[1, 3, 1]})
df.A=pd.to_datetime(df.A)
df['month']=df.A.dt.to_period('M')
df
Out[59]:
A B C month
0 2015-02-02 3 1 2015-02
1 2015-05-03 3 3 2015-05
2 2015-05-03 1 1 2015-05
</code></pre>
<p>and my month variable is:</p>
<pre><code>df.month
Out[82]:
0 2015-02
1 2015-05
2 2015-05
Name: month, dtype: object
</code></pre>
<p>Now if I index my dataset by <code>df.month</code>, it seems that Pandas understands this is a date. In other words, I can draw a plot without having to sort my index first. </p>
<p>But is this actually correct? The dtype <code>object</code> (instead of some datetime format) worries me. Is there a proper date object type for this kind of monthly date?</p>
|
<p>It is a pandas period object</p>
<pre><code>In [5]: df.month.map(type)
Out[5]:
0 <class 'pandas._period.Period'>
1 <class 'pandas._period.Period'>
2 <class 'pandas._period.Period'>
Name: month, dtype: object
</code></pre>
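<p>If the <code>object</code> dtype bothers you, you can (depending on your pandas version) hold the same values in a proper <code>PeriodIndex</code>, which carries a dedicated period dtype; a small sketch:</p>
<pre><code>idx = pd.PeriodIndex(df.month, freq='M')
print(idx.dtype)   # period[M]
</code></pre>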
|
python|pandas
| 1
|
7,638
| 35,819,771
|
Applying complex function to several timeseries
|
<p>What I am trying to achieve is this:
I have several Timeseries, which I need to combine on a per-point-basis and return the result as a single new timeseries. </p>
<p>I understand that you can use various <code>numpy</code> functions on Series in <code>pandas</code>, but I am unclear on how to apply complex functions to several timeseries.</p>
<p>The Function I want to apply:</p>
<pre><code>def direction_day(y_values):
# taking a numpy array of floats
sig_sum = np.sum(np.sign(y_values))
abs_sum = np.sum(np.abs(np.sign(y_values)))
return (sig_sum / abs_sum)
</code></pre>
<p>Example of my current <code>TimeSeries</code> Objects:</p>
<pre><code>def ret_random_ts():
dates = ['2016-1-{}'.format(i)for i in range(1,21)]
values = [np.random.randn(4,3) for i in range(20)]
return pd.Series(values, index=dates)
</code></pre>
<p>Of course I can always just loop through the <code>TimeSeries</code> with <code>for</code> loops and glue them together.
I was wondering, however, if there was an option to pass a function to a <code>TimeSeries</code> object containing multiple values per date, and apply that function for each date?</p>
<p>I.e.:</p>
<pre><code>ts = ret_random_ts()
ts.apply_func(direction_day,Series['Dates'])
</code></pre>
|
<p>You can use <code>map</code>:</p>
<pre><code>ts.map(direction_day)
2016-1-1 0.166667
2016-1-2 0.000000
2016-1-3 0.166667
2016-1-4 0.666667
2016-1-5 0.000000
2016-1-6 -0.166667
</code></pre>
<p>Or <code>apply</code> (produce the same result)</p>
<pre><code>ts.apply(direction_day)
</code></pre>
<p>Or <code>apply</code> with lambda (produce the same result)</p>
<pre><code>ts.apply(lambda y: direction_day(y))
</code></pre>
<p>Each method will be applied <strong>element-wise</strong> (to each value of the <code>Series</code>) since a <code>Series</code> has only one column. <code>DataFrame</code>s have methods working element-wise or by <strong>row / column</strong> (see this <a href="https://stackoverflow.com/questions/19798153/difference-between-map-applymap-and-apply-methods-in-pandas">question</a> for more detail). In your case, the values of the <code>Series</code> are arrays of arrays, so the entire array will be passed to the function. If you want more control, I suggest using a <code>DataFrame</code> instead of a <code>Series</code> containing arrays, which is not the preferred way to work in pandas. Since your data has more than two dimensions (3), pandas also provides another data structure called <a href="http://pandas.pydata.org/pandas-docs/stable/cookbook.html?highlight=panel#panels" rel="nofollow noreferrer">Panel</a>, but I have never worked with <code>Panel</code> so I cannot help you there.</p>
<p>As an example, this kind of array will be passed to your <code>direction_day</code> function:</p>
<pre><code>[[ 1.76405235, 0.40015721, 0.97873798],
[ 2.2408932 , 1.86755799, -0.97727788],
[ 0.95008842, -0.15135721, -0.10321885],
[ 0.4105985 , 0.14404357, 1.45427351]]
</code></pre>
|
python-3.x|pandas|dataframe
| 3
|
7,639
| 35,834,913
|
Extract indices of array outside of a slice
|
<p>I want to perform statistics on an annulus around a central portion of a 2D array (performing statistics on the background around a star in an image,for instance). I know how to obtain a 2D slice of a region inside of the array and return the indices of that slice, but is there any way of obtaining the indices of the values outside of the slice?</p>
<p>I have a 2D array called 'Z' and some box size (PSF_box) around which I want to perform some statistics. This is what I've got so far:</p>
<pre><code>center = np.ceil(np.shape(Z)[0]/2.0) # center of the array
# Make a 2d slice of the star, and convert those pixels to nan
annulus[center-ceil(PSF_size/2.0):center+ceil(PSF_size/2.0)-1,\
center-ceil(PSF_size/2.0):center+ceil(PSF_size/2.0)-1] = np.nan
np.savetxt('annulus.dat',annulus,fmt='%s')
</code></pre>
<p>I convert the pixels inside of this box slice to nan, but I don't know how to output the indices of pixels outside of the box that are not 'nan'. Or better yet, is there a way to perform some operations on just the area around the slice directly? (As opposed to outputting pixel values that aren't nan)</p>
|
<p>I hope this represents roughly what you want to do, i.e. get the elements of an annulus in your 2D data. If you want the data outside the annulus, just change the condition.</p>
<pre><code>import numpy as np
#construct a grid
x= np.linspace(0,1,5)
y= np.linspace(0,1,5)
xv,yv = np.meshgrid(x, y, sparse=False, indexing='ij')
# a gaussian function
x0,y0=0.5,0.5
zz= np.exp(- (xv-x0)**2 - (yv-y0)**2) # a function over the grid
print 'function\n', zz
# a distance metric on the grid
distance = np.sqrt( (xv-x0)**2+ (yv-y0)**2)
print 'distance from center\n', distance
# make a condition and apply it to the array
cond= (distance>0.3) & (distance<0.7)
print 'selection\n',zz[cond]
# if you care about the locations of the annulus
print xv[cond]
print yv[cond]
</code></pre>
<p>output:</p>
<pre><code>function
[[ 0.60653066 0.73161563 0.77880078 0.73161563 0.60653066]
[ 0.73161563 0.8824969 0.93941306 0.8824969 0.73161563]
[ 0.77880078 0.93941306 1. 0.93941306 0.77880078]
[ 0.73161563 0.8824969 0.93941306 0.8824969 0.73161563]
[ 0.60653066 0.73161563 0.77880078 0.73161563 0.60653066]]
distance from center
[[ 0.70710678 0.55901699 0.5 0.55901699 0.70710678]
[ 0.55901699 0.35355339 0.25 0.35355339 0.55901699]
[ 0.5 0.25 0. 0.25 0.5 ]
[ 0.55901699 0.35355339 0.25 0.35355339 0.55901699]
[ 0.70710678 0.55901699 0.5 0.55901699 0.70710678]]
selection
[ 0.73161563 0.77880078 0.73161563 0.73161563 0.8824969 0.8824969
0.73161563 0.77880078 0.77880078 0.73161563 0.8824969 0.8824969
0.73161563 0.73161563 0.77880078 0.73161563]
[ 0. 0. 0. 0.25 0.25 0.25 0.25 0.5 0.5 0.75 0.75 0.75
0.75 1. 1. 1. ]
[ 0.25 0.5 0.75 0. 0.25 0.75 1. 0. 1. 0. 0.25 0.75
1. 0.25 0.5 0.75]
</code></pre>
<p>See also this great answer : <a href="https://stackoverflow.com/questions/16343752/numpy-where-function-multiple-conditions">Numpy where function multiple conditions</a></p>
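<p>For the nan-box approach in the question itself, a minimal sketch of getting the indices and values of the pixels that were not set to nan (assuming <code>annulus</code> as defined there):</p>
<pre><code>outside_idx = np.argwhere(~np.isnan(annulus))        # (row, col) pairs outside the box
background = annulus[~np.isnan(annulus)]             # the values themselves
mean, std = np.nanmean(annulus), np.nanstd(annulus)  # or skip the mask and use nan-aware stats
</code></pre>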
|
python|arrays|numpy|data-structures
| 1
|
7,640
| 36,065,021
|
Ordered coordinates
|
<p>I have a list of 2D unordered coordinates :</p>
<pre><code>[[ 95 146]
[118 146]
[ 95 169]
[ 95 123]
[ 72 146]
[118 169]
[118 123]
[141 146]
[ 95 100]
[ 72 123]
[ 95 192]
[ 72 169]
[141 169]
[118 100]
[141 123]
[ 72 100]
[ 95 77]
[118 192]
[ 49 146]
[ 48 169]]
</code></pre>
<p><a href="https://i.stack.imgur.com/c1c4f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c1c4f.png" alt="Diplay coordinates" /></a></p>
<p>How could I find the corresponding row and column for each point? My points are not perfect and a small rotation can exist. I have looked at the OpenCV <code>findCirclesGrid</code> code but did not find an ordering algorithm.</p>
<p>EDIT: @armatita's solution works with the data set above, but runs into trouble when the coordinates have a rotation of 7°:</p>
<pre><code>import sys
import numpy as np
import matplotlib.pyplot as plt

data = np.array([[ 95, 146],[72,143],[92,169],[98,123],[75,120],[69,166],[49,140],[89,192],[115,172],[46,163],[52,117],[66,189],[112,194],[121,126],[123,103],[101,100],[78,97],[141,152],[86,215],[138,175]])
def find(arr,threshold):
rmin = sys.maxint
for i in range(len(arr)):
for j in range(i+1,len(arr)):
diff = abs( arr[i] - arr[j] )
if diff > threshold and diff < rmin:
rmin = diff
return rmin
threshold = 10
space = np.array([ find(data[:,0],threshold), find(data[:,1],threshold) ], dtype=np.float32)
print "space=",space
first = np.min(data,axis=0)
order = np.around( ( data - first ) / space )
plt.scatter(data[:,1], data[:,0],c=range(len(data)),cmap="ocean")
for pt in zip(order,data):
c, rc = ( pt[1], pt[0] )
plt.text( c[1], c[0]+5, "[%d,%d]" % (rc[1],rc[0]),color='black')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/dPGS0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dPGS0.png" alt="enter image description here" /></a></p>
<p>The problem comes from the spacing calculation.</p>
|
<p>By putting them inside a pseudo regular grid:</p>
<pre><code> c = [[ 95, 146],[118, 146],[ 95, 169],[ 95, 123],[ 72, 146],[118, 169],
[118, 123],[141, 146],[ 95, 100],[ 72 ,123] ,[ 95 ,192],[ 72 ,169]
,[141 ,169],[118 ,100],[141 ,123],[ 72 ,100],[ 95 , 77],[118 ,192]
,[ 49 ,146],[ 48 ,169]]
nodesx = 100
sizex = 20
sizey = 20
firstx = 70
firsty = 40
new,xt,yt = [],[],[]
for i in c:
xo = int((i[0]-firstx)/sizex)
yo = int((i[1]-firsty)/sizey)
new.append(nodesx*yo+xo)
xt.append(i[0])
yt.append(i[1])
sortedc = [x for (y,x) in sorted(zip(new,c))]
import matplotlib.pyplot as plt
plt.scatter(xt,yt)
for i in range(len(sortedc)):
plt.text(sortedc[i][0],sortedc[i][1],str(i))
plt.show()
</code></pre>
<p>,which will result in this (do tell if you don't understand the logic):</p>
<p><a href="https://i.stack.imgur.com/L0avn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L0avn.png" alt="sorted 2D coordinates"></a></p>
|
python|opencv|numpy|coordinates
| 2
|
7,641
| 37,310,141
|
Python Pandas merge if numbers equal
|
<p>I'm trying to merge two CSVs based on a condition: the 'TC_NUM' value on CSV1 has to match one of the 'TC_NUM' values on CSV2, and the corresponding 'KEYS' value should be appended as a third column. The CSVs are very large, so it has to be done through code.</p>
<p>df1 - CSV1:</p>
<pre><code>ID TC_NUM
dialog_testcase_0101.0001_greeting.xml 101.0001
dialog_testcase_0101.0002_greeting.xml 101.0002
dialog_testcase_0101.0003_greeting.xml 101.0003
dialog_testcase_0101.0004_greeting.xml 101.0004
dialog_testcase_0101.0005_greeting.xml 101.0005
dialog_testcase_0101.0006_greeting.xml 101.0006
dialog_testcase_0901.0008_greeting.xml 901.0007
dialog_testcase_0101.0008_greeting.xml 101.0008
dialog_testcase_0501.001_greeting.xml 501.001
dialog_testcase_0801.0011_greeting.xml 801.0011
</code></pre>
<p>df2 - CSV2: </p>
<pre><code>KEYS TC_NUM
FIT-3982 TC 101.0011, 101.0004
FIT-3980 TC 801.0011.901.007
FIT-3979 TC 101.0006, 501.001, 1907.0019, 1907.0020, 1907.0021
</code></pre>
<p>What I want:</p>
<p>csvFinal:</p>
<pre><code>ID TC_NUM Keys
dialog_testcase_0101.0001_greeting.xml 101.0011 FIT-3982
dialog_testcase_0101.0002_greeting.xml 101.0002
dialog_testcase_0101.0003_greeting.xml 101.0006 FIT_3979
dialog_testcase_0101.0004_greeting.xml 101.0004 FIT-3982
dialog_testcase_0101.0005_greeting.xml 101.0005
dialog_testcase_0101.0006_greeting.xml 101.0011 FIT_3982
dialog_testcase_0901.0008_greeting.xml 901.0007 FIT_3979
dialog_testcase_0101.0008_greeting.xml 101.0008
dialog_testcase_0501.001_greeting.xml 501.001 FIT-3979
dialog_testcase_0801.0011_greeting.xml 801.0011 FIT-3980
</code></pre>
<p>My code ..</p>
<pre><code>mergedOpen = pd.merge(df1, df2, on=['TC_NUM'])
mergedOpen.set_index('TC_NUM', inplace=True)
mergedOpen.to_csv('MergedCSVOPEN.csv')
</code></pre>
|
<p>You can, after <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>, remove the first <code>3</code> characters from column <code>TC_NUM</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> by <code>,</code>, and then with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> create a new <code>DataFrame</code> for the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>. Both <code>TC_NUM</code> columns have to share the same <code>dtype</code> - <code>string</code> or <code>numeric</code>. I choose <code>numeric</code>, so I convert column <code>df2.TC_NUM</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a>:</p>
<pre><code>df2.set_index('KEYS',inplace=True)
df2 = (df2.TC_NUM.str[3:]
          .str.split(', ', expand=True)
          .unstack()
          .reset_index(drop=True, level=0)
          .reset_index(name='TC_NUM'))
df2['TC_NUM'] = pd.to_numeric(df2['TC_NUM'])
print (df2)
KEYS TC_NUM
0 FIT-3982 101.0011
1 FIT-3980 801.0011
2 FIT-3979 101.0006
3 FIT-3982 101.0004
4 FIT-3980 901.0070
5 FIT-3979 501.0010
6 FIT-3982 NaN
7 FIT-3980 NaN
8 FIT-3979 1907.0019
9 FIT-3982 NaN
10 FIT-3980 NaN
11 FIT-3979 1907.0020
12 FIT-3982 NaN
13 FIT-3980 NaN
14 FIT-3979 1907.0021
</code></pre>
<pre><code>mergedOpen = pd.merge(df1, df2, on='TC_NUM', how='left')
print (mergedOpen)
ID TC_NUM KEYS
0 dialog_testcase_0101.0001_greeting.xml 101.0001 NaN
1 dialog_testcase_0101.0002_greeting.xml 101.0002 NaN
2 dialog_testcase_0101.0003_greeting.xml 101.0003 NaN
3 dialog_testcase_0101.0004_greeting.xml 101.0004 FIT-3982
4 dialog_testcase_0101.0005_greeting.xml 101.0005 NaN
5 dialog_testcase_0101.0006_greeting.xml 101.0006 FIT-3979
6 dialog_testcase_0901.0008_greeting.xml 901.0007 NaN
7 dialog_testcase_0101.0008_greeting.xml 101.0008 NaN
8 dialog_testcase_0501.001_greeting.xml 501.0010 FIT-3979
9 dialog_testcase_0801.0011_greeting.xml 801.0011 FIT-3980
mergedOpen.set_index('TC_NUM', inplace=True)
mergedOpen.to_csv('MergedCSVOPEN.csv')
</code></pre>
|
python|csv|pandas
| 1
|
7,642
| 37,323,546
|
Python data frame apply filter on multiple columns with same condition?
|
<p>Here is my pandas data frame. </p>
<pre><code>new_data =
name duration01 duration02 orz01 orz02
ABC 1 years 6 months 5 months Nan Google
XYZ 4 months 3 years 2 months Google Zensar
TYZ 4 months 4 years Google In Google
OPI 2 months 3 months Nan accenture
NRM 9 months 3 years Google Zensar
</code></pre>
<p>I want to find out the names of employees who worked at Google and their duration there in months. Here the value can appear in multiple columns - how do I apply a filter on multiple columns?</p>
<p>duration01 => orz01 (how many months/years the employee spent in orz01)
duration02 => orz02 (how many months/years the employee spent in orz02)</p>
<p>There are 10 orz columns and 10 respective duration columns in total.</p>
<p>I tried the code below:</p>
<pre><code># Selected the required columns
orz_cols = new_data.columns[new_data.columns.str.contains('orz')]
new_data [ new_data[orz_cols].apply(lambda x: x.str.contains('Google')) ]
</code></pre>
<p>But it's not printing the proper data.</p>
<p>How do I achieve this?</p>
<p>I want output like below</p>
<pre><code>name Total_duration_in Google_in_Months
ABC 5 months
XYZ 4 months
TYZ 52 months
</code></pre>
<p>Using the first part of what @Stefan gave, I did the part below to convert years to months:</p>
<pre><code># filter the data
Google_Data = dt1[dt1['orz'].str.contains('Google')]
dur = []
for i in range(0,len(Google_Data['duration'])):
dur.append(Google_Data['duration'][i].split())
months_list = []
for i in range(0,len(dur)):
#print dur[i]
if dur[i][1] == 'years':
if len(dur[i]) > 2:
val1 = int(dur[i][0]) * 12 + int(dur[i][2])
val11 = str(val1)+" months"
months_list.append(val11)
else:
val2 = int(dur[i][0]) * 12
val22 = str(val2)+" months"
months_list.append(val22)
else:
val3 = dur[i][0]+" months"
months_list.append(val3)
months_list[:3]
# Concat
df2 = pd.DataFrame(months_list,index=Google_Data.index.copy())
Google_duration = pd.concat([Google_Data, df2], axis=1)
Output :
organization Duration_In_Months
name
Aparna Arora Google Headstrong Capital Markets 60 months
Aparna Dasgupta Google 24 months
Aparna Dhar Google India Ltd 56 months
</code></pre>
<p>Now I want to perform the final step, i.e. take the sum by grouping on the name, but here 'name' is the index. I am struggling to get the sum.</p>
<p>Here is what I am trying:</p>
<pre><code># Splitting the Duration_In_Months to get only number values
# Its returning the type as 'str'
Google_duration1 = Google_duration.Duration_In_Months.apply(lambda x : x.split()[0])
# apply groupby
Genpact_dur2.index.groupby(Genpact_dur2['Duration_In_Months'])
</code></pre>
<p>How do I group by the index and take the sum?</p>
<p>Thanks,</p>
|
<p>Consider reshaping using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow">pandas.melt</a>, then conditionally parsing out values for years and months using <code>np.where()</code>. Finally, aggregate by the <em>Google</em> organization.</p>
<pre><code>import pandas as pd
import numpy as np
...
# LIST OF SUBSET COLUMNS
durationCols = [c for c in df.columns if 'duration' in c ]
orzCols = [c for c in df.columns if 'orz' in c ]
# MELT AND MERGE
df = pd.merge(pd.melt(df, id_vars=['name'], value_vars=durationCols,
var_name=None, value_name='duration'),
pd.melt(df, id_vars=['name'], value_vars=orzCols,
var_name=None, value_name='orz'),
right_index=True, left_index=True, on=['name'])[['name', 'duration', 'orz']]
# DURATION CONDITIONAL CALCULATION (YEAR + MONTH)
df['actual_dur'] = np.where(df['duration'].str.contains('year'),
df['duration'].str[:1], 0).astype(int) * 12 + \
np.where(df['duration'].str.contains('year.*month'),
df['duration'].str[8:9],
np.where(df['duration'].str.contains('month'),
df['duration'].str[:1], 0)).astype(int)
df['orz'] = np.where(df['orz']\
.str.contains('Google'), 'Google', df['orz'])
# SUM DURATION AND OUTPUT DF
df = df[df['orz']=='Google'].groupby(['name','orz']).sum().reset_index()
df = df[['name','actual_dur']]
df.columns = ['name', 'Total_duration_in Google_in_Months']
</code></pre>
<p>Output</p>
<pre><code># name Total_duration_in Google_in_Months
# 0 ABC 5
# 1 NRM 9
# 2 TYZ 52
# 3 XYZ 4
</code></pre>
|
python|pandas|filter|group-by|multiple-columns
| 0
|
7,643
| 41,839,522
|
Time series: EWMA pandas forecast
|
<p>I have searched extensively on Google and here but cannot seem to find the answer I am looking for, or at least something I understand. Is it possible to use EWMA in pandas for forecasting? For example, say I have daily data of website clicks for 2 months, 1st Feb to 31st Mar, and don't see any trend or seasonality in the data; it seems like I should be able to use EWMA to "predict" the number of clicks at a later date, say on 10th April. In Excel, I can imagine just filling approximately 10 dates or rows after 31st March and computing a moving average where the 5-day EWMA for 10th April is based on weighted forecasts of the prior days. Is there a way I can do this in Python?</p>
<p>Thanks ! </p>
|
<p>It's a one-liner to implement, but you're going to be a little bored by EWMA's predictions of the future (the forecast for every future date is simply the most recent smoothed value). If you'd like a Python package that lets you experiment with EWMA level, trend and seasonality, try my Holt-Winters implementation:</p>
<p><a href="https://github.com/welch/seasonal" rel="nofollow noreferrer">https://github.com/welch/seasonal</a></p>
<p><a href="https://pypi.python.org/pypi/seasonal" rel="nofollow noreferrer">https://pypi.python.org/pypi/seasonal</a></p>
|
python|pandas|time-series
| 4
|
7,644
| 41,979,933
|
Python Groupby part of a string
|
<p>I'm grouping a list of transactions by UK Postcode, but I only want to group by the first part of the postcode. So, UK post codes are in two parts, outward and inward, separated by a [space]. e.g. W1 5DA.</p>
<pre><code>subtotals = df.groupby('Postcode').count()
</code></pre>
<p>is the way I'm doing it now. The way I've thought about doing it at the moment is adding another column to the DataFrame with just the first word of the Postcode column, and then grouping by that... but I'm wondering if there's an easier way to do it.</p>
<p>Thank you</p>
|
<p>I think you need to <code>groupby</code> a <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> on the first space:</p>
<pre><code>subtotals = df.groupby(df['Postcode'].str.split().str[0]).count()
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'Postcode' :['W1 5DA','W1 5DA','W2 5DA']})
print (df)
Postcode
0 W1 5DA
1 W1 5DA
2 W2 5DA
print (df['Postcode'].str.split().str[0])
0 W1
1 W1
2 W2
Name: Postcode, dtype: object
subtotals = df.groupby(df['Postcode'].str.split().str[0]).count()
print (subtotals)
Postcode
Postcode
W1 2
W2 1
</code></pre>
<p>Check also <a href="https://stackoverflow.com/questions/33346591/what-is-the-difference-between-size-and-count-in-pandas">What is the difference between size and count in pandas?</a></p>
|
python|pandas
| 4
|
7,645
| 41,795,640
|
Merge pandas Data Frames based on conditions
|
<p>I have two files which show information about transactions on products.</p>
<p>Operations of type 1 </p>
<pre><code>d_op_1 = pd.DataFrame({'id':[1,1,1,2,2,2,3,3],'cost':[10,20,20,20,10,20,20,20],
'date':[2000,2006,2012,2000,2009,2009,2002,2006]})
</code></pre>
<p><a href="https://i.stack.imgur.com/NKthg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKthg.png" alt="enter image description here"></a></p>
<p>Operations of type 2</p>
<pre><code>d_op_2 = pd.DataFrame({'id':[1,1,2,2,3,4,5,5],'cost':[3000,3100,3200,4000,4200,3400,2000,2500],
'date':[2010,2015,2008,2010,2006,2010,1990,2000]})
</code></pre>
<p><a href="https://i.stack.imgur.com/GLceo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GLceo.png" alt="enter image description here"></a></p>
<p>I want to keep only those records where there have been operations of type 1 between two operations of type 2.
E.g. for the product with the id "1" there was an operation of type 1 (2012) between two operations of type 2 (2010, 2015), so I want to keep that record.</p>
<p>The desired output could be either this:</p>
<p><a href="https://i.stack.imgur.com/yuGbE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yuGbE.png" alt="enter image description here"></a></p>
<p>or this:</p>
<p><a href="https://i.stack.imgur.com/yEt5p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yEt5p.png" alt="enter image description here"></a></p>
<p>Using pd.merge() I got this result:</p>
<p><a href="https://i.stack.imgur.com/0dyfs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dyfs.png" alt="enter image description here"></a></p>
<p>How can I filter this to get the desired output? </p>
|
<p>You can use:</p>
<pre><code>#concat DataFrames together
df4 = pd.concat([d_op_1.rename(columns={'cost':'cost1'}),
d_op_2.rename(columns={'cost':'cost2'})]).fillna(0).astype(int)
#print (df4)
#find max and min dates per goups
df3 = d_op_2.groupby('id')['date'].agg({'start':'min','end':'max'})
#print (df3)
#join max and min dates to concated df
df = df4.join(df3, on='id')
df = df[(df.date > df.start) & (df.date < df.end)]
#reshape df for min, max and dated between them
df = pd.melt(df,
id_vars=['id','cost1'],
value_vars=['date','start','end'],
value_name='date')
#remove columns
df = df.drop(['cost1','variable'], axis=1) \
.drop_duplicates()
#merge to original, sorting
df = pd.merge(df, df4, on=['id', 'date']) \
.sort_values(['id','date']).reset_index(drop=True)
#reorder columns
df = df[['id','cost1','cost2','date']]
</code></pre>
<pre><code>print (df)
id cost1 cost2 date
0 1 0 3000 2010
1 1 20 0 2012
2 1 0 3100 2015
3 2 0 3200 2008
4 2 10 0 2009
5 2 20 0 2009
6 2 0 4000 2010
#if need lists for duplicates
df = df.groupby(['id','cost2', 'date'])['cost1'] \
.apply(lambda x: list(x) if len(x) > 1 else x.values[0]) \
.reset_index()
df = df[['id','cost1','cost2','date']]
print (df)
id cost1 cost2 date
0 1 20 0 2012
1 1 0 3000 2010
2 1 0 3100 2015
3 2 [10, 20] 0 2009
4 2 0 3200 2008
5 2 0 4000 2010
</code></pre>
|
python|pandas|dataframe
| 3
|
7,646
| 37,815,007
|
Pandas Series - set data between date ranges to a constant
|
<p>I have a very simple problem which might be a bit long-winded to explain but I'll do my best.</p>
<p>I have a pandas Series, <code>daily</code>, which has an entry for each day and a corresponding value of False (see <code>daily</code> in the code below).</p>
<p>I have two other Series objects, <code>start</code> and <code>end</code>, which hold dates such that <code>(start[i], end[i])</code> forms a date range pair. I would then like to set the values in daily that are between (start[i], end[i]) to True. Here is my code snippet to get a setup but I then don't know how to apply the date pairs to daily:</p>
<pre><code>import pandas as pd
import random
daily = pd.Series(False, pd.bdate_range("20150101", "today", freq="D"))
monthly = pd.Series(False, pd.bdate_range("20150101", "today", freq="MS") + pd.DateOffset(9))
start = [i+pd.DateOffset(random.choice([1,2,3,4])) for i in monthly.index]
end = [i+pd.DateOffset(random.choice([1,2,3,4])) for i in start]
# Now set everything in daily between (start[i], end[i]) for all i.
</code></pre>
<p>A few more details - <code>start[i]</code> is earlier than <code>end[i]</code> for any <code>i</code> (i.e. what you'd expect) and <code>(start[i], end[i])</code> and <code>(start[j], end[j])</code> do not intersect (they are disjoint).</p>
|
<p>You can use:</p>
<pre><code>daily = pd.Series(False, pd.bdate_range("20150101", "today", freq="D"))
monthly = pd.Series(False, pd.bdate_range("20150101", "today", freq="MS") + pd.DateOffset(9))
start = [i + pd.DateOffset(random.choice([1, 2, 3, 4])) for i in monthly.index]
end = [i + pd.DateOffset(random.choice([1, 2, 3, 4])) for i in start] # removed .index
for start, end in zip(start, end):
daily[start:end] = True
</code></pre>
<p>to get:</p>
<pre><code>2015-01-01 False
2015-01-02 False
2015-01-03 False
2015-01-04 False
2015-01-05 False
2015-01-06 False
2015-01-07 False
2015-01-08 False
2015-01-09 False
2015-01-10 False
2015-01-11 True
2015-01-12 True
2015-01-13 True
2015-01-14 True
2015-01-15 True
2015-01-16 False
2015-01-17 False
2015-01-18 False
2015-01-19 False
2015-01-20 False
2015-01-21 False
2015-01-22 False
2015-01-23 False
2015-01-24 False
2015-01-25 False
2015-01-26 False
2015-01-27 False
2015-01-28 False
2015-01-29 False
2015-01-30 False
...
2016-05-16 False
2016-05-17 False
2016-05-18 False
2016-05-19 False
2016-05-20 False
2016-05-21 False
2016-05-22 False
2016-05-23 False
2016-05-24 False
2016-05-25 False
2016-05-26 False
2016-05-27 False
2016-05-28 False
2016-05-29 False
2016-05-30 False
2016-05-31 False
2016-06-01 False
2016-06-02 False
2016-06-03 False
2016-06-04 False
2016-06-05 False
2016-06-06 False
2016-06-07 False
2016-06-08 False
2016-06-09 False
2016-06-10 False
2016-06-11 False
2016-06-12 True
2016-06-13 True
2016-06-14 False
Freq: D, dtype: bool
</code></pre>
|
python|datetime|pandas
| 1
|
7,647
| 38,038,393
|
File size increases after converting from .mat files to .txt files
|
<p>I have a lot of .mat files which contain information about the radial part of several different wavefunctions and some other information about an atom. I have successfully extracted the wavefunction part and used numpy.savetxt() to save it into a .txt file, but the size of the file increases dramatically.
After I ran </p>
<pre><code> du -ch wfkt_X_rb87_n=40_L=11_J=0_step=0.001.mat
440K wfkt_X_rb87_n=40_L=11_J=0_step=0.001.mat
du -ch wfkt_X_rb87_n=40_L=12_J=0_step=0.001.txt
2,9M wfkt_X_rb87_n=40_L=12_J=0_step=0.001.txt
</code></pre>
<p>Ignore the L=11 and L=12 difference; the sizes of the wavefunctions are almost the same, but the file size increased by more than 6 times. I want to know the reason why and, ideally, a way to decrease the size of the .txt files.
Here is the code I use to convert the files:</p>
<pre><code> import scipy.io as sio
import os
import pickle
import numpy as np
import glob as gb
files=gb.glob('wfkt_X_rb*.mat')
for filet in files:
print filet
mat=sio.loadmat(filet)
wave=mat['wavefunction'][0]
J=mat['J']
L=mat['L']
n=mat['n']
xmax=mat['xmax'][0][0]
xmin=mat['xmin'][0][0]
xstep=mat['xstep'][0][0]
energy=mat['energy'][0][0]
name=filet.replace('.mat','.txt')
name=name.replace('rb','Rb')
x=np.linspace(xmin, xmax, num=len(wave), endpoint=False)
Data=np.transpose([x,wave])
np.savetxt(name,Data)
os.remove(filet)
with open(name, "a") as f:
f.write(str(energy)+" "+str(xstep)+"\n")
f.write(str(xmin)+" "+str(xmax))
</code></pre>
<p>and the format of the data file needed is :</p>
<pre><code> 2.700000000000000000e+01 6.226655250941872093e-04
2.700099997457605738e+01 6.232789496263042460e-04
2.700199994915211121e+01 6.238928333406641843e-04
2.700299992372816860e+01 6.245071764542571872e-04
2.700399989830422243e+01 6.251219791839867897e-04
2.700499987288027981e+01 6.257372417466700075e-04
2.700599984745633364e+01 6.263529643590372287e-04
</code></pre>
<p>If you need more information, feel free to ask! Thanks in advance.</p>
|
<p><code>.mat</code> is a binary format whereas <code>numpy.savetxt()</code> writes a plain text file. The binary representation of a double precision number (IEEE 754 double precision) takes 8 bytes. By default, numpy saves this as plain text in the format <code>0.000000000000000000e+00</code> resulting in 24 bytes.</p>
<p>There are a number of additional effects which influence the resulting file size, e.g. structural overhead of the file format, compression, and the format you use for writing the plain text (number of decimal digits). However, in your case, I suspect that the main effect is just the difference between a binary and a plain text representation of the numbers.</p>
<p>If you want to decrease the file size, you should use a different output format. Possible options are:</p>
<ul>
<li><p>write a zipped text file:</p>
<pre><code>import gzip
with gzip.open('data.txt.gz', 'wb') as f:
numpy.savetxt(f, myarray)
</code></pre></li>
<li><p>Save as <code>.mat</code> again. See <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.savemat.html" rel="nofollow">scipy.io.savemat()</a></p></li>
<li>Write a proprietary binary numpy format (<code>.npy</code>). See <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html#numpy.save" rel="nofollow">numpy.save()</a></li>
<li>Write a proprietary binary compressed numpy format (<code>.npz</code>). See <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez_compressed.html#numpy.savez_compressed" rel="nofollow">numpy.savez_compressed()</a></li>
<li>If you have very large amounts of structured data, consider using the <a href="http://www.h5py.org/" rel="nofollow">HDF5 file format</a>.</li>
<li>If you need to write your own binary format use <a href="https://docs.python.org/3/library/struct.html#struct.pack" rel="nofollow">struct.pack()</a> and write the resulting bytes to a file.</li>
</ul>
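<p>For illustration, a minimal sketch of the main options side by side (assuming <code>data</code> is a two-column float array like the one in the question):</p>
<pre><code>import numpy as np

data = np.random.rand(10000, 2)             # stand-in for the [x, wave] array

np.savetxt('data.txt', data)                # plain text, largest
np.savetxt('data.txt.gz', data)             # savetxt gzips automatically for .gz names
np.save('data.npy', data)                   # raw binary, 8 bytes per double
np.savez_compressed('data.npz', data=data)  # compressed binary
</code></pre>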
<p>Which option to choose depends on your situation: Who will have to read the data afterwards? How important is the compression factor? Is your data just one single array or is the structure more complex?</p>
|
python|numpy|filesize|file-type|mat
| 3
|
7,648
| 31,368,918
|
With Pandas in Python, how do I sort by two columns which are created by the agg function?
|
<p>For this sort of data</p>
<pre><code> author cat val
0 author1 category2 15
1 author2 category4 9
2 author3 category1 7
3 author4 category1 9
4 author5 category2 11
</code></pre>
<p>I want to get</p>
<pre><code> cat mean count
category2 13 2
category1 8 2
category4 9 1
</code></pre>
<p>I thought I was getting good at Pandas and wrote</p>
<pre><code>most_expensive_standalone.groupby('cat').apply(['mean', 'count']).sort(['count', 'mean'])
</code></pre>
<p>but got</p>
<pre><code> File "/home/mike/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 3862, in _intercept_function
return _func_table.get(func, fnc)
TypeError: unhashable type: 'list'
</code></pre>
|
<p>You should use <code>.agg</code> instead <code>.apply</code> if you just want to pass two aggregate functions <code>mean</code> and <code>count</code> to your data. Also, since you've applied two functions on the same column <code>val</code>, it will introduce a multi-level column index. So before sorting on newly created columns <code>mean</code> and <code>count</code>, you need to select its outer level <code>val</code> first.</p>
<pre><code>most_expensive_standalone.groupby('cat').agg(['mean', 'count'])['val'].sort(['mean', 'count']
mean count
cat
category1 8 2
category4 9 1
category2 13 2
</code></pre>
<p>Follow-ups:</p>
<pre><code># just perform groupby and .agg will give you this
most_expensive_standalone.groupby('cat').agg(['mean', 'count'])
val
mean count
cat
category1 8 2
category2 13 2
category4 9 1
</code></pre>
<p>Select <code>val</code> column</p>
<pre><code>most_expensive_standalone.groupby('cat').agg(['mean', 'count'])['val']
mean count
cat
category1 8 2
category2 13 2
category4 9 1
</code></pre>
<p>And finally call <code>.sort(['mean', 'count'])</code></p>
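<p>Note that on newer pandas <code>.sort</code> has been removed; the equivalent call there would be:</p>
<pre><code>most_expensive_standalone.groupby('cat').agg(['mean', 'count'])['val'].sort_values(['mean', 'count'])
</code></pre>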
|
python|pandas
| 3
|
7,649
| 31,593,157
|
Convert custom class to standard Python type
|
<p>I was working with a <code>numpy</code> array called <code>predictions</code>. I was playing around with the following code:</p>
<pre><code>print type(predictions)
print list(predictions)
</code></pre>
<p>The output was:</p>
<pre><code><type 'numpy.ndarray'>`
[u'yes', u'no', u'yes', u'yes', u'yes']
</code></pre>
<p>I was wondering how <code>numpy</code> managed to build their <code>ndarray</code> class so that it could be converted to a list not with their own <code>list</code> function, but with the standard Python function. </p>
<p>Python version: 2.7, Numpy version: 1.9.2</p>
|
<blockquote>
<p>I have answered from the pure Python perspective below, but <code>numpy</code>'s
arrays are actually implemented in C - see e.g. <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/arrayobject.c#L1760" rel="nofollow">the <code>array_iter</code>
function</a>.</p>
</blockquote>
<p>The <a href="https://docs.python.org/2/library/functions.html#func-list" rel="nofollow">documentation</a> defines the argument to <code>list</code> as an <code>iterable</code>; <code>new_list = list(something)</code> works a little bit like:</p>
<pre><code>new_list = []
for element in something:
new_list.append(element)
</code></pre>
<p>(or, in a list comprehension: <code>new_list = [element for element in something]</code>). Therefore to implement this behaviour for a custom class, you need to define the <a href="https://docs.python.org/2/reference/datamodel.html#object.__iter__" rel="nofollow"><code>__iter__</code> magic method</a>:</p>
<pre><code>>>> class Demo(object):
def __iter__(self):
return iter((1, 2, 3))
>>> list(Demo())
[1, 2, 3]
</code></pre>
<p>Note that conversion to other types will require <a href="https://docs.python.org/2/reference/datamodel.html#object.__complex__" rel="nofollow">different</a> <a href="https://docs.python.org/2/reference/datamodel.html#object.__str__" rel="nofollow">methods</a>.</p>
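<p>For example, a sketch of what that might look like for <code>str()</code> and <code>int()</code>:</p>
<pre><code>>>> class Demo(object):
        def __iter__(self):
            return iter((1, 2, 3))
        def __str__(self):   # used by str(Demo())
            return 'Demo(1, 2, 3)'
        def __int__(self):   # used by int(Demo())
            return 6
>>> str(Demo())
'Demo(1, 2, 3)'
>>> int(Demo())
6
</code></pre>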
|
python|numpy|casting
| 2
|
7,650
| 31,675,610
|
Csr_matrix.dot vs. Numpy.dot
|
<p>I have a large (<em>n=50000</em>) block diagonal <code>csr_matrix</code> <strong>M</strong> representing the adjacency matrices of a set of graphs. I have to multiply <strong>M</strong> by a dense <code>numpy.array</code> <strong>v</strong> several times. Hence I use <code>M.dot(v)</code>.</p>
<p>Surprisingly, I have discovered that first converting <strong>M</strong> to <code>numpy.array</code> and then using <code>numpy.dot</code> is much faster.</p>
<p>Any ideas why this is the case?</p>
|
<p>I don't have enough memory to hold a <code>50000x50000</code> dense matrix in memory and multiply it by a <code>50000</code> vector. But find here some tests with lower dimensionality.</p>
<p>Setup:</p>
<pre><code>import numpy as np
from scipy.sparse import csr_matrix
def make_csr(n, N):
rows = np.random.choice(N, n)
cols = np.random.choice(N, n)
data = np.ones(n)
return csr_matrix((data, (rows, cols)), shape=(N,N), dtype=np.float32)
</code></pre>
<p>The code above generates sparse matrices with <code>n</code> non-zero elements in a <code>NxN</code> matrix.</p>
<p>Matrices:</p>
<pre><code>N = 5000
# Sparse matrices
A = make_csr(10*10, N) # ~100 non-zero
B = make_csr(100*100, N) # ~10000 non-zero
C = make_csr(1000*1000, N) # ~1000000 non-zero
D = make_csr(5000*5000, N) # ~25000000 non-zero
E = csr_matrix(np.random.randn(N,N), dtype=np.float32) # non-sparse
# Numpy dense arrays
An = A.todense()
Bn = B.todense()
Cn = C.todense()
Dn = D.todense()
En = E.todense()
b = np.random.randn(N)
</code></pre>
<p>Timings:</p>
<pre><code>>>> %timeit A.dot(b) # 9.63 µs per loop
>>> %timeit An.dot(b) # 41.6 ms per loop
>>> %timeit B.dot(b) # 41.3 µs per loop
>>> %timeit Bn.dot(b) # 41.2 ms per loop
>>> %timeit C.dot(b) # 3.2 ms per loop
>>> %timeit Cn.dot(b) # 41.2 ms per loop
>>> %timeit D.dot(b) # 35.4 ms per loop
>>> %timeit Dn.dot(b) # 43.2 ms per loop
>>> %timeit E.dot(b) # 55.5 ms per loop
>>> %timeit En.dot(b) # 43.4 ms per loop
</code></pre>
<ul>
<li>For <em>highly sparse</em> matrices (<code>A</code> and <code>B</code>) it is more than <code>1000x</code> times faster. </li>
<li>For <em>not very sparse</em> matrices (<code>C</code>), it still gets <code>10x</code> speedup. </li>
<li>For <em>almost non-sparse</em> matrix (<code>D</code> will have some <code>0</code> due to repetition in the indices, but not many probabilistically speaking), it is still faster, not much, but faster.</li>
<li>For a truly <em>non-sparse</em> matrix (<code>E</code>), the operation is slower, but not much slower.</li>
</ul>
<p><strong>Conclusion:</strong> the speedup you get depends on the <em>sparsity</em> of your matrix, but with <code>N = 5000</code> sparse matrices are <em>always</em> faster (as long as they have some <em>zero</em> entries).</p>
<p>I can't try it for <code>N = 50000</code> due to memory issues. You can try the above code and see what it is like for you with that <code>N</code>.</p>
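<p>Since your <strong>M</strong> is block diagonal, you could also build it directly in sparse form and keep it there. A sketch (the block contents here are made up):</p>
<pre><code>from scipy.sparse import block_diag

# hypothetical adjacency matrices of the individual graphs
blocks = [np.ones((3, 3)), np.eye(2), np.ones((4, 4))]
M = block_diag(blocks, format='csr')

v = np.random.randn(M.shape[1])
result = M.dot(v)
</code></pre>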
|
python|numpy|scipy|sparse-matrix
| 4
|
7,651
| 31,616,695
|
Convert Json data to Python DataFrame
|
<p>This is my first time accessing an API / working with json data so if anyone can point me towards a good resource for understanding how to work with it I'd really appreciate it. </p>
<p>Specifically though, I have json data in this form:</p>
<pre><code>{"result": { "code": "OK", "msg": "" },"report_name":"DAILY","columns":["ad","ad_impressions","cpm_cost_per_ad","cost"],"data":[{"row":["CP_CARS10_LR_774470","966","6.002019","5.797950"]}],"total":["966","6.002019","5.797950"],"row_count":1}
</code></pre>
<p>I understand this structure but I don't know how to get it into a DataFrame properly.</p>
|
<p>Looking at the structure of your <code>json</code>, presumably you will have several rows for your data and in my opinion it will make more sense to build the dataframe yourself.</p>
<p>This code uses <code>columns</code> and <code>data</code> to build a dataframe:</p>
<pre><code>In [12]:
import json
import pandas as pd
with open('... path to your json file ...') as fp:
for line in fp:
obj = json.loads(line)
columns = obj['columns']
data = obj['data']
l = []
for d in data:
l += [d['row']]
df = pd.DataFrame(l, index=None, columns=columns)
df
Out[12]:
ad ad_impressions cpm_cost_per_ad cost
0 CP_CARS10_LR_774470 966 6.002019 5.797950
</code></pre>
<p>As for the rest of the data, in your <code>json</code>, I guess you could e.g. use the totals for checking your dataframe,</p>
<pre><code>In [14]:
sums = df.sum(axis=0)
obj['total']
for i in range(0,3):
if (obj['total'][i] != sums[i+1]):
print "error in total"
In [15]:
if obj['row_count'] != len(df.index):
print "error in row count"
</code></pre>
<p>As for the rest of the data in the <code>json</code>, it is difficult for me to know if anything else should be done.</p>
<p>Hope it helps.</p>
|
python|json|pandas
| 1
|
7,652
| 47,858,023
|
"unfair" pandas categorical.from_codes
|
<p>I have to assign a label to categorical data. Let us consider the iris example:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
iris = load_iris()
print "targets: ", np.unique(iris.target)
print "targets: ", iris.target.shape
print "target_names: ", np.unique(iris.target_names)
print "target_names: ", iris.target_names.shape
</code></pre>
<p>It will be printed:</p>
<blockquote>
<p>targets: [0 1 2] targets: (150L,) target_names: ['setosa'
'versicolor' 'virginica'] target_names: (3L,)</p>
</blockquote>
<p>In order to produce the desired labels I use pandas.Categorical.from_codes:</p>
<pre><code>print pd.Categorical.from_codes(iris.target, iris.target_names)
</code></pre>
<blockquote>
<p>[setosa, setosa, setosa, setosa, setosa, ..., virginica, virginica,
virginica, virginica, virginica] Length: 150 Categories (3, object):
[setosa, versicolor, virginica]</p>
</blockquote>
<p>Let us try it on a different example:</p>
<pre><code># I define new targets
target = np.array([123,123,54,123,123,54,2,54,2])
target = np.array([1,1,3,1,1,3,2,3,2])
target_names = np.array(['paglia','gioele','papa'])
#---
print "targets: ", np.unique(target)
print "targets: ", target.shape
print "target_names: ", np.unique(target_names)
print "target_names: ", target_names.shape
</code></pre>
<p>If I try again to transform the categorical values in labels:</p>
<pre><code>print pd.Categorical.from_codes(target, target_names)
</code></pre>
<p>I get the error message:</p>
<blockquote>
<p>C:\Users\ianni\Anaconda2\lib\site-packages\pandas\core\categorical.pyc
in from_codes(cls, codes, categories, ordered)
459
460 if len(codes) and (codes.max() >= len(categories) or codes.min() < -1):
--> 461 raise ValueError("codes need to be between -1 and "
462 "len(categories)-1")
463 </p>
<p>ValueError: codes need to be between -1 and len(categories)-1</p>
</blockquote>
<p>Do you know why?</p>
|
<blockquote>
<p>Do you know why?</p>
</blockquote>
<p>If you will take a closer look at the error traceback:</p>
<pre><code>In [128]: pd.Categorical.from_codes(target, target_names)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-128-c2b4f6ac2369> in <module>()
----> 1 pd.Categorical.from_codes(target, target_names)
~\Anaconda3_5.0\envs\py36\lib\site-packages\pandas\core\categorical.py in from_codes(cls, codes, categories, ordered)
619
620 if len(codes) and (codes.max() >= len(categories) or codes.min() < -1):
--> 621 raise ValueError("codes need to be between -1 and "
622 "len(categories)-1")
623
ValueError: codes need to be between -1 and len(categories)-1
</code></pre>
<p>you'll see that the following condition is met:</p>
<pre><code>codes.max() >= len(categories)
</code></pre>
<p>in your case:</p>
<pre><code>In [133]: target.max() >= len(target_names)
Out[133]: True
</code></pre>
<p>In other words <code>pd.Categorical.from_codes()</code> expects <code>codes</code> as sequential numbers starting from <code>0</code> up to <code>len(categories) - 1</code></p>
<p><strong>Workaround:</strong></p>
<pre><code>In [173]: target
Out[173]: array([123, 123, 54, 123, 123, 54, 2, 54, 2])
</code></pre>
<p>helper dicts:</p>
<pre><code>In [174]: mapping = dict(zip(np.unique(target), np.arange(len(target_names))))
In [175]: mapping
Out[175]: {2: 0, 54: 1, 123: 2}
In [176]: reverse_mapping = {v:k for k,v in mapping.items()}
In [177]: reverse_mapping
Out[177]: {0: 2, 1: 54, 2: 123}
</code></pre>
<p>building categorical Series:</p>
<pre><code>In [178]: ser = pd.Categorical.from_codes(pd.Series(target).map(mapping), target_names)
In [179]: ser
Out[179]:
[papa, papa, gioele, papa, papa, gioele, paglia, gioele, paglia]
Categories (3, object): [paglia, gioele, papa]
</code></pre>
<p>reverse mapping:</p>
<pre><code>In [180]: pd.Series(ser.codes).map(reverse_mapping)
Out[180]:
0 123
1 123
2 54
3 123
4 123
5 54
6 2
7 54
8 2
dtype: int64
</code></pre>
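<p>As a more compact alternative, a sketch relying on <code>np.unique</code> sorting its output (which matches the mapping above):</p>
<pre><code>In [181]: codes = np.unique(target, return_inverse=True)[1]

In [182]: pd.Categorical.from_codes(codes, target_names)
Out[182]:
[papa, papa, gioele, papa, papa, gioele, paglia, gioele, paglia]
Categories (3, object): [paglia, gioele, papa]
</code></pre>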
|
python|pandas|categorical-data|python-iris
| 1
|
7,653
| 47,758,806
|
How do I merge one nested dictionary and one simple dictionary while changing the value in python?
|
<p>I have two dictionaries like this:</p>
<pre><code>dict1 = {'foo': 3.0, 'bar': 2.69, 'baz': 3.0}
dict2 = {'foo': {'11-abc1': 0.47}, 'bar': {'11-abc1': 0.30, '12-abc1': 0.0}, 'baz': {'14-abc1': 0.47}}
</code></pre>
<p>Now I want to merge these two dictionaries while multiplying the values. The output should look like this:</p>
<pre><code>dict3 = {'foo': {'11-abc1': 3.0 * 0.47}, 'bar': {'11-abc1':
2.69 * 0.30, '12-abc1': 2.69 * 0.0}, 'baz': {'14-abc1': 3.0 * 0.47}}
</code></pre>
<p>What would be the most efficient way to do that?</p>
|
<p>You can try this:</p>
<pre><code>dict1 = {'foo': 3.0, 'bar': 2.69, 'baz': 3.0}
dict2 = {'foo': {'11-abc1': 0.47}, 'bar': {'11-abc1': 0.30, '12-abc1': 0.0}, 'baz': {'14-abc1': 0.47}}
new_dict = {a:{c:d*dict1[a] for c, d in b.items()} for a, b in dict2.items()}
</code></pre>
<p>Output:</p>
<pre><code>{'bar': {'12-abc1': 0.0, '11-abc1': 0.8069999999999999}, 'foo': {'11-abc1': 1.41}, 'baz': {'14-abc1': 1.41}}
</code></pre>
<p>Using string formatting to show how code works:</p>
<pre><code>dict1 = {'foo': 3.0, 'bar': 2.69, 'baz': 3.0}
dict2 = {'foo': {'11-abc1': 0.47}, 'bar': {'11-abc1': 0.30, '12-abc1': 0.0}, 'baz': {'14-abc1': 0.47}}
new_dict = {a:{c:"{}*{}".format(d, dict1[a]) for c, d in b.items()} for a, b in dict2.items()}
</code></pre>
<p>Output:</p>
<pre><code>{'bar': {'12-abc1': '0.0*2.69', '11-abc1': '0.3*2.69'}, 'foo': {'11-abc1': '0.47*3.0'}, 'baz': {'14-abc1': '0.47*3.0'}}
</code></pre>
<p>Edit: Summing the final values:</p>
<pre><code>import itertools
s = {'bar': {'12-abc1': 0.0, '11-abc1': 0.8069999999999999}, 'foo': {'11-abc1': 1.41}, 'baz': {'14-abc1': 1.41}}
new_s = list(itertools.chain(*[b.items() for a, b in s.items()]))
final_data = {a:sum(i[-1] for i in b) for a, b in itertools.groupby(sorted(new_s, key=lambda x:x[0]), key=lambda x:x[0])}
</code></pre>
<p>Output:</p>
<pre><code>{'12-abc1': 0.0, '11-abc1': 2.2169999999999996, '14-abc1': 1.41}
</code></pre>
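<p>A simpler way to get the same sums, as a sketch using <code>collections.Counter</code> (updating a Counter with a mapping adds the values):</p>
<pre><code>from collections import Counter

totals = Counter()
for inner in s.values():
    totals.update(inner)

print(dict(totals))
# {'12-abc1': 0.0, '11-abc1': 2.2169999999999996, '14-abc1': 1.41}
</code></pre>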
|
python|pandas|dictionary|merge|zip
| 0
|
7,654
| 47,878,894
|
Subclass a DataFrame without mutating original object
|
<p>As <a href="https://stackoverflow.com/a/45246289/7954504">mentioned a while back by @piRSquared</a>, subclassing a pandas DataFrame in the way suggested in the docs, or by <a href="https://github.com/geopandas/geopandas/blob/master/geopandas/geodataframe.py#L18" rel="nofollow noreferrer">geopandas' GeoDataFrame</a>, opens up the possibility of unwanted mutation of the original object:</p>
<pre><code>class SubFrame(pd.DataFrame):
def __init__(self, *args, **kwargs):
attr = kwargs.pop('attr', None)
super(SubFrame, self).__init__(*args, **kwargs)
self.attr = attr
@property
def _constructor(self):
return SubFrame
def somefunc(self):
"""Add some extended functionality."""
pass
df = pd.DataFrame([[1, 2], [3, 4]])
sf = SubFrame(df, attr=1)
sf[:] = np.nan # Modifies `df`
print(df)
# 0 1
# 0 NaN NaN
# 1 NaN NaN
</code></pre>
<p>The error-prone "fix" is to pass a copy at instantiation:</p>
<pre><code>sf = SubFrame(df.copy(), attr=1)
</code></pre>
<p>But this is easily subject to user error. <strong>My question is: can I create a copy of <code>self</code> (the passed DataFrame) within <code>class SubFrame</code> itself?
How would I go about doing this?</strong></p>
<p>If the answer is "No," I appreciate that too, so that I can scrap this endeavor before wasting time on it.</p>
<hr>
<h3>A polite request</h3>
<p>The pandas docs <a href="https://pandas.pydata.org/pandas-docs/stable/internals.html#subclassing-pandas-data-structures" rel="nofollow noreferrer">suggest two alternatives</a>:</p>
<ol>
<li>Extensible method chains with <code>pipe</code></li>
<li>Composition</li>
</ol>
<p>I've already thoroughly considered both of these, so I'd appreciate if answers can refrain from an opinionated discussion with generic reasons about why these 2 alternatives are better/safer.</p>
|
<p><code>self</code> is not the dataframe you are passing in; the passed frame arrives as an argument. Regardless, you can perform the copy in the <code>__init__</code> method.</p>
<p>For example</p>
<pre><code>import copy
def __init__(self, farg, **kwargs):
farg = copy.deepcopy(farg)
attr = kwargs.pop('attr', None)
super().__init__(farg)
self.attr = attr
</code></pre>
<p>This should show you that <code>farg</code> is the df you are passing.</p>
<p>I don't know much about subclassing a DataFrame, so if you want to keep your original <code>__init__</code> signature you can copy all the <code>*args</code>. I can't speak to the safety of this approach.</p>
<pre><code>def __init__(self, *args, **kwargs):
cargs = tuple(copy.deepcopy(arg) for arg in args)
attr = kwargs.pop('attr', None)
super().__init__(*cargs, **kwargs)
self.attr = attr
</code></pre>
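<p>As a quick check, a sketch reusing the question's example with the copying <code>__init__</code> above:</p>
<pre><code>df = pd.DataFrame([[1, 2], [3, 4]])
sf = SubFrame(df, attr=1)
sf[:] = np.nan  # no longer touches df, since __init__ deep-copied its input

print(df)
#    0  1
# 0  1  2
# 1  3  4
</code></pre>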
|
python|python-3.x|pandas|inheritance
| 1
|
7,655
| 47,876,828
|
How to optimize a nested for loop in Python
|
<p>So I am trying to write a python function to return a metric called the Mielke-Berry R value. The metric is calculated like so:
<a href="https://i.stack.imgur.com/mL77o.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mL77o.png" alt="enter image description here"></a></p>
<p>The current code I have written works, but because of the sum of sums in the equation, the only thing I could think of to solve it was to use a nested for loop in Python, which is very slow...</p>
<p>Below is my code:</p>
<pre><code>def mb_r(forecasted_array, observed_array):
"""Returns the Mielke-Berry R value."""
assert len(observed_array) == len(forecasted_array)
y = forecasted_array.tolist()
x = observed_array.tolist()
total = 0
for i in range(len(y)):
for j in range(len(y)):
total = total + abs(y[j] - x[i])
total = np.array([total])
return 1 - (mae(forecasted_array, observed_array) * forecasted_array.size ** 2 / total[0])
</code></pre>
<p>The reason I converted the input arrays to lists is because I have heard (haven't yet tested) that indexing a numpy array using a python for loop is very slow. </p>
<p>I feel like there may be some sort of numpy function to solve this much faster, anyone know of anything?</p>
|
<p>Here's one vectorized way to leverage <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html" rel="noreferrer"><code>broadcasting</code></a> to get <code>total</code> -</p>
<pre><code>np.abs(forecasted_array[:,None] - observed_array).sum()
</code></pre>
<p>To accept both lists and arrays alike, we can use NumPy builtin for the outer subtraction, like so -</p>
<pre><code>np.abs(np.subtract.outer(forecasted_array, observed_array)).sum()
</code></pre>
<p>We can also make use of <a href="http://numexpr.readthedocs.io/en/latest/intro.html#how-it-works" rel="noreferrer"><code>numexpr</code> module</a> for faster <code>absolute</code> computations and perform <code>summation-reductions</code> in one single <code>numexpr evaluate</code> call and as such would be much more memory efficient, like so -</p>
<pre><code>import numexpr as ne
forecasted_array2D = forecasted_array[:,None]
total = ne.evaluate('sum(abs(forecasted_array2D - observed_array))')
</code></pre>
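<p>Putting it together, a vectorized version of the whole function might look like this (a sketch; the MAE is computed inline instead of calling your <code>mae</code> helper):</p>
<pre><code>def mb_r(forecasted_array, observed_array):
    """Returns the Mielke-Berry R value (vectorized sketch)."""
    n = forecasted_array.size
    total = np.abs(forecasted_array[:,None] - observed_array).sum()
    mae = np.abs(forecasted_array - observed_array).mean()
    return 1 - mae * n ** 2 / total
</code></pre>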
|
python|arrays|numpy
| 9
|
7,656
| 47,954,564
|
Printing numpy with different position in the column
|
<p>I have following numpy array</p>
<pre><code>import numpy as np
np.random.seed(20)
np.random.rand(20).reshape(5, 4)
array([[ 0.5881308 , 0.89771373, 0.89153073, 0.81583748],
[ 0.03588959, 0.69175758, 0.37868094, 0.51851095],
[ 0.65795147, 0.19385022, 0.2723164 , 0.71860593],
[ 0.78300361, 0.85032764, 0.77524489, 0.03666431],
[ 0.11669374, 0.7512807 , 0.23921822, 0.25480601]])
</code></pre>
<p>For each column I would like to slice it in positions: </p>
<pre><code>position_for_slicing=[0, 3, 4, 4]
</code></pre>
<p>So I will get the following array:</p>
<pre><code>array([[ 0.5881308 , 0.85032764, 0.23921822, 0.81583748],
       [ 0.03588959, 0.7512807 , 0, 0],
       [ 0.65795147, 0, 0, 0],
       [ 0.78300361, 0, 0, 0],
       [ 0.11669374, 0, 0, 0]])
</code></pre>
<p>Is there a fast way to do this? I know I can use a for loop over each column, but I was wondering if there is a more elegant way to do this.</p>
|
<p>If "elegant" means "no loop" the following would qualify, but probably not under many other definitions (<code>arr</code> is your input array):</p>
<pre><code>m, n = arr.shape
arrf = np.asanyarray(arr, order='F')
padded = np.r_[arrf, np.zeros_like(arrf)]
assert padded.flags['F_CONTIGUOUS']
expnd = np.lib.stride_tricks.as_strided(padded, (m, m+1, n), padded.strides[:1] + padded.strides)
expnd[:, [0,3,4,4], range(4)]
# array([[ 0.5881308 , 0.85032764, 0.23921822, 0.25480601],
# [ 0.03588959, 0.7512807 , 0. , 0. ],
# [ 0.65795147, 0. , 0. , 0. ],
# [ 0.78300361, 0. , 0. , 0. ],
# [ 0.11669374, 0. , 0. , 0. ]])
</code></pre>
<p>Please note that <code>order='C'</code> and then <code>'C_CONTIGUOUS'</code> in the assertion also works. My hunch is that 'F' could be a bit faster because the indexing then operates on contiguous slices.</p>
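<p>For comparison, a plain-loop version of the same gather (a sketch assuming <code>arr</code> and <code>position_for_slicing</code> from the question), which reproduces the result above:</p>
<pre><code>out = np.zeros_like(arr)
for j, p in enumerate(position_for_slicing):
    out[:arr.shape[0] - p, j] = arr[p:, j]  # shift column j up by p, zero-pad below
</code></pre>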
|
python|numpy
| 3
|
7,657
| 47,635,788
|
np.insert error in numpy version '1.13.3'
|
<p>I try to insert specific values in an array at given indices, with the use of <code>np.insert</code>. Before I used Numpy 1.12 and the code was running fine but with the new Numpy 1.13.3 the following error occurs</p>
<pre><code>ValueError: shape mismatch: value array of shape () could not be broadcast to indexing result of shape ()
</code></pre>
<p>My Code:</p>
<pre><code>intial_array= 1D numpy array
indices= 1D numpy array
values_to_insert= 1D numpy array
mt_new2=np.insert(intial_array, indices,values_to_insert)
</code></pre>
<p>Is this problem known, or does someone know how to solve this issue?</p>
|
<p>Early <code>numpy</code> can replicate <code>values</code> as needed to fit the index size:</p>
<pre><code>>>> x = numpy.arange(10)
>>> numpy.insert(x,[1,3,4,5],[10,20])
array([ 0, 10, 1, 2, 20, 3, 10, 4, 20, 5, 6, 7, 8, 9])
>>> numpy.__version__
'1.12.0'
</code></pre>
<p>New numpy expects matching size:</p>
<pre><code>In [81]: x = np.arange(10)
In [82]: np.insert(x, [1,3,4,5],[10,20])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-82-382864de5db0> in <module>()
----> 1 np.insert(x, [1,3,4,5],[10,20])
/usr/local/lib/python3.5/dist-packages/numpy/lib/function_base.py in insert(arr, obj, values, axis)
5085 slobj[axis] = indices
5086 slobj2[axis] = old_mask
-> 5087 new[slobj] = values
5088 new[slobj2] = arr
5089
ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (4,)
In [83]: np.insert(x, [1,3,4,5],[10,20,10,20])
Out[83]: array([ 0, 10, 1, 2, 20, 3, 10, 4, 20, 5, 6, 7, 8, 9])
</code></pre>
<p>It looks like the earlier version used <code>resize</code>, explicitly or implicitly,</p>
<pre><code>In [85]: np.insert(x, [1,3,4,5],np.resize([10,20,30],4))
Out[85]: array([ 0, 10, 1, 2, 20, 3, 30, 4, 10, 5, 6, 7, 8, 9])
</code></pre>
|
numpy|insert
| 0
|
7,658
| 59,017,002
|
Transforming every training points without using dataloaders
|
<p>I just found out that even though <code>torchvision.datasets.MNIST</code> accepts the <code>transform</code> parameter, ...</p>
<pre class="lang-py prettyprint-override"><code>transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]
)
mnist_trainset = datasets.MNIST(
root="mnist", train=True, download=True, transform=transform
)
</code></pre>
<p>...the value obtained from the <code>mnist_trainset.data</code> variable is still not transformed (observe that data in the range (0, 255) should be normalised to (-1, 1) according to the <code>transform</code>'s behaviour).</p>
<pre class="lang-py prettyprint-override"><code>[102] mnist_testset.data[0].min()
tensor(0, dtype=torch.uint8)
[103] mnist_testset.data[0].max()
tensor(255, dtype=torch.uint8)
</code></pre>
<p>I tried calling <code>mnist_trainset.transform</code> over <code>mnist_trainset.data</code>, but the output shape is not what I intended</p>
<pre><code>[104] mnist_testset.data.shape
torch.Size([10000, 28, 28])
[105] transform(mnist_testset.data).shape
torch.Size([3, 28, 28])
# Should be [10000, 28, 28] as identical to the original data.
</code></pre>
<p>I can use the <code>DataLoader</code> to load the entire training set and set the shuffling to <code>False</code>, but I think it's too overkilling. What is the best way to transform the <em>entire</em> <code>mnist_testset</code> using the defined <code>transformer</code> object, in order to obtain the intended transformed image, without having to manually transform it one-by-one?</p>
|
<p>Transforms are invoked when you sample the dataset using its <code>__getitem__</code> method. So you could do something like the following to get all the transformed data.</p>
<pre><code>imgs_transformed = []
for img, label in mnist_testset:
imgs_transformed.append(img[0,:,:])
</code></pre>
<p>or using list comprehension</p>
<pre><code>imgs_transformed = [img[0,:,:] for img, label in mnist_testset]
</code></pre>
<hr>
<p>If you want to turn this into one big tensor you can use <code>torch.stack</code></p>
<pre><code>data_transformed = torch.stack(imgs_transformed, dim=0)
</code></pre>
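<p>Alternatively, if you only need this particular <code>ToTensor</code> + <code>Normalize</code> pipeline, you could apply it to the raw tensor directly; a sketch reproducing the two transforms by hand:</p>
<pre><code>data = mnist_testset.data.float().div(255)  # ToTensor: uint8 [0, 255] -> float [0, 1]
data_transformed = data.sub(0.5).div(0.5)   # Normalize((0.5,), (0.5,)): -> [-1, 1]

print(data_transformed.shape)               # torch.Size([10000, 28, 28])
</code></pre>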
|
python|pytorch
| 1
|
7,659
| 58,988,771
|
passing a dataframe to a thread
|
<p>Inside a function, I created a local dataframe with name resamp_df. I am trying to pass this local dataframe to a thread function as an argument for running some algorithm on it. Here is my code:</p>
<p>main function</p>
<pre><code>if readyForOrder:
order_thread = threading.Thread(target=order_management, name='thread1', args=resamp_df)
order_thread.start()
</code></pre>
<p>thread function</p>
<pre><code>def order_management(df):
global readyForOrder, order_id, order_id_counter, ltp
if df.shape[0] >= 3:
readyForOrder = False
old_ltp = df.iat[-2, 0]
new_ltp = df.iat[-1, 0]
</code></pre>
<p>But my thread is not running. It generates following errors:</p>
<pre><code>TypeError: order_management() takes 1 positional argument but 7 were given
</code></pre>
<p>Any suggestions to make it work?</p>
<p>Thanks in advance</p>
|
<p>Pass in arguments as a tuple</p>
<pre><code>args=(resamp_df, )
</code></pre>
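<p><code>threading.Thread</code> unpacks <code>args</code> with <code>*</code> when it calls the target, and unpacking a DataFrame yields its column names; presumably <code>resamp_df</code> has seven columns, hence "7 were given". A sketch of the corrected call:</p>
<pre><code>order_thread = threading.Thread(target=order_management, name='thread1',
                                args=(resamp_df,))
order_thread.start()
</code></pre>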
|
python|pandas|dataframe|python-multithreading
| 1
|
7,660
| 58,770,711
|
python3 - pandas determine if events occurrence are statistically significant
|
<p>I have a large dataset that looks like the below. I'd like to know if there is a significant statistical difference between when the event occurs vs when it does not occur. The assumption here is that the higher the percent change the more meaningful/better. </p>
<p>In another dataset the "event occurs" column is "True, False, Neutral". (Please ignore the index as that is the default pandas index.)</p>
<pre><code> index event occurs percent change
148 False 11.27
149 True 14.56
150 False 10.35
151 False 6.07
152 False 21.14
153 False 7.26
154 False 7.07
155 False 5.37
156 True 2.75
157 False 7.12
158 False 7.24
</code></pre>
<p>What's the best way of determining the significance when it's "True/False" or when it's "True/False/Neutral"?</p>
|
<blockquote>
<p>Load Packages, Set Globals, Make Data.</p>
</blockquote>
<pre><code>import scipy.stats as stats
import numpy as np
import pandas as pd
n = 60
stat_sig_thresh = 0.05
event_perc = pd.DataFrame({"event occurs": np.random.choice([True,False],n),
"percent change": [i*.1 for i in np.random.randint(1,1000,n)]})
</code></pre>
<blockquote>
<p>Determine if Distribution is Normal</p>
</blockquote>
<pre><code>stat_sig = event_perc.groupby("event occurs").apply(lambda x: stats.normaltest(x))
stat_sig = pd.DataFrame(stat_sig)
stat_sig = pd.DataFrame(stat_sig[0].values.tolist(), index=stat_sig.index).reset_index()
stat_sig.loc[(stat_sig.pvalue <= stat_sig_thresh), "Normal"] = False
stat_sig["Normal"].fillna("True",inplace=True)
>>>stat_sig
event occurs statistic pvalue Normal
0 False [2.9171920993203915] [0.23256255191146755] True
1 True [2.938332679486047] [0.23011724484588764] True
</code></pre>
<blockquote>
<p>Determine Statistical Significance</p>
</blockquote>
<pre><code>normal = [bool(i) for i in stat_sig.Normal.unique().tolist()]
rvs1 = event_perc["percent change"][event_perc["event occurs"] == True]
rvs2 = event_perc["percent change"][event_perc["event occurs"] == False]
if (len(normal) == 1) & (normal[0] == True):
print("the distributions are normal")
if stats.ttest_ind(rvs1,rvs2).pvalue >= stat_sig_thresh:
# we cannot reject the null hypothesis of identical average scores
print("we can't say whether there is statistically significant difference")
else:
# we reject the null hypothesis of equal averages
print("there is a statisically significant difference")
elif (len(normal) == 1) & (normal[0] == False):
print("the distributions are not normal")
if stats.wilcoxon(rvs1,rvs2).pvalue >= stat_sig_thresh:
# we cannot reject the null hypothesis of identical average scores
print("we can't say whether there is statistically significant difference")
else:
# we reject the null hypothesis of equal averages
print("there is a statisically significant difference")
else:
print("samples are drawn from different distributions")
the distributions are normal
we can't say whether there is a statistically significant difference
</code></pre>
|
python-3.x|pandas
| 3
|
7,661
| 70,371,902
|
Pandas: Looking to create a multiple nested dictionary
|
<p>Here is what I am looking to generate:</p>
<pre><code>{A: {1: [1,2], 2: [2,5]},
B: {3: [1,4], 4: [7,8]}}
</code></pre>
<p>Here is the df:</p>
<pre><code>id sub_id
A 1
A 2
B 3
B 4
</code></pre>
<p>and I have the following array:</p>
<pre><code>[[1,2],
[2,5],
[1,4],
[7,8]]
</code></pre>
<p>So far, I have the following code:</p>
<pre><code>sub_id_array_dict = dict(zip(df['sub_id'].to_list(), arrays))
</code></pre>
<p>this results in the following dictionary:</p>
<pre><code>{1: [1,2],
2: [2,5],
3: [1,4],
4: [7,8]}
</code></pre>
<p>Now, I feel like i've gone down the wrong path as I'm not sure how to get roll it up to the <code>id</code> level.</p>
<p>Any help would be much appreciated.</p>
|
<p>With a simple loop this can be done like:</p>
<pre><code>from collections import defaultdict
sub_id_array_dict = defaultdict(dict)
for i, s, a in zip(df['id'].to_list(), df['sub_id'].to_list(), arrays):
sub_id_array_dict[i][s] = a
</code></pre>
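<p>For example, with the question's data (a quick check of the loop above):</p>
<pre><code>df = pd.DataFrame({'id': ['A', 'A', 'B', 'B'], 'sub_id': [1, 2, 3, 4]})
arrays = [[1, 2], [2, 5], [1, 4], [7, 8]]

# after running the loop above:
print(dict(sub_id_array_dict))
# {'A': {1: [1, 2], 2: [2, 5]}, 'B': {3: [1, 4], 4: [7, 8]}}
</code></pre>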
|
python|json|pandas
| 0
|
7,662
| 56,394,009
|
Python cx_Oracle loading CSV using executemany() gives "Required argument 'parameters' (pos 2) not found"
|
<p>My intention is to load the csv file in a Oracle table using Python.</p>
<ol>
<li><p>I'm truncating table, if data already exists - This is working</p>
</li>
<li><p>I'm checking the count for testing purpose - This is working</p>
</li>
<li><p>I'm trying to Insert data from file in to Oracle. I'm getting issue:</p>
<blockquote>
<p>'Required argument 'parameters' (pos 2) not found'</p>
</blockquote>
</li>
</ol>
<p>Code:</p>
<pre><code>import cx_Oracle
import pandas as pd
column_names = ['Col1','Col2','Col3','Col4','Col5','Col6','Col7','Col8','Col9']
df = pd.read_csv(r"C:\Users\file.dat", names=column_names, sep='|')
dsn_tns = cx_Oracle.makedsn('*', '*', sid='*')
conn = cx_Oracle.connect(user='*', password='*', dsn=dsn_tns)
c = conn.cursor()
c.execute('Truncate table Table_Name')
c.execute('select count(1) from Table_Name')
for row in c:
print(row)
for lines in df:
c = conn.cursor()
print("I want to print lines")
res = c.executemany("""Insert into Code_Extract (OPERATION,
LIST_COUNTRY_ID,LIST_CODE,SOURCE_SYSTEM_CODE,CODE_USUAL,
INT_LIST_CODE,INT_MDM_CODE,CODE_STATUS,MDM_CODE)
Values(df['col1'],df['Col2'],df['Col3'],df['Col4'],df['Col5'],df['Col6'],df['Col7'],df['Col8'],df['Col9'])""")
conn.commit()
c.execute('select count(1) from Table_Name')
for row in c:
print(row)
c.close()
conn.commit()
conn.close()
</code></pre>
<p>My expectation is whenever I receive files it should be automatically loaded in to Oracle from the specified path.</p>
|
<p>The parameters to executemany() are indeed required. See the <a href="https://cx-oracle.readthedocs.io/en/latest/cursor.html#Cursor.executemany" rel="nofollow noreferrer">documentation</a> for more information.</p>
<p>You've put the parameters inside the SQL, but instead they should be specified as bind variables, as in <code>values (:1, :2, :3, :4, :5, :6, :7, :8, :9)</code>. You would then pass the data as a list of sequences to executemany(). Hopefully that's clear! You can see an example <a href="https://github.com/oracle/python-cx_Oracle/blob/master/samples/BindInsert.py" rel="nofollow noreferrer">here</a>.</p>
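<p>A minimal sketch of what that could look like here (assuming <code>df</code> holds the nine CSV columns in table order):</p>
<pre><code>rows = list(df.itertuples(index=False, name=None))  # list of plain 9-tuples

c.executemany(
    """Insert into Code_Extract (OPERATION, LIST_COUNTRY_ID, LIST_CODE,
       SOURCE_SYSTEM_CODE, CODE_USUAL, INT_LIST_CODE, INT_MDM_CODE,
       CODE_STATUS, MDM_CODE)
       Values (:1, :2, :3, :4, :5, :6, :7, :8, :9)""",
    rows)
conn.commit()
</code></pre>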
|
python|database|oracle|pandas|cx-oracle
| 0
|
7,663
| 55,830,420
|
invalid value error in get_report function
|
<pre><code>def get_report(analytics):
return analytics.reports().batchGet(
body={
'reportRequests':
[
{
'viewId': VIEW_ID,
'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
'metrics': [{'expression':i} for i in METRICS],
'dimensions': [{'name':j} for j in DIMENSIONS]
}
]
}
).execute()
</code></pre>
<blockquote>
<p>File "/home/mail_name/gaToPandas/lib/python3.5/site-packages/googleapiclient/http.py", line 851, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "Invalid value 'project-id@appspot.gserviceaccount.com' for viewId parameter."> </p>
</blockquote>
<p>How can I resolve this issue?
There is an error in step 3 of this site <a href="https://www.digishuffle.com/blogs/google-analytics-reporting-python/" rel="nofollow noreferrer">https://www.digishuffle.com/blogs/google-analytics-reporting-python/</a></p>
|
<blockquote>
<p>"Invalid value 'project-id@appspot.gserviceaccount.com' for viewId parameter."</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/WiH2z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WiH2z.png" alt="enter image description here"></a></p>
<p>The view Id should be the view id from Google Analytics which you wish to request data for not the service account name.</p>
<p><a href="https://i.stack.imgur.com/C41fb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C41fb.png" alt="enter image description here"></a></p>
|
pandas|google-api|google-analytics-api
| 0
|
7,664
| 55,922,680
|
Renaming files based on Dataframe content with Python and Pandas
|
<p>I am trying to read a <code>xlsx</code> file, compare all the reference numbers from a column to files inside a folder and if they correspond, rename them to an email associate with the reference number.</p>
<p><strong>Excel File</strong> has fields such as:</p>
<pre><code> Reference EmailAddress
1123 bob.smith@yahoo.com
1233 john.drako@gmail.com
1334 samuel.manuel@yahoo.com
... .....
</code></pre>
<p>My folder <code>applicants</code> just contains <strong>doc</strong> files named as the <strong>Reference</strong> column:</p>
<p><a href="https://i.stack.imgur.com/pagr1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pagr1.jpg" alt="enter image description here"></a></p>
<p>How can I compare the contents of the <code>applicantsCVs</code> folder, to the <strong>Reference</strong> field inside my excel file and if it matches, rename all of the files as the corresponding email address ?</p>
<p>Here is What I've tried so far:</p>
<pre><code>import os
import pandas as pd
dfOne = pd.read_excel('Book2.xlsx', na_values=['NA'], usecols = "A:D")
references = dfOne['Reference']
emailAddress = dfOne['EmailAddress']
cleanedEmailList = [x for x in emailAddress if str(x) != 'nan']
print(cleanedEmailList)
excelArray = []
filesArray = []
for root, dirs, files in os.walk("applicantCVs"):
for filename in files:
print(filename) #Original file name with type 1233.doc
reworkedFile = os.path.splitext(filename)[0]
filesArray.append(reworkedFile)
for entry in references:
excelArray.append(str(entry))
for i in excelArray:
if i in filesArray:
print(i, "corresponds to the file names")
</code></pre>
<p>I compare the reference names to the folder contents and print it out if it's the same:</p>
<pre><code> for i in excelArray:
if i in filesArray:
print(i, "corresponds to the file names")
</code></pre>
<p>I've tried to rename it with <code>os.rename(filename, cleanedEmailList )</code> but it didn't work because <code>cleanedEmailList</code> is an array of emails.</p>
<p>How can I match and rename the files?</p>
<p><strong>Update:</strong></p>
<pre><code>from os.path import dirname
import pandas as pd
from pathlib import Path
import os
dfOne = pd.read_excel('Book2.xlsx', na_values=['NA'], usecols = "A:D")
emailAddress = dfOne['EmailAddress']
dfOne['Reference'] = dfOne['Reference'].astype(str)
references = dict(dfOne.dropna(subset=["Reference", "EmailAddress"]).set_index("Reference")["EmailAddress"])
print(references)
files = Path("applicantCVs").glob("*")
for file in files:
new_name = references.get(file.stem, file.stem)
file.rename(file.with_name(f"{new_name}{file.suffix}"))
</code></pre>
|
<p>based on sample data:</p>
<pre><code>Reference EmailAddress
1123 bob.smith@yahoo.com
1233 john.drako@gmail.com
nan jane.smith#example.com
1334 samuel.manuel@yahoo.com
</code></pre>
<p>First you assemble a <code>dict</code> with the set of references as keys and the new names as values:</p>
<pre><code>references = dict(df.dropna(subset=["Reference","EmailAddress"]).set_index("Reference")["EmailAddress"])
</code></pre>
<blockquote>
<pre><code>{'1123': 'bob.smith@yahoo.com',
'1233': 'john.drako@gmail.com',
'1334': 'samuel.manuel@yahoo.com'}
</code></pre>
</blockquote>
<p>Note that the references are <code>str</code>s here. If they aren't in your original database, you can use <code>astype(str)</code></p>
<p>Then you use <code>pathlib.Path</code> to look for all the files in the data directory:</p>
<pre><code>files = Path("../data/renames").glob("*")
</code></pre>
<blockquote>
<pre><code>[WindowsPath('../data/renames/1123.docx'),
WindowsPath('../data/renames/1156.pptx'),
WindowsPath('../data/renames/1233.txt')]
</code></pre>
</blockquote>
<p>The renaming can be made very simple:</p>
<pre><code>for file in files:
new_name = references.get(file.stem, file.stem )
file.rename(file.with_name(f"{new_name}{file.suffix}"))
</code></pre>
<p>The <code>references.get</code> asks for the new filename, and if it doesn't find it, use the original stem.</p>
<blockquote>
<pre><code>[WindowsPath('../data/renames/1156.pptx'),
WindowsPath('../data/renames/bob.smith@yahoo.com.docx'),
WindowsPath('../data/renames/john.drako@gmail.com.txt')]
</code></pre>
</blockquote>
|
python|pandas
| 3
|
7,665
| 55,699,204
|
How to check if numpy array contains empty list
|
<p>Here is a sample code for data</p>
<pre><code>import numpy as np
myList1 = np.array([1,1,1,[1],1,1,[1],1,1])
myList2 = np.array([1,1,1,[],1,1,[],1,1])
</code></pre>
<p>To see if elements in myList1 equals to [1] I could do this:</p>
<pre><code>myList1 == [1]
</code></pre>
<p>But for myList2, to see if elements in myList2 equals to [] I COULDN'T do this:</p>
<pre><code>myList2 == []
</code></pre>
<p>I had to do:</p>
<pre><code>[x == [] for x in myList2]
</code></pre>
<p>Is there another way to look for elements in lists that will also handle empty lists? Some other function in numpy or python that I could use?</p>
|
<p>An array with a mix of numbers and lists (empty or not) is <code>object dtype</code>. This is practically a <code>list</code>; fast compiled <code>numpy</code> math no longer works. The only practical alternative to a list comprehension is <code>np.frompyfunc</code>.</p>
<p>Write a small function that can distinguish a number from a list and test the list's length, and apply that to the array. If it returns True for an empty list, then <code>np.where</code> will identify the locations.</p>
<pre><code>In [41]: myList1 = np.array([1,1,1,[1],1,1,[1],1,1])
...: myList2 = np.array([1,1,1,[],1,1,[],1,1])
</code></pre>
<p>Develop a function that returns True for a empty list, False otherwise:</p>
<pre><code>In [42]: len(1)
...
TypeError: object of type 'int' has no len()
In [43]: len([])
Out[43]: 0
In [44]: def foo(item):
...: try:
...: return len(item)==0
...: except TypeError:
...: pass
...: return False
...:
In [45]: foo([])
Out[45]: True
In [46]: foo([1])
Out[46]: False
In [47]: foo(1)
Out[47]: False
</code></pre>
<p>Apply it to the arrays:</p>
<pre><code>In [48]: f=np.frompyfunc(foo,1,1)
In [49]: f(myList1)
Out[49]:
array([False, False, False, False, False, False, False, False, False],
dtype=object)
In [50]: np.where(f(myList1))
Out[50]: (array([], dtype=int64),)
In [51]: np.where(f(myList2))
Out[51]: (array([3, 6]),)
</code></pre>
|
python|numpy
| 2
|
7,666
| 39,790,830
|
Getting a tuple in a Dataframe into multiple rows
|
<p>I have a Dataframe, which has two columns (Customer, Transactions).
The Transactions column is a tuple of all the transaction id's of that customer.</p>
<pre><code>Customer Transactions
1 (a,b,c)
2 (d,e)
</code></pre>
<p>I want to convert this into a dataframe, which has customer and transaction id's, like this.</p>
<pre><code>Customer Transactions
1 a
1 b
1 c
2 d
2 e
</code></pre>
<p>We can do it using loops, but is there a straightforward one- or two-line way of doing it?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow"><code>DataFrame</code></a> constructor:</p>
<pre><code>df = pd.DataFrame({'Customer':[1,2],
'Transactions':[('a','b','c'),('d','e')]})
print (df)
Customer Transactions
0 1 (a, b, c)
1 2 (d, e)
df1 = pd.DataFrame(df.Transactions.values.tolist(), index=df.Customer)
print (df1)
0 1 2
Customer
1 a b c
2 d e None
</code></pre>
<p>Then reshape with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a>:</p>
<pre><code>print (df1.stack().reset_index(drop=True, level=1).reset_index(name='Transactions'))
Customer Transactions
0 1 a
1 1 b
2 1 c
3 2 d
4 2 e
</code></pre>
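<p>On pandas 0.25 or newer, <code>DataFrame.explode</code> does this in one step (if your version has it):</p>
<pre><code>df.explode('Transactions').reset_index(drop=True)
</code></pre>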
|
python|pandas|dataframe|tuples|reshape
| 4
|
7,667
| 44,264,443
|
Why should be the function backward be called only on 1 element tensor or with gradients w.r.t to Variable?
|
<p>I am new to pytorch. I want to understand why we can't call the backward function on a variable containing a tensor of size, say, [2,2].
And if we do want to call it on a variable containing a tensor of size, say, [2,2], we have to do that by first defining a gradient tensor and then calling the backward function on the variable containing the tensor w.r.t. the defined gradients.</p>
|
<p>from the tutorial on <code>autograd</code> </p>
<blockquote>
<p>If you want to compute the derivatives, you can call .backward() on a
Variable. If Variable is a scalar (i.e. it holds a one element data),
you don’t need to specify any arguments to backward(), however if it
has more elements, you need to specify a grad_output argument that is
a tensor of matching shape.</p>
</blockquote>
<p>Basically, to start the chain rule you need a gradient AT THE OUTPUT, to get it going. In the event the output is a scalar loss function (which it usually is; normally you are beginning the backward pass at the loss variable), it's an implied value of 1.0.</p>
<p>from tutorial :</p>
<blockquote>
<p>let's backprop now out.backward() is equivalent to doing
out.backward(torch.Tensor([1.0]))</p>
</blockquote>
<p>But maybe you only want to update a subgraph (somewhere deep in the network), and the value of a <code>Variable</code> is a matrix of weights. Then you have to tell it where to begin. From one of their chief devs (somewhere in the links below):</p>
<p><strong>Yes, that's correct. We only support differentiation of scalar
functions, so if you want to start backward form a non-scalar value
you need to provide dout / dy</strong></p>
<p>The gradients argument </p>
<p><a href="https://discuss.pytorch.org/t/how-the-backward-works-for-torch-variable/907/8" rel="noreferrer">https://discuss.pytorch.org/t/how-the-backward-works-for-torch-variable/907/8</a> ok explanation</p>
<p><a href="https://stackoverflow.com/questions/43451125/pytorch-what-are-the-gradient-arguments">Pytorch, what are the gradient arguments</a> good explanation</p>
<p><a href="http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html" rel="noreferrer">http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html</a> tutorial</p>
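<p>A minimal sketch of both cases, using the current <code>requires_grad</code> tensor API rather than <code>Variable</code>:</p>
<pre><code>import torch

x = torch.ones(2, 2, requires_grad=True)

loss = (x * 3).sum()        # scalar output: implicit grad_output of 1.0
loss.backward()
print(x.grad)               # all 3s

x.grad.zero_()
y = x * 3                   # [2, 2] output: must supply dout/dy explicitly
y.backward(torch.ones(2, 2))
print(x.grad)               # all 3s again
</code></pre>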
|
python|pytorch
| 5
|
7,668
| 69,641,623
|
Replace column value based other column values pyspark data frame
|
<p>I have the following spark data frame.</p>
<pre><code>Date_1 Value Date_2
20-10-2021 1 Date
20-10-2021 2 Date
21-10-2021 3 Date
23-10-2021 4 Date
</code></pre>
<p>I would like to fill <code>Date_2</code> values by adding <code>Date_1 + (Value-1)</code>.</p>
<p>The output that I would like to see is the following.</p>
<pre><code>Date_1 Value Date_2
20-10-2021 1 20-10-2021
20-10-2021 2 21-10-2021
21-10-2021 3 23-10-2021
23-10-2021 4 26-10-2021
</code></pre>
<p>I have tried this using pyspark.</p>
<pre><code>import pyspark.sql.functions as F
df = df.withColumn('Date_2', F.date_add(df['Date_1'], (df['Value'] -1)).show()
</code></pre>
<p>But I am getting <code>TypeError: Column is not iterable</code>.</p>
<p>Can anyone help with this?</p>
|
<p>You need to pass the SQL function DATE_ADD through <code>F.expr</code>, like this:</p>
<pre><code>(
df
.withColumn("Value", F.col("Value").cast("int"))
.withColumn("Date_2",
F.expr('DATE_ADD(Date_1, Value - 1)')
)
)
</code></pre>
<p>DATE_ADD(Date_1, Value - 1) adds <em>Value</em> - 1 days to each date in the <em>Date_1</em> column.</p>
<p>Additionally (if it isn't already), the <em>Value</em> column should be of type INT. If it were, for example, DOUBLE, an <em>AnalysisException</em> would occur.</p>
|
python|pandas|dataframe|pyspark
| 1
|
7,669
| 69,535,075
|
write filename when iterating through dataframes
|
<p>I am passing to a function several pandas <code>df</code>:</p>
<pre><code>def write_df_to_disk(*args):
for df in args:
df.to_csv('/transformed/'+str(df)+'.table',sep='\t')
write_df_to_disk(k562,hepg2,hoel)
</code></pre>
<p><code>df</code> here will be a pandas dataframe.</p>
<p>How can I assign the different parameters of <code>*args</code> to a string like the above <code>'/transformed/'+str(df)+'.table',sep='\t'</code>?</p>
<p>I want to have three files written to disk with the following path:</p>
<pre><code>`/transformed/k562.table`
`/transformed/hepg2.table`
`transformed/hoel.table`
</code></pre>
|
<p>OK, I think I managed by passing the names in as a separate list:</p>
<pre><code>def write_df_to_disk(l,*args):
for l,df in zip(l,args):
df.to_csv('transformed/'+l+'.table',sep='\t')
write_df_to_disk(['k562','hepg2','hoel'],k562,hepg2,hoel)
</code></pre>
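<p>An alternative sketch that avoids passing the names separately, by letting keyword arguments carry them:</p>
<pre><code>def write_df_to_disk(**named_dfs):
    for name, df in named_dfs.items():
        df.to_csv('transformed/' + name + '.table', sep='\t')

write_df_to_disk(k562=k562, hepg2=hepg2, hoel=hoel)
</code></pre>
<p>This keeps each filename and its dataframe together in a single argument, so the two lists cannot drift out of sync.</p>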
|
python|pandas
| 0
|
7,670
| 69,508,015
|
using Numpy for Kmean Clustering
|
<p>I'm new to machine learning and want to build a K-means algorithm with k = 2, and I'm struggling to calculate the new centroids. Here is my code for k-means:</p>
<pre><code>def euclidean_distance(x: np.ndarray, y: np.ndarray):
# x shape: (N1, D)
# y shape: (N2, D)
# output shape: (N1, N2)
dist = []
for i in x:
for j in y:
new_list = np.sqrt(sum((i - j) ** 2))
dist.append(new_list)
distance = np.reshape(dist, (len(x), len(y)))
return distance
def kmeans(x, centroids, iterations=30):
assignment = None
for i in iterations:
dist = euclidean_distance(x, centroids)
assignment = np.argmin(dist, axis=1)
for c in range(len(y)):
centroids[c] = np.mean(x[assignment == c], 0) #error here
return centroids, assignment
</code></pre>
<p>I have input <code>x = [[1., 0.], [0., 1.], [0.5, 0.5]]</code> and <code>y = [[1., 0.], [0., 1.]]</code>, and
<code>distance</code> is an array that looks like this:</p>
<pre><code>[[0. 1.41421356]
[1.41421356 0. ]
[0.70710678 0.70710678]]
</code></pre>
<p>and when I run <code>kmeans(x,y)</code> it returns an error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_40086/2170434798.py in <module>
      5
      6 for c in range(len(y)):
----> 7     centroids[c] = (x[classes == c], 0)
      8     print(centroids)

TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>Does anyone know how to fix it or improve my code? Thank you in advance!</p>
|
<p>Changing inputs to NumPy arrays should get rid of errors:</p>
<pre class="lang-py prettyprint-override"><code>x = np.array([[1., 0.], [0., 1.], [0.5, 0.5]])
y = np.array([[1., 0.], [0., 1.]])
</code></pre>
<p>Also, it seems you must change <code>for i in iterations</code> to <code>for i in range(iterations)</code> in the <code>kmeans</code> function.</p>
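<p>For reference, a sketch of the loop with those fixes applied (also using <code>len(centroids)</code> instead of the undefined <code>y</code>; empty clusters are not handled):</p>
<pre><code>def kmeans(x, centroids, iterations=30):
    centroids = centroids.copy()
    assignment = None
    for _ in range(iterations):
        dist = euclidean_distance(x, centroids)
        assignment = np.argmin(dist, axis=1)
        for c in range(len(centroids)):
            centroids[c] = x[assignment == c].mean(axis=0)
    return centroids, assignment
</code></pre>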
|
python|numpy|k-means
| 1
|
7,671
| 41,157,482
|
Group average of a numpy array?
|
<p>I have a large numpy array, with dimensions <code>[1]</code>. I want to find out a sort of "group average". More specifically,</p>
<p>Let my array be <code>[1,2,3,4,5,6,7,8,9,10]</code> and let my <code>group_size</code> be <code>3</code>. Hence, I will average the first three elements, the 4th to 6th elements, and the 7th to 9th elements, and average the remaining elements (only 1 in this case) to get <code>[2, 5, 8, 10]</code>. Needless to say, I need a vectorized implementation.</p>
<p>Finally, my purpose is reducing the number of points in a noisy graph to smoothen out a general pattern having a lot of oscillation. Is there a <strong>correct</strong> way to do this? I would like the answer to both questions, in case they have a different answer. Thanks!</p>
|
<p>A good smoothing function is the <a href="https://en.wikipedia.org/wiki/Kernel_(image_processing)" rel="nofollow noreferrer">kernel convolution</a>. What it does is it multiplies a small array in a moving window over your larger array. </p>
<p>Say you chose a standard smoothing kernel of <code>1/3 * [1,1,1]</code> and apply it to an array (a kernel needs to be odd-numbered and normalized). Lets apply it to <code>[1,2,2,7,3,4,9,4,5,6]</code>: </p>
<p>The centre of the kernel begins on the first <code>2</code>. It then averages itself and its neighbours, then moves on. The result is this:
<code>[1.67, 3.67, 4.0, 4.67, 5.33, 5.67, 6.0, 5.0]</code></p>
<p>Note that the array is missing the first and last element.</p>
<p>You can do this with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow noreferrer">numpy.convolve</a>, for example:</p>
<pre><code>import numpy as np
a = np.array([1,2,2,7,3,4,9,4,5,6])
k = np.array([1,1,1])/3
smoothed = np.convolve(a, k, 'valid')
</code></pre>
<p>The effect of this is that your central value is smoothed with the values from its neighbours. You can change the convolution kernel by increasing it in size, 5 for example <code>[1,1,1,1,1]/5</code>, or give it a gaussian, which will stress the central members more than the outside ones. Read the wikipedia article.</p>
<p><strong>EDIT</strong></p>
<p>This works to get a block average as the question asks for:</p>
<pre><code>import numpy as np
a = [1,2,3,4,5,6,7,8,9,10]
size = 3
new_a = []
i = 0
while i < len(a):
    val = np.mean(a[i:i+size])
new_a.append(val)
i+=size
print(new_a)
[2.0, 5.0, 8.0, 10.0]
</code></pre>
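<p>A vectorized version of the same block average (a sketch; the full blocks are reshaped and the remainder is appended separately):</p>
<pre><code>a = np.array([1,2,3,4,5,6,7,8,9,10])
size = 3

n_full = len(a) // size * size
means = a[:n_full].reshape(-1, size).mean(axis=1)
if n_full < len(a):
    means = np.append(means, a[n_full:].mean())

print(means)
# [ 2.  5.  8. 10.]
</code></pre>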
|
python|numpy|matplotlib
| 3
|
7,672
| 54,083,349
|
Efficient PyTorch DataLoader collate_fn function for inputs of various dimensions
|
<p>I'm having trouble writing a custom <code>collate_fn</code> function for the PyTorch <code>DataLoader</code> class. I need the custom function because my inputs have different dimensions.</p>
<p>I'm currently trying to write the baseline implementation of the <a href="https://arxiv.org/abs/1712.06957" rel="noreferrer">Stanford MURA paper</a>. The dataset has a set of labeled studies. A study may contain more than one image. I created a custom <code>Dataset</code>class that stacks these multiple images using <code>torch.stack</code>.</p>
<p>The stacked tensor is then provided as input to the model and the list of outputs is averaged to obtain a single output. This implementation works fine with <code>DataLoader</code> when <code>batch_size=1</code>. However, when I try to set the <code>batch_size</code> to 8, as is the case in the original paper, the <code>DataLoader</code> fails since it uses <code>torch.stack</code> to stack the batch and the inputs in my batch have variable dimensions (since each study can have multiple number of images).</p>
<p>In order to fix this, I tried to implement my custom <code>collate_fn</code> function.</p>
<pre><code>def collate_fn(batch):
imgs = [item['images'] for item in batch]
targets = [item['label'] for item in batch]
targets = torch.LongTensor(targets)
return imgs, targets
</code></pre>
<p>Then in my training epoch loop, I loop through each batch like this:</p>
<pre><code>for image, label in zip(*batch):
label = label.type(torch.FloatTensor)
# wrap them in Variable
image = Variable(image).cuda()
label = Variable(label).cuda()
# forward
output = model(image)
output = torch.mean(output)
loss = criterion(output, label, phase)
</code></pre>
<p>However, this does not give me any improved timings on the epoch and still takes as long as it did with a batch size of only 1. I've also tried setting the batch size to 32 and that does not improve the timings either.</p>
<p>Am I doing something wrong?
Is there a better approach to this?</p>
|
<p>Very interesting problem! If I understand you correctly (and also checking the abstract of the paper), you have <em>40,561 images from 14,863 studies, where each study is manually labeled by radiologists as either normal or abnormal.</em></p>
<p>I believe the reason why you had the issue you faced was, say, for example, you created a stack for,</p>
<ol>
<li>study A - 12 images</li>
<li>study B - 13 images</li>
<li>study C - 7 images</li>
<li>study D - 1 image, etc.</li>
</ol>
<p>And you try to use a batch size of 8 during training which would fail when it gets to study D.</p>
<p>Therefore, is there a reason why we want to average the list of outputs in a study to fit a single label? Otherwise, I would simply collect all 40,561 images, assign the same label to all images from the same study (such that list of outputs in A is compared with a list of 12 labels).</p>
<p>Therefore, with a single dataloader you can shuffle across studies (if desired) and use the desired batch size during training.</p>
<p>I see this question has been around for a while, I hope it helps someone in the future :)</p>
|
python-3.x|machine-learning|pytorch|mini-batch
| 1
|
7,673
| 38,166,804
|
Subset a 2D array by a 2D array in python
|
<p>I want to use a 2D array to subset another 2D array(they have the same length), for example:</p>
<pre><code>import numpy as np
tmp = np.array([[0.33, 0.67], [0.67, 0.33]])
index = np.array([[1], [0]])
</code></pre>
<p>What I want is something like this:</p>
<pre><code>In[91]: np.array([tmp[i][index[i]] for i in range(len(index))])
Out[91]:
array([[ 0.67],
[ 0.67]])
</code></pre>
<p>It works, but is there a smarter/more efficient way to do this?</p>
|
<p>You can create the indices of the rows using the <code>shape</code> of your <code>index</code> array and the <code>index</code> itself as the columns, then use simple indexing to get the intended items:</p>
<pre><code>>>> tmp[(np.array(index.shape[::-1])-1)[:,None], index]
array([[ 0.67],
[ 0.67]])
</code></pre>
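<p>If you prefer something more explicit, the same result can be obtained by pairing an explicit row index with each column index, a common fancy-indexing idiom (in newer NumPy versions, <code>np.take_along_axis(tmp, index, axis=1)</code> does the same thing):</p>
<pre><code>>>> rows = np.arange(len(tmp))[:, None]   # shape (2, 1): one row index per row
>>> tmp[rows, index]
array([[ 0.67],
       [ 0.67]])
</code></pre>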
|
python|arrays|numpy
| 0
|
7,674
| 38,101,009
|
Changing multiple column names but not all of them - Pandas Python
|
<p>I would like to know if there is a function to change specific column names but without selecting a specific name or without changing all of them.</p>
<p>I have the code:</p>
<pre><code>df=df.rename(columns = {'nameofacolumn':'newname'})
</code></pre>
<p>But with it I have to manually change each one of them, writing each name.
Also, to change all of them I have</p>
<pre><code>df.columns = ['name1','name2','etc']
</code></pre>
<p>I would like to have a function to change columns 1 and 3 without writing their names just stating their location.</p>
|
<p>say you have a dictionary of the new column names and the name of the column they should replace:</p>
<pre><code>df.rename(columns={'old_col':'new_col', 'old_col_2':'new_col_2'}, inplace=True)
</code></pre>
<p>But, if you don't have that, and you only have the indices, you can do this:</p>
<pre><code>column_indices = [1,4,5,6]
new_names = ['a','b','c','d']
old_names = df.columns[column_indices]
df.rename(columns=dict(zip(old_names, new_names)), inplace=True)
</code></pre>
|
python|pandas|dataframe
| 58
|
7,675
| 66,040,089
|
Pandas aggregations in python
|
<p>I have the following data set. I want to create a dataframe that contains all teams and include the number of games played, wins, losses, and draws, and average point differential in 2017 (Y = 17).</p>
<pre><code>
Date Y HomeTeam AwayTeam HomePoints AwayPoints
2014-08-16 14 Arsenal Crystal Palace 2 1
2014-08-16 14 Leicester Everton 2 2
2014-08-16 14 Man United Swansea 1 2
2014-08-16 14 QPR Hull 0 1
2014-08-16 14 Stoke Aston Villa 0 1
</code></pre>
<p>I wrote the following code:</p>
<pre><code>df17 = df[df['Y'] == 17]
df17['differential'] = abs(df['HomePoints'] - df['AwayPoints'])
df17['home_wins'] = np.where(df17['HomePoints'] > df17['AwayPoints'], 1, 0)
df17['home_losses'] = np.where(df17['HomePoints'] < df17['AwayPoints'], 1, 0)
df17['home_ties'] = np.where(df17['HomePoints'] == df17['AwayPoints'], 1, 0)
df17['game_count'] = 1
df17.groupby("HomeTeam").agg({"differential": np.mean, "home_wins": np.sum, "home_losses": np.sum, "home_ties": np.sum, "game_count": np.sum}).sort_values(["differential"], ascending = False)
</code></pre>
<p>But I don't think this is correct as I'm only accounting for the home team. Does someone have a cleaner method?</p>
|
<p>Melting the dataframe gives us two new lines per old line: one for the <code>HomeTeam</code> and one for the <code>AwayTeam</code>.</p>
<p>Please find the documentation for the <code>melt</code> method here : <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.melt.html</a></p>
<pre class="lang-py prettyprint-override"><code>df = pd.melt(df, id_vars=['Date', 'Y', 'HomePoints', 'AwayPoints'], value_vars=['HomeTeam', 'AwayTeam'])
df = df.rename({'value': 'Team', 'variable': 'Home/Away'}, axis=1)
df['Differential'] = df['Home/Away'].replace({'HomeTeam': 1, 'AwayTeam': -1}) * (df['HomePoints'] - df['AwayPoints'])
def count_wins(x):
return (x > 0).sum()
def count_losses(x):
return (x < 0).sum()
def count_draws(x):
return (x == 0).sum()
df = df.groupby('Team')['Differential'].agg(['count', count_wins, count_losses, count_draws, 'sum'])
df = df.rename({'count': 'Number of games', 'count_wins': 'Wins', 'count_losses': 'Losses', 'count_draws': 'Draws', 'sum': 'Differential'}, axis=1)
</code></pre>
|
python|pandas
| 0
|
7,676
| 46,549,825
|
Tensorflow convolution
|
<p>I'm trying to perform a convolution (<code>conv2d</code>) on images of variable dimensions. I have those images in the form of a 1-D array and I want to perform a convolution on them, but I have a lot of trouble with the shapes.
This is my code of the <code>conv2d</code>:</p>
<pre><code>tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
</code></pre>
<p>where <code>x</code> is the input image.
The error is:</p>
<pre><code>ValueError: Shape must be rank 4 but is rank 1 for 'Conv2D' (op: 'Conv2D') with input shapes: [1], [5,5,1,32].
</code></pre>
<p>I think I might reshape <code>x</code>, but I don't know the right dimensions. When I try this code:</p>
<pre><code>x = tf.reshape(self.x, shape=[-1, 5, 5, 1]) # example
</code></pre>
<p>I get this:</p>
<pre><code>ValueError: Dimension size must be evenly divisible by 25 but is 1 for 'Reshape' (op: 'Reshape') with input shapes: [1], [4] and with input tensors computed as partial shapes: input[1] = [?,5,5,1].
</code></pre>
|
<p>You can't use <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv2d" rel="nofollow noreferrer"><code>conv2d</code></a> with a tensor of rank 1. Here's the description from the doc:</p>
<blockquote>
<p>Computes a 2-D convolution given <strong>4-D</strong> input and filter tensors.</p>
</blockquote>
<p>These four dimensions are <code>[batch, height, width, channels]</code> (as Engineero already wrote). </p>
<p>If you don't know the dimensions of the image in advance, tensorflow allows you to provide a <em>dynamic</em> shape:</p>
<pre><code>x = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='x')
with tf.Session() as session:
    print(session.run(x, feed_dict={x: data}))
</code></pre>
<p>In this example, a 4-D tensor <code>x</code> is created, but only the number of channels is known statically (3), everything else is determined on runtime. So you can pass this <code>x</code> into <code>conv2d</code>, even if the size is dynamic.</p>
<p><strong>But there's another problem</strong>. You didn't say your task, but if you're building a convolutional neural network, I'm afraid, you'll need to know the size of the input to determine the size of FC layer after all pooling operations - this size must be static. If this is the case, I think the best solution is actually to scale your inputs to a common size before passing it into a convolutional network.</p>
<p>UPD:</p>
<p>Since it wasn't clear, here's how you can reshape any image into 4-D array.</p>
<pre><code>a = np.zeros([50, 178, 3])
shape = a.shape
print(shape)   # prints (50, 178, 3)
a = a.reshape([1] + list(shape))
print(a.shape)  # prints (1, 50, 178, 3)
</code></pre>
|
python|image|tensorflow|reshape|convolution
| 3
|
7,677
| 58,224,316
|
Convert panda column to a string
|
<p>I am trying to run the below script to add two columns to the left of a file; however it keeps giving me </p>
<pre><code>valueError: header must be integer or list of integers
</code></pre>
<p>Below is my code:</p>
<pre><code>import pandas as pd
import numpy as np
read_file = pd.read_csv("/home/ex.csv",header='true')
df=pd.DataFrame(read_file)
def add_col(x):
df.insert(loc=0, column='Creation_DT', value=pd.to_datetime('today'))
df.insert(loc=1, column='Creation_By', value="Sean")
df.to_parquet("/home/sample.parquet")
add_col(df)
</code></pre>
<p>Any ways to make the creation_dt column a string?</p>
|
<p>According to the pandas docs, <code>header</code> is the row number(s) to use as the column names and the start of the data, and it must be an int or a list of ints. So you have to pass <code>header=0</code> to the <code>read_csv</code> method. </p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html</a></p>
<p>Also, pandas automatically creates a dataframe from the read file; you don't need to do it additionally. Just use</p>
<pre><code>df = pd.read_csv("/home/ex.csv", header=0)
</code></pre>
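<p>As a side note, if you also want the <code>Creation_DT</code> column stored as a string rather than a Timestamp (as the question title suggests), you could convert it after inserting it, for example:</p>
<pre><code>df['Creation_DT'] = df['Creation_DT'].astype(str)
</code></pre>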
|
python-3.x|pandas|numpy|dataframe
| 0
|
7,678
| 58,496,987
|
Filter forward col 1 without iteration
|
<p>I am dealing with a "waterfall" structure DataFrame in Pandas, Python.</p>
<p>Column 1 is full, while the rest of the data set is mostly empty representing series available for only a subset of the total period considered:</p>
<pre><code>Instrument AUPRATE. AIB0411 AIB0511 AIB0611 ... AIB1120 AIB1220 AIB0121 AIB0221
Field ...
Date ...
2011-03-31 4.75 4.730 4.710 4.705 ... NaN NaN NaN NaN
2011-04-29 4.75 4.745 4.750 4.775 ... NaN NaN NaN NaN
2011-05-31 4.75 NaN 4.745 4.755 ... NaN NaN NaN NaN
2011-06-30 4.75 NaN NaN 4.745 ... NaN NaN NaN NaN
2011-07-29 4.75 NaN NaN NaN ... NaN NaN NaN NaN
... ... ... ... ... ... ... ... ...
2019-05-31 1.50 NaN NaN NaN ... NaN NaN NaN NaN
2019-06-28 1.25 NaN NaN NaN ... 0.680 NaN NaN NaN
2019-07-31 1.00 NaN NaN NaN ... 0.520 0.530 NaN NaN
2019-08-30 1.00 NaN NaN NaN ... 0.395 0.405 0.405 NaN
2019-09-30 1.00 NaN NaN NaN ... 0.435 0.445 0.445 0.45
</code></pre>
<p>What I would like to do is to push the values from "AUPRATE" to the start of the data in every row (such that they effectively represent the zeroth observation). Where the AUPRATE values are not adjacent to the dataset, they should be replaced with NaN.</p>
<p>I could probably write a junky loop to do this but I was wondering if there was an efficient way of achieving the same outcome.</p>
<p>I am very much a novice in pandas and Python. Thank you in advance.</p>
<p>[edit]</p>
<p>Desired output:</p>
<pre><code>Instrument AUPRATE. AIB0411 AIB0511 AIB0611 ... AIB1120 AIB1220 AIB0121 AIB0221
Field ...
Date ...
2011-03-31 4.75 4.730 4.710 4.705 ... NaN NaN NaN NaN
2011-04-29 4.75 4.745 4.750 4.775 ... NaN NaN NaN NaN
2011-05-31 NaN 4.75 4.745 4.755 ... NaN NaN NaN NaN
2011-06-30 NaN NaN 4.75 4.745 ... NaN NaN NaN NaN
2011-07-29 NaN NaN NaN NaN ... NaN NaN NaN NaN
</code></pre>
<p>I have implemented the following, based on the suggestion below. I would still be happy if there was a way of doing this without iteration.</p>
<pre><code>for i in range(AU_furures_rates.shape[0]): #iterate over rows
for j in range(AU_furures_rates.shape[1]-1): #iterate over cols
if (pd.notnull(AU_furures_rates.iloc[i,j+1])) and pd.isnull(AU_furures_rates.iloc[i,1]): #move rate when needed
AU_furures_rates.iloc[i,j] = AU_furures_rates.iloc[i,0]
AU_furures_rates.iloc[i,0] = "NaN"
break
</code></pre>
|
<p>Maybe someone would find a 'cleaner' solution, but what I thought about was first iterating over the columns to check, for each row, which is the column whose value needs to be replaced (backwards, so that it ends up with the first occurrence), with:</p>
<pre><code>df['column_to_move'] = np.nan
cols = df.columns.tolist()
for i in range(len(df) - 2, 1, -1):
df.loc[pd.isna(df[cols[i]]) & pd.notna(df[cols[i + 1]]), 'column_to_move'] = cols[i]
</code></pre>
<p>And then iterate over the columns to fill the value from <code>AUPRATE.</code> where it's needed, and replace <code>AUPRATE.</code> itself with <code>np.nan</code>:</p>
<pre><code>for col in cols[2: -1]:
    df.loc[df['column_to_move'] == col, col] = df['AUPRATE.']
    df.loc[df['column_to_move'] == col, 'AUPRATE.'] = np.nan
df.drop('column_to_move', axis=1, inplace=True)
</code></pre>
|
python|pandas|iteration
| 1
|
7,679
| 58,294,319
|
Using np.where returns error after using .any()
|
<p>I'm working on a dataframe for which I need to create a large number of flags, depending on multiple conditions. I'm using <code>np.where</code> but now I'm running into this error </p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>For replicability and simplicity, I'm only sharing the part of the code that produces the error together with the columns that are used.
Dataframe being used:</p>
<pre><code> Data Uniques day_a1 day_a2 day_a3
0 1 1 3 NaN NaN
1 2 2 14 15.0 NaN
2 2 1 10 10.0 NaN
3 3 1 10 10.0 10.0
802 2 2 12 NaN 29.0
806 1 1 29 NaN NaN
</code></pre>
<p>Code that generates the error:</p>
<pre><code>df['flag_3.3.3.1.1'] = np.where(
(
(df['Data'] == 3) &
(df['day_a1'] != 10) &
(df['Uniques'] == 3) & #I ran this separately and it was fine
(df['day_a1'] > 27 or df['day_a1'] < 4).any()),'flag',np.nan)
</code></pre>
<p>I seem to still have issues after passing <code>.any()</code> after the <code>or</code>.</p>
|
<p>Try replacing</p>
<pre><code>(df['day_a1'] > 27 or df['day_a1'] < 4)
</code></pre>
<p>by</p>
<pre><code>((df['day_a1'] > 27) | (df['day_a1'] < 4))
</code></pre>
<p>Note the use of <code>|</code> and the additional parenthesis for the precedence.</p>
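<p>With that change (the <code>.any()</code> is then no longer needed), the full expression would look something like this sketch:</p>
<pre><code>df['flag_3.3.3.1.1'] = np.where(
    (df['Data'] == 3) &
    (df['day_a1'] != 10) &
    (df['Uniques'] == 3) &
    ((df['day_a1'] > 27) | (df['day_a1'] < 4)),
    'flag', np.nan)
</code></pre>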
|
python|pandas|numpy|where-clause
| 3
|
7,680
| 69,287,891
|
how do I append an array in python?
|
<p>I have some arrays of different sizes (such as (140,9), (120,9), ..., (123,9)). I need to merge all of them into one array. I used the following code, but it said a NumPy array does not have an <code>append</code> attribute. Could you please tell me how I can do it?</p>
<pre><code>os.chdir("E:/pythoncode/feature") #change directory to downloads folder
files_path = [os.path.abspath(x) for x in os.listdir()]
fnames_transfer = [x for x in files_path if x.endswith(".npy")]
feature=np.zeros((2000,9))
for i in range(len(fnames_transfer)):
data=np.load(fnames_transfer[i])
feature.append(feature,data)
</code></pre>
<p>the error is :</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-7-98bd45171da9>", line 3, in <module>
feature.append(feature,data)
AttributeError: 'numpy.ndarray' object has no attribute 'append'
</code></pre>
|
<p>Hope this helps! Comments are added too.</p>
<pre><code>os.chdir("E:/pythoncode/feature") #change directory to downloads folder
files_path = [os.path.abspath(x) for x in os.listdir()]
fnames_transfer = [x for x in files_path if x.endswith(".npy")]
feature=np.zeros((2000,9))
feature_new = []#Added empty list to capture all the appended values
for i in range(len(fnames_transfer)):
data=np.load(fnames_transfer[i])
#feature.append(feature,data) ##instead of this use extend as the following
feature_new.extend([[feature,data]]) #changed the name from "feature" to "feature_new" to not to be confused with the other variable with a same name.
#if .extend code line didnt work for you open the following line and remove the .extend
#feature_new.append(feature,data)
</code></pre>
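<p>Since every array has 9 columns, the collected list can then be stacked into a single array along the first axis (a short sketch, assuming all files really have shape <code>(N, 9)</code>):</p>
<pre><code>feature = np.vstack(feature_new) # shape: (sum of all N, 9)
</code></pre>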
|
python|python-3.x|numpy
| 1
|
7,681
| 69,225,532
|
Pandas df.values causes additional memory consumption
|
<p>I have a big pandas dataframe with about 300 columns and the column types are all float32. I would like to try machine learning algorithms with the data, so I call <code>df[x_columns].values</code> to create the numpy ndarray which will then be used as input to machine learning algorithms. By looking at memory consumption, it seems <code>df[x_columns].values</code> creates a copy of the original data instead of just a view. Is it possible to make it create only a view of the original data so I can reduce the memory consumption?</p>
|
<p>The first time, use this.</p>
<pre><code>np_values = df[x_columns].to_numpy()
np.savez('values.npz', values=np_values) # saves a .npz file to disk
</code></pre>
<p>For all future runs with this program, comment out any lines dealing with your pandas df and directly read in from the saved numpy array.</p>
<pre><code># comment out [pandas df stuff]
values = np.load('values.npz')['values'] # reads the saved file; 'values' is now a numpy array
</code></pre>
<p>I'm unfamiliar with the pandas dataframe format, but you could save the whole dataframe and access them from the numpy array using np.savez and np.load somehow.</p>
|
python|pandas|numpy
| 0
|
7,682
| 69,041,135
|
Cumulative sum with respect to feature with dtype = interval
|
<p>I have a dataset which I want to transform to give cumulative percentages. In normal cases this isn't a hard thing to do, but in this case the accumulation needs to be done on distance bins. So, here is my dataframe:</p>
<pre><code>distance_bin objects in bin percentage
0 (-0.001, 0.5] 12054 34.24
1 (0.5, 1.0] 6594 18.73
2 (1.0, 1.5] 3547 10.08
3 (1.5, 2.0] 2031 5.77
4 (2.0, 2.5] 1831 5.20
5 (2.5, 3.0] 1654 4.70
6 (3.0, 3.5] 1406 3.99
7 (3.5, 4.0] 1021 2.90
8 (4.0, 4.5] 566 1.61
9 (4.5, 5.0] 515 1.46
10 (5.0, 5.5] 680 1.93
11 (5.5, 6.0] 570 1.62
12 (6.0, 6.5] 324 0.92
13 (6.5, 7.0] 305 0.87
14 (7.0, 7.5] 223 0.63
15 (7.5, 8.0] 257 0.73
16 (8.0, 8.5] 159 0.45
17 (8.5, 9.0] 193 0.55
18 (9.0, 9.5] 179 0.51
19 (9.5, 10.0] 154 0.44
20 (10.0, 10.5] 98 0.28
21 (10.5, 11.0] 132 0.37
22 (11.0, 11.5] 132 0.37
23 (11.5, 12.0] 88 0.25
24 (12.0, 12.5] 65 0.18
25 (12.5, 13.0] 84 0.24
26 (13.0, 13.5] 58 0.16
27 (13.5, 14.0] 80 0.23
28 (14.0, 14.5] 31 0.09
29 (14.5, 15.0] 37 0.11
</code></pre>
<p>or with the <code>dtypes</code> appended:</p>
<pre><code>{'distance_bin': {0: Interval(-0.001, 0.5, closed='right'),
1: Interval(0.5, 1.0, closed='right'),
2: Interval(1.0, 1.5, closed='right'),
3: Interval(1.5, 2.0, closed='right'),
4: Interval(2.0, 2.5, closed='right'),
5: Interval(2.5, 3.0, closed='right'),
6: Interval(3.0, 3.5, closed='right'),
7: Interval(3.5, 4.0, closed='right'),
8: Interval(4.0, 4.5, closed='right'),
9: Interval(4.5, 5.0, closed='right'),
10: Interval(5.0, 5.5, closed='right'),
11: Interval(5.5, 6.0, closed='right'),
12: Interval(6.0, 6.5, closed='right'),
13: Interval(6.5, 7.0, closed='right'),
14: Interval(7.0, 7.5, closed='right'),
15: Interval(7.5, 8.0, closed='right'),
16: Interval(8.0, 8.5, closed='right'),
17: Interval(8.5, 9.0, closed='right'),
18: Interval(9.0, 9.5, closed='right'),
19: Interval(9.5, 10.0, closed='right'),
20: Interval(10.0, 10.5, closed='right'),
21: Interval(10.5, 11.0, closed='right'),
22: Interval(11.0, 11.5, closed='right'),
23: Interval(11.5, 12.0, closed='right'),
24: Interval(12.0, 12.5, closed='right'),
25: Interval(12.5, 13.0, closed='right'),
26: Interval(13.0, 13.5, closed='right'),
27: Interval(13.5, 14.0, closed='right'),
28: Interval(14.0, 14.5, closed='right'),
29: Interval(14.5, 15.0, closed='right'),
30: Interval(15.0, 15.5, closed='right'),
31: Interval(15.5, 16.0, closed='right'),
32: Interval(16.0, 16.5, closed='right'),
33: Interval(16.5, 17.0, closed='right'),
34: Interval(17.0, 17.5, closed='right'),
35: Interval(17.5, 18.0, closed='right'),
36: Interval(18.0, 18.5, closed='right'),
37: Interval(18.5, 19.0, closed='right'),
38: Interval(19.0, 19.5, closed='right'),
39: Interval(19.5, 20.0, closed='right'),
40: Interval(20.0, 20.5, closed='right'),
42: Interval(21.0, 21.5, closed='right'),
44: Interval(22.0, 22.5, closed='right'),
46: Interval(25.0, 25.5, closed='right')},
'Objects in bin': {0: 12054,
1: 6594,
2: 3547,
3: 2031,
4: 1831,
5: 1654,
6: 1406,
7: 1021,
8: 566,
9: 515,
10: 680,
11: 570,
12: 324,
13: 305,
14: 223,
15: 257,
16: 159,
17: 193,
18: 179,
19: 154,
20: 98,
21: 132,
22: 132,
23: 88,
24: 65,
25: 84,
26: 58,
27: 80,
28: 31,
29: 37,
30: 31,
31: 22,
32: 18,
33: 18,
34: 5,
35: 5,
36: 5,
37: 4,
38: 3,
39: 11,
40: 4,
42: 2,
44: 2,
46: 3},
'percentage': {0: 34.24,
1: 18.73,
2: 10.08,
3: 5.77,
4: 5.2,
5: 4.7,
6: 3.99,
7: 2.9,
8: 1.61,
9: 1.46,
10: 1.93,
11: 1.62,
12: 0.92,
13: 0.87,
14: 0.63,
15: 0.73,
16: 0.45,
17: 0.55,
18: 0.51,
19: 0.44,
20: 0.28,
21: 0.37,
22: 0.37,
23: 0.25,
24: 0.18,
25: 0.24,
26: 0.16,
27: 0.23,
28: 0.09,
29: 0.11,
30: 0.09,
31: 0.06,
32: 0.05,
33: 0.05,
34: 0.01,
35: 0.01,
36: 0.01,
37: 0.01,
38: 0.01,
39: 0.03,
40: 0.01,
42: 0.01,
44: 0.01,
46: 0.01}}
</code></pre>
<p>where</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 44 entries, 0 to 46
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 distance_bin 44 non-null interval[float64]
1 objects in bin 44 non-null int64
2 percentage 44 non-null float64
dtypes: float64(1), int64(1), interval(1)
memory usage: 1.7 KB
</code></pre>
<p>If <code>distance_bin</code> wasn't of dtype <code>interval[float64]</code>, the matter could be resolved with something like:</p>
<pre><code>df.groupby(pd.Grouper(key='distance_bin'))['percentage'].expanding().sum()
</code></pre>
<p>But this doesn't work.</p>
<p>Any idea on how to solve this? What I basically want is:</p>
<pre><code>distance_bin objects in bin percentage
0 (-0.001, 0.5] 12054 34.24
1 (-0.001, 1.0] 18648 52.97
2 (-0.001, 1.5] 22195 63.05
3 (-0.001, 2.0] 24226 68.82
4 (-0.001, 2.5] 26057 74.02
....
</code></pre>
|
<p>Try:</p>
<pre><code>right = pd.IntervalIndex(df['distance_bin']).right
df['distance_bin'] = pd.IntervalIndex.from_tuples(list(zip([-0.001]*len(right), right)))
df[['ESL:s in bin', 'percentage']] = df[['ESL:s in bin', 'percentage']].cumsum()
</code></pre>
<pre><code>>>> df
distance_bin ESL:s in bin percentage
0 (-0.001, 0.5] 12054 34.24
1 (-0.001, 1.0] 18648 52.97
2 (-0.001, 1.5] 22195 63.05
3 (-0.001, 2.0] 24226 68.82
4 (-0.001, 2.5] 26057 74.02
5 (-0.001, 3.0] 27711 78.72
6 (-0.001, 3.5] 29117 82.71
7 (-0.001, 4.0] 30138 85.61
8 (-0.001, 4.5] 30704 87.22
9 (-0.001, 5.0] 31219 88.68
10 (-0.001, 5.5] 31899 90.61
11 (-0.001, 6.0] 32469 92.23
12 (-0.001, 6.5] 32793 93.15
13 (-0.001, 7.0] 33098 94.02
14 (-0.001, 7.5] 33321 94.65
15 (-0.001, 8.0] 33578 95.38
16 (-0.001, 8.5] 33737 95.83
17 (-0.001, 9.0] 33930 96.38
18 (-0.001, 9.5] 34109 96.89
19 (-0.001, 10.0] 34263 97.33
20 (-0.001, 10.5] 34361 97.61
21 (-0.001, 11.0] 34493 97.98
22 (-0.001, 11.5] 34625 98.35
23 (-0.001, 12.0] 34713 98.60
24 (-0.001, 12.5] 34778 98.78
25 (-0.001, 13.0] 34862 99.02
26 (-0.001, 13.5] 34920 99.18
27 (-0.001, 14.0] 35000 99.41
28 (-0.001, 14.5] 35031 99.50
29 (-0.001, 15.0] 35068 99.61
30 (-0.001, 15.5] 35099 99.70
31 (-0.001, 16.0] 35121 99.76
32 (-0.001, 16.5] 35139 99.81
33 (-0.001, 17.0] 35157 99.86
34 (-0.001, 17.5] 35162 99.87
35 (-0.001, 18.0] 35167 99.88
36 (-0.001, 18.5] 35172 99.89
37 (-0.001, 19.0] 35176 99.90
38 (-0.001, 19.5] 35179 99.91
39 (-0.001, 20.0] 35190 99.94
40 (-0.001, 20.5] 35194 99.95
42 (-0.001, 21.5] 35196 99.96
44 (-0.001, 22.5] 35198 99.97
46 (-0.001, 25.5] 35201 99.98
</code></pre>
|
python-3.x|pandas
| 1
|
7,683
| 44,728,747
|
Adding multiple json data to panda dataframes
|
<p>I am using an API to get 3 JSON responses and I would like to add that data to one pandas DataFrame.</p>
<p>This is my code.
I am passing in <code>books</code>, which contains the book ids as <code>x</code>, and those 3 ids return 3 different JSON objects with all the book information.</p>
<pre><code>for x in books:
newDF = pd.DataFrame()
    bookinfo = requests.get("http://books.com/?{}".format(x))
books = bookinfo.json()
print(books)
</code></pre>
<p>These are the 3 objects I get after printing <code>books</code>:</p>
<pre><code>{
u'bookInfo':[
{
u'book_created':u'2017-05-31',
u'book_rating':3,
u'book_sold':0
},
{
u'book_created':u'2017-05-31',
u'book_rating':2,
u'book_sold':1
},
],
u'book_reading_speed':u'4.29',
u'book_sale_date':u'2017-05-31'
}
{
u'bookInfo':[
{
u'book_created':u'2017-05-31',
u'book_rating':3,
u'book_sold':0
},
{
u'book_created':u'2017-05-31',
u'book_rating':2,
u'book_sold':1
},
],
u'book_reading_speed':u'4.29',
u'book_sale_date':u'2017-05-31'
}
{
u'bookInfo':[
{
u'book_created':u'2017-05-31',
u'book_rating':3,
u'book_sold':0
},
{
u'book_created':u'2017-05-31',
u'book_rating':2,
u'book_sold':1
},
],
u'book_reading_speed':u'4.29',
u'book_sale_date':u'2017-05-31'
}
</code></pre>
<p>What I would like to do is take only <code>bookInfo</code> from the three objects and make them into one dataframe.</p>
|
<p>IIUC:</p>
<pre><code>pd.concat(
    [pd.DataFrame(requests.get("http://books.com/?{}".format(x)).json()['bookInfo']) for x in books],
ignore_index=True)
</code></pre>
<p>Alternatively you can collect JSON responses into a list and do the following:</p>
<pre><code>In [30]: pd.concat([pd.DataFrame(x['bookInfo']) for x in d], ignore_index=True)
Out[30]:
book_created book_rating book_sold
0 2017-05-31 3 0
1 2017-05-31 2 1
2 2017-05-31 3 0
3 2017-05-31 2 1
4 2017-05-31 3 0
5 2017-05-31 2 1
</code></pre>
<p>or</p>
<pre><code>In [25]: pd.DataFrame([y for x in d for y in x['bookInfo']])
Out[25]:
book_created book_rating book_sold
0 2017-05-31 3 0
1 2017-05-31 2 1
2 2017-05-31 3 0
3 2017-05-31 2 1
4 2017-05-31 3 0
5 2017-05-31 2 1
</code></pre>
<p>where <code>d</code> is a list of dicts, you've posted:</p>
<pre><code>In [20]: d
Out[20]:
[{'bookInfo': [{'book_created': '2017-05-31',
'book_rating': 3,
'book_sold': 0},
{'book_created': '2017-05-31', 'book_rating': 2, 'book_sold': 1}],
'book_reading_speed': '4.29',
'book_sale_date': '2017-05-31'},
{'bookInfo': [{'book_created': '2017-05-31',
'book_rating': 3,
'book_sold': 0},
{'book_created': '2017-05-31', 'book_rating': 2, 'book_sold': 1}],
'book_reading_speed': '4.29',
'book_sale_date': '2017-05-31'},
{'bookInfo': [{'book_created': '2017-05-31',
'book_rating': 3,
'book_sold': 0},
{'book_created': '2017-05-31', 'book_rating': 2, 'book_sold': 1}],
'book_reading_speed': '4.29',
'book_sale_date': '2017-05-31'}]
</code></pre>
|
python|json|pandas
| 4
|
7,684
| 44,578,571
|
Intersect two boolean arrays for True
|
<p>Having the numpy arrays</p>
<pre><code>a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
</code></pre>
<p>how can I make the intersection of the two so that only the <code>True</code> values match? I can do something like:</p>
<pre><code>a == b
array([False, False, False, True, True], dtype=bool)
</code></pre>
<p>but the last item is <code>True</code> (understandably because both are <code>False</code>), whereas I would like the result array to be <code>True</code> only in the 4th element, something like:</p>
<pre><code>array([False, False, False, True, False], dtype=bool)
</code></pre>
|
<p>Numpy provides <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="noreferrer"><code>logical_and()</code></a> for that purpose:</p>
<pre><code>a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
c = np.logical_and(a, b)
# array([False, False, False, True, False], dtype=bool)
</code></pre>
<p>More at <a href="https://docs.scipy.org/doc/numpy/reference/routines.logic.html#logical-operations" rel="noreferrer">Numpy Logical operations</a>.</p>
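<p>As a side note, for boolean arrays the bitwise <code>&</code> operator gives the same element-wise result, i.e. <code>c = a & b</code>.</p>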
|
python|python-3.x|numpy
| 39
|
7,685
| 44,414,721
|
Numpy delete rows of array inside object array
|
<p>I am trying to delete rows from arrays which are stored inside an object array in numpy. However as you can see it complains that it cannot broadcast the smaller array into the larger array. Works fine when done directly to the array. What is the issue here? Any clean way around this error other than making a new object array and copying one by one until the array I want to modify?</p>
<pre><code>In [1]: import numpy as np
In [2]: x = np.zeros((3, 2))
In [3]: x = np.delete(x, 1, axis=0)
In [4]: x
Out[4]:
array([[ 0., 0.],
[ 0., 0.]])
In [5]: x = np.array([np.zeros((3, 2)), np.zeros((3, 2))], dtype=object)
In [6]: x[0] = np.delete(x[0], 1, axis=0)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-1687d284d03c> in <module>()
----> 1 x[0] = np.delete(x[0], 1, axis=0)
ValueError: could not broadcast input array from shape (2,2) into shape (3,2)
</code></pre>
<p><strong>Edit:</strong> Apparently it works when arrays are different shape. This is quite annoying. Any way to disable automatic concatenation by <code>np.array</code>?</p>
<pre><code>In [12]: x = np.array([np.zeros((3, 2)), np.zeros((5, 8))], dtype=object)
In [13]: x[0] = np.delete(x[0], 1, axis=0)
In [14]: x = np.array([np.zeros((3, 2)), np.zeros((3, 2))], dtype=object)
In [15]: x.shape
Out[15]: (2, 3, 2)
In [16]: x = np.array([np.zeros((3, 2)), np.zeros((5, 8))], dtype=object)
In [17]: x.shape
Out[17]: (2,)
</code></pre>
<p>This is some quite inconsistent behaviour.</p>
|
<p>The fact that <code>np.array</code> creates as high a dimensional array as it can has been discussed many times on SO. If the elements are different in size it will keep them separate, or in some cases raise an error.</p>
<p>In your example</p>
<pre><code>In [201]: x = np.array([np.zeros((3, 2)), np.zeros((3, 2))], dtype=object)
In [202]: x
Out[202]:
array([[[0.0, 0.0],
[0.0, 0.0],
[0.0, 0.0]],
[[0.0, 0.0],
[0.0, 0.0],
[0.0, 0.0]]], dtype=object)
</code></pre>
<p>The safe way to make an object array of a determined size is to initialize it and then fill it:</p>
<pre><code>In [203]: x=np.empty(2, dtype=object)
In [204]: x
Out[204]: array([None, None], dtype=object)
In [205]: x[...] = [np.zeros((3, 2)), np.zeros((3, 2))]
In [206]: x
Out[206]:
array([array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.]]),
array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])], dtype=object)
</code></pre>
<p>A 1d object array like this, is, for most practical purposes a list. Operations on the elements are performed with Python level iteration, implicit or explicit (as with your list comprehension). Most of the computational power of a multidimensional numeric array is gone.</p>
<pre><code>In [207]: x.shape
Out[207]: (2,)
In [208]: [xx.shape for xx in x] # shape of the elements
Out[208]: [(3, 2), (3, 2)]
In [209]: [xx[:2,:] for xx in x] # slice the elements
Out[209]:
[array([[ 0., 0.],
[ 0., 0.]]), array([[ 0., 0.],
[ 0., 0.]])]
</code></pre>
<p>You can reshape such an array, but you can't append as if it were a list. Some math operations cross the 'object' boundary, but it is hit-and-miss. In sum, don't use object arrays when a list would work just as well.</p>
<p><a href="https://stackoverflow.com/questions/43730632/understanding-non-homogeneous-numpy-arrays">Understanding non-homogeneous numpy arrays</a></p>
|
python|arrays|numpy
| 1
|
7,686
| 61,141,833
|
Get rid of initial spaces at specific cells in Pandas
|
<p>I am working with a big dataset (more than 2 million rows x 10 columns) that has a column with string values that were filled oddly. Some rows start and end with many space characters, while others don't.</p>
<p>What I have looks like this:</p>
<pre><code> col1
0 (spaces)string(spaces)
1 (spaces)string(spaces)
2 string
3 string
4 (spaces)string(spaces)
</code></pre>
<p>I want to get rid of those spaces at the beginning and at the end and get something like this:</p>
<pre><code> col1
0 string
1 string
2 string
3 string
4 string
</code></pre>
<p>Normally, for a small dataset I would use a for iteration (I know it's far from optimal) but now it's not an option given the time it would take.</p>
<p>How can I use the power of pandas to avoid a <code>for</code> loop here?</p>
<p>Thanks!</p>
<p>edit: I can't get rid of all the whitespaces since the strings contain spaces.</p>
|
<pre><code>df['col1'].apply(lambda x: x.strip())
</code></pre>
<p>might help</p>
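<p>A vectorized alternative that avoids the Python-level <code>apply</code> is the string accessor, which should be noticeably faster on a couple of million rows:</p>
<pre><code>df['col1'] = df['col1'].str.strip()
</code></pre>
<p>Both variants only remove leading and trailing whitespace, so the spaces inside the strings are preserved.</p>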
|
python|pandas|loops|for-loop
| 1
|
7,687
| 60,853,680
|
Compute grads of cloned tensor Pytorch
|
<p>I am having a hard time with gradient computation using PyTorch. </p>
<p>I have the outputs and the hidden states of the last time step <code>T</code> of an RNN. </p>
<p>I would like to clone my hidden states and compute its grad after backpropagation but it doesn't work. </p>
<p>After reading <a href="https://stackoverflow.com/questions/52926847/pytorch-how-to-compute-grad-after-clone-a-tensor">pytorch how to compute grad after clone a tensor</a>, I used <code>retain_grad()</code> without any success.</p>
<p>Here's my code</p>
<pre><code> hidden_copy = hidden.clone()
hidden.retain_grad()
hidden_copy.retain_grad()
outputs_T = outputs[T]
targets_T = targets[T]
loss_T = loss(outputs_T,targets_T)
loss_T.backward()
print(hidden.grad)
print(hidden_copy.grad)
</code></pre>
<p><code>hidden.grad</code> gives an array while <code>hidden_copy.grad</code> gives <code>None</code>.</p>
<p>Why does <code>hidden_copy.grad</code> give <code>None</code> ? Is there any way to compute the gradients of a cloned tensor ?</p>
|
<p>Based on the comments the problem is that <code>hidden_copy</code> is never visited during the backward pass.</p>
<p>When you call backward, pytorch follows the computation graph backwards starting at <code>loss_T</code> and works back to all the leaf nodes. It only visits the tensors which were used to compute <code>loss_T</code>. If a tensor isn't part of that backward path then it won't be visited and its <code>grad</code> member will not be updated. Basically, by creating a copy of the tensor and then not using it to compute <code>loss_T</code>, you create a "dead-end" in the computation graph.</p>
<p>To illustrate take a look at this diagram representing a simplified view of a computation graph. Each node in the graph is a tensor where the edges point back to direct descendants.</p>
<p><img src="https://i.stack.imgur.com/WClMW.png" width="400" /></p>
<p>Notice if we follow the path back from <code>loss_T</code> to the leaves then we never visit <code>hidden_conv</code>. Note that a leaf is a tensor with no descendants and in this case <code>input</code> is the only leaf.</p>
<p>This is an extremely simplified computation graph used to demonstrate a point. Of course in reality there are probably many more nodes between <code>input</code> and <code>hidden</code> and between <code>hidden</code> and <code>output_T</code> as well as other leaf tensors since the weights of layers are almost certainly leaves.</p>
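<p>A minimal example of the same effect (not the original RNN code, just a sketch):</p>
<pre><code>import torch

x = torch.ones(3, requires_grad=True)
hidden = x * 2              # non-leaf tensor
hidden_copy = hidden.clone()
hidden.retain_grad()
hidden_copy.retain_grad()

loss = hidden.sum()         # hidden_copy is never used to compute the loss
loss.backward()

print(hidden.grad)          # tensor([1., 1., 1.])
print(hidden_copy.grad)     # None - the clone is a dead end in the graph
</code></pre>
<p>If the loss were computed from <code>hidden_copy</code> instead (e.g. <code>loss = hidden_copy.sum()</code>), its <code>grad</code> would be populated after <code>backward()</code>.</p>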
|
python|pytorch|gradient|tensor
| 1
|
7,688
| 60,919,734
|
Marking Integer points on plot in python
|
<p>In the code below, I would like to mark the integer points.
I tried many options and different functions, but couldn't achieve the desired outcome.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib import pyplot as plt
n = np.arange(-3,3,0.1)
x = n**2
plt.plot(n,x,'-ok')
</code></pre>
<p>The desired plot:
<img src="https://i.stack.imgur.com/syO7P.png" alt="desired plot"></p>
|
<p>Here is an approach:</p>
<ul>
<li>use x-values in a dense linspace to draw the smooth curve</li>
<li>use n-values of integers to draw the dots</li>
</ul>
<p>A polynomial with integer coefficients gives integer values for all integer input.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
def f(x):
return x ** 2
x = np.linspace(-3.1, 3.1, 100)
plt.plot(x, f(x), '-r')
n = np.arange(-3, 4)
plt.plot(n, f(n), 'or')
plt.show()
</code></pre>
<p>To get a segmented line instead of a curve, both the line plot and the scatter plot could be done with only integer coordinates:</p>
<pre class="lang-py prettyprint-override"><code>n = np.arange(-3, 4)
plt.plot(n, f(n), '-r')
plt.plot(n, f(n), 'or')
</code></pre>
<p><a href="https://i.stack.imgur.com/jeIhB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeIhB.png" alt="resulting plot"></a></p>
|
python|numpy|matplotlib|plot|spyder
| 1
|
7,689
| 71,549,711
|
Python Pandas - Vlookup - Update Existing Column in First Data Frame From Second Data Frame
|
<p>I am searching for the most pythonic way to achieve the following task:</p>
<p>first data frame:</p>
<pre><code>df1 =
dataA dataB key info dataC
0 ABC 123 a1b aaa
1 DEF 456 b57 bbb
2 GHI 789 a22 ccc
</code></pre>
<p>second data frame:</p>
<pre><code>df2 =
dataX key info dataY
0 KLJ a1b infoA hhh
1 RTY q3z infoB uuu
2 PUI a22 infoC ppp
</code></pre>
<p>first data frame after pythonic operations:</p>
<pre><code>df1 =
dataA dataB key info dataC
0 ABC 123 a1b infoA aaa
1 DEF 456 b57 bbb
2 GHI 789 a22 infoC ccc
</code></pre>
|
<p>Use Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> on the <code>['key','info']</code> sub-frames of <code>df1</code> and <code>df2</code>, joining on column <code>key</code> and keeping only the keys from the left dataframe (<code>how='left'</code>). Then assign the resulting column (<code>info_y</code>) back into the first dataframe.</p>
<pre class="lang-py prettyprint-override"><code>df1['info'] = pd.merge(df1[['key','info']], df2[['key','info']], on='key', how='left')['info_y']
print(df1)
</code></pre>
<p>Output from <em>df1</em></p>
<pre class="lang-py prettyprint-override"><code> dataA dataB key info dataC
0 ABC 123 a1b infoA aaa
1 DEF 456 b57 NaN bbb
2 GHI 789 a22 infoC ccc
</code></pre>
|
python|pandas
| 0
|
7,690
| 43,309,508
|
Slicing array with numpy?
|
<pre><code>import numpy as np
r = np.arange(36)
r.resize((6, 6))
print(r)
# prints:
# [[ 0 1 2 3 4 5]
# [ 6 7 8 9 10 11]
# [12 13 14 15 16 17]
# [18 19 20 21 22 23]
# [24 25 26 27 28 29]
# [30 31 32 33 34 35]]
print(r[:,::7])
# prints:
# [[ 0]
# [ 6]
# [12]
# [18]
# [24]
# [30]]
print(r[:,0])
# prints:
# [ 0 6 12 18 24 30]
</code></pre>
<p>The <code>r[:,::7]</code> gives me a column, the <code>r[:,0]</code> gives me a row, and they both have the same numbers. I would be glad if someone could explain to me why.</p>
|
<p>Because the step argument is greater than the corresponding shape so you'll just get the first "row". However these are not identical (even if they contain the same numbers) because the scalar index in <code>[:, 0]</code> flattens the corresponding dimension (so you'll get a 1D array). But <code>[:, ::7]</code> will keep the number of dimensions intact but alters the shape of the step-sliced dimension.</p>
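<p>You can see the difference by comparing the shapes:</p>
<pre><code>print(r[:, ::7].shape)  # (6, 1) - still 2-D, only the first column kept
print(r[:, 0].shape)    # (6,)   - the column dimension is dropped
</code></pre>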
|
python|python-2.7|numpy
| 2
|
7,691
| 43,192,626
|
pandas Series getting 'Data must be 1-dimensional' error
|
<p>I'm new to pandas & numpy. I'm running a simple program</p>
<pre><code>labels = ['a','b','c','d','e']
s = Series(randn(5),index=labels)
print(s)
</code></pre>
<p>getting the following error</p>
<pre><code>  s = Series(randn(5),index=labels)
  File "C:\Python27\lib\site-packages\pandas\core\series.py", line 243, in __init__
    raise_cast_failure=True)
  File "C:\Python27\lib\site-packages\pandas\core\series.py", line 2950, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional
</code></pre>
<p>Any idea what can be the issue? I'm trying this using eclipse, not using ipython notebook. </p>
|
<p>I suspect you have your imports wrong.</p>
<p>If you add this to your code:</p>
<pre><code>from pandas import Series
from numpy.random import randn
labels = ['a','b','c','d','e']
s = Series(randn(5),index=labels)
print(s)
a 0.895322
b 0.949709
c -0.502680
d -0.511937
e -1.550810
dtype: float64
</code></pre>
<p>It runs fine.</p>
<p>That said, and as pointed out by @jezrael, it's better practice to import the modules rather than pollute the namespace.</p>
<p>Your code should look like this instead.</p>
<p><em><strong>solution</strong></em></p>
<pre><code>import pandas as pd
import numpy as np
labels = ['a','b','c','d','e']
s = pd.Series(np.random.randn(5),index=labels)
print(s)
</code></pre>
|
python|pandas|numpy
| 5
|
7,692
| 72,311,901
|
Retrieving values from a DataFrame
|
<p>My DataFrame (<code>df = df.sort_values('market_name').iloc[:1]</code>):</p>
<pre><code> competition event_name event_id country_code market_name market_id total_matched Home Home_id Away Away_id Draw Draw_id
7 CONMEBOL Copa Libertadores Atletico MG v Independiente (Ecu) 31459931 None First Half Goals 1.5 1.199224510 115362.090985 Under 1.5 Goals 1221385 Over 1.5 Goals 1221386 0
</code></pre>
<p>To get <code>market_id</code> I need to use index <code>[0]</code>:</p>
<pre><code>df['market_id'].values[0]
</code></pre>
<p>To collect the value by writing only <code>['market_id']</code>, I'm using <code>.reset_index()</code> + <code>.iterrows()</code>:</p>
<pre><code>df = df.sort_values('market_name').iloc[:1]
df = df.reset_index()
for index, row in df.iterrows():
row['market_id']
</code></pre>
<p>As this <em>dataframe</em> will always contain only one row, is there a more professional way to get this same result without this mess of many lines and looping?</p>
<p>The idea would be to format this <em>dataframe</em> beforehand so I don't need to put <code>.values[0]</code> after each value I want to fetch, and can call it only by the column name.</p>
|
<p>If you use <code>.iloc[0]</code> instead of <code>.iloc[:1]</code> then you get single row as <code>pandas.Series</code> and you can get value from <code>Series</code> using only header. And this doesn't need <code>.reset_index()</code></p>
<pre><code>import pandas as pd
data = {
'A': [1,2,3],
'B': [4,5,6],
'C': [7,8,9]
}
df = pd.DataFrame(data)
row = df.sort_values('A').iloc[0]
print('row:')
print(row)
print('---')
print('value A:', row['A'])
print('value B:', row['B'])
</code></pre>
<p>Result:</p>
<pre><code>row:
A 1
B 4
C 7
Name: 0, dtype: int64
---
value A: 1
value B: 4
</code></pre>
|
python|pandas|dataframe
| 1
|
7,693
| 72,422,392
|
How to create a new column from existed column which is formatted like a dictionary pandas dataframe
|
<p>In my pandas dataframe, I have a column formatted like a dictionary:</p>
<p><a href="https://i.stack.imgur.com/j9kU4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j9kU4.png" alt="enter image description here" /></a></p>
<p>What I want to do is extract data from this column and add two columns like this:</p>
<p><a href="https://i.stack.imgur.com/MhcUy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MhcUy.png" alt="enter image description here" /></a></p>
<p>In other words, I want to separate values between ":".</p>
<p>I wonder if there is an easy way to do this?</p>
|
<p>If 'column1' holds only one key: value per dictionary, then you can add new columns by calling the items method and using the first tuple:</p>
<pre><code>df[['column2', 'column3']] = pd.DataFrame(df['column1'].apply(lambda x: list(x.items())[0]).tolist(), index=df.index)
</code></pre>
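<p>For example, assuming <code>column1</code> holds dictionaries like <code>{'a': 1}</code> (the sample data here is made up, since the original frame is only shown as a screenshot):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'column1': [{'a': 1}, {'b': 2}]})
df[['column2', 'column3']] = pd.DataFrame(df['column1'].apply(lambda x: list(x.items())[0]).tolist(), index=df.index)
print(df)
#     column1 column2  column3
# 0  {'a': 1}       a        1
# 1  {'b': 2}       b        2
</code></pre>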
|
python|pandas
| 2
|
7,694
| 72,203,065
|
Passing numpy ndarray as keras input
|
<p>I have a dataset that consists of numpy arrays and I need to pass this data as input to keras.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Idx</th>
<th style="text-align: center;">Target</th>
<th style="text-align: center;">RF</th>
<th style="text-align: right;">DL</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">P219109</td>
<td style="text-align: center;">[0.05555555555555555, 0.0, 0.0, 0.0, 0.0, 0.0,...</td>
<td style="text-align: right;">[1.1074159, -5.242936, -6.9121795, 0.931392, -...</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">P219109</td>
<td style="text-align: center;">[0.5833333333333334, 0.0, 0.0, 0.0, 0.0, 0.0, ...</td>
<td style="text-align: right;">[-9.173296, -4.847732, -2.5727227, 8.794523, 7...</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">P219109</td>
<td style="text-align: center;">[0.05555555555555555, 0.0, 0.0, 0.0, 0.0, 0.0,...</td>
<td style="text-align: right;">[2.5204952, 1.3955389, -4.755222, -1.7222288, ...</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">P219109</td>
<td style="text-align: center;">[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...</td>
<td style="text-align: right;">[1.4951401, 1.2499368, -3.08991, -2.0176327, -...</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">P219109</td>
<td style="text-align: center;">[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...</td>
<td style="text-align: right;">[-3.4984744, 7.1746902, -0.36313212, -3.760725...</td>
</tr>
</tbody>
</table>
</div>
<p>And here my code:</p>
<pre><code>X = df[['RF', 'DL']]
y = df['Target']
_, y_ = np.unique(y, return_inverse=True)
y = y_
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
train_labels = to_categorical(y_train)
test_labels = to_categorical(y_test)
model = models.Sequential()
model.add(layers.Dense(100, activation='relu', input_shape=(2,)))
model.add(layers.Dense(50, activation='relu'))
model.add(layers.Dense(40, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train, train_labels, epochs=10, batch_size=36)
</code></pre>
<p>And I got the following error:</p>
<pre><code>Traceback (most recent call last):
File "d:/FINAL/main.py", line 81, in <module>
model.fit(X_train, train_labels, epochs=20, batch_size=40)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1050, in fit
data_handler = data_adapter.DataHandler(
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 1100, in __init__
self._adapter = adapter_cls(
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 263, in __init__
x, y, sample_weights = _process_tensorlike((x, y, sample_weights))
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 1016, in _process_tensorlike
inputs = nest.map_structure(_convert_numpy_and_scipy, inputs)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\nest.py", line 659, in map_structure
structure[0], [func(*x) for x in entries],
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\nest.py", line 659, in <listcomp>
structure[0], [func(*x) for x in entries],
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 1011, in _convert_numpy_and_scipy
return ops.convert_to_tensor_v2_with_dispatch(x, dtype=dtype)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 1404,
in convert_to_tensor_v2_with_dispatch
return convert_to_tensor_v2(
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 1410,
in convert_to_tensor_v2
return convert_to_tensor(
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\profiler\trace.py", line 163,
in wrapped
return func(*args, **kwargs)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 1540,
in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
return constant_op.constant(value, dtype, name=name)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\constant_op.py", line 264, in constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\constant_op.py", line 276, in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\constant_op.py", line 301, in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)
File "C:\Users\ni\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\constant_op.py", line 98, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
</code></pre>
<p>Can someone help me to fix this?</p>
<p>Keras: 2.4.0
Tensorflow: 2.4.0
Python: 3.8</p>
<p><strong>EDIT:</strong>
How to pass a data frame object as a NumPy array? The RF column and DL column are the results of predict_proba from two different models. So, I joined it into a pandas data frame using the following code:</p>
<pre><code>rf = [item for item in rf1]
dl = [item for item in out]
y = y_test
df = pd.Series(rf)
df = df.to_frame()
df.rename(columns = {0:'RF'}, inplace = True)
df['Target'] = y
df['DL'] = dl
cols = ['Target', 'RF', 'DL']
df = df[cols]
</code></pre>
<p>I also tried this:</p>
<pre><code>X = np.asarray(X).astype(np.float32)
</code></pre>
<p>but it doesn't work as I get the following error:</p>
<pre><code>ValueError: setting an array element with a sequence.
</code></pre>
|
<p>As discussed in the comments, it would be best if you created your arrays directly instead of having a DataFrame in the middle.</p>
<p>The problem is that even if <code>X</code> is a numpy array, it contains other arrays because Pandas returns an array for each row and each cell. An example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'RF':[np.array([1, 2, 3, 4, 5]), np.array([6, 2, 3, 4, 5])], 'DL': [np.array([1, 2, 3, 4, 5]), np.array([7, 2, 3, 4, 5])]})
print(df)
</code></pre>
<pre><code> RF DL
0 [1, 2, 3, 4, 5] [1, 2, 3, 4, 5]
1 [6, 2, 3, 4, 5] [7, 2, 3, 4, 5]
</code></pre>
<pre><code>X = df[['RF', 'DL']].to_numpy()
print(X)
</code></pre>
<pre><code>[[array([1, 2, 3, 4, 5]) array([1, 2, 3, 4, 5])]
[array([6, 2, 3, 4, 5]) array([7, 2, 3, 4, 5])]]
</code></pre>
<p>You can reshape to try to fix the problem. This should work:</p>
<pre><code># You have to reshape each cell in each row.
X = np.array([np.reshape(x, (1, len(X[0][0])))
for i in range(len(X))
for x in X[i]]).reshape(-1, 2*len(X[0][0])).astype(np.float32)
print(X)
print(X.shape)
</code></pre>
<pre><code>[[1. 2. 3. 4. 5. 1. 2. 3. 4. 5.]
[6. 2. 3. 4. 5. 7. 2. 3. 4. 5.]]
(2, 10)
</code></pre>
<p>Obviously, this assumes the shapes of the arrays in both the <code>'RF'</code> and <code>'DL'</code> columns are the same, which I believe is true because they are the output of <code>predict_proba</code>.</p>
|
python|numpy|tensorflow|keras
| 0
|
7,695
| 72,395,708
|
How do I create a DataFrame with an empty pandas Series in a column?
|
<p>I'm trying to append each row of a DataFrame separately. Each row has Series and scalar values. An example of a row would be</p>
<pre><code>row = {'col1': 1, 'col2':'blah', 'col3': pd.Series(['first', 'second'])}
</code></pre>
<p>When I create a DataFrame from this, it looks like this</p>
<pre><code>df = pd.DataFrame(row)
df
col1 col2 col3
0 1 blah first
1 1 blah second
</code></pre>
<p>This is what I want. The scalar values are repeated which is good. Now, some of my rows have empty Series for the column, as such:</p>
<pre><code>another_row = {'col1': 45, 'col2':'more blah', 'col3': pd.Series([], dtype='object')}
</code></pre>
<p>When I create a new DataFrame in order to concat the two, like so</p>
<pre><code>second_df = pd.DataFrame(another_row)
</code></pre>
<p>I get back an empty DataFrame, which is not what I'm looking for.</p>
<pre><code>>>> second_df = pd.DataFrame({'col1': 45, 'col2':'more blah', 'col3': pd.Series([], dtype='object')})
>>> second_df
Empty DataFrame
Columns: [col1, col2, col3]
Index: []
>>>
</code></pre>
<p>What I'm actually after is something like this</p>
<pre><code>>>> second_df
>>>
col1 col2 col3
0 45 'more blah' NaN
</code></pre>
<p>Or something like that. Basically, I don't want the entire row to be dropped on the floor, I want the empty Series to be represented by None or NaN or something.</p>
<p>I don't get any errors, and nothing warns me that anything is out of the ordinary, so I have no idea why the df is behaving like this.</p>
|
<p>Ultimately, I reworked my code to avoid having this problem. My solution is as follows:</p>
<p>I have a function <code>do_data_stuff()</code> and it used to return a pandas series, but now I have changed it to return</p>
<ul>
<li>a series if there's stuff in it <code>Series([1, 2, 3])</code></li>
<li>or nan if it would be empty <code>np.nan</code>.</li>
</ul>
<p>A side effect of going with this approach was that the DataFrame requires an index if only scalars are passed. <code>"ValueError: If using all scalar values, you must pass an index"</code></p>
<p>So I can't pass <code>index=[0]</code> hard coded like that because I wanted the DF to have the series determine the number of rows in the DF automatically.</p>
<pre><code>row = {'col1': 1, 'col2':'blah', 'col3': pd.Series(['first', 'second'])}
df = pd.DataFrame(row)
df
col1 col2 col3
0 1 blah first
1 1 blah second
</code></pre>
<p>So what I ended up doing was adding a dynamic index call. I'm not sure if this is proper python, but it worked for me.</p>
<pre><code>stuff = do_data_stuff()
data = pd.DataFrame({
'col1': 45,
'col2': 'very awesome stuff',
'col3': stuff
},
index= [0] if stuff is np.nan else None
)
</code></pre>
<p>And then I concatenated my DataFrames using the following:</p>
<pre><code>data = pd.concat([data, some_other_df], ignore_index=True)
</code></pre>
<p>The result was a DataFrame that looks like this</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({'col1': 1, 'col2':'blah', 'col3': pd.Series(['first', 'second'])})
>>> df
col1 col2 col3
0 1 blah first
1 1 blah second
>>> stuff = np.nan
>>> stuff
nan
>>> df = pd.concat([
df, pd.DataFrame(
{
'col1': 45,
'col2': 'more awesome stuff',
'col3': stuff
},
index= [0] if stuff is np.nan else None
)], ignore_index=True)
>>> df
col1 col2 col3
0 1 blah first
1 1 blah second
2 45 more awesome stuff NaN
</code></pre>
<p>You can replace <code>np.nan</code> with anything, like <code>""</code>.</p>
|
python|pandas|dataframe|initialization|series
| 0
|
7,696
| 50,592,013
|
Separating a tricky string throughout a whole dataframe
|
<pre><code>0 NC_000001.10:g.955563G>C
1 NC_000001.10:g.955597G>T
2 NC_000001.10:g.955619G>C
3 NC_000001.10:g.957640C>T
4 NC_000001.10:g.976059C>T
5 NC_000003.11:g.37090470C>T
6 NC_000012.11:g.133256600G>A
7 NC_012920.1:m.15923A>G
</code></pre>
<p>I have a column in a dataset that looks like the above. Using the first row as an example, the information I'd like to be left with is one column containing 955563, and one column containing G>C. I've played around with a couple of regular expressions I've found here but haven't found one that does the trick.</p>
|
<p>The following works for your example:</p>
<pre><code>df[0].str.extract(':\w\.(\d+)(.+)')
# 0 1
#0 955563 G>C
#1 955597 G>T
#2 955619 G>C
#3 957640 C>T
#4 976059 C>T
#5 37090470 C>T
#6 133256600 G>A
#7 15923 A>G
</code></pre>
<p>If the last "column" always has the A>A structure, where A is a single letter, then you can be more specific with:</p>
<pre><code>df[0].str.extract(':\w\.(\d+)(\w>\w)')
</code></pre>
|
python|string|pandas|series
| 3
|
7,697
| 45,560,672
|
Tf-slim: ValueError: Variable vgg_19/conv1/conv1_1/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?
|
<p>I am using tf-slim to extract features from several batches of images. The problem is that my code works for the first batch; after that I get the error in the title. My code is something like this:</p>
<pre><code>for i in range(0, num_batches):
#Obtain the starting and ending images number for each batch
batch_start = i*training_batch_size
batch_end = min((i+1)*training_batch_size, read_images_number)
#obtain the images from the batch
images = preprocessed_images[batch_start: batch_end]
with slim.arg_scope(vgg.vgg_arg_scope()) as sc:
_, end_points = vgg.vgg_19(tf.to_float(images), num_classes=1000, is_training=False)
init_fn = slim.assign_from_checkpoint_fn(os.path.join(checkpoints_dir, 'vgg_19.ckpt'),slim.get_model_variables('vgg_19'))
feature_conv_2_2 = end_points['vgg_19/pool5']
</code></pre>
<p>So as you can see, in each batch I select a batch of images and use the vgg-19 model to extract features from the pool5 layer. But after the first iteration I get the error in the line where I am trying to obtain the end-points. One solution I found on the internet is to reset the graph each time, but I don't want to do that because I have some weights in my graph in a later part of the code which I train using these extracted features, and I don't want to reset them. Any leads highly appreciated. Thanks!</p>
|
<p>You should create your graph <strong>once</strong>, not in a loop. The error message tells you exactly that - you try to build the same graph twice.</p>
<p>So it should be (in pseudocode)</p>
<pre><code>create_graph()
load_checkpoint()
for each batch:
process_data()
</code></pre>
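<p>Roughly, a sketch reusing the names from your snippet (the placeholder shape is an assumption, adjust it to your preprocessing):</p>
<pre><code>images_ph = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
with slim.arg_scope(vgg.vgg_arg_scope()):
    _, end_points = vgg.vgg_19(images_ph, num_classes=1000, is_training=False)
features_op = end_points['vgg_19/pool5']
init_fn = slim.assign_from_checkpoint_fn(
    os.path.join(checkpoints_dir, 'vgg_19.ckpt'),
    slim.get_model_variables('vgg_19'))

with tf.Session() as sess:
    init_fn(sess)
    for i in range(0, num_batches):
        batch_start = i * training_batch_size
        batch_end = min((i + 1) * training_batch_size, read_images_number)
        batch = preprocessed_images[batch_start:batch_end]
        features = sess.run(features_op, feed_dict={images_ph: batch})
</code></pre>
<p>The graph construction and the checkpoint loading now happen once, outside the loop; only <code>sess.run</code> is executed per batch, so no variables are re-created.</p>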
|
deep-learning|tensorflow|tf-slim
| 1
|
7,698
| 45,299,596
|
Tensorboard : Error metadata.tsv is not a file
|
<p>I am manually trying to link the <code>embedding</code> tensor with <code>metadata.tsv</code>, but I am getting the following error: <code>"$LOG_DIR/metadata.tsv is not a file."</code></p>
<p>I am running Tensorboard with the following command:
<code>tensorboard --logdir default/</code>
and my <code>projector_config.pbtxt</code> file is the following:</p>
<pre><code>embeddings {
tensor_name: 'embedding/decoder_embedding_matrix'
metadata_path: '$LOG_DIR/metadata.tsv'
}
</code></pre>
<p>I have checked my log_dir and it has all the files (screenshot attached).</p>
<p><strong>LOG_DIR Screenshot</strong>:</p>
<p><a href="https://i.stack.imgur.com/T5R0g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T5R0g.png" alt="enter image description here"></a></p>
<p><strong>Error Screenshot</strong>
<a href="https://i.stack.imgur.com/VMGwI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VMGwI.png" alt="enter image description here"></a></p>
|
<p>It cannot recognize <code>$LOG_DIR</code> the way you have used it. Either edit <code>projector_config.pbtxt</code> manually to provide the full path, or use this in your code:</p>
<pre><code>import os
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
</code></pre>
<p>where again <code>LOG_DIR</code> should preferably be the full path (and not the relative one).</p>
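<p>For reference, a minimal sketch of writing the projector config from code rather than by hand might look like this (assuming TensorFlow 1.x and that <code>LOG_DIR</code> holds the absolute log path):</p>
<pre><code>import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = 'embedding/decoder_embedding_matrix'
# Absolute path, so TensorBoard can resolve the metadata file.
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')

# Writes projector_config.pbtxt into LOG_DIR for TensorBoard to pick up.
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
</code></pre>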
|
tensorflow|tensorboard
| 2
|
7,699
| 45,479,091
|
Getting error slicing time series with pandas
|
<p>I'm trying to slice a time series. I can do it perfectly this way:</p>
<pre><code>subseries = series['2015-07-07 01:00:00':'2015-07-07 03:30:00']
</code></pre>
<p>But the following code won't work:</p>
<pre><code>def GetDatetime():
    Y = int(raw_input("Year "))
    M = int(raw_input("Month "))
    D = int(raw_input("Day "))
    d = datetime.datetime(Y, M, D)  # creates a datetime object
    return d

filePath = "pathtofile.csv"
series = pd.read_csv(str(filePath), index_col='date')
series.index = pd.to_datetime(series.index, unit='s')
d = GetDatetime()
f = GetDatetime()
subseries = series[d:f]
</code></pre>
<p>The last line generates this error:</p>
<pre><code>Traceback (most recent call last):
File "dontgivemeerrorsbrasommek.py", line 37, in <module>
brasla7nina= df[d:f]
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1952, in __getitem__
indexer = convert_to_index_sliceable(self, key)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 1896, in convert_to_index_sliceable
return idx._convert_slice_indexer(key, kind='getitem')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexes/base.py", line 1407, in _convert_slice_indexer
indexer = self.slice_indexer(start, stop, step, kind=kind)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexes/datetimes.py", line 1515, in slice_indexer
return Index.slice_indexer(self, start, end, step, kind=kind)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexes/base.py", line 3350, in slice_indexer
kind=kind)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexes/base.py", line 3538, in slice_locs
start_slice = self.get_slice_bound(start, 'left', kind)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.20.2-py2.7-linux-x86_64.egg/pandas/core/indexes/base.py", line 3487, in get_slice_bound
raise err
KeyError: 1435802520000000000
</code></pre>
<p>I think it's a timestamp conversion problem, so I tried the following, but it still wouldn't work:</p>
<pre><code>d3 = pandas.Timestamp(datetime(Y, M, D, H, m))
d2 = pandas.to_datetime(d)
</code></pre>
<p>Your help would be appreciated, thank you. :)</p>
|
<p>Change the <code>GetDatetime()</code> function so that it returns a string instead of a <code>datetime</code> object:</p>
<pre><code>return str(d)
</code></pre>
<p>This returns the datetime as a string, which the time series index can slice with, just like the literal strings in your working example.</p>
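<p>In context, the fixed version of the question's code would look roughly like this (still Python 2 with <code>raw_input</code>, as in the original):</p>
<pre><code>import datetime
import pandas as pd

def GetDatetime():
    Y = int(raw_input("Year "))
    M = int(raw_input("Month "))
    D = int(raw_input("Day "))
    return str(datetime.datetime(Y, M, D))  # e.g. '2015-07-07 00:00:00'

series = pd.read_csv("pathtofile.csv", index_col='date')
series.index = pd.to_datetime(series.index, unit='s')
subseries = series[GetDatetime():GetDatetime()]  # string bounds slice like the literals above
</code></pre>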
|
python|pandas|numpy|time-series
| 4
|