| Unnamed: 0 (int64, 0-378k) | id (int64, 49.9k-73.8M) | title (string, 15-150 chars) | question (string, 37-64.2k chars) | answer (string, 37-44.1k chars) | tags (string, 5-106 chars) | score (int64, -10-5.87k) |
|---|---|---|---|---|---|---|
8,200
| 47,507,799
|
Selecting a subset of columns without copying
|
<p>I would like to select a subset of columns from a DataFrame without copying the data. From <a href="https://stackoverflow.com/questions/23296282/what-rules-does-pandas-use-to-generate-a-view-vs-a-copy">this answer</a> it seems that it's impossible, if the columns have different dtypes. Can anybody confirm? For me, it seems that there must be a way as the feature is so essential.</p>
<p>For example, <code>df.loc[:, ['a', 'b']]</code> produces a copy.</p>
|
<p>This post only applies to DataFrames whose columns all share the same dtype.</p>
<p>It is possible if the columns to be selected lie at regular strides from each other, using slicing within <code>.iloc</code>. Selecting any two columns is therefore always possible (two columns always define a single stride), but for more than two columns the strides between them must be regular. In all of these cases, we need to know the column IDs and the stride.</p>
<p>Let's try to understand these with the help of some sample cases.</p>
<p>Case #1 : Two columns starting at 0th col ID</p>
<pre><code>In [47]: df1
Out[47]:
a b c d
0 5 0 3 3
1 7 3 5 2
2 4 7 6 8
In [48]: np.array_equal(df1.loc[:, ['a', 'b']], df1.iloc[:,0:2])
Out[48]: True
In [50]: np.shares_memory(df1, df1.iloc[:,0:2]) # confirm view
Out[50]: True
</code></pre>
<p>Case #2 : Two columns starting at 1st col ID</p>
<pre><code>In [51]: df2
Out[51]:
a0 a a1 a2 b c d
0 8 1 6 7 7 8 1
1 5 8 4 3 0 3 5
2 0 2 3 8 1 3 3
In [52]: np.array_equal(df2.loc[:, ['a', 'b']], df2.iloc[:,1::3])
Out[52]: True
In [54]: np.shares_memory(df2, df2.iloc[:,1::3]) # confirm view
Out[54]: True
</code></pre>
<p>Case #3 : Three columns starting at 1st col ID and a stride of 2 columns</p>
<pre><code>In [74]: df3
Out[74]:
a0 a a1 b b1 c c1 d d1
0 3 7 0 1 0 4 7 3 2
1 7 2 0 0 4 5 5 6 8
2 4 1 4 8 1 1 7 3 6
In [75]: np.array_equal(df3.loc[:, ['a', 'b', 'c']], df3.iloc[:,1:6:2])
Out[75]: True
In [76]: np.shares_memory(df3, df3.iloc[:,1:6:2]) # confirm view
Out[76]: True
</code></pre>
<p>Select 4 columns :</p>
<pre><code>In [77]: np.array_equal(df3.loc[:, ['a', 'b', 'c', 'd']], df3.iloc[:,1:8:2])
Out[77]: True
In [78]: np.shares_memory(df3, df3.iloc[:,1:8:2])
Out[78]: True
</code></pre>
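<p>A small helper sketch for finding the column IDs used above (<code>Index.get_indexer</code> is standard pandas; the view behavior assumes older pandas without copy-on-write, matching the cases above):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=list('abcd'))
ids = df.columns.get_indexer(['a', 'b'])   # array([0, 1])
view = df.iloc[:, ids[0]:ids[-1] + 1]      # slice (not a list) to keep a view
print(np.shares_memory(df, view))          # True for a same-dtype frame
</code></pre>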
|
python|pandas|dataframe|indexing
| 2
|
8,201
| 47,440,077
|
Checking if particular value (in cell) is NaN in pandas DataFrame not working using ix or iloc
|
<p>Let's say I have the following <code>pandas</code> <code>DataFrame</code>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A":[1,pd.np.nan,2], "B":[5,6,0]})
</code></pre>
<p>Which would look like:</p>
<pre><code>>>> df
A B
0 1.0 5
1 NaN 6
2 2.0 0
</code></pre>
<h2>First option</h2>
<p>I know one way to check if a particular value is <code>NaN</code>, which is as follows:</p>
<pre><code>>>> df.isnull().ix[1,0]
True
</code></pre>
<h2>Second option (not working)</h2>
<p>I thought below option, using <code>ix</code>, would work as well, but it's not:</p>
<pre><code>>>> df.ix[1,0]==pd.np.nan
False
</code></pre>
<p>I also tried <code>iloc</code> with same results:</p>
<pre><code>>>> df.iloc[1,0]==pd.np.nan
False
</code></pre>
<p>However if I check for those values using <code>ix</code> or <code>iloc</code> I get:</p>
<pre><code>>>> df.ix[1,0]
nan
>>> df.iloc[1,0]
nan
</code></pre>
<p>So, <strong>why is the second option not working?</strong> Is it possible to check for <code>NaN</code> values using <code>ix</code> or <code>iloc</code>?</p>
|
<p>Try this:</p>
<pre><code>In [107]: pd.isnull(df.iloc[1,0])
Out[107]: True
</code></pre>
<hr>
<p><strong>UPDATE:</strong> in a newer Pandas versions use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.isna.html" rel="noreferrer">pd.isna()</a>:</p>
<pre><code>In [7]: pd.isna(df.iloc[1,0])
Out[7]: True
</code></pre>
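<p>The reason the <code>==</code> check fails is that, per IEEE 754, NaN compares unequal to everything, including itself:</p>
<pre><code>import numpy as np
print(np.nan == np.nan)  # False
print(np.nan != np.nan)  # True
</code></pre>
<p>That is why a dedicated check like <code>pd.isnull</code>/<code>pd.isna</code> is needed.</p>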
|
python|pandas|dataframe|nan
| 127
|
8,202
| 68,180,717
|
A fast alternative to pandas groupby + apply?
|
<p>I have a pandas dataframe which looks like the following (with ~ 1 Million lines):</p>
<pre><code>Column_1 Column_2 Column_3 Column_4 Column_5 Column_6 Column_7 Column_8 Column_9 Column_10
… … … … … … … … … …
… … … … … … … … … …
… … … … … … … … … …
… … … … … … … … … …
</code></pre>
<p>I want to do:</p>
<pre><code>grouping = ["Column_1", "Column_2", "Column_3", "Column_4"]
df.groupby(grouping).apply(lambda x: pd.Series({
'new_column_1':func_1(x),
'new_column_2':func_2(x),
'new_column_3':func_3(x)}
)).reset_index()
</code></pre>
<p>This works, but is incredibly slow. The functions [func_1, func_2, func_3] are custom functions I want to apply to each of the groups.</p>
<p>I read other Stack Overflow discussions on why this is so slow. The reason I found is that pandas groupby + apply uses Python loops rather than vectorization. But then how could I speed this up?</p>
<p>Let's say, for example, that:</p>
<pre><code>def func_1(x):
    return sum(x["Column_5"] >= x["Column_6"]) / sum(x["Column_5"] <= x["Column_6"])

def func_2(x):
    return max(x["Column_8"]) + min(x["Column_9"])

def func_3(x):
    return len(x)
</code></pre>
<p>How could we do the same operation without pandas groupby + apply, using numpy?</p>
|
<p>It looks like you want to compare the values of two different columns in each row, tally the results of those row-by-row comparisons, and then do math on the tallies. If so, make two new columns holding the results of the comparisons, then sum those new columns and compare the sums: vectorization rather than iteration. See this toy example:</p>
<pre><code>row1list = [1, 2]
row2list = [5, 3]
row3list = [5, 4]
row4list = [5, 5]
df = pd.DataFrame([row1list, row2list, row3list, row4list],
columns=['Column_5', 'Column_6'])
df[['col5 >= col6', 'col6 <= col5']] = 0, 0
# start with 0, else you get nan or 1 in the next comparison
df.loc[df['Column_5'] >= df['Column_6'], 'col5 >= col6'] = 1
df.loc[df['Column_5'] <= df['Column_6'], 'col6 <= col5'] = 1
print(df)
# Column_5 Column_6 col5 >= col6 col6 <= col5
# 0 1 2 0 1
# 1 5 3 1 0
# 2 5 4 1 0
# 3 5 5 1 1
answer_of_func1 = sum(df['col5 >= col6']) / sum(df['col6 <= col5'])
print(answer_of_func1)
# 1.5
</code></pre>
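<p>A sketch of the same idea applied to the grouped problem from the question (the column names and toy values here are assumptions for illustration): compute the comparison columns once for the whole frame, then replace <code>apply</code> with a single <code>groupby().agg()</code>, whose built-in reductions run in fast compiled code:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Column_1': ['x', 'x', 'y', 'y'],
                   'Column_5': [1, 5, 5, 5],
                   'Column_6': [2, 3, 4, 5],
                   'Column_8': [1, 2, 3, 4],
                   'Column_9': [9, 8, 7, 6]})

# vectorized comparisons, computed once for all rows
df['ge'] = (df['Column_5'] >= df['Column_6']).astype(int)
df['le'] = (df['Column_5'] <= df['Column_6']).astype(int)

out = df.groupby('Column_1').agg(
    ge_sum=('ge', 'sum'), le_sum=('le', 'sum'),
    col8_max=('Column_8', 'max'), col9_min=('Column_9', 'min'),
    new_column_3=('Column_5', 'size'))
out['new_column_1'] = out['ge_sum'] / out['le_sum']
out['new_column_2'] = out['col8_max'] + out['col9_min']
print(out)
</code></pre>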
|
python|pandas|numpy|vectorization
| 0
|
8,203
| 59,138,585
|
Plotting Vasicek Model. Only size -1 arrays can be converted to python scalars
|
<p>Attempting to plot Vasicek's portfolio loss distribution; when I want to build an array of numbers for x between 0 and 1 for each time step, I run into problems.</p>
<pre><code>x.astype(int)
x = np.arange(0.000001, 1, 0.000001)
a1 = math.sqrt((1-rho)/rho)
a2 = -1/(2*rho)*((math.sqrt(1-rho)*norm.ppf(x)-norm.ppf(p)))**2
a3 = ((1/2)*norm.ppf(x))**2
lossdist = a1*math.exp(a2+a3)
</code></pre>
<p>Above is my model but when I run it it gets to the <code>lossdist</code> function and halts, producing </p>
<pre><code>TypeError: only size-1 arrays can be converted to Python scalars
</code></pre>
<p>I have taken a look at the several answers to this error, but nothing is working; I have attempted the <code>x.astype(int)</code> and <code>np.vectorize</code> solutions. How do I fix this?</p>
|
<p><code>math.exp</code> works only on scalar inputs, but you're passing an array. Use <code>numpy.exp</code> instead, since it accepts an array as input.</p>
<p>Reference: <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html" rel="nofollow noreferrer">numpy.exp</a></p>
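<p>A minimal sketch of the vectorized version (<code>rho</code> and <code>p</code> are placeholder values here, since they are not shown in the question):</p>
<pre><code>import numpy as np
from scipy.stats import norm

rho, p = 0.1, 0.02  # placeholder parameters
x = np.arange(0.000001, 1, 0.000001)

a1 = np.sqrt((1 - rho) / rho)
a2 = -1 / (2 * rho) * (np.sqrt(1 - rho) * norm.ppf(x) - norm.ppf(p))**2
a3 = ((1 / 2) * norm.ppf(x))**2
lossdist = a1 * np.exp(a2 + a3)  # np.exp works elementwise on arrays
</code></pre>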
|
python|numpy|plot
| 0
|
8,204
| 59,395,680
|
GPU memory doesn't get freed up after evaluating data on PyTorch model in parallel process
|
<p>For my optimization algorithm, I need to evaluate a few hundred images every iteration. To speed up the process, I wanted to take full advantage of my 3 GPUs.</p>
<p>My process:</p>
<ul>
<li>Load an instance of my deep learning model on each one of my GPUs</li>
<li>Then split the workload into as many parts as I have GPUs</li>
<li>pair each workload in a tuple with the instance of the GPU loaded model it should be processed on</li>
<li>run starmap(_runDataThroughModel, sub_workload) to process all sub_workload in parallel</li>
</ul>
<p>Now there is no problem with doing this once and ending the program; however, when I do this repeatedly, the GPU memory fills up further with each iteration until I get a "RuntimeError: CUDA error: out of memory"</p>
<p><strong>My Question:</strong></p>
<ul>
<li>What is the correct way of going about this?</li>
<li>Why is the GPU memory
not freed? Since I pre-instantiate the GPU model outside the
"starmap" command and always pass the same instances, why would there
be a buildup?</li>
</ul>
<p><strong>Update</strong>
I re-wrote the code taking into account the issue presented in this <a href="https://stackoverflow.com/questions/46561124/python-multithreading-in-infinite-loop">thread</a>. Instantiating Pool() outside of any loop in the program didn't solve the GPU memory overflow, however, it stopped the CPU memory from building up over time.</p>
<pre><code>'''
Test GPU Memory Leak
Description: Tests how the memory doesn't get freed up when running multiprocessing with PyTorch Model forward pass
'''
import torch
import torch.multiprocessing as mp
import importlib
from PIL import Image
from skimage import io, transform
from skimage.color import rgb2gray
from skimage.io._plugins.pil_plugin import *
import torch
import torch.nn as nn
# Convolutional neural network (two convolutional layers)
class ConvNet(nn.Module):
def __init__(self, num_classes=10, num_img_layers = 1, img_res = 128):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
#torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1,
# padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
nn.Conv2d(num_img_layers, 64, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(64),
nn.LeakyReLU())
self.layer2 = nn.Sequential(
nn.Conv2d(64, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.LeakyReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc1 = nn.Linear(32*int(img_res/2)*int(img_res/2), 32*32)
self.fc2 = nn.Linear(32*32, num_classes)
def forward(self, x):
#print(x.shape)
out = self.layer1(x)
#print(out.shape)
out = self.layer2(out)
#print(out.shape)
out = out.reshape(out.size(0), -1)
out = self.fc1(out)
out = self.fc2(out)
return out
class NNEvaluator:
def __init__(self, model_dict, GPU, img_res = 128, num_img_layers = 1, num_classes = None):
# Load the model checkpoint
gpu_id = 'cuda:' + str(GPU)
self.device = torch.device(gpu_id if torch.cuda.is_available() else 'cpu')
self.model_state_dict = model_dict['model_state_dict']
self.model = ConvNet(num_classes = num_classes, num_img_layers = num_img_layers, img_res = img_res).to(self.device)
self.model.to(self.device)
self.model.load_state_dict(self.model_state_dict)
self.epsilon = torch.tensor(1e-12, dtype = torch.float)
def evaluate(self, img):
self.model.eval()
with torch.no_grad():
img = img.to(self.device)
out = self.model(img)
out = out.to('cpu')
return out
def loadImage(filename):
    im = Image.open(filename)
im = io._plugins.pil_plugin.pil_to_ndarray(im)
im = rgb2gray(im)
image = im.transpose((0, 1))
im = torch.from_numpy(image).float()
im = torch.unsqueeze(im,0)
im = torch.unsqueeze(im,1)
return im
def _worker(workload, evaluator):
results = []
for img in workload:
results.append(evaluator.evaluate(img))
def main():
# load a model for each GPU
model_dict = torch.load('model_dict.ckpt')
GPUs = [0,1,2] # available GPUs in the system
evaluators = []
for gpu_id in GPUs:
evaluators.append(NNEvaluator(model_dict, gpu_id, num_classes=3))
# instantiate multiprocessing pool
mp.set_start_method('spawn')
mypool = mp.Pool()
# evaluate all datapoints 20 times
im = loadImage('test.jpg')
total_nr_iterations = 20
for i in range(total_nr_iterations):
# run a subset of the workload on each GPU in a separate process
nr_datapoints = 99
dp_per_evaluator = int(nr_datapoints/len(evaluators))
workload = [im for i in range(dp_per_evaluator)]
jobslist = [(workload, evaluator) for evaluator in evaluators]
mypool.starmap(_worker, jobslist)
print("Finished iteration {}".format(i))
if __name__ == '__main__':
main()
</code></pre>
<p>Output when running the code:</p>
<pre><code>Finished iteration 0
Finished iteration 1
Finished iteration 2
Process SpawnPoolWorker-10:
Process SpawnPoolWorker-12:
Traceback (most recent call last):
Traceback (most recent call last):
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/pool.py", line 110, in worker
task = get()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/queues.py", line 354, in get
return _ForkingPickler.loads(res)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/pool.py", line 110, in worker
task = get()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 119, in rebuild_cuda_tensor
event_sync_required)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/queues.py", line 354, in get
return _ForkingPickler.loads(res)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 119, in rebuild_cuda_tensor
event_sync_required)
RuntimeError: CUDA error: out of memory
RuntimeError: CUDA error: out of memory
Process SpawnPoolWorker-11:
Traceback (most recent call last):
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/pool.py", line 110, in worker
task = get()
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/multiprocessing/queues.py", line 354, in get
return _ForkingPickler.loads(res)
File "/home/ron/miniconda3/envs/PyTorchNN/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 119, in rebuild_cuda_tensor
event_sync_required)
RuntimeError: CUDA error: out of memory
</code></pre>
|
<p>I found this similar <a href="https://stackoverflow.com/questions/46561124/python-multithreading-in-infinite-loop">thread</a> where the memory leakage occurs due to the instantiation of the Pool() in the loop, rather than outside.</p>
<p>The code in the question also instantiates the Pool() without using the <code>with</code> notation, which would ensure that all started processes return.</p>
<p>E.g. bad way:</p>
<pre><code>def evaluation(workload):
jobslist = [job for job in workload]
with Pool() as mypool:
mypool.starmap(_workerfunction, jobslist)
if __name__ == '__main__':
# pseudo data
workload = [[(100,200) for i in range(1000)] for i in range(50)]
for i in range(100):
evaluation(workload)
</code></pre>
<p>The proper way of doing this would be to instantiate the pool outside the loop, and pass a reference to the pool into the function for processing i.e.:</p>
<pre><code>def evaluation(workload, mypool):
jobslist = [job for job in workload]
mypool.starmap(_workerfunction, jobslist)
if __name__ == '__main__':
# pseudo data
with Pool() as mypool:
workload = [[(100,200) for i in range(1000)] for i in range(50)]
for i in range(100):
evaluation(workload, mypool)
</code></pre>
<p>I suspect that the GPU memory gets leaked due to left over references in parallel processes that have not been cleaned up yet.</p>
|
python-3.x|pytorch|python-multiprocessing
| 0
|
8,205
| 45,042,005
|
Python/Pandas return column and row index of found string
|
<p>I've searched previous answers relating to this but those answers seem to utilize numpy because the array contains numbers. I am trying to search for a keyword in a sentence in a dataframe ('Timeframe') where the full sentence is 'Timeframe for wave in ____' and would like to return the column and row index. For example:</p>
<pre><code> df.iloc[34,0]
</code></pre>
<p>returns the string I am looking for, but I am avoiding hard-coding it for dynamic reasons. Is there a way to return the [34,0] when I search the dataframe for the keyword 'Timeframe'?</p>
|
<p>EDIT:</p>
<p>To find the index, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>contains</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>; there are then 3 possible outcomes:</p>
<pre><code>df = pd.DataFrame({'A':['Timeframe for wave in ____', 'a', 'c']})
print (df)
A
0 Timeframe for wave in ____
1 a
2 c
def check(val):
a = df.index[df['A'].str.contains(val)]
if a.empty:
return 'not found'
elif len(a) > 1:
return a.tolist()
else:
#only one value - return scalar
return a.item()
</code></pre>
<pre><code>print (check('Timeframe'))
0
print (check('a'))
[0, 1]
print (check('rr'))
not found
</code></pre>
<p>Old solution:</p>
<p>It seems you need <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> to check for the value <code>Timeframe</code>:</p>
<pre><code>df = pd.DataFrame({'A':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,'Timeframe'],
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],
'F':list('aaabbb')})
print (df)
A B C D E F
0 a 4 7 1 5 a
1 b 5 8 3 3 a
2 c 4 9 5 6 a
3 d 5 4 7 9 b
4 e 5 2 1 2 b
5 f 4 Timeframe 0 4 b
a = np.where(df.values == 'Timeframe')
print (a)
(array([5], dtype=int64), array([2], dtype=int64))
b = [x[0] for x in a]
print (b)
[5, 2]
</code></pre>
|
python-3.x|pandas
| 4
|
8,206
| 57,169,473
|
why does groupby function returns duplicated data
|
<p>I am testing pandas.groupby function and have generated a random dataframe</p>
<p><code>df = pd.DataFrame(np.random.randint(5,size=(6,3)), columns=list('abc'))</code></p>
<p>in a random case df is:</p>
<pre><code> a b c
0 2 2 2
1 1 4 2
2 3 0 1
3 2 1 3
4 0 2 2
5 2 1 4
</code></pre>
<p>when I use the following code to print out the groupby object, I get some interesting results.</p>
<pre><code>def func(x):
print(x)
df.groupby("a").apply(lambda x: func(x))
a b c
0 0 1 4
a b c
0 0 1 4
a b c
2 2 4 1
3 2 2 1
a b c
1 4 0 0
4 4 4 3
</code></pre>
<p>Could anybody let me know why index 0 appears twice in this case?</p>
|
<p><code>DataFrame.groupby.apply</code> evaluates the first group twice to determine whether a <em>fast path for calculation</em> can be followed for the remaining groups. This behavior has changed in recent versions of <code>pandas</code> as discussed <a href="https://github.com/pandas-dev/pandas/pull/24748" rel="nofollow noreferrer">here</a></p>
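<p>If the goal is just to inspect the groups, iterating over the groupby object directly is side-effect free and avoids the extra evaluation entirely:</p>
<pre><code>for key, group in df.groupby("a"):
    print(key)
    print(group)
</code></pre>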
|
python|pandas|pandas-groupby
| 2
|
8,207
| 56,923,659
|
Stacking np.tril and np.triu together
|
<p>I have two correlation matrices, one which has <code>NaN</code> values in its lower triangle, and the other with <code>NaN</code> values in its upper triangle.</p>
<p>I would like to stack them together, so I would end up with a NxN matrix with correlation coefficients.</p>
<p>I tried using <code>pd.concat()</code>, but I can't get it to work. I am looking for a better way to do this, as I am sure there is one</p>
<pre class="lang-py prettyprint-override"><code>a = [1, NaN, NaN,
0.4, 1, NaN,
0.7, 0.3, 1]
b = [1, 0.2, 0.9,
NaN, 1, 0.6,
NaN, NaN, 1]
</code></pre>
<p>I would like to have something that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>c = [1, 0.2, 0.9,
0.4, 1, 0.6,
0.7, 0.3, 1]
</code></pre>
<p>Thanks!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a> for replace missing values by another <code>DataFrame</code>:</p>
<pre><code>df = df1.fillna(df2)
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>DataFrame.combine_first</code></a>:</p>
<pre><code>df = df1.combine_first(df2)
</code></pre>
<p>All together:</p>
<pre><code>a = [1, np.nan, np.nan,
0.4, 1, np.nan,
0.7, 0.3, 1]
b = [1, 0.2, 0.9,
np.nan, 1, 0.6,
np.nan, np.nan, 1]
df1 = pd.DataFrame(np.asarray(a).reshape(3,3))
df2 = pd.DataFrame(np.asarray(b).reshape(3,3))
df = df1.fillna(df2)
print (df)
0 1 2
0 1.0 0.2 0.9
1 0.4 1.0 0.6
2 0.7 0.3 1.0
</code></pre>
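<p>Since the question mentions numpy, an equivalent sketch without pandas, using <code>np.where</code> to take values from <code>b</code> wherever <code>a</code> is NaN:</p>
<pre><code>import numpy as np

a = np.array([[1, np.nan, np.nan],
              [0.4, 1, np.nan],
              [0.7, 0.3, 1]])
b = np.array([[1, 0.2, 0.9],
              [np.nan, 1, 0.6],
              [np.nan, np.nan, 1]])
c = np.where(np.isnan(a), b, a)  # take b where a is NaN, else a
print(c)
</code></pre>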
|
python|pandas|numpy|dataframe
| 2
|
8,208
| 23,200,524
|
Propagate pandas series metadata through joins
|
<p>I'd like to be able attach metadata to the series of dataframes (specifically, the original filename), so that after joining two dataframes I can see metadata on where each of the series came from.</p>
<p>I see github issues regarding <code>_metadata</code> (<a href="https://github.com/pydata/pandas/issues/6323" rel="nofollow">here</a>, <a href="https://github.com/pydata/pandas/issues/2485" rel="nofollow">here</a>), including some relating to the current <code>_metadata</code> attribute (<a href="https://github.com/pydata/pandas/pull/5205" rel="nofollow">here</a>), but nothing in the pandas docs.</p>
<p>So far I can modify the <code>_metadata</code> attribute to supposedly allow preservation of metadata, but get an <code>AttributeError</code> after the join.</p>
<pre><code>df1 = pd.DataFrame(np.random.randint(0, 4, (6, 3)))
df2 = pd.DataFrame(np.random.randint(0, 4, (6, 3)))
df1._metadata.append('filename')
df1[df1.columns[0]]._metadata.append('filename')
for c in df1:
df1[c].filename = 'fname1.csv'
df2[c].filename = 'fname2.csv'
df1[0]._metadata # ['name', 'filename']
df1[0].filename # fname1.csv
df2[0].filename # fname2.csv
df1[0][:3].filename # fname1.csv
mgd = pd.merge(df1, df2, on=[0])
mgd['1_x']._metadata # ['name', 'filename']
mgd['1_x'].filename # raises AttributeError
</code></pre>
<p>Any way to preserve this?</p>
<p><strong>Update: Epilogue</strong></p>
<p>As discussed <a href="https://github.com/pydata/pandas/issues/6923" rel="nofollow">here</a>, <code>__finalize__</code> cannot keep track of Series that are members of a dataframe, only independent series. So for now I'll keep track of the Series-level metadata by maintaining a dictionary of metadata attached to the dataframes. My code looks like:</p>
<pre><code>def cust_merge(d1, d2):
"Custom merge function for 2 dicts"
...
def finalize_df(self, other, method=None, **kwargs):
for name in self._metadata:
if method == 'merge':
lmeta = getattr(other.left, name, {})
rmeta = getattr(other.right, name, {})
newmeta = cust_merge(lmeta, rmeta)
object.__setattr__(self, name, newmeta)
else:
object.__setattr__(self, name, getattr(other, name, None))
return self
df1.filenames = {c: 'fname1.csv' for c in df1}
df2.filenames = {c: 'fname2.csv' for c in df2}
pd.DataFrame._metadata = ['filenames']
pd.DataFrame.__finalize__ = finalize_df
</code></pre>
|
<p>I think something like this will work (and if not, please file a bug report: this, while supported, is a bit bleeding edge; in other words, it IS possible that the join methods don't call this all the time, as that is a bit untested).</p>
<p>See this <a href="https://github.com/pydata/pandas/issues/6923" rel="nofollow">issue</a> for a more detailed example/bug fix.</p>
<pre><code>DataFrame._metadata = ['name','filename']
def __finalize__(self, other, method=None, **kwargs):
"""
propagate metadata from other to self
Parameters
----------
other : the object from which to get the attributes that we are going
to propagate
method : optional, a passed method name ; possibly to take different
types of propagation actions based on this
"""
    ### you need to arbitrate when there are conflicts
for name in self._metadata:
object.__setattr__(self, name, getattr(other, name, None))
return self
DataFrame.__finalize__ = __finalize__
</code></pre>
<p>So this replaces the default finalizer for DataFrame with your custom one. Where I have indicated, you need to put some code that can arbitrate between conflicts. This is the reason it is not done by default: e.g. if frame1 has name 'foo' and frame2 has name 'bar', what do you do when the method is <code>__add__</code>? What about other methods? Let us know what you do and how it works out.</p>
<p>This is ONLY replacing the finalizer for DataFrame (and you can simply keep the default action if you want, which is to propagate other to self); you can also choose not to set anything except under special cases of method.</p>
<p>This method is meant to be overridden in sub-classes; that's why you are monkey-patching here (rather than sub-classing, which is most of the time overkill).</p>
|
python|pandas|metadata
| 5
|
8,209
| 35,343,795
|
How to get ranges of one column grouped by class column? In Pandas
|
<p>I'm practicing with Pandas and I want to get the ranges of a column from a dataframe by the values of another column.</p>
<p>An example dataset: </p>
<pre><code> Points Grade
1 7.5 C
2 9.3 A
3 NaN A
4 1.3 F
5 8.7 B
6 9.5 A
7 7.9 C
8 4.5 F
9 8.0 B
10 6.8 D
11 5.0 D
</code></pre>
<p>I want to group ranges of points for each grade so I can impute missing values.</p>
<p>For that goal i need gets something like this:</p>
<pre><code>Grade Points
A [9.5, 9.3]
B [8.7, 8.0]
C [7.5, 7.9]
D [6.8, 5.0]
F [1.3, 4.5]
</code></pre>
<p>I can get it with for-loops and that kind of stuff, but is it possible with pandas in some easy way?</p>
<p>I tried all the groupby combinations I know and nothing worked. Any suggestions?</p>
|
<p>You can first filter <code>df</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="nofollow"><code>notnull</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.groupby.html" rel="nofollow"><code>groupby</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow"><code>tolist</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>print df
Points Grade
0 7.5 C
1 9.3 A
2 NaN A
3 1.3 F
4 8.7 B
5 9.5 A
6 7.9 C
7 4.5 F
8 8.0 B
9 6.8 D
10 5.0 D
</code></pre>
<pre><code>print df['Points'].notnull()
0 True
1 True
2 False
3 True
4 True
5 True
6 True
7 True
8 True
9 True
10 True
Name: Points, dtype: bool
print df.loc[df['Points'].notnull()]
Points Grade
0 7.5 C
1 9.3 A
3 1.3 F
4 8.7 B
5 9.5 A
6 7.9 C
7 4.5 F
8 8.0 B
9 6.8 D
10 5.0 D
print (df.loc[df['Points'].notnull()].groupby('Grade')['Points']
         .apply(lambda x: x.tolist()).reset_index())
Grade Points
0 A [9.3, 9.5]
1 B [8.7, 8.0]
2 C [7.5, 7.9]
3 D [6.8, 5.0]
4 F [1.3, 4.5]
</code></pre>
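<p>In newer pandas versions, a shorter equivalent passes <code>list</code> directly to <code>agg</code>:</p>
<pre><code>out = (df.dropna(subset=['Points'])
         .groupby('Grade')['Points']
         .agg(list)
         .reset_index())
print(out)
</code></pre>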
|
python-3.x|pandas|data-analysis
| 1
|
8,210
| 51,081,439
|
Is the usage of on-line data augmentation a fair comparison between CNN models
|
<p>I am using on-line data augmentation of images I feed into my Convolutional Neural Network. I am using the Keras ImageDataGenerator for this. The images are augmented in each batch and then the model is trained on these images.</p>
<p>I am comparing different models, but since the images are augmented on the fly, is this really fair, since each models is getting slightly different images?</p>
|
<p>If I understand you correctly, you are wondering whether the randomness caused by the data augmentation affects the result.</p>
<p>The randomness of the augmentation does not affect the result (at least not to a degree that makes a difference) if you train long enough. The other options you have are (as I think about it):</p>
<ol>
<li>Augment your data deterministically applying the same transformation to your images <strong>before</strong> inserting them to your model. Those transformation could be (a) either random ones, e.g. rotate your images by a random degree between some limits, or (b) predetermined ones, e.g. rotate all your images by 1, 3 and 5 degrees.</li>
<li>Don't augment your data at all. Use your initial data to train your model.</li>
</ol>
<p>The effect of those choices are:</p>
<ol>
<li>The number of transformations you would apply is limited, and even if choice 1a is chosen it would be a predefined set. If you are willing to increase this number dramatically, other issues arise, such as where you are going to store all this data and how you are going to handle it during training. So on-the-fly augmentation has the advantage that neither the storage of your data nor the way you deal with it changes. The disadvantage, of course, is a slower procedure (which, depending on the transformation, could make quite a difference).</li>
<li>For this choice to be valid you need <strong>a lot</strong> of data, and (depending on the problem, of course) sometimes a lot is not enough. Since you are (probably) using different data for testing, differences appear between your training and testing data in many aspects. For example, for human detection (an arbitrary choice), differences in poses, colors, light conditions, image clarity, image size and aspect ratio are common. How do you deal with that? You either collect a super huge collection of data or (probably) use data augmentation, right?</li>
</ol>
<p>To sum it up, it's fair because in the long run it does not make a big difference. Consider the option of early stopping, for example: is it fair to compare models that stopped training at something other than their best iteration? It's not completely fair, but it does not make a meaningful difference.</p>
|
python|tensorflow|machine-learning|keras|convolutional-neural-network
| 2
|
8,211
| 50,721,847
|
Tensorflow: cannot extract filename from tfrecord
|
<p>I have written an image, label and filename to a tfrecords file. When I try to decode the file, I cannot convert the filename to a string from tf.string.</p>
<p>The code I wrote to convert it to a tfrecords file:</p>
<pre><code>num_batches = 6
batch_size = math.ceil(X_training.shape[0] / num_batches)
for i in range(num_batches):
train_path = os.path.join("data","batch_" + str(i) + '.tfrecords')
writer = tf.python_io.TFRecordWriter(train_path)
start_row = i * batch_size
end_row = start_row + batch_size - 1
for idx in range(start_row, end_row):
try:
label = y_tr[idx]
filename = train_filenames[idx].tostring()
image = X_tr[idx]
image_raw = image.tostring()
except:
continue
example = tf.train.Example(
features=tf.train.Features(
feature={
'label': _int64_feature(label),
'filename': _bytes_feature(filename),
'image': _bytes_feature(image_raw),
}))
serialized = example.SerializeToString()
writer.write(serialized)
</code></pre>
<p>To read and decode a tfrecords file I have the function:</p>
<pre><code>def read_and_decode_single_example(filenames):
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
features={
'label': tf.FixedLenFeature([], tf.int64),
'filename': tf.FixedLenFeature([], tf.string),
'image': tf.FixedLenFeature([], tf.string)
})
label = features['label']
image = tf.decode_raw(features['image'], tf.uint8)
image = tf.reshape(image, [499, 499, 1])
filename = features['filename']
return label, image, filename
</code></pre>
<p>When I decode the different batches, the filename that gets returned looks like:</p>
<blockquote>
<p>b'P\x00\x00\x00_\x00\x00\x000\x00\x00\x000\x00\x00\x001\x00\x00\x004\x00\x00\x008\x00\x00\x00_\x00\x00\x00R\x00\x00\x00I\x00\x00\x00G\x00\x00\x00H\x00\x00\x00T\x00\x00\x00_\x00\x00\x00M\x00\x00\x00L\x00\x00\x00O\x00\x00\x00.\x00\x00\x00j\x00\x00\x00p\x00\x00\x00g\x00\x00\x00'</p>
</blockquote>
<p>What am I doing wrong in decoding from a tf.string?</p>
|
<p>Calling <code>.decode().replace('\x00', '')</code> on your bytestring produces 'P_00148_RIGHT_MLO.jpg'.</p>
<p>Adding the decode and replace in the function return should solve your problem.</p>
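<p>A minimal sketch of the decoding step (using a truncated version of the bytestring above):</p>
<pre><code>raw = (b'P\x00\x00\x00_\x00\x00\x000\x00\x00\x000\x00\x00\x00'
       b'1\x00\x00\x004\x00\x00\x008\x00\x00\x00')
print(raw.decode().replace('\x00', ''))  # -> P_00148
</code></pre>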
|
python|string|tensorflow|machine-learning|deep-learning
| 1
|
8,212
| 50,767,043
|
Dask Dataframe - multiple rows from each row
|
<p>I have this dask dataframe that has two columns, one of which contains tuples (or arrays). What I want is to have a new dataframe that has a row for each element of the tuple in each row.</p>
<p>An example dataframe can be constructed like this:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
tmp = pd.DataFrame({'name': range(10), 'content': [range(i) for i in range(10)]})
ddf = dd.from_pandas(tmp, npartitions=1)
</code></pre>
<p>It is shaped like this:</p>
<pre><code>ddf: name content
0 ()
1 (0)
2 (0, 1)
3 (0, 1, 2)
...
</code></pre>
<p>My goal is to have something that looks like this:</p>
<pre><code>ddf: name element
1 0
2 0
2 1
3 0
3 1
3 2
...
</code></pre>
<p>Thank you in advance for your help.</p>
<hr>
<p>Actually, my ultimate goal is to count the occurrencies in <code>'element'</code>, which is straight-forward if I can get to the last df I showed. If you know another -maybe easier- way to achieve this, I would really appreciate it if you shared it.</p>
|
<p>You can transform the dataframe <code>tmp</code> in the shape you want by doing:</p>
<pre><code>tmp_2 = (tmp.set_index('name')['content']
.apply(pd.Series).stack().astype(int)
.reset_index().drop('level_1',1).rename(columns={0:'content'}))
</code></pre>
<p>and then create your ddf the same way.</p>
<p>This is not in dask, but as you said in a comment, you might be able to replicate it in dask.</p>
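<p>For reference, in pandas 0.25+ the reshaping (and the final occurrence count) can also be done with <code>DataFrame.explode</code>, a sketch:</p>
<pre><code>import pandas as pd

tmp = pd.DataFrame({'name': range(10),
                    'content': [list(range(i)) for i in range(10)]})
exploded = tmp.explode('content').dropna(subset=['content'])
counts = exploded['content'].value_counts()  # occurrences of each element
print(counts)
</code></pre>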
|
python|pandas|dataframe|dask
| 1
|
8,213
| 50,915,906
|
Pandas dataframe `apply` to `dtype` generates unexpected results
|
<h1>Example</h1>
<p>Toy dataframe:</p>
<pre><code>>>> df = pd.DataFrame({'a': ['the', 'this'], 'b': [5, 2.3], 'c': [8, 11], 'd': ['the', 7]})
</code></pre>
<p>yields:</p>
<pre><code>>>> df
a b c d
0 the 5.0 8 the
1 this 2.3 11 7
</code></pre>
<p>and:</p>
<pre><code>>>> df.dtypes
a object
b float64
c int64
d object
dtype: object
</code></pre>
<hr>
<h1>Problem Statement</h1>
<p>But what I <strong>really</strong> want to do is perform <code>df.apply</code> so that I can perform some actions on values in a column <strong>if that column/series is a string type</strong>.</p>
<p>So I thought I could simply do something like:</p>
<pre><code>>>> df.apply(lambda x: if x.dtype == 'object' and <the other check I care about>)
</code></pre>
<p>But it didn't work as I'd expected: everything was an <code>object</code>. To verify, try:</p>
<pre><code>>>> df.apply(lambda x: x.dtype == 'object')
a True
b True
c True
d True
dtype: bool
</code></pre>
<p>Trying to understand what was going on, I tried the following:</p>
<pre><code>>>> def tmp_fn(val, typ):
... if val.dtype == typ:
... print(type(val))
... print(val.dtype)
</code></pre>
<p>and then</p>
<pre><code>>>> df.apply(lambda x: tmp_fn(x, 'object'))
<class 'pandas.core.series.Series'>
object
<class 'pandas.core.series.Series'>
object
<class 'pandas.core.series.Series'>
object
<class 'pandas.core.series.Series'>
object
a None
b None
c None
d None
dtype: object
</code></pre>
<hr>
<h1>Attempts at Understanding</h1>
<p>Now I knew what was happening: the pandas series was being interpreted as just that, a series. Seemed easy to solve.</p>
<p>But, in fact, it wasn't working as a series normally works in other cases. For instance, if I try:</p>
<pre><code>>>> df.a.dtype
dtype('O')
>>> df.b.dtype
dtype('float64')
</code></pre>
<p>Those both work as I expected and give me the type of object <strong>inside</strong> the series, instead of the simple fact that it is a series.</p>
<p>But try as I might, I couldn't figure out a way to replicate that same sort of behavior within <code>pandas.DataFrame.apply</code>. What's going on here? How can I get the series to act as it normally would? In other words, how can I get a <code>pandas.DataFrame.apply</code> to work exactly as a <code>pandas.Series</code> would? I never knew/realized they weren't acting identically until now.</p>
|
<p>You can use <code>result_type='expand'</code> in <code>.apply()</code>. With that, list-like results will be turned into columns. You can read more in the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer">docs</a>:</p>
<pre><code>df.apply(lambda x: x.dtype, result_type='expand')
</code></pre>
<p>Output:</p>
<pre><code>a object
b float64
c int64
d object
dtype: object
</code></pre>
<p>Without <code>result_type='expand'</code>:</p>
<pre><code>df.apply(lambda x: print(x))
</code></pre>
<p>Gives:</p>
<pre><code>0 the
1 this
Name: a, dtype: object
0 5
1 2.3
Name: b, dtype: object
0 8
1 11
Name: c, dtype: object
0 the
1 7
Name: d, dtype: object
</code></pre>
<p>With <code>result_type='expand'</code>:</p>
<pre><code>df.apply(lambda x: print(x), result_type='expand')
</code></pre>
<p>Output:</p>
<pre><code>0 the
1 this
Name: a, dtype: object
0 5.0
1 2.3
Name: b, dtype: float64
0 8
1 11
Name: c, dtype: int64
0 the
1 7
Name: d, dtype: object
</code></pre>
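<p>If the end goal is just to act on the string-typed columns, <code>select_dtypes</code> sidesteps <code>apply</code> entirely:</p>
<pre><code>str_cols = df.select_dtypes(include='object')
print(str_cols.columns.tolist())  # ['a', 'd']
</code></pre>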
|
python|pandas|dataframe
| 2
|
8,214
| 50,990,523
|
How to design a Tensorflow Js model for a single output MLP?
|
<p>I am trying to implement and test a single output MLP using Tensorflow Js where my data looks like this:</p>
<p><code>dataset = [[x_1, x_2, ..., x_n, y], ...]</code></p>
<p>Here is my code:</p>
<pre><code> for (var i = 0; i < dataset.length; ++i) {
x[i] = dataset[i].slice(0, inputLength);
y[i] = dataset[i][inputLength];
}
const xTrain = tf.tensor2d(x.slice(1));
const yTrain = tf.tensor1d(y.slice(1));
const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [inputLength], units: 10}));
model.add(tf.layers.dense({units: 1}));
const learningRate = 0.1;
const optimizer = tf.train.sgd(learningRate);
model.compile({loss: 'meanSquaredError', optimizer});
return model.fit(
xTrain,
yTrain,
{
batchSize: 10,
epochs: 5
}
)
</code></pre>
<p>The problem is that my model is not converging and I get <code>null</code> value for loss function at each step. Also, please note that I know that I can use multivariate regression to solve this but I want to compare the result with MLP.</p>
<p>I was wondering if somebody can help me with this.</p>
|
<p>The x/y-tensors used in <code>model.fit()</code> need one more dimension than the shape of the model's first/last layer: the extra (first) dimension holds the multiple training samples, which is what makes GPU-accelerated batch training possible.</p>
<p>Another problem with your model is the high <code>learningRate</code> (relative to the magnitude of the training values), which prevents the model from converging: it jumps over the optimal solution and gets out of control.</p>
<p>Either reduce the <code>learningRate</code> or normalize the training values to a lower magnitude.</p>
|
node.js|neural-network|tensorflow.js
| 1
|
8,215
| 66,726,183
|
How do I stop decimals changing to integers when replacing the numbers of a row in an array?
|
<p>I'm trying to replace the 0th row of array "A" with 0.5 times the 1st row plus the original 0th row with the code below:</p>
<pre><code>A = np.array([[ 9, 6, 7, 8, 1, 7, 2], [ 8, 2, 6, 5, 1, 5, 3], [ 7, 3, 1, 4, 5, 10, 1],
[10, 5, 7, 5, 4, 6, 2], [ 5, 5, 2, 6, 4, 2, 7]])
b = A[0]+0.5*A[1]
print(b)
for n in range(len(A[0])):
A[0][n] = b[n]
A
</code></pre>
<p>The list "b" is what I want to replace the old 0th row with. However, in the new version of array A, it takes the list of decimals "b" and makes them integers but I want them to stay as decimals:</p>
<pre><code>[13. 7. 10. 10.5 1.5 9.5 3.5]
array([[13, 7, 10, 10, 1, 9, 3],
[ 8, 2, 6, 5, 1, 5, 3],
[ 7, 3, 1, 4, 5, 10, 1],
[10, 5, 7, 5, 4, 6, 2],
[ 5, 5, 2, 6, 4, 2, 7]])
</code></pre>
<p>How do I make it so the new row's numbers stay as decimal numbers?</p>
|
<p>The problem is <code>A</code>'s original <code>dtype</code> is <code>int</code>, then every value in it is and will be an <code>int</code>. To fix it, you can specify <code>dtype = float</code> from the beginning:</p>
<pre><code>A = np.array([[ 9, 6, 7, 8, 1, 7, 2],
[ 8, 2, 6, 5, 1, 5, 3],
[ 7, 3, 1, 4, 5, 10, 1],
[10, 5, 7, 5, 4, 6, 2],
[ 5, 5, 2, 6, 4, 2, 7]], dtype = float)
A[0] += A[1]/2
</code></pre>
<p><strong>Output</strong></p>
<pre><code>A
array([[13. , 7. , 10. , 10.5, 1.5, 9.5, 3.5],
[ 8. , 2. , 6. , 5. , 1. , 5. , 3. ],
[ 7. , 3. , 1. , 4. , 5. , 10. , 1. ],
[10. , 5. , 7. , 5. , 4. , 6. , 2. ],
[ 5. , 5. , 2. , 6. , 4. , 2. , 7. ]])
</code></pre>
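<p>If the array already exists with an integer dtype, casting it creates a float copy first:</p>
<pre><code>A = A.astype(float)  # new float64 array; assignments now keep decimals
A[0] += A[1] / 2
</code></pre>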
|
python|numpy
| 2
|
8,216
| 66,379,396
|
pandas data frame plotting in subplots
|
<p>I have the following pandas data frame and would like to create <code>n</code> plots horizontally, where n = the number of unique labels (l1, l2, ...) in the <code>a1</code> column (in the following example there will be two plots, because of <code>l1</code> and <code>l2</code>). Each of these plots will use <code>a4</code> as the x-axis and <code>a3</code> as the y-axis. For example, <code>ax[0]</code> will contain a graph for <code>l1</code> with three lines, linking the points <code>[(1,15)(2,20)],[(1,17)(2,19)],[(1,23)(2,15)]</code> for the data below.</p>
<pre><code>import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
d = {'a1': ['l1','l1','l1','l1','l1','l1','l2','l2','l2','l2','l2','l2'],
'a2': ['a', 'a', 'b','b','c','c','d','d','e','e','f','f'],
'a3': [15,20,17,19,23,15,22,21,23,23,24,27],
'a4': [1,2,1,2,1,2,1,2,1,2,1,2]}
df=pd.DataFrame(d)
df
    a1 a2  a3  a4
0   l1  a  15   1
1   l1  a  20   2
2   l1  b  17   1
3   l1  b  19   2
4   l1  c  23   1
5   l1  c  15   2
6   l2  d  22   1
7   l2  d  21   2
8   l2  e  23   1
9   l2  e  23   2
10  l2  f  24   1
11  l2  f  27   2
</code></pre>
<p>I currently have the following:</p>
<pre><code>def graph(dataframe):
x = dataframe["a4"]
y = dataframe["a3"]
ax[0].plot(x,y) #how do I plot and set the title for each group in their respective subplot without the use of for-loop?
fig, ax = plt.subplots(1,len(pd.unique(df["a1"])),sharey='row',figsize=(15,2))
df.groupby(["a1"]).apply(graph)
</code></pre>
<p>However, my above attempt only plots all a3 against a4 on the first subplot(because I wrote <code>ax[0].plot()</code>). I can always use a for-loop to accomplish the desired task, but for large number of unique groups in <code>a1</code>, it will be computationally expensive. Is there a way to make it a one-liner on the line <code>ax[0].plot(x,y)</code> and it accomplishes the desired task without a for loop? Any inputs are appreciated.</p>
|
<p>I do not see any way of avoiding a for loop when plotting this data with pandas. My initial thought was to reshape the dataframe to make <code>subplots=True</code> work, like this:</p>
<pre><code>dfp = df.pivot(columns='a1').swaplevel(axis=1).sort_index(axis=1)
dfp
</code></pre>
<p><a href="https://i.stack.imgur.com/GUPGA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GUPGA.png" alt="df_pivoted" /></a></p>
<p>But I do not see how to select the level 1 of the the columns <code>MultiIndex</code> to make something like <code>dfp.plot(x='a4', y='a3', subplots=True)</code> work.</p>
<p>Removing level 0 and then running the plotting function with
<code>dfp.droplevel(axis=1, level=0).plot(x='a4', y='a3', subplots=True)</code> raises <code>ValueError: x must be a label or position</code>. And even if this worked, there would still be the issue of linking the correct points together.</p>
<p>The <a href="https://seaborn.pydata.org/index.html" rel="nofollow noreferrer">seaborn package</a> was created to conveniently plot this kind of dataset. If you are open to using it here is an example with <a href="https://seaborn.pydata.org/generated/seaborn.relplot.html" rel="nofollow noreferrer"><code>relplot</code></a>:</p>
<pre><code>import pandas as pd # v 1.1.3
import seaborn as sns # v 0.11.0
d = {'a1': ['l1','l1','l1','l1','l1','l1','l2','l2','l2','l2','l2','l2'],
'a2': ['a', 'a', 'b','b','c','c','d','d','e','e','f','f'],
'a3': [15,20,17,19,23,15,22,21,23,23,24,27],
'a4': [1,2,1,2,1,2,1,2,1,2,1,2]}
df = pd.DataFrame(d)
sns.relplot(data=df, x='a4', y='a3', col='a1', hue ='a2', kind='line', height=4)
</code></pre>
<p><a href="https://i.stack.imgur.com/6RySr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6RySr.png" alt="relplot" /></a></p>
<p>You can customize the colors with the <code>palette</code> argument and adjust the grid layout with <code>col_wrap</code>.</p>
|
python|pandas|dataframe|matplotlib|plot
| 1
|
8,217
| 16,110,252
|
Need to compare very large files around 1.5GB in python
|
<pre><code>"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2"
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025"
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792"
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800"
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595"
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957"
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212"
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080"
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731"
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000"
"DF","0001HARISH@GMAIL.COM","NF251352240086","22DEC2010","B2C","4006"
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000"
"DF","0001HARISH@GMAIL.COM","NF252022031180","09DEC2010","B2C","3439"
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136"
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41"
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96"
</code></pre>
<p>Above is the sample data. The data is sorted by email address, and the file is very large, around 1.5 GB.</p>
<p>I want output in another csv file something like this</p>
<pre><code>"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2",1,0 days
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025",1,0 days
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792",1,0 days
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800",1,0 days
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595",1,0 days
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957",1,0 days
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212",1,0 days
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080",1,0 days
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731",1,0 days
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000",1,0 days
"DF","0001HARISH@GMAIL.COM","NF251352240086","09DEC2010","B2C","4006",1,0 days
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000",2,3 days
"DF","0001HARISH@GMAIL.COM","NF252022031180","22DEC2010","B2C","3439",3,10 days
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41",1,0 days
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96",2,1 days
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96",3,0 days
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96",4,9 days
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96",5,0 days
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96",6,4 days
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96",7,0 days
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136",8,44 days
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136",9,0 days
</code></pre>
<p>That is, if an entry occurs for the first time I need to append 1; if it occurs a second time I need to append 2, and so on. In other words, I need to count the number of occurrences of each email address in the file, and when an email occurs twice or more I also want the difference between the dates. Remember that the <strong>dates are not sorted</strong>, so they also have to be sorted per email address. I am looking for a solution in Python using the numpy or pandas library (or any other library) that can handle this amount of data without an out-of-memory error; I have a dual-core processor with CentOS 6.3 and 4 GB of RAM.</p>
|
<p>Make sure you have pandas 0.11, and read these docs: <a href="http://pandas.pydata.org/pandas-docs/dev/io.html#hdf5-pytables" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/dev/io.html#hdf5-pytables</a>, and these recipes: <a href="http://pandas.pydata.org/pandas-docs/dev/cookbook.html#hdfstore" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/dev/cookbook.html#hdfstore</a> (especially the 'merging on millions of rows' one).</p>
<p>Here is a solution that seems to work. Here is the workflow:</p>
<ol>
<li>read data from your csv by chunks and appending to an hdfstore</li>
<li>iterate over the store, which creates another store that does the combiner</li>
</ol>
<p>Essentially we are taking a chunk from the table and combining with a chunk from every other part of the file. The combiner function does not reduce, but instead calculates your function (the diff in days) between all elements in that chunk, eliminating duplicates as you go, and taking the latest data after each loop. Kind of like a recursive reduce almost.</p>
<p>This should be O(num_of_chunks**2) in memory and calculation time; the chunksize could be, say, 1m (or more) in your case.</p>
<pre><code>processing [0] [datastore.h5]
processing [1] [datastore_0.h5]
count date diff email
4 1 2011-06-24 00:00:00 0 0000.ANU@GMAIL.COM
1 1 2011-06-24 00:00:00 0 00000.POO@GMAIL.COM
0 1 2010-07-26 00:00:00 0 00000000@11111.COM
2 1 2013-01-01 00:00:00 0 0000650000@YAHOO.COM
3 1 2013-01-26 00:00:00 0 00009.GAURAV@GMAIL.COM
5 1 2011-10-29 00:00:00 0 0000MANNU@GMAIL.COM
6 1 2011-11-21 00:00:00 0 0000PRANNOY0000@GMAIL.COM
7 1 2011-06-26 00:00:00 0 0000PRANNOY0000@YAHOO.CO.IN
8 1 2012-10-25 00:00:00 0 0000RAHUL@GMAIL.COM
9 1 2011-05-10 00:00:00 0 0000SS0@GMAIL.COM
12 1 2010-12-09 00:00:00 0 0001HARISH@GMAIL.COM
11 2 2010-12-12 00:00:00 3 0001HARISH@GMAIL.COM
10 3 2010-12-22 00:00:00 13 0001HARISH@GMAIL.COM
14 1 2012-11-28 00:00:00 0 000AYUSH@GMAIL.COM
15 2 2012-11-29 00:00:00 1 000AYUSH@GMAIL.COM
17 3 2012-12-08 00:00:00 10 000AYUSH@GMAIL.COM
18 4 2012-12-12 00:00:00 14 000AYUSH@GMAIL.COM
13 5 2013-01-25 00:00:00 58 000AYUSH@GMAIL.COM
</code></pre>
<pre><code>import pandas as pd
import StringIO
import numpy as np
from time import strptime
from datetime import datetime
# your data
data = """
"DF","00000000@11111.COM","FLTINT1000130394756","26JUL2010","B2C","6799.2"
"Rail","00000.POO@GMAIL.COM","NR251764697478","24JUN2011","B2C","2025"
"DF","0000650000@YAHOO.COM","NF2513521438550","01JAN2013","B2C","6792"
"Bus","00009.GAURAV@GMAIL.COM","NU27012932319739","26JAN2013","B2C","800"
"Rail","0000.ANU@GMAIL.COM","NR251764697526","24JUN2011","B2C","595"
"Rail","0000MANNU@GMAIL.COM","NR251277005737","29OCT2011","B2C","957"
"Rail","0000PRANNOY0000@GMAIL.COM","NR251297862893","21NOV2011","B2C","212"
"DF","0000PRANNOY0000@YAHOO.CO.IN","NF251327485543","26JUN2011","B2C","17080"
"Rail","0000RAHUL@GMAIL.COM","NR2512012069809","25OCT2012","B2C","5731"
"DF","0000SS0@GMAIL.COM","NF251355775967","10MAY2011","B2C","2000"
"DF","0001HARISH@GMAIL.COM","NF251352240086","22DEC2010","B2C","4006"
"DF","0001HARISH@GMAIL.COM","NF251742087846","12DEC2010","B2C","1000"
"DF","0001HARISH@GMAIL.COM","NF252022031180","09DEC2010","B2C","3439"
"Rail","000AYUSH@GMAIL.COM","NR2151120122283","25JAN2013","B2C","136"
"Rail","000AYUSH@GMAIL.COM","NR2151213260036","28NOV2012","B2C","41"
"Rail","000AYUSH@GMAIL.COM","NR2151313264432","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2151413266728","29NOV2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2512912359037","08DEC2012","B2C","96"
"Rail","000AYUSH@GMAIL.COM","NR2517612385569","12DEC2012","B2C","96"
"""
# read in and create the store
data_store_file = 'datastore.h5'
store = pd.HDFStore(data_store_file,'w')
def dp(x, **kwargs):
return [ datetime(*strptime(v,'%d%b%Y')[0:3]) for v in x ]
chunksize=5
reader = pd.read_csv(StringIO.StringIO(data),names=['x1','email','x2','date','x3','x4'],
header=0,usecols=['email','date'],parse_dates=['date'],
date_parser=dp, chunksize=chunksize)
for i, chunk in enumerate(reader):
chunk['indexer'] = chunk.index + i*chunksize
# create the global index, and keep it in the frame too
df = chunk.set_index('indexer')
# need to set a minimum size for the email column
store.append('data',df,min_itemsize={'email' : 100})
store.close()
# define the combiner function
def combiner(x):
# given a group of emails (the same), return a combination
# with the new data
# sort by the date
y = x.sort('date')
# calc the diff in days (an integer)
y['diff'] = (y['date']-y['date'].iloc[0]).apply(lambda d: float(d.item().days))
y['count'] = pd.Series(range(1,len(y)+1),index=y.index,dtype='float64')
return y
# reduce the store (and create a new one by chunks)
in_store_file = data_store_file
in_store1 = pd.HDFStore(in_store_file)
# iter on the store 1
for chunki, df1 in enumerate(in_store1.select('data',chunksize=2*chunksize)):
print "processing [%s] [%s]" % (chunki,in_store_file)
out_store_file = 'datastore_%s.h5' % chunki
out_store = pd.HDFStore(out_store_file,'w')
# iter on store 2
in_store2 = pd.HDFStore(in_store_file)
for df2 in in_store2.select('data',chunksize=chunksize):
# concat & drop dups
df = pd.concat([df1,df2]).drop_duplicates(['email','date'])
# group and combine
result = df.groupby('email').apply(combiner)
# remove the mi (that we created in the groupby)
result = result.reset_index('email',drop=True)
# only store those rows which are in df2!
result = result.reindex(index=df2.index).dropna()
# store to the out_store
out_store.append('data',result,min_itemsize={'email' : 100})
in_store2.close()
out_store.close()
in_store_file = out_store_file
in_store1.close()
# show the reduced store
print pd.read_hdf(out_store_file,'data').sort(['email','diff'])
</code></pre>
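<p>For reference, if the data fit in memory, the per-email computation matching the question's expected output (count plus gap in days since the previous booking for the same email) is a short groupby; a sketch assuming <code>df</code> holds the parsed rows with <code>email</code> and <code>date</code> columns:</p>
<pre><code>df = df.sort_values(['email', 'date'])
df['count'] = df.groupby('email').cumcount() + 1
df['diff_days'] = df.groupby('email')['date'].diff().dt.days.fillna(0).astype(int)
</code></pre>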
|
python|csv|numpy|pandas|large-data-volumes
| 8
|
8,218
| 57,677,834
|
How to combine input with output tensors to create a recurrent layer?
|
<p>I'm trying to change a layer that calculates an output with ny outputs into a layer that produces a recurrent output with the same shape as the input. For example, consider the following:</p>
<pre><code>import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

nt = 1000
nx_in = 8
ny = 2
x_train = np.zeros((nt, nx_in))
x_input = keras.Input(shape=(1, None, x_train.shape[1]), name='x_input')
ny_output = layers.Dense(ny)(x_input)
</code></pre>
<p>The above generates the expected results. Now I would like to create a recurrent layer by generating a new output tensor that has the same shape as the input tensor, built by taking one value from tensor ny_output and (nx_in/ny - 1), i.e. 3, values from tensor x_input:</p>
<pre><code>max_lag = 5
stored_lags = max_lag - 1
print('x_input.shape: ', x_input.shape)
print('ny_output.shape:', ny_output.shape)
print('max_lag: ', max_lag)
output_list = list()
ky_start = 0
for iy in range(ny):
ky_end = ky_start + stored_lags - 1
print('append output, {}:{}'.format(iy, iy+1))
output_list.append(ny_output[:, :, :, iy:(iy+1)])
print('append input, {}:{}'.format(ky_start, ky_end))
output_list.append(x_input[:, :, :, ky_start:ky_end])
ky_start = ky_end + 1
outputs = tf.unstack(output_list, axis=3)
</code></pre>
<p>The printed output is</p>
<pre><code>x_input.shape: (?, 1, ?, 8)
ny_output.shape: (?, 1, ?, 2)
max_lag: 5
append output, 0:1
append input, 0:3
append output, 1:2
append input, 4:7
</code></pre>
<p>This generates the following error message</p>
<pre><code>ValueError: Dimension 3 in both shapes must be equal, but are 1 and 3. Shapes are [?,1,?,1] and [?,1,?,3].
From merging shape 2 with other shapes. for 'packed' (op: 'Pack') with input shapes: [?,1,?,1], [?,1,?,3], [?,1,?,1], [?,1,?,3].
</code></pre>
<p>How to generate a new output tensor that has the same shape as the input tensor, and is built by appending one element from tensor output_ny, and 3 elements from tensor input, for each ny?</p>
|
<p>The Tensorflow "graph" is a Directed Acyclic Graph of computations. The backpropagation algorithm walks this graph backwards, and prediction walks it forwards.</p>
<p>My understanding is that you are trying to introduce a cycle into the graph. This will not work.</p>
<p>If you start from a basic implementation of neural networks, you can add your own recurrent cell. </p>
|
python|tensorflow|keras|recurrent-neural-network|tensor
| 0
|
8,219
| 57,404,966
|
Pandas column names begin from index column when running on visual code jupyter environment
|
<p><a href="https://i.stack.imgur.com/HZhwL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HZhwL.png" alt="enter image description here"></a></p>
<p>I have written a very simple piece of code that creates a pandas data frame. The issue is that when I name my columns <code>['X','Y']</code>, the column heading <code>X</code> ends up over the column containing the index values. This only occurs when the code runs in the Jupyter environment, initiated in Visual Studio Code by <code>#%%</code>. When I run the same code in the terminal, the results are correct, as shown in the image below. Any ideas why?</p>
<pre><code>#%%
import pandas as pd
from matplotlib import pyplot as plt
data_1 = {'X': [1.0,2.0,3.0], 'Y': [1.0,2.5,3.5]}
df_1 = pd.DataFrame(data_1)
print(df_1)
</code></pre>
<p>I have also tried other methods, but the result is the same</p>
<pre><code>data_1 = [(1.0,1.0),(2.0,2.5),(3.0,3.5)]
df_1 = pd.DataFrame(data_1, columns = ['X','Y'])
</code></pre>
|
<p>It's the default rendering of the plugin and only affects the display; it does not affect the functionality.</p>
<p><img src="https://i.imgur.com/gO37X1l.png" alt="example"></p>
|
python-3.x|pandas|visual-studio-code|jupyter-notebook
| 1
|
8,220
| 57,401,767
|
Assigning new column value to data frame based on if time series data matches functional constraint
|
<p>I have the time series of a set of data for different samples. I would like to add a new column stating if the sample falls within a minimum and maximum constraint. If it does, I will assign the value 1 to the column, otherwise 0.</p>
<p>The example data frame I am using is below. The minimum constraint has the form y=-5t+30 and the maximum constraint has the form y=-5t+110. I would like to test the samples between time=2 and time=4. </p>
<p>I have thought to use np.where() to check each column beginning at 'time=2' and to 'time=4', but I am unsure how to loop through the data frame and unsure how to specify the time while looping through. </p>
<pre class="lang-py prettyprint-override"><code>
ex_data=[['s1',50,50,50,50,50,50],['s2',120,110,100,90,80,70],['s3',30,70,110,70,30,10],['s4',10,30,70,110,70,30],['s5',55,30,20,15,5,0]]
df=pd.DataFrame(ex_data,columns=['sample','time=0','time=1','time=2','time=3','time=4','time=5'])
</code></pre>
<p>I expect the new column labeled success to be 1,0,0,0,0 for each respective row.</p>
<hr>
<p>Edit 1:
Perhaps I was not specific enough. I would like the new column to be 1 if all the time steps match the constraint, otherwise 0.</p>
<hr>
<p>Edit 2:</p>
<p>I was able to come up with the following solution. </p>
<pre class="lang-py prettyprint-override"><code> def checking(s):
mincon=[20,15,10]
maxcon=[100,95,90]
val=s.iloc[3:6]
val=val.values
for i in range(len(val)):
if (val[i]<=mincon[i]) | (val[i]>=maxcon[i]):
return 0
return 1
ex_data=[['s1',50,50,50,50,50,50],['s2',120,110,100,90,80,70],
['s3',30,70,110,70,30,10],['s4',10,30,70,110,70,30],['s5',55,30,20,15,5,0]]
df=pd.DataFrame(ex_data,columns=
['sample','time=0','time=1','time=2','time=3','time=4','time=5'])
df
df['matches constraint?']=df.apply(checking,axis=1)
df
</code></pre>
<p>I am not sure how to run this code on here, but it outputs what I want. Does anyone have suggestions for how I can speed this up when applying it to a much larger data set?</p>
|
<p>First, tidy your data.</p>
<pre><code>tidy = pd.wide_to_long(df, stubnames='time', i='sample', j='x', sep='=') \
.reset_index().rename(columns={'time': 'value', 'x': 'time'})
# I made the choice to do some renaming to make it clearer what vars are what
</code></pre>
<p>Yield this sort of DataFrame (truncated):</p>
<pre><code> sample time value
0 s1 0 50
1 s2 0 120
2 s3 0 30
3 s4 0 10
4 s5 0 55
5 s1 1 50
</code></pre>
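<p>Next, add the constraint bounds as columns, computed from the time column (these are the formulas given in the question):</p>
<pre><code>tidy['min'] = -5 * tidy['time'] + 30
tidy['max'] = -5 * tidy['time'] + 110
</code></pre>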
<p>Then, make sure it fits:</p>
<pre><code>tidy['fits'] = (tidy['min'] < tidy.value) & (tidy.value < tidy['max'])
</code></pre>
<p>Yields this sort of DataFrame (truncated):</p>
<pre><code> sample time value min max fits
0 s1 0 50 30 110 True
1 s2 0 120 30 110 False
2 s3 0 30 30 110 False
3 s4 0 10 30 110 False
4 s5 0 55 30 110 True
5 s1 1 50 25 105 True
6 s2 1 110 25 105 False
7 s3 1 70 25 105 True
</code></pre>
<p>If you want only the ones where time is in [2, 4], then subset the tidy DataFrame to that set (e.g. <code>tidy.query('2 <= time <= 4', inplace=True)</code>). If you want to use this to drop samples, then aggregate it from there and merge back to your original set, then do your subset command.</p>
|
python|pandas
| 0
|
8,221
| 57,638,682
|
question how to deal with KeyError: 0 or KeyError: 1 etc
|
<p>I am new in python and this data science world and I am trying to play with different datasets. </p>
<p>In this case I am using the housing price index from Quandl, but unfortunately I get stuck when I need to take the abbreviation names from the wiki page, always getting the same KeyError.</p>
<pre><code>import quandl
import pandas as pd
#pull every single housing price index from quandl
#quandl api key
api_key = 'xxxxxxxxxxxx'
#get stuff from quandl
df = quandl.get('FMAC/HPI_AK',authtoken = api_key) #alaska \
##print(df.head())
#get 50 states using pandas read html from wikipedia
fifty_states = pd.read_html('https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States')
##print(fifty_states[0][1]) #first data frame is index 0, #looking for column 1,#from element 1 on
#get quandl frannymac query names for each 50 state
for abbv in fifty_states[0][1][2:]:
    print('FMAC/HPI_'+str(abbv))
</code></pre>
<p>So the problem I got in the following step:</p>
<pre><code>#get 50 states using pandas read html from wikipedia
fifty_states = pd.read_html('https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States')
##print(fifty_states[0][1]) #first data frame is index 0, #looking for column 1,#from element 1 on
</code></pre>
<p>I have tried different ways to get just the abbreviation but does not work</p>
<pre><code>for abbv in fifty_states[0][1][2:]:
#print('FMAC/HPI_'+str(abbv))
for abbv in fifty_states[0][1][1:]:
#print('FMAC/HPI_'+str(abbv))
</code></pre>
<p>always Keyerror: 0 </p>
<p>I just need this step to work, and to have the following output:</p>
<pre><code>FMAC/HPI_AL,
FMAC/HPI_AK,
FMAC/HPI_AZ,
FMAC/HPI_AR,
FMAC/HPI_CA,
FMAC/HPI_CO,
FMAC/HPI_CT,
FMAC/HPI_DE,
FMAC/HPI_FL,
FMAC/HPI_GA,
FMAC/HPI_HI,
FMAC/HPI_ID,
FMAC/HPI_IL,
FMAC/HPI_IN,
FMAC/HPI_IA,
FMAC/HPI_KS,
FMAC/HPI_KY,
FMAC/HPI_LA,
FMAC/HPI_ME
</code></pre>
<p>for the 50 states from US and then proceed to make a data analysis from this data.</p>
<p>Can anybody tell me what I am doing wrong? Cheers.</p>
|
<p>Note that <code>fifty_states</code> is a <strong>list</strong> of DataFrames, filled with
content of tables from the source page.</p>
<p>The first of them (at index <em>0</em> in <em>fifty_states</em>) is the table of US states.</p>
<p>If you don't know the column names in a DataFrame (e.g. <em>df</em>),
to get column 1 from it (numbering from <em>0</em>), run:</p>
<pre><code>df.iloc[:, 1]
</code></pre>
<p>So, since we want this column from <em>fifty_states[0]</em>, run:</p>
<pre><code>fifty_states[0].iloc[:, 1]
</code></pre>
<p>Your code failed because you attempted to apply <em>[1]</em> to this DataFrame,
but this DataFrame has no column named <em>1</em>.</p>
<p>Note that e.g. <code>fifty_states[0][('Cities', 'Capital')]</code> gives proper result,
because:</p>
<ul>
<li>this DataFrame has a MultiIndex on columns,</li>
<li>one of columns has <em>Cities</em> at the first MultiIndex level
and <em>Capital</em> at the second level.</li>
</ul>
<p>And getting back to your code, run:</p>
<pre><code>for abbv in fifty_states[0].iloc[:, 1]:
print('FMAC/HPI_' + str(abbv))
</code></pre>
<p>Note that <em>[2:]</em> is not needed. You probably wanted to skip 2 initial rows
of the <em><table></em> HTML tag, containing column names,
but in <em>Pandas</em> they are actually kept in the MultiIndex on columns,
so to get all values, you don't need to skip anything.</p>
<p>If you want these strings as a <strong>list</strong>, for future use, the code can be:</p>
<pre><code>your_list = ('FMAC/HPI_' + fifty_states[0].iloc[:, 1]).tolist()
</code></pre>
|
python|pandas|quandl
| 0
|
8,222
| 57,591,096
|
Reshaping data and seperation by multiple delimiters
|
<p>Sorry but I need some help with pandas data wrangling.
I have a large dataset in excel. Each cell contains data from several days. I have loaded the data with pandas, but I haven't found a desirable way of separating it into individual cells.
The format is "date", space, dash, space, "value", pipe, repeated: e.g. <code>20100205 - 0.10 |</code>.</p>
<p>I want to separate the cell such that I have a cell with the date and corresponding value below. </p>
<pre><code>+-----------+------------------------------------------------------
| ID | WBC
+-----------+------------------------------------------------------
| 1 | 20100205 - 0.10 |20100205 - 0.16 |20100205 - 0.21 etc..
+-----------+------------------------------------------------------
Ideal:
+----------+-------------+-------------+------------+
| ID | 20100205 | 20100205 | 20100205 |
+----------+-------------+-------------+------------+
| 1 | 0.10 | 0.16 | 0.21 |
+----------+-------------+-------------+------------+
</code></pre>
<pre><code>from pandas import DataFrame
data = {'ID': ['1'],
'WBC': ["20100205 - 0.10 |20100205 - 0.16 |20100205 - 0.21 |20100305 - 71.69 |20100306 - 0.27 |20100306 - 0.42 |20100306 - 1.42"]
}
df = DataFrame (data,columns= ['ID', 'WBC'])
</code></pre>
|
<p>The basic idea is to parse the information in your <code>WBC</code> column and then create the new columns as required:</p>
<pre><code>import pandas as pd
data={'ID': ['1'],
'WBC': ["20100205 - 0.10 |20100205 - 0.16 |20100205 - 0.21 |20100305 - 71.69 |20100306 - 0.27 |20100306 - 0.42 |20100306 - 1.42"]
}
df=pd.DataFrame(data, columns= ['ID', 'WBC'])
df["WBC"] = df["WBC"].str.split("|")
dates = [x.split(" - ")[0] for x in df.loc[0, "WBC"]]
vals = [x.split(" - ")[1] for x in df.loc[0, "WBC"]]
for i in range(len(dates)):
df[int(dates[i])] = float(vals[i])
df.drop("WBC", axis=1, inplace=True)
# df.set_index("ID", inplace=True) # If you want this as your index
</code></pre>
<p>This then leaves you with one column per unique date (repeated dates overwrite each other in the loop, so only the last value for each date survives):</p>
<pre><code>df
   ID  20100205  20100305  20100306
0   1      0.21     71.69      1.42
</code></pre>
<p>(Ideally, your data frame should have unique column names anyway.)</p>
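<p>If you do need the duplicate date columns from the "Ideal" layout in the question (so that repeated dates keep their own values), one sketch, reusing the <code>dates</code> and <code>vals</code> lists from above instead of the overwrite loop, is:</p>
<pre><code># one row of values, with duplicate date column labels allowed
new_cols = pd.DataFrame([[float(v) for v in vals]],
                        columns=[int(d) for d in dates])
out = pd.concat([df[['ID']], new_cols], axis=1)
</code></pre>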
|
python|pandas
| 1
|
8,223
| 24,234,034
|
Pandas: import csv with user corrected faulty values
|
<p>I try to import a csv and dealing with faulty values, e.x. wrong decimal seperator or strings in int/double columns. I use converters to do the error fixing. In case of strings in number columns the user sees a input box where he has to fix the value. Is it possible to get the column name and/or the row which is actually 'imported'? If not, is there a better way to do the same?</p>
<pre><code>example csv:
------------
description;elevation
point a;-10
point b;10,0
point c;35.5
point d;30x
</code></pre>
<pre><code>from PyQt4 import QtGui
import numpy
from pandas import read_csv

def fixFloat(x):
    # return x as float if possible
    try:
        return float(x)
    except:
        # if not, test if there is a , inside, replace it with a . and return it as float
        try:
            return float(x.replace(",", "."))
        except:
            changedValue, ok = QtGui.QInputDialog.getText(None, 'Faulty value', 'Please correct the faulty value:', text=x)
            if ok:
                return fixFloat(changedValue)
            else:
                return -9999999999

def fixEmptyStrings(s):
    if s == '':
        return None
    else:
        return s

converters = {
    'description': fixEmptyStrings,
    'elevation': fixFloat
}
dtypes = {
    'description': object,
    'elevation': numpy.float64
}
csvData = read_csv('/tmp/csv.txt',
                   error_bad_lines=True,
                   dtype=dtypes,
                   converters=converters
                   )
</code></pre>
|
<p>I would take a different approach here.<br>
Rather than at read_csv time, I would read the csv naively and <strong>then</strong> fix / convert to float:</p>
<pre><code>In [11]: df = pd.read_csv(csv_file, sep=';')
In [12]: df['elevation']
Out[12]:
0 -10
1 10,0
2 35.5
3 30x
Name: elevation, dtype: object
</code></pre>
<p>Now just iterate through this column:</p>
<pre><code>In [13]: df['elevation'] = df['elevation'].apply(fixFloat)
</code></pre>
<p><em>This is going to make it much easier to reason about the code (which columns you're applying functions to, how to access other columns etc. etc.).</em></p>
|
python|csv|pandas|user-input|converters
| 0
|
8,224
| 43,698,479
|
pandas dataframe merge overlapping keys
|
<p>I have two dataframes, left and right. The keys (columns) of right are a subset of those in left. I want to keep the column data from right and put it in left, and I don't care about the overlapping key data in the left:</p>
<pre><code>left = pd.DataFrame({'key1': ['Knan', 'Knan', 'Knan', 'Knan'],
'key2': ['Kxx1', 'Kxx2', 'Kxx3', 'Kxx4'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K5', 'K6', 'K7', 'K8']})
for key in right.keys():
if key in left.keys():
left[key] = right[key]
</code></pre>
<p>Is there a better way to do this with merge or concat or something?</p>
|
<p>IIUC, you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a>:</p>
<pre><code>df_out = right.combine_first(left)
print(df_out)
</code></pre>
<p>Output:</p>
<pre><code> A B key1 key2
0 A0 B0 K0 K5
1 A1 B1 K1 K6
2 A2 B2 K1 K7
3 A3 B3 K2 K8
</code></pre>
|
python|pandas|dataframe
| 0
|
8,225
| 43,481,710
|
select first occurrence of minimum index from numpy array
|
<p>I am trying to find out the index of the minimum value in each row and I am using below code.</p>
<pre><code>#code
import numpy as np
C = np.array([[1,2,4],[2,2,5],[4,3,3]])
ind = np.where(C == C.min(axis=1).reshape(len(C),1))
ind
#output
(array([0, 1, 1, 2, 2], dtype=int64), array([0, 0, 1, 1, 2], dtype=int64))
</code></pre>
<p>but the problem it is returning all indices of minimum values in each row. but I want only the first occurrence of minimum values. like </p>
<pre><code>(array([0, 1, 2], dtype=int64), array([0, 0, 1], dtype=int64))
</code></pre>
|
<p>If you want to use comparison against the minimum value, we need to use <code>np.min</code> and keep the dimensions with <code>keepdims</code> set as <code>True</code> to give us a boolean array/mask. To select the first occurrence, we can use <code>argmax</code> along each row of the mask and thus have our desired output.</p>
<p>Thus, the implementation to get the corresponding column indices would be -</p>
<pre><code>(C==C.min(1, keepdims=True)).argmax(1)
</code></pre>
<p>Sample step-by-step run -</p>
<pre><code>In [114]: C # Input array
Out[114]:
array([[1, 2, 4],
[2, 2, 5],
[4, 3, 3]])
In [115]: C==C.min(1, keepdims=1) # boolean array of min values
Out[115]:
array([[ True, False, False],
[ True, True, False],
[False, True, True]], dtype=bool)
In [116]: (C==C.min(1, keepdims=True)).argmax(1) # argmax to get first occurrences
Out[116]: array([0, 0, 1])
</code></pre>
<p>The first output of row indices would simply be a range array -</p>
<pre><code>np.arange(C.shape[0])
</code></pre>
<hr>
<p>To achieve the same column indices of the first occurrence of minimum values, a direct way would be to use <code>np.argmin</code> -</p>
<pre><code>C.argmin(axis=1)
</code></pre>
|
python|python-2.7|python-3.x|numpy|scipy
| 3
|
8,226
| 73,035,536
|
Pytorch CUDA not available with correct versions
|
<p>I really need help setting up CUDA for development with Pytorch. I have a Nvidia graphics card and am using Python 3.8. To install pytorch with the correct CUDA integration I ran <code>conda install pytorch torchvision cudatoolkit=10.1 -c python</code>. The problem is that <code>torch.cuda.is_available()</code> always returns <code>False</code>.</p>
<p>Can anyone help me here?
The following are my versions:</p>
<p><code>nvcc --version</code></p>
<pre><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
</code></pre>
<p><code>nvidia-smi</code></p>
<pre><code>NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6
</code></pre>
|
<p>As in the <a href="https://pytorch.org/" rel="nofollow noreferrer">PyTorch</a> website, install with <code>conda install pytorch torchvision cudatoolkit=11.3 -c pytorch</code> or <code>conda install pytorch torchvision cudatoolkit=11.6 -c pytorch</code></p>
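<p>After installing, you can verify that the build you got actually shipped with CUDA support (a quick sanity check, independent of the exact version):</p>
<pre><code>import torch
print(torch.__version__)         # installed PyTorch version
print(torch.version.cuda)        # CUDA version the wheel was built with, or None for CPU-only builds
print(torch.cuda.is_available())
</code></pre>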
|
python|pytorch|gpu
| 0
|
8,227
| 73,048,449
|
Python: Read Json and create chart
|
<p>I have a json file:</p>
<pre><code>{
"code":"200000",
"data":[
{
"price":"1001",
"sequence":"1636607335665",
"side":"sell",
"size":"0.00000544",
"time":1657828843154264094
},
{
"price":"5538.9",
"sequence":"1636607335668",
"side":"buy",
"size":"0.00000098",
"time":1657828843399526634
},
{
"price":"1001",
"sequence":"1636607335707",
"side":"sell",
"size":"0.00000098",
"time":1657828850316727168
},
{
"price":"1001",
"sequence":"1636607335710",
"side":"sell",
"size":"0.00000098",
"time":1657828850582673268
},
{
"price":"1001",
"sequence":"1636607335713",
"side":"sell",
"size":"0.00000098",
"time":1657828850924769384
},
{
"price":"1001",
"sequence":"1636607335716",
"side":"sell",
"size":"0.00000098",
"time":1657828850993197165
},
{
"price":"1001",
"sequence":"1636607335719",
"side":"sell",
"size":"0.00000098",
"time":1657828851193402610
},
{
"price":"5538.9",
"sequence":"1636607335722",
"side":"buy",
"size":"0.00000089",
"time":1657828857749638652
},
{
"price":"1001",
"sequence":"1636607335725",
"side":"sell",
"size":"0.00000018",
"time":1657828859913652722
},
{
"price":"1001",
"sequence":"1636607335728",
"side":"sell",
"size":"0.00000018",
"time":1657828860167682305
},
{
"price":"1001",
"sequence":"1636607335731",
"side":"sell",
"size":"0.00000018",
"time":1657828860325667720
},
{
"price":"1001",
"sequence":"1636607335734",
"side":"sell",
"size":"0.00000018",
"time":1657828860426539709
},
{
"price":"1001",
"sequence":"1636607335737",
"side":"sell",
"size":"0.00000018",
"time":1657828860605057780
},
{
"price":"5538.9",
"sequence":"1636607335740",
"side":"buy",
"size":"0.00000016",
"time":1657829347628544318
},
{
"price":"1001",
"sequence":"1636607335751",
"side":"sell",
"size":"1",
"time":1657830042459243431
},
{
"price":"1001",
"sequence":"1636607335754",
"side":"sell",
"size":"1",
"time":1657830042758958029
},
{
"price":"1001",
"sequence":"1636607335757",
"side":"sell",
"size":"1",
"time":1657830043035655903
},
{
"price":"1001",
"sequence":"1636607335760",
"side":"sell",
"size":"1",
"time":1657830043157218306
},
{
"price":"1001",
"sequence":"1636607335763",
"side":"sell",
"size":"1",
"time":1657830043346430450
},
{
"price":"5538.9",
"sequence":"1636607335766",
"side":"buy",
"size":"0.16496021",
"time":1657830098124334016
},
{
"price":"18000",
"sequence":"1636607335770",
"side":"buy",
"size":"0.22673876",
"time":1657830098399521206
},
{
"price":"18000",
"sequence":"1636607335773",
"side":"buy",
"size":"0.00000023",
"time":1657830098764936584
},
{
"price":"1001",
"sequence":"1636607335922",
"side":"sell",
"size":"0.07833984",
"time":1657830344363632786
},
{
"price":"1001",
"sequence":"1636607335925",
"side":"sell",
"size":"0.07833984",
"time":1657830344675019788
},
{
"price":"1001",
"sequence":"1636607335928",
"side":"sell",
"size":"0.07833984",
"time":1657830345026214909
},
{
"price":"1001",
"sequence":"1636607335931",
"side":"sell",
"size":"0.07833984",
"time":1657830345193678817
},
{
"price":"1001",
"sequence":"1636607335934",
"side":"sell",
"size":"0.07833984",
"time":1657830345382321251
},
{
"price":"18000",
"sequence":"1636607335937",
"side":"buy",
"size":"0.02173928",
"time":1657830368271317194
},
{
"price":"18000",
"sequence":"1636607335940",
"side":"buy",
"size":"0.00000002",
"time":1657830368537904561
},
{
"price":"1001",
"sequence":"1636607336107",
"side":"sell",
"size":"0.00434786",
"time":1657830506400266964
},
{
"price":"1001",
"sequence":"1636607336110",
"side":"sell",
"size":"0.00434786",
"time":1657830506678292287
},
{
"price":"1001",
"sequence":"1636607336113",
"side":"sell",
"size":"0.00434786",
"time":1657830506963452449
},
{
"price":"1001",
"sequence":"1636607336116",
"side":"sell",
"size":"0.00434786",
"time":1657830507262876484
},
{
"price":"1001",
"sequence":"1636607336119",
"side":"sell",
"size":"0.00434786",
"time":1657830507274604674
},
{
"price":"18000",
"sequence":"1636607336122",
"side":"buy",
"size":"0.00120653",
"time":1657830544353165335
},
{
"price":"18000",
"sequence":"1636607336125",
"side":"buy",
"size":"0.00000001",
"time":1657830544694561727
},
{
"price":"18000",
"sequence":"1636607336144",
"side":"buy",
"size":"0.75031517",
"time":1657831267172000485
},
{
"price":"19000",
"sequence":"1636607336146",
"side":"buy",
"size":"1",
"time":1657831267172000485
},
{
"price":"19349",
"sequence":"1636607336148",
"side":"buy",
"size":"0.90156477",
"time":1657831267172000485
},
{
"price":"19349",
"sequence":"1636607336153",
"side":"buy",
"size":"0.00051682",
"time":1657832244551197889
},
{
"price":"19349",
"sequence":"1636607336156",
"side":"buy",
"size":"0.00051682",
"time":1657832323552102950
},
{
"price":"19349",
"sequence":"1636607336159",
"side":"buy",
"size":"0.00051682",
"time":1657832404615114041
},
{
"price":"19349",
"sequence":"1636607336162",
"side":"buy",
"size":"0.0002",
"time":1657832453907261588
},
{
"price":"19349",
"sequence":"1636607336165",
"side":"buy",
"size":"0.00051682",
"time":1657832502167873450
},
{
"price":"19349",
"sequence":"1636607336178",
"side":"buy",
"size":"0.00516822",
"time":1657840322172772611
},
{
"price":"19349",
"sequence":"1636607336181",
"side":"buy",
"size":"0.00516822",
"time":1657841032582704012
},
{
"price":"19349",
"sequence":"1636607336188",
"side":"buy",
"size":"0.49431546",
"time":1657841056954765534
},
{
"price":"19349",
"sequence":"1636607336191",
"side":"buy",
"size":"0.00516822",
"time":1657841633410013927
},
{
"price":"19343.1953",
"sequence":"1636607336194",
"side":"sell",
"size":"0.00025848",
"time":1657841683315799309
},
{
"price":"19349",
"sequence":"1636607336202",
"side":"buy",
"size":"0.00031009",
"time":1657841883739588741
},
{
"price":"19337.3923414",
"sequence":"1636607336205",
"side":"sell",
"size":"0.00025856",
"time":1657841909289751556
},
{
"price":"19343.19529998",
"sequence":"1636607336213",
"side":"buy",
"size":"0.00025856",
"time":1657841956440581560
},
{
"price":"19349",
"sequence":"1636607336215",
"side":"buy",
"size":"0.00005161",
"time":1657841956440581560
},
{
"price":"19337.39234139",
"sequence":"1636607336220",
"side":"sell",
"size":"0.00025863",
"time":1657849200908617135
},
{
"price":"19331.59112369",
"sequence":"1636607336222",
"side":"sell",
"size":"0.00000137",
"time":1657849200908617135
},
{
"price":"19343.19529997",
"sequence":"1636607336227",
"side":"buy",
"size":"0.00025863",
"time":1657849228110938680
},
{
"price":"19349",
"sequence":"1636607336229",
"side":"buy",
"size":"0.00005154",
"time":1657849228110938680
},
{
"price":"19337.39234138",
"sequence":"1636607336234",
"side":"sell",
"size":"0.0002587",
"time":1657849268869709337
},
{
"price":"19331.59112369",
"sequence":"1636607336236",
"side":"sell",
"size":"0.0000013",
"time":1657849268869709337
},
{
"price":"19343.19529996",
"sequence":"1636607336241",
"side":"buy",
"size":"0.0002587",
"time":1657849326110758419
},
{
"price":"19349",
"sequence":"1636607336243",
"side":"buy",
"size":"0.00005147",
"time":1657849326110758419
},
{
"price":"19349",
"sequence":"1636607336248",
"side":"buy",
"size":"0.00031009",
"time":1657849345012661392
},
{
"price":"19337.39234137",
"sequence":"1636607339163",
"side":"sell",
"size":"0.00025877",
"time":1657867041820159833
},
{
"price":"19331.59112369",
"sequence":"1636607339165",
"side":"sell",
"size":"0.00025597",
"time":1657867041820159833
},
{
"price":"5689",
"sequence":"1636607339562",
"side":"buy",
"size":"0.01",
"time":1657868597265526086
},
{
"price":"1001",
"sequence":"1636607339565",
"side":"sell",
"size":"0.01",
"time":1657868598170110735
},
{
"price":"5689",
"sequence":"1636607339571",
"side":"buy",
"size":"0.00017577",
"time":1657868599016786169
},
{
"price":"1001",
"sequence":"1636607339575",
"side":"sell",
"size":"0.00017577",
"time":1657868599847374525
},
{
"price":"5689",
"sequence":"1636607340168",
"side":"buy",
"size":"0.01",
"time":1657869354756776345
},
{
"price":"1001",
"sequence":"1636607340171",
"side":"sell",
"size":"0.01",
"time":1657869355998154084
},
{
"price":"5689",
"sequence":"1636607340174",
"side":"buy",
"size":"0.00017577",
"time":1657869356825345678
},
{
"price":"1001",
"sequence":"1636607340180",
"side":"sell",
"size":"0.00017577",
"time":1657869357692624956
},
{
"price":"2500",
"sequence":"1636607341557",
"side":"buy",
"size":"0.001",
"time":1657870618541508945
},
{
"price":"2500",
"sequence":"1636607341559",
"side":"buy",
"size":"0.002",
"time":1657870618541508945
},
{
"price":"5689",
"sequence":"1636607341566",
"side":"buy",
"size":"0.00017577",
"time":1657870620769860457
},
{
"price":"1001",
"sequence":"1636607341569",
"side":"sell",
"size":"0.000999",
"time":1657870621674017603
},
{
"price":"1001",
"sequence":"1636607341647",
"side":"sell",
"size":"0.07510662",
"time":1657870777704932538
},
{
"price":"5689",
"sequence":"1636607341671",
"side":"buy",
"size":"0.00527333",
"time":1657870857711688568
},
{
"price":"5689",
"sequence":"1636607341674",
"side":"buy",
"size":"0.01",
"time":1657870868834149507
},
{
"price":"1001",
"sequence":"1636607341677",
"side":"sell",
"size":"0.01",
"time":1657870869931861093
},
{
"price":"5689",
"sequence":"1636607341680",
"side":"buy",
"size":"0.00017577",
"time":1657870870751938985
},
{
"price":"1001",
"sequence":"1636607341683",
"side":"sell",
"size":"0.000999",
"time":1657870871658420024
},
{
"price":"5689",
"sequence":"1636607341725",
"side":"buy",
"size":"0.06474014",
"time":1657870941440018617
},
{
"price":"1001",
"sequence":"1636607341743",
"side":"sell",
"size":"0.06474014",
"time":1657870960390939598
},
{
"price":"5689",
"sequence":"1636607341752",
"side":"buy",
"size":"0.002",
"time":1657870973846595055
},
{
"price":"1001",
"sequence":"1636607341767",
"side":"sell",
"size":"0.00662687",
"time":1657871009792192217
},
{
"price":"5689",
"sequence":"1636607341791",
"side":"buy",
"size":"0.004",
"time":1657871055207222592
},
{
"price":"5689",
"sequence":"1636607341878",
"side":"buy",
"size":"0.06420589",
"time":1657871246995089433
},
{
"price":"1001",
"sequence":"1636607341881",
"side":"sell",
"size":"0.06420589",
"time":1657871257132151075
},
{
"price":"5011.18714286",
"sequence":"1636607341907",
"side":"buy",
"size":"0.007",
"time":1657871357664571926
},
{
"price":"1001",
"sequence":"1636607341911",
"side":"sell",
"size":"0.007",
"time":1657871362560748791
},
{
"price":"1001",
"sequence":"1636607341968",
"side":"sell",
"size":"0.070073",
"time":1657871518530117878
},
{
"price":"1001",
"sequence":"1636607341971",
"side":"sell",
"size":"0.063382",
"time":1657871522234372733
},
{
"price":"5689",
"sequence":"1636607342013",
"side":"buy",
"size":"0.01232959",
"time":1657871675332533829
},
{
"price":"5689",
"sequence":"1636607342016",
"side":"buy",
"size":"0.01115229",
"time":1657871676052486838
},
{
"price":"5689",
"sequence":"1636607342022",
"side":"buy",
"size":"0.06362616",
"time":1657871749924575254
},
{
"price":"1001",
"sequence":"1636607342025",
"side":"sell",
"size":"0.06362616",
"time":1657871755000539664
},
{
"price":"2500",
"sequence":"1636607342521",
"side":"buy",
"size":"0.003",
"time":1657873027819080167
},
{
"price":"5689",
"sequence":"1636607342636",
"side":"buy",
"size":"0.01757778",
"time":1657873355549703658
},
{
"price":"1001",
"sequence":"1636607343484",
"side":"sell",
"size":"0.89815224",
"time":1657875048576698842
}
]}
</code></pre>
<p>I am trying to read out the values, but unfortunately my attempt below does not work. What I have in mind:</p>
<p>I want to read the price first (e.g. <code>"price": "1001"</code>) and then whether it was a sell or buy order (<code>"side": "sell"</code>). The sell orders should then be displayed in a line chart, and likewise the buy orders.</p>
<p>My Code:</p>
<pre><code>import json
with open('events.json') as f:
json_object = json.load(f)
pairs = json_object.items()
for side, price in pairs:
print(side)
</code></pre>
<p>Output:</p>
<pre><code>code
data
Process finished with exit code 0
</code></pre>
<p>My Code for the Chart:</p>
<pre><code># importing pandas library
import pandas as pd
# importing matplotlib library
import matplotlib.pyplot as plt
# creating dataframe
df = pd.DataFrame({
'X': [1, 2, 3, 4, 5],
'Y': [2, 4, 6, 10, 15]
})
# plotting a line graph
print("Line graph: ")
plt.plot(df["X"], df["Y"])
plt.show()
</code></pre>
|
<p>You can read the dictionary data into a dataframe.</p>
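<p>Here <code>data_dict</code> is the parsed JSON from your <code>events.json</code>; a minimal loading sketch:</p>
<pre><code>import json
import pandas as pd

with open('events.json') as f:
    data_dict = json.load(f)
</code></pre>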
<pre><code>df = pd.DataFrame(data_dict)
df = df['data'].apply(pd.Series)
</code></pre>
<h4>Filtering data</h4>
<pre><code>df_plot = pd.concat([df.loc[df.side.eq('buy'), ['price']].reset_index(drop=True), df.loc[df.side.eq('sell'), ['price']].reset_index(drop=True)], axis=1)
df_plot.columns = ['buy_price', 'sell_price']
</code></pre>
<h4>Line plot</h4>
<pre><code>df_plot.astype('float').plot.line()
</code></pre>
<p><a href="https://i.stack.imgur.com/2KRRY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2KRRY.png" alt="enter image description here" /></a></p>
|
python|json|pandas|matplotlib
| 1
|
8,228
| 73,133,796
|
Removing pandas rows based on existence of values in certain columns
|
<p>I have a df like so:</p>
<pre><code>A B C
f s x
a b c
n
p l k
i
s j p
</code></pre>
<p>Now, I want to remove all the records that have the value in column A but are empty on the rest of df's columns, how can I achieve such a thing? The expected result would be a df like:</p>
<pre><code>A B C
f s x
a b c
p l k
s j p
</code></pre>
<p>@EDIT:
this solved the case for me:</p>
<pre><code>columns = list(df.columns)[1:]
df = df[~df[columns].isnull().all(1)]
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>DataFrame.replace</code></a> in order to set blanks to NaN, then you can remove rows with NaN with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>DataFrame.dropna</code></a>:</p>
<pre><code>df.replace(r'^\s*$', np.nan, regex=True).dropna()
</code></pre>
<p>Or, if you only want to remove rows where column A is filled but all the other columns are empty:</p>
<pre><code>(df.replace(r'^\s*$', np.nan, regex=True)
   .loc[lambda x: ~(x['A'].notna()
                    & x.filter(regex='^(?!A$)').isna().all(axis=1))])

# De Morgan equivalent
# (df.replace(r'^\s*$', np.nan, regex=True)
#    .loc[lambda x: x['A'].isna()
#                   | x.filter(regex='^(?!A$)').notna().any(axis=1)])
</code></pre>
|
python|pandas|dataframe
| 1
|
8,229
| 10,545,957
|
creating pandas data frame from multiple files
|
<p>I am trying to create a pandas <code>DataFrame</code> and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the <code>DataFrame</code>.</p>
<p>Not sure what's the way to append to current <code>DataFrame</code> in pandas or is there a way for pandas to suck a list of files into a <code>DataFrame</code>.</p>
|
<p>The pandas <code>concat</code> command is your friend here. Lets say you have all you files in a directory, targetdir. You can:</p>
<ol>
<li>make a list of the files </li>
<li>load them as pandas dataframes </li>
<li>and concatenate them together</li>
</ol>
<pre><code>import os
import pandas as pd

# list the files
filelist = os.listdir(targetdir)
# read them into pandas (join each name to the directory path)
df_list = [pd.read_table(os.path.join(targetdir, f)) for f in filelist]
# concatenate them together
big_df = pd.concat(df_list)
</code></pre>
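<p>If the directory can contain files you don't want, a glob pattern (the <code>*.csv</code> suffix here is just an assumption; adjust it to your files) is safer than <code>os.listdir</code>:</p>
<pre><code>import glob
filelist = glob.glob(os.path.join(targetdir, '*.csv'))
</code></pre>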
|
python|pandas
| 38
|
8,230
| 10,625,096
|
Extracting first n columns of a numpy matrix
|
<p>I have an array like this:</p>
<pre><code> array([[-0.57098887, -0.4274751 , -0.38459931, -0.58593526],
[-0.22279713, -0.51723555, 0.82462029, 0.05319973],
[ 0.67492385, -0.69294472, -0.2531966 , 0.01403201],
[ 0.41086611, 0.26374238, 0.32859738, -0.80848795]])
</code></pre>
<p>Now I want to extract the following:</p>
<pre><code> [-0.57098887, -0.4274751]
[-0.22279713, -0.51723555]
[ 0.67492385, -0.69294472]
[ 0.41086611, 0.26374238]
</code></pre>
<p>So basically just first 2 columns..</p>
|
<p>If <code>a</code> is your array:</p>
<pre><code>In [11]: a[:,:2]
Out[11]:
array([[-0.57098887, -0.4274751 ],
[-0.22279713, -0.51723555],
[ 0.67492385, -0.69294472],
[ 0.41086611, 0.26374238]])
</code></pre>
|
python|numpy
| 93
|
8,231
| 70,696,514
|
Add IDs to dataframe with random Noise
|
<p>My initial dataframe looks as follows:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
"id":[1,1,1,1,2,2],
"time": [1,2,3,4,5,6],
"x": [1,2,3,4,9,11 ],
"y": [5,6,7,8,3,2],
})
</code></pre>
<p>So I have two IDs (1 and 2), i.e. two different time series.
Now I want to add some random noise to the x- and y-values for each ID and save them as new IDs (with the same length) in the initial df:</p>
<pre><code># Noise
import numpy as np
noise = np.random.normal(0, 1, n)  # n = number of elements in the noise array
new_signal = original + noise
# https://stackoverflow.com/questions/14058340/adding-noise-to-a-signal-in-python
</code></pre>
<p>So the resulting df would look something like the following (the values are just an example what the resulting output could be):</p>
<pre><code>df = pd.DataFrame({
"id":[1,1,1,1,2,2 ,3,3,3,3, 4,4],
"time": [1,2,3,4,5,6 ,7,8,9,10, 11,12 ],
"x": [1,2,3,4,9,11, 1.0005,2.3256,3.1256,4.5647, 9.6514,11.4567 ],
"y": [5,6,7,8,3,2, 5.0505,6.0276,7.1056,8.5607, 3.6014,2.4567],
})
</code></pre>
<p>As you can see: 2 new IDs (3 and 4) have been added and also the values with noise.</p>
<p>Currently I am trying it with different loops but it seems quite complicated. Any suggestions?</p>
<p>Bonus question: how can I do this not just once, but e.g. 3 times?</p>
|
<p>You can build a new dataframe and concat them:</p>
<pre><code>df1 = pd.concat([df['id'] + df['id'].max(),
df['time'] + df['time'].max(),
df['x'] + np.random.normal(0, 1, len(df)),
df['y'] + np.random.normal(0, 1, len(df))], axis=1) \
           .set_index(df.index + len(df))
out = pd.concat([df, df1])
</code></pre>
<p>Output:</p>
<pre><code>>>> out
id time x y
0 1 1 1.000000 5.000000
1 1 2 2.000000 6.000000
2 1 3 3.000000 7.000000
3 1 4 4.000000 8.000000
4 2 5 9.000000 3.000000
5 2 6 11.000000 2.000000
6    3     7   1.479734  5.720535
7    3     8   0.076273  6.256060
8    3     9   2.856642  6.845974
9    3    10   4.119396  7.738969
10   4    11   9.220569  2.710783
11   4    12  10.451495  1.245976
</code></pre>
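<p>For the bonus question, one sketch is to loop and offset the ids and times by multiples of the original maxima (the offsets guarantee the new ids and times don't collide with existing ones):</p>
<pre><code>k = 3  # number of noisy copies
copies = [df]
for i in range(1, k + 1):
    c = df.copy()
    c['id'] = df['id'] + i * df['id'].max()
    c['time'] = df['time'] + i * df['time'].max()
    c['x'] = df['x'] + np.random.normal(0, 1, len(df))
    c['y'] = df['y'] + np.random.normal(0, 1, len(df))
    copies.append(c)

out = pd.concat(copies, ignore_index=True)
</code></pre>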
|
python|pandas|dataframe|numpy
| 1
|
8,232
| 70,529,611
|
Anytree to Pandas or tuple conversion with node members as indices
|
<p>I'd like to build a pandas dataframe or tuple from an anytree object, where each node has a list attribute of members:</p>
<pre><code>from anytree import Node, RenderTree, find_by_attr
from anytree.exporter import DictExporter
from collections import OrderedDict
import pandas as pd
import numpy as np
tree = Node('T0C0',
n=1000,
tier=0,
members=['A','B','C','D'])
Node('T0C0.T1C0',
parent=find_by_attr(tree, 'T0C0'),
n=400,
tier=1,
members=['B','C'])
Node('T0C0.T1C1',
parent=find_by_attr(tree, 'T0C0'),
n=600,
tier=1,
members=['A','D'])
Node('T0C0.T1C1.T2C0',
parent=find_by_attr(tree, 'T0C0.T1C1'),
n=300,
tier=2,
members=['D'])
Node('T0C0.T1C1.T2C1',
parent=find_by_attr(tree, 'T0C0.T1C1'),
n=300,
tier=2,
members=['A'])
</code></pre>
<p>my goal is to produce a dataframe of end-nodes per member, or, even better, tier membership per column like the following:</p>
<pre><code>pd.DataFrame(data=np.array([['T0C0.T1C1.T2C1','T0C0.T1C0','T0C0.T1C0','T0C0.T1C1.T2C0'],
['T0C0','T0C0','T0C0','T0C0'],
['T0C0.T1C1','T0C0.T1C0','T0C0.T1C0','T0C0.T1C1'],
['T0C0.T1C1.T2C1',None,None,'T0C0.T1C1.T2C0']]
),
index=['A','B','C','D'],columns=['EndCluster','tier0','tier1','tier2'])
</code></pre>
<p>I've tried exporting to ordereddict and to json and building data frames directly from there, but "children" becomes a column in the resulting dataframe, with ordered dict entries. I cannot find a way to unnest. Thank you for any help!</p>
|
<p>The answer turned out easier than I thought.
First grab all the end nodes using anytree's <code>findall()</code></p>
<pre><code>from anytree import findall

endnodes = findall(tree, filter_=lambda node: len(node.children) == 0)
</code></pre>
<p>This returns a list of nodes, easier to work with in this case than anytree's OrderedDict conversion</p>
<p>Finally, populate the dataframe by repeating each node-level attribute <code>len(item.members)</code> times:</p>
<pre><code>members = []
tier = []
endcluster = []
for item in endnodes:
members += item.members
tier += [item.tier] * len(item.members)
endcluster += [item.name] * len(item.members)
endf = pd.DataFrame(index=members)
endf['tier']=tier
endf['endcluster']=endcluster
</code></pre>
|
python|pandas|anytree
| 0
|
8,233
| 70,703,038
|
NoneType Object is not callable - Python/CNN
|
<p>Friends, I am new at this. Can you please help me understand why this code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Conv2DTranspose

#Construct the model
model_simple = Sequential()
model_simple.add(Conv2D(strides = 1, kernel_size = 3, filters = 12, use_bias = True, bias_initializer = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05) , padding = "valid", activation = tf.nn.relu))
model_simple.add(Conv2D(strides = 1, kernel_size = 3, filters = 12, use_bias = True, bias_initializer = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05) , padding = "valid", activation = tf.nn.relu))
model_simple.add(Conv2DTranspose(strides = 1, kernel_size = 3, filters = 12, use_bias = True, bias_initializer = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05) , padding = "valid", activation = tf.nn.relu))
model_simple.add(Conv2DTranspose(strides = 1, kernel_size = 3, filters = 3, use_bias = True, bias_initializer = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05) , padding = "valid", activation = tf.nn.relu))
#Compile the model
model_simple.compile(optimizer = tf.keras.optimizers.Adam(epsilon = 1e-8), loss = tf.losses.mean_squared_error)
</code></pre>
<pre><code>model_simple.fit(imgs_for_input, imgs_for_output, epochs=3, batch_size=32)
</code></pre>
<p>creates this error:</p>
<pre><code>Epoch 1/3
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-33-957812893dc7> in <module>()
----> 1 model_simple.fit(imgs_for_input, imgs_for_output, epochs=3, batch_size=32)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
940 # In this case we have created variables on the first call, so we run the
941 # defunned version which is guaranteed to never create variables.
--> 942 return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
943 elif self._stateful_fn is not None:
944 # Release the lock early so that multiple threads can perform the call
TypeError: 'NoneType' object is not callable
</code></pre>
<p>I appreciate your time!</p>
|
<p>you must build your model before training. E.g.</p>
<pre><code>batch = 32
height = 32
width = 32
channels = 3
inputShape = (batch, height, width, channels)
model_simple.build(inputShape)
model_simple.fit(imgs_for_input, imgs_for_output, epochs=3)
</code></pre>
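<p>Alternatively, giving the first layer an <code>input_shape</code> (reusing the same assumed height/width/channels as above) lets Keras build the model implicitly on the first <code>fit</code> call:</p>
<pre><code>model_simple.add(Conv2D(strides=1, kernel_size=3, filters=12, activation=tf.nn.relu,
                        padding="valid", input_shape=(height, width, channels)))
</code></pre>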
|
python|tensorflow|keras|deep-learning|neural-network
| 0
|
8,234
| 70,566,442
|
Exchange certain values in different size vectors
|
<p>I've got a problem where I can't exchange values in an array. I have 2 arrays: one filled with <em>zeros</em> and <em>ones</em>, for example: <code>disp = [[0.], [0.], [0.], [1.], [1.], [1.], [1.], [0.], [0.], [0.]]</code>, and another one filled with the values I would like to insert at the places where the <em>ones</em> are in <strong>disp</strong>, for example: <code>to_replace_at_1 = [[17.17], [0.], [-7.0] , [7.0]]</code>.</p>
<p>The result should look like this: <code>disp = [[0.], [0.], [0.], [17.17], [0.], [-7.0], [7.0], [0.], [0.], [0.]]</code>.</p>
<p>I tried this:</p>
<pre><code> for i in range(len(disp)):
vector = disp[i]
for value in vector:
if value == 1.0:
for e in to_replace_at_1:
disp[i] = to_replace_at_1[e]
</code></pre>
<p>But it ends up crashing. What should I try? How do I solve this?</p>
|
<p>One way to do it:</p>
<pre><code>iterator = iter(to_replace_at_1)
disp = [x if x[0] != 1 else next(iterator) for x in disp]
</code></pre>
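<p>If <code>disp</code> and the replacement values are numpy arrays (the tags suggest numpy), an equivalent vectorised sketch:</p>
<pre><code>import numpy as np

disp = np.array(disp)                 # shape (10, 1)
repl = np.array(to_replace_at_1)      # shape (4, 1)
disp[disp[:, 0] == 1] = repl          # overwrite the rows that hold a 1
</code></pre>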
|
python|numpy|replace
| 1
|
8,235
| 70,643,623
|
Python dataframe Renaming two columns into one
|
<p>The dataframe below has two names per column; its columns are of type <code>pandas.core.indexes.multi.MultiIndex</code>.
This is what <code>list(df.columns)</code> looks like:</p>
<pre><code>[('caption', ''),
('', ''),
('tackles', 'TOT'),
('tackles', 'SOLO'),
('tackles', 'SACKS'),
('tackles', 'TFL'),
('misc', 'PD'),
('misc', 'QB HTS'),
('misc', 'TD'),
('misc', 'Unnamed: 8_level_1'),
('gameid', '')]
</code></pre>
<p>Maybe showing the <code>info()</code> output also helps, in case the list above is not clear:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 24 entries, 0 to 23
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 (caption, ) 24 non-null object
1 (, ) 24 non-null object
2 (tackles, TOT) 24 non-null int64
3 (tackles, SOLO) 24 non-null int64
4 (tackles, SACKS) 24 non-null int64
5 (tackles, TFL) 24 non-null int64
6 (misc, PD) 24 non-null int64
7 (misc, QB HTS) 24 non-null int64
8 (misc, TD) 24 non-null int64
9 (misc, Unnamed: 8_level_1) 0 non-null float64
10 (gameid, ) 24 non-null object
dtypes: float64(1), int64(7), object(3)
memory usage: 2.2+ KB
</code></pre>
<p>How would I change that to 1 name per column with no spaces</p>
<pre><code>['caption_',
'_',
'tackles_TOT',
'tackles_SOLO',
'tackles_SACKS',
'tackles_TFL',
'misc_PD',
'misc_QBHTS',
'misc_TD',
'misc_Unnamed:8_level_1',
'gameid_']
</code></pre>
|
<p>IIUC:</p>
<pre><code>df.columns = df.columns.to_flat_index().map('_'.join)
print(df.columns)
# Output
Index(['caption_', '_', 'tackles_TOT', 'tackles_SOLO', 'tackles_SACKS',
'tackles_TFL', 'misc_PD', 'misc_QB HTS', 'misc_TD',
'misc_Unnamed: 8_level_1', 'gameid_'],
dtype='object')
</code></pre>
<p>If your columns are already a flat <code>Index</code> of tuples rather than a <code>MultiIndex</code>, <code>to_flat_index()</code> is simply a no-op, so the same code works either way.</p>
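<p>If you also want to drop the spaces inside the names (your expected output has e.g. <code>misc_QBHTS</code>), chain a string replace:</p>
<pre><code>df.columns = df.columns.to_flat_index().map('_'.join).str.replace(' ', '')
</code></pre>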
|
python|python-3.x|pandas
| 2
|
8,236
| 70,716,325
|
How do I loop variable names based on values in a list
|
<p>I have a list of five heights and I want to loop over it to create five separate dataframes indexed by these numbers. For each height this means creating a column-name list based on that height, reading a csv file and assigning the colNames to it, and finally dropping the unused columns. I have multiple blocks of the same code to do this, but I want to learn how to do it with a loop so I can clean up my script.</p>
<p>I get a NameError: name 'colNames' is not defined.</p>
<pre><code> i = 0
height = ['0', '5', '15', '25', '50']
while i < len(height):
colNames["height{}".format(i)] = ["A", "B_%s" % height, "C", "D"]
df["height{}".format(i)] = pd.read_csv("test%s.csv" % height, names = colNames["height{}".format(i)])
df["height{}".format(i)].drop(labels = ["A", "C"],axis = 1, inplace = True)
i += 1
</code></pre>
<p>Expected results</p>
<pre><code>colNames0 = ["A", "B_0", "C", "D"]
df0 = pd.read_csv("test0.csv", names = colNames0)
df0.drop(labels = ["A", "C"], axis = 1, inplace = True)
...
colNames50 = ["A", "B_50", "C", "D"]
df50 = pd.read_csv("test50.csv", names = colNames50)
df50.drop(labels = ["A", "C"], axis = 1, inplace = True)
</code></pre>
|
<p>Trying to name separate DataFrames in this way is a bit unwieldy in Python, but here is how I might go about writing a loop for the problem you pose:</p>
<pre class="lang-py prettyprint-override"><code>dflist = []
for num, height in enumerate(['0', '5', '15', '25', '50']):
dflist.append(pd.read_csv('test{}.csv'.format(height), names=['A', 'B{}'.format(height), 'C', 'D'])[['B{}'.format(height), 'D']])
</code></pre>
<p>You would not have DataFrames named df0, df5, ..., but will rather have a list of DataFrames. Unless there is a reason to save the various column names, you can just name your columns directly in the call to pd.read_csv. Additionally, selecting only the columns you want to keep at the end of the line is a little more streamlined than dropping the others in a separate command. As a side note,</p>
<pre class="lang-py prettyprint-override"><code>df['newname'] = value
</code></pre>
<p>is a way to make a new column in an existing DataFrame, not a way to define a DataFrame.</p>
<p>The reason you are getting a NameError is because the syntax</p>
<pre class="lang-py prettyprint-override"><code>colNames[x] = value
</code></pre>
<p>assumes you are trying to assign the value to a pre-existing object named "colNames".</p>
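<p>If you prefer to look the frames up by height instead of position, a dict comprehension (same assumed <code>test{height}.csv</code> naming) works too:</p>
<pre><code>heights = ['0', '5', '15', '25', '50']
dfs = {h: pd.read_csv('test{}.csv'.format(h),
                      names=['A', 'B{}'.format(h), 'C', 'D'])[['B{}'.format(h), 'D']]
       for h in heights}
dfs['50']  # the frame built from test50.csv
</code></pre>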
|
python|pandas|dataframe|while-loop
| 2
|
8,237
| 70,550,806
|
What is a good way to implement a function to make an array with inputted arguments?
|
<p>I would like to make a function that accepts any number of arguments and returns an array using those arguments as parameters. Here's the code example - I would like to do something like this, except it should work:</p>
<pre><code>import numpy as np
def getgoodarray(*args):
goodarray = np.round(np.arange(args)*10)
return goodarray
B = getgoodarray(2, 3) # this should make a 2d array, with two 1dimensional arrays, each of 3 elements
</code></pre>
<p>Do you have any idea on how this function could be implemented properly?</p>
|
<p>Use any method that builds an array of the given shape and fills it with zeros, ones or random values (the sample outputs below come from the random variant):</p>
<pre><code>def getgoodarray(*args):
    return np.random.randint(0, 10, args)
    # return np.ones(args)
    # return np.zeros(args)
</code></pre>
<pre><code>x = getgoodarray(2, 3)
[[7 7 1]
[8 2 5]]
</code></pre>
<pre><code>x = getgoodarray(2, 2, 2, 3)
[[[[6 8 3]
[7 0 0]]
[[5 8 9]
[6 8 5]]]
[[[6 0 5]
[8 7 0]]
[[3 4 3]
[0 2 4]]]]
</code></pre>
|
python|arrays|numpy
| 2
|
8,238
| 70,697,704
|
How to convert dataframe into numpy array?
|
<p>So I am currently building an MLP (multi-layer perceptron) neural network to classify sea turtles by their beach-to-sea speed measured in seconds, and the data looks somewhat like this.</p>
<p><a href="https://i.stack.imgur.com/UQgxe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UQgxe.png" alt="enter image description here" /></a></p>
<p>Now on my Jupyter Notebook I type the following</p>
<pre><code>data = pd.read_csv("SeaTurtles.csv")
data
</code></pre>
<p>It shows the data in dataframe</p>
<p><a href="https://i.stack.imgur.com/UgHFP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UgHFP.png" alt="enter image description here" /></a></p>
<p>What I wanted to do is to separate these two groups which are group A for "Training" data and group B for "Testing" data. I wanted to convert the dataframe into a NumPy array so that it would make it easier for me to classify them into the SpeciesCode.
So I type in:</p>
<pre><code>newdf=data.drop(['Group1','Group 2'], axis = 1)
newdf
</code></pre>
<p><a href="https://i.stack.imgur.com/Oi2Ou.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oi2Ou.png" alt="enter image description here" /></a></p>
<p>I need to split it into two test group A and B so I typed in</p>
<pre><code>groupA=newdf.loc[newdf['Test*'] == 'A']
groupB=newdf.loc[newdf['Test*'] == 'B']
</code></pre>
<p><a href="https://i.stack.imgur.com/HYY6Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HYY6Q.png" alt="enter image description here" /></a></p>
<p>Group B
<a href="https://i.stack.imgur.com/7drDs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7drDs.png" alt="enter image description here" /></a></p>
<p>What I did was to convert these into numpy array with .to_numpy()</p>
<pre><code>groupA = groupA.to_numpy()
groupA
</code></pre>
<p>and it returns <code>AttributeError: 'numpy.ndarray' object has no attribute 'to_numpy'</code>:</p>
<a href="https://i.stack.imgur.com/tsokJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsokJ.png" alt="enter image description here" /></a></p>
<p>My question is did I do something wrong or is there another way for me to convert this dataframe into numpy array so that I can start training the data? Thank you in advance.</p>
|
<p>Pandas dataframe is a two-dimensional data structure to store and retrieve data in rows and columns format.</p>
<p>You can convert pandas dataframe to numpy array using the <code>df.to_numpy()</code> method.</p>
<p>You can use the below code snippet to convert pandas dataframe into numpy array.</p>
<pre><code>numpy_array = df.to_numpy()
print(type(numpy_array))
</code></pre>
<p>Output:</p>
<pre><code><class 'numpy.ndarray'>
</code></pre>
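<p>Also note the <code>AttributeError</code> from your screenshot: it means <code>groupA</code> was already a numpy array when that cell ran (for example because the conversion cell was executed twice). A defensive sketch that is safe to re-run:</p>
<pre><code>if isinstance(groupA, pd.DataFrame):
    groupA = groupA.to_numpy()
</code></pre>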
|
python|arrays|pandas|numpy
| 0
|
8,239
| 70,589,997
|
Keras loss: 0.0000e+00 and accuracy stays constant
|
<p>I have 101 folders from 0-100 containing synthetic training images.
This is my code:</p>
<pre><code>dataset = tf.keras.utils.image_dataset_from_directory(
'Pictures/synthdataset5', labels='inferred', label_mode='int', class_names=None, color_mode='rgb', batch_size=32, image_size=(128,128), shuffle=True, seed=None, validation_split=None, subset=None,interpolation='bilinear', follow_links=False,crop_to_aspect_ratio=False
)
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
model = Sequential()
model.add(Conv2D(32, kernel_size=5, activation='relu', input_shape=(128,128,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=5, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(dataset,epochs=75)
</code></pre>
<p>And I always get the same result for every epoch:</p>
<pre><code>Epoch 1/75
469/469 [==============================] - 632s 1s/step - loss: 0.0000e+00 - accuracy: 0.0098
</code></pre>
<p>What's wrong???</p>
|
<p>So turns out your loss might be the problem after all.
If you use SparseCategoricalCrossentropy instead as loss it should work.</p>
<pre><code>model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
</code></pre>
<p>After this you should adjust the last layer to output one value per class, with no activation, since <code>from_logits=True</code> expects raw logits:</p>
<pre><code>model.add(Dense(101))
</code></pre>
<p>Also don't forget to import <code>import tensorflow as tf</code>
Let me know if this solves the issue.</p>
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
8,240
| 42,664,045
|
Creating a 3d matrix with pandas panel
|
<p>My goal is to create a pandas panel, I currently have a csv, with the sample as follows:</p>
<pre><code>Year From country To country Points
2005 Albania Albania 0
2005 Albania Bosnia & Herzegovina 0
2005 Albania Croatia 2
2005 Albania Cyprus 7
2005 Albania Denmark 0
</code></pre>
<p>I want to make a 3D array where the first axis is the range of years (which means searching through the csv to find where 2005 turns to 2006, etc.), the next axis is the "from" country, and the last axis is the "to" country, with the points as values... if that makes sense? Is the pandas Panel the tool I should be using here, and how would I extract each year from the big dataframe to create a new dataframe for, presumably, each of the years (2005 - 2016)?</p>
<p>EDIT:
I found this picture, which is exactly what I'm trying to do for EACH year instead of the average of all the years. So it'd be like have one of those graphs for each year, 2005 - 2016
<a href="https://i.stack.imgur.com/WlD5u.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WlD5u.jpg" alt="matrix"></a></p>
|
<p>Format your dataframe where the index is a multiindex with two levels. Using the method <code>to_panel</code> will assume the <code>Items</code> is in the columns, <code>Major_axis</code> is in the first level of the index, and <code>Minor_axis</code> is in the second level of the index.</p>
<pre><code>df.set_index(['From country', 'To country', 'Year']).Points.unstack().to_panel()
<class 'pandas.core.panel.Panel'>
Dimensions: 1 (items) x 1 (major_axis) x 5 (minor_axis)
Items axis: 2005 to 2005
Major_axis axis: Albania to Albania
Minor_axis axis: Albania to Denmark
</code></pre>
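<p>Note that <code>Panel</code> was later deprecated and removed from pandas; on modern versions the same three-axis lookup can simply stay a MultiIndexed Series (a sketch using the sample columns):</p>
<pre><code>cube = df.set_index(['Year', 'From country', 'To country'])['Points']
cube.loc[(2005, 'Albania', 'Croatia')]  # -> 2
</code></pre>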
|
python|csv|pandas
| 0
|
8,241
| 26,922,284
|
Filling gaps for cumulative sum with Pandas
|
<p>I'm trying to calculate the inventory of stocks from a table in monthly buckets in Pandas. This is the table:</p>
<pre><code>Goods | Incoming | Date
-------+------------+-----------
'a' | 10 | 2014-01-10
 'a'   | 0          | 2014-01-10
'b' | 30 | 2014-01-02
'b' | 40 | 2014-05-13
'a' | 20 | 2014-06-30
'c' | 10 | 2014-02-10
'c' | 50 | 2014-05-10
'b' | 70 | 2014-03-10
'a' | 10 | 2014-02-10
</code></pre>
<p>This is my code so far:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'goods': ['a', 'a', 'b', 'b', 'a', 'c', 'c', 'b', 'a'],
'incoming': [0, 20, 30, 40, 20, 10, 50, 70, 10],
'date': ['2014-01-10', '2014-02-01', '2014-01-02', '2014-05-13', '2014-06-30', '2014-02-10', '2014-05-10', '2014-03-10', '2014-02-10']})
df['date'] = pd.to_datetime(df['date'])
# we don't care about year in this example
df['month'] = df['date'].map(lambda x: x.month)
dfg = df.groupby(['goods', 'month'])['incoming'].sum()
# flatten multi-index
dfg = dfg.reset_index ()
dfg['level'] = dfg.groupby(['goods'])['incoming'].cumsum()
dfg
</code></pre>
<p>which returns</p>
<pre><code> goods month incoming level
0 a 1 0 0
1 a 2 30 30
2 a 6 20 50
3 b 1 30 30
4 b 3 70 100
5 b 5 40 140
6 c 2 10 10
7 c 5 50 60
</code></pre>
<p>While this is good, the visualisation method that I use requires (1) the same number of data points per group ('goods'), (2) the same extent of the time-series (i.e. earliest/latest month is the same for all time series) and (3) that there are no "gaps" in any time series (a month between min(month) and max(month) with a data point).</p>
<p>How can I do this with Pandas? Note, even thought this structure may be a bit inefficient, I'd like to stick with the general flow of things. Perhaps it's possible to insert some "post-processing" to fill in the gaps.</p>
<p><strong>Update</strong></p>
<p>To summarise the response below, I chose to do this:</p>
<pre><code>piv = dfg.pivot_table(["level"], "month", "goods")
piv = piv.reindex(np.arange(piv.index[0], piv.index[-1] + 1))
piv = piv.ffill(axis=0)
piv = piv.fillna(0)
piv.index.name = 'month'
</code></pre>
<p>I also added</p>
<pre><code>piv = piv.stack()
print piv.reset_index()
</code></pre>
<p>to get a table similar to the input table:</p>
<pre><code> month goods level
0 1 a 0
1 1 b 30
2 1 c 0
3 2 a 30
4 2 b 30
5 2 c 10
6 3 a 30
7 3 b 100
8 3 c 10
9 4 a 30
10 4 b 100
11 4 c 10
12 5 a 30
13 5 b 140
14 5 c 60
15 6 a 50
16 6 b 140
17 6 c 60
</code></pre>
|
<p>I think you want to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tools.pivot.pivot_table.html" rel="nofollow"><code>pivot_table</code></a>:</p>
<pre><code>In [11]: df.pivot_table(values="incoming", index="month", columns="goods", aggfunc="sum")
Out[11]:
goods a b c
month
1 0 30 NaN
2 30 NaN 10
3 NaN 70 NaN
5 NaN 40 50
6 20 NaN NaN
</code></pre>
<p>To get the filled in months, you can reindex (this feels a little hacky, there may be a neater way):</p>
<pre><code>In [12]: res.reindex(np.arange(res.index[0], res.index[-1] + 1))
Out[12]:
goods a b c
1 0 30 NaN
2 30 NaN 10
3 NaN 70 NaN
4 NaN NaN NaN
5 NaN 40 50
6 20 NaN NaN
</code></pre>
<hr>
<p>One issue here is that month is independent of year, in may be preferable to have a period index:</p>
<pre><code>In [21]: df.pivot_table(values="incoming", index=pd.DatetimeIndex(df.date).to_period("M"), columns="goods", aggfunc="sum")
Out[21]:
goods a b c
2014-01 0 30 NaN
2014-02 30 NaN 10
2014-03 NaN 70 NaN
2014-05 NaN 40 50
2014-06 20 NaN NaN
</code></pre>
<p>and then you can reindex by the period range:</p>
<pre><code>In [22]: res2.reindex(pd.period_range(res2.index[0], res2.index[-1], freq="M"))
Out[22]:
goods a b c
2014-01 0 30 NaN
2014-02 30 NaN 10
2014-03 NaN 70 NaN
2014-04 NaN NaN NaN
2014-05 NaN 40 50
2014-06 20 NaN NaN
</code></pre>
<hr>
<p>Which is to say, you can do the same with your <code>dfg</code>:</p>
<pre><code>In [31]: dfg.pivot_table(["incoming", "level"], "month", "goods")
Out[31]:
incoming level
goods a b c a b c
month
1 0 30 NaN 0 30 NaN
2 30 NaN 10 30 NaN 10
3 NaN 70 NaN NaN 100 NaN
5 NaN 40 50 NaN 140 60
6 20 NaN NaN 50 NaN NaN
</code></pre>
<p>and reindex.</p>
|
python|pandas|time-series|cumsum
| 2
|
8,242
| 39,353,758
|
pandas pivot table of sales
|
<p>I have a list like below:</p>
<pre><code> saleid upc
0 155_02127453_20090616_135212_0021 02317639000000
1 155_02127453_20090616_135212_0021 00000000000888
2 155_01605733_20090616_135221_0016 00264850000000
3 155_01072401_20090616_135224_0010 02316877000000
4 155_01072401_20090616_135224_0010 05051969277205
</code></pre>
<p>It represents one customer (saleid) and the items he/she got (upc of the item)</p>
<p>What I want is to pivot this table to a form like below:</p>
<pre><code> 02317639000000 00000000000888 00264850000000 02316877000000
155_02127453_20090616_135212_0021 1 1 0 0
155_01605733_20090616_135221_0016 0 0 1 0
155_01072401_20090616_135224_0010 0 0 0 1
</code></pre>
<p>So, columns are unique UPCs and rows are unique SALEIDs.</p>
<p>I read it like this:</p>
<pre><code>tbl = pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str})
tbl.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18570726 entries, 0 to 18570725
Data columns (total 2 columns):
saleid object
upc object
dtypes: object(2)
memory usage: 283.4+ MB
</code></pre>
<p>I have done some steps but not the correct ones!</p>
<pre><code>tbl.pivot_table(columns=['upc'],aggfunc=pd.Series.nunique)
upc 00000000000000 00000000000109 00000000000116 00000000000123 00000000000130 00000000000147 00000000000154 00000000000161 00000000000178 00000000000185 ...
saleid 44950 287 26180 4881 1839 623 3347 7
</code></pre>
<p>EDIT:
I'm using the variation of the solution below:</p>
<pre><code>chunksize = 1000000
f = 0
for chunk in pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str}, chunksize=chunksize):
print(f)
t = pd.crosstab(chunk.saleid, chunk.upc)
t.head(3)
t.to_csv('tbl_sales_index_converted_' + str(f) + '.csv.bz2',header=True,sep=';',compression='bz2')
f = f+1
</code></pre>
<p>The original file is far too big to fit into memory after conversion.
The above solution has the problem that not all the output files have the same columns, as I'm reading chunks from the original file.</p>
<p>Question 2: is there a way to force all chunks to have the same columns?</p>
|
<p><strong><em>Option 1</em></strong></p>
<pre><code>df.groupby(['saleid', 'upc']).size().unstack(fill_value=0)
</code></pre>
<p><a href="https://i.stack.imgur.com/6yvfj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6yvfj.png" alt="enter image description here"></a></p>
<p><strong><em>Option 2</em></strong></p>
<pre><code>pd.crosstab(df.saleid, df.upc)
</code></pre>
<p><a href="https://i.stack.imgur.com/6yvfj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6yvfj.png" alt="enter image description here"></a></p>
<h3>Setup</h3>
<pre><code>from StringIO import StringIO
import pandas as pd
text = """ saleid upc
0 155_02127453_20090616_135212_0021 02317639000000
1 155_02127453_20090616_135212_0021 00000000000888
2 155_01605733_20090616_135221_0016 00264850000000
3 155_01072401_20090616_135224_0010 02316877000000
4 155_01072401_20090616_135224_0010 05051969277205"""
df = pd.read_csv(StringIO(text), delim_whitespace=True, dtype=str)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/qM9sA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qM9sA.png" alt="enter image description here"></a></p>
|
python|csv|pandas|numpy
| 3
|
8,243
| 39,255,211
|
TensorFlow iOS memory warnings
|
<p>We are building an iOS app to perform image classification using the TensorFlow library.</p>
<p>Using our machine learning model (91MB, 400 classes) and the TensorFlow 'simple' example, we get memory warnings on any iOS device with 1GB of RAM. Devices with 2GB of RAM do not show any warnings, while devices with less than 1GB completely run out of memory and crash the app.</p>
<p>We are using the latest TensorFlow code from the master branch that includes <a href="https://github.com/tensorflow/tensorflow/commit/459c2fed498530b794c4871892fd68d1e6834ac6" rel="nofollow">this iOS memory performance commit</a>, which we thought might help but didn't.</p>
<p>We have also tried setting various GPU options on our TF session object, including <code>set_allow_growth(true)</code> and <code>set_per_process_gpu_memory_fraction()</code>.</p>
<p>Our only changes to the TF 'simple' example code is a <code>wanted_width</code> and <code>wanted_height</code> of 299, and an <code>input_mean</code> and <code>input_std</code> of 128.</p>
<p>Has anyone else run into this? Is our model simply too big?</p>
|
<p>You can use memory mapping for the weights; have you tried that? TensorFlow provides documentation for it in its mobile guide. You can also round the weight values to fewer decimal places (quantization) to shrink the model further.</p>
|
ios|memory|machine-learning|tensorflow
| 0
|
8,244
| 39,278,163
|
How to use validation monitor in Softmax classifier in tensorflow
|
<p>I just edited <a href="https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/mnist_softmax.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/mnist_softmax.py</a> to enable logging by using a validation monitor:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
sess = tf.InteractiveSession()
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
validation_metrics = {"accuracy": tf.contrib.metrics.streaming_accuracy,
"precision": tf.contrib.metrics.streaming_precision,
"recall": tf.contrib.metrics.streaming_recall}
validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
mnist.test.images,
mnist.test.labels,
every_n_steps=50, metrics=validation_metrics,
early_stopping_metric="loss",
early_stopping_metric_minimize=True,
early_stopping_rounds=200)
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Train
tf.initialize_all_variables().run()
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
train_step.run({x: batch_xs, y_: batch_ys})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
</code></pre>
<p>But I am confused about how to set the <strong>validation_monitor</strong> in this program. I have learned that in <strong>DNNClassifier</strong> the validation_monitor is used in the following way:</p>
<pre><code># Fit model.
classifier.fit(x=training_set.data,
y=training_set.target,
steps=2000, monitors=[validation_monitor])
</code></pre>
<p>So, how can I use the validation_monitor in the softmax classifier?</p>
|
<p>I don't think there's an easy way to do that, since <code>ValidationMonitor</code> is a part of <code>tf.contrib</code>, i.e. contribution code that is not supported by the TensorFlow team. So unless you are using some higher-level API from <code>tf.contrib</code> (like <code>DNNClassifier</code>), you might not be able to simply pass a <code>ValidationMonitor</code> instance to an optimizer's <code>minimize</code> method.</p>
<p>I believe your options are:</p>
<ul>
<li>Check how <code>DNNClassifier</code>'s <code>fit</code> method is implemented and utilise the same approach by manually handling a <code>ValidationMonitor</code> instance in your graph and session.</li>
<li>Implement your own validation routine for logging and/or early stopping, or whatever it is you intend to use <code>ValidationMonitor</code> for (see the sketch after this list).</li>
</ul>
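<p>A minimal sketch of the second option, built on the question's own graph (the step counts and <code>patience</code> value are assumptions that mirror <code>every_n_steps=50</code> and <code>early_stopping_rounds=200</code>):</p>
<pre><code>best_loss, patience, bad_steps = float('inf'), 200, 0
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})
    if i % 50 == 0:
        # evaluate validation loss every 50 steps
        val_loss = cross_entropy.eval({x: mnist.test.images,
                                       y_: mnist.test.labels})
        print('step %d, validation loss %f' % (i, val_loss))
        if val_loss < best_loss:
            best_loss, bad_steps = val_loss, 0
        else:
            bad_steps += 50
            if bad_steps >= patience:  # manual early stopping
                break
</code></pre>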
|
machine-learning|tensorflow|computer-vision|deep-learning|softmax
| 1
|
8,245
| 39,083,686
|
Printing Beta(Coef) alone from statsmodels OLS regression
|
<p>I am running the linear regression function on a time series data of two stocks using statsmodels. While printing out the results using "summary", my code works fine. However I want to print only the beta(coef) of the two stocks. I tried using "params" instead of "summary" in the lines, but I keep getting the error message:</p>
<p>"TypeError: 'numpy.ndarray' object is not callable"</p>
<p>I understand this is a very basic mistake, but i'm very new to coding. Any advice would be appreciated.</p>
<p>Below is my code:</p>
<pre><code>import pandas as pd
import statsmodels.api as sm
from scipy.stats.mstats import zscore
df = pd.read_excel('C:\\Users\Sai\Desktop\DF.xlsx')
x = df[['Stock A']]
y = df[['Stock B']]
model = sm.OLS(zscore(x), zscore(y))
results = model.fit()
print(results.params())
</code></pre>
<p>This is the error message I keep getting:</p>
<pre><code>"C:\Program Files\Python35-32\python.exe" C:/Users/Sai/Desktop/Quantstart.py
Traceback (most recent call last):
File "C:/Users/Sai/Desktop/Quantstart.py", line 13, in <module>
print(results.params())
TypeError: 'numpy.ndarray' object is not callable
</code></pre>
|
<p>Use the below code</p>
<pre><code>results.params
</code></pre>
<p>instead of</p>
<pre><code>results.params()
</code></pre>
<p>and it will work properly. <code>params</code> is an attribute holding a NumPy array of the fitted coefficients, not a method; appending <code>()</code> tries to call that array, which is what raises the <code>TypeError</code>.</p>
|
python-3.x|numpy|time-series|linear-regression|statsmodels
| 2
|
8,246
| 19,666,029
|
Efficient way to decompress and multiply sparse arrays in python
|
<p>In a database I have a compressed frequency array. The first value represents the full array index, and the second value represents the frequency. This is compressed to only non-0 values because it is pretty sparse - less than 5% non-0's. I am trying to decompress the array, and then I need the dot product of this array with an array of weights to get a total weight. This is very inefficient with larger arrays. Does anyone have a more efficient way of doing this? For example, should I be using scipy.sparse and just leave the compressedfreqs array as-is? Or maybe is there a more efficient list comprehension I should be doing instead of looping through each item?</p>
<p>Here is a smaller example of what I am doing:</p>
<pre><code>import numpy as np
compressedfreqs = [(1,4),(3,2),(9,8)]
weights = np.array([4,4,4,3,3,3,2,2,2,1])
freqs = np.array([0] * 10)
for item in compressedfreqs:
freqs[item[0]] = item[1]
totalweight = np.dot(freqs,weights)
print totalweight
</code></pre>
|
<p>You could use <code>scipy.sparse</code> to handle all that for you:</p>
<pre><code>>>> import scipy.sparse as sps
>>> cfq = np.array([(1,4),(3,2),(9,8)])
>>> cfq_sps = sps.coo_matrix((cfq[:,1], ([0]*len(cfq), cfq[:,0])))
>>> cfq_sps
<1x10 sparse matrix of type '<type 'numpy.int32'>'
with 3 stored elements in COOrdinate format>
>>> cfq_sps.A # convert to dense array
array([[0, 4, 0, 2, 0, 0, 0, 0, 0, 8]])
>>> weights = np.array([4,4,4,3,3,3,2,2,2,1])
>>> cfq_sps.dot(weights)
array([30])
</code></pre>
<p>If you prefer to not use the sparse module, you can get it to work, albeit probably slower, with a generator expression:</p>
<pre><code>>>> sum(k*weights[j] for j,k in cfq)
30
</code></pre>
|
python|arrays|numpy|scipy|sparse-matrix
| 2
|
8,247
| 19,719,746
|
How can one efficiently remove a range of rows from a large numpy array?
|
<p>Given a large 2d numpy array, I would like to remove a range of rows, say rows <code>10000:10010</code> efficiently. I have to do this multiple times with different ranges, so I would like to also make it parallelizable.</p>
<p>Using something like <code>numpy.delete()</code> is not efficient, since it needs to copy the array, taking too much time and memory. Ideally I would want to do something like create a view, but I am not sure how I could do this in this case. A masked array is also not an option since the downstream operations are not supported on masked arrays.</p>
<p>Any ideas?</p>
|
<p>Because of the strided data structure that defines a numpy array, what you want will not be possible without using a masked array. Your best option might be to use a masked array (or perhaps your own boolean array) to mask the deleted rows, and then do a single real <code>delete</code> operation of all the rows to be deleted before passing it downstream.</p>
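<p>A minimal sketch of that idea (the array shape and the deleted ranges are assumptions):</p>
<pre><code>import numpy as np

a = np.random.rand(100000, 10)           # stand-in for the large array
keep = np.ones(a.shape[0], dtype=bool)   # True means the row survives
for start, stop in [(10000, 10010), (20000, 20050)]:  # collected ranges
    keep[start:stop] = False
result = a[keep]   # one real copy, paid once before the downstream work
</code></pre>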
|
python|numpy
| 3
|
8,248
| 19,761,140
|
Calling Functions with Multiple Arguments when using Groupby
|
<p>When writing functions to be used with groupby.apply or groupby.transform in pandas if the functions have multiple arguments, then when calling the function as part of groupby the arguments follow a comma rather than in parentheses. An example would be:</p>
<pre><code>def Transfunc(df, arg1, arg2, arg2):
return something
GroupedData.transform(Transfunc, arg1, arg2, arg3)
</code></pre>
<p>Where the df argument is passed automatically as the first argument.</p>
<p>However, the same syntax does not seem to be possible when using a function to group the data. Take the following example: </p>
<pre><code>people = DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
people.ix[2:3, ['b', 'c']] = NA
def MeanPosition(Ind, df, Column):
if df[Column][Ind] >= np.mean(df[Column]):
return 'Greater Group'
else:
return 'Lesser Group'
# This function compares each data point in column 'a' to the mean of column 'a' and return a group name based on whether it is greater than or less than the mean
people.groupby(lambda x: MeanPosition(x, people, 'a')).mean()
</code></pre>
<p>The above works just fine, but I can't understand why I have to wrap the function in a lambda. Based upon the syntax used with transform and apply it seems to me that the following should work just fine:</p>
<pre><code>people.groupby(MeanPosition, people, 'a').mean()
</code></pre>
<p>Can anyone tell me why, or how I can call the function without wrapping it in a lambda?</p>
<p>Thanks</p>
<p>EDIT: I do not think it is possible to group the data by passing a function as the key without wrapping that function in a lambda. One possible workaround is to rather than passing a function as the key, pass an array that has been created by a function. This would work in the following manner:</p>
<pre><code>def MeanPositionList(df, Column):
return ['Greater Group' if df[Column][row] >= np.mean(df[Column]) else 'Lesser Group' for row in df.index]
Grouped = people.groupby(np.array(MeanPositionList(people, 'a')))
Grouped.mean()
</code></pre>
<p>But then of course it could be better just to cut out the middle man function all together and simply use an array with list comprhension....</p>
|
<p>Passing arguments to <code>apply</code> just happens to work, because <code>apply</code> passes on all arguments to the target function.</p>
<p>However, <code>groupby</code> takes multiple arguments, see <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.groupby.html?highlight=groupby#pandas.DataFrame.groupby" rel="nofollow">here</a>, so it's not possible to differentiate between arguments; passing a lambda / named function is more explicit and the way to go.</p>
<p>Here is how to do what I think you want (slightly modified as you have all distinct groups in your example)</p>
<pre><code>In [22]: def f(x):
....: result = Series('Greater',index=x.index)
....: result[x<x.mean()] = 'Lesser'
....: return result
....:
In [25]: df = DataFrame(np.random.randn(5, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['Joe', 'Joe', 'Wes', 'Wes', 'Travis'])
In [26]: df
Out[26]:
a b c d e
Joe -0.293926 1.006531 0.289749 -0.186993 -0.009843
Joe -0.228721 -0.071503 0.293486 1.126972 -0.808444
Wes 0.022887 -1.813960 1.195457 0.216040 0.287745
Wes -1.520738 -0.303487 0.484829 1.644879 1.253210
Travis -0.061281 -0.517140 0.504645 -1.844633 0.683103
In [27]: df.groupby(df.index.values).transform(f)
Out[27]:
a b c d e
Joe Lesser Greater Lesser Lesser Greater
Joe Greater Lesser Greater Greater Lesser
Travis Greater Greater Greater Greater Greater
Wes Greater Lesser Greater Lesser Lesser
Wes Lesser Greater Lesser Greater Greater
</code></pre>
|
python|lambda|pandas
| 2
|
8,249
| 12,950,024
|
Add a column with a groupby on a hierarchical dataframe
|
<p>I have a dataframe structured like this:</p>
<pre><code>First A B
Second bar baz foo bar baz foo
Third cat dog cat dog cat dog cat dog cat dog cat dog
0 3 8 7 7 4 7 5 3 2 2 6 2
1 8 6 5 7 8 7 1 8 6 0 3 9
2 9 2 2 9 7 3 1 8 4 1 0 8
3 3 6 0 6 3 2 2 6 2 4 6 9
4 7 6 4 3 1 5 0 4 8 4 8 1
</code></pre>
<p>So there are three column levels. I want to add a new column on the second level where for each of the third levels a computation is performed, for example 'new' = 'foo' + 'bar'. So the resulting dataframe would look like:</p>
<pre><code>First A B
Second bar baz foo new bar baz foo new
Third cat dog cat dog cat dog cat dog cat dog cat dog cat dog cat dog
0 3 8 7 7 4 7 7 15 5 3 2 2 6 2 11 5
1 8 6 5 7 8 7 16 13 1 8 6 0 3 9 4 17
2 9 2 2 9 7 3 16 5 1 8 4 1 0 8 1 16
3 3 6 0 6 3 2 6 8 2 6 2 4 6 9 8 15
4 7 6 4 3 1 5 8 11 0 4 8 4 8 1 8 5
</code></pre>
<p>I have found a workaround which is listed at the end of this post, but its not at all 'panda-style' and prone to errors. The apply or transform function on a group seems like the right way to go but after hours of trying I still do not succeed. I figured the correct way should be something like:</p>
<pre><code>def func(data):
fi = data.columns[0][0]
th = data.columns[0][2]
data[(fi,'new',th)] = data[(fi,'foo',th)] + data[(fi,'bar',th)]
print data
return data
print grouped.apply(func)
</code></pre>
<p>The new column is properly added within the function, but is not returned. Using the same function with transform would work if the 'new' column already exists in the df, but how do you add a new column at a specific level 'on the fly' or before grouping?</p>
<p>The code to generate the sample df is:</p>
<pre><code>import pandas, itertools
first = ['A','B']
second = ['foo','bar','baz']
third = ['dog', 'cat']
tuples = []
for tup in itertools.product(first, second, third):
tuples.append(tup)
columns = pandas.MultiIndex.from_tuples(tuples, names=['First','Second','Third'])
data = np.random.randint(0,10,(5, 12))
df = pandas.DataFrame(data, columns=columns)
</code></pre>
<p>And my workaround:</p>
<pre><code>dfnew = None
grouped = df.groupby(by=None, level=[0,2], axis=1)
for name, group in grouped:
newparam = group.xs('foo', axis=1, level=1) + group.xs('bar', axis=1, level=1)
dftmp = group.join(pandas.DataFrame(np.array(newparam), columns=pandas.MultiIndex.from_tuples([(group.columns[0][0], 'new', group.columns[0][2])], names=['First','Second', 'Third'])))
if dfnew is None:
dfnew = dftmp
else:
dfnew = pandas.concat([dfnew, dftmp], axis=1)
print dfnew.sort_index(axis=1)
</code></pre>
<p>Which works, but creating a new dataframe for each group and 'manually' assigning the levels is a really bad practice.</p>
<p>So what is the proper way to do this? I found several posts dealing with similar questions, but all of these had only 1 level of columns, and that's exactly what I'm struggling with.</p>
|
<p>There definitely is a weakness in the API here, but I'm not sure off the top of my head how to make it easier to do what you're doing. Here's one simple way around this, at least for your example:</p>
<pre><code>In [20]: df
Out[20]:
First A B
Second foo bar baz foo bar baz
Third dog cat dog cat dog cat dog cat dog cat dog cat
0 7 2 9 3 3 0 5 9 8 2 0 6
1 1 4 1 7 2 3 2 3 1 0 4 0
2 6 5 0 6 6 1 5 1 7 4 3 6
3 4 8 1 9 0 3 9 2 3 1 5 9
4 6 1 1 5 1 2 2 6 3 7 2 1
In [21]: rdf = df.stack(['First', 'Third'])
In [22]: rdf['new'] = rdf.foo + rdf.bar
In [23]: rdf
Out[23]:
Second bar baz foo new
First Third
0 A cat 3 0 2 5
dog 9 3 7 16
B cat 2 6 9 11
dog 8 0 5 13
1 A cat 7 3 4 11
dog 1 2 1 2
B cat 0 0 3 3
dog 1 4 2 3
2 A cat 6 1 5 11
dog 0 6 6 6
B cat 4 6 1 5
dog 7 3 5 12
3 A cat 9 3 8 17
dog 1 0 4 5
B cat 1 9 2 3
dog 3 5 9 12
4 A cat 5 2 1 6
dog 1 1 6 7
B cat 7 1 6 13
dog 3 2 2 5
In [24]: rdf.unstack(['First', 'Third'])
Out[24]:
Second bar baz foo new
First A B A B A B A B
Third cat dog cat dog cat dog cat dog cat dog cat dog cat dog cat dog
0 3 9 2 8 0 3 6 0 2 7 9 5 5 16 11 13
1 7 1 0 1 3 2 0 4 4 1 3 2 11 2 3 3
2 6 0 4 7 1 6 6 3 5 6 1 5 11 6 5 12
3 9 1 1 3 3 0 9 5 8 4 2 9 17 5 3 12
4 5 1 7 3 2 1 1 2 1 6 6 2 6 7 13 5
</code></pre>
<p>And you can of course rearrange to your heart's content:</p>
<pre><code>In [28]: rdf.unstack(['First', 'Third']).reorder_levels(['First', 'Second', 'Third'], axis=1).sortlevel(0, axis=1)
Out[28]:
First A B
Second bar baz foo new bar baz foo new
Third cat dog cat dog cat dog cat dog cat dog cat dog cat dog cat dog
0 3 9 0 3 2 7 5 16 2 8 6 0 9 5 11 13
1 7 1 3 2 4 1 11 2 0 1 0 4 3 2 3 3
2 6 0 1 6 5 6 11 6 4 7 6 3 1 5 5 12
3 9 1 3 0 8 4 17 5 1 3 9 5 2 9 3 12
4 5 1 2 1 1 6 6 7 7 3 1 2 6 2 13 5
</code></pre>
|
python|group-by|pandas
| 7
|
8,250
| 28,886,439
|
Built in function in numpy to interpret an Integer as a numpy array with index = integer value set
|
<p>I am new to numpy and I am trying to avoid for-loops. My requirement is as below:</p>
<pre><code>Input - decimal value (ex. 3)
Output - Binary numpy array ( = 00000 01000)
</code></pre>
<p>Another example : </p>
<pre><code>Input = 6
Output = 00010 00000
</code></pre>
<p>Note: I do not want the binary representation of 3. I only need the array element whose index equals the integer to be set.</p>
<p>Is there any standard library function in numpy? Something analogous to get_dummies function in pandas module.</p>
|
<p>Try this instead. This doesn't use any for loops and if you add some sanity checks it should work fine.</p>
<pre><code>def oneOfK(label):
rows = label.shape[0];
rowsIndex=np.arange(rows,dtype="int")
oneKLabel = np.zeros((rows,10))
#oneKLabel = np.zeros((rows,np.max(label)+1))
oneKLabel[rowsIndex,label.astype(int)]=1
return oneKLabel
</code></pre>
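<p>For a single integer input as in the question, a shorter sketch (the output width of 10 is an assumption) is fancy indexing into a zero array, or equivalently taking a row of <code>np.eye</code>:</p>
<pre><code>import numpy as np

n = 10            # assumed output width
value = 3         # input decimal value
out = np.zeros(n, dtype=int)
out[value] = 1    # only the element at index == value is set
# equivalently: out = np.eye(n, dtype=int)[value]
</code></pre>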
|
python|numpy
| 2
|
8,251
| 33,712,638
|
How to generate the unique id from a list of ids containing duplicates
|
<p>I am using the pandas package to deal with my data, and I have a dataframe that looks like below.</p>
<pre><code>data = pd.read_csv('people.csv')
id, A, B
John, 1, 3
Mary, 2, 5
John, 4, 6
John, 3, 7
Mary, 5, 2
</code></pre>
<p>I'd like to produce unique ids for the duplicates while keeping their original order.</p>
<pre><code>id, A, B
John, 1, 3
Mary, 2, 5
John.1, 4, 6
John.2, 3, 7 # John shows up three times.
Mary.1, 5, 2 # Mary shows up twice.
</code></pre>
<p>I tried something like <code>set_index</code>, <code>pd.factorize()</code> and <code>index_col</code> but they do not work. </p>
|
<p>In order to obtain the indices you may use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html#pandas.core.groupby.GroupBy.cumcount" rel="nofollow"><code>GroupBy.cumcount</code></a>:</p>
<pre><code>>>> idx = df.groupby('id').cumcount()
>>> idx
0 0
1 0
2 1
3 2
4 1
dtype: int64
</code></pre>
<p>The non-zero ones may be appended by:</p>
<pre><code>>>> mask = idx != 0
>>> df.loc[mask, 'id'] += '.' + idx[mask].astype('str')
>>> df
id A B
0 John 1 3
1 Mary 2 5
2 John.1 4 6
3 John.2 3 7
4 Mary.1 5 2
</code></pre>
|
python|python-2.7|pandas
| 2
|
8,252
| 33,737,596
|
External access to pythonanywhere MySQL database with pandas and SQLAlchemy
|
<p>I want to use <code>pandas</code> to read data from my pythonanywhere MySQL database. <code>pandas</code> uses <code>sqlalchemy</code>.</p>
<p>The following doesn't work:</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('mysql://user:pass@user.mysql.pythonanywhere-services.com/user$db_name')
pd.read_sql('SHOW TABLES from db_name', engine)
</code></pre>
<p>I'm getting an error: <code>OperationalError: OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on 'user.mysql.pythonanywhere-services.com' (10060)") None None</code></p>
<p>What's wrong? Or is external access not possible with pythonanywhere? (I'm on a free plan)</p>
|
<p>PythonAnywhere dev here. Unfortunately you can't connect to your PythonAnywhere database from outside the service. If you had a paid plan (which comes with SSH access) then you could do it <a href="https://help.pythonanywhere.com/pages/SSHTunnelling" rel="nofollow">by using SSH tunnelling</a> but that won't work from a free account. </p>
|
python|mysql|pandas|sqlalchemy|pythonanywhere
| 2
|
8,253
| 22,482,003
|
Value error, truth error, ambiguous error
|
<p>When using this code</p>
<pre><code> for i in range(len(data)):
if Ycoord >= Y_west and Xcoord == X_west:
flag = 4
</code></pre>
<p>I get this ValueError</p>
<pre><code>if Ycoord >= Y_west and Xcoord == X_west:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>when I use the above restriction.</p>
<p>Any help on how I can keep my restriction and continue with the writing of my file?</p>
|
<p>The variables <code>Ycoord</code> and <code>Xcoord</code> are probably <code>numpy.ndarray</code> objects. You have to use the array-compatible equivalent of the <code>and</code> operator (<code>np.logical_and</code>) to check all of their values against your condition. You can create a flag array and set the values to <code>4</code> in all places where your conditional is <code>True</code>:</p>
<pre><code>check = np.logical_and(Ycoord >= Y_west, Xcoord == X_west)
flag = np.zeros_like(Ycoord)
flag[check] = 4
</code></pre>
<p>or you have to test value-by-value in your code doing:</p>
<pre><code>for i in range(len(data)):
if Ycoord[i] >= Y_west and Xcoord[i] == X_west:
flag = 4
</code></pre>
|
python|arrays|numpy|python-2.6
| 1
|
8,254
| 62,046,431
|
Python Dataframe: Get number of week days present in last month?
|
<p>I have <code>df</code> with a column <code>day_name</code>. I'm trying to get the number of times each weekday occurs in the last month.</p>
<p>For example: there are <code>4 Fridays</code> and <code>5 Thursdays</code> in April.</p>
<p>df</p>
<pre><code> day_name
0 Friday
1 Sunday
2 Thursday
3 Wednesday
4 Monday
</code></pre>
<p>In plain Python, for a single day:</p>
<pre><code> import calendar
year = 2020
month = 4
day_to_count = calendar.WEDNESDAY
matrix = calendar.monthcalendar(year,month)
num_days = sum(1 for x in matrix if x[day_to_count] != 0)
</code></pre>
<p>How do I use this with the dataframe, or are there any other suggestions?</p>
<p>expected output</p>
<pre><code> day_name last_months_count
0 Friday 4
1 Sunday 4
2 Thursday 5
3 Wednesday 5
4 Monday 4
</code></pre>
|
<p>For last month:</p>
<pre><code>year, month = 2020, 4
start, end = f'{year}/{month}/1', f'{year}/{month+1}/1'  # note: month+1 needs a wrap-around for December
# we exclude the last day
# which is first day of next month
last_month = pd.date_range(start,end,freq='D')[:-1]
df['last_month_count'] = df['day_name'].map(last_month.day_name().value_counts())
</code></pre>
<p>Output:</p>
<pre><code> day_name last_month_count
0 Friday 4
1 Sunday 4
2 Thursday 5
3 Wednesday 5
4 Monday 4
</code></pre>
<p><strong>Bonus:</strong> to extract last month programmatically:</p>
<pre><code>from datetime import datetime
now = datetime.now()
year, month = now.year, now.month
# step back one month, wrapping around the year boundary
if month == 1:
    year, month = year-1, 12
else:
    month = month - 1
</code></pre>
|
python|pandas|numpy|dataframe
| 2
|
8,255
| 62,379,530
|
does validation_data in model.fit() method in Tensorflow Keras have to be a tuple?
|
<p>I'm implementing a complicated loss function so I use a custom layer to pass the loss. Something like:</p>
<pre><code>class SIAMESE_LOSS(Layer):
def __init__(self, **kwargs):
super(SIAMESE_LOSS, self).__init__(**kwargs)
@staticmethod
def mmd_loss(source_samples, target_samples):
return mmd(source_samples, target_samples)
@staticmethod
def regression_loss(pred, labels):
return K.mean(mae(pred, labels))
def call(self, inputs, **kwargs):
source_labels = inputs[0]
target_labels = inputs[1]
source_pred = inputs[2]
target_pred = inputs[3]
source_samples = inputs[4]
target_samples = inputs[5]
source_loss = self.regression_loss(source_pred, source_labels)
target_loss = self.regression_loss(target_pred, target_labels)
mmd_loss = self.mmd_loss(source_samples, target_samples)
self.add_loss(source_loss)
self.add_loss(target_loss)
self.add_loss(mmd_loss)
self.add_metric(source_loss, aggregation='mean', name='source_mae')
self.add_metric(target_loss, aggregation='mean', name='target_mae')
self.add_metric(mmd_loss, aggregation='mean', name='MMD')
return mmd_loss+target_loss+source_loss
</code></pre>
<p>So the labels are sent to the model as inputs.<br>
Therefore fitting the model will be like:</p>
<pre><code> history = model.fit(
x=[train_data_s, train_data_t, self.train_labels, self.train_data_t],
y=None,
batch_size=self.batch_size,
epochs=base_epochs,
verbose=2,
callbacks=cp_callback,
validation_data=[val_data_s, val_data_t, self.val_labels, self.val_labels_t],
shuffle=True
)
</code></pre>
<p>However, according to the official document in Tensorflow, validation_data should be: </p>
<blockquote>
<p>Data on which to evaluate the loss and any model metrics at the end of
each epoch. The model will not be trained on this data.
validation_data will override validation_split. validation_data could
be: tuple (x_val, y_val) of Numpy arrays or tensors tuple (x_val,
y_val, val_sample_weights) of Numpy arrays dataset For the first two
cases, batch_size must be provided. For the last case,
validation_steps could be provided. Note that validation_data does not
support all the data types that are supported in x, eg, dict,
generator or keras.utils.Sequence.</p>
</blockquote>
<p>There's no 'label' that should be passed since the labels are already handled by the model as inputs. How can I solve the problem if I still want to use validation data?</p>
|
<p>To write your own loss, you need to inherit from the <code>Loss</code> class and implement the loss calculation in its <code>__init__</code> and <code>call</code> methods:
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/Loss" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/losses/Loss</a></p>
<p>That way you don't have to drop <code>y</code> from <code>model.fit()</code>, and <code>validation_data</code> can stay a normal <code>(x_val, y_val)</code> tuple.</p>
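<p>A minimal sketch of such a subclass (the MAE body is a stand-in for the question's more complicated loss):</p>
<pre><code>import tensorflow as tf

class CustomMAE(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        # per-sample loss; Keras applies the configured reduction
        return tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

# model.compile(optimizer='adam', loss=CustomMAE())
# validation_data can then stay a normal (x_val, y_val) tuple
</code></pre>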
|
python|tensorflow|keras|loss-function
| 0
|
8,256
| 62,043,788
|
Add values above bars on a bar chart in python
|
<p>I have the following code below: </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.figure()
languages =['Python', 'SQL', 'Java', 'C++', 'JavaScript']
pos = np.arange(len(languages))
popularity = [56, 39, 34, 34, 29]
bars = plt.bar(pos, popularity, align='center', linewidth=0, color='lightslategrey')
bars[0].set_color('#1F77B4')
plt.xticks(pos, languages, alpha=0.8)
plt.ylabel("")
plt.title('Top 5 Languages for Math & Data \nby % popularity on Stack Overflow', alpha=0.8)
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
for spine in plt.gca().spines.values():
spine.set_visible(False)
for index, value in enumerate(str(popularity)):
plt.text(index,value,str(popularity))
plt.show()
</code></pre>
<p>I am trying to label the values from the Y-axis above each bar (the Y-values).</p>
<p>However, this code is not doing that. </p>
<p>Could anybody point out to me where I am going wrong?</p>
|
<p>You are almost there; You only need to change the last for loop to be like so:</p>
<pre><code>...
...
for index, value in enumerate(popularity):
plt.text(index,value, str(value))
plt.show()
</code></pre>
<p>which will generate this plot:
<a href="https://i.stack.imgur.com/GpnR5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GpnR5.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib|graph|visualization
| 2
|
8,257
| 51,519,379
|
Best way(run-time) to aggregate (calculate ratio of) sum to total count based on group by
|
<p>I'm trying to identify the ratio of approved applications (identified by flag '1', otherwise '0') to total applications for each person (Cust_ID). I have achieved this logic with the following code, but it takes about 10 mins to compute for 1.6 M records. Is there a faster way to perform the same operation?</p>
<pre><code># Finding ratio of approved out of total applications
df_approved_ratio = df.groupby('Cust_ID').apply(lambda x:x['STATUS_Approved'].sum()/len(x))
</code></pre>
|
<p>I think you need to aggregate by <code>mean</code>:</p>
<pre><code>df = pd.DataFrame({'STATUS_Approved':[0,1,0,0,1,1],
'Cust_ID':list('aaabbb')})
print (df)
STATUS_Approved Cust_ID
0 0 a
1 1 a
2 0 a
3 0 b
4 1 b
5 1 b
df_approved_ratio = df.groupby('Cust_ID')['STATUS_Approved'].mean()
print (df_approved_ratio)
Cust_ID
a 0.333333
b 0.666667
Name: STATUS_Approved, dtype: float64
print (df.groupby('Cust_ID').apply(lambda x:x['STATUS_Approved'].sum()/len(x)))
Cust_ID
a 0.333333
b 0.666667
Name: STATUS_Approved, dtype: float64
</code></pre>
|
pandas|python-3.6|calculation
| 1
|
8,258
| 51,547,168
|
Pandas join two dataframes based on relationship described in dictionary
|
<p>I have two dataframes that I want to join based on a relationship described in a dictionary of lists, where the keys in the dictionary refer to ids from dfA idA column, and the items in the list are ids from dfB idB column. The dataframes and dictionary look something like this:</p>
<pre><code>dfA
colA colB idA
0 a abc 3
1 b def 4
2 b ghi 5
dfB
colX idB colZ
0 bob 7 a
1 bob 7 b
2 bob 7 c
3 jim 8 d
4 jake 9 a
5 jake 9 e
myDict = { '3': [ '7', '8' ], '4': [], '5': ['7', '9'] }
</code></pre>
<p>How can I use myDict to join the two dataframes to produce a dataframe like the following?</p>
<pre><code>dfC
  colA colB idA colX  idB  colZ
0 a    abc  3   bob   7    a
1 a    abc  3   bob   7    b
2 a    abc  3   bob   7    c
3 a    abc  3   jim   8    d
4 b    def  4   None  None None
5 b    ghi  5   bob   7    a
6 b    ghi  5   bob   7    b
7 b    ghi  5   bob   7    c
8 b    ghi  5   jake  9    a
9 b    ghi  5   jake  9    e
</code></pre>
|
<p>You can create a linking table (DataFrame) from your dictionary. Below is a full working example. It might need some row and column sorting at the end to produce exactly your output.</p>
<pre><code>import pandas as pd
import numpy as np
dfA = pd.DataFrame({'colA': ('a', 'b', 'b'),
'colB': ('abc', 'def', 'ghi'),
'idA': ('3', '4', '5')})
dfB = pd.DataFrame({'colX': ('bob', 'bob', 'bob', 'jim', 'jake', 'jake'),
'idB': ('7', '7', '7', '8', '9', '9'),
'colZ': ('a', 'b', 'c', 'd', 'a', 'e')})
myDict = {'3': ['7', '8'], '4': [], '5': ['7', '9']}
dfC = pd.DataFrame(columns=['idA', 'idB'])
i = 0
for key, value in myDict.items():
# the if statement is for empty list to create one record with NaNs
if not value:
dfC.loc[i, 'idA'] = key
dfC.loc[i, 'idB'] = np.nan
i += 1
for val in value:
dfC.loc[i, 'idA'] = key
dfC.loc[i, 'idB'] = val
i += 1
temp = dfA.merge(dfC, how='right')
result = temp.merge(dfB, how='outer')
print(result)
</code></pre>
<p>The output is:</p>
<pre><code> colA colB idA idB colX colZ
0 a abc 3 7 bob a
1 a abc 3 7 bob b
2 a abc 3 7 bob c
3 b ghi 5 7 bob a
4 b ghi 5 7 bob b
5 b ghi 5 7 bob c
6 a abc 3 8 jim d
7 b def 4 NaN NaN NaN
8 b ghi 5 9 jake a
9 b ghi 5 9 jake e
</code></pre>
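<p>If the dictionary is large, the incremental <code>.loc</code> writes can become slow; here is a sketch of building the same link table with a list comprehension instead:</p>
<pre><code>import numpy as np
import pandas as pd

myDict = {'3': ['7', '8'], '4': [], '5': ['7', '9']}
pairs = [(key, val)
         for key, vals in myDict.items()
         for val in (vals or [np.nan])]   # an empty list yields one NaN row
dfC = pd.DataFrame(pairs, columns=['idA', 'idB'])
</code></pre>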
|
python|pandas
| 1
|
8,259
| 48,297,940
|
Map values of multiple dataframes and fill columns
|
<p>Let's assume I have the following three dataframes:</p>
<p><strong>Dataframe 1:</strong></p>
<pre><code>df1 = {'year': ['2010','2012','2014','2015'], 'count': [1,1,1,1]}
df1 = pd.DataFrame(data=df1)
df1 = df1.set_index('year')
df1
year count
2010 1
2012 1
2014 1
2015 1
</code></pre>
<p><strong>Dataframe 2:</strong></p>
<pre><code>df2 = {'year': ['2010','2011','2016','2017'], 'count': [2,1,3,1]}
df2 = pd.DataFrame(data=df2)
df2 = df2.set_index('year')
df2
year count
2010 2
2011 1
2016 3
2017 1
</code></pre>
<p><strong>Dataframe 3:</strong></p>
<pre><code>df3 = {'year': ['2010','2011','2012','2013','2014','2015','2017'], 'count': [4,2,5,4,4,1,1]}
df3 = pd.DataFrame(data=df3)
df3 = df3.set_index('year')
df3
year count
2010 4
2011 2
2012 5
2013 4
2014 4
2015 1
2017 1
</code></pre>
<p>Now I want to have three dataframes with all the years and counts. For example, if <code>df1</code> has missing years <em>2011, 2013, 2016, 2017</em>, then these are added to the index of df1, with the count for each newly added index set to 0.</p>
<p>So my output would be something like this for df1:</p>
<pre><code>year count
2010 1
2012 1
2014 1
2015 1
2011 0
2013 0
2016 0
2017 0
</code></pre>
<p>And similarly for df2 and df3 as well. Thanks.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.union.html" rel="nofollow noreferrer"><code>union</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>:</p>
<pre><code>idx = df1.index.union(df2.index).union(df3.index)
print (idx)
Index(['2010', '2011', '2012', '2013',
'2014', '2015', '2016', '2017'], dtype='object', name='year')
</code></pre>
<p>Another solution:</p>
<pre><code>from functools import reduce
idx = reduce(np.union1d,[df1.index, df2.index, df3.index])
print (idx)
['2010' '2011' '2012' '2013' '2014' '2015' '2016' '2017']
</code></pre>
<hr>
<pre><code>df1 = df1.reindex(idx, fill_value=0)
print (df1)
count
year
2010 1
2011 0
2012 1
2013 0
2014 1
2015 1
2016 0
2017 0
</code></pre>
<pre><code>df2 = df2.reindex(idx, fill_value=0)
print (df2)
count
year
2010 2
2011 1
2012 0
2013 0
2014 0
2015 0
2016 3
2017 1
</code></pre>
<pre><code>df3 = df3.reindex(idx, fill_value=0)
print (df3)
count
year
2010 4
2011 2
2012 5
2013 4
2014 4
2015 1
2016 0
2017 1
</code></pre>
|
python|pandas|dataframe
| 3
|
8,260
| 48,376,704
|
How to train a classifier that contain multi dimensional featured input values
|
<p>I am trying to model a classifier that takes multi-dimensional features as input. Does anyone know of a dataset that contains multi-dimensional features?
Let's say, for example: in the MNIST data we have the pixel location as the feature, and the feature value is a single-dimensional grey-scale value that varies from 0 to 255. But if we consider a colour image, a single grey-scale value is not sufficient; in that case we would still take the pixel location as the feature, but the feature value would be 3-dimensional (R(0-255) as one dimension, G(0-255) as the second dimension and B(0-255) as the third dimension). So in this case, how can one solve it using a feed-forward neural network?
Small suggestions are also accepted.</p>
|
<p>The same way.</p>
<p>If you plug the pixels into your network directly, just reshape the tensor to have length H*W*3.</p>
<p>If you use convolutions, note that the last parameter is the number of input/output channels. Just make sure the first convolution uses 3 as the number of input channels.</p>
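<p>A minimal sketch of the first approach in Keras (all shapes and layer sizes here are assumptions):</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense

H, W = 32, 32
x = np.random.rand(100, H, W, 3)      # 100 RGB images
x_flat = x.reshape(100, H * W * 3)    # each R/G/B value becomes a feature

model = Sequential([
    Dense(64, activation='relu', input_shape=(H * W * 3,)),
    Dense(10, activation='softmax'),
])
</code></pre>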
|
tensorflow|machine-learning|dataset|feed-forward
| 0
|
8,261
| 48,510,229
|
reconstructing signal with tensorflow.contrib.signal causes amplification or modulation (frames, overlap_and_add, stft etc)
|
<p><strong><em>UPDATE</em></strong><em>: I've reimplemented this in librosa to compare, and the results are indeed very different from the results from tensorflow. Librosa gives the results I'd expect (but tensorflow does not).</em></p>
<p>I've posted this as an <a href="https://github.com/tensorflow/tensorflow/issues/16465" rel="noreferrer">issue</a> on the tensorflow repo, but it's quiet there so I'm trying here. Also I'm not sure if it's a bug in tensorflow, or user error on my behalf. For completeness I'll include full source and results here too.</p>
<p>A.) When I create frames from a signal with <code>frame_length=1024</code> and <code>frame_step=256</code> (i.e. 25% hop size, 75% overlap) using a hann window (also tried hamming), and then I reconstruct with <code>overlap_and_add</code>, I'd expect the signal to be reconstructed correctly (because of COLA etc). But instead it comes out exactly double the amplitude. I need to divide the resulting signal by two for it to be correct. </p>
<p>B.) If I use STFT to create a series of overlapping spectrograms, and then reconstruct with inverse STFT, again with <code>frame_length=1024</code> and <code>frame_step=256</code>, the signal is again reconstructed at double amplitude. </p>
<p>I realise why this might be the case (unity gain at 50% overlap for hann, so 75% overlap will double the signal). But is it not normal for the reconstruction function to take this into account? E.g. librosa's istft does return the signal with the correct amplitude while tensorflow returns double.</p>
<p>C.)
At any other frame_step there is severe amplitude modulation going on. See images below. This doesn't seem right at all. </p>
<p><strong>UPDATE</strong>: If I explicitly set <code>window_fn=tf.contrib.signal.inverse_stft_window_fn(frame_step)</code> in <code>inverse_stft</code> the output is correct. So it seems the <code>frame_step</code> in <code>inverse_stft</code> is not being passed into the window function (which is also what the results hint at).</p>
<p>original data:</p>
<p><a href="https://i.stack.imgur.com/PyYVr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PyYVr.png" alt="enter image description here"></a></p>
<p>tensorflow output from frames + overlap_and_add:</p>
<p><a href="https://i.stack.imgur.com/q42SF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/q42SF.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/oAFA7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oAFA7.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/tYRmR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tYRmR.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/B7Fqr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B7Fqr.png" alt="enter image description here"></a></p>
<p>tensorflow output from stft+istft:</p>
<p><a href="https://i.stack.imgur.com/Gcbq0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Gcbq0.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/4HTwt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4HTwt.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Tsb7u.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tsb7u.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/fQgIK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fQgIK.png" alt="enter image description here"></a></p>
<p>librosa output from stft+istft:</p>
<p><a href="https://i.stack.imgur.com/dW9zq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dW9zq.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/CY2Zg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CY2Zg.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/IR2ZH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IR2ZH.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/3HzYb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3HzYb.png" alt="enter image description here"></a></p>
<p>tensorflow code:</p>
<pre><code>from __future__ import print_function
from __future__ import division
import numpy as np
import scipy.io.wavfile
import math
import random
import matplotlib.pyplot as plt
import tensorflow as tf
out_prefix = 'tensorflow'
def plot(data, title, do_save=True):
plt.figure(figsize=(20,5))
plt.plot(data[:3*frame_length])
plt.ylim([-1, 1])
plt.title(title)
plt.grid()
if do_save: plt.savefig(title + '.png')
plt.show()
def reconstruct_from_frames(x, frame_length, frame_step):
name = 'frame'
frames_T = tf.contrib.signal.frame(x, frame_length=frame_length, frame_step=frame_step)
windowed_frames_T = frames_T * tf.contrib.signal.hann_window(frame_length, periodic=True)
output_T = tf.contrib.signal.overlap_and_add(windowed_frames_T, frame_step=frame_step)
return name, output_T
def reconstruct_from_stft(x, frame_length, frame_step):
name = 'stft'
spectrograms_T = tf.contrib.signal.stft(x, frame_length, frame_step)
output_T = tf.contrib.signal.inverse_stft(spectrograms_T, frame_length, frame_step)
return name, output_T
def test(fn, input_data):
print('-'*80)
tf.reset_default_graph()
input_T = tf.placeholder(tf.float32, [None])
name, output_T = fn(input_T, frame_length, frame_step)
title = "{}.{}.{}.l{}.s{}".format(out_prefix, sample_rate, name, frame_length, frame_step)
print(title)
with tf.Session():
output_data = output_T.eval({input_T:input_data})
# output_data /= frame_length/frame_step/2 # tensorflow needs this to normalise amp
plot(output_data, title)
scipy.io.wavfile.write(title+'.wav', sample_rate, output_data)
def generate_data(duration_secs, sample_rate, num_sin, min_freq=10, max_freq=500, rnd_seed=0, max_val=0):
'''generate signal from multiple random sin waves'''
if rnd_seed>0: random.seed(rnd_seed)
data = np.zeros([duration_secs*sample_rate], np.float32)
for i in range(num_sin):
w = np.float32(np.sin(np.linspace(0, math.pi*2*random.randrange(min_freq, max_freq), num=duration_secs*sample_rate)))
data += random.random() * w
if max_val>0:
data *= max_val / np.max(np.abs(data))
return data
frame_length = 1024
sample_rate = 22050
input_data = generate_data(duration_secs=1, sample_rate=sample_rate, num_sin=1, rnd_seed=2, max_val=0.5)
title = "{}.orig".format(sample_rate)
plot(input_data, title)
scipy.io.wavfile.write(title+'.wav', sample_rate, input_data)
for frame_step in [256, 512, 768, 1024]:
test(reconstruct_from_frames, input_data)
test(reconstruct_from_stft, input_data)
print('done.')
</code></pre>
<p>librosa code:</p>
<pre><code>from __future__ import print_function
from __future__ import division
import numpy as np
import scipy.io.wavfile
import math
import random
import matplotlib.pyplot as plt
import librosa.core as lc
out_prefix = 'librosa'
def plot(data, title, do_save=True):
plt.figure(figsize=(20,5))
plt.plot(data[:3*frame_length])
plt.ylim([-1, 1])
plt.title(title)
plt.grid()
if do_save: plt.savefig(title + '.png')
plt.show()
def reconstruct_from_stft(x, frame_length, frame_step):
name = 'stft'
stft = lc.stft(x, n_fft=frame_length, hop_length=frame_step)
istft = lc.istft(stft, frame_step)
return name, istft
def test(fn, input_data):
print('-'*80)
name, output_data = fn(input_data, frame_length, frame_step)
title = "{}.{}.{}.l{}.s{}".format(out_prefix, sample_rate, name, frame_length, frame_step)
print(title)
# output_data /= frame_length/frame_step/2 # tensorflow needs this to normalise amp
plot(output_data, title)
scipy.io.wavfile.write(title+'.wav', sample_rate, output_data)
def generate_data(duration_secs, sample_rate, num_sin, min_freq=10, max_freq=500, rnd_seed=0, max_val=0):
'''generate signal from multiple random sin waves'''
if rnd_seed>0: random.seed(rnd_seed)
data = np.zeros([duration_secs*sample_rate], np.float32)
for i in range(num_sin):
w = np.float32(np.sin(np.linspace(0, math.pi*2*random.randrange(min_freq, max_freq), num=duration_secs*sample_rate)))
data += random.random() * w
if max_val>0:
data *= max_val / np.max(np.abs(data))
return data
frame_length = 1024
sample_rate = 22050
input_data = generate_data(duration_secs=1, sample_rate=sample_rate, num_sin=1, rnd_seed=2, max_val=0.5)
title = "{}.orig".format(sample_rate)
plot(input_data, title)
scipy.io.wavfile.write(title+'.wav', sample_rate, input_data)
for frame_step in [256, 512, 768, 1024]:
test(reconstruct_from_stft, input_data)
print('done.')
</code></pre>
<ul>
<li>Linux Ubuntu 16.04</li>
<li>Tensorflow installed from binary v1.4.0-19-ga52c8d9, 1.4.1</li>
<li>Python 2.7.14 | Anaconda custom (64-bit)| (default, Oct 16 2017, 17:29:19). IPython 5.4.1</li>
<li>Cuda release 8.0, V8.0.61, cuDNN 6</li>
<li>Geforce GTX 970M, Driver Version: 384.111</li>
</ul>
<p>(Just tried with TF1.5, Cuda9.0, cuDNN 7.0.5 as well, and same results). </p>
|
<p>You should use <code>tf.signal.inverse_stft_window_fn</code></p>
<pre><code>window_fn=tf.signal.inverse_stft_window_fn(frame_step)
tf_istfts=tf.signal.inverse_stft(tf_stfts, frame_length=frame_length, frame_step=frame_step, fft_length=fft_length, window_fn=window_fn)
</code></pre>
<p>See more at <a href="https://www.tensorflow.org/api_docs/python/tf/signal/inverse_stft_window_fn" rel="nofollow noreferrer">inverse_stft_window_fn</a></p>
|
python|tensorflow|time-series|signal-processing
| 1
|
8,262
| 48,534,879
|
Converting sparse IndexedSlices to a dense Tensor
|
<p>I got the following warning:</p>
<pre><code>94: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 1200012120 elements. This may consume a large amount of memory.
</code></pre>
<p>For the following code:</p>
<pre><code>from wordbatch.extractors import WordSeq
import wordbatch
from keras.layers import Input,Embedding
...
wb = wordbatch.WordBatch(normalize_text, extractor=(WordSeq, {"seq_maxlen": MAX_NAME_SEQ}), procs=NUM_PROCESSOR)
wb.dictionary_freeze = True
full_df["ws_name"] = wb.fit_transform(full_df["name"])
...
name = Input(shape=[full_df["name"].shape[1]], name="name")
emb_name = Embedding(MAX_TEXT, 50)(name)
...
</code></pre>
<p>That is, I feed the WordSeq output (from WordBatch) into the Embedding layer of a GRU network. How should I modify the code to make it work without converting to a dense tensor?</p>
|
<p>I've had the same issue with the Embedding layer in Keras. <a href="https://github.com/keras-team/keras/issues/4365#issuecomment-260482550" rel="nofollow noreferrer">The solution</a> is to explicitly use a TensorFlow optimizer, like here:</p>
<pre><code>import tensorflow as tf
from keras.optimizers import TFOptimizer

model.compile(loss='mse',
              optimizer=TFOptimizer(tf.train.GradientDescentOptimizer(0.1)))
</code></pre>
|
tensorflow|keras|word-embedding|gated-recurrent-unit
| 1
|
8,263
| 48,669,373
|
invalid column name error when writing pandas DataFrame to sql
|
<p>When I try to write a dataframe to ms sql server, like this:</p>
<pre><code>cnxn = sqlalchemy.create_engine("mssql+pyodbc://@HOST:PORT/DATABASE?driver=SQL+Server")
df.to_sql('DATABASE.dbo.TABLENAME', cnxn, if_exists='append', index=False)
</code></pre>
<p>I get the following error:</p>
<pre><code>ProgrammingError: (pyodbc.ProgrammingError) ('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'DateDay'. (207) (SQLExecDirectW)") [SQL: 'INSERT INTO [DATABASE.dbo.TABLENAME] ([DateDay], [ID], [Code], [Forecasted], [Lower95CI], [Upper95CI], [ForecastMethod], [ForecastDate]) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ((datetime.datetime(2017, 12, 10, 0, 0), '8496', "'IO'", 197, 138, 138, 'ARIMAX',...
</code></pre>
<p>It seems that the column name is producing the error: it is looking for [DateDay] but finds 'DateDay' with the quotes. How do I fix this?</p>
<p>I am using python 3.6 on a windows machine, pandas 0.22, sqlalchemy 1.1.13 and pyodbc 4.0.17</p>
<p><strong>UPDATE-- SOLUTION FOUND:</strong></p>
<p>So I realized that my mistake was in the tablename, which named the database: 'DATABASE.dbo.TABLENAME'; when I removed the DATABASE.dbo prefix, it worked:</p>
<pre><code>df.to_sql('TABLENAME', cnxn, if_exists='append', index=False)
</code></pre>
|
<p>The problem was that I added the database name when executing the df.to_sql command, which was not needed since I had already established a connection to that database. This worked:</p>
<pre><code>df.to_sql('TABLENAME', cnxn, if_exists='append', index=False)
</code></pre>
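<p>If you do need to target a specific schema, <code>to_sql</code> accepts a separate <code>schema</code> argument rather than a dotted table name, for example:</p>
<pre><code>df.to_sql('TABLENAME', cnxn, schema='dbo', if_exists='append', index=False)
</code></pre>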
|
python-3.x|pandas|sqlalchemy|pymssql
| 3
|
8,264
| 48,812,737
|
Advice for ignoring ds_store file when uploading files to jupyter
|
<p>I was wondering if anyone had some advice on how to deal with the .DS_Store file that is automatically created by macOS for each folder when uploading data. Does everyone just write an if statement:</p>
<pre><code>for f in files:
    if f == '.DS_Store':
        continue
    # upload file...
</code></pre>
<p>or is there a better way to do this?</p>
|
<p>If you want to ensure you skip all hidden files, use something like <code>filename.startswith('.')</code>.</p>
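<p>For example, applied to a whole listing:</p>
<pre><code>files = [f for f in files if not f.startswith('.')]
</code></pre>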
|
python|pandas
| 2
|
8,265
| 48,448,385
|
Convert multidimensional list to multidimensional numpy.array
|
<p>I am having trouble converting a python list-of-list-of-list to a 3 dimensional numpy array. </p>
<pre><code>a = [
[
[1,2,3,4], # = len 4
...
], # = len 58
...
] # = len 1245
</code></pre>
<p>When I call <code>a = np.array(a)</code> on it, it reports the shape as <code>(1245,)</code> and I cannot reshape it: <code>a.reshape(1245,58,4)</code> gives me the error
<code>ValueError: cannot reshape array of size 1245 into shape (1245,58,4)</code>. But if I print a[0] it gives me a 58-element list, and a[0][0] gives me a 4-element list, as I expected, so the data is there.</p>
<p>I see plenty of stack exchange posts wanting to flatten it, but I just want to make it into a numpy array in the shape that it already is. I don't know why <code>numpy.array()</code> is not seeing the other dimensions.</p>
|
<p>Lists of the same level (the same numpy axis) need to be the same size. Otherwise you get an array of lists.</p>
<pre><code>np.array([[0, 1], [2]])[0] # returns [0, 1]
np.array([[0, 1], [2, 3]])[0] # returns array([0, 1])
</code></pre>
<p>You can get around this by padding your lists to equal length before converting them to an array.</p>
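<p>A minimal sketch of that (zero-padding and the sample list are assumptions):</p>
<pre><code>import numpy as np

a = [[1, 2, 3], [4, 5]]                   # ragged input
maxlen = max(len(row) for row in a)
padded = np.array([row + [0] * (maxlen - len(row)) for row in a])
# padded.shape is (2, 3) instead of an object array of lists
</code></pre>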
<p>Furthermore the dimensions for <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow noreferrer"><code>reshape</code></a> "should be compatible with the original shape." Meaning (with the exception of a <code>-1</code> value for inference) the product of the new dimensions should equal the product of the old dimensions.</p>
<p>For example in your case you have an array of shape <code>(1245,)</code> so you could call:</p>
<pre><code>a.reshape(83, 5, 3) # works
a.reshape(83, -1, 3) # works
a.reshape(83, 5, 5) # fails since 83 * 5 * 5 = 2075 != 1245
</code></pre>
|
python|arrays|numpy|multidimensional-array
| 2
|
8,266
| 48,775,531
|
how to convert a monthly period in a date?
|
<p>Consider this simple example</p>
<pre><code>df = pd.DataFrame({'mydate' : ['1985m3','1985m4','1985m5']})
df
Out[18]:
mydate
0 1985m3
1 1985m4
2 1985m5
</code></pre>
<p>How can I convert these monthly periods into a proper <code>datetime</code> (artificially using the first day of the month, such as <code>'1985-03-01'</code> for the first row)? </p>
<p>I can of course strip the month and do it <em>the hard way</em> but is there a better pandonic way?</p>
<p>Thanks!</p>
|
<p>Try using <code>pandas.to_datetime</code> with Python <a href="http://strftime.org/" rel="nofollow noreferrer">time directives</a>, where '%Y' matches the year, the literal 'm' matches the letter m, and '%m' matches the month:</p>
<pre><code>pd.to_datetime(df.mydate, format='%Ym%m')
</code></pre>
<p>Output:</p>
<pre><code>0 1985-03-01
1 1985-04-01
2 1985-05-01
Name: mydate, dtype: datetime64[ns]
</code></pre>
|
python|pandas
| 4
|
8,267
| 70,897,341
|
pandas dataframe groupby and agg to obtain a value if conditions in another column
|
<p>I have a dataframe like this:</p>
<pre><code>df_test = pd.DataFrame({'ID1':['A','A','A','A','A','A','A','A','A','A'],
'ID2':['a','a','a','aa','aaa','aaa','b','b','b','b'],
'ID3':['c1','c2','c3','c4','c5','c6','c7','c8','c9','c10'],
'condition1':[1,2,1,1,1,1,1,2,1,1],
'condition2':[80,85,88,80,70,83,85,90,90,70]})
</code></pre>
<p>df_test
<a href="https://i.stack.imgur.com/cBQLw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBQLw.png" alt="df_test" /></a></p>
<p>I want to pick values in ID3 after grouping by ['ID1','ID2','condition1'], such that (1) if there is only one row in the group, it is picked (such as c4); (2) if there is more than one row in the group, the row where condition2 is at its maximum in the group is picked (such as c3, c6, c9, and c8). The result will look like this:</p>
<pre><code>df_test_result = pd.DataFrame({'ID1':['A','A','A','A','A','A'],
'ID2':['a','a','aa','aaa','b','b'],
'condition1':[2,1,1,1,2,1],
'condition2':[85,88,80,83,90,90],
'ID3':['c2','c3','c4','c6','c8','c9']})
df_test_result
</code></pre>
<p><a href="https://i.stack.imgur.com/VW4Mb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VW4Mb.png" alt="df_test_result" /></a></p>
<p>The process appears to work this way, but it is too inefficient (because I need to concatenate the pieces afterwards):</p>
<pre><code>groups = df_test.groupby(['ID1','ID2','condition1'])
for group in groups:
dfi = group[1][group[1]['condition2']==group[1]['condition2'].max()]
print(dfi,'\n')
</code></pre>
<p><a href="https://i.stack.imgur.com/srUNF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/srUNF.png" alt="process" /></a></p>
|
<p>Your condition (1) generalises as (2), so you can always just look at the first row in the group according to <code>condition2</code>:</p>
<pre><code>(
df_test
.sort_values("condition2", ascending=False) # sort everything by condition2
.groupby(["ID1", "ID2", "condition1"])
.first() # select first row in each group (now ordered by condition2)
.reset_index() # reset groupby columns
)
</code></pre>
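<p>An equivalent sketch using <code>idxmax</code>, which takes each group's row label at the maximum <code>condition2</code> and avoids the full sort:</p>
<pre><code>idx = df_test.groupby(["ID1", "ID2", "condition1"])["condition2"].idxmax()
result = df_test.loc[idx].reset_index(drop=True)
</code></pre>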
|
python|pandas
| 1
|
8,268
| 70,928,576
|
Python: fuzzywuzzy matching dataframe but returns unpredictable results
|
<p>I've been searching around for a while now, but I can't seem to find the answer to this small problem.</p>
<p>I created this function to correct misspelled words in a column by matching them against the main column that contains the correct words</p>
<pre><code>data_provinsi = {'id':[11, 12, 13, 14, 15, 16],
'name':['PAPUA', 'PAPUA BARAT', 'JAWA TIMUR', 'DKI JAKARTA', 'MALUKU', 'MALUKU UTARA']}
df_provinsi = pd.DataFrame(data_provinsi)
df_provinsi
data_geo = {'province_id':[11, 11, 12, 12, 12, 12, 12, 13, 13],
'provinsi':['PAPUA', 'PAPUA', 'PAPUA BARAT', 'PAPUA BARAT', 'PAPUA BARAT',' PAPUA BARAT', 'PAPUA BARAT', 'JAWA TIMIR', 'JAWA TIMAR']}
df_sample = pd.DataFrame(data_geo)
df_sample
def checker(df_is_wrongs,col_is_wrong,df_correct_options,col_correct_options):
names_array=[]
ratio_array=[]
for word in df_is_wrongs[col_is_wrong]:
if word in df_correct_options[col_correct_options]:
names_array.append(word)
ratio_array.append('100')
else:
x=process.extractOne(word,df_correct_options[col_correct_options],scorer=fuzz.token_set_ratio)
names_array.append(x[0])
ratio_array.append(x[1])
return names_array,ratio_array
name_match,ratio_match=checker(df_sample,'provinsi',data_provinsi,'name') #buat jalanin checker
df_sample['mapped_names']=pd.Series(name_match).astype(str) #bikin hasil mapping
df_sample['ratio'] = pd.Series(ratio_match).astype(int) #bikin ratio kebenarannya
</code></pre>
<p>However, after running the function, some results are changed even though the word was already correct according to the mapping column, such as index 5 ('PAPUA' and 'PAPUA BARAT'). Both words are valid, yet one was replaced.</p>
<pre><code>>>> output
province_id provinsi mapped_names ratio
0 11 PAPUA PAPUA 100
1 11 PAPUA PAPUA 100
2 11 PAPUA BARAT PAPUA BARAT 100
3 12 PAPUA BARAT PAPUA BARAT 100
4 12 PAPUA BARAT PAPUA BARAT 100
5 12 PAPUA BARAT PAPUA 100
6 12 PAPUA BARAT PAPUA BARAT 100
7 13 JAWA TIMIR JAWA TIMUR 90
8 13 JAWA TIMAR JAWA TIMUR 90
</code></pre>
<p>Is there another solution that fixes this with simpler code and without looping? The loop takes a long time to compute.</p>
|
<p>Try:</p>
<pre><code>best_match = lambda x: pd.Series(process.extractOne(x, df_provinsi['name'].unique()))
df_sample[['mapped_names', 'ratio']] = df_sample['provinsi'].apply(best_match)
print(df_sample)
# Output:
province_id provinsi mapped_names ratio
0 11 PAPUA PAPUA 100
1 11 PAPUA PAPUA 100
2 12 PAPUA BARAT PAPUA BARAT 100
3 12 PAPUA BARAT PAPUA BARAT 100
4 12 PAPUA BARAT PAPUA BARAT 100
5 12 PAPUA BARAT PAPUA BARAT 100
6 12 PAPUA BARAT PAPUA BARAT 100
7 13 JAWA TIMIR JAWA TIMUR 90
8 13 JAWA TIMAR JAWA TIMUR 90
</code></pre>
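<p>If the column contains many repeated values, a sketch that keeps the same matching logic but scores each distinct string only once (the <code>cache</code> dict is the only addition):</p>
<pre><code>choices = df_provinsi['name'].unique()
# score each distinct province string once, then broadcast the result
cache = {v: process.extractOne(v, choices) for v in df_sample['provinsi'].unique()}
df_sample[['mapped_names', 'ratio']] = pd.DataFrame(
    df_sample['provinsi'].map(cache).tolist(), index=df_sample.index)
</code></pre>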
|
python-3.x|pandas|mapping|matching|fuzzywuzzy
| 0
|
8,269
| 70,825,604
|
BERT error - module 'tensorflow_core.keras.activations' has no attribute 'swish'
|
<p>I am trying to execute the transformer model but ended up with error.</p>
<ul>
<li>Python version == 3.7</li>
<li>Tensorflow == 2.0</li>
<li>Transformers == 4.15.0</li>
</ul>
<p>Source : <a href="https://huggingface.co/cross-encoder/nli-deberta-base?candidateLabels=supply+chain%2C+scientific+discovery%2C+microbiology%2C+robots%2C+archeology&multiClass=false&text=shipment+will+arrive+on+next+week.+our+company+will+transport" rel="nofollow noreferrer">https://huggingface.co/cross-encoder/nli-deberta-base?candidateLabels=supply+chain%2C+scientific+discovery%2C+microbiology%2C+robots%2C+archeology&multiClass=false&text=shipment+will+arrive+on+next+week.+our+company+will+transport</a></p>
<p>my code:</p>
<pre><code>import tensorflow
from transformers import pipeline, AutoModelForTokenClassification,BertTokenizer
pipeline("zero-shot-classification",model="cross-encoder/nli-deberta-v3-small")
</code></pre>
<p>Error : module 'tensorflow_core.keras.activations' has no attribute 'swish'</p>
|
<p>I installed Tensorflow==2.3 and restarted the machine. Now the error is gone.</p>
|
python|nlp|tensorflow2.0|bert-language-model|transformer-model
| 0
|
8,270
| 70,955,340
|
how to find Max_winning_streak (max number of consecutive +ve values) in pandas dataframe
|
<p>I have a dataframe like this:</p>
<p>dataframe_name-> p_and_l</p>
<pre><code>date pnl
1/2/17 15:14 -907.5
1/3/17 15:14 1685.75
1/4/17 15:14 817
1/5/17 15:14 -182.5
1/6/17 15:14 415.25
1/9/17 15:14 -339.75
1/10/17 15:14 -413
1/11/17 15:14 1137.5
1/12/17 15:14 127.25
1/13/17 15:14 617.5
1/16/17 15:14 -875
1/17/17 15:14 158
1/18/17 15:14 -498.75
1/19/17 15:14 224.5
</code></pre>
<p>I tried to find Max_winning_streak (the max number of consecutive +ve values) like this:</p>
<pre><code>Max_winning_streak = pd.Series(p_and_l['pnl'])
Max_winning_streak.apply(consecutiveCount)
</code></pre>
<p>But not getting the desired solution.</p>
<p>Can anyone please help me resolve this?
Thanks in advance</p>
|
<p>Build group ids by comparing values to less than or equal to <code>0</code> with a cumulative sum, and then count the values per group. <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> sorts by default, so the first value is the maximal count:</p>
<pre><code>m = df['pnl'].le(0)
max1 = m.cumsum()[~m].value_counts().iat[0]
print (max1)
3
</code></pre>
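<p>To see why this works, here are the intermediate values for the sample data (a sketch; the group id increments at every non-positive value, so each winning streak gets its own id):</p>
<pre><code>m = df['pnl'].le(0)     # True where the streak is broken
groups = m.cumsum()     # same id for every row inside one winning streak
streaks = groups[~m]    # keep only the winning rows
print(streaks.value_counts().head(1))
# group id 4 appears 3 times -> the longest winning streak is 3 days
</code></pre>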
|
python|pandas|numpy|pandas-groupby|numpy-ndarray
| 0
|
8,271
| 51,635,290
|
Pandas - combine two columns
|
<p>I have 2 columns, which we'll call <code>x</code> and <code>y</code>. I want to create a new column called <code>xy</code>:</p>
<pre><code>x y xy
1 1
2 2
4 4
8 8
</code></pre>
<p><em>There shouldn't be any conflicting values, but if there are, y takes precedence. If it makes the solution easier, you can assume that <code>x</code> will always be <code>NaN</code> where <code>y</code> has a value.</em></p>
|
<p>It could be quite simple if your example is accurate:</p>
<pre><code>df = df.fillna(0)  # if the blanks are NaN this line is needed first (fillna returns a copy)
df['xy'] = df['x'] + df['y']
</code></pre>
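<p>If you want the stated precedence (<code>y</code> wins on any conflict) handled explicitly, one possible sketch:</p>
<pre><code>df['xy'] = df['y'].combine_first(df['x'])
</code></pre>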
|
python|pandas|dataframe
| 4
|
8,272
| 41,941,605
|
Fill values from one dataframe to another with matching IDs
|
<p>I have two pandas data frames, I want to get the sum of items_bought for each ID in DF1. Then add a column to DF2 containing the sum of items_bought calculated from DF1 with matching ID else fill it with 0. How can I do this in an elegant and efficient manner? </p>
<p>DF1</p>
<pre><code>ID | items_bought
1 5
3 8
2 2
3 5
4 6
2 2
</code></pre>
<p>DF2</p>
<pre><code>ID
1
2
8
3
2
</code></pre>
<p>Desired Result: DF2 Becomes</p>
<pre><code>ID | items_bought
1 5
2 4
8 0
3 13
2 4
</code></pre>
|
<pre><code>df1.groupby('ID').sum().loc[df2.ID].fillna(0).astype(int)
Out[104]:
items_bought
ID
1 5
2 4
8 0
3 13
2 4
</code></pre>
<ol>
<li>Work on df1 to calculate the sum for each <code>ID</code>.</li>
<li>The resulting dataframe is now indexed by <code>ID</code>, so you can select with <code>df2</code> IDs by calling <code>loc</code>.</li>
<li>Fill the gaps with <code>fillna</code>.</li>
<li><code>NA</code> are handled by float type. Now that they are removed, convert the column back to integer.</li>
</ol>
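<p>An equivalent sketch that keeps <code>DF2</code>'s shape intact without indexing tricks, using <code>map</code> (the <code>sums</code> name is just illustrative):</p>
<pre><code>sums = df1.groupby('ID')['items_bought'].sum()
df2['items_bought'] = df2['ID'].map(sums).fillna(0).astype(int)
</code></pre>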
|
python|pandas
| 3
|
8,273
| 41,896,995
|
Multiple filters Python Data.frame
|
<p>I'm pretty new to python. I'm trying to filter rows in a data.frame as I do in R. </p>
<pre><code>sub_df = df[df[main_id]==3]
</code></pre>
<p>works, but </p>
<pre><code>df[df[main_id] in [3,7]]
</code></pre>
<p>gives me error</p>
<blockquote>
<p>"The truth value of a Series is ambiguous"</p>
</blockquote>
<p>Can you please suggest me a correct syntax to write similar selections?</p>
|
<p>You can use pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow noreferrer"><code>isin</code></a> function. This would look like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'f']})
df[df['A'].isin([2, 3])]
</code></pre>
<p>giving:</p>
<pre><code> A B
1 2 b
2 3 f
</code></pre>
|
python|pandas|dataframe
| 3
|
8,274
| 42,041,151
|
numpy irfft by amplitude and phase spectrum
|
<p>How can I compute irfft if I have only the amplitude and phase spectrum of a signal? In the numpy docs I've only found irfft, which uses Fourier coefficients for this transformation.</p>
|
<p>If you have amplitude and phase vectors for a spectrum, you can convert them to a complex (IQ or Re,Im) vector by multiplying the cosine and sine of each phase value by its associated amplitude value (for each FFT bin with a non-zero amplitude, or vector-wise).</p>
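<p>A minimal sketch of that conversion, assuming <code>amp</code> and <code>phase</code> hold the half-spectrum that <code>rfft</code> would produce (the sample values are purely illustrative):</p>
<pre><code>import numpy as np

amp = np.array([1.0, 0.5, 0.25])            # illustrative amplitude spectrum
phase = np.array([0.0, np.pi/4, np.pi/2])   # illustrative phase spectrum

# recombine into complex coefficients: Re = A*cos(phi), Im = A*sin(phi)
coeffs = amp * np.exp(1j * phase)
signal = np.fft.irfft(coeffs)
</code></pre>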
|
python|numpy|fft|ifft
| 1
|
8,275
| 64,404,953
|
The fastest way to check all values from the list of coordinates
|
<p>I have a list of coordinates <code>a = [(1,2),(1,300),(2,3).....]</code>
These values are coordinates into a <code>1000 x 1000</code> NumPy array.</p>
<p>Let's say I want to sum all the values under these coordinates. Is there a faster way to do it than:</p>
<pre><code>sum([array[i[0],i[1]] for i in a])
</code></pre>
|
<p>Convert <code>a</code> into an index into <code>array</code> and then sum over the selected values. Example:</p>
<pre><code># Prepare sample array and indices
a = np.arange(10*10).reshape(10,10)
ind = [(1,0), (2, 4), (2,6), (7,7), (8,9), (9,3)]
# Cast list of coordinates into a form that will work for indexing
indx = np.split(np.array(ind), 2, axis = 1)
# A warning may be raised about not using tuples for indexing. You can use tuple(indx) to avoid that.
np.sum(a[indx])
</code></pre>
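<p>A slightly simpler vectorized variant of the same idea, assuming the coordinate list from the question:</p>
<pre><code>coords = np.array(ind)                       # shape (len(ind), 2)
total = a[coords[:, 0], coords[:, 1]].sum()  # fancy indexing, then one sum
</code></pre>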
|
performance|numpy
| 0
|
8,276
| 64,458,459
|
How to convert json dataframe to normal dataframe?
|
<p>I have a dataframe which has lots of JSON records inside.</p>
<p>for example :</p>
<pre><code>{"serial": "000000001fb105ea", "sensorType": "acceleration", "data": [1603261123.328814, 0.171875, -0.9609375, 0.0234375]}
{"serial": "000000001fb105ea", "sensorType": "acceleration", "data": [1603261125.0605137, 0.0859375, -0.984375, 0.0]}
{"serial": "000000001fb105ea", "sensorType": "strain", "data": [1603261126.3532753, 0.9649793604217437]}
{"serial": "000000001fb105ea", "sensorType": "acceleration", "data": [1603261127.6988888, 0.0390625, -1.0, 0.125]}
{"serial": "000000001fb105ea", "sensorType": "acceleration", "data": [1603261128.8530502, 0.078125, -0.9921875, 0.0]}
</code></pre>
<p>There are two types of data.Strain sensor and acceleration sensor.</p>
<p>I want to parse these JSON records and convert them to a normal form. I just need the data part of the JSON objects. As a result I should have 4 columns, one for each value in <code>data</code>.</p>
<pre><code>Date: 21.20.2020:09:18:46 x:0.171875 y:-0.9609375 z:0.0234375
</code></pre>
<p>I tried json_normalize but I got this error.</p>
<pre><code>AttributeError: 'str' object has no attribute 'itervalues'
</code></pre>
<p>How to parse data part to 4 column dataframe ?</p>
<p>thanks.</p>
|
<p>If input data are in <code>json</code> file use:</p>
<pre><code>cols = ['Date','x','y','z']
df = pd.DataFrame(pd.read_json('json.json', lines=True)['data'].tolist(), columns=cols)
df['Date'] = pd.to_datetime(df['Date'], unit='s')
print (df)
Date x y z
0 2020-10-21 06:18:43.328814030 0.171875 -0.960938 0.023438
1 2020-10-21 06:18:45.060513735 0.085938 -0.984375 0.000000
2 2020-10-21 06:18:46.353275299 0.964979 NaN NaN
3 2020-10-21 06:18:47.698888779 0.039062 -1.000000 0.125000
4 2020-10-21 06:18:48.853050232 0.078125 -0.992188 0.000000
</code></pre>
<p>If input is <code>DataFrame</code> with column <code>col</code>:</p>
<pre><code>cols = ['Date','x','y','z']
df = pd.DataFrame(pd.json_normalize(df['col'])['data'].tolist(), columns=cols)
df['Date'] = pd.to_datetime(df['Date'], unit='s')
print (df)
Date x y z
0 2020-10-21 06:18:43.328814030 0.171875 -0.960938 0.023438
1 2020-10-21 06:18:45.060513735 0.085938 -0.984375 0.000000
2 2020-10-21 06:18:46.353275299 0.964979 NaN NaN
3 2020-10-21 06:18:47.698888779 0.039062 -1.000000 0.125000
4 2020-10-21 06:18:48.853050232 0.078125 -0.992188 0.000000
</code></pre>
<p>EDIT:</p>
<p>Personally, saving a CSV with an <code>.xls</code> extension is not a good idea, because <code>read_excel</code> then raises a weird error, but you can use:</p>
<pre><code>import ast
df = pd.read_csv('15-10-2020-OO.xls')
cols = ['Date','x','y','z']
data = [x['data'] for x in df['Data'].apply(ast.literal_eval)]
df = pd.DataFrame(data, columns=cols)
df['Date'] = pd.to_datetime(df['Date'], unit='s')
print (df)
Date x y z
0 2020-10-15 07:21:16.159236193 0.085938 -0.972656 0.003906
1 2020-10-15 07:21:17.597931385 0.089844 -0.968750 0.003906
2 2020-10-15 07:21:18.838171959 0.089844 -0.972656 0.003906
3 2020-10-15 07:21:20.338105917 0.085938 -0.972656 0.003906
4 2020-10-15 07:21:21.768864155 0.089844 -0.984375 0.003906
... ... ... ...
8457 2020-10-15 08:59:57.907007933 0.085938 -0.972656 0.003906
8458 2020-10-15 08:59:58.371274233 0.089844 -0.976562 0.003906
8459 2020-10-15 08:59:58.833237648 0.085938 -0.976562 0.003906
8460 2020-10-15 08:59:59.313337088 1.517057 NaN NaN
8461 2020-10-15 08:59:59.863240004 0.089844 -0.968750 0.007812
[8462 rows x 4 columns]
</code></pre>
|
python|json|pandas
| 0
|
8,277
| 64,208,601
|
Creating a 3-D (or larger) diagonal NumPy array from diagonals
|
<p>Is there an efficient 'Numpy'-based solution to create a 3 (or higher) dimensional diagonal matrix?</p>
<p>More specifically, I am looking for a shorter (and perhaps more efficient) solution to replace the following:</p>
<pre><code>N = 100
M = 4
d = np.random.randn(N) # calculated in the real use case from other parameters
A = np.zeros((M, M, N), dtype=d.dtype)  # shape is passed as a tuple
for i in range(M):
A[i, i, :] = d
</code></pre>
<p>The above-mentioned solution will be slow if <code>M</code> is large, and I think not very memory-efficient, as <code>d</code> is copied <code>M</code> times in memory.</p>
|
<p>Here's one with <a href="https://numpy.org/doc/stable/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a> diag-view -</p>
<pre><code>np.einsum('iij->ij',A)[:] = d
</code></pre>
<p>Looking at the string notation, this also translates well from the iterative part : <code>A[i, i, :] = d</code>.</p>
<p>Generalize to ndarray with <code>ellipsis</code> -</p>
<pre><code>np.einsum('ii...->i...',A)[:] = d
</code></pre>
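<p>A quick usage sketch with the variables from the question (sizes are illustrative):</p>
<pre><code>N, M = 100, 4
d = np.random.randn(N)
A = np.zeros((M, M, N), dtype=d.dtype)

np.einsum('iij->ij', A)[:] = d   # writes d onto every (i, i, :) slice in-place
assert all(np.array_equal(A[i, i, :], d) for i in range(M))
</code></pre>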
|
python-3.x|numpy|numpy-ndarray
| 0
|
8,278
| 64,509,613
|
ValueError: No gradients provided for any variable: ['embedding/embeddings:0', ']
|
<p>I am new to TensorFlow 2 and I want to train a multi-input neural network in Keras/TensorFlow. This is my sample code:</p>
<pre><code>First_inputs = Input(shape=(2000, ),name="first")
Second_inputs = Input(shape=(4, ),name="second")
embedding_layer = Embedding(3,3, input_length=2000,)(First_inputs)
flatten = Flatten()(embedding_layer)
first_dense = Dense(neuronCount,kernel_initializer=initializer, )(flatten)
merge = concatenate([first_dense, Second_inputs])
drop = Dropout(dropout)(merge)
output = Dense(1, )(drop)
model = Model(inputs=[First_inputs, Second_inputs], outputs=output)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1,shuffle=True, random_state=42)
First_inputs =x_train[:,0:2000]
Second_inputs =x_train[:,2000:2004]
model.fit(([First_inputs, Second_inputs], y_train),validation_data=([First_inputs, Second_inputs], y_train),verbose=1,epochs=100,steps_per_epoch=209)
</code></pre>
<p>However, I get this error:</p>
<pre><code>ValueError: No gradients provided for any variable: ['embedding/embeddings:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0'].
</code></pre>
<p>Anybody knows what the problem is? Thanks!</p>
|
<p>Your data is numpy arrays, so you have to give the fit() method two separate arguments: a list of np.arrays as inputs and an np.array as labels (remove the tuple around the inputs):</p>
<pre><code>First_inputs = Input(shape=(2000, ),name="first")
Second_inputs = Input(shape=(4, ),name="second")
embedding_layer = Embedding(3,3, input_length=2000,)(First_inputs)
flatten = Flatten()(embedding_layer)
first_dense = Dense(neuronCount,kernel_initializer=initializer, )(flatten)
merge = concatenate([first_dense, Second_inputs])
drop = Dropout(dropout)(merge)
output = Dense(1, )(drop)
model = Model(inputs=[First_inputs, Second_inputs], outputs=output)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1,shuffle=True,
random_state=42)
First_inputs =x_train[:,0:2000]
Second_inputs =x_train[:,2000:2004]
model.fit([First_inputs, Second_inputs], y_train,validation_data=([First_inputs,
Second_inputs], y_train),verbose=1,epochs=100,steps_per_epoch=209)
</code></pre>
|
python|tensorflow|keras|deep-learning|tensorflow2.0
| 1
|
8,279
| 64,513,356
|
How to find highest and lowest value and aggregate into a string in Pandas, Pysimplegui
|
<p>I have a dataframe in pandas</p>
<p>This code is part of a function in a GUI, and I'm trying to create a one-line string that mentions the highest count of COVID cases in a country within a continent, where the continent is selected by the user.</p>
<p>This is the dataset I am using: <a href="https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv" rel="nofollow noreferrer">https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv</a></p>
<p>Once user selects, a continent, for e.g. Asia</p>
<p>My graph shows the total number of cases for all the countries in Asia.
I'm trying to add one more line at plt.xlabel that summarizes the country with the highest number of cases and the country with the lowest number of cases.</p>
<p>expected output:
In Asia, X country has the highest number of cases with xx,xxx cases and X country has the lowest number of cases with x,xxx cases.</p>
<p>Here is my code:</p>
<pre><code>import PySimpleGUI as sg
import matplotlib.pyplot as plt
def graphUI(dfIn):
# getting the continent and location to use as value for dropdown search
df_continent = dfIn[1].drop_duplicates()
df_location = dfIn[2].drop_duplicates()
#for LinePlot Options
linePlotList = ['New Case', 'Total Death', 'New Death']
layout = [
[sg.Text('Display Graph')],
[sg.Text('Continent:'),
sg.Combo(df_continent.values[1:-1].tolist(), default_value="Continent", key="-continent-",change_submits=True)],
[sg.Text('Location: '), sg.Combo(df_location.values[1:].tolist(), default_value="Location", key="-location-")],
[sg.Text('Only For Line Plot: '), sg.Combo(linePlotList, default_value="New Case", key="-linePlot-")],
[sg.Button('Bar plot', key='bar', tooltip='View Graph'),
sg.Button('Line plot', key='line', tooltip='View Graph'),
sg.Button('Cancel')]
]
window = sg.Window('Search and Filter', layout)
while True:
event, values = window.read()
#on combo continent changes value, it will run the code below
if event == "-continent-":
if values['-continent-'] != df_continent.values[0]:
#run checkBoxUpdate function to update the list of country inside the selected continent
formUpdate = checkBoxupdate(dfIn, values['-continent-'])
#update the window by finding the element of location combo and update the latest country value
window.FindElement('-location-').Update(values=formUpdate.values[1:].tolist())
# Once user press Ok button, get all values and compare to df
if event == "bar":
searchedDf = dfIn[::]
if values['-continent-'] != df_continent.values[0]:
barchart(searchedDf,values)
if event == "line":
searchedDf = dfIn[::]
if values['-location-'] != df_continent.values[1]:
selectedLineChoice = values['-linePlot-']
linePlot(searchedDf,values,selectedLineChoice)
elif event == "Cancel" or event is None:
window.close()
return dfIn
def barchart(searchedDf,values) :
# getting the continent and location to use as value for dropdown search
searchedDf = searchedDf[searchedDf.isin([values['-continent-']]).any(axis=1)]
#drop duplicates country and keep latest
searchedDf = searchedDf.drop_duplicates(subset=[2], keep='last')
allcountry = list(searchedDf[2])
highestInfected = list(map(int, searchedDf[4]))
# Access the values which were entered and store in lists
plt.figure(figsize=(10, 5))
plt.barh(allcountry, highestInfected)
    #set axis label to smaller size
plt.tick_params(axis='y', which='major', labelsize=6)
plt.suptitle('Total Case of ' + values['-continent-'])
plt.xlabel('In ' + values['-continent-'] + 'has the most number of cases.' )
plt.show()
def linePlot(searchedDf, values,selectedLineChoice):
# getting the continent and location to use as value for dropdown search
searchedDf = searchedDf[searchedDf.isin([values['-location-']]).any(axis=1)]
eachDate = list(searchedDf[3])
if selectedLineChoice == 'New Case':
selectedLineChoiceValues = list(map(int, searchedDf[5]))
if selectedLineChoice == 'Total Death':
selectedLineChoiceValues = list(map(int, searchedDf[6]))
if selectedLineChoice == 'New Death':
selectedLineChoiceValues = list(map(int, searchedDf[7]))
#set frequency of the date on x axis to appear on lower freq
frequency = 50
plt.plot(eachDate , selectedLineChoiceValues)
plt.xticks(eachDate[::frequency])
plt.xticks(rotation=45)
plt.tick_params(axis='x', which='major', labelsize=6)
plt.suptitle('Total New Case of ' + values['-location-'])
plt.ylabel(selectedLineChoice, fontsize=10)
plt.show()
def checkBoxupdate(dfIn, input):
#search the DF for the selected continents
searchedDf = dfIn[dfIn.isin([input]).any(axis=1)]
#drop duplicates country of the selected continenets and return
df_location = searchedDf[2].drop_duplicates()
return df_location
</code></pre>
|
<p>Understanding that the subject of this question is to display the maximum and minimum values on the x-axis labels for the selected continent, I created the following code.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import requests
url = 'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv'
df = pd.read_csv(url, sep=',')
df.fillna(0, inplace=True)
continent = 'Asia'
country = 'Indonesia'
searchedDf = df.copy()
searchedDf = searchedDf[searchedDf.isin([continent]).any(axis=1)]
# total_cases -> max, min
casesDf = searchedDf.copy()
cases_ = casesDf.groupby(['location'])[['date','total_cases']].last().reset_index()
cases_max_df = cases_[cases_['total_cases'] == max(cases_['total_cases'])]
cases_min_df = cases_[cases_['total_cases'] == min(cases_['total_cases'])]
searchedDf = searchedDf[searchedDf.isin([country]).any(axis=1)]
#drop duplicates country and keep latest
searchedDf = searchedDf.drop_duplicates(subset=['continent'], keep='last')
# print(searchedDf)
allcountry = list(searchedDf['location'])
highestInfected = list(map(int, searchedDf['total_cases']))
# Access the values which were entered and store in lists
plt.figure(figsize=(10, 5))
plt.barh(allcountry, highestInfected)
#set axis label to smaller size
plt.tick_params(axis='y', which='major', labelsize=16)
plt.suptitle('Total Case of ' + continent)
labels = ('In ' + continent + ' has the most number of cases.\n'
+ str(cases_max_df['location'].values[0]) + ':' + str(cases_max_df['total_cases'].values[0]) + '\n'
+ str(cases_min_df['location'].values[0]) + ':' + str(cases_min_df['total_cases'].values[0]))
plt.xlabel(labels, fontsize=18)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/l727J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l727J.png" alt="enter image description here" /></a></p>
<pre><code># optional: aggregate total cases by month
searchedDf['date'] = pd.to_datetime(searchedDf['date'])
searchedDf['yyyy-mm'] = searchedDf['date'].dt.strftime('%Y-%m')
month_gb = searchedDf.groupby('yyyy-mm')['total_cases'].sum()
</code></pre>
|
python|pandas|matplotlib|pysimplegui
| 0
|
8,280
| 47,609,730
|
Why does this numpy attribute suddenly become shared between instances
|
<p>I stumbled upon odd behavior when using python 3.6 and numpy 1.12.1 under Linux.</p>
<p>I have an attribute <code>self.count</code> which I initialize with <code>np.array([0.0, 0.0, 0.0])</code>. I would expect that <code>self.count</code> would behave like any other attribute and have its own value per class instance.</p>
<p>However, in the code below, in the <code>addPixel</code> method when I use</p>
<pre><code>self.count += (1.0, 1.0, 1.0)
</code></pre>
<p>the self.count attribute gets increased for all instances of the class <code>CumulativePixel</code>. I want to understand why this happens and why it's fixed when I do:</p>
<pre><code>self.count = self.count + (1.0, 1.0, 1.0)
</code></pre>
<p>instead.</p>
<pre><code>import numpy as np
class CumulativePixel(object):
'''
class adds rgb triples and counts how many have been added
'''
def __init__(self, rgb = (0,0,0), count=np.array([0.0, 0.0, 0.0]) ):
'''
Constructor
rgb sum is stored as two values. The integer part plus float part
they are stored in a 2x3 matrix where the first row are integer
parts and the second row are float parts. The code always tries to
make sure that float part is below 1.0
'''
self.rgb = np.array( [np.fmod(rgb, (1,1,1)).astype(float), (rgb - np.fmod(rgb, (1,1,1)))] )
self.count = count
@staticmethod
#for now only works for positve numbers
def _pixeladdition (disassembled, rgb):
disassembled += np.array( [np.fmod(rgb, (1,1,1)).astype(float), (rgb - np.fmod(rgb, (1,1,1)))] )
fpart = np.fmod(disassembled[0], (1,1,1))
overflowpart = disassembled[0]-fpart
disassembled[0]=fpart
disassembled[1]+=overflowpart
return disassembled
def addPixel(self, rgb):
self.rgb = self._pixeladdition(self.rgb, rgb)
# += would globalize self.count into all instances! why ???
self.count = self.count + (1.0, 1.0, 1.0)
def getAvgPixel(self, multiply = (1.0, 1.0, 1.0), add = (0.0, 0.0, 0.0), roundpx = False):
if 0.0 in self.count: return (0.0, 0.0, 0.0)
averagepixel = np.sum(self._pixeladdition((self.rgb/self.count), add)*multiply, axis=0)
if roundpx: averagepixel = np.round(averagepixel).astype(int)
return averagepixel
def getSums(self):
return np.sum(self.rgb, axis=0)
def __str__(self):
return "count: " + str(self.count) + " integers: " + str(self.rgb[1].tolist())+ " floats: " + str(self.rgb[0].tolist())
def __repr__(self):
return "CumulativePixel(rgb = " + str(tuple(np.sum(self.rgb, axis=0))) + ", count=" + str(self.count) +")"
</code></pre>
<p><strong>Edit:</strong>
I create instances of this class (in yet another class) as follows:</p>
<pre><code>self.pixeldata = [CumulativePixel() for i in range(self.imagewidth*self.imageheight)]
</code></pre>
|
<p>This is a common bug, most often seen when using a list as the default value for a function.</p>
<pre><code>count=np.array([0.0, 0.0, 0.0])
</code></pre>
<p>This default array is created only once, when the <code>__init__</code> method is defined. So all instances share the same <code>count</code> attribute, the same array. They don't each get a fresh array.</p>
<p>When you do <code>self.count += ...</code> you modify that shared array in-place.</p>
<p>With <code>self.count = self.count + ...</code>, you create a new array, so the change in one instance doesn't affect the others.</p>
<p>It's good practice to do something like this:</p>
<pre><code> def __init__(self, count=None):
     if count is None:
         count = np.array([0.0, 0.0, 0.0])
     self.count = count
</code></pre>
<p>Now the default value will be fresh, unique for each instance.</p>
|
python-3.x|numpy|attributes|shared-ptr|instantiation
| 1
|
8,281
| 58,751,186
|
python counting examples based on criteria
|
<p>I want to count in a dataframe how many examples have the same criteria. The criteria will be selected by me before counting the examples. </p>
<p>I want to use it with groupby, but I didn't find a solution</p>
<pre class="lang-py prettyprint-override"><code>df_education = df.groupby(['Education','Self_Employed',"Loan_Status"], axis=0).count()
</code></pre>
|
<p>did you try:</p>
<pre class="lang-py prettyprint-override"><code>df_education = df.groupby(
["Education", "Self_Employed", "Loan_Status"],
axis=0
).size()
</code></pre>
<p>Note that <code>size()</code> counts all rows in each group, whereas <code>count()</code> counts the non-NaN values per column, which is why it returns one count column per remaining column.</p>
|
python|pandas
| 2
|
8,282
| 58,945,461
|
Cannot work out why I am getting this error.|TypeError: unsupported operand type(s) for /: 'list' and 'int'
|
<p>I have a project for school and I need to get the historical data from Yahoo Finance and hen perform some calculations on it and write a report on it.</p>
<pre><code>import numpy as np
import csv
import pandas_datareader as pdr
def dataanalysis(stock1, comp1, comp2, comp3): # Function to download data from Yahoo
stk1 = pdr.get_data_yahoo(str(stock1), start="1999-11-01", end="2019-11-01") # Downloading 20 years of data
cmp1 = pdr.get_data_yahoo(str(comp1), start="1999-11-01", end="2019-11-01")
cmp2 = pdr.get_data_yahoo(str(comp2), start="1999-11-01", end="2019-11-01")
cmp3 = pdr.get_data_yahoo(str(comp3), start="1999-11-01", end="2019-11-01")
stk1.sort_index(ascending=False, inplace=True) # Put dem badboys in descending order
cmp1.sort_index(ascending=False, inplace=True)
cmp2.sort_index(ascending=False, inplace=True)
cmp3.sort_index(ascending=False, inplace=True)
stk1['Returns'] = (np.log(stk1['Close'] / stk1['Close'].shift(-1)))
cmp1['Returns'] = (np.log(cmp1['Close'] / cmp1['Close'].shift(-1)))
cmp2['Returns'] = (np.log(cmp2['Close'] / cmp2['Close'].shift(-1)))
cmp3['Returns'] = (np.log(cmp3['Close'] / cmp3['Close'].shift(-1)))
stk1.to_csv(str(stock1) + '.csv') # Out putting data to csv files
cmp1.to_csv(str(comp1) + '.csv')
cmp2.to_csv(str(comp2) + '.csv')
cmp3.to_csv(str(comp3) + '.csv')
stk1_returns = list(stk1['Returns']) # Creating a list from the 'Returns' column
cmp1_returns = list(cmp1['Returns'])
cmp2_returns = list(cmp2['Returns'])
cmp3_returns = list(cmp3['Returns'])
del stk1_returns[-1], cmp1_returns[-1], cmp2_returns[-1], cmp3_returns[-1]
stk1_ret_avg, cmp1_ret_avg, cmp2_ret_avg, cmp3_ret_avg = np.average(stk1_returns), np.average(cmp1_returns), np.average(cmp2_returns), np.average(cmp3_returns)
stk1_volit, cmp1_volit, cmp2_volit, cmp3_volit = np.std(stk1_returns), np.std(cmp1_returns), np.std(cmp2_returns), np.std(cmp3_returns)
tickers = ["", str(stock1), str(comp1), str(comp2), str(comp3)]
averages = ["Averages = ", stk1_ret_avg, cmp1_ret_avg, cmp2_ret_avg, cmp3_ret_avg]
volatility = ["Volatility = ", stk1_volit, cmp1_volit, cmp2_volit, cmp3_volit]
correlations = np.corrcoef([stk1_returns, cmp1_returns, cmp2_returns, cmp3_returns])
data_analyzed = [tickers, averages, volatility, correlations]
# print(data_analyzed)
return data_analyzed
def print_results(group):
ticker_list = list(group[0])
with open(str(ticker_list[1]) + " and Comp"".csv", "w") as group_anal:
groupCSV = csv.writer(group_anal)
for i in range(3):
groupCSV.writerow(group[i])
for i in range(2):
groupCSV.writerow([])
groupCSV.writerow(["Correlation Matrix"])
groupCSV.writerow(ticker_list[1:5])
for r in group[3]:
groupCSV.writerow(r)
group1 = dataanalysis("AAPL", "AMZN", "INTC", "MSFT") # Running the function
# group2 = dataanalysis("BARC.L", "BK", "GS", "DB")
group3 = dataanalysis("BRK-B", "ALL", "PGR", "MKL")
group4 = dataanalysis("MCD", "SBUX", "YUM", "WEN")
# group5 = dataanalysis("TSCO.L", "CA.PA", "SBRY.L", "WMT")
group6 = dataanalysis("WWE", "DISH", "DIS", "CMCSA")
print_results(group1)
# print_results(group2)
print_results(group3)
print_results(group4)
# print_results(group5)
print_results(group6)
</code></pre>
<p>This runs no problem, but if I include the other commented out group2 and 5 I get the following error:</p>
<pre><code>File "C:/Users/HHF/OneDrive/Programming/Python/Assignments/Final Assignment/Financial Records/calculations.py", line 62, in <module>
group2 = dataanalysis("BARC.L", "BK", "GS", "DB")
File "C:/Users/HHF/OneDrive/Programming/Python/Assignments/Final Assignment/Financial Records/calculations.py", line 40, in dataanalysis
correlations = np.corrcoef([stk1_returns, cmp1_returns, cmp2_returns, cmp3_returns])
"TypeError: unsupported operand type(s) for /: 'list' and 'int'" error.
</code></pre>
<p>I have tried everything I can think of and nothing works. If I remove all the tickers with a period symbol it works, but I am not sure why this is the case because if I print stk1_returns it seems to be a regular list.</p>
<p>Thank you so much for any help you can give me. </p>
|
<p><strong>One of the things you are passing to <code>np.corrcoef</code> is not what you think it is.</strong></p>
<p>For example, this throws the same error:</p>
<pre><code>import numpy as np
np.corrcoef([[[1,2,3,4]], [4,5,6,7]])
</code></pre>
<p>Notice that the first 'array' is actually a list of one list. Maybe the bit where you cast <code>stk1['Returns']</code> etc to lists is going wrong. (I would stick to arrays if I were you.)</p>
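<p>One quick check, reusing the variable names from your function: lists of unequal length cannot be stacked into a 2-D float array, and exchange-suffixed tickers such as <code>BARC.L</code> may well have a different number of trading days, leaving their return lists shorter:</p>
<pre><code>for name, lst in [('stk1', stk1_returns), ('cmp1', cmp1_returns),
                  ('cmp2', cmp2_returns), ('cmp3', cmp3_returns)]:
    print(name, len(lst))  # unequal lengths would explain the failure
</code></pre>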
|
numpy|typeerror
| 0
|
8,283
| 70,371,054
|
Finding relevant points in the given curve
|
<p>Consider the following Python code which plots a curve and analyzes it to find some points:</p>
<pre><code>%matplotlib inline
import numpy as np
from numpy.polynomial.polynomial import Polynomial
from scipy.interpolate import UnivariateSpline
from scipy.signal import savgol_filter
import scipy.stats
import scipy.optimize
import matplotlib.pyplot as plt
# Create curve X axis.
x = np.arange(10000)
y = np.zeros_like(x, dtype=float)
# Create a straight line for the first 1000 points.
y[:1000] = (10.0,) * 1000
# The remaining points, create an exponential.
y[1000:] = 20 * np.exp(-x[1000:]/1200.0)
# Add noise.
y += np.random.normal(0, 0.5, x.size)
# Create polynomial.
poly = Polynomial.fit(x, y, 15)
# Create Y values for the polynomial.
py = poly(x)
# Calculate first and second derivatives.
dxdy = np.gradient(py)
dx2dy2 = np.gradient(dxdy)
# Plot original curve and fit on top.
plt.plot(x, y, '', color='black')
plt.plot(x, py, '', color='red')
# Plot first order and second order derivatives.
plt.plot(x, dxdy * 100, '', color='blue')
plt.plot(x, dx2dy2, '', color='green')
</code></pre>
<p><a href="https://i.stack.imgur.com/IASah.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IASah.png" alt="enter image description here" /></a></p>
<p>I have marked in the image above the points I want to calculate:</p>
<p>I think I can use the first derivative to calculate the first one, which is the start of the transition.</p>
<p>I am not so sure how to calculate the second one, which is the end of the transition, or when the curve becomes flat. I have tried to calculate using the average of the last 100 points in the curve, and finding the first value in the curve below that average, however, it does not seem very reliable.</p>
<p>EDIT1:</p>
<p>While investigating the first derivative, I came up with the following potential solution: finding the change of sign on the left and right sides of the peak. I illustrate with images of the first derivative and of the signal and fit below:</p>
<p><a href="https://i.stack.imgur.com/S3UE5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3UE5.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/MgNlF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MgNlF.png" alt="enter image description here" /></a></p>
|
<p>I prefer to work on filtered (smoothed) rather than interpolated data.</p>
<p>First point I find by:</p>
<ul>
<li>finding maximum of the smoothed data</li>
<li>finding first point whose value is 90% of the maximum value</li>
<li>going back to find first point whose derivative is >= 0</li>
</ul>
<p>Second point I find by</p>
<ul>
<li>finding minimum of the smoothed data</li>
<li>finding first point that has value <code>minimum + (maximum - minimum) * 1%</code></li>
</ul>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage
# Create curve X axis.
x = np.arange(10000)
y = np.zeros_like(x, dtype=float)
# Create a straight line for the first 1000 points.
y[:1000] = (10.0,) * 1000
# The remaining points, create an exponential.
y[1000:] = 20 * np.exp(-x[1000:]/1200.0)
# Add noise.
y += np.random.normal(0, 0.5, x.size)
y_smoothed = scipy.ndimage.uniform_filter1d(y, 250, mode='reflect')
diff_y_smoothed = np.diff(y_smoothed)
minimum = np.min(y_smoothed)
maximum = np.max(y_smoothed)
almost_first_point_idx = np.where(y_smoothed < 0.9 * maximum)[0][0]
first_point_idx = almost_first_point_idx - np.where(diff_y_smoothed[:almost_first_point_idx][::-1] >= 0)[0][0]
second_point_idx = np.where(y_smoothed < 0.01 * (maximum - minimum) + minimum)[0][0]
# Plot original curve and fit on top.
plt.plot(x, y, '', color='black')
plt.plot(x, y_smoothed, '', color='red')
plt.axvline(x[first_point_idx])
plt.axvline(x[second_point_idx])
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/gCSKx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCSKx.png" alt="enter image description here" /></a></p>
<p>You may want to tweak these parameters a bit, such as the running-window size (hardcoded to 250) and the 1% threshold used for finding the second point.</p>
|
python|numpy|machine-learning|scipy|signal-processing
| 1
|
8,284
| 56,392,463
|
Python | Reading JSON files and applying simple algorithm on each iteratively into a dataframe
|
<p>we have a large json file that takes too long to be read with pd.read_json. </p>
<p>What we want to do initially is:</p>
<pre><code># Load the file
df_view = pd.read_json('/path/to/file', lines=True)
# Create a new feature using the above dataframe
df_nb_view = df_view[['userid','itemid']]
df_nb_view = df_nb_view.groupby('userid').count()
df_nb_view.rename(index=str, columns = {"itemid":'item_viewed'}, inplace=True)
</code></pre>
<p>So I've divided the dataset into subsets stored in one folder, and I would like to read them iteratively, do the work above on each subset, and concatenate the results at each step.</p>
<p>Hope this is clear enough.</p>
<p>I started out with this so as to read each file into one final df, but not sure how to create the new features in the process.</p>
<pre><code>files = []
for file in os.listdir("/path/to/folder"):
if file.endswith(".json"):
files.append(os.path.join("/path/to/folder", file))
for file in files:
with codecs.open(file,'r','utf-8') as f:
        df_view = json.load(f, encoding='utf-8')
</code></pre>
<p>Thanks a lot in advance.</p>
|
<p>If I understand correctly, you want to read and process the files in chunks.
If so, create a final result dataframe and reassign it on each iteration (note that <code>DataFrame.append</code> returns a new frame rather than modifying in place):</p>
<pre><code>final_df = pd.DataFrame()
for filename in files:
    df_view = pd.read_json(filename, lines=True)
    df_nb_view = df_view[['userid','itemid']]
    df_nb_view = df_nb_view.groupby('userid').count()
    df_nb_view.rename(index=str, columns = {"itemid":'item_viewed'}, inplace=True)
    final_df = final_df.append(df_nb_view)  # append returns a new frame
</code></pre>
|
python|json|pandas|machine-learning
| 0
|
8,285
| 56,042,548
|
How to convert a pandas time series with hour (h) as index unit into pandas datetime format?
|
<p>I am working on time-series data, where my pandas dataframe has indices specified in hours, like this:</p>
<pre><code>[0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, ...]
</code></pre>
<p>This goes on for a few thousand hours. I know that the first measurement was taken on, let's say, <code>May 1, 2017 12:00</code>. How do I use this information to turn my indices into pandas datetime format?</p>
|
<p>You can convert the hour offsets by passing the start timestamp to the <code>origin</code> parameter of <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>, together with <code>unit='h'</code>, to build the <code>DatetimeIndex</code>:</p>
<pre><code>idx = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4]
df = pd.DataFrame({'a':range(13)}, index=idx)
start = 'May 1, 2017 12:00'
df.index = pd.to_datetime(df.index, origin=start, unit='h')
print (df)
a
2017-05-01 12:00:00 0
2017-05-01 12:12:00 1
2017-05-01 12:24:00 2
2017-05-01 12:36:00 3
2017-05-01 12:48:00 4
2017-05-01 13:00:00 5
2017-05-01 13:12:00 6
2017-05-01 13:24:00 7
2017-05-01 13:36:00 8
2017-05-01 13:48:00 9
2017-05-01 14:00:00 10
2017-05-01 14:12:00 11
2017-05-01 14:24:00 12
</code></pre>
|
python|pandas|datetime|time-series
| 2
|
8,286
| 55,827,792
|
Pivoting a table partially in Pandas
|
<p>I have a table containing user reviews (numbers totally made-up):</p>
<pre><code>| user_id | vote | votes_for_user | average_user_vote | ISBN_categ |
213 4.5 12 3.4 1
563 3.7 74 2.3 2
213 1.2 12 3.6 3
213 3.2 74 2.1 2
213 1.9 12 3.8 4
563 1.4 74 2.6 1
563 5.0 74 2.9 4
</code></pre>
<p>I want to place the <code>vote</code> of every user into a corresponding column, headed by the <code>ISBN_categ</code> value, with 0 where no votes were given.</p>
<pre><code>| user_id | votes_for_user | average_user_vote | ISBN_cat_1 | ISBN_cat_2 | ISBN_cat_3 | ISBN_cat_4 |
213 12 3.4 4.5 3.2 1.2 1.9
563 74 2.3 1.4 3.7 0.0 5.0
</code></pre>
<p>Notice how, due to the fact that user 563 did not vote for book number 3 (ISBN_cat_3 in the second table or 3 in ISBN_categ in the first table), the assigned value is 0.0 </p>
<p>I understand this is some kind of pivoting of the table, however I can't find anything similar in the Pandas documentation. </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>DataFrame.pivot</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>DataFrame.add_prefix</code></a> first and then remove duplicates by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a> if necessary and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> together:</p>
<pre><code>df1 = df.pivot('user_id','ISBN_categ','vote').fillna(0).add_prefix('ISBN_cat_')
df = df.drop_duplicates('user_id').join(df1, on='user_id').drop('vote', axis=1)
print (df)
user_id votes_for_user average_user_vote ISBN_categ ISBN_cat_1 \
0 213 12 3.4 1 4.5
1 563 74 2.3 2 1.4
ISBN_cat_2 ISBN_cat_3 ISBN_cat_4
0 3.2 1.2 1.9
1 3.7 0.0 5.0
</code></pre>
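<p>To reproduce the desired output exactly (with the leftover <code>ISBN_categ</code> column dropped as well), a small variant of the same idea:</p>
<pre><code>df1 = df.pivot('user_id','ISBN_categ','vote').fillna(0).add_prefix('ISBN_cat_')
out = (df.drop_duplicates('user_id')
         .drop(['vote', 'ISBN_categ'], axis=1)
         .join(df1, on='user_id'))
</code></pre>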
|
pandas|dataframe|pivot
| 1
|
8,287
| 55,670,244
|
Why is an OOM happening on my model init()?
|
<p>A single line in my model, <code>tr.nn.Linear(hw_flat * num_filters*8, num_fc)</code>, is causing an OOM error on initialization of the model. Commenting it out removes the memory issue.</p>
<pre><code>import torch as tr
from layers import Conv2dSame, Flatten
class Discriminator(tr.nn.Module):
def __init__(self, cfg):
super(Discriminator, self).__init__()
num_filters = 64
hw_flat = int(cfg.hr_resolution[0] / 2**4)**2
num_fc = 1024
self.model = tr.nn.Sequential(
# Channels in, channels out, filter size, stride, padding
Conv2dSame(cfg.num_channels, num_filters, 3),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters, num_filters, 3, 2),
tr.nn.BatchNorm2d(num_filters),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters, num_filters*2, 3),
tr.nn.BatchNorm2d(num_filters*2),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*2, num_filters*2, 3, 2),
tr.nn.BatchNorm2d(num_filters*2),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*2, num_filters*4, 3),
tr.nn.BatchNorm2d(num_filters*4),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*4, num_filters*4, 3, 2),
tr.nn.BatchNorm2d(num_filters*4),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*4, num_filters*8, 3),
tr.nn.BatchNorm2d(num_filters*8),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*8, num_filters*8, 3, 2),
tr.nn.BatchNorm2d(num_filters*8),
tr.nn.LeakyReLU(),
Flatten(),
tr.nn.Linear(hw_flat * num_filters*8, num_fc),
tr.nn.LeakyReLU(),
tr.nn.Linear(num_fc, 1),
tr.nn.Sigmoid()
)
self.model.apply(self.init_weights)
def forward(self, x_in):
x_out = self.model(x_in)
return x_out
def init_weights(self, layer):
if type(layer) in [tr.nn.Conv2d, tr.nn.Linear]:
tr.nn.init.xavier_uniform_(layer.weight)
</code></pre>
<p>This is strange, as hw_flat = 96*96 = 9216, and num_filters*8 = 512, so hw_flat * num_filters*8 = 4718592, which is the number of parameters in that layer. I have confirmed this calculation, as changing the layer to <code>tr.nn.Linear(4718592, num_fc)</code> results in the same output.</p>
<p>To me this makes no sense as dtype=float32, so the expected size of this would be 32*4718592 = 150,994,944 bytes. This is equivalent to about 150mb.</p>
<p>Error message is:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 116, in <module>
main()
File "main.py", line 112, in main
srgan = SRGAN(cfg)
File "main.py", line 25, in __init__
self.discriminator = Discriminator(cfg).to(device)
File "/home/jpatts/Documents/ECE/ECE471-SRGAN/models.py", line 87, in __init__
tr.nn.Linear(hw_flat * num_filters*8, num_fc),
File "/home/jpatts/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 51, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
RuntimeError: $ Torch: not enough memory: you tried to allocate 18GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201
</code></pre>
<p>I am only running batch sizes of 1 as well (not that that affects this error), with overall input shape to the network being (1, 3, 1536, 1536), and shape after flatten layer being (1, 4718592).</p>
<p>Why is this happening?</p>
|
<p><strong>Your linear layer is quite large</strong> - it does, in fact, need at least 18GB of memory. (Your estimate is off for two reasons: (1) a <code>float32</code> takes 4 bytes of memory, not 32, and (2) you didn't multiply by the output size.)</p>
<p>From the <a href="https://pytorch.org/docs/stable/notes/faq.html" rel="nofollow noreferrer">PyTorch documentation FAQs</a>:</p>
<blockquote>
<p>Don’t use linear layers that are too large. A linear layer <code>nn.Linear(m, n)</code> uses <code>O(n*m)</code>
memory: that is to say, the memory requirements of the weights scales quadratically with
the number of features. It is very easy to blow through your memory this way (and
remember that you will need at least twice the size of the weights, since you also need
to store the gradients.)</p>
</blockquote>
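<p>Concretely, for the layer in question, a quick back-of-the-envelope check:</p>
<pre><code>in_features = 4718592      # hw_flat * num_filters * 8 = 9216 * 512
out_features = 1024        # num_fc
weight_bytes = in_features * out_features * 4   # float32 = 4 bytes
print(weight_bytes / 1024**3)                   # 18.0 -> 18 GiB for the weights alone
</code></pre>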
|
python-3.x|pytorch
| 1
|
8,288
| 55,583,130
|
Tensorflow 2.0 Keras Model subclassing
|
<p>I'm trying to implement a simple UNet-like model using the model subclassing method. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow import keras as K
class Enc_block(K.layers.Layer):
def __init__(self, in_dim):
super(Enc_block, self).__init__()
self.conv_layer = K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
self.batchnorm_layer = K.layers.BatchNormalization()
self.pool_layer = K.layers.SeparableConv2D(in_dim,3, padding='same',strides=2, activation='relu')
def call(self, x):
x = self.conv_layer(x)
x = self.batchnorm_layer(x)
x = self.conv_layer(x)
x = self.batchnorm_layer(x)
return self.pool_layer(x), x
class Dec_block(K.layers.Layer):
def __init__(self, in_dim):
super(Dec_block, self).__init__()
self.conv_layer = K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
self.batchnorm_layer = K.layers.BatchNormalization()
def call(self, x):
x = self.conv_layer(x)
x = self.batchnorm_layer(x)
x = self.conv_layer(x)
x = self.batchnorm_layer(x)
return x
class Bottleneck(K.layers.Layer):
def __init__(self, in_dim):
super(Bottleneck, self).__init__()
self.conv_1layer = K.layers.SeparableConv2D(in_dim,1, padding='same', activation='relu')
self.conv_3layer = K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
self.batchnorm_layer = K.layers.BatchNormalization()
def call(self, x):
x = self.conv_1layer(x)
x = self.batchnorm_layer(x)
x = self.conv_3layer(x)
x = self.batchnorm_layer(x)
return x
class Output_block(K.layers.Layer):
def __init__(self, in_dim):
super(Output_block, self).__init__()
self.logits = K.layers.SeparableConv2D(in_dim,3, padding='same', activation=None)
self.out = K.layers.Softmax()
def call(self, x):
x_logits = self.logits(x)
x = self.out(x_logits)
return x_logits, x
class UNetModel(K.Model):
def __init__(self,in_dim):
super(UNetModel, self).__init__()
self.encoder_block = Enc_block(in_dim)
self.bottleneck = Bottleneck(in_dim)
self.decoder_block = Dec_block(in_dim)
self.output_block = Output_block(in_dim)
def call(self, inputs, training=None):
x, x_skip1 = self.encoder_block(32)(inputs)
x, x_skip2 = self.encoder_block(64)(x)
x, x_skip3 = self.encoder_block(128)(x)
x, x_skip4 = self.encoder_block(256)(x)
x = self.bottleneck(x)
x = K.layers.UpSampling2D(size=(2,2))(x)
x = K.layers.concatenate([x,x_skip4],axis=-1)
x = self.decoder_block(256)(x)
x = K.layers.UpSampling2D(size=(2,2))(x) #56x56
x = K.layers.concatenate([x,x_skip3],axis=-1)
x = self.decoder_block(128)(x)
x = K.layers.UpSampling2D(size=(2,2))(x) #112x112
x = K.layers.concatenate([x,x_skip2],axis=-1)
x = self.decoder_block(64)(x)
x = K.layers.UpSampling2D(size=(2,2))(x) #224x224
x = K.layers.concatenate([x,x_skip1],axis=-1)
x = self.decoder_block(32)(x)
x_logits, x = self.output_block(2)(x)
return x_logits, x
</code></pre>
<p>I am getting the following error:</p>
<pre><code>ValueError: Input 0 of layer separable_conv2d is incompatible with the layer: expected ndim=4, found ndim=0. Full shape received: []
</code></pre>
<p>I'm not sure if this is the correct way to implement a network in tf.keras.
The idea was to implement encoder and decoder blocks by subclassing keras layers and subclassing the Model later.</p>
|
<p>Take a look at this line from <code>UNetModel</code> class:</p>
<pre><code>x, x_skip1 = self.encoder_block(32)(inputs)
</code></pre>
<p>where <code>self.encoder_block()</code> is defined by</p>
<pre><code>self.encoder_block = Enc_block(in_dim)
</code></pre>
<p><code>encoder_block</code> is an instance of a class. By doing <code>self.encoder_block(32)</code> you are invoking the <code>__call__()</code> method of the <code>Enc_block</code> class, which expects to receive a batch of image inputs of <code>rank=4</code>. Instead you're passing the integer <code>32</code> of <code>rank=0</code>, and you get a <code>ValueError</code> which says exactly what I've just explained: <code>expected ndim=4, found ndim=0</code>. What you probably intended to do is:</p>
<pre class="lang-py prettyprint-override"><code>x, x_skip1 = self.encoder_block(inputs)
</code></pre>
<p>You repeat the same mistake in the subsequent lines as well. There are additional errors where you define the same <code>in_dim</code> for every custom layer:</p>
<pre class="lang-py prettyprint-override"><code>self.encoder_block = Enc_block(in_dim)
self.bottleneck = Bottleneck(in_dim)
self.decoder_block = Dec_block(in_dim)
self.output_block = Output_block(in_dim)
</code></pre>
<p>The input shape for the <code>Bottleneck</code> layer should match the output shape of the <code>Enc_block</code> layer, and so on. I suggest you first understand a simple example before trying to implement more complicated ones. Take a look at this example. It has two custom layers:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
from tensorflow.keras import layers
class CustomLayer1(layers.Layer):
def __init__(self, outshape=4):
super(CustomLayer1, self).__init__()
self.outshape = outshape
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(int(input_shape[1]), self.outshape),
trainable=True)
super(CustomLayer1, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
class CustomLayer2(layers.Layer):
def __init__(self):
super(CustomLayer2, self).__init__()
def call(self, inputs):
return inputs / tf.reshape(tf.reduce_sum(inputs, 1), (-1, 1))
</code></pre>
<p>Now I will use both of these layers in the new <code>CombinedLayers</code> class:</p>
<pre class="lang-py prettyprint-override"><code>class CombinedLayers(layers.Layer):
def __init__(self, units=3):
super(CombinedLayers, self).__init__()
# `units` defines a number of units in the layer. It is the
# output shape of the `CustomLayer`
self.layer1 = CustomLayer1(units)
# The input shape is inferred dynamically in the `build()`
# method of the `CustomLayer1` class
self.layer2 = CustomLayer1(units)
# Some layers such as this one do not need to know the shape
self.layer3 = CustomLayer2()
def call(self, inputs):
x = self.layer1(inputs)
x = self.layer2(x)
x = self.layer3(x)
return x
</code></pre>
<p>Note that the input shape of <code>CustomLayer1</code> is inferred dynamically in the <code>build()</code> method. Now let's test it with some input:</p>
<pre class="lang-py prettyprint-override"><code>x_train = [np.random.normal(size=(3, )) for _ in range(5)]
x_train_tensor = tf.convert_to_tensor(x_train)
combined = CombinedLayers(3)
result = combined(x_train_tensor)
result.numpy()
# array([[ 0.50822063, -0.0800476 , 0.57182697],
# [ -0.76052217, 0.50127872, 1.25924345],
# [-19.5887986 , 9.23529798, 11.35350062],
# [ -0.33696137, 0.22741248, 1.10954888],
# [ 0.53079047, -0.08941536, 0.55862488]])
</code></pre>
<p>This is how you should approach it. Create layers one by one. Each time you add a new layer test everything with some input to verify that you are doing things correctly.</p>
|
tensorflow|tf.keras
| 3
|
8,289
| 64,633,264
|
About epochs and images in Machine learning
|
<p>I have 186 images in train_images and 174 images in valid_images. When I pass them to the CNN model, it only trains on 6 images. I did not set any batch size. The dataset is <a href="https://www.kaggle.com/ihelon/lego-minifigures-classification" rel="nofollow noreferrer">Lego minifigures</a>.</p>
<pre><code> '''
print(train_images.shape)
print(type(train_images))
print(valid_images.shape)
print(type(valid_images))
print(train_targets.shape)
print(valid_targets.shape)
print(type(train_targets))
print(type(valid_targets))
'''
output is
(186, 20, 20, 3)
<class 'numpy.ndarray'>
(174, 20, 20, 3)
<class 'numpy.ndarray'>
(186, 33)
(174, 33)
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
'''
model
model=tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(20,(3,3),activation='relu',input_shape=(20,20,3)))
model.add(tf.keras.layers.MaxPooling2D(2,2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(100,activation='relu'))
model.add(tf.keras.layers.Dense(33,activation='softmax'))
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='adam')
model.summary()
'''
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_3 (Conv2D) (None, 18, 18, 20) 560
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 9, 9, 20) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 1620) 0
_________________________________________________________________
dense_4 (Dense) (None, 100) 162100
_________________________________________________________________
dense_5 (Dense) (None, 33) 3333
=================================================================
Total params: 165,993
Trainable params: 165,993
Non-trainable params: 0
___________________________
'''
hist=model.fit(train_images,train_targets,epochs=100,validation_data=(valid_images,valid_targets))
'''
Epoch 1/100
6/6 [==============================] - 0s 10ms/step - loss: 2.7642 - accuracy: 0.4355 - val_loss: 3.1673 - val_accuracy: 0.1839
</code></pre>
<p>Why is it training on only 6 images?
I am a beginner in ML, so any help would be greatly appreciated!</p>
|
<p><code>batch_size</code> defaults to 32 in <code>fit()</code>, so your 186 training images are split into ceil(186 / 32) = 6 batches. The "6/6" in the progress bar counts batches per epoch, not images; all 186 images are used.</p>
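<pre><code>import math
math.ceil(186 / 32)  # -> 6 batches per epoch with the default batch_size of 32
</code></pre>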
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
8,290
| 41,029,287
|
How to call only some files from a folder full of files using python?
|
<p>I have many files inside 1 folder.
This is a description of names:</p>
<p>AWA_s1_Fp1_features.mat</p>
<p>AWA_s1_C3_features.mat</p>
<p>AWA_s1_C4_features.mat</p>
<p>AWA_s1_Fp2_features.mat</p>
<p>Rem_s1_Fp1_features.mat</p>
<p>Rem_s1_C3_features.mat</p>
<p>Rem_s1_C4_features.mat</p>
<p>Rem_s1_Fp2_features.mat</p>
<p>SWS_s1_Fp1_features.mat</p>
<p>SWS_s1_C3_features.mat</p>
<p>SWS_s1_C4_features.mat</p>
<p>SWS_s1_Fp2_features.mat</p>
<p>s1 goes from 1 to 38.</p>
<p>So, how can I call them? For example I only want to call the AWA_sx_C3 ones:
AWA_s1_C3_features.mat, AWA_s2_C3_features.mat ...AWA_s38_C3_features.mat</p>
<p>How can I do it?
With this code I call all the AWA files (C3, C4, Fp1 and Fp2). But I only want the C3 ones.</p>
<pre><code> read_files = glob.glob('/media/FeaturesX/AWA_s*.mat')
</code></pre>
|
<p>Try this:</p>
<pre><code>read_files = glob.glob('/media/FeaturesX/AWA_s*_C3_features.mat')
</code></pre>
<p>The pattern matching in <code>glob</code> is fairly literal. By putting <code>_C3_features.mat</code> after the <code>*</code>, we require that part of the string to exist for the match to be valid.</p>
|
python-3.x|numpy
| 1
|
8,291
| 40,924,025
|
Pandas concatenate/join/group rows in a dataframe based on date
|
<p>I have a pandas dataset like this:</p>
<pre><code> Date WaterTemp Discharge AirTemp Precip
0 2012-10-05 00:00 10.9 414.0 39.2 0.0
1 2012-10-05 00:15 10.1 406.0 39.2 0.0
2 2012-10-05 00:45 10.4 406.0 37.4 0.0
...
63661 2016-10-12 14:30 10.5 329.0 15.8 0.0
63662 2016-10-12 14:45 10.6 323.0 19.4 0.0
63663 2016-10-12 15:15 10.8 329.0 23 0.0
</code></pre>
<p>I want to extend each row so that I get a dataset that looks like:</p>
<pre><code> Date WaterTemp 00:00 WaterTemp 00:15 .... Discharge 00:00 ...
0 2012-10-05 10.9 10.1 414.0
</code></pre>
<p>There will be at most 72 readings for each date so I should have 288 columns in addition to the date and index columns, and at most I should have at most 1460 rows (4 years * 365 days in year - possibly some missing dates). Eventually, I will use the 288-column dataset in a classification task (I'll be adding the label later), so I need to convert this dataframe to a 2d array (sans datetime) to feed into the classifier, so I can't simply group by date and then access the group. I did try grouping based on date, but I was uncertain how to change each group into a single row. I also looked at joining. It looks like joining could suit my needs (for example a join based on (day, month, year)) but I was uncertain how to split things into different pandas dataframes so that the join would work. What is a way to do this? </p>
<p>PS. I do already know how to change the my datetimes in my Date column to dates without the time. </p>
|
<p>I figured it out. I group the readings by the time of day of each reading. Each group is a dataframe in and of itself, so I then just need to concatenate the dataframes based on date. My code for the whole function is as follows.</p>
<pre><code>import pandas

def readInData(filename):
    # read in the file and remove missing values
    ds = pandas.read_csv(filename)
    ds = ds[ds.AirTemp != 'M']
    # set the index to the date
    ds['Date'] = pandas.to_datetime(ds.Date, yearfirst=True, errors='coerce')
    ds.Date = pandas.DatetimeIndex(ds.Date)
    ds.index = ds.Date
    # group readings by time of day (i.e. all readings taken at midnight, etc.)
    dg = ds.groupby(ds.index.time)
    # initialize the final dataframe
    df = pandas.DataFrame()
    for name, group in dg:
        # each group is a dataframe itself
        try:
            # give each column a unique name, except for Date
            group.columns = ['Date', 'WaterTemp'+str(name), 'Discharge'+str(name), 'AirTemp'+str(name), 'Precip'+str(name)]
            # ensure the date is the index
            group.index = group.Date
            # remove the time component from the index
            group.index = group.index.normalize()
            # join on the date
            df = pandas.concat([df, group], axis=1)
        except Exception:
            # a few malformed groups raise here (three in my dataset); skip them
            pass
    # remove duplicate Date columns
    df = df.loc[:, ~df.columns.duplicated()]
    # since the date is the index, drop the remaining Date column
    df = df.drop('Date', axis=1)
    # return the dataset
    return df
</code></pre>
|
python|python-3.x|pandas|dataframe
| 0
|
8,292
| 53,828,383
|
Loading a CNN from checkpoints and feeding it in tensorflow
|
<p>Assuming that I have a simple network including a CNN with a specific name: we can save checkpoints using the tf saver and restore them with <code>saver.restore(checkpoint_path)</code>. We can also get all tensors and operations in the graph using <code>tf.get_default_graph().get_operations()</code>, etc.
For my specific question, I load a CNN layer from checkpoints which looks like:</p>
<pre><code> 'tower_0/conv1_fullres/truncated_normal/shape',
'tower_0/conv1_fullres/truncated_normal/mean',
'tower_0/conv1_fullres/truncated_normal/stddev',
'tower_0/conv1_fullres/truncated_normal/TruncatedNormal',
'tower_0/conv1_fullres/truncated_normal/mul',
'tower_0/conv1_fullres/truncated_normal',
'tower_0/conv1_fullres/Const',
'tower_0/conv1_fullres/Conv2D',
'tower_0/conv1_fullres/add',
'tower_0/conv1_fullres/Relu',
'tower_0/conv1_fullres/BatchNorm/cond/Switch',
'tower_0/conv1_fullres/BatchNorm/cond/switch_t',
'tower_0/conv1_fullres/BatchNorm/cond/switch_f',
'tower_0/conv1_fullres/BatchNorm/cond/pred_id',
'tower_0/conv1_fullres/BatchNorm/cond/Const',
'tower_0/conv1_fullres/BatchNorm/cond/Const_1',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm/Switch',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm/Switch_1',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm/Switch_2',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1/Switch',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1/Switch_1',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1/Switch_2',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1/Switch_3',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1/Switch_4',
'tower_0/conv1_fullres/BatchNorm/cond/FusedBatchNorm_1',
'tower_0/conv1_fullres/BatchNorm/cond/Merge',
'tower_0/conv1_fullres/BatchNorm/cond/Merge_1',
'tower_0/conv1_fullres/BatchNorm/cond/Merge_2',
'tower_0/conv1_fullres/BatchNorm/cond_1/Switch',
'tower_0/conv1_fullres/BatchNorm/cond_1/switch_t',
'tower_0/conv1_fullres/BatchNorm/cond_1/switch_f',
'tower_0/conv1_fullres/BatchNorm/cond_1/pred_id',
'tower_0/conv1_fullres/BatchNorm/cond_1/Const',
'tower_0/conv1_fullres/BatchNorm/cond_1/Const_1',
'tower_0/conv1_fullres/BatchNorm/cond_1/Merge',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg/sub/x',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg/sub',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg/sub_1',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg/mul',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg_1/sub/x',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg_1/sub',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg_1/sub_1',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg_1/mul',
'tower_0/conv1_fullres/BatchNorm/AssignMovingAvg_1',
</code></pre>
<p>The naming follows a common convention. My question is: how can I feed an image to this convolution layer and get results? </p>
<p>Thank you </p>
|
<p>If the model is saved with <code>write_meta_graph=True</code>, it will create a meta file from which we can recreate the network; otherwise you have to write Python code that rebuilds each layer manually, exactly as in the original model.</p>
<p>You can use the <a href="https://www.tensorflow.org/api_docs/python/tf/train/import_meta_graph" rel="nofollow noreferrer"><code>tf.train.import_meta_graph</code></a> function to load the meta-graph. This appends the network defined in the <code>.meta</code> file to the current graph, but does not load any parameter values.</p>
<p>We can then restore the network's parameters by calling <code>restore</code> on the saver returned by <code>import_meta_graph</code>, which is an instance of the <code>tf.train.Saver</code> class.</p>
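<p>For illustration, a rough TF1-style sketch (the checkpoint prefix, the placeholder name <code>input:0</code>, and <code>my_image_batch</code> are assumptions; the Relu op name comes from your listing above):</p>
<pre><code>import tensorflow as tf

with tf.Session() as sess:
    # recreate the graph structure from the .meta file
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    # restore the trained parameter values into that graph
    saver.restore(sess, 'model.ckpt')
    graph = tf.get_default_graph()
    # look up the input placeholder and the layer's output by name
    x = graph.get_tensor_by_name('input:0')
    conv1_out = graph.get_tensor_by_name('tower_0/conv1_fullres/Relu:0')
    # feed a 4-D image batch and fetch the layer's activation
    activation = sess.run(conv1_out, feed_dict={x: my_image_batch})
</code></pre>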
<p>Read more here:<br>
- <a href="https://www.tensorflow.org/guide/saved_model" rel="nofollow noreferrer">Saved models - Tensorflow docs</a><br>
- <a href="https://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/" rel="nofollow noreferrer">A quick complete tutorial to save and restore Tensorflow models</a><br>
- <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/save_and_restore_models.ipynb#scrollTo=pZJ3uY9O17VN" rel="nofollow noreferrer">Code example on Colab on Save and restore model</a> </p>
|
tensorflow
| 0
|
8,293
| 53,996,802
|
Using ctypes to call C++ function with pointer args
|
<p>Some background (might not be directly related to the problem): I need to perform an efficient matrix multiplication with a known sparsity.<br>
Because it's sparse, using normal matrix multiplication is wasteful, and because it's a known sparsity I can implement it in an efficient way rather than using sparse libraries.</p>
<p>I have implemented my function in C++ </p>
<pre><code>void SparsePrecisionMult(double *Q, double *X, double *out, const int dim, const int markov, const int n);
</code></pre>
<p>This is the "wrapper":</p>
<pre><code>import ctypes
_SPMlib = ctypes.CDLL('./SparsePrecisionMult.so')
_SPMlib.SparsePrecisionMult.argtypes = (ctypes.POINTER(ctypes.c_double), ctypes.POINTER(ctypes.c_double), ctypes.POINTER(ctypes.c_double),
ctypes.c_int, ctypes.c_int, ctypes.c_int)
def sparse_precision_mult(Q, X, out, markov_blanket_size):
global _SPM
m, d = X.shape
_SPMlib.SparsePrecisionMult(Q.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
X.T.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
out.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
d, markov_blanket_size, m)
</code></pre>
<p>And this is how i called it:</p>
<pre><code>patch_size = 3
markov_blanket = 3
C = np.eye(9)
X = np.array(range(0, 27, 1)).reshape(3, 9)
out = np.zeros([3, 9])
sparse_precision_mult(C.astype(np.float64), X.astype(np.float64), out.astype(np.float64), 3)
print(out)
</code></pre>
<p>This test should result in out=X.<br>
A version of this test written in C performs well.<br>
I get out = zeros, so my guess is that somehow the memory isn't shared but copied instead.<br>
I don't want duplicate data in RAM (this function will be used on high-dimensional matrices). How can I solve this?</p>
<p>Thanks.</p>
|
<p><code>astype</code> creates a copy of an array. Therefore the <code>out.astype(np.float64)</code> argument passes a copy to <code>sparse_precision_mult</code>; that copy is modified and then thrown away, and the original <code>out</code> is never touched.</p>
<p>Create <code>out</code> with dtype <code>np.float64</code> in the first place and (if necessary) convert it after the function call.</p>
<p>If possible, create all parameters with the dtype the function needs from the start, to avoid the copying done by <code>astype</code>.</p>
<p><code>astype</code> has a <code>copy</code> parameter that can be set to <code>False</code> to avoid unnecessary copies, but it is better to be sure whether a copy is needed than to rely on that.</p>
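<p>A minimal sketch of the corrected test, reusing the names from the question:</p>
<pre><code>import numpy as np

# create the arrays with the right dtype up front -- no astype, no copies
C = np.eye(9)                                    # np.eye is float64 by default
X = np.arange(27, dtype=np.float64).reshape(3, 9)
out = np.zeros([3, 9], dtype=np.float64)         # this exact buffer is written in place
sparse_precision_mult(C, X, out, 3)
print(out)                                       # now holds the result
</code></pre>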
|
python|c++|numpy|ctypes
| 1
|
8,294
| 66,144,453
|
Merging multiple columns on pandas dataframe ("Vlookup" on different columns)
|
<p>I have a dataframe called <code>reference</code> and it looks like this:</p>
<pre><code> wind P
0 15.5 300
1 16.0 333
2 16.5 421
3 17.0 498
4 17.5 544
</code></pre>
<p>and another one, called <code>vdb1</code>, whose columns all contain <code>wind</code> values. What I want to do is, for each element in <code>vdb1</code>, replace it with the corresponding <code>P</code> from the <code>reference</code> dataframe. I'm able to do it for individual columns, but as I have 10+ columns to replace, I'm not getting there with only <code>pd.merge</code>.</p>
<p>Here is <code>vdb1</code>:</p>
<pre><code> VDB1-01 VDB1-02 VDB1-03 VDB1-04 VDB1-05 VDB1-06 VDB1-07 VDB1-08 VDB1-09 VDB1-10 VDB1-11 VDB1-12 VDB1-13
2021-01-30 00:00:00 16.0 16.0 15.5 15.0 14.5 15.0 15.0 15.0 15.5 11.5 12.0 12.0 13.0
2021-01-30 00:10:00 15.5 15.5 15.5 15.5 15.0 15.0 14.5 15.5 15.5 11.0 11.5 11.5 13.0
2021-01-30 00:20:00 15.5 15.5 15.0 15.0 15.0 15.0 14.5 15.0 15.5 11.0 11.0 12.5 13.0
</code></pre>
<p>I'm trying to create another dataframe with the corresponding values, but I'm having some troubles with this:</p>
<pre><code>expected1 = (vdb1.merge(reference,left_on=[vdb1.columns], right_on=['wind'],how='left'))
</code></pre>
<p>Thank you in advance!</p>
|
<p>Let's map across the values using <code>map</code>. Values not present in <code>reference</code> (the small lookup dataframe) will become null, so we fill those back in with <code>combine_first</code>:</p>
<pre><code>vdb1.apply(lambda x: x.map(dict(zip(reference['wind'], reference['P'])))).combine_first(vdb1)
</code></pre>
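<p>Broken into steps, using the <code>reference</code> frame from the question as the lookup table:</p>
<pre><code>mapping = dict(zip(reference['wind'], reference['P']))   # wind -> P lookup
mapped = vdb1.apply(lambda col: col.map(mapping))        # NaN where a wind value has no match
result = mapped.combine_first(vdb1)                      # fall back to the original value there
</code></pre>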
|
python|pandas
| 1
|
8,295
| 65,951,706
|
How to take rows with continous time for more than 3 three rows python
|
<p>I have one dataframe. I want to keep the rows where the time is continuous for three rows and delete the other rows.</p>
<pre><code>df_input:
Value time
8970 2020-11-20 15:40:00
7602 2020-11-20 15:50:00
7603 2020-11-20 16:00:00
7604 2020-11-20 16:10:00
7757 2020-11-29 06:30:00
7758 2020-11-29 06:40:00
7877 2020-12-02 01:00:00
11179 2021-01-06 23:50:00
11230 2021-01-07 08:20:00
11283 2021-01-07 17:30:00
df_out:
8970 2020-11-20 15:40:00
7602 2020-11-20 15:50:00
7603 2020-11-20 16:00:00
</code></pre>
|
<p>By taking the difference between consecutive timestamps:</p>
<pre><code>df['timeDiff'] = df['time'].diff()
</code></pre>
<p>you will turn</p>
<pre><code>df_input
Value time
8970 2020-11-20 15:40:00
7602 2020-11-20 15:50:00
7603 2020-11-20 16:00:00
7604 2020-11-20 16:10:00
7757 2020-11-20 18:30:00
7758 2020-11-20 20:30:00
</code></pre>
<p>into :</p>
<pre><code>df_output
Value timeDiff
8970 Na
7602 00:10:00
7603 00:10:00
7604 00:10:00
7757 02:20:00
7758 02:00:00
</code></pre>
<p>Then you can group the rows on these time differences to find the continuous runs; see the sketch below. I hope this helps you solve the problem.</p>
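<p>A minimal sketch of that grouping step (assuming readings are spaced at 10-minute intervals and a run must be at least 3 rows long):</p>
<pre><code>import pandas as pd

gap = df['time'].diff()
# start a new run whenever the gap to the previous row is not exactly 10 minutes
run_id = (gap != pd.Timedelta(minutes=10)).cumsum()
# keep only rows belonging to runs of 3 or more consecutive readings
df_out = df[df.groupby(run_id)['time'].transform('size') >= 3]
</code></pre>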
|
python-3.x|pandas|numpy|pandas-groupby
| 1
|
8,296
| 66,033,938
|
Python/matplot "fill_between" stops just above y=7.5 where I want it to stop
|
<p>I am trying to get the shaded colors to stop at the y=7.5 bar, but the green one doesn't go far enough (i.e. to the line). Is anyone able to figure this out? Many thanks!</p>
<pre><code># sensitivity analysis
sensitivity = pd.DataFrame()
for x in exit_probabilities:
for y in exit_valuations:
sensitivity.loc[y, x] = vc_valuation_method_mod(0.15, 5,x, y, 0.5, 5, 10, 6, 100, 80, 2.5)[0]
print(sensitivity)
#Plot data
sns.set(font_scale = 1.5, style = 'white', rc=None)
fig, ax = plt.subplots(figsize = (15,10))
a = sensitivity.plot(y = 0.2, ax = ax, linestyle = '--', color = 'gray')
b = sensitivity.plot(y = 0.3, ax = ax, linestyle = '-.', color = 'gray')
c = sensitivity.plot(y = 0.4, ax = ax, linestyle = ':', color = 'gray')
d = ax.hlines(y=7.5, xmin=100, xmax=900, colors='black', linestyles='-', lw=2, label='Single Short Line')
#ax.fill_between(a, d, alpha=2)
# fill between y=0.75 and df.y
# Adds headers
sensitivity.columns =['.1', '.15','.2','.25','.3','.35','.4']
ax.fill_between(x=sensitivity.index, y1=sensitivity['.2'], y2=7.5, where=sensitivity['.2'] > 7.5, interpolate=True)
ax.fill_between(x=sensitivity.index, y1=sensitivity['.3'], y2=np.maximum(sensitivity['.2'], 7.5),
where=sensitivity['.3'] > 7.5, interpolate=True)
ax.fill_between(x=sensitivity.index, y1=sensitivity['.4'], y2=np.maximum(sensitivity['.3'], 7.5),
where=sensitivity['.4'] > 7.5, interpolate=True)
plt.show()
sns.despine();
</code></pre>
|
<p>Your first fill (the blue one) works fine.</p>
<p>For the second fill:</p>
<ul>
<li>the top is <code>sensitivity['.3']</code></li>
<li>the bottom is <code>sensitivity['.2']</code>, but only where it is larger than <code>7.5</code>; so, take the maximum of <code>sensitivity['.2']</code> and <code>7.5</code></li>
<li>the <code>where</code> parameter is a limit on the x-axis; the filling should start where <code>sensitivity['.3']</code> becomes larger than <code>7.5</code>; if you started earlier, matplotlib would fill between the horizontal line at <code>7.5</code> and <code>sensitivity['.3']</code></li>
</ul>
<p>The third fill is similar to the second. Together it would look like:</p>
<pre class="lang-py prettyprint-override"><code>ax.fill_between(x=sensitivity.index, y1=sensitivity['.2'], y2=7.5, where=sensitivity['.2'] > 7.5, interpolate=True)
ax.fill_between(x=sensitivity.index, y1=sensitivity['.3'], y2=np.maximum(sensitivity['.2'], 7.5),
where=sensitivity['.3'] > 7.5, interpolate=True)
ax.fill_between(x=sensitivity.index, y1=sensitivity['.4'], y2=np.maximum(sensitivity['.3'], 7.5),
where=sensitivity['.4'] > 7.5, interpolate=True)
</code></pre>
|
python|python-3.x|numpy|matplotlib|plot
| 0
|
8,297
| 66,115,632
|
Gunicorn worker, threads for GPU tasks to increase concurrency/parallelism
|
<p>I'm using Flask with Gunicorn to implement an AI server. The server takes in HTTP requests and calls the algorithm (built with pytorch). The computation is run on the nvidia GPU.</p>
<p>I need some input as to how I can achieve concurrency/parallelism in this case. The machine has 8 vCPUs, 20 GB of memory, and 1 GPU with 12 GB of memory.</p>
<ul>
<li>1 worker occupies 4 GB of memory and 2.2 GB of GPU memory.
The maximum number of workers I can run is 5 (because of GPU memory: 2.2 GB * 5 workers = 11 GB).</li>
<li>1 worker = 1 HTTP request (max simultaneous requests = 5)</li>
</ul>
<p>The specific question is</p>
<ol>
<li>How can I increase the concurrency/parallelism?</li>
<li>Do I have to specify number of threads for computation on GPU?</li>
</ol>
<p>Now my gunicorn command is</p>
<blockquote>
<p><strong>gunicorn --bind 0.0.0.0:8002 main:app --timeout 360 --workers=5 --worker-class=gevent --worker-connections=1000</strong></p>
</blockquote>
|
<p>Fast tokenizers are apparently not thread-safe.</p>
<p><code>AutoTokenizer</code> is a wrapper that internally uses either the fast or the slow implementation. Its default is the fast one (not thread-safe), so you have to switch to the slow one (thread-safe); that's why you add the <strong>use_fast=False</strong> flag.</p>
<p>I was able to solve this by:</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
</code></pre>
<p>Best,
Chirag Sanghvi</p>
|
concurrency|parallel-processing|pytorch|gpu|gunicorn
| 0
|
8,298
| 52,796,629
|
How to save and use a trained neural network developed in PyTorch / TensorFlow / Keras?
|
<p>Are there ways to save a model after training and share just the model with others, like a regular script? Since the network is a collection of float matrices, is it possible to just extract these trained weights and run them on new data to make predictions, instead of requiring the users to install these frameworks too? I am new to these frameworks and will make any clarifications as needed.</p>
|
<p>PyTorch: As explained in <a href="https://stackoverflow.com/questions/42703500/best-way-to-save-a-trained-model-in-pytorch#43819235">this post</a>, you can save a model's parameters as a dictionary, or load a dictionary to set your model's parameters.
You can also save/load a PyTorch model as an object.
Both procedures require the user to have at least one tensor computation framework installed, e.g. for efficient matrix multiplication.</p>
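<p>For illustration, a minimal sketch of the state-dict approach (<code>MyModel</code> is a placeholder for whatever architecture was trained):</p>
<pre><code>import torch

# on the trainer's side: save only the learned parameters
torch.save(model.state_dict(), 'model_weights.pt')

# on the user's side: rebuild the same architecture, then load the weights
model = MyModel()                                   # hypothetical model class
model.load_state_dict(torch.load('model_weights.pt'))
model.eval()                                        # inference mode before predicting
</code></pre>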
|
tensorflow|keras|pytorch
| 1
|
8,299
| 52,901,546
|
And operator on two rank-1 tensorflow tensors
|
<p>I have two rank-1 tensors, one of them contains X floats, the other is one-hot, also with X entries. </p>
<p>I want to create a new rank-1 tensor, with a single element: the float in the first tensor at the index where the one-hot vector is equal to 1.</p>
<p>Any help would be much appreciated.</p>
|
<p>It sounds like tf.boolean_mask is what you're looking for:
<a href="https://www.tensorflow.org/api_docs/python/tf/boolean_mask" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/boolean_mask</a></p>
|
python|tensorflow
| 0
|