| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
4,400
| 35,402,074
|
`unique` in data.frame.describe() not work [python][pandas]
|
<p>Hi, it's probably something fundamental, but I can't fix it: <code>unique()</code> shows the unique values in each column, but <code>describe()</code> shows NaN for them. Why? Any help is appreciated, thanks.</p>
<pre><code>import numpy as np
import pandas as pd
train = pd.read_csv('train.csv', header=0)
# works:
train['Pclass'].unique()
# array([3, 1, 2], dtype=int64)
train['Survived'].unique()
# array([0, 1], dtype=int64)
# not work:
train.describe(include='all')
# PassengerId Survived Pclass Name Sex \
# count 891.000000 891.000000 891.000000 891 891
# unique NaN NaN NaN 891 2
# top NaN NaN NaN Mitkoff, Mr. Mito male
# freq NaN NaN NaN 1 577
# mean 446.000000 0.383838 2.308642 NaN NaN
# std 257.353842 0.486592 0.836071 NaN NaN
# min 1.000000 0.000000 1.000000 NaN NaN
# 25% 223.500000 0.000000 2.000000 NaN NaN
# 50% 446.000000 0.000000 3.000000 NaN NaN
# 75% 668.500000 1.000000 3.000000 NaN NaN
# max 891.000000 1.000000 3.000000 NaN NaN
#
# Age SibSp Parch Ticket Fare Cabin \
# count 714.000000 891.000000 891.000000 891 891.000000 204
# unique NaN NaN NaN 681 NaN 147
# top NaN NaN NaN 347082 NaN C23 C25 C27
# freq NaN NaN NaN 7 NaN 4
# mean 29.699118 0.523008 0.381594 NaN 32.204208 NaN
# std 14.526497 1.102743 0.806057 NaN 49.693429 NaN
# min 0.420000 0.000000 0.000000 NaN 0.000000 NaN
# 25% 20.125000 0.000000 0.000000 NaN 7.910400 NaN
# 50% 28.000000 0.000000 0.000000 NaN 14.454200 NaN
# 75% 38.000000 1.000000 0.000000 NaN 31.000000 NaN
# max 80.000000 8.000000 6.000000 NaN 512.329200 NaN
#
# Embarked
# count 889
# unique 3
# top S
# freq 644
# mean NaN
# std NaN
# min NaN
# 25% NaN
# 50% NaN
# 75% NaN
# max NaN
</code></pre>
|
<p>The <code>describe</code> method for numeric columns doesn't list the number of unique values, since this is usually not particularly meaningful for numeric data; the <code>describe</code> method for string columns does:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'string_column': ['a', 'a', 'b'], 'numeric': [1, 2, 1]})
df['numeric'].describe()
Out[6]:
count 3.000000
mean 1.333333
std 0.577350
min 1.000000
25% 1.000000
50% 1.000000
75% 1.500000
max 2.000000
Name: numeric, dtype: float64
df['string_column'].describe()
Out[7]:
count 3
unique 2
top a
freq 2
Name: string_column, dtype: object
</code></pre>
<p>Since your dataframe contains both, the results are being merged and <code>nan</code>s inserted where the column doesn't have that value.</p>
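<p>For example, calling <code>describe(include='all')</code> on the small frame above makes the merge visible directly (a minimal sketch; output abridged):</p>
<pre><code>df.describe(include='all')
Out[8]:
       string_column   numeric
count              3  3.000000
unique             2       NaN
top                a       NaN
freq               2       NaN
mean             NaN  1.333333
...
</code></pre>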
<p>If your numeric columns are actually just codes reflecting different classes/categories, you might want to convert them to <code>Categorical</code> to get more meaningful info about them:</p>
<pre><code>df['categorized'] = pd.Categorical(df['numeric'])
df['categorized'].describe()
Out[10]:
count 3
unique 2
top 1
freq 2
Name: categorized, dtype: int64
</code></pre>
|
python|pandas|unique|describe
| 3
|
4,401
| 35,465,176
|
Visible Deprecation warning...?
|
<p>I have some data that I'm reading from an h5 file as a numpy array and doing some analysis with. For context, the data plots a spectral response curve. I am indexing the data (and a subsequent array I have made for my x axis) to get a specific value or range of values. I'm not doing anything complex, and even the little maths I'm doing is pretty basic. However, I get the following warning in a number of places:</p>
<p>"VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 44 but corresponding boolean dimension is 17"</p>
<p>even though the output I get is the correct one when I check it. </p>
<p>Can someone explain what this warning means and whether I need to be more concerned about it than I currently am?</p>
<p>I'm not sure example code would shed much light on this, but seeing as it is a warning that occurs when I index and slice arrays, here is some anyway:</p>
<pre><code>data = h5py.File(file,'r')
dset = data['/DATA/DATA/'][:]
vals1 = dset[0]
AVIRIS = numpy.linspace(346.2995778, 2505.0363678, 432)
AVIRIS1 = AVIRIS[vals1>0]
AVIRIS1 = AVIRIS[vals1<1]
</code></pre>
|
<p>Previous questions on this warning:</p>
<p><a href="https://stackoverflow.com/questions/33098765/visibledeprecationwarning-boolean-index-did-not-match-indexed-array-along-dimen">VisibleDeprecationWarning: boolean index did not match indexed array along dimension 1; dimension is 2 but corresponding boolean dimension is 1</a></p>
<p><a href="https://stackoverflow.com/a/34296620/901925">https://stackoverflow.com/a/34296620/901925</a></p>
<p>I think this is something new in numpy 1.10 and is the result of using a boolean index that is shorter than the array it indexes. I don't have that version installed, so I can't give an example. But in an earlier numpy,</p>
<pre><code>In [667]: x=np.arange(10)
In [668]: ind=np.array([1,0,0,1],bool)
In [669]: ind
Out[669]: array([ True, False, False, True], dtype=bool)
In [670]: x[ind]
Out[670]: array([0, 3])
</code></pre>
<p>runs ok, even though <code>ind</code> is shorter than <code>x</code>. It effectively pads <code>ind</code> with <code>False</code>. I think newer versions continue to do the calculation, but issue this warning. I need to find a commit that changed this or a SO question that discusses it.</p>
<p>It is possible to suppress warnings - see the side bar. But you really should check the shape of the offending arrays. Do they match, or is the boolean index too short? Can you correct that?</p>
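<p>For example, instead of relying on the implicit <code>False</code> padding, you can build a full-length mask explicitly (a minimal sketch):</p>
<pre><code>import numpy as np

x = np.arange(10)
ind = np.array([1, 0, 0, 1], bool)   # shorter than x

full = np.zeros(x.shape, bool)       # full-length mask, False everywhere
full[:ind.size] = ind
x[full]                              # array([0, 3]), no warning
</code></pre>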
<p>Github discussion</p>
<p><a href="https://github.com/numpy/numpy/issues/4980" rel="noreferrer">https://github.com/numpy/numpy/issues/4980</a> Boolean array indexing fails silently #4980</p>
<p>Pull request</p>
<p><a href="https://github.com/numpy/numpy/pull/4353" rel="noreferrer">https://github.com/numpy/numpy/pull/4353</a> DEP: Deprecate boolean array indices with non-matching shape #4353</p>
<p>To suppress the warning use something like:</p>
<pre><code>import warnings
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
</code></pre>
<p>you may have to tweak the category name to get it right.</p>
|
arrays|numpy|warnings|h5py
| 18
|
4,402
| 28,622,619
|
InvalidBSON on MongoDB import - Pandas
|
<p>I'm currently working with Pandas (0.14.1) in Python 3.4.2 importing data from a Mongo database using pymongo (2.8). Upon a simple import,</p>
<pre><code>cur = db.collection.find()
df = pd.DataFrame(list(cur))
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>InvalidBSON: 'utf-8' codec can't decode byte 0xed in position 3123: invalid continuation byte
</code></pre>
<p>Important note: previously, I was doing the same tasks (importing the same collections into a pandas dataframe for processing) using pandas in Python 2.7+, and all of the imports worked without issue. For other reasons, I would now prefer to stay in the 3.4+ environment.</p>
<p>While I cannot share the data, I can say it is UTF-8 encoded (which makes the error confusing) line-delimited JSON documents I bulk imported into MongoDB. Some of the fields contain many unicode characters. Up until now, working in the mongo console and python 2.7+ with read-only (from the db) tasks, I have not run into the above problem. As a check, after getting this error in python 3.4, I ran the same code in 2.7 (for the same db collection) and it imported fine.</p>
<p>Is anyone able to provide some insight into what is happening, and perhaps provide some support to remedy the problem? I am willing to provide any additional information I can.</p>
<p>Update:</p>
<p>I identified the offending document using </p>
<pre><code>for doc in cur.sort([('_id', 1)]): print(doc['_id'])
</code></pre>
<p>and taking the _id following the last one listed. However, there is some odd behavior. Specifically, if I create a DataFrame using </p>
<pre><code>pd.DataFrame(list(db.collection.find({'_id' : ObjectId('offending _id')}))
</code></pre>
<p>it works fine. The same document exists in several collections, and throws the error in each one when attempting to import the full collection.</p>
<p>Document:</p>
<pre><code>{"app_name" : "Tiles", "description" : "Tiles is a sliding tile puzzle, also known as a \"15 Puzzle\". Using Tiles, you choose photos from your Photo Library on your iPhone or iPod Touch, or use the built-in camera on your iPhone. Tiles then cuts the photo into tiles and scrambles them into a fun puzzle for you to solve! Your job is to slide the tiles around and re-assemble the photo!\n\nSee if you can re-assemble the photo in the least number of moves or the fastest time possible! Challenge your friends to beat your time! Choose from an infinite number of images you create yourself, and up to 4 different puzzle configurations.\n\nFeatures:\n\n* 9, 16, 25, or 36 Tile Selections\n* Integrated with the built in iPhone camera and Photo Library so you can use your photos for puzzles.\n\nBy Request: A standard \"15\" Puzzle image can be downloaded at http://www.random-ideas.net/Software/Tiles/16.png simply download it and sync it with your phone (via iTunes) touse it.\n\nIn keeping with our company mission, we will be donating 5% of the pre-tax net profits from Tiles to charity. The selected charity for Tiles will be to benefit autism.\n \n \n", "whats_new" : "Fixed a rare crashing bug while selecting a new image.\n \n \n"}
</code></pre>
|
<p>I don't believe there is a way of applying an encoding to a cursor object while loading it directly into pandas. You may want to use mongoexport to dump your data into a CSV first (note that CSV mode requires an explicit field list via <code>--fields</code>):</p>
<pre><code>mongoexport --host localhost --db dbname --collection name --csv --fields &lt;comma-separated field names&gt; > test.csv
</code></pre>
<p>...and then you load that data in as utf-8.</p>
<pre><code>df = pd.read_csv('test.csv', encoding = "utf-8")
</code></pre>
|
python|mongodb|python-3.x|pandas|pymongo
| 0
|
4,403
| 50,966,435
|
How to make the conversion NaN >> [''] to all the elements of a Pandas Dataframe?
|
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'A': [[1, 2, 3, 4], [4, 5, 6, 7, 8], [7, 6, 4], np.nan, [1, 2]],
'B': [[1, 2, 3, 4], [4, 5, 6, 7, 8], [3, 7, 9], np.nan, [4, 5]],
'E': [np.nan, np.nan, np.nan, np.nan, np.nan],
'F': [[2, 2], [4, 4], np.nan, [78, 90], np.nan]
})
# First try
# ERROR: Cannot do inplace boolean setting on mixed-types with a non np.nan value
# df[df.isnull()] = df[df.isnull()].applymap(lambda x: [''])
# Second try
# ERROR: Invalid "to_replace" type: 'float'
# df.replace(to_replace=np.nan, value=[''], inplace=True)
# Third try
# RESULT: The column 'E' disappears and the remaining NaN values are converted to None
# stack = df.stack()
# stack[stack.isnull()] = [''] # or stack[stack == np.nan] = ['']
# stack.unstack()
# Fourth try
# ERROR: "value" parameter must be a scalar or dict, but you passed a "list"
# df.fillna([''])
</code></pre>
<p>This is my expected result:</p>
<pre><code>df = pd.DataFrame({
'A': [[1, 2, 3, 4], [4, 5, 6, 7, 8], [7, 6, 4], [''], [1, 2]],
'B': [[1, 2, 3, 4], [4, 5, 6, 7, 8], [3, 7, 9], [''], [4, 5]],
'E': [[''], [''], [''], [''], ['']],
'F': [[2, 2], [4, 4], [''], [78, 90], ['']]
})
</code></pre>
<p>I have tried all the ways shown in the example with no results. How can I achieve this?</p>
<p><strong>Note</strong>: I want to point out that the replacement is a list with only one element, an empty string. Also, it could be <code>[np.nan]</code></p>
|
<p><strong>UPDATE:</strong></p>
<pre><code>In [136]: df.applymap(lambda x: x if isinstance(x, list) else [])
Out[136]:
A B E F
0 [1, 2, 3, 4] [1, 2, 3, 4] [] [2, 2]
1 [4, 5, 6, 7, 8] [4, 5, 6, 7, 8] [] [4, 4]
2 [7, 6, 4] [3, 7, 9] [] []
3 [] [] [] [78, 90]
4 [1, 2] [4, 5] [] []
</code></pre>
<p>or:</p>
<pre><code>In [152]: df = df.applymap(lambda x: x if isinstance(x, list) else [np.nan])
In [153]: df
Out[153]:
A B E F
0 [1, 2, 3, 4] [1, 2, 3, 4] [nan] [2, 2]
1 [4, 5, 6, 7, 8] [4, 5, 6, 7, 8] [nan] [4, 4]
2 [7, 6, 4] [3, 7, 9] [nan] [nan]
3 [nan] [nan] [nan] [78, 90]
4 [1, 2] [4, 5] [nan] [nan]
</code></pre>
<p><strong>NOTE:</strong> please pay attention at <a href="https://stackoverflow.com/questions/50966435/how-to-make-the-conversion-nan-to-all-the-elements-of-a-pandas-dataframe#comment88929330_50966435">@jpp's comment</a> - storing non-scalar values in cells destroys 90% of Pandas/Numpy magic, as most of fast internal vectorized methods expect scalar values in cells - they will not work or won't work as expected.</p>
<hr>
<p><strong>Answer for the data set <em>before</em> the question has been updated:</strong></p>
<p>you can do it:</p>
<pre><code>In [120]: df = df.fillna('')
In [121]: df
Out[121]:
A B C D E F
0 zero one 0.226100 1.764036 2
1 one one -1.672476 -0.867188 2
2 two 0.671258 0.125589 4
3 three three 1.135731 0.080577 4
4 four two -1.711692 0.735028 67
5 two 0.608488 1.012977
6 six one -1.233979 -0.623781 78
7 seven three 0.256893 -0.546639 90
</code></pre>
<p>but all columns containing at least one <code>NaN</code> value will end up with string (<code>object</code>) dtype, because an empty string <code>''</code> always has <code>object</code> dtype:</p>
<pre><code>In [122]: df.dtypes
Out[122]:
A object
B object
C float64
D float64
E object
F object
dtype: object
</code></pre>
|
python|python-3.x|pandas|dataframe|nan
| 2
|
4,404
| 50,905,364
|
get column name that contains a specific value in pandas
|
<p>I want to get a column name from the whole dataframe (assume it contains more than 100 rows and more than 50 columns) based on a specific value contained in a specific column in pandas.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6], 'C':[7,8,9]})
pos = 2
response = raw_input("input")
placeholder = (df == response).idxmax(axis=1)[0]
print df
print (placeholder)
</code></pre>
<p>Tried a lot . . .</p>
<p>Example:
when the user inputs 2, it will show the answer: A;
if the input is 4, the feedback will be B;
and if 7, then the reply will be C.</p>
<p>I tried <code>iloc</code>, but it seems you have to know the row position there.</p>
<p>Please Help Dear Guys . . . . .
Thanks . . . :)</p>
|
<p>Try this</p>
<pre><code>for i in df.columns:
    newDf = df.loc[lambda df: df[i] == response]
    if not newDf.empty:
        print(i)
</code></pre>
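<p>A vectorized alternative that avoids the Python loop (a sketch; it assumes <code>response</code> has been cast to a type comparable with the column values, e.g. <code>int</code>):</p>
<pre><code>matches = df.columns[(df == response).any()]
print(list(matches))
</code></pre>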
|
python|python-2.7|pandas
| 2
|
4,405
| 9,224,718
|
What's the most efficient way to compute the mode in a sliding window over a 2D array in Python?
|
<p>I have an RGBA image that I need to upscale while keeping it smooth. </p>
<p>The catch is that I need to keep the colors exactly the way they are (background: I'm resizing a map where provinces are color-coded), and so I cannot just perform a resize with bicubic interpolation, because that will also interpolate the pixel colors while smoothing. </p>
<p>Thus, in order to get smooth edges I was hoping to upscale using nearest neighbor (giving me staircase patterns) and then round out the edges by replacing each pixel in the target image with the pixel color that occurs most often within a certain radius, a la so:</p>
<pre><code>from PIL import Image, ImageFilter
amount=3
image=Image.open(<file>)
image=image.filter(ImageFilter.ModeFilter(amount))
</code></pre>
<p>This finishes fairly quickly, except that it doesn't work, as PIL's ImageFilters operate separately on each channel. <strong>shakes fist</strong></p>
<p>I tried resorting to numpy arrays and doing the following in a loop:</p>
<pre><code>dest[x,y]=Counter([tuple(e) for e in reshape(source[max(x-r,0):x+r+1,max(y-r,0):y+r+1],(-1,4))]).most_common()[0][0]
</code></pre>
<p>Note that dest and source here are the same shape XxYx4 arrays, hence the necessary reshaping and converting into tuples.<br>
In theory this would work, but would take 12 hours to finish for the <strong>82 million pixel</strong> image I am operating on. I am inferring that this is mostly due to unnecessary overhead with casting and reshaping.</p>
<p>What would be the appropriate way to do this in Python? </p>
<p>I am about ready to throw up my hands and write a C++ module to do this task.<br>
Anything to steer me away from this path would be much appreciated!</p>
|
<p>If you care about a fixed set of colors in your image, the "Palette" image mode would perhaps be more appropriate (at least, if you don't have more than 256 colors in your map).</p>
<p>I would suggest first converting your image to "P" mode (since I'm not really familiar with PIL, I'm not sure how easy that is; perhaps you'll have to explicitly construct the palette first?) and then applying the mode filter.</p>
<p>Another solution which comes into my mind is to simply use bicubic interpolation when upsizing and then converting to a palette image using a palette derived from the original image. That might yield better results (and be easier to implement) than your current approach.</p>
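<p>A rough sketch of that second idea (untested; it assumes Pillow and a hypothetical <code>map.png</code>, and note the RGB conversion drops the alpha channel, which would need separate handling):</p>
<pre><code>from PIL import Image

im = Image.open('map.png').convert('RGB')
# smooth upscale, which introduces new in-between colors...
big = im.resize((im.width * 4, im.height * 4), Image.BICUBIC)
# ...then snap every pixel back to the original color set
pal = im.convert('P', palette=Image.ADAPTIVE)
out = big.quantize(palette=pal).convert('RGB')
</code></pre>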
|
python|numpy|python-imaging-library|mode|sliding-window
| 1
|
4,406
| 6,091,378
|
The concept behind scipy.signal.medfilt2d in python
|
<p>I am trying to understand how scipy.signal.medfilt2d works. After I looked at the scipy documentation, its format is:</p>
<p>scipy.signal.medfilt2d(input, kernel_size=3)</p>
<p>So like if I have a matrix like</p>
<pre><code>1 2 3 7 2 4
3 4 2 2 6 7
1 7 3 1 2 6
3 2 3 4 3 1
2 6 7 8 2 5
3 4 2 2 1 8
</code></pre>
<p>The kernel size is set to 3 by default. If I apply medfilt2d to this matrix, will it become the one below? (What I did was take every element inside the 3 x 3 box, add them all up, and divide by 9, i.e. take the average.)</p>
<pre><code>2.8 2.8 2.8 4.1 4.1 4.1
2.8 2.8 2.8 4.1 4.1 4.1
2.8 2.8 2.8 4.1 4.1 4.1
3.5 3.5 3.5 3.7 3.7 3.7
3.5 3.5 3.5 3.7 3.7 3.7
3.5 3.5 3.5 3.7 3.7 3.7
</code></pre>
<p>So please share your knowledge and insight with me. Tell me if I am wrong so I can learn from this mistake. Thanks a lot.</p>
|
<p>As explained in the <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/signal.html#median-filter" rel="nofollow">Scipy documentation</a>, <code>medfilt2</code> is a median filter. Quoting from the documentation,</p>
<pre><code>The sample median is the middle array value in a sorted list of neighborhood values
</code></pre>
<p>So for your example, the submatrix at position <code>1,1</code></p>
<pre><code>1 2 3
3 4 2
1 7 3
</code></pre>
<p>would be sorted into</p>
<pre><code>1 1 2 2 3 3 3 4 7
</code></pre>
<p>The middle element is <code>3</code> so that would be the output of the filter. There's more detail <a href="http://en.wikipedia.org/wiki/Median_filter" rel="nofollow">on Wikipedia</a>. </p>
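<p>A quick check with SciPy itself (note that <code>medfilt2d</code> zero-pads beyond the array boundaries, so values near the edges differ from a pure interior sliding median):</p>
<pre><code>import numpy as np
from scipy.signal import medfilt2d

a = np.array([[1, 2, 3, 7, 2, 4],
              [3, 4, 2, 2, 6, 7],
              [1, 7, 3, 1, 2, 6],
              [3, 2, 3, 4, 3, 1],
              [2, 6, 7, 8, 2, 5],
              [3, 4, 2, 2, 1, 8]], dtype=float)

print(medfilt2d(a, kernel_size=3))  # element [1, 1] comes out as 3.0
</code></pre>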
|
python|matrix|numpy|scipy
| 2
|
4,407
| 66,596,142
|
BertModel or BertForPreTraining
|
<p>I want to use Bert only for embedding and use the Bert output as an input for a classification net that I will build from scratch.</p>
<p>I am not sure if I want to do finetuning for the model.</p>
<p>I think the relevant classes are BertModel or BertForPreTraining.</p>
<p><a href="https://dejanbatanjac.github.io/bert-word-predicting/" rel="nofollow noreferrer">BertForPreTraining</a> head contains two "actions":
self.predictions is MLM (Masked Language Modeling) head is what gives BERT the power to fix the grammar errors, and self.seq_relationship is NSP (Next Sentence Prediction); usually refereed as the classification head.</p>
<pre><code>class BertPreTrainingHeads(nn.Module):
def __init__(self, config):
super().__init__()
self.predictions = BertLMPredictionHead(config)
self.seq_relationship = nn.Linear(config.hidden_size, 2)
</code></pre>
<p>I think the NSP isn't relevant for my task, so I can "override" it.
What does the MLM head do, and is it relevant for my goal, or should I use BertModel?</p>
|
<p>You should be using <code>BertModel</code> instead of <code>BertForPreTraining</code>.</p>
<p><code>BertForPreTraining</code> is used to train bert on Masked Language Model (MLM) and Next Sentence Prediction (NSP) tasks. They are not meant for classification.</p>
<p><code>BertModel</code> simply gives the raw output of the BERT encoder; you can then finetune the BERT model along with the classifier that you build on top of it. For classification, if it's just a single layer on top of the BERT model, you can directly go with <code>BertForSequenceClassification</code>.</p>
<p>In any case, if you just want to take the output of the BERT model and learn your classifier (without fine-tuning the BERT model), then you can freeze the BERT weights using:</p>
<pre><code>model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
for param in model.bert.parameters():  # freeze only the BERT encoder
    param.requires_grad = False
</code></pre>
<p>The above code is borrowed from <a href="https://github.com/huggingface/transformers/issues/400" rel="noreferrer">here</a></p>
|
deep-learning|nlp|bert-language-model|huggingface-transformers|transformer-model
| 5
|
4,408
| 66,605,849
|
extract date only from pandas column
|
<p>I have this column in a pandas df:</p>
<pre><code>full_date
2020-12-02T08:11:30-0600
2020-12-02T02:11:50-0600
2020-12-03T08:56:29-0600
</code></pre>
<p>I only need the date, hoping to have this column:</p>
<pre><code>date
2020-12-02
2020-12-02
2020-12-03
</code></pre>
<p>I have tried to find a solution in previous questions, but still failed. If anyone can help, I will appreciate it a lot. Thanks.</p>
|
<p>In case your column is not a <code>datetime</code> type, you can convert it to that and then use the <code>.dt</code> accessor to get just the date:</p>
<pre><code>>>> df["date"] = df["full_date"].pipe(pd.to_datetime, utc=True).dt.date
>>> print(df)
full_date date
0 2020-12-02T08:11:30-0600 2020-12-02
1 2020-12-02T02:11:50-0600 2020-12-02
2 2020-12-03T08:56:29-0600 2020-12-03
</code></pre>
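<p>If you would rather keep a true <code>datetime64</code> column (timestamps at midnight) instead of Python <code>date</code> objects, <code>dt.normalize()</code> is an alternative (a sketch):</p>
<pre><code>>>> df["date"] = pd.to_datetime(df["full_date"], utc=True).dt.normalize()
</code></pre>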
|
python|pandas
| 1
|
4,409
| 66,597,043
|
How can I count frequency by data in dataframe?
|
<p>I have an issue with groupby in pandas. My DF looks like below:</p>
<pre><code>time ID
01-13 1
01-13 2
01-14 3
01-15 4
01-15 5
</code></pre>
<p>I need result like below:</p>
<pre><code>time ID
01-13 2
01-14 1
01-15 2
</code></pre>
<p>So basically I need to count the frequency of IDs by date. I tried this, but I am not sure of the result (the df is huge). Any ideas? Thanks for the help.</p>
<pre><code>df = df.groupby("Time").Id.value_counts()
</code></pre>
<p>Best regards</p>
|
<p>One way is:</p>
<pre><code>df.time.value_counts()
</code></pre>
<p>Output:</p>
<pre><code>01-15 2
01-13 2
01-14 1
Name: time, dtype: int64
</code></pre>
<p>Other way, as suggested by reviewer above:</p>
<pre><code>df.groupby(['time']).size().reset_index(name='Frequency')
</code></pre>
<p>Output:</p>
<pre><code> time Frequency
0 01-13 2
1 01-14 1
2 01-15 2
</code></pre>
<p>Note you can group by several variables if needed:</p>
<pre><code>df.groupby(['col1', 'col2', 'etc.'])...
</code></pre>
|
pandas|dataframe|group-by|count|frequency
| 2
|
4,410
| 66,621,907
|
Filter date index based on regex in pandas
|
<p>I have a date column with a price index, like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Date</th>
<th style="text-align: center;">Price</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2010-01-01</td>
<td style="text-align: center;">23</td>
</tr>
<tr>
<td style="text-align: left;">2010-12-31</td>
<td style="text-align: center;">25</td>
</tr>
<tr>
<td style="text-align: left;">2013-02-03</td>
<td style="text-align: center;">24</td>
</tr>
<tr>
<td style="text-align: left;">2013-12-31</td>
<td style="text-align: center;">28</td>
</tr>
<tr>
<td style="text-align: left;">2016-03-04</td>
<td style="text-align: center;">27</td>
</tr>
<tr>
<td style="text-align: left;">2016-12-31</td>
<td style="text-align: center;">28</td>
</tr>
<tr>
<td style="text-align: left;">2018-01-01</td>
<td style="text-align: center;">31</td>
</tr>
<tr>
<td style="text-align: left;">2020-01-01</td>
<td style="text-align: center;">30</td>
</tr>
<tr>
<td style="text-align: left;">2020-12-31</td>
<td style="text-align: center;">20</td>
</tr>
</tbody>
</table>
</div>
<p>I want to extract the dates which end with 12-31. How can I do that?</p>
<p>I tried <code>data.index.loc['*-12-31]</code>, but it is not working.</p>
<p>Since these are dates, <code>str.contains</code>, <code>startswith</code>, and <code>endswith</code> are not working either.</p>
<p>Is there any way to do this?</p>
<p>Thanks</p>
|
<p>Convert Date column to <code>datetime</code> data type</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
</code></pre>
<p>Filter by month and day</p>
<pre><code>df.loc[(df.Date.dt.month == 12) & (df.Date.dt.day == 31)]
</code></pre>
<p>Output</p>
<pre><code> Date Price
1 2010-12-31 25
3 2013-12-31 28
5 2016-12-31 28
8 2020-12-31 20
</code></pre>
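<p>If you prefer the string-matching flavor the question attempted, formatting the converted dates back to strings also works (a sketch, though the numeric comparison above avoids the string conversion):</p>
<pre><code>df.loc[df.Date.dt.strftime('%m-%d') == '12-31']
</code></pre>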
|
python|pandas|nsepy
| 2
|
4,411
| 16,410,827
|
How to iterate Numpy array and perform calculation only if element matches a criteria?
|
<p>I want to iterate a numpy array and process only elements match with specific criteria. In the code below, I want to perform calculation only if element is greater than 1.</p>
<pre><code>a = np.array([[1,3,5],
[2,4,3],
[1,2,0]])
for i in range(0, a.shape[0]):
for j in range(0, a.shape[1]):
if a[i,j] > 1:
a[i,j] = (a[i,j] - 3) * 5
</code></pre>
<p>Is it possible to use single-line code instead of the double loop above? and perhaps make it faster?</p>
|
<p>Method #1: use a boolean array to index:</p>
<pre><code>>>> a = np.array([[1,3,5], [2,4,3], [1,2,0]])
>>> a[a > 1] = (a[a > 1] - 3) * 5
>>> a
array([[ 1, 0, 10],
[-5, 5, 0],
[ 1, -5, 0]])
</code></pre>
<p>This computes <code>a > 1</code> twice, although you could assign it to a variable instead. (In practice it's very unlikely to be a bottleneck, of course, although if <code>a</code> is large enough memory can be an issue.)</p>
<p>Method #2: use <code>np.where</code>:</p>
<pre><code>>>> a = np.array([[1,3,5], [2,4,3], [1,2,0]])
>>> np.where(a > 1, (a-3)*5, a)
array([[ 1, 0, 10],
[-5, 5, 0],
[ 1, -5, 0]])
</code></pre>
<p>This only computes <code>a > 1</code> once, but OTOH computes <code>(ax-3)*5</code> for every element <code>ax</code> in <code>a</code>, instead of only doing it for those elements that really need it.</p>
|
python|numpy
| 3
|
4,412
| 57,537,474
|
How to fix 'AttributeError: 'list' object has no attribute 'shape'' error in python with Tensorflow / Keras when loading Model
|
<p>I'm trying to save my model in Keras and then load it, but when I try to use the loaded model it throws an error.</p>
<p>Python Version: 3.6.8
Tensorflow Version: 2.0.0-beta1
Keras Version: 2.2.4-tf</p>
<p>Here is my Code:</p>
<pre><code>from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
print("Tensorflow Version:")
print(tf.version.VERSION)
print("Keras Verstion:")
print(tf.keras.__version__)
minst = tf.keras.datasets.mnist # 28x28 0-9
(x_train, y_train), (x_test, y_test) = minst.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x__test = x_test;
x_test = tf.keras.utils.normalize(x_test, axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
model.save('epic_num.h5')
print("Saved")
# Loading
nmodel = keras.models.load_model("epic_num.h5")
import numpy as np;
# Test
predics = nmodel.predict([x_test])
print(predics)
import matplotlib.pyplot as plt
plt.imshow(x__test[11], cmap=plt.cm.binary)
plt.show()
print(np.argmax(predics[11]))
</code></pre>
<p>The Output:</p>
<pre><code>C:\Users\minec\PycharmProjects\TFMLTest3\venv\Scripts\python.exe C:/Users/minec/PycharmProjects/TFMLTest3/main.py
[... repeated numpy/tensorboard FutureWarnings about "Passing (type, 1) or '1type' as a synonym of type is deprecated" omitted ...]
Tensorflow Version:
2.0.0-beta1
Keras Verstion:
2.2.4-tf
2019-08-17 17:02:25.535385: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING: Logging before flag parsing goes to stderr.
W0817 17:02:25.636994 14512 deprecation.py:323] From C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\ops\math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 60000 samples
32/60000 [..............................] - ETA: 2:39 - loss: 2.3092 - accuracy: 0.0938
1120/60000 [..............................] - ETA: 7s - loss: 1.7317 - accuracy: 0.5580
...
accuracy: 0.9590
9600/10000 [===========================>..] - ETA: 0s - loss: 0.1232 - accuracy: 0.9624
10000/10000 [==============================] - 0s 32us/sample - loss: 0.1271 - accuracy: 0.9610
0.1271383908316493 0.961
Saved
W0817 17:02:29.220302 14512 hdf5_format.py:197] Sequential models without an `input_shape` passed to the first layer cannot reload their optimizer state. As a result, your model isstarting with a freshly initialized optimizer.
Traceback (most recent call last):
File "C:/Users/minec/PycharmProjects/TFMLTest3/main.py", line 41, in <module>
predics = nmodel.predict([x_test])
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 821, in predict
use_multiprocessing=use_multiprocessing)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 705, in predict
x, check_steps=True, steps_name='steps', steps=steps)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2335, in _standardize_user_data
self._set_inputs(cast_inputs)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2553, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 662, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 262, in call
outputs = layer(inputs, **kwargs)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 662, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\minec\PycharmProjects\TFMLTest3\venv\lib\site-packages\tensorflow\python\keras\layers\core.py", line 580, in call
inputs, (tensor_shape.dimension_value(inputs.shape[0]) or
AttributeError: 'list' object has no attribute 'shape'
Process finished with exit code 1
</code></pre>
|
<p>Try using</p>
<pre><code>nmodel.predict(x_test)
</code></pre>
<p>instead of</p>
<pre><code>nmodel.predict([x_test])
</code></pre>
<p>(remove the brackets). Wrapping the array in a plain Python list sends Keras down a code path that reads <code>inputs.shape</code>, and a <code>list</code> has no <code>shape</code> attribute, which is exactly what the last traceback frame shows.</p>
|
python|python-3.x|numpy|tensorflow|keras
| 2
|
4,413
| 57,598,032
|
Sort Monthly Abbreviation columns (Jan, Feb, Mar, etc.) in Dataframe (currently sorting alphabetically)
|
<p>I have a dataframe I've created from stock data. I am counting how many times the 'close > open' by month and by year using a pivot table. If I use the integer for each month my table is in the correct order. If I use the 3-letter abbreviation for each month it sorts alphabetically. How can I get the month abbreviations to appear in the correct order? I'm sure there is a simple solution.</p>
<p>Here is my code:</p>
<pre><code>data = pd.read_csv('SPY.CSV')
data['Date'] = pd.to_datetime(data['Date'])
data.set_index('Date', inplace=True)
data['UpClose'] = np.where(data['Close'] > data['Open'], 1, 0)
data['Year'] = data.index.year
data['Month'] = data.index.month
data['Month'] = pd.to_datetime(data['Month'], format='%m').dt.month_name().str.slice(stop=3)
table = pd.pivot_table(data, values='UpClose', index=['Year'],columns=['Month'], aggfunc=np.sum).reset_index().rename_axis(None, axis=1)
</code></pre>
<p>This outputs (the month abbreviation names sorted alphabetically):</p>
<pre><code> Year Apr Aug Dec Feb Jan Jul Jun Mar May Nov Oct Sep
0 1997 NaN NaN 10.0 NaN NaN NaN NaN NaN NaN 12.0 9.0 7.0
1 1998 10.0 8.0 12.0 11.0 11.0 11.0 13.0 13.0 9.0 12.0 12.0 11.0
2 1999 11.0 11.0 15.0 9.0 10.0 10.0 13.0 13.0 10.0 11.0 12.0 7.0
3 2000 7.0 15.0 10.0 9.0 8.0 10.0 11.0 14.0 9.0 8.0 11.0 7.0
</code></pre>
<p>If I use the Integer instead of the month abbreviations, this is the correct order:</p>
<pre><code> Year 1 2 3 4 5 6 7 8 9 10 11 12
0 1997 NaN NaN NaN NaN NaN NaN NaN NaN 7.0 9.0 12.0 10.0
1 1998 11.0 11.0 13.0 10.0 9.0 13.0 11.0 8.0 11.0 12.0 12.0 12.0
2 1999 10.0 9.0 13.0 11.0 10.0 13.0 10.0 11.0 7.0 12.0 11.0 15.0
3 2000 8.0 9.0 14.0 7.0 9.0 11.0 10.0 15.0 7.0 11.0 8.0 10.0
</code></pre>
<p>Desired Output (month abbreviations in the correct order):</p>
<pre><code> Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
0 1997 NaN NaN NaN NaN NaN NaN NaN NaN 7.0 9.0 12.0 10.0
1 1998 11.0 11.0 13.0 10.0 9.0 13.0 11.0 8.0 11.0 12.0 12.0 12.0
2 1999 10.0 9.0 13.0 11.0 10.0 13.0 10.0 11.0 7.0 12.0 11.0 15.0
3 2000 8.0 9.0 14.0 7.0 9.0 11.0 10.0 15.0 7.0 11.0 8.0 10.0
</code></pre>
|
<p>As WeNYoBen commented, one way to achieve a customized ordering of strings is through an ordered categorical.</p>
<p>Another thing to note is that you can do numeric operations (such as sum) over booleans (True=1, False=0), so <code>np.where(data['Close'] > data['Open'], 1, 0)</code> is really not necessary; <code>data['Close'] > data['Open']</code> will do.</p>
<pre><code>import numpy as np
import pandas_datareader as pdr # Get SPY Data
from pandas.api.types import CategoricalDtype
# Define month order
month_lst = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
# Create ordered month
cat_type = CategoricalDtype(categories=month_lst, ordered=True)
data = (pdr.get_data_yahoo('SPY',start='1997',end='2001')
.assign(UpClose=lambda x:x.Close > x.Open,
Year=lambda x:x.index.year,
Month=lambda x:x.index.month_name().astype(cat_type))
.pivot_table(index='Year',columns='Month',values='UpClose',aggfunc=np.sum))
</code></pre>
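<p>Alternatively, if you want to keep your original pivot code unchanged, you can simply reorder the columns of the finished table afterwards (a sketch using the same <code>month_lst</code>):</p>
<pre><code>table = table.reindex(columns=['Year'] + month_lst)
</code></pre>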
|
python|pandas|sorting
| 1
|
4,414
| 57,683,299
|
Increment the max rows of the pandas DataFrame
|
<p>I have this python function to get financial data from some tickers</p>
<pre><code>def get_quandl_data_df(ticker, start, end, api_key):
    import quandl
    return quandl.get_table('WIKI/PRICES',
                            qopts={'columns': ['ticker', 'date', 'open', 'high', 'low', 'close', 'volume']},
                            ticker=ticker, date={'gte': start, 'lte': end}, api_key=api_key)
</code></pre>
<p>Then</p>
<pre><code># len(tickers_sp500) = 500
data = get_quandl_data_df(tickers_sp500,'2017-01-01','2018-01-01','xxxxxxxxxx')
</code></pre>
<p>So my DataFrame should have around 100k rows, but <code>data.info()</code> returns just 10k:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 7 columns):
ticker 10000 non-null object
date 10000 non-null datetime64[ns]
open 10000 non-null float64
high 10000 non-null float64
low 10000 non-null float64
close 10000 non-null float64
volume 10000 non-null float64
dtypes: datetime64[ns](1), float64(5), object(1)
memory usage: 547.0+ KB
</code></pre>
<p>How can I increase the max rows returned into the pandas DataFrame?</p>
|
<p>Quandl's <code>get_table</code> returns at most 10,000 rows per call by default. Appending the argument <code>paginate=True</code> to the <code>quandl.get_table</code> call inside your function will extend the limit to 1,000,000 rows:</p>
<pre><code>return quandl.get_table('WIKI/PRICES',
                        qopts={'columns': ['ticker', 'date', 'open', 'high', 'low', 'close', 'volume']},
                        ticker=ticker, date={'gte': start, 'lte': end},
                        api_key=api_key, paginate=True)
</code></pre>
|
python|python-3.x|pandas|quandl
| 1
|
4,415
| 57,409,679
|
pandas groupby aggregate keep equal values
|
<p>I am trying to build an aggregator which simply returns a value if it is equal to all other values in the variable, and NaN if it isn't.</p>
<p>It is meant to keep meta information while aggregating sensor data.</p>
<p>I get a strange key error...</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame.from_dict({'v1' : [1,1,1,2,2,2],
'v2' : [1,2,3,4,5,6],
'v3' : [1,1,1,2,3,2],
'v4' : [2,2,2,3,3,3]})
def keep_equal(x):
    if (x == x[0]).all():
        return x[0]
    else:
        return np.NaN
df = df.groupby(df["v1"], as_index=False, observed =True).agg(keep_equal)
</code></pre>
<p>expected output would be:</p>
<pre><code> v1 v2 v3 v4
0 1 NaN 1 2
1 2 NaN NaN 3
</code></pre>
<p>But I get a key error:</p>
<pre><code>Traceback (most recent call last):
File "pandas\_libs\index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 998, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
</code></pre>
|
<p>You need to select by position with <code>iloc</code>. Within <code>groupby</code>, each group keeps its original index labels, so <code>x[0]</code> is a label-based lookup that fails with <code>KeyError: 0</code> for any group whose index doesn't contain the label <code>0</code>; <code>x.iloc[0]</code> selects the first element by position instead.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame.from_dict({'v1' : [1,1,1,2,2,2],
'v2' : [1,2,3,4,5,6],
'v3' : [1,1,1,2,3,2],
'v4' : [2,2,2,3,3,3]})
def keep_equal(x):
    if (x == x.iloc[0]).all():
        return x.iloc[0]
    else:
        return np.NaN
df = df.groupby(df["v1"], as_index=False, observed =True).agg(keep_equal)
print(df)
>>
v1 v2 v3 v4
0 1 NaN 1.0 2
1 2 NaN NaN 3
</code></pre>
|
python|pandas
| 1
|
4,416
| 57,296,791
|
How to display matplotlib numpy.ndarray in tkinter
|
<p>I want to display a matplotlib plot of a numpy.ndarray in tkinter.</p>
<p>I tried it, and in the backend it works fine, but it does not display in tkinter: the canvas with the graph stays empty. Instead, the code below displays the picture in a separate pop-up window. How can I display it in the canvas, inside the window?</p>
<pre><code>from tkinter import *
from tkinter import ttk
import numpy as np
import pandas as pd
from scipy.stats import norm
import requests
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
%matplotlib inline
from yahoofinancials import YahooFinancials
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib
matplotlib.use('TkAgg')

class Scr:
    def __init__(self, master):
        master.geometry('300x300+120+60')
        self.frame_content = ttk.Frame(master)
        self.frame_content.pack()
        tickers = ['AAPL']
        new_data = pd.DataFrame()
        for t in tickers:
            new_data[t] = wb.DataReader(t, data_source='yahoo', start='2004-1-1')['Adj Close']
        lr = np.log(1 + new_data.pct_change())
        var = lr.var()
        mean = lr.mean()
        drift = mean - (0.5 * var)
        stdv = lr.std()
        norm.ppf(0.95)
        x = np.random.rand(10, 2)
        norm.ppf(x)
        Ze = norm.ppf(np.random.rand(10, 2))
        t_intervals = 1000
        iteration = 10
        daily_returns = np.exp(drift.values + stdv.values * norm.ppf(np.random.rand(t_intervals, iteration)))
        S = new_data.iloc[-1]
        am = np.zeros_like(daily_returns)
        am[0] = S
        for t in range(1, t_intervals):
            am[t] = am[t-1] * daily_returns[t]
        graph3 = ttk.Frame(master)
        graph3.pack()
        graph3.place(x=750, y=550)
        plt.plot(am)
        fig3 = matplotlib.pyplot.Figure(figsize=(6, 6))
        canvas3 = FigureCanvasTkAgg(fig3, graph3)
        canvas3.get_tk_widget().pack()
        ax3 = fig3.add_subplot(211)
        am.plot(kind='line', legend=True, ax=ax3).grid(linestyle='dashed')

def main():
    root = Tk()
    scr = Scr(root)
    root.mainloop()

if __name__ == "__main__":
    main()
</code></pre>
<p>The error message I got is:</p>
<p>'numpy.ndarray' object has no attribute 'plot'</p>
|
<p><code>am</code> is <code>numpy.ndarray</code></p>
<pre><code>am = np.zeros_like(daily_returns)
</code></pre>
<p>and it doesn't have a <code>plot()</code> method.</p>
<p>But <code>pandas.DataFrame</code> has one. You have to convert <code>am</code> to a <code>DataFrame</code>:</p>
<pre><code> df = pd.DataFrame(am)
df.plot(kind='line', legend=True, ax=ax3).grid(linestyle = 'dashed')
</code></pre>
<p>(and you can remove <code>plt.plot(am)</code>)</p>
<hr>
<p>And remove <code>graph3.place(x=750,y=550)</code>, which moves the plot far away so that it is invisible; you would have to manually resize the window to see it.</p>
<hr>
<p><a href="https://i.stack.imgur.com/rFhCx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rFhCx.png" alt="enter image description here"></a></p>
<pre><code>from tkinter import *
from tkinter import ttk
import numpy as np
import pandas as pd
from scipy.stats import norm
import requests
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
from yahoofinancials import YahooFinancials
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib
matplotlib.use('TkAgg')

class Scr:
    def __init__(self, master):
        master.geometry('300x300+120+60')
        self.frame_content = ttk.Frame(master)
        self.frame_content.pack()
        tickers = ['AAPL']
        new_data = pd.DataFrame()
        for t in tickers:
            new_data[t] = wb.DataReader(t, data_source='yahoo', start='2004-1-1')['Adj Close']
        lr = np.log(1 + new_data.pct_change())
        var = lr.var()
        mean = lr.mean()
        drift = mean - (0.5 * var)
        stdv = lr.std()
        norm.ppf(0.95)
        x = np.random.rand(10, 2)
        norm.ppf(x)
        Ze = norm.ppf(np.random.rand(10, 2))
        t_intervals = 1000
        iteration = 10
        daily_returns = np.exp(drift.values + stdv.values * norm.ppf(np.random.rand(t_intervals, iteration)))
        am = np.zeros_like(daily_returns)
        am[0] = new_data.iloc[-1]
        for t in range(1, t_intervals):
            am[t] = am[t-1] * daily_returns[t]
        graph3 = ttk.Frame(master)
        graph3.pack()
        #graph3.place(x=750,y=550)
        fig3 = matplotlib.pyplot.Figure(figsize=(6, 6))
        canvas3 = FigureCanvasTkAgg(fig3, graph3)
        canvas3.get_tk_widget().pack()
        ax3 = fig3.add_subplot(211)
        df = pd.DataFrame(am)
        df.plot(kind='line', legend=True, ax=ax3).grid(linestyle='dashed')

def main():
    root = Tk()
    scr = Scr(root)
    root.mainloop()

if __name__ == "__main__":
    main()
</code></pre>
|
python-3.x|matplotlib|tkinter|numpy-ndarray
| 1
|
4,417
| 57,482,954
|
Grouping and splitting to avoid leakage
|
<p>I have a pandas <code>dataframe</code> where the data is arranged as follows:</p>
<pre><code> filename label
0 4456723 0
1 4456723_01 0
2 4456723_02 0
3 ab43912 1
4 ab43912_01 1
5 ab43912_03 1
... ... ...
</code></pre>
<p>I want to randomly split this <code>dataframe</code> into <code>training</code> and <code>validation</code> sets. But if I do so naively, I will introduce leakage, because the files are images with slight variations represented under different names; for example, <code>ab43912, ab43912_01, ab43912_03</code> are all the same image with some variations.</p>
<p>Is there any efficient way to group these files and then make a split that doesn't introduce leakage?</p>
|
<p>You can manually select ~80% of the unique file handles randomly.</p>
<pre><code>df = pd.DataFrame({'filename': list('aaabbbcccdddeeefff')})
df['filename'] = df['filename'] + ['', '_01', '_02']*6
</code></pre>
<hr>
<pre><code># Get the unique handles
files = df.filename.str.split('_').str[0]
# Randomly select ~80%.
m = files.isin(np.random.choice(files.unique(), int(files.nunique()*0.8), replace=False))
# Split
train, test = df.loc[m], df.loc[~m]
</code></pre>
<hr>
<p>In effect we got a 2/3-1/3 split because of the small N</p>
<p><code>train</code>:</p>
<pre><code> filename
0 a
1 a_01
2 a_02
6 c
7 c_01
8 c_02
12 e
13 e_01
14 e_02
15 f
16 f_01
17 f_02
</code></pre>
<p><code>test</code>:</p>
<pre><code> filename
3 b
4 b_01
5 b_02
9 d
10 d_01
11 d_02
</code></pre>
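<p>The same idea is packaged in scikit-learn, if that dependency is acceptable: <code>GroupShuffleSplit</code> keeps every row of a group on the same side of the split (a sketch):</p>
<pre><code>from sklearn.model_selection import GroupShuffleSplit

groups = df.filename.str.split('_').str[0]
splitter = GroupShuffleSplit(test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=groups))
train, test = df.iloc[train_idx], df.iloc[test_idx]
</code></pre>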
|
python|python-3.x|pandas
| 3
|
4,418
| 73,154,668
|
unexpected keyword argument trying to instantiate a class inheriting from torch.nn.Module
|
<p>I have seen similar questions, but most seem a little more involved. My problem seems to me to be very straightforward, yet I cannot figure it out. I am simply trying to define a class and then instantiate it, but the arguments passed to the constructor are not recognized.</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.transforms as transforms

# fully connected network
class NN(nn.Module):
    def __int__(self, in_size, num_class):
        super(NN, self).__init__()
        self.fc1 = nn.Linear(in_size, 50)
        self.fc2 = nn.Linear(50, num_class)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# initialize network
model = NN(in_size=input_size, num_class=num_classes)
<p>I get the error: <code>__init__() got an unexpected keyword argument 'in_size'</code>
I am using Python 3.1, PyTorch 1.7.1, using PyCharm on macOS Monterey. Thank you!</p>
|
<p>You have a typo: it should be <code>__init__</code> instead of <code>__int__</code>. Because the misspelled method is never recognized as the constructor, Python falls back to the inherited <code>nn.Module.__init__</code>, which doesn't accept <code>in_size</code> or <code>num_class</code>, hence the error.</p>
|
python|pytorch
| 0
|
4,419
| 70,675,292
|
Python Same Period Last Year in Pandas with GroupBy
|
<p>I have the following DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
n = 72
dates = list(pd.date_range(start='2016-01-01',periods=n,freq='MS'))
products = ['a','b','c']
countries = ['a','b','c']
df = pd.merge(pd.DataFrame({'product':products}), pd.DataFrame({'country':countries}),how='cross')
df = pd.merge(df, pd.DataFrame({'ds':dates}),how='cross')
df['y'] = np.random.default_rng(12345).integers(low=200,high=1000,size=len(df))
</code></pre>
<p>How do I copy 'y' from the same month last year to the current month? <br />
What I need is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ds</th>
<th>product</th>
<th>country</th>
<th style="text-align: right;">y</th>
<th style="text-align: right;">last_year</th>
</tr>
</thead>
<tbody>
<tr>
<td>2016-01-01</td>
<td>a</td>
<td>a</td>
<td style="text-align: right;">759</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td>2016-01-01</td>
<td>b</td>
<td>a</td>
<td style="text-align: right;">330</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td>2016-01-01</td>
<td>c</td>
<td>a</td>
<td style="text-align: right;">794</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td>2016-02-01</td>
<td>a</td>
<td>b</td>
<td style="text-align: right;">633</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td>...</td>
<td>..</td>
<td>..</td>
<td style="text-align: right;">..</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td>2017-01-01</td>
<td>a</td>
<td>a</td>
<td style="text-align: right;">654</td>
<td style="text-align: right;">759</td>
</tr>
<tr>
<td>2017-01-01</td>
<td>b</td>
<td>a</td>
<td style="text-align: right;">295</td>
<td style="text-align: right;">330</td>
</tr>
<tr>
<td>2017-01-01</td>
<td>c</td>
<td>a</td>
<td style="text-align: right;">969</td>
<td style="text-align: right;">794</td>
</tr>
<tr>
<td>...</td>
<td>..</td>
<td>..</td>
<td style="text-align: right;">..</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td>2017-02-01</td>
<td>a</td>
<td>b</td>
<td style="text-align: right;">464</td>
<td style="text-align: right;">636</td>
</tr>
<tr>
<td>...</td>
<td>..</td>
<td>..</td>
<td style="text-align: right;">..</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td>2021-12-01</td>
<td>b</td>
<td>c</td>
<td style="text-align: right;">498</td>
<td style="text-align: right;">722</td>
</tr>
<tr>
<td>...</td>
<td>..</td>
<td>..</td>
<td style="text-align: right;">..</td>
<td style="text-align: right;">...</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried the following with no success:</p>
<pre class="lang-py prettyprint-override"><code>df.set_index('ds',inplace=True)
df['last_year'] = df.groupby(['product', 'country']).y.shift(freq='12MS').reset_index()['y']
</code></pre>
<p>This shows NaN in all rows.</p>
<p>And</p>
<pre class="lang-py prettyprint-override"><code>df = df.groupby(['product', 'country'])['y'].rolling(12).apply(lambda x: x[-1])
</code></pre>
<p>gives</p>
<pre class="lang-py prettyprint-override"><code>KeyError: -1
</code></pre>
<p>Hope you can assist!</p>
|
<p>Not sure if this is your expected result. You can try this:</p>
<pre><code>df['last_year'] = df['y'].shift(12)
</code></pre>
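<p>Note that a flat <code>shift(12)</code> relies on the frame being sorted by product, country, then date with complete monthly blocks, as in the generated example; to be safe across group boundaries, shift within each group instead (a sketch):</p>
<pre><code>df['last_year'] = df.groupby(['product', 'country'])['y'].shift(12)
</code></pre>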
|
python|pandas|group-by|offset|forecasting
| 0
|
4,420
| 70,650,161
|
how to create column with other dataframe as legend in python (groupby() alike)- pandas library
|
<p>I want to use df2 as a lookup ("legend") for my primary df ('main_df').</p>
<p>What does the <strong>code</strong> need to be?</p>
<pre><code>import numpy as np
import pandas as pd

main_df = pd.DataFrame({'genre_NAME': ['comedy', 'action', 'horror'], 'genre_id': [np.nan, np.nan, np.nan]})
df2 = pd.DataFrame({'genre': ['comedy', 'horror'], 'id': [0, 1]})
# main_df is not related to df2

# some code

# result
print(main_df)
# OUTPUT: 'genre_NAME': ['comedy', 'action', 'horror'], 'genre_id': [0, nan, 1]
</code></pre>
|
<p>You can access and change a specific value in a dataframe with <code>df.loc[]</code>. So in your case:</p>
<pre><code>for _, line in df2.iterrows():
    main_df.loc[main_df.genre_NAME == line.genre, "genre_id"] = line.id
</code></pre>
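<p>A vectorized alternative using <code>map</code> (a sketch) avoids the row loop entirely:</p>
<pre><code>main_df['genre_id'] = main_df['genre_NAME'].map(df2.set_index('genre')['id'])
</code></pre>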
|
python|pandas|dataframe|pandas-groupby
| 0
|
4,421
| 70,622,747
|
How to identify a number that has a correspondent negative in a list?
|
<p>I am a beginner on this Python/VBA journey, so I have a problem I kindly want to ask you about.</p>
<p>So I have a list of rows in an excel spreadsheet (also I treat this data with Pandas to reduce the number of rows I have to analyse on Excel)</p>
<p>The fact is that I have tens of thousands of rows that, in a specific column, have values like these:</p>
<pre><code> col
0 -142.60
1 142.60
2 -565.78
3 565.78
4 -90.00
5 90.00
6 63.26
7 -63.26
8 117.96
</code></pre>
<p>So I just want to know how I can automatically delete the rows that have a corresponding negative counterpart (each such pair sums to 0).</p>
<p>Here, only the 117.96 row should remain.</p>
|
<p>Assuming you have floats in this column, you can compare each row with the opposite of the previous one and use that to subset only the rows you want to keep (with floats produced by arithmetic, you may need a tolerance-based comparison instead of exact equality).</p>
<pre><code># does this row cancel the previous one?
m = df['col'].eq(-df['col'].shift())
# drop both members of each cancelling pair
df2 = df.loc[~(m | m.shift(-1))]
</code></pre>
<p>output:</p>
<pre><code> col
8 117.96
</code></pre>
<p>Used input:</p>
<pre><code> col
0 -142.60
1 142.60
2 -565.78
3 565.78
4 -90.00
5 90.00
6 63.26
7 -63.26
8 117.96
</code></pre>
|
python|excel|pandas|numpy
| 2
|
4,422
| 42,647,710
|
Compare Boolean Row values across multiple Columns in Pandas using & / np.where() / np.any()
|
<p>I have a dataframe that looks like:</p>
<pre><code> a A a B a C a D a E a F p A p B p C p D p E p F
0 0 0 0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0 0 0 0
2 0 1 0 0 0 0 0 0 1 0 0 0
3 0 0 1 0 0 1 0 0 0 0 0 0
4 0 0 0 1 0 1 0 0 0 0 0 0
5 0 0 0 0 1 0 0 0 0 0 0 0
6 0 0 0 0 0 0 1 0 0 0 0 0
df = pd.DataFrame({'p A':[0,0,0,0,0,0,1],'p B':[0,0,0,0,0,0,0],'p C':[0,0,1,0,0,0,0],'p D':[0,0,0,0,0,0,0],'p E':[0,0,0,0,0,0,0],'p F':[0,0,0,0,0,0,0],'a A':[0,1,0,0,0,0,0],'a B':[0,0,1,0,0,0,0],'a C':[0,0,0,1,0,0,0],'a D':[0,0,0,0,1,0,0],'a E':[0,0,0,0,0,1,0],'a F': [0,0,0,1,1,0,0]})
</code></pre>
<p>Note: This is a much simplified version of my actual data. </p>
<p>a stands for Actual; p stands for Predicted; A - F represent a series of labels</p>
<p>I want to write a query that, for each row in my dataframe, returns True when: (all row values in "p columns" = 0 ) and (at least one row value in "a columns" = 1) i.e. for each row, p columns are fixed at 0 and at least 1 a column = 1.</p>
<p>Using answers to <a href="https://stackoverflow.com/questions/22701799/pandas-dataframe-find-rows-where-all-columns-equal">Pandas Dataframe Find Rows Where all Columns Equal</a> and <a href="https://stackoverflow.com/questions/27474921/compare-two-columns-using-pandas">Compare two columns using pandas</a>
I achieve this currently by using <code>&</code> and <code>np.any()</code> </p>
<pre><code>((df.iloc[:,6] == 0) & (df.iloc[:,7] == 0) & (df.iloc[:,8] == 0) & (df.iloc[:,9] == 0) & (df.iloc[:,10] == 0) & (df.iloc[:,11] == 0) & df.iloc[:,0:6].any(axis = 1) )
>>
0 False
1 True
2 False
3 True
4 True
5 True
6 False
dtype: bool
</code></pre>
<p>Is there a more succinct, readable way I can achieve this?</p>
|
<p>You can use <code>~</code> to invert a boolean mask, together with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> to select by position (note the slice <code>6:12</code>, so that all six p columns are covered):</p>
<pre><code>print (~df.iloc[:,6:12].any(1) & df.iloc[:,0:6].any(1))
0 False
1 True
2 False
3 True
4 True
5 True
6 False
dtype: bool
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow noreferrer"><code>filter</code></a> to select by column names, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>any</code></a> to check for at least one <code>True</code> per row, or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>all</code></a> to check whether all values per row are <code>True</code>.</p>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a> function handles the comparison with <code>0</code>.</p>
<pre><code>print (~df.filter(like='p').any(1) & df.filter(like='a').any(1))
0 False
1 True
2 False
3 True
4 True
5 True
6 False
dtype: bool
</code></pre>
<hr>
<pre><code>print (df.filter(like='p').eq(0).all(1) & df.filter(like='a').any(1))
0 False
1 True
2 False
3 True
4 True
5 True
6 False
dtype: bool
</code></pre>
|
python|pandas|numpy|boolean|any
| 3
|
4,423
| 42,878,600
|
Access a list within Pandas Dataframe apply function without making list global
|
<p>I have a Pandas dataframe. For each row in this frame I want to make a certain check. If the check yields <code>True</code>, then I want to add certain columns of the dataframe-row to a list-structure.</p>
<p>How can I access a list from within the apply function without the need to create a global list variable? Is this possible? Or is there a better way to do this?</p>
<p>The code looks like this:</p>
<pre><code>df.apply(checkFunction, axis=1)

def checkFunction(row):
    if (check == True):
        myList.append(row)
    return row
</code></pre>
|
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>df.apply()</code></a> allows you to pass positional arguments and keywords.</p>
<pre><code>def checkFunction(row, lst):
    if (check == True):
        lst.append(row)
    return row

my_list = []
df.apply(checkFunction, axis=1, args=(my_list,))
</code></pre>
<p>Don't call it <code>list</code>; that shadows the built-in <code>list</code> type.</p>
<p>NOTE: The value passed to <code>args</code> must be a tuple. I just edited the sample code to show this.</p>
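<p>A minimal runnable sketch for illustration (the <code>collect</code> function and its even-number check are made up for this example):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})

def collect(row, lst):
    if row['a'] % 2 == 0:  # hypothetical check
        lst.append(row)
    return row

my_list = []
df.apply(collect, axis=1, args=(my_list,))  # note the trailing comma: args must be a tuple
print(len(my_list))  # 1 -- only the row with a == 2 was collected
</code></pre>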
|
python|pandas|dataframe|apply
| 1
|
4,424
| 42,606,387
|
Why can't I open my files?
|
<pre><code>import pandas as pd
df=pd.read_csv(r"C:\Users\champion\Desktop\政大資料科學競賽\104.1~106.1\臺北捷運各站出站量統計_201501.csv",encoding='big5')
</code></pre>
<p>I could execute it before; I don't know why Python raises an OSError now.</p>
<p>Could it be because my data (the file path) contains Chinese characters?</p>
<p>I browsed a lot of questions, but none of them answered mine.</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-9-0dfa88abff19> in <module>()
----> 1 df=pd.read_csv(r'C:\Users\champion\Desktop\政大資料科學競賽\104.1~106.1\臺北捷運各站出站量統計_201503.csv')
D:\anoconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
644 skip_blank_lines=skip_blank_lines)
645
--> 646 return _read(filepath_or_buffer, kwds)
647
648 parser_f.__name__ = name
D:\anoconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
387
388 # Create the parser.
--> 389 parser = TextFileReader(filepath_or_buffer, **kwds)
390
391 if (nrows is not None) and (chunksize is not None):
D:\anoconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
728 self.options['has_index_names'] = kwds['has_index_names']
729
--> 730 self._make_engine(self.engine)
731
732 def close(self):
D:\anoconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
921 def _make_engine(self, engine='c'):
922 if engine == 'c':
--> 923 self._engine = CParserWrapper(self.f, **self.options)
924 else:
925 if engine == 'python':
D:\anoconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1388 kwds['allow_leading_cols'] = self.index_col is not False
1389
-> 1390 self._reader = _parser.TextReader(src, **kwds)
1391
1392 # XXX
pandas\parser.pyx in pandas.parser.TextReader.__cinit__ (pandas\parser.c:4184)()
pandas\parser.pyx in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:8471)()
OSError: Initializing from file failed
</code></pre>
<p>The error is "OSError: Initializing from file failed". Does it mean something in my system is wrong?</p>
|
<p>If you use Python 3.6, you can try:</p>
<blockquote>
<p>df=pd.read_csv(r"C:\Users\champion\Desktop\政大資料科學競賽\104.1~106.1\臺北捷運各站出站量統計_201501.csv",encoding='big5',engine="python")</p>
</blockquote>
<p>The default C parser in some pandas/Python 3.6 combinations on Windows fails to open file paths containing non-ASCII characters (which is exactly the "Initializing from file failed" error); the slower Python engine handles such paths.</p>
|
python|pandas
| -1
|
4,425
| 30,423,052
|
Matplotlib: Import and plot multiple time series with legends direct from .csv
|
<p>I have several spreadsheets containing data saved as comma delimited (.csv) files in the following format: The first row contains column labels as strings ('Time', 'Parameter_1'...). The first column of data is Time and each subsequent column contains the corresponding parameter data, as a float or integer. </p>
<p>I want to plot each parameter against Time on the same plot, with parameter legends which are derived directly from the first row of the .csv file. </p>
<p>My spreadsheets have different numbers of (columns of) parameters to be plotted against Time; so I'd like to find a generic solution which will also derive the number of columns directly from the .csv file.</p>
<p>The attached minimal working example shows what I'm trying to achieve using np.loadtxt (minus the legend); but I can't find a way to import the column labels from the .csv file to make the legends using this approach.</p>
<p>np.genfromtext offers more functionality, but I'm not familiar with this and am struggling to find a way of using it to do the above. </p>
<p>Plotting data in this style from .csv files must be a common problem, but I've been unable to find a solution on the web. I'd be very grateful for your help & suggestions.</p>
<p>Many thanks</p>
<pre><code>"""
Example data: Data.csv:
Time,Parameter_1,Parameter_2,Parameter_3
0,10,0,10
1,20,30,10
2,40,20,20
3,20,10,30
"""
import numpy as np
import matplotlib.pyplot as plt
data = np.loadtxt('Data.csv', skiprows=1, delimiter=',') # skip the column labels
cols = data.shape[1] # get the number of columns in the array
for n in range (1,cols):
    plt.plot(data[:,0],data[:,n]) # plot each parameter against time

plt.xlabel('Time',fontsize=14)
plt.ylabel('Parameter values',fontsize=14)
plt.show()
</code></pre>
|
<p>Here's my minimal working example for the above using genfromtxt rather than loadtxt, in case it is helpful for anyone else.
I'm sure there are more concise and elegant ways of doing this (I'm always happy to get constructive criticism on how to improve my coding), but it makes sense and works OK:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
arr = np.genfromtxt('Data.csv', delimiter=',', dtype=None) # dtype=None automatically defines appropriate format (e.g. string, int, etc.) based on cell contents
names = (arr[0]) # select the first row of data = column names
for n in range (1,len(names)): # plot each column in turn against column 0 (= time)
    plt.plot (arr[1:,0],arr[1:,n],label=names[n]) # omitting the first row ( = column names)

plt.legend()
plt.show()
</code></pre>
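<p>A slightly shorter sketch, assuming the header row holds the column names: <code>names=True</code> lets <code>genfromtxt</code> parse the header itself and return a structured array, so no manual slicing of the first row is needed:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt('Data.csv', delimiter=',', names=True)
for name in data.dtype.names[1:]:  # skip the 'Time' column
    plt.plot(data['Time'], data[name], label=name)

plt.legend()
plt.show()
</code></pre>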
|
excel|csv|numpy|matplotlib|import
| 2
|
4,426
| 26,467,696
|
pandas DataFrame conditional string split
|
<p>I have a column of influenza virus names within my DataFrame. Here is a representative sampling of the name formats present:</p>
<ol>
<li>(A/Egypt/84/2001(H1N2))</li>
<li>A/Brazil/1759/2004(H3N2)</li>
<li>A/Argentina/126/2004</li>
</ol>
<p>I am only interested in getting out A/COUNTRY/NUMBER/YEAR from the strain names, e.g. <strong>A/Brazil/1759/2004</strong>. I have tried doing:</p>
<pre><code>df['Strain Name'] = df['Original Name'].str.split("(")
</code></pre>
<p>However, if I try accessing <code>.str[0]</code>, then I miss out case #1. If I do <code>.str[1]</code>, I miss out case 2 and 3.</p>
<p>Is there a solution that works for all three cases? Or is there some way to apply a condition in string splits, without iterating over each row in the data frame?</p>
|
<p>So, based on EdChum's recommendation, I'll post my answer here.</p>
<p>Minimal data frame required for tackling this problem:</p>
<pre><code>Index Strain Name Year
0 (A/Egypt/84/2001(H1N2)) 2001
1 A/Brazil/1759/2004(H3N2) 2004
2 A/Argentina/126/2004 2004
</code></pre>
<p>Code for getting the strain names only, without parentheses or anything else inside the parentheses:</p>
<pre><code>df['Strain Name'] = df['Strain Name'].str.split('(').apply(lambda x: max(x, key=len))
</code></pre>
<p>This code works for the particular case spelled here, as the trick is that the isolate's "strain name" is the longest string after splitting by the opening parentheses ("<code>(</code>") value.</p>
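<p>An alternative sketch using a regular expression, assuming the A/COUNTRY/NUMBER/YEAR pattern is always present in the name:</p>
<pre><code># extract the first A/.../digits/4-digit-year pattern from each name
df['Strain Name'] = df['Strain Name'].str.extract(r'(A/[^/]+/\d+/\d{4})')
</code></pre>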
|
python|pandas
| 1
|
4,427
| 26,590,359
|
Compare Dictionaries for close enough match
|
<p>I am looking for a good way to compare two dictionaries which contain the information of a matrix. So the structure of my dictionaries are the following, both dictionaries have identical keys:</p>
<pre><code>dict_1 = {("a","a"):0.01, ("a","b"): 0.02, ("a","c"): 0.00015, ...
dict_2 = {("a","a"):0.01, ("a","b"): 0.018, ("a","c"): 0.00014, ...
</code></pre>
<p>If I have two matrices, i.e. lists of lists, I can use <code>numpy.allclose</code>. Is there something similar for dictionaries, or is there a nice way to transform my dictionaries into such matrices?</p>
<p>Thanks for your help.</p>
|
<p>Easiest way I could think of:</p>
<pre><code>keylist = dict_1.keys()
array_1 = numpy.array([dict_1[key] for key in keylist])
array_2 = numpy.array([dict_2[key] for key in keylist])

if numpy.allclose(array_1, array_2):
    print('Equal')
else:
    print('Not equal')
</code></pre>
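<p>One assumption of the snippet above worth guarding against is that both dictionaries really do share the same keys:</p>
<pre><code>assert set(dict_1) == set(dict_2), "dictionaries have different keys"
</code></pre>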
|
python|python-2.7|numpy|dictionary
| 6
|
4,428
| 39,112,689
|
Can I create a new column based on when the value changes in another column?
|
<p>Let s say I have this <code>df</code></p>
<pre><code>print(df)
DATE_TIME A B
0 10/08/2016 12:04:56 1 5
1 10/08/2016 12:04:58 1 6
2 10/08/2016 12:04:59 2 3
3 10/08/2016 12:05:00 2 2
4 10/08/2016 12:05:01 3 4
5 10/08/2016 12:05:02 3 6
6 10/08/2016 12:05:03 1 3
7 10/08/2016 12:05:04 1 2
8 10/08/2016 12:05:05 2 4
9 10/08/2016 12:05:06 2 6
10 10/08/2016 12:05:07 3 4
11 10/08/2016 12:05:08 3 2
</code></pre>
<p>The values in column <code>['A']</code> repeat over time. I need a column, though, where they get a new ID each time they change, so that I would have something like the following <code>df</code>:</p>
<pre><code>print(df)
DATE_TIME A B C
0 10/08/2016 12:04:56 1 5 1
1 10/08/2016 12:04:58 1 6 1
2 10/08/2016 12:04:59 2 3 2
3 10/08/2016 12:05:00 2 2 2
4 10/08/2016 12:05:01 3 4 3
5 10/08/2016 12:05:02 3 6 3
6 10/08/2016 12:05:03 1 3 4
7 10/08/2016 12:05:04 1 2 4
8 10/08/2016 12:05:05 2 4 5
9 10/08/2016 12:05:06 2 6 5
10 10/08/2016 12:05:07 3 4 6
11 10/08/2016 12:05:08 3 2 6
</code></pre>
<p>Is there a way to do this with Python? I am still very new to this and hoped to find something helpful in pandas, but I have not found anything yet. In my original dataframe the values in column <code>['A']</code> change at irregular intervals, approximately every ten minutes, not every two rows as in my example. Does anybody have an idea how I could approach this task? Thank you</p>
|
<p>You can use the <em>shift-cumsum</em> pattern.</p>
<pre><code>df['C'] = (df.A != df.A.shift()).cumsum()
>>> df
DATE_TIME A B C
0 10/08/2016 12:04:56 1 5 1
1 10/08/2016 12:04:58 1 6 1
2 10/08/2016 12:04:59 2 3 2
3 10/08/2016 12:05:00 2 2 2
4 10/08/2016 12:05:01 3 4 3
5 10/08/2016 12:05:02 3 6 3
6 10/08/2016 12:05:03 1 3 4
7 10/08/2016 12:05:04 1 2 4
8 10/08/2016 12:05:05 2 4 5
9 10/08/2016 12:05:06 2 6 5
10 10/08/2016 12:05:07 3 4 6
11 10/08/2016 12:05:08 3 2 6
</code></pre>
<p>As a side note, this is a popular pattern for grouping. For example, to get the average <code>B</code> value of each such group:</p>
<pre><code>df.groupby((df.A != df.A.shift()).cumsum()).B.mean()
</code></pre>
|
python|pandas|uniqueidentifier
| 5
|
4,429
| 19,590,966
|
Memory error with large data sets for pandas.concat and numpy.append
|
<p>I am facing a problem where I have to generate large DataFrames in a loop (50 iterations, each computing two 2000 x 800 pandas DataFrames). I would like to keep the results in memory in a bigger DataFrame, or in a dictionary-like structure.
When using pandas.concat, I get a memory error at some point in the loop. The same happens when using numpy.append to store the results in a dictionary of numpy arrays rather than in a DataFrame. In both cases, I still have a lot of available memory (several GB). Is this too much data for pandas or numpy to process? Are there more memory-efficient ways to store my data without saving it to disk?</p>
<p>As an example, the following script fails as soon as <code>nbIds</code> is greater than 376:</p>
<pre><code>import pandas as pd
import numpy as np
nbIds = 376
dataids = range(nbIds)
dataCollection1 = []
dataCollection2 = []
for bs in range(50):
    newData1 = pd.DataFrame(np.reshape(np.random.uniform(size=2000 * len(dataids)),
                                       (2000, len(dataids))))
    dataCollection1.append(newData1)
    newData2 = pd.DataFrame(np.reshape(np.random.uniform(size=2000 * len(dataids)),
                                       (2000, len(dataids))))
    dataCollection2.append(newData2)

dataCollection1 = pd.concat(dataCollection1).reset_index(drop=True)
dataCollection2 = pd.concat(dataCollection2).reset_index(drop=True)
</code></pre>
<p>The code below fails when <code>nbIds</code> is 665 or higher</p>
<pre><code>import pandas as pd
import numpy as np
nbIds = 665
dataids = range(nbIds)
dataCollection1 = dict( (i , np.array([])) for i in dataids )
dataCollection2 = dict( (i , np.array([])) for i in dataids )
for bs in range(50):
    newData1 = np.reshape(np.random.uniform(size=2000 * len(dataids)),
                          (2000, len(dataids)))
    newData1 = pd.DataFrame(newData1)
    newData2 = np.reshape(np.random.uniform(size=2000 * len(dataids)),
                          (2000, len(dataids)))
    newData2 = pd.DataFrame(newData2)
    for i in dataids:
        dataCollection1[i] = np.append(dataCollection1[i],
                                       np.array(newData1[i]))
        dataCollection2[i] = np.append(dataCollection2[i],
                                       np.array(newData2[i]))
</code></pre>
<p>I do need to compute both DataFrames every time, and for each element <code>i</code> of <code>dataids</code> I need to obtain a pandas Series or a numpy array containing the 50 * 2000 numbers generated for <code>i</code>. Ideally, I need to be able to run this with <code>nbIds</code> equal to 800 or more.
Is there a straightforward way of doing this?</p>
<p>I am using a 32-bit Python with Python 2.7.5, pandas 0.12.0 and numpy 1.7.1.</p>
<p>Thank you very much for your help!</p>
|
<p>This is essentially what you are doing. Note that it doesn't make much difference from a memory perspective whether you do the conversion to DataFrames before or after.</p>
<p>But you can specify <code>dtype='float32'</code> to effectively halve your memory usage.</p>
<pre><code>In [45]: np.concatenate([ np.random.uniform(size=2000 * 1000).astype('float32').reshape(2000,1000) for i in xrange(50) ]).nbytes
Out[45]: 400000000
In [46]: np.concatenate([ np.random.uniform(size=2000 * 1000).reshape(2000,1000) for i in xrange(50) ]).nbytes
Out[46]: 800000000
In [47]: DataFrame(np.concatenate([ np.random.uniform(size=2000 * 1000).reshape(2000,1000) for i in xrange(50) ]))
Out[47]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Columns: 1000 entries, 0 to 999
dtypes: float64(1000)
</code></pre>
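<p>Applied to the loop in the question, a sketch would be (the rest of the loop unchanged):</p>
<pre><code>newData1 = pd.DataFrame(np.random.uniform(size=2000 * len(dataids))
                          .astype('float32')
                          .reshape(2000, len(dataids)))
</code></pre>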
|
python|python-2.7|numpy|pandas
| 6
|
4,430
| 33,651,668
|
How to add leading zero formatting to string in Pandas?
|
<p><strong>Objective:</strong> To format <code>['Birth Month']</code> with leading zeros</p>
<p>Currently, I have this code:</p>
<pre><code>import pandas as pd
import numpy as np
df1=pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])])
df1['Birth Year']= np.random.randint(1905,1995, len(df1))
df1['Birth Month']= str(np.random.randint(1,12, len(df1))).zfill(2)
df1
</code></pre>
<p>Which produces a list of values in <code>['Birth Month']</code> which is not what I need:</p>
<pre><code> A B Birth Year Birth Month
0 1 4 1912 [4 5 9]
1 2 5 1989 [4 5 9]
2 3 6 1921 [4 5 9]
</code></pre>
<p>Instead, I am looking for values and formatting like the following in <code>['Birth Month']</code>:</p>
<pre><code> A B Birth Year Birth Month
0 1 4 1912 04
1 2 5 1989 12
2 3 6 1921 09
</code></pre>
|
<p>Cast the dtype of the series to <code>str</code> using <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.astype.html#pandas.Series.astype" rel="noreferrer"><code>astype</code></a> and use vectorised <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.zfill.html" rel="noreferrer"><code>str.zfill</code></a> to pad with <code>0</code>:</p>
<pre><code>In [212]:
df1=pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])])
df1['Birth Year']= np.random.randint(1905,1995, len(df1))
df1['Birth Month']= pd.Series(np.random.randint(1,12, len(df1))).astype(str).str.zfill(2)
df1
Out[212]:
A B Birth Year Birth Month
0 1 4 1940 09
1 2 5 1945 04
2 3 6 1962 03
</code></pre>
<p>All you did was assign a single scalar value (which is why every row is the same): <code>str()</code> turned the whole random array into the string representation of a list:</p>
<pre><code>In [217]:
df1['Birth Month'].iloc[0]
Out[217]:
'[3 6 9]'
</code></pre>
<p>You can see the result of the assignment here broken down:</p>
<pre><code>In [213]:
(np.random.randint(1,12, len(df1)))
Out[213]:
array([5, 7, 4])
In [214]:
str(np.random.randint(1,12, len(df1))).zfill(2)
Out[214]:
'[2 9 5]'
</code></pre>
|
python|string|numpy|pandas|dataframe
| 9
|
4,431
| 33,534,657
|
How to find index value from meshgrid in numpy
|
<p>I have the following problem: I have two surface equations, and I am looking at what point they are zero. So I have the following:</p>
<pre><code>b = np.arange(0,2,0.1)
k = np.arange(0,50,1)
b,k = np.meshgrid(b,k)
</code></pre>
<p>with these I produce <code>z1</code> and <code>z2</code>, massive formulas, but they both use <code>b</code> and <code>k</code>:</p>
<pre><code>z1 = ((0.5*rho*k**2 * Vd**2 * c)*(Cl * 0.1516*b**3 +
Cd*(((b*np.sqrt(b**2 * k**2 +1))/(2*k**2)) -
((np.log(np.sqrt(b**2 * k**2 + 1) + b*k))/(2*k**3)))) - F)
z2 = ((Cl * 0.1516 * b**3 * k**(-1)) -
((Cd/(8*k**4))*((3*np.log(np.sqrt(b**2 * k**2 + 1) + b*k)) +
(b*np.sqrt(b**2 * k**2 +1)*(2*b**2 * k**2 -3)*k))))
</code></pre>
<p>Now I know how to find the closest point at which z1 and z2 are zero. Just like below:</p>
<pre><code>print min(z1[(-0.1<z1)&(z1<0.1)]), min(z2[(-0.1<z2)&(z2<0.1)])
</code></pre>
<p>but with these I only get the z-value which gives me a close value to zero. What I need is to find which <code>b</code> and <code>k</code> values correspond to that given result of either <code>z1</code> or <code>z2</code>. </p>
<p>I tried to index it, but I seem not to do it correctly.</p>
|
<p>In this case, your "is close to zero" expression <code>(-0.1<z1)&(z1<0.1)</code> is an array of booleans. To find the indices of the <code>True</code> items you simply need to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html" rel="nofollow"><code>nonzero()</code></a>.</p>
<pre><code>((-0.1<z1) & (z1<0.1)).nonzero()
</code></pre>
<p>For example:</p>
<pre><code>>>> np.array([False, False, True, False, True, True, False]).nonzero()
(array([2, 4, 5]),)
</code></pre>
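<p>To map those indices back to the <code>b</code> and <code>k</code> values, a sketch using the meshgrid arrays defined in the question:</p>
<pre><code>idx = ((-0.1 < z1) & (z1 < 0.1)).nonzero()
b_close, k_close = b[idx], k[idx]  # grid coordinates where z1 is close to zero
</code></pre>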
|
python|numpy|intersection
| 1
|
4,432
| 22,734,763
|
Using Pandas To Find the Number Of Periods Since the Rolling High
|
<p>I am using the rolling_max function in Pandas:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments</a></p>
<p>How would I find the number of periods since the price was at this high?</p>
|
<p>starting with:</p>
<pre><code>>>> ts
A 10
B 10
C -5
D -15
E -9
F -8
G -13
H -9
I -15
J -21
dtype: int64
</code></pre>
<p>you may reverse each rolling window and take <code>argmax</code>; because the window is reversed, <code>argmax</code> returns the number of periods back to the most recent maximum:</p>
<pre><code>>>> rmlag = lambda xs: np.argmax(xs[::-1])
>>> pd.rolling_apply(ts, func=rmlag, window=3, min_periods=0).astype(int)
A 0
B 0
C 1
D 2
E 2
F 0
G 1
H 2
I 1
J 2
dtype: int64
</code></pre>
<p>concateneted with original series and rolling max values:</p>
<pre><code> value roll-max rm-lag
A 10 10 0
B 10 10 0
C -5 10 1
D -15 10 2
E -9 -5 2
F -8 -8 0
G -13 -8 1
H -9 -8 2
I -15 -9 1
J -21 -9 2
[10 rows x 3 columns]
</code></pre>
|
python|pandas
| 3
|
4,433
| 13,572,576
|
using 'OR' to select data in pandas
|
<p>I have a dataframe of values and I would like to explore the rows that are outliers. I wrote a function below that can be called with the <code>groupby().apply()</code> function, and it works great for high or low values, but when I want to combine them together I get an error. I am somehow messing up the boolean <code>OR</code> selection, but I could only find documentation for selection criteria using <code>&</code>. Any suggestions would be appreciated.</p>
<p>zach cp</p>
<pre><code>df = DataFrame( {'a': [1,1,1,2,2,2,2,2,2,2], 'b': [5,5,6,9,9,9,9,9,9,20] } )
#this works fine
def get_outliers(group):
    x = mean(group.b)
    y = std(group.b)
    top_cutoff = x + 2*y
    bottom_cutoff = x - 2*y
    cutoffs = group[group.b > top_cutoff]
    return cutoffs

#this will trigger an error
def get_all_outliers(group):
    x = mean(group.b)
    y = std(group.b)
    top_cutoff = x + 2*y
    bottom_cutoff = x - 2*y
    cutoffs = group[(group.b > top_cutoff) or (group.b < bottom_cutoff)]
    return cutoffs

#works fine
grouped1 = df.groupby(['a']).apply(get_outliers)

#triggers error
grouped2 = df.groupby(['a']).apply(get_all_outliers)
</code></pre>
|
<p>You need to use <code>|</code> instead of <code>or</code>. The <code>and</code> and <code>or</code> operators are special in Python and don't interact well with things like numpy and pandas that try to apply to them elementwise across a collection. So for these contexts, they've redefined the "bitwise" operators <code>&</code> and <code>|</code> to mean "and" and "or".</p>
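<p>Applied to the function in the question, the selection becomes (sketch):</p>
<pre><code>cutoffs = group[(group.b > top_cutoff) | (group.b < bottom_cutoff)]
</code></pre>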
|
python|pandas
| 8
|
4,434
| 62,301,720
|
How to multiply the numbers 0 to 150 by the same value?
|
<p>I have a Problem with my Python exercise.</p>
<p>Here is the part of my code:</p>
<pre><code>import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
import math as m
import loaddataa as ld
dataListStride = ld.loadData("../Data/Fabienne")
indexStrideData = 0
strideData = dataListStride[indexStrideData]
def horizontal(yAngle, yAcceleration, xAcceleration):
    a = (m.cos(yAngle)*yAcceleration)-(m.sin(yAngle)*xAcceleration)
    return a

resultsHorizontal = list()

for i in range(len(strideData)):
    strideData_yAngle = strideData.to_numpy()[i, 2]
    strideData_xAcceleration = strideData.to_numpy()[i, 4]
    strideData_yAcceleration = strideData.to_numpy()[i, 5]
    resultsHorizontal.append(horizontal(strideData_yAngle, strideData_yAcceleration, strideData_xAcceleration))

print("The Values are: " + str(resultsHorizontal))
plt.plot(resultsHorizontal)
print(len(resultsHorizontal))
</code></pre>
<p>And the code of loaddataa.py:</p>
<pre><code>import os
import pandas as pd
def loadData(relativPath):
    files = os.listdir(relativPath)
    path = os.path.abspath(relativPath)
    datas = list()
    for file in files:
        absPath = path + "/" + file
        print(absPath)
        data = pd.read_csv(absPath)
        datas.append(data)
    return datas
</code></pre>
<p>With the calculation in <code>def horizontal</code> I get many values in the end. With <code>print(len(resultsHorizontal))</code> I know how many values there are; in this example, with the CSV read in here, I get 150 values.
The plot looks like this:
<a href="https://i.stack.imgur.com/krhHr.png" rel="nofollow noreferrer">plot of resultsHorizontal against its index</a></p>
<p>As you can see, the x-axis consists of the values 0 to 150, because there are 150 calculated values in the list from the CSV. I would like every value on the x-axis to be multiplied by 0.01, so that the calculation looks like this: 0*0.01=0, 1*0.01=0.01, 2*0.01=0.02, ..., 20*0.01=0.2, ..., 149*0.01=1.49, 150*0.01=1.5. The values after the = should then be on the x-axis.
There is one other requirement: it should be independent of 150, because there can be more or fewer than 150 values, depending on the imported CSV file. I hope it's clear what my problem is. Thanks for helping me.</p>
|
<p>If your list is a numpy array then you can literally just multiply it by 0.01 to get the result, but if you want to keep it as a python list for some reason, then a simple list comprehension solves this quite easily.</p>
<pre><code>new_ret = [0.01 * value for value in resultsHorizontal]
</code></pre>
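<p>Alternatively, since the factor of 0.01 is meant for the x-axis, you can scale the x values directly and stay independent of the list length; a sketch reusing <code>resultsHorizontal</code> from the question:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = 0.01 * np.arange(len(resultsHorizontal))  # 0, 0.01, 0.02, ... regardless of length
plt.plot(x, resultsHorizontal)
plt.show()
</code></pre>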
|
python|list|numpy|matplotlib
| 0
|
4,435
| 62,395,085
|
Why does one need Google's WaveNet model to generate audio if it already takes audio as input?
|
<p>I've spent a lot of time trying to understand the <a href="https://arxiv.org/pdf/1609.03499.pdf" rel="nofollow noreferrer">Google's WaveNet work</a> (also used in their DeepVoice model), but still confused about some very basic aspects. I'm referring to <a href="https://github.com/Rayhane-mamah/Tacotron-2/blob/master/wavenet_vocoder/models/wavenet.py" rel="nofollow noreferrer">this Tensorflow implementation of Wavenet.</a></p>
<p>Page-2 of the paper says: </p>
<blockquote>
<p>"<em>In this paper we introduce a new generative model operating directly
on the raw audio waveform.</em>".</p>
</blockquote>
<p>If we already have the <em>raw audio waveform</em>, why do we need WaveNet? Isn't that what the model is supposed to generate?</p>
<p>When I print out the model, it shows the <code>input</code> as just 1 float value, since the <code>input_convolution</code> kernel's shape is 1x1x128. What does that 1 float in the input represent? Am I missing something?</p>
<pre><code> `inference/input_convolution/kernel:0 (float32_ref 1x1x128) [128, bytes: 512`]
</code></pre>
<p>more layers below:</p>
<pre><code>---------
Variables: name (type shape) [size]
---------
inference/ConvTranspose1D_layer_0/kernel:0 (float32_ref 1x11x80x80) [70400, bytes: 281600]
inference/ConvTranspose1D_layer_0/bias:0 (float32_ref 80) [80, bytes: 320]
inference/ConvTranspose1D_layer_1/kernel:0 (float32_ref 1x25x80x80) [160000, bytes: 640000]
inference/ConvTranspose1D_layer_1/bias:0 (float32_ref 80) [80, bytes: 320]
inference/input_convolution/kernel:0 (float32_ref 1x1x128) [128, bytes: 512]
inference/input_convolution/bias:0 (float32_ref 128) [128, bytes: 512]
inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 3x128x256) [98304, bytes: 393216]
inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias:0 (float32_ref 256) [256, bytes: 1024]
inference/ResidualConv1DGLU_0/residual_block_cin_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 1x80x256) [20480, bytes: 81920]
inference/ResidualConv1DGLU_0/residual_block_cin_conv_ResidualConv1DGLU_0/bias:0 (float32_ref 256) [256, bytes: 1024]
inference/ResidualConv1DGLU_0/residual_block_skip_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 1x128x128) [16384, bytes: 65536]
inference/ResidualConv1DGLU_0/residual_block_skip_conv_ResidualConv1DGLU_0/bias:0 (float32_ref 128) [128, bytes: 512]
inference/ResidualConv1DGLU_0/residual_block_out_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 1x128x128) [16384, bytes: 65536]
inference/ResidualConv1DGLU_0/residual_block_out_conv_ResidualConv1DGLU_0/bias:0 (float32_ref 128) [128, bytes: 512]
inference/ResidualConv1DGLU_1/residual_block_causal_conv_ResidualConv1DGLU_1/kernel:0 (float32_ref 3x128x256) [98304, bytes: 393216]
inference/ResidualConv1DGLU_1/residual_block_causal_conv_ResidualConv1DGLU_1/bias:0 (float32_ref 256) [256, bytes: 1024]
inference/ResidualConv1DGLU_1/residual_block_cin_conv_ResidualConv1DGLU_1/kernel:0 (float32_ref 1x80x256) [20480, bytes: 81920]
inference/ResidualConv1DGLU_1/residual_block_cin_conv_ResidualConv1DGLU_1/bias:0 (float32_ref 256) [256, bytes: 1024]
inference/ResidualConv1DGLU_1/residual_block_skip_conv_ResidualConv1DGLU_1/kernel:0 (float32_ref 1x128x128) [16384, bytes: 65536]
inference/ResidualConv1DGLU_1/residual_block_skip_conv_ResidualConv1DGLU_1/bias:0 (float32_ref 128) [128, bytes: 512]
inference/ResidualConv1DGLU_1/residual_block_out_conv_ResidualConv1DGLU_1/kernel:0 (float32_ref 1x128x128) [16384, bytes: 65536]
inference/ResidualConv1DGLU_1/residual_block_out_conv_ResidualConv1DGLU_1/bias:0 (float32_ref 128) [128, bytes: 512]
</code></pre>
|
<p>Generative networks typically operate on the conditional probability of getting a <code>new_element</code> given the <code>old_element(s)</code>. In math terms:</p>
<p><a href="https://i.stack.imgur.com/shSF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/shSF1.png" alt="conditional probability"></a></p>
<p>as defined in the Google paper. As you can see, the network needs to start from something (the <code>x1...xt-1</code> - the past values) , it cannot go from scratch. You can think of it as if the network needs a theme that will tell it what genre you are interested in; heavy metal and country have <em>somewhat different</em> vibe.</p>
<p>If you like, you can generate this starter waveform yourself: a sine wave, white noise or something more complex. Once you run the network, it will start outputting new values that will eventually become an input to it.</p>
|
tensorflow|deep-learning|conv-neural-network|speech-recognition|text-to-speech
| 1
|
4,436
| 51,526,800
|
Setting up my decision tree python
|
<pre><code>import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.tree import export_graphviz
import graphviz
import pydotplus
import io
from scipy import misc
%matplotlib inline
data = pd.read_csv(r'''C:\Users\Pwego\Desktop\spotifyclassification2\data.csv''')
train, test = train_test_split(data, test_size = 0.15)
print("Training size: {}; Test size: {};".format(len(train), len(test)))
pos_tempo = data[data['target'] == 1]['tempo']
neg_tempo = data[data['target'] == 0]['tempo']
pos_danceability = data[data['target'] == 1]['danceability']
neg_danceability = data[data['target'] == 0]['danceability']
pos_duration = data[data['target'] == 1]['duration_ms']
neg_duration = data[data['target'] == 0]['duration_ms']
pos_energy = data[data['target'] == 1]['energy']
neg_energy = data[data['target'] == 0]['energy']
pos_instrumentalness = data[data['target'] == 1]['instrumentalness']
neg_instrumentalness = data[data['target'] == 0]['instrumentalness']
pos_key = data[data['target'] == 1]['key']
neg_key = data[data['target'] == 0]['key']
pos_liveness = data[data['target'] == 1]['liveness']
neg_liveness = data[data['target'] == 0]['liveness']
pos_loudness = data[data['target'] == 1]['loudness']
neg_loudness = data[data['target'] == 0]['loudness']
pos_mode = data[data['target'] == 1]['mode']
neg_mode = data[data['target'] == 0]['mode']
pos_speechiness = data[data['target'] == 1]['speechiness']
neg_speechiness = data[data['target'] == 0]['speechiness']
pos_time_signature = data[data['target'] == 1]['time_signature']
neg_time_signature = data[data['target'] == 0]['time_signature']
pos_valence = data[data['target'] == 1]['valence']
neg_valence = data[data['target'] == 0]['valence']
fig = plt.figure(figsize =(12, 8))
plt.title("Song Tempo Like / Dislike Distribution")
pos_tempo.hist(alpha = 0.7, bins = 30, label='positive', color ="green")
neg_tempo.hist(alpha = 0.7, bins = 30, label='negative', color ='red')
plt.legend(loc = "upper right")
fig2 = plt.figure(figsize=(15,15))
#Danceabiliy
ax3 = fig2.add_subplot(331)
ax3.set_xlabel('dancebility')
ax3.set_ylabel('count')
ax3.set_title("Song Dancebility Like Distribution")
pos_danceability.hist(alpha=0.5, bins=30)
neg_danceability.hist(alpha=0.5, bins=30)
ax4 = fig2.add_subplot(331)
ax4.set_xlabel('duration')
ax4.set_ylabel('count')
ax4.set_title("Song Duration Like Distribution")
pos_duration.hist(alpha=0.5, bins=30)
neg_duration.hist(alpha=0.5, bins=30)
ax5 = fig2.add_subplot(332)
ax5.set_xlabel('energy')
ax5.set_ylabel('count')
ax5.set_title("Song Energy Like Distribution")
pos_energy.hist(alpha=0.5, bins=30)
neg_energy.hist(alpha=0.5, bins=30)
ax6 = fig2.add_subplot(333)
ax6.set_xlabel('instrumentalness')
ax6.set_ylabel('count')
ax6.set_title("Song Instrumentalness Like Distribution")
pos_instrumentalness.hist(alpha=0.5, bins=30)
neg_instrumentalness.hist(alpha=0.5, bins=30)
ax7 = fig2.add_subplot(334)
ax7.set_xlabel('key')
ax7.set_ylabel('count')
ax7.set_title("Song Keys Like Distribution")
pos_key.hist(alpha=0.5, bins=30)
neg_key.hist(alpha=0.5, bins=30)
ax8= fig2.add_subplot(335)
ax8.set_xlabel('liveness')
ax8.set_ylabel('count')
ax8.set_title("Song Liveness Like Distribution")
pos_liveness.hist(alpha=0.5, bins=30)
neg_liveness.hist(alpha=0.5, bins=30)
ax9 = fig2.add_subplot(336)
ax9.set_xlabel('loudness')
ax9.set_ylabel('count')
ax9.set_title("Song Loudness Like Distribution")
pos_loudness.hist(alpha=0.5, bins=30)
neg_loudness.hist(alpha=0.5, bins=30)
ax10 = fig2.add_subplot(337)
ax10.set_xlabel('mode')
ax10.set_ylabel('count')
ax10.set_title("Song Mode Like Distribution")
pos_mode.hist(alpha=0.5, bins=30)
neg_mode.hist(alpha=0.5, bins=30)
ax11 = fig2.add_subplot(338)
ax11.set_xlabel('speechiness')
ax11.set_ylabel('count')
ax11.set_title("Song Speechiness Like Distribution")
pos_speechiness.hist(alpha=0.5, bins=30)
neg_speechiness.hist(alpha=0.5, bins=30)
ax12 = fig2.add_subplot(339)
ax12.set_xlabel('time_signature')
ax12.set_ylabel('count')
ax12.set_title("Song Time Signature over Distribution")
pos_time_signature.hist(alpha=0.5, bins=30)
neg_time_signature.hist(alpha=0.5, bins=30)
ax13 = fig2.add_subplot(339)
ax13.set_xlabel('valence')
ax13.set_ylabel('count')
ax13.set_title("Song Valence over Distribution")
pos_valence.hist(alpha=0.5, bins=30)
neg_valence.hist(alpha=0.5, bins=30)
c = DecisionTreeClassifier(min_samples_split=100)
features = ["danceability","loudness","valence","energy","instrumentalness","acousticness","k"]
X_train = train[features]
y_train = train['target']
X_test = test[features]
y_test = test['target']
def show_tree(tree, features, path):
    f = io.StringIO()
    export_graphviz(tree, out_file=f, feature_names=features)
    pydotplus.graph_from_dot_data(f.getvalue()).write_png(path)
    img = scipy.misc.inread(path)
    plt.rcParams["figure.figsize"] = (20, 20)
    plt.imgshow(img)

show_tree(dt, features, 'tree1.png')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-30-72a100e0eeec> in <module>()
----> 1 show_tree(dt, features, 'tree1.png')
<ipython-input-21-9c398f00bf98> in show_tree(tree, features, path)
3 export_graphviz(tree, out_file=f, feature_names=features)
4 pydotplus.graph_from_dot_data(f.getvalue()).write_png(path)
----> 5 img = scipy.misc.inread(path)
6 plt.rcParams["figure.figsize"] = (20, 20)
7 plt.imgshow(img)
AttributeError: module 'scipy.misc' has no attribute 'inread'
</code></pre>
<p>So I'm trying to create this decision tree for a Spotify dataset, and I have tried multiple ways to install these libraries.</p>
<p>I'm constantly getting this error; can somebody help me with it?</p>
<p>I'm using this python tutorial </p>
<p><a href="https://www.youtube.com/watch?v=XDbj6PxaSf0&pbjreload=10" rel="nofollow noreferrer">https://www.youtube.com/watch?v=XDbj6PxaSf0&pbjreload=10</a></p>
<p>If anyone has any more sources for machine learning please send me!</p>
|
<p>I think this is a simple typo/mishearing. Scipy.misc does not have a function called <code>inread</code>. The function is called <code>imread</code>. Replace <code>scipy.misc.inread(path)</code> with <code>scipy.misc.imread(path)</code></p>
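<p>Note that <code>plt.imgshow</code> two lines later has the same kind of typo; matplotlib's function is <code>plt.imshow</code>. Also, recent SciPy releases removed <code>scipy.misc.imread</code> entirely, so a sketch of the same helper using the <code>imageio</code> package (assuming it is installed) would be:</p>
<pre><code>import imageio
import matplotlib.pyplot as plt

img = imageio.imread(path)
plt.rcParams["figure.figsize"] = (20, 20)
plt.imshow(img)
</code></pre>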
|
python|pandas|scipy|decision-tree|pydot
| 0
|
4,437
| 51,341,568
|
What is the difference between s[i] and s.iloc[i] in pandas series?
|
<p>The similar question has been asked <a href="https://stackoverflow.com/questions/45983801/pandas-iloc-vs-direct-slicing">here</a>, but I don't think it has a very perfect answer. If I am slicing a series rather than a data frame, what is the difference between these two? </p>
<p><code>s[i]</code> vs <code>s.iloc[i]</code> and <code>s[i:i+10]</code> vs <code>s.iloc[i: i+10]</code>?</p>
<p>What if the series is a slice of a dataframe in the first place, for example: <code>s=df["col1"]</code>, does the two slicing methods have any difference?</p>
<p>Please give a concrete example to explain which is better in practice. Right now, the first slicing method is preferable to me because it makes the code shorter.</p>
|
<p>Indexing (using <code>[..]</code>) into a series (and a dataframe) acts as sort of a Swiss Army knife; it has to support a variety of use-cases, and these use-cases are not always compatible or efficient. Using <code>Series[...]</code> requires Pandas to check the datatype of both the object you passed and the current index type, and translate your request into the correct rows to return. </p>
<p>Using indexing on the <code>.iloc</code> object on the other hand, is <em>unambigiously</em> meant to only accept integer indices and slices. Nothing else. This can then also be optimised, and should be preferred in production code. From the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer"><em>Selecting and Indexing</em> documentation</a>:</p>
<blockquote>
<p>The Python and NumPy indexing operators <code>[]</code> and attribute operator <code>.</code> provide quick and easy access to pandas data structures across a wide range of use cases. This makes interactive work intuitive, as there’s little new to learn if you already know how to deal with Python dictionaries and NumPy arrays. However, since the type of the data to be accessed isn’t known in advance, directly using standard operators has some optimization limits. For production code, we recommended that you take advantage of the optimized pandas data access methods exposed in this chapter.</p>
</blockquote>
<p>Moreover, you can't always use <code>[...]</code> to index into a series, not when you have an integer index already with values that are not aligned with the row indices:</p>
<pre><code>>>> s = pd.Series([41, 82], index=[2, 1])
>>> s.iloc[0]
41
>>> s[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/pandas/core/series.py", line 623, in __getitem__
result = self.index.get_value(self, key)
File "/Users/mj/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2557, in get_value
tz=getattr(series.dtype, 'tz', None))
File "pandas/_libs/index.pyx", line 83, in pandas._libs.index.IndexEngine.get_value
File "pandas/_libs/index.pyx", line 91, in pandas._libs.index.IndexEngine.get_value
File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 811, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 817, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
</code></pre>
<p>Indexing the first row fails with <code>s[0]</code>, because there's an integer index. You'd have to use <code>s[2]</code> instead to address the specific cell here.</p>
<p>When using <em>slicing</em>, Python passes in a <code>slice()</code> object to the underlying code handling the <code>s[...]</code> and <code>s.iloc[...]</code> operations (<code>__getitem__</code> methods), so that's easier to detect, and will give you the same outcome on either.</p>
|
python|pandas|slice
| 2
|
4,438
| 48,198,319
|
Groupby Pandas generate multiple fields with condition
|
<p>I have a pandas dataframe as such:</p>
<pre><code>df = pandas.DataFrame( {
"Label" : ["A", "A", "B", "B", "C" , "C"] ,
"Value" : [1, 9, 1, 1, 9, 9],
"Weight" : [2, 4, 6, 8, 10, 12} )
</code></pre>
<p>I would like to group the data by 'Label' and generate 2 fields.</p>
<ul>
<li>The First field, 'newweight' would sum Weight if Value==1</li>
<li>The Second field, 'weightvalue' would sum Weight*Value</li>
</ul>
<p>So I would be left with the following dataframe:</p>
<pre><code>Label newweight weightvalue
A 2 38
B 14 14
C 0 198
</code></pre>
<p>I have looked into the pandas groupby() function but have had trouble generating the 2 fields with it.</p>
|
<p>Use <code>groupby.apply</code>, you can do:</p>
<pre><code>df.groupby('Label').apply(
lambda g: pd.Series({
"newweight": g.Weight[g.Value == 1].sum(),
"weightvalue": g.Weight.mul(g.Value).sum()
})).fillna(0)
# newweight weightvalue
#Label
#A 2.0 38.0
#B 14.0 14.0
#C 0.0 198.0
</code></pre>
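<p>A fully vectorized alternative (a sketch on the same data) builds both columns first and then aggregates with a plain <code>groupby</code> sum:</p>
<pre><code>out = (df.assign(newweight=df.Weight.where(df.Value.eq(1), 0),
                 weightvalue=df.Weight * df.Value)
         .groupby('Label')[['newweight', 'weightvalue']].sum())
</code></pre>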
|
python|pandas|pandas-groupby
| 4
|
4,439
| 48,715,480
|
Pick the row and column of a DataFrame cell where the value is true
|
<p>In a DataFrame I have to find a specific value. If it exists I need the cell coordinates (row/column). Currently I get only the row and struggle with the column. </p>
<pre><code>values = [[100.0, 127.0], [17.0, 24.13], [151.13, 0.0]]
df = pd.DataFrame(np.concatenate(values).reshape(3,2))
# df:
# 0 1
# 0 100.00 127.00
# 1 17.00 24.13
# 2 151.13 0.00
v17 = df.isin([17,17])
# v17:
# 0 1
# 0 False False
# 1 True False
# 2 False False
row = v17.loc[v17[0] == True].index.values
# row = [1]
</code></pre>
<p>How can I get the row and column of a cell with a specific value?</p>
|
<p>By using <code>np.where</code></p>
<pre><code>s,v=np.where(df==17)
df.columns[v]
Out[244]: Int64Index([0], dtype='int64')
df.index[s]
Out[245]: Int64Index([1], dtype='int64')
</code></pre>
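<p>Combining the two gives (row, column) coordinate pairs for every match:</p>
<pre><code>rows, cols = np.where(df == 17)
coords = list(zip(df.index[rows], df.columns[cols]))
# [(1, 0)]
</code></pre>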
|
python-3.x|pandas|dataframe
| 2
|
4,440
| 48,636,600
|
What is the best way to use a trained TensorFlow model in Java?
|
<p>I am coding a UI with <code>javaFx</code> in <code>eclipse</code>,
because I can only use Java: no Python, no C.</p>
<p>Now I am trying to use a trained <code>tensorflow</code> model file in this UI (this <code>tensorflow</code> file was produced with Python).</p>
<p>I am looking at several approaches (<code>API</code>, <code>jython</code>, <code>TCP/IP</code>) but I am not sure which one is best.</p>
<p>Please share your opinion on which has more advantages or fewer disadvantages.</p>
|
<p>I think there are two ways for that,</p>
<ol>
<li><code>tensorflow-serving</code> which uses <code>grpc</code> to connect from java to the serving server, making it independent of any language. Look <a href="https://www.tensorflow.org/serving/" rel="nofollow noreferrer">here</a> for details.</li>
<li>Use <code>tensorflow</code> <code>java</code> API. <a href="https://github.com/tensorflow/models/tree/master/samples/languages/java/" rel="nofollow noreferrer">Here</a> and <a href="https://github.com/sumsuddin/TensorflowObjectDetectorJava" rel="nofollow noreferrer">Here</a> is an example of using object detection model in native java program.</li>
</ol>
<p>Now, <code>tensorflow-serving</code> is the preferred way to go, as there are several advantages. Serving is heavily optimized for speed and resource management like GPU. It also stacks multiple requests (if there is too many) and then processes them in a batch which utilizes the GPU efficiently.</p>
|
java|python|tensorflow|jython
| 0
|
4,441
| 48,487,245
|
What to expect from deep learning object detection on black and white pictures?
|
<p>With TensorFlow, I want to train an object detection model with my own images based on ssd_inception_v2_coco model. The problem I have is that all my pictures are black and white. What performance can I expect? Should I try to colorize my B&W pictures first? Or at the opposite, should I try to retrain base network with images "uncolorized"? Are there general guidelines for B&W processing of images for deep learning object detection?</p>
|
<p>I wouldn't go through the trouble of colorizing if you are planning on using a pretrained model. I would expect that explicitly colorizing your images as a pre-processing step would help very little (if at all) since in theory the features that a colorizing network learns can also be learned by the detection network. </p>
<p>If you are planning on pretraining your detection network that was trained on an RGB dataset, make sure you either (i) replace the first convolution in the network with a convolutional layer that expects a single-channel input, or (ii) pad your image with two all-zero channels.</p>
<p>You may get slightly worse detection performance simply because you lose two thirds of the image's pixel information when using BW instead of RGB.</p>
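<p>For option (ii), a minimal numpy sketch (assuming <code>gray</code> is an H x W single-channel array; the random data is a stand-in for a real image):</p>
<pre><code>import numpy as np

gray = np.random.rand(300, 300).astype(np.float32)
zeros = np.zeros_like(gray)
rgb_like = np.stack([gray, zeros, zeros], axis=-1)  # H x W x 3 with two all-zero channels
</code></pre>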
|
tensorflow|deep-learning|object-detection
| 2
|
4,442
| 48,726,418
|
k nearest neighbors with cross validation for accuracy score and confusion matrix
|
<p>I have the following data where for each column, the rows with numbers are the input and the letter is the output.</p>
<pre><code>A,A,A,B,B,B
-0.979090189,0.338819904,-0.253746508,0.213454999,-0.580601104,-0.441683968
-0.48395313,0.436456904,-1.427424032,-0.107093825,0.320813402,0.060866105
-1.098818173,-0.999161692,-1.371721698,-1.057324962,-1.161752652,-0.854872591
-1.53191442,-1.465454248,-1.350414216,-1.732518018,-1.674040715,-1.561568496
2.522796162,2.498153298,3.11756171,2.125738509,3.003929536,2.514411247
-0.060161596,-0.487513844,-1.083513761,-0.908023322,-1.047536921,-0.48276759
0.241962669,0.181365373,0.174042637,-0.048013217,-0.177434916,0.42738621
-0.603856395,-1.020531402,-1.091134021,-0.863008165,-0.683233589,-0.849059931
-0.626159165,-0.348144322,-0.518640038,-0.394482485,-0.249935646,-0.543947259
-1.407263942,-1.387660115,-1.612988118,-1.141282747,-0.944745366,-1.030944216
-0.682567673,-0.043613473,-0.105679403,0.135431139,0.059104888,-0.132060832
-1.10107164,-1.030047313,-1.239075022,-0.651818656,-1.043589073,-0.765992541
</code></pre>
<p>I am trying to perform KNN LOOCV to get accuracy score and confusion matrix.</p>
<pre><code>from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import LeaveOneOut
import pandas as pd
def main():
    csv = 'data.csv'
    df = pd.read_csv(csv)
    X = df.values.T
    y = df.columns.values
    clf = KNeighborsClassifier()
    loo = LeaveOneOut()
    for train_index, test_index in loo.split(X):
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]
        clf.fit(X_train, y_train)
        y_true = y_test
        y_pred = clf.predict(X_test)
        ac = accuracy_score(y_true, y_pred)
        cm = confusion_matrix(y_true, y_pred)
        print ac
        print cm

if __name__ == '__main__':
    main()
</code></pre>
<p>However my results are all 0s. Where am I going wrong?</p>
|
<p>I think your model does not get trained properly, and because it only has to guess one value, it doesn't get it right. May I suggest switching to KFold or StratifiedKFold. LOO has the disadvantage that for large samples it becomes extremely time consuming. Here is what happened when I implemented StratifiedKFold with 3 splits on your X data. I have randomly filled y with 0 and 1 instead of using A and B, and have not transposed the data, so it has 12 rows:</p>
<pre><code>from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
import pandas as pd
csv = 'C:\df_low_X.csv'
df = pd.read_csv(csv, header=None)
print(df)
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
clf = KNeighborsClassifier()
kf = StratifiedKFold(n_splits = 3)
ac = []
cm = []
for train_index, test_index in kf.split(X,y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    print(X_train, X_test)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    ac.append(accuracy_score(y_test, y_pred))
    cm.append(confusion_matrix(y_test, y_pred))
print(ac)
print(cm)
# ac
[0.25, 0.75, 0.5]
# cm
[array([[1, 1],
[2, 0]], dtype=int64),
array([[1, 1],
[0, 2]], dtype=int64),
array([[0, 2],
[0, 2]], dtype=int64)]
</code></pre>
|
python|pandas|machine-learning|scikit-learn|cross-validation
| 3
|
4,443
| 71,074,196
|
Pandas Multi Index Division
|
<p>I have a Multi Index dataframe that looks like</p>
<pre><code> Mid
Strike Expiration Symbol
167.5 2022-02-11 AAPL170 5.4
170 2022-02-11 AAPL170 3.1
2022-02-18 AAPL170 4.525
2022-02-25 AAPL170 5.25
2022-03-04 AAPL170 6.00
172.5 2022-02-11 AAPL172 1.265
2022-02-18 AAPL172 2.91
175 2022-02-11 AAPL175 0.265
2022-02-18 AAPL175 1.695
</code></pre>
<p>so it is a multi index with the index levels <code>strike</code>, <code>expiration</code>, and <code>symbol</code>, while <code>mid</code> is just a column name, NOT an index. I have a few other columns, but they are not important for now. The rows are sorted in increasing order of the first index level. I'm hoping to divide each <code>Mid</code> row by the next row below it, but only within each individual <code>strike</code> index. Currently, I am doing</p>
<pre class="lang-py prettyprint-override"><code>df['ratio'] = (df['mid'] / df['mid'].shift(-1))
</code></pre>
<p>and it works to give me a new column of all the divisions, but I'm running into problems where, for example, the <code>167.5 2022-02-11</code> row is getting divided by the <code>170 2022-02-11</code> row, and I need those to remain separated by index.</p>
<p>My goal, once this division is done, is to be able to search for any ratios above a cutoff (e.g. 0.5) and output what was divided to a new dataframe, something similar to</p>
<pre><code>Strike Expirations Symbol Ratio
170 2022-02-11 / 2022-02-18 AAPL170 0.685
</code></pre>
<p>If someone could advise on both parts, I'd greatly appreciate it but I primarily need the first part fixed.</p>
|
<p>One way using <code>pandas.DataFrame.groupby</code> with <code>pct_change</code>:</p>
<pre><code>new_df = df.groupby(level=0).pct_change(-1) + 1
print(new_df)
</code></pre>
<p>Output:</p>
<pre><code> Mid
Strike Expiration Symbol
167.5 2022-02-11 AAPL170 NaN
170.0 2022-02-11 AAPL170 0.685083
2022-02-18 AAPL170 0.861905
2022-02-25 AAPL170 0.875000
2022-03-04 AAPL170 NaN
172.5 2022-02-11 AAPL172 0.434708
2022-02-18 AAPL172 NaN
175.0 2022-02-11 AAPL175 0.156342
2022-02-18 AAPL175 NaN
</code></pre>
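<p>For the second part of the question, keeping only ratios above a cutoff is then a plain boolean selection (sketch):</p>
<pre><code>cutoff = 0.5
result = new_df[new_df['Mid'] > cutoff]
</code></pre>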
<p>For debugging, you can try something like:</p>
<pre><code>for _, d in df.groupby(level=0):
    try:
        d.pct_change()
    except:
        print(d)
        break
</code></pre>
|
python|pandas|dataframe|multi-index
| 0
|
4,444
| 51,804,671
|
How to set the variables of LSTMCell as input instead of letting it create it in Tensorflow?
|
<p>When I create a tf.contrib.rnn.LSTMCell, it creates its <strong>kernel</strong> and <strong>bias</strong> trainable variables during initialisation.</p>
<p>How the code looks now:</p>
<pre><code>cell_fw = tf.contrib.rnn.LSTMCell(hidden_size_char,
state_is_tuple=True)
</code></pre>
<p>What I want it to look:</p>
<pre><code>kernel = tf.get_variable(...)
bias = tf.get_variable(...)
cell_fw = tf.contrib.rnn.LSTMCell(kernel, bias, hidden_size,
state_is_tuple=True)
</code></pre>
<p>What I want to do is to create those variables myself, and give it to the LSTMCell class when instantiating it as input to its init.</p>
<p>Is there an easy way to do this? I looked at the <a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/python/ops/rnn_cell_impl.py" rel="nofollow noreferrer">class source code</a> but it seems that it is within a complex hierarchy of classes.</p>
|
<p>I subclassed the LSTMCell class and changed its <em>init</em> and <em>build</em>
methods so that they accept given variables. If variables are given in <em>init</em>,
then <em>build</em> no longer calls <em>get_variable</em> and uses the given kernel and bias variables instead.</p>
<p>There might be cleaner ways to do it though.</p>
<pre><code>_BIAS_VARIABLE_NAME = "bias"
_WEIGHTS_VARIABLE_NAME = "kernel"

class MyLSTMCell(tf.contrib.rnn.LSTMCell):
    def __init__(self, num_units,
                 use_peepholes=False, cell_clip=None,
                 initializer=None, num_proj=None, proj_clip=None,
                 num_unit_shards=None, num_proj_shards=None,
                 forget_bias=1.0, state_is_tuple=True,
                 activation=None, reuse=None, name=None, var_given=False, kernel=None, bias=None):
        super(MyLSTMCell, self).__init__(num_units,
                                         use_peepholes=use_peepholes, cell_clip=cell_clip,
                                         initializer=initializer, num_proj=num_proj, proj_clip=proj_clip,
                                         num_unit_shards=num_unit_shards, num_proj_shards=num_proj_shards,
                                         forget_bias=forget_bias, state_is_tuple=state_is_tuple,
                                         activation=activation, reuse=reuse, name=name)
        self.var_given = var_given
        if self.var_given:
            self._kernel = kernel
            self._bias = bias

    def build(self, inputs_shape):
        if inputs_shape[1].value is None:
            raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                             % inputs_shape)

        input_depth = inputs_shape[1].value
        h_depth = self._num_units if self._num_proj is None else self._num_proj
        maybe_partitioner = (
            partitioned_variables.fixed_size_partitioner(self._num_unit_shards)
            if self._num_unit_shards is not None
            else None)
        if self.var_given:
            # self._kernel and self._bias were already set in __init__
            pass
        else:
            self._kernel = self.add_variable(
                _WEIGHTS_VARIABLE_NAME,
                shape=[input_depth + h_depth, 4 * self._num_units],
                initializer=self._initializer,
                partitioner=maybe_partitioner)
            self._bias = self.add_variable(
                _BIAS_VARIABLE_NAME,
                shape=[4 * self._num_units],
                initializer=init_ops.zeros_initializer(dtype=self.dtype))
        if self._use_peepholes:
            self._w_f_diag = self.add_variable("w_f_diag", shape=[self._num_units],
                                               initializer=self._initializer)
            self._w_i_diag = self.add_variable("w_i_diag", shape=[self._num_units],
                                               initializer=self._initializer)
            self._w_o_diag = self.add_variable("w_o_diag", shape=[self._num_units],
                                               initializer=self._initializer)

        if self._num_proj is not None:
            maybe_proj_partitioner = (
                partitioned_variables.fixed_size_partitioner(self._num_proj_shards)
                if self._num_proj_shards is not None
                else None)
            self._proj_kernel = self.add_variable(
                "projection/%s" % _WEIGHTS_VARIABLE_NAME,
                shape=[self._num_units, self._num_proj],
                initializer=self._initializer,
                partitioner=maybe_proj_partitioner)

        self.built = True
</code></pre>
<p>So the code will be like this:</p>
<pre><code>kernel = tf.get_variable(...)
bias = tf.get_variable(...)
lstm_fw = MyLSTMCell(....., var_given=True, kernel=kernel, bias=bias)
</code></pre>
|
tensorflow|lstm
| 2
|
4,445
| 51,572,338
|
Getting:"ModuleNotFoundError: No module named 'tensorflow'", only when running from the command line
|
<p>I'm trying to run the Transformer (speech2text) model on Windows (till I get my Linux machine).
When I run the entire command from the cmd:</p>
<p>"python transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR --params=$PARAMS"</p>
<p>I get the error: "ModuleNotFoundError: No module named 'tensorflow'"</p>
<p>But I know that TF is installed. Also, when I'm using PyCharm, first, I can see the package installed (File -> Settings -> Project Interpreter);
second, when I run the code it gets past that failing import...</p>
<p>I can run it through PyCharm, but I think it's important to understand what I'm missing. Is it something with the interpreter?</p>
<p>Thanks.</p>
|
<p>Most likely you are simply using a different interpreter from your command line than from your PyCharm project. This will happen, for example, if you have set up your PyCharm project using a fresh conda environment.</p>
<p>To see which one you are using on the command-line, simply run <code>where python</code>. Then compare this with what you found in PyCharm.</p>
|
python|tensorflow
| 0
|
4,446
| 51,757,759
|
Converting PNG image file into row of pixels for saving in dataframe
|
<p>I am working on an image dataset for classification. I want to store all the pixel values of an image in a single row of a pandas dataframe.
I am able to convert the image into a matrix and then into an array, but when I save this array it gets saved down a column.</p>
<p>I used </p>
<pre><code>img = mpimg.imread(path_for_png) #for getting image data into matrix
img = np.ravel(img) #this method for converting it into an array
</code></pre>
<p>And now when I apply this: </p>
<pre><code>df = pd.DataFrame(img) #to convert it into dataframe
</code></pre>
<p>I get the dataframe in the format shown below but I want to convert it into a single row for a single example.</p>
<pre><code> 0
0 1.0
1 1.0
2 1.0
3 1.0
4 1.0
</code></pre>
|
<p>Use <code>pd.DataFrame</code> on a list of images to put each image on a separate row. Since there is only one image here, adding <code>[]</code> is enough, depending on the output you want.</p>
<pre><code>df = pd.DataFrame([img])
</code></pre>
<p>will give</p>
<pre><code> 0 1 2 3 4
0 1.0 1.0 1.0 1.0 1.0
</code></pre>
<p>while</p>
<pre><code>df = pd.DataFrame([[img]])
</code></pre>
<p>gives</p>
<pre><code> 0
0 [1.0, 1.0, 1.0, 1.0, 1.0]
</code></pre>
<p>The second output is most probably what you want if the array is long.</p>
|
python|pandas|numpy|image-processing
| 3
|
4,447
| 51,959,497
|
Replace missing values of a sequence in a column for each id of pandas dataframe
|
<p>I have a dataset:</p>
<pre><code>dt = {'id': [120,120,120,120,120,121,121,345], 'day': [0, 1,2,3,4,0,2,0], 'value': [[0.3,-0.5,-0.7],[0.5,3.4,2.7],[0.45,3.4,0.7],[0.25,0.4,0.7],[0.15,0.34,0.17],[0.35,3.4,2.7],[0.5,3.44,2.57],[0.5,0.34,0.37]]}
df = pd.DataFrame(data=dt)
day id value
0 0 120 [0.3, -0.5, -0.7]
1 1 120 [0.5, 3.4, 2.7]
2 2 120 [0.45, 3.4, 0.7]
3 3 120 [0.25, 0.4, 0.7]
4 4 120 [0.15, 0.34, 0.17]
5 0 121 [0.35, 3.4, 2.7]
6 2 121 [0.5, 3.44, 2.57]
7 0 345 [0.5, 0.34, 0.37]
</code></pre>
<p>For each id there should be a sequence of days from 0 to 4. Here in my data set, some days are missing for some ids. I want to add the missing days for those ids and add an array of zeros to the corresponding "value" column.</p>
<p>Result:</p>
<pre><code> day id value
0 0 120 [0.3, -0.5, -0.7]
1 1 120 [0.5, 3.4, 2.7]
2 2 120 [0.45, 3.4, 0.7]
3 3 120 [0.25, 0.4, 0.7]
4 4 120 [0.15, 0.34, 0.17]
5 0 121 [0.35, 3.4, 2.7]
6 1 121 [0, 0, 0]
7 2 121 [0.5, 3.44, 2.57]
8 3 121 [0, 0, 0]
9 4 121 [0, 0, 0]
10 0 345 [0.5, 0.34, 0.37]
11 1 345 [0, 0, 0]
12 2 345 [0, 0, 0]
13 3 345 [0, 0, 0]
14 4 345 [0, 0, 0]
</code></pre>
<p>This is the sample space. I will be doing this on a data set of huge size.</p>
<p>My try:</p>
<pre><code>r1 = 0
for i in df.id.unique():
    val = df.loc[df['id'] == i]
    mx = val.loc[val['day'].idxmax()].day
    for index, row in val.iterrows():
        if row.day != r1:
            for k in range(int(row.day)-r1-1):
                a.append(np.asarray([0]*3))
            r1 = row.day
        else:
            a.append(row.value)
        if (row.day == mx):
            a.append(row.value)
            for j in range(4-mx):
                a.append(np.asarray([0]*3)))
        r1 = r1+1
<p>But this code is not working.</p>
<p>How do I do this?</p>
|
<p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>pd.MultiIndex.from_product</code></a>:</p>
<pre><code>idx = pd.MultiIndex.from_product([df.id.unique(), np.arange(5)], names=['id', 'day'])

out = (df.set_index(['id', 'day'])
         .reindex(idx).reset_index()
      )
</code></pre>
<p>Then simply replace <code>NaN</code> with your desired fill value.</p>
<pre><code>out.value = [d if isinstance(d, list) else [0, 0, 0] for d in out.value]
id day value
0 120 0 [0.3, -0.5, -0.7]
1 120 1 [0.5, 3.4, 2.7]
2 120 2 [0.45, 3.4, 0.7]
3 120 3 [0.25, 0.4, 0.7]
4 120 4 [0.15, 0.34, 0.17]
5 121 0 [0.35, 3.4, 2.7]
6 121 1 [0, 0, 0]
7 121 2 [0.5, 3.44, 2.57]
8 121 3 [0, 0, 0]
9 121 4 [0, 0, 0]
10 345 0 [0.5, 0.34, 0.37]
11 345 1 [0, 0, 0]
12 345 2 [0, 0, 0]
13 345 3 [0, 0, 0]
14 345 4 [0, 0, 0]
</code></pre>
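<p>An equivalent fill with <code>apply</code>, if you prefer that over the list comprehension:</p>
<pre><code>out['value'] = out['value'].apply(lambda v: v if isinstance(v, list) else [0, 0, 0])
</code></pre>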
|
python|pandas|dictionary|dataframe|padding
| 1
|
4,448
| 41,710,501
|
Is there a way to have a dictionary as an entry of a pandas Dataframe in python?
|
<p>Something like: </p>
<pre><code>d = {'a': 1, 'b': 2}
data=pandas.DataFrame()
data['new column'] = d
data['new column'][0]
</code></pre>
<p>where the last command will return the dictionary d?</p>
|
<p>You can wrap the dictionary in a list, so that the dictionary will be treated as an element instead of an iterable:</p>
<pre><code>d={'a':1, 'b': 2}
data=pd.DataFrame()
data['new column'] = [d]
data['new column'][0]
# {'a': 1, 'b': 2}
</code></pre>
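<p>When reading the value back, <code>.iloc</code> does the same thing while avoiding chained-indexing warnings:</p>
<pre><code>data['new column'].iloc[0]
# {'a': 1, 'b': 2}
</code></pre>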
|
python|pandas|dataframe
| 6
|
4,449
| 64,596,588
|
splitting pandas column containing list of dicts
|
<p>I have a pandas column, with each cell in the column containing a list of dicts with color attributes of each photo, such as:</p>
<pre><code>[{'color': 'black', 'confidence': 1.0}, {'color': 'brown', 'confidence': 0.72}, {'color': 'gray', 'confidence': 0.62}, {'color': 'other', 'confidence': 0.52}, {'color': 'red', 'confidence': 0.01}, {'color': 'blond', 'confidence': 0.01}, {'color': 'white', 'confidence': 0.0}]
</code></pre>
<p>I want to be able to split this column containing lists of dicts into multiple new pandas columns. For example, I want a column named "black" with the value "1.0", a column named "brown" with the value "0.72", etc.</p>
<p>I'm struggling to get this done. Will appreciate tips.
Thanks!</p>
|
<pre><code>a = [{'color': 'black', 'confidence': 1.0}, {'color': 'brown', 'confidence': 0.72}, {'color': 'gray', 'confidence': 0.62}, {'color': 'other', 'confidence': 0.52}, {'color': 'red', 'confidence': 0.01}, {'color': 'blond', 'confidence': 0.01}, {'color': 'white', 'confidence': 0.0}]

c = []
co = []
for d in a:
    c.append(d['color'])
    co.append(d['confidence'])

df = pd.DataFrame()
df['color'] = c
df['confidence'] = co
df = df.transpose()
# make the first row the header
df.columns = df.iloc[0]
df = df[1:]
</code></pre>
<pre><code>Output:
df
Out[159]:
color black brown gray other red blond white
confidence 1 0.72 0.62 0.52 0.01 0.01 0
</code></pre>
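<p>A more compact alternative, assuming each dict always has exactly the keys <code>color</code> and <code>confidence</code>, is to build the row directly with a dict comprehension:</p>
<pre><code>df = pd.DataFrame([{d['color']: d['confidence'] for d in a}])
#    black  brown  gray  other   red  blond  white
# 0    1.0   0.72  0.62   0.52  0.01   0.01    0.0
</code></pre>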
|
python|pandas
| 1
|
4,450
| 64,433,604
|
rearrange columns of multidimensional arrays efficiently
|
<p>I'm fairly certain this problem has an almost trivial solution, but for the life of me I can't seem to figure it out right now, nor find anything online, so bear with me please.</p>
<p>I have a 3D array of size (n x m x m), called v (think of it as n (m x m)-matrices.)
And I wish to rearrange the columns in each matrix according to the indices I get from a sorting.</p>
<p>Hence I have the indices, idxs, (n x m), with which I wish to rearrange the matrix.</p>
<p>Now I can do the rearranging like this:</p>
<pre><code>V = np.empty_like(v)
for i in range(v.shape[0]):
    V[i,:,np.arange(len(idxs[0]))] = v[i,:,idxs[i,:]]
</code></pre>
<p>But this is a very slow operation.</p>
<p>Does anyone have a better way of doing this rearranging?</p>
|
<p>Use a range array for advanced indexing to get a vectorized solution -</p>
<pre><code>I = np.arange(v.shape[0])[:,None]
V[I,:,np.arange(len(idxs[0]))] = v[I,:,idxs]
</code></pre>
<p>Another with simply indexing into <code>v</code> to directly get <code>V</code> -</p>
<pre><code>V = v[np.arange(v.shape[0])[:,None],:,idxs].swapaxes(1,2)
</code></pre>
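<p>A quick self-contained check of the vectorized version against the original loop (sizes are arbitrary; <code>idxs</code> is assumed to hold a permutation per matrix, shape <code>(n, m)</code>):</p>
<pre><code>import numpy as np

n, m = 4, 5
v = np.random.rand(n, m, m)
idxs = np.argsort(np.random.rand(n, m), axis=1)  # random permutations

V_loop = np.empty_like(v)
for i in range(n):
    V_loop[i, :, np.arange(m)] = v[i, :, idxs[i, :]]

V_vec = v[np.arange(n)[:, None], :, idxs].swapaxes(1, 2)
assert np.allclose(V_loop, V_vec)
</code></pre>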
|
python|numpy|optimization|linear-algebra
| 2
|
4,451
| 64,395,296
|
Pandas data frame not exporting to excel properly
|
<p>I am not able to produce a CSV file with all the data from the scraper.</p>
<p>When I test one item, it works properly: the exported CSV has all the columns and one row with the corresponding values.</p>
<p>When I try to apply the CSV export to the whole scraper, it just doesn't work.</p>
<p>Can someone tell me what I am doing wrong?</p>
<p>Here is the scraper:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
import re

baseUrl = 'https://www.ebay.com/str/suitcharityestbysaveasuit?_pgn=1'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'}

productLinks = []
for x in range(1, 2):
    r = requests.get(f'https://www.ebay.com/str/suitcharityestbysaveasuit?_pgn={x}')
    soup = BeautifulSoup(r.content, 'lxml')
    productList = soup.find_all('li', class_='s-item')
    for item in productList:
        for link in item.find_all('a', href=True):
            productLinks.append(link['href'])

alldata = []
for link in productLinks:
    r = requests.get(link, headers=headers)
    soup = BeautifulSoup(r.content, 'lxml')
    data = {}
    data['Name'] = soup.find('h1', class_='it-ttl').text.strip("Details, about")
    try:
        data['Price'] = soup.find('span', class_='notranslate').text.strip("US, $")
    except:
        data['Price'] = 0
    try:
        data['ebayID'] = soup.find('div', class_='u-flL iti-act-num itm-num-txt').text
    except:
        data['ebayID'] = 0
    data['Color'] = soup.find('h2', itemprop='color').text
    data['Brand'] = soup.find('h2', itemprop='brand').text
    try:
        soup = BeautifulSoup(requests.get(link).content, 'html.parser')
        image = soup.select_one('[itemprop="image"]')['src'].replace('l300', 'l1600')
        data['image'] = image
    except:
        data['image'] = 'None'
    for label, value in zip(soup.select('td.attrLabels'), soup.select('td.attrLabels + td')):
        label = label.get_text(strip=True)
        label = label.rstrip(':').lower()
        value = value.get_text(strip=True)
        data[label] = value
    try:
        soup = BeautifulSoup(requests.get(soup.iframe['src']).content, 'html.parser')
        number = soup.find(text=lambda t: t.strip().startswith('Item no.')).find_next('div').get_text(strip=True)
        data['Item Number'] = number
    except:
        data['Item Number'] = 'none'

df = pd.DataFrame(alldata)
df.to_csv('data.csv')
</code></pre>
|
<p>I guess it's because <code>alldata</code> is empty - you never filled it with scraped data.</p>
<p>Try adding</p>
<pre><code>    try:
        soup = BeautifulSoup(requests.get(soup.iframe['src']).content, 'html.parser')
        number = soup.find(text=lambda t: t.strip().startswith('Item no.')).find_next('div').get_text(strip=True)
        data['Item Number'] = number
    except:
        data['Item Number'] = 'none'
    alldata.append(data)  # <= here

df = pd.DataFrame(alldata)
df.to_csv('data.csv')
</code></pre>
<hr />
<p>EDIT</p>
<p>I also noticed your code produces duplicates within <code>productLinks</code>. To avoid making unnecessary requests, consider using a set:</p>
<pre><code>for link in set(productLinks):
    ...
    # to keep track of parsed links
    data["link"] = link
    alldata.append(data)
</code></pre>
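<p>If the original order of the links matters, <code>dict.fromkeys</code> deduplicates while preserving first-seen order (Python 3.7+):</p>
<pre><code>for link in dict.fromkeys(productLinks):
    ...
</code></pre>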
<p>Sample output:</p>
<pre><code> Name Price ebayID Color Brand image condition size fit jacket/coat length type jacket cut color department brand chest size jacket front button style material jacket vent style pattern size type Item Number link country/region of manufacture
0 Canali Men's Plaid Brown Wool Blazer 42L $2,195 55.99 324240475008 Brown Canali https://i.ebayimg.com/images/g/mhEAAOSwMUpfEJ8O/s-l1600.jpg Pre-owned:An item that has been used or worn previously. See the seller’s listing for full details anddescription of any imperfections.See all condition definitions- opens in a new window or tab...Read moreabout the condition 42 Athletic Long Blazer Single-Breasted Brown Men Canali 42 Two-Button Wool Double-Vented Plaid Regular LXW304-Julw4 https://www.ebay.com/itm/Canali-Mens-Plaid-Brown-Wool-Blazer-42L-2-195/324240475008?hash=item4b7e3d0380:g:mhEAAOSwMUpfEJ8O
1 Brooks Brothers Men's Gray Plaid Wool Blazer 42R $2,795 92.12 224093561666 Gray Brooks Brothers https://i.ebayimg.com/images/g/E7YAAOSwCpRfEaso/s-l1600.jpg Pre-owned:An item that has been used or worn previously. See the seller’s listing for full details anddescription of any imperfections.See all condition definitions- opens in a new window or tab...Read moreabout the condition 42 Regular Regular Blazer Gray Men Brooks Brothers 42 Three-Button Wool Double-Vented Plaid Regular LXW373-JULW3 https://www.ebay.com/itm/Brooks-Brothers-Mens-Gray-Plaid-Wool-Blazer-42R-2-795/224093561666?hash=item342d046342:g:E7YAAOSwCpRfEaso Canada
</code></pre>
|
python|pandas|dataframe|web-scraping
| 0
|
4,452
| 64,324,685
|
Why my PCA is not invariant to rotation and axis swap?
|
<p>I have a voxel (np.array) with size 3x3x3, filled with some values; this setup is essential for me. I want to have a rotation-invariant representation of it. For this case, I decided to try the PCA representation, which is believed to be <a href="https://stats.stackexchange.com/questions/239069/is-pca-invariant-to-orthogonal-transformations">invariant to orthogonal transformations</a>. <a href="https://datascience.stackexchange.com/questions/43800/how-to-mathematically-explain-the-translational-and-rotational-invariance-of-pca">another</a></p>
<p>For simplicity, I took an axes swap, but in case I'm mistaken it could also be <code>np.rot90</code>.</p>
<p>I have interpreted my 3D voxel as a set of 27 weighted 3D cube-point vectors, which I incorrectly called a "basis" (that is, a set of 3D points in space, represented by vectors obtained from the cube points and scaled by the voxel values).</p>
<pre><code>import numpy as np

voxel1 = np.random.normal(size=(3,3,3))
voxel2 = np.transpose(voxel1, (1,0,2)) #np.rot90(voxel1) #

basis = []
for i in range(3):
    for j in range(3):
        for k in range(3):
            basis.append([i+1, j+1, k+1]) # avoid 0
basis = np.array(basis)

voxel1 = voxel1.reshape((27,1))
voxel2 = voxel2.reshape((27,1))

voxel1 = voxel1*basis # weighted basis vectors
voxel2 = voxel2*basis
</code></pre>
<pre><code>print(voxel1.shape)
(27, 3)
</code></pre>
<p>Then I did PCA on those 27 3-dimensional vectors:</p>
<pre><code>def pca(x):
    center = np.mean(x, 0)
    x = x - center
    cov = np.cov(x.T) / x.shape[0]
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)
    v = e_vectors[:, order].transpose()
    return x.dot(v)

vp1 = pca(voxel1)
vp2 = pca(voxel2)
</code></pre>
<p>But the results in <code>vp1</code> and <code>vp2</code> are different. Perhaps I have a mistake (though I believe this is the right formula), and the proper code must be</p>
<p><code>x.dot(v.T)</code></p>
<p>But in this case the results are very strange. The top and bottom blocks of the transformed data are the same up to the sign:</p>
<pre><code>>>> np.abs(np.abs(vp1)-np.abs(vp2)) > 0.01
array([[False, False, False],
[False, False, False],
[False, False, False],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, False, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[False, False, False],
[False, False, False],
[False, False, False],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, False, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[False, False, False],
[False, False, False],
[False, False, False]])
</code></pre>
<p>What I'm doing wrong?</p>
<p>What I want to do is to find some invariant representation of my weighted voxel, something like positioning according to the axes of inertia or principal axes. I would really appreciate it if someone could help me.</p>
<p>UPD: <a href="https://stackoverflow.com/questions/37006542/how-to-align-principal-axes-of-3d-density-map-in-numpy-with-cartesian-axes">Found the question similar to mine</a>, but code is unavailable</p>
<p>EDIT2: Found the code <a href="https://github.com/smparker/orient-molecule/blob/master/orient.py" rel="nofollow noreferrer">InertiaRotate</a> and managed to monkey-do the following:</p>
<pre><code>import numpy as np

# https://github.com/smparker/orient-molecule/blob/master/orient.py

voxel1 = np.random.normal(size=(3,3,3))
voxel2 = np.transpose(voxel1, (1,0,2))

voxel1 = voxel1.reshape((27,))
voxel2 = voxel2.reshape((27,))

basis = []
for i in range(3):
    for j in range(3):
        for k in range(3):
            basis.append([i+1, j+1, k+1]) # avoid 0
basis = np.array(basis)
basis = basis - np.mean(basis, axis=0)

def rotate_func(data, mass):
    #mass = [ masses[n.lower()] for n in geom.names ]
    inertial_tensor = -np.einsum("ax,a,ay->xy", data, mass, data)
    # negate sign to reverse the sorting of the tensor
    eig, axes = np.linalg.eigh(-inertial_tensor)
    axes = axes.T

    # adjust signs of the axes so the third moment is positive along the new X and Y axes
    testcoords = np.dot(data, axes.T) # a little wasteful, but fine for now
    thirdmoment = np.einsum("ax,a->x", testcoords**3, mass)

    for i in range(2):
        if thirdmoment[i] < 1.0e-6:
            axes[i,:] *= -1.0

    # rotation matrix must have determinant of 1
    if np.linalg.det(axes) < 0.0:
        axes[2,:] *= -1.0

    return axes

axes1 = rotate_func(basis, voxel1)
v1 = np.dot(basis, axes1.T)
axes2 = rotate_func(basis, voxel2)
v2 = np.dot(basis, axes2.T)

print(v1)
print(v2)
</code></pre>
<p>It seems to use the basis (coordinates) and the mass separately. The results are quite similar to my problem above: some parts of the transformed data match only up to the sign; I believe those are some cube sides.</p>
<p><code>print(np.abs(np.abs(v1)-np.abs(v2)) > 0.01)</code></p>
<pre><code>[[False False False]
[False False False]
[False False False]
[ True True True]
[ True True True]
[ True True True]
[ True True True]
[False False False]
[ True True True]
[ True True True]
[ True True True]
[ True True True]
[False False False]
[False False False]
[False False False]
[ True True True]
[ True True True]
[ True True True]
[ True True True]
[False False False]
[ True True True]
[ True True True]
[ True True True]
[ True True True]
[False False False]
[False False False]
[False False False]]
</code></pre>
<p>Looking for some explanation. This code is designed for molecules, and must work...</p>
<p>UPD: Tried to choose 3 vectors as a new basis from those 24: the one with the biggest norm, the one with the smallest, and their cross product. Combined them into the matrix V, then used the formula V^(-1)*X to transform the coordinates, and got the same problem: the resulting sets of vectors are not equal for rotated voxels.</p>
<hr />
<p>UPD2: I agree with meTchaikovsky that my idea of multiplying the voxel vectors by weights and thus creating some non-cubic point cloud was incorrect. Probably, we indeed need to take the solution for the rotated "basis" (yes, this is not a basis, but rather a way to define the point cloud), which will also work later when the "basis" is the same but the weights are rotated according to the 3D rotation.</p>
<p>Based on the answer and the reference provided by meTchaikovsky, and finding <a href="https://stats.stackexchange.com/questions/39073/are-pca-solutions-unique">other answers</a>, my friend and I came to the conclusion that <code>rotate_func</code> from the molecular package mentioned above tries to invent some convention for computing the signs of the components. Their solution tries to use the 3rd moment for the first 2 axes and the determinant for the last axis (?). We tried a somewhat different approach and succeeded in having half of the representations match:</p>
<pre><code># -*- coding: utf-8 -*-
"""
Created on Fri Oct 16 11:40:30 2020

@author: Dima
"""

import numpy as np
from numpy.random import randn
from numpy import linalg as la
from scipy.spatial.transform import Rotation as R

np.random.seed(10)
rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]])

def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat*x
    x_f -= np.mean(x_f, axis=0)
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    #print(order)
    v = e_vectors[:,order]
    v = v/np.sign(v[0,:])
    if(la.det(v)<0):
        v = -v
    return x_c @ v

def standardize(x):
    # take the vector with the biggest norm, the one with the smallest,
    # and their cross product as a basis
    x -= np.mean(x, axis=0)
    nrms = la.norm(x, axis=1)
    imin = np.argmin(nrms)
    imax = np.argmax(nrms)
    vec1 = x[imin, :]
    vec2 = x[imax, :]
    vec3 = np.cross(vec1, vec2)
    Smat = np.stack([vec1, vec2, vec3], axis=0)
    if(la.det(Smat)<0):
        Smat = -Smat
    return(la.inv(Smat)@x.T)

angles = np.linspace(0.0,90.0,91)
voxel1 = np.random.normal(size=(3,3,3))

res = []
for angle in angles:
    voxel2 = voxel1.copy()
    voxel1 = voxel1.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 = basis1+1e-4*randn(27,3) # perturbation
    basis2 = basis1 @rotate_mat(np.deg2rad(angle))

    #voxel1 = voxel1*basis1
    #voxel2 = voxel2*basis2

    #print(angle,(np.abs(pca(voxel1) - pca(voxel2) )))
    #gg= np.abs(standardize(basis1) - standardize(basis2) )
    gg = np.abs(pca(voxel1, basis1) - pca(voxel1, basis2))
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print(angle, ss, bl)
    #res.append(np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6)))
    del basis1, basis2
</code></pre>
<p>The results are good up to a 58-degree angle (yet we're still experimenting with rotations of the x and y axes). After that we have a constant difference, which indicates some unaccounted-for sign reversal. This is better than the less consistent result of <code>rotate_func</code>:</p>
<pre><code>0.0 0.0 True
1.0 1.1103280567106161e-13 True
2.0 5.150139890290964e-14 True
3.0 8.977126225544196e-14 True
4.0 5.57341699240722e-14 True
5.0 4.205149954378956e-14 True
6.0 3.7435437643664957e-14 True
7.0 1.2943967187158123e-13 True
8.0 5.400185371573149e-14 True
9.0 8.006410204958181e-14 True
10.0 7.777189536904011e-14 True
11.0 5.992073021576436e-14 True
12.0 6.3716122222085e-14 True
13.0 1.0120048110065158e-13 True
14.0 1.4193029076233626e-13 True
15.0 5.32774440341853e-14 True
16.0 4.056702432878251e-14 True
17.0 6.52062429116855e-14 True
18.0 1.3237663595853556e-13 True
19.0 8.950259695710006e-14 True
20.0 1.3795067925438317e-13 True
21.0 7.498727794307339e-14 True
22.0 8.570866862371226e-14 True
23.0 8.961510590826412e-14 True
24.0 1.1839169916779899e-13 True
25.0 1.422193407555868e-13 True
26.0 6.578778015788652e-14 True
27.0 1.0042963537887101e-13 True
28.0 8.438153062569065e-14 True
29.0 1.1299103064863272e-13 True
30.0 8.192453876745831e-14 True
31.0 1.2618492405483406e-13 True
32.0 4.9237819394886296e-14 True
33.0 1.0971028569666842e-13 True
34.0 1.332138304559801e-13 True
35.0 5.280024600049296e-14 True
</code></pre>
<p>From the code above, you can see that we tried to use another basis: the vector with the biggest norm, the vector with the smallest, and their cross product. Here we should have only two variants (the direction of the cross product), which could be fixed later, but I couldn't get this alternative solution to work.</p>
<p>I hope that someone can help me finish this and obtain rotation-invariant representation for voxels.</p>
<hr />
<p>EDIT 3. Thank you very much meTchaikovsky, but the situation is still unclear. My problem originally lies in processing 3D voxels, which are (3,3,3) numpy arrays. We reached the conclusion that to find an invariant representation, we just need to fix the 3D voxel as the weights used for calculating the covariance matrix, and apply rotations to the centered "basis" (the vectors used to describe the point cloud).</p>
<p>Therefore, once we achieved invariance to "basis" rotations, the problem should have been solved: now, when we fix the "basis" and use a rotated voxel, the result must be invariant. Surprisingly, this is not so. Here I check 24 rotations of the cube with basis2=basis1 (up to a small perturbation):</p>
<pre><code>import scipy.ndimage

def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat * x
    x_f -= np.mean(x_f, axis=0)
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    # here is the solution, we switch the sign of the projections
    # so that the projection with the largest absolute value along a principal axis is positive
    proj = x_c @ v
    asign = np.sign(proj)
    max_ind = np.argmax(np.abs(proj),axis=0)[None,:]
    sign = np.take_along_axis(asign,max_ind,axis=0)
    proj = proj * sign
    return proj

def rotate_3d(image1, alpha, beta, gamma):
    # rotate along z-axis; the rotation angles are in degrees
    image2 = scipy.ndimage.rotate(image1, alpha, mode='nearest', axes=(0, 1), reshape=False)
    # rotate along y-axis
    image3 = scipy.ndimage.rotate(image2, beta, mode='nearest', axes=(0, 2), reshape=False)
    # rotate along x-axis
    image4 = scipy.ndimage.rotate(image3, gamma, mode='nearest', axes=(1, 2), reshape=False)
    return image4

voxel10 = np.random.normal(size=(3,3,3))

angles = [[x,y,z] for x in [-90,0,90] for y in [-90,0,90] for z in [-90,0,90]]
res = []
for angle in angles:
    voxel2 = rotate_3d(voxel10, angle[0], angle[1], angle[2])
    voxel1 = voxel10.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation
    basis2 = basis1

    original_diff = np.sum(np.abs(basis1-basis2))
    gg = np.abs(pca(voxel1, basis1) - pca(voxel2, basis2))
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<pre><code>difference before pca 0.000, difference after pca 45.738 False
difference before pca 0.000, difference after pca 12.157 False
difference before pca 0.000, difference after pca 26.257 False
difference before pca 0.000, difference after pca 37.128 False
difference before pca 0.000, difference after pca 52.131 False
difference before pca 0.000, difference after pca 45.436 False
difference before pca 0.000, difference after pca 42.226 False
difference before pca 0.000, difference after pca 18.959 False
difference before pca 0.000, difference after pca 38.888 False
difference before pca 0.000, difference after pca 12.157 False
difference before pca 0.000, difference after pca 26.257 False
difference before pca 0.000, difference after pca 50.613 False
difference before pca 0.000, difference after pca 52.132 False
difference before pca 0.000, difference after pca 0.000 True
difference before pca 0.000, difference after pca 52.299 False
</code></pre>
<p>Here basis1=basis2 (hence the basis difference before pca is 0), and you can see 0 for the (0,0,0) rotation. But rotated voxels give a different result. In case scipy does something wrong, I've checked the approach with <a href="https://stackoverflow.com/a/33190472/1692060">numpy.rot90</a> with the same result:</p>
<pre><code>rot90 = np.rot90

def rotations24(polycube):
    # imagine shape is pointing in axis 0 (up)

    # 4 rotations about axis 0
    yield from rotations4(polycube, 0)

    # rotate 180 about axis 1, now shape is pointing down in axis 0
    # 4 rotations about axis 0
    yield from rotations4(rot90(polycube, 2, axis=1), 0)

    # rotate 90 or 270 about axis 1, now shape is pointing in axis 2
    # 8 rotations about axis 2
    yield from rotations4(rot90(polycube, axis=1), 2)
    yield from rotations4(rot90(polycube, -1, axis=1), 2)

    # rotate about axis 2, now shape is pointing in axis 1
    # 8 rotations about axis 1
    yield from rotations4(rot90(polycube, axis=2), 1)
    yield from rotations4(rot90(polycube, -1, axis=2), 1)

def rotations4(polycube, axis):
    """List the four rotations of the given cube about the given axis."""
    for i in range(4):
        yield rot90(polycube, i, axis)

def rot90(m, k=1, axis=2):
    """Rotate an array k*90 degrees in the counter-clockwise direction around the given axis"""
    m = np.swapaxes(m, 2, axis)
    m = np.rot90(m, k)
    m = np.swapaxes(m, 2, axis)
    return m

voxel10 = np.random.normal(size=(3,3,3))

gen = rotations24(voxel10)
res = []
for voxel2 in gen:
    #voxel2 = rotate_3d(voxel10, angle[0], angle[1], angle[2])
    voxel1 = voxel10.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation
    basis2 = basis1

    original_diff = np.sum(np.abs(basis1-basis2))
    gg = np.abs(pca(voxel1, basis1) - pca(voxel2, basis2))
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<p>I tried to investigate this case, and the only (perhaps irrelevant) thing I found is the following:</p>
<pre><code>voxel1 = np.ones((3,3,3))
voxel1[0,0,0] = 0 # if I change 0 to 0.5 it stops working at all

# mirrored around diagonal
voxel2 = np.ones((3,3,3))
voxel2[2,2,2] = 0

for angle in range(1):
    voxel1 = voxel1.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 = basis1 + 1e-4 * randn(27,3) # perturbation
    basis2 = basis1

    # If perturbation is used we have
    # difference before pca 0.000, difference after pca 0.000 True
    # correct for 100.0 percent of time
    # eigenvalues for both voxels
    # [1.03417495 0.69231107 0.69235402]
    # [0.99995368 0.69231107 0.69235402]

    # If no perturbation is applied to the basis, a difference is present
    # difference before pca 0.000, difference after pca 55.218 False
    # correct for 0.0 percent of time
    # eigenvalues for both voxels (always have 1.):
    # [0.69230769 1.03418803 0.69230769]
    # [1.         0.69230769 0.69230769]
</code></pre>
<p>Currently I don't know how to proceed from there.</p>
<hr />
<p>EDIT4:</p>
<p>I'm currently thinking that there is some problem with how voxel rotations are transformed into basis coefficients via <code>voxel.reshape()</code>.</p>
<p>A simple experiment creating an array of indices:</p>
<pre><code>indices = np.arange(27)
indices3d = indices.reshape((3,3,3))
voxel10 = np.random.normal(size=(3,3,3))
assert voxel10[0,1,2] == voxel10.ravel()[indices3d[0,1,2]]
</code></pre>
<p>And then using it for rotations</p>
<pre><code>gen = rotations24(indices3d)

res = []
for ind2 in gen:
    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    voxel1 = voxel10.copy().reshape(27,1) #np.array([voxel10[i,j,k] for k in range(3) for j in range(3) for i in range(3)])[...,np.newaxis]
    voxel2 = voxel1[ind2.reshape(27,)]

    basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation
    basis2 = basis1[ind2.reshape(27,)]

    original_diff = np.sum(np.abs(basis1-basis2))
    gg = np.abs(pca(voxel1, basis1) - pca(voxel2, basis2))
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<p>This shows that those rotations are not correct, because in my opinion the rotated voxel and basis should match:</p>
<pre><code>difference before pca 0.000, difference after pca 0.000 True
difference before pca 48.006, difference after pca 87.459 False
difference before pca 72.004, difference after pca 70.644 False
difference before pca 48.003, difference after pca 71.930 False
difference before pca 72.004, difference after pca 79.409 False
difference before pca 84.005, difference after pca 36.177 False
</code></pre>
<hr />
<p>EDIT 5: Okaaay, so here we go, at least for the 24 rotations. At first, a slight change of logic had crept into our pca function. Here we center <code>x_c</code> (the basis) and forget about it, then separately center <code>x_f</code> (features*basis) and transform it with PCA. This does not work, perhaps because our basis is not centered and multiplication by the features further increases the bias. If we center <code>x_c</code> first and then multiply it by the features, everything is OK. Also, previously we had <code>proj = x_c @ v</code> with <code>v</code> computed from <code>x_f</code>, which was totally wrong in this case, as <code>x_f</code> and <code>x_c</code> were centered around different centers.</p>
<pre><code>def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat * x
    x_f -= np.mean(x_f, axis=0)
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    # here is the solution, we switch the sign of the projections
    # so that the projection with the largest absolute value along a principal axis is positive
    proj = x_f @ v
    return proj
</code></pre>
<p>Secondly, as we already found, we need to sort the vectors obtained by PCA, for example by the first column:</p>
<pre><code>basis2 = basis1
original_diff = np.sum(np.abs(basis1-basis2))

a = pca(voxel1, basis1)
t1 = a[a[:,0].argsort()]
a = pca(voxel2, basis2)
t2 = a[a[:,0].argsort()]

gg = np.abs(t1-t2)
</code></pre>
<p>And the last thing, which we also discovered already, is that a simple <code>reshape</code> is wrong for the voxel; it must correspond to the rotation:</p>
<p><code>voxel2 = voxel1[ind2.reshape(27,)] #np.take(voxel10, ind2).reshape(27,1)</code>.</p>
<p>One more important comment to understand the solution. When we perform PCA on the 3D vectors (the point cloud defined by our basis) with weights assigned (analogously to the inertia of a rigid body), the actual <strong>assignment of the weights to the points</strong> is a sort of external information, which becomes hard-defined for the algorithm. When we rotated the basis by applying rotation matrices, we did not change the order of the vectors in the array, hence the order of the mass assignments wasn't changed either. When we start to rotate the voxel, we change the order of the masses, so in general the PCA algorithm will not work without the same transformation applied to the basis. So, only if we have an array of 3D vectors transformed by some rotation AND the list of masses rearranged accordingly can we detect the rotation of the rigid body using PCA. Otherwise, if we detach the masses from the points, that is in general another body.</p>
<p>So how does it work for us then? It works because our points are fully symmetric around the center after centering the basis. In this case reassignment of the masses does not change "the body", because the vector norms are the same. We can therefore use the numerically identical <code>basis2=basis1</code> for testing the 24 rotations together with the rotated voxel2 (the rotated point-cloud cubes match; only the masses migrate). This corresponds to the rotation of the point cloud with mass points around the center of the cube. PCA will then transform vectors with the same lengths and different masses in the same way according to the body's "inertia" (after we reach a convention on the signs of the components). The only thing left is to sort the PCA-transformed vectors at the end, because they have different positions in the array (since our body was rotated, the mass points changed their positions). This makes us lose some information related to the order of the vectors, but it looks inevitable.</p>
<p>Here is the code which checks the solution for the 24 rotations. It should theoretically work in the general case as well, giving closer values for more complicated objects rotated inside a bigger voxel:</p>
<pre><code>import numpy as np
from numpy.random import randn
#np.random.seed(20)

def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat * x_c
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    # here is the solution, we switch the sign of the projections
    # so that the projection with the largest absolute value along a principal axis is positive
    proj = x_f @ v
    asign = np.sign(proj)
    max_ind = np.argmax(np.abs(proj),axis=0)[None,:]
    sign = np.take_along_axis(asign,max_ind,axis=0)
    proj = proj * sign
    return proj

# must be correct https://stackoverflow.com/questions/15230179/how-to-get-the-linear-index-for-a-numpy-array-sub2ind
indices = np.arange(27)
indices3d = indices.reshape((3,3,3))
voxel10 = np.random.normal(size=(3,3,3))
assert voxel10[0,1,2] == voxel10.ravel()[indices3d[0,1,2]]

rot90 = np.rot90

def rotations24(polycube):
    # imagine shape is pointing in axis 0 (up)

    # 4 rotations about axis 0
    yield from rotations4(polycube, 0)

    # rotate 180 about axis 1, now shape is pointing down in axis 0
    # 4 rotations about axis 0
    yield from rotations4(rot90(polycube, 2, axis=1), 0)

    # rotate 90 or 270 about axis 1, now shape is pointing in axis 2
    # 8 rotations about axis 2
    yield from rotations4(rot90(polycube, axis=1), 2)
    yield from rotations4(rot90(polycube, -1, axis=1), 2)

    # rotate about axis 2, now shape is pointing in axis 1
    # 8 rotations about axis 1
    yield from rotations4(rot90(polycube, axis=2), 1)
    yield from rotations4(rot90(polycube, -1, axis=2), 1)

def rotations4(polycube, axis):
    """List the four rotations of the given cube about the given axis."""
    for i in range(4):
        yield rot90(polycube, i, axis)

def rot90(m, k=1, axis=2):
    """Rotate an array k*90 degrees in the counter-clockwise direction around the given axis"""
    m = np.swapaxes(m, 2, axis)
    m = np.rot90(m, k)
    m = np.swapaxes(m, 2, axis)
    return m

gen = rotations24(indices3d)

res = []
for ind2 in gen:
    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    voxel1 = voxel10.copy().reshape(27,1)
    voxel2 = voxel1[ind2.reshape(27,)] #np.take(voxel10, ind2).reshape(27,1)

    basis1 += 1e-6*np.random.normal(size=(27, 1)) # perturbation
    basis2 = basis1

    original_diff = np.sum(np.abs(basis1-basis2))
    a = pca(voxel1, basis1)
    t1 = a[a[:,0].argsort()]
    a = pca(voxel2, basis2)
    t2 = a[a[:,0].argsort()]

    gg = np.abs(t1-t2)
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<pre><code>difference before pca 0.000, difference after pca 0.000 True
difference before pca 0.000, difference after pca 0.000 True
difference before pca 0.000, difference after pca 0.000 True
difference before pca 0.000, difference after pca 0.000 True
</code></pre>
<hr />
<p>PS. I want to propose a better ordering scheme to take into account zero values in the voxel, which might confuse the previous approach when the entire first column of the PCA vectors is zero, etc. I propose to sort by the vector norms multiplied by the sign of the sum of elements. Here is TensorFlow 2 code:</p>
<pre><code>def infer_shape(x):
    x = tf.convert_to_tensor(x)

    # If unknown rank, return dynamic shape
    if x.shape.dims is None:
        return tf.shape(x)

    static_shape = x.shape.as_list()
    dynamic_shape = tf.shape(x)

    ret = []
    for i in range(len(static_shape)):
        dim = static_shape[i]
        if dim is None:
            dim = dynamic_shape[i]
        ret.append(dim)
    return ret

def merge_last_two_dims(tensor):
    shape = infer_shape(tensor)
    shape[-2] *= shape[-1]
    #shape.pop(1)
    shape = shape[:-1]
    return tf.reshape(tensor, shape)

def pca(inpt_voxel):
    patches = tf.extract_volume_patches(inpt_voxel, ksizes=[1,3,3,3,1], strides=[1,1,1,1,1], padding="VALID")
    features0 = patches[...,tf.newaxis]*basis
    # centered basises
    basis1_ = tf.ones(shape=tf.shape(patches[...,tf.newaxis]), dtype=tf.float32)*basis
    basis1 = basis1_ - tf.math.divide_no_nan(tf.reduce_sum(features0, axis=-2), tf.reduce_sum(patches, axis=-1)[...,None])[:,:,:,:,None,:]
    features = patches[...,tf.newaxis]*basis1
    features_centered_basis = features - tf.reduce_mean(features, axis=-2)[:,:,:,:,None,:]
    x = features_centered_basis
    m = tf.cast(x.get_shape()[-2], tf.float32)
    cov = tf.matmul(x,x,transpose_a=True)/(m - 1)
    e,v = tf.linalg.eigh(cov,name="eigh")
    proj = tf.matmul(x,v,transpose_b=False)
    asign = tf.sign(proj)
    max_ind = tf.argmax(tf.abs(proj),axis=-2)[:,:,:,:,None,:]
    sign = tf.gather(asign,indices=max_ind, batch_dims=4, axis=-2)
    sign = tf.linalg.diag_part(sign)
    proj = proj * sign
    # But we can have the 1st coordinate zero. In this case,
    # other coordinates become ambiguous
    #s = tf.argsort(proj[...,0], axis=-1)
    # sort by l2 vector norms, multiplied by signs of sums
    sum_signs = tf.sign(tf.reduce_sum(proj, axis=-1))
    norms = tf.norm(proj, axis=-1)
    s = tf.argsort(sum_signs*norms, axis=-1)
    proj = tf.gather(proj, s, batch_dims=4, axis=-2)
    return merge_last_two_dims(proj)
</code></pre>
|
<p>Firstly, your <code>pca</code> function is not correct; it should be</p>
<pre><code>def pca(x):
    x -= np.mean(x, axis=0)
    cov = np.cov(x.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    return x @ v
</code></pre>
<p>You shouldn't transpose <code>e_vectors[:,order]</code>, because we want each column of the <code>v</code> array to be an eigenvector; therefore, <code>x @ v</code> will be the projections of <code>x</code> onto those eigenvectors.</p>
<p>Secondly, I think you misunderstand the meaning of rotation. It is not <code>voxel1</code> that should be rotated, but <code>basis1</code>. If you rotate <code>voxel1</code> (by taking a transposition), what you really do is rearrange the indices of the grid points, while the coordinates of the points in <code>basis1</code> are not changed.</p>
<p>In order to rotate the points (around the z axis for example), you can first define a function to calculate the rotation matrix given an angle</p>
<pre><code>rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]])
</code></pre>
<p>with the rotation matrix generated by this function, you can rotate the array <code>basis1</code> to create another array <code>basis2</code></p>
<pre><code>basis2 = basis1 @ rotate_mat(np.deg2rad(angle))
</code></pre>
<p>Now it comes to the title of your question, "Why my PCA is not invariant to rotation and axis swap?": from <a href="https://stats.stackexchange.com/questions/88880/does-the-sign-of-scores-or-of-loadings-in-pca-or-fa-have-a-meaning-may-i-revers">this post</a>, the PCA result is not unique; you can actually run a test to see this</p>
<pre><code>import numpy as np

np.random.seed(10)
rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]])

def pca(x):
    x -= np.mean(x, axis=0)
    cov = np.cov(x.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    return x @ v

angles = np.linspace(0, 90, 91)
res = []
for angle in angles:
    voxel1 = np.random.normal(size=(3,3,3))
    voxel2 = voxel1.copy()

    voxel1 = voxel1.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)])
    # basis2 = np.hstack((-basis1[:,1][:,None],basis1[:,0][:,None],-basis1[:,2][:,None]))
    basis2 = basis1 @ rotate_mat(np.deg2rad(angle))

    voxel1 = voxel1*basis1
    voxel2 = voxel2*basis2

    print(angle, np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6)))
    res.append(np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6)))

print()
print(np.sum(res) / len(angles))
</code></pre>
<p>After you run this script, you will see that only 21% of the time are the two PCA results the same.</p>
<hr />
<p><strong>UPDATE</strong></p>
<p>I think instead of focusing on the eigenvectors of the principal components, <strong>you can instead focus on the projections</strong>. For two clouds of points, even though they are essentially the same, the eigenvectors can be drastically different. Therefore, hardcoding in order to somehow make the two sets of eigenvectors the same is a very difficult task.</p>
<p>However, based on <a href="https://stats.stackexchange.com/a/88882/198896">this post</a>, for the same cloud of points, two sets of eigenvectors can be different only up to a minus sign. Therefore, the projections upon the two sets of eigenvectors are also different only up to a minus sign. This actually offers us an elegant solution: for the projections along an eigenvector (principal axis), all we need to do is switch the sign of the projections so that the projection with the largest absolute value along that principal axis is positive.</p>
<pre><code>import numpy as np
from numpy.random import randn
#np.random.seed(20)

rotmat_z = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]])
rotmat_y = lambda theta: np.array([[np.cos(theta),0.,np.sin(theta)],[0.,1.,0.],[-np.sin(theta),0.,np.cos(theta)]])
rotmat_x = lambda theta: np.array([[1.,0.,0.],[0.,np.cos(theta),-np.sin(theta)],[0.,np.sin(theta),np.cos(theta)]])
# based on https://en.wikipedia.org/wiki/Rotation_matrix
rot_mat = lambda alpha,beta,gamma: rotmat_z(alpha) @ rotmat_y(beta) @ rotmat_x(gamma)
deg2rad = lambda alpha,beta,gamma: [np.deg2rad(alpha),np.deg2rad(beta),np.deg2rad(gamma)]

def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat * x
    x_f -= np.mean(x_f, axis=0)
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    # here is the solution, we switch the sign of the projections
    # so that the projection with the largest absolute value along a principal axis is positive
    proj = x_f @ v
    asign = np.sign(proj)
    max_ind = np.argmax(np.abs(proj),axis=0)[None,:]
    sign = np.take_along_axis(asign,max_ind,axis=0)
    proj = proj * sign
    return proj

ref_angles = np.linspace(0.0,90.0,10)
angles = [[alpha,beta,gamma] for alpha in ref_angles for beta in ref_angles for gamma in ref_angles]
voxel1 = np.random.normal(size=(3,3,3))

res = []
for angle in angles:
    voxel2 = voxel1.copy()
    voxel1 = voxel1.reshape(27,1)
    voxel2 = voxel2.reshape(27,1)

    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 = basis1 + 1e-4 * randn(27,3) # perturbation
    basis2 = basis1 @ rot_mat(*deg2rad(*angle))

    original_diff = np.sum(np.abs(basis1-basis2))
    gg = np.abs(pca(voxel1, basis1) - pca(voxel1, basis2))
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<p>As you can see by running this script, the projections onto the principal axes are the same; this means we have resolved the issue of the PCA results not being unique.</p>
<hr />
<p><strong>Reply to EDIT 3</strong></p>
<p>As for the new issue you raised, I think you missed an important point: it is the projections of the cloud of points onto the principal axes that are invariant, not anything else. Therefore, if you rotate <code>voxel1</code> and obtain <code>voxel2</code>, they are the same in the sense that their own respective projections onto the principal axes of the cloud of points are the same; it actually does not make much sense to compare <code>pca(voxel1,basis1)</code> with <code>pca(voxel2,basis1)</code>.</p>
<p>Furthermore, the method <code>rotate</code> of <code>scipy.ndimage</code> actually changes information, as you can see by running this script</p>
<pre><code>import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt

image1 = np.linspace(1,100,100).reshape(10,10)
image2 = scipy.ndimage.rotate(image1, 45, mode='nearest', axes=(0, 1), reshape=False)
image3 = scipy.ndimage.rotate(image2, -45, mode='nearest', axes=(0, 1), reshape=False)

fig,ax = plt.subplots(nrows=1,ncols=3,figsize=(12,4))
ax[0].imshow(image1)
ax[1].imshow(image2)
ax[2].imshow(image3)
</code></pre>
<p>The output image is</p>
<p>As you can see, the matrix after rotation is not the same as the original one; some information in the original matrix is changed.</p>
<p><a href="https://i.stack.imgur.com/OItNv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OItNv.jpg" alt="output" /></a></p>
<hr />
<p><strong>Reply to EDIT 4</strong></p>
<p>Actually, we are almost there; the two PCA results are different because we are comparing PCA components for different points.</p>
<pre><code>indices = np.arange(27)
indices3d = indices.reshape((3,3,3))

# apply rotations to the indices, it is not weights yet
gen = rotations24(indices3d)

# construct the weights
voxel10 = np.random.normal(size=(3,3,3))

res = []
count = 0
for ind2 in gen:
    count += 1

    # ind2 is the array of indices after rotation
    # reindex the weights with the indices after rotation
    voxel1 = voxel10.copy().reshape(27,1)
    voxel2 = voxel1[ind2.reshape(27,)]

    # basis1 is the array of coordinates where the points are
    basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double)
    basis1 += 1e-4*np.random.normal(size=(27, 1))
    # reindex the coordinates with the indices after rotation
    basis2 = basis1[ind2.reshape(27,)]

    # add a slight modification to pca, return the axes also
    pca1,v1 = pca(voxel1,basis1)
    pca2,v2 = pca(voxel2,basis2)

    # sort the principal components before comparing them
    pca1 = np.sort(pca1,axis=0)
    pca2 = np.sort(pca2,axis=0)

    gg = np.abs(pca1 - pca2)
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
<p>Running this script, you will find, for each rotation, the two sets of principal axes are different only up to a minus sign. The two sets of pca results are different because the indices of the cloud of points before and after rotation are different (since you apply rotation to the indices). If you sort the pca results before comparing them, you will find the two pca results are exactly the same.</p>
<hr />
<p><strong>Summary</strong></p>
<p>The answer to this question can be divided into two parts. In the first part, the rotation is applied to the basis (the coordinates of points), while the indices and the corresponding weights are unchanged. In the second part, the rotation is applied to the indices, then the weights and the basis are rearranged with the new indices. For both of the two parts, the solution <code>pca</code> function is the same</p>
<pre><code>def pca(feat, x):
    # pca with attempt to create convention on sign changes
    x_c = x - np.mean(x, axis=0)
    x_f = feat * x
    x_f -= np.mean(x_f, axis=0)
    cov = np.cov(x_f.T)
    e_values, e_vectors = np.linalg.eig(cov)
    order = np.argsort(e_values)[::-1]
    v = e_vectors[:,order]
    # here is the solution, we switch the sign of the projections
    # so that the projection with the largest absolute value along a principal axis is positive
    proj = x_f @ v
    asign = np.sign(proj)
    max_ind = np.argmax(np.abs(proj),axis=0)[None,:]
    sign = np.take_along_axis(asign,max_ind,axis=0)
    proj = proj * sign
    return proj
</code></pre>
<p>The idea of this function is, instead of matching the principal axes, we can match the principal components since it is the principal components that are rotationally invariant after all.</p>
<p>Based on this function <code>pca</code>, the first part of this answer is easy to understand, since the indices of the points are unchanged while we only rotate the basis. In order to understand the second part of this answer (<strong>Reply to EDIT 5</strong>), we must first understand the function <code>rotations24</code>. This function rotates the indices rather than the coordinates of the points, therefore, if we stay at the same position observing the points, we will feel that the positions of the points are changed.</p>
<p><a href="https://i.stack.imgur.com/ULUKim.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ULUKim.jpg" alt="plot" /></a></p>
<p>With this in mind, it is not hard to understand <strong>Reply to EDIT 5</strong>.</p>
<p>Actually, the function <code>pca</code> in this answer can be applied to more general cases, for example (we rotate the indices)</p>
<pre><code>num_of_points_per_dim = 10
num_of_points = num_of_points_per_dim ** 3

indices = np.arange(num_of_points)
indices3d = indices.reshape((num_of_points_per_dim,num_of_points_per_dim,num_of_points_per_dim))
voxel10 = 100*np.random.normal(size=(num_of_points_per_dim,num_of_points_per_dim,num_of_points_per_dim))

gen = rotations24(indices3d)

res = []
for ind2 in gen:
    voxel1 = voxel10.copy().reshape(num_of_points,1)
    voxel2 = voxel1[ind2.reshape(num_of_points,)]

    basis1 = 100*np.random.rand(num_of_points,3)
    basis2 = basis1[ind2.reshape(num_of_points,)]

    pc1 = np.sort(pca(voxel1, basis1),axis=0)
    pc2 = np.sort(pca(voxel2, basis2),axis=0)

    gg = np.abs(pc1-pc2)
    ss = np.sum(np.ravel(gg))
    bl = np.all(gg<1e-4)
    print('difference after pca %.3f' % ss, bl)
    res.append(bl)
    del basis1, basis2

print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res))))
</code></pre>
|
python|numpy|math|pca|voxel
| 4
|
4,453
| 64,577,657
|
Appending values to an array for every iteration
|
<p>I am trying to store the sub-arrays inside my test array in two other arrays.</p>
<pre><code>train_idx = np.zeros(3, dtype='object')
test_idx = np.zeros(3, dtype='object')

k = 3  # number of splits
array = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1])
test = np.array(np.array_split(array, k))
# [array([1, 0, 0, 1, 0]) array([1, 1, 1, 0]) array([1, 0, 0, 1])]
</code></pre>
<p>For the arrays in test, I am trying to store each array in a different array (train_index and test_index).</p>
<pre><code>for i in range(len(test)):
    test_index = test[i]
    for j in range(len(test)):
        if i == j:
            continue
        train_index = test[j]
</code></pre>
<p>The issue with this is that it stores only the last values in both train_index and test_index:</p>
<pre><code>print(train_index)
print(test_index)
[1 1 1 0]
[1 0 0 1]
</code></pre>
<p>Whereas the full output can be see below:</p>
<pre><code>[1 0 0 1 0]
[1 1 1 0]
[1 0 0 1]
[1 1 1 0]
[1 0 0 1 0]
[1 0 0 1]
[1 0 0 1]
[1 0 0 1 0]
[1 1 1 0]
</code></pre>
<p>What I in fact want to do is append these values to their respective arrays; the output I'm trying to achieve is the following:</p>
<pre><code>test_index = [[1 0 0 1 0], [1 1 1 0], [1 0 0 1]]
train_index = [[[1 1 1 0], [1 0 0 1]], [[1 1 1 0], [1 0 0 1]], [[1 0 0 1], [1 1 1 0]]]
</code></pre>
|
<p>Define test_index and train_index as empty lists:</p>
<pre><code>test_index = []
</code></pre>
<p>Then use the append method inside your loops:</p>
<pre><code>test_index.append(test[i])
</code></pre>
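<p>A minimal sketch putting both pieces together to produce the exact output you describe:</p>
<pre><code>test_index = []
train_index = []
for i in range(len(test)):
    test_index.append(test[i])
    train_index.append([test[j] for j in range(len(test)) if j != i])

# test_index  -> [array([1, 0, 0, 1, 0]), array([1, 1, 1, 0]), array([1, 0, 0, 1])]
# train_index -> for each fold, the list of the two remaining splits
</code></pre>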
|
python|arrays|numpy
| 0
|
4,454
| 64,530,885
|
TFX Pipeline Error While Executing TFMA: AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'
|
<p>Basically I only reused code from <a href="https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_utils_native_keras.py" rel="nofollow noreferrer">iris utils</a> and <a href="https://github.com/tensorflow/tfx/blob/master/tfx/examples/iris/iris_pipeline_native_keras.py" rel="nofollow noreferrer">iris pipeline</a> with a minor change to the serving input:</p>
<pre class="lang-py prettyprint-override"><code>def _get_serve_tf_examples_fn(model, tf_transform_output):
    model.tft_layer = tf_transform_output.transform_features_layer()
    feature_spec = tf_transform_output.raw_feature_spec()
    print(feature_spec)
    feature_spec.pop(_LABEL_KEY)

    @tf.function
    def serve_tf_examples_fn(*args):
        parsed_features = {}
        for arg in args:
            parsed_features[arg.name.split(":")[0]] = arg
        print(parsed_features)
        transformed_features = model.tft_layer(parsed_features)
        return model(transformed_features)

    # this return is needed for get_concrete_function below
    return serve_tf_examples_fn

def run_fn(fn_args: TrainerFnArgs):
    ...
    feature_spec = tf_transform_output.raw_feature_spec()
    feature_spec.pop(_LABEL_KEY)
    inputs = [tf.TensorSpec(
        shape=[None, 1],
        dtype=feature_spec[f].dtype,
        name=f) for f in feature_spec]

    signatures = {
        'serving_default':
            _get_serve_tf_examples_fn(model, tf_transform_output).get_concrete_function(*inputs),
    }
    model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
</code></pre>
<p>The original get_concrete_function() input in the iris code is a single TensorSpec with dtype string. I already tried serving the model with that exact input, but when I tested the REST API I got a parsing error. So I changed the serving input so it can receive JSON input like this:</p>
<pre><code>{"instances": [{"feat1": 90, "feat2": 23.8, "feat3": 12}]}
</code></pre>
<p>When I run the pipeline, training succeeds, but an error occurs when the evaluator component runs. These are the latest logs:</p>
<pre><code>INFO:absl:Using ./tfx/pipelines/toilet_native_keras/Trainer/model/67/serving_model_dir as candidate model.
INFO:absl:Using ./tfx/pipelines/toilet_native_keras/Trainer/model/14/serving_model_dir as baseline model.
INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.
INFO:absl:Evaluating model.
INFO:absl:We decided to produce LargeList and LargeBinary types.
WARNING:tensorflow:5 out of the last 5 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7f0e44560> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.WARNING:tensorflow:6 out of the last 6 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7c77f8a70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
...
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1213, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 570, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/model_util.py", line 466, in process
result = self._batch_reducible_process(element)
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/batched_predict_extractor_v2.py", line 164, in _batch_reducible_process
self._tensor_adapter.ToBatchTensors(record_batch), input_names)
AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 256, in _execute
response = task()
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 313, in <lambda>
lambda: self.create_worker().do_instruction(request), request)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 483, in do_instruction
getattr(request, request_type), request.instruction_id)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 518, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 983, in process_bundle
element.data)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 219, in process_encoded
self.output(decoded_value)
File "apache_beam/runners/worker/operations.py", line 330, in apache_beam.runners.worker.operations.Operation.output
...
File "apache_beam/runners/common.py", line 1294, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 446, in raise_with_traceback
raise exc.with_traceback(traceback)
File "apache_beam/runners/common.py", line 1213, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 570, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/model_util.py", line 466, in process
result = self._batch_reducible_process(element)
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/batched_predict_extractor_v2.py", line 164, in _batch_reducible_process
self._tensor_adapter.ToBatchTensors(record_batch), input_names)
AttributeError: 'NoneType' object has no attribute 'ToBatchTensors' [while running 'ExtractEvaluateAndWriteResults/ExtractAndEvaluate/ExtractBatchPredictions/Predict']
...
WARNING:tensorflow:7 out of the last 7 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7f0273050> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.WARNING:tensorflow:8 out of the last 8 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7c77fc170> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes arg
</code></pre>
<p>I don't think the evaluator component has anything to do with the serving input function, as it just compares the newly trained model with the latest published model, so where did I go wrong?</p>
|
<p>So in the end I was mistaken about the evaluator component, or more precisely about TFMA: it does use the serving input function defined in the serving signatures. According to <a href="https://www.tensorflow.org/tfx/guide/evaluator" rel="nofollow noreferrer">this link</a>, the default signature used by the TFMA EvalConfig is "serving_default", which describes the serving model input as serialized examples. That's why, when I changed the input signature to anything other than a string, TFMA raised an exception.</p>
<p>I think this signature is not meant to be used for serving the model through a REST API. Because the "serving_default" signature is still needed and I didn't want to tinker with the EvalConfig, I created another signature that receives the JSON input I want. For that to work, I needed to write another function decorated with @tf.function. That's all; I hope my answer helps people who struggle with similar problems.</p>
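<p>For anyone stuck in the same place, below is a rough sketch of what the dual-signature export can look like. This is an illustration, not my exact code: the feature names (feat1/feat2/feat3) just mirror the JSON payload above, and the shapes and the <code>model.tft_layer</code> attribute are assumptions borrowed from the common TFX tutorial pattern, so adjust them to your own Transform output.</p>
<pre><code># Sketch: keep 'serving_default' string-based so TFMA still works,
# and add a second signature that accepts parsed JSON features.
@tf.function(input_signature=[{
    'feat1': tf.TensorSpec(shape=[None], dtype=tf.float32, name='feat1'),
    'feat2': tf.TensorSpec(shape=[None], dtype=tf.float32, name='feat2'),
    'feat3': tf.TensorSpec(shape=[None], dtype=tf.float32, name='feat3'),
}])
def serve_json_fn(features):
    transformed = model.tft_layer(features)  # assumes the Transform layer is attached to the model
    return model(transformed)

signatures = {
    'serving_default':
        _get_serve_tf_examples_fn(model, tf_transform_output).get_concrete_function(*inputs),
    'serving_json': serve_json_fn,
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
</code></pre>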
|
python-3.x|tensorflow|tensorflow-serving|tensorflow-model-analysis
| 1
|
4,455
| 47,643,264
|
Pandas rank method dense but skip a number
|
<p>I have a sample data set that I'm trying to rank based on the values in the column 'HP':</p>
<pre><code>import pandas as pd
d = {
'unit': ['UD', 'UD', 'UD' ,'UC','UC', 'UC','UA','UA','UA','UB','UB','UB'],
'N-D': [ 'C1', 'C2', 'C3','Q1', 'Q2', 'Q3','D1','D2','D3','E1','E2','E3'],
'HP': [24, 24, 24,7,7,7,7,7,7,5,5,5]
}
df = pd.DataFrame(d)
df['rank']=df['HP'].rank(ascending=False, method='dense')
df
</code></pre>
<p>It looks like this:</p>
<pre><code> HP N-D unit rank
0 24 C1 UD 1.0
1 24 C2 UD 1.0
2 24 C3 UD 1.0
3 7 Q1 UC 2.0
4 7 Q2 UC 2.0
5 7 Q3 UC 2.0
6 7 D1 UA 2.0
7 7 D2 UA 2.0
8 7 D3 UA 2.0
9 5 E1 UB 3.0
10 5 E2 UB 3.0
11 5 E3 UB 3.0
</code></pre>
<p>The 'HP' column is calculated from other columns (I won't show that here, but it's necessary in my real dataset).</p>
<p>I also tried <code>method='min'</code>, but the outcome looks like this:</p>
<pre><code> HP N-D unit rank
0 24 C1 UD 1.0
1 24 C2 UD 1.0
2 24 C3 UD 1.0
3 7 Q1 UC 4.0
4 7 Q2 UC 4.0
5 7 Q3 UC 4.0
6 7 D1 UA 4.0
7 7 D2 UA 4.0
8 7 D3 UA 4.0
9 5 E1 UB 10.0
10 5 E2 UB 10.0
11 5 E3 UB 10.0
</code></pre>
<p><strong>Units 'UC' and 'UA' tie for 2nd rank; what I'm looking for is for the next rank, unit 'UB', to be '4' instead of '3':</strong></p>
<pre><code> HP N-D unit rank
0 24 C1 UD 1.0
1 24 C2 UD 1.0
2 24 C3 UD 1.0
3 7 Q1 UC 2.0
4 7 Q2 UC 2.0
5 7 Q3 UC 2.0
6 7 D1 UA 2.0
7 7 D2 UA 2.0
8 7 D3 UA 2.0
9 5 E1 UB 4.0
10 5 E2 UB 4.0
11 5 E3 UB 4.0
</code></pre>
|
<p>Use a combination of <code>groupby</code> and <code>sort_values</code></p>
<pre><code>g = df.sort_values(
['HP', 'unit'], ascending=False
).groupby(['HP', 'unit'], sort=False)
df.assign(rank=g.ngroup().add(1).groupby(df.HP).transform('first'))
HP N-D unit rank
0 24 C1 UD 1
1 24 C2 UD 1
2 24 C3 UD 1
3 7 Q1 UC 2
4 7 Q2 UC 2
5 7 Q3 UC 2
6 7 D1 UA 2
7 7 D2 UA 2
8 7 D3 UA 2
9 5 E1 UB 4
10 5 E2 UB 4
11 5 E3 UB 4
</code></pre>
<hr>
<p>Another way using <code>nunique</code> and <code>map</code></p>
<pre><code>df.assign(
rank=df.HP.map(
df.sort_values(
['HP', 'unit'], ascending=False
).groupby(
'HP', sort=False
).unit.nunique().shift().fillna(1).cumsum())
)
HP N-D unit rank
0 24 C1 UD 1.0
1 24 C2 UD 1.0
2 24 C3 UD 1.0
3 7 Q1 UC 2.0
4 7 Q2 UC 2.0
5 7 Q3 UC 2.0
6 7 D1 UA 2.0
7 7 D2 UA 2.0
8 7 D3 UA 2.0
9 5 E1 UB 4.0
10 5 E2 UB 4.0
11 5 E3 UB 4.0
</code></pre>
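<p>If every unit always carries a single HP value, as in the sample, a simpler sketch is to <code>min</code>-rank the de-duplicated units and map the result back. Ranking per unit rather than per row means ties are counted once per unit, which gives exactly the skip you want:</p>
<pre><code>per_unit = df.drop_duplicates('unit').set_index('unit')['HP']
df['rank'] = df['unit'].map(per_unit.rank(ascending=False, method='min'))
</code></pre>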
|
python|pandas|rank
| 5
|
4,456
| 47,592,512
|
Adding column in pandas with several conditions based on other columns in dataframe
|
<p>Firstly, I apologise if this is already somewhere on StackOverflow, I searched for an hour after experimenting myself for an hour and couldn't find it. I'm sure there must be an elegant (and probably elementary) solution.</p>
<p>I have the following data frame:</p>
<pre><code> Admit Gender Dept Freq
0 Admitted Male A 512
1 Rejected Male A 313
2 Admitted Female A 89
3 Rejected Female A 19
4 Admitted Male B 353
5 Rejected Male B 207
6 Admitted Female B 17
7 Rejected Female B 8
8 Admitted Male C 120
9 Rejected Male C 205
10 Admitted Female C 202
11 Rejected Female C 391
12 Admitted Male D 138
13 Rejected Male D 279
14 Admitted Female D 131
15 Rejected Female D 244
16 Admitted Male E 53
17 Rejected Male E 138
18 Admitted Female E 94
19 Rejected Female E 299
20 Admitted Male F 22
21 Rejected Male F 351
22 Admitted Female F 24
23 Rejected Female F 317
</code></pre>
<p>And I want to add a column 'Proportion' which gives the proportion of successful / failed applicants by gender to each department.</p>
<p>So that:</p>
<pre><code>df.loc[0, 'Proportion'] = 512/(512+313) = 0.6206
df.loc[1, 'Proportion'] = 313/(512+313) = 0.3794
...
</code></pre>
<p>and so on.</p>
<p>I tried to start off by adding a 'total' column using variations of:</p>
<pre><code>data.groupby(['Dept', 'Gender'])[['Freq']].sum()
</code></pre>
<p>but I can't seem to look up the values of this dataframe by the values in each row of the original dataframe.</p>
<p>I have also tried using lambda functions, but I get the 'function is not iterable' error.</p>
<p>I suppose one could loop over it row by row as it is a small dataset, but in the future when I need to do things like this that won't be an option.</p>
<p>Please help out a novice and aspiring Data Scientist.</p>
|
<p>You can divide column by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.div.html" rel="nofollow noreferrer"><code>div</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> for Series with same size as original <code>DataFrame</code>:</p>
<pre><code>data['new'] = data['Freq'].div(data.groupby(['Dept', 'Gender'])['Freq'].transform('sum'))
</code></pre>
<p>Or use <code>apply</code> with custom function:</p>
<pre><code>data['new'] = data.groupby(['Dept', 'Gender'])['Freq'].apply(lambda x: x/x.sum())
</code></pre>
<hr>
<pre><code>print (data)
Admit Gender Dept Freq new
0 Admitted Male A 512 0.620606
1 Rejected Male A 313 0.379394
2 Admitted Female A 89 0.824074
3 Rejected Female A 19 0.175926
4 Admitted Male B 353 0.630357
5 Rejected Male B 207 0.369643
6 Admitted Female B 17 0.680000
7 Rejected Female B 8 0.320000
8 Admitted Male C 120 0.369231
9 Rejected Male C 205 0.630769
10 Admitted Female C 202 0.340641
11 Rejected Female C 391 0.659359
12 Admitted Male D 138 0.330935
13 Rejected Male D 279 0.669065
14 Admitted Female D 131 0.349333
15 Rejected Female D 244 0.650667
16 Admitted Male E 53 0.277487
17 Rejected Male E 138 0.722513
18 Admitted Female E 94 0.239186
19 Rejected Female E 299 0.760814
20 Admitted Male F 22 0.058981
21 Rejected Male F 351 0.941019
22 Admitted Female F 24 0.070381
23 Rejected Female F 317 0.929619
</code></pre>
|
python|pandas|dataframe|conditional|match
| 1
|
4,457
| 49,045,210
|
How to check whether an index in a TensorArray has been initialized?
|
<p>Is it in any way possible to check whether an index in a TensorArray has been initialized?</p>
<p>As I understand it, TensorArrays can't be initialized with default values. However, I need a way to increment the number at a given index, which I try to do by reading it, adding one, and writing the result back to the same index. If the index is not initialized, this fails because an uninitialized index cannot be read.</p>
<p>So is there a way to check if it has been initialized and otherwise write a zero to initialize it?</p>
|
<p>The only option I see is an initialization pass that sets every index to 0. This eliminates the problem, though it may not be the ideal way.</p>
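<p>A minimal sketch of that idea, writing zeros to every index in one shot with <code>unstack</code> instead of an explicit loop:</p>
<pre><code>import tensorflow as tf

size = 10
ta = tf.TensorArray(dtype=tf.float32, size=size, clear_after_read=False)
ta = ta.unstack(tf.zeros([size]))  # every index now holds 0.0
value = ta.read(3)                 # safe: no uninitialized-read error
</code></pre>
<p>The <code>clear_after_read=False</code> flag keeps values readable more than once, which you need for the read-increment-write pattern.</p>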
|
tensorflow|python-3.5
| 0
|
4,458
| 48,926,511
|
TensorFlow Assign
|
<p>I am trying to write a custom version of an RNN and would like to just store the state and last output of the cells in variables, but it is not working. My guess is that TensorFlow sees the storing of the values as unnecessary and does not execute it. Here is a snippet that illustrates the problem.</p>
<p>For this example, I have five layers of "cells" that intentionally ignore the input and output the sum of the biases for the cell and the previous output, which is initialized to zero. However, as we run this, the output of the network is always just the values of the biases in the final layer and the value of <code>last_output</code> remains zero.</p>
<pre><code>import tensorflow as tf
import numpy as np
def cell_function(cell_inputs, layer):
last_output = tf.get_variable('last_output_{}'.format(layer), shape=(10, 1),
initializer=tf.zeros_initializer, trainable=False)
biases = tf.get_variable('biases_{}'.format(layer), shape=(10, 1),
initializer=tf.zeros_initializer)
cell_output = last_output + biases
last_output.assign(cell_output)
return cell_output
def rnn_function(inputs):
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
next_inputs = inputs
for layer in range(num_layers):
next_inputs = cell_function(next_inputs, layer)
return next_inputs
num_layers = 5
data = np.random.uniform(0, 10, size=(1001, 10, 1))
x = tf.placeholder('float', shape=(10, 1))
y = tf.placeholder('float', shape=(10, 1))
predictions = rnn_function(x)
loss = tf.losses.mean_squared_error(predictions=predictions, labels=y)
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss)
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
last = tf.get_variable('last_output_4', shape=(10, 1),
initializer=tf.zeros_initializer, trainable=False)
layer_biases = tf.get_variable('biases_4', shape=(10, 1),
initializer=tf.zeros_initializer)
with tf.Session() as sess:
tf.global_variables_initializer().run()
for t in range(1000):
rnn_input = data[t]
rnn_output = data[t+1]
feed_dict = {x: rnn_input, y: rnn_output}
        fetches = [optimizer, predictions, loss, last, layer_biases]
_, pred, mse, value, bias = sess.run(fetches, feed_dict=feed_dict)
print('Predictions:')
    print(pred)
print(last.name)
print(value)
print(layer_biases.name)
print(bias)
</code></pre>
<p>If I change the last line of <code>cell_function</code> before the return to <code>last_output = tf.assign(last_output, cell_output)</code>, return it along with <code>cell_output</code>, return it again out of <code>rnn_function</code>, and use that for the variable <code>last</code>, everything works. I think it is because we are forcing TensorFlow to compute that node in the graph.</p>
<p>Is there any way to make this work without passing <code>last_output</code> out of the cell? It would be much nicer if I didn't have to keep passing all this stuff out to get the assignment operation to be executed.</p>
|
<p>Make the operation that will be run depend on the assignment. In this example I'll hook it to the cost function, but use whatever makes sense:</p>
<pre><code>assign_op = tf.assign(last_output, cell_output)
with tf.control_dependencies([assign_op]):
    cost = tf.identity(cost)
</code></pre>
<p>Note that <code>tf.control_dependencies</code> takes a list, and the dependency must point in this direction: the new <code>cost</code> op created inside the block cannot run until the assign has run. Now the assign operation is required in order for <code>cost</code> to be computed, which should solve your problem. For any operation you request tensorflow to compute with <code>sess.run(some_op)</code>, tensorflow works backwards through the dependency graph and only computes the minimum set of elements necessary to produce the requested output.</p>
|
python|tensorflow|assign
| 0
|
4,459
| 49,330,080
|
NumPy 2D array: selecting indices in a circle
|
<p>For some rectangular we can select all indices in a 2D array very efficiently:</p>
<pre><code>arr[y:y+height, x:x+width]
</code></pre>
<p>...where <code>(x, y)</code> is the upper-left corner of the rectangle and <code>height</code> and <code>width</code> the height (number of rows) and width (number of columns) of the rectangular selection.</p>
<p>Now, let's say we want to select all indices in a 2D array located in a certain circle given center coordinates <code>(cx, cy)</code> and radius <code>r</code>. Is there a numpy function to achieve this efficiently?</p>
<p>Currently I am pre-computing the indices manually with a Python loop that appends indices to a buffer (list). This is pretty inefficient for large 2D arrays, since I need to enqueue every integer point lying inside the circle.</p>
<pre><code># buffer for x & y indices
indices_x = list()
indices_y = list()
# lower and upper index range
x_lower, x_upper = int(max(cx-r, 0)), int(min(cx+r, arr.shape[1]-1))
y_lower, y_upper = int(max(cy-r, 0)), int(min(cy+r, arr.shape[0]-1))
range_x = range(x_lower, x_upper)
range_y = range(y_lower, y_upper)
# loop over all indices
for y, x in product(range_y, range_x):
# check if point lies within radius r
if (x-cx)**2 + (y-cy)**2 < r**2:
indices_y.append(y)
indices_x.append(x)
# circle indexing
arr[(indices_y, indices_x)]
</code></pre>
<p>As mentioned, this procedure gets quite inefficient for larger arrays / circles. Any ideas for speeding things up?</p>
<p>If there is a better way to index a circle, does this also apply for "arbitrary" 2D shapes? For example, could I somehow pass a function that expresses membership of points for an arbitrary shape to get the corresponding numpy indices of an array?</p>
|
<p>You could define a mask that contains the circle. Below, I have demonstrated it for a circle, but you could write any arbitrary function in the <code>mask</code> assignment. The field <code>mask</code> has the dimensions of <code>arr</code> and has the value <code>True</code> if the condition on the righthand side is satisfied, and <code>False</code> otherwise. This mask can be used in combination with the indexing operator to assign to only a selection of indices, as the line <code>arr[mask] = 123.</code> demonstrates.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 32)
y = np.arange(0, 32)
arr = np.zeros((y.size, x.size))
cx = 12.
cy = 16.
r = 5.
# The two lines below could be merged, but I stored the mask
# for code clarity.
mask = (x[np.newaxis,:]-cx)**2 + (y[:,np.newaxis]-cy)**2 < r**2
arr[mask] = 123.
# This plot shows that only within the circle the value is set to 123.
plt.figure(figsize=(6, 6))
plt.pcolormesh(x, y, arr)
plt.colorbar()
plt.show()
</code></pre>
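<p>The same trick covers the arbitrary-shape part of your question: any function that evaluates membership on the broadcast grid returns a usable boolean mask. As an illustration, a hypothetical axis-aligned ellipse test:</p>
<pre><code>def in_ellipse(x, y, cx, cy, rx, ry):
    # broadcast x (1, nx) against y (ny, 1) to test every grid point at once
    return ((x[np.newaxis, :] - cx) / rx)**2 + ((y[:, np.newaxis] - cy) / ry)**2 < 1

arr[in_ellipse(x, y, cx=12., cy=16., rx=7., ry=4.)] = 55.
</code></pre>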
|
python|arrays|numpy|indexing
| 16
|
4,460
| 58,901,094
|
Select corresponding values to an instant t in columns
|
<p>I work in Python and I have a pandas DataFrame with an evolution of steps across different months:</p>
<pre><code>+----+-------+------+
| Id | Month | Step |
+----+-------+------+
| a | 1 | a_1 |
| a | 4 | a_2 |
| a | 6 | a_3 |
| b | 1 | a_1 |
| b | 2 | a_4 |
+----+-------+------+
</code></pre>
<p>I want the evolution of steps for each month as columns, like this table:</p>
<pre><code>+----+---------+----------+---------+---------+---------+---------+
| Id | Month_1 | Month_2 | Month_3 | Month_4 | Month_5 | Month_6 |
+----+---------+----------+---------+---------+---------+---------+
| a | a_1 | a_1 | a_1 | a_2 | a_2 | a_3 |
| b | a_1 | a_4 | a_4 | a_4 | a_4 | a_4 |
+----+---------+----------+---------+---------+---------+---------+
</code></pre>
<p>I can't find a simple solution, so if someone has one, I'll take it!</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a>:</p>
<pre><code>new_df = (df.pivot_table(index='Id', columns='Month', values='Step', aggfunc=''.join)
            .add_prefix('Month_')
            .rename_axis(columns=None))
print(new_df)
Month_1 Month_2 Month_4 Month_6
Id
a a_1 NaN a_2 a_3
b a_1 a_4 NaN NaN
</code></pre>
<hr>
<p>If you want the full range of months to appear, forward-filled, use:</p>
<pre><code>new_df=( df.pivot_table(index='Id',columns='Month',values='Step',aggfunc=''.join)
.reindex(columns=range(df['Month'].min(),df['Month'].max()+1))
.ffill(axis=1)
.add_prefix('Month_')
.rename_axis(columns=None)
.reset_index())
print(new_df)
Id Month_1 Month_2 Month_3 Month_4 Month_5 Month_6
0 a a_1 a_1 a_1 a_2 a_2 a_3
1 b a_1 a_4 a_4 a_4 a_4 a_4
</code></pre>
<hr>
<p>If you don't want the forward filling, remove the <code>ffill</code> call.</p>
|
python|pandas
| 1
|
4,461
| 58,743,756
|
How to read images from a list
|
<p>Hi, I have a set of images stored as list items and I want to read the images one by one and perform some operations on them. I cannot figure out how to iterate through each image in the list. I can read images explicitly from a folder using <code>cv2.imread</code>, but I want to make use of the list elements in which they are already stored.</p>
<p>I am trying to read the aligned images which I have stored in a list element "align". The subroutine I have used for image alignment is this:</p>
<pre><code>def stackImagesECC(file_list):
M = np.eye(3, 3, dtype=np.float32)
first_image = None
stacked_image = None
align = []
for file in file_list:
image = cv2.imread(file,1).astype(np.float32) / 255
print(file)
if first_image is None:
# convert to gray scale floating point image
first_image = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
stacked_image = image
else:
# Estimate perspective transform
s, M = cv2.findTransformECC(cv2.cvtColor(image,cv2.COLOR_BGR2GRAY), first_image, M, cv2.MOTION_HOMOGRAPHY)
w, h, _ = image.shape
# Align image to first image
image = cv2.warpPerspective(image, M, (h, w))
align.append(image)
stacked_image += image
# cv2.imwrite("aligned{}/aligned{}.png".format(file), image)
cv2.imshow("aligned", image)
# cv2.imwrite("output/aligned/",image)
cv2.waitKey(0)
stacked_image /= len(file_list)
stacked_image = (stacked_image*255).astype(np.uint8)
return align
</code></pre>
<p>And then I called this function using:</p>
<pre><code>align = stackImagesECC(glob.glob(path))
</code></pre>
<p>Now to perform some functions on this I am trying to read these files from the align variable.</p>
<pre><code>#function to detect edges in images
def auto_canny(image, sigma=0.33):
# Compute the median of the single channel pixel intensities
img = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
v = np.median(image)
# Apply automatic Canny edge detection using the computed median
lower = int(max(0, (1.0 - sigma) * v))
upper = int(min(255, (1.0 + sigma) * v))
return cv2.Canny(image, lower, upper)
</code></pre>
<p>This is the edge detection subroutine for which I want to read the aligned images:</p>
<pre><code>for file in range(0,len(align)):
img = cv2.imread(file)
</code></pre>
<p>Can anyone suggest what I am doing wrong? Thanks in advance!</p>
|
<p>A <code>list</code> is itself iterable; just iterate over it directly.</p>
<pre><code>for file in align:
# code here...
</code></pre>
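<p>One caveat for your concrete case: <code>stackImagesECC</code> stores float images scaled to [0, 1], while <code>cv2.Canny</code> expects 8-bit input, so a sketch of the loop might look like this:</p>
<pre><code>edges = []
for i, image in enumerate(align):
    img8 = (image * 255).astype(np.uint8)  # Canny needs uint8, not float32
    edges.append(auto_canny(img8))
    # cv2.imwrite('output/edges_{}.png'.format(i), edges[-1])
</code></pre>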
|
python|list|numpy|opencv|image-processing
| 0
|
4,462
| 58,936,694
|
Is it possible to apply a function to one column when reading a data file?
|
<p>Is there a way to directly apply a Series operation (built-in or custom) when building a dataframe from a file (in a pythonic way)?</p>
<p>I would like to change the following:</p>
<pre><code># import data frame containing a custom timestamp column (ex: _2019_11_19_15_10_35_)
df1 = pd.read_csv('mydatafile.csv').assign(newcol='newval')
df1['Timestamp'] = pd.to_datetime(df1['Timestamp'], format='_%Y_%m_%d_%H_%M_%S_')
</code></pre>
<p>in something like:</p>
<pre><code>df1 = pd.read_csv('mydatafile.csv').assign(newcol='newval').to_datetime(df1['Timestamp'], format='_%Y_%m_%d_%H_%M_%S_')
</code></pre>
<p>I tried also:</p>
<pre><code>df1 = pd.read_csv('mydatafile.csv').assign(newcol='newval').apply(lambda x: pd.to_datetime(df1['Timestamp'], format='_%Y_%m_%d_%H_%M_%S_') if x.name=='Timestamp' else x)
</code></pre>
|
<p>Well you can assign another Timestamp column, erasing the previous one:</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.read_csv('mydatafile.csv').assign(
newcol='newval',
Timestamp=lambda df: pd.to_datetime(df['Timestamp'], format='_%Y_%m_%d_%H_%M_%S_'))
</code></pre>
|
python|pandas|dataframe|series
| 1
|
4,463
| 70,059,491
|
bumping to good business day when generating range
|
<p>I am using pandas pd.bdate_range() to generate a range of dates given a start and end, but it seems to not work as expected.</p>
<p>What I am ultimately after is quarterly dates over a start and end date, but I want the dates to be valid business days.</p>
<pre><code>start = '2015-06-01'
end = '2019-06-01'
dates = pd.bdate_range(start,end,freq='MS')[::3]
</code></pre>
<p>Unfortunately, this includes 2018-09-01, which is a Saturday.</p>
<p>is there a more foolproof way to get an index of only business days, <strong>also taking account USFederalHolidayCalendar()</strong>?</p>
|
<p>You can take your existing index and roll each date forward to the next business day like so:</p>
<pre class="lang-py prettyprint-override"><code>from pandas.tseries.offsets import BDay
start = '2015-06-01'
end = '2019-06-01'
dates = pd.bdate_range(start,end,freq='MS')[::3]
new_dates = dates.map(lambda x : x + 0*BDay())
</code></pre>
<p>Or you can pass <code>BMS</code> (business month start) to the <code>freq</code> keyword like so:</p>
<pre><code>start = '2015-06-01'
end = '2019-06-01'
dates = pd.bdate_range(start,end, freq='BMS')[::3]
</code></pre>
<p>Both give this output</p>
<pre><code>DatetimeIndex(['2015-06-01', '2015-09-01', '2015-12-01', '2016-03-01',
'2016-06-01', '2016-09-01', '2016-12-01', '2017-03-01',
'2017-06-01', '2017-09-01', '2017-12-01', '2018-03-01',
'2018-06-01', '2018-09-03', '2018-12-03', '2019-03-01',
'2019-06-03'],
dtype='datetime64[ns]', freq=None)
</code></pre>
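<p>The question also asks about <code>USFederalHolidayCalendar()</code>, which neither variant above accounts for. A sketch using a custom month-begin offset (untested against every pandas version, so verify it on yours):</p>
<pre><code>from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessMonthBegin

freq = CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
dates = pd.date_range(start, end, freq=freq)[::3]
</code></pre>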
|
python-3.x|pandas|date
| 1
|
4,464
| 70,347,700
|
Generate DF from attributes of tags in list
|
<p>I have a list of revisions from a Wikipedia article that I queried like this:</p>
<pre><code>import urllib.request
import re
def getRevisions(wikititle):
url = "https://en.wikipedia.org/w/api.php?action=query&format=xml&prop=revisions&rvlimit=500&titles="+wikititle
revisions = [] #list of all accumulated revisions
next = '' #information for the next request
while True:
response = urllib.request.urlopen(url + next).read() #web request
response = str(response)
revisions += re.findall('<rev [^>]*>', response) #adds all revisions from the current request to the list
cont = re.search('<continue rvcontinue="([^"]+)"', response)
if not cont: #break the loop if 'continue' element missing
break
next = "&rvcontinue=" + cont.group(1) #gets the revision Id from which to start the next request
return revisions
</code></pre>
<p>Which results in a list with each element being a <code>rev</code> Tag as a string:</p>
<pre><code>['<rev revid="343143654" parentid="6546465" minor="" user="name" timestamp="2021-12-12T08:26:38Z" comment="abc" />',...]
</code></pre>
<p>How can I generate a DataFrame from this list?</p>
|
<p>An "easy" way without using regex would be splitting the string and then parsing:</p>
<pre><code>import pandas as pd

for rev_string in revisions:
    rev_dict = {}
    # Skip the first and last items; they are the '<rev' and '/>' parts of the tag.
    attributes = rev_string.split(' ')[1:-1]
    # Split each attribute on the first '=' and strip the surrounding ""
    for attribute in attributes:
        key, _, value = attribute.partition("=")
        rev_dict[key] = value.strip('"')
    df = pd.DataFrame([rev_dict])  # from_dict fails on all-scalar values without an index
</code></pre>
<p>This sample creates one dataframe per revision. If you would like to gather multiple revisions in one structure, you have to handle attributes that are not present in every revision (I don't know whether they vary between wiki documents) and, after gathering everything, convert to a single DataFrame, as in the sketch below.</p>
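<p>A sketch of that gathering step, reusing the same naive whitespace split (note the caveat: it breaks if an attribute value such as <code>comment</code> contains spaces, so an XML parser would be more robust):</p>
<pre><code>rows = []
for rev_string in revisions:
    attributes = rev_string.split(' ')[1:-1]
    row = {}
    for attribute in attributes:
        key, _, value = attribute.partition('=')
        row[key] = value.strip('"')
    rows.append(row)

df = pd.DataFrame(rows)  # attributes missing from a revision (e.g. 'minor') become NaN
</code></pre>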
|
python|python-3.x|pandas|wikipedia|mediawiki-api
| 2
|
4,465
| 70,088,916
|
Format Pandas date array with mixed formats
|
<p>I'm trying to unify dates in a column as they come in different formats; current date entries:</p>
<p>[... '18-Aug-21' '16-Aug-21' '17-Aug-21'
'22-Aug-21' '21-Aug-21' '20-Aug-21' '19-Aug-21' '23-Aug-21' '24-Aug-21'
'25-Aug-21' '28-Aug-21' '26-Aug-21' '27-Aug-21' '31-Aug-21' '30-Aug-21'
'29-Aug-21' '06 Sep 2021' '07 Sep 2021' '23 Sep 2021' '17 Sep 2021'
'18 Sep 2021' '30 Sep 2021' '11 Sep 2021' '12 Sep 2021' '20 Sep 2021'
'15 Sep 2021' '16 Sep 2021' '08 Sep 2021' '09 Sep 2021' '24 Sep 2021'
'25 Sep 2021' '03 Sep 2021' '10 Sep 2021' '19 Sep 2021' '01 Sep 2021'
'29 Sep 2021' '26 Sep 2021' '27 Sep 2021' '13 Sep 2021' '14 Sep 2021'
'02 Sep 2021' '04 Sep 2021' '05 Sep 2021' ...</p>
<p>#1: trying to replace the dash here doesn't work on all dates</p>
<p>#2: when the year is YY as in '6-Aug-21', how can I format it?</p>
<pre><code>for date in DF_all["SALES_DATE"]:
date = date.replace("-"," ")
DF_all["SALES_DATE"] = pd.to_datetime(DF_all["SALES_DATE"], format='%d
%b%Y', errors='ignore')
print(DF_all["SALES_DATE"].unique())
</code></pre>
<p>Output:</p>
<p>[...'18-Aug-21' '16-Aug-21' '17-Aug-21'
'22-Aug-21' '21-Aug-21' '20-Aug-21' '19-Aug-21' '23-Aug-21' '24-Aug-21'
'25-Aug-21' '28-Aug-21' '26-Aug-21' '27-Aug-21' '31-Aug-21' '30-Aug-21'
'29-Aug-21' '06 Sep 2021' '07 Sep 2021' '23 Sep 2021' '17 Sep 2021'
'18 Sep 2021' '30 Sep 2021' '11 Sep 2021' '12 Sep 2021' '20 Sep 2021'
'15 Sep 2021' '16 Sep 2021' '08 Sep 2021' '09 Sep 2021' '24 Sep 2021'
'25 Sep 2021' '03 Sep 2021' '10 Sep 2021' '19 Sep 2021' '01 Sep 2021'
'29 Sep 2021' '26 Sep 2021' '27 Sep 2021' '13 Sep 2021' '14 Sep 2021'
'02 Sep 2021' '04 Sep 2021' ...]</p>
<p>Is there a preferred method in python that solves this issue?</p>
|
<p>I recommend <a href="https://pypi.org/project/python-dateutil/" rel="nofollow noreferrer"><code>dateutil</code></a> for this:</p>
<pre class="lang-py prettyprint-override"><code>import dateutil
DF_all["SALES_DATE"] = DF_all["SALES_DATE"].apply(dateutil.parser.parse)
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>>>> DF_all
0 2021-08-18
1 2021-08-16
2 2021-08-17
3 2021-08-22
4 2021-08-21
5 2021-08-20
6 2021-08-19
7 2021-08-23
8 2021-08-24
9 2021-08-25
...
Name: 0, dtype: datetime64[ns]
</code></pre>
<p>You might need to install <code>dateutil</code> first. Run the following in a terminal:</p>
<pre><code>pip install python-dateutil
</code></pre>
<p>Or, in IPython or a Jupyter Notebook, run:</p>
<pre><code>!pip install python-dateutil
</code></pre>
|
python|pandas|dataframe|format
| 1
|
4,466
| 56,348,725
|
Pandas - Replace values based on index and not in index
|
<p>Here is the sample code.</p>
<pre><code>import pandas as pd, numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(10, 1)), columns=list('A'))
</code></pre>
<p>I have a list dl=[0,2,3,4,7]</p>
<p>At the index positions specified by the list, I would like column A to be "Yes".</p>
<p>The following code works</p>
<pre><code>df.loc[dl,'A']='Yes'
</code></pre>
<p>How do I fill column 'A' with 'No' for rows whose index is not in the list? Please forgive me if this is a duplicate post.</p>
|
<h3><code>np.where</code></h3>
<p>I'm making an assumption that there is a better way to do both <code>'Yes'</code> and <code>'No'</code> at the same time. If you truly just want to fill in the <code>'No'</code> after you've already got the <code>'Yes'</code> then refer to <a href="https://stackoverflow.com/a/56348904/2336654">Fatemehhh</a>'s answer</p>
<pre><code>df.loc[:, 'A'] = np.where(df.index.isin(dl), 'Yes', 'No')
</code></pre>
<hr>
<h3>Experimental Section</h3>
<p>Not meant for actual suggestions</p>
<pre><code>f = dl.__contains__
g = ['No', 'Yes'].__getitem__
df.loc[:, 'A'] = [*map(g, map(f, df.index))]
df
A
0 Yes
1 No
2 Yes
3 Yes
4 Yes
5 No
6 No
7 Yes
8 No
9 No
</code></pre>
|
python|pandas|dataframe
| 6
|
4,467
| 56,288,089
|
Unexpected behaviour from applying np.isin() on a pandas dataframe
|
<p>While working <a href="https://stackoverflow.com/a/56286723/565489">on an answer to another question</a>, I stumbled upon an unexpected behaviour:</p>
<p>Consider the following DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'A':list('AAcdef'),
'B':[4,5,4,5,5,4],
'E':[5,3,6,9,2,4],
'F':list('BaaBbA')
})
print(df)
</code></pre>
<pre class="lang-none prettyprint-override"><code> A B E F
0 A 4 5 B #<— row contains 'A' and 5
1 A 5 3 a #<— row contains 'A' and 5
2 c 4 6 a
3 d 5 9 B
4 e 5 2 b
5 f 4 4 A
</code></pre>
<p>If we try to find all columns that contain <code>['A', 5]</code>, we can use <a href="https://stackoverflow.com/a/56286519/565489">jezrael's answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>cond = [['A'],[5]]
print( np.logical_and.reduce([df.isin(x).any(1) for x in cond]) )
</code></pre>
<p>which (correctly) yields: <code>[ True True False False False False]</code></p>
<p>If we however use: </p>
<pre class="lang-py prettyprint-override"><code>cond = [['A'],[5]]
print( df.apply(lambda x: np.isin([cond],[x]).all(),axis=1) )
</code></pre>
<p>this yields:</p>
<pre class="lang-py prettyprint-override"><code>0 False
1 False
2 False
3 False
4 False
5 False
dtype: bool
</code></pre>
<p>Closer inspection of the second attempt reveals that:</p>
<ul>
<li><code>np.isin(['A',5],df.loc[0])</code> <strong>"wrongly"</strong> yields <code>array([ True, False])</code>, likely due to <code>numpy</code> infering a dtype <code><U1</code>, and consequently <code>5!='5'</code></li>
<li><code>np.isin(['A',5],['A',4,5,'B'])</code> <strong>"correctly"</strong> yields <code>array([ True, True])</code>, which means we can (and <em>should</em>) use <code>df.loc[0].values.tolist()</code> in the <code>.apply()</code> method above</li>
</ul>
<p><strong>The question, simplified:</strong></p>
<p>Why do I <em>need</em> to specify <code>x.values.tolist()</code> in one case, and can directly use <code>x</code> in the other?</p>
<pre class="lang-py prettyprint-override"><code>print( np.logical_and.reduce([df.isin(x).any(1) for x in cond]) )
print( df.apply(lambda x: np.isin([cond],x.values.tolist()).all(),axis=1 ) )
</code></pre>
<p><strong>Edit:</strong></p>
<p>Even worse is what happens if we search for <code>[4,5]</code>:</p>
<pre class="lang-py prettyprint-override"><code>cond = [[4],[5]]
## this returns False for row 0
print( df.apply(lambda x: np.isin([cond],x.values.tolist() ).all() ,axis=1) )
## this returns True for row 0
print( df.apply(lambda x: np.isin([cond],x.values ).all() ,axis=1) )
</code></pre>
|
<p>I think the DataFrame mixes numeric and string columns, so looping by rows yields a <code>Series</code> with mixed types, and numpy coerces them to <code>strings</code>.</p>
<p>A possible solution is to convert <code>cond</code> to an array and then to <code>string</code> values:</p>
<pre><code>cond = [[4],[5]]
print(df.apply(lambda x: np.isin(np.array(cond).astype(str), x.values.tolist()).all(),axis=1))
0 True
1 False
2 False
3 False
4 False
5 False
dtype: bool
</code></pre>
<p>Unfortunately, for a general solution (in case all columns are numeric) you need to convert both <code>cond</code> and the <code>Series</code>:</p>
<pre><code>f = lambda x: np.isin(np.array(cond).astype(str), x.astype(str).tolist()).all()
print (df.apply(f, axis=1))
</code></pre>
<p>Or all data:</p>
<pre><code>f = lambda x: np.isin(np.array(cond).astype(str), x.tolist()).all()
print (df.astype(str).apply(f, axis=1))
</code></pre>
<p>If you use sets in pure Python, it works nicely:</p>
<pre><code>print(df.apply(lambda x: set([4,5]).issubset(x),axis=1) )
0 True
1 False
2 False
3 False
4 False
5 False
dtype: bool
print(df.apply(lambda x: set(['A',5]).issubset(x),axis=1) )
0 True
1 True
2 False
3 False
4 False
5 False
dtype: bool
</code></pre>
|
python|pandas|numpy
| 2
|
4,468
| 56,047,379
|
Problem with Tensorflow iterator returning tuples
|
<p>I want to iterate over a TF dataset in order to convert the obtained data to numpy tensors. I am new to tensorflow; this is what my code looks like:</p>
<pre><code> def convert_dataset_to_pytorch(self, dataset):
sess = tf.Session(config=self.config)
iterator = dataset.make_one_shot_iterator()
exampleTF, labelsTF = iterator.get_next()
examples = torch.Tensor()
labels = torch.Tensor()
try:
while True:
examples = torch.cat((examples,torch.Tensor(exampleTF.eval(session=sess))),0)
labels = torch.cat((labels,torch.Tensor([labelsTF.eval(session=sess)])),0)
except tf.errors.OutOfRangeError:
pass
return examples, labels
</code></pre>
<p>The apparent problem is that every call to eval() advances the iterator, so evaluating exampleTF and labelsTF separately skips half of the entries. Any help? I also tried something like:</p>
<pre><code> def convert_dataset_to_pytorch(self, dataset):
sess = tf.Session(config=self.config)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
examples = torch.Tensor()
labels = torch.Tensor()
try:
while True:
sess.run(next_element)
examples = torch.cat((examples,torch.Tensor(next_element[0])),0)
labels = torch.cat((labels,torch.Tensor([next_element[0]])),0)
except tf.errors.OutOfRangeError:
pass
return examples, labels
</code></pre>
<p>but this results only in errors of the form</p>
<pre><code>examples = torch.cat((examples,torch.Tensor(next_element[0])),0)
TypeError: object of type 'Tensor' has no len()
</code></pre>
|
<p>Not sure why you are creating a pytorch tensor in tensorflow when all you want is a numpy tensor. To answer your question (mentioned below)</p>
<blockquote>
<p>iterate over a TF dataset in order to convert the obtained data to
numpy tensors.</p>
</blockquote>
<h3>Sample Code:</h3>
<pre><code>import numpy as np
import tensorflow as tf

inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
result = list()
with tf.Session() as sess:
try:
while True:
result.append(sess.run(next_element))
except tf.errors.OutOfRangeError:
pass
examples = np.array(list(zip(*result))[0])
labels = np.array(list(zip(*result))[1])
</code></pre>
<p>Now you can convert <code>examples</code> and <code>labels</code> np arrays to pytorch or tensorflow tensors or to whatever tensors you want. </p>
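<p>For instance, converting the resulting numpy arrays to pytorch tensors is then a one-liner (assuming pytorch is installed):</p>
<pre><code>import torch

examples_t = torch.from_numpy(examples)  # shares memory with the numpy array
labels_t = torch.from_numpy(labels)
</code></pre>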
|
python|numpy|tensorflow|tuples|tensorflow-datasets
| 2
|
4,469
| 56,280,602
|
Matplotlib / Seaborn legend changes style upon adding labels
|
<p>I'm plotting some of my data from a pandas df using seaborn. Almost everything plots nicely using the following code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import seaborn as sns
sns.set(style='whitegrid', palette='muted')
legend = ["Hue 1", "Hue 2"]
order = ["A", "B"]
ax = sns.violinplot(x=df.xaxis, y=df.yaxis, hue=df.hue,
split=True, order=order)
ax.set_ylim(0, 100)
ax.set(xlabel='X - axis', ylabel='Y - axis')
ax.legend(title='Legend', loc='upper left', labels=legend)
ax.set_title('My little plot')
plt.show()
</code></pre>
<p>As soon as I add the <code>labels=</code> there is a change in 'linetype' shown in the legend. Below is a screenshot. Unfortunately my dataset is too large to publish so I hope this is enough.</p>
<p>Thanks in advance. BBQuercus :)</p>
<p>Left without, right with <code>labels</code> (R, C are the values in my data).</p>
<p><a href="https://i.stack.imgur.com/A9zhZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A9zhZ.png" alt="Without labels"></a>
<a href="https://i.stack.imgur.com/iIjQc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iIjQc.png" alt="With labels"></a></p>
|
<p>You could try to draw custom patches for your legend. I haven't tested this but I think that it should work.</p>
<pre><code>from matplotlib.patches import Patch
palette=sns.color_palette('muted')
bluepatch = Patch(
facecolor=palette[0],edgecolor='k',label='Hue 1'
)
orangepatch = Patch(
facecolor=palette[1],edgecolor='k',label='Hue 2'
)
ax.legend(
labels=['Hue 1','Hue 2'],
handles=[bluepatch, orangepatch],
title='Legend',
loc='upper left'
)
</code></pre>
|
python|pandas|matplotlib|seaborn
| 1
|
4,470
| 55,766,304
|
Coloring cells in pandas according to their relative value
|
<p>I would like to color the cells of a (python) pandas dataframe according to whether their value is in the top 5%, top 10%, ..., last 10%, last 5% of the data in its column. </p>
<p>According to this post <a href="https://stackoverflow.com/questions/28075699/coloring-cells-in-pandas">Coloring Cells in Pandas</a>, one can define a function and then apply it to the dataframe.</p>
<p>If you want to color cells that fall in a fixed range, this works fine. But if you want to color only the top 5%, you need information about the whole column, so you cannot apply a function that only evaluates a single cell in isolation.</p>
<p>Hence my question:
Is there a smart way to color the top 5%, 10%,... of a dataframe in each column? </p>
|
<p>Try this:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(100).reshape(20,-1))
def colorme(x):
c = x.rank(pct=True)
c = np.select([c<=.05,c<=.10,c>=.95,c>=.90],['red','orange','yellow','green'])
return [f'background-color: {i}' for i in c]
df.style.apply(colorme)
</code></pre>
<p><a href="https://i.stack.imgur.com/QrBfb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QrBfb.png" alt="enter image description here"></a></p>
|
python|python-3.x|pandas
| 0
|
4,471
| 55,893,511
|
"Dependency was not found" for tfjs in Vue/Webpack project with yarn
|
<p>I'm trying to use TensorFlow.js for array operations in a JavaScript project. I'm importing it in my Vue component with <code>import * as tf from '@tensorflow/tfjs';</code></p>
<p>It appears <code>yarn install tensorflow</code> requires Python 2.7, so I instead used <code>yarn add tensorflow/tfjs</code> to install the subset of Tensorflow.js I needed. This appeared to work, but when I did <code>yarn run serve</code> I got this message:</p>
<pre><code> ERROR Failed to compile with 1 errors
This dependency was not found:
* @tensorflow/tfjs in ./node_modules/cache-loader/dist/cjs.js??ref--12-0!./node_modules/babel-loader/lib!./node_modules/cache-loader/dist/cjs.js??ref--0-0!./node_modules/vue-loader/lib??vue-loader-options!./src/components/DetectorPlot.vue?vue&type=script&lang=js&
To install it, you can run: npm install --save @tensorflow/tfjs
</code></pre>
<p>In <code>yarn.lock</code> after <code>yarn add tensorflow/tfjs</code> I see:</p>
<pre><code>
"@tensorflow/tfjs@github:tensorflow/tfjs":
version "1.1.0"
resolved "https://codeload.github.com/tensorflow/tfjs/tar.gz/74b4edef368aa39decc6073af735f81d112bafd8"
dependencies:
"@tensorflow/tfjs-converter" "1.1.0"
"@tensorflow/tfjs-core" "1.1.0"
"@tensorflow/tfjs-data" "1.1.0"
"@tensorflow/tfjs-layers" "1.1.0"
</code></pre>
|
<p>Thanks to a friend, I have a solution:</p>
<p><code>yarn remove @tensorflow/tfjs</code></p>
<p>and</p>
<p><code>yarn add @tensorflow/tfjs</code></p>
<p>I guess it got in a weird state when the first attempt to install tensorflow failed. (It's depressing how often the answer is "turn it off and on again"...)</p>
|
javascript|tensorflow|vue.js|webpack
| 1
|
4,472
| 55,885,970
|
Matplotlib, legends are not appearing in the histogram
|
<p>To describe my problem, I am providing a small dataset as an example. Imagine the following dataset:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.DataFrame({'name':['a', 'b', 'c', 'a', 'b', 'c'], 'val':[1,5,3,4,5,3]} )
</code></pre>
<p>I am creating a simple histogram with the following code:</p>
<pre><code>def plot_bar_x():
index = np.arange(len(df['name']))
plt.bar(index, df['val'])
plt.legend(list(df['name'].unique()))
plt.xticks(index, df['name'], fontsize=10, rotation=30)
plt.show()
plot_bar_x()
</code></pre>
<p>But it gives me the following plot:
<a href="https://i.stack.imgur.com/ldtqy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ldtqy.png" alt="enter image description here"></a></p>
<p>Although I have 3 unique names, I see only the 'a' label, even though I used this line: plt.legend(list(df['name'].unique())). The other issue is that all the bars are the same color; is there a way to get different colors for unique labels without defining the colors manually beforehand?</p>
<p>desired output is:</p>
<p><a href="https://i.stack.imgur.com/Yv5EL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yv5EL.png" alt="enter image description here"></a></p>
|
<p>You only call <code>plt.bar</code> once, on one series; that's why <code>plt</code> picks only one label for the legend. If you don't have a lot of names, try:</p>
<pre><code>def plot_bar_x():
index = np.arange(len(df['name']))
plt.figure()
for name in df.name.unique():
tmp_df = df[df.name == name]
plt.bar(tmp_df.index, tmp_df.val, label=name)
plt.xticks(index, df['name'], fontsize=10, rotation=30)
plt.legend()
plt.show()
</code></pre>
<p>There must be some clever way to solve your problem, but it's over my head now.</p>
|
python|pandas|matplotlib|histogram|legend
| 1
|
4,473
| 55,606,347
|
Append 2 pandas dataframes with subset of rows and columns
|
<p>I have 2 dataframes like this</p>
<pre><code>df = pd.DataFrame({"date":["2019-01-01", "2019-01-02", "2019-01-03", "2019-01-04"],
"A": [1., 2., 3., 4.],
"B": ["a", "b", "c", "d"]})
df["date"] = pd.to_datetime(df["date"])
df_new = pd.DataFrame({"date":["2019-01-02", "2019-01-03", "2019-01-04", "2019-01-05", "2019-01-06"],
"A": [2, 3.5, 4, 5., 6.],
"B": ["b", "c1", "d", "e", "f"]})
df_new["date"] = pd.to_datetime(df_new["date"])
</code></pre>
<p>So, my dataframes look like this</p>
<pre><code>df
-----------------------
date A B
2019-01-01 1 a
2019-01-02 2 b
2019-01-03 3 c
2019-01-04 4 d
df_new
----------------------
date A B
2019-01-02 2 b
2019-01-03 3.5 c1
2019-01-04 4 d
2019-01-05 5 e
2019-01-06 6 f
</code></pre>
<p>From these dataframes, I would like to append df to df_new with the following conditions:</p>
<ol>
<li><p>For any date present in both dataframes, take the row from df_new</p></li>
<li><p>For any date present in df but not in df_new, take the row from df</p></li>
</ol>
<p>Finally my expected output look like this</p>
<pre><code>Expected output
----------------------
date A B
2019-01-01 1 a (take from df)
2019-01-02 2 b (take from df_new)
2019-01-03 3.5 c1 (take from df_new)
2019-01-04 4 d (take from df_new)
2019-01-05 5 e (take from df_new)
2019-01-06 6 f (take from df_new)
</code></pre>
<p>I can think of taking the row difference between the 2 dataframes, but it does not work when I take the date column into account. May I have your suggestions? Thank you.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> and remove duplicates by <code>date</code> column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a>, last create default uniqe index values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a> :</p>
<pre><code>df = pd.concat([df, df_new]).drop_duplicates('date', keep='last').reset_index(drop=True)
print (df)
date A B
0 2019-01-01 1.0 a
1 2019-01-02 2.0 b
2 2019-01-03 3.5 c1
3 2019-01-04 4.0 d
4 2019-01-05 5.0 e
5 2019-01-06 6.0 f
</code></pre>
|
python|pandas
| 3
|
4,474
| 64,780,469
|
tf.keras "All layer names should be unique." but layer names are already changed
|
<p>I am trying to create an ensemble of LSTMs. Below is my implementation of one LSTM:</p>
<pre><code>def lstm_model(n_features, n_hidden_unit, learning_rate, p, recurrent_p):
model = keras.Sequential()
model.add(Masking(mask_value=-1, input_shape=(100, n_features)))
model.add(Bidirectional(LSTM(n_hidden_unit, input_shape=(None, n_features), return_sequences=True,
dropout = p, recurrent_dropout = recurrent_p)))
model.add(TimeDistributed(Dense(3, activation='softmax')))
model.compile(loss=CategoricalCrossentropy(from_logits=True),
optimizer=Adam(learning_rate=learning_rate),
metrics=['categorical_accuracy'])
return model
</code></pre>
<p>Then I trained a few LSTMs. The models are stored in a list, then passed into the function below:</p>
<pre><code>def define_stacked_model(members):
for i in range(len(members)):
model = members[i]['model']
model.input._name = 'ensemble_' + str(i+1) + '_' + model.input.name
for layer in model.layers:
# make not trainable
layer.trainable = False
# rename to avoid 'unique layer name' issue
layer._name = 'ensemble_' + str(i+1) + '_' + layer.name
print(layer._name)
# define multi-headed input
ensemble_visible = [model_dictionary['model'].input for model_dictionary in members]
# concatenate merge output from each model
ensemble_outputs = [model_dictionary['model'].output for model_dictionary in members]
merge = tf.keras.layers.Concatenate(axis=2)(ensemble_outputs)
lstm_n_features = merge.shape[-1]
stack_lstm = Bidirectional(LSTM(25, input_shape=(None, lstm_n_features), return_sequences=True,
dropout = 0, recurrent_dropout = 0))(merge)
output = TimeDistributed(Dense(3, activation='softmax'))(stack_lstm)
model = Model(inputs=ensemble_visible, outputs=output)
# plot graph of ensemble
plot_model(model, show_shapes=True, to_file='model_graph.png')
rename(model, model.layers[1], 'new_name')
# compile
model.compile(loss=CategoricalCrossentropy(from_logits=True),
optimizer=Adam(learning_rate=learning_rate),
metrics=['categorical_accuracy'])
return model
</code></pre>
<p>The output shows the layer and input names are already changed</p>
<pre><code>ensemble_1_masking_input:0
ensemble_1_masking
ensemble_1_bidirectional
ensemble_1_time_distributed
ensemble_2_masking_input_1:0
ensemble_2_masking
ensemble_2_bidirectional
ensemble_2_time_distributed
ensemble_3_masking_input_2:0
ensemble_3_masking
ensemble_3_bidirectional
ensemble_3_time_distributed
ensemble_4_masking_input_3:0
ensemble_4_masking
ensemble_4_bidirectional
ensemble_4_time_distributed
</code></pre>
<p>but there is a value error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-30bc0e96adc5> in <module>
25 #hidden = Dense(10, activation='relu')(merge)
26 output = TimeDistributed(Dense(3, activation='softmax'))(stack_lstm)
---> 27 model = Model(inputs=ensemble_visible, outputs=output)
28 #model = Model(inputs=ensemble_visible)
29 # plot graph of ensemble
~\anaconda3\envs\env_notebook\lib\site-packages\tensorflow\python\keras\engine\training.py in __init__(self, *args, **kwargs)
165
166 def __init__(self, *args, **kwargs):
--> 167 super(Model, self).__init__(*args, **kwargs)
168 _keras_api_gauge.get_cell('model').set(True)
169 # Model must be created under scope of DistStrat it will be trained with.
~\anaconda3\envs\env_notebook\lib\site-packages\tensorflow\python\keras\engine\network.py in __init__(self, *args, **kwargs)
171 'inputs' in kwargs and 'outputs' in kwargs):
172 # Graph network
--> 173 self._init_graph_network(*args, **kwargs)
174 else:
175 # Subclassed network
~\anaconda3\envs\env_notebook\lib\site-packages\tensorflow\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
~\anaconda3\envs\env_notebook\lib\site-packages\tensorflow\python\keras\engine\network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
304
305 # Keep track of the network's nodes and layers.
--> 306 nodes, nodes_by_depth, layers, _ = _map_graph_network(
307 self.inputs, self.outputs)
308 self._network_nodes = nodes
~\anaconda3\envs\env_notebook\lib\site-packages\tensorflow\python\keras\engine\network.py in _map_graph_network(inputs, outputs)
1800 for name in all_names:
1801 if all_names.count(name) != 1:
-> 1802 raise ValueError('The name "' + name + '" is used ' +
1803 str(all_names.count(name)) + ' times in the model. '
1804 'All layer names should be unique.')
ValueError: The name "masking_input" is used 4 times in the model. All layer names should be unique.
</code></pre>
<p>I can't find where this "masking_input" comes from or how to change it.</p>
|
<p>Not sure if you ever fixed this, or if anyone else will find the same issue; I just spent hours on the same problem and finally found the solution.</p>
<p>When listing layers for renaming, instead of <code>model.layers</code>, you have to use <code>model._layers</code>, otherwise the input name remains unchanged. Changing model.inputs.name does not work and produces the mentioned error.</p>
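<p>Applied to the question's <code>define_stacked_model</code>, a sketch of the renaming loop would be (note that <code>_layers</code> is a private attribute, so this may break across TF versions):</p>
<pre><code>for i in range(len(members)):
    model = members[i]['model']
    for layer in model._layers:  # _layers (unlike layers) also includes the input layer
        layer.trainable = False
        layer._name = 'ensemble_' + str(i + 1) + '_' + layer.name
</code></pre>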
|
python|tensorflow|keras|keras-layer
| 2
|
4,475
| 64,871,055
|
Importing all the sql tables into python using pandas dataframe
|
<p>I have a requirement to import all the tables stored in a SQL database into Python. I have successfully written Python code for one table, as follows:</p>
<pre><code>import pandas as pd
import mysql.connector
from pandas import DataFrame
db = mysql.connector.connect(host="localhost", user="root", passwd="********")
pointer = db.cursor()
pointer.execute("use stock")
pointer.execute("SELECT * FROM 3iinfotech")
data = pointer.fetchall()
data = DataFrame(data,columns =['date','open','high','low','close','volume'])
</code></pre>
<p>Using this, I am able to successfully import the table's data into a pandas DataFrame.</p>
<p>But as can be seen from the database schema, there are multiple tables, and their number will grow in the future.</p>
<p><a href="https://i.stack.imgur.com/VHnif.png" rel="nofollow noreferrer">Full database schema along with output of the table</a></p>
<p>The dataframe looks like:</p>
<p><a href="https://i.stack.imgur.com/pM1gn.png" rel="nofollow noreferrer">Imported table from sql converted to DataFrame</a></p>
<p>Is there any way, using loops or any other method, to automate this script for all the tables in a given database?</p>
<p>I referred the following :</p>
<p><a href="https://stackoverflow.com/questions/32912373/importing-multiple-sql-tables-using-pandas">Importing multiple SQL tables using pandas</a></p>
<p>But this does not work in my case.</p>
<p>Thanks.....</p>
|
<p>The SQL error comes from the quotes (') that parameterized queries place around the table name; table names cannot be parameterized. It's not the best approach, but if you are working on a local application you can work around it with f-strings:</p>
<pre><code>with connection.cursor() as pointer:
    pointer.execute("USE stock")
    pointer.execute("SHOW TABLES")
    tables_tuples = []
    for table_dict in pointer:  # assumes a dictionary cursor
        for k, v in table_dict.items():
            tables_tuples.append(v)

for table_name in tables_tuples:
    with connection.cursor() as pointer:
        # pointer.execute("SELECT * FROM %s", table_name)  # fails: the driver quotes the table name
        pointer.execute(f"SELECT * FROM {table_name}")
        connection.commit()
        df = pd.DataFrame(pointer)
        print(df)
</code></pre>
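<p>To keep the results instead of just printing them, one sketch is a dict of DataFrames keyed by table name. Note that <code>pandas.read_sql</code> officially wants a SQLAlchemy connectable; with a raw <code>mysql.connector</code> connection it generally works but emits a warning:</p>
<pre><code>dfs = {}
for table_name in tables_tuples:
    dfs[table_name] = pd.read_sql(f"SELECT * FROM `{table_name}`", connection)

print(dfs.keys())  # one DataFrame per table, e.g. dfs['3iinfotech']
</code></pre>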
|
python|mysql|pandas
| 1
|
4,476
| 64,699,496
|
How to compute loss of chained models with Keras?
|
<p>For testing, I have split a model into two models, and I want to compute the loss and apply the gradients to both models as if they were one.</p>
<p>Here are my two simple models:</p>
<pre><code>model1 = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation="relu", input_shape=(10,)),
])
model2 = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation="softmax", input_shape=(10,)),
])
</code></pre>
<p>And I run a forward pass through the two models, calculate the loss of the second model and apply the gradients:</p>
<pre><code>optimizer = tf.keras.optimizers.SGD()
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
x = tf.random.normal((1, 10)) # Input of the 1st model
y = tf.random.normal((1, 10)) # Expected output of the 2nd model
with tf.GradientTape() as tape:
pred1 = model1(x, training=True)
pred2 = model2(pred1, training=True)
loss_value2 = loss(y, pred2) # Compute the loss for the second model prediction
grads = tape.gradient(loss_value2, model2.trainable_variables)
optimizer.apply_gradients(zip(grads, model2.trainable_variables))
</code></pre>
<p>But how do I get the expected output of the first model with respect to the second model, so I can compute its loss and apply gradients to it?</p>
<p>EDIT:</p>
<p>The end goal of the testing is to have two models 1, that send their output to a single third model. And having each model 1 trained on two GPUs:</p>
<pre><code>with tf.device('/gpu:0'):
pred1_1 = model1_1(x, training=True)
with tf.device('/gpu:1'):
pred1_2 = model1_2(x, training=True)
pred1 = tf.keras.layers.concatenate([pred1_1, pred1_2])
with tf.device('/gpu:0'):
pred2 = model2(pred1, training=True)
</code></pre>
|
<p>@Begoodpy,
I suggest you combine the 2 models into a single one and train it as you would usually do.</p>
<pre><code>supermodel = keras.Sequential(
    [
        model1,
        model2,
    ]
)
</code></pre>
<p>If you need more control over the models, try this:</p>
<pre><code>all_vars = model1.trainable_variables + model2.trainable_variables
grads = tape.gradient(loss_value2, all_vars)
optimizer.apply_gradients(zip(grads, all_vars))
</code></pre>
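<p>Putting the second option together with the question's training step, a sketch of the whole update would be:</p>
<pre><code>with tf.GradientTape() as tape:
    pred1 = model1(x, training=True)
    pred2 = model2(pred1, training=True)
    loss_value = loss(y, pred2)

# one gradient call over the variables of both models
all_vars = model1.trainable_variables + model2.trainable_variables
grads = tape.gradient(loss_value, all_vars)
optimizer.apply_gradients(zip(grads, all_vars))
</code></pre>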
|
python|tensorflow|keras
| 2
|
4,477
| 64,979,047
|
Can't save in a "to_csv" with pandas
|
<p>I have been trying to save some usernames and passwords at the end of my code with "to_csv", but it doesn't save anything at all. The console does not show any message, and no output file appears. I can't find what the problem is.</p>
<pre><code>def registrar():
df = pd.read_csv(r"C:\Users\Usuario\Documents\David\Programación\usuarios_ayp.csv", encoding='cp1252')
decision = input('Bienvenido. Ya eres usuario? (si/no)')
if decision.lower() == 'si' :
usuarionuevo = validar()
if usuarionuevo != '':
return usuarionuevo
else:
opcion = input('Quieres registrarte r o cualuier otra letr para salir')
if opcion.lower == 'r':
registrar() #llamado recursivo
else:
return ''
else:
usuarionuevo = input('Ingresa un nombre de usuario: ')
contranuevo = input('Ingresa una contraseña: ')
buscar = df['Usuarios']
while True:
if usuarionuevo in buscar.unique():
print('Usuario existente')
usuarionuevo = input('Ingresa un nombre de usuario: ')
else:
dic = {'Usuarios': usuarionuevo, 'Contrasenas': contranuevo}
serie = pd.Series(dic)
df.append(serie, ignore_index = True)
df.to_csv(r"C:\Users\Usuario\Documents\David\Programación\usuarios_ayp.csv",
index = False)
registrar()
</code></pre>
|
<p>You are not getting any message because you missed a break and are entering an infinite loop in your last else condition. That's why you should always avoid <code>while True</code> loops.</p>
<p>And also you need to reassign <code>df</code> in order to write it to the csv file.</p>
<p>Change your else for this and should work fine:</p>
<pre class="lang-py prettyprint-override"><code> else:
usuarionuevo = input('Ingresa un nombre de usuario: ')
contranuevo = input('Ingresa una contraseña: ')
buscar = df['Usuarios']
while usuarionuevo in buscar.unique():
print('Usuario existente')
usuarionuevo = input('Ingresa un nombre de usuario: ')
else:
dic = {'Usuarios': usuarionuevo, 'Contrasenas': contranuevo}
serie = pd.Series(dic)
df = df.append(serie, ignore_index=True)
df.to_csv(r"usuarios_ayp.csv",
index=False)
</code></pre>
|
python|pandas|csv
| 0
|
4,478
| 64,629,823
|
Accuracy of binary classification on MNIST “1” and “5” get better?
|
<p>I tried binary classification using MNIST with only the digits "1" and "5",
but the accuracy isn't good. Is there anything wrong in the following program?
If you find something, please give me some advice.</p>
<p>loss: -9.9190e+04</p>
<p>accuracy: 0.5599</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0
train_filter = np.where((y_train == 1) | (y_train == 5))
test_filter = np.where((y_test == 1) | (y_test == 5))
x_train, y_train = x_train[train_filter], y_train[train_filter]
x_test, y_test = x_test[test_filter], y_test[test_filter]
print("x_train", x_train.shape)
print("x_test", x_test.shape)
# x_train (12163, 28, 28)
# x_test (2027, 28, 28)
model = keras.Sequential(
[
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
]
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
loss, acc = model.evaluate(x_test, y_test, verbose=2)
print("accuracy:", acc)
# 2027/1 - 0s - loss: -9.9190e+04 - accuracy: 0.5599
# accuracy: 0.5599408
</code></pre>
|
<p>Your y_train and y_test are filled with the class labels 1 and 5, but the sigmoid activation in your last layer squashes the output between 0 and 1.</p>
<p>If you change 5 into 0 in your y, you will get really high accuracy:</p>
<pre><code>y_train = np.where(y_train == 5, 0, y_train)
y_test = np.where(y_test == 5, 0, y_test)
</code></pre>
<p>result:</p>
<pre><code>64/64 - 0s - loss: 0.0087 - accuracy: 0.9990
</code></pre>
|
tensorflow|machine-learning|prediction|mnist
| 2
|
4,479
| 65,017,040
|
How to convert pandas DataFrame to dictionary for Newick format
|
<p>I have the following dataset:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['root', 'b', 'a', 'leaf1'],
['root', 'b', 'a', 'leaf2'],
['root', 'b', 'leaf3', ''],
['root', 'b', 'leaf4', ''],
['root', 'c', 'leaf5', ''],
['root', 'c', 'leaf6', '']],
columns=['col1', 'col2', 'col3', 'col4'])
</code></pre>
<p>Because I have found no way to directly convert it into Newick format, I would like to convert it into a dictionary with the following format:</p>
<pre><code>node_to_children = {
'root': {'b': 0, 'c': 0},
'a': {'leaf1': 0, 'leaf2': 0},
'b': {'a': 0, 'leaf3': 0, 'leaf4': 0},
'c': {'leaf5': 0, 'leaf6': 0}
}
</code></pre>
<p>Then I can ultimately convert this node_to_children into Newick format; however, how can I make the conversion from the pandas DataFrame to the dictionary?</p>
|
<p>I assumed that every row in your dataframe stands for one complete branch of the tree from the root to the leaves. Based on this, I came up with the following solution. Comments to each step in the algorithm can be found in the code below, but feel free to ask if anything is unclear.</p>
<pre><code>node_to_children = {}
#iterate over dataframe row-wise. Assuming that every row stands for one complete branch of the tree
for row in df.itertuples():
#remove index at position 0 and elements that contain no child ("")
row_list = [element for element in row[1:] if element != ""]
for i in range(len(row_list)-1):
if row_list[i] in node_to_children.keys():
#parent entry already existing
if row_list[i+1] in node_to_children[row_list[i]].keys():
#entry itself already existing --> next
continue
else:
#entry not existing --> update dict and add the connection
node_to_children[row_list[i]].update({row_list[i+1]:0})
else:
#add the branching point
node_to_children[row_list[i]] = {row_list[i+1]:0}
</code></pre>
<p>Output:</p>
<pre><code>print(node_to_children)
{'root': {'b': 0, 'c': 0},
'b': {'a': 0, 'leaf3': 0, 'leaf4': 0},
'a': {'leaf1': 0, 'leaf2': 0},
'c': {'leaf5': 0, 'leaf6': 0}}
</code></pre>
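<p>A more compact variant of the same idea, using <code>defaultdict</code> (a sketch, assuming every row is one root-to-leaf branch as above):</p>
<pre><code>from collections import defaultdict

node_to_children = defaultdict(dict)
for row in df.itertuples(index=False):
    branch = [element for element in row if element != ""]
    # pair every node with its direct child along the branch
    for parent, child in zip(branch, branch[1:]):
        node_to_children[parent][child] = 0
</code></pre>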
|
python|pandas
| 1
|
4,480
| 39,885,928
|
Variable shape tensor
|
<p>I need to get a tensor with a variable shape, as I do not know the vector size beforehand. So far I tried: </p>
<pre><code>hashtag_len = tf.placeholder(tf.int32)
train_hashtag = tf.placeholder(tf.int32, shape=[hashtag_len])
</code></pre>
<p>but I get the error <code>TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'</code>. </p>
<p>The only other way around this that I can think of is to pad the vector with enough zeros so that I can fit the intended vector inside a giant vector. Seems like tensorflow should have a better way of doing this.</p>
|
<p>If you want a VECTOR for sure, you should do the following:</p>
<pre><code>train_hashtag = tf.placeholder(tf.int32, shape=[None])
</code></pre>
<p>This shape describes a vector of arbitrary length.</p>
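<p>A quick usage sketch (TF 1.x API, as in the question) showing that vectors of different lengths can be fed into the same placeholder:</p>
<pre><code>import tensorflow as tf

train_hashtag = tf.placeholder(tf.int32, shape=[None])
doubled = train_hashtag * 2

with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={train_hashtag: [1, 2, 3]}))  # [2 4 6]
    print(sess.run(doubled, feed_dict={train_hashtag: [4, 5]}))     # [ 8 10]
</code></pre>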
|
python|tensorflow
| 3
|
4,481
| 44,064,299
|
How can I concatenate Pandas DataFrames by column and index?
|
<p>I've got four Pandas DataFrames with numerical columns and indices:</p>
<pre><code>A = pd.DataFrame(data={"435000": [9.792, 9.795], "435002": [9.825, 9.812]}, index=[119000, 119002])
B = pd.DataFrame(data={"435004": [9.805, 9.783], "435006": [9.785, 9.78]}, index=[119000, 119002])
C = pd.DataFrame(data={"435000": [9.778, 9.743], "435002": [9.75, 9.743]}, index=[119004, 119006])
D = pd.DataFrame(data={"435004": [9.743, 9.743], "435006": [9.762, 9.738]}, index=[119004, 119006])
</code></pre>
<p><a href="https://i.stack.imgur.com/eJQd9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/eJQd9.png" alt="enter image description here"></a></p>
<p>I want to concatenate them into one DataFrame like this, matching on both column names and indices:</p>
<p><a href="https://i.stack.imgur.com/sjEHT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sjEHT.png" alt="enter image description here"></a></p>
<p>If I try to <code>pd.concat</code> the four dfs, they are stacked (either above and below, or to the side, depending on <code>axis</code>) and I end up with <code>NaN</code> values in the df:</p>
<pre><code>result = pd.concat([A, B, C, D], axis=0)
</code></pre>
<p><a href="https://i.stack.imgur.com/s2YQI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s2YQI.png" alt="enter image description here"></a></p>
<p>How can I use <code>pd.concat</code> (or <code>merge</code>, <code>join</code> etc.) to get the right result?</p>
|
<p>You need concat in pairs:</p>
<pre><code>result = pd.concat([pd.concat([A, C], axis=0), pd.concat([B, D], axis=0)], axis=1)
print (result)
435000 435002 435004 435006
119000 9.792 9.825 9.805 9.785
119002 9.795 9.812 9.783 9.780
119004 9.778 9.750 9.743 9.762
119006 9.743 9.743 9.743 9.738
</code></pre>
<p>Better is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="noreferrer"><code>stack</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="noreferrer"><code>concat</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="noreferrer"><code>unstack</code></a>:</p>
<pre><code>result = pd.concat([A.stack(), B.stack(), C.stack(), D.stack()], axis=0).unstack()
print (result)
435000 435002 435004 435006
119000 9.792 9.825 9.805 9.785
119002 9.795 9.812 9.783 9.780
119004 9.778 9.750 9.743 9.762
119006 9.743 9.743 9.743 9.738
</code></pre>
<p>More dynamic:</p>
<pre><code>dfs = [A,B,C,D]
result = pd.concat([df.stack() for df in dfs], axis=0).unstack()
print (result)
435000 435002 435004 435006
119000 9.792 9.825 9.805 9.785
119002 9.795 9.812 9.783 9.780
119004 9.778 9.750 9.743 9.762
119006 9.743 9.743 9.743 9.738
</code></pre>
|
python|pandas
| 16
|
4,482
| 44,289,277
|
Tensorflow: Import failed with "Failed to load the native TensorFlow runtime."
|
<p>I'm trying to install the CPU version of tensorflow in a virtualenv on Mac OS 10.6.8 (all I've got for now) with Python 3.6, using the package url as described <a href="https://www.tensorflow.org/install/install_mac#python_34_35_or_36" rel="nofollow noreferrer">here</a>. It seems to work fine:</p>
<pre><code>Ms-MacBook:tensorflow User$ source tfvenv/bin/activate
(tfvenv) Ms-MacBook:tensorflow User$ python --version
Python 3.6.1
(tfvenv) Ms-MacBook:tensorflow User$ pip --version
pip 9.0.1 from /Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages (python 3.6)
(tfvenv) Ms-MacBook:tensorflow User$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.1.0-py3-none-any.whl
[...]
Successfully installed numpy-1.12.1 protobuf-3.3.0 tensorflow-1.1.0 werkzeug-0.12.2
</code></pre>
<p>However, when I try to import tensorflow in a Python interpreter I get this error:</p>
<pre><code>(tfvenv) Ms-MacBook:tensorflow User$ python
Python 3.6.1 (v3.6.1:69c0db5050, Mar 21 2017, 01:21:04)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dlopen(/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 10): Library not loaded: /usr/lib/libc++.1.dylib
Referenced from: /Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Reason: image not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 51, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dlopen(/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 10): Library not loaded: /usr/lib/libc++.1.dylib
Referenced from: /Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Reason: image not found
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>Unfortunately there is no mention of this among the <a href="https://www.tensorflow.org/install/install_mac#common_installation_problems" rel="nofollow noreferrer">common installation problems</a>. Can anybody tell me what's going wrong here?</p>
|
<blockquote>
<p>ImportError: dlopen(/Users/M/Developer/tensorflow/tfvenv/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 10): Library not loaded: /usr/lib/libc++.1.dylib</p>
</blockquote>
<p>libc++ was not included in MacOS 10.6 because Apple had not switched to Clang and libc++ yet. That's your problem.</p>
<p>Refer to <a href="https://github.com/tensorflow/tensorflow/issues/153" rel="nofollow noreferrer">this</a>.</p>
|
python|tensorflow|pip
| 1
|
4,483
| 69,594,876
|
DataFrame: split a column into other columns with averages by date
|
<p>There is a dataframe:</p>
<pre><code>d = {'date' : ['2020-02-01', '2020-02-01', '2020-02-01', '2020-02-01', '2020-02-02', '2020-02-02', '2020-02-02'], 'type' : ['Bird', 'Dog', 'Cat', 'Bird', 'Dog', 'Cat', 'Bird'], 'weight' : [1, 2, 3, 4, 5, 6, 7]}
df = pd.DataFrame(d)
</code></pre>
<p>I would like to split the "type" column by the value of type and get columns - Bird, Dog, Cat. And values in these columns must be the average weight of the birds, dogs, etc. on the same date.</p>
<p>To get something like that.</p>
<pre><code>date bird dog cat
2020-02-01 ... ... ...
2020-02-02 ... ... ...
</code></pre>
<p>I started with groupby but can't figure it out from there. Maybe split the dataframe by the values in the "type" column and merge the obtained dataframes again?</p>
|
<p>Use <code>pivot_table</code> and apply <code>mean</code> to aggregate values that share the same index/column:</p>
<pre><code>out = df.pivot_table(index='date', columns='type', values='weight', aggfunc='mean') \
.rename_axis(columns=None).reset_index()
print(out)
# Output:
date Bird Cat Dog
0 2020-02-01 2.5 3.0 2.0
1 2020-02-02 7.0 6.0 5.0
</code></pre>
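<p>Since you started with <code>groupby</code>, an equivalent route (a sketch) is to group, take the mean, and <code>unstack</code> the type level into columns:</p>
<pre><code>out = df.groupby(['date', 'type'])['weight'].mean().unstack() \
        .rename_axis(columns=None).reset_index()
</code></pre>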
|
pandas|dataframe
| 0
|
4,484
| 69,303,487
|
What is the pythonic way to convert day and month in string format to date in python
|
<p>I have a day and month in string format, <strong>23rd Sep</strong>, and I would like to convert this string to the current year's date, <strong>23/09/2021</strong>. I achieved that using the code below; what is a more Pythonic way to do this?</p>
<pre><code>from datetime import datetime
import re
# datetime object containing current date and time
now = datetime.now()
year = str(now).split('-')
year =year[0]
dm = '23rd Sep'
s = re.findall(r'[A-Za-z]+|\d+', dm)
day_month = " ".join(s[::len(s)-1] )
date = day_month +' ' + year
dmap = {'Jan':'January','Feb':'February','Mar':'March','Sep':'September'}
import arrow
for i in dmap.keys():
s = date.split(' ')
if i in s:
s[1] = dmap[i]
clean_date = ' '.join(s)
p = arrow.get(clean_date, 'DD MMMM YYYY').format('DD/MM/YYYY')
print(p)
</code></pre>
|
<p>I don't see the problem:</p>
<pre><code>import pandas as pd
s = '23rd Sep'
pd.Timestamp(s + ' 2021')
Out[1]: Timestamp('2021-09-23 00:00:00')
</code></pre>
<p>If you want it in DMY format (<code>_</code> here is the interpreter's last result, i.e. the Timestamp above):</p>
<pre><code>_.strftime('%d/%m/%Y')
Out[2]: '23/09/2021'
</code></pre>
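<p>If you'd rather avoid the pandas dependency, a minimal sketch using only the standard library (assuming the day always carries an ordinal suffix such as "rd"):</p>
<pre><code>from datetime import datetime
import re

s = '23rd Sep'
day, month = re.match(r'(\d+)\w*\s+(\w+)', s).groups()
d = datetime.strptime(f'{day} {month} {datetime.now().year}', '%d %b %Y')
print(d.strftime('%d/%m/%Y'))  # 23/09/2021
</code></pre>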
|
python|pandas|dataframe
| 2
|
4,485
| 41,221,084
|
Apply VLOOKUP in pandas and python
|
<p>I have a csv called 'data.csv' which has:</p>
<pre><code>EmployeeIDNumber
A
B
C
D
</code></pre>
<p>I have another csv called 'basic.csv' which has the same data but is jumbled:</p>
<pre><code>MemberIdentifier
B
A
C
</code></pre>
<p>I want to use PANDAS to create a result sheet which has:</p>
<pre><code>EmployeeIDNumber MemberIdentifier
A A
B B
C C
D Not Found
</code></pre>
|
<p>There are several ways to do this, but the most flexible is the following:</p>
<pre><code>import pandas as pd

df1 = pd.read_csv('data.csv')
df2 = pd.read_csv('basic.csv')
df = pd.merge(df1, df2, left_on='EmployeeIDNumber', right_on='MemberIdentifier', how='left')
</code></pre>
<p>Here we are choosing the specific columns we wish to join our DataFrames on. If you also wish to include any values in the <code>MemberIdentifier</code> column that do not match anything in the <code>EmployeeIDNumber</code> column, you can set <code>how='outer'</code>.</p>
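<p>To reproduce the "Not Found" entries from the desired output, you could additionally fill the unmatched rows (a sketch):</p>
<pre><code>df['MemberIdentifier'] = df['MemberIdentifier'].fillna('Not Found')
</code></pre>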
|
python|csv|pandas
| 1
|
4,486
| 54,061,753
|
Creating Estimation Windows based on a Criteria (DataFrame)
|
<p>I am looking at how I can select a couple of rows (specifically -15 until -5) based on a specific criteria.</p>
<p>We have a list of Events (dates) and a large DataFrame with all BitCoin orders, ordered by Date. In this DataFrame we have a column that marks a row with 'True' if the value in Events is found in the DataFrame.</p>
<p>What I want to do is: when 'True' is found in this column, Python selects the rows from 15 rows before (-15) the True until 5 rows before (-5) the True. In total we have 42 events, and our goal is to create a new DataFrame that we will use to calculate descriptive statistics of these values.</p>
<p><a href="https://i.stack.imgur.com/DKGQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DKGQt.png" alt="Image of Our DataFrame"></a></p>
|
<p>Here is an example. FYI, it's normally easier to answer these when you post some code that generates a test dataset :)</p>
<p>First, here is a dataset. We're basically trying to select based on the True values, but we only want 1 row before and 1 after, so we shouldn't see any of the rows labelled 'gone'. </p>
<pre><code>import pandas as pd
import numpy as np
data = [
['gone', False],
['a', False],
['abb', True],
['a', False],
['gone', False],
['gone', False],
['a', False],
['abbb', True],
['a', False],
['gone', False],
['gone', False]
]
df = pd.DataFrame(data=data, columns=['label', 'indicator'])
ranges = df[df['indicator']].index.values
</code></pre>
<p>Next we generate a range of rows we're interested in. For your case you would want to set num_before and num_after differently. You can probably compress the code somewhat but I thought the steps are easier to understand this way.</p>
<pre><code>num_before = 1
num_after = 1
indexes = [range(x-num_before, x+num_after+1) for x in ranges] #+1 due to the behaviour of range
x = [list(rang) for rang in indexes]
i = np.array(x).reshape(-1)
</code></pre>
<p>Finally we select the rows matching the generated list we just made.</p>
<pre><code>df.iloc[i]
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/vtjI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vtjI8.png" alt="enter image description here"></a></p>
|
python|pandas|dataframe
| 0
|
4,487
| 54,099,431
|
Locate rows by particular label, found only in last multi-index level
|
<p>After performing a group-by, my new df has a 3-level MultiIndex. I need to access all rows with the 'ZEBRA' label, which is contained in the 3rd index level. I'm trying to use <code>df.loc</code> but am unable to do so. I thought of iterating through the labels, but that would require a nested loop like the one below, which makes me feel I'm not thinking along the right lines; there must be a much easier way. </p>
<pre><code>> indexlevel1_value1->indexlevel2_value1>indexlevel3_'stabilizer'
> indexlevel1_value1->indexlevel2_value2>indexlevel3_'stabilizer'
> indexlevel1_value1->indexlevel2_value3>indexlevel3_'stabilizer'
> ...................
> indexlevel2_value1->indexlevel2_value1>indexlevel3_'stabilizer'
</code></pre>
<p>This question looks close - <a href="https://stackoverflow.com/questions/47886401/selecting-rows-in-a-multiindex-dataframe-by-index-without-losing-any-levels">Selecting rows in a MultiIndex dataframe by index without losing any levels</a> but focused on first level index. </p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo',
'bar', 'foo', 'bar','foo',
'bar','foo' ],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three',
'two', 'three','two', 'two',
'one', 'three'],
'C' : ['MR', 'ZEBRA', 'KID', 'ZEBRA',
'MOS', 'ALPHA', 'ZULU', 'ZEBRA',
'TREE','PLANT', 'JOOMLA','ZEBRA',
'MOS','ZULU'],
'D' : np.random.randn(14)})
grouped = df.groupby(['A', 'B','C'])
grouped.count()
| A | B | C | D |
|-----|-------|--------|---|
| bar | one | MOS | 1 |
| | | ZEBRA | 1 |
| | three | ZEBRA | 1 |
| | two | ALPHA | 1 |
| | | JOOMLA | 1 |
| | | TREE | 1 |
| foo | one | MR | 1 |
| | | ZULU | 1 |
| | three | PLANT | 1 |
| | | ZEBRA | 1 |
| | | ZULU | 1 |
| | two | KID | 1 |
| | | MOS | 1 |
| | | ZEBRA | 1 |
newdf= grouped.count()
newdf.loc[('bar','three','ZEBRA')]
#1
</code></pre>
<p>Desired: </p>
<pre><code>| A | B | C | D |
|-----|-------|-------|---|
| bar | one | ZEBRA | 1 |
| bar | three | ZEBRA | 1 |
| foo | three | ZEBRA | 1 |
| foo | two | ZEBRA | 1 |
</code></pre>
|
<p>You can do:</p>
<pre><code>newdf[newdf.index.get_level_values(2) == 'ZEBRA'].reset_index()
A B C D
0 bar one ZEBRA 1
1 bar three ZEBRA 1
2 foo three ZEBRA 1
3 foo two ZEBRA 1
</code></pre>
<p>Alternate way: <code>newdf.query("C == 'ZEBRA'").reset_index()</code></p>
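<p>Another option worth knowing (a sketch): <code>xs</code> with <code>drop_level=False</code> keeps all three index levels:</p>
<pre><code>newdf.xs('ZEBRA', level='C', drop_level=False).reset_index()
</code></pre>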
|
python|python-3.x|pandas
| 1
|
4,488
| 66,036,271
|
Splitting a tensorflow dataset into training, test, and validation sets from keras.preprocessing API
|
<p>I'm new to tensorflow/keras and I have a file structure with 3000 folders containing 200 images each to be loaded in as data. I know that <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory" rel="nofollow noreferrer">keras.preprocessing.image_dataset_from_directory</a> allows me to load the data and split it into training/validation set as below:</p>
<pre><code>val_data = tf.keras.preprocessing.image_dataset_from_directory('etlcdb/ETL9G_IMG/',
image_size = (128, 127),
validation_split = 0.3,
subset = "validation",
seed = 1,
color_mode = 'grayscale',
shuffle = True)
</code></pre>
<blockquote>
<p>Found 607200 files belonging to 3036 classes.
Using 182160 files for validation.</p>
</blockquote>
<p>But then I'm not sure how to further split my validation set into a test split while maintaining the proper classes. From what I can tell (through the GitHub <a href="https://github.com/tensorflow/tensorflow/blob/85c8b2a817f95a3e979ecd1ed95bff1dc1335cff/tensorflow/python/data/ops/dataset_ops.py#L3848" rel="nofollow noreferrer">source code</a>), the take method simply takes the first x elements of the dataset, and skip skips them. I am unsure whether this maintains stratification of the data, and I'm not quite sure how to return labels from the dataset to test it.</p>
<p>Any help would be appreciated.</p>
|
<p>I could not find supporting documentation, but I believe <code>image_dataset_from_directory</code> is taking the end portion of the dataset as the validation split. <code>shuffle</code> is now set to <code>True</code> by default, so the dataset is shuffled before training, to avoid using only some classes for the validation split.
The split done by <code>image_dataset_from_directory</code> only relates to the training process. If you need a (highly recommended) test split, you should split your data beforehand into training and testing. Then, <code>image_dataset_from_directory</code> will split your training data into training and validation.</p>
<p>I usually take a smaller percent (10%) for the in-training validation, and split the original dataset 80% training, 20% testing.
With these values, the final splits (from the initial dataset size) are:</p>
<ul>
<li>80% training:
<ul>
<li>72% training (used to adjust the weights in the network)</li>
<li>8% in-training validation (used only to check the metrics of the model after each epoch)</li>
</ul>
</li>
<li>20% testing (never seen by the training process at all)</li>
</ul>
<p>There is additional information how to split data in your directories in this question: <a href="https://stackoverflow.com/questions/42443936/keras-split-train-test-set-when-using-imagedatagenerator">Keras split train test set when using ImageDataGenerator</a></p>
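<p>A minimal sketch of the workflow described above, assuming you have already split your files into hypothetical <code>data/train</code> and <code>data/test</code> directories:</p>
<pre><code>import tensorflow as tf

# in-training split: 90% training, 10% validation, same seed for both calls
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'data/train', validation_split=0.1, subset='training', seed=1,
    image_size=(128, 127), color_mode='grayscale')
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'data/train', validation_split=0.1, subset='validation', seed=1,
    image_size=(128, 127), color_mode='grayscale')
# the test set never passes through the training process
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'data/test', image_size=(128, 127), color_mode='grayscale')
</code></pre>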
|
python|tensorflow|keras
| 3
|
4,489
| 66,193,817
|
Pandas dataframe groupby remove column
|
<p>I stumbled upon a problem with a dataframe.
I am using this snippet of code to generate the dataframe; after that, I group the dataframe by the <strong>'chr'
column</strong>.</p>
<pre><code>import pandas as pd
DF = pd.DataFrame({'chr':["chr3","chr3","chr7","chr6","chr1", "chr7"],'y':[10,20,30,40,50,90],'ds':
['2018-01-01', '2018-01-02', '2018-01-01', '2018-01-01', '2018-01-01', '2018-12-01']})
DF.head(n=10)
chr y ds
0 chr3 10 2018-01-01
1 chr3 20 2018-01-02
2 chr7 30 2018-01-01
3 chr6 40 2018-01-01
4 chr1 50 2018-01-01
5 chr7 90 2018-12-01
ans = [pd.DataFrame(y) for x, y in DF.groupby('chr', as_index=False)]
ans
[ chr y ds
4 chr1 50 2018-01-01,
chr y ds
0 chr3 10 2018-01-01
1 chr3 20 2018-01-02,
chr y ds
3 chr6 40 2018-01-01,
chr y ds
2 chr7 30 2018-01-01
5 chr7 90 2018-12-01]
</code></pre>
<p>Please note that once I use <strong>groupby</strong> I store the result in a list. As a result, I have a list of nested <strong>dataframes</strong> based on <strong>chr</strong>.
How can I delete the chr column in each sub-dataframe in my list? I simply need to drop <strong>chr</strong> in each dataframe in the list. <strong>Please note that the solution should scale to a bigger list size.</strong></p>
|
<p>You can do it while creating your original list like this if there are only two columns:</p>
<pre><code>ans = [pd.DataFrame(y, columns=DF.columns.difference(['chr'])) for x, y in DF.groupby('chr', as_index=False)]
</code></pre>
<p>Alternatively, drop <code>chr</code> from each subDf explicitly:</p>
<pre><code>ans = [pd.DataFrame(y).drop('chr', axis=1) for x, y in DF.groupby('chr', as_index=False)]
</code></pre>
<hr />
<p>If you can't drop while creating the original list (as shown above), you can update it like this:</p>
<pre><code># Create `ans` as you're currently doing:
ans = [pd.DataFrame(y) for x, y in DF.groupby('chr', as_index=False)]
#
# some processing on `ans`
#
# Now update `ans` by dropping "chr" from each subDf
ans = [df.drop('chr', axis=1) for df in ans]
</code></pre>
|
python|pandas|dataframe
| 1
|
4,490
| 52,656,884
|
ImportError: No module named pandas_datareader
|
<p>I am trying to use the pandas data reader library.
I initially tried <code>import pandas.io.data</code> but this threw up an import error, stating I should be using </p>
<blockquote>
<p>from pandas_datareader import data, wb</p>
</blockquote>
<p>instead. Upon trying this I was greeted with </p>
<blockquote>
<p>ImportError: No module named pandas_datareader</p>
</blockquote>
<p>I have had a look around and have tried...</p>
<ul>
<li><p>"pip install pandas_datareader"</p></li>
<li><p>"pip install python_datareader"</p></li>
<li><p>"pip install pandas-datareader"</p></li>
</ul>
<p>Any help would be greatly appreciated </p>
|
<p>Run <code>pip install pandas-datareader</code>; for more info, the documentation is <a href="https://pandas-datareader.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a>. </p>
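<p>If pip is installing into a different interpreter than the one you are running, a common fix is to invoke pip through that exact interpreter (a sketch):</p>
<pre><code>python -m pip install pandas-datareader
</code></pre>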
|
python|pandas|importerror
| 0
|
4,491
| 52,881,426
|
Python Pandas Create Column Based on a Function Requiring to Filter the DataFrame
|
<p>I have a larger script with multiple functions. In one of those functions I am creating a dataframe and then creating a column applying a separate function.</p>
<p>The function to create dataframe at a high level:</p>
<pre><code>def data(file):
    df = pd.DataFrame({'A': [1,2,3,4], 'B': [5,5,6,6]})
df['C'] = df['B'].apply(func)
</code></pre>
<p>The 'func' function essentially is supposed to filter the dataframe by column B and return the list of values in column 'A'</p>
<pre><code>def func(x):
df2 = df[df['B']==x]
names = df2['A']
return names
</code></pre>
<p>Unfortunately, I cannot use a global call to retrieve df into the func so I am confused how to perform this request. The ideal out put should be as such:</p>
<pre><code>A B C
1 5 [1,2]
2 5 [1,2]
3 6 [3,4]
4 6 [3,4]
</code></pre>
|
<p>Using <code>map</code> after <code>groupby.apply</code> (PS: I do not recommend storing lists in a column, as it makes later adjustments harder)</p>
<pre><code>df['C']=df.B.map(df.groupby('B').A.apply(list))
df
Out[872]:
A B C
0 1 5 [1, 2]
1 2 5 [1, 2]
2 3 6 [3, 4]
3 4 6 [3, 4]
</code></pre>
|
python|pandas
| 4
|
4,492
| 52,776,723
|
After converting image to binary cannot display it in notebook with matplotlib
|
<p>I am trying to display the image After coverting the image to binary in python notebook:</p>
<pre><code>resized_img = cv2.cvtColor(char_mask, cv2.COLOR_BGR2GRAY)
resized_img = cv2.threshold(resized_img, 100, 200, cv2.THRESH_BINARY)
#cv2.imwrite('licence_plate_mask3.png', char_mask)
plt.imshow(resized_img)
plt.show()
</code></pre>
<p>I cannot show image. I get this error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-32-0a3eb57cc497> in <module>()
8 resized_img = cv2.threshold(resized_img, 100, 200, cv2.THRESH_BINARY)
9 #cv2.imwrite('licence_plate_mask3.png', char_mask)
---> 10 plt.imshow(resized_img)
11
12 plt.show()
C:\Anaconda3\lib\site-packages\matplotlib\pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, hold, data, **kwargs)
3155 filternorm=filternorm, filterrad=filterrad,
3156 imlim=imlim, resample=resample, url=url, data=data,
-> 3157 **kwargs)
3158 finally:
3159 ax._hold = washold
C:\Anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
1895 warnings.warn(msg % (label_namer, func.__name__),
1896 RuntimeWarning, stacklevel=2)
-> 1897 return func(ax, *args, **kwargs)
1898 pre_doc = inner.__doc__
1899 if pre_doc is None:
C:\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
5122 resample=resample, **kwargs)
5123
-> 5124 im.set_data(X)
5125 im.set_alpha(alpha)
5126 if im.get_clip_path() is None:
C:\Anaconda3\lib\site-packages\matplotlib\image.py in set_data(self, A)
590 self._A = pil_to_array(A)
591 else:
--> 592 self._A = cbook.safe_masked_invalid(A, copy=True)
593
594 if (self._A.dtype != np.uint8 and
C:\Anaconda3\lib\site-packages\matplotlib\cbook.py in safe_masked_invalid(x, copy)
1504
1505 def safe_masked_invalid(x, copy=False):
-> 1506 x = np.array(x, subok=True, copy=copy)
1507 if not x.dtype.isnative:
1508 # Note that the argument to `byteswap` is 'inplace',
ValueError: setting an array element with a sequence.
</code></pre>
<p>Before using threshold I can see the image inside the notebook without problem. Any way to resolve this problem? Thanks.</p>
|
<p>The error lies in line <code>resized_img = cv2.threshold(resized_img, 100, 200, cv2.THRESH_BINARY)</code>.</p>
<p><code>cv2.threshold()</code> returns two values. The first is the threshold value (a float) and the second is the image. </p>
<p>So all along you have been passing the whole <code>(float, image)</code> tuple to <code>imshow</code>, hence the <em>ValueError</em>.</p>
<p>Rewrite the line to the following:</p>
<pre><code>ret, resized_img = cv2.threshold(resized_img, 100, 200, cv2.THRESH_BINARY)
</code></pre>
<p>You can have a look <a href="https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html" rel="nofollow noreferrer">at the documentation</a> as well.</p>
|
python|numpy|opencv|matplotlib|imshow
| 1
|
4,493
| 52,684,961
|
Pandas - Rename only first dictionary match instead of last match
|
<p>I am trying to use pandas to rename a column in CSV files. I want to use a dictionary since sometimes columns with the same information can be named differently (e.g. mobile_phone and telephone instead of phone).</p>
<p>I want to rename the first instance of phone. Here is an example to hopefully explain more.</p>
<p>Here is the original in this example:</p>
<pre><code>0 name mobile_phone telephone
1 Bob 12364234234 12364234234
2 Joe 23534235435 43564564563
3 Jill 34573474563 78098080807
</code></pre>
<p>Here is what I want it to do:</p>
<pre><code>0 name phone telephone
1 Bob 12364234234 12364234234
2 Joe 23534235435 43564564563
3 Jill 34573474563 78098080807
</code></pre>
<p>This is the code I tried:</p>
<pre><code>phone_dict = {
'phone_number': 'phone',
'mobile_phone': 'phone',
'telephone': 'phone',
'phones': 'phone',
}
if 'phone' not in df.columns:
df.rename(columns=dict(phone_dict), inplace=True)
if 'phone' not in df.columns:
raise ValueError("What are these peoples numbers!? (Need 'phone' column)")
</code></pre>
<p>I made a dictionary with some possible column names that I want renamed to 'phone'. However, when I run this code it renames the second column that matches a key in the dictionary instead of the first. I want it to stop after the first matching column it comes across in the CSV.</p>
<p>This is what is happening:</p>
<pre><code>0 name mobile_phone phone
1 Bob 12364234234 12364234234
2 Joe 23534235435 43564564563
3 Jill 34573474563 78098080807
</code></pre>
<p>If there is, for example, a third column that matches the dictionary they turn to 'phone' which is again not what I want. I am trying to get it to just change the first column it matches.</p>
<p>Here is an example of what happens when I add a third column.
It goes from:</p>
<pre><code>0 name mobile_phone telephone phone_1
1 Bob 12364234234 12364234234 36346346311
2 Joe 23534235435 43564564563 34634634623
3 Jill 34573474563 78098080807 34634654622
</code></pre>
<p>To this:</p>
<pre><code>0 name phone phone phone
1 Bob 12364234234 12364234234 36346346311
2 Joe 23534235435 43564564563 34634634623
3 Jill 34573474563 78098080807 34634654622
</code></pre>
<p>But I want it to be this:</p>
<pre><code>0 name phone telephone phone_1
1 Bob 12364234234 12364234234 36346346311
2 Joe 23534235435 43564564563 34634634623
3 Jill 34573474563 78098080807 34634654622
</code></pre>
<p>Any advice or tips to stop it renaming the second dictionary match (or all of them) instead of just the first?</p>
<p>Before I had a bunch of elif statements but I thought a dictionary would be cleaner and easier to read.</p>
|
<p>Here's one solution:</p>
<p><code>df</code>:</p>
<pre><code>Columns: [name, mobile_phone, telephone]
Index: []
</code></pre>
<p>Marking the columns whose name contains 'phone':</p>
<pre><code>mask = ['phone' in col for col in df.columns]
</code></pre>
<p>Getting the first such column (left to right), which is the one to rename to <code>phone</code>:</p>
<pre><code>phonecol = df.columns[mask][0]
</code></pre>
<p>Renaming the column:</p>
<pre><code>df.rename(columns = {phonecol : 'phone'})
</code></pre>
<p>Output:</p>
<pre><code>Columns: [name, phone, telephone]
Index: []
</code></pre>
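<p>If you want to keep the dictionary-driven approach from the question, a minimal sketch that stops after the first match:</p>
<pre><code>for col in df.columns:
    if col in phone_dict:  # phone_dict from the question
        df = df.rename(columns={col: 'phone'})
        break              # stop after the first matching column
</code></pre>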
|
python|pandas|dictionary|indexing|python-3.6
| 0
|
4,494
| 52,533,156
|
Weight Initialization Tensorflow tf.estimator
|
<p>Is there a way to adjust the weight initialization in the pre-built tf.estimator?
I would like to use the Xavier method (<code>tf.contrib.layers.xavier_initializer</code>) or He's method. Which method is used by default? I couldn't figure it out from the documentation. </p>
<p>I use the DNNRegressor.</p>
|
<p><code>DNNRegressor</code> uses <a href="https://www.tensorflow.org/api_docs/python/tf/glorot_uniform_initializer" rel="nofollow noreferrer">glorot_uniform_initializer</a> (aka Xavier uniform), it is hardcoded in the <a href="https://github.com/tensorflow/tensorflow/blob/4dcfddc5d12018a5a0fdca652b9221ed95e9eb23/tensorflow/python/estimator/canned/dnn.py#L102" rel="nofollow noreferrer">implementation</a>.</p>
<p>To use a different initializer with estimators API you have to use <a href="https://www.tensorflow.org/guide/custom_estimators" rel="nofollow noreferrer">custom estimator</a>.</p>
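<p>A minimal sketch of that custom route (TF 1.x API; the feature key <code>'x'</code> and the layer sizes are placeholders):</p>
<pre><code>import tensorflow as tf

def model_fn(features, labels, mode):
    # He initialization instead of DNNRegressor's hardcoded Glorot uniform
    he_init = tf.variance_scaling_initializer(scale=2.0, mode='fan_in')
    net = tf.layers.dense(features['x'], units=64, activation=tf.nn.relu,
                          kernel_initializer=he_init)
    output = tf.layers.dense(net, units=1, kernel_initializer=he_init)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=output)

    loss = tf.losses.mean_squared_error(labels, output)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

regressor = tf.estimator.Estimator(model_fn=model_fn)
</code></pre>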
|
tensorflow|initialization
| 2
|
4,495
| 58,538,135
|
Keras methods 'predict' and 'predict_generator' with different result
|
<p>I have trained a basic CNN model for image classification.
While training the model I used ImageDataGenerator from the Keras API.
After the model was trained, I used a test data generator with the flow_from_directory method for testing.
Everything went well.
Then I saved the model for future use.
Now I am using the same model with the predict method from the Keras API on a single image, but the prediction is very different every time I test with different images.
Could you please let me know a solution.</p>
<pre class="lang-py prettyprint-override"><code>training_augmentation = ImageDataGenerator(rescale=1 / 255.0)
validation_testing_augmentation = ImageDataGenerator(rescale=1 / 255.0)
# Initialize training generator
training_generator = training_augmentation.flow_from_directory(
JPG_TRAIN_IMAGE_DIR,
class_mode="categorical",
target_size=(32, 32),
color_mode="rgb",
shuffle=True,
batch_size=batch_size
)
# initialize the validation generator
validation_generator = validation_testing_augmentation.flow_from_directory(
JPG_VAL_IMAGE_DIR,
class_mode="categorical",
target_size=(32, 32),
color_mode="rgb",
shuffle=False,
batch_size=batch_size
)
# initialize the testing generator
testing_generator = validation_testing_augmentation.flow_from_directory(
JPG_TEST_IMAGE_DIR,
class_mode="categorical",
target_size=(32, 32),
color_mode="rgb",
shuffle=False,
batch_size=batch_size
)
history = model.fit_generator(
training_generator,
steps_per_epoch=total_img_count_dict['train'] // batch_size,
validation_data=validation_generator,
validation_steps=total_img_count_dict['val'] // batch_size,
epochs=epochs,
callbacks=callbacks)
testing_generator.reset()
prediction_stats = model.predict_generator(testing_generator, steps=(total_img_count_dict['test'] // batch_size) + 1)
### Trying to use predict method
img_file = '/content/drive/My Drive/Traffic_Sign_Recognition/to_display_output/Copy of 00003_00019.jpg'
img = cv2.imread(img_file)
img=cv2.resize(img, (32,32))
img = img/255.0
a=np.reshape(img, (1, 32, 32, 3))
model = load_model('/content/drive/My Drive/Traffic_Sign_Recognition/basic_cnn.h5')
prediction = model.predict(a)
</code></pre>
<p>When I am trying to use predict, every time wrong prediction is coming.</p>
<p>Any leads will be appreciated.</p>
|
<p><code>Keras generators</code> use <code>PIL</code> for image reading, which reads images from disk as <code>RGB</code>.</p>
<p>You are using <code>opencv</code> for reading, which reads images as <code>BGR</code>. You have to convert your image from <code>BGR</code> to <code>RGB</code>.</p>
<pre><code>img = cv2.imread(img_file)
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
...
</code></pre>
|
python-3.x|tensorflow|machine-learning|keras|conv-neural-network
| 0
|
4,496
| 58,194,296
|
How to convert currency values in DataFrame column?
|
<p>I've got a dataframe which has columns - </p>
<pre><code>Product Price in AUD Price in BTC Price in USD Date
A 1450.22 0.120 NaN 2019-08-15
B NaN NaN 550 2019-09-12
C NaN 0.18 1500 2019-09-02
D NaN NaN 1244 2019-09-10
</code></pre>
<p>I need to convert all alternate prices (Price in Bitcoin and Price in US Dollar) to <code>Price in AUD</code> where the value of <code>Price in AUD</code> is null. If both alternate prices are given (eg. C), I want to use <code>Price in BTC</code> to convert to AUD, else whichever is available. </p>
<p>How can I do this? Is there an API or Python library that I can use for this, since the prices of Bitcoin and USD keep fluctuating everyday? I would like to use the <code>Date</code> column to get the exact conversion value in AUD on that date. Has anyone done something similar and can help with this? </p>
|
<p>You can use the <a href="https://pypi.org/project/forex-python/" rel="nofollow noreferrer">forex-python</a> package for that:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import datetime
from forex_python.converter import CurrencyRates
from forex_python.bitcoin import BtcConverter
data = [('A', 1450.22 , 0.120 , None , '2019-08-15')
,('B', None , None , 550 , '2019-09-12')
,('C', None , 0.18 , 1500 , '2019-09-02')
,('D', None , None , 1244 , '2019-09-10')]
colNames = ['Product', 'Price in AUD', 'Price in BTC', 'Price in USD', 'Date']
df = pd.DataFrame(data, columns=colNames)
c = CurrencyRates()
b = BtcConverter()
def convertBtcToAUD(row):
if pd.isna(row['Price in AUD']):
date = datetime.datetime.strptime(row['Date'], '%Y-%m-%d')
aud = b.convert_btc_to_cur_on(row['Price in BTC'], 'AUD', date )
else:
aud = row['Price in AUD']
return aud
def convertUSDToAUD(row):
if pd.isna(row['Price in AUD']):
date = datetime.datetime.strptime(row['Date'], '%Y-%m-%d')
aud = c.convert('USD', 'AUD', row['Price in USD'], date )
else:
aud = row['Price in AUD']
return aud
df['Price in AUD'] = df.apply(convertBtcToAUD, axis=1)
df['Price in AUD'] = df.apply(convertUSDToAUD, axis=1)
</code></pre>
<p>Output:</p>
<pre><code> Product Price in AUD Price in BTC Price in USD Date
0 A 1450.220000 0.12 NaN 2019-08-15
1 B 799.840372 NaN 550.0 2019-09-12
2 C 2783.197980 0.18 1500.0 2019-09-02
3 D 1814.504710 NaN 1244.0 2019-09-10
</code></pre>
<p>P.S.: Please keep in mind that Stack Overflow is not a code-writing service. I just provided an answer because I was interested in the problem.</p>
|
python-3.x|pandas
| 1
|
4,497
| 58,383,737
|
R keras says array doesnt have the right dimensions for a conv_2d, but it drops a value from the array that was correct
|
<p>When building a convolutional neural network, Keras accepts an input_shape of c(28, 28, 1), but when I run training it tells me the input it got was (60000, 28, 28). I understand Keras infers the batch size itself, but why is it dropping the 1, causing it to break?</p>
<p>Problem line (I think):</p>
<pre><code>model %>%
layer_conv_2d(filters = 32, kernel_size = c(3, 3), padding = 'same', input_shape = c(28, 28, 1)) %>%
</code></pre>
<p>Error:</p>
<pre><code>Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: Error when checking input: expected conv2d_5_input to have 4 dimensions, but got array with shape (60000, 28, 28)
</code></pre>
<p>Code:</p>
<pre><code>library(keras)
install_keras(tensorflow = "gpu")
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y
# # reshape
#x_train <- array_reshape(x_train, c(nrow(x_train), 784))
#x_test <- array_reshape(x_test, c(nrow(x_test), 784))
# # rescale
# x_train <- x_train / 255
# x_test <- x_test / 255
y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)
model <- keras_model_sequential()
model %>%
layer_conv_2d(filters = 32, kernel_size = c(3, 3), padding = 'same', input_shape = c(28, 28, 1)) %>%
layer_activation('relu') %>%
# layer_max_pooling_2d(pool_size=c(2, 2), strides=c(2, 2)) %>%
layer_conv_2d(filters = 16, kernel_size = c(2, 2), dilation_rate = 1, activation = 'softplus', padding = 'same') %>%
layer_max_pooling_2d(pool_size=c(2, 2)) %>%
layer_flatten() %>%
layer_dense(1000, activation = 'relu') %>%
layer_dropout(0.5) %>%
layer_dense(10, activation = 'softmax')
# Stock test model functions
# layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>%
# layer_dropout(rate = 0.4) %>%
# layer_dense(units = 128, activation = 'relu') %>%
# layer_dropout(rate = 0.3) %>%
# layer_dense(units = 10, activation = 'softmax') %>%
summary(model)
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_rmsprop(lr = 0.0001, decay = 1e-6),
metrics = c('accuracy')
)
history <- model %>% fit(
x_train, y_train,
epochs = 10
)
plot(history)
model %>% evaluate(x_test, y_test)
model %>% predict_classes(x_test)
</code></pre>
|
<p>From the documentation of <code>layer_conv_2d</code> :</p>
<p><code>input_shape</code> :- Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.</p>
<p>So the input shape <code>(28, 28, 1)</code> means 28 rows by 28 columns by 1 channel. There is only one channel in this dataset since the images are black and white. RGB images would have 3 channels.</p>
<p>You are getting this error because the input shape is <code>(60000, 28, 28)</code>, i.e. the 4th dimension (channel) is missing. You can fix it as follows:</p>
<pre><code>x_train <- array_reshape(x_train, dim = c(n_train, 28, 28, 1))
x_test <- array_reshape(x_test, dim = c(n_test, 28, 28, 1))
</code></pre>
<p>Full code :-</p>
<pre><code>library(keras)
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y
n_train <- dim(x_train)[1]
n_test <- dim(x_test)[1]
x_train <- array_reshape(x_train, dim = c(n_train, 28, 28, 1))
x_test <- array_reshape(x_test, dim = c(n_test, 28, 28, 1))
y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)
model <- keras_model_sequential()
model %>%
layer_conv_2d(filters = 32, kernel_size = c(3, 3), padding = 'same', input_shape = c(28, 28, 1)) %>%
layer_activation('relu') %>%
# layer_max_pooling_2d(pool_size=c(2, 2), strides=c(2, 2)) %>%
layer_conv_2d(filters = 16, kernel_size = c(2, 2),
dilation_rate = c(1,1), activation = 'softplus', padding = 'same') %>%
layer_max_pooling_2d(pool_size=c(2, 2)) %>%
layer_flatten() %>%
layer_dense(1000, activation = 'relu') %>%
layer_dropout(0.5) %>%
layer_dense(10, activation = 'softmax')
summary(model)
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_rmsprop(lr = 0.0001, decay = 1e-6),
metrics = c('accuracy')
)
history <- model %>% fit(
x_train, y_train,
epochs = 1
)
plot(history)
model %>% evaluate(x_test, y_test)
model %>% predict_classes(x_test)
</code></pre>
|
r|tensorflow|keras
| 0
|
4,498
| 58,195,826
|
How to delete row with timedelta value lower than 0?
|
<p>How can I delete rows which have values of timedelta < 0:</p>
<pre><code>Index Date/Time id Timedelta
8 2019-09-09 07:31:37.979 2555 0 days 00:40:00.033000
9 2019-09-09 07:32:38.006 2555 0 days 00:01:00.027000
10 2019-09-09 07:32:37.938 2555 -1 days +23:59:59.932000
11 2019-09-09 11:22:38.154 2555 0 days 03:50:00.216000
12 2019-09-09 13:04:38.138 2555 0 days 01:41:59.984000
</code></pre>
<p>Desired result is </p>
<pre><code>Index Date/Time id Timedelta
8 2019-09-09 07:31:37.979 2555 0 days 00:40:00.033000
9 2019-09-09 07:32:38.006 2555 0 days 00:01:00.027000
11 2019-09-09 11:22:38.154 2555 0 days 03:50:00.216000
12 2019-09-09 13:04:38.138 2555 0 days 01:41:59.984000
</code></pre>
<p>Thank you</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <code>'inverse'</code> logic - keep all rows with <code>Timedelta</code> greater than or equal to <code>0</code>:</p>
<pre><code>df1 = df[df['Timedelta'] >= pd.Timedelta(0)]
</code></pre>
<p>Or compare days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a>:</p>
<pre><code>df1 = df[df['Timedelta'].dt.days >= 0]
</code></pre>
|
python|pandas|timedelta
| 1
|
4,499
| 58,514,251
|
Making a series from another dataframe columns every nth value
|
<p>I have a data frame that I am trying to clean. For one of my columns I want to add every other value (starting from index 0) to a separate series, i.e. every other value down the column into its own series. I tried iterating but with no success. How can this be done? <a href="https://i.stack.imgur.com/LXDZ1.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<pre><code>import pandas as pd
test_df = pd.DataFrame({'val1': ['a', 'b', 'c', 'd', 'e'], 'val2': ['v', 'w', 'x', 'y', 'z']})
# take every second row starting at position 0 (use iloc[1::2] to start at position 1)
alternating_srs = test_df['val1'].iloc[::2]
</code></pre>
|
python|pandas
| 0
|