Unnamed: 0 (int64: 0-378k) | id (int64: 49.9k-73.8M) | title (string: 15-150 chars) | question (string: 37-64.2k chars) | answer (string: 37-44.1k chars) | tags (string: 5-106 chars) | score (int64: -10-5.87k) |
|---|---|---|---|---|---|---|
376,800
| 63,249,580
|
Delete Consecutive Values of Specific Number - Python Dataframe
|
<p>How can you remove consecutive duplicates of a specific value?</p>
<p>I am aware of the <code>groupby()</code> function but that deletes consecutive duplicates of any value.</p>
<p>See the example code below. The specific value is 2; it is only the consecutive duplicates of 2 that I want to remove.</p>
<pre><code>import pandas as pd
from itertools import groupby

example = [1, 1, 5, 2, 2, 2, 7, 9, 9, 2, 2]

# This does not work for just a specific number:
# it removes the consecutive duplicates of every value
res = [i[0] for i in groupby(example)]
Col1 = pd.DataFrame(res)
</code></pre>
<p>The resulting DataFrame would be <code>[1,1,5,2,7,9,9,2]</code></p>
|
<p>Doing this with <code>pandas</code> seems overkill unless you are using <code>pandas</code> for other purposes, e.g.:</p>
<pre><code>In []:
import itertools as it
example = [1,1,5,2,2,2,7,9,9,2,2]
[x for k, g in it.groupby(example) for x in ([k] if k == 2 else g)]
Out[]:
[1, 1, 5, 2, 7, 9, 9, 2]
</code></pre>
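<p>If you are already working in <code>pandas</code>, a boolean mask built with <code>shift</code> gives the same result (a minimal sketch; it assumes the list is wrapped in a <code>Series</code>):</p>
<pre><code>import pandas as pd

example = [1, 1, 5, 2, 2, 2, 7, 9, 9, 2, 2]
s = pd.Series(example)
# drop a row only if it is a 2 that directly follows another 2
keep = ~((s == 2) & (s.shift() == 2))
print(s[keep].tolist())   # [1, 1, 5, 2, 7, 9, 9, 2]
</code></pre>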
|
python|python-3.x|pandas|dataframe|itertools
| 1
|
376,801
| 63,046,321
|
Concatenating two numpy arrays side by side
|
<p>I need to concatenate two numpy arrays side by side</p>
<pre><code>np1=np.array([1,2,3])
np2=np.array([4,5,6])
</code></pre>
<p>I need <code>np3</code> as <code>[1,2,3,4,5,6]</code> with the same shape, how to achieve this?</p>
|
<p>In <code>concatenate</code> you can pass <code>axis=None</code>, which flattens the inputs before joining them:</p>
<pre><code>In [9]: np1=np.array([1,2,3])
...: np2=np.array([4,5,6])
In [10]: np.concatenate((np1,np2), axis=None)
Out[10]: array([1, 2, 3, 4, 5, 6])
</code></pre>
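<p>For 1-D inputs like these, <code>np.hstack</code> should give the same flat result (shown only as a sketch of an alternative):</p>
<pre><code>In [11]: np.hstack((np1, np2))
Out[11]: array([1, 2, 3, 4, 5, 6])
</code></pre>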
|
python|arrays|numpy
| 1
|
376,802
| 62,991,016
|
tensorflow keras model for solving simple equation
|
<p>I'm trying to understand how to create a simple tensorflow 2.2 keras model that can predict a simple function value:</p>
<pre><code>f(a, b, c, d) = a < b : max(a/2, c/3) : max (b/2, d/3)
</code></pre>
<p>I know this exact question can be reduced to a categorical classification, but my intention is to find a good way to build a model that can estimate the value, and later build on that with more and more complex conditions.
To start with, I am stuck on understanding why such a simple function is so hard for a model to learn.</p>
<p>For generating the data and training the model I have:</p>
<pre class="lang-py prettyprint-override"><code>def generate_input(multiplier):
return np.random.rand(1024 * multiplier, 4) * 1000
def generate_output(input):
def compute_func(row):
return max(row[0]/2, row[2]/3) if row[0] < row[1] else max(row[1]/2, row[3]/3)
return np.apply_along_axis(compute_func, 1, input)
for epochs in range(0, 512):
# print('Generating data...')
train_input = generate_input(1000)
train_output = generate_output(train_input)
# print('Training...')
fit_history = model.fit(
train_input, train_output,
epochs=1,
batch_size=1024
)
</code></pre>
<p>I have tried different models, less and more complex, but I still did not get good convergence.
For example a simple linear one:</p>
<pre class="lang-py prettyprint-override"><code>input = Input(shape=(4,))
layer = Dense(8, activation=tanh)(input)
layer = Dense(16, activation=tanh)(layer)
layer = Dense(32, activation=tanh)(layer)
layer = Dense(64, activation=tanh)(layer)
layer = Dense(128, activation=tanh)(layer)
layer = Dense(32, activation=tanh)(layer)
layer = Dense(8, activation=tanh)(layer)
output = Dense(1)(layer)
model = Model(inputs=input, outputs=output)
model.compile(optimizer=Adam(), loss=mean_squared_error)
</code></pre>
<p>Can you point to the direction one should follow in order to solve this type of conditional function?</p>
<p>Or am I missing some pre-processing?</p>
|
<ol>
<li><p>In my honest opinion, you have a pretty deep model and therefore not enough data to train it. I do not think you need that deep an architecture.</p>
</li>
<li><p>Your problem definition is not what I would have done. You actually do not want to generate the max value at the output, you want the max value to get selected, right? If that is the case, I would go with a multiclass classification instead of a regression problem in my design. That is to say, I would use <code>output = Dense(4, activation='softmax')(layer)</code> as the last layer and categorical cross-entropy as the loss. Of course, in the output generation, you need to return an array of three zeros and one 1, something like the <code>compute_func</code> below; a sketch of the matching model head follows it.</p>
</li>
</ol>
<pre><code>def compute_func(row):
    # one-hot label: which of a/2, b/2, c/3, d/3 the original function selects
    ret_value = [0, 0, 0, 0]
    if row[0] < row[1]:
        if row[0] / 2 < row[2] / 3:
            ret_value[2] = 1
        else:
            ret_value[0] = 1
    else:
        if row[1] / 2 < row[3] / 3:
            ret_value[3] = 1
        else:
            ret_value[1] = 1
    return ret_value
</code></pre>
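<p>A minimal sketch of the classification head described in point 2 (it assumes the same <code>Input</code>/<code>Dense</code>/<code>Model</code>/<code>Adam</code> imports as the question; the hidden layer sizes are only illustrative):</p>
<pre><code>input = Input(shape=(4,))
layer = Dense(32, activation='tanh')(input)
layer = Dense(32, activation='tanh')(layer)
# one probability per candidate value a/2, b/2, c/3, d/3
output = Dense(4, activation='softmax')(layer)

model = Model(inputs=input, outputs=output)
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>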
|
tensorflow|machine-learning|keras|deep-learning|neural-network
| 1
|
376,803
| 63,234,174
|
Display full Pandas dataframe in Jupyter without index
|
<p>I have a pandas dataframe that I would like to pretty-print in full (it's ~90 rows) in a Jupyter notebook. I'd also like to display it without the index column, if possible. How can I do that?</p>
|
<p>In pandas you can show every row and column with</p>
<pre><code>pd.set_option("display.max_rows", None, "display.max_columns", None)
</code></pre>
<p>To also print it without the index column, additionally use</p>
<pre><code>df.to_string(index=False)
</code></pre>
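<p>In a Jupyter notebook you can also render the full frame as an HTML table without the index (a small sketch; <code>df</code> stands for your dataframe):</p>
<pre><code>from IPython.display import display, HTML

# renders every row of df as an HTML table, hiding the index column
display(HTML(df.to_html(index=False)))
</code></pre>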
|
python-3.x|pandas|jupyter-notebook
| 3
|
376,804
| 63,232,970
|
How to loop through pandas data frame and rename all of the columns?
|
<p>I have a pandas data frame that has several columns that I would like to rename.</p>
<pre><code>+------+--------------------------+--------------------------+--------------------------+
| FIPS | ('Active', '03/22/2020') | ('Active', '03/23/2020') | ('Active', '03/25/2020') |
+------+--------------------------+--------------------------+--------------------------+
| 1001 | 1 | 4 | 8 |
| 1003 | 4 | 6 | 9 |
| 1005 | 6 | 8 | 9 |
+------+--------------------------+--------------------------+--------------------------+
</code></pre>
<p>I want to rename all the columns after the first column. Changing ('Active', '03/22/2020') to Active_20200322 and so on. This is what I want my final output to be:</p>
<pre><code>+------+--------------------------+--------------------------+--------------------------+
| FIPS | Active_20200322 | Active_20200323 | Active_20200325
+------+--------------------------+--------------------------+--------------------------+
| 1001 | 1 | 4 | 8 |
| 1003 | 4 | 6 | 9 |
| 1005 | 6 | 8 | 9 |
+------+--------------------------+--------------------------+--------------------------+
</code></pre>
<p>Is there a way I can do this using a loop?</p>
|
<p>One solution would be to write a function to fix the column names. You can then create a dict comprehension and pass this to <code>DataFrame.rename</code> to fix them. For example, something like:</p>
<pre><code>import pandas as pd
import re
from datetime import datetime
def fix_column_name(val):
    if 'Active' in val:
        val = re.sub(r'[\(\)\']', '', val)
        s1, s2 = re.split(r',\s*', val)
        s2 = datetime.strptime(s2, '%m/%d/%Y')
        return f'{s1}_{s2.strftime("%Y%m%d")}'
    return val

# Setup
df = pd.DataFrame({'FIPS': [1001, 1003, 1005],
                   "('Active', '03/22/2020')": [1, 4, 6],
                   "('Active', '03/23/2020')": [4, 6, 8],
                   "('Active', '03/25/2020')": [8, 9, 9]})
print(df)
</code></pre>
<p>[out]</p>
<pre><code> FIPS ('Active', '03/22/2020') ('Active', '03/23/2020') ('Active', '03/25/2020')
0 1001 1 4 8
1 1003 4 6 9
2 1005 6 8 9
</code></pre>
<hr />
<pre><code>d = {c:fix_column_name(c) for c in df.columns}
df = df.rename(d, axis=1)
print(df)
</code></pre>
<p>[out]</p>
<pre><code> FIPS Active_20200322 Active_20200323 Active_20200325
0 1001 1 4 8
1 1003 4 6 9
2 1005 6 8 9
</code></pre>
|
python|pandas
| 1
|
376,805
| 62,978,957
|
Sliding window for long text in BERT for Question Answering
|
<p>I've read a post which explains how the sliding window works, but I cannot find any information on how it is actually implemented.</p>
<p>From what I understand, if the input is too long, a sliding window can be used to process the text.</p>
<p>Please correct me if I am wrong.
Say I have a text <em><strong>"In June 2017 Kaggle announced that it passed 1 million registered users"</strong></em>.</p>
<p>Given some <code>stride</code> and <code>max_len</code>, the input can be split into chunks with overlapping words (not considering padding).</p>
<pre><code>In June 2017 Kaggle announced that # chunk 1
announced that it passed 1 million # chunk 2
1 million registered users # chunk 3
</code></pre>
<p>If my questions were <em><strong>"when did Kaggle make the announcement"</strong></em> and <em><strong>"how many registered users"</strong></em> I can use <code>chunk 1</code> and <code>chunk 3</code> and <strong>not use</strong> <code>chunk 2</code> <strong>at all</strong> in the model. I'm not quite sure if I should still use <code>chunk 2</code> to train the model.</p>
<p>So the input will be:
<code>[CLS]when did Kaggle make the announcement[SEP]In June 2017 Kaggle announced that[SEP]</code>
and
<code>[CLS]how many registered users[SEP]1 million registered users[SEP]</code></p>
<hr>
<p>Then if I have a question with no answer, do I feed it into the model with all the chunks and indicate the starting and ending index as <strong>-1</strong>? For example <em><strong>"can pigs fly?"</strong></em></p>
<p><code>[CLS]can pigs fly[SEP]In June 2017 Kaggle announced that[SEP]</code></p>
<p><code>[CLS]can pigs fly[SEP]announced that it passed 1 million[SEP]</code></p>
<p><code>[CLS]can pigs fly[SEP]1 million registered users[SEP]</code></p>
<hr>
<p>As suggested in the comments, I tried to run <code>squad_convert_example_to_features</code> (<a href="https://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/data/processors/squad.py#L134" rel="noreferrer">source code</a>) to investigate the problem above, but it doesn't seem to work, nor is there any documentation. It seems like <code>run_squad.py</code> from huggingface uses <code>squad_convert_examples_to_features</code> (with an <code>s</code> in <code>examples</code>).</p>
<pre class="lang-py prettyprint-override"><code>from transformers.data.processors.squad import SquadResult, SquadV1Processor, SquadV2Processor, squad_convert_example_to_features
from transformers import AutoTokenizer, AutoConfig, squad_convert_examples_to_features
FILE_DIR = "."
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = SquadV2Processor()
examples = processor.get_train_examples(FILE_DIR)
features = squad_convert_example_to_features(
example=examples[0],
max_seq_length=384,
doc_stride=128,
max_query_length=64,
is_training=True,
)
</code></pre>
<p>I get the error.</p>
<pre><code>100%|ββββββββββ| 1/1 [00:00<00:00, 159.95it/s]
Traceback (most recent call last):
File "<input>", line 25, in <module>
sub_tokens = tokenizer.tokenize(token)
NameError: name 'tokenizer' is not defined
</code></pre>
<p>The error indicates that there is no <code>tokenizer</code>, yet the function does not allow us to pass one in. It does work if I add a tokenizer while inside the function in debug mode. So how exactly do I use the <code>squad_convert_example_to_features</code> function?</p>
|
<p>I think there is a problem with the examples you pick. Both <a href="https://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/data/processors/squad.py#L273" rel="nofollow noreferrer">squad_convert_examples_to_features</a> and <a href="https://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/data/processors/squad.py#L86" rel="nofollow noreferrer">squad_convert_example_to_features</a> have a sliding window approach implemented because <code>squad_convert_examples_to_features</code> is just a parallelization wrapper for <code>squad_convert_example_to_features</code>. But let's look at the single example function. First of all you need to call <a href="https://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/data/processors/squad.py#L268" rel="nofollow noreferrer">squad_convert_example_to_features_init</a> to make the tokenizer global (this is done automatically for you in <code>squad_convert_examples_to_features</code>):</p>
<pre class="lang-py prettyprint-override"><code>from transformers.data.processors.squad import SquadResult, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features, squad_convert_example_to_features_init
from transformers import AutoTokenizer, AutoConfig, squad_convert_examples_to_features
FILE_DIR = "."
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
squad_convert_example_to_features_init(tokenizer)
processor = SquadV2Processor()
examples = processor.get_train_examples(FILE_DIR)
features = squad_convert_example_to_features(
example=examples[0],
max_seq_length=384,
doc_stride=128,
max_query_length=64,
is_training=True,
)
print(len(features))
</code></pre>
<p>Output:</p>
<pre><code>1
</code></pre>
<p>You might say that this function is not using a sliding window approach, but that is wrong because your example doesn't need to be split:</p>
<pre class="lang-py prettyprint-override"><code>print(len(examples[0].question_text.split()) + len(examples[0].doc_tokens))
</code></pre>
<p>Output:</p>
<pre><code>115
</code></pre>
<p>which is less than the max_seq_length you have set to 384. Now let's try a different one:</p>
<pre><code>print(len(examples[129603].question_text.split()) + len(examples[129603].doc_tokens))
features = squad_convert_example_to_features(
example=examples[129603],
max_seq_length=384,
doc_stride=128,
max_query_length=64,
is_training=True,
)
print(len(features))
</code></pre>
<p>Output:</p>
<pre><code>454
3
</code></pre>
<p>Which you can now compare with the original sample:</p>
<pre class="lang-py prettyprint-override"><code>print('[CLS]' + examples[129603].question_text + '[SEP]' + ' '.join(examples[129603].doc_tokens) + '[SEP]')
for idx, f in enumerate(features):
    print('Split {}'.format(idx))
    print(' '.join(f.tokens))
</code></pre>
<p>Output:</p>
<pre><code>[CLS]How often is hunting occurring in Delaware each year?[SEP]There is a very active tradition of hunting of small to medium-sized wild game in Trinidad and Tobago. Hunting is carried out with firearms, and aided by the use of hounds, with the illegal use of trap guns, trap cages and snare nets. With approximately 12,000 sport hunters applying for hunting licences in recent years (in a very small country of about the size of the state of Delaware at about 5128 square kilometers and 1.3 million inhabitants), there is some concern that the practice might not be sustainable. In addition there are at present no bag limits and the open season is comparatively very long (5 months - October to February inclusive). As such hunting pressure from legal hunters is very high. Added to that, there is a thriving and very lucrative black market for poached wild game (sold and enthusiastically purchased as expensive luxury delicacies) and the numbers of commercial poachers in operation is unknown but presumed to be fairly high. As a result, the populations of the five major mammalian game species (red-rumped agouti, lowland paca, nine-banded armadillo, collared peccary, and red brocket deer) are thought to be quite low (although scientifically conducted population studies are only just recently being conducted as of 2013). It appears that the red brocket deer population has been extirpated on Tobago as a result of over-hunting. Various herons, ducks, doves, the green iguana, the gold tegu, the spectacled caiman and the common opossum are also commonly hunted and poached. There is also some poaching of 'fully protected species', including red howler monkeys and capuchin monkeys, southern tamanduas, Brazilian porcupines, yellow-footed tortoises, Trinidad piping guans and even one of the national birds, the scarlet ibis. Legal hunters pay very small fees to obtain hunting licences and undergo no official basic conservation biology or hunting-ethics training. There is presumed to be relatively very little subsistence hunting in the country (with most hunting for either sport or commercial profit). The local wildlife management authority is under-staffed and under-funded, and as such very little in the way of enforcement is done to uphold existing wildlife management laws, with hunting occurring both in and out of season, and even in wildlife sanctuaries. There is some indication that the government is beginning to take the issue of wildlife management more seriously, with well drafted legislation being brought before Parliament in 2015. It remains to be seen if the drafted legislation will be fully adopted and financially supported by the current and future governments, and if the general populace will move towards a greater awareness of the importance of wildlife conservation and change the culture of wanton consumption to one of sustainable management.[SEP]
Split 0
[CLS] how often is hunting occurring in delaware each year ? [SEP] there is a very active tradition of hunting of small to medium - sized wild game in trinidad and tobago . hunting is carried out with firearms , and aided by the use of hounds , with the illegal use of trap guns , trap cages and s ##nare nets . with approximately 12 , 000 sport hunters applying for hunting licence ##s in recent years ( in a very small country of about the size of the state of delaware at about 512 ##8 square kilometers and 1 . 3 million inhabitants ) , there is some concern that the practice might not be sustainable . in addition there are at present no bag limits and the open season is comparatively very long ( 5 months - october to february inclusive ) . as such hunting pressure from legal hunters is very high . added to that , there is a thriving and very lucrative black market for po ##ache ##d wild game ( sold and enthusiastically purchased as expensive luxury del ##ica ##cies ) and the numbers of commercial po ##ache ##rs in operation is unknown but presumed to be fairly high . as a result , the populations of the five major mammalian game species ( red - rum ##ped ago ##uti , lowland pac ##a , nine - banded arm ##adi ##llo , collar ##ed pe ##cca ##ry , and red brock ##et deer ) are thought to be quite low ( although scientific ##ally conducted population studies are only just recently being conducted as of 2013 ) . it appears that the red brock ##et deer population has been ex ##ti ##rp ##ated on tobago as a result of over - hunting . various heron ##s , ducks , dove ##s , the green i ##gua ##na , the gold te ##gu , the spectacle ##d cai ##man and the common op ##oss ##um are also commonly hunted and po ##ache ##d . there is also some po ##achi ##ng of ' fully protected species ' , including red howl ##er monkeys and cap ##uchi ##n monkeys , southern tam ##and ##ua ##s , brazilian por ##cup ##ines , yellow - footed tor ##to ##ises , [SEP]
Split 1
[CLS] how often is hunting occurring in delaware each year ? [SEP] october to february inclusive ) . as such hunting pressure from legal hunters is very high . added to that , there is a thriving and very lucrative black market for po ##ache ##d wild game ( sold and enthusiastically purchased as expensive luxury del ##ica ##cies ) and the numbers of commercial po ##ache ##rs in operation is unknown but presumed to be fairly high . as a result , the populations of the five major mammalian game species ( red - rum ##ped ago ##uti , lowland pac ##a , nine - banded arm ##adi ##llo , collar ##ed pe ##cca ##ry , and red brock ##et deer ) are thought to be quite low ( although scientific ##ally conducted population studies are only just recently being conducted as of 2013 ) . it appears that the red brock ##et deer population has been ex ##ti ##rp ##ated on tobago as a result of over - hunting . various heron ##s , ducks , dove ##s , the green i ##gua ##na , the gold te ##gu , the spectacle ##d cai ##man and the common op ##oss ##um are also commonly hunted and po ##ache ##d . there is also some po ##achi ##ng of ' fully protected species ' , including red howl ##er monkeys and cap ##uchi ##n monkeys , southern tam ##and ##ua ##s , brazilian por ##cup ##ines , yellow - footed tor ##to ##ises , trinidad pip ##ing gu ##ans and even one of the national birds , the scarlet ib ##is . legal hunters pay very small fees to obtain hunting licence ##s and undergo no official basic conservation biology or hunting - ethics training . there is presumed to be relatively very little subsistence hunting in the country ( with most hunting for either sport or commercial profit ) . the local wildlife management authority is under - staffed and under - funded , and as such very little in the way of enforcement is done to uphold existing wildlife management laws , with hunting occurring both in and out of season , and even in wildlife san ##ct ##uaries . there is some indication that the government is beginning to [SEP]
Split 2
[CLS] how often is hunting occurring in delaware each year ? [SEP] being conducted as of 2013 ) . it appears that the red brock ##et deer population has been ex ##ti ##rp ##ated on tobago as a result of over - hunting . various heron ##s , ducks , dove ##s , the green i ##gua ##na , the gold te ##gu , the spectacle ##d cai ##man and the common op ##oss ##um are also commonly hunted and po ##ache ##d . there is also some po ##achi ##ng of ' fully protected species ' , including red howl ##er monkeys and cap ##uchi ##n monkeys , southern tam ##and ##ua ##s , brazilian por ##cup ##ines , yellow - footed tor ##to ##ises , trinidad pip ##ing gu ##ans and even one of the national birds , the scarlet ib ##is . legal hunters pay very small fees to obtain hunting licence ##s and undergo no official basic conservation biology or hunting - ethics training . there is presumed to be relatively very little subsistence hunting in the country ( with most hunting for either sport or commercial profit ) . the local wildlife management authority is under - staffed and under - funded , and as such very little in the way of enforcement is done to uphold existing wildlife management laws , with hunting occurring both in and out of season , and even in wildlife san ##ct ##uaries . there is some indication that the government is beginning to take the issue of wildlife management more seriously , with well drafted legislation being brought before parliament in 2015 . it remains to be seen if the drafted legislation will be fully adopted and financially supported by the current and future governments , and if the general populace will move towards a greater awareness of the importance of wildlife conservation and change the culture of want ##on consumption to one of sustainable management . [SEP]
</code></pre>
<hr />
<blockquote>
<p>If my questions were "when did Kaggle make the announcement" and "how
many registered users" I can use chunk 1 and chunk 3 and not use chunk
2 at all in the model. Not quiet sure if I should still use chunk 2 to
train the model</p>
</blockquote>
<p>Yes, you should also use chunk 2 to train your model, because at prediction time you want your model to predict 0:0 as the answer span for chunk 2 (i.e. so that you can easily select the chunk which actually contains the answer).</p>
|
nlp|text-classification|huggingface-transformers|nlp-question-answering|bert-language-model
| 3
|
376,806
| 63,127,668
|
Pandas "to_datetime" not accepting series
|
<p>I am new to pandas and am trying to convert a column of strings with dates in the format '%d %B' (01 January, 02 January ...) to datetime objects; the type of the column is <code><class 'pandas.core.series.Series'></code>.
If I pass this series to the to_datetime method, like</p>
<pre><code>print(pd.to_datetime(data_file['Date'], format='%d %B', errors="coerce"))
</code></pre>
<p>it returns <code>NaT</code> for all the entries, whereas it should return datetime objects.</p>
<p>I checked the documentation and it says that it accepts a Series object.</p>
<p>Any way to fix this?</p>
<p>Edit 1:
here is the head of the data I am using:</p>
<pre><code> Date Daily Confirmed
0 30 January 1
1 31 January 0
2 01 February 0
3 02 February 1
4 03 February 1
</code></pre>
<p>Edit 2: here is the info of the DataFrame:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 179 entries, 0 to 178
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 179 non-null object
1 Daily Confirmed 179 non-null int64
dtypes: int64(1), object(1)
memory usage: 2.2+ KB
</code></pre>
|
<p>If I understand correctly, you may be facing this issue because there are spaces around the dates in this column. To solve it, use <code>strip</code> before <code>to_datetime</code>. Here's a piece of code that does that:</p>
<pre><code>df = pd.DataFrame({'Date':
['30 January ', '31 January ', ' 01 February ', '02 February',
'03 February'], 'Daily Confirmed': [1, 0, 0, 1, 1]})
pd.to_datetime(df.Date.str.strip(), format = "%d %B")
</code></pre>
<p>The output is:</p>
<pre><code>0 1900-01-30
1 1900-01-31
2 1900-02-01
...
</code></pre>
|
python|pandas|dataframe|time-series
| 1
|
376,807
| 67,895,055
|
Build a Pandas DataFrame with one unique column of different size
|
<p>I want to build the following DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>num | col1 | col2
a | 1 | 3
| 2 | 4
| 8 | 2
b | 0 | 9
| 4 | 2
| 3 | 1
</code></pre>
<p>I tried the following but did not know how of include the column <code>num</code></p>
<pre class="lang-py prettyprint-override"><code>col1 = [1, 2, 8, 0, 4, 3]
col2 = [3, 4, 2, 9, 2, 1]
num = ['a', 'b']
d = pd.DataFrame({'col1': col1, 'col2': col2}, index=num)
</code></pre>
|
<pre><code>col1 = [1, 2, 8, 0, 4, 3]
col2 = [3, 4, 2, 9, 2, 1]
num = ['a']*3 + ['b']*3 # same as ['a', 'a', 'a', 'b', 'b', 'b']
d = pd.DataFrame({'col1': col1, 'col2': col2}, index=num)
</code></pre>
<pre><code>>>> d
col1 col2
a 1 3
a 2 4
a 8 2
b 0 9
b 4 2
b 3 1
</code></pre>
|
python|pandas|matrix
| 0
|
376,808
| 67,837,701
|
How to use sorting in pandas with similar names?
|
<p>I have used <strong>.sort_values(by=['Tag'])</strong>; it gets the work done, but it doesn't sort based on the index. When I use <strong>.sort_index</strong> it works, but then the <strong>Tag</strong> column gets <strong>jumbled</strong> again. Any heads up, please?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th style="text-align: center;">Tag</th>
</tr>
</thead>
<tbody>
<tr>
<td>1294</td>
<td style="text-align: center;">P3010A</td>
</tr>
<tr>
<td>1638</td>
<td style="text-align: center;">P3010B</td>
</tr>
<tr>
<td>1122</td>
<td style="text-align: center;">P3010A</td>
</tr>
<tr>
<td>1466</td>
<td style="text-align: center;">P3010A</td>
</tr>
<tr>
<td>950</td>
<td style="text-align: center;">P3010B</td>
</tr>
<tr>
<td>99</td>
<td style="text-align: center;">P3010A</td>
</tr>
<tr>
<td>434</td>
<td style="text-align: center;">P3010A</td>
</tr>
<tr>
<td>262</td>
<td style="text-align: center;">P3010B</td>
</tr>
</tbody>
</table>
</div>
|
<p>Try this?</p>
<pre><code>>>> df.sort_values(['Tag','Index'])
Index Tag
5 99 P3010A
6 434 P3010A
2 1122 P3010A
0 1294 P3010A
3 1466 P3010A
7 262 P3010B
4 950 P3010B
1 1638 P3010B
</code></pre>
|
pandas
| 0
|
376,809
| 67,900,834
|
How to convert the dictionary with one key and a tuple as the key value into a DataFrame?
|
<p>I have a dictionary like this.</p>
<p><code>dict={'good': (5, 3, 5), 'nice': (6, 0, 0), 'very': (8, 3, 4), 'not': (2, 0, 1)}</code></p>
<p>I need to convert it into a dataframe such that it looks like this:</p>
<pre><code>Variable Positive Neutral Negative
good 5 3 5
nice 6 0 0
very 8 3 4
not 2 0 1
</code></pre>
<p>I tried using:</p>
<p><code>pd.DataFrame(dict).melt()</code></p>
<p>But it is giving me the data frame in the following form:</p>
<pre><code>Variable Value
good 5
good 3
good 5
nice 6
nice 0
nice 0
...
</code></pre>
<p>Please help me get the required output.</p>
|
<p>You can pass an index to <code>pd.DataFrame</code> and then take the <code>T</code>ranspose:</p>
<pre><code>pd.DataFrame(d, index=["Positive", "Neutral", "Negative"]).T
</code></pre>
<p>or <code>swapaxes</code>:</p>
<pre><code>pd.DataFrame(d, index=["Positive", "Neutral", "Negative"]).swapaxes(0, 1)
</code></pre>
<p>where <code>d</code> is your dictionary (renamed here so it does not shadow the built-in <code>dict</code>),</p>
<p>to get</p>
<pre><code> Positive Neutral Negative
good 5 3 5
nice 6 0 0
very 8 3 4
not 2 0 1
</code></pre>
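<p>If you also want the words as a regular <code>Variable</code> column, as in the desired output, <code>from_dict</code> with <code>orient="index"</code> is another option (a sketch using the same <code>d</code>):</p>
<pre><code>(pd.DataFrame.from_dict(d, orient="index", columns=["Positive", "Neutral", "Negative"])
   .rename_axis("Variable")
   .reset_index())
</code></pre>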
|
python|pandas|dataframe|dictionary|count
| 2
|
376,810
| 67,795,164
|
Count consecutive occurrences for a column
|
<p>I am trying to count consecutive occurrences in the Products column. The result should be as shown in the "Total counts" column. I tried using groupby with cumsum but could not get my logic to work.</p>
<pre><code>+----------+--------------+
| Products | Total counts |
+----------+--------------+
| 1 | 3 |
+----------+--------------+
| 1 | 3 |
+----------+--------------+
| 1 | 3 |
+----------+--------------+
| 2 | 1 |
+----------+--------------+
| 3 | 3 |
+----------+--------------+
| 3 | 3 |
+----------+--------------+
| 3 | 3 |
+----------+--------------+
| 4 | 2 |
+----------+--------------+
| 4 | 2 |
+----------+--------------+
</code></pre>
|
<p>Use <code>groupby</code> with <code>transform</code> and <code>count</code>:</p>
<pre><code>df['Total counts'] = df.groupby('Products').transform('count')
</code></pre>
<p>Output:</p>
<pre><code> Products Total counts
0 1 3
1 1 3
2 1 3
3 2 1
4 3 3
5 3 3
6 3 3
7 4 2
8 4 2
</code></pre>
<hr />
<p>For consecutive runs of Products that may also repeat later in the dataframe:</p>
<pre><code>grp = (df['Products'] != df['Products'].shift()).cumsum()
df['Total counts'] = df.groupby(grp)['Products'].transform('count')
</code></pre>
<p>Output:</p>
<pre><code> Products Total counts
0 1 3
1 1 3
2 1 3
3 2 1
4 3 3
5 3 3
6 3 3
7 4 2
8 4 2
</code></pre>
|
python|pandas|numpy
| 1
|
376,811
| 67,728,562
|
Pandas how to split values of every column based on colon
|
<p>I am trying to split all the column values and retain only the last part (index <code>-1</code>). It works on an individual column with <code>str.split(': ').str[-1]</code>, but being a novice learner I'm not able to apply it to all columns at once.</p>
<p>In short, I want to retain the part of every value after the <code>:</code>.</p>
<p>Maybe by writing a function and applying it to the df, but I can't work that out.</p>
<p>Dataframe:</p>
<pre><code>>>> df
LoginShell ExpiryDate UID
0 loginShell: /bin/bash Enddate: 20991212 uid: auto_soc
1 loginShell: /bin/bash Enddate: 20991212 uid: sambakul
2 loginShell: /bin/bash Enddate: 20991212 uid: services2go-jenkins
3 loginShell: /bin/bash Enddate: 20991212 uid: rdtest0
4 loginShell: /bin/bash Enddate: 20991212 uid: sudo
.. ... ... ...
171 loginShell: /bin/bash Enddate: 20991230 uid: elmadm
172 loginShell: /bin/bash Enddate: 20991231 uid: git
173 loginShell: /bin/bash Enddate: 20991231 uid: rhspadm
174 loginShell: /bin/bash Enddate: 20991231 uid: bossadm
175 loginShell: /bin/bash Enddate: 20991231 uid: ngvp_vmware_management_tst
[176 rows x 3 columns]
</code></pre>
<p>Result for individual column:</p>
<pre><code>>>> df['LoginShell'].str.split(': ').str[-1]
0 /bin/bash
1 /bin/bash
2 /bin/bash
3 /bin/bash
4 /bin/bash
...
171 /bin/bash
172 /bin/bash
173 /bin/bash
174 /bin/bash
175 /bin/bash
Name: LoginShell, Length: 176, dtype: object
</code></pre>
<p>Expected values:</p>
<pre><code> LoginShell ExpiryDate UID
0 /bin/bash 20991212 auto_soc
1 /bin/bash 20991212 sambakul
</code></pre>
<p>Any help would be much appreciated.</p>
|
<p>Try with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>applymap</code></a>:</p>
<pre><code>df = df.applymap(lambda x: x.split(': ', 1)[-1])
</code></pre>
<p><code>df</code>:</p>
<pre><code> LoginShell ExpiryDate UID
0 /bin/bash 20991212 au:to:_soc
1 /bin/bash 20991212 sambakul
2 /bin/bash 20991212 services2go-jenkins
3 /bin/bash 20991212 rdtest0
</code></pre>
<hr/>
<p>Complete Code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'LoginShell': ['loginShell: /bin/bash', 'loginShell: /bin/bash',
'loginShell: /bin/bash', 'loginShell: /bin/bash'],
'ExpiryDate': ['Enddate: 20991212', 'Enddate: 20991212',
'Enddate: 20991212', 'Enddate: 20991212'],
'UID': ['uid: au:to:_soc', 'uid: sambakul', 'uid: services2go-jenkins',
'uid: rdtest0']
})
df = df.applymap(lambda x: x.split(': ', 1)[-1])
</code></pre>
<hr/>
<p>As a <code>def</code> function rather than a <code>lambda</code>:</p>
<pre><code>def split_on_first_colon(x):
    return x.split(': ', 1)[-1]

df = df.applymap(split_on_first_colon)
</code></pre>
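<p>An alternative sketch that keeps the vectorized <code>.str</code> accessor and applies it column by column:</p>
<pre><code>df = df.apply(lambda col: col.str.split(': ', n=1).str[-1])
</code></pre>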
|
python|pandas|dataframe|split
| 2
|
376,812
| 67,784,653
|
Filter on specific value dataframe pandas/ python
|
<p>I want to create a function that filters on a specific value in a column of a dataframe.<br />
My dataframe has the following columns and values:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Zoekterm</th>
<th style="text-align: left;">High_bias</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Man</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">Man</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">Vrouw</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">kind</td>
<td style="text-align: left;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I wrote a function that filters on a specific value, see below:</p>
<pre><code>def most_likey_bias():
    bias = data['High_bias'] == 1
    if bias.any():
        print(data.loc[bias, ['High_bias', 'Zoekterm']])

print(most_likey_bias())
</code></pre>
<p>The outcome of the table is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Zoekterm</th>
<th style="text-align: left;">High_bias</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">vrouw</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">kind</td>
<td style="text-align: left;">1</td>
</tr>
</tbody>
</table>
</div>
<p>This table gives back which "Zoekterm" rows have a value of 1.<br />
But because the same "Zoekterm" appears multiple times, I want a table that gives me a count per Zoekterm.
So the table that I want is shown below.<br />
That is, a table that counts, for each "Zoekterm", how many rows have "High_bias" equal to the specific value (1).</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Zoekterm</th>
<th style="text-align: left;">High_bias</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Man</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">Vrouw</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">kind</td>
<td style="text-align: left;">5</td>
</tr>
</tbody>
</table>
</div>
<p>I tried with groupby and with count, but I can't get it to work. Could someone give me some tips?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> with filtered rows and convert <code>Series</code> to DataFrame by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a>:</p>
<pre><code>def most_likey_bias():
    bias = data['High_bias'] == 1
    if bias.any():
        return data[bias].groupby('Zoekterm').size().reset_index(name='High_bias')
</code></pre>
<p>Similar idea is aggregate <code>sum</code>:</p>
<pre><code>def most_likey_bias():
    bias = data['High_bias'] == 1
    if bias.any():
        return data[bias].groupby('Zoekterm')['High_bias'].sum().reset_index(name='High_bias')
</code></pre>
<hr />
<pre><code>print (most_likey_bias())
Zoekterm High_bias
0 Man 2
1 Vrouw 1
</code></pre>
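<p>The same count can also be written with <code>value_counts</code> on the filtered column (a sketch):</p>
<pre><code>(data.loc[data['High_bias'] == 1, 'Zoekterm']
     .value_counts()
     .rename_axis('Zoekterm')
     .reset_index(name='High_bias'))
</code></pre>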
|
python|pandas|dataframe|data-science
| 1
|
376,813
| 67,990,773
|
Size of input allowed in AllenNLP - Predict when using a Predictor
|
<p>Does anyone have an idea what the input text size limit is that can be passed to the
predict(passage, question) method of the AllenNLP Predictors?</p>
<p>I have tried with passages of 30-40 sentences, which works fine. But it eventually stops working for me when I pass a significant amount of text, around 5K statements.</p>
|
<p>Which model are you using? Some models truncate the input, others try to handle arbitrary length input using a sliding window approach. With the latter, the limit will depend on the memory available on your system.</p>
|
pytorch|prediction|allennlp
| 0
|
376,814
| 67,704,405
|
Python/Numpy: Using np.tile to tile 2D array of boolean mask arrays
|
<p>I have a 2D array of boolean mask arrays:</p>
<pre><code>maskArr = [[False, True, False, True], [False, True, True, True], [True, True, False, True]]
</code></pre>
<p>for which I am trying to use <code>np.tile(maskArr, (3, 1))</code> to get the following output:</p>
<pre><code>[
[[False, True, False, True], [False, True, True, True], [True, True, False, True]],
[[False, True, False, True], [False, True, True, True], [True, True, False, True]],
[[False, True, False, True], [False, True, True, True], [True, True, False, True]],
]
</code></pre>
<p>but I'm getting this:</p>
<pre><code>[[False True False True]
[False True True True]
[ True True False True]
[False True False True]
[False True True True]
[ True True False True]
[False True False True]
[False True True True]
[ True True False True]]
</code></pre>
<p>Any suggestions for how I can fix this? It works fine with <code>arr = [1,2,3]</code>:</p>
<pre><code>>>> np.tile([1,2,3], (3, 1))
[[1,2,3]
[1,2,3]
[1,2,3]]
</code></pre>
|
<p>You can use:</p>
<pre class="lang-py prettyprint-override"><code>x = np.tile(maskArr, (3, 1, 1))
print(x)
</code></pre>
<p>Prints:</p>
<pre class="lang-py prettyprint-override"><code>[[[False True False True]
[False True True True]
[ True True False True]]
[[False True False True]
[False True True True]
[ True True False True]]
[[False True False True]
[False True True True]
[ True True False True]]]
</code></pre>
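<p>The reason the original call stacked the rows is that <code>np.tile</code> with a 2-element <code>reps</code> tiles within the existing two axes of the (3, 4) array, while <code>(3, 1, 1)</code> adds a new leading axis first. If a read-only view is enough, broadcasting avoids the copy (a sketch):</p>
<pre><code>x = np.broadcast_to(np.asarray(maskArr), (3, 3, 4))  # read-only view with the same values
</code></pre>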
|
python|numpy|multidimensional-array
| 2
|
376,815
| 67,934,526
|
Column Data in CSV file to h5 format
|
<p>I am trying to convert a CSV file to an h5 format file.</p>
<p>I have gone through multiple posts and have been able to create the h5 file, but I am still unable to pull individual columns from the CSV file and add them to the h5 file. Please let me know if there is any solution to this.</p>
<p>Essentially I have four columns in my CSV file with 4000 observations in each column, and I am trying to check whether there is any way to directly convert it to h5, or to pull individual column data and edit the existing h5 file. Thank you.</p>
<pre><code>import pandas as pd
import numpy as np

filename = '/home/test3.h5'
df = pd.DataFrame(np.array([[1, 2], [4, 5]]),
                  columns=['a', 'b'])
print(pd.read_hdf(filename, 'data'))
</code></pre>
|
<p>As specified in the <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/user_guide/io.html#hdf5-pytables" rel="nofollow noreferrer">pandas I/O guide, section HDF5 (PyTables)</a>, there are 2 simple functions to read and write hdf:</p>
<ul>
<li><a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.DataFrame.to_hdf.html" rel="nofollow noreferrer"><code>pd.to_hdf</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.DataFrame.read_hdf.html" rel="nofollow noreferrer"><code>pd.read_hdf</code></a></li>
</ul>
<p>So converting a csv to h5 could be as simple as:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv('input_file.csv')
df.to_hdf('output_file.h5', 'data')
</code></pre>
<p>If you want to combine the data</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.read_csv('input_file.csv')
df2 = pd.read_hdf('input_file.h5', 'data')
save = pd.merge(df1, df2, on=[...]) # combine data
save.to_hdf('output_file.h5', 'data')
</code></pre>
<p>If <code>input_file.h5</code> and <code>output_file.h5</code> are the same, <code>mode='w'</code> allows you to overwrite the file; using different keys with <code>mode='a'</code> (the default) allows you to add to the file; <code>append=True</code> allows you to append rows to the dataframe inside the file; etc.</p>
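<p>For example, a small sketch of appending rows under an existing key (this assumes both writes share the same columns; <code>append=True</code> requires the <code>table</code> format):</p>
<pre class="lang-py prettyprint-override"><code>df1.to_hdf('output_file.h5', 'data', mode='w', format='table')               # create / overwrite
df1.to_hdf('output_file.h5', 'data', mode='a', format='table', append=True)  # append more rows
print(pd.read_hdf('output_file.h5', 'data'))
</code></pre>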
<p>The guide I linked contains a lot more examples of how to use these tools, and also covers <code>pd.HDFStore</code>, which allows you to open the whole file and look into the keys it contains. I suggest you give it a thorough read.</p>
|
python|pandas|csv|hdf5
| 0
|
376,816
| 67,671,601
|
Add weekday sequentially from one-hot encoded input
|
<p>I am trying to enrich a sparse rest-day format with the weekday it represents, sequentially assigning weekday names taken from an additional list;</p>
<p>to illustrate:</p>
<pre><code>weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
print(df)
employee_name rest_day
Alex 0
Alex 0
Alex 0
Alex 0
Alex 0
Alex 0
Alex 1
Frank 1
Frank 0
Frank 0
Frank 0
Frank 1
Frank 0
Frank 0
...
</code></pre>
<p>In which Alex takes a rest on Sunday and Frank on Mondays and Fridays.</p>
<p>I'd like to add a new column, with values from the aforementioned list, indicating weekday as:</p>
<pre><code>print(final_df)
employee_name rest_day weekday
Alex 0 Monday
Alex 0 Tuesday
Alex 0 Wednesday
Alex 0 Thursday
Alex 0 ...
Alex 0
Alex 1 Sunday
Frank 1 Monday
Frank 0 ...
Frank 0
Frank 0
Frank 1
Frank 0
Frank 0 Sunday
...
</code></pre>
|
<p>Assuming each <code>employee_name</code> has exactly 7 rows, you can use:</p>
<pre><code>weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
df['weekday'] = df.groupby('employee_name').transform(lambda x: weekdays)
</code></pre>
<p>Result:</p>
<pre><code>print(df)
employee_name rest_day weekday
0 Alex 0 Monday
1 Alex 0 Tuesday
2 Alex 0 Wednesday
3 Alex 0 Thursday
4 Alex 0 Friday
5 Alex 0 Saturday
6 Alex 1 Sunday
7 Frank 1 Monday
8 Frank 0 Tuesday
9 Frank 0 Wednesday
10 Frank 0 Thursday
11 Frank 1 Friday
12 Frank 0 Saturday
13 Frank 0 Sunday
</code></pre>
|
python|python-3.x|pandas
| 0
|
376,817
| 67,718,791
|
Training Word2Vec Model from sourced data - Issue Tokenizing data
|
<p>I have recently sourced and curated a lot of reddit data from Google Bigquery.</p>
<p>The dataset looks like this:</p>
<p><a href="https://i.stack.imgur.com/q1C1G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q1C1G.png" alt="Data Preview" /></a></p>
<p>Before passing this data to word2vec to create a vocabulary and be trained, it is required that I properly tokenize the 'body_cleaned' column.</p>
<p>I have attempted the tokenization with both manually created functions and NLTK's word_tokenize, but for now I'll keep it focused on using word_tokenize.</p>
<p>Because my dataset is rather large, close to 12 million rows, it is impossible for me to open and perform functions on the dataset in one go. Pandas tries to load everything into RAM and, as you can understand, it crashes, even on a system with 24 GB of RAM.</p>
<p>I am facing the following issue:</p>
<ul>
<li>When I tokenize the dataset (using NTLK word_tokenize), if I perform the function on the dataset as a whole, it correctly tokenizes and word2vec accepts that input and learns/outputs words correctly in its vocabulary.</li>
<li>When I tokenize the dataset by first batching the dataframe and iterating through it, the resulting token column is not what word2vec prefers; although word2vec trains its model on the data gathered for over 4 hours, the resulting vocabulary it has learnt consists of single characters in several encodings, as well as emojis - not words.</li>
</ul>
<p>To troubleshoot this, I created a tiny subset of my data and tried to perform the tokenization on that data in two different ways:</p>
<ul>
<li>Knowing that my computer can handle performing the action on the dataset, I simply did:</li>
</ul>
<pre><code>reddit_subset = reddit_data[:50]
reddit_subset['tokens'] = reddit_subset['body_cleaned'].apply(lambda x: word_tokenize(x))
</code></pre>
<p>This produces the following result:
<a href="https://i.stack.imgur.com/FDrB8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FDrB8.png" alt="Tokenized Data Preview" /></a></p>
<p>This in fact works with word2vec and produces a model one can work with. Great so far.</p>
<p>Because of my inability to operate on such a large dataset in one go, I had to get creative with how I handle it. My solution was to batch the dataset and work on it in small iterations using pandas' own chunksize argument.</p>
<p>I wrote the following function to achieve that:</p>
<pre><code>def reddit_data_cleaning(filepath, batchsize=20000):
    if batchsize:
        df = pd.read_csv(filepath, encoding='utf-8', error_bad_lines=False, chunksize=batchsize, iterator=True, lineterminator='\n')
    print("Beginning the data cleaning process!")
    start_time = time.time()
    flag = 1
    chunk_num = 1
    for chunk in df:
        chunk[u'tokens'] = chunk[u'body_cleaned'].apply(lambda x: word_tokenize(x))
        chunk_num += 1
        if flag == 1:
            chunk = chunk.dropna(how='any')  # dropna is not in-place, so assign the result
            chunk = chunk[chunk['body_cleaned'] != 'deleted']
            chunk = chunk[chunk['body_cleaned'] != 'removed']
            print("Beginning writing a new file")
            chunk.to_csv(str(filepath[:-4] + '_tokenized.csv'), mode='w+', index=None, header=True)
            flag = 0
        else:
            chunk = chunk.dropna(how='any')
            chunk = chunk[chunk['body_cleaned'] != 'deleted']
            chunk = chunk[chunk['body_cleaned'] != 'removed']
            print("Adding a chunk into an already existing file")
            chunk.to_csv(str(filepath[:-4] + '_tokenized.csv'), mode='a', index=None, header=None)
    end_time = time.time()
    print("Processing has been completed in: ", (end_time - start_time), " seconds.")
</code></pre>
<p>Although this piece of code allows me to actually work through this huge dataset in chunks and produces results where otherwise I'd crash from memory failures, I get a result which doesn't fit my word2vec requirements, and leaves me quite baffled at the reason for it.</p>
<p>I used the above function to perform the same operation on the Data subset to compare how the result differs between the two functions, and got the following:</p>
<p><a href="https://i.stack.imgur.com/ZkGFD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZkGFD.png" alt="Comparison of Methods" /></a></p>
<p>The desired result is on the new_tokens column, and the function that chunks the dataframe produces the "tokens" column result.</p>
<p>Can anyone help me understand why the same tokenization function produces a wholly different result depending on how I iterate over the dataframe?</p>
<p>I appreciate it if you read through the whole issue and stuck with it!</p>
|
<p>After taking gojomo's advice I simplified my approach at reading the csv file and writing to a text file.</p>
<p>My initial approach using pandas had yielded some pretty bad processing times for a file with around 12 million rows, and memory issues due to how pandas reads data all into memory before writing it out to a file.</p>
<p>What I also realized was that I had a major flaw in my previous code.
I was printing some output (as a sanity check), and because I printed output too often, I overflowed Jupyter and crashed the notebook, not allowing the underlying and most important task to complete.</p>
<p>I got rid of that, simplified reading with the csv module and writing into a txt file, and I processed the reddit database of ~12 million rows in less than 10 seconds.</p>
<p>Maybe not the finest piece of code, but I was scrambling to solve an issue that stood as a roadblock for me for a couple of days (and not realizing that part of my problem was my sanity checks crashing Jupyter was an even bigger frustration).</p>
<pre><code>def generate_corpus_txt(csv_filepath, output_filepath):
    import csv
    import time
    start_time = time.time()
    with open(csv_filepath, encoding='utf-8') as csvfile:
        datareader = csv.reader(csvfile)
        count = 0
        header = next(csvfile)
        print(time.asctime(time.localtime()), " ---- Beginning Processing")
        with open(output_filepath, 'w+') as output:
            # Check file as empty
            if header != None:
                for row in datareader:
                    # Iterate over each row after the header in the csv
                    # row variable is a list that represents a row in csv
                    processed_row = str(' '.join(row)) + '\n'
                    output.write(processed_row)
                    count += 1
                    if count == 1000000:
                        print(time.asctime(time.localtime()), " ---- Processed 1,000,000 Rows of data.")
                        count = 0
    print('Processing took:', int((time.time()-start_time)/60), ' minutes')
    output.close()
    csvfile.close()
</code></pre>
|
python|pandas|tokenize|word2vec
| 1
|
376,818
| 67,889,900
|
How to convert row names into a column in Pandas
|
<p>I have the following excel dataset:</p>
<pre><code>
($/bbl) ($/bbl) ($/bbl)
crude_petro crude_brent crude_dubai
1960M01 1.63 1.63 1.63
1960M02 1.63 1.63 1.63
1960M03 1.63 1.63 1.63
</code></pre>
<p>What I want to do is convert the header names <code>crude_petro, crude_brent, crude_dubai</code>
into a column, so that the result is in a format like this:</p>
<p><strong>EDIT 2</strong></p>
<pre><code> unit commodity price date
0 ($/bbl) crude_petro 1.63 1960M01
1 ($/bbl) crude_brent 1.63 1960M01
2 ($/bbl) crude_dubai 1.63 1960M01
</code></pre>
<p>How can I achieve this using pandas?</p>
<p><strong>EDIT</strong>:
This is how I am reading the excel to possibly parse those values</p>
<pre><code>df = pd.read_excel(local_path, sheet_name='Monthly Prices', engine='openpyxl', skiprows=5, usecols="B:BT")
</code></pre>
<p><strong>EDIT 3:</strong>
In my final output, the data is generating extra columns that don't exist in my source spreadsheet and that are associated with NaN values. For example, the commodity SILVER is outputting 'SILVER.1', 'SILVER.2', etc.</p>
|
<p>Try with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html#pandas-melt" rel="nofollow noreferrer"><code>melt</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html#pandas-dataframe-sort-index" rel="nofollow noreferrer"><code>sort_index</code></a>:</p>
<pre><code>new_df = (
df.melt(ignore_index=False, var_name=['unit', 'commodity'])
.sort_index()
.rename_axis('Date')
.reset_index()
)
</code></pre>
<p><code>new_df</code>:</p>
<pre><code> Date unit commodity value
0 1960M01 ($/bbl) crude_petro 1.63
1 1960M01 ($/bbl) crude_brent 1.63
2 1960M01 ($/bbl) crude_dubai 1.63
3 1960M02 ($/bbl) crude_petro 1.63
4 1960M02 ($/bbl) crude_brent 1.63
5 1960M02 ($/bbl) crude_dubai 1.63
6 1960M03 ($/bbl) crude_petro 1.63
7 1960M03 ($/bbl) crude_brent 1.63
8 1960M03 ($/bbl) crude_dubai 1.63
</code></pre>
<hr/>
<p>Sample Frame Used:</p>
<pre><code>df = pd.DataFrame({
('($/bbl)', 'crude_petro'): {'1960M01': 1.63, '1960M02': 1.63,
'1960M03': 1.63},
('($/bbl)', 'crude_brent'): {'1960M01': 1.63, '1960M02': 1.63,
'1960M03': 1.63},
('($/bbl)', 'crude_dubai'): {'1960M01': 1.63, '1960M02': 1.63,
'1960M03': 1.63}
})
</code></pre>
<p><code>df</code>:</p>
<pre><code> ($/bbl)
crude_petro crude_brent crude_dubai
1960M01 1.63 1.63 1.63
1960M02 1.63 1.63 1.63
1960M03 1.63 1.63 1.63
</code></pre>
|
python|pandas|dataframe
| 1
|
376,819
| 67,711,552
|
Python: Split pandas dataframe by range of values
|
<p>I have a simple dataframe which I am trying to split into multiple groups based on whether the x column value falls within a range.</p>
<p>e.g. if I have:</p>
<pre><code>print(df1)
x
0 5
1 7.5
2 10
3 12.5
4 15
</code></pre>
<p>And wish to create a new dataframe, df2, of values of x which are within the range 7-13 (7 < x < 13)</p>
<pre><code>print(df1)
x
0 5
4 15
print(df2)
x
1 7.5
2 10
3 12.5
</code></pre>
<p>I have been able to split the dataframe based on a single-value boolean condition, e.g. (x < 11), using the following, but have been unable to extend this to a <em>range</em> of values.</p>
<pre><code>thresh = 11
df2 = df1[df1['x'] < thresh]
print(df2)
x
0 5
1 7.5
2 10
</code></pre>
|
<p>You can create a boolean mask for the range (7 < x < 13) by AND condition of (x > 7) and (x < 13). Then create <code>df2</code> with this boolean mask. The remaining entries left in <code>df1</code> being the negation of this boolean mask:</p>
<pre><code>thresh_low = 7
thresh_high = 13
mask = (df1['x'] > thresh_low) & (df1['x'] < thresh_high)
df2 = df1[mask]
df1 = df1[~mask]
</code></pre>
<p>Result:</p>
<pre><code>print(df2)
x
1 7.5
2 10.0
3 12.5
print(df1)
x
0 5.0
4 15.0
</code></pre>
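<p>In recent pandas versions the same mask can also be written with <code>Series.between</code> (a sketch; <code>inclusive='neither'</code> keeps the strict inequalities):</p>
<pre><code>mask = df1['x'].between(thresh_low, thresh_high, inclusive='neither')
</code></pre>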
|
python|pandas|dataframe|pandas-groupby
| 3
|
376,820
| 67,967,625
|
Create a column with values of a list and depending on another column
|
<ul>
<li>I have a list of paths of different images:</li>
</ul>
<p><code>img_dir = [img_pathA.1.jpg, img_pathA.2.jpg, img_pathA.3.jpg, img_pathB.1.jpg, img_pathB.2.jpg, .... img_pathZ.3.jpg]</code></p>
<ul>
<li>And a dataframe with an <code>ID</code> column:</li>
</ul>
<p><code>df</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
</tr>
<tr>
<td>B</td>
</tr>
<tr>
<td>C</td>
</tr>
<tr>
<td>..</td>
</tr>
<tr>
<td>Z</td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, every image path in the list contains in its filename the ID it belongs to.</p>
<p>I would like to add all the image paths for every ID in the dataframe. The goal is to get something like this:</p>
<p><code>final_df</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>img_path</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>img_pathA.1.jpg</td>
</tr>
<tr>
<td>A</td>
<td>img_pathA.2.jpg</td>
</tr>
<tr>
<td>A</td>
<td>img_pathA.3.jpg</td>
</tr>
<tr>
<td>B</td>
<td>img_pathB.1.jpg</td>
</tr>
<tr>
<td>B</td>
<td>img_pathB.2.jpg</td>
</tr>
<tr>
<td>..</td>
<td>............</td>
</tr>
<tr>
<td>Z</td>
<td>img_pathZ.3.jpg</td>
</tr>
</tbody>
</table>
</div>
<p>The number of images per ID is not fixed (usually 2-3 images per ID), so I thought that I could replicate the entire dataframe maybe 3 times, do the assignment for every row and, after that, delete the rows that don't have a path ("No path").</p>
<p>I have tried the following code:</p>
<pre><code>df['img_path'] = "No path"
df = pd.concat([df]*3, ignore_index=True)
for ID in df['ID']:
for path in img_dir:
if ID in path:
df.loc[(df['ID'] == ID), 'img_path'] = path
</code></pre>
<p>But I get something like this. I think it's because the ID gets replicated too, and the column ends up storing only the last image path for every ID:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>img_path</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>img_pathA.3.jpg</td>
</tr>
<tr>
<td>A</td>
<td>img_pathA.3.jpg</td>
</tr>
<tr>
<td>A</td>
<td>img_pathA.3.jpg</td>
</tr>
<tr>
<td>B</td>
<td>img_pathB.2.jpg</td>
</tr>
<tr>
<td>B</td>
<td>img_pathB.2.jpg</td>
</tr>
<tr>
<td>..</td>
<td>............</td>
</tr>
<tr>
<td>Z</td>
<td>img_pathZ.3.jpg</td>
</tr>
</tbody>
</table>
</div>
<p>Any idea of how could I solve or improve this?</p>
<p>Thank you in advance.</p>
|
<p>Create a series from the <code>img_dir</code> list, <code>extract</code> the <code>ID</code> from the corresponding paths, set the extracted <code>ID</code> as the index of the series, and then <code>join</code> the dataframe with this series on the column <code>ID</code>:</p>
<pre><code>s = pd.Series(img_dir)
s.index = s.str.extract(fr"({'|'.join(df['ID'])})", expand=False)
df.join(s.rename('img_path'), on='ID')
</code></pre>
<hr />
<pre><code> ID img_path
0 A img_pathA.1.jpg
0 A img_pathA.2.jpg
0 A img_pathA.3.jpg
1 B img_pathB.1.jpg
1 B img_pathB.2.jpg
...
3 Z img_pathZ.3.jpg
</code></pre>
|
python|pandas|dataframe|loops|data-manipulation
| 2
|
376,821
| 67,738,487
|
Pandas-Profiling.to_widgets(): Error displaying widget: model not found
|
<p>Error screenshot</p>
<p><a href="https://i.stack.imgur.com/p0sLi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p0sLi.png" alt="" /></a></p>
<p>I've been facing an <strong>intermittent issue with the pandas-profiling widget</strong> not rendering, and it has been going on and off for a while.</p>
<p>I've tried this in the command prompt:</p>
<pre><code> jupyter nbextension enable --py widgetsnbextension
</code></pre>
<p>It comes up with " - Validating: ok" but the widget is still not rendering.</p>
<p>A quick Google search led me to a few GitHub pandas-profiling issues sections, but they were a few years old.</p>
|
<p>I had this problem in Kaggle, I think it is related to memory. It happens when I repeat running my notebook a few times, without restarting the kernel.</p>
<p>To fix it, I just clicked Run, then Restart and Clear Outputs, and it's working again.</p>
<p>I have since optimized my code to release memory when I am done with large objects, and gotten into the habit of restarting and clearing outputs before a fresh run.</p>
<p>It hasn't happened on my local environment with Jupyter Notebook, probably because I have better memory locally. But if it did happen, I guess I would select Kernel, then Restart and Clear Output.</p>
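<p>For reference, a minimal sketch of releasing memory for objects you no longer need between runs (the variable names here are just placeholders, not from any particular notebook):</p>
<pre><code>import gc

# drop references to large objects you no longer need, then ask Python to free the memory
del profile_report, big_df
gc.collect()
</code></pre>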
|
widget|pandas-profiling
| 0
|
376,822
| 67,649,606
|
Error in reverse scaling outputs predicted by a LSTM RNN
|
<p>I used the LSTM model to predict the future <code>open</code> price of a stock. Here the data was preprocessed and the model was built and trained without any errors, and I used Standard Scaler to scale down the values in the DataFrame. But while retrieving the predictions from the model, when I used the <code>scaler.reverse()</code> method it gave the following error.</p>
<pre><code>ValueError: non-broadcastable output operand with shape (59,1) doesn't match the broadcast shape (59,4)
</code></pre>
<p>The complete code is a too big jupyter notebook to directly show, so I have uploaded it in a <a href="https://github.com/Samar-080301/help_please" rel="nofollow noreferrer">git repository</a></p>
|
<p>This is because the model is predicting output with shape (59, 1), but your scaler was fit on a (251, 4) data frame. Either fit a separate scaler on data with the shape of the y values, or change your model's dense output layer to 4 dimensions instead of 1.
A scaler only accepts the shape it was fit on when you call <code>scaler.inverse_transform</code>.</p>
<p>Old Code - Shape (n,1)</p>
<p><code>trainY.append(df_for_training_scaled[i + n_future - 1:i + n_future, 0])</code></p>
<p>Updated Code - Shape (n,4) - use all 4 outputs</p>
<p><code>trainY.append(df_for_training_scaled[i + n_future - 1:i + n_future,:])</code></p>
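<p>If you would rather keep the 1-dimensional output, a rough sketch of the separate-scaler approach could look like this (a minimal sketch assuming <code>StandardScaler</code> and that <code>open</code> is the target column; the names are illustrative, not taken from your notebook):</p>
<pre><code>from sklearn.preprocessing import StandardScaler

# scaler for the 4 input features
feature_scaler = StandardScaler()
X_scaled = feature_scaler.fit_transform(df_for_training)            # shape (n, 4)

# separate scaler fit only on the target column
target_scaler = StandardScaler()
y_scaled = target_scaler.fit_transform(df_for_training[['open']])   # shape (n, 1)

# ... build trainX/trainY from X_scaled and y_scaled and train the model ...

# predictions of shape (59, 1) can then be inverted directly:
# predictions_original = target_scaler.inverse_transform(predictions)
</code></pre>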
|
python|tensorflow|keras|lstm|recurrent-neural-network
| 3
|
376,823
| 67,871,982
|
Train a custom Mask RCNN with different image sizes in Google Colab. Unfortunately i can't get it running
|
<p>currently I'm trying to train a Matterport Mask R-CNN with custom classes and a custom dataset on Colab. I followed this tutorial:
<a href="https://github.com/TannerGilbert/MaskRCNN-Object-Detection-and-Segmentation" rel="nofollow noreferrer">https://github.com/TannerGilbert/MaskRCNN-Object-Detection-and-Segmentation</a></p>
<p>Instead of using images with the same size, my images have different sizes. I spent hours matching the masks (.json) to the image sizes. But finally it is working:</p>
<p>The following lines are loading two sample images and displaying the segmentation mask:</p>
<pre><code># Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 2)
for image_id in image_ids:
image, img_height, img_width = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id, img_height, img_width)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
</code></pre>
<p>load_image() in utils.py is looking like this:</p>
<pre><code>def load_image(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
img_height, img_width, num_channels = image.shape
# If grayscale. Convert to RGB for consistency.
if image.ndim != 3:
image = skimage.color.gray2rgb(image)
# If has an alpha channel, remove it for consistency
if num_channels == 4:
image = image[..., :3]
return image, img_height, img_width
</code></pre>
<p>load_mask() is looking like this:</p>
<pre><code>def load_mask(self, image_id, img_height, img_width):
# get details of image
info = self.image_info[image_id]
# define box file location
path = info['annotation']
# load XML
masks, classes = self.extract_masks(path, img_height, img_width)
return masks, np.asarray(classes, dtype='int32')
</code></pre>
<p>And extract_mask() is looking like this:</p>
<pre><code>def extract_masks(self, filename, img_height, img_width):
json_file = os.path.join(filename)
with open(json_file) as f:
img_anns = json.load(f)
masks = np.zeros([img_height, img_width, len(img_anns['shapes'])], dtype='uint8')
classes = []
for i, anno in enumerate(img_anns['shapes']):
mask = np.zeros([img_height, img_width], dtype=np.uint8)
cv2.fillPoly(mask, np.array([anno['points']], dtype=np.int32), 1)
masks[:, :, i] = mask
classes.append(self.class_names.index(anno['label']))
return masks, classes
</code></pre>
<p>Now we are getting to the curious part...</p>
<p>After going on with my code creating the model</p>
<pre><code># Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
</code></pre>
<p>...and choosing the weights to start with</p>
<pre><code># Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
</code></pre>
<p>...I get to the point where training should start:</p>
<pre><code>model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=5,
layers='heads')
</code></pre>
<p>I'm receiving the following error:</p>
<pre><code>/content/drive/My Drive/Colab/Mask_RCNN/mrcnn/model.py in load_image_gt()
1210 # Load image and mask
1211 image, img_height, img_width = dataset.load_image(image_id)
-> 1212 mask, class_ids = dataset.load_mask(image_id, img_height, img_width)
1213 original_shape = image.shape
1214 image, window, scale, padding, crop = utils.resize_image(
TypeError: load_mask() missing 2 required positional arguments: 'img_height' and 'img_width'
</code></pre>
<p>I noticed that load_image_gt() is different from the above-mentioned load_image(). I've already adjusted load_image_gt() as follows (I added <strong>#<-------</strong> to the relevant lines):</p>
<pre><code>def load_image_gt(dataset, config, image_id, augment=False, augmentation=None,
use_mini_mask=False):
"""Load and return ground truth data for an image (image, mask, bounding boxes).
augment: (deprecated. Use augmentation instead). If true, apply random
image augmentation. Currently, only horizontal flipping is offered.
augmentation: Optional. An imgaug (https://github.com/aleju/imgaug) augmentation.
For example, passing imgaug.augmenters.Fliplr(0.5) flips images
right/left 50% of the time.
use_mini_mask: If False, returns full-size masks that are the same height
and width as the original image. These can be big, for example
1024x1024x100 (for 100 instances). Mini masks are smaller, typically,
224x224 and are generated by extracting the bounding box of the
object and resizing it to MINI_MASK_SHAPE.
Returns:
image: [height, width, 3]
shape: the original shape of the image before resizing and cropping.
class_ids: [instance_count] Integer class IDs
bbox: [instance_count, (y1, x1, y2, x2)]
mask: [height, width, instance_count]. The height and width are those
of the image unless use_mini_mask is True, in which case they are
defined in MINI_MASK_SHAPE.
"""
# Load image and mask
image, img_height, img_width = dataset.load_image(image_id) **#<-------**
mask, class_ids = dataset.load_mask(image_id, img_height, img_width) **#<-------**
original_shape = image.shape
image, window, scale, padding, crop = utils.resize_image(
image,
min_dim=config.IMAGE_MIN_DIM,
min_scale=config.IMAGE_MIN_SCALE,
max_dim=config.IMAGE_MAX_DIM,
mode=config.IMAGE_RESIZE_MODE)
mask = utils.resize_mask(mask, scale, padding, crop)
# Random horizontal flips.
# TODO: will be removed in a future update in favor of augmentation
if augment:
logging.warning("'augment' is deprecated. Use 'augmentation' instead.")
if random.randint(0, 1):
image = np.fliplr(image)
mask = np.fliplr(mask)
# Augmentation
# This requires the imgaug lib (https://github.com/aleju/imgaug)
if augmentation:
import imgaug
# Augmenters that are safe to apply to masks
# Some, such as Affine, have settings that make them unsafe, so always
# test your augmentation on masks
MASK_AUGMENTERS = ["Sequential", "SomeOf", "OneOf", "Sometimes",
"Fliplr", "Flipud", "CropAndPad",
"Affine", "PiecewiseAffine"]
def hook(images, augmenter, parents, default):
"""Determines which augmenters to apply to masks."""
return augmenter.__class__.__name__ in MASK_AUGMENTERS
# Store shapes before augmentation to compare
image_shape = image.shape
mask_shape = mask.shape
# Make augmenters deterministic to apply similarly to images and masks
det = augmentation.to_deterministic()
image = det.augment_image(image)
# Change mask to np.uint8 because imgaug doesn't support np.bool
mask = det.augment_image(mask.astype(np.uint8),
hooks=imgaug.HooksImages(activator=hook))
# Verify that shapes didn't change
assert image.shape == image_shape, "Augmentation shouldn't change image size"
assert mask.shape == mask_shape, "Augmentation shouldn't change mask size"
# Change mask back to bool
mask = mask.astype(np.bool)
# Note that some boxes might be all zeros if the corresponding mask got cropped out.
# and here is to filter them out
_idx = np.sum(mask, axis=(0, 1)) > 0
mask = mask[:, :, _idx]
class_ids = class_ids[_idx]
# Bounding boxes. Note that some boxes might be all zeros
# if the corresponding mask got cropped out.
# bbox: [num_instances, (y1, x1, y2, x2)]
bbox = utils.extract_bboxes(mask)
# Active classes
# Different datasets have different classes, so track the
# classes supported in the dataset of this image.
active_class_ids = np.zeros([dataset.num_classes], dtype=np.int32)
source_class_ids = dataset.source_class_ids[dataset.image_info[image_id]["source"]]
active_class_ids[source_class_ids] = 1
# Resize masks to smaller size to reduce memory usage
if use_mini_mask:
mask = utils.minimize_mask(bbox, mask, config.MINI_MASK_SHAPE)
# Image meta data
image_meta = compose_image_meta(image_id, original_shape, image.shape,
window, scale, active_class_ids)
return image, img_height, img_width, image_meta, class_ids, bbox, mask **#<-------**
</code></pre>
<p>I don't know why the two required arguments are missing, because img_height and img_width are defined, aren't they?
In my opinion the code is exactly the same as the "# Load and display random samples" code mentioned above.</p>
<p>I would be very grateful if someone could help.
Many thanks in advance!</p>
|
<p>You must set the width and height values in your <code>load_yourdatasetname()</code> method via <code>self.add_image</code>, and read them back in the <code>load_mask</code> function.</p>
<p>Example:</p>
<pre><code>class Covid19Dataset(utils.Dataset):
def load_covid19(self, dataset_dir, subset):
"""Load a subset of the covid-19 dataset.
dataset_dir: Root directory of the dataset.
subset: Subset to load: train or val
"""
# Add classes. We have three class to add.
self.add_class("covid19", 1, "lung right")
self.add_class("covid19", 2, "lung left")
self.add_class("covid19", 3, "infection")
# Train or validation dataset?
assert subset in ["train", "val"]
dataset_dir = os.path.join(dataset_dir, subset)
dataset_Image_dir = os.path.join(dataset_dir, "Images")
dataset_Labels_dir = os.path.join(dataset_dir, "Labels")
# get image names in dataset directory
filenamesInDir = [f for f in listdir(dataset_Image_dir) if isfile(join(dataset_Image_dir, f))]
png_filenames = []
for name in filenamesInDir:
# Skip if file does not end with .png
if not name.endswith(".png") : continue
png_filenames.append(name)
# Add images
for a in png_filenames:
image_path = os.path.join(dataset_Image_dir, a)
label_path = os.path.join(dataset_Labels_dir, a)
image = skimage.io.imread(image_path)
height, width = image.shape[:2]
self.add_image(
"covid19",
image_id=a, # use file name as a unique image id
path=image_path,
label_path=label_path,
width=width, height=height)
def load_mask(self, image_id):
"""Generate instance masks for an image.
Returns:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
"""
# If not a covid19 dataset image, delegate to parent class.
image_info = self.image_info[image_id]
if image_info["source"] != "covid19":
return super(self.__class__, self).load_mask(image_id)
# [height, width, instance_count]
info = self.image_info[image_id]
label = skimage.io.imread(info["label_path"])
Class_names = np.unique(label)
Class_names = Class_names[1:]
mask = np.zeros([info["height"], info["width"], len(Class_names)],
dtype=np.uint8)
class_ids = []
for i,p in enumerate(Class_names):
mask[label==p,i]=1
class_ids.append(i+1)
# Return mask, and array of class IDs of each instance. Since we have
class_ids = np.array(class_ids, dtype=np.int32)
mask = np.rot90(mask)
return mask.astype(np.bool), class_ids
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "covid19":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
def load_image(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
#image = rotate(image, 90, resize=False)
image = np.rot90(image)
# If grayscale. Convert to RGB for consistency.
if image.ndim != 3:
image = skimage.color.gray2rgb(image)
# If has an alpha channel, remove it for consistency
if image.shape[-1] == 4:
image = image[..., :3]
return image
</code></pre>
|
python|tensorflow|google-colaboratory
| 0
|
376,824
| 67,654,412
|
Auto-extracting columns from nested dictionaries in pandas
|
<p>So I have multiple nested dictionaries in a jsonl file column, as below:</p>
<pre><code> `df['referenced_tweets'][0]`
</code></pre>
<p>producing (<em>shortened output</em>)</p>
<pre><code> 'id': '1392893055112400898',
'public_metrics': {'retweet_count': 0,
'reply_count': 1,
'like_count': 2,
'quote_count': 0},
'conversation_id': '1392893055112400898',
'created_at': '2021-05-13T17:22:37.000Z',
'reply_settings': 'everyone',
'entities': {'annotations': [{'start': 65,
'end': 77,
'probability': 0.9719000000000001,
'type': 'Person',
'normalized_text': 'Jill McMillan'}],
'mentions': [{'start': 23,
'end': 36,
'username': 'usasklibrary',
'protected': False,
'description': 'The official account of the University Library at USask.',
'created_at': '2019-06-04T17:19:12.000Z',
'entities': {'url': {'urls': [{'start': 0,
'end': 23,
'url': '*removed*',
'expanded_url': 'http://library.usask.ca',
'display_url': 'library.usask.ca'}]}},
'name': 'University Library',
'url': '....',
'profile_image_url': 'https://pbs.twimg.com/profile_images/1278828446026629120/G1w7t-HK_normal.jpg',
'verified': False,
'id': '1135959197902921728',
'public_metrics': {'followers_count': 365,
'following_count': 119,
'tweet_count': 556,
'listed_count': 9}}]},
'text': 'Wonderful session with @usasklibrary Graduate Writing Specialist Jill McMillan who is walking SURE students through the process of organizing/analyzing a literature review! So grateful to the library -- our largest SURE: Student Undergraduate Research Experience partner!',
...
</code></pre>
<p>My intention is to create a function that would auto-extract specific columns (e.g. text, type) from the entire dataframe (not just a row). So I wrote the function:</p>
<pre><code>### x = df['referenced_tweets']
def extract_TextType(x):
dic = {}
for i in x:
if i != " ":
new_df= pd.DataFrame.from_dict(i)
dic['refd_text']=new_df['text']
dic['refd_type'] = new_df['type']
else:
print('none')
return dic
</code></pre>
<p>However running the function:</p>
<pre><code>df['referenced_tweets'].apply(extract_TextType)
</code></pre>
<p>produces an error:</p>
<pre><code>ValueError: Mixing dicts with non-Series may lead to ambiguous ordering.
</code></pre>
<p>The whole point is to extract these two nested columns (texts & type) from the original "referenced tweets" column and match them to the original rows.</p>
<p>What am I doing wrong pls?</p>
<p>P.S.
A screenshot of the original df is below:
<a href="https://i.stack.imgur.com/Fzpw0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fzpw0.png" alt="original DF containing the target column" /></a></p>
|
<p>A couple of things to consider here. <code>referenced_tweets</code> holds a list, so this line <code>new_df = pd.DataFrame.from_dict(i)</code> is most likely not parsing correctly the way you are entering it.</p>
<p>Also, because it's possible there are multiple tweets in that list, you are correctly iterating over it, but you don't need to put it into a df to do so. Your approach will also create a new dictionary in each cell because you are using <code>.apply()</code>; if that's what you want, that is OK. If you really just want a new dataframe, you can adapt the following. I don't have access to <code>referenced_tweets</code>, so I'm using <code>entities</code> as an example.
Here's my example:</p>
<pre><code>ents = df[df.entities.notnull()]['entities']
dict_hold_list = []
for ent in ents:
# print(ent['hashtags'])
for htag in ent['hashtags']:
# print(htag['text'])
# print(htag['indices'])
dict_hold_list.append({'text': htag['text'], 'indices': htag['indices']})
df_hashtags = pd.DataFrame(dict_hold_list)
</code></pre>
<p>Because you have not provided a good working json or dataframe, I can't test this, but your solution could look like this</p>
<pre><code>refs = df[df.referenced_tweets.notnull()]['referenced_tweets']
dict_hold_list = []
for ref in refs:
# print(ref)
for r in ref:
# print(r['text'])
# print(r['type'])
dict_hold_list.append({'text': r['text'], 'type': r['type']})
df_ref_tweets = pd.DataFrame(dict_hold_list)
</code></pre>
|
python-3.x|pandas|dictionary|twitter|jsonlines
| 0
|
376,825
| 67,726,123
|
Postgresql batch insert for csv file/dataframe (on GCP)
|
<p>I have a csv file with two columns, <code>[key, chunk]</code>, which I need to insert to into a SQL db table. (Amplifying info- Postgresql db hosted on GCP, I'm able to select and perform other db operations fine.)</p>
<p>My csv file has over 10 million rows. And so, I'm curious what's the best batch insert option available to me, specific to Postgresql syntax? Would opening the csv file as a pandas dataframe help at all? Because of the size of the file, I'd like to avoid iterative row insertions.</p>
|
<p>You can simply use the <a href="https://cloud.google.com/sql/docs/postgres/import-export/importing#gcloud" rel="nofollow noreferrer">import function in Cloud SQL</a> to load a CSV file in the database. Then, run a query to select the value that you want and to merge them in the target table.</p>
<p><em>When you can, prefer the built-in/native feature over a self-made one!</em></p>
|
sql|pandas|postgresql|google-cloud-platform|batch-processing
| 0
|
376,826
| 67,886,396
|
What is the Vaex command for pd.isnull().sum()?
|
<p>Someone please give me a VAEX alternative for this code:</p>
<pre><code>df_train = vaex.open('../input/ms-malware-hdf5/train.csv.hdf5')
total = df_train.isnull().sum().sort_values(ascending = False)
</code></pre>
|
<p>Vaex does not at this time support counting missing values on a dataframe level, only on an expression (column) level. So you will have to do a bit of work yourself.</p>
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import vaex
import vaex.ml
import pandas as pd
df = vaex.ml.datasets.load_titanic()
count_na = [] # to count the missing value per column
for col in df.column_names:
count_na.append(df[col].isna().sum().item())
s = pd.Series(data=count_na, index=df.column_names).sort_values(ascending=True)
</code></pre>
<p>If you think this is something you might need to use often, it might be worth it to create your own dataframe method following <a href="https://vaex.readthedocs.io/en/latest/tutorial.html#Adding-DataFrame-accessors" rel="nofollow noreferrer">this example</a>.</p>
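<p>Alternatively, a plain helper function built from the same calls is a minimal sketch (no custom accessor needed):</p>
<pre><code>def count_missing(df):
    """Return a pandas Series with the number of missing values per column of a vaex df."""
    return pd.Series({col: df[col].isna().sum().item() for col in df.column_names})

count_missing(df).sort_values(ascending=False)
</code></pre>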
|
pandas|dataframe|vaex
| 2
|
376,827
| 67,648,160
|
What's the fastest way to produce rolling window embeddings in time series data?
|
<p>I'm interested in transforming a typical time series dataset (one dimension) into a matrix consisting of every possible sequential combination of the original one. My stride is always 1 (might change in the future), window size should change according to preference, overlaps are encouraged and my focus is intraday data, meaning that combinations can only stem from the same day, one day at a time.</p>
<p>Here is a sample dataset</p>
<pre><code>import pandas as pd
date_1 = pd.date_range('2015-02-24', periods=5, freq='1T')
date_2 = pd.date_range('2015-02-25', periods=5, freq='1T')
date = date_1.union(date_2)
values = range(len(date))
df = pd.DataFrame({'date': date, 'values': values})
</code></pre>
<p>Given a window size of 3, do you know of any fast, preferably Pythonic way to end up with the following output</p>
<pre><code>0 1 2
1 2 3
2 3 4
5 6 7
6 7 8
7 8 9
</code></pre>
<p>I've messed around with <code>group_by</code> but wasn't able to come up with the demonstrated result.</p>
|
<p>Group the column <code>values</code> on <code>date</code>, then inside a list comprehension iterate over each group and apply the <code>sliding_window_view</code> transformation; finally, vertically stack all the sliding views corresponding to each group.</p>
<p>For numpy version >= <code>1.20</code></p>
<pre><code>from numpy.lib.stride_tricks import sliding_window_view
grp = df['values'].groupby(df['date'].dt.floor('D'))
np.vstack([sliding_window_view(v, 3) for _, v in grp])
</code></pre>
<p>For numpy version < <code>1.20</code></p>
<pre><code>def sliding_view(a, w):
s = a.strides[0]
shape = a.shape[0] - w + 1, w
return np.lib.stride_tricks.as_strided(a, shape, (s, s))
grp = df['values'].groupby(df['date'].dt.floor('D'))
np.vstack([sliding_view(v.values, 3) for _, v in grp])
</code></pre>
<hr />
<pre><code>array([[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[5, 6, 7],
[6, 7, 8],
[7, 8, 9]])
</code></pre>
|
pandas|datetime|time-series
| 2
|
376,828
| 67,692,862
|
tensorflow dependencies continuously gives me errors in colab during installation of deepspeech environment
|
<p>when I run the following command on Google Colab</p>
<pre><code>!pip3 install --upgrade pip==20.0.2 wheel==0.34.2 setuptools==46.1.3
!pip3 install --upgrade --force-reinstall -e .
</code></pre>
<p>Got an error</p>
<pre><code> ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-probability 0.12.1 requires gast>=0.3.2, but you have gast 0.2.2 which is incompatible.
networkx 2.5.1 requires decorator<5,>=4.3, but you have decorator 5.0.9 which is incompatible.
moviepy 0.2.3.5 requires decorator<5.0,>=4.0.2, but you have decorator 5.0.9 which is incompatible.
google-colab 1.0.0 requires pandas~=1.1.0; python_version >= "3.0", but you have pandas 1.2.4 which is incompatible.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.25.1 which is incompatible.
google-colab 1.0.0 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
Successfully installed Mako-1.1.4 MarkupSafe-2.0.1 PrettyTable-2.1.0 PyYAML-5.4.1 absl-py-0.12.0 alembic-1.6.4 appdirs-1.4.4 astor-0.8.1 attrdict-2.0.1 attrs-21.2.0 audioread-2.1.9 beautifulsoup4-4.9.3 bs4-0.0.1 cached-property-1.5.2 certifi-2020.12.5 cffi-1.14.5 chardet-4.0.0 cliff-3.7.0 cmaes-0.8.2 cmd2-1.5.0 colorama-0.4.4 colorlog-5.0.1 decorator-5.0.9 deepspeech-training-0.9.3 ds-ctcdecoder-0.9.3 gast-0.2.2 google-pasta-0.2.0 greenlet-1.1.0 grpcio-1.38.0 h5py-3.2.1 idna-2.10 importlib-metadata-4.0.1 joblib-1.0.1 keras-applications-1.0.8 keras-preprocessing-1.1.2 librosa-0.8.0 llvmlite-0.31.0 markdown-3.3.4 numba-0.47.0 numpy-1.18.5 opt-einsum-3.3.0 optuna-2.7.0 opuslib-2.0.0 packaging-20.9 pandas-1.2.4 pbr-5.6.0 pooch-1.3.0 progressbar2-3.53.1 protobuf-3.17.1 pycparser-2.20 pyparsing-2.4.7 pyperclip-1.8.2 python-dateutil-2.8.1 python-editor-1.0.4 python-utils-2.5.6 pytz-2021.1 pyxdg-0.27 requests-2.25.1 resampy-0.2.2 scikit-learn-0.24.2 scipy-1.6.3 semver-2.13.0 setuptools-57.0.0 six-1.16.0 soundfile-0.10.3.post1 soupsieve-2.2.1 sox-1.4.1 sqlalchemy-1.4.15 stevedore-3.3.0 tensorboard-1.15.0 tensorflow-1.15.4 tensorflow-estimator-1.15.1 termcolor-1.1.0 threadpoolctl-2.1.0 tqdm-4.61.0 typing-extensions-3.10.0.0 urllib3-1.26.4 wcwidth-0.2.5 werkzeug-2.0.1 wheel-0.36.2 wrapt-1.12.1 zipp-3.4.1
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: The following packages were previously imported in this runtime:
[astor,cffi,colorama,dateutil,decorator,google,numpy,pandas,pkg_resources,pyparsing,pytz,six,wcwidth]
You must restart the runtime in order to use newly installed versions.
</code></pre>
<p>When I update gast from version 0.2.2 to 0.3.2 it says it requires gast version 0.2.2 again, and when I downgrade from gast version 0.3.2 back to 0.2.2 it still complains about the gast version (and vice versa).
<a href="https://i.stack.imgur.com/BjXdA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BjXdA.png" alt="enter image description here" /></a></p>
|
<p>First, do not forget to restart the runtime after running your current cell, i.e.</p>
<pre><code>!pip3 install folium==0.2.1
!pip3 install --upgrade pip==20.0.2 wheel==0.34.2 setuptools==46.1.3
!pip3 install --upgrade --force-reinstall -e .
# Restart Colab after this cell
</code></pre>
<p><strong>I was getting the same errors. But it does not affect the training, as we re-install <code>tensorflow-gpu==1.15.2</code> (DeepSpeech's dependency) in a later step</strong> (please set up your GPU configuration first, then uninstall and reinstall TensorFlow).</p>
<pre><code>### Setting-up GPU for training
#### Checking if we have nvidia or not
!nvcc --version
!nvidia-smi
#### Setting up environment variables for cuda
! echo $PATH
import os
os.environ['PATH'] += ":/usr/local/cuda-10.0/bin"
os.environ['CUDADIR'] = "/usr/local/cuda-10.0"
os.environ['LD_LIBRARY_PATH'] = "/usr/lib64-nvidia:/usr/local/cuda-10.0/lib64"
!echo $PATH
!echo $LD_LIBRARY_PATH
!source ~/.bashrc
!env | grep -i cuda
#### Adding freeglut and other OPENGL packaged for deepspeech requirement satisfaction
%cd /content/
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
!sudo apt-get install build-essential dkms
!sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
!sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
!sudo apt-get update
!sudo apt-get install cuda-10-0
!sudo rm /usr/local/cuda
!sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda
%ls -l /usr/local/
**# Finally uninstalling and reinstalling tensorflow**
!pip3 uninstall tensorflow
!pip3 install 'tensorflow-gpu==1.15.2'
</code></pre>
<p>Also use following snippet once you finished installing all dependencies to fix broken dependencies</p>
<pre><code>!sudo apt-get -o Dpkg::Options::="--force-overwrite" install --fix-broken
</code></pre>
|
python|tensorflow|google-colaboratory|mozilla-deepspeech
| 1
|
376,829
| 67,605,983
|
how to read an excel file using pandad pd.read_excel in databricks from /Filestore/tables/ directory?
|
<p>Hi, I am trying to read an Excel file that's uploaded to the DBX FileStore from the UI. I can see that the file is available under the /FileStore/tables directory, and I am trying to create a pandas dataframe using the code below.</p>
<pre><code>import pandas as pd
df = pd.read_excel("/dbfs/FileStore/tables/abc.xlsx")
display(df)
</code></pre>
<p>I am getting the error below</p>
<p>FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/tables/abc.xlsx</p>
<p>I understand that the path is not relative to my current working directory. I would like to know how I can point to the file in the FileStore using Python.</p>
<p>things I have tried:</p>
<p>I have used /FileStore/tables/abc.xlsx in the path and it didn't work</p>
<p>I know the Scala code with the spark-excel jar works, but I can't execute Scala commands, as my org didn't and won't give me access to do so.</p>
<p>any ideas how to get this working?</p>
|
<p>The file is not stored as an excel file when you create a table. You access the data via the <a href="https://docs.databricks.com/data/tables.html#language-python" rel="nofollow noreferrer">Spark API</a>.</p>
<p>You could also read the table into a koalas dataframe and then convert it to pandas if you don't want to use koalas.</p>
<p>If you don't want to use Spark or koalas, then upload the file to /dbfs/FileStore and use read_excel from the file in that location.</p>
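<p>For that last option, a minimal sketch could look like the following (the folder name is a placeholder, and this assumes the .xlsx file itself was uploaded to the FileStore and that <code>openpyxl</code> is available on the cluster):</p>
<pre><code>import pandas as pd

# files uploaded to the FileStore are visible to Python code through the /dbfs mount
df = pd.read_excel("/dbfs/FileStore/my_uploads/abc.xlsx", engine="openpyxl")
display(df)
</code></pre>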
|
python|pandas|pyspark|databricks
| 0
|
376,830
| 68,011,337
|
need to add a column together and put the average beneath the column in Pandas
|
<p>I'm currently trying to add together a column that has two rows, as such:</p>
<p><a href="https://i.stack.imgur.com/q8FXP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q8FXP.png" alt="Housekeeping" /></a></p>
<p>Now I just need to add row 1 and 2 together for each column, and I want to append the average underneath the given column for their respective header name. I currently have this:</p>
<pre><code>for x in sub_houseKeeping:
if "phch" in x:
sub_houseKeeping['Average'] = sub_houseKeeping[x].sum()/2
</code></pre>
<p>However, this adds together the entire row and appends it to the end of the rows, not the bottom of the column as I wished. How can I fix it to add to the bottom of the column?</p>
|
<p>Try this:</p>
<pre><code>sub_houseKeeping = pd.DataFrame({'ID':['200650_s_at','1565446_at'], 'phchp003v1':[2174.84972,6.724141107], 'phchp003v2':[444.9008362,4.093883364]})
sub_houseKeeping = sub_houseKeeping.append(pd.DataFrame(sub_houseKeeping.mean(axis=0)).T, ignore_index=True)
</code></pre>
<p>Output:</p>
<pre><code>print(sub_houseKeeping)
ID phchp003v1 phchp003v2
0 200650_s_at 2174.849720 444.900836
1 1565446_at 6.724141 4.093883
2 NaN 1090.786931 224.497360
</code></pre>
|
python|pandas|csv
| 0
|
376,831
| 68,004,473
|
Compare values in a list to a data frame and if true, amend value to item in data frame
|
<pre><code>import pandas as pd
trade_count = 3
Buyer = ["Company", "Company", "Company"]
'''
MAPPING = pd.read_excel(r"H:\Metals_tempest\MAPPING.xlsx", na_filter = False)
print(Buyer)
# Buyer = pd.DataFrame({'col':Buyer})
# print (df)
for i in range (0, trade_count):
# Buyer[i] = MAPPING['Tempest'].where(MAPPING['Ticket'] == Buyer[i])
# Buyer.loc[Buyer[i]==MAPPING[i],Buyer[i]] = MAPPING['Tempest']
# Buyer[i] = MAPPING['Tempest'] = MAPPING.isin(Buyer).any()
print(Buyer)
'''
</code></pre>
<p><a href="https://i.stack.imgur.com/sCfBK.png" rel="nofollow noreferrer">Here is a screenshot of my mapping table</a></p>
<p>I want to be able to read the Ticket column in my mapping table, see if Buyer[i] exists in the mapping table, and if it does, remap the value to the corresponding Tempest column value from my mapping table.</p>
<p><a href="https://i.stack.imgur.com/29OJI.png" rel="nofollow noreferrer">EG when mum == mum and dad == dad, new variable = child</a></p>
|
<p>You can convert <code>MAPPING</code> to a dictionary and use it in a list comprehension:</p>
<pre><code>map_dict = MAPPING.set_index("Ticket")["Tempest"].to_dict()
Buyer = [map_dict.get(i, i) for i in Buyer]
</code></pre>
|
python|pandas|dataframe
| 0
|
376,832
| 67,916,283
|
Deeplearning with Python Tensorflow,Keras
|
<p>I am writing a master's thesis about deep learning and have a problem, probably related to a library.</p>
<p>Below is the error:</p>
<pre><code>AttributeError: module 'tensorflow.compat.v2' has no attribute '__internal__'
</code></pre>
<p>Model:</p>
<pre><code>import tensorflow
from tensorflow import keras
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(32, input_shape=(784,)))
model.add(layers.Dense(32))
</code></pre>
|
<p>I think the problem lies in the way you are importing the modules you need. Try it this way:</p>
<pre><code>import tensorflow
from tensorflow.keras import models, layers
model = models.Sequential()
model.add(layers.Dense(32, input_shape=(784,)))
model.add(layers.Dense(32))
model.summary()
</code></pre>
<p>If you get the summary of your network, it means that everything is working fine; otherwise it could mean that you haven't installed TensorFlow properly.</p>
<p>For reference this is the summary:</p>
<pre><code>Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 32) 25120
_________________________________________________________________
dense_1 (Dense) (None, 32) 1056
=================================================================
Total params: 26,176
Trainable params: 26,176
Non-trainable params: 0
_________________________________________________________________
</code></pre>
|
python|tensorflow|keras
| 1
|
376,833
| 67,989,744
|
Pandas replacing values in a column by values in another column
|
<p>Let's say I have the following dataframe X (ppid is unique):</p>
<pre><code> ppid col2 ...
1 'id1' '1'
2 'id2' '2'
3 'id3' '3'
...
</code></pre>
<p>I have another dataframe which serves as a mapping. ppid is same as above and unique, however it might not contain all X's ppids:</p>
<pre><code> ppid val
1 'id1' '5'
2 'id2' '6'
</code></pre>
<p>I would like to use the mapping dataframe to switch col2 in dataframe X according to where the ppids are equal (in reality, they're multiple columns which are unique together), to get:</p>
<pre><code> ppid col2 ...
1 'id1' '5'
2 'id2' '6'
3 'id3' '3' # didn't change, as there's no match
...
</code></pre>
|
<p>Try using <code>map</code> with <code>set_index</code>:</p>
<pre><code>df_x = pd.DataFrame({'ppid':['id1','id2','id3'], 'col2':[*'123']})
df_a = pd.DataFrame({'ppid':['id1','id2'], 'val':[*'56']})
df_x['col2'] = df_x['ppid'].map(df_a.set_index('ppid')['val']).fillna(df_x['col2'])
</code></pre>
<p>Output:</p>
<pre><code> ppid col2
0 id1 5
1 id2 6
2 id3 3
</code></pre>
|
python|pandas|dataframe
| 2
|
376,834
| 67,644,774
|
Force index reset in pandas
|
<p>Below is how I prepare the DataFrame from a candle list obtained from an API</p>
<p>The candle list contains the open, high, low, close, and volume values as a nested list:</p>
<pre><code>candles = [ [time.time() , random(1000 ,9999) , random(1000 ,9999) ,random(1000 ,9999),random(1000 ,9999),random(1000 ,9999) ] for i in range(10) ]
def handler(candles):
date_time, open_lst , high_lst , low_lst , close_lst , volume_lst = [],[],[],[],[],[]
for item in candles:
dt = datetime.fromtimestamp(float(item[0])/1000)
date_time.append(dt)
open_lst.append(float(item[1]))
high_lst.append(float(item[2]))
low_lst.append(float(item[3]))
close_lst.append(float(item[4]))
volume_lst.append(float(item[5]))
## creating the data frame
coin_data_frame = {
'dt': date_time,
'open': open_lst,
'high': high_lst,
'low': close_lst,
'close': close_lst,
'volume': volume_lst }
df = pd.DataFrame(coin_data_frame , columns= ['dt','open','high','low', 'close', 'volume'] )
rolling_mean = df['close'].rolling(window=5, min_periods=5 ).mean()
rolling_mean2 = df['close'].rolling(window=10, min_periods=10 ).mean()
df['5_sma'] = rolling_mean
df['10_sma'] = rolling_mean2
df.dropna(subset = ["5_sma"], inplace=True)
df.dropna(subset = ["10_sma"], inplace=True)
puts(colored.yellow(str(df)))
return df.reset_index(drop=True, inplace=True)
</code></pre>
<p>The line <code>puts(colored.yellow(str(df)))</code> shows the dataframe, but the index does not start from 0; instead it starts from 9 for some reason. I tried to use <code>df.reset_index(drop=True, inplace=True)</code>, but it does not seem to fix my problem; I can still see that the dataframe index starts at 9.</p>
|
<p>Use either</p>
<pre><code>return df.reset_index(drop=True)
</code></pre>
<p>or</p>
<pre><code>df.reset_index(drop=True, inplace=True)
return df
</code></pre>
<p>You have used <code>return df.reset_index(drop=True, inplace=True) </code> which is wrong because when <code>inplace=True</code> the method returns None. Check <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer">docs</a> here.</p>
<h3>Fixed code:</h3>
<pre><code>def handler(candles):
date_time, open_lst , high_lst , low_lst = [],[],[],[]
close_lst , volume_lst = [],[]
for item in candles:
dt = datetime.fromtimestamp(float(item[0])/1000)
date_time.append(dt)
open_lst.append(float(item[1]))
high_lst.append(float(item[2]))
low_lst.append(float(item[3]))
close_lst.append(float(item[4]))
volume_lst.append(float(item[5]))
## creating the data frame
coin_data_frame = {
'dt': date_time,
'open': open_lst,
'high': high_lst,
'low': close_lst,
'close': close_lst,
'volume': volume_lst }
df = pd.DataFrame(coin_data_frame ,
columns=['dt','open','high','low', 'close', 'volume'] )
rolling_mean = df['close'].rolling(window=5, min_periods=5 ).mean()
rolling_mean2 = df['close'].rolling(window=10, min_periods=10 ).mean()
df['5_sma'] = rolling_mean
df['10_sma'] = rolling_mean2
df.dropna(subset = ["5_sma"], inplace=True)
df.dropna(subset = ["10_sma"], inplace=True)
df.reset_index(drop=True, inplace=True)
return df
candles = [ [time.time() , np.random.randint(1000 ,9999) ,
np.random.randint(1000 ,9999) ,
np.random.randint(1000 ,9999),
np.random.randint(1000 ,9999),
np.random.randint(1000 ,9999) ] for i in range(10) ]
print (handler(candles))
</code></pre>
<p>Output:</p>
<pre><code>dt open high ... volume 5_sma 10_sma
0 1970-01-19 18:27:20.858940 2370.0 1095.0 ... 4547.0 5433.6 4643.6
[1 rows x 8 columns]
</code></pre>
|
python|pandas
| 0
|
376,835
| 67,734,217
|
how to change datetimeindex to just contain date in python
|
<p>I have a column called date with these values:</p>
<pre><code>> DatetimeIndex(['2014-02-19'], dtype='datetime64[ns]', freq=None)
> DatetimeIndex(['2013-02-29'], dtype='datetime64[ns]', freq=None)
> DatetimeIndex(['2018-04-15'], dtype='datetime64[ns]', freq=None)
</code></pre>
<p>How do I modify the column to just extract the date values and get rid of words like DatetimeIndex, brackets, etc.?</p>
<pre><code>> 2014-02-19
> 2013-02-19
> 2018-04-15
</code></pre>
<p>The code I wrote is, I think, pretty incorrect, but I'm still attaching it here:</p>
<pre><code>def fundate(x):
return x[0][0]
df['date'] = df.apply(lambda row : fundate(row['date']), axis = 1)
</code></pre>
<p>could someone please help me?</p>
|
<p>Do you mean something like this (not tested)?</p>
<pre><code>def fundate(x):
return x.strftime("%Y-%m-%d")
</code></pre>
<p>From <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html</a></p>
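<p>If each cell really holds a one-element <code>DatetimeIndex</code> (as in the sample shown), a sketch of applying this could be (assuming the column is called <code>date</code>):</p>
<pre><code># take the single timestamp out of each DatetimeIndex and format it as a string
df['date'] = df['date'].apply(lambda x: x[0].strftime("%Y-%m-%d"))
</code></pre>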
|
python|python-3.x|pandas|datetime|strptime
| 0
|
376,836
| 67,682,674
|
Filter a numpy array by function
|
<p>I have a 2D numpy array in the following form:</p>
<pre><code>array([[0, 4],
[1, 5],
[2, 6]])
</code></pre>
<p>I want to filter out the rows whose first value is bigger than 1, but I couldn't find a numpy function to do so.</p>
<p>I know that I can use <code>filter</code>:</p>
<pre><code>np.array(list(filter((lambda x: x[0] <= 1), my_arr)))
</code></pre>
<p>This approach is not efficient, since I need to convert the result into a list and only then into a numpy array. Is there a better way?</p>
|
<p>There is no <code>numpy</code> interface to do this efficiently with a function. However, in this <em>particular</em> case, you just want something like:</p>
<pre><code>>>> import numpy as np
>>> arr = np.array([[0, 4],
... [1, 5],
... [2, 6]])
>>> arr[arr[:,0] <= 1]
array([[0, 4],
[1, 5]])
</code></pre>
|
python|numpy
| 1
|
376,837
| 67,803,553
|
Pandas: tidy multilevel data
|
<p>I'm trying to get this dataset into a tidy format in pandas.</p>
<p><a href="https://i.stack.imgur.com/mbvY2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mbvY2.jpg" alt="enter image description here" /></a></p>
<p>I need to melt/reshape this in such a way as to have one <strong>id</strong> column, one for <strong>Side (Left/Right)</strong>, one for <strong>Section (1/2/3)</strong>, one for <strong>Size</strong> and another for <strong>Distance</strong>:</p>
<p><a href="https://i.stack.imgur.com/erfQN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/erfQN.jpg" alt="enter image description here" /></a></p>
<p>I'm quite new to Python (Pandas in particular) and I've tried to follow this example: <a href="https://stackoverflow.com/questions/40319532/tidy-data-from-multilevel-excel-file-via-pandas">Tidy data from multilevel Excel file via pandas</a></p>
<p>However, I'm not fully understanding how to get this done.
Any advice would be very welcome!</p>
<p>EDIT: here's a sample file:
<a href="https://wetransfer.com/downloads/bc88b005b185c48eee99fd9583483f4720210602112358/b5f691" rel="nofollow noreferrer">https://wetransfer.com/downloads/bc88b005b185c48eee99fd9583483f4720210602112358/b5f691</a></p>
|
<p>First create <code>MultiIndex</code> by <code>header</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer"><code>read_excel</code></a> and then reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> by first and second level, last set names of axis by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a> and create columns from <code>MultiIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p>
<pre><code>df = pd.read_excel('test_data.xlsx', header=[0,1,2], index_col=0)
df = df.stack([0,1]).rename_axis(index=['id','Side','Section'], columns=None).reset_index()
print (df)
id Side Section Distance Size
0 f1 Left 1 234 12
1 f1 Left 2 678 456
2 f1 Left 3 1122 900
3 f1 Right 1 1566 1344
4 f1 Right 2 2010 1788
5 f1 Right 3 2454 2232
6 f2 Left 1 453 33
7 f2 Left 2 1293 873
8 f2 Left 3 2133 1713
9 f2 Right 1 2973 2553
10 f2 Right 2 3813 3393
11 f2 Right 3 4653 4233
</code></pre>
|
python|pandas|dataframe
| 2
|
376,838
| 67,758,456
|
Merging the column names based on the values to create another column
|
<p>I have a movies dataset containing various movie genres and whether the movie belongs to that genre or not. E.g.</p>
<pre><code>Index Biography Comedy Crime Documentary Drama Family Fantasy
0 0 1 0 0 1 1 0
1 0 1 0 0 0 1 0
2 0 0 0 0 1 0 0
3 0 1 0 0 0 0 0
4 0 1 0 0 1 0 0
5 0 0 1 0 1 0 0
6 0 1 0 0 0 0 0
</code></pre>
<p>I would like to get a new column containing the genre names separated by a space or a comma if the movie belongs to that genre, like:</p>
<pre><code>Index New column
0 Comedy Drama Family
1 Comedy Family
2 Drama
3 Comedy
4 Comedy Drama
5 Crime Drama
</code></pre>
<p>Please share the code in R or Python.
Thank you for your help.</p>
|
<p>With matrix multiplication in Python:</p>
<pre><code>df.dot(df.columns + " ")
</code></pre>
<p>to get</p>
<pre><code>Index
0 Comedy Drama Family
1 Comedy Family
2 Drama
3 Comedy
4 Comedy Drama
5 Crime Drama
6 Comedy
</code></pre>
<hr>
<p>To make it more generic:</p>
<pre><code>sep = ", "
df.dot(df.columns + sep).str.rstrip(sep)
</code></pre>
<p>i.e., add the separator to column names, perform the matrix-vector multiplication and then right strip the separator at the end.</p>
|
python|r|pandas|dataframe|data-preprocessing
| 2
|
376,839
| 67,636,478
|
Python/Numpy: Vectorizing repeated row insertion in a 2D array
|
<p>Is it possible to vectorize the insertion of rows?</p>
<p>I have a large 2D numpy array <code>arr</code> (below) and a list of <code>indices</code>. For each index of <code>arr</code> in <code>indices</code> I would like to insert the row at that index back into <code>arr</code> 5 more times at that same index.</p>
<pre><code>indices = [2, 4, 5, 9, 11, 12, 16, 18, 19]
</code></pre>
<p>Currently I'm just looping through all the indices and inserting new rows. This approach is slow for a large list of thousands of rows, so for performance reasons I'm wondering whether it is possible to vectorize this multi-point tile-type insertion?</p>
<pre><code>arr = [
[' ', ' ', 'd'],
[' ', 'd', ' '],
[' ', 'd', 'd'],  # <-- reinsert arr[2] here 5 times
['d', ' ', ' '],
['d', ' ', 'd'],  # <-- reinsert arr[4] here 5 times
['d', 'd', ' '],  # <-- reinsert arr[5] here 5 times
['d', 'd', 'd'],
[' ', ' ', 'e'],
[' ', 'e', ' '],
[' ', 'e', 'e'],  # <-- reinsert arr[9] here 5 times
['e', ' ', ' '],
['e', ' ', 'e'],  # <-- reinsert arr[11] here 5 times
['e', 'e', ' '],  # <-- reinsert arr[12] here 5 times
['e', 'e', 'e'],
[' ', ' ', 'f'],
[' ', 'f', ' '],
[' ', 'f', 'f'],  # <-- reinsert arr[16] here 5 times
['f', ' ', ' '],
['f', ' ', 'f'],  # <-- reinsert arr[18] here 5 times
['f', 'f', ' ']   # <-- reinsert arr[19] here 5 times
]
</code></pre>
<p>Example of first insertion of desired result:</p>
<pre><code>arr = [
[' ', ' ', 'd'],
[' ', 'd', ' '],
[' ', 'd', 'd'],  # <-- arr[2]
[' ', 'd', 'd'],  # <-- new insert
[' ', 'd', 'd'],  # <-- new insert
[' ', 'd', 'd'],  # <-- new insert
[' ', 'd', 'd'],  # <-- new insert
[' ', 'd', 'd'],  # <-- new insert
['d', ' ', ' ']
#...
]
</code></pre>
|
<p>You could use <code>np.repeat</code> for this:</p>
<pre><code>indices = [2, 4, 5, 9, 11, 12, 16, 18, 19]
rpt = np.ones(len(arr), dtype=int)
rpt[indices] = 5
np.repeat(arr, rpt, axis=0)
</code></pre>
|
python|numpy|multidimensional-array|vectorization|masking
| 3
|
376,840
| 67,959,516
|
How to efficiently find the indices a first array values matching with a second array values?
|
<p>I have two numpy arrays A and B. A has shape <code>(10000000, 3)</code> and B has shape <code>(1000000, 3)</code>. Both arrays are XYZ coordinates such that B corresponds to some region of A. I have to find the indexes of A which correspond to the values in B.
Right now I am solving it as below. I would like some help in optimizing this using numpy or other Python packages.</p>
<pre class="lang-py prettyprint-override"><code>extract_BinA=np.empty(B.shape[0])
for i in range(B.shape[0]):
for j in range(A.shape[0]):
if(A[j][0]==B[i][0] and A[j][1]==B[i][1] and A[j][2]==B[i][2]):
extract_BinA[i]=j
</code></pre>
|
<p>The issue here is not the speed of pure-python code, but the <strong>algorithm</strong> itself. You can use <strong>sorted-arrays</strong> or <strong>hash-tables</strong> to improve the complexity of the algorithm to <code>O(n log n)</code> or even <code>O(n)</code> rather than the slow current <code>O(n^2)</code> solution (as well as the solution proposed by @Mazen). An <code>O(n^2)</code> approach cannot be efficient here since it results in roughly <code>10,000,000 * 1,000,000 = 10,000 billion operations</code>, which is too much for any modern computer.</p>
<p>Here is a hash-table solution in pure Python:</p>
<pre class="lang-py prettyprint-override"><code>table = {tuple(A[i]):i for i in range(A.shape[0])}
extract_BinA = np.empty(B.shape[0])
for i in range(B.shape[0]):
val = tuple(B[i])
if val in table:
extract_BinA[i] = table[val]
</code></pre>
<p>Note that the result may differ if there are multiple points in the same location in <code>A</code>.</p>
<p>Here is a benchmark with two random array of size 10,000:</p>
<pre class="lang-none prettyprint-override"><code>Initial solution: 53.82 s
Mazen solution: 1.76 s
This solution: 0.02 s
</code></pre>
<p>On this small input, the above code is <strong>2700 times faster</strong> than the initial solution and 88 times faster than the proposed alternative solution. On bigger input, the gap will be much bigger and the above code is many order of magnitude faster than the two other solutions (ie. >10000 times faster).</p>
<hr />
<h2>Update:</h2>
<p>If there are multiple points in <code>A</code> that are equal to each other, then the dictionary can be modified to store a list of indices rather than one value. Alternatively, the dictionary can be created so that the first value is kept, as in the original code. Here are examples of the two solutions:</p>
<pre class="lang-py prettyprint-override"><code>table = dict()
for i in range(A.shape[0]):
key = tuple(A[i])
if key in table:
table[key].append(i)
else:
table[key] = [i]
extract_BinA = np.empty(B.shape[0])
for i in range(B.shape[0]):
val = tuple(B[i])
if val in table:
# Here table[val] is a list and thus you
# can do whatever you want with the indices.
# For example you can take the first one like here,
# or possibly the last as you want.
extract_BinA[i] = table[val][0]
</code></pre>
<pre class="lang-py prettyprint-override"><code># Select always directly the first index
table = dict()
for i in range(A.shape[0]):
key = tuple(A[i])
if key not in table:
table[key] = i
extract_BinA = np.empty(B.shape[0])
for i in range(B.shape[0]):
val = tuple(B[i])
if val in table:
extract_BinA[i] = table[val]
</code></pre>
<p>Note that these solutions are a bit slower than the above code, but the complexity is still linear (and thus still very fast).</p>
|
python|performance|numpy
| 4
|
376,841
| 67,922,408
|
Zooming in on Mandelbrot set
|
<p>I have the following code:</p>
<pre><code>
# MANDELBROT
a = np.arange(-2, 2, 0.01)
b = np.arange(-2, 2, 0.01)
M_new = new_matrix(a, b)
plt.imshow(M_new, cmap='gray', extent=(-2, 2, -2, 2))
plt.show()
## ZOOMING
a_2 = np.arange(0.1, 0.5, 0.01)
b_2 = np.arange(0.1, 0.5, 0.01)
M_new_2 = new_matrix(a_2, b_2)
plt.imshow(M_new_2, cmap='gray', extent=(0.1, 0.5, 0.1, 0.5))
plt.show()
</code></pre>
<p>When I plot the Mandelbrot set it looks fine, but what I want to do next is zoom in on a specific part of the set (between 0.1 and 0.5 on both the x- and y-axis). I do not know if the second part of the code is incorrect or maybe I am thinking about it the wrong way.</p>
|
<p>I made several changes to your code, mainly respecting the shape of the input arrays.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
def mandelbrot(c: complex):
m = 0
z = complex(0, 0)
for i in range(0, 100):
z = z*z+c
m += 1
if np.abs(z) > 2:
return False
return True
def new_matrix(a_1, b_1):
M = np.zeros((a_1.shape[0], b_1.shape[0]))
for i, number1 in enumerate(a_1):
for j, number2 in enumerate(b_1):
M[i, j] = mandelbrot(complex(number1, number2))
return M
# MANDELBROT
a = np.arange(-2, 2, 0.01)
b = np.arange(-2, 2, 0.01)
M_new = new_matrix(a, b)
plt.imshow(M_new, cmap='gray', extent=(-2, 2, -2, 2))
plt.show()
## ZOOMING
a_2 = np.arange(0.1, 0.5, 0.01)
b_2 = np.arange(0.1, 0.5, 0.01)
M_new_2 = new_matrix(a_2, b_2)
plt.imshow(M_new_2, cmap='gray', extent=(0.1, 0.5, 0.1, 0.5))
plt.show()
</code></pre>
<p>Check in particular the definition of the matrix M (using the shape of the input arguments), and the enumerate construct in the for loops.</p>
<p>It could without doubt be optimized further, but at least this makes it work. You might want to use <code>linspace</code> in your application, to have better control over the number of points generated.</p>
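<p>For instance, a small sketch of the zoomed grid built with <code>linspace</code> (the number of points here is arbitrary):</p>
<pre><code># 400 evenly spaced points between 0.1 and 0.5 on each axis
a_2 = np.linspace(0.1, 0.5, 400)
b_2 = np.linspace(0.1, 0.5, 400)
M_new_2 = new_matrix(a_2, b_2)
</code></pre>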
|
python|numpy|mandelbrot
| 1
|
376,842
| 67,795,380
|
Pandas Dataframe Comparison and Copying
|
<p>Below I have two dataframes, the first being dataframe det and the second being orig. I need to compare <code>det['Detection']</code> with <code>orig['Date/Time']</code>. Once the values are found during the comparion, I need to copy values from <code>orig</code> and <code>det</code> to some final dataframe (<code>final</code>). The format that I need the final dataframe in is <code>det['Date/Time'] orig['Lat'] orig['Lon'] orig['Dep'] det['Mag']</code> I hope that my formatting is adequate for folks. I was not sure how to handle the dataframes so I just placed them in tables. Some additional information that probably won't matter is that <code>det</code> is 3385 rows by 3 columns and <code>orig</code> is 818 rows by 9 columns.</p>
<p><code>det</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date/Time</th>
<th>Mag</th>
<th>Detection</th>
</tr>
</thead>
<tbody>
<tr>
<td>2008/12/27T01:06:56.37</td>
<td>0.280</td>
<td>2008/12/27T13:50:07.00</td>
</tr>
<tr>
<td>2008/12/27T01:17:39.39</td>
<td>0.485</td>
<td>2008/12/27T01:17:39.00</td>
</tr>
<tr>
<td>2008/12/27T01:33:23.00</td>
<td>-0.080</td>
<td>2008/12/27T01:17:39.00</td>
</tr>
</tbody>
</table>
</div>
<p><code>orig</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date/Time</th>
<th>Lat</th>
<th>Lon</th>
<th>Dep</th>
<th>Ml</th>
<th>Mc</th>
<th>N</th>
<th>Dmin</th>
<th>ehz</th>
</tr>
</thead>
<tbody>
<tr>
<td>2008/12/27T01:17:39.00</td>
<td>44.5112</td>
<td>-110.3742</td>
<td>5.07</td>
<td>-9.99</td>
<td>0.51</td>
<td>5</td>
<td>6</td>
<td>3.2</td>
</tr>
<tr>
<td>2008/12/27T04:33:30.00</td>
<td>44.4985</td>
<td>-110.3750</td>
<td>4.24</td>
<td>-9.99</td>
<td>1.63</td>
<td>9</td>
<td>8</td>
<td>0.9</td>
</tr>
<tr>
<td>2008/12/27T05:38:22.00</td>
<td>44.4912</td>
<td>-110.3743</td>
<td>4.73</td>
<td>-9.99</td>
<td>0.37</td>
<td>8</td>
<td>8</td>
<td>0.8</td>
</tr>
</tbody>
</table>
</div>
<p><code>final</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>det['Date/Time']</th>
<th>orig['Lat']</th>
<th>orig['Lon']</th>
<th>orig['Dep']</th>
<th>det['Mag']</th>
</tr>
</thead>
</table>
</div>
|
<p>You can merge the two dataframes. Since you want to match the <code>Detection</code> column from the first dataframe against the <code>Date/Time</code> column from the second dataframe, you can rename the column of the second dataframe while merging, since the <code>Date/Time</code> column name already exists in the first dataframe:</p>
<pre class="lang-py prettyprint-override"><code>det.merge(org.rename(columns={'Date/Time': 'Detection'}))
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre class="lang-py prettyprint-override"><code> Date/Time Mag Detection Lat Lon Dep Ml Mc N Dmin ehz
0 2008/12/27T01:17:39.39 0.485 2008/12/27T01:17:39.00 44.5112 -110.3742 5.07 -9.99 0.51 5 6 3.2
1 2008/12/27T01:33:23.00 -0.080 2008/12/27T01:17:39.00 44.5112 -110.3742 5.07 -9.99 0.51 5 6 3.2
</code></pre>
<p>You can then select the columns you want.</p>
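<p>A minimal sketch of that last step, assuming you keep the merge result in a variable:</p>
<pre class="lang-py prettyprint-override"><code>merged = det.merge(orig.rename(columns={'Date/Time': 'Detection'}))
final = merged[['Date/Time', 'Lat', 'Lon', 'Dep', 'Mag']]
</code></pre>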
|
python|pandas|dataframe
| 1
|
376,843
| 68,007,293
|
Plotting with Crosstab and groupby question
|
<p>I need some help: I am not managing to make this kind of graph using pandas crosstab. Any help please?</p>
<p>Dataframe</p>
<pre><code>{'nome_munic': {66: 'Ferraz de Vasconcelos',
97: 'SΓ£o Paulo',
100: 'SΓ£o JosΓ© dos Campos',
207: 'MauΓ‘',
249: 'Cajamar',
258: 'Votuporanga',
285: 'Ferraz de Vasconcelos',
290: 'SΓ£o Paulo',
345: 'SΓ£o Pedro',
378: 'SΓ£o Paulo'},
'codigo_ibge': {66: 3515707,
97: 3550308,
100: 3549904,
207: 3529401,
249: 3509205,
258: 3557105,
285: 3515707,
290: 3550308,
345: 3550407,
378: 3550308},
'idade': {66: 86,
97: 62,
100: 58,
207: 54,
249: 62,
258: 37,
285: 54,
290: 71,
345: 79,
378: 61},
'sexo': {66: 0,
97: 0,
100: 0,
207: 1,
249: 0,
258: 1,
285: 0,
290: 0,
345: 0,
378: 0},
'obito': {66: 1,
97: 0,
100: 0,
207: 1,
249: 1,
258: 1,
285: 0,
290: 1,
345: 1,
378: 0},
'asma': {66: 0,
97: 0,
100: 0,
207: 1,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'cardiopatia': {66: 1,
97: 0,
100: 1,
207: 1,
249: 1,
258: 0,
285: 1,
290: 1,
345: 0,
378: 0},
'diabetes': {66: 1,
97: 1,
100: 0,
207: 0,
249: 1,
258: 1,
285: 0,
290: 0,
345: 1,
378: 0},
'doenca_hematologica': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'doenca_hepatica': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'doenca_neurologica': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 1,
345: 0,
378: 0},
'doenca_renal': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'imunodepressao': {66: 0,
97: 0,
100: 1,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'obesidade': {66: 0,
97: 0,
100: 0,
207: 0,
249: 1,
258: 1,
285: 0,
290: 0,
345: 0,
378: 0},
'outros_fatores_de_risco': {66: 0,
97: 0,
100: 1,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 1},
'pneumopatia': {66: 0,
97: 1,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'puerpera': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0},
'sindrome_de_down': {66: 0,
97: 0,
100: 0,
207: 0,
249: 0,
258: 0,
285: 0,
290: 0,
345: 0,
378: 0}}
</code></pre>
<pre><code>table = pd.crosstab(index=dados['obito'], columns=dados['asma', 'cardiopatia','diabetes','doenca_renal','obesidade']
</code></pre>
|
<p>IIUC, you can try:</p>
<pre><code>df.pivot_table(index='obito', values=['asma', 'cardiopatia','diabetes','doenca_renal','obesidade']).T.plot(kind ='bar' , stacked = True)
</code></pre>
<p>OUTPUT:</p>
<p><a href="https://i.stack.imgur.com/R5X1P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R5X1P.png" alt="enter image description here" /></a></p>
|
pandas|matplotlib|seaborn|pivot-table|crosstab
| 1
|
376,844
| 67,717,566
|
How to create a two-dimensional numpy array from two tif files in python?
|
<p>I am working in Google Colab.
I have imported two tif files with 1000 rows and 1000 columns with the following script:</p>
<pre><code>import cv2
green = cv2.imread('green.tif')
nir = cv2.imread('nir.tif')
</code></pre>
<p>I want to create an array which will have a two-dimensional vector in every pixel with the value of the green.tif in the first dimension and the value of the nir.tif in the second dimension.</p>
<p>How can i do this?</p>
|
<p>If you want to interlace the two volumes and consider <em>green</em> as channel 1 and <em>nir</em> as channel 2 you can proceed as follows:</p>
<pre><code>ch1 = np.ones((2,2,2))
ch2 = np.zeros((2,2,2))
out = np.empty((4,2,2))
out[::2,:,:] = ch1
out[1::2,:,:] = ch2
out = out.reshape((2,2,2,2))
</code></pre>
<p>Output:</p>
<pre><code>>>> print(out)
[[[[1. 1.]
[1. 1.]]
[[0. 0.]
[0. 0.]]]
[[[1. 1.]
[1. 1.]]
[[0. 0.]
[0. 0.]]]]
</code></pre>
<p>If you only want to do "first row - second row", just look at <code>np.stack</code>.</p>
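<p>Applied to the original question, a minimal sketch (assuming <code>green.tif</code> and <code>nir.tif</code> are single-band images, so the grayscale flag is an assumption) would be:</p>
<pre><code>import cv2
import numpy as np

# read each file as a single channel so both arrays are (1000, 1000)
green = cv2.imread('green.tif', cv2.IMREAD_GRAYSCALE)
nir = cv2.imread('nir.tif', cv2.IMREAD_GRAYSCALE)

stacked = np.stack([green, nir], axis=-1)  # shape (1000, 1000, 2)
# stacked[i, j] is the 2-vector [green value, nir value] of pixel (i, j)
</code></pre>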
|
python|arrays|numpy|google-colaboratory|tiff
| 0
|
376,845
| 67,660,842
|
Dirac Delta function with Python
|
<p>I tried to plot the Dirac Delta rectangular function in Python 2.7 code such that:</p>
<p><a href="https://i.stack.imgur.com/QALGy.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
def ddf(x,sig):
if -(1/(2*sig))<=x and x<=(1/(2*sig)):
val=sig
else:
val=0
return val
X=np.linspace(-5,5,1000)
for sig in np.arange(1,5,0.1):
plt.cla()
plt.grid()
plt.title('Dirac Delta function',size=20)
plt.xlabel('X values',size=10)
plt.ylabel("Dirac Delta functions' values",size=10)
plt.ylim(0,1)
plt.plot(X,ddf(X,sig),color='black')
plt.pause(0.5)
plt.show()
</code></pre>
<p>But when I ran the code it gave the error:</p>
<pre><code>Traceback (most recent call last):
File "c:/Users/Shubhadeep/Desktop/dff.py", line 22, in <module>
plt.plot(X,ddf(X,sig),color='black')
File "c:/Users/Shubhadeep/Desktop/dff.py", line 7, in ddf
if -(1/(2*sig))<=x and x<=(1/(2*sig)):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>Can anyone solve this?</p>
|
<p>As the error states, comparing an array produces a boolean array whose truth value is ambiguous, so it cannot be used directly in an <code>if ... and ...</code> expression. Here is a vectorized version that operates on the whole array at once:</p>
<pre><code>def ddf(x,sig):
val = np.zeros_like(x)
val[(-(1/(2*sig))<=x) & (x<=(1/(2*sig)))] = 1
return val
</code></pre>
<p>output sample:</p>
<p><a href="https://i.stack.imgur.com/5OklX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5OklX.png" alt="enter image description here" /></a></p>
|
python|python-2.7|numpy
| 3
|
376,846
| 68,010,466
|
Make a dataframe column with names of variables in the for loop
|
<p>I have an issue with my code. Below is the code snippet:</p>
<pre><code>abc = [1, 2, 3, 4]
def = [5, 6, 7, 8]
for i in [abc, def]:
df[str(i)] = i #This is giving an issue
</code></pre>
<p>I want to make abc and def (lists) columns in my dataframe, with the column names abc and def (the same names as in the for loop).</p>
<p>Is it possible? Can anybody help, please?</p>
|
<p>You can't (easily) recover a variable's name from the object it points to, so this won't work with plain variable declarations. Keep the name-to-list mapping in a dict instead and build the columns from that:</p>
<pre><code>cols = {'abc': [1, 2, 3, 4],
'def': [5, 6, 7, 8]}
out = pd.concat([df, pd.DataFrame(cols)], axis="columns")
</code></pre>
<pre><code>>>> out
xyz abc def
0 A 1 5
1 B 2 6
2 C 3 7
3 D 4 8
</code></pre>
|
python|python-3.x|pandas|list|dataframe
| 1
|
376,847
| 67,641,311
|
Pandas csv parser not working properly when it encounters `"`
|
<p>Problem statement:</p>
<p>Initially what I had</p>
<blockquote>
<p>I have a CSV file with the below records:-</p>
<p>data.csv:-</p>
</blockquote>
<blockquote>
<pre><code> id,age,name
3500300026,23,"rahul"
3500300163,45,"sunita"
3500320786,12,"patrick"
3500321074,41,"Viper"
3500321107,54,"Dawn Breaker"
</code></pre>
</blockquote>
<blockquote>
<p>When I tried to run script.py on this with encoding 'ISO-8859-1', it's running fine</p>
</blockquote>
<blockquote>
<pre><code># script.py
import pandas as pd
test_data2=pd.read_csv('data.csv', sep=',', encoding='ISO-8859-1')
print(test_data2)
</code></pre>
</blockquote>
<blockquote>
<p><a href="https://i.stack.imgur.com/5qSIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5qSIZ.png" alt="Result1" /></a></p>
</blockquote>
<hr />
<p>Now what I have:-</p>
<blockquote>
<p>But when I got a feed of the same file with <code>"</code> at the front of every record, the parser behaved awkwardly. After the data change, new records looks like below:-</p>
</blockquote>
<blockquote>
<pre><code>id,age,name
"3500300026,23,"rahul"
"3500300163,45,"sunita"
"3500320786,12,"patrick"
"3500321074,41,"Viper"
"3500321107,54,"Dawn Breaker"
</code></pre>
</blockquote>
<blockquote>
<p>And after running the same script (script.py) for this new data file, I am getting the below result</p>
</blockquote>
<blockquote>
<p><a href="https://i.stack.imgur.com/LxTkJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LxTkJ.png" alt="Result2" /></a></p>
</blockquote>
<p>Character <code>"</code> comes under ISO-8859-1 Character Set only so this can't be an issue anyway. It should be the parser, can't really get it why isn't the parser only focusing on <code>,</code> which I specifically passed as a separator to read_csv().</p>
<p>References: <a href="https://www.w3schools.com/charsets/ref_html_8859.asp" rel="nofollow noreferrer">ISO-8859-1 Character set</a></p>
<p>I am curious to know why pandas was not able to parse this properly. Does it have some special handling for <code>"</code>?</p>
|
<p>You can tell pandas that you don't want double quotes to be treated specially by passing <code>quoting=csv.QUOTE_NONE</code> to <code>read_csv()</code>:</p>
<pre><code>import csv

test_data2 = pd.read_csv('data.csv', quoting=csv.QUOTE_NONE)
</code></pre>
<p>The output will be:</p>
<pre><code>In [11]: df
Out[11]:
id age name
0 "3500300026 23 "rahul"
1 "3500300163 45 "sunita"
2 "3500320786 12 "patrick"
3 "3500321074 41 "Viper"
4 "3500321107 54 "Dawn Breaker"
</code></pre>
<p>parsing only on the comma. The reason for the original behaviour is that <code>read_csv</code> treats <code>"</code> as a quote character by default, so the unbalanced leading quote starts a quoted field and the commas that follow are no longer seen as separators until the next quote closes it.</p>
|
python|pandas|character-encoding|csv-parser
| 1
|
376,848
| 67,939,673
|
Get the index corresponding to True boolen values only
|
<p>After doing some operations, I am getting a dataframe with an index and a column of boolean values. I just need to get the index values where the boolean is True. How can I do that?
My output is like this (here, "AC name" is the index of the output dataframe):</p>
<pre><code> AC name
Agiaon False
Alamnagar False
Alauli True
Alinagar False
Ziradei True
Name: Vote percentage, Length: 253, dtype: bool
</code></pre>
|
<p>Considering that the dataframe is <code>df</code>, it would be:</p>
<pre class="lang-py prettyprint-override"><code>res = df[df['Vote percentage']].index
</code></pre>
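<p>If the object is actually a boolean <code>Series</code> (which the printed <code>Name: Vote percentage, dtype: bool</code> suggests), the same idea works directly on it; a minimal sketch, assuming the Series is named <code>s</code>:</p>
<pre class="lang-py prettyprint-override"><code>res = s[s].index      # keep only the labels where the value is True
# or equivalently
res = s.index[s]
</code></pre>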
|
python|pandas|dataframe
| 1
|
376,849
| 67,861,486
|
Passing string into a lambda function
|
<p>I am trying to automatically generate lambda functions given a list of lists of strings to generate constraints for a <code>scipy.optimize.minimize()</code> routine. I have a <code>list</code> of string pairs, which I need to pass into each lambda function constraint, as so:</p>
<pre class="lang-py prettyprint-override"><code>list = [
["parameter1", "parameter2"],
["parameter3", "parameter4"]
]
constraints = []
for pair in list:
constraints.append( {"type":"ineq","fun": lambda p: p[param_names.index(pair[0])]-p[param_names.index(pair[1])]} )
</code></pre>
<p>However, when this list of constraints is passed to <code>scipy.optimize.minimize()</code>, the constraints are ignored. Alternatively, when I explicitly define the strings like so</p>
<pre class="lang-py prettyprint-override"><code>cons = [
{'type':'ineq','fun': lambda p: p[param_names.index("parameter1")]-p[param_names.index("parameter2")] },
{'type':'ineq','fun': lambda p: p[param_names.index("parameter3")]-p[param_names.index("parameter4")] }
]
</code></pre>
<p><code>scipy.optimize.minimize()</code> obeys the constraints. I believe this is a problem with how I'm defining the lambda function, namely trying to pass variables (the strings) into the lambda function, and not a problem with <code>scipy.optimize.minimize()</code>. I need my code to be able to parse a list of pairs of strings as above to automatically define these lambda functions and constraints, since the list can vary depending on the situation.</p>
<p>Is there a way to pass variables from outside the lambda function into the lambda function? Or another way I should be doing this?</p>
|
<p>See <a href="https://stackoverflow.com/questions/37791680/scipy-optimize-minimize-slsqp-with-linear-constraints-fails/37792650#37792650">this answer</a> to a similar question. In your case, the problem is the use of the name <code>pair</code> in the lambda expression. Because of Python's <a href="https://docs.python-guide.org/writing/gotchas/#late-binding-closures" rel="nofollow noreferrer">late-binding closures</a>, <code>pair</code> will refer to the value that it has at the time the lambda is evaluated, not the value that it had when the lambda was defined. In your code, that probably means <code>pair</code> will always refer to the <em>last</em> element in <code>list</code>.</p>
<p>A possible <a href="https://docs.python-guide.org/writing/gotchas/#id4" rel="nofollow noreferrer">fix</a>:</p>
<pre><code>constraints = []
for pair in list:
constraints.append({"type": "ineq", "fun": lambda p, pair=pair: p[param_names.index(pair[0])]-p[param_names.index(pair[1])]})
</code></pre>
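<p>A tiny self-contained demo of the late-binding behaviour (unrelated to scipy, just to make the mechanism visible):</p>
<pre><code>fns = [lambda: i for i in range(3)]
print([f() for f in fns])        # [2, 2, 2] -- every closure sees the final value of i

fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])        # [0, 1, 2] -- the default argument freezes each value
</code></pre>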
|
python|numpy|lambda|scipy
| 1
|
376,850
| 67,865,519
|
Comparing two dataframes with some entries missing
|
<p>I have two dataframes that I want to compare like:</p>
<p>df1</p>
<pre><code> name | value_1 | value_2 | value_3
0 | A | 2 | NaN | 2
1 | B | 3 | 1 | NaN
2 | C | 5 | 2 | 1
</code></pre>
<p>df2</p>
<pre><code> name | value_1 | value_2 | value_3
0 | A | NaN | NaN | 2
1 | B | 2 | 1 | 0
2 | C | 5 | 3 | 1
</code></pre>
<p>An ideal comparison result df would look like:</p>
<pre><code> name | value_1 | value_2 | value_3
0 | A | missing2 | missing | True
1 | B | False | True | missing1
2 | C | True | False | True
</code></pre>
<p>This is what I did (but failed):</p>
<pre><code>df1 = pd.DataFrame([
['A', 2, np.nan, 2],
['B', 3, 1, np.nan],
['C', 5, 2, 1],
], columns=['name', 'value_1', 'value_2', 'value_3'])
df2 = pd.DataFrame([
['A', np.nan, np.nan, 2],
['B', 2, 1, 0],
['C', 5, 3, 1],
], columns=['name', 'value_1', 'value_2', 'value_3'])
df = df1 == df2
df[['name']] = df1[['name']]
df[df1.isnull()] = "missing1"
df[df2.isnull()] = "missing2"
df[df1.isnull() & df2.isnull()] = "missing"
</code></pre>
<p>I received the following error message when doing <code>df[df1.isnull()] = "missing1"</code>:</p>
<blockquote>
<p>TypeError: Cannot do inplace boolean setting on mixed-types with a non np.nan value</p>
</blockquote>
<p>Does anyone have any clue on how to solve this?</p>
|
<p>As the error indicates, you can't assign a string value when there are mixed types in the data frame. One workaround is to convert the boolean result data frame to string before assigning <code>missing</code> labels:</p>
<pre><code>df1.set_index('name', inplace=True)
df2.set_index('name', inplace=True)
df = (df1 == df2).astype(str)
df[df1.isnull()] = "missing1"
df[df2.isnull()] = "missing2"
df[df1.isnull() & df2.isnull()] = "missing"
df
value_1 value_2 value_3
name
A missing2 missing True
B False True missing1
C True False True
</code></pre>
|
python|pandas|dataframe
| 1
|
376,851
| 67,941,087
|
Create several dataframes with csv files and give them a specific name
|
<p>The following dataframe etf_list is given:</p>
<pre><code>etf_list = pd.DataFrame({'ISIN': ['IE00B4X9L533', 'IE00B0M62Q58', 'LU0292097234', 'IE00BF4RFH31'],
'Name': ['HSBC MSCI WORLD UCITS ETF', 'iShares MSCI World UCITS ETF', 'FTSE 100 Income UCITS ETF 1D', 'iShares MSCI World Small Cap UCITS ETF'],
'Anbieter': ['HSBC', 'iShares', 'Xtrackers', 'iShares' ],
'Extension': ['xls', 'csv', 'xlsx', 'csv' ]})
</code></pre>
<p>In the folder /ETF I have the following files, which were generated today on June 11, 2021:</p>
<ul>
<li>IE00B4X9L533_20210611.xls</li>
<li>IE00B0M62Q58_20210611.csv</li>
<li>LU0292097234_20210611.xlsx</li>
<li>IE00BF4RFH31_20210611.csv</li>
</ul>
<p>As you can see, the files have the following structure:</p>
<pre><code>etf_list['ISIN'] + '_' + timestr + '.' + etf_list['Extension']
</code></pre>
<p>whereas <code>timestr = time.strftime("%Y%m%d")</code></p>
<p>The objective is to create in a for loop dataframes for the files, where Anbieter in etf_list equals 'iShares'. The created dataframes shall have the name of ISIN in the dataframe etf_list. In order to achieve this I defined an empty dictionary df ={}</p>
<pre><code>df = {}
for i, row in etf_list.iterrows():
if row['Anbieter']=='iShares':
df[row['ISIN']] = 'ETF/'+ row['ISIN'] + '_' + timestr + '.csv'
df[row['ISIN']] = pd.read_csv(df[row['ISIN']], sep=',',skiprows=2, thousands='.', decimal=',')
else:
pass
</code></pre>
<p>The problem with this approach is that, in order to refer to the created dataframes, I have to call them with for instance df['IE00B0M62Q58'] or df['IE00BF4RFH31'], but my objective is to use IE00B0M62Q58 instead of df['IE00B0M62Q58'] and IE00BF4RFH31 instead of df['IE00BF4RFH31'].</p>
<p>What do I have to do in order reach my goal? How do I have to adjust my code?</p>
|
<p>You can <code>exec</code> to use strings as variable names.</p>
<pre><code>for i, row in etf_list.iterrows():
if row['Anbieter']=='iShares':
ISIN = row['ISIN']
file_name = 'ETF/'+ ISIN + '_' + timestr + '.csv'
command_str = f"{ISIN} = pd.read_csv('{file_name}', sep=',',skiprows=2, thousands='.', decimal=',')" # f doesn't work in Python 2, use format or % instead
exec(command_str)
else:
pass
</code></pre>
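<p>A variant of the same idea (a sketch, with the same assumed file layout) that avoids building a command string is to assign through <code>globals()</code>, which creates the module-level name directly:</p>
<pre><code>for i, row in etf_list.iterrows():
    if row['Anbieter'] == 'iShares':
        ISIN = row['ISIN']
        file_name = 'ETF/' + ISIN + '_' + timestr + '.csv'
        globals()[ISIN] = pd.read_csv(file_name, sep=',', skiprows=2, thousands='.', decimal=',')
</code></pre>
<p>That said, keeping the frames in a dictionary keyed by ISIN, as in the question, is usually the easier structure to work with afterwards.</p>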
|
python|pandas|dataframe
| 0
|
376,852
| 67,650,358
|
Fill holes holes/area in a binary image using OpenCV
|
<p>I have the preprocessed rice image below. I want to fill the rice grains with black color and then perform the inverse operation to find contours. I am trying to use an Erosion/Dilation operation, but it is not working. Below is the code snippet I am using.</p>
<p>First I used a shadow-removal algorithm, then adaptive thresholding, which gives the Input image. Now I want to turn the Input image into the Output image.</p>
<p>Original Image:</p>
<p><a href="https://i.stack.imgur.com/4vqAw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4vqAw.jpg" alt="enter image description here" /></a></p>
<p>Input Image:</p>
<p><img src="https://i.stack.imgur.com/Gcwa5.jpg" alt="Input Image" /></p>
<p>Required Output Image:</p>
<p><img src="https://i.stack.imgur.com/4Z0RQ.jpg" alt="Output Image" /></p>
<p>Code Snippet:</p>
<pre class="lang-py prettyprint-override"><code>oposite = cv2.bitwise_not(img)
#Erosion
kernel = np.ones((3,3),np.uint8)
erosion = cv2.dilate(des,kernel,iterations = 1)
erosion = cv2.bitwise_not(erosion)
im_out = oposite + erosion
cv2.imshow("output", im_out)
cv2.waitKey(0)
</code></pre>
|
<p>You can create an HSV mask by using the <a href="https://docs.opencv.org/3.4/d8/d01/group__imgproc__color__conversions.html#gga4e0972be5de079fed4e3a10e24ef5ef0aa4a7f0ecf2e94150699e48c79139ee12" rel="nofollow noreferrer"><code>cv2.COLOR_BGR2HSV</code></a> flag in the <a href="https://docs.opencv.org/3.4/d8/d01/group__imgproc__color__conversions.html#ga397ae87e1288a81d2363b61574eb8cab" rel="nofollow noreferrer"><code>cv2.cvtColor()</code></a> method, and by using the <a href="https://docs.opencv.org/master/d2/de8/group__core__array.html#ga48af0ab51e36436c5d04340e036ce981" rel="nofollow noreferrer"><code>cv2.inRange()</code></a> method. I basically changed the maximum saturation value from <code>255</code> to <code>100</code>:</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread("rice.jpg")
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 0])
upper = np.array([179, 100, 255])
mask = cv2.inRange(img_hsv, lower, upper)
cv2.imshow("Mask", mask)
cv2.waitKey(0)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/YLRLN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YLRLN.png" alt="enter image description here" /></a></p>
<hr />
<p>If you're looking to make the grains thinner, you can turn down the maximum hue value, from <code>179</code> to, say,<code>111</code>, along with the maximum saturation value at <code>100</code>:</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread("rice.jpg")
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 0])
upper = np.array([111, 100, 255])
mask = cv2.inRange(img_hsv, lower, upper)
cv2.imshow("Mask", mask)
cv2.waitKey(0)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/crjji.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/crjji.png" alt="enter image description here" /></a></p>
|
python|numpy|opencv|image-processing|computer-vision
| 4
|
376,853
| 67,653,764
|
Multiple if conditions pandas
|
<p>Looking to write an if statement which does a calculation based on if 3 conditions across other columns in a dataframe are true. I have tried the below code which seems to have worked for others on stackoverflow but kicks up an error for me. Note the 'check', 'sqm' and 'sqft' columns are in float64 format.</p>
<pre><code>if ((merge['check'] == 1) & (merge['sqft'] > 0) & (merge['sqm'] == 0)):
merge['checksqm'] == merge['sqft']/10.7639
</code></pre>
<p>#Error below:</p>
<pre><code>
alueError Traceback (most recent call last)
<ipython-input-383-e84717fde2c0> in <module>
----> 1 if ((merge['check'] == 1) & (merge['sqft'] > 0) & (merge['sqm'] == 0)):
2 merge['checksqm'] == merge['sqft']/10.7639
~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py in __nonzero__(self)
1327
1328 def __nonzero__(self):
-> 1329 raise ValueError(
1330 f"The truth value of a {type(self).__name__} is ambiguous. "
1331 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>Each condition you wrote evaluates to a Series of boolean values, and the combined result of the 3 conditions is also a boolean Series. Python's <code>if</code> statement cannot evaluate such a Series element by element and feed each row to the statement that follows. Hence the error <code>ValueError: The truth value of a Series is ambiguous.</code></p>
<p>To solve the problem, you have to code it using Pandas syntax, like the following:</p>
<pre><code>mask = (merge['check'] == 1) & (merge['sqft'] > 0) & (merge['sqm'] == 0)
merge.loc[mask, 'checksqm'] = merge['sqft']/10.7639
</code></pre>
<p>or, combine in one statement, as follows:</p>
<pre><code>merge.loc[(merge['check'] == 1) & (merge['sqft'] > 0) & (merge['sqm'] == 0), 'checksqm'] = merge['sqft']/10.7639
</code></pre>
<p>In this way, Pandas evaluates the boolean Series and works only on the rows where the combined condition is <code>True</code>, taking the corresponding values from each of those rows. This kind of vectorized operation under the hood is not something an ordinary Python <code>if</code> statement can do.</p>
|
python|pandas|dataframe
| 1
|
376,854
| 31,709,651
|
Pandas opposite of fillna(0)
|
<p>Whereas <code>df.fillna(0)</code> fills all NA/NaN values with 0, is there a function to replace all <strong>non</strong>-NA/NaN values with another value, such as 1?</p>
<p>If the values in my DataFrame are variable-length lists then:</p>
<ul>
<li><code>df.replace()</code> requires that the lists are the same length</li>
<li>boolean index like <code>df[len(df) > 0] = 1</code> throws <code>ValueError: cannot insert True, already exists</code></li>
<li><code>pandas.get_dummies()</code> throws <code>TypeError: unhashable type: 'list'</code></li>
</ul>
<p>Is there a more straightforward solution?</p>
|
<p>You could use indexing/assignment with <code>df[df.notnull()] = 1</code>. For instance:</p>
<pre><code>>>> df = pd.DataFrame([[np.nan, 2, 5], [2, 5, np.nan], [2, 5, np.nan]])
>>> df # example frame
0 1 2
0 NaN 2 5
1 2 5 NaN
2 2 5 NaN
>>> df[df.notnull()] = 1
>>> df
0 1 2
0 NaN 1 1
1 1 1 NaN
2 1 1 NaN
</code></pre>
|
python|pandas|dataframe|nan
| 4
|
376,855
| 31,799,681
|
How to *extract* latitude and longitude greedily in Pandas?
|
<p>I have a dataframe in Pandas like this:</p>
<pre><code> id loc
40 100005090 -38.229889,-72.326819
188 100020985 ut: -33.442101,-70.650327
249 10002732 ut: -33.437478,-70.614637
361 100039605 ut: 10.646041,-71.619039 \N
440 100048229 4.666439,-74.071554
</code></pre>
<p>I need to extract the GPS points. I first use <code>str.contains</code> with a certain regex (found here on SO, see below) to match all cells that have a "valid" lat/long value. However, I also need to <code>extract</code> these numbers and either put them in a series of their own (and then split on the comma) or put them in two new pandas series. I have tried the following for the extraction part:</p>
<p><code>ids_with_latlong["loc"].str.extract("[-+]?([1-8]?\d(\.\d+)?|90(\.0+)?),\s*[-+]?(180(\.0+)?|((1[0-7]\d)|([1-9]?\d))(\.\d+)?)$")</code></p>
<p>but it looks, because of the output, that the reg exp is not doing the matching greedily, because I get something like this:</p>
<pre><code> 0 1 2 3 4 5 6 7 8
40 38.229889 .229889 NaN 72.326819 NaN 72 NaN 72 .326819
188 33.442101 .442101 NaN 70.650327 NaN 70 NaN 70 .650327
</code></pre>
<p>Obviously it's matching more than I want (I would just need cols 0, 1, and 4), but simply dropping them is too much of a hack for me to do. Notice that the extract function also got rid of the +/- signs at the beginning. If anyone has a solution, I'd really appreciate.</p>
|
<p>@HYRY's answer looks pretty good to me. This is just an alternate approach that uses built in pandas methods rather than a regex approach. I think it's a little simpler to read though I'm not sure if it will be sufficiently general for all your cases (it works fine on this sample data though).</p>
<pre><code>df['loc'] = df['loc'].str.replace('ut: ','')
df['lat'] = df['loc'].apply( lambda x: x.split(',')[0] )
df['lon'] = df['loc'].apply( lambda x: x.split(',')[1] )
id loc lat lon
0 100005090 -38.229889,-72.326819 -38.229889 -72.326819
1 100020985 -33.442101,-70.650327 -33.442101 -70.650327
2 10002732 -33.437478,-70.614637 -33.437478 -70.614637
3 100039605 10.646041,-71.619039 10.646041 -71.619039
4 100048229 4.666439,-74.071554 4.666439 -74.071554
</code></pre>
<p>As a general suggestion for this type of approach you might think about doing in in the following steps:</p>
<p>1) remove extraneous characters with <code>replace</code> (or maybe this is where the regex is best)</p>
<p>2) split into pieces</p>
<p>3) check that each piece is valid (all you need to do is check that it's a number although you could take an extra step that it falls into the number range of being a valid lat or lon)</p>
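<p>For step 3, a minimal validity check could look like the sketch below (the bounds are the usual latitude/longitude ranges):</p>
<pre><code>def valid_latlon(lat_str, lon_str):
    try:
        lat, lon = float(lat_str), float(lon_str)
    except ValueError:
        return False
    return -90 <= lat <= 90 and -180 <= lon <= 180

valid_latlon('-38.229889', '-72.326819')   # True
valid_latlon('ut:', '-72.326819')          # False
</code></pre>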
|
python|regex|pandas
| 2
|
376,856
| 31,705,373
|
Minimize float format in Pandas df.to_csv()
|
<p>For large datasets, I would like to encode floats minimally when writing the CSV.</p>
<pre><code>0.0 or 1.0 should be written 0 or 1
1.234567 should be written 1.235
123.0 should be written 123
</code></pre>
<p><code>DataFrame.to_csv()</code> allows a float_format, but that makes every float look the same, which doesn't save space when writing integers.</p>
|
<p>You could do something hacky like this:</p>
<pre><code>def to_str(item):
if type(item) in {np.int, np.float64}:
return '{:g}'.format(item)
else:
return item
pd.DataFrame({'int': [1, 2], 'float': [1.03, 1.0], 'str': ['a', 'b']}).applymap(to_str)
</code></pre>
<p>which returns</p>
<pre><code> float int str
0 1.03 1 a
1 1 2 b
</code></pre>
<p>If that's too slow, you can also skip the type checking and just apply the string conversion to columns matching the numeric type.</p>
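<p>If you also want the rounding from the question (1.234567 written as 1.235, 123.0 written as 123), a variant of the same helper using a general format with four significant digits reproduces those examples; whether that is the right rounding rule for your data is an assumption:</p>
<pre><code>import numpy as np

def to_str(item):
    # '{:.4g}' keeps at most 4 significant digits and drops trailing zeros,
    # so 0.0 -> '0', 123.0 -> '123', 1.234567 -> '1.235'
    if isinstance(item, (int, float, np.integer, np.floating)):
        return '{:.4g}'.format(item)
    return item
</code></pre>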
|
python-3.x|pandas
| 0
|
376,857
| 32,114,215
|
How to calculate the intercept using numpy.linalg.lstsq
|
<p>After running a multiple linear regression using <code>numpy.linalg.lstsq</code> I get 4 arrays as described in the documentation; however, it is not clear to me how to get the intercept value. Does anyone know? I'm new to statistical analysis.</p>
<p>Here is my model:</p>
<pre><code>X1 = np.array(a)
X2 = np.array(b)
X3 = np.array(c)
X4 = np.array(d)
X5 = np.array(e)
X6 = np.array(f)
X1l = np.log(X1)
X2l = np.log(X2)
X3l = np.log(X3)
X6l = np.log(X6)
Y = np.array(g)
A = np.column_stack([X1l, X2l, X3l, X4, X5, X6l, np.ones(len(a), float)])
result = np.linalg.lstsq(A, Y)
</code></pre>
<p>This is a sample of what my model is generating:</p>
<pre><code>(array([ 654.12744154, -623.28893569, 276.50269246, 11.52493817,
49.92528734, -375.43282832, 3852.95023087]), array([ 4.80339071e+11]),
7, array([ 1060.38693842, 494.69470547, 243.14700033, 164.97697748,
58.58072929, 19.30593045, 13.35948642]))
</code></pre>
<p>I believe the intercept is the second array, still I'm not sure about that, as its value is just too high.</p>
|
<p>The intercept is the coefficient that corresponds to the column of <code>ones</code>, which in this case is:</p>
<pre><code>result[0][6]
</code></pre>
<p>To make it clearer to see, consider your regression, which is something like:</p>
<pre><code>y = c1*x1 + c2*x2 + c3*x3 + c4*x4 + m
</code></pre>
<p>written in matrix form as:</p>
<pre><code>[[y1], [[x1_1, x2_1, x3_1, x4_1, 1], [[c1],
[y2], [x1_2, x2_2, x3_2, x4_2, 1], [c2],
[y3], = [x1_3, x2_3, x3_3, x4_3, 1], * [c3],
... ... [c4],
[yn]] [x1_n, x2_n, x3_n, x4_n, 1]] [m]]
</code></pre>
<p>or:</p>
<pre><code> Y = A * C
</code></pre>
<p>where <code>A</code> is the so called "Coefficient' matrix and <code>C</code> the vector containing the solution for your regression. Note that <code>m</code> corresponds to the column of <code>ones</code>.</p>
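<p>A quick toy check (a minimal sketch with made-up data) that the coefficient of the ones column really is the intercept:</p>
<pre><code>import numpy as np

# toy data: y = 2*x + 5 exactly, so lstsq should recover slope 2 and intercept 5
x = np.arange(10, dtype=float)
y = 2 * x + 5
A = np.column_stack([x, np.ones(len(x))])
coeffs = np.linalg.lstsq(A, y)[0]
print(coeffs)  # slope 2 and intercept 5; the last entry matches the ones column
</code></pre>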
|
python|arrays|numpy|regression|linear-regression
| 4
|
376,858
| 31,929,645
|
How to set values of a masked Pandas Series from a different mask
|
<p>I have a series. I would like to extract some elements from that series using a mask and put them back to same or different locations in the same series using a different mask. Something like this:</p>
<pre><code>s = pd.Series(randn(5))
s
0 0.466829
1 -1.821200
2 0.025857
3 0.238267
4 2.192390
dtype: float64
s[pd.Series([False,True,False,True,False])] = s[pd.Series([True,False,True,False,False])]
s
0 0.466829
1 NaN
2 0.025857
3 NaN
4 2.192390
dtype: float64
</code></pre>
<p>As you can see that example doesn't work. Note that in my use case, both masks are guaranteed to have the same number of <code>True</code> and <code>False</code> elements. </p>
|
<p>When setting values on a Pandas series, the index is used to align values. So for your example, even though the number of values is the same, the indexes are different, hence the null values. If you want to override this behavior, you can use <code>values</code> to access the underlying array of the Series (ignoring the index). Also, you don't need to cast your indexers explicitly with <code>pd.Series</code>; this happens implicitly if you pass a list:</p>
<pre><code>s[[False,True,False,True,False]] = s[[True,False,True,False,False]].values
s
Out[300]:
0 0.013654
1 0.013654
2 0.691198
3 0.691198
4 0.344096
dtype: float64
</code></pre>
|
python|pandas
| 2
|
376,859
| 31,785,331
|
Pandas How to convert from series to a data frame
|
<p>Suppose I have a simple Series like this:</p>
<pre><code>S1 = Series([2.0, 0.816 , 0.2] , [51.0, 50.0 , 0.3])
</code></pre>
<p>What is the best way in pandas to convert this Series to a data frame like this?</p>
<pre><code>pd.DataFrame({
'mean' : [2.0 , 51.0] ,
'median' : [0.816 , 50.0] ,
'sd' : [0.2 ,0.3]
})
</code></pre>
<p>This is how the data frame should look:</p>
<pre><code>mean median sd
2 0.816 0.2
51 50.000 0.3
</code></pre>
|
<h2>one method</h2>
<p>You can do some index wizardry</p>
<pre><code>D1 = S1.to_frame().reset_index().T
</code></pre>
<p>Now you can map the column names to whatever</p>
<pre><code>D1.rename( columns={0:'mean',1:'median',2:'sd'}, inplace=True) # should match the list order in S1
D1.reset_index(drop=True,inplace=True) # reset the funky index
# mean median sd
#0 51 50.000 0.3
#1 2 0.816 0.2
</code></pre>
<h2>one more method</h2>
<p>You can make a dictionary</p>
<pre><code>vars = ['mean','media','mode'] #again matching the order of the lists in S1
data_dict = dict(zip( vars,S1.iteritems()))
D1 = pandas.DataFrame.from_dict(data_dict)
</code></pre>
|
python|pandas|series
| 4
|
376,860
| 31,701,732
|
Python (Pandas) updating previous x rows within specified condition
|
<p>I have data about machine failures. The data is in a pandas dataframe with <code>date</code>, <code>id</code>, <code>failure</code> and <code>previous_30_days</code> columns. The <code>previous_30_days</code> column is currently all zeros. My desired outcome is to populate rows in the <code>previous_30_days</code> column with a '1' if they occur within a 30 day time-span before a failure. I am currently able to do this with the following code:</p>
<pre><code>failure_df = df[(df['failure'] == 1)] # create a dataframe of just failures
for index, row in failure_df.iterrows():
df.loc[(df['date'] >= (row.date - datetime.timedelta(days=30))) &
(df['date'] <= row.date) & (df['id'] == row.id), 'previous_30_days'] = 1
</code></pre>
<p>Note that I also check for the id match, because dates are repeated in the dataframe, so I cannot assume it is simply the previous 30 rows.</p>
<p>My code works, but the problem is that the dataframe is millions of rows, and this code is too slow at the moment. </p>
<p>Is there a more efficient way to achieve the desired outcome? Any thoughts would be very much appreciated.</p>
|
<p>I'm a little confused about how your code works (or is supposed to work), but this ought to point you in the right direction and can be easily adapted. It will be much faster by avoiding <code>iterrows</code> in favor of vectorized operations (about 7x faster for this small dataframe, it should be a much bigger improvement on your large dataframe).</p>
<pre><code>import datetime

import numpy as np
import pandas as pd

np.random.seed(123)
df=pd.DataFrame({ 'date':np.random.choice(pd.date_range('2015-1-1',periods=300),20),
                  'id':np.random.randint(1,4,20) })

df=df.sort(['id','date'])
</code></pre>
<p>Now, calculate days between current and previous date (by id).</p>
<pre><code>df['since_last'] = df.groupby('id')['date'].apply( lambda x: x - x.shift() )
</code></pre>
<p>Then create your new column based on the number of days to the previous date.</p>
<pre><code>df['previous_30_days'] = df['since_last'] < datetime.timedelta(days=30)
date id since_last previous_30_days
12 2015-02-17 1 NaT False
6 2015-02-27 1 10 days True
3 2015-03-25 1 26 days True
0 2015-04-09 1 15 days True
10 2015-04-24 1 15 days True
5 2015-05-04 1 10 days True
11 2015-05-07 1 3 days True
8 2015-08-14 1 99 days False
14 2015-02-02 2 NaT False
9 2015-04-07 2 64 days False
19 2015-07-28 2 112 days False
7 2015-08-03 2 6 days True
15 2015-08-13 2 10 days True
1 2015-08-19 2 6 days True
2 2015-01-18 3 NaT False
13 2015-03-15 3 56 days False
18 2015-04-07 3 23 days True
4 2015-04-17 3 10 days True
16 2015-04-22 3 5 days True
17 2015-09-11 3 142 days False
</code></pre>
|
python|pandas
| 1
|
376,861
| 32,126,454
|
Python 2.7 / Pandas: writing new string from each row in dataframe
|
<p>In Pandas, I have a dataframe, written from a csv. My end goal is to generate an XML schema from that CSV, because each of the items in the CSV correspond to a schema variable. The only solution (that I could think of) would be to read each item from that dataframe so that it generates a text file, with each value in the dataframe surrounded by a string. </p>
<pre><code>TableName Variable Interpretation Col4 Col5
CRASH CRASH_ID integer 1
CRASH SER_NO range 0
CRASH SER_NO code 99999
CRASH CRASH_MO_NO code 1 January
CRASH CRASH_MO_NO code 2 February
</code></pre>
<p>Which would generate a text file that results in something along the lines of (using the first row as an example):</p>
<pre><code><table = "CRASH">
<name = "CRASH_ID">
<type = "integer">
<value = "1">
</code></pre>
<p>Where <code><table = >, <name = ></code>, are all strings. They don't have to be formatted that way specifically (although that would be nice)-- I just need a faster way to generate this schema than typing it all out by hand from a CSV file.</p>
<p>It seems like the best way to do that would be to read through each row and generate a string while writing it to the output file. I've looked at the .iterrows() method, but that doesn't let me concatenate strings and tuples. I've also <a href="http://chrisalbon.com/python/pandas_iterate_over_rows_of_multiple_columns.html" rel="nofollow noreferrer">looked</a> at some <a href="https://stackoverflow.com/questions/7837722/what-is-the-most-efficient-way-to-loop-through-dataframes-with-pandas">posts</a> from <a href="https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe">other</a> users, but their focus seems to be more on calculating things within dataframes, or changing the data itself, rather than generating a string from each row. </p>
<p>My current code is below. I understand that pandas is based off Numpy arrays, and that running "for i in df" loops is not an efficient method, but I am not really sure where to start. </p>
<p>EDIT: Some of the rows might need to loop through to display a certain way. For instance, the schema has multiple value codes that have strings attached:</p>
<pre><code><values>
<value code = "01">January</value>
<value code = "02">February</value>
<value code = "03">March</value>
</values>
</code></pre>
<p>I am thinking maybe I could group the values by "interpretation"? And then, if they have the "code" interpretation, I could do some kind of iteration through the group so that it displayed all the codes.</p>
<p>Here is my current code, for reference. I have updated it to reflect Randy's excellent suggestion below. I have also edited the above post to reflect some updated concerns.</p>
<pre><code>import pandas as pd
text_file = open(r'oregon_output.txt', 'w')
df = pd.read_csv(r'oregon_2013_var_list.csv')
#selects only CRASH variables
crash = df['Col1'] == 'CRASH'
df_crash = df[crash]
#value which will be populated with code values from codebook
code_fill = " "
#replaces NaN values in dataframe wih code_fill
df_crash.fillna(code_fill, inplace = True)
for row_id, row in df.iterrows():
print '<variable>'
for k, v in row.iterkv():
if v is not None:
print '<{} = "{}">'.format(k, v)
print '</variable>'
print
</code></pre>
|
<p>There is a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html" rel="nofollow"><code>to_dict()</code></a> method you may want to consider in this case:</p>
<pre><code>In [178]:
df.columns = ['table','name','type','value']
[["<%s='%s'>"%(k,v) for k,v in D.items()] for D in df.to_dict('records')]
Out[178]:
[["<table='CRASH'>", "<type='integer'>", "<name='CRASH_ID'>", "<value='1.0'>"],
["<table='CRASH'>", "<type='range'>", "<name='SER_NO'>", "<value='0.0'>"],
["<table='CRASH'>", "<type='code'>", "<name='SER_NO'>", "<value='99999.0'>"],
["<table='CRASH'>", "<type='string'>", "<name='CRASH_DT'>", "<value='nan'>"]]
</code></pre>
|
python|python-2.7|pandas|dataframe
| 0
|
376,862
| 32,076,624
|
Equations containing outer products of vectors
|
<p><a href="https://i.stack.imgur.com/0XOlk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0XOlk.gif" alt="enter image description here"></a></p>
<p>where x is a column vector.
We know from the diagonal elements in A, the value of x entries. But signs of them remains unknown. For example:</p>
<pre><code>import numpy as np
A = np.array([[ 1.562, -0.833, -0.833, -0.031, -0.031,  0.167],
[-0.833, 0.795, 0.167, -0.149, 0.167, -0.146],
[-0.833, 0.167, 0.795, 0.167, -0.149, -0.146],
[-0.031, -0.149, 0.167, 1.68 , -0.833, -0.833],
[-0.031, 0.167, -0.149, -0.833, 1.68 , -0.833],
[ 0.167, -0.146, -0.146, -0.833, -0.833, 1.792]])
np.sqrt(A.diagonal())
>>> array([ 1.24979998, 0.89162773, 0.89162773, 1.29614814, 1.29614814,
1.33865604])
</code></pre>
<p>But we still dont know the signs. With a mask we have the product signs:</p>
<pre><code>A > 0
>>> array([[ True, False, False, False, False, True],
[False, True, True, False, True, False],
[False, True, True, True, False, False],
[False, False, True, True, False, False],
[False, True, False, False, True, False],
[ True, False, False, False, False, True]], dtype=bool)
</code></pre>
<p>How can I find x elements signs.</p>
|
<p>Note that (-x)(-x)^T = (x)(x)^T, so you can't distinguish x from -x. Given that, you can determine the sign pattern (i.e. you can determine whether two elements have the same or opposite signs). In fact, since each row of A is a scalar multiple of x, each row gives you the sign pattern (unless the row is all 0, which is possible if an element of x is 0). The same holds for the columns.</p>
<p>Note that your example A can not be a product of the form (x)(x)^T. It has full rank. The maximum possible rank of (x)(x)^T is 1.</p>
<p>For example,</p>
<pre><code>In [14]: x = np.array([1.0, -2.0, -3.0, 4.0])
In [15]: np.outer(x, x)
Out[15]:
array([[ 1., -2., -3., 4.],
[ -2., 4., 6., -8.],
[ -3., 6., 9., -12.],
[ 4., -8., -12., 16.]])
</code></pre>
<p>Note the sign pattern in the product. Each row (and each column) is either (+, -, -, +) or (-, +, +, -).</p>
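<p>For a matrix that genuinely has the form (x)(x)^T, a minimal sketch of recovering x up to the overall sign:</p>
<pre><code>In [16]: i = np.argmax(np.abs(A.diagonal()))   # pick a row with a non-zero diagonal entry

In [17]: x_rec = A[i] / np.sqrt(A[i, i])       # equals x or -x

In [18]: np.allclose(np.outer(x_rec, x_rec), A)
Out[18]: True
</code></pre>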
|
python|c++|numpy|matrix|linear-algebra
| 5
|
376,863
| 32,137,320
|
Append with date_range() in Python
|
<p>I have a csv file which contains start dates and end dates, with format <code>dd/mm/yy</code>.
These are read by :</p>
<pre><code>dateparse = lambda x: pnd.datetime.strptime(x, '%d/%m/%y')
df = pnd.read_csv('file.csv',sep=';',parse_dates=['StartDate','EndDate'], date_parser=dateparse)
</code></pre>
<p>A sample of the dataframe looks like this:</p>
<pre><code> StartDate EndDate
0 2015-07-15 2015-07-18
1 2015-06-06 2015-06-08
</code></pre>
<p>I want to get all the dates listed in these intervals in a column in a new dataframe:</p>
<pre><code> Date
0 2015-07-15
1 2015-07-16
2 2015-07-17
3 2015-07-18
4 2015-06-06
5 2015-06-07
6 2015-06-08
</code></pre>
<p>I use iteratively <code>date_range(StartDate, EndDate)</code>, appending each time the result, but I get either an empty array, or something like</p>
<pre><code>[[2015-07-15, 2015-07-16, 2015-07-17, 2015-07-18], [ 2015-06-06, 2015-06-07 , 2015-06-08 ]]
</code></pre>
<p>and I would like</p>
<pre><code>[ 2015-07-15, 2015-07-16, 2015-07-17, 2015-07-18, 2015-06-06, 2015-06-07 , 2015-06-08 ]
</code></pre>
<p>What to do?</p>
|
<p>You can chain the ranges together using <code>itertools.chain</code> to create your list of dates:</p>
<pre><code>from itertools import chain
new_df = pnd.DataFrame(list(chain.from_iterable(pnd.date_range(r["StartDate"],r["EndDate"])
for _,r in df.iterrows())), columns=("Date",))
</code></pre>
<p>Output:</p>
<pre><code> Date
0 2015-07-15
1 2015-07-16
2 2015-07-17
3 2015-07-18
4 2015-06-06
5 2015-06-07
6 2015-06-08
</code></pre>
|
python|date|pandas|append
| 3
|
376,864
| 31,952,560
|
python: bandpass filter of an image
|
<p>I have a data image with an imaging artifact that comes out as a sinusoidal background, which I want to remove. Since it is a single frequency sine wave, it seems natural to Fourier transform and either bandpass filter or "notch filter" (where I think I'd use a gaussian filter at +-omega). </p>
<p><a href="https://i.stack.imgur.com/AacAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AacAP.png" alt="My data. The red spots are what I want, the background sine wave in kx+ky is unwanted."></a></p>
<p>In trying to do this, I notice two things: </p>
<p>1) simply by performing the fft and back, I have reduced the sine wave component, shown below. There seems to be some high-pass filtering of the data just by going there and back?</p>
<pre><code>import numpy as np
f = np.fft.fft2(img) #do the fourier transform
fshift1 = np.fft.fftshift(f) #shift the zero to the center
f_ishift = np.fft.ifftshift(fshift1) #inverse shift
img_back = np.fft.ifft2(f_ishift) #inverse fourier transform
img_back = np.abs(img_back)
</code></pre>
<p>This is an image of img_back:</p>
<p><a href="https://i.stack.imgur.com/DKCEi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DKCEi.png" alt="The inverse fourier transform, no filter applied."></a></p>
<p>Maybe the filtering here is good enough for me, but I'm not that confident in it since I don't have a good understanding of the background suppression.</p>
<p>2) To be more sure of the suppression at the unwanted frequencies, I made a boolean 'bandpass' mask and applied it to the data, but the fourier transform ignores the mask. </p>
<pre><code>a = shape(fshift1)[0]
b = shape(fshift1)[1]
ro = 8
ri = 5
y,x = np.ogrid[-a/2:a/2, -b/2:b/2]
m1 = x*x + y*y >= ro*ro
m2 = x*x + y*y <= ri*ri
m3=np.dstack((m1,m2))
maskcomb =[]
for r in m3:
maskcomb.append([any(c) for c in r]) #probably not pythonic, sorry
newma = np.invert(maskcomb)
filtdat = ma.array(fshift1,mask=newma)
imshow(abs(filtdat))
f_ishift = np.fft.ifftshift(filtdat)
img_back2 = np.fft.ifft2(f_ishift)
img_back2 = np.abs(img_back2)
</code></pre>
<p>Here the result is the same as before, because np.fft ignores masks. The fix to that was simple: </p>
<p><code>filtdat2 = filtdat.filled(filtdat.mean())</code></p>
<p>Unfortunately, (but upon reflection also unsurprisingly) the result is shown here: </p>
<p><a href="https://i.stack.imgur.com/9Klkd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Klkd.png" alt="The result of a brickwall bandpass filter."></a></p>
<p>The left plot is of the amplitude of the FFT, with the bandpass filter applied. It is the dark ring around the central (DC) component. The phase is not shown.</p>
<p>Clearly, the 'brickwall' filter is not the right solution. The phenomenon of making rings from this filter is well explained here: <a href="https://dsp.stackexchange.com/questions/724/low-pass-filter-and-fft-for-beginners-with-python/725#725">What happens when you apply a brick-wall filter to a 1D dataset.</a></p>
<p>So now I'm stuck. Perhaps it would be better to use one of the built in scipy methods, but they seem to be for 1d data, as <a href="http://wiki.scipy.org/Cookbook/ButterworthBandpass" rel="nofollow noreferrer">in this implementation of a butterworth filter</a>. Possibly the right thing to do involves using fftconvolve() as is done <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve" rel="nofollow noreferrer">here to blur an image.</a> My question about fftconvolve is this: Does it require both 'images' (the image and the filter) to be in real space? I think yes, but in the example they use a gaussian, so it's ambiguous (fft(gaussian)=gaussian). If so, then it seems wrong to try to make a real space bandpass filter. Maybe the right strategy uses convolve2d() with the fourier space image and a homemade filter. If so, do you know how to make a good 2d filter? </p>
|
<p>So, one problem here is that your background sinusoid has a period not terribly different from the signal components you are trying to preserve. i.e., the spacing of the signal peaks is about the same as the period of the background. This is going to make filtering difficult.</p>
<p>My first question is whether this background is truly constant from experiment to experiment, or does it depend on the sample and experimental setup? If it is constant, then background frame subtraction would work better than filtering.</p>
<p>Most of the standard scipy.signal filter functions (bessel, chebyshev, etc.) are, as you say, designed for 1-D data. But you can easily extend them to isotropic filtering in 2-D. Each filter in frequency space is a rational function of f. The two representations are [b, a], the coefficients of the numerator and denominator polynomials, or [z, p, k], the factored representation of those polynomials, i.e. <code>H(f) = k*(f-z0)*(f-z1)/((f-p0)*(f-p1))</code>. You can just take the polynomial from one of the filter design algorithms, evaluate it as a function of sqrt(x^2+y^2), and apply it to your frequency-domain data.</p>
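<p>A minimal sketch of that idea (the band edges, the filter order and the band-stop choice are placeholders you would tune to the frequency of the background sinusoid, so treat them as assumptions):</p>
<pre><code>import numpy as np
from scipy import signal

def radial_bandstop(shape, low=0.05, high=0.15, order=3):
    # radial frequency of every FFT bin (unshifted layout), in cycles/pixel
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    r = np.sqrt(fx[None, :]**2 + fy[:, None]**2)
    # 1-D analog Butterworth band-stop response, evaluated at the radial frequencies
    b, a = signal.butter(order, [low, high], btype='bandstop', analog=True)
    _, h = signal.freqs(b, a, worN=r.ravel())
    return np.abs(h).reshape(r.shape)

# usage: multiply the (unshifted) FFT by the smooth mask, then invert
# F = np.fft.fft2(img)
# img_filtered = np.real(np.fft.ifft2(F * radial_bandstop(img.shape)))
</code></pre>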
<p>Can you post a link to the original image data?</p>
|
python|numpy|image-processing|filter|convolution
| 2
|
376,865
| 31,975,139
|
Reshaping Pandas groupby data row values into column headers
|
<p>I am trying to extract grouped row data from a pandas groupby object so that the primary group data ('course' in the example below) act as a row index, the secondary grouped row values act as column headers ('student') and the aggregate values as the corresponding row data ('score').</p>
<p>So, for example, I would like to transform:</p>
<pre><code>import pandas as pd
import numpy as np
data = {'course_id':[101,101,101,101,102,102,102,102] ,
'student_id':[1,1,2,2,1,1,2,2],
'score':[80,85,70,60,90,65,95,80]}
df = pd.DataFrame(data, columns=['course_id', 'student_id','score'])
</code></pre>
<p>Which I have grouped by course_id and student_id:</p>
<pre><code>group = df.groupby(['course_id', 'student_id']).aggregate(np.mean)
g = pd.DataFrame(group)
</code></pre>
<p>Into something like this:</p>
<pre><code>data = {'course':[101,102],'1':[82.5,77.5],'2':[65.0,87.5]}
g3 = pd.DataFrame(data, columns=['course', '1', '2'])
</code></pre>
<p>I have spent some time looking through the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="noreferrer">groupby documentation</a> and I have trawled stack overflow and the like but I'm still not sure how to approach the problem. I would be very grateful if anyone would suggest a sensible way of achieving this for a largish dataset.</p>
<p>Many thanks!</p>
<ul>
<li>Edited: to fix g3 example typo</li>
</ul>
|
<pre><code>>>> g.reset_index().pivot('course_id', 'student_id', 'score')
student_id 1 2
course_id
101 82.5 65.0
102 77.5 87.5
</code></pre>
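<p>Equivalently, since <code>g</code> already carries a <code>(course_id, student_id)</code> MultiIndex, you can unstack the student level directly:</p>
<pre><code>>>> g['score'].unstack('student_id')
student_id      1     2
course_id
101          82.5  65.0
102          77.5  87.5
</code></pre>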
|
python|pandas
| 10
|
376,866
| 31,975,205
|
Python/Numba: Unknown attribute error with scipy.special.gammainc()
|
<p>I am having an error when running code using the @jit decorator. It appears that some information for the function scipy.special.gammainc() can't be located:</p>
<pre><code>Failed at nopython (nopython frontend)
Unknown attribute 'gammainc' for Module(<module 'scipy.special' from 'C:\home\Miniconda\lib\site-packages\scipy\special\__init__.pyc'>) $164.2 $164.3 = getattr(attr=gammainc, value=$164.2)
</code></pre>
<p>Without the @jit decorator the code will run fine. Maybe there is something required to make the attributes of the scipy.special module visible to Numba?</p>
<p>Thanks in advance for any suggestions, comments, etc.</p>
|
<p>The problem is that <code>gammainc</code> isn't one of the small list of functions that Numba inherently knows how to deal with (see <a href="http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html" rel="noreferrer">http://numba.pydata.org/numba-doc/dev/reference/numpysupported.html</a>) - in fact none of the scipy functions are. This means you can't use it in "nopython" mode, unfortunately - it just has to treat it as a normal python function call.</p>
<p>If you remove <code>nopython=True</code>, it should work. However, that isn't hugely satisfactory, because it may well be slower. Without seeing your code it's difficult to know exact what to suggest. However, in general:</p>
<ul>
<li><p>loops (that don't contain things like <code>gammainc</code>) will be sped up, even without nopython.</p></li>
<li><p><code>gammainc</code> is a "ufunc", which means it can be readily applied to a whole array at a time, and should run quickly anyway.</p></li>
<li><p>you can call <code>func.inspect_types()</code> to see it's been able to compile.</p></li>
</ul>
<p>As a trivial example:</p>
<pre><code>from scipy.special import gammainc
import numba as nb
import numpy as np
@nb.jit # note - no "nopython"
def f(x):
for n in range(x.shape[0]):
x[n] += 1
y = gammainc(x,2.5)
for n in range(y.shape[0]):
y[n] -= 1
return y
f(np.linspace(0,20)) # forces it to be JIT'd and outputs an array
</code></pre>
<p>Then <code>f.inspect_types()</code> identifies the two loops as "lifted loops", meaning they'll be JIT'd and run quickly. The bit with <code>gammainc</code> is not JIT'd, but is applied to the whole array at once and so should be fast too.</p>
|
python|numpy|scipy|anaconda|numba
| 7
|
376,867
| 31,901,506
|
Grouping in Pandas
|
<p>I want to group the data in my dataframe on the column "Count" by another column "State". I would like to output a list of lists, where each sub-list contains just the counts for one state.</p>
<p>example output: [[120,200], [40, 20, 40], ...]</p>
<p>120 and 200 would be counts for let's say the State California</p>
<p>I tried the following: </p>
<pre><code>df_new = df[['State']].groupby(['Count']).to_list()
</code></pre>
<p>I get a keyerror: 'count'</p>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 84, in <module>
getStateCountsDF(filepath)
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 81, in getStateCountsDF
df_new = df[['State']].groupby(['Count']).to_list()
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\generic.py", line 3159, in groupby
sort=sort, group_keys=group_keys, squeeze=squeeze)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\groupby.py", line 1199, in groupby
return klass(obj, by, **kwds)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\groupby.py", line 388, in __init__
level=level, sort=sort)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\groupby.py", line 2148, in _get_grouper
in_axis, name, gpr = True, gpr, obj[gpr]
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\frame.py", line 1797, in __getitem__
return self._getitem_column(key)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\frame.py", line 1804, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\generic.py", line 1084, in _get_item_cache
values = self._data.get(item)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\internals.py", line 2851, in get
loc = self.items.get_loc(item)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\index.py", line 1572, in get_loc
return self._engine.get_loc(_values_from_object(key))
File "pandas\index.pyx", line 134, in pandas.index.IndexEngine.get_loc (pandas\index.c:3824)
File "pandas\index.pyx", line 154, in pandas.index.IndexEngine.get_loc (pandas\index.c:3704)
File "pandas\hashtable.pyx", line 686, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12280)
File "pandas\hashtable.pyx", line 694, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12231)
KeyError: 'Count'
</code></pre>
<p>I feel like this should be a simple line of code, what am I doing wrong here?</p>
|
<p>It is possible as a one-liner:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_dict({"State": ["ny", "or", "ny", "nm"],
"Counts": [100,300,200,400]})
list_new = df.groupby("State")["Counts"].apply(list).tolist()
print(list_new)
[[400], [100, 200], [300]]
</code></pre>
<p>You should read the doc of groupby to see what the expected outcome of the grouping is and how to change that (<a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/groupby.html</a>).</p>
|
python|pandas|group-by|dataframe
| 1
|
376,868
| 32,018,256
|
converting rows of a single dataFrame column into a single list in python
|
<p>I have a pandas Series and I would like to convert all the rows of my series into a single list. I have:</p>
<pre><code>list1 = Series([89.34, 88.52, 104.19, 186.84, 198.15, 208.88])
</code></pre>
<p>Then I have a function which I call with <code>func(list1)</code>:</p>
<pre><code>def func(list1):
    list1 = list1.values.tolist()
    print(list1)
</code></pre>
<p>the printed result is :</p>
<pre><code>[[89.34], [88.52], [104.19], [186.84], [198.15], [208.88]]
</code></pre>
<p>but I would like to have :
<code>[89.34,88.52,104.19,186.84,198.15,208.88]</code> </p>
<p>Any help? I am using Python 2.7.</p>
|
<p>Easiest way would be to access the <code>values</code> property of the <code>Series</code> object, like you have in your example.</p>
<pre><code>> from pandas import Series
> a_series = Series([89.34,88.52,104.19,186.84,198.15,208.88])
> print(a_series.values.tolist())
[89.34, 88.52, 104.19, 186.84, 198.15, 208.88]
</code></pre>
<p>Accessing more than one element of the list at the same time:</p>
<pre><code>> counter = 0
> a_list = a_series.values.tolist()
> while(counter < len(a_list)):
.. if((counter+1) < len(a_list)):
.. print(a_list[counter] == a_list[counter+1])
.. counter+=1
False
False
False
False
False
</code></pre>
|
python|pandas
| 1
|
376,869
| 41,656,162
|
Multi Column Deep Neural Network with TFLearn and Tensorflow
|
<p>I am trying to build a multi column deep neural network (MDNN) with tflearn and tensorflow. The MDNN is explained in <a href="http://people.idsia.ch/~juergen/nn2012traffic.pdf" rel="nofollow noreferrer">this paper</a>. The part I am struggling with is how I can add two or more inputs together to be fed to tensorflow.</p>
<p>For a single column I have:</p>
<pre><code>network = tflearn.input_data(shape=[None, image_shape, image_shape, 3])
</code></pre>
<p>and </p>
<pre><code>model.fit(X_input, y_train, n_epoch=50, shuffle=True,
validation_set=(X_test_norm, y_test),
show_metric=True, batch_size=240, run_id='traffic_cnn2')
</code></pre>
<p>where <code>X_input</code> is of shape <code>(31367, 32, 32, 3)</code>. I am pretty new to numpy, tensorflow and tflearn. The difficulty for now really lays in how to specify multiple inputs to tflearn.</p>
<p>Any help is greatly appreciated.</p>
|
<p>The MDNN explained in the paper individually trains several models using random (but bounded) distortions on the data. Once all models are trained, they produce predictions using an ensemble classifier by averaging the output of all the models on different versions of the data.</p>
<p>As far as I understand, the columns are not jointly but independently trained, so you must create different models and call fit on each of them. I recommend you start by training a single model and, once you have a training setup that gets good results, replicate it. To generate predictions, you must compute the average of the predicted probabilities from the <em>predict</em> function and take the most probable class.</p>
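<p>The averaging step itself is small; a sketch, assuming <code>models</code> is a list of trained <code>tflearn.DNN</code> models and <code>X</code> is a batch of preprocessed images:</p>
<pre><code>import numpy as np

# average the class probabilities over the columns, then pick the best class
probs = np.mean([np.asarray(m.predict(X)) for m in models], axis=0)
predicted_classes = np.argmax(probs, axis=1)
</code></pre>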
<p>One way to generate data from your inputs is to use <a href="http://tflearn.org/data_augmentation/#data-augmentation" rel="nofollow noreferrer">data augmentation</a>. However, instead of adding new samples, you must replace them with the modified versions.</p>
|
python|numpy|tensorflow|neural-network|tflearn
| 1
|
376,870
| 41,473,112
|
Replace column values in python
|
<p>This is hopefully an easy question for someone out there:</p>
<p>I have one data frame that looks like this:</p>
<pre><code>import pandas as pd
names_raw = {
'device_id': [ '1d28d33a-c98e-4986-a7bb-5881d222c9a8','54322099-e76d-4986-afd2-0861e2113a16','ec3a9f9d-8e4d-4986-bea8-c17c361366e9','cc8e247d-4e2e-4986-b783-e516d03a358c','ca2d8769-ccf5-4986-8aed-741ca68e94cd','12178e22-6d64-4986-966a-374326fdaf3d','50ba7a2e-a1aa-4986-86a7-08e0605dc702','f427c8e9-65d4-46de-b986-8f8e79242842','cee68e2b-135f-45b0-be4b-7c23009866ba','e785988e-2693-47ad-9899-0049860ccaa7','a1986866-13f8-4dbe-b661-8c9f78eac745','a9998ecd-9fe9-4932-870d-29c6b5df1214','9b88e362-b06d-4317-96f5-f266c986a8d6','a04498ef-fd7c-4aa4-bffc-9158ccbad3a1'],
'pod_id': ['B00001','B00011','B00013','B00016','B00021','B00023','B00024','B00026','B00027','B00028','B00030','B00032','B00034','B00039'],
'native_id': ['zim_pod_0001','zim_pod_0002', 'zim_pod_0003', 'zim_pod_0004', 'zim_pod_0005', 'zim_pod_0006', 'zim_pod_0007', 'zim_pod_0008', 'zim_pod_0009', 'zim_pod_0010', 'zim_pod_0011', 'zim_pod_0012', 'zim_pod_0013','zim_pod_0014']
}
names = pd.DataFrame(names_raw, columns = ['device_id', 'pod_id', 'native_id'])
</code></pre>
<p>And another data frame that looks like this:</p>
<pre><code>>>> df
device_id day month year rain
0 1d28d33a-c98e-4986-a7bb-5881d222c9a8 31 12 2016 0.0
1 54322099-e76d-4986-afd2-0861e2113a16 31 12 2016 0.0
2 ec3a9f9d-8e4d-4986-bea8-c17c361366e9 31 12 2016 0.0
3 cc8e247d-4e2e-4986-b783-e516d03a358c 31 12 2016 1.2
4 ca2d8769-ccf5-4986-8aed-741ca68e94cd 31 12 2016 2.2
5 12178e22-6d64-4986-966a-374326fdaf3d 31 12 2016 0.2
6 9b88e362-b06d-4317-96f5-f266c986a8d6 31 12 2016 0.0
</code></pre>
<p>I want to replace the <code>device_id</code> column with the <code>native_id</code> column. How can this be done using the least amount of lines of code?</p>
<p>The final data frame should look something like this:</p>
<pre><code>>>> df
native_id day month year rain
0 zim_pod_0001 31 12 2016 0.0
1 zim_pod_0002 31 12 2016 0.0
2 zim_pod_0003 31 12 2016 0.0
</code></pre>
<p>etc. etc...</p>
|
<p>Try this:</p>
<pre><code>df['native_id'] = df.device_id.map(names.set_index('device_id')['native_id'])
</code></pre>
<p>Or if you don't want to preserve <code>device_id</code> column in the <code>df</code> DF:</p>
<pre><code>In [210]: df['native_id'] = df.pop('device_id').map(names.set_index('device_id')['native_id'])
In [211]: df
Out[211]:
day month year rain native_id
0 31 12 2016 0.0 zim_pod_0001
1 31 12 2016 0.0 zim_pod_0002
2 31 12 2016 0.0 zim_pod_0003
3 31 12 2016 1.2 zim_pod_0004
4 31 12 2016 2.2 zim_pod_0005
5 31 12 2016 0.2 zim_pod_0006
6 31 12 2016 0.0 zim_pod_0013
</code></pre>
|
python|pandas|dataframe|multiple-columns
| 1
|
376,871
| 41,584,420
|
How to calculate the mean of a column by decade in Python
|
<p><a href="https://i.stack.imgur.com/WtHTj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WtHTj.png" alt="Image of dataset"></a></p>
<p>I am unsure as to how to calculate the mean for a column given specific rows.
I need to calculate the mean of the column Mkt-RF by decade, as in the mean from 193001 to 193912, and so on. I need to do this for each decade until 2016. </p>
<p>Is there also any way to put the results into a new dataframe of its own? With the decade (1920,1930) in one column and the mean of each decade in the other?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> by first <code>3</code> chars of first column by <code>str[:3]</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html" rel="nofollow noreferrer"><code>mean</code></a>:</p>
<pre><code>df = df['Mkt-RF'].groupby(df['Unnamed:0'].str[:3]).mean()
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'Unnamed:0':['192607','192608','193609','193610','193611'],
'Mkt-RF':[4,5,6,7,5]})
print (df)
Mkt-RF Unnamed:0
0 4 192607
1 5 192608
2 6 193609
3 7 193610
4 5 193611
#rename column
df = df.rename(columns={'Unnamed:0':'YEARMONTH'})
df = df['Mkt-RF'].groupby(df.YEARMONTH.str[:3]).mean().rename('MEAN').reset_index()
df.YEARMONTH = (df.YEARMONTH + '0').astype(int)
print (df)
YEARMONTH MEAN
0 1920 4.5
1 1930 6.0
</code></pre>
<p>Another solution is convert first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> and <code>groupby</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.year.html" rel="nofollow noreferrer"><code>year</code></a> floor divided by <code>10</code>:</p>
<pre><code>df = df.rename(columns={'Unnamed:0':'YEARMONTH'})
df.YEARMONTH = pd.to_datetime(df.YEARMONTH, format='%Y%m')
df = df['Mkt-RF'].groupby(df.YEARMONTH.dt.year // 10).mean().rename('MEAN').reset_index()
df.YEARMONTH = df.YEARMONTH *10
print (df)
YEARMONTH MEAN
0 1920 4.5
1 1930 6.0
</code></pre>
|
python|pandas|rows|mean
| 0
|
376,872
| 41,244,988
|
Joining sentences to dataframe
|
<p>I want to export a dataframe to csv. But on top of it, I would like to print the date of the dataframe to produce the following result in the csv file. How can I join the string sentence to the dataframe so that I can export it together to csv?</p>
<pre><code>import pandas as pd
import datetime as dt
today1=dt.datetime.today().strftime('%Y%m%d')
print('This dataframe is created on ',today1)
df=pd.DataFrame({'A':[1,2],'B':[3,4]})
print(df)
df.to_csv('temp.csv')
</code></pre>
<p><a href="https://i.stack.imgur.com/VE4Md.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VE4Md.png" alt="enter image description here"></a></p>
|
<p><code>pd.to_csv</code> accepts a filehandle as input. So write your first line, then call <code>to_csv</code> with the same handle:</p>
<pre><code>import pandas as pd
import datetime as dt
today1=dt.datetime.today().strftime('%Y%m%d')
df=pd.DataFrame({'A':[1,2],'B':[3,4]})
with open("temp.csv","w") as f:
f.write('This dataframe is created on {}\n'.format(today1))
df.to_csv(f)
</code></pre>
<p>when you read the data back just do the same with <code>pd.read_csv()</code>:</p>
<pre><code>with open("temp.csv","r") as f:
date_line = next(f)
df = pd.read_csv(f)
</code></pre>
|
python|pandas
| 1
|
376,873
| 41,288,989
|
Creating new (more detailed) data frame with Pandas based on index data frame
|
<p>I apologize for the neophyte question, but I'm having a hard time figuring out Pandas' data frames. I have one data frame with something like</p>
<pre><code>df_index:
Product Title
100000 Sample main product
200000 Non-consecutive main sample
</code></pre>
<p>I have another data frame with a more detailed list of the products with formats, like</p>
<pre><code>df_details:
Product Title
100000 Sample main product
100000-Format-English Sample product details
100000-Format-Spanish Sample product details
100000-Format-French Sample product details
110000 Another sample main product
110000-Format-English Another sample details
110000-Format-Spanish Another sample details
120000 Yet another sample main product
120000-Format-English Yet another sample details
120000-Format-Spanish Yet another sample details
...
200000 Non-consecutive main sample
200000-Format-English Non-consecutive sample details
200000-Format-Spanish Non-consecutive sample details
</code></pre>
<p>I want to create a new data frame based on df_details, but only for the products that appear in df_index. Ideally, it would look something like:</p>
<pre><code>new_df:
Product Title
100000 Sample main product
100000-Format-English Sample product details
100000-Format-Spanish Sample product details
100000-Format-French Sample product details
200000 Non-consecutive main sample
200000-Format-English Non-consecutive sample details
200000-Format-Spanish Non-consecutive sample details
</code></pre>
<p>Here's what I've tried so far:</p>
<pre><code>new_df = df_details[df_details['Product'][0:5] == df_index['Product'][0:5]]
</code></pre>
<p>That gives me a error:</p>
<pre><code>ValueError: Can only compare identically-labeled Series objects
</code></pre>
<p>I've also tried </p>
<pre><code>new_df = pd.merge(df_index, df_details,
left_on=['Product'[0:5]], right_index=True, how='left')
</code></pre>
<p>Which does give me a resulting data set, but not the kind I wantβit doesn't include the details rows with the format information. </p>
|
<p>You should be able to use <code>.isin()</code> as:</p>
<pre><code>new_df = df_details[df_details['Product'].isin(df_index['Product'])]
</code></pre>
<p>This will perform a mask looking up only the common indices.</p>
<p>EDIT: this works only whether the column has the same string. To solve this you can use <code>str.contains()</code> with:</p>
<pre><code>import re
# create a pattern to look for
pat ='|'.join(map(re.escape, df_index['Product']))
# Create the mask
new_df = df_details[df_details['Product'].str.contains(pat)]
</code></pre>
<p>This works if the column is formatted as string.</p>
|
python|pandas|dataframe
| 2
|
376,874
| 41,345,289
|
Getting a random sample in Python dataframe by category
|
<p>I have a sample list like this: </p>
<pre><code>Category| Item
--------|-------
Animal | Fish
Animal | Cat
... |
Food | Fish
Food | Cake
... |
etc...
</code></pre>
<p>I want to take a random sample of 10 items out of each category, so that the remaining dataframe just has those records. </p>
<p>I've tried <code>df.sample()</code> but it just gives me samples across the board. </p>
<p>I can do this this through <code>df.iterrows()</code> but I am hoping there is a more simple solution. </p>
|
<p>You have to tell pandas you want to group by category with the <code>groupby</code> method.</p>
<pre><code>df.groupby('category')['item'].apply(lambda s: s.sample(10))
</code></pre>
<p>If you have less than ten items in a sample but don't want to sample with replacement you can do this.</p>
<pre><code>df.groupby('category')['item'].apply(lambda s: s.sample(min(len(s), 10)))
</code></pre>
|
python-3.x|pandas
| 16
|
376,875
| 41,590,993
|
create a new pandas dataframe by taking values from a different dataframe and perforing some mathematical operations on it
|
<p>Suppose I have a pandas dataframe with 16 columns and approx 1000 rows,
the format is like this</p>
<pre><code>date_time sec01 sec02 sec03 sec04 sec05 sec06 sec07 sec08 sec09 sec10 sec11 sec12 sec13 sec14 sec15 sec16
1970-01-01 05:54:17 8.50 8.62 8.53 8.45 8.50 8.62 8.53 8.45 8.42 8.39 8.39 8.40 8.47 8.54 8.65 8.70
1970-01-01 05:56:55 8.43 8.62 8.55 8.45 8.43 8.62 8.55 8.45 8.42 8.39 8.39 8.40 8.46 8.53 8.65 8.71
</code></pre>
<p>and now I need to make another pandas dataframe with 32 columns: </p>
<pre><code>x_sec01 y_sec01 x_sec02 y_sec02 x_sec03 y_sec03 x_sec04 y_sec04 x_sec05 y_sec05 x_sec06 y_sec06 x_sec07 ...
</code></pre>
<p>where the values of each column needs to be multiplied with a specific mathematical constant which is dependent on the column number (sector number):</p>
<pre><code>x = sec_data * (math.cos(math.radians(1.40625*(sector_number))))
y = sec_data * (math.sin(math.radians(1.40625*(sector_number))))
</code></pre>
<p>Thus each columns in the original pandas dataframe (sec01-sec16) needs to be converted to two columns (x_sec01,y_sec01) and the factor by which it has to be multiplied depends on the sector_number value.</p>
<p>Currently I am using this function and calling this for every single rows in a for loop that is taking too much of time.</p>
<pre><code>def sec_to_xy(sec_no,sec_data): #function to convert sector data to xy coordinate system
for sec_convno in range(0,32,2):
sector_number = (77-(sec_no-1)*2) #goes from 79 till 49
x = sec_data * (math.cos(math.radians(1.40625*(sector_number))))
y = sec_data * (math.sin(math.radians(1.40625*(sector_number))))
return(x,y)
</code></pre>
|
<p>Here's an approach with NumPy -</p>
<pre><code># Extract as float array
a = df.values # Extract all 16 columns
m,n = a.shape
# Scaling array
s = np.radians(1.40625*(np.arange(79,47,-2)))
# Initialize output array and set cosine and sine values
out = np.zeros((m,n,2))
out[:,:,0] = a*np.cos(s)
out[:,:,1] = a*np.sin(s)
# Transfer to a dataframe output
df_out = pd.DataFrame(out.reshape(-1,n*2),index=df.index)
</code></pre>
<p>Please note that if there are actually 17 columns with the first column being <code>date_time</code>, then we need to skip the first column. So, at the start, get <code>a</code> with the following step instead -</p>
<pre><code>a = df.ix[:,1:].values
</code></pre>
|
python|pandas|dataframe
| 2
|
376,876
| 41,285,090
|
gen_word2vec in tensorflow is not found
|
<p>As I ran the code (<a href="https://github.com/tensorflow/models/blob/master/tutorials/embedding/word2vec.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/embedding/word2vec.py</a>) in my laptop(Mac,python3), I received an error: </p>
<pre><code> AttributeError: module 'tensorflow.models.embedding.gen_word2vec' has no attribute 'skipgram_word2vec'
</code></pre>
<p>tensorflow has been installed and working in my laptop. It seems like "gen_word2vec" is missing. Could someone help me? </p>
|
<p>Try installing the latest version of TensorFlow.</p>
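<p>For example, with a pip-based install (assuming that is how TensorFlow was installed in the first place):</p>
<pre><code>pip install --upgrade tensorflow
</code></pre>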
|
machine-learning|tensorflow|word2vec
| 1
|
376,877
| 41,642,799
|
How to overlay data over a "day period" in Pandas for plotting
|
<p>I have a DataFrame with some (<em>more-sensical</em>) data in the following form:</p>
<pre><code>In[67] df
Out[67]:
latency
timestamp
2016-09-15 00:00:00.000000 0.042731
2016-09-15 00:16:24.376901 0.930874
2016-09-15 00:33:19.268295 0.425996
2016-09-15 00:51:30.956065 0.570245
2016-09-15 01:09:23.905364 0.044203
...
2017-01-13 13:08:31.707328 0.071137
2017-01-13 13:25:41.154199 0.322872
2017-01-13 13:38:19.732391 0.193918
2017-01-13 13:57:36.687049 0.999191
</code></pre>
<p>So it spans about 50 days, and the timestamps are <em>not</em> at the same time every day. I would like to overlay some plots for each day, that is, inspect the time series of each day on the same plot. 50 days may be too many lines, but I think there is a kind of "daily seasonality" which I would like to investigate, and this seems like a useful visualization before anything more rigorous. </p>
<p><strong>How do I overlay this data on the same plot representing a "single-day" time period</strong>?</p>
<hr>
<p><strong>My thoughts</strong></p>
<p>I am not yet very familiar with Pandas, but I managed to group my data into daily bins with </p>
<pre><code>In[67]: df.groupby(pd.TimeGrouper('D'))
Out[68]: <pandas.core.groupby.DataFrameGroupBy object at 0x000000B698CD34E0>
</code></pre>
<p>Now I've been trying to determine how I am supposed to create a new DataFrame structure such that the plots can be overlayed by day. This the fundamental thing I can't figure out - how can I utilize a DataFrameGroupBy object to overlay the plots? A very rudimentary-seeming approach would be to just iterate over each GroupBy object, but my issue with doing so has been configuring the x-axis such that it only displays a "daily time period" independent of the particular day, instead of capturing the entire timestamp. </p>
<p>Splitting the data up into separate frames and calling them in the same figure with some kind of date coercion to use the approach <a href="https://stackoverflow.com/a/13873014/5636510">in this more general answer</a> doesn't seem very good to me. </p>
<hr>
<p>You can generate pseudo-data similarly with something like this: </p>
<pre><code>import datetime
import numpy as np
start_date = datetime.datetime(2016, 9, 15)
end_date = datetime.datetime.now()
dts = []
cur_date = start_date
while cur_date < end_date:
dts.append((cur_date, np.random.rand()))
cur_date = cur_date + datetime.timedelta(minutes=np.random.uniform(10, 20))
</code></pre>
|
<p>Consider the dataframe <code>df</code> (generated mostly from OP provided code)</p>
<pre><code>import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
start_date = datetime.datetime(2016, 9, 15)
end_date = datetime.datetime.now()
dts = []
cur_date = start_date
while cur_date < end_date:
dts.append((cur_date, np.random.rand()))
cur_date = cur_date + datetime.timedelta(minutes=np.random.uniform(10, 20))
df = pd.DataFrame(dts, columns=['Date', 'Value']).set_index('Date')
</code></pre>
<hr>
<p>The real trick is splitting the index into date and time components and unstacking. Then interpolate to fill in missing values</p>
<pre><code>d1 = df.copy()
d1.index = [d1.index.time, d1.index.date]
d1 = d1.Value.unstack().interpolate()
</code></pre>
<p>From here we can <code>d1.plot(legend=0)</code></p>
<pre><code>ax = d1.plot(legend=0)
ax.figure.autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/LC69K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LC69K.png" alt="enter image description here"></a></p>
<p>But that isn't very helpful.</p>
<hr>
<p>You might try something like this... hopefully this helps</p>
<pre><code>n, m = len(d1.columns) // 7 // 4 + 1, 4
fig, axes = plt.subplots(n, m, figsize=(10, 15), sharex=False)
for i, (w, g) in enumerate(d1.T.groupby(pd.TimeGrouper('W'))):
r, c = i // m, i % m
ax = g.T.plot(ax=axes[r, c], title=w, legend=0)
fig.autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/Say4V.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Say4V.jpg" alt="enter image description here"></a></p>
<hr>
<p><strong><em>How to do it over weeks</em></strong></p>
<ul>
<li>create a multi index
<ul>
<li>include the period representing the week</li>
<li>include the day of the week</li>
<li>include the time of day</li>
</ul></li>
<li><code>unstack</code> to get weekly periods into columns</li>
<li>still not convinced of the axis format</li>
</ul>
<hr>
<pre><code>d2 = df.copy()
idx = df.index
d2.index = [idx.weekday_name, idx.time, idx.to_period('W').rename('Week')]
ax = d2.Value.unstack().interpolate().iloc[:, :2].plot()
ax.figure.autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/LQLUf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQLUf.png" alt="enter image description here"></a></p>
|
python|python-2.7|pandas|matplotlib|plot
| 8
|
376,878
| 41,298,791
|
Implementation of optical flow using CNN by tensorflow
|
<p>I want to implement <em>FlowNetCorr model</em> in Figure 2 on
<a href="https://arxiv.org/abs/1504.06852" rel="nofollow noreferrer">https://arxiv.org/abs/1504.06852</a> , by Tensorflow.</p>
<p>Although I know the basic implementation of LeNet or Alexnet, it is hard to write more complex models.</p>
<p>I don't know how to implement correlation layer and by Tensorflow. Is there any recommendation material or suggestion?</p>
<p><a href="https://i.stack.imgur.com/uzuTz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uzuTz.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/HBJJD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBJJD.png" alt="enter image description here"></a></p>
|
<p>You may consider using this op: <a href="https://www.tensorflow.org/api_guides/python/nn#Convolution" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/nn#Convolution</a></p>
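<p>For reference, a minimal NumPy sketch of what the correlation layer in the paper computes (single-pixel patches and a limited displacement range; the function and argument names are only illustrative):</p>
<pre><code>import numpy as np

def correlation_layer(fa, fb, max_disp=4):
    """Correlate feature map fa (H x W x C) with shifted versions of fb."""
    H, W, C = fa.shape
    fb_pad = np.pad(fb, ((max_disp, max_disp), (max_disp, max_disp), (0, 0)), mode='constant')
    D = 2 * max_disp + 1
    out = np.zeros((H, W, D * D))
    for i in range(D):
        for j in range(D):
            shifted = fb_pad[i:i + H, j:j + W, :]
            # dot product over channels for displacement (i - max_disp, j - max_disp)
            out[:, :, i * D + j] = (fa * shifted).sum(axis=-1) / C
    return out
</code></pre>
<p>The same computation can be expressed in TensorFlow with explicit padding/slicing plus <code>tf.reduce_sum</code> over the channel axis, or via the convolution ops linked above.</p>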
|
tensorflow
| 1
|
376,879
| 41,328,633
|
How do I print entire number in Python from describe() function?
|
<p>I am doing some statistical work using Python's pandas and I am having the following code to print out the data description (mean, count, median, etc).</p>
<pre><code>import pandas
data=pandas.read_csv(input_file)
print(data.describe())
</code></pre>
<p>But my data is pretty big (around 4 million rows) and each row holds very small values. So inevitably the count is big and the mean is pretty small, and thus Python prints it like this.</p>
<p><a href="https://i.stack.imgur.com/au6GY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/au6GY.png" alt="enter image description here"></a></p>
<p>I just want to print these numbers in full for ease of use and understanding, for example <code>4393476</code> instead of <code>4.393476e+06</code>. I have googled around and the closest I can find is <a href="https://stackoverflow.com/questions/6149006/display-a-float-with-two-decimal-places-in-python">Display a float with two decimal places in Python</a> and some other similar posts. But that only works if I already have the numbers in a variable, which is not my case: the numbers are created by the describe() function, so I don't know in advance what numbers I will get.</p>
<p>Sorry if this seems like a very basic question, I am still new to Python. Any response is appreciated. Thanks.</p>
|
<p>Suppose you have the following <code>DataFrame</code>:</p>
<h2>Edit</h2>
<p>I checked the docs and you should probably use the <code>pandas.set_option</code> API to do this:</p>
<pre><code>In [13]: df
Out[13]:
a b c
0 4.405544e+08 1.425305e+08 6.387200e+08
1 8.792502e+08 7.135909e+08 4.652605e+07
2 5.074937e+08 3.008761e+08 1.781351e+08
3 1.188494e+07 7.926714e+08 9.485948e+08
4 6.071372e+08 3.236949e+08 4.464244e+08
5 1.744240e+08 4.062852e+08 4.456160e+08
6 7.622656e+07 9.790510e+08 7.587101e+08
7 8.762620e+08 1.298574e+08 4.487193e+08
8 6.262644e+08 4.648143e+08 5.947500e+08
9 5.951188e+08 9.744804e+08 8.572475e+08
In [14]: pd.set_option('float_format', '{:f}'.format)
In [15]: df
Out[15]:
a b c
0 440554429.333866 142530512.999182 638719977.824965
1 879250168.522411 713590875.479215 46526045.819487
2 507493741.709532 300876106.387427 178135140.583541
3 11884941.851962 792671390.499431 948594814.816647
4 607137206.305609 323694879.619369 446424361.522071
5 174424035.448168 406285189.907148 445616045.754137
6 76226556.685384 979050957.963583 758710090.127867
7 876261954.607558 129857447.076183 448719292.453509
8 626264394.999419 464814260.796770 594750038.747595
9 595118819.308896 974480400.272515 857247528.610996
In [16]: df.describe()
Out[16]:
a b c
count 10.000000 10.000000 10.000000
mean 479461624.877280 522785202.100082 536344333.626082
std 306428177.277935 320806568.078629 284507176.411675
min 11884941.851962 129857447.076183 46526045.819487
25% 240956633.919592 306580799.695412 445818124.696121
50% 551306280.509214 435549725.351959 521734665.600552
75% 621482597.825966 772901261.744377 728712562.052142
max 879250168.522411 979050957.963583 948594814.816647
</code></pre>
<h2> End of edit </h2>
<pre><code>In [7]: df
Out[7]:
a b c
0 4.405544e+08 1.425305e+08 6.387200e+08
1 8.792502e+08 7.135909e+08 4.652605e+07
2 5.074937e+08 3.008761e+08 1.781351e+08
3 1.188494e+07 7.926714e+08 9.485948e+08
4 6.071372e+08 3.236949e+08 4.464244e+08
5 1.744240e+08 4.062852e+08 4.456160e+08
6 7.622656e+07 9.790510e+08 7.587101e+08
7 8.762620e+08 1.298574e+08 4.487193e+08
8 6.262644e+08 4.648143e+08 5.947500e+08
9 5.951188e+08 9.744804e+08 8.572475e+08
In [8]: df.describe()
Out[8]:
a b c
count 1.000000e+01 1.000000e+01 1.000000e+01
mean 4.794616e+08 5.227852e+08 5.363443e+08
std 3.064282e+08 3.208066e+08 2.845072e+08
min 1.188494e+07 1.298574e+08 4.652605e+07
25% 2.409566e+08 3.065808e+08 4.458181e+08
50% 5.513063e+08 4.355497e+08 5.217347e+08
75% 6.214826e+08 7.729013e+08 7.287126e+08
max 8.792502e+08 9.790510e+08 9.485948e+08
</code></pre>
<p>You need to fiddle with the <code>pandas.options.display.float_format</code> attribute. Note, in my code I've used <code>import pandas as pd</code>. A quick fix is something like:</p>
<pre><code>In [29]: pd.options.display.float_format = "{:.2f}".format
In [10]: df
Out[10]:
a b c
0 440554429.33 142530513.00 638719977.82
1 879250168.52 713590875.48 46526045.82
2 507493741.71 300876106.39 178135140.58
3 11884941.85 792671390.50 948594814.82
4 607137206.31 323694879.62 446424361.52
5 174424035.45 406285189.91 445616045.75
6 76226556.69 979050957.96 758710090.13
7 876261954.61 129857447.08 448719292.45
8 626264395.00 464814260.80 594750038.75
9 595118819.31 974480400.27 857247528.61
In [11]: df.describe()
Out[11]:
a b c
count 10.00 10.00 10.00
mean 479461624.88 522785202.10 536344333.63
std 306428177.28 320806568.08 284507176.41
min 11884941.85 129857447.08 46526045.82
25% 240956633.92 306580799.70 445818124.70
50% 551306280.51 435549725.35 521734665.60
75% 621482597.83 772901261.74 728712562.05
max 879250168.52 979050957.96 948594814.82
</code></pre>
|
python|pandas
| 67
|
376,880
| 41,247,221
|
Can inception model be used for object counting in an image?
|
<p>I have already gone through the image classification part of the Inception model, but I need to count the objects in an image. </p>
<p>Considering the flowers data-set, one image can have multiple instances of a flower, so how can I get that count?</p>
|
<p>What you describe is known to the research community as <strong>Instance-Level Segmentation</strong>. </p>
<p>In the last year alone there has been a significant spike in papers addressing this problem.</p>
<p>Here are some of the papers:</p>
<ul>
<li><a href="https://arxiv.org/pdf/1412.7144v4.pdf" rel="noreferrer">https://arxiv.org/pdf/1412.7144v4.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1511.08498v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1511.08498v3.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1607.03222v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1607.03222v2.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1607.04889v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1607.04889v2.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1511.08250v3.pdf" rel="noreferrer">https://arxiv.org/pdf/1511.08250v3.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1611.07709v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.07709v1.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1603.07485v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1603.07485v2.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1611.08303v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.08303v1.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1611.08991v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.08991v2.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1611.06661v2.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.06661v2.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1612.03129v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1612.03129v1.pdf</a></li>
<li><a href="https://arxiv.org/pdf/1605.09410v4.pdf" rel="noreferrer">https://arxiv.org/pdf/1605.09410v4.pdf</a></li>
</ul>
<p>As you can see from these papers, a simple object classification network won't solve the problem. </p>
<p>If you search GitHub you will find a few repositories with basic frameworks that you can build on top of.</p>
<ul>
<li><a href="https://github.com/daijifeng001/MNC" rel="noreferrer">https://github.com/daijifeng001/MNC</a> (caffe)</li>
<li><a href="https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb" rel="noreferrer">https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb</a> (torch)</li>
<li><a href="https://github.com/jr0th/segmentation" rel="noreferrer">https://github.com/jr0th/segmentation</a> (keras, tensorflow)</li>
</ul>
|
image-processing|tensorflow|deep-learning
| 5
|
376,881
| 41,659,152
|
Python subscript syntax clarification
|
<p>Can you clarify what the <code>[:, :5]</code> part of the code does in the following code segment?</p>
<pre><code>for i in range(5):
weights = None
test_inputs = testset[i][:, :5]
test_inputs = test_inputs.astype(np.float32)
test_answer = testset[i][:, :5]
test_answer = code_answer(test_answer)
</code></pre>
|
<p>This is explained in the <a href="https://docs.scipy.org/doc/numpy/user/basics.indexing.html" rel="nofollow noreferrer">numpy indexing guide</a> of the manual; it is not standard Python syntax.</p>
<p>If you have an array <code>a</code>, <code>a[:]</code> returns a view (not a copy; assigning to it will change <code>a</code>) on the whole array; <code>a[:5]</code> is a view on elements <code>0, 1, ..., 4</code>.</p>
<p><code>numpy</code> allows multi-dimensional arrays to be indexed with <code>a[i, j]</code> instead of the pure-Python form <code>a[i][j]</code>.</p>
<p>This should cover all your cases.</p>
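<p>A small illustration (any 2-D array works here):</p>
<pre><code>import numpy as np

a = np.arange(24).reshape(4, 6)   # 4 rows, 6 columns
print(a[:, :5].shape)             # (4, 5): all rows, first five columns
print(a[2, 3] == a[2][3])         # True: both index row 2, column 3
</code></pre>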
|
python|arrays|numpy|syntax
| 0
|
376,882
| 41,416,740
|
Probability in pandas
|
<p>I have a simple dataframe like the one mentioned below. </p>
<p>How can I calculate the probability of the occurrence of one in <code>Column_1</code> according to <code>Column_2</code> and <code>Column_3</code>?</p>
<p><code>Column_1</code> is a result (either one or zero).</p>
<p><code>Column_2</code> <code>Column_3</code> is a kind of classification.</p>
<p>So the first row means 1 for a person who lives in building A and whose car is model LM.</p>
<pre><code>Column_1 Column_2 Column_3
1 A LM
1 B LO
0 C LP
1 D LM
0 A LK
1 A LM
</code></pre>
<p>If I understand correctly, the result could be</p>
<pre><code> LM LO LP LK
A .33 0
B .167
C 0
D .167
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a>:</p>
<pre><code>print (df.pivot_table(index='Column_2',
columns='Column_3',
values='Column_1',
aggfunc='sum',
fill_value=0))
Column_3 LK LM LO LP
Column_2
A 0 2 0 0
B 0 0 1 0
C 0 0 0 0
D 0 1 0 0
</code></pre>
<p>Another solution with <code>groupby</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>df1 = df.groupby(['Column_2','Column_3'])['Column_1'].sum().unstack(fill_value=0)
print (df1)
Column_3 LK LM LO LP
Column_2
A 0 2 0 0
B 0 0 1 0
C 0 0 0 0
D 0 1 0 0
</code></pre>
<p>Finally, you can divide by the length of the index (which is the length of <code>df</code>) using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="nofollow noreferrer"><code>div</code></a>:</p>
<pre><code>print (df1.div(len(df.index)))
Column_3 LK LM LO LP
Column_2
A 0.0 0.333333 0.000000 0.0
B 0.0 0.000000 0.166667 0.0
C 0.0 0.000000 0.000000 0.0
D 0.0 0.166667 0.000000 0.0
</code></pre>
|
python|pandas
| 1
|
376,883
| 41,661,798
|
Basic confusion about residuals in python
|
<p>I am writing some code for a class project that requires me to find the residuals of some data points and a fitted line to test its "fit" </p>
<p>I have been given this code: </p>
<pre><code>p, residuals, rank, singular_values, rcond = np.polyfit(temp,voltage,degree,full=True)
</code></pre>
<p>but the residuals it gives me are the sum of the squares of the residuals.
If I want the residual for each point, i.e. the vertical distance between each data point and the fitted line, how can I do this?</p>
|
<p>Assume you have these data points:</p>
<pre><code>np.random.seed(0)
x = np.random.randn(10)
y = 5*x + np.random.randn(10)
</code></pre>
<p>In your code, <code>p</code> gives you the coefficients of the fitted function:</p>
<pre><code>p, residuals, rank, singular_values, rcond = np.polyfit(x, y, deg=1, full=True)
p
Out: array([ 5.04994402, 0.36378617])
</code></pre>
<p>You can calculate the fitted points using those coefficients as follows:</p>
<pre><code>y_hat = p[0]*x + p[1] # add higher degree terms if needed
</code></pre>
<p>The same can be done with <code>np.polyval</code>:</p>
<pre><code>y_hat = np.polyval(p, x)
</code></pre>
<p>The difference between <code>y</code> and <code>y_hat</code> will give you the residuals:</p>
<pre><code>res = y - y_hat
res
Out:
array([-0.30784646, 1.07050188, 0.34836945, -0.35403036, -0.01319629,
0.01869734, 1.08284167, -0.56138505, -0.04556331, -1.23838885])
</code></pre>
<p>And if you want to check:</p>
<pre><code>(res**2).sum() == residuals # sum of squared errors
Out: array([ True], dtype=bool)
</code></pre>
|
python|numpy
| 2
|
376,884
| 41,356,865
|
TensorFlow InvalidArgumentError: Matrix size-compatible: In[0]: [100,784], In[1]: [500,10]
|
<p>I'm new to tensorflow and am following a tutorial. I am getting an error that says: </p>
<pre><code>InvalidArgumentError (see above for traceback): Matrix size-compatible: In[0]: [100,784], In[1]: [500,10]
[[Node: MatMul_3 = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0, Variable_6/read)]]
</code></pre>
<p>Here is my code:</p>
<pre><code>import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500
n_classes = 10
batch_size = 100
x = tf.placeholder('float') #this second parameter makes sure that the image fed in is 28*28
y = tf.placeholder('float')
def neural_network_model(data):
hidden_1_layer = {'weights':tf.Variable(tf.random_normal([784, n_nodes_hl1])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))}
hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))}
hidden_3_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl3]))}
output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])), 'biases':tf.Variable(tf.random_normal([n_classes]))}
# input_data * weights + biases
l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
# activation function
l1 = tf.nn.relu(l1)
l2 = tf.add(tf.matmul(data, hidden_2_layer['weights']), hidden_2_layer['biases'])
l2 = tf.nn.relu(l2)
l3 = tf.add(tf.matmul(data, hidden_3_layer['weights']), hidden_3_layer['biases'])
l3 = tf.nn.relu(l3)
output = tf.matmul(data, output_layer['weights']) + output_layer['biases']
return output
def train_neural_network(x):
prediction = neural_network_model(x)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
#learning rate = 0.001
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 10
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
for epoch in range(hm_epochs):
epoch_loss = 0
for _ in range(int(mnist.train.num_examples/batch_size)):
epoch_x, epoch_y = mnist.train.next_batch(batch_size)
_, c = sess.run([optimizer, cost], feed_dict={x:epoch_x,y:epoch_y})//THIS IS THE LINE WHERE THE ERROR 0CCURS
epoch_loss += c
print 'Epoch ' + epoch + ' completed out of ' + hm_epoch + ' loss: ' + epoch_loss
correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print 'Accuracy: ' + accuracy.eval({x:mnist.test.images, y:mnist.test.labels})
train_neural_network(x)
</code></pre>
<p>I have marked the line where the error occurs. What am I doing wrong and how can I fix it?</p>
<p>Stack overflow wants me to write more, it says that there is not enough details and too much code. I honestly don't understand tensorflow well enough to add any more details. I'm hoping someone can just help me with this. I think the problem is that <code>optimizer</code> and <code>cost</code> have different dimensions, but I don't understand why or what I should do about it.</p>
|
<p>One error lies in this line</p>
<pre><code> l2 = tf.add(tf.matmul(data, hidden_2_layer['weights']), hidden_2_layer['biases'])
</code></pre>
<p>Your second weights variable has dimensions <code>500 x 500</code>, but your <code>data</code> variable was fed with data of shape <code>100x784</code>, so the multiplication is incompatible. Change it to:</p>
<pre><code> l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
</code></pre>
<p>Also make the corresponding change for <code>l3</code> and <code>output</code>.</p>
<hr>
<p>Always specify a shape for the placeholder, like this,</p>
<pre><code>x = tf.placeholder(tf.float32, shape=(None, 784))
</code></pre>
<p>This will allow you to catch such errors while building the graph and TensorFlow will be able to pinpoint these errors.</p>
|
python|tensorflow
| 6
|
376,885
| 41,368,940
|
Signal handler works in python but not in ipython
|
<p>I'm attempting to set the numpy print options using a signal handler on the window resize event. Don't want to make the connection until numpy has been imported, and don't want to import numpy automatically at python startup. I've got it almost-working with the code below:</p>
<pre><code># example.py
import wrapt
@wrapt.when_imported('numpy')
def post_import_hook(numpy):
import signal
try:
from shutil import get_terminal_size
except ImportError:
# Python 2
from shutil_backports import get_terminal_size
def resize_handler(signum=signal.SIGWINCH, frame=None):
w, h = get_terminal_size()
numpy.set_printoptions(linewidth=w)
print('handled window resize {}'.format(w))
resize_handler()
signal.signal(signal.SIGWINCH, resize_handler)
</code></pre>
<p>It works in vanilla python REPL (test with <code>python -i example.py</code> and resize the terminal a bit). But it doesn't work in <code>ipython</code> when the same code is added to my startup ipython config, and I don't understand why. </p>
<p>I'm not fixed on this particular approach (that's just what I've tried so far), so I'll phrase the question more generally: </p>
<p><strong>How can numpy correctly fill to the terminal width automatically in ipython?</strong></p>
<p>You can use <code>print(np.arange(200))</code>, for example, to check numpy's line wrapping behaviour. </p>
|
<p>Inspired by <a href="https://stackoverflow.com/a/1988024/5067311">the standard fix for printing large arrays without truncation</a>, I tried setting the line width to infinity. This seems to be working fine both in the REPL and in ipython, so I suggest this workaround:</p>
<pre><code>import numpy
numpy.set_printoptions(linewidth=numpy.inf)
</code></pre>
<p>This doesn't explain why your fix doesn't work for ipython, but in case the above line doesn't mess with anything unexpected, it should make printing immune to resizing.</p>
|
python|numpy|ipython|window-resize
| 1
|
376,886
| 41,651,317
|
Reinstalling numpy on OS X using pip - "can't be modified or deleted because it's required by OS X"
|
<p>I'm trying to upgrade the <code>numpy</code> library on macOS, but <code>pip</code> doesn't seem to have sufficient permissions to delete numpy. Running <code>pip install --upgrade numpy</code> gives me this traceback:</p>
<pre><code>β Desktop sudo -H pip install --upgrade numpy
Collecting numpy
Using cached numpy-1.11.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Installing collected packages: numpy
Found existing installation: numpy 1.8.0rc1
DEPRECATION: Uninstalling a distutils installed project (numpy) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling numpy-1.8.0rc1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 778, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 754, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-21oX9d-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'
</code></pre>
<p>So apparently pip is having trouble deleting <code>/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info</code>. Trying to delete this file manually in finder gives me this error:</p>
<blockquote>
<p>“numpy-1.8.0rc1-py2.7.egg-info” can’t be modified or deleted because it’s required by OS X.</p>
</blockquote>
<p>How can I solve this? (Using the builtin numpy isn't an option because <code>tensorflow</code>, a package I need, needs the newest version).</p>
|
<p>I was facing the same issue. This worked for me:</p>
<pre><code>sudo pip install --ignore-installed numpy
</code></pre>
|
python|macos|numpy|pip
| 15
|
376,887
| 41,407,241
|
Does TensorFlow execute entire computation graph with sess.run()?
|
<p>For example, when we compute a variable <code>c</code> as <code>result = sess.run(c)</code>, does TF only compute the inputs required for computing <code>c</code>, or does it update all the variables of the complete computational graph?</p>
<p>Also, I don't seem to be able to do this:
<code>c = c*a*b</code>
as I am stuck with <code>uninitialized variable</code> error even after initializing <code>c</code> as <code>tf.Variable(tf.constant(1))</code>. Any suggestions?</p>
|
<p>Since the Python code of TF only sets up the graph, which is actually executed by the native implementation of all <code>ops</code>, your variables need to be initialized in that underlying environment. This happens by running two ops - for global and local variable initialization:</p>
<p><code>
session.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
</code></p>
<p>On the original question: no, it does not compute the whole graph - <code>sess.run</code> only executes the subgraph needed to produce the requested fetches, and you only have to feed the placeholders that this subgraph actually depends on.</p>
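<p>A minimal TF 1.x sketch illustrating this (the placeholder and tensor names are only illustrative):</p>
<pre><code>import tensorflow as tf

a = tf.placeholder(tf.float32, name='a')
b = tf.placeholder(tf.float32, name='b')
c = a * 2.0  # depends only on a
d = b * 3.0  # depends only on b

with tf.Session() as sess:
    # Only the subgraph for c runs, so only a has to be fed; b is never required here.
    print(sess.run(c, feed_dict={a: 1.0}))  # 2.0
</code></pre>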
|
python|machine-learning|tensorflow
| 0
|
376,888
| 41,322,379
|
Python - Verifying event based on values in dataframe
|
<p>I've got a dataframe for which I am trying to verify an event based on other values in the dataframe.
To be more concrete, it's about UFO sightings. I've already grouped the df by date of sighting and dropped all rows with only one unique entry.
The next step would be to check, when dates are equal, whether the city also is.</p>
<p><a href="https://i.stack.imgur.com/9TbhW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9TbhW.png" alt="enter image description here"></a></p>
<p>In this case I would like to drop all lines, as city is different. </p>
<p><a href="https://i.stack.imgur.com/iuN4c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iuN4c.png" alt="enter image description here"></a></p>
<p>I'd like to keep these, as the event has the same time and the city is the same. </p>
<p>I am looking for a way to do this for my entire dataframe. Sorry if that's a stupid question; I'm very new to programming. </p>
|
<p>I don't think I'm understanding your problem, but I'll post this answer and we can work from there.</p>
<p>The <code>corroborations</code> column counts the number of times we have an observation with the same datetime and city/state combination. So in the example below, the 20th of December has three sightings, but two of those were in Portville, and the other was in Duluth. Thus the corroborations column for each event receives values of 2 and 1, respectively.</p>
<p>Similarly, even though we have four observations taking place in Portville, two of them happened on the 20th, and the others on the 21st. Thus we group them as two separate events.</p>
<pre><code>df = pd.DataFrame({'datetime': pd.to_datetime(['2016-12-20', '2016-12-20', '2016-12-20', '2016-12-21', '2016-12-21']),
'city': ['duluth', 'portville', 'portville', 'portville', 'portville'],
'state': ['mn', 'ny', 'ny', 'ny', 'ny']})
s = lambda x: x.shape[0]
df['corroborations'] = df.groupby(['datetime', 'city', 'state'])['city'].transform(s)
>>> df
datetime city state corroborations
0 2016-12-20 duluth mn 1
1 2016-12-20 portville ny 2
2 2016-12-20 portville ny 2
3 2016-12-21 portville ny 2
4 2016-12-21 portville ny 2
</code></pre>
|
python|pandas|numpy
| 0
|
376,889
| 41,574,536
|
Running sums based on another column in Pandas
|
<p>I have a dataframe like the following:</p>
<pre><code> col1 col2
0 1 True
1 3 True
2 3 True
3 1 False
4 2 True
5 3 True
6 2 False
7 2 True
</code></pre>
<p>I want to get a running sum of <code>True</code> values. Whenever I see a <code>False</code> value in <code>col2</code>, I need to take the cumulative sum of <code>col1</code> up to that point. So, the DataFrame would look like the following:</p>
<pre><code> col1 col2 col3
0 1 True 0
1 3 True 0
2 3 True 0
3 1 False 7
4 2 True 0
5 3 True 0
6 2 False 5
7 2 True 0
</code></pre>
<p>How can I do this?</p>
|
<p>You can create a group variable with cumsum on <code>col2</code> and then calculate the sum per group:</p>
<pre><code>df.loc[~df.col2, 'col3'] = (df.col1 * df.col2).groupby(by = (~df.col2).cumsum()).cumsum().shift()
df.fillna(0)
</code></pre>
<p><a href="https://i.stack.imgur.com/t2c24.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t2c24.png" alt="enter image description here"></a></p>
|
python|pandas
| 3
|
376,890
| 41,241,782
|
How to implement Pandas GroupBy filter on mixed type data?
|
<p>Thanks for reading. Apologies for what I am sure is a simple problem to answer.</p>
<p>I have some dataframe:</p>
<pre><code>df:
Entry Found
0 Dog [1,0]
1 Sheep [0,1]
2 Cow "No Match"
3 Goat "No Match"
</code></pre>
<p>I want to return a new dataframe which contains only entries which contain <code>No Match</code> in the <code>Found</code> column (and preserve their index order) i.e.:</p>
<p>Output:</p>
<pre><code> Entry Found
0 Cow "No Match"
1 Goat "No Match"
</code></pre>
<p>I know to do this I must use the built in Pandas <code>GroupBy()</code> and <code>filter()</code> functions. Following these questions (<a href="https://stackoverflow.com/questions/38544301/filter-data-with-groupby-in-pandas">Filter data with groupby in pandas</a>) and (<a href="https://stackoverflow.com/questions/17950835/pandas-dataframe-filtering-using-groupby-and-a-function">Pandas: DataFrame filtering using groupby and a function</a>) I tried:</p>
<pre><code>>> df.groupby('Found','Entry').filter(lambda x: type(x) == str)
>> No axis named Entry for object type <class 'pandas.core.frame.DataFrame'>
</code></pre>
<p>and:</p>
<pre><code>>> df.groupby('Found').filter(lambda x: type(x) == str)
>> TypeError: unhashable type: 'list'
</code></pre>
<p>Can anyone tell me what I am doing wrong? </p>
|
<p>Instead of using the <code>groupby</code> function, you can simply filter the rows like this:</p>
<pre><code>df = df[df["Found"] == "No Match"]
</code></pre>
<p>Thus it looks in the <code>Found</code> column for <code>"No Match"</code> entries; the comparison is simply <code>False</code> when the value is a list, instead of raising an error.</p>
|
python|python-2.7|pandas|dataframe|group-by
| 3
|
376,891
| 41,318,471
|
Too slow for converting to tf.constant when list contains 1000000 elems
|
<p>What is the best way to do the training? It's too slow, and I don't know why it is that slow. </p>
<pre><code>samples_all = tf.constant(samples_all)  # more than 1000000 elems
labels_all = tf.constant(labels_all)
[sample, label] = tf.train.slice_input_producer([samples_all, labels_all])
</code></pre>
<p>The samples_all contains more than 1000000 string elems which represent the image paths. The first two lines in the code take too long to execute.</p>
|
<p>Below is an example of loading 2M strings into a string variable, which takes less than 1 second on my MacBook. Variables are more efficient for large constants than <code>tf.constant</code>, which is inlined in the graph. Note that there's also <code>tf.string_input_producer</code> which can handle large lists of strings without loading them all into TensorFlow memory.</p>
<pre><code>n = 2000000
string_list = np.asarray(["string"]*n);
sess = tf.Session()
var_input = tf.placeholder(dtype=tf.string, shape=(n,))
var = tf.Variable(var_input)
start = time.time()
# var.initializer is equivalent to var.assign(var_input).op
sess.run(var.initializer, feed_dict={var_input: string_list})
elapsed = time.time()-start
rate = n*6./elapsed/10**6
print("%.2f MB/sec"%(rate,)) # => 15.13 MB/sec
</code></pre>
|
tensorflow
| 0
|
376,892
| 27,451,885
|
Python Curve Fitting issue
|
<p>EDIT: First problem solved but I now have a new issue:</p>
<p>I am currently doing a curve fit on some data to be input. My function is: </p>
<pre><code>def extract_parameters(Ts, ts):
def model(t, Ti, Ta, c):
return (Ti - Ta)*math.e**(-t / c) + Ta
popt, pcov = cf(model, ts, Ts, p0 = (10, 7, 6))
Ti, Ta, c = popt
xfine = np.linspace(0, 10, 101)
yfitted = model(xfine, *popt)
pl.plot(Ts, ts, 'o', label = 'data point')
pl.plot(xfine, yfitted, label = 'fit')
pylab.legend()
pylab.show()
</code></pre>
<p>When I enter:</p>
<pre><code>extract_parameters(np.array([1,2,3,4,5,6,7,8,9,10]), np.array([10.0,9.0,8.5,8.0,7.5,7.3,7.0,6.8,6.6,6.3]))
</code></pre>
<p>My graph starts to fit right at the end, but while my data starts at 10, my curve starts at about 240 and then sweeps down, which is not what I want. I thought setting p0 would help, but it doesn't appear to help at all. </p>
<p>Any thoughts would be greatly appreciated. </p>
|
<p>Your parameters to fit are <code>Ti</code>, <code>Ta</code> and <code>c</code>, so don't define <code>Ti</code> first:</p>
<pre><code>from scipy.optimize import curve_fit
def model(t, Ti, Ta, c):
return (Ti - Ta) * np.exp(-t / c) + Ta
Ti, Ta, c = 100, 25, 10 # super-low heat-capacity tea!
t = np.linspace(0,100,101) # time grid
data = model(t, Ti, Ta, c) # the data to be fitted
data += np.random.rand(len(data)) # add some noise
curve_fit(model, t, data)
</code></pre>
<p>gives:</p>
<pre><code>(array([ 100.4656674 , 25.44794971, 10.04560802]),
array([[ 0.02530277, 0.00100244, -0.00377959],
[ 0.00100244, 0.00122549, -0.00062579],
[-0.00377959, -0.00062579, 0.00128791]]))
</code></pre>
|
python|numpy
| 1
|
376,893
| 27,744,908
|
computing sum of pandas dataframes
|
<p>I have two dataframes that I want to add bin-wise. That is, given</p>
<pre><code>dfc1 = pd.DataFrame(list(zip(range(10),np.zeros(10))), columns=['bin', 'count'])
dfc2 = pd.DataFrame(list(zip(range(0,10,2), np.ones(5))), columns=['bin', 'count'])
</code></pre>
<p>which gives me this</p>
<p>dfc1:</p>
<pre><code> bin count
0 0 0
1 1 0
2 2 0
3 3 0
4 4 0
5 5 0
6 6 0
7 7 0
8 8 0
9 9 0
</code></pre>
<p>dfc2:</p>
<pre><code> bin count
0 0 1
1 2 1
2 4 1
3 6 1
4 8 1
</code></pre>
<p>I want to generate this:</p>
<pre><code> bin count
0 0 1
1 1 0
2 2 1
3 3 0
4 4 1
5 5 0
6 6 1
7 7 0
8 8 1
9 9 0
</code></pre>
<p>where I've added the count columns where the bin columns matched.</p>
<p>In fact, it turns out that I only ever add 1 (that is, count in dfc2 is always 1). So an alternate version of the question is "given an array of bin values (dfc2.bin), how can I add one to each of their corresponding count values in dfc1?"</p>
<p>My only solution thus far feels grossly inefficient (and slightly unreadable in the end): doing an outer join between the two bin columns, thus creating a third dataframe on which I do a computation and then project out the unneeded column.</p>
<p>Suggestions?</p>
|
<p>First set <code>bin</code> as the index in both dataframes, then you can use <code>add</code>; <code>fill_value</code> specifies that zero should be used when a bin is missing from one of the dataframes:</p>
<pre><code>dfc1 = dfc1.set_index('bin')
dfc2 = dfc2.set_index('bin')
result = pd.DataFrame.add(dfc1, dfc2, fill_value=0)
</code></pre>
<p>Pandas automatically sums up rows with equal index.</p>
<p>By the way, if you need to perform such an operation frequently, I strongly recommend using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow">numpy.bincount</a>, which even allows the bin index to repeat within one array.</p>
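<p>A small sketch of that approach for the "add one per bin value" case mentioned in the question (array names are only illustrative):</p>
<pre><code>import numpy as np

counts = np.zeros(10)                     # running counts for bins 0..9 (dfc1['count'])
bins_to_bump = np.array([0, 2, 4, 6, 8])  # bin values to increment (dfc2['bin']); may repeat
counts += np.bincount(bins_to_bump, minlength=len(counts))
print(counts)  # [1. 0. 1. 0. 1. 0. 1. 0. 1. 0.]
</code></pre>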
|
python|python-3.x|pandas
| 1
|
376,894
| 27,673,654
|
Dataframe output difference compared to what's supposed to be in the textbook
|
<p>I have a dataframe, names, containing columns of name, sex, births, year, etc. for the "Python for Data Analysis" book. </p>
<p>When I type <code>names</code>, it gives me below. </p>
<pre><code> name sex births year prop
0 Mary F 7065 1880 0.077643
1 Anna F 2604 1880 0.028618
2 Emma F 2003 1880 0.022013
3 Elizabeth F 1939 1880 0.021309
4 Minnie F 1746 1880 0.019188
5 Margaret F 1578 1880 0.017342
6 Ida F 1472 1880 0.016177
7 Alice F 1414 1880 0.015540
8 Bertha F 1320 1880 0.014507...
</code></pre>
<p>However, in the book, it's supposed to be like below:</p>
<pre><code>In [378]: names
Out[378]:
<class 'pandas.core.frame.DataFrame'> Int64Index: 1690784 entries, 0 to 1690783 Data columns:
name 1690784 non-null values
sex 1690784 non-null values births 1690784 non-null values
year 1690784 non-null values
prop 1690784 non-null values dtypes: float64(1), int64(2), object(2)
</code></pre>
<p>Would someone have any idea how to fix this? </p>
|
<p>That's what modern <code>pandas</code> should be expected to show, so I don't think there's anything needing to be fixed. If you want something more like that representation, you can call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html" rel="nofollow"><code>df.info()</code></a>. Note that the below is only taken from the values you showed, so it's obviously much smaller:</p>
<pre><code>In [20]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9 entries, 0 to 8
Data columns (total 5 columns):
name 9 non-null object
sex 9 non-null object
births 9 non-null int64
year 9 non-null int64
prop 9 non-null object
dtypes: int64(2), object(3)
memory usage: 324.0+ bytes
</code></pre>
<p>I vaguely remember that when frames were large enough in the past the default might have been to give an <code>info</code>-like overview but I can't remember the details. In any case, I don't think anything's wrong.</p>
|
python|pandas
| 2
|
376,895
| 27,431,578
|
Analyzing a CSV file using Django Pandas and Pyplot
|
<p>So I am currently working on a Django web app that allows users to upload CSV files, analyze those files, and then present a graph back to the client. The CSVs that will be inputted are generated from Matlab and all describe the same general type of data, but the formatting of each file differs depending on how the user exported the data from Matlab. My problem is that there is no standard for how the files are formatted, so I have to dynamically check the CSV file upon upload and then generate the correct graph accordingly. I think this is best demonstrated with an example. </p>
<h3>Example CSV data for velocity and force</h3>
<pre><code>Shock Name,
Shock ID,
Vehicle,
Location,
Compression Valving,
Rebound Valving,
Piston Valving,
Other Valving,
Compression Setting,
Rebound Setting,
Preload Setting,
Notes,
,
Measured_Stroke, 2.00 in
Seal_Drag, 7.77 lbs
Test_Temperature, 73.63 F
Peak_Velocity, 12.47 in/sec
,
Amplitude, 1.00 in
Test_Period, 0.00 sec
Gas_Force, 34.78 lbs
Test_Speed, 12.21 in/sec
Velocity, CO, RC, CC, RO, CA, RA
in/sec, lbs, lbs, lbs, lbs, lbs, lbs
0, -139.3172, -138.4583, 33.49831, 34.24039, -52.90947, -52.10897
1, 2.637415, -353.36, 119.1066, -98.40744, 60.87201, -225.8837
2, 92.96767, -423.1163, 136.1344, -293.0744, 114.551, -358.0953
3, 117.664, -445.5688, 144.661, -417.9908, 131.1625, -431.7798
4, 126.363, -460.8381, 151.5483, -456.5551, 138.9557, -458.6966
5, 133.3087, -474.8662, 158.4935, -473.8318, 145.9011, -474.349
6, 139.7847, -487.5624, 163.9969, -486.3072, 151.8908, -486.9348
7, 146.0275, -500.0915, 168.9006, -497.6936, 157.464, -498.8926
8, 152.5096, -512.0554, 174.573, -508.9675, 163.5413, -510.5115
9, 160.0202, -524.4933, 178.737, -519.4616, 169.3786, -521.9774
10, 166.6279, -534.5439, 182.7012, -529.475, 174.6645, -532.0095
11, 174.6142, -545.5678, 186.8209, -541.7671, 180.7175, -543.6675
12, 183.1358, -556.0939, 188.4442, -553.749, 185.79, -554.9215
</code></pre>
<p>Everything before the Velocity box is merely a large settings header that can vary from file to file depending on the user's settings in Matlab. Velocity should be the index column in that each row is a velocity step in time. Each column after Velocity is labeled with an acronym (e.g. CO, RC, CC, etc.) which all need to be graphed according to the velocity time step.</p>
<p>My attempted implementation is as follows:</p>
<pre><code># graph input file
def graph(request):
# graph style
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = (15,5)
new_file = request.session.get('docFile')
fig = Figure()
ax = fig.add_subplot(111)
ax.set_xlabel("Time")
ax.set_ylabel("Velocity")
data_df = pd.read_csv(new_file, header=28)
data_df = pd.DataFrame(data_df)
data_df.plot(ax=ax, title="Roehrig Shock Data", style="-o")
canvas = FigureCanvas(fig)
response = HttpResponse( content_type = 'image/png')
canvas.print_png(response)
return response
</code></pre>
<p>This presents a graph correctly, but I am hard-coding <code>header=28</code> as the line that Velocity falls on.</p>
<p>My questions are:</p>
<ol>
<li>Is there a way to dynamically scan the CSV for the Velocity and then start the header there?</li>
<li>How can I label each line plot with the name of the corresponding column acronym?</li>
</ol>
|
<p>You can try to browse through the entire file with a regular <code>open</code> statement and parse your headers dynamically before using pandas.</p>
<p>For instance:</p>
<pre><code>import re
import pandas as pd
raw_data = open('your_file.csv', 'r').read()
rows = re.split('\n', raw_data)
for idx, row in enumerate(rows):
cells = row.split(',')
if 'Velocity' in cells:
header_names = cells # this will be something like ['Velocity', ' CO', ' RC', ...]
header_row = idx
break
# Now you have the header line as well as the custom header names.
# You can start using pandas.read_csv
pd.read_csv('your_file.csv', header=header_row)
# ...
# and use `header_names` for your plots.
</code></pre>
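<p>As for your second question: once you pass the detected header row to <code>read_csv</code>, the DataFrame picks up the column names, and <code>DataFrame.plot()</code> then labels each line with its column name in the legend automatically. A minimal sketch (hypothetical file name, reusing <code>header_row</code> from above, and assuming the units row sits directly under the header as in your sample):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

# re-read the file, using the detected line as the header
data_df = pd.read_csv('your_file.csv', header=header_row, skipinitialspace=True)

# drop the units row ('in/sec, lbs, ...') and convert the rest to numbers
data_df = data_df.iloc[1:].astype(float)
data_df = data_df.set_index('Velocity')

# each force column (CO, RC, CC, ...) becomes one labeled line in the legend
ax = data_df.plot(title="Roehrig Shock Data", style="-o")
ax.set_xlabel("Velocity (in/sec)")
ax.set_ylabel("Force (lbs)")
plt.show()
</code></pre>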
|
python|django|csv|matplotlib|pandas
| 3
|
376,896
| 27,557,406
|
array operation results differ between interactive and the program
|
<blockquote>
<p>I compare two arrays interactively in IPython, and the result is correct:</p>
<pre><code>In [143]: r=np.array([0., 0.04166667, 0.08333333, 0.125, 0.16666667, 0.20833333, 0.25, 0.29166667, 0.33333333, 0.375, 0.41666667, 0.45833333, 0.5, 0.54166667, 0.58333333, 0.625, 0.66666667, 0.70833333, 0.75, 0.79166667, 0.83333333, 0.875, 0.91666667, 0.95833333])

In [144]: c=np.array([0., 0.04166667, 0., 0., 0., 0.20833333, 0., 0.29166667, 0., 0., 0., 0.45833333, 0., 0.54166667, 0., 0., 0., 0.70833333, 0., 0.79166667, 0., 0., 0., 0.95833333])
In [145]: c==r
Out[145]:
array([ True, True, False, False, False, True, False, True, False,
False, False, True, False, True, False, False, False, True,
False, True, False, False, False, True], dtype=bool
</code></pre>
</blockquote>
<p>)</p>
<p>But in a python program in the 'same' setting, the results are not correct:</p>
<pre><code>turns=1
r = np.linspace(1/24, turns, 24*turns,endpoint=False)
rr=r%1
c=np.array([0.,0.04166667,0,0,0,0.20833333,0,0.29166667,0,0,0,0.45833333,
0,0.54166667,0,0,0,0.70833333,0,0.79166667,0,0,0,0.95833333])
cc=np.tile(c,turns)
print([rr==cc])
</code></pre>
<p>The result is different:</p>
<pre><code>[array([ True, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False], dtype=bool)]
</code></pre>
<p>What mistake did I make? Your help is appreciated.</p>
|
<p>This is an issue with floating point precision; your first and second case are really <strong>not the same</strong>. The <a href="http://floating-point-gui.de/" rel="nofollow">Floating Point guide</a> is a useful resource here. </p>
<p>In the first instance, you're populating two arrays from floating point literals and then comparing them directly. In the second, you're using <em>the output of a numpy function</em> and directly comparing that against the second array which is still created from literals.</p>
<p>Even though the <strong>displayed</strong> version of the function output looks the same, the actual numbers have more precision than that, and that's enough to throw off the <code>==</code> comparison; you should (almost) never be using equality comparison for floats in the first place. </p>
<p>A quick example: </p>
<pre><code>>> print 0.1 + 0.2
0.3
>> 0.1 + 0.2
0.30000000000000004
</code></pre>
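<p>As for fixing the comparison itself: the usual remedy is to compare with a tolerance instead of exact equality, for example with <code>numpy.isclose</code> (element-wise) or <code>numpy.allclose</code> (whole array). A small sketch along the lines of the example above:</p>
<pre><code>import numpy as np

a = np.array([0.1 + 0.2, 1.0 / 3.0])
b = np.array([0.3, 0.33333333])

print(a == b)                       # [False False] -- exact equality fails
print(np.isclose(a, b, atol=1e-6))  # [ True  True] -- comparison within a tolerance
</code></pre>
<p>In your program you would use <code>np.isclose(rr, cc, atol=1e-6)</code> (with a tolerance matching the precision of your literals) rather than <code>rr == cc</code>.</p>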
|
python|arrays|numpy|operation
| 1
|
376,897
| 27,851,188
|
Python pandas: Identify Records Based on Multiple Criteria on Multiple Fields
|
<p>Using IPython (Python 3.4) with pandas: I have a data frame that looks roughly like this (notice the duplicate records per student, sometimes there are 3+ per student):</p>
<pre><code>Year Subject Student Score Date
2014 Math 1 34 31-Jan
2014 Math 1 34 26-Jan
2014 Math 2 65 26-Jan
2014 Math 2 76 31-Jan
2014 Math 3 45 3-Feb
2014 Math 3 67 31-Jan
</code></pre>
<p>I am looking for a way to return the score per student based on the following criteria:</p>
<ol>
<li>highest score,</li>
<li>and when the scores are the same across an individual student's records, the most recent date.</li>
</ol>
<p>Here's the desired output:</p>
<pre><code>Year Subject Student Score Date
2014 Math 1 34 31-Jan
2014 Math 2 76 31-Jan
2014 Math 3 67 31-Jan
</code></pre>
<p>Here's what I've tried so far. I used <code>groupby</code> on year, subject, and student to obtain the highest score per student for a given year and subject area:</p>
<pre><code>by_duplicate = df.groupby(['Year', 'Subject', 'Student'])
HighScore = by_duplicate[['Year', 'Subject', 'Student', 'Score']].max()
</code></pre>
<p>Here, I rename the score column so that when I join it to the original dataframe, I know which column is which. This may not be necessary, but I'm not sure.</p>
<pre><code>HighScore.rename(columns={'Score': 'Score2'}, inplace=True)
</code></pre>
<p>Here, I add a blank 'HighScore' column in anticipation that it will be populated with a 1 later if the row has the highest score. More on this later...</p>
<pre><code>HighScore['HighScore'] = ""
</code></pre>
<p>Then I do the same for the most recent date:</p>
<pre><code>Recent = by_duplicate[['Year', 'Subject', 'Student', 'Date']].max()
Recent.rename(columns={'Date': 'Date2'}, inplace=True)
Recent['Recent'] = ""
</code></pre>
<p>My approach was to:</p>
<ol>
<li>create tables for each field (score and date) using groupby,</li>
<li>identify the rows containing the highest and most recent scores, respectively, by entering a "1" in their respective new columns ('HighScore' and 'Recent'),</li>
<li>somehow join these grouped tables back to the original dataframe on Year, Subject, and Student (I'm guessing this requires somehow ungrouping the groups, as <code>pd.merge</code> is not working on the grouped data frames).</li>
</ol>
<p>The end result, according to my theory, would look something like this:</p>
<pre><code>Year  Subject  Student  Score  Date    HighScore  Recent
2014  Math     1        34     31-Jan  1          1
2014  Math     1        34     26-Jan  1          0
2014  Math     2        65     26-Jan  0          0
2014  Math     2        76     31-Jan  1          1
2014  Math     3        45     3-Feb   0          1
2014  Math     3        67     31-Jan  1          0
</code></pre>
<p>And once I have this table, I would need to do something like this:</p>
<ol>
<li>Per student for a given year and subject area: return the sum of 'HighScore'.</li>
<li>If the sum of 'HighScore' is greater than 1, then take the 'Recent' row equal to 1.</li>
</ol>
<p>I believe this will give me what I need.</p>
<p>Thanks in advance!!!</p>
|
<p>If I'm following correctly, I think you can simplify this by sorting on both the score and the date, so that the last element of each group is always the most recent of the highest score. I might do something like</p>
<pre><code>>>> df["FullDate"] = pd.to_datetime(df["Year"].astype(str) + "-" + df["Date"],
format="%Y-%d-%b")
>>> df = df.sort(["Score", "FullDate"])
>>> df.groupby(["Year", "Subject", "Student"]).tail(1)
Year Subject Student Score Date FullDate
0 2014 Math 1 34 31-Jan 2014-01-31
5 2014 Math 3 67 31-Jan 2014-01-31
3 2014 Math 2 76 31-Jan 2014-01-31
</code></pre>
<p>where first I create a <code>FullDate</code> column which is a real datetime and not a string, so that I know it'll sort correctly.</p>
<p>Note that the order we sort in matters: we want first by score, and then within the maximum scores the "largest" (most recent) date last. If instead we had done it the other way, we'd have instead had</p>
<pre><code>>>> df = df.sort(["FullDate", "Score"]) # THIS IS THE WRONG ORDER
>>> df.groupby(["Year", "Subject", "Student"]).tail(1)
Year Subject Student Score Date FullDate
0 2014 Math 1 34 31-Jan 2014-01-31
3 2014 Math 2 76 31-Jan 2014-01-31
4 2014 Math 3 45 3-Feb 2014-02-03
</code></pre>
<p>which would give us the maximum score on the latest day. </p>
<p>Now it is true that sorting is ~O(N log N) and finding the maximum can be done in O(N), but IMHO the simplicity dramatically outweighs the usually minor performance loss.</p>
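<p>One small note if you are on a newer pandas version: <code>DataFrame.sort</code> has since been removed in favor of <code>sort_values</code>, so the sorting step becomes:</p>
<pre><code>df = df.sort_values(["Score", "FullDate"])
</code></pre>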
|
python|pandas|dataframe
| 1
|
376,898
| 27,807,272
|
Building Numpy with Intel Compilers and MKL - CentOS 7
|
<p>Currently I am attempting to build Numpy 1.9.1 against Intel's MKL using the Intel compilers on CentOS 7. I have Intel Parallel Studio XE 2015 C++ and Fortran for Linux installed, and in my terminal I can use both the 'icc' and 'ifort' commands; both are found without issue. I have also run:</p>
<p><code>$ source /opt/intel/composer_xe_2015/bin/compilervars.sh intel64</code></p>
<p>In accordance with this guide from Intel's webpage for doing exactly what I am trying to do: <a href="https://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl" rel="nofollow">https://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl</a>, I have attempted to build numpy using this command:</p>
<p><code>$ sudo python setup.py config --compiler=intelem build_clib --compiler=intelem build_ext --compiler=intelem install</code></p>
<p>The resulting message is:</p>
<pre><code>Running from numpy source directory.
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'test_suite'
warnings.warn(msg)
non-existing path in 'numpy/f2py': 'docs'
non-existing path in 'numpy/f2py': 'f2py.1'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
FOUND:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/opt/intel/composer_xe_2015/mkl/lib/intel64']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/composer_xe_2015/mkl/include']
FOUND:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/opt/intel/composer_xe_2015/mkl/lib/intel64']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/composer_xe_2015/mkl/include']
non-existing path in 'numpy/lib': 'benchmarks'
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
lapack_mkl_info:
mkl_info:
FOUND:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/opt/intel/composer_xe_2015/mkl/lib/intel64']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/composer_xe_2015/mkl/include']
FOUND:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/opt/intel/composer_xe_2015/mkl/lib/intel64']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/composer_xe_2015/mkl/include']
FOUND:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/opt/intel/composer_xe_2015/mkl/lib/intel64']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/composer_xe_2015/mkl/include']
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running config
running build_clib
running build_src
build_src
building py_modules sources
building library "npymath" sources
Could not locate executable icc
Could not locate executable ecc
customize Gnu95FCompiler
Found executable /usr/bin/gfortran
customize Gnu95FCompiler
customize Gnu95FCompiler using config
C compiler: icc -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -c'
icc: _configtest.c
sh: icc: command not found
sh: icc: command not found
failure.
removing: _configtest.c _configtest.o
Traceback (most recent call last):
File "setup.py", line 251, in <module>
setup_package()
File "setup.py", line 243, in setup_package
setup(**metadata)
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/command/build_clib.py", line 63, in run
self.run_command('build_src')
File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/command/build_src.py", line 153, in run
self.build_sources()
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/command/build_src.py", line 164, in build_sources
self.build_library_sources(*libname_info)
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/command/build_src.py", line 299, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/home/myles/Downloads/numpy-1.9.1/numpy/distutils/command/build_src.py", line 386, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 686, in get_mathlib_info
raise RuntimeError("Broken toolchain: cannot link a simple C program")
RuntimeError: Broken toolchain: cannot link a simple C program
</code></pre>
<p>I am using CentOS 7 as my operating system for this. It appears that for some reason the build script cannot find icc.</p>
<p>I need numpy built with the Intel compilers because, from what I have gathered, numpy must be built with the Intel compilers and Intel MKL for automatic offloading to a Xeon Phi to be available. If anyone has built numpy for automatic offloading to a Xeon Phi, or has simply built numpy with the Intel compilers in this manner, I would appreciate input on fixing this error.</p>
|
<p>On Unix/Linux systems, the <code>sudo</code> command (<b>S</b>uper<b>U</b>ser <b>DO</b>) is set up to use the environment variables defined for the <code>root</code> user, not the user running the command. This can lead to problems if you install a program in a non-standard location, then need to run it with superuser privileges. For example, on OS X systems (which run on a flavor of BSD Unix), <code>/usr/local/bin</code> is not included by default in the <code>PATH</code> environment variable. You may set up your user's account to include this directory in your <code>PATH</code>, but if you try to use <code>sudo</code> with a program in there, it will not be found, unless you modify <code>root</code>'s environment (or the system's environment) to include <code>/usr/local/bin</code> in the <code>PATH</code>.</p>
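<p>A quick way to see this on your own machine (plain commands; the exact output will depend on your configuration):</p>
<pre><code># as your normal user: icc is found through the PATH you set up with compilervars.sh
which icc
echo $PATH

# under sudo: root's (reset) PATH usually does not include the Intel directories
sudo which icc
sudo env | grep '^PATH'
</code></pre>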
<p>This is likely the cause of your issues. <code>icc</code> and <code>ifort</code> are in (IIRC) in <code>/opt/intel/bin</code> and/or <code>/opt/intel/composer_xe_2015/bin</code> (one may be symlinked to the other), and you've added at least one of those directories to your <code>$PATH</code> environment variable, but when using <code>sudo</code> to execute the build commands, the programs are not being found. To get around this, don't build using <code>sudo</code>. First, to clear out any leftover temp files, run</p>
<pre><code>sudo make clean
</code></pre>
<p>then run</p>
<pre><code>python setup.py config --compiler=intelem
</code></pre>
<p>and confirm everything set up properly. Next, run</p>
<pre><code>python setup.py build_clib --compiler=intelem build_ext --compiler=intelem
</code></pre>
<p>to build everything. If that succeeded with no errors, optionally run </p>
<pre><code>python setup.py test
</code></pre>
<p>(I don't think you need the <code>compiler</code> argument for this target) to verify the build, <strong>then</strong> run</p>
<pre><code>sudo python setup.py install
</code></pre>
<p>to install everything in your system's <code>site-packages</code> directory. <code>sudo</code> is needed here as <code>site-packages</code> is owned by <code>root</code>.</p>
|
python|linux|numpy|intel-mkl|centos7
| 1
|
376,899
| 27,582,056
|
How to split DataFrame using some constraint?
|
<p>Suppose I have a DataFrame <code>df</code>. I want to split this DataFrame into new DataFrames such that, within each one, the salaries are always increasing.</p>
<pre><code>>>> DATA = {'id':[1,2,3,4,5], 'salary':[1200,2300,2400,1200,2100] }
>>> df = DataFrame(DATA)
>>> df
id salary
0 1 1200
1 2 2300
2 3 2400
3 4 1200
4 5 2100
</code></pre>
<p>From the above DataFrame, I need to get two like below:</p>
<pre><code> DataFrame 1 DataFrame 2
----------- -----------
id salary | id salary
0 1 1200 | 0 4 1200
1 2 2300 | 1 5 2100
2 3 2400 |
</code></pre>
<p>Any help?</p>
|
<p>You could do something like</p>
<pre><code>>>> grouped = df.groupby((df.salary.diff() <= 0).cumsum())
>>> parts = [g.reset_index(drop=True) for k, g in grouped]
>>> for p in parts:
... print(p)
...
id salary
0 1 1200
1 2 2300
2 3 2400
id salary
0 4 1200
1 5 2100
</code></pre>
<hr>
<p>This works because you can take a new group when the difference is <=0:</p>
<pre><code>>>> df.salary.diff()
0 NaN
1 1100
2 100
3 -1200
4 900
Name: salary, dtype: float64
>>> df.salary.diff() <= 0
0 False
1 False
2 False
3 True
4 False
Name: salary, dtype: bool
</code></pre>
<p>Giving us a <code>True</code> whenever a new group should begin, and since <code>True</code> has a value of 1 as an integer, we can use <code>cumsum</code> to give us a new number for each group:</p>
<pre><code>>>> (df.salary.diff() <= 0).cumsum()
0 0
1 0
2 0
3 1
4 1
Name: salary, dtype: int32
</code></pre>
|
python|pandas
| 4
|