Unnamed: 0 (int64, 0 to 378k)
| id (int64, 49.9k to 73.8M)
| title (string, length 15 to 150)
| question (string, length 37 to 64.2k)
| answer (string, length 37 to 44.1k)
| tags (string, length 5 to 106)
| score (int64, -10 to 5.87k)
|
|---|---|---|---|---|---|---|
8,800
| 66,539,653
|
Pandas - Number of rows to the last row in a group that meets a requirement
|
<p>My data is like this</p>
<pre><code>date group meet_criteria
2020-03-31 1 no
2020-04-01 1 yes
2020-04-02 1 no
2020-04-03 1 no
2020-04-04 1 yes
2020-04-05 1 no
2020-03-31 2 yes
2020-04-01 2 no
</code></pre>
<p>I would like to create another column which equals 1 divided by the number of days since the last date in the group on which <code>meet_criteria</code> was yes (the current row's <code>meet_criteria</code> is excluded, and if a group has never met the criteria the value is 0).</p>
<p>My desired data will look like this</p>
<pre><code>date group meet_criteria last_time_met_criteria
2020-03-31 1 no 0
2020-04-01 1 yes 0
2020-04-02 1 no 1
2020-04-03 1 no 0.5
2020-04-04 1 yes 0.333333
2020-04-05 1 no 1
2020-03-31 2 yes 0
2020-04-01 2 no 1
</code></pre>
<p>Is there any way to do this efficiently in pandas? Thanks</p>
|
<p>This can be done using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>pd.merge_asof</code></a> & subsequent calculations in pandas.</p>
<p>Here's a fully worked example with your data (original data loaded into a variable called <code>df</code>, and <code>df.date</code> converted to <code>datetime</code> first)</p>
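<p>Assuming the sample above, <code>df</code> can be set up like this (a minimal sketch of that prerequisite step):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'date': ['2020-03-31', '2020-04-01', '2020-04-02', '2020-04-03',
             '2020-04-04', '2020-04-05', '2020-03-31', '2020-04-01'],
    'group': [1, 1, 1, 1, 1, 1, 2, 2],
    'meet_criteria': ['no', 'yes', 'no', 'no', 'yes', 'no', 'yes', 'no'],
})
df.date = pd.to_datetime(df.date)
</code></pre>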
<pre><code># sorting necessary for how `merge_asof` will be used
df2 = df.sort_values(['date', 'group'])
# construct the `right` data frame of dates to lookup
df_meet_criteria = df2[df2.meet_criteria == 'yes'].copy()
df_meet_criteria['date_met_criteria'] = df_meet_criteria.date
# merge
# `by`: columns to do regular merge on
# `on`: columns to do as_of merge on
# `allow_exact_matches`: True -> closed interval, False -> open interval,
# i.e. latest date before current date
last_date = pd.merge_asof(
df2,
df_meet_criteria,
by='group',
on='date',
allow_exact_matches=False,
suffixes=('', '_y')
).sort_values(['group', 'date'])
# calculate the inverse_days.
last_date['days_since'] = (last_date.date - last_date.date_met_criteria).dt.days
last_date.loc[last_date.days_since == 0, 'days_since'] = np.nan
last_date['last_time_met_criteria'] = (1 / last_date.days_since).fillna(0)
final = last_date[['date', 'group', 'meet_criteria', 'last_time_met_criteria']]
</code></pre>
<p>The final dataframe looks like this:</p>
<pre><code> date group meet_criteria last_time_met_criteria
0 2020-03-31 1 no 0.000000
2 2020-04-01 1 yes 0.000000
4 2020-04-02 1 no 1.000000
5 2020-04-03 1 no 0.500000
6 2020-04-04 1 yes 0.333333
7 2020-04-05 1 no 1.000000
1 2020-03-31 2 yes 0.000000
3 2020-04-01 2 no 1.000000
</code></pre>
|
python|pandas
| 0
|
8,801
| 66,731,885
|
How Encoder passes Attention Matrix to Decoder in Tranformers 'Attention is all you need'?
|
<p>I was reading the renowned paper <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">'Attention is all you need'</a>. Though I am clear on most of the major concepts, I got stuck on a few points:
<a href="https://i.stack.imgur.com/8wE8A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8wE8A.png" alt="enter image description here" /></a></p>
<ol>
<li>How does the Encoder pass the attention matrix calculated from the input to the Decoder? What I understood is that it only passes the Key & Value matrices to the decoder</li>
<li>Where do we get the shifted output for the decoder from while testing?</li>
<li>As it is able to output just one token at a time, is the transformer run for multiple iterations to generate the output sequence? If yes, how do we know when to stop?</li>
<li>Are the weights of the Multi-Head Attention in the decoder trained, given that it already gets Q, K & V from the encoder & the masked multi-head attention?</li>
</ol>
<p>Any help is appreciated</p>
|
<ol>
<li><p>The Encoder passes its output (the representations it computes from the input). This encoder output is used as the 'Key' & 'Value' input for the Decoder's Multi-Head Attention module</p>
</li>
<li><p>Why would we need the shifted output for testing? It is not required: when testing, we predict from token one, for which the 'BOS' (Beginning Of Sequence) token serves as the past token, so the input is automatically left-shifted</p>
</li>
<li><p>Yes, we need to iterate over & over predicting one token at a time. If the predicted token is 'EOS' (End Of Sequence), we stop</p>
</li>
<li><p>This isn't fully spelled out in the paper, but the decoder's multi-head attention does have its own trained projection weights for Q, K & V, even though its K & V inputs come from the encoder and its Q comes from the masked multi-head attention</p>
</li>
</ol>
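<p>Regarding point 3, here is a minimal sketch of the greedy decoding loop, assuming a hypothetical <code>model(src, tgt)</code> that returns next-token logits for each target position:</p>
<pre><code>import torch

def greedy_decode(model, src, bos_id, eos_id, max_len=50):
    tgt = torch.tensor([[bos_id]])          # start with BOS as the "shifted" output
    for _ in range(max_len):
        logits = model(src, tgt)            # shape: (1, tgt_len, vocab)
        next_id = logits[0, -1].argmax().item()
        tgt = torch.cat([tgt, torch.tensor([[next_id]])], dim=1)
        if next_id == eos_id:               # stop once EOS is produced
            break
    return tgt
</code></pre>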
|
machine-learning|nlp|artificial-intelligence|huggingface-transformers|attention-model
| 0
|
8,802
| 66,415,018
|
Rearrange dataframe in pandas - move sets of rows into new column
|
<p>I have a dataframe file <code>df</code> that looks like:</p>
<p><a href="https://i.stack.imgur.com/cNX4a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cNX4a.png" alt="input dataframe" /></a></p>
<p>which I am trying to convert into:</p>
<p><a href="https://i.stack.imgur.com/XiUUo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XiUUo.png" alt="enter image description here" /></a></p>
<p>My attempt was to use the numpy.reshape function:</p>
<pre class="lang-py prettyprint-override"><code>df2 = np.reshape(df, (10, 2))
</code></pre>
<p>But this gives <code>ValueError: cannot reshape array of size 24 into shape (9,2)</code>, as the indices change and change the size of the dataframe in the rearrangement. Is there a simple Python function to make such a rearrangement in a dataframe?</p>
|
<p>Do you need the column "Unnamed: 0"? If not, a simple way would be to do:</p>
<pre><code>df = pd.DataFrame({0:df[range(3)].values[[0,2,4]].flatten(),1:df[range(3)].values[[1,3,5]].flatten()}).T
</code></pre>
<p>if you need the column 'Unnamed: 0', you could add the following line:</p>
<pre><code>df['Unnamed: 0'] = [0,1]
</code></pre>
|
python|pandas|dataframe
| 0
|
8,803
| 57,346,628
|
Most elegant way to select rows by a string value
|
<p>Is there a more elegant way to write this code:</p>
<pre><code>df['exchange'] = frame.loc[frame['Description'].str.lower().str.contains("on wallet exchange")]
</code></pre>
<p>The .str twice seems ugly.</p>
<p>When I iterate over the entire dataframe row by row, I can use:</p>
<pre><code>if "on wallet exchange" in row['Description'].casefold():
</code></pre>
|
<p>Use <code>case=False</code>, and also add <code>na=False</code> to be safe, so that if the series contains numerics (@jezrael - thank you) or NaN, those rows are evaluated as False:</p>
<pre><code>frame.loc[frame['Description'].str.contains("on wallet exchange",case=False,na=False)]
</code></pre>
|
python|pandas
| 6
|
8,804
| 57,366,684
|
Import file with pandas to Jupyter Notebook running on iPad with the app carnets
|
<p>I’m running a Jupyter notebook on my iPad with an app called Carnets (vs. creating a remote server). I have been attempting to import a dataset into the notebook to create a pandas dataframe.</p>
<p>So the dataset I’m trying to use is from Kaggle. I first tried uploading it to GitHub LFS. I was able to successfully use pd.read_csv(‘url’), but I only got a table of the metadata vs. the actual data set. I’m not certain I set up my LFS correctly, but I also haven’t been able to change it.</p>
<p>Next I tried using Kaggle’s API, but since I’m on an iPad I am unable to put the certificates in the required location.</p>
<p>I also attempted to use the local file path on my iPad, but I’m not familiar with iOS file path conventions, so either I got it completely wrong and/or, because of the way apps are packaged, I can’t access the file path as a user input?</p>
<p>I recognize the root of the problem is doing this on an iPad Pro (1st model), but my computer is very old and stationary. I don’t have the funds to update and am stubborn enough to attempt this. I’ve used Juno semi-successfully in the recent past, but had problems with the app crashing, so I wanted to try something else. I also don’t want to rely on Kaggle’s website for future projects that are not based on data from Kaggle.</p>
<pre><code># GitHub attempt
import pandas as pd
url_dipole_moments = 'https://raw.githubusercontent.com/ncotanche/PredictingMolecularProperties/master/RawData/dipole_moments.csv'
df_dipole_moments = pd.read_csv(url_dipole_moments)
df_dipole_moments.head()
# Local file attempt
import pandas as pd
df_dipole_moments = pd.read_csv('../RadData/dipole_moments.csv')
df_dipole_moments.head()
</code></pre>
<p>With the GitHub attempt, I received a table with version, oid, and size, which I recognized as the metadata(?) for the file.</p>
<p>With the local file attempt, I received a FileNotFound error.</p>
|
<p>I’m the author of the app. The issue is related to iOS limitations on file access. </p>
<p>Carnets can access all files in the App directory. Since you have issues, I guess the notebook is not in the App directory, but in another App. By opening the notebook, you granted Carnets access to the notebook, but not to other files in the directory. </p>
<p>The solution is to grant Carnets access to the directory that contains both the notebook and the dataset: at the file open screen, navigate to the directory immediately above this one, then click “Select” (top right corner), then click on the directory, then click “Open”. </p>
<p>You will access a screen showing the content of that directory. Navigate to your notebook, and it should be able to access the dataset. </p>
|
python|pandas|jupyter-notebook|kaggle
| 5
|
8,805
| 73,064,148
|
Why does a derived MultiIndex retain unused level data from the original index in pandas?
|
<p>When a filtered <code>MultiIndex</code> is derived from a larger <code>MultiIndex</code> instance, it appears that there's a discrepancy between the level values returned by <code>MultiIndex.levels</code> and <code>MultiIndex.get_level_values()</code>:</p>
<pre><code>import pandas as pd
times = pd.date_range('2012-01-01', periods=365, freq='1d')
colors = ['red', 'blue']
index = pd.MultiIndex.from_product([times, colors])
new_index = index[index.get_level_values(0) > '20120701']
new_index.get_level_values(0) # includes only dates starting from 2012-07-02
</code></pre>
<p>results in:</p>
<pre><code>DatetimeIndex(['2012-07-02', '2012-07-02', '2012-07-03', '2012-07-03',
'2012-07-04', '2012-07-04', '2012-07-05', '2012-07-05',
'2012-07-06', '2012-07-06',
...
'2012-12-26', '2012-12-26', '2012-12-27', '2012-12-27',
'2012-12-28', '2012-12-28', '2012-12-29', '2012-12-29',
'2012-12-30', '2012-12-30'],
dtype='datetime64[ns]', length=364, freq=None)
</code></pre>
<p>but <code>levels[0]</code> contains dates starting in Jan 2012:</p>
<pre><code>new_index.levels[0] # includes all dates in the original index starting 2012-01-01
</code></pre>
<p>will yield:</p>
<pre><code>DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03', '2012-01-04',
'2012-01-05', '2012-01-06', '2012-01-07', '2012-01-08',
'2012-01-09', '2012-01-10',
...
'2012-12-21', '2012-12-22', '2012-12-23', '2012-12-24',
'2012-12-25', '2012-12-26', '2012-12-27', '2012-12-28',
'2012-12-29', '2012-12-30'],
dtype='datetime64[ns]', length=365, freq='D')
</code></pre>
<p>I would expect <code>.get_level_values()</code> to be consistent with <code>.levels</code>, so I'm wondering if there's a good reason for the derived index to retain the values of the original instance?</p>
<p>PS: as pointed out in <a href="https://stackoverflow.com/questions/73064148/pandas-multiindex-get-level-values-and-multiindex-levels-not-consistent-in/73064243#73064243">jezrael's</a> answer, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.remove_unused_levels.html" rel="nofollow noreferrer">MultiIndex.remove_unused_levels</a> will remove the redundant dates</p>
|
<p>If you use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.remove_unused_levels.html" rel="nofollow noreferrer"><code>MultiIndex.remove_unused_levels</code></a> then you get:</p>
<pre><code>new_index = index[index.get_level_values(0) > '20120701'].remove_unused_levels()
</code></pre>
<p>All values of the first level via <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.get_level_values.html" rel="nofollow noreferrer"><code>MultiIndex.get_level_values</code></a>:</p>
<pre><code>print (new_index.get_level_values(0))
DatetimeIndex(['2012-07-02', '2012-07-02', '2012-07-03', '2012-07-03',
'2012-07-04', '2012-07-04', '2012-07-05', '2012-07-05',
'2012-07-06', '2012-07-06',
...
'2012-12-26', '2012-12-26', '2012-12-27', '2012-12-27',
'2012-12-28', '2012-12-28', '2012-12-29', '2012-12-29',
'2012-12-30', '2012-12-30'],
dtype='datetime64[ns]', length=364, freq=None)
</code></pre>
<p>All unique values in the first level:</p>
<pre><code>print (new_index.levels[0])
DatetimeIndex(['2012-07-02', '2012-07-03', '2012-07-04', '2012-07-05',
'2012-07-06', '2012-07-07', '2012-07-08', '2012-07-09',
'2012-07-10', '2012-07-11',
...
'2012-12-21', '2012-12-22', '2012-12-23', '2012-12-24',
'2012-12-25', '2012-12-26', '2012-12-27', '2012-12-28',
'2012-12-29', '2012-12-30'],
dtype='datetime64[ns]', length=182, freq='D')
</code></pre>
|
pandas
| 1
|
8,806
| 70,544,368
|
Setting (Dynamic) String Equal to Output of a Function
|
<p>Is it possible to set a (dynamic) string equal to the output of a function?</p>
<p>Please see the picture below... unfortunately, I'm not able to get it to work using the method shown there.</p>
<p><a href="https://i.stack.imgur.com/ZgRjI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZgRjI.png" alt="enter image description here" /></a></p>
|
<p>The code <em>is</em> actually working, but you need to make your <code>simulator()</code> function <strong>return</strong> the dataframe it makes. It <em>prints</em> it, but it doesn't return it:</p>
<p>Change the code for your <code>simulator</code> function to this:</p>
<pre><code>import numpy as np
import pandas as pd

def simulator():
    df = pd.DataFrame(np.random.randint(0,9,size=(4, 3)), columns=list('ABC'))
    return df  # instead of print(df)
</code></pre>
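<p>With the <code>return</code> in place, assigning the output works as expected (the variable name here is just an example):</p>
<pre><code>my_df = simulator()   # my_df now holds the returned DataFrame
print(my_df)
</code></pre>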
|
python|pandas|string|dataframe|function
| 1
|
8,807
| 70,392,276
|
How to compare rows within a dataframe column and check if value is changing?
|
<p>I have a data frame as seen:</p>
<p><a href="https://i.stack.imgur.com/jAZB8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAZB8.png" alt="enter image description here" /></a></p>
<p>How can I check when the result changes from 'NO' to 'OK', and if it does, populate the next column as True? The results should look like this:</p>
<p><a href="https://i.stack.imgur.com/F74Ud.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F74Ud.png" alt="enter image description here" /></a></p>
|
<p>Try:</p>
<pre><code>df['check'] = df['Result'].eq('OK') & df['Result'].shift().eq('NO')
print(df)
# Output:
Result check
0 NO False
1 OK True
2 OK False
3 NO False
4 OK True
5 OK False
6 OK False
7 NO False
</code></pre>
<p>To display correctly:</p>
<pre><code>df['check'] = df['check'].replace({True: 'TRUE', False: ''})
print(df)
# Output:
Result check
0 NO
1 OK TRUE
2 OK
3 NO
4 OK TRUE
5 OK
6 OK
7 NO
</code></pre>
|
python|pandas|dataframe
| 1
|
8,808
| 51,333,491
|
pandas groupby NA behavior not consistent with R?
|
<p>The pandas documentation says: </p>
<p>"NA groups in GroupBy are automatically excluded. This behavior is consistent with R, for example"</p>
<p>I understand the documentation but not how this is consistent with R? Here's an example using a dataframe x with tidyverse. </p>
<pre><code>> x
c b a
1 NA 1 NA
2 NA 2 NA
3 NA 3 1
4 3 4 2
> x %>% group_by(c, a) %>% summarise(x = mean(b))
Source: local data frame [3 x 3]
Groups: c [?]
c a x
<dbl> <dbl> <dbl>
1 3 2 4.0
2 NA 1 3.0
3 NA NA 1.5
> x %>% group_by(c) %>% summarise(x = mean(b))
# A tibble: 2 × 2
c x
<dbl> <dbl>
1 3 4
2 NA 2
</code></pre>
|
<p><a href="https://github.com/pwwang/datar" rel="nofollow noreferrer"><code>datar</code></a> tries to follow the API designs of <code>tidyrverse</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> from datar.all import f, c, tibble, group_by, summarise, mean, NA
>>> x = tibble(c=[NA,NA,NA,3], b=[1,2,3,4], a=[NA,NA,1,2])
>>> x
c b a
0 NaN 1 NaN
1 NaN 2 NaN
2 NaN 3 1.0
3 3.0 4 2.0
>>> x >> group_by(f.c, f.a) >> summarise(x=mean(f.b))
[2021-06-08 12:55:47][datar][ INFO] `summarise()` has grouped output by ['c'] (override with `_groups
` argument)
c a x
0 3.0 2.0 4.0
1 NaN 1.0 3.0
2 NaN NaN 1.5
[Groups: ['c'] (n=2)]
>>> x >> group_by(f.c) >> summarise(x=mean(f.b))
c x
0 3.0 4.0
1 NaN 2.0
</code></pre>
<p>I am the author of the package. Feel free to submit issues if you have any questions.</p>
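<p>For reference, plain pandas can also keep the NA groups instead of dropping them by passing <code>dropna=False</code> to <code>groupby</code> (available since pandas 1.1); a minimal sketch with the same data:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

x = pd.DataFrame({'c': [np.nan, np.nan, np.nan, 3],
                  'b': [1, 2, 3, 4],
                  'a': [np.nan, np.nan, 1, 2]})
# keep NA groups instead of excluding them (pandas >= 1.1)
print(x.groupby(['c', 'a'], dropna=False)['b'].mean())
print(x.groupby('c', dropna=False)['b'].mean())
</code></pre>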
|
r|pandas
| 0
|
8,809
| 71,024,914
|
install python huggingface datasets package without internet connection from python environment
|
<p>I don't have access to an internet connection from my Python environment. I would like to install this <a href="https://pypi.org/project/datasets/" rel="nofollow noreferrer">library</a>.</p>
<p>I also noticed this <a href="https://pypi.org/project/datasets/#files" rel="nofollow noreferrer">page</a> which has the files required for the package. I installed the package by copying that file to my Python environment and then running the code below:</p>
<pre><code>pip install 'datasets_package/datasets-1.18.3.tar.gz'
Successfully installed datasets-1.18.3 dill-0.3.4 fsspec-2022.1.0 multiprocess-0.70.12.2 pyarrow-6.0.1 xxhash-2.0.2
</code></pre>
<p>But when I try the below code</p>
<pre><code>import datasets
datasets.load_dataset('imdb', split =['train', 'test'])
</code></pre>
<p>it throws error
<code>ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.18.3/datasets/imdb/imdb.py (error 403) </code></p>
<p>I can access the file <code>https://raw.githubusercontent.com/huggingface/datasets/1.18.3/datasets/imdb/imdb.py</code> from outside my Python environment.</p>
<p>What files should I copy and what other code changes should I make so that this line will work: <code>datasets.load_dataset('imdb', split =['train', 'test']) </code>?</p>
<p>#Update 1=====================</p>
<p>I followed the suggestions below and copied the following files into my Python environment:</p>
<pre><code>os.listdir('huggingface_imdb_data/')
['dummy_data.zip',
'dataset_infos.json',
'imdb.py',
'README.md',
'aclImdb_v1.tar.gz']
</code></pre>
<p>The last file comes from <code>http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz</code> and the other files come from <code>github.com/huggingface/datasets/tree/master/datasets/imdb</code></p>
<p>Then I tried</p>
<pre><code>import datasets
#datasets.load_dataset('imdb', split =['train', 'test'])
datasets.load_dataset('huggingface_imdb_data/aclImdb_v1.tar.gz')
</code></pre>
<p>but I get the error below :(</p>
<pre><code>HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/huggingface_imdb_data/aclImdb_v1.tar.gz?full=true
</code></pre>
<p>I also tried</p>
<pre><code>datasets.load_from_disk('huggingface_imdb_data/aclImdb_v1.tar.gz')
</code></pre>
<p>but get the error</p>
<pre><code>FileNotFoundError: Directory huggingface_imdb_data/aclImdb_v1.tar.gz is neither a dataset directory nor a dataset dict directory.
</code></pre>
|
<p>Unfortunately, Method 1 does not work because it is not yet supported: <a href="https://github.com/huggingface/datasets/issues/761" rel="nofollow noreferrer">https://github.com/huggingface/datasets/issues/761</a></p>
<blockquote>
<p><strong>Method 1.:</strong> You should use the <code>data_files</code> parameter of the
<code>datasets.load_dataset</code> function, and provide the path to your local
datafile. See the documentation:
<a href="https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset" rel="nofollow noreferrer">https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset</a></p>
<pre><code>datasets.load_dataset
Parameters
...
data_dir (str, optional) – Defining the data_dir of the dataset configuration.
data_files (str or Sequence or Mapping, optional) – Path(s) to source data file(s).
...
</code></pre>
<p>Update 1.: You should use something like this:</p>
<pre><code>datasets.load_dataset('imdb', split =['train', 'test'], data_files='huggingface_imdb_data/aclImdb_v1.tar.gz')
</code></pre>
</blockquote>
<p><strong>Method 2.:</strong></p>
<p>Or check out this discussion: <a href="https://github.com/huggingface/datasets/issues/824#issuecomment-758358089" rel="nofollow noreferrer">https://github.com/huggingface/datasets/issues/824#issuecomment-758358089</a></p>
<pre><code>>here is my way to load a dataset offline, but it requires an online machine
(online machine)
import datasets
data = datasets.load_dataset(...)
data.save_to_disk('./saved_imdb')
>copy the './saved_imdb' dir to the offline machine
(offline machine)
import datasets
data = datasets.load_from_disk('./saved_imdb')
</code></pre>
|
python|package|huggingface-transformers|huggingface-datasets
| 2
|
8,810
| 70,770,746
|
Python parallel apply on dataframe
|
<p>I have this part of code in my application.
What I want is to iterate over each row in my data frame (pandas) and set a column to the result of a function.</p>
<p>I tried to implement it with multiprocessing, but I want to see if there is any faster and easier-to-implement way to do it.
Is there any simple way to run this part in parallel?</p>
<pre><code>def _format(data: pd.DataFrame, context: pd.DataFrame):
    data['context'] = data.apply(lambda row: get_context_value(context, row), axis=1)
</code></pre>
<p>The data frame I work with is not too large (10,000 - 100,000 rows) and the function that evaluates the value to assign to the column takes around 250ms - 500ms for one row. But the whole process for a data frame of that size takes too much time.</p>
<p>Thanks</p>
|
<p>I have a project which it is done there: <a href="https://github.com/mjafari98/dm-classification/blob/main/inference.py" rel="nofollow noreferrer">https://github.com/mjafari98/dm-classification/blob/main/inference.py</a></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from functools import partial
from multiprocessing import Pool
import numpy as np
def parallelize(data, func, num_of_processes=8):
data_split = np.array_split(data, num_of_processes)
pool = Pool(num_of_processes)
data = pd.concat(pool.map(func, data_split))
pool.close()
pool.join()
return data
def run_on_subset(func, data_subset):
return data_subset.apply(func, axis=1)
def parallelize_on_rows(data, func, num_of_processes=8):
return parallelize(data, partial(run_on_subset, func), num_of_processes)
def a_function(row):
...do something ...
return row
df = ...somedf...
new_df = parallelize_on_rows(df, a_function)
</code></pre>
|
python|pandas|parallel-processing
| 0
|
8,811
| 70,751,495
|
How do you group pandas dataframe rows based on permutation of booleans?
|
<p>Imagine there is a pandas dataframe with five columns and n rows. Each column holds a boolean value.</p>
<p>Maths says there should be 32 permutations of boolean values.</p>
<p>How do I group them by the permutation of boolean values associated with each row so I can get a count on each group or return other properties?</p>
<p>For example, how do I find out how many rows associated with TTTTTs or TTTTFs or whatever permutation I'm interested in?</p>
|
<p>There are a couple of ways of doing this. One way would be to just group by all the columns you care about at once. If you want the counts, you can call the <code>GroupBy.count</code> method on the result:</p>
<pre><code>df.groupby(['c1', 'c2', 'c3', 'c4', 'c5']).count()
</code></pre>
<p>Or more simply, if all the columns are of interest:</p>
<pre><code>df.groupby(list(df.columns)).count()
</code></pre>
<p>You could also convert the booleans to a number, and group on that:</p>
<pre><code>df['Num'] = (df.to_numpy().astype(int) << [4, 3, 2, 1, 0]).sum(1)
df.groupby('Num').count()
</code></pre>
<p>A more general solution that does not require creating a new column could use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a></p>
<pre><code>names = ['c1', 'c2', 'c3', 'c4', 'c5']
pd.Series((df[names].to_numpy().astype(int) << np.arange(len(names))).sum(1)).value_counts()
</code></pre>
<p>Which you can very conveniently rewrite as</p>
<pre><code>pd.Series.value_counts((df[names].to_numpy().astype(int) << np.arange(len(names))).sum(1))
</code></pre>
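<p>On recent pandas versions (1.1+), <code>DataFrame.value_counts</code> also counts unique row combinations directly; a small sketch on a hypothetical boolean frame:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(100, 5) > 0.5,
                  columns=['c1', 'c2', 'c3', 'c4', 'c5'])
# count rows per unique True/False combination
print(df.value_counts())
# count rows matching one specific combination, e.g. all True
print(df.all(axis=1).sum())
</code></pre>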
|
python|pandas|permutation
| 1
|
8,812
| 51,733,136
|
Pandas: Reading excel files when the first row is NOT the column name Excel Files
|
<p>I am using pandas to read an Excel file. It doesn't have column names, but pandas keeps reading the first row as the column names.</p>
<p>Following is the excel file that is being read. </p>
<pre><code>data1 0.994676
data2 0.994588
data3 0.99488
data4 0.994483
data5 0.994312
data6 0.993823
data7 0.993575
data8 0.994231
data9 0.993838
data10 0.994007
data11 0.994328
data12 0.993503
data13 0.99342
data14 0.992729
data15 0.993013
data16 0.993049
data17 0.993133
data18 0.99262
</code></pre>
<p>I'm reading the 2nd column using the following code:</p>
<pre><code>import pandas as pd

df = pd.ExcelFile('C:/Users/JohnDoe/Desktop/080718_output.xlsx', header=None, index_col=False).parse('Data_sheet')
y=df.iloc[0:17,1]
</code></pre>
<p>The following is the y.</p>
<pre><code>In[38]:y
Out[38]:
0 0.994588
1 0.994880
2 0.994483
3 0.994312
4 0.993823
5 0.993575
6 0.994231
7 0.993838
8 0.994007
9 0.994328
10 0.993503
11 0.993420
12 0.992729
13 0.993013
14 0.993049
15 0.993133
16 0.992620
Name: 0.994676, dtype: float64
</code></pre>
<p>It skips the first data row because the first row is being used as the column name.
Any idea how I can fix this?</p>
<p>Edit: changed 'header=False' to 'header=None'. Both cases give the same outcome.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="noreferrer"><code>read_excel</code></a> with <code>header=None</code> so the columns get a default <code>RangeIndex</code>:</p>
<pre><code>df = pd.read_excel('file.xlsx',
sheet_name ='Data_sheet',
header=None,
index_col=False)
</code></pre>
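<p>With <code>header=None</code> the first row is kept as data, so selecting the second column now includes all 18 values (a quick check against the sheet above):</p>
<pre><code>y = df.iloc[:, 1]
print(len(y))  # 18, including the first value 0.994676
</code></pre>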
|
python-3.x|pandas
| 15
|
8,813
| 36,043,717
|
Theano: Operate on nonzero elements of sparse matrix
|
<p>I'm trying to take the <code>exp</code> of nonzero elements in a sparse theano variable. I have the current code:</p>
<pre><code>A = T.matrix("Some matrix with many zeros")
A_sparse = theano.sparse.csc_from_dense(A)
</code></pre>
<p>I'm trying to do something that's equivalent to the following numpy syntax:</p>
<pre><code>mask = (A_sparse != 0)
A_sparse[mask] = np.exp(A_sparse[mask])
</code></pre>
<p>but Theano doesn't support <code>!=</code> masks yet. (And <code>(A_sparse > 0) | (A_sparse < 0)</code> doesn't seem to work either.)</p>
<p>How can I achieve this?</p>
|
<p>The support for sparse matrices in Theano is incomplete, so some things are tricky to achieve. You can use <code>theano.sparse.structured_exp(A_sparse)</code> in that particular case, but I try to answer your question more generally below.</p>
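<p>For the specific case in the question, that looks like this (a minimal sketch built on the question's setup):</p>
<pre><code>import theano
import theano.sparse
import theano.tensor as T

A = T.matrix("Some matrix with many zeros")
A_sparse = theano.sparse.csc_from_dense(A)
# exp applied only to the stored (nonzero) elements; zeros stay zero
A_exp = theano.sparse.structured_exp(A_sparse)
</code></pre>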
<p><strong>Comparison</strong></p>
<p>In Theano one would normally use the comparison operators described here: <a href="http://deeplearning.net/software/theano/library/tensor/basic.html" rel="nofollow">http://deeplearning.net/software/theano/library/tensor/basic.html</a></p>
<p>For example, instead of <code>A != 0</code>, one would write <code>T.neq(A, 0)</code>. With sparse matrices one has to use the comparison operators in <code>theano.sparse</code>. Both operators have to be sparse matrices, and the result is also a sparse matrix:</p>
<pre><code>mask = theano.sparse.neq(A_sparse, theano.sparse.sp_zeros_like(A_sparse))
</code></pre>
<p><strong>Modifying a Subtensor</strong></p>
<p>In order to modify part of a matrix, one can use <code>theano.tensor.set_subtensor</code>. With dense matrices this would work:</p>
<pre><code>indices = mask.nonzero()
A = T.set_subtensor(A[indices], T.exp(A[indices]))
</code></pre>
<p>Notice that Theano doesn't have a separated boolean type—the mask is zeros and ones—so <code>nonzero()</code> has to be called first to take the indices of the nonzero elements. Furthermore, this is not implemented for sparse matrices. </p>
<p><strong>Operating on Nonzero Sparse Elements</strong></p>
<p>Theano provides sparse operations that are said to be structured and operate only on the nonzero elements. See:
<a href="http://deeplearning.net/software/theano/tutorial/sparse.html#structured-operation" rel="nofollow">http://deeplearning.net/software/theano/tutorial/sparse.html#structured-operation</a></p>
<p>More precisely, they operate on the <code>data</code> attribute of a sparse matrix, independent of the indices of the elements. Such operations are straightforward to implement. Note that the structured operations will operate on all the values in the <code>data</code> array, also those that are explicitly set to zero.</p>
|
python|numpy|theano
| 1
|
8,814
| 37,304,309
|
Type error on my fit function using gaussian process in scikit
|
<p>Ok, so I have been working on some code that takes an image that represents sparse point data from Houdini and interpolates it into a usable complete map. This has been working really well, except that now I am running into some memory issues. I have narrowed down the memory problem in the kriging algorithm I am using to the predict() step. I am trying to use the batch_size parameter to limit the memory consumption, but it is being a pain. I am getting this error:</p>
<pre><code>Traceback (most recent call last):
File "e:\mapInterpolation.py", line 88, in <module>
prepareImage(file, interpType="kriging")
File "e:\mapInterpolation.py", line 61, in prepareImage
rInterpInt = kriging(r).astype(int)
File "e:\mapInterpolation.py", line 36, in kriging
interpolated = gp.predict(rr_cc_as_cols, batch_size=a).reshape(data.shape)
File "e:\miniconda\lib\site-packages\sklearn\gaussian_process\gaussian_process
.py", line 522, in predict
for k in range(max(1, n_eval / batch_size)):
TypeError: 'float' object cannot be interpreted as an integer
</code></pre>
<p>I have triple checked the type that I am passing to the batch_size parameter, and it is an int, not a float. I really need this to work so I can get an output to use in my final project for my Master's degree that is due in a few weeks. I am including the code below. Also, if anyone has any suggestions on how to make the radial calculation more memory efficient, I am more than open.</p>
<pre><code>import numpy as np

def parseToM(array):
    print("parsing to matrix")
    r = np.linspace(0, 1, array.shape[0])
    c = np.linspace(0, 1, array.shape[1])
    rr, cc = np.meshgrid(r, c)
    vals = ~np.isnan(array)
    return {"rr":rr, "cc":cc, "vals":vals}

def radial(data):
    import scipy.interpolate as interpolate
    hold = parseToM(data)
    rr, cc, vals = hold["rr"], hold["cc"], hold["vals"]
    print("starting RBF interpolation")
    f = interpolate.Rbf(rr[vals], cc[vals], data[vals], function='linear')
    print("storing data")
    interpolated = f(rr, cc)
    return interpolated

def kriging(data):
    from sklearn.gaussian_process import GaussianProcess
    hold = parseToM(data)
    rr, cc, vals = hold["rr"], hold["cc"], hold["vals"]
    print("starting gaussian process")
    gp = GaussianProcess(theta0=0.1, thetaL=.001, thetaU=1., nugget=0.1, storage_mode="light")
    print("fitting data")
    gp.fit(X=np.column_stack([rr[vals],cc[vals]]), y=data[vals])
    print("flattening data")
    rr_cc_as_cols = np.column_stack([rr.flatten(), cc.flatten()])
    print("reshaping data")
    a = 1000
    print(type(a))
    interpolated = gp.predict(rr_cc_as_cols, batch_size=a).reshape(data.shape)
    return interpolated

def prepareImage(filename, interpType="kriging"):
    print("opening image", filename)
    from PIL import Image
    f = Image.open(filename)
    image = f.load()
    image_size = f.size
    xmax = image_size[0]
    ymax = image_size[1]
    r = np.ndarray(shape=(xmax, ymax))
    g = np.ndarray(shape=(xmax, ymax))
    b = np.ndarray(shape=(xmax, ymax))
    print("processing image")
    for x in range(xmax):
        for y in range(ymax):
            value = image[x,y]
            if value[3] == 0:
                r[x,y], g[x,y], b[x,y] = [np.nan, np.nan, np.nan]
            else:
                r[x,y], g[x,y], b[x,y] = value[:3]
    print("interpolating")
    if interpType == "kriging":
        rInterpInt = kriging(r).astype(int)
        gInterpInt = kriging(g).astype(int)
        bInterpInt = kriging(b).astype(int)
    elif interpType == "radial":
        rInterpInt = radial(r).astype(int)
        gInterpInt = radial(g).astype(int)
        bInterpInt = radial(b).astype(int)
    print("reapplying pixels")
    for i in range(rInterpInt.size):
        if rInterpInt.item(i) < 0:
            rInterpInt.itemset(i, 0)
        if gInterpInt.item(i) < 0:
            gInterpInt.itemset(i, 0)
        if bInterpInt.item(i) < 0:
            bInterpInt.itemset(i, 0)
        x = i%xmax
        y = int(np.floor(i/ymax))
        newValue = (rInterpInt[x,y], gInterpInt[x,y], bInterpInt[x,y], 255)
        image[x,y] = newValue
    print("saving")
    savename = "E:\\"+filename[3:9]+"."+interpType+".png"
    f.save(savename, "PNG")
    print("done")

for i in range(1,10):
    file = r"E:\occ"+str(i*100)+".png"
    prepareImage(file, interpType="kriging")
</code></pre>
|
<p>This looks like a bug in scikit-learn in python 3 - the division <a href="https://github.com/scikit-learn/scikit-learn/blob/51a765a/sklearn/gaussian_process/gaussian_process.py#L510" rel="nofollow">here</a> results in a float in python 3, which <code>range</code> then rightly balks on.</p>
<p>There's a <a href="https://github.com/scikit-learn/scikit-learn/issues/6483" rel="nofollow">corresponding issue here</a>, but it seems to be a <code>wontfix</code>, citing that <code>GaussianProcess</code> is deprecated anyway.</p>
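<p>As a workaround (a sketch, not tested against your data), you can skip <code>batch_size</code> entirely and do the chunking yourself with <code>numpy.array_split</code>, which also keeps memory bounded:</p>
<pre><code>import numpy as np

# predict in manual chunks instead of relying on batch_size
chunks = [gp.predict(part) for part in np.array_split(rr_cc_as_cols, 100)]
interpolated = np.concatenate(chunks).reshape(data.shape)
</code></pre>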
|
python|numpy|scipy|scikit-learn|interpolation
| 3
|
8,815
| 41,831,214
|
What is SYCL 1.2?
|
<p>I am trying to install TensorFlow:</p>
<pre><code>Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]:
Invalid SYCL 1.2 library path. /usr/local/computecpp/lib/libComputeCpp.so cannot be found
</code></pre>
<p>What should I do? What is SYCL 1.2?</p>
|
<p><a href="https://www.khronos.org/sycl" rel="noreferrer">SYCL</a> is a C++ abstraction layer for OpenCL. TensorFlow's <a href="https://www.codeplay.com/portal/tensorflow%E2%84%A2-for-opencl%E2%84%A2-using-sycl%E2%84%A2" rel="noreferrer">experimental support</a> for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler.</p>
<p>As Yaroslav pointed out in <a href="https://stackoverflow.com/questions/41831214/what-is-sycl-1-2#comment70853775_41831214">his comment</a>, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of <code>./configure</code> asks about OpenCL support:</p>
<pre><code>Do you wish to build TensorFlow with OpenCL support? [y/N]
</code></pre>
<p>If you answer <code>N</code>, you will not have to supply a SYCL path.</p>
|
tensorflow
| 30
|
8,816
| 7,891,247
|
numpy: reorder array by specified values
|
<p>I have a matrix:</p>
<pre><code>A = [ [1,2],
[3,4],
[5,6] ]
</code></pre>
<p>and a vector of values:</p>
<pre><code>V = [4,6,2]
</code></pre>
<p>I would like to reorder A by 2nd column, using values from V. The result should
be:</p>
<pre><code>A = [ [3,4],
[5,6],
[1,2] ] # 2nd columns' values have the same order as V
</code></pre>
<p>How to do it?</p>
|
<p>First, we need to find the indices of the values in the second column of <code>A</code> that we'd need to match the order of <code>V</code>. In this case, that's <code>[1,2,0]</code>. Once we have those, we can just use numpy's "fancy" indexing to do the rest.</p>
<p>So, you might do something like this:</p>
<pre><code>import numpy as np
A = np.arange(6).reshape((3,2)) + 1
V = [4,6,2]
column = A[:,1].tolist()
order = [column.index(item) for item in V]
print A[order,:]
</code></pre>
<p>If you want to avoid python lists entirely, then you can do something like what's shown below. It's hackish, and there may be a better way, though...</p>
<p>We can abuse <code>numpy.unique</code> to do this... What I'm doing here is depending on a particular implementation detail (<code>unique</code> seems to start at the end of the array) which could change at any time... That's what makes it an ugly hack.</p>
<pre><code>import numpy as np
A = np.arange(6).reshape((3,2)) + 1
V = np.array([4,6,2])
vals, order = np.unique(np.hstack((A[:,1],V)), return_inverse=True)
order = order[-V.size:]
print A[order,:]
</code></pre>
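<p>A pure-numpy alternative that avoids both the Python list and the <code>unique</code> hack uses <code>argsort</code> with <code>searchsorted</code> (a sketch, assuming every value in <code>V</code> occurs exactly once in the column):</p>
<pre><code>import numpy as np

A = np.arange(6).reshape((3, 2)) + 1
V = np.array([4, 6, 2])

sorter = np.argsort(A[:, 1])                                 # order that sorts the 2nd column
order = sorter[np.searchsorted(A[:, 1], V, sorter=sorter)]   # row index of each value in V
print(A[order, :])
</code></pre>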
|
python|numpy
| 7
|
8,817
| 37,646,501
|
How can I slice a dataframe by timestamp, when timestamp isn't classified as index?
|
<p>How can I split my pandas dataframe by using the timestamp on it?</p>
<p>I got the following prices when I call <code>df30m</code>:</p>
<pre><code> Timestamp Open High Low Close Volume
0 2016-05-01 19:30:00 449.80 450.13 449.80 449.90 74.1760
1 2016-05-01 20:00:00 449.90 450.27 449.90 450.07 63.5840
2 2016-05-01 20:30:00 450.12 451.00 450.02 450.51 64.1080
3 2016-05-01 21:00:00 450.51 452.05 450.50 451.22 75.7390
4 2016-05-01 21:30:00 451.21 451.64 450.81 450.87 71.1190
5 2016-05-01 22:00:00 450.87 452.05 450.87 451.07 73.8430
6 2016-05-01 22:30:00 451.09 451.70 450.91 450.91 68.1490
7 2016-05-01 23:00:00 450.91 450.98 449.97 450.61 84.5430
8 2016-05-01 23:30:00 450.61 451.50 450.55 451.45 111.2370
9 2016-05-02 00:00:00 451.47 452.31 450.69 451.19 190.0750
10 2016-05-02 00:30:00 451.20 451.68 450.45 450.82 186.0930
11 2016-05-02 01:00:00 450.83 451.64 450.65 450.73 112.4630
12 2016-05-02 01:30:00 450.73 451.10 450.31 450.56 137.7530
13 2016-05-02 02:00:00 450.56 452.01 449.98 450.27 151.6140
14 2016-05-02 02:30:00 450.27 451.30 450.23 451.11 99.5490
15 2016-05-02 03:00:00 451.29 451.29 450.17 450.33 178.9860
16 2016-05-02 03:30:00 450.44 451.20 450.44 450.75 65.1480
17 2016-05-02 04:00:00 450.79 451.20 450.75 451.00 78.0430
18 2016-05-02 04:30:00 451.00 451.11 450.85 451.11 64.7250
19 2016-05-02 05:00:00 451.11 451.64 451.00 451.12 73.4840
20 2016-05-02 05:30:00 451.12 451.83 450.67 451.33 94.1950
21 2016-05-02 06:00:00 451.35 451.37 450.17 450.18 227.7480
22 2016-05-02 06:30:00 450.18 450.43 450.17 450.17 83.0270
23 2016-05-02 07:00:00 450.17 450.43 448.90 449.41 170.4950
24 2016-05-02 07:30:00 449.38 450.00 448.56 448.56 243.0420
25 2016-05-02 08:00:00 448.67 448.67 446.21 448.00 525.7090
26 2016-05-02 08:30:00 448.12 448.49 445.00 445.00 673.5810
27 2016-05-02 09:00:00 445.00 445.51 440.11 444.20 1392.9049
28 2016-05-02 09:30:00 444.24 444.36 440.11 442.00 438.6860
29 2016-05-02 10:00:00 441.91 443.20 440.05 442.24 400.5850
... ... ... ... ... ... ...
1651 2016-06-05 05:00:00 578.74 579.00 577.92 578.39 93.6980
1652 2016-06-05 05:30:00 578.40 578.48 574.52 575.26 98.1580
1653 2016-06-05 06:00:00 575.24 576.02 572.47 574.06 126.8620
1654 2016-06-05 06:30:00 574.06 576.35 574.06 576.34 125.4120
1655 2016-06-05 07:00:00 576.34 576.34 574.73 575.83 34.8070
1656 2016-06-05 07:30:00 575.84 576.27 574.91 575.58 74.8180
1657 2016-06-05 08:00:00 575.58 578.57 575.58 578.36 123.2560
1658 2016-06-05 08:30:00 578.23 578.47 576.18 577.25 43.6590
1659 2016-06-05 09:00:00 577.20 578.85 576.70 577.27 95.3900
1660 2016-06-05 09:30:00 577.36 578.18 576.70 576.70 51.0250
1661 2016-06-05 10:00:00 576.70 576.70 574.55 575.39 101.0590
1662 2016-06-05 10:30:00 575.41 576.44 575.18 576.44 86.4340
1663 2016-06-05 11:00:00 576.50 577.89 576.50 577.80 113.0600
1664 2016-06-05 11:30:00 577.80 578.10 576.03 576.98 57.5050
1665 2016-06-05 12:00:00 576.98 577.55 576.59 577.54 56.1070
1666 2016-06-05 12:30:00 577.54 583.00 570.93 572.82 872.8200
1667 2016-06-05 13:00:00 572.94 573.19 569.64 572.50 310.0020
1668 2016-06-05 13:30:00 572.50 574.37 572.50 574.09 59.3410
1669 2016-06-05 14:00:00 574.09 574.19 571.51 572.98 155.4310
1670 2016-06-05 14:30:00 572.98 573.57 572.02 573.47 76.9270
1671 2016-06-05 15:00:00 573.62 575.10 572.97 573.37 59.1430
1672 2016-06-05 15:30:00 573.37 574.39 573.37 574.38 77.3270
1673 2016-06-05 16:00:00 574.39 575.59 574.38 575.59 52.0150
1674 2016-06-05 16:30:00 575.00 575.59 574.50 575.00 66.9300
1675 2016-06-05 17:00:00 575.00 576.83 574.38 576.60 50.2990
1676 2016-06-05 17:30:00 576.60 577.50 575.50 576.86 104.5200
1677 2016-06-05 18:00:00 576.86 577.21 575.44 575.80 55.3270
1678 2016-06-05 18:30:00 575.77 575.80 574.52 574.77 78.7760
1679 2016-06-05 19:00:00 574.73 575.18 572.52 574.47 126.4300
1680 2016-06-05 19:30:00 574.49 574.87 573.80 574.32 10.4930
</code></pre>
<p>As you can see, it contains the last 35 days grouped by intervals of 30 min.</p>
<p>I want to manipulate this price history in different time windows.</p>
<p>So, as a beginner example, I would like to fetch only the info from the last 1 day.</p>
<p><strong>How can I filter this dataframe to show the info from the last 1 day?</strong></p>
<p>This is what I've tried: </p>
<pre><code>import datetime
d0 = datetime.datetime.today()
d1 = datetime.datetime.today() - datetime.timedelta(days=1)
print d0
>>> 2016-06-05 17:10:37.633824
print d1
>>> 2016-06-04 17:10:37.633967
df_1d = df30m['Timestamp'] > d1
print df_1d
</code></pre>
<p>This returns a pandas Series filled with True or False values:</p>
<pre><code>0 False
1 False
2 False
3 False
4 False
...
1676 True
1677 True
1678 True
1679 True
1680 True
</code></pre>
<p>Also I've tried to use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.between_time.html" rel="noreferrer"><code>between_time()</code></a> method.</p>
<pre><code>df_1d = df30m.between_time(d0, d1)
</code></pre>
<p>But I got the following error message:</p>
<pre><code>TypeError: Index must be DatetimeIndex
</code></pre>
<p>Please, can anyone show me a pythonic way to slice my dataframe?</p>
|
<p>You can use <code>loc</code> to index your data. Do you know if your timestamps are datetime.datetime objects or Pandas Timestamps?</p>
<pre><code>df30m.loc[(df30m.Timestamp <= d0) & (df30m.Timestamp >= d1)]
</code></pre>
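<p>Note that the boolean Series you already built can be used directly to filter the frame:</p>
<pre><code>df_1d = df30m[df30m['Timestamp'] > d1]
</code></pre>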
<p>You can set the index to the Timestamp column and then index as follows:</p>
<pre><code>df.set_index('Timestamp', inplace=True)
df[d1:d0]
</code></pre>
|
python|pandas|dataframe|split|timestamp
| 8
|
8,818
| 37,857,254
|
Does ComputeBandStats take nodata into account?
|
<p>I am trying to compute the stats for an image which is only partly covered by data. I would like to know if ComputeBandStats ignores the pixels with the same value as the file's nodata.</p>
<p>Here is my code:</p>
<pre><code>inIMG = gdal.Open(infile)
# getting stats for the first 3 bands
# Using ComputeBandStats instead of stats; array has min, max, mean and sd values
print "Computing band statistics"
bandas = [inIMG.GetRasterBand(b+1) for b in range(3)]
minMax = [b.ComputeRasterMinMax() for b in bandas]
meanSD = [b.ComputeBandStats(1) for b in bandas]
print minMax
print meanSD
</code></pre>
<p>For the image without the nodata attribute the output is:</p>
<pre><code>Computing band statistics
[(0.0, 26046.0), (0.0, 24439.0), (0.0, 22856.0)]
[(762.9534697777777, 647.9056493556284), (767.642869, 516.0531530834181), (818.0449643333334, 511.5360132592902)]
</code></pre>
<p>For the image with nodata = 0 the output is:</p>
<pre><code>Computing band statistics
[(121.0, 26046.0), (202.0, 24439.0), (79.0, 22856.0)]
[(762.9534697777777, 647.9056493556284), (767.642869, 516.0531530834181), (818.0449643333334, 511.5360132592902)]
</code></pre>
<p>The min and max values have changed such that 0 is no longer min, which makes sense, because in the second version it is nodata and therefore not regarded by ComputeRasterMinMax(). However, the mean and standard deviation has not changed.</p>
<p>Does this mean that ComputeBandStats doesn't disregard the nodata values?<br>
Is there any way to force ComputeBandStats to disregard the nodata values?</p>
|
<p>Setting the NoData value has no effect on the data itself. You can try it this way:</p>
<pre><code># First image, all valid data
data = numpy.random.randint(1,10,(10,10))
driver = gdal.GetDriverByName('GTIFF')
ds = driver.Create("stats1.tif", 10, 10, 1, gdal.GDT_Byte)
ds.GetRasterBand(1).WriteArray(data)
print ds.GetRasterBand(1).ComputeBandStats(1)
print ds.GetRasterBand(1).ComputeStatistics(False)
ds = None
# Second image, values of "1" set to no data
driver = gdal.GetDriverByName('GTIFF')
ds = driver.Create("stats2.tif", 10, 10, 1, gdal.GDT_Byte)
ds.GetRasterBand(1).SetNoDataValue(1)
ds.GetRasterBand(1).WriteArray(data)
print ds.GetRasterBand(1).ComputeBandStats(1)
print ds.GetRasterBand(1).ComputeStatistics(False)
ds = None
</code></pre>
<p>Note that the stats returned by <code>ComputeBandStats</code> are unchanged, but that those returned by <code>ComputeStatistics</code> are:</p>
<pre><code>>>> (4.97, 2.451346568725035)
>>> [1.0, 9.0, 4.970000000000001, 2.4513465687250346]
>>> (4.97, 2.451346568725035)
>>> [2.0, 9.0, 5.411111111111111, 2.1750833672117]
</code></pre>
<p>You can confirm manually that the stats are correct:</p>
<pre><code>numpy.mean(data)
numpy.mean(data[data != 1])
numpy.std(data)
numpy.std(data[data != 1])
>>> 4.9699999999999998
>>> 5.4111111111111114
>>> 2.4513465687250346
>>> 2.1750833672117
</code></pre>
|
python|python-2.7|python-3.x|numpy|gdal
| 1
|
8,819
| 37,714,241
|
pandas: how to combine matrix with different index and columns?
|
<p>I am using python, pandas and numpy to read a few data.</p>
<p>I have two data frames:</p>
<p>Input 1 - Cost matrix (it has the cost per season and region): index = regions, columns = seasons<br>
Input 2 - Binary matrix (value 1 when a month "a" belongs to a season "b"): index = seasons, columns = months</p>
<p>The output that I would like to have is a matrix C that has the cost per region and month: index=region, column month.</p>
<p>Could anyone please help me with that? I googled a lot, but I can't find the solution.</p>
<p>Updating with my code:</p>
<pre><code>import pandas as pd
import numpy as np
from xlwings import Workbook, Range
import os
print(os.getcwd())
link = (os.getcwd() + '/test.xlsx')
print(link)
#Open the Workbook
wb = Workbook(link)
#
#Reading data
regions=np.array(Range('Sheet1','regions').value)
#[u'Region A' u'Region B' u'Region C' u'Region D']
seasons=np.array(Range('Sheet1','seasons').value)
#[u'Season A' u'Season B' u'Season C' u'Season D']
months=np.array(Range('Sheet1','months').value)
#[u'Jan' u'Feb' u'Mar' u'Apr' u'May' u'Jun' u'Jul' u'Aug']
#read relationship between season and month
data=Range('Sheet1','rel').table.value
relationship=pd.DataFrame(data[0:], index = regions, columns=months)
# Jan Feb Mar Apr May Jun Jul Aug
#Region A 1 1 0 0 0 0 0 0
#Region B 0 0 1 1 0 0 0 0
#Region C 0 0 0 0 1 1 0 0
#Region D 0 0 0 0 0 0 1 1
#read the cost per region
data=Range('Sheet1','cost').table.value
cost=pd.DataFrame(data[0:], index = regions, columns=seasons)
# Season A Season B Season C Season D
#Region A 1 9 7 2
#Region B 7 0 3 3
#Region C 4 0 7 5
#Region D 3 10 3 10
#What I want:
# Jan Feb Mar Apr May Jun Jul Aug
#Region A 1 1 9 9 7 7 2 2
#Region B 7 7 0 0 3 3 3 3
#Region C 4 4 0 0 7 7 5 5
#Region D 3 3 10 10 3 3 10 10
</code></pre>
|
<p>I believe that there is a mistake in the relationship dataframe from your example, since you clearly state that it should be the relationship between <strong>season</strong> (and not region) and month, so I changed it accordingly.</p>
<pre><code>import pandas as pd
import numpy as np
regions = ['Region A', 'Region B', 'Region C', 'Region D']
seasons = ['Season A', 'Season B', 'Season C', 'Season D']
cost_data = np.array([[1, 9, 7, 2], [7, 0, 3, 3], [4, 0, 7, 5], [3, 10, 3, 10]])
cost = pd.DataFrame(data=cost_data, index=regions, columns=seasons)
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug']
rel_data = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1]])
rel = pd.DataFrame(data=rel_data, index=seasons, columns=months)
c = pd.DataFrame(index=regions, columns=months)
for region in regions:
    for month in months:
        for season in seasons:
            if rel.loc[season][month]:
                c.loc[region][month] = cost.loc[region][season]
print c
# Jan Feb Mar Apr May Jun Jul Aug
#Region A 1 1 9 9 7 7 2 2
#Region B 7 7 0 0 3 3 3 3
#Region C 4 4 0 0 7 7 5 5
#Region D 3 3 10 10 3 3 10 10
</code></pre>
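<p>Since the binary matrix simply selects the season for each month, the same result can also be computed without loops as a matrix product (cost is regions x seasons, rel is seasons x months):</p>
<pre><code>c = cost.dot(rel)
print(c)
</code></pre>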
|
python|numpy|pandas
| 0
|
8,820
| 37,716,383
|
noob at MNIST and I do not know what to look for in the documentation
|
<p>I am getting errors like </p>
<p>from: can't read /var/mail/tensorflow.examples.tutorials.mnist</p>
<p>from: can't read /var/mail/<strong>future</strong></p>
<p>What am I doing wrong?</p>
|
<p>From the error messages, it sounds like you are trying to run a TensorFlow program by (i) typing commands into a <code>bash</code> (or other command) prompt, or (ii) running it using a command-line interpreter (e.g. by running <code>source mnist.py</code>).</p>
<p>To run a TensorFlow Python program, e.g. <code>tensorflow/examples/tutorials/mnist/mnist.py</code> you must run it under <code>python</code>, by entering the following command at a command-line prompt:</p>
<pre><code>python tensorflow/examples/tutorials/mnist/mnist.py
</code></pre>
|
tensorflow
| 0
|
8,821
| 37,890,373
|
Python reshape array
|
<p>I have a very unique error I feel. I am working with an array 'A' of shape</p>
<pre><code>>>> A.shape
(1L, 1823L, 24L)
</code></pre>
<p>I am trying to get rid of the first dimension as it is empty. So, I do something like:</p>
<pre><code>>>> z1 = A[0,:,:]
>>> z1.shape
(1253L,)
>>> z1[0].shape
()
>>> z1[0]
10411.505889359611
</code></pre>
<p>I don't understand where this 1253 is coming from. I tried to use other simple examples and it works fine for smaller arrays.</p>
<p>I also tried squeeze as:</p>
<pre><code>import numpy as np
>>> z2 = np.squeeze(A,0)
>>> z2.shape
(1253L,)
</code></pre>
<p>My ultimate goal is to make a vector out of the elements of array 'A', such that after squeezing A to dimensions 1823x24, I gather the elements in row-wise order.</p>
<p>Edit: My code should have worked too, but the temporary variable I was using for the process wasn't changing for some reason. I tried deleting the temporary variable and the issue still wouldn't go away. So, I created a new temporary variable and it worked. I am using PyCharm, so I am not sure where the issue was coming from.</p>
|
<p>You can use <code>np.reshape()</code> for this.</p>
<p>Example <a href="http://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.reshape.html" rel="nofollow">from the docs</a>:</p>
<pre><code>>>> a = np.arange(6).reshape((3, 2))
>>> a
array([[0, 1],
[2, 3],
[4, 5]])
</code></pre>
<p>In your example, this would be:</p>
<pre><code>A.reshape((1823, 24))
</code></pre>
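<p>Since the stated goal is a flat vector of the elements in row-wise order, you can also flatten in one step:</p>
<pre><code>v = A.reshape(-1)                  # or A.ravel(); flattens in row-major (C) order
v = A.reshape((1823, 24)).ravel()  # equivalent, squeezing first and then flattening
</code></pre>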
|
python|numpy
| 0
|
8,822
| 31,238,079
|
How to split Test and Train data such that there is guaranteed at least one of each Class in each
|
<p>I have some fairly unbalanced data I am trying to classify.
However, it is classifying fairly well.</p>
<p>To evaluate exactly how well, I must split the data into training and test subsets.</p>
<p>Right now I am doing that by the very simple measure of:</p>
<pre><code>import numpy as np
import pandas

corpus = pandas.DataFrame(..., columns=["data","label"]) # My data, simplified
train_index = np.random.rand(len(corpus))>0.2
training_data = corpus[train_index]
test_data = corpus[np.logical_not(train_index)]
</code></pre>
<p>This is nice and simple, but some of the classes occur very very rarely:
about 15 occur less than 100 times each in the corpus of over 50,000 cases, and two of them each occur only once.</p>
<p>I would like to partition my data corpus into test and training subsets such that: </p>
<ul>
<li>If a class occurs less than twice, it is excluded from both</li>
<li>each class occurs at least once, in test and in training</li>
<li>The split into test and training is otherwise random</li>
</ul>
<p>I can throw together something to do this (likely the simplest way is to remove things with fewer than 2 occurrences, and then just resample until the split has both on each side), but I wonder if there is a clean method that already exists.</p>
<p>I don't think that <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html#sklearn.cross_validation.train_test_split" rel="nofollow">sklearn.cross_validation.train_test_split</a> will do for this, but that it exists suggests that sklearn might have this kind of functionality.</p>
|
<p>The following meets your 3 conditions for partitioning the data into test and training:</p>
<pre><code>#get rid of items with fewer than 2 occurrences.
corpus=corpus[corpus.groupby('label').label.transform(len)>1]
from sklearn.cross_validation import StratifiedShuffleSplit
sss=StratifiedShuffleSplit(corpus['label'].tolist(), 1, test_size=0.5, random_state=None)
train_index, test_index =list(*sss)
training_data=corpus.iloc[train_index]
test_data=corpus.iloc[test_index]
</code></pre>
<p>I've tested the code above by using the following fictitious dataframe:</p>
<pre><code>import random

# create random data with labels 0 to 39, then add a two-sample label (40) and a one-sample label (41)
corpus=pd.DataFrame({'data':np.random.randn(49998),'label':np.random.randint(40,size=49998)})
corpus.loc[49998]=[random.random(),40]
corpus.loc[49999]=[random.random(),40]
corpus.loc[50000]=[random.random(),41]
</code></pre>
<p>Which produces the following output when testing the code:</p>
<pre><code>test_data[test_data['label']==40]
Out[110]:
data label
49999 0.231547 40
training_data[training_data['label']==40]
Out[111]:
data label
49998 0.253789 40
test_data[test_data['label']==41]
Out[112]:
Empty DataFrame
Columns: [data, label]
Index: []
training_data[training_data['label']==41]
Out[113]:
Empty DataFrame
Columns: [data, label]
Index: []
</code></pre>
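<p>For completeness, on current scikit-learn versions the same stratified split (after filtering out the single-occurrence classes) is available directly through <code>train_test_split</code> from <code>sklearn.model_selection</code>; a minimal sketch:</p>
<pre><code>from sklearn.model_selection import train_test_split

corpus = corpus[corpus.groupby('label').label.transform(len) > 1]
training_data, test_data = train_test_split(corpus, test_size=0.5, stratify=corpus['label'])
</code></pre>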
|
python|pandas|machine-learning|scikit-learn|classification
| 3
|
8,823
| 64,408,663
|
Printing Coordinates to a csv File
|
<p>I would like to extract coordinates to a .csv file with the brackets and comma delimiter format (x,y).</p>
<p>I have a 4x4 matrix written as a list (network1) and need to identify the coordinates where a 1 occurs to then export these coordinates to a .csv file.</p>
<p>The code below was suggested by another user which works great for a different set of data, although I need to adjust this a bit further to suit this format.</p>
<p>I expect there is only a small modification needed in the existing code which is below.</p>
<pre><code>import numpy as np
import pandas as pd
network1 = [0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,1]
network1_matrix = np.array(network1).reshape(4, 4)
coordinates = np.transpose(np.where(network1_matrix == 1))
result_df = pd.DataFrame({'1': coordinates[:, 0] + 1, '2': coordinates[:, 1] + 1})
result_df = result_df.append({'1': ';', '2': ''}, ignore_index=True)
result_df.columns = ['set A :=', '']
result_df.to_csv('result.csv', sep=' ', index=False)
</code></pre>
<p>This produces an output as follows (I have included results from a text file for greater clarity):</p>
<p><a href="https://i.stack.imgur.com/5z0Ne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5z0Ne.png" alt="Current A Output" /></a></p>
<p>For this specific output, I need it in the following format:</p>
<p><a href="https://i.stack.imgur.com/XV9gO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XV9gO.png" alt="Ideal A Output" /></a></p>
<p>I would greatly appreciate the assistance to complete the following as per the second image:</p>
<ol>
<li>Print set A := to the .csv file without the quotation marks (" ") around it.</li>
<li>Print the coordinates with the brackets and only delimited by a comma.</li>
</ol>
<p>Thanks a lot for the help!</p>
|
<p>I have seen your problem. The issue is related to the quote character used by <code>to_csv</code>:</p>
<pre><code>df.to_csv(quotechar='"')
</code></pre>
<p>By default, <code>quotechar</code> is the double-quote character, which is why quotes appear around the header string.</p>
<p>So try it like this...</p>
<pre><code>import numpy as np
import pandas as pd
network1 = [0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,1]
network1_matrix = np.array(network1).reshape(4, 4)
"""
array([[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 1, 1],
[0, 0, 1, 1]])
"""
coordinates = np.transpose(np.where(network1_matrix == 1))
"""
array([[0, 1],
[1, 2],
[2, 2],
[2, 3],
[3, 2],
[3, 3]])
"""
result_df = pd.DataFrame({'1': coordinates[:, 0] + 1, '2': coordinates[:, 1] + 1})
result_df.columns = ['set', '']
result_df['set A :='] = result_df[['set', '']].apply(tuple, axis=1)
result_df = result_df.append({'set A :=': ';'}, ignore_index=True)
#result_df
result_df = result_df['set A :=']
result_df.to_csv('result.csv', sep=' ', float_format = True, index = False, quotechar= '\r')
</code></pre>
<p><code>!head result.csv</code></p>
<p>Output:</p>
<pre><code>set A :=
(1, 2)
(2, 3)
(3, 3)
(3, 4)
(4, 3)
(4, 4)
;
</code></pre>
|
python|pandas|numpy|coordinates|export-to-csv
| 1
|
8,824
| 64,328,617
|
Python Pandas Replace Values with NAN from Tuple
|
<p>Got the Following Dataframe:</p>
<pre><code> A B
Temp1 1
Temp2 2
NaN NaN
NaN 4
</code></pre>
<p>Since A and B are correlated, I am able to create a new column where I have calculated the NaN values of A and B and formed a tuple:</p>
<pre><code> A B C
Temp1 1 (1,Temp1)
Temp2 2 (2, Temp2)
NaN NaN (3, Temp3)
NaN 4 (4, Temp4)
</code></pre>
<p>Now I have to drop column C and fill the NaN values in the corresponding columns.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> with select values in tuple by indexing with <code>str</code>, last remove <code>C</code> column:</p>
<pre><code>#if values are not in tuples
#df.C = df.C.str.strip('()').str.split(',').apply(tuple)
df.A = df.A.fillna(df.C.str[1])
df.B = df.B.fillna(df.C.str[0])
df = df.drop('C', axis=1)
print (df)
A B
0 Temp1 1
1 Temp2 2
2 Temp3 3
3 Temp4 4
</code></pre>
<p>Or create DataFrame from <code>C</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> for use and remove column, set new columns names and pass to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>:</p>
<pre><code>#if values are not in tuples
#df.C = df.C.str.strip('()').str.split(',').apply(tuple)
df[['A','B']] = df[['A','B']].fillna(pd.DataFrame(df.pop('C').tolist(), columns=['B','A']))
print (df)
A B
0 Temp1 1
1 Temp2 2
2 Temp3 3
3 Temp4 4
</code></pre>
|
python|pandas|tuples|nan|fillna
| 1
|
8,825
| 64,464,585
|
Pandas Python How to handle question mark that appeared in dataframe
|
<p>I have these question marks that appeared in my data frame right next to numbers and I don't know how to erase or replace them. I don't want to drop the whole row since it may result in inaccurate results.
<div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>. Value
0 58
1 82
2 69
3 48
4 8</code></pre>
</div>
</div>
<a href="https://i.stack.imgur.com/sTBdE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sTBdE.jpg" alt="enter image description here" /></a></p>
|
<p>I agree with the comments above that you should look into how you imported the data. But here is the answer to your question of how to remove the non numeric characters:</p>
<p>This will remove the non numeric characters</p>
<pre><code>df['Value'] = df['Value'].str.extract('(\d+)')
</code></pre>
<p>Then if you wish to change the datatype to int you can use this:</p>
<pre><code>df['Value'] = pd.to_numeric(df['Value'])
</code></pre>
|
pandas
| 1
|
8,826
| 47,664,026
|
How to read arrays to form a matrix from file using numpy
|
<p>I have a file with data like this:
2 arrays in each row, 10,000 rows in total.</p>
<pre><code>[1,2,3,4,5][2,4,6,8,10]
[3,6,9,12,24][6,12,18,24,48]
....]
</code></pre>
<p>I am planning to give this input to Linear Regression in the fit command.
I am having an issue with how to construct a matrix from these entries.</p>
<p>I am looking at constructing Array (2 by x) like:</p>
<pre><code>x=[
[1,2,3,4,5]
[3,6,9,12,24]
....]
y=
[[2,4,6,8,10]
[6,12,18,24,48]
....]
</code></pre>
<p>so that I can give to the fit command as input.</p>
<p>I see numpy.fromfile is used to get binary data.
Can I use it for lists?</p>
<p><a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.fromfile.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.fromfile.html</a></p>
|
<p>Solution using pandas</p>
<pre><code>import pandas as pd
df = pd.read_csv('input.txt', delimiter="\]\[", header=None, engine='python')
df[0] = (df[0] + ']')
df[1] = ('[' + df[1])
x = df[0].tolist()
y = df[1].tolist()
</code></pre>
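<p>Note that <code>x</code> and <code>y</code> above still hold the bracketed strings read from the file. If numeric arrays are needed for the <code>fit</code> call, one possible follow-up (a sketch, assuming every row really looks like <code>[1,2,3,4,5][2,4,6,8,10]</code>) is to parse the strings with <code>ast.literal_eval</code>:</p>
<pre><code>import ast
import numpy as np

# turn the lists of strings into 2-D numeric arrays
x_arr = np.array([ast.literal_eval(s) for s in x])
y_arr = np.array([ast.literal_eval(s) for s in y])
</code></pre>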
|
python|numpy
| 0
|
8,827
| 47,735,836
|
How to manipulate numpy arrays in this way?
|
<p>I have a (28,28) array <code>a</code>. And I want to obtain a (28,28,3) array <code>b</code> s.t. <code>b[i][j][0] = b[i][j][1] = b[i][j][2] = a[i][j]</code>.</p>
<p>Is there any numpy shortcut to do this without tedious for loops?</p>
|
<pre><code>>>> import numpy as np
>>> a = np.zeros((28,28))
>>> b = np.dstack((a,a,a))
>>> a.shape
(28, 28)
>>> b.shape
(28, 28, 3)
</code></pre>
<p>Example:</p>
<pre><code>>>> a = np.array([[1,2],[3,4]])
>>> b = np.dstack((a,a,a))
>>> a
array([[1, 2],
[3, 4]])
>>> b
array([[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]]])
</code></pre>
|
python|numpy
| 1
|
8,828
| 47,985,787
|
How to use numpy fillna() with numpy.where() for a column in a pandas DataFrame?
|
<p>Here is an example pandas DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
dict1 = {'file': ['filename2', 'filename2', 'filename3', 'filename4',
'filename4', 'filename3'], 'amount': [3, 4, 5, 1, 2, 1],
'front': [21889611, 36357723, 196312, 11, 42, 1992],
'back':[21973805, 36403870, 277500, 19, 120, 3210],
'type':['A', 'A', 'A', 'B', 'B', 'C']}
df1 = pd.DataFrame(dict1)
print(df1)
file amount front back type
0 filename2 3 21889611 21973805 A
1 filename2 4 36357723 36403870 A
2 filename3 5 196312 277500 A
3 filename4 1 11 19 B
4 filename4 2 42 120 B
5 filename3 1 1992 3210 C
</code></pre>
<p>I'm defining a new column <code>end</code> using <code>numpy.where()</code>:</p>
<pre><code>df1['end'] = np.where(df1['type']=='B', df1['front'], df1['front'] + df1['back'])
print(df1)
amount back file front type end
0 3 21973805 filename2 21889611 A 43863416
1 4 36403870 filename2 36357723 A 72761593
2 5 277500 filename3 196312 A 473812
3 1 19 filename4 11 B 11
4 2 120 filename4 42 B 42
5 1 3210 filename3 1992 C 5202
</code></pre>
<p>I would like to use the same approach to fill in <code>NaN</code> values, if the <code>end</code> column partially exists, e.g. here is a <code>DataFrame</code> where a <code>end</code> does exist as a column, but with many <code>NaN</code> values. (EDIT: these values that are not NA may be entirely unique):</p>
<pre><code>new_df
amount back file front type end
0 3 21973805 filename2 21889611 A NaN
1 4 36403870 filename2 36357723 A NaN
2 5 277500 filename3 196312 A 12
3 1 19 filename4 11 B NaN
4 2 120 filename4 42 B 49
5 1 3210 filename3 1992 C NaN
</code></pre>
<p>I would think one could do this with <code>pandas.DataFrame.fillna()</code>, but this throws an error:</p>
<pre><code>df1['end'].fillna(np.where(df1['type']=='B', df1['front'], df1['front'] + df1['back']), inplace=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/pandas/core/series.py", line 2434, in fillna
**kwargs)
File "/usr/local/lib/python3.6/site-packages/pandas/core/generic.py", line 3631, in fillna
type(value))
ValueError: invalid fill value with a <class 'numpy.ndarray'>
</code></pre>
<p>Question: how do I efficiently use <code>np.where()</code> only on <code>NaN</code> values within a certain column? </p>
|
<p><code>fillna</code> aligns on the index, so first build the replacement values as a column and then fill from it:</p>
<pre><code>df['New']=np.where(df1['type']=='B', df1['front'], df1['front'] + df1['back'])
df
Out[125]:
amount back file front type end New
0 3 21973805 filename2 21889611 A NaN 43863416
1 4 36403870 filename2 36357723 A NaN 72761593
2 5 277500 filename3 196312 A 473812.0 473812
3 1 19 filename4 11 B NaN 11
4 2 120 filename4 42 B 42.0 42
5 1 3210 filename3 1992 C NaN 5202
df.end.fillna(df.New)
Out[126]:
0 43863416.0
1 72761593.0
2 473812.0
3 11.0
4 42.0
5 5202.0
Name: end, dtype: float64
df.end=df.end.fillna(df.New)
df
Out[128]:
amount back file front type end New
0 3 21973805 filename2 21889611 A 43863416.0 43863416
1 4 36403870 filename2 36357723 A 72761593.0 72761593
2 5 277500 filename3 196312 A 473812.0 473812
3 1 19 filename4 11 B 11.0 11
4 2 120 filename4 42 B 42.0 42
5 1 3210 filename3 1992 C 5202.0 5202
</code></pre>
<p>Update </p>
<pre><code>df['New']=np.where(df1['type']=='B', df1['front'], df1['front'] + df1['back'])
df.end=df.end.fillna(df.New)
df
Out[133]:
amount back file front type end New
0 3 21973805 filename2 21889611 A 43863416.0 43863416
1 4 36403870 filename2 36357723 A 72761593.0 72761593
2 5 277500 filename3 196312 A 12.0 473812
3 1 19 filename4 11 B 11.0 11
4 2 120 filename4 42 B 49.0 42
5 1 3210 filename3 1992 C 5202.0 5202
</code></pre>
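<p>If you prefer to avoid the helper column, the original <code>fillna</code> call also works once the <code>np.where</code> result is wrapped in a <code>Series</code> so it carries an index, which is exactly what the <code>ValueError</code> was complaining about. A minimal sketch:</p>
<pre><code># wrap the ndarray in a Series aligned to df1's index before filling
df1['end'] = df1['end'].fillna(
    pd.Series(np.where(df1['type'] == 'B',
                       df1['front'],
                       df1['front'] + df1['back']),
              index=df1.index))
</code></pre>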
|
python|pandas|numpy|dataframe|fillna
| 3
|
8,829
| 58,633,364
|
Max pool a single image in tensorflow using "tf.nn.avg_pool"
|
<p>I want to apply "tf.nn.max_pool()" on a single image but I get a result with dimension that is totally different than the input:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
ifmaps_1 = tf.Variable(tf.random_uniform( shape=[ 7, 7, 3], minval=0, maxval=3, dtype=tf.int32))
ifmaps=tf.dtypes.cast(ifmaps_1, dtype=tf.float64)
ofmaps_tf = tf.nn.max_pool([ifmaps], ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding="SAME")[0] # no padding
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
print("ifmaps_tf = ")
print(ifmaps.eval())
print("ofmaps_tf = ")
result = sess.run(ofmaps_tf)
print(result)
</code></pre>
<p>I think this is related to trying to apply pooling to single example not on a batch. I need to do the pooling on a single example.</p>
<p>Any help is appreciated.</p>
|
<p>Your input is <code>(7,7,3)</code>, kernel size is <code>(3,3)</code> and stride is <code>(2,2)</code>. So if you do not want any padding (as stated in your comment), you should use <code>padding="VALID"</code>, which will return a <code>(3,3)</code> tensor as output. If you use <code>padding="SAME"</code>, it will return a <code>(4,4)</code> tensor. </p>
<p>Usually, the formula for calculating the output size with SAME padding is:</p>
<pre class="lang-py prettyprint-override"><code>out_size = ceil(in_size / stride)
</code></pre>
<p>For VALID padding it is:</p>
<pre class="lang-py prettyprint-override"><code>out_size = ceil((in_size - filter_size + 1) / stride)
</code></pre>
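<p>Applied to the code from the question, the change is just the padding argument; a minimal sketch keeping the rest of the TF1 session code as it is:</p>
<pre><code># (3,3) output for a (7,7) input with kernel 3 and stride 2, no padding
ofmaps_tf = tf.nn.max_pool([ifmaps], ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1], padding="VALID")[0]
</code></pre>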
|
python|numpy|tensorflow|conv-neural-network|max-pooling
| 1
|
8,830
| 70,325,648
|
NumPy genfromtxt OSError: file not found
|
<p>I'm learning the NumPy library and when I try to read something from the file I get this error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\user\Desktop\folder\Reading_from_file.py", line 3, in <module>
example = genfromtxt("example.txt", delimiter=',')
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\npyio.py", line 1793, in genfromtxt
fid = np.lib._datasource.open(fname, 'rt', encoding=encoding)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\_datasource.py", line 193, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\_datasource.py", line 533, in open
raise IOError("%s not found." % path)
OSError: example.txt not found.
</code></pre>
<p>Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>from numpy import genfromtxt
example = genfromtxt("example.txt", delimiter=',')
</code></pre>
<p><code>Reading_from_file.py</code> and <code>example.txt</code> are in the same folder</p>
<p>I read the <a href="https://numpy.org/doc/stable/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer">documentation</a> and I was trying to find something here but found nothing (maybe I missed something)</p>
<p>If there is already a thread on this topic, please link to it</p>
|
<p>You probably aren't running the script from the same folder that <code>example.txt</code> is in. <code>example.txt</code> doesn't need to be in the same directory as the script itself, it needs to be in the same directory as you are when you're running the script.</p>
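<p>One way to make the lookup independent of the working directory is to build the path from the script's own location; a minimal sketch (the file name is taken from the question):</p>
<pre><code>from pathlib import Path
from numpy import genfromtxt

# resolve example.txt relative to this script, not to the shell's current directory
path = Path(__file__).resolve().parent / "example.txt"
example = genfromtxt(path, delimiter=',')
</code></pre>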
|
python|numpy|file-not-found
| 1
|
8,831
| 70,191,014
|
solve_ivp - TypeError: 'numpy.ndarray' object is not callable
|
<p>I'm working on the code below for a class and no matter what I try I can't figure out how to fix it so I can move on.</p>
<pre><code>def eqs(t, x):
return np.array([[(1 - np.multiply((1 - f(z(t), z_thresh)), (1 - f(x(1), y_thresh)))) - x(0)],
[(1 - np.multiply((1 - f(z(t), z_thresh)),(1 - f(x(0), x_thresh)))) - x(1)]])
f = lambda x, thresh: x >= thresh
z = lambda t: np.multiply((t >= 2), (t <= 4))
z_thresh = 0.5
y_thresh = 0.5
x_thresh = 0.5
x_0 = np.zeros([0,])
sol = int.solve_ivp(eqs, range(0, 6), x_0)
</code></pre>
<p>I've been banging my head against the wall all night and can't figure out how to get around it. No matter what I seem to try, it still throws "TypeError: 'numpy.ndarray' object is not callable."</p>
<p>Edit: The traceback is as follows:</p>
<pre><code>Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7252/1552450284.py in <module>
9 x0 = np.zeros([0,])
10
---> 11 sol = int.solve_ivp(eqns_a,range(0,6),x0)
12 plt.figure(1)
13 subplot(311)
~\anaconda3\lib\site-packages\scipy\integrate\_ivp\ivp.py in solve_ivp(fun, t_span, y0, method, t_eval, dense_output, events, vectorized, args, **options)
540 method = METHODS[method]
541
--> 542 solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
543
544 if t_eval is None:
~\anaconda3\lib\site-packages\scipy\integrate\_ivp\rk.py in __init__(self, fun, t0, y0, t_bound, max_step, rtol, atol, vectorized, first_step, **extraneous)
92 self.max_step = validate_max_step(max_step)
93 self.rtol, self.atol = validate_tol(rtol, atol, self.n)
---> 94 self.f = self.fun(self.t, self.y)
95 if first_step is None:
96 self.h_abs = select_initial_step(
~\anaconda3\lib\site-packages\scipy\integrate\_ivp\base.py in fun(t, y)
136 def fun(t, y):
137 self.nfev += 1
--> 138 return self.fun_single(t, y)
139
140 self.fun = fun
~\anaconda3\lib\site-packages\scipy\integrate\_ivp\base.py in fun_wrapped(t, y)
18
19 def fun_wrapped(t, y):
---> 20 return np.asarray(fun(t, y), dtype=dtype)
21
22 return fun_wrapped, y0
~\AppData\Local\Temp/ipykernel_7252/1552450284.py in <lambda>(t, x)
6 x_thresh = 0.5
7
----> 8 eqns_a = lambda t, x: np.array([[(1 - np.multiply((1 - f(z(t), z_thresh)), (1 - f(x(1), y_thresh)))) - x(0)], [(1 - np.multiply((1 - f(z(t), z_thresh)),(1 - f(x(0), x_thresh)))) - x(1)]])
9 x0 = np.zeros([0,])
10
TypeError: 'numpy.ndarray' object is not callable
</code></pre>
|
<p>the traceback tells you that the problem is in</p>
<pre><code>np.array([[(1 - np.multiply((1 - f(z(t), z_thresh)), (1 - f(x(1), y_thresh)))) - x(0)], [(1 - np.multiply((1 - f(z(t), z_thresh)),(1 - f(x(0), x_thresh)))) - x(1)]]
</code></pre>
<p>So we look for apparent function calls, <code>fn(...)</code>. <code>np.multiply</code> should be OK unless you redefined it someplace. <code>f(...)</code> is defined with a <code>lambda</code>. <code>z(...)</code> as well. That leaves <code>x(1)</code> and <code>x(0)</code>. What is <code>x</code>? It's the second argument of your function, the state array that <code>solve_ivp</code> passes in. An array cannot be called, so <code>x(0)</code> and <code>x(1)</code> raise the error; they need to be indexed as <code>x[0]</code> and <code>x[1]</code>.</p>
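<p>A sketch of the fix, replacing the calls <code>x(0)</code>/<code>x(1)</code> with indexing (the rest is kept as in the question; note that <code>solve_ivp</code> also expects the initial condition to have one entry per state variable and <code>t_span</code> to be a <code>(t0, tf)</code> pair):</p>
<pre><code>from scipy.integrate import solve_ivp

def eqs(t, x):
    # x is a NumPy array passed in by solve_ivp; index it instead of calling it
    return np.array([(1 - (1 - f(z(t), z_thresh)) * (1 - f(x[1], y_thresh))) - x[0],
                     (1 - (1 - f(z(t), z_thresh)) * (1 - f(x[0], x_thresh))) - x[1]])

x_0 = np.zeros(2)                  # two state variables, not an empty array
sol = solve_ivp(eqs, (0, 6), x_0)  # t_span as a (t0, tf) pair
</code></pre>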
|
python|numpy|scipy
| 0
|
8,832
| 56,371,996
|
Conversion of Daily pandas dataframe to minute frequency
|
<p>I have a dataframe as defined below (df) with daily frequency and I would like to convert this to minute frequency, starting at 8:30 and ending at 16:00.</p>
<pre><code>import pandas as pd
dict = [
{'ticker':'jpm','date': '2016-11-28','returns': '0.2'},
{ 'ticker':'ge','date': '2016-11-28','returns': '0.2'},
{'ticker':'fb', 'date': '2016-11-28','returns': '0.2'},
{'ticker':'aapl', 'date': '2016-11-28','returns': '0.2'},
{'ticker':'msft','date': '2016-11-28','returns': '0.2'},
{'ticker':'amzn','date': '2016-11-28','returns': '0.2'},
{'ticker':'jpm','date': '2016-11-29','returns': '0.2'},
{'ticker':'ge', 'date': '2016-11-29','returns': '0.2'},
{'ticker':'fb','date': '2016-11-29','returns': '0.2'},
{'ticker':'aapl','date': '2016-11-29','returns': '0.2'},
{'ticker':'msft','date': '2016-11-29','returns': '0.2'},
{'ticker':'amzn','date': '2016-11-29','returns': '0.2'}
]
df = pd.DataFrame(dict)
df['date'] = pd.to_datetime(df['date'])
df=df.set_index(['date','ticker'], drop=True)
</code></pre>
<p>Can anyone suggest how to do this?</p>
|
<p>I believe you need reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a> for <code>DatetimeIndex</code>, then set minute frequency by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.asfreq.html" rel="nofollow noreferrer"><code>DataFrame.asfreq</code></a>, filter times by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>DataFrame.between_time</code></a> and last use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> for <code>MultiIndex</code>:</p>
<pre><code>df1 = df.unstack().asfreq('Min', method='ffill').between_time('8:30','16:00').stack()
print (df1.head(10))
returns
date ticker
2016-11-28 08:30:00 aapl 0.2
amzn 0.2
fb 0.2
ge 0.2
jpm 0.2
msft 0.2
2016-11-28 08:31:00 aapl 0.2
amzn 0.2
fb 0.2
ge 0.2
</code></pre>
|
python|pandas|dataframe
| 2
|
8,833
| 56,412,030
|
Python- Looping through pandas Groupby object
|
<p>Here is a sample row that I have in my dataframe: </p>
<pre><code>{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "3652b5d7-55b8-4bee-82b6-ab32d5543352",
"timestamp" : "1559403699899",
"endFlow" : "true"
}
</code></pre>
<p>I do groupby by 'sessionId', which will give me a group like this: </p>
<pre><code>Row 1:
{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "usecaseId1",
"timestamp" : "1559403699899",
"endFlow" : "false"
},
Row 2:
{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "usecaseId1",
"timestamp" : "1559403699899",
"endFlow" : "false"
},
Row 3:
{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "usecaseId2",
"timestamp" : "1559403699899",
"endFlow" : "true"
},
Row 4:
{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "usecaseId1",
"timestamp" : "1559403699899",
"endFlow" : "false"
},
Row 5:
{
"sessionId" : "454ec8b8-7f00-40b2-901c-724c5d9f5a91",
"useCaseId" : "usecaseId1",
"timestamp" : "1559403699899",
"endFlow" : "true"
}
</code></pre>
<p>Taking the above group as example, what I want to achieve here is, after grouping the dataframe by 'sessionId', I want to loop through consecutive rows with same 'useCaseId'(So from, the above group, there will be three sets of consecutive rows through which I want to loop,<br>
<strong>Row1-Row2,Row3,Row4-Row5</strong>) </p>
<p>And from each of the above consecutive sets of rows(<strong>Row1-Row2,Row3,Row4-Row5 (Where each set has same useCaseId)</strong>,<br>
<strong>I want to find the number of sets who's rows endflow value in only false</strong>. </p>
<p>So, from the above given example of group,the expected outcome is as follows:<br>
<strong>1(Since, Row1-Row2 with same useCaseId 'usecaseId1' has endflow only 'false', while 'Row3' and 'Row4-Row5' has endflow 'true')</strong> </p>
<p>How can I achieve this?<br>
<strong>Updates:</strong> </p>
<ol>
<li><p>df.head(): </p>
<pre><code>sessionId useCaseId timestamp endFlow
0 sessionId1 useCaseId1 1559403699899 false
1 sessionId1 useCaseId1 1559403699899 false
2 sessionId1 useCaseId2 1559403699899 true
3 sessionId1 useCaseId1 1559403699899 false
4 sessionId1 useCaseId1 1559403699899 true
</code></pre></li>
<li><p>What I tried:<br>
I have tried grouping the dataframe by 'sessionId' and 'usecaseId',but that won't work because it will group the dataframe uniquely with 'usecaseId' which is not what I wanted, I want to loop through consecutive rows after grouping by 'sessionId' with same 'usecaseId', and then count the consecutive rows with same 'useCaseId' having 'endFlow' only as 'false'. </p></li>
<li><p>Expected output:
After grouping by 'sessionId', I want to count the number of consecutive rows with same 'useCaseId' having 'endFlow' only as 'false'<br>
<strong>from the above given example of group,the expected outcome is as follows:
1(Since, Row1-Row2 with same useCaseId 'usecaseId1' has endflow only 'false', while 'Row3' and 'Row4-Row5' has endflow 'true')</strong></p></li>
</ol>
|
<p>You may try this: (<em>I assume <code>df.endFlow</code> contains the strings <code>'true'</code> and <code>'false'</code>. If it contains boolean <code>True</code> and <code>False</code>, just take out the <code>replace</code> command</em>.)</p>
<pre><code>df.endFlow.replace({'true': True, 'false': False}).groupby([df.sessionId, df.useCaseId.ne(df.useCaseId.shift()).cumsum()]).sum().eq(False).sum()
Out[1258]: 1
</code></pre>
<p>Now, I change your sample to include 2 groups satisfying condition, it also reports count correctly as follows:</p>
<pre><code>df1:
sessionId useCaseId timestamp endFlow
0 sessionId1 useCaseId1 1559403699899 false
1 sessionId1 useCaseId1 1559403699899 false
2 sessionId1 useCaseId2 1559403699899 true
3 sessionId1 useCaseId1 1559403699899 false
4 sessionId1 useCaseId1 1559403699899 false
df1.endFlow.replace({'true': True, 'false': False}).groupby([df1.sessionId, df1.useCaseId.ne(df1.useCaseId.shift()).cumsum()]).sum().eq(False).sum()
Out[1264]: 2
</code></pre>
<p>Note: I understand from your description that a group with a single row is also considered a consecutive-row group. Therefore, the count will include it if its <code>endFlow</code> is <code>False</code>.</p>
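<p>For readability, the same computation can be spelled out step by step; this is just the one-liner above broken into named pieces:</p>
<pre><code>end_flags = df.endFlow.replace({'true': True, 'false': False})

# label consecutive runs of the same useCaseId
run_id = df.useCaseId.ne(df.useCaseId.shift()).cumsum()

# per (sessionId, run): count of True endFlow values in that run
run_sums = end_flags.groupby([df.sessionId, run_id]).sum()

# runs whose endFlow values are all False sum to 0 (same as .eq(False) above)
result = run_sums.eq(0).sum()
</code></pre>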
|
python|pandas|dataframe
| 2
|
8,834
| 55,591,077
|
How to reshape this array the way I need?
|
<p>I'm looking to reshape an array of three 2x2 matrices, that is of shape (3,2,2) i.e.</p>
<pre><code>a = np.array([[[a1,a2],[a3,a4]],
[[b1,b2],[b3,b4]],
[[c1,c2],[c3,c4]]])
</code></pre>
<p>to this array of shape (2,2,3):</p>
<pre><code>[[[a1,b1,c1],[a2,b2,c2]],
[[a3,b3,c3],[a4,b4,c4]]])
</code></pre>
<p>The regular <code>np.reshape(a, (2,2,3))</code> returns this array:</p>
<pre><code>[[[a1, a2, a3],[a4, b1, b2]],
[[b3, b4, c1],[c2, c3, c4]]]
</code></pre>
<p>and <code>np.reshape(a, (2,2,3), order = 'F')</code> brings this:</p>
<pre><code>[[[a1, b3, c2],[c1, a2, b4]],
[[b1, c3, a4],[a3, b2, c4]]]
</code></pre>
<p>How can I reshape the initial array to get what I need?</p>
<p>This is in order to use with <code>matplotlib.pyplot.imshow</code> where the three initial matrices correspond to the three colors 'RGB' and each of the elements is a float in the range [0,1]. So also if there's a better way to do it I would be happy to know.</p>
|
<p>We simply need to permute axes. Two ways to do so.</p>
<p>Use <code>np.transpose</code> -</p>
<pre><code>a.transpose(1,2,0) # a is input array
# or np.transpose(a,(1,2,0))
</code></pre>
<p>We can also use <code>np.moveaxis</code> -</p>
<pre><code>np.moveaxis(a,0,2) # np.moveaxis(a, 0, -1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [157]: np.random.seed(0)
In [158]: a = np.random.randint(11,99,(3,2,2))
In [159]: a
Out[159]:
array([[[55, 58],
[75, 78]],
[[78, 20],
[94, 32]],
[[47, 98],
[81, 23]]])
In [160]: a.transpose(1,2,0)
Out[160]:
array([[[55, 78, 47],
[58, 20, 98]],
[[75, 94, 81],
[78, 32, 23]]])
</code></pre>
|
python|arrays|numpy|reshape
| 2
|
8,835
| 55,669,882
|
Python np.asarray does not return the true shape
|
<p>I run a loop over two sub-tables of my original table.</p>
<p>When I run the loop and then check the shape, I get (1008,) while the shape should be (1008,168,252,3). Is there a problem in my loop?</p>
<pre><code>train_images2 = []
for i in range(len(train_2)):
im = process_image(Image.open(train_2['Path'][i]))
train_images2.append(im)
train_images2 = np.asarray(train_images2)
</code></pre>
|
<p>The problem is that your <strong><code>process_image()</code></strong> function is returning a scalar instead of the processed image (i.e. a 3D array of shape <code>(168,252,3)</code>). So, the variable <code>im</code> is just a scalar. Because of this, you get the array <code>train_images2</code> to be 1D array. Below is a contrived example which illustrates this:</p>
<pre><code>In [59]: train_2 = range(1008)
In [65]: train_images2 = []
In [66]: for i in range(len(train_2)):
...: im = np.random.random_sample()
...: train_images2.append(im)
...: train_images2 = np.asarray(train_images2)
...:
In [67]: train_images2.shape
Out[67]: (1008,)
</code></pre>
<hr>
<p>So, the fix is that you should make sure that <code>process_image()</code> function returns a 3D array as in the below contrived example:</p>
<pre><code>In [58]: train_images2 = []
In [59]: train_2 = range(1008)
In [60]: for i in range(len(train_2)):
...: im = np.random.random_sample((168,252,3))
...: train_images2.append(im)
...: train_images2 = np.asarray(train_images2)
...:
# indeed a 4D array as you expected
In [61]: train_images2.shape
Out[61]: (1008, 168, 252, 3)
</code></pre>
|
python|loops|numpy|multidimensional-array|numpy-ndarray
| 0
|
8,836
| 64,838,581
|
Why I'm getting zero accuracy in Keras binary classification model?
|
<p>I have a Keras Sequential model taking inputs from csv files. When I run the model, <strong>its accuracy remains zero</strong> even after 20 epochs.</p>
<p>I have gone through these two stackoverflow threads (<a href="https://stackoverflow.com/questions/41819457/zero-accuracy-training-a-neural-network-in-keras/42661667">zero-accuracy-training</a> and <a href="https://stackoverflow.com/questions/45632549/why-is-the-accuracy-for-my-keras-model-always-0-when-training">why-is-the-accuracy-for-my-keras-model-always-0</a>) but nothing solved my problem.</p>
<p>As my model is a binary classification, I don't think it should behave like a regression model in a way that makes the accuracy metric ineffective.
Here is the model:</p>
<pre><code>def preprocess(*fields):
return tf.stack(fields[:-1]), tf.stack(fields[-1:]) # x, y
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import feature_column
import pathlib
csvs = sorted(str(p) for p in pathlib.Path('.').glob("My_Dataset/*/*/*.csv"))
data_set=tf.data.experimental.CsvDataset(
csvs, record_defaults=defaults, compression_type=None, buffer_size=None,
header=True, field_delim=',', use_quote_delim=True, na_value=""
)
print(type(data_set))
#Output: <class 'tensorflow.python.data.experimental.ops.readers.CsvDatasetV2'>
data_set.take(1)
#Output: <TakeDataset shapes: ((), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), ()), types: (tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32)>
validate_ds = data_set.map(preprocess).take(10).batch(100).repeat()
train_ds = data_set.map(preprocess).skip(10).take(90).batch(100).repeat()
model = tf.keras.Sequential([
layers.Dense(256,activation='elu'),
layers.Dense(128,activation='elu'),
layers.Dense(64,activation='elu'),
layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy']) #have to find the related evaluation metrics
model.fit(train_ds,
validation_data=validate_ds,
validation_steps=5,
steps_per_epoch= 5,
epochs=20,
verbose=1
)
</code></pre>
<p>What I'm doing wrong?</p>
|
<p>The problem is here:</p>
<pre><code>model = tf.keras.Sequential([
layers.Dense(256,activation='elu'),
layers.Dense(128,activation='elu'),
layers.Dense(64,activation='elu'),
layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',
#Here is the problem
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy']) #Have to find the related evaluation metrics
</code></pre>
<p>You have two solutions:</p>
<ol>
<li><p>Either keep the <code>sigmoid</code> activation and set <code>from_logits=False</code></p>
</li>
<li><p>Or remove the activation (plain <code>layers.Dense(1)</code>) and keep <code>from_logits=True</code></p>
</li>
</ol>
<p>This is the cause of the problem: <code>from_logits=True</code> tells the loss that the model outputs raw logits, i.e. that no activation function has been applied to the last layer.</p>
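<p>A sketch of the first option, keeping the model from the question and only changing the loss:</p>
<pre><code>model = tf.keras.Sequential([
    layers.Dense(256, activation='elu'),
    layers.Dense(128, activation='elu'),
    layers.Dense(64, activation='elu'),
    layers.Dense(1, activation='sigmoid')   # outputs probabilities
])

model.compile(optimizer='adam',
              # the sigmoid already yields probabilities, so no logits here
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
              metrics=['accuracy'])
</code></pre>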
|
python|tensorflow|machine-learning|keras|deep-learning
| 1
|
8,837
| 64,975,363
|
Pandas when several columns meet a condition, assign value
|
<p>The use case is the following: if in a pandas DataFrame several columns are greater than zero, I want to create a new column with value <code>1</code>; if the same columns are negative, I wish to set <code>-1</code>; otherwise I wish to set <code>0</code>.</p>
<p>Now, I want to extend the previous. Let's say I want to check for <code>4</code> columns the conditions, but I still wish to assign the corresponding value if three of them hold. An example below.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
[
[1, 2, 3, 4, 5],
[-1, -2, -3, -4, -5],
[1, 2, -1, -2, -3],
[1, 2, 3, -1, -2]
]
, columns=list('ABCDE'))
def f(df):
dst = pd.Series(np.zeros(df.shape[0], dtype=int))
dst[(df < 0).all(1)] = -1
dst[(df > 0).all(1)] = 1
return dst
columns = ['A', 'B', 'C', 'D']
df['dst'] = f(df[columns])
</code></pre>
<p>The code above would return the following DataFrame:</p>
<pre><code> A B C D E dst
0 1 2 3 4 5 1
1 -1 -2 -3 -4 -5 -1
2 1 2 -1 -2 -3 0
3 1 2 3 -1 -2 0
</code></pre>
<p>What would be the expected behavior:</p>
<ol>
<li>For row <code>0</code>, <code>dst</code> should be <code>1</code> as <code>A</code> to <code>D</code> hold the positive condition.</li>
<li>For row <code>1</code>, <code>dst</code> should be <code>-1</code> as <code>A</code> to <code>D</code> hold the negative condition.</li>
<li>For row <code>2</code>, <code>dst</code> should be <code>0</code> as <code>A</code> to <code>D</code> do not meet any of the conditions.</li>
<li>For row <code>3</code>, <code>dst</code> should be <code>1</code> as <code>A</code> to <code>C</code> hold the positive condition, and only <code>D</code> does not hold.</li>
</ol>
|
<p>You could use the <code>loc()</code> method within the pandas <code>DataFrame</code> object, with one boolean condition per choice. Like this:</p>
<pre class="lang-py prettyprint-override"><code>input_df.loc[conditions[0], dst_col] = choices[0]
input_df.loc[conditions[1], dst_col] = choices[1]
</code></pre>
<p>This will basically filter <code>input_df</code> by each condition and assign the proper choice to the matching rows.</p>
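<p>A sketch of how that could look for the "at least 3 of the 4 columns" rule from the question, counting positive and negative entries per row (the column list and threshold are the ones given above):</p>
<pre><code>columns = ['A', 'B', 'C', 'D']

pos = (df[columns] > 0).sum(axis=1)   # how many of the 4 columns are positive
neg = (df[columns] < 0).sum(axis=1)   # how many are negative

df['dst'] = 0
df.loc[pos >= 3, 'dst'] = 1
df.loc[neg >= 3, 'dst'] = -1
</code></pre>
<p>With the sample frame from the question this yields <code>dst = [1, -1, 0, 1]</code>, matching the expected behaviour.</p>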
|
python|pandas|numpy|dataframe
| 0
|
8,838
| 64,678,636
|
Sort dataframe by multiple columns while ignoring case
|
<p>I want to sort a dataframe by multiple columns like this:</p>
<pre><code>df.sort_values( by=[ 'A', 'B', 'C', 'D', 'E' ], inplace=True )
</code></pre>
<p>However i found out that python first sorts the uppercase values and then the lowercase.</p>
<p>I tried this:</p>
<pre><code>df.sort_values( by=[ 'A', 'B', 'C', 'D', 'E' ], inplace=True, key=lambda x: x.str.lower() )
</code></pre>
<p>but i get this error:</p>
<pre><code>TypeError: sort_values() got an unexpected keyword argument 'key'
</code></pre>
<p>If i could, i would turn all columns to lowercase but i want them as they are.</p>
<p>Any hints?</p>
|
<p>If you check the docs - <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> - the <code>key</code> argument requires upgrading to <code>pandas 1.1.0</code> or higher:</p>
<blockquote>
<p><strong>key</strong> - callable, optional</p>
</blockquote>
<blockquote>
<p>Apply the key function to the values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect a Series and return a Series with the same shape as the input. It will be applied to each column in by independently.</p>
</blockquote>
<blockquote>
<p><strong>New in version 1.1.0.</strong></p>
</blockquote>
<p><strong>Sample</strong>:</p>
<pre><code>df = pd.DataFrame({
'A':list('MmMJJj'),
'B':list('aYAbCc')
})
df.sort_values(by=[ 'A', 'B'], inplace=True, key=lambda x: x.str.lower())
print (df)
A B
3 J b
4 J C
5 j c
0 M a
2 M A
1 m Y
</code></pre>
|
python|pandas|dataframe|sorting|case-insensitive
| 3
|
8,839
| 64,902,795
|
Creating a pandas column based on conditions from another column
|
<p>I have this 'Club' column in my pandas df that contains the names of English Premier League clubs but the naming of the club is not suited for what I want to achieve. I tried writing a function with conditional statements to populate another column with the names of the club in the format I want. I have tried applying my function to my df but I get this error:</p>
<pre><code> ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', 'occurred at index 0')
</code></pre>
<p>This is what df called test looks like:
<a href="https://i.stack.imgur.com/mvNJ6.jpg" rel="nofollow noreferrer">test df</a></p>
<p>This is my function called clubs_name:</p>
<pre><code>#We want to rename the clubs to be exact names like in Squad column in the epl_table_df dataframe
def clubs_name(Club):
if Club == 'Leicester City LEI':
return 'Leicester City'
elif Club == 'Tottenham Hotspur TOT':
return 'Tottenham'
elif Club == 'Liverpool LIV':
return 'Liverpool'
elif Club == 'Southampton SOU':
return 'Southampton'
elif Club == 'Chelsea CHE':
return 'Chelsea'
elif Club == 'Aston Villa AVL':
return 'Aston Villa'
elif Club == 'Everton EVE':
return 'Everton'
elif Club == 'Crystal Palace CRY':
return 'Crystal Palace'
elif Club == 'Wolverhampton Wanderers WOL':
return 'Wolves'
elif Club == 'Manchester City MCI':
return 'Manchester City'
elif Club == 'Arsenal ARS':
return 'Arsenal'
elif Club == 'West Ham United WHU':
return 'West Ham'
elif Club == 'Newcastle United NEW ':
return 'Newcastle Utd'
elif Club == 'Manchester United MUN':
return 'Manchester Utd'
elif Club == 'Leeds United LEE':
return 'Leeds United'
elif Club == 'Brighton and Hove Albion BHA':
return 'Brighton'
elif Club == 'Fulham FUL':
return 'Fulham'
elif Club == 'West Bromwich Albion WBA':
return 'West Brom'
elif Club == 'Burnley BUR':
return 'Burnley'
elif Club == 'Sheffield United SHU':
return 'Sheffield Utd'
else:
        return Club
</code></pre>
<p>When I test my function, it seems to be working:</p>
<pre><code>print(clubs_name('Fulham FUL'))
</code></pre>
<p>This was how I tried to apply the function to the test df:</p>
<pre><code>test.apply (lambda Club: clubs_name(Club), axis=1)
</code></pre>
<p>I am new to python and data science/analysis. I will appreciate a solution, an explanation of the error and what I was doing wrong.</p>
|
<p>I think this can be achieved more easily through pandas' replace().</p>
<p>Simply create a dictionary of your old values to new values:</p>
<p>eg:</p>
<pre><code>dict_replace = {
'Tottenham Hotspur TOT':'Tottenham',
'Liverpool LIV':'Liverpool',
'Southampton SOU':'Southampton',
'Chelsea CHE':'Chelsea'
} #etc
</code></pre>
<p>And then use the dictionary to update the column in your dataframe:</p>
<p>assuming the column name in df you want to change is <code>club</code></p>
<pre><code>df['club'].replace(dict_replace, inplace=True)
</code></pre>
<p>or if you want a separate column, instead of overwriting:</p>
<pre><code>df['club_name_new'] = df['club'].replace(dict_replace)
</code></pre>
<p>FULL TEST EXAMPLE:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'club': ['Tottenham Hotspur TOT',
'Liverpool LIV',
'Southampton SOU',
'Chelsea CHE',
'Some other club'],
'column': ['b', 'a', 'c', 'd', 'e'],'column2': [1, 2, 3, 4, 5]})
print('INITIAL DATAFRAME:')
print(df)
print('*'*10)
dict_replace = {
'Tottenham Hotspur TOT':'Tottenham',
'Liverpool LIV':'Liverpool',
'Southampton SOU':'Southampton',
'Chelsea CHE':'Chelsea'
}
df['club_name_new'] = df['club'].replace(dict_replace)
print('DATAFRAME WITH NEW COLUMN NAMES:')
print(df)
</code></pre>
<p>returns processed df as:</p>
<pre><code> club column column2 club_name_new
0 Tottenham Hotspur TOT b 1 Tottenham
1 Liverpool LIV a 2 Liverpool
2 Southampton SOU c 3 Southampton
3 Chelsea CHE d 4 Chelsea
4 Some other club e 5 Some other club
</code></pre>
<p>Comment follow-up:</p>
<p>possible way to apply changes with rules:</p>
<pre><code>## replace 'United' with 'Utd':
df['club'].str.replace('United', 'Utd')
## remove last 4 characters:
df['club'].str[:-4]
</code></pre>
<p>and then make a dictionary for remaining exceptions that don't follow the patterns, and apply that...</p>
<p>i.e. for specific conversion from some unique value to another, you would have to make a dictionary (how else would the program know what to change to?). But if the changes can be reduced to some pattern, you can use .str.replace()</p>
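<p>As a side note on the original error: <code>test.apply(..., axis=1)</code> passes each whole row (a Series) to <code>clubs_name</code>, and <code>if row == 'Leicester City LEI'</code> on a Series is what raises the ambiguous-truth-value error. Applying the function element-wise to the single column would also work; a sketch, assuming the column is named <code>Club</code>:</p>
<pre><code># apply element-wise on the 'Club' column rather than row-wise on the frame
test['club_name_new'] = test['Club'].apply(clubs_name)
</code></pre>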
|
python|pandas|dataframe|data-science|data-analysis
| 1
|
8,840
| 64,683,408
|
Find distance between all pairs of pixels in an image
|
<h3>Question</h3>
<p>I have a <code>numpy.array</code> of shape <code>(H, W)</code>, storing pixel intensities of an image. I want to generate a new array of shape <code>(H, W, H, W)</code>, which stores the Euclidean distance between each pair of pixels in the image (the "spatial" distance between the pixels; not the difference in their intensities).</p>
<h3>Solution attempt</h3>
<p>The following method does exactly what I want, but very slowly. I'm looking for a fast way to do this.</p>
<pre><code>d = numpy.zeros((H, W, H, W)) # array to store distances.
for x1 in range(H):
for y1 in range(W):
for x2 in range(H):
for y2 in range(W):
d[x1, y1, x2, y2] = numpy.sqrt( (x2-x1)**2 + (y2-y1)**2 )
</code></pre>
<h3>Extra details</h3>
<p>Here are more details for my problem. A solution to the simpler problem above would probably be enough for me to figure out the rest.</p>
<ul>
<li>In my case, the image is actually a 3D medical image (i.e. a <code>numpy.array</code> of shape <code>(H, W, D)</code>).</li>
<li>The 3D pixels might not be cubic (e.g. each pixel might represent a volume of 1mm x 2mm x 3mm).</li>
</ul>
|
<p>We can setup open grids with <code>1D</code> ranged arrays using <code>np.ogrid</code>, which could be operated upon in the same iterator notation for a vectorized solution and this will leverage <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a> for the perf. boost :</p>
<pre><code>X1,Y1,X2,Y2 = np.ogrid[:H,:W,:H,:W]
d_out = numpy.sqrt( (X2-X1)**2 + (Y2-Y1)**2 )
</code></pre>
<p>To save on two open grids :</p>
<pre><code>X,Y = np.ogrid[:H,:W]
d_out = numpy.sqrt( (X[:,:,None,None]-X)**2 + (Y[:,:,None,None]-Y)**2 )
</code></pre>
<p>If we are working with large arrays, consider using <code>numexpr</code> for further boost :</p>
<pre><code>import numexpr as ne
d_out = ne.evaluate('sqrt( (X2-X1)**2 + (Y2-Y1)**2 )')
</code></pre>
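<p>The same broadcasting idea extends to the 3D case with non-cubic voxels; a sketch, with the voxel sizes <code>sx, sy, sz</code> (in mm) as assumed inputs. Note that the result has shape <code>(H, W, D, H, W, D)</code>, which grows very quickly with the image size:</p>
<pre><code>X, Y, Z = np.ogrid[:H, :W, :D]
sx, sy, sz = 1.0, 2.0, 3.0   # physical voxel dimensions in mm (example values)

d_out = np.sqrt(((X[:, :, :, None, None, None] - X) * sx)**2 +
                ((Y[:, :, :, None, None, None] - Y) * sy)**2 +
                ((Z[:, :, :, None, None, None] - Z) * sz)**2)
</code></pre>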
|
python|performance|numpy|linear-algebra
| 2
|
8,841
| 39,552,773
|
Apply Numpy function over entire Dataframe
|
<p>I am applying this function over a dataframe <code>df1</code> such as the following:</p>
<pre><code> AA AB AC AD
2005-01-02 23:55:00 "EQUITY" "EQUITY" "EQUITY" "EQUITY"
2005-01-03 00:00:00 32.32 19.5299 32.32 31.0455
2005-01-04 00:00:00 31.9075 19.4487 31.9075 30.3755
2005-01-05 00:00:00 31.6151 19.5799 31.6151 29.971
2005-01-06 00:00:00 31.1426 19.7174 31.1426 29.9647
def func(x):
for index, price in x.iteritems():
x[index] = price / np.sum(x,axis=1)
return x[index]
df3=func(df1.ix[1:])
</code></pre>
<p>However, I only get a single column returned as opposed to 3</p>
<pre><code> 2005-01-03 0.955843
2005-01-04 0.955233
2005-01-05 0.955098
2005-01-06 0.955773
2005-01-07 0.955877
2005-01-10 0.95606
2005-01-11 0.95578
2005-01-12 0.955621
</code></pre>
<p>I am guessing I am missing something in the formula to make it apply to the entire dataframe. Also how could I return the first index that has strings in its row?</p>
|
<p>You need to do it the following way :</p>
<pre><code>def func(row):
return row/np.sum(row)
df2 = pd.concat([df[:1], df[1:].apply(func, axis=1)], axis=0)
</code></pre>
<p>It has 2 steps :</p>
<ol>
<li><code>df[:1]</code> extracts the first row, which contains strings, while <code>df[1:]</code> represents the rest of the DataFrame. You concatenate them later on, which answers the second part of your question.</li>
<li>For operating over rows you should use <code>apply()</code> method.</li>
</ol>
|
python|pandas|numpy|dataframe
| 2
|
8,842
| 39,713,678
|
Assigning indicators based on observation quantile
|
<p>I am working with a pandas DataFrame. I would like to assign a column indicator variable to 1 when a particular condition is met. I compute quantiles for particular groups. If the value is outside the quantile, I want to assign the column indicator variable to 1. For example, the following code prints the quantiles for each group:</p>
<pre><code>df[df['LENGTH'] > 1].groupby(['CLIMATE', 'TEMP'])['LENGTH'].quantile(.95)
</code></pre>
<p>Now for all observations in my dataframe which are larger than the grouped value I would like to set </p>
<pre><code>df['INDICATOR'] = 1
</code></pre>
<p>I tried using the following if statement:</p>
<pre><code>if df.groupby(['CLIMATE','BIN'])['LENGTH'] > df[df['LENGTH'] > 1].groupby(['CLIMATE','BIN'])['LENGTH'].quantile(.95):
df['INDICATOR'] = 1
</code></pre>
<p>This gives me the error: "ValueError: operands could not be broadcast together with shapes (269,) (269,2)". Any help would be appreciated! </p>
|
<p>you want to use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow noreferrer"><code>transform</code></a> after your <code>groupby</code> to get an equivalently sized array. <code>gt</code> is greater than. <code>mul</code> is multiply. I multiply by <code>1</code> to get the boolean results from <code>gt</code> to <code>0</code> or <code>1</code>.</p>
<p>Consider the dataframe <code>df</code></p>
<pre><code>df = pd.DataFrame(dict(labels=np.random.choice(list('abcde'), 100),
A=np.random.randn(100)))
</code></pre>
<p>I'd get the indicator like this</p>
<pre><code>df.A.gt(df.groupby('labels').A.transform(pd.Series.quantile, q=.95)).mul(1)
</code></pre>
<hr>
<p>In your case, I'd do</p>
<pre><code>df['INDICATOR'] = df['LENGTH'].gt(df.groupby(['CLIMATE','BIN'])['LENGTH'] \
.transform(pd.Series.quantile, q=.95)).mul(1)
</code></pre>
|
python|pandas|numpy|dataframe
| 2
|
8,843
| 39,560,099
|
Cannot combine bar and line plot using pandas plot() function
|
<p>I am plotting one column of a pandas dataframe as line plot, using plot() :</p>
<pre><code>df.iloc[:,1].plot()
</code></pre>
<p>and get the desired result:</p>
<p><a href="https://i.stack.imgur.com/6FFdq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6FFdq.png" alt="enter image description here"></a></p>
<p>Now I want to plot another column of the same dataframe as bar chart using</p>
<pre><code>ax=df.iloc[:,3].plot(kind='bar',width=1)
</code></pre>
<p>with the result:</p>
<p><a href="https://i.stack.imgur.com/92UxV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/92UxV.png" alt="enter image description here"></a></p>
<p>And finally I want to combine both by</p>
<pre><code>spy_price_data.iloc[:,1].plot(ax=ax)
</code></pre>
<p>which doesn't produce any plot.</p>
<p>Why are the x-ticks of the bar plot so different to the x-ticks of the line plot? How can I combine both plots in one plot?</p>
|
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
</code></pre>
<p>some data</p>
<pre><code>df = pd.DataFrame(np.random.randn(5,2))
print (df)
0 1
0 0.008177 -0.121644
1 0.643535 -0.070786
2 -0.104024 0.872997
3 -0.033835 0.067264
4 -0.576762 0.571293
</code></pre>
<p>Then we create an axes object (<code>ax</code>) and pass it to both plots so they are drawn on the same axes. The very different x-ticks in your attempt come from the fact that a pandas bar plot places its bars at integer category positions (0, 1, 2, ...), while a line plot over a datetime index uses the actual dates, so each call builds its own scale.</p>
<pre><code>_, ax = plt.subplots()
df[0].plot(ax=ax)
df[1].plot(kind='bar', ax=ax)
</code></pre>
<p><a href="https://i.stack.imgur.com/GKVXm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GKVXm.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|plot
| 8
|
8,844
| 43,925,624
|
fastest method to dump numpy array into string
|
<p>I need to organized a data file with chunks of named data. Data is NUMPY arrays. But I don't want to use numpy.save or numpy.savez function, because in some cases, data have to be sent on a server over a pipe or other interface. So I want to dump numpy array into memory, zip it, and then, send it into a server.</p>
<p>I've tried simple pickle, like this:</p>
<pre><code>try:
import cPickle as pkl
except:
import pickle as pkl
import zlib
import numpy as np
def send_to_db(data, compress=5):
send( zlib.compress(pkl.dumps(data),compress) )
</code></pre>
<p>.. but this is extremely slow process.</p>
<p>Even with compress level 0 (without compression), the process is very slow and just because of pickling.</p>
<p>Is there any way to dump a numpy array into a string without pickle? I know that numpy allows getting a buffer via <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.getbuffer.html#numpy.getbuffer" rel="noreferrer">numpy.getbuffer</a>, but it isn't obvious to me how to use this dumped buffer to obtain an array back.</p>
|
<p>You should definitely use <code>numpy.save</code>, you can still do it in-memory:</p>
<pre><code>>>> import io
>>> import numpy as np
>>> import zlib
>>> f = io.BytesIO()
>>> arr = np.random.rand(100, 100)
>>> np.save(f, arr)
>>> compressed = zlib.compress(f.getbuffer())
</code></pre>
<p>And to decompress, reverse the process:</p>
<pre><code>>>> np.load(io.BytesIO(zlib.decompress(compressed)))
array([[ 0.80881898, 0.50553303, 0.03859795, ..., 0.05850996,
0.9174782 , 0.48671767],
[ 0.79715979, 0.81465744, 0.93529834, ..., 0.53577085,
0.59098735, 0.22716425],
[ 0.49570713, 0.09599001, 0.74023709, ..., 0.85172897,
0.05066641, 0.10364143],
...,
[ 0.89720137, 0.60616688, 0.62966729, ..., 0.6206728 ,
0.96160519, 0.69746633],
[ 0.59276237, 0.71586014, 0.35959289, ..., 0.46977027,
0.46586237, 0.10949621],
[ 0.8075795 , 0.70107856, 0.81389246, ..., 0.92068768,
0.38013495, 0.21489793]])
>>>
</code></pre>
<p>Which, as you can see, matches what we saved earlier:</p>
<pre><code>>>> arr
array([[ 0.80881898, 0.50553303, 0.03859795, ..., 0.05850996,
0.9174782 , 0.48671767],
[ 0.79715979, 0.81465744, 0.93529834, ..., 0.53577085,
0.59098735, 0.22716425],
[ 0.49570713, 0.09599001, 0.74023709, ..., 0.85172897,
0.05066641, 0.10364143],
...,
[ 0.89720137, 0.60616688, 0.62966729, ..., 0.6206728 ,
0.96160519, 0.69746633],
[ 0.59276237, 0.71586014, 0.35959289, ..., 0.46977027,
0.46586237, 0.10949621],
[ 0.8075795 , 0.70107856, 0.81389246, ..., 0.92068768,
0.38013495, 0.21489793]])
>>>
</code></pre>
|
python|arrays|numpy|pickle
| 12
|
8,845
| 69,438,417
|
torch.cuda.is_available() is False only in Jupyter Lab/Notebook
|
<p>I tried to install CUDA on my computer. After doing so I checked in my Anaconda Prompt and it appeared to work fine.</p>
<p><a href="https://i.stack.imgur.com/NPoaL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NPoaL.png" alt="enter image description here" /></a></p>
<p>However, when I started Jupyter Lab from the same environment, <code>torch.cuda.is_available()</code> returns False. I managed to find and follow <a href="https://discuss.pytorch.org/t/torch-cuda-is-available-returns-false-only-for-jupyter-notebook/78281" rel="nofollow noreferrer">this solution</a>, but the problem still persisted for me.</p>
<p><a href="https://i.stack.imgur.com/xPcHf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xPcHf.png" alt="enter image description here" /></a></p>
<p>Does anybody have any idea why? Thank you so much!</p>
|
<p>Seems to be a problem with similar causes:</p>
<p><a href="https://stackoverflow.com/questions/62939415/not-able-to-import-tensorflow-in-jupyter-notebook/62939942#62939942">Not able to import Tensorflow in Jupyter Notebook</a></p>
<p>You are probably using a different environment than the one you are using outside Jupyter.</p>
<p>Try opening Anaconda Navigator, navigate to Environments and activate your env, navigate to Home and install Jupyter Notebook, then launch Jupyter Notebook from the Navigator. This should solve the issue.</p>
<p>In Linux, you can check using <code>!which python</code> inside Jupyter.</p>
<p>In windows you can use:</p>
<pre><code>import os
import sys
os.path.dirname(sys.executable)
</code></pre>
<p>to find where the Python you are using is located.
See if the path matches.</p>
|
python|jupyter-notebook|pytorch
| 1
|
8,846
| 69,574,128
|
Changing the Structure of a Dataframe in Python
|
<p>I need help to change the structure to a pandas dataframe with many columns like the example:</p>
<p>original dataframe:</p>
<pre><code>| xx | yy | zz | a | b | c | k |
|:---|:---|:---|:--|:--|:--|:--|
| x1 | y1 | z1 | 0 | 2 | 1 | 3 |
| x2 | y2 | z2 | 1 | 0 | 2 | 0 |
</code></pre>
<p>I need just the first 3 columns and change the rest</p>
<p>new dataframe:</p>
<pre><code>| xx | yy | zz | valor | nueva columna|
|:---|:---|:---|:--|:--|
| x1 | y1 | z1 | 0 | a |
| x1 | y1 | z1 | 2 | b |
| x1 | y1 | z1 | 1 | c |
| x1 | y1 | z1 | 3 | k |
| x2 | y2 | z2 | 1 | a |
| x2 | y2 | z2 | 0 | b |
| x2 | y2 | z2 | 2 | c |
| x2 | y2 | z2 | 0 | k |
</code></pre>
<p>I get a solution with a for loop, but in colab when the columns and rows are many the time is excessive</p>
|
<pre><code>df1 = df.set_index(['xx','yy','zz']).stack()
df1.reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/Mxok6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mxok6.png" alt="enter image description here" /></a></p>
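<p>To match the column names in the desired output exactly, the stacked result can be renamed and reordered afterwards; a sketch using the names <code>valor</code> and <code>nueva columna</code> from the question:</p>
<pre><code>df1 = df.set_index(['xx', 'yy', 'zz']).stack().reset_index()
df1.columns = ['xx', 'yy', 'zz', 'nueva columna', 'valor']
df1 = df1[['xx', 'yy', 'zz', 'valor', 'nueva columna']]
</code></pre>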
|
python-3.x|pandas|dataframe
| 1
|
8,847
| 69,434,334
|
arranging data by date (month/day format)
|
<p>After I append 4 different dataframes in:</p>
<pre><code>list_1 = [ ]
</code></pre>
<p>I have the following data stored in list_1:</p>
<pre><code>| date | 16/17 |
| -------- | ------|
| 2016-12-29 | 50 |
| 2016-12-30 | 52 |
| 2017-01-01 | 53 |
| 2017-01-02 | 51 |
[4 rows x 1 columns],
16/17
| date | 17/18 |
| -------- | ------|
| 2017-12-29 | 60 |
| 2017-12-31 | 62 |
| 2018-01-01 | 64 |
| 2018-01-03 | 65 |
[4 rows x 1 columns],
17/18
| date | 18/19 |
| -------- | ------|
| 2018-12-30 | 54 |
| 2018-12-31 | 53 |
| 2019-01-02 | 52 |
| 2019-01-03 | 51 |
[4 rows x 1 columns],
18/19
| date | 19/20 |
| -------- | ------|
| 2019-12-29 | 62 |
| 2019-12-30 | 63 |
| 2020-01-01 | 62 |
| 2020-01-02 | 60 |
[4 rows x 1 columns],
19/20
</code></pre>
<p>For changing the date format to month/day I use the following code:</p>
<pre><code>pd.to_datetime(df['date']).dt.strftime('%m/%d')
</code></pre>
<p>But the problem is when I want to arrange the data by months/days like that:</p>
<pre><code>| date | 16/17 | 17/18 | 18/19 | 19/20 |
| -------- | ------| ------| ------| ------|
| 12/29 | 50 | 60 | NaN | 62 |
| 12/30 | 52 | NaN | 54 | 63 |
| 12/31 | NaN | 62 | 53 | NaN |
| 01/01 | 53 | 64 | NaN | 62 |
| 01/02 | 51 | NaN | 52 | 60 |
| 01/03 | NaN | 65 | 51 | NaN |
</code></pre>
<p>I've tried the following:</p>
<pre><code>df = pd.concat(list_1,axis=1)
</code></pre>
<p>also:</p>
<pre><code>df = pd.concat(list_1)
df.reset_index(inplace=True)
df = df.groupby(['date']).first()
</code></pre>
<p>also:</p>
<pre><code>df = pd.concat(list_1)
df.reset_index(inplace=True)
df = df.groupby(['date'], sort=False).first()
</code></pre>
<p>but still cannot achieve the desired result.</p>
|
<p>You can use <code>sort=False</code> in <code>groupby</code> and create a new column holding each date's offset from the first value of its <code>DatetimeIndex</code>, then use that column for sorting:</p>
<pre><code>def f(x):
x.index = pd.to_datetime(x.index)
return x.assign(new = x.index - x.index.min())
L = [x.pipe(f) for x in list_1]
df = pd.concat(L, axis=0).sort_values('new', kind='mergesort')
df = df.groupby(df.index.strftime('%m/%d'), sort=False).first().drop('new', axis=1)
print (df)
16/17 17/18 18/19 19/20
date
12/29 50.0 60.0 NaN 62.0
12/30 52.0 NaN 54.0 63.0
12/31 NaN 62.0 53.0 NaN
01/01 53.0 64.0 NaN 62.0
01/02 51.0 NaN 52.0 60.0
01/03 NaN 65.0 51.0 NaN
</code></pre>
|
python|pandas|sorting|time-series|pandas-groupby
| 0
|
8,848
| 69,454,833
|
Pandas: to_datetime() question
|
<p>I have Date/Time in the following format:</p>
<p>10/01/21 04:49:43.75<br />
MM/DD/YY HH/MM/SS.ms</p>
<p>I am trying to convert this from being an object to a datetime. I tried the following code but I am getting an error that it does not match the format. Any ideas?</p>
<pre><code>df['Date/Time'] = pd.to_datetime(df['Date/Time'], format = '%m%d%y %H%M%S%f')
</code></pre>
<p><a href="https://i.stack.imgur.com/atA81.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/atA81.png" alt="enter image description here" /></a></p>
|
<p>you can try letting pandas infer the datetime format with:</p>
<pre><code>pd.to_datetime(df['Date/Time'], infer_datetime_format=True)
</code></pre>
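<p>If every timestamp really looks like <code>10/01/21 04:49:43.75</code>, spelling out the format explicitly (with the separators that were missing from the original format string) should also work; a sketch:</p>
<pre><code>df['Date/Time'] = pd.to_datetime(df['Date/Time'], format='%m/%d/%y %H:%M:%S.%f')
</code></pre>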
|
python|pandas|dataframe
| 0
|
8,849
| 69,431,747
|
Replace an item in a column with a similar item from that column that has repeated the most
|
<p>I have a list of job titles; the number of unique job titles is around 24,000. Most of the job titles are very similar.</p>
<pre><code>For example:
Software Developer, Software engineer, software engineering, software engineer 2, senior software engineer, junior software engineer,....
</code></pre>
<p>I want to find the most similar titles and replace all the similar titles with the most repeated title to reduce the uniqueness in the column.</p>
<p>For example, in the screenshots, all the variations of aadhar card supervisor will be replaced with aadhar supervisor since it has been repeated most often. All variants of software engineering jobs will be replaced with the software engineer title since it has been repeated very often, and so on...</p>
<p>Please suggest solutions and approaches to achieve this desired result.</p>
<p>Sample Job titles for your reference is here in this repository:
<a href="https://github.com/skwolvie/jobprofile_sample" rel="nofollow noreferrer">https://github.com/skwolvie/jobprofile_sample</a></p>
<p>I have also tabulated the similarity scores of each title with every other title in the sample job title dataset. Each title is associated with a value_counts score.</p>
<p>Screenshots:</p>
<p><a href="https://i.stack.imgur.com/XWHTR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XWHTR.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/lmMAq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lmMAq.png" alt="enter image description here" /></a></p>
|
<p>Here's another try:</p>
<p>There are 5 steps in this solution:</p>
<ol>
<li><p>Use <code>pd.Series.value_counts().reset_index()</code> to get only unique titles in descending order of frequency.</p>
</li>
<li><p>Calculate the distances between these unique <code>titles</code> using the Levenshtein distance measure</p>
</li>
<li><p>Find the indices of the words most close to each word using a <code>threshold</code> in the Levenshtein distances</p>
</li>
<li><p>Consolidate the nodes of duplicates to avoid repetition (i.e., if ids 1, 2, and 5 are duplicates, we want only one entry for them and not [1, 2, 5], [2, 1, 5], and [5, 1, 2]).</p>
</li>
<li><p>Finally, we consolidate the information into the <code>df.title.value_counts()</code> series and in a dictionary to replace in the original DataFrame.</p>
</li>
</ol>
<p>Code based on the <code>csv</code> file you shared earlier:</p>
<pre class="lang-py prettyprint-override"><code># Load required libraries
import pandas as pd
import numpy as np
import Levenshtein
from collections import defaultdict
</code></pre>
<p>STEP1: Load data (it is already in the value_counts() desired format)</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv("https://raw.githubusercontent.com/skwolvie/jobprofile_sample/main/sample_jobprofiles.csv",
index_col=False)
df.columns = ['title', "frequency"]
</code></pre>
<p>STEP2: Calculate the distances</p>
<pre class="lang-py prettyprint-override"><code>def levenshtein_matrix(titles):
"""
Fill a matrix with the Levenshtein ratio between each word in a list
of words with each other.
Since Levenshtein.ratio(w1, w2) == Levenshtein.ratio(w2, w1), we can
    sequentially decrease the length of the inner loop in order to calculate
the Levenshtein ratio distance only once
"""
size = len(titles)
final = np.zeros((size, size))
for i, w1 in enumerate(titles):
for j, w2 in enumerate(titles[i:], i):
lev = Levenshtein.ratio(w1, w2)
final[i, j] = lev
final[j, i] = lev
return final
titles = df.title
lev_matrix = levenshtein_matrix(titles) # 30 seconds to run in my machine with 7k+ items
</code></pre>
<p>STEP3: Loop through each row of the <code>lev_matrix</code> to find the ids of similar entries</p>
<pre class="lang-py prettyprint-override"><code># Create function
def get_similar_nodes(distance_matrix, threshold=.9):
"""
Takes a matrix of distances and returns a generator with the entries
that have a distance measure higher than threshold for each row
in the matrix.
"""
    for row in distance_matrix:
        yield np.where(row > threshold)[0].tolist()
similar_nodes = get_similar_nodes(lev_matrix)
</code></pre>
<p>STEP4: Consolidate all the lists that share at least one item in a single list</p>
<pre class="lang-py prettyprint-override"><code>def connected_components(lists):
"""
This function yields a generator with all connected lists inside the given
list of lists.
"""
neighbors = defaultdict(set)
seen = set()
for each in lists:
for item in each:
neighbors[item].update(each)
def component(node, neighbors=neighbors, seen=seen, see=seen.add):
nodes = set([node])
next_node = nodes.pop
while nodes:
node = next_node()
see(node)
nodes |= neighbors[node] - seen
yield node
for node in neighbors:
if node not in seen:
yield sorted(component(node))
connected_nodes = list(connected_components(similar_nodes))
</code></pre>
<p>For updating the values, you create a dictionary mapping all names to the most common name in their group and pass it to the DataFrame.</p>
<p>Note that using <code>nodes[0]</code> as the most common title in the node works because the DataFrame is ordered by descending frequency since we created it using <code>.value_counts()</code>.</p>
<pre class="lang-py prettyprint-override"><code># Copy the DataFrame for comparison
df_test = df.copy()
dict_most_popular_names = {}
for nodes in connected_nodes:
dict_most_popular_names |= {key: titles[nodes[0]] for key in titles[nodes]}
# Check the dictionary
titles[connected_nodes[0]][:3]
# >>> 0 'software engineer'
# >>> 20 'software qa engineer'
# >>> 23 'software engineer ii'
# >>> Name: title, dtype: object
dict_most_popular_names["software engineer qa"]
# >>> 'software engineer'
dict_most_popular_names["software engineer"]
# >>> 'software engineer'
dict_most_popular_names["software engineer ii"]
# >>> 'software engineer'
# Update the dataframe
df_test["clean_title"] = [dict_most_popular_names[x] for x in titles]
</code></pre>
<p>You can use the <code>dict_most_popular_names</code> to replace in your original dataframe as well.</p>
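<p>A minimal sketch of that replacement (assuming your original, non-aggregated DataFrame is called <code>df_original</code> and the raw titles live in a column named <code>title</code>; <code>fillna</code> keeps any title that never appeared in a duplicate group):</p>
<pre class="lang-py prettyprint-override"><code>df_original['clean_title'] = (
    df_original['title']
    .map(dict_most_popular_names)
    .fillna(df_original['title'])
)
</code></pre>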
<p>For me, running this whole script takes 30 seconds, which is pretty much the time spent calculating the Levenshtein distances. If you need to optimize further, that is where you should look.</p>
|
python|pandas|string|nlp|cosine-similarity
| 1
|
8,850
| 69,412,036
|
Python OpenCV Duplicate a transparent shape in the same image
|
<p>I have an image of a circle, refer to the image attached below. I already retrieved the transparent circle and want to paste that circle back to the image to make some overlapped circles.</p>
<p>Below is my code, but it led to problem A; it's like a (transparent) hole in the image. I need to have the circles on a normal white background.</p>
<pre><code>height, width, channels = circle.shape
original_image[60:60+height, 40:40+width] = circle
</code></pre>
<p>I used cv2.addWeighted but got a blending issue; I need clear circles.</p>
<pre><code>circle = cv2.addWeighted(original_image[60:60+height, 40:40+width],0.5,circle,0.5,0)
original_image[60:60+rows, 40:40+cols] = circle
</code></pre>
<p><a href="https://i.stack.imgur.com/jdMOQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jdMOQ.png" alt="Need to implement the flow to B" /></a></p>
|
<p>If you already have a transparent black circle, then in Python/OpenCV here is one way to do that.</p>
<pre><code> - Read the transparent image unchanged
- Extract the bgr channels and the alpha channel
- Create a colored image of the background color and size desired
- Create similar sized white and black images
- Initialize a copy of the background color image for the output
 - Define a list of offset coordinates in the larger image
- Loop for over the list of offsets and do the following
- Insert the bgr image into a copy of the white image as the base image
- Insert the alpha channel into a copy of the black image for a mask
- composite the initialized output and base images using the mask image
- When finished with the loop, save the result
</code></pre>
<br>
<p>Input (transparent):</p>
<p><a href="https://i.stack.imgur.com/FMeXI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FMeXI.png" alt="enter image description here" /></a></p>
<pre><code>import cv2
import numpy as np
# load image with transparency
img = cv2.imread('black_circle_transp.png', cv2.IMREAD_UNCHANGED)
height, width = img.shape[:2]
print(img.shape)
# extract the bgr channels and the alpha channel
bgr = img[:,:,0:3]
aa = img[:,:,3]
aa = cv2.merge([aa,aa,aa])
# create whatever color background you want, in this case white
background=np.full((500,500,3), (255,255,255), dtype=np.float64)
# create white image of the size you want
white=np.full((500,500,3), (255,255,255), dtype=np.float64)
# create black image of the size you want
black=np.zeros((500,500,3), dtype=np.float64)
# initialize output
result = background.copy()
# define top left corner x,y locations for circle offsets
xy_offsets = [(100,100), (150,150), (200,200)]
# insert bgr and alpha into white and black images respectively of desired output size and composite
for offset in xy_offsets:
xoff = offset[0]
yoff = offset[1]
base = white.copy()
base[yoff:height+yoff, xoff:width+xoff] = bgr
mask = black.copy()
mask[yoff:height+yoff, xoff:width+xoff] = aa
result = (result * (255-mask) + base * mask)/255
result = result.clip(0,255).astype(np.uint8)
# save resulting masked image
cv2.imwrite('black_circle_composite.png', result)
# display result, though it won't show transparency
cv2.imshow("image", img)
cv2.imshow("aa", aa)
cv2.imshow("bgr", bgr)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/NYG1R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NYG1R.png" alt="enter image description here" /></a></p>
|
numpy|opencv|transparent|alpha
| 1
|
8,851
| 41,227,373
|
python pandas get ride of plural "s" in words to prepare for word count
|
<p>I have the following python pandas dataframe: </p>
<pre><code>Question_ID | Customer_ID | Answer
1 234 The team worked very hard ...
2 234 All the teams have been working together ...
</code></pre>
<p>I am going to use my code to count words in the answer column. But beforehand, I want to take out the "s" from the word "teams", so that in the example above I count team: 2 instead of team:1 and teams:1. </p>
<p>How can I do this for all words? </p>
|
<p>You need to use a tokenizer (for breaking a sentence into words) and a lemmatizer (for standardizing word forms), both provided by the natural language toolkit <code>nltk</code>:</p>
<pre><code>import nltk

wnl = nltk.WordNetLemmatizer()
sentence = 'All the teams have been working together'
[wnl.lemmatize(word) for word in nltk.wordpunct_tokenize(sentence)]
# ['All', 'the', 'team', 'have', 'been', 'working', 'together']
</code></pre>
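<p>To then count the lemmatized words across the whole <code>Answer</code> column, a sketch (assuming your DataFrame is called <code>df</code> and the NLTK wordnet data has been downloaded) could look like this:</p>
<pre><code>from collections import Counter

counts = Counter(
    wnl.lemmatize(word.lower())
    for answer in df['Answer']
    for word in nltk.wordpunct_tokenize(answer)
    if word.isalpha()
)
print(counts['team'])  # counts "team" and "teams" together
</code></pre>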
|
python|pandas|word-count
| 7
|
8,852
| 53,818,775
|
pandas, comparison within lambda
|
<p>I have a function which returns the names of the pandas dataframe columns which have a number of unique values <= 100:</p>
<pre><code>cols_unique = list(df[cols].loc[:, df[cols].apply(lambda x: x.nunique()) <= 100])
</code></pre>
<p>I would like to change this to return the column names in which the number of unique values are <= 50% of the total number of values, my attempt:</p>
<pre><code>cols_unique = list(df[cols].loc[:, df[cols].apply(lambda x: x.nunique() <= x.count()/2]))
</code></pre>
<p>But this doesn't work.</p>
<p>How does one do a comparison within a lambda function?</p>
|
<p>IIUC you might try:</p>
<pre><code>cols_unique = list(df[cols].loc[:, df[cols].apply(lambda x: x.nunique() <= len(df) / 2)])
</code></pre>
<p>If you're open to an alternative that doesn't use a <code>lambda</code> function, you could try:</p>
<pre><code> list(cols[df[cols].nunique().le(len(df) // 2)])
</code></pre>
|
python|pandas|lambda
| 2
|
8,853
| 65,955,827
|
Tensorflow target column always returns 1
|
<p>I'm working on a classification problem with Tensorflow and I'm new to this. I want to end up with two target values (1 and 0). I'm asking because I don't know: is it normal for the whole target column to be 1, as below? Thank you.</p>
<pre><code>df['target'] = np.where(df['Class']== 2, 0, 1)
df = df.drop(columns=['Class'])
</code></pre>
<p>then when I run the command line below, the target column shows exactly 1.</p>
<pre><code>print(df.head(50))
</code></pre>
|
<p>Just change the last parameter to the array you are making the comparisons on.</p>
<p>This will replace the values of <code>2</code> with <code>1</code> in <code>df["Class"]</code> and leave every other value unchanged.</p>
<pre><code>df['target'] = np.where(df['Class']== 2, 1, df['Class'])
</code></pre>
|
python|pandas
| 0
|
8,854
| 66,124,497
|
How to use pandas or equivalent python library to parse a csv file
|
<p>I have a csv file data.csv with below file content(| delimited)</p>
<pre><code>A|B|X|Y|Z
S|T|U|V|W|X
</code></pre>
<p>I want to parse this file to print the data in the below format (first two columns constant, and the remaining values, split by |, each generating a new row):</p>
<pre><code>A|B|X
A|B|Y
A|B|Z
S|T|U
S|T|V
S|T|W
S|T|X
</code></pre>
|
<p>Try with <code>read_csv</code> and <code>melt</code> (passing <code>names</code> so the ragged second line, which has an extra field, still parses):</p>
<pre><code>df = pd.read_csv('data.csv', sep='|', header=None, names=range(6)).melt([0, 1])
</code></pre>
<p>Output of <code>print(df)</code>:</p>
<pre><code>   0  1  variable value
0  A  B         2     X
1  S  T         2     U
2  A  B         3     Y
3  S  T         3     V
4  A  B         4     Z
5  S  T         4     W
6  A  B         5   NaN
7  S  T         5     X
</code></pre>
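<p>The shorter first line is padded with <code>NaN</code>; to drop that padding and group the rows as in the desired output, you could follow up with something like:</p>
<pre><code>df = df.dropna(subset=['value']).sort_values([0, 1, 'variable'])
df[[0, 1, 'value']].to_csv('out.csv', sep='|', header=False, index=False)
</code></pre>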
|
python|pandas|csv
| 0
|
8,855
| 66,164,305
|
How to create histograms in pure Python?
|
<p>I know you can just use <code>numpy.histogram()</code>, and that will create the histogram for a respective image. What I don't understand is, how to create histograms without using that function.</p>
<p>Every time I try to use sample code, it has issues with packages, especially OpenCV.</p>
<p>Is there any way to create histograms without using OpenCV or <code>skimage</code>, maybe using something like <code>imageio</code>?</p>
|
<p>You'd need to implement the histogram calculation using lists, dictionaries or any other standard Python data structure, if you explicitly don't want to have NumPy as some kind of import. For <a href="https://en.wikipedia.org/wiki/Image_histogram" rel="nofollow noreferrer">image histograms</a>, you basically need to count intensity occurrences; most of the time these are values in the range of <code>0 ... 255</code> (when dealing with 8-bit images). So, iterate all channels of an image, iterate all pixels within that channel, and increment the corresponding "counter" for the observed intensity value of that pixel.</p>
<p>For example, that'd be a solution:</p>
<pre class="lang-py prettyprint-override"><code>import imageio
# Read image via imageio; get dimensions (width, height)
img = imageio.imread('path/to/your/image.png')
h, w = img.shape[:2]
# Dictionary (or any other data structure) to store histograms
hist = {
'R': [0 for i in range(256)],
'G': [0 for j in range(256)],
'B': [0 for k in range(256)]
}
# Iterate every pixel and increment corresponding histogram element
for i, c in enumerate(['R', 'G', 'B']):
for x in range(w):
for y in range(h):
hist[c][img[y, x, i]] += 1
</code></pre>
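<p>If you also want to visualize the result, a quick sketch with <code>matplotlib</code> (purely optional, not needed for the counting itself) could look like this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

# Plot the three per-channel histograms computed above
for c, color in zip(['R', 'G', 'B'], ['red', 'green', 'blue']):
    plt.plot(hist[c], color=color, label=c)
plt.xlabel('Intensity')
plt.ylabel('Pixel count')
plt.legend()
plt.show()
</code></pre>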
<p>I added some NumPy code to test for equality:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# Calculate histograms using NumPy
hist_np = {
'R': list(np.histogram(img[:, :, 0], bins=range(257))[0]),
'G': list(np.histogram(img[:, :, 1], bins=range(257))[0]),
'B': list(np.histogram(img[:, :, 2], bins=range(257))[0])
}
# Comparisons
print(hist['R'] == hist_np['R'])
print(hist['G'] == hist_np['G'])
print(hist['B'] == hist_np['B'])
</code></pre>
<p>And, the corresponding output in fact is:</p>
<pre class="lang-none prettyprint-override"><code>True
True
True
</code></pre>
<pre class="lang-none prettyprint-override"><code>----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
imageio: 2.9.0
NumPy: 1.20.1
----------------------------------------
</code></pre>
|
numpy|opencv|image-processing|histogram|python-imageio
| 1
|
8,856
| 52,620,354
|
Creating a new Sheet instead of using existing sheet
|
<p>I am copying data from one workbook to another workbook. The problem is pandas is creating a new sheet in the with the name 'sheet_name1' instead of using 'sheet_name'. I am using openpyxl as the Pandas Engine. Can you help with the reason?</p>
<pre><code> input_file = "C:\Automations\FastenersAudit\ClassfiedExports\\" + str(export)
output_file = "C:\Automations\FastenersAudit\Templates\\" + str(template)
input_df = pd.read_excel(io=input_file, skiprows=1)
output_df = pd.read_excel(io=output_file, skiprows=4)
headings = list(output_df.head())
writer = pd.ExcelWriter(output_file, engine="openpyxl")
writer.book = openpyxl.load_workbook(output_file)
for each_heading in headings:
try:
output_df[each_heading] = input_df[each_heading]
except KeyError:
continue
output_df = output_df.loc[:, :'Total Attributes']
output_df = output_df.drop(labels='Total Attributes', axis=1)
output_df.to_excel(writer, sheet_name="Attribute Analysis", na_rep='', index=False, startrow=5, engine="openpyxl")
writer.save()
writer.close()
</code></pre>
|
<p>Slightly Edited Version:
Thank you so much for your solution. It worked perfectly with a slight modification.</p>
<pre><code> writer.sheets = {ws.title: ws for ws in writer.book.worksheets}
for sheetname in writer.sheets:
if sheetname == 'Attribute Analysis':
output_df.to_excel(writer, sheet_name=sheetname, na_rep='', index=False, startrow=5, engine="openpyxl", header=False)
writer.save()
</code></pre>
<p>Original Answer:</p>
<p>I think you need <code>startrow</code> and <code>max_row</code> together, along with <code>writer.sheets</code>, so pandas can find the existing sheet by name:</p>
<pre><code>writer.sheets = {ws.title: ws for ws in writer.book.worksheets}
for sheetname in writer.sheets:
output_df.to_excel(writer,sheet_name=sheetname,startrow=writer.sheets[sheetname].max_row, index = False,header= False)
writer.save()
</code></pre>
|
python|python-3.x|pandas|openpyxl
| 1
|
8,857
| 46,246,212
|
Calculate into new Columns
|
<p>A dataframe column looks like this:</p>
<pre><code>VALUE
1
2
3
4
5
...
40
</code></pre>
<p>I want to produce two new columns for each value like this:</p>
<pre><code> df['VALUE1'] = math.cos(df['VALUE'] * 2 * math.pi / 48)
df['VALUE2'] = math.sin(df['VALUE'] * 2 * math.pi / 48)
</code></pre>
<p>but my Script crashes with no errors given...</p>
<p>The result should be something like this:</p>
<pre><code>VALUE VALUE1 VALUE2
1 ... ...
2 ... ...
3
4
5
...
40 ... ...
</code></pre>
<p>What's the problem?</p>
|
<p><code>math.sin</code> and <code>math.cos</code> don't accept series. Use <code>numpy</code>, vector methods are fast.</p>
<pre><code>In [511]: df = pd.DataFrame({'VALUE': range(1, 41)})
In [512]: df['VALUE1'] = np.cos(df['VALUE'] * 2 * np.pi /48)
In [513]: df['VALUE2'] = np.sin(df['VALUE'] * 2 * np.pi /48)
In [514]: df.head()
Out[514]:
VALUE VALUE1 VALUE2
0 1 0.991445 0.130526
1 2 0.965926 0.258819
2 3 0.923880 0.382683
3 4 0.866025 0.500000
4 5 0.793353 0.608761
</code></pre>
<p>You could use <code>apply</code>, but it tends to be slow.</p>
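<p>For illustration, the <code>apply</code> version would look something like this (row-wise, hence the slowdown):</p>
<pre><code>import math

df['VALUE1'] = df['VALUE'].apply(lambda v: math.cos(v * 2 * math.pi / 48))
df['VALUE2'] = df['VALUE'].apply(lambda v: math.sin(v * 2 * math.pi / 48))
</code></pre>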
|
python|pandas|dataframe
| 0
|
8,858
| 46,427,606
|
Reload tensorflow model in Golang app server
|
<p>I have a Golang app server wherein I keep reloading a saved tensorflow model every 15 minutes. Every api call that uses the tensorflow model, takes a read mutex lock and whenever I reload the model, I take a write lock. Functionality wise, this works fine but during the model load, my API response time increases as the request threads keep waiting for the write lock to be released. Could you please suggest a better approach to keep the loaded model up to date? </p>
<p><strong>Edit, Code updated</strong></p>
<p>Model Load Code:</p>
<pre><code> tags := []string{"serve"}
// load from updated saved model
var m *tensorflow.SavedModel
var err error
m, err = tensorflow.LoadSavedModel("/path/to/model", tags, nil)
if err != nil {
log.Errorf("Exception caught while reloading saved model %v", err)
destroyTFModel(m)
}
if err == nil {
ModelLoadMutex.Lock()
defer ModelLoadMutex.Unlock()
// destroy existing model
destroyTFModel(TensorModel)
TensorModel = m
}
</code></pre>
<p>Model Use Code(Part of the API request):</p>
<pre><code> config.ModelLoadMutex.RLock()
defer config.ModelLoadMutex.RUnlock()
scoreTensorList, err = TensorModel.Session.Run(map[tensorflow.Output]*tensorflow.Tensor{
UserOp.Output(0): uT,
DataOp.Output(0): nT},
[]tensorflow.Output{config.SumOp.Output(0)},
nil,
)
</code></pre>
|
<p>Presumably destroyTFModel takes a long time. You could try this:</p>
<pre><code>old := TensorModel
ModelLoadMutex.Lock()
TensorModel = m // m is the freshly loaded model from LoadSavedModel
ModelLoadMutex.Unlock()
go destroyTFModel(old)
</code></pre>
<p>So destroy after assigning, and/or try destroying on another goroutine if it needs to clean up resources and somehow takes a long time blocking this response. I'd still look into what you're doing in destroyTFModel and why it is slow, though: does it make network requests to the db or involve the file system? Are you sure there isn't another lock external to your app you're not aware of (for example, if it had to open a file and locked it for reads while destroying this model)?</p>
<p>Instead of wrapping the block in <code>if err == nil {</code>, consider returning early on error.</p>
|
go|tensorflow
| 0
|
8,859
| 58,424,060
|
Performing a variable number of array dot products in python?
|
<p>I'm looking for a quick way to dot and array with itself a variable number of times.</p>
<p>In MATLAB it would look like: </p>
<p>A = some array elements</p>
<p>A^d where d is some scalar. </p>
<p>Hence A^2 = A*A</p>
<p>However in Python i'm having trouble finding something that does the same thing</p>
<p>A @ A works when you know how many times you're dotting the arrays.</p>
<p>Is there something I missed and could use?</p>
|
<p>This is called a <a href="http://mathworld.wolfram.com/MatrixPower.html" rel="nofollow noreferrer">"matrix power"</a>, and it is computed by <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.matrix_power.html" rel="nofollow noreferrer"><code>numpy.linalg.matrix_power</code></a>.</p>
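<p>For example (a quick sketch with an arbitrary 2x2 matrix):</p>
<pre><code>import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
d = 3

result = np.linalg.matrix_power(A, d)   # same as A @ A @ A
print(np.allclose(result, A @ A @ A))   # True
</code></pre>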
|
python|numpy|dot
| 0
|
8,860
| 58,290,221
|
Compare row value for duplication in adjacent columns in a loop to clean data in pandas
|
<h1>Summary</h1>
<pre><code>0 101 2017/11 -9999.0 -7.60 -4.00 -9999.0 -9999.0 -4.00 -0.22 1.76 4.64 6.98 8.96 12.56 15.98 19.58 22.46 25.34 28.40
1 101 2017/11 -9999.0 -7.78 -4.36 -9999.0 -9999.0 -4.36 -0.22 1.76 4.64 6.80 8.78 12.56 15.98 19.58 22.46 25.16 28.22
2 101 2017/11 -9999.0 -7.60 -4.18 -9999.0 -9999.0 -4.18 -0.22 1.76 4.46 6.80 8.78 12.56 15.98 19.58 22.46 25.16 28.22
3 101 2017/11 -9999.0 -7.96 -5.26 -9999.0 -9999.0 -5.26 -0.40 1.76 4.46 6.80 8.60 12.38 15.98 19.58 22.46 25.16 28.22
4 101 2017/11 -9999.0 -6.88 -4.36 -9999.0 -9999.0 -4.36 -0.40 1.58 4.46 6.80 8.60 12.38 15.98 19.58 22.46 25.16 28.22
5 101 2017/11 20.30 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06
6 101 2017/11 19.76 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06
7 101 2017/11 20.30 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06 35.06
</code></pre>
<p>I need to be able to remove the data from columns where the adjacent column has the same exact number. So in this example columns 5, 6 and 7 would look like the following:</p>
<pre><code>5 2017/11 20.30 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
6 2017/11 19.76 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
7 2017/11 20.30 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<h1>What I've tried</h1>
<p>A lot of answers I found seem to transform and then indicate a Boolean value. </p>
<p>I was considering something like this pseudocode to check adjacent columns:</p>
<pre><code>for i, row in data.iterrows():
rowvar = i
if data.iloc[i] == rowvar:
data.iloc[i] = np.nan
</code></pre>
<p>but it obviously doesn't work.</p>
<h1>Actual</h1>
<p><code>ValueError: Location based indexing can only have [integer, integer slice (START point is INCLUDED, END point is EXCLUDED), listlike of integers, boolean array] types</code></p>
<p>Is there an easy way to do this that is maybe more Pythonic/Pandas?</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>pandas.diff()</code></a> is indeed the right function for you. However you need to check across the columns <em>in both directions</em> if values are equal or not. This code sets all values to <code>NaN</code> if the previous <em>or</em> next column have the same value:</p>
<pre><code>import numpy as np
data[np.logical_or(data.diff(axis=1) == 0, data.diff(axis=1, periods=-1) == 0)] = np.nan
</code></pre>
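<p>If your real frame also contains non-numeric columns (for example the <code>2017/11</code> date column), one way to restrict the masking to the numeric part is the following sketch (assuming the non-numeric columns should be left untouched):</p>
<pre><code>num_cols = data.select_dtypes('number').columns
num = data[num_cols]
# True wherever a value equals its left or right neighbour
dup = (num.diff(axis=1) == 0) | (num.diff(axis=1, periods=-1) == 0)
data[num_cols] = num.mask(dup)
</code></pre>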
|
python|pandas|csv
| 2
|
8,861
| 69,044,145
|
Convert quarterly dataframe to monthly and fill missing values for each ID
|
<p>I have a dataframe that, for each ID, contains a timestamp and a value. The timestamp is for a given quarter:</p>
<pre><code>import pandas as pd
a = pd.DataFrame({'id': [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,3],
'date': ['2002Q1', '2002Q2', '2002Q3', '2002Q4', '2003Q1', '2002Q2', '2002Q3', '2002Q4', '2003Q1', '2002Q2', '2002Q3', '2002Q4', '2003Q1', '2002Q2', '2002Q3', '2002Q4'],
'value': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]})
</code></pre>
<p>Now, I want to expand the dataframe to monthly frequencies. This means that each row is expanded to three rows (i.e., one quarter becomes 3 months) and all months in any given quarter should have the same value.</p>
<p>As an example, the first two rows in <code>a</code> we expand in to 6 rows:</p>
<pre><code>pd.DataFrame({'id': [1,1,1,1,1,1],
'date': ['2002-1', '2002-2', '2002-3', '2002-4', '2002-5', '2002-6'],
'value': [1,1,1,2,2,2]})
</code></pre>
<p>So basically, I am doing the same as in <a href="https://stackoverflow.com/questions/58637982/convert-quarterly-dataframe-to-monthly-and-fill-missing-values-in-pandas">this answer</a>, but now an ID is involved.</p>
<p>Is it possible to do this?</p>
<hr />
<p>EDIT: The last value per group also needs to be expanded. The current solution gives this result, which is wrong:</p>
<pre><code>import pandas as pd
a = pd.DataFrame({'id': [1,1],
'date': ['2002Q1', '2002Q2'],
'value': [1,2]})
mask = a['id'].duplicated(keep='last')
dates = pd.to_datetime(a['date'])
a.index = dates.where(mask, dates + pd.DateOffset(months=2))
a = a.groupby('id')['value'].resample('MS').first().ffill().reset_index()
a['date'] = a['date'].dt.to_period('M')
a
id date value
0 1 2002-01 1.0 # fine
1 1 2002-02 1.0 # fine
2 1 2002-03 1.0 # fine
3 1 2002-04 1.0 # should be 2
4 1 2002-05 1.0 # should be 2
5 1 2002-06 2.0 # fine
</code></pre>
|
<p>I imagine you can <code>groupby</code> and <code>resample</code>:</p>
<pre><code>a['date'] = pd.to_datetime(a['date'])
(a.set_index('date')
.groupby('id')
['value']
.resample('MS')
.first().ffill()
.reset_index()
)
</code></pre>
<p>output:</p>
<pre><code> date id value
0 2002-01-01 1.0 1.0
1 2002-02-01 1.0 1.0
2 2002-03-01 1.0 1.0
3 2002-04-01 1.0 2.0
4 2002-05-01 1.0 2.0
</code></pre>
|
python|pandas|dataframe
| 2
|
8,862
| 68,940,728
|
Syntax Error when calling the name of a model's layer in captum
|
<p>I'm trying to use the gradCAM feature of captum for PyTorch. Previously, I asked the question of how to find the name of layers in pyTorch (which is done using model.named_modules()). However, since getting the names of the modules (my model name is 'model') I have tried to use it with LayerGradCam from captum and am receiving a syntax error - it seems to always happen on the 'number' within the model name.</p>
<p>I import the function with:</p>
<pre><code>from captum.attr import LayerGradCam
</code></pre>
<p>I'm a bit of a Python newbie, so I've tried calling both:</p>
<pre><code>layer_gc = LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
</code></pre>
<p>and:</p>
<pre><code>layer_gc = captum.attr.LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
</code></pre>
<p>The error message I get is:</p>
<pre><code> File "gradCAM.py", line 120
layer_gc = LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
^
SyntaxError: invalid syntax
</code></pre>
<p>This is really stumping me, so any help is appreciated! Thanks in advance :)</p>
|
<p>Array or list indexing is done using <code>[]</code> syntax, not <code>.</code>.</p>
<pre><code>model.dl.backbone.layer4[2].conv3
</code></pre>
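<p>So the call from your snippet would become:</p>
<pre><code>layer_gc = LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4[2].conv3)
</code></pre>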
|
python|pytorch|syntax-error
| 1
|
8,863
| 60,988,820
|
read row and convert float to integer in pandas
|
<p>I have a dataframe with multiple rows and columns. One of my columns (let's call it column A) has rows that contain a mix of strings, strings and integers (e.g. RSE1023), integers only and floats only. I want to find a way to convert the rows of column A that are floats to integers. Probably with something that can scan through the column in the dataframe, find the rows that are floats and make them integers?</p>
|
<p>You could try something like:</p>
<pre><code>df['A']=df['A'].apply(lambda r:int(r) if isinstance(r,float) else r)
</code></pre>
|
python|pandas|dataframe
| 1
|
8,864
| 71,501,334
|
'Image data of dtype object cannot be converted to float' on imshow()
|
<p>I'm trying to show images from my dataset, but the imshow() function gives me this error:
'Image data of dtype object cannot be converted to float'</p>
<p>This is my code:</p>
<pre><code>val_ds = tf.keras.utils.image_dataset_from_directory(
'/media/Tesi/',
validation_split=0.2,
subset="validation",
seed=123,
image_size=(360, 360),
batch_size=18)
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(val_ds)
predictions[0]
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(val_ds, cmap=plt.cm.binary)
plt.xlabel(class_names[predictions[i]])
plt.show()
</code></pre>
<p>Can I solve it?
Thank you,
Best regards</p>
|
<p>The <code>image_dataset_from_directory</code> function uses <code>cv2.imread()</code> to read images from your directory. However, <code>cv2.imread()</code> returns <code>None</code> if the file wasn't found, and <code>None</code> is of type <code>object</code>. So, check your path.</p>
|
python|tensorflow|matplotlib|machine-learning|google-colaboratory
| 2
|
8,865
| 71,478,786
|
Interpolate pandas dataframe around certain value
|
<p>I have a dataset showing water levels over time and I want to plot all the data above a certain value (-0.75m in the example) in green and all the data below this value in orange. The problem I am facing is that whenever my data crosses over my value, the plotted line stops at that value and there are multiple gaps in my plot</p>
<p><a href="https://i.stack.imgur.com/g5dj5.png" rel="nofollow noreferrer">Example plot</a></p>
<p>What I want to do is interpolate any time the data crosses this border so that my line will continue to the level of -0.75m in green and become orange from there on out.</p>
<p>I have tried to find out at which spots my data crosses this line and have inserted a row in my dataset with a y-value of -0.75 to later on interpolate the corresponding date in my dataframe but this has not worked yet so far.</p>
<p>Below is an example code where I make my own dataset and try to interpolate whenever I cross the value 2. This does seem to work for the trial dataset but not for my original data and the way in which I get the code to work seems very sketchy to me. Are there better ways of trying to achieve my goal?</p>
<pre><code>d = {'x': ['2020-03-14', '2020-03-15', '2020-03-16', '2020-03-18', '2020-03-19'], 'y': [3, 4, 5, -1, 1]}
df = pd.DataFrame(data=d)
df.set_index('x', inplace = True)
empty = {'y' : 2}
df_empty = pd.DataFrame(data=empty, index=[np.nan])
df_temp = df.copy()
df_temp.y -= 2
df_new = df.iloc[[0]]
# Add nan row in dataframe
for i in range(len(df_temp) -1):
df_new = pd.concat([df_new, df.iloc[[i]]])
if df_temp.y.iloc[i] * df_temp.y.iloc[i+1] < 0:
df_new = pd.concat([df_new, df_empty])
# Polish new dataframe
df_new = df_new = pd.concat([df_new, df.iloc[[-1]]])
df_new = df_new.iloc[1:]
# set desired values as index
df_new.reset_index(inplace = True)
df_new.set_index('y',inplace = True)
# convert dates to numbers
df_new.iloc[:,0] = pd.to_numeric(pd.to_datetime(df_new.iloc[:,0]))
# set negative numbers (the missing dates) to nan
df_new[df_new < 0] = np.nan
# interpolate nan values
df_new.iloc[:,0].interpolate(method = 'linear', inplace = True)
# convert back to datetime
df_new.iloc[:,0] = pd.to_datetime(df_new.iloc[:,0])
# undo index change
df_new.reset_index(inplace = True)
df_new.set_index('index',inplace = True)
df.plot()
df_new.plot()
</code></pre>
|
<p>First, if you have to iterate over data, using <code>NumPy</code> is faster than using <code>Pandas</code>.<br>
See: <a href="https://towardsdatascience.com/how-to-make-your-pandas-loop-71-803-times-faster-805030df4f06" rel="nofollow noreferrer">https://towardsdatascience.com/how-to-make-your-pandas-loop-71-803-times-faster-805030df4f06</a></p>
<p>Second, you said "This does seem to work for the trial dataset but not for my original data". In this case, you should provide original data for your question.</p>
<p>Anyway, here is a working code that uses <code>NumPy</code>. I am not sure that this will work for your original data.</p>
<pre><code>import datetime
import numpy as np
import matplotlib.pyplot as plt
# Raw data
list_dtstr = ['2020-03-14', '2020-03-15', '2020-03-16', '2020-03-18', '2020-03-19', '2020-03-20', '2020-03-21', '2020-03-22']
list_value = [3.0, 4.0, 5.0, -1.0, 1.0, 4.0, -1.0, 3.0]
npdata = np.array([[datetime.datetime.strptime(dtstr, '%Y-%m-%d').timestamp() for dtstr in list_dtstr], list_value])
npdata = npdata.transpose()
# Threshold value
threshold = 2.0
# Interpolate
for idx in range(len(npdata) - 1 , 0, -1):
if (npdata[idx, 1] - threshold) * (npdata[idx - 1, 1] - threshold) < 0:
interp_x = [npdata[idx - 1, 1], npdata[idx, 1]]
interp_y = [npdata[idx - 1, 0], npdata[idx, 0]]
# Sort interpolation data
if interp_x[0] > interp_x[1]:
interp_x = [interp_x[1], interp_x[0]]
interp_y = [interp_y[1], interp_y[0]]
# Interpolation
dt_value = np.interp(threshold, interp_x, interp_y)
# Insert interpolated data
npdata = np.insert(npdata, idx, [dt_value, threshold], axis=0)
# Convert timestamp to datetime
npdata_new = np.array([[datetime.datetime.fromtimestamp(npdata[idx, 0]), npdata[idx, 1]] for idx in range(len(npdata))])
# Split data
npdata_above = npdata_new.copy()
npdata_below = npdata_new.copy()
for idx in range(len(npdata_new)):
if npdata_new[idx, 1] > threshold:
npdata_below[idx, 1] = None
elif npdata_new[idx, 1] < threshold:
npdata_above[idx, 1] = None
# Plot
plt.plot(npdata_above[:, 0], npdata_above[:, 1], c = 'green')
plt.plot(npdata_below[:, 0], npdata_below[:, 1], c = 'orange')
</code></pre>
<p><a href="https://i.stack.imgur.com/JWsDd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JWsDd.jpg" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|plot|interpolation
| 0
|
8,866
| 42,346,898
|
Loading one dimensional data into a Dense layer in a sequential Keras model
|
<p>I have the results of a trained model, ending in a Flatten layer in numpy output files.
I try to load them and use them as inputs of a Dense layer.</p>
<pre><code>train_data = np.load(open('bottleneck_flat_features_train.npy'))
train_labels = np.array([0] * (nb_train_samples / 2) + [1] * (nb_train_samples / 2))
#
validation_data = np.load(open('bottleneck_flat_features_validation.npy'))
validation_labels = np.array([0] * (nb_validation_samples / 2) + [1] * (nb_validation_samples / 2))
#
top_m = Sequential()
top_m.add(Dense(2,input_shape=train_data.shape[1:], activation='sigmoid', name='top_dense1'))
top_m.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
#
top_m.fit(train_data, train_labels,
nb_epoch=nb_epoch, batch_size=my_batch_size,
validation_data=(validation_data, validation_labels))
</code></pre>
<p>However I get the following error message:</p>
<pre><code> ValueError: Error when checking model target: expected top_dense1 to have
shape (None, 2) but got array with shape (13, 1)
</code></pre>
<p>My input dimensions are (16,1536) - 16 images for this limited trail run, 1536 features. </p>
<pre><code>>>> train_data.shape
(16, 1536)
</code></pre>
<p>The dense layer should expect a one dimensional 1536 long array.</p>
<pre><code>>>> train_data.shape[1]
1536
</code></pre>
<p>What should I do?
Many thanks!</p>
|
<p>I have found my problem - I did not define the labels correctly. I have switched the model compilation to a sparse categorical crossentropy mode.</p>
<p>My current code is:</p>
<pre><code>def train_top_model():
train_data = np.load(open('bottleneck_flat_features_train.npy'))
train_labels = np.array([0] * (nb_train_samples / 2) + [1] * (nb_train_samples / 2))
#
validation_data = np.load(open('bottleneck_flat_features_validation.npy'))
validation_labels = np.array([0] * (nb_validation_samples / 2) + [1] * (nb_validation_samples / 2))
#
top_m = Sequential()
top_m.add(Dense(2,input_shape=train_data.shape[1:], activation='softmax', name='top_dense1'))
top_m.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
#
top_m.fit(train_data, train_labels,
nb_epoch=nb_epoch, batch_size=my_batch_size,
validation_data=(validation_data, validation_labels))
</code></pre>
<p>Now it works and converges. </p>
|
python|numpy|keras
| 1
|
8,867
| 69,770,425
|
Querying data frames in Python/Pandas when columns are optional or missing
|
<p>I'm developing a script in Python/Pandas to compare the contents of two dataframes.</p>
<p>Both dataframes contain any combination of columns from a fixed list, for instance:</p>
<pre><code>"Case Name", "MAC", "Machine Name", "OS", "Exec Time", "RSS"
</code></pre>
<p>Some combination of columns is used as a unique key, but some of those columns <strong>might be missing</strong> sometimes. Also, both dataframes contain (and miss) the same columns (to avoid extra complexity).</p>
<p>So, I want to retrieve a row from one dataframe given a key I obtain from the other dataframe (I'm certain the key matches a single row in each dataframe, that's not an issue in this case either).</p>
<p>For example, in this case, the pair <code>"Case Name" + "MAC"</code> is my key:</p>
<ul>
<li>Dataframe 1:</li>
</ul>
<pre><code>"Case Name" | "MAC" |"Machine Name" | "OS" | "Exec Time" | "RSS"
------------+-------------------+---------------+---------+-------------+------
Case1 | FB:E8:99:88:AC:DE | Linux1 | Linux | 60 | 1000
</code></pre>
<ul>
<li>DataFrame 2</li>
</ul>
<pre><code>"Case Name" | "MAC" |"Machine Name" | "OS" | "Exec Time" | "RSS"
------------+-------------------+---------------+---------+-------------+------
Case1 | FB:E8:99:88:AC:DE | Windows1 | Windows | 80 | 500
</code></pre>
<p>Based on these dataframes, I want to generate another one like this:</p>
<pre><code>"Case Name" | "MAC" | "Machine Name 1" | "Machine Name 2" | "OS 1" | "OS 2" | "Exec Time 1" | "Exec Time 2" | "RSS 1" | "RSS 2"
------------+-------------------+------------------+------------------+-----------+-----------+---------------+---------------+---------+--------
Case1 | FB:E8:99:88:AC:DE | Linux1 | Windows1 | Linux | Windows | 60 | 80 | 1000 | 500
</code></pre>
<p>However, in certain cases, some of those <em>"key"</em> columns might be missing, in which case the dataframes will look like:</p>
<p>Dataframe 1:</p>
<pre><code>"Case Name" | "Machine Name" | "OS" | "Exec Time" | "RSS"
------------+----------------+---------+-------------+------
Case1 | Linux1 | Linux | 60 | 1000
</code></pre>
<p>DataFrame 2:</p>
<pre><code>"Case Name" | "Machine Name" | "OS" | "Exec Time" | "RSS"
------------+----------------+---------+-------------+------
Case1 | Windows1 | Windows | 80 | 500
</code></pre>
<p>As you can see, the <code>"MAC"</code> column is missing, in which case I'm certain (this is not an issue here either) that <code>"Case Name"</code> is a good enough unique key.</p>
<p>So, to build the combined data frame I tried something like this:</p>
<pre><code>for index1, data1 in dataFrame1.iterrows():
caseName = data1['Case Name']
try:
macAddr = data1['MAC']
except:
macAddr = None
# Let's see if pd.isnull() works fine when no MAC column exists
if pd.isnull(macAddr):
print("No MAC column data detected")
else:
print("MAC column data detected")
# The rest of the data from the dataFrame1
machineName1 = data1['Machine Name']
os1 = data1['OS']
# etc., etc.
#then try to locate the equivalent data in the other data frame:
data2 = dataFrame2.loc[(dataFrame2['Case Name'] == caseName) & (pd.isnull(macAddr) | (dataFrame2['MAC'] == macAddr)), ['Machine Name', 'OS', 'Exec Time', 'RSS']]
machineName2 = data2['Machine Name']
os2 = data2['OS']
# etc., etc.
</code></pre>
<p>As a C guy (and a beginner at Python), I'd expect the statement to stop processing once a <code>True</code> condition is reached, in this case <code>pd.isnull(macAddr)</code>, avoiding the piece that will certainly trigger an error, <code>(dataFrame2['MAC'] == macAddr)</code>, since the column is missing. According to <a href="https://stackoverflow.com/questions/2580136/does-python-support-short-circuiting">this</a>, I'd expect short-circuiting to happen; however, it doesn't seem to happen in my case, and when I run it, my script returns:</p>
<pre><code>caseName = testCase
No MAC column data detected -> So pd.isnull() works fine!!!
Traceback (most recent call last):
File "~/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'MAC'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~/compare_dataframes.py", line 135, in <module>
main()
File "~/compare_dataframes.py", line 79, in main
data2 = dataFrame2.loc[(dataFrame2['Case Name'] == caseName) & (pd.isnull(macAddr) | (dataFrame2['MAC'] == macAddr)), ['Machine Name', 'OS', 'Exec Time', 'RSS']]
File "~/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 3458, in __getitem__
indexer = self.columns.get_loc(key)
File "~/.local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
raise KeyError(key) from err
KeyError: 'MAC'
</code></pre>
<p>Now, I can change this to a series of nested <code>if</code> conditions:</p>
<pre><code>if pd.isnull(macAddr):
data2 = dataFrame2.loc[(dataFrame2['Case Name'] == caseName), ['Machine Name', 'OS', 'Exec Time','RSS']]
else:
data2 = dataFrame2.loc[(dataFrame2['Case Name'] == caseName) & (dataFrame2['MAC'] == macAddr), ['Machine Name', 'OS', 'Exec Time','RSS']]
</code></pre>
<p>But this is impractical since it grows as <code>2^n</code> nested <code>if</code>s; what happens if I add a new column in the future?</p>
<p>So, my question is: <strong>What is wrong with that condition?</strong> I've added parentheses as much as possible, but to no effect.</p>
<p>I'm using Python 3.8, Pandas 1.3.4</p>
<p>Thanks a lot for your help.</p>
|
<p>I think you might want to try a simple merge, and depending on whether <code>MAC</code> is present, add it to the merge fields, no?</p>
<pre class="lang-py prettyprint-override"><code>merge_cols = ["Case Name"]
if "MAC" in df1.columns:
merge_cols.append("MAC")
print(merge_cols)
result = df1.merge(df2, on=merge_cols, how='left')
print(result)
</code></pre>
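<p>If more optional key columns might appear later, you could generalize the idea by keeping a list of candidate keys and only using the ones that are actually present (a sketch; <code>optional_keys</code> is whatever set of columns can be missing in your data):</p>
<pre class="lang-py prettyprint-override"><code>optional_keys = ["MAC"]  # extend this list as new optional key columns appear
merge_cols = ["Case Name"] + [
    c for c in optional_keys if c in df1.columns and c in df2.columns
]
result = df1.merge(df2, on=merge_cols, how="left", suffixes=(" 1", " 2"))
print(result)
</code></pre>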
<p>Does that answer your question?</p>
|
python|pandas|dataframe
| 1
|
8,868
| 72,339,990
|
week of the year aggregation using python (week starts from 01 01 YYYY)
|
<p>I searched previous questions, but they do not resolve what I am looking for. Please can you help me?</p>
<p>I have a dataset from</p>
<pre><code> Date T2M Y T F H G Week_Number
0 1981-01-01 11.08 17.35 6.94 0.00 5.37 4.63 1
1 1981-01-02 10.82 16.41 7.51 0.00 5.55 2.73 1
2 1981-01-03 10.74 15.64 7.35 0.00 6.23 2.33 1
3 1981-01-04 11.17 15.99 8.46 0.00 6.16 1.66 1
4 1981-01-05 10.20 15.60 6.87 0.12 6.10 2.78 2
5 1981-01-06 10.35 16.16 5.95 0.00 6.59 3.92 2
6 1981-01-07 12.26 18.24 9.30 0.00 6.10 2.30 2
7 1981-01-08 12.76 19.23 8.72 0.00 6.29 3.96 2
8 1981-01-09 12.61 17.80 8.90 0.00 6.71 2.05 2
</code></pre>
<p>I already created a column of the week number using this code</p>
<pre><code>df['Week_Number'] = df['Date'].dt.week
</code></pre>
<p>but it gives me only the first four days of the year as the first week; maybe that means the week starts from Monday. In my case I don't care whether it starts from Monday or another day, I just want to subdivide each year into blocks of seven days (group every 7 days of each year, e.g. from 1 1 1980 to 07 1 1980 is the FIRST WEEK, and so on), and every year the first week should again start from 1 1 of that year.</p>
|
<p>If you want your week numbers to start from the 1st of January, irrespective of the day of week, simply get the day of year, subtract 1 and compute the integer division by 7:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
df['week_number'] = df['Date'].dt.dayofyear.sub(1).floordiv(7).add(1)
</code></pre>
<p><em>NB. you do not need to add 1 if you want the first week to start with 0</em></p>
<p>output:</p>
<pre><code> Date T2M Y T F H G Week_Number week_number
0 1981-01-01 11.08 17.35 6.94 0.00 5.37 4.63 1 1
1 1981-01-02 10.82 16.41 7.51 0.00 5.55 2.73 1 1
2 1981-01-03 10.74 15.64 7.35 0.00 6.23 2.33 1 1
3 1981-01-04 11.17 15.99 8.46 0.00 6.16 1.66 1 1
4 1981-01-05 10.20 15.60 6.87 0.12 6.10 2.78 2 1
5 1981-01-06 10.35 16.16 5.95 0.00 6.59 3.92 2 1
6 1981-01-07 12.26 18.24 9.30 0.00 6.10 2.30 2 1
7 1981-01-08 12.76 19.23 8.72 0.00 6.29 3.96 2 2
8 1981-01-09 12.61 17.80 8.90 0.00 6.71 2.05 2 2
</code></pre>
<p>Then you can use the new column to <code>groupby</code>, for example:</p>
<pre><code>df.groupby('week_number').agg({'Date': ['min', 'max'], 'T2M': 'sum'})
</code></pre>
<p>output:</p>
<pre><code> Date T2M
min max sum
week_number
1 1981-01-01 1981-01-07 76.62
2 1981-01-08 1981-01-09 25.37
</code></pre>
|
python|pandas|datetime|week-number
| 1
|
8,869
| 72,198,284
|
Append rows to a dataframe efficiently
|
<p>I have a dataframe that looks like this</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Timestamp': ['1642847484', '1642847484', '1642847484', '1642847484', '1642847487', '1642847487','1642847487','1642847487','1642847487','1642847487','1642847487','1642847487', '1642847489', '1642847489', '1642847489'],
'value': [11, 10, 14, 20, 3, 2, 9, 48, 5, 20, 12, 20, 56, 12, 8]})
</code></pre>
<p>I need to do some operations on each group of values with the same timestamp, so I use groupby as follows:</p>
<pre><code>df_grouped = df.groupby('Timestamp')
</code></pre>
<p>And then iterate over the rows of each group and append the results row by row in a new dataframe:</p>
<pre><code>df_out = pd.DataFrame(columns=( 'Timestamp', 'value'))
for group_name, df_group in df_grouped:
i = 0
for row_index, row in df_group.iterrows():
row['Timestamp'] = row['Timestamp']* 1000 + i * 30
df_out = df_out.append(row)
i = i+1
print(df_out.tail())
</code></pre>
<p>But my approach takes so much time (7M+ rows) and I was wondering if there is a more efficient way to do so. Thank you.</p>
|
<p>I think <code>iterrows</code> is not necessary here; you can use:</p>
<pre><code>def f(x):
x['Timestamp'] = ...
....
return x
df1 = df.groupby('Timestamp').apply(f)
</code></pre>
<p>EDIT: Create a counter <code>Series</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>, multiply it and add it to <code>Timestamp</code>:</p>
<pre><code>#if necessary
df['Timestamp'] = df['Timestamp'].astype(np.int64)
df['Timestamp'] = df['Timestamp'] * 1000 + df.groupby('Timestamp').cumcount() * 30
print(df)
Timestamp value
0 1642847484000 11
1 1642847484030 10
2 1642847484060 14
3 1642847484090 20
4 1642847487000 3
5 1642847487030 2
6 1642847487060 9
7 1642847487090 48
8 1642847487120 5
9 1642847487150 20
10 1642847487180 12
11 1642847487210 20
12 1642847489000 56
13 1642847489030 12
14 1642847489060 8
</code></pre>
|
python|pandas|dataframe|append
| 2
|
8,870
| 72,309,955
|
Python's Numpy dot function returning incorrect value, why?
|
<p>Real simple, my code is:</p>
<pre><code>import numpy as np
a = np.array([0.4, 0.3])
b = np.array([-0.15, 0.2])
print(np.dot(a,b))
</code></pre>
<p>The dot product of this should be 0, and instead I get:</p>
<pre><code>3.3306690738754695e-18
</code></pre>
|
<p>Floating-point!</p>
<p>Floating-point (i.e. non-integer) arithmetic tends not to be 100% accurate.</p>
<p>See <a href="https://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate">here</a> for more info.</p>
<p>Also, note that your result is very close to zero.</p>
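<p>If you want to treat such values as zero in comparisons, use a tolerance rather than exact equality, e.g.:</p>
<pre><code>import numpy as np

a = np.array([0.4, 0.3])
b = np.array([-0.15, 0.2])

print(np.isclose(np.dot(a, b), 0.0))  # True
</code></pre>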
|
python|numpy|dot-product
| 2
|
8,871
| 50,441,524
|
RunMetadata in eager mode
|
<p>Does eager mode support <code>tf.profiler</code> in r1.8? Since it no longer has the session object, is there any way to pass in a <code>tf.RunMetadata()</code> into the execution? I saw the profiler constructor checks for eager mode; but without <code>RunMetadata</code> it doesn't work. Thanks!</p>
|
<pre><code>with context.eager_mode():
outfile = os.path.join(test.get_temp_dir(), 'dump')
opts = builder(
builder.time_and_memory()).with_file_output(outfile).build()
context.enable_run_metadata()
# run your model here #
profiler = model_analyzer.Profiler()
profiler.add_step(0, context.export_run_metadata())
context.disable_run_metadata()
profiler.profile_operations(opts)
</code></pre>
<p>You may refer to the below link (testEager function). <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/profiler/model_analyzer_test.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/profiler/model_analyzer_test.py</a></p>
|
tensorflow
| -1
|
8,872
| 50,549,681
|
Data type for gaussian Naive bayes classifivation using sklearn, how to clean data
|
<p>I'm trying to classify mobiles according to their features, but when I apply the Gaussian NB code through sklearn, I'm unable to do so because of the following error.</p>
<p>The code:</p>
<pre><code>clf = GaussianNB()
clf.fit(X_train,y_train)
GaussianNB()
accuracy = clf.score(X_test,y_test)
print(accuracy)
</code></pre>
<p>error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-18-e9515ccc2439> in <module>()
2 clf.fit(X_train,y_train)
3 GaussianNB()
----> 4 accuracy = clf.score(X_test,y_test)
5 print(accuracy)
/Users/kiran/anaconda/lib/python3.6/site-packages/sklearn/base.py in score(self, X, y, sample_weight)
347 """
348 from .metrics import accuracy_score
--> 349 return accuracy_score(y, self.predict(X), sample_weight=sample_weight)
350
351
/Users/kiran/anaconda/lib/python3.6/site-packages/sklearn/naive_bayes.py in predict(self, X)
63 Predicted target values for X
64 """
---> 65 jll = self._joint_log_likelihood(X)
66 return self.classes_[np.argmax(jll, axis=1)]
67
/Users/kiran/anaconda/lib/python3.6/site-packages/sklearn/naive_bayes.py in _joint_log_likelihood(self, X)
422 check_is_fitted(self, "classes_")
423
--> 424 X = check_array(X)
425 joint_log_likelihood = []
426 for i in range(np.size(self.classes_)):
/Users/kiran/anaconda/lib/python3.6/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
380 force_all_finite)
381 else:
--> 382 array = np.array(array, dtype=dtype, order=order, copy=copy)
383
384 if ensure_2d:
ValueError: could not convert string to float:
</code></pre>
<p>My dataset has been scraped, so it contains string as well as float values. It would be helpful if someone could suggest how I can clean the data and avoid the error.</p>
|
<p>try the following:</p>
<pre><code>accuracy = clf.score(X_test.astype('float'),y_test.astype('float'))
</code></pre>
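<p>Note that <code>astype('float')</code> only works if every value is numeric text. If some columns are genuinely categorical strings, one common approach (a sketch, assuming <code>X</code> is your feature DataFrame before the train/test split) is to encode them first, e.g.:</p>
<pre><code>X = pd.get_dummies(X)
</code></pre>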
|
python|pandas|scikit-learn|classification
| 1
|
8,873
| 50,441,794
|
TypeError: unsupported operand type(s) for /: 'float' and 'csr_matrix'
|
<p>I want to write a sigmoid function:</p>
<pre><code>def fn(w, x):
return 1.0 / (np.expm1(-w.dot(x))+0.0)
</code></pre>
<p>Because -w.dot(x) is a sparse matrix, I used np.expm1() instead of np.exp(), but how do I divide a float by a csr_matrix? Thanks!</p>
|
<pre><code>from scipy import sparse

res2 = np.expm1(-w.dot(x))
res1 = sparse.csr_matrix(np.ones(res2.shape))
return res1 / res2
</code></pre>
|
python|numpy|python-3.5|array-broadcasting
| 0
|
8,874
| 50,565,880
|
Quantization of mobilenet-ssd
|
<p>I wanted to quantize (change all the floats into INT8) an ssd-mobilenet model and then deploy it onto my raspberry-pi. So far, I have not found anything which can help me with it. Any help would be highly appreciated.
I saw tensorflow-lite but it seems it only supports android and iOS.
Any library/framework is acceptable.</p>
<p>Thanks in advance.</p>
|
<p>Tensorflow Lite now has support for the Raspberry Pi via Makefiles. <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/build_rpi_lib.sh" rel="nofollow noreferrer">Here's the shell script</a>. Regarding Mobilenet-SSD, you can get details on how to use it with TensorFlow Lite in <a href="https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html" rel="nofollow noreferrer">this blog post</a> (and <a href="https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193" rel="nofollow noreferrer">here</a>)</p>
|
tensorflow|raspberry-pi|deep-learning
| 0
|
8,875
| 50,358,564
|
computing the mean for python datetime
|
<p>I have a datetime attribute:</p>
<pre><code>d = {
'DOB': pd.Series([
datetime.datetime(2014, 7, 9),
datetime.datetime(2014, 7, 15),
np.datetime64('NaT')
], index=['a', 'b', 'c'])
}
df_test = pd.DataFrame(d)
</code></pre>
<p>I would like to compute the mean for that attribute. Running mean() causes an error: </p>
<blockquote>
<p>TypeError: reduction operation 'mean' not allowed for this dtype</p>
</blockquote>
<p>I also tried the solution proposed <a href="https://stackoverflow.com/questions/27907902/datetime-objects-with-pandas-mean-function">elsewhere</a>. It doesn't work as running the function proposed there causes </p>
<blockquote>
<p>OverflowError: Python int too large to convert to C long</p>
</blockquote>
<p>What would you propose? The result for the above dataframe should be equivalent to </p>
<pre><code>datetime.datetime(2014, 7, 12).
</code></pre>
|
<p>You can take the mean of <code>Timedelta</code>. So find the minimum value and subtract it from the series to get a series of <code>Timedelta</code>. Then take the mean and add it back to the minimum.</p>
<pre><code>dob = df_test.DOB
m = dob.min()
(m + (dob - m).mean()).to_pydatetime()
datetime.datetime(2014, 7, 12, 0, 0)
</code></pre>
<hr>
<p>One-line</p>
<pre><code>df_test.DOB.pipe(lambda d: (lambda m: m + (d - m).mean())(d.min())).to_pydatetime()
</code></pre>
<hr>
<p>To <a href="https://stackoverflow.com/questions/50358564/computing-the-mean-for-python-datetime/50358798#comment87734565_50358798">@ALollz point</a></p>
<p>I use the epoch <code>pd.Timestamp(0)</code> instead of <code>min</code></p>
<pre><code>df_test.DOB.pipe(lambda d: (lambda m: m + (d - m).mean())(pd.Timestamp(0))).to_pydatetime()
</code></pre>
|
python-3.x|pandas|datetime|mean|python-datetime
| 10
|
8,876
| 50,305,206
|
How To Normalize Array Between 1 and 10?
|
<p>I have a numpy array with the following integer numbers: </p>
<pre><code>[10 30 16 18 24 18 30 30 21 7 15 14 24 27 14 16 30 12 18]
</code></pre>
<p>I want to normalize them to a range between 1 and 10. </p>
<p>I know that the general formula to normalize arrays is:</p>
<p><a href="https://i.stack.imgur.com/gztOl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gztOl.png" alt="enter image description here"></a></p>
<p>But how am I supposed to scale them between 1 and 10?</p>
<p><strong>Question:</strong> What is the simplest/fastest way to normalize this array to values between 1 and 10? </p>
|
<p>Your range is actually 9 long: from 1 to 10. If you multiply the normalized array by 9 you get values from 0 to 9, which you need to shift back by 1:</p>
<pre><code>start = 1
end = 10
width = end - start
res = (arr - arr.min())/(arr.max() - arr.min()) * width + start
</code></pre>
<p>Note that the denominator here has a numpy built-in named <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ptp.html" rel="noreferrer"><code>arr.ptp()</code></a>:</p>
<pre><code>res = (arr - arr.min())/arr.ptp() * width + start
</code></pre>
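<p>Applied to your array, this gives for example:</p>
<pre><code>import numpy as np

arr = np.array([10, 30, 16, 18, 24, 18, 30, 30, 21, 7,
                15, 14, 24, 27, 14, 16, 30, 12, 18])

res = (arr - arr.min()) / arr.ptp() * 9 + 1
print(res.min(), res.max())  # 1.0 10.0  (7 maps to 1, 30 maps to 10)
</code></pre>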
|
python|python-3.x|numpy
| 7
|
8,877
| 50,319,837
|
mapping similar text strings in between two pandas dataframes
|
<p>I have a dataset named <code>data_feed</code> that contains feedback, given as:</p>
<pre><code>feedback
Fast Delivery. Always before time.Thanks
I have order brown shoe .And I got olive green shoe
Delivery guy is a decent nd friendly guy
Its really good .. my daughter loves it
One t shirt was fully crushed rest everything is good
Superfast delivery! I'm impressed.
......................... .
........................ .
so on
</code></pre>
<p>and a another dataset named <code>reference</code> as:-</p>
<pre><code>refer_feedback sub-category category sentiment
The delivery was on time. delivery speed delivery positive
he was polite enough delivery man behaviour delivery positive
worst products product quality general negative
</code></pre>
<p>Now I want to extend dataset <code>data_feed</code> with these columns:</p>
<pre><code>feedback sub-category category sentiment
</code></pre>
<p>How can I match similar feedback? I.e., I want to match column <code>feedback</code> in dataframe <code>data_feed</code> with column <code>refer_feedback</code> in dataframe <code>reference</code> and assign the corresponding labels for subcategory, category and sentiment.</p>
<p>For example, the first feedback in dataset <code>data_feed</code> is quite similar to the first feedback of dataset <code>reference</code>, so the first observation for <code>data_feed</code> would be:</p>
<pre><code>feedback subcategory category sentiment
Fast Delivery. Always before time.Thanks delivery speed delivery positive
</code></pre>
|
<p>One strategy you could use is to analyze the feedback with <a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation" rel="nofollow noreferrer">LDA</a> to discover common topics. You could then use the topics to map like to like between the two tables.</p>
<p>LDA analyzes what is referred to as a 'corpus' of documents. Document is used abstractly here: the set of <code>refer_feedback</code> examples, or the set of <code>feedback</code> examples, could each form a corpus, with each individual example acting as a document.</p>
<p>Two differing approaches that could work follow:</p>
<h2>Corpus from <code>refer_feedback</code></h2>
<p>Each example of <code>refer_feedback</code> will be a document in your corpus for this approach. The number of topics you are looking for is equal to the count of unique subcategories.</p>
<p>Use <a href="https://www.nltk.org/" rel="nofollow noreferrer">nltk</a> to remove stop words and perform <a href="https://en.wikipedia.org/wiki/Lemmatisation" rel="nofollow noreferrer">lemmatisation</a>. Use <a href="https://radimrehurek.com/gensim/" rel="nofollow noreferrer">gensim</a> to perform LDA on the results to get your topics model. Use this topics model to classify <code>feedback</code> as it comes in.</p>
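<p>A minimal sketch of this approach (assuming <code>reference</code> is the second DataFrame from the question and the required NLTK data, e.g. <code>stopwords</code> and <code>wordnet</code>, has been downloaded):</p>
<pre><code>import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from gensim import corpora, models

stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    # tokenize, drop stop words / punctuation, lemmatize
    tokens = nltk.wordpunct_tokenize(text.lower())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t.isalpha() and t not in stop_words]

docs = [preprocess(t) for t in reference['refer_feedback']]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

num_topics = reference['sub-category'].nunique()
lda = models.LdaModel(bow, num_topics=num_topics,
                      id2word=dictionary, passes=10)

# classify an incoming feedback string by its most probable topic
new_doc = dictionary.doc2bow(preprocess("Fast Delivery. Always before time. Thanks"))
print(max(lda[new_doc], key=lambda t: t[1]))
</code></pre>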
<h2>Corpus from <code>feedback</code></h2>
<p>If you do not have enough <code>refer_feedback</code> examples or you try the first approach and it does not work, try building a corpus from a large set of <code>feedback</code> examples. In this approach, the number of topics is not as easy to determine but it would be valuable to start with something close to the number of subcategories that you have.</p>
<p>Use <code>nltk</code> again to remove stop words and perform lemmatisation. Build the LDA model. </p>
<p>Next, you will need to manually map the topics generated by the model to subcategories. Save this mapping. </p>
<p>When future feedback comes in, use the ldamodel to discover its most probable topics then use your mapping of topic to subcategory to assign the appropriate fields.</p>
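<p>A minimal sketch of the first approach, assuming the <code>reference</code> and <code>data_feed</code> dataframes from the question and that the required nltk data (<code>punkt</code>, <code>stopwords</code>, <code>wordnet</code>) has already been downloaded:</p>
<pre><code>import gensim
from gensim import corpora
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    # tokenize, keep alphabetic tokens, drop stop words, lemmatise
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    return [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]

# build the topic model from the reference feedback
docs = [preprocess(t) for t in reference['refer_feedback']]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
num_topics = reference['sub-category'].nunique()
lda = gensim.models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics)

# classify one incoming feedback string by its most probable topic
bow = dictionary.doc2bow(preprocess(data_feed['feedback'].iloc[0]))
best_topic = max(lda[bow], key=lambda pair: pair[1])[0]
</code></pre>
<p>You would then map <code>best_topic</code> to a subcategory (and from there to category and sentiment) by inspecting the topics once, as described above.</p>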
|
python-2.7|pandas|nlp|mapping|sentiment-analysis
| 0
|
8,878
| 45,687,723
|
JSON object inside Pandas DataFrame
|
<p>I have a JSON object inside a pandas dataframe column, which I want to pull apart and put into other columns. In the dataframe, the JSON object looks like a string containing an array of dictionaries. The array can be of variable length, including zero, or the column can even be null. I've written some code, shown below, which does what I want. The column names are built from two components, the first being the keys in the dictionaries, and the second being a substring from a key value in the dictionary. </p>
<p>This code works okay, but it is very slow when running on a big dataframe. Can anyone offer a faster (and probably simpler) way to do this? Also, feel free to pick holes in what I have done if you see something which is not sensible/efficient/pythonic. I'm still a relative beginner. Thanks heaps.</p>
<pre><code># Import libraries
import pandas as pd
from IPython.display import display # Used to display df's nicely in jupyter notebook.
import json
# Set some display options
pd.set_option('max_colwidth',150)
# Create the example dataframe
print("Original df:")
df = pd.DataFrame.from_dict({'ColA': {0: 123, 1: 234, 2: 345, 3: 456, 4: 567},\
'ColB': {0: '[{"key":"keyValue=1","valA":"8","valB":"18"},{"key":"keyValue=2","valA":"9","valB":"19"}]',\
1: '[{"key":"keyValue=2","valA":"28","valB":"38"},{"key":"keyValue=3","valA":"29","valB":"39"}]',\
2: '[{"key":"keyValue=4","valA":"48","valC":"58"}]',\
3: '[]',\
4: None}})
display(df)
# Create a temporary dataframe to append results to, record by record
dfTemp = pd.DataFrame()
# Step through all rows in the dataframe
for i in range(df.shape[0]):
# Check whether record is null, or doesn't contain any real data
if pd.notnull(df.iloc[i,df.columns.get_loc("ColB")]) and len(df.iloc[i,df.columns.get_loc("ColB")]) > 2:
# Convert the json structure into a dataframe, one cell at a time in the relevant column
x = pd.read_json(df.iloc[i,df.columns.get_loc("ColB")])
# The last bit of this string (after the last =) will be used as a key for the column labels
x['key'] = x['key'].apply(lambda x: x.split("=")[-1])
# Set this new key to be the index
y = x.set_index('key')
# Stack the rows up via a multi-level column index
y = y.stack().to_frame().T
# Flatten out the multi-level column index
y.columns = ['{1}_{0}'.format(*c) for c in y.columns]
# Give the single record the same index number as the parent dataframe (for the merge to work)
y.index = [df.index[i]]
# Append this dataframe on sequentially for each row as we go through the loop
dfTemp = dfTemp.append(y)
# Merge the new dataframe back onto the original one as extra columns, with index mataching original dataframe
df = pd.merge(df,dfTemp, how = 'left', left_index = True, right_index = True)
print("Processed df:")
display(df)
</code></pre>
|
<p>First, a general piece of advice about pandas. <strong>If you find yourself iterating over the rows of a dataframe, you are most likely doing it wrong.</strong></p>
<p>With this in mind, we can re-write your current procedure using pandas 'apply' method (this will likely speed it up to begin with, as it means far fewer index lookups on the df):</p>
<pre><code># Check whether record is null, or doesn't contain any real data
def do_the_thing(row):
if pd.notnull(row) and len(row) > 2:
# Convert the json structure into a dataframe, one cell at a time in the relevant column
x = pd.read_json(row)
# The last bit of this string (after the last =) will be used as a key for the column labels
x['key'] = x['key'].apply(lambda x: x.split("=")[-1])
# Set this new key to be the index
y = x.set_index('key')
# Stack the rows up via a multi-level column index
y = y.stack().to_frame().T
# Flatten out the multi-level column index
y.columns = ['{1}_{0}'.format(*c) for c in y.columns]
#we don't need to re-index
# Give the single record the same index number as the parent dataframe (for the merge to work)
#y.index = [df.index[i]]
#we don't need to add to a temp df
# Append this dataframe on sequentially for each row as we go through the loop
return y.iloc[0]
else:
return pd.Series()
df2 = df.merge(df.ColB.apply(do_the_thing), how = 'left', left_index = True, right_index = True)
</code></pre>
<p>Notice that this returns exactly the same result as before; we haven't changed the logic. The <code>apply</code> method sorts out the indexes, so we can simply merge.</p>
<p>I believe that answers your question in terms of speeding it up and being a bit more idiomatic. </p>
<p>I think you should consider however, what you want to do with this data structure, and how you might better structure what you're doing. </p>
<p>Given ColB could be of arbitrary length, you will end up with a dataframe with an arbitrary number of columns. When you come to access these values for whatever purpose, this will cause you pain, whatever the purpose is.</p>
<p>Are all entries in ColB important? Could you get away with just keeping the first one? Do you need to know the index of a certain valA val? </p>
<p>These are questions you should ask yourself, then decide on a structure which will allow you to do whatever analysis you need, without having to check a bunch of arbitrary things.</p>
|
python|json|pandas|dataframe
| 4
|
8,879
| 62,711,102
|
Is there an inbuilt method in python(pandas) which can simulate a single day from multiple days
|
<p>I have a time series data for solar radiation with 15 min time step values (from 1st June till 30th June) for a month. My aim is to simulate one single day from all the 30 days by taking an average of each time instants. For example, initially i have 30 different values at 11am , 11.15am, 11.45am and so on. I want to average those 30 values so that i have a single value at 11am, 11.15am, 11.45am respectively.</p>
|
<p>You can extract the minutes into a separate column and group by it:</p>
<pre><code>data['Minutes15'] = data['Date'].apply(lambda x: int(x.minute / 15) * 15)
data.groupby('Minutes15').mean()
</code></pre>
<p>Where <em>Date</em> is your date column in datetime format</p>
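<p>Since the goal is one averaged value per time of day, a sketch that groups directly on the clock time also works (assuming <em>Date</em> is already a datetime column sampled every 15 minutes):</p>
<pre><code>typical_day = data.groupby(data['Date'].dt.time).mean()
</code></pre>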
|
python|pandas|time-series
| 0
|
8,880
| 62,673,494
|
pandas How to drop the whole row if any specific columns contains a specific values?
|
<p>I have a dataFrame like this:
<a href="https://i.stack.imgur.com/uPiBk.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I wonder how to drop the whole row if any specific columns contain a specific value?</p>
<p>For example, If columns Q1, Q2 or Q3 contain zero, delete the whole row. But if columns Q4 or Q5 contain zero, do not delete the row.</p>
<p><a href="https://i.stack.imgur.com/uPiBk.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> to filter with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>any</code></a> along axis 1, and logical NOT operator <code>~</code>:</p>
<pre><code>df.loc[~df[['Q1', 'Q2', 'Q3']].eq(0).any(1)]
</code></pre>
<h3>Example</h3>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(np.random.randn(5,5), columns=['Q1', 'Q2', 'Q3', 'Q4', 'Q5'])
df.loc[1,'Q1'] = 0
df.loc[4, 'Q2'] = 0
df.loc[3, 'Q5'] = 0
</code></pre>
<p>[out]</p>
<pre><code> Q1 Q2 Q3 Q4 Q5
0 1.764052 0.400157 0.978738 2.240893 1.867558
1 0.000000 0.950088 -0.151357 -0.103219 0.410599
2 0.144044 1.454274 0.761038 0.121675 0.443863
3 0.333674 1.494079 -0.205158 0.313068 0.000000
4 -2.552990 0.000000 0.864436 -0.742165 2.269755
</code></pre>
<hr />
<pre><code># Should drop rows 1 and 4, but leave row 3
df.loc[~df[['Q1', 'Q2', 'Q3']].eq(0).any(1)]
</code></pre>
<p>[out]</p>
<pre><code> Q1 Q2 Q3 Q4 Q5
0 1.764052 0.400157 0.978738 2.240893 1.867558
2 0.144044 1.454274 0.761038 0.121675 0.443863
3 0.333674 1.494079 -0.205158 0.313068 0.000000
</code></pre>
|
python|pandas
| 1
|
8,881
| 54,654,772
|
How to change read_csv handling of empty values
|
<p>When loading a header with missing values, pandas' read_csv creates a name like <code>Unnamed: 0_level_1</code>. How would I do to replace these with empty strings?</p>
<pre><code>import pandas as pd
file = """A,B,C,C
,,C1,C2
1,2,3,4
5,6,7,8
"""
with open('test.csv', 'w') as f:
f.write(file)
df = pd.read_csv('test.csv', header=[0, 1])
print(df.columns)
</code></pre>
|
<p>You can use built-in rename, something like:</p>
<pre><code>data.rename( columns={0:'whatever you want'}, inplace=True )
</code></pre>
<p>More info <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html</a></p>
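<p>With the two-row header from the question the columns come back as a <code>MultiIndex</code>, so a sketch of replacing the auto-generated labels with empty strings could look like this (assuming every placeholder starts with <code>Unnamed</code>):</p>
<pre><code>import pandas as pd

df = pd.read_csv('test.csv', header=[0, 1])
df.columns = pd.MultiIndex.from_tuples(
    [(top, '' if sub.startswith('Unnamed') else sub) for top, sub in df.columns]
)
</code></pre>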
|
python|pandas
| 0
|
8,882
| 54,254,825
|
Python read data from csv using space sep except first column
|
<p>Hi, I'm wondering if there is a way to read data from a csv file using pandas <code>read_csv</code> where every entry is separated by a space except the first column:</p>
<pre><code>Alabama 400 300 200
New York 400 200 100
Missouri 400 200 50
District of Columbia 450 100 250
</code></pre>
<p>So there would be 4 columns, with the first being state.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with a separator that does not occur in the data, like <code>|</code>, and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.rsplit.html" rel="nofollow noreferrer"><code>str.rsplit</code></a> with parameter <code>n=3</code> to split on whitespace 3 times from the right side, and <code>expand=True</code> to get a <code>DataFrame</code>:</p>
<pre><code>import pandas as pd
temp=u"""Alabama 400 300 200
New York 400 200 100
Missouri 400 200 50
District of Columbia 450 100 250"""
#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp), sep="|", names=['Data'])
print (df)
Data
0 Alabama 400 300 200
1 New York 400 200 100
2 Missouri 400 200 50
3 District of Columbia 450 100 250
df = df['Data'].str.rsplit(n=3, expand=True)
print (df)
0 1 2 3
0 Alabama 400 300 200
1 New York 400 200 100
2 Missouri 400 200 50
3 District of Columbia 450 100 250
</code></pre>
|
python|pandas|dataframe
| 3
|
8,883
| 54,451,362
|
How to use GPUs with Ray in Pytorch? Should I specify the num_gpus for the remote class?
|
<p>When I use the Ray with pytorch, I do not set any num_gpus flag for the remote class. </p>
<p>I get the following <strong>error</strong>: </p>
<pre><code>RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
</code></pre>
<p>The main process is: I create a remote class and transfer a pytorch model <code>state_dict()(created in main function)</code> to it. In the main function, the <code>torch.cuda.is_available()</code> is <code>True</code>, but In the remote function, <code>torch.cuda.is_available()</code> is <code>False</code>. Thanks</p>
<p>I try to set the num_gpus=1 and got a new issue: the program just got stuck. Below is the minimal example code for reproducing this issue. Thanks.</p>
<pre class="lang-py prettyprint-override"><code>import ray
@ray.remote(num_gpus=1)
class Worker(object):
def __init__(self, args):
self.args = args
self.gen_frames = 0
def set_gen_frames(self, value):
self.gen_frames = value
return self.gen_frames
def get_gen_num(self):
return self.gen_frames
class Parameters:
def __init__(self):
self.is_cuda = False;
self.is_memory_cuda = True
self.pop_size = 10
if __name__ == "__main__":
ray.init()
args = Parameters()
workers = [Worker.remote(args) for _ in range(args.pop_size)]
get_num_ids = [worker.get_gen_num.remote() for worker in workers]
gen_nums = ray.get(get_num_ids)
print(gen_nums)
</code></pre>
|
<p>If you also want to deploy the model on a gpu, you need to make sure that your actor or task indeed has access to a gpu (with @ray.remote(num_gpus=1), this will make sure that torch.cuda.is_available() will be true in that remote function). If you want to deploy your model on a CPU, you need to specify that when loading the model, see for example <a href="https://github.com/pytorch/pytorch/issues/9139" rel="noreferrer">https://github.com/pytorch/pytorch/issues/9139</a>.</p>
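<p>For the CPU case, a minimal sketch (the file name and the <code>model</code> object here are placeholders, not from the question):</p>
<pre><code>import torch

# load the serialized weights onto the CPU regardless of where they were saved
state_dict = torch.load('model.pt', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)

# or, if the state_dict is passed around in memory, move its tensors to the CPU first
cpu_state_dict = {k: v.cpu() for k, v in state_dict.items()}
</code></pre>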
|
pytorch|ray
| 5
|
8,884
| 73,636,779
|
How to add column for every month and generate number i.e. 1,2,3..etc
|
<p>I have a huge csv file loaded into a dataframe. However, I don't have a date column; I only have the sales for every month from Jan-2022 until Dec-2034. Below is an example of my dataframe:</p>
<pre><code>import pandas as pd
data = [[6661, 'Mobile Phone', 43578, 5000, 78564, 52353, 67456, 86965, 43634, 32546, 56332, 58944, 98878, 68588, 43634, 3463, 74533, 73733, 64436, 45426, 57333, 89762, 4373, 75457, 74845, 86843, 59957, 74563, 745335, 46342, 463473, 52352, 23622],
[6672, 'Play Station', 4475, 2546, 5757, 2352, 57896, 98574, 53536, 56533, 88645, 44884, 76585, 43575, 74573, 75347, 57573, 5736, 53737, 35235, 5322, 54757, 74573, 75473, 77362, 21554, 73462, 74736, 1435, 4367, 63462, 32362, 56332],
[6631, 'Laptop', 35347, 36376, 164577, 94584, 78675, 76758, 75464, 56373, 56343, 54787, 7658, 76584, 47347, 5748, 8684, 75373, 57573, 26626, 25632, 73774, 847373, 736646, 847457, 57346, 43732, 347346, 75373, 6473, 85674, 35743, 45734],
[6600, 'Camera', 14365, 60785, 25436, 46747, 75456, 97644, 63573, 56433, 25646, 32548, 14325, 64748, 68458, 46537, 7537, 46266, 7457, 78235, 46223, 8747, 67453, 4636, 3425, 4636, 352236, 6622, 64625, 36346, 46346, 35225, 6436],
[6643, 'Lamp', 324355, 143255, 696954, 97823, 43657, 66686, 56346, 57563, 65734, 64484, 87685, 54748, 9868, 573, 73472, 5735, 73422, 86352, 5325, 84333, 7473, 35252, 7547, 73733, 7374, 32266, 654747, 85743, 57333, 46346, 46266]]
ds = pd.DataFrame(data, columns = ['ID', 'Product', 'SalesJan-22', 'SalesFeb-22', 'SalesMar-22', 'SalesApr-22', 'SalesMay-22', 'SalesJun-22', 'SalesJul-22', 'SalesAug-22', 'SalesSep-22', 'SalesOct-22', 'SalesNov-22', 'SalesDec-22', 'SalesJan-23', 'SalesFeb-23', 'SalesMar-23', 'SalesApr-23', 'SalesMay-23', 'SalesJun-23', 'SalesJul-23', 'SalesAug-23', 'SalesSep-23', 'SalesOct-23', 'SalesNov-23', 'SalesDec-23', 'SalesJan-24', 'SalesFeb-24', 'SalesMar-24', 'SalesApr-24', 'SalesMay-24', 'SalesJun-24', 'SalesJul-24'])
</code></pre>
<p><a href="https://i.stack.imgur.com/sEPDK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sEPDK.png" alt="enter image description here" /></a></p>
<p>Since I have more than 10 monthly sales columns, I want to loop over them and insert a date column after each monthly sales column. The first 6 months should generate the number 1, the next 12 months the number 2, the 12 months after that the number 3, the following 12 months the number 4, and so on.</p>
<p>Below shows the sample of result that I want:</p>
<p><a href="https://i.stack.imgur.com/m6PIV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m6PIV.png" alt="enter image description here" /></a></p>
<p>Is there any way to perform the loop and adding the date column beside each of the sales month?</p>
|
<p>Here is the simplest approach I can think of:</p>
<pre class="lang-py prettyprint-override"><code>for i, col in enumerate(ds.columns[2:]):
ds.insert(2 * i + 2, col.removeprefix("Sales"), (i - 6) // 12 + 2)
</code></pre>
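<p>Note that <code>str.removeprefix</code> needs Python 3.9+; on older versions slicing off the literal prefix does the same thing (this assumes every sales column really starts with <code>Sales</code>):</p>
<pre class="lang-py prettyprint-override"><code>for i, col in enumerate(ds.columns[2:]):
    ds.insert(2 * i + 2, col[len("Sales"):], (i - 6) // 12 + 2)
</code></pre>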
|
python|pandas|loops
| 1
|
8,885
| 73,559,949
|
Fetch a column value based on another column value in Pandas
|
<p>I got stuck figuring out how to extract a column value based on another column's value, as I am new to pandas dataframes.</p>
<pre><code> Name Age City
0 Jim 19 NY
1 Tom 25 LA
2 Sid 33 PH
</code></pre>
<p>How do I extract Name based on a value of City? I.e., get Sid when City = PH.</p>
|
<p>This will work:</p>
<pre class="lang-py prettyprint-override"><code>name = df[df.City=='PH'].Name.iloc[0]
</code></pre>
<p>Alternatively, you can do:</p>
<pre class="lang-py prettyprint-override"><code>name = df.query("City=='PH'").Name.iloc[0]
</code></pre>
<p>Also this:</p>
<pre class="lang-py prettyprint-override"><code>name = df.loc[df.City=='PH', 'Name'].iloc[0]
</code></pre>
<p>And one more:</p>
<pre class="lang-py prettyprint-override"><code>name = df.set_index('City').Name['PH']
</code></pre>
|
python|pandas
| 0
|
8,886
| 73,646,841
|
Python add missing values to index
|
<p>Having the following DF :</p>
<pre><code>Index Date
1D 9/13/2022
1W 9/19/2022
2W 9/26/2022
3W 10/3/2022
1M 10/12/2022
2M 11/14/2022
3M 12/12/2022
4M 1/12/2023
5M 2/13/2023
6M 3/13/2023
7M 4/12/2023
8M 5/12/2023
9M 6/12/2023
10M 7/12/2023
11M 8/14/2023
12M 9/12/2023
18M 3/12/2024
2Y 9/12/2024
3Y 9/12/2025
4Y 9/14/2026
5Y 9/13/2027
6Y 9/12/2028
7Y 9/12/2029
8Y 9/12/2030
9Y 9/12/2031
10Y 9/13/2032
12Y 9/12/2034
15Y 9/14/2037
20Y 9/12/2042
</code></pre>
<p>The idea would be to do a loop, and to do :</p>
<pre><code>if DF.index[i][-1] == 'Y':
if int(self.dfcurve.index[i+1][:-1])-int(self.dfcurve.index[i][:-1])!= 1:
###Add Missing Index:
index_val = int(self.dfcurve.index[i][:-1]) +1
index_val = str(index_val)+'Y'
### Example of missing index :
## 11Y
## 13Y
## 14Y
## 16Y
## 17Y
## 18Y
## 19Y
</code></pre>
<p>But I don't know how to add the index to the list at the right place. The final DF would be:</p>
<pre><code>Index Date
1D 9/13/2022
1W 9/19/2022
2W 9/26/2022
3W 10/3/2022
1M 10/12/2022
2M 11/14/2022
3M 12/12/2022
4M 1/12/2023
5M 2/13/2023
6M 3/13/2023
7M 4/12/2023
8M 5/12/2023
9M 6/12/2023
10M 7/12/2023
11M 8/14/2023
12M 9/12/2023
18M 3/12/2024
2Y 9/12/2024
3Y 9/12/2025
4Y 9/14/2026
5Y 9/13/2027
6Y 9/12/2028
7Y 9/12/2029
8Y 9/12/2030
9Y 9/12/2031
10Y 9/13/2032
11Y NA
12Y 9/12/2034
13Y NA
14Y NA
15Y 9/14/2037
16Y NA
17Y NA
18Y NA
19Y NA
20Y 9/12/2042
</code></pre>
|
<p>Use:</p>
<pre><code>#filter Y index values
m = df.index.str.endswith('Y')
#processing only years
df1 = df[m].copy()
#extract numbers to index
df1.index = df1.index.str.extract(r'(\d+)', expand=False).astype(int)
#reindex by range for append missing rows
df1 = df1.reindex(range(df1.index.min(), df1.index.max()+1)).rename(index=str)
#added Y substring
df1.index += 'Y'
</code></pre>
<hr />
<pre><code>print (df1)
Date
Index
2Y 9/12/2024
3Y 9/12/2025
4Y 9/14/2026
5Y 9/13/2027
6Y 9/12/2028
7Y 9/12/2029
8Y 9/12/2030
9Y 9/12/2031
10Y 9/13/2032
11Y NaN
12Y 9/12/2034
13Y NaN
14Y NaN
15Y 9/14/2037
16Y NaN
17Y NaN
18Y NaN
19Y NaN
20Y 9/12/2042
</code></pre>
<hr />
<pre><code>#remove Y original rows from Dataframe and append new Y rows
df = pd.concat([df[~m], df1])
print (df)
</code></pre>
<p>This alternative solution adds the missing values for all categories:</p>
<pre><code>df.index = pd.MultiIndex.from_frame(df.index.str.extract(r'(\d+)(\D+)', expand=True))
f = lambda x: x.reindex(range(x.index.min(), x.index.max()+1))
df = df.reset_index(1).rename(index=int).groupby(1).apply(f).drop(1, axis=1)
df.index = df.index.map(lambda x: f'{x[0]}{x[1]}')
</code></pre>
<hr />
<pre><code>print (df)
Date
D1 9/13/2022
M1 10/12/2022
M2 11/14/2022
M3 12/12/2022
</code></pre>
|
python|pandas
| 2
|
8,887
| 71,436,163
|
Pandas version upgrade causing value error while using groupby and aggregate max
|
<p>A and B are non-numeric columns and don't have NaN values. However, the dataframe has NaN values in other columns.</p>
<p>I found a related link in the github issues: <a href="https://github.com/pandas-dev/pandas/issues/32077" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/32077</a>. I am not sure if it is relevant, but I think the upgrade is causing the issue.</p>
<p><code>trepos = prdf.groupby(['A','B']).agg('max').reset_index()[['A', 'B']].apply(lambda x: f'{x.A}/{x.B}', axis=1).values</code></p>
<p>I want to migrate the code from older pandas version to 1.1.5 version of pandas.</p>
<p>The above code works fine in pandas 0.22.0. However, it breaks in pandas 1.1.5. Following is the error:</p>
<pre><code>/tmp/ipykernel_283/1981918777.py in <module>
1 # release tags
----> 2 trepos = prdf.groupby(['A','B']).agg('max').reset_index()[['A', 'B']]#.apply(lambda x: f'{x.A}/{x.B}', axis=1).values
/opt/conda/lib/python3.7/site-packages/pandas/core/groupby/generic.py in aggregate(self, func, engine, engine_kwargs, *args, **kwargs)
949 As usual, the aggregation can be a callable or a string alias.
950
--> 951 See :ref:`groupby.aggregate.named` for more.
952
953 .. versionchanged:: 1.3.0
/opt/conda/lib/python3.7/site-packages/pandas/core/base.py in _aggregate(self, arg, *args, **kwargs)
305 # We need this defined here for mypy
306 raise AbstractMethodError(self)
--> 307
308 @property
309 def ndim(self) -> int:
/opt/conda/lib/python3.7/site-packages/pandas/core/base.py in _try_aggregate_string_function(self, arg, *args, **kwargs)
261 """
262
--> 263 # ndarray compatibility
264 __array_priority__ = 1000
265 _hidden_attrs: frozenset[str] = frozenset(
/opt/conda/lib/python3.7/site-packages/pandas/core/groupby/groupby.py in max(self, numeric_only, min_count)
1558 @final
1559 @Substitution(name="groupby")
-> 1560 @Appender(_common_see_also)
1561 def any(self, skipna: bool = True):
1562 """
/opt/conda/lib/python3.7/site-packages/pandas/core/groupby/groupby.py in _agg_general(self, numeric_only, min_count, alias, npfunc)
999 # Dispatch/Wrapping
1000
-> 1001 @final
1002 def _concat_objects(self, keys, values, not_indexed_same: bool = False):
1003 from pandas.core.reshape.concat import concat
/opt/conda/lib/python3.7/site-packages/pandas/core/groupby/generic.py in _cython_agg_general(self, how, alt, numeric_only, min_count)
1020
1021 if isinstance(sobj, Series):
-> 1022 # GH#35246 test_groupby_as_index_select_column_sum_empty_df
1023 result.columns = self._obj_with_exclusions.columns.copy()
1024 else:
/opt/conda/lib/python3.7/site-packages/pandas/core/groupby/generic.py in _cython_agg_blocks(self, how, alt, numeric_only, min_count)
1122
1123 def _aggregate_item_by_item(self, func, *args, **kwargs) -> DataFrame:
-> 1124 # only for axis==0
1125 # tests that get here with non-unique cols:
1126 # test_resample_with_timedelta_yields_no_empty_groups,
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/blocks.py in make_block(self, values, placement)
252 if placement is None:
253 placement = self._mgr_locs
--> 254 if self.is_extension:
255 values = ensure_block_shape(values, ndim=self.ndim)
256
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/blocks.py in make_block(values, placement, klass, ndim, dtype)
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/blocks.py in __init__(self, values, placement, ndim)
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/blocks.py in __init__(self, values, placement, ndim)
129 """
130 If we have a multi-column block, split and operate block-wise. Otherwise
--> 131 use the original method.
132 """
133
ValueError: Wrong number of items passed 4, placement implies 5
</code></pre>
<p>For example, the code below works fine in 0.22.0:</p>
<pre><code>import numpy as np
import pandas as pd
df_simple_max = pd.DataFrame({'key': ['a','a','b','b','c','c'], 'data' : ['e','e','f','f','g','g'],
                              'good_string' : ['cat','dog','cat','dog','fish','pig'],
                              'bad_string' : ['cat',np.nan,np.nan, np.nan, np.nan, np.nan]})
df_simple_max.groupby(['key','data']).agg('max').reset_index()[['key', 'data']].apply(lambda x: f'{x.key}/{x.data}', axis=1).values
</code></pre>
<p>And the output is:</p>
<pre><code>array(['a/<memory at 0x7fb181255108>', 'b/<memory at 0x7fb181255108>',
       'c/<memory at 0x7fb181255108>'], dtype=object)
</code></pre>
<p>But it breaks on pandas version 1.1.5.</p>
|
<p>Pandas version 1.1.5 has a bug when aggregating <code>max</code> on groupby DataFrames. This was fixed in 1.3.1; running the above code works fine on pandas 1.3.1, hence closing the ticket.</p>
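<p>For example, upgrading in place (the exact command depends on your environment):</p>
<pre><code>pip install --upgrade "pandas>=1.3.1"
</code></pre>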
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 1
|
8,888
| 71,409,586
|
Efficiently combining groupby, last and count in pandas
|
<p>From a list of logs, I want to get the number of active events at each timestamp for a specific event type.</p>
<p>A sample log input looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>id</th>
<th>event</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-03-01 10:00</td>
<td>1</td>
<td>A</td>
</tr>
<tr>
<td>2022-03-01 11:00</td>
<td>2</td>
<td>B</td>
</tr>
<tr>
<td>2022-03-01 12:00</td>
<td>3</td>
<td>A</td>
</tr>
<tr>
<td>2022-03-01 13:00</td>
<td>1</td>
<td>B</td>
</tr>
<tr>
<td>2022-03-01 14:00</td>
<td>4</td>
<td>A</td>
</tr>
<tr>
<td>2022-03-01 15:00</td>
<td>2</td>
<td>C</td>
</tr>
<tr>
<td>2022-03-01 16:00</td>
<td>1</td>
<td>A</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p><strong>What I want is basically how many ids have event A active at each time in the df, like in the table below.</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>eventA</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-03-01 10:00</td>
<td>1</td>
</tr>
<tr>
<td>2022-03-01 11:00</td>
<td>1</td>
</tr>
<tr>
<td>2022-03-01 12:00</td>
<td>2</td>
</tr>
<tr>
<td>2022-03-01 13:00</td>
<td>1</td>
</tr>
<tr>
<td>2022-03-01 14:00</td>
<td>2</td>
</tr>
<tr>
<td>2022-03-01 15:00</td>
<td>2</td>
</tr>
<tr>
<td>2022-03-01 16:00</td>
<td>3</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>I achieved this with some basic pandas operations:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{
"time": pd.date_range("2022-03-01 10:00", periods=7, freq="H"),
"id": [1, 2, 3, 1, 4, 2, 1],
"event": ["A", "B", "A", "B", "A", "C", "A"],
}
)
</code></pre>
<pre><code>timestamps = df.time
values = []
for timestamp in timestamps:
filtered_df = df.loc[df.time <= timestamp]
    eventA = filtered_df.groupby("id").last().groupby("event").count()["time"]["A"]
values.append({"time": timestamp, "eventA": eventA})
df_count = pd.DataFrame(values)
</code></pre>
<p>In my case though, I have to go over >50,000 rows and this basic approach becomes very inefficient time-wise.</p>
<p>Is there a better approach to achieve the desired result? I guess there might be some pandas groupby aggregation methods that could help here, but i found none that helped me.</p>
|
<pre><code>df.set_index(['time', 'id']).unstack().fillna(method='ffill')\
.stack().value_counts(['time', 'event']).unstack().fillna(0)
</code></pre>
<p>The first line takes care of getting the latest event from each <code>id</code> at each hour by forward-filling the <code>NaN</code>s</p>
<pre><code> event
id 1 2 3 4
time
2022-03-01 10:00:00 A NaN NaN NaN
2022-03-01 11:00:00 A B NaN NaN
2022-03-01 12:00:00 A B A NaN
2022-03-01 13:00:00 B B A NaN
2022-03-01 14:00:00 B B A A
2022-03-01 15:00:00 B C A A
2022-03-01 16:00:00 A C A A
</code></pre>
<p>The second line does the counting and thus</p>
<pre><code>event A B C
time
2022-03-01 10:00:00 1.0 0.0 0.0
2022-03-01 11:00:00 1.0 1.0 0.0
2022-03-01 12:00:00 2.0 1.0 0.0
2022-03-01 13:00:00 1.0 2.0 0.0
2022-03-01 14:00:00 2.0 2.0 0.0
2022-03-01 15:00:00 2.0 1.0 1.0
2022-03-01 16:00:00 3.0 0.0 1.0
</code></pre>
|
python|pandas|pandas-groupby
| 1
|
8,889
| 52,331,427
|
numpy linspace returning negative numbers for a positive interval
|
<pre><code>np.linspace(10**3, 10**6, num=5, dtype=np.int16)
</code></pre>
<p>yelds</p>
<pre><code>array([ 1000, -11394, -23788, 29354, 16960], dtype=int16)
</code></pre>
<p>I don't understand the presence of negative numbers in a positive interval.</p>
<p>Can anyone point me to what I'm missing? (And also, how can I get linearly spaced numbers over long integer ranges like these?)</p>
<p>Thanks!</p>
|
<p>As mentioned in the comments, the reason for this is <a href="https://en.wikipedia.org/wiki/Integer_overflow" rel="nofollow noreferrer">overflow</a>. </p>
<p>More specifically, you asked for numbers between 1E3 and 1E6, but <code>int16</code> supports values in the range <code>[-32768, 32767]</code>. When we try to represent a number like <code>40000</code> using <code>int16</code>, the value wraps and what we get is <code>40000-2**16 == -25536</code>. Large numbers keep "wrapping" until they are small enough to be represented. </p>
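<p>To actually get evenly spaced integers over that range, use a dtype wide enough to hold 10**6, for example:</p>
<pre><code>np.linspace(10**3, 10**6, num=5, dtype=np.int64)
# array([   1000,  250750,  500500,  750250, 1000000])
</code></pre>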
|
python|numpy
| 0
|
8,890
| 52,047,906
|
Tensorflow how to read tensor from OpenCV image frame
|
<p>Tensorflow provides a <code>label_image.py</code> implementation for inference of an image. This works great for images on disk. But i have a case wherein I am reading a streaming video from a webcam and I would like to run inference on each image frame to detect the object in the camera feed.</p>
<p>Currently <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/label_image.py#L38" rel="nofollow noreferrer">label_image.py</a> only accepts an image on disk and using <code>read_tensor_from_image_file</code> converts it into a Tensor. How do i get a Tensor with the necessary pre-processing as being done in <code>read_tensor_from_image_file</code> from the Open CV image frame that i have in memory?</p>
|
<p>There are 3 ways; you can:</p>
<ol>
<li>feed a numpy array to TensorFlow directly, since the OpenCV Python wrapper already reads images as numpy arrays (see the sketch below),</li>
<li>use <code>tf.py_func</code> when you don't want to feed your input because of a performance issue,</li>
<li>or, if you use C++, define a custom op that wraps an OpenCV <code>Mat</code> into a tensor.</li>
</ol>
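<p>A sketch of option 1, mirroring what <code>read_tensor_from_image_file</code> does but starting from an in-memory frame; the sizes and normalisation constants below are assumptions, so use whatever your graph expects:</p>
<pre><code>import cv2
import numpy as np

def tensor_from_frame(frame, height=299, width=299, mean=0, std=255):
    # OpenCV frames are BGR uint8; most TF image graphs expect RGB floats
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    normalized = (resized.astype(np.float32) - mean) / std
    # add the batch dimension the graph expects
    return np.expand_dims(normalized, axis=0)

# feed the result where label_image.py feeds its file-based tensor, e.g.
# sess.run(output_operation.outputs[0],
#          {input_operation.outputs[0]: tensor_from_frame(frame)})
</code></pre>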
|
opencv|tensorflow|computer-vision
| 0
|
8,891
| 60,366,773
|
Fill dataframe column with a value if multiple columns match values in a dictionary
|
<p>I have two dataframes - one large dataframe with multiple categorical columns and one column with missing values, and another that's sort of a dictionary with the same categorical columns and one column with a key value.</p>
<p>Essentially, I want to fill the missing values in the large dataframe with the key value in the second if all the categorical columns match. </p>
<p>Missing value df:</p>
<pre><code> Color Number Letter Value
0 Red 2 B NaN
1 Green 2 A NaN
2 Red 2 B NaN
3 Red 1 B NaN
4 Green 1 A NaN
5 Red 2 B NaN
6 Green 1 B NaN
7 Green 2 A NaN
</code></pre>
<p>Dictionary df:</p>
<pre><code> Color Number Letter Value
0 Red 1 A 10
1 Red 1 B 4
2 Red 2 A 3
3 Red 2 B 15
4 Green 1 A 21
5 Green 1 B 9
6 Green 2 A 22
7 Green 2 B 1
</code></pre>
<p>Desired df:</p>
<pre><code>0 Red 2 B 15
1 Green 2 A 22
2 Red 2 B 15
3 Red 1 B 4
4 Green 1 A 21
5 Red 2 B 15
6 Green 1 B 9
7 Green 2 A 22
</code></pre>
<p>I'm not sure if I should have the 'dictionary df' as an actual dictionary, or keep it as a dataframe (it's pulled from a csv).</p>
<p>Is this possible to do cleanly without a myriad of if else statements?</p>
<p>Thanks!</p>
|
<p>Try:</p>
<pre><code>missing_df.reset_index()[['index', 'Color', 'Number', 'Letter']]\
.merge(dict_df, on = ['Color', 'Number', 'Letter'])\
.set_index('index').reindex(missing_df.index)
</code></pre>
<p>Output:</p>
<pre><code> Color Number Letter Value
0 Red 2 B 15
1 Green 2 A 22
2 Red 2 B 15
3 Red 1 B 4
4 Green 1 A 21
5 Red 2 B 15
6 Green 1 B 9
7 Green 2 A 22
</code></pre>
|
python|pandas|dataframe|dictionary
| 1
|
8,892
| 60,457,833
|
Conditional Pandas Statement and Apply
|
<p>I can't share a lot of my code, but I'm using a conditional statement to then pass a function to a column in my dataframe. I'm getting a Database error <code>(<cx_Oracle._Error object at 0x00000114A0BE38D0></code>, <code>'occurred at index 880')</code>. </p>
<pre><code>def my_new_func(row):
return RCCheck.nsamcheck(
sk=row['sk'], reportseries=row['reportseries'], rssd=row['rssd'], username=username, pw=password)
NSAM.loc[(NSAM['Security_Description']=='Update External User - Reporting') & (NSAM.Analyst.isin(analyst_list)), 'Action'] = NSAM.apply(my_new_func, axis=1)
</code></pre>
<p>The error message indicates there is a problem at index 880, which is the first row of my dataframe, but does not have the conditions expressed in my Boolean indexing above. My question is why is it applying the function to my entire data frame as opposed to those I am filtering for?</p>
|
<p>Figured it out:</p>
<pre><code>NSAM['Action'] = NSAM.loc[(NSAM['Security_Description'] == 'Update External User - Reporting') & (NSAM.Analyst.isin(analyst_list))].apply(my_new_func, axis=1)
</code></pre>
<p>In my original code I used <code>NSAM.apply(my_new_func, axis=1)</code>, which applies the function to the whole dataframe rather than the filtered version.</p>
|
python|pandas|dataframe
| 0
|
8,893
| 60,543,446
|
I can't save my Dataframe to Cloud Storage
|
<pre><code>def save_csv_to_cloud_storage(df,file_name,folder='output'):
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/Users/******/Desktop/*****.json'
storage_client = storage.Client()
bucket = storage_client.get_bucket('fluxlengow')
now = datetime.now()
dt_string = now.strftime("%Y%m%d-%H%M%S")
f = StringIO()
df.to_csv(f,sep=',')
f.seek(0)
Blob('{}/{}_{}_.csv'.format(folder, dt_string, file_name),bucket).upload_from_file(f,content_type='text/csv')
def lengowToStorage():
liste = ['https://httpnas.****.*****/******/SUP****/*******_FR.csv','https://httpnas.****.*****/******/SUP****/*******_UK.csv','https://httpnas.****.*****/******/SUP****/*******_IT.csv']
for i in liste :
name = i.split('/')[-1]
name = name.split('.')[0]
CSV_URL = '{}'.format(i)
with requests.Session() as s:
download = s.get(CSV_URL)
decoded_content = download.content.decode('utf8')
cr = csv.reader(decoded_content.splitlines(), delimiter='|')
my_list = list(cr)
df_ = pd.DataFrame(my_list, columns=my_list[0]).drop(0)
save_csv_to_cloud_storage(df_,file_name=name,folder='input')
print("recuperation du fichier : {}".format(i))
lengowToStorage()
</code></pre>
<p>Hi, I'm really stuck on this encoding issue.
I'm trying to send my dataframe to Cloud Storage as a CSV file.
Unfortunately, when I try to save it to storage I get this error:</p>
<pre><code>'latin-1' codec can't encode character '\u2019' in position 32318: Body ('’') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
</code></pre>
<p>I then applied "utf-8" encoding to each column (not shown in the code) and the save to Cloud Storage does work, but my data end up in this format:</p>
<pre><code>b'Antaeus Lotion Apr\xc3\xa8s Rasage 100ml'
</code></pre>
<p>I decode UTF8, I encode UTF8 ...
but can't get my data into the string version I want... </p>
<pre><code>'Antaeus Lotion Après Rasage 100ml'
</code></pre>
<p>If you could help, I'd be very grateful.</p>
|
<p>I strongly suggest using the <code>gcsfs</code> package, which lets pandas write to the bucket directly, given the URL:</p>
<pre class="lang-py prettyprint-override"><code>def store_dataframe(df, filename, path = "news"):
url = f"gs://{BUCKET_NAME}/{path}/{filename}"
df.to_csv(url)
</code></pre>
|
python|pandas|utf-8|character-encoding|google-cloud-storage
| 1
|
8,894
| 59,805,321
|
What is the best way to check if the last rows of a pandas dataframe meet a condition?
|
<p>I got stuck trying to create a new column that is a check column based on the 'signal' column. If the last five rows (including the current one) are all 1, it should return 1; if the last five rows are all 0, it should return 0; otherwise it should keep the last value of check.</p>
<p>I have the following data frame:</p>
<pre><code> signal
index
0 1
1 1
2 1
3 1
4 1
5 1
6 0
7 0
8 0
9 0
10 0
11 0
12 0
13 1
14 0
15 1
16 1
17 1
18 1
19 1
</code></pre>
<p>I'd like something like this:</p>
<pre><code> signal check
index
0 1 1
1 1 1
2 1 1
3 1 1
4 1 1
5 1 1
6 0 1
7 0 1
8 0 1
9 0 1
10 0 0
11 0 0
12 0 0
13 1 0
14 0 0
15 1 0
16 1 0
17 1 0
18 1 0
19 1 1
</code></pre>
<p>I would appreciate any kind of help!</p>
<p>Thank you!</p>
|
<p>You want to use a rolling window over your dataframe, followed by <code>fillna</code>:</p>
<pre><code>import numpy as np

def allSame(x):
if (x == 1).all():
return 1.0
elif (x == 0).all():
return 0.0
else:
return np.nan
df['signal'] = df.rolling(5).apply(allSame, raw=False).fillna(method="ffill")
</code></pre>
<p><code>rolling</code> returns a rolling window object over a number of elements (5 in this case). The window object is similar to a dataframe, but instead of having rows it has windows over the original dataframe's rows. We can use its <code>apply</code> method to convert each rolling window to a value, converting the rolling window object to a dataframe. The <code>apply</code> method takes a function that can convert from an ndarray to an appropriate output value.</p>
<p>Here we pass to <code>apply</code> a function that returns 1 or 0 if the 5 rows in the window are all 1 or 0 respectively, and otherwise returns NaN. As a result we get a new dataframe with values that are either 1, 0, or NaN. We then use <code>fillna</code> on this dataframe to overwrite the NaN values with the first preceeding 1 or 0 value. Finally, we merge the resulting dataframe back into the original dataframe, creating the "signal" column.</p>
|
python|pandas|dataframe
| 3
|
8,895
| 40,456,416
|
error on importing tensorflow in ubuntu
|
<p>I have <code>ubuntu 14.04</code>. When I run <code>import tensorflow</code>:</p>
<pre><code>>>> import tensorflow
</code></pre>
<p>I get this error:</p>
<pre><code>RuntimeError: module compiled against API version 0xa but this version of numpy is 0x9
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: numpy.core.multiarray failed to import
</code></pre>
<p>What is the problem and how can I resolve it?</p>
|
<p>You need to upgrade numpy: TensorFlow was compiled against numpy C-API version 10 (0xa), while your installed numpy only provides version 9 (0x9).</p>
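<p>For example (the exact command depends on how numpy was installed):</p>
<pre><code>pip install --upgrade numpy
</code></pre>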
|
ubuntu|import|tensorflow
| 1
|
8,896
| 40,683,200
|
Group column data into Week in Python
|
<p>I have 4 columns which have Date , Account #, Quantity and Sale respectively. I have daily data but I want to be able to show Weekly Sales per Customer and the Quantity.
I have been able to group the column by week, but I also want to group it by OracleNumber, and <strong>Sum</strong> the Quantity and Sales columns. How would I get that to work without messing up the Week format.</p>
<pre><code>import pandas as pd
names = ['Date','OracleNumber','Quantity','Sale']
sales = pd.read_csv("CustomerSalesNVG.csv",names=names)
sales['Date'] = pd.to_datetime(sales['Date'])
grouped=sales.groupby(sales['Date'].map(lambda x:x.week))
print(grouped.head())
</code></pre>
|
<p>IIUC, you could <code>groupby</code> w.r.t the week column and <em>OracleNumber</em> column by providing an extra key to the <code>list</code> for which the Groupby object has to use and perform <code>sum</code> operation later:</p>
<pre><code>sales.groupby([sales['Date'].dt.week, 'OracleNumber']).sum()
</code></pre>
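<p>On newer pandas versions <code>Series.dt.week</code> is deprecated; <code>dt.isocalendar().week</code> is the drop-in replacement:</p>
<pre><code>sales.groupby([sales['Date'].dt.isocalendar().week, 'OracleNumber']).sum()
</code></pre>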
|
python|python-3.x|pandas
| 2
|
8,897
| 61,718,049
|
Error when running (open-mmlab) C:\mmdetection>python setup.py develop - raise RuntimeError(message)
|
<p>I get the following error in the last stage of the mmdetection install from
<a href="https://github.com/open-mmlab/mmdetection/blob/master/docs/install.md" rel="nofollow noreferrer">https://github.com/open-mmlab/mmdetection/blob/master/docs/install.md</a></p>
<p>when running</p>
<pre><code>C:\...\mmdetection\python setup.py develop
running develop
running egg_info
writing mmdet.egg-info\PKG-INFO
writing dependency_links to mmdet.egg-info\dependency_links.txt
writing requirements to mmdet.egg-info\requires.txt
writing top-level names to mmdet.egg-info\top_level.txt
reading manifest file 'mmdet.egg-info\SOURCES.txt'
writing manifest file 'mmdet.egg-info\SOURCES.txt'
running build_ext
C:\...\torch\utils\cpp_extension.py:237: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'mmdet.ops.utils.compiling_info' extension
Emitting ninja build file C:\...\mmdetection\build\temp.win-amd64-3.7\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
FAILED: C:/.../mmdetection/build/temp.win-amd64-3.7/Release/mmdet/ops/utils/src/compiling_info.obj
cl /showIncludes /nologo /Ox /W3 /GL /DNDEBUG /MD /MD /wd4819 /EHsc -DWITH_CUDA -IC:\torch\include -
CreateProcess failed: The system cannot find the file specified.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:...\torch\utils\cpp_extension.py", line 1400, in _run_ninja_build
check=True)
File "C:\anaconda3\envs\open-mmlab\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "setup.py", line 300, in <module>
zip_safe=False)
File "C:...\setuptools\__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "C:\...\anaconda3\envs\open-mmlab\lib\distutils\core.py", line 148, in
File "C:\...\torch\utils\cpp_extension.py", line 1140, in _write_ninja_file_and_compile_objects
..............
error_prefix='Error compiling objects for extension')
File "C:\...\torch\utils\cpp_extension.py", line 1413, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
</code></pre>
<p>I was wondering if anyone knows what could be the cause of this error message ?</p>
<p>I am using windows 10</p>
|
<p>Following the instructions at <a href="https://github.com/open-mmlab/mmdetection/blob/v2.0.0/docs/install.md" rel="nofollow noreferrer">https://github.com/open-mmlab/mmdetection/blob/v2.0.0/docs/install.md</a>, but this time switching the github tag to v2.0.0 and cloning that, then typing</p>
<pre><code>pip install -v -e .
</code></pre>
<p>gives the following:</p>
<pre><code>#sha256=a37ee82f1b8ed4b4645619c504311e71ce845b78f40055e78d71add5fab7da82 (from https://pypi.org/simple/opencv-python/)
Skipping link: none of the wheel's tags match: cp36-cp36m-manylinux1_x86_64: https://files.pythonhosted.org/packages/72/c2/e9cf54ae5b1102020ef895866a67cb2e1aef72f16dd1fde5b5fb1495ad9c/opencv_python-4.2.0.34-cp36-cp36m-manylinux1_x86_64.whl#sha256=dcb8da8c5ebaa6360c8555547a4c7beb6cd983dd95ba895bb78b86cc8cf3de2b (from https://pypi.org/simple/opencv-python/)
... lots more link skipping
Skipping link: none of the wheel's tags match: cp38-cp38-win32: https://files.pythonhosted.org/packages/e6/d6/516883f8d2f255c41d8c560ef70c91085f2ceac7b70b7afe41432bd8adbb/opencv_python-4.2.0.34-cp38-cp38-win32.whl#sha256=1ab92d807427641ec45d28d5907426aa06b4ffd19c5b794729c74d91cd95090e (from https://pypi.org/simple/opencv-python/)
Skipping link: none of the wheel's tags match: cp38-cp38-win_amd64: https://files.pythonhosted.org/packages/df/9e/56d8b98652ecac8c8f9e59b7f00d5d99a9fa86661adcf324b8dc73351a6b/opencv_python-4.2.0.34-cp38-cp38-win_amd64.whl#sha256=e2206bb8c17c0f212f1f356d82d72dd090ff4651994034416da9bf0c29732825 (from https://pypi.org/simple/opencv-python/)
Given no hashes to check 18 links for project 'opencv-python': discarding no candidates
Using version 4.2.0.34 (newest of versions: bunch of versions....)
Collecting opencv-python>=3
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpack-y8apfmt3
Looking up "https://files.pythonhosted.org/packages/85/17/bad54f67bbe27d88ba520c3f59315e95b4e254cd28767c20accacb0597d8/opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl" in the cache
Current age based on date: 1902095
Ignoring unknown cache-control directive: immutable
Freshness lifetime from max-age: 365000000
The response is "fresh", returning cached response
365000000 > 1902095
Using cached opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl (33.0 MB)
Added opencv-python>=3 from https://files.pythonhosted.org/packages/85/17/bad54f67bbe27d88ba520c3f59315e95b4e254cd28767c20accacb0597d8/opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl#sha256=d8a55585631f9c9eca4b1a996e9732ae023169cf2f46f69e4518d67d96198226 (from mmcv>=0.5.1->mmdet==2.0.0+unknown) to build tracker 'C:\\Users\\OneWorld\\AppData\\Local\\Temp\\pip-req-tracker-6eq3f7le'
Removed opencv-python>=3 from https://files.pythonhosted.org/packages/85/17/bad54f67bbe27d88ba520c3f59315e95b4e254cd28767c20accacb0597d8/opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl#sha256=d8a55585631f9c9eca4b1a996e9732ae023169cf2f46f69e4518d67d96198226 (from mmcv>=0.5.1->mmdet==2.0.0+unknown) from build tracker 'C:\\Users\\OneWorld\\AppData\\Local\\Temp\\pip-req-tracker-6eq3f7le'
Requirement already satisfied: future in c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages (from torch>=1.3->mmdet==2.0.0+unknown) (0.18.2)
Building wheels for collected packages: mmcv
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-wheel-93yp2cnh
Building wheel for mmcv (setup.py) ... Destination directory: C:\Users\OneWorld\AppData\Local\Temp\pip-wheel-93yp2cnh
Running command 'C:\Users\OneWorld\anaconda3\envs\open-mmlab2\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\OneWorld\\AppData\\Local\\Temp\\pip-install-voimjxde\\mmcv\\setup.py'"'"'; __file__='"'"'C:\\Users\\OneWorld\\AppData\\Local\\Temp\\pip-install-
amd64-3.7\mmcv\video\optflow_warp
copying mmcv\video\optflow_warp\flow_warp_module.pyx -> build\lib.win-amd64-3.7\mmcv\video\optflow_warp
....etc.
running build_ext
skipping './mmcv/video/optflow_warp\flow_warp_module.cpp' Cython extension (up-to-date)
building 'mmcv._ext' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\mmcv
creating build\temp.win-amd64-3.7\Release\mmcv\video
creating build\temp.win-amd64-3.7\Release\mmcv\video\optflow_warp
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\numpy\core\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /EHsc /Tp./mmcv/video/optflow_warp/flow_warp.cpp /Fobuild\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp/flow_warp.obj
flow_warp.cpp
./mmcv/video/optflow_warp/flow_warp.cpp(37): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
./mmcv/video/optflow_warp/flow_warp.cpp(38): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
./mmcv/video/optflow_warp/flow_warp.cpp(59): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
./mmcv/video/optflow_warp/flow_warp.cpp(60): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\numpy\core\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /EHsc /Tp./mmcv/video/optflow_warp\flow_warp_module.cpp /Fobuild\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp\flow_warp_module.obj
flow_warp_module.cpp
c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\Users\OneWorld\anaconda3\envs\open-mmlab2\libs /LIBPATH:C:\Users\OneWorld\anaconda3\envs\open-mmlab2\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\um\x64" /EXPORT:PyInit__ext build\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp/flow_warp.obj build\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp\flow_warp_module.obj /OUT:build\lib.win-amd64-3.7\mmcv\_ext.cp37-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp\_ext.cp37-win_amd64.lib
flow_warp_module.obj : warning LNK4197: export 'PyInit__ext' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp\_ext.cp37-win_amd64.lib and object build\temp.win-amd64-3.7\Release\./mmcv/video/optflow_warp\_ext.cp37-win_amd64.exp
Generating code
Finished generating code
installing to build\bdist.win-amd64\wheel
running install
running install_lib
creating build\bdist.win-amd64
creating build\bdist.win-amd64\wheel
creating build\bdist.win-amd64\wheel\mmcv
creating build\bdist.win-amd64\wheel\mmcv\arraymisc
copying build\lib.win-amd64-3.7\mmcv\arraymisc\quantization.py -> build\bdist.win-amd64\wheel\.\mmcv\arraymisc
...
... bunch of copying and creating...
...
running install_egg_info
Copying mmcv.egg-info to build\bdist.win-amd64\wheel\.\mmcv-0.5.8-py3.7.egg-info
running install_scripts
C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\wheel\pep425tags.py:82: RuntimeWarning: Config variable 'Py_DEBUG' is unset, Python ABI tag may be incorrect
warn=(impl == 'cp')):
C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\wheel\pep425tags.py:87: RuntimeWarning: Config variable 'WITH_PYMALLOC' is unset, Python ABI tag may be incorrect
sys.version_info < (3, 8))) \
creating build\bdist.win-amd64\wheel\mmcv-0.5.8.dist-info\WHEEL
creating 'C:\Users\OneWorld\AppData\Local\Temp\pip-wheel-93yp2cnh\mmcv-0.5.8-cp37-cp37m-win_amd64.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'mmcv/__init__.py'
adding 'mmcv/_ext.cp37-win_amd64.pyd'
adding 'mmcv/version.py'
adding 'mmcv/arraymisc/__init__.py'
adding 'mmcv/arraymisc/quantization.py'
adding 'mmcv/cnn/__init__.py'
etc...
adding 'mmcv/visualization/optflow.py'
adding 'mmcv-0.5.8.dist-info/METADATA'
adding 'mmcv-0.5.8.dist-info/WHEEL'
adding 'mmcv-0.5.8.dist-info/top_level.txt'
adding 'mmcv-0.5.8.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
done
Created wheel for mmcv: filename=mmcv-0.5.8-cp37-cp37m-win_amd64.whl size=184354 sha256=023fa1fdb01a9fbf7d833975737bc93003c5f1c813ce8c4ae27340b19ddb9cc3
Stored in directory: c:\users\OneWorld\appdata\local\pip\cache\wheels\cc\7c\4c\a2cc81d990c63b3d157ab3c6e2cc4b5b298c0a4a13e6a46e38
Successfully built mmcv
Installing collected packages: addict, pyyaml, yapf, opencv-python, mmcv, Pillow, terminaltables, mmdet
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-n1vstot6
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-qqv40s97
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-yk9n5qrl
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-dmbql_3c
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-6vlnr7x0
Attempting uninstall: Pillow
Found existing installation: Pillow 7.1.2
Uninstalling Pillow-7.1.2:
Created temporary directory: c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\~il
Removing file or directory c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pil\
Created temporary directory: c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\~illow-7.1.2.dist-info
Removing file or directory c:\users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pillow-7.1.2.dist-info\
Successfully uninstalled Pillow-7.1.2
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-thrgvqkj
Created temporary directory: C:\Users\OneWorld\AppData\Local\Temp\pip-unpacked-wheel-9orxl8mp
Running setup.py develop for mmdet
Running command 'C:\Users\OneWorld\anaconda3\envs\open-mmlab2\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"'; __file__='"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
running develop
running egg_info
writing mmdet.egg-info\PKG-INFO
writing dependency_links to mmdet.egg-info\dependency_links.txt
writing requirements to mmdet.egg-info\requires.txt
writing top-level names to mmdet.egg-info\top_level.txt
reading manifest file 'mmdet.egg-info\SOURCES.txt'
writing manifest file 'mmdet.egg-info\SOURCES.txt'
running build_ext
building 'mmdet.ops.utils.compiling_info' extension
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet\ops
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet\ops\utils
creating C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet\ops\utils\src
Emitting ninja build file C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py:237: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
[1/1] cl /showIncludes /nologo /Ox /W3 /GL /DNDEBUG /MD /MD /wd4819 /EHsc -DWITH_CUDA -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\TH -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" -c C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\mmdet\ops\utils\src\compiling_info.cpp /FoC:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet\ops\utils\src/compiling_info.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=compiling_info -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++14
FAILED: C:/Users/OneWorld/Documents/DeepLearning/VideoObjectSegmentation/mmdetection-2.0.0/build/temp.win-amd64-3.7/Release/mmdet/ops/utils/src/compiling_info.obj
cl /showIncludes /nologo /Ox /W3 /GL /DNDEBUG /MD /MD /wd4819 /EHsc -DWITH_CUDA -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\TH -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include -IC:\Users\OneWorld\anaconda3\envs\open-mmlab2\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" -c C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\mmdet\ops\utils\src\compiling_info.cpp /FoC:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\build\temp.win-amd64-3.7\Release\mmdet\ops\utils\src/compiling_info.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=compiling_info -D_GLIBCXX_USE_CXX11_ABI=0 /std:c++14
CreateProcess failed: The system cannot find the file specified.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py", line 1400, in _run_ninja_build
check=True)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\OneWorld\Documents\DeepLearning\VideoObjectSegmentation\mmdetection-2.0.0\setup.py", line 300, in <module>
zip_safe=False)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\setuptools\__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\setuptools\command\develop.py", line 38, in run
self.install_for_development()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\setuptools\command\develop.py", line 140, in install_for_development
self.run_command('build_ext')
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\setuptools\command\build_ext.py", line 87, in run
_build_ext.run(self)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\command\build_ext.py", line 340, in run
self.build_extensions()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py", line 580, in build_extensions
build_ext.build_extensions(self)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\command\build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\setuptools\command\build_ext.py", line 208, in build_extension
_build_ext.build_extension(self, ext)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\distutils\command\build_ext.py", line 534, in build_extension
depends=ext.depends)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py", line 562, in win_wrap_ninja_compile
with_cuda=with_cuda)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py", line 1140, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\torch\utils\cpp_extension.py", line 1413, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
Cleaning up...
Removing source in C:\Users\OneWorld\AppData\Local\Temp\pip-install-voimjxde\mmcv
Removed build tracker: 'C:\\Users\\OneWorld\\AppData\\Local\\Temp\\pip-req-tracker-6eq3f7le'
ERROR: Command errored out with exit status 1: 'C:\Users\OneWorld\anaconda3\envs\open-mmlab2\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"'; __file__='"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
Exception information:
Traceback (most recent call last):
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\cli\base_command.py", line 186, in _main
status = self.run(options, args)
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\commands\install.py", line 404, in run
use_user_site=options.use_user_site,
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\req\__init__.py", line 71, in install_given_reqs
**kwargs
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\req\req_install.py", line 802, in install
unpacked_source_directory=self.unpacked_source_directory,
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\operations\install\editable_legacy.py", line 51, in install_editable
cwd=unpacked_source_directory,
File "C:\Users\OneWorld\anaconda3\envs\open-mmlab2\lib\site-packages\pip\_internal\utils\subprocess.py", line 242, in call_subprocess
raise InstallationError(exc_msg)
pip._internal.exceptions.InstallationError: Command errored out with exit status 1: 'C:\Users\OneWorld\anaconda3\envs\open-mmlab2\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"'; __file__='"'"'C:\\Users\\OneWorld\\Documents\\DeepLearning\\VideoObjectSegmentation\\mmdetection-2.0.0\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
</code></pre>
<p>These look like error messages. Is something wrong with this install? How can I check that HTC detection is working?</p>
<p>This is using Windows 10</p>
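<p>A minimal sanity-check sketch for once the build succeeds, assuming the standard mmdet 2.x inference API (the config and checkpoint paths are placeholders for files that would need to be downloaded):</p>
<pre><code># placeholders -- substitute a real HTC config from configs/htc/ and a matching checkpoint
from mmdet.apis import init_detector, inference_detector

config = 'configs/htc/htc_r50_fpn_1x_coco.py'
checkpoint = 'checkpoints/htc_r50_fpn_1x_coco.pth'

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')
print(type(result))  # getting a result back confirms the compiled ops load correctly
</code></pre>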
|
python|python-3.x|deep-learning|pytorch|object-detection
| 0
|
8,898
| 61,881,950
|
Converting Spark DF to Pandas DF and the other way - Performance
|
<p>Trying to convert Spark DF with 8m records to Pandas DF</p>
<pre><code>spark.conf.set("spark.sql.execution.arrow.enabled", "true")
sourcePandas = srcDF.select("*").toPandas()
</code></pre>
<p>Takes almost 2 minutes</p>
<p>And the other way, from Pandas back to a Spark DF</p>
<pre><code>finalDF = spark.createDataFrame(sourcePandas)
</code></pre>
<p>takes too long and never finishes.</p>
<p>sourcePandas</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 42 columns):
CONSIGNMENT_PK 10 non-null int32
CERTIFICATE_NO 10 non-null object
ACTOR_NAME 10 non-null object
GENERATOR_FK 10 non-null int32
TRANSPORTER_FK 10 non-null int32
RECEIVER_FK 10 non-null int32
REC_POST_CODE 0 non-null object
WASTEDESC 10 non-null object
WASTE_FK 10 non-null int32
GEN_LICNUM 0 non-null object
VOLUME 10 non-null int32
MEASURE 10 non-null object
WASTE_TYPE 10 non-null object
WASTE_ADD 0 non-null object
CONTAMINENT1_FK 0 non-null float64
CONTAMINENT2_FK 0 non-null float64
CONTAMINENT3_FK 0 non-null float64
CONTAMINENT4_FK 0 non-null float64
TREATMENT_FK 10 non-null int32
ANZSICODE_FK 10 non-null int32
VEH1_REGNO 10 non-null object
VEH1_LICNO 0 non-null object
VEH2_REGNO 0 non-null object
VEH2_LICNO 0 non-null object
GEN_SIGNEE 0 non-null object
GEN_DATE 10 non-null datetime64[ns]
TRANS_SIGNEE 0 non-null object
TRANS_DATE 10 non-null datetime64[ns]
REC_SIGNEE 0 non-null object
REC_DATE 10 non-null datetime64[ns]
DATECREATED 10 non-null datetime64[ns]
DISCREPANCY 0 non-null object
APPROVAL_NUMBER 0 non-null object
TR_TYPE 10 non-null object
REC_WASTE_FK 10 non-null int32
REC_WASTE_TYPE 10 non-null object
REC_VOLUME 10 non-null int32
REC_MEASURE 10 non-null object
DATE_RECEIVED 10 non-null datetime64[ns]
DATE_SCANNED 0 non-null datetime64[ns]
HAS_IMAGE 10 non-null object
LASTMODIFIED 10 non-null datetime64[ns]
dtypes: datetime64[ns](7), float64(4), int32(10), object(21)
memory usage: 3.0+ KB
</code></pre>
<p>srcDF</p>
<pre><code>|-- CONSIGNMENT_PK: integer (nullable = true)
|-- CERTIFICATE_NO: string (nullable = true)
|-- ACTOR_NAME: string (nullable = true)
|-- GENERATOR_FK: integer (nullable = true)
|-- TRANSPORTER_FK: integer (nullable = true)
|-- RECEIVER_FK: integer (nullable = true)
|-- REC_POST_CODE: string (nullable = true)
|-- WASTEDESC: string (nullable = true)
|-- WASTE_FK: integer (nullable = true)
|-- GEN_LICNUM: string (nullable = true)
|-- VOLUME: integer (nullable = true)
|-- MEASURE: string (nullable = true)
|-- WASTE_TYPE: string (nullable = true)
|-- WASTE_ADD: string (nullable = true)
|-- CONTAMINENT1_FK: integer (nullable = true)
|-- CONTAMINENT2_FK: integer (nullable = true)
|-- CONTAMINENT3_FK: integer (nullable = true)
|-- CONTAMINENT4_FK: integer (nullable = true)
|-- TREATMENT_FK: integer (nullable = true)
|-- ANZSICODE_FK: integer (nullable = true)
|-- VEH1_REGNO: string (nullable = true)
|-- VEH1_LICNO: string (nullable = true)
|-- VEH2_REGNO: string (nullable = true)
|-- VEH2_LICNO: string (nullable = true)
|-- GEN_SIGNEE: string (nullable = true)
|-- GEN_DATE: timestamp (nullable = true)
|-- TRANS_SIGNEE: string (nullable = true)
|-- TRANS_DATE: timestamp (nullable = true)
|-- REC_SIGNEE: string (nullable = true)
|-- REC_DATE: timestamp (nullable = true)
|-- DATECREATED: timestamp (nullable = true)
|-- DISCREPANCY: string (nullable = true)
|-- APPROVAL_NUMBER: string (nullable = true)
|-- TR_TYPE: string (nullable = true)
|-- REC_WASTE_FK: integer (nullable = true)
|-- REC_WASTE_TYPE: string (nullable = true)
|-- REC_VOLUME: integer (nullable = true)
|-- REC_MEASURE: string (nullable = true)
|-- DATE_RECEIVED: timestamp (nullable = true)
|-- DATE_SCANNED: timestamp (nullable = true)
|-- HAS_IMAGE: string (nullable = true)
|-- LASTMODIFIED: timestamp (nullable = true)
</code></pre>
<p>Cluster size</p>
<p><a href="https://i.stack.imgur.com/rJp87.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rJp87.png" alt="enter image description here"></a></p>
|
<p>Collecting to pandas and re-parallelizing back to the cluster will have a memory high-water mark of approximately twice the storage cost of the pandas DF. Ten rows of your dataframe is ~3 KB, so 8M rows will be ~2.5 GB; doubling for the high-water mark takes us to ~5 GB. The default Spark driver memory is 1 GB, which is too low for what you want to do, causing the JVM to thrash in GC:</p>
<ol>
<li><p>Push the <a href="https://spark.apache.org/docs/latest/configuration.html#application-properties" rel="nofollow noreferrer">Application Property</a> <code>spark.driver.memory</code> up to 8G and it should work.</p></li>
<li><p>Re-evaluate why you want to collect all this data to the driver. Can you use a <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.pandas_udf" rel="nofollow noreferrer">pandas UDF in a GROUPED_MAP</a> instead? A sketch of both options follows this list.</p></li>
</ol>
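<p>A minimal sketch of both options, assuming the Spark 2.x-style grouped-map API and that the session is created in your own code (on Databricks, driver memory is normally set in the cluster configuration instead); the grouping column <code>WASTE_TYPE</code> is only a placeholder for illustration:</p>
<pre><code># Option 1: raise driver memory before collecting ~2.5 GB to the driver.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.driver.memory", "8g")
         .config("spark.sql.execution.arrow.enabled", "true")
         .getOrCreate())

# Option 2: keep the pandas work on the executors with a grouped-map pandas UDF
# instead of collecting everything. per_group receives one pandas DataFrame per
# group; its body is a placeholder for whatever pandas-only logic you intended.
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(srcDF.schema, PandasUDFType.GROUPED_MAP)
def per_group(pdf):
    return pdf  # placeholder: apply your per-group pandas transformation here

finalDF = srcDF.groupBy("WASTE_TYPE").apply(per_group)
</code></pre>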
|
pandas|azure-databricks|pyspark-dataframes
| 0
|
8,899
| 61,829,495
|
How to check if all items in a list of RGB colors are in an image without a loop?
|
<p>Say I have a list of RGB values as follows:</p>
<pre><code>rgbL = [[20 45 40] [30 45 60] .... [70 50 100]]
</code></pre>
<p>Then, I have an image say, <code>img = cv.imread("location")</code></p>
<p>Now, I want to change ALL RGB values of an image into (255, 0, 0) if the image RGB value is IN my list of RGB values (rgbL).</p>
<p>I was able to do this by this code:</p>
<pre><code>for rgb in rgbL :
k = list(filter(None, rgb[1:-1].split(" ")))
r = int(k[0])
g = int(k[1])
b = int(k[2])
img[np.all(img == (r, g, b), axis=-1)] = (255,0,0)
</code></pre>
<p>But the code above is taking too long because my "rgbL" list is too long. </p>
<p>Is there a way that I can do this without a loop? What is the best way to implement it in a NumPy-ish way?</p>
|
<p>Convert your <code>rgbL</code> and <code>img</code> into numpy arrays. One way of doing it without a loop:</p>
<pre><code>import numpy as np

sh = img.shape              # remember the original (H, W, 3) shape
img = img.reshape(-1, 3)    # flatten to one colour triplet per row
# broadcasted difference against every row of rgbL; pixels that match any
# colour exactly are selected by np.where and overwritten with (255, 0, 0)
img[np.where(((rgbL[:, None, :] - img) == 0).all(axis=2))[1]] = np.array([255, 0, 0])
img = img.reshape(sh)       # restore the image shape
</code></pre>
<p>This takes the difference of every pixel in your image with every row of <code>rgbL</code>, checks where the difference is zero in <code>all</code> three channels, and uses <code>np.where</code> to pick out those pixels for replacement.</p>
<p>sample <code>img</code> and output:</p>
<pre><code>img:
[[ 20 45 40]
[ 30 45 60]
[ 0 1 2]
[ 70 50 100]
[ 4 5 6]]
rgbL:
[[ 20 45 40]
[ 30 45 60]
[ 70 50 100]]
Output:
[[255 0 0]
[255 0 0]
[ 0 1 2]
[255 0 0]
[ 4 5 6]]
</code></pre>
<p><em><strong>UPDATE</strong></em>: Per OP's comment on converting string dict keys to numpy arrays:</p>
<pre><code>rgbL = np.array([list(map(int,[s.strip() for s in key.strip('[').strip(']').strip(' ').split(' ') if s.strip()])) for key in rgb_dict.keys()])
</code></pre>
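<p>If the broadcast intermediate above (shape <code>(len(rgbL), number_of_pixels)</code>) is too large in memory for a big image and a long colour list, a hedged alternative sketch (not from the original answer) packs each colour triplet into a single integer and matches with <code>np.isin</code>:</p>
<pre><code>import numpy as np

def replace_colours(img, rgbL, new=(255, 0, 0)):
    # pack the three channels of each pixel into one integer so membership
    # testing becomes a 1-D np.isin lookup instead of a 2-D broadcast compare
    flat = img.reshape(-1, 3).astype(np.uint32)
    packed = flat[:, 0] * 65536 + flat[:, 1] * 256 + flat[:, 2]
    cols = np.asarray(rgbL, dtype=np.uint32)
    wanted = cols[:, 0] * 65536 + cols[:, 1] * 256 + cols[:, 2]
    out = img.reshape(-1, 3).copy()
    out[np.isin(packed, wanted)] = new   # recolour every matching pixel
    return out.reshape(img.shape)

# usage (assumes rgbL is already a numeric (N, 3) array, e.g. built as above)
# img = replace_colours(img, rgbL)
</code></pre>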
|
python|numpy
| 1
|