| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
376,100
| 72,711,734
|
I want to create new features from a pandas dataset by an arbitrary process
|
<p>The following dataset is currently being used.</p>
<pre><code>import pandas as pd
import io
csv_data = '''
ID,age,get_sick,year
4567,76,0,2014
4567,78,0,2016
4567,79,1,2017
12168,65,0,2014
12168,68,0,2017
12168,69,0,2018
12168,70,1,2019
20268,65,0,2014
20268,66,0,2015
20268,67,0,2016
20268,68,0,2017
20268,69,1,2018
22818,65,0,2008
22818,73,1,2016
'''
df = pd.read_csv(io.StringIO(csv_data), index_col=['ID', 'age'])
get_sick year
ID age
4567 76 0 2014
78 0 2016
79 1 2017
12168 65 0 2014
68 0 2017
69 0 2018
70 1 2019
20268 65 0 2014
66 0 2015
67 0 2016
68 0 2017
69 1 2018
22818 65 0 2008
73 1 2016
</code></pre>
<p>For each individual, each row records the person's age at the time of the physical exam, the year of the measurement, and get_sick, which is 1 if the person has ever had the illness.</p>
<p>We are now trying to build a model that predicts the likelihood that a person with get_sick=0 will develop a disease in the future.</p>
<p><strong>For each person with get_sick=0, we want to check whether get_sick changes from 0 to 1 within 5 years; if so, we want to store 1 in a new column 'history', and if it stays 0, we want to store 0.</strong></p>
<p>We only target data with get_sick=0, since data with get_sick=1 is not used for training.</p>
<blockquote>
<p>Tried</p>
</blockquote>
<pre><code>N = 3
idx = df.groupby('ID').apply(lambda x: x.query("(year - @x.year.min()) <= @N")['get_sick'].max())
df_1 = df.reset_index().assign(history=df.reset_index()['ID'].map(idx)).set_index(['ID', 'age'])
df_1
</code></pre>
<p>This did not give the ideal result, because it only compares each year against the first year of the group.</p>
<p>The ideal output result would be the following</p>
<pre><code> get_sick year history
ID age
4567 76 0 2014 1
78 0 2016 1
79 1 2017 NaN
12168 65 0 2014 1
68 0 2017 1
69 0 2018 1
70 1 2019 NaN
20268 65 0 2014 1
66 0 2015 1
67 0 2016 1
68 0 2017 1
69 1 2018 NaN
22818 65 0 2008 0
73 1 2016 NaN
</code></pre>
<p>If anyone familiar with pandas operations could let me know, I would appreciate it.</p>
<p>Thank you in advance.</p>
<blockquote>
<p><strong>※The following results are obtained for certain data frames.</strong></p>
</blockquote>
<pre><code>import pandas as pd
import io
csv_data = '''
ID,age,get_sick,year
33868,76,0,2014
33868,78,1,2016
33868,79,1,2017
33868,80,1,2018
'''
df_1 = pd.read_csv(io.StringIO(csv_data), index_col=['ID', 'age'])
get_sick year
ID age
33868 76 0 2014
78 1 2016
79 1 2017
80 1 2018
df_mer_1 = df_1[df_1.get_sick == 1].reset_index()[['ID', 'year']]
df_1 = df_1.reset_index().merge(df_mer_1, on = 'ID', suffixes=('', '_max'))
df_1.loc[(df_1.get_sick == 0) & (df_1.year_max - df_1.year <= 5), 'history'] = 1
df_1.loc[(df_1.get_sick == 0) & (df_1.year_max - df_1.year > 5), 'history'] = 0
df_1 = df_1.set_index(['ID', 'age']).drop(columns='year_max')
</code></pre>
<p>The results are as follows</p>
<pre><code> get_sick year history
ID age
33868 76 0 2014 1
76 0 2014 1
76 0 2014 1
78 1 2016 NaN
78 1 2016 NaN
78 1 2016 NaN
79 1 2017 NaN
79 1 2017 NaN
79 1 2017 NaN
80 1 2018 NaN
80 1 2018 NaN
80 1 2018 NaN
</code></pre>
<p>Do you know why multiple identical rows are generated in this way?
I would be glad if you could help me. Thank you in advance.</p>
|
<p>First I created a column with the year for which <code>get_sick = 1</code>. The <code>drop_duplicates</code> call matters here: merging on <code>ID</code> produces one copy of each left-hand row per matching right-hand row, which is exactly why your second attempt generated the repeated rows.</p>
<pre><code>df_mer = df[df.get_sick == 1].reset_index()[['ID', 'year']].drop_duplicates(subset = 'ID')
df = df.reset_index().merge(df_mer, on = 'ID', suffixes=('', '_max'))
</code></pre>
<p>Then you can use <code>year_max</code> to compute the difference in years and assign a 1/0.</p>
<pre><code>df.loc[(df.get_sick == 0) & (df.year_max - df.year <= 5), 'history'] = 1
df.loc[(df.get_sick == 0) & (df.year_max - df.year > 5), 'history'] = 0
df = df.set_index(['ID', 'age']).drop(columns='year_max')
</code></pre>
<p>Output:</p>
<pre><code> get_sick year history
ID age
4567 76 0 2014 1.0
78 0 2016 1.0
79 1 2017 NaN
12168 65 0 2014 1.0
68 0 2017 1.0
69 0 2018 1.0
70 1 2019 NaN
20268 65 0 2014 1.0
66 0 2015 1.0
67 0 2016 1.0
68 0 2017 1.0
69 1 2018 NaN
22818 65 0 2008 0.0
73 1 2016 NaN
</code></pre>
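<p>If you'd rather avoid the merge, a hedged alternative sketch using <code>groupby.transform</code> on the indexed frame from the question (unlike the inner merge, it also keeps IDs that never fall ill, assigning them <code>history = 0</code>):</p>
<pre><code>import numpy as np

# first year with get_sick == 1, broadcast to every row of the same ID
first_sick = df['year'].where(df['get_sick'].eq(1)).groupby(level='ID').transform('min')
# NaN (never sick) compares as False, i.e. history = 0
df['history'] = np.where(df['get_sick'].eq(0),
                         (first_sick - df['year'] <= 5).astype(float),
                         np.nan)
</code></pre>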
|
python|pandas
| 0
|
376,101
| 72,670,379
|
Split Columns in Pandas
|
<p>So I have a pandas column that looks like this:</p>
<pre><code>full_name = pd.Series([
'Reservoir 1 Compartment 1',
'Reservoir 1 Common Inlet',
'Reservoir 2 Compartment 1',
'Vyrnwy Line 2 Balancing Tank 1',
'Reservoir 1'
])
</code></pre>
<p>I am trying to split it into two columns. The expected output should look like this:</p>
<pre><code>[['Reservoir 1', 'Compartment 1'],
['Reservoir 1', 'Common Inlet'],
['Reservoir 2', 'Compartment 1'],
['Vyrnwy Line 2', 'Balancing Tank 1'],
['Reservoir 1', None]]
</code></pre>
<p>I have tried this:</p>
<pre><code>res_compartment_split = pd.concat([full_name.str.split(r'\s\s*?(?=[A-Z])', expand=True)])
</code></pre>
<p>but I get this output</p>
<pre><code>[['Reservoir 1', 'Compartment 1', None, None],
['Reservoir 1', 'Common', 'Inlet', None],
['Reservoir 2', 'Compartment 1', None, None],
['Vyrnwy', 'Line 2', 'Balancing', 'Tank 1'],
['Reservoir 1', None, None, None]]
</code></pre>
<p>Thanks for any help.</p>
|
<p>Try the following:</p>
<pre><code>import pandas as pd
full_name = pd.Series([
'Reservoir 1 Compartment 1',
'Reservoir 1 Common Inlet',
'Reservoir 2 Compartment 1',
'Vyrnwy Line 2 Balancing Tank 1',
'Reservoir 1'
])
res = full_name.str.split(r'(?<=\d)\s+(?=[A-Z])', expand=True)
</code></pre>
<p>Output:</p>
<pre><code>>>> res
0 1
0 Reservoir 1 Compartment 1
1 Reservoir 1 Common Inlet
2 Reservoir 2 Compartment 1
3 Vyrnwy Line 2 Balancing Tank 1
4 Reservoir 1 None
</code></pre>
<p>Explanation of the regex pattern:</p>
<ul>
<li><code>(?<=\d)</code> - positive lookbehind: ensures that there is a digit right before the separator, without consuming it</li>
<li><code>\s+</code> - separator: matches one or more whitespace</li>
<li><code>(?=[A-Z])</code> - positive lookahead: ensures that there is an uppercase letter (A to Z) right after, without consuming it</li>
</ul>
<p>See it in action using <a href="https://regex101.com/r/wmCFyH/1" rel="nofollow noreferrer">regex101.com</a>.</p>
<p>Also, you can see here why your pattern doesn't work: <a href="https://regex101.com/r/nSmEEs/1" rel="nofollow noreferrer">https://regex101.com/r/nSmEEs/1</a> .</p>
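<p>If you need the nested-list form shown in the question, the resulting frame converts directly:</p>
<pre><code>>>> res.values.tolist()
[['Reservoir 1', 'Compartment 1'],
 ['Reservoir 1', 'Common Inlet'],
 ['Reservoir 2', 'Compartment 1'],
 ['Vyrnwy Line 2', 'Balancing Tank 1'],
 ['Reservoir 1', None]]
</code></pre>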
|
python|regex|pandas|split
| 3
|
376,102
| 72,697,587
|
How is Pandas Block Manager improving performance?
|
<p>The Pandas documentation says :
<em>The primary benefit of the BlockManager is improved performance on certain operations (construction from a 2D array, binary operations, reductions across the columns), especially for wide DataFrames.</em></p>
<p>I thought I understood how the BlockManager improves performance thanks to a great article (<a href="https://uwekorn.com/2020/05/24/the-one-pandas-internal.html" rel="nofollow noreferrer">https://uwekorn.com/2020/05/24/the-one-pandas-internal.html</a>), but I realized there was a small mistake in the example.</p>
<p>If I correct the mistake in the example :</p>
<pre><code>a1 = np.arange(128 * 1024 * 10124)
a2 = np.arange(128 * 1024 * 1024)
a_both = np.empty((2, a1.shape[0]))
a_both[0, :] = a1
a_both[1, :] = a2
%timeit a1 + a2
%timeit np.sum(a_both, axis=0)
#Result :
895 ms ± 204 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.09 s ± 35.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>It seems grouping data in a numpy array does not improve performance.</p>
<p>Is the Pandas BlockManager still improving performance in 2022?
It would be great if someone could illustrate this "improved performance" with an example using numpy (how grouping data, or using a specific memory layout, could improve performance).</p>
|
<p>Long story short: you need to work on more than roughly 20 columns to benefit from the BlockManager for column addition/multiplication.</p>
<p>There's actually a great explanation in the Pandas design documentation that I had missed:</p>
<p><em>What is BlockManager and why does it exist?</em></p>
<p><em>The reason for this is not really a memory layout issue (NumPy users know about how contiguous memory access produces much better performance) so much as a reliance on NumPy's two-dimensional array operations for carrying out pandas's computations. So, to do anything row oriented on an all-numeric DataFrame, pandas would concatenate all of the columns together (using numpy.vstack or numpy.hstack) then use array broadcasting or methods like ndarray.sum (combined with np.isnan to mind missing data) to carry out certain operations.</em></p>
<p><em>Another motivation for the BlockManager was to be able to create DataFrame objects with zero copy from two-dimensional NumPy arrays.</em></p>
<p><a href="https://github.com/pydata/pandas-design/blob/master/source/internal-architecture.rst#what-is-blockmanager-and-why-does-it-exist" rel="nofollow noreferrer">https://github.com/pydata/pandas-design/blob/master/source/internal-architecture.rst#what-is-blockmanager-and-why-does-it-exist</a></p>
|
python|pandas|numpy|performance
| 1
|
376,103
| 72,558,725
|
Error running tfjs BlazePose pose detection on single HTMLImageElement
|
<p>I'm trying to get 3d pose detection working in the browser using tfjs.
I've followed the instructions at <a href="https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_mediapipe" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_mediapipe</a></p>
<p>However the code fails with error. What am I doing wrong?</p>
<p>Here's my code</p>
<p>main.js</p>
<pre><code>var img = new Image();
img.onload = async () => {
const model = poseDetection.SupportedModels.BlazePose;
const detectorConfig = {
runtime: 'mediapipe', // or 'tfjs'
modelType: 'lite',
solutionPath: 'https://cdn.jsdelivr.net/npm/@mediapipe/pose',
};
try {
detector = await poseDetection.createDetector(model, detectorConfig);
console.log(img);
const estimationConfig = {
enableSmoothing: false, maxPoses: 1,
type: 'full',
scoreThreshold: 0.65,
render3D: true
};
try {
const poses = await detector.estimatePoses(img);
console.log(poses);
} catch (error) {
detector.dispose();
detector = null;
console.log(error);
}
} catch (err) {
console.log(err);
}
}
img.src = "testimg.jpg";
</code></pre>
<p>index.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/pose-detection" crossorigin="anonymous"></script>
<!-- Include below scripts if you want to use TF.js runtime. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-webgl" crossorigin="anonymous"></script>
<!-- Optional: Include below scripts if you want to use MediaPipe runtime. -->
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/pose" crossorigin="anonymous"></script>
</head>
<body>
<script src="main.js"></script>
</body>
</html>
</code></pre>
<p>I get the following error when I open the html in the browser. The html is hosted on expressjs.</p>
<pre><code>main.js:11 <img src="testimg.jpg">
main.js:24 TypeError: Cannot read properties of undefined (reading 'Tensor')
at e.<anonymous> (pose-detection:17:7199)
at pose-detection:17:2162
at Object.next (pose-detection:17:2267)
at pose-detection:17:1204
at new Promise (<anonymous>)
at s (pose-detection:17:949)
at e.estimatePoses (pose-detection:17:6935)
at img.onload (main.js:19:42)
</code></pre>
|
<p>I managed to replicate your issue locally, and it seems the script tags are in the wrong order. It will work if you move the "pose-detection" script tag to the bottom, i.e. load the tfjs-core, tfjs-converter, tfjs-backend-webgl and @mediapipe/pose scripts first, and @tensorflow-models/pose-detection last.</p>
<p>See the correct order as per documentation <a href="https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_mediapipe#installation" rel="nofollow noreferrer">here</a></p>
|
javascript|tensorflow.js
| 0
|
376,104
| 72,664,476
|
np.random.choice Throws Exception when p Value Provided
|
<p>I can call <code>np.random.choice(5, 3)</code> with success:</p>
<p><a href="https://i.stack.imgur.com/NljlH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NljlH.png" alt="enter image description here" /></a></p>
<p>However, adding any <code>p</code> values (e.g. <code>np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])</code>) throws an exception:</p>
<p><a href="https://i.stack.imgur.com/dMISD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dMISD.png" alt="enter image description here" /></a></p>
<p>These examples are directly from <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="nofollow noreferrer">Numpy's Documentation for np.random.choice</a>.</p>
<p>I am running python 3.10.0:
<a href="https://i.stack.imgur.com/ZSXWR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZSXWR.png" alt="enter image description here" /></a></p>
<p>I have the latest (1.22.4) version of Numpy at the time of writing:
<a href="https://i.stack.imgur.com/9UIEb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9UIEb.png" alt="enter image description here" /></a></p>
<p>I have tracked this down: <code>np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])</code> works right up until a call to <code>pd.read_sql</code>, then fails immediately after that line of code runs.</p>
|
<p>The issue might be with your version of numpy; please check it, as this piece of code works fine for me:</p>
<pre><code>import numpy as np
print(np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]))
</code></pre>
<p>output</p>
<pre><code>[0 3 0]
</code></pre>
<p>my numpy version is <strong>1.21.2</strong></p>
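<p>Since you report that the call only fails after <code>pd.read_sql</code> runs, it may also be worth confirming, at that exact point, which numpy is actually active; a small check:</p>
<pre><code>import numpy as np

print(np.__version__)     # version in the failing environment
print(np.__file__)        # which installation is being imported
print(np.random.choice)   # confirm it is numpy's own function, not shadowed
</code></pre>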
|
python|numpy
| 1
|
376,105
| 72,741,431
|
How to get matrix position of the max element in sliding window view of an array?
|
<p>I have successfully found the maximum of the array in each sliding window view using <code>amax</code> and <code>sliding_window_view</code> functions from NumPy as follows:</p>
<pre><code>import numpy as np
a = np.random.randint(0, 100, (5, 6)) # 2D array
array([[51, 92, 14, 71, 60, 20],
[82, 86, 74, 74, 87, 66],
[23, 2, 21, 52, 1, 87],
[29, 37, 1, 63, 59, 20],
[32, 75, 57, 21, 83, 48]])
windows = np.lib.stride_tricks.sliding_window_view(a, (3, 3))
np.amax(windows, axis=(2, 3))
array([[92, 92, 87, 87],
[86, 86, 87, 87],
[75, 75, 83, 87]])
</code></pre>
<p>Now, I'm trying to find the position of the max values in the original array considering the windows.</p>
<p>Expected Output</p>
<pre><code>The first element i.e. `92` should give position `(1, 0)`.
The second element i.e. `92` should give position `(1, 0)`.
The third element i.e. `87` should give position `(4, 1)`.
.
.
The seventh element i.e. `87` should give position `(4, 1)`.
The twelfth element i.e. `87` should give position `(5, 2)`.
.
so on
</code></pre>
<p>NOTE: Only one position per value is needed. Hence, if there are multiple positions inside a window, return only the first.</p>
|
<p>This solution gives indices per-window but does not give unique indices if a max-value appears twice in some window:</p>
<pre><code>maxvals = np.amax(windows, axis=(2, 3))
# array([[92, 92, 87, 87],
# [86, 86, 87, 87],
# [75, 75, 83, 87]])
indx = np.array((windows == np.expand_dims(maxvals, axis=(2, 3))).nonzero())
</code></pre>
<p>which gives you back one array for each of the four axes in the <code>windows</code> array. Now we use some math with the relative index positions in each window to get back the indices at which max values occur in the original array:</p>
<pre><code>np.sum(indx.reshape(2, 2, -1), axis = 0)
# array([[0, 0, 1, 1, 2, 1, 1, 1, 1, 2, 4, 4, 4, 2],
# [1, 1, 4, 4, 5, 1, 1, 4, 4, 5, 1, 1, 4, 5]])
</code></pre>
<p>The reshaping is done to facilitate adding the indices: the first two arrays give the window position, and the second two are positions relative to the window, so we simply add them up.
You can check that each pair of values along the second axis is the pair of indices you require.</p>
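<p>Since only the first position per window is needed, a hedged alternative sketch using <code>argmax</code> (which returns the first occurrence within each flattened window) avoids the uniqueness problem; it assumes the <code>windows</code> array built in the question:</p>
<pre><code>flat = windows.reshape(windows.shape[0], windows.shape[1], -1)
# first within-window position of the max, as (row, col) relative to the window
rel_r, rel_c = np.unravel_index(flat.argmax(axis=-1), windows.shape[2:])
# add the window offsets to get coordinates in the original array `a`
rows = np.arange(windows.shape[0])[:, None] + rel_r
cols = np.arange(windows.shape[1])[None, :] + rel_c
</code></pre>
<p><code>rows[i, j]</code> and <code>cols[i, j]</code> then give the position in <code>a</code> of the max of window <code>(i, j)</code>.</p>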
|
python|numpy|sliding-window
| 2
|
376,106
| 59,840,427
|
Submit a Keras training job to Google cloud
|
<p>I am trying to follow this tutorial:
<a href="https://medium.com/@natu.neeraj/training-a-keras-model-on-google-cloud-ml-cb831341c196" rel="nofollow noreferrer">https://medium.com/@natu.neeraj/training-a-keras-model-on-google-cloud-ml-cb831341c196</a></p>
<p>to upload and train a Keras model on Google Cloud Platform, but I can't get it to work.</p>
<p>Right now I have downloaded the package from GitHub, and I have created a cloud environment with AI-Platform and a bucket for storage. </p>
<p>I am uploading the files (with the suggested folder structure) to my Cloud Storage bucket (basically to the root of my storage), and then trying the following command in the cloud terminal:</p>
<pre><code>gcloud ai-platform jobs submit training JOB1
--module-name=trainer.cnn_with_keras
--package-path=./trainer
--job-dir=gs://mykerasstorage
--region=europe-north1
--config=gs://mykerasstorage/trainer/cloudml-gpu.yaml
</code></pre>
<p>But I get errors. First the <em>cloudml-gpu.yaml</em> file can't be found ("no such folder or file"), and if I just remove it, I get errors saying the <em>__init__.py</em> file is missing, but it isn't, even if it is empty (which it was when I downloaded it from the tutorial's GitHub). I am guessing I haven't uploaded it the right way.</p>
<p>Any suggestions of how I should do this? There is really no info on this in the tutorial itself.</p>
<p>I have read in another guide that it is possible to let gcloud package and upload the job directly, but I am not sure how to do this or where to write the commands, in my terminal with <code>gcloud</code> command? Or in the Cloud Shell in the browser? And how do I define the path where my python files are located?</p>
<p>Should mention that I am working with Mac, and pretty new to using Keras and Python. </p>
|
<p>I was able to follow the tutorial you mentioned successfully, with some modifications along the way.</p>
<p>I will mention all the steps although you made it halfway as you mentioned.</p>
<p>First of all create a Cloud Storage Bucket for the job:</p>
<pre><code>gsutil mb -l europe-north1 gs://keras-cloud-tutorial
</code></pre>
<p>To answer your question on where you should write these commands: it depends on where you want to store the files that you will download from GitHub. In the tutorial you posted, the writer is using his own computer to run the commands, and that's why he initializes the gcloud command with <code>gcloud init</code>. However, you can submit the job from the Cloud Shell too, if you download the needed files there.
The only files we need from the <a href="https://github.com/Neeraj-Natu/keras-cloud-test" rel="nofollow noreferrer">repository</a> are the <code>trainer</code> folder and the <code>setup.py</code> file. So, if we put them in a folder named <code>keras-cloud-tutorial</code> we will have this file structure:</p>
<pre><code>keras-cloud-tutorial/
├── setup.py
└── trainer
├── __init__.py
├── cloudml-gpu.yaml
└── cnn_with_keras.py
</code></pre>
<p>Now, a possible reason for the <code>ImportError: No module named eager</code> error is that you might have changed the <code>runtimeVersion</code> inside the <code>cloudml-gpu.yaml</code> file. As we can read <a href="https://github.com/tensorflow/tensorflow/issues/14247#issuecomment-362112577" rel="nofollow noreferrer">here</a>, <code>eager</code> was introduced in Tensorflow 1.5. If you have specified an earlier version, it is expected to experience this error. So the structure of <code>cloudml-gpu.yaml</code> should be like this:</p>
<pre><code>trainingInput:
scaleTier: CUSTOM
# standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs
masterType: standard_gpu
runtimeVersion: "1.5"
</code></pre>
<p><em>Note: "standard_gpu" is a <a href="https://cloud.google.com/ml-engine/docs/machine-types#legacy-machine-types" rel="nofollow noreferrer">legacy machine type</a>.</em></p>
<p>Also, the <code>setup.py</code> file should look like this:</p>
<pre><code>from setuptools import setup, find_packages
setup(name='trainer',
version='0.1',
packages=find_packages(),
description='Example on how to run keras on gcloud ml-engine',
author='Username',
author_email='user@gmail.com',
install_requires=[
'keras==2.1.5',
'h5py'
],
zip_safe=False)
</code></pre>
<p><strong>Attention:</strong> As you can see, I have specified that I want version <code>2.1.5</code> of <code>keras</code>. This is because if I don't do that, the latest version is used which has compatibility issues with versions of Tensorflow earlier than <code>2.0</code>. </p>
<p>If everything is set, you can submit the job by running the following command inside the folder <code>keras-cloud-tutorial</code>:</p>
<pre><code>gcloud ai-platform jobs submit training test_job --module-name=trainer.cnn_with_keras --package-path=./trainer --job-dir=gs://keras-cloud-tutorial --region=europe-west1 --config=trainer/cloudml-gpu.yaml
</code></pre>
<p><em>Note: I used <code>gcloud ai-platform</code> instead of <code>gcloud ml-engine</code> command although both will work. At some point in the future though, <code>gcloud ml-engine</code> will be deprecated.</em></p>
<p><strong>Attention:</strong> Be careful when choosing the region in which the job will be submitted. Some regions do not support GPUs and will throw an error if chosen. For example, if in my command I set the <code>region</code> parameter to <code>europe-north1</code> instead of <code>europe-west1</code>, I will receive the following error:</p>
<blockquote>
<p>ERROR: (gcloud.ai-platform.jobs.submit.training) RESOURCE_EXHAUSTED:
Quota failure for project . The request for 1 K80
accelerators exceeds the allowed maximum of 0 K80, 0 P100, 0 P4, 0 T4,
0 TPU_V2, 0 TPU_V3, 0 V100. To read more about Cloud ML Engine quota,
see <a href="https://cloud.google.com/ml-engine/quotas" rel="nofollow noreferrer">https://cloud.google.com/ml-engine/quotas</a>.
- '@type': type.googleapis.com/google.rpc.QuotaFailure violations:
- description: The request for 1 K80 accelerators exceeds the allowed maximum of
0 K80, 0 P100, 0 P4, 0 T4, 0 TPU_V2, 0 TPU_V3, 0 V100.
subject: </p>
</blockquote>
<p>You can read more about the features of each region <a href="https://cloud.google.com/ml-engine/docs/regions#region_considerations" rel="nofollow noreferrer">here</a> and <a href="https://cloud.google.com/compute/docs/regions-zones/" rel="nofollow noreferrer">here</a>.</p>
<p><strong>EDIT:</strong></p>
<p>After the completion of the training job, there should be 3 folders in the bucket that you specified: <code>logs/</code>, <code>model/</code> and <code>packages/</code>. The model is saved in the <code>model/</code> folder as an <code>.h5</code> file. Keep in mind that if you set a specific folder for the destination you should include the '/' at the end. For example, you should set <code>gs://my-bucket/output/</code> instead of <code>gs://mybucket/output</code>. If you do the latter you will end up with folders <code>output</code>, <code>outputlogs</code> and <code>outputmodel</code>. Inside <code>output</code> there should be <code>packages</code>. The job page link should direct to the <code>output</code> folder, so make sure to check the rest of the bucket too!</p>
<p>In addition, in the AI-Platform job page you should be able to see information regarding <code>CPU</code>, <code>GPU</code> and <code>Network</code> utilization:
<a href="https://i.stack.imgur.com/sVIU3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sVIU3.png" alt="AI-Platform training job page screenshot"></a></p>
<p>Also, I would like to clarify something as I saw that you posted some related questions as an answer:</p>
<p>Your local environment, whether it is your personal Mac or the Cloud Shell, has nothing to do with the actual training job. You don't need to install any specific package or framework locally. You just need to have the Google Cloud SDK installed (in the Cloud Shell it is of course already installed) to run the appropriate <code>gcloud</code> and <code>gsutil</code> commands. You can read more on how exactly training jobs on the AI-Platform work <a href="https://cloud.google.com/ml-engine/docs/training-overview#how_training_works" rel="nofollow noreferrer">here</a>.</p>
<p>I hope that you will find my answer helpful.</p>
|
tensorflow|keras|gcloud|gcp-ai-platform-training
| 3
|
376,107
| 59,881,163
|
How to load a sklearn model in Tensorflowjs?
|
<p>I have a gradient boost model saved in the .pkl format. I have to load this model in TensorFlow.js. I can see that there is a way to load a Keras model, but I can't find a way to load a sklearn model. Is it possible to do this?</p>
|
<p>It is not possible to load a sklearn model in TensorFlow.js; TensorFlow.js only loads models written in TensorFlow/Keras.</p>
<p>Though I haven't tried it myself, I think you could possibly use the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/wrappers/scikit_learn" rel="nofollow noreferrer">scikit-learn wrapper</a> to rewrite the classifier in TensorFlow. The model can then be saved and converted to a format that can be loaded in TensorFlow.js.</p>
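<p>As a hedged sketch of that last step (the architecture, <code>n_features</code>, and the training arrays <code>X_train</code>/<code>y_train</code> are placeholders, not anything read from your .pkl): train a Keras substitute for the classifier and export it with the <code>tensorflowjs</code> package.</p>
<pre><code>import tensorflowjs as tfjs   # pip install tensorflowjs
from tensorflow import keras

# n_features, X_train, y_train are placeholders for your own data
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(n_features,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_train, y_train, epochs=10)

# writes model.json + weight shards, loadable in the browser via tf.loadLayersModel
tfjs.converters.save_keras_model(model, 'tfjs_model')
</code></pre>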
|
scikit-learn|tensorflow.js
| 0
|
376,108
| 59,895,682
|
My code has created a numpy array inside another numpy array for one list but it does not for another list that goes through the exact same process
|
<p>I'm developing a simple Artificial Intelligence for a college project and so far it has worked until it randomly began creating a numpy array inside another numpy array. One of the lists that are being converted is a dataset that I've created myself that then is iterated through and each image is read by cv2 and added to a new list. This new list is then converted into a numpy array (this is the one that causes the problem). A second, smaller list (test images) goes through the same process and comes out with the desired result.</p>
<p>This is the code for the dataset, each string is a file name.</p>
<pre><code>images = ['Dana C zero 1.png','Dana C zero 2.png','Dana C zero 3.png','Dana C zero 4.png','Dana C zero 5.png',
'Dana C zero 6.png','Dana C zero 7.png','Dana C zero 8.png','Dana C zero 9.png',
'Dana C zero 10.png','Dana C zero 11.png','Dana C zero 12.png','Dana C zero 13.png','Dana C zero 14.png',
'Dana C zero 15.png','Dana C zero 16.png','Dana C zero 17.png','Dana C zero 18.png','Dana C zero 19.png',
'Dana C one 1.png','Dana C one 2.png','Dana C one 3.png','Dana C one 4.png','Dana C one 5.png',
'Dana C one 6.png','Dana C one 7.png','Dana C one 8.png','Dana C one 9.png',
'Dana C one 10.png','Dana C one 11.png','Dana C one 12.png','Dana C one 13.png',
'Dana C one 14.png','Dana C one 15.png','Dana C one 16.png','Dana C one 17.png',
'Dana C one 18.png','Dana C one 19.png','Dana C two 1.png','Dana C two 2.png','Dana C two 3.png',
'Dana C two 4.png','Dana C two 5.png','Dana C two 6.png','Dana C two 7.png','Dana C two 8.png',
'Dana C two 9.png','Dana C two 10.png','Dana C two 11.png','Dana C two 12.png','Dana C two 13.png',
'Dana C two 14.png','Dana C two 15.png','Dana C two 16.png','Dana C two 17.png','Dana C two 19.png',
'Dana C two 20.png','Dana C three 1.png','Dana C three 2.png','Dana C three 3.png',
'Dana C three 4.png','Dana C three 5.png','Dana C three 6.png','Dana C three 7.png','Dana C three 8.png',
'Dana C three 9.png','Dana C three 10.png','Dana C three 11.png','Dana C three 12.png',
'Dana C three 13.png','Dana C three 14.png','Dana C three 15.png','Dana C three 16.png',
'Dana C three 17.png','Dana C three 18.png','Dana C three 19.png',
'Dana C four 1.png','Dana C four 2.png','Dana C four 3.png','Dana C four 4.png','Dana C four 5.png','Dana C four 6.png','Dana C four 7.png',
'Dana C four 8.png','Dana C four 9.png','Dana C four 10.png','Dana C four 11.png','Dana C four 12.png','Dana C four 13.png','Dana C four 14.png',
'Dana C four 15.png','Dana C four 16.png','Dana C four 17.png','Dana C four 18.png','Dana C four 19.png',
'Dana C five 1.png','Dana C five 2.png','Dana C five 3.png','Dana C five 4.png','Dana C five 5.png','Dana C five 6.png','Dana C five 7.png',
'Dana C five 8.png','Dana C five 9.png','Dana C five 10.png','Dana C five 11.png','Dana C five 12.png','Dana C five 13.png','Dana C five 14.png','Dana C five 15.png',
'Dana C five 16.png','Dana C five 17.png','Dana C five 18.png','Dana C five 19.png',
'Dana C six 1.png','Dana C six 2.png','Dana C six 3.png','Dana C six 4.png','Dana C six 5.png','Dana C six 6.png','Dana C six 7.png',
'Dana C six 8.png','Dana C six 9.png','Dana C six 10.png','Dana C six 11.png','Dana C six 12.png','Dana C six 13.png',
'Dana C six 14.png','Dana C six 15.png','Dana C six 16.png','Dana C six 17.png','Dana C six 18.png','Dana C six 19.png',
'Dana C seven 1.png','Dana C seven 2.png','Dana C seven 3.png','Dana C seven 4.png','Dana C seven 5.png','Dana C seven 6.png',
'Dana C seven 7.png','Dana C seven 8.png','Dana C seven 9.png','Dana C seven 10.png','Dana C seven 11.png','Dana C seven 12.png',
'Dana C seven 13.png','Dana C seven 14.png','Dana C seven 15.png','Dana C seven 16.png','Dana C seven 17.png','Dana C seven 18.png','Dana C seven 19.png',
'Dana C eight 1.png','Dana C eight 2.png','Dana C eight 3.png','Dana C eight 4.png','Dana C eight 5.png','Dana C eight 6.png',
'Dana C eight 7.png','Dana C eight 8.png','Dana C eight 9.png','Dan C eight 10.png','Dana C eight 11.png','Dana C eight 12.png',
'Dana C eight 13.png','Dana C eight 14.png','Dana C eight 15.png','Dana C eight 16.png','Dana C eight 17.png','Dana C eight 18.png','Dana C eight 19.png']
readyImages = []
readyLabels = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,
5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8])
testImages = ['Dana C zero 20.png','Dana C one 20.png','Dana C two 18.png','Dana C three 20.png','Dana C four 20.png','Dana C five 20.png',
'Dana C six 20.png','Dana C seven 20.png','Dana C eight 20.png']
readyTestImages = []
testLabels = [0,1,2,3,4,5,6,7,8]
</code></pre>
<p>The code below is the two for loops used to create the list of "prepared images"</p>
<pre><code>for i in range (0, len(images)):
img = cv2.imread(images[i])
readyImages.append(img)
for i in range (0, len(testImages)):
img = cv2.imread(testImages[i])
readyTestImages.append(img)
</code></pre>
<p>These two 'ready' lists are then turned into numpy arrays with the following code:</p>
<pre><code>readyImages = np.array(readyImages)
readyTestImages = np.array(readyTestImages)
</code></pre>
<p>After this, the 'readyImages' array looks like this:</p>
<pre><code>array([array([[[179, 179, 179],
[185, 185, 185],
[204, 204, 204],
...,
[181, 181, 181],
[182, 182, 182],
[179, 179, 179]],
[[218, 218, 218],
[229, 229, 229],
[237, 237, 237],
...,
[228, 228, 228],
[229, 229, 229],
[229, 229, 229]],
[[240, 240, 240],
[252, 252, 252],
[253, 253, 253],
...,
[252, 252, 252],
[252, 252, 252],
[254, 254, 254]],
...,
</code></pre>
<p>(The rest of the array I have not included as it is massive)
The 'readyTestImages' array looks like this (normal):</p>
<pre><code>array([[[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
...,
</code></pre>
<p>All of the images, testing and training, are 28x28, so that is not the cause of the problem (image size has already caused issues before, which were solved).
I do not know what is causing this issue, but it is preventing my program from running.
In case it's helpful, here is the code that runs the given data through the neural model:</p>
<pre><code>train_images = readyImages
train_labels = readyLabels
test_images = testImages
train_images = train_images / 255.0
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28,28, 3)),
keras.layers.Dense(784, activation = 'relu'),
keras.layers.Dense(128, activation = 'relu'),
keras.layers.Dense(10, activation = 'softmax')
])
model.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'])
</code></pre>
<p>I get this error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-93cf1c129e74> in <module>()
2 train_labels = readyLabels
3 test_images = testImages
----> 4 train_images = train_images / 255.0
5
6
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'
</code></pre>
<p>If anyone can help I would greatly appreciate it and I can send or add any information needed.</p>
|
<p>The problem is in</p>
<pre><code>for i in range (0, len(images)):
img = cv2.imread(images[i])
readyImages.append(img)
</code></pre>
<p><code>cv2.imread</code> does not raise on failure (e.g., if the image file is missing or truncated): it simply returns <code>None</code>. A list containing <code>None</code> entries no longer shares a common shape, so <code>np.array</code> falls back to a 1-D object array of arrays, which is the nesting you see in <code>readyImages</code>. And when you try to normalize the images by dividing by 255, the <code>None</code> entries produce the reported <code>TypeError</code>.</p>
<p>Add a check on <code>img</code> to ensure the image loaded correctly, then discard the matching labels for the images that fail loading.</p>
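<p>A minimal sketch of that check, reusing <code>images</code> and <code>readyLabels</code> from your code:</p>
<pre><code>readyImages, keptLabels = [], []
for fname, label in zip(images, readyLabels):
    img = cv2.imread(fname)
    if img is None:                      # unreadable or missing file
        print('skipping', fname)
        continue
    readyImages.append(img)
    keptLabels.append(label)

readyImages = np.array(readyImages)      # now a regular (n, 28, 28, 3) array
readyLabels = np.array(keptLabels)
</code></pre>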
|
python|arrays|numpy|artificial-intelligence
| 2
|
376,109
| 59,846,499
|
Slice the middle of a nd array
|
<p>I have a numpy array with shape <code>(20,50,100,500,500)</code> and I want to slice the array based on the 3rd dimension, let's say 40/60.</p>
<p>All that I can think of is to do. <code>array[:,:,:40,:,:]</code> and <code>array[:,:,60:,:,:]</code>, but how does one connect those without messing up the dimensions? </p>
|
<p>You can use <code>np.concatenate</code>, joining along <code>axis=2</code> (note that <code>np.stack</code> would insert a new axis and change the dimensionality):</p>
<pre><code>>>> a = np.random.rand(2,2,2,2,2)
>>> a1 = a[:,:,:1,:,:]
>>> a2 = a[:,:,1:,:,:]
>>> b = np.concatenate((a1, a2), axis=2)
>>> b.shape
(2, 2, 2, 2, 2)
</code></pre>
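<p>Applied to the shape in your question (a sketch assuming <code>array</code> is the <code>(20, 50, 100, 500, 500)</code> array and you want to drop slices 40:60 along the third dimension):</p>
<pre><code>result = np.concatenate((array[:, :, :40], array[:, :, 60:]), axis=2)
# result.shape == (20, 50, 80, 500, 500)
</code></pre>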
<p>Hope this helps.</p>
|
python|numpy
| 1
|
376,110
| 59,562,795
|
Iteration of groups of columns
|
<p>I have a dataset that looks like this:</p>
<pre><code>A B C D E ecc
x1A x1B x1C x1D x1E x1N
x2A x2B x2C x2D x2E x1N
xnA xnB xnC xnD xnE xnN
</code></pre>
<p>where A, B, C, D, E are the column names and the xi are numbers. I would like to perform a certain operation on stretches of 3 consecutive columns: first columns A, B, C; then B, C, D as the second iteration; then C, D, E as the third; and so on. For example, I would like to calculate the variance of the sums of the columns in each stretch of 3 (so first consider columns A, B, C, compute the sum of each column, and take the variance of those sums; then do the same for B, C, D, etc.). Could you suggest an effective way to do it in Python? Thanks!</p>
|
<p>Take the column sums first, then a rolling variance with a window of 3 across them using <code>.rolling</code>:</p>
<pre><code>df.sum().rolling(window=3).var()
</code></pre>
|
python|pandas
| 0
|
376,111
| 59,612,743
|
Create new column based on group-by function
|
<p>I have a dataframe:</p>
<pre><code>df1 = pd.DataFrame({'Name': ['Bob', 'Bob', 'Bob', 'Joe', 'Joe', 'Joe', 'Alan', 'Alan', 'Steve', 'Steve'],
'ID': [1,2,3,4,5,6,7,8,9,10],
'Value': ['Y','Y','Y','N','N','N','Y','N','N','Y']})
Name ID Value
Bob 1 Y
Bob 2 Y
Bob 3 Y
Joe 4 N
Joe 5 N
Joe 6 N
Alan 7 Y
Alan 8 N
Steve 9 N
Steve 10 Y
</code></pre>
<p>I need to compute a new <code>Result</code> column that has the following rule. For each group <code>Name</code> so Bob, Joe, etc., if each <code>Value</code> is 'Y', assign each value a Y in the new column. Otherwise, assign it a 'N'.</p>
<p>So ideal output is:</p>
<pre><code> Name ID Value Result
Bob 1 Y Y
Bob 2 Y Y
Bob 3 Y Y
Joe 4 N N
Joe 5 N N
Joe 6 N N
Alan 7 Y N
Alan 8 N N
Steve 9 N N
Steve 10 Y N
</code></pre>
<p>This is what I have so far but doesn't work correctly.</p>
<pre><code>df1['Result'] = df1.groupby('Name').Value.all().reindex(df1.Name).astype(str).values
df1
</code></pre>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for <code>Series</code> with same size like original and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.all.html" rel="nofollow noreferrer"><code>GroupBy.all</code></a>:</p>
<pre><code>df1['Result'] = np.where(df1['Value'].eq('Y').groupby(df1['Name']).transform('all'), 'Y', 'N')
</code></pre>
<p>Alternative:</p>
<pre><code>mask = df1['Value'].eq('Y').groupby(df1['Name']).transform('all')
df1.loc[~mask, 'Value'] = 'N'
</code></pre>
<p>Or get all groups that contain at least one <code>N</code> and set those rows to <code>N</code>, using a mask built with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a>:</p>
<pre><code>mask = df1['Name'].isin(df1.loc[df1['Value'].eq('N'), 'Name'])
df1.loc[mask, 'Value'] = 'N'
</code></pre>
<hr>
<pre><code>print (df1)
Name ID Value
0 Bob 1 Y
1 Bob 2 Y
2 Bob 3 Y
3 Joe 4 N
4 Joe 5 N
5 Joe 6 N
6 Alan 7 N
7 Alan 8 N
8 Steve 9 N
9 Steve 10 N
</code></pre>
|
python|pandas|dataframe|group-by
| 2
|
376,112
| 59,799,635
|
Keep common rows within every group of a pandas dataframe
|
<p>Given the following pandas data frame: </p>
<pre><code> | a b
--+-----
0 | 1 A
1 | 2 A
2 | 3 A
3 | 4 A
4 | 1 B
5 | 2 B
6 | 3 B
7 | 1 C
8 | 3 C
9 | 4 C
</code></pre>
<p>If you group it by column <code>b</code> I want to perform an action that keeps only the rows where they have column <code>a</code> in common. The result would be the following data frame:</p>
<pre><code> | a b
--+-----
0 | 1 A
2 | 3 A
4 | 1 B
6 | 3 B
7 | 1 C
8 | 3 C
</code></pre>
<p>Is there some built in method to do this?</p>
|
<p>You can try <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.pivot_table.html" rel="noreferrer"><code>pivot_table</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html" rel="noreferrer"><code>dropna</code></a> here, then filter using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="noreferrer"><code>Series.isin</code></a>:</p>
<pre><code>s = df.pivot_table(index='a',columns='b',aggfunc=len).dropna().index
df[df['a'].isin(s)]
</code></pre>
<p>Similarly with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html" rel="noreferrer"><code>crosstab</code></a>:</p>
<pre><code>s = pd.crosstab(df['a'],df['b'])
df[df['a'].isin(s[s.all(axis=1)].index)]
</code></pre>
<hr>
<pre><code> a b
0 1 A
2 3 A
4 1 B
6 3 B
7 1 C
8 3 C
</code></pre>
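<p>A groupby-based alternative with the same idea (no pivot): keep the rows whose <code>a</code> value appears in every <code>b</code> group.</p>
<pre><code>n_groups = df['b'].nunique()
df[df.groupby('a')['b'].transform('nunique').eq(n_groups)]
</code></pre>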
|
python|pandas
| 8
|
376,113
| 59,835,767
|
Min and Max frequency using pandas?
|
<p>Is it possible to find the min and max frequency using pandas? I have a series of values and I'd like to know the min and max frequency of a value appearing. For example, 1 appears three times out of 24 values, so its average frequency is 3/24, or 1/8, which can be derived as count of 1 / total.</p>
<p>However, what I'm looking for is finding the min & max for 1, which is:</p>
<ul>
<li>min: 0 (the number of other values appearing between the first 1 and the second 1)</li>
<li>max: 14 (the number of other values appearing between the second 1 and the third 1)</li>
</ul>
<p>sample DF:</p>
<pre>
╔════╗
║ X ║
╠════╣
║ 1 ║
║ 1 ║
║ 8 ║
║ 5 ║
║ 8 ║
║ 11 ║
║ 7 ║
║ 11 ║
║ 12 ║
║ 7 ║
║ 2 ║
║ 2 ║
║ 6 ║
║ 7 ║
║ 9 ║
║ 2 ║
║ 1 ║
║ 3 ║
║ 10 ║
║ 2 ║
║ 10 ║
║ 13 ║
║ 4 ║
║ 6 ║
╚════╝
</pre>
<pre><code>data = {'X':[1,1,8,5,8,11,7,11,12,7,2,2,6,7,9,2,1,3,10,2,10,13,4,6]}
</code></pre>
<p>Many thanks</p>
|
<p>Use:</p>
<pre><code>#changed sample data for possible non 1 before first 1 occurence
df = pd.DataFrame(data = {'X':[5,8,1,1,8,5,8,11,7,11,12,7,2,2,6,7,9,2,1,3,10,2,10,13,4,6]})
#print (df)
</code></pre>
<p>You can compare the values to <code>1</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> and create groups with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>Series.cumsum</code></a>. Then remove group <code>0</code> (values before the first <code>1</code>, if any) and the last group (also necessary if the last value of the column is <code>1</code>) with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> and the mask inverted by <code>~</code>, and finally count each group's size with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>, subtracting <code>1</code> so the leading <code>1</code> itself is not counted:</p>
<pre><code>s = df['X'].eq(1).cumsum()
s = s[~s.isin([0, s.max()])].value_counts().sub(1)
print (s)
2 14
1 0
Name: X, dtype: int64
</code></pre>
<p>Last get minimal and maximal value:</p>
<pre><code>min1 = s.min()
max1 = s.max()
print (min1)
0
print (max1)
14
</code></pre>
<p>EDIT:</p>
<p>If you also need the groups before the first <code>1</code>, use:</p>
<pre><code>s = df['X'].eq(1).cumsum().value_counts().sort_index().iloc[:-1].sub(1)
print (s)
min1 = s.min()
max1 = s.max()
print (min1)
print (max1)
</code></pre>
|
python|pandas
| 2
|
376,114
| 59,778,744
|
pandas- grouping and aggregating consecutive rows with same value in column
|
<p>I have a pandas DataFrame from a long list of datetime ranges pulled from a database, each range with a label. The dates are ordered such that the start date of one row, is the end date of the row before. A workable example is here:</p>
<pre><code>import pandas as pd
bins = [{'start': '2020-01-12 00:00:00', 'end': '2020-01-13 00:00:00', 'label': 't3'},
{'start': '2020-01-13 00:00:00', 'end': '2020-01-13 07:00:00', 'label': 't2'},
{'start': '2020-01-13 07:00:00', 'end': '2020-01-13 15:30:00', 'label': 't1'},
{'start': '2020-01-13 15:30:00', 'end': '2020-01-14 00:00:00', 'label': 't2'},
{'start': '2020-01-14 00:00:00', 'end': '2020-01-14 07:00:00', 'label': 't2'},
{'start': '2020-01-14 07:00:00', 'end': '2020-01-14 15:30:00', 'label': 't1'},
{'start': '2020-01-14 15:30:00', 'end': '2020-01-15 00:00:00', 'label': 't2'},
{'start': '2020-01-15 00:00:00', 'end': '2020-01-15 07:00:00', 'label': 't2'},
{'start': '2020-01-15 07:00:00', 'end': '2020-01-15 15:30:00', 'label': 't1'},
{'start': '2020-01-15 15:30:00', 'end': '2020-01-16 00:00:00', 'label': 't2'},
{'start': '2020-01-16 00:00:00', 'end': '2020-01-16 07:00:00', 'label': 't2'},
{'start': '2020-01-16 07:00:00', 'end': '2020-01-16 15:30:00', 'label': 't1'},
{'start': '2020-01-16 15:30:00', 'end': '2020-01-17 00:00:00', 'label': 't2'},
{'start': '2020-01-17 00:00:00', 'end': '2020-01-17 07:00:00', 'label': 't2'},
{'start': '2020-01-17 07:00:00', 'end': '2020-01-17 15:30:00', 'label': 't1'},
{'start': '2020-01-17 15:30:00', 'end': '2020-01-18 00:00:00', 'label': 't2'},
{'start': '2020-01-18 00:00:00', 'end': '2020-01-19 00:00:00', 'label': 't2'}]
bins_df = pd.DataFrame(bins)
</code></pre>
<p>Notice that some labels are repeated consecutively, for example, the 4th and 5th row, have the same label. Thus, the label <code>'t2'</code> applies to the range from <code>2020-01-13 15:30:00</code> to <code>2020-01-14 07:00:00</code>. Using pandas, how can I group/aggregate consecutive rows with the same label, and take the minimum <code>start</code>, and maximum <code>end</code> to combine consecutive date ranges with the same label?</p>
|
<p>First we use <code>Series.shift</code> with <code>Series.cumsum</code> to make a group indicator for each consecutive <code>label</code> value.</p>
<p>Then we use <code>groupby.agg</code> with <code>min</code> and <code>max</code>. </p>
<pre><code>label_groups = bins_df['label'].ne(bins_df['label'].shift()).cumsum()
df = (
bins_df.groupby(label_groups).agg({'start':'min', 'end':'max', 'label':'first'})
.reset_index(drop=True)
)
</code></pre>
<pre><code> start end label
0 2020-01-12 00:00:00 2020-01-13 00:00:00 t3
1 2020-01-13 00:00:00 2020-01-13 07:00:00 t2
2 2020-01-13 07:00:00 2020-01-13 15:30:00 t1
3 2020-01-13 15:30:00 2020-01-14 07:00:00 t2
4 2020-01-14 07:00:00 2020-01-14 15:30:00 t1
5 2020-01-14 15:30:00 2020-01-15 07:00:00 t2
6 2020-01-15 07:00:00 2020-01-15 15:30:00 t1
7 2020-01-15 15:30:00 2020-01-16 07:00:00 t2
8 2020-01-16 07:00:00 2020-01-16 15:30:00 t1
9 2020-01-16 15:30:00 2020-01-17 07:00:00 t2
10 2020-01-17 07:00:00 2020-01-17 15:30:00 t1
11 2020-01-17 15:30:00 2020-01-19 00:00:00 t2
</code></pre>
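<p>To see what the group indicator does, here are its first values for the sample data: the indicator increments at every label change, so consecutive equal labels share a number.</p>
<pre><code>print(label_groups.head(8).tolist())
# [1, 2, 3, 4, 4, 5, 6, 6]
</code></pre>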
|
django|pandas|dataframe|pandas-groupby|aggregation
| 3
|
376,115
| 59,722,739
|
How to calculate the mean/median/standard deviation of multiple matrices in a dictionary?
|
<p>Could you provide some solutions or suggestions for the following problem?</p>
<p>If I have a dictionary which contains three pandas dataframe, how should I calculate the mean/median/standard deviation of the three dataframes in the dictionary? </p>
<pre><code>df1 = pd.DataFrame(np.random.randint(10, size=(3,4)))
df2 = pd.DataFrame(np.random.randint(10, size=(3,4)))
df3 = pd.DataFrame(np.random.randint(10, size=(3,4)))
df_dict = {'a': df1, 'b': df2, 'c':df3}
</code></pre>
<p>(the output matrix should still be a 3x4 matrix)</p>
<pre><code>(df1+df2+df3)/3
0 1 2 3
0 5.666667 5.000000 3.333333 3.000000
1 4.000000 1.666667 6.666667 4.333333
2 3.000000 3.666667 4.666667 4.333333
</code></pre>
<p>Since I have a dictionary containing 50+ dataframes in reality, an efficient approach is appreciated.
Hopefully, no simple loop.</p>
<p>Thank you in advance!</p>
|
<p>IIUC, try:</p>
<pre><code>(pd.concat(df_dict).groupby(level=1)
.agg(['mean','median','std'])
.swaplevel(0,1, axis=1)
.sort_index(level=0, axis=1))
</code></pre>
<p>Output:</p>
<pre><code> mean median std
0 1 2 3 0 1 2 3 0 1 2 3
0 5.333333 4.333333 8.666667 4.666667 4 5 9 4 2.309401 2.081666 0.577350 4.041452
1 3.333333 5.666667 3.333333 4.666667 3 5 3 4 0.577350 1.154701 3.511885 2.081666
2 4.333333 2.000000 2.666667 8.333333 4 1 2 8 3.511885 2.645751 2.081666 0.577350
</code></pre>
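<p>If you only need the elementwise mean shown in your <code>(df1+df2+df3)/3</code> example, the same concat/groupby pattern reduces to:</p>
<pre><code>pd.concat(df_dict).groupby(level=1).mean()
</code></pre>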
|
python|pandas|dictionary
| 3
|
376,116
| 59,719,283
|
Plotting time series information with missing date values
|
<p>I have the following dataset: </p>
<pre><code>dataset.head(7)
Transaction_date Product Product Code Description
2019-01-01 A 123 A123
2019-01-02 B 267 B267
2019-01-09 B 267 B267
2019-02-11 C 139 C139
2019-02-11 A 125 C125
2019-02-12 C 139 C139
2019-02-12 A 123 A123
</code></pre>
<p>The dataset stores transaction information, for which a transaction date is available. In other words, data is not available for every day.
Ultimately, I want to create a time series plot showing the number of transactions per day.</p>
<p>So far, I have done a simple countplot: </p>
<pre><code>ax = sns.countplot(x=dataset["Transaction_date"],data=dataset)
</code></pre>
<p>This plot shows me the dates, where a transaction happened. But I would prefer to see also the dates, where no transaction has happened in a plot, preferably shown as 0. </p>
<p>I have tried the following, but retrieve an error message:</p>
<pre><code>groupbydate = dataset.groupby("Transaction_date")
ax = sns.tsplot(x="Transaction_date",y="Product",data=groupbydate.fillna(0))
</code></pre>
<p>But I get the error
<code>cannot label index with a null key</code>.
Due to restrictions, I can only use <code>seaborn 0.8.1</code>.</p>
|
<p>I believe <code>reindex</code> should work for you:</p>
<pre><code># First convert the index to datetime
dataset.index = pd.DatetimeIndex(dataset.index)
# Then reindex! You can also select the min and max of the index for the limits
dataset = dataset.reindex(pd.date_range("2019-01-01", "2019-02-12"))  # missing days are filled with NaN by default
</code></pre>
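<p>For the plot itself, a hedged sketch (assuming <code>Transaction_date</code> is a column, as in your <code>head()</code> output): count transactions per day and reindex over the full range so days without transactions show up as 0.</p>
<pre><code>counts = (dataset.groupby(pd.to_datetime(dataset["Transaction_date"])).size()
                 .reindex(pd.date_range("2019-01-01", "2019-02-12"), fill_value=0))
counts.plot()
</code></pre>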
|
python|pandas|indexing|time-series|seaborn
| 0
|
376,117
| 59,761,699
|
Tensorflow no module named '_pywrap_tensorflow_internal' (DLL's appear to be fine)
|
<p>I had to reinstall Python (it was a whole mess, but to my knowledge it fully reinstalled without remnants). I never ran TensorFlow beforehand, but recently wanted to start; unfortunately, whenever I try to import the module it raises an error. I have spent the last days adding every possible CUDA and cuDNN DLL to my paths and trying every other solution I have found, but none of them have had any effect. (Random information I don't think will be relevant but am putting here just in case: pip stopped finding TensorFlow, so I used the Google link when installing via pip. While I do have the most up-to-date Python 3, I also have Python 2.7; if I run a script that imports TensorFlow with <code>python3</code>, the script just ends then and there with no error, but if I run it with the traditional <code>python</code> command from cmd, it raises the error below.)
os: Windows 10</p>
<pre><code>Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Program Files\Python38\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Program Files\Python38\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Paul Duke\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
|
<p>You are using Python 3.8.</p>
<p>from the file list of
<a href="https://pypi.org/project/tensorflow/#files" rel="nofollow noreferrer">pypi for tensorflow</a>
and <a href="https://pypi.org/project/tensorflow-gpu/#files" rel="nofollow noreferrer">pypi for tensorflow</a>, </p>
<p>here is the <a href="https://github.com/tensorflow/tensorflow/issues/33374" rel="nofollow noreferrer">tracking issue on github</a></p>
<p>So just use an old version of python will solve you problem. I'd suggest you using <a href="https://www.anaconda.com/distribution/" rel="nofollow noreferrer">anaconda</a> or miniconda to manage you deeping learning dev environment.</p>
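<p>For example, a sketch with conda (assuming 3.7, the newest Python that TensorFlow supported at the time):</p>
<pre><code>conda create -n tf python=3.7
conda activate tf
pip install tensorflow
</code></pre>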
|
python|tensorflow|python-import|importerror
| 0
|
376,118
| 59,733,569
|
Reporting with Pandas
|
<p>I'm trying to generate reports using Pandas, grouping by a set of fields:</p>
<p>This is what I'm doing:</p>
<pre><code>#!/usr/bin/env python3
import pandas as pd
data = [
{
'id': 1,
'name': 'name1',
'pretty_name': 'Pretty Name 1',
'server_name': 'exampleserver.local',
'provider': 'provider1',
'type': 'A',
'status': 'KO'
},
{
'id': 2,
'name': 'name2',
'pretty_name': 'Pretty Name 2',
'server_name': 'exampleserver1.local',
'provider': 'provider2',
'type': 'B',
'status': 'OK'
},
{
'id': 1,
'name': 'name1',
'pretty_name': 'Pretty Name 1',
'server_name': 'exampleserver.local',
'provider': 'provider1',
'type': 'A',
'status': 'KO'
},
{
'id': 1,
'name': 'name1',
'pretty_name': 'Pretty Name 1',
'server_name': 'exampleserver.local',
'provider': 'provider1',
'type': 'A',
'status': 'OK'
},
{
'id': 2,
'name': 'name2',
'pretty_name': 'Pretty Name 2',
'server_name': 'exampleserver.local',
'provider': 'provider2',
'type': 'A',
'status': 'OK'
}
]
df = pd.DataFrame(data)
grouped = df.groupby(['server_name', 'provider', 'type', 'status'])['id'].count()
print(grouped.to_string())
</code></pre>
<p>Which returns:</p>
<pre><code>server_name provider type status
exampleserver.local provider1 A KO 2
OK 1
provider2 A OK 1
exampleserver1.local provider2 B OK 1
</code></pre>
<p>This is alright, but I would like to add to the result a column containing the total for each provider, i.e.:</p>
<pre><code>server_name provider tot type status
exampleserver.local provider1 3 A KO 2
OK 1
provider2 1 A OK 1
exampleserver1.local provider2 1 B OK 1
</code></pre>
<p>I'm pretty sure this can be done quite easily with Pandas, but I've spent hours reading documentation with no luck.</p>
<p>Any pointers?</p>
<p>Thanks.</p>
<p>EDIT: I've corrected and extended the example as it didn't really made sense.</p>
|
<p>You can create a helper column that flags rows matching <code>provider1</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> and a boolean match (here <code>str.contains</code>, or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> for exact matches), converted to integers so that <code>sum</code> counts the matched values:</p>
<pre><code>grouped = (df.assign(new=df['provider'].str.contains('provider1').astype(int))
.groupby(['server_name', 'provider', 'type', 'status'])['new']
.agg([('count','size'), ('provider1_count','sum')])
.reset_index())
print (grouped)
server_name provider type status count provider1_count
0 exampleserver.local provider1 A KO 1 1
1 exampleserver.local provider2 A OK 1 0
2 exampleserver.local provider2 B OK 1 0
</code></pre>
<p>EDIT:</p>
<p>You can add <code>as_index=False</code> to get a flat <code>DataFrame</code> and <code>rename</code> the count column:</p>
<pre><code>df1 = (df.groupby(['server_name', 'provider', 'type', 'status'], as_index=False)['id']
.count()
.rename(columns={'id':'counts'}))
</code></pre>
<p>Then if want new column in position <code>2</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>DataFrame.insert</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a>:</p>
<pre><code>df1.insert(2, 'tot', df1.groupby(['server_name','provider'])['counts'].transform('sum'))
print(df1)
server_name provider tot type status counts
0 exampleserver.local provider1 3 A KO 2
1 exampleserver.local provider1 3 A OK 1
2 exampleserver.local provider2 1 A OK 1
3 exampleserver1.local provider2 1 B OK 1
</code></pre>
<p>And last if need <code>Multiindex</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a>:</p>
<pre><code>grouped = df1.set_index(['server_name', 'provider', 'tot','type', 'status'])['counts']
print (grouped)
server_name provider tot type status
exampleserver.local provider1 3 A KO 2
OK 1
provider2 1 A OK 1
exampleserver1.local provider2 1 B OK 1
Name: counts, dtype: int64
</code></pre>
|
python|pandas|report
| 3
|
376,119
| 59,784,933
|
iterrows() of 2 columns and save results in one column
|
<p>In my data frame I want to iterate over the rows of two columns but save the result in one column. For example, df is:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>x y
5 10
30 445
70 32</code></pre>
</div>
</div>
</p>
<p>expected output is</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>points sequence
5 1
10 2
30 1
445 2</code></pre>
</div>
</div>
I know about iterrows() but it saved out put in two different columns.How can I get expected output and is there any way to generate sequence number according to condition? any help will be appreciated.</p>
|
<p>First, never use <a href="https://stackoverflow.com/questions/24870953/does-pandas-iterrows-have-performance-issues/24871316#24871316"><code>iterrows</code></a>, because it is really slow.</p>
<p>If you want the <code>1, 2</code> sequence based on the number of columns, convert the values to a numpy array with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html" rel="nofollow noreferrer"><code>DataFrame.to_numpy</code></a>, flatten it with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html" rel="nofollow noreferrer"><code>numpy.ravel</code></a>, and then build the sequence with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>numpy.tile</code></a>:</p>
<pre><code>df = pd.DataFrame({'points': df.to_numpy().ravel(),
'sequence': np.tile([1,2], len(df))})
print (df)
points sequence
0 5 1
1 10 2
2 30 1
3 445 2
4 70 1
5 32 2
</code></pre>
|
pandas|postgresql
| 0
|
376,120
| 59,518,694
|
Filter 2 coordinate index in xarray dataframe
|
<p>I am trying to filter large dataset in xarray for exact <code>latitude</code>, <code>longitude</code> values from following dataset:</p>
<pre><code><xarray.Dataset>
Dimensions: (latitude: 23, level: 6, longitude: 21, time: 178486)
Coordinates:
* time (time) datetime64[ns] 1979-01-01 ... 2019-11-26T21:00:00
* latitude (latitude) float32 46.5 46.25 46.0 45.75 ... 41.5 41.25 41.0
* longitude (longitude) float32 18.0 18.25 18.5 18.75 ... 22.5 22.75 23.0
* level (level) int32 750 800 850 900 950 1000
Data variables:
cbh (time, latitude, longitude) float32 ...
clwc (time, level, latitude, longitude) float32 ...
t (time, level, latitude, longitude) float32 ...
vetar (time, level, latitude, longitude) float32 ...
sp (time, latitude, longitude) float32 ...
Attributes:
Conventions: CF-1.6
history: 2019-05-11 06:14:51 GMT by grib_to_netcdf-2.10.0: /opt/ecmw...
</code></pre>
<p>I am trying to do it with a where statement, but it seems like I need to compare arrays in order to do this.
With <code>DS1.where(DS1.longitude==22.0 and DS1.latitude==43.5,drop=True)</code> I get the famous error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I can perform this filtering in two steps, first with </p>
<pre><code>ds22=DS1.where(DS1.longitude==22.0,drop=True)
</code></pre>
<p>and then with </p>
<pre><code>ds22435=ds22.where(ds22.latitude==43.5,drop=True)
</code></pre>
<p>But is there any way of doing this in one step?</p>
|
<p>Have a look at <a href="http://xarray.pydata.org/en/stable/generated/xarray.Dataset.sel.html" rel="nofollow noreferrer"><code>Dataset.sel</code></a> (see also examples <a href="http://xarray.pydata.org/en/stable/indexing.html#indexing-and-selecting-data" rel="nofollow noreferrer">here</a>). I think something like the following would suit your needs:</p>
<pre class="lang-py prettyprint-override"><code>result = DS1.sel(latitude=43.5, longitude=22.0)
</code></pre>
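<p>If the exact coordinate values might not be present in the grid, <code>sel</code> also supports nearest-neighbour lookup:</p>
<pre class="lang-py prettyprint-override"><code>result = DS1.sel(latitude=43.5, longitude=22.0, method="nearest")
</code></pre>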
|
python|numpy|python-xarray
| 1
|
376,121
| 59,569,943
|
Replace column of pandas multi-index DataFrame with another DataFrame
|
<p>I have a pandas DataFrame like this:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = np.repeat(np.array(range(3), ndmin=2), 3, axis=0)
columns1 = pd.MultiIndex.from_tuples([('foo', 'a'), ('foo', 'b'), ('bar', 'c')])
df1 = pd.DataFrame(data1, columns=columns1)
print(df1)
foo bar
a b c
0 0 1 2
1 0 1 2
2 0 1 2
</code></pre>
<p>And another one like this:</p>
<pre><code>data2 = np.repeat(np.array(range(3, 5), ndmin=2), 3, axis=0)
columns2 = ['d', 'e']
df2 = pd.DataFrame(data2, columns=columns2)
print(df2)
d e
0 3 4
1 3 4
2 3 4
</code></pre>
<p>Now, I would like to replace 'bar' of df1 with df2, but the regular syntax of single-level indexing doesn't seem to work:</p>
<pre><code>df1['bar'] = df2
print(df1)
foo bar
a b c
0 0 1 NaN
1 0 1 NaN
2 0 1 NaN
</code></pre>
<p>When what I would like to get is:</p>
<pre><code> foo bar
a b d e
0 0 1 3 4
1 0 1 3 4
2 0 1 3 4
</code></pre>
<p>I'm not sure if I'm missing something on the syntax or if this is related to the issues described <a href="https://github.com/pandas-dev/pandas/issues/10440" rel="nofollow noreferrer">here</a> and <a href="https://github.com/pandas-dev/pandas/issues/15310" rel="nofollow noreferrer">here</a>. Could someone explain why this doesn't work and how to get the desired outcome?</p>
<p>I'm using python 2.7 and pandas 0.24, if it makes a difference.</p>
|
<p>For lack of a better alternative, I'm currently doing this:</p>
<pre><code>df2.columns = pd.MultiIndex.from_product([['bar'], df2.columns])
df1.drop(columns='bar', level=0, inplace=True)
df1 = df1.join(df2)
</code></pre>
<p>Which gives the desired result. One needs to be cautious though if the order of columns is important, as this approach will likely change it.</p>
<p>Reading further the mentioned issues on Github, I think the reason the approach in the question doesn't work is indeed related to an inconsistency in the pandas API that hasn't been fixed yet.</p>
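<p>If the original column order matters, it can be restored afterwards with <code>reindex</code>; a sketch based on the columns in the question:</p>
<pre><code>desired = pd.MultiIndex.from_tuples([('foo', 'a'), ('foo', 'b'), ('bar', 'd'), ('bar', 'e')])
df1 = df1.reindex(columns=desired)
</code></pre>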
|
python|pandas|multi-index
| 1
|
376,122
| 59,761,890
|
IndexError: tuple index out of range when using Datasets with tensorflow 2.1
|
<p>I'm generating my own TFRecords and can't seem to properly use datasets in my models. Just to test whether it was my current files or something in the model code, I used tfds with MNIST and got the same error.</p>
<p>The error is: <code>IndexError: tuple index out of range</code>
The full output is below. I'm doing this from a jupyter notebook if it changes anything.</p>
<pre><code> 1/Unknown - 0s 47ms/step
--------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-302-f8e9089d7285> in <module>
----> 1 model.fit(dataset['train'].batch(4096))
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing,
**kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing,
**kwargs)
340 mode=ModeKeys.TRAIN,
341 training_context=training_context,
--> 342 total_epochs=epochs)
343 cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
344
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
126 step=step, mode=mode, size=current_batch_size) as batch_logs:
127 try:
--> 128 batch_outs = execution_function(iterator)
129 except (StopIteration, errors.OutOfRangeError):
130 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
96 # `numpy` translates Tensors to values in Eager mode.
97 return nest.map_structure(_non_none_constant_value,
---> 98 distributed_function(input_fn))
99
100 return execution_function
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
566 xla_context.Exit()
567 else:
--> 568 result = self._call(*args, **kwds)
569
570 if tracing_count == self._get_tracing_count():
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
613 # This is the first call of __call__, so we have to initialize.
614 initializers = []
--> 615 self._initialize(args, kwds, add_initializers_to=initializers)
616 finally:
617 # At this point we know that the initialization is complete (or less
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
495 self._concrete_stateful_fn = (
496 self._stateful_fn._get_concrete_function_internal_garbage_collected(
# pylint: disable=protected-access
--> 497 *args, **kwds))
498
499 def invalid_creator_scope(*unused_args, **unused_kwds):
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args,
**kwargs) 2387 args, kwargs = None, None 2388 with self._lock:
-> 2389 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2390 return graph_function 2391
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs) 2701 2702 self._function_cache.missed.add(call_context_key)
-> 2703 graph_function = self._create_graph_function(args, kwargs) 2704 self._function_cache.primary[cache_key] = graph_function 2705 return graph_function, args, kwargs
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2591 arg_names=arg_names, 2592 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593 capture_by_value=self._capture_by_value), 2594 self._function_attributes, 2595 # Tell the ConcreteFunction to clean up its graph once it goes out of
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
976 converted_func)
977
--> 978 func_outputs = python_func(*func_args, **func_kwargs)
979
980 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
437 # __wrapped__ allows AutoGraph to swap in a converted function. We give
438 # the function a weak reference to itself to avoid a reference cycle.
--> 439 return weak_wrapped_fn().__wrapped__(*args, **kwds)
440 weak_wrapped_fn = weakref.ref(wrapped_fn)
441
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in distributed_function(input_iterator)
83 args = _prepare_feed_values(model, input_iterator, mode, strategy)
84 outputs = strategy.experimental_run_v2(
---> 85 per_replica_function, args=args)
86 # Out of PerReplica outputs reduce or pick values to return.
87 all_outputs = dist_utils.unwrap_output_dict(
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
761 fn = autograph.tf_convert(fn, ag_ctx.control_status_ctx(),
762 convert_by_default=False)
--> 763 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
764
765 def reduce(self, reduce_op, value, axis):
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs) 1817 kwargs
= {} 1818 with self._container_strategy().scope():
-> 1819 return self._call_for_each_replica(fn, args, kwargs) 1820 1821 def _call_for_each_replica(self, fn, args, kwargs):
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs) 2162 self._container_strategy(), 2163 replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2164 return fn(*args, **kwargs) 2165 2166 def _reduce_to(self, reduce_op, value, destinations):
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics, standalone)
414 x, y, sample_weights = model._standardize_user_data(
415 x, y, sample_weight=sample_weight, class_weight=class_weight,
--> 416 extract_tensors_from_dataset=True)
417 batch_size = array_ops.shape(nest.flatten(x, expand_composites=True)[0])[0]
418 # If `model._distribution_strategy` is True, then we are in a replica context
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset) 2381 is_dataset=is_dataset, 2382 class_weight=class_weight,
-> 2383 batch_size=batch_size) 2384 2385 def _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs,
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_tensors(self, x, y, sample_weight, run_eagerly, dict_inputs, is_dataset, class_weight, batch_size) 2467 shapes=None, 2468 check_batch_axis=False, # Don't enforce the batch size.
-> 2469 exception_prefix='target') 2470 2471 # Generate sample-wise weight values given the `sample_weight` and
~/miniconda3/envs/keras/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
510 'for each key in: ' + str(names))
511 elif isinstance(data, (list, tuple)):
--> 512 if isinstance(data[0], (list, tuple)):
513 data = [np.asarray(d) for d in data]
514 elif len(names) == 1 and isinstance(data[0], (float, int)):
IndexError: tuple index out of range
</code></pre>
<p>Minimal code to reproduce this:</p>
<pre><code>from tensorflow.keras.layers import Dense, Embedding, Flatten, Lambda, Subtract, Input, Concatenate, Average, Reshape, GlobalAveragePooling1D, Dot, Dropout
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.utils import Sequence
from tensorflow.keras import initializers
import tensorflow_datasets as tfds
tfds.list_builders()
dataset, info = tfds.load("mnist", with_info=True)
inputs = Input((28, 28, 1), name="image")
First = Dense(128, activation="relu")
Second = Dropout(0.2)
Third = Dense(10, activation="softmax", name="label")
first = First(inputs)
second = Second(first)
third = Third(second)
model = Model(inputs=[inputs], outputs=[third])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(dataset['train'].batch(4096))
</code></pre>
<p>I bet I'm missing something in the docs, but I can't figure it out and have been hammering away at it for a few hours. The model trains fine from a generator but as the datasets get larger I'd like to switch over.</p>
|
<p>Adding <code>as_supervised=True</code> to <code>tfds.load()</code> will solve the problem. Why this problem occurs in the first place is another question; it is probably a bug in TF.</p>
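<p>Without <code>as_supervised=True</code>, <code>tfds.load</code> yields a dictionary of features per example, which <code>model.fit</code> cannot map onto the model's inputs and targets; with it, each element is an <code>(image, label)</code> tuple:</p>
<pre><code>dataset, info = tfds.load("mnist", with_info=True, as_supervised=True)
model.fit(dataset['train'].batch(4096))
</code></pre>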
|
python|tensorflow|keras|tensorflow-datasets
| 0
|
376,123
| 59,724,593
|
How to iterate over dataframe rows, replacing values from a matching tuple in a more pythonic way?
|
<p>I am able to replace values in a specific column of a pandas dataframe by iterating over the rows, and match these values to the corresponding tuple pairs which are contained in a list of tuples.</p>
<p>However, when I run this code on a large dataframe, it becomes relatively slow as it has to iterate over the entire list of tuples to find a match for the row in the dataframe.
(12280it [23:21, 8.66it/s])</p>
<p>Is there a more pythonic way to do the matching and replacing? For example indexing the list of tuples, and a bit of code that filters by index?</p>
<p>My used code can be found below.</p>
<pre><code>import pandas as pd
from tqdm import tqdm
# initialize list of lists
data = [['some', 1], ['random', 10], ['stuff', 14],['which',8],['is',22],['irrelevant',24]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['Strings', 'Number'])
</code></pre>
<pre><code>df
Strings Number
0 some 1
1 random 10
2 stuff 14
3 which 8
4 is 22
5 irrelevant 24
</code></pre>
<pre><code>#Create lists necessary to make tuples
x = list(range(1, 25))
y = list(range(345, 395, 2))
#Create tuple
z = list(zip(x,y))
#Replace number values in dataframe
#With corresponding values from tuple
for index, row in tqdm(df.iterrows()):
for x in z:
if row["Number"] ==x[0]:
df.set_value(index,"Number", int(x[1]))
</code></pre>
<p>results in </p>
<pre><code>df
Strings Number
0 some 345
1 random 363
2 stuff 371
3 which 359
4 is 387
5 irrelevant 391
</code></pre>
|
<p>Use <code>map</code></p>
<pre><code>z = dict(zip(x,y))
df['Number'] = df['Number'].map(z)
</code></pre>
<hr>
<pre><code> Strings Number
0 some 345
1 random 363
2 stuff 371
3 which 359
4 is 387
5 irrelevant 391
</code></pre>
<hr>
<p>To map only <em>some</em> values and avoid <code>NaN</code>, use <code>replace</code></p>
<pre><code>df['Number'] = df['Number'].replace(z)
</code></pre>
|
python|pandas|loops|dataframe|optimization
| 3
|
376,124
| 59,787,897
|
How does TensorFlow SparseCategoricalCrossentropy work?
|
<p>I'm trying to understand this loss function in TensorFlow but I don't get it. It's <strong>SparseCategoricalCrossentropy</strong>. All other loss functions need outputs and labels of the same shape; this specific loss function doesn't.</p>
<p>Source code:</p>
<pre><code>import tensorflow as tf;
scce = tf.keras.losses.SparseCategoricalCrossentropy();
Loss = scce(
tf.constant([ 1, 1, 1, 2 ], tf.float32),
tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32)
);
print("Loss:", Loss.numpy());
</code></pre>
<p>The error is:</p>
<pre><code>InvalidArgumentError: Received a label value of 2 which is outside the valid range of [0, 2).
Label values: 1 1 1 2 [Op:SparseSoftmaxCrossEntropyWithLogits]
</code></pre>
<p>How to provide proper params to the loss function SparseCategoricalCrossentropy?</p>
|
<p>SparseCategoricalCrossentropy and CategoricalCrossentropy both compute categorical cross-entropy. The only difference is in how the targets/labels should be encoded.</p>
<p>When using SparseCategoricalCrossentropy the targets are represented by the index of the category (starting from 0). Your outputs have shape 4x2, which means you have two categories. Therefore, the targets should be a 4 dimensional vector with entries that are either 0 or 1. For example:</p>
<pre><code>scce = tf.keras.losses.SparseCategoricalCrossentropy();
Loss = scce(
tf.constant([ 0, 0, 0, 1 ], tf.float32),
tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))
</code></pre>
<p>This in contrast to CategoricalCrossentropy where the labels should be one-hot encoded:</p>
<pre><code>cce = tf.keras.losses.CategoricalCrossentropy();
Loss = cce(
  tf.constant([ [1,0], [1,0], [1, 0], [0, 1] ], tf.float32),
tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))
</code></pre>
<p>SparseCategoricalCrossentropy is more efficient when you have a lot of categories.</p>
|
tensorflow|machine-learning|deep-learning|loss-function|cross-entropy
| 30
|
376,125
| 59,521,480
|
Extract Keras concatenated layer of 3 embedding layers, but it's an empty list
|
<p>I am constructing a Keras Classification model with Multiple Inputs (3 actually) to predict one single output. Specifically, my 3 <strong>inputs</strong> are:</p>
<ol>
<li>Actors</li>
<li>Plot Summary</li>
<li>Relevant Movie Features</li>
</ol>
<p><strong>Output:</strong></p>
<ol>
<li>Genre tags</li>
</ol>
<p><strong>Python Code (create the multiple input keras)</strong></p>
<pre><code>def kera_multy_classification_model():
sentenceLength_actors = 15
vocab_size_frequent_words_actors = 20001
sentenceLength_plot = 23
vocab_size_frequent_words_plot = 17501
sentenceLength_features = 69
vocab_size_frequent_words_features = 20001
model = keras.Sequential(name='Multy-Input Keras Classification model')
actors = keras.Input(shape=(sentenceLength_actors,), name='actors_input')
plot = keras.Input(shape=(sentenceLength_plot,), name='plot_input')
features = keras.Input(shape=(sentenceLength_features,), name='features_input')
emb1 = layers.Embedding(input_dim = vocab_size_frequent_words_actors + 1,
# based on keras documentation input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1.
output_dim = Keras_Configurations_model1.EMB_DIMENSIONS,
# int >= 0. Dimension of the dense embedding
embeddings_initializer = 'uniform',
# Initializer for the embeddings matrix.
mask_zero = False,
input_length = sentenceLength_actors,
name="actors_embedding_layer")(actors)
encoded_layer1 = layers.LSTM(100)(emb1)
emb2 = layers.Embedding(input_dim = vocab_size_frequent_words_plot + 1,
output_dim = Keras_Configurations_model2.EMB_DIMENSIONS,
embeddings_initializer = 'uniform',
mask_zero = False,
input_length = sentenceLength_plot,
name="plot_embedding_layer")(plot)
encoded_layer2 = layers.LSTM(100)(emb2)
emb3 = layers.Embedding(input_dim = vocab_size_frequent_words_features + 1,
output_dim = Keras_Configurations_model3.EMB_DIMENSIONS,
embeddings_initializer = 'uniform',
mask_zero = False,
input_length = sentenceLength_features,
name="features_embedding_layer")(features)
encoded_layer3 = layers.LSTM(100)(emb3)
merged = layers.concatenate([encoded_layer1, encoded_layer2, encoded_layer3])
layer_1 = layers.Dense(Keras_Configurations_model1.BATCH_SIZE, activation='relu')(merged)
output_layer = layers.Dense(Keras_Configurations_model1.TARGET_LABELS, activation='softmax')(layer_1)
model = keras.Model(inputs=[actors, plot, features], outputs=output_layer)
print(model.output_shape)
print(model.summary())
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
</code></pre>
<p><strong>Model's Structure</strong></p>
<p><a href="https://i.stack.imgur.com/9wfri.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9wfri.png" alt="enter image description here"></a></p>
<p><strong>My problem:</strong></p>
<p>After successfully fitting and training the model on some training data, I would like to extract the embeddings of this model for later use. My main approach before using a multiple input keras model, was to train 3 different keras models and extract 3 different embedding layers of shape 100. Now that I have the multiple input keras model, <strong>I want to extract the concatenated embedding layer</strong> with output shape (None, 300).</p>
<p>However, when I try to use this python command:</p>
<pre><code>embeddings = model_4.layers[9].get_weights()
print(embeddings)
</code></pre>
<p>or </p>
<pre><code>embeddings = model_4.layers[9].get_weights()[0]
print(embeddings)
</code></pre>
<p>I get either an empty list (1st code sample) or an <em>IndexError: list index out of range</em> (2nd code sample).</p>
<p>Thank you in advance for any advice or help on this matter. Feel free to ask on the comments any additional information that I may have missed, to make this question more complete.</p>
<p><em>Note: Python code and model's structure have been also presented to this previously answered <a href="https://stackoverflow.com/questions/59489625/model-fit-keras-classification-multiple-inputs-single-output-gives-error-attr">question</a></em></p>
|
<p>The Concatenate layer does not have any weights (it has no trainable parameters, as you can see from your model summary), hence your <code>get_weights()</code> output comes back empty. Concatenation is just an operation.
<br>
In your case you can get the weights of your individual embedding layers after training.</p>
<pre><code>model.layers[3].get_weights() # similarly for layer 4 and 5
</code></pre>
<p>Alternatively, if you want to store your embeddings in (None, 300) form, you can use numpy to concatenate the weights (note this only lines up if the embedding matrices share the same first dimension, i.e. the same vocabulary size):<br></p>
<pre><code>out_concat = np.concatenate([model.layers[3].get_weights()[0], model.layers[4].get_weights()[0], model.layers[5].get_weights()[0]], axis=-1)
</code></pre>
<p>You can, however, get the output tensor of the concatenate layer:</p>
<pre><code>out_tensor = model.layers[9].output
# <tf.Tensor 'concatenate_3_1/concat:0' shape=(?, 300) dtype=float32>
</code></pre>
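<p>If what you actually need are the 300-dimensional concatenated representations for given samples (rather than layer weights), you can build a sub-model that ends at the concatenate layer. A sketch, assuming <code>model_4</code> is your trained multi-input model and <code>actors_data</code>, <code>plot_data</code>, <code>features_data</code> are your three input arrays:</p>
<pre><code>encoder = keras.Model(inputs=model_4.inputs, outputs=model_4.layers[9].output)
embeddings_300 = encoder.predict([actors_data, plot_data, features_data])  # shape (n_samples, 300)
</code></pre>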
|
python|tensorflow|keras|nlp|word-embedding
| 1
|
376,126
| 59,830,168
|
layer Normalization in pytorch?
|
<p>Shouldn't the layer normalization of <code>x = torch.tensor([[1.5, 0, 0, 0]])</code> be <code>[[1.5, -0.5, -0.5, -0.5]]</code>, according to this <a href="https://arxiv.org/pdf/1607.06450.pdf" rel="noreferrer">paper</a> and the equation from the <a href="https://pytorch.org/docs/stable/nn.html#layernorm" rel="noreferrer">pytorch doc</a>? But <code>torch.nn.LayerNorm</code> gives <code>[[ 1.7320, -0.5773, -0.5773, -0.5773]]</code>.</p>
<p>Here is the example code:</p>
<pre><code>x = torch.tensor([[1.5,.0,.0,.0]])
layerNorm = torch.nn.LayerNorm(4, elementwise_affine = False)
y1 = layerNorm(x)
mean = x.mean(-1, keepdim = True)
var = x.var(-1, keepdim = True)
y2 = (x-mean)/torch.sqrt(var+layerNorm.eps)
</code></pre>
<p>where:</p>
<pre><code>y1 == tensor([[ 1.7320, -0.5773, -0.5773, -0.5773]])
y2 == tensor([[ 1.5000, -0.5000, -0.5000, -0.5000]])
</code></pre>
|
<p>Yet another simplified implementation of a Layer Norm layer with bare PyTorch.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple
import torch
def layer_norm(
x: torch.Tensor, dim: Tuple[int], eps: float = 0.00001
) -> torch.Tensor:
mean = torch.mean(x, dim=dim, keepdim=True)
var = torch.square(x - mean).mean(dim=dim, keepdim=True)
return (x - mean) / torch.sqrt(var + eps)
def test_that_results_match() -> None:
dims = (1, 2)
X = torch.normal(0, 1, size=(3, 3, 3))
indices = torch.tensor(dims)
normalized_shape = torch.tensor(X.size()).index_select(0, indices)
orig_layer_norm = torch.nn.LayerNorm(normalized_shape)
y = orig_layer_norm(X)
y_hat = layer_norm(X, dim=dims)
assert torch.allclose(y, y_hat)
</code></pre>
<p>Note that the original implementation also has trainable parameters <strong>γ</strong> and <strong>β</strong> (see the <a href="https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html" rel="nofollow noreferrer">docs</a>).</p>
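<p>As for the discrepancy in the question itself: <code>Tensor.var</code> applies Bessel's correction by default (<code>unbiased=True</code>), while <code>LayerNorm</code> normalizes with the biased variance, which is why <code>y1</code> and <code>y2</code> differ. Disabling the correction reproduces the <code>LayerNorm</code> output:</p>
<pre class="lang-py prettyprint-override"><code>import torch

x = torch.tensor([[1.5, 0.0, 0.0, 0.0]])
mean = x.mean(-1, keepdim=True)
var = x.var(-1, keepdim=True, unbiased=False)  # biased variance, as LayerNorm uses
y = (x - mean) / torch.sqrt(var + 1e-5)
# y ≈ tensor([[ 1.7320, -0.5773, -0.5773, -0.5773]]), matching torch.nn.LayerNorm
</code></pre>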
|
machine-learning|deep-learning|nlp|pytorch
| 4
|
376,127
| 59,492,723
|
How do I change a torch tensor to concat with another tensor
|
<p>I'm trying to concatenate a tensor of numerical data with the output tensor of a resnet-50 model. The output of that model is tensor shape <code>torch.Size([10,1000])</code> and the numerical data is tensor shape <code>torch.Size([10, 110528,8])</code> where the <code>10</code> is the batch size, <code>110528</code> is the number of observations in a data frame sense, and 8 is the number of columns (in a dataframe sense). I need to reshape the numerical tensor to <code>torch.Size([10,8])</code> so it will concatenate properly.</p>
<p>How would I reshape the tensor? </p>
|
<p>Starting tensors.</p>
<pre><code>a = torch.randn(10, 1000)
b = torch.randn(10, 110528, 8)
</code></pre>
<p>New tensor to allow concatenate.</p>
<pre><code>c = torch.zeros(10,1000,7)
</code></pre>
<p>Check shapes.</p>
<pre><code>a[:,:,None].shape, c.shape
</code></pre>
<pre><code>(torch.Size([10, 1000, 1]), torch.Size([10, 1000, 7]))
</code></pre>
<p>Alter tensor <code>a</code> to allow concatenate.</p>
<pre><code>a = torch.cat([a[:,:,None],c], dim=2)
</code></pre>
<p>Concatenate in dimension 1.</p>
<pre><code>torch.cat([a,b], dim=1).shape
</code></pre>
<pre><code>torch.Size([10, 111528, 8])
</code></pre>
|
pytorch
| 1
|
376,128
| 59,735,946
|
Is there any problem in DataGenerator for 3D data?
|
<p>I tried to use a DataGenerator for a 3D data set, but I got an error.</p>
<p><img src="https://i.stack.imgur.com/gj8jh.png" alt="Error message"></p>
<pre><code>class DataFeedGenerator(tf.keras.utils.Sequence):
    def __init__(self, x1, x2, y, batch_size=32, dim=(44,52,52,1), n_channels=1, n_classes=1, shuffle=True, name="Training"):
        self.dim = dim
        self.batch_size = batch_size
        self.Y = y
        self.X1 = x1
        self.X2 = x2
        self.currentX1 = None
        self.currentX2 = None
        self.currentY = None
        self.batch_index = 0
        self.n_channels = n_channels
        self.classes = n_classes
        self.shuffle = shuffle
        self.name = name

    def __len__(self):
        n = math.ceil(self.X1.shape[0] / self.batch_size)
        print(self.name, "__len__", n)
        return n

    def __getitem__(self, index):
        self.currentX1 = self.X1[index * self.batch_size:(index + 1) * self.batch_size]
        self.currentX2 = self.X2[index * self.batch_size:(index + 1) * self.batch_size]
        self.currentY = self.Y[index * self.batch_size:(index + 1) * self.batch_size]
        return [self.currentX1, self.currentX2], self.currentY
</code></pre>
|
<p>Does the error only occur for the 208th batch?<br>
Considering your shapes and batch_size, the total number of batches should be 15000/32 = 468; since you report 245 batches in total, does that mean you are using a batch_size of 61? Is that correct?<br>
Also,</p>
<pre class="lang-py prettyprint-override"><code>self.currentX1 = None
self.currentX2 = None
self.currentY = None
...
self.currentX1 = self.X1[index * self.batch_size:(index + 1) * self.batch_size]
self.currentX2 = self.X2[index * self.batch_size:(index + 1) * self.batch_size]
self.currentY = self.Y[index * self.batch_size:(index + 1) * self.batch_size]
return [self.currentX1, self.currentX2], self.currentY
</code></pre>
<p>Why are you using class attributes? You could just use local variables current_x1, current_x2...<br>
I would also write __len__ as:</p>
<pre class="lang-py prettyprint-override"><code>def __len__(self):
return math.ceil(len(self.X1) / self.batch_size)
</code></pre>
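<p>Following the same point about class attributes, a sketch of a leaner <code>__getitem__</code> that only uses a local slice:</p>
<pre class="lang-py prettyprint-override"><code>def __getitem__(self, index):
    sl = slice(index * self.batch_size, (index + 1) * self.batch_size)
    return [self.X1[sl], self.X2[sl]], self.Y[sl]
</code></pre>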
|
tensorflow|keras
| 0
|
376,129
| 59,796,225
|
combining three dataframes with matching timestamps and duration match
|
<p>Note: This question is some similar to the answered question here <a href="https://stackoverflow.com/questions/56980740/combining-three-different-timestamp-dataframes-using-duration-match">combining three different timestamp dataframes using duration match</a></p>
<p>I have two master data frames and one slave data frame. The two masters' data occur every 30 minutes. I am combining the three data frames with the masters as a reference, matching timestamps from the slave as given below. Data from the two masters taken during a particular session should appear in a single row.</p>
<p><strong>My input</strong> is </p>
<pre><code>mas_df1 =
index S1
2019-01-09 13:20:17 2202.517620
2019-01-09 14:00:17 2392.173558
mas_df2 =
index S2
2019-01-09 13:24:32 2134.791454
2019-01-09 14:04:32 1958.719125
mas_list = [mas_df1,mas_df2]
slv_df =
index POA
2019-01-09 13:20:00 752.743700
2019-01-09 13:20:17 742.961815
2019-01-09 13:24:32 697.267647
2019-01-09 13:24:48 699.418420
2019-01-09 14:00:00 778.720800
2019-01-09 14:00:17 791.852790
2019-01-09 14:04:32 691.605547
2019-01-09 14:04:48 688.313520
</code></pre>
<p>The combined data frame should have the timestamps and complete data of both masters, but only the slave df's data at matching timestamps should be appended to it.</p>
<p><strong>My present code</strong> to achieve this is given below. </p>
<pre><code>aux = []
for i in range(0,len(mas_list),1):
s1=slv_df['POA'].reindex(mas_list[i].index,method='nearest').add_prefix(mas_list[i].columns[0])
if i==0:
aux.append(s1.join(mas_list[i]))
else:
aux.append(s1.join(mas_list[i]).reindex(aux.index,method='nearest'))
cmb_df = pd.concat(aux,axis=1)
</code></pre>
<p><strong>My present output</strong> is: </p>
<pre><code>raise ValueError("cannot reindex a non-unique index "
ValueError: cannot reindex a non-unique index with a method or limit
</code></pre>
<p><strong>My expected output</strong> is: </p>
<pre><code>cmd_df =
index S1 S1POA S2 S2POA
2019-01-09 13:20:17 2202.517620 742.961815 2134.791454 697.267647
2019-01-09 14:00:17 2392.173558 791.852790 1958.719125 691.605547
</code></pre>
<p>Any suggestions to improve my code?</p>
|
<p>Is this what you are looking for?</p>
<pre><code>import pandas as pd
# create dataframes
mas_df1 = pd.DataFrame({'S1': [2202.517620, 2392.173558]}, index=pd.to_datetime(['2019-01-09 13:20:17', '2019-01-09 14:00:17']))
mas_df2 = pd.DataFrame({'S2': [2134.791454, 1958.719125]}, index=pd.to_datetime(['2019-01-09 13:24:32', '2019-01-09 14:04:32']))
slv_df = pd.DataFrame(
{'POA': [752.743700, 742.961815, 697.267647, 699.418420, 778.720800, 791.852790, 691.605547, 688.313520]},
index = pd.to_datetime(['2019-01-09 13:20:00', '2019-01-09 13:20:17', '2019-01-09 13:24:32', '2019-01-09 13:24:48',
'2019-01-09 14:00:00', '2019-01-09 14:00:17', '2019-01-09 14:04:32', '2019-01-09 14:04:48'])
)
# combine slave to master (i.e. left join per master df)
mas_df1 = mas_df1.merge(slv_df, how='left', left_index=True, right_index=True).rename(columns={'POA': 'S1PAO'})
mas_df2 = mas_df2.merge(slv_df, how='left', left_index=True, right_index=True).rename(columns={'POA': 'S2PAO'})
# combine two master dataframes, by matching to the nearest time
mas_df2 = mas_df2.reindex(mas_df1.index, method='nearest') # set index of df2 to match (nearest) index of df1
mas_df = pd.concat([mas_df1, mas_df2], axis=1) # combine dataframes
mas_df
</code></pre>
<p><strong>EDIT: doing the same for a list of dataframes</strong></p>
<pre><code># combine slave to master (i.e. left join per master df)
mas_list = [mas_df1, mas_df2]
for i, df in enumerate(mas_list):
mas_list[i] = df.merge(slv_df, how='left', left_index=True, right_index=True).rename(columns={'POA': f'S{i}PAO'})
# combine master dataframes, by matching to the nearest time of the first master frame
for i, df in enumerate(mas_list[1:]):
mas_list[i+1] = df.reindex(mas_list[0].index, method='nearest') # set index of mas dfs > 1 to match (nearest) index of df1
mas_df = pd.concat(mas_list, axis=1) # combine dataframes
</code></pre>
|
python|pandas|dataframe
| 2
|
376,130
| 59,759,565
|
Pandas error cannot convert String to Float setting a value at index
|
<p>I know this was working for me the last time I ran my script, but it looks like it is not anymore. I have a scraping module which returns a dict; in my main script I'm running the <strong>scraping</strong> and assigning values, but now I'm getting this error about not being able to convert a string value to a float (should I set the column to string from the very beginning?)</p>
<p>This is the error</p>
<pre><code>dataset.at[index,'UserPhotoUrl'] = scrapedData['usernamePhotoLink']
</code></pre>
<p>ValueError: could not convert string to float: 'https://instagram.fhex4-1.fna.fbcdn.net/v/t51.2885-19/s150x150/81572390_579207132636171_1735861275205828608_n.jpg?_nc_ht=instagram.fhex4-1.fna.fbcdn.net&_nc_ohc=EfweZRX7mn8AX8kKx7e&oh=b7cb7aaf3ee583604e4a40cd7b23447f&oe=5EA1B8F7'</p>
|
<p>Well guys, I did find the solution; it doesn't look very elegant though.</p>
<pre><code>dataset = pd.read_csv(openFilename, delimiter = ',', encoding = my_encoding)
dataset['UserPhotoUrl'] = " "
dataset['PostPhotoUrl'] = " "
# astype returns a new DataFrame, so the result must be assigned back
dataset = dataset.astype({'UserPhotoUrl': 'str', 'PostPhotoUrl': 'str'})
</code></pre>
<p>I had to force the columns to str at the very beginning.</p>
|
python|json|pandas|screen-scraping
| 0
|
376,131
| 59,772,302
|
How to return columns of dataframe held in dictionary
|
<p>I have a dictionary <code>dict</code> of dataframes <code>df1, df2, df3</code>. How can I return the columns of any one dataframe (they are always the same)?</p>
<p>I want to use them as a graph titles, I've tried a few variantions of;</p>
<p><code>titles = dict.items(df1.columns)</code> </p>
<p>I know this is likely very simple but my noob brain can't see the answer.</p>
<p>Any help is really appreciated. Thanks</p>
|
<pre class="lang-py prettyprint-override"><code># if you don't know dict keys
titles = dict_df[[*dict_df.keys()][0]].columns
# if you know dict keys you can use this
titles = dict_df['df1'].columns
</code></pre>
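<p>A slightly more idiomatic alternative grabs an arbitrary dataframe from the dict without materialising the key list:</p>
<pre class="lang-py prettyprint-override"><code>titles = next(iter(dict_df.values())).columns
</code></pre>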
|
python|pandas
| 1
|
376,132
| 59,815,840
|
matplotlib plot line and bar chart together on same x-axis
|
<p>When I plot two pandas dfs together as two line charts, I get them on the same x-axis properly. When I plot one as a bar chart, however, the axis seems to be offset.</p>
<pre><code>ax = names_df.loc[:, name].plot(color='black')
living_df.loc[:, name].plot(figsize=(12, 8), ax=ax)
</code></pre>
<p>This works properly, producing this result</p>
<p><a href="https://i.stack.imgur.com/Cni5u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cni5u.png" alt="result"></a></p>
<p>On the other hand, this:</p>
<pre><code>ax = names_df.loc[:, name].plot(color='black')
living_df.loc[:, name].plot.bar(figsize=(12, 8), ax=ax)
</code></pre>
<p>does not, and has this result</p>
<p><a href="https://i.stack.imgur.com/pswep.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pswep.png" alt="result"></a>.</p>
|
<p>Use <code>matplotlib</code> directly instead of calling the <code>plot</code> method of the pandas object. (The offset happens because pandas draws bar charts against categorical positions 0..n-1 regardless of the index values, while line plots use the actual index, so the two end up on different x scales; plotting both against integer positions keeps them aligned.)</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
# Line plot
plt.plot(names_df.loc[:, name], color='black')
plt.plot(living_df.loc[:, name])
plt.show()
plt.close()
# Bar plot
plt.plot(names_df.loc[:, name].values)
bar_data = living_df.loc[:, name].values
plt.bar(range(len(bar_data)), bar_data)
plt.xticks(range(len(bar_data)), names_df.index.values) # Restore xticks
plt.show()
plt.close()
</code></pre>
|
python|pandas|matplotlib
| 1
|
376,133
| 59,609,829
|
Weighted Pixel Wise Categorical Cross Entropy for Semantic Segmentation
|
<p>I have recently started learning about Semantic Segmentation. I am trying to train a UNet for the same. My input is RGB 128x128x3 images. My masks are made up of 4 classes 0, 1, 2, 3 and are One-Hot Encoded with dimension 128x128x4.</p>
<pre><code>def weighted_cce(y_true, y_pred):
weights = []
t_inf = tf.convert_to_tensor(1e9, dtype = 'float32')
t_zero = tf.convert_to_tensor(0, dtype = 'int64')
for i in range(0, 4):
l = tf.argmax(y_true, axis = -1) == i
n = tf.cast(tf.math.count_nonzero(l), 'float32') + K.epsilon()
weights.append(n)
weights = [batch_size/j for j in weights]
y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
# clip to prevent NaN's and Inf's
y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
# calc
loss = y_true * K.log(y_pred) * weights
loss = -K.sum(loss, -1)
return loss
</code></pre>
<p>This is the loss function that I am using but it classifies every pixel as 2. What am I doing wrong?</p>
|
<p>You should compute the weights based on your entire data (unless your batch size is reasonably big, so that the per-batch weights are fairly stable).</p>
<p>If some class is underrepresented, with a small batch size, it will have near infinity weights. </p>
<p>If your target data is numpy array:</p>
<pre><code>shp = y_train.shape
totalPixels = shp[0] * shp[1] * shp[2]
weights = np.sum(y_train, axis=(0, 1, 2)) #final shape (4,)
weights = totalPixels/weights
</code></pre>
<p>If your data is in a <code>Sequence</code> generator:</p>
<pre><code>totalPixels = 0
counts = np.zeros((4,))
for i in range(len(generator)):
x, y = generator[i]
shp = y.shape
totalPixels += shp[0] * shp[1] * shp[2]
counts = counts + np.sum(y, axis=(0,1,2))
weights = totalPixels / counts
</code></pre>
<p>If your data is in a <code>yield</code> generator (you must know how many batches you have in an epoch):</p>
<pre><code>for i in range(batches_per_epoch):
x, y = next(generator)
#the rest is equal to the Sequence example above
</code></pre>
<hr>
<h2>Attempt 1</h2>
<p>I don't know if newer versions of Keras are able to handle this, but you can try the simplest approach first: simply call <code>fit</code> or <code>fit_generator</code> with the <code>class_weight</code> argument:</p>
<pre><code>model.fit(...., class_weight = {0: weights[0], 1: weights[1], 2: weights[2], 3: weights[3]})
</code></pre>
<h2>Attempt 2</h2>
<p>Make a healthier loss function:</p>
<pre><code>weights = weights.reshape((1,1,1,4))
kWeights = K.constant(weights)
def weighted_cce(y_true, y_pred):
yWeights = kWeights * y_pred #shape (batch, 128, 128, 4)
yWeights = K.sum(yWeights, axis=-1) #shape (batch, 128, 128)
loss = K.categorical_crossentropy(y_true, y_pred) #shape (batch, 128, 128)
wLoss = yWeights * loss
return K.sum(wLoss, axis=(1,2))
</code></pre>
|
tensorflow|keras|deep-learning|image-segmentation|semantic-segmentation
| 0
|
376,134
| 59,775,640
|
Python : How to find the count of empty cells in one column based on another column element wise?
|
<pre><code>df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice','Jane', 'Alice','Bob', 'Alice'],
'income': [40000, np.nan, 42000, 50000, np.nan, np.nan, 30000]})
user income
0 Bob 40000.0
1 Jane NaN
2 Alice 42000.0
3 Jane 50000.0
4 Alice NaN
5 Bob NaN
6 Alice 30000.0
</code></pre>
<p>I want to find the count of all the null values in the 'income' column, broken down by the 'user' column of my df.
I'm trying something like this: <code>len(df[df.income.isnull().sum()])</code> but it is incomplete.</p>
|
<p>You can use the method <code>value_counts()</code>:</p>
<pre><code>df.loc[df['income'].isna(), 'user'].value_counts()
</code></pre>
<p>Output:</p>
<pre><code>Jane 1
Bob 1
Alice 1
Name: user, dtype: int64
</code></pre>
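<p>Note that <code>value_counts</code> only lists users that have at least one missing value. If you also want users with zero NaNs to appear with a count of 0, a groupby sum over the boolean mask works:</p>
<pre><code>df['income'].isna().groupby(df['user']).sum()
</code></pre>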
|
python|pandas|dataframe
| 3
|
376,135
| 59,775,373
|
How can i solve InvalidArgumentError: cycle_length must be > 0 when load tfrecords file
|
<p>I am starting out building an efficient data pipeline for audio files using <code>tf.TFRecord and tf.Example</code>, but I get a <code>tensorflow.python.framework.errors_impl.InvalidArgumentError</code> when trying to load data from the saved tfrecords file. I have looked at a lot of solutions for this problem, but none of them worked.</p>
<pre><code>AUTO = tf.data.experimental.AUTOTUNE
def _parse_batch(record_batch, sample_rate, duration):
n_sample = sample_rate * duration
feature_description = {
'audio': tf.io.FixedLenFeature([n_sample], tf.float32),
'label': tf.io.VarLenFeature(tf.int64)
}
example = tf.io.parse_example(record_batch, feature_description)
return example['audio'], example['label']
def get_dataset_from_tfrecords(tfrecords_dir='tfrecords', split='train', batch_size=16,
sample_rate=44100, duration=4, n_epochs=10):
if split not in ('train', 'validate'):
raise ValueError("Split must be either 'train' or 'validate'")
pattern = os.path.join(tfrecords_dir, '{}*.tfrecord'.format(split))
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False
filenames = tf.io.gfile.glob(pattern)
# Read TFRecord files in an interleaved order
dataset = tf.data.TFRecordDataset(filenames, compression_type='ZLIB', num_parallel_reads=AUTO)
dataset = dataset.with_options(ignore_order)
# Prepare batches
dataset = dataset.batch(batch_size)
# Parse a batch into a dataset of [audio, label] pairs
dataset = dataset.map(lambda x: _parse_batch(x, sample_rate, duration))
# Repeat the training data for n_epochs. Don't repeat test/validate splits.
if split == 'train':
dataset = dataset.repeat(n_epochs)
return dataset.prefetch(buffer_size=AUTO)
</code></pre>
<p>Here is the full error</p>
<pre><code>Traceback (most recent call last):
File "train.py", line 25, in <module>
main()
File "train.py", line 16, in main
n_epochs=n_epochs)
File "D:\Natural Language Processing\speech_to_text\utils\load_tfrecord.py", line 33, in get_dataset_from_tfrecords
dataset = tf.data.TFRecordDataset(filenames, compression_type='ZLIB', num_parallel_reads=AUTO)
File "C:\Users\levan\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\readers.py", line 304, in __init__
num_parallel_reads)
File "C:\Users\levan\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\readers.py", line 85, in _create_dataset_reader
prefetch_input_elements=None)
File "C:\Users\levan\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\readers.py", line 250, in __init__
**self._flat_structure)
File "C:\Users\levan\Anaconda3\lib\site-packages\tensorflow_core\python\ops\gen_experimental_dataset_ops.py", line 5977, in parallel_interleave_dataset
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: `cycle_length` must be > 0 [Op:ParallelInterleaveDataset]
</code></pre>
<p>Can anyone help me?</p>
|
<p>I encountered a similar problem on TensorFlow 2.0; upgrading to 2.1 solved the issue.</p>
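<p>If upgrading is not an option, a possible workaround (an assumption based on the error text, which suggests <code>num_parallel_reads</code> ends up as the <code>cycle_length</code> of the ParallelInterleaveDataset and AUTOTUNE is not handled there) is to pass a fixed positive integer instead of AUTOTUNE:</p>
<pre><code>dataset = tf.data.TFRecordDataset(filenames, compression_type='ZLIB',
                                  num_parallel_reads=4)  # any positive int
</code></pre>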
|
python|python-3.x|tensorflow|tfrecord|data-pipeline
| 0
|
376,136
| 59,774,367
|
How to split ' ; ' separated CSV file in pandas after using read_csv() in pandas?
|
<p><a href="https://i.stack.imgur.com/xZepu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZepu.png" alt="enter image description here"></a></p>
<p>I am trying to split the semicolon-separated column into 4 individual columns shown in the picture using split(), but it's not working after importing the CSV file. Can anyone please tell me how I can do it?</p>
<pre><code>I want something like this:
sexe preusuel annais nombre
1 A 1980 3
and so on....
</code></pre>
|
<pre><code>df = pd.read_csv('nat2018.csv', sep=';')
</code></pre>
<p>Should work for you.</p>
|
python|pandas
| 3
|
376,137
| 59,612,914
|
Difference about "BinaryCrossentropy" and "binary_crossentropy" in tf.keras.losses?
|
<p>I'm training a model in TensorFlow 2.0 using tf.GradientTape(), but I find that the model's accuracy is <code>95%</code> if I use <code>tf.keras.losses.BinaryCrossentropy</code>, but degrades to <code>75%</code> if I use <code>tf.keras.losses.binary_crossentropy</code>. So I'm confused: what is the difference between these two seemingly identical losses?</p>
<pre><code>import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
def read_data():
red_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv", sep=";")
white_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv", sep=";")
red_wine["type"] = 1
white_wine["type"] = 0
wines = red_wine.append(white_wine)
return wines
def get_x_y(df):
x = df.iloc[:, :-1].values.astype(np.float32)
y = df.iloc[:, -1].values.astype(np.int32)
return x, y
def build_model():
inputs = layers.Input(shape=(12,))
dense1 = layers.Dense(12, activation="relu", name="dense1")(inputs)
dense2 = layers.Dense(9, activation="relu", name="dense2")(dense1)
outputs = layers.Dense(1, activation = "sigmoid", name="outputs")(dense2)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
return model
def generate_dataset(df, batch_size=32, shuffle=True, train_or_test = "train"):
x, y = get_x_y(df)
ds = tf.data.Dataset.from_tensor_slices((x, y))
if shuffle:
ds = ds.shuffle(10000)
if train_or_test == "train":
ds = ds.batch(batch_size)
else:
ds = ds.batch(len(df))
return ds
# loss_object = tf.keras.losses.binary_crossentropy
loss_object = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
def train_step(model, optimizer, x, y):
with tf.GradientTape() as tape:
pred = model(x, training=True)
loss = loss_object(y, pred)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
def train_model(model, train_ds, epochs=10):
for epoch in range(epochs):
print(epoch)
for x, y in train_ds:
train_step(model, optimizer, x, y)
def main():
data = read_data()
train, test = train_test_split(data, test_size=0.2, random_state=23)
train_ds = generate_dataset(train, 32, True, "train")
test_ds = generate_dataset(test, 32, False, "test")
model = build_model()
train_model(model, train_ds, 10)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
model.evaluate(test_ds)
main()
</code></pre>
|
<p>They should indeed work the same; <a href="https://github.com/tensorflow/tensorflow/blob/8d40fa56169d08c6a9911242dfdf8ba74876673b/tensorflow/python/keras/losses.py#L371" rel="nofollow noreferrer"><code>BinaryCrossentropy</code></a> uses <a href="https://github.com/tensorflow/tensorflow/blob/8d40fa56169d08c6a9911242dfdf8ba74876673b/tensorflow/python/keras/losses.py#L1102" rel="nofollow noreferrer"><code>binary_crossentropy</code></a>, with difference apparent in docstring descriptions; former's intended for <em>two</em> class labels, whereas later supports an arbitrary class count. However, if passing in targets in expected format, both apply same preprocessing before calling backend's <a href="https://github.com/tensorflow/tensorflow/blob/8d40fa56169d08c6a9911242dfdf8ba74876673b/tensorflow/python/keras/backend.py#L4605" rel="nofollow noreferrer"><code>binary_crossentropy</code></a>, which does the actual computing.</p>
<p>The difference you observe is likely a <em>reproducibility</em> issue; ensure you set the random seed - see function below. For a more complete answer on reproducibility, see <a href="https://stackoverflow.com/questions/59075244/if-keras-results-are-not-reproducible-whats-the-best-practice-for-comparing-mo/59075958#59075958">here</a>.</p>
<hr>
<p><strong>Function</strong></p>
<pre class="lang-py prettyprint-override"><code>def reset_seeds(reset_graph_with_backend=None):
if reset_graph_with_backend is not None:
K = reset_graph_with_backend
K.clear_session()
tf.compat.v1.reset_default_graph()
print("KERAS AND TENSORFLOW GRAPHS RESET") # optional
np.random.seed(1)
random.seed(2)
tf.compat.v1.set_random_seed(3)
print("RANDOM SEEDS RESET") # optional
</code></pre>
<hr>
<p><strong>Usage</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow.keras.backend as K
reset_seeds(K)
</code></pre>
|
python|tensorflow|tf.keras
| 2
|
376,138
| 59,612,966
|
Matching data frame rows based on opposite values of two columns?
|
<p>I am trying to perform a calculation on this table of movements between location codes, snippet below:</p>
<pre><code>origin destination age sex moves
E06000019 E06000019 98 m 0
E06000019 E06000019 99 f 0
E06000019 E06000019 99 m 0
E06000019 E06000019 100 f 0
E06000019 E06000019 100 m 0
E06000019 E06000020 0 f 0.3632
E06000019 E06000020 0 m 0.8249
E06000019 E06000020 1 f 1.1931
E06000019 E06000020 1 m 1.192
</code></pre>
<p>The aim is to find the net flow between any two locations, e.g:
(1) match each row to another row, in which the age and sex are the same but the origin/destination are the opposite way around. Then (2) subtract the number of moves in the second row from the number of moves in the first row.</p>
<p>I have tried creating nested loops or defining a function before using apply: </p>
<pre><code>df['col_3'] = df.apply(lambda x: f(x.col_1, x.col_2), axis=1)
</code></pre>
<p>But in both cases I have had difficulty understanding how to create a match for every row.</p>
<p>Anyone have any ideas on how I might approach this? Thank you!</p>
|
<p>You gave a bad example: the first five rows have the same <code>origin</code> and <code>destination</code>, so they will inevitably match themselves.</p>
<p>Having said that, your problem can be solved with a self-join:</p>
<pre><code>df.merge(df, how='left', suffixes=('', '_'),
left_on=['origin', 'destination', 'age', 'sex'],
right_on=['destination', 'origin', 'age', 'sex']) \
.assign(delta=lambda x: x['moves_'] - x['moves'])
</code></pre>
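<p>Note that with <code>how='left'</code>, rows without a reverse-direction counterpart get <code>NaN</code> for <code>moves_</code> and hence for <code>delta</code>; if a missing reverse flow should count as zero, fill it before subtracting:</p>
<pre><code>net = df.merge(df, how='left', suffixes=('', '_'),
               left_on=['origin', 'destination', 'age', 'sex'],
               right_on=['destination', 'origin', 'age', 'sex'])
net['delta'] = net['moves_'].fillna(0) - net['moves']
</code></pre>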
|
python|pandas
| 0
|
376,139
| 59,597,242
|
Pandas: Replace all values in column with column maximum
|
<p>I have the following dataframe</p>
<pre><code> NDQ CFRI NFFV [more columns....]
2002-01-24 92.11310000 57.78140000 90.95720000
2002-01-25 57.97080000 91.05430000 58.19820000
</code></pre>
<p>I want to set all values in a column equal to the maximum value of the respective column.</p>
<p>Desired output:</p>
<pre><code> NDQ CFRI NFFV [more columns....]
2002-01-24 92.11310000 91.05430000 90.95720000
2002-01-25 92.11310000 91.05430000 90.95720000
</code></pre>
<p>I have attempted to map it out to the result of <code>df.max()</code>, but struggled with the implementation and also feel like there would be an easier solution available.</p>
<p>Any help would be greatly appreciated.</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.clip.html" rel="noreferrer"><code>df.clip</code></a> and set the <code>lower</code> param to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.max.html" rel="noreferrer"><code>df.max</code></a>:</p>
<p>From Docs:</p>
<blockquote>
<p>lower : float or array_like, default None
Minimum threshold value. All values below this threshold will be set to it.</p>
</blockquote>
<pre><code>df.clip(df.max(),axis=1)
</code></pre>
<hr>
<pre><code> NDQ CFRI NFFV
2002-01-24 92.1131 91.0543 90.9572
2002-01-25 92.1131 91.0543 90.9572
</code></pre>
|
python|pandas|dataframe
| 7
|
376,140
| 59,529,080
|
Pandas - Extract unique column combinations and count them in another table
|
<p><strong>TASK 1:</strong></p>
<p>I have table like this:</p>
<pre><code>+----------+------------+----------+------------+----------+------------+-------+
| a_name_0 | id_qname_0 | a_name_1 | id_qname_1 | a_name_2 | id_qname_2 | count |
+----------+------------+----------+------------+----------+------------+-------+
| country | 1 | NAN | NAN | NAN | NAN | 100 |
+----------+------------+----------+------------+----------+------------+-------+
| region | 2 | city | 8 | NAN | NAN | 20 |
+----------+------------+----------+------------+----------+------------+-------+
| region | 2 | city | 9 | NAN | NAN | 80 |
+----------+------------+----------+------------+----------+------------+-------+
| region | 3 | age | 4 | sex | 6 | 40 |
+----------+------------+----------+------------+----------+------------+-------+
| region | 3 | age | 5 | sex | 7 | 60 |
+----------+------------+----------+------------+----------+------------+-------+
</code></pre>
<p>I need to turn each row into a Series, drop NaNs, and convert each Series into a dictionary of variable size; for example, the first three dicts will look like this:</p>
<pre><code>{'a_name_0':'country','id_qname_0':1}
{'a_name_0':'region','id_qname_0':2, 'a_name_1':'city','id_qname_1':8}
{'a_name_0':'region','id_qname_0':2, 'a_name_1':'city','id_qname_1':9}
</code></pre>
<p>Each dictionary after that should be stored in a list.</p>
<p><strong>TASK 2.</strong></p>
<p>Using the table below, I have to count the appearances of the columns from the dicts of the previous step:</p>
<pre><code>+----------+------------+----------+------------+----------+
| id | country | city | age | sex |
+----------+------------+----------+------------+----------+
| 1 | 1 | NAN | NAN | NAN |
+----------+------------+----------+------------+----------+
| 2 | 1 | 8 | NAN | NAN |
+----------+------------+----------+------------+----------+
</code></pre>
<p>If there is some faster mapping solution please advise since what I'm about to do is probably going to be quite messy.
<a href="https://stackoverflow.com/questions/51262355/identify-unique-combinations-of-values-in-columns-sum-another-column-and-count">This</a> answer doesn't help me since I need iterator for extracting parameters as well as counting their appearance. </p>
|
<p>You can remove <code>count</code> column and convert all rows to list of dicts by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer"><code>DataFrame.to_dict</code></a> with <code>orient='r'</code> (<code>records</code>) and then filter out dicts with missing values in dictionary comprehension:</p>
<pre><code>L = [{k:v for k, v in x.items() if pd.notna(v)} for x in df.drop('count', 1).to_dict('r')]
print (L)
[{'a_name_0': 'country', 'id_qname_0': 1},
{'a_name_0': 'region', 'id_qname_0': 2, 'a_name_1': 'city', 'id_qname_1': 8.0},
{'a_name_0': 'region', 'id_qname_0': 2, 'a_name_1': 'city', 'id_qname_1': 9.0},
{'a_name_0': 'region', 'id_qname_0': 3, 'a_name_1': 'age',
'id_qname_1': 4.0, 'a_name_2': 'sex', 'id_qname_2': 6.0},
{'a_name_0': 'region', 'id_qname_0': 3, 'a_name_1': 'age',
'id_qname_1': 5.0, 'a_name_2': 'sex', 'id_qname_2': 7.0}]
</code></pre>
<p>Not 100% sure for second DataFrame:</p>
<pre><code>L1 = [dict(zip(list(x.values())[::2], list(x.values())[1::2])) for x in L]
df = pd.DataFrame(L1)
print (df)
country region city age sex
0 1.0 NaN NaN NaN NaN
1 NaN 2.0 8.0 NaN NaN
2 NaN 2.0 9.0 NaN NaN
3 NaN 3.0 NaN 4.0 6.0
4 NaN 3.0 NaN 5.0 7.0
</code></pre>
|
python|pandas|dictionary
| 3
|
376,141
| 59,714,963
|
How to reshape my data as per model requirement?
|
<p>I have training features <code>train_x</code> and target labels <code>train_y</code>, but the major problem is that while fitting the model it shows an error like:</p>
<p>Error when checking input: expected dense_12_input to have shape (8,) but got array with shape (13923,)</p>
<p>The training data (<code>train_x</code>) shape is</p>
<pre><code>d=np.array(train_x)
d.shape
</code></pre>
<p>output is</p>
<pre><code>(6995, 13923)
</code></pre>
<p>The target data (<code>train_y</code>) shape is</p>
<pre><code>f = np.array(train_y)
f.shape
</code></pre>
<p>Output is</p>
<pre><code>(6995, 8)
</code></pre>
<p>How can the data above be converted so that it fits the model?</p>
<h1>Fitting the data to the training dataset</h1>
<pre><code> classifier.fit(np.array(train_x),np.array(train_y), batch_size=10, epochs=2)
</code></pre>
<p>How should it be converted to match the model's expected input?</p>
|
<pre><code># The first Dense layer's input_dim must match the number of input features (13923),
# and the final layer's units must match the 8 output targets.
classifier.add(Dense(3923, activation='relu', kernel_initializer='random_normal', input_dim=13923))
classifier.add(Dense(923, activation='relu', kernel_initializer='random_normal'))
classifier.add(Dense(23, activation='relu', kernel_initializer='random_normal'))
classifier.add(Dense(8, activation='sigmoid', kernel_initializer='random_normal'))
</code></pre>
|
python|tensorflow|machine-learning
| 0
|
376,142
| 59,481,741
|
Generating a random integer in a range while excluding a number in the given range
|
<p>I was going through the <a href="https://github.com/google-research/bert" rel="nofollow noreferrer">BERT</a> repo and found the following piece of code:</p>
<pre class="lang-py prettyprint-override"><code>for _ in range(10):
random_document_index = rng.randint(0, len(all_documents) - 1)
if random_document_index != document_index:
break
</code></pre>
<p>The idea here is to generate a random integer on <code>[0, len(all_documents)-1]</code> that cannot equal <code>document_index</code>. Because <code>len(all_documents)</code> is supposed to be a very large number, the first iteration is almost guaranteed to produce a valid randint, but just to be safe, they try it for 10 iterations. I can't help but think there has to be a better way to do this.</p>
<p>I found this <a href="https://stackoverflow.com/questions/34182699/random-integer-in-a-certain-range-excluding-one-number">answer</a> which is easy enough to implement in python:</p>
<pre class="lang-py prettyprint-override"><code>random_document_index = rng.randint(0, len(all_documents) - 2)
random_document_index += 1 if random_document_index >= document_index else 0
</code></pre>
<p>I was just wondering if there's a better way to achieve this in python using the in-built functions (or even with <code>numpy</code>), or if this is the best you can do. </p>
|
<p>Had <code>len(all_documents)</code> been small, a pretty solution would be to realize all valid numbers (e.g. in a <code>list</code>) and use <code>random.choice()</code>. Since your <code>len(all_documents)</code> is supposedly large, this solution will waste a lot of memory.</p>
<p>A more memory efficient solution is to stick with the original strategy. It's really very reasonable for large <code>len(all_documents)</code> where a single iteration is very likely to be enough, though the hard-coded <code>10</code> is ugly. A pretty one-line solution would be to make use of the <a href="https://docs.python.org/3/whatsnew/3.8.html#assignment-expressions" rel="nofollow noreferrer">new walrus operator</a> in Python 3.8:</p>
<pre><code>while (random_document_index := rng.randint(0, len(all_documents) - 1)) == document_index: pass
</code></pre>
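<p>If many such draws are needed at once, the shift trick from the linked answer also vectorises nicely. A sketch (assuming a <code>numpy.random.Generator</code> is acceptable in place of <code>rng</code>, and reusing <code>all_documents</code> and <code>document_index</code> from the question):</p>
<pre><code>import numpy as np

gen = np.random.default_rng()
n = len(all_documents)

# draw on [0, n-2], then bump every value >= document_index up by one
idx = gen.integers(0, n - 1, size=1000)
idx[idx >= document_index] += 1
</code></pre>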
|
python|numpy|random
| 2
|
376,143
| 59,803,949
|
Concatenate Multiples CSV files in one dataframe
|
<p>I'm relatively new in python. Here is what I'd like to do. I got a folder with multiple csv files (<code>2018.csv</code>, <code>2017.csv</code>, <code>2016.csv</code>,... etc.), 500 CSV files to be precise. Each CSV file contains header "<code>date</code>", "<code>Code</code>", "<code>Cur</code>", "<code>Price</code>" etc. I'd like to concatenate all 500 CSV files in one datafame. Here is my code for one csv file but it's very slow. I want to do it for all 500 files and concatenate to one dataframe:</p>
<pre><code>DB_2017 = pd.read_csv("C:/folder/2018.dat", sep=",", header=None).iloc[:, [0, 4, 5, 6]]
DB_2017.columns = ["date", "Code", "Cur", "Price"]
DB_2017['Code'] = DB_2017['Code'].map(lambda x: x.lstrip('@').rstrip('@'))
DB_2017['Cur'] = DB_2017['Cur'].map(lambda x: x.lstrip('@').rstrip('@'))
DB_2017['date'] = DB_2017['date'].apply(lambda x: pd.Timestamp(str(x)[:10]))
DB_2017['Price'] = pd.to_numeric(DB_2017['Price'].replace(',', ';'))
</code></pre>
|
<p>You can do the following:</p>
<pre><code>def clean_up(df):
df = df.iloc[:,[0,4,5,6]]
df.columns = ["date","Code","Cur","Price"]
df['Code'] = df['Code'].map(lambda x:x.lstrip('@').rstrip('@'))
df['Cur'] = df['Cur'].map(lambda x:x.lstrip('@').rstrip('@'))
    df['date'] = df['date'].apply(lambda x: pd.Timestamp(str(x)[:10]))
df['Price'] = pd.to_numeric(df['Price'].replace(',',';'), errors='coerce')
return df
from pathlib import Path
file_path = Path("your_files_path/")
df = pd.concat([clean_up(pd.read_csv(i)) for i in file_path.iterdir()])
</code></pre>
<p>In case, your path/folder contains files other than <code>.csv</code>, you can filter csv files using:</p>
<pre><code>df = pd.concat([pd.read_csv(i) for i in file_path.glob('**/*.csv')])
</code></pre>
<p>To read <code>.dat</code> files, I think you can do:</p>
<pre><code>df = pd.concat([pd.read_fwf(i) for i in file_path.glob('**/*.dat')])
</code></pre>
<p>To read <code>.dat</code> files (a small sample), I think you can do:</p>
<pre><code>n = 5
df = pd.concat([pd.read_fwf(i) for i in list(file_path.glob('**/*.dat'))[:n]])
</code></pre>
|
python|pandas
| 2
|
376,144
| 59,805,561
|
Python using curve_fit to fit a logarithmic function
|
<p>I'm trying to fit a log curve using <code>curve_fit</code>, assuming it follows <code>Y=a*ln(X)+b</code>, but the fitted data still looks off.</p>
<p>Right now I'm using the following code: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def Hyp_func(x, a, b):
    return a*np.log(x)+b

X=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4,
4.5, 4.6, 4.7]
Y=[-5.890486683, -3.87063815, -2.733484754, -2.104972457, -1.728190699,
-1.477976987, -1.285589215, -1.120224363, -0.968576581, -0.82492453,
-0.688457731, -0.559780327, -0.440437932, -0.331886009, -0.235162505,
-0.150572236, -0.078157925, -0.01718885]
#plot Y against X
fig = plt.figure(num=None, figsize=(9, 7),facecolor='w', edgecolor='k')
ax2=fig.add_subplot(111)
ax2.scatter(X,Y)
#fit using curve_fit
popt, pcov = curve_fit(Hyp_func, X, Y,maxfev=10000)
print(' fit coefficients:\n', popt)
#fit coefficients:
#[9.51543579 -14.10114674]
#plot Y_estimated against X
Y_estimated=[popt[0]*np.log(i)+popt[1] for i in X]
ax2.scatter(X,Y_estimated, c='r')
</code></pre>
<p><a href="https://i.stack.imgur.com/DHNE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DHNE1.png" alt="enter image description here"></a></p>
<p>The fitted curve (red) still does not look as 'curvy' as the real data (blue). Any help would be appreciated.</p>
|
<p>The X data values sometimes need to be shifted a bit for this equation, and when I tried this it worked rather well. Here is a graphical Python fitter using your data and an X-shifted equation "y = a * ln(x + b)+c".</p>
<p><a href="https://i.stack.imgur.com/KxaDl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KxaDl.png" alt="enter image description here"></a></p>
<pre><code>import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# ignore any "invalid value in log" warnings internal to curve_fit() routine
import warnings
warnings.filterwarnings("ignore")
X=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7]
Y=[-5.890486683, -3.87063815, -2.733484754, -2.104972457, -1.728190699, -1.477976987, -1.285589215, -1.120224363, -0.968576581, -0.82492453, -0.688457731, -0.559780327, -0.440437932, -0.331886009, -0.235162505, -0.150572236, -0.078157925, -0.01718885]
# alias data to match previous example
xData = numpy.array(X, dtype=float)
yData = numpy.array(Y, dtype=float)
def func(x, a, b, c): # x-shifted log
return a*numpy.log(x + b)+c
# these are the same as the scipy defaults
initialParameters = numpy.array([1.0, 1.0, 1.0])
# curve fit the test data
fittedParameters, pcov = curve_fit(func, xData, yData, initialParameters)
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print('Parameters:', fittedParameters)
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
axes = f.add_subplot(111)
# first the raw data as a scatter plot
axes.plot(xData, yData, 'D')
# create data for the fitted equation plot
xModel = numpy.linspace(min(xData), max(xData))
yModel = func(xModel, *fittedParameters)
# now the model as a line plot
axes.plot(xModel, yModel)
axes.set_xlabel('X Data') # X axis data label
axes.set_ylabel('Y Data') # Y axis data label
plt.show()
plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
</code></pre>
|
python|numpy|matplotlib|curve-fitting|logarithm
| 5
|
376,145
| 59,518,785
|
numpy broadcasting to each column of the matrix separately
|
<p>I have two matrices:</p>
<pre><code>a = np.array([[6],[3],[4]])
b = np.array([1,10])
</code></pre>
<p>when I do:</p>
<pre><code>c = a * b
</code></pre>
<p>c looks like this:</p>
<pre><code>[ 6, 60]
[ 3, 30]
[ 4, 40]
</code></pre>
<p>which is good.
Now, let's say I add a column to <code>a</code> (for the sake of the example it's an identical column, but it doesn't have to be):</p>
<pre><code>a = np.array([[6,6],[3,3],[4,4]])
</code></pre>
<p><code>b</code> stays the same.</p>
<p>The result I want is 2 identical copies of <code>c</code> (since the columns are identical), stacked along a new axis:</p>
<pre><code>new_c.shape == (3, 2, 2)
</code></pre>
<p>so that <code>new_c[:,:,0]</code> or <code>new_c[:,:,1]</code> gives the original <code>c</code>.
I tried adding new axes to both <code>a</code> and <code>b</code> using <code>np.expand_dims</code> but it did not help.</p>
|
<p>One way is using <strong><code>numpy.einsum</code></strong>:</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> a = np.array([[6],[3],[4]])
>>> b = np.array([1,10])
>>> print(a * b)
[[ 6 60]
[ 3 30]
[ 4 40]]
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> print(np.einsum('ij, j -> ij', a, b))
[[ 6 60]
[ 3 30]
[ 4 40]]
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> a = np.array([[6,6],[3,3],[4,4]])
>>> print(np.einsum('ij, k -> ikj', a, b)[:, :, 0])
>>> print(np.einsum('ij, k -> ikj', a, b)[:, :, 1])
[[ 6 60]
[ 3 30]
[ 4 40]]
[[ 6 60]
[ 3 30]
[ 4 40]]
</code></pre>
<p>For more usage about <strong><code>numpy.einsum</code></strong>, I recommend:</p>
<p><a href="https://stackoverflow.com/questions/26089893/understanding-numpys-einsum">Understanding NumPy's einsum</a></p>
|
python|numpy|array-broadcasting
| 1
|
376,146
| 59,662,420
|
Pandas read csv skips some lines
|
<p>Following an old <a href="https://stackoverflow.com/questions/59090572/does-pandas-automatically-skip-rows-do-a-size-limit">question</a> of mine, I finally identified what happens.</p>
<p>I have a csv-file which has the separator <code>\t</code>, and I am reading it with the following command:</p>
<pre><code>df = pd.read_csv(r'C:\..\file.csv', sep='\t', encoding='unicode_escape')
</code></pre>
<p>the length for example is: 800.000</p>
<p>The problem is the original file has around 1.400.000 lines, and I also know where the issue occurs: one column (let's say columnA) has the following entry:</p>
<pre><code>"HILFE FüR DIE Alten
</code></pre>
<p>Do you have any idea what is happening? When I delete that row I get the correct number of lines (length), what is python doing here?</p>
|
<p>According to pandas documentation <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html</a></p>
<blockquote>
<p>sep : str, default ‘,’
Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'.</p>
</blockquote>
<p>It may be an issue with the double quote symbol.
Try this instead:</p>
<pre><code>df = pd.read_csv(r'C:\..\file.csv', sep='\\t', encoding='unicode_escape', engine='python')
</code></pre>
<p>or this:</p>
<pre><code>df = pd.read_csv(r'C:\..\file.csv', sep=r'\t', encoding='unicode_escape')
</code></pre>
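<p>If the stray double quote really is the culprit, another option (an untested sketch) is to disable quote handling altogether so the <code>"</code> is treated as an ordinary character instead of opening a quoted field:</p>
<pre><code>import csv
df = pd.read_csv(r'C:\..\file.csv', sep='\t', encoding='unicode_escape', quoting=csv.QUOTE_NONE)
</code></pre>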
|
python|pandas|csv
| 1
|
376,147
| 59,716,875
|
Remove quotation marks when printing data frame
|
<p>I have a data frame that consists of 2 columns (Main, Sub). I iterate through the data frame and print the results.</p>
<p>The second column, however, keeps having quotation marks, which is not ideal. </p>
<p>How do I remove them? I tried using <code>.strip("'")</code>, but nothing changed.</p>
<p><a href="https://i.stack.imgur.com/xONEB.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xONEB.gif" alt="head of data frame"></a></p>
<p><a href="https://i.stack.imgur.com/tenPs.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tenPs.gif" alt="my code"></a></p>
<p>Thanks in advance for any assistance.</p>
|
<p>Use the pandas string accessor (<code>.str</code>) so that <code>strip</code> is applied to the text values:</p>
<pre class="lang-py prettyprint-override"><code>row[1].str.strip("'")
</code></pre>
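<p>Alternatively, the quotes can be stripped from the whole column once before iterating — a sketch assuming the second column is called <code>Sub</code> as in the question:</p>
<pre class="lang-py prettyprint-override"><code>df['Sub'] = df['Sub'].str.strip("'")
</code></pre>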
|
python|pandas|dataframe
| 0
|
376,148
| 59,870,915
|
Pandas : reshaping dataframe
|
<p>I have a dataframe which is currently looks like this,</p>
<p><a href="https://i.stack.imgur.com/aZfFe.png" rel="nofollow noreferrer">dataframe 1</a></p>
<p>I need to create a dataframe that looks like this.</p>
<p><a href="https://i.stack.imgur.com/vcUiN.png" rel="nofollow noreferrer">dataframe 2</a></p>
<p>I need to populate the columns of dataframe 2 from the values from dataframe 1 columns. Image shows the example.
What should be the algorithm and process for this? </p>
<p>Here is the sample dataset </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>df_dict =
{'hostname': {0: 'Comp890263', 1: 'Comp813682', 2: 'Comp213302', 3: 'Comp839013', 4: 'Comp966241'},
'days': {0: 90, 1: 90, 2: 90, 3: 90, 4: 90},
'status': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'features':
{0: '0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0',
1: '0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0',
2: '0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0',
3: '0,0,0,0,0,0,0,21,0,25,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,21,0,0,0,0,0,0,0,0,0,6,46,0,0,0,0,759,0,0,0,0',
4: '0,0,0,0,0,0,0,43,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,43,0,0,0,0,0,0,0,0,0,30,46,0,0,0,0,795,0,0,0,0'}}</code></pre>
|
<p>There is a similar question here :<a href="https://stackoverflow.com/questions/26255671/pandas-column-values-to-columns">Pandas column values to columns?</a>. See if their solutions with <code>.pivot_table</code> or <code>unstack</code> work for you.</p>
|
python|pandas|dataframe
| 0
|
376,149
| 59,616,757
|
Combine index header row and column header row in Pandas
|
<p>I create a dataframe and export to an html table. However the headers are off as below</p>
<p><strong>How can I combine the index name row, and the column name row?</strong></p>
<p>I want the table header to look like this:</p>
<p><a href="https://i.stack.imgur.com/sQdnl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQdnl.png" alt="<table><th>Name</th></table>"></a></p>
<p>but it currently exports to html like this:</p>
<p><a href="https://i.stack.imgur.com/sGFJz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGFJz.png" alt="enter image description here"></a></p>
<p>I create the dataframe as below (example):</p>
<pre><code>data = [{'Name': 'A', 'status': 'ok', 'host': '1', 'time1': '2020-01-06 06:31:06', 'time2': '2020-02-06 21:10:00'}, {'Name': 'A', 'status': 'ok', 'host': '2', 'time1': '2020-01-06 06:31:06', 'time2': '-'}, {'Name': 'B', 'status': 'Alert', 'host': '1', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'}, {'Name': 'B', 'status': 'ok', 'host': '2', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},{'Name': 'B', 'status': 'ok', 'host': '4', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},{'Name': 'C', 'status': 'Alert', 'host': '2', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},{'Name': 'C', 'status': 'ok', 'host': '3', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},{'Name': 'C', 'status': 'ok', 'host': '4', 'time1': '-', 'time2': '-'}]
df = pandas.DataFrame(data)
df.set_index(['Name', 'status', 'host'], inplace=True)
html_body = df.to_html(bold_rows=False)
</code></pre>
<p>The index is set to have hierarchical rows, for easier reading in an html table:</p>
<pre><code>print(df)
time1 time2
Name status host
A ok 1 2020-01-06 06:31:06 2020-02-06 21:10:00
2 2020-01-06 06:31:06 -
B Alert 1 2020-01-06 10:31:06 2020-02-06 21:10:00
ok 2 2020-01-06 10:31:06 2020-02-06 21:10:00
4 2020-01-06 10:31:06 2020-02-06 21:10:00
C Alert 2 2020-01-06 10:31:06 2020-02-06 21:10:00
ok 3 2020-01-06 10:31:06 2020-02-06 21:10:00
4 - -
</code></pre>
<p>The only solution that I've got working is to set every column to index.
This doesn't seem practical though, and it leaves an empty row that must be manually removed:</p>
<p><a href="https://i.stack.imgur.com/ae76Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ae76Q.png" alt="enter image description here"></a></p>
|
<h3>Setup</h3>
<pre><code>import pandas as pd
from IPython.display import HTML
l0 = ('Foo', 'Bar')
l1 = ('One', 'Two')
ix = pd.MultiIndex.from_product([l0, l1], names=('L0', 'L1'))
df = pd.DataFrame(1, ix, [*'WXYZ'])
HTML(df.to_html())
</code></pre>
<p><a href="https://i.stack.imgur.com/bUDFS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bUDFS.png" alt="enter image description here"></a></p>
<hr>
<h3>BeautifulSoup</h3>
<p>Hack the HTML result from <code>df.to_html(header=False)</code>. Pluck out the empty cells in the table head and drop in the column names.</p>
<pre><code>from bs4 import BeautifulSoup
html_doc = df.to_html(header=False)
soup = BeautifulSoup(html_doc, 'html.parser')
empty_cols = soup.find('thead').find_all(lambda tag: not tag.contents)
for tag, col in zip(empty_cols, df):
tag.string = col
HTML(soup.decode_contents())
</code></pre>
<p><a href="https://i.stack.imgur.com/62eBa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/62eBa.png" alt="enter image description here"></a></p>
|
pandas|python-3.6
| 4
|
376,150
| 59,616,087
|
Getting a strange error when attempting to subtract the minimum of a column from the entire column
|
<p>I'm getting a weird error when I attempt to subtract the min of a column in pandas from the column itself.</p>
<p>My code looks like this:</p>
<pre><code>self.portfolio.number = self.portfolio.number - self.portfolio.number.min()
</code></pre>
<p>The dataframe itself is member data of a class - not sure if this is relevant to the issue (assuming it isn't).</p>
<p>The error is as follows:</p>
<pre><code>ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
</code></pre>
<p>I've seen a couple of similar errors posted on stack overflow. I have checked the column isn't duplicated and it isn't. Any help would be appreciated.</p>
|
<p>Ok I figured it out. Had to change my code to:</p>
<pre><code>self.portfolio.loc[:, 'number'] = self.portfolio['number'].sub(self.portfolio['number'].min())
</code></pre>
<p>This works but I'm not sure why. If someone could enlighten me I would hugely appreciate it.</p>
|
python|pandas
| 0
|
376,151
| 59,618,058
|
Percentage of events before and after a sequence of zeros in pandas rows
|
<p>I have a dataframe like the following:</p>
<pre><code> ID 0 1 2 3 4 5 6 7 8 ... 81 82 83 84 85 86 87 88 89 90 total
-----------------------------------------------------------------------------------------------------
0 A 2 21 0 18 3 0 0 0 2 ... 0 0 0 0 0 0 0 0 0 0 156
1 B 0 20 12 2 0 8 14 23 0 ... 0 0 0 0 0 0 0 0 0 0 231
2 C 0 38 19 3 1 3 3 7 1 ... 0 0 0 0 0 0 0 0 0 0 78
3 D 3 0 0 1 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 5
</code></pre>
<p>and I want to know the % of events (the numbers in the cells) before and after the first sequence of zeros of length n appears in each row. This problem started as another question found here: <a href="https://stackoverflow.com/questions/59581340/length-of-first-sequence-of-zeros-of-given-size-after-certain-column-in-pandas-d/59583773#59583773">Length of first sequence of zeros of given size after certain column in pandas dataframe</a>, and I am trying to modify the code to do what I need, but I keep getting errors and can't seem to find the right way. This is what I have tried:</p>
<pre><code>def func(row, n):
"""Returns the number of events before the
first sequence of 0s of length n is found
"""
idx = np.arange(0, 91)
a = row[idx]
b = (a != 0).cumsum()
c = b[a == 0]
d = c.groupby(c).count()
#in case there is no sequence of 0s with length n
try:
e = c[c >= d.index[d >= n][0]]
f = str(e.index[0])
except IndexError:
e = [90]
f = str(e[0])
idx_sliced = np.arange(0, int(f)+1)
a = row[idx_sliced]
if (int(f) + n > 90):
perc_before = 100
else:
perc_before = a.cumsum().tail(1).values[0]/row['total']
return perc_before
</code></pre>
<p>As is, the error I get is:</p>
<pre><code>---> perc_before = a.cumsum().tail(1).values[0]/row['total']
TypeError: ('must be str, not int', 'occurred at index 0')
</code></pre>
<p>Finally, I would apply this function to a dataframe and return a new column with the % of events before the first sequence of n 0s in each row, like this:</p>
<pre><code> ID 0 1 2 3 4 5 6 7 8 ... 81 82 83 84 85 86 87 88 89 90 total %_before
---------------------------------------------------------------------------------------------------------------
0 A 2 21 0 18 3 0 0 0 2 ... 0 0 0 0 0 0 0 0 0 0 156 43
1 B 0 20 12 2 0 8 14 23 0 ... 0 0 0 0 0 0 0 0 0 0 231 21
2 C 0 38 19 3 1 3 3 7 1 ... 0 0 0 0 0 0 0 0 0 0 78 90
3 D 3 0 0 1 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 5 100
</code></pre>
<p>If trying to solve this, you can test by using this sample input:</p>
<pre><code>a = pd.Series([1,1,13,0,0,0,4,0,0,0,0,0,12,1,1])
b = pd.Series([1,1,13,0,0,0,4,12,1,12,3,0,0,5,1])
c = pd.Series([1,1,13,0,0,0,4,12,2,0,5,0,5,1,1])
d = pd.Series([1,1,13,0,0,0,4,12,1,12,4,50,0,0,1])
e = pd.Series([1,1,13,0,0,0,4,12,0,0,0,54,0,1,1])
df = pd.DataFrame({'0':a, '1':b, '2':c, '3':d, '4':e})
df = df.transpose()
</code></pre>
|
<p>Give this a try:</p>
<pre><code>def percent_before(row, n, ncols):
"""Return the percentage of activities happen before
the first sequence of at least `n` consecutive 0s
"""
start_index, i, size = 0, 0, 0
for i in range(ncols):
if row[i] == 0:
# increase the size of the island
size += 1
elif size >= n:
# found the island we want
break
else:
# start a new island
# row[start_index] is always non-zero
start_index = i
size = 0
if size < n:
# didn't find the island we want
return 1
else:
# get the sum of activities that happen
# before the island
idx = np.arange(0, start_index + 1).astype(str)
return row.loc[idx].sum() / row['total']
df['percent_before'] = df.apply(percent_before, n=3, ncols=15, axis=1)
</code></pre>
<p>Result:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 total percent_before
0 1 1 13 0 0 0 4 0 0 0 0 0 12 1 1 33 0.454545
1 1 1 13 0 0 0 4 12 1 12 3 0 0 5 1 53 0.283019
2 1 1 13 0 0 0 4 12 2 0 5 0 5 1 1 45 0.333333
3 1 1 13 0 0 0 4 12 1 12 4 50 0 0 1 99 0.151515
4 1 1 13 0 0 0 4 12 0 0 0 54 0 1 1 87 0.172414
</code></pre>
<p>For the full frame, call <code>apply</code> with <code>ncols=91</code>.</p>
|
python|pandas|cumsum
| 1
|
376,152
| 59,636,048
|
How to create matrix from same column with relation between previous element in Pandas?
|
<p>I have a dataframe like this,</p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> data = {
'user_id': [1, 1, 1, 2, 2, 3, 3, 4, 4, 4],
'movie_id': [0, 1, 2, 0, 1, 2, 3, 2, 3, 4]
}
>>> df = pd.DataFrame(data)
>>> df
user_id movie_id
0 1 0
1 1 1
2 1 2
3 2 0
4 2 1
5 3 2
6 3 3
7 4 2
8 4 3
9 4 4
</code></pre>
<p>I wonder how many people liked the second movie after they liked the first movie. Or liked the third movie after you liked the second movie. Etc. Here is my expected output,</p>
<pre><code>[[0., 2., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 2., 0.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0.]]
</code></pre>
<p>For instance, <code>movie_id=1</code> liked two times after they liked <code>movie_id=0</code>, so <code>matrix[0][1]=2</code> and <code>matrix[1][0]=2</code>. OK, how I found this result? <code>user_id=1</code> liked <code>movie_id=0</code>, <code>movie_id=1</code> and <code>movie_id=2</code> by respectively. Also, <code>user_id=2</code> liked <code>movie_id=0</code> and <code>movie_id=1</code> by respectively. So, <code>matrix[0][1]=2</code></p>
<p>I tried this, but it returns incorrect output and is very slow on a big dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
item = dict()
def cross(a):
for i in a:
for j in a:
if i == j:
continue
if (i, j) in item.keys():
item[(i, j)] += 1
else:
item[(i, j)] = 1
_ = df.groupby('user_id')['movie_id'].apply(cross)
length = df['movie_id'].nunique()
res = np.zeros([length, length])
for k, v in item.items():
res[k] = v
</code></pre>
<p>Any idea? Thanks in advance.</p>
|
<p>You can do the following:</p>
<pre><code># add row_numbers as a column
df.reset_index(inplace=True)
# merge df on itself
df2 = df.merge(df, how='inner', on='user_id')
# remove some entries, keep only pairs where movie_id_x was liked before movie_id_y
df2 = df2[df2['index_x']<df2['index_y']].drop(['index_x','index_y'], axis=1)
# use pivot table to make matrix
df3 = df2.pivot_table(index='movie_id_x',columns='movie_id_y', values='user_id', aggfunc='count')
# UPD: add empty rows for movies which were removed
ids = df['movie_id'].unique()
df3 = df3.reindex(ids)
df3 = df3.reindex(ids, axis=1)
df3 = df3.fillna(0)
# convert result from dataframe to array if necessary
res = np.array(df3)
</code></pre>
<p>Result:</p>
<pre><code>print(res)
[[0 2 1 0 0]
[0 0 1 0 0]
[0 0 0 2 1]
[0 0 0 0 1]
[0 0 0 0 0]]
</code></pre>
<h2>Faster version (with sparse matrix)</h2>
<p>The idea is that your matrix is actually sparse and it takes a lot of memory to store it in dense form (especially as a pandas dataframe), so it is reasonable to store it as a sparse matrix.
The approach was found <a href="https://stackoverflow.com/questions/31661604/efficiently-create-sparse-pivot-tables-in-pandas">here</a>.</p>
<pre><code>from pandas.api.types import CategoricalDtype
from scipy.sparse import csr_matrix

# add row_numbers as a column
df.reset_index(inplace=True)
# merge df on itself
df2 = df.merge(df, how='inner', on='user_id')
# remove some entries, keep only pairs where movie_id_x was liked before movie_id_y
df2 = df2[df2['index_x']<df2['index_y']].drop(['index_x','index_y'], axis=1)
# use groupby to count movie pairs
df2 = df2.groupby(['movie_id_x','movie_id_y'])['user_id'].count().reset_index()
# create pivot as sparse matrix
movies_t = CategoricalDtype(sorted(df['movie_id'].unique()), ordered=True)
row = df2['movie_id_x'].astype(movies_t).cat.codes
col = df2['movie_id_y'].astype(movies_t).cat.codes
sparse_matrix = csr_matrix((df2["user_id"], (row, col)), \
shape=(movies_t.categories.size, movies_t.categories.size))
# convert sparse to dense if needed
res = sparse_matrix.todense()
</code></pre>
|
python|pandas|dataframe
| 2
|
376,153
| 59,654,229
|
Remove all characters before a pipe and also remove pipe using regex in python
|
<p>Hi I'm at work and I'm working in pandas and trying to remove all characters before this pipe in this csv file. Also replacing semi colons with a pipe would also be very helpful.</p>
<pre><code>Size| Medium; Large; Xlarge; 2Xlarge; 3Xlarge; 4Xlarge; 5xXlarge;
Size| Medium; Large; Xlarge; 2Xlarge; 3Xlarge; 4Xlarge; 5xlarge;
Sizes| Small - ( only one mic tab); Medium; Large; Xlarge; 2Xlarge; 3Xlarge; 4Xlarge; 5Xlarge;
Sizes| Small - ( only one mic tab); Medium; Large; Xlarge; 2Xlarge; 3Xlarge; 4Xlarge; 5Xlarge;
</code></pre>
<p>Here's what I've been trying but am having trouble with escaping the pipe.</p>
<pre><code>df['Variations'] = df['Variations'].replace(regex=r'/\|$', value='')
</code></pre>
<p>I need to get this</p>
<pre><code>Medium|Large|Xlarge|2Xlarge|3Xlarge|4Xlarge|5xXlarge
Medium|Large|Xlarge|2Xlarge|3Xlarge|4Xlarge|5xlarge
</code></pre>
|
<p>You may use</p>
<pre><code>df['Variations'] = df['Variations'].str.replace(r'^[^|]*\|\s*|;\s*$', '').str.replace(r'\s*;\s*', '|')
</code></pre>
<p>The <code>.replace(r'^[^|]*\|\s*|;\s*$', '')</code> will remove all substrings from start of string till the first <code>|</code> including it and any subsequent whitespace chars and final <code>;</code> (with any 0+ whitespace at the end) and <code>.replace('\s*;\s*', '|')</code> will replace all <code>;</code> with any whitespaces around the semi-colon with a pipe char.</p>
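<p>A quick check on one of the sample rows (a sketch; on recent pandas versions <code>str.replace</code> needs <code>regex=True</code> for patterns):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Variations': ['Size| Medium; Large; Xlarge; 2Xlarge;']})
df['Variations'] = (df['Variations']
                    .str.replace(r'^[^|]*\|\s*|;\s*$', '', regex=True)
                    .str.replace(r'\s*;\s*', '|', regex=True))
print(df['Variations'].iloc[0])  # Medium|Large|Xlarge|2Xlarge
</code></pre>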
|
python|regex|pandas|dataframe|e-commerce
| 1
|
376,154
| 32,566,320
|
Dealing with missing data in Pandas and Numpy
|
<p>I have the following data sample. I would like to </p>
<ul>
<li>a) in column C, <strong>replace the <code>np.NaN with 999</code></strong>, </li>
<li>b) in column D, <strong>replace '' with <code>np.NaN</code></strong>.</li>
</ul>
<p>Neither of my attempts is working, and I am not sure why.</p>
<pre><code>import pandas
from pandas import DataFrame
import numpy as np
df = DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
'bar', 'bar', 'bar', 'bar'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : [1, np.NaN, 1, 2, np.NaN, 1, 1, 2], 'D' : [2, '', 1, 1, '', 2, 2, 1]})
print df
df.C.fillna(999)
df.D.replace('', np.NaN)
print df
Output:
A B C D
0 foo one 1 2
1 foo one NaN
2 foo two 1 1
3 foo three 2 1
4 bar two NaN
5 bar two 1 2
6 bar one 1 2
7 bar three 2 1
A B C D
0 foo one 1 2
1 foo one NaN
2 foo two 1 1
3 foo three 2 1
4 bar two NaN
5 bar two 1 2
6 bar one 1 2
7 bar three 2 1
</code></pre>
|
<p>Those operations return a copy of the data (most of the pandas ops behave the same), they don't operate in place unless you explicitly say so (the default is <code>inplace=False</code>), see <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html#pandas.Series.fillna" rel="nofollow"><code>fillna</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html#pandas.Series.replace" rel="nofollow"><code>replace</code></a>:</p>
<pre><code>df.C.fillna(999, inplace=True)
df.D.replace('', np.NaN, inplace=True)
</code></pre>
<p>or assign back:</p>
<pre><code>df['C'] = df.C.fillna(999)
df['D'] = df.D.replace('', np.NaN)
</code></pre>
<p>Also I strongly suggest you access your columns using subscript operator <code>[]</code> rather than as an attribute using dot operator <code>.</code> to avoid ambiguous behaviour</p>
<pre><code>In [60]:
df = pd.DataFrame({'A' : ['foo', 'foo', 'foo', 'foo',
'bar', 'bar', 'bar', 'bar'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : [1, np.NaN, 1, 2, np.NaN, 1, 1, 2], 'D' : [2, '', 1, 1, '', 2, 2, 1]})
df.C.fillna(999, inplace =True)
df.D.replace('', np.NaN, inplace=True)
df
Out[60]:
A B C D
0 foo one 1 2
1 foo one 999 NaN
2 foo two 1 1
3 foo three 2 1
4 bar two 999 NaN
5 bar two 1 2
6 bar one 1 2
7 bar three 2 1
</code></pre>
|
python-2.7|numpy|pandas|missing-data
| 3
|
376,155
| 32,203,293
|
numpy.correlate vs numpy documentation - is there a contradiction here ? Why is the resulting list reversed ?
|
<p>I get the following result using numpy's correlate function:</p>
<pre><code>In [153]: np.correlate([1],np.arange(100))
Out[153]:
array([99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85, 84, 83,
82, 81, 80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66,
65, 64, 63, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49,
48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32,
31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15,
14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
In [154]:
</code></pre>
<p>This result seems to contradict page 90 of the numpy <a href="http://web.mit.edu/dvp/Public/numpybook.pdf" rel="nofollow noreferrer">book</a>:</p>
<p><a href="https://i.stack.imgur.com/ZspkE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZspkE.png" alt="enter image description here"></a></p>
<p>Based on the formula above I would have expected an increasing array 0..99, however, the result is a decreasing array 99..0.</p>
<p>Can someone explain what is going on here ? </p>
<p>Why does the implementation contradicts the specification ? </p>
<p>Why does it make sense to reverse the list ?</p>
|
<p>Looks like you are expecting the <code>old_behavior</code> of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html" rel="nofollow noreferrer"><code>numpy.correlate</code></a>. The book you link to is very old (2006), so it looks like <code>numpy.correlate</code> has changed since it was written (it actually changed in <a href="https://github.com/numpy/numpy/blob/master/doc/release/1.4.0-notes.rst#deprecations" rel="nofollow noreferrer"><code>numpy v1.4</code></a>). From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html" rel="nofollow noreferrer">docs for <code>numpy v1.9</code></a>:</p>
<blockquote>
<p>old_behavior : bool</p>
<p>If True, uses the old behavior from Numeric, (correlate(a,v) == correlate(v,a), and the conjugate is not taken for complex arrays). If False, uses the conventional signal processing definition.</p>
</blockquote>
<pre><code>In [2]: np.correlate([1],np.arange(100))
Out[2]:
array([99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85, 84, 83,
82, 81, 80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66,
65, 64, 63, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49,
48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32,
31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15,
14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
In [3]: np.correlate([1],np.arange(100),old_behavior=True)
Out[3]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84,
85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99])
In [4]: np.correlate(np.arange(100),[1])
Out[4]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84,
85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99])
</code></pre>
<p><strong>EDIT</strong></p>
<p>On further inspection, I think the difference is due to this line in the old definition:</p>
<blockquote>
<p><code>K=len(x)-1</code> and <code>M=len(y)-1</code>, and we assume <code>K ≥ M</code> (without loss of generality because we can interchange the roles of <code>x</code> and <code>y</code> without effect).</p>
</blockquote>
<p>So, I believe for your case, in the old definition, it is making <code>y=[1]</code> and <code>x=np.arange(100)</code>, because <code>len(x)</code> must be greater than <code>len(y)</code>. The new definition does not do that, instead <a href="https://stackoverflow.com/a/12254067/588071">"input arrays are never swapped"</a>, so <code>x=[1]</code> and <code>y=np.arange(100)</code>. Thus, the differences.</p>
|
python|numpy|scipy|correlation
| 6
|
376,156
| 32,447,900
|
Reading and writing CSV files into a data structure suitable for Excel-style column/row manipulations
|
<p>So I am currently working on a web-application with a few other people for a client, and we've hit a stumbling block. Basically we need to be able to upload a CSV file in a specific layout - and the application will take that CSV file and based on specific columns and their values, it will perform the algorithm and calculations required.</p>
<p>The output would also be a downloadable CSV file. None of us have had experience working with CSV in Python. </p>
<p>The layout of the CSV file is as follows:
ID, Name, Address, Suburb, Postcode, Email, Phone</p>
<p>I need to take the address fields and use that in a calculation to determine how to get to the destination from their specific address. I would also need to print the specific details related to that person as well. </p>
<p><strong>EDIT</strong>
Okay so basically, the CSV file will contain details about employees and their relevant personal information. What our application does is take that information and, based on the employee's address, predict the most optimised route for them to get to the destination.
Basically, how the hell do I read CSV files and then write an algorithm that works on certain columns/rows to perform the calculations I need?</p>
|
<p>Reading a <code>.csv</code> is easy with the <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">csv</a> standard library module.</p>
<p>A more efficient library that allows for better manipulation of <code>.csv</code> files is <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>, you should consider playing around with this one first.</p>
<p>For instance, given a csv file:</p>
<pre><code>csv_data = r"""col1,col2,col3,col4
bar,20150301,homer,53
foo,20150502,bart,102
barfoo,20150201,lisa,13
foobar,20150501,marge,97"""
</code></pre>
<p>We can operate on it with the <code>csv</code> module:</p>
<pre><code>import csv  # built-in, no need to install
from StringIO import StringIO  # Python 2; on Python 3 use io.StringIO instead

f = StringIO(csv_data)  # a StringIO object is already file-like, no open() needed
reader = csv.reader(f)
for row in reader:
    pass  # do whatever you need with each row (a list of field strings)
</code></pre>
<p>And, similarly, with pandas:</p>
<pre><code>import pandas as pnd # external, installation required
# returns a dataframe, specify cols, index et cetera
df = pnd.read_csv(StringIO(csv_data),
header=0,
index_col=["col1", "col3"],
usecols=["col1", "col2", "col3"],
parse_dates=["col2"])
# do dirty things with it.
</code></pre>
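<p>From there, working on specific columns row by row is straightforward. A sketch assuming the header layout from the question (ID, Name, Address, Suburb, Postcode, Email, Phone) and a hypothetical <code>compute_route</code> function standing in for your own routing algorithm:</p>
<pre><code>df = pnd.read_csv("employees.csv")  # hypothetical file with the layout above
routes = []
for _, row in df.iterrows():
    routes.append(compute_route(row["Address"], row["Suburb"], row["Postcode"]))
df["Route"] = routes
df.to_csv("routes.csv", index=False)  # downloadable output file
</code></pre>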
|
php|python|excel|csv|pandas
| 0
|
376,157
| 32,298,047
|
Efficient storage of large string column in pandas dataframe
|
<p>I have a large pandas dataframe with a string column that is highly skewed in size of the strings. Most of the rows have string of length < 20, but there are some rows whose string lengths are more than 2000.</p>
<p>I store this dataframe on disk using pandas.HDFStorage.append and set min_itemsize = 4000. However, this approach is highly inefficient, as the hdf5 file is very large in size, and we know that most of it are empty.</p>
<p>Is it possible to assign different sizes for the rows of this string column? That is, assign small min_itemsize to rows whose string is short, and assign large min_itemsize to rows whose string is long.</p>
|
<p>When using <code>HDFStore</code> to store strings, the maximum length of a string in the column is the width for that particular columns, this can be customized, see <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#string-columns" rel="nofollow">here</a>.</p>
<p>Several options are available to handle various cases. Compression can help a lot.</p>
<pre><code>In [6]: df = DataFrame({'A' : ['too']*10000})
In [7]: df.iloc[-1] = 'A'*4000
In [8]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000 entries, 0 to 9999
Data columns (total 1 columns):
A 10000 non-null object
dtypes: object(1)
memory usage: 156.2+ KB
</code></pre>
<p>These are fixed stores, the strings are stored as <code>object</code> types, so its not particularly performant; nor are these stores available for query / appends.</p>
<pre><code>In [9]: df.to_hdf('test_no_compression_fixed.h5','df',mode='w',format='fixed')
In [10]: df.to_hdf('test_no_compression_table.h5','df',mode='w',format='table')
</code></pre>
<p>Table stores are quite flexible, but force a fixed size on the storage.</p>
<pre><code>In [11]: df.to_hdf('test_compression_fixed.h5','df',mode='w',format='fixed',complib='blosc')
In [12]: df.to_hdf('test_compression_table.h5','df',mode='w',format='table',complib='blosc')
</code></pre>
<p>Generally using a categorical representation provides run-time and storage efficiency.</p>
<pre><code>In [13]: df['A'] = df['A'].astype('category')
In [14]: df.to_hdf('test_categorical_table.h5','df',mode='w',format='table')
In [15]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000 entries, 0 to 9999
Data columns (total 1 columns):
A 10000 non-null category
dtypes: category(1)
memory usage: 87.9 KB
In [18]: ls -ltr *.h5
-rw-rw-r-- 1162080 Aug 31 06:36 test_no_compression_fixed.h5
-rw-rw-r-- 1088361 Aug 31 06:39 test_compression_fixed.h5
-rw-rw-r-- 40179679 Aug 31 06:36 test_no_compression_table.h5
-rw-rw-r-- 259058 Aug 31 06:39 test_compression_table.h5
-rw-rw-r-- 339281 Aug 31 06:37 test_categorical_table.h5
</code></pre>
|
python-2.7|pandas|hdf5
| 6
|
376,158
| 32,393,217
|
numpy, return of array of indices in shape of
|
<p>I want to get the result of a list (or array) of indices from a numpy array, in the shape: ( len(indices), (shape of one indexing operation) ).</p>
<p>Is there any way to use a list of indices directly, without using a for loop, like I used in the minimal example shown below?</p>
<pre><code>c = np.random.randint(0, 5, size=(4, 5))
indices = [[0, slice(0, 4)], [1, slice(0, 4)], [1, slice(0, 4)], [2, slice(0, 4)]]
# desired result using a for loop
res = []
for idx in indices:
res.append(c[idx])
</code></pre>
<p>It should be noted, that the indices list is not representative of my problem, it serves as an example, in general it is generated during runtime.
However, each index operation returns the same shape</p>
|
<p>Your example can be rewritten as a list comprehension:</p>
<pre><code>In [121]: [c[idx] for idx in indices]
Out[121]:
[array([4, 2, 1, 2]),
array([3, 2, 2, 3]),
array([3, 2, 2, 3]),
array([0, 3, 4, 4])]
</code></pre>
<p>which can be turned into a nice 2d array:</p>
<pre><code>In [122]: np.array([c[idx] for idx in indices])
Out[122]:
array([[4, 2, 1, 2],
[3, 2, 2, 3],
[3, 2, 2, 3],
[0, 3, 4, 4]])
</code></pre>
<p>Here <code>np.array()</code> is a form of concatenation, joining the arrays along a new axis.</p>
<p>Since the 2nd index is the same for all rows (<code>slice(4)</code>), this indexing also works:</p>
<pre><code>In [123]: c[[0,1,1,2],slice(4)] # or [...,:4]
Out[123]:
array([[4, 2, 1, 2],
[3, 2, 2, 3],
[3, 2, 2, 3],
[0, 3, 4, 4]])
</code></pre>
<p>Repetition on the 1st axis is not a problem. Differing slices in the 2nd take some more manipulation. Except for this special <code>:4</code> case, you will have to turn the slices in to ranges. There's no way of indexing one dimension with multiple slices.</p>
<hr>
<p>The case where the slices all have same length, but different 'start' values, is similar to the one discussed in <a href="https://stackoverflow.com/a/28007256/901925">https://stackoverflow.com/a/28007256/901925</a> <code>access-multiple-elements-of-an-array</code>. </p>
<pre><code>In [135]: c.flat[[i*c.shape[1]+np.arange(j.start,j.stop) for i,j in indices]]
Out[135]:
array([[4, 2, 1, 2],
[3, 2, 2, 3],
[3, 2, 2, 3],
[0, 3, 4, 4]])
</code></pre>
<p>The indices that I generate this way are:</p>
<pre><code>In [136]: [i*c.shape[1]+np.arange(j.start,j.stop) for i,j in indices]
Out[136]:
[array([0, 1, 2, 3]),
array([5, 6, 7, 8]),
array([5, 6, 7, 8]),
array([10, 11, 12, 13])]
</code></pre>
<p>It works fine if <code>indices</code> is somewhat irregular: <code>indices1 = [[0, slice(0, 3)], [1, slice(2, 5)], [1, slice(1, 4)], [2, slice(0, 3)]]</code></p>
<p>My earlier answer looks at some other ways indexing. But often indexing on a flatten array is fastest, even if you take into account the calculation required to generate the index array.</p>
<p>If the slices vary in length, then you are stuck with generating a list of arrays, or an <code>hstack</code> of such a list:</p>
<pre><code>In [158]: indices2 = [[0, slice(0, 2)], [1, slice(2, 5)],
[1, slice(0, 4)], [2, slice(0, 5)]]
In [159]: c.flat[np.hstack([i*c.shape[1]+np.arange(j.start,j.stop)
for i,j in indices2])]
Out[159]: array([4, 2, 2, 3, 1, 3, 2, 2, 3, 0, 3, 4, 4, 3])
In [160]: [c.flat[i*c.shape[1]+np.arange(j.start,j.stop)] for i,j in indices2]
Out[160]: [array([4, 2]), array([2, 3, 1]), array([3, 2, 2, 3]),
array([0, 3, 4, 4, 3])]
In [161]: np.hstack(_)
Out[161]: array([4, 2, 2, 3, 1, 3, 2, 2, 3, 0, 3, 4, 4, 3])
</code></pre>
<hr>
<p>more on the varying, but equal length slices:</p>
<pre><code>In [190]: indices1 = [[0, slice(0, 3)], [1, slice(2, 5)], [1, slice(1, 4)], [2, slice(0, 3)]]
In [191]: c.flat[[i*c.shape[1]+np.arange(j.start,j.stop) for i,j in indices1]]Out[191]:
array([[4, 2, 1],
[2, 3, 1],
[2, 2, 3],
[0, 3, 4]])
In [193]: rows = [[i] for i,j in indices1]
In [200]: cols=[np.arange(j.start,j.stop) for i,j in indices1]
In [201]: c[rows,cols]
Out[201]:
array([[4, 2, 1],
[2, 3, 1],
[2, 2, 3],
[0, 3, 4]])
</code></pre>
<p>In this case <code>rows</code> is a vertical list that can be broadcasted with <code>cols</code>.</p>
|
python|arrays|numpy|matrix-indexing
| 0
|
376,159
| 32,221,946
|
Numpy matrix of arrays without copying possible?
|
<p>I got a question about numpy and its memory. Is it possible to generate a view or something out of multiple numpy arrays without copying them?</p>
<pre><code> import numpy as np
def test_var_args(*inputData):
dataArray = np.array(inputData)
print np.may_share_memory(inputData, dataArray) # prints false, b.c. of no shared memory
test_var_args(np.arange(32),np.arange(32)*2)
</code></pre>
<p>I've got a c++ application with images and want to do some python magic. I pass the images in rows to the python script using the c-api and want to combine them without copying them. </p>
<p>I am able to pass the data such that C++ and Python share the same memory. Now I want to arrange that memory into a numpy view/array or something like that.</p>
<p>The images in C++ are not stored contiguously in memory (I slice them). The rows that I hand over to Python are arranged in a contiguous memory block.</p>
<p>The number of images I pass are varying. Maybe I can change that if there exist a preallocation trick.</p>
|
<p>There's a useful discussion in the answer here: <a href="https://stackoverflow.com/questions/45943160/can-memmap-pandas-series-what-about-a-dataframe">Can memmap pandas series. What about a dataframe?</a></p>
<p>In short:</p>
<ul>
<li>If you initialize your DataFrame from a single array of matrix, then it may not copy the data.</li>
<li>If you initialize from multiple arrays of the same or different types, your data will be copied.</li>
</ul>
<p>This is the only behavior permitted by the default BlockManager used by Pandas' DataFrame, which organizes the DataFrame's memory internally.</p>
<p>Its possible to <a href="https://stackoverflow.com/questions/45943160/can-memmap-pandas-series-what-about-a-dataframe">monkey patch the BlockManager</a> to change this behavior though, in which case your supplied data will be referenced.</p>
|
python|c++|arrays|numpy
| 0
|
376,160
| 40,777,201
|
Is this a good log loss for multiclass regression?
|
<p>I have multiple classes to predict at once, so I see the problem as a non-linear regression on the binary labels/classes I have as true output. </p>
<p>That said, the loss function is a sum of the log losses of every label which is then averaged for each example in the batch. Here is my loss function: </p>
<pre><code>prediction = tf.sigmoid(hidden_out) # Prediction output, from 0.0 to 1.0
# Avoid infinite gradients for extreme cases
# by remapping the prediction from 0.005 to 0.995:
pred = prediction*0.99+0.005
# Log loss: mean is on batch_size, sum is on labels:
loss = tf.reduce_mean(
tf.reduce_sum(
- labels*tf.log(pred) - (1.0-labels)*tf.log(1.0-pred),
reduction_indices=1
)
)
</code></pre>
<p>I suspect there is something wrong with this. I am trying to train a deep convolutional neural network with residual inception layers. I get some <code>NaN</code> values for the loss quickly with low learning rates.</p>
<p>Is my log loss correct? Any suggestions?</p>
|
<p>You can use a loss that is already implemented for multiclass logistic regression instead of your loss: <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#sigmoid_cross_entropy_with_logits" rel="nofollow noreferrer">sigmoid_cross_entropy_with_logits</a>. It was carefully designed to avoid numerical problems.</p>
<p>In addition, instead of clipping prediction, it might be better to clip gradients. See <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.layers.html#optimize_loss" rel="nofollow noreferrer">here</a>, for example.</p>
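<p>For reference, a sketch of how the original loss could be rewritten around that op — feeding it the raw <code>hidden_out</code> logits rather than the clipped sigmoid output (the exact keyword argument names differ between TensorFlow versions, so check them against your installed release):</p>
<pre><code>loss = tf.reduce_mean(
    tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=hidden_out),
        reduction_indices=1
    )
)
</code></pre>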
|
machine-learning|tensorflow|classification|regression|deep-learning
| 1
|
376,161
| 40,613,850
|
Conditional data selection with text string data in pandas dataframe
|
<p>I've looked but seem to be coming up dry for an answer to the following question. </p>
<p>I have a pandas dataframe analogous to this (call it 'df'):</p>
<pre><code> Type Set
1 theGreen Z
2 andGreen Z
3 yellowRed X
4 roadRed Y
</code></pre>
<p>I want to add another column to the dataframe (or generate a series) of the same length as the dataframe (= equal number of records/rows) which assigns a numerical coding variable (1) if the Type contains the string "Green", (0) otherwise. </p>
<p>Essentially, I'm trying to find a way of doing this:</p>
<pre><code> df['color'] = np.where(df['Type'] == 'Green', 1, 0)
</code></pre>
<p>Except instead of the usual numpy operators (<,>,==,!=, etc.) I need a way of saying "in" or "contains". Is this possible? Any and all help appreciated! </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="noreferrer"><code>str.contains</code></a>:</p>
<pre><code>df['color'] = np.where(df['Type'].str.contains('Green'), 1, 0)
print (df)
Type Set color
1 theGreen Z 1
2 andGreen Z 1
3 yellowRed X 0
4 roadRed Y 0
</code></pre>
<p>Another solution with <code>apply</code>:</p>
<pre><code>df['color'] = np.where(df['Type'].apply(lambda x: 'Green' in x), 1, 0)
print (df)
Type Set color
1 theGreen Z 1
2 andGreen Z 1
3 yellowRed X 0
4 roadRed Y 0
</code></pre>
<p>Second solution is faster, but doesn't work with <code>NaN</code> in column <code>Type</code>, then return <code>error</code>:</p>
<blockquote>
<p>TypeError: argument of type 'float' is not iterable</p>
</blockquote>
<p><strong>Timings</strong>:</p>
<pre><code>#[400000 rows x 4 columns]
df = pd.concat([df]*100000).reset_index(drop=True)
In [276]: %timeit df['color'] = np.where(df['Type'].apply(lambda x: 'Green' in x), 1, 0)
10 loops, best of 3: 94.1 ms per loop
In [277]: %timeit df['color1'] = np.where(df['Type'].str.contains('Green'), 1, 0)
1 loop, best of 3: 256 ms per loop
</code></pre>
|
python|string|pandas|numpy|dataframe
| 7
|
376,162
| 40,340,131
|
Debugging python tests in TensorFlow
|
<p>We want to debug Python tests in TensorFlow such as <em>sparse_split_op_test</em> and <em>string_to_hash_bucket_op_test</em> </p>
<p>The other c++ tests we could debug using gdb, however we cannot find a way to debug python tests.</p>
<p>Is there a way in which we can debug specific python test case run via Bazel test command (for example, bazel test //tensorflow/python/kernel_tests:sparse_split_op_test)</p>
|
<p>I would first build the test:</p>
<pre><code>bazel build //tensorflow/python/kernel_tests:sparse_split_op_test
</code></pre>
<p>Then use pdb on the resulting Python binary:</p>
<pre><code>pdb bazel-bin/tensorflow/python/kernel_tests/sparse_split_op_test
</code></pre>
<p>That seems to work for me stepping through the first few lines of the test.</p>
|
python|tensorflow
| 3
|
376,163
| 40,335,140
|
How to highlight both a row and a column at once in pandas
|
<p>I can highlight a column using the syntax </p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1,0],[0,1]])
df.style.apply(lambda x: ['background: lightblue' if x.name == 0 else '' for i in x])
</code></pre>
<p><a href="https://i.stack.imgur.com/TGnFb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TGnFb.png" alt="enter image description here"></a></p>
<p>Similarly I can highlight a row by passing <code>axis=1</code>:</p>
<pre><code>df.style.apply(lambda x: ['background: lightgreen' if x.name == 0 else '' for i in x],
axis=1)
</code></pre>
<p><a href="https://i.stack.imgur.com/Q6BBJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Q6BBJ.png" alt="enter image description here"></a></p>
<p>However I can't work out how to do both at once; the problem is that when I use <code>applymap</code>, I only get the values, not the names of the series that they come from.</p>
|
<p>How about doing something like this? Enumerate the column and check the index while building up the style list:</p>
<pre><code>df.style.apply(lambda x: ['background: lightblue' if x.name == 0 or i == 0 else ''
for i,_ in x.iteritems()])
</code></pre>
<p><a href="https://i.stack.imgur.com/zNbsO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zNbsO.png" alt="enter image description here"></a></p>
<p>Or if you have color preference:</p>
<pre><code>df.style.apply(lambda x: [('background: lightblue' if x.name == 0
else ('background: lightgreen' if i == 0 else ''))
for i,_ in x.iteritems()])
</code></pre>
<p><a href="https://i.stack.imgur.com/lNS9s.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lNS9s.png" alt="enter image description here"></a></p>
|
python|pandas
| 13
|
376,164
| 40,651,005
|
python pandas columns contain dict
|
<p>I got some trouble.</p>
<pre><code>import pandas
df=pandas.DataFrame([[{'a':1,'b':2},3,3],[{'a':2,'b':4},6,5]],columns=['c1','c2','c3'])
print df
c1 c2 c3
0 {u'a': 1, u'b': 2} 3 3
1 {u'a': 2, u'b': 4} 6 5
</code></pre>
<p>I want to get the result like this:</p>
<pre><code> b c3
0 2 3
1 4 5
</code></pre>
<p>I tried using <code>df.loc[:,['c1','c3']]</code>, but I don't know what the next step should be.
Thanks a lot.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with a <code>df1</code> created by the <code>DataFrame</code> constructor: first convert column <code>c1</code> to a <code>numpy array</code> via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow noreferrer"><code>values</code></a> and then to a <code>list</code>:</p>
<pre><code>df=pd.DataFrame([[{'a':1,'b':2},2,3],[{'a':2,'b':4},4,5]],columns=['c1','c2','c3'])
df1 = pd.DataFrame(df.c1.values.tolist())
print (df1)
a b
0 1 2
1 2 4
print (pd.concat([df1[['b']], df[['c3']]], axis=1))
b c3
0 2 3
1 4 5
</code></pre>
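<p>A shorter (though typically slower) sketch of the same idea is to expand the dict column directly with <code>apply(pd.Series)</code>:</p>
<pre><code>expanded = df['c1'].apply(pd.Series)   # one column per dict key
print (pd.concat([expanded[['b']], df[['c3']]], axis=1))
   b  c3
0  2   3
1  4   5
</code></pre>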
|
python|pandas|dictionary
| 3
|
376,165
| 40,576,876
|
Efficient Haskell equivalent to NumPy's argsort
|
<p>Is there a standard Haskell equivalent to NumPy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow noreferrer"><code>argsort</code></a> function?</p>
<p>I'm using <a href="http://hackage.haskell.org/package/hmatrix" rel="nofollow noreferrer">HMatrix</a> and, so, would like a function compatible with <code>Vector R</code> which is an alias for <code>Data.Vector.Storable.Vector Double</code>. The <code>argSort</code> function below is the implementation I'm currently using:</p>
<pre><code>{-# LANGUAGE NoImplicitPrelude #-}
module Main where
import qualified Data.List as L
import qualified Data.Vector as V
import qualified Data.Vector.Storable as VS
import Prelude (($), Double, IO, Int, compare, print, snd)
a :: VS.Vector Double
a = VS.fromList [40.0, 20.0, 10.0, 11.0]
argSort :: VS.Vector Double -> V.Vector Int
argSort xs = V.fromList (L.map snd $ L.sortBy (\(x0, _) (x1, _) -> compare x0 x1) (L.zip (VS.toList xs) [0..]))
main :: IO ()
main = print $ argSort a -- yields [2,3,1,0]
</code></pre>
<p>I'm using explicit qualified <code>import</code>s just to make it clear where every type and function is coming from.</p>
<p>This implementation is not terribly efficient since it converts the input vector to a list and the result back to a vector. Does something like this (but more efficient) exist somewhere?</p>
<p><strong>Update</strong></p>
<p>@leftaroundabout had a good solution. This is the solution I ended up with:</p>
<pre><code>module LAUtil.Sorting
( IndexVector
, argSort
)
where
import Control.Monad
import Control.Monad.ST
import Data.Ord
import qualified Data.Vector.Algorithms.Intro as VAI
import qualified Data.Vector.Storable as VS
import qualified Data.Vector.Unboxed as VU
import qualified Data.Vector.Unboxed.Mutable as VUM
import Numeric.LinearAlgebra
type IndexVector = VU.Vector Int
argSort :: Vector R -> IndexVector
argSort xs = runST $ do
let l = VS.length xs
t0 <- VUM.new l
forM_ [0..l - 1] $
\i -> VUM.unsafeWrite t0 i (i, (VS.!) xs i)
VAI.sortBy (comparing snd) t0
t1 <- VUM.new l
forM_ [0..l - 1] $
\i -> VUM.unsafeRead t0 i >>= \(x, _) -> VUM.unsafeWrite t1 i x
VU.freeze t1
</code></pre>
<p>This is more directly usable with <code>Numeric.LinearAlgebra</code> since the data vector is a <code>Storable</code>. This uses an unboxed vector for the indices.</p>
|
<p>Use <a href="http://hackage.haskell.org/package/vector-algorithms" rel="noreferrer">vector-algorithms</a>:</p>
<pre class="lang-hs prettyprint-override"><code>import Data.Ord (comparing)
import qualified Data.Vector.Unboxed as VU
import qualified Data.Vector.Algorithms.Intro as VAlgo
argSort :: (Ord a, VU.Unbox a) => VU.Vector a -> VU.Vector Int
argSort xs = VU.map fst $ VU.create $ do
xsi <- VU.thaw $ VU.indexed xs
VAlgo.sortBy (comparing snd) xsi
return xsi
</code></pre>
<p>Note these are <code>Unboxed</code> rather than <code>Storable</code> vectors. The latter need to make some tradeoffs to allow impure C FFI operations and can't properly handle heterogeneous tuples. You can of course always <a href="http://hackage.haskell.org/package/vector-0.11.0.0/docs/Data-Vector-Unboxed.html#g:36" rel="noreferrer"><code>convert</code></a> to and from storable vectors.</p>
|
haskell|numpy|hmatrix
| 5
|
376,166
| 40,603,278
|
Running SyntaxNet with designated instance (in Python-level)
|
<p>Could you please let me know how I designate which instance to use when training/testing SyntaxNet?</p>
<p>In other tensorflow models we can easily change configurations by editing Python code:</p>
<p>ex) <code>tf.device('/cpu:0')</code> => <code>tf.device('/gpu:0')</code>.</p>
<p>I could run parsey mcparseface model via running <code>demo.sh</code> and I followed back symbolic links to find device configurations.</p>
<p>Maybe I missed it, but I cannot find GPU configuration Python code in <code>demo.sh</code>, <code>parser_eval.py</code> or <code>context.proto</code>.</p>
<p>When I search with query '<code>device</code>' in <a href="https://github.com/tensorflow/models/" rel="nofollow noreferrer">tensorflow/models</a>, I could see several C files such as <a href="https://github.com/tensorflow/models/blob/a9133ae914b44602c5f26afbbd7dd794ff9c6637/syntaxnet/syntaxnet/unpack_sparse_features.cc" rel="nofollow noreferrer">syntaxnet/syntaxnet/unpack_sparse_features.cc</a> contain line <code>using tensorflow::DEVICE_CPU;</code></p>
<p>So, is changing the C code in these files the only way to change the device configuration for SyntaxNet?</p>
<p>I hope there is a simpler way to change the setting in Python level.</p>
<p>Thanks in advance.</p>
|
<p>You can refer to this page for instructions on running syntax net on GPU: <a href="https://github.com/tensorflow/models/issues/248" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/248</a></p>
<p>Tensorflow would automatically assign devices including GPU to the ops: <a href="https://www.tensorflow.org/versions/r0.11/how_tos/using_gpu/index.html" rel="nofollow noreferrer">https://www.tensorflow.org/versions/r0.11/how_tos/using_gpu/index.html</a>. You can also manually specify the device when building the graph.</p>
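<p>For reference, manual device placement at graph-construction time looks roughly like the sketch below (a generic TensorFlow example, not SyntaxNet-specific code):</p>
<pre><code>import tensorflow as tf

# pin these ops to the first GPU while building the graph
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

# allow_soft_placement falls back to CPU if an op has no GPU kernel,
# log_device_placement prints where each op actually ran
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                       log_device_placement=True)) as sess:
    print(sess.run(c))
</code></pre>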
|
python|nlp|tensorflow|gpu|syntaxnet
| 0
|
376,167
| 40,495,732
|
Creating an RGB composite SAR image
|
<p>I am quite new at Python programming and I need your help. I always do a research for my problem first before posting.</p>
<p>I have SAR dual polarization image (2^16 gray level values) in tiff format. In this tiff image there are two bands. The first band (HH_band) is a horizontal polarization channel and the second one (HV_band) is the vertical polarization channel. I want to create an RGB composite image. For this to happen, I need to layer stack the two channels as follows:</p>
<ol>
<li>get the first band (HH_band)</li>
<li>get the second band (HV_band)</li>
<li>get the ratio (HH_band/HV_band)</li>
</ol>
<p>I know that there are many people posting about something similar to this (RGB composite image of natural colors). I tried to use <code>cv2.merge</code> or <code>cv2.split</code> from the OpenCV library but it didn't work. I thought it would be relatively easy to create a SAR RGB image in Python (as I have seen a few posts about creating RGB images of LANDSAT) but I got stuck in my case.</p>
<p>I would much appreciate any help.</p>
|
<p>Here is a possible way to accomplish the band composition programmatically:</p>
<pre><code>import numpy as np
from skimage import io  # assumption: io.imread here comes from scikit-image
tif = io.imread('dual_polarization_image.tif')
band = {'HH': 0, 'HV': 1}
r = tif[:, :, band['HH']]
g = tif[:, :, band['HV']]
hh = r.astype(np.float64)
hv = g.astype(np.float64)
b = np.divide(hh, hv, out=np.zeros_like(hh), where=hv!=0)
rgb = np.dstack((r, g, b.astype(np.uint16)))
</code></pre>
<p>Remarks:</p>
<ul>
<li>It would be possible to deal with different arrangements of the bands in the TIFF image by simply redefining the values of the dictionary <code>band</code>.</li>
<li>Prior to calculating the band ratio, it is necessary to convert the data to <code>np.float64</code>.</li>
<li>I have taken advantage of the <code>where</code> option for <a href="https://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="nofollow noreferrer">universal functions</a> to avoid zero division warnings.</li>
<li>In order for the composition to be possible, the band ratio (blue channel) has to be converted back to the same type (i.e. <code>np.uint16</code>) as the original bands (red and green channels).</li>
</ul>
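<p>As a quick visual sanity check, a rough sketch assuming matplotlib is available (the naive scaling to [0, 1] is only for display purposes):</p>
<pre><code>import matplotlib.pyplot as plt

plt.imshow(rgb / float(np.iinfo(np.uint16).max))  # scale 16-bit values to [0, 1]
plt.axis('off')
plt.show()
</code></pre>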
|
python|opencv|numpy|image-processing|tiff
| 3
|
376,168
| 40,642,295
|
Timeseries charts with Bokeh
|
<p>Hi, I am trying to create a timeseries chart using Bokeh.
The data I have looks like this: three columns, one for the timestamp (which is effectively the current time), one for the value, and one for the sensor that provided the value.</p>
<p>Time | Value | Sensor</p>
<p>2011-05-03 17:45:35.177000 | 213.130005| A</p>
<p>2011-05-03 17:45:36.177000 | 208.83 | B</p>
<p>2011-05-03 17:45:36.277000 | 212.629993 | C</p>
<p>2011-05-03 17:45:45.317000 | 211.719999| A</p>
<p>2011-05-03 17:45:45.577000 | 203.549999| B</p>
<p>2011-05-03 17:45:48.177000 | 201.199999| B</p>
<p>2011-05-03 17:45:55.175000 | 199.439999| C</p>
<p>I am completely new to Bokeh
and I'm not sure how I can use Bokeh to render the data for each of the sensors independently on a chart, something along the lines of <a href="http://bokeh.pydata.org/en/latest/docs/reference/charts.html#timeseries" rel="nofollow noreferrer">this</a></p>
<ul>
<li>Do I need pandas as shown in the example? </li>
<li>How can I use pandas to parse the timestamp column, from the example I can only see pandas.parse_dates</li>
</ul>
|
<p>1) You don't <em>have</em> to use pandas, but this route makes it much easier.
<br/>
2) You can parse the <em>time</em> feature by just using pandas' <code>to_datetime()</code> function (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer">pandas docs reference</a>)</p>
<pre><code>from bokeh.plotting import figure, show
from bokeh.models import DatetimeTickFormatter
import pandas as pd
import csv
d = pd.read_csv("SO.txt", delimiter="|")
# Strip whitespace from spacing in column headers
d.columns = [val.strip() for val in d.columns.values]
d["Time"] = pd.to_datetime(d["Time"], yearfirst=True)
unique_sensors = d["Sensor"].unique()
c = ["red","blue","green"]
fig = figure(x_axis_label="Time (Seconds)", y_axis_label="Value", title="Sample")
# Draw a line for each unique sensor value
for i, s in enumerate(unique_sensors):
sdf = d.loc[d["Sensor"]==s]
fig.line(x=sdf["Time"], y=sdf["Value"], legend=s, line_color=c[i], line_width=3.0)
fig.xaxis.formatter = DatetimeTickFormatter(hours=["%b %d %Y"],
days=["%b %d %Y"],
months=["%b %d %Y"],
years=["%b %d %Y"])
show(fig)
</code></pre>
<p>But obviously you will need your color map to scale to your problem so I would suggest looking at bokeh palettes (<a href="http://bokeh.pydata.org/en/latest/docs/reference/palettes.html" rel="nofollow noreferrer">bokeh docs reference</a>)</p>
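<p>For example, a hedged sketch with one of the built-in palettes (<code>Category10</code> provides up to 10 distinct colours; assumes a reasonably recent Bokeh version):</p>
<pre><code>from bokeh.palettes import Category10

# Category10 is a dict keyed by the number of colours (3..10)
n = max(3, min(len(unique_sensors), 10))
c = Category10[n]
</code></pre>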
|
pandas|bokeh
| 1
|
376,169
| 40,472,912
|
hdf5 file to pandas dataframe
|
<p>I downloaded a dataset which is stored in .h5 files.
I need to keep only certain columns and to be able to manipulate the data in it.</p>
<p>To do this, I tried to load it in a pandas dataframe. I've tried to use:</p>
<pre><code>pd.read_hdf(path)
</code></pre>
<p>But I get: <code>No dataset in HDF5 file.</code></p>
<p>I've found answers on SO (<a href="https://stackoverflow.com/questions/33451926/read-hdf5-file-to-pandas-dataframe-with-conditions">read HDF5 file to pandas DataFrame with conditions</a>) but I don't need conditions, and the answer adds conditions about how the file was written but I'm not the creator of the file so I can't do anything about that.</p>
<p>I've also tried using h5py: </p>
<pre><code>df = h5py.File(path)
</code></pre>
<p>But this is not easily manipulable and I can't seem to get the columns out of it (only the names of the columns using <code>df.keys()</code>).
Any idea on how to do this?</p>
|
<p>The easiest way to read them into pandas is to open the file with <code>h5py</code>, convert the dataset to an <code>np.array</code>, and then build a <code>DataFrame</code>. It would look something like: </p>
<pre><code>df = pd.DataFrame(np.array(h5py.File(path)['variable_1']))
</code></pre>
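<p>If you do not know the dataset names in advance, a minimal sketch is to inspect the file first and then build the frame (the column names <code>'col_a'</code> and <code>'col_b'</code> below are hypothetical placeholders):</p>
<pre><code>import h5py
import numpy as np
import pandas as pd

with h5py.File(path, 'r') as f:
    print(list(f.keys()))  # names of the datasets stored in the file
    df = pd.DataFrame({name: np.array(f[name]) for name in ['col_a', 'col_b']})
</code></pre>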
|
python|pandas|hdf5
| 12
|
376,170
| 40,454,713
|
python pandas dataframe to_sql converting an object to Mysql INT datatype yields incorrect results
|
<p>I am trying to read a csv file into a Pandas dataframe and insert the final dataframe into Mysql using pandas.to_sql function.</p>
<p>All the columns are inserting the correct data except for one column in the dataframe, which has a length of 25 characters. This column (transaction_id) is defined as an INT(25) in MySQL and I have not been able to figure out why this column has wrong data.</p>
<p>And the weird thing is, the transaction_id column in MySQL has the same value for more than 360K rows per csv file. </p>
<p>Any help would be great. </p>
<p>Client transaction ID example:</p>
<pre><code>format: transaction id_page id-banner id
2343213254646775357496618_12-586542237
2343213254646775357881218_14-586542237
2343213254646775357886268_10-586542237
2343213254646775357886218_27-586542237
2343213254646775357886248_10-586542237
</code></pre>
<p>Here is my code: </p>
<pre><code>xls = pd.ExcelFile(path_value)
df = xls.parse('report', skiprows=13, index_col=None, na_values=['NA'])
# remove last row
df = df[:-1]
df['transaction_datetime'] = pd.to_datetime(df['transaction_datetime'])
# add transaction date column to data frame:
df['transaction_date'] = df['transaction_datetime'].dt.date
df.loc[:, 'created_date'] = datetime.datetime.now()
# convert client transaction id into three parts
df['transaction_id'], df['placeholder'] = zip(
*df['Client Transaction ID'].apply(lambda x: x.split('_', 1)))
df['page_id'], df['banner_id'] = zip(*df['placeholder'].apply(lambda x: x.split('-', 1)))
df.drop('placeholder', axis=1, inplace=True)
df.drop('Client Transaction ID', axis=1, inplace=True)
print datetime.datetime.now()
# connect to mysql
engine = create_engine(
'connection string'
echo=False)
df.to_sql(name='table', con=engine, if_exists='append', index=False)
print datetime.datetime.now()
</code></pre>
|
<p>Apparently, the issue stems from MySQL. My transaction id, having a length of 25, was too big for BIGINT. I had to convert it to VARCHAR(25) to get the right value in the table. Thanks @MaxU for improving my code. </p>
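<p>A minimal sketch of the pandas side of that fix, assuming the column is kept as a string and the SQL type is forced via the <code>dtype</code> argument of <code>to_sql</code>:</p>
<pre><code>from sqlalchemy.types import VARCHAR

df['transaction_id'] = df['transaction_id'].astype(str)
df.to_sql(name='table', con=engine, if_exists='append', index=False,
          dtype={'transaction_id': VARCHAR(25)})
</code></pre>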
|
python|mysql|pandas|dataframe
| 0
|
376,171
| 40,754,040
|
I am getting error KeyError: 'duration' when it exists
|
<p>The following code returns error </p>
<pre><code>KeyError: 'duration'
for i in range(0, 3):
exam_df['duration'] = pd.to_datetime(i,(exam_df['Duration '])[i])
exam_df['grade'] = exam_df['Grade'].astype(np.int64)
exam_df.plot.scatter(x='duration', y='grade')
</code></pre>
|
<p>I think that you misspelled the key 'duration', try to change:</p>
<pre><code>exam_df['duration'] = pd.to_datetime(i,(exam_df['Duration '])[i])
</code></pre>
<p>With:</p>
<pre><code>exam_df['duration'] = pd.to_datetime(i,(exam_df['duration'])[i])
</code></pre>
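<p>The question's code mixes <code>'duration'</code> and <code>'Duration '</code> (note the capital letter and the trailing space). A small, hedged sketch to avoid this class of error is to normalise the column names once:</p>
<pre><code># strip whitespace and lowercase every column name, so 'Duration ' becomes 'duration'
exam_df.columns = exam_df.columns.str.strip().str.lower()
</code></pre>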
|
python|pandas
| 3
|
376,172
| 40,473,299
|
How to save out in a new column the url which is reading pandas read_html() function?
|
<p>I am interested in extracting some tables from a website. I defined a list of links where the tables live. Each link has several tables with the same number of columns. So, I am extracting all the tables from the list of links into a single table with the pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow noreferrer">read_html()</a> function as follows:</p>
<pre><code>links = ['url1.com','url2.com',...,'urlN.com']
import multiprocessing
def process_url(link):
return pd.concat(pd.read_html(link), ignore_index=False) # add in a new column the link where the table was extracted..
p = multiprocessing.Pool()
df = pd.concat(p.map(process_url, links), ignore_index=True)  # note: the worker function is process_url
</code></pre>
<p>I noticed that it would be helpful to carry along the provenance link of each table (i.e. to save in a new column which link the rows of the final table come from). Thus, my question is: how can I carry the pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow noreferrer">read_html()</a> reference link into a new column?</p>
<p>For example:</p>
<p>The tables 1 and 2 are in url1.com:</p>
<p>table1:</p>
<pre><code>fruit, color, season, price
apple, red, winter, 2$
watermelon, green, winter, 3$
orange, orange, spring, 1$
</code></pre>
<p>table2:</p>
<pre><code>fruit, color, season, price
peppermint, green, fall, 3$
pear, yellow, fall, 4$
</code></pre>
<p>Table 3 lives in url2.com</p>
<pre><code>fruit, color, season, price
tomato, red, fall, 3$
pumpking, orange, fall, 1$
</code></pre>
<p>I would like to save in a new column the place where each table were extracted (i.e. carry out the reference of the table in a new column):</p>
<pre><code> fruit, color, season, price, link
0 apple, red, winter, 2$, url1.com
1 watermelon, green, winter, 3$, url1.com
2 orange, orange, spring, 1$, url1.com
3 peppermint, green, fall, 3$, url1.com
4 pear, yellow, fall, 4$, url1.com
5 tomato, red, fall, 3$, url2.com
6 pumpking, orange, fall, 1$, url2.com
</code></pre>
<p>Another example is this "diagram", note that table1 and table2 are in url1.com. On the other hand, table 3 is in url2.com. with the above function I create a single table from tables that are in different links, my objective is to create a column which is conformed of the place the table was extracted (just to save the referece):</p>
<pre><code>source: url1.com
fruit, color, season, price
apple, red, winter, 2$
watermelon, green, winter, 3$
orange, orange, spring, 1$

source: url1.com
fruit, color, season, price
peppermint, green, fall, 3$
pear, yellow, fall, 4$

source: url2.com
fruit, color, season, price
tomato, red, fall, 3$
pumpking, orange, fall, 1$

---->

fruit, color, season, price, link
apple, red, winter, 2$, url1.com
watermelon, green, winter, 3$, url1.com
orange, orange, spring, 1$, url1.com
peppermint, green, fall, 3$, url1.com
pear, yellow, fall, 4$, url1.com
tomato, red, fall, 3$, url2.com
pumpking, orange, fall, 1$, url2.com
</code></pre>
<p>Any idea how to do it?</p>
|
<p>This should do the trick:</p>
<pre><code>def process_url(link):
return pd.concat(pd.read_html(link), ignore_index=False).assign(link=link)
</code></pre>
<p>Explanation: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer">DataFrame.assign(new_column=expression)</a> will add a new virtual column to your DF.</p>
<p>Demo:</p>
<pre><code>In [2]: d1
Out[2]:
a b
0 1 10
1 2 20
In [3]: d2
Out[3]:
a b
0 11 100
1 12 200
In [4]: link = 'http://url1.com'
In [5]: pd.concat([d1, d2], ignore_index=True).assign(link=link)
Out[5]:
a b link
0 1 10 http://url1.com
1 2 20 http://url1.com
2 11 100 http://url1.com
3 12 200 http://url1.com
</code></pre>
|
python|python-3.x|pandas|dataframe|beautifulsoup
| 2
|
376,173
| 40,705,480
|
Python pandas: remove everything after a delimiter in a string
|
<p>I have data frames which contain e.g.:</p>
<pre><code>"vendor a::ProductA"
"vendor b::ProductA"
"vendor a::Productb"
</code></pre>
<p>I need to remove everything (and including) the two :: so that I end up with:</p>
<pre><code>"vendor a"
"vendor b"
"vendor a"
</code></pre>
<p>I tried <code>str.trim</code> (which seems to not exist) and <code>str.split</code> without success.
What would be the easiest way to accomplish this?</p>
|
<p>You can use <code>pandas.Series.str.split</code> just like you would use <code>split</code> normally. Just split on the string <code>'::'</code>, and index the list that's created from the <code>split</code> method:</p>
<pre><code>>>> df = pd.DataFrame({'text': ["vendor a::ProductA", "vendor b::ProductA", "vendor a::Productb"]})
>>> df
text
0 vendor a::ProductA
1 vendor b::ProductA
2 vendor a::Productb
>>> df['text_new'] = df['text'].str.split('::').str[0]
>>> df
text text_new
0 vendor a::ProductA vendor a
1 vendor b::ProductA vendor b
2 vendor a::Productb vendor a
</code></pre>
<p>Here's a non-pandas solution:</p>
<pre><code>>>> df['text_new1'] = [x.split('::')[0] for x in df['text']]
>>> df
text text_new text_new1
0 vendor a::ProductA vendor a vendor a
1 vendor b::ProductA vendor b vendor b
2 vendor a::Productb vendor a vendor a
</code></pre>
<p>Edit: Here's the step-by-step explanation of what's happening in <code>pandas</code> above:</p>
<pre><code># Select the pandas.Series object you want
>>> df['text']
0 vendor a::ProductA
1 vendor b::ProductA
2 vendor a::Productb
Name: text, dtype: object
# using pandas.Series.str allows us to implement "normal" string methods
# (like split) on a Series
>>> df['text'].str
<pandas.core.strings.StringMethods object at 0x110af4e48>
# Now we can use the split method to split on our '::' string. You'll see that
# a Series of lists is returned (just like what you'd see outside of pandas)
>>> df['text'].str.split('::')
0 [vendor a, ProductA]
1 [vendor b, ProductA]
2 [vendor a, Productb]
Name: text, dtype: object
# using the pandas.Series.str method, again, we will be able to index through
# the lists returned in the previous step
>>> df['text'].str.split('::').str
<pandas.core.strings.StringMethods object at 0x110b254a8>
# now we can grab the first item in each list above for our desired output
>>> df['text'].str.split('::').str[0]
0 vendor a
1 vendor b
2 vendor a
Name: text, dtype: object
</code></pre>
<p>I would suggest checking out the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.html" rel="noreferrer">pandas.Series.str docs</a>, or, better yet, <a href="http://pandas.pydata.org/pandas-docs/stable/text.html" rel="noreferrer">Working with Text Data in pandas</a>.</p>
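<p>One more hedged variant: <code>Series.str.partition</code> splits on the first occurrence only and keeps everything before the delimiter in column <code>0</code>:</p>
<pre><code>>>> df['text'].str.partition('::')[0]
0    vendor a
1    vendor b
2    vendor a
Name: 0, dtype: object
</code></pre>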
|
python|python-3.x|pandas
| 106
|
376,174
| 40,519,046
|
Insert dataframe into postgresql sqlalchemy with idx autoincrement
|
<p>I'm using <code>requests.get()</code> to get some JSON. After that, I want to insert the data into PostgreSQL. Something very interesting is happening: if I use <code>df.to_sql(index=False)</code>, the data gets appended into PostgreSQL with no problem, but the Id in PostgreSQL is not creating the autoincrement value; the column is totally empty. If I eliminate the parameter in <code>df.to_sql()</code> then I get the following error... <code>IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint</code>. Here is my code...</p>
<pre><code>import requests
import pandas as pd
import sqlalchemy
urls = ['https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22DIA%22%20and%20startDate%20%3D%20%222015-01-01%22%20and%20endDate%20%3D%20%222015-12-31%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=',
'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22DIA%22%20and%20startDate%20%3D%20%222016-01-01%22%20and%20endDate%20%3D%20%222016-11-08%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=',
'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22SPY%22%20and%20startDate%20%3D%20%222015-01-01%22%20and%20endDate%20%3D%20%222015-12-31%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=',
'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22SPY%22%20and%20startDate%20%3D%20%222016-01-01%22%20and%20endDate%20%3D%20%222016-11-08%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=',
'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22IWN%22%20and%20startDate%20%3D%20%222015-01-01%22%20and%20endDate%20%3D%20%222015-12-31%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=',
'https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20%3D%20%22IWN%22%20and%20startDate%20%3D%20%222016-01-01%22%20and%20endDate%20%3D%20%222016-11-08%22&format=json&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=']
df_list = []
for url in urls:
data = requests.get(url)
data_json = data.json()
df = pd.DataFrame(data_json['query']['results']['quote'])
df_list.append(df)
quote_df = pd.concat(df_list)
engine = sqlalchemy.create_engine('postgresql://postgres:wpc,.2016@localhost:5432/stocks')
quote_df.to_sql('quotes', engine, if_exists='append')
</code></pre>
<p>I would like to insert the <code>df</code> into postgresql with the postgresql autoincrement index.
How can I fix my code to do so.</p>
<h3>Question Update 10NOV2016 1900</h3>
<p>I add the following code to fix the indexing in the data frame...</p>
<pre><code>quote_df = pd.concat(df_list)
quote_df.index.name = 'Index'
quote_df = quote_df.reset_index()
quote_df['Index'] = quote_df.index
engine = create_engine('postgresql://postgres:wpc,.2016@localhost:5432/stocks')
quote_df.to_sql('quotes', engine, if_exists = 'append', index=False)
engine.dispose()
</code></pre>
<p>Now I'm having the following error when appending to postgresql...</p>
<pre><code>ProgrammingError: (psycopg2.ProgrammingError) column "Index" of relation "quotes" does not exist LINE 1: INSERT INTO quotes ("Index", "Adj_Close", "Close", "Date", "...
</code></pre>
<p>The column does exist in the database.</p>
|
<p>One way (among many) to do this would be:</p>
<p>to fetch maximum <code>Id</code> and store it to a variable (let's call it <code>max_id</code>):</p>
<pre><code>select max(Id) from quotes;
</code></pre>
<p>now we can do this:</p>
<p>Original DF:</p>
<pre><code>In [55]: quote_df
Out[55]:
Adj_Close Close Date High Low Open Symbol Volume
0 170.572764 173.990005 2015-12-31 175.649994 173.970001 175.089996 DIA 5773400
1 172.347213 175.800003 2015-12-30 176.720001 175.619995 176.570007 DIA 2910000
2 173.50403 176.979996 2015-12-29 177.25 176.00 176.190002 DIA 6145700
.. ... ... ... ... ... ... ... ...
213 88.252244 89.480003 2016-01-06 90.099998 89.080002 89.279999 IWN 1570400
214 89.297697 90.540001 2016-01-05 90.620003 89.75 90.410004 IWN 2053100
215 88.893319 90.129997 2016-01-04 90.730003 89.360001 90.550003 IWN 2540600
[1404 rows x 8 columns]
</code></pre>
<p>now we can increase index by <code>max_id</code>:</p>
<pre><code>In [56]: max_id = 123456 # <-- you don't need this line...
In [57]: quote_df.index += max_id
</code></pre>
<p>and set index as <code>Id</code> column:</p>
<pre><code>In [58]: quote_df.reset_index().rename(columns={'index':'Id'})
Out[58]:
Id Adj_Close Close Date High Low Open Symbol Volume
0 123456 170.572764 173.990005 2015-12-31 175.649994 173.970001 175.089996 DIA 5773400
1 123457 172.347213 175.800003 2015-12-30 176.720001 175.619995 176.570007 DIA 2910000
2 123458 173.50403 176.979996 2015-12-29 177.25 176.00 176.190002 DIA 6145700
... ... ... ... ... ... ... ... ... ...
1401 123669 88.252244 89.480003 2016-01-06 90.099998 89.080002 89.279999 IWN 1570400
1402 123670 89.297697 90.540001 2016-01-05 90.620003 89.75 90.410004 IWN 2053100
1403 123671 88.893319 90.129997 2016-01-04 90.730003 89.360001 90.550003 IWN 2540600
[1404 rows x 9 columns]
</code></pre>
<p>Now it should be possible to write this DF to PostgreSQL specifying (<code>index=False</code>)</p>
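<p>Putting those steps together, a minimal sketch of the final write (assuming the <code>Id</code> column in PostgreSQL is a plain integer column you want to fill yourself):</p>
<pre><code>quote_df = quote_df.reset_index().rename(columns={'index': 'Id'})
quote_df.to_sql('quotes', engine, if_exists='append', index=False)
</code></pre>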
|
python|postgresql|pandas|sqlalchemy
| 0
|
376,175
| 40,739,585
|
Error in scikit code
|
<p>I am new to Machine Learning and am trying the <a href="https://www.kaggle.com/c/titanic/" rel="nofollow noreferrer">titanic problem</a> from Kaggle. I have written the attached code that uses decision tree to do computations on data. There is an error that I am unable to remove.</p>
<p>Code :</p>
<pre><code>#!/usr/bin/env python
from __future__ import print_function
import pandas as pd
import numpy as np
from sklearn import tree
train_uri = './titanic/train.csv'
test_uri = './titanic/test.csv'
train = pd.read_csv(train_uri)
test = pd.read_csv(test_uri)
# print(train[train["Sex"] == 'female']["Survived"].value_counts(normalize=True))
train['Child'] = float('NaN')
train['Child'][train['Age'] < 18] = 1
train['Child'][train['Age'] >= 18] = 0
# print(train[train['Child'] == 1]['Survived'].value_counts(normalize=True))
# print(train['Embarked'][train['Embarked'] == 'C'].value_counts())
# print(train.shape)
## Fill empty 'Embarked' values with 'S'
train['Embarked'] = train['Embarked'].fillna('S')
## Convert Embarked classes to integers
train["Embarked"][train["Embarked"] == "S"] = 0
train['Embarked'][train['Embarked'] == "C"] = 1
train['Embarked'][train['Embarked'] == "Q"] = 2
train['Sex'][train['Sex'] == 'male'] = 0
train['Sex'][train['Sex'] == 'female'] = 1
target = train['Survived'].values
features_a = train[['Pclass', 'Sex', 'Age', 'Fare']].values
tree_a = tree.DecisionTreeClassifier()
##### Line With Error #####
tree_a = tree_a.fit(features_a, target)
# print(tree_a.feature_importances_)
# print(tree_a.score(features_a, target))
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "titanic.py", line 40, in <module>
tree_a = tree_a.fit(features_a, target)
File "/usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py", line 739, in fit
X_idx_sorted=X_idx_sorted)
File "/usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py", line 122, in fit
X = check_array(X, dtype=DTYPE, accept_sparse="csc")
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 407, in check_array
_assert_all_finite(array)
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 58, in _assert_all_finite
" or a value too large for %r." % X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
</code></pre>
<p>This error isn't present when I run the code on the DataCamp server but it is present when I run it locally. I don't understand why this is coming up; I have checked the data and the values in either <code>features_a</code> or <code>target</code> don't contain <code>NaN</code> or really high values.</p>
|
<p>Try each feature one by one and you will probably find that one of them has some nulls. I note you do not check if sex has nulls.</p>
<p>Also, by coding each categorical variable manually it would be easy to make an error, perhaps by misspelling one of the categories. Instead you can use <code>df = pd.get_dummies(df)</code> and it will automatically encode all the categorical variables for you. No need to specify each category manually.</p>
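<p>A quick, hedged way to check which feature is responsible (in the Kaggle Titanic data it is usually <code>Age</code>) and to patch it:</p>
<pre><code># count missing values per selected feature
print(train[['Pclass', 'Sex', 'Age', 'Fare']].isnull().sum())

# one simple fix: fill missing ages with the median age
train['Age'] = train['Age'].fillna(train['Age'].median())
</code></pre>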
|
python|numpy|scikit-learn
| 1
|
376,176
| 40,717,105
|
Why do I need lambda to apply functions to a Pandas Dataframe?
|
<p>I have a Pandas data frame and am attempting to pass a function over the entries in one column using the apply() function.</p>
<p>My function is of the form:</p>
<pre><code>def foo(Y):
#accepts a pandas data frame
#carries out some search on the text in each row of the dataframe
#groups successful searches
#return a new column as a pandas series
</code></pre>
<p>My dataframe is of the form:</p>
<pre><code> Info WN RN
0 XX YY ZZ
1 AA BB CC
2 JJ KK LL
</code></pre>
<p>I attempt to execute:</p>
<pre><code>df['SR'] = (df['Info'].apply(foo(x)))
</code></pre>
<p>My error is as follows:</p>
<pre><code>File "<ipython-input-11-ae54015436d8>", line 1, in <module>
df['SR'] = (df['Info'].apply(foo(x))
NameError: name 'x' is not defined
</code></pre>
<p>But if I use:</p>
<pre><code>df['SR'] = (df['Info'].apply(lambda x:foo(x)))
</code></pre>
<p>It works fine.</p>
<p>I understand how Lambda works (at least I thought I did). I don't understand why I need it.</p>
<p>Why do I need lambda to successfully pass the function over the data frame? Shouldn't the apply() function do that by definition? </p>
<p>Or is it that I am effectively doing it the other way around i.e. passing my data frame into the function, and returning some output, rather than iteratively applying the function to the data frame (if that makes sense)?</p>
<p>Can anyone offer any insight? </p>
<p>My sincere thanks!</p>
|
<p>The lambda is unnecessary; you can just do </p>
<pre><code>df['SR'] = df['Info'].apply(foo)
</code></pre>
<p>and it will still work.</p>
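<p>A lambda (or <code>functools.partial</code>) only becomes useful when <code>foo</code> needs extra arguments; <code>some_flag</code> below is a hypothetical parameter used purely for illustration:</p>
<pre><code>df['SR'] = df['Info'].apply(lambda x: foo(x, some_flag=True))
# or, equivalently, let apply forward the keyword argument for you
df['SR'] = df['Info'].apply(foo, some_flag=True)
</code></pre>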
|
python|pandas|lambda|apply
| 3
|
376,177
| 40,420,240
|
Grouped Bar graph Pandas
|
<p>I have a table in a pandas <code>DataFrame</code> named <code>df</code>:</p>
<pre><code>+--- -----+------------+-------------+----------+------------+-----------+
|avg_views| avg_orders | max_views |max_orders| min_views |min_orders |
+---------+------------+-------------+----------+------------+-----------+
| 23 | 123 | 135 | 500 | 3 | 1 |
+---------+------------+-------------+----------+------------+-----------+
</code></pre>
<p>What I am looking for now is to plot a grouped bar graph which shows me
(avg, max, min) of views and orders in one single bar chart.</p>
<p>i.e. on the x axis there would be views and orders separated by a distance,
with 3 bars of (avg, max, min) for views and similarly for orders.</p>
<p>I have attached a sample bar graph image, just to know how the bar graph should look.</p>
<p><a href="https://i.stack.imgur.com/M2Tdy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/M2Tdy.png" alt="just sample: green color should be for avg, yellow for max and pin"></a>
Green color should be for avg, yellow for max and pink for min.</p>
<p>I took the following code from <a href="https://stackoverflow.com/questions/11597785/setting-spacing-between-grouped-bar-plots-in-matplotlib">setting spacing between grouped bar plots in matplotlib</a> but it is not working for me:</p>
<pre><code>plt.figure(figsize=(13, 7), dpi=300)
groups = [[23, 135, 3], [123, 500, 1]]
group_labels = ['views', 'orders']
num_items = len(group_labels)
ind = np.arange(num_items)
margin = 0.05
width = (1. - 2. * margin) / num_items
s = plt.subplot(1, 1, 1)
for num, vals in enumerate(groups):
print 'plotting: ', vals
# The position of the xdata must be calculated for each of the two data
# series.
xdata = ind + margin + (num * width)
# Removing the "align=center" feature will left align graphs, which is
# what this method of calculating positions assumes.
gene_rects = plt.bar(xdata, vals, width)
s.set_xticks(ind + 0.5)
s.set_xticklabels(group_labels)
</code></pre>
<blockquote>
<p>plotting: [23, 135, 3]
...
ValueError: shape mismatch: objects cannot be broadcast to a single shape</p>
</blockquote>
|
<p>Using pandas:</p>
<pre><code>import pandas as pd
groups = [[23,135,3], [123,500,1]]
group_labels = ['views', 'orders']
# Convert data to pandas DataFrame.
df = pd.DataFrame(groups, index=group_labels).T
# Plot.
pd.concat(
[df.mean().rename('average'), df.min().rename('min'),
df.max().rename('max')],
axis=1).plot.bar()
</code></pre>
<p><a href="https://i.stack.imgur.com/4WzbH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4WzbH.png" alt="Result plot"></a></p>
|
python|pandas|matplotlib
| 34
|
376,178
| 40,555,477
|
Unable to install GPU enabled TensorFlow
|
<p>To install TensorFlow with GPU on an Ubuntu system, I installed CUDA v 8.0 using "cuda-repo-ubuntu1404_8.0.44-1_amd64.deb" and cuDNN using "cudnn-8.0-linux-x64-v5.1", however, on uncompressing the file and copying them into CUDA toolkit the following files are added to the /usr/local/cuda/lib64 folder:</p>
<pre><code>libcudnn.so
libcudnn.so.5
libcudnn.so.5.1.5
libcudnn_static.a
</code></pre>
<p>The following are the environment variables in ~/.profile file</p>
<pre><code>LD_LIBRARY_PATH=/usr/local/cuda/lib64
CUDA_PATH=/usr/local/cuda
</code></pre>
<p>On running the ./configure command inside the tensorflow folder the following error is displayed:</p>
<pre><code>ubuntu@ip-172-31-20-185:~/tensorflow$ ./configure
Please specify the location of python. [Default is /usr/bin/python]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with GPU support? [y/N] y
GPU support will be enabled for TensorFlow
Please specify which gcc nvcc should use as the host compiler. [Default is /usr/bin/gcc]:
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Invalid path to CUDA 8.0 toolkit. /usr/local/cuda/lib64/libcudart.so.8.0 cannot be found
</code></pre>
<p>Am I missing any steps? Any help is appreciated.</p>
|
<p>This might be because of multiple versions of CUDA installed on your system.
Remove all the CUDA versions using <code>sudo apt-get purge cuda</code>,
then install CUDA 8.0.</p>
<p>Download the CUDA toolkit from this <a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux" rel="nofollow noreferrer">link</a>. Install local deb file.</p>
<p>After that Run</p>
<pre><code>sudo apt-get update
sudo apt-get install cuda
</code></pre>
|
tensorflow|gpu
| 0
|
376,179
| 18,475,651
|
Download future price series from Yahoo! with Pandas
|
<p>That's strange, I have been unable to download future price series from Yahoo! with pandas.</p>
<p>Take this snippet which is supposed to download prices for CBoT corn Sept-13 :</p>
<pre><code>import pandas.io.data as fetch
ts = fetch.get_data_yahoo('CU13.CBT', '8/8/2013', '10/8/2013')
print(ts)
</code></pre>
<p>I get a weblink error message:</p>
<pre><code>urllib.error.HTTPError: HTTP Error 404: Not Found
</code></pre>
<p>I have tried other underlying (metals, livestock ...), and different maturities but that just doesn't work. I have also tried tricks such as removing the .CBT part but with no success</p>
|
<p>It is not a pandas problem; historical data for <strong>CU13.CBT</strong> is not available. You can check that <a href="http://finance.yahoo.com/q?s=CU13.CBT&ql=1" rel="nofollow noreferrer">here</a>: you won't find the link to historical prices (compare with <a href="http://finance.yahoo.com/q?s=F&ql=1" rel="nofollow noreferrer">this</a>). </p>
<p><img src="https://i.stack.imgur.com/Qsbts.png" alt="Yahoo historical prices"></p>
<p>Try with another symbol and it should work. Example:</p>
<pre><code>>>> import pandas.io.data as web
>>> start = datetime.datetime(2013, 8, 8)
>>> end = datetime.datetime(2013, 8, 10)
>>> f = web.DataReader("F", 'yahoo', start, end)
>>> f
Open High Low Close Volume Adj Close
Date
2013-08-08 16.94 17.03 16.87 16.98 26589500 16.98
2013-08-09 16.95 17.11 16.94 17.02 25625300 17.02
</code></pre>
|
python|pandas|yahoo|finance
| 4
|
376,180
| 18,334,121
|
Numpy array: concatenate arrays and integers
|
<p>In my Python program I concatenate several integers and an array. It would be intuitive if this would work:</p>
<pre><code>x,y,z = 1,2,np.array([3,3,3])
np.concatenate((x,y,z))
</code></pre>
<p>However, instead all ints have to be converted to np.arrays:</p>
<pre><code>x,y,z = 1,2,np.array([3,3,3])
np.concatenate((np.array([x]),np.array([y]),z))
</code></pre>
<p>Especially if you have many variables this manual converting is tedious. The problem is that x and y are 0-dimensional arrays, while z is 1-dimensional. Is there any way to do the concatenation without the converting? </p>
|
<p>They just have to be sequence objects, not necessarily numpy arrays:</p>
<pre><code>x,y,z = 1,2,np.array([3,3,3])
np.concatenate(([x],[y],z))
# array([1, 2, 3, 3, 3])
</code></pre>
<p>Numpy also does have an <code>insert</code> function that will do this:</p>
<pre><code>x,y,z = 1,2,np.array([3,3,3])
np.insert(z, [0,0], [x, y])
</code></pre>
<p>I'll add that if you're just trying to add integers to a list, you don't need numpy to do it:</p>
<pre><code>x,y,z = 1,2,[3,3,3]
z = [x] + [y] + z
</code></pre>
<p>or</p>
<pre><code>x,y,z = 1,2,[3,3,3]
[x, y] + z
</code></pre>
<p>or</p>
<pre><code>x,y,z = 1,2,[3,3,3]
z.insert(0, y)
z.insert(0, x)
</code></pre>
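<p>A hedged extra option on the NumPy side: <code>np.r_</code> accepts a mix of scalars and arrays directly:</p>
<pre><code>x,y,z = 1,2,np.array([3,3,3])
np.r_[x, y, z]
# array([1, 2, 3, 3, 3])
</code></pre>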
|
python|arrays|numpy|integer|concatenation
| 6
|
376,181
| 18,722,296
|
How to count the frequency of an element in an ndarray
|
<p>I have a numpy ndarray of strings and want to find out how often a certain word appears in the array. I found out this solution:</p>
<pre><code>letters = numpy.array([["a","b"],["c","a"]])
print (numpy.count_nonzero(letters=="a"))
</code></pre>
<blockquote>
<p>-->2</p>
</blockquote>
<p>I'm just wondering if I solved this problem in an unnecessarily complicated way or if this is the simplest solution, because for lists there is a simple <code>.count()</code>.</p>
|
<p>You can also use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html" rel="noreferrer"><code>sum</code></a>:</p>
<pre><code>>>> letters = numpy.array([["a","b"],["c","a"]])
>>> (letters == 'a').sum()
2
>>> numpy.sum(letters == 'a')
2
</code></pre>
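<p>If you need the counts of every element at once, a hedged sketch with <code>np.unique</code> (the <code>return_counts</code> argument exists in reasonably recent NumPy versions):</p>
<pre><code>>>> values, counts = numpy.unique(letters, return_counts=True)
>>> dict(zip(values, counts))
{'a': 2, 'b': 1, 'c': 1}
</code></pre>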
|
python|numpy|count|element|multidimensional-array
| 5
|
376,182
| 18,287,806
|
Numpy way to determine the value(s) in an array which is causing a high variance
|
<p>Is there a numpy way to determine the value(s) in an array which is causing a high variance?</p>
<p>Consider the set of numbers</p>
<pre><code>array([164, 202, 164, 164, 164, 166], dtype=uint16)
</code></pre>
<p>A quick scan reveals that 202 causes the high variance; if I removed it from the list, the variance would drop considerably:</p>
<pre><code>>>> np.var(np.array([164, 202, 164, 164, 164, 166]))
196.88888888888886
</code></pre>
<p>and removing 202 from the above list would reduce the variance considerably</p>
<pre><code>>>> np.var(np.array([164, 164, 164, 164, 166]))
0.64000000000000012
</code></pre>
<p>But, how to determine the offending value?</p>
|
<p>Suppose this is your data:</p>
<pre><code>In [19]: import numpy as np
In [167]: x = np.array([164, 202, 164, 164, 164, 166], dtype=np.uint16)
</code></pre>
<p>Here is a boolean array indicating which values in <code>x</code> are more than 1 standard deviation away from the mean:</p>
<pre><code>In [170]: abs(x-x.mean()) > x.std()
Out[170]: array([False, True, False, False, False, False], dtype=bool)
</code></pre>
<p>We can use the boolean array as a so-called "fancy index" to retrieve the values which are more than 1 standard deviation away from the mean:</p>
<pre><code>In [171]: x[abs(x-x.mean()) > x.std()]
Out[171]: array([202], dtype=uint16)
</code></pre>
<p>Or, reverse the inequality to get the data with the "outliers" removed:</p>
<pre><code>In [172]: x[abs(x-x.mean()) <= x.std()]
Out[172]: array([164, 164, 164, 164, 166], dtype=uint16)
</code></pre>
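<p>Tying this back to the question, a quick check (values as computed in the question) confirms that the flagged value is what inflates the variance:</p>
<pre><code>In [173]: np.var(x)
Out[173]: 196.88888888888886

In [174]: np.var(x[abs(x - x.mean()) <= x.std()])
Out[174]: 0.64000000000000012
</code></pre>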
|
python|numpy
| 5
|
376,183
| 18,290,123
|
disable index pandas data frame
|
<p>How can I drop or disable the indices in a pandas Data Frame?</p>
<p>I am learning pandas from the book "Python for Data Analysis" and I already know I can use <code>dataframe.drop</code> to drop one column or one row. But I did not find anything about disabling all the indices in place.</p>
|
<p><code>df.values</code> gives you the raw NumPy <code>ndarray</code> without the indexes.</p>
<pre><code>>>> df
x y
0 4 GE
1 1 RE
2 1 AE
3 4 CD
>>> df.values
array([[4, 'GE'],
[1, 'RE'],
[1, 'AE'],
[4, 'CD']], dtype=object)
</code></pre>
<p>You cannot have a DataFrame without the indexes, they are the whole point of the DataFrame :)</p>
<p>But just to be clear, this operation is not <em>inplace</em>:</p>
<pre><code>>>> df.values is df.values
False
</code></pre>
<p>DataFrame keeps the data in two dimensional arrays grouped by type, so when you want the whole data frame it will have to find the LCD of all the dtypes and construct a 2D array of that type.</p>
<p>To instantiate a new data frame with the values from the old one, just pass the old DataFrame to the new one's constructor; no data will be copied and the same data structures will be reused:</p>
<pre><code>>>> df1 = pd.DataFrame([[1, 2], [3, 4]])
>>> df2 = pd.DataFrame(df1)
>>> df2.iloc[0,0] = 42
>>> df1
0 1
0 42 2
1 3 4
</code></pre>
<p>But you can explicitly specify the <code>copy</code> parameter:</p>
<pre><code>>>> df1 = pd.DataFrame([[1, 2], [3, 4]])
>>> df2 = pd.DataFrame(df1, copy=True)
>>> df2.iloc[0,0] = 42
>>> df1
0 1
0 1 2
1 3 4
</code></pre>
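<p>If the goal is only to hide the index when printing or exporting, a couple of hedged options that keep the DataFrame intact:</p>
<pre><code>print(df.to_string(index=False))   # plain-text output without the index
df.to_csv('out.csv', index=False)  # CSV output without the index
</code></pre>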
|
python|pandas
| 19
|
376,184
| 61,994,759
|
ResNet152 - High Training Accuracy but Failed to Classify Binary Labels
|
<p>I am working on the Skin Cancer Images available in <a href="https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign" rel="nofollow noreferrer">Kaggle</a> for my mini-project. I am trying to use different CNN models for comparison. Both VGG16 and VGG19 work on the data and yield acceptable results with >90% of accuracy on training, validation data, and around 85% on testing data. </p>
<p>However, it appears ResNet50/152 overfits the data, as it can also produce >90% accuracy on training data but fails on validation/testing data (all validation/testing images are classified as 1/0). I have tried image augmentation and dropout but neither works for me. I would appreciate any comments on the following block of code, thanks so much!</p>
<pre><code>IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANNELS = 3
train_data, valid_data, train_label, valid_label = train_test_split(trainval_data, trainval_label, test_size=0.05, random_state=999)
train_label = to_categorical(train_label)
valid_label = to_categorical(valid_label)
test_label = to_categorical(test_label)
train_array = np.zeros((len(train_data), IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS))
test_array = np.zeros((len(test_data), IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS))
valid_array = np.zeros((len(valid_data), IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS))
for i in range(len(train_data)):
image = load_img(train_data[i], target_size=(224, 224))
train_array[i] = img_to_array(image)
for i in range(len(test_data)):
image = load_img(test_data[i], target_size=(224, 224))
test_array[i] = img_to_array(image)
for i in range(len(valid_data)):
image = load_img(valid_data[i], target_size=(224, 224))
valid_array[i] = img_to_array(image)
train_array = train_array/255.0
test_array = test_array/255.0
valid_array = valid_array/255.0
def img_transfer(image):
image = image - image.mean()
return image
# data pre-processing for training
train_datagen = ImageDataGenerator(
rotation_range = 20,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
fill_mode = 'nearest',
horizontal_flip = True,
preprocessing_function=img_transfer)
# data pre-processing for validation
validate_datagen = ImageDataGenerator(
rotation_range = 20,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
fill_mode = 'nearest',
horizontal_flip = True,
preprocessing_function=img_transfer)
test_datagen = ImageDataGenerator(
preprocessing_function=img_transfer)
train_datagen.fit(train_array, augment=True, seed=8021)
train_generator = train_datagen.flow(train_array, train_label, shuffle=True, seed = 8021)
validate_datagen.fit(valid_array, augment=True, seed=8021)
val_generator = validate_datagen.flow(valid_array, valid_label, shuffle=True, seed = 8021)
resnet152model = ResNet152(include_top=False, classes=2, input_shape = (224,224,3))
#print(vgg16model.summary())
for layer in resnet152model.layers:
layer.trainable = False
x = resnet152model.output
x = Flatten()(x)
x = Dense(512, activation="relu")(x)
x = Dense(256, activation="relu")(x)
predictions = Dense(2, activation="softmax")(x)
resnet152model = Model(inputs=resnet152model.input,outputs=predictions)
earlystop = EarlyStopping(patience=10)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
patience=5,
verbose=1,
factor=0.5,
min_lr=0.00001)
filepath="weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [earlystop, checkpoint, learning_rate_reduction]
resnet152model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history1 = resnet152model.fit_generator(train_generator, validation_data=val_generator,
epochs=30, verbose=1, callbacks=callbacks_list)
Epoch 1/30
79/79 [==============================] - 65s 819ms/step - loss: 3.4226 - accuracy: 0.7673 - val_loss: 0.5739 - val_accuracy: 0.6818
Epoch 00001: val_accuracy improved from -inf to 0.68182, saving model to weights-improvement-01-0.68.hdf5
Epoch 2/30
79/79 [==============================] - 44s 559ms/step - loss: 0.7746 - accuracy: 0.8092 - val_loss: 0.3414 - val_accuracy: 0.6818
Epoch 00002: val_accuracy did not improve from 0.68182
Epoch 3/30
79/79 [==============================] - 44s 559ms/step - loss: 0.4426 - accuracy: 0.8407 - val_loss: 0.7188 - val_accuracy: 0.6818
Epoch 00003: val_accuracy did not improve from 0.68182
Epoch 4/30
79/79 [==============================] - 44s 560ms/step - loss: 0.4133 - accuracy: 0.8415 - val_loss: 0.5881 - val_accuracy: 0.6818
Epoch 00004: val_accuracy did not improve from 0.68182
Epoch 5/30
79/79 [==============================] - 44s 558ms/step - loss: 0.3836 - accuracy: 0.8595 - val_loss: 1.2216 - val_accuracy: 0.3182
Epoch 00005: val_accuracy did not improve from 0.68182
Epoch 6/30
79/79 [==============================] - 44s 558ms/step - loss: 0.3961 - accuracy: 0.8551 - val_loss: 1.0454 - val_accuracy: 0.3182
Epoch 00006: val_accuracy did not improve from 0.68182
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
Epoch 7/30
79/79 [==============================] - 44s 558ms/step - loss: 0.3074 - accuracy: 0.8719 - val_loss: 0.9247 - val_accuracy: 0.3182
</code></pre>
|
<p>I think the problem, i.e. why VGG works while ResNet doesn't, is caused by the keras <code>BatchNormalization</code> layer. Long story short: because of the domain gap between the ImageNet dataset and your own dataset, the pretrained BatchNormalization parameters don't reflect the actual batch statistics of your new dataset. </p>
<p>Therefore, here are some options:</p>
<ul>
<li><p>Option 1: fast training, but might be slightly worse performance</p>
<ul>
<li>freeze all feature extraction layers of your ResNet model</li>
<li>only train your classification layer</li>
</ul></li>
<li><p>Option 2: slightly slow training, but might be better performance</p>
<ul>
<li>build a customized ResNet -- everything is the same as the original ResNet, except for those <code>BatchNormalization</code> layers. </li>
<li>load a pretrained ResNet in this customized one</li>
<li>train the customized network instead.</li>
<li>More precisely, you should call <code>BatchNormalization</code> layer as below, where <code>training=False</code> (read the keras doc carefully, <a href="https://keras.io/api/layers/normalization_layers/batch_normalization/" rel="nofollow noreferrer">https://keras.io/api/layers/normalization_layers/batch_normalization/</a>)</li>
</ul></li>
</ul>
<pre><code>f = BatchNormalization(...)(x, training=False)
</code></pre>
<p>Note: both options do one common thing -- disable the updating of the <code>BatchNormalization</code> parameters during finetuning. Test it and see whether it works. </p>
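<p>For option 1, a minimal sketch in TF2-style Keras (assuming <code>tf.keras.applications.ResNet152</code> with ImageNet weights is available); calling the frozen base with <code>training=False</code> keeps the BatchNormalization layers in inference mode:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet152(include_top=False, weights='imagenet',
                                        input_shape=(224, 224, 3))
base.trainable = False                      # freeze all feature-extraction layers

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)            # BatchNormalization uses its stored statistics
x = layers.GlobalAveragePooling2D()(x)      # smaller head than Flatten; swap back if you prefer
x = layers.Dense(512, activation='relu')(x)
outputs = layers.Dense(2, activation='softmax')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>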
|
python|tensorflow|keras|resnet|vgg-net
| 1
|
376,185
| 61,658,267
|
How to map data within Pandas DataFrame w.r.t index and column from another DataFrame
|
<p>Let's say I have two DataFrames as below :</p>
<p><strong>DF1:</strong></p>
<pre><code>from datetime import date, timedelta
import pandas as pd
import numpy as np
sdate = date(2019,1,1) # start date
edate = date(2019,1,7) # end date
required_dates = pd.date_range(sdate,edate-timedelta(days=1),freq='d')
# initialize list of lists
data = [['2019-01-01', 1001], ['2019-01-03', 1121] ,['2019-01-02', 1500],
['2019-01-02', 1400],['2019-01-04', 1501],['2019-01-01', 1200],
['2019-01-04', 1201],['2019-01-04', 1551],['2019-01-05', 1400]]
# Create the pandas DataFrame
df1 = pd.DataFrame(data, columns = ['OnlyDate', 'TBID'])
df1.sort_values(by='OnlyDate',inplace=True)
df1
OnlyDate TBID
0 2019-01-01 1001
5 2019-01-01 1200
2 2019-01-02 1500
3 2019-01-02 1400
1 2019-01-03 1121
4 2019-01-04 1501
6 2019-01-04 1201
7 2019-01-04 1551
8 2019-01-05 1400
</code></pre>
<p><strong>DF2 :</strong></p>
<pre><code>df2=pd.DataFrame(columns=[sorted(df1['TBID'].unique())],index=required_dates)
df2
1001 1121 1200 1201 1400 1500 1501 1551
2019-01-01 NaN NaN NaN NaN NaN NaN NaN NaN
2019-01-02 NaN NaN NaN NaN NaN NaN NaN NaN
2019-01-03 NaN NaN NaN NaN NaN NaN NaN NaN
2019-01-04 NaN NaN NaN NaN NaN NaN NaN NaN
2019-01-05 NaN NaN NaN NaN NaN NaN NaN NaN
2019-01-06 NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>What I am trying to do is set (True or 1) in this df3 DataFrame w.r.t. the values from df1, like the output below:</p>
<pre><code>df3 =df2.copy()
for index, row in df1.iterrows():
df3.loc[row['OnlyDate'],row['TBID']] = 1
df3.fillna(0, inplace=True)
df3
1001 1121 1200 1201 1400 1500 1501 1551
2019-01-01 1 0 1 0 0 0 0 0
2019-01-02 0 0 0 0 1 1 0 0
2019-01-03 0 1 0 0 0 0 0 0
2019-01-04 0 0 0 1 0 0 1 1
2019-01-05 0 0 0 0 1 0 0 0
2019-01-06 0 0 0 0 0 0 0 0
</code></pre>
<p>Is there any better way for doing this?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a> with <code>max</code> for 0/1 indicators, or <code>sum</code> if you want counts:</p>
<pre><code>df = pd.get_dummies(df1.set_index('OnlyDate')['TBID']).max(level=0)
print (df)
1001 1121 1200 1201 1400 1500 1501 1551
OnlyDate
2019-01-01 1 0 1 0 0 0 0 0
2019-01-02 0 0 0 0 1 1 0 0
2019-01-03 0 1 0 0 0 0 0 0
2019-01-04 0 0 0 1 0 0 1 1
2019-01-05 0 0 0 0 1 0 0 0
</code></pre>
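<p>A hedged alternative with <code>crosstab</code> (dates that never appear in <code>df1</code>, such as 2019-01-06, would need an extra <code>reindex</code> against <code>required_dates</code> to show up as all-zero rows):</p>
<pre><code>out = pd.crosstab(df1['OnlyDate'], df1['TBID']).clip(upper=1)
</code></pre>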
|
pandas|dataframe|dictionary|time-series
| 1
|
376,186
| 61,935,177
|
value of years on X axis is not displaying correctly
|
<p>The value of years on the X axis is not displaying correctly. It is displaying years in two parts, but I want it as a single value.</p>
<p>here is what I have tried<a href="https://i.stack.imgur.com/QywOL.jpg" rel="nofollow noreferrer">pandas</a></p>
<p><a href="https://i.stack.imgur.com/M2cpl.jpg" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You can convert the year to a datetime format and plot, e.g.:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
data = pd.DataFrame({"year" : [2010,2011,2012,2013,2014],
"count" :[1000,2200,3890,5600,8000] })
data["year"] = pd.to_datetime(data["year"].astype(str), format="%Y")
ax = data.plot(x="year",y="count")
plt.show()
</code></pre>
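<p>If you would rather keep the year column as plain integers, a hedged alternative is to force whole-number ticks on the x axis:</p>
<pre><code>from matplotlib.ticker import MaxNLocator

ax = data.plot(x="year", y="count")          # year left as an integer column
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.show()
</code></pre>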
|
python-3.x|pandas
| 0
|
376,187
| 61,772,146
|
How to convert a dataframe into datetime format
|
<p>I have this dataframe: </p>
<pre><code>3_21_19_59
1
4
22
25
28
31
34
37
.
.
.
.
</code></pre>
<p>It has 410 rows. </p>
<p>Here in <code>3_21_19_59</code>: <code>3</code> indicates month, <code>21</code> indicates date, <code>19</code> is hours and <code>59</code> is minutes. The numbers in the rows below that: <code>1</code>, <code>4</code>, <code>22</code>... are the seconds. </p>
<p>Now, I want to convert this dataframe into a datetime format like this: </p>
<pre><code>2020-03-21 19:59:00
2020-03-21 19:59:01
2020-03-21 19:59:04
2020-03-21 19:59:22
2020-03-21 19:59:25
2020-03-21 19:59:28
...
...
...
</code></pre>
<p>and so on. And after 60 seconds, the minutes should be automatically incremented. For example: If it's 64 seconds, it should be like <code>2020-03-21 19:60:04</code>. </p>
<p>Any help would be appreciated. </p>
|
<p>First convert the datetimes with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> using a format and the <code>errors='coerce'</code> parameter, so values that don't match the format become missing. Then forward fill them to repeat the <code>datetimes</code>.</p>
<p>Then process the <code>seconds</code>: first convert to numeric with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html" rel="nofollow noreferrer"><code>to_numeric</code></a>, then to timedeltas with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>, and finally add them to the datetimes:</p>
<pre><code>print (df)
col
0 3_21_19_59
1 1
2 4
3 22
4 25
5 28
6 31
7 34
8 37
d = pd.to_datetime('20_' + df['col'], format='%y_%m_%d_%H_%M', errors='coerce').ffill()
td = pd.to_numeric(df['col'], errors='coerce').fillna(0)
df['col'] = d.add(pd.to_timedelta(td, unit='s'))
print (df)
col
0 2020-03-21 19:59:00
1 2020-03-21 19:59:01
2 2020-03-21 19:59:04
3 2020-03-21 19:59:22
4 2020-03-21 19:59:25
5 2020-03-21 19:59:28
6 2020-03-21 19:59:31
7 2020-03-21 19:59:34
8 2020-03-21 19:59:37
</code></pre>
|
python-3.x|pandas|data-science|datetime-format
| 1
|
376,188
| 61,709,168
|
How to map dictionary values to data frame column which has values as lists
|
<p>I have a data frame as:</p>
<pre><code>df = pd.DataFrame(
{'title':['a1','a2','a3','a4','a5'],
'genre_name':[
['family', 'animation'],
['action', 'family', 'comedy'],
['family', 'comedy'],
['horror','action'],
['family', 'animation','comedy']]}
)
df
title genre_name
0 a1 ['family', 'animation']
1 a2 ['action', 'family', 'comedy']
2 a3 ['family', 'comedy']
3 a4 ['horror','action]
4 a5 ['family', 'animation','comedy']
</code></pre>
<p>I have dictionary as:</p>
<pre><code>dict={'1':'family','2':'animation','3':'action','4':'comedy','5':'horror'}
</code></pre>
<p>I want to create a new column called as 'genre_ids' which will map all genre_names to the keys in the dictionary 'dict'.</p>
<p>the required df is:</p>
<pre><code>df
title genre_name genre_ids
0 a1 ['family', 'animation'] [1,2]
1 a2 ['action', 'family', 'comedy'] [3,1,4]
2 a3 ['family', 'comedy'] [1,4]
3 a4 ['horror','action] [5,3]
4 a5 ['family', 'animation','comedy'] [1,2,4]
</code></pre>
<p>How can i achieve this?</p>
|
<p>Change the dictionary name from <code>dict</code> to another variable, because <code>dict</code> is a Python builtin and should not be shadowed; then swap keys with values and map the values in a list comprehension:</p>
<pre><code>d={'1':'family','2':'animation','3':'action','4':'comedy','5':'horror'}
d1 = {v:k for k, v in d.items()}
df['genre_ids'] = df['genre_name'].apply(lambda x: [d1.get(y) for y in x])
#alternative
#df['genre_ids'] = [[d1.get(y) for y in x] for x in df['genre_name']]
print (df)
title genre_name genre_ids
0 a1 [family, animation] [1, 2]
1 a2 [action, family, comedy] [3, 1, 4]
2 a3 [family, comedy] [1, 4]
3 a4 [horror, action] [5, 3]
4 a5 [family, animation, comedy] [1, 2, 4]
</code></pre>
<p>EDIT: You can also specify what happens when there is no match; here <code>crime</code> is added to the first list:</p>
<pre><code>df = pd.DataFrame({'title':['a1','a2','a3','a4','a5'],
'genre_name':[['crime', 'animation'],['action', 'family', 'comedy'],
['family', 'comedy'],['horror','action'],
['family', 'animation','comedy']]})
d={'1':'family','2':'animation','3':'action','4':'comedy','5':'horror'}
d1 = {v:k for k, v in d.items()}
#unmatched values replaced with None
df['genre_ids0'] = df['genre_name'].apply(lambda x: [d1.get(y) for y in x])
#unmatched values replaced with a default value
df['genre_ids1'] = df['genre_name'].apply(lambda x: [d1.get(y, 0) for y in x])
#unmatched values removed
df['genre_ids2'] = df['genre_name'].apply(lambda x: [d1[y] for y in x if y in d1])
print (df)
title genre_name genre_ids0 genre_ids1 genre_ids2
0 a1 [crime, animation] [None, 2] [0, 2] [2]
1 a2 [action, family, comedy] [3, 1, 4] [3, 1, 4] [3, 1, 4]
2 a3 [family, comedy] [1, 4] [1, 4] [1, 4]
3 a4 [horror, action] [5, 3] [5, 3] [5, 3]
4 a5 [family, animation, comedy] [1, 2, 4] [1, 2, 4] [1, 2, 4]
</code></pre>
|
python|pandas|dataframe|dictionary
| 9
|
376,189
| 61,657,519
|
Dataframe with two index columns - reset to one index column
|
<p>I have a data-frame <code>df1</code> that looks like:</p>
<pre><code> col2 col3
date dept
2020-05-07 A 29 21
2020-05-08 B 56 12
2020-05-09 C 82 15
2020-05-10 D 13 9
2020-05-11 E 35 13
2020-05-12 F 53 87
2020-05-13 G 25 9
2020-05-14 H 23 63
</code></pre>
<p>the data-frame has two index columns (<code>date</code> and <code>dept</code>). How can I change the data-frame so that it is only indexed by <code>date</code>? So my desired output looks like:</p>
<pre><code> dept col2 col3
date
2020-05-07 A 29 21
2020-05-08 B 56 12
2020-05-09 C 82 15
2020-05-10 D 13 9
2020-05-11 E 35 13
2020-05-12 F 53 87
2020-05-13 G 25 9
2020-05-14 H 23 63
</code></pre>
<p>I have tried to use:</p>
<pre><code>df1 = df1.reset_index('date')
</code></pre>
<p>without success.</p>
|
<p>Here it is necessary to select which column(s) or position(s) to convert to columns:</p>
<pre><code>#convert dept to columns
df1 = df1.reset_index(level='dept')
#convert date to columns
#df1 = df1.reset_index('date')
</code></pre>
<p>Or:</p>
<pre><code>df1 = df1.reset_index(level=1)
</code></pre>
|
python|pandas
| 1
|
376,190
| 62,032,932
|
How to split pandas record with one large timedelta into multiple records with smaller ones?
|
<p>I have a dataframe with 3 columns: the timedelta (duration) of a time slot, the datetime of the slot start, and the datetime of when the record was created. The timedeltas are all multiples of 15 minutes:</p>
<pre><code>Index duration slot_start creation_time
1. 15 minutes some datetime 1 some datetime 3
2. 45 minutes some datetime 2 some datetime 4
</code></pre>
<p>What I want to achieve is:</p>
<pre><code>Index duration slot_start creation_time
1. 15 minutes some datetime 1 some datetime 3
2. 15 minutes some datetime 2 some datetime 4
3. 15 minutes some datetime 2 + 15 minutes some datetime 4
4. 15 minutes some datetime 2 + 30 minutes some datetime 4
</code></pre>
<p>Is there any tool for such operation? How to achieve it easily and time efficiently on very large dataframes?</p>
|
<p>Try this: divide each duration by a 15-minute unit, expand every row into that many offsets with <code>explode</code>, then shift <code>slot_start</code> by the offset:</p>
<pre><code>import numpy as np
import pandas as pd

unit = pd.Timedelta(minutes=15)
# one offset (0, 15, 30, ... minutes) per 15-minute slot in each row
s = pd.to_timedelta(df['duration']).div(unit) \
    .apply(lambda n: unit * np.arange(n)) \
    .rename('offset') \
    .explode()
df = df.join(s)
df['slot_start'] = df['slot_start'] + df['offset']
df['duration'] = unit  # every expanded row now covers exactly 15 minutes
</code></pre>
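<p>On very large frames, an alternative sketch that avoids <code>apply</code> (assuming the index of <code>df</code> is unique) is to repeat each row with <code>Index.repeat</code> and derive the offsets from a cumulative count:</p>
<pre><code># assumes a unique index on df and the same 15-minute `unit` as above
n = pd.to_timedelta(df['duration']).div(unit).astype(int)
out = df.loc[df.index.repeat(n)].copy()
# cumcount gives 0, 1, 2, ... within each original row's group of copies
out['slot_start'] += unit * out.groupby(level=0).cumcount()
out['duration'] = unit
</code></pre>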
|
python|pandas
| 1
|
376,191
| 61,614,935
|
Trying to extract data from JSON URL into Pandas
|
<p>I am trying to extract data from a JSON URL into pandas but this file has multiple "layers" of lists and dictionaries which i just cannot seem to navigate.</p>
<pre><code>import json
from urllib.request import urlopen
with urlopen('https://statdata.pgatour.com/r/010/2020/player_stats.json') as response:
source = response.read()
data = json.loads(source)
for item in data['tournament']['players']:
pid = item['pid']
statId = item['stats']['statId']
name = item['stats']['name']
tValue = item['stats']['tValue']
print(pid, statId, name, tValue)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-84-eadd8bdb34cb> in <module>
1 for item in data['tournament']['players']:
2 player_id = item['pid']
----> 3 stat_id = item['stats']['statId']
4 stat_name = item['stats']['name']
5 stat_value = item['stats']['tValue']
TypeError: list indices must be integers or slices, not str
</code></pre>
<p>The output i am trying to get to is like :-</p>
<p><a href="https://i.stack.imgur.com/xXuEJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xXuEJ.png" alt="enter image description here"></a></p>
|
<p>You are missing a layer.</p>
<p>To simplify the data, we are trying to access:</p>
<pre><code>"stats": [{
"statId":"106",
"name":"Eagles",
"tValue":"0",
}]
</code></pre>
<p>The data of 'stats' starts with <code>[{</code>. This is a dictionary within an array. </p>
<p>I <em>think</em> this should work:</p>
<pre><code>for item in data['tournament']['players']:
pid = item['pid']
for stat in item['stats']:
statId = stat['statId']
name = stat['name']
tValue = stat['tValue']
print(pid, statId, name, tValue)
</code></pre>
<p>To read more on dictionaries: <a href="https://realpython.com/iterate-through-dictionary-python/" rel="nofollow noreferrer">https://realpython.com/iterate-through-dictionary-python/</a></p>
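<p>If the goal is to end up with a pandas DataFrame like in the screenshot, a minimal sketch (the column names here are simply the JSON keys) is to collect the rows into a list of dicts and build the frame at the end:</p>
<pre><code>import pandas as pd

rows = []
for item in data['tournament']['players']:
    for stat in item['stats']:
        rows.append({'pid': item['pid'],
                     'statId': stat['statId'],
                     'name': stat['name'],
                     'tValue': stat.get('tValue')})  # .get in case a stat has no tValue

stats_df = pd.DataFrame(rows)
</code></pre>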
|
python|arrays|json|pandas
| 1
|
376,192
| 61,631,553
|
How to handle memory errors with adjacency matrix?
|
<p>I am doing graph clustering with python. The algorithm requires that the data passed from graph <code>G</code> should be adjacency-matrix. However, in order to get <code>adjacency-matrix</code> as <code>numpy-array</code> like this:</p>
<pre><code>import networkx as nx
matrix = nx.to_numpy_matrix(G)
</code></pre>
<p>I get a memory error. The message is <code>MemoryError: Unable to allocate 2.70 TiB for an array with shape (609627, 609627) and data type float64</code></p>
<p>However, my device is new (Lenovo E490), windows 64 bit, memory 8 Gb</p>
<p>Other important information could be:</p>
<pre><code>Number of nodes: 609627
Number of edges: 915549
</code></pre>
<h2>The entire story is as follows:</h2>
<pre><code>Graphtype = nx.Graph()
G = nx.from_pandas_edgelist(df, 'source','target', edge_attr='weight', create_using=Graphtype)
</code></pre>
<h3>Markov Clustering</h3>
<pre><code>import markov_clustering as mc
import networkx as nx
matrix = nx.to_scipy_sparse_matrix(G) # build the matrix
result = mc.run_mcl(matrix) # run MCL with default parameters
MemoryError
</code></pre>
<p><a href="https://i.stack.imgur.com/5Or9O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Or9O.png" alt="enter image description here"></a></p>
|
<p>The matrix you are trying to create has size <code>609627x609627</code> with dtype float64. With each float64 using 8 bytes of memory, you would need <code>609627*609627*8 ~ 3TB</code> of memory. Your system has only 8GB, and even with added physical memory, 3TB is too large to work with. Assuming your node ids are integers, you could use a 4-byte dtype such as <code>uint32</code> (enough to index all <code>609627</code> nodes), but that still needs over a TB of memory, which is inaccessible. Given what you are trying to do, it seems you have a sparse matrix, and another approach to your goal is probably possible. The dense adjacency matrix (unless compressed) seems hard to achieve.</p>
<p>Maybe you can benefit from something like:</p>
<pre><code>to_scipy_sparse_matrix(G, nodelist=None, dtype=None, weight='weight', format='csr')
</code></pre>
<p>in the <code>networkx</code> package, or rather work with the edge list directly to calculate whatever you are trying to achieve.</p>
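<p>For example, a minimal sketch of the sparse route (assuming <code>G</code> is the graph built above):</p>
<pre><code>import networkx as nx

# CSR storage grows with the ~915k edges, not with 609627**2 cells
sparse = nx.to_scipy_sparse_matrix(G, weight='weight', format='csr')
print(sparse.shape, sparse.nnz)  # dimensions and number of stored entries
</code></pre>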
|
python|pandas|numpy|cluster-analysis|networkx
| 2
|
376,193
| 61,996,588
|
Is there any way to access layers in tensorflow_hub.KerasLayer object?
|
<p>I am trying to use a pre-trained model from tensorflow hub into my object detection model. I wrapped a model from hub as a KerasLayer object following the official instruction. Then I realized that <strong>I cannot access the layers in this pre-trained model</strong>. But I need to use outputs from some specific layers to build my model. Is there any way to access layers in tensorflow_hub.KerasLayer object?</p>
|
<p>There is an undocumented way to get intermediate layers out of some TF2 SavedModels exported from TF-Slim, such as <a href="https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4" rel="nofollow noreferrer">https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4</a>: passing <code>return_endpoints=True</code> to the SavedModel's <code>__call__</code> function changes the output to a <code>dict</code>.</p>
<p>NOTE: This interface is subject to change or removal, and has known issues.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow_hub as tfhub

model = tfhub.KerasLayer('https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4',
                         trainable=False, arguments=dict(return_endpoints=True))
input = tf.keras.layers.Input((224, 224, 3))
outputs = model(input)
for k, v in sorted(outputs.items()):
print(k, v.shape)
</code></pre>
<p>Output for this example:</p>
<pre><code>InceptionV1/Conv2d_1a_7x7 (None, 112, 112, 64)
InceptionV1/Conv2d_2b_1x1 (None, 56, 56, 64)
InceptionV1/Conv2d_2c_3x3 (None, 56, 56, 192)
InceptionV1/MaxPool_2a_3x3 (None, 56, 56, 64)
InceptionV1/MaxPool_3a_3x3 (None, 28, 28, 192)
InceptionV1/MaxPool_4a_3x3 (None, 14, 14, 480)
InceptionV1/MaxPool_5a_2x2 (None, 7, 7, 832)
InceptionV1/Mixed_3b (None, 28, 28, 256)
InceptionV1/Mixed_3c (None, 28, 28, 480)
InceptionV1/Mixed_4b (None, 14, 14, 512)
InceptionV1/Mixed_4c (None, 14, 14, 512)
InceptionV1/Mixed_4d (None, 14, 14, 512)
InceptionV1/Mixed_4e (None, 14, 14, 528)
InceptionV1/Mixed_4f (None, 14, 14, 832)
InceptionV1/Mixed_5b (None, 7, 7, 832)
InceptionV1/Mixed_5c (None, 7, 7, 1024)
InceptionV1/global_pool (None, 1, 1, 1024)
default (None, 1024)
</code></pre>
<p>Issues to be aware of:</p>
<ul>
<li>Undocumented, subject to change or removal, not available consistently.</li>
<li><code>__call__</code> computes all outputs (and applies all update ops during training) irrespective of the ones being used later on.</li>
</ul>
<p>Source: <a href="https://github.com/tensorflow/hub/issues/453#issuecomment-571986326" rel="nofollow noreferrer">https://github.com/tensorflow/hub/issues/453</a></p>
|
tensorflow|keras|tensorflow-hub
| 2
|
376,194
| 61,951,626
|
Pandas: calculation of the number of days when the sum of the durations on that day was more than 30 minutes
|
<p>Here is a sample source:</p>
<pre><code>ID Date Duration
111 2020-01-01 00:42:23
111 2020-01-01 00:23:23
111 2020-01-02 00:37:22
222 2020-01-02 00:13:08
222 2020-01-03 01:52:11
....
999 2020-01-31 00:15:21
999 2020-01-31 00:52:12
</code></pre>
<p>I use pandas and want to calculate the sum of duration for each day (by Date), and then count how many days in the month have a daily duration sum > 30 min (grouped by ID).</p>
<p>Here is what I need to get:</p>
<pre><code>ID Total days when sum of duration by day from each ID > 30 min (per month)
111 2
222 1
....
999 5
</code></pre>
<p>Some like this:</p>
<pre><code> aggregation = {
'num_days': pd.NamedAgg(column="duration", aggfunc=lambda x: x.sum() > dt.timedelta(minutes=30)),
}
total_active = df.groupby('Id').agg(**aggregation)
</code></pre>
<p>But this is not at all what I need... </p>
<p>Can anyone help?</p>
|
<p>Try this: convert the durations to timedeltas, sum them per ID per day, and then count, for each ID, the days whose total exceeds 30 minutes.</p>
<pre><code>df['Duration'] = pd.to_timedelta(df['Duration'])
# total duration per ID per day
daily = df.groupby(['ID', 'Date'])['Duration'].sum()
# number of days per ID whose daily total is over 30 minutes
df_g = (daily > pd.Timedelta(minutes=30)).groupby('ID').sum().reset_index(name='days')
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer">to_timedelta</a></p>
|
python|pandas
| 0
|
376,195
| 61,817,378
|
Count regex matches in one column by values in another column with pandas
|
<p>I am working with pandas and have a dataframe that contains a list of sentences and people who said them, like this:</p>
<pre><code> sentence person
'hello world' Matt
'cake, delicious cake!' Matt
'lovely day' Maria
'i like cake' Matt
'a new day' Maria
'a new world' Maria
</code></pre>
<p>I want to count non-overlapping matches of regex strings in <code>sentence</code> (e.g. <code>cake</code>, <code>world</code>, <code>day</code>) by the <code>person</code>. Note each row of <code>sentence</code> may contain more than one match (e.g <code>cake</code>):</p>
<pre><code>person 'day' 'cake' 'world'
Matt 0 3 1
Maria 2 0 1
</code></pre>
<p>So far I am doing this: </p>
<pre><code>rows_cake = df[df['sentences'].str.contains(r"cake")]
counts_cake = rows_cake.value_counts()
</code></pre>
<p>However this <code>str.contains</code> gives me rows containing <code>cake</code>, but not individual instances of <code>cake</code>. </p>
<p>I know I can use <code>str.count(r"cake")</code> on <code>rows_cake</code>. However, in practice my dataframe is extremely large (> 10 million rows) and the regexes I am using are quite complex, so I am looking for a more efficient solution if possible.</p>
|
<p>Since this primarily involves strings, I would suggest taking the computation out of pandas - plain Python is faster than pandas in most cases when it comes to string manipulation:</p>
<pre><code>#read in data
df = pd.read_clipboard(sep='\s{2,}', engine='python')
#create a dictionary of persons and sentences :
from collections import defaultdict, ChainMap
d = defaultdict(list)
for k,v in zip(df.person, df.sentence):
d[k].append(v)
d = {k:",".join(v) for k,v in d.items()}
#search words
strings = ("cake", "world", "day")
#get count of words and create a dict
m = defaultdict(list)
for k,v in d.items():
for st in strings:
m[k].append({st:v.count(st)})
res = {k:dict(ChainMap(*v)) for k,v in m.items()}
print(res)
{'Matt': {'day': 0, 'world': 1, 'cake': 3},
'Maria': {'day': 2, 'world': 1, 'cake': 0}}
output = pd.DataFrame(res).T
day world cake
Matt 0 1 3
Maria 2 1 0
</code></pre>
<p>Test the speeds and see which one is better; it would be useful for me and others as well.</p>
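<p>For comparison, a vectorised pandas sketch: <code>Series.str.count</code> counts non-overlapping regex matches per row, so the per-person table can be built like this (note that the plain <code>str.count</code> used above matches substrings, so e.g. <code>cakes</code> would also be counted):</p>
<pre><code>import pandas as pd

patterns = ['day', 'cake', 'world']  # the regexes to count
counts = pd.DataFrame({p: df['sentence'].str.count(p) for p in patterns})
result = counts.groupby(df['person']).sum()
</code></pre>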
|
python|regex|pandas|dataframe
| 0
|
376,196
| 61,836,221
|
In which form should the input and the label data be needed into Keras fit function?
|
<p>I am trying to train a sequential classifier, with 1 input neuron, 3 output neurons. The data is in data frames <code>X</code> and <code>Y</code>, but how must I feed this data into <code>fit</code> function in <code>keras</code> library? In other words, what should be the variable type of <code>train_x</code> and <code>train_y</code> (for example, is it data frame, matrix, list, etc)?</p>
<pre><code>[...]
predictor <- keras_model_sequential() %>%
layer_dense(units = 8, activation = "relu", input_shape = c(1)) %>%
layer_dense(units = 8, activation = "relu") %>%
layer_dense(units = 3, activation = "softmax")
[...]
train_x <- X
train_y <- Y
history <- predictor %>% fit(
train_x,
train_y,
epochs = 20,
verbose = 2
)
</code></pre>
<p><strong>Edit:</strong></p>
<p>If I can use dataframe, then how should I set <code>input_shape</code>?</p>
|
<p>The variable type for <code>fit</code> should be a vector, matrix, or array.</p>
<p>As per the <a href="https://keras.rstudio.com/reference/fit.html" rel="nofollow noreferrer">documentation</a>, it states below,</p>
<blockquote>
<p>x -
Vector, matrix, or array of training data (or list if the model has multiple inputs). If all inputs in the model are named, you can also pass a list mapping input names to data. x can be NULL (default) if feeding from framework-native tensors (e.g. TensorFlow data tensors).</p>
<p>y - Vector, matrix, or array of target (label) data (or list if the
model has multiple outputs). If all outputs in the model are named,
you can also pass a list mapping output names to data. y can be NULL
(default) if feeding from framework-native tensors (e.g. TensorFlow
data tensors).</p>
</blockquote>
<p>The model needs to know what input shape to expect. For this reason, the first layer in a sequential model (and only the first, because the following layers can infer their shapes automatically) needs to receive information about its input shape. For example, you can pass a <code>batch_size</code> argument to a layer: if you pass both <code>batch_size=32</code> and <code>input_shape=c(6, 8)</code>, it will then expect every batch of inputs to have the batch shape <code>(32, 6, 8)</code>.</p>
<p>Hope this answers your question. Happy Learning.</p>
|
r|tensorflow|keras|deep-learning|neural-network
| 1
|
376,197
| 61,842,615
|
No such file or directory exists Error from server request
|
<p>My Flask app runs fine on localhost, but after deploying it to a server (I used PythonAnywhere) it got some errors.
My goal is to send the path of a file from an input field; Python takes the path and uses it to locate the file and perform some operations on the data (an Excel file). It works fine on localhost, but on the server it says no such file or directory exists.</p>
<pre><code>address=request.form['address']
file_location =address
workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
psitrnid = int(sheet.cell_value(9,4))
psiootid = int(sheet.cell_value(9,5))
goodtrnid = int(sheet.cell_value(9,7))
badtrnid = int(sheet.cell_value(9,8))
goodootid = int(sheet.cell_value(9,10))
badootid = int(sheet.cell_value(9,11))
</code></pre>
<p>The <code>file_location</code> variable holds the path of the file, and xlrd uses it to open and read it.
I don't know what is causing this error, but I want to know whether we can access a local file using xlrd or pandas from a server or cloud app.
Does the server perform the request, or does the system allow the web app to read the file given its path?</p>
|
<p>Your Flask code only has access to files that are stored on the machine where it is running; when you run it locally, it has access to files on your local machine, but if you run it on a server like PythonAnywhere, it will only have access to files that are stored on that server. If you want people to be able to specify files on their local machine and have your code process those files, you'll need to implement code to upload the files to the server. If you google for "upload file flask" you will find useful guides on how to do that.</p>
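<p>A minimal upload sketch (the route and field names here are hypothetical; adapt them to your form):</p>
<pre><code>from flask import Flask, request
import xlrd

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['address']  # <input type="file" name="address"> in the form
    # read the uploaded bytes directly instead of a path on the user's machine
    workbook = xlrd.open_workbook(file_contents=f.read())
    sheet = workbook.sheet_by_index(0)
    return str(sheet.cell_value(9, 4))
</code></pre>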
|
python|pandas|file|flask|xlrd
| 1
|
376,198
| 61,681,307
|
Loc filter and exclude null values
|
<pre><code>1. vat.loc[(vat['Sum of VAT'].isin([np.nan, 0])) &
2. (vat['Comment'] == "Transactions 0DKK") &
3. (vat['Type'].isin(['Bill', 'Bill Credit'])) &
4. (vat['Maximum of Linked Invoice'].notnull()), 'Comment'] = 'Linked invoice'
5. vat[vat["Comment"] == "Linked invoice"]
</code></pre>
<p>Hi all,</p>
<p>I have a problem with the line:</p>
<pre><code>(vat['Maximum of Linked Invoice'].notnull())
</code></pre>
<p>It does not seem to be working properly when I try to exclude all of the null values in the rows. In fact, it does not exclude the null values; instead, they are included in the output data frame. The rest of the syntax works perfectly. I have tried different syntax, but the null values from the column 'Maximum of Linked Invoice' are still included. I don't understand why it doesn't work.</p>
<p>Hi again,</p>
<p>I've done some more research, and it seems that the CSV file, when imported, had 62107 non-null values for the column 'Maximum of Linked Invoice', but this is incorrect: when opening the CSV file and checking, it did have thousands of blanks in the rows. Why have these not been read as null values when imported? Have you seen anything like this before?</p>
<p>Please see the info below</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 62108 entries, 0 to 62107
Data columns (total 35 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 External ID 62108 non-null object
1 Document Number 62107 non-null object
2 Transaction Number 62107 non-null object
3 Maximum of Linked Invoice 62107 non-null object
4 Type 62107 non-null object
5 Date 62107 non-null object
6 Period 62107 non-null object
7 Terms 62107 non-null object
8 Maximum of Due Date/Receive By 50885 non-null object
9 Company Name 62107 non-null object
10 Customer VAT Registration Number 62107 non-null object
11 Bill to City 62107 non-null object
12 Bill to State 62107 non-null object
13 Bill to Country 62107 non-null object
14 Bill to Zip 62107 non-null object
15 Source System 62107 non-null object
16 Source System Identifier 62107 non-null object
17 City 62107 non-null object
18 State/Province 62107 non-null object
19 Country 62107 non-null object
20 Zip 62107 non-null object
21 Currency 62107 non-null object
22 Memo (Main) 62107 non-null object
23 Maximum of GMAX Tax Code 24189 non-null object
24 Maximum of NetSuite Tax Item 59815 non-null object
25 Maximum of Coupa Tax Code 0 non-null float64
26 Maximum of External System Tax Code 0 non-null float64
27 Maximum of Tax Code (Consolidated) 59815 non-null object
28 FOP Type 62107 non-null object
29 Sum of Assets 60680 non-null float64
30 Sum of Accounts Payable 3741 non-null float64
31 Sum of Other Liabilities 57066 non-null float64
32 Sum of Income 60290 non-null float64
33 Sum of Expense 300 non-null float64
34 Sum of VAT 56269 non-null float64
dtypes: float64(8), object(27)
memory usage: 16.6+ MB
</code></pre>
|
<p>If anyone is reading this, I have found an answer. There is nothing wrong with my syntax; the problem lies with the CSV file itself. The reason pandas read the column 'Maximum of Linked Invoice' as 62107 non-null is that there was a space embedded in each row of that column. What I saw at first looked like blank rows, but this was inaccurate. So, I urge you to check the CSV file itself to avoid time-consuming efforts to solve these types of tricky problems.</p>
<p>And this is the solution for code line 4:</p>
<pre><code>(~vat['Maximum of Linked Invoice'].isin([np.nan, ' ']))
</code></pre>
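<p>A more general fix (a sketch; the file name is a placeholder) is to normalise whitespace-only cells to real NaN right after reading the CSV, so that <code>notnull()</code> behaves as expected:</p>
<pre><code>import numpy as np
import pandas as pd

vat = pd.read_csv('vat.csv')  # placeholder file name
# turn cells containing only whitespace into real NaN values
vat = vat.replace(r'^\s*$', np.nan, regex=True)
</code></pre>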
|
python|python-3.x|pandas|filter|pandas-loc
| 0
|
376,199
| 61,812,567
|
How to remove wrong values in the pandas dataframe?
|
<p>I have a dataframe with multiple columns, and I am interested in taking one column out of it and creating a new dataframe from that column.
My dataframe is</p>
<pre><code>category_id category_name channel_id
24 Entertainment UCv1ZjbkebUwVOJCgtstOBZQ
</code></pre>
<p>I am creating a new dataframe because I want the category_id in it, repeated to create 10k rows.</p>
<pre><code>df1 = pd.DataFrame({'category_id': [df['category_id'] for x in range(10000)]})
df1.head()
</code></pre>
<p>This creates a dataframe with 10k rows; however, the value that ends up in each row of the dataframe
is</p>
<pre><code>category_id
0 178 10 215 10 251 10 312 1...
1 178 10 215 10 251 10 312 1...
2 178 10 215 10 251 10 312 1...
3 178 10 215 10 251 10 312 1...
4 178 10 215 10 251 10 312 1...
</code></pre>
<p>which is wrong, as I wanted the values to be like</p>
<pre><code>category_id
0 10
1 10
2 10
3 10
4 10
</code></pre>
<p>I changed this dataframe by removing the list comprehension: <code>df5 = pd.DataFrame({'category_id': df4['category_id'] for x in range(10000)})
df5.head()</code>. That avoided the error, but it didn't create 10k records.
What can be done to solve this?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.repeat.html" rel="nofollow noreferrer"><code>Series.repeat</code></a>:</p>
<pre><code>print (df)
category_id category_name channel_id
0 10 Entertainment UCv1ZjbkebUwVOJCgtstOBZQ
1 24 Entertainment UCv1ZjbkebUwVOJCgtstOBZQ
</code></pre>
<hr>
<pre><code>N = 5
df5 = df['category_id'].repeat(N).reset_index(drop=True).to_frame()
print (df5)
category_id
0 10
1 10
2 10
3 10
4 10
5 24
6 24
7 24
8 24
9 24
</code></pre>
|
python|pandas|dataframe
| 1
|