| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
5,100
| 63,127,564
|
why does batch normalization make my batches so abnormal?
|
<p>I'm playing with pytorch for the first time, and I've noticed that when training my neural net, about one time in four or so the loss takes a left turn towards infinity, then nan shortly after that. I've seen a few other questions about nan-ing, but the recommendations there seem to be essentially to do normalization; but the first layer in my net below is such a normalization, and I still see this problem! The full net is <a href="https://github.com/dmwit/nurse-sveta/blob/803b0a2a17c5d82454a0765e300c440d6e5f1f5e/nsaid/nsaid/__main__.py" rel="nofollow noreferrer">a bit convoluted</a>, but I've done some debugging to try to produce a very small, understandable net that still displays the same issue.</p>
<p>The code is below; it consists of sixteen inputs, 0-1, which are passed through a batch normalization and then a fully-connected layer to a single output. I'd like it to learn the function that always outputs 1, so I take the squared error from 1 for the loss.</p>
<pre><code>import torch as t
import torch.nn as tn
import torch.optim as to
if __name__ == '__main__':
board = t.rand([1,1,1,16])
net = tn.Sequential \
( tn.BatchNorm2d(1)
, tn.Conv2d(1, 1, [1,16])
)
optimizer = to.SGD(net.parameters(), lr=0.1)
for i in range(10):
net.zero_grad()
nn_outputs = net.forward(board)
loss = t.sum((nn_outputs - 1)**2)
print(i, nn_outputs, loss)
loss.backward()
optimizer.step()
</code></pre>
<p>If you run it a few times, eventually you'll see a run that looks like this:</p>
<pre><code>0 tensor([[[[-0.7594]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(3.0953, grad_fn=<SumBackward0>)
1 tensor([[[[4.0954]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(9.5812, grad_fn=<SumBackward0>)
2 tensor([[[[5.5210]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(20.4391, grad_fn=<SumBackward0>)
3 tensor([[[[-3.4042]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(19.3966, grad_fn=<SumBackward0>)
4 tensor([[[[823.6523]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(676756.7500, grad_fn=<SumBackward0>)
5 tensor([[[[3.5471e+08]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(1.2582e+17, grad_fn=<SumBackward0>)
6 tensor([[[[2.8560e+25]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(inf, grad_fn=<SumBackward0>)
7 tensor([[[[inf]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(inf, grad_fn=<SumBackward0>)
8 tensor([[[[nan]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(nan, grad_fn=<SumBackward0>)
9 tensor([[[[nan]]]], grad_fn=<MkldnnConvolutionBackward>) tensor(nan, grad_fn=<SumBackward0>)
</code></pre>
<p>Why does my loss go to nan, and what can I do about it?</p>
|
<p>Welcome to pytorch!</p>
<p>Here is how I would set up your training. Please check the comments.</p>
<pre><code># how the community usually does the imports:
import torch  # some people do: import torch as th
import torch.nn as nn
import torch.optim as optim

if __name__ == '__main__':
    # setting some parameters:
    batch_size = 32
    n_dims = 128
    # select GPU if available
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    # initializing a simple neural net
    net = nn.Sequential(nn.Linear(n_dims, n_dims // 2),  # batch norm is not usually used directly on the input
                        nn.BatchNorm1d(n_dims // 2),  # batch norm is used before the activation function (it centers the input and helps make the dims of the previous layers independent of each other)
                        nn.ReLU(),  # the most common activation function
                        nn.Linear(n_dims // 2, 1))  # final layer
    net.to(device)  # model is copied to the GPU if it is available
    optimizer = optim.SGD(net.parameters(), lr=0.01)  # it is better to start with a low lr and increase it in later experiments to avoid training divergence; the range [1.e-6, 5.e-2] is recommended
    for i in range(10):
        # generating random data:
        board = torch.rand([batch_size, n_dims])
        # for sequences:  [batch_size, channels, L]
        # for image data: [batch_size, channels, W, H]
        # for videos:     [batch_size, channels, L, W, H]
        board = board.to(device)  # data is copied to the GPU if it is available
        optimizer.zero_grad()  # the convention the community uses, though the result is the same as net.zero_grad()
        nn_outputs = net(board)  # don't call net.forward(x), call net(x); PyTorch applies some hooks in net.__call__(x) that are useful for backpropagation
        loss = ((nn_outputs - 1)**2).mean()  # using .mean() makes your training less sensitive to the batch size
        print(i, nn_outputs, loss.item())
        loss.backward()
        optimizer.step()
</code></pre>
<p>One comment about the batch norm. Per dimension, it calculates the mean and the standard deviation of your batch (check the documentation <a href="https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d</a>):</p>
<pre><code>x_normalized = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-6) * scale + shift
</code></pre>
<p>Where scale and shift are learnable parameters. If you only give one example per batch, <code>x.std(0) = 0</code> will make <code>x_normalized</code> contain very very large values.</p>
|
python|neural-network|pytorch|batch-normalization
| 1
|
5,101
| 63,014,971
|
Create new column that is the sum of two rows, but repeat every two rows
|
<p>I am working on building an additional column in a data frame which is the sum of two rows for one Time Period. A picture is attached here:</p>
<p><a href="https://i.stack.imgur.com/oKCbD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oKCbD.png" alt="enter image description here" /></a></p>
<p>I want to create a new column which is the Sum of Lives for 'IN' and 'SA' in the 'BillType' column for each TimePeriodId. That way I will have one 'Lives Total' entry for a single TimePeriodId. I have gone through a lot of documentation and can't figure out how I would do that in this case.</p>
<p>code sample:</p>
<pre><code>sa = pd.read_sql(sa_q1, sql_conn)
#convert TimePeriodId to string values
sa['TimePeriodId'] = sa['TimePeriodId'].astype(str)
sa = sa.loc[(sa['BillType'] =='SA') | (sa['BillType']=='IN')]#.drop(['BillType'], axis = 1)
sa.head(10).to_dict()
#the last line returns the following:
{'TimePeriodId': {1: '201811',
2: '201811',
4: '201812',
5: '201812',
9: '201901',
11: '201901',
13: '201902',
14: '201902',
17: '201903',
18: '201903'},
'BillType': {1: 'IN',
2: 'SA',
4: 'IN',
5: 'SA',
9: 'SA',
11: 'IN',
13: 'IN',
14: 'SA',
17: 'IN',
18: 'SA'},
'Lives': {1: 1067,
2: 288028,
4: 1058,
5: 287501,
9: 293560,
11: 1068,
13: 1089,
14: 278850,
17: 1076,
18: 276961}}
</code></pre>
<p>Any help would be appreciated!</p>
|
<p>You can try to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>pandas.DataFrame.groupby()</code></a> method to compute sum of lives for every time period. After that you can enrich <code>sa</code> dataframe by the computed column using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html" rel="nofollow noreferrer"><code>pandas.DataFrame.transform()</code></a> method.</p>
<pre><code>>>> sa['LivesTotal'] = sa.groupby('TimePeriodId').Lives.transform('sum')
</code></pre>
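<p>On the sample above, every row then carries the total for its period, e.g. both <code>201811</code> rows get <code>1067 + 288028 = 289095</code>. If you instead want a single row per period, a plain aggregation works too (a sketch):</p>
<pre><code>>>> sa.groupby('TimePeriodId', as_index=False)['Lives'].sum()
</code></pre>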
|
python|pandas
| 1
|
5,102
| 63,149,034
|
Sometimes get the error "err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: out of memory"
|
<p>I'm very lost on how to solve my particular problem, which is why I followed the getting help guideline in the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Object Detection API</a> and made a post here on Stack Overflow.</p>
<p>To start off, my goal was to run distributed training jobs on Azure. I've previously used <code>gcloud ai-platform jobs submit training</code> with great ease to run distributed jobs, but it's a bit difficult on Azure.</p>
<p>I built the tf1 docker image for the Object Detection API from the dockerfile <a href="https://github.com/tensorflow/models/tree/master/research/object_detection/dockerfiles/tf1" rel="nofollow noreferrer">here</a>.</p>
<p>I had a cluster (Azure Kubernetes Service/AKS Cluster) with the following nodes:</p>
<pre><code>4x Standard_DS2_V2 nodes
8x Standard_NC6 nodes
</code></pre>
<p>In Azure, <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/nc-series" rel="nofollow noreferrer">NC6</a> nodes are GPU nodes backed by a single K80 GPU each, while <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series" rel="nofollow noreferrer">DS2_V2</a> are typical CPU nodes.</p>
<p>I used <a href="https://github.com/kubeflow/tf-operator" rel="nofollow noreferrer">TFJob</a> to configure my job with the following replica settings:</p>
<pre><code>Master (limit: 1 GPU) 1 replica
Worker (limit: 1 GPU) 7 replicas
Parameter Server (limit: 1 CPU) 3 replicas
</code></pre>
<p>Here's my conundrum: The job fails as one of the workers throw the following error:</p>
<pre><code>tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: out of memory
</code></pre>
<p>I randomly tried reducing the number of workers, and surprisingly, the job worked. It worked only if I had <em>3 or less Worker replicas</em>. <strong>Although it took a lot of time (bit more than a day), the model could finish training successfully with 1 Master and 3 Workers</strong>.</p>
<p>This was a bit vexing as I could only use up to 4 GPUs even though the cluster had 8 GPUs allocated. I ran another test: When my cluster had 3 GPU nodes, I could only successfully run the job with 1 Master and 1 Worker! Seems like I can't fully utilize the GPUs for some reason.</p>
<p>Finally, I ran into another problem. The above runs were done with a very small amount of data (about 150 Mb) since they were tests. I ran a proper job later with a lot more data (about 12 GiB). Even though the cluster had 8 GPU nodes, it could only successfully do the job when there was 1 Master and 1 Worker.</p>
<p>Increasing the Worker replica count to more than 1 immediately caused the same cuda error as above.</p>
<p>I'm not sure if this is an Object Detection API based issue, or if it is caused by Kubeflow/TFJob or even if it's something Azure specific. I've opened a similar issue on the Kubeflow page, but I'm also now seeing if I can get some guide from the Object Detection API community. If you need any further details (like the tfjob yaml, or pipeline.config for the training) or have any questions, please let me know in the comments.</p>
|
<p>It might be related to the batch size used by the API.<br>
Try to control the batch size, maybe as described in this answer:
<a href="https://stackoverflow.com/a/55529875/2109287">https://stackoverflow.com/a/55529875/2109287</a></p>
|
tensorflow|object-detection
| 0
|
5,103
| 67,628,309
|
Pandas read_excel UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 0: invalid continuation byte
|
<p>The following code raises the error</p>
<p><strong>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 0: invalid continuation byte</strong></p>
<pre><code>with open(file_path) as excel_file:
df = pd.read_excel(excel_file)
</code></pre>
|
<p>The following code solved the issue</p>
<pre><code>with open(file_path,mode="rb") as excel_file:
df = pd.read_excel(excel_file)
</code></pre>
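<p>The underlying issue is that a plain <code>open()</code> opens the file in text mode and tries to decode the binary Excel bytes as UTF-8; <code>mode="rb"</code> hands pandas the raw bytes instead. An even simpler variant, assuming nothing else is done with the file handle, is to let pandas open the path itself:</p>
<pre><code>df = pd.read_excel(file_path)
</code></pre>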
|
python|pandas
| 1
|
5,104
| 67,906,187
|
Write all pandas dataframe in workspace to excel
|
<p>I'm trying to write all the pandas dataframes currently available in the workspace to Excel sheets. I am following the example from this <a href="https://stackoverflow.com/q/41113663">SO thread</a>, but I'm unable to make it work.</p>
<p>This is my not working code:</p>
<pre><code>alldfs = {var: eval(var) for var in dir() if isinstance(eval(var), pd.core.frame.DataFrame)}
for df in alldfs.values():
print(df.name)
fmane = df+".xlsx"
writer = pd.ExcelWriter(fmane)
df.to_excel(writer)
writer.save()
</code></pre>
<p>Any help on how to correct this, so that I can pass the dataframe names to a variable, so that the excel filename being written can be same as the dataframe. I'm using spyder 4, python 3.8</p>
|
<p>Just a small fix will do the job:</p>
<pre class="lang-py prettyprint-override"><code>alldfs = {var: eval(var) for var in dir() if isinstance(eval(var), pd.core.frame.DataFrame)}
for df_name, df in alldfs.items():
print(df_name)
fmane = df_name+".xlsx"
writer = pd.ExcelWriter(fmane)
df.to_excel(writer)
writer.save()
</code></pre>
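<p>As a side note, <code>pd.ExcelWriter</code> can also be used as a context manager, which saves and closes each file for you; a sketch of the same loop:</p>
<pre><code>for df_name, df in alldfs.items():
    with pd.ExcelWriter(df_name + ".xlsx") as writer:
        df.to_excel(writer)
</code></pre>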
|
excel|pandas|dataframe
| 1
|
5,105
| 67,909,456
|
node-tflite - GLIBC_2.27 not found (required by libtensorflowlite_c.so)
|
<p>I am trying to deploy a Node.js server to IBM cloud (Linux) using <code>node-tflite</code> as a dependency, and get the following error preventing the server from starting:</p>
<pre><code>Jun 9 17:04:57 Code Engine deployment-0001 /usr/src/app/node_modules/bindings/bindings.js:121
throw e;
^
Jun 9 17:04:57 Code Engine deployment-0001 Error: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /usr/src/app/node_modules/node-tflite/build/Release/libtensorflowlite_c.so)
at Object.Module._extensions..node (internal/modules/cjs/loader.js:1127:18)
at Module.load (internal/modules/cjs/loader.js:933:32)
at Function.Module._load (internal/modules/cjs/loader.js:774:14)
at Module.require (internal/modules/cjs/loader.js:957:19)
at require (internal/modules/cjs/helpers.js:88:18)
at bindings (/usr/src/app/node_modules/bindings/bindings.js:112:48)
at Object.<anonymous> (/usr/src/app/node_modules/node-tflite/index.js:4:32)
at Module._compile (internal/modules/cjs/loader.js:1068:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:933:32)
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! code ELIFECYCLE
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! errno 1
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! sp-streamer@1.0.0 start: `nodemon index.js || node index.js`
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! Exit status 1
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR!
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! Failed at the sp-streamer@1.0.0 start script.
Jun 9 17:04:57 Code Engine deployment-0001 npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Jun 9 17:04:57 Code Engine deployment-0001 aggressive probe error (failed 202 times): dial tcp 127.0.0.1:9000: connect: connection refused
</code></pre>
<p>This is what my <code>package.json</code> looks like:</p>
<pre><code>{
"name": "sp-streamer",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "nodemon index.js || node index.js"
},
"license": "ISC",
"dependencies": {
"csv-parser": "^3.0.0",
"dotenv": "^10.0.0",
"express": "^4.17.1",
"fs": "^0.0.1-security",
"mathjs": "^9.4.2",
"node-tflite": "^0.0.2",
"os-utils": "^0.0.14",
"socket.io-client": "^4.1.2"
}
}
</code></pre>
<p>And I have a <code>dockerfile</code> as well:</p>
<pre><code>FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 9000
CMD [ "npm", "start" ]
</code></pre>
<p>And the following is a <code>.gitlab-ci.yml</code>:</p>
<pre><code>stages:
- build
docker-build:
image: docker:latest
stage: build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
- docker push "$CI_REGISTRY_IMAGE${tag}"
</code></pre>
<p>How can I fix the missing <code>GLIBC_2.27</code> (<code>/lib/x86_64-linux-gnu/libm.so.6: version GLIBC_2.27 not found</code>)?</p>
|
<p>Found the solution; it works on deployment. The key change is building on an Ubuntu base image, which ships a recent enough glibc. This is the <code>dockerfile</code>:</p>
<pre><code>FROM ubuntu
RUN apt-get update
RUN apt-get install -y curl wget make g++
RUN curl -fsSL https://deb.nodesource.com/setup_lts.x | bash -
RUN apt-get install -y nodejs
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 9007
CMD [ "npm", "start" ]
</code></pre>
|
javascript|node.js|linux|docker|tensorflow.js
| 1
|
5,106
| 67,898,455
|
How to save dictionary to csv
|
<pre><code>import pandas as pd
df = pd.read_excel('test.xlsx')
dict_of_region = dict(iter(df.groupby('Region')))
dict_of_region
</code></pre>
<p><code>df</code> is a dataframe with 100 rows and 20 columns, including a <code>Region</code> column that holds 4 regions: north, south, east, west. Using groupby I have made a dictionary whose keys are the regions (e.g. north) and whose values are the corresponding 20-column dataframes. Now I want to save these values into separate CSV files: north as one file, south as another, and so on.
How do I do this?</p>
|
<pre><code># write one CSV per region, named after the dictionary key
for df_name, df in dict_of_region.items():
    df.to_csv(f'{df_name}.csv', index=False)
</code></pre>
|
python|pandas
| 1
|
5,107
| 41,475,758
|
Problems while filtering pandas dataframe by row values?
|
<p>I have the following pandas dataframe:</p>
<pre><code> Col
0 []
1 []
2 [(foo, bar), (foo, bar)]
3 []
4 []
5 []
6 []
7 [(foo, bar), (foo, bar)]
</code></pre>
<p>I would like to remove all the empty lists (*):</p>
<pre><code> Col
2 [(foo, bar), (foo, bar)]
7 [(foo, bar), (foo, bar)]
</code></pre>
<p>For the above I tried:</p>
<pre><code>df = df.loc[df.Col != '[]']
df
</code></pre>
<p>and</p>
<pre><code>df.pipe(lambda d: d[d['Col'] != '[]'])
</code></pre>
<p>However, neither of them worked. So, my question is: how can I remove all the empty lists from the dataframe, as in (*)?</p>
|
<p>Slicing through the data frame as though the values were <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow noreferrer">strings</a> instead of lists may work:</p>
<pre><code>df[df.astype(str)['Col'] != '[]']
</code></pre>
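<p>If the column genuinely holds Python <code>list</code> objects rather than strings, a variant is to filter on list length instead (a sketch):</p>
<pre><code>df[df['Col'].map(len) > 0]
</code></pre>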
|
python|python-3.x|pandas
| 3
|
5,108
| 41,635,738
|
Get Pretrained Inception v3 model from Open Images dataset working on Android
|
<p>I tried for a while to get the pretrained model working on Android. The problem is that I only have the ckpt and meta files for the pretrained net, and as far as I understand I need a .pb file for the Android app. So I tried to convert the given files to a .pb file. </p>
<p>I first tried freeze_graph.py, but without success. So I used the example code from <a href="https://github.com/openimages/dataset/blob/master/tools/classify.py" rel="nofollow noreferrer">https://github.com/openimages/dataset/blob/master/tools/classify.py</a> and modified it to store a .pb file after loading:</p>
<pre><code>if not os.path.exists(FLAGS.checkpoint):
tf.logging.fatal(
'Checkpoint %s does not exist. Have you download it? See tools/download_data.sh',
FLAGS.checkpoint)
g = tf.Graph()
with g.as_default():
input_image = tf.placeholder(tf.string)
processed_image = PreprocessImage(input_image)
with slim.arg_scope(inception.inception_v3_arg_scope()):
logits, end_points = inception.inception_v3(
processed_image, num_classes=FLAGS.num_classes, is_training=False)
predictions = end_points['multi_predictions'] = tf.nn.sigmoid(
logits, name='multi_predictions')
init_op = control_flow_ops.group(tf.global_variables_initializer(),
tf.global_variables_initializer(),
data_flow_ops.initialize_all_tables())
saver = tf_saver.Saver()
sess = tf.Session()
saver.restore(sess, FLAGS.checkpoint)
outpt_filename = 'output_graph.pb'
#output_graph_def = sess.graph.as_graph_def()
output_graph_def = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ["multi_predictions"])
with gfile.FastGFile(outpt_filename, 'wb') as f:
f.write(output_graph_def.SerializeToString())
</code></pre>
<p>Now my problem is that I have the .pb file, but I have no idea what the input node name is, and I am not sure if <code>multi_predictions</code> is the right output name. In the example Android app I have to specify both. The Android app crashed with:</p>
<pre><code>tensorflow_inference_jni.cc:138 Could not create Tensorflow Graph: Invalid argument: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs.
</code></pre>
<p>I don't know if there are more problems beyond the .pb problem. If anyone knows a better way to port the ckpt and meta files to a .pb file in my case, or knows a source for the final file with input and output names, please give me a hint to complete this task.</p>
<p>Thanks</p>
|
<p>You'll need to use the optimize_for_inference.py script to strip out the unused nodes in your graph. "decodeJpeg" is not supported on Android -- pixel values should be fed in directly. ClassifierActivity.java has more detail about the specific nodes to use for inception v3.</p>
|
android|tensorflow
| 2
|
5,109
| 27,488,032
|
'latin-1' codec can't encode character u'\u2014' in position 23: ordinal not in range(256)
|
<p>I am loading data into a pandas dataframe from an excel workbook and am attempting to push it to a database when I get the above error.</p>
<p>I thought at first the collation of the database was at issue which I changed to utf8_bin</p>
<p>Next I checked the database engine create statement on my end which I added a parameter for the encoding too. </p>
<pre><code>engine = create_engine('mysql+pymysql://root@localhost/test', encoding="utf-8")
</code></pre>
<p>But neither of these things work I am still getting the error from the line:</p>
<pre><code>df.to_sql("strand", engine, if_exists="append", index=False)
</code></pre>
<p>I checked if there was an encoding parameter for the to_sql method but this does not seem to be the case. </p>
|
<p>Apparently I needed to add <code>?charset=utf8</code> to the query string as well as the encoding variable, which resulted in the following engine create statement:</p>
<pre><code>engine = create_engine('mysql+pymysql://root@localhost/test?charset=utf8', encoding="utf-8")
</code></pre>
|
mysql|excel|utf-8|pandas|latin1
| 9
|
5,110
| 61,466,228
|
pandas Third Column answer is based on column 1 and column 2
|
<p>First - I have tried reviewing similar posts, but I am still not getting it.</p>
<p>I have data with corporate codes that I have to reclassify. First thing, I created a new column -['corp_reclassed'].</p>
<p>I populate that column with the use of the map function and a dictionary. </p>
<p>Most of the original corporate numbers do not change thus I have nans in the new column (see below). </p>
<pre><code>corp_number corp_reclassed
100 nan
110 nan
120 160
130 nan
150 170
</code></pre>
<p>I want to create a final column where, if ['corp_reclassed'] is NaN, ['final_number'] is populated from ['corp_number']. If not, it is populated from ['corp_reclassed'].</p>
<p>I have tried many ways, but I keep running into problems. For instance, this is my lastest try:</p>
<pre><code>df['final_number'] = df.['corp_number'].where(df.['gem_reclassed'] = isnull, eq['gem_reclassed'])
</code></pre>
<p>Please help.</p>
<p>FYI- I am using pandas 0.19.2. I get upgrade because of restrictions at work.</p>
|
<pre><code>df['final_number'] = df['corp_reclassed'].fillna(df['corp_number'])
</code></pre>
|
python|pandas
| 0
|
5,111
| 61,195,022
|
Pivot Table in Pandas with two column(Index and Value)
|
<p>I have a CSV file with <code>obj</code> and <code>VS</code> columns.
I need to sum the <code>VS</code> values for each <code>obj</code> and produce the output below.</p>
<p><strong>Input:</strong></p>
<pre><code>+-----+------+
| obj | VS |
+-----+------+
| B | 2048 |
| A | 1024 |
| B | 10 |
| A | 1024 |
| B | 1025 |
| A | 1026 |
| B | 1027 |
+-----+------+
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>+---+------+
| A | 3074 |
+---+------+
| B | 4110 |
+---+------+
</code></pre>
<p>I have tried the code below. As I have just two columns to work with, I added a <code>value</code> column with a constant value so I could build the pivot (a pivot table needs Index, Column and Value); the <code>value</code> column is just a helper. However, the output is something weird!</p>
<pre><code>import pandas as pd
import numpy as np
filename='1test.csv'
df = pd.read_csv(filename, dtype='str')
df["value"]=1
pd.pivot_table(df, values="VS", index="obj", columns="value", aggfunc=np.sum)
</code></pre>
<p><strong>output of my code</strong>:</p>
<pre><code>+-------+----------------+
| value | 1 |
+-------+----------------+
| obj | |
| A | 102410241026 |
| B | 20481010251027 |
+-------+----------------+
</code></pre>
|
<p>Just consider that as you read from CSV with <code>dtype='str'</code>, the values are strings, so "summing" them concatenates the digits instead of adding the numbers; you need to convert them to numeric with <code>df['VS']=pd.to_numeric(df['VS'])</code>.
<code>print(df.dtypes)</code> shows the type of each column in df.</p>
<pre><code>import pandas as pd
import numpy as np
filename='1test.csv'
df = pd.read_csv(filename, dtype='str')
df["value"]=1
print(df.dtypes)
df['VS']=pd.to_numeric(df['VS'])
print(df.dtypes)
pd.pivot_table(df, values="VS", index="obj", columns="value", aggfunc=np.sum)
</code></pre>
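<p>Since only two columns are involved, a plain groupby avoids the helper column entirely (a sketch on the same dataframe):</p>
<pre><code>df['VS'] = pd.to_numeric(df['VS'])
print(df.groupby('obj')['VS'].sum())
</code></pre>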
|
python|pandas|csv|dataframe
| 1
|
5,112
| 68,573,859
|
GAN Model Code Modification — 3 Channels to 1 Channel
|
<p>This model is designed for processing 3-channel images (RGB) while I need to handle some black and white image data (grayscale), so I’d like to change the “ch” parameter to “1” instead of “3.”</p>
<p>The full code is available here — <a href="https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html</a></p>
<p>If we just change this parameter — “nc = 3” --> “nc = 1” — without adjusting generator’s and discriminator’s code blocks, executing just gives an error message:</p>
<pre><code>RuntimeError: Given groups=1, weight of size [64, 1, 4, 4], expected input[128, 3, 64, 64] to have 1 channels, but got 3 channels instead
</code></pre>
<p>Is there a guide on how to modify this or, perhaps, calculate these values manually using <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" rel="nofollow noreferrer">this formula</a> (shape section)?</p>
<p>Please advise.</p>
|
<p>A grayscale image is a "special case" of a color image: a pixel has a gray color iff the red channel equals the green equals the blue. Thus a pixel with values <code>[200, 10, 30]</code> will be red-ish in color, while a pixel with values <code>[180, 180, 180]</code> will have a gray color.<br />
Therefore, the simplest way to process gray scale images using a pre-trained RGB model is to duplicate the single channel of the grayscale image 3 times to generate RGB-like image with three channels that has gray colors.</p>
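<p>In PyTorch that duplication is a one-liner on the batch; a minimal sketch, assuming the grayscale batch has shape <code>[N, 1, H, W]</code>:</p>
<pre><code>import torch

gray = torch.rand(128, 1, 64, 64)   # stand-in for a batch of grayscale images
rgb_like = gray.repeat(1, 3, 1, 1)  # copy the single channel 3 times -> [128, 3, 64, 64]
</code></pre>
<p>This way the tutorial code can keep <code>nc = 3</code> and only the data loading is adapted.</p>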
|
python|pytorch|generative-adversarial-network
| 1
|
5,113
| 68,752,887
|
Pandas- Create column based on sum of previous row values
|
<p>Sample dataset:</p>
<pre><code> id val
0 9 1
1 9 0
2 9 4
3 9 6
4 9 2
5 9 3
6 5 0
7 5 1
8 5 6
9 5 2
10 5 4
</code></pre>
<p>From the dataset, I want to generate a column <code>sum</code>. For the first 3 rows: <code>sum</code>=<code>sum</code>+<code>val</code>(group by id). From 4th row, each row contains the cumulative sum of the previous 3 rows of <code>val</code> column(group by id). Loop through each row. When a new id appears, it should calculate from the beginning.</p>
<p>Desired Output:</p>
<pre><code> id val sum
0 9 1 1
1 9 0 1
2 9 4 5
3 9 6 10
4 9 2 12
5 9 3 11
6 5 0 0
7 5 1 1
8 5 6 7
9 5 2 9
10 5 4 12
</code></pre>
<p>Code I tried:</p>
<pre><code>df['sum']=df['val'].rolling(min_periods=1, window=3).groupby(df['id']).cumsum()
</code></pre>
<p>How do I figure out the custom cumulative sum function?</p>
|
<p>Your expected output is just a per-<code>id</code> rolling sum of <code>val</code> with a window of 3. Dropping the group level from the result's index lets it align back onto the original rows, so I would do:</p>
<pre><code>df['sum'] = df.groupby('id')['val'].rolling(min_periods=1, window=3).sum().reset_index(level=0, drop=True)
</code></pre>
<p>output:</p>
<pre><code>    id  val   sum
0    9    1   1.0
1    9    0   1.0
2    9    4   5.0
3    9    6  10.0
4    9    2  12.0
5    9    3  11.0
6    5    0   0.0
7    5    1   1.0
8    5    6   7.0
9    5    2   9.0
10   5    4  12.0
</code></pre>
|
pandas
| 3
|
5,114
| 68,639,166
|
Dictionary Unique Keys Rename and Replace
|
<p>I have a dictionary format structure like this</p>
<pre><code>df = pd.DataFrame({'ID' : ['A', 'B', 'C'],
'CODES' : [{"1407273790":5,"1801032636":20,"1174813554":1,"1215470448":2,"1053754655":4,"1891751228":1},
{"1497066526":19,"1801032636":16,"1215470448":11,"1891751228":18},
{"1215470448":8,"1407273790":4},]})
</code></pre>
<p>Now I want to create a unique list of keys and create names for them like this -</p>
<pre><code>np_code np_rename
1407273790 np_1
1801032636 np_2
1174813554 np_3
1215470448 np_4
1053754655 np_5
1891751228 np_6
1497066526 np_7
</code></pre>
<p>And finally replace the new names in main dataframe df -</p>
<pre><code>df = pd.DataFrame({'ID' : ['A', 'B', 'C'],
'CODES' : [{"np_1":5,"np_2":20,"np_3":1,"np_4":2,"np_5":4,"np_6":1},
{"np_7":19,"1801032636":16,"np_4":11,"np_6":18},
{"np_4":8,"np_1":4},]})
</code></pre>
|
<p>You can build the renaming dictionary from the unique keys and then use <code>apply</code> to rewrite each dict:</p>
<pre><code>u = df['CODES'].map(lambda x: [*x.keys()]).explode().unique()  # unique keys across all the dicts
d = dict(zip(u, 'np_' + pd.Index((pd.factorize(u)[0] + 1).astype(str))))  # e.g. {'1407273790': 'np_1', ...}
f = lambda x: {d.get(k, k): v for k, v in x.items()}  # rename keys, keep values
df['CODES'] = df['CODES'].apply(f)
</code></pre>
<hr />
<pre><code>print(df)
ID CODES
0 A {'np_1': 5, 'np_2': 20, 'np_3': 1, 'np_4': 2, ...
1 B {'np_7': 19, 'np_2': 16, 'np_4': 11, 'np_6': 18}
2 C {'np_4': 8, 'np_1': 4}
</code></pre>
|
pandas|dictionary|replace|unique
| 2
|
5,115
| 68,588,784
|
Select top K value from a tensor with no duplication
|
<p><code>torch.Tensor.topk</code> provides an efficient way to extract top k values in a tensor along one dimension. Is it possible to restrict the top k value to be <strong>non-repetitive</strong>?</p>
<p>For example,</p>
<pre><code>input = torch.tensor([0.2,0.2,0.1])
k = 2
dim = 0
output[0] = torch.tensor([0.2,0.1])
output[1] = torch.LongTensor([0,2])
</code></pre>
|
<p>You could apply <a href="https://pytorch.org/docs/stable/generated/torch.unique.html" rel="nofollow noreferrer"><code>torch.unique</code></a> on the input tensor.</p>
<pre><code>>>> input.unique().topk(k=2).values
tensor([0.2000, 0.1000])
</code></pre>
<p>Do note you will lose the indices at this point.</p>
<hr />
<p><em>Edit</em>: actually <a href="https://pytorch.org/docs/stable/generated/torch.unique.html" rel="nofollow noreferrer"><code>torch.unique</code></a> has an option to sort the results (the option is on by default).</p>
<pre><code>>>> input
tensor([0.0000, 0.3000, 0.2000, 0.2000, 0.1000])
>>> input.unique(return_inverse=True)[1].unique(sorted=False)
tensor([1, 2, 3, 0])
</code></pre>
|
pytorch|selection
| 0
|
5,116
| 68,544,939
|
Search for different variations of text in pandas
|
<p>I have a pandas dataframe that consists of people and their degree type:</p>
<pre><code>data = {'Name': ['Alice','Bob','Chris','David'],
'Degree': ['phd','BA','MBA','B.Sc.']
}
df = pd.DataFrame(data, columns = ['Name', 'Degree'])
</code></pre>
<p>And I would like to one hot encode based on the degree type:</p>
<pre><code>data = {'Name': ['Alice','Bob','Chris','David'],
'is_bachelors': [0,1,0,1],
'is_masters': [0,0,1,0],
'is_phd': [1,0,0,0]
}
</code></pre>
<p>The problem is that people have input their degree type in lots of different ways. For example, for phd you could have PhD, phd, Ph. D, Ph.D., ph.d etc. Essentially lots of variations of spacings and periods.</p>
<p>Furthermore, I don't want MBA for example being flagged as bachelors (because it contains BA). I found this happened with pandas str.contains.</p>
<p>Any advice would be greatly appreciated.</p>
|
<p>Since the same degree types all start with the same character you can lower them and map on the first character:</p>
<pre><code>data = {'Name': ['Alice','Bob','Chris','David'],
'Degree': ['phd','BA','MBA','B.Sc.']
}
df = pd.DataFrame(data, columns = ['Name', 'Degree'])
df['Degree'] = df['Degree'].str.lower().str[:1].map({'b': 'is_bachelors', 'm': 'is_masters', 'p': 'is_phd'})
df.pivot_table(index='Name', columns='Degree', aggfunc=len, fill_value=0)
</code></pre>
<p>result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Name</th>
<th style="text-align: right;">is_bachelors</th>
<th style="text-align: right;">is_masters</th>
<th style="text-align: right;">is_phd</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Alice</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">Bob</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">Chris</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">David</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>NB: you can add any other relevant letters to the mapping dictionary, eg. for <code>dr.</code> add <code>'d': 'is_phd'</code></p>
|
python|pandas
| 1
|
5,117
| 68,455,389
|
ValueError: could not convert string to float: 'Yes, policy' while fitting to my Logistic Regression Model
|
<p>I am reading an Excel sheet using pandas. The sheet has more than 10 columns, of which I am only interested in 3, so I read it, remove the rows which have null values, and then create a test and a validation set. While fitting it to the logistic regression model, I am getting an error.</p>
<p>Here's the code</p>
<pre><code>train, tv = train_test_split(df1, test_size=0.2, random_state=0)
test, val = train_test_split(tv, test_size=0.5, random_state=0)
# Logistic Regression
lr = LogisticRegression()
logit_model = lr.fit(train, test)
</code></pre>
<p>The stacktrace:</p>
<pre><code>Traceback (most recent call last):
File "ml.py", line 22, in <module>
logit_model = lr.fit(train, test)
File "F:\proj\venv\lib\site-packages\sklearn\linear_model\_logistic.py", line 1344, in fit
X, y = self._validate_data(X, y, accept_sparse='csr', dtype=_dtype,
File "F:\proj\venv\lib\site-packages\sklearn\base.py", line 433, in _validate_data
X, y = check_X_y(X, y, **check_params)
File "F:\proj\venv\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "F:\proj\venv\lib\site-packages\sklearn\utils\validation.py", line 871, in check_X_y
X = check_array(X, accept_sparse=accept_sparse,
File "F:\proj\venv\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "F:\proj\venv\lib\site-packages\sklearn\utils\validation.py", line 673, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File "F:\proj\venv\lib\site-packages\pandas\core\generic.py", line 1990, in __array__
return np.asarray(self._values, dtype=dtype)
ValueError: could not convert string to float: 'Yes, policy'
</code></pre>
<p>The dataframe looks something like this:</p>
<pre><code> ID ANSWER TEXT
0 24100.0 Yes, policy Source text snippet:The ACS Group combines its...
1 24100.0 Yes, policy Source text snippet:The ACS Environmental Poli...
2 24100.0 Yes, policy Source text snippet:The ACS Environmental Poli...
3 24100.0 Yes, policy Source text snippet:6. CONTENTS OF THE ENVIRON...
4 24100.0 Yes, policy Source text snippet:6. CONTENTS OF THE ENVIRON...
</code></pre>
<p>By looking at the ValueError I thought that it might be because of the comma after Yes in the ANSWER column, but even after removing it I got the same error. The ID in Excel looks like 24100, but when I check its type in the dataframe it shows as float64 and is displayed as 24100.0. I don't understand why it is throwing an error while fitting the model.</p>
|
<p>It looks like your <code>ANSWER</code> and <code>TEXT</code> columns contain categorical values, and you have to encode them in numeric form before feeding them to the model, because machine learning models don't understand text. Use this code on the dataframe before using <code>train_test_split</code>:</p>
<pre><code> from sklearn.preprocessing import LabelEncoder
df['TEXT'] = df['TEXT'].astype('str')
df['ANSWER'] = df['ANSWER'].astype('str')
df[['ANSWER', 'TEXT']] = df[['ANSWER', 'TEXT']].apply(LabelEncoder().fit_transform)
</code></pre>
<p>Also, it's a multi-class classification problem, so <code>Logistic Regression</code> isn't going to give you good results. Use <code>RandomForestClassifier</code>.</p>
|
python|pandas|machine-learning|scikit-learn
| 2
|
5,118
| 36,569,933
|
Updating values with another dataframe
|
<p>I have 2 pandas dataframes. The second one is contained in the first one. How can I replace the values in the first one with the ones in the second?</p>
<p>consider this example:</p>
<pre><code>df1 = pd.DataFrame(0, index=[1,2,3], columns=['a','b','c'])
df2 = pd.DataFrame(1, index=[1, 2], columns=['a', 'c'])
ris= [[1, 0, 1],
[1, 0, 1],
[0, 0, 0]]
</code></pre>
<p>and <code>ris</code> has the same index and columns as <code>df1</code></p>
<p>A possible solution is:</p>
<pre><code>for i in df2.index:
for j in df2.columns:
df1.loc[i, j] = df2.loc[i, j]
</code></pre>
<p>But this is ugly</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow"><code>copy</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow"><code>combine_first</code></a>:</p>
<pre><code>df3 = df1.copy()
df1[df2.columns] = df2[df2.columns]
print df1.combine_first(df3)
a b c
1 1.0 0 1.0
2 1.0 0 1.0
3 0.0 0 0.0
</code></pre>
<p>Next solution is creating empty new <code>DataFrame</code> <code>df4</code> with <code>index</code> and <code>columns</code> from <code>df1</code> and fill it by double <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow"><code>combine_first</code></a>:</p>
<pre><code>df4 = pd.DataFrame(index=df1.index, columns=df1.columns)
df4 = df4.combine_first(df2).combine_first(df1)
print df4
a b c
1 1.0 0.0 1.0
2 1.0 0.0 1.0
3 0.0 0.0 0.0
</code></pre>
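<p>For completeness, this exact pattern (overwrite <code>df1</code> in place with the values of <code>df2</code>, aligned on index and columns) is also what <code>DataFrame.update</code> does; a short sketch:</p>
<pre><code>df1.update(df2)
print df1
</code></pre>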
|
python|pandas|replace|dataframe
| 1
|
5,119
| 36,621,778
|
Pandas Dataframe datetime slicing with Index vs MultiIndex
|
<p>With single indexed dataframe I can do the following:</p>
<pre><code>df2 = DataFrame(data={'data': [1,2,3]},
index=Index([dt(2016,1,1),
dt(2016,1,2),
dt(2016,2,1)]))
>>> df2['2016-01' : '2016-01']
data
2016-01-01 1
2016-01-02 2
>>> df2['2016-01-01' : '2016-01-01']
data
2016-01-01 1
</code></pre>
<p>Date time slicing works when you give it a complete day (i.e. 2016-01-01), and it also works when you give it a partial date, like just the year and month (2016-01). All this works great, but when you introduce a multiindex, it only works for complete dates. The partial date slicing doesn't seem to work anymore</p>
<pre><code>df = DataFrame(data={'data': [1, 2, 3]},
index=MultiIndex.from_tuples([(dt(2016, 1, 1), 2),
(dt(2016, 1, 1), 3),
(dt(2016, 1, 2), 2)],
names=['date', 'val']))
>>> df['2016-01-01' : '2016-01-02']
data
date val
2016-01-01 2 1
3 2
2016-01-02 2 3
</code></pre>
<p>ok, thats fine, but the partial date:</p>
<pre><code>>>> df['2016-01' : '2016-01']
File "pandas/index.pyx", line 134, in pandas.index.IndexEngine.get_loc (pandas/index.c:3824)
File "pandas/index.pyx", line 154, in pandas.index.IndexEngine.get_loc (pandas/index.c:3704)
File "pandas/hashtable.pyx", line 686, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12280)
File "pandas/hashtable.pyx", line 694, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12231)
KeyError: '2016-01'
</code></pre>
<p>(I shortened the traceback). </p>
<p>Any idea if this is possible? Is this a bug? Is there any way to do what I want to do without having to resort to something like:</p>
<pre><code>df.loc[(df.index.get_level_values('date') >= start_date) &
(df.index.get_level_values('date') <= end_date)]
</code></pre>
<p>Any tips, comments, suggestions, etc are MOST appreciated! I've tried a lot of other things to no avail!</p>
|
<p>Cross-section should work:</p>
<pre><code>df.xs(slice('2016-01-01', '2016-01-01'), level='date')
</code></pre>
<p>Documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html</a></p>
|
python|datetime|pandas|dataframe|slice
| 14
|
5,120
| 36,289,127
|
How do I draw a area plot in ggplot with timeseries data?
|
<p>I'm trying to post a graph like <a href="http://chrisalbon.com/python/ggplot_area_plot.html" rel="nofollow">this</a>. </p>
<p>My data set looks like this. It has two columns. The first is the date and the second is the total number:</p>
<pre><code>date volume
3/21/16 280
3/20/16 279
3/18/16 278
3/4/16 277
</code></pre>
<p>I am at a loss on how to make the graph from the link work with my data set. Thank you so much.</p>
<pre><code># Import required modules
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as pyplot
import ggplot
# Data
data = pd.read_csv("niagra-falls-escape.csv") # Read CSV
df = pd.DataFrame(data)
# Viz
ggplot(df, aes(x='date')) + \
    geom_area()
</code></pre>
|
<p>There are a couple issues here. First <code>aes</code>, <code>geom_area</code> etc, are classes of the <code>ggplot</code> module. Thus as in the referenced post they import via <code>from ggplot import *</code> instead of <code>import ggplot</code>. What I would recommend for easier debugging and maintainable code is to do <code>from ggplot import ggplot, aes, geom_area</code>. </p>
<p>Then there are a couple issues with your code. I think you need to specify that the date is a datetime type of data. you can do this by adding a line <code>df['date'] = pd.to_datetime(df['date'])</code>. </p>
<p>Then you will also need to specify the y axis (both ymin and ymax for an area plot) of your plot. This can be done by: <code>ggplot(df, aes(x='date', ymin='0', ymax='volume')) + geom_area()</code>. Hope this helps.</p>
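<p>Putting those suggestions together, a sketch of the corrected script (column names taken from the question):</p>
<pre><code>from ggplot import ggplot, aes, geom_area
import pandas as pd

df = pd.read_csv("niagra-falls-escape.csv")
df['date'] = pd.to_datetime(df['date'])  # make sure the x axis is datetime
ggplot(df, aes(x='date', ymin='0', ymax='volume')) + geom_area()
</code></pre>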
|
python|pandas|ggplot2
| 1
|
5,121
| 5,064,822
|
How to add items into a numpy array
|
<p>I need to accomplish the following task:</p>
<p>from:</p>
<pre><code>a = array([[1,3,4],[1,2,3]...[1,2,1]])
</code></pre>
<p>(add one element to each row) to:</p>
<pre><code>a = array([[1,3,4,x],[1,2,3,x]...[1,2,1,x]])
</code></pre>
<p>I have tried doing stuff like a[n] = array([1,3,4,x])</p>
<p>but numpy complained of shape mismatch. I tried iterating through <code>a</code> and appending element x to each item, but the changes are not reflected.</p>
<p>Any ideas on how I can accomplish this?</p>
|
<p>Appending data to an existing array is a natural thing to want to do for anyone with python experience. However, if you find yourself regularly appending to large arrays, you'll quickly discover that NumPy doesn't easily or efficiently do this the way a python <code>list</code> will. You'll find that every "append" action requires re-allocation of the array memory and short-term doubling of memory requirements. So, the more general solution to the problem is to try to allocate arrays to be as large as the final output of your algorithm. Then perform all your operations on sub-sets (<a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#other-indexing-options" rel="noreferrer">slices</a>) of that array. Array creation and destruction should ideally be minimized.</p>
<p>That said, it's often unavoidable, and the functions that do this are listed below (a worked example for the question's 2-D case follows the lists):</p>
<p>for 2-D arrays:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html" rel="noreferrer">np.hstack</a> </li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="noreferrer">np.vstack</a></li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html" rel="noreferrer">np.column_stack</a></li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.row_stack.html" rel="noreferrer">np.row_stack</a></li>
</ul>
<p>for 3-D arrays (the above plus):</p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html" rel="noreferrer">np.dstack</a></li>
</ul>
<p>for N-D arrays:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="noreferrer">np.concatenate</a></li>
</ul>
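<p>For the 2-D case in the question, a minimal sketch (the value <code>x</code> is a placeholder, as in the question):</p>
<pre><code>import numpy as np

a = np.array([[1, 3, 4], [1, 2, 3], [1, 2, 1]])
x = 9  # placeholder for the value appended to each row
a = np.hstack([a, np.full((a.shape[0], 1), x)])
# a is now array([[1, 3, 4, 9], [1, 2, 3, 9], [1, 2, 1, 9]])
</code></pre>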
|
python|numpy
| 133
|
5,122
| 65,649,030
|
Transpose specific rows into columns in pandas
|
<p>I have a dataset that has information like below:</p>
<pre><code>Title Year Revenue (Mi)
---------------------------------------------------------------
Guardians of the Galaxy 2015 313.13
Split 2016 138.0
Sing 2016 270.32
Moana 2015 248.02
</code></pre>
<p>I want to convert this piece of data to this form:</p>
<pre><code>Title 2015 2016
------------------------------------------------------------
Guardians of the Galaxy 313.13
Split 138.02
Sing 270.32
Moana 248.02
</code></pre>
<p>How can I convert my data frame to this form? I am not sure what I should do in this case.</p>
|
<p>Try with</p>
<pre><code>out = df.pivot(index='Title', columns='Year', values='Revenue (Mi)').reset_index()
</code></pre>
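<p>On the sample data this gives one row per title, with <code>NaN</code> where a title has no revenue for that year (output shown approximately):</p>
<pre><code>Year                    Title    2015    2016
0     Guardians of the Galaxy  313.13     NaN
1                       Moana  248.02     NaN
2                        Sing     NaN  270.32
3                       Split     NaN  138.00
</code></pre>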
|
python|pandas|dataframe|numpy
| 2
|
5,123
| 65,902,816
|
Removing stop words from a pandas column
|
<pre><code>import nltk
nltk.download('punkt')
nltk.download('stopwords')
import datetime
import numpy as np
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
# Load the Pandas libraries with alias 'pd'
import pandas as pd
# Read data from file 'filename.csv'
# (in the same directory that your python process is based)
# Control delimiters, rows, column names with read_csv (see later)
data = pd.read_csv("march20_21.csv")
# Preview the first 5 lines of the loaded data
#drop NA rows
data.dropna()
#drop all columns not needed
droppeddata = data.drop(columns=['created_at'])
#drop NA rows
alldata = droppeddata.dropna()
ukdata = alldata[alldata.place.str.contains('England')]
ukdata.drop(columns=['place'])
ukdata['text'].apply(word_tokenize)
eng_stopwords = stopwords.words('english')
</code></pre>
<p>I know there are a lot of redundant variables, but I'm still working on getting it working before going back to refine it.</p>
<p>I am unsure on how to remove the stopwords, stored in the variable, from the tokenised columns. Any help is appreciated, I am brand new to Python! Thanks.</p>
|
<ol>
<li><p>after applying a function to a column you need to assign the result back to the column, it's not an in-place operation.</p>
</li>
<li><p>after tokenization <code>ukdata['text']</code> holds a <code>list</code> of words, so you can use a <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">list comprehension</a> in the apply to remove the stop words.</p>
</li>
</ol>
<br>
<pre><code>ukdata['text'] = ukdata['text'].apply(word_tokenize)
eng_stopwords = stopwords.words('english')
ukdata['text'] = ukdata['text'].apply(lambda words: [word for word in words if word not in eng_stopwords])
</code></pre>
<hr>
Minimal example:
<pre><code>import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
eng_stopwords = stopwords.words('english')
ukdata = pd.DataFrame({'text': ["This is a sentence."]})
ukdata['text'] = ukdata['text'].apply(word_tokenize)
ukdata['text'] = ukdata['text'].apply(lambda words: [word for word in words if word not in eng_stopwords])
</code></pre>
|
python|pandas|dataframe|tokenize
| 1
|
5,124
| 65,777,807
|
Python Pandas: How to get previous dates with a condition
|
<p>I have dataset with the following columns:</p>
<ul>
<li>encounterDate</li>
<li>FormName</li>
<li>PersonID</li>
</ul>
<p>In the FormName, I have the following forms:</p>
<ol>
<li>Baseline</li>
<li>Follow-up</li>
</ol>
<p>Here's a sample data:</p>
<pre><code>encounterDate, FormName, PersonID
2019-01-12, Baseline, 01
2020-01-01, Baseline, 01
2019-04-12, Follow-up, 01
2019-13-12, Follow-up, 01
2020-15-01, Follow-up, 01
</code></pre>
<p>I would like to have the following table:</p>
<pre><code>encounterDate, FormName, PersonID, Previous_date
2019-01-12, Baseline, 01, null
2020-01-01, Baseline, 01, 2019-01-12
2019-04-12, Follow-up, 01, null
2019-13-12, Follow-up, 01, 2019-04-12
2020-15-01, Follow-up, 01, 2019-13-12
</code></pre>
<p>How do I write this code in Python?</p>
<p>Additionally, I would like to also rank them:</p>
<pre><code>encounterDate, FormName, PersonID, Previous_date, Rank
2019-01-12, Baseline, 01, null, 1
2020-01-01, Baseline, 01, 2019-01-12, 2
2019-04-12, Follow-up, 01, null, 1
2019-13-12, Follow-up, 01, 2019-04-12, 2
2020-15-01, Follow-up, 01, 2019-13-12, 3
</code></pre>
<p>Here's my working code in SQL</p>
<pre><code>select encounter_date,FormName,PersonID
, date((select max(enc.encounter_datetime)
from encounter enc
where enc.patient_id=e.patient_id
and enc.encounter_type=e.encounter_type
and date(e.encounter_datetime)>date(enc.encounter_datetime))) previous_date
from encounter e
</code></pre>
<p>Thank you in advance.</p>
<p>John</p>
|
<p>This would be fairly straight forward in <code>pandas</code></p>
<p>It looks like you need to <code>groupby</code> both the <code>PersonID</code> and <code>FormName</code> to get the proper groupings. Within those groups you need to shift <code>encounterDate</code> and you need a cumulative count of the same.</p>
<p><code>cumcount</code> starts at zero, so you may want to add 1 to the rank column to get the desired output.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'encounterDate': ['2019-01-12','2020-01-01','2019-04-12','2019-13-12','2020-15-01'],
'FormName': ['Baseline','Baseline','Follow-up','Follow-up','Follow-up'],
'PersonID': [1, 1, 1, 1, 1]
})
df[['Previous_date',
'Rank']] = df.groupby(['PersonID',
'FormName']).agg(Previous_date=('encounterDate','shift'),
Rank=('encounterDate','cumcount'))
df['Rank']+=1
</code></pre>
<p>Output</p>
<pre><code> encounterDate FormName PersonID Previous_date Rank
0 2019-01-12 Baseline 1 NaN 1
1 2020-01-01 Baseline 1 2019-01-12 2
2 2019-04-12 Follow-up 1 NaN 1
3 2019-13-12 Follow-up 1 2019-04-12 2
4 2020-15-01 Follow-up 1 2019-13-12 3
</code></pre>
|
python|pandas
| 1
|
5,125
| 21,201,618
|
pandas.merge: match the nearest time stamp >= the series of timestamps
|
<p>I have two dataframes, both of which contain an irregularly spaced, millisecond resolution timestamp column. My goal here is to match up the rows so that for each matched row, 1) the first time stamp is always smaller or equal to the second timestamp, and 2) the matched timestamps are the closest for all pairs of timestamps satisfying 1). </p>
<p>Is there any way to do this with pandas.merge?</p>
|
<p><code>merge()</code> can't do this kind of join, but you can use <code>searchsorted()</code>:</p>
<p>Create some random timestamps: <code>t1</code>, <code>t2</code>, there are in ascending order:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
base = np.array(["2013-01-01 00:00:00"], "datetime64[ns]")
a = (np.random.rand(30)*1000000*1000).astype(np.int64)*1000000
t1 = base + a
t1.sort()
b = (np.random.rand(10)*1000000*1000).astype(np.int64)*1000000
t2 = base + b
t2.sort()
</code></pre>
<p>call <code>searchsorted()</code> to find index in <code>t1</code> for every value in <code>t2</code>:</p>
<pre><code>idx = np.searchsorted(t1, t2) - 1
mask = idx >= 0
df = pd.DataFrame({"t1":t1[idx][mask], "t2":t2[mask]})
</code></pre>
<p>here is the output:</p>
<pre><code> t1 t2
0 2013-01-02 06:49:13.287000 2013-01-03 16:29:15.612000
1 2013-01-05 16:33:07.211000 2013-01-05 21:42:30.332000
2 2013-01-07 04:47:24.561000 2013-01-07 04:53:53.948000
3 2013-01-07 14:26:03.376000 2013-01-07 17:01:35.722000
4 2013-01-07 14:26:03.376000 2013-01-07 18:22:13.996000
5 2013-01-07 14:26:03.376000 2013-01-07 18:33:55.497000
6 2013-01-08 02:24:54.113000 2013-01-08 12:23:40.299000
7 2013-01-08 21:39:49.366000 2013-01-09 14:03:53.689000
8 2013-01-11 08:06:36.638000 2013-01-11 13:09:08.078000
</code></pre>
<p>To view this result by graph:</p>
<pre><code>import pylab as pl
pl.figure(figsize=(18, 4))
pl.vlines(pd.Series(t1), 0, 1, colors="g", lw=1)
pl.vlines(df.t1, 0.3, 0.7, colors="r", lw=2)
pl.vlines(df.t2, 0.3, 0.7, colors="b", lw=2)
pl.margins(0.02)
</code></pre>
<p>output:</p>
<p><img src="https://i.stack.imgur.com/H5dbR.png" alt="enter image description here"></p>
<p>The green lines are <code>t1</code>, blue lines are <code>t2</code>, red lines are selected from <code>t1</code> for every <code>t2</code>.</p>
|
python|pandas
| 32
|
5,126
| 21,403,398
|
Troubleshooting Latex table from pandas dataframe to_latex()
|
<p>I am unable to visualize the Latex table generated using <code>dataframe.to_latex()</code> from pandas in my IPython notebook. It shows the exact string with the <code>"\begin.."</code> line in a box.</p>
<p>I am also curious why the table is formatted <code>{lrrrrr}</code> and how I can change it to columns with lines separating the values, like <code>{l|c|c|c|c}</code>. </p>
<p>I'm not quite sure if my setup is the issue, and I am wondering if there is further documentation for formatting LaTeX-rendered tables using <code>pandas.dataframe.to_latex()</code>. I use IPython notebook (0.132) and Pandas (0.13). I'm running Ubuntu 13.04, Texlive2012. </p>
<p>IPython Notebook code:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(np.random.random((5, 5)))
print df.to_latex()
</code></pre>
<p>IPython notebook output even after copying and running as markdown, only a box around the text is added. </p>
<pre class="lang-tex prettyprint-override"><code>\begin{tabular}{lrrrrr}
\toprule
{} & 0 & 1 & 2 & 3 & 4 \\
\midrule
0 & 0.021896 & 0.716925 & 0.599158 & 0.260573 & 0.665406 \\
1 & 0.467573 & 0.235992 & 0.557386 & 0.640438 & 0.528914 \\
2 & 0.872155 & 0.053389 & 0.419169 & 0.613036 & 0.606046 \\
3 & 0.130878 & 0.732334 & 0.168879 & 0.039845 & 0.289991 \\
4 & 0.247346 & 0.370549 & 0.906652 & 0.228841 & 0.766951 \\
\bottomrule
\end{tabular}
</code></pre>
<p>I would appreciate any help as I am still very new to pandas and the wonderful things it can do with the SciPy suite! </p>
|
<p>1) The live Notebook does not support full LaTeX, only math. Tables are not math, hence tables are not rendered. </p>
<p>2) You are printing your LaTeX representation, so you are not triggering the display hook. Hence only the text will show. </p>
<p>3) Do not "choose" the representation of an object yourself; just <code>from IPython.display import display</code> and <code>display(object)</code>, and let IPython handle the rest. In your case you will get a nice HTML table in the notebook. <code>to_latex</code>, <code>to_xls</code>, etc. are meant for people that know what they are doing with the raw data.</p>
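<p>A sketch of the difference in a notebook cell:</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import display
display(df)  # rendered as an HTML table, instead of printing raw LaTeX text
</code></pre>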
<p>Also, in general, try to avoid asking many questions in the same post. And IPython 0.13.2 is <strong>really old</strong>; you should think of updating.</p>
|
python|pandas|latex|ipython-notebook|ubuntu-13.04
| 3
|
5,127
| 21,160,134
|
Flatten a column with value of type list while duplicating the other column's value accordingly in Pandas
|
<p>Dear power Pandas experts:</p>
<p>I'm trying to implement a function to flatten a column of a dataframe whose elements may be of type list. For each row where that column holds a list, all columns except the designated one should be duplicated, while the designated column gets one of the values from the list. </p>
<p>The following illustrate my requirements:</p>
<pre><code>input = DataFrame({'A': [1, 2], 'B': [['a', 'b'], 'c']})
A B
0 1 [a, b]
1 2 c
expected = DataFrame({'A': [1, 1, 2], 'B': ['a', 'b', 'c']}, index=[0, 0, 1])
A B
0 1 a
0 1 b
1 2 c
</code></pre>
<p>I feel that there might be an elegant solution/concept for it, but I'm struggling. </p>
<p>Here is my attempt, which does not work yet.</p>
<pre><code>def flattenColumn(df, column):
'''column is a string of the column's name.
for each value of the column's element (which might be a list), duplicate the rest of columns at the correspdonding row with the (each) value.
'''
def duplicate_if_needed(row):
return concat([concat([row.drop(column, axis = 1), DataFrame({column: each})], axis = 1) for each in row[column][0]])
return df.groupby(df.index).transform(duplicate_if_needed)
</code></pre>
<hr>
<p>In recognition of alko's help, here is my trivial generalization of the solution to deal with more than 2 columns in a dataframe:</p>
<pre><code>def flattenColumn(input, column):
'''
column is a string of the column's name.
for each value of the column's element (which might be a list),
duplicate the rest of columns at the corresponding row with the (each) value.
'''
column_flat = pandas.DataFrame(
[
[i, c_flattened]
for i, y in input[column].apply(list).iteritems()
for c_flattened in y
],
columns=['I', column]
)
column_flat = column_flat.set_index('I')
return (
input.drop(column, 1)
.merge(column_flat, left_index=True, right_index=True)
)
</code></pre>
<p>The only limitation at the moment is that the order of columns changed, the column flatten would be at the right most, not in its original position. It should be feasible to fix.</p>
|
<p>I guess the easiest way to flatten the lists would be pure Python code, as this object type is not well suited for pandas or numpy. So you can do it, for example, with</p>
<pre><code>>>> b_flat = pd.DataFrame([[i, x]
... for i, y in input['B'].apply(list).iteritems()
... for x in y], columns=list('IB'))
>>> b_flat = b_flat.set_index('I')
</code></pre>
<p>Having the B column flattened, you can merge it back:</p>
<pre><code>>>> input[['A']].merge(b_flat, left_index=True, right_index=True)
A B
0 1 a
0 1 b
1 2 c
[3 rows x 2 columns]
</code></pre>
<p>If you want the index to be recreated, as in your expected result, you can add <code>.reset_index(drop=True)</code> to the last command.</p>
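<p>For readers on a newer pandas (0.25 or later), <code>DataFrame.explode</code> now does this in one call; a sketch with the question's data:</p>
<pre><code>>>> input = pd.DataFrame({'A': [1, 2], 'B': [['a', 'b'], 'c']})
>>> input.explode('B')  # scalar entries like 'c' are kept as-is
   A  B
0  1  a
0  1  b
1  2  c
</code></pre>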
|
python|pandas|dataframe|data-manipulation
| 12
|
5,128
| 63,592,683
|
Wondering why scipy.spatial.distance.sqeuclidean is twice slower than numpy.sum((y1-y2)**2)
|
<p>Here is my code</p>
<pre><code>import numpy as np
import time
from scipy.spatial import distance
y1=np.array([0,0,0,0,1,0,0,0,0,0])
y2=np.array([0. , 0.1, 0. , 0. , 0.7, 0.2, 0. , 0. , 0. , 0. ])
start_time = time.time()
for i in range(1000000):
distance.sqeuclidean(y1,y2)
print("--- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>---15.212640523910522 seconds---</p>
<pre><code>start_time = time.time()
for i in range(1000000):
np.sum((y1-y2)**2)
print("--- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>---8.381187438964844 seconds---</p>
<p>I supposed that SciPy is optimized, so it should be faster.</p>
<p>Any comments will be appreciated.</p>
|
<p>Here is a more comprehensive comparison (credit to @Divakar's <code>benchit</code> package):</p>
<pre><code>def m1(y1,y2):
return distance.sqeuclidean(y1,y2)
def m2(y1,y2):
return np.sum((y1-y2)**2)
in_ = {n:[np.random.rand(n), np.random.rand(n)] for n in [10,100,1000,10000,20000]}
</code></pre>
<p><a href="https://i.stack.imgur.com/tJvrJm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tJvrJm.png" alt="enter image description here" /></a></p>
<p>SciPy gets more efficient for larger arrays. For smaller arrays, the overhead of calling the function most likely outweighs its benefit. According to the <a href="https://github.com/scipy/scipy/blob/v1.5.2/scipy/spatial/distance.py#L617-L668" rel="nofollow noreferrer">source</a>, SciPy calculates <code>np.dot(y1-y2,y1-y2)</code>.</p>
<p>And if you want an even faster solution, use <code>np.dot</code> directly without the overhead of extra lines and function calling:</p>
<pre><code>def m3(y1,y2):
y_d = y1-y2
return np.dot(y_d,y_d)
</code></pre>
<p><a href="https://i.stack.imgur.com/5u0Tbm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5u0Tbm.png" alt="enter image description here" /></a></p>
|
python|performance|numpy|scipy|matrix-multiplication
| 6
|
5,129
| 63,675,559
|
how can i adjust space between elements of numpy array?
|
<p>I want this type of output:</p>
<pre><code> [[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
</code></pre>
<p>But I am getting this:</p>
<pre><code> [[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
</code></pre>
<p>My code is this :</p>
<pre><code>import numpy
size = 3  # e.g. a 3x3 identity matrix
print(numpy.identity(size))
</code></pre>
|
<p>Use <strong>numpy.arange(start, stop, step)</strong>
as follows:</p>
<pre><code>import numpy as np
arr2 = np.arange(1, 9, 1)  # start=1, stop=9 (exclusive), step=1
print(arr2)
</code></pre>
<p>output</p>
<pre><code>[1 2 3 4 5 6 7 8]
</code></pre>
|
python|numpy|formatting|spacing
| 0
|
5,130
| 63,400,447
|
Pandas : how to consider content of certain columns as list
|
<p>Let's say I have a simple pandas dataframe named df :</p>
<pre><code> 0 1
0 a [b, c, d]
</code></pre>
<p>I save this dataframe into a CSV file as follow :</p>
<pre><code>df.to_csv("test.csv", index=False, sep="\t", encoding="utf-8")
</code></pre>
<p>Then later in my script I read this csv :</p>
<pre><code>df = pd.read_csv("test.csv", index_col=False, sep="\t", encoding="utf-8")
</code></pre>
<p>Now what I want to do is to use explode() on column '1' but it does not work because the content of column '1' is not a list since I saved df into a CSV file.</p>
<p>What I tried so far is to change column '1' type into a list with astype() without any success.</p>
<p>Thank you in advance.</p>
|
<p>Try this. Since you are reading from a csv file, your dataframe's value in column <code>A</code> (<code>1</code> in your case) is essentially a string, which you need to re-interpret as a list.</p>
<pre><code>import pandas as pd
import ast
df=pd.DataFrame({"A":["['a','b']","['c']"],"B":[1,2]})
df["A"]=df["A"].apply(lambda x: ast.literal_eval(x))
</code></pre>
<p>Now, the following works !</p>
<pre><code>df.explode("A")
</code></pre>
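<p>Alternatively, a sketch (assuming the file was written by <code>to_csv</code> exactly as in the question): parse the column at read time via the <code>converters</code> parameter of <code>read_csv</code>:</p>
<pre><code>import ast
import pandas as pd

# after the round trip through to_csv, the column header is the string "1"
df = pd.read_csv("test.csv", index_col=False, sep="\t", encoding="utf-8",
                 converters={"1": ast.literal_eval})
df.explode("1")
</code></pre>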
|
python-3.x|pandas|dataframe
| 0
|
5,131
| 53,546,605
|
What's happened to pandas data reader using stooq?
|
<p>I have built a set of scripts that obtain and perform technical analysis on stocks using pandas-datareader. It has been up and working now for 6-8 months without problems.</p>
<p>Suddenly this week the datareader function is returning empty frames, e.g.:</p>
<pre><code>import pandas_datareader as web
BAC = web.DataReader("AAPL.US", "stooq")
print BAC
</code></pre>
<p>Has the format changed? Do I need to phrase it differently now?</p>
<p>Many thanks </p>
|
<p>A bug was introduced in 0.7.0 which automatically appends <code>.US</code> to every ticker. This is likely being reverted in 0.8.0 (see this pull request), but for now, use:</p>
<p><code>
BAC = web.DataReader("AAPL", "stooq")
</code></p>
<p>as a workaround.</p>
|
python-2.7|pandas-datareader
| 0
|
5,132
| 53,459,234
|
How to get the Numpy array of file stream of any image
|
<p>I'm trying to use the imageai python library, and more particularly this function:</p>
<pre><code>detector.detectObjectsFromImage()
</code></pre>
<p>The doc says it should be used with a Numpy array of file stream of any image.</p>
<p><a href="https://imageai.readthedocs.io/en/latest/detection/index.html" rel="nofollow noreferrer">https://imageai.readthedocs.io/en/latest/detection/index.html</a></p>
<p>When I pass it a Numpy array, like this:</p>
<pre><code>detections = detector.detectObjectsFromImage(input_image=anumpyarray,input_type = "array")
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>detections =
detector.detectObjectsFromImage(input_image=anumpyarray,input_type =
"array") File
"/usr/local/lib/python3.6/site-packages/imageai/Detection/<strong>init</strong>.py",
line 517, in detectObjectsFromImage raise ValueError("Ensure you
specified correct input image, input type, output type and/or output
image path ") ValueError: Ensure you specified correct input image,
input type, output type and/or output image path</p>
</blockquote>
<p>Is it because a Numpy array and a Numpy array of a stream of an image are different things?</p>
|
<p>I know it's old, but for anyone who needs help: </p>
<p>Try to set 2 additional params:</p>
<pre class="lang-py prettyprint-override"><code>minimum_percentage_probability=0, output_type='array'
</code></pre>
<p>For more info, go into <code>imageai\Detection\__init__.py -> detectObjectsFromImage</code></p>
|
numpy
| 0
|
5,133
| 53,364,292
|
Finding the probability of a variable in collection of lists
|
<p>I have a selection of lists of variables</p>
<pre><code>import numpy.random as npr
w = [0.02, 0.03, 0.05, 0.07, 0.11, 0.13, 0.17]
x = 1
y = False
z = [0.12, 0.2, 0.25, 0.05, 0.08, 0.125, 0.175]
v = npr.choice(w, x, y, z)
</code></pre>
<p>I want to find the probability of the value <code>v</code> being one of the variables, e.g. <code>False</code> or <code>0.12</code>. </p>
<p>How do I do this?
Here's what I've tried:</p>
<pre><code> import numpy.random as npr
import math
w = [0.02, 0.03, 0.05, 0.07, 0.11, 0.13, 0.17]
x = 1
y = False
z = [0.12, 0.2, 0.25, 0.05, 0.08, 0.125, 0.175]
v = npr.choice(w, x, y, z)
from collections import Counter
c = Counter(0.02, 0.03, 0.05, 0.07, 0.11, 0.13, 0.17,1,False,0.12, 0.2, 0.25, 0.05, 0.08, 0.125, 0.175)
def probability(0.12):
return float(c[v]/len(w,x,y,z))
</code></pre>
<p>but I'm getting that <code>0.12</code> is invalid syntax.</p>
|
<p>There are several issues in the code; I think you want the following:</p>
<pre><code>import numpy.random as npr
import math
from collections import Counter
def probability(v=0.12):
return float(c[v]/len(combined))
w = [0.02, 0.03, 0.05, 0.07, 0.11, 0.13, 0.17]
x = [1]
y = [False]
z = [0.12, 0.2, 0.25, 0.05, 0.08, 0.125, 0.175]
combined = w + x + y + z
v = npr.choice(combined)
c = Counter(combined)
print(probability())
print(probability(v=0.05))
</code></pre>
<p>1) <code>def probability(0.12)</code> does not make sense; you will have to pass a variable which can also have a default value (above I use <code>0.12</code>)</p>
<p>2) <code>len(w, x, y, z)</code> does not make much sense either; you probably look for a list that combines all the elements of <code>w</code>, <code>x</code>, <code>y</code> and <code>z</code>. I put all of those in the list <code>combined</code>.</p>
<p>3) One would also have to put in an additional check, in case the user passes e.g. <code>v=12345</code> which is not included in <code>combined</code> (I leave this to you).</p>
<p>The above will print</p>
<pre><code>0.0625
0.125
</code></pre>
<p>which gives the expected outcome.</p>
|
python|numpy|random|probability
| 0
|
5,134
| 17,663,393
|
indexing spherical subset of 3d grid data in numpy
|
<p>I have a 3d grid with coordinates</p>
<pre><code>x = linspace(0, Lx, Nx)
y = linspace(0, Ly, Ny)
z = linspace(0, Lz, Nz)
</code></pre>
<p>and I need to index points (i.e. x[i], y[j], z[k]) within some radius R of a position (x0, y0, z0). Nx, Ny, Nz can be quite large. I can do a simple loop to find what I need</p>
<pre><code>points=[]
i0, j0, k0 = floor(array([x0, y0, z0])/grid_spacing).astype(int)
Nr = int(cutoff/grid_spacing) + 2  # number of grid cells spanned by the cutoff radius
for i in range(i0-Nr, i0+Nr):
    for j in range(j0-Nr, j0+Nr):
        for k in range(k0-Nr, k0+Nr):
            if norm(array([i,j,k])*grid_spacing - array([x0, y0, z0])) < cutoff:
                points.append((i,j,k))
<p>but this is quite slow. Is there a more natural/faster way to do this type of operation with numpy?</p>
|
<p>How about this:</p>
<pre><code>import numpy as np
import scipy.spatial as sp
x = np.linspace(0, Lx, Nx)
y = np.linspace(0, Ly, Ny)
z = np.linspace(0, Lz, Nz)
#Manipulate x,y,z here to obtain the dimensions you are looking for
center=np.array([x0,y0,z0])
#First mask the obvious points- may actually slow down your calculation depending.
x=x[abs(x-x0)<cutoff]
y=y[abs(y-y0)<cutoff]
z=z[abs(z-z0)<cutoff]
#Generate grid of points
X,Y,Z=np.meshgrid(x,y,z)
data=np.vstack((X.ravel(),Y.ravel(),Z.ravel())).T
distance=sp.distance.cdist(data,center.reshape(1,-1)).ravel()
points_in_sphere=data[distance<cutoff]
</code></pre>
<p>Instead of the last two lines you should be able to do:</p>
<pre><code>tree=sp.cKDTree(data)
mask=tree.query_ball_point(center,cutoff)
points_in_sphere=data[mask]
</code></pre>
<p>If you don't want to call spatial:</p>
<pre><code>distance=np.power(np.sum(np.power(data-center,2),axis=1),.5)
points_in_sphere=data[distance<cutoff]
</code></pre>
|
python|numpy|3d|grid|subset
| 3
|
5,135
| 20,095,685
|
Python NumPy - How to print array not displaying full set
|
<p>I'm converting a list into a NumPy array:</p>
<pre><code>a = np.array(l) # Where l is the list of data
return a
</code></pre>
<p>But whenever I go to print this array:</p>
<pre><code>print (a)
</code></pre>
<p>I only get a slice of the array:</p>
<blockquote>
<p>[-0.00750732 -0.00741577 -0.00778198 ..., 0.00222778 0.00219727
-0.00048828]</p>
</blockquote>
<p>However, if I print the size, I get the actual size of the array: <code>61238</code>. Could anyone guess where I am going wrong? </p>
|
<p>You can change the summarization options with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html#numpy.set_printoptions" rel="nofollow"><code>set_printoptions</code></a></p>
<pre><code>np.set_printoptions(threshold = your_threshold)
</code></pre>
<p>The threshold parameter sets:</p>
<blockquote>
<p>Total number of array elements which trigger summarization rather than
full repr (default 1000).</p>
</blockquote>
<p>But do you really want to print a huge array?</p>
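<p>If you do, a sketch to disable summarization entirely (<code>sys.maxsize</code> is just a convenient very large threshold):</p>
<pre><code>import sys
import numpy as np

np.set_printoptions(threshold=sys.maxsize)  # never summarize; print every element
print(a)
</code></pre>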
|
python|arrays|numpy
| 3
|
5,136
| 12,576,313
|
Convert excel or csv file to pandas multilevel dataframe
|
<p>I've been given a reasonably large Excel file (5k rows), also as a CSV, that I would like to make into a pandas multilevel DataFame. The file is structured like this:</p>
<pre><code>SampleID OtherInfo Measurements Error Notes
sample1 stuff more stuff
36 6
26 7
37 8
sample2 newstuff lots of stuff
25 6
27 7
</code></pre>
<p>where the number of measurements is variable (and sometimes zero). There is no full blank row in between any of the information, and the 'Measurements' and 'Error' columns are empty on rows that have the other (string) data; this might make it harder to parse(?). Is there an easy way to automate this conversion? My initial idea is to parse the file with Python first and then feed stuff into DataFrame slots in a loop, but I don't know exactly how to implement it, or if it is even the best course of action.</p>
<p>Thanks in advance!</p>
|
<p>Looks like your file has fixed width columns, for which read_fwf() can be used.</p>
<pre><code>In [145]: data = """\
SampleID OtherInfo Measurements Error Notes
sample1 stuff more stuff
36 6
26 7
37 8
sample2 newstuff lots of stuff
25 6
27 7
"""
In [146]: df = pandas.read_fwf(StringIO(data), widths=[12, 13, 14, 9, 15])
</code></pre>
<p>OK, now we have the data (<code>StringIO</code> in the snippet above comes from the standard library's StringIO/io module). Just a little bit of extra work and you have a frame on which you can use set_index() to create a MultiIndex.</p>
<pre><code>In [147]: df[['Measurements', 'Error']] = df[['Measurements', 'Error']].shift(-1)
In [148]: df[['SampleID', 'OtherInfo', 'Notes']] = df[['SampleID', 'OtherInfo', 'Notes']].fillna()
In [150]: df = df.dropna()
In [151]: df
Out[151]:
SampleID OtherInfo Measurements Error Notes
0 sample1 stuff 36 6 more stuff
1 sample1 stuff 26 7 more stuff
2 sample1 stuff 37 8 more stuff
4 sample2 newstuff 25 6 lots of stuff
5 sample2 newstuff 27 7 lots of stuff
</code></pre>
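<p>And for the final step, a sketch continuing the session above to build the MultiIndex itself:</p>
<pre><code>In [152]: df = df.set_index(['SampleID', 'OtherInfo', 'Notes'])

In [153]: df.index.nlevels
Out[153]: 3
</code></pre>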
|
python|excel|csv|dataframe|pandas
| 5
|
5,137
| 71,980,410
|
How to draw the smooth lineplot and display the dates on the x-axis with python?
|
<p>I would really appreciate it if you could point me to where to look. I have been trying for 3 days and still can't find the right approach. I need to draw a chart that looks like the one in the first picture, and I need to display the dates on the x-axis as in the second chart. I am a complete beginner with seaborn, Python and everything. I used lineplot first, which only met one criterion: it displays the dates on the x-axis, but the lines are sharp like in the second picture rather than smooth like in the first. Then I kept digging and found lmplot. With that, I could get the chart design I wanted (a smoothed chart), but when I tried to display the dates on the x-axis, it didn't work. I got the error <code>could not convert string to float: '2022-07-27T13:31:00Z'</code>.</p>
<p><strong>Here is the code for lmplot; it gives the wanted plot design, but the dates can't be displayed on the x-axis</strong></p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
T = np.array([ "2022-07-27T13:31:00Z",
"2022-08-28T13:31:00Z",
"2022-09-29T13:31:00Z",
"2022-10-30T13:31:00Z",])
power = np.array([10,25,60,42])
df = pd.DataFrame(data = {'T': T, 'power': power})
sns.lmplot(x='T', y='power', data=df, ci=None, order=4, truncate=False)
</code></pre>
<p>If I use numbers instead of dates, the output is this. Exactly as I need:</p>
<p><a href="https://i.stack.imgur.com/WRkO3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRkO3.png" alt="enter image description here" /></a></p>
<p><strong>Here is the code with which all the data gets displayed correctly. But, the plot design is not smoothed.</strong></p>
<pre><code>import seaborn as sns
import numpy as np
import scipy
import matplotlib.pyplot as plt
import pandas as pd
from pandas.core.apply import frame_apply
years = ["2022-03-22T13:30:00Z",
"2022-03-23T13:31:00Z",
"2022-04-24T19:27:00Z",
"2022-05-25T13:31:00Z",
"2022-06-26T13:31:00Z",
"2022-07-27T13:31:00Z",
"2022-08-28T13:31:00Z",
"2022-09-29T13:31:00Z",
"2022-10-30T13:31:00Z",
]
feature_1 =[0,
6,
1,
5,
9,
15,
21,
4,
1,
]
data_preproc = pd.DataFrame({
'Period': years,
# 'Feature 1': feature_1,
# 'Feature 2': feature_2,
# 'Feature 3': feature_3,
# 'Feature 4': feature_4,
"Feature 1" :feature_1
})
data_preproc['Period'] = pd.to_datetime(data_preproc['Period'],
format="%Y-%m-%d",errors='coerce')
data_preproc['Period'] = data_preproc['Period'].dt.strftime('%b')
# aiAlertPlot =sns.lineplot(x='Period', y='value', hue='variable',ci=None,
# data=pd.melt(data_preproc, ['Period']))
sns.lineplot(x="Period",y="Feature 1",data=data_preproc)
# plt.xticks(np.linspace(start=0, stop=21, num=52))
plt.xticks(rotation=90)
plt.legend(title="features")
plt.ylabel("Alerts")
plt.legend(loc='upper right')
plt.show()
</code></pre>
<p>The output is this. Correct data, wrong chart design.
<a href="https://i.stack.imgur.com/IOeiT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IOeiT.png" alt="enter image description here" /></a></p>
|
<p><a href="https://seaborn.pydata.org/generated/seaborn.lmplot.html" rel="nofollow noreferrer">lmplot</a> is a model based method, which requires numeric <code>x</code>. If you think the date values are evenly spaced, you can just create another variable <code>range</code> which is numeric and calculate <code>lmplot</code> on that variable and then change the <code>xticks</code> labels.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
T = np.array([ "2022-07-27T13:31:00Z",
"2022-08-28T13:31:00Z",
"2022-09-29T13:31:00Z",
"2022-10-30T13:31:00Z",])
power = np.array([10,25,60,42])
df = pd.DataFrame(data = {'T': T, 'power': power})
df['range'] = np.arange(df.shape[0])
sns.lmplot(x='range', y='power', data=df, ci=None, order=4, truncate=False)
plt.xticks(df['range'], df['T'], rotation = 45);
</code></pre>
<p><a href="https://i.stack.imgur.com/qrzuV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qrzuV.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy|matplotlib|seaborn
| 0
|
5,138
| 71,929,783
|
Save generated random numbers with numpy in dictionary
|
<p>I have code that generates numbers using NumPy's uniform distribution, and I want to save them in a dictionary.</p>
<pre><code>x = [1391096.378,
9187722.876,
516012.8602,
238575.7104,
228114.5731,
4685929.962,
675871.7576,
2371583.637,
368942.1523,
39899030.28,
2068330.11,
2663072.562,
587247.8119,
2077461.753,
6620623.744,
246850.7431,
313607.552,
1108662.4,
250.7123965,
39.30786207,
1975239.413,
]
t = 0
for i in x:
s = i/31
a = (s + 0.2*s)
b = (s - 0.2*s)
dft = np.random.uniform(a,b,31)
t =+1
</code></pre>
<p>But I don't know how to do that.</p>
|
<p>In this case, the keys of the dictionary will be integers from 0 to 20. If you'd like to have the value from <code>x</code> as a key, then you can substitute <code>t</code> with the loop variable, like this: <code>stored[i] = dft</code>.</p>
<p>Or you can use <a href="https://docs.python.org/3/tutorial/inputoutput.html" rel="nofollow noreferrer">f-strings</a> to be more creative with the dictionary's keys</p>
<pre><code>t = 0
stored = {}
for i in x:
s = i/31
a = (s + 0.2*s)
b = (s - 0.2*s)
dft = np.random.uniform(a,b,31)
stored[t] = dft
t += 1
</code></pre>
|
pandas|numpy|dictionary|random
| 0
|
5,139
| 72,135,587
|
Efficient way to update values in a GeoDataFrame based on the result of DataFrame.within method
|
<p>I have two large GeoDataFrames:</p>
<p>One came from a shapefile where each polygon has a float value called 'asapp'.</p>
<p>The second holds the centroids of a 3x3 meter fishnet grid, with an 'asapp' column zeroed.</p>
<p>What I need is to fill the 'asapp' of the second one based on the polygon of the first that each centroid falls within.</p>
<p>The code below does this, but at a ridiculously low rate of 15 polygons per second (one of the smallest shapefiles has more than 20000 polygons).</p>
<pre class="lang-py prettyprint-override"><code># fishnet_grid is a dict created by GDAL with a raster with 3m pixel size
cells_in_wsg = np.array([(self.__convert_geom_sirgas(geom, ogr_transform), int(fid), 0.0) for fid, geom in fishnet_grid.items()])
# transforming the grid raster (which are square polygons) in a GeoDataframe of point using the centroids of the cells
fishnet_base = gpd.GeoDataFrame({'geometry': cells_in_wsg[..., 0], 'id': cells_in_wsg[..., 1], 'asapp': cells_in_wsg[..., 2]})
fishnet = gpd.GeoDataFrame({'geometry': fishnet_base.centroid, 'id': fishnet_base['id'], 'asapp': fishnet_base['asapp']})
# as_applied_data is the polygons GeoDataFrame
# the code below takes a lot of time to complete
for as_applied in as_applied_data.iterrows():
fishnet.loc[fishnet.within(as_applied[1]['geometry']), ['asapp']] += [as_applied[1]['asapp']]
</code></pre>
<p>Is there another way to do this with better performance?</p>
<p>Thanks!</p>
|
<p>I solved the problem.</p>
<p>I read about using <code>geopandas.overlay</code> (<a href="https://geopandas.org/en/stable/docs/user_guide/set_operations.html" rel="nofollow noreferrer">https://geopandas.org/en/stable/docs/user_guide/set_operations.html</a>), which works well with a lot of polygons, but the problem is that it works only with polygons, and I had polygons and points.</p>
<p>So my solution was to create very small polygons (2 cm squares) from the points and then use the overlay.</p>
<p>The final code:</p>
<pre class="lang-py prettyprint-override"><code># fishnet is now a GeoDataFrame of little squares
fishnet = gpd.GeoDataFrame({'geometry': cells_in_wsg[..., 0], 'id': cells_in_wsg[..., 1]})
#intersection has only the little squares that intersects with all as_applied_data polygons and the value in those polygons
intersection = gpd.overlay(fishnet, as_applied_data, how='intersection')
# now this is as easy as to calculate the mean and put it back in the fishnet using the merge
values = fishnet.merge(intersection.groupby(['id'], as_index=False).mean())
#and values has the the little squares, the geom_id and the mean values of the intersections!
</code></pre>
<p>It worked very well!</p>
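<p>For reference, an untested sketch against the data above: a spatial join keeps the centroids as points and avoids building the 2 cm squares.</p>
<pre class="lang-py prettyprint-override"><code>import geopandas as gpd

# one row per centroid that falls within a polygon, with that polygon's 'asapp' attached
joined = gpd.sjoin(fishnet[['geometry', 'id']],
                   as_applied_data[['geometry', 'asapp']],
                   how='inner', predicate='within')  # older geopandas versions use op='within'
</code></pre>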
|
python|performance|geopandas
| 2
|
5,140
| 19,079,819
|
Pandas dataframe column multiplication
|
<p>How do I multiply two DataFrames using selected columns and store the result in a new column? For example:</p>
<p>df1:</p>
<pre><code> AAPL IBM GOOG XOM
2011-01-10 16:00:00 1500 0 0 0
</code></pre>
<p>df2:</p>
<pre><code>AAPL IBM GOOG XOM
340.99 143.41 614.21 72.02
340.18 143.06 616.01 72.56
</code></pre>
<p>I want to multiply AAPL in df1 with 340.99 in df2, and store the result in <code>transaction_amount</code>.</p>
|
<pre><code>transaction_amount = np.diag(df1.dot(df2.T))
</code></pre>
<p>Basically, to do what you want, you need some form of dot product of df1 with df2:</p>
<pre><code>df1.dot(df2)
</code></pre>
<p>but since both frames have tickers as columns, their shapes don't line up for a dot product, so you need to transpose one of the DataFrames</p>
<pre><code>df2.T
</code></pre>
<p>And if you understand how matrix dot products work you'll understand you only want the array data from the diagonal of the resulting matrix. ie: You only want (AAPL price of day X * AAPL shares of day Y, where X == Y) Therefore, the values in the matrix that are relevant to you are located at (0,0), (1,1), (2,2), etc ie: the diagonal.</p>
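<p>A self-contained sketch with the numbers from the question (df1 is extended with a hypothetical second day so that the shapes, and therefore the diagonal, line up):</p>
<pre><code>import numpy as np
import pandas as pd

cols = ['AAPL', 'IBM', 'GOOG', 'XOM']
df1 = pd.DataFrame([[1500, 0, 0, 0],
                    [0, 100, 0, 0]], columns=cols)  # shares traded per day
df2 = pd.DataFrame([[340.99, 143.41, 614.21, 72.02],
                    [340.18, 143.06, 616.01, 72.56]], columns=cols)  # prices per day

transaction_amount = np.diag(df1.dot(df2.T))
print(transaction_amount)  # [511485.  14306.], the value of the shares traded each day
</code></pre>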
<p>This line will also be useful when calculating portfolio value once you use cumsum to create a holding matrix.</p>
<p>Some helpful sources</p>
<p><a href="http://www.mathsisfun.com/algebra/matrix-multiplying.html" rel="nofollow">http://www.mathsisfun.com/algebra/matrix-multiplying.html</a>
<a href="http://mathinsight.org/dot_product_matrix_notation" rel="nofollow">http://mathinsight.org/dot_product_matrix_notation</a></p>
|
python|numpy|pandas
| 0
|
5,141
| 22,190,638
|
Python: Round time to the nearest second and minute
|
<p>I have a <code>DataFrame</code> with a column <code>Timestamp</code> where each values are the number of seconds since midnight with nanosecond precision. For example:</p>
<pre><code>Timestamp
34200.984537482
34201.395432198
</code></pre>
<p>and so on. 34200 seconds since midnight is 9:30:00am. </p>
<p>I would like to create new columns in my <code>dataframe</code>, <code>Second</code> and <code>Minute</code>, where I round the <code>Timestamp</code> up to the nearest second and minute (forward looking). So</p>
<pre><code>Timestamp Second Minute
34200.984537482 34201 34260
34201.395432198 34202 34260
</code></pre>
<p>How can I do this in Python? Also, should I use Pandas' <code>DateTimeIndex</code>? Once I round the time, I will compute the time difference between each timestamps so maybe DateTimeIndex is more appropriate. </p>
|
<p>There is a Series <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.round.html" rel="nofollow">round</a> method:</p>
<pre><code>In [11]: df.Timestamp.round()
Out[11]:
0 34201
1 34201
Name: Timestamp, dtype: float64
In [12]: df.Timestamp.round(1)
Out[12]:
0 34201.0
1 34201.4
Name: Timestamp, dtype: float64
In [13]: df.Timestamp.round(-1)
Out[13]:
0 34200
1 34200
Name: Timestamp, dtype: float64
</code></pre>
<p>I recommend using datetime64 or DatetimeIndex rather than as seconds from midnight... keeping time is <strong>hard</strong>.</p>
<p>One simple way to get a proper datetime column:</p>
<pre><code>In [21]: pd.Timestamp('2014-03-04') + df.Timestamp.apply(pd.offsets.Second)
Out[21]:
0 2014-03-04 09:30:00
1 2014-03-04 09:30:01
Name: Timestamp, dtype: datetime64[ns]
</code></pre>
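<p>If you specifically want the forward-looking (round-up) columns from the question, <code>numpy.ceil</code> is one way; a sketch:</p>
<pre><code>In [22]: import numpy as np

In [23]: df['Second'] = np.ceil(df.Timestamp).astype(int)

In [24]: df['Minute'] = (np.ceil(df.Timestamp / 60) * 60).astype(int)

In [25]: df
Out[25]:
      Timestamp  Second  Minute
0  34200.984537   34201   34260
1  34201.395432   34202   34260
</code></pre>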
|
python|time|pandas
| 4
|
5,142
| 4,387,878
|
simulator of realistic ECG signal from rr data for matlab or python
|
<p>I have a series of rr data (distances between R-R peaks in the PQRST electrocardiogram signal)
and I want to generate a realistic ECG signal in Matlab or Python. I've found some materials for Matlab (the <code>ecg</code> built-in function) but I can't figure out how to generate it from rr data, and I've found nothing for Python. Any advice?</p>
|
<p>Does this suit your needs? If not, please let me know. Good luck.</p>
<pre><code>import scipy
import scipy.signal as sig
import pylab  # for plotting
rr = [1.0, 1.0, 0.5, 1.5, 1.0, 1.0] # rr time in seconds
fs = 8000.0 # sampling rate
pqrst = sig.wavelets.daub(10) # just to simulate a signal, whatever
ecg = scipy.concatenate([sig.resample(pqrst, int(r*fs)) for r in rr])
t = scipy.arange(len(ecg))/fs
pylab.plot(t, ecg)
pylab.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/9vwcE.png" alt="ECG signal"></p>
|
python|matlab|numpy|signal-processing|scipy
| 13
|
5,143
| 8,824,739
|
Global Interpreter Lock and access to data (eg. for NumPy arrays)
|
<p>I am writing a C extension for Python, which should release the Global Interpreter Lock while it operates on data. I think I have understood the mechanism of the GIL fairly well, but one question remains: Can I access data in a Python object while the thread does not own the GIL? For example, I want to read data from a (big) NumPy array in the C function while I still want to allow other threads to do other things on the other CPU cores. The C function should</p>
<ul>
<li>release the GIL with <code>Py_BEGIN_ALLOW_THREADS</code></li>
<li>read and work on the data without using Python functions</li>
<li>even write data to previously constructed NumPy arrays</li>
<li>reacquire the GIL with <code>Py_END_ALLOW_THREADS</code></li>
</ul>
<p>Is this safe? Of course, other threads are not supposed to change the variables which the C function uses. But maybe there is one hidden source for errors: could the Python interpreter move an object, eg. by some sort of garbage collection, while the C function works on it in a separate thread?</p>
<p>To illustrate the question with a minimal example, consider the (minimal but complete) code below. Compile it (on Linux) with</p>
<pre><code>gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -fPIC -I/usr/lib/pymodules/python2.7/numpy/core/include -I/usr/include/python2.7 -c gilexample.c -o gilexample.o
gcc -pthread -shared gilexample.o -o gilexample.so
</code></pre>
<p>and test it in Python with</p>
<pre><code>import gilexample
gilexample.sum([1,2,3])
</code></pre>
<p>Is the code between <code>Py_BEGIN_ALLOW_THREADS</code> and <code>Py_END_ALLOW_THREADS</code> safe? It accesses the contents of a Python object, and I do not want to duplicate the (possibly large) array in memory.</p>
<pre class="lang-c prettyprint-override"><code>#include <Python.h>
#include <numpy/arrayobject.h>
// The relevant function
static PyObject * sum(PyObject * const self, PyObject * const args) {
PyObject * X;
PyArg_ParseTuple(args, "O", &X);
PyObject const * const X_double = PyArray_FROM_OTF(X, NPY_DOUBLE, NPY_ALIGNED);
npy_intp const size = PyArray_SIZE(X_double);
double * const data = (double *) PyArray_DATA(X_double);
double sum = 0;
Py_BEGIN_ALLOW_THREADS // IS THIS SAFE?
npy_intp i;
for (i=0; i<size; i++)
sum += data[i];
Py_END_ALLOW_THREADS
Py_DECREF(X_double);
return PyFloat_FromDouble(sum);
}
// Python interface code
// List the C methods that this extension provides.
static PyMethodDef gilexampleMethods[] = {
{"sum", sum, METH_VARARGS},
{NULL, NULL, 0, NULL} /* Sentinel - marks the end of this structure */
};
// Tell Python about these methods.
PyMODINIT_FUNC initgilexample(void) {
(void) Py_InitModule("gilexample", gilexampleMethods);
import_array(); // Must be present for NumPy.
}
</code></pre>
|
<blockquote>
<p>Is this safe?</p>
</blockquote>
<p><del>Strictly, no. I think you should move the calls to <code>PyArray_SIZE</code> and <code>PyArray_DATA</code> outside the GIL-less block; if you do that, you'll be operating on C data only. You might also want to increment the reference count on the object before going into the GIL-less block and decrement it afterwards.</del></p>
<p>After your edits, it should be safe. Don't forget to decrement the reference count afterwards.</p>
|
python|numpy|python-c-api
| 6
|
5,144
| 55,542,719
|
Make pandas dataframe column of length of lists in entries on other column
|
<p>Suppose I have the dataframe:</p>
<pre><code> df=pd.DataFrame(data={'col1':[[1,4,4,1],[2,3]],'col2':[[1,2],[1,5,2,4]]})
</code></pre>
<p>How can I add a new column to this dataframe whose entry at every row is the length of the corresponding list in, say, col1?</p>
|
<p>use <code>str.len()</code></p>
<pre><code>df['length'] = df.col1.str.len()
print(df)
col1 col2 length
0 [1, 4, 4, 1] [1, 2] 4
1 [2, 3] [1, 5, 2, 4] 2
</code></pre>
|
python|python-3.x|pandas
| 0
|
5,145
| 55,317,695
|
Convert table having string column, array column to all string columns
|
<p>I am trying to convert a table containing string columns and array columns to a table with string columns only</p>
<pre><code>Here is how the current table looks:
+-----+--------------------+--------------------+
|col1 | col2 | col3 |
+-----+--------------------+--------------------+
| 1 |[2,3] | [4,5] |
| 2 |[6,7,8] | [8,9,10] |
+-----+--------------------+--------------------+
How can I get the expected result, like this:
+-----+--------------------+--------------------+
|col1 | col2 | col3 |
+-----+--------------------+--------------------+
| 1 | 2 | 4 |
| 1 | 3 | 5 |
| 2 | 6 | 8 |
| 2 | 7 | 9 |
| 2 | 8 | 10 |
+-----+--------------------+--------------------+
</code></pre>
|
<p>Convert the columns to lists and after that to <code>numpy.array</code>; finally, convert them back to a <code>DataFrame</code>:</p>
<pre><code>vals1 = np.array(df.col2.values.tolist())
vals2 = np.array(df.col3.values.tolist())
col1 = np.repeat(df.col1, vals1.shape[1])
df = pd.DataFrame(np.column_stack((col1, vals1.ravel(), vals2.ravel())), columns=df.columns)
print(df)
col1 col2 col3
0 1 2 4
1 1 3 5
2 2 6 8
3 2 7 9
</code></pre>
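<p>On pandas 1.3 or newer, <code>DataFrame.explode</code> accepts multiple columns, and it also handles rows whose lists are longer than others (like the <code>[6,7,8]</code> row above), as long as <code>col2</code> and <code>col3</code> match in length within each row. A sketch, assuming the columns hold real lists:</p>
<pre><code>df = df.explode(['col2', 'col3'], ignore_index=True)
</code></pre>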
|
python|pandas
| 0
|
5,146
| 55,566,084
|
How to turn a dataframe into a dictionary variable and make a barchart from it
|
<p>I need to convert a dataframe query into a dictionary variable, but it is saving the STATE and occurrence counts as one variable. I need them separate so I can plot a bar chart from it.
Am I doing it wrong?</p>
<p>Code for dataframe to dict</p>
<pre><code>df = pd.DataFrame(data_dict)
df = df['xyz'].value_counts().to_frame()
df.to_dict('dict')
print (dict)
</code></pre>
<p>Code I want to use to plot my bar chart:</p>
<pre><code>data = (dict)
names = list(data.keys())
values = list(data.values())
plt.bar(range(len(data)),values,tick_label=names)
plt.show()
</code></pre>
|
<p>In my opinion, converting back to a <code>dictionary</code> is not necessary; use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.plot.bar.html" rel="nofollow noreferrer"><code>Series.plot.bar</code></a> instead:</p>
<pre><code>df = pd.DataFrame(data_dict)
s = df['STATE'].value_counts()
s.plot.bar()
</code></pre>
<p>Solution with <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html" rel="nofollow noreferrer"><code>pyplot.bar</code></a>:</p>
<pre><code>plt.bar(range(len(s)),s,tick_label=s.index)
</code></pre>
<p>EDIT: You are really close; just remove <code>to_frame</code> to convert the <code>Series</code> to a <code>dict</code>:</p>
<pre><code>df = pd.DataFrame(data_dict)
s = df['STATE'].value_counts()
data = s.to_dict()
names = list(data.keys())
values = list(data.values())
plt.bar(range(len(data)),values,tick_label=names)
plt.show()
</code></pre>
<p>If you want to use the one-column DataFrame, select <code>STATE</code> before converting to <code>dict</code>:</p>
<pre><code>df = df['STATE'].value_counts().to_frame()
#select column `STATE`
data = df['STATE'].to_dict()
names = list(data.keys())
values = list(data.values())
plt.bar(range(len(data)),values,tick_label=names)
plt.show()
</code></pre>
|
python|pandas|dataframe|bar-chart
| 0
|
5,147
| 55,190,988
|
numpy ndarray indexing with __index__ method
|
<p>I don't understand how indexing of a numpy ndarray works, when using a custom class instance as the index.</p>
<p>I have the following code:</p>
<pre><code>import numpy as np
class MyClass:
def __index__(self):
return 1,2
foo = np.array([[1,2,3],[4,5,6]])
bar = MyClass()
print(foo[1,2])
print(foo[bar])
</code></pre>
<p>I expect to get the same result (6) from both print functions. But from the second one, where the class instance is used as the index, I receive an error:</p>
<pre><code>IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>If I call the __index__ method explicitly with</p>
<pre><code>print(foo[bar.__index__()])
</code></pre>
<p>it works. But this defeats the purpose of the magic method.</p>
<p>If I call the array with just one index, everything works fine:</p>
<pre><code>import numpy as np
class MyClass:
def __index__(self):
return 1
foo = np.array([[1,2,3],[4,5,6]])
bar = MyClass()
print(foo[1])
print(foo[bar])
>>> [4 5 6]
>>> [4 5 6]
</code></pre>
<hr>
<p>So what I don't get:</p>
<ul>
<li>The ndarray can use the output of the __index__ method for multiple dimensions. Seen when calling it explicitly.</li>
<li>The ndarray does call the __index__ method. Seen in the second example.</li>
<li>But for some reason, this does not work at the same time. Why?</li>
</ul>
<p>Did I miss something, or does the ndarray not support this kind of indexing?</p>
<hr>
<p>I just want to add that it apparently doesn't matter how the __index__ method returns its result. I tried:</p>
<pre><code>return a, b
return (a, b)
return tuple((a, b))
</code></pre>
<p>None of them worked for me.</p>
|
<p>As mentioned <a href="https://docs.python.org/3/reference/datamodel.html#object.__index__" rel="nofollow noreferrer">here</a>, <code>__index__</code> method <code>Must return an integer.</code></p>
<p>That's why your attempt didn't work, while the "one index" example worked.</p>
|
python|arrays|numpy|indexing|magic-methods
| 2
|
5,148
| 7,635,237
|
numpy: syntax/idiom to cast (n,) array to a (n, 1) array?
|
<p>I'd like to cast a numpy <code>ndarray</code> object of shape (<em>n</em>,) into one having shape (<em>n</em>, 1). The best I've come up with is to roll my own _to_col function:</p>
<pre><code>def _to_col(a):
return a.reshape((a.size, 1))
</code></pre>
<p>But it is hard for me to believe that such a ubiquitous operation is not already built into numpy's syntax. I figure that I just have not been able to hit upon the right Google search to find it.</p>
|
<p>I'd use the following:</p>
<pre><code>a[:,np.newaxis]
</code></pre>
<p>An alternative (but perhaps slightly less clear) way to write the same thing is:</p>
<pre><code>a[:,None]
</code></pre>
<p>All of the above (including your version) are constant-time operations.</p>
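<p>Two more spellings you will see in the wild, also constant-time:</p>
<pre><code>a.reshape(-1, 1)      # -1 lets numpy infer the length
np.expand_dims(a, 1)  # explicit about which axis is added
</code></pre>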
|
python|arrays|vector|numpy|casting
| 10
|
5,149
| 56,681,889
|
How to write the data in excel file using pandas excel writer?
|
<p>I have some code which produces output through several for loops in the form of multiple lists, and I want to write the output to an Excel or CSV file using pandas' Excel writer.</p>
<pre><code>from pulp import *
from openpyxl import load_workbook
import pandas as pd
import numbers
from pulp import solvers
import xlwt
P=[4.645885257, 4.481959238, 4.160581972, 2.893299763, 2.746552049, 2.762327167, 2.785312466, 2.782704044,\
2.761575576, 2.790301008, 2.826271593, 2.98196142, 3.106517237, 3.049694785, 2.841111886, 2.469119048,\
2.424998603, 2.482937879, 2.541880038, 2.544940077, 2.526766508, 2.539441678, 2.60810043, 2.782490319]
X=[-50, -40, -30, -20, -10, 0, 10, 20, 30, 40]
S=[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150]
x=10
s=16
n=24
F=[[0 for j in range(x)] for i in range(s)]
def xyz():
Fbar=list()
Xbar=list()
Mega=[22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1]
for k in Mega:
for f in F:
try:
FFF=max([x for x in f if isinstance(x, numbers.Number)])
XXX=X[f.index(max([x for x in f if isinstance(x, numbers.Number)]))]
Fbar.append(FFF)
Xbar.append(XXX)
except ValueError:
FFF="NA"
Fbar.append(FFF)
Xbar.append(FFF)
for i in range(s):
for j in range(x):
if 150>=(S[i]+X[j])>=S[Xbar.index(max([x for x in Xbar if isinstance(x, numbers.Number)]))]:
FFFFF=(S[i]+X[j])/10
F[i][j]=-X[j]*P[k]+Fbar[int(FFFFF)]
if 150<(S[i]+X[j])<S[Xbar.index(max([x for x in Xbar if isinstance(x, numbers.Number)]))]:
F[i][j]="NA"
Xbar=list()
for f in F:
try:
FFF=max([x for x in f if isinstance(x, numbers.Number)])
XXX=X[f.index(max([x for x in f if isinstance(x, numbers.Number)]))]
Fbar.append(FFF)
Xbar.append(XXX)
except ValueError:
FFF="NA"
Fbar.append(FFF)
Xbar.append(FFF)
print(Xbar)
df= pd.DataFrame(Xbar)
writer= pd.ExcelWriter('C:\Fourth Term @ Dal\Project\Directive studies\output.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1', startcol=0, startrow=1, header=False, index=True)
workbook= writer.book
writer.save()
xyz()
</code></pre>
<p>This is what the print output looks like: </p>
<pre><code>[-50, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[40, 40, 30, 20, 10, -50, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50]
[40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0, -10, -20, -30, -40, -50]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[40, 40, 30, 20, 10, 0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0, -10, -20, -30, -40]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0, -10, -20, -30, -40, -50]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0, -10, -20, -30, -40]
[40, 40, 40, 40, 30, 20, 10, 0, -10, -20, -30, -40, -50, -50, -50, -50]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 30, 20, 10, 0]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
[0, -10, -20, -30, -40, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50, -50]
</code></pre>
<p>And this is what I got in excel (output.xlsx):</p>
<pre><code>0 0
1 -10
2 -20
3 -30
4 -40
5 -50
6 -50
7 -50
8 -50
9 -50
10 -50
11 -50
12 -50
13 -50
14 -50
15 -50
</code></pre>
<p>I only got the last of the several lists from the print output copied into the Excel file, but what I want is the complete output (<code>Xbar</code> in the code above) copied into the Excel file. Thanks in advance. :)</p>
|
<p>I haven't been able to reproduce the same output on my computer, but that isn't the issue here.</p>
<p>The reason why you only have the last list is that you're writing the Excel file inside the <code>for</code> loop, so each iteration overwrites the file written by the previous one.</p>
<p>One solution (I think the simplest here) is to collect your results over the iterations in a results dataframe (here <code>df_output</code>) and then export this dataframe to an Excel file.
In the code below, I save the results of each <code>Mega</code> iteration as a new column of <code>df_output</code>. </p>
<p>Here the code:</p>
<pre><code>from pulp import *
from openpyxl import load_workbook
import pandas as pd
import numbers
from pulp import solvers
import xlwt
P = [4.645885257, 4.481959238, 4.160581972, 2.893299763, 2.746552049, 2.762327167, 2.785312466, 2.782704044,
2.761575576, 2.790301008, 2.826271593, 2.98196142, 3.106517237, 3.049694785, 2.841111886, 2.469119048,
2.424998603, 2.482937879, 2.541880038, 2.544940077, 2.526766508, 2.539441678, 2.60810043, 2.782490319]
X = [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40]
S = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150]
x = 10
s = 16
n = 24
F = [[0 for j in range(x)] for i in range(s)]
def xyz():
Fbar = list()
Xbar = list()
Mega = [22, 21, 20, 19, 18, 17, 16, 15, 14,
13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
df_output = pd.DataFrame()
for count, k in enumerate(Mega):
for f in F:
try:
FFF = max([x for x in f if isinstance(x, numbers.Number)])
XXX = X[f.index(
max([x for x in f if isinstance(x, numbers.Number)]))]
Fbar.append(FFF)
Xbar.append(XXX)
except ValueError:
FFF = "NA"
Fbar.append(FFF)
Xbar.append(FFF)
for i in range(s):
for j in range(x):
if 150 >= (S[i]+X[j]) >= S[Xbar.index(max([x for x in Xbar if isinstance(x, numbers.Number)]))]:
FFFFF = (S[i]+X[j])/10
F[i][j] = -X[j]*P[k]+Fbar[int(FFFFF)]
if 150 < (S[i]+X[j]) < S[Xbar.index(max([x for x in Xbar if isinstance(x, numbers.Number)]))]:
F[i][j] = "NA"
Xbar = list()
for f in F:
try:
FFF = max([x for x in f if isinstance(x, numbers.Number)])
XXX = X[f.index(
max([x for x in f if isinstance(x, numbers.Number)]))]
Fbar.append(FFF)
Xbar.append(XXX)
except ValueError:
FFF = "NA"
Fbar.append(FFF)
Xbar.append(FFF)
print(Xbar)
df_output["Mega_{0}".format(k)] = Xbar
print(df_output)
writer = pd.ExcelWriter('output.xlsx', engine='xlsxwriter')
df_output.to_excel(writer, sheet_name='Sheet1', startcol=0, header=True, index=True)
writer.save()
xyz()
</code></pre>
<p>Output printed:</p>
<pre><code># Mega_22 Mega_21 Mega_20 Mega_19 Mega_18 Mega_17 Mega_16 ... Mega_7 Mega_6 Mega_5 Mega_4 Mega_3 Mega_2 Mega_1
# 0 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 1 -10 -10 -10 -10 -10 -10 -10 ... -10 -10 -10 -10 -10 -10 -10
# 2 -20 -20 -20 -20 -20 -20 -20 ... -20 -20 -20 -20 -20 -20 -20
# 3 -30 -30 -30 -30 -30 -30 -30 ... -30 -30 -30 -30 -30 -20 -20
# 4 -40 -40 -40 -40 -40 -40 -40 ... -40 -40 -40 -40 -40 -30 -30
# 5 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -40 -40
# 6 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 7 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 8 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 9 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 10 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 11 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 12 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 13 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 14 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# 15 -50 -50 -50 -50 -50 -50 -50 ... -50 -50 -50 -50 -50 -50 -50
# [16 rows x 22 columns]
</code></pre>
<p>The Excel file:
<a href="https://i.stack.imgur.com/2INkC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2INkC.png" alt="enter image description here"></a></p>
|
excel|python-3.x|pandas|writer
| 1
|
5,150
| 56,441,025
|
Using a function to replace cell values in a column
|
<p>I have a fairly large DataFrame (22000x29). I want to clean up one particular column for data aggregation. A number of cells can be replaced by one column value. I would like to write a function to accomplish this using the replace function. How do I pass the column name to the function?</p>
<p>I tried passing the column name as a variable to the function.
Of course, I could do this variable by variable, but that would be tedious.</p>
<pre><code>#replace in df from list
def replaceCell(mylist,myval,mycol,mydf):
for i in range(len(mylist)):
mydf.mycol.replace(to_replace=mylist[i],value=myval,inplace=True)
return mydf
replaceCell((c1,c2,c3,c4,c5,c6,c7),c0,'SCity',cimsBid)
</code></pre>
<p>cimsBid is the DataFrame; SCity is the column in which I want values to be changed.</p>
<p>Error message:</p>
<blockquote>
<p>AttributeError: 'DataFrame' object has no attribute 'mycol'</p>
</blockquote>
|
<p>Try accessing your column as:</p>
<pre><code>mydf[mycol]
</code></pre>
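<p>Applied to the function from the question, a sketch (note that <code>replace</code> also accepts a list of values, so the loop can go):</p>
<pre><code>def replaceCell(mylist, myval, mycol, mydf):
    # bracket access works with a column name held in a variable;
    # attribute access (mydf.mycol) would look for a column literally named "mycol"
    mydf[mycol] = mydf[mycol].replace(to_replace=list(mylist), value=myval)
    return mydf

replaceCell((c1, c2, c3, c4, c5, c6, c7), c0, 'SCity', cimsBid)
</code></pre>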
|
python-3.x|pandas
| 1
|
5,151
| 56,858,698
|
How to convert a multi dimensional list to numpy array and then write to a csv file
|
<p>I am running a query using psycopg2 in Python. The results of the query are saved to a list. I am trying to convert this list into a numpy array and then write it to a csv file. Here is how I did that: </p>
<pre><code>rows = rcursor.fetchall()
df = pd.DataFrame(np.array(rows), columns="db1 db2 db3 db4 db5".split())
df.to_csv('alldata.csv', sep=',')
</code></pre>
<p>But when I do that i get the error :</p>
<pre><code>ValueError: Must pass 2-d input
</code></pre>
<p>I guess I have to apply .reshape but the number of rows is huge (like 200000). The data fetched from query to list looks like this.</p>
<pre><code>RealDictRow([('db1', '0001'), ('db2', 002), ('db3', 003), ('db4', '004'), ('db5', 'Hello I worked on this so far but not happening. Call my number 245-456-7892)
</code></pre>
<p>How can I write this to csv properly without getting <code>ValueError: Must pass 2-d input</code>? Thanks in advance!</p>
|
<p>You can use pandas.read_sql_query directly to convert a sql query + connection into a dataframe. See <a href="https://gist.github.com/jakebrinkmann/de7fd185efe9a1f459946cf72def057e" rel="nofollow noreferrer">here</a> for example.</p>
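<p>A minimal sketch, assuming <code>conn</code> is the open psycopg2 connection behind <code>rcursor</code> (the table name is a placeholder):</p>
<pre><code>import pandas as pd

df = pd.read_sql_query("SELECT db1, db2, db3, db4, db5 FROM mytable", conn)
df.to_csv('alldata.csv', sep=',', index=False)
</code></pre>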
|
python-3.x|pandas|dataframe|psycopg2|numpy-ndarray
| 0
|
5,152
| 26,368,194
|
Specifying a numpy.datype to read GPX trackpoints
|
<p>I want to represent a GPS track extracted from a GPX file as a Numpy array. For that, each element will be of type "trackpoint", containing one datetime and three floats.</p>
<p>I am trying to do this (actually, after parsing the GPX file with some XML library):</p>
<pre><code>import numpy
trkptType = numpy.dtype([('time', 'datetime64'),
('lat', 'float32'),
('lon', 'float32'),
('elev', 'float32')])
a = numpy.array([('2014-08-08T03:03Z', '30', '51', '40'),
('2014-08-08T03:03Z', '30', '51', '40')], dtype=trkptType)
</code></pre>
<p>But I get this error:</p>
<pre><code>ValueError: Cannot create a NumPy datetime other than NaT with generic units
</code></pre>
<p>What am I doing wrong, and how should I create a much larger array from some list in an efficient manner?</p>
|
<p><a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow">As explained here</a>, you have to use at least <code>datetime64[m]</code> (minutes), instead of <code>datetime64</code>, then your code will work. You could also use a datetime that goes down to seconds or miliseconds, such as <code>datetime64[s]</code> or <code>datetime64[ms]</code>.</p>
<pre><code>trkptType = np.dtype([('time', 'datetime64[m]'),
('lat', 'float32'),
('lon', 'float32'),
('elev', 'float32')])
</code></pre>
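<p>With a unit in place, the array from the question constructs cleanly. A sketch (the trailing 'Z' is dropped because newer numpy versions deprecate timezone-aware datetime strings, and the floats are given as numbers):</p>
<pre><code>import numpy as np

a = np.array([('2014-08-08T03:03', 30.0, 51.0, 40.0),
              ('2014-08-08T03:03', 30.0, 51.0, 40.0)], dtype=trkptType)
</code></pre>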
|
python|arrays|numpy|user-defined-types|recarray
| 1
|
5,153
| 67,147,195
|
Unable to uninstall cuda even after purging it and removing the files
|
<p>I'm working on a computer on which Nvidia drivers and Cuda were installed by someone else so I don't know the method they used to install them.
In the <code>/usr/local/</code> there were two directories <code>cuda</code> and <code>cuda.10.0</code>. Running <code>nvidia-smi</code> would output:</p>
<blockquote>
<p>CUDA Version: 11.0</p>
</blockquote>
<p>which made me believe two cuda versions were installed on the system which were causing some errors.</p>
<p>Following <a href="https://stackoverflow.com/questions/56431461/how-to-remove-cuda-completely-from-ubuntu">this question</a>, I removed CUDA by first doing:</p>
<pre><code>sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*"
</code></pre>
<p>and then doing</p>
<pre><code>sudo rm -rf /usr/local/cuda*
</code></pre>
<p>(I did not uninstsall nvidia-drivers and <code>Driver Version: 450.80.02</code> is installed).
Running <code>nvidia-smi</code> still outputs:</p>
<blockquote>
<p>CUDA Version: 11.0</p>
</blockquote>
<p>How do I uninstall cuda 11? I prefer to have cuda 10 and I can't find where cuda 11 is installed.</p>
<p>Do I need to uninstall nvidia-drivers as well?</p>
|
<p>The <code>nvidia-smi</code> command does not show which version of CUDA is installed; it shows the highest CUDA version the installed NVIDIA driver supports. So there is no problem here, just an incorrect interpretation of the output of this command.</p>
<p>Even if you remove all CUDA installations, <code>nvidia-smi</code> would still show the maximum CUDA version that you can use with this driver.</p>
|
tensorflow|ubuntu|cuda|nvidia
| 3
|
5,154
| 66,793,010
|
Warning when reassigning to series in pandas
|
<p>Please suggest the right way of doing the following.</p>
<pre><code>data_tr['loss'] = data_tr['loss'].apply(lambda x:x**0.25)
</code></pre>
<pre><code><ipython-input-368-59c3c700212e>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>See the caveats in the documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a></p>
<pre><code>data_tr['loss'] = data_tr['loss'].apply(lambda x:x**0.25)
</code></pre>
|
<p>Have you tried what's suggested?</p>
<pre><code>data_tr.loc[:,'loss'] = data_tr.loc[:,'loss']**0.25
</code></pre>
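<p>If the warning persists, <code>data_tr</code> is most likely itself a slice of another DataFrame; a sketch of the usual fix, making it an explicit copy at the point where it is created:</p>
<pre><code>data_tr = data_tr.copy()  # break the link to the parent DataFrame
data_tr['loss'] = data_tr['loss'] ** 0.25
</code></pre>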
|
pandas|series
| 0
|
5,155
| 67,014,120
|
Exploding pandas dataframe by list to multiple rows with a new column for multiindex
|
<p>I have a Pandas dataframe where the columns are 'month' and 'year', and a 'value_list' which is a list of values - one for each day of the month. Something like this -</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">year</th>
<th style="text-align: left;">month</th>
<th style="text-align: right;">value_list</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">2</td>
<td style="text-align: right;">[10, 20, 30, 40, ...... 290, 300]</td>
</tr>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">3</td>
<td style="text-align: right;">[110, 120, 130, 140, ...... 390, 400]</td>
</tr>
</tbody>
</table>
</div>
<p>I want to unpack these into multiple rows - one for each day (new column), so that I get something like the following -</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">year</th>
<th style="text-align: left;">month</th>
<th style="text-align: left;">day</th>
<th style="text-align: right;">value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">1</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">2</td>
<td style="text-align: right;">20</td>
</tr>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">3</td>
<td style="text-align: right;">30</td>
</tr>
<tr>
<td style="text-align: left;">1990</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">4</td>
<td style="text-align: right;">40</td>
</tr>
</tbody>
</table>
</div>
<p>and so on.</p>
<p>I tried using <code>df.explode()</code> but to no avail since the index gets reset or is set to a new single index. How can I automatically get the dates (essentially creating a year, month, date multiindex) while unpacking the lists?</p>
|
<p>After you <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><strong><code>explode</code></strong></a>, you can create the <code>day</code> sequences using <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><strong><code>groupby-transform</code></strong></a> with <code>cumcount</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = df.explode('value_list').rename(columns={'value_list': 'value'})
df['day'] = df.groupby(['year', 'month']).transform('cumcount').add(1)
# year month value day
# 0 1990 2 10 1
# 0 1990 2 20 2
# 0 1990 2 30 3
# 0 1990 2 40 4
# 0 1990 2 290 5
# 0 1990 2 300 6
# 1 1990 3 110 1
# 1 1990 3 120 2
# 1 1990 3 130 3
# 1 1990 3 140 4
# 1 1990 3 390 5
# 1 1990 3 400 6
</code></pre>
<hr />
<p>Also as @wwnde commented, if <code>value_list</code> doesn't contain true lists but just strings that look like lists, convert them to lists before exploding:</p>
<pre class="lang-py prettyprint-override"><code>df.value_list = df.value_list.str.strip('[]').str.split(r'\s*,\s*')
</code></pre>
|
python|pandas|dataframe|multi-index
| 3
|
5,156
| 66,973,617
|
get value from dataframe based on row values without using column names
|
<p>I am trying to get a value in the third column of a pandas dataframe, knowing only the values of interest in the first two columns, which point me to the right value to fish out. I do not know the row index, just the values I need to look for in the first two columns. The combination of values in the first two columns is unique, so I expect to get only a single row, not a subset of the dataframe. I do not have column names and I would like to avoid using them.</p>
<p>Consider the dataframe <code>df</code>:</p>
<pre><code>a 1 bla
b 2 tra
b 3 foo
b 1 bar
c 3 cra
</code></pre>
<p>I would like to get <code>tra</code> from the second row, based on the <code>b</code> and <code>2</code> combination that I know beforehand. I've tried subsetting with</p>
<pre><code>df = df.loc['b', :]
</code></pre>
<p>which returns all the rows with <code>b</code> on the same column (provided I've read the data with <code>index_col = 0</code>) but I am not able to pass multiple conditions on it without crashing or knowing the index of the row of interest. I tried both <code>df.loc</code> and <code>df.iloc</code>.</p>
<p>In other words, ideally I would like to get <code>tra</code> without even using row indexes, by doing something like:</p>
<pre><code>df[(df[,0] == 'b' & df[,1] == `2`)][2]
</code></pre>
<p>Any suggestions? Probably it is something simple enough, but I have the tendency to use the same syntax as in R, which apparently is not compatible.</p>
<p>Thank you in advance</p>
|
<p>As @anky has suggested, a way to do this without knowing the column names nor the row index where your value of interest is, would be to read the file in a pandas dataframe using multiple column indexing.</p>
<p>For the provided example (which has no header row), knowing the column indexes at least, that would be:</p>
<pre><code>df = pd.read_csv(path, sep='\t', header=None, index_col=[0, 1])
</code></pre>
<p>then, you can use:</p>
<pre><code>row = df.iloc[df.index.get_loc(("b", 2))]
row.iloc[0]
</code></pre>
<p>to get the value of interest.</p>
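<p>For reference, a more direct lookup with the same MultiIndex (a sketch, relying on the <code>(b, 2)</code> combination being unique) is:</p>
<pre><code>df.loc[('b', 2)]          # the matching row
df.loc[('b', 2)].iloc[0]  # just 'tra'
</code></pre>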
<p>Thanks again @anky for your help. If you found this question useful, please upvote @anky 's comment in the posted question.</p>
|
python|pandas|dataframe
| 1
|
5,157
| 47,446,284
|
Python: How do you make a variable which can be used as an index call to slice another variable?
|
<p>I am coming from a MATLAB background and moving over to Python. I am trying to figure out a way to set up a variable which is some vector which contains a range of indices which can then be used to slice some other array.</p>
<p>In MATLAB I would do this:</p>
<pre><code>A = [2,3,4,5,6; 9,4,3,2,1; 5,4,3,2,5]; %some arbitrary matrix
begin = 2; %the first index I want to pull
end = 4; %the last index I want to pull
idx = 2:4; %the vector of indices I want
A(:,idx) %results in me pulling out the 2nd, 3rd and 4th column of A
</code></pre>
<p>Now in Python, what is the equivalent?</p>
<pre><code>import numpy as np
A = np.array([[2,3,4,5,6],[9,4,3,2,1],[5,4,3,2,5]]) #some arbitrary matrix
begin = 1 #first index
end = 3 #last index
idx = ??? #This is the part I don't know! <<<-------------------
A[:,idx] #I want the same result as the Matlab example above
</code></pre>
<p>Obviously for this trivial example I could just have <code>idx = [1,2,3]</code>, but I have a much more complicated scenario in real life where I cannot write out the indices manually.</p>
<p>I have tried using the <code>range</code> and <code>np.arange</code> functions but they give the error that the object is not callable.</p>
<p>When I look at some MATLAB-to-Numpy conversions such as <a href="http://mathesaurus.sourceforge.net/matlab-numpy.html" rel="nofollow noreferrer">here</a>, it suggests that the <code>idx = 2:4</code> command in MATLAB command is equivalent to <code>idx = range(1,3)</code> in Python, but this is apparently not quite true?</p>
<p>Any help is appreciated.</p>
|
<p>You need <code>slice</code>:</p>
<pre><code>>>> import numpy as np
>>> A = np.array([[2,3,4,5,6],[9,4,3,2,1],[5,4,3,2,5]])
>>> begin = 1
>>> end = 3
>>> s = slice(begin, end)
>>> A[:,s]
array([[3, 4],
[4, 3],
[4, 3]])
</code></pre>
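<p>For what it's worth, <code>np.arange(begin, end)</code> builds an index array that works the same way here; the "object is not callable" error usually means the name <code>range</code> or <code>np.arange</code> was shadowed by a variable earlier in the session:</p>
<pre><code>>>> idx = np.arange(begin, end)
>>> A[:,idx]
array([[3, 4],
       [4, 3],
       [4, 3]])
</code></pre>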
|
python|arrays|matlab|numpy|indexing
| 1
|
5,158
| 68,256,123
|
Is there a better way of making sure that a coordinate is not out of range in an np.array?
|
<p>I want to make sure that when I iterate through coordinates (i, j) of a 2D np.array, and access "neighbors", that I am not accessing values that are out of the array's range. Is there a better way of doing it than what I do here?</p>
<p>I consider the following coordinates as neighbors:</p>
<ul>
<li>(i - 1, j),</li>
<li>(i, j - 1),</li>
<li>(i - 1, j - 1),</li>
<li>(i - 1, j + 1)</li>
</ul>
<pre><code>import numpy as np
for (i, j), value in np.ndenumerate(arr):
if i and j and i < arr.shape[0]:
neighbors = [
(i - 1, j),
(i, j - 1),
(i - 1, j - 1),
(i - 1, j + 1),
]
print(neighbors)
elif i and j:
neighbors = [
(i - 1, j),
(i, j - 1),
(i - 1, j - 1),
]
elif i:
neighbors = [(i - 1, j)]
print(neighbors)
elif j and i < arr.shape[0]:
neighbors = [
(i - 1, j),
(i, j - 1),
(i - 1, j + 1),
]
print(neighbors)
elif j:
neighbors = [
(i - 1, j),
(i, j - 1),
]
print(neighbors)
else:
print("There are no neighbors")
</code></pre>
|
<p>I implemented something like this in the end:</p>
<pre><code>import numpy as np
for (i, j), value in np.ndenumerate(arr):
    neighbors = [
        (r, c)
        for (r, c) in [(i - 1, j), (i, j - 1), (i - 1, j - 1), (i - 1, j + 1)]
        if 0 <= r < arr.shape[0] and 0 <= c < arr.shape[1]  # keep only in-range coordinates
    ]
</code></pre>
<p>Inspired by <a href="https://stackoverflow.com/questions/68256123/is-there-a-better-way-of-making-shure-that-a-coordinate-is-not-out-of-range-in-a#comment120633223_68256123">Bas van der Linden's comments</a>.</p>
|
python|arrays|numpy
| 0
|
5,159
| 68,339,938
|
Create a Dataframe from a list and keep duplicate items
|
<p>I have a list of dataframes. Each dataframe within the list is unique - meaning that there are some shared, but different columns. I would like to create a single dataframe that contains all of the columns from the list of dataframes and will fill NaN if an element is not present. I have tried the following</p>
<pre><code>import pandas as pd
df_new = pd.concat(list_of_dfs)
#I get the following: InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>The issue seems to be due to the dataframes in the list. Each dataframe only has one row, so its index is zero and thus reindexing will not do the trick. I have tried this:</p>
<pre><code> list_of_dfs.append(pd.DataFrame([rows], columns = tags).set_index(np.array(random.randint(0,5000))))
</code></pre>
<p>Pretty much generating a random number as the index. However, I get this error:</p>
<pre><code>ValueError: The parameter "keys" may be a column key, one-dimensional array, or a list containing only valid column keys and one-dimensional arrays.
</code></pre>
|
<p>You need to use some params in pd.concat:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'a':[1,2,3],'x':[4,5,6],'y':[7,8,9]})
df2 = pd.DataFrame({'b':[10,11,12],'x':[13,14,15],'y':[16,17,18]})
print(pd.concat([df1,df2], axis=0, ignore_index=True))
</code></pre>
<p>Result:</p>
<pre><code> a x y b
0 1.0 4 7 NaN
1 2.0 5 8 NaN
2 3.0 6 9 NaN
3 NaN 13 16 10.0
4 NaN 14 17 11.0
5 NaN 15 18 12.0
</code></pre>
<p>So, use concat like that:</p>
<pre><code>pd.concat(list_of_dfs, axis=0, ignore_index=True)
</code></pre>
|
python|pandas
| 1
|
5,160
| 68,437,240
|
Obtaining small arrays by cutting a bigger one in Python
|
<p>I'm working with arrays today. I'm trying to slice a big array which looks like:</p>
<pre><code>A = [[a(0;0), a(0;1), ..., a(0;n)],
[a(1;0), a(1;1), ..., a(1;n)],
[....., ....., ....., .....],
[a(m;0), ...., ....., a(m;n)],
[b(m+1;0), b(m+1,1), ..., b(m+1,n)],
[b(m+2;0), b(m+2,1), ..., b(m+2,n)],
[....., ....., ....., .....],
[b(m+y;0), b(m+y,1), ..., b(m+y,n)],
[c(m+y+1;0), c(m+y+1,1), ..., c(m+y+1,n)],
[c(m+y+2;0), c(m+y+2,1), ..., c(m+y+2,n)],
[....., ....., ....., .....],
[c(m+2y;0), c(m+2y,1), ..., c(m+2y,n)]]
</code></pre>
<p>As you can see, there are 3 arrays in my A-array. I'm trying to get them as separate arrays, to obtain at the end:</p>
<pre><code>A_0 =[[a(0;0), a(0;1), ..., a(0;n)],
[a(1;0), a(1;1), ..., a(1;n)],
[....., ....., ....., .....],
[a(m;0), ...., ....., a(m;n)]]
B = [[b(m+1;0), b(m+1,1), ..., b(m+1,n)],
[b(m+2;0), b(m+2,1), ..., b(m+2,n)],
[....., ....., ....., .....],
[b(m+y;0), b(m+y,1), ..., b(m+y,n)]]
C = [[c(m+y+1;0), c(m+y+1,1), ..., c(m+y+1,n)],
[c(m+y+2;0), c(m+y+2,1), ..., c(m+y+2,n)],
[....., ....., ....., .....],
[c(m+2y;0), c(m+2y,1), ..., c(m+2y,n)]]
</code></pre>
<p>The point is I'm looking for a method which can be used to isolate x smaller arrays within the bigger one.
I hope my question is crystal clear, and I'm looking forward to some help.
Thanks to everybody!</p>
|
<p>If your data is evenly sized, then simply chunk it with this:</p>
<pre class="lang-py prettyprint-override"><code>data = A
m = <your offset>
[data[idx: idx + m] for idx in range(0, len(data), m)]
# example:
data = [["a1"],["a2"],["b1"],["b2"],["c1"],["c2"]]
m = 2
print([data[idx: idx + m] for idx in range(0, len(data), m)])
</code></pre>
<pre class="lang-py prettyprint-override"><code>[
[['a1'], ['a2']],
[['b1'], ['b2']],
[['c1'], ['c2']]
]
</code></pre>
<p>And if you want then split it into variables (or use a dictionary):</p>
<pre class="lang-py prettyprint-override"><code>A_0, B, C = [data[idx: idx + m] for idx in range(0, len(data), m)]
</code></pre>
<p>If the data however isn't even, you'll need to find the borders/edges between the data and do something like:</p>
<pre class="lang-py prettyprint-override"><code>out = []
tmp = []
for idx, item in enumerate(big_array):
    tmp.append(item)
    # flush at the end of the data, on the last row of the current chunk,
    # or when the first row of the next one signals a boundary
    is_last = idx == len(big_array) - 1
    if is_last or item[-1] == something or big_array[idx + 1][0] == something:
        out.append(tmp)
        tmp = []  # start collecting the next chunk
</code></pre>
|
python|arrays|numpy|slice
| 0
|
5,161
| 68,338,620
|
python numpy adding rows to 2d empty array
|
<p>I want to add rows to an empty 2d numpy array in a loop:</p>
<pre><code>yi_list_for_M =np.array([])
M =[]
for x in range(6) :
#some code
yi_m = np.array([y1_m,y2_m])
yi_list_for_M = np.append(yi_list_for_M,yi_m)
</code></pre>
<p>the result is :</p>
<pre><code>[0. 0. 2.7015625 2.5328125 4.63125 4.29375
5.7890625 5.2828125 6.05452935 5.47381073 6.175 5.5 ]
</code></pre>
<p>but I want it to be:</p>
<pre><code>[ [0. 0.] , [2.7015625 2.5328125,] [ 4.63125 4.29375],
[5.7890625 5.2828125 ], [6.05452935 5.47381073],[ 6.175 5.5 ] ]
</code></pre>
<p>and I don't want to use lists; I want 1d numpy arrays inside a 2d numpy array.</p>
|
<pre class="lang-py prettyprint-override"><code>yi_list_for_M = np.empty((0,2), int)
for x in range(6):
y1_m = x**2
y2_m = x**3
yi_list_for_M = np.append(yi_list_for_M, np.array([[y1_m, y2_m]]), axis=0)
</code></pre>
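<p>Note that <code>np.append</code> copies the whole array on every iteration, so for long loops it is usually faster to collect the rows in a plain list and convert once at the end (same result):</p>
<pre class="lang-py prettyprint-override"><code>rows = []
for x in range(6):
    y1_m = x**2
    y2_m = x**3
    rows.append([y1_m, y2_m])
yi_list_for_M = np.array(rows)  # shape (6, 2)
</code></pre>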
|
python|arrays|numpy
| 1
|
5,162
| 59,470,464
|
Importing TensorFlow onto Xcode: The active toolchain is not compatible with playgrounds
|
<p>I have macOS Catalina on MacBook Air 2014. I'm experiencing problems in importing TensorFlow on Xcode 11.3. </p>
<p>I downloaded the Swift for TensorFlow 0.6 release and a Swift for TensorFlow development snapshot.
I opened it in a blank macOS playground but it doesn't work. </p>
<p>The error:</p>
<blockquote>
<p>The active toolchain is not compatible with playgrounds. PlaygroundLogger.framework could not be loaded.</p>
</blockquote>
|
<p>Maybe you can try the following:</p>
<ul>
<li><p>First of all, make sure that you have the December 23 (or later) development snapshot of the toolchain. It enables S4TF to work with Xcode's new build system.</p>
<p>(You can get it <a href="https://github.com/tensorflow/swift/blob/master/Installation.md#development-snapshots" rel="nofollow noreferrer">here</a>.)</p>
<p>Stable version 0.6 is also fine.</p></li>
<li><p>Then, instead of trying to <code>import TensorFlow</code> in Xcode <em>Playgounds</em>, create a new project as a <em>macOS Command Line Tool</em>.</p></li>
<li><p>Check if you have the S4TF toolchain selected in: Xcode > Preferences > Components.</p></li>
<li><p>Write your code in the <code>main.swift</code> file. If you don't see it in the editor window, you can find it in the Xcode's Navigator pane.</p></li>
<li><p>If you want to run the code, click on the "Build" button in the top left corner to run the S4TF code. Or go to: Xcode > Product > Build.</p></li>
<li><p>You should see the output in the Debug area at the bottom of Xcode.</p></li>
</ul>
<p>Let us know if it helps!</p>
<p>P.S. Check the "Disable Library Validation" under "Signing & Capabilities" solution in this related S4TF discussion in Google Groups <a href="https://groups.google.com/a/tensorflow.org/forum/#!topic/swift/dveQVX_ysfw" rel="nofollow noreferrer">here</a>.</p>
|
swift|xcode|macos|tensorflow
| 0
|
5,163
| 59,402,061
|
Why is my .drop removing all values in my dataframe?
|
<p>I keep trying to use this line of code to remove the rows of data with NaN in a certain column but it keeps removing all rows:</p>
<pre><code>df = df.drop(df[(df.test_variable == 'NaN')].index)
</code></pre>
|
<p>To answer the question you're <em>really</em> trying to ask (how to drop rows containing NaNs), use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>DataFrame.dropna()</code></a> function:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(12).reshape(3,4),
columns = ['A','B','C','D'])
df.loc[0, 'A'] = None
df.loc[1, 'B'] = None
df.loc[1, 'C'] = None
print(df)
print('------')
print(df.dropna())
</code></pre>
<p>prints:</p>
<pre><code>$ py na.py
A B C D
0 NaN 1.0 2.0 3
1 4.0 NaN NaN 7
2 8.0 9.0 10.0 11
------
A B C D
2 8.0 9.0 10.0 11
</code></pre>
<p>To answer the question you asked, though, this boolean check is problematic: </p>
<pre><code>df.test_variable == 'NaN'
</code></pre>
<p>This will literally use the string <code>"NaN"</code> when checking for rows that match, instead of checking for actual NaNs. In fact, NaN is an equivalent way of saying None in Pandas, so you can use <code>None</code> instead of <code>"NaN"</code>. However, simply replacing your boolean check with <code>df.test_variable is None</code> will still not work, because that boolean check is literally asking if <code>df.test_variable</code> (which will return a Pandas Series object) is <code>None</code> (which it is not), instead of asking which of the elements of <code>df.test_variable</code> is equal to <code>None</code>. To do the element-wise check, use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>isna()</code> function</a> of the Pandas Series object:</p>
<p>Start by getting the indices of the rows containing NaN values for a particular column (in this case, A):</p>
<pre><code>df[df.A.isna()].index
</code></pre>
<p>Then pass the result to the <code>drop()</code> function:</p>
<pre><code>df.drop(df[df.A.isna()].index)
</code></pre>
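<p>Equivalently, you can keep the rows you want with a boolean mask instead of dropping by index:</p>
<pre><code>df[df.A.notna()]
</code></pre>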
|
python|pandas
| 0
|
5,164
| 59,052,814
|
Comparing Overlap in Pandas Columns
|
<p>So I have four columns in a pandas dataframe, column A, B, C and D. Column A contains 30 words, 18 of which are in column B. Column C contains either a 1 or 2 (keyboard response to column B words) and column D contains 1 or 2 also (the correct response). </p>
<p>What I need to do is see the total correct for only the words where column A and B overlap. I understand how to compare the C and D columns to get the total correct once I have the correct dataframe, but I am having a hard time wrapping my head around comparing the overlap in A and B. </p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin()</code></a>:</p>
<pre><code>df.B.isin(df.A)
</code></pre>
<p>That will give you a boolean Series the same length as <code>df.B</code> indicating for each value in <code>df.B</code> whether it is also present in <code>df.A</code>.</p>
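<p>Assuming the columns are literally named A, B, C and D, a minimal sketch of the full computation would be:</p>
<pre><code>mask = df.B.isin(df.A)  # words present in both columns
total_correct = (df.loc[mask, 'C'] == df.loc[mask, 'D']).sum()
</code></pre>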
|
python|pandas
| 0
|
5,165
| 59,456,901
|
how to fill values in empty rows with condition in python
|
<p>I want to fill empty/NaN values conditionally, based on the existing table.
Please find the attached data below.</p>
<p><strong>Existing Data</strong></p>
<pre><code>import pandas as pd
col_names = ['Date', 'ID', 'Individual','Category','Age','DW','Gender']
my_df = pd.DataFrame(columns = col_names)
my_df['Date']=[2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,3112019,3112019,3112019,3112019,
3112019,3112019,3112019,3112019,3112019,3112019,'...',8112019,8112019,8112019,8112019,8112019,8112019,8112019,
8112019,8112019,8112019]
my_df['ID']=[1,1,1,2,2,2,2,3,3,3,1,1,1,2,2,2,2,3,3,3,'...',1,1,1,2,2,2,2,3,3,3]
my_df['Individual']=[1,2,3,1,2,3,4,1,2,3,1,2,3,1,2,3,4,1,2,3,'...',1,2,3,1,2,3,4,1,2,3]
my_df['Category']=['DE','DE','DE','C','C','C','C','A','A','A','DE','DE','DE','C','C','C','C','A','A','A','...','DE',
'DE','DE','C','C','C','C','A','A','A']
my_df['Age']=['51-60','02-14','31-40','02-14','31-40','15-21','22-30','60+','22-30','02-14','51-60','02-14','31-40',
'02-14','31-40','15-21','22-30','60+','22-30','02-14','...','51-60','02-14','31-40','02-14','31-40',
'15-21','22-30','60+','22-30','02-14']
my_df['DW']=[6554,7875,10063,5661,7851,10063,6552,2365,8569,7875,6554,7875,10063,5661,7875,'...',
6554,7875,10063,5661,7851,10063,6552,2365,8569,7875,6554,7875,10063,5661,7875]
my_df['Gender']=['M','F','F','M','M','F','M','F','F','M','M','F','F','M','M','F','M','F','F','M',
'...','M','F','F','M','M','F','M','F','F','M']
</code></pre>
<p><strong>O/p</strong></p>
<pre><code> Date ID Individual Category Age DW Gender
0 2112019 1 1 DE 51-60 6554 M
1 2112019 1 2 DE 02-14 7875 F
2 2112019 1 3 DE 31-40 10063 F
3 2112019 2 1 C 02-14 5661 M
4 2112019 2 2 C 31-40 7851 M
5 2112019 2 3 C 15-21 10063 F
6 2112019 2 4 C 22-30 6552 M
7 2112019 3 1 A 60+ 2365 F
8 2112019 3 2 A 22-30 8569 F
9 2112019 3 3 A 02-14 7875 M
10 3112019 1 1 DE 51-60 6554 M
11 3112019 1 2 DE 02-14 7875 F
12 3112019 1 3 DE 31-40 10063 F
13 3112019 2 1 C 02-14 5661 M
14 3112019 2 2 C 31-40 7875 M
15 3112019 2 3 C 15-21 10063 F
16 3112019 2 4 C 22-30 5661 M
17 3112019 3 1 A 60+ 2365 F
18 3112019 3 2 A 22-30 8569 F
19 3112019 3 3 A 02-14 7875 M
20 ... ... ... ... ... ... ...
21 8112019 1 1 DE 51-60 6554 M
22 8112019 1 2 DE 02-14 7875 F
23 8112019 1 3 DE 31-40 10063 F
24 8112019 2 1 C 02-14 5661 M
25 8112019 2 2 C 31-40 7851 M
26 8112019 2 3 C 15-21 10063 F
27 8112019 2 4 C 22-30 6552 M
28 8112019 3 1 A 60+ 2365 F
29 8112019 3 2 A 22-30 8569 F
30 8112019 3 3 A 02-14 7875 M
</code></pre>
<p><strong>I want to generate the below table using conditions on different combinations from the above table:</strong></p>
<pre><code>col = ['Target', 'Day1', 'Day2','Day3','Day4','Day5','Day6','Day7']
new_df = pd.DataFrame(columns = col)
new_df['Target']=['A-Category & Age 22+','F-Female & ABC-Category & Age <21','M & Age 22-30','...']
new_df
Target Day1 Day2 Day3 Day4 Day5 Day6 Day7
0 A-Category & Age 22+ NaN NaN NaN NaN NaN NaN NaN
1 F-Female & ABC-Category & Age <21 NaN NaN NaN NaN NaN NaN NaN
2 M & Age 22-30 NaN NaN NaN NaN NaN NaN NaN
3 ... NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>I want to put the aggregate sum of WT on each day, based on date and the different conditions on the Target variable, e.g. as in the column table above.</p>
|
<p>You don't have a WT column, so we don't know what that is. However, for this example I'll use the DW column as the aggregation column. You can change it to suit your needs.</p>
<pre><code>import pandas as pd
col_names = ['Date', 'ID', 'Individual','Category','Age','DW','Gender']
my_df = pd.DataFrame(columns = col_names)
my_df['Date']=[2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,2112019,3112019,3112019,3112019,3112019,
3112019,3112019,3112019,3112019,3112019,3112019,8112019,8112019,8112019,8112019,8112019,8112019,8112019,
8112019,8112019,8112019]
my_df['ID']=[1,1,1,2,2,2,2,3,3,3,1,1,1,2,2,2,2,3,3,3,1,1,1,2,2,2,2,3,3,3]
my_df['Individual']=[1,2,3,1,2,3,4,1,2,3,1,2,3,1,2,3,4,1,2,3,1,2,3,1,2,3,4,1,2,3]
my_df['Category']=['DE','DE','DE','C','C','C','C','A','A','A','DE','DE','DE','C','C','C','C','A','A','A','DE',
'DE','DE','C','C','C','C','A','A','A']
my_df['Age']=['51-60','02-14','31-40','02-14','31-40','15-21','22-30','60+','22-30','02-14','51-60','02-14','31-40',
'02-14','31-40','15-21','22-30','60+','22-30','02-14','51-60','02-14','31-40','02-14','31-40',
'15-21','22-30','60+','22-30','02-14']
my_df['DW']=[6554,7875,10063,5661,7851,10063,6552,2365,8569,7875,6554,7875,10063,5661,7875,
             10063,5661,2365,8569,7875,6554,7875,10063,5661,7851,10063,6552,2365,8569,7875]
my_df['Gender']=['M','F','F','M','M','F','M','F','F','M','M','F','F','M','M','F','M','F','F','M',
'M','F','F','M','M','F','M','F','F','M']
col = ['Target', 'Day1', 'Day2','Day3','Day4','Day5','Day6','Day7']
new_df = pd.DataFrame(columns = col)
new_df['Target']=['A-Category & Age 22+','F-Female & ABC-Category & Age <21','M & Age 22-30','...']
</code></pre>
<p>Create a list of dictionaries containing all of your matching criteria. I have skipped the second example in your list due to there not being any ABC category in your data. If you mean any of the three, you'll have to modify this somewhat.</p>
<pre><code>condition_list = []
groups = [
{
'ID':'any',
'Individual':'any',
'Category':'A',
'age_min':22,
'age_max':100,
'Gender':'any',
'Target':'A-Category & Age 22+'
},
{
'ID':'any',
'Individual':'any',
'Category':'any',
'age_min':22,
'age_max':30,
'Gender':'M',
'Target':'M & Age 22-30'
}
]
for group in groups:
temp_list = []
for key, value in group.items():
if value == 'any':
temp_list.append([x for x in my_df[key].unique()])
else:
temp_list.append([value])
condition_list.append(temp_list)
</code></pre>
<p>Iterate over your list of conditions, slicing the dataframe, grouping, summing the aggregate column, pivoting, and appending to a final dataframe.</p>
<pre><code>output = pd.DataFrame(columns=['Target'])
for condition in condition_list:
    t = my_df[
        my_df['ID'].isin(condition[0]) &
        my_df['Individual'].isin(condition[1]) &
        my_df['Category'].isin(condition[2]) &
        (my_df['Age'].apply(lambda x: int(min(x.replace('+','').split('-')))) >= condition[3][0]) &
        (my_df['Age'].apply(lambda x: int(max(x.replace('+','').split('-')))) <= condition[4][0]) &
        my_df['Gender'].isin(condition[5])
    ]
    t['Target'] = condition[6][0]
    output = output.append(t.groupby(['Target','Date'])['DW'].sum().reset_index()
                            .pivot(index='Target', columns='Date', values='DW'))
</code></pre>
<p>Assign the Target column:</p>
<pre><code>output['Target'] = output.index
output = output.reset_index(drop=True)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> 2112019 3112019 8112019 Target
0 10934.0 15724.0 15724.0 A-Category & Age 22+
1 6552.0 7875.0 7875.0 M & Age 22-30
</code></pre>
|
python|pandas
| 0
|
5,166
| 45,212,479
|
Calculating cumulative sum for previous period in each cohort in pandas, python
|
<p>I am a newbie in pandas and I am trying to build a cohort analysis. I need a column containing the cumulative sum of values over the previous periods within each cohort.
For example, for this DataFrame:</p>
<pre>
Canceled
CohortGroup NewCustomers CancelPeriod
2016-05 75 2016-07 2
2016-08 5
2016-09 6
2016-10 7
2016-11 6
2016-12 2
2017-01 5
2017-02 6
2017-03 1
2017-04 5
2017-05 6
2017-06 1
2016-06 81 2016-07 1
2016-08 3
2016-09 4
2016-10 1
2016-11 6
2016-12 2
2017-01 5
2017-02 3
2017-03 3
2017-04 4
2017-05 4
2017-06 4
2016-07 139 2016-07 1
2016-08 6
2016-09 4
2016-10 8
2016-11 13
2016-12 5
</pre>
<p>I want to see output like this:</p>
<pre>
CanceledCustomers TotalCancCust
CohortGroup NewCustomers CancelPeriod
2016-05 75 2016-07 2 2
2016-08 5 7
2016-09 6 13
2016-10 7 19
2016-11 6 25
2016-12 2 27
2017-01 5 32
2017-02 6 38
2017-03 1 39
2017-04 5 44
2017-05 6 50
2017-06 1 51
2016-06 81 2016-07 1 1
2016-08 3 4
2016-09 4 8
2016-10 1 9
2016-11 6 15
2016-12 2 17
2017-01 5 22
2017-02 3 25
2017-03 3 28
2017-04 4 32
2017-05 4 36
2017-06 4 40
2016-07 139 2016-07 1 1
2016-08 6 7
2016-09 4 11
2016-10 8 19
2016-11 13 32
2016-12 5 37
</pre>
<p>How can I do it?</p>
|
<p>First forward-fill your DataFrame, then do a groupby with <code>cumsum</code>:</p>
<pre><code>df = df.fillna(method='ffill')
df['TotalCancCust'] = df.groupby(['CohortGroup'])['CanceledCustomers'].cumsum()
</code></pre>
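<p>A minimal sketch of the same idea on a flat frame with hypothetical column names:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'CohortGroup': ['2016-05', None, None, '2016-06', None],
    'CanceledCustomers': [2, 5, 6, 1, 3],
})
df['CohortGroup'] = df['CohortGroup'].fillna(method='ffill')
df['TotalCancCust'] = df.groupby('CohortGroup')['CanceledCustomers'].cumsum()
#   CohortGroup  CanceledCustomers  TotalCancCust
# 0     2016-05                  2              2
# 1     2016-05                  5              7
# 2     2016-05                  6             13
# 3     2016-06                  1              1
# 4     2016-06                  3              4
</code></pre>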
|
python|pandas|statistics|analytics
| 0
|
5,167
| 45,057,978
|
python performance problems using loops with big tables
|
<p>I am using python and multiple libraries like pandas and scipy to prepare data so I can start a deeper analysis. For preparation purposes I am, for instance, creating new columns with the difference of two dates. <br>
My code is producing the expected results but is really slow, so I cannot use it for a table with around 80K rows. The run time would be ca. 80 minutes for the table, just for this simple operation.</p>
<p>The problem is definitely related with my writing operation:</p>
<pre><code>tableContent[6]['p_test_Duration'].iloc[x] = difference
</code></pre>
<br>
<h3>Moreover, python raises a warning:</h3>
<p><a href="https://i.stack.imgur.com/oLpfC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oLpfC.png" alt="enter image description here" /></a></p>
<h1>complete code example for date difference:</h1>
<pre><code>import time
from datetime import date, datetime
tableContent[6]['p_test_Duration'] = 0
#for x in range (0,len(tableContent[6]['p_test_Duration'])):
for x in range (0,1000):
p_test_ZEIT_ANFANG = datetime.strptime(tableContent[6]['p_test_ZEIT_ANFANG'].iloc[x], '%Y-%m-%d %H:%M:%S')
p_test_ZEIT_ENDE = datetime.strptime(tableContent[6]['p_test_ZEIT_ENDE'].iloc[x], '%Y-%m-%d %H:%M:%S')
difference = p_test_ZEIT_ENDE - p_test_ZEIT_ANFANG
tableContent[6]['p_test_Duration'].iloc[x] = difference
</code></pre>
<h1>the correct result table:</h1>
<p><a href="https://i.stack.imgur.com/4iyhC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4iyhC.png" alt="---" /></a></p>
|
<p>Take away the loop, and apply the functions to the whole series.</p>
<pre><code>ZEIT_ANFANG = tableContent[6]['p_test_ZEIT_ANFANG'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
ZEIT_ENDE = tableContent[6]['p_test_ZEIT_ENDE'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
tableContent[6]['p_test_Duration'] = ZEIT_ENDE - ZEIT_ANFANG
</code></pre>
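<p>An even faster variant (a sketch assuming the same column layout) is <code>pd.to_datetime</code>, which parses the whole column in one vectorized call:</p>
<pre><code>import pandas as pd

start = pd.to_datetime(tableContent[6]['p_test_ZEIT_ANFANG'], format='%Y-%m-%d %H:%M:%S')
end = pd.to_datetime(tableContent[6]['p_test_ZEIT_ENDE'], format='%Y-%m-%d %H:%M:%S')
tableContent[6]['p_test_Duration'] = end - start
</code></pre>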
|
python|python-3.x|pandas|chained-assignment
| 4
|
5,168
| 44,921,911
|
Shuffling the training dataset with Tensorflow object detection api
|
<p>I'm working on a logo detection algorithm using the Faster-RCNN model with the Tensorflow object detection api.
My dataset is alphabetically ordered (so there are a hundred adidas logos, then a hundred apple logos, etc.), and I would like it to be shuffled during training.</p>
<p>I've put some values in the config file:</p>
<pre><code>train_input_reader:{
shuffle: true
queue_capacity: some value
min_after_dequeue : some other value}
</code></pre>
<p>However, whatever values I put in, the algorithm at first trains on all of the a's logos (adidas, apple and so on) and only after a lapse of time starts to see the b's logos (bmw etc.), then the c's, etc.</p>
<p>Of course I could just shuffle my input dataset directly, but I would like to understand the logic behind it.</p>
<p>PS: I've seen this <a href="https://stackoverflow.com/questions/43028683/whats-going-on-in-tf-train-shuffle-batch-and-tf-train-batch">post</a> about shuffling and min_after_dequeue, but I still dont quite get it. My batch size is 1 so it shouldn't be using <code>tf.train.shuffle_batch()</code> but only <code>tf.RandomShuffleQueue</code></p>
<p>My training dataset size is 5000 and if I write <code>min_after_dequeue: 4000 or 5000</code> it is still not shuffled right. Why though?</p>
<hr>
<p>Update:
@AllenLavoie It's a bit hard for me, as there are a lot of dependencies and I'm new to Tensorflow.
But in the end the queue is constructed by <code>tf.contrib.slim.parallel_reader.parallel_read</code>: </p>
<pre><code>_, string_tensor = parallel_reader.parallel_read(
config.input_path,
reader_class=tf.TFRecordReader,
num_epochs=(input_reader_config.num_epochs
if input_reader_config.num_epochs else None),
num_readers=input_reader_config.num_readers,
shuffle=input_reader_config.shuffle,
dtypes=[tf.string, tf.string],
capacity=input_reader_config.queue_capacity,
min_after_dequeue=input_reader_config.min_after_dequeue)
</code></pre>
<p>It seems that when I put <code>num_readers = 1</code> in the config file the dataset is finally shuffled as I want (at least in the beginning), but when there are more readers, the logos somehow come out in alphabetical order at the start. </p>
|
<p>I recommend shuffling the dataset prior to training. The way shuffling currently happens is imperfect and my guess at what is happening is that at the beginning the queue starts off empty and only gets examples that start with 'A' --- after a while it may be more shuffled, but there is no getting around the beginning part when the queue hasn't been filled yet. </p>
|
tensorflow|queue|shuffle|object-detection
| 1
|
5,169
| 45,178,480
|
How to merge two pandas dataframes using a column as pattern and include columns of the left dataframe?
|
<p>Given the following Python code, I was trying to use pd.merge, but it seems that key columns are required to be identical.
I am trying to do something similar to a SQL join with a "like" operator between df.B and categories.Pattern.</p>
<p><strong>UPDATE</strong> with better data example.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame([[1, 'Gas Station'], [2, 'Servicenter'], [5, 'Bakery good bread'], [58, 'Fresh market MIA'], [76, 'Auto Liberty aa1121']], columns=['A','B'])
Out[12]:
A B
0 1 Gas Station
1 2 Servicenter
2 5 Bakery good bread
3 58 Fresh market MIA
4 76 Auto Liberty aa1121
categories = pd.DataFrame([['Gasoline', 'Gas Station'], ['Gasoline', 'Servicenter'], ['Food', 'Bakery'], ['Food', 'Fresh market'], ['Insurance', 'Auto Liberty']], columns=['Category','Pattern'])
Out[13]:
Category Pattern
0 Gasoline Gas Station
1 Gasoline Servicenter
2 Food Bakery
3 Food Fresh market
4 Insurance Auto Liberty
</code></pre>
<p>Expected result is:</p>
<pre><code> Out[14]:
A B Category
0 1 Gas Station Gasoline
1 2 Servicenter Gasoline
2 5 Bakery good bread Food
3 58 Fresh market MIA Food
4 76 Auto Liberty aa1121 Insurance
</code></pre>
<p>Appreciate your suggestions/feedback.</p>
|
<p>By creating a new function like:</p>
<pre><code>def lookup_table(value, df):
    """
    :param value: value in which to search for a pattern
    :param df: dataframe which contains the lookup table
    :return:
    A string with the matched pattern, or None if nothing matched
    """
# Variable Initialization for non found entry in list
out = None
list_items = df['Pattern'].tolist()
for item in list_items:
if item in value:
out = item
break
return out
</code></pre>
<p>which returns the matching pattern from the look-up dataframe for the parameter <em>value</em>, or None if nothing matches.</p>
<p>The following complete example produces the expected dataframe.</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 'Gas Station'], [2, 'Servicenter'], [5, 'Bakery good bread'], [58, 'Fresh market MIA'], [76, 'Auto Liberty aa1121']], columns=['A','B'])
categories = pd.DataFrame([['Gasoline', 'Gas Station'], ['Gasoline', 'Servicenter'], ['Food', 'Bakery'], ['Food', 'Fresh market'], ['Insurance', 'Auto Liberty']], columns=['Category','Pattern'])
def lookup_table(value, df):
    """
    :param value: value in which to search for a pattern
    :param df: dataframe which contains the lookup table
    :return:
    A string with the matched pattern, or None if nothing matched
    """
# Variable Initialization for non found entry in list
out = None
list_items = df['Pattern'].tolist()
for item in list_items:
if item in value:
out = item
break
return out
df['Pattern'] = df['B'].apply(lambda x: lookup_table(x, categories))
final = pd.merge(df, categories)
</code></pre>
<p><a href="https://i.stack.imgur.com/pAvpB.png" rel="nofollow noreferrer">Expected output</a></p>
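<p>A more vectorized sketch (assuming the patterns contain no regex metacharacters) replaces the Python loop with <code>str.extract</code>:</p>
<pre><code>regex = '(' + '|'.join(categories['Pattern']) + ')'
df['Pattern'] = df['B'].str.extract(regex, expand=False)
final = pd.merge(df, categories)
</code></pre>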
|
python|pandas
| 1
|
5,170
| 45,224,889
|
installing tensorflow in windows anaconda - and running it using the Spyder GUI
|
<p>I visited <a href="https://www.tensorflow.org/install/install_windows#common_installation_problems" rel="nofollow noreferrer">the tensorflow page</a> and followed the instructions from the <code>Installing with Anaconda</code> section. When I tried to validate my installation, I got the errors below</p>
<pre><code>(C:\ProgramData\Anaconda3) C:\Users\nik>python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>> hello = tf.constant('Hello, TensorFlow!')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined
>>> exit
Use exit() or Ctrl-Z plus Return to exit
>>> exit()
</code></pre>
<p>Then I tried: </p>
<pre><code>(C:\ProgramData\Anaconda3) C:\Users\nik>activate tensorflow
(tensorflow) C:\Users\nik>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl
Collecting tensorflow==1.2.1 from https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl
Using cached https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl
Collecting bleach==1.5.0 (from tensorflow==1.2.1)
Using cached bleach-1.5.0-py2.py3-none-any.whl
Collecting html5lib==0.9999999 (from tensorflow==1.2.1)
Collecting backports.weakref==1.0rc1 (from tensorflow==1.2.1)
Using cached backports.weakref-1.0rc1-py3-none-any.whl
Collecting werkzeug>=0.11.10 (from tensorflow==1.2.1)
Using cached Werkzeug-0.12.2-py2.py3-none-any.whl
Collecting markdown>=2.6.8 (from tensorflow==1.2.1)
Collecting protobuf>=3.2.0 (from tensorflow==1.2.1)
Collecting numpy>=1.11.0 (from tensorflow==1.2.1)
Using cached numpy-1.13.1-cp35-none-win_amd64.whl
Collecting six>=1.10.0 (from tensorflow==1.2.1)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting wheel>=0.26 (from tensorflow==1.2.1)
Using cached wheel-0.29.0-py2.py3-none-any.whl
Collecting setuptools (from protobuf>=3.2.0->tensorflow==1.2.1)
Using cached setuptools-36.2.0-py2.py3-none-any.whl
Installing collected packages: six, html5lib, bleach, backports.weakref, werkzeug, markdown, setuptools, protobuf, numpy, wheel, tensorflow
Successfully installed backports.weakref-1.0rc1 bleach-1.5.0 html5lib-0.9999999 markdown-2.6.8 numpy-1.13.1 protobuf-3.3.0 setuptools-36.2.0 six-1.10.0 tensorflow-1.2.1 werkzeug-0.12.2 wheel-0.29.0
(tensorflow) C:\Users\nik>python
Python 3.5.3 |Continuum Analytics, Inc.| (default, May 15 2017, 10:43:23) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-07-20 12:20:26.177654: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.178276: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.178687: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.179189: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.179713: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.180250: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.180687: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-20 12:20:26.181092: W c:\tf_jenkins\home\workspace\release-win\m\windows\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
</code></pre>
<p>My questions are below; my main question is question 3: </p>
<ol>
<li>Am I supposed to validate the installation after typing the command
<code>activate tensorflow</code>, as shown in the second command block above? </li>
<li>Why am I getting multiple instruction-set warnings after
the command <code>sess = tf.Session()</code>?</li>
<li><p><strong>Can I use tensorflow within the
Spyder GUI? How? I tried the following in the Spyder GUI, but didn't have any success :(</strong></p>
<p>activate tensorflow</p></li>
</ol>
<p>File "", line 1</p>
<pre><code> activate tensorflow
^
SyntaxError: invalid syntax
import tensorflow as tf
Traceback (most recent call last):
File "<ipython-input-2-41389fad42b5>", line 1, in <module>
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
|
<p><strong>Q1</strong>: Yes, you need to activate the virtual environment to import tensorflow, as you have installed tensorflow in a virtual environment. </p>
<p><strong>Q2</strong>: These instruction-set messages are normal and built into tensorflow: the prebuilt binary is not compiled with those SIMD instructions. You can avoid them by building tensorflow yourself with SIMD instructions enabled. <a href="https://www.youtube.com/watch?v=ghv5fbC287o" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ghv5fbC287o</a></p>
<p><strong>Q3</strong>: You need to change the first step when you create the virtual environment. Create the virtual environment using the command <code>conda create -n tensorflow python=3.5 anaconda</code>. </p>
<p>The detailed answer to your Q3 is as follows:</p>
<ol>
<li><p>Create tensorflow environment using "conda create -n tensorflow python=3.5 anaconda"</p></li>
<li><p>Once virtual environment is created enter the command "activate tensorflow"</p></li>
<li><p>Now install tensorflow using "pip install tensorflow" (CPU-only) or pip install tensorflow-gpu (for GPU).</p></li>
<li><p>Now go to the folder where anaconda is installed. </p></li>
<li><p>If C:\ProgramData\Anaconda3 is Anaconda root folder then go to "C:\ProgramData\Anaconda3\envs\test\Scripts" and open spyder.exe. You should be able to import tensorflow successfully in this environment. </p></li>
</ol>
|
python|windows|tensorflow|anaconda|spyder
| 2
|
5,171
| 57,225,716
|
GradientTape losing track of variable
|
<p>I have a script that performs a Gatys-like neural style transfer. It uses style loss, and a total variation loss. I'm using the GradientTape() to compute my gradients. The losses that I have implemented seem to work fine, but a new loss that I added isn't being properly accounted for by the GradientTape(). I'm using TensorFlow with eager execution enabled.</p>
<p>I suspect it has something to do with how I compute the loss based on the input variable. The input is a 4D tensor (batch, h, w, channels). At the most basic level, the input is a floating point image, and in order to compute this new loss I need to convert it to a binary image to compute the ratio of one pixel color to another. I don't want to actually go and change the image like that during every iteration, so I just make a copy of the tensor(in numpy form) and operate on that to compute the loss. I do not understand the limitations of the GradientTape, but I believe it is "losing the thread" of how the input variable is used to get to the loss when it's converted to a numpy array.</p>
<p>Could I make a copy of the image tensor and perform binarizing operations & loss computation using that? Or am I asking tensorflow to do something that it just can not do?</p>
<p>My new loss function:</p>
<pre><code>def compute_loss(self, **kwargs):
loss = 0
image = self.model.deprocess_image(kwargs['image'].numpy())
binarized_image = self.image_decoder.binarize_image(image)
volume_fraction = self.compute_volume_fraction(binarized_image)
loss = np.abs(self.volume_fraction_target - volume_fraction)
return loss
</code></pre>
<p>My implementation using the GradientTape:</p>
<pre><code>def compute_grads_and_losses(self, style_transfer_state):
"""
Computes gradients with respect to input image
"""
with tf.GradientTape() as tape:
loss = self.loss_evaluator.compute_total_loss(style_transfer_state)
total_loss = loss['total_loss']
return tape.gradient(total_loss, style_transfer_state['image']), loss
</code></pre>
<p>An example that I believe might illustrate my confusion. The strangest thing is that my code doesn't have any problem running; it just doesn't seem to minimize the new loss term whatsoever. But this example won't even run due to an attribute error: <code>AttributeError: 'numpy.float64' object has no attribute '_id'</code>.</p>
<p>Example:</p>
<pre><code>import tensorflow.contrib.eager as tfe
import tensorflow as tf
def compute_square_of_value(x):
a = turn_to_numpy(x['x'])
return a**2
def turn_to_numpy(arg):
return arg.numpy() #just return arg to eliminate the error
tf.enable_eager_execution()
x = tfe.Variable(3.0, dtype=tf.float32)
data_dict = {'x': x}
with tf.GradientTape() as tape:
tape.watch(x)
y = compute_square_of_value(data_dict)
dy_dx = tape.gradient(y, x) # Will compute to 6.0
print(dy_dx)
</code></pre>
<p>Edit:</p>
<p>From my current understanding, the issue is that my use of the .numpy() operation is what makes the GradientTape lose track of the variable to compute the gradient from. My original reason for doing this is that my loss operation requires me to physically change values of the tensor, and I don't want to actually change the values of the tensor that is being optimized. Hence the use of the numpy() copy to work on in order to compute the loss properly. Is there any way around this? Or shall I consider my loss calculation impossible to implement because of this constraint of having to perform essentially non-reversible operations on the input tensor?</p>
|
<p>The first issue here is that <code>GradientTape</code> only traces operations on <code>tf.Tensor</code> objects. When you call <code>tensor.numpy()</code>, the operations executed there fall outside the tape.</p>
<p>The second issue is that your first example never calls <code>tape.watch</code> on the image you want to differentiate with respect to.</p>
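<p>A minimal fix for the toy example (a sketch in the question's TF 1.x eager style) is to keep every operation on <code>tf.Tensor</code> objects so the tape can trace them; a <code>tfe.Variable</code> is watched automatically, so no explicit <code>tape.watch</code> is needed here:</p>
<pre><code>import tensorflow.contrib.eager as tfe
import tensorflow as tf

tf.enable_eager_execution()
x = tfe.Variable(3.0, dtype=tf.float32)

with tf.GradientTape() as tape:
    y = x ** 2  # stays a tf.Tensor, so the tape records the op

print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
</code></pre>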
|
python-3.x|tensorflow|tensorflow2.0
| 2
|
5,172
| 57,167,760
|
How to properly apply a function to a series using series.map() or series.apply() in pandas
|
<p>I am trying to apply a predefined function (myfunc) to a new series in my DataFrame using pandas. The function should check whether the value at each index of the old column (for each row) is bigger than the previous one, and return 1 if yes and 0 if no. </p>
<p>I have also tried the series.apply() function and I am getting generator objects across all rows in the newly created column.</p>
<pre><code>def myfunc(x):
for i in range(0,86):
if x.iloc[i + 1] > x.iloc[i]:
yield 1
else:
yield 0
df2['Higher Inflation - US'] = df2['US'].map(myfunc)
print(df2)
</code></pre>
<p>I expect to see 1's and 0's in the new column. I get the results I want when I use 'print' instead of 'yield' in my function, but what I want to do is apply this function to multiple series.</p>
|
<p>You can use the shift function to compare values in previous rows:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df2 = pd.DataFrame(
{
'US':[1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,2,1]
})
def myfunc1(x):
if x:
return 1
else:
return 0
df2['Higher Inflation - US'] = (df2['US']>df2['US'].shift()).map(myfunc1)
df2
</code></pre>
<p>Will return:</p>
<pre><code> US Higher Inflation - US
0 1 0
1 2 1
2 3 1
3 4 1
4 5 1
5 6 1
6 7 1
7 8 1
8 9 1
9 8 0
10 7 0
11 6 0
12 5 0
13 4 0
14 3 0
15 2 0
16 1 0
</code></pre>
<p>The function is so simple that it would be better to replace it with a lambda:</p>
<pre class="lang-py prettyprint-override"><code>(df2['US']>df2['US'].shift()).apply(lambda x: 1 if x else 0)
</code></pre>
<p>The result is the same</p>
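<p>The same mapping can also be written without any function at all, by casting the boolean mask directly:</p>
<pre class="lang-py prettyprint-override"><code>(df2['US'] > df2['US'].shift()).astype(int)
</code></pre>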
|
python|pandas
| 0
|
5,173
| 56,924,618
|
Am I able to fix the headers in my DataFrame while still using Pandas read_html?
|
<p>I want to pull an HTML table into a Pandas dataframe, and read_html so far has been the easiest method. However, some of the headers are coming through a little funky, and I'm trying to avoid fixing them manually in Excel.</p>
<p>I also tried several BeautifulSoup tutorials, but was unable to pull the tables into a dataframe rather than lists.</p>
<pre><code>import pandas as pd
url = "https://www.espn.com/nba/stats/player/_/season/2017/seasontype/2/table/offensive/sort/avgAssists/dir/desc"
df = pd.read_html(url)[0]
df.to_excel("espn_table.xlsx")
</code></pre>
<p>I was hoping to get a simple copy of the table showing NBA assist leaders, but three things are happening: <br>
1. A full player/team list is populating a single cell (B4) in Excel <br>
2. A full player name list is appearing in rows 5-51, separate from the stats they're associated with <br>
3. A second cell in Excel (B56) is displaying a duplicated copy of the table stats</p>
<p>Thanks in advance for any tips.</p>
|
<p>This should help.</p>
<p>As @jottbe pointed out, one should try to avoid reading the url twice. </p>
<pre><code>dfs = pd.read_html(url)
df1 = dfs[1]
df2 = dfs[3]
df = df1.join(df2)
</code></pre>
<pre><code> Name POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER
0 James HardenHOU PG 81 36.4 29.1 8.3 18.9 44.0 3.2 9.3 34.7 9.2 10.9 84.7 8.1 11.2 1.5 0.5 5.7 64 22 27.43
1 John WallWSH PG 78 36.4 23.1 8.3 18.4 45.1 1.1 3.5 32.7 5.4 6.8 80.1 4.2 10.7 2.0 0.6 4.1 50 0 23.28
2 Russell WestbrookOKC PG 81 34.6 31.6 10.2 24.0 42.5 2.5 7.2 34.3 8.8 10.4 84.5 10.7 10.4 1.6 0.4 5.4 62 42 30.7
3 Chris PaulLAC PG 61 31.5 18.1 6.1 12.9 47.6 2.0 5.0 41.1 3.8 4.3 89.2 5.0 9.2 2.0 0.1 2.4 24 1 26.25
4 Ricky RubioMIN PG 75 32.9 11.1 3.5 8.7 40.2 0.8 2.6 30.6 3.4 3.8 89.1 4.1 9.1 1.7 0.1 2.6 25 1 16.87
</code></pre>
<hr>
<p>Now, the <code>Name</code> column also contains two- or three-character team abbreviations, which you can get rid of using <code>.str.replace()</code>. </p>
<pre><code>df['Name'] = df['Name'].str.replace(r'([A-Z]{2,3}$)', '')
</code></pre>
<pre><code> Name POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER
0 James Harden PG 81 36.4 29.1 8.3 18.9 44.0 3.2 9.3 34.7 9.2 10.9 84.7 8.1 11.2 1.5 0.5 5.7 64 22 27.43
1 John Wall PG 78 36.4 23.1 8.3 18.4 45.1 1.1 3.5 32.7 5.4 6.8 80.1 4.2 10.7 2.0 0.6 4.1 50 0 23.28
2 Russell Westbrook PG 81 34.6 31.6 10.2 24.0 42.5 2.5 7.2 34.3 8.8 10.4 84.5 10.7 10.4 1.6 0.4 5.4 62 42 30.7
3 Chris Paul PG 61 31.5 18.1 6.1 12.9 47.6 2.0 5.0 41.1 3.8 4.3 89.2 5.0 9.2 2.0 0.1 2.4 24 1 26.25
4 Ricky Rubio PG 75 32.9 11.1 3.5 8.7 40.2 0.8 2.6 30.6 3.4 3.8 89.1 4.1 9.1 1.7 0.1 2.6 25 1 16.87
</code></pre>
<hr>
<p>Lastly, you could easily save it as an excel table <code>df.to_excel("espn_table.xlsx")</code></p>
|
html|python-3.x|pandas
| 1
|
5,174
| 45,888,173
|
Nested list (of strings) to matrix of float in python
|
<p>I wondered what would convert a large list, structured like <code>['12,-1', '0.01,3']</code> to an array like </p>
<pre><code>12 -1
0.01 3
</code></pre>
<p>The following code does this, but I don't think it is efficient:</p>
<pre><code>import numpy as nu
list1 = ['12,-1', '0.01,3']
pp= nu.zeros(shape=(len (list1),2))
for i in range (len (list1)):
pp[i,0]= map (float,list1[i].split(','))[0]
pp[i,1]= map (float,list1[i].split(','))[1]
</code></pre>
<p>Any suggestions?</p>
|
<p>The solution using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html" rel="nofollow noreferrer"><strong><em>numpy.split</em></strong></a> routine:</p>
<pre><code>import numpy as np
list1 = ['12,-1', '0.01,3']
result = np.split(np.array([float(i) for _ in list1 for i in _.split(',')]), 2)
print(result)
</code></pre>
<p>The output:</p>
<pre><code>[array([ 12., -1.]), array([ 0.01, 3. ])]
</code></pre>
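<p>A simpler sketch that yields a proper 2-D array directly:</p>
<pre><code>import numpy as np

list1 = ['12,-1', '0.01,3']
pp = np.array([s.split(',') for s in list1], dtype=float)
# pp is a (2, 2) float array: [[12, -1], [0.01, 3]]
</code></pre>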
|
list|numpy
| 0
|
5,175
| 45,745,932
|
Update pandas dataframe columns based on index
|
<p>In pandas have a dataframe:</p>
<pre><code>df1 = pd.DataFrame({'Type':['Application','Application','Hardware'],
'Category': ['None','None','Hardware']})
</code></pre>
<p>I have the following index to retrieve rows where type contains "application" and Category contains 'None'.</p>
<pre><code>df1[df1['Type'].str.contains('Application') & df1['Category'].str.contains('None')]
Category Type
0 None Application
1 None Application
</code></pre>
<p>I would like to update the column Category such that the value is 'some new value' for each row.</p>
<p>I have also tried the same with the following loc index with no success</p>
<pre><code>df1[df1.loc[:,'Type'].str.contains('Application') \
& df1.loc[:,'Category'].str.contains('None')]
</code></pre>
|
<p>Are you looking for this?</p>
<pre><code>df1.loc[(df1['Type'] == 'Application') & (df1['Category'] == 'None'), 'Category'] = 'New category'
Category Type
0 New category Application
1 New category Application
2 Hardware Hardware
</code></pre>
|
python|pandas|dataframe
| 1
|
5,176
| 46,146,441
|
Building a CMake library within a Bazel project
|
<p>I've written a module on top of a private fork off of TensorFlow that uses <a href="https://github.com/nanomsg/nanomsg" rel="noreferrer">nanomsg</a>. </p>
<p>For my local development server, I used <code>cmake install</code> to install nanomsg (to <code>/usr/local</code>) and accessed the header files from their installed location. The project runs fine locally.</p>
<p>However, I now need to package nanomsg within my TensorFlow workspace. I've tried the following two approaches, and find neither satisfactory:</p>
<ol>
<li><p>Similar to <a href="https://stackoverflow.com/a/35024014">this answer</a> for OpenCV, I precompiled nanomsg into a private repository, loaded it within my workspace (within <code>tensorflow/workspace.bzl</code>) using an <a href="https://docs.bazel.build/versions/master/be/workspace.html#http_archive" rel="noreferrer">http_archive directive</a> then included the headers and libraries in the relevant build script. This runs fine, but is not a portable solution.</p></li>
<li><p>A more portable solution, I created a <code>genrule</code> to run a specific sequence of <code>cmake</code> commands that can be used to build nanomsg. This approach is neater, but the <code>genrule</code> cannot be reused to <code>cmake</code> other projects. (I referred to <a href="https://github.com/bazelbuild/bazel/issues/2532" rel="noreferrer">this discussion</a>).</p></li>
</ol>
<p>Clearly <code>cmake</code> is not supported as a first-class citizen in Bazel builds. Has anyone who faced this problem in their own projects created a generic, portable way to include cmake-built libraries within Bazel projects? If so, how did you approach it?</p>
|
<p>As Ulf wrote, I think your suggested option 2 should work fine.</p>
<p>Regarding "can I identify if the cmake fails", yes: cmake should return with an error exit code (!= 0) when it fails. This in turn will cause Bazel to automatically recognize the genrule action as failed and thus fail the build. Because Bazel sets "set -e -o pipefail" before running your command (cf. <a href="https://docs.bazel.build/versions/master/be/general.html#genrule-environment" rel="noreferrer">https://docs.bazel.build/versions/master/be/general.html#genrule-environment</a>), it should also work if you chain multiple cmake commands in your genrule "cmd".</p>
<p>If you call out to a shell script in your "cmd" attribute that then actually runs the cmake commands, make sure to put "set -e -o pipefail" in the first line of your script yourself. Otherwise the script will not fail when cmake fails.</p>
<p>If I misunderstood your question "Can I identify if the cmake fails", please let me know. :)</p>
|
tensorflow|build|cmake|bazel|nanomsg
| 6
|
5,177
| 45,824,897
|
Add Elements to array
|
<p>I want to create an empty array and add multiple different values to it in a for loop. Should I use append or concatenate for this? The code is nearly working.</p>
<pre><code>values=np.array([[]])
for i in range(diff.shape[0]):
add=np.array([[i,j,k,fScore,costMST,1]])
values=np.append([values,add])
</code></pre>
<p>The result should like </p>
<pre><code>[[0,...],[1,...],[2,...],...]
</code></pre>
<p>Thanks a lot</p>
|
<p>Use neither. <code>np.append</code> is just another way of calling <code>concatenate</code>, one that takes 2 arguments instead of a list. So both are relatively expensive, creating a new array with each call. Plus it is hard to get the initial value correct, as you have probably found.</p>
<p>List append is the correct way to build an array. The result will be a list or list of lists. That can be turned into an array with <code>np.array</code> or one of the <code>stack</code> (concatenate) functions at the end.</p>
<p>Try:</p>
<pre><code>values=[]
for i in range(diff.shape[0]):
add=np.array([[i,j,k,fScore,costMST,1]])
values.append(add)
values = np.stack(values)
</code></pre>
<p>Since <code>add</code> is 2d, this use of <code>stack</code> will make a 3d. You might want <code>vstack</code> instead (or <code>np.concatenate(values, axis=0)</code> is the same thing).</p>
<p>Or try:</p>
<pre><code>values=[]
for i in range(diff.shape[0]):
add=[i,j,k,fScore,costMST,1]
values.append(add)
values = np.array(values)
</code></pre>
<p>this makes a list of lists, which <code>np.array</code> turns into a 2d array.</p>
|
python|arrays|numpy
| 3
|
5,178
| 35,491,185
|
orientation of normal surface/vertex vectors
|
<p>Given a convex 3d polygon (convex hull), how can I determine the correct direction for the normal surface/vertex vectors? As the polygon is convex, by correct I mean outward facing (away from the centroid). </p>
<pre><code>def surface_normal(centroid, p1, p2, p3):
a = p2-p1
b = p3-p1
n = np.cross(a,b)
if **test including centroid?** :
return n
else:
return -n # change direction
</code></pre>
<p>I actually need the normal vertex vectors as I am exporting to a .obj file, but I am assuming that I would need to calculate the surface vectors beforehand and combine them.</p>
|
<p>This solution should work under the assumption of a convex hull in 3d. You calculate the normal as shown in the question. You can normalize the normal vector with</p>
<pre><code>n /= np.linalg.norm(n) # which should be sqrt(n[0]**2 + n[1]**2 + n[2]**2)
</code></pre>
<p>You can then calculate the center point of your input triangle: </p>
<pre><code>pmid = (p1 + p2 + p3) / 3
</code></pre>
<p>After that you calculate the distance of the triangle-center to your surface centroid. This is</p>
<pre><code>dist_centroid = np.linalg.norm(pmid - centroid)
</code></pre>
<p>Then you can offset the triangle center along the normal, scaled by that distance, and measure the distance from the offset point to the centroid. </p>
<pre><code>dist_with_normal = np.linalg.norm(pmid + n * dist_centroid - centroid)
</code></pre>
<p>If this distance is larger than dist_centroid, then your normal is facing outwards. If it is smaller, it is pointing inwards. If you have a perfect sphere and the normal points towards the centroid, the offset distance should be almost zero. This may not be the case for a general surface, but the convexity of the surface ensures that this check is enough to determine the direction. </p>
<pre><code>if(dist_centroid < dist_with_normal):
n *= -1
</code></pre>
<hr>
<p>Another, nicer option is to use a scalar product. </p>
<pre><code>pmid = (p1 + p2 + p3) / 3
if(np.dot(pmid - centroid, n) < 0):
n *= -1
</code></pre>
<p>This checks if your normal and the vector from the mid of your triangle to the centroid have the same direction. If that is not so, change the direction. </p>
|
numpy|vector-graphics|normals
| 1
|
5,179
| 35,651,553
|
How to make data frame from python dictionary
|
<p>I have a python dictionary in the below format.</p>
<pre><code>dict={'a':[1,2,3],'b':[4,5,6],'c':[7,8,9]}
</code></pre>
<p>how to create a data frame such that</p>
<pre><code>a b c-----> column names or feature names
1 4 7
2 5 8
3 6 9
</code></pre>
|
<p>Simply call the <code>DataFrame</code> constructor:</p>
<pre><code>import pandas as pd
pd.DataFrame(dict)
</code></pre>
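<p>which gives (as a side note, naming a variable <code>dict</code> shadows the Python builtin, so a different name is safer):</p>
<pre><code>   a  b  c
0  1  4  7
1  2  5  8
2  3  6  9
</code></pre>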
|
python|pandas
| 0
|
5,180
| 50,751,135
|
Iterating operation with two arrays using numpy
|
<p>I'm working with two different arrays (75x4), and I'm applying a shortest distance algorithm between the two arrays.</p>
<p>So I want to:</p>
<ul>
<li>perform an operation with one row of the first array with every individual row of the second array, iterating to obtain 75 values</li>
<li>find the minimum value, and store that in a new array</li>
<li>repeat this with the second row of the first array, once again iterating the operation for all the rows of the second array, and again storing the minimum difference to the new array</li>
</ul>
<p>How would I go about doing this with numpy?</p>
<p>Essentially I want to perform an operation between one row of array 1 on every row of array 2, find the minimum value, and store that in a new array. Then do that very same thing for the 2nd row of array 1, and so on for all 75 rows of array 1.</p>
<p>Here is the code for the formula I'm using. What I get here is just the distance between every row of array 1 (training data) and array 2 (testing data). But what I'm looking for is to do it for one row of array 1 iterating down for all rows of array 2, storing the minimum value in a new array, then doing the same for the next row of array 1, and so on. </p>
<pre><code>arr_attributedifference = (arr_trainingdata - arr_testingdata)**2
arr_distance = np.sqrt(arr_attributedifference.sum(axis=1))
</code></pre>
|
<p>Here are two methods: one using <code>einsum</code>, the other <code>KDTree</code>:</p>
<p><code>einsum</code> does essentially what we could also achieve via broadcasting, for example <code>np.einsum('ik,jk', A, B)</code> is roughly equivalent to <code>(A[:, None, :] * B[None, :, :]).sum(axis=2)</code>. The advantage of einsum is that it does the summing straight away, so it avoids creating an mxmxn intermediate array.</p>
<p><code>KDTree</code> is more sophisticated. We have to invest upfront into generating the tree but afterwards querying nearest neighbors is very efficient.</p>
<pre><code>import numpy as np
from scipy.spatial import cKDTree as KDTree

def f_einsum(A, B):
    B2AB = np.einsum('ij,ij->i', B, B) / 2 - np.einsum('ik,jk', A, B)
    idx = B2AB.argmin(axis=1)
    D = A - B[idx]
    return np.sqrt(np.einsum('ij,ij->i', D, D)), idx

def f_KDTree(A, B):
    T = KDTree(B)
    return T.query(A, 1)

m, n = 75, 4
A, B = np.random.randn(2, m, n)

de, ie = f_einsum(A, B)
dt, it = f_KDTree(A, B)
assert np.all(ie == it) and np.allclose(de, dt)

from timeit import timeit

for m, n in [(75, 4), (500, 4)]:
    A, B = np.random.randn(2, m, n)
    print(m, n)
    print('einsum:', timeit("f_einsum(A, B)", globals=globals(), number=1000))
    print('KDTree:', timeit("f_KDTree(A, B)", globals=globals(), number=1000))
</code></pre>
<p>Sample run:</p>
<pre><code>75 4
einsum: 0.067826496087946
KDTree: 0.12196151306852698
500 4
einsum: 3.1056990439537913
KDTree: 0.85108971898444
</code></pre>
<p>We can see that at small problem sizes the direct method (einsum) is faster, while for larger problem sizes KDTree wins.</p>
|
python|arrays|numpy|iteration
| 3
|
5,181
| 50,994,140
|
Problems while plotting label vs datetime in a pandas column?
|
<p>I have the following pandas dataframe, which consist of datetime timestamps and user ids:</p>
<pre><code> id datetime
130 2018-05-17 19:46:18
133 2018-05-17 20:59:57
131 2018-05-17 21:54:01
142 2018-05-17 22:49:07
114 2018-05-17 23:02:34
136 2018-05-18 06:06:48
324 2018-05-18 12:21:38
180 2018-05-18 12:49:33
120 2018-05-18 14:03:58
120 2018-05-18 15:28:36
</code></pre>
<p>How can I plot on the y axis the id and on the x axis day or minutes? I tried to:</p>
<pre><code>plt.plot(df3['datetime'], df3['id'], '|')
plt.xticks(rotation='vertical')
</code></pre>
<p>However, I have two problems: my dataframe is quite large with multiple ids, and I wasn't able to arrange each label on the y axis and plot it against its datetime value on the x axis. Any idea how to do something like this:</p>
<p><img src="https://i.stack.imgur.com/UkExX.png" alt="example"></p>
<p>The whole objective of this plot is to visualize the logins per time of that specific user.</p>
|
<p>Something like this?</p>
<p>X axis: date, Y axis: id</p>
<pre><code>from datetime import date
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd
# set your data as df
# strip only YYYY-mm-dd part from original `datetime` column
df.datetime = df.datetime.apply(lambda x: str(x)[:10])
df.datetime = df.datetime.apply(lambda x: date(int(x[:4]), int(x[5:7]), int(x[8:10])))
# plot
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator())
plt.plot(df.datetime, df.id, '|')
plt.gcf().autofmt_xdate()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/fkxZX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fkxZX.png" alt="enter image description here"></a></p>
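<p>As a shorter alternative for the date extraction (assuming the <code>datetime</code> column is parseable by pandas), the two <code>apply</code> calls above can be replaced with:</p>
<pre><code>df.datetime = pd.to_datetime(df.datetime).dt.date
</code></pre>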
|
python|python-3.x|pandas|matplotlib|seaborn
| 0
|
5,182
| 51,031,691
|
tensorflow custom loss that are not in the form of sum of single sample errors
|
<p>I was working through <a href="http://stackabuse.com/tensorflow-neural-network-tutorial/" rel="nofollow noreferrer">here</a>. I am currently modifying the loss. Looking at,</p>
<pre><code> deltas=tf.square(y_est-y)
loss=tf.reduce_sum(deltas)
</code></pre>
<p>I am understanding this to be calculating the squared difference between the output and the true label. Then the loss is the sum of these squares. So, writing the squared single sample error as <code>S_i</code> for sample <code>i</code>, things are simple for the case when the batch loss is just <code>\sum_{i} f(S_i)</code>,
where the summation runs over all samples. But what if you cannot write the loss in this form? That is, the batch loss for the whole data is <code>f({S_i})</code> for some general <code>f</code>, with <code>i</code> going over all the samples; the loss for the whole data cannot be calculated as some simple linear combination of the losses for the constituent samples. How do you code this in tensorflow? Thank you. </p>
<p>Below is more details on <code>f</code>. The outputs of the neural network are <code>u_i</code> where <code>i</code> goes from 1 to n. n is the number of samples we have. My error is something like</p>
<pre><code>sum_{i from 1 to n} C_i log{sum_{k from 1 to n} I_{ik} exp{-d(u_i,u_k)} }
</code></pre>
<p><code>C_i</code> is the number of nodes connected to node <code>i</code>, which I already have and which is a constant. <code>I_{ik}</code> is 1 if node <code>i</code> and node <code>k</code> are not connected. </p>
<p>Thanks for the code; maybe my question has not been worded correctly. I am not really looking for the code for the loss, which I can do myself. If you look at</p>
<pre><code>deltas=tf.square(y_est-y)
loss=tf.reduce_sum(deltas)
</code></pre>
<p>deltas, are they (1,3)? A bit above it reads</p>
<pre><code># Placeholders for input and output data
X = tf.placeholder(shape=(120, 4), dtype=tf.float64, name='X')
y = tf.placeholder(shape=(120, 3), dtype=tf.float64, name='y')
# Variables for two group of weights between the three layers of the network
W1 = tf.Variable(np.random.rand(4, hidden_nodes), dtype=tf.float64)
W2 = tf.Variable(np.random.rand(hidden_nodes, 3), dtype=tf.float64)
# Create the neural net graph
A1 = tf.sigmoid(tf.matmul(X, W1))
y_est = tf.sigmoid(tf.matmul(A1, W2))
</code></pre>
<p>I think it is (1,3). They use (1,3) <code>y_est</code> and <code>y</code>. I want to know the specific tensorflow syntax for working with (m,3) <code>y_est</code> and <code>y</code> for any given m.</p>
|
<p>I might be wrong with syntaxes ... but this should give you a general idea. Also, you can optimize it further by vectorization. I have simply put your loss function as it is.<br>
N is the batch size. </p>
<pre><code>def f(C, I, U, N):
    loss = 0
    for i in range(N):
        sum_ = 0
        for k in range(N):
            sum_ += I[i, k] * tf.exp(-d(U[i], U[k]))  # inner sum over k
        loss += C[i] * tf.log(sum_)                   # weighted log of the inner sum
    return loss

loss = f(C, I, U, batch_size)
</code></pre>
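<p>If the Python loops become a bottleneck, here is a hedged vectorized sketch of the same loss, assuming <code>d</code> is the squared Euclidean distance and <code>U</code> is the <code>(N, dim)</code> output tensor:</p>
<pre><code>diff = tf.expand_dims(U, 1) - tf.expand_dims(U, 0)  # (N, N, dim) pairwise differences
dists = tf.reduce_sum(tf.square(diff), axis=-1)     # (N, N) squared distances d(u_i, u_k)
inner = tf.reduce_sum(I * tf.exp(-dists), axis=1)   # inner sum over k for each i
loss = tf.reduce_sum(C * tf.log(inner))             # weighted outer sum over i
</code></pre>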
|
python|tensorflow|machine-learning
| 2
|
5,183
| 20,711,838
|
How can I approximate the periodicity of a pandas time Series
|
<p>Is there a way to approximate the periodicity of a time series in pandas? For R, the <code>xts</code> objects have a method called <code>periodicity</code> that serves exactly this purpose. Is there an implemented method to do so? </p>
<p>For instance, can we infer the frequency from time series that do not specify frequency?</p>
<pre><code>import pandas.io.data as web
aapl = web.get_data_yahoo("AAPL")
<class 'pandas.tseries.index.DatetimeIndex'>
[2010-01-04 00:00:00, ..., 2013-12-19 00:00:00]
Length: 999, Freq: None, Timezone: None
</code></pre>
<p>The frequency of this series can reasonably be approximated to be daily.</p>
<p><strong>Update:</strong></p>
<p>I think it might be helpful to show the source code of R's implementation of the periodicity method.</p>
<pre><code>function (x, ...)
{
    if (timeBased(x) || !is.xts(x))
        x <- try.xts(x, error = "'x' needs to be timeBased or xtsible")
    p <- median(diff(.index(x)))
    if (is.na(p))
        stop("can not calculate periodicity of 1 observation")
    units <- "days"
    scale <- "yearly"
    label <- "year"
    if (p < 60) {
        units <- "secs"
        scale <- "seconds"
        label <- "second"
    }
    else if (p < 3600) {
        units <- "mins"
        scale <- "minute"
        label <- "minute"
        p <- p/60L
    }
    else if (p < 86400) {
        units <- "hours"
        scale <- "hourly"
        label <- "hour"
    }
    else if (p == 86400) {
        scale <- "daily"
        label <- "day"
    }
    else if (p <= 604800) {
        scale <- "weekly"
        label <- "week"
    }
    else if (p <= 2678400) {
        scale <- "monthly"
        label <- "month"
    }
    else if (p <= 7948800) {
        scale <- "quarterly"
        label <- "quarter"
    }
    structure(list(difftime = structure(p, units = units, class = "difftime"),
                   frequency = p, start = start(x), end = end(x), units = units,
                   scale = scale, label = label), class = "periodicity")
}
</code></pre>
<p>I think this is the key line, which I don't quite understand:
<code>p <- median(diff(.index(x)))</code></p>
|
<p>This time series skips weekends (and holidays), so it really doesn't have a daily frequency to begin with. You could use <code>asfreq</code> to upsample it to a time series with daily frequency, however:</p>
<pre><code>aapl = aapl.asfreq('D', method='ffill')
</code></pre>
<p>Doing so propagates forward the last observed value to dates with missing values. </p>
<p>Note that Pandas also has a business day frequency, so it is also possible to upsample to business days by using:</p>
<pre><code>aapl = aapl.asfreq('B', method='ffill')
</code></pre>
<hr>
<p>If you wish to automate the process of inferring the median frequency in days, then you could do this:</p>
<pre><code>import pandas as pd
import numpy as np
import pandas.io.data as web
aapl = web.get_data_yahoo("AAPL")
f = np.median(np.diff(aapl.index.values))
days = f.astype('timedelta64[D]').item().days
aapl = aapl.asfreq('{}D'.format(days), method='ffill')
print(aapl)
</code></pre>
<hr>
<p>This code needs testing, but perhaps it comes close to the R code you posted:</p>
<pre><code>import pandas as pd
import numpy as np
import pandas.io.data as web

def infer_freq(ts):
    med = np.median(np.diff(ts.index.values))
    seconds = int(med.astype('timedelta64[s]').item().total_seconds())
    if seconds < 60:
        freq = '{}s'.format(seconds)
    elif seconds < 3600:
        freq = '{}T'.format(seconds//60)
    elif seconds < 86400:
        freq = '{}H'.format(seconds//3600)
    elif seconds < 604800:
        freq = '{}D'.format(seconds//86400)
    elif seconds < 2678400:
        freq = '{}W'.format(seconds//604800)
    elif seconds < 7948800:
        freq = '{}M'.format(seconds//2678400)
    else:
        freq = '{}Q'.format(seconds//7948800)
    return ts.asfreq(freq, method='ffill')

aapl = web.get_data_yahoo("AAPL")
print(infer_freq(aapl))
</code></pre>
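<p>Note that pandas also ships a frequency sniffer, <code>pd.infer_freq</code>; it only succeeds when the index is perfectly regular and returns <code>None</code> otherwise, so for irregular data like this the median-based approach above is still needed:</p>
<pre><code>print(pd.infer_freq(aapl.index))  # e.g. 'D' for regular daily data; None for this irregular index
</code></pre>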
|
python|pandas
| 5
|
5,184
| 20,390,227
|
Do I underestimate the power of NumPy.. again?
|
<p>I don't think I can optimize my function anymore, but it won't be my first time that I underestimate the power of NumPy.</p>
<p>Given: </p>
<ul>
<li>2 rank NumPy array with coordinates</li>
<li>1 rank NumPy array with elevation of each coordinate</li>
<li>Pandas DataFrame with stations</li>
</ul>
<p>Function:</p>
<pre><code>def Function(xy_coord):
    # Apply a KDTree search for (and select) 8 nearest stations
    dist_tree_real, ix_tree_real = tree.query(xy_coord, k=8, eps=0, p=1)
    df_sel = df.ix[ix_tree_real]
    # Fit a multi-linear regression to find coefficients
    M = np.vstack((np.ones(len(df_sel['POINT_X'])), df_sel['POINT_X'], df_sel['POINT_Y'], df_sel['Elev'])).T
    b1, b2, b3 = np.linalg.lstsq(M, df_sel['TEMP'])[0][1:4]
    # Compute IDW using the coefficients
    return sum((1/dist_tree_real)**2)**-1 * sum((df_sel['TEMP'] + (b1*(xy_coord[0] - df_sel['POINT_X'])) +
                                                 (b2*(xy_coord[1] - df_sel['POINT_Y'])) + (b3*(dem[index] - df_sel['Elev']))) *
                                                (1/dist_tree_real)**2)
</code></pre>
<p>And I apply the function on the coordinates as follow:</p>
<pre><code>for index, coord in enumerate(xy):
    outarr[index] = func(coord)
</code></pre>
<p>This is an iterative process; if I try <code>outarr = np.vectorize(func)(xy)</code> then Python crashes, so I guess that's something I should avoid doing.</p>
<p>I also prepared an IPython Notebook, so I could write LaTeX, something I've always dreamed of doing for a long time. Till now. The day has come. <a href="http://nbviewer.ipython.org/github/mattijn/pynotebook/blob/master/ipynotebooks/Untitled11.ipynb" rel="nofollow noreferrer">Yeah</a></p>
<hr>
<p>Off topic: the math won't show up in the nbviewer.. on my local machine it looks like this:</p>
<p><img src="https://i.stack.imgur.com/uR6eC.png" alt="LaTeX in my local IPython working"></p>
|
<p>My suggestion is: don't use the DataFrame for the calculation, use numpy arrays only. Here is the code:</p>
<pre><code>dist, idx = tree.query(xy, k=8, eps=0, p=1)
columns = ["POINT_X", "POINT_Y", "Elev", "TEMP"]
px, py, elev, tmp = df[columns].values.T[:, idx, None]
tmp = np.squeeze(tmp)
one = np.ones_like(px)
m = np.concatenate((one, px, py, elev), axis=-1)
mtm = np.einsum("ijx,ijy->ixy", m, m)
mty = np.einsum("ijx,ij->ix", m, tmp)
b1,b2,b3 = np.linalg.solve(mtm, mty)[:, 1:].T
px, py, elev = px.squeeze(), py.squeeze(), elev.squeeze()
b1 = b1[:,None]
b2 = b2[:,None]
b3 = b3[:,None]
rdist = (1/dist)**2
t0 = tmp + b1*(xy[:,0,None]-px) + b2*(xy[:,1,None]-py) + b3*(dem[:,None]-elev)
outarr = (t0*rdist).sum(1) / rdist.sum(1)
print outarr
</code></pre>
<p>output:</p>
<pre><code>[ -499.24287422 -540.28111668 -512.43789349 -589.75389439 -411.65598912
-233.1779803 -1249.63803291 -232.4924416 -273.3978919 -289.35240473]
</code></pre>
<p>There are some trick in the code:</p>
<ol>
<li><p><code>np.linalg.solve</code> in numpy 1.8 is a generalized ufunc that can solve many linear equations in one call, but <code>lstsq</code> is not. So I need to use <code>solve</code> to calculate the <code>lstsq</code> result (see the sketch below the list).</p></li>
<li><p>To do many matrix multiplications in one call, we can't use <code>dot</code>; <code>einsum()</code> does the trick, but I think it may be slower than <code>dot</code>. You can <code>timeit</code> it on your real data.</p></li>
</ol>
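<p>For reference, point 1 above is just the normal equations: for a single system the batched <code>mtm</code>/<code>mty</code> construction reduces to the line below (a sketch; for ill-conditioned <code>M</code>, <code>lstsq</code> is numerically safer):</p>
<pre><code>coef = np.linalg.solve(np.dot(M.T, M), np.dot(M.T, y))  # same solution as np.linalg.lstsq(M, y)[0]
</code></pre>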
|
python|arrays|numpy
| 1
|
5,185
| 33,183,292
|
Need Framework to handle Interactions between Redshift and python
|
<p>I am building a python application with a lot of interactions between Amazon Redshift and local python (sending queries to redshift, sending results to local etc...). My question is: what is the cleanest way to handle such interactions.</p>
<p>Currently, I am using <code>sqlalchemy</code> to load tables directly onto my local machine via <code>pandas.read_sql()</code>. But I am not sure this is very optimised or safe.</p>
<p>Would it be better to go through Amazon S3, and then bring back files with <code>boto</code>, to finally read them with <code>pandas.read_csv()</code>?</p>
<p>Finally, is there a better idea to handle such interactions, maybe not doing everything in Python?</p>
|
<p>You can look at the blaze ecosystem for ideas and libraries you might find useful: <a href="http://blaze.pydata.org" rel="nofollow">http://blaze.pydata.org</a></p>
<p>The blaze library itself lets you write queries at a high, pandas-like level, and then it translates the query to redshift (using SQLAlchemy): <a href="http://blaze.readthedocs.org/en/latest/index.html" rel="nofollow">http://blaze.readthedocs.org/en/latest/index.html</a></p>
<p>This may be too high-level for your purposes, and you might need more precise control over the behavior; still, it would let you keep the code similar regardless of how and when you move the data around. </p>
<p>The odo library can be used independently to copy from Redshift to S3 to local files and back. This can be used independently of the blaze library: <a href="http://odo.readthedocs.org/en/latest/" rel="nofollow">http://odo.readthedocs.org/en/latest/</a></p>
|
python|pandas|amazon-s3|sqlalchemy|amazon-redshift
| 3
|
5,186
| 66,445,068
|
Faster way to create pandas dataframe with many rows
|
<p>I am reading HDF5 files with large amounts of data. I want to store it in a dataframe (it will contain around 1.3e9 rows). For the moment I am using the following procedure:</p>
<pre><code>df = pd.DataFrame()
for key in ['Column1', 'Column2', 'Column3']:
    df[key] = np.array(h5assembly.get(key))
</code></pre>
<p>I have timed it and it takes ~110 seconds</p>
<p>If I just assign the values to numpy arrays, like this:</p>
<pre><code>v1 = np.array(h5assembly.get('Column1'))
v2 = np.array(h5assembly.get('Column2'))
v3 = np.array(h5assembly.get('Column3'))
</code></pre>
<p>It takes ~22 seconds.</p>
<p>Am I doing something wrong? Is it expected that the creation of the dataframe is so much slower? Is there any way to accelerate this process?</p>
|
<p>Yes, it is expected that a DataFrame will take longer than NumPy arrays, for various reasons that I won't list in full. It is partly due to the way NumPy allocates and frees memory, and partly because NumPy operations are implemented in C, a compiled language, which gives performance benefits.</p>
<p>An interesting comparison between pandas and Numpy performance may be seen here:
<a href="https://penandpants.com/2014/09/05/performance-of-pandas-series-vs-numpy-arrays/" rel="nofollow noreferrer">https://penandpants.com/2014/09/05/performance-of-pandas-series-vs-numpy-arrays/</a></p>
<p>A package that aims to speed up Pandas using parallelization is Modin: <a href="https://www.kdnuggets.com/2019/11/speed-up-pandas-4x.html" rel="nofollow noreferrer">https://www.kdnuggets.com/2019/11/speed-up-pandas-4x.html</a></p>
<p>There is also a package called 'PyPolars' which aims to work in a very similar way to Pandas with greater performance thanks to its implementation in Rust:
<a href="https://www.analyticsvidhya.com/blog/2021/02/is-pypolars-the-new-alternative-to-pandas/" rel="nofollow noreferrer">https://www.analyticsvidhya.com/blog/2021/02/is-pypolars-the-new-alternative-to-pandas/</a></p>
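<p>Before reaching for another library, one cheap thing to try is building the frame in a single constructor call instead of assigning columns one by one, which can skip some per-column overhead (a sketch based on the code in the question):</p>
<pre><code>df = pd.DataFrame({key: np.array(h5assembly.get(key))
                   for key in ['Column1', 'Column2', 'Column3']})
</code></pre>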
|
python|pandas|dataframe|hdf5
| 1
|
5,187
| 66,704,053
|
Seaborn Line Plot for plotting multiple parameters
|
<p>I have dataset as below,</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: center;">10_YR_CAGR</th>
<th style="text-align: center;">5_YR_CAGR</th>
<th style="text-align: center;">1_YR_CAGR</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">c1_rev</td>
<td style="text-align: center;">20.5</td>
<td style="text-align: center;">21.5</td>
<td style="text-align: center;">31.5</td>
</tr>
<tr>
<td style="text-align: left;">c2_rev</td>
<td style="text-align: center;">20.5</td>
<td style="text-align: center;">22.5</td>
<td style="text-align: center;">24</td>
</tr>
<tr>
<td style="text-align: left;">c3_rev</td>
<td style="text-align: center;">21</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">27</td>
</tr>
<tr>
<td style="text-align: left;">c4_rev</td>
<td style="text-align: center;">20</td>
<td style="text-align: center;">26</td>
<td style="text-align: center;">30</td>
</tr>
<tr>
<td style="text-align: left;">c5_rev</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">19</td>
<td style="text-align: center;">15</td>
</tr>
<tr>
<td style="text-align: left;">c1_eps</td>
<td style="text-align: center;">21</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">23</td>
</tr>
<tr>
<td style="text-align: left;">c2_eps</td>
<td style="text-align: center;">21</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">25</td>
</tr>
</tbody>
</table>
</div>
<p>This data has 5 companies and its parameters like rev, eps, profit etc. I need to plot as below:</p>
<p>rev:</p>
<ul>
<li>x_axis-> index_col c1_rev, ...c5_rev</li>
<li>y_axis -> 10_YR_CAGR .. 1_YR_CAGR</li>
</ul>
<p>eps:</p>
<ul>
<li>x_axis -> index_col: c1_eps,...c5_eps</li>
<li>y_axis -> 10_YR_CAGR,... 1_YR_CAGR</li>
</ul>
<p>etc...</p>
<p>I have tried with following code:</p>
<pre><code>eps = analysis_df[analysis_df.index.str.contains('eps',regex=True)]
for i1 in eps.columns[eps.columns!='index']:
    sns.lineplot(x="index", y=i1, data=eps, label=i1)
</code></pre>
<p>Currently I have to make a sub-dataframe from the source and then loop over it. How can I write a loop that works from the main source dataframe itself?</p>
<p>Instead of creating a separate loop for each parameter, how can I loop over the main source dataframe to create a grid of plots with parameters like rev, eps and profit as the facets? How do I apply those filters in a FacetGrid?</p>
<p>My sample output of the above code,</p>
<p><a href="https://i.stack.imgur.com/oSbV4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSbV4.png" alt="enter image description here" /></a></p>
<p>How to plot the same sort of plot for different parameters in a single for loop?</p>
|
<p>The way facets are typically plotted is by "melting" your <code>analysis_df</code> into id/variable/value columns.</p>
<ol>
<li><p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><strong><code>split()</code></strong></a> the <code>index</code> column into <code>Company</code> and <code>Parameter</code>, which we'll later use as id columns when melting:</p>
<pre class="lang-py prettyprint-override"><code>analysis_df[['Company', 'Parameter']] = analysis_df['index'].str.split('_', expand=True)
# index 10_YR_CAGR 5_YR_CAGR 1_YR_CAGR Company Parameter
# 0 c1_rev 100 21 1 c1 rev
# 1 c2_rev 1 32 24 c2 rev
# ...
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer"><strong><code>melt()</code></strong></a> the CAGR columns:</p>
<pre class="lang-py prettyprint-override"><code>melted = analysis_df.melt(
id_vars=['Company', 'Parameter'],
value_vars=['10_YR_CAGR', '5_YR_CAGR', '1_YR_CAGR'],
var_name='Period',
value_name='CAGR',
)
# Company Parameter Period CAGR
# 0 c1 rev 10_YR_CAGR 100
# 1 c2 rev 10_YR_CAGR 1
# 2 c3 rev 10_YR_CAGR 14
# 3 c1 eps 10_YR_CAGR 1
# ...
# 25 c2 pft 1_YR_CAGR 14
# 26 c3 pft 1_YR_CAGR 17
</code></pre>
</li>
<li><p><a href="https://seaborn.pydata.org/generated/seaborn.relplot.html" rel="nofollow noreferrer"><strong><code>relplot()</code></strong></a> <code>CAGR</code> vs <code>Company</code> (colored by <code>Period</code>) for each <code>Parameter</code> using the <code>melted</code> dataframe:</p>
<pre class="lang-py prettyprint-override"><code>sns.relplot(
data=melted,
kind='line',
col='Parameter',
x='Company',
y='CAGR',
hue='Period',
col_wrap=1,
facet_kws={'sharex': False, 'sharey': False},
)
</code></pre>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/2fOJJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2fOJJ.png" alt="relplot 3x1" /></a></p>
<p>Sample data to reproduce this plot:</p>
<pre class="lang-py prettyprint-override"><code>import io
import pandas as pd
csv = '''
index,10_YR_CAGR,5_YR_CAGR,1_YR_CAGR
c1_rev,100,21,1
c2_rev,1,32,24
c3_rev,14,23,7
c1_eps,1,20,50
c2_eps,21,20,25
c3_eps,31,20,37
c1_pft,20,1,10
c2_pft,25,20,14
c3_pft,11,55,17
'''
analysis_df = pd.read_csv(io.StringIO(csv))
</code></pre>
|
python|pandas|matplotlib|seaborn|facet-grid
| 2
|
5,188
| 57,583,927
|
What to use in place of pandas.Series.filter?
|
<h2>pandas -> cuDF</h2>
<p>Converting some python written for pandas to run on rapids</p>
<h3>pandas</h3>
<pre><code>temp=df_train.copy()
temp['buildingqualitytypeid']=temp['buildingqualitytypeid'].fillna(-1)
temp=temp.groupby("buildingqualitytypeid").filter(lambda x: x.buildingqualitytypeid.size > 3)
temp['buildingqualitytypeid'] = temp['buildingqualitytypeid'].replace(-1,np.nan)
print(temp.buildingqualitytypeid.isnull().sum())
print(temp.shape)
</code></pre>
<p>Does anyone know what to use in place of <code>pandas.Series.filter</code> for the same outcome in <code>cuDF</code>?</p>
|
<p>We're still working on <code>filter</code> functionality in <code>cudf</code>, but for now the following approach will implement many <code>filter</code>-like needs:</p>
<pre><code>df_train = pd.DataFrame({'buildingqualitytypeid': np.random.randint(0, 4, 12), 'value': np.arange(12)})
temp=df_train.copy()
temp['buildingqualitytypeid']=temp['buildingqualitytypeid'].fillna(-1)
gtemp=temp.groupby("buildingqualitytypeid").count()
gtemp=gtemp[gtemp['value'] > 3]
gtemp = gtemp.drop('value', axis=1)
gtemp = gtemp.merge(temp.reset_index(), on="buildingqualitytypeid")
gtemp = gtemp.sort_values('index')
gtemp.index = gtemp['index']
gtemp.index.name = None
gtemp = gtemp.drop('index', axis=1)
</code></pre>
<p>This can be completed considerably more simply if you don't need the <code>index</code> values.</p>
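<p>If your <code>cudf</code> version supports <code>groupby().transform</code> (treat this as an assumption to verify against your installed version; the pandas equivalent certainly works), a shorter filter-like pattern is:</p>
<pre><code>counts = temp.groupby('buildingqualitytypeid')['value'].transform('count')
temp = temp[counts > 3]  # keep rows whose group has more than 3 records
</code></pre>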
|
pandas|rapids|cudf
| 1
|
5,189
| 24,398,497
|
Pandas: How to stack time series into a dataframe with time columns?
|
<p>I have a pandas timeseries with minute tick data:</p>
<pre><code>2011-01-01 09:30:00 -0.358525
2011-01-01 09:31:00 -0.185970
2011-01-01 09:32:00 -0.357479
2011-01-01 09:33:00 -1.486157
2011-01-01 09:34:00 -1.101909
2011-01-01 09:35:00 -1.957380
2011-01-02 09:30:00 -0.489747
2011-01-02 09:31:00 -0.341163
2011-01-02 09:32:00 1.588071
2011-01-02 09:33:00 -0.146610
2011-01-02 09:34:00 -0.185834
2011-01-02 09:35:00 -0.872918
2011-01-03 09:30:00 0.682824
2011-01-03 09:31:00 -0.344875
2011-01-03 09:32:00 -0.641186
2011-01-03 09:33:00 -0.501414
2011-01-03 09:34:00 0.877347
2011-01-03 09:35:00 2.183530
</code></pre>
<p>What is the best way to stack it into a dataframe such as :</p>
<pre><code> 09:30:00 09:31:00 09:32:00 09:33:00 09:34:00 09:35:00
2011-01-01 -0.358525 -0.185970 -0.357479 -1.486157 -1.101909 -1.957380
2011-01-02 -0.489747 -0.341163 1.588071 -0.146610 -0.185834 -0.872918
2011-01-03 0.682824 -0.344875 -0.641186 -0.501414 0.877347 2.183530
</code></pre>
|
<p>I'd make sure that this is actually want you want to do, as the resulting df loses a lot of the nice time-series functionality that pandas has.</p>
<p>But here is some code that would accomplish it. First, a time column is added, and the index is set to just the date part of the DateTimeIndex. The <code>pivot</code> command reshapes the data, setting the times as columns.</p>
<pre><code>In [74]: df.head()
Out[74]:
value
date
2011-01-01 09:30:00 -0.358525
2011-01-01 09:31:00 -0.185970
2011-01-01 09:32:00 -0.357479
2011-01-01 09:33:00 -1.486157
2011-01-01 09:34:00 -1.101909
In [75]: df['time'] = df.index.time
In [76]: df.index = df.index.date
In [77]: df2 = df.pivot(index=df.index, columns='time')
</code></pre>
<p>The results dataframe will have a MultiIndex for the columns (the top level just being the name of your values variable). If you want it back to just a list of columns, the code below will flatten the column list.</p>
<pre><code>In [78]: df2.columns = [c for (_, c) in df2.columns]
In [79]: df2
Out[79]:
09:30:00 09:31:00 09:32:00 09:33:00 09:34:00 09:35:00
2011-01-01 -0.358525 -0.185970 -0.357479 -1.486157 -1.101909 -1.957380
2011-01-02 -0.489747 -0.341163 1.588071 -0.146610 -0.185834 -0.872918
2011-01-03 0.682824 -0.344875 -0.641186 -0.501414 0.877347 2.183530
</code></pre>
|
python-2.7|pandas|time-series|dataframe
| 1
|
5,190
| 73,056,782
|
Adding legend of graph to data-frame plot
|
<p>I want to add a legend for the blue vertical dashed lines and black vertical dashed lines with label long entry points and short entry points respectively. The other two lines (benchmark and manual strategy portfolio) came from the dataframe.</p>
<p>How do I add a legend for the two vertical line styles?</p>
<p>Here is my existing code and the corresponding graph. The dataframe is a two column dataframe of values that share date indices (the x) and have y values. The <code>blue_x_coords</code> and <code>black_x_coords</code> are the date indices for the vertical lines, as you would expect. Thanks in advance!</p>
<pre class="lang-py prettyprint-override"><code>ax = df.plot(title=title, fontsize=12, color=["tab:purple", "tab:red"])
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
for xc in blue_x_coords:
    plt.axvline(x=xc, color="blue", linestyle="dashed", label="Long Entry points")
for xc in black_x_coords:
    plt.axvline(x=xc, color="black", linestyle="dashed", label="Short Entry points")
plt.savefig("./images/" + filename)
plt.clf()
</code></pre>
<p><a href="https://i.stack.imgur.com/D9h6q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D9h6q.png" alt="enter image description here" /></a></p>
|
<p>Answer: Seems like the easiest way is to replace the for loops:</p>
<pre><code>bottom, top = ax.get_ylim()  # vlines needs explicit y-limits
ax.vlines(x=blue_x_coords, colors="blue", ymin=bottom, ymax=top, linestyles="--", label="Long Entry Points")
ax.vlines(x=black_x_coords, colors="black", ymin=bottom, ymax=top, linestyles="--", label="Short Entry Points")
ax.legend()
</code></pre>
|
python|pandas|matplotlib
| 0
|
5,191
| 73,035,839
|
How to save the variables as different files in a for loop?
|
<p>I have a list of csv file pathnames in a list, and I am trying to save them as dataframes. How can I do it?</p>
<pre><code>import pandas as pd
import os
import glob
# use glob to get all the csv files
# in the folder
path = "/Users/azmath/Library/CloudStorage/OneDrive-Personal/Projects/LESA/2022 HY/All"
csv_files = glob.glob(os.path.join(path, "*.xlsx"))
# loop over the list of csv files
for f in csv_files:
    # read each Excel file into a dataframe
    df = pd.read_excel(f)
    display(df)
    print()
</code></pre>
<p>The issue is that it only prints, but I don't know how to save. I would like to save all the data frames as variables, preferably named after their file names.</p>
|
<p>By “save” I think you mean store dataframes in variables. I would use a dictionary for this instead of separate variables.</p>
<pre><code>import os
data = {}
for f in csv_files:
    name = os.path.basename(f)
    # read the Excel file into the dictionary under its file name
    data[name] = pd.read_excel(f)
    display(data[name])
    print()
</code></pre>
<p>Now all your dataframes are stored in the <code>data</code> dictionary, where you can iterate over them (and easily handle all of them together if needed). Their key in the dictionary is the basename (filename) of the input file.</p>
<p>Recall that dictionaries remember insertion order, so the order in which the files were inserted is preserved. I'd recommend sorting the input files before parsing, as shown below; this way you get a reproducible script and sequence of operations!</p>
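<p>Sorting is a one-line change to the loop header, for example:</p>
<pre><code>for f in sorted(csv_files):  # deterministic, reproducible order
    data[os.path.basename(f)] = pd.read_excel(f)
</code></pre>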
|
python|pandas|dataframe
| 2
|
5,192
| 73,052,176
|
Efficient Pandas Row Iteration for comparison
|
<p>I have a large Dataframe based on market data from the online game EVE.
I'm trying to determine the most profitable trades based on the price of the buy or sell order of an item.
I've found that it takes quite a while to loop through all the possibilities and would like some advice on how to make my code more efficient.</p>
<p>data = <a href="https://market.fuzzwork.co.uk/orderbooks/latest.csv.gz" rel="nofollow noreferrer">https://market.fuzzwork.co.uk/orderbooks/latest.csv.gz</a></p>
<p>SETUP:</p>
<pre><code>import pandas as pd
df = pd.read_csv('latest.csv', sep='\t', names=["orderID","typeID","issued","buy","volume","volumeEntered","minVolume","price","stationID","range","duration","region","orderSet"])
</code></pre>
<p><strong>Iterate through all the possibilites</strong></p>
<pre><code>buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
profitable_trade = []
for i in buy_order.index:
    for j in sell_order.index:
        if buy_order.loc[i, 'price'] > sell_order.loc[j, 'price']:
            profitable_trade.append(buy_order.loc[i, ['typeID', 'orderID', 'price', 'volume', 'stationID', 'range']].tolist() + sell_order.loc[j, ['orderID', 'price', 'volume', 'stationID', 'range']].tolist())
</code></pre>
<p>This takes quite a long time (33s on a ryzen 2600x, 12s on an M1 Pro)</p>
<p><strong>Shorten the iteration</strong></p>
<pre><code>buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
buy_order.sort_values(by='price', ascending=False, inplace=True, ignore_index=True)
sell_order.sort_values(by='price', ascending=True, inplace=True, ignore_index=True)
profitable_trade2 = []
for i in buy_order.index:
    if buy_order.loc[i, 'price'] > sell_order.price.min():
        for j in sell_order.index:
            if buy_order.loc[i, 'price'] > sell_order.loc[j, 'price']:
                profitable_trade2.append(buy_order.loc[i, ['typeID', 'orderID', 'price', 'volume', 'stationID', 'range']].tolist() + sell_order.loc[j, ['orderID', 'price', 'volume', 'stationID', 'range']].tolist())
            else:
                break
    else:
        break
</code></pre>
<p>This shaves about 25%-30% off the time (23s on 2600x, 9s on the M1 Pro)</p>
<p><em>Times have been recorded in a Jupyter Notebook</em></p>
<p><strong>Any Tips are welcome!</strong></p>
|
<p>Many thanks to @daniel.fehrenbacher for the explanation and suggestions.</p>
<p>In addition to his options, I've found a few myself using this article:
<a href="https://towardsdatascience.com/heres-the-most-efficient-way-to-iterate-through-your-pandas-dataframe-4dad88ac92ee#" rel="nofollow noreferrer">https://towardsdatascience.com/heres-the-most-efficient-way-to-iterate-through-your-pandas-dataframe-4dad88ac92ee#</a>:</p>
<p><strong>TL;DR</strong></p>
<ul>
<li>Don't use <code>tolist()</code></li>
<li>Filter operation isn't always better, depends on the iteration method</li>
<li>There are much faster iteration methods than a regular for loop, or even <code>iterrows()</code>: use dictionary iteration</li>
</ul>
<h2><strong>Use of <code>.tolist()</code> is detrimental</strong></h2>
<p>As mentioned in the answer above, <code>.tolist()</code> costs too much time. It's much faster to use <code>append([item1, item2, item3...])</code> than <code>append(row[['item1', 'item2', 'item3'...]].tolist())</code>.</p>
<ul>
<li><code>tolist()</code>: <strong>19.2s</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)]
sell_order = df[(df.typeID == 34) & (df.buy == False)]
profitable_trade = []
for _, buy in buy_order.iterrows():
    filtered_sell_orders = sell_order[sell_order["price"] < buy["price"]]
    for _, sell in filtered_sell_orders.iterrows():
        profitable_trade.append(buy[['typeID', 'orderID', 'price', 'volume', 'stationID', 'range']].tolist() + sell[['orderID', 'price', 'volume', 'stationID', 'range']].tolist())
</code></pre>
<ul>
<li><code>append([item1, item2])</code>: <strong>3.5s</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)]
sell_order = df[(df.typeID == 34) & (df.buy == False)]
profitable_trade = []
for _, buy in buy_order.iterrows():
    filtered_sell_orders = sell_order[sell_order["price"] < buy["price"]]
    for _, sell in filtered_sell_orders.iterrows():
        profitable_trade.append([
            buy.typeID,
            buy.orderID,
            buy.price,
            buy.volume,
            buy.stationID,
            buy.range,
            sell.orderID,
            sell.price,
            sell.volume,
            sell.stationID,
            sell.range
        ])
</code></pre>
<h2><strong>Filtering Operation VS <code>break</code></strong></h2>
<p>While the single filtering operation has a slight efficiency increase when you use <code>.iterrows()</code>, I've found it is the opposite when you use the better <code>.itertuples()</code>.</p>
<ul>
<li><code>iterrows()</code> with filter operation: <strong>3.26s</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)]
sell_order = df[(df.typeID == 34) & (df.buy == False)]
profitable_trade = []
for _, row_buy in buy_order.iterrows():
    filtered_sell_orders = sell_order[sell_order["price"] < row_buy.price]
    for _, row_sell in filtered_sell_orders.iterrows():
        profitable_trade.append([
            row_buy.typeID,
            row_buy.orderID,
            row_buy.price,
            row_buy.volume,
            row_buy.stationID,
            row_buy.range,
            row_sell.orderID,
            row_sell.price,
            row_sell.volume,
            row_sell.stationID,
            row_sell.range
        ])
</code></pre>
<ul>
<li><code>iterrows()</code> with <code>break</code> statements: <strong>3.77s</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
buy_order.sort_values(by='price', ascending=False, inplace=True, ignore_index=True)
sell_order.sort_values(by='price', ascending=True, inplace=True, ignore_index=True)
profitable_trade3 = []
lowest_sell = sell_order.price.min()
for _, row_buy in buy_order.iterrows():
    if row_buy.price > lowest_sell:
        for _, row_sell in sell_order.iterrows():
            if row_buy.price > row_sell.price:
                profitable_trade3.append([
                    row_buy.typeID,
                    row_buy.orderID,
                    row_buy.price,
                    row_buy.volume,
                    row_buy.stationID,
                    row_buy.range,
                    row_sell.orderID,
                    row_sell.price,
                    row_sell.volume,
                    row_sell.stationID,
                    row_sell.range
                ])
            else:
                break
    else:
        break
</code></pre>
<ul>
<li><code>itertuples</code> with filter operation: <strong>650ms</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)]
sell_order = df[(df.typeID == 34) & (df.buy == False)]
profitable_trade = []
for row_buy in buy_order.itertuples():
    filtered_sell_orders = sell_order[sell_order["price"] < row_buy.price]
    for row_sell in filtered_sell_orders.itertuples():
        profitable_trade.append([
            row_buy.typeID,
            row_buy.orderID,
            row_buy.price,
            row_buy.volume,
            row_buy.stationID,
            row_buy.range,
            row_sell.orderID,
            row_sell.price,
            row_sell.volume,
            row_sell.stationID,
            row_sell.range
        ])
</code></pre>
<ul>
<li><code>itertuples</code> with <code>break</code> statement: <strong>375ms</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
buy_order.sort_values(by='price', ascending=False, inplace=True, ignore_index=True)
sell_order.sort_values(by='price', ascending=True, inplace=True, ignore_index=True)
profitable_trade3 = []
lowest_sell = sell_order.price.min()
for row_buy in buy_order.itertuples():
    if row_buy.price > lowest_sell:
        for row_sell in sell_order.itertuples():
            if row_buy.price > row_sell.price:
                profitable_trade3.append([
                    row_buy.typeID,
                    row_buy.orderID,
                    row_buy.price,
                    row_buy.volume,
                    row_buy.stationID,
                    row_buy.range,
                    row_sell.orderID,
                    row_sell.price,
                    row_sell.volume,
                    row_sell.stationID,
                    row_sell.range
                ])
            else:
                break
    else:
        break
</code></pre>
<h2><strong>Better iteration methods</strong></h2>
<ul>
<li><code>itertuples</code> (see above): <strong>375ms</strong></li>
<li>Numpy Iteration Method (<code>df.values</code>): <strong>200ms</strong></li>
</ul>
<pre><code>buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
buy_order.sort_values(by='price', ascending=False, inplace=True, ignore_index=True)
sell_order.sort_values(by='price', ascending=True, inplace=True, ignore_index=True)
profitable_trade4 = []
lowest_sell = sell_order.price.min()
for row_buy in buy_order.values:
    if row_buy[7] > lowest_sell:               # column 7 is 'price'
        for row_sell in sell_order.values:
            if row_buy[7] > row_sell[7]:
                profitable_trade4.append([
                    row_buy[1],
                    row_buy[0],
                    row_buy[7],
                    row_buy[4],
                    row_buy[8],
                    row_buy[9],
                    row_sell[0],
                    row_sell[7],
                    row_sell[4],
                    row_sell[8],
                    row_sell[9]
                ])
            else:
                break
    else:
        break
</code></pre>
<ul>
<li>Dictionary Iteration (<code>df.to_dict('records')</code>): <strong>78ms</strong></li>
</ul>
<pre><code>%%time
buy_order = df[(df.typeID == 34) & (df.buy == True)].copy()
sell_order = df[(df.typeID == 34) & (df.buy == False)].copy()
buy_order.sort_values(by='price', ascending=False, inplace=True, ignore_index=True)
sell_order.sort_values(by='price', ascending=True, inplace=True, ignore_index=True)
profitable_trade5 = []
buy_dict = buy_order.to_dict('records')
sell_dict = sell_order.to_dict('records')
lowest_sell = sell_order.price.min()
for row_buy in buy_dict:
    if row_buy['price'] > lowest_sell:
        for row_sell in sell_dict:
            if row_buy['price'] > row_sell['price']:
                profitable_trade5.append([
                    row_buy['typeID'],
                    row_buy['orderID'],
                    row_buy['price'],
                    row_buy['volume'],
                    row_buy['stationID'],
                    row_buy['range'],
                    row_sell['orderID'],
                    row_sell['price'],
                    row_sell['volume'],
                    row_sell['stationID'],
                    row_sell['range']
                ])
            else:
                break
    else:
        break
</code></pre>
|
python|pandas|loops
| 0
|
5,193
| 70,577,039
|
Lists of PyTorch Lightning sub-models don't get transferred to GPU
|
<p>When using PyTorch Lightning on CPU, everything works fine. However when using GPUs, I get a <code>RuntimeError: Expected all tensors to be on the same device</code>.</p>
<p>It seems that the trouble comes from the model using a list of sub-models which don't get passed to the GPU:</p>
<pre class="lang-py prettyprint-override"><code>class LambdaLayer(LightningModule):
    def __init__(self, fun):
        super(LambdaLayer, self).__init__()
        self.fun = fun

    def forward(self, x):
        return self.fun(x)
class TorchModel(LightningModule):
    def __init__(self):
        super(TorchModel, self).__init__()
        self.cat_layers = [TorchCatEmbedding(cat) for cat in columns_to_embed]
        self.num_layers = [LambdaLayer(lambda x: x[:, idx:idx+1]) for _, idx in numeric_columns]
        self.ffo = TorchFFO(len(self.num_layers) + sum([embed_dim(l) for l in self.cat_layers]), y.shape[1])
        self.softmax = torch.nn.Softmax(dim=1)

model = TorchModel()
trainer = Trainer(gpus=-1)
</code></pre>
<p>Before running <code>trainer(model)</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> model.device
device(type='cpu')
>>> model.ffo.device
device(type='cpu')
>>> model.cat_layers[0].device
device(type='cpu')
</code></pre>
<p>After running <code>trainer(model)</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> model.device
device(type='cuda', index=0) # <---- correct
>>> model.ffo.device
device(type='cuda', index=0) # <---- correct
>>> model.cat_layers[0].device
device(type='cpu') # <---- still showing 'cpu'
</code></pre>
<p>Apparently, PyTorch Lightning is not able to transfer the lists of sub-models to the GPU. How to proceed so that the entire model, including list of sub-models (<code>cat_layers</code> and <code>num_layers</code>) is transferred to the GPU?</p>
|
<p>Submodules contained in plain Python lists are not registered, so they don't get transferred to the device as is.
You need to use <a href="https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html" rel="nofollow noreferrer">ModuleList</a> instead, i.e.:</p>
<pre><code>...
from torch.nn import ModuleList
...
class TorchModel(LightningModule):
    def __init__(self):
        super(TorchModel, self).__init__()
        self.cat_layers = ModuleList([TorchCatEmbedding(cat) for cat in columns_to_embed])
        # idx=idx binds the current index to each lambda; a bare closure would capture only the last idx
        self.num_layers = ModuleList([LambdaLayer(lambda x, idx=idx: x[:, idx:idx+1]) for _, idx in numeric_columns])
        self.ffo = TorchFFO(len(self.num_layers) + sum([embed_dim(l) for l in self.cat_layers]), y.shape[1])
        self.softmax = torch.nn.Softmax(dim=1)
</code></pre>
<p>edit: I'm not sure what the Lightning equivalent is, or if one such exists, see also <a href="https://stackoverflow.com/questions/70312071/pytorch-lightning-lightningmodule-for-modulelist-moduledict">PyTorch Lightning - LightningModule for ModuleList / ModuleDict?</a></p>
|
python|pytorch|gpu|pytorch-lightning
| 3
|
5,194
| 42,889,987
|
Get max value in third dimension from an array of shape (10, 21, 2) to get a final array of (10,21)
|
<p>I have a state-value matrix as follows:</p>
<pre><code>in[]: q
out[]: array([[[ 1.78571429e-01, 4.00000000e-01],
[ 2.92307692e-01, 3.91304348e-01],
[ 3.93939394e-01, 4.21052632e-01],
[ 4.41176471e-01, 2.83916084e-01],
[ 1.48148148e-01, -8.08080808e-02],
[ 4.08450704e-01, 2.94117647e-01],
[ 1.87500000e-01, 4.34782609e-02],
[ 3.05555556e-01, 4.60000000e-01],
[ 3.75000000e-01, -6.66666667e-02],
[ 2.23880597e-01, 5.00000000e-01],
[ 2.41379310e-01, 5.00000000e-01],
[ 5.00000000e-01, -6.66666667e-02],
[ 8.33333333e-02, 3.68421053e-01],
[ 5.00000000e-01, 1.80555556e-01],
[ 4.00000000e-01, 3.84615385e-01],
[ 2.94117647e-01, 2.13615023e-01],
[ 4.60000000e-01, 1.25000000e-01],
[ 7.46031746e-01, 3.58024691e-01],
[ 1.00000000e+00, 1.59420290e-01],
[ 1.00000000e+00, -6.66666667e-02],
[ 0.00000000e+00, 0.00000000e+00]],
[[ 4.68750000e-01, 1.48642430e-01],
[ 3.33333333e-02, 2.81004710e-01],
[ 6.25000000e-02, 2.09302326e-01],
[ 3.44262295e-01, 2.83582090e-01],
[ 3.60000000e-01, -1.13235294e-01],
[ 2.94117647e-01, 3.51351351e-01],
[ 2.54237288e-01, 1.25000000e-01],
[ 1.90476190e-01, 7.50000000e-01],
[ 2.83018868e-01, 4.54545455e-01],
[ 2.09302326e-01, 4.25499232e-01],
[ 3.33333333e-01, 1.25000000e-01],
[ 4.16666667e-01, 3.66666667e-01],
[ 2.00000000e-01, 4.25499232e-01],
[ -1.00000000e-01, 6.66666667e-02],
[ 5.23809524e-01, 6.00000000e-01],
[ 1.25000000e-01, -6.97478992e-01],
[ 1.44444444e-01, 1.44444444e-01],
[ -3.33333333e-01, 7.50000000e-01],
[ 1.00000000e+00, -1.00000000e+00],
[ 1.00000000e+00, 4.54545455e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ 2.28070175e-01, 3.23076923e-01],
[ 3.23076923e-01, 2.61538462e-01],
[ 3.75000000e-01, 2.09176788e-01],
[ 6.12244898e-02, 3.33333333e-01],
[ 2.32876712e-01, 2.25464191e-01],
[ 2.42424242e-01, 8.04597701e-02],
[ 2.64705882e-01, 1.91394777e-01],
[ 2.25806452e-01, 2.09176788e-01],
[ 4.76923077e-01, 5.55555556e-02],
[ 8.88888889e-02, 1.42857143e-01],
[ 2.41379310e-01, 2.06171108e-01],
[ 3.33333333e-01, 1.00000000e+00],
[ 2.30769231e-01, 2.00000000e-01],
[ 2.72727273e-01, 4.08163265e-02],
[ 2.00000000e-01, 2.22079589e-01],
[ -6.66666667e-02, 2.22079589e-01],
[ 5.55555556e-02, -1.00000000e+00],
[ 6.66666667e-01, 4.08163265e-02],
[ 6.36363636e-01, -9.09090909e-02],
[ 1.00000000e+00, 2.14285714e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ 2.18750000e-01, 7.14285714e-02],
[ 5.26315789e-02, 2.13921902e-01],
[ 2.00000000e-01, 2.08888889e-01],
[ -1.61290323e-01, 9.72222222e-01],
[ -1.01449275e-01, 1.70568562e-01],
[ 1.27272727e-01, 1.04370891e-01],
[ 2.46753247e-01, 8.33333333e-02],
[ 2.35294118e-01, 1.66666667e-01],
[ 2.77777778e-02, 2.30769231e-01],
[ -9.67741935e-02, 2.30769231e-01],
[ 3.57142857e-01, -1.52941176e-01],
[ 2.38095238e-01, -9.09090909e-02],
[ 2.63157895e-01, 1.81818182e-01],
[ 8.33333333e-02, 1.00000000e+00],
[ -4.76190476e-02, -1.11111111e-01],
[ 2.30769231e-01, 1.81818182e-01],
[ -1.52941176e-01, 1.30434783e-01],
[ 6.92307692e-01, 2.50000000e-01],
[ 6.66666667e-01, -1.11111111e-01],
[ 1.00000000e+00, 1.31313131e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ 1.60000000e-01, 1.64179104e-01],
[ -3.22580645e-02, 1.49253731e-02],
[ 4.11764706e-01, 2.54237288e-01],
[ -1.44927536e-02, 1.30434783e-01],
[ 1.00000000e-01, -5.08474576e-02],
[ 1.50684932e-01, -1.42857143e-01],
[ -7.93650794e-02, 1.50684932e-01],
[ 2.66666667e-01, 1.66666667e-01],
[ 1.66666667e-01, -4.54545455e-02],
[ 1.42857143e-01, -4.54545455e-02],
[ 1.30434783e-01, 1.81818182e-01],
[ 5.88235294e-02, 1.66666667e-01],
[ -9.09090909e-02, 1.53846154e-01],
[ 2.50000000e-01, -1.42857143e-01],
[ -7.69230769e-02, 1.50684932e-01],
[ 1.76470588e-01, -2.65432099e-01],
[ 2.50000000e-01, -4.54545455e-02],
[ 1.37500000e-01, 2.41379310e-01],
[ 1.00000000e+00, 1.81818182e-01],
[ 6.00000000e-01, 1.81818182e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ -6.06060606e-02, -1.85185185e-01],
[ 1.66666667e-01, -5.26315789e-02],
[ 1.86440678e-01, 3.44827586e-02],
[ -1.61290323e-01, 3.91304348e-01],
[ 3.84615385e-02, 2.00000000e-01],
[ -9.80392157e-02, 1.80645161e-01],
[ 1.11111111e-01, -1.61290323e-01],
[ 1.52542373e-01, 8.18181818e-01],
[ 2.94117647e-02, 5.29411765e-01],
[ 1.03448276e-01, -9.80392157e-02],
[ 3.91304348e-01, 8.00000000e-01],
[ -2.77555756e-17, -1.00000000e+00],
[ 3.44827586e-02, -1.05820106e-01],
[ 2.00000000e-01, -4.34782609e-02],
[ 5.29411765e-01, 8.18181818e-01],
[ 2.72727273e-01, 4.97737557e-02],
[ 1.11111111e-01, -1.86440678e-01],
[ 6.14035088e-01, -1.05820106e-01],
[ 8.18181818e-01, -3.70370370e-02],
[ 1.00000000e+00, -1.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00]],
[[ -1.22807018e-01, 7.24637681e-02],
[ -7.93650794e-02, 1.21212121e-01],
[ -7.69230769e-02, 6.66666667e-01],
[ 1.34328358e-01, 2.05479452e-01],
[ 7.24637681e-02, 1.00000000e-01],
[ -1.42857143e-01, -2.14285714e-01],
[ 2.28070175e-01, 4.76190476e-02],
[ 4.76190476e-02, 5.56708673e-01],
[ 1.66666667e-01, -2.14285714e-01],
[ -2.11267606e-01, 5.05912620e-01],
[ 1.00000000e-01, 5.68627451e-01],
[ 2.50000000e-01, 1.00000000e+00],
[ -3.33333333e-01, 5.56708673e-01],
[ -3.04347826e-01, 5.64705882e-01],
[ 0.00000000e+00, -5.15570934e-01],
[ 1.66666667e-01, 0.00000000e+00],
[ 5.05912620e-01, 5.68627451e-01],
[ 5.78947368e-01, 5.05912620e-01],
[ 1.00000000e+00, 7.86599022e-02],
[ 1.00000000e+00, -5.42483660e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ -1.91489362e-01, -2.90322581e-01],
[ -2.04081633e-02, -1.32231405e-01],
[ -4.10958904e-02, -6.56790123e-02],
[ -2.92307692e-01, -5.67733990e-01],
[ -3.12500000e-02, -2.94117647e-01],
[ -2.90322581e-01, -2.25806452e-01],
[ -4.54545455e-01, -5.26315789e-02],
[ -1.29032258e-01, -1.52542373e-01],
[ -7.50000000e-02, -9.23629929e-02],
[ -7.46268657e-02, -2.91666667e-01],
[ -2.25806452e-01, -1.52542373e-01],
[ -2.94117647e-01, -2.91666667e-01],
[ 5.88235294e-02, -1.97090909e-01],
[ 5.88235294e-02, -5.67733990e-01],
[ -1.42857143e-01, -1.63636364e-01],
[ -5.26315789e-02, -4.73170732e-01],
[ -2.91666667e-01, -1.57894737e-01],
[ -2.50000000e-01, 8.18181818e-01],
[ 1.00000000e+00, -5.34482759e-01],
[ 1.00000000e+00, -1.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00]],
[[ 8.77192982e-02, -3.63636364e-01],
[ 1.42857143e-01, -5.71428571e-02],
[ -5.71428571e-02, -1.72167707e-01],
[ -3.33333333e-01, -7.24637681e-02],
[ -3.23076923e-01, -2.00595238e-01],
[ -2.18750000e-01, -8.33333333e-02],
[ -2.50000000e-01, 1.11111111e-01],
[ -3.51351351e-01, -7.24637681e-02],
[ -2.20338983e-01, -2.00595238e-01],
[ -3.23529412e-01, 1.06227106e-01],
[ -1.20000000e-01, -1.87500000e-01],
[ -5.20000000e-01, -1.42857143e-01],
[ -4.54545455e-01, -2.00595238e-01],
[ -1.42857143e-01, 1.06227106e-01],
[ -1.42857143e-01, -2.00595238e-01],
[ 1.11111111e-01, 1.98412698e-01],
[ -1.17647059e-01, 1.06227106e-01],
[ 1.06227106e-01, -4.00000000e-01],
[ 8.57142857e-01, -1.87500000e-01],
[ 6.92307692e-01, -1.11111111e-01],
[ 0.00000000e+00, 0.00000000e+00]],
[[ -6.12244898e-02, -3.06122449e-01],
[ -3.33333333e-01, -2.72727273e-01],
[ -2.78688525e-01, -3.23936961e-01],
[ -9.43396226e-02, -3.20000000e-01],
[ 6.06060606e-02, -2.80000000e-01],
[ -2.69841270e-01, -2.38095238e-01],
[ -3.20000000e-01, -3.20000000e-01],
[ -2.00000000e-01, 1.40000000e-01],
[ -3.82352941e-01, 2.26190476e-01],
[ -2.63157895e-01, -2.98245614e-01],
[ -2.72727273e-01, 1.40000000e-01],
[ -3.68421053e-01, 1.40000000e-01],
[ -3.00000000e-01, -1.09230324e-02],
[ -3.75000000e-01, 1.40000000e-01],
[ -2.00000000e-01, 2.77555756e-17],
[ 2.00000000e-01, -5.86428571e-01],
[ 2.95138889e-02, -3.26923077e-01],
[ 2.26190476e-01, 2.26190476e-01],
[ 5.00000000e-02, 3.33333333e-01],
[ 5.00000000e-01, -2.18750000e-01],
[ 0.00000000e+00, 0.00000000e+00]]])
</code></pre>
<p>I want to find the element-wise max of the corresponding values from q[:,:,0] and q[:,:,1] to obtain a resulting matrix of shape (10, 21).</p>
|
<p>You can use <code>numpy.amax</code>, which returns the maximum of an array or the maximum along an axis.</p>
<p>Example (your case):</p>
<p>In: <code>np.amax(np.array([[[1,2],[5,6]],[[10,11],[15,16]]]), 2)</code></p>
<p>Out:
<code>array([[ 2, 6], [11, 16]])</code></p>
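<p>Applied to the array in the question, a quick sketch:</p>
<pre><code>result = np.amax(q, axis=2)  # equivalently q.max(axis=2)
print(result.shape)          # (10, 21)
</code></pre>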
|
python|arrays|numpy
| 1
|
5,195
| 42,909,783
|
Python pandas - select rows based on groupby
|
<p>I have a sample table like this:</p>
<p>Dataframe: df</p>
<pre><code>Col1 Col2 Col3 Col4
A 1 10 i
A 1 11 k
A 1 12 a
A 2 10 w
A 2 11 e
B 1 15 s
B 1 16 d
B 2 21 w
B 2 25 e
B 2 36 q
C 1 23 a
C 1 24 b
</code></pre>
<p>I'm trying to get all records/rows of the (Col1, Col2) group that has the smaller number of records within each Col1, while skipping Col1 values that have only one such group (in this example Col1 = 'C'). So, the output would be as follows:</p>
<pre><code>A 2 10 w
A 2 11 e
B 1 15 s
B 1 16 d
</code></pre>
<p>since group (A,2) has 2 records compared to group (A,1) which has 3 records.</p>
<p>I tried to approach this issue from different angles but just can't seem to get the result that I need. I am able to find the groups that I need using a combination of groupby, filter and agg but how do I now use this as a select filter on df? After spending a lot of time on this, I wasn't even sure that the approach was correct as it looked overly complicated. I am sure that there is an elegant solution but I just can't see it.
Any advice on how to approach this would be greatly appreciated.</p>
<p>I had this to get the groups for which I wanted the rows displayed:</p>
<pre><code>groups = df.groupby(["Col1", "Col2"])["Col2"].agg({'no': 'count'})
filteredGroups = groups.groupby(level=0).filter(lambda group: group.size > 1)
print filteredGroups.groupby(level=0).agg('idxmin')
</code></pre>
<p>The second line was to account for groups that may have only one record, as I don't want to consider those. Honestly, I tried so many variations and approaches that eventually did not give me the result I wanted. I see that none of the answers are one-liners, so at least I don't feel like I was overthinking the problem.</p>
|
<pre><code>df['sz'] = df.groupby(['Col1','Col2'])['Col3'].transform("size")
df['rnk'] = df.groupby('Col1')['sz'].rank(method='min')
df['rnk_rev'] = df.groupby('Col1')['sz'].rank(method='min',ascending=False)
df.loc[ (df['rnk'] == 1.0) & (df['rnk_rev'] != 1.0) ]
Col1 Col2 Col3 Col4 sz rnk rnk_rev
3 A 2 10 w 2 1.0 4.0
4 A 2 11 e 2 1.0 4.0
5 B 1 15 s 2 1.0 4.0
6 B 1 16 d 2 1.0 4.0
</code></pre>
<p>Edit: changed "count" to "size" (as in @Marco Spinaci's answer) which doesn't matter in this example but might if there were missing values. </p>
<p>And for clarity, here's what the df looks like before dropping the selected rows.</p>
<pre><code> Col1 Col2 Col3 Col4 sz rnk rnk_rev
0 A 1 10 i 3 3.0 1.0
1 A 1 11 k 3 3.0 1.0
2 A 1 12 a 3 3.0 1.0
3 A 2 10 w 2 1.0 4.0
4 A 2 11 e 2 1.0 4.0
5 B 1 15 s 2 1.0 4.0
6 B 1 16 d 2 1.0 4.0
7 B 2 21 w 3 3.0 1.0
8 B 2 25 e 3 3.0 1.0
9 B 2 36 q 3 3.0 1.0
10 C 1 23 a 2 1.0 1.0
11 C 1 24 b 2 1.0 1.0
</code></pre>
|
python|pandas|group-by
| 4
|
5,196
| 27,385,948
|
Get list of headers for a given element python
|
<p>I have a CSV file like below.</p>
<p><img src="https://i.stack.imgur.com/RvM4M.png" alt="Data.csv"> </p>
<p>I'm trying to get the list (not necessarily a Python list structure, but basically all the instances in whatever format) of headers when an ID (an element from column 1) is given as input. For example, if I give A1 as input, it should return <code>Head1,Head2,Head4</code>. Similarly, for C1 it'd be <code>Head1,Head3</code>, and the same for the rest of the IDs. I can get a list of headers irrespective of the ID by using <code>list(data.columns.values)</code>, but I now need to do it with respect to IDs. I really appreciate any help. </p>
<p>EDIT-
I need the headers and not the row values. By headers I mean, if I give A1 as input, it should return me <code>Head1,Head2,Head4</code> and not <code>2,.,NA</code>.</p>
|
<p>If the ID column (here called <code>Code</code>) is your index, then the following pattern will do what you want:</p>
<pre><code>In [43]:
df[df.index == 'SFDT-09-04-0001'].dropna(axis=1).columns
Out[43]:
Index(['dID', 'cID', 'sID', 'Other_Data'], dtype='object')
</code></pre>
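<p>Applied to the data in the question, assuming the ID column has been made the index (e.g. <code>data = data.set_index('Col1')</code>) and the header names are as in the screenshot, the same pattern would look like:</p>
<pre><code>data.loc['A1'].dropna().index.tolist()  # hypothetical output: ['Head1', 'Head2', 'Head4']
</code></pre>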
|
python|csv|pandas
| 0
|
5,197
| 27,229,301
|
Calculate 3D variant for summed area table using numpy cumsum
|
<p>In case of a 2D array <code>array.cumsum(0).cumsum(1)</code> gives the <a href="http://en.wikipedia.org/wiki/Summed_area_table" rel="nofollow noreferrer">Integral image</a> of the array.</p>
<p>What happens if I compute <code>array.cumsum(0).cumsum(1).cumsum(2)</code> over a 3D array? </p>
<p>Do I get a 3D extension of the integral image, i.e., the integral volume over the array? </p>
<p>It's hard to visualize what happens in the 3D case.</p>
<p>I have gone through this discussion.
<a href="https://stackoverflow.com/questions/20445084/3d-variant-for-summed-area-table-sat">3D variant for summed area table (SAT)</a></p>
<p>This gives a recursive way on how to compute the Integral volume. What if I use the <code>cumsum</code> along the 3 axes. Will it give me the same thing? </p>
<p>Will it be more efficient than the recursive method?</p>
|
<p>Yes, the formula you give, <code>array.cumsum(0).cumsum(1).cumsum(2)</code>, will work. </p>
<p>What the formula does is compute a few partial sums so that the sum of these sums is the volume sum. That is, every element needs to be summed exactly once, or, in other words, no element can be skipped and no element counted twice. I think going through each of these questions (is any element skipped or counted twice) is a good way to verify to yourself that this will work. And also run a small test:</p>
<pre><code>x = np.ones((20,20,20)).cumsum(0).cumsum(1).cumsum(2)
print x[2,6,10] # 231.0
print 3*7*11 # 231
</code></pre>
<p>Of course, with all ones there could be two errors that cancel each other out, but this wouldn't happen everywhere, so it's a reasonable test.</p>
<p>As for efficiency, I would guess that the single-pass approach is probably faster, but not by a lot. Also, the above could be sped up using an output array, e.g. <code>cumsum(n, out=temp)</code>, as otherwise three intermediate arrays are created for this calculation. The best way to know is to test (but only if you need to).</p>
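<p>A quick brute-force check with random data makes the "skipped or double-counted" question concrete:</p>
<pre><code>import numpy as np

x = np.random.rand(4, 5, 6)
sat = x.cumsum(0).cumsum(1).cumsum(2)
# sat[i, j, k] should equal the sum over the sub-volume x[:i+1, :j+1, :k+1]
assert np.allclose(sat[2, 3, 4], x[:3, :4, :5].sum())
</code></pre>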
|
python|algorithm|numpy|data-structures|cumsum
| 1
|
5,198
| 26,841,908
|
Is it possible to optimize update of a pandas dataframe that involves only some rows?
|
<p>I have a pandas data frame. One of its columns (let's call it <code>col1</code>) contains nominal values (for example <code>A</code>, <code>B</code>, <code>C</code> and so on). I have also a dictionary that maps these nominal values into numeric values (for example: <code>my_dict = {'A':3, 'B':1, 'C':1}</code>). Now I create a new column in the following way:</p>
<pre><code>df['col2'] = map(my_dict, df['col1'])
</code></pre>
<p>Now assume that I changed one value in the dictionary, for example key <code>C</code> now maps to <code>7</code> instead of <code>1</code>, and I want <code>col2</code> to be updated accordingly. One way would be to recalculate all rows, but maybe there is a way to change only those rows that actually need to change. Is there a way to do that?</p>
|
<p>You can use <code>loc</code> to overwrite just the rows whose <code>col1</code> is <code>C</code> with the newly mapped value:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(['A','B','A','C','A','B','C','A','C'],columns=['col1'])
my_dict = {'A':3, 'B':1, 'C':1}
# by the way you need lambda with map
%timeit df['col2'] = map(lambda x: my_dict[x], df['col1'])
1000 loops, best of 3: 205 µs per loop
</code></pre>
<p>Now change the <code>my_dict</code> mapping for key <strong>C</strong> to <strong>7</strong>:</p>
<pre><code>my_dict = {'A':3, 'B':1, 'C':7}
%timeit df['col2'] = map(lambda x: my_dict[x], df['col1'])
1000 loops, best of 3: 210 µs per loop
%timeit df.loc[df['col1']=='C', 'col2'] = my_dict['C']
10 loops, best of 3: 43.7 ms per loop
</code></pre>
<p>Both give the same result:</p>
<pre><code>df
col1 col2
0 A 3
1 B 1
2 A 3
3 C 7
4 A 3
5 B 1
6 C 7
7 A 3
8 C 7
</code></pre>
<p>Going by the timings, re-mapping the whole column with <code>map</code> is actually much more efficient here than the targeted <code>loc</code> update.</p>
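<p>As a side note, here is a minimal sketch of the more idiomatic pandas spelling: <code>Series.map</code> accepts a dict directly, so no lambda is needed.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col1': ['A', 'B', 'A', 'C', 'A', 'B', 'C', 'A', 'C']})
my_dict = {'A': 3, 'B': 1, 'C': 1}

# vectorized dict lookup for the initial mapping
df['col2'] = df['col1'].map(my_dict)

# after one mapping changes, touch only the affected rows
my_dict['C'] = 7
df.loc[df['col1'] == 'C', 'col2'] = my_dict['C']
</code></pre>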
|
python|optimization|map|pandas
| 1
|
5,199
| 26,637,715
|
How to extract data from .csv file and create a plot?
|
<p>I have a .csv file with 24 columns x 514 rows of data. Each column represents a different parameter, and I wish to study the trends between different parameters.</p>
<p>I am using genfromtxt to import the data as a numpy array so that I can plot the values of one column against another (e.g. column 9 against column 11). Here is what I have so far:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = np.genfromtxt('output_burnin.csv', delimiter=',')
impactparameter=data[:,11]
planetradius=data[:,9]
plt.plot(planetradius,impactparameter,'bo')
plt.title('Impact Parameter vs. Planet Radius')
plt.xlabel('R$_P$/R$_Jup$')
plt.ylabel('b/R$_star$')
plt.show()
</code></pre>
<p>With this code I encounter an error at line 12:</p>
<pre><code> impactparameter=data[:,11]
IndexError: too many indices
</code></pre>
<p>What could the problem be here?</p>
<p>Also, I have been trying to figure out how to give each column a header in the .csv file. So instead of counting the column number, I can just call the name of that particular column when I do the plotting. Is there a way to do this?</p>
<p>I am a complete newbie in Python, any help would be much appreciated, Thanks!</p>
|
<blockquote>
<p>Also, I have been trying to figure out how to give each column a header in the .csv file. So instead of counting the column number, I can just call the name of that particular column when I do the plotting. Is there a way to do this?</p>
</blockquote>
<p>To give columns in your array names, you need to make it a <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured array</a>.</p>
<p>Here's a simple example:</p>
<pre><code>import numpy as np

a = np.zeros(5, dtype='f4, f4, f4')
a.dtype.names = ('col1', 'col2', 'col3')
print(a[0])       # (0., 0., 0.), the first row (record)
print(a['col1'])  # [0. 0. 0. 0. 0.], the 'col1' field for every row
</code></pre>
<p>If you have the column names at the beginning of your CSV file, and set <code>names=True</code> in <code>np.genfromtxt</code>, then NumPy will automatically create a structured array for you with the correct names.</p>
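<p>As a sketch of how that could look for this file (the field names below are assumptions: <code>genfromtxt</code> derives them from whatever your CSV's header row actually says, with spaces and invalid characters replaced, so check <code>data.dtype.names</code> for the real ones):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# names=True consumes the header row and builds a structured array
data = np.genfromtxt('output_burnin.csv', delimiter=',', names=True)

# hypothetical field names; use the entries of data.dtype.names
plt.plot(data['planet_radius'], data['impact_parameter'], 'bo')
plt.show()
</code></pre>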
|
python|csv|numpy|genfromtxt
| 0
|