Unnamed: 0 (int64, min 0, max 378k)
| id (int64, min 49.9k, max 73.8M)
| title (stringlengths 15-150)
| question (stringlengths 37-64.2k)
| answer (stringlengths 37-44.1k)
| tags (stringlengths 5-106)
| score (int64, min -10, max 5.87k)
|
|---|---|---|---|---|---|---|
4,900
| 69,079,214
|
convert timestamp information to session information Python
|
<p>I have a dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">user</th>
<th style="text-align: center;">timestamp</th>
<th style="text-align: center;">minutes</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:53:06</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:54:06</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:56:36</td>
<td style="text-align: center;">2.5</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:57:06</td>
<td style="text-align: center;">0.5</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-03-18 22:11:29</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">Arjun</td>
<td style="text-align: center;">2020-03-18 22:42:29</td>
<td style="text-align: center;">31</td>
</tr>
<tr>
<td style="text-align: center;">Arjun</td>
<td style="text-align: center;">2020-03-18 23:42:29</td>
<td style="text-align: center;">60</td>
</tr>
</tbody>
</table>
</div>
<p>This data shows the timestamp of each user visit to a website and the time difference between consecutive visits in minutes.
I want to create a logic that assigns a session_id for each user, where a new session_id starts whenever the difference between two timestamps is more than 30 minutes.</p>
<p>This is certainly possible with for loops, but if there are any faster methods, please let me know. I have more than 20 million rows, and loops take a lot of time!</p>
<p>Final result should be something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">user</th>
<th style="text-align: center;">timestamp</th>
<th style="text-align: center;">minutes</th>
<th style="text-align: center;">session_id</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:53:06</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:54:06</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:56:36</td>
<td style="text-align: center;">2.5</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Ram</td>
<td style="text-align: center;">2020-07-25 12:57:06</td>
<td style="text-align: center;">0.5</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Arjun</td>
<td style="text-align: center;">2020-03-18 22:11:29</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">Arjun</td>
<td style="text-align: center;">2020-03-18 22:42:29</td>
<td style="text-align: center;">31</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: center;">Arjun</td>
<td style="text-align: center;">2020-03-18 23:42:29</td>
<td style="text-align: center;">60</td>
<td style="text-align: center;">4</td>
</tr>
</tbody>
</table>
</div>
|
<p>Increment the session_id when:</p>
<ol>
<li>The user changes, or</li>
<li>Minutes > 30</li>
</ol>
<h5>Code:</h5>
<pre><code>df["session_id"] = ((df["user"]!=df["user"].shift())|(df["minutes"]>30)).cumsum()
</code></pre>
<h5>Output:</h5>
<pre><code>  user            timestamp  minutes  session_id
   Ram  2020-07-25 12:53:06      0.0           1
   Ram  2020-07-25 12:54:06      1.0           1
   Ram  2020-07-25 12:56:36      2.5           1
   Ram  2020-07-25 12:57:06      0.5           1
 Arjun  2020-03-18 22:11:29      0.0           2
 Arjun  2020-03-18 22:42:29     31.0           3
 Arjun  2020-03-18 23:42:29     60.0           4
</code></pre>
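<p>If the <code>minutes</code> column isn't precomputed, a minimal sketch of deriving it from the timestamps (assuming <code>timestamp</code> is already a datetime column and the rows are ordered per user) could be:</p>
<pre><code>df["timestamp"] = pd.to_datetime(df["timestamp"])
# gap in minutes since the previous visit of the same user; first visit gets 0
df["minutes"] = (df.groupby("user")["timestamp"].diff()
                   .dt.total_seconds().div(60).fillna(0))
</code></pre>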
|
python|pandas|dataframe|numpy
| 2
|
4,901
| 68,873,625
|
How does calculation in a GRU layer take place
|
<p>So I want to understand <strong>exactly</strong> how the outputs and hidden state of a GRU cell are calculated.</p>
<p>I obtained the pre-trained model from <a href="https://github.com/nanoporetech/taiyaki" rel="nofollow noreferrer">here</a> and the GRU layer has been defined as <code>nn.GRU(96, 96, bias=True)</code>.</p>
<p>I looked at the the <a href="https://pytorch.org/docs/stable/generated/torch.nn.GRU.html" rel="nofollow noreferrer">PyTorch Documentation</a> and confirmed the dimensions of the weights and bias as:</p>
<ul>
<li><code>weight_ih_l0</code>: <code>(288, 96)</code></li>
<li><code>weight_hh_l0</code>: <code>(288, 96)</code></li>
<li><code>bias_ih_l0</code>: <code>(288)</code></li>
<li><code>bias_hh_l0</code>: <code>(288)</code></li>
</ul>
<p>My input size and output size are <code>(1000, 8, 96)</code>. I understand that there are <code>1000</code> tensors, each of size <code>(8, 96)</code>. The hidden state is <code>(1, 8, 96)</code>, which is one tensor of size <code>(8, 96)</code>.</p>
<p>I have also printed the variable <code>batch_first</code> and found it to be <code>False</code>. This means that:</p>
<ul>
<li>Sequence length: <code>L=1000</code></li>
<li>Batch size: <code>B=8</code></li>
<li>Input size: <code>Hin=96</code><br></li>
</ul>
<p>Now going by the equations from the documentation, for the reset gate, I need to multiply the weight by the input <code>x</code>. But my weights are 2-dimensional and my input has three dimensions.</p>
<p>Here is what I've tried, I took the first <code>(8, 96)</code> matrix from my input and multiplied it with the transpose of my weight matrix:</p>
<pre><code>Input (8, 96) x Weight (96, 288) = (8, 288)
</code></pre>
<p>Then I add the bias by replicating the <code>(288)</code> eight times to give <code>(8, 288)</code>. This would give the size of <code>r(t)</code> as <code>(8, 288)</code>. Similarly, <code>z(t)</code> would also be <code>(8, 288)</code>.</p>
<p>This <code>r(t)</code> is used in <code>n(t)</code>, since Hadamard product is used, both the matrices being multiplied have to be the same size that is <code>(8, 288)</code>. This implies that <code>n(t)</code> is also <code>(8, 288)</code>.</p>
<p>Finally, <code>h(t)</code> is the Hadamard product and matrix addition, which would give the size of <code>h(t)</code> as <code>(8, 288)</code>, which is <strong>wrong</strong>.</p>
<p>Where am I going wrong in this process?</p>
|
<p><strong>TLDR; This confusion comes from the fact that the weights of the layer are the concatenation of <em>input_hidden</em> and <em>hidden-hidden</em> respectively.</strong></p>
<hr />
<h4>- <a href="https://pytorch.org/docs/stable/generated/torch.nn.GRU.html" rel="nofollow noreferrer"><code>nn.GRU</code></a> layer weight/bias layout</h4>
<p>You can take a closer look at what's <a href="https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/rnn.py#L704" rel="nofollow noreferrer">inside the GRU layer</a> implementation <a href="https://pytorch.org/docs/stable/generated/torch.nn.GRU.html" rel="nofollow noreferrer"><code>torch.nn.GRU</code></a> by peeking at the weights and biases.</p>
<pre><code>>>> gru = nn.GRU(input_size=96, hidden_size=96, num_layers=1)
</code></pre>
<p>First the parameters of the GRU layer:</p>
<pre><code>>>> gru._all_weights
[['weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0']]
</code></pre>
<p>You can look at <code>gru.state_dict()</code> to get the dictionary of weights of the layer.</p>
<p>We have two weights and two biases, <code>_ih</code> stands for '<em>input-hidden</em>' and <code>_hh</code> stands for '<em>hidden-hidden</em>'.</p>
<p>For more efficient computation the parameters have been concatenated together, as the documentation page clearly explains (<code>|</code> means concatenation). In this particular example <code>num_layers=1</code> and <code>k=0</code>:</p>
<ul>
<li><p><code>~GRU.weight_ih_l[k]</code> – the learnable input-hidden weights of the layer <code>(W_ir | W_iz | W_in)</code>, of shape <code>(3*hidden_size, input_size)</code>.</p>
</li>
<li><p><code>~GRU.weight_hh_l[k]</code> – the learnable hidden-hidden weights of the layer <code>(W_hr | W_hz | W_hn)</code>, of shape <code>(3*hidden_size, hidden_size)</code>.</p>
</li>
<li><p><code>~GRU.bias_ih_l[k]</code> – the learnable input-hidden bias of the layer <code>(b_ir | b_iz | b_in)</code>, of shape <code>(3*hidden_size)</code>.</p>
</li>
<li><p><code>~GRU.bias_hh_l[k]</code> – the learnable hidden-hidden bias of the layer <code>(b_hr | b_hz | b_hn)</code>, of shape <code>(3*hidden_size)</code>.</p>
</li>
</ul>
<p>For further inspection we can get those split up with the following code:</p>
<pre><code>>>> W_ih, W_hh, b_ih, b_hh = gru._flat_weights
>>> W_ir, W_iz, W_in = W_ih.split(H_in)
>>> W_hr, W_hz, W_hn = W_hh.split(H_in)
>>> b_ir, b_iz, b_in = b_ih.split(H_in)
>>> b_hr, b_hz, b_hn = b_hh.split(H_in)
</code></pre>
<p>Now we have the <em>12</em> tensor parameters sorted out.</p>
<hr />
<h3>- Expressions</h3>
<p>The four expressions for a GRU layer: <code>r_t</code>, <code>z_t</code>, <code>n_t</code>, and <code>h_t</code>, are computed <em>at each timestep</em>.</p>
<p>The first operation is <code>r_t = σ(W_ir@x_t + b_ir + W_hr@h + b_hr)</code>. I used the <code>@</code> sign to designate the matrix multiplication operator (<a href="https://pytorch.org/docs/stable/generated/torch.matmul.html" rel="nofollow noreferrer"><code>__matmul__</code></a>). Remember <code>W_ir</code> is shaped <code>(hidden_size, H_in=input_size)</code> while <code>x_t</code> contains the element at step <code>t</code> from the <code>x</code> sequence. Tensor <code>x_t = x[t]</code> is shaped as <code>(N=batch_size, H_in=input_size)</code>. At this point, it's simply a matrix multiplication between the input <code>x[t]</code> and the transposed weight matrix. The resulting tensor <code>r</code> is shaped <code>(N, H_out=hidden_size)</code>:</p>
<pre><code>>>> (x[t]@W_ir.T).shape
(8, 96)
</code></pre>
<p>The same is true for all other weight multiplication operations performed. As a result, you end up with an output tensor shaped <code>(N, H_out=hidden_size)</code>.</p>
<p>In the following expressions <code>h</code> is the tensor containing the hidden state of the previous step for each element in the batch, i.e. shaped <code>(N, hidden_size=H_out)</code>, since <code>num_layers=1</code>, <em>i.e.</em> there's a single hidden layer.</p>
<pre><code>>>> r_t = torch.sigmoid(x[t]@W_ir.T + b_ir + h@W_hr.T + b_hr)
>>> r_t.shape
(8, 96)
>>> z_t = torch.sigmoid(x[t]@W_iz.T + b_iz + h@W_hz.T + b_hz)
>>> z_t.shape
(8, 96)
</code></pre>
<p>The output of the layer is the concatenation of the computed <code>h</code> tensors at
consecutive timesteps <code>t</code> (between <code>0</code> and <code>L-1</code>).</p>
<hr />
<h4>- Demonstration</h4>
<p>Here is a minimal example of an <code>nn.GRU</code> inference manually computed:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
<th>Values</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>H_in</code></td>
<td>feature size</td>
<td><code>3</code></td>
</tr>
<tr>
<td><code>H_out</code></td>
<td>hidden size</td>
<td><code>2</code></td>
</tr>
<tr>
<td><code>L</code></td>
<td>sequence length</td>
<td><code>3</code></td>
</tr>
<tr>
<td><code>N</code></td>
<td>batch size</td>
<td><code>1</code></td>
</tr>
<tr>
<td><code>k</code></td>
<td>number of layers</td>
<td><code>1</code></td>
</tr>
</tbody>
</table>
</div>
<p>Setup:</p>
<pre><code>gru = nn.GRU(input_size=H_in, hidden_size=H_out, num_layers=k)
W_ih, W_hh, b_ih, b_hh = gru._flat_weights
W_ir, W_iz, W_in = W_ih.split(H_out)
W_hr, W_hz, W_hn = W_hh.split(H_out)
b_ir, b_iz, b_in = b_ih.split(H_out)
b_hr, b_hz, b_hn = b_hh.split(H_out)
</code></pre>
<p>Random input:</p>
<pre><code>x = torch.rand(L, N, H_in)
</code></pre>
<p>Inference loop:</p>
<pre><code>output = []
h = torch.zeros(1, N, H_out)
for t in range(L):
r = torch.sigmoid(x[t]@W_ir.T + b_ir + h@W_hr.T + b_hr)
z = torch.sigmoid(x[t]@W_iz.T + b_iz + h@W_hz.T + b_hz)
n = torch.tanh(x[t]@W_in.T + b_in + r*(h@W_hn.T + b_hn))
h = (1-z)*n + z*h
output.append(h)
</code></pre>
<p>The final output is given by stacking the tensors <code>h</code> at consecutive timesteps:</p>
<pre><code>>>> torch.vstack(output)
tensor([[[0.1086, 0.0362]],
[[0.2150, 0.0108]],
[[0.3020, 0.0352]]], grad_fn=<CatBackward>)
</code></pre>
<p>In this case the output shape is <code>(L, N, H_out)</code>, <em>i.e.</em> <code>(3, 1, 2)</code>.</p>
<p>Which you can compare with <code>output, _ = gru(x)</code>.</p>
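<p>As a quick sanity check (a minimal sketch, assuming the inference loop above was run with the same <code>gru</code> and <code>x</code>), the manually computed output should match the layer's own output:</p>
<pre><code>>>> ref_output, ref_h = gru(x)
>>> torch.allclose(torch.vstack(output), ref_output, atol=1e-6)
True
</code></pre>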
|
pytorch|recurrent-neural-network|gated-recurrent-unit
| 2
|
4,902
| 69,273,384
|
Random sample the model scores into 4 groups with a similar distribution in python
|
<p>I have a dataset with model scores ranging from 0 to 1. The table looks like below:</p>
<pre><code>| Score |
| ----- |
| 0.55 |
| 0.67 |
| 0.21 |
| 0.05 |
| 0.91 |
| 0.15 |
| 0.33 |
| 0.47 |
</code></pre>
<p>I want to randomly divide these scores into 4 groups: <code>control</code>, <code>treatment 1</code>, <code>treatment 2</code>, <code>treatment 3</code>. The <code>control</code> group should have 20% of the observations and the remaining 80% has to be divided into the other 3 equal-sized groups. However, I want the distribution of scores in each group to be the same. How can I solve this using Python?</p>
<p>PS: This is just a representation of the actual table, but it will have a lot more observations than this.</p>
|
<p>You can use <code>numpy.random.choice</code> to set random groups with defined probabilities, then <code>groupby</code> to split the dataframe:</p>
<pre><code>import numpy as np
group = np.random.choice(['control', 'treatment 1', 'treatment 2', 'treatment 3'],
size=len(df),
p=[.2, .8/3, .8/3, .8/3])
dict(list(df.groupby(pd.Series(group, index=df.index))))
</code></pre>
<p>possible output (each value in the dictionary is a DataFrame):</p>
<pre><code>{'control': Score
2 0.21
5 0.15,
'treatment 1': Score
7 0.47,
'treatment 2': Score
1 0.67
3 0.05,
'treatment 3': Score
0 0.55
4 0.91
6 0.33}
</code></pre>
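<p>To verify that the resulting groups have roughly similar score distributions and the expected relative sizes (a small follow-up sketch using the dictionary built above):</p>
<pre><code>groups = dict(list(df.groupby(pd.Series(group, index=df.index))))
for name, sub in groups.items():
    print(name, len(sub), round(sub['Score'].mean(), 3))
</code></pre>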
|
python|pandas|distribution
| 1
|
4,903
| 44,771,853
|
Mac: OSError: [Errno 1] Operation not permitted: '/tmp/pip-XcfgD6
|
<p>When I played with tensorflow in Mac OS, I got this error:</p>
<pre><code>Installing collected packages: html5lib, bleach, markdown, backports.weakref, numpy, funcsigs, pbr, mock, protobuf, tensorflow
Found existing installation: numpy 1.8.0rc1
DEPRECATION: Uninstalling a distutils installed project (numpy) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling numpy-1.8.0rc1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_set.py", line 778, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 754, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/nl/69lb138j255bkbzzjdx1hb6h0000gp/T/pip-1Jo9RF-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'
</code></pre>
<p>What does it mean and how do I fix it?</p>
|
<p>Add the argument <code>--ignore-installed</code> to the pip command you're running. See <a href="https://stackoverflow.com/questions/31900008/oserror-errno-1-operation-not-permitted-when-installing-scrapy-in-osx-10-11">this question for more</a>.</p>
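<p>For example (a sketch; adjust the package list to whatever you were installing):</p>
<pre><code>pip install tensorflow --ignore-installed
</code></pre>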
|
python|tensorflow|sip
| 11
|
4,904
| 44,727,726
|
Nested list to a dictionary of index counts
|
<p>I'm very new to Python 3 and I'm working with Keras sigmoid activations which produce a nested list of probabilities.</p>
<p>I have a nested list that looks something like this:</p>
<pre><code>[[0.1, 0.2, 0.3, 0.2, 0.4, 0.5]
[0.2, 0.3, 0.3, 0.3, 0.2, 0.1]
...
[0.1, 0.1, 0.4, 0.5, 0.1, 0.2]]
</code></pre>
<p>What I want to do is convert this list into a dictionary of indices wherein each index key has a corresponding frequency count of how many times in the list it meets a certain condition.</p>
<p>For example, given the three rows in the sample nested list above and given the condition:</p>
<pre><code>element > 0.2
</code></pre>
<p>It will build the following dictionary:</p>
<pre><code>{
 0: 0,
 1: 1,
 2: 3,
 3: 2,
 4: 1,
 5: 1
}
</code></pre>
<p>This is because across the three nested lists, the value at index 0 is never greater than 0.2, the value at index 1 is greater than 0.2 only once (at the second nested list), the value at index 2 is greater than 0.2 for all the nested lists, the value at index 3 is greater than 0.2 for two nested lists (namely the second and third nested list), and so on.</p>
<p>Thank you very much!</p>
|
<p>With <code>a</code> as the list of lists of the same lengths, we could convert to an array, giving us a <code>2D</code> array. Then, compare against <code>0.2</code> and sum the <code>True</code> matches along each column to get the counts. Finally, set up the output dictionary from it.</p>
<p>Thus, one implementation would be -</p>
<pre><code>C = (np.asarray(a)>0.2).sum(axis=0)
dict_out = {i:c for i,c in enumerate(C)}
</code></pre>
<p><code>np.count_nonzero</code> could also be used in place of <code>np.sum</code> for summing matches there.</p>
<p>Sample run -</p>
<pre><code>In [209]: a
Out[209]:
[[0.1, 0.2, 0.3, 0.2, 0.4, 0.5],
[0.2, 0.3, 0.3, 0.3, 0.2, 0.1],
[0.1, 0.1, 0.4, 0.5, 0.1, 0.2]]
In [210]: C = (np.asarray(a)>0.2).sum(axis=0)
In [211]: C
Out[211]: array([0, 1, 3, 2, 1, 1])
In [212]: {i:c for i,c in enumerate(C)}
Out[212]: {0: 0, 1: 1, 2: 3, 3: 2, 4: 1, 5: 1}
</code></pre>
<hr>
<p><strong>Handling ragged sublists</strong></p>
<p>For ragged sublists (lists having different lengths in the input list), we could convert it to a regular array by filling the missing values with an invalid specifier (NaN seems suitable here) and then sum along the appropriate axis. Thus, to handle such a case, the modified implementation would be -</p>
<pre><code>from itertools import izip_longest # For Python3, use zip_longest
C = (np.array(list(izip_longest(*a, fillvalue=np.nan)))>0.2).sum(1)
dict_out = {i:c for i,c in enumerate(C)}
</code></pre>
<p>Sample run -</p>
<pre><code>In [253]: a
Out[253]:
[[0.1, 0.2, 0.3, 0.2, 0.4, 0.5, 0.7, 0.2],
[0.2, 0.3, 0.3, 0.3, 0.2, 0.1],
[0.1, 0.1, 0.4, 0.5, 0.1, 0.2, 0.1]]
In [254]: C = (np.array(list(izip_longest(*a, fillvalue=np.nan)))>0.2).sum(1)
In [255]: {i:c for i,c in enumerate(C)}
Out[255]: {0: 0, 1: 1, 2: 3, 3: 2, 4: 1, 5: 1, 6: 1, 7: 0}
</code></pre>
|
python|list|numpy|dictionary|keras
| 4
|
4,905
| 60,774,959
|
Use part of string in DF
|
<p>I need to return a part of a string.</p>
<p>I have this (example):</p>
<pre><code>df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['XXX2100M', 'yyyy2100M', 'AAA850M',
'BBB2100M']})
</code></pre>
<p>My goal:</p>
<pre><code> vals ids test
0 1 XXX2100M 2100M
1 2 yyyy2100M 2100M
2 3 AAA850M
3 4 BBB2100M 2100M
</code></pre>
<p>Modify <code>['test']</code> only if '2100M' is in the string.</p>
|
<p>We can use <code>np.where</code> with <code>str.contains</code>:</p>
<pre><code>import numpy as np
df['test'] = np.where(df.ids.str.contains('2100M'), '2100M', '')
</code></pre>
<hr>
<pre><code>print(df)
vals ids test
0 1 XXX2100M 2100M
1 2 yyyy2100M 2100M
2 3 AAA850M
3 4 BBB2100M 2100M
</code></pre>
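<p>If the goal is to pull the matched part out of the string rather than write a fixed value (a small sketch using the same pattern), <code>str.extract</code> could be used instead:</p>
<pre><code>df['test'] = df['ids'].str.extract(r'(2100M)', expand=False).fillna('')
</code></pre>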
|
python|pandas
| 2
|
4,906
| 71,636,041
|
Error on transfer learning model, ValueError: Unexpected result of `train_function` (Empty logs)
|
<p>Thanks for reading. I am having an issue when using a transfer learning model. However, I believe the issue is due to <code>model.fit_generator()</code>, as the exact same error occurs when I try to run my custom convolutional neural network.</p>
<pre><code># transfer learning model, vgg16
vgg = VGG16(input_shape= IMAGE_SIZE + [3], weights = 'imagenet', include_top = False)
x = vgg.output
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.3)(x)
prediction = Dense(nb_classes, activation='softmax')(x)
for layers in vgg.layers:
layers.trainable = False
model_name = 'vgg16.h5'
early_stop = EarlyStopping(monitor='val_loss',min_delta=0.003, patience=15, verbose=1, mode='auto',
restore_best_weights=True)
checkpoint_fixed_name = ModelCheckpoint(model_name,
monitor='val_loss', verbose=1, save_best_only=True,
save_weights_only=True, mode='auto', save_Freq=5)
callbacks = [checkpoint_fixed_name, early_stop]
model = Model( inputs = vgg.input, outputs = prediction)
model.compile(loss = 'categorical_crossentropy', optimizer='rmsprop', metrics= ['accuracy'])
# train model
history = model.fit_generator(training_set,epochs=100,steps_per_epoch=int(len(training_set)/8),
validation_steps=int(len(test_set)/4),validation_data=valid_set,
callbacks=callbacks)
#Error
ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`.
</code></pre>
<p>I tried splitting each line and found out that the error is probably due to <code>callbacks=callbacks</code>, but I am not sure how to fix it.</p>
<p>Thanks.</p>
|
<p>Check that your input images have the shape the model expects (VGG16 here is built for 3-channel RGB input, so grayscale images will not match <code>IMAGE_SIZE + [3]</code>), and also check for the possibility of an empty input batch.</p>
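<p>A quick way to check this (a debugging sketch, assuming <code>training_set</code>, <code>valid_set</code> and <code>test_set</code> are Keras image generators as in the question) is to confirm that they actually yield batches and that the computed step counts are not zero:</p>
<pre><code>print(len(training_set), len(valid_set))   # number of batches per generator
x_batch, y_batch = training_set[0]         # inspect the first batch
print(x_batch.shape, y_batch.shape)        # expect (batch, H, W, 3) and (batch, nb_classes)
print(int(len(training_set)/8), int(len(test_set)/4))  # the steps passed to fit; must be >= 1
</code></pre>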
|
python|tensorflow|keras|conv-neural-network|transfer-learning
| 0
|
4,907
| 69,859,163
|
How to modify dataframe based on column values
|
<p>I want to add relationships to the 'relations' column based on rel_list. Specifically, for each tuple, e.g. ('a', 'b'), I want to replace the relations value '' with 'b' in the row for 'a', but without duplicates, meaning that in the row for 'b' I don't replace '' with 'a', since that would be the same relationship repeated. The following code doesn't work correctly:</p>
<pre><code>import pandas as pd
data = {
"names": ['a', 'b', 'c', 'd'],
"ages": [50, 40, 45, 20],
"relations": ['', '', '', '']
}
rel_list = [('a', 'b'), ('a', 'c'), ('c', 'd')]
df = pd.DataFrame(data)
for rel_tuple in rel_list:
head = rel_tuple[0]
tail = rel_tuple[1]
df.loc[df.names == head, 'relations'] = tail
print(df)
</code></pre>
<p>The current result of df is:</p>
<pre><code> names ages relations
0 a 50 c
1 b 40
2 c 45 d
3 d 20
</code></pre>
<p>However, the correct one is:</p>
<pre><code> names ages relations
0 a 50 b
0 a 50 c
1 b 40
2 c 45 d
3 d 20
</code></pre>
<p>There are new rows that need to be added. The 2nd row in this case, like above. How to do that?</p>
|
<p>You can craft a dataframe and <code>merge</code>:</p>
<pre><code>(df.drop('relations', axis=1)
.merge(pd.DataFrame(rel_list, columns=['names', 'relations']),
on='names',
how='outer'
)
# .fillna('') # uncomment to replace NaN with empty string
)
</code></pre>
<p>Output:</p>
<pre><code> names ages relations
0 a 50 b
1 a 50 c
2 b 40 NaN
3 c 45 d
4 d 20 NaN
</code></pre>
|
pandas|dataframe
| 1
|
4,908
| 70,001,976
|
How do I add the matching records of one of two different datasets with the same date to the other in python?
|
<p>I have 2 different datasets.</p>
<p>Table-1: df1</p>
<pre><code>Date sales product
2021-08-01 10000 a
2021-08-02 575 a
2021-08-03 12212 a
2021-08-04 902 a
2021-08-05 456 a
</code></pre>
<p>Table-2: df2</p>
<pre><code>Date sales product
2021-08-03 1000 b
2021-08-04 435 b
2021-08-05 759 b
2021-08-06 9123 b
2021-08-07 642 b
</code></pre>
<p>In Python, I want to create a new dataset in which the records of df2 whose dates match df1 are added to df1, and I want to assign the value 0 for non-matching dates.</p>
<p>New Dataset:</p>
<pre><code>Date sales_a sales_b
2021-08-01 10000 0
2021-08-02 575 0
2021-08-03 12212 1000
2021-08-04 902 435
2021-08-05 456 759
</code></pre>
<p>How can I do that?</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>pd.merge</code></a> on the date and sales columns, joining on Date on the left dataframe (df1), adding the suffixes _a and _b to the resulting column names. You can then fill in any nans with 0 to get the desired output:</p>
<pre><code>df1[['Date', 'sales']].merge(df2[['Date', 'sales']], on='Date', how='left',
                             suffixes=['_a', '_b']).fillna(0)
</code></pre>
<p>Output:</p>
<pre><code> Date sales_a sales_b
0 2021-08-01 10000 0.0
1 2021-08-02 575 0.0
2 2021-08-03 12212 1000.0
3 2021-08-04 902 435.0
4 2021-08-05 456 759.0
</code></pre>
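<p>If you prefer integer counts over the floats introduced by the NaN fill (a minor follow-up sketch):</p>
<pre><code>out = (df1[['Date', 'sales']]
       .merge(df2[['Date', 'sales']], on='Date', how='left', suffixes=['_a', '_b'])
       .fillna(0)
       .astype({'sales_b': int}))
</code></pre>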
|
python|pandas|dataframe
| 0
|
4,909
| 43,433,075
|
Create list of tuples from 2d array
|
<p>I'm looking to create a list of tuples from a 2xn array where the first row is an ID and the second row is that ID's group assignment. I'd like to create a list of the IDs organized by their group assignments.</p>
<p>For example:</p>
<pre><code>array([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.],
       [ 1.,  2.,  1.,  2.,  2.,  1.,  1.]])
</code></pre>
<p>In the above example, ID 0 is assigned to group 1, ID 1 to group 2 and so on. The output list would look like:</p>
<pre><code>a=[(0,2,5,6),(1,3,4)]
</code></pre>
<p>Does anyone have any creative, quick ways to do this?</p>
<p>Thanks!</p>
|
<p>The standard (sorry, not creative -- but reasonably quick) numpy way would be an indirect sort:</p>
<pre><code>import numpy as np
data = np.array([[ 0., 1., 2., 3., 4., 5., 6.],
[ 1., 2., 1., 2., 2., 1., 1.]])
index = np.argsort(data[1], kind='mergesort') # mergesort is a bit
# slower than the default
# algorithm but is stable,
# i.e. if there's a tie
# it will preserve order
# use the index to sort both parts of data
sorted = data[:, index]
# the group labels are now in blocks, we can detect the boundaries by
# shifting by one and looking for mismatch
split_points = np.where(sorted[1, 1:] != sorted[1, :-1])[0] + 1
# could convert to int dtype here if desired
result = map(tuple, np.split(sorted[0], split_points))
# That's Python 2. In Python 3 you'd have to explicitly convert to list:
# result = list(result)
print(result)
</code></pre>
<p>Prints:</p>
<pre><code>[(0.0, 2.0, 5.0, 6.0), (1.0, 3.0, 4.0)]
</code></pre>
|
python|arrays|numpy
| 1
|
4,910
| 43,095,955
|
Rename duplicated index values pandas DataFrame
|
<p>I have a DataFrame that contains some duplicated index values:</p>
<pre><code>df1 = pd.DataFrame( np.random.randn(6,6), columns = pd.date_range('1/1/2010', periods=6), index = {"A", "B", "C", "D", "E", "F"})
df1.rename(index = {"C": "A", "B": "E"}, inplace = 1)
ipdb> df1
2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 2010-01-06
A -1.163883 0.593760 2.323342 -0.928527 0.058336 -0.209101
A -0.593566 -0.894161 -0.789849 1.452725 0.821477 -0.738937
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
F 1.707686 0.323213 0.048503 1.168898 0.002662 -1.988825
D 0.403028 -0.879873 -1.809991 -1.817214 -0.012758 0.283450
E -0.224405 -1.803301 0.582946 0.338941 0.798908 0.714560
</code></pre>
<p>I would like to change only the name of the duplicated values and to obtain a DataFrame like the following one:</p>
<pre><code>ipdb> df1
2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 2010-01-06
A -1.163883 0.593760 2.323342 -0.928527 0.058336 -0.209101
A_dp -0.593566 -0.894161 -0.789849 1.452725 0.821477 -0.738937
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
F 1.707686 0.323213 0.048503 1.168898 0.002662 -1.988825
D 0.403028 -0.879873 -1.809991 -1.817214 -0.012758 0.283450
E_dp -0.224405 -1.803301 0.582946 0.338941 0.798908 0.714560
</code></pre>
<p><strong>My approach:</strong></p>
<p>(i) Create dictionary with new names</p>
<pre><code>old_names = df1[df1.index.duplicated()].index.values
new_names = df1[df1.index.duplicated()].index.values + "_dp"
dictionary = dict(zip(old_names, new_names))
</code></pre>
<p>(ii) Rename only the duplicated values</p>
<pre><code>df1.loc[df1.index.duplicated(),:].rename(index = dictionary, inplace = True)
</code></pre>
<p>However this does not seem to work. </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.where.html" rel="noreferrer"><code>Index.where</code></a>:</p>
<pre><code>df1.index = df1.index.where(~df1.index.duplicated(), df1.index + '_dp')
print (df1)
2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 2010-01-06
A -1.163883 0.593760 2.323342 -0.928527 0.058336 -0.209101
A_dp -0.593566 -0.894161 -0.789849 1.452725 0.821477 -0.738937
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
F 1.707686 0.323213 0.048503 1.168898 0.002662 -1.988825
D 0.403028 -0.879873 -1.809991 -1.817214 -0.012758 0.283450
E_dp -0.224405 -1.803301 0.582946 0.338941 0.798908 0.714560
</code></pre>
<p>And if you need to make the duplicated index values unique:</p>
<pre><code>print (df1)
2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 2010-01-06
A -1.163883 0.593760 2.323342 -0.928527 0.058336 -0.209101
A -0.593566 -0.894161 -0.789849 1.452725 0.821477 -0.738937
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
F 1.707686 0.323213 0.048503 1.168898 0.002662 -1.988825
D 0.403028 -0.879873 -1.809991 -1.817214 -0.012758 0.283450
E -0.224405 -1.803301 0.582946 0.338941 0.798908 0.714560
df1.index = df1.index + df1.groupby(level=0).cumcount().astype(str).replace('0','')
print (df1)
2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 2010-01-06
A -1.163883 0.593760 2.323342 -0.928527 0.058336 -0.209101
A1 -0.593566 -0.894161 -0.789849 1.452725 0.821477 -0.738937
E -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
E1 -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
E2 -0.670305 -1.788403 0.134790 -0.270894 0.672948 1.149089
F 1.707686 0.323213 0.048503 1.168898 0.002662 -1.988825
D 0.403028 -0.879873 -1.809991 -1.817214 -0.012758 0.283450
E3 -0.224405 -1.803301 0.582946 0.338941 0.798908 0.714560
</code></pre>
|
python|pandas
| 24
|
4,911
| 43,167,362
|
R to Python pandas numpy.where conversion
|
<p>What is the best way to write the following R code in Python (pandas dataframe) using numpy.where syntax?</p>
<pre><code>Data$new = ifelse(Data$Diff > 1.652*Data$Diff10, 1,
ifelse(Data$Diff < 3.95*Data$Diff10, -1, 0 ))
</code></pre>
|
<p>You can use:</p>
<pre><code>Data['new'] = np.where(Data['Diff'] > 1.652*Data['Diff10'], 1,
np.where(Data['Diff'] < 3.95*Data['Diff10'], -1, 0 ))
</code></pre>
<p>EDIT:</p>
<p>It seems the code above has a logic error, because it never returns <code>0</code>.</p>
<p>Maybe need:</p>
<pre><code>Data = pd.DataFrame({'Diff':[2,4,.5],
'Diff10':[1,1,1]})
print (Data)
Diff Diff10
0 2.0 1
1 4.0 1
2 0.5 1
Data['new'] = np.where(Data['Diff'] < 1.652*Data['Diff10'], 2,
np.where(Data['Diff'] > 3.95*Data['Diff10'], 3, 4 ))
print (Data)
Diff Diff10 new
0 2.0 1 4
1 4.0 1 3
2 0.5 1 2
</code></pre>
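<p>For conditions with more than two branches, <code>np.select</code> can be easier to read than nested <code>np.where</code> calls (a sketch equivalent to the corrected logic above):</p>
<pre><code>conditions = [Data['Diff'] < 1.652*Data['Diff10'],
              Data['Diff'] > 3.95*Data['Diff10']]
choices = [2, 3]
Data['new'] = np.select(conditions, choices, default=4)
</code></pre>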
|
python|pandas|numpy
| 0
|
4,912
| 43,382,237
|
NumPy ndarray.all() vs np.all(ndarray) vs all(ndarray)
|
<p>What is the difference between the three "all" methods in Python/NumPy? What is the reason for the performance difference? Is it true that ndarray.all() is always the fastest of the three?</p>
<p>Here is a timing test that I ran:</p>
<pre><code>In [59]: a = np.full(100000, True, dtype=bool)
In [60]: timeit a.all()
The slowest run took 5.40 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 5.24 Β΅s per loop
In [61]: timeit all(a)
1000 loops, best of 3: 1.34 ms per loop
In [62]: timeit np.all(a)
The slowest run took 5.54 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 6.41 Β΅s per loop
</code></pre>
|
<p>The difference between <code>np.all(a)</code> and <code>a.all()</code> is simple:</p>
<ul>
<li>If <code>a</code> is a <code>numpy.array</code> then <code>np.all()</code> will simply call <code>a.all()</code>.</li>
<li>If <code>a</code> is not a <code>numpy.array</code> the <code>np.all()</code> call will convert it to an <code>numpy.array</code> and then call <code>a.all()</code>. <code>a.all()</code> on the other hand will fail because <code>a</code> wasn't a <code>numpy.array</code> and therefore probably has no <code>all</code> method.</li>
</ul>
<p>The difference between <code>np.all</code> and <code>all</code> is more complicated. </p>
<ul>
<li>The <code>all</code> function works on any iterable (including <code>list</code>, <code>set</code>s, <code>generators</code>, ...). <code>np.all</code> works only for <code>numpy.array</code>s (including everything that can be converted to a numpy array, i.e. <code>list</code>s and <code>tuple</code>s). </li>
<li><code>np.all</code> processes an <code>array</code> with specified data type, that makes it pretty efficient when comparing for <code>!= 0</code>. <code>all</code> however needs to evaluate <code>bool</code> for each item, that's much slower.</li>
<li>processing arrays with python functions is pretty slow because each item in the array needs to be converted to a python object. <code>np.all</code> doesn't need to do that conversion.</li>
</ul>
<p>Note that the timings depend also on the type of your <code>a</code>. If you process a python list <code>all</code> can be faster for relativly short lists. If you process an array, <code>np.all</code> and <code>a.all()</code> will be faster in almost all cases (except maybe for <code>object</code> arrays, but I won't go down that path, that way lies madness).</p>
|
python|performance|numpy
| 11
|
4,913
| 72,379,115
|
KDE shows higher count as compared to actual pandas dataframe
|
<p>I am getting started with Titanic Spaceship dataset. (<a href="https://www.kaggle.com/competitions/spaceship-titanic" rel="nofollow noreferrer">https://www.kaggle.com/competitions/spaceship-titanic</a>)</p>
<p>I am trying to understand the effect of the FoodCourt feature on the Transported result. On plotting a violin plot, I see that the number of passengers who did not use the food court (0 expense) and were not transported is higher than the number of passengers who did not use the food court (0 expense) and were transported.</p>
<p>However, when I use the following code, I see that the passengers who did not use the food court and were not transported are fewer than the passengers who did not use the food court and were transported.</p>
<pre class="lang-py prettyprint-override"><code>temp = df_train.loc[df_train['FoodCourt'] == 0]
temp_0 = temp.loc[temp['Transported'] == False]
temp_1 = temp.loc[temp['Transported'] == True]
print(len(temp_0))
print(len(temp_1))
</code></pre>
|
<p>I have tried the same code that you posted, and the results I got from the violin plot (and swarm plot) are consistent with the data.
Slicing the dataframe into the passengers that got transported and not transported:</p>
<pre><code>temp_0 = temp[temp["Transported"] == True]
temp_1 = temp[temp["Transported"] == False]
</code></pre>
<p>The lengths of these dataframes are:</p>
<pre><code>temp_0.shape[0] #3224
temp_1.shape[0] #2232
</code></pre>
<p>The plot is:
<a href="https://i.stack.imgur.com/RMSVJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RMSVJ.png" alt="plot" /></a></p>
<p>If you want the width of each violin to scale with the number of observations in its group, add the scale="count" argument as follows:</p>
<pre><code>plt.figure(figsize=(16, 8))
plot = sns.violinplot(y="FoodCourt", x="Transported", data=data, scale="count")
plt.show()
</code></pre>
<p>The result:
<a href="https://i.stack.imgur.com/lNIZ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lNIZ6.png" alt="plot2" /></a></p>
<p>Just note that it is better to create a countplot to see how many observations we have in each subgroup, like this:</p>
<pre><code>plt.figure(figsize=(16, 8))
plot = sns.countplot(x=temp["Transported"])
plt.bar_label(plot.containers[0])
plt.show()
</code></pre>
<p>The result:
<a href="https://i.stack.imgur.com/7u8V8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7u8V8.png" alt="countplot" /></a></p>
|
python|pandas
| 0
|
4,914
| 72,228,086
|
Inconsistent indexing of subplots returned by `pandas.DataFrame.plot` when changing plot kind
|
<p>I know that this issue is known and was already discussed, but I am encountering a strange behaviour; maybe someone has an idea why.
When I run this:</p>
<pre><code>plot = df.plot(kind="box", subplots=True, layout= (7,4), fontsize=12, figsize=(20,40))
fig = plot[0].get_figure()
fig.savefig(str(path) + "/Graphics/" + "Boxplot_all.png")
plot
</code></pre>
<p>It works just fine.</p>
<p>When I change the plot kind to "line", it gives that known error... but why? I don't get it...</p>
<pre><code>plot = df.plot(kind="line", subplots=True, layout= (7,4), fontsize=12, figsize=(20,40))
fig = plot[0].get_figure()
fig.savefig(str(path) + "/Graphics/" + "Line_all.png")
plot
</code></pre>
<p>Thanks for your ideas and hints.
Cheers Dave</p>
|
<p>This seems to be an inconsistency in Pandas. According to their <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">docs</a>, indeed, the <code>DataFrame</code>'s method <code>.plot()</code> should return</p>
<blockquote>
<p><code>matplotlib.axes.Axes</code> or <code>numpy.ndarray</code> of them</p>
</blockquote>
<p>This is true if you choose the <code>kind="line"</code> option:</p>
<pre><code>>>> df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
columns=['a', 'b', 'c'])
>>> plot=df.plot(subplots=True, layout=(2,2),kind="line")
>>> type(plot)
numpy.ndarray
>>> plot.shape
(2, 2)
</code></pre>
<p>But not with <code>kind="box"</code>, where you get a pandas Series:</p>
<pre><code>>>> plot=df.plot(subplots=True, layout=(2,2),kind="box")
>>> type(plot)
pandas.core.series.Series
>>> plot.shape
(3,)
</code></pre>
<p>So, if using <code>kind="line"</code>, you have to access a 2D array, so you should use:</p>
<pre><code>fig = plot[0,0].get_figure()
</code></pre>
|
python|pandas|numpy|plot|figure
| 1
|
4,915
| 72,418,115
|
FuncAnimation how to update text after each iteration
|
<p>I am trying to create an animation of a Monte-Carlo estimation of the number pi. For each iteration I would like the numerical estimate to appear as text on the plot, but the previous text is not removed, which makes the values unreadable. I tried <code>Artist.remove(frame)</code> with no success. The plot is done in a Jupyter Notebook.</p>
<pre><code>#Enable interactive plot
%matplotlib notebook
import math
from matplotlib.path import Path
from matplotlib.animation import FuncAnimation
from matplotlib.path import Path
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from matplotlib.artist import Artist

N = 10000

#create necessary arrays
x = np.arange(0,N)
y = np.zeros(N)
#set initial points to zero
inHull = 0

def inCircle(point):
    #the function is given a point in R^n
    #returns a boolean stating if the norm of the point is smaller than 1.
    if np.sum(np.square(point)) <= 1:
        return True
    else:
        return False

#iterate over each point
for i in range(N):
    random_point = np.random.rand(2)*2 - 1
    #determine if the point is inside the hull
    if inCircle(random_point):
        inHull += 1
    #we store areas in array y.
    y[i] = (inHull*4)/(i+1)

fig = plt.figure()
ax = plt.subplot(1, 1, 1)
data_skip = 20

def init_func():
    ax.clear()
    plt.xlabel('n points')
    plt.ylabel('Estimated area')
    plt.xlim((x[0], x[-1]))
    plt.ylim((min(y)- 1, max(y)+0.5))

def update_plot(i):
    ax.plot(x[i:i+data_skip], y[i:i+data_skip], color='k')
    ax.scatter(x[i], y[i], color='none')
    Artist.remove(ax.text(N*0.6, max(y)+0.25, "Estimation: "+ str(round(y[i],5))))
    ax.text(N*0.6, max(y)+0.25, "Estimation: "+ str(round(y[i],5)))

anim = FuncAnimation(fig,
                     update_plot,
                     frames=np.arange(0, len(x), data_skip),
                     init_func=init_func,
                     interval=20)

plt.show()
</code></pre>
<p>Thank you.</p>
|
<p>As you have already done in <code>init_func</code>, you should clear the plot in each iteration with <code>ax.clear()</code>. Then it is necessary to edit the plot call slightly, so that the curve up to the current frame is redrawn after clearing:</p>
<pre><code>ax.plot(x[:i+data_skip], y[:i+data_skip], color='k')
</code></pre>
<p>And finally you have to fix x axis limits in each iteration with <code>ax.set_xlim(0, N)</code>.</p>
<h2>Complete Code</h2>
<pre><code>#Enable interactive plot
%matplotlib notebook
import math
from matplotlib.path import Path
from matplotlib.animation import FuncAnimation
from matplotlib.path import Path
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from matplotlib.artist import Artist

N = 10000

# create necessary arrays
x = np.arange(0, N)
y = np.zeros(N)
# set initial points to zero
inHull = 0

def inCircle(point):
    # the function is given a point in R^n
    # returns a boolean stating if the norm of the point is smaller than 1.
    if np.sum(np.square(point)) <= 1:
        return True
    else:
        return False

# iterate over each point
for i in range(N):
    random_point = np.random.rand(2)*2 - 1
    # determine if the point is inside the hull
    if inCircle(random_point):
        inHull += 1
    # we store areas in array y.
    y[i] = (inHull*4)/(i + 1)

fig = plt.figure()
ax = plt.subplot(1, 1, 1)
data_skip = 20
txt = ax.text(N*0.6, max(y) + 0.25, "")

def init_func():
    ax.clear()
    plt.xlabel('n points')
    plt.ylabel('Estimated area')
    plt.xlim((x[0], x[-1]))
    plt.ylim((min(y) - 1, max(y) + 0.5))

def update_plot(i):
    ax.clear()
    ax.plot(x[:i + data_skip], y[:i + data_skip], color = 'k')
    ax.scatter(x[i], y[i], color = 'none')
    ax.text(N*0.6, max(y) + 0.25, "Estimation: " + str(round(y[i], 5)))
    ax.set_xlim(0, N)

anim = FuncAnimation(fig,
                     update_plot,
                     frames = np.arange(0, len(x), data_skip),
                     init_func = init_func,
                     interval = 20)

plt.show()
</code></pre>
<h2>Animation</h2>
<p><a href="https://i.stack.imgur.com/e2XcP.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e2XcP.gif" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib|animation|math
| 1
|
4,916
| 72,386,928
|
Apply Function on Dataframe Returns RangeIndex Error
|
<p>Below is the code I'm trying to execute but I am getting the error:
<code>KeyError: 'None of [RangeIndex(start=0, stop=54, step=1)] are in the [columns]'</code></p>
<p>I've tried feeding in columns a few ways using dev_cols, feeding in RangeIndex identical to the index of the dataframe. I'm just a bit stuck.</p>
<pre class="lang-py prettyprint-override"><code>dev_sentences = []
dev_labels = []
for i, (label, words) in enumerate(read_from("dev.txt"), 1):
dev_sentences.append(words)
dev_labels.append(label)
dev_sentences = pd.DataFrame(dev_sentences)
dev_sentences = dev_sentences.applymap(str)
dev_cols = dev_sentences.columns
print(dev_cols)
print(dev_sentences)
dev = pd.DataFrame()
dev['dev_combined'] = dev[dev_sentences.columns].apply(lambda row: ' '.join(row.values.astype(str)), axis=1)
</code></pre>
|
<p>It is expected, because <code>dev</code> is an empty DataFrame, so selecting it by any column(s) fails:</p>
<pre><code>dev = pd.DataFrame()
dev[dev_sentences.columns]
</code></pre>
<p>Need:</p>
<pre><code>dev_sentences = pd.DataFrame(dev_sentences).astype(str)
dev = dev_sentences.copy()
dev['dev_combined'] = dev.apply(' '.join, axis=1)
</code></pre>
<p>Or use <code>dev_sentences</code> instead <code>dev</code>:</p>
<pre><code>dev_sentences = pd.DataFrame(dev_sentences).astype(str)
dev_sentences['dev_combined'] = dev_sentences.apply(' '.join, axis=1)
</code></pre>
<p>instead:</p>
<pre><code>dev_sentences = pd.DataFrame(dev_sentences)
dev_sentences = dev_sentences.applymap(str)
dev = pd.DataFrame()
dev['dev_combined'] = dev[dev_sentences.columns].apply(lambda row: ' '.join(row.values.astype(str)), axis=1)
</code></pre>
|
python|pandas
| 0
|
4,917
| 50,252,761
|
How to count comma separated repeated values in a pandas column?
|
<p>I have a dataframe column like this:</p>
<pre><code>1 Applied Learning, Literacy & Language
2 Literacy & Language, Special Needs
3 Math & Science, Literacy & Language
4 Literacy & Language, Math & Science
6 Math & Science, Applied Learning
7 Applied Learning
8 Literacy & Language
10 Math & Science...
</code></pre>
<p>There are comma separated values in each row. What I want is to count the occurrence of all the unique values. For example, Math & Science appears 4 times, so the count for Math & Science should be 4. I tried the following code:</p>
<pre><code>cato=response['Category'].str.split(',')
cat_set=[]
for i in cato.dropna():
cat_set.extend(i)
plt1=pd.Series(cat_set).value_counts().sort_values(ascending=False).to_frame()
</code></pre>
<p>But the problem is, this code works for small datasets, but it takes a lot of time for a large dataset. Any solutions for this?</p>
<p>Thanks</p>
|
<p>Try using <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="noreferrer"><code>collections.Counter</code></a>, which is built specifically for high performance of tasks like this one.</p>
<p>Say you start with</p>
<pre><code>df = pd.DataFrame({'Category': ['Applied Learning, Literacy & Language', 'Literacy & Language, Special Needs']})
</code></pre>
<p>then do</p>
<pre><code>import collections
import itertools
>>> collections.Counter(itertools.chain.from_iterable(v.split(',') for v in df.Category))
Counter({' Literacy & Language': 1,
' Special Needs': 1,
'Applied Learning': 1,
'Literacy & Language': 1})
</code></pre>
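<p>Note that splitting on <code>','</code> keeps the leading space, so <code>' Literacy & Language'</code> and <code>'Literacy & Language'</code> are counted separately. A small sketch that strips each token first:</p>
<pre><code>collections.Counter(token.strip()
                    for v in df.Category
                    for token in v.split(','))
</code></pre>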
|
python|string|pandas|dataframe
| 5
|
4,918
| 50,451,793
|
Pandas pivot table selecting rows with maximum values
|
<p>I have pandas dataframe as:</p>
<pre><code>df
Id Name CaseId Value
82 A1 case1.01 37.71
1558 A3 case1.01 27.71
82 A1 case1.06 29.54
1558 A3 case1.06 29.54
82 A1 case1.11 12.09
1558 A3 case1.11 32.09
82 A1 case1.16 33.35
1558 A3 case1.16 33.35
</code></pre>
<p>For each Id, Name pair I need to select the CaseId with maximum value.</p>
<p>i.e. I am seeking the following output:</p>
<pre><code>Id Name CaseId Value
82 A1 case1.01 37.71
1558 A3 case1.16 33.35
</code></pre>
<p>I tried the following:</p>
<pre><code>import numpy as np
import pandas as pd
pd.pivot_table(df, index=['Id', 'Name'], columns=['CaseId'], values=['Value'], aggfunc=[np.max])['amax']
</code></pre>
<p>But all it does is for each <code>CaseId</code> as column it gives maximum value and not the results that I am seeking above.</p>
|
<p><code>sort_values</code> + <code>drop_duplicates</code></p>
<pre><code>df.sort_values('Value').drop_duplicates(['Id'],keep='last')
Out[93]:
Id Name CaseId Value
7 1558 A3 case1.16 33.35
0 82 A1 case1.01 37.71
</code></pre>
<p>Since we posted at about the same time, adding one more method:</p>
<pre><code>df.sort_values('Value').groupby('Id').tail(1)
Out[98]:
Id Name CaseId Value
7 1558 A3 case1.16 33.35
0 82 A1 case1.01 37.71
</code></pre>
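<p>Another option (a sketch, assuming <code>Value</code> has no NaNs) is to pick the row of the maximum per <code>Id</code> directly with <code>idxmax</code>:</p>
<pre><code>df.loc[df.groupby('Id')['Value'].idxmax()]
Out[99]:
     Id Name    CaseId  Value
0    82   A1  case1.01  37.71
7  1558   A3  case1.16  33.35
</code></pre>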
|
pandas|python-3.5
| 5
|
4,919
| 62,824,004
|
Expected input batch_size (32) to match target batch_size (19840) BERT Classifier
|
<p>I ran into this error with code:</p>
<pre><code>model = BertForSequenceClassification.from_pretrained("pretrained/", num_labels=ohe_count)
model.to(device)

from IPython.display import clear_output

train_loss_set = []
train_loss = 0

model.train()
for step, batch in enumerate(train_dataloader):
    # move the batch to the GPU
    batch = tuple(t.to(device) for t in batch)
    # unpack the inputs from the dataloader
    b_input_ids, b_input_mask, b_labels = batch
    b_input_ids = b_input_ids.type(torch.LongTensor)
    b_input_mask = b_input_mask.type(torch.LongTensor)
    b_labels = b_labels.type(torch.LongTensor)
    b_input_ids = b_input_ids.to(device)
    b_input_mask = b_input_mask.to(device)
    b_labels = b_labels.to(device)
    optimizer.zero_grad()
    # Forward pass
    loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
    train_loss_set.append(loss[0].item())
    # Backward pass
    loss[0].backward()
    optimizer.step()
    train_loss += loss[0].item()

    clear_output(True)
    plt.plot(train_loss_set)
    plt.title("Training loss")
    plt.xlabel("Batch")
    plt.ylabel("Loss")
    plt.show()

# observed shapes:
# b_input_ids.shape = torch.Size([32, 100])
# b_labels.shape = torch.Size([32, 620])
</code></pre>
<p>Shapes seem fine, but I am getting the error <code>Expected input batch_size (32) to match target batch_size (19840)</code></p>
|
<p>Never mind, I was trying to do multi-label classification, but BertForSequenceClassification can't do that.</p>
|
python|deep-learning|nlp|pytorch
| 0
|
4,920
| 62,565,272
|
How to split a column of dictionary type into two different pandas column of different type?
|
<p>I have a dataframe with 2 columns (plus index) like this, it has around 14,000 lines.</p>
<pre><code>Employee | RecordID
{'Id': 185, 'Title': 'Full Name'} | 9
</code></pre>
<p>I'd like to split the columns like this:</p>
<pre><code>Id | Title | RecordID
185 | 'Full Name' | 9
</code></pre>
<p>I tried to use this solution:</p>
<pre><code>df2 = pd.DataFrame(data_df["Employee"].values.tolist(), index=data_df.index) <- error
data_df = pd.concat([data_df, df2], axis = 1).drop(column, axis = 1)
</code></pre>
<p>but it gives this error on the <code>df2</code> line</p>
<pre><code> *** AttributeError: 'float' object has no attribute 'keys'
</code></pre>
<p>I have 2 theories: one that it's because i have different column types in the employee dictionary, and two: there are 3 records that have an empty employee id, like this:</p>
<pre><code>Employee | RecordID
nan | 7051
</code></pre>
<p>I need to keep those 3 records without an employee record and show their <code>record Id</code>, and in the final <code>data_df</code> show empty columns for employee id and employee name.</p>
<p>So in summary:</p>
<p>INPUT</p>
<pre><code>Employee | RecordID
{'Id': 185, 'Title': 'Full Name'} | 9
nan | 7051
</code></pre>
<p>EXPECTED OUTPUT</p>
<pre><code>Id | Title | RecordID
185 | 'Full Name' | 9
nan | nan | 7051
</code></pre>
<p>I made it work using <code>data_df["Employee"].apply(pd.Series)</code> but it's painfully slow.</p>
<p>Is there a way, <em>not</em> using <code>pd.Series</code>, to split a column of dictionaries, where the dictionary has different column types and NaN values, into separate columns of the parent pandas dataframe?</p>
<p>Thanks,</p>
|
<p>You can do</p>
<pre><code>data_df1= data_df.dropna()
df2 = pd.DataFrame(data_df1["Employee"].values.tolist(), index= data_df1.index)
data_df=data_df.join(df2,how='left')
</code></pre>
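<p>To match the expected output exactly, the original <code>Employee</code> column can be dropped after the join (a small follow-up sketch):</p>
<pre><code>data_df = data_df.join(df2, how='left').drop(columns='Employee')
</code></pre>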
|
python|pandas|dataframe|dictionary
| 2
|
4,921
| 62,651,300
|
pandas groupby and countif in multiple columns
|
<p>I have the following df</p>
<pre><code>import pandas as pd
# -- create a dataframe
list_columns = ['pet', 'grade', 'class']
list_data = [
['dog', 'A', 'A'],
['cat', 'A', 'C'],
['dog', 'B', 'E'],
['mouse', 'C', 'A'],
['dog', 'A', 'B'],
['cat', 'B', 'E'],
['dog', 'C', 'D'],
['dog', 'A', 'C'],
]
df_animals = pd.DataFrame(columns=list_columns, data=list_data)
df_animals.head()
</code></pre>
<p>I want for each pet to count how many <code>'A','B','C','D','E'</code> are in the column <code>grade</code> and how many in <code>class</code>.</p>
<p>Expected output would be</p>
<pre><code>pet status grade class
dog A 3 1
dog B 1 1
dog C 0 1
dog D 0 0
dog E 0 1
cat A 1 0
cat B 0 0
cat C 0 1
cat D 0 0
cat E 0 0
mouse A 0 1
mouse B 0 0
mouse C 1 0
mouse D 0 0
mouse E 0 0
</code></pre>
<p>I tried to group and count by a specific item, but it does not work.
One idea was to count the A,B,C,D,E for each pet, but that would be manual and I don't think it's a good approach.
Can someone tell me how I should proceed?</p>
<pre><code>df_animals.groupby('grade').apply(lambda x: (x=='A').count())
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a> for reshape and then add missing categories by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p>
<pre><code>df = (df_animals.melt('pet')
.pivot_table(index=['pet','value'],
columns='variable',
aggfunc='size',
fill_value=0)
.rename_axis(None, axis=1))
df = df.reindex(pd.MultiIndex.from_product(df.index.levels), fill_value=0).reset_index()
print (df)
pet value class grade
0 cat A 0 1
1 cat B 0 1
2 cat C 1 0
3 cat D 0 0
4 cat E 1 0
5 dog A 1 3
6 dog B 1 1
7 dog C 1 1
8 dog D 1 0
9 dog E 1 0
10 mouse A 1 0
11 mouse B 0 0
12 mouse C 0 1
13 mouse D 0 0
14 mouse E 0 0
</code></pre>
|
python|pandas
| 1
|
4,922
| 54,396,178
|
Creating new pandas dataframe based on existing columns with duplicates
|
<p>I have a pandas dataframe that looks like</p>
<pre><code>Event Person Data
Event1 Person1 Data1
Event1 Person2 Data2
Event1 Person3 Data3
Event2 Person1 Data4
Event2 Person2 Data5
Event2 Person3 Data6
</code></pre>
<p>and so on. I would like to create a new dataframe where the rows are the people (removing duplicates) and the columns are the events, and each cell is the particular datapoint corresponding to that event and that person. I can obviously just create a new dataframe and populate it iterating through the rows and columns of that dataframe, but I'm wondering if there's a slicker way?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">df.pivot()</a></p>
<pre><code>df.pivot(columns='Event', values='Data', index='Person')
</code></pre>
<p>Output:</p>
<pre><code>Event Event1 Event2
Person
Person1 Data1 Data4
Person2 Data2 Data5
Person3 Data3 Data6
</code></pre>
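<p>If a (Person, Event) pair can occur more than once, <code>df.pivot</code> raises an error about duplicate entries; in that case <code>pivot_table</code> with an aggregation function is a possible fallback (a sketch, here keeping the first value per pair):</p>
<pre><code>df.pivot_table(index='Person', columns='Event', values='Data', aggfunc='first')
</code></pre>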
|
python|pandas|dataframe
| 1
|
4,923
| 54,457,788
|
Double backslashes for filepath_or_buffer with pd.read_csv
|
<p>Python 3.6, OS Windows 7</p>
<p>I am trying to read a .txt file using <code>pd.read_csv()</code> with a relative filepath. From the <code>pd.read_csv()</code> API I checked that the filepath argument can be any valid string path.</p>
<p>So, in order to define the relative path I use pathlib module. I have defined the relative path as:</p>
<pre><code>df_rel_path = pathlib.Path.cwd() / ("folder1") / ("folder2") / ("file.txt")
a = str(df_rel_path)
</code></pre>
<p>Finally, I just want to use it to feed <code>pd.read_csv()</code> as:</p>
<pre><code>df = pd.read_csv(a, engine = "python", sep = "\s+")
</code></pre>
<p>However, I am just getting an error stating "No such file or directory: ..." showing double backslashes on the folder path.</p>
<p>I have tried to manually write the path on pd.read_csv() using a raw string, that is, using <code>r"relative/path"</code>. However, I am still getting the same result, double backslashes. Is there something I am overlooking?</p>
|
<p>You need a filename to call pd.read_csv. In the example, 'a' is only the path and does not point to a specific file. You could do something like this:</p>
<pre><code>df_rel_path = pathlib.Path.cwd() / ("folder1") / ("folder2")
a = str(df_rel_path)
df = pd.read_csv(a+'/' +'filename.txt')
</code></pre>
<p>With the filename your code works for me (on Windows 10):</p>
<pre><code>df_rel_path = pathlib.Path.cwd() / ("folder1") / ("folder2")/ ("file.txt")
a = str(df_rel_path)
df = pd.read_csv(a)
</code></pre>
|
python-3.x|pandas|pathlib
| 0
|
4,924
| 73,525,998
|
How do I restore the normal color after blur-filtering this image with a NP matrix?
|
<p>I managed to blur an image using only a NP matrix, but for some reason can't restore the normal color channels to it. If someone can give me guidance on how to fix this without calling me an idiot, that would be appreciated.</p>
<pre><code>def convolve(image,kernel):
image_copy = image.copy()
height=image_copy.shape[0]
width=image_copy.shape[1]
width_kernel=len(kernel)
final_img = [ [0]*width for i in range(height)]
# print(outAr)
for i in range(width_kernel,image_copy.shape[0]-width_kernel):
for j in range(width_kernel,image_copy.shape[1]-width_kernel):
avg_square = image_copy[i-width_kernel:i+width_kernel+1, j-width_kernel:j+width_kernel+1]
avg = np.mean(avg_square,dtype=np.float32)
final_img[i][j] = int(avg)
return final_img
kernel=np.ones((9,9))*1/9
plt.imshow(convolve(image,kernel))
</code></pre>
<p><a href="https://i.stack.imgur.com/XenxV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XenxV.png" alt="enter image description here" /></a></p>
|
<p>First of all, you are not an idiot and don't take anyone seriously who calls you that. You made a strong first attempt at blurring an image and did a good job for the first try. You are almost there so be proud. To fix this, you should separate the Red, Green, and Blue channels before you do any blurring. Blur them independently and then recombine. I would recommend a few changes to make it easier.</p>
<p>Firstly, it will make life easier to make the image a numpy array right after you read the image in. I'm assuming you already did this, but it isn't shown.</p>
<pre><code>image_rgb = np.array(image)
</code></pre>
<p>Second, separate the R,G,B channels.</p>
<pre><code>r = image_rgb[:,:,0]
g = image_rgb[:,:,1]
b = image_rgb[:,:,2]
</code></pre>
<p>From here, perform your convolution function on each channel. Then recombine the blurred channels and you have a blurred RGB image.</p>
<pre><code>blurred_r = convolve(r, kernel)
blurred_g = convolve(g, kernel)
blurred_b = convolve(b, kernel)
blurred_image_rgb = np.dstack([blurred_r, blurred_g, blurred_b])
</code></pre>
<p>There are some edge effects from your convolution, I'm not sure if that's intended. However, you seem like the type that will be able to figure that out if you need to.</p>
|
python|numpy|colors|numpy-ndarray|array-broadcasting
| 1
|
4,925
| 73,637,365
|
How to convert a json object into a dataframe when arrays are of different lengths?
|
<p>I am trying to convert a json format into a dataframe but am getting an error saying "All arrays must be of the same length".
Below is the code. Any advice is highly appreciated.</p>
<p>I have the following code:</p>
<pre><code>import pandas as pd
import requests
import json
from pandas import json_normalize
resp = requests.get("https://unstats.un.org/SDGAPI/v1/sdg/Indicator/Data?indicator=7.2.1")
resp.json()
pd.DataFrame(resp.json())
ValueError: All arrays must be of the same length
</code></pre>
|
<p>Here is how to convert the entire json using Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.from_dict.html" rel="nofollow noreferrer">from_dict</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.transpose.html" rel="nofollow noreferrer">transpose</a> (T) methods:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import requests
resp = requests.get(
"https://unstats.un.org/SDGAPI/v1/sdg/Indicator/Data?indicator=7.2.1"
)
df = pd.DataFrame.from_dict(resp.json(), orient="index").T
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(df.info())
# Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1 entries, 0 to 0
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 size 1 non-null object
1 totalElements 1 non-null object
2 totalPages 1 non-null object
3 pageNumber 1 non-null object
4 attributes 1 non-null object
5 dimensions 1 non-null object
6 data 1 non-null object
dtypes: object(7)
memory usage: 184.0+ bytes
</code></pre>
<p>But if you are interested in specific values of your json, which, I presume, are the ones associated with the key <code>data</code>, then simply do:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(resp.json()["data"])
print(df.info())
# Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 25 entries, 0 to 24
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 goal 25 non-null object
1 target 25 non-null object
2 indicator 25 non-null object
3 series 25 non-null object
4 seriesDescription 25 non-null object
5 seriesCount 25 non-null object
6 geoAreaCode 25 non-null object
7 geoAreaName 25 non-null object
8 timePeriodStart 25 non-null float64
9 value 25 non-null object
10 valueType 25 non-null object
11 time_detail 0 non-null object
12 timeCoverage 0 non-null object
13 upperBound 0 non-null object
14 lowerBound 0 non-null object
15 basePeriod 0 non-null object
16 source 25 non-null object
17 geoInfoUrl 0 non-null object
18 footnotes 25 non-null object
19 attributes 25 non-null object
20 dimensions 25 non-null object
dtypes: float64(1), object(20)
memory usage: 4.2+ KB
</code></pre>
|
python-3.x|pandas|dataframe|python-requests
| 1
|
4,926
| 71,241,648
|
Get max time of grouped by time series dataframe
|
<p>I have a dataframe as follows:</p>
<pre><code>Date User Tag
2-22-2022 09:00:00 u1 a
2-22-2022 10:00:00 u1 b
2-22-2022 11:00:00 u2 c
2-23-2022 09:00:00 u1 a
2-23-2022 10:00:00 u2 b
</code></pre>
<p>I want to create, for each user, a column with the time difference between consecutive records of that user.</p>
<p>Something like:</p>
<pre><code>df["diff"] = df.groupby("user")["StartT"].diff().shift(-1)
Date User Tag diff
2-22-2022 09:00:00 u1 a 1 hour
2-22-2022 10:00:00 u1 b 23 hours
2-22-2022 11:00:00 u2 c 23 hours
2-23-2022 09:00:00 u1 a NaN
2-23-2022 10:00:00 u2 b NaN
</code></pre>
<p>What I want to do is get, for each user (daily), and for each tag, the tag the user spent the most time in.</p>
<p>Output:</p>
<pre><code>Date User Tag
2-22-2022 10:00:00 u1 b
2-22-2022 11:00:00 u2 c
2-23-2022 09:00:00 u1 a
2-23-2022 10:00:00 u2 b
</code></pre>
<p>I tried <code>groupby(user, date(1 day), tag)['diff'].sum().idxmax()</code>, is that the right approach?</p>
<p>There might be multiple tags per day/user, which is why I'm grouping by tag.</p>
|
<p>First of all, I had to split your calculation of the "diff" column into two steps to reach the same output as you:</p>
<pre><code>>>> df["diff"] = df.groupby("User")["Date"].diff()
>>> df["diff"] = df.groupby("User")["diff"].shift(-1)
</code></pre>
<p>We'll also fill the NaN values on the "diff" columns by calculating the hours remaining to end the day (useful at the end of your data).</p>
<pre><code>>>> df["diff"] = df["diff"].fillna(df["Date"].dt.date + pd.DateOffset(days=1) - df["Date"])
>>> df
Date User Tag diff
0 2022-02-22 09:00:00 u1 a 0 days 01:00:00
1 2022-02-22 10:00:00 u1 b 0 days 23:00:00
2 2022-02-22 11:00:00 u2 c 0 days 23:00:00
3 2022-02-23 09:00:00 u1 a 0 days 15:00:00
4 2022-02-23 10:00:00 u2 b 0 days 14:00:00
</code></pre>
<p>Now, we apply <code>.groupby</code> to group by "User", day (using <code>df["Date"].dt.date</code>), and "Tag" to calculate the total time spent on each "Tag":</p>
<pre><code>>>> times = pd.to_datetime(df["Date"])
>>> df_total_diff = df.dropna().groupby(["User", times.dt.date, "Tag"])["diff"].sum().reset_index()
>>> df_total_diff
User Date Tag diff
0 u1 2022-02-22 a 0 days 01:00:00
1 u1 2022-02-22 b 0 days 23:00:00
2 u1 2022-02-23 a 0 days 15:00:00
3 u2 2022-02-22 c 0 days 23:00:00
4 u2 2022-02-23 b 0 days 14:00:00
</code></pre>
<p>Finally, we can group by "User" and "Date" to find the most consumed "Tag":</p>
<pre><code>>>> df_output = df.loc[df_total_diff.groupby(["User", "Date"])["diff"].idxmax()]
>>> df_output
Date User Tag
1 2022-02-22 10:00:00 u1 b
2 2022-02-22 11:00:00 u2 c
3 2022-02-23 09:00:00 u1 a
4 2022-02-23 10:00:00 u2 b
</code></pre>
|
pandas
| 0
|
4,927
| 60,710,211
|
Neural network classification loss from validation set: Does it update anything dynamically
|
<p>I'm attempting to study up a bit on the theory of training neural networks, and right now I have gotten to validation sets. </p>
<p>Now, I can understand that a validation set gives us a loss-index, which helps us in knowing whether we are overfitting or not. But when I read in books and and watch videos, everyone seems to express themselves in a manner that is a bit ambiguous. </p>
<p>Does the model update itself manually when the validation set is being run? can layers, weights, biases, or net-amounts of neurons be updated "automatically" when it is being validated?</p>
<p>Thank you very much</p>
|
<p><code>Loss</code> and <code>Accuracy</code> refer to the current loss and accuracy of the training set.</p>
<p>The <code>Loss</code> is what is minimized during backpropagation over each epoch in order to improve the <code>Accuracy</code>. </p>
<p>At the end of each epoch your trained Neural Network is evaluated against your validation set; no layers, weights, biases or neuron counts are updated during this evaluation. This is what <code>Validation Loss</code> and <code>Validation Accuracy</code> refer to.</p>
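<p>As a minimal, self-contained sketch (assuming a Keras-style workflow with made-up random data): the weights are updated only from the training batches, while the validation set is merely evaluated after each epoch.</p>
<pre><code>from tensorflow import keras
import numpy as np

# hypothetical data: 100 training samples, 20 validation samples, 8 features
x_train, y_train = np.random.rand(100, 8), np.random.randint(0, 2, 100)
x_val, y_val = np.random.rand(20, 8), np.random.randint(0, 2, 20)

model = keras.Sequential([
    keras.layers.Dense(4, activation='relu', input_shape=(8,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# validation_data is only evaluated; it never produces weight updates
history = model.fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
print(history.history['val_loss'])
</code></pre>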
|
tensorflow|artificial-intelligence
| 0
|
4,928
| 72,796,183
|
how to use keras demo code siamese_contrastive.py to use a custom dataset?
|
<p>I am following this example <a href="https://keras.io/examples/vision/siamese_contrastive/" rel="nofollow noreferrer">Image similarity estimation using a Siamese Network with a contrastive loss</a>.</p>
<p>The given code snippet reads directly from <code>keras.datasets.mnist.load_data()</code>.</p>
<p>I am trying to adapt this example and trying to feed a <strong>new dataset</strong>. I have a directory where I kept examples of positive and negative images.</p>
<p>So I have three directories of <code>anchor</code>, <code>positive</code>, and <code>negative</code> examples, i.e.:</p>
<pre><code>anchor/
1.jpg
2.jpg
3.jpg
....
</code></pre>
<pre><code>positive/
1.jpg
2.jpg
3.jpg
....
</code></pre>
<pre><code>negative/
1.jpg
2.jpg
3.jpg
....
</code></pre>
<p>How could I feed this dataset into this model?</p>
<p>For my case I have to get embedding of the images before feeding into the model. Any idea is welcome.</p>
|
<p>You can load images like this :</p>
<pre><code>anchor_images = sorted([str(anchor_images_path / f) for f in os.listdir(anchor_images_path)])
</code></pre>
<p>Same for positives and negatives.</p>
<p><strong>Detailed example here:</strong></p>
<p><a href="https://keras.io/examples/vision/siamese_network/" rel="nofollow noreferrer">https://keras.io/examples/vision/siamese_network/</a></p>
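<p>A minimal sketch of turning those three folders into a <code>tf.data</code> dataset of (anchor, positive, negative) triplets; the folder names, image size and JPEG format are assumptions taken from the question, not part of the official example:</p>
<pre><code>import os
import tensorflow as tf

def preprocess(path):
    # read a JPEG file and scale it to [0, 1]
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (200, 200))
    return img / 255.0

anchor_paths = sorted(os.path.join('anchor', f) for f in os.listdir('anchor'))
positive_paths = sorted(os.path.join('positive', f) for f in os.listdir('positive'))
negative_paths = sorted(os.path.join('negative', f) for f in os.listdir('negative'))

dataset = tf.data.Dataset.from_tensor_slices((anchor_paths, positive_paths, negative_paths))
dataset = dataset.map(lambda a, p, n: (preprocess(a), preprocess(p), preprocess(n)))
dataset = dataset.batch(32)
</code></pre>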
|
python|tensorflow|keras|deep-learning|siamese-network
| 0
|
4,929
| 72,747,918
|
Nested dictionary to Dataframe in the most efficient way possible
|
<p>I have a nested dictionary like this one:</p>
<pre><code>my_dict[user_profile][user_id][level] = [[9999, 'Heavy Purchaser', 340, 'Star_chest', 999, 1000],
[9999, 'Heavy Purchaser', 340, 'Star_chest', 998, 5],
[9999, 'Heavy Purchaser', 340, 'Star_chest', 3, 1],
[9999, 'Heavy Purchaser', 340, 'Star_chest', 4, 1]]
</code></pre>
<p>Basically, per each user_profile, user_id I'm collecting the rewards received per level.
The number of lists contained in <code>dict[user_profile][user_id][level]</code> is variable and not fixed.</p>
<p>A reward looks like this : <code>[9999, 'Heavy Purchaser', 340, 'Star_chest', 999, 1000]</code></p>
<p>I want to create a DF of rewards using the most efficient and fastest solution.
In the end this is what I want:</p>
<pre><code> ID user_profile user_id Chest_type item_code amount
9999 'Heavy Purchaser' 340 'Star_chest' 999 1000
9999 'Heavy Purchaser' 340 'Star_chest' 4 1
9999 'Heavy Purchaser' 340 'Star_chest' 3 1
</code></pre>
<p>I tried to append each single list using <code>df.loc[df.shape[0]] = list_with_rewards</code>, but it's taking too much time. Any suggestion ?</p>
|
<p>The data that you are starting with is <em>not</em> a nested dictionary, it is just a nested list. You may want to consider transitioning to a nested dictionary that would seem to make more sense for the type of data you are gathering... But that is another question. :)</p>
<p>In <code>pandas</code>, generally the last thing you want to do is add to a data frame row by row, or do anything row by row in general. If you look through the docs for data frame, there are several ways to create one from data, based on data structure or file type and data orientation. Your data is a "list of lists" where each list can be interpreted as a "record" or one row in a dataframe or database. So, you can just use the <code>from_records()</code> construct. Behold:</p>
<pre><code>In [7]: import pandas as pd
In [8]: data = [[9999, 'Heavy Purchaser', 340, 'Star_chest', 999, 1000],
...: [9999, 'Heavy Purchaser', 340, 'Star_chest', 998, 5],
...: [9999, 'Heavy Purchaser', 340, 'Star_chest', 3, 1],
...: [9999, 'Heavy Purchaser', 340, 'Star_chest', 4, 1]]
In [9]: type(data)
Out[9]: list
In [10]: pd.DataFrame.from_records(data, columns=['ID', 'user', 'user_id', 'chest', 'count', 'amount'])
Out[10]:
ID user user_id chest count amount
0 9999 Heavy Purchaser 340 Star_chest 999 1000
1 9999 Heavy Purchaser 340 Star_chest 998 5
2 9999 Heavy Purchaser 340 Star_chest 3 1
3 9999 Heavy Purchaser 340 Star_chest 4 1
</code></pre>
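<p>If the rewards really sit inside the nested dictionary described in the question, a hedged sketch is to flatten it into a single list of records first and build the frame once at the end (the loop assumes the <code>my_dict[user_profile][user_id][level]</code> layout from the question):</p>
<pre><code>import pandas as pd

records = []
for user_profile, users in my_dict.items():
    for user_id, levels in users.items():
        for level, rewards in levels.items():
            records.extend(rewards)  # each reward is already a flat 6-element list

df = pd.DataFrame.from_records(
    records,
    columns=['ID', 'user_profile', 'user_id', 'Chest_type', 'item_code', 'amount'])
</code></pre>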
|
python|pandas|dictionary|optimization|nested
| 1
|
4,930
| 72,610,478
|
Problem using Pandas for joining dataframes
|
<p>I am trying to join a lot of CSV files into a single dataframe after doing some conversions and filters, when I use the append method for the sn2 dataframe, the exported CSV contains all the data I want, however when I use the append method for the sn3 dataframe, only the data from the last CSV is exported, what am I missing?</p>
<pre><code>sn2=pd.DataFrame()
sn3=pd.DataFrame()
files=os.listdir(load_path)
for file in files:
df_temp=pd.read_csv(load_path+file)
df_temp['Date']=file.split('.')[0]
df_temp['Date']=pd.to_datetime(df_temp['Date'],format='%Y%m%d%H%M')
filter1=df_temp['Name']=='Atribute1'
temp1=df_temp[filter1]
sn2=sn2.append(temp1)
filter2=df_temp['Name']=='Atribute2'
temp2=df_temp[filter2]
sn3=pd.concat([temp2])
</code></pre>
|
<p>You have to pass <em>all</em> the dataframes that you want to concatenate to <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>sn3 = pd.concat([sn3, temp2])
</code></pre>
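<p>Since appending or concatenating inside the loop rebuilds the frame on every iteration, a hedged variant is to collect the filtered pieces in plain lists and concatenate once at the end (same variable and column names as in the question):</p>
<pre class="lang-py prettyprint-override"><code>sn2_parts, sn3_parts = [], []
for file in os.listdir(load_path):
    df_temp = pd.read_csv(load_path + file)
    df_temp['Date'] = pd.to_datetime(file.split('.')[0], format='%Y%m%d%H%M')
    sn2_parts.append(df_temp[df_temp['Name'] == 'Atribute1'])
    sn3_parts.append(df_temp[df_temp['Name'] == 'Atribute2'])

sn2 = pd.concat(sn2_parts, ignore_index=True)
sn3 = pd.concat(sn3_parts, ignore_index=True)
</code></pre>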
|
python|pandas|concatenation
| 1
|
4,931
| 59,666,834
|
Joining dataframe based on ranges
|
<p>I would like to left join one dataframe to another based on whether the values in the left data frame occur between a specified range indicated in the right dataframe:</p>
<pre><code>df1 = pd.DataFrame()
df2 = pd.DataFrame()
df1['col1'] = ['A', 'B', 'C', 'D','E']
df1['col2'] = ['alpha', 'beta', 'gamma', 'delta','epsilon']
df1['min'] = [0, 15, 20, 90, 100]
df1['max'] = [15, 20, 90, 100, 200]
df2['x'] = np.linspace(0,199, 6)
</code></pre>
<p>My desired result is:</p>
<pre><code> x col1 col2
0 0.0 'A' 'alpha'
1 39.8 'C' 'gamma'
2 79.6 'C' 'gamma'
3 119.4 'E' 'epsilon'
4 159.2 'E' 'epsilon'
5 199.0 'E' 'epsilon'
</code></pre>
<p>Does anyone know of a simple way to achieve this? Perhaps using the <code>merge</code>, <code>join</code> or <code>apply</code> methods?</p>
<h2>Edit</h2>
<p>I have just edited my question to reflect more what is needed. I would like solutions which will not require me to explicitly type out every single non-range column in <code>df1</code> (i.e. <code>col1</code>, <code>col2</code> ... <code>coln</code>) as there will be too many columns to do this.</p>
|
<p>Here is another from <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IntervalIndex.html" rel="nofollow noreferrer"><code>IntervalIndex</code></a> :</p>
<p>Note: <code>min</code> and <code>max</code> are methods (in your df they are column names), so be careful if you use <code>.</code> (dot) notation.</p>
<pre><code>s = pd.IntervalIndex.from_arrays(df1['min'],df1['max'], 'left')
df2 = df2.assign(**df1.set_index(s).loc[df2['x'],['col1','col2']].reset_index(drop=True))
</code></pre>
<p>Or better, use <code>reindex</code> to handle missing values:</p>
<pre><code>s = pd.IntervalIndex.from_arrays(df1['min'],df1['max'], 'left')
df1.set_index(s).reindex(df2['x']).loc[:,['col1','col2']].reset_index()
</code></pre>
<hr>
<pre><code>print(df2)
x col1 col2
0 0.0 A alpha
1 39.8 C gamma
2 79.6 C gamma
3 119.4 E epsilon
4 159.2 E epsilon
5 199.0 E epsilon
</code></pre>
|
python|pandas
| 2
|
4,932
| 59,687,567
|
How to perform Kneser-Ney smoothing in NLTK at word-level for tri-gram language model?
|
<p>I am trying to train a tri-gram language model on a text corpus and want to perform KN smoothing. Apparently, the 'nltk.trigrams' does this at character-level. I was wondering how I would be able to do this at word-level and also perform KN smoothing. Here is a piece of code that I wrote and doesn't work: </p>
<pre><code> with open('file.txt',"r",encoding = "ISO-8859-1") as ff:
text = ff.read()
word_tok = tknzr.tokenize(text)
ngrams = nltk.trigrams(word_tok)
freq_dist = nltk.FreqDist(ngrams)
kneser_ney = nltk.KneserNeyProbDist(freq_dist)
print(kneser_ney.prob('you go to'))
</code></pre>
<p>I get the error:</p>
<pre><code> Expected an iterable with 3 members.
</code></pre>
|
<p>Replace the line:</p>
<pre><code>print(kneser_ney.prob('you go to'))
</code></pre>
<p>with:</p>
<pre><code>print(kneser_ney.prob('you go to'.split()))
</code></pre>
<p>Then it works ok. I get a value of 0.05217391304347826 when using as a training file the text of the novel "Moby Dick" downloaded from Project Gutenberg.</p>
<p>With this modification, your code becomes analogous to the following:</p>
<pre><code>with open('./txts/mobyDick.txt') as ff:
text = ff.read()
from nltk import word_tokenize,trigrams
from nltk import FreqDist, KneserNeyProbDist
word_tok = word_tokenize(text)
ngrams = trigrams(word_tok)
freq_dist = FreqDist(ngrams)
kneser_ney = KneserNeyProbDist(freq_dist)
print(kneser_ney.prob('you go to'.split()))
</code></pre>
<p>Also, everything here is being done at the word level and not at the character level:</p>
<pre><code>ngrams = trigrams(word_tok)
for _ in range(0,10):
print(next(ngrams))
#('\ufeff', 'The', 'Project')
#('The', 'Project', 'Gutenberg')
#('Project', 'Gutenberg', 'EBook')
#('Gutenberg', 'EBook', 'of')
#('EBook', 'of', 'Moby')
#('of', 'Moby', 'Dick')
#('Moby', 'Dick', ';')
#('Dick', ';', 'or')
#(';', 'or', 'The')
#('or', 'The', 'Whale')
</code></pre>
<p>The frequency distribution is also at word level:</p>
<pre><code>freq_dist.freq(tuple('on the ocean'.split()))
#7.710783916846906e-06
freq_dist.freq(tuple('new Intel CPU'.split()))
#0.0
</code></pre>
|
python|nlp|nltk|pytorch|trigram
| 0
|
4,933
| 61,966,029
|
Convert text to binary columns
|
<p>I have a column in my dataframe that contains many different companies separated by commas (assume there are additional rows with even more companies).</p>
<pre><code>company
apple,microsoft,disney,nike
microsoft,adidas,amazon,eBay
</code></pre>
<p>I want to convert this to binary columns for every possible company that appears. It should ultimately look like this:</p>
<pre><code>adidas apple amazon eBay disney microsoft nike ... last_store
0 1 0 0 1 1 1 ... 0
1 0 1 1 0 1 0 ... 0
</code></pre>
|
<p>Let us try <code>get_dummies</code></p>
<pre><code>s = df.company.str.get_dummies(',')
adidas amazon apple disney eBay microsoft nike
0 0 0 1 1 0 1 1
1 1 1 0 0 1 1 0
</code></pre>
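<p>If the indicator columns should end up next to the original data, a small follow-up (assuming <code>df</code> and <code>s</code> from above):</p>
<pre><code>df = df.join(s)
</code></pre>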
|
python|pandas|dataframe|text
| 4
|
4,934
| 61,984,525
|
Assigning List Values to Pandas df Column generates NaN or Length Error
|
<p>I have a DataFrame</p>
<pre><code> Close Delta
Date
2020-05-11 2920.50 -440
2020-05-11 2920.25 -9
2020-05-11 2920.25 -27
2020-05-11 2920.50 2
2020-05-11 2920.75 117
</code></pre>
<p>Now i'm calculating consecutive increments of 'Close' with this function:</p>
<pre><code>tickbox = []
cumtickCount = 0
for i in range(len(df.index)):
if df.Close[i] > df.Close[i-1]:
cumtickCount += 1
tickbox.append(cumtickCount)
else:
cumtickCount = 0
</code></pre>
<p>I get the list, but here I also don't understand why the values start with 1 and not with 0<br>
tickbox: </p>
<pre><code>[1,
1,
2,
3,
1,
2,
3,
4,
5,
6,
1,
1,
2,
3,
4,
5,
6,
7,
8,
9,
1,
2,
3,
4,
5,
</code></pre>
<p>If I convert the List to the df column</p>
<pre><code>ct = pd.Series(tickbox)
df['consec_tick'] = ct
</code></pre>
<p>I get NaN values</p>
<pre><code> Close Delta consec_tick
Date
2020-05-11 2920.50 -440 NaN
2020-05-11 2920.25 -9 NaN
2020-05-11 2920.25 -27 NaN
2020-05-11 2920.50 2 NaN
2020-05-11 2920.75 117 NaN
</code></pre>
<p>If I assign the list like this:</p>
<pre><code>df.assign(new_col=consec_tickup)
</code></pre>
<p>or</p>
<pre><code>df['consec_tick'] = consec_tickup
</code></pre>
<p>I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-57-9d3e9ad7ceb3> in <module>
7 cumtickCount += 1
8 #tickbox.append(cumtickCount)
----> 9 df['consec_tick'] = tickbox
10 else:
11 cumtickCount = 0
/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in __setitem__(self, key, value)
3470 else:
3471 # set column
-> 3472 self._set_item(key, value)
3473
3474 def _setitem_slice(self, key, value):
/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in _set_item(self, key, value)
3547
3548 self._ensure_valid_index(value)
-> 3549 value = self._sanitize_column(key, value)
3550 NDFrame._set_item(self, key, value)
3551
/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in _sanitize_column(self, key, value, broadcast)
3732
3733 # turn me into an ndarray
-> 3734 value = sanitize_index(value, self.index, copy=False)
3735 if not isinstance(value, (np.ndarray, Index)):
3736 if isinstance(value, list) and len(value) > 0:
/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in sanitize_index(data, index, copy)
610
611 if len(data) != len(index):
--> 612 raise ValueError("Length of values does not match length of index")
613
614 if isinstance(data, ABCIndexClass) and not copy:
ValueError: Length of values does not match length of index
</code></pre>
<p>How can I assign the values from 'tickbox' to the column correctly?</p>
|
<p>There are a few issues with your solution that might stem from my misunderstanding of your goals.</p>
<p>If you want the column to have the same number of values as the other column, you will want to add a value to <code>tickbox</code> for EVERY element. In your case, you're not appending anything in the <code>else</code> branch, meaning that you're actually skipping some values.</p>
<p>Another issue is that the first value needs to probably be set to <code>0</code>. Instead, when <code>i = 0</code>, you're comparing element <code>0</code> with element <code>-1</code>. I actually get a <code>KeyError: -1</code> when I try your code.</p>
<p>Taking the above issues into account, we could rewrite the function:</p>
<pre class="lang-py prettyprint-override"><code>def consecutive_ticks(close_prices):
# start with 0 for the first data point
ticks = [0]
count = 0
# go from element 1 to the last element
for i in range(1, len(close_prices)):
if close_prices[i] > close_prices[i-1]:
count += 1
else:
count = 0
# we append the current count anyway.
# it's either going to be an increment, or it's 0 if "close" is smaller
ticks.append(count)
return ticks
</code></pre>
<p>This will return a list with same length as the <code>close_prices</code> series. Thus, you can add it to your data frame simply by:</p>
<pre class="lang-py prettyprint-override"><code>df['consec_tick'] = consecutive_ticks(df.Close)
</code></pre>
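<p>As a side note, the same consecutive-increase count can also be computed without an explicit Python loop, using a groupby/cumcount trick. This is a hedged sketch of an alternative, not a fix of the original function:</p>
<pre class="lang-py prettyprint-override"><code>up = df['Close'].diff() > 0      # True where Close increased vs the previous row
groups = (~up).cumsum()          # a new group starts at every non-increase
df['consec_tick'] = up.groupby(groups).cumcount()
</code></pre>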
|
python|pandas|dataframe
| 1
|
4,935
| 57,844,567
|
How can I create a function that iterates a list and simultaneously creates a new column in a dataframe?
|
<p>I want to create a function that returns multiple columns containing moving averages with different windows. But I get only one column returned.</p>
<p>This is what I've tried:</p>
<pre><code>[3] data = pd.read_csv('data.csv')
[4] data.head()
[4] close
0 126.70
1 127.30
2 127.38
3 128.44
4 128.77
[5] li = range(2,101)
[6] def builder(data):
for n in li:
data[n] = data.close.rolling(window=n).mean().shift()
return data
[7] test = builder(data)
[8] test.head()
[8] close 2
0 126.70 NaN
1 127.30 NaN
2 127.38 127.00
3 128.44 127.34
4 128.77 127.91
</code></pre>
<p>Why doesn't my function return all the moving averages (2 to 100)?</p>
|
<p>Check the line <code>return data</code>: it should not be inside the for-loop, otherwise the function returns after the first iteration and only the first moving-average column is added.</p>
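<p>A corrected sketch of the builder function with the return moved outside the loop (same window range as in the question):</p>
<pre><code>def builder(data):
    for n in range(2, 101):
        data[n] = data.close.rolling(window=n).mean().shift()
    return data  # returned only after every window has been added

test = builder(data)
</code></pre>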
|
python|pandas
| 0
|
4,936
| 58,010,075
|
How to reduce time taken by to convert dask dataframe to pandas dataframe
|
<p>I have a function to read large csv files using dask dataframe and then convert to pandas dataframe, which takes quite a lot time. The code is:</p>
<pre><code>def t_createdd(Path):
dataframe = dd.read_csv(Path, sep = chr(1), encoding = "utf-16")
return dataframe
#Get the latest file
Array_EXT = "Export_GTT_Tea2Array_*.csv"
array_csv_files = sorted([file
for path, subdir, files in os.walk(PATH)
for file in glob(os.path.join(path, Array_EXT))])
latest_Tea2Array=array_csv_files[(len(array_csv_files)-(58+25)):
(len(array_csv_files)-58)]
Tea2Array_latest = t_createdd(latest_Tea2Array)
#keep only the required columns
Tea2Array = Tea2Array_latest[['Parameter_Id','Reading_Id','X','Value']]
P1MI3 = Tea2Array.loc[Tea2Array['parameter_id']==168566]
P1MI3=P1MI3.compute()
P1MJC_main = Tea2Array.loc[Tea2Array['parameter_id']==168577]
P1MJC_old=P1MJC_main.compute()
</code></pre>
<p><code>P1MI3=P1MI3.compute()</code> and <code>P1MJC_old=P1MJC_main.compute()</code> takes around <code>10</code> and <code>11</code> mins respectively to execute. Is there any way to reduce the time.</p>
|
<p>I would encourage you to consider, with reference to the Dask documentation, why you would expect the process to be any faster than using Pandas alone.
Consider: </p>
<ul>
<li>file access may be from several threads, but you only have one disc interface bottleneck, and likely performs much better reading sequentially than trying to read several files in parallel</li>
<li>reading CSVs is CPU-heavy, and needs the python GIL. The multiple threads will not actually be running in parallel</li>
<li>when you compute, you materialise the whole dataframe. It is true that you appear to be selecting a single row in each case, but Dask has no way to know in which file/part it is.</li>
<li>you call compute twice, but could have combined them: Dask works hard to evict data from memory which is not currently needed by any computation, so you do double the work. By calling compute on both outputs, you would halve the time.</li>
</ul>
<p>Further remarks:</p>
<ul>
<li>obviously you would do much better if you knew which partition contained what</li>
<li>you can get around the GIL using processes, e.g., Dask's distributed scheduler</li>
<li>if you only need certain columns, do not bother to load everything and then subselect, include those columns right in the read_csv function, saving a lot of time and memory (true for pandas or Dask).</li>
</ul>
<p>To compute both lazy things at once:</p>
<pre><code>dask.compute(P1MI3, P1MJC_main)
</code></pre>
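<p>A hedged sketch pulling those suggestions together, i.e. selecting columns at read time and computing both results in one pass (column names and filter values are taken from the question, where the capitalisation is assumed to be <code>Parameter_Id</code>):</p>
<pre><code>import dask
import dask.dataframe as dd

cols = ['Parameter_Id', 'Reading_Id', 'X', 'Value']
tea2array = dd.read_csv(latest_Tea2Array, sep=chr(1), encoding='utf-16', usecols=cols)

p1mi3 = tea2array[tea2array['Parameter_Id'] == 168566]
p1mjc = tea2array[tea2array['Parameter_Id'] == 168577]

# one pass over the files instead of two
P1MI3, P1MJC_old = dask.compute(p1mi3, p1mjc)
</code></pre>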
|
python-3.x|pandas|dask|dask-delayed
| 1
|
4,937
| 54,975,554
|
Converting a Dict having blank list (Values) to a df
|
<p>I'm a neophyte to pandas and have been struggling to convert a Dict to a df using <code>pd.DataFrame(Dict)</code>. Here's further detail: This Dict is part of a for loop that in every iteration reads in a new input file. As a result, Dict Values (lists) update every time and take different List sizes. The problem is, my code fails to execute <code>pd.DataFrame(Dict)</code> once Dict contains blank lists (Values) for all Keys and spits out: "ValueError: If using all scalar values, you must pass an index"</p>
<pre><code>Dict = {'Title': [],
'Organization': [],
'City': [],
'Company': []}
</code></pre>
<p>Could anybody shed some light on this? Thanks a million in advance.</p>
|
<p>Running your dictionary example through the <code>pd.DataFrame(Dict)</code> command does not give me any errors, I just get an empty DataFrame. And that's how it should be, as those empty lists you have as values are not scalars, they are iterables. This <code>ValueError</code> shows up when all values are in fact scalars, e.g., integers/floats/strings. If you just convert those scalars into iterables, such as empty lists or lists each containing one element (e.g., <code>['New York']</code>), you should not get any errors.</p>
<p>Per this SO <a href="https://stackoverflow.com/questions/17839973/constructing-pandas-dataframe-from-values-in-variables-gives-valueerror-if-usi">thread</a>, there are many suggested ways to handle this error when you actually have scalars, such as actually passing an index to <code>pd.DataFrame()</code> as a list, as requested by the error message. You just need to make the size of that index list the same as the number of elements you are feeding to the DataFrame.</p>
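<p>A small sketch of both situations (the scalar values here are made-up placeholders):</p>
<pre><code>import pandas as pd

empty = {'Title': [], 'Organization': [], 'City': [], 'Company': []}
print(pd.DataFrame(empty))        # empty DataFrame, no error

scalars = {'Title': 'Dr', 'Organization': 'ACME', 'City': 'New York', 'Company': 'ACME Inc'}
# pd.DataFrame(scalars) would raise the ValueError; passing an index fixes it
print(pd.DataFrame(scalars, index=[0]))
</code></pre>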
|
python|pandas|list|dataframe|dictionary
| 0
|
4,938
| 49,351,001
|
Convert a list of dict that has list to one dataframe
|
<p>whats the most efficient way to convert this?</p>
<p>given a list of dict with a list.</p>
<pre><code>list_df = [
{'High':[2,3,4,5,5,3,3,4,5,5],'Low':[0,-3,1,4,1,2,2,3,1,-1],'Name':['A','A','A','A','A','A','A','A','A','A']},
{'High':[35,23,424,5,25,3,223,4,5,255],'Low':[3,3,44,5,2,3,22,2,1,25],'Name':['B','B','B','B','B','B','B','B','B','B']}
]
</code></pre>
<p>If I do <code>df = pd.DataFrame(list_df)</code>, then each row is a list of the values.
The result is: </p>
<pre><code> Name High Low
0 [A,A,..][2,3,4..][0,-3,1..]
1 [B,B,..][35,23,424..][3,3,44..]
</code></pre>
<p>What I'd like is:</p>
<pre><code>Name High Low
A 2 0
A 3 -3
A 4 1
A 5 4
</code></pre>
|
<p>IIUC</p>
<pre><code>pd.concat([pd.DataFrame(x) for x in list_df])
Out[190]:
High Low Name
0 2 0 A
1 3 -3 A
2 4 1 A
3 5 4 A
4 5 1 A
5 3 2 A
6 3 2 A
7 4 3 A
8 5 1 A
9 5 -1 A
0 35 3 B
1 23 3 B
2 424 44 B
3 5 5 B
4 25 2 B
5 3 3 B
6 223 22 B
7 4 2 B
8 5 1 B
9 255 25 B
</code></pre>
|
pandas
| 0
|
4,939
| 49,577,645
|
ask user which column to read in pd.read_csv()
|
<p>I have a large data set with multiple columns and I would like the user to tell me which col to analyze.
so far I have:</p>
<pre><code>file = some_file
col_name = raw_input("Enter column name: ")
cols_used = ["X",col_name]
read_cols = pd.read_csv(file, usecols = cols_used, skiprows = [0,1], name =cols_used)
test = pd.unique(read_cols["X"])
</code></pre>
<p>For some reason I am not pulling the right cols. When I hard-code the col name in, everything works fine. I'm not sure what else to try.</p>
|
<p>This should work:</p>
<pre><code>file = some_file
col_name = raw_input("Enter column name: ")
cols_used = ["X",col_name]
read_cols = pd.read_csv(file, usecols = cols_used, names = cols_used)
test = pd.unique(read_cols["X"])
</code></pre>
<p>It seems that </p>
<pre><code>skiprows = [0,1]
</code></pre>
<p>causes problems: skipping row 0 probably removes the header row with the column names. So try this:</p>
<pre><code>read_cols = pd.read_csv(file, usecols = cols_used, skiprows = [1,2], names = cols_used)
</code></pre>
|
python|pandas
| 0
|
4,940
| 49,469,337
|
Finding the count of letters in each column
|
<p>I need to find the count of letters in each column as follows:</p>
<pre><code>String: ATCG
TGCA
AAGC
GCAT
</code></pre>
<p>string is a series.</p>
<p>I need to write a program to get the following:</p>
<pre><code> 0 1 2 3
A 2 1 1 1
T 1 1 0 1
C 0 1 2 1
G 1 1 1 1
</code></pre>
<p>I have written the following code but I am getting a row in 0 index and column at the end (column index 450, actual column no 451) with nan values. I should not be getting either the row or the column 451. I need to have only 450 columns.</p>
<pre><code>f = zip(*string)
counts = [{letter: column.count(letter) for letter in column} for column in
f]
counts=pd.DataFrame(counts).transpose()
print(counts)
counts = counts.drop(counts.columns[[450]], axis =1)
</code></pre>
<p>Can anyone please help me understand the issue?</p>
|
<p>Here is one way you can implement your logic. If required, you can turn your series into a list via <code>lst = s.tolist()</code>.</p>
<pre><code>lst = ['ATCG', 'TGCA', 'AAGC', 'GCAT']
arr = [[i.count(x) for i in zip(*lst)] for x in ('ATCG')]
res = pd.DataFrame(arr, index=list('ATCG'))
</code></pre>
<p><strong>Result</strong></p>
<pre><code> 0 1 2 3
A 2 1 1 1
T 1 1 0 1
C 0 1 2 1
G 1 1 1 1
</code></pre>
<p><strong>Explanation</strong></p>
<ul>
<li>In the list comprehension, deal with columns first by iterating the first, second, third and fourth elements of each string sequentially.</li>
<li>Deal with rows second by iterating through 'ATCG' sequentially.</li>
<li>This produces a list of lists which can be fed directly into <code>pd.DataFrame</code>.</li>
</ul>
|
python|pandas|bioinformatics|biopython
| 3
|
4,941
| 49,347,002
|
pandas: count rows within time moving window
|
<pre><code>import pandas as pd
d = [{'col1' : ' B', 'col2' : '2015-3-06 01:37:57'},
{'col1' : ' A', 'col2' : '2015-3-06 01:39:57'},
{'col1' : ' A', 'col2' : '2015-3-06 01:45:28'},
{'col1' : ' B', 'col2' : '2015-3-06 02:31:44'},
{'col1' : ' B', 'col2' : '2015-3-06 03:55:45'},
{'col1' : ' B', 'col2' : '2015-3-06 04:01:40'}]
df = pd.DataFrame(d)
df['col2'] = pd.to_datetime(df['col2'])
</code></pre>
<p>For each row I want to count number of rows with same
values of 'col1' and time within window of past 10 minutes before time of this row(include). I'm interested in <strong>implementation</strong> which work <strong>fast</strong></p>
<p>this source work very <strong>slow</strong> on big dataset:</p>
<pre><code>dt = pd.Timedelta(10, unit='m')
def count1(row):
id1 = row['col1']
start_time = row['col2'] - dt
end_time = row['col2']
mask = (df['col1'] == id1) & ((df['col2'] >= start_time) & (df['col2'] <= end_time))
return df.loc[mask].shape[0]
df['count1'] = df.apply(count1, axis=1)
df.head(6)
col1 col2 count1
0 B 2015-03-06 01:37:57 1
1 A 2015-03-06 01:39:57 1
2 A 2015-03-06 01:45:28 2
3 B 2015-03-06 02:31:44 1
4 B 2015-03-06 03:55:45 1
5 B 2015-03-06 04:01:40 2
</code></pre>
<p>Notice: column 'col2' is date sensitive, not only time</p>
|
<p>The problem is that <code>apply</code> is very expensive.
One option is to optimize the code via Cython or with the use of Numba.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/enhancingperf.html" rel="nofollow noreferrer">This</a> might be helpful.</p>
<p>Another option is the following:</p>
<ol>
<li>Create a column with timestamps from col2</li>
<li>Create a column with ids which group the timestamps by your 10 min criterium</li>
<li>Create a combined column with the previously created ids and col1, as in <code>df['time_ids'].map(str) + df['col1']</code></li>
<li>Use <code>groupby</code> to determine the number of equal rows. Something like: <code>df.groupby(df['combined_ids']).size()</code></li>
</ol>
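<p>A hedged sketch of that outline; note that flooring the timestamps into fixed 10-minute bins is only an approximation of the true rolling window asked for in the question:</p>
<pre><code>bins = df['col2'].dt.floor('10min')              # ids that group timestamps into 10-minute bins
combined = df['col1'] + '|' + bins.astype(str)   # combine the bin id with col1
df['count1'] = combined.map(combined.value_counts())
</code></pre>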
|
python-3.x|pandas|dataframe|count
| 3
|
4,942
| 49,772,706
|
How to initialize a tuple of lists to columns of an existing DataFrame in python pandas
|
<p>I have a function which takes the text as input and returns a tuple of lists. I want to convert the tuple into columns of an existing DataFrame.</p>
<pre><code>def func(text):
// some code //
return (tuple)
</code></pre>
<p>The tuple is in this format:</p>
<pre><code>(['1','2','3'],['abc','def','efg'])
</code></pre>
<p>I tried doing</p>
<pre><code>df[['col1','col2']] = df.col_text.apply(func)
</code></pre>
<p>But it is throwing an error:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I want the output as below:</p>
<pre><code>df
col1 col2
0 [1, 2, 3] [abc, def, efg]
</code></pre>
<p>Please let me know the correct and efficient way of doing it.</p>
|
<p>I believe you need convert output to <code>Series</code> if need columns of <code>list</code>s:</p>
<pre><code>df = pd.DataFrame({'col_text':range(5)})
def func(text):
a = (['1','2','3'],['abc','def','efg'])
return pd.Series(a)
df[['col1','col2']] = df.col_text.apply(func)
print (df)
col_text col1 col2
0 0 [1, 2, 3] [abc, def, efg]
1 1 [1, 2, 3] [abc, def, efg]
2 2 [1, 2, 3] [abc, def, efg]
3 3 [1, 2, 3] [abc, def, efg]
4 4 [1, 2, 3] [abc, def, efg]
</code></pre>
|
python|pandas|tuples
| 1
|
4,943
| 73,411,720
|
Cross Regularization between two Neural Networks
|
<p>I am trying to add a loss term to regularise between two neural networks and make them as similar as possible while still performing different tasks. The closest I could find are the answers in this post:
<a href="https://stackoverflow.com/questions/44641976/pytorch-how-to-add-l1-regularizer-to-activations">Pytorch: how to add L1 regularizer to activations?</a></p>
<p>But trying the solutions I could not get it to work. The model trains both models to a good accuracy, but ignores the regularization ( even if set to an insanely high value ), and the difference between the two only ever seems to go up. Is there something else I need to do with the additional regularization loss term to make it so that it is not ignored?</p>
<p>My current best attempt is shown here:</p>
<pre><code>def train_combined(nets, dataset_train, dataset_test, num_epochs, alpha=0):
criterion = nn.L1Loss()
optimizers = [optim.SGD(net.parameters(), lr=0.01, momentum=0.9 ) for net in nets]
trainloader = DataLoader(dataset_train, batch_size=32, shuffle=True )
train_losses = []
test_losses = []
for epoch in range(num_epochs): # loop over the dataset multiple times
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, *labels = data
inputs = inputs
# get the average of the paramaters between the two networks
with t.no_grad():
params = t.stack([t.cat(tuple(t.flatten(p.data) for p in net.parameters())) for net in nets])
avg = t.sum(params, dim=0)*0.5
# keep track of loss for both models
all_losses = np.zeros( 2 )
all_reg_losses = np.zeros( 2 )
all_final_losses = np.zeros( 2 )
# forward + backward + optimize
for i, (net, optimizer, label) in enumerate(zip(nets, optimizers, labels)):
optimizer.zero_grad()
# calculate normal loss
outputs = net(inputs)
loss = criterion(outputs, label)
# calculate regularization loss loss
params = t.cat(tuple(t.flatten(p.data) for p in net.parameters()))
regularization_loss = t.sum(t.abs( params - avg ))
regularization = regularization_loss * alpha
# calculate total loss
final_loss = loss + regularization
final_loss.backward()
optimizer.step()
# keep track of losses
all_losses[i] = float( loss.item() )
all_reg_losses[i] = float( 0 if (regularization == 0) else regularization.item() )
all_final_losses[i] = float( final_loss.item() )
# keep track of performance
train_losses.append( loss )
with t.no_grad():
for i in range(2):
test_losses.append( light_eval( nets[i], data_test, index=i ) )
# log performance each epoch
for i in range(2):
print("%3d" % (epoch+1), i, ':',
f' train loss = { ("%.4f "*3) % (all_losses[i], all_reg_losses[i], all_final_losses[i]) }',
f', test_losses = { "%.4f" % test_losses[-(2-i)] }')
print('Finished Training')
models = [ Net().to(device) for i in range(2) ]
train_combined( models, dataset_train, dataset_test, 50, alpha=1e-2 )
</code></pre>
<p>What am I doing wrong?</p>
|
<p>Problem solved. The issue seems to come from using <code>p.data</code> instead of <code>p</code> when getting <code>params</code>: accessing <code>.data</code> detaches the parameters from the autograd graph, so the regularization term produced no gradients and was effectively ignored.</p>
<p>The working solution looks like this:</p>
<pre><code>params = t.cat(tuple(t.flatten(p) for p in net.parameters()))
assert params.requires_grad
distance = criterion(params, avg)
regularization_loss = t.sum( distance )
</code></pre>
|
python|machine-learning|neural-network|pytorch|loss-function
| 0
|
4,944
| 59,919,391
|
Why is match.columns.get_loc returning a boolean array, not an indice?
|
<p>I'm trying to identify the index position of a particular column name in Python. I used this exact same method previously on the same dataframe and it returned the number of the index position of the column name. However, in this case it doesn't seem to be working. Here is the relevant code:</p>
<p>The dataframe:</p>
<pre><code>match.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 25979 entries, 0 to 25978
Data columns (total 68 columns):
id_x 25979 non-null int64
country_id 25979 non-null int64
league_id 25979 non-null int64
season 25979 non-null object
stage 25979 non-null int64
date 25979 non-null object
match_api_id 25979 non-null int64
home_team_api_id 25979 non-null int64
away_team_api_id 25979 non-null int64
home_team_goal 25979 non-null int64
away_team_goal 25979 non-null int64
home_player_1 24755 non-null float64
home_player_2 24664 non-null float64
home_player_3 24698 non-null float64
home_player_4 24656 non-null float64
home_player_5 24663 non-null float64
home_player_6 24654 non-null float64
home_player_7 24752 non-null float64
home_player_8 24670 non-null float64
home_player_9 24706 non-null float64
home_player_10 24543 non-null float64
home_player_11 24424 non-null float64
away_player_1 24745 non-null float64
away_player_2 24701 non-null float64
away_player_3 24686 non-null float64
away_player_4 24658 non-null float64
away_player_5 24644 non-null float64
away_player_6 24666 non-null float64
away_player_7 24744 non-null float64
away_player_8 24638 non-null float64
away_player_9 24651 non-null float64
away_player_10 24538 non-null float64
away_player_11 24425 non-null float64
goal 14217 non-null object
shoton 14217 non-null object
shotoff 14217 non-null object
foulcommit 14217 non-null object
card 14217 non-null object
cross 14217 non-null object
corner 14217 non-null object
possession 14217 non-null object
BSA 14161 non-null float64
Home Team 25979 non-null object
Away Team 25979 non-null object
name_x 25979 non-null object
name_y 25979 non-null object
home_player_1 24755 non-null object
home_player_2 24664 non-null object
home_player_3 24698 non-null object
home_player_4 24656 non-null object
home_player_5 24663 non-null object
home_player_6 24654 non-null object
home_player_7 24752 non-null object
home_player_8 24670 non-null object
home_player_9 24706 non-null object
home_player_10 24543 non-null object
home_player_11 24424 non-null object
away_player_1 24745 non-null object
away_player_2 24701 non-null object
away_player_3 24686 non-null object
away_player_4 24658 non-null object
away_player_5 24644 non-null object
away_player_6 24666 non-null object
away_player_7 24744 non-null object
away_player_8 24638 non-null object
away_player_9 24651 non-null object
away_player_10 24538 non-null object
away_player_11 24425 non-null object
dtypes: float64(23), int64(9), object(36)
</code></pre>
<p>Rest of code:</p>
<pre><code>#remove rows that dont contain player names
column_start = match.columns.get_loc("home_player_1")
column_start
column_end = match.columns.get_loc("away_player_11")
columns = match.columns[column_start:column_end]
#match.dropna(axis=columns)
</code></pre>
<p>This causes the following error:</p>
<pre><code>TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
|
<p>Problem is both columns are duplicated, <code>home_player_1</code> and also <code>away_player_11</code> (and many another columns too).</p>
<p>So if same values in columns you can remove duplicated columns by:</p>
<pre><code>match = match.loc[:, ~match.columns.duplicated()]
</code></pre>
<p>Or you can deduplicate columns names by:</p>
<pre><code>s = match.columns.to_series()
match.columns = (match.columns +
s.groupby(s).cumcount().astype(str).radd('_').str.replace('_0',''))
</code></pre>
|
python|pandas
| 4
|
4,945
| 65,157,083
|
visualize tuple with entities grouped
|
<p>I have a tuple with named entity values in a Dataframe, how do I group the values by each entities over the column containing tuples and visualize it.</p>
<pre><code>s = pd.Series("At noon, Trump became the 45th president of the United States, taking the oath of office with Chief Justice John Roberts. Trump was also sworn in using two Bibles, a Bible his mother gifted him and the historic Lincoln Bible.")
print(hero.named_entities(s)[0])
</code></pre>
<p>which outputs</p>
<pre><code>[('noon', 'TIME', 3, 7), ('Trump', 'PERSON', 9, 14), ('45th', 'ORDINAL', 26, 30), ('the United States', 'GPE', 44, 61), ('John Roberts', 'PERSON', 108, 120), ('two', 'CARDINAL', 152, 155), ('Bibles', 'PRODUCT', 156, 162), ('Bible', 'WORK_OF_ART', 166, 171), ('Lincoln Bible', 'PERSON', 211, 224)]
</code></pre>
|
<p>You don't have "a tuple with named entity values in a Dataframe". You have "a list of tuples".
In order to group and visualize them, you can follow the instructions below:</p>
<p>Use a for-loop to iterate through a list of tuples. Within the for-loop, use the indexing syntax tuple[0] to access the first element of each tuple, and call list.append(object) with object as the tuple's first element to append each first element to list.</p>
<p>As shown <a href="https://www.kite.com/python/answers/how-to-get-the-first-element-of-each-tuple-in-a-list-in-python#:%7E:text=Use%20indexing%20to%20get%20the,each%20first%20element%20to%20list%20." rel="nofollow noreferrer">here</a></p>
<p>And then, if you want you can transform each list to DataFrame using pandas.</p>
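<p>A minimal sketch of that idea, assuming the list of tuples from the question is stored in a variable called <code>entities</code>:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

entities = [('noon', 'TIME', 3, 7), ('Trump', 'PERSON', 9, 14), ('45th', 'ORDINAL', 26, 30)]
df = pd.DataFrame(entities, columns=['text', 'label', 'start', 'end'])

# count how many entities fall into each label and plot the result
df['label'].value_counts().plot(kind='bar')
plt.show()
</code></pre>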
<p>Also for anyone else who wants to run this command:
<code>print(hero.named_entities(s)[0])</code></p>
<p>You have to import the following package:
<code>import texthero as hero</code></p>
|
python|pandas|matplotlib|seaborn
| 0
|
4,946
| 65,399,531
|
Forward fill pandas based on column conditions with increment
|
<p>I have the following dataframe. I would like to forward fill from the startTime till the endTime for each id I have.</p>
<pre><code>id current startTime endTime
1 2015-05-10 2015-05-10 2015-05-12
2 2015-07-11 2015-07-11 2015-07-13
3 2015-10-01 2015-10-01 2015-10-03
4 2015-12-01 None None
</code></pre>
<p>Here's my expected output:</p>
<pre><code>id current
1 2015-05-10
1 2015-05-11
1 2015-05-12
2 2015-07-11
2 2015-07-12
2 2015-07-13
3 2015-10-01
3 2015-10-02
3 2015-10-03
4 2015-12-01
</code></pre>
|
<p>I use <code>date_range</code> and <code>explode</code> for this.</p>
<pre><code>df['current'] = df.apply(lambda row: row['current'] if row['startTime'] is None else pd.date_range(row['startTime'], row['endTime'], freq='D'), axis=1)
df = df.explode('current')
</code></pre>
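<p>To end up with only the two columns from the expected output, a small follow-up selection (assuming the start/end columns are no longer needed):</p>
<pre><code>df = df[['id', 'current']]
</code></pre>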
|
python|pandas
| 1
|
4,947
| 65,394,235
|
How to compare last value to previous 6 values in pandas dataframe?
|
<p>Is there a way to find out if the last value is in the lower 50% range of the previous six days values? I want to add another column that shows yes or no. I tried sorting the previous six to get the middle value, but could not compare it to last and/or make it iterate to populate the new column. My data looks like below:</p>
<pre><code>Date Open High Low Close Adj Close Volume
2020-12-14 3675.270020 3697.610107 3645.840088 3647.489990 3647.489990 4594920000
2020-12-15 3666.409912 3695.290039 3659.620117 3694.620117 3694.620117 4360280000
2020-12-16 3696.250000 3711.270020 3688.570068 3701.169922 3701.169922 4056950000
2020-12-17 3713.649902 3725.120117 3710.870117 3722.479980 3722.479980 4184930000
2020-12-18 3722.389893 3726.699951 3685.840088 3709.409912 3709.409912 7068340000
</code></pre>
<p>I spent 5-6 hours googling and trying to no avail, any sort of guidance is greatly appreciated</p>
|
<p>Understanding your question as asking for the ratio of the previous day's closing price to the average of the previous six days, I created the following code. Sort the closing prices of the retrieved stocks in descending order. In a new column, use the rolling function to calculate the six-day average and add it. Then shift the data to align the new column with the closing price to be compared. Then we added the ratio calculation.</p>
<pre><code>import yfinance as yf
data = yf.download("AAPL", start="2020-11-17", end="2020-12-18")['Adj Close'].to_frame()
data.sort_index(ascending=False, inplace=True)
data['pre_6'] = data.rolling(6).mean()
data['pre_6'] = data['pre_6'].shift(-5)
data['check'] = data['Adj Close'] /data['pre_6']
data
             Adj Close       pre_6     check
Date
2020-12-17 128.699997 125.303332 1.027108
2020-12-16 127.809998 124.149999 1.029480
2020-12-15 127.879997 123.578332 1.034809
2020-12-14 121.779999 122.889999 0.990968
2020-12-11 122.410004 122.968333 0.995460
2020-12-10 123.239998 123.056666 1.001490
2020-12-09 121.779999 123.030000 0.989840
2020-12-08 124.379997 123.186667 1.009687
2020-12-07 123.750000 122.298335 1.011870
2020-12-04 122.250000 121.105001 1.009455
2020-12-03 122.940002 120.068334 1.023917
2020-12-02 123.080002 118.773333 1.036260
2020-12-01 122.720001 117.234999 1.046786
2020-11-30 119.050003 116.338332 1.023308
2020-11-27 116.589996 116.269998 1.002752
2020-11-25 116.029999 116.509998 0.995880
2020-11-24 115.169998 117.069998 0.983770
2020-11-23 113.849998 117.924999 0.965444
2020-11-20 117.339996 NaN NaN
2020-11-19 118.639999 NaN NaN
2020-11-18 118.029999 NaN NaN
2020-11-17 119.389999 NaN NaN
2020-11-16 120.300003 NaN NaN
</code></pre>
|
python|pandas|dataframe|finance|technical-indicator
| 0
|
4,948
| 65,396,264
|
How do I efficiently map transformations over a pandas DataFrame
|
<p>a bit of a funny ask.</p>
<p>I have a (big) table that looks like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>transaction_date (index)</th>
<th>store_id</th>
<th>department_id</th>
<th>gross_revenue</th>
</tr>
</thead>
<tbody>
<tr>
<td>'2020-01-01'</td>
<td>Store1</td>
<td>Fruit</td>
<td>$7.50</td>
</tr>
<tr>
<td>'2020-01-01'</td>
<td>Store2</td>
<td>Fruit</td>
<td>$2.75</td>
</tr>
<tr>
<td>'2020-01-01'</td>
<td>Store1</td>
<td>Veg</td>
<td>$47.50</td>
</tr>
<tr>
<td>'2020-01-01'</td>
<td>Store2</td>
<td>Veg</td>
<td>$8.25</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to transform the <code>gross_revenue</code> column depending on the value of <code>store_id</code> and <code>department_id</code>.</p>
<p>For argument sake, let's say I want to increase all <code>Store1</code> sales by 25%, increase <code>Veg</code> sales by 10%, and <code>Fruit</code> sales by 75% (let's not worry about the order just yet).</p>
<p>I'd like the user to be able to write:</p>
<pre><code>modifiers = {
'store_id': {
'Store1': lambda x: x*1.25
},
'department_id: {
'Veg' : lambda x: x*1.10,
'Fruit': lambda x: x*1.75
}
}
</code></pre>
<p>Is there a performant way to execute this in Pandas?</p>
<p>As a baseline, this code works:</p>
<pre><code>from functools import reduce
ans = (table
.assign(gross_revenue = lambda x: x
.apply(lambda row: reduce(lambda x, f: f(x), [row['gross_revenue'],
modifiers.get(row['business_id'], lambda x: x),
modifiers.get(row['department_description'], lambda x: x)
]), axis=1)
)
)
</code></pre>
<p>However, it takes close to 2min (table is 5-10m rows).</p>
<p>Does anyone know a faster approach?</p>
<p>Thanks in advance.</p>
|
<p>Use <code>map</code>:</p>
<pre><code>store_adjust = {'Store1': 1.25, 'Store10':1.3}
dep_adjust = {'Veg': 1.10, 'Fruit':1.75}
df['gross_revenue'] *= ( df['store_id'].map(store_adjust).fillna(1) *
df['department_id'].map(dep_adjust).fillna(1) )
</code></pre>
|
python|pandas|performance|dataframe
| 3
|
4,949
| 49,893,741
|
tensorflow: CNN for non square image
|
<p>tensorflow version 1.5.0rc1
python version:3.5</p>
<p>When reshape a rectangular image to <code>[height,width]</code>
by using <code>tf.reshape(x,[-1,x,y,1])</code> </p>
<blockquote>
<p>eg. tf.reshape(x,[-1,14,56,1]) run conv2d returns:
InvalidArgumentError (see above for traceback): Input to reshape is a
tensor with 358400 values, but the requested shape requires a multiple
of 3136 [[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
_device="/job:localhost/replica:0/task:0/device:GPU:0"](MaxPool_1, Reshape_1/shape)]]</p>
</blockquote>
<p>3136 is the square of 56, so the tensor treats the reshape as a 56x56 matrix instead of 14x56.</p>
<p>Is there a way to get rid of it and set my CNN to a non square image?</p>
<p>Thanks</p>
|
<p>I don't exactly agree with reshaping a rectangular picture as you destroy the relationship between neighbour pixels. Instead, you have several options to apply CNNs on a non-quadratic image:</p>
<p>1.) Use padding. During preprocessing, you could fill in pixels to get a quadratic image. Through this you can apply the quadratic filter.</p>
<p>2.) Use different quadratic windows of that image for training. For example, create a quadratic window and run it over the image to get a number of sub-pictures.</p>
<p>3.) You could use different strides for the dimensions of your picture.</p>
<p>4.) You could stretch the picture in the needed direction, although I'm not exactly sure how this affects the performance later on. I would only try this as a last resort solution.</p>
|
python|tensorflow
| 0
|
4,950
| 50,076,420
|
How to combine Python 32bit and 64bit modules
|
<p>For one my Robotics projects, I am trying to grab an image from Nao Robot's camera and use Tensorflow for object recognition. </p>
<p>The problem is that the Robot's NaoQi API is built on Python2.7 32bit.
(<a href="http://doc.aldebaran.com/1-14/dev/python/install_guide.html" rel="nofollow noreferrer">http://doc.aldebaran.com/1-14/dev/python/install_guide.html</a>) </p>
<p>The Tensorflow Object recognition API works only with 64bit. (<a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md</a> and <a href="https://www.tensorflow.org/install/install_windows" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_windows</a>)</p>
<p>I am using Windows 10 and I have both Python 2.7 32 bit and 3.6 64bit installed and I am able to run the modules independently but I am unable to pass the image between the two. </p>
<p>Are there any workarounds to address this issue? Thank you.</p>
|
<p>I don't think there is a way to have both modules work in the same interpreter if you say one is 32bit only and the other 64bit only.</p>
<p>So, consider running two interpreters, and having them communicate with each other with message exchange, remote procedure call, and such.</p>
<p>I strongly discourage to use shared memory sections, UNIX or TCP sockets, as there are too many low level details to handle that would distract you from the real objective of your work.</p>
<p>Instead, consider some high level library such as <a href="http://www.zeromq.org" rel="nofollow noreferrer">zeromq</a> which has also <a href="http://zeromq.org/bindings:python" rel="nofollow noreferrer">python bindings</a> and it's very simple to use: you can send binary data, strings or python objects along the wire, which would be automatically serialized and deserialized using pickle.</p>
<p>Useful readings:</p>
<ul>
<li><a href="https://learning-0mq-with-pyzmq.readthedocs.io/en/latest/" rel="nofollow noreferrer">Learning ØMQ with pyzmq</a></li>
<li><a href="https://learning-0mq-with-pyzmq.readthedocs.io/en/latest/pyzmq/patterns/client_server.html" rel="nofollow noreferrer">Client / Server (Request-reply pattern)</a></li>
<li><a href="http://pyzmq.readthedocs.io/en/latest/serialization.html" rel="nofollow noreferrer">Serializing messages with PyZMQ</a></li>
</ul>
<p>Example client:</p>
<pre><code>import zmq
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
print("Sending request...")
socket.send_string("Hello")
# Get the reply.
message = socket.recv_string()
print(f"Received reply: {message}")
</code></pre>
<p>Example server:</p>
<pre><code>import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
while True:
message = socket.recv_string()
print(f"Received request: {message}")
socket.send_string("Hello")
</code></pre>
<p>Similarly to <code>socket.send_string()</code>, you have <code>socket.send_json()</code>, and <code>socket.send_pyobj()</code>.</p>
<p>Check <a href="http://pyzmq.readthedocs.io/en/latest/api/zmq.html" rel="nofollow noreferrer">the documentation</a>.</p>
|
python|tensorflow|robotics|nao-robot
| 3
|
4,951
| 64,079,215
|
Same keras models do not give the same output for get_config() method
|
<p>I would like to know why, for two identical keras models, get_config() sometimes gives the same results (see <code>model_dense_A</code> and <code>model_dense_B</code>) and sometimes not (for example <code>model_conv_A</code> and <code>model_conv_B</code>).</p>
<p>Even if I use the <code>clear_session()</code> method and the exact same code, the models are still different.</p>
<p>Does someone know about this behaviour ?</p>
<p>Code snippet:</p>
<pre><code>from tensorflow import keras
input_shape = (300, 3)
# MODEL DENSE A
keras.backend.clear_session()
input_ = keras.layers.Input(shape=input_shape)
out = keras.layers.Flatten()(input_)
out = keras.layers.Dense(units=100, activation='relu')(out)
out = keras.layers.Dense(units=50, activation='relu')(out)
out = keras.layers.Dense(units=10, activation='softmax')(out)
mdl_dense_A = keras.models.Model(name='dense', inputs=input_, outputs=out)
# MODEL DENSE B
keras.backend.clear_session()
input_ = keras.layers.Input(shape=input_shape)
out = keras.layers.Flatten()(input_)
out = keras.layers.Dense(units=100, activation='relu')(out)
out = keras.layers.Dense(units=50, activation='relu')(out)
out = keras.layers.Dense(units=10, activation='softmax')(out)
mdl_dense_B = keras.models.Model(name='dense', inputs=input_, outputs=out)
print(mdl_dense_A.get_config() == mdl_dense_B.get_config()) # True
# MODEL CONV1D-LSTM A
keras.backend.clear_session()
input_ = keras.layers.Input(shape=input_shape)
branch_outputs = []
for i in range(input_shape[-1]):
out = keras.layers.Lambda(lambda x: keras.backend.expand_dims(x[:, :, i], axis=-1))(input_)
out = keras.layers.Conv1D(filters=20, kernel_size=5, strides=2, padding='valid')(out)
branch_outputs.append(out)
out = keras.layers.Concatenate()(branch_outputs)
recursive = keras.layers.LSTM(50, return_sequences=True)(out)
recursive = keras.layers.LSTM(50)(recursive)
out = keras.layers.Dense(10, activation='softmax')(recursive)
mdl_conv_A = keras.models.Model(name='conv1d-lstm', inputs=input_, outputs=out)
# MODEL CONV1D-LSTM B
keras.backend.clear_session()
input_ = keras.layers.Input(shape=input_shape)
branch_outputs = []
for i in range(input_shape[-1]):
out = keras.layers.Lambda(lambda x: keras.backend.expand_dims(x[:, :, i], axis=-1))(input_)
out = keras.layers.Conv1D(filters=20, kernel_size=5, strides=2, padding='valid')(out)
branch_outputs.append(out)
out = keras.layers.Concatenate()(branch_outputs)
recursive = keras.layers.LSTM(50, return_sequences=True)(out)
recursive = keras.layers.LSTM(50)(recursive)
out = keras.layers.Dense(10, activation='softmax')(recursive)
mdl_conv_B = keras.models.Model(name='conv1d-lstm', inputs=input_, outputs=out)
print(mdl_conv_A.get_config() == mdl_conv_B.get_config()) # False !?
</code></pre>
|
<p>It usually helps to take a look at where exactly both configs differ, i.e.</p>
<pre><code>print(mdl_conv_A.get_config() == mdl_conv_B.get_config(), (mdl_conv_A.get_config(), mdl_conv_B.get_config())) # False !?
</code></pre>
<p>In this case they differ because of the Lambda layers, which do not serialize cleanly and so produce configs that are not reliably comparable.</p>
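<p>To pinpoint the offending layer rather than eyeballing the two full dicts, a small sketch (assuming the usual functional-model config layout with a <code>'layers'</code> list) would be:</p>
<pre><code>cfg_a = mdl_conv_A.get_config()
cfg_b = mdl_conv_B.get_config()

# print only the per-layer entries that differ between the two configs
for layer_a, layer_b in zip(cfg_a['layers'], cfg_b['layers']):
    if layer_a != layer_b:
        print(layer_a['class_name'])
        print(layer_a)
        print(layer_b)
</code></pre>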
|
python|tensorflow|keras|conv-neural-network|lstm
| 0
|
4,952
| 63,780,445
|
Python - Comparing two dataframes
|
<p><strong>Comparing two Dataframes // Deciphering one Dataframe with another</strong></p>
<p>Hello everyone and thanks for the help!</p>
<p>I have two dataframes. The first (df1) contains all my data, including a column with a long list of abbreviations (df1[ab]) which i want to translate into numbers via the second dataframe (df2). df2 contains two columns, one column with the same abbreviations (df2[key]) and one column with the related numbers (df2[value]).</p>
<p>My goal is to use the second dataframe as a deciphering tool for the first. I want to compare df1[ab] with df2[key] and create a new column in df1 which contains the correct numbers from df2[value] in the right order. Since the real list of abbreviations is quite long, I don't want to use a large number of "if-statements" to complete this task.</p>
<p>Example:</p>
<pre><code>import pandas as pd
import numpy as np
abbreviations = ["Sl2","Sl4","Ss","Tu4","Slu","Su2/Su3","Ut2", "Ss","Sl2","Slu","Slu"]
dictab = {"ab": abbreviations}
df1 = pd.DataFrame(dictab)
key = ["Ss","Sl2","Sl3","Sl4","Slu"]
value = [11,25,27,30,33]
dictkv = {"key":key, "value":value}
df2 = pd.DataFrame(dictkv)
</code></pre>
<p>As a result, df1 should contain a new column df1[result] which should contain the following values in the following order:</p>
<pre><code>print(df1)
ab result
0 Sl2 25
1 Sl4 30
2 Ss 11
3 Tu4 NaN
4 Slu 33
5 Su2/Su3 NaN
6 Ut2 NaN
7 Ss 11
8 Sl2 25
9 Slu 33
10 Slu 33
</code></pre>
<p>Any help would be much appreciated!</p>
<p>Cheers, Jato</p>
|
<p>If you just want the additional column:</p>
<pre><code>df1.merge(df2, left_on='ab', right_on='key', how='left')
</code></pre>
<p>Output</p>
<pre><code> ab key value
0 Sl2 Sl2 25.0
1 Sl4 Sl4 30.0
2 Ss Ss 11.0
3 Tu4 NaN NaN
4 Slu Slu 33.0
5 Su2/Su3 NaN NaN
6 Ut2 NaN NaN
7 Ss Ss 11.0
8 Sl2 Sl2 25.0
9 Slu Slu 33.0
10 Slu Slu 33.0
</code></pre>
<p>If you want the array of matches:</p>
<pre><code>df1.merge(df2, left_on='ab', right_on='key', how='left').value.values
</code></pre>
<p>Output</p>
<pre><code>array([25., 30., 11., nan, 33., nan, nan, 11., 25., 33., 33.])
</code></pre>
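<p>Another option, if you would rather not carry the extra <code>key</code> column, is <code>Series.map</code> with <code>df2</code> turned into a lookup Series; abbreviations without a match become NaN automatically:</p>
<pre><code>lookup = df2.set_index('key')['value']
df1['result'] = df1['ab'].map(lookup)
</code></pre>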
|
python|pandas|dataframe
| 1
|
4,953
| 63,955,968
|
Is there a javascript implementation of MediaPipe's Palm Tracking?
|
<p>I found a javascript implementation of MediaPipe's FaceMesh and HandPose but not Palm Tracking.</p>
|
<p>They currently do not have a javascript API for hand tracking.</p>
|
mediapipe|tensorflow.js
| 0
|
4,954
| 64,118,898
|
pandas: how to unpack nested JSON as a dataframe?
|
<p>I have an JSON output like this</p>
<p><code>json.json</code></p>
<pre><code>{"SeriousDlqin2yrs": {"prediction": "0", "prediction_probs": {"0": 0.95, "1": 0.04}}}
{"SeriousDlqin2yrs": {"prediction": "0", "prediction_probs": {"0": 0.96, "1": 0.03}}}
</code></pre>
<p>and I would like to read it in as a pandas dataframes that looks like this</p>
<pre><code>prediction, prediction_probs.0, prediction_probs.1
0, 0.95, 0.04
0, 0.96, 0.03
</code></pre>
<p>but I can't seem to find the right way</p>
<p>I tried</p>
<pre><code>predictions = pd.read_json("json.json", lines=True)
predictions.apply(lambda x: pd.DataFrame(x[0]), axis=1)
</code></pre>
|
<p>Tested in pandas <code>1.1.1</code> - convert values to <code>list</code>s and pass to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.json.json_normalize.html" rel="nofollow noreferrer"><code>json_normalize</code></a>:</p>
<pre><code>s = pd.read_json('json.json', lines=True)['SeriousDlqin2yrs'].tolist()
df = pd.json_normalize(s)
print (df)
prediction prediction_probs.0 prediction_probs.1
0 0 0.95 0.04
1 0 0.96 0.03
</code></pre>
<p>Another idea is parsing the json into a list yourself instead of using <code>pd.read_json</code>:</p>
<pre><code>import json
s = []
with open('json.json') as f:
for line in f:
s.append(json.loads(line)['SeriousDlqin2yrs'])
df = pd.json_normalize(s)
print (df)
prediction prediction_probs.0 prediction_probs.1
0 0 0.95 0.04
1 0 0.96 0.03
</code></pre>
|
python|pandas
| 3
|
4,955
| 62,962,795
|
How to read SPSS aka (.sav) in Python
|
<p>It's my first time using Jupyter Notebook to analyze survey data (.sav file), and I would like to read it in a way it will show the metadata so I can connect the answers with the questions. I'm totally a newbie in this field, so any help is appreciated!</p>
<pre><code>import pandas as pd
import pyreadstat
df, meta = pyreadstat.read_sav('./SimData/survey_1.sav')
type(df)
type(meta)
df.head()
</code></pre>
<p>Please lmk if there is an additional step needed for me to be able to see the metadata!</p>
|
<p>The meta object contains the metadata you are looking for. Probably the most useful attributes to look at are:</p>
<ul>
<li>meta.column_names_to_labels : a dictionary mapping the column names as you have them in your pandas dataframe to labels, i.e. longer explanations of what each column means</li>
</ul>
<pre><code>print(meta.column_names_to_labels)
</code></pre>
<ul>
<li>meta.variable_value_labels : a dict whose keys are column names and whose values are dicts mapping the values you find in your dataframe to their value labels.</li>
</ul>
<pre><code>print(meta.variable_value_labels)
</code></pre>
<p>For instance, if you have a column "gender" with values 1 and 2, you could get:
{"gender": {1: "male", 2: "female"}}
which means value 1 is male and 2 is female.
You can get those labels applied from the beginning if you pass the argument apply_value_formats:</p>
<pre><code>df, meta = pyreadstat.read_sav('survey.sav', apply_value_formats=True)
</code></pre>
<p>You can also apply those value formats to your dataframe anytime with pyreadstat.set_value_labels which returns a copy of your dataframe with labels:</p>
<pre><code>df_copy = pyreadstat.set_value_labels(df, meta)
</code></pre>
<ul>
<li>meta.missing_ranges : you get labels for missing values. Let's say that in the survey a certain variable was encoded with 1 meaning yes, 2 no, and then missing values: 5 meaning "didn't answer", 6 "person not at home". When you read the dataframe, by default you will get values 1 and 2 and NaN (missing) instead of 5 and 6. You can pass the argument user_missing=True to get 5 and 6, and meta.missing_ranges will tell you that 5 and 6 are missing values. meta.variable_value_labels will give you the "didn't answer" and "person not at home" labels.</li>
</ul>
<pre><code>df, meta = pyreadstat.read_sav("survey.sav", user_missing=True)
print(meta.missing_ranges)
print(meta.variable_value_labels)
</code></pre>
<p>These are the pieces of information potentially useful for your case; not all of them will necessarily be present in your dataset.</p>
<p>More information here: <a href="https://ofajardo.github.io/pyreadstat_documentation/_build/html/index.html" rel="noreferrer">https://ofajardo.github.io/pyreadstat_documentation/_build/html/index.html</a></p>
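<p>As a small usage sketch (assuming every column actually has a label), you can also push those question texts straight into the dataframe's column headers:</p>
<pre><code># column_names_to_labels maps name -> label, which is exactly what rename expects
df = df.rename(columns=meta.column_names_to_labels)
</code></pre>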
|
python|pandas|jupyter-notebook|metadata|spss
| 7
|
4,956
| 62,922,147
|
How to pick from multiple lists randomly to fill DFcolumns
|
<p>I want to fill a Pandas DataFrame with 3 columns and 20 rows based on random values from the 3 lists below. I can't quite figure out what I am doing wrong. Any suggestions?</p>
<pre><code>import random
import pandas as pd
import numpy as np
tests= ['TestA', 'TestB', 'TestC', 'TestD']
projects = ['AK', 'AA', 'JH', 'WM']
number = [10, 100, 200, 1000, 2000]
df = pd.DataFrame()
for i in range(1,21):
df = df.append(
{'TEST': random.choice(tests),
'PROJ': random.choice(projects),
'NUMBER': random.choice(number)})
</code></pre>
|
<p>You can use <code>np.random.choice</code>:</p>
<pre><code>tests= ['TestA', 'TestB', 'TestC', 'TestD']
projects = ['AK', 'AA', 'JH', 'WM']
number = [10, 100, 200, 1000, 2000]
num_rows = 20
# for repeatability, drop in actual code
np.random.seed(1)
df = pd.DataFrame({
'TEST': np.random.choice(tests, size=num_rows),
'PROJ': np.random.choice(projects, size=num_rows),
'NUMBER': np.random.choice(number, size=num_rows)
})
</code></pre>
<p>Output:</p>
<pre><code> TEST PROJ NUMBER
0 TestB JH 100
1 TestD AA 100
2 TestA JH 100
3 TestA AK 100
4 TestD WM 10
5 TestB AK 2000
6 TestD JH 100
7 TestB AK 10
8 TestD AA 10
9 TestA JH 1000
10 TestA JH 200
11 TestB AK 100
12 TestA WM 10
13 TestD WM 1000
14 TestB AA 100
15 TestA AA 100
16 TestC WM 1000
17 TestB JH 2000
18 TestC AK 10
19 TestA JH 100
</code></pre>
|
python|pandas|dataframe|random
| 2
|
4,957
| 63,074,439
|
Can not uninstall Tensorflow 2.1.0 as conda can't find the package and solving environment fails
|
<p>The tensorflow 2.1.0 package is shown under <code>conda list</code> as follows:</p>
<p><a href="https://i.stack.imgur.com/9Ho54.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Ho54.png" alt="conda list output" /></a></p>
<p>But when I try to uninstall it using <code>conda remove tensorflow</code> I get the following message:</p>
<p><a href="https://i.stack.imgur.com/GZ7Dt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GZ7Dt.png" alt="conda remove output" /></a></p>
<p><code>pip uninstall</code> is also not working. I tried several other methods as well <em>(shown below)</em>, and none of these worked. This kinda makes sense as <code>pip list</code> <em>doesn't</em> show this package.</p>
<p><a href="https://i.stack.imgur.com/YFxWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YFxWp.png" alt="enter image description here" /></a></p>
<p><em>Additional information which is also the strangest thing.</em> This is how the anaconda navigator shows the package.</p>
<p><a href="https://i.stack.imgur.com/yI4JJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yI4JJ.png" alt="enter image description here" /></a></p>
<p>As there are no other packages named as tensorflow present in the list, I assumed that this package marked in red must be the same tensorflow package which comes up in <code>conda list</code>.</p>
<p>Please can someone help me to uninstall this remaining package so that I can have a clean re-installation of the latest tensorflow packages.</p>
|
<p>First obtain the path where your packages are installed in anaconda-spyder using this command. Refer to <a href="https://stackoverflow.com/a/49028561/9279666">this link</a> for more information.</p>
<pre><code>python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
</code></pre>
<p>Then I was able to find the package that was listed in the final image of the question. Thereafter it was a matter of deleting the folders shown below.</p>
<p><a href="https://i.stack.imgur.com/VXnTw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VXnTw.png" alt="problematic folders" /></a></p>
<p>After that <code>conda list</code> doesn't have that package anymore.</p>
<p><a href="https://i.stack.imgur.com/0m62j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0m62j.png" alt="enter image description here" /></a></p>
|
python|tensorflow|pip|anaconda|conda
| 0
|
4,958
| 67,823,576
|
External datasources do not recognize header row of csv exported from Pandas dataframe
|
<p>My code follows below</p>
<pre><code>HVSPointsJoined = pd.read_csv(r'M:\08_Geography\CurrentSurveys\HVS\HVSMap\pointsjoined.csv',dtype='object')
HVSPointsJoined = HVSPointsJoined[['Join_Count',u'psu', u'tract', u'block',
u'outcome_code', u'g_short_1', u'lat', u'lon', u'building_type',
u'description', u'GEOID',u'NTACode']]
typeA_DF = HVSPointsJoined[HVSPointsJoined.outcome_code.isin(['213','214','216','217','218','219'])]
totaltypeA = len(typeA_DF)
ct_typeA = pd.crosstab(typeA_DF.NTACode,typeA_DF.g_short_1)
ct_typeA['Total'] = ct_typeA.sum(axis=1)
ct_typeA[['1','10','2','3','4','5','6','7','8','9']] = ct_typeA[['1','10','2','3','4','5','6','7','8','9']].div(ct_typeA['Total'].values,axis=0)
ct_typeA.rename(columns={"1": "TypeA_Per_1",
"2": "TypeA_Per_2",
"3": "TypeA_Per_3",
"4": "TypeA_Per_4",
"5": "TypeA_Per_5",
"6": "TypeA_Per_6",
"7": "TypeA_Per_7",
"8": "TypeA_Per_8",
"9": "TypeA_Per_9",
"10": "TypeA_Per_10"},
inplace=True)
ct_typeA.drop(['Total'], axis=1)
ct_allCodes = pd.crosstab(HVSPointsJoined['NTACode'],HVSPointsJoined['outcome_code'])
ct_allCodes['Total_Cases'] = ct_allCodes.sum(axis=1)
ct_allCodes['TotalTypeA'] = ct_allCodes['213'] + ct_allCodes['214'] + ct_allCodes['216'] + ct_allCodes['217'] + ct_allCodes['218'] + ct_allCodes['219']
ct_allCodes['PerTypeAofCity'] = ct_allCodes['TotalTypeA']/totaltypeA
ct_allCodes['PerTypeAofNeig'] = ct_allCodes['TotalTypeA']/ct_allCodes['Total_Cases']
ct_buildings = pd.crosstab(HVSPointsJoined['NTACode'],HVSPointsJoined['g_short_1'])
NTATable = pd.concat([ct_allCodes,ct_typeA,ct_buildings],axis=1)
NTATable = NTATable[['200', '201', '202', '203', '204', '205',
'213', '214', '216', '217', '218', '219',
'226', '229', '230', '232', '233', '240',
'243', '244', '245', '247', '248', '305',
'321', '401', '580', '583', 'Total_Cases',
'TotalTypeA', 'PerTypeAofCity', 'PerTypeAofNeig',
'TypeA_Per_1', 'TypeA_Per_2','TypeA_Per_3',
'TypeA_Per_4', 'TypeA_Per_5','TypeA_Per_6',
'TypeA_Per_7', 'TypeA_Per_8', 'TypeA_Per_9','TypeA_Per_10',
'Total', '1', '2', '3', '4', '5', '6', '7', '8', '9','10']]
print(NTATable.head())
print(NTATable.columns.tolist())
NTATable.to_csv(r'M:\08_Geography\CurrentSurveys\HVS\HVSMap\NTA_Tab.csv')
</code></pre>
<p>The dataframe head prints properly to the screen, as well as the printing of the columns. However, when loading this csv into an external datasource such as excel, excel does not recognize the first row as being a header row. Typically with my other pandas exports the headers are recognized as headers when read into external programs. The first two rows of my csv in notepad look like this</p>
<pre><code> ,200,201,202,203,204,205,213,214,216,217,218,219,226,229,230,232,233,240,243,244,245,247,248,305,321,401,580,583,Total_Cases,TotalTypeA,PerTypeAofCity,PerTypeAofNeig,TypeA_Per_1,TypeA_Per_2,TypeA_Per_3,TypeA_Per_4,TypeA_Per_5,TypeA_Per_6,TypeA_Per_7,TypeA_Per_8,TypeA_Per_9,TypeA_Per_10,Total,1,2,3,4,5,6,7,8,9,10
BK09,2,21,10,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,5,0,0,40,1,0.000769822940724,0.025,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0,0,7,0,0,1,14,12,4,2,0
</code></pre>
<p>I need this csv output to be automated as input for another process so manually assigning the first row in the external program is not an option.</p>
<p>Thanks</p>
|
<p>Disregard this question; the issue is not related to pandas. External programs do not recognize a header row if the column names contain numeric characters only.</p>
|
python|pandas
| 0
|
4,959
| 61,540,658
|
CS231n assignment 2: TwoLayerNet and Solver
|
<p>I'm running into an error message when I try to execute solver.train.</p>
<p>I finished editing fc_net, including initialization, feed-forward, loss and backward propagation. When I executed the FullyConnectedNets code that is meant to compare their solution vs mine, everything went fine (my analytic gradients were identical to the numeric ones, same loss, etc.); dimensions are also the same (otherwise the comparison would not have worked).</p>
<p>Nevertheless, when I try to execute the solver I'm running into an error message. Specifically, I execute these lines:</p>
<pre><code>model = TwoLayerNet()
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
</code></pre>
<p>And the error message I get originally comes from <code>optim.py</code> and it says:</p>
<pre><code>41 config.setdefault('learning_rate', 1e-2)
42
---> 43 w -= config['learning_rate'] * dw
44 return w, config
45 ValueError: non-broadcastable output operand with shape (100,1) doesn't match the broadcast shape (100,100)
</code></pre>
<p>Did someone get similar error? From the message I understand that the gradient and W are not of the same dimensions. How could it be if all the test up to this part were positive?</p>
|
<p>I used <code>db = np.ones((1, dout.shape[0])) @ dout</code> in the <code>affine_backward</code> and also got your error. After changing the line into <code>db = dout.T @ np.ones(N)</code>, the error is gone.</p>
|
numpy|neural-network
| 0
|
4,960
| 68,536,412
|
Convert python pandas dataframe into different format
|
<p>I have a data frame in python and I want to convert in different format :</p>
<p>Below is the example of the same :</p>
<p>Current Data frame :</p>
<pre><code> Header 1 Header 1
Col_A Col_B Col_A Col_B
2021-07-15 1 2 3 4
2021-07-16 5 6 7 8
</code></pre>
<p>Expected Output :</p>
<pre><code>Date Header_No Col_A Col_B
2021-07-15 1 1 2
2021-07-16 1 5 6
2021-07-15 2 3 4
2021-07-16 2 7 8
</code></pre>
<p>Basically I want 4 columns Date , Header_No , Col_A, Col_B.</p>
|
<p>That's literally what <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>.stack()</code></a> does:</p>
<blockquote>
<p>Stack the prescribed level(s) from columns to index.</p>
</blockquote>
<p>With some tweaking to rename columns as you want to and from index levels and/or columns:</p>
<pre><code>>>> stacked = df.rename(columns=lambda c: int(c.split()[-1]), level=0).stack(level=0)
>>> stacked
Col_A Col_B
2021-07-15 1 1 2
2 3 4
2021-07-16 1 5 6
2 7 8
>>> stacked.rename_axis(['Date', 'Header_No']).reset_index()
Date Header_No Col_A Col_B
0 2021-07-15 1 1 2
1 2021-07-15 2 3 4
2 2021-07-16 1 5 6
3 2021-07-16 2 7 8
</code></pre>
|
python|pandas|sklearn-pandas
| 1
|
4,961
| 68,466,535
|
How to get a date_range and insert them as a 'list' to a new column in dataframe?
|
<p>I have a dataframe with 50k+ rows. <code>df.head(5)</code> is below:</p>
<pre><code> start_date finish_date months_used
2841 2019-06-23 2019-07-17 2
2842 2019-06-16 2019-06-23 1
2843 2019-03-27 2019-07-17 5
2844 2019-05-29 2019-06-05 2
2845 2019-03-25 2019-07-17 5
</code></pre>
<p>I need to create one more column with a list of the months between the start and finish dates, so I can use <code>df.explode</code> on this column and get, for every ID with months_used > 1, a new row for each month the work was in progress.</p>
<p>My primitive way is:</p>
<pre><code>for i in df2.index:
if df2.loc[i, 'months_used'] > 1:
x = pd.date_range(start=df2.loc[i, 'start_date'], end=df2.loc[i, 'finish_date'], freq='M')
df2.loc[i, 'rep_list'] = str(x)
</code></pre>
<p>But it does not make any sense, because it gives me strings like this <code>"DatetimeIndex(['2019-03-31', '2019-04-30', '2019-05-31', '2019-06-30'], dtype='datetime64[ns]', freq='M')"</code> and explode doesn't work. If I remove <code>str()</code> I get a <code>ValueError: Must have equal len keys and value when setting with an iterable</code>. And it is very slow...</p>
<p>I even can't fill any column with a <code>list</code> type, because I get the same error (I suppose it tries to fill the column with values from the list instead of inserting the list itself into the dataframe...)</p>
<p>expected:</p>
<pre><code> start_date finish_date months_used rep_list
2841 2019-06-23 2019-07-17 2 [2019-06, 2019-07]
2842 2019-06-16 2019-06-23 1 [2019-06]
2843 2019-03-27 2019-07-17 5 [2019-03, 2019-04, 2019-05, 2019-06, 2019-07]
2844 2019-05-29 2019-06-05 2 [2019-05, 2019-06]
2845 2019-03-25 2019-07-17 5 [2019-03, 2019-04, 2019-05, 2019-06, 2019-07]
</code></pre>
<p>Inside 'expected'-'rep_list' there could be any dates from current month... e.g. [2019-03-01, 2019-04-01, 2019-05-01, 2019-06-01, 2019-07-01] or others. Just need explode to work after that.</p>
|
<p>Try using:</p>
<pre><code>df['rep_list'] = df[['start_date', 'finish_date']].apply(lambda x: pd.date_range(start=x[0], end=x[1], freq='M').tolist(), axis=1)
</code></pre>
<p>And now:</p>
<pre><code>print(df)
</code></pre>
<p>Would give the expected result.</p>
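<p>From there the <code>explode</code> step you mention should work directly (assuming pandas >= 0.25, where <code>DataFrame.explode</code> was introduced). Note that rows whose range contains no month-end get an empty list, which <code>explode</code> turns into NaN:</p>
<pre><code>exploded = df.explode('rep_list')
</code></pre>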
|
python|pandas
| 1
|
4,962
| 68,740,447
|
How to drop all strings in a column using a wildcard?
|
<p>I have some data that changes regularly, but the column headers need to be consistent (so I can't drop the headers), and I need to clear out the strings in a given column.</p>
<p>This is what I have now, but it only seems to work where I know what the string is called, and only one at a time.</p>
<pre><code> df1= pd.read_csv(r'C:\Users\Test.csv')
df2 = df1.drop(df1[~(df1['column'] != 'String1')].index)
</code></pre>
|
<p>You can use the <code>DataFrame.drop</code> method, which removes rows with a specific index from a dataframe.</p>
<pre class="lang-py prettyprint-override"><code>for i in df.index:
if type(df.loc[i, 'Aborted Reason']) == str:
df.drop(i, inplace = True)
</code></pre>
<p><code>df.drop</code> will remove from the dataframe every index whose value in the relevant column is a string.</p>
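<p>If the column is otherwise numeric and the strings are the odd ones out, a vectorized sketch (using the <code>column</code> name from your example) is to coerce with <code>pd.to_numeric</code> and keep only the rows that convert:</p>
<pre><code>import pandas as pd

df1 = pd.read_csv(r'C:\Users\Test.csv')
# values that cannot be parsed as numbers become NaN, so those rows are dropped
numeric = pd.to_numeric(df1['column'], errors='coerce')
df2 = df1[numeric.notna()]
</code></pre>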
|
python|python-3.x|pandas
| 2
|
4,963
| 53,074,937
|
Transforming a Pandas dataframe based on condition
|
<p>I have a dataframe of the form:</p>
<pre><code> order_id product_id
0 2 33120
1 4 28985
2 4 9327
3 7 45918
4 14 30035
</code></pre>
<p>I would like to transform or create a new dataframe where all of the product_id's for each order_id are in the same row. And eventually write to a csv.</p>
<pre><code> product_id1 product_id2 ...
0 33120
1 28985 9327
2 45918
3 30035
</code></pre>
|
<p>This is a <code>pivot</code> problem; you just need <code>cumcount</code> to create the key.</p>
<pre><code>newdf=df.assign(key=df.groupby('order_id').cumcount()).pivot('order_id','key','product_id').fillna('')
newdf
Out[124]:
key 0 1
order_id
2 33120.0
4 28985.0 9327
7 45918.0
14 30035.0
#newdf.to_csv('your.csv')
</code></pre>
|
python|pandas
| 1
|
4,964
| 52,979,089
|
Read and Write to a specific cell in a file
|
<p>I am writing a program that reads from an Excel sheet and randomly picks a row (100 rows, 2 columns).</p>
<pre><code>with open("file1.csv") as f:
reader = csv.reader(f)
for index, row in enumerate(reader):
if index == 0:
chosen_row = row
else:
r = random.randint(0, index)
if r == 0:
chosen_row = row
</code></pre>
<p>I want it to write to a specific row/column.
For example, if it randomly picks from row 4, column A, it would write the answer to row 4, column B.</p>
<p>Here is what I have (it's wrong and it doesn't write to the specific cell)</p>
<pre><code> x = input(chosen_row[0])
srcfile = openpyxl.load_workbook("file1.csv",read_only=False,
keep_vba= True)
sheetname = srcfile.get_sheet_by_name('Sheet1')
sheetname.cell(row=chosen_index,column=2).value = x
srcfile.save('file1.csv')
</code></pre>
<p>I want to know how it can randomly pick a row, and have my code get the user's input and write it to the specific cell of that row.</p>
|
<p>From the example code on openpyxl:</p>
<pre><code>from openpyxl import Workbook
wb = Workbook()
# grab the active worksheet
ws = wb.active
# Data can be assigned directly to cells
ws['A1'] = 42
</code></pre>
<p>Given this, you should be able to read the value of a given cell with</p>
<pre><code>ws['A1']
</code></pre>
<p>and assign the value of a random other cell with:</p>
<pre><code>ws['A1']=ws[random[row]+random[cell]]
</code></pre>
<p>random[row/cell] is pseudocode; you'd choose how you want to define the random row/cell.</p>
<p>Don't forget to save, btw.</p>
<pre><code># Save the file
wb.save("sample.xlsx")
</code></pre>
|
python|pandas|numpy
| 0
|
4,965
| 53,242,570
|
Pandas dataframe find first and last element given condition and calculate slope
|
<p><b>The situation</b>:</p>
<p>I have a pandas dataframe with some data about the production of a product. The product is produced in 3 phases. The phases are not fixed, meaning that their cycles (how long they last) change. During the production phases, the temperature of the product is measured at each cycle.</p>
<p>Please see the table below:</p>
<p><a href="https://i.stack.imgur.com/tAo6L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tAo6L.png" alt="enter image description here"></a> </p>
<p><b>The problem</b>:</p>
<p>I need to calculate the slope for each cycle of each phase for each product. I also need to add it to the dataframe in a new column called "Slope". The one you can see highlighted in yellow was added by me manually in an Excel file. The real dataset contains hundreds of parameters (not only temperatures), so in reality I need to calculate the slope for many, many columns; therefore I tried to define a function.</p>
<p><b>My solution is not working at all</b>:</p>
<p>This is the code I tried, but it does not work. I am trying to catch the first and last row for the given product and the given phase, then get the temperature difference between these two rows, and from that calculate the slope.
This is all I could come up with so far (I created another column called "Max_cylce_no", which stores the maximum number of cycles for each phase):</p>
<pre><code>temp_at_start=-1
def slope(col_name):
global temp_at_start
start_cycle_no = 1
if row["Cycle"]==1:
temp_at_start =row["Temperature"]
start_row = df.index(row)
cycle_numbers = row["Max_cylce_no"]
last_cycle_row = cycle_numbers + start_row
last_temp = df.loc[last_cycle_row, "Temperature"]
</code></pre>
<p>And the way I would like to apply it:</p>
<pre><code>df.apply(slope("Temperature"), axis=1)
</code></pre>
<p>Unfortunately I get a NameError right away saying: name 'row' is not defined.</p>
<p>Could you please help me and show me the right direction on how to solve this problem. It gives me a really hard time. :(</p>
<p>Thank you in advance!</p>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with subtract last value with first and divide by length:</p>
<pre><code>f = lambda x: (x.iloc[-1] - x.iloc[0]) / len(x)
df['new'] = df.groupby(['Product_no','Phase_no'])['Temperature'].transform(f)
</code></pre>
|
python|pandas|dataframe
| 2
|
4,966
| 65,520,833
|
If I Trace a PyTorch Network on Cuda, can I use it on CPU?
|
<p>I traced my Neural Network using <code>torch.jit.trace</code> on a CUDA-compatible GPU server. When I reloaded that Trace on the same server, I could reload it and use it fine. Now, when I downloaded it onto my laptop (for quick testing), when I try to load the trace I get:</p>
<pre><code>RuntimeError: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. 'aten::empty_strided' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
</code></pre>
<p>Can I not switch between GPU and CPU on a trace? Or is there something else going on?</p>
|
<p>I had this exact same issue. In my model I had one line of code that was causing this:</p>
<pre><code>if torch.cuda.is_available():
weight = weight.cuda()
</code></pre>
<p>If you have a look at the official documentation for trace (<a href="https://pytorch.org/docs/stable/generated/torch.jit.trace.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.jit.trace.html</a>) you will see that</p>
<blockquote>
<p>the returned ScriptModule will always run the same traced graph on any input. This has some important implications when your module is expected to run different sets of operations, depending on the input and/or the module state</p>
</blockquote>
<p>So, if the model was traced on a machine with a GPU, this operation will be recorded and you won't even be able to load your model on a CPU-only machine. To solve this, delete everything that makes your model CUDA-dependent. In my case it was as easy as deleting the code block above.</p>
|
pytorch|torchscript
| 0
|
4,967
| 65,570,123
|
creating a matrix from multiple pandas data frames
|
<p>I have basically no experience with pandas and I'm trying to force myself to use it more.</p>
<p>I'm trying to join the "count" of multiple data frames based on a specific column to create a count matrix. I usually do this with good old python dictionaries, but if there's a simple way to do this with pandas, I'd be interested in learning.</p>
<p>I have multiple data frames. They are not equal in size. GeneID and geneName are basically the same thing. Just different ways of identifying the gene.</p>
<p>My data frames look like this:</p>
<p>Data frame1:</p>
<pre><code> geneID geneName count
0 A123 ABC 202
1 B456 DEF 30
2 C789 GHI 265
</code></pre>
<p>Data frame2:</p>
<pre><code> geneID geneName count
0 X999 FOO 700
1 B456 DEF 606
2 C789 GHI 777
</code></pre>
<p>If a gene name/ gene ID is not present in any of the data frames, it should have the count value of "0" in the matrix file.</p>
<p>Here is the desired result after joining counts:</p>
<pre><code> geneID geneName df1 df2 df3 ...
0 A123 ABC 202 0
1 B456 DEF 30 606
2 C789 GHI 265 777
3 X999 FOO 0 700
</code></pre>
<p>Thanks in advance for any solutions, and any pandas learning tips!</p>
|
<p>Try <code>pd.concat</code>:</p>
<pre><code>pd.concat([d.set_index(['geneID','geneName']).rename(columns={'count':f'df{i}'})
for i,d in enumerate([df1,df2])], axis=1
).fillna(0)
</code></pre>
<p>Output:</p>
<pre><code> df0 df1
geneID geneName
A123 ABC 202.0 0.0
B456 DEF 30.0 606.0
C789 GHI 265.0 777.0
X999 FOO 0.0 700.0
</code></pre>
<hr />
<p>Or <code>concat</code> then <code>pivot_table</code>:</p>
<pre><code>(pd.concat([d.assign(col=f'df{i}') for i,d in enumerate([df1,df2])])
.pivot_table(index=['geneID','geneName'], columns='col',
values='count', fill_value=0)
)
</code></pre>
<p>Or a similar approach with option <code>key</code> in <code>concat</code>:</p>
<pre><code>(pd.concat([df1,df2], keys=['df1','df2'])
.reset_index(level=1,drop=True)
.set_index(['geneID','geneName'],append=True)
['count']
.unstack(level=0, fill_value=0)
)
</code></pre>
|
python|pandas|dataframe|join
| 1
|
4,968
| 63,443,650
|
Can tabula deal with merged columns?
|
<p>Recently I've been working on table extraction, specifically with <em>stream</em> tables. In <a href="https://github.com/camelot-dev/camelot/wiki/Comparison-with-other-PDF-Table-Extraction-libraries-and-tools" rel="nofollow noreferrer">this</a> post I saw that tabula handles this kind of extraction very well.
For example, when comparing <code>tabula</code> vs <code>camelot</code> on "<a href="https://github.com/atlanhq/camelot/blob/master/docs/benchmark/stream/budget/budget.pdf" rel="nofollow noreferrer">budget.pdf</a>", Tabula merges the last two columns in the extraction. This can be fixed with <code>.split(' ', expand = True)</code> and then <code>combine</code>, <code>join</code> or <code>merge</code> to rebuild the original pdf table.</p>
<p>I noticed that when the gap between columns is very small they get merged into one. In the task that I'm trying to achieve that is very common. I'm not sure how robust my workaround is, because in some examples I work on the columns are merged only in the middle of the dataframe and I then have to sort the columns of the whole dataframe.</p>
<p>I would like to know if Tabula has a tunable parameter to deal with that, like <code>PDFMiner</code>, in which you can manage distances between values...</p>
|
<p>maintainer of Tabula here.</p>
<p>You can try specifying the horizontal coordinates of the column boundaries. This parameter is exposed in <code>tabula-py</code> in the <code>columns=</code> keyword argument of the <code>read_pdf</code> method.</p>
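<p>A sketch of what that looks like (the x-coordinates below are made up; you would measure the boundary positions for your own PDF, and <code>guess=False</code> keeps Tabula from overriding them):</p>
<pre><code>import tabula

# columns= takes the x-coordinates (in PDF points) of the column boundaries
tables = tabula.read_pdf(
    "budget.pdf",
    pages="all",
    stream=True,
    guess=False,
    columns=[70, 220, 300, 380, 460],  # hypothetical boundaries
)
</code></pre>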
|
python|pandas|tabula
| 0
|
4,969
| 63,419,095
|
Populate Pandas Dataframe Based on Column Values Matching Other Column Names
|
<p>I'd like to populate one dataframe (df2) based on the column names of df2 matching values within a column in another dataframe (df1). Here is a simplified example:</p>
<pre><code>names = list('abcd')
data = list('aadc')
df1 = pd.DataFrame(data,columns=['data'])
df2 = pd.DataFrame(np.empty([4,4]),columns=names)
df1:
data
0 a
1 a
2 d
3 c
df2:
a b c d
0 0.00 0.00 0.00 0.00
1 0.00 0.00 0.00 0.00
2 0.00 0.00 0.00 0.00
3 0.00 0.00 0.00 0.00
</code></pre>
<p>I'd like to update df2 so that the first row returns a number (let's say 1 for now) under column a, and 0 for the other columns. The second row of df2 would return the same, the third row would return a 0 for columns a/b/c and a 1 for column d, and the fourth row would return a 0 for columns a/b/d and a 1 for column c.</p>
<p>Thanks very much for the help!</p>
|
<p>You can do numpy broadcasting here:</p>
<pre><code>df2[:] = (df1['data'].values[:,None] == df2.columns.values).astype(int)
</code></pre>
<p>Or use <code>get_dummies</code>:</p>
<pre><code>df2[:] = pd.get_dummies(df1['data']).reindex(df2.columns, axis=1)
</code></pre>
<p>Output:</p>
<pre><code> a b c d
0 1 0 0 0
1 1 0 0 0
2 0 0 0 1
3 0 0 1 0
</code></pre>
|
python|pandas|numpy
| 1
|
4,970
| 53,469,325
|
Percentage growth between values in column
|
<p>Let's say that I have a df like below:</p>
<pre><code>x name
12 q
1 q
3 q
383 z
31 z
21 z
68 r
32 r
2 r
</code></pre>
<p>I need to compute the percentage growth between the first and last value for each name, so the result should look like this:</p>
<pre><code>x name
300% q
1723% z
20% r
</code></pre>
<p>I tried to first group by name, but now I can't move forward. Do you have any ideas how to fix it?</p>
<p>Thanks All for help </p>
|
<p>First aggregate <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>first</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>last</code></a> functions and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pct_change.html" rel="nofollow noreferrer"><code>pct_change</code></a>:</p>
<pre><code>df = (df.groupby('name')['x']
.agg([('a','last'),('x','first')])
.pct_change(axis=1)['x']
.mul(100)
.reset_index())
print (df)
name x
0 q 300.000000
1 r 3300.000000
2 z 1723.809524
</code></pre>
<p>Another solution:</p>
<pre><code>a = df.drop_duplicates('name', keep='last').set_index('name')['x']
b = df.drop_duplicates('name').set_index('name')['x']
df = b.sub(a).div(a).mul(100).round(2).reset_index()
print (df)
name x
0 q 300.00
1 z 1723.81
2 r 3300.00
</code></pre>
|
python-3.x|pandas|dataframe|percentage
| 1
|
4,971
| 71,900,015
|
Initialize a list in cells in specific indexes (the indexes are in a list)
|
<p>I have a list of indexes, at each of which I need to initialize an empty list in a specific column. I tried this:</p>
<pre><code>index = [0, 1, 2, 3, 4]
dataframe.at[indexes, 'column_x'] = [] * len(indexes)
</code></pre>
<p>which resulted in the error message:</p>
<pre><code>pandas.errors.InvalidIndexError: Int64Index([0, 1, 2, 3, 4], dtype='int64')
</code></pre>
<p>I tried using loc and iloc instead of at, which also resulted in errors. I couldn't find relevant solutions.<br />
Any suggestions will be welcomed.<br />
Thanks!</p>
|
<p>You can create a series of empty lists, then use <code>combine_first</code> to fill the right indexes:</p>
<pre><code>sr = pd.Series([[]] * len(df))
df['column_x'] = df['column_x'].mask(df.index.isin(index)).combine_first(sr)
</code></pre>
|
pandas|dataframe
| 1
|
4,972
| 55,561,467
|
Extracting subrows from rows
|
<p>I have a dataset containing year, total runs scored per ball, inning and batting_team.
I want to display the data so that, for each year and each inning of that year, I list the team that scored the highest runs in that inning.</p>
<p>I have reached this far but don't know how to take the max over the years (row-wise) and total (column-wise).</p>
<pre><code>a = pd.DataFrame((df.groupby(['year','inning','batting_team'])["total"].sum()))
</code></pre>
<p><a href="https://i.stack.imgur.com/JleOb.png" rel="nofollow noreferrer">This is the result of the above command</a></p>
<p>I want only the team that has scored the highest among the other teams to be displayed for that year's inning.
For example:
2008 1 DeccanChargers somevalue (max runs scored in that year's 1st inning)</p>
|
<p>If you just want to see the top result for each GroupBy you can use <code>.head(1)</code></p>
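<p>For example, a sketch assuming the column names from your groupby: sort by the summed runs and keep the first row per (year, inning):</p>
<pre><code>a = df.groupby(['year', 'inning', 'batting_team'], as_index=False)['total'].sum()

top = (a.sort_values('total', ascending=False)
        .groupby(['year', 'inning'])
        .head(1)
        .sort_values(['year', 'inning']))
</code></pre>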
|
python|pandas
| 0
|
4,973
| 66,801,447
|
Merge pandas dataframes by timestamps
|
<p>I've got a few pandas dataframes indexed with timestamps and I would like to merge them into one dataframe, matching nearest timestamp. So I would like to have for example:</p>
<pre><code>a =
CPU
2021-03-25 13:40:44.208 70.571797
2021-03-25 13:40:44.723 14.126870
2021-03-25 13:40:45.228 17.182844
b =
X Y
2021-03-25 13:40:44.193 45 1
2021-03-25 13:40:44.707 46 1
2021-03-25 13:40:45.216 50 2
a + b =
CPU X Y
2021-03-25 13:40:44.208 70.571797 45 1
2021-03-25 13:40:44.723 14.126870 46 1
2021-03-25 13:40:45.228 17.182844 50 2
</code></pre>
<p>Which exact timestamp ends up in the final DataFrame is not important to me.</p>
<p>BTW, is there an easy way to later convert "absolute" timestamps into time from start (either in seconds or milliseconds)? So for this example:</p>
<pre><code>
CPU X Y
0.0 70.571797 45 1
0.5 14.126870 46 1
1.0 17.182844 50 2
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> with <code>direction='nearest'</code>:</p>
<pre><code>pd.merge_asof(df1, df2, left_index=True, right_index=True, direction='nearest')
</code></pre>
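<p>For your example, a sketch (assuming the two frames are called <code>a</code> and <code>b</code> and are both sorted by their DatetimeIndex), including the optional conversion to elapsed seconds you asked about:</p>
<pre><code>ab = pd.merge_asof(a, b, left_index=True, right_index=True, direction='nearest')

# optional: replace absolute timestamps with seconds elapsed since the first row
ab.index = (ab.index - ab.index[0]).total_seconds()
</code></pre>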
|
python|pandas|dataframe
| 2
|
4,974
| 47,527,747
|
Pandas: Rolling mean over array of windows
|
<p>Similar to <a href="https://stackoverflow.com/a/42152692/2327328">this answer</a>, I can calculate multiple rolling means</p>
<pre><code>d1 = df.set_index('DateTime').sort_index()
ma_1h = d1.groupby('Event').rolling('H').mean()
ma_2h = d1.groupby('Event').rolling('2H').mean()
</code></pre>
<p>But how can I do this performantly if I want to do it for a list of arrays?</p>
<pre><code>window_array = ['H','3H','6H','9H'] # etc
</code></pre>
<p>And have the rolling means joined back into my original dataframe.</p>
|
<p>I believe you need to convert the offsets and create the new <code>DataFrame</code>s in a loop (a list comprehension), and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p>
<pre><code>from pandas.tseries.frequencies import to_offset
df1 = pd.concat([d1.groupby('Event').rolling(to_offset(x)).mean() for x in window_array],
axis=1,
keys=window_array)
</code></pre>
<p>Sample:</p>
<pre><code>rng = pd.date_range('2017-04-03', periods=10, freq='38T')
df = pd.DataFrame({'DateTime': rng, 'a': range(10), 'Event':[4] * 3 + [3] * 3 + [1] * 4})
print (df)
from pandas.tseries.frequencies import to_offset
window_array = ['H','3H','6H','9H']
d1 = df.set_index('DateTime').sort_index()
a = pd.concat([d1.groupby('Event')['a'].rolling(to_offset(x)).mean() for x in window_array],
axis=1,
keys=window_array)
print (a)
H 3H 6H 9H
Event DateTime
1 2017-04-03 03:48:00 6.0 6.0 6.0 6.0
2017-04-03 04:26:00 6.5 6.5 6.5 6.5
2017-04-03 05:04:00 7.5 7.0 7.0 7.0
2017-04-03 05:42:00 8.5 7.5 7.5 7.5
3 2017-04-03 01:54:00 3.0 3.0 3.0 3.0
2017-04-03 02:32:00 3.5 3.5 3.5 3.5
2017-04-03 03:10:00 4.5 4.0 4.0 4.0
4 2017-04-03 00:00:00 0.0 0.0 0.0 0.0
2017-04-03 00:38:00 0.5 0.5 0.5 0.5
2017-04-03 01:16:00 1.5 1.0 1.0 1.0
</code></pre>
|
python|pandas|time-series|moving-average
| 1
|
4,975
| 47,310,132
|
Number of CNN learnable parameters - Python / TensorFlow
|
<p>In TensorFlow, is there any function, or something I can do, to find out the number of learnable parameters in my network?</p>
|
<p>No function I am aware of, but you can still count them yourself using a for loop over <code>tf.trainable_variables()</code>:</p>
<pre><code>total_parameters = 0
for variable in tf.trainable_variables():
variable_parameters = 1
for dim in variable.get_shape():
variable_parameters *= dim.value
total_parameters += variable_parameters
print("Total number of trainable parameters: %d" % total_parameters)
</code></pre>
|
python|tensorflow|conv-neural-network
| 6
|
4,976
| 68,098,852
|
Lookup value from data Pandas
|
<p>I have a list of postcodes coordinates in a <code>df</code></p>
<pre><code>print(df)
out[0]:
X Y Postcode
84060.2933273726 452334.434562507 2543
842443.2065506417 452310.49440726795 2544
78129.7656972764 450394.36304550205 2542
76143.40136149981 452922.516876715 2551
</code></pre>
<p>I also have an activity file (<code>df2</code>), where unknown coordinates are marked as NaN.</p>
<pre><code>print(df2)
out[1]:
OrigLoc DestLoc O_X O_Y D_X D_Y
0 2515 2515 82190.12097 454778.5460 81694.8038 454266.4303
1 2515 2544 81203.80496 453952.5966 NaN NaN
2 2544 2515 NaN NaN 81759.58454 454494.4784
3 2515 2543 81573.1442 454424.602 NaN NaN
</code></pre>
<p>How can I fill the NaNs in O_X, O_Y, D_X, and D_Y by taking the X and Y coordinates from <code>df</code>? I have tried to use pd.merge, but since I want to find values for multiple columns, does it mean that I have to do pd.merge 4 times? Is there a more efficient way to do this? Any help is appreciated!</p>
|
<p>A merge will get you the coordinates of the origin or of the destination, so 2 merges should be enough:</p>
<pre><code>>>> orig = df2.reset_index().merge(df, left_on='OrigLoc', right_on='Postcode')\
... .set_index('index')[['X', 'Y']].add_prefix('O_')
>>> orig
O_X O_Y
index
2 842443.206551 452310.494407
>>> dest = df2.reset_index().merge(df, left_on='DestLoc', right_on='Postcode')\
... .set_index('index')[['X', 'Y']].add_prefix('D_')
>>> dest
D_X D_Y
index
1 842443.206551 452310.494407
3 84060.293327 452334.434563
</code></pre>
<p>Note how these <code>orig</code> and <code>dest</code> dataframes have the same column names as the coordinate columns in <code>df2</code>: <code>O_X</code>, <code>O_Y</code>, <code>D_X</code>, <code>D_Y</code>. You should also note the <code>reset_index</code> and <code>set_index</code> steps, which allow us to preserve the original index (<code>merge</code> usually erases that column). All this allows us to keep the information of which cells in <code>df2</code> these values should fill.</p>
<p>We can now simply use <code>fillna</code> to fill in the gaps in <code>df2</code>:</p>
<pre><code>>>> df2.fillna(dest).fillna(orig)
OrigLoc DestLoc O_X O_Y D_X D_Y
0 2515 2515 82190.120970 454778.546000 81694.803800 454266.430300
1 2515 2544 81203.804960 453952.596600 842443.206551 452310.494407
2 2544 2515 842443.206551 452310.494407 81759.584540 454494.478400
3 2515 2543 81573.144200 454424.602000 84060.293327 452334.434563
</code></pre>
|
python|pandas|dataframe
| 0
|
4,977
| 68,141,627
|
Add a path to libraries for python in VS Code Windows
|
<p>I have <code>pandas</code> installed on my computer via Anaconda, which I downloaded previously. Now that I want to use VS Code, I tried installing <code>pandas</code> using <code>pip install pandas</code> and it said that the requirement is already satisfied. I am not sure what path to change and how to change it, though I have been able to locate pandas through <code>pip freeze</code>. Please help!</p>
|
<p>You do not need to install pandas again even though you're using a different IDE; you might have already added the Python path to your environment variables, and that's it.
Can you just try</p>
<pre><code>import pandas as pd
</code></pre>
<p>in your VSCode to cross check?</p>
|
python|pandas|windows|visual-studio-code|cmd
| 0
|
4,978
| 68,258,207
|
How to use fillna function in pandas?
|
<p>I have a dataframe which has three batteries' charging and discharging sequences:</p>
<pre><code> Battery 1 Battery 2 Battery 3
0 32 3 -1
1 21 11 -31
2 23 27 63
3 12 -22 -22
4 -21 22 44
5 -66 6 66
6 -12 32 -52
7 -45 -45 -4
8 45 -55 -77
9 66 66 96
10 99 -39 -69
11 88 99 48
</code></pre>
<p>If the number is negative then it is charging, and if it is positive then it is discharging. So I added all the batteries row-wise and then tried to split the result into the charging and discharging sequences.</p>
<pre><code>import pandas as pd
dic1 = {
'Battery 1': [32,21,23,12,-21,-66,-12,-45,45,66,99,88],
'Battery 2': [3,11,27,-22,22,6,32,-45,-55,66,-39,99],
'Battery 3': [-1,-31,63,-22,44,66,-52,-4,-77,96,-69,48]
}
df = pd.DataFrame(dic1)
bess = df.filter(like='Battery').sum(axis=1) # Adding all batteries
charging = bess[bess<=0].fillna(0) #Charging
discharging = bess[bess>0].fillna(0) #Discharging
bess['charging'] = charging #creating new column for charging
bess['discharging'] = discharging #creating new column for discharging
print(bess)
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code> bess charging discharging
0 34 0.0 34.0
1 1 0.0 1.0
2 113 0.0 113.0
3 -32 -32.0 0.0
4 45 0.0 45.0
5 6 0.0 6.0
6 -32 -32.0 0.0
7 -94 -94.0 0.0
8 -87 -87.0 0.0
9 228 0.0 228.0
10 -9 -9.0 0.0
11 235 0.0 235.0
</code></pre>
<p>but somehow <code>fillna</code> is not filling in the 0 values, and I get this output instead:</p>
<pre><code> bess charging discharging
0 34 34
1 1 1
2 113 113
3 -32 -32
4 45 45
5 6 6
6 -32 -32
7 -94 -94
8 -87 -87
9 228 228
10 -9 -9
11 235 235
</code></pre>
|
<p>Change these lines to use <code>reindex</code>:</p>
<pre><code>charging = bess[bess<=0].reindex(df.index,fill_value=0) #Charging
discharging = bess[bess>0].reindex(df.index,fill_value=0) #Discharging
</code></pre>
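<p>A slightly different sketch that avoids boolean indexing altogether is <code>clip</code>, building a proper DataFrame instead of assigning columns onto the Series:</p>
<pre><code>out = bess.to_frame('bess')
out['charging'] = bess.clip(upper=0)     # keep negatives, everything else becomes 0
out['discharging'] = bess.clip(lower=0)  # keep positives, everything else becomes 0
print(out)
</code></pre>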
|
python|pandas|fillna
| 3
|
4,979
| 59,076,441
|
Different color in hvplot.box
|
<p>The following code generates the linked image. It generates mostly what I want but I would like the box color to be different between Real and Preds. How would I do that with Holoviews or Hvplot?</p>
<pre><code>import hvplot.pandas
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(20), columns=['Value'])
df['Source'] = ['Preds'] *10 +['Real'] * 10
df['Item'] = ['item1'] *5 + ['item2']*5 + ['item1'] *5 + ['item2']*5
df.hvplot.box(y='Value', by=['Item', 'Source'])
</code></pre>
<p>I would like the first graph of this image to be in the style of the second</p>
<p><a href="https://i.stack.imgur.com/ILHhc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ILHhc.png" alt="enter image description here"></a></p>
|
<p>You can do it by <strong>setting the color and cmap parameter</strong>:</p>
<pre><code>df.hvplot.box(
y='Value',
by=['Item', 'Source'],
color='Source',
cmap=['blue', 'orange'],
legend=False,
)
</code></pre>
<p>Or by setting <strong>.opts(box_color)</strong>:</p>
<pre><code>df.hvplot.box(
y='Value',
by=['Item', 'Source'],
legend=False,
).opts(
box_color='Source',
cmap='Category20',
)
</code></pre>
<p>See also this SO question:<br> <a href="https://stackoverflow.com/questions/47657085/holoviews-color-per-category">Holoviews color per category</a></p>
<p>More info on choosing particular colors for plots:<br>
<a href="http://holoviews.org/user_guide/Styling_Plots.html" rel="nofollow noreferrer">http://holoviews.org/user_guide/Styling_Plots.html</a> <br>
<a href="http://holoviews.org/user_guide/Colormaps.html" rel="nofollow noreferrer">http://holoviews.org/user_guide/Colormaps.html</a></p>
|
python|pandas|holoviews|hvplot
| 4
|
4,980
| 59,159,444
|
Cumulative histogram with bins in frequency python
|
<p><a href="https://i.stack.imgur.com/clMGJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/clMGJ.png" alt="histogram barplot and cumulative histogram curve"></a></p>
<p>I am looking for a python function to get a cumulative frequency curve with regularly spaced frequencies (y axis) rather than regularly spaced values (x axis). In this image, the sampling of the dots is regularly spaced along the x axis; I would like it to be regular along the y axis.</p>
<p>The output of the function would be the regular percentiles, from 0 to 100 by step of n, and the values corresponding to those percentiles.</p>
<p>It would correspond to <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.cumfreq.html" rel="nofollow noreferrer">scipy.stats.cumfreq</a> but with numbins corresponding to y axis (frequencies or percent) and not x axis (values).</p>
<p>This function is a draft of what I am looking for:</p>
<pre class="lang-py prettyprint-override"><code>def cumfreq_even_freq(array, nbins):
array = array.flatten()
array.sort()
step = len(array)/nbins
percents = [(i*step * step)/len(array) for i in range(nbins)]
values = [array[i*step +step] for i in range(nbins)]
return percents, values
</code></pre>
|
<p>A very rough version, you can use pandas' <code>qcut</code>:</p>
<pre><code># toy data
np.random.seed(1)
a = np.random.rand(100)
# Quantile cut into 10 bins
cuts = (pd.qcut(a, np.arange(0,1,0.1)) # change arange to your liking
.value_counts().cumsum()
)
plt.plot([a.right for a in cuts.index], cuts, marker='s')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/Z3hCL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z3hCL.png" alt="enter image description here"></a></p>
|
python|numpy|scipy|cumulative-frequency
| 1
|
4,981
| 57,168,393
|
How to add a column in a df with mapped values from identical column in df2?
|
<p>I have two data frames <code>categories</code> and <code>data</code> and would like to add a column to <code>data</code> based on a column of <code>categories</code>. Here's some of the information for these data frames.</p>
<pre><code>items: DataFrame | (22170, 3) | Column names: item_name, item_id, item_category_id
data: DataFrame | (2935849, 6) | Column names: date, date_block_num, shop_id, item_id, item_price, item_cnt_day
</code></pre>
<p>There are 83 categories of items and 22169 unique items. I would like <code>item_category_id</code> to be added to data with its values uniquely equated to each <code>item_id</code>. I have gone through some of the posts here at SO but they seem perfect for smaller datasets or sets that require simpler mapping. What I'm looking for is this:</p>
<pre><code>print(data.head())
date shop_id item_id item_category_id -> # Newly added column
D.M.Y 50 22142 32
D.M.Y 25 521 12
D.M.Y 25 541 57
.
.
D.M.Y 44 42 83
</code></pre>
<p><code>merge</code> seems to be good enough, but it merges all of the data, and then removing the unneeded columns makes the process inefficient. What's a nice way to achieve this?</p>
|
<p>You could <code>merge</code> only on the slices of your DataFrames containing the columns that you need in the final result:</p>
<pre><code>data_cols = ['date', 'shop_id', 'item_id']
items_cols = ['item_id', 'item_category_id']
pd.merge(data[data_cols], items[items_cols], how='left', on='item_id')
</code></pre>
<p>Alternatively, you could create a lookup dictionary (or Series) then use <code>map</code>:</p>
<pre><code>lookup = dict(zip(items['item_id'], items['item_category_id']))
data['item_category_id'] = data['item_id'].map(lookup)
</code></pre>
|
python|pandas
| 1
|
4,982
| 57,130,434
|
Monitoring weight sparsity during training
|
<p>I wonder if it is possible to monitor the percentage of nonzero weights of the full network (not just a layer) during training? </p>
<p>For example, I use </p>
<pre><code>optim = AdagradDAOptimizer(learning_rate=0.01).minimize(my_loss)
</code></pre>
<p>and </p>
<pre><code>for i in range(10):
sess = tf.Session()
loss, _ = sess.run([my_loss, optim])
</code></pre>
<p>and I would like to print the ratio of the number of nonzero weights over the number of all weights after every iteration. Is it possible?</p>
|
<p>The following code calculates the number of nonzero weights:</p>
<pre><code>import tensorflow as tf
import numpy as np
tvars = sess.run(tf.trainable_variables())
nonzero_parameters = np.sum([np.count_nonzero(var) for var in tvars])
</code></pre>
<p>This answer shows how to calculate the total number of weights: <a href="https://stackoverflow.com/questions/38160940/how-to-count-total-number-of-trainable-parameters-in-a-tensorflow-model">How to count total number of trainable parameters in a tensorflow model?</a>.</p>
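<p>Putting the two together, a sketch of the ratio you asked for (reusing the <code>tvars</code> list evaluated above, which holds plain numpy arrays) that you could print after every iteration:</p>
<pre><code>total_parameters = np.sum([var.size for var in tvars])
nonzero_ratio = nonzero_parameters / float(total_parameters)
print("Nonzero weight ratio: %.4f" % nonzero_ratio)
</code></pre>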
|
tensorflow|deep-learning
| 0
|
4,983
| 45,950,723
|
Error: Tensorflow BRNN logits and labels must be same size
|
<p>I have an error like this:</p>
<pre><code>InvalidArgumentError (see above for traceback): logits and labels must
be same size: logits_size=[10,9] labels_size=[7040,9] [[Node:
SoftmaxCrossEntropyWithLogits =
SoftmaxCrossEntropyWithLogits[T=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/gpu:0"](Reshape, Reshape_1)]]
</code></pre>
<p>But I can't find the tensor which causes this error... I think it is caused by a size mismatch...</p>
<p>My Input size is <code>batch_size</code> * <code>n_steps</code> * <code>n_input</code></p>
<p>so, It will be 10*704*100, And I want to make the output </p>
<p><code>batch_size</code> * <code>n_steps</code> * <code>n_classes</code> => It will by 10*700*9, by Bidirectional RNN</p>
<p>How should I change this code to fix the error?</p>
<p>batch_size means the number of sequences, like this:</p>
<p>data 1 : ABCABCABCAAADDD...
...
data 10 : ABCCCCABCDBBAA...</p>
<p>And
n_steps means the length of each sequence (the data was padded with 'O' to fix the length of each sequence): 704</p>
<p>And
n_input means how each letter in each sequence is encoded, like this:
A - [1, 2, 1, -1, ..., -1]</p>
<p>And the output of the learning should be like this:
output of data 1 : XYZYXYZYYXY ...
...
output of data 10 : ZXYYRZYZZ ...</p>
<p>Each output letter is affected by the surrounding sequence of input letters.</p>
<pre><code>learning_rate = 0.001
training_iters = 100000
batch_size = 10
display_step = 10
# Network Parameters
n_input = 100
n_steps = 704 # timesteps
n_hidden = 50 # hidden layer num of features
n_classes = 9
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_steps, n_classes])
weights = {
'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
def BiRNN(x, weights, biases):
x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))
# Forward direction cell
lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Backward direction cell
lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
try:
outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
except Exception: # Old TensorFlow version only returns outputs not states
outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = BiRNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
step = 1
while step * batch_size < training_iters:
batch_x, batch_y = next_batch(batch_size, r_big_d, y_r_big_d)
#batch_x = batch_x.reshape((batch_size, n_steps, n_input))
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
if step % display_step == 0:
# Calculate batch accuracy
acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print("Optimization Finished!")
test_x, test_y = next_batch(batch_size, v_big_d, y_v_big_d)
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
</code></pre>
|
<p>The first return value of <a href="https://www.tensorflow.org/api_docs/python/tf/nn/static_bidirectional_rnn" rel="nofollow noreferrer"><code>static_bidirectional_rnn</code></a> is a list of tensors - one for each rnn step. By using only the last one in your <code>tf.matmul</code> you're losing all the rest. Instead, stack them into a single tensor of the appropriate shape, reshape for the <code>matmul</code> then shape back.</p>
<pre><code>outputs = tf.stack(outputs, axis=1)
outputs = tf.reshape(outputs, (batch_size*n_steps, 2*n_hidden))  # bidirectional outputs have 2*n_hidden features
outputs = tf.matmul(outputs, weights['out']) + biases['out']
outputs = tf.reshape(outputs, (batch_size, n_steps, n_classes))
</code></pre>
<p>Alternatively, you could use <a href="https://www.tensorflow.org/api_docs/python/tf/einsum" rel="nofollow noreferrer"><code>tf.einsum</code></a>:</p>
<pre><code>outputs = tf.stack(outputs, axis=1)
outputs = tf.einsum('ijk,kl->ijl', outputs, weights['out']) + biases['out']
</code></pre>
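<p>Note that with either variant <code>pred</code> becomes a rank-3 tensor of shape <code>(batch_size, n_steps, n_classes)</code>, matching <code>y</code>, so the softmax cross-entropy is taken per timestep over the last (class) dimension. The accuracy op should then take the argmax over axis 2 instead of axis 1 — a small sketch of the adjusted evaluation, assuming <code>BiRNN</code> returns the reshaped <code>outputs</code> above:</p>
<pre><code>pred = BiRNN(x, weights, biases)  # now (batch_size, n_steps, n_classes)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred = tf.equal(tf.argmax(pred, 2), tf.argmax(y, 2))  # per-timestep comparison
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
</code></pre>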
|
tensorflow|deep-learning|rnn
| 1
|
4,984
| 45,907,431
|
Shape mismatch in LSTM in keras
|
<p>I am trying to run an LSTM using Keras on my custom feature set. I have train and test features in separate files. Each csv file contains 11 columns, with the last column as the class label. There are 40 classes in total in my dataset. The problem is that I am not able to figure out the correct input_shape for the first layer. I have explored Stack Overflow and GitHub but am still not able to solve this.
Below is my complete code.</p>
<pre><code>import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
numpy.random.seed(7)
train_dataset = numpy.loadtxt("train.csv", delimiter=",")
X_train = train_dataset[:, 0:10]
y_train = train_dataset[:, 10]
test_dataset = numpy.loadtxt("test.csv", delimiter=",")
X_test = test_dataset[:, 0:10]
y_test = test_dataset[:, 10]
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=X_train.shape))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))
model.add(Dense(1, activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=10, epochs=1)
score, acc = model.evaluate(X_test, y_test, batch_size=10)
print('Test score:', score)
print('Test accuracy:', acc * 100)
</code></pre>
<p>Whatever I change in the input_shape parameter, I either get an error in the first LSTM layer or in the fit method.</p>
|
<p>You don't have a time dimension in your input.
Input for RNN should be <code>(batch_size, time_step, features)</code> while your input has dimension <code>(batch_size, features)</code>.</p>
<p>If you want to use your 10 columns one at a time you should reshape the array with
<code>numpy.reshape(train_dataset, (-1, train_dataset.shape[1], 1))</code></p>
<p>Try this code:</p>
<pre><code>train_dataset = numpy.loadtxt("train.csv", delimiter=",")
train_dataset = numpy.reshape(train_dataset, (-1, train_dataset.shape[1], 1))
X_train = train_dataset[:, 0:10]
y_train = train_dataset[:, 10]
test_dataset = numpy.loadtxt("test.csv", delimiter=",")
test_dataset = numpy.reshape(test_dataset, (-1, test_dataset.shape[1], 1))
X_test = test_dataset[:, 0:10]
y_test = test_dataset[:, 10]
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(X_train.shape[1], 1)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))
model.add(Dense(1, activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=10, epochs=1)
score, acc = model.evaluate(X_test, y_test, batch_size=10)
print('Test score:', score)
print('Test accuracy:', acc * 100)
</code></pre>
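<p>An equivalent, slightly clearer variant is to slice the label column out first and reshape only the features — a sketch under the same assumption of 10 feature columns plus 1 label column:</p>
<pre><code>import numpy
train_dataset = numpy.loadtxt("train.csv", delimiter=",")
X_train = train_dataset[:, 0:10].reshape(-1, 10, 1)  # (samples, time_steps=10, features=1)
y_train = train_dataset[:, 10]
test_dataset = numpy.loadtxt("test.csv", delimiter=",")
X_test = test_dataset[:, 0:10].reshape(-1, 10, 1)
y_test = test_dataset[:, 10]
# input_shape for the first LSTM layer then stays (10, 1)
</code></pre>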
|
python|numpy|keras|lstm|keras-layer
| 1
|
4,985
| 46,090,386
|
Keep columns after a groupby in an empty dataframe
|
<p>The dataframe is empty after a query. When I group by, a runtime warning is raised and I get another empty dataframe with no columns. How can I keep the columns?</p>
<pre><code>df = pd.DataFrame(columns=["PlatformCategory","Platform","ResClassName","Amount"])
print df
</code></pre>
<p>result:</p>
<pre><code>Empty DataFrame
Columns: [PlatformCategory, Platform, ResClassName, Amount]
Index: []
</code></pre>
<p>then groupby:</p>
<pre><code>df = df.groupby(["PlatformCategory","Platform","ResClassName"]).sum()
df = df.reset_index(drop=False,inplace=True)
print df
</code></pre>
<p>Result: sometimes it is None, sometimes it is an empty dataframe with no columns:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
<p>Why does the empty dataframe have no columns?</p>
<p>runtimewaring:</p>
<pre><code>/data/pyrun/lib/python2.7/site-packages/pandas/core/groupby.py:3672: RuntimeWarning: divide by zero encountered in log
  if alpha + beta * ngroups < count * np.log(count):
/data/pyrun/lib/python2.7/site-packages/pandas/core/groupby.py:3672: RuntimeWarning: invalid value encountered in double_scalars
  if alpha + beta * ngroups < count * np.log(count):
</code></pre>
|
<p>You need <code>as_index=False</code>:</p>
<pre><code>df = df.groupby(["PlatformCategory","Platform","ResClassName"], as_index=False).count()
df
Empty DataFrame
Columns: [PlatformCategory, Platform, ResClassName, Amount]
Index: []
</code></pre>
<p>No need to reset your index afterwards. Also note that <code>df = df.reset_index(drop=False, inplace=True)</code> is what sometimes leaves you with <code>None</code>: with <code>inplace=True</code> the method returns <code>None</code>, so don't assign the result back to <code>df</code>.</p>
|
python|pandas|dataframe|group-by|pandas-groupby
| 5
|
4,986
| 51,045,781
|
Tensorflow: How to retrieve information from the prediction Tensor?
|
<p>I have found a neural network for semantic segmentation purpose. The network works just fine, I feed my training, validation and test data and I get the output (segmented parts in different colors). Until here, all is OK. I am using Keras with Tensorflow 1.7.0, GPU enabled. Python version is 3.5</p>
<p>What I want to achieve though is to get access to the pixel groups (segments) so that I can get their boundaries' image coordinates, i.e. an array of points which forms the boundary of the segment X shown in green in the prediction image.</p>
<p>How to do that? Obviously I cannot put the entire code here but here is a snippet which I should modify to achieve what I would like to:</p>
<p>I have the following in my <strong>evaluate function</strong>:</p>
<pre><code> def evaluate(model_file):
net = load_model(model_file, custom_objects={'iou_metric': create_iou_metric(1 + len(PART_NAMES)),
'acc_metric': create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')})
img_size = net.input_shape[1]
image_filename = lambda fp: fp + '.jpg'
d_test_x = TensorResize((img_size, img_size))(ImageSource(TEST_DATA, image_filename=image_filename))
d_test_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_test_x)
d_test_pred = Predict(net)(d_test_x)
d_test_pred.metadata['properties'] = ['background'] + PART_NAMES
d_x, d_y = process_data(VALIDATION_DATA, img_size)
d_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_x)
d_y = AddBackgroundMap(use_lane_names=['Y'])(d_y)
d_train = Join()([d_x, d_y])
print('losses:', net.evaluate_generator(d_train.batch_array_tuple_generator(batch_size=3), 3))
# the tensor which needs to be modified
pred_y = Predict(net)(d_x)
Visualize(('slices', 'labels'))(Join()([d_test_x, d_test_pred]))
Visualize(('slices', 'labels', 'labels'))(Join()([d_x, pred_y, d_y]))
</code></pre>
<p>As for the Predict function, here is the snippet:</p>
<p>Alternatively, I've found that by using the following, one can get access to the tensor:</p>
<pre><code># for sample_img, in d_x.batch_array_tuple_generator(batch_size=3, n_samples=5):
# aa = net.predict(sample_img)
# indexes = np.argmax(aa,axis=3)
# print(indexes)
# import pdb
# pdb.set_trace()
</code></pre>
<p>But I have no idea how this works; I've never used pdb.</p>
<p>In case if anyone wants to also see the <strong>training function</strong>, here it is:</p>
<pre><code>def train(model_name='refine_res', k=3, recompute=False, img_size=224,
epochs=10, train_decoder_only=False, augmentation_boost=2, learning_rate=0.001,
opt='rmsprop'):
print("Traning on: " + str(PART_NAMES))
print("In Total: " + str(1 + len(PART_NAMES)) + " parts.")
metrics = [create_iou_metric(1 + len(PART_NAMES)),
create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')]
if model_name == 'dummy':
net = build_dummy((224, 224, 3), 1 + len(PART_NAMES)) # 1+ because background class
elif model_name == 'refine_res':
net = build_resnet50_upconv_refine((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
objective_function=categorical_crossentropy,
metrics=metrics, train_full=not train_decoder_only)
elif model_name == 'vgg_upconv':
net = build_vgg_upconv((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
objective_function=categorical_crossentropy,metrics=metrics, train_full=not train_decoder_only)
else:
net = load_model(model_name)
d_x, d_y = process_data(TRAINING_DATA, img_size, recompute=recompute, ignore_cache=False)
d = Join()([d_x, d_y])
# create more samples by rotating top view images and translating
images_to_be_rotated = {}
factor = 5
for root, dirs, files in os.walk(TRAINING_DATA, topdown=False):
for name in dirs:
format = str(name + '/' + name) # construct the format of foldername/foldername
images_to_be_rotated.update({format: factor})
d_aug = ImageAugmentation(factor_per_filepath_prefix=images_to_be_rotated, rotation_variance=90, recalc_base_seed=True)(d)
d_aug = ImageAugmentation(factor=3 * augmentation_boost, color_interval=0.03, shift_interval=0.1, contrast=0.4, recalc_base_seed=True, use_lane_names=['X'])(d_aug)
d_aug = ImageAugmentation(factor=2, rotation_variance=20, recalc_base_seed=True)(d_aug)
d_aug = ImageAugmentation(factor=7 * augmentation_boost, rotation_variance=10, translation=35, mirror=True, recalc_base_seed=True)(d_aug)
# apply augmentation on the images of the training dataset only
d_aug = AddBackgroundMap(use_lane_names=['Y'])(d_aug)
d_aug.metadata['properties'] = ['background'] + PART_NAMES
# substract mean and shuffle
d_aug = Shuffle()(d_aug)
d_aug, d_val = RandomSplit(0.8)(d_aug)
d_aug = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_aug)
d_val = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_val)
# Visualize()(d_aug)
d_aug.configure()
d_val.configure()
print('training size:', d_aug.size())
batch_size = 4
callbacks = []
#callbacks += [EarlyStopping(patience=10)]
callbacks += [ModelCheckpoint(filepath="trained_models/"+model_name + '.hdf5', monitor='val_iou_metric', mode='max',
verbose=1, save_best_only=True)]
callbacks += [CSVLogger('logs/'+model_name + '.csv')]
history = History()
callbacks += [history]
# sess = K.get_session()
# sess.run(tf.initialize_local_variables())
net.fit_generator(d_aug.batch_array_tuple_generator(batch_size=batch_size, shuffle_samples=True), steps_per_epoch=d_aug.size() // batch_size,
validation_data=d_val.batch_array_tuple_generator(batch_size=batch_size), validation_steps=d_val.size() // batch_size,
callbacks=callbacks, epochs=epochs)
return {k: (max(history.history[k]), min(history.history[k])) for k in history.history.keys()}
</code></pre>
|
<p>For segmentation tasks, considering that your batch is one image, each pixel in the image is assigned a probability of belonging to each class. Suppose you have 5 classes and the image has 784 pixels (28x28); from <code>net.predict</code> you will get an array of shape <code>(784,5)</code>, i.e. each of the 784 pixels is assigned 5 probability values, one per class. When you take the argmax over the class axis (as in <code>np.argmax(aa,axis=3)</code> for your 4-D prediction), you get the index of the highest probability for each pixel, of shape <code>(784,)</code>; you can then reshape it to 28x28 with <code>indexes.reshape(28,28)</code> and you get the mask of your predictions.</p>
<p>Reducing the problem to a 7x7 image and 4 classes (0-3), that looks like:</p>
<pre><code>array([[2, 1, 0, 1, 2, 3, 1],
[3, 1, 1, 0, 3, 0, 0],
[3, 3, 2, 2, 0, 3, 1],
[1, 1, 0, 3, 1, 3, 1],
[0, 0, 0, 3, 3, 1, 0],
[1, 2, 3, 0, 1, 2, 3],
[0, 2, 1, 1, 0, 1, 3]])
</code></pre>
<p>You want to extract the indexes where the model predicted class 1:</p>
<pre><code>segment_1=np.where(indexes==1)
</code></pre>
<p>Since <code>indexes</code> is a 2-dimensional array, <code>segment_1</code> will be a tuple of two arrays, where the first array holds the row indexes and the second array holds the column indexes.</p>
<pre><code>(array([0, 0, 0, 1, 1, 2, 3, 3, 3, 3, 4, 5, 5, 6, 6, 6]), array([1, 3, 6, 1, 2, 6, 0, 1, 4, 6, 5, 0, 4, 2, 3, 5]))
</code></pre>
<p>Looking at the first numbers in the first and second arrays, <code>0</code> and <code>1</code> point to where a 1 is located in <code>indexes</code>.</p>
<p>You can extract those values like this:</p>
<pre><code>indexes[segment_1]
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
</code></pre>
<p>and then proceed with the next class you want to get, let's say 2:</p>
<pre><code>segment_2=np.where(indexes==2)
segment_2
(array([0, 0, 2, 2, 5, 5, 6]), array([0, 4, 2, 3, 1, 5, 1]))
</code></pre>
<p>And if you want each class by itself,
you can create a copy of <code>indexes</code> for each class (4 copies in total), e.g. <code>class_1 = indexes.copy()</code> (a copy, so the original mask is not modified), and set to zero any value that is not equal to 1: <code>class_1[class_1 != 1] = 0</code>, which gives something like this:</p>
<pre><code>array([[0, 1, 0, 1, 0, 0, 1],
[0, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1],
[1, 1, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 0, 1, 0]])
</code></pre>
<p>To the eye it may look like there are contours, but from this example you can tell that there is no clear contour for each segment. The only way I could think of is to loop over the image rows and record where the value changes, and then do the same over the columns.
I am not entirely sure this would be the ideal approach.
I hope I covered some part of your question.
pdb is just a debugging package that allows you to execute your code step by step.</p>
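<p>If you do need explicit boundary coordinates for each segment, one option (not part of the approach above — just a sketch, assuming scikit-image is installed) is to run a contour finder on the per-class binary mask:</p>
<pre><code>import numpy as np
from skimage import measure

# the 7x7 argmax example from above
indexes = np.array([[2, 1, 0, 1, 2, 3, 1],
                    [3, 1, 1, 0, 3, 0, 0],
                    [3, 3, 2, 2, 0, 3, 1],
                    [1, 1, 0, 3, 1, 3, 1],
                    [0, 0, 0, 3, 3, 1, 0],
                    [1, 2, 3, 0, 1, 2, 3],
                    [0, 2, 1, 1, 0, 1, 3]])

binary_mask = (indexes == 1).astype(float)           # mask for class 1
contours = measure.find_contours(binary_mask, 0.5)   # list of (N, 2) arrays of (row, col) boundary points
for contour in contours:
    print(contour)
</code></pre>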
|
python|tensorflow
| 3
|
4,987
| 50,740,557
|
PyTorch how to implement disconnection (connections and corresponding gradients are masked)?
|
<p>I am trying to implement the following graph. As you can see, the neurons are not fully connected, i.e., the weights are masked and so are their corresponding gradients.</p>
<p><a href="https://i.stack.imgur.com/J15OL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J15OL.png" alt="enter image description here"></a></p>
<pre><code>import torch
import numpy as np
x = torch.rand((3, 1))
# tensor([[ 0.8525],
# [ 0.1509],
# [ 0.9724]])
weights = torch.rand((2, 3), requires_grad=True)
# tensor([[ 0.3240, 0.0792, 0.6858],
# [ 0.5248, 0.4565, 0.3625]])
mask = torch.Tensor([[0,1,0],[1,0,1]])
# tensor([[ 0., 1., 0.],
# [ 1., 0., 1.]])
mask_weights = weights * mask
# tensor([[ 0.0000, 0.0792, 0.0000],
# [ 0.5248, 0.0000, 0.3625]])
y = torch.mm(mask_weights, x)
# tensor([[ 0.0120],
# [ 0.7999]])
</code></pre>
<p>This question is originally posted at <a href="https://discuss.pytorch.org/t/weights-disconnection-implementation/19314" rel="nofollow noreferrer">Pytorch Forum</a>. Note the above way </p>
<blockquote>
<p>mask_weights = weights * mask</p>
</blockquote>
<p>is <strong>NOT</strong> suitable since corresponding gradients are not 0.</p>
<p>Is there an elegant way to do that please?</p>
<p>Thank you in advance.</p>
|
<p>Actually, the above method is correct. <strong>The disconnections essentially block feed-forward and back-propagation on the corresponding connections.</strong> In other words, both weights and gradients are masked. The code in the question shows the former, while this answer shows the latter.</p>
<pre><code>mask_weights.register_hook(print)
z = torch.Tensor([[1], [1]])
# tensor([[ 1.],
# [ 1.]])
out = (y-z).mean()
# tensor(-0.6595)
out.backward()
# tensor([[ 0.1920, 0.1757, 0.0046],
# [ 0.1920, 0.1757, 0.0046]])
weights.grad
# tensor([[ 0.0000, 0.1757, 0.0000],
# [ 0.1920, 0.0000, 0.0046]])
</code></pre>
<p>As you can see, the gradients of weights are masked automatically.</p>
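<p>If you would rather not multiply by the mask in every forward pass, one alternative (not part of the approach above, just a sketch) is to zero the disconnected weights once and keep their gradients masked with a hook, so they can never move away from zero:</p>
<pre><code>import torch

x = torch.rand((3, 1))
weights = torch.rand((2, 3), requires_grad=True)
mask = torch.Tensor([[0, 1, 0], [1, 0, 1]])

with torch.no_grad():
    weights *= mask                                # zero the disconnected weights once
weights.register_hook(lambda grad: grad * mask)    # keep their gradients at zero

y = torch.mm(weights, x)                           # forward pass without explicit masking
y.mean().backward()
print(weights.grad)                                # masked positions are 0
</code></pre>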
|
python|neural-network|pytorch
| 1
|
4,988
| 50,745,224
|
Replacing column value by NaN when another column has a certain value in pandas
|
<p>I have the following data frame:</p>
<pre><code>Month,Value1,Value2
02,1,10
03,2,2
04,3,12
</code></pre>
<p>In this dataframe I wish to replace <code>Value1</code> with NaN each time <code>Value2</code> is less than 10. So the desired output will look as follows:</p>
<pre><code>Month,Value1,Value2
02,1,10
03,NaN,2
04,3,12
</code></pre>
<p>I did try the following code:</p>
<pre><code>data = pd.read_csv("test.csv", index_col=[0])
data = data.loc[data['Value2'] < 10, 'Value1'] = np.nan
</code></pre>
<p>But unfortunately it's not working and gives me back the following error:</p>
<pre><code>AttributeError: 'float' object has no attribute 'loc'
</code></pre>
<p>Does anyone know why? and maybe how to fix that?</p>
|
<p>You don't need a double assignment — that is exactly what breaks: in <code>data = data.loc[...] = np.nan</code> Python assigns <code>np.nan</code> to the targets from left to right, so <code>data</code> becomes a float first and the subsequent <code>data.loc[...]</code> lookup fails with <code>'float' object has no attribute 'loc'</code>. Just do the single assignment:</p>
<pre><code>data.loc[data['Value2'] < 10, 'Value1'] = np.nan
</code></pre>
<p>Remember <a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>pd.DataFrame.loc</code></a> can be used both as a "getter" and as a "setter". So it is perfectly natural to assign a value to <code>data.loc[slice1, slice2]</code>.</p>
|
python|pandas|dataframe|nan
| 2
|
4,989
| 66,496,428
|
Automatically detect security identifier columns using Visions
|
<p>I'm interested in using the <a href="https://dylan-profiler.github.io/visions/index.html" rel="nofollow noreferrer">Visions</a> library to automate the process of identifying certain types of security (stock) identifiers. The <a href="https://dylan-profiler.github.io/visions/visions/applications/validation.html" rel="nofollow noreferrer">documentation mentions</a> that it could be used in such a way for ISBN codes but I'm looking for a more concrete example of how to do it. I think the process would be pretty much identical for the fields I'm thinking of as they all have check digits (<a href="https://en.wikipedia.org/wiki/International_Securities_Identification_Number" rel="nofollow noreferrer">ISIN</a>, <a href="https://en.wikipedia.org/wiki/SEDOL" rel="nofollow noreferrer">SEDOL</a>, <a href="https://en.wikipedia.org/wiki/CUSIP" rel="nofollow noreferrer">CUSIP</a>).</p>
<p>My general idea is that I would create custom types for the different identifier types and could use those types to</p>
<ul>
<li>Take a dataframe where the types are unknown and identify columns matching the types (even if it's not a 100% match)</li>
<li>Validate the types on a dataframe where the intended type is known</li>
</ul>
|
<p>Great question and use-case! Unfortunately, the <a href="https://dylan-profiler.github.io/visions/visions/getting_started/extending.html" rel="nofollow noreferrer">documentation</a> on making new types probably needs a little love right now as there were API breaking changes with the 0.7.0 release. Both the previous link and <a href="https://www.ianeaves.com/post/titanic-visions/" rel="nofollow noreferrer">this</a> post from August, 2020 should cover the conceptual idea of type creation in greater detail. If any of those examples break then mea culpa and our apologies, we switched to a dispatch based implementation to support different backends (pandas, numpy, dask, spark, etc...) for each type. You shouldn't have to worry about that for now but if you're interested you can find the default type definitions <a href="https://github.com/dylan-profiler/visions/tree/develop/src/visions/types" rel="nofollow noreferrer">here</a> with their backends <a href="https://github.com/dylan-profiler/visions/tree/develop/src/visions/backends" rel="nofollow noreferrer">here</a>.</p>
<h1>Building an ISBN Type</h1>
<p>We need to make two basic decisions when defining a type:</p>
<ol>
<li>What defines the type</li>
<li>What other types are our new type related to?</li>
</ol>
<p>For the ISBN use-case <a href="https://www.oreilly.com/library/view/regular-expressions-cookbook/9781449327453/ch04s13.html" rel="nofollow noreferrer">O'Reilly</a> provides a validation regex to match ISBN-10 and ISBN-13 codes. So,</p>
<blockquote>
<ol>
<li>What defines a type?</li>
</ol>
</blockquote>
<p>We want every element in the sequence to be a string which matches a corresponding ISBN-10 or ISBN-13 regex</p>
<blockquote>
<ol start="2">
<li>What other types are our new type related to?</li>
</ol>
</blockquote>
<p>Since ISBN's are themselves strings we can use the default String type provided by visions.</p>
<h2>Type Definition</h2>
<pre class="lang-py prettyprint-override"><code>from typing import Sequence
import pandas as pd
from visions.relations import IdentityRelation, TypeRelation
from visions.types.string import String
from visions.types.type import VisionsBaseType
isbn_regex = "^(?:ISBN(?:-1[03])?:? )?(?=[0-9X]{10}$|(?=(?:[0-9]+[- ]){3})[- 0-9X]{13}$|97[89][0-9]{10}$|(?=(?:[0-9]+[- ]){4})[- 0-9]{17}$)(?:97[89][- ]?)?[0-9]{1,5}[- ]?[0-9]+[- ]?[0-9]+[- ]?[0-9X]$"
class ISBN(VisionsBaseType):
@staticmethod
def get_relations() -> Sequence[TypeRelation]:
relations = [
IdentityRelation(String),
]
return relations
@staticmethod
def contains_op(series: pd.Series, state: dict) -> bool:
return series.str.contains(isbn_regex).all()
</code></pre>
<p>Looking at this closely there are three things to take note of.</p>
<ol>
<li>The new type inherits from <code>VisionsBaseType</code></li>
<li>We had to define a <code>get_relations</code> method which is how we relate a new type to others we might want to use in a typeset. In this case, I've used an <code>IdentityRelation</code> to String which means ISBNs are subsets of String. We can also use <code>InferenceRelation</code>'s when we want to support relations which change the underlying data (say converting the string '4.2' to the float 4.2).</li>
<li>A <code>contains_op</code> this is our definition of the type. In this case, we are applying a regex string to every element in the input and verifying it matched the regex provided by O'Reilly.</li>
</ol>
<h2>Extensions</h2>
<p>In theory ISBNs can be encoded in what looks like a 10 or 13 digit integer as well - to work with those you might want to create an <code>InferenceRelation</code> between <code>Integer</code> and <code>ISBN</code>. A simple implementation would involve coercing Integers to string and applying the above regex.</p>
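<p>As a quick sanity check of the definition (this only exercises the <code>contains_op</code> defined above directly; wiring the type into a full typeset for detection and validation follows the extending-visions docs linked earlier):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

valid = pd.Series(["978-3-16-148410-0", "0-306-40615-2"])
mixed = pd.Series(["978-3-16-148410-0", "not an isbn"])

print(ISBN.contains_op(valid, {}))  # True - every element matches the ISBN regex
print(ISBN.contains_op(mixed, {}))  # False
</code></pre>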
|
pandas|dataframe|custom-data-type
| 0
|
4,990
| 66,404,003
|
Oversampling of image data for keras
|
<p>I am working on Kaggle competition and trying to solve a multilabel classification problem with keras.</p>
<p>My dataset is highly imbalanced. I am familiar with this concept and have dealt with it for simple machine learning datasets, but I am not sure how to deal with both images and csv data.</p>
<p>There are a couple of questions, but they did not help me.</p>
<p><a href="https://stackoverflow.com/questions/53666759/use-smote-to-oversample-image-data">Use SMOTE to oversample image data</a></p>
<p><a href="https://stackoverflow.com/questions/48532069/how-to-oversample-image-dataset-using-python">How to oversample image dataset using Python?</a></p>
<pre><code>Class
No finding 25462
Aortic enlargement 5738
Cardiomegaly 4345
Pleural thickening 3866
Pulmonary fibrosis 3726
Nodule/Mass 2085
Pleural effusion 1970
Lung Opacity 1949
Other lesion 1771
Infiltration 997
ILD 792
Calcification 775
Consolidation 441
Atelectasis 229
Pneumothorax 185
</code></pre>
<p>I am trying to do oversampling, but not sure how to approach it. I have 15000 <code>png</code> images and <code>train.csv</code> dataset, which looks like:</p>
<pre><code>image_id class_name class_id rad_id x_min y_min x_max y_max width height
0 50a418190bc3fb1ef1633bf9678929b3 No finding 14 R11 0.0 0.0 0.0 0.0 2332 2580
1 21a10246a5ec7af151081d0cd6d65dc9 No finding 14 R7 0.0 0.0 0.0 0.0 2954 3159
2 9a5094b2563a1ef3ff50dc5c7ff71345 Cardiomegaly 3 R10 691.0 1375.0 1653.0 1831.0 2080 2336
3 051132a778e61a86eb147c7c6f564dfe Aortic enlargement 0 R10 1264.0 743.0 1611.0 1019.0 2304 2880
4 063319de25ce7edb9b1c6b8881290140 No finding 14 R10 0.0 0.0 0.0 0.0 2540 3072
</code></pre>
<p>How to attack this problem, when I have images and csv?</p>
<p>When I converted data, it looks like:</p>
<pre><code> Images Class
56 d106ec9b305178f3da060efe3191499a.png Nodule/Mass
38694 081d1700020b6bf0099f1e4d8aeec0f3.png Lung Opacity
50141 ff8ef73390f04480aba0be7810ef94cf.png No finding
233 253d35b7096d0957bd79cfb4b1c954e1.png No finding
2166 1951e0eba7c68aa1fbd6d723f19ee7c4.png Pleural thickening
</code></pre>
<p>I use image generator</p>
<pre><code># Create a train generator
train_generator = train_dataGen.flow_from_dataframe(dataframe = train,
directory = 'my_directory',
x_col = 'Images',
y_col = 'Class',
class_mode = 'categorical',
# target_size = (256, 256),
batch_size = 32)
</code></pre>
<p>I tried something dumb, but obviously did not work.</p>
<pre><code># Create an instance
oversample = SMOTE()
# Oversample
train_ovsm, valid_ovsm = oversample.fit_resample(train_ovsm, valid_ovsm)
</code></pre>
<p>Gives me an error:</p>
<pre><code>ValueError: could not convert string to float: '954984f75efe6890cfa45d0784a3a1e6.png'
</code></pre>
<p>Appreciate tips and good tutorials, cannot find anything so far.</p>
|
<p>I'm not sure if this answer satisfies you or not, but here is my thought. If I were you, I wouldn't try to balance it in the way you're trying now. IMO, that's not the proper way. Your main concern is that this <a href="https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/data" rel="nofollow noreferrer">VinBigData</a> dataset is <strong>highly imbalanced</strong> and you're not sure how to address it properly.</p>
<p>Here are some first approaches one would typically adopt to address this issue in this competition.</p>
<pre><code>- External dataset
- Heavy and meaningful augmentation
- Modify the loss function
</code></pre>
<h3>External Datasets</h3>
<ul>
<li>NIH Chest X-rays : <a href="https://www.kaggle.com/nih-chest-xrays/data" rel="nofollow noreferrer">Data</a></li>
<li>SIIM-ACR Pneumothorax Segmentation : <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/overview" rel="nofollow noreferrer">Data</a></li>
<li>OSIC Pulmonary Fibrosis Progression : <a href="https://www.kaggle.com/c/osic-pulmonary-fibrosis-progression" rel="nofollow noreferrer">Data</a></li>
<li>RSNA Pneumonia Detection Challenge : <a href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/overview" rel="nofollow noreferrer">Data</a></li>
<li>Chest X-Ray Images (Pneumonia) : <a href="https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia" rel="nofollow noreferrer">Data</a></li>
</ul>
<p>What you need to do is collect all <strong>possible external samples</strong> from these datasets, combine them and build a new dataset. It may take time but it's worth it.</p>
<h2>Medical Image Augmentation</h2>
<p>We all know augmentation is one of the key strategies for deep learning model training. But it would make sense to choose the right augmentation. <a href="https://github.com/albumentations-team/albumentations#medical-imaging" rel="nofollow noreferrer">Here</a> are some demonstrations. The main intuition is to try not to destroy sensitive information. Be careful with that.</p>
<h2>Class Loss Weighting</h2>
<p>You can modify the loss function to weight the predicted score. <a href="https://stackoverflow.com/a/48700950/9215780">Here</a> is a detailed explanation of this topic.</p>
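<p>For the third point, one concrete option in Keras is to pass per-class weights to <code>fit</code>, so that rare classes contribute more to the loss. A small sketch, assuming a single-label setup like your <code>class_mode='categorical'</code> generator (<code>train</code> and <code>train_generator</code> come from your question; <code>model</code> stands for your compiled Keras model):</p>
<pre><code>import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(train['Class'])
weights = compute_class_weight(class_weight='balanced',
                               classes=classes,
                               y=train['Class'])
# Keras expects {class_index: weight}, keyed by the generator's class indices
class_weight = {train_generator.class_indices[c]: w for c, w in zip(classes, weights)}

model.fit(train_generator, epochs=10, class_weight=class_weight)
</code></pre>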
|
python|tensorflow|keras|oversampling
| 1
|
4,991
| 57,659,624
|
Renaming columns in dataframe w.r.t another specific column
|
<p><strong>BACKGROUND:</strong> Large excel mapping file with about 100 columns and 200 rows converted to .csv. Then stored as dataframe. General format of df as below. </p>
<p>Starts with a named column (e.g. Sales) and following two columns need to be renamed. This pattern needs to be repeated for all columns in excel file.</p>
<p><strong>Essentially</strong>: Link the subsequent 2 columns to the "parent" one preceding them. </p>
<pre><code> Sales Unnamed: 2 Unnamed: 3 Validation Unnamed: 5 Unnamed: 6
0 Commented No comment Commented No comment
1 x x
2 x x
3 x x
</code></pre>
<p><strong>APPROACH FOR SOLUTION:</strong> I assume it would be possible to begin with an index (e.g. index of Sales column 1 = x) and then rename the following two columns as (x+1) and (x+2).
Then take in the text for the next named column (e.g. Validation) and so on.</p>
<p>I know the <code>rename()</code> function for dataframes.</p>
<p>BUT, I am not sure how to apply it iteratively for <em>changing</em> column titles.</p>
<p><strong>EXPECTED OUTPUT:</strong> Unnamed 2 & 3 changed to Sales_Commented and Sales_No_Comment, respectively. </p>
<p>Similarly Unnamed 5 & 6 change to Validation_Commented and Validation_No_Comment.</p>
<p>Again, repeated for all 100 columns of file. </p>
<p>EDIT: Due to the large number of cols in the file, creating a manual list to store column names is not a viable solution. I have already seen this elsewhere on SO. Also, the amount of columns and departments (Sales, Validation) changes in different excel files with the mapping. So a dynamic solution is required. </p>
<pre><code> Sales Sales_Commented Sales_No_Comment Validation Validation_Commented Validation_No_Comment
0 Commented No comment Commented No comment
1 x x
2 x
3 x x x
</code></pre>
<p>As a python novice, I considered a possible approach for the solution using the limited knowledge I have, but not sure what this would look like as a workable code.</p>
<p>I would appreciate all help and guidance.</p>
|
<p>1. You need to make a list with the column names that you want.<br>
2. Make it a dict with the old column names as the keys and the new column names as the values.<br>
3. Use <code>df.rename(columns=your_dictionary)</code>.</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.read_excel("name of the excel file",sheet_name = "name of sheet")
print(df.head())
Output>>>
Sales Unnamed : 2 Unnamed : 3 Validation Unnamed : 5 Unnamed : 6 Unnamed :7
0 NaN Commented No comment NaN Comment No comment Extra
1 1.0 2 1 1.0 1 1 1
2 3.0 1 1 1.0 1 1 1
3 4.0 3 4 5.0 5 6 6
4 5.0 1 1 1.0 21 3 6
# get new names based on the values of a previous named column
new_column_names = []
counter = 0
for col_name in df.columns:
if (col_name[:7].strip()=="Unnamed"):
new_column_names.append(base_name+"_"+df.iloc[0,counter].replace(" ", "_"))
else:
base_name = col_name
new_column_names.append(base_name)
counter +=1
# convert to dict key pair
dictionary = dict(zip(df.columns.tolist(),new_column_names))
# rename columns
df = df.rename(columns=dictionary)
# drop the first row (the 'Commented'/'No comment' label row)
df = df.iloc[1:].reset_index(drop=True)
print(df.head())
Output>>
Sales Sales_Commented Sales_No_comment Validation Validation_Comment Validation_No_comment Validation_Extra
0 1.0 2 1 1.0 1 1 1
1 3.0 1 1 1.0 1 1 1
2 4.0 3 4 5.0 5 6 6
3 5.0 1 1 1.0 21 3 6
</code></pre>
|
python-3.x|pandas|dataframe
| 2
|
4,992
| 57,391,329
|
Numeric precision of climate science calculations in python
|
<p>I am currently trying to recreate findings of a paper (<a href="https://www.researchgate.net/publication/309723672_Evidence_for_wave_resonance_as_a_key_mechanism_for_generating_high-amplitude_quasi-stationary_waves_in_boreal_summer" rel="nofollow noreferrer">https://www.researchgate.net/publication/309723672_Evidence_for_wave_resonance_as_a_key_mechanism_for_generating_high-amplitude_quasi-stationary_waves_in_boreal_summer</a>) for my masters thesis.
I calculate the meridional (degrees North) distribution of the square of the meridional wavenumber of a Rossby wave (a certain kind of flow in the atmosphere), for a number of days. This value is only dependent on the mean zonal (degrees West and East) winds and their first and second meridional derivative, as well as the zonal wavenumber of the 2D Rossby wave. This kind of wave can be thought of like a wave on a drum for example, only in a spherical environment, the atmosphere.
I am using python 3.6.5, and I suspect the problem to be numeric precision, however I am not sure.</p>
<p>I have read through other threads concerning numeric precision, and came across this one for example: <a href="https://stackoverflow.com/questions/43848038/python-sine-and-cosine-precision">python sine and cosine precision</a> .
However, I have not tried this yet because I am trying to avoid writing my own trigonometric functions. Also, since I have to process quite a large amount of data, I try not to slow down my code. From experiments I found that the math library is not more precise than the numpy library concerning the trigonometric functions.</p>
<p>Here is the snippet of code that concerns me:</p>
<pre class="lang-py prettyprint-override"><code>Lat = np.linspace(0,90,37)
MeridWN = np.zeros((29,36), dtype='float64')
######################################################
#define Meridional wavenumber, l^2
for i in range(5,28,10):
for j in range(36):
MeridWN[i,j] = (((2*EarthRot*np.cos(Lat[j]*np.pi/180.0)**3.0)/(EarthRad*ZonMeanZonWiNH[j]))-
((np.cos(Lat[j]*np.pi/180.)**2.)/(EarthRad**2.0*ZonMeanZonWiNH[j]))*
ZonMeanZonWiMeridGradGrad[j]+
((np.sin(Lat[j]*np.pi/180.)*np.cos(Lat[j]*np.pi/180.))/(EarthRad**2.0*ZonMeanZonWiNH[j]))*
ZonMeanZonWiMeridGrad[j]+(1./(EarthRad**2.0))-(ZonWN[i]/EarthRad)**2.0)
MeridWNMerge[i,x,j] = MeridWN[i,j]
</code></pre>
<p>Index i is for a range of zonal wavenumbers, x is the day (this snippet is from a larger loop that runs over the days) and j is the Latitude position.
To calculate the derivatives I use the numpy gradient functions like this:</p>
<pre><code>ZonMeanZonWiMeridGrad = np.gradient(ZonMeanZonWiNH,np.linspace(0,90,37))
ZonMeanZonWiMeridGradGrad = np.gradient(ZonMeanZonWiMeridGrad,np.linspace(0,90,37))
</code></pre>
<p><a href="https://i.imgur.com/V6I7kEV.png" rel="nofollow noreferrer">This</a> is the formula for the calculation of the square of the meridional wavenumber (l), where Omega is the Earths rotation, Phi is the latitudinal position, a is Earths radius, U is the zonal mean, averaged zonally and k is the zonal Wavenumber, in my case an array ranging from 5.5 to 8.5.</p>
<p><a href="https://i.imgur.com/PTQ9n7S.png" rel="nofollow noreferrer">This</a> is a comparison of my zonal mean zonal wind field (bottom) from June to August and the one on from the paper (top), indicating that we have the same data and this is not the issue. The color scales are sligthly different, however the most prominent features of the wind profile are very similar, and the small differences shouldn't produce such a different <a href="https://i.imgur.com/CU3hziG.png" rel="nofollow noreferrer">profile of the meridional wavenumber (for k = 7)</a>, where the figure from the paper is again on top and mine on the bottom. Here the colorscales are again different, but large structural similarities should be captured nonetheless. As you can see, I deal with very small numbers, leading to my suspicion of having a numerically imprecise code.<br>
If you want, i can upload my entire code, however I think for the discussion about the precision this is sufficient.
I am trying to solve this problem no for about 2 weeks, trying all diferent changes in my code, some were good changes that, however none gave the desired output. </p>
<p>Thank you in advance,<br>
Thomas</p>
|
<p>If it is an issue with numerical precision, then the algorithm is not the problem but rather the variable types you are using. Try switching to a longer floating-point representation, e.g. <code>dtype='c16'</code> (a 128-bit complex floating-point number) or an extended-precision real dtype such as <code>np.longdouble</code>; see
<a href="https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html</a></p>
|
python|numpy|precision|calculation
| 0
|
4,993
| 72,945,031
|
Find date of the first occurance of values in columns of a data frame - Find start dates for each column
|
<p>How to find the date of the first occurrence of a value for columns A and B in this data frame?</p>
<p>So, I want <code>2012-04-03</code> of <strong>A</strong> and <code>2012-04-04</code> of column <strong>B</strong>:</p>
<pre><code>| | A | B |
|:--------------------|----:|----:|
| 2012-04-01 00:00:00 | nan | nan |
| 2012-04-02 00:00:00 | nan | nan |
| 2012-04-03 00:00:00 | 4 | nan | <- First occurrence of A
| 2012-04-04 00:00:00 | 6 | 2 | <- First occurrence of B
| 2012-04-05 00:00:00 | 5 | nan |
| 2012-04-06 00:00:00 | nan | 2 |
| 2012-04-07 00:00:00 | 8 | 3 |
| 2012-04-08 00:00:00 | 4 | nan |
</code></pre>
<p>Here is the code that makes the df:</p>
<pre><code>df = pd.DataFrame(data={"A":[np.NaN, np.NaN, 4,6,5,np.NaN,8,4],"B":[np.NaN,np.NaN,np.NaN,2,np.NaN,2,3, np.NaN,]}, index=pd.date_range('2012-04-01', '2012-04-08'))
</code></pre>
<p>I tried iterating over the columns, then using <code>dropna()</code> to get rid of <code>NaNs</code> and then retrieving the date via the index. ... I am sure that there are better ways.</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.first_valid_index.html" rel="nofollow noreferrer"><code>first_valid_index</code></a>:</p>
<pre><code>>>> df.apply(lambda x: x.first_valid_index())
A 2012-04-03
B 2012-04-04
dtype: datetime64[ns]
</code></pre>
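<p>A vectorized alternative that avoids <code>apply</code> (assuming every column has at least one non-null value, since <code>idxmax</code> would otherwise just return the first index):</p>
<pre><code>>>> df.notna().idxmax()
A   2012-04-03
B   2012-04-04
dtype: datetime64[ns]
</code></pre>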
|
python|pandas
| 2
|
4,994
| 72,979,240
|
pandas replace text for top N rows for each category in a column
|
<p>I have a df something like this</p>
<pre><code> animal age comment
1 cat 1 xyz
2 cat 2 xyz
3 cat 3 xyz
4 cat 4 xyz
5 cat 5 xyz
6 dog 1 xyz
7 dog 2 xyz
8 dog 3 xyz
9 dog 4 xyz
10 dog 5 xyz
</code></pre>
<p>It is already sorted by animal and age. My task is to replace the comment for top two rows in each animal with a certain text and then the next two rows with another text. And rest of the rows should be deleted.</p>
<p>Desired output:</p>
<pre><code> animal age comment
1 cat 1 young
2 cat 2 young
3 cat 3 old
4 cat 4 old
5 dog 1 young
6 dog 2 young
7 dog 3 old
8 dog 4 old
</code></pre>
<p>I am able to do this but in 5-7 steps. I was wondering if there is a more efficient way to do this please.</p>
|
<p>The trick here is to use <code>cumcount</code> to create a sequential counter per <code>animal</code> group, use <code>np.where</code> to update the values in <code>comment</code> based on that counter, and finally keep only the rows where the counter is below 4:</p>
<pre><code>i = df.groupby('animal').cumcount()
df['comment'] = np.where(i < 2, 'young', 'old')
df[i < 4]
</code></pre>
<hr />
<pre><code> animal age comment
1 cat 1 young
2 cat 2 young
3 cat 3 old
4 cat 4 old
6 dog 1 young
7 dog 2 young
8 dog 3 old
9 dog 4 old
</code></pre>
|
python|pandas|dataframe
| 3
|
4,995
| 51,420,032
|
using saved sklearn model to make prediction
|
<p>I have a saved logistic regression model which I trained with training data and saved using joblib. I am trying to load this model in a different script, pass it new data and make a prediction based on the new data.</p>
<p>I am getting the following error "sklearn.exceptions.NotFittedError: CountVectorizer - Vocabulary wasn't fitted." Do I need to fit the data again? I would have thought that the point of being able to save the model was to not have to do this.</p>
<p>The code I am using is below excluding the data cleaning section. Any help to get the prediction to work would be appreciated.</p>
<pre><code>new_df = pd.DataFrame(latest_tweets,columns=['text'])
new_df.to_csv('new_tweet.csv',encoding='utf-8')
csv = 'new_tweet.csv'
latest_df = pd.read_csv(csv)
latest_df.dropna(inplace=True)
latest_df.reset_index(drop=True,inplace=True)
new_x = latest_df.text
loaded_model = joblib.load("finalized_mode.sav")
tfidf_transformer = TfidfTransformer()
cvec = CountVectorizer()
x_val_vec = cvec.transform(new_x)
X_val_tfidf = tfidf_transformer.transform(x_val_vec)
result = loaded_model.predict(X_val_tfidf)
print (result)
</code></pre>
|
<p>Your training part have 3 parts which are fitting the data:</p>
<ul>
<li><p><code>CountVectorizer</code>: Learns the vocabulary of the training data and returns counts</p></li>
<li><p><code>TfidfTransformer</code>: Learns the counts of the vocabulary from previous part, and returns tfidf</p></li>
<li><p><code>LogisticRegression</code>: Learns the coefficients for features for optimum classification performance.</p></li>
</ul>
<p>Since each part is learning something about the data and using it to output the transformed data, you need to have all 3 parts while testing on new data. But you are only saving the <code>lr</code> with joblib, so the other two are lost and with it is lost the training data vocabulary and count. </p>
<p>Now in your testing part, you are initializing a new <code>CountVectorizer</code> and <code>TfidfTransformer</code> that have never seen the training data. Calling <code>transform()</code> on them directly raises the <code>NotFittedError</code> you see; and even if you fitted them (<code>fit_transform()</code>), they would learn a vocabulary only from this new data, so there would be fewer features than in training. But you loaded the previously saved LR model, which expects the data to have the same features as the training data. Hence an error like:</p>
<pre><code>ValueError: X has 130 features per sample; expecting 223086
</code></pre>
<p>What you need to do is this:</p>
<h1>During training:</h1>
<pre><code>filename = 'finalized_model.sav'
joblib.dump(lr, filename)
filename = 'finalized_countvectorizer.sav'
joblib.dump(cvec, filename)
filename = 'finalized_tfidftransformer.sav'
joblib.dump(tfidf_transformer, filename)
</code></pre>
<h1>During testing</h1>
<pre><code>loaded_model = joblib.load("finalized_model.sav")
loaded_cvec = joblib.load("finalized_countvectorizer.sav")
loaded_tfidf_transformer = joblib.load("finalized_tfidftransformer.sav")
# Observe that I only use transform(), not fit_transform()
x_val_vec = loaded_cvec.transform(new_x)
X_val_tfidf = loaded_tfidf_transformer.transform(x_val_vec)
result = loaded_model.predict(X_val_tfidf)
</code></pre>
<p>Now you won't get that error.</p>
<h1>Recommendation:</h1>
<p>You should use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html" rel="noreferrer">TfidfVectorizer</a> in place of both CountVectorizer and TfidfTransformer, so that you don't have to use two objects all the time. </p>
<p>And along with that you should use Pipeline to combine the two steps:- TfidfVectorizer and LogisticRegression, so that you only have to use a single object (which is easier to save and load and generic handling).</p>
<p>So edit the training part like this:</p>
<pre><code>tfidf_vectorizer = TfidfVectorizer()
lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])
# Internally your X_train will be automatically converted to tfidf
# and that will be passed to lr
tfidf_lr_pipe.fit(X_train, y_train)
# Similarly here only transform() will be called internally for tfidfvectorizer
# And that data will be passed to lr.predict()
y_preds = tfidf_lr_pipe.predict(x_test)
# Now you can save this pipeline alone (which will save all its internal parts)
filename = 'finalized_model.sav'
joblib.dump(tfidf_lr_pipe, filename)
</code></pre>
<p>During testing, do this:</p>
<pre><code>loaded_pipe = joblib.load("finalized_model.sav")
result = loaded_pipe.predict(new_x)
</code></pre>
|
python|pandas|numpy|scikit-learn|logistic-regression
| 5
|
4,996
| 51,825,862
|
python pandas changing several columns in dataframe based on one condition
|
<p>I am new to Python and Pandas. I worked with SAS. In SAS I can use an IF statement with "Do; End;" to update values of several columns based on one condition.<br>
I tried the np.where() clause but it updates only one column. The "apply(function, ...)" approach also updates only one column. Placing an extra update statement inside the function body didn't help. </p>
<p>Suggestions?</p>
|
<p>You could use:</p>
<pre><code>for col in df:
df[col] = np.where(df[col] == your_condition, value_if, value_else)
</code></pre>
<p>eg:</p>
<pre><code> a b
0 0 2
1 2 0
2 1 1
3 2 0
for col in df:
df[col] = np.where(df[col]==0,12, df[col])
</code></pre>
<p>Output:</p>
<pre><code> a b
0 12 2
1 2 12
2 1 1
3 2 12
</code></pre>
<p>Or if you want to apply the condition only to some columns, select them in the <code>for</code> loop:</p>
<pre><code>for col in ['a','b']:
</code></pre>
<p>or just in this way:</p>
<pre><code>df[['a','b']] = np.where(df[['a','b']]==0,12, df[['a','b']])
</code></pre>
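<p>A pandas-native alternative is <code>DataFrame.mask</code>, which replaces the values wherever the condition is True:</p>
<pre><code>df[['a','b']] = df[['a','b']].mask(df[['a','b']] == 0, 12)
</code></pre>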
|
python|pandas
| 0
|
4,997
| 36,095,363
|
How to replace values in a column if another column is a NaN?
|
<p>So this should be the easiest thing on earth. Pseudocode:</p>
<pre><code>Replace column C with NaN if column E is NaN
</code></pre>
<p>I know I can do this by pulling out all dataframe rows where column E is NaN, replacing all of Column C, and then merging that on the original dataset, but that seems like a lot of work for a simple operation. Why doesn't this work:</p>
<p>Sample data:</p>
<pre><code>dfz = pd.DataFrame({'A' : [1,0,0,1,0,0],
'B' : [1,0,0,1,0,1],
'C' : [1,0,0,1,3,1],
'D' : [1,0,0,1,0,0],
'E' : [22.0,15.0,None,10.,None,557.0]})
</code></pre>
<p>Replace Function:</p>
<pre><code>def NaNfunc(dfz):
if dfz['E'] == None:
return None
else:
return dfz['C']
dfz['C'] = dfz.apply(NaNfunc, axis=1)
</code></pre>
<p>And how to do this in one line?</p>
|
<p>Your <code>apply</code> version doesn't work because <code>dfz['E'] == None</code> is never <code>True</code> for a missing value (missing values are stored as <code>NaN</code>, and <code>NaN == None</code> evaluates to <code>False</code>); test with <code>pd.isnull()</code> instead. For a one-liner, use <code>np.where</code>:</p>
<pre><code>In [34]:
dfz['C'] = np.where(dfz['E'].isnull(), dfz['E'], dfz['C'])
dfz
Out[34]:
A B C D E
0 1 1 1 1 22
1 0 0 0 0 15
2 0 0 NaN 0 NaN
3 1 1 1 1 10
4 0 0 NaN 0 NaN
5 0 1 1 0 557
</code></pre>
<p>Or simply mask the df:</p>
<pre><code>In [38]:
dfz.loc[dfz['E'].isnull(), 'C'] = dfz['E']
dfz
Out[38]:
A B C D E
0 1 1 1 1 22
1 0 0 0 0 15
2 0 0 NaN 0 NaN
3 1 1 1 1 10
4 0 0 NaN 0 NaN
5 0 1 1 0 557
</code></pre>
|
python|pandas
| 7
|
4,998
| 35,928,114
|
TypeError: 'numpy.ndarray' object is not callable - working with banded/sparse matrices
|
<p>Hello I am trying to create a banded matrix - when I try to extract the upper diagonal and add a zero to the array I get the following error - "TypeError: 'numpy.ndarray' object is not callable"</p>
<pre><code>>>> A = np.eye(5, k=-1) -2 * np.eye(5) + np.eye(5, k=1)
>>> udA = np.insert (np.diag(A, 1), 0, 0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'numpy.ndarray' object is not callable
>>>
</code></pre>
<p>What am I doing wrong - I am very new to python. Thank you. </p>
|
<p>What <code>numpy</code> version are you using? In my version (1.9) your code works.</p>
<p>I think it's a problem using <code>np.diag</code> inside the <code>insert</code> function. </p>
<p>In 1.9 version, <code>np.diag</code> has this warning:</p>
<blockquote>
<p>See the more detailed documentation for <code>numpy.diagonal</code> if you use this
function to extract a diagonal and wish to write to the resulting array;
whether it returns a copy or a view depends on what version of numpy you
are using.</p>
</blockquote>
<p>I think in new versions trying to use <code>np.diag</code> in a context where it might be assigned to produces this error. Try:</p>
<pre><code>np.diag(A,1) = 0
</code></pre>
<p>That probably will produce the same error.</p>
<p>There have been earlier questions about this issue - we need to find a good one.</p>
<p><a href="https://stackoverflow.com/questions/35022178/fill-off-diagonal-of-numpy-array-fails">fill off diagonal of numpy array fails</a></p>
|
python|numpy|matrix
| 0
|
4,999
| 36,073,425
|
Concating a nested numpy array into 2D array
|
<p>I am using Pandas to generate some information and features. I will be using that database as my input for sklearn. Currently, I am converting the dataframe to array using <code>.as_matrix()</code>. Following is the output:</p>
<pre><code>array([[0.4437294900417328, 0.13434134423732758, 0.474, 0.482,
array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0])],
[0.09896088391542435, 0.10105254501104355, 0.474, 0.526,
array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])],
[0.026971107348799706, 0.08766224980354309, 0.474, 0.581,
array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0])],
...,
</code></pre>
<p>I want to dissolve this inner array into the parent 2D array. The result should look something like this.</p>
<pre><code>array([[0.4437294900417328, 0.13434134423732758, 0.474, 0.482,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0.09896088391542435, 0.10105254501104355, 0.474, 0.526,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0.026971107348799706, 0.08766224980354309, 0.474, 0.581,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
...,
</code></pre>
<p>TIA</p>
|
<p>As I commented, the exact structure of your array is unclear. I'm sure the outer dtype is object. Pandas often uses that to hold mixed data.</p>
<p>Here's a guess, and possible solution:</p>
<p>Make an object array and fill it with some floats and arrays of integers:</p>
<pre><code>In [38]: A=np.empty((3,5),dtype=object)
In [39]: A[:,:4]=np.arange(12.).reshape(3,4)/10
In [40]: A[0,-1]=np.arange(5)
In [41]: A[1,-1]=np.arange(1,6)
In [42]: A[2,-1]=np.arange(2,7)
In [43]: A
Out[43]:
array([[0.0, 0.1, 0.2, 0.3, array([0, 1, 2, 3, 4])],
[0.4, 0.5, 0.6, 0.7, array([1, 2, 3, 4, 5])],
[0.8, 0.9, 1.0, 1.1, array([2, 3, 4, 5, 6])]], dtype=object)
</code></pre>
<p>Print is similar. <code>reshape</code>, <code>concatenate</code>, <code>ravel</code> etc. don't join the floats and the arrays.</p>
<p>Instead lets make an array to hold the expected values, and copy them to it:</p>
<pre><code>In [44]: B=np.zeros((3,9),float)
In [45]: B[:,:4]=A[:,:4]
</code></pre>
<p>Copying the float columns is easy. But reworking the arrays into something that can be copied as a block, requires concatenation. The <code>vstack</code> form seems to do the trick:</p>
<pre><code>In [46]: B[:,4:]=np.vstack(A[:,-1])
In [47]: B
Out[47]:
array([[ 0. , 0.1, 0.2, 0.3, 0. , 1. , 2. , 3. , 4. ],
[ 0.4, 0.5, 0.6, 0.7, 1. , 2. , 3. , 4. , 5. ],
[ 0.8, 0.9, 1. , 1.1, 2. , 3. , 4. , 5. , 6. ]])
</code></pre>
<p>I had to recreate your array, based on what I know of array displays, including the object type. Then I just had to play around, trying various ways of joining the values. So there was a lot of trial and error.</p>
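<p>Once you know the object column holds equal-length integer arrays, the same result can be built in a single step (a compact equivalent of the copy-based version above):</p>
<pre><code>In [48]: B = np.hstack([A[:, :4].astype(float), np.vstack(A[:, -1]).astype(float)])
In [49]: B.shape
Out[49]: (3, 9)
</code></pre>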
|
python|arrays|numpy|pandas|scikit-learn
| 0
|