| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
373,600
| 56,502,924
|
How to create new pandas DataFrame with group by values?
|
<p>I have data with 3 locations. I would like to group by location and create a new pandas DataFrame.</p>
<p>I have a pandas DataFrame as follows:</p>
<pre><code>Day Time LocationA LocationB
1 8 XX YY
1 8 XX ZZ
1 8 XX ZZ
1 9 YY ZZ
1 9 XX YY
1 9 ZZ XX
1 9 ZZ YY
2 8 XX ZZ
2 8 XX YY
</code></pre>
<p>I need a pandas DataFrame as follows:</p>
<pre><code>Day Time Location A B
1 8 XX 3 0
1 8 YY 0 1
1 8 ZZ 0 2
1 9 XX 1 1
1 9 YY 1 2
1 9 ZZ 2 1
2 8 XX 2 0
2 8 YY 0 1
2 8 ZZ 0 1
</code></pre>
|
<p>In your case, use <code>melt</code>, then <code>groupby</code> with <code>value_counts</code> and <code>unstack</code>:</p>
<pre><code>yourdf=df.melt(['Day','Time']).groupby(['Day','Time','variable']).value.value_counts().unstack(level=2,fill_value=0).reset_index()
yourdf
Out[405]:
variable Day Time value LocationA LocationB
0 1 8 XX 3 0
1 1 8 YY 0 1
2 1 8 ZZ 0 2
3 1 9 XX 1 1
4 1 9 YY 1 2
5 1 9 ZZ 2 1
6 2 8 XX 2 0
7 2 8 YY 0 1
8 2 8 ZZ 0 1
</code></pre>
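<p>The same chain, split into steps for readability (a sketch, assuming <code>df</code> is the input frame above):</p>
<pre><code>melted = df.melt(['Day', 'Time'])                    # LocationA/LocationB -> 'variable', cell values -> 'value'
yourdf = (melted.groupby(['Day', 'Time', 'variable'])
                .value.value_counts()                # count each XX/YY/ZZ per location column
                .unstack(level=2, fill_value=0)      # spread the location columns back out
                .reset_index())
</code></pre>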
|
python|pandas|pandas-groupby
| 4
|
373,601
| 56,577,979
|
Change the default colors of a mosaic plot
|
<p>I want to change the colors of this mosaic plot to make it printable in black and white, but I can't find a way to change this parameter.</p>
<p><a href="https://i.stack.imgur.com/lOMAT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lOMAT.png" alt="the mosaic plot"></a></p>
<pre><code>from statsmodels.graphics.mosaicplot import mosaic
import matplotlib.pyplot as plt
import pandas
x = ['yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes']
y = ['yes', 'yes', 'yes', 'yes', 'no', 'no', 'no']
data = pandas.DataFrame({'x': x, 'y': y})
mosaic(data, ['x', 'y'])
plt.savefig("mosaicplot.pdf", figsize=[10,5])
plt.show()
</code></pre>
<p>Here is what I currently have. I saw that I could change the colors via <code>mosaic(properties)</code>, documented at this link: <a href="http://www.statsmodels.org/stable/generated/statsmodels.graphics.mosaicplot.mosaic.html" rel="nofollow noreferrer">http://www.statsmodels.org/stable/generated/statsmodels.graphics.mosaicplot.mosaic.html</a>,
but I can only set 2 different colors, and I need a different color for each tile, like this:
<a href="https://i.stack.imgur.com/pBAZE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBAZE.png" alt="enter image description here"></a></p>
|
<p><a href="http://www.statsmodels.org/stable/generated/statsmodels.graphics.mosaicplot.mosaic.html" rel="nofollow noreferrer">The documentation</a> mentions a <code>properties=</code> argument:</p>
<blockquote>
<p><strong>properties function (key) -> dict, optional</strong></p>
<p>A function that for each tile in the mosaic takes the key of the tile and returns the dictionary of properties of the generated
Rectangle, like color, hatch or similar. A default properties set will
be provided for the keys whose color has not been defined, and will
use color variation to help visually separate the various categories.
It should return None to indicate that it should use the default
property for the tile. A dictionary of the properties for each key can
be passed, and it will be internally converted to the correct function</p>
</blockquote>
<p>Therefore, you can pass either a function (see the example in the link above), or more simply a dictionary, to <code>properties=</code> to change the appearance of the rectangles:</p>
<pre><code>x = ['yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes']
y = ['yes', 'yes', 'yes', 'yes', 'no', 'no', 'no']
data = pandas.DataFrame({'x': x, 'y': y})

props = {}
props[('yes', 'yes')] = {'color': 'xkcd:orange'}
props[('yes', 'no')] = {'facecolor': 'xkcd:pale blue',
                        'edgecolor': 'xkcd:light grey',
                        'hatch': 'o'}

mosaic(data, ['x', 'y'], properties=props)
</code></pre>
<p><a href="https://i.stack.imgur.com/kflZA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kflZA.png" alt="enter image description here"></a></p>
<p>As far as I can tell, <a href="https://matplotlib.org/api/_as_gen/matplotlib.patches.Rectangle.html?highlight=rectangle#matplotlib.patches.Rectangle" rel="nofollow noreferrer">any argument accepted by <code>Rectangle</code></a> can be passed along in this dictionary.</p>
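<p>For a black-and-white printout, a <code>properties</code> <em>function</em> can map every tile key to a gray shade. A minimal sketch (the shade values and hatch are arbitrary choices, not from the original post):</p>
<pre><code>def props(key):
    # key is a tuple of categories, e.g. ('yes', 'no');
    # matplotlib accepts gray levels given as strings like '0.4'
    if key[-1] == 'yes':
        return {'color': '0.8'}
    return {'color': '0.4', 'hatch': '//'}

mosaic(data, ['x', 'y'], properties=props)
</code></pre>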
|
python|pandas|matplotlib|mosaic-plot
| 2
|
373,602
| 56,531,511
|
How to append column values of one dataframe to column of another dataframe
|
<p>I'm working with 2 dataframes, A & B. Dataframe A is populated with values, while dataframe B is empty except for a header structure.
I want to take the values of a column in dataframe A and append them to the corresponding column in dataframe B.</p>
<p>I placed the values of the dataframe A column I want to append in a list, and tried setting the destination column equal to that list:</p>
<pre><code>dataframeB[x] = list(dataframeA[A])
</code></pre>
<p>This yields the following error:</p>
<pre><code>ValueError: Length of values does not match length of index
</code></pre>
<p>The result I expect is that Dataframe A's column A transfers over to Dataframe B's column x.</p>
<p>Dataframe A:</p>
<pre><code> A B C D
1 2 3 4
1 2 3 4
</code></pre>
<p>Dataframe B:</p>
<pre><code> x y
- -
</code></pre>
|
<p>The error occurs because <code>dataframeB</code> has zero rows, so a list of values cannot be assigned to one of its columns (the lengths don't match). Instead, create the dataframe with the data already in it...</p>
<pre><code>dataframeB = pd.DataFrame({'x': dataframeA['A']})
</code></pre>
<p>Then you can add columns in from the other dataframe:</p>
<pre><code>dataframeB['y'] = dataframeA['B']
</code></pre>
<p>Result:</p>
<pre><code> x y
1 2
1 2
</code></pre>
|
python|pandas|dataframe
| 0
|
373,603
| 56,488,402
|
How to replace misspelled words in a pandas dataframe
|
<p>I have 2 pandas DataFrames. One containing a list of properly spelled words:</p>
<pre class="lang-sh prettyprint-override"><code>[In]: df1
[Out]:
words
0 apple
1 phone
2 clock
3 table
4 clean
</code></pre>
<p>and one with misspelled words:</p>
<pre class="lang-sh prettyprint-override"><code>[In]: df2
[Out]:
misspelled
0 aple
1 phn
2 alok
3 garbage
4 appl
5 pho
</code></pre>
<p>The goal is to replace the column of misspelled words in the second DataFrame using the list of correctly spelled words from the first DataFrame. The second DataFrame can have multiple repetitions, can be a different size than the first, can have words that aren't in the first DataFrame (or aren't similar enough to match).</p>
<p>I've been trying to use <code>difflib.get_close_matches</code> with some success, but it does not work out perfectly.</p>
<p>This is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>x = list(map(lambda x: get_close_matches(x, df1.col1), df2.col1))
good_words = list(map(''.join, x))
l = np.array(good_words, dtype='object')
df2.col1 = pd.Series(l)
df2 = df2[df2.col1 != '']
</code></pre>
<p>After applying the transformation, I should get the second DataFrame to look like:</p>
<pre class="lang-sh prettyprint-override"><code>[In]: df2
[Out]:
0
0 apple
1 phone
2 clock
3 NaN
4 apple
5 phone
</code></pre>
<p>If no match is found the row gets replaced with <code>NaN</code>. My problem is that I get a result that looks like this:</p>
<pre class="lang-sh prettyprint-override"><code>[In]: df2
[Out]:
misspelled
0 apple
1 phone
2 clockclean
3 NaN
4 apple
5 phone
</code></pre>
<p>At the time of writing I have not figured out why some of the words are combined. I suspect it has something to do with <code>difflib.get_close_matches</code> matching different words that are similar in length and/or lettering. So far around 10% - 15% of the words in a column get combined like this.
Thanks in advance.</p>
|
<p>The words are combined because <code>get_close_matches</code> returns a <em>list</em> of up to <code>n</code> matches and <code>''.join</code> concatenates all of them (hence <code>clockclean</code>). If you only want the first match, take it with <code>next</code> and <code>iter</code>, which also lets you supply a fallback value when there is no match - here <code>np.nan</code>. The <code>cutoff</code> parameter can be adjusted to your desired threshold:</p>
<pre><code>import difflib
import numpy as np

x = [next(iter(x), np.nan)
     for x in map(lambda x: difflib.get_close_matches(x, df1.words, cutoff=0.6), df2.misspelled)]
df2['col1'] = x
print (df2)
misspelled col1
0 aple apple
1 phn phone
2 alok clock
3 garbage NaN
4 appl apple
5 pho phone
</code></pre>
|
python|python-3.x|pandas|numpy|dataframe
| 5
|
373,604
| 56,705,686
|
I tried installing TensorFlow using 'pip install tensorflow' in Anaconda Prompt and Command Prompt; it shows the following output
|
<pre><code>Found existing installation: wrapt 1.10.11
Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
</code></pre>
|
<p><strong>(1)</strong> First, try to install wrapt manually using the following command:</p>
<pre><code>pip install wrapt --upgrade --ignore-installed
</code></pre>
<blockquote>
<p>Make sure to use the <code>--ignore-installed</code> flag when installing 'wrapt', as in the command above.</p>
</blockquote>
<p><strong>(2)</strong> Then install TensorFlow with pip, e.g.:</p>
<pre><code>pip install tensorflow==1.14
</code></pre>
<p>This should work.</p>
|
python-3.x|tensorflow|anaconda
| 2
|
373,605
| 56,786,677
|
TensorFlow 1.14.0 is not using GPU
|
<p>I set up TensorFlow using <code>pip install --user tensorflow-gpu</code> on my Ubuntu 19.04 laptop. All dependencies like CUDA and CUDNN are installed and working. But still, when importing TensorFlow and checking <code>tf.test.is_gpu_available()</code>, I get False. I have tried completely uninstalling and reinstalling TensorFlow, which did not work.
Output of <code>tf.test.is_gpu_available()</code>:</p>
<blockquote>
<p>2019-06-27 14:06:18.359739: I
tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports
instructions that this TensorFlow binary was not compiled to use: AVX2
FMA 2019-06-27 14:06:18.611194: I
tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency:
2194885000 Hz 2019-06-27 14:06:18.621295: I
tensorflow/compiler/xla/service/service.cc:168] XLA service 0x19d54e0
executing computations on platform Host. Devices: 2019-06-27
14:06:18.621339: I tensorflow/compiler/xla/service/service.cc:175]<br>
StreamExecutor device (0): , 2019-06-27
14:06:18.742193: I
tensorflow/stream_executor/platform/default/dso_loader.cc:42]
Successfully opened dynamic library libcuda.so.1 2019-06-27
14:06:18.869601: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero 2019-06-27
14:06:18.870469: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0
with properties: name: GeForce 920M major: 3 minor: 5
memoryClockRate(GHz): 0.954 pciBusID: 0000:08:00.0 2019-06-27
14:06:18.870675: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0:
cannot open shared object file: No such file or directory;
LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64 2019-06-27
14:06:18.870812: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0:
cannot open shared object file: No such file or directory;
LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64 2019-06-27
14:06:18.870973: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0:
cannot open shared object file: No such file or directory;
LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64 2019-06-27
14:06:18.871111: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0:
cannot open shared object file: No such file or directory;
LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64 2019-06-27
14:06:18.871228: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcusolver.so.10.0'; dlerror:
libcusolver.so.10.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64
2019-06-27 14:06:18.871352: I
tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could
not dlopen library 'libcusparse.so.10.0'; dlerror:
libcusparse.so.10.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH: :/usr/local/cuda/extras/CUPTI/lib64
2019-06-27 14:06:20.233321: I
tensorflow/stream_executor/platform/default/dso_loader.cc:42]
Successfully opened dynamic library libcudnn.so.7 2019-06-27
14:06:20.233363: W
tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen
some GPU libraries. Skipping registering GPU devices... 2019-06-27
14:06:20.407248: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device
interconnect StreamExecutor with strength 1 edge matrix: 2019-06-27
14:06:20.407318: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-06-27 14:06:20.407351: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-06-27 14:06:20.441266: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero 2019-06-27
14:06:20.443613: I tensorflow/compiler/xla/service/service.cc:168] XLA
service 0x4ed6d40 executing computations on platform CUDA. Devices:
2019-06-27 14:06:20.443670: I
tensorflow/compiler/xla/service/service.cc:175] StreamExecutor
device (0): GeForce 920M, Compute Capability 3.5 False</p>
</blockquote>
<p>Output of deviceQuery from CUDA Samples:</p>
<blockquote>
<p>CUDA Device Query (Runtime API) version (CUDART static linking)</p>
<p>Detected 1 CUDA Capable device(s)</p>
<p>Device 0: "GeForce 920M" CUDA Driver Version / Runtime Version<br>
10.1 / 10.1 CUDA Capability Major/Minor version number: 3.5 Total amount of global memory: 4046 MBytes (4242341888
bytes) ( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 954 MHz (0.95 GHz)<br>
Memory Clock rate: 900 Mhz Memory Bus
Width: 64-bit L2 Cache Size:<br>
524288 bytes Maximum Texture Dimension Size (x,y,z)<br>
1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096) Maximum Layered
1D Texture Size, (num) layers 1D=(16384), 2048 layers Maximum
Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes Total
amount of shared memory per block: 49152 bytes Total number of
registers available per block: 65536 Warp size:<br>
32 Maximum number of threads per multiprocessor: 2048 Maximum
number of threads per block: 1024 Max dimension size of a
thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid
size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch:<br>
2147483647 bytes Texture alignment: 512
bytes Concurrent copy and kernel execution: Yes with 1 copy
engine(s) Run time limit on kernels: Yes<br>
Integrated GPU sharing Host Memory: No Support host
page-locked memory mapping: Yes Alignment requirement for
Surfaces: Yes Device has ECC support:<br>
Disabled Device supports Unified Addressing (UVA): Yes Device
supports Compute Preemption: No Supports Cooperative
Kernel Launch: No Supports MultiDevice Co-op Kernel
Launch: No Device PCI Domain ID / Bus ID / location ID: 0 / 8
/ 0 Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) ></p>
<p>deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA
Runtime Version = 10.1, NumDevs = 1 Result = PASS</p>
</blockquote>
|
<p>My particular problem was that <strong>TensorFlow 1.14.0</strong> was looking for the <strong>CUDA 10.0</strong> binaries, while I only had <strong>10.1</strong> installed. For some reason CUDA 10.0 could not be installed on my <strong>Ubuntu 19.04</strong>, so I installed <strong>18.04</strong> instead and followed the standard way to make <strong>TF</strong> work with the GPU (install <strong>CUDA 10.0</strong>, install <strong>CUDNN</strong>, etc.), and everything works just fine.</p>
<p>This table shows TF versions vs. required CUDA versions: <a href="https://www.tensorflow.org/install/source#linux" rel="noreferrer">https://www.tensorflow.org/install/source#linux</a></p>
<p>Here are instructions from TF:
<a href="https://www.tensorflow.org/install/gpu#ubuntu_1804_cuda_10" rel="noreferrer">https://www.tensorflow.org/install/gpu#ubuntu_1804_cuda_10</a></p>
<p>You may also downgrade to TF 1.12 (CUDA 9.0): <a href="https://www.tensorflow.org/install/gpu#ubuntu_1604_cuda_90_for_tensorflow_1130" rel="noreferrer">https://www.tensorflow.org/install/gpu#ubuntu_1604_cuda_90_for_tensorflow_1130</a></p>
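<p>After installing the matching versions, a quick sanity check (a sketch using the TF 1.x API):</p>
<pre><code>import tensorflow as tf

print(tf.__version__)              # expect 1.14.0
print(tf.test.is_gpu_available())  # should now print True once libcudart.so.10.0 resolves
</code></pre>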
|
python|tensorflow
| 14
|
373,606
| 56,750,631
|
Broadcasting two dataframes
|
<p>I have 2 dataframes as follow:</p>
<p>1st dataframe <code>data</code>:</p>
<pre><code> 2019-06-19 2019-06-20 2019-06-21 2019-06-22 2019-06-23 2019-06-24 2019-06-25
currency
BCH 485.424079 485.424079 57.574609 57.559609 57.559609 57.559609 57.559609
BTC 202.204572 256.085103 197.291801 177.359726 177.359726 177.359726 252.859726
BTG 4065.370000 4065.370000 4065.370000 4065.370000 4065.370000 4065.370000 4065.370000
ETC 40001.000000 40001.000000 40001.000000 40001.000000 40001.000000 40001.000000 0.000000
ETH 4092.917231 4092.917231 1497.655594 1497.655594 1497.655594 1497.655594 1497.655594
</code></pre>
<p>2nd dataframe <code>sys_bal</code>:</p>
<pre><code>created_at 2019-06-19 2019-06-20 2019-06-21 2019-06-22 2019-06-23 2019-06-24 2019-06-25
currency
1WO 1997308 1996908 1996908 1996908 1996908 1996908 1996908
ABX 241444 241444 241444 241444 241444 241444 241444
ADH 5981797 5981797 5981797 5981797 5981797 5981797 5981797
ALX 385466 385466 385466 385466 385466 385466 385466
AMLT 4749604 4749604 4749604 4687869 4687869 4687869 4687869
BCH 4547 4547 4483 4463 4465 4467 4403
BRC 1231312 1231312 1231312 1231312 1231312 1231312 1231142
BTC 7366 7342 7287 7307 8292 8635 7772
BTRN 15236038 15236038 15236038 15236038 15236038 15236233 15236233
</code></pre>
<p>I try to add one to the other by doing <code>pos_bal = sys_bal + data</code>. They have the same size, but I get an error.</p>
<p>The error:</p>
<pre><code>pos_bal = sys_bal + data
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/core/ops.py", line 1547, in f
other = _align_method_FRAME(self, other, axis)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/core/ops.py", line 1481, in _align_method_FRAME
right = to_series(right)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/core/ops.py", line 1456, in to_series
given_len=len(right)))
ValueError: Unable to coerce to Series, length must be 7: given 2
</code></pre>
<p>I printed the dtypes of both dataframes and I got the following:</p>
<p>1st dataframe:</p>
<pre><code>2019-06-19 float64
2019-06-20 float64
2019-06-21 float64
2019-06-22 float64
2019-06-23 float64
2019-06-24 float64
2019-06-25 float64
dtype: object
</code></pre>
<p>2nd dataframe:</p>
<pre><code> created_at
0 2019-06-19 int64
2019-06-20 int64
2019-06-21 int64
2019-06-22 int64
2019-06-23 int64
2019-06-24 int64
2019-06-25 int64
dtype: object
</code></pre>
<p><code>data.info()</code> output:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Index: 12 entries, BCH to XRP
Data columns (total 7 columns):
2019-06-20 12 non-null float64
2019-06-21 12 non-null float64
2019-06-22 12 non-null float64
2019-06-23 12 non-null float64
2019-06-24 12 non-null float64
2019-06-25 12 non-null float64
2019-06-26 12 non-null float64
dtypes: float64(7)
memory usage: 768.0+ bytes
None
</code></pre>
<p><code>sys_bal.info()</code> output:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Index: 126 entries, 1WO to ZPR
Data columns (total 7 columns):
2019-06-20 126 non-null int64
2019-06-21 126 non-null int64
2019-06-22 126 non-null int64
2019-06-23 126 non-null int64
2019-06-24 126 non-null int64
2019-06-25 126 non-null int64
2019-06-26 126 non-null int64
dtypes: int64(7)
memory usage: 7.9+ KB
None
</code></pre>
|
<pre><code># sample frames reconstructed from the question
data=pd.DataFrame({'currency':['BCH','BTC'],'2019-06-19':['485.424079','202.204572'],'2019-06-20':['485.424079','256.085103']})
sys_bal=pd.DataFrame({'currency':['1WO','ABX'],'2019-06-19':['1997308','241444'],'2019-06-20':['1996908','241444']})
</code></pre>
<p>EDIT: if you are receiving <code>'dict' object has no attribute 'set_index'</code>,
it means that you are not using DataFrames as I expect; try this on your data:</p>
<pre><code>data=pd.DataFrame.from_dict(data)
sys_bal=pd.DataFrame.from_dict(sys_bal)
data=data.set_index('currency')
sys_bal=sys_bal.set_index('currency')
df=pd.concat([data,sys_bal])
print(df)
2019-06-19 2019-06-20
currency
BCH 485.424079 485.424079
BTC 202.204572 256.085103
1WO 1997308 1996908
ABX 241444 241444
</code></pre>
<p>This should work for you. If it doesn't, take a closer look at your dataframes: I see that <code>sys_bal</code> has an additional header name, <code>created_at</code>.</p>
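<p>Once both are proper DataFrames indexed by <code>currency</code> with matching date columns (and numeric values rather than strings), the addition from the question also works; a sketch using <code>add</code> so currencies present in only one frame are kept instead of becoming NaN:</p>
<pre><code>pos_bal = sys_bal.add(data, fill_value=0)  # aligns on index and columns
print(pos_bal)
</code></pre>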
|
python|pandas|datetime
| 0
|
373,607
| 56,562,576
|
How to add values to columns if field is NaN upon split()
|
<p>How do I set the values of fields to NaN using Pandas?</p>
<p>I have a spreadsheet file as input, and one of the columns has empty values, which I filled with NaN.</p>
<p>I am trying to split the first name from the suffix using str.split(),
but since there are NaN-value fields,</p>
<p>I encountered this error:</p>
<blockquote>
<p>ValueError: Columns must be same length as key</p>
</blockquote>
<p>This is my sample DataFrame.</p>
<blockquote>
<p>input_data = [["John III","Snow"],["",""],["John","Snow"]]</p>
</blockquote>
<p>This is my expected output</p>
<blockquote>
<p>expected_output = [["John","Snow","III"],["","",""],["John","Snow",""]]</p>
</blockquote>
<p>This is my sample code</p>
<pre><code>df[[fname[0][0],fname[1][0]]] = df[column].str.split('&', expand=True, n=1)
df.applymap(lambda x: x.strip() if type(x) is str else x)
df.fillna(value=pd.np.nan, inplace=True)
df[[fname[0][0],fname[0][2]]] = df[fname[0][0]].str.split('\s+(?=Jr|Sr|JR|SR|II|III|IV)', expand=True, n=1)
</code></pre>
<p>I am just a newbie in Pandas and Numpy.</p>
|
<p>You can go about it like this:</p>
<pre><code>input_data = [['John III', 'Snow'], ['', ''], ['John', 'Snow']]
split_data = [[k for j in i for k in j.split()] for i in input_data]
#[['John', 'III', 'Snow'], [], ['John', 'Snow']]
df = pd.DataFrame(split_data).fillna('')
# 0 1 2
#0 John III Snow
#1
#2 John Snow
df.values
#array([['John', 'III', 'Snow'],
# ['', '', ''],
# ['John', 'Snow', '']], dtype=object)
</code></pre>
|
python|pandas|numpy
| 0
|
373,608
| 56,672,331
|
Plotting a Tensor in Python
|
<p>I am following the tutorial from <a href="https://www.tensorflow.org/beta/tutorials/generative/dcgan" rel="nofollow noreferrer">https://www.tensorflow.org/beta/tutorials/generative/dcgan</a></p>
<p>I want to be able to see the image that is being generated using plt.imshow(), but the following code</p>
<pre><code>generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
#type = tensorflow.python.framework.ops.Tensor
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
</code></pre>
<p>doesn't work for me, and I get an error:</p>
<pre><code>TypeError: Image data cannot be converted to float
</code></pre>
<p>I followed a few threads on StackOverflow and tried to cast the Tensor using tf.cast, but even that didn't help.</p>
<p>The model as on the website is different from my code (only slightly)</p>
<pre><code>def make_generator_model():
model = Sequential()
model.add(Dense(9*9*256, use_bias=False, input_shape=(100,)))
# model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Reshape((9, 9, 256)))
assert model.output_shape == (None, 9, 9, 256) # Note: None is the batch size
model.add(Conv2DTranspose(128, (3, 3), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 9, 9, 128)
# model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Conv2DTranspose(64, (3,3), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 9, 9, 64)
# model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Conv2DTranspose(1, (3,3), strides=(1, 1), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 9,9,1)
return model
</code></pre>
|
<p>In TensorFlow 1.x you need to <a href="https://www.tensorflow.org/guide/graphs#executing_a_graph_in_a_tfsession" rel="nofollow noreferrer">evaluate</a> the output tensor inside a session.</p>
<pre class="lang-py prettyprint-override"><code>generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
sess = tf.Session()  # create session
sess.run(tf.global_variables_initializer())  # initialize variables
image = sess.run(generated_image[0, :, :, 0])  # evaluate image tensor inside session
plt.imshow(image, cmap='gray')
plt.show()
</code></pre>
<p>Or you can use TensorFlow 2.0 beta, where eager execution is used by default. </p>
<pre class="lang-py prettyprint-override"><code>generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
plt.show()
</code></pre>
|
python|tensorflow|matplotlib|machine-learning
| 1
|
373,609
| 56,649,500
|
Is there any difference between using Dataframe.columns and Dataframe.keys() to obtain the column names?
|
<p>For the sake of curiosity, is there any practical difference between getting the column names of a DataFrame (say, df) by using df.columns or df.keys()?</p>
<p>I've checked the outputs by type, and they seem to be exactly the same. Am I missing something, or are these two methods just as redundant as they seem? Is one more appropriate to use than the other?</p>
<p>Thanks.</p>
|
<p>There doesn't look to be a practical difference, and if there is, I'd really like to know what it is. You probably saw in the documentation that <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.columns.html" rel="nofollow noreferrer">DataFrame.columns</a> holds the column labels as an axis property, while <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.keys.html" rel="nofollow noreferrer">DataFrame.keys</a> gets the info axis. Since the former is an attribute and the latter a callable method, the method may take slightly longer to execute; I have not tested this, but even if there is a difference, it is not significant. They also both return the same type:</p>
<pre><code>>>> type(data.columns)
<class 'pandas.core.indexes.base.Index'>
>>> type(data.keys())
<class 'pandas.core.indexes.base.Index'>
</code></pre>
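<p>A quick equality check (a small sketch):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(df.columns.equals(df.keys()))  # True: same labels, same order
</code></pre>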
|
python-3.x|pandas|dataframe
| 4
|
373,610
| 56,681,786
|
How to ignore null values in a CSV column with pandas while processing the text?
|
<p>I have a CSV file where each word of a sentence is in its own cell, with a null cell between sentences.</p>
<p><a href="https://i.stack.imgur.com/XtrCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XtrCn.png" alt="CSV snippet file"></a></p>
<p>My problem is in the <strong>run_id</strong> column. After I load the csv file using pandas, I separate the sentences using the function <code>get_sents_from_df</code>, but an assertion that double-checks that each sentence's run_id is unique fails, because the "Null" rows are treated as a "Null sentence".</p>
<p>Below is a snippet of my code, I hope you can help</p>
<p><strong>Note : I working on T="test_RE"</strong></p>
<pre><code>def load_dataset(fn,T):
if T=="test_RE":
df = pandas.read_csv(fn,
sep= ";",
header=0,
keep_default_na=False)
df.drop(df.columns[df.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
df.word_id = pd.to_numeric(df.word_id, errors='coerce').astype('Int64')
df.run_id = pd.to_numeric(df.run_id, errors='coerce').astype('Int64')
df.sent_id = pd.to_numeric(df.sent_id, errors='coerce').astype('Int64')
df.head_pred_id = pd.to_numeric(df.head_pred_id, errors='coerce').astype('Int64')
else:
df = pandas.read_csv(fn,
sep= "\t",
header=0,
keep_default_na=False)
print (df.dtypes)
if T=="train":
encoder.fit(df.label.values)
print('this is the IF cond')
print('df.label.values. shape',df.label.values.shape)
sents = get_sents_from_df(df)
print('shape of sents 0',sents[0].shape)
print('sents[0]',sents[0])
print('shape of sents 1',sents[1].shape)
print('sents[1]',sents[1])
#make sure that all sents agree on run_id
    assert(all([len(set(sent.run_id.values)) == 1
                for sent in sents]))  # ERROR HERE
</code></pre>
<p>the function</p>
<pre><code>def get_sents_from_df(df):
    # Split a data frame by rows according to the sentences
    return [df[df.run_id == run_id]
            for run_id
            in sorted(set(df.run_id.values))]
</code></pre>
<p>The shape of sents[0] is (10,8), which is correct, and sents[0] prints correctly.</p>
<p>But the shape of sents[1] is (0,8), and of course sents[1] isn't printed because it is null; I should get sents[1] with shape (6,8). Any help?</p>
<p>Image of Output of print statements:</p>
<p><a href="https://i.stack.imgur.com/tXCyC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tXCyC.png" alt="Output of print stsatemts"></a></p>
|
<p>To skip the blank rows (which contain both None values and empty strings), why not just do:</p>
<pre><code>df = df[df.word.apply(lambda x: len(x) > 0)]
</code></pre>
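<p>An equivalent vectorized form (a sketch, assuming the column is named <code>word</code> and blanks are empty strings because of <code>keep_default_na=False</code>):</p>
<pre><code>df = df[df['word'].str.len() > 0]
</code></pre>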
|
python|pandas|csv|nlp
| 1
|
373,611
| 56,583,049
|
Calculate Percent-Change (over time) of pandas column values based on other column value
|
<p>I'm working with an example dataset:</p>
<pre><code> date name point
0 4/24/2019 Martha 3617138
1 4/25/2019 Martha 3961918
2 4/26/2019 Martha 4774966
3 4/27/2019 Martha 5217946
4 4/24/2019 Alex 62700321
5 4/25/2019 Alex 66721020
6 4/26/2019 Alex 71745138
7 4/27/2019 Alex 88762943
8 4/28/2019 Alex 102772578
9 4/29/2019 Alex 129089274
10 3/1/2019 Josh 1063259
11 3/3/2019 Josh 1063259
12 3/4/2019 Josh 1063259
13 3/5/2019 Josh 1063259
14 3/6/2019 Josh 1063259
</code></pre>
<p>and a list of name values</p>
<pre><code>nameslist = ['Martha', 'Alex', 'Josh']
</code></pre>
<p>I want to calculate the percent change of all rows, based on the identifier in the name column.</p>
<p>expected output:</p>
<pre><code>name percent change
Martha 30.7
Alex 51.4
Josh 0
</code></pre>
<p>I initially tried to iterate through my list and table, adding all rows that match the list value, appending the calculated change to a list, then moving to the next value of my list, but I can't articulate my code properly to make that happen.</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df = df.sort_values(by='date')
growthlist=[]
temptable=[]
for i in nameslist:
for j in df:
temptable.append(df[df['name'].str.match(nameslist[i])])
length=[]
growth=temptable[0]-temptable[length-1]
growthlist.append(i,growth)
</code></pre>
<p>but that generates error:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>I also wouldn't mind using .groupby() and .pct_change() to accomplish this goal, but </p>
<pre><code>growth = df.groupby('name').pct_change()
</code></pre>
<p>generates a long traceback that ends with:</p>
<pre><code>TypeError: unsupported operand type(s) for /: 'str' and 'float'
</code></pre>
<p>Ultimately, I would like to nest this within a function so I could use it on other datasets and be able to choose my column name (the actual datasets I'm working with are not standardized, so the target column names are often different):</p>
<pre><code>def calc_growth(dataset,colname):
</code></pre>
<p>but I'm not sure if that's too much too ask for this one question.</p>
<p>Unfortunately, I'm quite lost with this question, so any help would be appreciated. I'm also wondering if a transformation is an easier way to go, because at least I will always know the exact location of the two figures I need to calculate, but I don't even know how I would start something like that.</p>
<p>Thanks</p>
|
<p>You can use <code>apply</code> with the last and first values, accessed through <code>.values</code>, to calculate the percentage change over the whole group:</p>
<pre><code>df.groupby('name',sort=False).apply(lambda x: (x['point'].values[-1] - x['point'].values[0]) / x['point'].values[-1] * 100)\
.reset_index(name='pct change')
name pct change
0 Martha 30.67889165583545363347
1 Alex 51.42871358932579539669
2 Josh 0.00000000000000000000
</code></pre>
<h3>Explanation</h3>
<p>First we use groupby on <code>name</code> which will give us a group (read: a dataframe) based on each unique name:</p>
<pre><code>for _, d in df.groupby('name', sort=False):
print(d, '\n')
date name point
0 2019-04-24 Martha 3617138
1 2019-04-25 Martha 3961918
2 2019-04-26 Martha 4774966
3 2019-04-27 Martha 5217946
date name point
4 2019-04-24 Alex 62700321
5 2019-04-25 Alex 66721020
6 2019-04-26 Alex 71745138
7 2019-04-27 Alex 88762943
8 2019-04-28 Alex 102772578
9 2019-04-29 Alex 129089274
date name point
10 2019-03-01 Josh 1063259
11 2019-03-03 Josh 1063259
12 2019-03-04 Josh 1063259
13 2019-03-05 Josh 1063259
14 2019-03-06 Josh 1063259
</code></pre>
<hr>
<p>Then we apply our own <code>lambda</code> function to <em>each separate group</em>, performing the following calculation:</p>
<blockquote>
<p>percentage change = (point last value - point first value) / point last value * 100</p>
</blockquote>
<hr>
<p>Then we use <code>reset_index</code> to get our <code>name</code> column out of the index, since <code>groupby</code> puts it as index.</p>
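<p>The question also asks for a reusable function. A sketch where the group and value column names are parameters (the signature differs slightly from the <code>calc_growth(dataset, colname)</code> proposed in the question):</p>
<pre><code>def calc_growth(dataset, group_col, value_col):
    return (dataset.groupby(group_col, sort=False)[value_col]
                   .apply(lambda s: (s.values[-1] - s.values[0]) / s.values[-1] * 100)
                   .reset_index(name='pct change'))

growth = calc_growth(df, 'name', 'point')
</code></pre>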
|
python|pandas|dataframe|pandas-groupby
| 1
|
373,612
| 56,545,152
|
Is there a way to take the values from one column in a dataframe and append them to a different dataframe's column in pandas (Python)?
|
<p>I'm working with 2 dataframes A & B of different shapes</p>
<p>Dataframe A has 193 rows and 33 columns
Dataframe B has 2 rows and 196 columns</p>
<p>I want to be able to take a column from Dataframe A, "Province or State", and have its values appended onto Dataframe B's column "State".</p>
<p>I've tried the following </p>
<p>Method 1 I attempted:</p>
<pre><code>dataframeB["State"] = dataframeA["Province or State"]
</code></pre>
<p>This doesn't fill DataframeB with 193 rows of values from Dataframe A; it just gives</p>
<pre><code>State
NaN
NaN
</code></pre>
<p>I want the State column to be filled with the string values from the "Province or State" column. How can I make this happen?</p>
<p>EDIT:</p>
<p>I was able to accomplish this by setting the row count of DataFrameB to 193 using the following method:</p>
<pre><code>num_rows = 93
for x in np.arange(0, num_rows):
dataframeB.loc[x] = [np.NaN for n in range(96)]
</code></pre>
<p>Then, I set dataframeB's State column to equal DataframeA's Province or state column</p>
<pre><code>dataframeB['State'] = dataframeA['Province or State'].reset_index(drop = True)
</code></pre>
|
<p>I was able to accomplish this by setting the row count of DataFrameB to 193 using the following method:</p>
<pre><code>num_rows = 93
for x in np.arange(0, num_rows):
dataframeB.loc[x] = [np.NaN for n in range(96)]
</code></pre>
<p>Then, I set dataframeB's State column to equal DataframeA's Province or state column</p>
<pre><code>dataframeB['State'] = dataframeA['Province or State'].reset_index(drop = True)
</code></pre>
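<p>A shorter alternative sketch of the same idea, assuming dataframeB should simply be grown to dataframeA's length: reindex it first, then assign.</p>
<pre><code>dataframeB = dataframeB.reindex(range(len(dataframeA)))  # new rows are filled with NaN
dataframeB['State'] = dataframeA['Province or State'].reset_index(drop=True)
</code></pre>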
|
python|pandas|dataframe|data-science
| -1
|
373,613
| 56,862,204
|
Image data cannot be converted to float
|
<p>I have code for predicting dog breed after training a CNN model; I get the class index from the function below. I want to display a random image from the class <code>idx</code> folder obtained from the function.</p>
<pre><code>class_name = [item for item in loaders['train'].dataset.classes]

def predict_dog_breed(img, model, class_name):
    image = Image.open(img).convert('RGB')
    transform = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])])
    image = transform(image)
    test_image = image.unsqueeze(0)
    net.eval()
    output = net(test_image)
    idx = torch.argmax(output)
    a = random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx])))
    imshow(a)
    return class_name[idx]
</code></pre>
<p>When I tried to display the random image, I am getting the below error:</p>
<blockquote>
<p>TypeError Traceback (most recent call last) in 1 for img_file in os.listdir('./images'): 2 image = os.path.join('./images', img_file) ----> 3 dog_or_human(image)</p>
<p>in dog_or_human(img) 5 plt.show() 6 if dog_detector(img) == True: ----> 7 predict_dog = predict_dog_breed(img, net, class_name) 8 print("Dog Detected!The breed is {}".format(predict_dog)) 9 elif face_detector(img) > 0:</p>
<p>in predict_dog_breed(img, model, class_name) 18 a = random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx]))) 19 print(a) ---> 20 imshow(a) 21 #subdir = ''.join(["/dogImages/train/", class_name[idx]]) 22 #print(file)</p>
<p>~/Library/Python/3.7/lib/python/site-packages/matplotlib/pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, data, **kwargs) 2697 filternorm=filternorm, filterrad=filterrad, imlim=imlim, 2698 resample=resample, url=url, **({"data": data} if data is not -> 2699 None else {}), **kwargs) 2700 sci(__ret) 2701 return __ret</p>
<p>~/Library/Python/3.7/lib/python/site-packages/matplotlib/init.py in inner(ax, data, *args, **kwargs) 1808 "the Matplotlib list!)" % (label_namer, func.name), 1809 RuntimeWarning, stacklevel=2) -> 1810 return func(ax, *args, **kwargs) 1811 1812 inner.doc = _add_data_doc(inner.doc,</p>
<p>~/Library/Python/3.7/lib/python/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs) 5492 resample=resample, **kwargs) 5493 -> 5494 im.set_data(X) 5495 im.set_alpha(alpha) 5496 if im.get_clip_path() is None:</p>
<p>~/Library/Python/3.7/lib/python/site-packages/matplotlib/image.py in set_data(self, A) 632 if (self._A.dtype != np.uint8 and 633 not np.can_cast(self._A.dtype, float, "same_kind")): --> 634 raise TypeError("Image data cannot be converted to float") 635 636 if not (self._A.ndim == 2</p>
<p>TypeError: Image data cannot be converted to float</p>
</blockquote>
<p>Any help on this would be appreciated!</p>
|
<p>So, I tried to reproduce the error in your code <a href="https://github.com/gprashmi/Dog_breed_classifier/blob/master/dog_breed_classifier-5.ipynb" rel="nofollow noreferrer">here</a> and was successful in doing that. You are getting the error because of these lines in your code:</p>
<pre><code>a = random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx])))
imshow(a)
</code></pre>
<p><code>random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx])))</code> returns an image filename, which is a string. You are not reading the image, just passing the filename to the <code>imshow</code> function, which is incorrect. Check the figures below for clarification.</p>
<p>Code with error:</p>
<p><a href="https://i.stack.imgur.com/DCiZw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DCiZw.png" alt="enter image description here"></a></p>
<p>Code without error:</p>
<p><a href="https://i.stack.imgur.com/8EqsF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8EqsF.png" alt="enter image description here"></a></p>
<p>Hence, change your <code>predict_dog_breed</code> function to the following:</p>
<pre><code>import os
import random
import cv2  # used to read the image file from disk

def predict_dog_breed(img, model, class_name):
    image = Image.open(img).convert('RGB')
    transform = transforms.Compose([transforms.RandomResizedCrop(224),
                                    transforms.ToTensor(),
                                    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                         std=[0.229, 0.224, 0.225])])
    image = transform(image)
    test_image = image.unsqueeze(0)
    net.eval()
    output = net(test_image)
    idx = torch.argmax(output)
    a = random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx])))
    print(a)
    img = cv2.imread("./dogImages/train/{}/".format(class_name[idx]) + a)
    imshow(img)
    return class_name[idx]
</code></pre>
<p>In the above code, <code>cv2.imread</code> is used to read the image whose filename is returned by <code>random.choice(os.listdir("./dogImages/train/{}/".format(class_name[idx])))</code>.</p>
|
python|image|image-processing|pytorch
| 2
|
373,614
| 56,741,087
|
How to fix RuntimeError "Expected object of scalar type Float but got scalar type Double for argument"?
|
<p>I'm trying to train a classifier via PyTorch. However, I am experiencing problems with training when I feed the model with training data.
I get this error on <code>y_pred = model(X_trainTensor)</code>:</p>
<blockquote>
<p>RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'</p>
</blockquote>
<p>Here are key parts of my code:</p>
<pre class="lang-py prettyprint-override"><code># Hyper-parameters
D_in = 47 # there are 47 parameters I investigate
H = 33
D_out = 2 # output should be either 1 or 0
</code></pre>
<pre class="lang-py prettyprint-override"><code># Format and load the data
y = np.array( df['target'] )
X = np.array( df.drop(columns = ['target'], axis = 1) )
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8) # split training/test data
X_trainTensor = torch.from_numpy(X_train) # convert to tensors
y_trainTensor = torch.from_numpy(y_train)
X_testTensor = torch.from_numpy(X_test)
y_testTensor = torch.from_numpy(y_test)
</code></pre>
<pre class="lang-py prettyprint-override"><code># Define the model
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
nn.LogSoftmax(dim = 1)
)
</code></pre>
<pre class="lang-py prettyprint-override"><code># Define the loss function
loss_fn = torch.nn.NLLLoss()
</code></pre>
<pre class="lang-py prettyprint-override"><code>for i in range(50):
y_pred = model(X_trainTensor)
loss = loss_fn(y_pred, y_trainTensor)
model.zero_grad()
loss.backward()
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
</code></pre>
|
<p>Reference is from <a href="https://github.com/pytorch/pytorch/issues/2138" rel="noreferrer">this github issue</a>.</p>
<p>When the error is <code>RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'</code>, you would need to use the <code>.float()</code> function since it says <code>Expected object of scalar type Float</code>.</p>
<p>Therefore, the solution is changing <code>y_pred = model(X_trainTensor)</code> to <code>y_pred = model(X_trainTensor.float())</code>.</p>
<p>Likewise, when you get another error for <code>loss = loss_fn(y_pred, y_trainTensor)</code>, you need <code>y_trainTensor.long()</code> since the error message says <code>Expected object of scalar type Long</code>.</p>
<p>You could also do <code>model.double()</code>, as suggested by @Paddy.</p>
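<p>Alternatively, the conversion can be done once when the tensors are built; a sketch against the code in the question:</p>
<pre><code>X_trainTensor = torch.from_numpy(X_train).float()  # float64 -> float32, what nn.Linear expects
y_trainTensor = torch.from_numpy(y_train).long()   # integer class targets for NLLLoss
</code></pre>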
|
python|neural-network|deep-learning|classification|pytorch
| 158
|
373,615
| 25,830,584
|
Graphs in python using matplotlib
|
<p>I wanted to plot <code>y=(x+2)(x−1)(x−2)</code> for x going from −3 to 3 using a dashed red line. When I wrote the following code, nothing shows up.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def graph(formula, x_range):
x = np.array(x_range)
y = eval(formula)
plt.plot(x, y)
plt.show()
graph('((x-3) * (x-2))', range(-3,3))
</code></pre>
|
<p>Make sure the <code>graph(..)</code> call is outside the <code>graph</code> function definition (in other words, indent correctly):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def graph(formula, x_range):
x = np.array(x_range)
y = eval(formula)
plt.plot(x, y, 'r--') # `r--` for dashed red line
plt.show()
graph('((x-3) * (x-2))', range(-3,3)) # <----
</code></pre>
<p><strong>UPDATE</strong></p>
<p>It's not a good idea to use <a href="https://docs.python.org/2/library/functions.html#eval" rel="nofollow"><code>eval</code></a>. Instead you can pass a function in this case.</p>
<pre><code>def graph(formula, x_range):
x = np.array(x_range)
y = formula(x) # <-----
plt.plot(x, y, 'r--')
plt.show()
graph(lambda x: (x-3) * (x-2), range(-3,3)) # <---
</code></pre>
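<p>Note that <code>range(-3, 3)</code> only yields the integers -3 through 2, so the curve looks jagged and stops short of 3. For the dashed red plot of y=(x+2)(x−1)(x−2) the question asks about, a sketch with a dense float grid:</p>
<pre><code>graph(lambda x: (x + 2) * (x - 1) * (x - 2), np.linspace(-3, 3, 200))
</code></pre>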
|
python|numpy|matplotlib
| 1
|
373,616
| 25,791,053
|
ANOVA and HSD tests from Python dataframe
|
<p>I'm looking for a method to perform an ANOVA and HSD tests on a dataframe in Python. I tried to read some examples on forums and tutorials, but I didn't manage to apply them to my work.</p>
<p>Here is a simple Pandas dataframe:</p>
<pre><code>Date Density Hour Repetition Glucose
A HD AM 1 6.7
A HD AM 2 6.8
A HD PM 2 9.6
A HD PM 3 11.9
B HD AM 1 23
B HD AM 2 18.1
B HD PM 1 43.3
B HD PM 2 32
C HD AM 1 5.1
C HD AM 2 3.8
C HD PM 1 5.2
C HD PM 2 5.5
</code></pre>
<p>How could I perform an ANOVA and then an HSD test to inspect the effects of Date, Density and Hour on Glucose?
I tried these libraries:</p>
<pre><code>from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
</code></pre>
<p>but I can't manage to apply them to my example.</p>
<p>Thanks in advance</p>
|
<p><code>pairwise_tukeyhsd</code> only allows a single group variable, it is for oneway ANOVA. It is possible to make all pairwise comparisons for all fully interacted groups after creating a group index for all different explanatory variables. For example <code>group1 = (A, HD, AM, 1)</code>, <code>group2 = (A, HD, AM, 2)</code>, and so on.</p>
<p>For pairwise comparison for only some effects, we would need the pairwise comparison after estimating the multiway ANOVA with OLS. This is currently not available in statsmodels. The critical values and p-values of Tukey-HSD would not apply in that case.</p>
<p>What would be possible in this case is to estimate the full model with OLS, define all desired pairwise contrasts, use the <code>t_test</code> to get the raw p-values for the comparisons, and then apply one of the multiple p-value corrections that are available.</p>
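<p>A minimal sketch of the fully-interacted-groups approach described above, assuming <code>df</code> holds the table from the question (the combined label is an illustration, not a statsmodels feature):</p>
<pre><code>from statsmodels.stats.multicomp import pairwise_tukeyhsd

# build one group label per (Date, Density, Hour) combination
group = df['Date'] + '/' + df['Density'] + '/' + df['Hour']
result = pairwise_tukeyhsd(df['Glucose'], group)
print(result.summary())
</code></pre>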
|
python|pandas|statistics|statsmodels|anova
| 0
|
373,617
| 25,717,686
|
NumPy Tensor / Kronecker product of matrices coming out shuffled
|
<p>I'm trying to compute the <s>tensor product</s> (update: what I wanted was actually called the <a href="http://en.wikipedia.org/wiki/Kronecker_product" rel="nofollow"><em>Kronecker</em> product</a>, and this naming confusion was why I couldn't find <code>np.kron</code>) of multiple matrices, so that I can apply transformations to vectors that are themselves the tensor product of multiple vectors. I'm running into trouble with flattening the result correctly.</p>
<p>For example, say I want to compute the tensor product of <code>[[0,1],[1,0]]</code> against itself. The result should be something like:</p>
<pre><code>| 0*|0,1| 1*|0,1| |
| |1,0| |1,0| |
| |
| 1*|0,1| 0*|0,1| |
| |1,0| |1,0| |
</code></pre>
<p>which I then want to flatten to:</p>
<pre><code>| 0 0 0 1 |
| 0 0 1 0 |
| 0 1 0 0 |
| 1 0 0 0 |
</code></pre>
<p>Unfortunately, the things I try all either fail to flatten the matrix or flatten it too much or permute the entries so that some columns are empty. More specifically, the output of the python program:</p>
<pre><code>import numpy as np
flip = np.matrix([[0, 1], [1, 0]])
print np.tensordot(flip, flip, axes=0)
print np.reshape(np.tensordot(flip, flip, axes=0), (4, 4))
</code></pre>
<p>is</p>
<pre><code>[[[[0 0]
[0 0]]
[[0 1]
[1 0]]]
[[[0 1]
[1 0]]
[[0 0]
[0 0]]]]
[[0 0 0 0]
[0 1 1 0]
[0 1 1 0]
[0 0 0 0]]
</code></pre>
<p>Neither of which is what I want.</p>
<p>There are a lot of other questions similar to this one, but the things suggested in them haven't worked (or maybe I missed the ones that work). Maybe "tensor product" means something slightly different than I thought; but the example above should make it clear.</p>
|
<p>From the answers to <a href="https://stackoverflow.com/q/23592229/2379410">this</a> and <a href="https://stackoverflow.com/q/16330971/2379410">this</a> question, I learned what you want is called the "<a href="http://en.wikipedia.org/wiki/Kronecker_product" rel="nofollow noreferrer">Kronecker product</a>". It's actually built into Numpy, so just do:</p>
<pre><code>np.kron(flip, flip)
</code></pre>
<p>But if you want to make the <code>reshape</code> approach work, first rearrange the rows in the tensor:</p>
<pre><code>flip = [[0,1],[1,0]]
tensor4d = np.tensordot(flip, flip, axes=0)
print tensor4d.swapaxes(2, 1).reshape((4,4))
</code></pre>
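<p>A quick check that both routes agree (a sketch):</p>
<pre><code>import numpy as np

flip = np.array([[0, 1], [1, 0]])
k = np.kron(flip, flip)
r = np.tensordot(flip, flip, axes=0).swapaxes(2, 1).reshape(4, 4)
print(np.array_equal(k, r))  # True
</code></pre>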
|
python|numpy|matrix
| 3
|
373,618
| 25,923,587
|
Pandas to form clusters based on diff column
|
<p>I am trying to use Pandas to eliminate some near duplicates in a data frame based on the difference in a column representing time in seconds. For example:</p>
<pre><code>import pandas as pd, numpy as np
df=pd.DataFrame([1200,1201,1233,1555,1650,5561,5562],columns=['Time'])
df['Dif']=df.Time.diff()
df['Coef']=np.random.rand(len(df))
</code></pre>
<p><img src="https://i.stack.imgur.com/1DYF6.png" alt="enter image description here"></p>
<p>So what I need to do is examine each group that has time values occurring within 2 seconds of each other, choose the one with the highest value in Coef, and discard the rest. In this example I would group index 0 and 1 together and discard index 0 (because df.Coef[0] < df.Coef[1]).</p>
<p>Likewise, index 5, 6, and 7 would be grouped together and all but index 6 discarded, so the desired output would be df.drop([0,5,7]):</p>
<p><img src="https://i.stack.imgur.com/4jJv8.png" alt="enter image description here"></p>
<p>I currently have a python while loop algorithm to do this but the data frame can contain millions of indicies so it is much too slow. Any pure pandas solution would be much appreciated </p>
|
<p>You could do a groupby here, by enumerating the groups:</p>
<pre><code>In [11]: (df['Time'].diff() > 2).cumsum()
Out[11]:
0 0
1 0
2 1
3 2
4 3
5 4
6 4
Name: Time, dtype: int64
</code></pre>
<p><em>Note: if this were a datetime column you'd want to compare to a timedelta rather than to 2.</em></p>
<pre><code>In [12]: g = df.groupby((df.Time.diff() > 2).cumsum())
</code></pre>
<p>Now you can use the idxmax (the index with maximal element) for the Coeff column on each group:</p>
<pre><code>In [13]: g.Coef.idxmax()
Out[13]:
Time
0 1
1 2
2 3
3 4
4 5
Name: Coef, dtype: int64
</code></pre>
<p>and select these rows:</p>
<pre><code>In [14]: df.loc[g.Coef.idxmax()] # results will vary since we've used a random df
Out[14]:
Time Dif Coef
1 1201 1 0.760751
2 1233 32 0.501199
3 1555 322 0.473628
4 1650 95 0.371059
5 5561 3911 0.917556
</code></pre>
|
python|pandas
| 5
|
373,619
| 25,820,071
|
Pandas column.sum() without having the index values multiply
|
<p>I have a pandas DataFrame like this:</p>
<p><img src="https://i.stack.imgur.com/kKsAP.png" alt="pd"></p>
<p>When I take the .sum() of the columns, Pandas is multiplying each row entry by the index value. </p>
<p>I need just a raw count at the end of each column, not a "sum" per se. What is the best way?</p>
|
<p>To find the sum of the values, use <code>.sum()</code>. To find a count of the non-empty cells, use <code>.count()</code>. To find a count of the cells which have a value greather than 0, try <code>df[df>0].count()</code>.</p>
<pre><code>In [29]: df=pd.read_table('data.csv', delim_whitespace=True)
In [30]: df
Out[30]:
BPC B-S
0 2 1
1 5 2
2 0 1
3 0 0
4 0 0
5 2 1
6 8 3
7 38 12
[8 rows x 2 columns]
In [31]: df.sum()
Out[31]:
BPC 55
B-S 20
dtype: int64
In [32]: df[df>0].count()
Out[32]:
BPC 5
B-S 6
dtype: int64
</code></pre>
|
python|pandas
| 2
|
373,620
| 25,792,086
|
Pandas merge return empty dataframe
|
<p>I have two dataframes</p>
<pre><code>current_bin.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 16 entries, 0 to 15
Data columns (total 3 columns):
id 16 non-null object
fpd 16 non-null float64
avgSpeedBinID 16 non-null object
dtypes: float64(1), object(2)
</code></pre>
<p>the current_bin data frame looks like:</p>
<pre><code>current_bin
id fpd avgSpeedBinID
0 1.1.4.1 2.818623 1
1 1.1.4.10 0.266681 10
2 1.1.4.11 0.250017 11
3 1.1.4.12 0.234749 12
4 1.1.4.13 0.222515 13
5 1.1.4.14 0.216150 14
6 1.1.4.15 0.218368 15
7 1.1.4.16 0.227663 16
8 1.1.4.2 1.475454 2
9 1.1.4.3 0.805842 3
10 1.1.4.4 0.581797 4
11 1.1.4.5 0.450314 5
12 1.1.4.6 0.379107 6
13 1.1.4.7 0.335155 7
14 1.1.4.8 0.305992 8
15 1.1.4.9 0.284210 9
</code></pre>
<p>and </p>
<pre><code>avg.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 16 entries, 0 to 15
Data columns (total 4 columns):
avgSpeedBinID 16 non-null int64
avgBinSpeed 16 non-null float64
avgSpeedBinDesc 16 non-null object
temp 16 non-null int64
dtypes: float64(1), int64(2), object(1)
</code></pre>
<p>which looks like:</p>
<pre><code> avgSpeedBinID avgBinSpeed avgSpeedBinDesc temp
0 1 3 speed < 2.5mph 0
1 2 5 2.5mph <= speed < 7.5mph 0
2 3 10 7.5mph <= speed < 12.5mph 0
3 4 15 12.5mph <= speed < 17.5mph 0
4 5 20 17.5mph <= speed <22.5mph 0
5 6 25 22.5mph <= speed < 27.5mph 0
6 7 30 27.5mph <= speed < 32.5mph 0
7 8 35 32.5mph <= speed < 37.5mph 0
8 9 40 37.5mph <= speed < 42.5mph 0
9 10 45 42.5mph <= speed < 47.5mph 0
10 11 50 47.5mph <= speed < 52.5mph 0
11 12 55 52.5mph <= speed < 57.5mph 0
12 13 60 57.5mph <= speed < 62.5mph 0
13 14 65 62.5mph <= speed < 67.5mph 0
14 15 70 67.5mph <= speed < 72.5mph 0
15 16 75 72.5mph <= speed 0
</code></pre>
<p>both dataframes have a value 1 to 16 on the avgSpeedBinID field, however, when i try to merge the data frames together</p>
<pre><code>avg.merge(current_bin, on='avgSpeedBinID')
</code></pre>
<p>I'm getting a null dataframe</p>
<pre><code>avgSpeedBinID avgBinSpeed avgSpeedBinDesc temp id fpd
</code></pre>
<p>Why is this happening and how can i correct the problem?</p>
|
<p>The <code>avgSpeedBinID</code> in the <code>current_bin</code> dataframe is type <code>str</code>, while in <code>avg</code> it is <code>int</code>.
Just cast the <code>str</code> one to <code>int</code> and the merge will work.</p>
<pre><code>current_bin['avgSpeedBinID'] = current_bin['avgSpeedBinID'].astype(int)
avg.merge(current_bin, on='avgSpeedBinID')
avgSpeedBinID avgBinSpeed avgSpeedBinDesc temp id fpd
0 1 3 speed < 2.5mph 0 1.1.4.1 2.818623
1 2 5 2.5mph <= speed < 7.5mph 0 1.1.4.2 1.475454
2 3 10 7.5mph <= speed < 12.5mph 0 1.1.4.3 0.805842
3 4 15 12.5mph <= speed < 17.5mph 0 1.1.4.4 0.581797
4 5 20 17.5mph <= speed <22.5mph 0 1.1.4.5 0.450314
5 6 25 22.5mph <= speed < 27.5mph 0 1.1.4.6 0.379107
6 7 30 27.5mph <= speed < 32.5mph 0 1.1.4.7 0.335155
7 8 35 32.5mph <= speed < 37.5mph 0 1.1.4.8 0.305992
8 9 40 37.5mph <= speed < 42.5mph 0 1.1.4.9 0.284210
9 10 45 42.5mph <= speed < 47.5mph 0 1.1.4.10 0.266681
10 11 50 47.5mph <= speed < 52.5mph 0 1.1.4.11 0.250017
11 12 55 52.5mph <= speed < 57.5mph 0 1.1.4.12 0.234749
12 13 60 57.5mph <= speed < 62.5mph 0 1.1.4.13 0.222515
13 14 65 62.5mph <= speed < 67.5mph 0 1.1.4.14 0.216150
14 15 70 67.5mph <= speed < 72.5mph 0 1.1.4.15 0.218368
15             16           75            72.5mph <= speed     0  1.1.4.16  0.227663
</code></pre>
|
python|pandas
| 23
|
373,621
| 25,773,245
|
Ambiguity in Pandas Dataframe / Numpy Array "axis" definition
|
<p>I've been very confused about how python axes are defined, and whether they refer to a DataFrame's rows or columns. Consider the code below:</p>
<pre><code>>>> df = pd.DataFrame([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]], columns=["col1", "col2", "col3", "col4"])
>>> df
col1 col2 col3 col4
0 1 1 1 1
1 2 2 2 2
2 3 3 3 3
</code></pre>
<p>So if we call <code>df.mean(axis=1)</code>, we'll get a mean across the rows:</p>
<pre><code>>>> df.mean(axis=1)
0 1
1 2
2 3
</code></pre>
<p>However, if we call <code>df.drop(name, axis=1)</code>, we actually <strong>drop a column</strong>, not a row:</p>
<pre><code>>>> df.drop("col4", axis=1)
col1 col2 col3
0 1 1 1
1 2 2 2
2 3 3 3
</code></pre>
<p>Can someone help me understand what is meant by an "axis" in pandas/numpy/scipy?</p>
<p>As a side note, <code>DataFrame.mean</code> just might be defined wrong. The documentation for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html"><code>DataFrame.mean</code></a> says that <code>axis=1</code> is supposed to mean a mean over the columns, not the rows...</p>
|
<p>It's perhaps simplest to remember it as <em>0=down</em> and <em>1=across</em>. </p>
<p>This means:</p>
<ul>
<li>Use <code>axis=0</code> to apply a method down each column, or to the row labels (the index).</li>
<li>Use <code>axis=1</code> to apply a method across each row, or to the column labels.</li>
</ul>
<p>Here's a picture to show the parts of a DataFrame that each axis refers to:</p>
<p><img src="https://i.stack.imgur.com/DL0iQ.jpg" width="410" height="210"></p>
<p>It's also useful to remember that Pandas follows NumPy's use of the word <code>axis</code>. The usage is explained in NumPy's <a href="http://docs.scipy.org/doc/numpy/glossary.html">glossary of terms</a>:</p>
<blockquote>
<p>Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically <strong>downwards across rows (axis 0)</strong>, and the second running <strong>horizontally across columns (axis 1)</strong>. [<em>my emphasis</em>] </p>
</blockquote>
<p>So, concerning the method in the question, <code>df.mean(axis=1)</code>, seems to be correctly defined. It takes the mean of entries <em>horizontally across columns</em>, that is, along each individual row. On the other hand, <code>df.mean(axis=0)</code> would be an operation acting vertically <em>downwards across rows</em>.</p>
<p>Similarly, <code>df.drop(name, axis=1)</code> refers to an action on column labels, because they intuitively go across the horizontal axis. Specifying <code>axis=0</code> would make the method act on rows instead.</p>
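<p>A minimal illustration using the DataFrame from the question:</p>
<pre><code>>>> df.mean(axis=0)  # one mean per column, computed down the rows
col1    2.0
col2    2.0
col3    2.0
col4    2.0
>>> df.mean(axis=1)  # one mean per row, computed across the columns
0    1.0
1    2.0
2    3.0
</code></pre>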
|
python|arrays|pandas|numpy|dataframe
| 182
|
373,622
| 25,455,067
|
Pandas DataFrame datetime index doesn't survive JSON conversion and reconversion
|
<p>I have the following snippet of Python code:</p>
<pre><code>import pandas as pd
# print normal index
print data.index
# convert from df to JSON and back
data_json = data.to_json()
df = pd.read_json(data_json)
df.index = pd.to_datetime(df.index)
print df.index
</code></pre>
<p>for some reason running this returns in:</p>
<pre><code><class 'pandas.tseries.index.DatetimeIndex'>
[1950-01-03 00:00:00, ..., 2014-08-21 00:00:00]
Length: 16264, Freq: None, Timezone: None
<class 'pandas.tseries.index.DatetimeIndex'>
[1966-10-31 00:00:00, ..., 2001-09-07 00:00:00]
Length: 16264, Freq: None, Timezone: None
</code></pre>
<p>Can someone explain to me what is going on and how I can have the index persist through the transformations?</p>
|
<p>The error here is that <code>to_json</code> saves dates with millisecond resolution by default, while <code>to_datetime</code> converts with nanosecond resolution by default. To fix it, either of these (but not both!) would work:</p>
<pre><code>pd.to_datetime(df.index, unit='ms')
#OR
data_json = data.to_json(date_unit='ns')
</code></pre>
<p>As noted in comments, you can also just save the json with the dates in iso format.</p>
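<p>For example, the ISO route sidesteps the unit mismatch entirely:</p>
<pre><code>data_json = data.to_json(date_format='iso')
df = pd.read_json(data_json)  # the index parses back as datetime
</code></pre>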
|
python|json|datetime|pandas
| 12
|
373,623
| 25,813,529
|
Joining same-key dictionaries into a dataframe in pandas
|
<p>How to create a pandas <code>DataFrame</code> out of two and more dictionaries having common keys? That is, to convert</p>
<pre><code>d1 = {'a': 1}
d2 = {'a': 3}
...
</code></pre>
<p>into a dataframe with columns <code>['d1', 'd2', ...]</code>, rows indexed like <code>"a"</code> and values determined by the respective dictionaries?</p>
|
<pre><code>import pandas as pd
d1 = {'a': 1, 'b':2}
d2 = {'a': 3, 'b':5}
df = pd.DataFrame([d1, d2]).T
df.columns = ['d{}'.format(i) for i, col in enumerate(df, 1)]
</code></pre>
<p>yields</p>
<pre><code>In [40]: df
Out[40]:
d1 d2
a 1 3
b 2 5
</code></pre>
|
python|dictionary|pandas
| 9
|
373,624
| 25,459,982
|
Trouble with groupby on millions of keys on a chunked file in python pandas
|
<p>I have a very big CSV file (tens of gigabytes) containing web logs with the following columns: <code>user_id</code>, <code>time_stamp</code>, <code>category_clicked</code>. I have to build a scorer to identify which categories users like and dislike. Note that I have more than 10 million users.</p>
<p>I first cut it into chunks and store them in an <code>HDFStore</code> named <code>input.h5</code>, then I use <code>groupby</code> on <code>user_id</code> following <a href="https://stackoverflow.com/a/15800314/3478208">Jeff's way</a>.</p>
<p>Here is my data: about 200 million rows and 10 million unique user_ids.</p>
<pre><code>user id | timestamp | category_clicked
20140512081646222000004-927168801|20140722|7
20140512081714121000004-383009763|20140727|4
201405011348508050000041009490586|20140728|1
20140512081646222000004-927168801|20140724|1
20140501135024818000004-1623130763|20140728|3
</code></pre>
<p>Here is my pandas.show_version():</p>
<pre><code>INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Windows
OS-release: 8
machine: AMD64
processor: AMD64 Family 21 Model 2 Stepping 0, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: fr_FR
pandas: 0.13.1
Cython: 0.20.1
numpy: 1.8.1
scipy: 0.13.3
statsmodels: 0.5.0
IPython: 2.0.0
sphinx: 1.2.2
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2013.9
bottleneck: None
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.3.1
openpyxl: None
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: None
sqlalchemy: 0.9.4
lxml: None
bs4: None
html5lib: None
bq: None
apiclient: None
</code></pre>
<p>Here is what I want as an output:</p>
<p>for each user_id, a list <code>[0.1,0.45,0.89,1.45,5.12,0.,0.,0.45,0.12,2.36,7.8]</code> representing the score of the user for each category and a global score. I can't tell you more about the score, but it needs ALL the timestamps and the category_clicked values to be calculated; you can't sum up partial results later or anything like that.</p>
<p>Here is my code:</p>
<pre><code>clean_input_reader = read_csv(work_path + '/input/input.csv', chunksize=500000)
with get_store(work_path+'/input/input.h5') as store:
for chunk in clean_input_reader:
store.append('clean_input', chunk,
data_columns=['user_id','timestamp','category_clicked'],
min_itemsize=15)
groups = store.select_column('clean_input','user_id').unique()
for user in groups:
group_user = store.select('clean_input',where=['user_id==%s' %user])
<<<<TREATMENT returns a list user_cat_score>>>>
store.append(user, Series(user_cat_score))
</code></pre>
<p>My question is the following: it looks to me that the line
<code>group_user=store.select('clean_input',where=['user_id==%s' %user])</code> is too heavy in time complexity, since I have a very large number of groups, and I am sure there is a lot of redundant sorting inside <code>store.select</code> if I apply it 10 million times.</p>
<p>To give you an estimate, it takes <strong>250 seconds to process 1000 keys</strong> with this technique, instead of only <strong>1 second</strong> for a usual <code>groupby</code> on a fully in-memory CSV file read with <code>read_csv</code> without chunking.</p>
<p>**********UPDATE***********</p>
<p>After applying Jeff's hashing method, I could process 1000 keys in 1 second (the same timing as the full in-memory method) and dramatically reduced the RAM usage. The only extra time penalty compared to before is of course the time taken for chunking, saving the 100 hash groups, and recovering the real groups from the hashed ones in the store, but this operation doesn't take more than a few minutes.</p>
|
<p>Here's a solution for scaling this problem arbitrarily. This is in effect a high-density version of the question <a href="https://stackoverflow.com/questions/15798209/pandas-group-by-query-on-large-data-in-hdfstore">here</a>.</p>
<p>Define a function to hash a particular group value to a smaller number of groups. I would design this such that it divides your dataset into in-memory manageable pieces.</p>
<pre><code>def sub_group_hash(x):
# x is a dataframe with the 'user id' field given above
# return the last 2 characters of the input
# if these are number like, then you will be sub-grouping into 100 sub-groups
return x['user id'].str[-2:]
</code></pre>
<p>Using the data provided above, this creates a grouped frame on the input data like so:</p>
<pre><code>In [199]: [ (grp, grouped) for grp, grouped in df.groupby(sub_group_hash) ][0][1]
Out[199]:
user id timestamp category
0 20140512081646222000004-927168801 20140722 7
3 20140512081646222000004-927168801 20140724 1
</code></pre>
<p>with <code>grp</code> as the name of the group and <code>grouped</code> as the resultant frame.</p>
<pre><code># read in the input in a chunked way
clean_input_reader = read_csv('input.csv', chunksize=500000)
with get_store('output.h5') as store:
for chunk in clean_input_reader:
# create a grouper for each chunk using the sub_group_hash
g = chunk.groupby(sub_group_hash)
# append each of the subgroups to a separate group in the resulting hdf file
# this will be a loop around the sub_groups (100 max in this case)
for grp, grouped in g:
store.append('group_%s' % grp, grouped,
data_columns=['user_id','timestamp','category_clicked'],
min_itemsize=15)
</code></pre>
<p>Now you have an HDF file with 100 sub-groups (potentially fewer if not all groups were represented), each of which contains all of the data necessary for performing your operation.</p>
<pre><code>with get_store('output.h5') as store:
# all of the groups are now the keys of the store
for grp in store.keys():
# this is a complete group that will fit in memory
grouped = store.select(grp)
# perform the operation on grouped and write the new output
grouped.groupby(......).apply(your_cool_function)
</code></pre>
<p>So this will reduce the problem by a factor of 100 in this case. If that is not sufficient, then simply increase the sub_group_hash to make more groups. </p>
<p>You should strive for a smaller number, as HDF5 works better that way (e.g. don't make 10M sub-groups; that defeats the purpose. 100, 1000, or even 10k is fine). But I think 100 should probably work for you, unless you have a very wild group density (e.g. massive numbers in a single group, while very few in other groups).</p>
<p>Note that this problem then scales easily; you could store the sub_groups in separate files if you want, and/or work on them separately (in parallel) if necessary.</p>
<p>This should make your solution time approximately <code>O(number_of_sub_groups)</code>.</p>
|
python|csv|pandas|bigdata
| 5
|
373,625
| 26,198,477
|
Transposing a numpy matrix causes cv's draw functions to throw errors
|
<p>I've been running into a few problems using cv to display images from numpy matrices when I transpose them.</p>
<p>Consider the following code.</p>
<pre><code>import cv2, numpy as np
...
ones = np.ones((100,100))
onesT = np.copy(ones.T)
onesCT = np.copy(ones.T, order='C')
cv2.circle(ones, (50,50), 3, (0), thickness=-1)
cv2.circle(onesCT, (50,50), 3, (0), thickness=-1)
cv2.circle(onesT, (50,50), 3, (0), thickness=-1)
</code></pre>
<p>The first two "cv2.circle" calls work but the third one gives me the following error:</p>
<pre><code> 102 cv2.circle(ones, (50,50), 3, (0), thickness=-1)
103 cv2.circle(onesCT, (50,50), 3, (0), thickness=-1)
--> 104 cv2.circle(onesT, (50,50), 3, (0), thickness=-1)
TypeError: Layout of the output array img is incompatible with cv::Mat (step[ndims-1] != elemsize or step[1] != elemsize*nchannels)
</code></pre>
<p>Why does this happen with transposed matrices but not if I change the order in which the memory is copied? To me, all those matrices are exactly the same.</p>
|
<p>At one level of abstraction, all those matrices are the same. But at a lower level, two of them have their data stored using the C convention (<a href="http://en.wikipedia.org/wiki/Row-major_order" rel="nofollow">row-major order</a>) for arrays, and the other (<code>onesT</code>) uses the Fortran convention (column-major order). Apparently <code>cv2.circle</code> expects a C-contiguous array.</p>
<p>You can check the order using the <code>flags</code> attribute. Note that the <code>F_CONTIGUOUS</code> flag of <code>onesT</code> is True:</p>
<pre><code>In [24]: ones.flags
Out[24]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [25]: onesT.flags
Out[25]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [26]: onesCT.flags
Out[26]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
</code></pre>
<p>A concise way to check the order information is <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.isfortran.html" rel="nofollow"><code>np.isfortran</code></a>:</p>
<pre><code>In [27]: np.isfortran(onesT)
Out[27]: True
</code></pre>
<p><code>onesT</code> uses the Fortran order because the transpose of a 2-d array is implemented in numpy by simply swapping the "strides" for each dimension, without actually copying the array of values in memory.</p>
<p>For example,</p>
<pre><code>In [28]: x = np.array([[1, 2, 3], [4, 5, 6]])
In [29]: np.isfortran(x)
Out[29]: False
In [30]: np.isfortran(x.T)
Out[30]: True
</code></pre>
<p>(This makes the transpose operation very efficient.)</p>
<p>You copied the transposed array to create <code>onesT</code>, but if you look at the docstring of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.copy.html" rel="nofollow"><code>np.copy</code></a>, you'll see that the default value of the <code>order</code> argument is <code>'K'</code>, which means "match the layout of a as closely as possible." In particular, it preserves the Fortran order in this case. <code>onesCT</code>, on the other hand, is a C-contiguous array because you explicitly told <code>np.copy</code> to order the copy using the C convention.</p>
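<p>If you need to draw on a transposed array, one common fix is to make a C-contiguous copy first, e.g.:</p>
<pre><code>onesT_fixed = np.ascontiguousarray(ones.T)  # force C-contiguous layout
cv2.circle(onesT_fixed, (50, 50), 3, (0), thickness=-1)  # no error
</code></pre>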
|
python|opencv|numpy
| 1
|
373,626
| 26,101,008
|
pandas: looping through grouped data for a plot
|
<p>I did the following:</p>
<pre><code>for grp, val in df_grp:
ax1.plot(val.concentration,val.capacity,'o', label = grp)
ax1.set_xlim(0,2.5)
plt.legend(loc=1, bbox_to_anchor=[0,0,1.5,1])
</code></pre>
<p>How do I get rid of the brackets, the 'u' and the quotation marks?</p>
<p><img src="https://i.stack.imgur.com/jnhoV.png" alt="Result from plot"></p>
|
<p>Because you are using the groups as labels, each label is actually the <code>str</code> representation of the <code>tuple</code> identifying that group. A quick workaround:</p>
<pre><code>In [42]:
print df
v1 v2 v3
0 A 11 1
1 A 11 2
2 A 30 3
3 A 30 4
4 B 45 5
5 B 45 6
6 B 12 7
7 B 12 8
In [43]:
ax = plt.subplot(111)
for grp, val in df.groupby(['v1', 'v2']):
ax.plot(val.v3,val.v3-1,'o', label = grp)
L = ax.legend(loc=4)
_ = [item.set_text(' '.join(map(str, eval(item.get_text())))) for item in L.get_texts()]
</code></pre>
<p><img src="https://i.stack.imgur.com/fb2aE.png" alt="enter image description here"></p>
<p>Show it step-by-step:</p>
<pre><code>In [38]:
[item.get_text() for item in L.get_texts()]
Out[38]:
["('A', 11)", "('A', 30)", "('B', 12)", "('B', 45)"]
In [39]:
[eval(item.get_text()) for item in L.get_texts()] #convert them back to tuple
Out[39]:
[('A', 11), ('A', 30), ('B', 12), ('B', 45)]
In [41]:
[' '.join(map(str, eval(item.get_text()))) for item in L.get_texts()] #into strings
Out[41]:
['A 11', 'A 30', 'B 12', 'B 45']
</code></pre>
|
matplotlib|pandas
| 1
|
373,627
| 26,242,438
|
Save data from plot to numpy array
|
<p>I'm wondering how I could save the data content of a plot generated using <strong>Matplotlib</strong> to a Numpy array.</p>
<p>As a example, suppose I generated a contour plot with the following <a href="http://matplotlib.org/examples/pylab_examples/contour_demo.html" rel="nofollow noreferrer">code</a>:</p>
<pre><code>import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
plt.figure()
CS = plt.contour(X, Y, Z)
plt.show()
</code></pre>
<p>From that I get the following:</p>
<p><img src="https://i.stack.imgur.com/zQVeM.png" alt="enter image description here"></p>
<p>I'm wondering how I could save this data so that, after some other manipulations, if I show it with <code>imshow</code>, for example, I could recover the contours, color-mapped and filled like the original image.</p>
<p>EDIT:</p>
<p>I.e., I would like to be able to get the image generated by the <code>contourf</code> method, do some manipulation to it, like applying a mask in specific areas, and then plot the modified data. For the <code>contour</code> case, I would like to use this plot with a bunch of levels represented to do the manipulations, instead of iterating over <code>cs.collections</code> and doing something (I really don't know what) to obtain an equivalent <code>numpy.array</code> representing this plot.</p>
<p>I know that I can save the image to a file and then read that file back, but that seems like a poor solution to me.
I tried <a href="https://stackoverflow.com/questions/7821518/matplotlib-save-plot-to-numpy-array">that</a> solution too, but then I got the full plot, with green areas and so on, not just the real content.</p>
|
<p>For recent versions of matplotlib, you can use <code>pickle</code> to save the whole plot or just selected pieces, and even show the plot again from the pickled data:</p>
<pre><code>import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import pickle
if 0: # to generate the file
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
Z = 10.0 * (Z2 - Z1)
ax = plt.subplot(111)
CS = ax.contourf(X, Y, Z)
pickle.dump(ax, open("mpl_test.pkl", "w"))
pickle.dump(CS, open("contours.pkl", "w"))
else: # Then at a later time...
x0 = pickle.load(open("mpl_test.pkl", "r"))
x1 = pickle.load(open("contours.pkl", "r"))
v = x1.collections[0].get_paths()[0].vertices # get the vertices of the contour
x, y = v[:,0]+.2, v[:,1]+.1 # shift the contour
x0.plot(x, y, 'w', linewidth=3) # add it to the plot as a white line
</code></pre>
<p>The example above first pickles the contour plot with the <code>if</code> clause and then, at a later time, uses the <code>else</code> part. It then takes one of the contours, shifts it, and replots it as a white line.</p>
<p>That is, this figure and modified contour are drawn completely from the reloaded pickled figure.</p>
<p><img src="https://i.stack.imgur.com/Y5HzV.png" alt="enter image description here"></p>
<p>Contours are <a href="http://matplotlib.org/api/path_api.html#matplotlib.path.Path" rel="nofollow noreferrer">mpl Paths</a> and can be more complicated than this example implies, so this method won't always work so well (though a generalized version of it that took into account other path data would; see the docs linked above).</p>
<p>Pickling mpl items is a bit new and not fully documented or reliable, but is a useful feature.</p>
<p><strong>IPython Notebook</strong>:<br>
On the other hand, maybe what you really want is something like an IPython Notebook! There, the whole history of your computations is available, viewable, and runnable. Rather than storing the data, it allows you to easily re-access, modify what you did before, etc. It's very powerful. Here are a few links and examples: <a href="http://ipython.org/notebook.html" rel="nofollow noreferrer">A</a>, <a href="http://nbviewer.ipython.org/github/jrjohansson/qutip-lectures/blob/master/Lecture-2B-Single-Atom-Lasing.ipynb" rel="nofollow noreferrer">B</a>, <a href="https://www.wakari.io/nb/url///wakari.io/static/notebooks/Lecture_4_Matplotlib.ipynb" rel="nofollow noreferrer">C</a>, <a href="http://nbviewer.ipython.org/github/jvns/talks/blob/master/pyconca2013/pistes-cyclables.ipynb" rel="nofollow noreferrer">D</a>.</p>
|
python|arrays|numpy|matplotlib
| 4
|
373,628
| 26,245,862
|
Reducing pandas series with multiple nan values to a set gives multiple nan values
|
<p>I'm expecting to get <code>set([nan,0,1])</code> but I get <code>set([nan, 0.0, nan, 1.0])</code>:</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>> l= [np.nan,0,1,np.nan]
>>> set(pd.Series(l))
set([nan, 0.0, nan, 1.0])
>>> set(pd.Series(l).tolist())
set([nan, 0.0, nan, 1.0])
>>> set(l)
set([nan, 0, 1])
</code></pre>
|
<p>Not all nans are identical:</p>
<pre><code>In [182]: np.nan is np.nan
Out[182]: True
In [183]: float('nan') is float('nan')
Out[183]: False
In [184]: np.float64('nan') is np.float64('nan')
Out[184]: False
</code></pre>
<p>Therefore,</p>
<pre><code>In [178]: set([np.nan, np.nan])
Out[178]: {nan}
In [179]: set([float('nan'), float('nan')])
Out[179]: {nan, nan}
In [180]: set([np.float64('nan'), np.float64('nan')])
Out[180]: {nan, nan}
</code></pre>
<p><code>l</code> contains <code>np.nan</code>s, which are identical, so</p>
<pre><code>In [158]: set(l)
Out[158]: {nan, 0, 1}
</code></pre>
<p>but <code>pd.Series(l).tolist()</code> contains <code>np.float64('nan')</code>s which are not identical:</p>
<pre><code>In [160]: [type(item) for item in pd.Series(l).tolist()]
Out[160]: [numpy.float64, numpy.float64, numpy.float64, numpy.float64]
</code></pre>
<p>so set does not treat them as equal:</p>
<pre><code>In [157]: set(pd.Series(l).tolist())
Out[157]: {nan, 0.0, nan, 1.0}
</code></pre>
<hr>
<p>If you have a Pandas Series, use its <code>unique</code> method instead of <code>set</code> to find unique values:</p>
<pre><code>>>> s = pd.Series(l)
>>> s.unique()
array([ nan, 0., 1.])
</code></pre>
|
python|numpy|pandas|set|nan
| 14
|
373,629
| 26,422,869
|
All pairs of numbers between 2 arrays
|
<p>I am trying to get all pairs of numbers between two arrays using numpy, without success.
Basically what I need is an outer product where, instead of being multiplied, the numbers are paired up in an array, i.e.:</p>
<pre><code>a = np.array([1, 2])
b = np.array([3, 4])
np.Func(a, b)
>>> [[[1,3], [1,4]]
[[2,3], [2,4]]]
</code></pre>
<p>I am trying <code>np.meshgrid(a,b)</code> but the output is not what I expect.</p>
|
<p>You could also take the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html#numpy.transpose" rel="nofollow"><code>transpose</code></a> of the meshgrid:</p>
<pre><code>>>> np.transpose(np.meshgrid(a, b))
array([[[1, 3],
[1, 4]],
[[2, 3],
[2, 4]]])
</code></pre>
|
python|arrays|numpy|combinations
| 3
|
373,630
| 26,044,349
|
How Do I Use Excel's Format Painter Across a Whole Workbook
|
<p>Every week I generate a large Excel sheet using Python/Pandas. However, the xls writer in Pandas does not allow one to format the Excel sheets, likely because of the proprietary format. Currently, I have to go worksheet by worksheet in the newly generated file and copy the formatting from the sheet from the week before, which is a little obnoxious.</p>
<p>Is there a way (in order of preference):</p>
<ol>
<li>Copy all the formatting from one excel sheet to another in Python</li>
<li>Format Paint all sheets from a workbook to a second workbook</li>
<li>This would be making a sheet with formatting and links which I could update and then resave, but I'm hoping for a solution like (1) or (2).</li>
</ol>
|
<p>I'd do it this way:</p>
<pre><code>import win32com.client
xlPasteFormats = -4122
xlPasteSpecialOperationNone = -4142
excelInstance = win32com.client.gencache.EnsureDispatch ("Excel.Application")
workbook = excelInstance.Workbooks.Item(1)
worksheet = workbook.Worksheets(1)
worksheet2 = workbook.Worksheets(3)
cells1 = worksheet.UsedRange
cells2 = worksheet2.UsedRange
cells1.Copy()
cells2.PasteSpecial(xlPasteFormats, xlPasteSpecialOperationNone)
</code></pre>
<p>which is quite similar to the solution in VBA, because it uses the same functions, but does it via COM, so you stay completely in Python.</p>
<p>In this code I had the workbook open. If you want to open the workbook, you should put:</p>
<pre><code>filepath = r"path:\To\Excel\Workbook"
excelInstance.Workbooks.Open(filepath)
</code></pre>
|
python|excel|pandas|formatting|vba
| 2
|
373,631
| 26,335,732
|
Pandas: how to use query to select closest values
|
<p>I'm using Pandas 0.13.0 and I am trying to get the two closest values as follows.</p>
<p>The index is sorted with increasing and unique values.</p>
<pre><code>import pandas as pd
import Quantities as pq
f = {
'A': [ 0.0, 0.1, 0.2, 0.5, 1.0] * pq.m,
'B': [10.0, 11.0, 12.0, 15.0, 20.0] * pq.kPa,
'C': [ a1, b1, c1, d1, e1]
}
df = pd.DataFrame(f)
df.set_index(df['A'], inplace=True)
</code></pre>
<p>The DataFrame gives:</p>
<pre><code>in: print df
out:
A B C
A
0.00 0.00 m 10.0 kPa a1
0.10 0.10 m 11.0 kPa b1
0.20 0.20 m 12.0 kPa c1
0.50 0.50 m 15.0 kPa d1
1.00 1.00 m 20.0 kPa e1
</code></pre>
<p>I have a value that is not in the column A: <code>value_to_find = 0.15 m</code>.
This value changes during the process, so I cannot hard code it.</p>
<p>I am trying to find the best way to get the value <code>just before</code> and the value <code>just after</code> <code>value_to_find</code> in column <code>A</code>, then return columns <code>A</code> and <code>B</code>, and finally interpolate at value_to_find to get the <code>B</code> value.</p>
<p>The result after filtering would be:</p>
<pre><code> A B
A
0.10 0.10 m 11.0 kPa
0.20 0.20 m 12.0 kPa
</code></pre>
<hr>
<p>One way to select the right values before interpolating is:</p>
<pre><code>filter_before = '%s <= %f' % ( 'A', value_to_find)
filter_after = '%s >= %f' % ( 'A', value_to_find)
</code></pre>
<p>Then:</p>
<pre><code>df_before = df.query(filter_before)
df_after = df.query(filter_after )
value_before = df_before.loc[df_before['A'].idxmax(), ['A', 'B']]
value_after = df_before.loc[df_before['A'].idxmin(), ['A', 'B']]
</code></pre>
<hr>
<p>Is there any better way to do it? Maybe using query, map or something similar.</p>
<p>like: <code>filter_before = '%s <= %f | max(%s)' % ( 'A', value_to_find)</code> (this one doesn't work for me)</p>
<p>Thanks.</p>
|
<p>Unless I misunderstood your question, I get the output you wanted without using <code>query</code>:</p>
<pre><code>value_to_find = 0.15
Min = df['A'] <= value_to_find
Max = df['A'] >= value_to_find
idx_Min = df.ix[Min, 'A'].idxmax()
idx_Max = df.ix[Max, 'A'].idxmin()
df.ix[idx_Min:idx_Max, ['A','B']]
A B
A
0.1 0.1 11
0.2 0.2 12
</code></pre>
<p>I did not use the <code>Quantities</code> module, but this should not play a role here.
Note that if you happen to find an exact match to <code>value_to_find</code>, there will be only one line in the output.</p>
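<p>Since the final goal is interpolating <code>B</code> at <code>value_to_find</code>, note that (setting the <code>Quantities</code> units aside, as above) <code>np.interp</code> can do it in one call on the sorted data:</p>
<pre><code>import numpy as np
b_interp = np.interp(value_to_find, df['A'], df['B'])  # 11.5 for value_to_find = 0.15
</code></pre>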
|
python|pandas
| 5
|
373,632
| 66,983,666
|
How can I calculate the percentage of empty values in a pandas dataframe?
|
<p>I have a dataframe <code>df</code> which I know contains empty values, i.e. <code>''</code> (empty strings).
I want to calculate the percentage of those observations per column and replace them with <code>NaN</code>.</p>
<p>To get the percentage I've tried:</p>
<pre><code>for col in df:
empty = round((df[df[col]] == '').sum()/df.shape[0]*100, 1)
</code></pre>
<p>I have similar code that calculates the zeros, and it does work:</p>
<pre><code>zeros = round((df[col] == 0).sum()/df.shape[0]*100, 1)
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a> for testing missing values (but not empty strings):</p>
<pre><code>nans = round(df[col].isna().sum()/df.shape[0]*100, 1)
</code></pre>
<p>The solution can be simplified with <code>mean</code>:</p>
<pre><code>nans = round(df[col].isna().mean()*100, 1)
</code></pre>
<p>To replace empty strings or whitespace-only strings with <code>NaN</code>s, use:</p>
<pre><code>df = df.replace(r'^\s*$', np.nan, regex=True)
nans = round(df[col].isna().mean()*100, 1)
</code></pre>
<p>If you need to test all columns:</p>
<pre><code>nans = df.isna().mean().mul(100).round()
</code></pre>
|
python|pandas
| 2
|
373,633
| 67,164,667
|
How to use PyTorch to softmax only the upper triangular elements of a matrix?
|
<p>Given input like:</p>
<pre><code>tensor([[[1.9392, -1.9266, 0.9664],
[0.0000, -1.9266, 0.9664],
[0.0000, -0.0000, 0.9664]]])
</code></pre>
<p>My desired output is:</p>
<pre><code>tensor([[[0.4596, 0.0096, 0.1737],
[0.0000, 0.0096, 0.1737],
[0.0000, -0.0000, 0.1737]]])
</code></pre>
<p>I.e. just calculating the function over the upper triangular elements.</p>
|
<p>You can access the upper triangular elements with <code>torch.triu_indices</code>:</p>
<pre class="lang-py prettyprint-override"><code>t = tensor([[1.9392, -1.9266, 0.9664],
[0.0000, -1.9266, 0.9664],
[0.0000, -0.0000, 0.9664]])
idx = torch.triu_indices(*t.shape)
soft = F.softmax(t[idx[0], idx[1]], dim=0)
</code></pre>
<p>If you want to reassign the values as in your desired output:</p>
<pre class="lang-py prettyprint-override"><code>>>> t[idx[0], idx[1]] = soft
>>> t
</code></pre>
<pre><code>tensor([[0.4596, 0.0096, 0.1737],
[0.0000, 0.0096, 0.1737],
[0.0000, -0.0000, 0.1737]])
</code></pre>
|
python|matrix|pytorch|tensor|softmax
| 4
|
373,634
| 66,944,445
|
Convert Date headers followed by AM & PM time cells to whole Timestamp column
|
<p>How to convert 'Time & Date' column to timestamp? As you can see there's a header cell for each date followed by AM & PM times. I would like to have a whole timestamp column.</p>
<pre><code>Time & Date Country ... Consensus Forecast
15 4:00 PM DE ... NaN NaN
16 Tuesday April 02 2019 NaN ... Consensus Forecast
17 7:00 AM EA ... NaN NaN
18 7:00 AM ES ... -33.3K -38.6K
19 8:30 AM GB ... 49.8 49.1
20 9:00 AM CY ... NaN 8.90%
21 9:40 AM RO ... 2.50% 2.50%
22 10:00 AM IE ... NaN 5.50%
23 5:30 PM DE ... NaN NaN
24 Wednesday April 03 2019 NaN ... Consensus Forecast
25 7:15 AM ES ... 55 52.5
26 7:45 AM IT ... 50.8 50.1
27 7:50 AM FR ... 48.7 48.7
28 7:55 AM DE ... 54.9 54.9
29 8:00 AM EA ... 52.7 52.7
30 8:30 AM GB ... 50.9 50.5
31 9:00 AM EA ... 0.20% 0.40%
32 9:00 AM EA ... 2.30% 1.80%
33 11:25 AM PL ... 1.50% 1.50%
34 Thursday April 04 2019 NaN ... Consensus Forecast
35 4:30 AM NL ... NaN 2.60%
36 6:00 AM DE ... 0.30% 0.50%
37 7:30 AM DE ... NaN 54
38 11:30 AM EA ... NaN NaN
39 Friday April 05 2019 NaN ... Consensus Forecast
40 6:00 AM DE ... 0.50% 0.70%
41 6:45 AM FR ... €-4.7B €-4.7B
42 7:30 AM GB ... -2.40% -2.20%
43 7:30 AM GB ... 2.30% 1.50%
44 11:30 AM ES ... NaN 92.5
</code></pre>
|
<p>Your <code>Time & Date</code> column represents two different things, so it needs to be two different columns to start with. If there's a way to cut out that step I would love to see it, but I'm guessing you need to expand it into two columns and then combine them again before using <code>pandas.to_datetime()</code> with the <code>format</code> argument to get it into datetime.</p>
<p>First, get the dates into a different column, then <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ffill.html" rel="nofollow noreferrer">forward-fill</a> the missing values.</p>
<pre><code>df['date'] = df['Time & Date'].apply(
lambda x: np.nan if (('AM' in str(x))|('PM' in str(x))) else x
).ffill()
</code></pre>
<p>Then you can rename <code>Time & Date</code> to <code>time</code>, drop the date-header rows from the dataframe, and concatenate <code>date</code> and <code>time</code> together.</p>
<pre><code>df.rename(columns={'Time & Date': 'time'}, inplace=True)
df = df.loc[df.time.str.contains('AM|PM', regex=True)]
df['datetime'] = df.date + ' ' + df.time
</code></pre>
<p>From there all you have to do is find the right <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">format</a> for <code>pd.to_datetime()</code> and you're set!</p>
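<p>For the sample shown, the format string would be something like the following (a sketch; it assumes the combined strings look like <code>'Tuesday April 02 2019 7:00 AM'</code>):</p>
<pre><code>df['datetime'] = pd.to_datetime(df['datetime'], format='%A %B %d %Y %I:%M %p')
</code></pre>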
|
pandas|date|header|timestamp
| 1
|
373,635
| 66,887,785
|
How to visualize nested `tf.keras.Model (SubClassed API)` GAN model with plot_model?
|
<p>Models implemented as subclasses of <code>keras.Model</code> can generally not be visualized with <code>plot_model</code>. There is a workaround as described <a href="https://stackoverflow.com/questions/61427583/how-do-i-plot-a-keras-tensorflow-subclassing-api-model">here</a>. However, it only applies to simple models. As soon as a model is enclosed by another model, the nesting will not be resolved.</p>
<p>I am looking for a way to resolve nested models implemented as subclasses of <code>keras.Model</code>. As an example, I have created a minimal GAN model:</p>
<pre class="lang-py prettyprint-override"><code>import keras
from keras import layers
from tensorflow.python.keras.utils.vis_utils import plot_model
class BaseModel(keras.Model):
def __init__(self, *args, **kwargs):
super(BaseModel, self).__init__(*args, **kwargs)
def call(self, inputs, training=None, mask=None):
super(BaseModel, self).call(inputs=inputs, training=training, mask=mask)
def get_config(self):
super(BaseModel, self).get_config()
def build_graph(self, raw_shape):
""" Plot models that subclass `keras.Model`
Adapted from https://stackoverflow.com/questions/61427583/how-do-i-plot-a-keras-tensorflow-subclassing-api-model
:param raw_shape: Shape tuple not containing the batch_size
:return:
"""
x = keras.Input(shape=raw_shape)
return keras.Model(inputs=[x], outputs=self.call(x))
class GANModel(BaseModel):
def __init__(self, generator, discriminator):
super(GANModel, self).__init__()
self.generator = generator
self.discriminator = discriminator
def call(self, input_tensor, training=False, mask=None):
x = self.generator(input_tensor)
x = self.discriminator(x)
return x
class DiscriminatorModel(BaseModel):
def __init__(self, name="Critic"):
super(DiscriminatorModel, self).__init__(name=name)
self.l1 = layers.Conv2D(64, 2, activation=layers.ReLU())
self.flat = layers.Flatten()
self.dense = layers.Dense(1)
def call(self, inputs, training=False, mask=None):
x = self.l1(inputs, training=training)
x = self.flat(x)
x = self.dense(x, training=training)
return x
class GeneratorModel(BaseModel):
def __init__(self, name="Generator"):
super(GeneratorModel, self).__init__(name=name)
self.dense = layers.Dense(128, activation=layers.ReLU())
self.reshape = layers.Reshape((7, 7, 128))
self.out = layers.Conv2D(1, (7, 7), activation='tanh', padding="same")
def call(self, inputs, training=False, mask=None):
x = self.dense(inputs, training=training)
x = self.reshape(x)
x = self.out(x, training=training)
return x
g = GeneratorModel()
d = DiscriminatorModel()
plot_model(g.build_graph((7, 7, 1)), to_file="generator_model.png",
expand_nested=True, show_shapes=True)
gan = GANModel(generator=g, discriminator=d)
plot_model(gan.build_graph((7, 7, 1)), to_file="gan_model.png",
expand_nested=True, show_shapes=True)
</code></pre>
<hr />
<h1>Edit</h1>
<p>Using the functional keras API I get the desired result (see <a href="https://imgur.com/a/6ECLRCy" rel="nofollow noreferrer">here</a>). The nested models are correctly resolved within the GAN model.</p>
<pre class="lang-py prettyprint-override"><code>from keras import Model, layers, optimizers
from tensorflow.python.keras.utils.vis_utils import plot_model
def get_generator(input_dim):
initial = layers.Input(shape=input_dim)
x = layers.Dense(128, activation=layers.ReLU())(initial)
x = layers.Reshape((7, 7, 128))(x)
x = layers.Conv2D(1, (7, 7), activation='tanh', padding="same")(x)
return Model(inputs=initial, outputs=x, name="Generator")
def get_discriminator(input_dim):
initial = layers.Input(shape=input_dim)
x = layers.Conv2D(64, 2, activation=layers.ReLU())(initial)
x = layers.Flatten()(x)
x = layers.Dense(1)(x)
return Model(inputs=initial, outputs=x, name="Discriminator")
def get_gan(input_dim, latent_dim):
initial = layers.Input(shape=input_dim)
x = get_generator(input_dim)(initial)
x = get_discriminator(latent_dim)(x)
return Model(inputs=initial, outputs=x, name="GAN")
m = get_generator((7, 7, 1))
m.compile(optimizer=optimizers.Adam())
plot_model(m, expand_nested=True, show_shapes=True, to_file="generator_model_functional.png")
gan = get_gan((7, 7, 1), (7, 7, 1))
plot_model(gan, expand_nested=True, show_shapes=True, to_file="gan_model_functional.png")
</code></pre>
|
<p>Whenever you pass <code>generator</code> and <code>discriminator</code> to <code>GANModel</code>, each acts like a single enclosed child layer that itself consists of <code>n</code> internal layers. So, if you plot the <code>generator</code> model through the <code>GANModel</code> instance, it will show as follows (same goes for <code>discriminator</code>), unlike the plots you get when using them separately.</p>
<p>The fact is that when we pass data through the <code>call()</code> method of <code>GANModel</code>, the <strong>input</strong> passes <strong>implicitly</strong> through all internal layers (<code>generator</code>, <code>discriminator</code>) according to its design. Here I will show you two workarounds to get your desired plot.</p>
<p><a href="https://i.stack.imgur.com/HkKkf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HkKkf.png" alt="enter image description here" /></a></p>
<hr />
<h2>Option 1</h2>
<p>You can probably guess the method. In the <code>GANModel</code> model, we will pass the <code>input</code> very <strong>explicitly</strong> to each internal layer of those child layers (<code>generator</code>, <code>discriminator</code>).</p>
<pre><code>class GANModel(BaseModel):
def __init__(self, generator, discriminator):
super(GANModel, self).__init__()
self.generator = generator
self.discriminator = discriminator
def call(self, input_tensor, training=False, mask=None):
x = input_tensor
for gen_lyr in self.generator.layers:
print(gen_lyr) # checking
x = gen_lyr(x)
for disc_lyr in self.discriminator.layers:
print(disc_lyr) # checking
x = disc_lyr(x)
return x
</code></pre>
<p>If you plot now, you will get</p>
<pre><code># All Internal Layers of self.generator, self.discriminator
<tensorflow.python.keras.layers.core.Dense object at 0x7f2a472a3710>
<tensorflow.python.keras.layers.core.Reshape object at 0x7f2a461e8f50>
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7f2a44591f90>
<tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7f2a47317290>
<tensorflow.python.keras.layers.core.Flatten object at 0x7f2a47317ed0>
<tensorflow.python.keras.layers.core.Dense object at 0x7f2a57f42910>
</code></pre>
<p><a href="https://i.stack.imgur.com/eyJ8I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eyJ8I.png" alt="enter image description here" /></a></p>
<hr />
<h2>Option 2</h2>
<p>I think it's a bit of an ugly approach. First, we take each internal layer and build a <code>Sequential</code> model with them. Then use <code>.build</code> to create its input layer. BOOM.</p>
<pre><code>gan = GANModel(generator=g, discriminator=d)
all_layer = []
for layer in gan.layers:
all_layer.extend(layer.layers)
gan_plot = tf.keras.models.Sequential(all_layer)
gan_plot.build((None,7,7,1))
list(all_layer)
[<tensorflow.python.keras.layers.core.Dense at 0x7f2a461ab390>,
<tensorflow.python.keras.layers.core.Reshape at 0x7f2a46156110>,
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x7f2a461fedd0>,
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x7f2a461500d0>,
<tensorflow.python.keras.layers.core.Flatten at 0x7f2a4613ea10>,
<tensorflow.python.keras.layers.core.Dense at 0x7f2a462cae10>]
</code></pre>
<pre><code>tf.keras.utils.plot_model(gan_plot, expand_nested=True, show_shapes=True)
</code></pre>
|
python|tensorflow|keras
| 1
|
373,636
| 66,766,808
|
How to make a list an element of all rows of a df?
|
<p>I have a data frame and I have a list. How can I make a new column in my df that has the list in every row?</p>
<p><code>list_skill=[A,B,C,D]</code></p>
<p>df</p>
<pre><code> col new_list
pdf [A,B,C,D]
dog [A,B,C,D]
dev [A,B,C,D]
</code></pre>
|
<h3><code>np.tile</code></h3>
<pre><code>df['new_list'] = np.tile(list_skill, (len(df), 1)).tolist()
</code></pre>
<hr />
<pre><code> col new_list
0 pdf [A, B, C, D]
1 dog [A, B, C, D]
2 dev [A, B, C, D]
</code></pre>
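<p>A plain-Python alternative without numpy; note the first form puts a reference to the same list in every row, so use a per-row copy if you intend to mutate them:</p>
<pre><code>df['new_list'] = [list_skill] * len(df)                        # shared references
df['new_list'] = [list(list_skill) for _ in range(len(df))]    # independent copies
</code></pre>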
|
python|pandas|dataframe
| 0
|
373,637
| 67,009,661
|
Error calculating gradient on function imported from R using reticulate
|
<p>I'm working on a problem right now where I am trying to use the optimizers from Tensorflow probability in Python to solve a simple optimization problem I've already defined in R.</p>
<p>Here are the steps:</p>
<p><strong>Step 1: Define the original Python problem for solving the Rosenbrock banana function:</strong></p>
<pre><code>import contextlib
import functools
import os
import time
import numpy as np
import pandas as pd
import scipy as sp
from six.moves import urllib
from sklearn import preprocessing
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
def make_val_and_grad_fn(value_fn):
@functools.wraps(value_fn)
def val_and_grad(x):
return tfp.math.value_and_gradient(value_fn, x)
return val_and_grad
@contextlib.contextmanager
def timed_execution():
t0 = time.time()
yield
dt = time.time() - t0
print('Evaluation took: %f seconds' % dt)
def np_value(tensor):
"""Get numpy value out of possibly nested tuple of tensors."""
if isinstance(tensor, tuple):
return type(tensor)(*(np_value(t) for t in tensor))
else:
return tensor.numpy()
def run(optimizer):
optimizer() # Warmup.
with timed_execution():
result = optimizer()
return np_value(result)
@make_val_and_grad_fn
def rosenbrock_test(coord):
x, y = coord[..., 0], coord[..., 1]
return((5.0-x)**2 + 10.0 * (y-x**2)**2)
rosenbrock_start = tf.constant([2., 2.])

optim_results = tfp.optimizer.lbfgs_minimize(
    rosenbrock_test,
    initial_position=rosenbrock_start,
    tolerance=1e-12)
print('L-BFGS Results')
print('Converged:', optim_results.converged)
print('Location of the minimum:', optim_results.position)
print('Number of iterations:', optim_results.num_iterations)
</code></pre>
<p><strong>Step 2: Define an identical function in R:</strong></p>
<pre><code>rosenbrock_for_r <- function(coord){
x <- coord[1]
y <- coord[2]
return( (5-x)^2 + 10 * (y-x^2)^2 ) }
rosenbrock_for_r(c(2,2))
</code></pre>
<p><strong>Step 3: Define Python wrapper for the R function:</strong></p>
<pre><code>def rosenbrock_R(coord):
return(r.rosenbrock_for_r(coord))
</code></pre>
<p>The error occurs at this step:</p>
<pre><code>temp = [2.0,2.0]
tfp.math.value_and_gradient(rosenbrock_R, [2.,2.])
</code></pre>
<p>The error is:</p>
<blockquote>
<p>TypeError: rosenbrock_R() takes 1 positional argument but 2 were given</p>
</blockquote>
<p>I've tried investigating if I'm inputting something incorrectly to the function, but the implementation is the same as my native implementation:</p>
<pre><code>def rosenbrock_alt(coord):
x, y = coord[..., 0], coord[..., 1]
return((5.0-x)**2 + 10.0 * (y-x**2)**2)
temp = tf.constant([2.0,2.0])
tfp.math.value_and_gradient(rosenbrock_alt,temp)
</code></pre>
<p>This produces the expected output:</p>
<blockquote>
<p>(<tf.Tensor: shape=(), dtype=float32, numpy=49.0>, <tf.Tensor: shape=(2,), dtype=float32, numpy=array([154., -40.], dtype=float32)>)</p>
</blockquote>
|
<p><code>tfp.math.value_and_gradient</code> will unpack the list into multiple arguments and diff with respect to each of them. You'll have to wrap in <code>np.array</code> or <code>tf.convert_to_tensor</code>.</p>
<p>Also, it's unclear how you will get a gradient for <code>rosenbrock_for_r</code>. You may have to use something like</p>
<pre class="lang-py prettyprint-override"><code>@tf.custom_gradient
def f(x):
def df(df_x):
return r.grad_rosenbrock(x, df_x)
return r.rosenbrock_for_r(x), df # or x.numpy() but that will be eager-only
</code></pre>
<p>You could use <code>tf.py_function</code> to embed eager/r code into a TF graph.</p>
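<p>A minimal sketch of the <code>tf.py_function</code> route (hedged: it assumes <code>r.rosenbrock_for_r</code> accepts a numpy array via reticulate, and gradients would still need the <code>custom_gradient</code> treatment above):</p>
<pre><code>def rosenbrock_R_tf(coord):
    # run the R function eagerly and expose it to TF as a float32 op
    return tf.py_function(lambda c: r.rosenbrock_for_r(c.numpy()), [coord], tf.float32)
</code></pre>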
|
python|r|tensorflow|reticulate|tensorflow-probability
| 1
|
373,638
| 67,055,939
|
how do I perform the explained operation in pandas?
|
<p>this is my df</p>
<pre><code>idx = pd.date_range('2020-01-01',periods=26,freq='D')
vals = [0,0,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0,0,1,0,0,0,1,1]
pd.DataFrame(vals,index=idx)
</code></pre>
<p><a href="https://i.stack.imgur.com/51cRh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/51cRh.png" alt="enter image description here" /></a></p>
<p>What I need to know is in which periods the values turn to 1. In this particular case they are 1 for the following periods (and this is the output I want to see):</p>
<pre><code>2020-01-04:2020-01-07
2020-01-14:2020-01-18
2020-01-21:2020-01-21
2020-01-25:2020-01-26
</code></pre>
<p>thanks</p>
|
<p>We can <code>group</code> the <code>index</code> of the dataframe on the sequential blocks of <code>1's</code> and aggregate using <code>first</code> and <code>last</code> to calculate the periods where the value turns/stays <code>1</code>.</p>
<pre><code>m = df[0].eq(1)
m[m].index.to_series().groupby((~m).cumsum()).agg(['first', 'last'])
</code></pre>
<hr />
<pre><code> first last
0
3 2020-01-04 2020-01-07
9 2020-01-14 2020-01-18
11 2020-01-21 2020-01-21
14 2020-01-25 2020-01-26
</code></pre>
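<p>If you want the exact colon-separated strings from the question (assuming the result above is stored in <code>out</code>):</p>
<pre><code>out = m[m].index.to_series().groupby((~m).cumsum()).agg(['first', 'last'])
print(out.apply(lambda r: f"{r['first']:%Y-%m-%d}:{r['last']:%Y-%m-%d}", axis=1)
         .to_string(index=False))
</code></pre>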
|
python|pandas|dataframe|date
| 5
|
373,639
| 67,082,682
|
Creating dummy variables for ordinals in pandas dataframe
|
<p>I am trying to create dummy variables in python in the pandas dataframe format. I have a variable called "Weight Group" and I want to transform the variables like so:</p>
<p>Before transformation:</p>
<pre><code> Weight_Group
0 1
1 5
2 4
3 2
4 2
5 3
6 1
</code></pre>
<p>After transformation:</p>
<pre><code> WD_1 WD_2 WD_3 WD_4 WD_5
0 1 0 0 0 0
1 1 1 1 1 1
2 1 1 1 1 0
3 1 1 0 0 0
4 1 1 0 0 0
5 1 1 1 0 0
6 1 0 0 0 0
</code></pre>
<p>I know that pandas has the get_dummies() function that creates dummy variables, but it doesn't give me the functionality that I want, where someone in weight group 3 has ones in the WD_1, WD_2, and WD_3 columns. I have a lot of data points, so a fast method would be great. If anyone has any ideas on how I can implement this I would really appreciate it!</p>
|
<p>You can call <code>pd.get_dummies()</code> and then replace your <code>0</code> tallies with <code>NaN</code> and use <code>bfill()</code> (plus a bit of extra cleanup for display):</p>
<pre><code>pd.get_dummies(df['Weight_Group'], prefix='WD').replace(0,np.nan).bfill(axis=1).fillna(0).astype(int)
</code></pre>
<p>Yields:</p>
<pre><code> WD_1 WD_2 WD_3 WD_4 WD_5
0 1 0 0 0 0
1 1 1 1 1 1
2 1 1 1 1 0
3 1 1 0 0 0
4 1 1 0 0 0
5 1 1 1 0 0
6 1 0 0 0 0
</code></pre>
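<p>Since you mention speed with many data points, a fully vectorized alternative is a single broadcast comparison (a sketch, equivalent to the output above):</p>
<pre><code>import numpy as np
vals = np.arange(1, 6)
out = pd.DataFrame((df['Weight_Group'].to_numpy()[:, None] >= vals).astype(int),
                   columns=[f'WD_{i}' for i in vals], index=df.index)
</code></pre>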
|
python|pandas|dataframe|dummy-variable
| 3
|
373,640
| 66,786,498
|
Pandas- update value in a specific column based on duplicate rows
|
<p>I have a pandas database of apartment building sales; one column is the price and another is the date sold. Some of these sales were for multiple properties; however, the price listed for each property reflects the total sale price of the bundle. These bundle deals can be further identified by the date on which the sale took place.</p>
<p>For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Address</th>
<th>Price</th>
<th>Date Sold</th>
<th>Tax Assessed Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>301-303 EAST 4TH STREET</td>
<td>3672530</td>
<td>11/24/2020</td>
<td>3420000</td>
</tr>
<tr>
<td>9 AVENUE B</td>
<td>1250000</td>
<td>06/16/2020</td>
<td>650000</td>
</tr>
<tr>
<td>11 AVENUE B</td>
<td>1250000</td>
<td>06/16/2020</td>
<td>800000</td>
</tr>
<tr>
<td>231-233 EAST 4TH STREET</td>
<td>2500000</td>
<td>06/16/2020</td>
<td>5111000</td>
</tr>
</tbody>
</table>
</div>
<p>I've so far identified all duplicates in the dataframe by using:</p>
<pre><code>df[df.duplicated(['Price', 'Date Sold'], keep = False)]
</code></pre>
<p>Which returns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Address</th>
<th>Price</th>
<th>Date Sold</th>
<th>Tax Assessed Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>9 AVENUE B</td>
<td>1250000</td>
<td>06/16/2020</td>
<td>650000</td>
</tr>
<tr>
<td>11 AVENUE B</td>
<td>1250000</td>
<td>06/16/2020</td>
<td>800000</td>
</tr>
</tbody>
</table>
</div>
<p>There are many bundle deals within the database with varying numbers of buildings. I'd like to estimate and update the price for each building within a bundle by using its proportion of the total tax assessed value for the bundle multiplied by the price value.</p>
<p>ex. (650000/(650000+800000))*1250000 = 560344.8</p>
<p>So, I'd end up with:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Address</th>
<th>Price</th>
<th>Date Sold</th>
<th>Tax Assessed Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>9 AVENUE B</td>
<td>560344.8</td>
<td>06/16/2020</td>
<td>650000</td>
</tr>
<tr>
<td>11 AVENUE B</td>
<td>689655.2</td>
<td>06/16/2020</td>
<td>800000</td>
</tr>
</tbody>
</table>
</div>
<p>I've found some previous questions on how to replace the whole row or one column value, but ultimately I'm pretty lost when it comes to identifying each bundle and calculating the proportion.</p>
|
<p>Try:</p>
<pre><code>df['Price'] *= (df['Tax Assessed Value'] /
df.groupby(['Price', 'Date Sold'])['Tax Assessed Value'].transform('sum')
)
</code></pre>
<p>but I think you need to identify exactly what you mean by duplicates.</p>
|
python|pandas
| 1
|
373,641
| 66,818,027
|
add list of lists to pandas dataframe, where each item of the list is a new column
|
<p>I have 10 lists of lists (of variable length) that look like this; within each, every inner list has the same length.</p>
<pre><code>[[0.2908717393875122, 0.012684155255556107, -0.0040715765208005905], [0.02942436747252941, 0.011299843899905682, 0.009102505631744862], [0.0382646806538105, 0.004623611457645893, 0.004776048939675093]]
</code></pre>
<p>I also have a list of 10 dataframes, where each df looks like this:</p>
<pre><code>Second | Number |
2 | B |
3 | B |
4 | B |
</code></pre>
<p>What I would like to do is add this list of lists to each dataframe, so that the result looks like this:</p>
<pre><code> Second | Number | V1 | V2 |V3
2 | B |0.2908717393875122 |0.012684155255556107 | -0.0040715765208005905
3 | B |0.02942436747252941| 0.011299843899905682| 0.009102505631744862
4 | B |0.0382646806538105 | 0.004623611457645893| 0.004776048939675093
</code></pre>
<p>I know how to append a single list to an existing dataframe (and it works), but I would like to do this all in one go.</p>
<pre><code>df['V1'] = list
</code></pre>
|
<pre><code>lst=[[0.2908717393875122, 0.012684155255556107, -0.0040715765208005905], [0.02942436747252941, 0.011299843899905682, 0.009102505631744862], [0.0382646806538105, 0.004623611457645893, 0.004776048939675093]]
</code></pre>
<p>Just use:</p>
<pre><code>df[['V1','V2','V3']]=lst
</code></pre>
<p>Now if you print <code>df</code> you will get your desired output:</p>
<pre><code> Second Number V1 V2 V3
0 2 B 0.290872 0.012684 -0.004072
1 3 B 0.029424 0.011300 0.009103
2 4 B 0.038265 0.004624 0.004776
</code></pre>
<p><strong>Edit by @msa</strong>:</p>
<pre><code># first create the new columns so they exist before assignment
new_cols = ['V1', 'V2', 'V3']
df = df.reindex(df.columns.union(new_cols), axis=1)
</code></pre>
|
python|python-3.x|pandas|list|dataframe
| 1
|
373,642
| 67,069,271
|
Want to create a function with def, but ValueError returned
|
<p><strong>What I wanna do</strong></p>
<p>I want to do RFM analytics for purchase data from an e-commerce site.</p>
<p>I processed the data into RFM format, so I want to rank every ID depending on the values of each column (Money, Recency and Frequency).</p>
<p>However, I got the error message below.</p>
<pre><code> ---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-15-e7bf5ddc856d> in <module>
13 return 5
14
---> 15 rfm['money rank'] = rfm['money'].apply(money)
16 rfm.head()
c:\users\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, raw, result_type, args, **kwds)
7766 kwds=kwds,
7767 )
-> 7768 return op.get_result()
7769
7770 def applymap(self, func, na_action: Optional[str] = None) -> DataFrame:
c:\users\lib\site-packages\pandas\core\apply.py in get_result(self)
183 return self.apply_raw()
184
--> 185 return self.apply_standard()
186
187 def apply_empty_result(self):
c:\users\lib\site-packages\pandas\core\apply.py in apply_standard(self)
274
275 def apply_standard(self):
--> 276 results, res_index = self.apply_series_generator()
277
278 # wrap results
c:\users\lib\site-packages\pandas\core\apply.py in apply_series_generator(self)
288 for i, v in enumerate(series_gen):
289 # ignore SettingWithCopy here in case the user mutates
--> 290 results[i] = self.f(v)
291 if isinstance(results[i], ABCSeries):
292 # If we have a view on v, we need to make a copy because
<ipython-input-15-e7bf5ddc856d> in money(a)
1 def money(a):
----> 2 if a < 1000:
3 return 0
4 if (1000 <= a) & (a < 2000):
5 return 1
c:\users\lib\site-packages\pandas\core\generic.py in __nonzero__(self)
1440 @final
1441 def __nonzero__(self):
-> 1442 raise ValueError(
1443 f"The truth value of a {type(self).__name__} is ambiguous. "
1444 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p><strong>Data</strong></p>
<pre><code>
money recency frequency
sum <lambda> len
ID
100 2674 169 days 1
101 19760 98 days 3
103 2674 167 days 1
109 7904 56 days 3
11 2674 211 days 1
<class 'pandas.core.frame.DataFrame'>
Index: 290 entries, 100 to 99
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 (money, sum) 290 non-null int64
1 (recency, <lambda>) 290 non-null timedelta64[ns]
2 (freqency, len) 290 non-null int64
dtypes: int64(2), timedelta64[ns](1)
memory usage: 9.1+ KB
</code></pre>
<p><strong>Code</strong></p>
<pre><code>
def money(a):
if a < 1000:
return 0
if (1000 <= a) & (a < 2000):
return 1
if (2000 <= a) & (a < 3000):
return 2
if (3000 <= a) & (a < 4000):
return 3
if (4000 <= a) & (a < 5000):
return 4
if a >= 5000:
return 5
rfm['money rank'] = rfm['money'].apply(money)
</code></pre>
<p>I tried different combinations of parentheses, but none of them worked.</p>
<p>If you could help me out, I'd be so grateful.
Thank you in advance!!!</p>
|
<p>If working with scalars, use <code>and</code> instead of <code>&</code>, and remove the last level of the <code>MultiIndex</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.droplevel.html" rel="nofollow noreferrer"><code>MultiIndex.droplevel</code></a>.</p>
<p>So use:</p>
<pre><code>def money(a):
if a < 1000:
return 0
if (1000 <= a) and (a < 2000):
return 1
if (2000 <= a) and (a < 3000):
return 2
if (3000 <= a) and (a < 4000):
return 3
if (4000 <= a) and (a < 5000):
return 4
if a >= 5000:
return 5
rfm.columns = rfm.columns.droplevel(-1)
rfm['money rank'] = rfm['money'].apply(money)
</code></pre>
<p>Another solution here is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a>:</p>
<pre><code>rfm.columns = rfm.columns.droplevel(-1)
rfm['money rank'] = pd.cut(rfm['money'],
bins=[-np.inf, 1000,2000,3000,4000,5000,np.inf],
labels=[0,1,2,3,4,5],
right=False)
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
373,643
| 67,172,162
|
How to compute the sales of each day from stock levels with pandas
|
<p>I want to count the sales for each day. The values in the original table are stock levels, not sales.
I used Excel to solve the problem, but now I have millions of products, so I want to solve it with pandas.</p>
<p><a href="https://i.stack.imgur.com/VRGuP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VRGuP.png" alt="The original table like this:" /></a></p>
<p><a href="https://i.stack.imgur.com/uEsqD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uEsqD.png" alt="I want to generate the table:" /></a></p>
<p>I am still new to programming and Pandas but I have read up on pandas docs and am still unable to do it.</p>
|
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer">pandas.DataFrame.diff()</a> is enough.</p>
<pre class="lang-py prettyprint-override"><code>df['STOCK'] = df['STOCK'].diff()
df.rename(columns={'STOCK': 'SALE'}, inplace=True)
df.rename(columns={'ID1_stock': 'ID1_sale', 'ID2_stock': 'ID2_sale', 'ID3_stock': 'ID3_sale'}, level=1, inplace=True)
</code></pre>
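<p>A tiny demonstration of the idea (hedged: it assumes sales show up as a day-over-day drop in stock, so the diff may need negating, and restocks would need separate handling):</p>
<pre><code>import pandas as pd
stock = pd.Series([100, 95, 90, 88], name='STOCK')
sale = -stock.diff()  # 5.0, 5.0, 2.0 units sold per day; the first day is NaN
</code></pre>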
|
python|pandas
| 1
|
373,644
| 66,850,213
|
Understanding shapes in keras layers
|
<p>I am learning Tensorflow and Keras to implement an <code>LSTM</code> <code>many-to-many</code> model where the length of the input sequence is equal to the length of the output sequence.</p>
<p>Sample Code:</p>
<p>Inputs:</p>
<pre><code>voc_size = 10000
embed_dim = 64
lstm_units = 75
size_batch = 30
count_classes = 5
</code></pre>
<p>Model:</p>
<pre><code>from tensorflow.keras.layers import ( Bidirectional, LSTM,
Dense, Embedding, TimeDistributed )
from tensorflow.keras import Sequential
def sample_build(embed_dim, voc_size, batch_size, lstm_units, count_classes):
model = Sequential()
model.add(Embedding(input_dim=voc_size,
output_dim=embed_dim,input_length=50))
model.add(Bidirectional(LSTM(units=lstm_units,return_sequences=True),
merge_mode="ave"))
model.add(Dense(200))
model.add(TimeDistributed(Dense(count_classes+1)))
# Compile model
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
return model
sample_model = sample_build(embed_dim, voc_size,
                       size_batch, lstm_units,
                       count_classes)
</code></pre>
<p>I am having trouble understanding the shapes of input and output for each layer. For example, the shape of the output of <code>Embedding_Layer</code> is <code>(BATCH_SIZE, time_steps, length_of_input)</code> and in this case, it is <code>(30, 50, 64)</code>.</p>
<p>Similarly, the output shape of the <code>Bidirectional LSTM</code> layer is <code>(30, 50, 75)</code>. This will be the input for the next <code>Dense Layer</code> with <code>200</code> units. But the shape of the weight matrix of a <code>Dense Layer</code> is (number of <code>units</code> in the current layer, number of units in the previous layer), which is <code>(200,75)</code> in this case. So how does the matrix calculation happen between the <code>2D</code> shape of the <code>Dense Layer</code> and the <code>3D</code> shape of the Bidirectional Layer? Any explanations on the shape clarification will be helpful.</p>
|
<p>The Dense layer can operate on 3D input: it effectively flattens the input to shape (batch_size * time_steps, features), applies the dense layer, and reshapes the result back to the original (batch_size, time_steps, units). In keras's <a href="https://keras.io/api/layers/core_layers/dense/" rel="nofollow noreferrer">documentation</a> of the Dense layer, it says:</p>
<blockquote>
<p>Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).</p>
</blockquote>
<p>Another point regarding the output of the <code>Embedding</code> layer. As you said, it is correct that it is a 3D output, but the shape actually corresponds to (batch_size, input_length, embed_dim), which is <code>(30, 50, 64)</code> here.</p>
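<p>A quick shape check (a minimal sketch with the numbers from the post) confirms that <code>Dense</code> is applied independently at each time step:</p>
<pre><code>import tensorflow as tf

x = tf.random.normal((30, 50, 75))   # (batch, time_steps, features)
dense = tf.keras.layers.Dense(200)
print(dense(x).shape)                # (30, 50, 200)
</code></pre>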
|
tensorflow|keras|deep-learning|neural-network|tensorflow2.0
| 1
|
373,645
| 66,969,738
|
How to return all indexes in multiindex on ANY condition
|
<p>I am trying to wrap my head around multilevel indices.</p>
<p>Specifically, I am trying to get all level 0 indices that fulfill an 'ANY' criterion,
but I can't for the life of me understand how to get it to work.</p>
<p>For instance, in the dataframe below, we want all indicies that have a '3' in the column 'test_variable_2'</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>event_name</th>
<th>test_variable_1</th>
<th>test_variable_2</th>
<th>test_variable_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>subject_id</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>pre_event</td>
<td>NaN</td>
<td>3</td>
<td>foo</td>
</tr>
<tr>
<td>1</td>
<td>intra_event</td>
<td>15</td>
<td>NaN</td>
<td>bar</td>
</tr>
<tr>
<td>1</td>
<td>post_event</td>
<td>30</td>
<td>NaN</td>
<td>fum</td>
</tr>
<tr>
<td>2</td>
<td>pre_event</td>
<td>NaN</td>
<td>2</td>
<td>foo</td>
</tr>
<tr>
<td>2</td>
<td>intra_event</td>
<td>45</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>post_event</td>
<td>60</td>
<td>NaN</td>
<td>fum</td>
</tr>
<tr>
<td>3</td>
<td>pre_event</td>
<td>NaN</td>
<td>3</td>
<td>foo</td>
</tr>
<tr>
<td>3</td>
<td>intra_event_1</td>
<td>75</td>
<td>NaN</td>
<td>bar</td>
</tr>
<tr>
<td>3</td>
<td>intra_event_2</td>
<td>90</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>3</td>
<td>post_event</td>
<td>105</td>
<td>NaN</td>
<td>fum</td>
</tr>
</tbody>
</table>
</div>
<p>And the result should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>event_name</th>
<th>test_variable_1</th>
<th>test_variable_2</th>
<th>test_variable_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>subject_id</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>pre_event</td>
<td>NaN</td>
<td>3</td>
<td>foo</td>
</tr>
<tr>
<td>1</td>
<td>intra_event</td>
<td>15</td>
<td>NaN</td>
<td>bar</td>
</tr>
<tr>
<td>1</td>
<td>post_event</td>
<td>30</td>
<td>NaN</td>
<td>fum</td>
</tr>
<tr>
<td>3</td>
<td>pre_event</td>
<td>NaN</td>
<td>3</td>
<td>foo</td>
</tr>
<tr>
<td>3</td>
<td>intra_event_1</td>
<td>75</td>
<td>NaN</td>
<td>bar</td>
</tr>
<tr>
<td>3</td>
<td>intra_event_2</td>
<td>90</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>3</td>
<td>post_event</td>
<td>105</td>
<td>NaN</td>
<td>fum</td>
</tr>
</tbody>
</table>
</div>
<p>I thought about using the .groupby function, but I am worried that I lose some of the test variables that contain several values.
The solution I have thus far is to select the indices that fulfill the boolean mask, and then drop all other indices, but it seems cumbersome, and not very panda'esque.</p>
<p>I am certain there is a way of harnessing the multilevel indices. Any pointers in the right direction would help.</p>
|
<p>Try with</p>
<pre><code>out = df.loc[df.index.isin(df.index[df['test_variable_2'].eq(3)])]
Out[529]:
event_name test_variable_1 test_variable_2 test_variable_3
subject_id
1 pre_event NaN 3.0 foo
1 intra_event 15.0 NaN bar
1 post_event 30.0 NaN fum
3 pre_event NaN 3.0 foo
3 intra_event_1 75.0 NaN bar
3 intra_event_2 90.0 NaN NaN
3 post_event 105.0 NaN fum
</code></pre>
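<p>An equivalent formulation, assuming the index level is named <code>subject_id</code> as in the printout, groups the boolean mask by that level and broadcasts <code>any</code> back to every row:</p>
<pre><code>mask = df['test_variable_2'].eq(3).groupby(level='subject_id').transform('any')
out = df[mask]
</code></pre>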
|
python|pandas|dataframe|multi-level
| 4
|
373,646
| 67,155,624
|
Keras LSTM loading data from CSV "expected ndim=3, found ndim=2. Full shape received: (None, 150)"
|
<p>I am a beginner with LSTMs so sorry if this is a basic question. I've been trying to make a simple LSTM model that loads data from a csv text file for training</p>
<pre><code> trainX = pd.read_csv("Train\\X_Data.txt", header=None, delim_whitespace=True).to_numpy()
trainY = pd.read_csv("Train\\Y_Data.txt", header=None, delim_whitespace=True).to_numpy()
testX = pd.read_csv("Test\\X_Data.txt", header=None, delim_whitespace=True).to_numpy()
testY = pd.read_csv("Test\\Y_Data.txt", header=None, delim_whitespace=True).to_numpy()
n_timesteps = trainX.shape[0]
n_features = trainX.shape[1]
model = Sequential()
model.add(LSTM(100, input_shape=trainX.shape, return_sequences=True))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
#may need 2 neurons as there are two classes
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(trainX, trainY, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=1)
# evaluate model
evalLosses, evalAccuracy = model.evaluate(testX, testY, batch_size=BATCH_SIZE, verbose=1)
print("Overall Accuracy: " + str(evalAccuracy))
print("Overall Loss: " + str(evalLosses))
</code></pre>
<p>Where my inputs are:</p>
<pre><code>trainY.shape = (35, 1)
trainX.shape = (35, 150)
trainX = [[0.48597709 0.52190752 0.62556772 ... 0.09958187 0.12535847 0.0833305 ]
[0.40917949 0.40525872 0.24515716 ... 0.33276069 0.40186229 0.36288622]
[0.16203835 0.14811591 0.1618184 ... 0.08745848 0.09398027 0.1056776 ]
...
[0.21770377 0.24859037 0.20659391 ... 0.01323494 0.01249982 0.01307911]
[0.27596078 0.26605097 0.36028712 ... 0.10316001 0.10662966 0.10724351]
[0.34860233 0.3500129 0.35434798 ... 0.04347154 0.02899346 0.02327774]]
trainY = [[0]
[0]
[0]
[0]
.
.
.
[0]
[0]
[1]
[1]
[1]]
</code></pre>
<p>When I try to fit the data to my model I get the following error</p>
<pre><code>ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 150)
</code></pre>
<p>How do I get my data to load? the shape is 2 dimensions (35,150), so why does keras only see (None, 150)?</p>
<p>Thanks</p>
|
<p><code>trainX.shape = (35, 150)</code> which means that you have <code>35</code> samples of <code>150</code>. But you need to pass the data with the <code>batch_size</code> in the first position according to Keras. So you would have to expand the <code>2D</code> input to <code>3D</code>:</p>
<pre><code>trainX = tf.expand_dims(trainX, axis=-1) # new shape = (35, 150, 1)
trainY = tf.expand_dims(trainY, axis=-1) # new shape = (35, 1, 1)
</code></pre>
<p>You can then pass the data to the model:</p>
<pre><code>model = Sequential()
model.add(LSTM(100, input_shape=(trainX.shape[1], trainX.shape[2]), return_sequences=True))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
</code></pre>
<p>Edit:</p>
<p>Since you are dealing with a binary classification task change the loss from <code>categorical_crossentropy</code> to <code>binary_crossentropy</code>.</p>
|
python|tensorflow|machine-learning|keras|lstm
| 1
|
373,647
| 66,954,976
|
How to plot same colors for same values in a map?
|
<p>I'm creating a colorbar with the function make_colormap. Source: <a href="https://stackoverflow.com/questions/16834861/create-own-colormap-using-matplotlib-and-plot-color-scale">Create own colormap using matplotlib and plot color scale</a>.
Also I'm plotting many maps with <code>for month, data in normals.groupby('MONTH'):</code>.
I want to create a colorbar with the same values for the same colors (to be able to compare values across maps), but in the:</p>
<pre><code>rvb = make_colormap(
[c('brown'), c('orange'), 0.10, c('orange'), c('yellow'), 0.20, c('green'), c('cyan'), 0.66, c('blue'), c('purple') ])
</code></pre>
<p>I can only put percentages. Do you know how I can modify this to put exact values instead of percentages?</p>
<pre><code>import matplotlib.colors as mcolors
def make_colormap(seq):
"""Return a LinearSegmentedColormap
seq: a sequence of floats and RGB-tuples. The floats should be increasing
and in the interval (0,1).
"""
seq = [(None,) * 3, 0.0] + list(seq) + [1.0, (None,) * 3]
cdict = {'red': [], 'green': [], 'blue': []}
for i, item in enumerate(seq):
if isinstance(item, float):
r1, g1, b1 = seq[i - 1]
r2, g2, b2 = seq[i + 1]
cdict['red'].append([item, r1, r2])
cdict['green'].append([item, g1, g2])
cdict['blue'].append([item, b1, b2])
return mcolors.LinearSegmentedColormap('CustomMap', cdict)
c = mcolors.ColorConverter().to_rgb
rvb = make_colormap(
[c('brown'), c('orange'), 0.10, c('orange'), c('yellow'), 0.20, c('green'), c('cyan'), 0.66, c('blue'), c('purple') ])
for month, data in normals.groupby('MONTH'):
    lons, lats = np.array(data['LONGITUDE']), np.array(data['LATITUDE'])
    ppvalues = np.array(data['PP']).astype(int)
    month = data['MONTH'].iloc[0]
    fig = plt.figure('map', figsize=(7,7), dpi=200)
    ax = fig.add_axes([0.1, 0.12, 0.80, 0.75], projection=ccrs.PlateCarree())
    plt.xlabel('LONGITUDE')
    plt.ylabel('LATITUDE')
    ax.outline_patch.set_linewidth(0.3)
    l = NaturalEarthFeature(category='cultural', name='admin_0_countries', scale='50m', facecolor='none')
    ax.add_feature(l, edgecolor='black', linewidth=0.25)
    img = ax.scatter(lons, lats, s=7, c=ppvalues, cmap=rvb,
                     marker='o', transform=ccrs.PlateCarree())
    #ticks=[0,1,2,3,4,5,6,7,8,9,10]
    cb = plt.colorbar(img, extend='both',
                      spacing='proportional', orientation='horizontal',
                      cax=fig.add_axes([0.12, 0.12, 0.76, 0.02]))
    plt.show()
    fig.savefig(f"path/{month}.png")
</code></pre>
<p>I'm relatively new to Python, so would you mind helping me?</p>
<p>Thanks in advance.</p>
|
<p>You could apply a <a href="https://matplotlib.org/stable/tutorials/colors/colormapnorms.html" rel="nofollow noreferrer"><code>norm</code></a>. Using the same norm for all plots would make the colors consistent. It is unclear what the range of your <code>data['PP']</code> column is. Here is an example of the changes if you would like <code>100</code>, <code>200</code> and <code>660</code> for the three values in the list given to <code>make_colormap</code>:</p>
<pre class="lang-py prettyprint-override"><code>vmin = normals['PP'].min() # the overall minimum, taken over the full dataframe
vmax = normals['PP'].max() # the overall maximum
norm = plt.Normalize(vmin, vmax) # function that maps the range of data['PP'] to the range [0,1]
rvb = make_colormap(
[c('brown'), c('orange'), norm(100), c('orange'), c('yellow'), norm(200), c('green'), c('cyan'), norm(660), c('blue'), c('purple')])
for month, data in normals.groupby('MONTH'):
...
img = ax.scatter(..., cmap=rvb, norm=norm)
...
</code></pre>
|
pandas|matplotlib
| 2
|
373,648
| 67,177,973
|
How can I avoid this for loop in pytorch? Is there a function for efficient computation?
|
<p>I have the following code in my Pytorch neural net:</p>
<pre><code>cos = nn.CosineSimilarity(dim=1)
d = torch.zeros(batch_sz, n, n).to(device="cuda")
for i in range(n):
for j in range(n):
d[:, i, j] = cos(q[:, i, :], k[:, j, :])
</code></pre>
<p><code>q</code> and <code>k</code> are both of size <code>(batch_sz, n, m)</code>.
This piece of code obviously slows down my program, and I'm wondering if PyTorch offers any functions that might make it more efficient.</p>
<p>Thanks so much!</p>
|
<p>I am not sure how to vectorize using <code>nn.CosineSimilarity</code> but you could use this vectorized implementation. It computes the cosine similarity in the same way as PyTorch's internal module.</p>
<pre><code>import torch
import torch.nn as nn
import time
# some dummy inputs
n=20
m=30
batch_sz = 10
k = torch.rand(batch_sz, n, m)
q = torch.rand(batch_sz, n, m)
d = torch.zeros(batch_sz, n, n)
cos = nn.CosineSimilarity(dim=1)
for i in range(n):
for j in range(n):
d[:, i, j] = cos(q[:, i, :], k[:, j, :])
# dot product (numerator)
out = torch.bmm(q, k.transpose(1,2))
# computing the denominator in the next 5 steps
# compute the norm and restore dimensions
q_norm = q.norm(dim=2).unsqueeze(2)
k_norm = k.norm(dim=2).unsqueeze(1)
# This repeats the norms along dim 2 for q and dim 1 for k
q_norm_expanded = q_norm.expand(batch_sz, n, n)
k_norm_expanded = k_norm.expand(batch_sz, n, n)
# we compute the product.
norms = q_norm_expanded* k_norm_expanded
# cosine similarity
out = out/(norms+1e-9)
print(torch.allclose(d, out))
</code></pre>
<p>The process of expanding and multiplying the norms is actually computing the outer product. So you could also use this operation:</p>
<pre><code>norms = torch.bmm(q_norm, k_norm)
</code></pre>
<p>instead of</p>
<pre><code>q_norm_expanded = q_norm.expand(batch_sz, n, n)
k_norm_expanded = k_norm.expand(batch_sz, n, n)
norms = q_norm_expanded* k_norm_expanded
</code></pre>
<p>I just realized you could normalize the vectors beforehand for a <strong>more concise and numerically stable version</strong>:</p>
<pre><code>q_norm = q.norm(dim=2)+1e-9
k_norm = k.norm(dim=2)+1e-9
q = q/q_norm.unsqueeze(2)
k = k/k_norm.unsqueeze(2)
out = torch.bmm(q, k.transpose(1,2))
</code></pre>
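<p>For reference, the same normalize-then-matmul idea can be written even more compactly with <code>torch.nn.functional.normalize</code>, which handles the epsilon internally:</p>
<pre><code>import torch.nn.functional as F

out = torch.bmm(F.normalize(q, dim=2), F.normalize(k, dim=2).transpose(1, 2))
</code></pre>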
|
performance|pytorch
| 0
|
373,649
| 67,086,769
|
Conditional mapping to a dataframe based on multiple columns
|
<p>I have a dataframe where I need to map categories based on value-based conditions on two separate columns. Total rows to do this are about a million.</p>
<p>Sample dataframe is:</p>
<pre><code>df = pd.DataFrame({'col1':['B','A','A','B','C','B','C','C','A'],
'col2':[10,30,40,20,60,30,70,80,50]})
</code></pre>
<p>Now, the conditions for True are:</p>
<ol>
<li>A: >30</li>
<li>B: >20</li>
<li>C: >60</li>
</ol>
<p>If the value in col2 satisfies the above condition for the category in col1, then the result is True (1), else False (0).</p>
<p>Expected outcome is:</p>
<pre><code> col1 col2 result
0 B 10 0
1 A 30 0
2 A 40 1
3 B 20 1
4 C 60 0
5 B 30 1
6 C 70 1
7 C 80 1
8 A 50 1
</code></pre>
|
<p>You can chain masks by <code>|</code> for bitwise <code>OR</code>:</p>
<pre><code>df['result'] = ((df['col1']=='A') & (df['col2']>30) |
                (df['col1']=='B') & (df['col2']>10) |
                (df['col1']=='C') & (df['col2']>60)).astype(int)
</code></pre>
<p>Or:</p>
<pre><code>df['result'] = np.where((df['col1']=='A') & (df['col2']>30) |
(df['col1']=='B') & (df['col2']>10) |
(df['col1']=='C') & (df['col2']>60), 1, 0)
</code></pre>
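<p>If the number of categories grows, a variant worth considering is to map each category to its threshold and compare once (thresholds below follow the same rules as above):</p>
<pre><code>thresholds = {'A': 30, 'B': 10, 'C': 60}
df['result'] = (df['col2'] > df['col1'].map(thresholds)).astype(int)
</code></pre>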
|
python-3.x|pandas|dataframe|numpy|mapping
| 1
|
373,650
| 67,120,888
|
How can I do a sjoin iteratively over features in a shapefile with geopandas, then encode categorical data?
|
<p>I have two shapefiles (<a href="https://drive.google.com/drive/folders/1pbvKvhIIvhqHfcMe9g6qtsjbZ6SzZrqt?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1pbvKvhIIvhqHfcMe9g6qtsjbZ6SzZrqt?usp=sharing</a>) - one point layer, and one polygon layer. The point layer represents customers and their location, while the polygon layers represents two boundaries. The objective is to get a table in the following format:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>customer</th>
<th>location 1</th>
<th>location 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>9</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>10</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>The way I've thought of doing this is to iterate through the polygons and do a sjoin with the points, then encode the categories as such:</p>
<pre><code>import geopandas as gpd
points = gpd.read_file('point.shp')
polygons = gpd.read_file('polygon.shp')
for index,row in polygons.iterrows():
points = gpd.sjoin(points, row, how='left', op='intersects')
points = pd.get_dummies(points, columns=['name'])
</code></pre>
<p>I get this error message:</p>
<blockquote>
<p>ValueError: 'right_df' should be GeoDataFrame, got <class 'pandas.core.series.Series'></p>
</blockquote>
<p>Appreciate any advice, thanks in advance!</p>
|
<p>You do not need a join, the <code>intersects</code> method is enough. Your target structure can be achieved using:</p>
<pre><code>points_in_locations = points.copy()
for idx, row in polygons.iterrows():
is_in_polygon = points.intersects(row.geometry)
points_in_locations[f"location {idx + 1}"] = is_in_polygon.astype(int)
</code></pre>
<p>resulting in:</p>
<pre><code> id geometry location 1 location 2
0 1 POINT (103.87728 1.30449) 0 1
1 2 POINT (103.87723 1.30415) 0 1
2 3 POINT (103.87761 1.30408) 0 1
3 1 POINT (103.87680 1.30287) 1 0
4 5 POINT (103.87724 1.30288) 1 0
5 6 POINT (103.87710 1.30275) 1 0
6 3 POINT (103.87687 1.30270) 1 0
7 9 POINT (103.87669 1.30444) 0 0
8 10 POINT (103.87681 1.30396) 0 0
</code></pre>
|
python|pandas|geopandas
| 1
|
373,651
| 66,921,943
|
PyTorch Tensor Operation for adding the maximum of the previous row to the next
|
<p>Follow-Up question to <a href="https://stackoverflow.com/questions/66919743/pytorch-dynamic-programming-as-tensor-operation">PyTorch: Dynamic Programming as Tensor Operation</a>.</p>
<p>Could the following be written as a tensor operation instead of a loop?</p>
<pre class="lang-py prettyprint-override"><code>a = torch.Tensor([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
print(a.shape)
# (3, 4)
for i in range(1, a.shape[0]):
a[i] = a[i-1].max(dim=0)[0] + a[i]
print(a)
# tensor([[ 1, 2, 3, 4],
# [ 9, 10, 11, 12],
# [21, 22, 23, 24]])
</code></pre>
<p>Basically adding the maximum of the previous row to all elements of the next.</p>
<p>The interesting part is that you can't compute the maximum for each row beforehand and then add that to the respective row, because adding the first maximum influences what the maximum of the next row is.</p>
|
<p>Not entirely sure why you're trying to do this, but yes, this is possible. It's basically the same as your last question:</p>
<pre><code>max_vals, _ = a.max(axis=1, keepdim=True)
additions = max_vals.cumsum(0)[:-1]
a[1:, :] += additions
</code></pre>
<p>This is because the marginal addition from one row to the next is equivalent to the maximum, so you can take the maximums first, then cumulatively sum them and add them to the original tensor.</p>
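<p>A quick check against the loop version from the question:</p>
<pre><code>a = torch.tensor([[1., 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
max_vals, _ = a.max(axis=1, keepdim=True)  # per-row maxima: 4, 8, 12
a[1:, :] += max_vals.cumsum(0)[:-1]
print(a)  # rows become [1..4], [9..12], [21..24], matching the loop
</code></pre>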
|
python|pytorch|dynamic-programming|tensor
| 2
|
373,652
| 67,003,191
|
Pandas all rows into one row of pd.series
|
<p>I have a pandas DataFrame loaded from a csv file like below:</p>
<pre><code>0 1 2 3 4 5
0 -1.140625 -1.828125 0.671875 -1.031250 -0.390625 -0.203125
1 -1.203125 -1.843750 0.687500 -0.953125 -0.281250 -0.156250
2 -1.187500 -1.781250 0.656250 -0.843750 -0.218750 -0.171875
3 -1.125000 -1.640625 0.593750 -0.765625 -0.234375 -0.062500
4 -1.031250 -1.453125 0.531250 -0.718750 -0.265625 -0.093750
... ... ... ... ... ... ...
6968 -1.093750 -0.687500 0.062500 -1.156250 -0.281250 -0.156250
6969 -1.140625 -0.734375 0.109375 -1.343750 -0.046875 -0.093750
6970 -1.203125 -0.765625 0.156250 -1.234375 0.046875 -0.171875
6971 -1.234375 -0.812500 0.234375 -0.953125 0.171875 -0.093750
6972 -1.265625 -0.843750 0.281250 -0.828125 0.078125 -0.265625
6973 rows × 6 columns
</code></pre>
<p>And I want to squeeze all row elements into one row of pandas series.<br />
The original dataFrame is 6973 rows of 6 columns with every cell having one digit data, and my desired output is 1 row of 6 columns, with every column having a sequence(series?) of length 6973</p>
<pre><code> 0 1 2 3 4 5
0 -1.140625 -1.203125 1.891651 2 1.939205 3... 0 -0.207383 1 -0.193249 2 -0.239664 3... 0 0.261557 1 0.235363 2 0.258561 3... 0 -0.214562 1 -0.249118 2 -0.291458 3... 0 -0.171253 1 -0.112890 2 -0.041053 3... 0 -0.118167 1 -0.112238 2 -0.102034 3...
</code></pre>
<p>How can I do this?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a> with default index, convert to DataFrame and transpose:</p>
<pre><code>df = df.unstack().reset_index(drop=True).to_frame().T
</code></pre>
<p>Or use <a href="https://numpy.org/doc/stable/reference/generated/numpy.ravel.html" rel="nofollow noreferrer"><code>numpy.ravel</code></a>:</p>
<pre><code>df = pd.DataFrame([np.ravel(df, order='F')])
</code></pre>
<p>EDIT:</p>
<p>You can convert all Series to lists by:</p>
<pre><code>df = pd.DataFrame([{k: list(v) for k, v in df.items()}])
</code></pre>
|
python|pandas
| 0
|
373,653
| 66,766,006
|
Can't load a dataset from torchvision
|
<p>I'm trying to load the <a href="https://www.yf.io/p/lsun" rel="nofollow noreferrer">LSUN dataset</a> following PyTorch's <a href="https://pytorch.org/vision/stable/datasets.html#lsun" rel="nofollow noreferrer">code</a>. I used their other datasets but this one seems to give me errors.</p>
<pre><code>import torch
import torchvision.transforms as transforms
#convert the data to torch tensors
transform = transforms.Compose([transforms.ToTensor()])
from torchvision.datasets import LSUN
data = LSUN(root = './', transform=transform)
>>>Error: .//bedroom_train_lmdb: No such file or directory
</code></pre>
<p>Am I doing something wrong here? The code works just fine with MNIST/CIFAR/etc. (with a slight modification: <code>data = MNIST(root = './', train=False, download=True, transform=transform)</code>).</p>
<p><strong>Update</strong><br />
Cloned the repo and downloaded the dataset:</p>
<pre><code>!git clone https://github.com/fyu/lsun.git
cd lsun
# Download testing set
!python3 download.py -c test
</code></pre>
<p>Tried running the code as before with</p>
<pre><code>data = LSUN(root = '',classes='test_lmdb.zip', transform=transform)
</code></pre>
<p>But getting this error now:</p>
<pre><code>ValueError: Unknown value 'test_lmdb.zip' for argument classes. Valid values are {'train', 'val', 'test'}.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/torchvision/datasets/utils.py in verify_str_arg(value, arg, valid_values, custom_msg)
348 msg = msg.format(value=value, arg=arg,
349 valid_values=iterable_to_str(valid_values))
--> 350 raise ValueError(msg)
351
352 return value
ValueError: Unknown value '' for LSUN class. Valid values are {'bedroom', 'bridge', 'church_outdoor', 'classroom', 'conference_room', 'dining_room', 'kitchen', 'living_room', 'restaurant', 'tower'}.
</code></pre>
<p>When I change it to</p>
<pre><code>data = LSUN(root = '',classes='test', transform=transform)
</code></pre>
<p>I get this error:</p>
<pre><code>Error: /test_lmdb: No such file or directory
</code></pre>
|
<p>Unlike most other <a href="https://pytorch.org/vision/stable/datasets.html#lsun" rel="nofollow noreferrer">datasets offered by Torchvision</a>, LSUN doesn't appear to have a <code>download</code> argument. You can manually download the files to the specified directory from here:</p>
<p><a href="https://www.yf.io/p/lsun" rel="nofollow noreferrer">https://www.yf.io/p/lsun</a></p>
<p>And then run your code as written.</p>
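<p>For example, after downloading the bedroom training set, the <code>*_lmdb</code> directories must sit directly under <code>root</code> (a sketch, assuming the data was unpacked into <code>./lsun</code>):</p>
<pre><code>data = LSUN(root='./lsun', classes=['bedroom_train'], transform=transform)
</code></pre>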
|
python|pytorch|torchvision
| -1
|
373,654
| 67,124,432
|
Max Pooling across MRI Slices
|
<p>I am trying to implement a Machine Learning Model for MRI scan diagnosis.
I have Inputs of shape (x, 256, 256, 3), where we have 3 color channels and where x is the number of slices in a sequence.
I read the <a href="https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002699" rel="nofollow noreferrer">MRNet</a> paper and I want to implement a similar architecture in TensorFlow Keras. Instead of using the AlexNet feature extractor, I'd like to use VGG16.</p>
<p>The model architecture in the paper:</p>
<blockquote>
<p>The primary building block of our prediction system is MRNet, a convolutional
neural network (CNN) mapping a 3-dimensional MRI series to a probability [15] (Fig 2). The
input to MRNet has dimensions s × 3 × 256 × 256, where s is the number of images in the MRI
series (3 is the number of color channels). First, each 2-dimensional MRI image slice was
passed through a feature extractor based on AlexNet to obtain a s × 256 × 7 × 7 tensor containing features for each slice. A global average pooling layer was then applied to reduce these features to s × 256. <strong>We then applied max pooling across slices to obtain a 256-dimensional
vector, which was passed to a fully connected layer and sigmoid activation function to
obtain a prediction in the 0 to 1 range.</strong></p>
</blockquote>
<p>So far so good. I have a sequential model, added the feature extractor as the first step, then apply a GlobalAveragePooling2D() to reduce features to shape (x, 512). Then I must MaxPool across the slices but I have no approach for this problem.</p>
<pre class="lang-py prettyprint-override"><code>feature_extractor = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
model = Sequential()
model.add(feature_extractor) #output shape: (x, 8, 8, 512)
model.add(GlobalAveragePooling2D()) #output shape: (x, 512)
# Here i have to add a Layer witch Pools over the slices.
model.add( ) #output shape(1, 512)
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
</code></pre>
<p>The example Scan has a shape of (44, 256, 256, 3). When it runs through the VGG16 its features have the Dimension of (44, 8, 8, 512). After GlobalAverage Pooling I got (44, 512). This 2-D Array must then somehow be transformed into the shape of (1, 512). I mean if I do the operation on a simple 2-D NumPy array I need a function like np.max over the 0-axis</p>
<pre><code>np.max(x, axis=0)
</code></pre>
<p>Maybe you can give me a hint or have an approach for this.
Thanks a lot for your help :)</p>
<p>################################################################################
Edit: 01.05.2021</p>
<p>I played around with your approach @Aaron Keesing, but fitting the model does not train it at all somehow. After 25 epochs I still have the same accuracy, and the accuracy equals the class distribution over my 2 classes (I was just training on the coronal plane and 'abnormal').</p>
<p><a href="https://i.stack.imgur.com/kqnG2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kqnG2.png" alt="accuracy metrics" /></a></p>
<p>In this case, for example, I have 500 cases, and 80% of the cases have an abnormality and 20% don't.</p>
<pre><code># Dataset train, overall 500 cases
Absolute:
abnormal acl meniscus
1 0 0 184
1 118
0 0 0 100
1 1 1 63
0 35
dtype: int64
Relative:
abnormal acl meniscus
1 0 0 0.368
1 0.236
0 0 0 0.200
1 1 1 0.126
0 0.070
###########################################################
# Dataset valid, overall 100 cases
Absolute:
abnormal acl meniscus
1 1 1 27
0 0 0 25
1 1 0 23
0 0 20
1 5
dtype: int64
Relative:
abnormal acl meniscus
1 1 1 0.27
0 0 0 0.25
1 1 0 0.23
0 0 0.20
1 0.05
</code></pre>
|
<p>After thinking about this problem I have found one solution which could work:</p>
<pre><code>vgg16 = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
average_pool = Sequential(name='AveragePool')
average_pool.add(layers.AveragePooling2D(input_shape=(8, 8, 512)))
average_pool.add(layers.Flatten())
self.average_pool = average_pool
self.model = Sequential([
vgg16,
average_pool], name='MyModel')
self.model.summary()
# Max-pooling
self.model.add(Dense(256, activation='relu', kernel_constraint=constraints.MaxNorm(max_value=2, axis=0)))
self.model.add(Dense(1, activation='sigmoid'))
self.model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
</code></pre>
<p>This leads to the following summary</p>
<pre><code>Model: "AveragePool"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
average_pooling2d (AveragePo (None, 4, 4, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 8192) 0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________
Model: "MyModel"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 8, 8, 512) 14714688
_________________________________________________________________
AveragePool (Sequential) (None, 8192) 0
_________________________________________________________________
dense (Dense) (None, 256) 2097408
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 16,812,353
Trainable params: 16,812,353
Non-trainable params: 0
</code></pre>
<p>If you got any other ideas or ideas for improvement, let me know!</p>
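<p>For comparison, a minimal sketch of the paper's "max pooling across slices" step, assuming (as in the question) that the slice axis ends up as the leading axis of the pooled features:</p>
<pre><code>import tensorflow as tf

slice_features = tf.random.normal((44, 512))          # (slices, features) after pooling
case_vector = tf.reduce_max(slice_features, axis=0)   # (512,), one vector per scan
pred = tf.keras.layers.Dense(1, activation='sigmoid')(case_vector[None, :])
</code></pre>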
|
python|tensorflow|machine-learning|keras
| 0
|
373,655
| 67,005,853
|
Extract Month and the week of the month
|
<p>I am currently working with a pandas DataFrame containing datetime objects. Is there a way to extract the month and week-of-the-month from a pandas datetime object?</p>
<pre><code>data = pd.DataFrame(pd.date_range('1/1/2000', periods=7, freq='D'))
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
Expected:
0 2000-01-01 01-01
1 2000-01-02 01-01
2 2000-01-03 01-01
3 2000-01-04 01-01
4 2000-01-05 01-01
5 2000-01-06 01-01
6 2000-01-07 01-01
7 2000-01-08 01-02
8 2000-01-09 01-02
9 2000-01-10 01-02
</code></pre>
|
<p>Based on <a href="https://stackoverflow.com/questions/25249033/week-of-a-month-pandas">Week of a month pandas</a></p>
<pre><code>data[0].apply(lambda d: f'{d.month:02}-{(d.day-1) // 7 + 1:02}')
</code></pre>
<p>should give</p>
<pre><code>0 01-01
1 01-01
2 01-01
3 01-01
4 01-01
5 01-01
6 01-01
7 01-02
8 01-02
9     01-02
Name: 0, dtype: object
</code></pre>
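<p>A vectorized variant using the <code>.dt</code> accessor instead of <code>apply</code> (assuming the datetime column is <code>data[0]</code> as above):</p>
<pre><code>s = data[0]
data['month_week'] = (s.dt.month.map('{:02}'.format)
                      + '-'
                      + ((s.dt.day - 1) // 7 + 1).map('{:02}'.format))
</code></pre>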
|
python|pandas|datetime
| 1
|
373,656
| 66,959,856
|
Python: plotting several arrays in a single plot using for loop
|
<p>I have several arrays (more than this, about 20 x arrays and 20 y arrays) but this is an example</p>
<pre><code>xa1=[0,...3000]
ya1=[0,...3000]
xa2=[0,...3000]
ya2=[0,...3000]
xa3=[0,...3000]
ya3=[0,...3000]
</code></pre>
<p>I want to plot these arrays in a single plot using a for loop</p>
<p>I try first making an array of arrays</p>
<pre><code>xarr = np.array([[xa1],[xa2],[xa3]])
yarr = np.array([[ya1],[ya2],[ya3]])
for i in range(3):
plt.plot(xarr[i], yarr[i])
plt.show()
</code></pre>
<p>but my Jupyter notebook crashes. I am new to coding, so could anyone help me improve this little code, or help me with an alternative? Thank you so much in advance.</p>
|
<p>Based on your syntax, <code>xarr[i]</code> is not a 1-D array but a 2-D array of shape <code>(1, N)</code>, because of the extra brackets around each array. Matplotlib plots 2-D inputs column by column, so it tries to build thousands of one-point lines, which is what hangs your notebook.</p>
<p>Try initializing xarr as a list instead, i.e. [xa1,xa2,xa3], and the same for yarr: you don't need them to be arrays, just a list OF arrays for the for-loop to iterate through.</p>
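<p>A minimal sketch of that approach:</p>
<pre><code>xarr = [xa1, xa2, xa3]
yarr = [ya1, ya2, ya3]
for x, y in zip(xarr, yarr):
    plt.plot(x, y)
plt.show()
</code></pre>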
|
python|arrays|numpy|loops
| 0
|
373,657
| 66,909,817
|
Custom function with multiple argument and one return value in map_fn for tensor object in Tensorflow
|
<p>I have two tensors t1 and t2 (shape=(64,64,3), dtype=tf.float64). I want to execute a custom function "func" which takes two tensors as input and returns one new tensor.</p>
<pre><code>@tf.function
def func(a):
t1 = a[0]
t2 = a[1]
return tf.add(t1, t2)
</code></pre>
<p>I am using map_fn of tensorflow to execute the function for each element of the inputs.</p>
<pre><code>t = tf.map_fn(fn=func, elems=(t1, t2), dtype=(tf.float64, tf.float64))
tf.print(t)
</code></pre>
<p>Sample input tensors for testing purpose are,</p>
<pre><code>t1 = tf.constant([[1.1, 2.2, 3.3],
[4.4, 5.5, 6.6]])
t2 = tf.constant([[7.7, 8.8, 9.9],
[10.1, 11.11, 12.12]])
</code></pre>
<p>I cannot use map_fn with two arguments. [Tried with tf.stack and unstack also, but that didn't work either.] Any idea how to do that?</p>
|
<p>The "elems" parameter of "map_fn" unpacks the argument passed to it along axis 0. So, in order to pass multiple tensors in the custom function,</p>
<ol>
<li>We have to stack them together.</li>
<li>Add an extra dimension along axis 0.</li>
</ol>
<pre><code># t1 and t2 have shape [2, 3]
val = tf.stack([t1, t2]) # shape is now [2, 2, 3]
val = tf.expand_dims(val, axis=0) # shape is now [1, 2, 2, 3]
t = tf.map_fn(fn=func, elems=val, dtype=tf.float64)
</code></pre>
<p>Also the "dtype" of "map_fn" should be the return type of the function. For example, in this case it should be tf.float64. If the function would return a tuple, the dtype would also be a tuple.</p>
<pre><code>@tf.function
def func(a): # a has shape [2, 2, 3]
t1 = a[0] # shape [2, 3]
t2 = a[1] # shape [2, 3]
return tf.add(t1, t2)
</code></pre>
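<p>As an aside, in TensorFlow 2.3+ <code>map_fn</code> also accepts a tuple of tensors directly; the function then receives a tuple, and the output type is declared via <code>fn_output_signature</code> (a sketch, assuming <code>float64</code> inputs as stated in the question):</p>
<pre><code>t = tf.map_fn(lambda pair: tf.add(pair[0], pair[1]),
              elems=(t1, t2),
              fn_output_signature=tf.float64)
</code></pre>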
|
python|tensorflow|tensor|xor|custom-function
| 0
|
373,658
| 66,829,855
|
matplotlib: share x axis from one subplot with y axis from another
|
<p>I want to project 3D data onto XY, XZ, YZ subplots with interactive shared axes.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, axes = plt.subplots(3, 1, constrained_layout=True)
n = 10000
pts = {
'X': np.random.normal(0, 1, n),
'Y': np.random.normal(0, 2, n),
'Z': np.random.normal(0, 4, n)
}
for ax, (k1, k2) in zip(axes, [('X', 'Y'), ('X', 'Z'), ('Y', 'Z')]):
ax.plot(pts[k1], pts[k2], ',')
ax.set_xlabel(k1)
ax.set_ylabel(k2)
axes[0].sharex(axes[1])
axes[1].sharey(axes[2])
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/ARykz.png" alt="enter image description here" /></p>
<p>The XY and XZ plots share the X axis limits, and the YZ and XZ plots share the Z axis limits, but how can I make it such that the XY and YZ share the Y axis limits? Maybe some syntax like <code>axes[2].sharexy(axes[0])</code> exists in some fashion?</p>
|
<p>A quick (and possibly very stupid) fix I found is to sort of do the axis sharing manually. In other words, if both x and y axes you want to share have the same size in figure (i.e. both of them span e.g. 10 cm), you can manually set them to have equal limits, ticks and tick labels. In your case it would be something like:</p>
<pre><code>axes[0].set_ylim(axes[2].get_xlim()) #set the y lim of first subplot the same
#as x lim of last subplot
axes[0].set_yticks(axes[2].get_xticks()) #set y ticks of first subplot the same as
#x ticks of last subplot
#You can also do further stuff like turning off y ticks labels of axes[0]
#with something like plt.setp(axes[0].get_yticklabels(), visible=False).
</code></pre>
<p>In my case, doing this after all the figure adjustments (<code>subplots_adjust()</code> etc.) yielded a result very similar to sharing both of these axes, also with correct tick labels. Adjusting the y tick labels of <code>axes[0]</code> after "sharing" manually seems to be trickier, since <code>axes[2].get_xticklabels()</code> returns an array of Text objects, which also have x and y positions. Another (dirty) workaround for this is to manually adjust the y tick labels for <code>axes[0]</code>:</p>
<pre><code>#list comprehension to get strings of each x ticks of last subplot
ax2xticklabels = [i.get_text() for i in axes[2].get_xticklabels()]
axes[0].set_yticklabels(ax2xticklabels) #set y tick labels of first subplot
#the same as x tick labels of last subplot
</code></pre>
<p>I actually came here searching for a more elegant way to solve this problem, but since there seems to be no built-in way to share an x axis with a y axis, I wanted to share my dirty quick fix :D. Hope this might help!</p>
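<p>For completeness, a sketch of keeping the two axes in sync interactively via limit callbacks; the guards prevent the two callbacks from triggering each other endlessly:</p>
<pre><code>def sync_y0_to_x2(ax):
    if axes[2].get_xlim() != ax.get_ylim():
        axes[2].set_xlim(ax.get_ylim())

def sync_x2_to_y0(ax):
    if axes[0].get_ylim() != ax.get_xlim():
        axes[0].set_ylim(ax.get_xlim())

axes[0].callbacks.connect('ylim_changed', sync_y0_to_x2)
axes[2].callbacks.connect('xlim_changed', sync_x2_to_y0)
</code></pre>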
|
python|numpy|matplotlib|plot
| 0
|
373,659
| 67,119,378
|
How to create a new dataFrame based on some column values?
|
<p>I have a dataframe which has a column Flag whose values are either True or False. I want to create a new dataframe whose column Flag's values must be all True.</p>
<pre><code>df = {'Name':['Tom', 'nick', 'krish', 'jack'],
'Age':['12', '23', '25', '16'],
'Flag':[True, False, False, True]}
</code></pre>
<p>I want get a new df whose Flags are True, and also drop this Flag column:</p>
<pre><code>df = {'Name':['Tom', 'jack'],
'Age':['12', '16']}
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with the boolean column; since it is already boolean, comparing against <code>True</code> is not necessary:</p>
<pre><code>df = pd.DataFrame(df)
</code></pre>
<p>You can select some columns only by list use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>df1 = df.loc[df['Flag'], ['Name','Age']]
</code></pre>
<p>Or select and remove <code>Flag</code> in one step with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a>:</p>
<pre><code>df1 = df[df.pop('Flag')]
</code></pre>
<p>Or drop <code>Flag</code> after selecting, with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>DataFrame.drop</code></a>:</p>
<pre><code>df1 = df[df['Flag']].drop('Flag', axis=1)
</code></pre>
<hr />
<pre><code>print (df1)
Name Age
0 Tom 12
3 jack 16
</code></pre>
|
python|pandas
| 3
|
373,660
| 67,038,376
|
Difference between torch.Size([64]) and (64,)?
|
<p>I created a Pytorch dataset class to store 64 lines of text. The file only has text, no label so I artificially generated an index list y (just to follow along with a tutorial https://medium.com/swlh/how-to-use-pytorch-dataloaders-to-work-with-enormously-large-text-files-bbd672e955a0#4fe0). After I created the dataset object and wrapped it around a dataloader, <code>y.shape</code> returned <code>torch.Size([64])</code> while the tutorial said it would return <code>(64,)</code>. (torch version is 1.8.1. torchvision version is 0.9.1. Python is 3.7.10.)</p>
<p>Is there a difference between torch.Size([64]) and (64,)? Thank you.</p>
<pre><code>##### IMPORT PACKAGES #####
import nltk
import string
from nltk import word_tokenize
from torch.utils.data import IterableDataset, DataLoader, Dataset
##### DEFINE CLASS #####
class CustomDataset(Dataset):
# A Pytorch Dataset class to store text
def __init__(self, filename):
'''
Input: filename (Each line is a string.)
Output: member variable X (list of unprocessed strings)
member variable y (index list of X)
'''
# Open file and store contents in list
with open(filename) as f:
lines = f.read().split('\n')
X, y = [], []
i = 0
for line in lines:
X.append(line)
y.append(i)
i +=1
# Store in member variables
self.X = X
self.y = y
def preprocess(self, text):
'''
Input: a string from X
Output: a preprocessed string
'''
text_pp = text.lower() # lower case
return text_pp
def __len__(self):
return len(self.y)
def __getitem__(self, index):
'''
Input: a number (within range of X's indices)
Output: string at specified index
'''
return self.preprocess(self.X[index]), self.y[index]
##### CREATE OBJECT #####
dataset = CustomDataset('micro.txt')
dataloader = DataLoader(dataset, batch_size = 64, num_workers = 2)
for X, y in dataloader:
print(y.shape) # torch.Size([64]) [Is it same as (64,)?])
</code></pre>
|
<p>In a way they are the same thing. You are printing the shape of a one-dimensional tensor, while the shape written in the tutorial is the shape format of a one-dimensional numpy array.</p>
<p>If the shape of y is printed after converting it into a numpy array, the mentioned format will appear. You can see both formats with the following code.</p>
<pre><code>print((torch.rand(64)).shape) # torch.Size([64])
print((torch.rand(64)).numpy().shape) # (64,)
</code></pre>
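<p>Since <code>torch.Size</code> is a subclass of <code>tuple</code>, the two notations even compare equal:</p>
<pre><code>print(torch.Size([64]) == (64,))  # True
</code></pre>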
|
python|pytorch
| 0
|
373,661
| 66,932,811
|
Return key on fuzzy match of element in dictionary list
|
<p>I have a dataframe like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Cost Category</th>
<th>Vendor</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-03-22</td>
<td>-</td>
<td>FamilyMart</td>
</tr>
<tr>
<td>2021-03-04</td>
<td>-</td>
<td>FAMILY MART</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Subway MAIN</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>OTHER</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Transit Authority</td>
</tr>
<tr>
<td>2021-03-09</td>
<td>-</td>
<td>Subway local</td>
</tr>
<tr>
<td>2021-03-24</td>
<td>-</td>
<td>Seven Eleven</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Seven-Eleven</td>
</tr>
</tbody>
</table>
</div>
<p>I want to add category tags like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Cost Category</th>
<th>Vendor</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-03-22</td>
<td>Store</td>
<td>FamilyMart</td>
</tr>
<tr>
<td>2021-03-04</td>
<td>Store</td>
<td>FAMILY MART</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>Dining</td>
<td>Subway MAIN</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>OTHER</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Transit Authority</td>
</tr>
<tr>
<td>2021-03-09</td>
<td>Dining</td>
<td>Subway local</td>
</tr>
<tr>
<td>2021-03-24</td>
<td>Store</td>
<td>Seven Eleven</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>Store</td>
<td>Seven-Eleven</td>
</tr>
</tbody>
</table>
</div>
<p>I try the following, which would just return the value of the matching element in the list:</p>
<pre><code>from fuzzywuzzy import process
from fuzzywuzzy import fuzz
Store = ['Family Mart', 'Seven Eleven', 'York Mart', 'Tokyu', 'Ministop']
Dining = ['Subway', 'Salad Works']
def fuzz_m(col, cat_list, score_t):
tag, score = process.extractOne(col, cat_list, scorer = score_t)
if score < 51:
return ''
else:
return tag
df['Cost Category'] = df['Vendor'].apply(fuzz_m, cat_list = Store, score_t = fuzz.ratio)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Cost Category</th>
<th>Vendor</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-03-22</td>
<td>Family Mart</td>
<td>FamilyMart</td>
</tr>
<tr>
<td>2021-03-04</td>
<td>Family Mart</td>
<td>FAMILY MART</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Subway MAIN</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>OTHER</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>-</td>
<td>Transit Authority</td>
</tr>
<tr>
<td>2021-03-09</td>
<td>-</td>
<td>Subway local</td>
</tr>
<tr>
<td>2021-03-24</td>
<td>Seven Eleven</td>
<td>Seven Eleven</td>
</tr>
<tr>
<td>2021-03-14</td>
<td>Seven Eleven</td>
<td>Seven-Eleven</td>
</tr>
</tbody>
</table>
</div>
<p>What I want to do is use a dictionary in place of cat_list and return the key in Cost Category.</p>
<pre><code>dictionary = {'Store':['Family Mart', 'Seven Eleven', 'York Mart', 'Tokyu', 'Ministop'],
'Dining':['Subway', 'Salad Works']
}
</code></pre>
<p>Where if any value in the column has a 51+ match to an element in a list, then I want to add the key under Cost Category. If it is a low match (below 51) I want to do nothing.</p>
<p>Is there a feasible approach to achieve this?</p>
|
<p>With <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><strong><code>Series.apply()</code></strong></a>, <code>fuzz_m()</code> receives one <code>Vendor</code> value at a time, so you can use that <code>dictionary</code> directly as <code>extractOne(value, dictionary)</code>:</p>
<pre class="lang-py prettyprint-override"><code>def fuzz_m(value):
_, score, tag = process.extractOne(value, dictionary)
return tag if score > 50 else '-'
df['Cost Category'] = df['Vendor'].apply(fuzz_m)
# Date Cost Category Vendor
# 0 2021-03-22 Store FamilyMart
# 1 2021-03-04 Store FAMILY MART
# 2 2021-03-14 Dining Subway MAIN
# 3 2021-03-14 - OTHER
# 4 2021-03-14 - Transit Authority
# 5 2021-03-09 Dining Subway local
# 6 2021-03-24 Store Seven Eleven
# 7 2021-03-14 Store Seven-Eleven
</code></pre>
|
python|pandas|numpy
| 1
|
373,662
| 66,782,065
|
How to update the a specific set of indices of a multi-dimensional tensor in Tensorflow
|
<p>I have this multi-dimensional tensor of shape [1,32,32,155], of which I want to update
the [:,:,:,0:27] indices.</p>
<p>In pytorch, one would do this simply with index assign i.e [:,:,:,0:27] = [1,32,32,27].
Index assign is currently not supported in Tensorflow. Therefore, my first attempt was to do the following:</p>
<pre><code> feat_ch = tf.unstack(feat, axis=3)
feat_ch[0:self.ncIn] = tf.unstack(upFeat, axis=3 )
feat = tf.stack(feat_ch, axis=3)
</code></pre>
<p>feat_ch being the [1,32,32,155], and upFeat being the tensor [1,32,32,27].
The idea here was to collapse the feat_ch tensor on the channel dimension, such that I get a list of 155 entries with 1,32,32. And then doing the same with upFeat, and then replace the first 27 of the feat_ch list with the 27 of the upFeat. Finally, stacking them up to get the [1,32,32,155] shaped tensor again (this time with the 27 first channels updated)</p>
<p>However, I am not sure if it does what I want. So I began to investigate what other alternatives to update.</p>
<p>Tensorflow has a method tensor_scatter_nd_update, which seems to be exactly what I wanted. However, I find it hard to wrap my head around. What I have tried so far is:</p>
<pre><code> i1, i2, i3, i4 = tf.meshgrid(tf.range(1),
tf.range(32), tf.range(32), tf.range(27) , indexing="ij") #shape [1,32,32,27]
feat = tf.tensor_scatter_nd_update(feat, i1, upFeat)
</code></pre>
<p>The idea here was to create a mesh grid of the same shape and in such a way that each element corresponds to an index of the feat that I wish to update. This does not work, however and throws the following:</p>
<pre><code>The inner -23 dimensions of output.shape=[1,32,32,155] must match the inner 1 dimensions of updates.shape=[1,32,32,27]: Shapes must be equal rank, but are 0 and 1
</code></pre>
<p>Am I understanding it wrong? Why does it not work? How would one update an N-D tensor?</p>
<p>Thanks</p>
|
<p>Use slice and <code>concat</code>:</p>
<pre><code>feat = tf.random.uniform([1, 32, 32, 155])
updates = tf.zeros([1, 32, 32, 27])
# `updates` goes first so it occupies channels 0:27; the remaining
# channels 27:155 are kept from `feat`
result = tf.concat((updates, feat[:, :, :, 27:]), -1)
</code></pre>
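<p>A quick sanity check of the result:</p>
<pre><code>print(result.shape)                                        # (1, 32, 32, 155)
print(tf.reduce_all(result[..., :27] == updates).numpy())  # True
</code></pre>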
|
python|tensorflow|keras|tensorflow2.0
| 1
|
373,663
| 67,173,820
|
Trouble Finding Spectrum Peaks on Python/ Google Colab
|
<p>I have a spectrum (of an oil sample) as a 2D array in a CSV file that I want to find the peaks for in wavelengths 600 - 1800 cm-1. I've tried scipy.signal.find_peaks, but that takes a 1D array and I have a 2D array with the wavelengths and corresponding peak values.
Any help would be appreciated since I'm a beginner at Python.</p>
<p>Edit: I also tried doing the following:</p>
<pre><code>from detecta import detect_peaks

ind = detect_peaks(df)
</code></pre>
<p>where df is the name of my array (which has two columns) and an error pops up: ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)</p>
|
<p><code>scipy.signal.find_peaks()</code> only takes a one-dimensional array containing the signal values. So you should be able to just select the intensity column in your DataFrame, like so:</p>
<pre><code># note that find_peaks returns an array of peak indices, and a dictionary of properties
ind, properties = scipy.signal.find_peaks(df["name of column with peaks"])
</code></pre>
<p>Then if you only want the peaks, select the rows using the ind array you just created:</p>
<pre><code>peak_df = df[df.index.isin(ind)]
</code></pre>
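<p>To restrict the search to the 600-1800 cm-1 window, you could filter first (the column names below are placeholders for your wavenumber and intensity columns):</p>
<pre><code>import scipy.signal

window = df[(df["wavenumber"] >= 600) & (df["wavenumber"] <= 1800)].reset_index(drop=True)
ind, properties = scipy.signal.find_peaks(window["intensity"])
peaks = window.iloc[ind]
</code></pre>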
|
python|matlab|numpy|data-analysis|spectra
| 0
|
373,664
| 67,033,501
|
How to plot a scatter plot over a map separated by divisions?
|
<p>I want to plot a scatter plot over a map separated by divisions. So far I have tried the following.</p>
<pre><code>import os
import matplotlib.pyplot as plt
import pandas as pd
import geopandas as gpd
import numpy as np
fPath = 'shape/bgd_adm_bbs_20201113_SHP/bgd_admbnda_adm2_bbs_20201113.shp'
bgd = gpd.read_file(fPath)
ax = bgd.plot(figsize=(15,15),column='ADM1_EN', legend=True)
</code></pre>
<p><strong>bgd_admbnda_adm2_bbs_20201113.shp</strong> has been found in <a href="https://github.com/Shisir/Chemistry/tree/main/shape" rel="nofollow noreferrer">github</a>. It produces this <a href="https://github.com/Shisir/Chemistry/blob/main/download.png" rel="nofollow noreferrer">figure</a>.</p>
<p><code>Here, there are 8 divisions 'Barishal', 'Chattogram', 'Dhaka', 'Khulna', 'Rajshahi', 'Rangpur', 'Sylhet', 'Mymensingh'.</code> For every division, there are some <strong>numeric values</strong>(not latitude, longitude values). E.g. for <strong>Dhaka division</strong> [73.13 77.64 74.32 82.48 84.21 88.23 89.90]. For your convenience, I have attached the files in <a href="https://github.com/Shisir/Chemistry/blob/main/health.xlsx" rel="nofollow noreferrer">github</a>.
Now, I split the values based on value range. E.g. i) 70-80: [73.13 77.64 74.32], ii) 80-90: [82.48 84.21 88.23 89.90]. Now, I want to draw a scatter plot of two categories of values on any places of Dhaka division with <strong>two colors</strong> such as <a href="https://github.com/Shisir/Chemistry/blob/main/download_LI.jpg" rel="nofollow noreferrer">this image</a>. I have attached another <strong>expected output image</strong> for your reference <a href="https://i.stack.imgur.com/1oAbc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1oAbc.png" alt="Expected output" /></a>.</p>
<p>Thanks in advance.</p>
|
<p>This is as simple as plotting data on same axis</p>
<ul>
<li>have data of healthcare facilities, then get GIS data for these facilities</li>
<li>get map GEOJSON and plot on axis</li>
<li>scatter data on same axis, using healthcare facility type as color</li>
</ul>
<pre><code>import requests, io
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
# get some data of healthcare facilities
searchendpoint = "https://directory.spineservices.nhs.uk/ORD/2-0-0/organisations"
# get all healthcare facilities in Herefordshire
dfhc = pd.concat([pd.json_normalize(requests
.get(searchendpoint, params={"PostCode":f"HR{i}","Status":"Active"})
.json()["Organisations"])
for i in range(1,10)]).reset_index(drop=True)
# get geo data for postcodes
dfgeo = (pd.json_normalize(requests.post("http://api.postcodes.io/postcodes",
json={"postcodes":dfhc.PostCode.unique().tolist()[0:100]}).json()["result"])
.rename(columns={"result.postcode":"PostCode","result.longitude":"lng","result.latitude":"lat"})
.loc[:,["PostCode","lat","lng"]]
)
dfdata = dfhc.merge(dfgeo, on="PostCode", how="inner")
# going to use as color, so make if categorical so can get codes
dfdata["PrimaryRoleId"] = pd.Categorical(dfdata["PrimaryRoleId"])
fig, ax = plt.subplots(figsize=[14,6])
# get map of regions
df = gpd.read_file(io.StringIO(requests.get("https://martinjc.github.io/UK-GeoJSON/json/eng/msoa_by_lad/topo_E06000019.json").text))
df.plot(ax=ax)
# scatter data on top of region map
ax.scatter(x=dfdata["lng"],y=dfdata["lat"], s=50, c=dfdata["PrimaryRoleId"].cat.codes)
</code></pre>
<h3>Using same data set</h3>
<pre><code>import numpy as np
import geopandas as gpd
import matplotlib.pyplot as plt
import matplotlib
df = gpd.read_file("bgd_admbnda_adm2_bbs_20201113.shp")
fig, ax = plt.subplots(figsize=[8,8])
df.plot(ax=ax, alpha=0.5, edgecolor='k')
# some data that can be plotted on centroid
df["val"] = np.random.randint(1,100,len(df))
# use a discrete
cmap = plt.cm.get_cmap('jet', 5)
# scatter data based on co-ords of centroid
sc = ax.scatter(x=df.centroid.x, y=df.centroid.y, s=50, c=df["val"], cmap=cmap)
plt.colorbar(sc)
</code></pre>
<p><a href="https://i.stack.imgur.com/dam0I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dam0I.png" alt="enter image description here" /></a></p>
|
pandas|dataframe|matplotlib|geopandas
| 2
|
373,665
| 67,157,966
|
Create boolean flag in pandas from signal's crossings
|
<p>I would like to create a flag with a function and apply it to one column in a pandas dataframe.
The intention of the function is to set the value to 1 when the signal crosses upwards over -1 and reset the value to 0 when the signal crosses 1 downwards.
Here is my code example:
I just can't get the function to work.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
x = np.arange(0, 10, 0.01)
x2 = np.arange(0, 20, 0.02)
sin1 = np.sin(x)
sin2 = np.sin(x2)
x2 /= 2
sin3 = sin1 + sin2
df = pd.DataFrame(sin3)
#name signal column
df.columns = ['signal']
df.signal.plot()
def my_flag(x):
#cross over -1
ok1 = (x.iloc[-1] > -1)*1
ok2 = (x.iloc[-2] < -1)*1
activate = (ok1*ok2) > 0.5
if activate:
flag_activate = 1
# OFF
#cross under 1
ok3 = (x.iloc[-1] <1)*1
ok4 = (x.iloc[-2] > 1)*1
inactivate = (ok3*ok4) > 0.5
if inactivate:
flag_activate = 0
# # add to df
return flag_activate
df['the_flag'] = df['signal'].apply(my_flag)
#I have set the flag to 0 for plotting purposes for demo,
# should be replaced when my_flag function works
df['the_flag'] = 0
fig, (ax1,ax2) = plt.subplots(2)
ax1.plot(df['signal'])
ax1.set_title('signal')
y1 = -1
y2 = 1
ax1.axhline(y1,color='r')
</code></pre>
<p>I have made a "cartoon picture" of what I would like the flag to llook like for a sine signal:
<a href="https://i.stack.imgur.com/Dkoeo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dkoeo.png" alt="enter image description here" /></a></p>
|
<p>We can first detect the <code>-1</code> and <code>+1</code> crossings whilst considering they should cross-up and cross-down, respectively. This can be done via shifting the signal to left and right by 1 and comparing against <code>-/+ 1</code> with the crossing behaviour in mind:</p>
<pre class="lang-py prettyprint-override"><code>neg_1_crossings = np.where((sin3[:-1] < -1) & (sin3[1:] > -1))[0]
pos_1_crossings = np.where((sin3[:-1] > +1) & (sin3[1:] < +1))[0]
</code></pre>
<p>For <code>-1</code> cross-up's: First mask imposes <em>previous</em> values be less than <code>-1</code>, second one imposes <em>next</em> values be greater then <code>-1</code>. Similar for the <code>+1</code>, except operators flipped.</p>
<p>Now we have:</p>
<pre class="lang-py prettyprint-override"><code>>>> neg_1_crossings
array([592], dtype=int64)
>>> pos_1_crossings
array([157, 785], dtype=int64)
</code></pre>
<p>I'd run <code>for</code> loops here to get the flag:</p>
<pre class="lang-py prettyprint-override"><code>flag = np.zeros_like(sin3)
for neg_cross in neg_1_crossings:
# a `neg_cross` raises the flag
flag[neg_cross:] = 1
for pos_cross in pos_1_crossings:
if pos_cross > neg_cross:
# once we hit a `pos_cross` later on, restrict the flag's ON
# periods to be between the `neg_cross` and this `pos_cross`
flag[pos_cross:] = 0
# we are done with this `neg_cross`
break
</code></pre>
<p>which gives</p>
<p><a href="https://i.stack.imgur.com/Tmi4R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tmi4R.png" alt="signal+flag" /></a></p>
<p>Overall:</p>
<pre class="lang-py prettyprint-override"><code>def get_flag(col):
"""
`col` is a pd.Series
"""
# signal in numpy domain; also its shifted versions
signal = col.to_numpy()
sig_shifted_left = signal[1:]
sig_shifted_right = signal[:-1]
# detect crossings
neg_1_crossings = np.where((sig_shifted_right < -1) & (sig_shifted_left > -1))[0]
pos_1_crossings = np.where((sig_shifted_right > +1) & (sig_shifted_left < +1))[0]
# form the `flag` signal
flag = np.zeros_like(signal)
for neg_cross in neg_1_crossings:
# a `neg_cross` raises the flag
flag[neg_cross:] = 1
for pos_cross in pos_1_crossings:
if pos_cross > neg_cross:
# once we hit a `pos_cross` later on, restrict the flag's ON
# periods to be between the `neg_cross` and this `pos_cross`
flag[pos_cross:] = 0
# we are done with this `neg_cross`
break
return flag
</code></pre>
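<p>Applying it to the dataframe from the question is then a one-liner (assuming the column is named <code>signal</code> as above):</p>
<pre class="lang-py prettyprint-override"><code># replaces the broken apply-based attempt from the question
df['the_flag'] = get_flag(df['signal'])
</code></pre>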
|
python|pandas|function
| 2
|
373,666
| 67,046,540
|
Plot a bar plot by using Seaborn
|
<p>I am new to data visualization. I am practicing Seaborn and I am trying to plot a barplot with this dataframe. I want the chart to have 3 bars for each symbol; however, the output has only 1 bar per symbol. May I know how to fix it?</p>
<p>Part of the DataFrame...</p>
<pre><code> returns_7d returns_30d returns_ytd
symbol
TDOC -0.210839 -17.712095 -3.922423
EXAS -4.649067 -6.439275 -1.415680
PACB -2.953760 11.886232 37.815711
REGN 0.465364 5.803325 -0.629814
TWST 6.707956 3.619967 10.4043
</code></pre>
<p>The code like this:</p>
<pre><code>import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Change the style of the figure to the "dark" theme
sns.set_style("darkgrid")
plt.figure(figsize=(12,6))
plt.title('YTD Returns')
sns.barplot(x=returns_all.index,y=returns_all['returns_7d'],color='b',edgecolor='w',label='returns_7d')
sns.barplot(x=returns_all.index,y=returns_all['returns_30d'],color='r',edgecolor='w',label='returns_30d')
sns.barplot(x=returns_all.index,y=returns_all['returns_ytd'],color='g',edgecolor='w',label='returns_ytd')
plt.xlabel('symbol', fontsize=11)
plt.ylabel('%', fontsize=11)
plt.xticks(rotation = 90)
plt.legend()
plt.show()
</code></pre>
<p>Output like this:</p>
<p><a href="https://i.stack.imgur.com/WQEkJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WQEkJ.png" alt="enter image description here" /></a></p>
|
<p>I think <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">pandas.DataFrame.plot()</a> is all you need.</p>
<pre class="lang-py prettyprint-override"><code>df.plot(kind='bar')
</code></pre>
<p><a href="https://i.stack.imgur.com/2ATa8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ATa8.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|seaborn
| 1
|
373,667
| 66,821,212
|
Web scraping data table with python Selenium, BeautifulSoup and Pandas failed
|
<p>I am trying to web scrape/extract the table on the following website using Python. (This is a dynamic table, so I can't just save the HTML in an HTML file, since it will get updated every so often.)
<a href="https://www.eib.org/en/about/procurement/index.htm" rel="nofollow noreferrer">https://www.eib.org/en/about/procurement/index.htm</a></p>
<p>My goal is to turn the table into a dataframe.
I used:</p>
<ol>
<li>Selenium and BeautifulSoup, which both return an empty list</li>
<li>Pandas with pd.read_html which returns "no tables found" error</li>
</ol>
<p>Any ideas why this is happening? And how can I fix this?</p>
<p>Here's my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver import ActionChains
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup

URL = 'http://www.eib.org/en/about/procurement/index.htm'
driver = webdriver.Firefox(executable_path='/Users/***********')
driver.get(URL)
r = requests.get(URL)
soup = BeautifulSoup(r.content, "lxml")
page = driver.page_source
page_soup = BeautifulSoup(page, 'html.parser')

# Using beautiful soup
elements = soup.findAll("tr")
print(elements)
for e in elements:
    dr = e.find("td")
    print(dr.text)

# Using selenium
elems = driver.find_elements_by_xpath("//td")
for elem in elems:
    e = elem.find_element_by_tag_name("a")
    print(e.text)

# Using pandas
pd.read_html(URL)
</code></pre>
<p>Thanks!</p>
|
<p>Try the URL that actually has the table in the response. You can find it by searching the Network tab in the browser's Dev Tools:</p>
<pre><code>import pandas as pd
url = 'https://www.eib.org/tools/jsp/calls.jsp?&lang=en&language=en&l=en&url=/about/procurement/index.htm&forceLanguage=en&_=1616778335822'
df = pd.read_html(url)[0]
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>print(df)
Date Title Status
0 26/03/2021 Education Buildings in Ireland — Energy Effi... On going
1 18/03/2021 Public Procurement Expertise and Support to ov... Closed
2 18/03/2021 Advisory Support to Project Advisory Support U... Closed
3 17/03/2021 Advanced Case Management System (e-CMS) for th... Closed
4 16/03/2021 Technical Assistance to Support the Implementa... On going
.. ... ... ...
524 01/09/2005 Maintenance contract for the EIB's parkland an... Closed
525 05/08/2005 Cleaning services for EIB premises Closed
526 15/07/2005 Network maintenance engineering and acquisitio... Closed
527 13/07/2005 Contract relating to removals Closed
528 09/07/2003 Extension of call for tenders for translation/... Closed
[529 rows x 3 columns]
</code></pre>
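<p>As a side note, the trailing <code>_=1616778335822</code> query parameter looks like a cache-busting timestamp appended by the page's JavaScript; the request should also work with it dropped, which keeps the URL from going stale:</p>
<pre><code>url = 'https://www.eib.org/tools/jsp/calls.jsp?&lang=en&language=en&l=en&url=/about/procurement/index.htm&forceLanguage=en'
df = pd.read_html(url)[0]
</code></pre>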
|
python|pandas|selenium|selenium-webdriver|beautifulsoup
| 0
|
373,668
| 66,990,389
|
TensorFlow Lite: Update existing model or add new one in deployed app
|
<p>I'm creating a mobile app (with Flutter, for more details) that will need to do some offline inference using TensorFlow Lite models. The fact this needs to be offline means the models need to be shipped with the APP.</p>
<p>I know how to deploy the models with the APP (see <a href="https://itnext.io/working-with-tensorflow-lite-in-flutter-f00d733a09c3" rel="nofollow noreferrer">this tutorial</a>, for example; there are plenty out there) but, as these might change over time - for example re-trained for better accuracy with new data or even new models added to analyse different things - it will be good to find a way of doing that without the need to update the whole application.</p>
<p>So far, the only options I have found for the dynamic update/addition of models require the APP connecting to an external service hosting the models, like Firebase, but that's not good enough when the APP needs to run offline.</p>
<p>Do you have any suggestion on how to do this?</p>
<p>Many thanks,
Diego</p>
|
<p>I'm sorry, but are you saying you want to update the app (one part of the app) without internet connection (or network connection)? The act of updating the app from the Play Store/ App Store requires an internet connection.</p>
<ul>
<li><p><strong>Manually install new application updates.</strong> This updates the entire app, not just the model file, but that is fine; you just don't have to make any code changes.</p>
<ul>
<li>On Android: Install a new APK <code>adb install</code>, and</li>
<li>On iOS: use Adhoc distribution</li>
</ul>
</li>
<li><p><strong>Model file picker and manual model selection:</strong> Alternatively, you can add a file picker in the application to select a local model file. This model file will need to be on the device, which you can copy onto the device manually (e.g. <code>adb push model.tflite</code> on Android, Airdrop on iOS).</p>
</li>
<li><p><strong>Local network hosted model:</strong> Or you can download the model from the local network automatically (still "offline"/ no internet connection).</p>
</li>
</ul>
|
flutter|tensorflow-lite
| 0
|
373,669
| 66,986,927
|
Python : Flatten xml to csv with nested child tags
|
<p>There are multiple XML files that I would like to flatten; I am looking for a generic function or logic to convert the xml to a flat file. Most of the answers include hard-coded tags. The closest one is <a href="https://stackoverflow.com/questions/56343492/python-flatten-xml-to-csv-with-parent-tag-repeated-in-child">Python : Flatten xml to csv with parent tag repeated in child</a>, but it still has a hard-coded solution.
For the below input xml:</p>
<pre class="lang-xml prettyprint-override"><code><root>
<child> child-val </child>
<child2> child2-val2 </child2>
<anotherchild>
<childid> another child 45</childid>
<childname> another child name </childname>
</anotherchild>
<group>
<groupid> groupid-123</groupid>
<grouplist>
<groupzone>
<groupname>first </groupname>
<groupsize> 4</groupsize>
</groupzone>
<groupzone>
<groupname>second </groupname>
<groupsize> 6</groupsize>
</groupzone>
<groupzone>
<groupname> third </groupname>
<groupsize> 8 </groupsize>
</groupzone>
</grouplist>
</group>
<secondgroup>
<secondgroupid> secondgroupid-42 </secondgroupid>
<secondgrouptitle> second group title </secondgrouptitle>
<secondgrouplist>
<secondgroupzone>
<secondgroupsub>
<secondsub>v1</secondsub>
<secondsubid>12</secondsubid>
</secondgroupsub>
<secondgroupname> third </secondgroupname>
<secondgroupsize> 4</secondgroupsize>
</secondgroupzone>
<secondgroupzone>
<secondgroupsub>
<secondsub>v2</secondsub>
<secondsubid>1</secondsubid>
</secondgroupsub>
<secondgroupname>fourth </secondgroupname>
<secondgroupsize> 6</secondgroupsize>
</secondgroupzone>
<secondgroupzone>
<secondgroupsub>
<secondsub>v3</secondsub>
<secondsubid>45</secondsubid>
</secondgroupsub>
<secondgroupname> tenth </secondgroupname>
<secondgroupsize> 10 </secondgroupsize>
</secondgroupzone>
</secondgrouplist>
</secondgroup>
<child3> val3 </child3>
</root>
</code></pre>
<p>I tried using the package <a href="https://pypi.org/project/pandas-read-xml/" rel="nofollow noreferrer">pandas-read-xml</a> and got most of the values, but the anotherchild tag values are showing up in one column (anotherchild) instead of anotherchild|childid and anotherchild|childname. If possible, suggest a generic logic to convert an xml to a flat file.</p>
<pre class="lang-py prettyprint-override"><code>import pandas_read_xml as pdx
df = pdx.read_xml(xml_content, ['root'])
fully_fatten_df = pdx.fully_flatten(df)
fully_fatten_df.to_csv("stack.csv", index=False)
</code></pre>
<p>Output csv</p>
<pre><code>anotherchild,child,child2,child3,group|groupzone|groupname,group|groupzone|groupsize,secondgroup|secondgroupzone|secondgroupname,secondgroup|secondgroupzone|secondgroupsize,secondgroup|secondgroupzone|secondgroupsub|secondsub,secondgroup|secondgroupzone|secondgroupsub|secondsubid
,child-val,child2-val2,val3,,,third,4,v1,12
,child-val,child2-val2,val3,,,fourth,6,v2,1
,child-val,child2-val2,val3,,,tenth,10,v3,45
,child-val,child2-val2,val3,first,4,,,,
,child-val,child2-val2,val3,second,6,,,,
,child-val,child2-val2,val3,third,8,,,,
another child 45,child-val,child2-val2,val3,,,,,,
another child name,child-val,child2-val2,val3,,,,,,
,child-val,child2-val2,val3,,,,,,
,child-val,child2-val2,val3,,,,,,
,child-val,child2-val2,val3,,,,,,
</code></pre>
|
<p>Normally the <em>xml</em> nodes that hold a value should become the corresponding columns. In your <em>xml</em> example, "child", "child2", "childid", and so on should be columns.</p>
<p>Based on the above <em>xml</em> I've made this piece of code that should be sufficiently generic to accommodate similar examples.</p>
<pre><code>import pandas as pd
import tabulate
import xml.etree.ElementTree as Xet

def getData(root, rows, columns, rowcount, name=None):
    if name != None:
        # we construct the column names like this so that we don't risk having the same
        # column on different nodes that should repeat; for example: a node named "name"
        # could be under group and secondgroup and they shouldn't be the same column
        name = "{0}{1}{2}".format(name, "|", root.tag)
    else:
        name = root.tag
    for item in root:
        if len(item) == 0:
            colName = "{0}{1}{2}".format(name, "|", item.tag)
            # colName = item.tag  # uncomment to use the short tag instead of the full column name; ex. of full name: root|group|grouplist|groupzone|groupsize
            if not colName in columns:
                columns.append(colName)  # save the column to a list
                rowcount.append(0)       # save the row on which we add the value for this column
                # add the value to the row - this will always happen on row 0
                rows[rowcount[columns.index(colName)]].update({colName: item.text.strip()})
            else:
                repeatPosition = columns.index(colName)                  # get the column position for the repeated item
                rowcount[repeatPosition] = rowcount[repeatPosition] + 1  # increase row count
                if len(rows) <= max(rowcount):
                    rows.append({})  # add a new row based on row count
                rows[rowcount[repeatPosition]].update({colName: item.text.strip()})  # add the value on the new row
        getData(item, rows, columns, rowcount, name)  # recursive call to walk through each list of elements

xmlParse = Xet.parse('example.xml')
root = xmlParse.getroot()
rows = [{}]    # adding at least one row from the start; additional rows are added as we go along
columns = []   # holds the names of the columns
rowcount = []  # holds the row on which we add each element's value
getData(root, rows, columns, rowcount)

df = pd.DataFrame(rows, columns=columns)
print(df)
df.to_csv('parse.csv')
</code></pre>
<p>The end result after running this code looks like this:
<a href="https://i.stack.imgur.com/lkN6g.png" rel="nofollow noreferrer">csv result</a></p>
<p>And this is the plain csv:</p>
<pre><code>,root|child,root|child2,root|anotherchild|childid,root|anotherchild|childname,root|group|groupid,root|group|grouplist|groupzone|groupname,root|group|grouplist|groupzone|groupsize,root|secondgroup|secondgroupid,root|secondgroup|secondgrouptitle,root|secondgroup|secondgrouplist|secondgroupzone|secondgroupsub|secondsub,root|secondgroup|secondgrouplist|secondgroupzone|secondgroupsub|secondsubid,root|secondgroup|secondgrouplist|secondgroupzone|secondgroupname,root|secondgroup|secondgrouplist|secondgroupzone|secondgroupsize,root|child3
0,child-val,child2-val2,another child 45,another child name,groupid-123,first,4,secondgroupid-42,second group title,v1,12,third,4,val3
1,,,,,,second,6,,,v2,1,fourth,6,
2,,,,,,third,8,,,v3,45,tenth,10,
</code></pre>
<p>Hopefully this should get you started in the right direction.</p>
|
python-3.x|xml|pandas|logic
| 0
|
373,670
| 66,962,022
|
How to get values from other rows based on multiple conditions in Pandas?
|
<p>I have the following df -</p>
<pre><code> +--------+--------+--------------------+------------+--------------------+----------+----------+
| GameID | TeamID | Team | OpponentID | Opponent | Location | score |
+--------+--------+--------------------+------------+--------------------+----------+----------+
| 1 | 1 | Alabama | 2 | Jacksonville State | H | 1.098633 |
+--------+--------+--------------------+------------+--------------------+----------+----------+
| 1 | 2 | Jacksonville State | 1 | Alabama | V | 0.756562 |
+--------+--------+--------------------+------------+--------------------+----------+----------+
| 2 | 3 | UAB | 4 | Alcorn State | H | 1.270638 |
+--------+--------+--------------------+------------+--------------------+----------+----------+
| 2 | 4 | Alcorn State | 3 | UAB | V | 0.682791 |
+--------+--------+--------------------+------------+--------------------+----------+----------+
</code></pre>
<p>Each row represnts one of two teams results from a distinct GameID. My goal is to have a final df that looks like this</p>
<pre><code>+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
| GameID | TeamID | Team | OpponentID | Opponent | Location | score | opponents score |
+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
| 1 | 1 | Alabama | 2 | Jacksonville State | H | 1.098633 | 0.756562 |
+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
| 1 | 2 | Jacksonville State | 1 | Alabama | V | 0.756562 | 1.098633 |
+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
| 2 | 3 | UAB | 4 | Alcorn State | H | 1.270638 | 0.682791 |
+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
| 2 | 4 | Alcorn State | 3 | UAB | V | 0.682791 | 1.270638 |
+--------+--------+--------------------+------------+--------------------+----------+----------+-----------------+
</code></pre>
<p>I am stuck on how to look up values that match criteria with different column names. Thanks!</p>
|
<p>You can make use of <code>merge()</code> method:</p>
<pre><code>resultdf=df.merge(df[['GameID','OpponentID','score']], left_on=['GameID','TeamID'], right_on=['GameID','OpponentID'], how='left')
</code></pre>
<p>Now make use of <code>drop()</code> method:</p>
<pre><code>result.drop(columns=['OpponentID_y'])
</code></pre>
<p>Finally make use of <code>rename()</code> method:</p>
<pre><code>result=result.rename(columns={'OpponentID_x':'OpponentID','score_x':'score','score_y':'opponents score'})
</code></pre>
<p>Now if you print <code>result</code> you will get your desired output</p>
|
python|pandas
| 1
|
373,671
| 66,846,030
|
TypeError: linear(): argument 'input' (position 1) must be Tensor, not str
|
<p>So I've been trying to work on an example of BERT that I found on GitHub, as it's the first time I'm trying to use BERT and see how it works. The repository I'm working with is the following: <a href="https://github.com/prateekjoshi565/Fine-Tuning-BERT/blob/master/Fine_Tuning_BERT_for_Spam_Classification.ipynb" rel="nofollow noreferrer">https://github.com/prateekjoshi565/Fine-Tuning-BERT/blob/master/Fine_Tuning_BERT_for_Spam_Classification.ipynb</a></p>
<p>I'm using a different dataset; however, I'm getting the issue "TypeError: linear(): argument 'input' (position 1) must be Tensor, not str" and honestly I don't know what I'm doing wrong. Is there anyone that could help me?</p>
<p>The code I've been using is the following:</p>
<pre><code># convert class weights to tensor
weights = torch.tensor(class_wts, dtype=torch.float)
weights = weights.to(device)

# loss function
cross_entropy = nn.NLLLoss(weight=weights)

# number of training epochs
epochs = 10

def train():
    model.train()
    total_loss, total_accuracy = 0, 0
    # empty list to save model predictions
    total_preds = []
    # iterate over batches
    for step, batch in enumerate(train_dataloader):
        # progress update after every 50 batches.
        if step % 50 == 0 and not step == 0:
            print(' Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader)))
        # push the batch to gpu
        batch = [r.to(device) for r in batch]
        sent_id, mask, labels = batch
        # clear previously calculated gradients
        model.zero_grad()
        # get model predictions for the current batch
        preds = model(sent_id, mask)
        # compute the loss between actual and predicted values
        loss = cross_entropy(preds, labels)
        # add on to the total loss
        total_loss = total_loss + loss.item()
        # backward pass to calculate the gradients
        loss.backward()
        # clip the gradients to 1.0. It helps in preventing the exploding gradient problem
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        # update parameters
        optimizer.step()
        # model predictions are stored on GPU. So, push it to CPU
        preds = preds.detach().cpu().numpy()
        # append the model predictions
        total_preds.append(preds)
    # compute the training loss of the epoch
    avg_loss = total_loss / len(train_dataloader)
    # predictions are in the form of (no. of batches, size of batch, no. of classes).
    # reshape the predictions in form of (number of samples, no. of classes)
    total_preds = np.concatenate(total_preds, axis=0)
    # returns the loss and predictions
    return avg_loss, total_preds

def evaluate():
    print("\nEvaluating...")
    # deactivate dropout layers
    model.eval()
    total_loss, total_accuracy = 0, 0
    # empty list to save the model predictions
    total_preds = []
    # iterate over batches
    for step, batch in enumerate(val_dataloader):
        # Progress update every 50 batches.
        if step % 50 == 0 and not step == 0:
            # Calculate elapsed time in minutes.
            elapsed = format_time(time.time() - t0)
            # Report progress.
            print(' Batch {:>5,} of {:>5,}.'.format(step, len(val_dataloader)))
        # push the batch to gpu
        batch = [t.to(device) for t in batch]
        sent_id, mask, labels = batch
        # deactivate autograd
        with torch.no_grad():
            # model predictions
            preds = model(sent_id, mask)
            # compute the validation loss between actual and predicted values
            loss = cross_entropy(preds, labels)
            total_loss = total_loss + loss.item()
            preds = preds.detach().cpu().numpy()
            total_preds.append(preds)
    # compute the validation loss of the epoch
    avg_loss = total_loss / len(val_dataloader)
    # reshape the predictions in form of (number of samples, no. of classes)
    total_preds = np.concatenate(total_preds, axis=0)
    return avg_loss, total_preds

# set initial loss to infinite
best_valid_loss = float('inf')

# empty lists to store training and validation loss of each epoch
train_losses = []
valid_losses = []

# for each epoch
for epoch in range(epochs):
    print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
    # train model
    train_loss, _ = train()
    # evaluate model
    valid_loss, _ = evaluate()
    # save the best model
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'saved_weights.pt')
    # append training and validation loss
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)
    print(f'\nTraining Loss: {train_loss:.3f}')
    print(f'Validation Loss: {valid_loss:.3f}')
</code></pre>
<p>the traceback i receive is:</p>
<pre><code>Epoch 1 / 10
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-105-c5138ddf6b25> in <module>()
12
13 #train model
---> 14 train_loss, _ = train()
15
16 #evaluate model
5 frames
<ipython-input-103-3236a6e339dd> in train()
24
25 # get model predictions for the current batch
---> 26 preds = model(sent_id, mask)
27
28 # compute the loss between actual and predicted values
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
<ipython-input-99-9ebdcf410f97> in forward(self, sent_id, mask)
28 _, cls_hs = self.bert(sent_id, attention_mask=mask)
29
---> 30 x = self.fc1(cls_hs)
31
32 x = self.relu(x)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)
92
93 def forward(self, input: Tensor) -> Tensor:
---> 94 return F.linear(input, self.weight, self.bias)
95
96 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755
TypeError: linear(): argument 'input' (position 1) must be Tensor, not str
</code></pre>
|
<p>I've been working on this repo too.
Motivated by the answer provided at this <a href="https://stackoverflow.com/questions/65082243/dropout-argument-input-position-1-must-be-tensor-not-str-when-using-bert">link</a>: there is a class, probably named Bert_Arch, that inherits from nn.Module, and this class has an overridden method named forward. Inside the forward method, just add the parameter 'return_dict=False' to the self.bert() method call, like so:</p>
<pre><code>_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)
</code></pre>
<p>This worked for me.</p>
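<p>For context, here is a sketch of what the patched class might look like; the class and layer names follow the linked notebook (and the traceback above) and may differ slightly in your code:</p>
<pre><code>import torch.nn as nn

class BERT_Arch(nn.Module):
    def __init__(self, bert):
        super(BERT_Arch, self).__init__()
        self.bert = bert
        self.fc1 = nn.Linear(768, 512)       # layer sizes as in the linked notebook
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(512, 2)
        self.softmax = nn.LogSoftmax(dim=1)  # pairs with the nn.NLLLoss in the question

    def forward(self, sent_id, mask):
        # return_dict=False makes the call return plain tensors; newer
        # transformers versions default to returning a ModelOutput object,
        # and tuple-unpacking that yields string keys, hence the TypeError
        _, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)
        x = self.relu(self.fc1(cls_hs))
        return self.softmax(self.fc2(x))
</code></pre>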
|
python|pytorch|bert-language-model
| 9
|
373,672
| 66,935,613
|
Problem with combining multiple excel files in python pandas
|
<p>I am quite new to Python programming. I need to combine 1000+ files into one file. Each file has 3 sheets in it and I need to get data only from sheet2 to make a final Excel file. I am facing a problem picking a value from a specific cell of sheet2 in each Excel file and creating a column from it. Python is picking the value from the first file only and creating the column from that.</p>
<pre class="lang-py prettyprint-override"><code> df = pd.DataFrame()
for file in files:
if file.endswith('.xlsm'):
df = pd.read_excel(file, sheet_name=1, header=None)
df['REPORT_NO'] = df.iloc[1][4] #Report Number
df['SUPPLIER'] = df.iloc[2][4] #Supplier
df['REPORT_DATE'] = df.iloc[0][4] #Report Number
df2 = df2.dropna(thresh=15)
df2 = df.append(df, ignore_index=True)
df = df.reset_index()
del df['index']
df2.to_excel('FINAL_FILES.xlsx')
</code></pre>
<p>How can I solve this issue so python can take from each excel and put the information on right rows.</p>
|
<p>I. <code>df.iloc[2][4]</code> refers to the row at index 2 and the column at index 4 of the sheet you imported. You have imported with <code>sheet_name=1</code> and never read a different sheet, though you mentioned all of the <code>.xlsm</code> files have 3 sheets.</p>
<p>II. <em>Your scoping could be wrong. Why define <code>df</code> outside of the loop? It will change per file, so there is no need for an external one. All info from the loop should be put into your <code>df2</code> before the next iteration of the loop.</em></p>
<p>III. Have you checked if <code>append</code> is adding a row or a column?<br />
Even though</p>
<pre><code>df['REPORT_NO'] = df.iloc[1][4]    # Report Number
df['SUPPLIER'] = df.iloc[2][4]     # Supplier
df['REPORT_DATE'] = df.iloc[0][4]  # Report Date
</code></pre>
<p>are written as columns they have Report Number/Supplier/Report Date repeated for every row in that column.</p>
<p>When you use <code>df2 = df.append(df, ignore_index=True)</code> check the output. It might not be appending in the way you intend.</p>
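<p>Putting those points together, a corrected sketch of the loop (assuming <code>sheet_name=1</code> really is the sheet you want and the cell positions from your code are right):</p>
<pre><code>frames = []
for file in files:
    if file.endswith('.xlsm'):
        df = pd.read_excel(file, sheet_name=1, header=None)
        df['REPORT_NO'] = df.iloc[1][4]
        df['SUPPLIER'] = df.iloc[2][4]
        df['REPORT_DATE'] = df.iloc[0][4]
        frames.append(df.dropna(thresh=15))

# concatenate once, after the loop, so no file overwrites the previous one
df2 = pd.concat(frames, ignore_index=True)
df2.to_excel('FINAL_FILES.xlsx')
</code></pre>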
|
python|excel|pandas
| 0
|
373,673
| 66,873,597
|
Look up the matching element from another data frame and return its id- python
|
<p>I have an orders dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>items</th>
<th>chat_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>curd,vada,rice</td>
<td>74374374h4473</td>
</tr>
<tr>
<td>idly,sambar</td>
<td>7949759459h34</td>
</tr>
</tbody>
</table>
</div>
<p>I have another unique menu items data frame with its own particular id:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>items</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>idly</td>
</tr>
<tr>
<td>2</td>
<td>vada</td>
</tr>
<tr>
<td>3</td>
<td>rice</td>
</tr>
<tr>
<td>4</td>
<td>curd</td>
</tr>
<tr>
<td>5</td>
<td>sambar</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to match the elements and return their ids alongside the chat_id from the orders data frame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>chat_id</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>74374374h4473</td>
<td>4</td>
</tr>
<tr>
<td>74374374h4473</td>
<td>2</td>
</tr>
<tr>
<td>74374374h4473</td>
<td>3</td>
</tr>
<tr>
<td>7949759459h34</td>
<td>1</td>
</tr>
<tr>
<td>7949759459h34</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
|
<p>If you have dataframe <code>df_orders</code>:</p>
<pre><code> items chat_id
0 curd,vada,rice 74374374h4473
1 idly,sambar 7949759459h34
</code></pre>
<p>and dataframe <code>df_menu</code>:</p>
<pre><code> id items
0 1 idly
1 2 vada
2 3 rice
3 4 curd
4 5 sambar
</code></pre>
<p>Then:</p>
<pre><code>df_orders["items"] = df_orders["items"].str.split(",")
df_orders = df_orders.explode("items")
print(df_orders.merge(df_menu, on="items")[["chat_id", "id"]])
</code></pre>
<p>Prints:</p>
<pre><code> chat_id id
0 74374374h4473 4
1 74374374h4473 2
2 74374374h4473 3
3 7949759459h34 1
4 7949759459h34 5
</code></pre>
|
python|pandas|dataframe|data-science
| 0
|
373,674
| 67,133,802
|
Combining csv files columns together Pandas Python
|
<p>I am trying to combine <code>file1-3.csv</code> so that I can get the expected result. I want to combine all the rows together from all 3 files, but keep the 1st column only once, as it is the same in all 3 files. How can I do this with pandas?</p>
<p>Code:</p>
<pre><code>import pandas as pd
file1 = pd.read_csv('STDOutputs_Q1.csv')
file2 = pd.read_csv('STDOutputs_Q2.csv')
file3 = pd.read_csv('STDOutputs_Q3.csv')
</code></pre>
<p>Inside file1.csv</p>
<pre><code>element,LNPT,SNPT
[ 2. 2. 30.],89,60
[ 2. 2. 40.],999,77
</code></pre>
<p>Inside file2.csv</p>
<pre><code>element,MxU,MxD,TT
[ 2. 2. 30.],17127,-3,0
[ 2. 2. 40.],17141,-40,2
</code></pre>
<p>Inside file3.csv</p>
<pre><code>element,TNT
[ 2. 2. 30.],1000
[ 2. 2. 40.],30
</code></pre>
<p>Expected Results:</p>
<pre><code>element,LNPT,SNPT,MxU,MxD,TT,TNT
[ 2. 2. 30.],89,60,17127,-3,0,1000
[ 2. 2. 40.],999,77,17141,-40,2,30
</code></pre>
|
<p>You can use <code>pd.join</code> like:</p>
<pre><code>q1_2 = file1.join(file2, lsuffix='_Q1', rsuffix='_Q2')
file1_3 = q1_2.join(file3, rsuffix='_Q3')
</code></pre>
<p>Or, if the 'element' column is the same for all three data frames, and there are no conflicting column names, you can use <code>pd.merge</code>:</p>
<pre><code>q1_2 = file1.merge(file2)
file1_3 = q1_2.merge(file3)
</code></pre>
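<p>Since <code>element</code> is the shared key, the same thing can also be written as one chained merge, which reproduces the expected output exactly:</p>
<pre><code>combined = file1.merge(file2, on='element').merge(file3, on='element')
</code></pre>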
|
python|pandas|csv|format|multiple-columns
| 1
|
373,675
| 67,176,489
|
convert from saved model to quant. tflite, 'Quantization not yet supported for op: CUSTOM'
|
<p>I read a similar question, <a href="https://stackoverflow.com/questions/64621991/tensorflow-tf2-quantization-to-full-integer-error-with-tfliteconverter-runtime">Tensorflow (TF2) quantization to full integer error with TFLiteConverter RuntimeError: Quantization not yet supported for op: 'CUSTOM'</a><br />
However, it does not resolve this on TF 2.4.1.</p>
<p>I referred to this TensorFlow page to convert using integer-only quantization:
<a href="https://tensorflow.google.cn/lite/performance/post_training_integer_quant" rel="nofollow noreferrer">https://tensorflow.google.cn/lite/performance/post_training_integer_quant</a><br />
However, it returns this error:</p>
<blockquote>
<p>RuntimeError: Quantization not yet supported for op: 'CUSTOM'.</p>
</blockquote>
<p><strong>Code:</strong></p>
<pre><code>import tensorflow as tf
import numpy as np

def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_saved_model(model)

# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# set the representative dataset for the converter so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

# write the quantized tflite model to a file
with open('my_quant.tflite', 'wb') as f:
    f.write(tflite_model)
</code></pre>
<p>How to resolve this issue?<br />
Thanks</p>
|
<p>Can you try to use the flags</p>
<pre><code>converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.experimental_new_quantizer = True
</code></pre>
<p>instead.</p>
<p>"TFLITE_BUILTINS_INT8" indicates a fully quantized op set and we don't have the quantized kernel for the custom op.</p>
|
tensorflow-lite|quantization
| 1
|
373,676
| 67,077,583
|
Correctly Load Binary Mask/GIF with PIL and Imageio
|
<p>I have to load a gif containing a binary mask in Python.</p>
<p><a href="https://i.stack.imgur.com/IQpDA.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IQpDA.gif" alt="inputmask" /></a></p>
<pre><code>import numpy as np
from PIL import Image
import imageio
from matplotlib import pyplot as plt
maskPIL = np.array(Image.open('mask.gif'))
maskIO = np.array(imageio.imread('mask.gif'))
plt.subplot(1,2,1)
plt.title('PIL Mask')
plt.imshow(maskPIL,cmap='Greys')
plt.subplot(1,2,2)
plt.title('ImageIO Mask')
plt.imshow(maskIO,cmap='Greys')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/kT5Sd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kT5Sd.png" alt="Result" /></a></p>
<p><strong>Why are the 2 methods behaving differently?</strong></p>
<p>PIL version: 8.0.1</p>
<p>imageio version: 2.9.0</p>
|
<p>If you do this:</p>
<pre><code>im = Image.open('mask.gif')
print(im)
</code></pre>
<p><strong>Output</strong></p>
<pre><code><PIL.GifImagePlugin.GifImageFile image mode=P size=683x512 at 0x7FC0C86FF430>
</code></pre>
<p>you will see that your image is a <strong>palette</strong> image - because <code>mode=P</code>. That means that the values in the image are not RGB or greyscale values, but indices into a palette. If you look at the palette:</p>
<pre><code>np.array(im.getpalette()).reshape(256,3)
Out[25]:
array([[255, 255, 255], <--- palette entry 0
[ 0, 0, 0], <--- palette entry 1
[ 2, 2, 2],
[ 3, 3, 3],
[ 4, 4, 4],
[ 5, 5, 5],
...
...
</code></pre>
<p>you will see that entry 0 is rgb(255,255,255), so that means wherever you have zero in your image, it should display white! And wherever you have one in your image, it should display black.</p>
<p>If you want the proper values, as greyscale, you need to convert the image to <code>L</code> mode; then all your pixels will be actual grey values:</p>
<pre><code>maskPIL = np.array(Image.open('mask.gif').convert('L'))
</code></pre>
<p>Fuller explanation <a href="https://stackoverflow.com/a/52307690/2836621">here</a>.</p>
|
python|numpy|python-imaging-library|python-imageio
| 2
|
373,677
| 67,154,206
|
pandas groupby then filter by date to get mean
|
<p>Using pandas dataframes, I'm attempting to get the average number of purchases in the last 90 days for each row (not including the current row itself) based on CustId, and then add a new column "PurchaseMeanLast90Days".</p>
<p>This is the code I tried, which is incorrect:</p>
<pre><code>group = df.groupby(['CustId'])
df['PurchaseMeanLast90Days'] = group.apply(lambda g: g[g['Date'] > (pd.DatetimeIndex(g['Date']) + pd.DateOffset(-90))])['Purchases'].mean()
</code></pre>
<p>Here's my data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1/01/2021</td>
<td>5</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1/12/2021</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>5/01/2021</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>1/01/2021</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>2</td>
<td>2/01/2021</td>
<td>1</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>3/01/2021</td>
<td>2</td>
</tr>
<tr>
<td>9</td>
<td>2</td>
<td>4/01/2021</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>For example, row index 5 would include these rows in its mean() = 3.33:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>The new dataframe would look like this (I didn't do the calcs for CustId=2):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
<th>PurchaseMeanLast90Days</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1/09/2021</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1/12/2021</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
<td>2.67</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
<td>3.0</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>5/01/2021</td>
<td>5</td>
<td>3.33</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>1/01/2021</td>
<td>1</td>
<td>...</td>
</tr>
<tr>
<td>7</td>
<td>2</td>
<td>2/01/2021</td>
<td>1</td>
<td>...</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>3/01/2021</td>
<td>2</td>
<td>...</td>
</tr>
<tr>
<td>9</td>
<td>2</td>
<td>4/01/2021</td>
<td>3</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can do a rolling computation:</p>
<pre><code>df["Date"] = pd.to_datetime(df["Date"], dayfirst=False)
df["PurchaseMeanLast90Days"] = (
(
df.groupby("CustId")
.rolling("90D", min_periods=1, on="Date", closed="both")["Purchases"]
.apply(lambda x: x.shift(1).sum() / (len(x) - 1))
)
.fillna(0)
.values
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Index CustId Date Purchases PurchaseMeanLast90Days
0 0 1 2021-01-01 5 0.000000
1 1 1 2021-01-12 1 5.000000
2 2 1 2021-03-28 2 3.000000
3 3 1 2021-04-01 4 2.666667
4 4 1 2021-04-20 2 3.000000
5 5 1 2021-05-01 5 2.666667
6 6 2 2021-01-01 1 0.000000
7 7 2 2021-02-01 1 1.000000
8 8 2 2021-03-01 2 1.000000
9 9 2 2021-04-01 3 1.333333
</code></pre>
|
python|pandas|dataframe|filter|mean
| 1
|
373,678
| 67,133,991
|
Introducing noise to a binary class
|
<p>I have a dataset that I'm running classification on and the class itself is binary (0, 1). Essentially I want to introduce some noise to the class column, that is, randomly invert 5% of the classes. I.e. if I had 1000 rows of data I would want to invert the class of 50 of these.</p>
<p>My variables are like</p>
<pre><code>data = read_csv(...)
x = dataset.drop("class", axis=1)
y = dataset["class"]
</code></pre>
<p>And I want to introduce the noise to <code>y</code>, where each row in y is either 0 or 1.</p>
|
<p>There is a way you can do this in a somewhat long one line with <code>np.where</code>. At the moment, I can't seem to remember it. But, I have done a version of this in the past and it's worked just fine. All you're doing is changing one value to something outside of your choices, like a placeholder, changing the other, and changing the placeholder to your desired output. Since you're using binary data (0, 1), we'll just introduce the number two, temporarily. This gives us something to swap around.</p>
<pre class="lang-py prettyprint-override"><code>sample_indexes = df.sample(.05).index.values
df.loc[(sample_indexes) & (df['my_column'] == 1), 'my_column] = 2
df.loc[(sample_indexes) & (df['my_column'] == 0), 'my_column] = 1
df.loc[(sample_indexes) & (df['my_column'] == 2), 'my_column] = 0
</code></pre>
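<p>For the <code>y</code> series from the question, a shorter sketch that flips 5% of the labels directly (assuming <code>y</code> holds 0/1 integers):</p>
<pre class="lang-py prettyprint-override"><code># pick 5% of the rows at random; set random_state only if you need reproducibility
idx = y.sample(frac=0.05, random_state=0).index

# 1 - 0 = 1 and 1 - 1 = 0, so this inverts the selected labels in place
y.loc[idx] = 1 - y.loc[idx]
</code></pre>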
|
python|numpy
| 0
|
373,679
| 66,831,349
|
Filter rows by criteria and select multiple columns from a dataframe with python pandas
|
<p>If I have the following dataframe subset</p>
<pre><code> A B C D E Date
R0 xy 78 io 16 73 2021-03-25
R1 xx 27 ya 80 1 2021-04-20
R2 xx 53 ya 27 44 2021-06-20
R3 xx 65 io 30 84 2021-08-22
R4 xv 9 ui 62 1 2021-08-01
</code></pre>
<p>How can I do this with pandas to get the following dataframe:</p>
<pre><code> A B C Date
R1 xx 27 ya 2021-04-20
R2 xx 53 ya 2021-06-20
</code></pre>
<p>I was thinking of filtering columns by doing:</p>
<pre><code>sbset = subset[['A','B','C', 'Date' ]]
</code></pre>
<p>and then filter where A = 'xx' and C = 'ya', but with a dataframe of 1 million obs and 127 vars it takes too long. Can I do both actions (filter by two or more variables and select multiple variables) in one step?</p>
<p>Another question: if the dataframe stores the dates as strings, how can I change them to a date format?</p>
<p>Thanks for reading.</p>
|
<p>You just need boolean masking for this:</p>
<pre><code>mask=(df['A']=='xx') & (df['C']=='ya')
</code></pre>
<p>Finally:</p>
<pre><code>result=df[mask]
</code></pre>
<p>Now if you print <code>result</code> you will get your desired output</p>
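<p>You can also do the filtering and the column selection in one step with <code>loc</code>, and convert the date strings with <code>pd.to_datetime</code>:</p>
<pre><code>sbset = df.loc[(df['A']=='xx') & (df['C']=='ya'), ['A', 'B', 'C', 'Date']]
sbset['Date'] = pd.to_datetime(sbset['Date'])
</code></pre>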
|
python|pandas
| 0
|
373,680
| 67,168,867
|
(Pandas) correct lambda expression to sort column by value @ index position 1
|
<p>I am attempting to sort <code>SrcWell</code> by the character at index position 1 of each value. I understand there is a keyword argument <code>key</code>, which is similar in behavior to <code>key</code> in <code>sorted</code>; however, I receive a ValueError when attempting to sort using <code>key</code>. Here is an example CSV file to be loaded as a pandas DataFrame:</p>
<pre><code>SrcPlate SrcWell
PS000000123456 A4
PS000000123456 B7
PS000000123456 A7
PS000000123456 H6
PS000000123456 G6
PS000000123456 F6
</code></pre>
<p>And a small script to sort <code>SrcWell</code> by its numerical values:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
worklist = pd.read_csv('worklist.csv')
print(worklist.sort_values(by="SrcWell", key=lambda x: int(x[1])))
>>> [...] ValueError: invalid literal for int() with base 10: 'B7'
</code></pre>
|
<p>Try using .str accessor and slicing:</p>
<pre><code>df.sort_values(by="SrcWell", key=lambda x: x.str[1])
</code></pre>
<p>Output:</p>
<pre><code> SrcPlate SrcWell
0 PS000000123456 A4
3 PS000000123456 H6
4 PS000000123456 G6
5 PS000000123456 F6
1 PS000000123456 B7
2 PS000000123456 A7
</code></pre>
<p>As @Ben.T points out, per the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p><strong>key : callable, optional</strong><br />
Apply the key function to the values before
sorting. This is similar to the key argument in the builtin sorted()
function, with the notable difference that this key function should be
vectorized. <em>It should expect a Series and return a Series with the</em>
<em>same shape as the input.</em> It will be applied to each column in by
independently.</p>
</blockquote>
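<p>Note that <code>x.str[1]</code> compares single characters, which is fine for the sample data; if the numeric part can be more than one digit, slice and cast instead:</p>
<pre><code>df.sort_values(by="SrcWell", key=lambda x: x.str[1:].astype(int))
</code></pre>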
|
python|python-3.x|pandas|numpy
| 2
|
373,681
| 66,991,355
|
How can I substring to specific character in pandas?
|
<p>For example, I have 2 columns (1, 2), and in column 2 I want to fetch everything up to the " character.</p>
<p>I wanted to do something like this:</p>
<pre><code>df.columns = ['1','2']
a = df['2'].str[:' " ']
print(a)
</code></pre>
<p>but that is not possible, since the slice needs a number.</p>
<pre><code> column 2 example
1234@gmail.com, 12@gmail.com", blah blah
1234@gmail.com", ....
123@gmail.com", ...
1234@gmail.com, 1234@gmail.com, 1234@gmail.com", blah
</code></pre>
<p>return everything until " character.</p>
|
<p>Split the string on <code>"</code> and pick the <code>first</code> element.</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a>:</p>
<pre><code>df['2'].str.split('"').str[0]
</code></pre>
|
python|pandas|substring|jupyter
| 1
|
373,682
| 66,934,896
|
Looking to find the sum of a unique member's payment based of whether some dates fall in between a certain time in python
|
<p>This is my first time asking on the community, although I have used the website for help extensively in the past. I was not able to find a solution to this specific problem and I am fairly amateur at Python, so I'm having a hard time putting the logic in code, although I think the logic is clear enough. I am using Python via Google Colab for this and have shared a Google Sheet with data at the end.</p>
<p>In my scenario, we have a start month, length of time, and payout month. End month can be calculated through length. A person can be a part of multiple groups and thus can have multiple start, end and payout months.</p>
<p>The goal is to find how much is expected to be paid by a member as of today.</p>
<p>E.g. a group begins in jan 2020, is 10 months long and will end in oct 2020. The monthly contribution is 5k. The payout month is, let's say, mar 2020. While we technically should be getting 10 payments (10 month group), we will expect only 9 payments, i.e. 45k, because when the payout month comes around, the member is not expected to pay for that month. If, say, the group began in dec 2020 and was 10 months long, then as of today we would only expect 5 payments (dec to apr 21).</p>
<p>These scenarios get complicated when, for example, a member is part of 3 groups, so 3 start dates, 3 end dates and 3 payout dates and likely 3 different instalment amounts. Let's say the start dates are jan 20, feb 20, mar 20 and all groups are 10 months long. Let's also say that there is a payout in apr 20. In apr 20, all the groups will be active (the end month has not been reached yet), so in apr 20 (the payout month) we will expect no payments from all the groups.</p>
<p>Meaning that, if there are 3 groups and there is a payout that falls between any group's start and end month, then we will not expect a payment for that group in that month. If there are two payouts that fall in between the start and end months of the groups, then we will not expect 6 payments for that month, 2 for each group, and so on. If, say, there are 3 groups and 1 payout falls between the dates of only 2 groups, then we will not expect instalments for only those two groups (whatever the instalment is for those groups).</p>
<p>The following google sheet has some sample data.
The group ID col is entirely unique and will have no dups (you can think of this as an invoice, since all invoices are unique). The member code col can have duplicates, since a member can have more than one group. Do not worry about the days in the date; what matters is the month and year. We have start month, group length and payout month. We also have how much money is owed monthly by a member for that group.
<a href="https://docs.google.com/spreadsheets/d/1nAXlifIQdYiN1MWTv7vs2FqbFu2v6ykCzQjrJNPTBWI/edit#gid=0" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1nAXlifIQdYiN1MWTv7vs2FqbFu2v6ykCzQjrJNPTBWI/edit#gid=0</a></p>
<p>Any help or advice would be great.</p>
<p>EDITED -> I have tried the following but got an error (I coded the months, i.e. jan 2020 = 1, feb 2020 = 2 and so on, so I don't have to mess around with dates):</p>
<pre><code>deal_list = df['Group ID'].tolist()

def instalment(deal_list):
    for member in df['Member Code'].unique():
        if df['Coded Payout Month'] >= df['Coded Start Month'] and df['Coded Payout Month'] <= df['Coded End Month']:
            count_months = count_months + 1
    return count_months * df['Instalment']

instalment(deal_list)

ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>EDITED - have also tried the following just now (took help from <a href="https://stackoverflow.com/questions/52405670/pandas-groupby-and-iterate-with-conditionals-within-groups">Pandas: Groupby and iterate with conditionals within groups?</a>). It sort of worked in that it gave me a count of 1 for each row. I was trying to get the number of times each payout month appears within the dates of a group</p>
<pre><code>grouped = df.groupby('Member Code')

for g_idx, group in grouped:
    for r_idx, row in group.iterrows():
        if (((row['Coded Payout Month'] >= group['Coded Start Month']).any())
                & (row['Coded Payout Month'] <= group['Coded End Month']).any()):
            df.loc[r_idx, 'payout_cut'] =+ 1

print(df)
</code></pre>
|
<p>I found a way around it. Essentially, rather than trying to iterate through all the rows, I transformed my data into long form first in Google Sheets via transpose and filter (I filtered for all payout months for a member and transposed the results into the rows). I then pushed that into Colab and, through pd.melt, transformed the data back into unique rows per deal with the additional payouts as required. Then running the condition was simple enough, and finally I summed all the true values.</p>
<p>I can explain a bit more if anyone needs.
I took inspiration from here:
<a href="https://youtu.be/pKvWD0f18Pc" rel="nofollow noreferrer">https://youtu.be/pKvWD0f18Pc</a></p>
|
python|pandas|numpy|loops|group-by
| 0
|
373,683
| 66,841,646
|
Get new column with groupby and return the maximum to entire group
|
<p>I want to add a new column with the maximum next_crossing_down for the entire x street.
I have this:</p>
<pre><code>cars = pd.DataFrame({'x': [1,1,1,1,1,1,1,2,2,2,2],
'y': [7,None,13,14,22,None,9,13,14,15,16],
'next_crossing_down': [5,None,10,10,20,None,5,10,10,10,15]})
x y next_crossing_down
0 1 7.0 5.0
1 1 NaN NaN
2 1 13.0 10.0
3 1 14.0 10.0
4 1 22.0 20.0
5 1 NaN NaN
6 1 9.0 5.0
7 2 13.0 10.0
8 2 14.0 10.0
9 2 15.0 10.0
10 2 16.0 15.0
</code></pre>
<p>And I would like this:</p>
<pre><code> x y next_crossing_down next_crossing_down_max
0 1 7.0 5.0 20.0
1 1 NaN NaN NaN
2 1 13.0 10.0 20.0
3 1 14.0 10.0 20.0
4 1 22.0 20.0 20.0
5 1 NaN NaN NaN
6 1 9.0 5.0 15.0
7 2 13.0 10.0 15.0
8 2 14.0 10.0 15.0
9 2 15.0 10.0 15.0
10 2 16.0 15.0 15.0
</code></pre>
<p>This is the closest that I have come. I get the right numbers, only not in the entire x_street.</p>
<pre><code>cars['next_crossing_down_max']= cars.groupby(['x'])['next_crossing_down'].max()
x y next_crossing_down next_crossing_down_max
0 1 7.0 5.0 NaN
1 1 NaN NaN 20.0
2 1 13.0 10.0 15.0
3 1 14.0 10.0 NaN
4 1 22.0 20.0 NaN
5 1 NaN NaN NaN
6 1 9.0 5.0 NaN
7 2 13.0 10.0 NaN
8 2 14.0 10.0 NaN
9 2 15.0 10.0 NaN
10 2 16.0 15.0 NaN
</code></pre>
|
<p>Are you looking for <code>pandas.DataFrame.transform</code>?</p>
<pre><code>import numpy as np

cars['next_crossing_down_max'] = cars.groupby(['x'])['next_crossing_down'].transform('max')
cars['next_crossing_down_max'] = np.where(cars['next_crossing_down'].isnull(),
                                          np.nan,
                                          cars['next_crossing_down_max'])
</code></pre>
<p>Output</p>
<pre><code>cars
Out[18]:
x y next_crossing_down next_crossing_down_max
0 1 7.0 5.0 20.0
1 1 NaN NaN NaN
2 1 13.0 10.0 20.0
3 1 14.0 10.0 20.0
4 1 22.0 20.0 20.0
5 1 NaN NaN NaN
6 1 9.0 5.0 20.0
7 2 13.0 10.0 15.0
8 2 14.0 10.0 15.0
9 2 15.0 10.0 15.0
10 2 16.0 15.0 15.0
</code></pre>
<p>Alternatively you could <code>mask</code> instead of <code>np.where</code>, which will get you the same result, but it's a bit slower (thanks to @Anky):</p>
<pre><code>>>> cars.groupby("x")['next_crossing_down'].transform('max').mask(cars['next_crossing_down'].isna())
Out[19]:
0 20.0
1 NaN
2 20.0
3 20.0
4 20.0
5 NaN
6 20.0
7 15.0
8 15.0
9 15.0
10 15.0
</code></pre>
|
python|pandas
| 2
|
373,684
| 66,900,782
|
Can I use the column name as condition?
|
<p>I have a pandas dataframe which contains around a hundred columns.
Most of these columns are dates and I want to iterate through all these.</p>
<p>Here is an example :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">nbDays</th>
<th style="text-align: right;">2020-12-20</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2020-12-30</td>
<td style="text-align: center;">4</td>
<td style="text-align: right;"></td>
</tr>
</tbody>
</table>
</div>
<p>If date + nbDays <= 2020-12-20, set the value of this column to TRUE, otherwise FALSE.
The only thing I can't do is take the column name as an argument in my condition, and do it for all these date columns.</p>
<p>Here's my expected output :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">date</th>
<th style="text-align: center;">nbDays</th>
<th style="text-align: center;">2020-12-20</th>
<th style="text-align: right;">2020-12-21</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2020-12-30</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">FALSE</td>
<td style="text-align: right;">FALSE</td>
</tr>
<tr>
<td style="text-align: left;">2020-12-18</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">TRUE</td>
<td style="text-align: right;">FALSE</td>
</tr>
</tbody>
</table>
</div>
<p>Maybe in a loop, but wouldn't that take long to run?</p>
|
<p><em>Ensure your date columns are converted to datetime for this to work</em></p>
<p>The basic steps I've used are:</p>
<ol>
<li>get pandas to identify the date columns</li>
<li>shift the "date" column by nbDays</li>
<li>compare the shifted date column to the dates in the columns</li>
</ol>
<pre><code>from dateutil.relativedelta import relativedelta

shifted_date = pd.Series([
    t + relativedelta(days=nb_days)
    for t, nb_days in zip(df['date'], df['nbDays'])
])

date_columns = df.select_dtypes(include=[np.datetime64]).columns
for date_column in date_columns:
    date_to_check = pd.to_datetime(date_column)
    df[date_column] = np.where(
        shifted_date <= date_to_check, True, False
    )
</code></pre>
<p>Don't be afraid to use a <code>for</code> loop here because the tough work is vectorised in the <code>np.where</code> function.</p>
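<p>If you prefer, the list comprehension can also be replaced by a fully vectorised shift with <code>pd.to_timedelta</code> (a sketch, using the column names from the question):</p>
<pre><code>shifted_date = df['date'] + pd.to_timedelta(df['nbDays'], unit='D')

for date_column in df.select_dtypes(include=[np.datetime64]).columns:
    df[date_column] = shifted_date <= pd.to_datetime(date_column)
</code></pre>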
|
python|pandas|numpy
| 1
|
373,685
| 66,861,558
|
Pandas - Boolean value conditional statement not being picked up in function
|
<p>New to python.</p>
<p>I have a dataset containing a date column formatted as yyyy-mm-dd (%Y-%m-%d) as datetime64 type. The dataset spans 2 years, 2019-2020. I'm trying to write a function that will add the quarter based on the date. I can't get the if statements to recognize the data, so everything is coming back as 'Q42020' and I don't understand why.</p>
<pre><code>def applyquarter(x):
    if 'date' < '2019-04-01':
        return('Q12019')
    elif 'date' < '2019-07-01':
        return('Q22019')
    elif 'date' < '2019-10-01':
        return('Q32019')
    elif 'date' < '2020-01-01':
        return('Q42019')
    elif 'date' < '2020-04-01':
        return('Q12020')
    elif 'date' < '2020-07-01':
        return('Q22020')
    elif 'date' < '2020-10-01':
        return('Q32020')
    else:
        return('Q42020')
</code></pre>
<p>My understanding:</p>
<p>if/elif/else statements run until the criteria is met, then stop running</p>
<p>datetime64 can be used with boolean operators</p>
<p>the problem must exist in the boolean interaction with the datetime value</p>
<p>Would anyone please explain what I'm not understanding correctly?</p>
|
<pre><code> if 'date' < '2019-04-01':
</code></pre>
<p>This compares two character strings. Nothing in your posted code makes <em>any</em> reference to a data frame.</p>
<p>See <a href="https://stackoverflow.com/questions/50459301/how-to-convert-dates-to-quarters-in-python#50459364">here</a> for converting a date to a quarter. The critical call is</p>
<pre><code>pd.PeriodIndex(df.date, freq='Q')
</code></pre>
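<p>For example, to get labels in the question's <code>Q12019</code> style (<code>%q</code> is the quarter directive supported by pandas periods):</p>
<pre><code>df['quarter'] = pd.PeriodIndex(df.date, freq='Q').strftime('Q%q%Y')
</code></pre>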
|
python|pandas
| 1
|
373,686
| 66,990,266
|
InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [80,16] and labels shape [1280]
|
<p>I am trying to make an image classifier CNN using TensorFlow. I am trying to load the dataset using an <code>ImageDataGenerator</code>, like this:</p>
<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_datagen = ImageDataGenerator(rescale=1/255)
IMAGE_DIMS = (200, 200)

train_generator = image_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=IMAGE_DIMS,
    batch_size=80,
    class_mode="categorical",
    color_mode="grayscale",
    shuffle=True
)
</code></pre>
<p>model architecture:</p>
<pre><code>model = keras.models.Sequential([
    keras.layers.Conv2D(16, (3,3), input_shape=(200,200,1), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(64, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dense(units=512, activation="relu"),
    keras.layers.Dense(units=16, activation="softmax")
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
</code></pre>
<p>I am loading an image dataset that is 200x200 pixels and greyscaled. There are 16 labels for the dataset (i.e. the images are contained in 16 different folders). Loading the dataset works properly:</p>
<pre><code>print(train_generator.labels)
print(train_generator.image_shape)
</code></pre>
<pre><code>[ 0 0 0 ... 15 15 15]
(200, 200, 1)
</code></pre>
<p>After running:</p>
<pre><code>model.fit(
    train_generator,
    steps_per_epoch=4,
    epochs=2
)
</code></pre>
<p>I am getting this error:</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-124-95f52517d8f4> in <module>
----> 1 model.fit(
2 train_generator,
3 steps_per_epoch=4,
4 epochs=2
5 )
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1098 _r=1):
1099 callbacks.on_train_batch_begin(step)
-> 1100 tmp_logs = self.train_function(iterator)
1101 if data_handler.should_sync:
1102 context.async_wait()
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
886 # Lifting succeeded, so variables are initialized and we can run the
887 # stateless function.
--> 888 return self._stateless_fn(*args, **kwds)
889 else:
890 _, _, _, filtered_flat_args = \
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
2940 (graph_function,
2941 filtered_flat_args) = self._maybe_define_function(args, kwargs)
-> 2942 return graph_function._call_flat(
2943 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
2944
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1916 and executing_eagerly):
1917 # No tape is watching; skip to running the function.
-> 1918 return self._build_call_outputs(self._inference_function.call(
1919 ctx, args, cancellation_manager=cancellation_manager))
1920 forward_backward = self._select_forward_and_backward_functions(
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
553 with _InterpolateFunctionError(self):
554 if cancellation_manager is None:
--> 555 outputs = execute.execute(
556 str(self.signature.name),
557 num_outputs=self._num_outputs,
~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
57 try:
58 ctx.ensure_initialized()
---> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [80,16] and labels shape [1280]
[[node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at <ipython-input-124-95f52517d8f4>:1) ]] [Op:__inference_train_function_19777]
Function call stack:
train_function
</code></pre>
<p>I am using jupyter Ipython notebook.</p>
<p>I am relatively new to TensorFlow.</p>
<p>What is the error about? How do I fix this issue?</p>
|
<p>The mismatch comes from the loss/label pairing: <code>loss='sparse_categorical_crossentropy'</code> expects integer class labels of shape <code>(batch_size,)</code>, but <code>class_mode="categorical"</code> makes the generator yield one-hot labels of shape <code>(batch_size, 16)</code>, which the loss op flattens to 80 x 16 = 1280, exactly the labels shape in the error. The resolution follows from <a href="https://stackoverflow.com/questions/49161174/tensorflow-logits-and-labels-must-have-the-same-first-dimension">this</a> thread: switch the generator to sparse (integer) labels.</p>
<pre><code>train_generator = image_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=IMAGE_DIMS,
    batch_size=80,
    class_mode="sparse",   # integer labels match sparse_categorical_crossentropy
    color_mode="grayscale",
    shuffle=True
)
</code></pre>
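<p>Alternatively, keep <code>class_mode="categorical"</code> and change the loss to match the one-hot labels instead:</p>
<pre><code>model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',  # expects one-hot labels of shape (batch, 16)
    metrics=['accuracy']
)
</code></pre>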
|
python|tensorflow|keras|computer-vision|tensorflow2.0
| 0
|
373,687
| 66,885,220
|
How to get coordinates of best object detected with tensorflow 2?
|
<p>The accepted answer of <a href="https://stackoverflow.com/questions/48915003/get-the-bounding-box-coordinates-in-the-tensorflow-object-detection-api-tutorial">this question</a> says how tensorflow draws the bounding boxes of the detected object however does not show or explain how to retrieve these coordinates. Could someone show me how this can be done for tensorflow 2?</p>
|
<p>You can use most of the code in <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/auto_examples/plot_object_detection_saved_model.html#putting-everything-together" rel="nofollow noreferrer">this</a> documentation here.</p>
<p>Just add the below code for getting the bounding box coordinates (after <code>detection_classes</code> has been defined)</p>
<pre><code>width = image_np.shape[1]
height = image_np.shape[0]
# each box holds normalized coordinates in [ymin, xmin, ymax, xmax] order
for box, score, cls in zip(detections['detection_boxes'][0],
                           detections['detection_scores'][0],
                           detections['detection_classes'][0]):
    if score >= 0.5:  # or any other confidence threshold
        xmin = box[1] * width
        ymin = box[0] * height
        xmax = box[3] * width
        ymax = box[2] * height
        print(cls, score, (xmin, ymin, xmax, ymax))
</code></pre>
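<p>The post-processed detections come back sorted by score (highest first), so if you only want the single best detection, it is simply the first entry:</p>
<pre><code>best_box = detections['detection_boxes'][0][0]  # highest-scoring box
ymin, xmin, ymax, xmax = best_box
print(xmin * width, ymin * height, xmax * width, ymax * height)
</code></pre>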
|
python|tensorflow|tensorflow2.0|object-detection-api
| 1
|
373,688
| 66,958,544
|
Why is matrix multiplication with Numba slow?
|
<p>I am trying to find an explanation for why my matrix multiplication with Numba is much slower than using NumPy's dot function. Although I am using the most basic code for writing a matrix multiplication function with Numba, I don't think that the significantly slower performance is due to the algorithm. For simplicity, I consider two k x k square matrices, A and B. My code reads</p>
<pre><code>1 @njit('f8[:,:](f8[:,:], f8[:,:])')
2 def numba_dot(A, B):
3
4 k=A.shape[1]
5 C = np.zeros((k, k))
6
7 for i in range(k):
8 for j in range(k):
9
10 tmp = 0.
11 for l in range(k):
12 tmp += A[i, l] * B[l, j]
13
14 C[i, j] = tmp
15
16 return C
</code></pre>
<p>Running this code repeatedly with two random 1000 x 1000 matrices, it typically takes at least about 1.5 seconds to finish.
On the other hand, if I don't update the matrix C, i.e. if I drop line 14 or replace it, for the sake of a test, by for example the following line:</p>
<pre><code>14 C[i, j] = i * j
</code></pre>
<p>the code finishes in about 1-5 ms. Compared to that, NumPy's dot function needs around 10 ms for this matrix multiplication.</p>
<p>What is the reason behind the discrepancy of the running times between the above code for the matrix multiplication and this small variation? Is there a way to store the value of the variable tmp in C[i, j] without deteriorating the performance of the code so significantly?</p>
|
<p>NumPy's <code>dot</code> does not run a Python-style loop at all: it delegates to an optimized BLAS library that uses SIMD vector instructions, cache-friendly blocking, and usually multiple threads, so the multiply-adds are pipelined and executed many at a time.</p>
<p>Your implementation performs k^3 = 10^9 scalar loop iterations, and the inner loop strides down a column of <code>B</code>, which defeats the CPU cache; a billion cache-unfriendly operations will take some non-trivial time.</p>
<p>The 1-5 ms timing without line 14 is misleading: once <code>tmp</code> is never stored, the inner loop has no observable effect, and LLVM's dead-code elimination (which Numba compiles through) removes it entirely. You are then timing an almost empty function, not a faster multiplication.</p>
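<p>If you need the product inside a jitted function, a minimal sketch: Numba supports <code>np.dot</code> on contiguous 2-D float arrays and lowers it to the same BLAS call (this requires SciPy to be installed), so you keep the NumPy-level speed.</p>
<pre><code>import numpy as np
from numba import njit

@njit
def numba_blas_dot(A, B):
    # np.dot in nopython mode dispatches to BLAS, not to a Python-level loop
    return np.dot(A, B)
</code></pre>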
|
python|numpy|numba
| 2
|
373,689
| 67,053,314
|
Why arr.flat.base is different from the a.ravel().base?
|
<p>I am trying to dig a bit into how Numpy works internally, and I am confused about some stuff regarding the <code>base</code> and the array flattening.</p>
<pre><code>import numpy as np
a = np.arange(12, dtype=int).reshape((3, 4))
</code></pre>
<p>So, we have this easy array. Then I try to use <code>flat</code> and <code>ravel()</code>:</p>
<pre><code>flat_iter = a.flat
print(flat_iter.base is a)
</code></pre>
<p>This prints me <code>True</code>, which is kind of what I was expecting.</p>
<pre><code>a_ravel = a.ravel()
print(a_ravel.base is a)
</code></pre>
<p>However, this gives me <code>False</code>. Why?</p>
<p>The <code>flat_iter.base</code> seems to correspond to the reshaped <code>a</code> array, i.e. <code>np.arange(12, dtype=int).reshape((3, 4))</code>. However, <code>a_ravel.base</code> seems to correspond to <code>np.arange(12, dtype=int)</code>.</p>
<p>I've tried to google about it, but I did not really understand why this happens. Why would the <code>base</code> behavior be different between two?</p>
|
<pre><code>In [82]: a = np.arange(5)
In [83]: b = a.reshape(5,1)
In [84]: c = b.ravel()
In [85]: biter=b.flat
</code></pre>
<p>Now check the databuffer location:</p>
<pre><code>In [86]: a.__array_interface__['data']
Out[86]: (44761168, False)
In [87]: b.__array_interface__['data']
Out[87]: (44761168, False)
In [88]: c.__array_interface__['data']
Out[88]: (44761168, False)
</code></pre>
<p>All the same, <code>b</code> and <code>c</code> are views. But:</p>
<pre><code>In [89]: biter.__array_interface__['data']
Traceback (most recent call last):
File "<ipython-input-89-9d5e2ed1a08d>", line 1, in <module>
biter.__array_interface__['data']
AttributeError: 'numpy.flatiter' object has no attribute '__array_interface__'
</code></pre>
<p><code>flatiter</code> is not an array!</p>
<pre><code>In [90]: a.base # None
In [91]: b.base
Out[91]: array([0, 1, 2, 3, 4])
In [92]: b.base is a
Out[92]: True
In [93]: c.base is a
Out[93]: True
</code></pre>
<p>As expected for views, the <code>b</code> and <code>c</code> base are both <code>a</code>.</p>
<p>But the <code>b.flat</code> base is <code>b</code>:</p>
<pre><code>In [94]: biter.base
Out[94]:
array([[0],
[1],
[2],
[3],
[4]])
In [95]: biter.base is b
Out[95]: True
In [96]: biter.base is a # not a
Out[96]: False
</code></pre>
<p>Again <code>biter</code> is not an array, so doesn't 'obey' the same <code>base</code> logic.</p>
<p>Regardless of the <code>base</code>, modifying the <code>biter</code> modifies <code>b</code> and <code>a</code> (and <code>c</code>):</p>
<pre><code>In [97]: biter[::2] = 10
In [98]: b
Out[98]:
array([[10],
[ 1],
[10],
[ 3],
[10]])
In [99]: a
Out[99]: array([10, 1, 10, 3, 10])
</code></pre>
<p>So the short answer is that a <code>flatiter</code>'s <code>base</code> is not the same thing as a view's <code>base</code>. A view's <code>base</code> always collapses to the array that owns the memory: in your example <code>a</code> is itself a view of the 1-D <code>arange(12)</code> buffer, so <code>a.ravel().base</code> is that owning array, not <code>a</code>, while <code>flatiter.base</code> is simply the array the iterator was created from.</p>
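<p>A quick check of how <code>base</code> collapses for views (a minimal sketch):</p>
<pre><code>import numpy as np

a = np.arange(12).reshape(3, 4)    # a is itself a view of the arange buffer
print(a.base)                      # the owning 1-D array [ 0  1 ... 11]
print(a.ravel().base is a.base)    # True: the view chain collapses to the owner
print(a.flat.base is a)            # True: the flatiter points at a itself
</code></pre>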
|
python|numpy|numpy-ndarray
| 1
|
373,690
| 66,966,170
|
pandas.read_csv: keep column as integer while having NaN values
|
<p>I just converted to Python from R, and now I'm trying to read in data from a csv file.
I was very annoyed with all my integer columns being treated as floats, and after some digging I see that this is the problem:
<a href="https://stackoverflow.com/questions/11548005/numpy-or-pandas-keeping-array-type-as-integer-while-having-a-nan-value">NumPy or Pandas: Keeping array type as integer while having a NaN value</a></p>
<p>I see that the accepted answer gives me a hint as to where to go, but problem is that I have data with hundreds of columns, as is typical when doing data science, I suppose. So I don't want to specify for every column what type to use when reading in data with <code>read_csv</code>. This is fixed automatically in <code>R</code>.</p>
<p>Is it really this hard to use pandas to read in data in a proper way in Python?</p>
<p><strong>Source</strong>: <a href="https://pandas.pydata.org/pandas-docs/version/0.24/whatsnew/v0.24.0.html#optional-integer-na-support" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/version/0.24/whatsnew/v0.24.0.html#optional-integer-na-support</a></p>
|
<p>You can try using:</p>
<pre><code>df = pd.read_csv('./file.csv', dtype='Int64')
</code></pre>
<p>Edit: So that doesn't work for strings. Instead, try something like this:</p>
<pre><code>for col in df.columns[df.isna().any()].tolist():
    # note: astype('Int64') raises if the column holds genuinely fractional values
    if df[col].dtype == 'float':
        df[col] = df[col].astype('Int64')
</code></pre>
<p>Loop through each column that has an NA value, check whether its dtype is <code>float</code>, and convert it to the nullable <code>Int64</code> type.</p>
|
python|pandas
| 1
|
373,691
| 66,910,047
|
Groupby pandas but perform calculation on multiple columns
|
<p>I have a dataframe like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">name</th>
<th style="text-align: center;">date</th>
<th style="text-align: center;">col1</th>
<th style="text-align: center;">col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-01</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-02</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-03</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-04</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-05</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">2021-03-06</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-01</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-02</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-03</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-04</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-05</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">2021-03-06</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like to group by the names and find the number of days spanned by the nonzero entries of the other non-date columns (basically excluding any leading or trailing zeroes) to get something like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">name</th>
<th style="text-align: center;">col1</th>
<th style="text-align: center;">col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">A</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">5</td>
</tr>
<tr>
<td style="text-align: center;">B</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">2</td>
</tr>
</tbody>
</table>
</div>
<p>How can I do this without resorting to a for loop?</p>
|
<p>I think <a href="https://numpy.org/doc/stable/reference/generated/numpy.trim_zeros.html" rel="nofollow noreferrer"><code>np.trim_zeros</code></a> is what you are looking for: by default it strips both leading and trailing zeros (<code>trim='fb'</code>), so the length of the trimmed series is exactly the nonzero span you describe:</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np; import pandas as pd
>>> df = pd.DataFrame.from_dict({'name': ['A']*6 + ['B']*6, 'col1': [0, 0, 3, 1, 3, 1, 1, 2, 3, 0, 0, 0], 'col2': [1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0]})
>>> df
name col1 col2
0 A 0 1
1 A 0 0
2 A 3 1
3 A 1 0
4 A 3 1
5 A 1 0
6 B 1 0
7 B 2 0
8 B 3 1
9 B 0 1
10 B 0 0
11 B 0 0
>>> df.groupby('name').aggregate(lambda x: len(np.trim_zeros(x))).reset_index()
name col1 col2
0 A 4 5
1 B 3 2
</code></pre>
|
python|pandas
| 4
|
373,692
| 66,910,807
|
How to flag an anomaly in a data frame (row wise)?
|
<p>Python newbie here. I would like to flag sporadic numbers that are obviously off from the rest of the row.
In simple terms, flag numbers that do not seem to belong in their row. Numbers in the 100s and the 100,000s are considered 'off from the rest'.</p>
<pre><code>import pandas as pd
# intialise data of lists.
data = {'A':['R1', 'R2', 'R3', 'R4', 'R5'],
'B':[12005, 18190, 1021, 13301, 31119,],
'C':[11021, 19112, 19021,15, 24509 ],
'D':[10022,19910, 19113,449999, 25519],
'E':[14029, 29100, 39022, 24509, 412271],
'F':[52119,32991,52883,69359,57835],
'G':[41218, 52991,1021,69152,79355],
'H': [43211,7672991,56881,211,77342],
'J': [31211,42901,53818,62158,69325],
}
# Create DataFrame
df = pd.DataFrame(data)
# Print the output.
df.describe()
</code></pre>
<p>I am trying to do something exactly like this</p>
<p><a href="https://i.stack.imgur.com/zrGF6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zrGF6.jpg" alt="enter image description here" /></a></p>
<pre><code># I need help with step 1
#my code/pseudocode
# step 1: identify the values in each row that are don't belong to the group
# step 2: flag the identified values and export to excel
style_df = .applymap(lambda x: "background-color: yellow" if x else "") # flags the values that meets the criteria
with pd.ExcelWriter("flagged_data.xlsx", engine="openpyxl") as writer:
df.style.apply(lambda x: style_df, axis=None).to_excel(writer,index=False)
</code></pre>
|
<p>I used two conditions here: one to check for values less than 1000 and another for values greater than 99999. Based on these conditions, the code will highlight the outliers in red.</p>
<pre><code># Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('pandas_conditional.xlsx', engine='xlsxwriter')
# Convert the dataframe to an XlsxWriter Excel object.
df.to_excel(writer, sheet_name='Sheet1')
# Get the xlsxwriter workbook and worksheet objects.
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Add a format. Light red fill with dark red text.
format1 = workbook.add_format({'bg_color': '#FFC7CE',
'font_color': '#9C0006'})
# with the DataFrame index written in column 0 and the 'A' labels in
# column 1, the numeric columns B..J occupy worksheet columns 2..9
first_row = 1
first_col = 2
last_row = len(df)
last_col = 9
worksheet.conditional_format(first_row, first_col, last_row, last_col,
{'type': 'cell',
'criteria': '<',
'value': 1000,
'format': format1})
worksheet.conditional_format(first_row, first_col, last_row, last_col,
{'type': 'cell',
'criteria': '>',
'value': 99999,
'format': format1})
# Close the Pandas Excel writer and output the Excel file.
writer.save()
</code></pre>
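<p>If you prefer to stay in pandas (as your pseudocode suggests), a minimal sketch with <code>Styler.applymap</code> using the same 1000/99999 thresholds as above:</p>
<pre><code>def flag_outlier(x):
    # highlight numeric cells outside the 'normal' band
    if isinstance(x, (int, float)) and (x < 1000 or x > 99999):
        return 'background-color: yellow'
    return ''

styled = df.style.applymap(flag_outlier, subset=df.columns[1:])  # skip the 'A' labels
with pd.ExcelWriter('flagged_data.xlsx', engine='openpyxl') as writer:
    styled.to_excel(writer, index=False)
</code></pre>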
|
python|pandas|dataframe|export-to-excel
| 2
|
373,693
| 66,947,283
|
Creating a Pandas Dataframe and Assigning Values Based on Another Dataframe
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df1
ticker period calendarDate updated dateKey assetsAverage
WMT Q 2021-01-01 2021-03-31 2021-04-02 100000000
</code></pre>
<p>What I want to do is take these values and put them into another dataframe that looks like this:</p>
<pre><code>df2
ticker period Calendar Date Updated Date Key Assets Average
WMT Q 2021-01-01 2021-03-31 2021-04-02 100000000
</code></pre>
<p>I'm using the 2nd dataframe as my output and using my 1st dataframe as temporary storage.</p>
<p>Any suggestions?</p>
<p>I tried doing something like this:</p>
<pre><code>df2 = pd.DataFrame(
{
"Ticker":df1["ticker"],
"Period":df1["period"],
"Calendar Date":df1["calendarDate"],
"Updated":df1["updated"],
"Date Key":df1["dateKey"],
"Assets Average":df1["assetsAverage"]
}
)
</code></pre>
<p>The error message I got was</p>
<blockquote>
<p>TypeError: <code>__init__()</code> takes from 1 to 6 positional arguments but 112 were given</p>
</blockquote>
<p>(I'm actually using more columns, but getting my point across only required a few.)</p>
<p>Edit #1:</p>
<p>This is what I am trying to do now:</p>
<pre><code>df2 = df1.copy()
df2 = df2.rename(columns = {
"ticker":"Ticker",
"period":"Period",
"calendarDate":"Calendar Date",
"updated":"Updated",
"dateKey":"Date Key",
"assetsAverage":"Assets Average"
}
)
</code></pre>
<p>Unfortunately, I got the same error message as before, any suggestions?</p>
|
<p>Copy the frame and overwrite the header names directly; this avoids re-building the DataFrame through the constructor:</p>
<pre><code>df2 = df1.copy()
df2.columns = ['ticker', 'period', 'Calendar Date', 'Updated', 'Date Key', 'Assets Average']
</code></pre>
|
pandas|python-3.6
| 0
|
373,694
| 47,328,397
|
Keras ValueError: Error when checking model target: expected dense_18
|
<p>I am all done, just stuck in training my NN model in KERAS.
Here is my situation.</p>
<ol>
<li><p>I have a folder, i have 30 CSV files in there , all different name.</p></li>
<li><p>Now, I am doing classification. </p></li>
<li>Each CSV file (shape (5000, 3) after reading into the array dfs shown below) is a single training instance, so I have 30 training instances for 30 CSVs.</li>
<li>The filename is the label I want to classify. There are 3 unique labels, which I one-hot encode. </li>
<li>I am confused about the input shape and how to reshape my training data dfs into the correct shape. </li>
</ol>
<p>Note: there are 30 observations, each a CSV file of shape (5000, 3), and the filename is the label.</p>
<p>Here is my code and error.</p>
<pre><code>import os
import glob
import pandas as pd
import numpy as np
from keras.preprocessing.text import one_hot
from keras.models import Sequential
from keras.layers import Dense
path = os.getcwd()
file_list = glob.glob(path + '/*.csv')
dfs=np.array([pd.read_csv(fp).values for fp in file_list])
dfs.shape
# (30, 5000, 3)
from sklearn.preprocessing import OneHotEncoder
# define class labels
labels = np.array([1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3])
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = labels.reshape(len(labels), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
len(onehot_encoded)
print(onehot_encoded)
# 30
array([[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.],
[ 0., 0., 1.]])
model = Sequential()
model.add(Dense(24, input_shape=(5000,3), activation='relu'))
model.add(Dense(8))
model.add(Dense(3, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(dfs, onehot_encoded, epochs=50, verbose=2)
</code></pre>
<p>ERROR:
<strong>ValueError: Error when checking model target: expected dense_10 to have 3 dimensions, but got array with shape (30, 3)</strong></p>
|
<p>Your labels array is of shape <code>(30,3)</code>, while your model is expecting it to be <code>(None, 5000, 3)</code>. -- Always check the <code>model.summary()</code> to understand what is going on with shapes.</p>
<p>The Dense layers work only on the last dimension, leaving all other dimensions untouched. Since your input is <code>(None, 5000, 3)</code>, all your Dense layers are transforming only the last dimension and leaving the 5000 untouched. </p>
<p>At some point in the model, you must get rid of the extra dimension so you can match your labels, which are <code>(None, 3)</code>. </p>
<p>There are many possibilities, but the best option depends on how you want the model to interpret data. </p>
<p><strong>Option 1:</strong></p>
<p>If all the 5000 lines are completely independent and of different nature (and the model shouldn't learn any common behavior between these lines), you can add a <code>Flatten()</code> layer at the beginning of the model, so the input will immediately become <code>(None, 15000)</code>. </p>
<pre><code>from keras.layers import Flatten

model.add(Flatten(input_shape=(5000,3))) # first layer in the model
</code></pre>
<p><strong>Option 2:</strong></p>
<p>Now, if the 5000 lines share something in common, and your model should treat them as if they were different samples of equal nature, put the <code>Flatten()</code> layer at the end, just before the last <code>Dense</code>. </p>
<p>Example:</p>
<pre><code>model = Sequential()
model.add(Dense(24, input_shape=(5000,3), activation='relu'))
model.add(Dense(8))
#the flatten layer comes here:
model.add(Flatten())
model.add(Dense(3, activation='sigmoid'))
</code></pre>
<p><strong>Option 3:</strong></p>
<p>If these lines are forming a sequence (time series), and you want somehow to learn how this sequence evolves, you're probably going to have better results replacing your <code>Dense</code> layers with <code>LSTM</code> layers. All of them, except for the last one, should use <code>return_sequences = True</code>. </p>
<p>Example:</p>
<pre><code>from keras.layers import LSTM, Activation

model = Sequential()
model.add(LSTM(24, return_sequences=True, input_shape=(5000,3)))
model.add(LSTM(8,return_sequences=True))
#here there are many possibilities as well, one of them being just another LSTM layer without return sequences:
model.add(LSTM(3,return_sequences=False))
model.add(Activation('sigmoid'))
</code></pre>
<p>I used the activation in a separate layer because LSTMs usually work better with their default activation, which is 'tanh'.</p>
|
python|python-3.x|numpy|neural-network|keras
| 2
|
373,695
| 47,097,161
|
Creating filtered Table to remove #N/A in python
|
<p>I want to create a filtered table that removes #N/A values from my table. This is easy in VBA: filtering column 9 to values between -100 and 100 automatically removes #N/A and produces the ideal table. I want to do this with Python instead; any idea how this can be achieved?</p>
<p><a href="https://i.stack.imgur.com/umsRK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/umsRK.png" alt="enter image description here"></a></p>
<p>Current attempt to remove #N/A</p>
<p><a href="https://i.stack.imgur.com/dy3LX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dy3LX.png" alt="enter image description here"></a></p>
<p>I have tried the following:</p>
<pre><code>ws = ('C:/BAAC.xlsx')
ws.auto_filter.add_filter_column('AA:AI', '501', keep_default_na=False)
wb.save('C:/EEEA.xlsx')
</code></pre>
<p>Though it does not seem to like this.</p>
|
<p>IIUC you can do it this way; pandas reads Excel error values such as <code>#N/A</code> as <code>NaN</code> by default, so dropping rows that contain any <code>NaN</code> removes them:</p>
<pre><code>pd.read_excel(filename).dropna(how='any').to_excel(filename, index=False)
</code></pre>
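<p>If you want the VBA-style numeric filter instead (keep column 9 between -100 and 100), a sketch assuming the ninth column is the one to filter:</p>
<pre><code>import pandas as pd

df = pd.read_excel('C:/BAAC.xlsx')
# #N/A cells arrive as NaN and fail the between() check, so they are dropped too
col = df.columns[8]  # the ninth column
filtered = df[df[col].between(-100, 100)]
filtered.to_excel('C:/EEEA.xlsx', index=False)
</code></pre>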
|
python|excel|pandas|excel-2010|openpyxl
| 0
|
373,696
| 47,514,376
|
How to select most recent value pulled using wb api
|
<p>I currently have this: </p>
<pre><code> industry population
country date
Australia 2017-01-01 NaN NaN
2016-01-01 24.327571 18.898304
2015-01-01 25.396251 18.835267
2014-01-01 27.277007 18.834835
United States 2017-01-01         NaN         NaN
2016-01-01 NaN 19.028231
2015-01-01 20.027274 19.212860
2014-01-01 20.867359 19.379071
</code></pre>
<p>I would like to select the most recent values for each country and column, so that the most recent non-null value is returned:</p>
<pre><code> industry population
Australia 24.327571 18.898304
United States 20.027274 19.028231
</code></pre>
<p>I know that I can group by the country index, which is part of a multilevel index consisting of the country and date, but after that I am not sure how to proceed. </p>
|
<p>Solution is use custom function with <code>bfill</code> and <code>iloc</code> for select first row in group:</p>
<pre><code>df = df.groupby(level=0).apply(lambda x: x.bfill().iloc[0])
print (df)
industry population
country
Australia 24.327571 18.898304
United States 20.027274 19.028231
</code></pre>
<hr>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>first</code></a> for automatically skipping leading <code>NaN</code>s, but in future this behaviour <a href="https://github.com/pandas-dev/pandas/issues/8427" rel="nofollow noreferrer">should be changed - it is a bug</a>:</p>
<pre><code>df = df.groupby(level=0).first()
print (df)
industry population
country
Australia 24.327571 18.898304
United States 20.027274 19.028231
</code></pre>
|
pandas|dataframe|indexing|pandas-groupby
| 1
|
373,697
| 47,339,611
|
Looping for specific values on a df
|
<p>On one side I have a massive <code>df</code>:</p>
<pre><code>df1
A B C ....
2005-11-01 5.3 22 6
2005-11-02 5.4 21 4
2005-11-03 5.2 17 7
....
</code></pre>
<p>On the other hand I have a smaller df with the following structure:</p>
<pre><code>df2
date
A 2005-11-02
B 2005-11-01
C 2005-11-03
</code></pre>
<p>What I want is to add a column in <code>df2</code> called <code>price</code> that, for each row of <code>df2</code>, takes the index value and the date and looks up the corresponding price in <code>df</code>.</p>
<p>The desired output would be something like this:</p>
<pre><code> date price
A 2005-11-02 5.4
B 2005-11-01 22
C 2005-11-03 7
</code></pre>
<p>I tried :</p>
<pre><code>prices=[]
for index,column in df2:
prices.append(df.loc[column['date'][i],index.iloc[i]])
i+=1
return prices
</code></pre>
<p>However, this raises an error:</p>
<pre><code>ValueError: too many values to unpack (expected 2)
</code></pre>
<p>Could anyone tell me how I could index into <code>df</code> based on the index and <code>date</code> column of <code>df2</code>?</p>
|
<p>By using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.lookup.html" rel="nofollow noreferrer"><code>lookup</code></a></p>
<pre><code>df.lookup(df2.date,df2.index)
Out[1003]: array([ 5.4, 22. , 7. ])
</code></pre>
<p>After assigning it back </p>
<pre><code>df2['Value']=df.lookup(df2.date,df2.index)
df2
Out[1005]:
date Value
A 2005-11-02 5.4
B 2005-11-01 22.0
C 2005-11-03 7.0
</code></pre>
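<p>Note that <code>DataFrame.lookup</code> was deprecated in pandas 1.2 and removed in 2.0; on newer versions the same result comes from fancy-indexing the underlying array (a sketch assuming the dates shown above are the index of <code>df</code>):</p>
<pre><code>rows = df.index.get_indexer(df2['date'])
cols = df.columns.get_indexer(df2.index)
df2['Value'] = df.to_numpy()[rows, cols]
</code></pre>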
|
python|pandas
| 2
|
373,698
| 47,480,137
|
merging multiple data sets with similar columns names
|
<p>I have multiple data sets representing different economic indicators.
Every data set has the same 5 columns.
The column names are [Date Time, Actual, Consensus, Previous, Revised].
I want to merge these data sets into a single one to prepare it for future work.
I tried this:</p>
<pre><code>import pandas as pd
df1 = pd.read_csv(r'E:\Tutorial\Sentix Investor Confidence - European Monetary Union.csv')
df2 = pd.read_csv(r'E:\Tutorial\Services Sentiment - European Monetary Union.csv')
df3 = pd.read_csv(r'E:\Tutorial\ZEW Survey - Economic Sentiment - European Monetary Union.csv')
frames = [df1, df2, df3]
result = pd.concat(frames,join='inner')
print(result)
</code></pre>
<p>But the result is like this:
<a href="https://i.stack.imgur.com/4W6ci.jpg" rel="nofollow noreferrer">data result</a>
This is absolutely wrong for me because, despite the similar column names, these are entirely different indicators, so I can NOT just mix them together.
What I need is something <a href="https://i.stack.imgur.com/dhYkX.jpg" rel="nofollow noreferrer">like this</a>,
or anything that does a similar job and keeps every indicator's identity.</p>
|
<p>Still using <code>pd.concat</code>, but passing <code>keys</code> and <code>axis=1</code> so each frame keeps its own top-level column label:</p>
<pre><code>pd.concat([df,df,df],keys=['yourkey1','yourkey2','yourkey3'],axis=1)
Out[234]:
yourkey1 yourkey2 yourkey3
C1 C2 C1 C2 C1 C2
0 1 10 1 10 1 10
1 2 20 2 20 2 20
2 3 3 3 3 3 3
3 4 40 4 40 4 40
</code></pre>
<p>Data input </p>
<pre><code>df = pd.DataFrame({'C1': [1,2,3,'4'], 'C2': [10, 20, '3',40]})
</code></pre>
<p>Change <code>'yourkey1'</code> etc. to the indicator names you need. </p>
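<p>Applied to your three files, a minimal sketch (the key labels are assumptions, and I assume the shared date column is named <code>Date Time</code>); aligning on the date index keeps each indicator separate under its own top-level label:</p>
<pre><code>frames = [d.set_index('Date Time') for d in (df1, df2, df3)]
result = pd.concat(frames,
                   keys=['Sentix', 'Services Sentiment', 'ZEW Survey'],
                   axis=1)
</code></pre>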
|
pandas|dataset
| 1
|
373,699
| 47,141,359
|
How to calculate factorial in tensorflow?
|
<p>I am new to tensorflow and I am trying to find a function that calculates n!.
I saw that one can use the gamma function, which was possible with Theano but did not work in TensorFlow:</p>
<pre class="lang-py prettyprint-override"><code>factorial = theano.tensor.gamma(v)
</code></pre>
<p>I am using a for loop to multiply the numbers from n down to 1, but I assume there is an easier and faster way. I saw functions related to the gamma distribution, but couldn't figure out how to calculate the factorial with them. I would appreciate a pointer to some documentation. </p>
<p>Here is the way I do it now:</p>
<pre><code>import tensorflow as tf
factorial = tf.Variable(1, "factorial")
recursion = tf.Variable(1, "recursion")
# calculate factorial
mult = tf.multiply(recursion, factorial)
assign_fact = tf.assign(factorial, mult)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(2,10):
counter = tf.assign(recursion, tf.constant(i))
sess.run(counter)
sess.run(assign_fact)
print(i,"factorial is", sess.run(factorial))
sess.close()
</code></pre>
<p>Output is </p>
<pre><code>2 factorial is 2
3 factorial is 6
4 factorial is 24
5 factorial is 120
6 factorial is 720
7 factorial is 5040
8 factorial is 40320
9 factorial is 362880
</code></pre>
|
<p>Try this: <code>tf.exp(tf.lgamma(x + 1))</code>.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/lgamma" rel="noreferrer"><code>tf.lgamma</code></a> computes the log of the absolute value of Gamma(x) element-wise, so exponentiating it gives you the raw Gamma(x) value; since n! = Gamma(n + 1), the argument is <code>x + 1</code>:</p>
<pre><code>>>> sess.run(tf.exp(tf.lgamma(5.0)))
24.0
</code></pre>
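<p>On TensorFlow 2 the same op lives under <code>tf.math</code> and runs eagerly; note the result is a float, so it is only exact for small <code>n</code>:</p>
<pre><code>import tensorflow as tf

def factorial(n):
    # n! = Gamma(n + 1); lgamma returns log|Gamma(x)|, so exponentiate it
    return tf.exp(tf.math.lgamma(tf.cast(n, tf.float32) + 1.0))

print(factorial(5).numpy())  # ~120.0 (float rounding applies)
</code></pre>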
|
python|tensorflow|factorial
| 7
|