Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, lengths 15 to 150), question (string, lengths 37 to 64.2k), answer (string, lengths 37 to 44.1k), tags (string, lengths 5 to 106), score (int64, -10 to 5.87k)
1,300
68,811,534
Several problems when packaging with pyinstaller
<p>I was packaging my python program with PyInstaller, and several problems occurred. Here's my code below:</p> <pre><code>#!/usr/bin/python # -*- coding: UTF-8 -*- from spleeter import separator import tkinter as TKT from tkinter import ttk from tkinter import messagebox import tensorflow window = TKT.Tk() screen_width,screen_height = window.maxsize() window.title(&quot;Spleeter GUI Version&quot;) w = int((screen_width-700)/2) h = int((screen_height-400)/2) window.geometry(f'700x400+{w}+{h}') lbl = TKT.Label(window, text=&quot;File Path:&quot;) lbl.grid(column=0, row=0) txt = TKT.Entry(window, width=10) txt.grid(column=1, row=0) lbl2 = TKT.Label(window, text=&quot;Stems:&quot;) lbl2.grid(column=0, row=1) combo = ttk.Combobox(window) combo['values'] = (2,4,5) combo.current(0) combo.grid(column=1, row=1) def Separation(): File_name=txt.get(); stems='spleeter:'+combo.get()+'stems' sep = separator.Separator(stems) messagebox.showinfo(&quot;Notification&quot;, &quot;Separation working!&quot;) sep.separate_to_file(File_name, 'out') messagebox.showinfo(&quot;Notification&quot;, &quot;Separation Finished!&quot;) def clicked(): Separation() btn = TKT.Button(window, text=&quot;Separate&quot;, command=clicked) btn.grid(column=2, row=0) def main(): window.mainloop() if __name__=='__main__': main() </code></pre> <p>And when I use PyInstaller to package it, some problems shown in terminal:</p> <pre><code>2015 INFO: PyInstaller: 4.5.1 2016 INFO: Python: 3.8.8 (conda) 2021 INFO: Platform: Windows-10-10.0.19041-SP0 2022 INFO: wrote D:\File\Code\python\Spleeter\helloworld.spec 2081 INFO: UPX is available. 2083 INFO: Extending PYTHONPATH with paths ['D:\\File\\Code\\python\\Spleeter', 'D:\\File\\Code\\python\\Spleeter'] 4046 INFO: checking Analysis 4046 INFO: Building Analysis because Analysis-00.toc is non existent 4046 INFO: Initializing module dependency graph... 4059 INFO: Caching module graph hooks... 4083 INFO: Analyzing base_library.zip ... 13438 INFO: Processing pre-find module path hook distutils from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'. 13439 INFO: distutils: retargeting to non-venv dir 'c:\\programdata\\anaconda3\\lib' 21633 INFO: Caching module dependency graph... 22011 INFO: running Analysis Analysis-00.toc 22032 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable required by c:\programdata\anaconda3\python.exe 22460 INFO: Analyzing D:\File\Code\python\Spleeter\helloworld.py 29316 INFO: Processing pre-find module path hook site from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-site.py'. 29317 INFO: site: retargeting to fake-dir 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\fake-modules' Aborted by user request. PS D:\File\Code\python\Spleeter&gt; ^C PS D:\File\Code\python\Spleeter&gt; ^C PS D:\File\Code\python\Spleeter&gt; pyinstaller -D helloworld.py 2023 INFO: PyInstaller: 4.5.1 2023 INFO: Python: 3.8.8 (conda) 2024 INFO: Platform: Windows-10-10.0.19041-SP0 2024 INFO: wrote D:\File\Code\python\Spleeter\helloworld.spec 2082 INFO: UPX is available. 2084 INFO: Extending PYTHONPATH with paths ['D:\\File\\Code\\python\\Spleeter', 'D:\\File\\Code\\python\\Spleeter'] 3988 INFO: checking Analysis 3989 INFO: Building Analysis because Analysis-00.toc is non existent 3989 INFO: Initializing module dependency graph... 4004 INFO: Caching module graph hooks... 4029 INFO: Analyzing base_library.zip ... 
13432 INFO: Processing pre-find module path hook distutils from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'. 13434 INFO: distutils: retargeting to non-venv dir 'c:\\programdata\\anaconda3\\lib' 21247 INFO: Caching module dependency graph... 21598 INFO: running Analysis Analysis-00.toc 21619 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable required by c:\programdata\anaconda3\python.exe 22022 INFO: Analyzing D:\File\Code\python\Spleeter\helloworld.py 28474 INFO: Processing pre-find module path hook site from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-site.py'. 28475 INFO: site: retargeting to fake-dir 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\fake-modules' 67007 INFO: Processing pre-safe import module hook urllib3.packages.six.moves from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-urllib3.packages.six.moves.py'. 90273 INFO: Processing pre-safe import module hook six.moves from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-six.moves.py'. 119884 INFO: Processing pre-safe import module hook win32com from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\pre_safe_import_module\\hook-win32com.py'. 525201 INFO: Processing module hooks... 525201 INFO: Loading module hook 'hook-anyio.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 525957 INFO: Loading module hook 'hook-appdirs.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 525974 INFO: Loading module hook 'hook-argon2.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 525975 INFO: Loading module hook 'hook-bcrypt.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 525976 INFO: Loading module hook 'hook-bokeh.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 532986 INFO: Loading module hook 'hook-certifi.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 533000 INFO: Loading module hook 'hook-cryptography.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 533554 INFO: Loading module hook 'hook-dask.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 533744 INFO: Loading module hook 'hook-docutils.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 539214 INFO: Loading module hook 'hook-h5py.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 539217 INFO: Loading module hook 'hook-IPython.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 540525 INFO: Loading module hook 'hook-jedi.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 544838 INFO: Loading module hook 'hook-jinja2.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 
544857 INFO: Loading module hook 'hook-jsonschema.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 544927 INFO: Loading module hook 'hook-llvmlite.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 544940 INFO: Loading module hook 'hook-lxml.etree.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 544942 INFO: Loading module hook 'hook-lxml.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 545951 INFO: Loading module hook 'hook-nacl.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 546016 INFO: Loading module hook 'hook-nbconvert.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 546390 INFO: Loading module hook 'hook-nbformat.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 546578 INFO: Loading module hook 'hook-notebook.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 554119 INFO: Loading module hook 'hook-numba.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 554158 INFO: Loading module hook 'hook-openpyxl.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 554566 INFO: Loading module hook 'hook-pycparser.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 554567 INFO: Loading module hook 'hook-pytest.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 557724 INFO: Loading module hook 'hook-pythoncom.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 559668 INFO: Loading module hook 'hook-pywintypes.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 561629 INFO: Loading module hook 'hook-regex.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 561630 INFO: Loading module hook 'hook-resampy.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 561651 INFO: Loading module hook 'hook-sklearn.cluster.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 567793 INFO: Loading module hook 'hook-sklearn.linear_model.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 573895 INFO: Loading module hook 'hook-sklearn.metrics.cluster.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 573897 WARNING: Hidden import &quot;sklearn.utils.lgamma&quot; not found! 573900 WARNING: Hidden import &quot;sklearn.utils.weight_vector&quot; not found! 573901 INFO: Loading module hook 'hook-sklearn.neighbors.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 580411 INFO: Loading module hook 'hook-sklearn.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 
582058 INFO: Loading module hook 'hook-sklearn.tree.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 582060 INFO: Loading module hook 'hook-sklearn.utils.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 582061 INFO: Loading module hook 'hook-soundfile.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 582065 INFO: Loading module hook 'hook-tables.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 582068 INFO: Loading module hook 'hook-tensorflow.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 2021-08-17 11:24:18.101964: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2021-08-17 11:24:18.102134: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 619385 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.premade&quot; not found! 619386 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.layers.experimental&quot; not found! 619387 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.datasets.fashion_mnist&quot; not found! 619391 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.applications.inception_v3&quot; not found! 619392 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.models&quot; not found! 619746 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.constraints&quot; not found! 621119 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.applications.vgg19&quot; not found! 621120 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.premade&quot; not found! 621120 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.applications.resnet_v2&quot; not found! 621121 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.utils&quot; not found! 621125 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.wrappers&quot; not found! 621126 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.applications&quot; not found! 621128 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.estimator&quot; not found! 621133 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.preprocessing.text&quot; not found! 621515 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.metrics&quot; not found! 621516 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.estimator.experimental&quot; not found! 621518 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.datasets.imdb&quot; not found! 621520 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.layers&quot; not found! 621530 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.datasets.imdb&quot; not found! 621531 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.datasets.boston_housing&quot; not found! 621532 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras&quot; not found! 
621532 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.keras.applications.mobilenet_v2&quot; not found! 621533 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.wrappers.scikit_learn&quot; not found! 621536 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.callbacks&quot; not found! 621538 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.layers.experimental&quot; not found! 621541 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.applications.efficientnet&quot; not found! 622273 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.applications.efficientnet&quot; not found! 622274 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.optimizers.schedules&quot; not found! 623359 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.estimator.export&quot; not found! 623362 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.estimator.inputs&quot; not found! 623363 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras&quot; not found! 623366 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.layers.experimental.preprocessing&quot; not found! 623366 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.keras.applications.resnet50&quot; not found! 623374 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.keras.preprocessing&quot; not found! 623724 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.estimator.inputs&quot; not found! 623725 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.preprocessing.image&quot; not found! 623727 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.estimator&quot; not found! 623734 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.applications&quot; not found! 623737 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.constraints&quot; not found! 624786 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.applications.inception_resnet_v2&quot; not found! 625129 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.applications.densenet&quot; not found! 625130 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.layers&quot; not found! 625148 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.datasets.imdb&quot; not found! 625150 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.initializers&quot; not found! 625157 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.preprocessing&quot; not found! 625158 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.estimator.experimental&quot; not found! 625159 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.estimator&quot; not found! 625159 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.mixed_precision&quot; not found! 625171 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.losses&quot; not found! 625174 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.keras.constraints&quot; not found! 625175 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.constraints&quot; not found! 625186 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.datasets.imdb&quot; not found! 625187 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.applications.efficientnet&quot; not found! 
625188 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.estimator.export&quot; not found! 626410 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.keras.applications.nasnet&quot; not found! 626412 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.estimator&quot; not found! 626414 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.v1&quot; not found! 626418 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v1.keras.applications.vgg19&quot; not found! 626419 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.datasets.mnist&quot; not found! 626788 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v2.keras.layers.experimental&quot; not found! 626800 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.regularizers&quot; not found! 640280 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.keras.applications.densenet&quot; not found! 640282 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.estimator.experimental&quot; not found! 640288 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v1.compat.v2.keras.applications.resnet&quot; not found! 640289 WARNING: Hidden import &quot;tensorflow._api.v2.compat.v2.compat.v1.keras.optimizers&quot; not found! 640359 INFO: Loading module hook 'hook-win32com.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... c:\programdata\anaconda3\lib\site-packages\win32com\client\makepy.py:369: SyntaxWarning: &quot;is not&quot; with a literal. Did you mean &quot;!=&quot;? if path is not '' and not os.path.exists(path): 642590 INFO: Loading module hook 'hook-zmq.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'... 646400 INFO: Loading module hook 'hook-babel.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 647110 INFO: Loading module hook 'hook-difflib.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 647128 INFO: Loading module hook 'hook-distutils.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 647130 INFO: Loading module hook 'hook-distutils.util.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 647153 INFO: Loading module hook 'hook-encodings.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 647325 INFO: Loading module hook 'hook-gevent.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 648478 INFO: Determining a mapping of distributions to packages... 900281 WARNING: Unable to find package for requirement zope.interface from package gevent. 900281 WARNING: Unable to find package for requirement zope.event from package gevent. 900282 INFO: Packages required by gevent: ['cffi', 'setuptools', 'greenlet'] 902659 INFO: Loading module hook 'hook-heapq.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 902678 INFO: Loading module hook 'hook-importlib_metadata.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 902683 INFO: Loading module hook 'hook-lib2to3.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 902818 INFO: Loading module hook 'hook-matplotlib.backends.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 
903953 INFO: Matplotlib backend &quot;GTK3Agg&quot;: ignored backend Gtk3Agg requires cairo 904498 INFO: Matplotlib backend &quot;GTK3Cairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 905044 INFO: Matplotlib backend &quot;MacOSX&quot;: ignored cannot import name '_macosx' from 'matplotlib.backends' (c:\programdata\anaconda3\lib\site-packages\matplotlib\backends\__init__.py) 906697 INFO: Matplotlib backend &quot;nbAgg&quot;: added &lt;string&gt;:12: MatplotlibDeprecationWarning: The matplotlib.backends.backend_qt4agg backend was deprecated in Matplotlib 3.3 and will be removed two minor releases later. 907461 INFO: Matplotlib backend &quot;Qt4Agg&quot;: added 908021 INFO: Matplotlib backend &quot;Qt4Cairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 908794 INFO: Matplotlib backend &quot;Qt5Agg&quot;: added 909366 INFO: Matplotlib backend &quot;Qt5Cairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 910260 INFO: Matplotlib backend &quot;TkAgg&quot;: added 911135 INFO: Matplotlib backend &quot;TkCairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 912086 INFO: Matplotlib backend &quot;WebAgg&quot;: added 912989 INFO: Matplotlib backend &quot;WX&quot;: added 913909 INFO: Matplotlib backend &quot;WXAgg&quot;: added 914522 INFO: Matplotlib backend &quot;WXCairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 915194 INFO: Matplotlib backend &quot;agg&quot;: added 915746 INFO: Matplotlib backend &quot;cairo&quot;: ignored cairo backend requires that pycairo&gt;=1.11.0 or cairocffiis installed 916593 INFO: Matplotlib backend &quot;pdf&quot;: added 917436 INFO: Matplotlib backend &quot;pgf&quot;: added 918114 INFO: Matplotlib backend &quot;ps&quot;: added 918807 INFO: Matplotlib backend &quot;svg&quot;: added 919640 INFO: Matplotlib backend &quot;template&quot;: added 920548 INFO: Loading module hook 'hook-matplotlib.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 921148 INFO: Loading module hook 'hook-multiprocessing.util.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 921167 INFO: Loading module hook 'hook-numpy.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 921371 INFO: Import to be excluded not found: 'f2py' 921405 INFO: Loading module hook 'hook-numpy._pytesttester.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 921422 INFO: Loading module hook 'hook-packaging.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 921424 INFO: Loading module hook 'hook-pandas.io.formats.style.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 922156 INFO: Loading module hook 'hook-pandas.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 923169 INFO: Loading module hook 'hook-pickle.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 923187 INFO: Loading module hook 'hook-PIL.Image.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 924017 INFO: Loading module hook 'hook-PIL.ImageFilter.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 924036 INFO: Loading module hook 'hook-PIL.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 
924088 INFO: Loading module hook 'hook-PIL.SpiderImagePlugin.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 924106 INFO: Loading module hook 'hook-pkg_resources.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 927054 WARNING: Hidden import &quot;pkg_resources.py2_warn&quot; not found! 927055 WARNING: Hidden import &quot;pkg_resources.markers&quot; not found! 927073 INFO: Loading module hook 'hook-pygments.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... 929382 INFO: Loading module hook 'hook-PyQt5.py' from 'c:\\programdata\\anaconda3\\lib\\site-packages\\PyInstaller\\hooks'... Traceback (most recent call last): File &quot;c:\programdata\anaconda3\lib\runpy.py&quot;, line 194, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;c:\programdata\anaconda3\lib\runpy.py&quot;, line 87, in _run_code exec(code, run_globals) File &quot;C:\ProgramData\Anaconda3\Scripts\pyinstaller.exe\__main__.py&quot;, line 7, in &lt;module&gt; File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\__main__.py&quot;, line 126, in run run_build(pyi_config, spec_file, **vars(args)) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\__main__.py&quot;, line 65, in run_build PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\building\build_main.py&quot;, line 815, in main build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build')) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\building\build_main.py&quot;, line 762, in build exec(code, spec_namespace) File &quot;D:\File\Code\python\Spleeter\helloworld.spec&quot;, line 7, in &lt;module&gt; a = Analysis(['helloworld.py'], File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\building\build_main.py&quot;, line 294, in __init__ self.__postinit__() File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\building\datastruct.py&quot;, line 159, in __postinit__ self.assemble() File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\building\build_main.py&quot;, line 473, in assemble self.graph.process_post_graph_hooks(self) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\depend\analysis.py&quot;, line 373, in process_post_graph_hooks module_hook.post_graph(analysis) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\depend\imphook.py&quot;, line 451, in post_graph self._load_hook_module() File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\depend\imphook.py&quot;, line 408, in _load_hook_module self._hook_module = importlib_load_source( File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\compat.py&quot;, line 632, in importlib_load_source return mod_loader.load_module() File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 462, in _check_name_wrapper File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 962, in load_module File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 787, in load_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 265, in _load_module_shim File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 702, in _load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 671, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 783, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 
219, in _call_with_frames_removed File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\hooks\hook-PyQt5.py&quot;, line 11, in &lt;module&gt; from PyInstaller.utils.hooks.qt import pyqt5_library_info, \ File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\utils\hooks\qt.py&quot;, line 162, in &lt;module&gt; pyqt5_library_info = QtLibraryInfo('PyQt5') File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\utils\hooks\qt.py&quot;, line 54, in __init__ if hooks.is_module_satisfies(&quot;PyQt5 &gt;= 5.15.4&quot;): File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\utils\hooks\__init__.py&quot;, line 502, in is_module_satisfies version = get_module_attribute(module_name, version_attr) File &quot;c:\programdata\anaconda3\lib\site-packages\PyInstaller\utils\hooks\__init__.py&quot;, line 352, in get_module_attribute raise AttributeError( AttributeError: Module 'PyQt5' has no attribute '__version__' </code></pre> <p>(Because of the limit of the words, I deleted some similar 'Hidden Import' errors above.)</p> <p>There are two main problems:</p> <ol> <li>the hidden import problem</li> <li>the AttributeError. and other problems that I didn't notice</li> </ol> <p>I'm new to python and PyInstaller. The problems may be silly. Thank you very much for helping me.</p>
<p>You need to exclude the PyQt5 library when running PyInstaller. Add the option below to your pyinstaller command:</p> <pre><code>--exclude-module &quot;PyQt5&quot; </code></pre> <p>You can also use the auto-py-to-exe application to create an exe. This application uses the pyinstaller library and is really easy to handle.</p>
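<p>For reference, a minimal sketch of the full command, reusing the entry script name from the question (adjust the path and any other options to your setup):</p> <pre><code>pyinstaller -D --exclude-module PyQt5 helloworld.py
</code></pre>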
python|python-3.x|tensorflow|pyinstaller
1
1,301
68,671,394
Add New Values to Dataframe according to predictions
<p>I have the following dataframe called &quot;lastDays&quot;:</p> <pre><code> Units Date 2021-06-01 00:00:00 3 2021-06-01 01:00:00 4 2021-06-01 02:00:00 1 2021-06-01 03:00:00 2 2021-06-01 04:00:00 8 2021-06-01 05:00:00 9 2021-06-01 06:00:00 3 2021-06-01 07:00:00 5 2021-06-01 08:00:00 7 2021-06-01 09:00:00 8 </code></pre> <p>I want to integrate the predictions of my model into my dataframe, so I have the following function to add the new values from the last row + 1 of the dataframe:</p> <pre><code>def addNewValue(lastDays,nValues): i = 0 for i in range(len(lastDays)): lastDays[i] = lastDays[i+1] lastDays[lastDays.shift(-1)]=nValues return lastDays </code></pre> <p>I want to add the results of my 30-day forecast to the original dataframe &quot;lastDays&quot;:</p> <pre><code>steps = 24*20 results=[] for i in range(7): pred = forecaster.predict(steps=steps) results.append(pred) print(results) lastDays=addNewValue(lastDays, results) </code></pre> <p>The error it gives me is <code>KeyError: 1 </code> when applying the function lastDays = addNewValue(lastDays, predictions), in the function line <code>lastDays [i] = lastDays [i + 1]</code>.</p> <p>I may have a basic conceptual error in applying the i + 1 increment, or I should consider reshaping, but I need your support to know which option would be the best way to add the new values to my original dataframe.</p> <p>Thanks in advance.</p>
<p>I was able to find the solution, which was the following:</p> <p>Apply <code>pd.date_range</code> to create a new column of dates in the new dataframe &quot;nextMonth&quot; that holds the forecasts. The call <code>nextMonth['Date'] = pd.date_range(start=lastDays.index[-1], periods=len(lastDays), freq=&quot;H&quot;)</code> starts at the last date of the original dataframe &quot;lastDays&quot; and appends the subsequent hours and days based on the length of the original dataframe, so the forecast results are aligned from day + 1 of the last date of the original dataframe &quot;lastDays&quot;.</p> <p>Finally, I applied append to add the dataframe built with the forecasts &quot;nextMonth&quot; to the original dataframe &quot;lastDays&quot;, i.e. lastDays.append(nextMonth), obtaining the expected results.</p> <p>I don't know if it is the most optimal solution, but it is fully functional for me and meets the expectations of my project.</p>
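<p>A minimal sketch of that approach, assuming the forecasts have already been flattened into one value per hour in a list called <code>results</code> (variable names follow the question and are otherwise illustrative):</p> <pre><code>import pandas as pd

nextMonth = pd.DataFrame({'Units': results})
# start one hour after the last known timestamp so the new index does not overlap lastDays
nextMonth['Date'] = pd.date_range(start=lastDays.index[-1] + pd.Timedelta(hours=1),
                                  periods=len(nextMonth), freq='H')
nextMonth = nextMonth.set_index('Date')

lastDays = lastDays.append(nextMonth)  # or: pd.concat([lastDays, nextMonth])
</code></pre>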
python|pandas|dataframe|datetime
0
1,302
68,494,337
Split rows of pandas textual dataframe
<p>I have a pandas textual dataframe which looks like this:</p> <pre><code>+-----------------------------------+ | text | +-----------------------------------+ | A very long sentence | +-----------------------------------+ | Another very long sentence | +-----------------------------------+ </code></pre> <p>But I would like to split each row into pieces of no more than 512 words per row:</p> <pre><code>+-----------------------------------+ | text | +-----------------------------------+ | A piece of very long sentence | +-----------------------------------+ | One more piece of long sentence | +-----------------------------------+ </code></pre> <p>I can convert my data into a python list, but <code>str.split()</code> works with separators.</p> <p>Is there a pythonic way to do this?</p>
<p>You can try <code>str.split()</code> + <code>str.join()</code> if the words are separated by <code>' '</code>:</p> <pre><code>df['text']=df['text'].str.split().str[:512].str.join(' ') </code></pre> <p>Now, for checking the length (after running the above code) you can do:</p> <pre><code>df['text_len']=df['text'].str.split().str.len() </code></pre>
python|pandas|dataframe|text
0
1,303
68,641,777
Excel file only opens with pandas in Python after being resaved
<p>I have some Excel files with measurements taken from National Instruments' LabView. I'm trying to use Pandas to be able to edit the data, but when using read_excel on those Excel files I get the error <code>TypeError: expected &lt;class 'openpyxl.styles.fills.Fill'&gt;</code>.</p> <p>The strange part is that if I open the file by hand and click save, without changing anything, read_excel is suddenly able to open the files. The number of files is unfortunately too large for me to be able to resave them by hand. Does anyone have any idea how to solve this problem? I've searched for this problem a lot and found nothing yet. Thanks!</p> <p>Edit:</p> <p>The code I'm using is the following.</p> <pre><code>import pandas as pd import os fname = 'C' # All the files I want to open start with C fextension = '.xlsx' directory = 'D:/TEST_Raw' df_list = [] for filename in os.listdir(directory): if fname in filename and filename.endswith(fextension): df1 = pd.read_excel(directory + '/' + filename, header = 0, index_col = None, engine = 'openpyxl') </code></pre> <p>An example file is in <a href="https://www.mediafire.com/file/bw1gv80g91gh0jh/CA2_coil_100Hz_05a1V.xlsx/file" rel="nofollow noreferrer">this link</a>. If I use this file the program will not run and will give the error, but if I open and save the Excel file it will run.</p>
<p>Seems like the source file is corrupt to the point that a standard method of opening the file is not possible (e.g., <code>pd.read_excel()</code> or <code>pd.ExcelFile()</code>). If there are too many files to open and save manually, try a non-standard way of opening the file.</p> <p>One idea is using the code from: <a href="https://blog.adimian.com/2018/09/04/fast-xlsx-parsing-with-python/" rel="nofollow noreferrer">https://blog.adimian.com/2018/09/04/fast-xlsx-parsing-with-python/</a> (there may be better ways out there).</p> <p>I tested the sample file using the code from blog.adimian.com (see the <strong>Full Code</strong> section right at the bottom of the page) and it seems to work. However, the column names are missing and need to be set manually. If the column names are all the same you could loop this over all the files.</p> <p><strong>Example output:</strong></p> <p><a href="https://i.stack.imgur.com/qjv7V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qjv7V.png" alt="enter image description here" /></a></p>
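<p>Another route, not from the answer above but offered as a sketch: since a manual open-and-save in Excel repairs the files, that step can be automated with COM (Windows only, Excel must be installed). The path and filename filter follow the question and are otherwise illustrative:</p> <pre><code>import os
import win32com.client

directory = r'D:\TEST_Raw'
excel = win32com.client.Dispatch('Excel.Application')
excel.Visible = False
excel.DisplayAlerts = False

for filename in os.listdir(directory):
    if filename.startswith('C') and filename.endswith('.xlsx'):
        wb = excel.Workbooks.Open(os.path.join(directory, filename))
        wb.Save()   # re-save in place, same as clicking save by hand
        wb.Close()

excel.Quit()
</code></pre> <p>After this pass, <code>pd.read_excel()</code> should be able to open the files as usual.</p>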
python|excel|pandas
1
1,304
68,449,166
pandas rolling apply return np.nan
<p>I would like to apply a custom skewness function to rolling apply, but got np.nan instead.</p> <pre><code>import pandas as pd import numpy as np def _get_skewness(col, q=(0.05, 0.95)): if q[0] &gt; 0: quantiles = col.quantile(q) col.loc[(col&lt;quantiles[q[0]]) | (col &gt; quantiles[q[1]])] = np.nan skew = col.skew(axis=0, skipna=True) return skew df = pd.DataFrame(np.arange(40).reshape(-1, 2)) df_skew = df.rolling(20, 10).apply(_get_skewness) print(df_skew) </code></pre> <p>I got the following result. I understand the first 10 rows are due to the rolling window min_period=10. Just don't get why the last few rows return np.nan as well.</p> <pre><code> 0 1 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 0.0 0.0 10 0.0 0.0 11 0.0 0.0 12 0.0 0.0 13 0.0 0.0 14 0.0 0.0 15 NaN NaN 16 NaN NaN 17 NaN NaN 18 NaN NaN 19 NaN NaN </code></pre>
<p>By using <code>loc</code> on <code>col</code> the actual DataFrame is being modified in each iteration. The introduction of <code>NaN</code> in the column eventually means the window becomes all <code>NaN</code>. The easiest fix (without understanding more about how the skewness is to be applied) would be to create a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.copy.html" rel="nofollow noreferrer">copy</a> of <code>col</code> to work on:</p> <pre><code>def _get_skewness(col, q=(0.05, 0.95)): copy_col = col.copy() # Make a copy so as to not overwrite future values. if q[0] &gt; 0: quantiles = copy_col.quantile(q) copy_col.loc[ (copy_col &lt; quantiles[q[0]]) | (copy_col &gt; quantiles[q[1]]) ] = np.nan skew = copy_col.skew(axis=0, skipna=True) return skew </code></pre> <pre><code>df = pd.DataFrame(np.arange(40).reshape(-1, 2)) df_skew = df.rolling(20, 10).apply(_get_skewness) </code></pre> <p><code>df_skew</code>:</p> <pre><code> 0 1 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 0.0 0.0 10 0.0 0.0 11 0.0 0.0 12 0.0 0.0 13 0.0 0.0 14 0.0 0.0 15 0.0 0.0 16 0.0 0.0 17 0.0 0.0 18 0.0 0.0 19 0.0 0.0 </code></pre>
python|pandas
1
1,305
68,649,384
Pandas Dataframe remove rows depending on two columns with equal values
<p>Basically I have a dataframe with a lot of columns, but the main ones are ITEM_ID and PRICE.</p> <p>For example:</p> <pre><code>ID ITEM_ID ITEM PRICE 1 1 potato 20 2 1 potato 20 3 1 potato 25 4 2 tomato 50 5 2 tomato 55 </code></pre> <p>And I want to delete the duplicate rows where both ITEM_ID and PRICE are equal, so the output will be this:</p> <pre><code>ID ITEM_ID ITEM PRICE 1 1 potato 20 2 1 potato 25 3 2 tomato 50 4 2 tomato 55 </code></pre> <p>I am computing the average price using</p> <pre><code>df['AVG'] = df.groupby('ITEM_ID')['PRICE'].transform('mean') </code></pre> <p>But I realised that I am counting the duplicate values, so the average is not right.</p> <p>Can anybody help?</p> <p>EDIT:</p> <p>After trying the suggested</p> <pre><code>df.drop_duplicates(subset=['item_id', 'price']) </code></pre> <p>the duplicates are still there; even keep=False doesn't do anything.</p>
<p>Solution to this problem is:</p> <pre><code>df.drop_duplicates(subset=['item_id', 'price'], inplace=True) </code></pre>
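<p>A short follow-up sketch for the average-price part of the question, using the column names shown in the question's example and assuming the duplicates should only be excluded from that calculation:</p> <pre><code>deduped = df.drop_duplicates(subset=['ITEM_ID', 'PRICE'])
df['AVG'] = df['ITEM_ID'].map(deduped.groupby('ITEM_ID')['PRICE'].mean())
</code></pre>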
python|pandas|dataframe|duplicates
2
1,306
36,458,573
Torch/Lua equivalent function to MATLAB or Numpy 'Unique'
<p>In Python one can do the following to get the unique values in a vector/matrix/tensor:</p> <pre><code>import numpy as np a = np.unique([1, 1, 2, 2, 3, 3]) # Now a = array([1, 2, 3]) </code></pre> <p>There is a similar function in MATLAB as well:</p> <pre><code>A = [9 2 9 5]; C = unique(A) %Now C = [2 5 9] </code></pre> <p>Is there an equivalent function in Torch/Lua as well?</p>
<p>Nope, there is no such standard function in stock Lua and/or Torch.</p> <p>Consider using some implementation of a <code>set</code> data structure, rolling your own implementation of <code>unique()</code>, or redesigning your application not to require this kind of functionality.</p> <p>Example 11-liner:</p> <pre><code>function vector_unique(input_table) local unique_elements = {} --tracking down all unique elements local output_table = {} --result table/vector for _, value in ipairs(input_table) do unique_elements[value] = true end for key, _ in pairs(unique_elements) do table.insert(output_table, key) end return output_table end </code></pre> <p>Related questions:</p> <ul> <li><a href="https://stackoverflow.com/questions/6618843/lua-smartest-way-to-add-to-table-only-if-not-already-in-table-or-remove-duplic">Lua: Smartest way to add to table only if not already in table, or remove duplicates</a></li> <li><a href="https://stackoverflow.com/questions/20066835/lua-remove-duplicate-elements">Lua : remove duplicate elements</a></li> </ul>
python|matlab|numpy|lua|torch
1
1,307
53,216,162
How to train Tensorflow Object Detection images that do not contain objects?
<p>I am training an object detection network using Tensorflow's object detection,</p> <p><a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="noreferrer">https://github.com/tensorflow/models/tree/master/research/object_detection</a></p> <p>I can successfully train a network based on my own images and labels. However, I have a large dataset of images that do not contain any of my labeled objects, and I want to be able to train the network to not detect anything in these images.</p> <p>From what I understand with Tensorflow object detection, I need to give it a set of images and corresponding XML files that box and label the objects in the image. The scripts convert the XML to CSV and then to another format for the training, and do not allow XML files that have no objects.</p> <p>How can I provide images and XML files that have no objects?</p> <p>Or, how does the network learn what is not an object?</p> <p>For example, if you want to detect &quot;hot dogs&quot; you can train it with a set of images with hot dogs. But how do you train it on what is not a hot dog?</p>
<p>An Object Detection CNN can learn what is not an object, simply by letting it see examples of images without any labels.</p> <p>There are two main architecture types: </p> <ol> <li>two-stage, with a first stage for object/region proposals (RPN), and a second for classification and bounding box fine-tuning; </li> <li>one-stage, which directly classifies and regresses BBs based on the feature vector corresponding to a certain cell in the feature map.</li> </ol> <p>In any case, there's a part which is responsible for deciding what is an object and what's not. In the RPN you have an &quot;objectness&quot; score, and in one-stage detectors there's the classification confidence, where you usually have a background class (i.e. everything which is not one of the supported classes).</p> <p>So in both cases, if a specific example in an image doesn't have any supported class, you teach the CNN to decrease the objectness score or increase the background confidence correspondingly.</p>
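<p>In practice, for the TensorFlow Object Detection API this usually comes down to writing the negative images into the TFRecord with empty box and label lists. The sketch below is illustrative rather than taken from the repo's scripts, so check the exact feature keys against the create_tf_record example you are adapting:</p> <pre><code>import tensorflow as tf

def negative_tf_example(encoded_jpg, filename, height, width):
    # No boxes and no classes: the whole image is treated as background.
    def _bytes(values): return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))
    def _ints(values): return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
    def _floats(values): return tf.train.Feature(float_list=tf.train.FloatList(value=values))

    feature = {
        'image/height': _ints([height]),
        'image/width': _ints([width]),
        'image/filename': _bytes([filename.encode()]),
        'image/source_id': _bytes([filename.encode()]),
        'image/encoded': _bytes([encoded_jpg]),
        'image/format': _bytes([b'jpeg']),
        'image/object/bbox/xmin': _floats([]),
        'image/object/bbox/xmax': _floats([]),
        'image/object/bbox/ymin': _floats([]),
        'image/object/bbox/ymax': _floats([]),
        'image/object/class/text': _bytes([]),
        'image/object/class/label': _ints([]),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))
</code></pre>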
python|tensorflow|deep-learning|object-detection|object-detection-api
8
1,308
52,939,153
Creating new dataframe based on maximum element
<p>I have two dataframes:</p> <pre><code>cols = ['A','B'] data = [[-1,2],[0,2],[5,1]] data = np.asarray(data) indices = np.arange(0,len(data)) df = pd.DataFrame(data, index=indices, columns=cols) cols = ['A','B'] data2 = [[-13,2],[-1,2],[0,4],[2,1],[5,0]] data2 = np.asarray(data2) indices = np.arange(0,len(data2)) df2 = pd.DataFrame(data2, index=indices, columns=cols) </code></pre> <p>Now I want to create a new dataframe which, for each value of <code>A</code>, has the maximum of <code>B</code> from either dataframe.</p> <p>Therefore, the output would be:</p> <pre><code> A B 0 -13 2 1 -1 2 2 0 4 3 2 1 4 5 1 </code></pre>
<p>Using <code>drop_duplicates</code></p> <pre><code>pd.concat([df2,df]).sort_values('B').drop_duplicates('A',keep='last') Out[80]: A B 3 2 1 2 5 1 0 -13 2 0 -1 2 2 0 4 </code></pre>
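<p>If you also want the row order from the expected output (sorted by <code>A</code>) and a clean index, a small follow-up on the same idea:</p> <pre><code>out = (pd.concat([df2, df])
         .sort_values('B')
         .drop_duplicates('A', keep='last')
         .sort_values('A')
         .reset_index(drop=True))
</code></pre>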
python|pandas
4
1,309
65,495,533
Getting an error when trying to get pandas to read my json file
<p>I'm getting <code>ValueError: Expected object or value</code> when trying to get pandas to read my json file.</p> <p>Here is the code I'm using:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import json dataframe = pd.read_json(r'C:\Users\stans\WFH Project\data.json') </code></pre> <p>This is when I receive the value error. I saved the json data as a text file with the .json extension.</p> <p>Here is a sample:</p> <pre class="lang-json prettyprint-override"><code>{ 'created_at': 'Thu Dec 24 10:09:36 +0000 2020', 'id': 1342049779233284097, 'id_str': '1342049779233284097', 'text': 'RT @ab: S2E13 IntelAI podcast—from #AI beating world chess champs to solving the grand research challenge known as the #protein…', 'truncated': False, 'entities': { 'hashtags': [{ 'text': 'AI', 'indices': [47, 50] } </code></pre> <p>I even tried saving the json file in pwd to see if it was a directory path issue, but I received the same error. Any help with this would be greatly appreciated. Thanks.</p>
<p>Load the file with the json module first, then build the DataFrame from the parsed data:</p> <pre><code>import json import pandas as pd f = open('C:/Users/stans/WFH Project/data.json') data = json.load(f) df = pd.DataFrame(data) f.close() </code></pre>
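<p>A possible refinement, assuming the file really contains nested structures like the sample in the question: <code>pandas.json_normalize</code> (available in recent pandas versions) flattens nested fields such as <code>entities.hashtags</code> into columns:</p> <pre><code>import json
import pandas as pd

with open(r'C:\Users\stans\WFH Project\data.json') as f:
    data = json.load(f)

df = pd.json_normalize(data)
</code></pre>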
python|json|pandas
0
1,310
65,816,766
How do I save a file in f4 format using numpy in python
<p>I have generated a coupling file and saved it like this:</p> <p>np.save('J', Jindep)</p> <p>This saves it in J.npy format. How do I save it in 'f4' format instead?</p>
<p>You can use the built-in open function to create the file yourself. Open it in binary mode and write the array as 32-bit floats ('f4'):</p> <pre><code>import numpy as np numpy_array = np.asarray(Jindep, dtype='f4') f = open('J.f4', 'wb') f.write(numpy_array.tobytes()) f.close() </code></pre>
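<p>An equivalent one-liner, assuming <code>Jindep</code> is already a NumPy array, is <code>ndarray.tofile</code>:</p> <pre><code>Jindep.astype('f4').tofile('J.f4')
</code></pre> <p>Note that both variants write raw binary without any header, so you need to remember the dtype and shape to read the data back (e.g. with <code>np.fromfile('J.f4', dtype='f4')</code>).</p>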
python|numpy
0
1,311
65,797,608
Most efficient way to iterate over large vector?
<p>I have an input ndarray, <code>pointsCount</code>, with shape (4000000, 1). I have another ndarray, <code>clusters</code>, with shape (2,1). I then want to perform the following:</p> <pre><code>distances = np.zeros((pointsCount, n_clusters)) for x in range(len(trainPoints)): for c in range(len(clusters)): distances[x,c] = (trainPoints[x]-clusters[c]).T@(trainPoints[x]-clusters[c]) </code></pre> <p>However, this takes <em>ages</em> to complete. The same is true for the list comprehension <code>distances = np.array([(x-cluster).T@(x-cluster) for x in trainPoints for cluster in clusters]).reshape((4000000, 2))</code>.</p> <p>Any way that I can perform this faster using numpy?</p>
<p>All you need to do is transpose <code>clusters</code>. For example, given initial arrays:</p> <pre><code>&gt;&gt;&gt; pointsCount # I have considered 4 instead of 4 mil array([[2], [4], [7], [6]]) &gt;&gt;&gt; clusters array([[2], [3]]) # Your code: &gt;&gt;&gt; np.array([(x-cluster).T@(x-cluster) for x in pointsCount for cluster in clusters]).reshape((4, 2)) array([[ 0, 1], [ 4, 1], [25, 16], [16, 9]]) # Faster code: &gt;&gt;&gt; (pointsCount - clusters.T)**2 array([[ 0, 1], [ 4, 1], [25, 16], [16, 9]], dtype=int32) </code></pre> <p>You may want to take a look at <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html#broadcasting" rel="nofollow noreferrer"><code>NumPy Broadcasting</code></a></p>
python|numpy
1
1,312
63,585,285
Why is history storing auc and val_auc with incrementing integers (auc_2, auc_4, ...)?
<p>I am beginner with keras and today I bumped into this sort of issue I don't know how to handle. The values for <code>auc</code> and <code>val_auc</code> are being stored in <code>history</code> with the first even integers, like <code>auc</code>, <code>auc_2</code>, <code>auc_4</code>, <code>auc_6</code>... and so on.</p> <p>This is preventing me from managing and studying those values along my Kfold cross validation, as I cannot access <code>history.history['auc']</code> value because there is not always such key <code>'auc'</code>. Here is the code:</p> <pre><code>from tensorflow.keras.models import Sequential # pylint: disable= import-error from tensorflow.keras.layers import Dense # pylint: disable= import-error from tensorflow.keras import Input # pylint: disable= import-error from sklearn.model_selection import StratifiedKFold from keras.utils.vis_utils import plot_model from keras.metrics import AUC, Accuracy # pylint: disable= import-error BATCH_SIZE = 32 EPOCHS = 10 K = 5 N_SAMPLE = 1168 METRICS = ['AUC', 'accuracy'] SAVE_PATH = '../data/exp/final/submodels/' def create_mlp(model_name, keyword, n_sample= N_SAMPLE, batch_size= BATCH_SIZE, epochs= EPOCHS): df = readCSV(n_sample) skf = StratifiedKFold(n_splits = K, random_state = 7, shuffle = True) for train_index, valid_index in skf.split(np.zeros(n_sample), df[['target']]): x_train, y_train, x_valid, y_valid = get_train_valid_dataset(keyword, df, train_index, valid_index) model = get_model(keyword) history = model.fit( x = x_train, y = y_train, validation_data = (x_valid, y_valid), epochs = epochs ) def get_train_valid_dataset(keyword, df, train_index, valid_index): aux = df[[c for c in columns[keyword]]] return aux.iloc[train_index].values, df['target'].iloc[train_index].values, aux.iloc[valid_index].values, df['target'].iloc[valid_index].values def create_callbacks(model_name, save_path, fold_var): checkpoint = ModelCheckpoint( save_path + model_name + '_' +str(fold_var), monitor=CALLBACK_MONITOR, verbose=1, save_best_only= True, save_weights_only= True, mode='max' ) return [checkpoint] </code></pre> <p>In <code>main.py</code> I call <code>create_mlp('model0', 'euler', n_sample=100)</code>, and the log is (only relevant lines):</p> <pre><code>Epoch 9/10 32/80 [===========&gt;..................] - ETA: 0s - loss: 0.6931 - auc: 0.5000 - acc: 0.5625 Epoch 00009: val_auc did not improve from 0.50000 80/80 [==============================] - 0s 1ms/sample - loss: 0.6931 - auc: 0.5000 - acc: 0.5000 - val_loss: 0.6931 - val_auc: 0.5000 - val_acc: 0.5000 Epoch 10/10 32/80 [===========&gt;..................] - ETA: 0s - loss: 0.6932 - auc: 0.5000 - acc: 0.4375 Epoch 00010: val_auc did not improve from 0.50000 80/80 [==============================] - 0s 1ms/sample - loss: 0.6931 - auc: 0.5000 - acc: 0.5000 - val_loss: 0.6931 - val_auc: 0.5000 - val_acc: 0.5000 Train on 80 samples, validate on 20 samples Epoch 1/10 32/80 [===========&gt;..................] - ETA: 0s - loss: 0.7644 - auc_2: 0.3075 - acc: 0.5000WARNING:tensorflow:Can save best model only with val_auc available, skipping. 80/80 [==============================] - 1s 10ms/sample - loss: 0.7246 - auc_2: 0.4563 - acc: 0.5250 - val_loss: 0.6072 - val_auc_2: 0.8250 - val_acc: 0.6500 Epoch 2/10 32/80 [===========&gt;..................] - ETA: 0s - loss: 0.7046 - auc_2: 0.4766 - acc: 0.5000WARNING:tensorflow:Can save best model only with val_auc available, skipping. 
80/80 [==============================] - 0s 1ms/sample - loss: 0.6511 - auc_2: 0.6322 - acc: 0.5625 - val_loss: 0.5899 - val_auc_2: 0.8000 - val_acc: 0.6000 </code></pre> <p>Any help will be appreciated. I am using:</p> <pre><code>keras==2.3.1 tensorflow==1.14.0 </code></pre>
<p>Use tf.keras.backend.clear_session()</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/backend/clear_session" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/backend/clear_session</a></p>
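<p>A sketch of where this would go in the question's cross-validation loop: the incrementing names (<code>auc_2</code>, <code>auc_4</code>, ...) come from Keras keeping a global name counter per session, so resetting the session before each fold keeps the history keys stable. Names below follow the question's code and are otherwise illustrative:</p> <pre><code>import tensorflow as tf

for train_index, valid_index in skf.split(np.zeros(n_sample), df[['target']]):
    tf.keras.backend.clear_session()  # reset Keras' global state (and name counters) per fold
    x_train, y_train, x_valid, y_valid = get_train_valid_dataset(keyword, df, train_index, valid_index)
    model = get_model(keyword)
    history = model.fit(x=x_train, y=y_train,
                        validation_data=(x_valid, y_valid), epochs=epochs)
    print(history.history['auc'], history.history['val_auc'])
</code></pre> <p>Another option is to give the metric an explicit name when you build the model, e.g. <code>AUC(name='auc')</code> from the metrics module you already import, so the key does not depend on the counter at all.</p>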
python|tensorflow|keras|deep-learning
1
1,313
63,398,403
Replace date "/" to be "-" in existing column .csv pandas
<p>I have data in wind.csv in this format:</p> <pre><code>Date,Time,Wind 13/08/2020,12.00z, 13020knot 14/08/2020,12.00z, 14004knot 15/08/2020,12.00z, 10005knot </code></pre> <p>I want to replace the &quot;/&quot; sign with &quot;-&quot; in the Date data.</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv(&quot;F:wind.csv&quot;) Date,Time,Wind 13/08/2020,12.00z, 13020knot 14/08/2020,12.00z, 14004knot 15/08/2020,12.00z, 10005knot df.replace('/', '-', inplace=True) </code></pre> <p>It's not working. How should the python script look with pandas?</p> <p>I want it to be like this:</p> <pre><code>13-08-2020,12.00z, 13020knot 14-08-2020,12.00z, 14004knot 15-08-2020,12.00z, 10005knot </code></pre>
<p>Here is my solution. It gets the result that you want, but I don't know if it fits your needs.</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;Date.csv&quot;) df[&quot;Date&quot;] = df[&quot;Date&quot;].str.replace(&quot;/&quot;,&quot;-&quot;) </code></pre> <p>This one also works but takes more lines of code:</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;Date.csv&quot;) date = [] for d in df[&quot;Date&quot;]: d = d.replace('/', '-') date.append(d) df[&quot;Date&quot;] = date </code></pre>
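<p>Another option, not from the original answer and assuming day-first dates, is to go through <code>to_datetime</code>, which also validates the values while reformatting them:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y').dt.strftime('%d-%m-%Y')
</code></pre>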
python|pandas
0
1,314
63,369,555
Google Cloud Function Build timeout - all requirements have been loaded
<p>I have the following code on my cloud function -</p> <pre><code>import os import numpy as np import requests import torch from torch import nn from torch.nn import functional as F import math from torch.nn import BCEWithLogitsLoss from torch.utils.data import TensorDataset from transformers import AdamW, XLNetTokenizer, XLNetModel, XLNetLMHeadModel, XLNetConfig from keras.preprocessing.sequence import pad_sequences import numpy as np import pandas as pd def polarization(request): MODEL_URL = 'https://polarization.s3-us-west-1.amazonaws.com/classifier_state_dict.pt' print(MODEL_URL) r = requests.get(MODEL_URL) print(r) #Cloud function vm is a read only s/m. The only writable place is the tmp folder file = open(&quot;/tmp/model.pth&quot;, &quot;wb&quot;) file.write(r.content) file.close() print(&quot;Wrote to the tmp file&quot;) # State dict requires model object model = XLNetForPolarizationClassification(num_labels=1) model.load_state_dict(torch.load('/tmp/model.pth')) # Tokenize the embedded article embeddedArticle = request[&quot;embeddedArticle&quot;] tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased', do_lower_case=True) textIds = tokenize_inputs(embeddedArticle, tokenizer, num_embeddings=250) # Generate the attention masks and padding masks = create_attn_masks(textIds) article = pd.DataFrame() article[&quot;features&quot;] = textIds.tolist() article[&quot;masks&quot;] = masks # Call generate_predictions pred = generate_predictions(model, article, 1) return pred ## Extracting parameter and returning prediction def generate_predictions(model, df, num_labels, device=&quot;cpu&quot;): model.eval() X = df_subset[&quot;features&quot;].values.tolist() masks = df_subset[&quot;masks&quot;].values.tolist() X = torch.tensor(X) masks = torch.tensor(masks, dtype=torch.long) with torch.no_grad(): # Run the model with the input_ids and attention_masks separately logits = model(input_ids=X, attention_mask=masks) # Get the logits for each class logits = logits.sigmoid().detach().cpu().numpy() return round(logits) class XLNetForPolarizationClassification(torch.nn.Module): def __init__(self, num_labels=2): super(XLNetForPolarizationClassification, self).__init__() self.num_labels = num_labels self.xlnet = XLNetModel.from_pretrained('xlnet-base-cased') self.classifier = torch.nn.Linear(768, 1) torch.nn.init.xavier_normal_(self.classifier.weight) def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None): last_hidden_state = self.xlnet(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids) mean_last_hidden_state = self.pool_hidden_state(last_hidden_state) logits = self.classifier(mean_last_hidden_state) # If you know the labels, compute the loss otherwise if labels is not None: loss_fct = BCEWithLogitsLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels)) return loss else: return logits def pool_hidden_state(self, last_hidden_state): last_hidden_state = last_hidden_state[0] mean_last_hidden_state = torch.mean(last_hidden_state, 1) return mean_last_hidden_state def create_attn_masks(input_ids): &quot;&quot;&quot; This will set a 1 or 0 based on if it is a mask or an actual input it for the word &quot;&quot;&quot; attention_masks = [] for seq in input_ids: seq_mask = [float(i&gt;0) for i in seq] attention_masks.append(seq_mask) return attention_masks def tokenize_inputs(text, tokenizer, num_embeddings=250): # tokenize the text, then truncate sequence to the desired length minus 2 for # the 2 special 
characters tokenized_texts = list(map(lambda t: tokenizer.tokenize(t)[:num_embeddings-2], text)) # convert tokenized text into numeric ids for the appropriate LM input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts] # append special token &quot;&lt;s&gt;&quot; and &lt;/s&gt; to end of sentence input_ids = [tokenizer.build_inputs_with_special_tokens(x) for x in input_ids] # pad sequences input_ids = pad_sequences(input_ids, maxlen=num_embeddings, dtype=&quot;long&quot;, truncating=&quot;post&quot;, padding=&quot;post&quot;) return input_ids </code></pre> <p>and requirements.txt</p> <pre><code>certifi==2020.6.20 chardet==3.0.4 click==7.1.2 cycler==0.10.0 filelock==3.0.12 future==0.18.2 h5py==2.10.0 idna==2.10 joblib==0.16.0 Keras==2.4.3 kiwisolver==1.2.0 matplotlib==3.3.0 numpy==1.19.1 packaging==20.4 Pillow==7.2.0 pyparsing==2.4.7 python-dateutil==2.8.1 PyYAML==5.3.1 regex==2020.7.14 requests==2.24.0 sacremoses==0.0.43 scipy==1.5.2 sentencepiece==0.1.91 six==1.15.0 tokenizers==0.8.1rc1 torch==1.6.0 tqdm==4.48.2 transformers==3.0.2 urllib3==1.25.10 </code></pre> <p>However when I deploy, it gives me build timed out. The logs don't show any errors and show that each of the dependencies in the requirements.txt file have built. The first print statement doesn't get logged either. I don't see what part of the model/ requirements is causing the time out issue. Here's a screenshot of the logs - there are no errors all the logs are info until theres a Conext deadline exceeded statement. I can't share the actual logs without giving access to the function I believe. I've set the timeout to 9 minutes (540 seconds)<a href="https://i.stack.imgur.com/wogiz.png" rel="nofollow noreferrer">1</a></p>
<p>It is most likely the <code>torch</code> dependency: the default PyPI wheel bundles full CUDA support, which makes the package very large and slow to install, and a GPU isn't available on Cloud Functions anyway, so the extra download buys you nothing.</p>

<p>Instead, you can use a direct link to the CPU-only version in your requirements.txt, like this:</p>

<pre><code>certifi==2020.6.20
chardet==3.0.4
click==7.1.2
cycler==0.10.0
filelock==3.0.12
future==0.18.2
h5py==2.10.0
idna==2.10
joblib==0.16.0
Keras==2.4.3
kiwisolver==1.2.0
matplotlib==3.3.0
numpy==1.19.1
packaging==20.4
Pillow==7.2.0
pyparsing==2.4.7
python-dateutil==2.8.1
PyYAML==5.3.1
regex==2020.7.14
requests==2.24.0
sacremoses==0.0.43
scipy==1.5.2
sentencepiece==0.1.91
six==1.15.0
tokenizers==0.8.1rc1
https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp37-cp37m-linux_x86_64.whl
tqdm==4.48.2
transformers==3.0.2
urllib3==1.25.10
</code></pre>

<p>See this <a href="https://stackoverflow.com/questions/55449313/google-cloud-function-python-3-7-requirements-txt-makes-deploy-fail">answer</a> about how to select a different PyTorch version.</p>
python|google-cloud-functions|pytorch|cloud|requirements.txt
1
1,315
24,884,399
How to perform a simple signal backtest in python pandas
<p>I want to perform a simple and quick backtest in pandas by providing buy signals as DatetimeIndex to check against ohlc quotes DataFrame (adjusted close price) and am not sure if I am doing this right.</p> <p>To be clear I want to calculate the cummulated returns of all swapping buy signals (and stock returns as well?) over the whole holding period. After that I want to compare several calculations via a simple sharpe function. Is this the right way to test a buy singal quick and easy in pandas?</p> <p>Any help is very appreciated!</p> <p>signals:</p> <pre><code>In [216]: signal Out[216]: &lt;class 'pandas.tseries.index.DatetimeIndex'&gt; [2000-08-21, ..., 2013-07-09] Length: 21, Freq: None, Timezone: UTC </code></pre> <p>ohlc:</p> <pre><code>In [218]: df.head() Out[218]: open high low close volume amount Date 2000-01-14 00:00:00+00:00 6.64 6.64 6.06 6.08 74500 4.91 2000-01-17 00:00:00+00:00 6.30 6.54 6.25 6.40 45000 5.17 2000-01-18 00:00:00+00:00 7.56 8.75 7.51 8.75 250200 7.07 </code></pre> <p>backtest:</p> <pre><code>analysis = pd.DataFrame(index=df.index) #calculate returns of adjusted close price analysis["returns"] = df['amount'].pct_change() #set signal returns to quote returns where there is a signal DatetimeIndex and ffill analysis["signal"] = nan analysis["signal"][signal] = analysis["returns"][signal] analysis["signal"] = analysis["signal"].fillna(method="ffill") #calculation of signal returns trade_rets = analysis["signal"].shift(1)*analysis["returns"] </code></pre> <p>expected result (values of buy_returns are not correct):</p> <pre><code>Out[2]: returns buy_returns Date 2000-08-21 00:00:00+00:00 -0.153226 -0.076613 2001-02-12 00:00:00+00:00 0.000000 0.000000 2002-10-29 00:00:00+00:00 0.246155 0.030769 2003-02-12 00:00:00+00:00 0.231884 0.014493 2003-03-12 00:00:00+00:00 1.548386 0.048387 </code></pre> <p>My question really is how do I have to calculate a returns Series to represent the strength of a provided buy signal (True/ False Series or Datetimeindex) in pandas? </p>
<p>You don't have enough information to run a backtest. Your "strategy" currently just has True or False. When it's True, how much do you want to buy? If it's True twice in a row, does that mean buy-and-hold or buy at both times? Does False mean liquidate or not to buy?</p>

<p>You need to:</p>

<ol>
<li>Translate your signal into a "quantity held at <code>t</code>"</li>
<li>Then check what the result of holding that quantity is</li>
<li>Don't forget to benchmark</li>
</ol>

<p>When doing (2), which I think is the main point of your question, don't focus on speed: just write an intuitive simulator that steps through time, does what your rules say, and keeps track of the overall portfolio value. For the size of data you're looking at, any speedup from being more complicated in pandas will be less than a blink of an eye. A minimal sketch of such a simulator follows below.</p>
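<p>A minimal sketch of such a simulator, assuming <code>df</code> and <code>signal</code> are the quotes DataFrame and the DatetimeIndex from the question, and assuming (purely for illustration) that each signal means "invest all remaining cash at that day's adjusted close":</p>

<pre><code>import pandas as pd

cash, shares = 1.0, 0.0
values = []

for date, price in df['amount'].items():
    if date in signal and cash &gt; 0:
        # naive rule: spend all remaining cash when a buy signal fires
        shares += cash / price
        cash = 0.0
    # mark-to-market portfolio value at every step
    values.append(cash + shares * price)

equity = pd.Series(values, index=df.index)
benchmark = df['amount'] / df['amount'].iloc[0]  # simple buy-and-hold benchmark
</code></pre>

<p>Once the holding rule is explicit like this, deriving cumulative returns or a Sharpe ratio from <code>equity.pct_change()</code> is straightforward.</p>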
python|numpy|pandas|finance|quantitative-finance
2
1,316
53,459,046
Pandas dataframe change all value if greater than 0
<p>I have a dataframe</p> <pre><code>df= A B C 1 2 55 0 44 0 0 0 0 </code></pre> <p>and I want to change values to 1 if the value is >0.</p> <p>Is this the right approach: df.loc[df>0,]=1</p> <pre><code>to give: A B C 1 1 1 0 1 0 0 0 0 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.clip_upper.html" rel="nofollow noreferrer"><code>clip_upper</code></a>:</p> <pre><code>df = df.clip_upper(1) print (df) A B C 0 1 1 1 1 0 1 0 2 0 0 0 </code></pre> <p>Numpy alternative:</p> <pre><code>df = pd.DataFrame(np.clip(df.values, a_min=0, a_max=1), index=df.index, columns=df.columns) print (df) A B C 0 1 1 1 1 0 1 0 2 0 0 0 </code></pre> <p>And solution if no negative integer values - compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ge.html" rel="nofollow noreferrer"><code>ge</code></a> (<code>&gt;=</code>) and cast mask to integers:</p> <pre><code>print (df.ge(1)) A B C 0 True True True 1 False True False 2 False False False df = df.ge(1).astype(int) print (df) A B C 0 1 1 1 1 0 1 0 2 0 0 0 </code></pre>
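<p>If you are on a recent pandas version (1.0 or newer), <code>clip_upper</code> has been removed; the equivalent is <code>clip</code> with only the upper bound set:</p>

<pre><code>df = df.clip(upper=1)
print (df)

   A  B  C
0  1  1  1
1  0  1  0
2  0  0  0
</code></pre>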
pandas|dataframe
6
1,317
17,387,219
Sorting numpy matrix for a given column
<p>I've tried to use Ned Batchelder code to sort in human order a <code>NumPy</code> matrix, as it was proposed in this following post:</p> <p><a href="https://stackoverflow.com/q/7638738">Sort numpy string array with negative numbers?</a></p> <p>The code runs on a one-dimensional array, the command being:</p> <pre><code>print (sorted(a, key=natural_keys)) </code></pre> <p>Now, my problem is that my data is a 10 column matrix and I want to sort it according to one column (let's say <code>MyColumn</code>). I can't find a way to modify the code to print the whole matrix sorted according to this very column. All I could come up with is this:</p> <pre><code>print (sorted(a['MyColumn'], key=natural_keys)) </code></pre> <p>But, of course, only <code>MyColumn</code> shows up in the output, although it is correctly sorted...</p> <p><strong>Is there a way to print the whole Matrix?</strong></p> <p>Here is the command I used to load my array (I simplified my original imputfile to a 3 column array):</p> <pre><code>data = np.loadtxt(inputfile, dtype={'names': ('ID', 'MyColumn', 'length'), 'formats': ('int32', 'S40', 'int32')},skiprows=1, delimiter='\t') ID MyColumn length 164967 BFT_job13_q1_type2 426 197388 BFT_job8_q0_type2 244 164967 BFT_job13_q0_type1 944 72406 BFT_job1_q0_type3 696 </code></pre> <p>Here is what the output would ideally look like:</p> <pre><code>ID MyColumn length 72406 BFT_job1_q0_type3 696 197388 BFT_job8_q0_type2 244 164967 BFT_job13_q0_type1 944 164967 BFT_job13_q1_type2 426 </code></pre>
<p>If you have a <code>np.matrix</code>, called <code>m</code>:</p> <pre><code>col = 1 m[np.array(m[:,col].argsort(axis=0).tolist()).ravel()] </code></pre> <p>If you have a <code>np.ndarray</code>, called <code>a</code>:</p> <pre><code>col = 1 a[a[:,col].argsort(axis=0)] </code></pre> <p>If you have a structured array with named columns:</p> <pre><code>def mysort(data, col_name, key=None): d = data.copy() cols = [i[0] for i in eval(str(d.dtype))] if key: argsort = np.array([key(i) for i in d[col_name]]).argsort() else: argsort = d[col_name].argsort() for col in cols: d[col] = d[col][argsort] return d </code></pre> <p>For your specific case you need <a href="https://stackoverflow.com/a/17378150/832621">the following <code>key</code> function</a>:</p> <pre><code>def key(x): x = ''.join([i for i in x if i.isdigit() or i=='_']) return '{1:{f}{a}10}_{2:{f}{a}10}_{3:{f}{a}10}'.format(*x.split('_'), f='0', a='&gt;') d = mysort(data, 'MyColumn', key) </code></pre>
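<p>A shorter route for the structured-array case, assuming <code>key</code> is the natural-sort key from the linked answer: compute the ordering once with <code>argsort</code> and index the whole array with it, which reorders every named column together. (If <code>MyColumn</code> was loaded with the <code>'S40'</code> dtype, the values are bytes under Python 3, so you may need <code>key(x.decode())</code>.)</p>

<pre><code>import numpy as np

# sort order derived from the natural-sort key of 'MyColumn'
order = np.argsort([key(x) for x in data['MyColumn']])
sorted_data = data[order]  # all named columns are reordered together
</code></pre>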
python|numpy|sorting|matrix
5
1,318
17,451,425
Hist in matplotlib: Bins are not centered and proportions not correct on the axis
<p>take a look at this example:</p> <pre><code> import matplotlib.pyplot as plt l = [3,3,3,2,1,4,4,5,5,5,5,5,5,5,5,5] plt.hist(l,normed=True) plt.show() </code></pre> <p>The output is posted as a picture. I have two questions:</p> <p>a) Why are only the 4 and 5 bins centered around its value? Shouldn't the others be that as well? Is there a trick to get them centered?</p> <p>b)Why are the bins not normalised to proportion? I want the y values of all the bins to sum up to one.</p> <p>Note that my real example contains much more values in the list, but they are all discrete.</p> <p><img src="https://i.stack.imgur.com/6wA6M.png" alt="enter image description here"></p>
<p>You should adjust the keyword arguments of the <code>plt.hist</code> function. There are many of them and the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist" rel="noreferrer">documentation</a> can help you answer many of these questions.</p>

<p>a) You can pass the keywords <code>bins=range(1,7)</code> and <code>align='left'</code>. Setting the <code>bins</code> keyword to a sequence gives the borders of each bin, i.e. <code>[1,2], [2,3], [3,4], ..., [5,6]</code>, and <code>align='left'</code> draws each bar centred on the left bin edge, which here is the integer value itself.</p>

<p>b) Check your bin widths. From the <code>matplotlib.pyplot.hist</code> documentation on <code>normed</code>:</p>

<blockquote> <p>If True, the first element of the return tuple will be the counts normalized to form a probability density, i.e., n/(len(x)*dbin). In a probability density, the integral of the histogram should be 1; you can verify that with a trapezoidal integration of the probability density function:</p> </blockquote>

<p>This means the <em>area</em> under the bins sums to one. With the default 10 bins your data span 1 to 5, so each bin is 0.4 wide; the heights are counts/(len(x)*0.4) and therefore do not add up to 1. Once the bins are exactly one unit wide (<code>bins=range(1,7)</code>), the heights are the proportions you expect; <code>rwidth</code> only controls how wide the bars are drawn, not the normalisation:</p>

<pre><code>plt.hist(l, bins=range(1,7), align='left', rwidth=1, normed=True)
</code></pre>
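<p>On newer Matplotlib versions the <code>normed</code> keyword has been removed; <code>density=True</code> is the replacement and behaves the same way here:</p>

<pre><code># same plot on Matplotlib 3.x, where `normed` no longer exists
plt.hist(l, bins=range(1, 7), align='left', rwidth=1, density=True)
</code></pre>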
python|numpy|matplotlib
16
1,319
71,988,044
Financial performance and risk analysis statistics from sample DataFrame
<p>How do I output detailed financial performance and risk analysis statistics from this sample pandas DataFrame?</p> <p>Can anyone show how this could be done with Quantstats, Pyfolio or another similar approach?</p> <p><strong>Code</strong></p> <pre><code>start_amount = 100000 np.random.seed(8) win_loss_df = pd.DataFrame( np.random.choice([1000, -1000], 543), index=pd.date_range(&quot;2020-01-01&quot;, &quot;2022-01-30&quot;, freq=&quot;B&quot;), columns=[&quot;win_loss_amount&quot;] ) win_loss_df[&quot;total_profit&quot;] = win_loss_df.win_loss_amount.cumsum() + start_amount </code></pre> <p><strong>Sample DataFrame</strong></p> <pre><code>win_loss_df.head(10) win_loss_amount total_profit 2020-01-01 -1000 99000 2020-01-02 1000 100000 2020-01-03 -1000 99000 2020-01-06 -1000 98000 2020-01-07 -1000 97000 2020-01-08 1000 98000 2020-01-09 1000 99000 2020-01-10 -1000 98000 2020-01-13 1000 99000 2020-01-14 -1000 98000 </code></pre> <p><strong>Desired output</strong></p> <p>I would like to see output including:</p> <ul> <li>Annual return</li> <li>Sharpe ratio</li> <li>Max drawdown</li> </ul> <p>I was hoping to use a library for this which would simplify the process and return data similar to a tear sheet.</p>
<p>We will use the profit column and use <a href="https://github.com/ranaroussi/quantstats" rel="nofollow noreferrer">quantstats</a> to generate reports.</p>

<h4>Code</h4>

<pre><code>import quantstats as qs
import numpy as np
import pandas as pd

start_amount = 100000
np.random.seed(8)
win_loss_df = pd.DataFrame(
    np.random.choice([1000, -1000], 543),
    index=pd.date_range(&quot;2020-01-01&quot;, &quot;2022-01-30&quot;, freq=&quot;B&quot;),
    columns=[&quot;win_loss_amount&quot;]
)
win_loss_df[&quot;total_profit&quot;] = win_loss_df.win_loss_amount.cumsum() + start_amount

profit = win_loss_df.total_profit

# Save to image file, this image can also be seen in the full report.
qs.plots.yearly_returns(profit, savefig='yearly_return.png')

print(f'monthly returns:\n{qs.stats.monthly_returns(profit)}')
print(f'sharpe ratio: {qs.stats.sharpe(profit)}')
print(f'max drawdown: {qs.stats.max_drawdown(profit)}')

# Print full report in html.
qs.reports.html(profit, title='ABC', output='', download_filename='profit.html')
</code></pre>

<h3>Output</h3>

<h5>Yearly return</h5>

<p><a href="https://i.stack.imgur.com/PRkFB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRkFB.png" alt="enter image description here" /></a></p>

<h5>Monthly returns, Sharpe and drawdown</h5>

<pre><code>monthly returns:
      JAN       FEB       MAR       APR           MAY       JUN       JUL       AUG       SEP       OCT           NOV       DEC       EOY
2020  -0.060606  0.000000 -0.064516  4.597701e-02  0.032967  0.000000 -0.010638 -0.010753  0.086957 -1.110223e-16  0.030000  0.048544  0.101444
2021   0.046296 -0.035398  0.045872 -4.440892e-16 -0.026316  0.018018  0.017699 -0.069565  0.018692 -4.587156e-02 -0.057692 -0.030612 -0.117146
2022  -0.042105  0.000000  0.000000  0.000000e+00  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000e+00  0.000000  0.000000 -0.041881
sharpe ratio: -0.16968348978006012
max drawdown: -0.23529411764705888
</code></pre>

<h5>Full report</h5>

<p>profit.html</p>

<p><a href="https://i.stack.imgur.com/Yk90L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yk90L.png" alt="enter image description here" /></a></p>

<p><a href="https://i.stack.imgur.com/1tCjM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1tCjM.png" alt="enter image description here" /></a></p>

<p><a href="https://i.stack.imgur.com/3ucKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ucKA.png" alt="enter image description here" /></a></p>

<p><a href="https://i.stack.imgur.com/TEE1A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TEE1A.png" alt="enter image description here" /></a></p>
python|pandas|finance
1
1,320
71,950,486
I have unlabelled data and I want to label it into three categories: 1. refilling, 2. theft, 3. sloshing
<pre><code>timestamp            vehicle_speed  fuel_in_lit
2022-01-01 00:00:03  0              61
2022-01-01 00:00:23  2              60
2022-01-01 00:00:33  0              59
2022-01-01 00:00:43  0              58
2022-01-01 00:00:53  0              56
</code></pre>
<ul>
<li>if the vehicle speed is zero and the fuel tank level is increasing, it's refilling</li>
<li>if the speed is zero and the fuel level is decreasing, it's theft (the fuel was stolen)</li>
<li>anything else is sloshing</li>
</ul>
<p>IIUC, you can use a set of masks and <a href="https://numpy.org/doc/stable/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a>:</p> <pre><code>diff = df['fuel_in_lit'].diff() # speed is 0 m1 = df['vehicle_speed'].eq(0) # fuel is increasing m2 = diff.gt(0) # fuel is decreasing m3 = diff.lt(0) df['state'] = np.select([m1&amp;m2, m1&amp;m3], ['refilling', 'stolen fuel'], 'sloshing') </code></pre> <p><em>NB. I probably would have used a additional condition where the speed is &gt; 0 and the fuel is decreasing, but the above fits the description</em></p> <p>output:</p> <pre><code> timestamp vehicle_speed fuel_in_lit state 0 2022-01-01 00:00:03 0 61 sloshing 1 2022-01-01 00:00:23 2 60 sloshing 2 2022-01-01 00:00:33 0 59 stolen fuel 3 2022-01-01 00:00:43 0 58 stolen fuel 4 2022-01-01 00:00:53 0 56 stolen fuel </code></pre>
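<p>For completeness, one possible version of the variant mentioned in the note, treating fuel drops while the vehicle is moving as ordinary consumption rather than sloshing (the 'consumption' label is my own addition, not part of the question; <code>diff</code>, <code>m1</code>, <code>m2</code> and <code>m3</code> are reused from the snippet above):</p>

<pre><code># speed is &gt; 0
moving = df['vehicle_speed'].gt(0)

df['state'] = np.select(
    [m1 &amp; m2, m1 &amp; m3, moving &amp; m3],
    ['refilling', 'stolen fuel', 'consumption'],
    'sloshing'
)
</code></pre>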
python|pandas|dataframe|numpy
-1
1,321
72,031,917
How to compare two dataframes' structures
<p>I have two pandas dataframes and I want to compare their structures only. I tried to do this:</p> <pre><code>df0Info = df0.info() df1Info = df1.info() if df0Info == df1Info: print(&quot;They are same&quot;) else: print(&quot;They are diff&quot;) </code></pre> <p>I found the result always is same whether the two dataframes structures are same or different. How can I get the correct result?</p>
<p><code>pandas.DataFrame.info</code> prints a summary of a DataFrame and returns <code>None</code>, so comparing outputs to one another will always be True because you're essentially testing <code>None == None</code>.</p>
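<p>If what you actually want is to compare the structures (column names, order and dtypes) rather than the printed summaries, one straightforward approach is to compare <code>df.dtypes</code>, which is a Series indexed by column name:</p>

<pre><code># True only if both frames have the same columns, in the same order,
# with the same dtypes
if df0.dtypes.equals(df1.dtypes):
    print('They are same')
else:
    print('They are diff')
</code></pre>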
python|pandas
0
1,322
72,046,021
Save multiple dataframes to the same file, one after the other
<p>Lets say I have three dfs</p> <pre><code>x,y,z 0,1,1,1 1,2,2,2 2,3,3,3 a,b,c 0,4,4,4 1,5,5,5 2,6,6,6 d,e,f 0,7,7,7 1,8,8,8 2,9,9,9 </code></pre> <p>How can I stick them all together so that i get:</p> <pre><code>x,y,z 0,1,1,1 1,2,2,2 2,3,3,3 a,b,c 0,4,4,4 1,5,5,5 2,6,6,6 d,e,f 0,7,7,7 1,8,8,8 2,9,9,9 </code></pre> <p>I am not fussed if it's in a df or not hence I haven't included a new index. Essentially I just want to glue n amount of dfs together to save me having to copy and paste the data into an excel sheet myself.</p>
<p>If you want to save all your dataframes in the <strong>same file</strong> one after the other, use a simple loop with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>to_csv</code></a> and use the file append mode (<strong>a</strong>):</p> <pre class="lang-py prettyprint-override"><code>dfs = [df1, df2, df3] for d in dfs: d.to_csv('out.csv', mode='a') </code></pre> <p><em>NB. the initial file must be empty or non existent</em></p> <p>output <code>out.csv</code>:</p> <pre><code>,x,y,z 0,1,1,1 1,2,2,2 2,3,3,3 ,a,b,c 0,4,4,4 1,5,5,5 2,6,6,6 ,d,e,f 0,7,7,7 1,8,8,8 2,9,9,9 </code></pre>
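<p>Since the goal was to avoid pasting the data into an Excel sheet by hand, the same stacking idea also works with <code>ExcelWriter</code>. This is only a sketch; it assumes an engine such as <code>openpyxl</code> is installed, and the file name is arbitrary:</p>

<pre><code>with pd.ExcelWriter('out.xlsx') as writer:
    row = 0
    for d in dfs:
        # write each frame below the previous one
        d.to_excel(writer, sheet_name='Sheet1', startrow=row)
        row += len(d) + 2  # data rows + header row + one blank separator row
</code></pre>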
python|pandas|dataframe
2
1,323
19,071,199
Drop columns whose name contains a specific string from pandas DataFrame
<p>I have a pandas dataframe with the following column names:</p>

<p>Result1, Test1, Result2, Test2, Result3, Test3, etc...</p>

<p>I want to drop all the columns whose name contains the word "Test". The number of such columns is not static but depends on a previous function.</p>

<p>How can I do that?</p>
<p>Here is one way to do this:</p> <pre><code>df = df[df.columns.drop(list(df.filter(regex='Test')))] </code></pre>
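<p>An equivalent spelling with boolean column selection, in case you find it more readable:</p>

<pre><code># keep only the columns whose name does NOT contain 'Test'
df = df.loc[:, ~df.columns.str.contains('Test')]
</code></pre>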
python|pandas|dataframe
283
1,324
17,833,119
Lowpass Filter in python
<p>I am trying to convert a Matlab code to Python. I want to implement <code>fdesign.lowpass()</code> of Matlab in Python. What will be the exact substitute of this Matlab code using <code>scipy.signal.firwin()</code>:</p> <pre><code>demod_1_a = mod_noisy * 2.*cos(2*pi*Fc*t+phi); d = fdesign.lowpass('N,Fc', 10, 40, 1600); Hd = design(d); y = filter(Hd, demod_1_a); </code></pre>
<p>A very basic approach would be to invoke</p> <pre><code># spell out the args that were passed to the Matlab function N = 10 Fc = 40 Fs = 1600 # provide them to firwin h = scipy.signal.firwin(numtaps=N, cutoff=40, nyq=Fs/2) # 'x' is the time-series data you are filtering y = scipy.signal.lfilter(h, 1.0, x) </code></pre> <p>This should yield a filter <em>similar</em> to the one that ends up being made in the Matlab code. If your goal is to obtain functionally equivalent results, this should provide a useful filter. </p> <p>However, if your goal is that the python code provide exactly the same results, then you'll have to look under the hood of the <code>design</code> call (in Matlab); From my quick check, it's not trivial to parse through the Matlab calls to identify exactly what it is doing, i.e. what design method is used and so on, and how to map that into corresponding <code>scipy</code> calls. If you really want compatibility, and you only need to do this for a limited number of filters, you could, by hand, look at the <code>Hd.Numerator</code> field -- this array of numbers directly corresponds to the <code>h</code> variable in the python code above. So if you copy those numbers into an array by hand, you'll get numerically equivalent results.</p>
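<p>One note for newer SciPy releases: the <code>nyq</code> argument of <code>firwin</code> has been deprecated in favour of <code>fs</code>, so on a recent version the equivalent call would be:</p>

<pre><code># same filter design, passing the sampling rate instead of the Nyquist frequency
h = scipy.signal.firwin(numtaps=N, cutoff=Fc, fs=Fs)
</code></pre>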
python|numpy|filter|scipy
6
1,325
55,272,152
Pandas - Calculate average of columns with condition based on values in other columns
<p>I struggle to create a new column in my data frame, which would be the result of going through each row a data frame and calculating the average based on some conditions. That is how the data frame looks like</p> <pre><code>ID, 1_a, 1_b, 1_c, 2_a, 2_b, 2_c, 3_a, 3_b, 3_c 0, 0, 145, 0.8, 0, 555, 0.7, 1, 335, 0.7 1, 1, 222, 0.9, 1, 224, 0.4, 1, 555, 0.6 3, 1, 111, 0.3, 0, 222, 0.5, 1, 999, 0.7 </code></pre> <p>I hope to have the following result:</p> <pre><code>ID, 1_a, 1_b, 1_c, 2_a, 2_b, 2_c, 3_a, 3_b, 3_c, NEW 0, 0, 145, 0.8, 0, 555, 0.7, 1, 335, 0.7, 0.7 1, 1, 222, 0.8, 1, 224, 0.4, 1, 555, 0.6, 0.6 3, 1, 111, 0.3, 0, 222, 0.5, 1, 999, 0.7, 0.5 </code></pre> <p>The logic is the following.</p> <pre><code>If 1_a is 1, keep value in 1_c, if not ignore If 2_a is 1, keep value in 2_c, if not ignore If 3_a is 1, keep value in 3_c, if not ignore </code></pre> <p>calculate the average of the kept values for each row and store in column 'NEW'</p> <p>I tried several ways, but it only works if I have only 1 row in the data frame. If I have more than 1 row, it seems to calculate the mean across the whole data frame. Additionally, I try to optimise the function as I have more 10 of these IF conditions. That is what I tried, but it does not give me the result, I am looking for: </p> <pre><code> def test(x): a = x[x['1_a']==1]['1_c'].values b = x[x['2_a']==1]['2_c'].values c = x[x['3_a']==1]['3_c'].values xx =np.concatenate((a,b,c), axis=0) z = sum(xx)/len(xx) x['New_Prob'] = z return x print(test(df)) </code></pre> <p>The result is something like that:</p> <pre><code>ID, 1_a, 1_b, 1_c, 2_a, 2_b, 2_c, 3_a, 3_b, 3_c, NEW 0, 0, 145, 0.8, 0, 555, 0.7, 1, 335, 0.7, 0.6 1, 1, 222, 0.8, 1, 224, 0.4, 1, 555, 0.6, 0.6 3, 1, 111, 0.3, 0, 222, 0.5, 1, 999, 0.7, 0.6 </code></pre>
<p>If the '_a' and '_c' columns are numbered over the same range, you can simply loop through them:</p>

<pre><code>r = range(1, 4)

for i in r:
    # blank out i_c wherever the matching i_a is not 1
    # (note: this overwrites the original i_c values)
    df.loc[df["{}_a".format(i)] != 1, "{}_c".format(i)] = np.NaN

df['NEW'] = df[['{}_c'.format(i) for i in r]].mean(axis=1)
</code></pre>
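<p>A vectorized alternative that leaves the original '_c' columns untouched, assuming the same <code>i_a</code> / <code>i_c</code> naming pattern:</p>

<pre><code>r = range(1, 4)
a_cols = [f'{i}_a' for i in r]
c_cols = [f'{i}_c' for i in r]

# keep each i_c only where the matching i_a equals 1, then average per row
kept = df[c_cols].where(df[a_cols].to_numpy() == 1)
df['NEW'] = kept.mean(axis=1)
</code></pre>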
python|pandas|if-statement|iteration
1
1,326
55,159,061
store values from a list into an array after each iteration consists of specified column of files in python
<p>For a file in files:</p> <p>This is my list which consists of values from 3 files after each iteration. </p> <pre><code>import pandas files = [r"C:\Users\Anjana\Documents\radar\HeightVsDopplr\EXP_DBS_CH4_24Apr2017_10_49_10_Beam2_W1_Az_90.00_Oz_10.00.mmts",r"C:\Users\Anjana\Documents\radar\HeightVsDopplr\EXP_DBS_CH4_24Apr2017_10_49_10_Beam4_W1_Az_180.00_Oz_10.00.mmts", r"C:\Users\Anjana\Documents\radar\HeightVsDopplr\EXP_DBS_CH4_24Apr2017_10_49_10_Beam1_W1_Az_0.00_Oz_0.00.mmts"] for file in files: if file.endswith(".mmts"): csvfiles.append(str(file)) </code></pre> <pre><code>a = pd.read_csv(file) x = list(a0[:][:]['Mean']) matrix = np.empty((a0.shape[0],3)) matrix.fill(np.nan) </code></pre> <h3>I need the output somewhat like this</h3> <ul> <li>matrix(row1,col1) should be value of first value from file 1 </li> <li>matrix(row2,col1) should be value of first value from file 2 </li> <li>matrix(row3,col1) should be value of first value from file 3 sample input:</li> </ul> <blockquote> <p>file 1</p> </blockquote> <p>+--------+----------+<br> | Height | Mean |<br> +--------+----------+<br> | 3.33 | -0.41005 |<br> +--------+----------+<br> | 3.51 | 0.15782 |<br> +--------+----------+<br> | 3.69 | 0.12896 |<br> +--------+----------+ </p> <blockquote> <p>file 2</p> </blockquote> <p>+--------+--------+<br> | Height | Mean |<br> +--------+--------+<br> | 3.33 | 1.8867 |<br> +--------+--------+<br> | 3.51 | 2.3108 |<br> +--------+--------+<br> | 3.69 | 2.5924 |<br> +--------+--------+ </p> <blockquote> <p>output</p> </blockquote> <pre><code>array[-0.41005,0.15782 ,0.12896] [1.8867 ,2.3108 ,2.5924] </code></pre>
<p>Your question is very general. Try to provide a minimal complete verifiable example, otherwise it is difficult to help you.</p>

<p>What is in the files? Is each line a single number? There is too little information to be sure.</p>

<p>In Python you can use multiple iteration variables:</p>

<pre><code># open the files first
f1 = open("file1", "r")
f2 = open("file2", "r")
f3 = open("file3", "r")

# Assuming the files have the same number of lines
# and each line is a single number:
for i, (linef1, linef2, linef3) in enumerate(zip(f1, f2, f3)):
    matrix[i, 0] = int(linef1)
    matrix[i, 1] = int(linef2)
    matrix[i, 2] = int(linef3)
</code></pre>
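<p>Given that your snippet already reads the files with pandas and each file appears to have a <code>Mean</code> column, a more direct sketch (assuming every file has the same number of rows and parses with <code>pd.read_csv</code> the way your code already does) would be:</p>

<pre><code>import numpy as np
import pandas as pd

# one row of the output matrix per file, built from that file's 'Mean' column
matrix = np.vstack([pd.read_csv(f)['Mean'].to_numpy() for f in csvfiles])
</code></pre>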
python|pandas|numpy
0
1,327
55,193,810
Binary Image classification using TensorFlow
<p>I am writing a code to classify between dogs and cats in python(Tensorflow) but the code is displaying this error:</p> <p><strong>IndexError: index 0 is out of bounds for axis 0 with size 0</strong></p> <p>I am stuck here. Any help is appreciated. </p> <p>Also can you please help me I cant figure out here how OneHotEncoder is working .I didn't understand the logic here. I spent a lot of time on the same.</p> <pre><code>def reset_graph(seed=42): tf.reset_default_graph() tf.set_random_seed(seed) np.random.seed(seed) reset_graph() img_size = 64 num_channels = 3 img_size_flat = img_size * img_size * num_channels img_shape = (img_size, img_size) trainpath='C:\ProgramData\Anaconda3\train' testpath='C:\ProgramData\Anaconda3\test' labels = {'cats': 0, 'dogs': 1} fc_size=32 #size of the output of final FC layer num_steps=300 tf.logging.set_verbosity(tf.logging.INFO) def read_images_classes(basepath,imgSize=img_size): image_stack = [] label_stack = [] for counter, l in enumerate(labels): path = os.path.join(basepath, l,'*g') for img in glob.glob(path): one_hot_vector = np.zeros(len(labels),dtype=np.int16) one_hot_vector[counter]=1 image = cv2.imread(img) image_stack.append(im_resize) label_stack.append(labels[l]) return np.array(image_stack), np.array(label_stack) X_train, y_train = read_images_classes(trainpath) X_test, y_test = read_images_classes(testpath)b print('length of train image set',len(X_train)) print('X_data shape:', X_train.shape) print('y_data shape:', y_train.shape) fig1 = plt.figure() ax1 = fig1.add_subplot(2,2,1) img = cv2.resize(X_train[0],(64,64), interpolation=cv2.INTER_CUBIC) ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) plt.title(y_train[0]) plt.show() </code></pre>
<blockquote> <p>IndexError: index 0 is out of bounds for axis 0 with size 0</p> </blockquote> <p>It means you don't have the index you are trying to reference. </p> <p>Since this is a binary classification problem, you don't required one_hot encoding for pre-processing labels. if you have more than two labels then you can use one_hot encoding.</p> <p>Please refer binary classification code using Tensorflow for Cats and Dogs Dataset</p> <pre><code>import os import numpy as np from keras import layers import pandas as pd from tensorflow.keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D from tensorflow.keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D from tensorflow.keras.models import Sequential from tensorflow.keras import regularizers, optimizers from tensorflow.keras.preprocessing import image from tensorflow.keras.preprocessing.image import ImageDataGenerator import keras.backend as K import keras.backend as K K.set_image_data_format('channels_last') from google.colab import drive drive.mount('/content/drive') train_dir = '/content/drive/My Drive/Dogs_Vs_Cats/train' test_dir = '/content/drive/My Drive/Dogs_Vs_Cats/test' img_width, img_height = 300, 281 input_shape = img_width, img_height, 3 train_samples = 2000 test_samples = 1000 epochs = 30 batch_size = 32 train_datagen = ImageDataGenerator( rescale = 1. /255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) test_datagen = ImageDataGenerator( rescale = 1. /255) train_data = train_datagen.flow_from_directory( train_dir, target_size = (img_width, img_height), batch_size = batch_size, class_mode = 'binary') test_data = test_datagen.flow_from_directory( test_dir, target_size = (img_width, img_height), batch_size = batch_size, class_mode = 'binary') model = Sequential() model.add(Conv2D(32, (7, 7), strides = (1, 1), input_shape = input_shape)) model.add(BatchNormalization(axis = 3)) model.add(Activation('relu')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(64, (7, 7), strides = (1, 1))) model.add(BatchNormalization(axis = 3)) model.add(Activation('relu')) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy']) model.build(input_shape) model.summary() model.fit_generator( train_data, steps_per_epoch = train_samples//batch_size, epochs = epochs, validation_data = test_data, verbose = 1, validation_steps = test_samples//batch_size) </code></pre>
tensorflow|deep-learning|classification|conv-neural-network
0
1,328
56,695,811
Is it possible to calculate accuracy and ROC-AUC score at the same time with GridSearchCV?
<pre><code>rf = RandomForestClassifier(random_state=0) parameters = {'bootstrap': [True, False], 'min_samples_split':[2,3,4], 'criterion':['entropy', 'gini'], 'n_estimators':[100, 200] } grid_search = GridSearchCV(estimator=rf, param_grid=parameters, scoring='accuracy', cv=10, n_jobs=-1) </code></pre> <p>My code currently performs a grid search with <code>GridSearchCV</code>, scoring predictions by their accuracy. How can I calculate the ROC-AUC score as well without using a <code>for</code> loop? </p>
<p>Per the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html" rel="nofollow noreferrer">docs</a>, you can pass a <code>list</code> of strings. In this specific case, <code>scoring=['accuracy', 'roc_auc']</code> would be what you want.</p>
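<p>A sketch of what that could look like; note that with multiple metrics you also need to tell <code>GridSearchCV</code> which one to use for refitting the final model (and for <code>best_params_</code>) via <code>refit</code>. Here <code>X_train</code>/<code>y_train</code> stand in for whatever training data you pass to <code>fit</code>:</p>

<pre><code>grid_search = GridSearchCV(
    estimator=rf,
    param_grid=parameters,
    scoring=['accuracy', 'roc_auc'],
    refit='accuracy',   # metric used to pick the best estimator
    cv=10,
    n_jobs=-1,
)
grid_search.fit(X_train, y_train)

# per-metric cross-validated results
print(grid_search.cv_results_['mean_test_accuracy'])
print(grid_search.cv_results_['mean_test_roc_auc'])
</code></pre>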
python|pandas|numpy|cross-validation|grid-search
0
1,329
56,850,868
How do I add numbers to a column based on another column? (dictionary)
<p>I have a dictionary with values that I need to add to a column in a dataframe. The dictionary looks like this:</p> <pre><code>{1:123, 2:345, 3:678} </code></pre> <p>and the column of the dataframe looks like this:</p> <pre><code>col1 1 2 3 </code></pre> <p>and I want this result:</p> <pre><code>col1 1123 2345 3678 </code></pre> <p>This is code I am using (replace function)</p> <pre><code>file['col1'] = file['col1'].replace(dict) </code></pre> <p>but replace() unfortunately deletes the value in column 1.</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>Series.astype</code></a> to cast as <code>str</code> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> - you can concatenate strings this way simply with <code>+</code></p> <pre><code>df['col1'].astype(str) + df['col1'].map(d).astype(str) </code></pre> <p>[out]</p> <pre><code>0 1123 1 2345 2 3678 Name: col1, dtype: object </code></pre> <p>If it's necessary to cast back to <code>int</code> type, use:</p> <pre><code>(df['col1'].astype(str) + df['col1'].map(d).astype(str)).astype(int) </code></pre>
python|pandas|dataframe
2
1,330
56,455,555
Adding weights to edges in networkx automatically depending of the number of connections form pandas dataframe
<p>I am trying to create out of a pandas dataframe a directed graph right now with networkx, so far i can use:</p> <pre><code>nx.from_pandas_edgelist(df, 'Activity', 'Activity followed', create_using=nx.DiGraph()) </code></pre> <p>which shows me all the nodes and edges from Activity --> Activity followed.</p> <p>In my dataframe there is sometimes the same activity followed by the same activity and i want to count this number in form of weights for the edges so far example this is my dataframe:</p> <pre><code>Index Activity Activityfollowed 0 Lunch Dinner 1 Lunch Dinner 2 Breakfast Lunch </code></pre> <p>should have the Edges:</p> <p>Lunch --> Dinner (weight 2)</p> <p>Breakfast --> Lunch (weight 1)</p> <p>Is there any way to do it?</p>
<p>You could try adding the <code>weight</code> attribute as a column, using <code>groupby.transform</code>, then pass the <code>edge_attr</code> argument to the <code>from_pandas_edgelist</code> method:</p>

<pre><code>df['weight'] = df.groupby(['Activity', 'Activityfollowed'])['Activity'].transform('size')

G = nx.from_pandas_edgelist(df, 'Activity', 'Activityfollowed',
                            create_using=nx.DiGraph(), edge_attr='weight')
</code></pre>

<p>Confirm that it's worked using:</p>

<pre><code>G.edges(data=True)
</code></pre>

<p>[out]</p>

<pre><code>OutEdgeDataView([('Lunch', 'Dinner', {'weight': 2}), ('Breakfast', 'Lunch', {'weight': 1})])
</code></pre>
python|pandas|networkx
5
1,331
56,775,346
Numpy 2d array, clipping each index of each row to the minimum of that index and a specific column
<p>Given a 2d array, I want to take a specific column of that array.</p> <p>I then want to take every value of each row in the array, and change that value to whatever the minimum is between its current value, and the value in the specified column <em>for that row</em> is.</p> <p>What is an efficient way to do this? Thank you.</p> <p>Here is an example:</p> <p>Given a 3x3 matrix:</p> <pre><code>array([[1, 2, 1], [2, 2, 8], [3, 7, 11]]) </code></pre> <p>And a chosen column = column 2</p> <pre><code>array([2, 2, 7]) </code></pre> <p>For every value in the matrix I take the minimum between that value and the value in the corresponding row of the chosen column</p> <pre><code>= [1, 2, 1; 2, 2, 2; 3, 7, 7] </code></pre> <p>How can I efficiently do this for a large matrix? Thank you.</p>
<p>Use <code>numpy.minimum</code>. You need to broadcast to keep the dimensions of the column so you aren't comparing row-wise with the entire column.</p> <pre><code>np.minimum(a, a[:, col, None]) </code></pre> <hr> <p><strong><em>MCVE</em></strong></p> <pre><code>a = np.array([[1, 3, 1, 9, 4], [2, 3, 7, 5, 5], [9, 8, 8, 4, 5], [6, 9, 5, 7, 9], [9, 9, 1, 9, 1]]) col = 2 # array([1, 7, 8, 5, 1]) np.minimum(a, a[:, col, None]) </code></pre> <p></p> <pre><code>array([[1, 1, 1, 1, 1], [2, 3, 7, 5, 5], [8, 8, 8, 4, 5], [5, 5, 5, 5, 5], [1, 1, 1, 1, 1]]) </code></pre>
python|numpy
2
1,332
56,601,817
Conditional Cumulative Sums in Pandas
<p>I am a former Excel power user repenting for his sins. I need help recreating a common calculation for me.</p> <p>I am trying to calculate the performance of a loan portfolio. In the numerator, I am calculating the cumulative total of losses. In the denominator, I need the original balance of the loans included in the cumulative total.</p> <p>I cannot figure out how to do a conditional groupby in Pandas to accomplish this. It is very simple in Excel, so I am hoping that I am overthinking it.</p> <p>I could not find much on the issue on StackOverflow, but this was the closest: <a href="https://stackoverflow.com/questions/41420822/python-pandas-conditional-cumulative-sum">python pandas conditional cumulative sum</a></p> <p>The thing I cannot figure out is that my conditions are based on values in the index and contained in columns</p> <p>Below is my data:</p> <pre><code>| Loan | Origination | Balance | NCO Date | NCO | As of Date | Age (Months) | NCO Age (Months) | |---------|-------------|---------|-----------|-----|------------|--------------|------------------| | Loan 1 | 1/31/2011 | 1000 | 1/31/2018 | 25 | 5/31/2019 | 100 | 84 | | Loan 2 | 3/31/2011 | 2500 | | 0 | 5/31/2019 | 98 | | | Loan 3 | 5/31/2011 | 3000 | 1/31/2019 | 15 | 5/31/2019 | 96 | 92 | | Loan 4 | 7/31/2011 | 2500 | | 0 | 5/31/2019 | 94 | | | Loan 5 | 9/30/2011 | 1500 | 3/31/2019 | 35 | 5/31/2019 | 92 | 90 | | Loan 6 | 11/30/2011 | 2500 | | 0 | 5/31/2019 | 90 | | | Loan 7 | 1/31/2012 | 1000 | 5/31/2019 | 5 | 5/31/2019 | 88 | 88 | | Loan 8 | 3/31/2012 | 2500 | | 0 | 5/31/2019 | 86 | | | Loan 9 | 5/31/2012 | 1000 | | 0 | 5/31/2019 | 84 | | | Loan 10 | 7/31/2012 | 1250 | | 0 | 5/31/2019 | 82 | | </code></pre> <p>In Excel, I would calculate this total using the following formulas:</p> <p>Outstanding Balance Line: <code>=SUMIFS(Balance,Age (Months),Reference Age)</code></p> <pre><code>Cumulative NCO: =SUMIFS(NCO,Age (Months),&gt;=Reference Age,NCO Age (Months),&lt;=&amp;Reference Age) </code></pre> <p>Data:</p> <pre><code>| Reference Age | 85 | 90 | 95 | 100 |---------------------|-------|-------|------|------ | Outstanding Balance | 16500 | 13000 | 6500 | 1000 | Cumulative NCO | 25 | 60 | 40 | 25 </code></pre> <p>The goal here is to include things in Outstanding Balance that are old enough to have an observation for NCO. And NCOs are the total amount that have occurred up until that point for those loans outstanding.</p> <p>EDIT:</p> <p>I have gotten a calculation this way. But is this the most efficient?</p> <pre class="lang-py prettyprint-override"><code>age_bins = list(np.arange(85, 101, 5)) final_df = pd.DataFrame() df.fillna(value=0, inplace=True) df["NCO Age (Months)"] = df["NCO Age (Months)"].astype(int) for x in age_bins: age = x nco = df.loc[(df["Age (Months)"] &gt;= x) &amp; (df["NCO Age (Months)"] &lt;= x), "NCO"].sum() bal = df.loc[(df["Age (Months)"] &gt;= x), "Balance"].sum() temp_df = pd.DataFrame( data=[[age, nco, bal]], columns=["Age", "Cumulative NCO", "Outstanding Balance"], index=[age], ) final_df = final_df.append(temp_df, sort=True) </code></pre>
<p>You are using complex conditions that depend on variables. It is easy to find a vectorized way for simple cumulative sums, but I cannot imagine a nice way for the Cumulative NCO.</p>

<p>So I would revert to Python comprehensions:</p>

<pre><code># df.iloc[:, 6] is 'Age (Months)' and df.iloc[:, 7] is 'NCO Age (Months)'
data = [
    {
        'Reference Age': ref,
        'Outstanding Balance': df.loc[df.iloc[:, 6] &gt;= ref, 'Balance'].sum(),
        'Cumulative NCO': df.loc[(df.iloc[:, 6] &gt;= ref) &amp; (df.iloc[:, 7] &lt;= ref), 'NCO'].sum()
    }
    for ref in [85, 90, 95, 100]
]
result = pd.DataFrame(data).set_index('Reference Age').T
</code></pre>

<p>It produces:</p>

<pre><code>Reference Age           85     90    95   100
Cumulative NCO          25     60    40    25
Outstanding Balance  16500  13000  6500  1000
</code></pre>
python|pandas|pandas-groupby
2
1,333
56,611,648
CUMSUM addition as below
<p>i have to calculate the cumsum addition in the below. A should be blank. B should be as it, c should 31 + 30 = 61, previous item and addition of present item, D = 61 + 31 = 92 and so on. </p> <p>data: </p> <pre><code> 0 1 cumsum 1 A 31 2 B 31 31 3 C 30 61 4 D 31 92 5 E 30 122 6 F 31 153 7 G 31 184 8 H 30 214 9 I 31 245 10 J 30 276 my code: data['cumsum'] = data[1].cumsum() data 0 1 cumsum 1 A 31 31 2 B 31 61 3 C 30 92 4 D 31 122 5 E 30 153 6 F 31 184 7 G 31 214 8 H 30 245 9 I 31 276 10 J 30 306 </code></pre> <p>i need the expected output as below: </p> <pre><code>0 1 cumsum 1 A 31 2 B 31 31 3 C 30 61 4 D 31 92 5 E 30 122 6 F 31 153 7 G 31 184 8 H 30 214 9 I 31 245 10 J 30 276 my code: data['cumsum'] = data[1].cumsum() data </code></pre>
<p>I think you need </p> <pre><code>df['1'].shift(-1).cumsum().shift(1) 1 NaN 2 31.0 3 61.0 4 92.0 5 122.0 6 153.0 7 184.0 8 214.0 9 245.0 10 275.0 Name: 1, dtype: float64 </code></pre>
python|pandas|python-2.7
3
1,334
25,596,639
Get the expected array with SciPy's Fisher exact test?
<p>SciPy allows you to conduct both chi square tests and Fisher exact tests. While the output of the chi square test includes the expected array, the Fisher exact does not. </p> <p>e.g.:</p> <pre><code>from scipy import stats import numpy as np obs = np.array( [[1100,6848], [11860,75292]]) stats.chi2_contingency(obs) </code></pre> <p>returns:</p> <pre><code>(0.31240019935827701, 0.57621104841277448, 1L, array([[ 1083.13438486, 6864.86561514], [ 11876.86561514, 75275.13438486]])) </code></pre> <p>while:</p> <pre><code>from scipy import stats oddsratio, pvalue = stats.fisher_exact([[1100,6848], [11860,75292]]) print pvalue, oddsratio </code></pre> <p>returns:</p> <pre><code>0.561533439157 1.01974850672 </code></pre> <p>The <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html#scipy.stats.fisher_exact" rel="nofollow">documentation</a> says nothing, and I couldn't find anything online either. Any chance it's possible? Thanks!</p>
<p>Fisher's exact test (<a href="http://en.wikipedia.org/wiki/Fisher%27s_exact_test" rel="nofollow">http://en.wikipedia.org/wiki/Fisher%27s_exact_test</a>) doesn't involve computing an expected array. That's why <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html" rel="nofollow"><code>fisher_exact()</code></a> doesn't return one.</p> <p>If you need the expected array, it is the same as that returned by <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html" rel="nofollow"><code>chi2_contingency</code></a>. If you want to compute it without calling <code>chi2_contingency</code>, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.contingency.expected_freq.html" rel="nofollow"><code>scipy.stats.contingency.expected_freq</code></a>. For example:</p> <pre><code>In [40]: obs Out[40]: array([[ 1100, 6848], [11860, 75292]]) In [41]: from scipy.stats.contingency import expected_freq In [42]: expected_freq(obs) Out[42]: array([[ 1083.13438486, 6864.86561514], [ 11876.86561514, 75275.13438486]]) </code></pre>
python|numpy|scipy
2
1,335
25,936,899
Select a subset of a Pandas DataFrame based on a list of criteria built from another DataFrame
<p>Suppose we have the following DataFrame</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df_org = pd.DataFrame({'A' : [1,2,3,4,5,6], 'B' : [1,1,1,1,2,2], 'C' : [1,2,3,4,1,2]}) A B C 0 1 1 1 1 2 1 2 2 3 1 3 3 4 1 4 4 5 2 1 5 6 2 2 </code></pre> <p>And this another one, <code>df_criteria</code>, that has some of the columns of <code>df_org</code> and from which we will build our criteria. For instance:</p> <pre><code>&gt;&gt;&gt; df_criteria = pd.DataFrame({'B' : [1,2], 'C' : [1,1]}) B C 0 1 1 1 2 1 </code></pre> <p>I'd like to be able to fetch the value of <code>A</code> in the <code>df_org</code> DataFrame for which the corresponding values of the <code>B</code> and <code>C</code> match the ones listed in the <code>df_criteria</code> DataFrame. In this examples, I would like to have a subset of <code>df_org</code> that contains its rows '0' and '4', like so:</p> <pre><code> A B C 0 1 1 1 4 5 2 1 </code></pre> <p>Being a newbie in pandas, the way I've implemented this is using the <code>for</code>-loop mindset: by iterating over the rows of <code>df_criteria</code> and querying <code>df_org</code> for each row. However, this is very slow and I have the impression that there must be a more pythonic (and faster) way that does not make use of <code>for</code>-loops. I've also explored the use of <code>DataFrame.lookup</code>, however it is not useful in my case because the indices in <code>df_criteria</code> and <code>df_org</code> do not necessarily match.</p> <p>Any suggestion would be very much appreciated. Many thanks!</p>
<p>A simple inner merge would work:</p> <pre><code>In [285]: df_org.merge(df_criteria, on=['B','C']) Out[285]: A B C 0 1 1 1 1 5 2 1 </code></pre>
python|pandas|dataframe
7
1,336
26,390,895
Why isn't pip updating my numpy and scipy?
<p>My problem is that pip won't update my Python Packages, even though there are no errors. </p> <p>It is similar to <a href="https://stackoverflow.com/questions/21473600/matplotlib-version">this one</a>, but I am still now sure what to do. Basically, ALL my packages for python appear to be ridiculously outdated, even after updating everything via pip. Here are the details:</p> <ul> <li>I am using pip, version 1.5.6. </li> <li>I am using Python, version 2.7.5</li> <li>I am on a Mac OSX, verion 10.9.5.</li> </ul> <p>Using that, I have:</p> <ul> <li>My numpy version is 1.6.2.</li> <li>My scipy version is 0.11.0.</li> <li>My matplotlib version is 1.1.1.</li> </ul> <p>Even after I try:</p> <pre><code>sudo pip uninstall numpy </code></pre> <p>Followed by:</p> <pre><code>sudo pip install numpy </code></pre> <p>They both complete successfully, but when I go into python and check the version of numpy, it is still the old one. (As are all the other packages).</p> <p>Not sure what is going on here?... How can this be fixed? P.S. I am new to this, so I might need explicit instructions. Thanks. Also, if anyone wants, I can provide a screenshot of pip as it is installing numpy.</p> <p><strong>EDIT:</strong></p> <p>Commands I ran as per the comments:</p> <pre><code>$which -a pip /usr/local/bin/pip $ head -1 $(which pip) #!/usr/bin/python $ which -a python /usr/bin/python </code></pre>
<p>In OS X 10.9, Apple's Python comes with a bunch of pre-installed extra packages, in a directory named <code>/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python</code>. Including <code>numpy</code>.</p> <p>And the way they're installed (as if by using <code>easy_install</code> with an ancient pre-0.7 version of <code>setuptools</code>, but not into either of the normal <code>easy_install</code> destinations), <code>pip</code> doesn't know anything about them.</p> <p>So, what happens is that <code>sudo pip install numpy</code> installs a separate copy of <code>numpy</code> into <code>'/Library/Python/2.7/site-packages'</code>—but in your <code>sys.path</code>, the <code>Extras</code> directory comes before the <code>site-packages</code> directory, so <code>import numpy</code> still finds Apple's copy. I'm not sure why that is, but it's probably not something you want to monkey with.</p> <hr> <p>So, how do you fix this?</p> <p>The two best solutions are:</p> <ul> <li><p>Use <a href="https://pypi.python.org/pypi/virtualenv" rel="noreferrer"><code>virtualenv</code></a>, and install your <code>numpy</code> and friends into a virtual environment, instead of system-wide. This has the downside that you have to learn how to use <code>virtualenv</code>—but that's definitely worth doing at some point, and if you have the time to learn it now, go for it.</p></li> <li><p>Upgrade to Python 3.x, either from a python.org installer or via Homebrew. Python 3.4 or later comes with <code>pip</code>, and doesn't come with any <code>pip</code>-unfriendly pre-installed packages. And, unlike installing a separate 2.7, it doesn't interfere with Apple's Python at all; <code>python3</code> and <code>python</code>, <code>pip3</code> and <code>pip</code>, etc., will all be separate programs, and you don't have to learn anything about how PATH works or any of that. This has the downside that you have to learn Python 3.x, which has <a href="https://docs.python.org/3/whatsnew/3.0.html" rel="noreferrer">some major changes</a>, so again, a bit of a learning curve, but again, definitely worth doing at some point.</p></li> </ul> <hr> <p>Assuming neither of those is possible, I think the simplest option is to use <code>easy_install</code> instead of <code>pip</code>, for the packages you want to install newer versions of any of Apple's "extras". You can get a full list of those by looking at what's in <code>/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python</code>. When you upgrade <code>numpy</code>, you probably also want to upgrade <code>scipy</code> and <code>matplotlib</code>; I think everything else there is unrelated. (You can of course upgrade <code>PyObjC</code> or <code>dateutil</code> or anything else you care about there, but you don't have to.)</p> <p>This isn't an ideal solution; there are a lot of reasons <code>easy_install</code> is inferior to <code>pip</code> (e.g., not having an uninstaller, so you're going to have to remember where that <code>/Library/blah/blah</code> path is (or find it again by printout out <code>sys.path</code> from inside Python). I wouldn't normally suggest <code>easy_install</code> for anything except <code>readline</code> and <code>pip</code> itself (and then only with Apple's Python). But in this case, I think it's simpler than the other alternatives.</p>
python|macos|numpy|pip|package-managers
14
1,337
66,829,674
Can't create pandas DataFrame with MultiIndex columns from dicts with tuples as columns
<p>I have this Python data structure:</p> <pre><code>a = [ { ('Temperature', 'C'): 25, ('Temperature', 'F'): 77 }, { ('Temperature', 'C'): 30, ('Temperature', 'F'): 86 } ] </code></pre> <p>I try to convert this data structure to a tab separated string like this, having 2 rows for header:</p> <pre><code>Temperature Temperature C F 25 77 30 86 </code></pre> <p>In order to do this, I try to convert this data structure first into a DataFrame with MultiIndex columns. But I cannot do it using <code>DataFrame.from_records(a)</code> method because it returns a single index column with tuples as the names. How can I achieve MultiIndex columns instead?</p> <p>By the way, I can do the reverse. When I have a tab separated file like above, I can read it with:</p> <pre><code>a_as_dataframe = pd.read_csv(r&quot;C:\my_tab_separated_file.tsv&quot;, sep=&quot;\t&quot;, header=[0, 1]) </code></pre> <p>Then convert it to records like this:</p> <pre><code>a = a_as_dataframe.to_dict(&quot;records&quot;) </code></pre> <p>What I want to achieve is the reverse of this.</p>
<p>How about using <code>MultiIndex.from_tuples</code></p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(a) df.columns = pd.MultiIndex.from_tuples(df.columns) </code></pre> <p><code>df</code> is now:</p> <pre><code> Temperature C F 0 25 77 1 30 86 </code></pre> <pre><code>&gt;&gt;&gt; df.equals(a_as_dataframe) True </code></pre>
python|pandas
2
1,338
67,113,949
make pandas recognize a list containing column names the same of of the columns in its dataframe?
<p>Let's say I have a list called x</p> <pre><code>x = ['Sales', 'Total', 'Quantity'] </code></pre> <p>and I have an excel dataframe with columns named 'Employee', 'Age', 'Sex', 'Sales' , 'Quantity' and 'Total'. How do I make pandas only pick the columns of the dataframe that have the same name as of those in the list?</p>
<p>Just do:</p> <pre><code>x = ['Sales', 'Total', 'Quantity'] df = df[x] </code></pre> <p>Since <code>x</code> is already a list of columns, use it inside <code>single-brackets</code> to subset the dataframe.</p> <p><strong>OR</strong> use <code>Index.intersection</code>:</p> <pre><code>df = df[df.columns.intersection(x)] </code></pre>
python|pandas
2
1,339
67,128,738
Sort and Filter Pandas Dataframe in the most efficient manner
<p>I want to filter by the column name 'duration' and then display values greater than 200. This is just a snippet of the dataset. I have a very huge dataset. I can use df[df.duration &gt; 200]. However, this runs on the entire dataframe. Is there any way in which I can specifically target the column duration and then filter the data and display only the column duration without introducing the new dataframe. Also some explanation related to optimization of the same in huge datasets(working environment) would be helpful.</p> <pre><code>import pandas as pd data = { &quot;calories&quot;: [420, 380, 390,100], &quot;duration&quot;: [50, 40, 45,300] } df = pd.DataFrame(data) </code></pre>
<p>Using pandas, I think <code>df[df.duration &gt; 200]</code> is already among the best choices: the boolean mask is evaluated on the whole column as a vectorized operation rather than row by row, so it scales well to large frames. I'm eager to compare it with any alternatives.</p>
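<p>If the point is to display only the filtered <code>duration</code> values rather than the whole rows, <code>loc</code> lets you combine the row mask with a column selection:</p>

<pre><code># only the 'duration' column, restricted to rows where it exceeds 200
durations_over_200 = df.loc[df['duration'] &gt; 200, 'duration']
print(durations_over_200)
</code></pre>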
python|pandas
0
1,340
10,907,917
Python numpy addition error
<p>I'm getting a very odd error using a basic shortcut method in python. It seems, unless I'm being very stupid, I get different values for A = A + B, and A += B. Here is my code:</p> <pre><code>def variance(phi,sigma,numberOfIterations): variance = sigma for k in range(1,numberOfIterations): phik = np.linalg.matrix_power(phi,k) variance = variance + phik*sigma*phik.T return variance </code></pre> <p>This basically just calculates the covariance of a vector autoregression. So for:</p> <pre><code>phi = np.matrix('0.7 0.2 -0.1; 0.001 0.8 0.1; 0.001 0.002 0.9') sigma = np.matrix('0.07 0.01 0.001; 0.01 0.05 0.004; 0.001 0.004 0.01') </code></pre> <p>I get:</p> <pre><code>variance(phi,sigma,10) = [[ 0.1825225 0.07054728 0.00430524] [ 0.07054728 0.14837229 0.02659357] [ 0.00430524 0.02659357 0.04657858]] </code></pre> <p>This is correct I believe (agrees with Matlab). Now if I change the line above to </p> <pre><code>variance += phik*sigma*(phik.T) </code></pre> <p>I get:</p> <pre><code>variance(phi,sigma,10) = [[ 0.34537165 0.20258329 0.04365378] [ 0.20258329 0.33471052 0.1529369 ] [ 0.04365378 0.1529369 0.19684553]] </code></pre> <p>Whats going on?</p> <p>Many thanks</p> <p>Dan</p>
<p>The culprit is:</p> <pre><code>variance = sigma </code></pre> <p>If you change that to:</p> <pre><code>variance = sigma.copy() </code></pre> <p>You'll see the correct result.</p> <p>This is because <code>+=</code> actually performs a (more efficient) in-place addition… And since both <code>variance</code> and <code>sigma</code> reference the same array, <em>both</em> will be updated. For example:</p> <pre><code>&gt;&gt;&gt; sigma = np.array([1]) &gt;&gt;&gt; variance = sigma &gt;&gt;&gt; variance += 3 &gt;&gt;&gt; sigma array([4]) </code></pre>
python|numpy
7
1,341
68,441,433
Get time data was submitted from Yfinance
<p>I am working on a project using the Yfinance module to get information from the stock market. My problem is, I need the date/time that the data was submitted, and I don't know how to access it. There is nothing about this that I can find on the documentation, but i know it is there because when i run:</p> <pre><code>import yfinance as yf data = yf.download(tickers='UBER', period='5d', interval='5m') print(data[&quot;Close&quot;]) </code></pre> <p>it outputs:</p> <pre><code>Datetime 2021-07-13 09:30:00-04:00 48.349998 2021-07-13 09:35:00-04:00 48.099998 2021-07-13 09:40:00-04:00 47.965000 2021-07-13 09:45:00-04:00 48.021500 2021-07-13 09:50:00-04:00 48.040001 </code></pre> <p>The Datetime and the corresponding data. Does anyone know how to access the datetime and store it as a variable?</p> <p>Many thanks!</p>
<p>The timestamps are the DataFrame's index, so you can grab them as a <code>DatetimeIndex</code> and store that in a variable (wrap it in <code>list(...)</code> if you need a plain list):</p>

<pre><code>import yfinance as yf

data = yf.download(tickers='UBER', period='5d', interval='5m')

stored_datetime = data.index  # or list(data.index) for a plain Python list
print(stored_datetime)
</code></pre>

<p>Output:</p>

<pre><code>DatetimeIndex(['2021-07-16 09:30:00-04:00', '2021-07-16 09:35:00-04:00',
               '2021-07-16 09:40:00-04:00', '2021-07-16 09:45:00-04:00',
               '2021-07-16 09:50:00-04:00', '2021-07-16 09:55:00-04:00',
               '2021-07-16 10:00:00-04:00', '2021-07-16 10:05:00-04:00',
               '2021-07-16 10:10:00-04:00', '2021-07-16 10:15:00-04:00',
               ...
               '2021-07-22 09:30:00-04:00', '2021-07-22 09:35:00-04:00',
               '2021-07-22 09:40:00-04:00', '2021-07-22 09:45:00-04:00',
               '2021-07-22 09:50:00-04:00', '2021-07-22 09:55:00-04:00',
               '2021-07-22 10:00:00-04:00', '2021-07-22 10:05:00-04:00',
               '2021-07-22 10:10:00-04:00', '2021-07-22 10:13:05-04:00'],
              dtype='datetime64[ns, America/New_York]', name='Datetime', length=322, freq=None)
</code></pre>

<p>You can also get the datetime for a specific row by position with <code>data.index[row_number]</code>:</p>

<pre><code>print(data.index[0])
</code></pre>

<p>Output:</p>

<pre><code>2021-07-16 09:30:00-04:00
</code></pre>
python|pandas|datetime|yfinance
0
1,342
59,273,860
How can I change my code so that string is NOT changed to float
<p>I am trying to write a code that detects fake news. Unfortunately, I keep getting the same error message. Please could someone explain where I've gone wrong? I have got some lines of codes from <a href="https://data-flair.training/blogs/advanced-python-project-detecting-fake-news/" rel="nofollow noreferrer">https://data-flair.training/blogs/advanced-python-project-detecting-fake-news/</a> and some lines of code from <a href="https://www.datacamp.com/community/tutorials/text-analytics-beginners-nltk" rel="nofollow noreferrer">https://www.datacamp.com/community/tutorials/text-analytics-beginners-nltk</a>. When I tried to combine the two different codes (by getting rid of duplicate codes), I receive an error message.</p> <p><strong>THE CODE</strong></p> <pre><code>%matplotlib inline import pandas as pd from pandas import DataFrame import matplotlib.pyplot as plt import matplotlib.patches as mpatches import itertools import json import csv from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import PassiveAggressiveClassifier from sklearn.metrics import accuracy_score, confusion_matrix from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import MultinomialNB from sklearn import metrics df = pd.read_csv(r"C:\Users\johnrambo\Downloads\fake_news(1).csv", sep=',', header=0, engine='python', escapechar='\\') X_train, X_test, y_train, y_test = train_test_split(df['headline'], is_sarcastic_1, test_size = 0.2, random_state = 7) clf = MultinomialNB().fit(X_train, y_train) predicted = clf.predict(X_test) print("MultinomialNB Accuracy:", metrics.accuracy_score(y_test, predicted)) </code></pre> <hr> <p><strong>THE ERROR</strong></p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-8-e1f11a702626&gt; in &lt;module&gt; 21 X_train, X_test, y_train, y_test = train_test_split(df['headline'], is_sarcastic_1, test_size = 0.2, random_state = 7) 22 ---&gt; 23 clf = MultinomialNB().fit(X_train, y_train) 24 25 predicted = clf.predict(X_test) ~\Anaconda\lib\site-packages\sklearn\naive_bayes.py in fit(self, X, y, sample_weight) 586 self : object 587 """ --&gt; 588 X, y = check_X_y(X, y, 'csr') 589 _, n_features = X.shape 590 ~\Anaconda\lib\site-packages\sklearn\utils\validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator) 717 ensure_min_features=ensure_min_features, 718 warn_on_dtype=warn_on_dtype, --&gt; 719 estimator=estimator) 720 if multi_output: 721 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False, ~\Anaconda\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 494 try: 495 warnings.simplefilter('error', ComplexWarning) --&gt; 496 array = np.asarray(array, dtype=dtype, order=order) 497 except ComplexWarning: 498 raise ValueError("Complex data not supported\n" ~\Anaconda\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order) 536 537 """ --&gt; 538 return array(a, dtype, copy=False, order=order) 539 540 ~\Anaconda\lib\site-packages\pandas\core\series.py in __array__(self, dtype) 946 warnings.warn(msg, FutureWarning, stacklevel=3) 
947 dtype = "M8[ns]" --&gt; 948 return np.asarray(self.array, dtype) 949 950 # ---------------------------------------------------------------------- ~\Anaconda\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order) 536 537 """ --&gt; 538 return array(a, dtype, copy=False, order=order) 539 540 ~\Anaconda\lib\site-packages\pandas\core\arrays\numpy_.py in __array__(self, dtype) 164 165 def __array__(self, dtype=None): --&gt; 166 return np.asarray(self._ndarray, dtype=dtype) 167 168 _HANDLED_TYPES = (np.ndarray, numbers.Number) ~\Anaconda\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order) 536 537 """ --&gt; 538 return array(a, dtype, copy=False, order=order) 539 540 ValueError: could not convert string to float: 'experts caution new car loses 90% of value as soon as you drive it off cliff' </code></pre> <hr> <p>FIRST FEW LINES OF DATA </p> <p><a href="https://i.stack.imgur.com/BgyG0.png" rel="nofollow noreferrer">Excel file: fake news</a></p> <p><strong><em>This is what I get when I input df.head().to_dict() :</em></strong> </p> <p>{'is_sarcastic': {0: 1, 1: 0, 2: 0, 3: 1, 4: 1}, 'headline': {0: 'thirtysomething scientists unveil doomsday clock of hair loss', 1: 'dem rep. totally nails why congress is falling short on gender, racial equality', 2: 'eat your veggies: 9 deliciously different recipes', 3: 'inclement weather prevents liar from getting to work', 4: "mother comes pretty close to using word 'streaming' correctly"}, 'article_link': {0: '<a href="https://www.theonion.com/thirtysomething-scientists-unveil-doomsday-clock-of-hai-1819586205" rel="nofollow noreferrer">https://www.theonion.com/thirtysomething-scientists-unveil-doomsday-clock-of-hai-1819586205</a>', 1: '<a href="https://www.huffingtonpost.com/entry/donna-edwards-inequality_us_57455f7fe4b055bb1170b207" rel="nofollow noreferrer">https://www.huffingtonpost.com/entry/donna-edwards-inequality_us_57455f7fe4b055bb1170b207</a>', 2: '<a href="https://www.huffingtonpost.com/entry/eat-your-veggies-9-delici_b_8899742.html" rel="nofollow noreferrer">https://www.huffingtonpost.com/entry/eat-your-veggies-9-delici_b_8899742.html</a>', 3: '<a href="https://local.theonion.com/inclement-weather-prevents-liar-from-getting-to-work-1819576031" rel="nofollow noreferrer">https://local.theonion.com/inclement-weather-prevents-liar-from-getting-to-work-1819576031</a>', 4: '<a href="https://www.theonion.com/mother-comes-pretty-close-to-using-word-streaming-cor-1819575546" rel="nofollow noreferrer">https://www.theonion.com/mother-comes-pretty-close-to-using-word-streaming-cor-1819575546</a>'}}</p>
<p>I imagine you have text data in the <code>df['headline']</code> column. You first need to convert that text into a numeric representation, and only then pass it to the machine learning model.</p> <p>You might want to refer to sklearn's <code>CountVectorizer</code> and <code>TfidfTransformer</code> <a href="https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html" rel="nofollow noreferrer">here</a>.</p>
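<p>A minimal sketch of that pipeline (assuming the label column is <code>is_sarcastic</code>, as shown in the <code>df.head().to_dict()</code> output above, and that <code>df</code> is the frame already loaded in the question):</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn import metrics

X_train, X_test, y_train, y_test = train_test_split(
    df['headline'], df['is_sarcastic'], test_size=0.2, random_state=7)

vect = TfidfVectorizer(stop_words='english')
X_train_vec = vect.fit_transform(X_train)   # learn the vocabulary on the training text only
X_test_vec = vect.transform(X_test)         # reuse that vocabulary for the test text

clf = MultinomialNB().fit(X_train_vec, y_train)
predicted = clf.predict(X_test_vec)
print("MultinomialNB Accuracy:", metrics.accuracy_score(y_test, predicted))
</code></pre>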
python|pandas|numpy|scikit-learn
1
1,343
44,991,076
One line solution for editing a numpy array of counts? (python)
<p>I want to make a numpy array that contains how many times a value (between 1-3) occurs at a specific location. For example, if I have:</p> <pre><code>a = np.array([[1,2,3], [3,2,1], [2,1,3], [1,1,1]]) </code></pre> <p>I want to get back an array like so:</p> <pre><code>[[[ 1 0 0] [ 0 1 0] [ 0 0 1]] [[ 0 0 1] [ 0 1 0] [ 1 0 0]] [[ 0 1 0] [ 1 0 0] [ 0 0 1]] [[ 1 0 0] [ 1 0 0] [ 1 0 0]]] </code></pre> <p>Where the array tells me that 1 occurs once in the first position, 2 occurs once in the second position, 3 occurs once in the third position, 1 occurs once in the fourth position, etc. Later, I'll have more input arrays of the same dimensions, and I would like to add on the totals of the values to this array of counts.</p> <p>The code I have right now is:</p> <pre><code>a = np.array([[1,2,3], [3,2,1], [2,1,3], [1,1,1]]) cumulative = np.zeros((4,3,3)) for r in range(len(cumulative)): for c in range(len(cumulative[0])): cumulative[r, c, a[r,c]-1] +=1 </code></pre> <p>This does give me the output I want. However, I would like to condense the for loops into one line, using a line similar to this:</p> <pre><code>cumulative[:, :, a[:, :]-1] +=1 </code></pre> <p>This line doesn't work, and I can't find anything online on how to perform this operation. Any suggestions?</p>
<p>IIUC, you could take advantage of broadcasting:</p> <pre><code>In [93]: ((a[:, None] - 1) == np.arange(3)[:, None]).swapaxes(2, 1).astype(int) Out[93]: array([[[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[0, 0, 1], [0, 1, 0], [1, 0, 0]], [[0, 1, 0], [1, 0, 0], [0, 0, 1]], [[1, 0, 0], [1, 0, 0], [1, 0, 0]]]) </code></pre>
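<p>If the later input arrays mentioned in the question need to be accumulated into a running total, a sketch along the same broadcasting lines (not part of the original answer) is:</p> <pre><code>import numpy as np

a = np.array([[1, 2, 3], [3, 2, 1], [2, 1, 3], [1, 1, 1]])
cumulative = np.zeros((4, 3, 3), dtype=int)
# add this batch's one-hot counts; repeat this line for each new input array
cumulative += ((a - 1)[..., None] == np.arange(3)).astype(int)
</code></pre>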
python|arrays|numpy
3
1,344
45,126,821
Saving confusion matrix
<p>Is there any possibility to save the confusion matrix which is generated by <code>sklearn.metrics</code>? </p> <p>I would like to save multiple results of different classification algorithms in an array or maybe a pandas data frame so I can show which algorithm works best.</p> <pre><code>print('Neural net: \n',confusion_matrix(Y_test, Y_pred), sep=' ') </code></pre> <p>How could I save the generated confusion matrix within a loop? (I am training over a set of 200 different target variables)</p> <pre><code>array[i] = confusion_matrix(Y_test,Y_pred) </code></pre> <p>I run into some definition problems here [array is not defined whereas in the non [i] - version it runs smoothly]</p> <p>Additionally, I am normalizing the confusion matrix. How could I print out the average result of the confusion matrix after the whole loop? (average of the 200 different confusion matrices)</p> <p>I am not that fluent with python yet.</p>
<p>First, the "array is not defined" problem. In Python a list is declared as:</p> <pre><code>array = []
</code></pre> <p>Since no size is given at declaration, no slots are allocated, so you cannot assign to a position that does not exist yet:</p> <pre><code>array[i] = some_value  # IndexError: list assignment index out of range
</code></pre> <p>So if you know the required size, pre-fill the list with zeros at declaration and then index into it, or use the <code>array.append()</code> method inside the loop.</p> <p>Now for saving the confusion matrices: since <code>confusion_matrix</code> returns a 2-D array and you need to save 200 of them, use a 3-D array.</p> <pre><code>import numpy as np

n_classes = len(np.unique(Y_test))
matrix_result = np.zeros((200, n_classes, n_classes))
for i in range(200):
    # compute Y_test, Y_pred for target variable i here
    matrix_result[i] = confusion_matrix(Y_test, Y_pred)
</code></pre> <p>For averaging:</p> <pre><code>matrix_result_average = matrix_result.mean(axis=0)
</code></pre>
python|pandas|dataframe|confusion-matrix
1
1,345
45,255,167
Use numpy.argwhere to obtain the matching values in an np.array
<p>I'd like to use <code>np.argwhere()</code> to obtain the values in an <code>np.array</code>.</p> <p>For example:</p> <pre><code>z = np.arange(9).reshape(3,3) [[0 1 2] [3 4 5] [6 7 8]] zi = np.argwhere(z % 3 == 0) [[0 0] [1 0] [2 0]] </code></pre> <p>I want this array: <code>[0, 3, 6]</code> and did this:</p> <p><code>t = [z[tuple(i)] for i in zi] # -&gt; [0, 3, 6]</code></p> <p>I assume there is an easier way. </p>
<p>Why not simply use masking here:</p> <pre><code>z<b>[z % 3 == 0]</b></code></pre> <p>For your sample matrix, this will generate:</p> <pre><code>&gt;&gt;&gt; z[z % 3 == 0] array([0, 3, 6]) </code></pre> <p>If you pass a matrix with the same dimensions with booleans as indices, you get an array with the elements of that matrix where the boolean matrix is <code>True</code>.</p> <p>This will furthermore work more efficient, since you do the filtering at the numpy level (whereas list comprehension works at the Python interpreter level).</p>
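<p>If the <code>argwhere</code> indices themselves are needed later, they can still be used without the list comprehension by splitting them into row and column index arrays (a small addition, not required for the masking approach above):</p> <pre><code>zi = np.argwhere(z % 3 == 0)
z[zi[:, 0], zi[:, 1]]   # array([0, 3, 6])
</code></pre>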
python|numpy
14
1,346
44,874,061
Comparing cell values with an integer in pandas giving TypeError
<p>I have been trying to compare values of each cell in a row with an integer like so:</p> <pre><code>df.loc[df['A'] &lt;= 14, 'A'] </code></pre> <p>All rows whose values are less than or equal to 14, but it shows an error like:</p> <blockquote> <pre><code>TypeError : '&lt;=' is not supported between instances of str and int </code></pre> </blockquote>
<p>You need to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>replace</code></a> the <code>,</code> with an empty string and convert to <code>int</code>:</p> <pre><code>df = pd.DataFrame({'A':['1,473','1,473','1,4', '1,2'], 'B':[2,4,5,5]})
print (df)
       A  B
0  1,473  2
1  1,473  4
2    1,4  5
3    1,2  5

df['A'] = df['A'].str.replace(',', '').astype(int)
s = df.loc[df['A'] &lt;= 14, 'A']
print (s)
2    14
3    12
Name: A, dtype: int32
</code></pre> <hr> <p>If you use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> to build the <code>DataFrame</code>, add the parameter <code>thousands=','</code>:</p> <pre><code>df = pd.read_csv(file, thousands=',', sep=';')
</code></pre> <p>Sample:</p> <pre><code>import pandas as pd
from pandas.compat import StringIO

temp=u"""A;B
1,473;2
1,473;4
1,4;5
1,2;5"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp), thousands=',', sep=';')
print (df)
      A  B
0  1473  2
1  1473  4
2    14  5
3    12  5

s = df.loc[df['A'] &lt;= 14, 'A']
print (s)
2    14
3    12
Name: A, dtype: int64
</code></pre>
python|pandas
1
1,347
56,896,712
How to represent the Null class in Multilabel Classification with Convolutional Neural Nets?
<p>I'm trying to label images with the various categories that they belong to with a convolutional neural net. For my problem, the image can be in a single category, multiple categories, or zero categories. Is it standard practice to set the zero category as all zeroes or should I add an additional null class neuron to the final layer?</p> <p>As an example, let's say there are 5 categories (not including the null class). Currently, I'm representing that with [0,0,0,0,0]. The alternative is adding a null category, which would look like [0,0,0,0,0,1]. Won't there also be some additional unnecessary parameters in this second case or will this allow the model to perform better?</p> <p>I've looked on Stackoverflow for similar questions, but they pertain to Multiclass Classification, which uses the Categorical Crossentropy with softmax output instead of Binary Crossentropy with Sigmoid output, so the obvious choice there is to add the null class (or to do thresholding).</p>
<p>Yes, the "null" category should be represented as just zeros. In the end multi-label classification is a set of C binary classification problems, where C is the number of classes, and if all C problems output "no class", then you get a vector of just zeros.</p>
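<p>A minimal Keras sketch of such a head (the layer sizes and the 2048-dim feature input are hypothetical), where the "null" example is simply the all-zeros target vector <code>[0, 0, 0, 0, 0]</code>:</p> <pre><code>from tensorflow import keras

num_classes = 5
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(2048,)),  # hypothetical feature size
    keras.layers.Dense(num_classes, activation='sigmoid'),            # one independent binary output per class
])
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>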
python|tensorflow|machine-learning|keras
5
1,348
57,279,754
What are the Tensorflow qint8, quint8, qint32, qint16, and quint16 datatypes?
<p>I'm looking at the Tensorflow tf.nn.quantized_conv2d function and I'm wondering what exactly the qint8, etc. dataypes are, particularly if they are the datatypes used for the "fake quantization nodes" in tf.contrib.quantize or are actually stored using 8 bits (for qint8) in memory.</p> <p>I know that they are defined in tf.dtypes.DType, but that doesn't have any information about what they actually are.</p>
<p>These are the data types of the output <code>Tensor</code> of the function <code>tf.quantization.quantize()</code>. They correspond to the argument <code>T</code> of that function.</p> <p>Below is the underlying formula, which converts/quantizes a Tensor from one data type (e.g. <code>float32</code>) to another (<code>tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16</code>):</p> <pre><code>out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0
</code></pre> <p>The resulting quantized Tensors can then be passed to functions like <code>tf.nn.quantized_conv2d</code>, whose input is a quantized Tensor, as explained above.</p> <p><strong>TLDR:</strong> they are actually stored using 8 bits (for <code>qint8</code>) in memory.</p> <p>You can find more information about this topic in the links below:</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/quantization/quantize" rel="noreferrer">https://www.tensorflow.org/api_docs/python/tf/quantization/quantize</a></p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/nn/quantized_conv2d" rel="noreferrer">https://www.tensorflow.org/api_docs/python/tf/nn/quantized_conv2d</a></p> <p><a href="https://www.tensorflow.org/lite/performance/post_training_quantization" rel="noreferrer">https://www.tensorflow.org/lite/performance/post_training_quantization</a></p>
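<p>A small sketch of that quantization step (the values are only illustrative); <code>tf.quantization.quantize</code> returns a tuple of <code>(output, output_min, output_max)</code>:</p> <pre><code>import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0, 2.0], dtype=tf.float32)
q = tf.quantization.quantize(x, min_range=-1.0, max_range=2.0, T=tf.quint8)
print(q.output.dtype)   # tf.quint8 -- each element is stored in 8 bits
</code></pre>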
python|tensorflow|neural-network|tensorflow-lite|quantization
6
1,349
57,037,252
Updating column in for loop using merge
<p>Hi, </p> <p>I have two dataframes and I want to loop through subsets of my first DF and merge values to my second DF. </p> <p>My data looks like: </p> <pre><code> DF1 product survey_id X1 survey_1 x2 survey_1 x3 survey_2 x4 survey_3 x5 survey_3 x1 survey_3 : : x(i) survey(j) </code></pre> <p>My second DF contains the same products (only appear once/unique in DF2) and I have added an empty column to put the survey number in. </p> <pre><code>DF2 product survey_id x1 nan x2 nan : : : : x(i) nan </code></pre> <p>What I want to do is take a subset of DF1 for each survey and merge them to DF2 so that should a product appear more than once, the most recent survey_id will appear in the survey_id column: </p> <pre><code>surveys = DF1['survey_id'].unique() for survey in surveys: DF2 = DF2.merge(DF1['survey_id'] == survey], how='left', on='product') </code></pre> <p>If I sort the survey list I will be able to merge the survey data on in chronological order. From there I want to merge/fill the survey_id column with each iteration, overwriting the survey_id value should the product appear more than once. </p> <p>I was hoping to take a subset of DF1 where, for example </p> <pre><code> DF1[DF1['survey_id']=='survey_1'] </code></pre> <p>and merge all this data to DF2. So where ever x(i) in DF1 and DF2 match we have</p> <pre><code> DF2['survey_id'] = 'survey_1' </code></pre> <p>The next iteration of this loop will use a subset where </p> <pre><code> DF1[DF1['survey_id']=='survey_2'] </code></pre> <p>and the survey_id values will be set to 'survey_2' where the products match. The survey_id should be overwritten or filled if it is still NaN</p> <p>EDIT: </p> <pre><code>output product survey_id X1 survey_3 x2 survey_1 x3 survey_2 x4 survey_3 x5 survey_3 </code></pre> <p>Not sure if merge is the best way to go about this. I was trying to use .loc but this doesn't seem to work either:</p> <pre><code> DF2['survey_id'] = DF1['survey_id'].loc[DF1['product'] == DF2['substance']] </code></pre>
<p>This is based on assumption: </p> <p><strong>for all product xi, we require survey_j such that j is maximum.</strong></p> <pre><code>&gt;&gt;&gt; data = {'product':['x1','x1','x2','x2','x2'], 'survey_id':['survey_1','survey_2','survey_1', 'survey_2', 'survey_3'] } &gt;&gt;&gt; df = pd.DataFrame(data) &gt;&gt;&gt; df product survey_id 0 x1 survey_1 1 x1 survey_2 2 x2 survey_1 3 x2 survey_2 4 x2 survey_3 &gt;&gt;&gt; df.groupby(['product'],as_index=False)['survey_id'].max() product survey_id 0 x1 survey_2 1 x2 survey_3 </code></pre>
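<p>To then write those values back into <code>DF2</code>'s <code>survey_id</code> column (a sketch, assuming <code>DF2</code> has a <code>product</code> column as described in the question):</p> <pre><code>latest = df.groupby('product')['survey_id'].max()      # Series indexed by product
DF2['survey_id'] = DF2['product'].map(latest)           # fill each product with its latest survey
</code></pre>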
python|sql|pandas|numpy|merge
0
1,350
56,942,827
How do I style a subset of a pandas dataframe?
<p>I previously asked <a href="https://stackoverflow.com/questions/56942320/how-do-i-style-only-the-last-row-of-a-pandas-dataframe">How do I style only the last row of a pandas dataframe?</a> and got a perfect answer to the toy problem that I gave. </p> <p>Turns out I should have made the toy problem a bit closer to my real problem. Consider a dataframe with more than 1 column of text data (which I can apply styling to):</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import seaborn as sns cm = sns.diverging_palette(-5, 5, as_cmap=True) df = pd.DataFrame(np.random.randn(3, 4)) df['text_column'] = 'a' df['second_text_column'] = 'b' df.style.background_gradient(cmap=cm) </code></pre> <p><a href="https://i.stack.imgur.com/kKu7l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kKu7l.png" alt="styling applied to all numeric data"></a></p> <p>However, like the previous question, <strong>I wish to only apply this styling to the last row</strong>. The answer to the previous question was: </p> <pre class="lang-py prettyprint-override"><code>df.style.background_gradient(cmap=cm, subset=df.index[-1]) </code></pre> <p>which in this case gives the error: </p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/miniconda/lib/python3.7/site-packages/IPython/core/formatters.py in __call__(self, obj) 343 method = get_real_method(obj, self.print_method) 344 if method is not None: --&gt; 345 return method() 346 return None 347 else: /usr/local/miniconda/lib/python3.7/site-packages/pandas/io/formats/style.py in _repr_html_(self) 161 Hooks into Jupyter notebook rich display system. 162 """ --&gt; 163 return self.render() 164 165 @Appender(_shared_docs['to_excel'] % dict( /usr/local/miniconda/lib/python3.7/site-packages/pandas/io/formats/style.py in render(self, **kwargs) 457 * table_attributes 458 """ --&gt; 459 self._compute() 460 # TODO: namespace all the pandas keys 461 d = self._translate() /usr/local/miniconda/lib/python3.7/site-packages/pandas/io/formats/style.py in _compute(self) 527 r = self 528 for func, args, kwargs in self._todo: --&gt; 529 r = func(self)(*args, **kwargs) 530 return r 531 /usr/local/miniconda/lib/python3.7/site-packages/pandas/io/formats/style.py in _apply(self, func, axis, subset, **kwargs) 536 if axis is not None: 537 result = data.apply(func, axis=axis, --&gt; 538 result_type='expand', **kwargs) 539 result.columns = data.columns 540 else: /usr/local/miniconda/lib/python3.7/site-packages/pandas/core/frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds) 6485 args=args, 6486 kwds=kwds) -&gt; 6487 return op.get_result() 6488 6489 def applymap(self, func): /usr/local/miniconda/lib/python3.7/site-packages/pandas/core/apply.py in get_result(self) 149 return self.apply_raw() 150 --&gt; 151 return self.apply_standard() 152 153 def apply_empty_result(self): /usr/local/miniconda/lib/python3.7/site-packages/pandas/core/apply.py in apply_standard(self) 255 256 # compute the result using the series generator --&gt; 257 self.apply_series_generator() 258 259 # wrap results /usr/local/miniconda/lib/python3.7/site-packages/pandas/core/apply.py in apply_series_generator(self) 284 try: 285 for i, v in enumerate(series_gen): --&gt; 286 results[i] = self.f(v) 287 keys.append(v.name) 288 except Exception as e: /usr/local/miniconda/lib/python3.7/site-packages/pandas/core/apply.py in f(x) 76 77 def f(x): ---&gt; 78 return 
func(x, *args, **kwds) 79 else: 80 f = func /usr/local/miniconda/lib/python3.7/site-packages/pandas/io/formats/style.py in _background_gradient(s, cmap, low, high, text_color_threshold) 941 smin = s.values.min() 942 smax = s.values.max() --&gt; 943 rng = smax - smin 944 # extend lower / upper bounds, compresses color range 945 norm = colors.Normalize(smin - (rng * low), smax + (rng * high)) TypeError: ("unsupported operand type(s) for -: 'str' and 'str'", 'occurred at index text_column') &lt;pandas.io.formats.style.Styler at 0x7f948dde7278&gt; </code></pre> <p>which seems to come from the fact that it's trying to do an operation to strings in the <code>text_column</code>. Fair enough. How do I tell it to only apply to the last row for all non-text columns? I'm ok with giving it explicit column names to use or avoid, but I don't know how to pass that into this inscrutable <code>subset</code> method.</p> <p>I am running:</p> <pre><code>python version 3.7.3 pandas version 0.24.2 </code></pre>
<p>Using a <code>tuple</code> for <code>subset</code> worked for me, but not sure if it is the most elegant solution:</p> <pre><code>df.style.background_gradient(cmap=cm, subset=(df.index[-1], df.select_dtypes(float).columns)) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/7phX0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7phX0.png" alt="enter image description here"></a></p>
python|pandas|dataframe
10
1,351
57,060,655
Error com_error: (-2147221005, 'Invalid class string', None, None) while writing dataframe into excel binary workbook
<p>I am trying to write a data frame into excel sheet(xlsb), which is having formulas, using xlwing library:</p> <pre><code>app = xw.App() book = xw.Book('ABC.xlsb') sheet = book.sheets('SL Dump') sheet.range('A1').values=final_merge_dataframe </code></pre> <p>After running the above code, I am getting this error:</p> <blockquote> <p>com_error: (-2147221005, 'Invalid class string', None, None)</p> </blockquote>
<p>I used to have the same problem:</p> <pre><code>dispatch = pythoncom.CoCreateInstanceEx(
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
</code></pre> <p>After installing MS Office on my machine the error was resolved. It seems <em>xlwings</em> requires some components from MS Office to communicate with; I did not find anything about this in the documentation, so a pointer to the relevant docs would be helpful.</p>
python|pandas
0
1,352
57,255,266
Search excel columns for matching text value, print row #s
<p>Here I grab the name and zip values from a different document; and store in variables: (works fine)</p> <pre><code> Name = find_name.group(0) </code></pre> <p><strong>Then I simply want to search my excel file to find a match; where the <code>Name</code> text value is found, get row number(s):</strong></p> <pre><code> data = pd.read_excel(config.Excel2) row_number = data[data['Member Name'].str.contains(Name)].index.min() print(row_number) </code></pre> <p>The above outputs the incorrect row number when printed, I cannot understand why. <em>i.e.</em> It does not print the row where the matching text value is found within my excel document. It prints an erroneous row number, that doesn't match the <code>Name</code>.</p> <p><strong>Then</strong>, I have tried something like this; but this doesn't output anything at all: <em>(outputs Key Error)</em></p> <pre><code> idx = data[data['Member Name'].str.contains(Name)].index row_number = idx[0] if len(idx)&gt;0 else None print(row_number) </code></pre> <p>Any thoughts on how to achieve this?</p> <p>My excel looks as follows (with about 11000 rows like the below, and 8 columns).</p> <pre><code> A 1 | Member Name | Member Address Line 1 | Member Address Line 2 RHONDA GILBERT ADDRESS PT 1 ADDRESS PT 2 W/ ZIP </code></pre>
<p>I do not have your excel file, so I setup the following code:</p> <pre><code>import pandas as pd names = [&quot;RHONDA GILBERT&quot;, &quot;FRED FLINTSTONE&quot;, &quot;FRED FLINTSTONE&quot;, &quot;BARNEY RUBLE&quot;, &quot;RHONDA GILBERT&quot;] add1 = [&quot;123 Elm St&quot;, &quot;254 Pine Ave&quot;, &quot;254 Pine Ave&quot;, &quot;654 Spruce Grove&quot;, &quot;123 Elm St&quot;] df = pd.DataFrame(list(zip(names, add1)), columns =['Member Name', 'Member Address Line 1']) df </code></pre> <p>It gives me the following output:</p> <pre><code> Member Name Member Address Line 1 0 RHONDA GILBERT 123 Elm St 1 FRED FLINTSTONE 254 Pine Ave 2 FRED FLINTSTONE 254 Pine Ave 3 BARNEY RUBLE 654 Spruce Grove 4 RHONDA GILBERT 123 Elm St </code></pre> <p>If I now search for &quot;FRED&quot; then I write it like so:</p> <pre><code>Name = &quot;FRED&quot; matches = df[df['Member Name'].str.contains(Name)] matches </code></pre> <p>and the output I get is this:</p> <pre><code> Member Name Member Address Line 1 1 FRED FLINTSTONE 254 Pine Ave 2 FRED FLINTSTONE 254 Pine Ave </code></pre> <p>Note that if I ask for the indices of matches I get</p> <pre><code>matches.index # outputs Int64Index([1, 2], dtype='int64') </code></pre> <p>These are the original indices of df. So then looking for the minimum value of the index</p> <pre><code>matches.index.min() # outputs 1 </code></pre> <p>This is the minimum of the indices. I am not too sure how your results deviated from the above. If you care to clarify, I will alter my explanation.</p>
python|regex|excel|pandas|python-3.7
1
1,353
45,978,058
Return NaN from indexing operation on a pandas series
<pre><code>a = pd.Series([0.1,0.2,0.3,0.4]) &gt;&gt;&gt; a.loc[[0,1,2]] 0 0.1 1 0.2 2 0.3 dtype: float64 </code></pre> <p>When a non existent index is added to the request along with existing ones, it returns NaN (which is what I need).</p> <pre><code>&gt;&gt;&gt; a.loc[[0,1,2, 5]] 0 0.1 1 0.2 2 0.3 5 NaN dtype: float64 </code></pre> <p>But when I am requesting solely non-existing indices I am getting an error</p> <pre><code>&gt;&gt;&gt; a.loc[[5]] Traceback (most recent call last): File "&lt;pyshell#481&gt;", line 1, in &lt;module&gt; a.loc[[5]] KeyError: 'None of [[5]] are in the [index]' </code></pre> <p>Is there a way to get a NaN there again in order to avoid resorting to a <code>try/except</code> solution ? </p>
<p>Try <code>pd.Series.reindex</code> instead.</p> <pre><code>out = a.reindex([0,1,2, 5]) print(out) 0 0.1 1 0.2 2 0.3 5 NaN dtype: float64 </code></pre> <hr> <pre><code>out = a.reindex([5]) print(out) 5 NaN dtype: float64 </code></pre>
python|pandas|indexing|nan|series
5
1,354
45,927,399
Memory error when initializing Xception using Keras
<p>I am having difficulty implementing the pre-trained Xception model for binary classification over new set of classes. The model is successfully returned from the following function:</p> <pre><code>#adapted from: #https://github.com/fchollet/keras/issues/4465 from keras.applications.xception import Xception from keras.layers import Input, Flatten, Dense from keras.models import Model def get_xception(in_shape,trn_conv): #Get back the convolutional part of Xception trained on ImageNet model = Xception(weights='imagenet', include_top=False) #Here the input images have been resized to 299x299x3, so this is the #same as Xception's native input input = Input(in_shape,name = 'image_input') #Use the generated model output = model(input) #Only train the top fully connected layers (keep pre-trained feature extractors) for layer in model.layers: layer.trainable = False #Add the fully-connected layers x = Flatten(name='flatten')(output) x = Dense(2048, activation='relu', name='fc1')(x) x = Dense(2048, activation='relu', name='fc2')(x) x = Dense(2, activation='softmax', name='predictions')(x) #Create your own model my_model = Model(input=input, output=x) my_model.compile(loss='binary_crossentropy', optimizer='SGD') return my_model </code></pre> <p>This returns fine, however when I run this code:</p> <pre><code>model=get_xception(shp,trn_feat) in_data=HDF5Matrix(str_trn,'/inputs') labels=HDF5Matrix(str_trn,'/labels') model.fit(in_data,labels,shuffle="batch") </code></pre> <p>I get the following error:</p> <pre><code>File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 1576, in fit self._make_train_function() File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 960, in _make_train_function loss=self.total_loss) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/optimizers.py", line 169, in get_updates v = self.momentum * m - lr * g # velocity File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 705, in _run_op return getattr(ops.Tensor, operator)(a._AsTensor(), *args) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 865, in binary_op_wrapper return func(x, y, name=name) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1088, in _mul_dispatch return gen_math_ops._mul(x, y, name=name) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1449, in _mul result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op op_def=op_def) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[204800,2048] [[Node: training/SGD/mul = 
Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](SGD/momentum/read, training/SGD/Variable/read)]] </code></pre> <p>I have been tracing the function calls for hours now and still can't figure out what is happening. The system should be far above and beyond the requirements. System specs:</p> <pre><code>Ubuntu Version: 14.04.5 LTS Tensorflow Version: 1.3.0 Keras Version: 2.0.7 28x dual core Inten Xeon processor (1.2 GHz) 4x NVidia GeForce 1080 (8Gb memory each) </code></pre> <p>Any clues as to what is going wrong here?</p>
<p>Per Yu-Yang, the simplest solution was to reduce the batch size; everything ran fine after that!</p>
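<p>Concretely, the change was along these lines (16 is just an example value; Keras defaults to a batch size of 32):</p> <pre><code>model.fit(in_data, labels, shuffle="batch", batch_size=16)
</code></pre>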
tensorflow|out-of-memory|keras|gpu
0
1,355
46,045,512
h5py - HDF5 database randomly returning NaNs and near very small data with multiple readers?
<p>I have an HDF5 dataset and I'm using a framework which is creating multiple processes to read from it (PyTorch's DataLoader, but this framework shouldn't be important). I'm indexing the first dimension of a 3D float array randomly, and to debug what was going on, I have been summing the slice from the indexing. Every once and a while, the summed slice turns out nan or as an extremely small value (a value that shouldn't appear in my data). If I preform the same index twice in a row, the values come out correct the other time (either the first or the second index might come out wrong). For example, below is some of values I get during indexing, where the left is expected to match the right, but sometimes the value comes out wrong:</p> <pre><code>21.2162 21.2162 89.9759 6.5469e-33 35.7114 35.7114 35.2934 35.2934 56.8512 56.8512 42.2215 42.2215 11.5307 nan 19.2904 19.2904 25.4261 25.4261 </code></pre> <p>This comes from indexing one right after the other:</p> <pre><code>print(dataset[index].sum(), end=' ') print(dataset[index].sum()) </code></pre> <p>The problem does not seem to arise when I only use a single process to index the dataset. The dataset is only being read from (no writing). Does anyone know why this might be happening and if there's a way to prevent it?</p>
<p>I encountered the very same issue, and after spending a day trying to marry PyTorch's DataParallel loader wrapper with HDF5 via h5py, I discovered that it is crucial to open the <code>h5py.File</code> inside the new process, rather than opening it in the main process and hoping it gets inherited by the underlying multiprocessing implementation.</p> <p>Since PyTorch seems to adopt a lazy way of initializing workers, this means that the actual file opening has to happen inside the <code>__getitem__</code> function of the <code>Dataset</code> wrapper.</p>
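<p>A minimal sketch of that pattern (the class name, file path and dataset key are hypothetical) -- the file handle is created lazily inside <code>__getitem__</code>, so each worker process opens its own handle:</p> <pre><code>import h5py
from torch.utils.data import Dataset

class H5Dataset(Dataset):
    def __init__(self, path):
        self.path = path
        self.file = None                      # do NOT open the file here (main process)

    def __getitem__(self, index):
        if self.file is None:                 # first access happens inside the worker
            self.file = h5py.File(self.path, 'r')
        return self.file['data'][index]

    def __len__(self):
        with h5py.File(self.path, 'r') as f:  # short-lived handle just for the length
            return len(f['data'])
</code></pre>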
python|numpy|hdf5|h5py
7
1,356
35,700,389
How to use numpy where for several possible values?
<p>Consider a numpy ndarray called <code>picks_user</code> with shape <code>picks_user.shape = (2016,3)</code>. The 'columns' represent the variables user, item and count in that order. The 'rows' represent observations.</p> <p>When performing:</p> <p><code>target_users = picks_user[np.where(picks_user[:,1]== 2711)][:,0]</code></p> <p>the result is another numpy ndarray with the users that have select item 2711.</p> <p>Say that <code>target_users</code> has shape <code>target_users.shape = (14,)</code>. I want to use this array to get all items picked by the target users, something like the following (which doesnt work):</p> <pre><code>picks_user[np.where(picks_user[:,1] == target_users)] </code></pre> <p>This could be equivalent to:</p> <pre><code>for element in target_users: picks_user[np.where(picks_user[:,1] == element] </code></pre> <p>How can I achieve this in a vectorized way, no for loop?</p> <p><strong>UPDATE</strong></p> <p>Consider the following example:</p> <pre><code>a = np.array([ [1,10,1],[2,11,1],[3,12,1],[4,13,1],[5,10,1],[2,13,1],[1,11,1],[5,16,1]]) target_users = a[np.where(a[:,1]==10)][:,0] </code></pre> <p>In this case <code>target_users = [1 5]</code></p> <p>The vector which I want to get is:</p> <pre><code>[[1,10,1],[5,10,1],[1,11,1],[5,16,1]] </code></pre>
<p>You can use <code>np.in1d</code> as:</p> <pre><code>&gt;&gt;&gt; picks_user = np.random.randint(0,10, (10,3)) &gt;&gt;&gt; picks_user array([[7, 8, 7], [6, 0, 9], [5, 6, 7], [6, 7, 3], [0, 1, 3], [8, 7, 5], [2, 6, 6], [7, 9, 8], [1, 7, 1], [9, 8, 4]]) &gt;&gt;&gt; target_users = np.array([1,7,8]) &gt;&gt;&gt; picks_user[np.in1d(picks_user[:,1], target_users)] array([[7, 8, 7], [6, 7, 3], [0, 1, 3], [8, 7, 5], [1, 7, 1], [9, 8, 4]]) </code></pre>
python|numpy|vectorization
3
1,357
35,495,543
pandas DataFrame cumulative value
<p>I have the following pandas dataframe:</p> <pre><code>&gt;&gt;&gt; df Category Year Costs 0 A 1 20.00 1 A 2 30.00 2 A 3 40.00 3 B 1 15.00 4 B 2 25.00 5 B 3 35.00 </code></pre> <p>How do I add a cumulative cost column that adds up the cost for the same category and previous years. Example of the extra column with previous df:</p> <pre><code>&gt;&gt;&gt; new_df Category Year Costs Cumulative Costs 0 A 1 20.00 20.00 1 A 2 30.00 50.00 2 A 3 40.00 90.00 3 B 1 15.00 15.00 4 B 2 25.00 40.00 5 B 3 35.00 75.00 </code></pre> <p>Suggestions?</p>
<p>This works in pandas 0.17.0 Thanks to @DSM in the comments for the terser solution.</p> <pre><code>df['Cumulative Costs'] = df.groupby(['Category'])['Costs'].cumsum() &gt;&gt;&gt; df Category Year Costs Cumulative Costs 0 A 1 20 20 1 A 2 30 50 2 A 3 40 90 3 B 1 15 15 4 B 2 25 40 5 B 3 35 75 </code></pre>
python|pandas|dataframe
2
1,358
35,695,259
pandas read_sql converts column names to lower case - is there a workaround?
<p>related: <a href="https://stackoverflow.com/questions/28318722/pandas-read-sql-drops-dot-in-column-names">pandas read_sql drops dot in column names</a></p> <p>I use <code>pandas.read_sql</code> to create a data frame from an SQL query against a Postgres database. Some column aliases/names use mixed case, and I want that to propagate to the data frame. However, pandas (or the underlying engine - SQLAlchemy, as far as I know) returns only lower-case field names.</p> <p>Is there a workaround? (besides using a lookup table and fixing the values afterwards)</p>
<p>Postgres normalizes unquoted column names to lower case. If you have a table such as:</p> <pre><code>create table foo ("Id" integer, "PointInTime" timestamp);
</code></pre> <p>PostgreSQL will obey the case, but you will <strong>have to</strong> quote the column names when you reference them:</p> <pre><code>select "Id", "PointInTime" from foo;
</code></pre> <hr> <p>A better solution is to add column aliases, e.g.:</p> <pre><code>select name as "Name", value as "Value" from parameters;
</code></pre> <p>and Postgres will return properly cased column names. If the problem is in SQLAlchemy or pandas, then this will not suffice.</p>
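<p>On the pandas side nothing else is needed -- passing the aliased query through <code>read_sql</code> preserves the case (a sketch, assuming an existing SQLAlchemy <code>engine</code>):</p> <pre><code>import pandas as pd

df = pd.read_sql('select name as "Name", value as "Value" from parameters', engine)
print(df.columns)   # Index(['Name', 'Value'], dtype='object')
</code></pre>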
postgresql|pandas|sqlalchemy
4
1,359
11,718,852
sum over values in python dict except one
<p>Is there a way to sum over all values in a python dict except one by using a selector in </p> <pre><code>&gt;&gt;&gt; x = dict(a=1, b=2, c=3) &gt;&gt;&gt; np.sum(x.values()) 6 </code></pre> <p>? My current solution is a loop based one:</p> <pre><code>&gt;&gt;&gt; x = dict(a=1, b=2, c=3) &gt;&gt;&gt; y = 0 &gt;&gt;&gt; for i in x: ... if 'a' != i: ... y += x[i] ... &gt;&gt;&gt; y 5 </code></pre> <p>EDIT: </p> <pre><code>import numpy as np from scipy.sparse import * x = dict(a=csr_matrix(np.array([1,0,0,0,0,0,0,0,0]).reshape(3,3)), b=csr_matrix(np.array([0,0,0,0,0,0,0,0,1]).reshape(3,3)), c=csr_matrix(np.array([0,0,0,0,0,0,0,0,1]).reshape(3,3))) y = csr_matrix((3,3)) for i in x: if 'a' != i: y = y + x[i] print y </code></pre> <p>returns <code>(2, 2) 2.0</code></p> <p>and </p> <pre><code>print np.sum(value for key, value in x.iteritems() if key != 'a') </code></pre> <p>raises</p> <pre><code>File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/numpy/core/fromnumeric.py", line 1446, in sum res = _sum_(a) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 187, in __radd__ return self.__add__(other) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 173, in __add__ raise NotImplementedError('adding a scalar to a CSC or CSR ' NotImplementedError: adding a scalar to a CSC or CSR matrix is not supported </code></pre>
<p>You could loop over the dict to create a generator for the <code>sum</code> method:</p> <pre><code>np.sum(value for key, value in x.iteritems() if key != 'a') </code></pre>
python|dictionary|numpy|sum
8
1,360
11,687,953
Operations on rows in scipy sparse matrix of csr format
<p>I would like to multiply single rows of a csr matrix with a scalar. In numpy I would do</p> <pre><code>matrix[indices,:] = x * matrix[indices,:] </code></pre> <p>For csr this raises an exception in scipy.</p> <p>Is there a way to do this similarily with csr matrixes?</p>
<p>No, there's no way to this directly, because although you can compute <code>row * x</code>, you can't assign to a row in a CSR matrix. You can either convert to DOK format and back, or work on the innards of the CSR matrix directly. The <code>i</code>'th row of a CSR matrix <code>X</code> is the slice</p> <pre><code>X.data[X.indptr[i] : X.indptr[i + 1]] </code></pre> <p>which you can update in-place, i.e.</p> <pre><code>X.data[X.indptr[i] : X.indptr[i + 1]] *= factor </code></pre> <p>(This obviously works for multiplication and other operations that preserve sparsity, but not things like addition.)</p>
python|numpy|scipy
12
1,361
28,863,853
Minimum element from the matrix column
<p>I need to find minimum over all elements from the column which has the maximum column sum. I do the following things:</p> <p>Create random matrix</p> <pre><code>from numpy import * a = random.rand(5,4) </code></pre> <p>Then calculate sum of each column and find index of the maximum element</p> <pre><code>c = a.sum(axis=0) d = argmax(c) </code></pre> <p>Then I try to find the minimum number in this column, but I am quite bad with syntax, I know how to find the minimum element in the row with current index.</p> <pre><code>e = min(a[d]) </code></pre> <p>But how can I change it for columns?</p>
<p>You can extract the minimum value of a column as follows (using the variables you have indicated):</p> <pre><code>e=a[:,d].min() </code></pre> <p>Note that using</p> <pre><code>a=min(a[:,d]) </code></pre> <p>will break you out of Numpy, slowing things down (thanks for pointing this out @SaulloCastro).</p>
python|numpy|matrix|minimum
5
1,362
50,905,520
Using pd.ExcelWriter to write many dataframes to a single Excel workbook for VBA manipulation
<p>I have over 60 dataframes to write to an excel template. My intention is to paste all these to various named ranges via vba once all df's are exported. </p> <pre><code>excel_writer = pd.ExcelWriter('test.xlsx') df['state'].value_counts().to_excel(excel_writer, sheet_name='Sheet1', startrow=5, startcol=0, na_rep=0, header=True, index=True, merge_cells= True) </code></pre> <p>This works as expected but when I look to add the two following df's I get AttributeErrors</p> <pre><code>df_Shape = df.shape[0] display(df_Shape) df_Shape.to_excel(excel_writer, sheet_name='Sheet1', startrow=0, startcol=0, na_rep=0, header=True, index=True, merge_cells= True) 4802 AttributeError: 'int' object has no attribute 'to_excel' df_Start_Date = df['rfq_create_date_time'].min() display(df_Start_Date) df_Start_Date.to_excel(excel_writer, sheet_name='Sheet1', startrow=3, startcol=0, na_rep=0, header=True, index=True, merge_cells= True) Timestamp('2018-05-01 06:55:25') AttributeError: 'Timestamp' object has no attribute 'to_excel' excel_writer.save() </code></pre> <p>Is there additional work for single celled dataframes when using .to_excel</p>
<p><code>df_Shape</code> is the shape of <code>df</code><br> <code>df_Start_Date</code> is a timestamp</p> <p>They're not <code>pd.DataFrame</code> objects, so they don't have the <code>.to_excel</code> method.</p> <p><strong>EDIT</strong>:<br> You can create a new dataframe with the necessary statistics and write it to the same sheet:</p> <pre><code>stats = [{
    "len": df.shape[0],
    "min_time": df["rfq_create_date_time"].min()
}]  # list of dictionaries, to get a row rather than a column

with pd.ExcelWriter(path) as excel_writer:
    pd.DataFrame(stats).to_excel(excel_writer, index=False, sheet_name='Sheet1', startrow=0)
    df.to_excel(excel_writer, startrow=3, sheet_name='Sheet1')
</code></pre>
python|pandas|dataframe
2
1,363
50,932,129
Pandas: groupby column and set it as index
<p>If I have a dataframe such as this:</p> <pre><code>df1 = pd.DataFrame({'A':[1,2,3,4,5,6,7,8], 'B':['a','b','c','d','e','f','g','h'], 'C':['u1','u2','u4','u3','u1','u1','u2','u4']}) </code></pre> <p>And would like it to be as such, with u1 to u4 as the indices.</p> <pre><code> A B u1 1 a 5 e 6 f u2 2 b 7 g u3 4 d u4 3 c 8 h </code></pre> <p>What's the best way to groupby a column and then set it as the index? I've tried to directly .set_index('C'), but it sets an index for every instance of u1 to u4.</p>
<p>You can set <code>C</code> as the index and then sort it:</p> <pre><code>df1.set_index('C').sort_index(axis=0) </code></pre>
python|pandas
2
1,364
50,955,960
How can I map 2 numpy arrays with same indices
<p>I am trying to map 2 numpy arrays as [x, y] similar to what zip does for lists and tuples.</p> <p>I have 2 numpy arrays as follows:</p> <pre><code>arr1 = [1, 2, 3, 4] arr2 = [5, 6, 7, 8] </code></pre> <p>I am looking for an <code>output as np.array([[[1, 5], [2, 6], [3, 7], [4, 8]]])</code></p> <p>I tried this but it maps every value and not with same indices. I can add more if conditions here but is there any other way to do so without adding any more if conditions.</p> <pre><code>res = [arr1, arr2] for a1 in arr1 for a2 in arr2] </code></pre>
<p>You are looking for <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.dstack.html" rel="nofollow noreferrer"><strong><code>np.dstack</code></strong></a></p> <blockquote> <p>Stack arrays in sequence depth wise (along third axis).</p> </blockquote> <pre><code>np.dstack([arr1, arr2]) array([[[1, 5], [2, 6], [3, 7], [4, 8]]]) </code></pre>
python|numpy
2
1,365
50,941,409
Quote marks in bash file created in Python
<p>I'm using Python and Pandas to write multiple bash scripts. I have a pandas.Series containing the script. Simplified::</p> <pre><code>script = pd.Series([ '#!/bin/bash', '#SBATCH --output "/home/path/output_filename.out"' ]) </code></pre> <p>I then use <code>script.to_csv('script_file.bat',index=False)</code> to create the file. The output file looks like this:</p> <pre><code>#!/bin/bash "#SBATCH --output ""/home/path/output_filename.out""" </code></pre> <p>I have tried all the suggestions here <a href="https://stackoverflow.com/questions/9050355/python-using-quotation-marks-inside-quotation-marks">Python - Using quotation marks inside quotation marks</a> (triple quotes, single and double quotes (as shown in the example) , escaping the quotemarks), as well as making the quoted text a variable, but none works. </p>
<p>It's possible to explicitly specify quoting for <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>df.to_csv</code></a>:</p> <pre><code>import csv pd.DataFrame(script).to_csv("test.sh", index=False, header=False, quoting=csv.QUOTE_NONE) &gt; #!/bin/bash &gt; #SBATCH --output "/home/path/output_filename.out" </code></pre>
python|string|bash|pandas|quotes
1
1,366
50,698,834
How can I remove sharp jumps in data?
<p>I have some skin temperature data (collected at 1Hz) which I intend to analyse. </p> <p>However, the sensors were not always in contact with the skin. So I have a challenge of removing this non-skin temperature data, whilst preserving the actual skin temperature data. I have about 100 files to analyse, so I need to make this automated.</p> <p>I'm aware that there is already <a href="https://stackoverflow.com/questions/41857814/remove-jumps-like-peaks-and-steps-in-timeseries#comment70899855_41857814">this similar post</a>, however I've not been able to use that to solve my problem.</p> <p>My data roughly looks like this:</p> <pre><code>df = timeStamp Temp 2018-05-04 10:08:00 28.63 . . . . 2018-05-04 21:00:00 31.63 </code></pre> <p>The first step I've taken is to simply apply a minimum threshold- this has got rid of the majority of the non-skin data. However, I'm left with the sharp jumps where the sensor was either removed or attached:</p> <p><a href="https://i.stack.imgur.com/Z2sZh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z2sZh.png" alt="basic threshold filtered data"></a></p> <p>To remove these jumps, I was thinking about taking an approach where I use the first order differential of the temp and then use another set of thresholds to get rid of the data I'm not interested in.</p> <p>e.g.</p> <pre><code>df_diff = df.diff(60) # period of about 60 makes jumps stick out filter_index = np.nonzero((df.Temp &lt;-1) | (df.Temp&gt;0.5)) # when diff is less than -1 and greater than 0.5, most likely data jumps. </code></pre> <p><a href="https://i.stack.imgur.com/KT1aU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KT1aU.png" alt="diff data"></a></p> <p>However, I find myself stuck here. The main problem is that:</p> <p>1) I don't know how to now use this index list to delete the non-skin data in df. How is best to do this?</p> <p>The more minor problem is that 2) I think I will still be left with some residual artefacts from the data jumps near the edges (e.g. where a tighter threshold would start to chuck away good data). Is there either a better filtering strategy or a way to then get rid of these artefacts?</p> <p>*Edit as suggested I've also calculated the second order diff, but to be honest, I think the first order diff would allow for tighter thresholds (see below):</p> <p><a href="https://i.stack.imgur.com/Z1HyB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z1HyB.png" alt="enter image description here"></a></p> <p>*Edit 2: <a href="http://www.sharecsv.com/s/6ff862c5f2acef782deadf9428675805/example_data.csv" rel="nofollow noreferrer">Link to sample data</a> </p>
<p>Try the code below (I used a tangent function to generate data). I used the second order difference idea from Mad Physicist in the comments.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.DataFrame() df[0] = np.arange(0,10,0.005) df[1] = np.tan(df[0]) #the following line calculates the absolute value of a second order finite #difference (derivative) df[2] = 0.5*(df[1].diff()+df[1].diff(periods=-1)).abs() df.loc[df[2] &lt; .05][1].plot() #select out regions of a high rate-of-change df[1].plot() #plot original data plt.show() </code></pre> <p>Following is a zoom of the output showing what got filtered. Matplotlib plots a line from beginning to end of the removed data.</p> <p><a href="https://i.stack.imgur.com/fEwXD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fEwXD.png" alt="enter image description here"></a></p> <p>Your first question I believe is answered with the .loc selection above.</p> <p>You second question will take some experimentation with your dataset. The code above only selects out high-derivative data. You'll also need your threshold selection to remove zeroes or the like. You can experiment with where to make the derivative selection. You can also plot a histogram of the derivative to give you a hint as to what to select out. </p> <p>Also, higher order difference equations are possible to help with smoothing. This should help remove artifacts without having to trim around the cuts. </p> <p>Edit:</p> <p>A fourth-order finite difference can be applied using this:</p> <pre><code>df[2] = (df[1].diff(periods=1)-df[1].diff(periods=-1))*8/12 - \ (df[1].diff(periods=2)-df[1].diff(periods=-2))*1/12 df[2] = df[2].abs() </code></pre> <p>It's reasonable to think that it may help. The coefficients above can be worked out or derived from the following link for higher orders. <a href="http://web.media.mit.edu/~crtaylor/calculator.html" rel="nofollow noreferrer">Finite Difference Coefficients Calculator</a></p> <p>Note: The above second and fourth order central difference equations are not proper first derivatives. One must divide by the interval length (in this case 0.005) to get the actual derivative. </p>
python|python-3.x|pandas|dataframe|filtering
3
1,367
33,226,017
Pandas Aggregating a GroupBy object using UDF
<p>Suppose that I have the following. I group by "happy" and then sum over each group. It works great.</p> <pre><code>import pandas as pd testdf = pd.DataFrame({"happy": [1, 2, 1, 3], "sad": [4, 5, 6, 7], \ "cool":[1, 99, 0, -5]}) testgb = testdf.groupby(["happy"]) testgb.sum() </code></pre> <p>But what if I want to use my own function that takes in a list of values and returns a number INSTEAD of sum(); like</p> <pre><code>def my_max(ilist): return max(ilist) testgb.my_max() </code></pre> <p>In this case the output should be:</p> <pre><code>happy sad cool 1 6 1 2 5 99 3 7 -5 </code></pre> <p>Does anyone know how to do it? I read how to use your own function for grouping by but not for accumulation</p>
<p>I'm assuming that you want to pass the list of values from another column, e.g. <code>sad</code>. You can use the <code>agg</code> function:</p> <pre><code>testdf = pd.DataFrame({"happy": [1, 2, 1, 3], "sad": [4, 5, 6, 7], "cool": [1, 99, 0, -5]})
testgb = testdf.groupby(["happy"]).agg({'sad': lambda x: max(x)})
</code></pre> <p>Of course there are probably built-in procedures to accomplish what you have in mind, but since you pose a hypothetical scenario, it's hard to say more.</p>
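<p>If the intent is to apply the user-defined function to every remaining column (mirroring the <code>testgb.sum()</code> usage in the question), the function itself can be passed to <code>agg</code> -- a sketch using the frame from the question:</p> <pre><code>def my_max(ilist):
    return max(ilist)

testdf.groupby("happy").agg(my_max)
#        sad  cool
# happy
# 1        6     1
# 2        5    99
# 3        7    -5
</code></pre>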
python|pandas
1
1,368
33,460,209
How to get the union of two MultiIndex DataFrames?
<p>How do I merge two MultiIndexed DataFrames? </p> <p>For example, let's say I have:</p> <pre><code>index1 = pd.MultiIndex.from_tuples([('2010-01-01', 'Jim'), ('2010-01-01', 'Mike'), ('2010-01-02', 'Sam')]) index2 = pd.MultiIndex.from_tuples([('2010-01-02', 'Jim'), ('2010-01-02', 'Sam'), ('2010-01-03', 'Joe')]) df1 = pd.DataFrame([[7,0,7],[4,3,2],[6,2,6]], index=index1, columns=['a', 'b', 'c']) df2 = pd.DataFrame([[4,2,0],[8,8,4],[5,5,3]], index=index2, columns=['a', 'b', 'c']) </code></pre> <p>This results in:</p> <pre><code>&gt;&gt; df1 a b c 2010-01-01 Jim 7 0 7 Mike 4 3 2 2010-01-02 Sam 6 2 6 &gt;&gt; df2 a b c 2010-01-02 Jim 4 2 0 Sam 8 8 4 2010-01-03 Joe 5 5 3 </code></pre> <p>I want to merge <code>df1</code> and <code>df2</code> to produce:</p> <pre><code>&gt;&gt; df3 a1 b1 c1 a2 b2 c2 2010-01-01 Jim 7 0 7 NaN NaN NaN Mike 4 3 2 NaN NaN NaN 2010-01-02 Jim NaN NaN NaN 4 2 0 Sam 6 2 6 8 8 4 2010-01-03 Joe NaN NaN NaN 5 5 3 </code></pre> <p>I'm struggling to find a good way to do this. Any suggestions?</p>
<p>Try:</p> <pre><code>df3 = df1.join(df2, how='outer', lsuffix='1', rsuffix='2') </code></pre> <p>Or</p> <pre><code>df3 = pd.merge(df1, df2, how='outer', left_index=True, right_index=True) </code></pre>
python|pandas
1
1,369
66,569,033
How to fill one column in a csv by comparing values to three different columns in another csv file?
<p>As I am completely new to pandas, I would like to ask.</p> <p>I have two CSV files.</p> <p>One of them has all the colors of different languages in one column:</p> <p>I have color blue in three rows here: azul, bleu all means blue in different languages, so they should be in the same group 1. rouge and rojo means red, so they are also in the same group, so they should have 2.</p> <p>Here is my first table colors:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>name</th> <th>group</th> </tr> </thead> <tbody> <tr> <td>blue</td> <td>1</td> </tr> <tr> <td>azul</td> <td>1</td> </tr> <tr> <td>bleu</td> <td>1</td> </tr> <tr> <td>rouge</td> <td></td> </tr> <tr> <td>red</td> <td></td> </tr> <tr> <td>rojo</td> <td></td> </tr> <tr> <td>verde</td> <td></td> </tr> <tr> <td>vert</td> <td></td> </tr> <tr> <td>green</td> <td></td> </tr> </tbody> </table> </div> <p>and so on, in my csv this column group is empty and I have to fill it</p> <p>I have also second CSV file, when it looks like this</p> <p>Table colors_language:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>english</th> <th>french</th> <th>spanish</th> <th>group</th> </tr> </thead> <tbody> <tr> <td>green</td> <td>vert</td> <td>verde</td> <td>3</td> </tr> <tr> <td>red</td> <td>rouge</td> <td>rojo</td> <td>2</td> </tr> <tr> <td>blue</td> <td>bleu</td> <td>azul</td> <td>1</td> </tr> </tbody> </table> </div> <p>I want to fill the column group in the first CSV file by comparing it to the second CSV file which has information.</p> <p>I have found out how to compare on one column, but how I compare on column from one csv to three columns from another csv to fill the group column in the first CSV?</p> <pre><code>df1 = pd.read_csv('colors.csv') df2 = pd.read_csv('colors_language.csv') mergedStuff = pd.merge(df1, df2, on=['name'], how='inner') #I have tried also this, but it gives me information of Empty DataFrame my_list = df1['name'].unique().tolist() lang = df2[df2['english'].isin(my_list)] print(lang) </code></pre> <p>Here is my code, but it doesn't work. I think this is because column name is only in one csv, but how I can join name from the first csv to English, French, Spanish columns in second csv?</p>
<p>You can use <code>map</code></p> <p>Using <code>df2</code> you can create a dict <code>d</code> and then map the name values to their corresponding group.</p> <pre><code>d = dict(zip(df2.T.values[3], df2.values[:,:3].tolist())) df1['group'] = df1.name.map(lambda x: [k for k in d if x in d[k]][0]) </code></pre> <hr /> <p><strong>d:</strong></p> <pre><code>{3: ['green', 'vert', 'verde'], 2: ['red', 'rouge', 'rojo'], 1: ['blue', 'bleu', 'azul']} </code></pre> <hr /> <p><strong>df1:</strong></p> <pre><code> name group 0 blue 1 1 azul 1 2 bleu 1 3 rouge 2 4 red 2 5 rojo 2 6 verde 3 7 vert 3 8 green 3 </code></pre>
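<p>An alternative sketch that stays vectorised (reshaping the language table to long form and merging, instead of scanning the dict per row):</p> <pre><code>long_form = df2.melt(id_vars='group', value_name='name')[['name', 'group']]
df1 = df1.drop(columns='group', errors='ignore').merge(long_form, on='name', how='left')
</code></pre>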
python-3.x|pandas|dataframe|csv
1
1,370
66,721,047
Pandas: compute average and standard deviation by clock time
<p>I have a DataFrame like this:</p> <pre><code> date time value 0 2019-04-18 07:00:10 100.8 1 2019-04-18 07:00:20 95.6 2 2019-04-18 07:00:30 87.6 3 2019-04-18 07:00:40 94.2 </code></pre> <p>The DataFrame contains value recorded every 10 seconds for entire year 2019. I need to calculate standard deviation and mean/average of <code>value</code> for each hour of each date, and create two new columns for them. I have tried first separating the hour for each value like:</p> <pre><code>df[&quot;hour&quot;] = df[&quot;time&quot;].astype(str).str[:2] </code></pre> <p>Then I have tried to calculate standard deviation by:</p> <pre><code>df[&quot;std&quot;] = df.groupby(&quot;hour&quot;).median().index.get_level_values('value').stack().std() </code></pre> <p>But that won't work, could I have some advise on the problem?</p>
<p>We can <code>split</code> the <code>time</code> column around the delimiter <code>:</code>, then slice the <code>hour</code> component using <code>str[0]</code>, finally <code>group</code> the dataframe on <code>date</code> along with <code>hour</code> component and aggregate column <code>value</code> with <code>mean</code> and <code>std</code>:</p> <pre><code>hr = df['time'].str.split(':', n=1).str[0] df.groupby(['date', hr])['value'].agg(['mean', 'std']) </code></pre> <p>If you want to <code>broadcast</code> the aggregated values to original dataframe, then we need to use <code>transform</code> instead of <code>agg</code>:</p> <pre><code>g = df.groupby(['date', df['time'].str.split(':', n=1).str[0]])['value'] df['mean'], df['std'] = g.transform('mean'), g.transform('std') </code></pre> <hr /> <pre><code> date time value mean std 0 2019-04-18 07:00:10 100.8 94.55 5.434151 1 2019-04-18 07:00:20 95.6 94.55 5.434151 2 2019-04-18 07:00:30 87.6 94.55 5.434151 3 2019-04-18 07:00:40 94.2 94.55 5.434151 </code></pre>
python|pandas|dataframe
4
1,371
16,574,470
Out of memory when using numpy's multivariate_normal random sampling
<p>I tried to use numpy.random.multivariate_normal to draw random samples over some 30000+ variables, but it always consumed all of my memory (32G) and then terminated. Actually, the correlation is spherical and every variable is correlated to only about 2500 other variables. Is there another way to specify the spherical covariance matrix, rather than the full covariance matrix, or any other way to reduce the memory usage?</p> <p>My code is like this:</p> <pre><code>cm = []  # covariance matrix
for i in range(width*height):
    cm.append([])
    for j in range(width*height):
        cm[i].append(corr_calc())  # corr is inversely proportional to the distance
mean = [vth]*(width*height)
cache_vth = numpy.random.multivariate_normal(mean, cm)
</code></pre>
<p>If your correlation is spherical, that is the same as saying that the value along each dimension is uncorrelated with the other dimensions, and that the variance along every dimension is the same. In that case you don't need to build the covariance matrix at all: drawing one sample from your 30,000-D multivariate normal is the same as drawing 30,000 samples from a 1-D normal. That is, instead of doing:</p> <pre><code>import numpy as np

n = 30000
mu = 0
corr = 1
cm = np.eye(n) * corr      # this dense matrix is what blows up the memory
mean = np.ones((n,)) * mu
np.random.multivariate_normal(mean, cm)
</code></pre> <p>which fails when trying to build the <code>cm</code> array, try the following:</p> <pre><code>n = 30000
mu = 0
corr = 1
&gt;&gt;&gt; np.random.normal(mu, corr, size=n)
array([ 0.88433649, -0.55460098, -0.74259886, ...,  0.66459841,
        0.71225572,  1.04012445])
</code></pre> <p>If you want more than one random sample, say 3, try</p> <pre><code>&gt;&gt;&gt; np.random.normal(mu, corr, size=(3, n))
array([[-0.97458499,  0.05072532, -0.0759601 , ..., -0.31849315,
        -2.17552787, -0.36884723],
       [ 1.5116701 ,  2.53383547,  1.99921923, ..., -1.2769304 ,
         0.36912488,  0.3024549 ],
       [-1.12615267,  0.78125589,  0.67133243, ..., -0.45441239,
        -1.21083007,  1.45696714]])
</code></pre>
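<p>For a sense of scale (my arithmetic, not part of the original answer), the dense covariance matrix alone is already prohibitive before any sampling happens:</p> <pre><code>n = 30000
print(n * n * 8 / 1e9)  # ~7.2 (GB) just to store one float64 covariance matrix
</code></pre>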
python|memory|numpy
1
1,372
57,403,472
How do I add a new feature column to a tf.data.Dataset object?
<p>I am building an input pipeline for proprietary data using Tensorflow 2.0's data module and using the tf.data.Dataset object to store my features. Here is my issue - the data source is a CSV file that has only 3 columns, a label column and then two columns which just hold strings referring to JSON files where that data is stored. I have developed functions that access all the data I need, and am able to use Dataset's map function on the columns to get the data, but I don't see how I can add a new column to my tf.data.Dataset object to hold the new data. So if anyone could help with the following questions, it would really help:</p> <ol> <li>How can a new feature be appended to a tf.data.Dataset object?</li> <li>Should this process be done on the entire Dataset before iterating through it, or during (I think during iteration would allow utilization of the performance boost, but I don't know how this functionality works)?</li> </ol> <p>I have all the methods for taking the input as the elements from the columns and performing everything required to get the features for each element, I just don't understand how to get this data into the dataset. I could do "hacky" workarounds, using a Pandas Dataframe as a "mediator" or something along those lines, but I want to keep everything within the Tensorflow Dataset and pipeline process, for both performance gains and higher quality code.</p> <p>I have looked through the Tensorflow 2.0 documentation for the Dataset class (<a href="https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset" rel="noreferrer">https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset</a>), but haven't been able to find a method that can manipulate the structure of the object.</p> <p>Here is the function I use to load the original dataset:</p> <pre><code>def load_dataset(self): # TODO: Function to get max number of available CPU threads dataset = tf.data.experimental.make_csv_dataset(self.dataset_path, self.batch_size, label_name='score', shuffle_buffer_size=self.get_dataset_size(), shuffle_seed=self.seed, num_parallel_reads=1) return dataset </code></pre> <p>Then, I have methods which allow me to take a string input (column element) and return the actual feature data. And I am able to access the elements from the Dataset using a function like ".map". But how do I add that as a column?</p>
<p>Wow, this is embarrassing, but I have found the solution, and its simplicity makes me feel like an idiot for asking this. But I will leave the answer up in case anyone else ever faces this issue.</p> <p>You first create a new tf.data.Dataset object using any function that returns a Dataset, such as <code>.map</code>.</p> <p>Then you create a new Dataset by zipping the original and the one with the new data:</p> <pre><code>dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
</code></pre>
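<p>A minimal runnable sketch of that idea (the feature transform here is a placeholder, not the actual JSON-loading logic from the question):</p> <pre><code>import tensorflow as tf

dataset1 = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# derive the new feature column from the original elements
dataset2 = dataset1.map(lambda x: x * 10)
# zip so each element carries the original value and the new feature
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
for original, new_feature in dataset3:
    print(original.numpy(), new_feature.numpy())  # (1, 10), (2, 20), (3, 30)
</code></pre>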
tensorflow|dataset|tensorflow-datasets
15
1,373
57,478,251
Maximum of an array constituting a pandas dataframe cell
<p>I have a pandas dataframe in which one column is formed by arrays, so every cell is an array.</p> <p>Say there is a column A in dataframe df, such that</p> <pre><code>A = [ [1, 2, 3],
      [4, 5, 6],
      [7, 8, 9],
      ... ]
</code></pre> <p>I want to operate on each array and get, e.g., the maximum of each array, and store it in another column.</p> <p>In the example, I would like to obtain another column</p> <pre><code>B = [ 3, 6, 9, ...]
</code></pre> <p>I have tried these approaches so far, none of which gives what I want.</p> <pre><code>df['B'] = np.max(df['A'])  # df.applymap(lambda B: A.max())
df['B'] = df.applymap(lambda B: np.max(np.array(df['A'].tolist()), 0))
</code></pre> <p>How should I proceed? And is this the best way to have my dataframe organized?</p>
<p>Here is one way without apply:</p> <pre><code>df['B']=np.max(df['A'].values.tolist(),axis=1) </code></pre> <hr> <pre><code> A B 0 [1, 2, 3] 3 1 [4, 5, 6] 6 2 [7, 8, 9] 9 </code></pre>
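<p>If the lists can have different lengths, <code>np.max</code> on the ragged list will fail; a simple fallback in that case:</p> <pre><code>df['B'] = df['A'].map(max)
</code></pre>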
python|arrays|pandas|data-analysis
1
1,374
43,549,885
Trying to replace all values matching a pattern in a pandas dataframe with the matched capture groups reversed
<p>Been trying to replace all values matching a pattern in a pandas dataframe with the matched capture groups reversed. So <code>Mouse, Mickey</code> would be replaced with <code>Mickey Mouse</code></p> <p>Dataframe looks like:</p> <pre><code>+---+---------------+------+------+------+------+------+------+------+------+------+--+--+------+------+------+------+------+------+------+------+------+------+ | | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | | | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | +---+---------------+------+------+------+------+------+------+------+------+------+--+--+------+------+------+------+------+------+------+------+------+------+ | 0 | Mouse, Mickey | None | None | None | None | None | None | None | None | None | | | None | None | None | None | None | None | None | None | None | None | | 1 | Duck, Donald | None | None | None | None | None | None | None | None | None | | | None | None | None | None | None | None | None | None | None | None | +---+---------------+------+------+------+------+------+------+------+------+------+--+--+------+------+------+------+------+------+------+------+------+------+ </code></pre> <p>Code:</p> <pre><code>df.replace(r'(.*),\s+(.*)', r'\2 \1', inplace=True) </code></pre> <p>No change in output. What am I doing wrong? Thanks!</p>
<p>You need to specify <code>regex=True</code>; by default, the <em>DataFrame.replace</em> method matches and replaces values literally:</p> <pre><code>df = pd.DataFrame({"A": ["Mouse, Mickey", "Duck, Donald"]})
df.replace(r'(.*),\s+(.*)', r'\2 \1', inplace=True, regex=True)

df
#              A
#0  Mickey Mouse
#1   Donald Duck
</code></pre>
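<p>In recent pandas versions you can also target a single column through the string accessor; a small sketch with the same column name as above:</p> <pre><code>df['A'] = df['A'].str.replace(r'(.*),\s+(.*)', r'\2 \1', regex=True)
</code></pre>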
python|regex|python-3.x|pandas|dataframe
2
1,375
43,524,953
Find index where elements change value pandas dataframe
<p>Regarding <a href="https://stackoverflow.com/questions/19125661/find-index-where-elements-change-value-numpy">this question/answer</a>, is there a way to accomplish the same thing for a pandas dataframe structure without casting it as a numpy array?</p>
<h3>Update: we can use this per @LorenzoMeneghetti</h3> <pre><code>s[s.diff() != 0].index.tolist() </code></pre> <p>Output:</p> <pre><code>[0, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16] </code></pre> <hr /> <pre><code>s = pd.Series([1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 3, 4, 3, 4, 3, 4, 5, 5, 5]) print(s.diff()[s.diff() != 0].index.values) </code></pre> <p>OR:</p> <pre><code>df = pd.DataFrame([1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 3, 4, 3, 4, 3, 4, 5, 5, 5]) print(df[0].diff()[df[0].diff() != 0].index.values) </code></pre> <p>Output:</p> <p>[ 0 5 8 9 10 11 12 13 14 15 16]</p>
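<p>A variant that avoids <code>diff</code> entirely (and so also works for non-numeric values, assuming the series has no NaNs): compare each element with its predecessor using <code>shift</code>:</p> <pre><code>s[s != s.shift()].index.tolist()
</code></pre>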
python|pandas
13
1,376
43,547,754
How to import data through pandas for several years but specific months?
<p>I want to import data with pandas for 10 years, but I need each season to be imported separately: for example, all the data during Spring from 2000 to 2010.</p> <p>I have the following code, but it does not separate the seasons.</p> <pre><code>import pandas_datareader.data as web
import datetime
start = datetime.datetime(2000, 1, 1)
end = datetime.datetime(2010, 1, 1)
f = web.DataReader("F", 'yahoo', start, end)
</code></pre> <p>Is there any way?</p>
<p>Assuming that you are targeting Spring months (maybe January 1st through April 30th, subject to change), you can create a <code>list</code> of date <code>tuples</code> where each <code>tuple</code> holds the <code>start</code> and <code>end</code> dates for a given <code>year</code>. For instance, the first element of the <code>list</code> would be <code>(datetime.datetime(2000, 1, 1, 0, 0), datetime.datetime(2000, 4, 30, 0, 0))</code>, i.e. from January 1st to April 30th in 2000.</p> <p>Once you have your <code>list</code> of date <code>tuples</code>, you can iterate through them and fetch data for each combination of <code>start</code> and <code>end</code> dates. The result will also be a <code>list</code>, but this time a <code>list</code> of <code>pandas</code> DataFrames, which you can very easily concatenate together to get your desired dataset. The following should serve as a working script:</p> <pre><code>import pandas_datareader.data as web
import datetime
import pandas as pd

all_dates = [(datetime.datetime(year, 1, 1), datetime.datetime(year, 4, 30))
             for year in range(2000, 2011)]

# stack the yearly frames row-wise (axis=0), so the dates stay in the index
f = pd.concat([web.DataReader("F", 'yahoo', start, end)
               for start, end in all_dates], axis=0)

print(f.tail())
#             Close       Volume  Adj Close
# Date
# 2010-04-26  14.46  123029200.0  11.684445
# 2010-04-27  13.57  292667400.0  10.965278
# 2010-04-28  13.25  208023500.0  10.706701
# 2010-04-29  13.58  110114400.0  10.973358
# 2010-04-30  13.02  146322900.0  10.520849
</code></pre> <p>I hope this helps.</p>
python|python-2.7|python-3.x|pandas-datareader
2
1,377
43,668,993
How to select rows that do not consist of only NaN values and 0s
<p>This is my dataframe:</p> <pre><code>cols = ['Country', 'Year', 'Orange', 'Apple', 'Plump']

data = [['US', 2008, 17, 29, 19],
        ['US', 2009, 11, 12, 16],
        ['US', 2010, 14, 16, 38],
        ['Spain', 2008, 11, None, 33],
        ['Spain', 2009, 12, 19, 17],
        ['France', 2008, 17, 19, 21],
        ['France', 2009, 19, 22, 13],
        ['France', 2010, 12, 11, 0],
        ['France', 2010, 0, 0, 0],
        ['Italy', 2009, None, None, None],
        ['Italy', 2010, 15, 16, 17],
        ['Italy', 2010, 0, None, None],
        ['Italy', 2011, 42, None, None]]
</code></pre> <p>I want to select the rows in which Orange, Apple and Plump do not consist only of Nones, only of 0s, or a mix of the two. So the resulting output should be:</p> <pre><code>   Country  Year  Orange  Apple  Plump
0       US  2008    17.0   29.0   19.0
1       US  2009    11.0   12.0   16.0
2       US  2010    14.0   16.0   38.0
3    Spain  2008    11.0    NaN   33.0
4    Spain  2009    12.0   19.0   17.0
5   France  2008    17.0   19.0   21.0
6   France  2009    19.0   22.0   13.0
7   France  2010    12.0   11.0    0.0
10   Italy  2010    15.0   16.0   17.0
12   Italy  2011    42.0    NaN    NaN
</code></pre> <p>Second, I want to drop the countries for which I don't have observations for all three years, so the resulting output should only contain US and France. How could I get them? I have tried something like:</p> <pre><code>df = df[(df['Orange'].notnull()) | \
        (df['Apple'].notnull()) |
        (df['Plump'].notnull()) |
        (df['Orange'] != 0) |
        (df['Apple'] != 0) |
        (df['Plump'] != 0)]
</code></pre> <p>I also tried:</p> <pre><code>df = df[((df['Orange'].notnull()) | \
         (df['Apple'].notnull()) |
         (df['Plump'].notnull())) &amp;
        ((df['Orange'] != 0) |
         (df['Apple'] != 0) |
         (df['Plump'] != 0))]
</code></pre>
<p>Fill the NaNs with 0, test the three columns for equality with 0, and keep the rows where not all three are 0 (<code>all(1)</code> checks across the columns):</p> <pre><code>In [307]: df[~df[['Orange','Apple','Plump']].fillna(0).eq(0).all(1)]
Out[307]:
   Country  Year  Orange  Apple  Plump
0       US  2008    17.0   29.0   19.0
1       US  2009    11.0   12.0   16.0
2       US  2010    14.0   16.0   38.0
3    Spain  2008    11.0    NaN   33.0
4    Spain  2009    12.0   19.0   17.0
5   France  2008    17.0   19.0   21.0
6   France  2009    19.0   22.0   13.0
7   France  2010    12.0   11.0    0.0
10   Italy  2010    15.0   16.0   17.0
12   Italy  2011    42.0    NaN    NaN
</code></pre>
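<p>For the second part of the question (dropping countries without observations in all three years), a sketch under the assumption that &quot;all three years&quot; means three distinct <code>Year</code> values per country:</p> <pre><code>out = df[~df[['Orange', 'Apple', 'Plump']].fillna(0).eq(0).all(1)]
# keep only countries that remain with three distinct years
years_per_country = out.groupby('Country')['Year'].nunique()
out = out[out['Country'].isin(years_per_country[years_per_country == 3].index)]
# leaves only US and France for the sample data
</code></pre>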
python|pandas
6
1,378
43,643,663
Merge several models (LSTMs) in TensorFlow
<p>I know how to merge different models into one in Keras:</p> <pre><code>first_model = Sequential()
first_model.add(LSTM(output_dim, input_shape=(m, input_dim)))

second_model = Sequential()
second_model.add(LSTM(output_dim, input_shape=(n-m, input_dim)))

model = Sequential()
model.add(Merge([first_model, second_model], mode='concat'))
model.fit([X1, X2])
</code></pre> <p>I am not sure how to do this in TensorFlow, though.</p> <p>I have two LSTM models and want to merge them (in the same way as in the Keras example above).</p> <pre><code>outputs_1, state_1 = tf.nn.dynamic_rnn(stacked_lstm_1, model_input_1)
outputs_2, state_2 = tf.nn.dynamic_rnn(stacked_lstm_2, model_input_2)
</code></pre> <p>Any help would be much appreciated!</p>
<p>As was said in the comment, I believe the simplest way to do this is just to concatenate the outputs. The only complication I've found is that, at least the way I made my LSTM layers, they ended up with exactly the same names for their weight tensors. This led to an error, because TensorFlow thought the weights were already made when I tried to make the second layer. If you have this problem, you can solve it using a variable scope, which will apply to the names of the tensors in that LSTM layer:</p> <pre><code>with tf.variable_scope("LSTM_1"):
    # MultiRNNCell expects a list of cells
    lstm_cells_1 = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(256)])
    # dtype is required when no initial state is passed
    output_1, state_1 = tf.nn.dynamic_rnn(lstm_cells_1, inputs_1, dtype=tf.float32)
    last_output_1 = output_1[:, -1, :]
    # I usually work with the last one; you can keep them all, if you want

with tf.variable_scope("LSTM_2"):
    lstm_cells_2 = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(256)])
    output_2, state_2 = tf.nn.dynamic_rnn(lstm_cells_2, inputs_2, dtype=tf.float32)
    last_output_2 = output_2[:, -1, :]

merged = tf.concat((last_output_1, last_output_2), axis=1)
</code></pre>
deep-learning|tensorflow|keras
3
1,379
1,664,917
Automatic string length in recarray
<p>If I create a recarray in this way:</p> <pre><code>In [29]: np.rec.fromrecords([(1,'hello'),(2,'world')],names=['a','b']) </code></pre> <p>The result looks fine:</p> <pre><code>Out[29]: rec.array([(1, 'hello'), (2, 'world')], dtype=[('a', '&lt;i8'), ('b', '|S5')]) </code></pre> <p>But if I want to specify the data types:</p> <pre><code>In [32]: np.rec.fromrecords([(1,'hello'),(2,'world')],dtype=[('a',np.int8),('b',np.str)]) </code></pre> <p>The string is set to a length of zero:</p> <pre><code>Out[32]: rec.array([(1, ''), (2, '')], dtype=[('a', '|i1'), ('b', '|S0')]) </code></pre> <p>I need to specify datatypes for all numerical types since I care about int8/16/32, etc, but I would like to benefit from the auto string length detection that works if I don't specify datatypes. I tried replacing np.str by None but no luck. I know I can specify '|S5' for example, but I don't know in advance what the string length should be set to.</p>
<p>If you don't need to manipulate the strings as bytes, you may use the object data-type to represent them. This essentially stores a pointer instead of the actual bytes:</p> <pre><code>In [38]: np.array(data, dtype=[('a', np.uint8), ('b', np.object)])
Out[38]:
array([(1, 'hello'), (2, 'world')],
      dtype=[('a', '|u1'), ('b', '|O8')])
</code></pre> <p>Alternatively, Alex's idea would work well:</p> <pre><code># dt: the record dtype to adjust (e.g. the auto-detected dtype with '|S5' strings)
new_dt = []

# For each field of a given type and alignment, determine
# whether the field is an integer. If so, represent it as a byte.
for f, (T, align) in dt.fields.iteritems():
    if np.issubdtype(T, int):
        new_dt.append((f, np.uint8))
    else:
        new_dt.append((f, T))

new_dt = np.dtype(new_dt)
np.array(data, dtype=new_dt)
</code></pre> <p>which should yield</p> <pre><code>array([(1, 'hello'), (2, 'world')],
      dtype=[('f0', '|u1'), ('f1', '|S5')])
</code></pre>
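<p>If you do want real fixed-width strings, a hedged alternative is to compute the required width yourself before building the dtype:</p> <pre><code>data = [(1, 'hello'), (2, 'world')]
width = max(len(s) for _, s in data)   # the longest string decides the field width
dt = np.dtype([('a', np.int8), ('b', 'S%d' % width)])
np.rec.fromrecords(data, dtype=dt)
</code></pre>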
python|numpy
2
1,380
72,907,661
Creating Multiple Dictionaries from a CSV File
<p>I am currently importing a file like so:</p> <pre><code>df = pd.read_excel(r&quot;Test.csv&quot;)
</code></pre> <p>And the output looks like</p> <pre><code>           Type  Value
0  Food_Place_1      1
1  Food_Place_2      2
2    Car_Type_1      3
3    Car_Type_2      4
</code></pre> <p>I would like to iterate through this df and, depending on the Type column, allocate the rows to dictionaries like this:</p> <pre><code>food_type_dict = {'Type': ['Food_Place_1', 'Food_Place_2'], 'Value': [1, 2]}
car_type_dict = {'Type': ['Car_Type_1', 'Car_Type_2'], 'Value': [3, 4]}
</code></pre> <p>My plan was to convert the entire dataframe into a dictionary and filter from there. However, when I try to convert using this, the output is not what I was expecting; I can't seem to remove the Value header from the dictionary:</p> <pre><code>df_dict = df.set_index(['Type']).T.to_dict('dict')

Output
{'A1': {'Value': 1},....}
</code></pre>
<p>Create a category Series to group by, then aggregate lists into a nested dictionary:</p> <pre><code># If the category is obtained by removing digits
cat = df['Type'].str.replace('\d', '')

# If the category is the first letter
# cat = df['Type'].str[0]

d = df.rename(columns={'Type': 'Component'}).groupby(cat).agg(list).to_dict('index')
print(d)

{'A': {'Component': ['A1', 'A2'], 'Value': [1, 2]},
 'B': {'Component': ['B1', 'B2'], 'Value': [3, 4]}}
</code></pre> <p>Then instead of <code>a_type_dict</code> use <code>d['A']</code>, and instead of <code>b_type_dict</code> use <code>d['B']</code>.</p> <p>EDIT:</p> <pre><code>cat = df['Type'].str.split('_').str[0]

d = df.rename(columns={'Type': 'Component'}).groupby(cat).agg(list).to_dict('index')
print(d)

{'Car': {'Component': ['Car_Type_1', 'Car_Type_2'], 'Value': [3, 4]},
 'Food': {'Component': ['Food_Place_1', 'Food_Place_2'], 'Value': [1, 2]}}
</code></pre>
python|pandas
2
1,381
73,022,079
Turning Python Keras Machine Learning model into repeatable function that can take as input multiple X and y data sets
<p>I am currently building various machine learning models. Each of the models takes in X and y data that represent different stock prices, e.g. there's an X and y data frame for each stock (Apple, Microsoft, ...).</p> <p>I am trying to produce these models as repeatable, modular functions that I can quickly call for each of my X and y data sets.</p> <p>I have tried these models as standalone lines of code, or as functions that don't take in parameters, and they work as intended; however, whenever I try to pass my X and y data sets as parameters, they don't work.</p> <p>Currently I have:</p> <pre><code>def LSTM_regressor(X_train, X_test, y_train, y_test):
    convert_X_y_to_numpy_and_reshape(X_train, X_test, y_train, y_test)
    model = Sequential()
    model.add(Dense(1, input_dim=(X_train.shape[1]), kernel_initializer='normal', activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam', metrics=['mse'])
    model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=20)
    print(model.summary())

final_model = KerasRegressor(build_fn=LSTM_regressor(X_train_reg_aapl, X_test_reg_aapl, y_train_reg_aapl, y_test_reg_aapl),
                             batch_size=20, epochs=50, verbose=1)
kfold = KFold(n_splits=10)  # random_state=seed)
results = cross_val_score(final_model, X_train_reg_aapl, y_train_reg_aapl, cv=kfold, n_jobs=1)
print(&quot;Results: %.2f (%.2f) MSE&quot; % (results.mean(), results.std()))
</code></pre> <p>I am trying to pass the below into the function:</p> <pre><code>X_train_reg_aapl, X_test_reg_aapl, y_train_reg_aapl, y_test_reg_aapl
</code></pre> <p>but I get the error message:</p> <pre><code>AttributeError: 'KerasRegressor' object has no attribute '__call__'
</code></pre> <p>I have tried making a nested function and calling that, but it still doesn't work.</p> <p>Also, the below is a function that I created to transform the parameters entered into the machine learning model into a data format suitable for the model type.</p> <pre><code>convert_X_y_to_numpy_and_reshape(X_train, X_test, y_train, y_test)
</code></pre> <p>Its full code is this:</p> <pre><code>def convert_X_y_to_numpy_and_reshape(X_train, X_test, y_train, y_test):
    X_train = X_train.to_numpy()
    X_test = X_test.to_numpy()
    y_train = y_train.to_numpy()
    y_test = y_test.to_numpy()
    y_train = y_train.reshape(-1)
    y_test = y_test.reshape(-1)
    X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
    X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
</code></pre> <p>Any help on this would really be appreciated!</p>
<p>Try providing default values for your <code>LSTM_regressor</code> function:</p> <p><code>def LSTM_regressor(X_train=X_tr, X_test=X_te, y_train=y_tr, y_test=y_te):</code></p> <p>From the docs:</p> <blockquote> <p>sk_params takes both model parameters and fitting parameters. Legal model parameters are the arguments of build_fn. Note that like all other estimators in scikit-learn, <strong>'build_fn' should provide default values</strong> for its arguments, so that you could create the estimator without passing any values to sk_params.</p> </blockquote> <p>As a remark, you don't need to pass full datasets as arguments. You can take a <code>stock_name</code> argument and use it as a key into a dictionary of your stock dataframes:</p> <pre><code>dataset_dict = {&quot;AAPL&quot;: (X_train, X_test, y_train, y_test), &quot;GOOGL&quot;: (...)}

def LSTM_regressor(stock_name=&quot;AAPL&quot;):  # default value
    X_train, X_test, y_train, y_test = dataset_dict[stock_name]
    ....
</code></pre>
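<p>Note also that <code>build_fn</code> must receive the function object itself, not the result of calling it. In the question, <code>LSTM_regressor(...)</code> is called inline and returns <code>None</code>, which is what triggers the <code>no attribute '__call__'</code> error. A sketch of the corrected call, keeping the names from the question:</p> <pre><code>final_model = KerasRegressor(build_fn=LSTM_regressor,  # pass the function, do not call it
                             batch_size=20, epochs=50, verbose=1)
</code></pre>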
python|function|tensorflow|machine-learning|keras
1
1,382
72,952,488
Pandas Scipy mannwhitneyu in this type of data table
<p>I have a data table similar to this one (but huge), with many more types and more &quot;Spots&quot; cells for each &quot;Color&quot;:</p> <pre><code>Type Color Spots
A    Blue    792
A    Blue     56
A    Blue   2726
A    Blue    780
A    Blue    591
A    Blue   2867
A    Blue    193
A    Green   134
A    Green   631
A    Green  1010
A    Green    53
A    Green  5826
A    Green  6409
A    Green  3278
B    Blue    670
B    Blue     42
B    Blue   1165
B    Blue   3203
B    Blue   2164
B    Blue   5876
B    Blue    525
B    Green    26
B    Green   143
B    Green   399
B    Green    68
B    Green   939
B    Green  1528
B    Green   401
B    Green  1842
C    Blue    265
C    Blue     19
C    Blue   1381
C    Blue   4483
C    Blue   1103
C    Blue   1906
C    Blue    691
C    Green    38
C    Green   149
C    Green    87
C    Green    33
C    Green  1427
C    Green  1009
C    Green   342
C    Green   190
</code></pre> <p>I want to run a scipy mannwhitneyu analysis comparing the Blue vs Green spots of each type. For instance, for type A the comparison would be the one below, and the same should happen for all the types automatically:</p> <pre><code>Blue Green
 792   134
  56   631
2726  1010
 780    53
 591  5826
2867  6409
 193  3278
</code></pre> <p>I thought that defining those kinds of groups in pandas and then passing them to scipy should be the strategy, but my skills are not at that level yet. The idea is to do it automatically for all of the types, so I get the p-value for A, B, C, etc. Could somebody give me a hint? Thanks</p>
<p>Your question leaves implicit a lot that may be obvious to you but not to readers less familiar with this sort of statistical analysis. That can make it difficult to help you along, but by trying to cover all my bases, I think I can help you regardless.</p> <p>The first implied assumption appears to be that, when you select by type, the numbers of rows (which I do not assume to be consecutive in my code) you have for each type match. In your example there are seven rows for each type and color, but you also say that there will be more in your final data. As long as the numbers of rows match, the following applies regardless. If they do not match, you can insert a length check with <code>len()</code>; at the very latest, the Mann-Whitney function from scipy will probably complain about x and y not being the same length. What you would like to do, it seems, is to select by group from your pandas dataframe, which is a common operation. For example, to query all types from your dataframe, you could write:</p> <pre><code>set(df['Type'])
</code></pre> <p>Then, you can iterate over those types:</p> <pre><code>for t in types:
    # ...
</code></pre> <p>Similarly for your colors.</p> <p>Wanting to do a Mann-Whitney test probably also implies that it's always pairs of lists you want to compare, one green and one blue. I assume that those are the only colors, but if you have more, you can generate the combinations as such:</p> <pre><code>p = itertools.combinations(['Blue', 'Red', 'Green', 'Yellow'], 2)
print(*p)
</code></pre> <p>This will print:</p> <pre><code>('Blue', 'Red') ('Blue', 'Green') ('Blue', 'Yellow') ('Red', 'Green') ('Red', 'Yellow') ('Green', 'Yellow')
</code></pre> <p>I do not know if you consider this over-engineering, but usually hardcoding fewer assumptions and allowing for more flexibility is best, unless it also admits cases that should be caught as errors in your data. Hardcode <code>&quot;Blue&quot;</code> and <code>&quot;Green&quot;</code> if you prefer that; other readers of this question might be interested in the more general approach. You could do the same with <code>&quot;Spots&quot;</code> and avoid hardcoding even that variable, in case you have more than one metric, but it should be clear from here how to extend the approach.</p> <p>The next hint that might help you is to select, in such a loop, by two criteria, which will be your grouping criteria:</p> <pre><code>df[(df['Type'] == t) &amp; (df['Color'] == 'Green')]['Spots']
</code></pre> <p>The expression within <code>df[...]</code> is a boolean array, which is in turn used to select all rows corresponding to <code>True</code> values in that array. This is one of the main techniques by which more complex queries are built with pandas dataframes, and it might be the key insight you have been looking for.</p> <p>Do the same with the blue spots. Then, if you want to align two sequences, there is a builtin function called <code>zip</code>:</p> <pre><code>zip(green, blue)
</code></pre> <p>It will give you a generator-like object, so convert it to a list if that is undesirable. The lists passed do not need to be of equal length; the length of the result will be that of the shortest list passed to <code>zip</code>.
Even though your data appears to have the same number of measurements for each type and color, equal lengths do not seem to be a requirement for the Mann-Whitney test.</p> <p>Depending on how you read your data, it might also be useful to know how to convert types in tables. You might not need this step if your data types are already numeric, but by mentioning it I cover all my bases and might spare you another roadblock later down the line:</p> <pre><code>green = df[(df['Type'] == t) &amp; (df['Color'] == 'Green')]['Spots'].astype('int64')
</code></pre> <p>Putting all of this together, you might want to write something like this:</p> <pre><code>import pandas as pd
import scipy.stats
import itertools

records = [
    ('A', 'Blue', '792'),
    ('A', 'Blue', '56'),
    ('A', 'Blue', '2726'),
    ('A', 'Blue', '780'),
    ('A', 'Blue', '591'),
    ('A', 'Blue', '2867'),
    ('A', 'Blue', '193'),
    ('A', 'Green', '134'),
    ('A', 'Green', '631'),
    ('A', 'Green', '1010'),
    ('A', 'Green', '53'),
    ('A', 'Green', '5826'),
    ('A', 'Green', '6409'),
    ('A', 'Green', '3278'),
    ('B', 'Blue', '670'),
    ('B', 'Blue', '42'),
    ('B', 'Blue', '1165'),
    ('B', 'Blue', '3203'),
    ('B', 'Blue', '2164'),
    ('B', 'Blue', '5876'),
    ('B', 'Blue', '525'),
    ('B', 'Green', '26'),
    ('B', 'Green', '143'),
    ('B', 'Green', '399'),
    ('B', 'Green', '68'),
    ('B', 'Green', '939'),
    ('B', 'Green', '1528'),
    ('B', 'Green', '401'),
    ('B', 'Green', '1842'),
    ('C', 'Blue', '265'),
    ('C', 'Blue', '19'),
    ('C', 'Blue', '1381'),
    ('C', 'Blue', '4483'),
    ('C', 'Blue', '1103'),
    ('C', 'Blue', '1906'),
    ('C', 'Blue', '691'),
    ('C', 'Green', '38'),
    ('C', 'Green', '149'),
    ('C', 'Green', '87'),
    ('C', 'Green', '33'),
    ('C', 'Green', '1427'),
    ('C', 'Green', '1009'),
    ('C', 'Green', '342'),
    ('C', 'Green', '190'),
]

df = pd.DataFrame(records, columns=['Type', 'Color', 'Spots'])

types = set(df['Type'])
print(f'Types:\n{types}\n')

colors = set(df['Color'])
print(f'Colors:\n{colors}\n')

groups = list(itertools.combinations(colors, 2))

for t in types:
    for c in groups:
        spots1 = df[(df['Type'] == t) &amp; (df['Color'] == c[0])]['Spots'].astype('int64')
        spots2 = df[(df['Type'] == t) &amp; (df['Color'] == c[1])]['Spots'].astype('int64')
        print(f'{c[0]}:\n{spots1}\n')
        print(f'{c[1]}:\n{spots2}\n')
        # Do you want to align these sequences before the test?
        # Then, zip might be useful.
        for cp in zip(spots1, spots2):
            print(cp)
        print()
        mwu = scipy.stats.mannwhitneyu(spots1, spots2)
        print(f'Type {t}: {c[0]} vs. {c[1]} statistic = {mwu.statistic}, p-value = {mwu.pvalue}')
</code></pre> <p>This will result in the following statistics:</p> <pre><code>Type B: Green vs. Blue statistic = 15.0, p-value = 0.151981351981352
Type C: Green vs. Blue statistic = 15.0, p-value = 0.151981351981352
Type A: Green vs. Blue statistic = 30.0, p-value = 0.534965034965035
</code></pre> <p>I am not sure if that is what you are looking for, but I hope this will at least help you towards getting your data transformed in such a way that you can make progress on your issue. For your validation, here is how my code would line up your data:</p> <pre><code>for p in zip(green, blue):
    print(p)
</code></pre> <p>The pairs this will generate:</p> <pre><code>(134, 792)
(631, 56)
(1010, 2726)
(53, 780)
(5826, 591)
(6409, 2867)
(3278, 193)
(26, 670)
(143, 42)
(399, 1165)
(68, 3203)
(939, 2164)
(1528, 5876)
(401, 525)
(38, 265)
(149, 19)
(87, 1381)
(33, 4483)
(1427, 1103)
(1009, 1906)
(342, 691)
</code></pre> <p>I am aware that there is some redundant code in here, but I wanted to make sure to touch on everything relevant.
If there are more colors than green and blue then, as mentioned, you want to generate all possible pairs; the <code>itertools.combinations</code> call shown above already handles that. Depending on how you read your data and how it is formatted, you might also need to adjust the step that transforms the records for your analysis into a pandas data frame, or remove those lines entirely if you already have that step covered. I assume everything to be strings, as might be the case when data is read from a CSV file lacking type information.</p> <p>If you need to read the data, formatted the way you present it, from a CSV file, you could use the following. A separator of '\s+' will match all whitespace. So, a compact version stripped of all the explanatory code and the data records could look like this:</p> <pre><code>import pandas
import scipy.stats
import itertools

data = pandas.read_csv('data.csv', sep='\s+')

types = set(data[data.columns[0]])
colors = set(data[data.columns[1]])
groups = list(itertools.combinations(colors, 2))

for t in types:
    for c in groups:
        sel_x = (data[data.columns[0]] == t) &amp; (data[data.columns[1]] == c[0])
        x = data[sel_x][data.columns[2]].astype('int64')
        sel_y = (data[data.columns[0]] == t) &amp; (data[data.columns[1]] == c[1])
        y = data[sel_y][data.columns[2]].astype('int64')
        print(f'{t} ({c[0]} vs. {c[1]}): {scipy.stats.mannwhitneyu(x, y)}')
</code></pre> <p>Output:</p> <pre><code>B (Green vs. Blue): MannwhitneyuResult(statistic=15.0, pvalue=0.151981351981352)
A (Green vs. Blue): MannwhitneyuResult(statistic=30.0, pvalue=0.534965034965035)
C (Green vs. Blue): MannwhitneyuResult(statistic=15.0, pvalue=0.151981351981352)
</code></pre> <p>As you can see, they might be out of order, which is due to using a set. I assume that not to be an issue, but if it is, sort them first:</p> <pre><code>types = sorted(set(df['Type']))
</code></pre> <p>You can also use pandas' <code>groupby</code> and <code>unique</code>. I do not think there is a straightforward way to select all pairs of groups, which it appears you need, but pandas does have the ability to specify more than one column to group by.</p> <pre><code>import pandas
import scipy.stats
import itertools

df = pandas.read_csv('data.csv', sep='\s+')

cp = list(itertools.combinations(df[df.columns[1]].unique(), 2))

for key, group in df.groupby(df.columns[0]):
    for c in cp:
        x = group[group[df.columns[1]] == c[0]][df.columns[2]].astype('int64')
        y = group[group[df.columns[1]] == c[1]][df.columns[2]].astype('int64')
        print(f'Type {key} ({c[0]} vs {c[1]}): {scipy.stats.mannwhitneyu(x, y)}')
</code></pre> <p>Documentation for <code>groupby</code> can be found under <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html</a>.</p> <p>I am rusty with the statistics part, so I might not be able to help you with that right away.
However, you seem to be in the know on that part, so if I can help you out with the Python portion and you fill me in where I might get the statistical analysis wrong, I am confident that we can solve this.</p> <p>For other readers, the documentation for the scipy implementation can be found under <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html</a>:</p> <blockquote> <p>The Mann-Whitney U test is a nonparametric test of the null hypothesis that the distribution underlying sample x is the same as the distribution underlying sample y. It is often used as a test of difference in location between distributions.</p> </blockquote> <p>More explanation for the Mann-Whitney test can be found under <a href="https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mann–Whitney_U_test</a>. I hope that including these references will make both question and answer more useful to other readers and easier to follow along. Roughly speaking, what you are probably interested in are the statistical differences in occurrence of green and blue spots between different types of objects being observed. Discussing the applicability of this statistic, given the nature and distribution of the data, I understand to be outside the scope of this question.</p>
python|pandas|scipy|statistics
-1
1,383
72,990,835
How to send emails for each person in an Excel file
<p>So I have a file em.xlsx with Name &amp; Email columns, and I want to send an email when the Name matches a filename in a directory.</p> <p>How can I do that? So far I have the code below, but it actually returns nothing.</p> <pre><code>import glob
import pandas as pd
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import os
from collections import ChainMap

# Spreadsheet with emails and names
email_list = pd.read_excel(r'C:\Users\arabw\OneDrive\Bureau\html\em.xlsx')

folder_path = &quot;C:/Users/arabw/OneDrive/Bureau/html/&quot;
my_files = [{each_file.split(&quot;.&quot;)[0]: each_file} for each_file in os.listdir(folder_path) if each_file.endswith(&quot;.csv&quot;)]
my_files_dict = dict(ChainMap(*my_files))

# getting the names and the emails
names = email_list['Name']
emails = email_list['Email']

for i in range(len(emails)):
    # iterate through the records
    # for every record get the name and the email addresses
    name = names[i]
    email = emails[i]
    if my_files_dict.get(name):
        print(f&quot;file found:{my_files_dict.get(name)}&quot;)
        # attach this file : my_files_dict.get(name)

        # Some help needed from here I believe
        while name == os.path:
            smtp_ssl_host = 'xxxxx'
            smtp_ssl_port = 465
            email_from = &quot;xxxxx&quot;
            email_pass = &quot;xxxxx&quot;
            email_to = email

            msg2 = MIMEMultipart()
            msg2['Subject'] = &quot;Present Record(s)&quot;
            msg2['From'] = email_from
            msg2['To'] = email

            fo = open(my_files_dict.get(name), 'rb')
            attach = email.mime.application.MIMEApplication(fo.read(), _subtype=&quot;xlsx&quot;)
            fo.close()
            attach.add_header('Content-Disposition', 'attachment', my_files_dict.get(name))
            msg.attach(attach)

            s2 = smtplib.SMTP_SSL(smtp_ssl_host, smtp_ssl_port)
            s2.login(email_from, email_pass)
            s2.send_message(msg)
            s2.quit()
</code></pre>
<p>The following code passed my test; hope it helps. I'm not sure what <code>while name == os.path:</code> is for, so I just ignored it.</p> <pre><code>import glob
import pandas as pd
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
import email
import os
from collections import ChainMap

# Spreadsheet with emails and names
email_list = pd.read_excel('email.xlsx')

folder_path = './Files/A'
my_files = [{each_file.split(&quot;.&quot;)[0]: each_file} for each_file in os.listdir(folder_path) if each_file.endswith(&quot;.csv&quot;)]
my_files_dict = dict(ChainMap(*my_files))

# getting the names and the emails
names = email_list['Name']
emails = email_list['Email']

for i in range(len(emails)):
    # iterate through the records
    # for every record get the name and the email addresses
    name = names[i]
    email_item = emails[i]
    file_name = my_files_dict.get(name)
    if file_name:
        print(f&quot;file found:{file_name}&quot;)
        # attach this file : my_files_dict.get(name)
        smtp_ssl_host = 'smtp.xxx.com'
        smtp_ssl_port = 465
        email_from = &quot;xxxxx&quot;
        email_pass = &quot;xxxxx&quot;
        email_to = email_item

        msg = MIMEMultipart()
        msg['Subject'] = &quot;Present Record(s)&quot;
        msg['From'] = email_from
        msg['To'] = email_item

        fo = open(os.path.join(folder_path, file_name), 'rb')
        attach = MIMEApplication(fo.read(), _subtype=&quot;xlsx&quot;)
        fo.close()
        attach.add_header('Content-Disposition', 'attachment', filename=file_name)
        msg.attach(attach)

        smtp = smtplib.SMTP_SSL(smtp_ssl_host, smtp_ssl_port)
        try:
            smtp.login(email_from, email_pass)
            smtp.sendmail(email_from, email_to, msg.as_string())
        except smtplib.SMTPException as e:
            print(&quot;send fail&quot;, e)
        else:
            print(&quot;success&quot;)
        finally:
            try:
                smtp.quit()
            except smtplib.SMTPException:
                print(&quot;quit fail&quot;)
            else:
                print(&quot;quit success&quot;)
</code></pre>
python|pandas|email|path|smtplib
0
1,384
72,845,550
Python Pandas: endless loop
<p>Why does this part of the code seem to run in an endless loop? It can't really be one, because when I stop this part of the code (in a Jupyter Notebook), all the 99999 values have already changed to oil_mean_by_year[data.loc[i]['year']]</p> <pre><code>for i in data.index:
    if data.loc[i]['dcoilwtico'] == 99999:
        data.loc[i, 'dcoilwtico'] = oil_mean_by_year[data.loc[i]['year']]
</code></pre>
<p>The loop isn't endless, just very slow: every iteration performs label-based lookups and a single-cell assignment. Use merge to align the oil mean of each year with its rows instead.</p> <p>Merge on <code>data['year']</code> vs <code>oil_mean_by_year</code>'s index:</p> <pre><code>data_with_oil_mean = pd.merge(data, oil_mean_by_year.rename(&quot;oil_mean&quot;),
                              left_on=&quot;year&quot;, right_index=True, how=&quot;left&quot;)
</code></pre> <pre><code>data_with_oil_mean['dcoilwtico'] = data_with_oil_mean['dcoilwtico'].mask(
    lambda xs: xs.eq(99999), data_with_oil_mean['oil_mean'])
</code></pre>
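<p>An alternative sketch that avoids the merge entirely, assuming <code>oil_mean_by_year</code> is a Series (or dict) keyed by year:</p> <pre><code>import numpy as np

data['dcoilwtico'] = (
    data['dcoilwtico']
    .replace(99999, np.nan)                      # mark the sentinel values
    .fillna(data['year'].map(oil_mean_by_year))  # fill them with the yearly mean
)
</code></pre>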
python|pandas
0
1,385
70,622,976
pytest: use fixture with pandas dataframe for parametrization
<p>I have a fixture, which returns a <code>pd.DataFrame</code>. I need to insert the individual columns (<code>pd.Series</code>) into a unit test and I would like to use <code>parametrize</code>.</p> <p>Here's a toy example without <code>parametrize</code>. Every column of the dataframe will be tested individually. However, I guess I can get rid of the <code>input_series</code> fixture, can't I? With this code, only 1 test will be executed. However, I am looking for 3 tests while getting rid of the for-loop at the same time.</p> <pre><code>import numpy as np import pandas as pd import pytest @pytest.fixture(scope=&quot;module&quot;) def input_df(): return pd.DataFrame( data=np.random.randint(1, 10, (5, 3)), columns=[&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;] ) @pytest.fixture(scope=&quot;module&quot;) def input_series(input_df): return [input_df[series] for series in input_df.columns] def test_individual_column(input_series): for series in input_series: assert len(series) == 5 </code></pre> <p>I am basically looking for something like this:</p> <pre><code>@pytest.mark.parametrize(&quot;series&quot;, individual_series_from_input_df) def test_individual_column(series): assert len(series) == 5 </code></pre>
<p>If you try to yield multiple values from a fixture that is based on another fixture, you will get the <code>yield_fixture function has more than one 'yield'</code> error message.</p> <p>One solution is to use <a href="https://docs.pytest.org/en/6.2.x/fixture.html#parametrizing-fixtures" rel="nofollow noreferrer">fixture parametrization</a>. In your case you want to iterate by columns, so the DataFrame columns are the parameters.</p> <pre class="lang-py prettyprint-override"><code># test data
input_df = pd.DataFrame(
    data=np.random.randint(1, 10, (5, 3)),
    columns=[&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;]
)


@pytest.fixture(
    scope=&quot;module&quot;,
    params=input_df.columns,
)
def input_series(request):
    series = request.param
    yield input_df[series]


def test_individual_column(input_series):
    assert len(input_series) == 5
</code></pre> <p>This will generate one test per column of the test DataFrame.</p> <pre class="lang-sh prettyprint-override"><code>pytest test_pandas.py
# test_pandas.py::test_individual_column[col1] PASSED
# test_pandas.py::test_individual_column[col2] PASSED
# test_pandas.py::test_individual_column[col3] PASSED
</code></pre>
python|pandas|pytest|fixtures|parametrized-testing
1
1,386
70,688,556
What is the equivalent of PyTorch's BoolTensor in Tensorflow 2.x?
<p>Is there an equivalent of PyTorch's BoolTensor in TensorFlow? Assume I have the usage below in PyTorch that I want to migrate to TensorFlow:</p> <pre><code>done_mask = torch.BoolTensor(dones.values).to(device)
next_state_values[done_mask] = 0.0
</code></pre>
<p>What is <code>dones</code>? Assuming it's a 0/1 tensor, you can convert it to a bool tensor like this:</p> <pre><code>tf.cast(dones, tf.bool)
</code></pre> <p>However, if you want to assign values to a tensor, you can't do it that way.</p> <p>One way, which I recommend, is to multiply by a 1/0 mask:</p> <pre><code>next_state_values *= tf.cast(dones != 1, next_state_values.dtype)
</code></pre> <p>Another way, which I don't recommend as it causes some issues when computing gradients, is to use tf.tensor_scatter_nd_update. For your case, that would be:</p> <pre><code>indices = tf.where(dones == 1)
next_state_values = tf.tensor_scatter_nd_update(next_state_values, indices,
                                                tf.zeros(len(indices)))
</code></pre>
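<p>A third option worth mentioning (my addition, not from the original answer): <code>tf.where</code> expresses the masked assignment directly:</p> <pre><code>mask = tf.cast(dones, tf.bool)
next_state_values = tf.where(mask, tf.zeros_like(next_state_values), next_state_values)
</code></pre>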
python-3.x|tensorflow|pytorch|tensorflow2.0|tf.keras
1
1,387
70,498,489
Create multiple dataframes with for loop in python
<p>I need to compile grades from 10 files named quiz2, quiz3, [...], quiz11.</p> <p>I apply the following transformation:</p> <ul> <li>Import the xls to a df with pandas</li> <li>Get only the 4 renamed columns</li> <li>Keep only the highest grade if there are multiple values for the same ID</li> </ul> <p>The code for one dataframe is the following:</p> <pre><code>quiz2 = pd.read_excel(r'C:\Users\llarbodiere\Desktop\Perso\grade compil\quiz\quiz2.xls')
quiz2 = quiz2.rename({'Nom d’utilisateur': 'ID', 'Note totale': 'quiz2'}, axis='columns')
quiz2 = quiz2[['Nom','Prénom','ID','quiz2']]
quiz2.groupby(&quot;ID&quot;).max().sort_values(&quot;Nom&quot;).fillna(0)
</code></pre> <p>I want to repeat the same transformations for all the quizzes from quiz2 to quiz11. I have tried a for loop, but it did not work.</p> <p>Thanks in advance!</p>
<p>You could generate the file names dynamically by looping through the quiz numbers 2 to 11 and concatenating each number into the file name and suffix.</p> <pre><code># create an empty dataframe for collecting loop results
cumulative_df = pd.DataFrame()

# loop through the quiz numbers 2 to 11
for x in range(2, 12):
    # generate the file name
    file = 'quiz' + str(x) + '.xls'
    df = pd.read_excel('C:/Users/llarbodiere/Desktop/Perso/grade compil/quiz/' + file)
    df = df.rename({'Nom d’utilisateur': 'ID', 'Note totale': 'quiz'}, axis='columns')
    df = df[['Nom', 'Prénom', 'ID', 'quiz']]
    # keep only the highest grade per ID (the result must be assigned back)
    df = df.groupby(&quot;ID&quot;).max().sort_values(&quot;Nom&quot;).fillna(0)
    # add the df active in the loop to the cumulative df (concat returns a new frame)
    cumulative_df = pd.concat([cumulative_df, df])

print(cumulative_df)
</code></pre> <p>EDIT: the example above uses the specific file names you mentioned. It could be generalized further to work for all files in a given directory, for example.</p>
python|pandas|dataframe|loops|file
-1
1,388
70,415,426
Allocator ran out of memory - how to clear GPU memory from TensorFlow dataset?
<p>Assuming a Numpy array <code>X_train</code> of shape <code>(4559552, 13, 22)</code>, the following code:</p> <pre class="lang-py prettyprint-override"><code>train_dataset = tf.data.Dataset \ .from_tensor_slices((X_train, y_train)) \ .shuffle(buffer_size=len(X_train) // 10) \ .batch(batch_size) </code></pre> <p>works fine exactly once. When I re-run it (after slight modifications to <code>X_train</code>), it then triggers an <code>InternalError</code> due to an out of memory GPU:</p> <pre><code>2021-12-19 15:36:58.460497: W tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator (GPU_0_bfc) ran out of memory trying to allocate 9.71GiB requested by op _EagerConst </code></pre> <p>It seems that the first time, it finds 100% free GPU memory so all works fine, but the subsequent times, the GPU memory is already almost full and hence the error.</p> <p>From what I understand, it seems that simply clearing GPU memory from the old <code>train_dataset</code> would be sufficient to solve the problem, but I couldn't find any way to achieve this in TensorFlow. Currently the only way to re-assign the dataset is to kill the Python kernel and re-run everything from start.</p> <p>Is there a way to avoid re-starting the Python kernel from scratch and instead free the GPU memory so that the new dataset can be loaded into it?</p> <p>The dataset doesn't need full GPU memory, so I would consider switching to a TFRecord solution as a non-ideal solution here (as it comes with additional complications).</p>
<p>Try enabling memory growth, so TensorFlow allocates GPU memory incrementally instead of grabbing almost all of it up front, as shown in the <a href="https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth" rel="nofollow noreferrer">TensorFlow guide</a>:</p> <pre><code>import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
</code></pre>
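<p>If you would rather cap TensorFlow at a fixed amount of GPU memory, the same guide describes a virtual-device configuration; a sketch (the 8192 MiB limit is an arbitrary example value):</p> <pre><code>gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.set_logical_device_configuration(
    gpus[0],
    [tf.config.LogicalDeviceConfiguration(memory_limit=8192)]  # in MiB
)
</code></pre>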
python|tensorflow|gpu|out-of-memory
1
1,389
70,407,722
Using one Excel column in two places in JSON
<p>I have an Excel file which I am converting to JSON and merging with an existing JSON file. In Excel I have a column &quot;ja&quot; with values in it. Is there a way to add the values from that column to the JSON in two places, &quot;ja&quot; and &quot;ja-jpn&quot;? Expected output:</p> <pre><code>&quot;ja&quot;:{
    &quot;Ball&quot;:&quot;Ball&quot;,
    &quot;Snow&quot;:&quot;Schnee&quot;,
    &quot;Elephant&quot;:&quot;Elephant&quot;,
    &quot;Woman&quot;:&quot;Frau&quot;,
    &quot;Potato&quot;:&quot;Kartoffeln&quot;,
    &quot;Tomato&quot;:&quot;F&quot;,
    &quot;Carrot&quot;:&quot;G&quot;
},
&quot;ja-jpn&quot;:{
    &quot;Ball&quot;:&quot;Ball&quot;,
    &quot;Snow&quot;:&quot;Schnee&quot;,
    &quot;Elephant&quot;:&quot;Elephant&quot;,
    &quot;Woman&quot;:&quot;Frau&quot;,
    &quot;Potato&quot;:&quot;Kartoffeln&quot;,
    &quot;Tomato&quot;:&quot;F&quot;,
    &quot;Carrot&quot;:&quot;G&quot;
}
</code></pre>
<p>Before you convert your dataframe to JSON, just duplicate the column, like this:</p> <pre><code>if 'ja' in new_data.columns: new_data['ja-jpn'] = new_data['ja'] </code></pre>
python|json|excel|pandas|localization
0
1,390
70,567,173
Pandas - how to convert objects to float values?
<p>I am working with a pandas dataframe of football players. There is a column with the value of each player. The problem is that the type of this column is object, and I want to convert it to float64. How can I do it? The column is <code>Release Clause</code>.</p> <pre><code>df_fifa['Release Clause']

0        €226.5M
1        €127.1M
2        €228.1M
3        €138.6M
4        €196.4M
          ...
18202      €143K
18203      €113K
18204      €165K
18205      €143K
18206      €165K
Name: Release Clause, Length: 18207, dtype: object
</code></pre> <p>I want to convert it to the complete number, e.g. €200M to 200.000.000 and €200K to 200.000.</p> <p>I know the conversion should be</p> <pre><code>df_fifa['Release Clause'].astype(str).astype(int)
</code></pre> <p>But first I should remove €, M &amp; K. I tried this for the removal, but it didn't work:</p> <pre><code>df_fifa['Release Clause'] = df_fifa['Release Clause'].replace(&quot;€&quot;, &quot;&quot;)
</code></pre> <p>Does anyone know how to do it?</p> <p>Thank you!</p>
<p>Convert your symbol <code>€</code>, <code>K</code> and <code>M</code> to <code>''</code>, <code> * 1e3</code> and <code> * 1e6</code> and evaluate your expression with <code>pd.eval</code>:</p> <pre><code>mapping = {'€': '', 'K': ' * 1e3', 'M': ' * 1e6'} df_fifa['Release Clause'] = \ pd.eval(df_fifa['Release Clause'].replace(mapping, regex=True)) print(df_fifa) # Output Release Clause 0 226500000.0 1 127100000.0 2 228100000.0 3 138600000.0 4 196400000.0 18202 143000.0 18203 113000.0 18204 165000.0 18205 143000.0 18206 165000.0 </code></pre>
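<p>An alternative sketch that avoids evaluating strings: extract the number and the suffix separately, then multiply (my column handling, same data as above):</p> <pre><code>parts = df_fifa['Release Clause'].str.extract(r'€([\d.]+)([MK]?)')
multiplier = parts[1].map({'M': 1e6, 'K': 1e3}).fillna(1)
df_fifa['Release Clause'] = parts[0].astype(float) * multiplier
</code></pre>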
python|pandas|dataframe|variables
2
1,391
70,527,526
When using df1[~df1.isin(df2)].dropna() Question
<p>I have a data set of invoices with columns including:</p> <ul> <li>CaseID</li> <li>Customer</li> <li>Supplier</li> <li>Part Number</li> <li>Cost.</li> </ul> <p>This data set includes Charges and Credits. I want to remove the Credits, together with the Charges they credited, from the DataFrame. But there are some instances where a transaction was charged twice by accident, so the information is a duplicate. I do not want to remove such a duplicate charge just because one credit exists, since the duplicate would need a credit of its own.</p> <p>I have the original df. I created a charge df with all rows from the original where Cost &gt; 0, and a credit df with all rows where Cost &lt; 0.</p> <p>My question is, if I use <code>df1[~df1.isin(df2)].dropna()</code>, or in this case:</p> <pre><code>invoiced[~invoiced.isin(credits)].dropna()
</code></pre> <p>how do I specify that I only want each row dropped one time? Is it possible?</p> <pre><code>ex: invoice =
Case | Part_Number | Cost
111  | 2G          |  53.00
112  | 7G          |  25.00
112  | 7G          |  25.00
113  | 8G          |  20.00
113  | 8G          | -20.00
114  | 9G          |  15.00
115  | 2G          |  53.00
115  | 2G          |  53.00
115  | 2G          | -53.00
</code></pre> <pre><code>Charge =
Case | Part_Number | Cost
111  | 2G          |  53.00
112  | 7G          |  25.00
112  | 7G          |  25.00
113  | 8G          |  20.00
114  | 9G          |  15.00
115  | 2G          |  53.00
115  | 2G          |  53.00
</code></pre> <pre><code>Credits =
Case | Part_Number | Cost
113  | 8G          | -20.00
115  | 2G          | -53.00
</code></pre> <pre><code>Output = df =
Case | Part_Number | Cost
111  | 2G          |  53.00
112  | 7G          |  25.00
112  | 7G          |  25.00
114  | 9G          |  15.00
115  | 2G          |  53.00
</code></pre> <p>See how it removed 113, since there was 1 charge and 1 credit, but kept one row of 115, since there were 2 charges and 1 credit.</p>
<p>Try this:</p> <pre><code>invoices = pd.DataFrame([['111', '2g', 53], ['112', '7g', 25], ['112', '7g', 25], ['113', '8g', 20], ['113', '8g', -20], ['114', '9g', 15], ['115', '2g', 53], ['115', '2g', 53], ['115', '2g', -53]], columns=['Case', 'PartNo', 'Cost']) print(f&quot;Original invoices:\n{invoices}\n\n&quot;) newInvoices = invoices.copy() newInvoices['Charge_Credit'] = 0 for idx, case, part, cost, ch_cr in newInvoices.itertuples(): creditedDf = newInvoices[(newInvoices.Case == case) &amp; (newInvoices.PartNo == part) &amp; (newInvoices.Cost == -cost) &amp; (newInvoices.Charge_Credit != 'remove')] if len(creditedDf): newInvoices.loc[creditedDf.iloc[0].name, 'Charge_Credit'] = 'remove' newInvoices = newInvoices[['Case', 'PartNo', 'Cost']][newInvoices.Charge_Credit != 'remove'] newInvoices.reset_index(drop=True, inplace=True) print(f&quot;New invoices:\n{newInvoices}\n&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/ZiHoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZiHoz.png" alt="enter image description here" /></a></p>
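<p>For larger frames, a vectorized sketch (my construction, not part of the loop above): number repeated charge/credit rows with <code>cumcount</code>, then drop every charge whose (case, part, amount, occurrence) key has a matching credit:</p> <pre><code>inv = invoices.copy()
inv['abs_cost'] = inv['Cost'].abs()
# occurrence counter, computed separately for charges and credits
inv['occ'] = inv.groupby(['Case', 'PartNo', 'abs_cost', inv['Cost'].lt(0)]).cumcount()

keys = ['Case', 'PartNo', 'abs_cost', 'occ']
credit_keys = inv.loc[inv['Cost'] &lt; 0, keys]

# keep only the charges that have no same-numbered credit
new_invoices = (inv[inv['Cost'] &gt; 0]
                .merge(credit_keys, on=keys, how='left', indicator=True)
                .query(&quot;_merge == 'left_only'&quot;)[['Case', 'PartNo', 'Cost']])
</code></pre>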
python|pandas|dataframe
1
1,392
70,441,819
In a pandas string column, eliminate the text preceding a substring
<p>For example, I have a pandas DataFrame with a string column in which I would like to delete the <code>**bold**</code> text before a substring:</p> <pre><code>Column1
**Yon-RM-**CT 500M
**Abib-RM-**CT 500M
**Wal-RM-**CT 500M
**Sopxc-RM-**CT 1000M
</code></pre> <p>Notice that the bold text can have different lengths, but it always ends in &quot;-RM-&quot;.</p>
<p>Assuming all you want is <code>CT 500M</code>, and all rows follow the same format, apply a lambda function that splits on &quot;-&quot; and takes the third piece (index 2):</p> <pre><code>df[&quot;Column1&quot;] = df.apply(lambda x: x[&quot;Column1&quot;].split(&quot;-&quot;)[2], axis=1)
</code></pre> <p>You could also split on &quot;RM&quot;.</p>
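<p>If the leading text can itself contain hyphens, a hedged regex alternative that strips everything up to and including the first <code>-RM-</code>:</p> <pre><code>df[&quot;Column1&quot;] = df[&quot;Column1&quot;].str.replace(r'^.*?-RM-', '', regex=True)
</code></pre>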
python|pandas|substring
0
1,393
42,914,747
Setting value to a copy of a slice of a DataFrame
<p>I am setting up the following example which is similar to my situation and data:</p> <p>Say, I have the following DataFrame: </p> <pre><code>df = pd.DataFrame ({'ID' : [1,2,3,4], 'price' : [25,30,34,40], 'Category' : ['small', 'medium','medium','small']}) </code></pre> <p><br></p> <pre><code> Category ID price 0 small 1 25 1 medium 2 30 2 medium 3 34 3 small 4 40 </code></pre> <p>Now, I have the following function, which returns the discount amount based on the following logic:</p> <pre><code>def mapper(price, category): if category == 'small': discount = 0.1 * price else: discount = 0.2 * price return discount </code></pre> <p>Now I want the resulting DataFrame:</p> <pre><code> Category ID price Discount 0 small 1 25 0.25 1 medium 2 30 0.6 2 medium 3 40 0.8 3 small 4 40 0.4 </code></pre> <p>So I decided to call series.map on the column price because I don't want to use apply. I am working on a large DataFrame and map is much faster than apply.</p> <p>I tried doing this:</p> <pre><code>for c in list(sample.Category.unique()): sample[sample['Category'] == c]['Discount'] = sample[sample['Category'] == c]['price'].map(lambda x: mapper(x,c)) </code></pre> <p>And that didn't work as I expected because I am trying to set a value on a copy of a slice of the DataFrame. </p> <p>My question is, Is there a way to do this without using <code>df.apply()</code>? </p>
<p>One approach with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> -</p> <pre><code>mask = df.Category.values=='small' df['Discount'] = np.where(mask,df.price*0.01, df.price*0.02) </code></pre> <p>Another way to put things a bit differently -</p> <pre><code>df['Discount'] = df.price*0.01 df['Discount'][df.Category.values!='small'] *= 2 </code></pre> <p>For performance, you might want to work with array data, so we could use <code>df.price.values</code> instead at places where <code>df.price</code> was used.</p> <h2>Benchmarking</h2> <p>Approaches -</p> <pre><code>def app1(df): # Proposed app#1 here mask = df.Category.values=='small' df_price = df.price.values df['Discount'] = np.where(mask,df_price*0.01, df_price*0.02) return df def app2(df): # Proposed app#2 here df['Discount'] = df.price.values*0.01 df['Discount'][df.Category.values!='small'] *= 2 return df def app3(df): # @piRSquared's soln df.assign( Discount=((1 - (df.Category.values == 'small')) + 1) / 100 * df.price.values) return df def app4(df): # @MaxU's soln df.assign(Discount=df.price * df.Category.map({'small':0.01}).fillna(0.02)) return df </code></pre> <p>Timings -</p> <p>1) Large dataset :</p> <pre><code>In [122]: df Out[122]: Category ID price Discount 0 small 1 25 0.25 1 medium 2 30 0.60 2 medium 3 34 0.68 3 small 4 40 0.40 In [123]: df1 = pd.concat([df]*1000,axis=0) ...: df2 = pd.concat([df]*1000,axis=0) ...: df3 = pd.concat([df]*1000,axis=0) ...: df4 = pd.concat([df]*1000,axis=0) ...: In [124]: %timeit app1(df1) ...: %timeit app2(df2) ...: %timeit app3(df3) ...: %timeit app4(df4) ...: 1000 loops, best of 3: 209 µs per loop 10 loops, best of 3: 63.2 ms per loop 1000 loops, best of 3: 351 µs per loop 1000 loops, best of 3: 720 µs per loop </code></pre> <p>2) Very large dataset :</p> <pre><code>In [125]: df1 = pd.concat([df]*10000,axis=0) ...: df2 = pd.concat([df]*10000,axis=0) ...: df3 = pd.concat([df]*10000,axis=0) ...: df4 = pd.concat([df]*10000,axis=0) ...: In [126]: %timeit app1(df1) ...: %timeit app2(df2) ...: %timeit app3(df3) ...: %timeit app4(df4) ...: 1000 loops, best of 3: 758 µs per loop 1 loops, best of 3: 2.78 s per loop 1000 loops, best of 3: 1.37 ms per loop 100 loops, best of 3: 2.57 ms per loop </code></pre> <p><strong>Further boost with data reuse -</strong></p> <pre><code>def app1_modified(df): mask = df.Category.values=='small' df_price = df.price.values*0.01 df['Discount'] = np.where(mask,df_price, df_price*2) return df </code></pre> <p>Timings -</p> <pre><code>In [133]: df1 = pd.concat([df]*10000,axis=0) ...: df2 = pd.concat([df]*10000,axis=0) ...: df3 = pd.concat([df]*10000,axis=0) ...: df4 = pd.concat([df]*10000,axis=0) ...: In [134]: %timeit app1(df1) 1000 loops, best of 3: 699 µs per loop In [135]: %timeit app1_modified(df1) 1000 loops, best of 3: 655 µs per loop </code></pre>
python|pandas|numpy|dataframe
8
1,394
42,956,997
Stripping and testing against Month component of a date
<p>I have a dataset that looks like this:</p> <pre><code>import numpy as np import pandas as pd raw_data = {'Series_Date':['2017-03-10','2017-04-13','2017-05-14','2017-05-15','2017-06-01']} df = pd.DataFrame(raw_data,columns=['Series_Date']) print df </code></pre> <p>I would like to pass in a date parameter as a string as follows:</p> <pre><code>date = '2017-03-22' </code></pre> <p>I would now like to know if there are any dates in my DataFrame 'df' for which the month is 3 months after the month in the date parameter.</p> <p>That is, if the month in the date parameter is March, then it should check if there are any dates in df from June. If there are any, I would like to see those dates. If not, it should just output 'No date found'.</p> <p>In this example, the output should be '2017-06-01', as it is a date from June and my date parameter is from March.</p> <p>Could anyone help me get started with this?</p>
<p>Convert your column to <code>Timestamp</code>:</p> <pre><code>df.Series_Date = pd.to_datetime(df.Series_Date) date = pd.to_datetime('2017-03-01') </code></pre> <p>Then compute the month difference and keep the rows exactly three months ahead:</p> <pre><code>df[ (df.Series_Date.dt.year - date.year) * 12 + df.Series_Date.dt.month - date.month == 3 ] Series_Date 4 2017-06-01 </code></pre>
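<p>If you also need the "No date found" behaviour from the question, a minimal sketch building on the same filter (and the same <code>df</code> and <code>date</code> as above) could look like this; the exact output format is up to you:</p>
<pre><code>mask = (df.Series_Date.dt.year - date.year) * 12 + df.Series_Date.dt.month - date.month == 3
matches = df.loc[mask, 'Series_Date']

if matches.empty:
    print 'No date found'
else:
    print matches
</code></pre>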
python|python-2.7|pandas
0
1,395
42,605,840
Automatic extraction of recent nine months data in python
<p>I have a data frame which consists of data aggregated over a certain time span, with 'date' as one of the columns. Every day, new data with exactly the same columns is added to this aggregated data. After appending the new daily data, I want to filter the aggregated data so that only the most recent nine months are kept.</p> <p>Suppose df_old is your aggregated data and the new data is df_new. Currently I am doing it like this:</p> <pre><code>#Append new data to old aggregated data with same columns df_old=df_old.append(df_new) df_old['date']=pd.to_datetime(df_old['date']) max_date=max(df_old['date']) df_old['date_diff']=(max_date - df_old['date']) ##Considering a calendar month has 30 days and three of the nine months have 31 days df_old.loc[df_old.date_diff.dt.days &lt;= 273] </code></pre> <p>Now I know the above method involves hard coding and is not efficient. I would appreciate it if someone could suggest a more automated way of doing this.</p>
<p>You can dynamically generate the cutoff date nine months back:</p> <pre><code>from datetime import date from dateutil.relativedelta import relativedelta nine_months_old = date.today() + relativedelta(months=-9) nine_months_old #datetime.date(2016, 6, 5) </code></pre> <p>Now use this value to filter your DataFrame:</p> <pre><code>df_old = df_old.append(df_new) df_old['date'] = pd.to_datetime(df_old['date']) result_df = df_old.loc[df_old['date'] &gt;= nine_months_old] </code></pre>
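<p>Alternatively, if you prefer to stay entirely within pandas and anchor the cutoff to the newest date in the data rather than to today, a rough sketch (assuming the same <code>df_old</code>/<code>df_new</code> as above) could be:</p>
<pre><code>import pandas as pd

df_old = df_old.append(df_new)
df_old['date'] = pd.to_datetime(df_old['date'])

# cutoff is nine calendar months before the most recent date in the data
cutoff = df_old['date'].max() - pd.DateOffset(months=9)
result_df = df_old.loc[df_old['date'] &gt;= cutoff]
</code></pre>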
python|pandas|datetime
2
1,396
42,955,746
Process to build our own model for image detection
<p>Currently, I am working on a deep neural network for image detection and I found a model called YOLO, which is very powerful for object detection, but I have a question:</p> <ul> <li>How can we design and conceive our own model? Do we use brute force for that, for example "I use 2 convolutional layers, 1 pooling layer and 1 fully connected layer", and then, if the result isn't good, change the number of layers and the parameters until I find the best model? If anyone knows more about this, please show me how.</li> </ul> <p>I use TensorFlow.</p> <p>Thanks.</p>
<p>There are a couple of papers addressing this issue. For example, in <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf" rel="nofollow noreferrer">http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf</a> some general principles are mentioned, like preserving information by not having too rapid changes in any cut of the graph separating the output from the input.</p> <p>Another paper is <a href="https://arxiv.org/pdf/1606.02228.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.02228.pdf</a>, where specific hyperparameter combinations are tried.</p> <p>The rest is what you observe in practice, and it depends on your dataset and your requirements. Maybe you have performance requirements because you want to deploy to mobile, or you need more than 90% accuracy. Then you will have to choose your model accordingly.</p>
tensorflow|deep-learning|object-detection
2
1,397
27,100,799
how to do 3d matrix to 3d multiplication in python without loops?
<p>I am doing the segmentation part of my project, where I need to apply a mask to every channel of a 3-dimensional color image.</p> <p>What I am doing now:</p> <pre><code>maskedFrame=np.zeros(rgbFrame.shape) maskedHsvFrame=np.zeros(rgbFrame.shape) for color in range(0,3): maskedFrame[:,:,color]=rgbFrame[:,:,color]*biscuitMask maskedHsvFrame[:,:,color]=hsvFrame[:,:,color]*biscuitMask </code></pre> <p>Is it possible to do the multiplication without any loops in Python?</p>
<p>Numpy can <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcast</a> <code>biscuitMask</code> along the color dimension to fit the shape of the frame, but since the mask is 2D and the frames are 3D, you first need to give the mask a trailing singleton axis (broadcasting aligns shapes from the right). Thus,</p> <pre><code>maskedFrame = rgbFrame * biscuitMask[:, :, None] maskedHsvFrame = hsvFrame * biscuitMask[:, :, None] </code></pre> <p>should work without any loops.</p> <p>PS,<br> If you are familiar with <a href="/questions/tagged/matlab" class="post-tag" title="show questions tagged 'matlab'" rel="tag">matlab</a>, broadcasting is very much like applying <a href="/questions/tagged/bsxfun" class="post-tag" title="show questions tagged 'bsxfun'" rel="tag">bsxfun</a>, only numpy does it automatically for you.</p>
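<p>A quick sanity check of the shapes involved (a sketch with made-up frame dimensions, not taken from the question):</p>
<pre><code>import numpy as np

rgbFrame = np.random.rand(480, 640, 3)              # (H, W, 3) color frame
biscuitMask = np.random.randint(0, 2, (480, 640))   # (H, W) binary mask

masked = rgbFrame * biscuitMask[:, :, None]         # (H, W, 3) * (H, W, 1) broadcasts
print(masked.shape)                                 # (480, 640, 3)
</code></pre>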
python|image-processing|numpy|image-segmentation
0
1,398
14,398,188
Numpy import in PyCharm yields "Cannot locate working compiler"
<p>I am new to Python and PyCharm. I installed PyCharm 2.6 (on Mac OS X) and tried to import NumPy for Python 3.3. The JetBrains support page tells me to install Cython, which also yields "Cannot locate working compiler".</p> <p>Which compiler do I need to install, and how?</p> <p>Thanks!</p>
<p>Check whether the <code>gcc</code> command works in a terminal; on Mac OS X it is provided by the Xcode Command Line Tools. If it's there, try setting the <code>CC</code> environment variable to <code>gcc</code>.</p>
compiler-construction|numpy|python-3.x|pycharm
0
1,399
25,275,009
Pandas Series.filter.values returning different type than numpy array
<p>I am trying to run the <code>scipy.stats.entropy</code> function on two arrays. It is being run on each row of a Pandas DataFrame via the apply function:</p> <pre><code>def calculate_H(row): pk = np.histogram(row.filter(regex='stuff'), bins=16)[0] qk = row.filter(regex='other').values return stats.entropy(pk, qk, base=2) df['DKL'] = df.apply(calculate_H, axis=1) </code></pre> <p>I am getting the following error: </p> <pre><code>TypeError: ufunc 'xlogy' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>(I've also tried <code>qk = row[row.filter(regex='other').index].values</code>)</p> <p>I know the issue is with the <code>qk</code>; I can pass another array as <code>qk</code> and it works. The issue is that Pandas is giving me something that says it is a numpy array but it is not quite a numpy array. The following examples all work:</p> <pre><code>qk1 = np.array([12024, 9643, 7681, 8193, 8012, 7846, 7615, 7484, 5966, 11484, 13627, 17749, 9820, 5336,4611, 3366]) qk2 = Series([12024, 9643, 7681, 8193, 8012, 7846, 7615, 7484, 5966, 11484, 13627, 17749, 9820, 5336,4611, 3366]).values qk3 = df.filter(regex='other').iloc[0].values </code></pre> <p>If I check the types, e.g. <code>type(qk) == type(qk1)</code>, it gives me True (all <code>numpy.ndarray</code>). Or if I use <code>np.array_equal</code>, also True.</p> <p>The ONLY hint I have is what happens when I print out the arrays that work vs the one that doesn't (not working on the bottom):</p> <pre><code>[12024 9643 7681 8193 8012 7846 7615 7484 5966 11484 13627 17749 9820 5336 4611 3366] [12024 9643 7681 8193 8012 7846 7615 7484 5966 11484 13627 17749 9820 5336 4611 3366] </code></pre> <p>Notice the one on top has larger spacing in between values.</p> <p><strong>TLDR</strong>; These two expressions return something different:</p> <pre><code>df.filter(regex='other').iloc[0].values df.iloc[0].filter(regex='other').values </code></pre>
<p>I suspect <code>qk</code> is an <code>object</code> array and not an array of integers. In <code>calculate_H</code>, try this:</p> <pre><code>qk = row.filter(regex='other').values.astype(int) </code></pre> <p>(i.e. cast the values to an array of integers).</p>
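<p>A small illustration of why the object dtype shows up in the first place (a sketch with made-up column names, not the actual data): a row pulled out of a mixed-dtype DataFrame becomes an <code>object</code>-dtype Series, so its <code>.values</code> is an <code>object</code> array, which is what trips up <code>scipy.stats.entropy</code>:</p>
<pre><code>import pandas as pd

# hypothetical frame mixing numeric 'other_*' columns with a string column
df = pd.DataFrame({'other_a': [12024], 'other_b': [9643], 'label': ['x']})

row = df.iloc[0]                                             # mixed dtypes, so an object Series
print(row.filter(regex='other').values.dtype)                # object
print(row.filter(regex='other').values.astype(int).dtype)    # a plain integer dtype, e.g. int64
</code></pre>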
python|numpy|pandas|scipy
2