Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, lengths 15 to 150) | question (string, lengths 37 to 64.2k) | answer (string, lengths 37 to 44.1k) | tags (string, lengths 5 to 106) | score (int64, -10 to 5.87k)
|---|---|---|---|---|---|---|
4,300
| 59,579,316
|
Can I use a str in np.mean()?
|
<pre><code>stdev = 3
value_1 = array
value_2 = array
value_3 = array
for h in range(1,4):
    name = ('value' + str(h))
    globals()['new_name_'+ str(h)] = np.mean(name) * stdev
</code></pre>
<p>It should give something like this: </p>
<pre><code>new_name_1 = #result1
new_name_2 = #result2
new_name_3 = #result3
</code></pre>
<p>However, <code>np.mean()</code> does not work with <code>str</code>.<br>
I tried to use unicode and other things.<br>
I have to get the results (the new names) by using <code>globals()</code>. Does somebody know how to do it?</p>
|
<p>You can use a list to keep your arrays:</p>
<pre><code>stdev = 3
my_list = [array_1, array_2, array_3]
new_vars = []
for arr in my_list:
    new_vars.append(np.mean(arr) * stdev)
</code></pre>
<p>Alternatively, you can keep your results in a dict:</p>
<pre><code>stdev = 3
my_dict = {
    'value_1': array_1,
    'value_2': array_2,
    'value_3': array_3}
new_vars = {}
for var_name, arr in my_dict.items():
    new_vars[f'new_{var_name}'] = np.mean(arr) * stdev
</code></pre>
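<p>For reference, a minimal runnable sketch of the dict approach with made-up arrays (the values below are only for illustration):</p>
<pre><code>import numpy as np

stdev = 3
arrays = {'value_1': np.array([1, 2, 3]),
          'value_2': np.array([4, 5, 6]),
          'value_3': np.array([7, 8, 9])}

# one dict comprehension instead of mutating globals()
new_vars = {f'new_{name}': np.mean(arr) * stdev for name, arr in arrays.items()}
print(new_vars)  # new_value_1 -> 6.0, new_value_2 -> 15.0, new_value_3 -> 24.0
</code></pre>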
|
python|string|numpy|for-loop|global-scope
| 0
|
4,301
| 59,887,851
|
"Error while extracting" from tensorflow datasets
|
<p>I want to train a tensorflow image segmentation model on COCO, and thought I would leverage the dataset builder already included. Download seems to be completed but <strong>it crashes on extracting the zip files.</strong></p>
<p>Running with TF 2.0.0 on a Jupyter Notebook under a conda environment. Computer is 64-bit Windows 10. The Oxford Pet III dataset used in the <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">official image segmentation tutorial</a> works fine.</p>
<p>Below is the error message (my local user name replaced with <code>%user%</code>).</p>
<pre><code>---------------------------------------------------------------------------
OutOfRangeError Traceback (most recent call last)
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\extractor.py in _sync_extract(self, from_path, method, to_path)
88 try:
---> 89 for path, handle in iter_archive(from_path, method):
90 path = tf.compat.as_text(path)
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\extractor.py in iter_zip(arch_f)
176 with _open_or_pass(arch_f) as fobj:
--> 177 z = zipfile.ZipFile(fobj)
178 for member in z.infolist():
~\.conda\envs\tf-tutorial\lib\zipfile.py in __init__(self, file, mode, compression, allowZip64)
1130 if mode == 'r':
-> 1131 self._RealGetContents()
1132 elif mode in ('w', 'x'):
~\.conda\envs\tf-tutorial\lib\zipfile.py in _RealGetContents(self)
1193 try:
-> 1194 endrec = _EndRecData(fp)
1195 except OSError:
~\.conda\envs\tf-tutorial\lib\zipfile.py in _EndRecData(fpin)
263 # Determine file size
--> 264 fpin.seek(0, 2)
265 filesize = fpin.tell()
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_core\python\util\deprecation.py in new_func(*args, **kwargs)
506 instructions)
--> 507 return func(*args, **kwargs)
508
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_core\python\lib\io\file_io.py in seek(self, offset, whence, position)
166 elif whence == 2:
--> 167 offset += self.size()
168 else:
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_core\python\lib\io\file_io.py in size(self)
101 """Returns the size of the file."""
--> 102 return stat(self.__name).length
103
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_core\python\lib\io\file_io.py in stat(filename)
726 """
--> 727 return stat_v2(filename)
728
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_core\python\lib\io\file_io.py in stat_v2(path)
743 file_statistics = pywrap_tensorflow.FileStatistics()
--> 744 pywrap_tensorflow.Stat(compat.as_bytes(path), file_statistics)
745 return file_statistics
OutOfRangeError: C:\Users\%user%\tensorflow_datasets\downloads\images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip; Unknown error
During handling of the above exception, another exception occurred:
ExtractError Traceback (most recent call last)
<ipython-input-27-887fa0198611> in <module>
1 cocoBuilder = tfds.builder('coco')
2 info = cocoBuilder.info
----> 3 cocoBuilder.download_and_prepare()
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\api_utils.py in disallow_positional_args_dec(fn, instance, args, kwargs)
50 _check_no_positional(fn, args, ismethod, allowed=allowed)
51 _check_required(fn, kwargs)
---> 52 return fn(*args, **kwargs)
53
54 return disallow_positional_args_dec(wrapped) # pylint: disable=no-value-for-parameter
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in download_and_prepare(self, download_dir, download_config)
285 self._download_and_prepare(
286 dl_manager=dl_manager,
--> 287 download_config=download_config)
288
289 # NOTE: If modifying the lines below to put additional information in
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in _download_and_prepare(self, dl_manager, download_config)
946 super(GeneratorBasedBuilder, self)._download_and_prepare(
947 dl_manager=dl_manager,
--> 948 max_examples_per_split=download_config.max_examples_per_split,
949 )
950
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in _download_and_prepare(self, dl_manager, **prepare_split_kwargs)
802 # Generating data for all splits
803 split_dict = splits_lib.SplitDict()
--> 804 for split_generator in self._split_generators(dl_manager):
805 if splits_lib.Split.ALL == split_generator.split_info.name:
806 raise ValueError(
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\image\coco.py in _split_generators(self, dl_manager)
237 root_url = 'http://images.cocodataset.org/'
238 extracted_paths = dl_manager.download_and_extract({
--> 239 key: root_url + url for key, url in urls.items()
240 })
241
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\download_manager.py in download_and_extract(self, url_or_urls)
357 with self._downloader.tqdm():
358 with self._extractor.tqdm():
--> 359 return _map_promise(self._download_extract, url_or_urls)
360
361 @property
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\download_manager.py in _map_promise(map_fn, all_inputs)
393 """Map the function into each element and resolve the promise."""
394 all_promises = utils.map_nested(map_fn, all_inputs) # Apply the function
--> 395 res = utils.map_nested(_wait_on_promise, all_promises)
396 return res
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
127 return {
128 k: map_nested(function, v, dict_only, map_tuple)
--> 129 for k, v in data_struct.items()
130 }
131 elif not dict_only:
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\utils\py_utils.py in <dictcomp>(.0)
127 return {
128 k: map_nested(function, v, dict_only, map_tuple)
--> 129 for k, v in data_struct.items()
130 }
131 elif not dict_only:
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
141 return tuple(mapped)
142 # Singleton
--> 143 return function(data_struct)
144
145
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\download_manager.py in _wait_on_promise(p)
377
378 def _wait_on_promise(p):
--> 379 return p.get()
380
381 else:
~\.conda\envs\tf-tutorial\lib\site-packages\promise\promise.py in get(self, timeout)
508 target = self._target()
509 self._wait(timeout or DEFAULT_TIMEOUT)
--> 510 return self._target_settled_value(_raise=True)
511
512 def _target_settled_value(self, _raise=False):
~\.conda\envs\tf-tutorial\lib\site-packages\promise\promise.py in _target_settled_value(self, _raise)
512 def _target_settled_value(self, _raise=False):
513 # type: (bool) -> Any
--> 514 return self._target()._settled_value(_raise)
515
516 _value = _reason = _target_settled_value
~\.conda\envs\tf-tutorial\lib\site-packages\promise\promise.py in _settled_value(self, _raise)
222 if _raise:
223 raise_val = self._fulfillment_handler0
--> 224 reraise(type(raise_val), raise_val, self._traceback)
225 return self._fulfillment_handler0
226
~\.conda\envs\tf-tutorial\lib\site-packages\six.py in reraise(tp, value, tb)
694 if value.__traceback__ is not tb:
695 raise value.with_traceback(tb)
--> 696 raise value
697 finally:
698 value = None
~\.conda\envs\tf-tutorial\lib\site-packages\promise\promise.py in handle_future_result(future)
840 # type: (Any) -> None
841 try:
--> 842 resolve(future.result())
843 except Exception as e:
844 tb = exc_info()[2]
~\.conda\envs\tf-tutorial\lib\concurrent\futures\_base.py in result(self, timeout)
423 raise CancelledError()
424 elif self._state == FINISHED:
--> 425 return self.__get_result()
426
427 self._condition.wait(timeout)
~\.conda\envs\tf-tutorial\lib\concurrent\futures\_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~\.conda\envs\tf-tutorial\lib\concurrent\futures\thread.py in run(self)
54
55 try:
---> 56 result = self.fn(*self.args, **self.kwargs)
57 except BaseException as exc:
58 self.future.set_exception(exc)
~\.conda\envs\tf-tutorial\lib\site-packages\tensorflow_datasets\core\download\extractor.py in _sync_extract(self, from_path, method, to_path)
92 except BaseException as err:
93 msg = 'Error while extracting %s to %s : %s' % (from_path, to_path, err)
---> 94 raise ExtractError(msg)
95 # `tf.io.gfile.Rename(overwrite=True)` doesn't work for non empty
96 # directories, so delete destination first, if it already exists.
ExtractError: Error while extracting C:\Users\%user%\tensorflow_datasets\downloads\images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip to C:\Users\%user%\tensorflow_datasets\downloads\extracted\ZIP.images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip : C:\Users\%user%\tensorflow_datasets\downloads\images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip; Unknown error
</code></pre>
<p>The message seems cryptic to me. The folder to which it is trying to extract does not exist when the notebook is started - it is created by Tensorflow, and only at that command line. I obviously tried deleting it completely and running it again, to no effect.</p>
<p>The code that leads to the error is (everything runs fine until the last line):</p>
<pre><code>import tensorflow as tf
from __future__ import absolute_import, division, print_function, unicode_literals
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
from IPython.display import clear_output
import matplotlib.pyplot as plt
dataset, info = tfds.load('coco', with_info=True)
</code></pre>
<p>I also tried breaking the last command down into assigning the <code>tfds.builder</code> object and then running <code>download_and_prepare</code>, and again got the same error.</p>
<p>There is enough space on disk - after the download there is still 50+ GB available, while the dataset is supposed to be 37 GB in its largest version (2014).</p>
|
<p>I had a similar problem with Windows 10 & COCO 2017. My solution was simple: extract the ZIP file manually to the folder path given in the error message.</p>
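<p>If you want to script that manual step, here is a rough sketch using only the standard library. The paths are the ones from the error message above (replace <code>%user%</code> with your user name); whether <code>tfds</code> then picks up the pre-extracted folder depends on your tensorflow-datasets version, so treat this as a workaround rather than a guaranteed fix.</p>
<pre><code>import zipfile

src = r'C:\Users\%user%\tensorflow_datasets\downloads\images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip'
dst = r'C:\Users\%user%\tensorflow_datasets\downloads\extracted\ZIP.images.cocodataset.org_zips_train20147eQIfmQL3bpVDgkOrnAQklNLVUtCsFrDPwMAuYSzF3U.zip'

# extract the archive to the folder tfds expects
with zipfile.ZipFile(src) as zf:
    zf.extractall(dst)
</code></pre>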
|
python|tensorflow|image-segmentation|tensorflow-datasets|mscoco
| 2
|
4,302
| 40,441,631
|
Get average counts per minute by hour
|
<p>I have a dataframe with a time stamp as the index and a column of labels</p>
<pre><code>df=DataFrame({'time':[ datetime(2015,11,2,4,41,10), datetime(2015,11,2,4,41,39), datetime(2015,11,2,4,41,47),
                      datetime(2015,11,2,4,41,59), datetime(2015,11,2,4,42,4),  datetime(2015,11,2,4,42,11),
                      datetime(2015,11,2,4,42,15), datetime(2015,11,2,4,42,30), datetime(2015,11,2,4,42,39),
                      datetime(2015,11,2,4,42,41), datetime(2015,11,2,5,2,9),   datetime(2015,11,2,5,2,10),
                      datetime(2015,11,2,5,2,16),  datetime(2015,11,2,5,2,29),  datetime(2015,11,2,5,2,51),
                      datetime(2015,11,2,5,9,1),   datetime(2015,11,2,5,9,21),  datetime(2015,11,2,5,9,31),
                      datetime(2015,11,2,5,9,40),  datetime(2015,11,2,5,9,55)],
             'Label':[2,0,0,0,1,0,0,1,1,1,1,3,0,0,3,0,1,0,1,1]}).set_index(['time'])
</code></pre>
<p>I want to get the average number of times that a label appears in a distinct minute
in a distinct hour.</p>
<p>For example, Label 0 appears 3 times in hour 4 in minute 41, 2 times in hour 4 in minute 42,<br>
2 times in hour 5 in minute 2, and 2 times in hour 5 in minute 9, so its average count per
minute in hour 4 is </p>
<pre><code>(2+3)/2=2.5
</code></pre>
<p>and its count per minute in hour 5 is </p>
<pre><code>(2+2)/2=2
</code></pre>
<p>The output I am looking for is </p>
<pre><code>Hour 1
Label avg
0 2.5
1 2
2 .5
3 0
Hour 2
Label avg
0 2
1 1.5
2 0
3 1
</code></pre>
<p>What I have so far is </p>
<pre><code>df['hour']=df.index.hour
hour_grp=df.groupby(['hour'], as_index=False)
</code></pre>
<p>then I can do something like </p>
<pre><code>res=[]
for key, value in hour_grp:
    res.append(value)
</code></pre>
<p>then group by minute </p>
<pre><code>res[0].groupby(pd.TimeGrouper('1Min'))['Label'].value_counts()
</code></pre>
<p>but this is where I'm stuck, not to mention it is not very efficient</p>
|
<p>Start by squeezing your DataFrame into a Series (after all, it only has one column):</p>
<pre><code>s = df.squeeze()
</code></pre>
<p>Compute how many times each label occurs by minute:</p>
<pre><code>counts_by_min = (s.resample('min')
                  .apply(lambda x: x.value_counts())
                  .unstack()
                  .fillna(0))
# 0 1 2 3
# time
# 2015-11-02 04:41:00 3.0 0.0 1.0 0.0
# 2015-11-02 04:42:00 2.0 4.0 0.0 0.0
# 2015-11-02 05:02:00 2.0 1.0 0.0 2.0
# 2015-11-02 05:09:00 2.0 3.0 0.0 0.0
</code></pre>
<p>Resample <code>counts_by_min</code> by hour to obtain the number of times each label occurs by hour:</p>
<pre><code>counts_by_hour = counts_by_min.resample('H').sum()
# 0 1 2 3
# time
# 2015-11-02 04:00:00 5.0 4.0 1.0 0.0
# 2015-11-02 05:00:00 4.0 4.0 0.0 2.0
</code></pre>
<p>Count the number of minutes each label occurs by hour:</p>
<pre><code>minutes_by_hour = counts_by_min.astype(bool).resample('H').sum()
# 0 1 2 3
# time
# 2015-11-02 04:00:00 2.0 1.0 1.0 0.0
# 2015-11-02 05:00:00 2.0 2.0 0.0 1.0
</code></pre>
<p>Divide the last two to get the result you want:</p>
<pre><code>avg_per_hour = counts_by_hour.div(minutes_by_hour).fillna(0)
# 0 1 2 3
# time
# 2015-11-02 04:00:00 2.5 4.0 1.0 0.0
# 2015-11-02 05:00:00 2.0 2.0 0.0 2.0
</code></pre>
|
python|pandas|group-by
| 1
|
4,303
| 40,432,081
|
Check if numpy array is masked or not
|
<p>Is there an easy way to check if a numpy array is masked or not? </p>
<p>Currently, I do the following to check if <code>marr</code> is masked or not:</p>
<pre><code>try:
    arr = marr.data
except:
    arr = marr
</code></pre>
|
<p>You can use the python function <code>isinstance</code> to check if an object is an instance of a class.</p>
<pre><code>>>> isinstance(np.ma.array(np.arange(10)),np.ma.MaskedArray)
True
>>> isinstance(np.arange(10),np.ma.MaskedArray)
False
</code></pre>
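<p>As a side note, <code>numpy.ma</code> also provides helper functions for this kind of check; a small sketch:</p>
<pre><code>import numpy as np

marr = np.ma.array(np.arange(10))
print(np.ma.isMaskedArray(marr))  # True  - the same isinstance check, as a function
print(np.ma.is_masked(marr))      # False - True only if some element is actually masked
</code></pre>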
|
python|numpy
| 9
|
4,304
| 40,663,543
|
theory behind array manipulation
|
<p>I'm about to be asked to implement array manipulation functionality somewhat akin to numpy (largish homogeneous arrays used to manipulate unpacked sequences of images and whatever our customers may derive from those images) for an in-house scripting language. Naturally I'd like to limit it to the smallest amount of functionality I can get away with. Is there some analogue of the structured program theorem for array manipulation? Given that I'll find it hard to predict how precisely it will be used by our customers (forget about controlling it), is there a better way than blindly copying some subset of numpy and then fixing what our customers will complain about? Using numpy is not an option, unfortunately, because there's a substantial amount of code written in our in-house scripting language.</p>
|
<p>Travis Oliphant's 2006 book might be a good start.</p>
<pre><code>Guide to NumPy - Complexity Sciences Center
</code></pre>
<p><a href="https://www.google.com/url?sa=t&source=web&rct=j&url=http://csc.ucdavis.edu/~chaos/courses/nlp/Software/NumPyBook.pdf&ved=0ahUKEwjzlrv67bDQAhVUzWMKHV-tDToQFggjMAE&usg=AFQjCNEXrF-GAQ7w3C_llajIhFEijEg-lA&sig2=lhLb4cVt_URgvTLo2AfM7Q" rel="nofollow">https://www.google.com/url?sa=t&source=web&rct=j&url=http://csc.ucdavis.edu/~chaos/courses/nlp/Software/NumPyBook.pdf&ved=0ahUKEwjzlrv67bDQAhVUzWMKHV-tDToQFggjMAE&usg=AFQjCNEXrF-GAQ7w3C_llajIhFEijEg-lA&sig2=lhLb4cVt_URgvTLo2AfM7Q</a></p>
<p>A 2nd edition has just been published.</p>
|
arrays|numpy
| 1
|
4,305
| 40,572,910
|
Most efficient method to combine pandas DataFrames which have the same column value
|
<p>For example, I have two dataframes which contain some identical sample names with different feature data. </p>
<p>I want to compare how many samples exist in both dataframes. </p>
<h3>data here</h3>
<p><a href="https://drive.google.com/file/d/0B7FE0kxAL8kQQlRFSmR6Q1RoNXM/view?usp=sharing" rel="nofollow noreferrer">df1</a>
<a href="https://drive.google.com/file/d/0B7FE0kxAL8kQa3JyOVJraHJQT0E/view?usp=sharing" rel="nofollow noreferrer">df2</a></p>
<p>A naive way to tackle this problem that I have thought about: </p>
<pre><code>hit = 0
for i in range(0,len(df1),1):
    for j in range(0,len(df2),1):
        if df1.Sample_name.iloc[i] == df2.Sample_name.iloc[j]:
            hit+=1
</code></pre>
<p>I think this loop procedure may waste a lot of time. Is there any simple technique to tackle this? </p>
<p>Besides, how can I extract the subset of each dataframe with identical sample_name and connect their feature data together into a new dataframe? </p>
<p>I have tried pd.concat(df1, df2, keys = 'Sample_name')</p>
|
<p>Here's a vectorized approach using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>NumPy broadcasting</code></a> to get <code>hit</code> value -</p>
<pre><code>np.count_nonzero(df1.Sample_name.values[:,None] == df2.Sample_name.values)
</code></pre>
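<p>For the second part of the question (extracting the rows with identical <code>Sample_name</code> and joining their features), a merge is the usual tool. A sketch with hypothetical stand-in data, since the real files are only linked:</p>
<pre><code>import pandas as pd

# dummy frames standing in for the linked df1/df2
df1 = pd.DataFrame({'Sample_name': ['a', 'b', 'c'], 'feat1': [1, 2, 3]})
df2 = pd.DataFrame({'Sample_name': ['b', 'c', 'd'], 'feat2': [10, 20, 30]})

hit = df1['Sample_name'].isin(df2['Sample_name']).sum()  # 2
common = pd.merge(df1, df2, on='Sample_name')            # features of 'b' and 'c' side by side
</code></pre>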
|
python|performance|python-2.7|pandas|join
| 2
|
4,306
| 18,241,359
|
"Error: setting an array element with a sequence"
|
<p>I am trying to convert Matlab code into Python, but I'm receiving an error when I prepend zeros to my array.</p>
<p>Matlab Code:</p>
<pre><code>N_bits=1e5;
a1=[0,1];
bits=a1(ceil(length(a1)*rand(1,N_bits)));
bits=[0 0 0 0 0 0 0 0 bits];
</code></pre>
<p>Python Code:</p>
<pre><code>a1=array([0,0,1])
N_bits=1e2
a2=arange(0,2,1)
## Transmitter ##
bits1=ceil(len(a2)*rand(N_bits))
bits=a1[array(bits1,dtype=int)]
bits=array([0,0,0,0,0,0,0,0, bits])
</code></pre>
<p>I get an error on the last line:</p>
<pre>
Error:
bits=array([0,0,0,0,0,0,0,0, bits])
ValueError: setting an array element with a sequence.
</pre>
|
<p>You want to join the list with the array, so try</p>
<pre><code>bits=concatenate(([0,0,0,0,0,0,0,0], bits))
</code></pre>
<p>where <code>concatenate()</code> is <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html#numpy.concatenate" rel="nofollow noreferrer"><code>numpy.concatenate()</code></a>. You can also use <code>zeros(8, dtype=int)</code> in place of the list of zeros (see <a href="https://stackoverflow.com/questions/18241359/error-setting-an-array-element-with-a-sequence"><code>numpy.zeros()</code></a>).</p>
<p>Unlike in Matlab, something like <code>[0,0,0,0,0,0,0,0, bits]</code> in Python creates a list with the initial zeros followed by an <em>embedded</em> list.</p>
<p>Matlab:</p>
<pre><code>>> x = [1,2,3]
x =
1 2 3
>> [0,0,x]
ans =
0 0 1 2 3
</code></pre>
<p>Python:</p>
<pre><code>>>> x = [1,2,3]
>>>
>>> [0,0,x]
[0, 0, [1, 2, 3]]
>>>
>>> [0,0] + x
[0, 0, 1, 2, 3]
</code></pre>
|
python|matlab|numpy
| 5
|
4,307
| 61,693,747
|
How to specify dt.week to use north american week number in dataframe pandas?
|
<p>How can I specify dt.week to use the North American week number?</p>
<p><em>2019 Calendar:</em></p>
<p><a href="https://i.stack.imgur.com/Ye5to.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ye5to.png" alt="enter image description here"></a></p>
<p>For example: <strong>Date -> 01.01.2020 is Week 1 of 2020 / 31.12.2019 is Week 53 of 2019</strong></p>
<pre><code>df['Week'] = df['Induction Date'].dt.week.astype(str).str.zfill(2)
</code></pre>
<p>Result:</p>
<pre><code> SENDER_NAME Induction Date InductionPlant Week
0 b'XXXXXXXXXXXX' 2020-01-03 b'XXXXXXXXXX' 01
8910353 b'XXXXXXXXXXXX' 2019-12-31 b'XXXXXXXXXXX' 01
</code></pre>
|
<p>You can do this:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
df['Week'] = df['Date'].dt.strftime('%U').astype(int) + 1
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code> SENDER_NAME Induction Date InductionPlant Week
0 0 b'XXXXXXXXXXXX' 2020-01-03 b'XXXXXXXXXX' 1
1 8910353 b'XXXXXXXXXXXX' 2019-12-31 b'XXXXXXXXXXX' 53
</code></pre>
|
python|pandas|dataframe
| 1
|
4,308
| 61,900,343
|
Python np.logical_and operands could not be broadcast together with shapes
|
<p>I have <code>Y_1</code> and <code>X_1</code>, both matrices of shape (506, 1). I'd like to know the <code>Y_1</code> values when the <code>X_1</code> values are greater than 5 and less than 6, like this:</p>
<pre><code>np.logical_and(Y_1[X_1 > 5], Y_1[X_1 < 6])
</code></pre>
<p>But I got this error.</p>
<blockquote>
<p>ValueError: operands could not be broadcast together with shapes (490,) (173,) </p>
</blockquote>
<p>This is the data for values in X_1 and Y_1:</p>
<pre><code>Values for Y_1
[[24. ]
[21.6]
[34.7]
[33.4]
[36.2]
[28.7]
[22.9]
[27.1]
[16.5]
[18.9]
[15. ]
[18.9]
[21.7]
[20.4]
[18.2]
[19.9]
[23.1]
[17.5]
[20.2]
[18.2]
[13.6]
[19.6]
[15.2]
[14.5]
[15.6]
[13.9]
[16.6]
[14.8]
[18.4]
[21. ]
[12.7]
[14.5]
[13.2]
[13.1]
[13.5]
[18.9]
[20. ]
[21. ]
[24.7]
[30.8]
[34.9]
[26.6]
[25.3]
[24.7]
[21.2]
[19.3]
[20. ]
[16.6]
[14.4]
[19.4]
[19.7]
[20.5]
[25. ]
[23.4]
[18.9]
[35.4]
[24.7]
[31.6]
[23.3]
[19.6]
[18.7]
[16. ]
[22.2]
[25. ]
[33. ]
[23.5]
[19.4]
[22. ]
[17.4]
[20.9]
[24.2]
[21.7]
[22.8]
[23.4]
[24.1]
[21.4]
[20. ]
[20.8]
[21.2]
[20.3]
[28. ]
[23.9]
[24.8]
[22.9]
[23.9]
[26.6]
[22.5]
[22.2]
[23.6]
[28.7]
[22.6]
[22. ]
[22.9]
[25. ]
[20.6]
[28.4]
[21.4]
[38.7]
[43.8]
[33.2]
[27.5]
[26.5]
[18.6]
[19.3]
[20.1]
[19.5]
[19.5]
[20.4]
[19.8]
[19.4]
[21.7]
[22.8]
[18.8]
[18.7]
[18.5]
[18.3]
[21.2]
[19.2]
[20.4]
[19.3]
[22. ]
[20.3]
[20.5]
[17.3]
[18.8]
[21.4]
[15.7]
[16.2]
[18. ]
[14.3]
[19.2]
[19.6]
[23. ]
[18.4]
[15.6]
[18.1]
[17.4]
[17.1]
[13.3]
[17.8]
[14. ]
[14.4]
[13.4]
[15.6]
[11.8]
[13.8]
[15.6]
[14.6]
[17.8]
[15.4]
[21.5]
[19.6]
[15.3]
[19.4]
[17. ]
[15.6]
[13.1]
[41.3]
[24.3]
[23.3]
[27. ]
[50. ]
[50. ]
[50. ]
[22.7]
[25. ]
[50. ]
[23.8]
[23.8]
[22.3]
[17.4]
[19.1]
[23.1]
[23.6]
[22.6]
[29.4]
[23.2]
[24.6]
[29.9]
[37.2]
[39.8]
[36.2]
[37.9]
[32.5]
[26.4]
[29.6]
[50. ]
[32. ]
[29.8]
[34.9]
[37. ]
[30.5]
[36.4]
[31.1]
[29.1]
[50. ]
[33.3]
[30.3]
[34.6]
[34.9]
[32.9]
[24.1]
[42.3]
[48.5]
[50. ]
[22.6]
[24.4]
[22.5]
[24.4]
[20. ]
[21.7]
[19.3]
[22.4]
[28.1]
[23.7]
[25. ]
[23.3]
[28.7]
[21.5]
[23. ]
[26.7]
[21.7]
[27.5]
[30.1]
[44.8]
[50. ]
[37.6]
[31.6]
[46.7]
[31.5]
[24.3]
[31.7]
[41.7]
[48.3]
[29. ]
[24. ]
[25.1]
[31.5]
[23.7]
[23.3]
[22. ]
[20.1]
[22.2]
[23.7]
[17.6]
[18.5]
[24.3]
[20.5]
[24.5]
[26.2]
[24.4]
[24.8]
[29.6]
[42.8]
[21.9]
[20.9]
[44. ]
[50. ]
[36. ]
[30.1]
[33.8]
[43.1]
[48.8]
[31. ]
[36.5]
[22.8]
[30.7]
[50. ]
[43.5]
[20.7]
[21.1]
[25.2]
[24.4]
[35.2]
[32.4]
[32. ]
[33.2]
[33.1]
[29.1]
[35.1]
[45.4]
[35.4]
[46. ]
[50. ]
[32.2]
[22. ]
[20.1]
[23.2]
[22.3]
[24.8]
[28.5]
[37.3]
[27.9]
[23.9]
[21.7]
[28.6]
[27.1]
[20.3]
[22.5]
[29. ]
[24.8]
[22. ]
[26.4]
[33.1]
[36.1]
[28.4]
[33.4]
[28.2]
[22.8]
[20.3]
[16.1]
[22.1]
[19.4]
[21.6]
[23.8]
[16.2]
[17.8]
[19.8]
[23.1]
[21. ]
[23.8]
[23.1]
[20.4]
[18.5]
[25. ]
[24.6]
[23. ]
[22.2]
[19.3]
[22.6]
[19.8]
[17.1]
[19.4]
[22.2]
[20.7]
[21.1]
[19.5]
[18.5]
[20.6]
[19. ]
[18.7]
[32.7]
[16.5]
[23.9]
[31.2]
[17.5]
[17.2]
[23.1]
[24.5]
[26.6]
[22.9]
[24.1]
[18.6]
[30.1]
[18.2]
[20.6]
[17.8]
[21.7]
[22.7]
[22.6]
[25. ]
[19.9]
[20.8]
[16.8]
[21.9]
[27.5]
[21.9]
[23.1]
[50. ]
[50. ]
[50. ]
[50. ]
[50. ]
[13.8]
[13.8]
[15. ]
[13.9]
[13.3]
[13.1]
[10.2]
[10.4]
[10.9]
[11.3]
[12.3]
[ 8.8]
[ 7.2]
[10.5]
[ 7.4]
[10.2]
[11.5]
[15.1]
[23.2]
[ 9.7]
[13.8]
[12.7]
[13.1]
[12.5]
[ 8.5]
[ 5. ]
[ 6.3]
[ 5.6]
[ 7.2]
[12.1]
[ 8.3]
[ 8.5]
[ 5. ]
[11.9]
[27.9]
[17.2]
[27.5]
[15. ]
[17.2]
[17.9]
[16.3]
[ 7. ]
[ 7.2]
[ 7.5]
[10.4]
[ 8.8]
[ 8.4]
[16.7]
[14.2]
[20.8]
[13.4]
[11.7]
[ 8.3]
[10.2]
[10.9]
[11. ]
[ 9.5]
[14.5]
[14.1]
[16.1]
[14.3]
[11.7]
[13.4]
[ 9.6]
[ 8.7]
[ 8.4]
[12.8]
[10.5]
[17.1]
[18.4]
[15.4]
[10.8]
[11.8]
[14.9]
[12.6]
[14.1]
[13. ]
[13.4]
[15.2]
[16.1]
[17.8]
[14.9]
[14.1]
[12.7]
[13.5]
[14.9]
[20. ]
[16.4]
[17.7]
[19.5]
[20.2]
[21.4]
[19.9]
[19. ]
[19.1]
[19.1]
[20.1]
[19.9]
[19.6]
[23.2]
[29.8]
[13.8]
[13.3]
[16.7]
[12. ]
[14.6]
[21.4]
[23. ]
[23.7]
[25. ]
[21.8]
[20.6]
[21.2]
[19.1]
[20.6]
[15.2]
[ 7. ]
[ 8.1]
[13.6]
[20.1]
[21.8]
[24.5]
[23.1]
[19.7]
[18.3]
[21.2]
[17.5]
[16.8]
[22.4]
[20.6]
[23.9]
[22. ]
[11.9]]
Values for X_1
[[6.575]
[6.421]
[7.185]
[6.998]
[7.147]
[6.43 ]
[6.012]
[6.172]
[5.631]
[6.004]
[6.377]
[6.009]
[5.889]
[5.949]
[6.096]
[5.834]
[5.935]
[5.99 ]
[5.456]
[5.727]
[5.57 ]
[5.965]
[6.142]
[5.813]
[5.924]
[5.599]
[5.813]
[6.047]
[6.495]
[6.674]
[5.713]
[6.072]
[5.95 ]
[5.701]
[6.096]
[5.933]
[5.841]
[5.85 ]
[5.966]
[6.595]
[7.024]
[6.77 ]
[6.169]
[6.211]
[6.069]
[5.682]
[5.786]
[6.03 ]
[5.399]
[5.602]
[5.963]
[6.115]
[6.511]
[5.998]
[5.888]
[7.249]
[6.383]
[6.816]
[6.145]
[5.927]
[5.741]
[5.966]
[6.456]
[6.762]
[7.104]
[6.29 ]
[5.787]
[5.878]
[5.594]
[5.885]
[6.417]
[5.961]
[6.065]
[6.245]
[6.273]
[6.286]
[6.279]
[6.14 ]
[6.232]
[5.874]
[6.727]
[6.619]
[6.302]
[6.167]
[6.389]
[6.63 ]
[6.015]
[6.121]
[7.007]
[7.079]
[6.417]
[6.405]
[6.442]
[6.211]
[6.249]
[6.625]
[6.163]
[8.069]
[7.82 ]
[7.416]
[6.727]
[6.781]
[6.405]
[6.137]
[6.167]
[5.851]
[5.836]
[6.127]
[6.474]
[6.229]
[6.195]
[6.715]
[5.913]
[6.092]
[6.254]
[5.928]
[6.176]
[6.021]
[5.872]
[5.731]
[5.87 ]
[6.004]
[5.961]
[5.856]
[5.879]
[5.986]
[5.613]
[5.693]
[6.431]
[5.637]
[6.458]
[6.326]
[6.372]
[5.822]
[5.757]
[6.335]
[5.942]
[6.454]
[5.857]
[6.151]
[6.174]
[5.019]
[5.403]
[5.468]
[4.903]
[6.13 ]
[5.628]
[4.926]
[5.186]
[5.597]
[6.122]
[5.404]
[5.012]
[5.709]
[6.129]
[6.152]
[5.272]
[6.943]
[6.066]
[6.51 ]
[6.25 ]
[7.489]
[7.802]
[8.375]
[5.854]
[6.101]
[7.929]
[5.877]
[6.319]
[6.402]
[5.875]
[5.88 ]
[5.572]
[6.416]
[5.859]
[6.546]
[6.02 ]
[6.315]
[6.86 ]
[6.98 ]
[7.765]
[6.144]
[7.155]
[6.563]
[5.604]
[6.153]
[7.831]
[6.782]
[6.556]
[7.185]
[6.951]
[6.739]
[7.178]
[6.8 ]
[6.604]
[7.875]
[7.287]
[7.107]
[7.274]
[6.975]
[7.135]
[6.162]
[7.61 ]
[7.853]
[8.034]
[5.891]
[6.326]
[5.783]
[6.064]
[5.344]
[5.96 ]
[5.404]
[5.807]
[6.375]
[5.412]
[6.182]
[5.888]
[6.642]
[5.951]
[6.373]
[6.951]
[6.164]
[6.879]
[6.618]
[8.266]
[8.725]
[8.04 ]
[7.163]
[7.686]
[6.552]
[5.981]
[7.412]
[8.337]
[8.247]
[6.726]
[6.086]
[6.631]
[7.358]
[6.481]
[6.606]
[6.897]
[6.095]
[6.358]
[6.393]
[5.593]
[5.605]
[6.108]
[6.226]
[6.433]
[6.718]
[6.487]
[6.438]
[6.957]
[8.259]
[6.108]
[5.876]
[7.454]
[8.704]
[7.333]
[6.842]
[7.203]
[7.52 ]
[8.398]
[7.327]
[7.206]
[5.56 ]
[7.014]
[8.297]
[7.47 ]
[5.92 ]
[5.856]
[6.24 ]
[6.538]
[7.691]
[6.758]
[6.854]
[7.267]
[6.826]
[6.482]
[6.812]
[7.82 ]
[6.968]
[7.645]
[7.923]
[7.088]
[6.453]
[6.23 ]
[6.209]
[6.315]
[6.565]
[6.861]
[7.148]
[6.63 ]
[6.127]
[6.009]
[6.678]
[6.549]
[5.79 ]
[6.345]
[7.041]
[6.871]
[6.59 ]
[6.495]
[6.982]
[7.236]
[6.616]
[7.42 ]
[6.849]
[6.635]
[5.972]
[4.973]
[6.122]
[6.023]
[6.266]
[6.567]
[5.705]
[5.914]
[5.782]
[6.382]
[6.113]
[6.426]
[6.376]
[6.041]
[5.708]
[6.415]
[6.431]
[6.312]
[6.083]
[5.868]
[6.333]
[6.144]
[5.706]
[6.031]
[6.316]
[6.31 ]
[6.037]
[5.869]
[5.895]
[6.059]
[5.985]
[5.968]
[7.241]
[6.54 ]
[6.696]
[6.874]
[6.014]
[5.898]
[6.516]
[6.635]
[6.939]
[6.49 ]
[6.579]
[5.884]
[6.728]
[5.663]
[5.936]
[6.212]
[6.395]
[6.127]
[6.112]
[6.398]
[6.251]
[5.362]
[5.803]
[8.78 ]
[3.561]
[4.963]
[3.863]
[4.97 ]
[6.683]
[7.016]
[6.216]
[5.875]
[4.906]
[4.138]
[7.313]
[6.649]
[6.794]
[6.38 ]
[6.223]
[6.968]
[6.545]
[5.536]
[5.52 ]
[4.368]
[5.277]
[4.652]
[5. ]
[4.88 ]
[5.39 ]
[5.713]
[6.051]
[5.036]
[6.193]
[5.887]
[6.471]
[6.405]
[5.747]
[5.453]
[5.852]
[5.987]
[6.343]
[6.404]
[5.349]
[5.531]
[5.683]
[4.138]
[5.608]
[5.617]
[6.852]
[5.757]
[6.657]
[4.628]
[5.155]
[4.519]
[6.434]
[6.782]
[5.304]
[5.957]
[6.824]
[6.411]
[6.006]
[5.648]
[6.103]
[5.565]
[5.896]
[5.837]
[6.202]
[6.193]
[6.38 ]
[6.348]
[6.833]
[6.425]
[6.436]
[6.208]
[6.629]
[6.461]
[6.152]
[5.935]
[5.627]
[5.818]
[6.406]
[6.219]
[6.485]
[5.854]
[6.459]
[6.341]
[6.251]
[6.185]
[6.417]
[6.749]
[6.655]
[6.297]
[7.393]
[6.728]
[6.525]
[5.976]
[5.936]
[6.301]
[6.081]
[6.701]
[6.376]
[6.317]
[6.513]
[6.209]
[5.759]
[5.952]
[6.003]
[5.926]
[5.713]
[6.167]
[6.229]
[6.437]
[6.98 ]
[5.427]
[6.162]
[6.484]
[5.304]
[6.185]
[6.229]
[6.242]
[6.75 ]
[7.061]
[5.762]
[5.871]
[6.312]
[6.114]
[5.905]
[5.454]
[5.414]
[5.093]
[5.983]
[5.983]
[5.707]
[5.926]
[5.67 ]
[5.39 ]
[5.794]
[6.019]
[5.569]
[6.027]
[6.593]
[6.12 ]
[6.976]
[6.794]
[6.03 ]]
</code></pre>
|
<p>You are using logical_and in the wrong way here. The arguments you are passing, namely <code>Y_1[X_1 > 5]</code> and <code>Y_1[X_1 < 6]</code>, have different shapes, as the error says; their shapes have to be the same. There are 490 elements of X_1 greater than 5 and 173 elements less than 6.</p>
<p>It should be <code>np.logical_and(X_1 > 5, X_1 < 6)</code>, applied over all the elements.</p>
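<p>Put together, a small sketch of selecting those Y values (with random stand-in data, since the question's arrays are only listed as text):</p>
<pre><code>import numpy as np

# shapes mirror the question: (506, 1) column vectors
X_1 = np.random.uniform(3, 9, size=(506, 1))
Y_1 = np.random.uniform(5, 50, size=(506, 1))

mask = np.logical_and(X_1 > 5, X_1 < 6)  # boolean mask, same shape as X_1
selected = Y_1[mask]                     # Y_1 values where 5 < X_1 < 6
</code></pre>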
|
python|numpy
| 0
|
4,309
| 61,780,569
|
How many addition operations are being performed by np.sum()?
|
<p>Let's say I have an array of shape <code>(1, 3, 4, 4)</code> and I apply <code>numpy.sum()</code> on it, reducing over <code>axes [2,3]</code>. Below is a code sample --</p>
<pre><code>import numpy as np
data = np.random.rand(1, 3, 4, 4)
res = np.sum(data, axis=(2,3), keepdims=True)
</code></pre>
<p>How many addition operations are being done by <code>np.sum()</code>?</p>
|
<pre><code>In [202]: data = np.arange(3*4*4).reshape(1,3,4,4)
</code></pre>
<p>do your sum:</p>
<pre><code>In [203]: res = np.sum(data, axis=(2,3), keepdims=True)
In [204]: res
Out[204]:
array([[[[120]],

        [[376]],

        [[632]]]])
In [205]: res.shape
Out[205]: (1, 3, 1, 1)
</code></pre>
<p>to produce each of the 3 sums:</p>
<pre><code>In [207]: for i in range(3):
...: print(data[0,i].sum())
...:
120
376
632
</code></pre>
<p>And in a more detailed simulation (for one of those 3):</p>
<pre><code>In [208]: tot=0
...: for i in range(4):
...: for j in range(4):
...: tot += data[0,0,i,j]
...:
In [209]: tot
Out[209]: 120
</code></pre>
<p>I'll let you count the <code>+=</code>.</p>
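<p>If you want an actual number, a sketch that counts the <code>+=</code> operations in the naive simulation above (this counts the plain loop, not what NumPy does internally, which may use pairwise summation):</p>
<pre><code>import numpy as np

data = np.arange(3*4*4).reshape(1, 3, 4, 4)

adds = 0
for k in range(3):              # one reduction per slice along axis 1
    tot = 0
    for i in range(4):
        for j in range(4):
            tot += data[0, k, i, j]
            adds += 1
print(adds)  # 48 `+=` operations; 45 additions if each `tot` is seeded with the first element
</code></pre>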
|
python|numpy
| 1
|
4,310
| 61,653,697
|
Pandas str.replace() with regex
|
<p>Say I have this dataframe:</p>
<pre><code>df = pd.DataFrame({'Col': ['DDJFHGBC', 'AWDGUYABC']})
</code></pre>
<p>And I want to replace everything ending with <code>ABC</code> with <code>ABC</code> and everything ending with <code>BC</code> (except the <code>ABC</code>-cases) with <code>BC</code>. The output would look like:</p>
<pre><code> Col
0 BC
1 ABC
</code></pre>
<p>How can I achieve this using regular expressions? I've tried things like:</p>
<pre><code>df.Col.str.replace(r'\w*BC\b', 'BC')
df.Col.str.replace(r'\w*ABC\b', 'ABC')
</code></pre>
<p>But obviously these two lines are conflicting and I would end up with just <code>BC</code> in whichever order I use them.</p>
|
<p>You could match as few word chars as possible using <code>\w*?</code> and then capture in group 1 an optional A followed by BC, <code>(A?BC)</code>, followed by a word boundary.</p>
<pre><code>\w*?(A?BC)\b
</code></pre>
<p><a href="https://regex101.com/r/2EaKua/1" rel="nofollow noreferrer">Regex demo</a></p>
<p>In the replacement, use group 1:</p>
<pre><code>df.Col.str.replace(r'\w*?(A?BC)\b', r'\1')
</code></pre>
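<p>A quick end-to-end check with the frame from the question (passing <code>regex=True</code> explicitly, since newer pandas versions no longer treat the pattern as a regex by default):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Col': ['DDJFHGBC', 'AWDGUYABC']})
df['Col'] = df['Col'].str.replace(r'\w*?(A?BC)\b', r'\1', regex=True)
print(df)
#    Col
# 0   BC
# 1  ABC
</code></pre>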
|
python|regex|string|pandas
| 3
|
4,311
| 61,838,277
|
Faster Outer-Product-Like find closest lat long with function pandas
|
<p>I have a pandas dataframe A, with latitude longitudes.</p>
<pre><code>import pandas as pd
df_a = pd.DataFrame([['b',1.591797,103.857887],
                     ['c',1.589416, 103.865322]],
                     columns = ['place','lat','lng'])
</code></pre>
<p>I have another dataframe of locations B, also with latitude longitudes.</p>
<pre><code>df_b = pd.DataFrame([['ref1',1.594832, 103.853703],
                     ['ref1',1.589749, 103.864678]],
                     columns = ['place','lat','lng'])
</code></pre>
<p>For <strong>every row</strong> in A, I want to find the <strong>closest</strong> matching row in B (subject to a distance limit).
I already have a function that calculates the distance between two GPS pairs.</p>
<p><strong>intended output</strong></p>
<pre><code># a list where each row is the corresponding closest index in B
In [13]: min_index_arr
Out[13]: [0, 1]
</code></pre>
<p>One way of doing this is:</p>
<pre><code>from math import radians, sin, cos, asin, sqrt
import operator

def haversine(pair1, pair2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)
    """
    lon1, lat1 = pair1
    lon2, lat2 = pair2
    # convert decimal degrees to radians
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    # haversine formula
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
    c = 2 * asin(sqrt(a))
    r = 6371  # Radius of earth in kilometers. Use 3956 for miles
    return c * r

min_vals = []
for i in df_a.index:
    pair1 = df_a['lat'][i], df_a['lng'][i]
    dist_array = []
    for j in df_b.index:
        pair2 = df_b['lat'][j], df_b['lng'][j]
        dist = haversine(pair1, pair2)
        dist_array.append(dist)
    min_index, min_value = min(enumerate(dist_array), key=operator.itemgetter(1))
    min_vals.append(min_index)
</code></pre>
<p>But I'm sure there's a faster way to do this; it seems very similar to an outer product, except not a product, and using a function. Does anyone know how?</p>
|
<p>Using an approach from <a href="https://stackoverflow.com/questions/10549402/kdtree-for-longitude-latitude">KDTree for longitude/latitude</a></p>
<p>Based upon <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html" rel="nofollow noreferrer">sklearn.balltree</a></p>
<p><strong>Code</strong></p>
<pre><code>import numpy as np
from sklearn.neighbors import BallTree

# Setup Balltree using df_b as reference dataset
bt = BallTree(np.deg2rad(df_b[['lat', 'lng']].values), metric='haversine')

# Setup distance queries
query_lats = df_a['lat']
query_lons = df_a['lng']

# Find closest city in reference dataset for each city in a
distances, indices = bt.query(np.deg2rad(np.c_[query_lats, query_lons]))

# Result
r_km = 6371
for p, d, i in zip(df_a['place'][:], distances.flatten(), indices.flatten()):
    print(f"Place {p} closest to {df_b['place'][i]} with distance {d*r_km:.4f} km")
<p><strong>Output</strong></p>
<pre><code>Place b closest to ref1 with distance 0.5746 km
Place c closest to ref2 with distance 0.0806 km
</code></pre>
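<p>To get exactly the <code>min_index_arr</code> the question asked for, using the variables from the snippet above:</p>
<pre><code>min_index_arr = indices.flatten().tolist()            # e.g. [0, 1]
min_dist_km = (distances.flatten() * r_km).tolist()   # matching distances in km
</code></pre>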
|
python|pandas
| 1
|
4,312
| 57,982,376
|
Pytorch tensor multiplication with Float tensor giving wrong answer
|
<p>I am seeing some strange behavior when I multiply two pytorch tensors.</p>
<pre class="lang-py prettyprint-override"><code>x = torch.tensor([99397544.0])
y = torch.tensor([0.1])
x * y
</code></pre>
<p>This outputs </p>
<pre><code>tensor([9939755.])
</code></pre>
<p>However, the answer should be <code>9939754.4</code></p>
|
<p>By default, the tensor dtype is <code>torch.float32</code> in pytorch. Changing it to <code>torch.float64</code> will give the right result. </p>
<pre class="lang-py prettyprint-override"><code>x = torch.tensor([99397544.0], dtype=torch.float64)
y = torch.tensor([0.1], dtype=torch.float64)
x * y
# tensor([9939754.4000])
</code></pre>
<p>The mismatched result for <code>torch.float32</code> is caused by rounding error: the type does not have enough precision to represent the exact value. </p>
<p><a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="nofollow noreferrer">What Every Computer Scientist Should Know About Floating-Point Arithmetic</a></p>
|
pytorch
| 2
|
4,313
| 58,095,242
|
Numpy: Understanding Array to the power of Array
|
<p>I was reviewing operators with numpy arrays and I found something I did not expect and that I do not know how to interpret.</p>
<p>The operation I am performing is an array A to the power of an array B, clearly with A and B having the same shape. The behaviour I am expecting is 'element-wise power': each element of A to the power of the corresponding elements in B. However, it seems that something else is going on.</p>
<pre><code>import numpy as np
values = list(range(1, 10))
array_1d = np.array(values)
print(array_1d ** (array_1d * 5))
</code></pre>
<p>So I'm making <code>[1, 2, 3, 4, 5, 6, 7, 8, 9]</code> to the power of <code>[5, 10, 15, 20, ...]</code>.
The expected outcome (in my head) is the equivalent of <code>[v ** (v*5) for v in list(range(1,10))]</code>, that is:</p>
<pre><code> [1,
1024,
14348907,
1099511627776,
298023223876953125,
221073919720733357899776,
378818692265664781682717625943,
1329227995784915872903807060280344576,
8727963568087712425891397479476727340041449].
</code></pre>
<p>However the output is:</p>
<pre><code>array([ 1, 1024, 14348907, 0, 167814181, 1073741824, 613813847, 0, -1054898967], dtype=int32)
</code></pre>
<p>Does someone know the reason for this result? What's really happening here?
Thank you!</p>
|
<p><code>dtype=int32</code>: you're overflowing 32-bit integers and wrapping around:</p>
<pre><code>1099511627776 % (2**32) == 0
298023223876953125 % (2**32) == 167814181
...
</code></pre>
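<p>If you need the exact values, one slow-but-correct sketch is to let NumPy hold Python integers (which have arbitrary precision) via an object dtype:</p>
<pre><code>import numpy as np

values = np.array(range(1, 10), dtype=object)  # Python ints, no 32/64-bit limit
print(values ** (values * 5))
# [1 1024 14348907 1099511627776 298023223876953125 ...]
</code></pre>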
|
python|arrays|numpy|matrix
| 1
|
4,314
| 58,089,395
|
Usage of str.contains() applied to pandas data frame
|
<p>I am new to Python and Jupyter Notebook and I am currently following this tutorial: <a href="https://www.dataquest.io/blog/jupyter-notebook-tutorial/" rel="nofollow noreferrer">https://www.dataquest.io/blog/jupyter-notebook-tutorial/</a>. So far I've imported the pandas library and a couple other things, and I've made a data frame 'df' which is just a CSV file of company profit and revenue data. I'm having trouble understanding the following line of the tutorial:</p>
<pre><code>non_numberic_profits = df.profit.str.contains('[^0-9.-]')
</code></pre>
<p>I understand the point of what the tutorial is doing: identifying all the companies whose profit variable contains a string instead of a number. But I don't understand the point of [^0-9.-] and how the above function actually works.</p>
<p>My full code is below. Thanks.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
df = pd.read_csv('fortune500.csv')
df.columns = ['year', 'rank', 'company', 'revenue', 'profit']
non_numberic_profits = df.profit.str.contains('[^0-9.-]')
df.loc[non_numberic_profits].head()
</code></pre>
|
<p>The expression <code>[^0-9.-]</code> is a so-called <em>regular expression</em>, which is a special text string for describing a search pattern. With regular expressions (or in short '<em>RegEx</em>') you can extract specific parts of a string. For example, you can extract <code>foo</code> from the string <code>123foo456</code>.</p>
<p>In RegEx, when using <code>[]</code> you define a range of characters that has to be matched. For example, <code>[bac]</code> matches <code>abc</code> in the string <code>abcdefg</code>. <code>[bac]</code> could also be rewritten as <code>[a-c]</code>.</p>
<p>Using <code>[^]</code> you can negate a character range. Thus, the RegEx <code>[^a-c]</code> applied to the above example would match <code>defg</code>.</p>
<p>Now here is a catch:<br>
Since <code>^</code> and <code>-</code> have a special meaning when used in regular expressions, they have to be put in specific positions within <code>[]</code> in order to be matched literally. Specifically, if you want to match <code>-</code> literally and you want to exclude it from the character range, you have to put it <strong>at the rightmost end</strong> of <code>[]</code>, for example <code>[abc-]</code>.</p>
<p><strong>Putting it all together</strong><br>
The RegEx <code>'[^0-9.-]'</code> means: 'Match all substrings that do <strong>not</strong> contain the digits 0 through 9, a dot (<code>.</code>) or a dash (<code>-</code>)'. You can see your regular expression applied to some example strings <a href="https://regex101.com/r/MiII1E/2" rel="nofollow noreferrer">here</a>.</p>
<p>The pandas function <code>df.profit.str.contains('[^0-9.-]')</code> checks whether the strings in the <code>profit</code> column of your DataFrame match this RegEx and returns <code>True</code> if they do and <code>False</code> if they don't. The result is a pandas <code>Series</code> containing the resulting <code>True</code>/<code>False</code> values.</p>
<hr>
<p>If you're ever stuck, the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer">Pandas docs</a> are your friend. Stack Overflow's <a href="https://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean">What Does this Regex Mean?</a> and <a href="https://regex101.com/r/lRW6oG/1" rel="nofollow noreferrer">Regex 101</a> are also good places to start.</p>
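<p>A tiny self-contained illustration of what that call returns (the profit strings below are made up):</p>
<pre><code>import pandas as pd

s = pd.Series(['123.4', '-5.6', 'N.A.', '12x'])
print(s.str.contains('[^0-9.-]'))
# 0    False
# 1    False
# 2     True
# 3     True
# dtype: bool
</code></pre>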
|
python|string|pandas|jupyter
| 3
|
4,315
| 58,152,368
|
Graph a multi index dataframe in pandas
|
<p>I have a multi-indexed file with values such as these:</p>
<p><a href="https://i.stack.imgur.com/5OjXc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5OjXc.png" alt=""></a></p>
<p>How could I plot a dataframe that has separate lines for each symbol in the same graph?</p>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> with the unique <code>symbol</code> values as columns:</p>
<pre><code>df1 = df.pivot(index='4.timestamp', columns='1.symbol', values='2.price')
</code></pre>
<p>If duplicates are possible, it is necessary to aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a>:</p>
<pre><code>df1 = df.pivot_table(index='4.timestamp', columns='1.symbol', values='2.price', aggfunc='mean')
</code></pre>
<p>and then plot by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>DataFrame.plot</code></a>:</p>
<pre><code>df1.plot()
</code></pre>
|
python|pandas
| 1
|
4,316
| 57,886,871
|
Variables are same for all Epochs
|
<p>I am experimenting with an image classifier using a CNN with Keras.</p>
<p>My code -</p>
<pre><code>model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(224, 224, 3), activation="relu"))
model.add(Conv2D(32, (3, 3), padding='same', activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(5, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer='rmsprop',
    metrics=['accuracy']
)
model.summary()
check=ModelCheckpoint(filepath=r'C:\Users\softloft\AppData\Local\Programs\Python\Python37\Scripts\Untitled Folder\i_models.hdf5', verbose=1, save_best_only = True)
history=model.fit(
    x_train,
    y_train,
    batch_size=100,
    epochs=30,
    validation_data=(x_test, y_test),
    shuffle=True,
    callbacks=[check],
)
</code></pre>
<p>I run this code with one CNN layer and it gives val_acc of about 71.</p>
<p>Then, I add another CNN layer (which is the above code) and it gives this result:</p>
<pre><code>Train on 3242 samples, validate on 1081 samples
Epoch 1/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.0773 - acc: 0.1771 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00001: val_loss improved from inf to 13.82190, saving model to i_models.hdf5
Epoch 2/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00002: val_loss did not improve from 13.82190
Epoch 3/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00003: val_loss did not improve from 13.82190
Epoch 4/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00004: val_loss did not improve from 13.82190
Epoch 5/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00005: val_loss did not improve from 13.82190
Epoch 6/30
3242/3242 [==============================] - 234s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00006: val_loss did not improve from 13.82190
Epoch 7/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00007: val_loss did not improve from 13.82190
Epoch 8/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00008: val_loss did not improve from 13.82190
Epoch 9/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00009: val_loss did not improve from 13.82190
Epoch 10/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00010: val_loss did not improve from 13.82190
Epoch 11/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00011: val_loss did not improve from 13.82190
Epoch 12/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00012: val_loss did not improve from 13.82190
Epoch 13/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00013: val_loss did not improve from 13.82190
Epoch 14/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00014: val_loss did not improve from 13.82190
Epoch 15/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00015: val_loss did not improve from 13.82190
Epoch 16/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00016: val_loss did not improve from 13.82190
Epoch 17/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00017: val_loss did not improve from 13.82190
Epoch 18/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00018: val_loss did not improve from 13.82190
Epoch 19/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00019: val_loss did not improve from 13.82190
Epoch 20/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00020: val_loss did not improve from 13.82190
Epoch 21/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00021: val_loss did not improve from 13.82190
Epoch 22/30
3242/3242 [==============================] - 235s 72ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00022: val_loss did not improve from 13.82190
Epoch 23/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00023: val_loss did not improve from 13.82190
Epoch 24/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00024: val_loss did not improve from 13.82190
Epoch 25/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00025: val_loss did not improve from 13.82190
Epoch 26/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00026: val_loss did not improve from 13.82190
Epoch 27/30
3242/3242 [==============================] - 235s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00027: val_loss did not improve from 13.82190
Epoch 28/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00028: val_loss did not improve from 13.82190
Epoch 29/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00029: val_loss did not improve from 13.82190
Epoch 30/30
3242/3242 [==============================] - 236s 73ms/step - loss: 13.2345 - acc: 0.1789 - val_loss: 13.8219 - val_acc: 0.1425
Epoch 00030: val_loss did not improve from 13.82190
</code></pre>
<p>What is happening, and what should I do?</p>
<p>Why are the loss and accuracy the same for all epochs?</p>
<p>Thank you.</p>
|
<p>Try changing this</p>
<pre><code>model.compile(loss = "categorical_crossentropy", optimizer = "rmsprop")
</code></pre>
<p>to </p>
<pre><code>model.compile(loss = "categorical_crossentropy", optimizer = 'adam')
</code></pre>
|
python|tensorflow|keras|neural-network|conv-neural-network
| 1
|
4,317
| 34,178,751
|
Extract unique values and number of occurrences of each value from dataframe column
|
<p>I'm trying to extract the number of each unique entry from one dataframe column and store it as a new dataframe, something like this:</p>
<p><strong>Input</strong></p>
<pre><code>sample_name
sample1
sample2
sample2
sample3
sample3
sample3
</code></pre>
<p><strong>Desired output</strong></p>
<pre><code>sample_name count
sample1 1
sample2 2
sample3 3
</code></pre>
<p><strong>Edit</strong>
I'm guessing this is getting downvoted for not showing what I've tried, so for other users who might find themselves in the same situation, here's where I stagnated:</p>
<p>Given the input-dataframe, I was able to extract the unique entries:</p>
<pre><code>input_df["sample_name"].unique() # ['sample1', 'sample2', 'sample3']
</code></pre>
<p>And the number of occurrences (not per unique entry):</p>
<pre><code>input_df.groupby("sample_name")["sample_name"].transform("count")
</code></pre>
<p>which outputs</p>
<pre><code>0 1
1 2
2 2
3 3
4 3
5 3
</code></pre>
<p>What I didn't figure out was how to extract the counts per unique entry.</p>
|
<p>You want <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="noreferrer"><code>value_counts</code></a>:</p>
<pre><code>In [142]:
df['sample_name'].value_counts()
Out[142]:
sample3 3
sample2 2
sample1 1
Name: sample_name, dtype: int64
</code></pre>
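<p>If you also want the result back in the two-column shape shown in the question, a sketch:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'sample_name': ['sample1', 'sample2', 'sample2',
                                   'sample3', 'sample3', 'sample3']})

counts = (df['sample_name'].value_counts()
            .rename_axis('sample_name')
            .reset_index(name='count')
            .sort_values('sample_name'))
#   sample_name  count
# 2     sample1      1
# 1     sample2      2
# 0     sample3      3
</code></pre>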
|
python|pandas
| 5
|
4,318
| 37,091,953
|
array manipulation from MATLAB to Python
|
<p>from MATLAB code:</p>
<pre><code>a = rand(1,120);
d=zeros(1,124);
state=[1:120];
fibre = [1 5 9 13 17 2 6 10 14 18 3 7 11 15 19 4 8 12 16 20 21 69 65 61 57 22 68 64 60 56 71 67 63 59 55 70 66 62 58 54 53 49 45 41 37 52 48 44 40 36 51 47 43 39 35 50 46 42 38 34 121 117 113 109 105 120 116 112 108 104 119 115 111 107 103 118 114 110 106 102 101 97 93 89 85 100 96 92 88 84 99 95 91 87 83 98 94 90 86 82 81 79 77 75 33 80 27 29 31 122 25 78 76 74 123 26 28 30 32 124];
d(fibre)=a(state);
</code></pre>
<p>to Python code:</p>
<pre><code>import numpy as np
a = np.arange(120,219,1)
d=np.zeros([124])
state = np.arange(0,120,1)
fibre = np.array([1,5,9,13,17,2,6,10,14,18,3,7,11,15,19,4,8,12,16,20,21,69,65,61,57,22,68,64,60,56,71,67,63,59,55,70,66,62,58,54,53,49,45,41,37,52,48,44,40,36,51,47,43,39,35,50,46,42,38,34,121,117,113,109,105,120,116,112,108,104,119,115,111,107,103,118,114,110,106,102,101,97,93,89,85,100,96,92,88,84,99,95,91,87,83,98,94,90,86,82,81,79,77,75,33,80,27,29,31,122,25,78,76,74,123,26,28,30,32,124,72,73,23,24])
d[fibre]=a[state]
</code></pre>
<p>The Python code throws an exception regarding the array size. Any recommendations on how to fix this?</p>
|
<p>Your Python script has two problems compared to the Matlab code.
First, in the second line you should generate random numbers, as in Matlab:</p>
<pre><code>a = np.random.rand(120)
</code></pre>
<p>Second, in the last line, as mentioned in the comments, you should know that indexing in Matlab starts at 1 while Python starts at 0, so your last line should be:</p>
<pre><code>d[fibre-1]=a[state]
</code></pre>
|
python|arrays|matlab|numpy
| 1
|
4,319
| 55,098,643
|
list augmentation in python with numpy or other library
|
<p>I would like to augment a list from</p>
<pre><code>[1, 2, 3, 4, 5]
</code></pre>
<p>to</p>
<pre><code>[1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
</code></pre>
<p>If I want to augment it likewise n times (like 100 or 500 times), how can I do it? I do not want to do it with a regular loop, but using some library like numpy. Any help?</p>
<p>Many thanks.</p>
|
<p>You can do this with numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>np.repeat</code></a>:</p>
<pre><code>import numpy as np
a = np.array([1, 2, 3, 4, 5])
np.repeat(a,2)
# array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
</code></pre>
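<p>For the 100x / 500x case from the question, only the repeat count changes, and <code>.tolist()</code> gets you back to a plain list if that is what you need:</p>
<pre><code>np.repeat(a, 100)            # each element repeated 100 times
np.repeat(a, 100).tolist()   # back to a plain Python list
</code></pre>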
|
python|list|numpy|data-augmentation
| 2
|
4,320
| 49,626,150
|
Python list-dictionary algorithm with big data
|
<p>I'm trying to make a program using two Python dictionaries:
multiply the dic1 and dic2 values if a key is common to dic1 and dic2, otherwise 0.
The key order and the length of the output list are the same as those of dic1.</p>
<pre><code>dic1 = {'foo': 100,'bar': 200,'baz': 300,'qux': 400,'quux': 500}
dic2 = {'foo': 1,'quux': 2}
# output [100, 0, 0, 0, 1000]
</code></pre>
<p>Of course I can do it with the code below.</p>
<pre><code>output = []
for k,v in dic1.items():
    if k in dic2:
        output.append(v*dic2[k])
    else:
        output.append(0)
print(output)
</code></pre>
<p>But the length of the dictionary is 1K-10K, so I cannot use a for loop because of the speed problem.
Does someone know a way to solve this?
Thanks.</p>
|
<p>Hmm, I don't think there is much you can do. Where is this data coming from? If it's a csv or something then a <code>pandas</code> solution would probably be quicker. If they must be <code>dict</code>s then I think the best thing I can think of is to change it to a comprehension</p>
<pre><code>output = [v * dic2[k] if k in dic2 else 0 for k, v in dic1.items()]
</code></pre>
<p>which removes the relatively expensive <code>list.append</code> call.</p>
<p>Some timings:</p>
<pre><code>import numpy as np # for random generation
dic1 = {k: k for k in np.random.random(10000)}
dic2 = {k: k for k in np.random.choice(list(dic1), 1000)}
def f1():
    output = []
    for k, v in dic1.items():
        if k in dic2:
            output.append(v*dic2[k])
        else:
            output.append(0)

def f2():
    output = [v * dic2[k] if k in dic2 else 0 for k, v in dic1.items()]

def f3():
    output = [v * dic2.get(k, 0) for k, v in dic1.items()]

%timeit f1()
2.44 ms ± 12.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit f2()
1.66 ms ± 14.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit f3()
2.61 ms ± 59.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
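<p>Since the answer mentions pandas might be quicker, here is a rough sketch of that route on the question's small example (worth benchmarking on your real data; converting the dicts to Series has its own cost):</p>
<pre><code>import pandas as pd

dic1 = {'foo': 100, 'bar': 200, 'baz': 300, 'qux': 400, 'quux': 500}
dic2 = {'foo': 1, 'quux': 2}

s2 = pd.Series(dic2).reindex(list(dic1), fill_value=0)  # align to dic1's key order
output = (pd.Series(dic1) * s2).tolist()
print(output)  # [100, 0, 0, 0, 1000]
</code></pre>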
|
python|python-3.x|list|numpy|python-3.6
| 4
|
4,321
| 73,189,955
|
Serializing complex object containing multiple nested objects with data frames
|
<p>Below should be a runnable sample of code. I have a Chart1 object which can contain many panes, and each pane can contain many series. I would like to serialize this to JSON so I can send it to a Flask application to render. To deal with the dataframes, I am using a custom encoder (ChartEncoder below):</p>
<pre><code>from abc import ABC, abstractmethod
from datetime import datetime
import pandas as pd
import copy
import json
from json import JSONEncoder
import pandas_datareader.data as reader
import datetime as dt


class Series1(object):
    def __init__(self, data):
        self.data = data


class Pane1(object):
    def __init__(self, series: Series1 = None, rel_height=None):
        self.series = [] if series is None else series
        self.rel_height = rel_height


class Chart1(ABC):
    def __init__(self, show_volume=True, *args, **kwargs):
        self.show_volume = show_volume
        self.panes = [Pane1()]
        self.symbol = None
        self.interval = None

    def to_json(self):
        obj = copy.copy(self)
        obj.data = None
        jsn = json.dumps(obj, cls=ChartEncoder)
        return jsn


class ChartEncoder(JSONEncoder):
    def default(self, obj):
        if type(obj) is pd.DataFrame:
            return obj.reset_index().to_json(orient="records", date_format='iso')
        elif type(obj) is pd.Series:
            df = pd.DataFrame(obj)
            return df.reset_index().to_json(orient="records", date_format='iso')
        elif hasattr(obj, '__dict__'):
            return obj.__dict__
        else:
            return ''


if __name__ == '__main__':
    chart = Chart1()
    start = dt.datetime(2022, 7, 25)
    end = dt.datetime(2022, 7, 29)
    tickers = ['AAPL', 'MSFT']
    data = {}
    for t in tickers:
        series = reader.DataReader(t, 'yahoo', start, end)
        chart.panes[0].series.append(Series1(series))
    json = chart.to_json()
    print(json)
</code></pre>
<p>After running the code there are two problems with the json string returned:</p>
<ol>
<li>it looks like there are escape characters being added that can not be read by javascript JSON.parse.</li>
</ol>
<pre><code>'{"show_volume": true, "panes": [{"series": [{"data": "[{\\"Date\\":\\"2022-07-25T00:00:00.000Z\\",\\"High\\":155.0399932861,\\"Low\\":152.2799987793,\\"Open\\":154.0099945068,\\"Close\\":152.9499969482,\\"Volume\\":53623900,\\"Adj Close\\":152.9499969482},{\\"Date\\":\\"2022-07-26T00:00:00.000Z\\",\\"High\\":153.0899963379,\\"Low\\":150.8000030518,\\"Open\\":152.2599945068,\\"Close\\":151.6000061035,\\"Volume\\":55138700,\\"Adj Close\\":151.6000061035},{\\"Date\\":\\"2022-07-27T00:00:00.000Z\\",\\"High\\":157.3300018311,\\"Low\\":152.1600036621,\\"Open\\":152.5800018311,\\"Close\\":156.7899932861,\\"Volume\\":78620700,\\"Adj Close\\":156.7899932861},{\\"Date\\":\\"2022-07-28T00:00:00.000Z\\",\\"High\\":157.6399993896,\\"Low\\":154.4100036621,\\"Open\\":156.9799957275,\\"Close\\":157.3500061035,\\"Volume\\":81378700,\\"Adj Close\\":157.3500061035},{\\"Date\\":\\"2022-07-29T00:00:00.000Z\\",\\"High\\":163.6300048828,\\"Low\\":159.5,\\"Open\\":161.2400054932,\\"Close\\":162.5099945068,\\"Volume\\":101689200,\\"Adj Close\\":162.5099945068}]"}, {"data": "[{\\"Date\\":\\"2022-07-25T00:00:00.000Z\\",\\"High\\":261.5,\\"Low\\":256.8099975586,\\"Open\\":261.0,\\"Close\\":258.8299865723,\\"Volume\\":21056000,\\"Adj Close\\":258.8299865723},{\\"Date\\":\\"2022-07-26T00:00:00.000Z\\",\\"High\\":259.8800048828,\\"Low\\":249.5700073242,\\"Open\\":259.8599853516,\\"Close\\":251.8999938965,\\"Volume\\":39348000,\\"Adj Close\\":251.8999938965},{\\"Date\\":\\"2022-07-27T00:00:00.000Z\\",\\"High\\":270.049987793,\\"Low\\":258.8500061035,\\"Open\\":261.1600036621,\\"Close\\":268.7399902344,\\"Volume\\":45994000,\\"Adj Close\\":268.7399902344},{\\"Date\\":\\"2022-07-28T00:00:00.000Z\\",\\"High\\":277.8399963379,\\"Low\\":267.8699951172,\\"Open\\":269.75,\\"Close\\":276.4100036621,\\"Volume\\":33459300,\\"Adj Close\\":276.4100036621},{\\"Date\\":\\"2022-07-29T00:00:00.000Z\\",\\"High\\":282.0,\\"Low\\":276.6300048828,\\"Open\\":277.700012207,\\"Close\\":280.7399902344,\\"Volume\\":32129400,\\"Adj Close\\":280.7399902344}]"}], "rel_height": null}], "symbol": null, "interval": null, "data": null}'
</code></pre>
<ol start="2">
<li>After stripping away these characters manually, im left with the below. but even this is not a valid parseable json according to: <a href="https://jsonformatter.org/json-parser" rel="nofollow noreferrer">https://jsonformatter.org/json-parser</a>. You will notice that this is because the the series.data property ("data:" below) is quoted, as opposed to an array</li>
</ol>
<pre><code>{"show_volume": true, "panes": [{"series": [{"data": "[{"Date":"2022-07-25T00:00:00.000Z","High":155.0399932861,"Low":152.2799987793,"Open":154.0099945068,"Close":152.9499969482,"Volume":53623900,"Adj Close":152.9499969482},{"Date":"2022-07-26T00:00:00.000Z","High":153.0899963379,"Low":150.8000030518,"Open":152.2599945068,"Close":151.6000061035,"Volume":55138700,"Adj Close":151.6000061035},{"Date":"2022-07-27T00:00:00.000Z","High":157.3300018311,"Low":152.1600036621,"Open":152.5800018311,"Close":156.7899932861,"Volume":78620700,"Adj Close":156.7899932861},{"Date":"2022-07-28T00:00:00.000Z","High":157.6399993896,"Low":154.4100036621,"Open":156.9799957275,"Close":157.3500061035,"Volume":81378700,"Adj Close":157.3500061035},{"Date":"2022-07-29T00:00:00.000Z","High":163.6300048828,"Low":159.5,"Open":161.2400054932,"Close":162.5099945068,"Volume":101689200,"Adj Close":162.5099945068}]"}, {"data": "[{"Date":"2022-07-25T00:00:00.000Z","High":261.5,"Low":256.8099975586,"Open":261.0,"Close":258.8299865723,"Volume":21056000,"Adj Close":258.8299865723},{"Date":"2022-07-26T00:00:00.000Z","High":259.8800048828,"Low":249.5700073242,"Open":259.8599853516,"Close":251.8999938965,"Volume":39348000,"Adj Close":251.8999938965},{"Date":"2022-07-27T00:00:00.000Z","High":270.049987793,"Low":258.8500061035,"Open":261.1600036621,"Close":268.7399902344,"Volume":45994000,"Adj Close":268.7399902344},{"Date":"2022-07-28T00:00:00.000Z","High":277.8399963379,"Low":267.8699951172,"Open":269.75,"Close":276.4100036621,"Volume":33459300,"Adj Close":276.4100036621},{"Date":"2022-07-29T00:00:00.000Z","High":282.0,"Low":276.6300048828,"Open":277.700012207,"Close":280.7399902344,"Volume":32129400,"Adj Close":280.7399902344}]"}], "rel_height": null}], "symbol": null, "interval": null, "data": null}
</code></pre>
<p>Any help with being able to avoid these two issues would be grateful</p>
|
<p>For starters, the JSON that is emitted is perfectly parsable by JavaScript's <code>JSON.parse</code>; the issue is that your <code>default</code> implementation returns a <code>str</code> object, so that gets serialized as a JSON <code>str</code>.</p>
<p>You probably want to use <code>to_dict</code> (which returns a <code>dict</code>) instead of <code>to_json</code>, you just have to handle the <code>pd.Timestamp</code> objects:</p>
<pre><code>class ChartEncoder(JSONEncoder):
def default(self, obj):
        if isinstance(obj, pd.DataFrame):
            return obj.to_dict(orient="records")
        elif isinstance(obj, pd.Series):
            # Series.to_dict does not accept an orient argument
            return obj.to_dict()
elif isinstance(obj, pd.Timestamp):
return obj.isoformat()
elif hasattr(obj, '__dict__'):
return obj.__dict__
else:
# you probably want this, it doesn't make sense to return an empty string
return JSONEncoder.default(self, obj)
</code></pre>
<p>Note, <code>return obj.__dict__</code> probably isn't the right way to do this. You should handle your custom types explicitly.</p>
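<p>A minimal usage sketch (assuming the <code>chart</code> object built in the question, and avoiding rebinding the name <code>json</code>):</p>
<pre><code>import json

payload = json.dumps(chart, cls=ChartEncoder)
parsed = json.loads(payload)
# 'data' is now a real nested list of dicts, not an embedded JSON string
print(type(parsed['panes'][0]['series'][0]['data']))  # <class 'list'>
</code></pre>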
|
python|json|pandas
| 1
|
4,322
| 35,212,271
|
Numpy create two arrays using fromiter simultaneously
|
<p>I have an iterator that looks something like the following</p>
<pre><code>it = ((x, x**2) for x in range(20))
</code></pre>
<p>and what I want is two arrays. one of the <code>x</code>s and the other of the <code>x**2</code>s but I don't actually know the number of elements, and I can't convert from one entry to the other, so I couldn't build the first, and then build the second from the first.</p>
<p>If I had only one outcome with unknown size, I could use <code>np.fromiter</code> to have it dynamically allocate efficiently, e.g.</p>
<pre><code>y = np.fromiter((x[0] for x in it), float)
</code></pre>
<p>with two I would hope I could do something like</p>
<pre><code>ita, itb = itertools.tee(it)
y = np.fromiter((x[0] for x in ita), float)
y2 = np.fromiter((x[1] for x in itb), float)
</code></pre>
<p>but because the first call exhausts the iterator, I'd be better off doing</p>
<pre><code>lst = list(it)
y = np.fromiter((x[0] for x in lst), float, len(lst))
y2 = np.fromiter((x[1] for x in lst), float, len(lst))
</code></pre>
<p>Because tee will be filling a deque the size of the whole list anyways. I'd love to avoid copying the iterator into a list before then copying it again into an array, but I can't think of a way to incrementally build an array without doing it entirely manually. In addition, <code>fromiter</code> seems to be written in c, so writing it in python would probably end up with no negligible difference over making a list first.</p>
|
<p>You could use <code>np.fromiter</code> to build one array with all the values, and then slice the array:</p>
<pre><code>In [103]: it = ((x, x**2) for x in range(20))
In [104]: import itertools
In [105]: y = np.fromiter(itertools.chain.from_iterable(it), dtype=float)
In [106]: y
Out[106]:
array([ 0., 0., 1., 1., 2., 4., 3., 9., 4.,
16., 5., 25., 6., 36., 7., 49., 8., 64.,
9., 81., 10., 100., 11., 121., 12., 144., 13.,
169., 14., 196., 15., 225., 16., 256., 17., 289.,
18., 324., 19., 361.])
In [107]: y, y2 = y[::2], y[1::2]
In [108]: y
Out[108]:
array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.,
11., 12., 13., 14., 15., 16., 17., 18., 19.])
In [109]: y2
Out[109]:
array([ 0., 1., 4., 9., 16., 25., 36., 49., 64.,
81., 100., 121., 144., 169., 196., 225., 256., 289.,
324., 361.])
</code></pre>
<p>The above manages to load the data from the iterator into arrays without the use of intermediate Python lists. The underlying data in the arrays is not contiguous, however. Many operations are faster on contiguous arrays:</p>
<pre><code>In [19]: a = np.arange(10**6)
In [20]: y1 = a[::2]
In [21]: z1 = np.ascontiguousarray(y1)
In [24]: %timeit y1.sum()
1000 loops, best of 3: 975 µs per loop
In [25]: %timeit z1.sum()
1000 loops, best of 3: 464 µs per loop
</code></pre>
<p>So you may wish to make <code>y</code> and <code>y2</code> contiguous:</p>
<pre><code>y = np.ascontiguousarray(y)
y2 = np.ascontiguousarray(y2)
</code></pre>
<p>Calling <code>np.ascontiguousarray</code> requires copying the non-contiguous data in <code>y</code>
and <code>y2</code> into new arrays. Unfortunately, I do not see a way to create <code>y</code> and
<code>y2</code> as contiguous arrays without copying.</p>
<hr>
<p>Here is a benchmark comparing the use of an intermediate Python list to NumPy slices (with and without <code>ascontiguousarray</code>):</p>
<pre><code>import numpy as np
import itertools as IT
def using_intermediate_list(g):
lst = list(g)
y = np.fromiter((x[0] for x in lst), float, len(lst))
y2 = np.fromiter((x[1] for x in lst), float, len(lst))
return y, y2
def using_slices(g):
y = np.fromiter(IT.chain.from_iterable(g), dtype=float)
y, y2 = y[::2], y[1::2]
return y, y2
def using_slices_contiguous(g):
y = np.fromiter(IT.chain.from_iterable(g), dtype=float)
y, y2 = y[::2], y[1::2]
y = np.ascontiguousarray(y)
y2 = np.ascontiguousarray(y2)
return y, y2
def using_array(g):
y = np.array(list(g))
y, y2 = y[:, 0], y[:, 1]
return y, y2
</code></pre>
<hr>
<pre><code>In [27]: %timeit using_intermediate_list(((x, x**2) for x in range(10**6)))
1 loops, best of 3: 376 ms per loop
In [28]: %timeit using_slices(((x, x**2) for x in range(10**6)))
1 loops, best of 3: 220 ms per loop
In [29]: %timeit using_slices_contiguous(((x, x**2) for x in range(10**6)))
1 loops, best of 3: 237 ms per loop
In [34]: %timeit using_array(((x, x**2) for x in range(10**6)))
1 loops, best of 3: 707 ms per loop
</code></pre>
|
python|arrays|numpy
| 2
|
4,323
| 59,920,461
|
Python 3.8.1: ModuleNotFoundError: No module named '_pywrap_tensorflow_internal' - Is tensorflow only supported up to 3.7?
|
<p>I am using Python 3.8.1 on Windows 10 and am trying to install TensorFlow. </p>
<p>I have tried many methods to install it, but I keep getting the following error upon importing TensorFlow. This time, I installed it using</p>
<pre><code>conda install -c conda-forge tensorflow
</code></pre>
<p>Here is the stack trace:</p>
<pre><code>Using TensorFlow backend.
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Program Files (x86)\Python38-32\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "deep_versions.py", line 6, in <module>
import keras
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\__init__.py", line 3, in <module>
from . import utils
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\utils\__init__.py", line 6, in <module>
from . import conv_utils
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\utils\conv_utils.py", line 9, in <module>
from .. import backend as K
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\backend\__init__.py", line 1, in <module>
from .load_backend import epsilon
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\backend\load_backend.py", line 90, in <module>
from .tensorflow_backend import *
File "C:\Users\rapto\AppData\Roaming\Python\Python38\site-packages\keras\backend\tensorflow_backend.py", line 5, in <module>
import tensorflow as tf
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Program Files (x86)\Python38-32\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Program Files (x86)\Python38-32\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>What I have done to try to fix it: I have looked at other similar questions on here but they have been for older versions of Python. I have also looked at <a href="https://github.com/tensorflow/tensorflow/issues/22512" rel="nofollow noreferrer">this GitHub post</a> but could not resolve the issue. </p>
<p>I checked the TensorFlow site for Python version compatibility, but the latest update that I could find was a few months ago, saying that TensorFlow only supports up to Python 3.7 (<a href="https://github.com/tensorflow/tensorflow/issues/33374" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/33374</a>). Is this still the case and do I need to downgrade Python in order to use TensorFlow? </p>
<p>Please let me know if there is any more information I should provide (I'm still learning how to correctly ask questions here). Thank you.</p>
<p>Edit: I created a conda environment with Python 3.7 and I did not get the same error and importing TF seems to be working fine now. It seems that Python 3.8 is still not yet supported, so this may have been the problem.</p>
|
<p>Per the installation documents, only Python 3.5-3.7 are supported. <a href="https://www.tensorflow.org/install" rel="nofollow noreferrer">https://www.tensorflow.org/install</a></p>
|
python-3.x|tensorflow|anaconda|conda
| 0
|
4,324
| 60,087,418
|
Iterate over pandas dataframe and apply condition
|
<p>Consider I have this dataframe, wherein I want to remove 'toy' as a topic from the Topic column and, if a row has 'toy' as its only topic, remove that row. How can we do that in pandas?</p>
<pre><code>+---+-----------------------------------+-------------------------+
| | Comment | Topic |
+---+-----------------------------------+-------------------------+
| 1 | ----- | toy, bottle, vegetable |
| 2 | ----- | fruit, toy, electronics |
| 3 | ----- | toy |
| 4 | ----- | electronics, fruit |
| 5 | ----- | toy, electronic |
+---+-----------------------------------+-------------------------+
</code></pre>
|
<p>Try using <code>str.replace</code> with <code>str.rstrip</code> and <code>ne</code> inside <code>[...]</code>:</p>
<pre><code>df['topic'] = df['topic'].str.replace('toy', ' ').str.replace(' , ', '').str.rstrip()
print(df[df['topic'].ne('')])
</code></pre>
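<p>If the replacement-based approach leaves stray commas for your real data, a rough alternative that treats <code>Topic</code> as a comma-separated list (assuming the column name from the question's table):</p>
<pre><code>topics = df['Topic'].str.split(',')
df['Topic'] = topics.apply(lambda ts: ', '.join(t.strip() for t in ts if t.strip() != 'toy'))
df = df[df['Topic'] != '']
</code></pre>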
|
python|pandas
| 1
|
4,325
| 59,971,737
|
Numpy Gaussian from /dev/urandom
|
<p>I have an application where I need <code>numpy.random.normal</code> but from a crypgoraphic PRNG source. Numpy doesn't seem to provide this option.</p>
<p>The best I could find was <code>numpy.random.entropy.random_entropy</code> but this is only uint32 and it's buggy, with large arrays you get "RuntimeError: Unable to read from system cryptographic provider" even though urandom is non-blocking...</p>
<p>You can however do this: <code>np.frombuffer(bytearray(os.urandom(1000*1000*4)), dtype=np.uint32).astype(np.double).reshape(1000, 1000)</code></p>
<p>But I'm still left with the problem of somehow converting it to a Gaussian and not screwing anything up.</p>
<p>Is there a solution someone knows? Google is poisoned by numpy seeding from /dev/urandom, I don't need seeding, I need urandom being the only source of all randomness.</p>
|
<p>I came across <code>scipy.stats.rvs_ratio_uniforms</code> and adapted their code for my purpose. It's only 3 times slower than <code>np.random.normal</code> despite sampling twice the randomness from a cryptographic source.</p>
<pre><code>import numpy as np
import os
def uniform_0_1(size):
return np.frombuffer(bytearray(os.urandom(size*4)), dtype=np.uint32).astype(np.float) / 2**32
def normal(mu, sigma, size):
bmp = np.sqrt(2.0/np.exp(1.0)) # about 0.8577638849607068
size1d = tuple(np.atleast_1d(size))
N = np.prod(size1d) # number of rvs needed, reshape upon return
x = np.zeros(N)
simulated = 0
while simulated < N:
k = N - simulated
a = uniform_0_1(size=k)
b = (2.0 * uniform_0_1(size=k) - 1.0) * bmp
accept = (b**2 <= - 4 * a**2 * np.log(a))
num_accept = np.sum(accept)
if num_accept > 0:
x[simulated : (simulated + num_accept)] = (b[accept] * sigma / a[accept]) + mu
simulated += num_accept
return np.reshape(x, size1d)
</code></pre>
<p>One worry though, <code>numpy.random.random_sample</code> : Return random floats in the half-open interval [0.0, 1.0).</p>
<p>I'm not sure how to achieve that guarantee (never 1.0) with my uniform_0_1 or whether it even matters.</p>
|
python|numpy|random|gaussian|normal-distribution
| 1
|
4,326
| 50,020,683
|
which one is effecient, join queries using sql, or merge queries using pandas?
|
<p>I want to use data from multiple tables in a <code>pandas dataframe</code>. I have 2 idea for downloading data from the server, one way is to use <code>SQL</code> join and retrieve data and one way is to download dataframes separately and merge them using pandas.merge.</p>
<h1>SQL Join</h1>
<p>when I want to download data into <code>pandas</code>.</p>
<pre><code>query='''SELECT table1.c1, table2.c2
FROM table1
INNER JOIN table2 ON table1.ID=table2.ID where condidtion;'''
df = pd.read_sql(query,engine)
</code></pre>
<h1>Pandas Merge</h1>
<pre><code>df1 = pd.read_sql('select c1 from table1 where condition;',engine)
df2 = pd.read_sql('select c2 from table2 where condition;',engine)
df = pd.merge(df1,df2,on='ID', how='inner')
</code></pre>
<p>Which one is faster? Assume that I want to do that for more than 2 tables and 2 columns.
Is there a better approach?
In case it matters, I use <code>PostgreSQL</code>.</p>
|
<p>To really know which is faster, you need to try out the two queries using your data on your databases.</p>
<p>The rule of thumb is to do the logic in a single query. Databases are designed for queries. They have sophisticated algorithms, multiple processors, and lots of memory to handle them. So, relying on the database is quite reasonable. In addition, each query has a bit of overhead, so two queries have twice the overhead of one.</p>
<p>That said, there are definitely circumstances where doing the work in pandas is going to be faster. Pandas is going to do the work in local memory. That is limited -- but much less so than in the "good old days". It is probably not going to be multi-threaded.</p>
<p>For example, the result set might be much larger than the two tables. Moving the data from the database to the application might be (relatively) expensive in that case. Doing the work in in pandas could be faster than in the database.</p>
<p>At the other extreme, no records might match the <code>JOIN</code> conditions. That is definitely a case where a single query would be faster.</p>
|
python|sql|postgresql|pandas
| 3
|
4,327
| 64,080,487
|
How to groupby, aggregate and plot a bar plot?
|
<p>I started with this code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('odi.csv')
df=pd.DataFrame(dataset)
</code></pre>
<p>I am using the dataset, grouping by the batting-team column, and then taking the average runs scored by each team. I used this code</p>
<pre><code>total_run=df['runs'].groupby(df['bat_team'])
print(total_run.mean())
</code></pre>
<h2>Now I used this code for plotting the bar graph</h2>
<pre><code>fig = plt.figure(figsize=(21,10))
ax = fig.add_axes([0,0,1,1])
ax.bar(bat_team,mean)
plt.show()
</code></pre>
<h2>but this error shows up</h2>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-26-9b510a8ae181> in <module>
1 fig = plt.figure(figsize=(21,10))
2 ax = fig.add_axes([0,0,1,1])
----> 3 ax.bar(bat_team,mean)
4 plt.show()
NameError: name 'bat_team' is not defined
</code></pre>
<h2>Some of the data from the dataset</h2>
<pre class="lang-py prettyprint-override"><code>,mid,date,venue,bat_team,bowl_team,batsman,bowler,runs,wickets,overs,runs_last_5,wickets_last_5,striker,non-striker,total
0,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,0,0,0.1,0,0,0,0,301
1,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,0,0,0.2,0,0,0,0,301
2,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,4,0,0.3,4,0,0,0,301
3,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,6,0,0.4,6,0,0,0,301
4,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,6,0,0.5,6,0,0,0,301
5,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,ME Trescothick,DT Johnston,6,0,0.6,6,0,0,0,301
6,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,EC Joyce,D Langford-Smith,6,0,1.1,6,0,0,0,301
7,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,EC Joyce,D Langford-Smith,6,0,1.2,6,0,0,0,301
8,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,EC Joyce,D Langford-Smith,6,0,1.3,6,0,0,0,301
9,1,2006-06-13,"Civil Service Cricket Club, Stormont",England,Ireland,EC Joyce,D Langford-Smith,7,0,1.3,7,0,0,0,301
</code></pre>
|
<ul>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pandas.read_csv</code></a> creates a DataFrame, so it's not correct to create <code>dataset</code>, and then <code>df = pd.DataFrame(dataset)</code></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>pandas.DataFrame.groupby</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html" rel="nofollow noreferrer">Group by: split-apply-combine</a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>pandas.DataFrame.plot</code></a></li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
# read the file
df = pd.read_csv('odi.csv')
# groupby bat_team and get mean of runs
dfg = df.groupby('bat_team')['runs'].mean()
# plot the groupby result
ax = dfg.plot.bar(figsize=(20, 10), ylabel='Average Runs')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/1ksUE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ksUE.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|bar-chart
| 1
|
4,328
| 64,099,194
|
AttributeError: 'Concatenate' object has no attribute 'shape'
|
<p>I'm trying to do some image segmentation in tensorflow, here is my model :</p>
<pre><code>inputs = Input((IMAGE_HEIGHT, IMAGE_WIDTH, 3))
s = Lambda(lambda x: x / 255) (inputs)
conv1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (inputs)
conv1 = BatchNormalization() (conv1)
conv1 = Dropout(0.1) (conv1)
conv1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv1)
conv1 = BatchNormalization() (conv1)
pooling1 = MaxPooling2D((2, 2)) (conv1)
conv2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling1)
conv2 = BatchNormalization() (conv2)
conv2 = Dropout(0.1) (conv2)
conv2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv2)
conv2 = BatchNormalization() (conv2)
pooling2 = MaxPooling2D((2, 2)) (conv2)
conv3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling2)
conv3 = BatchNormalization() (conv3)
conv3 = Dropout(0.2) (conv3)
conv3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv3)
conv3 = BatchNormalization() (conv3)
pooling3 = MaxPooling2D((2, 2)) (conv3)
conv4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling3)
conv4 = BatchNormalization() (conv4)
conv4 = Dropout(0.2) (conv4)
conv4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv4)
conv4 = BatchNormalization() (conv4)
pooling4 = MaxPooling2D(pool_size=(2, 2)) (conv4)
conv5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (pooling4)
conv5 = BatchNormalization() (conv5)
conv5 = Dropout(0.3) (conv5)
conv5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv5)
conv5 = BatchNormalization() (conv5)
upsample6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') (conv5)
upsample6 = Concatenate([upsample6, conv4])
conv6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample6)
conv6 = BatchNormalization() (conv6)
conv6 = Dropout(0.2) (conv6)
conv6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv6)
conv6 = BatchNormalization() (conv6)
upsample7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (conv6)
upsample7 = Concatenate([upsample7, conv3])
conv7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample7)
conv7 = BatchNormalization() (conv7)
conv7 = Dropout(0.2) (conv7)
conv7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv7)
conv7 = BatchNormalization() (conv7)
upsample8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (conv7)
upsample8 = Concatenate([upsample8, conv2])
conv8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample8)
conv8 = BatchNormalization() (conv8)
conv8 = Dropout(0.1) (conv8)
conv8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv8)
conv8 = BatchNormalization() (conv8)
upsample9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (conv8)
upsample9 = Concatenate([upsample9, conv1], axis=3)
conv9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (upsample9)
conv9 = BatchNormalization() (conv9)
conv9 = Dropout(0.1) (conv9)
conv9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same') (conv9)
conv9 = BatchNormalization() (conv9)
outputs = Conv2D(1, (1, 1), activation='sigmoid') (conv9)
model = Model(inputs=[inputs], outputs=[outputs])
model.summary()
</code></pre>
<p>But it's giving me this error :</p>
<blockquote>
<blockquote>
<blockquote>
<p>AttributeError: 'Concatenate' object has no attribute 'shape'</p>
</blockquote>
</blockquote>
</blockquote>
<p>My imports :</p>
<pre><code>
from tensorflow.keras.layers import Dense, Dropout, Lambda, Input, Masking,...
from tensorflow.keras.layers import Reshape, Dropout, Dense,Multiply, Dot, Concatenate,Embedding
...
</code></pre>
<p><strong>How can I resolve this ?</strong></p>
<p>Origin - <a href="https://github.com/Paulymorphous/skeyenet/blob/master/Src/Road_Detection_GPU.ipynb" rel="nofollow noreferrer">https://github.com/Paulymorphous/skeyenet/blob/master/Src/Road_Detection_GPU.ipynb</a></p>
<p>As I'm using keras from tensorflow, please suggest some solutions in that manner, else it will break whole project and I have to change whole structure.</p>
|
<p>Try to use:</p>
<pre><code>upsample7 = Concatenate()([upsample7, conv3])
</code></pre>
<p>and update each upsample layer, i.e. every place where you use the Concatenate function.</p>
<p>Note the <code>()</code> in Concatenate ^^</p>
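<p>Alternatively, the lowercase functional helper does the same thing (a small sketch, assuming it is imported from <code>tensorflow.keras.layers</code> like the other layers):</p>
<pre><code>from tensorflow.keras.layers import concatenate

upsample7 = concatenate([upsample7, conv3])
</code></pre>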
|
python|tensorflow|keras|tensorflow2.0
| 3
|
4,329
| 64,002,915
|
While running tensorboard command and i am getting error?
|
<p><code>!tensorboard --logdir=drive/My Drive/Proj/fer/checkpoint/logs/</code></p>
<p><strong>I am running this command in Google Colab.</strong></p>
<p><a href="https://i.stack.imgur.com/ZQhgw.png" rel="nofollow noreferrer">getting this error</a></p>
|
<p>You get this error because of the empty space in 'My Drive'. So you need to escape the whitespace or rename it:</p>
<pre><code>!tensorboard --logdir=drive/My\ Drive/Proj/fer/checkpoint/logs/
</code></pre>
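<p>Quoting the path should also work, if you prefer not to escape the space:</p>
<pre><code>!tensorboard --logdir="drive/My Drive/Proj/fer/checkpoint/logs/"
</code></pre>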
|
python|tensorflow|tensorboard
| 0
|
4,330
| 64,020,403
|
Pandas multiply selected columns by previous column
|
<p>Assume I have a 3 x 9 Dataframe with index from 0 - 2 and columns from 0 - 8</p>
<pre><code>nums = np.arange(1, 28)
arr = np.array(nums)
arr = arr.reshape((3, 9))
df = pd.DataFrame(arr)
</code></pre>
<p>I want to multiply selected columns (example [2, 5, 7]) by the columns behind them (example [1, 4, 6])
My obstacle is getting the correct index of the previous column to match with the column I want to multiply</p>
<p>Issue:</p>
<pre><code>df[[2, 5, 7]] = df[[2, 5, 7]].multiply(___, axis="index") # in this case I want the blank to be df[[1, 4, 6]], but how to get these indexes for the general case when selected columns vary?
</code></pre>
|
<p>Let's try working with the numpy array:</p>
<pre><code>cols = np.array([2,5,7])
df[cols] *= df[cols-1].values
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3 4 5 6 7 8
0 1 2 6 4 5 30 7 56 9
1 10 11 132 13 14 210 16 272 18
2 19 20 420 22 23 552 25 650 27
</code></pre>
<p>Or you can use:</p>
<pre><code>df.update(df[cols]*df.shift(1, axis=1))
</code></pre>
<p>which gives:</p>
<pre><code>    0   1      2   3   4      5   6      7   8
0   1   2    6.0   4   5   30.0   7   56.0   9
1  10  11  132.0  13  14  210.0  16  272.0  18
2  19  20  420.0  22  23  552.0  25  650.0  27
</code></pre>
|
python|pandas
| 3
|
4,331
| 63,902,183
|
How do I plot a beautiful scatter plot with linear regression?
|
<p>I want to make a beautiful <code>scatter plot</code> with <code>linear regression</code> line using the data given below. I was able to create a scatter plot but am not satisfied with how it looks. Additionally, I want to plot a <code>linear regression</code> line on the data.</p>
<p>My data and code are below:</p>
<pre><code>x y
117.00 111.0
107.00 110.0
77.22 78.0
112.00 95.4
149.00 150.0
121.00 121.0
121.61 120.0
111.54 140.0
73.00 72.0
70.47 000.0
66.3 72.0
113.00 131.0
81.00 81.0
72.00 00.0
74.20 98.0
84.24 90.0
86.60 88.0
99.00 97.0
90.00 102.0
85.00 000.0
138.0 135.0
96.00 93.0
import numpy as np
import matplotlib.pyplot as plt
print(plt.style.available)
from sklearn.linear_model import LinearRegression
plt.style.use('ggplot')
data = np.loadtxt('test_data', dtype=float, skiprows=1,usecols=(0,1))
x=data[:,0]
y=data[:,1]
plt.xlim(20,200)
plt.ylim(20,200)
plt.scatter(x,y, marker="o",)
plt.show()
</code></pre>
|
<p>Please check the snippet. You can use <code>numpy.polyfit()</code> with degree=1 to calculate the slope and y-intercept of the line <code>y = m*x + c</code>.
<a href="https://i.stack.imgur.com/TZBA3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZBA3.png" alt="graph" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
data = np.loadtxt('test_data.txt', dtype=float, skiprows=1,usecols=(0,1))
x=data[:,0]
y=data[:,1]
plt.xlim(20,200)
plt.ylim(20,200)
plt.scatter(x,y, marker="o",)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
plt.show()
</code></pre>
<p><strong>Edit1:</strong>
Based on your comment, I added more points; the graph now looks like this, and the fitted line passes through the points.</p>
<p>To set transparency to points you can use <code>alpha</code> argument . You can set range between 0 and 1 to change transparency. Here I set <code>alpha=0.5</code></p>
<p><code>plt.scatter(x,y, marker="o",alpha=0.5)</code>
<a href="https://i.stack.imgur.com/QjtDT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QjtDT.png" alt="transparent" /></a>
<strong>Edit2:</strong> Based on @tmdavison's suggestion
<a href="https://i.stack.imgur.com/qmahb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qmahb.png" alt="graph2" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
data = np.loadtxt('test_data.txt', dtype=float, skiprows=1,usecols=(0,1))
x=data[:,0]
y=data[:,1]
x2 = np.arange(0, 200)
plt.xlim(20,200)
plt.ylim(20,200)
plt.scatter(x,y, marker="o",)
m, b = np.polyfit(x, y, 1)
plt.plot(x2, m*x2 + b)
plt.show()
</code></pre>
|
python|pandas|matplotlib|seaborn|python-ggplot
| 3
|
4,332
| 64,102,826
|
Efficient method comparing 2 different tables columns
|
<p><img src="https://i.stack.imgur.com/Lopgz.png" alt="Example_List" /></p>
<p>Hi all guys,</p>
<p>I have got 2 dfs and I need to check whether the values from the first match those in the second, only for a specific column in each, and save the matching values in a new list. This is what I did, but it is taking quite a lot of time and I was wondering if there's a more efficient way. The lists are like in the image above, from 2 different tables.</p>
<pre><code>for x in df_bd_names['Building_Name']:
for y in df_sup['Source_String']:
if x == y:
matching_words_sup.append(x)
</code></pre>
<p>Thanks</p>
|
<p>Let's create both dataframes:</p>
<pre><code>df1 = pd.DataFrame({
'Building_Name': ['Exces', 'Excs', 'Exec', 'Executer', 'Executor']
})
df2 = pd.DataFrame({
'Source_String': ['Executer', 'Executor', 'Executor Of', 'Executor For', 'Exeutor']
})
</code></pre>
<p>Perform inner merge between dataframes and convert first column to list:</p>
<pre><code>pd.merge(df1, df2, left_on='Building_Name', right_on='Source_String', how='inner')['Building_Name'].tolist()
</code></pre>
<p>Output:</p>
<pre><code>['Executer', 'Executor']
</code></pre>
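<p>A rough alternative without a merge, using <code>isin</code> (assuming the same dataframes and column names as above):</p>
<pre><code>mask = df1['Building_Name'].isin(df2['Source_String'])
matching_words_sup = df1.loc[mask, 'Building_Name'].tolist()
</code></pre>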
|
python|python-3.x|pandas|list|string-matching
| 0
|
4,333
| 47,040,238
|
Cleanest way to bin using Pandas.cut
|
<p>The purpose of this post is discussion primarily, so even loose ideas or strings to pull would be appreciated. I'm trying to bin some data for analysis, and was wondering what is the cleanest way to bin my data using <code>Pandas.cut</code>. For some context, I'm specifically trying to bin ICD-9 diagnostic data into categories and am using <a href="https://en.wikipedia.org/wiki/List_of_ICD-9_codes" rel="nofollow noreferrer">this list</a> as a starting point. From what I'm reading, a common way to do this is something like this:</p>
<pre><code>break_points = [0, 139, 239, ...]
labels = ['infectious and parasitic diseases', 'neoplasms', 'endocrine diseases', ...]
df['diag_codes_binned'] = pd.cut(df['diag_codes'],
bins=break_points,
labels=labels)
</code></pre>
<p>I recognize that this is a perfectly functional way to do this, but I don't like how hard it is to visually inspect the code and determine what range lines up with what label. I am exploring using a dictionary for this like this:</p>
<pre><code>diagnosis_code_dict = {139: 'infectious and parasitic diseases',
239: 'neoplasms',
279: 'endocrine diseases',
...}
</code></pre>
<p>But the pd.cut function doesn't seem to get along well with my dictionary. There appears to be one way to do this using a dataframe as a lookup table with min and max values, <a href="https://stackoverflow.com/questions/40590998/lookup-value-in-dictionary-between-range-pandas">shown here</a>, and that seems to be one possibility (example below):</p>
<pre><code>In [187]: lkp
Out[187]:
Min Max Val
0 1 99 AAA
1 100 199 BBB
2 200 299 CCC
3 300 399 DDD
</code></pre>
<p>Lastly, I have one more consideration for the data set that I'm working through the best way to handle. Some of the diagnostic codes start with V or E, and currently I'm planning on pre-processing these to convert them into an extension of the range and handling them that way. For example, if the range of possible non-E/V codes is <code>range(0,1000)</code>, then I could convert E's into a <code>range(1000, 2000)</code> and V's into a <code>range(2000, 3000)</code> so that I could maintain a single lookup table or dictionary for all codes from which I could cut into however many bins I wanted. That said, this method results in some loss of the ability to at-a-glance understand these codes, so I'd be open to suggestions if there is a better way to handle this.</p>
|
<p>I would simply write a small helper function. Here's one idea:</p>
<pre><code>import pandas as pd
def bin_helper(code_dict):
break_points = [0] + sorted(code_dict) #0 added for lower bound on binning
labels = [code_dict[value] for value in sorted(code_dict)]
return break_points, labels
# Setting up some minimal reproducible code...
data = {'diag_codes': range(1, 300),
'diag_codes_binned': ''}
df = pd.DataFrame.from_dict(data)
diag_code_dict = {139: 'infectious and parasitic diseases',
239: 'neoplasms',
279: 'endocrine diseases'}
# Run the function and drop it into pandas.cut
bins, labels = bin_helper(diag_code_dict)
df['diag_codes_binned'] = pd.cut(df['diag_codes'],
bins=bins,
labels=labels)
</code></pre>
<p>I agree that dictionaries (besides being an incredibly fast, versatile data structure in their own right!) are a very nice way to provide some context in your code about what data are supposed to mean. I often use a small "black-box" function to do the actual work if I need a dictionary to serve as part of my documentation. </p>
|
python|pandas|icd
| 2
|
4,334
| 46,693,557
|
Finding closest point in array - inverse of KDTree
|
<p>I have a very large ndarray A, and a sorted list of points k (a small list, about 30 points).</p>
<p>For every element of A, I want to determine the closest element in the list of points k, together with the index. So something like:</p>
<pre><code>>>> A = np.asarray([3, 4, 5, 6])
>>> k = np.asarray([4.1, 3])
>>> values, indices
[3, 4.1, 4.1, 4.1], [1, 0, 0, 0]
</code></pre>
<p>Now, the problem is that A is very very large. So I can't do something inefficient like adding one dimension to A, take the abs difference to k, and then take the minimum of each column. </p>
<p>For now I have been using np.searchsorted, as shown in the second answer here: <a href="https://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array">Find nearest value in numpy array</a> but even this is too slow. This is the code I used (modified to work with multiple values):</p>
<pre><code>def find_nearest(A,k):
indicesClosest = np.searchsorted(k, A)
flagToReduce = indicesClosest==k.shape[0]
modifiedIndicesToAvoidOutOfBoundsException = indicesClosest.copy()
modifiedIndicesToAvoidOutOfBoundsException[flagToReduce] -= 1
flagToReduce = np.logical_or(flagToReduce,
np.abs(A-k[indicesClosest-1]) <
np.abs(A - k[modifiedIndicesToAvoidOutOfBoundsException]))
flagToReduce = np.logical_and(indicesClosest > 0, flagToReduce)
indicesClosest[flagToReduce] -= 1
valuesClosest = k[indicesClosest]
return valuesClosest, indicesClosest
</code></pre>
<p>I then thought of using scipy.spatial.KDTree:</p>
<pre><code>>>> d = scipy.spatial.KDTree(k)
>>> d.query(A)
</code></pre>
<p>This turns out to be much slower than the searchsorted solution.</p>
<p>On the other hand, the array A is always the same, only k changes. So it would be beneficial to use some auxiliary structure (like a "inverse KDTree") on A, and then query the results on the small array k. </p>
<p>Is there something like that?</p>
<p><strong>Edit</strong></p>
<p>At the moment I am using a variant of np.searchsorted that requires the array A to be sorted. We can do this in advance as a pre-processing step, but we still have to restore the original order after computing the indices. This variant is about twice as fast as the one above.</p>
<pre><code>A = np.random.random(3000000)
k = np.random.random(30)
indices_sort = np.argsort(A)
sortedA = A[indices_sort]
inv_indices_sort = np.argsort(indices_sort)
k.sort()
def find_nearest(sortedA, k):
midpoints = k[:-1] + np.diff(k)/2
idx_aux = np.searchsorted(sortedA, midpoints)
idx = []
count = 0
final_indices = np.zeros(sortedA.shape, dtype=int)
old_obj = None
for obj in idx_aux:
if obj != old_obj:
idx.append((obj, count))
old_obj = obj
count += 1
old_idx = 0
for idx_A, idx_k in idx:
final_indices[old_idx:idx_A] = idx_k
old_idx = idx_A
final_indices[old_idx:] = len(k)-1
indicesClosest = final_indices[inv_indices_sort] #<- this takes 90% of the time
return k[indicesClosest], indicesClosest
</code></pre>
<p>The line that takes so much time is the line that brings the indices back to their original order. </p>
|
<p>Update:</p>
<p>The builtin function <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.digitize.html" rel="nofollow noreferrer"><code>numpy.digitize</code></a> can actually do exactly what you need. Only a small trick is required: <code>digitize</code> assigns values to <em>bins</em>. We can convert <code>k</code> to bins by sorting the array and setting the bin borders exactly in the middle between adjacent elements.</p>
<pre><code>import numpy as np
A = np.asarray([3, 4, 5, 6])
k = np.asarray([4.1, 3, 1]) # added another value to show that sorting/binning works
ki = np.argsort(k)
ks = k[ki]
i = np.digitize(A, (ks[:-1] + ks[1:]) / 2)
indices = ki[i]
values = ks[i]
print(values, indices)
# [ 3. 4.1 4.1 4.1] [1 0 0 0]
</code></pre>
<hr>
<p>Old answer:</p>
<p>I would take a brute-force approach to perform one vectorized pass over <code>A</code> for each element in <code>k</code> and update those locations where the current element improves the approximation.</p>
<pre><code>import numpy as np
A = np.asarray([3, 4, 5, 6])
k = np.asarray([4.1, 3])
err = np.zeros_like(A) + np.inf # keep track of error over passes
values = np.empty_like(A, dtype=k.dtype)
indices = np.empty_like(A, dtype=int)
for i, v in enumerate(k):
d = np.abs(A - v)
mask = d < err # only update where v is closer to A
values[mask] = v
indices[mask] = i
err[mask] = d[mask]
print(values, indices)
# [ 3. 4.1 4.1 4.1] [1 0 0 0]
</code></pre>
<p>This approach requires three temporary variables of same size as <code>A</code>, so it will fail if not enough memory is available.</p>
|
python|arrays|algorithm|numpy|scipy
| 2
|
4,335
| 32,816,410
|
Parallelize loop over numpy rows
|
<p>I need to apply the same function onto every row in a numpy array and store the result again in a numpy array.</p>
<pre><code># states will contain results of function applied to a row in array
states = np.empty_like(array)
for i, ar in enumerate(array):
states[i] = function(ar, *args)
# do some other stuff on states
</code></pre>
<p><code>function</code> does some <strong>non trivial</strong> filtering of my data and returns an array when the conditions are True and when they are False. <code>function</code> can either be pure python or cython compiled. The filtering operations on the rows are complicated and can depend on previous values in the row, this means I can't operate on the whole array in an element-by-element fashion</p>
<p>Is there a way to do something like this in dask for example?</p>
|
<h3>Dask solution</h3>
<p>You could do this with dask.array by chunking the array by row, calling <code>map_blocks</code>, then computing the result</p>
<pre><code>import dask.array as da

ar = ...
x = da.from_array(ar, chunks=(1, ar.shape[1]))
y = x.map_blocks(function, *args)
states = y.compute()
</code></pre>
<p>By default this will use threads, you can use processes in the following way</p>
<pre><code>from dask.multiprocessing import get
states = x.compute(get=get)
</code></pre>
<h3>Pool solution</h3>
<p>However dask is probably overkill for embarrassingly parallel computations like this, you could get by with a threadpool</p>
<pre><code>from multiprocessing.pool import ThreadPool
pool = ThreadPool()
ar = ...
states = np.empty_like(ar)
def f(i):
states[i] = function(ar[i], *args)
pool.map(f, range(len(ar)))
</code></pre>
<p>And you could switch to processes with the following change</p>
<pre><code>from multiprocessing import Pool
pool = Pool()
</code></pre>
|
python|numpy|dask
| 6
|
4,336
| 38,657,341
|
Pandas: groupby by date and transform nunique returning too many entries
|
<p>I am trying to do a simple group-by in Pandas and it is not working as it should:</p>
<pre><code>url='https://raw.githubusercontent.com/108michael/ms_thesis/master/raw_bills'
bills=pd.read_csv(url)
bills.date.nunique()
11
bills.dtypes
date float64
bills object
id.thomas int64
dtype: object
bills[['date', 'bills']].groupby(['date']).bills.transform('nunique')
0 3627
1 7454
2 7454
3 7454
4 3627
5 7454
6 7454
7 3627
8 7454
9 7454
10 3627
11 7454
12 7454
13 7454
14 7454
15 7454
16 3627
17 3627
18 7454
</code></pre>
<p>I've done this sort of group-by before, and it usually works fine.</p>
<p>Any suggestions on this?</p>
|
<p>I'm not sure what you're asking, but don't you want to use:</p>
<pre><code>bills[['date', 'bills']].groupby('date').bills.nunique()
date
2005.0 6820
2006.0 3738
2007.0 7454
2008.0 3627
2009.0 7324
2010.0 3297
2011.0 5787
2012.0 4647
2013.0 5694
2014.0 3211
2015.0 5
Name: bills, dtype: int64
</code></pre>
|
python|pandas|group-by
| 2
|
4,337
| 38,588,668
|
Why is prediction not plotted?
|
<p>Here is my code in Python 3:</p>
<pre><code>from sklearn import linear_model
import numpy as np
obj = linear_model.LinearRegression()
allc = np.array([[0,0],[1,1],[2,2],[3,3],[4,4],[5,5],[6,6]])
X=allc[:,0]
X=X.reshape(-1, 1)
Y=X.reshape(X.shape[0],-1)
obj.fit(X, Y)
print(obj.predict(7))
import matplotlib.pyplot as plt
plt.scatter(X,Y,color='black')
plt.plot(X[0],obj.predict(7),color='black',linewidth=3)
plt.show()
</code></pre>
<p>My plotted data looks this way:
<a href="https://i.stack.imgur.com/IGh7F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IGh7F.png" alt="enter image description here"></a>
After fitting, obj.predict(7) equals [7.]</p>
<p>What am I doing wrong? I expected to see the point (7, 7) being plotted.</p>
|
<p>The plot method is taking an array for the X-axis and an array for the Y-axis, and draws a <strong>line</strong> according to those arrays. You tried to draw a <strong>point</strong> using a method for <strong>lines</strong>...<br></p>
<p>For your code to work (I have tested it and it worked) switch this line:</p>
<pre><code>plt.plot(X[0],obj.predict(7),color='black',linewidth=3)
</code></pre>
<p>with this line: </p>
<pre><code>plt.scatter(7,obj.predict(7),color='black',linewidth=3)
</code></pre>
<p>The scatter method will take the point given (7, 7) and put it in the graph just like you wanted.</p>
<p>I hope this helped :)</p>
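<p>One version-related caveat: newer scikit-learn releases expect a 2-D array in <code>predict</code>, so a bare <code>7</code> may raise an error there. A hedged variant:</p>
<pre><code>plt.scatter(7, obj.predict(np.array([[7]])), color='black', linewidth=3)
</code></pre>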
|
python|python-3.x|numpy|matplotlib|linear-regression
| 0
|
4,338
| 63,296,247
|
print column values of one dataframe based on the column values of another dataframe
|
<p><img src="https://i.stack.imgur.com/U32iE.png" alt="enter image description here" /> <img src="https://i.stack.imgur.com/VWHgj.png" alt="enter image description here" /></p>
<p>So here is my first dataframe, df1. For each pair of values in the columns Starting DOY and Ending DOY (for example 3.0 and 6.0), I want to print the column values By, Bz, Vsw etc. of another dataframe, df2, by matching them against its DOY column.</p>
|
<p>Here is a simple tutorial how you can do it:</p>
<pre><code>from pandas import DataFrame
if __name__ == '__main__':
data1 = {'Starting DOY': [3.0, 3.0, 13.0],
'Ending DOY': [6.0, 6.0, 15.0]}
data2 = {'YEAR': [1975, 1975, 1975],
'DOY': [1.0, 3.0, 6.0],
'HR': [0, 1, 2],
'By': [-7.5, -4.0, -3.6],
'Bz': [0.2, 2.4, -2.3],
'Nsw': [999.9, 6.2, 5.9],
'Vsw': [9999.0, 476.0, 482.0],
'AE': [181, 138, 86]}
df1 = DataFrame(data1, columns=['Starting DOY',
'Ending DOY'])
df2 = DataFrame(data2, columns=['YEAR', 'DOY',
'HR', 'By', 'Bz',
'Nsw', 'Vsw', 'AE'])
for doy in df1.values:
start_doy = doy[0]
end_doy = doy[1]
for val in df2.values:
year = val[0]
current_doy = val[1]
hr = val[2]
By = val[3]
Bz = val[4]
Nsw = val[5]
Vsw = val[6]
AE = val[7]
if start_doy <= current_doy <= end_doy:
print("For DOY {}".format(current_doy))
print("By: {}".format(By))
print("Bz: {}".format(Bz))
print("Vsw: {}".format(Vsw))
print("--------------------")
</code></pre>
<p>Ouput:</p>
<pre><code>For DOY 3.0
By: -4.0
Bz: 2.4
Vsw: 476.0
--------------------
For DOY 6.0
By: -3.6
Bz: -2.3
Vsw: 482.0
--------------------
For DOY 3.0
By: -4.0
Bz: 2.4
Vsw: 476.0
--------------------
For DOY 6.0
By: -3.6
Bz: -2.3
Vsw: 482.0
--------------------
</code></pre>
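<p>A shorter pandas-based sketch of the same range matching, using <code>Series.between</code> (assuming the column names from the example dataframes above):</p>
<pre><code>for start, end in df1[['Starting DOY', 'Ending DOY']].itertuples(index=False):
    matched = df2[df2['DOY'].between(start, end)]
    print(matched[['DOY', 'By', 'Bz', 'Vsw']])
</code></pre>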
|
python|pandas|numpy|matplotlib|math
| 0
|
4,339
| 63,107,329
|
Generation of 3-D array
|
<p>I want to generate a 3-D array by assigning an array to an array.<br/>
The following is the code I wrote.<br/></p>
<pre><code>import numpy as np
def func01(a):
b = np.array([[a, 3],
[4, 5]])
return b
a = np.array([1, 2])
b = func01(a)
print(b)
</code></pre>
<p>Originally, I wrote this code expecting the following output.</p>
<pre><code> [[[1 3]
[4 5]]
[[2 3]
[4 5]]]
</code></pre>
<p>However, the following output is obtained.</p>
<pre><code> [[array([1, 2]) 3]
[4 5]]
</code></pre>
<p>Could someone give me a solution?<br/></p>
<p>My goal is to curve fit a function involving matrix calculation by SciPy.<br/>
I want to perform the curve fit for the following function.</p>
<pre><code>import numpy as np
import scipy.optimize
def func01(A, k):
b1 = np.array([[A[0], 3],
[4, 5]])
b2 = np.array([[A[1], 3],
[4, 5]])
B = np.dot(b1, b2)
w, v = np.linalg.eig(B)
C = w[np.argmax(w)] * k
return C
x = np.array([173, 273, 373])
y = np.array([0.1, 0.2, 0.3])
xy = np.append([x],[y],axis=0)
z = np.array([0.023, 0.027, 0.031])
k =0.00005 # initial value
para_opt, cov = scipy.optimize.curve_fit(func01, xy, z, k)
print(para_opt[0])
</code></pre>
<p>I can get the value with a simple variable assignment in this function.</p>
<pre><code>k = 0.0005
A = np.array([1,2])
C = func01(A, k)
print(C)
</code></pre>
<p>However, I cannot perform the curve fit with SciPy.
When I used SciPy, the following values were assigned to b1 above.</p>
<pre><code>[[array([173., 273., 373.]) 3]
[4 5]]
</code></pre>
<p>Therefore, I wanted to know how to get the following matrix.</p>
<pre><code>[[[173 3]
  [4 5]]
 [[273 3]
  [4 5]]
 [[373 3]
  [4 5]]]
</code></pre>
|
<pre><code>In [210]: res = np.zeros((2,2,2),int)
In [211]: res[:] = np.array([[0,3],[4,5]])
In [212]: res
Out[212]:
array([[[0, 3],
[4, 5]],
[[0, 3],
[4, 5]]])
In [213]: res[:,0,0]=[1,2]
In [214]: res
Out[214]:
array([[[1, 3],
[4, 5]],
[[2, 3],
[4, 5]]])
</code></pre>
<p>Tell us about <code>curve_fit</code>. What does it expect with regards to the arguments - type and shape? Do your arguments fit? As written your function expects <code>A</code> to have shape (2,). Is that what you are getting? Why, or why not?</p>
<p>(I could look up <code>curve_fit</code> docs, but I'd rather you did some of that leg work. You need the debugging practice :)</p>
<h2>your curve_fit problem - with column iteration</h2>
<p>So your arrays are:</p>
<pre><code>In [216]: xy
Out[216]:
array([[1.73e+02, 2.73e+02, 3.73e+02],
[1.00e-01, 2.00e-01, 3.00e-01]])
In [217]: z
Out[217]: array([0.023, 0.027, 0.031])
</code></pre>
<p>And apparently you want to call <code>func01</code> with one column of <code>xy</code>:</p>
<pre><code>In [219]: func01(xy[:,0],.00005)
Out[219]: 0.00687966968797453
</code></pre>
<p>to work with all columns we can start with an straight forward iteration:</p>
<pre><code>In [220]: def foo(A,k):
...: return np.array([func01(A[:,i],k) for i in range(A.shape[1])])
In [222]: foo(xy,.00005)
Out[222]: array([0.00687967, 0.00921688, 0.0120737 ])
</code></pre>
<p>And using that in <code>curve_fit</code>:</p>
<pre><code>In [223]: k =0.00005 # initial value
In [226]: optimize.curve_fit(foo, xy, z, k)
Out[226]: (array([0.00014051]), array([[1.04449717e-10]]))
</code></pre>
<p>A further step is to modify the function so it works with 2d <code>xy</code> without that iteration. But for a start we need a clear working example. It's always easier to improve working code that we can check our answers against.</p>
<h2>vectorizing func (part 1)</h2>
<p>Now that we have reference we can try to "vectorize" the function - step by step. We need to do more than generate a 3d array, I think.</p>
<p>Looks like the <code>b1.dot(b2)</code> and following needs to be a "batched" operation, so lets use <code>@</code>:</p>
<pre><code>In [228]: def func02(A, k):
...: bb = np.zeros((2,A.shape[1],2,2))
...: for i in range(A.shape[1]):
...: bb[0,i] = np.array([[A[0,i], 3],
...: [4, 5]])
...: bb[1,i] = np.array([[A[1,i], 3],
...: [4, 5]])
...:
...: #B = np.dot(b1, b2)
...: B = bb[0] @ bb[1]
...: w, v = np.linalg.eig(B)
...: C = w[np.argmax(w, axis=-1)] * k
...: return C
...:
In [230]: func02(xy,.00005)
Out[230]:
array([[ 0.00921688, -0.00403688],
[-0.00356467, 0.00687967],
[-0.00356467, 0.00687967]])
</code></pre>
<p>Not right. Let's a shape print:</p>
<pre><code>In [231]: def func02(A, k):
...: bb = np.zeros((2,A.shape[1],2,2))
...: for i in range(A.shape[1]):
...: bb[0,i] = np.array([[A[0,i], 3],
...: [4, 5]])
...: bb[1,i] = np.array([[A[1,i], 3],
...: [4, 5]])
...:
...: #B = np.dot(b1, b2)
...: B = bb[0] @ bb[1]
...: w, v = np.linalg.eig(B)
...: print(B.shape, w.shape)
...: C = w[np.argmax(w, axis=-1)] * k
...: return C
...:
...:
In [232]:
In [232]: func02(xy,.00005)
(3, 2, 2) (3, 2)
Out[232]:
array([[ 0.00921688, -0.00403688],
[-0.00356467, 0.00687967],
[-0.00356467, 0.00687967]])
</code></pre>
<p>The <code>w</code> indexing needs tweaking:</p>
<pre><code>In [233]: def func02(A, k):
...: bb = np.zeros((2,A.shape[1],2,2))
...: for i in range(A.shape[1]):
...: bb[0,i] = np.array([[A[0,i], 3],
...: [4, 5]])
...: bb[1,i] = np.array([[A[1,i], 3],
...: [4, 5]])
...:
...: #B = np.dot(b1, b2)
...: B = bb[0] @ bb[1]
...: w, v = np.linalg.eig(B)
...: print(B.shape, w.shape)
...: C = w[:, np.argmax(w, axis=-1)] * k
...: return C
...:
...:
In [234]: func02(xy,.00005)
(3, 2, 2) (3, 2)
Out[234]:
array([[ 0.00687967, -0.00356467, -0.00356467],
[-0.00403688, 0.00921688, 0.00921688],
[-0.0040287 , 0.0120737 , 0.0120737 ]])
</code></pre>
<p>The correct values are on the diagonal, so we need another change to the indexing:</p>
<pre><code>In [235]: def func02(A, k):
...: n = A.shape[1]
...: bb = np.zeros((2,n,2,2))
...: for i in range(n):
...: bb[0,i] = np.array([[A[0,i], 3],
...: [4, 5]])
...: bb[1,i] = np.array([[A[1,i], 3],
...: [4, 5]])
...:
...: #B = np.dot(b1, b2)
...: B = bb[0] @ bb[1]
...: w, v = np.linalg.eig(B)
...: print(B.shape, w.shape)
...: C = w[np.arange(n), np.argmax(w, axis=-1)] * k
...: return C
...:
...:
In [236]: func02(xy,.00005)
(3, 2, 2) (3, 2)
Out[236]: array([0.00687967, 0.00921688, 0.0120737 ])
</code></pre>
<p>That matches <code>foo</code>, so it works in <code>curve_fit</code>:</p>
<pre><code>In [237]: optimize.curve_fit(func02, xy, z, k)
(3, 2, 2) (3, 2)
...
(3, 2, 2) (3, 2)
Out[237]: (array([0.00014051]), array([[1.04449717e-10]]))
</code></pre>
<p>Now that we have the last part of <code>func02</code> right, we can address your original 3d array issue.</p>
<h2>final</h2>
<p>Using the array construction I started with:</p>
<pre><code>In [240]: def func02(A, k):
...: n = A.shape[1]
...: bb = np.zeros((2,n,2,2))
...: bb[:] = np.array([[0,3],[4,5]])
...: bb[:,:,0,0] = A
...: B = bb[0] @ bb[1]
...: w, v = np.linalg.eig(B)
...: C = w[np.arange(n), np.argmax(w, axis=-1)] * k
...: return C
...:
In [241]: func02(xy,.00005)
Out[241]: array([0.00687967, 0.00921688, 0.0120737 ])
</code></pre>
|
python|arrays|numpy|scipy
| 1
|
4,340
| 68,017,780
|
Stack rows over two columns
|
<p>I want to stack the rows with the same ID and OP Date together in one row.</p>
<p>Source:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>OP Date</th>
<th>OP Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01.01.2021</td>
<td>X1</td>
</tr>
<tr>
<td>1</td>
<td>01.01.2021</td>
<td>X2</td>
</tr>
<tr>
<td>1</td>
<td>02.01.2021</td>
<td>X3</td>
</tr>
<tr>
<td>2</td>
<td>03.01.2021</td>
<td>X4</td>
</tr>
<tr>
<td>2</td>
<td>03.01.2021</td>
<td>X5</td>
</tr>
<tr>
<td>3</td>
<td>04.01.2021</td>
<td>X6</td>
</tr>
<tr>
<td>3</td>
<td>04.01.2021</td>
<td>X7</td>
</tr>
<tr>
<td>3</td>
<td>04.01.2021</td>
<td>X8</td>
</tr>
<tr>
<td>3</td>
<td>05.01.2021</td>
<td>X9</td>
</tr>
<tr>
<td>3</td>
<td>05.01.2021</td>
<td>X10</td>
</tr>
</tbody>
</table>
</div>
<p>Desiered Outcome:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>OP Date</th>
<th>OP Code</th>
<th>OP Code_2</th>
<th>OP Code_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01.01.2021</td>
<td>X1</td>
<td>X2</td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>02.01.2021</td>
<td>X3</td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>03.01.2021</td>
<td>X4</td>
<td>X5</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>04.01.2021</td>
<td>X6</td>
<td>X7</td>
<td>X8</td>
</tr>
<tr>
<td>3</td>
<td>05.01.2021</td>
<td>X9</td>
<td>X10</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>How can i do this operation in Pandas or with another tools in Python?</p>
<p>Thanks for the help!</p>
|
<p>You can parse it:</p>
<pre><code>import pandas as pd
from collections import defaultdict
data=[{"id": 1, "OP Date": "01.01.2021", "OP Code": "X1"},
{"id": 1, "OP Date": "01.01.2021", "OP Code": "X2"}]
parsed_ops = defaultdict(list)
for item in data:
parsed_ops[f'{item["id"]}_{item["OP Date"]}'].append(item["OP Code"])
parsed_data=[]
for key, value in parsed_ops.items():
parsed = {"id": f'{key.split("_")[0]}', "OP Date": f'{key.split("_")[1]}'}
for index, op in enumerate(value, start=1):
parsed[f"OP Code_{index}"]=op
parsed_data.append(parsed)
pd.DataFrame(parsed_data)
</code></pre>
<p>Output:</p>
<pre><code> id OP Date OP Code_1 OP Code_2
0 1 01.01.2021 X1 X2
</code></pre>
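<p>A pandas-native alternative is sketched below (assuming the wide columns may be named <code>OP Code_1</code>, <code>OP Code_2</code>, ...): number the codes within each <code>ID</code>/<code>OP Date</code> group and unstack.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 1, 2, 2],
                   "OP Date": ["01.01.2021", "01.01.2021", "02.01.2021", "03.01.2021", "03.01.2021"],
                   "OP Code": ["X1", "X2", "X3", "X4", "X5"]})

# number the codes within each (ID, OP Date) group: 1, 2, 3, ...
df["n"] = df.groupby(["ID", "OP Date"]).cumcount() + 1

# pivot the numbered codes into columns OP Code_1, OP Code_2, ...
wide = (df.set_index(["ID", "OP Date", "n"])["OP Code"]
          .unstack()
          .add_prefix("OP Code_")
          .reset_index())
print(wide)
</code></pre>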
|
python|pandas
| 0
|
4,341
| 67,645,828
|
Passing arguments to extended keras.model without init
|
<p><a href="/questions/tagged/keras" class="post-tag" title="show questions tagged 'keras'" rel="tag">keras</a> documentation provides an <a href="https://keras.io/guides/customizing_what_happens_in_fit/" rel="nofollow noreferrer">example</a> of extending the <a href="/questions/tagged/keras" class="post-tag" title="show questions tagged 'keras'" rel="tag">keras</a> model without the <code>init</code> method. From my understanding, this is nice because you don't have to implement the <code>call</code> function. Now to instantiate the model you can do something like this</p>
<pre><code>model = CustomModel(inputs, outputs)
</code></pre>
<p>Not sure where <code>inputs</code>, <code>outputs</code> are going - it would be nice to know - but my question is how do I pass additional arguments when instantiating the model, i.e.:</p>
<pre><code> model = CustomModel(inputs, outputs, other_args)
</code></pre>
<p><strong>Edit</strong></p>
<p><code>other_args</code> can be anything passed to <code>CustomModel</code> (not <code>keras.model</code>) i.e.: <code>alpha=1.0</code></p>
<p>The research effort is that I looked through keras documentation and it shows two ways to extend the model. The advertised method is to implement <code>__init__</code></p>
|
<p>The CustomModel example allows one to override specific methods of the Model class (<code>train_step</code>, in that example).
When you call</p>
<pre><code>model = CustomModel(inputs, outputs)
</code></pre>
<p><code>inputs</code> is a tensor or list of input tensors to the custom model, and <code>outputs</code> is the output tensor(s).</p>
<p>You can see that in the example:</p>
<pre><code>inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
</code></pre>
<p>This defines a graph with an input layer with shape (32,) and a single output neuron (and no hidden layers).</p>
<p>Please clarify the usage of <code>other_args</code>. If these are tensors used by the model train / fit they should be part of the input / output arguments (which can be a list of tensors). If <code>other_args</code> are parameters that influence the Model custom <code>train_step</code> function then you need to define a new <code>__init__</code> method that expects them and call <code>super().__init__(inputs, outputs)</code>.</p>
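<p>A minimal sketch of that last case (the <code>alpha</code> parameter is a hypothetical stand-in for <code>other_args</code>):</p>
<pre><code>from tensorflow import keras

class CustomModel(keras.Model):
    def __init__(self, inputs, outputs, alpha=1.0, **kwargs):
        # forward the graph tensors to the functional Model constructor
        super().__init__(inputs, outputs, **kwargs)
        self.alpha = alpha  # extra parameter, available later e.g. inside train_step

inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs, alpha=0.5)
</code></pre>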
<p>The <code>keras.Model</code> class is concerned with the machinery of training and predicting models. The ML model itself is defined as a graph of keras.Layer(s).</p>
|
python|tensorflow|keras
| 0
|
4,342
| 67,704,886
|
pytorch: retain_graph=True error even though I add this
|
<p>I keep getting this error
<strong>"Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time."</strong></p>
<p>in the beginning, it was without retain_graph=True and then I got the error so I add it to the backward but I am still getting the same error.</p>
<p>i read similar questions but nothing helped.
would like to get help!</p>
<pre><code>trained_cnnfmnist_model=net
class CNNFMnist2(nn.Module):
def __init__(self, trained_cnnfmnist_model):
super(CNNFMnist2, self).__init__()
self.trained_cnnfmnist_model = trained_cnnfmnist_model
# now a few fully connected layers
self.fc1 = nn.Linear(64, 32)
self.fc2= nn.Linear(32,16)
self.fc3= nn.Linear(16,10)
def forward(self, x):
x = self.trained_cnnfmnist_model(x)
x = F.relu(self.fc1(x[0]))
print(x.shape)
# x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
trainset = torchvision.datasets.FashionMNIST(root='./data', train=True,
download=True, transform=transforms.ToTensor())
testset = torchvision.datasets.FashionMNIST(root='./data', train=False,
download=True, transform=transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False)
net2 = CNNFMnist2(trained_cnnfmnist_model).cuda()
optimizer = torch.optim.SGD(net2.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs = inputs.cuda() # -- For GPU
labels = labels.cuda() # -- For GPU
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
output = net2(inputs)
loss = criterion(outputs, labels)
loss.backward(retain_graph=True)
optimizer.step()
# print statistics
running_loss += loss.item()
if (i+1) % 2000 == 0:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
</code></pre>
|
<p>Basically you can only call <code>optimizer.step()</code> after these three lines.</p>
<pre><code> output = net2(inputs)
 loss = criterion(output, labels)
 loss.backward()
</code></pre>
<p>I don't know the rest of your code so I can only guess.</p>
<p>But note the name mismatch in your training loop: you compute <code>output = net2(inputs)</code> and then call <code>criterion(outputs, labels)</code> (with an extra <em>s</em>), so the loss is built from a stale <code>outputs</code> tensor left over from training the first model, and <code>backward()</code> walks that already-freed graph. Build the loss from the fresh <code>output</code> of <code>net2(inputs)</code> before calling <code>optimizer.step()</code>, and <code>retain_graph=True</code> is no longer needed.</p>
|
python|machine-learning|pytorch
| 0
|
4,343
| 41,429,956
|
Python: Create a model for reports (using pandas)
|
<p>This is more of a model design question with python.</p>
<p>I need to parse and extract data from several log files into a pandas DataFrames.
From these dataframes I need to create reports (as csv, excel and so on).</p>
<p>One way to design this is to create a file with 2 functions:
1. a function to extract data from the log file (regex is fine)
2. a function containing a pandas query, something like this:</p>
<pre><code>def get_top1000(group):
return group.sort_index(by='births', ascending=False)[:1000]
grouped = names.groupby(['year', 'sex'])
top1000 = grouped.apply(get_top1000)
</code></pre>
<p>Then, my class could get all these queries and produce the reports for this.
How this can be implemented with python correctly?</p>
|
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nlargest.html" rel="nofollow noreferrer"><code>SeriesGroupBy.nlargest</code></a>:</p>
<pre><code>df = names.groupby(['year', 'sex'])['births'].nlargest(1000)
</code></pre>
<p>Sample:</p>
<pre><code>names = pd.DataFrame({'year':[2000,2000,2000,2000,2000],
'sex':['M','M','F','F','F'],
'births':[7,8,9,1,2]})
print (names)
births sex year
0 7 M 2000
1 8 M 2000
2 9 F 2000
3 1 F 2000
4 2 F 2000
df = names.groupby(['year', 'sex'])['births'] \
          .nlargest(1) \
          .reset_index(level=2, drop=True) \
          .reset_index()
print (df)
year sex births
0 2000 F 9
1 2000 M 8
</code></pre>
<p>If in your data there are other columns, first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> with these columns:</p>
<pre><code>names = pd.DataFrame({'year':[2000,2000,2000,2000,2000],
'sex':['M','M','F','F','F'],
'births':[7,8,9,1,2],
'val':[3,2,4,5,6]})
print (names)
births sex val year
0 7 M 3 2000
1 8 M 2 2000
2 9 F 4 2000
3 1 F 5 2000
4 2 F 6 2000
df = names.set_index('val') \
.groupby(['year', 'sex'])['births'] \
.nlargest(1) \
.reset_index()
print (df)
year sex val births
0 2000 F 4 9
1 2000 M 2 8
</code></pre>
|
python|pandas
| 2
|
4,344
| 61,412,939
|
Python Tensorflow-GPU: "Successfully opened dynamic library cudart64_101.dll"
|
<p>I am currently just trying to get tensorflow-gpu to work on my PC. When I run my script, consisting only of:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow
print("Test")
</code></pre>
<p>... then I get the output:</p>
<pre><code>2020-04-24 18:16:53.660911: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Test
Process finished with exit code 0
</code></pre>
<p>Now, the code compiles just fine and exit code 0 should mean, that everything is alright. I am new to this entire thing, so forgive me if this is a stupid question, but is everything <em>really</em> alright? I am using the PyCharm IDE, and it prints the "2020-04-24 [...]" part in red, everything else is light-gray. Besides, it takes a few seconds to compile, even though all I am doing is importing tensorflow.</p>
<p>Red usually means error, and I find the compile time to be far too long. Is this normal?</p>
<p>And if not, how do I fix it?</p>
|
<p>I had the same problem; this is the only solution that worked for me:</p>
<ol>
<li><p>check the folder path <code>C:\Users\user\Anaconda3\Lib\site-packages</code> . if any folder begins with <code>~</code>, delete it.</p>
</li>
<li><p>in the anaconda prompt run the following commands :</p>
<pre class="lang-sh prettyprint-override"><code>conda remove tensorflow
</code></pre>
<pre class="lang-sh prettyprint-override"><code>pip install tensorflow
</code></pre>
<pre class="lang-sh prettyprint-override"><code>conda update --all
</code></pre>
</li>
</ol>
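<p>After reinstalling, a quick way to verify that TensorFlow actually sees the GPU is a check like this (a sketch, TF 2.x API):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

print(tf.__version__)
# an empty list here means TensorFlow is running on CPU only
print(tf.config.list_physical_devices('GPU'))
</code></pre>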
|
python|tensorflow
| 0
|
4,345
| 61,537,916
|
Map column birthdates in python pandas df to astrology signs
|
<p>I have a dataframe with a column that includes individuals' birthdays. I would like to map that column to the individuals' astrology sign using code I found (below). I am having trouble writing the code to create the variables.</p>
<p>My current dataframe looks like this</p>
<pre><code> birthdate answer YEAR MONTH-DAY
1970-03-31 5 1970 03-31
1970-05-25 9 1970 05-25
1970-06-05 3 1970 06-05
1970-08-28 2 1970 08-28
</code></pre>
<p>The code I found that creates a function to map the dates is available at this website: <a href="https://www.geeksforgeeks.org/program-display-astrological-sign-zodiac-sign-given-date-birth/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/program-display-astrological-sign-zodiac-sign-given-date-birth/</a></p>
<p>Any tips would be appreciated.</p>
|
<p>Adapt the function to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.month_name.html" rel="nofollow noreferrer"><code>Series.dt.month_name</code></a> with lowercase month strings:</p>
<pre><code>def zodiac_sign(day, month):
# checks month and date within the valid range
# of a specified zodiac
if month == 'december':
return 'Sagittarius' if (day < 22) else 'capricorn'
elif month == 'january':
return 'Capricorn' if (day < 20) else 'aquarius'
elif month == 'february':
return 'Aquarius' if (day < 19) else 'pisces'
elif month == 'march':
return 'Pisces' if (day < 21) else 'aries'
elif month == 'april':
return 'Aries' if (day < 20) else 'taurus'
elif month == 'may':
return 'Taurus' if (day < 21) else 'gemini'
elif month == 'june':
return 'Gemini' if (day < 21) else 'cancer'
elif month == 'july':
return 'Cancer' if (day < 23) else 'leo'
elif month == 'august':
return 'Leo' if (day < 23) else 'virgo'
elif month == 'september':
return 'Virgo' if (day < 23) else 'libra'
elif month == 'october':
return 'Libra' if (day < 23) else 'scorpio'
elif month == 'november':
return 'scorpio' if (day < 22) else 'sagittarius'
</code></pre>
<hr>
<pre><code>dates = pd.to_datetime(astrology['birthdate'])
y = dates.dt.year
now = pd.to_datetime('now').year
astrology = astrology.assign(month = dates.dt.month_name().str.lower(),
day = dates.dt.day,
year = y.mask(y > now, y - 100))
print (astrology)
birthdate answer YEAR MONTH-DAY month day year
0 1970-03-31 5 1970 03-31 march 31 1970
1 1970-05-25 9 1970 05-25 may 25 1970
2 1970-06-05 3 1970 06-05 june 5 1970
3 1970-08-28 2 1970 08-28 august 28 1970
</code></pre>
<hr>
<pre><code>astrology['sign'] = astrology.apply(lambda x: zodiac_sign(x['day'], x['month']), axis=1)
print (astrology)
birthdate answer YEAR MONTH-DAY month day year sign
0 1970-03-31 5 1970 03-31 march 31 1970 aries
1 1970-05-25 9 1970 05-25 may 25 1970 gemini
2 1970-06-05 3 1970 06-05 june 5 1970 Gemini
3 1970-08-28 2 1970 08-28 august 28 1970 virgo
</code></pre>
|
python|pandas
| 1
|
4,346
| 61,472,491
|
For Loop and The truth value of a Series is ambiguous
|
<p>Looked through the answers to similar queries here but still unsure. Code below produces:</p>
<pre><code> for i in range(len(df)):
if df[0]['SubconPartNumber1'].str.isdigit() == False :
df['SubconPartNumber1'] = df['SubconPartNumber1'].str.replace(',', '/', regex = True)
df['SubconPartNumber1'] = df['SubconPartNumber1'].str.replace(r"\(.*\)-", '/', regex = True)
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>In pandas you can <a href="https://stackoverflow.com/a/55557758/2901002">avoid loops</a> where possible. Your solution should be replaced by a <code>boolean mask</code>, using <code>~</code> for invert instead of <code>== False</code>, and passed to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>m = df['SubconPartNumber1'].str.isdigit()
df.loc[~m, 'SubconPartNumber1'] = df.loc[~m, 'SubconPartNumber1'].str.replace(',', '/', regex = True).str.replace(r"\(.*\)-", '/', regex = True)
</code></pre>
<p>But because numeric strings contain only digits, I think the mask is not necessary here. Also, the two patterns can be joined by <code>|</code> for <code>or</code>, and <code>regex=True</code> is the default parameter, so it can be omitted:</p>
<pre><code>df = pd.DataFrame({'SubconPartNumber1':['345','aaa,','(bbb)-ccc']})
print (df)
SubconPartNumber1
0 345
1 aaa,
2 (bbb)-ccc
df['SubconPartNumber1'] = df['SubconPartNumber1'].str.replace(r",|\(.*\)-", '/')
print (df)
SubconPartNumber1
0 345
1 aaa/
2 /ccc
</code></pre>
|
python|pandas|series|valueerror
| 2
|
4,347
| 61,233,759
|
Find value close to number in python array and index it
|
<p>This is the code I have so far:</p>
<pre><code>import numpy as np
#make amplitude and sample arrays
amplitude=[0,1,2,3, 5.5, 6,5,2,2, 4, 2,3,1,6.5,5,7,1,2,2,3,8,4,9,2,3,4,8,4,9,3]
#print(amplitude)
#split arrays up into a line for each sample
traceno=5 #number of traces in file
samplesno=6 #number of samples in each trace. This wont change.
amplitude_split=np.array(amplitude, dtype=np.int).reshape((traceno,samplesno))
print(amplitude_split)
#find max value of trace
max_amp=np.amax(amplitude_split,1)
print(max_amp)
#find index of max value
ind_max_amp=np.argmax(amplitude_split, axis=1, out=None)
#print(ind_max_amp)
#find 90% of max value of trace
amp_90=np.amax(amplitude_split,1)*0.9
print(amp_90)
</code></pre>
<p>I would like to find the value in each line of the array that is closest to the corresponding amp_90. I would also like to be able to obtain the index of this number. Please help!</p>
<p>n.b. I know this is easy to do by eye, but it is a test data set before I apply it to my real data!</p>
|
<p>IIUC, you could do the following:</p>
<pre><code># find the indices of the min absolute difference
indices = np.argmin(np.abs(amplitude_split - amp_90[:, None]), axis=1)
# get the values at those positions
result = amplitude_split[np.arange(amplitude_split.shape[0]), indices]  # one value per trace
print(result)
</code></pre>
|
python|arrays|numpy
| 1
|
4,348
| 61,239,125
|
Join Dataframes by column and create new columns by value Pandas Python
|
<p>I have 2 dataframes df1 and df2, I'm trying to merge then by column (product). </p>
<pre><code>df1
product name Exist
0 1 foo False
1 2 bar True
2 3 lorem False
3 4 ipsum False
.
.
df2
product date_search sold
0 1 2020-04-10 10
1 1 2020-04-11 15
2 1 2020-04-12 20
3 2 2020-04-10 8
4 2 2020-04-11 10
5 2 2020-04-12 30
6 3 2020-04-10 2
7 3 2020-04-11 5
8 3 2020-04-12 7
9 4 2020-04-10 4
10 4 2020-04-11 10
11 4 2020-04-12 15
.
.
</code></pre>
<p>I'd like to append new columns to the result dataframe, with the date values of df2 as column names and the sold values of df2 as the values. Can anybody help me? Like the following:</p>
<pre><code>df2
product Exists 2020-04-10 2020-04-11 2020-04-12
0 1 False 10 15 20
1 2 True 8 10 30
2 3 False 2 5 7
3 4 False 4 10 15
.
.
</code></pre>
<p>Thanks!</p>
|
<p>@Quang Hoang nailed it -- just do this:</p>
<pre><code>import pandas as pd
dict1 = {"product" : [1,2,3,4], "name" : ['foo','bar','lorem','ipsum'], "exists": ['false','true','false','false']}
dict2 = {"product" : [1,1,2,2,3,3,4,4], "date": ['2020-01-01','2020-01-02','2020-01-01','2020-01-02','2020-01-01','2020-01-02','2020-01-01','2020-01-02'], "quantity": [10,14,15,4,6,77,34,9]}
df1 = pd.DataFrame.from_dict(dict1)
df2 = pd.DataFrame.from_dict(dict2)
# df2.pivot(*df2) unpacks the column names, i.e. pivot('product', 'date', 'quantity')
df1.merge(df2.pivot(*df2), left_on='product', right_index=True, how='left')
</code></pre>
|
python|pandas|dataframe|join|merge
| 0
|
4,349
| 61,370,108
|
tf.data: Parallelize loading step
|
<p>I have a data input pipeline that has:</p>
<ul>
<li>input datapoints of types that are not castable to a <code>tf.Tensor</code> (dicts and whatnot)</li>
<li>preprocessing functions that could not understand tensorflow types and need to work with those datapoints; some of which do data augmentation on the fly</li>
</ul>
<p>I've been trying to fit this into a <code>tf.data</code> pipeline, and I'm stuck on running the preprocessing for multiple datapoints in parallel. So far I've tried this:</p>
<ul>
<li>use <code>Dataset.from_generator(gen)</code> and do the preprocessing in the generator; this works but it processes each datapoint sequentially, no matter what arrangement of <code>prefetch</code> and fake <code>map</code> calls I patch on it. Is it impossible to prefetch in parallel?</li>
<li>encapsulate the preprocessing in a <code>tf.py_function</code> so I could <code>map</code> it in parallel over my Dataset, but
<ol>
<li>this requires some pretty ugly (de)serialization to fit exotic
types into string tensors,</li>
<li>apparently the execution of the <code>py_function</code> would be handed over to the (single-process) python interpreter, so I'd be stuck with the python GIL which would not help me much</li>
</ol></li>
<li>I saw that you could do some tricks with <code>interleave</code> but haven't found any which does not have issues from the first two ideas.</li>
</ul>
<p>Am I missing anything here? Am I forced to either modify my preprocessing so that it can run in a graph or is there a way to multiprocess it?</p>
<p>Our previous way of doing this was using keras.Sequence which worked well but there's just too many people pushing the upgrade to the <code>tf.data</code> API. <em>(hell, even trying the keras.Sequence with tf 2.2 yields <code>WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.</code>)</em></p>
<p><em>Note: I'm using tf 2.2rc3</em></p>
|
<p>You can try to add <code>batch()</code> before <code>map()</code> in your input pipeline.</p>
<p>It is usually meant to reduce the overhead of the map function call for small map functions, see here:
<a href="https://www.tensorflow.org/guide/data_performance#vectorizing_mapping" rel="nofollow noreferrer">https://www.tensorflow.org/guide/data_performance#vectorizing_mapping</a></p>
<p>However you can also use it to get a batch of input to your map <code>py_function</code> and use python <code>multiprocessing</code> there to speed things up.</p>
<p>This way you can get around the GIL limitations which makes <code>num_parallel_calls</code> in <code>tf.data.map()</code> useless for <code>py_function</code> map functions.</p>
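<p>A minimal sketch of that idea (the <code>preprocess_one</code> function, the worker count and the toy dataset are placeholders for your own pipeline; TF 2.2-era API):</p>
<pre><code>import multiprocessing as mp
import numpy as np
import tensorflow as tf

def preprocess_one(x):
    # placeholder for the exotic, non-TF preprocessing of a single datapoint
    return x * 2.0

pool = mp.Pool(4)  # assumed worker count

def preprocess_batch(batch):
    # batch arrives as an eager tensor holding one whole batch
    out = pool.map(preprocess_one, list(batch.numpy()))
    return np.stack(out).astype(np.float32)

ds = tf.data.Dataset.range(1000).map(lambda x: tf.cast(x, tf.float32))
ds = ds.batch(32)  # batch *before* map, so each py_function call handles a full batch
ds = ds.map(lambda b: tf.py_function(preprocess_batch, [b], tf.float32))
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
</code></pre>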
|
python|tensorflow|tensorflow2.0|tensorflow-datasets
| 2
|
4,350
| 61,211,861
|
Why my autoencoder model is not learning?
|
<p>I'm trying to solve a captcha dataset using an autoencoder. The <a href="https://www.kaggle.com/fournierp/captcha-version-2-images" rel="nofollow noreferrer">dataset</a> consists of RGB images.</p>
<p>I converted the RGB images to one channel, i.e.:</p>
<p><a href="https://i.stack.imgur.com/1DXbH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1DXbH.png" alt="enter image description here"></a></p>
<p>(The shape of the image is (48, 200)).</p>
<p>So what I did next is to take the text of the captcha (in our case "emwpn") and create another image of the same shape (48, 200) containing this text, i.e.:</p>
<p><a href="https://i.stack.imgur.com/2bdtP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2bdtP.png" alt="enter image description here"></a></p>
<p>And what I tried is to feed the encoder of the autoencoder with captchas, and feed the decoder with images I created. </p>
<p>I didn't know if this method will be good, but I didn't expect it not to learn anything. When I tried to predict the test dataset, all I got was purple images, i.e.:</p>
<pre><code>capchas_array_test_pred = conv_ae.predict(capchas_array_test)
plt.imshow(capchas_array_test_pred[1])
</code></pre>
<p><a href="https://i.stack.imgur.com/Q87jj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q87jj.png" alt="enter image description here"></a> </p>
<p>This means that the autoencoder predicts 0 for all the pixels of all the images.</p>
<p>This is the code for the conv autoencoder:</p>
<pre><code>def rounded_accuracy(y_true, y_pred):
return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))
conv_encoder = keras.models.Sequential([
keras.layers.Reshape([48, 200, 1], input_shape=[48, 200]),
keras.layers.Conv2D(16, kernel_size=5, padding="SAME"),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Conv2D(32, kernel_size=5, padding="SAME", activation="selu"),
keras.layers.Conv2D(64, kernel_size=5, padding="SAME", activation="selu"),
keras.layers.AvgPool2D(pool_size=2),
])
conv_decoder = keras.models.Sequential([
keras.layers.Conv2DTranspose(32, kernel_size=5, strides=2, padding="SAME", activation="selu",
input_shape=[6, 25, 64]),
keras.layers.Conv2DTranspose(16, kernel_size=5, strides=1, padding="SAME", activation="selu"),
keras.layers.Conv2DTranspose(1, kernel_size=5, strides=1, padding="SAME", activation="sigmoid"),
keras.layers.Reshape([48, 200])
])
conv_ae = keras.models.Sequential([conv_encoder, conv_decoder])
conv_ae.compile(loss="mse", optimizer=keras.optimizers.Adam(lr=1e-1), metrics=[rounded_accuracy])
history = conv_ae.fit(capchas_array_train, capchas_array_rewritten_train, epochs=20,
validation_data=(capchas_array_valid, capchas_array_rewritten_valid))
</code></pre>
<p>The model didn't learn anything:</p>
<pre><code>Epoch 2/20
24/24 [==============================] - 1s 53ms/step - loss: 60879.9883 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
Epoch 3/20
24/24 [==============================] - 1s 53ms/step - loss: 60878.5781 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
Epoch 4/20
24/24 [==============================] - 1s 53ms/step - loss: 60879.2656 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
Epoch 5/20
24/24 [==============================] - 1s 53ms/step - loss: 60876.4648 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
Epoch 6/20
24/24 [==============================] - 1s 53ms/step - loss: 60878.4883 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
Epoch 7/20
24/24 [==============================] - 1s 53ms/step - loss: 60880.8242 - rounded_accuracy: 0.0637 - val_loss: 60930.7344 - val_rounded_accuracy: 0.0635
</code></pre>
<p>I tried to check what happens if I feed the encoder and the decoder with the same images:</p>
<pre><code>conv_ae.compile(loss="mse", optimizer=keras.optimizers.Adam(lr=1e-1), metrics=[rounded_accuracy])
history = conv_ae.fit(capchas_array_train, capchas_array_train, epochs=20,
validation_data=(capchas_array_valid, capchas_array_valid))
</code></pre>
<p>And again I got purple images:</p>
<p><a href="https://i.stack.imgur.com/Q87jj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q87jj.png" alt="enter image description here"></a> </p>
<p>P.s. If you interested, this is the notebook:
<a href="https://colab.research.google.com/drive/1gA1XN1NOZKylGDhVu4PKXWhrPU4q9Ady" rel="nofollow noreferrer">https://colab.research.google.com/drive/1gA1XN1NOZKylGDhVu4PKXWhrPU4q9Ady</a></p>
<hr>
<p>EDIT-</p>
<p>This is the preprocessing I did to the images:</p>
<pre><code>1. Convert RGB image to one channel.
2. Normalize the image from value from 0 to 255 for each pixel, to 0 to 1.
3. Resize the (50, 200) image to (48, 200) - for simpler pooling in the autoencoder (48 can be divided by 2 more times, and stay integer, than 50)
</code></pre>
<p>This is the function for the preprocessing 1,2 steps:</p>
<pre><code>def rgb2gray(rgb):
r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
gray = (0.2989 * r + 0.5870 * g + 0.1140 * b)
for x in range(rgb.shape[1]):
for y in range(rgb.shape[0]):
if gray[y][x]>128:
gray[y][x] = 1.0
else:
gray[y][x] = 0.0
return gray
</code></pre>
|
<ol>
<li>Your architecture doesn't make sense. If you want to create an autoencoder you need to understand that you're going to reverse the process after encoding. That means that if you have three convolutional layers with filters in this order: 64, 32, 16, you should make the next group of convolutional layers do the inverse: 16, 32, 64. That's the reason why your algorithm is not learning; a mirrored decoder is sketched below this list.</li>
<li>You won't get the result that you expected. You will get a similar structure to that kind of captcha, but you won't get clear text output. If you want that, you need another kind of algorithm (one that allows you to do character segmentation).</li>
</ol>
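<p>For point 1, a minimal sketch of a decoder that mirrors the encoder from the question (filter counts reversed; the <code>input_shape=[24, 100, 64]</code> assumes that encoder's single 2x pooling applied to a (48, 200, 1) input):</p>
<pre><code>from tensorflow import keras

conv_decoder = keras.models.Sequential([
    # undo the AvgPool2D with one stride-2 transposed convolution
    keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="SAME",
                                 activation="selu", input_shape=[24, 100, 64]),
    keras.layers.Conv2DTranspose(32, kernel_size=5, strides=1, padding="SAME", activation="selu"),
    keras.layers.Conv2DTranspose(16, kernel_size=5, strides=1, padding="SAME", activation="selu"),
    keras.layers.Conv2DTranspose(1, kernel_size=5, strides=1, padding="SAME", activation="sigmoid"),
    keras.layers.Reshape([48, 200]),
])
</code></pre>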
|
keras|conv-neural-network|tensorflow2.0|autoencoder
| 1
|
4,351
| 68,505,527
|
Pandas Sales Analysis Help - ValueError: could not convert string to float: ''
|
<p>I'm currently running a sales analysis on an excel file with roughly 500 transactions. I have a category called "Sale Price" which should be read in as a float. Pandas read in the dtype as an object, and when trying to change the dtype to a float using:</p>
<pre><code>df['Sale Price'].fillna(0).astype(float)
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: could not convert string to float: ''
</code></pre>
<p>I've tried mixing in various command combinations such as:</p>
<pre><code>df.loc[pd.to_numeric(df['Sale Price'], errors='coerce').isnull()]
</code></pre>
<p>and:</p>
<pre><code>pd.to_numeric(df['Sale Price']).astype(int)
</code></pre>
<p>in order to convert the column to a float, but now I'm thinking the issue is in how the data is being read in. I used the basic:</p>
<pre><code>df = pd.read_excel('...')
</code></pre>
<p>Hopefully someone can help clarify where the issue is coming from as I've been stuck for awhile. Thank you!</p>
|
<p>You could replace your empty strings with 0 before changing it to float:</p>
<pre><code>df["Sale Price"] = df["Sale Price"].astype(str).str.strip().replace("",0).astype(float)
</code></pre>
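<p>If the column may also contain other non-numeric junk, <code>pd.to_numeric</code> with <code>errors="coerce"</code> (which the question already experimented with) combined with <code>fillna</code> is another route (a sketch):</p>
<pre><code>df["Sale Price"] = pd.to_numeric(df["Sale Price"], errors="coerce").fillna(0)
</code></pre>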
|
python|pandas
| 1
|
4,352
| 53,300,337
|
variable_scope does not get reused when using default scope name
|
<p>I have a question regarding sub-scopes when reusing variables. This</p>
<pre><code>import tensorflow as tf
def make_bar():
with tf.variable_scope('bar'):
tf.get_variable('baz', ())
with tf.variable_scope('foo') as scope:
make_bar()
scope.reuse_variables()
make_bar()
</code></pre>
<p>works perfectly fine, only a single variable <code>foo/bar/baz</code> is created.</p>
<p>However, if I change <code>make_bar</code> to</p>
<pre><code>def make_bar(scope=None):
with tf.variable_scope(scope, 'bar'):
tf.get_variable('baz', ())
</code></pre>
<p>the code now fails with a </p>
<pre><code>ValueError: Variable foo/bar_1/baz does not exist
</code></pre>
<p>Question: why does variable scope reuse fail when using <code>default name</code>s? If it is on purpose, what is the rationale behind this choice?</p>
<p><strong>EDIT</strong></p>
<p>Some precisions on the <code>default_name</code> argument of <code>tf.variable_scope</code>. <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope#__init__" rel="nofollow noreferrer">From the documentation,</a></p>
<blockquote>
<ul>
<li><code>default_name</code>: The default name to use if the <code>name_or_scope</code> argument is <code>None</code>, this name will be uniquified. If <code>name_or_scope</code> is provided it won't be used and therefore it is not required and can be <code>None</code>.</li>
</ul>
</blockquote>
<p>So as its name implies, it is a way to provide a default scope name.</p>
<p>In the first version of <code>make_bar</code>, the scope name is forced to be <code>bar</code> -- the function has no parameter to change it.</p>
<p>In the second version of <code>make_bar</code>, I enhance this function to make it parameterizable. So <code>bar</code> is still the default scope name (provided this time as the <code>default_name</code> argument of <code>tf.variable_scope</code>), but this time the caller has the possibility to change it by setting the default argument <code>scope</code> of <code>make_bar</code> to anything other than <code>None</code>.</p>
<p>When this second version of <code>make_bar</code> is used without an argument, it should, I think, fall back to the behavior of the first version -- which it does not.</p>
<p>Note that in my example, <code>bar</code> is intended to be a subscope of <code>foo</code>. The variable to be reused is <em>meant</em> to be <code>foo/bar/baz</code>.</p>
|
<p>You're not actually using the scope 'foo' in your example. You need to pass the parameter to <code>tf.variable_scope('foo', 'bar')</code> or <code>tf.variable_scope(scope, 'bar')</code>.
You're calling the method <code>make_bar</code> without the parameter in either case, which means in your first example <code>name_or_scope='bar'</code>, and in the second example <code>name_or_scope=scope</code> (with value <code>None</code>) and <code>default_name='bar'</code>.</p>
<p>this is probably what you want:</p>
<pre><code>import tensorflow as tf
def make_bar(scope=None):
with tf.variable_scope(scope, 'bar'):
tf.get_variable('baz', ())
with tf.variable_scope('foo') as scope:
make_bar(scope)
scope.reuse_variables()
make_bar(scope)
</code></pre>
<p>I would actually advise against using default parameters because they reduce readability, like in your example. When is a <code>None</code> scope ever the answer you want? It would make more sense to test for it explicitly, maybe like this:</p>
<pre><code>import tensorflow as tf
def make_bar(scope=None):
if scope is None:
scope = 'default_scope'
with tf.variable_scope(scope, 'bar'):
tf.get_variable('baz', ())
with tf.variable_scope('foo') as scope:
make_bar(scope) # use foo scope
scope.reuse_variables()
make_bar() # use 'default_scope'
</code></pre>
<p>But this makes the code less readable and more likely to lead to bugs.</p>
|
python|tensorflow
| 4
|
4,353
| 52,922,647
|
Rotate covariance matrix
|
<p>I am generating 3D gaussian point clouds. I'm using the scipy.stats.multivariate.normal() function, which takes a mean value and a covariance matrix as arguments. It can then provide random samples using the rvs() method.</p>
<p>Next I want to perform a rotation of the cloud in 3D, but rather than rotate each point I would like to rotate the random variable parameters, and then regenerate the point cloud. </p>
<p>I'm really struggling to figure this out. After an rotation, the axes of variance will no longer align with the coordinate system. So I guess what what I want is to express variance along three arbitrary orthogonal axes.</p>
<p>Thank you for any help.</p>
<p>Final edit: Thank you, I got what I needed. Below is an example</p>
<pre><code>cov = np.array([
[ 3.89801357, 0.38668784, 1.47657614],
[ 0.38668784, 0.87396495, 1.43575688],
[ 1.47657614, 1.43575688, 15.09192414]])
rotation_matrix = np.array([
[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]]) # 90 degrees around y axis
new_cov = rotation_matrix @ cov @ rotation_matrix.T # based on Warren and Paul's comments
rv = scipy.stats.multivariate_normal(mean=mean,cov=new_cov)
</code></pre>
<p>If you get an error</p>
<pre><code>ValueError: the input matrix must be positive semidefinite
</code></pre>
<p><a href="https://stackoverflow.com/questions/41515522/numpy-positive-semi-definite-warning">This</a> page I found useful</p>
|
<p>I edited the question with the answer, but again it is</p>
<pre><code>new_cov = rotation_matrix @ cov @ rotation_matrix.T
</code></pre>
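<p>A quick sanity check of why this is the right transform (a sketch; the covariance and rotation are the rounded values from the question, and the mean is an assumed zero vector): rotating the sampled points directly and sampling with the rotated covariance give matching covariances up to sampling noise.</p>
<pre><code>import numpy as np
import scipy.stats

cov = np.array([[ 3.898,  0.387,  1.477],
                [ 0.387,  0.874,  1.436],
                [ 1.477,  1.436, 15.092]])   # covariance from the question (rounded)
R = np.array([[ 0., 0., 1.],
              [ 0., 1., 0.],
              [-1., 0., 0.]])                # 90 degrees around the y axis
mean = np.zeros(3)                           # assumed mean for the demo

pts = scipy.stats.multivariate_normal(mean, cov).rvs(size=200000, random_state=1)
rotated_pts = pts @ R.T                      # rotate every sampled point
new_cov = R @ cov @ R.T                      # rotate the parameters instead
print(np.round(np.cov(rotated_pts, rowvar=False), 2))
print(np.round(new_cov, 2))                  # the two should agree up to sampling noise
</code></pre>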
|
numpy|scipy|linear-algebra|covariance|covariance-matrix
| 2
|
4,354
| 63,342,185
|
Crosstab Output in Python
|
<p>Firstly I am pretty new to Python and I'm trying to output the view from a crosstab:</p>
<pre><code>import pandas as pd
Data_18['age_50_flag'] = (Data_18['age'] > 50).astype(int)
pd.crosstab(Data_18.age_50_flag, Data_18.age)
</code></pre>
<p>The output however, is being buried:</p>
<pre><code>age 16 17 18 19 20 21 22 23 24 25 ... 93 94 95 96 97 98 99 101 998 999
age_50_flag
0 190 212 224 236 241 256 285 315 354 422 ... 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 ... 29 36 26 11 10 7 1 1 96 13
</code></pre>
<p>Is there a way that I can show all of the output? Any suggestions, please?</p>
<p>Thanks!</p>
|
<p>Assume you have person and age pairs of data; then find the individuals whose age is greater than 50 and display the results in a crosstab:</p>
<pre><code>age=[16, 17, 20, 24,96]
name=['A','B','C','D','E']
df=pd.DataFrame({'name':name,'age':age})
df['greater50']=df['age'].apply(lambda person: 1 if person>50 else 0)
cross_tab=pd.crosstab(df['name'],df['age'],df['greater50'],aggfunc={'sum'})
print(cross_tab)
</code></pre>
<p>output:</p>
<pre><code>sum
age 16 17 20 24 96
name
A 0.0 NaN NaN NaN NaN
B NaN 0.0 NaN NaN NaN
C NaN NaN 0.0 NaN NaN
D NaN NaN NaN 0.0 NaN
E NaN NaN NaN NaN 1.0
</code></pre>
<p>To visualize the result: <code>sns.heatmap(cross_tab)</code> (requires <code>import seaborn as sns</code>).</p>
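<p>If the goal is simply to see every column of the original crosstab instead of the truncated <code>...</code> view, widening the pandas display options is another route (a sketch, reusing <code>Data_18</code> from the question):</p>
<pre><code>import pandas as pd

pd.set_option('display.max_columns', None)  # do not collapse columns into '...'
print(pd.crosstab(Data_18.age_50_flag, Data_18.age))
</code></pre>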
|
python|pandas|crosstab
| 0
|
4,355
| 63,532,399
|
Incompatible Shapes: Tensorflow/Keras Sequential LSTM with Autoencoder
|
<p>I am trying to set up an LSTM Autoencoder/Decoder for time series data and continually get <code>Incompatible shapes</code> error when trying to train the model. Following steps and using toy data from <a href="https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352" rel="nofollow noreferrer">this example</a>. See below code and results. Note Tensorflow version 2.3.0.</p>
<p>Create data. Put data into sequences to temporalize for LSTM in the form of (samples, timestamps, features).</p>
<pre><code>timeseries = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
[0.1**3, 0.2**3, 0.3**3, 0.4**3, 0.5**3, 0.6**3, 0.7**3, 0.8**3, 0.9**3]]).transpose()
timeseries_df = pd.DataFrame(timeseries)
def create_sequenced_dataset(X, time_steps=10):
Xs, ys = [], [] # start empty list
for i in range(len(X) - time_steps): # loop within range of data frame minus the time steps
v = X.iloc[i:(i + time_steps)].values # data from i to end of the time step
Xs.append(v)
ys.append(X.iloc[i + time_steps].values)
return np.array(Xs), np.array(ys) # convert lists into numpy arrays and return
X, y = create_sequenced_dataset(timeseries_df, time_steps=3)
timesteps = X.shape[1]
n_features = X.shape[2]
</code></pre>
<p>Create the LSTM model with autoencoder/decoder given by the Repeat Vector and attempt to train the model.</p>
<pre><code>model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features), return_sequences=False))
model.add(RepeatVector(timesteps))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
model.compile(optimizer='adam', loss='mse')
model.summary()
model.fit(X, y, epochs=10, batch_size=4)
</code></pre>
<p>Consistently get error:</p>
<pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [4,3,2] vs. [4,2]
[[node gradient_tape/mean_squared_error/BroadcastGradientArgs (defined at <ipython-input-9-56896428cea9>:1) ]] [Op:__inference_train_function_10833]
</code></pre>
<p>X looks like:</p>
<pre><code>array([[[0.1 , 0.001],
[0.2 , 0.008],
[0.3 , 0.027]],
[[0.2 , 0.008],
[0.3 , 0.027],
[0.4 , 0.064]],
[[0.3 , 0.027],
[0.4 , 0.064],
[0.5 , 0.125]],
[[0.4 , 0.064],
[0.5 , 0.125],
[0.6 , 0.216]],
[[0.5 , 0.125],
[0.6 , 0.216],
[0.7 , 0.343]],
[[0.6 , 0.216],
[0.7 , 0.343],
[0.8 , 0.512]]])
</code></pre>
<p>y looks like:</p>
<pre><code>array([[0.4 , 0.064],
[0.5 , 0.125],
[0.6 , 0.216],
[0.7 , 0.343],
[0.8 , 0.512],
[0.9 , 0.729]])
</code></pre>
|
<p>As the message says, it's a shape issue with the arrays you are passing to the model in <code>fit</code>.</p>
<p>From the data you have given, X has shape <code>(6, 3, 2)</code> while y has shape <code>(6, 2)</code>, which is incompatible with a model whose output is a full sequence.</p>
<p>Below is the modified code from your example, with <code>X</code> and <code>Y</code> both having shape <code>(6, 3, 2)</code>; for a plain reconstruction autoencoder, <code>Y</code> can simply be <code>X</code> itself, i.e. <code>model.fit(X, X, ...)</code>.</p>
<pre><code>model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features), return_sequences=False))
model.add(RepeatVector(timesteps))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
model.compile(optimizer='adam', loss='mse')
model.summary()
model.fit(X,Y, epochs=10, batch_size=4)
</code></pre>
<p>Result:</p>
<pre><code>Epoch 1/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0069
Epoch 2/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0065
Epoch 3/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0065
Epoch 4/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0062
Epoch 5/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0059
Epoch 6/10
2/2 [==============================] - 0s 4ms/step - loss: 0.0053
Epoch 7/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0048
Epoch 8/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0046
Epoch 9/10
2/2 [==============================] - 0s 5ms/step - loss: 0.0044
Epoch 10/10
2/2 [==============================] - 0s 6ms/step - loss: 0.0043
<tensorflow.python.keras.callbacks.History at 0x7ff352f9ccf8>
</code></pre>
|
python|tensorflow|keras|deep-learning|lstm
| 0
|
4,356
| 63,381,217
|
Make lists out of column values from one column, filtering on values from another
|
<p>I have a pandas dataframe like :</p>
<pre><code> Fields Player bio Team
0 Name 1 2
1 city 2 2
2 state 1 1
3 stage 0 0
4 effec 2 2
5 points 1 2
</code></pre>
<p>I would like to make lists named after the variables, containing values from the 'fields' column where the other variable values are 2, excluding the 'field' variable.</p>
<p>so the output would be 2 lists</p>
<pre><code> player_bio = ['city', 'effec']
team = ['Name', 'city', 'effec', 'points']
</code></pre>
<p>The actual data has a long list of variables, so I have list such that:</p>
<pre><code> selected_fields = ['Player bio', 'team']
</code></pre>
<p>I am hoping to loop on this list.</p>
<p>I know we should post our starting attempts, but I haven't got an idea where to start.</p>
|
<p>You can do it like this:</p>
<pre><code>selected_fields = ['Player bio', 'Team']
s = (df==2).T.dot(','+df['Fields']).str.strip(',')\
.str.split(',').reindex(selected_fields)
s
</code></pre>
<p>Output:</p>
<pre><code>Player bio [city, effec]
Team [Name, city, effec, points]
dtype: object
</code></pre>
<hr />
<p>Now to see only the 'Player bio' list try this:</p>
<pre><code>s['Player bio']
</code></pre>
<p>Output</p>
<pre><code>['city', 'effec']
</code></pre>
<p>Or</p>
<pre><code>s['Team']
</code></pre>
<p>Output:</p>
<pre><code>['Name', 'city', 'effec', 'points']
</code></pre>
<p><em><strong>Details:</strong></em></p>
<p>Create a boolean matrix then transpose to perform dot-matrix computation with Fields column. Next, use string manipulations to strip extra comma and split to create a list of fields. Outputs a pd.Series with index of 'selected_fields' and values a list.</p>
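<p>A more literal (if less compact) route is a plain comprehension over the selected columns, sketched here with the frame from the question:</p>
<pre><code>selected_fields = ['Player bio', 'Team']
lists = {col: df.loc[df[col] == 2, 'Fields'].tolist() for col in selected_fields}
# {'Player bio': ['city', 'effec'], 'Team': ['Name', 'city', 'effec', 'points']}
</code></pre>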
|
python|pandas
| 3
|
4,357
| 63,682,850
|
Can't reference column by using predefined parameter as part of string
|
<p>I have a dataset where I would like to reference my column by using a predefined parameter as part of the string. The reason for this is that the columns I want to keep will change depending on the time of the year and the year.</p>
<p>My parameter are:</p>
<pre><code>year = '20'
</code></pre>
<p>This is working fine and gives me the desired result:</p>
<pre><code>df['Q1 FY20'] = df['Q1 FY20'].astype('int32')
</code></pre>
<p>But when I try to replace the "20" in my string with my parameter, I get KeyError: 'Q1 FY20':</p>
<pre><code>df['Q1 FY' + year] = df['Q1 FY' + year].astype('int32')
</code></pre>
<p>I don't really get this, as I have checked that:</p>
<pre><code>type('Q1 FY20') == type('Q1 FY' + year)
'Q1 FY20' == 'Q1 FY' + year
</code></pre>
<p>... and they are both true. What am I doing wrong?</p>
<p>Here is the complete error message:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2896 try:
-> 2897 return self._engine.get_loc(key)
2898 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Q1 FY20'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-474-3a24ee57971a> in <module>
----> 1 df['Q1 FY' + year] = df['Q1 FY' + year].astype('int32')
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/frame.py in __getitem__(self, key)
2993 if self.columns.nlevels > 1:
2994 return self._getitem_multilevel(key)
-> 2995 indexer = self.columns.get_loc(key)
2996 if is_integer(indexer):
2997 indexer = [indexer]
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2897 return self._engine.get_loc(key)
2898 except KeyError:
-> 2899 return self._engine.get_loc(self._maybe_cast_indexer(key))
2900 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2901 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Q1 FY20'
</code></pre>
|
<p>I found my mistake: I was trying to test the code in an instance before I did the required cleaning of the data.</p>
<p>When I replaced the actual code with my parameter it worked. I do however still not understand why I didn't get any KeyError when I was writing out the full string.</p>
<p>Thanks a lot everybody!</p>
|
python|pandas
| 0
|
4,358
| 63,506,954
|
Why is this dictionary comprehension so slow? Please suggest way to speed it up
|
<p>Hi, please help me either: speed up this dictionary comprehension; offer a better way to do it; or help me gain a higher understanding of why it is so slow internally (for example, is the calculation slowing down as the dictionary grows in memory size?). I'm sure there must be a quicker way without learning some C!</p>
<p><code>classes = {i : [1 if x in df['column'].str.split("|")[i] else 0 for x in df['column']] for i in df.index}</code></p>
<p>with the output:
<code>{1:[0,1,0...0],......, 4000:[0,1,1...0]}</code></p>
<p>from a df like this:</p>
<pre><code>data_ = {'drugbank_id': ['DB06605', 'DB06606', 'DB06607', 'DB06608', 'DB06609'],
'drug-interactions': ['DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
'DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
'DB06606|DB06607|DB06608|DB06609',
'DB06606|DB06607',
'DB06608']
}
pd.DataFrame(data = data_ , index=range(0,5) )
</code></pre>
<p>I am performing it on a df with 4000 rows; the column df['column'] contains a string of IDs separated by |. The number of IDs in each row that needs splitting varies from 1 to 1000, and this is done for all 4000 indexes. I tested it on the head of the df and it seemed quick enough, but now the comprehension has been running for 24hrs. So maybe it is just the sheer size of the job, but I feel like I could speed it up, and at this point I want to stop it and re-engineer. However, I am scared that will set me back without much increase in speed, so before I do that I wanted to get some thoughts, ideas and suggestions.</p>
<p>Beyond 4000x4000 size I suspect that using the Series and Index objects is another problem and that I would be better off using lists, but given the size of the task I am not sure how much speed that will gain. Maybe I am better off using some other method such as pd.apply(df, f(write line by line to json)). I am not sure - any help and education appreciated, thanks.</p>
|
<p>Here is one approach:</p>
<pre><code>import pandas as pd
# create data frame
df = pd.DataFrame({'idx': [1, 2, 3, 4], 'col': ['1|2', '1|2|3', '2|3', '1|4']})
# split on '|' to convert string to list
df['col'] = df['col'].str.split('|')
# explode to get one row for each list element
df = df.explode('col')
# create dummy ID (this will become True in the final result)
df['dummy'] = 1
# use pivot to create dense matrix
df = (df.pivot(index='idx', columns='col', values='dummy')
.fillna(0)
.astype(int))
# convert each row to a list
df['test'] = df.apply(lambda x: x.to_list(), axis=1)
print(df)
col 1 2 3 4 test
idx
1 1 1 0 0 [1, 1, 0, 0]
2 1 1 1 0 [1, 1, 1, 0]
3 0 1 1 0 [0, 1, 1, 0]
4 1 0 0 1 [1, 0, 0, 1]
</code></pre>
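<p>With the sample frame from the question, pandas' built-in <code>str.get_dummies</code> does the split-and-indicator step in one call; here is a sketch (the resulting columns are the individual IDs, one 0/1 flag per ID, much like the pivot above):</p>
<pre><code>import pandas as pd

data_ = {'drugbank_id': ['DB06605', 'DB06606', 'DB06607', 'DB06608', 'DB06609'],
         'drug-interactions': ['DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
                               'DB06605|DB06695|DB01254|DB01609|DB01586|DB0212',
                               'DB06606|DB06607|DB06608|DB06609',
                               'DB06606|DB06607',
                               'DB06608']}
df = pd.DataFrame(data_)

# one indicator column per ID that appears anywhere in 'drug-interactions'
dummies = df['drug-interactions'].str.get_dummies(sep='|')

# row index -> list of 0/1 flags, similar to the dictionary built in the question
classes = {i: row.tolist() for i, row in dummies.iterrows()}
</code></pre>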
|
python|pandas|list|list-comprehension|dictionary-comprehension
| 3
|
4,359
| 21,769,162
|
Running a groupby on a pivot table with Pandas
|
<p>I have a pivot table that looks like this:</p>
<pre><code>In [41]: counts
Out[41]:
SourceColumnID 3029903181 3029903182 3029903183 3029903184 ResponseCount
ColID QuestionID RowID
3029903193 316923119 3029903189 773 788 778 803 3142
3029903194 316923119 3029903189 766 799 782 773 3120
[2 rows x 5 columns]
</code></pre>
<p>and I'm trying to figure out how I can groupby RowID so that I can get total counts for each column for each RowID (in this case it would just sum up all of them since the 2 are in the same rowid).</p>
<p>This is the pivot tables index:</p>
<pre><code>In [42]: counts.index
Out[42]:
MultiIndex(levels=[[3029903193, 3029903194], [316923119], [3029903189]],
labels=[[0, 1], [0, 0], [0, 0]],
names=[u'ColID', u'QuestionID', u'RowID'])
</code></pre>
|
<p>You'll want to groupby <code>'RowID'</code>. Since it's a level on the MultiIndex you pass <code>'RowID'</code> to the <code>level</code> keyword.</p>
<pre><code>In [5]: df.groupby(level='RowID').sum()
Out[5]:
3029903181 3029903182 3029903183 3029903184 ResponseCount
RowID
3029903189 1539 1587 1560 1576 6262
</code></pre>
|
python|pandas|pivot-table
| 2
|
4,360
| 24,922,315
|
Merging pandas dataframe based on relationship in multiple columns
|
<p>Let's say you have a DataFrame of region (start, end) coordinates and another DataFrame of positions which may or may not fall within a given region. For example:</p>
<pre><code>region = pd.DataFrame({'chromosome': [1, 1, 1, 1, 2, 2, 2, 2], 'start': [1000, 2000, 3000, 4000, 1000, 2000, 3000, 4000], 'end': [2000, 3000, 4000, 5000, 2000, 3000, 4000, 5000]})
position = pd.DataFrame({'chromosome': [1, 2, 1, 3, 2, 1, 1], 'BP': [1500, 1100, 10000, 2200, 3300, 400, 5000]})
print region
print position
chromosome end start
0 1 2000 1000
1 1 3000 2000
2 1 4000 3000
3 1 5000 4000
4 2 2000 1000
5 2 3000 2000
6 2 4000 3000
7 2 5000 4000
BP chromosome
0 1500 1
1 1100 2
2 10000 1
3 2200 3
4 3300 2
5 400 1
6 5000 1
</code></pre>
<p>A position falls within a region if:</p>
<pre><code>position['BP'] >= region['start'] &
position['BP'] <= region['end'] &
position['chromosome'] == region['chromosome']
</code></pre>
<p>Each position is guaranteed to fall within a maximum of one region although it might not fall in any.</p>
<p>What is the best way to merge these two dataframe such that it appends additional columns to position with the region it falls in if it falls in any region. Giving in this case roughly the following output:</p>
<pre><code> BP chromosome start end
0 1500 1 1000 2000
1 1100 2 1000 2000
2 10000 1 NA NA
3 2200 3 NA NA
4 3300 2 3000 4000
5 400 1 NA NA
6 5000 1 4000 5000
</code></pre>
<p>One approach is to write a function to compute the relationship I want and then to use the DataFrame.apply method as follows:</p>
<pre><code>def within(pos, regs):
istrue = (pos.loc['chromosome'] == regs['chromosome']) & (pos.loc['BP'] >= regs['start']) & (pos.loc['BP'] <= regs['end'])
if istrue.any():
ind = regs.index[istrue].values[0]
return(regs.loc[ind ,['start', 'end']])
else:
return(pd.Series([None, None], index=['start', 'end']))
position[['start', 'end']] = position.apply(lambda x: within(x, region), axis=1)
print position
BP chromosome start end
0 1500 1 1000 2000
1 1100 2 1000 2000
2 10000 1 NaN NaN
3 2200 3 NaN NaN
4 3300 2 3000 4000
5 400 1 NaN NaN
6 5000 1 4000 5000
</code></pre>
<p>But I'm hoping that there is a more optimized way than doing each comparison in O(N) time. Thanks!</p>
|
<p>One solution would be to do an inner-join on <code>chromosome</code>, exclude the violating rows, and then do left-join with <code>position</code>:</p>
<pre><code>>>> df = pd.merge(position, region, on='chromosome', how='inner')
>>> idx = (df['BP'] < df['start']) | (df['end'] < df['BP']) # violating rows
>>> pd.merge(position, df[~idx], on=['BP', 'chromosome'], how='left')
BP chromosome end start
0 1500 1 2000 1000
1 1100 2 2000 1000
2 10000 1 NaN NaN
3 2200 3 NaN NaN
4 3300 2 4000 3000
5 400 1 NaN NaN
6 5000 1 5000 4000
</code></pre>
|
python|pandas|merge
| 5
|
4,361
| 17,595,912
|
Gaussian Smoothing an image in python
|
<p>I am very new to programming in python, and I'm still trying to figure everything out, but I have a problem trying to gaussian smooth or convolve an image. This is probably an easy fix, but I've spent so much time trying to figure it out I'm starting to go crazy. I have a 3d .fits file of a group of galaxies and have cut out a certain one and saved it to a png with aplpy. Basically, it needs to be smoothed as a gaussian to a larger beam size (i.e. make the whole thing larger by expanding out the FWHM but dimming the output). I know there are things like scipy.ndimage.convolve and a similar function in numpy that I can use, but I'm having a hard time translating it into something useful. If anyone can give me a hand with this and point me in the right direction it would be a huge help.</p>
|
<p>Something like this perhaps?</p>
<pre><code>import numpy as np
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt
img = ndimage.imread('galaxies.png')  # note: removed in newer SciPy; imageio.imread or plt.imread also work
plt.imshow(img, interpolation='nearest')
plt.show()
# Note the 0 sigma for the last axis: we don't want to blur the color planes together!
img = ndimage.gaussian_filter(img, sigma=(5, 5, 0), order=0)
plt.imshow(img, interpolation='nearest')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/x0uRx.png" alt="enter image description here">
<img src="https://i.stack.imgur.com/mUXYX.png" alt="enter image description here"></p>
<p>(Original image taken from <a href="http://cdn.list25.com/wp-content/uploads/2012/03/galaxies.png" rel="noreferrer">here</a>)</p>
|
python|numpy|scipy|gaussian|smoothing
| 30
|
4,362
| 20,112,760
|
python pandas convert dataframe to dictionary with multiple values
|
<p>I have a dataframe with 2 columns Address and ID. I want to merge IDs with the same addresses in a dictionary</p>
<pre><code>import pandas as pd, numpy as np
df = pd.DataFrame({'Address' : ['12 A', '66 C', '10 B', '10 B', '12 A', '12 A'],
'ID' : ['Aa', 'Bb', 'Cc', 'Dd', 'Ee', 'Ff']})
AS=df.set_index('Address')['ID'].to_dict()
print df
Address ID
0 12 A Aa
1 66 C Bb
2 10 B Cc
3 10 B Dd
4 12 A Ee
5 12 A Ff
print AS
{'66 C': 'Bb', '12 A': 'Ff', '10 B': 'Dd'}
</code></pre>
<p>What I want is for the duplicates to store multiple values like:</p>
<pre><code>{'66 C': ['Bb'], '12 A': ['Aa','Ee','Ff'], '10 B': ['Cc','Dd']}
</code></pre>
|
<p>I think you can use <code>groupby</code> and a dictionary comprehension here:</p>
<pre><code>>>> df
Address ID
0 12 A Aa
1 66 C Bb
2 10 B Cc
3 10 B Dd
4 12 A Ee
5 12 A Ff
>>> {k: list(v) for k,v in df.groupby("Address")["ID"]}
{'66 C': ['Bb'], '12 A': ['Aa', 'Ee', 'Ff'], '10 B': ['Cc', 'Dd']}
</code></pre>
|
python|dictionary|pandas
| 19
|
4,363
| 15,889,998
|
Pandas force matrix multiplication
|
<p>I would like to force matrix multiplication "orientation" using Python Pandas, both between DataFrames against DataFrames, Dataframes against Series and Series against Series.</p>
<p>As an example, I tried the following code:</p>
<pre><code>t = pandas.Series([1, 2])
print(t.T.dot(t))
</code></pre>
<p>Which outputs: 5</p>
<p>But I expect this:</p>
<pre><code>[1 2
2 4]
</code></pre>
<p>Pandas is great, but this inability to do matrix multiplications the way I want is what is the most frustrating, so any help would be greatly appreciated.</p>
<p>PS: I know Pandas tries to implicitly use index to find the right way to compute the matrix product, but it seems this behavior can't be switched off!</p>
|
<p>Here:</p>
<pre><code>In [1]: import pandas; import numpy as np
In [2]: t = pandas.Series([1, 2])
In [3]: np.outer(t, t)
Out[3]:
array([[1, 2],
[2, 4]])
</code></pre>
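<p>If a labelled result is preferred, the same outer product can be wrapped back into a DataFrame (a small sketch reusing the Series' own index; not required by the answer above):</p>
<pre><code>import numpy as np
import pandas as pd

t = pd.Series([1, 2])
outer = pd.DataFrame(np.outer(t, t), index=t.index, columns=t.index)
print(outer)
#    0  1
# 0  1  2
# 1  2  4
</code></pre>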
|
python|pandas|matrix-multiplication|dot-product|dataframe
| 4
|
4,364
| 15,933,045
|
Numpy max slow when applied to list of arrays
|
<p>I carry out some computations to obtain a list of numpy arrays. Subsequently, I would like to find the largest values along the first axis. My current implementation (see below) is very slow and I would like to find alternatives.</p>
<p><strong>Original</strong></p>
<pre><code>pending = [<list of items>]
matrix = [compute(item) for item in pending if <some condition on item>]
dominant = np.max(matrix, axis = 0)
</code></pre>
<p><strong>Revision 1:</strong> This implementation is faster (~10x; presumably because numpy does not need to figure out the shape of the array)</p>
<pre><code>pending = [<list of items>]
matrix = [compute(item) for item in pending if <some condition on item>]
matrix = np.vstack(matrix)
dominant = np.max(matrix, axis = 0)
</code></pre>
<p>I ran a couple of tests and the slowdown seems to be due to an internal conversion of the list of arrays to a numpy array</p>
<pre><code> Timer unit: 1e-06 s
Total time: 1.21389 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4 def direct_max(list_of_arrays):
5 1000 1213886 1213.9 100.0 np.max(list_of_arrays, axis = 0)
Total time: 1.20766 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
8 def numpy_max(list_of_arrays):
9 1000 1151281 1151.3 95.3 list_of_arrays = np.array(list_of_arrays)
10 1000 56384 56.4 4.7 np.max(list_of_arrays, axis = 0)
Total time: 0.15437 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
12 @profile
13 def stack_max(list_of_arrays):
14 1000 102205 102.2 66.2 list_of_arrays = np.vstack(list_of_arrays)
15 1000 52165 52.2 33.8 np.max(list_of_arrays, axis = 0)
</code></pre>
<p>Is there any way to speed up the max function or is it possible to populate a numpy array efficiently with the results of my calculation such that max is fast?</p>
|
<p>You can use <code>reduce(np.maximum, matrix)</code>, here is a test:</p>
<pre><code>import numpy as np
np.random.seed(0)
N, M = 1000, 1000
matrix = [np.random.rand(N) for _ in xrange(M)]
%timeit np.max(matrix, axis = 0)
%timeit np.max(np.vstack(matrix), axis = 0)
%timeit reduce(np.maximum, matrix)
</code></pre>
<p>The result is:</p>
<pre><code>10 loops, best of 3: 116 ms per loop
10 loops, best of 3: 10.6 ms per loop
100 loops, best of 3: 3.66 ms per loop
</code></pre>
<p><strong>Edit</strong></p>
<p><code>argmax()</code> is more difficult, but you can use a for loop:</p>
<pre><code>def argmax_list(matrix):
m = matrix[0].copy()
idx = np.zeros(len(m), dtype=np.int)
for i, a in enumerate(matrix[1:], 1):
mask = m < a
m[mask] = a[mask]
idx[mask] = i
return idx
</code></pre>
<p>It's still faster than <code>argmax()</code>:</p>
<pre><code>%timeit np.argmax(matrix, axis=0)
%timeit np.argmax(np.vstack(matrix), axis=0)
%timeit argmax_list(matrix)
</code></pre>
<p>result:</p>
<pre><code>10 loops, best of 3: 131 ms per loop
10 loops, best of 3: 21 ms per loop
100 loops, best of 3: 13.1 ms per loop
</code></pre>
|
python|numpy
| 5
|
4,365
| 15,944,171
|
Python: Differences between lists and numpy array of objects
|
<p>What are the advantages and disadvantages of storing Python objects in a <code>numpy.array</code> with <code>dtype='o'</code> versus using <code>list</code> (or <code>list</code> of <code>list</code>, etc., in higher dimensions)?</p>
<p>Are numpy arrays more efficient in this case? (It seems that they cannot avoid the indirection, but might be more efficient in the multidimensional case.)</p>
|
<p>Slicing works differently with NumPy arrays. <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="noreferrer">The NumPy docs devote a lengthy page on the topic.</a> To highlight some points:</p>
<ul>
<li>NumPy slices can slice through multiple dimensions</li>
<li>All arrays generated by NumPy basic slicing are always views of the original array, while slices of lists are shallow copies.</li>
<li>You can assign a scalar into a NumPy slice.</li>
<li>You can insert and delete items in a <code>list</code> by assigning a sequence of different length to a slice, whereas NumPy would raise an error.</li>
</ul>
<p>Demo:</p>
<pre><code>>>> a = np.arange(4, dtype=object).reshape((2,2))
>>> a
array([[0, 1],
[2, 3]], dtype=object)
>>> a[:,0] #multidimensional slicing
array([0, 2], dtype=object)
>>> b = a[:,0]
>>> b[:] = True #can assign scalar
>>> a #contents of a changed because b is a view to a
array([[True, 1],
[True, 3]], dtype=object)
</code></pre>
<p>Also, NumPy arrays provide convenient mathematical operations with arrays of objects that support them (e.g. <code>fraction.Fraction</code>).</p>
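<p>A minimal sketch of such an operation, assuming <code>fractions.Fraction</code> objects (illustrative, not part of the demo above):</p>
<pre><code>import numpy as np
from fractions import Fraction

f = np.array([Fraction(1, 3), Fraction(2, 5)], dtype=object)
print(f + f)    # [Fraction(2, 3) Fraction(4, 5)]
print(f.sum())  # 11/15
</code></pre>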
|
python|numpy
| 11
|
4,366
| 12,140,417
|
pandas: normalizing a DataFrame
|
<p>I have input data in a flattened file. I want to normalize this data, by splitting it into tables. Can I do that neatly with <code>pandas</code> - that is, by reading the flattened data into a <code>DataFrame</code> instance, and then applying some functions to obtain the resulting <code>DataFrame</code> instances?</p>
<p>Example:</p>
<p>Data is given to me on disk in the form of a CSV file like this:</p>
<pre><code>ItemId ClientId PriceQuoted ItemDescription
1 1 10 scroll of Sneak
1 2 12 scroll of Sneak
1 3 13 scroll of Sneak
2 2 2500 scroll of Invisible
2 4 2200 scroll of Invisible
</code></pre>
<p>I want to create two DataFrames:</p>
<pre><code>ItemId ItemDescription
1 scroll of Sneak
2 scroll of Invisibile
</code></pre>
<p>and</p>
<pre><code>ItemId ClientId PriceQuoted
1 1 10
1 2 12
1 3 13
2 2 2500
2 4 2200
</code></pre>
<p>If <code>pandas</code> only has a good solution for the simplest case (normalization results in 2 tables with many-to-one relationship - just like in the above example), it might be enough for my current needs. I may need a more general solution in the future, however.</p>
|
<pre><code>In [30]: df = pandas.read_csv('foo1.csv', sep='[\s]{2,}')
In [30]: df
Out[30]:
ItemId ClientId PriceQuoted ItemDescription
0 1 1 10 scroll of Sneak
1 1 2 12 scroll of Sneak
2 1 3 13 scroll of Sneak
3 2 2 2500 scroll of Invisible
4 2 4 2200 scroll of Invisible
In [31]: df1 = df[['ItemId', 'ItemDescription']].drop_duplicates().set_index('ItemId')
In [32]: df1
Out[32]:
ItemDescription
ItemId
1 scroll of Sneak
2 scroll of Invisible
In [33]: df2 = df[['ItemId', 'ClientId', 'PriceQuoted']]
In [34]: df2
Out[34]:
ItemId ClientId PriceQuoted
0 1 1 10
1 1 2 12
2 1 3 13
3 2 2 2500
4 2 4 2200
</code></pre>
|
python|pandas|database-normalization
| 12
|
4,367
| 72,009,695
|
How do I check for null or string values in columns in the whole dataset in python?
|
<pre><code>def check_isnull(self):
df = pd.read_csv(self.table_name)
for j in df.values:
for k in j[0:]:
try:
k = float(k)
Flag=1
except ValueError:
Flag = 0
break
if Flag==1:
QMessageBox.information(self, "Information",
"Dataset is ready to train.",
QMessageBox.Close | QMessageBox.Help)
elif Flag==0:
QMessageBox.information(self, "Information",
"There are one or more non-integer values.",
QMessageBox.Close | QMessageBox.Help)
</code></pre>
<p>Greetings, here are only 40 rows of the dataset I am trying to train. I want to replace the null or string values that exist here. My existing function for the replacement operation works without any problems, but I wanted to write a function that only detects them and reports the result. My function sometimes gives an error; where is the problem?</p>
<pre><code> User ID Age EstimatedSalary Female Male Purchased
0 15624510 19.000000 19000 0 1 0
1 1581qqsdasd0944 35.000000 qweqwe 0 1 0
2 15668575 37.684211 43000 1 0 0
3 NaN 27.000000 57000 1 0 0
4 15804002 19.000000 69726.81704 0 1 0
.. ... ... ... ... ... ...
395 15691863 46.000000 41000 1 0 1
396 15706071 51.000000 23000 0 1 1
397 15654296 50.000000 20000 1 0 1
398 15755018 36.000000 33000 0 1 0
399 15594041 49.000000 36000 1 0 1
</code></pre>
|
<p>Try converting the column with <code>errors='coerce'</code>, so every non-numeric entry becomes <code>NaN</code>:</p>
<pre><code>df['EstimatedSalary'] = pd.to_numeric(df['EstimatedSalary'], errors='coerce')
</code></pre>
<p>then drop those rows, along with the rows that already contained <code>NaN</code>:</p>
<pre><code>df = df.dropna(subset=['EstimatedSalary'])
</code></pre>
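<p>To run the same check over every column at once, a possible sketch (assuming <code>df</code> was read exactly as in the question) is:</p>
<pre><code>import pandas as pd

numeric = df.apply(pd.to_numeric, errors='coerce')   # non-numeric entries become NaN
bad_rows = numeric.isna().any(axis=1)
print("Dataset is ready to train." if not bad_rows.any()
      else "There are one or more non-numeric values.")
</code></pre>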
|
python|pandas|dataframe
| 0
|
4,368
| 72,000,915
|
Warning when fitting the LSTM model
|
<p>When I try to fit my model I get warnings. Here is the code:</p>
<pre><code>model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=(trainX.shape[1], trainX.shape[2]), return_sequences=True))
model.add(LSTM(64, activation='relu', return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(trainY.shape[1],'linear'))
model.compile(optimizer=Adam(learning_rate=0.0001), loss='mse')
model.summary()
</code></pre>
<p>I am getting the warnings after running the previous code:</p>
<blockquote>
<p>INFO:tensorflow:Assets written to: C:\Users\X\training_LinReg_NotScaled\cp.ckt\assets
WARNING:absl:<keras.layers.recurrent.LSTMCell object at 0x0000022F630975E0> has the same name 'LSTMCell' as a built-in Keras object. Consider renaming <class 'keras.layers.recurrent.LSTMCell'> to avoid naming conflicts when loading with <code>tf.keras.models.load_model</code>. If renaming is not possible, pass the object in the <code>custom_objects</code> parameter of the load function.
WARNING:absl:<keras.layers.recurrent.LSTMCell object at 0x0000022F684D6A90> has the same name 'LSTMCell' as a built-in Keras object. Consider renaming <class 'keras.layers.recurrent.LSTMCell'> to avoid naming conflicts when loading with <code>tf.keras.models.load_model</code>. If renaming is not possible, pass the object in the <code>custom_objects</code> parameter of the load function.</p>
</blockquote>
<pre><code>import os
from tensorflow.keras.callbacks import ModelCheckpoint
checkpointpath = 'C:\\Users\\X\\training_LinReg_NotScaled/cp.ckt'
# checkpointdir = os.path.dirname(checkpointpath)
cp = ModelCheckpoint(checkpointpath, save_best_only=True)
history = model.fit(trainX, trainY, validation_data=(Xval,Yval),epochs=30, batch_size=16, callbacks=[cp],verbose=1,shuffle=False)
plt.plot(history.history['loss'], label='Training loss')
plt.plot(history.history['val_loss'], label='Validation loss')
</code></pre>
<p>And I get the same warning after running the following code:</p>
<pre><code>from tensorflow.keras.models import load_model
model.save("my_model")
model = load_model("my_model")
</code></pre>
|
<p>You can use the following code to suppress the Warnings.</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
</code></pre>
<p>In detail:-</p>
<pre><code>0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed
</code></pre>
<p>Let us know if the issue still persists. Thanks!</p>
|
python|tensorflow|keras|lstm|warnings
| 0
|
4,369
| 55,392,853
|
Dataframe value not updating when iterating over rows
|
<p>I am fairly new to Pandas, and am trying to use it to analyse a large dataset. I have read everything I can find about it, but just can't get it to work. I am trying to update values in a dataframe whilst iterating over it row by row, but the values are not being updated in the dataframe.</p>
<pre><code>for index, row in df.iterrows():
for j in data_column:
this_value = some_value
print(this_value) # prints some_value
df.loc[index].at[j] = this_value
print(df.loc[index].at[j]) # prints 0 (not updated)
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.at.html" rel="nofollow noreferrer"><code>DataFrame.at</code></a>:</p>
<pre><code>df.at[index, j] = this_value
</code></pre>
<p>instead of the combination of <code>loc</code> and <code>at</code>:</p>
<pre><code>df.loc[index].at[j] = this_value
</code></pre>
<p>But <code>iterrows</code> should only be used for some specific problems; it is usually better to use one of the alternatives.</p>
<p><a href="https://stackoverflow.com/a/24871316/2901002">Does iterrows have performance issues?</a></p>
|
python|pandas
| 0
|
4,370
| 55,377,044
|
What does this 'single' value represent in gradient?
|
<p>I tried to calculate the gradient of the output layer w.r.t. the input, and I am expecting a matrix of gradients (the gradient of each node in the output layer w.r.t. each input), but I am getting a single value. I want to know what this value represents here.</p>
<p>My aim was to calculate the gradient of the categorical-cross-entropy loss w.r.t. each input. I was searching for a solution and then I got stuck at this.</p>
<p>I am new to this, so please ignore silly mistakes.</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import backend as k
import numpy as np
import tensorflow as tf
model = Sequential()
model.add(Dense(2, input_dim=1, init='uniform', activation='relu'))
model.add(Dense(2, init='uniform', activation='softmax'))
outputTensor = model.output
listOfVariableTensors = model.input
gradients = k.gradients(outputTensor, listOfVariableTensors)
trainingExample = np.random.random((1,1))
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
evaluated_gradients = sess.run(gradients,feed_dict={model.input:trainingExample})
print(evaluated_gradients)
</code></pre>
<p>I got output of print statement as:</p>
<pre><code>[array([[0.]], dtype=float32)]
</code></pre>
|
<p><code>k.gradients</code> is a wrapper that actually runs <code>tf.gradients</code>.
As described in the documentation:</p>
<blockquote>
<p>Constructs symbolic derivatives of <strong>sum</strong> of ys w.r.t. x in xs.</p>
</blockquote>
<p>The result of <code>tf.gradients</code> is the sum of all <code>ys</code> derivatives of <code>xs</code>. The formula is as follows:</p>
<p><a href="https://i.stack.imgur.com/ib93c.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ib93c.jpg" alt="enter image description here"></a></p>
<p>The shape of the result is the same as that of <code>xs</code>, not <code>ys</code>. An example:</p>
<pre><code>import tensorflow as tf
a = tf.constant([[1.],[2.]])
b = tf.matmul(a,[[3.,4.]])
c = tf.matmul(a,[[5.,6.]])
grads1 = tf.gradients(ys=b,xs=a)
grads2 = tf.gradients(ys=[b,c],xs=a)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(grads1))
print(sess.run(grads2))
[array([[7.],[7.]], dtype=float32)]
[array([[18.],[18.]], dtype=float32)]
</code></pre>
<p>Just do <code>tf.gradients(ys=loss,xs=input)</code> if you want to calculate the sum gradient of categorical-cross-entropy loss w.r.t to each input. You'd need to call <code>tf.gradients</code> for each <code>ys[i,j]</code> separately if you want to calculate gradient of different nodes in output layer w.r.t. to each input.</p>
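<p>A sketch of that per-node loop, reusing the toy tensors from the example above (TF1-style; the shapes are purely illustrative):</p>
<pre><code>import tensorflow as tf

a = tf.constant([[1.], [2.]])
b = tf.matmul(a, [[3., 4.]])   # shape (2, 2)

# one tf.gradients call per output element b[i, j]
per_node_grads = [tf.gradients(ys=b[i, j], xs=a)[0]
                  for i in range(2) for j in range(2)]

with tf.Session() as sess:
    print(sess.run(per_node_grads))
</code></pre>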
|
python|tensorflow|input|keras|gradient
| 1
|
4,371
| 55,463,251
|
Why does multi-class classification fails with sigmoid?
|
<p>MNIST trained with Sigmoid fails while Softmax works fine</p>
<p>I am trying to investigate how different activation affects the final results, so I implemented a simple net for MNIST with PyTorch.</p>
<p>I am using NLLLoss (Negative log likelihood) as it implements Cross Entropy Loss when used with softmax.</p>
<p>When I have softmax as activation of the last layer, it works great.
But when I used sigmoid instead, I noticed that things fall apart</p>
<p>Here is my network code</p>
<pre><code>def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 80)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.XXXX(x)
</code></pre>
<p>where XXXX is the activation function</p>
<p>Both Sigmoid and Softmax output values between (0, 1). Yes, Softmax guarantees the sum is 1, but I am not sure whether this explains why the training fails with Sigmoid. Is there any detail I am not catching here?</p>
|
<p>Sigmoid + crossentropy can be used for multilabel classification (assume a picture with a dog and a cat, you want the model to return "dog and cat"). It works when the classes aren't mutually exclusive or the samples contain more than one object that you want to recognize.</p>
<p>In your case MNIST has mutually exclusive classes and in each image there is only one number, so it is better to use logsoftmax + negative loglikelihood, which assume that the classes are mutually exclusive and there is only one correct label associated to the image.</p>
<p>So, you can't really expect to have that behavior from sigmoid.</p>
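<p>A minimal sketch of that pairing (not the poster's exact network; the batch size and class count below are made up):</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 10)               # e.g. fc2 output: batch of 8, 10 classes
log_probs = F.log_softmax(logits, dim=1)  # what the last layer should return
targets = torch.randint(0, 10, (8,))
loss = nn.NLLLoss()(log_probs, targets)   # NLLLoss expects log-probabilities
</code></pre>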
|
deep-learning|pytorch|loss-function|multiclass-classification|activation-function
| 0
|
4,372
| 7,656,665
|
How to repeat elements of an array along two axes?
|
<p>I want to repeat elements of an array along axis 0 and axis 1 for M and N times respectively:</p>
<pre><code>import numpy as np
a = np.arange(12).reshape(3, 4)
b = a.repeat(2, 0).repeat(2, 1)
print(b)
[[ 0 0 1 1 2 2 3 3]
[ 0 0 1 1 2 2 3 3]
[ 4 4 5 5 6 6 7 7]
[ 4 4 5 5 6 6 7 7]
[ 8 8 9 9 10 10 11 11]
[ 8 8 9 9 10 10 11 11]]
</code></pre>
<p>This works, but I want to know whether there are better methods that avoid creating a temporary array.</p>
|
<p>You could use the <a href="https://en.wikipedia.org/wiki/Kronecker_product" rel="nofollow noreferrer">Kronecker product</a>, see <a href="https://numpy.org/doc/stable/reference/generated/numpy.kron.html" rel="nofollow noreferrer"><code>numpy.kron</code></a>:</p>
<pre><code>>>> a = np.arange(12).reshape(3,4)
>>> print(np.kron(a, np.ones((2,2), dtype=a.dtype)))
[[ 0 0 1 1 2 2 3 3]
[ 0 0 1 1 2 2 3 3]
[ 4 4 5 5 6 6 7 7]
[ 4 4 5 5 6 6 7 7]
[ 8 8 9 9 10 10 11 11]
[ 8 8 9 9 10 10 11 11]]
</code></pre>
<p>Your original method is OK too, though!</p>
|
python|numpy
| 16
|
4,373
| 56,624,946
|
Calculating Quantiles based on a column value?
|
<p>I am trying to figure out a way in which I can calculate quantiles in pandas or python based on a column value? Also can I calculate multiple different quantiles in one output?</p>
<p>For example I want to calculate the 0.25, 0.50 and 0.9 quantiles for </p>
<p><strong>Column Minutes in df where it is <= 5 and where it is > 5 and <=10</strong></p>
<pre><code>df[df['Minutes'] <=5]
df[(df['Minutes'] >5) & (df['Minutes']<=10)]
</code></pre>
<p><strong>where column Minutes is just a column containing value of numerical minutes</strong></p>
<p>Thanks!</p>
|
<p><code>DataFrame.quantile</code> accepts an array of values.</p>
<p>Try</p>
<pre><code>df['Minutes'].quantile([0.25, 0.50, 0.9])
</code></pre>
<p>Or filter the data first,</p>
<pre><code>df.loc[df['Minutes'] <= 5, 'Minutes'].quantile([0.25, 0.50, 0.9])
</code></pre>
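<p>To get both buckets in one pass, one possible sketch (bucket edges taken from the question; <code>pd.cut</code> uses right-closed intervals here) groups on a binned copy of the column:</p>
<pre><code>import pandas as pd

bins = pd.cut(df['Minutes'], bins=[0, 5, 10], labels=['<=5', '5-10'])
print(df.groupby(bins)['Minutes'].quantile([0.25, 0.50, 0.9]))
</code></pre>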
|
python|python-3.x|pandas|data-science
| 3
|
4,374
| 56,590,300
|
Simultaneously change occurrences in a numpy array
|
<p>I have a numpy array that looks something like this:</p>
<pre><code>h = array([string1 1
string2 1
string3 1
string4 3
string5 4
string6 2
string7 2
string8 4
string9 3
string0 2 ])
</code></pre>
<p>In the second column, I would like to change all occurrences of 1 to 3, all occurrences of 3 to 2, all occurrences of 4 to 1</p>
<p>Obviously if I systematically try to do it in place I will get an error, because:</p>
<pre><code>h[,:1 == 1] = 3
h[,:1 == 3] = 2
</code></pre>
<p>will change all the 1's into 2's</p>
<p>The matrix can be up to 50,000 elements long, and the values to change might vary</p>
<p>I was looking at a similar question <a href="https://stackoverflow.com/questions/45381983/simultaneous-changing-of-python-numpy-array-elements">here</a> , but it was turning all digits to 0, and the answers were specific to that.</p>
<p>Is there a way to simultaneously change all these occurrences or am I going to have to find another way?</p>
|
<p>You can use a look up table and advanced indexing:</p>
<pre><code>A = np.rec.fromarrays([np.array("The quick brown fox jumps over the lazy dog .".split()), np.array([1,1,1,3,4,2,2,4,3,2])])
A
# rec.array([('The', 1), ('quick', 1), ('brown', 1), ('fox', 3),
# ('jumps', 4), ('over', 2), ('the', 2), ('lazy', 4), ('dog', 3),
# ('.', 2)],
# dtype=[('f0', '<U5'), ('f1', '<i8')])
LU = np.arange(A['f1'].max()+1)
LU[[1,3,4]] = 3,2,1
A['f1'] = LU[A['f1']]
A
# rec.array([('The', 3), ('quick', 3), ('brown', 3), ('fox', 2),
# ('jumps', 1), ('over', 2), ('the', 2), ('lazy', 1), ('dog', 2),
# ('.', 2)],
# dtype=[('f0', '<U5'), ('f1', '<i8')])
</code></pre>
|
python|numpy
| 2
|
4,375
| 56,672,105
|
Transpose column in a DataFrame into a binary matrix
|
<p><strong><em>Context</em></strong></p>
<p>Lets say I have a pandas-DataFrame like this:</p>
<pre class="lang-py prettyprint-override"><code>>>> data.head()
values atTime
date
2006-07-01 00:00:00+02:00 15.10 0000
2006-07-01 00:15:00+02:00 16.10 0015
2006-07-01 00:30:00+02:00 17.75 0030
2006-07-01 00:45:00+02:00 17.35 0045
2006-07-01 01:00:00+02:00 17.25 0100
</code></pre>
<p><em>atTime</em> represents the hour and minute of the timestamp used as index. I want to transpose the <em>atTime</em>-column to a binary matrix (making it sparse is also an option), which will be used as a nominal feature in a machine learning approach.</p>
<p>The desired result should look like:</p>
<pre class="lang-py prettyprint-override"><code>>>> data.head()
values 0000 0015 0030 0045 0100
date
2006-07-01 00:00:00+02:00 15.10 1 0 0 0 0
2006-07-01 00:15:00+02:00 16.10 0 1 0 0 0
2006-07-01 00:30:00+02:00 17.75 0 0 1 0 0
2006-07-01 00:45:00+02:00 17.35 0 0 0 1 0
2006-07-01 01:00:00+02:00 17.25 0 0 0 0 1
</code></pre>
<p>As might be anticipated, this matrix will be much larger when considering all values in <em>atTime</em>.</p>
<p><br>
<strong><em>My question</em></strong></p>
<p>I can achieve the desired result with workarounds using <code>apply</code> and using the timestamps in order to create the new columns beforehand.</p>
<p><strong><em>However, is there a built-in option in pandas (or via numpy, considering atTime as a numpy array) to achieve the same without a workaround?</em></strong></p>
|
<p>This is a use case for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a>:</p>
<pre><code>pd.get_dummies(df, columns=["atTime"])
</code></pre>
<pre><code> values atTime_0 atTime_15 atTime_30 atTime_45 atTime_100
date
2006-07-01 00:00:00+02:00 15.10 1 0 0 0 0
2006-07-01 00:15:00+02:00 16.10 0 1 0 0 0
2006-07-01 00:30:00+02:00 17.75 0 0 1 0 0
2006-07-01 00:45:00+02:00 17.35 0 0 0 1 0
2006-07-01 01:00:00+02:00 17.25 0 0 0 0 1
</code></pre>
<p>Solution updated with OP's recommendation. Thanks!</p>
|
python|pandas|dataframe
| 8
|
4,376
| 56,460,344
|
How to iterate over the n-th dimenstion of a numpy array?
|
<p>I tend to concatenate my numpy arrays of arbitrary shape to make my code cleaner; however, it seems pretty hard to iterate over them in a Pythonic way.</p>
<p>Let's consider a 4-dimensional array x (thus <code>len(x.shape) = 4</code>) and say the axis I want to iterate over is 2; the naive solution that I usually use is something like</p>
<pre class="lang-py prettyprint-override"><code>y = np.array([my_operation(x[:, :, i, :])
for i in range(x.shape[2])])
</code></pre>
<p>I'm looking for something more readable, because it is annoying to have so many ":" and any change in the dimensions of x would require rewriting a part of my code. Something magic like</p>
<pre><code>y = np.array([my_operation(z) for z in magic_function(x, 2)])
</code></pre>
<p>Is there a numpy method that allows me to iterate over any arbitrary dimension of an array?</p>
|
<p>One possible solution is to use a dict().</p>
<p>What you can do is:</p>
<pre><code>x = dict()
x['param1'] = [1, 1, 1, 1]
x['param2'] = [2, 2, 2, 2]
print(x['param1'])
# > [1, 1, 1, 1]
</code></pre>
|
python|numpy
| 0
|
4,377
| 56,575,877
|
shuffling two tensors in the same order
|
<p>As above. I tried those to no avail:</p>
<pre><code>tf.random.shuffle( (a,b) )
tf.random.shuffle( zip(a,b) )
</code></pre>
<p>I used to concatenate them and do the shuffling, then unconcatenate / unpack. But now I'm in a situation where (a) is a rank-4 tensor while (b) is 1D, so there is no way to concatenate.</p>
<p>I also tried to give the seed argument to the shuffle method so it reproduces the same shuffling and I use it twice => Failed. Also tried to do the shuffling myself with randomly shuffled range of numbers, but TF is not as flexible as numpy in fancy indexing and stuff ==> failed.</p>
<p>What I'm doing now is to convert everything back to numpy, use shuffle from sklearn, then go back to tensors by recasting. It is a plainly clumsy way to do it. This is supposed to happen inside a graph.</p>
|
<p>You could just shuffle the indices and then use <code>tf.gather()</code> to extract values corresponding to those shuffled indices:</p>
<p><strong>TF2.x (UPDATE)</strong></p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
x = tf.convert_to_tensor(np.arange(5))
y = tf.convert_to_tensor(['a', 'b', 'c', 'd', 'e'])
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
print('before')
print('x', x.numpy())
print('y', y.numpy())
print('after')
print('x', shuffled_x.numpy())
print('y', shuffled_y.numpy())
# before
# x [0 1 2 3 4]
# y [b'a' b'b' b'c' b'd' b'e']
# after
# x [4 0 1 2 3]
# y [b'e' b'a' b'b' b'c' b'd']
</code></pre>
<p><strong>TF1.x</strong></p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, (None, 1, 1, 1))
y = tf.placeholder(tf.int32, (None))
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
</code></pre>
<p>Make sure that you compute <code>shuffled_x</code>, <code>shuffled_y</code> in the same session run. Otherwise they might get different index orderings.</p>
<pre class="lang-py prettyprint-override"><code># Testing
x_data = np.concatenate([np.zeros((1, 1, 1, 1)),
np.ones((1, 1, 1, 1)),
2*np.ones((1, 1, 1, 1))]).astype('float32')
y_data = np.arange(4, 7, 1)
print('Before shuffling:')
print('x:')
print(x_data.squeeze())
print('y:')
print(y_data)
with tf.Session() as sess:
x_res, y_res = sess.run([shuffled_x, shuffled_y],
feed_dict={x: x_data, y: y_data})
print('After shuffling:')
print('x:')
print(x_res.squeeze())
print('y:')
print(y_res)
</code></pre>
<pre><code>Before shuffling:
x:
[0. 1. 2.]
y:
[4 5 6]
After shuffling:
x:
[1. 2. 0.]
y:
[5 6 4]
</code></pre>
|
tensorflow|tensorflow2.0|tensorflow2.x
| 17
|
4,378
| 56,751,703
|
Why .argmax returns 1 instead of the maximum?
|
<p>I'm looking for the max value of two arrays, and I tried to get the max of each and add to another 'np.array'. However, I got 1.</p>
<pre><code>maximums = [x_train.argmax(), x_test.argmax()]
print(maximums)
maximums = np.array(maximums)
print(maximums)
maximum = maximums.argmax()
print(maximum)
</code></pre>
<p>I expected the value of maximum to be 577, but it is 1.</p>
<pre><code>[417, 577]
[417 577]
1
</code></pre>
<p>Where is the error, or why don't I get what I wanted?</p>
<p>EDIT:
I found a function that does what I wanted: <code>numpy.amax()</code> <a href="https://thispointer.com/find-max-value-its-index-in-numpy-array-numpy-amax/" rel="nofollow noreferrer">https://thispointer.com/find-max-value-its-index-in-numpy-array-numpy-amax/</a></p>
|
<p><code>np.argmax</code> returns the index for the maximum value. In this case, the index for maximum value <code>577</code> is 1. Official docs reference: <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html</a></p>
<p>To illustrate with an example, np.argmax is equivalent to getting the index for the maximum value in a list:</p>
<pre><code>In [82]: import numpy as np
In [83]: vals = [477,577]
In [84]: max(vals)
Out[84]: 577
In [86]: vals.index(max(vals))
Out[86]: 1
In [87]: max_index = vals.index(max(vals))
In [88]: np.argmax(vals)
Out[88]: 1
In [89]: np.argmax(vals) == max_index
Out[89]: True
</code></pre>
|
python|numpy
| 1
|
4,379
| 67,140,409
|
python pandas apply function in groupby, and add results as column in data frame
|
<p>I'm practicing with sample data to learn pandas. I have sample data like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">symbol</th>
<th style="text-align: center;">date_time</th>
<th style="text-align: center;">close</th>
<th style="text-align: center;">volume</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">XOM</td>
<td style="text-align: center;">2021-04-13 13:00:00</td>
<td style="text-align: center;">56.5</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">XOM</td>
<td style="text-align: center;">2021-04-13 13:01:00</td>
<td style="text-align: center;">57.5</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">XOM</td>
<td style="text-align: center;">2021-04-13 13:02:00</td>
<td style="text-align: center;">56.25</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">XOM</td>
<td style="text-align: center;">2021-04-13 13:03:00</td>
<td style="text-align: center;">58.5</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">AAPL</td>
<td style="text-align: center;">2021-04-13 13:00:00</td>
<td style="text-align: center;">135.6</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">AAPL</td>
<td style="text-align: center;">2021-04-13 13:01:00</td>
<td style="text-align: center;">137.5</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">AAPL</td>
<td style="text-align: center;">2021-04-13 13:02:00</td>
<td style="text-align: center;">136.25</td>
<td style="text-align: center;">10000</td>
</tr>
<tr>
<td style="text-align: center;">AAPL</td>
<td style="text-align: center;">2021-04-13 13:03:00</td>
<td style="text-align: center;">138.5</td>
<td style="text-align: center;">10000</td>
</tr>
</tbody>
</table>
</div>
<p>I used the groupby function on symbol and close price to add some simple moving averages using pandas' <code>rolling().mean()</code>.</p>
<p>Now I'd like to use talib to calculate the RSI for each symbol. I thought I could use an apply and call a function. I see the output when I print the np array; however, I'm not seeing the column added.</p>
<pre><code>quote_data.groupby("sym")["close"].apply(calc_rsi).reset_index(name='rsi_test')
def calc_rsi(series):
rsi_arr=np.array(series)
RSI = talib.RSI(rsi_arr, timeperiod=14)
#print(RSI) --> produces valid output
return(RSI)
</code></pre>
<p>Sample NumPy array output is below; the first 14 values are nan, which is expected.</p>
<pre><code> nan nan nan nan nan nan
nan nan 17.10526316 30.8277027 38.64107884 36.42559842
35.98126419 49.82352931 51.12420941 56.4889558 53.50561034 57.38372096
63.24414699 65.34066328 65.70388628 60.26289822 61.54881365 61.54881365
</code></pre>
|
<p>It was index related.</p>
<p>Setting the series index before passing it back works:</p>
<pre><code>quote_data['rsi'] = quote_data.groupby("sym")["close"].apply(calc_rsi)
def calc_rsi(series):
rsi_arr=np.array(series)
RSI = talib.RSI(rsi_arr, timeperiod=14)
rsi_series=pd.Series(RSI,series.index)
#print(rsi_series.size)
return(rsi_series)
</code></pre>
|
python|pandas|dataframe
| 0
|
4,380
| 47,356,322
|
Pandas: fill column based on data in different column
|
<p>I Have the following Dataframe:</p>
<pre><code>test_df['A'] = [100, 100, 100, 0, 0, 100, 100, 0, 100, 100, 100]
test_df['B'] = [100, 0, 0, 0, 0, 0, 0, 0, 100, 0, 0]
</code></pre>
<p>What I want to achieve is a new column C: when I iterate over column B and find a value of 100, I want to forward fill 100 into C for as long as column A keeps the value 100. That will result in a column like this:</p>
<pre><code>test_df['C'] = [100, 100, 100, 0, 0, 0, 0, 0, 100, 100, 100]
</code></pre>
<p>This can be achieved by a simple iteration of all values with something like the following:</p>
<pre><code>test_df['C'] = 0
is_valid_row = False
for index, row in test_df.iterrows():
if (row['B'] == 100):
is_valid_row = True
if (is_valid_row == True and row['A'] == 100):
row['C'] = 100
else:
is_valid_row = False
</code></pre>
<p>I wanted to ask if there is any more efficient way to achieve the same result with pandas or numpy</p>
|
<p>You can use the apply method. </p>
<pre><code>def my_func(value):
    # do whatever with the value here and return the result
    return value

test_df['C'] = test_df['A'].apply(my_func)
</code></pre>
<p>You obviously have to fill it in with your own code...</p>
|
python|pandas
| -2
|
4,381
| 68,125,847
|
'>' not supported between instances of 'str' and 'int' pandas function for getting threshold
|
<p>I have a df</p>
<pre><code>import pandas as pd
df= pd.DataFrame({'ID': [1,2,3],
'Text':['This num dogs and cats is (111)888-8780 and other',
'dont block cow 23 here',
'cat two num: dog and cows here'],
'Match':[[('cats', 86), ('dogs', 86), ('dogs', 29)],
[('cow', 33), ('dont', 57), ('cow', 100)],
[('cat', 100), ('dog', 100), ('cows', 86)] ]
})
</code></pre>
<p>And it looks like this</p>
<pre><code> ID Text Match
0 1 This num dogs and cats is (111)888-8780 and other [(cats, 86), (dogs, 86), (dogs, 29)]
1 2 dont block cow 23 here [(cow, 33), (dont, 57), (cow, 100)]
2 3 cat two num: dog and cows here [(cat, 100), (dog, 100), (cows, 86)]
</code></pre>
<p>My goal is to create a function that only keeps items within the <code>Match</code> column whose score is above a certain threshold (e.g. 80), so I tried the following</p>
<pre><code>def threshold(column):
column_tup = column
keep_tuple = []
for col in column_tup:
if column_tup > 80:
keep_tuple.append()
return pd.Series([keep_tuple], index = ['Keep_Words'])
df_thresh = df.join(df.apply(lambda x: threshold(x), axis = 1))
</code></pre>
<p>But this gives me an error</p>
<pre><code>'>' not supported between instances of 'str' and 'int'
</code></pre>
<p>My goal is to get a df with a new column <code>Keep_Words</code> that looks like the following, where only scores above 85 are kept in <code>Keep_Words</code></p>
<pre><code> ID Text Match Keep_Words
0 1 data data [(cats, 86), (dogs, 86)]
1 2 data data [(cow, 100)]
2 3 data data [(cat, 100), (dog, 100)]
</code></pre>
<p>How do I alter my code to reach my goal?</p>
|
<p>Since you're trying to change only the <code>Match</code> column, you might as well only pass that column to <code>apply</code>:</p>
<pre><code>df.Match.apply(threshold)
</code></pre>
<p>where we don't use <code>axis</code> argument anymore since it is a Series we are applying over and it has only one axis anyway.</p>
<p>Then, each time your function is called, a row of <code>df.Match</code> will be passed and get assigned to the function argument, so we can rename the function signature to:</p>
<pre><code>def threshold(match_row):
</code></pre>
<p>for readability.</p>
<p>So, <code>match_row</code> will be a list, e.g., in the first turn it'll be <code>[(cats, 86), (dogs, 86), (dogs, 29)]</code>. We can iterate over as you did but with 2 for-loop variable as:</p>
<pre><code>for name, val in match_row:
</code></pre>
<p>so that <code>name</code> will become the first entry of each tuple and <code>val</code> is the second. Now we can do the filtering:</p>
<pre><code>keep_tuple = []
for name, val in match_row:
if val > 85:
keep_tuple.append((name, val))
</code></pre>
<p>which is fine but not so Pythonic because there is list comprehensions:</p>
<pre><code>keep_tuple = [(name, val) for name, val in match_row if val > 85]
</code></pre>
<p>Lastly we can return this as you did:</p>
<pre><code>return pd.Series([keep_tuple], index=["Keep_Words"])
</code></pre>
<p>As for calling and assignment, we can <code>join</code> as you did:</p>
<pre><code>df_thresh = df.join(df.Match.apply(threshold))
</code></pre>
<p>All in all,</p>
<pre><code>def threshold(match_row):
keep_tuple = [(name, val) for name, val in match_row if val > 85]
return pd.Series([keep_tuple], index=["Keep_Words"])
df_thresh = df.join(df.Match.apply(threshold))
</code></pre>
<p>which gives</p>
<pre><code>>>> df_thresh
ID Text Match Keep_Words
0 1 This num dogs and cats is (111)888-8780 and other [(cats, 86), (dogs, 86), (dogs, 29)] [(cats, 86), (dogs, 86)]
1 2 dont block cow 23 here [(cow, 33), (dont, 57), (cow, 100)] [(cow, 100)]
2 3 cat two num: dog and cows here [(cat, 100), (dog, 100), (cows, 86)] [(cat, 100), (dog, 100), (cows, 86)]
</code></pre>
<hr>
<p>Lastly, for the error you got: I didn't get that error but the infamous</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>error, which was because of this line</p>
<pre><code>if column_tup > 80:
</code></pre>
<p>where <code>column_tup</code> is a whole row as a <code>pd.Series</code> but its behaviour in boolean context is ambiguous.</p>
|
python|pandas|list|function|tuples
| 3
|
4,382
| 68,133,846
|
ERROR: Could not install packages due to an OSError: [WinError 5]
|
<p>I was trying to install tensorflow-gpu in my PyCharm environment (<code>pip install tensorflow-gpu</code>), but unfortunately I'm getting an error message. How can I install this package in PyCharm? What is wrong here? Should I install it directly with cmd? However, I was able to install TensorFlow version 2.5.0 without any problems; only tensorflow-gpu I cannot install. I'm using Python version 3.7.9.</p>
|
<p>You need to run the command prompt or terminal as an administrator; this will give you permission to install packages. You also need to upgrade pip to the latest version - <code>python -m pip install --upgrade pip</code> in cmd or the terminal.</p>
|
python|tensorflow|pip|pycharm
| 14
|
4,383
| 68,090,432
|
How to map the integer values in a column in a pandas datfarme to random n-digit numbers?
|
<p>I have a pandas data frame like df:</p>
<pre><code>df=pd.DataFrame([[111, 7,8], [409,6,4], [333, 9,0],[111,3,2],[111,0,0], [409,7,0]], columns=['A','B','C'])
df
A B C
0 111 7 8
1 409 6 4
2 333 9 0
3 111 3 2
4 111 0 0
5 409 7 0
</code></pre>
<p>How can I map column A to 10-digit random integers such that the same value in column A (such as 111) gets the same 10-digit random integer in the new frame? For example, I want something like this</p>
<pre><code> A B C
0 8765479834 7 8
1 7653780954 6 4
2 9400211346 9 0
3 8765479834 3 2
4 8765479834 0 0
5 7653780954 7 0
</code></pre>
<p>Thank you!</p>
|
<p>One way via <code>hashlib</code>:</p>
<pre><code>import hashlib
df['A'] = df['A'].apply(lambda s: int(hashlib.sha1(str(s).encode("utf-8")).hexdigest(), 16) % (10 ** 8))
</code></pre>
<h4>OUTPUT:</h4>
<pre><code> A B C
0 22445762 7 8
1 63857454 6 4
2 61248669 9 0
3 22445762 3 2
4 22445762 0 0
5 63857454 7 0
</code></pre>
<p><em>NOTE:</em> If you want values of random length you can also use:</p>
<pre><code>df['A'] = pd.util.hash_pandas_object(df['A'], index =False)
</code></pre>
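<p>If exactly 10 digits are required (the <code>10 ** 8</code> modulus above caps the hashed values at 8 digits), one possible sketch draws a fresh 10-digit code per unique value; the seed is only there for reproducibility:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(seed=0)
codes = {v: int(rng.integers(10**9, 10**10)) for v in df['A'].unique()}
df['A'] = df['A'].map(codes)
</code></pre>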
|
python|pandas|dataframe
| 4
|
4,384
| 68,377,243
|
Pandas transform rows to columns
|
<p>I have a pandas dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th>p1</th>
<th>a1</th>
<th>phase</th>
<th>file_number</th>
<th style="text-align: right;">e1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">388</td>
<td>19.288</td>
<td>21.630</td>
<td>0.0</td>
<td>0</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">389</td>
<td>40.910</td>
<td>71.489</td>
<td>1.0</td>
<td>0</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">390</td>
<td>31.310</td>
<td>43.952</td>
<td>2.0</td>
<td>0</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">391</td>
<td>28.420</td>
<td>30.250</td>
<td>3.0</td>
<td>0</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">392</td>
<td>17.940</td>
<td>22.000</td>
<td>0.0</td>
<td>1</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">393</td>
<td>38.020</td>
<td>68.750</td>
<td>1.0</td>
<td>1</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">394</td>
<td>31.230</td>
<td>48.352</td>
<td>2.0</td>
<td>1</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: left;">395</td>
<td>26.902</td>
<td>29.880</td>
<td>3.0</td>
<td>1</td>
<td style="text-align: right;">0.0</td>
</tr>
</tbody>
</table>
</div>
<p>We can create it using this code</p>
<pre><code>d = {'p1': {388: 19.288,389: 40.91,390: 31.31,391: 28.42,392: 17.94,393: 38.02,394: 31.23,395: 26.902},
'a1': {388: 21.63,389: 71.489,390: 43.952,391: 30.25,392: 22.0,393: 68.75,394: 48.352,395: 29.88},
'phase': {388: 0.0,389: 1.0,390: 2.0,391: 3.0,392: 0.0,393: 1.0,394: 2.0,395: 3.0},
'file_number': {388: 0, 389: 0, 390: 0, 391: 0, 392: 1, 393: 1, 394: 1, 395: 1},
'e1': {388: 0.0,389: 0.0,390: 0.0,391: 0.0,392: 0.0,393: 1.0,394: 0.0,395: 0.0}}
df = pd.DataFrame(d)
</code></pre>
<p>I want to transform this dataframe so that I have 1 row for every file_number, pivoting with respect to phase: basically collapsing many rows into one for each file_number. The phase number will always be 0, 1, 2, 3. The final table should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">p1_0</th>
<th style="text-align: center;">p1_1</th>
<th style="text-align: center;">p1_2</th>
<th style="text-align: center;">p1_3</th>
<th style="text-align: center;">a1_0</th>
<th style="text-align: center;">p1_1</th>
<th style="text-align: center;">a1_2</th>
<th style="text-align: center;">a1_3</th>
<th style="text-align: center;">e1_0</th>
<th style="text-align: center;">e1_1</th>
<th style="text-align: center;">e1_2</th>
<th style="text-align: center;">e1_3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">19.288</td>
<td style="text-align: center;">40.910</td>
<td style="text-align: center;">31.310</td>
<td style="text-align: center;">28.420</td>
<td style="text-align: center;">21.630</td>
<td style="text-align: center;">71.489</td>
<td style="text-align: center;">43.952</td>
<td style="text-align: center;">30.250</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: center;">17.940</td>
<td style="text-align: center;">38.020</td>
<td style="text-align: center;">31.230</td>
<td style="text-align: center;">26.902</td>
<td style="text-align: center;">22.000</td>
<td style="text-align: center;">68.750</td>
<td style="text-align: center;">48.352</td>
<td style="text-align: center;">29.880</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
</tr>
</tbody>
</table>
</div>
<p>Where suffix means p1_phase, a1_phase and so on.</p>
<p>I want to make it as fast as possible. Since my data is very large, I'd rather avoid looping.</p>
|
<pre class="lang-py prettyprint-override"><code>d = {'p1': {388: 19.288,389: 40.91,390: 31.31,391: 28.42,392: 17.94,393: 38.02,394: 31.23,395: 26.902},
'a1': {388: 21.63,389: 71.489,390: 43.952,391: 30.25,392: 22.0,393: 68.75,394: 48.352,395: 29.88},
'phase': {388: 0.0,389: 1.0,390: 2.0,391: 3.0,392: 0.0,393: 1.0,394: 2.0,395: 3.0},
'file_number': {388: 0, 389: 0, 390: 0, 391: 0, 392: 1, 393: 1, 394: 1, 395: 1},
'e1': {388: 0.0,389: 0.0,390: 0.0,391: 0.0,392: 0.0,393: 1.0,394: 0.0,395: 0.0}}
df = pd.DataFrame(d)
# pivot the data
pivoted = df.pivot(index='file_number', columns='phase')
# flatten the columns
pivoted.columns = [f'{col[0]}_{int(col[1])}' for col in pivoted.columns.values]
</code></pre>
<p>After this <code>pivoted</code> is a Dataframe with the shape you desired.</p>
<p>Basically a combination of these two questions:</p>
<ul>
<li><a href="https://stackoverflow.com/q/47152691/383793">How to pivot a dataframe?</a></li>
<li><a href="https://stackoverflow.com/q/14507794/383793">Pandas - How to flatten a hierarchical index in columns</a></li>
</ul>
|
python|pandas|dataframe
| 1
|
4,385
| 68,117,724
|
Tensorflow-gpu not detecting GPU
|
<p>I have reinstalled tensorflow-gpu many times. CUDA and cuDNN are already installed with the path added, but the number of GPUs available to TensorFlow is always 0 for me.</p>
|
<p>I reinstalled tensorflow-gpu again but with following command.</p>
<pre><code>conda install tensorflow-gpu=2.3 tensorflow=2.3=mkl_py38h1fcfbd6_0
</code></pre>
<p><a href="https://github.com/ContinuumIO/anaconda-issues/issues/12194" rel="nofollow noreferrer">For more info</a></p>
|
python|tensorflow
| 0
|
4,386
| 59,166,618
|
getting a histogram of a JaggedArray
|
<p>Hi I have a ROOT TTree with a bit of a complicated structure. When I use uproot to create an array: </p>
<pre><code>analysis = uproot.open("/b/LJ_data/02Oct2019/FRVZ/FRVZprompt2zd_mH125_mzd01.root")["analysis"]
el_Eratio = analysis.arrays(["el_Eratio"], cache=mycache);
print(el_Eratio)
</code></pre>
<p>I get a Jagged Array:</p>
<pre><code>{b'el_Eratio': <JaggedArray [[0.9679527 0.8814101 0.88584787] [0.34557977 0.22699767 0.9040524 0.0] [] ... [0.94681776] [0.91043043 0.621741 0.85297334 0.9364375] [0.83885396]] at 0x7f39cf32dfd0>}
</code></pre>
<p>I am trying to create a simple histogram of this data: </p>
<pre><code>n, bins, patches = plt.hist(el_Eratio, 100, density = True)
</code></pre>
<p>But I am getting the error: </p>
<pre><code>Traceback (most recent call last):
File "macro.py", line 45, in <module>
n, bins, patches = plt.hist(el_Eratio, 100, density = True)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/pyplot.py", line 2636, in hist
**({"data": data} if data is not None else {}), **kwargs)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/__init__.py", line 1589, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py", line 6721, in hist
xmin = min(xmin, np.nanmin(xi))
TypeError: '<' not supported between instances of 'dict' and 'float'
</code></pre>
<p>Do I need to reformat the Jagged Array to a list or normal array? If so how do I do this? </p>
<p>Or am I just calling the array wrong? I also tried: </p>
<pre><code>n, bins, patches = plt.hist(el_Eratio.values(), 100, density = True)
</code></pre>
<p>But I get a similar error: </p>
<pre><code>Traceback (most recent call last):
File "macro.py", line 45, in <module>
n, bins, patches = plt.hist(el_Eratio.values(), 100, density = True)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/pyplot.py", line 2636, in hist
**({"data": data} if data is not None else {}), **kwargs)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/__init__.py", line 1589, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py", line 6721, in hist
xmin = min(xmin, np.nanmin(xi))
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/numpy/lib/nanfunctions.py", line 298, in nanmin
res = np.amin(a, axis=axis, out=out, **kwargs)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 2618, in amin
initial=initial)
File "/opt/ohpc/pub/packages/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: operands could not be broadcast together with shapes (3,) (2,)
</code></pre>
<p>Disclaimer: Although I have experience with ROOT and C++, I am new to Python. </p>
<p>Thanks,
Sarah</p>
|
<p>Because you said <code>analysis.arrays</code> (plural), you got back a Python dict. The only array it contains (because you asked for only one, <code>["el_Eratio"]</code>) has one key: <code>b"el_Eratio"</code>. Note that this is a bytestring (starts with <code>b</code>). If you know the encoding, such as <code>"utf-8"</code>, you can pass <code>namedecode="utf-8"</code> to the <code>arrays</code> method to get plain strings.</p>
<p>After extracting the JaggedArray, you'll still need to turn it into a flat array for the histogramming function to know what to do with it:</p>
<pre class="lang-py prettyprint-override"><code>plt.hist(el_Eratio[b'el_Eratio'].flatten())
</code></pre>
<p>Specifically, you're saying that you want a histogram of the nested contents of the jagged array, not something else, like</p>
<pre class="lang-py prettyprint-override"><code>plt.hist(el_Eratio[b'el_Eratio'].counts)
</code></pre>
<p>the number of values in each inner array. This kind of dataset has more structure, so you need to decide what to do with that structure before you have a one-dimensional bag of numbers to plot.</p>
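<p>Putting the pieces together, a minimal sketch (assuming the uproot 3 <code>namedecode</code> option described above and the same tree/branch names as in the question; the file path is a placeholder):</p>
<pre><code>import matplotlib.pyplot as plt
import uproot

analysis = uproot.open("FRVZprompt2zd_mH125_mzd01.root")["analysis"]
arrays = analysis.arrays(["el_Eratio"], namedecode="utf-8")  # keys are plain strings now
flat = arrays["el_Eratio"].flatten()                         # one flat array of all nested values
plt.hist(flat, bins=100, density=True)
plt.show()
</code></pre>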
|
python|numpy|matplotlib|uproot
| 0
|
4,387
| 59,171,235
|
Pandas data not being plotted
|
<p>I have the following simple code to plot specific data sets from this file <a href="https://easyupload.io/mdci9u" rel="nofollow noreferrer">https://easyupload.io/mdci9u</a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
url=r'/path_to_file/Book.xlsx'
df1 = pd.read_excel(url, sheet_name=0,sep='\s*,\s*', index_col=0)
print(df1)
x=df1.iloc[:,11].values.tolist()
y=df1.iloc[:,17].values.tolist()
print(x)
fig, axs=plt.subplots(figsize=(12,5))
axs.plot=(x,y)
</code></pre>
<p>Whenever I try to run the code, only an empty graph with no data plotted appears. I cannot troubleshoot why my data is not being plotted. Is it something related to my dataframe? When I print <code>df1</code> everything seems just fine.</p>
<p>I am using Jupyter Notebook.</p>
<p>Could someone try to help me understand the problem?</p>
<p>All help is very much appreciated! </p>
<p>Thanks!</p>
|
<p>You made a mistake in the <strong>last line</strong>. It should be </p>
<blockquote>
<p>axs.plot(x,y)</p>
</blockquote>
<p>Instead, you have used an equal sign which is why you got an empty graph.</p>
|
python|pandas|matplotlib
| 2
|
4,388
| 59,314,762
|
Inconsistent matrices in SymPy
|
<p>I'm calculating reduced echelon forms in SymPy. I'm trying to get the pivot columns of the following matrix:</p>
<pre><code>exercise4 = Matrix([[1,3,5,7],[3,5,7,9],[5,7,9,1]])
</code></pre>
<p>I examine the matrix with the following:</p>
<pre><code>exercise4.rref()[0]
Matrix([
[1, 0, -1, 0],
[0, 1, 2, 0],
[0, 0, 0, 1]])
</code></pre>
<p>...which, as an aside, is different from my NumPy reduced matrix of</p>
<pre><code>exercise4 = np.array([[1,3,5,7],[3,5,7,9],[5,7,9,1]])
exercise4[1] = exercise4[1] + -3*exercise4[0]
exercise4[2] = exercise4[2] + -5*exercise4[0]
exercise4[1] = -1/4*exercise4[1]
exercise4[0] = exercise4[0] + -3*exercise4[1]
exercise4[2] = exercise4[2] + 8*exercise4[1]
exercise4
array([[ 1, 0, -1, -2],
[ 0, 1, 2, 3],
[ 0, 0, 0, -10]])
</code></pre>
<p><code>rref()[1]</code> here returns <code>(0, 1, 3)</code>, the third element of which is obviously incorrect, as it is the last element of the augmented matrix. The third row is inconsistent, and there should be no third pivot column.</p>
<p>Is it an inherent flaw of <code>sympy.Matrix().rref()</code> that it will incorrectly interpret inconsistent pivot columns? Is that something that I need to be mindful of, or is there some way around this?</p>
|
<p>Doing your pivot on [2,3] produces the <code>rref</code> matrix:</p>
<pre><code>In [269]: M
Out[269]:
array([[ 1., 0., -1., -2.],
[ -0., 1., 2., 3.],
[ 0., 0., 0., -10.]])
In [271]: M[0]-=M[2]*(-2/-10)
In [276]: M[1]-=M[2]*(3/-10)
In [278]: M[2]/= -10
In [281]: M
Out[281]:
array([[ 1., 0., -1., 0.],
[ 0., 1., 2., 0.],
[-0., -0., -0., 1.]])
</code></pre>
<p>When I plug your matrix into an interactive form, I get the <code>sympy</code> result</p>
<p><a href="http://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi?c=rref" rel="nofollow noreferrer">http://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi?c=rref</a></p>
<p>Here too: <a href="https://www.emathhelp.net/calculators/linear-algebra/reduced-row-echelon-form-rref-caclulator/" rel="nofollow noreferrer">https://www.emathhelp.net/calculators/linear-algebra/reduced-row-echelon-form-rref-caclulator/</a></p>
<p>This version reduces the last row, so all leading terms are 1. But one source allows:</p>
<p><a href="https://stattrek.com/statistics/dictionary.aspx?definition=reduced_row_echelon_form" rel="nofollow noreferrer">https://stattrek.com/statistics/dictionary.aspx?definition=reduced_row_echelon_form</a></p>
<blockquote>
<p>Note: Some references present a slightly different description of the row echelon form. They do not require that the first non-zero entry in each row is equal to 1.</p>
</blockquote>
|
python|numpy|matrix|linear-algebra|sympy
| 0
|
4,389
| 45,180,564
|
Unit Testing Pandas DataFrame
|
<p>I'm looking to develop a unit test where it compares two DataFrames and returns True if their lengths are the same and if not returns the difference in length as well as what the missing output rows are.</p>
<p>For instance:
Example 1:</p>
<pre><code>df1 = {0,1,2,3,4}
df2 = {0,1,2,3,4}
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
<p>Example 2:</p>
<pre><code>df1 = {0,1,2,3,4}
df2 = {0,2,3,4}
</code></pre>
<blockquote>
<p>False. 2 is missing. </p>
</blockquote>
<p>Notifies me that the second item in df1 is missing from df2. </p>
<p>Is this something that is possible?</p>
|
<p>I think first you must decide on what you want: either a unit test or a function that returns the difference between two data frames.</p>
<p>If the former case, you could use <code>pd.util.testing.assert_frame_equal</code>:</p>
<pre><code>first = pd.DataFrame(np.arange(16).reshape((4,4)), columns=['A', 'B', 'C', 'D'])
first['A'][0] = 99
second = pd.DataFrame(np.arange(16).reshape((4,4)), columns=['A', 'B', 'C', 'D'])
pd.util.testing.assert_frame_equal(first, second)
</code></pre>
<p>and if your <code>DataFrame</code>s differ you'll get an assertion error</p>
<pre><code>AssertionError: DataFrame.iloc[:, 0] are different
DataFrame.iloc[:, 0] values are different (25.0 %)
[left]: [99, 4, 8, 12]
[right]: [0, 4, 8, 12]
</code></pre>
<p>In the latter case, if you really want a function to tell you how many lines are missing and what's different from one data frame to the other, then what you are looking for is not a unit test.</p>
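<p>If the diff itself is what you need, one possible sketch (separate from the unit-test approach above; it assumes both frames share the same columns and have no duplicate rows) uses a merge indicator:</p>
<pre><code>import pandas as pd

merged = df1.merge(df2, how='left', indicator=True)
missing = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(len(df1) - len(df2))  # difference in length
print(missing)              # rows of df1 absent from df2
</code></pre>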
|
python|unit-testing|pandas|dataframe|diff
| 3
|
4,390
| 45,122,044
|
Plot partial stacked bar chart in pandas
|
<p>I have a DataFrame df in the following form. I want to plot a graph with 4 pairs of bars: for each of the four days, there are two bars representing 0 and 1. Each of those 2 bars should itself be stacked, so each bar has 3 colors, representing <30, 30-70 and >70. I tried to use "stacked = True" but a single bar with 6 colors was plotted instead of paired bars. Can anyone please help? Thanks a lot.</p>
<pre><code>Score <30 30-70 >70
Gender 0 1 0 1 0 1
2017-07-09 23 10 25 13 12 21
2017-07-10 13 14 12 14 15 10
2017-07-11 24 25 10 15 20 15
2017-07-12 23 17 20 17 18 17
</code></pre>
|
<p>You can use the <code>bottom</code> parameter.
Here is the way to go:</p>
<pre><code>>> import matplotlib.pyplot as plt
>> import numpy as np
>> import pandas as pd
>>
>> columns = pd.MultiIndex.from_tuples([(r, b) for r in ['<30', '30-70', '>70']
>> for b in [0, 1]])
>> index = ['2017-07-%s' % d for d in ('09', '10', '11', '12')]
>> df = pd.DataFrame([[23,10,25,13,12,21], [13,14,12,14,15,10],
>> [24,25,10,15,20,15], [23,17,20,17,18,17]],
>> columns=columns, index=index)
>>
>> width = 0.25
>> x = np.arange(df.shape[0])
>> xs = [x - width / 2 - 0.01, x + width / 2 + 0.01]
>> for b in [0, 1]:
>> plt.bar(xs[b], df[('<30', b)], width, color='r')
>> plt.bar(xs[b], df[('30-70', b)], width, bottom=df[('<30', b)], color='g')
>> plt.bar(xs[b], df[('>70', b)], width, bottom=df[('<30', b)] + df[('30-70', b)], color='b')
>> plt.xticks(x, df.index)
>> plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/fiuoG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fiuoG.png" alt="enter image description here"></a></p>
|
python|pandas
| 3
|
4,391
| 45,225,814
|
Pandas: Best way to filter bottom 10% and top 25% of data within a groupby using quantile
|
<p>I have a time series in pandas with prices and times. I would like to group the dates by 1 month time intervals, calculate the 10-75% quantile of prices for each month and then filter the original dataframe using these values (so that only the prices that fall between 10% and 75% are left).</p>
<p>The dataframe looks something like this:</p>
<pre><code>df =pd.DataFrame({'date':['01-01-16','02-05-16','01-06-16','01-03-16','01-04-16'],'price':[10,3,4,3,4]})
df['date'] = pd.to_datetime(df.date)
df.index = df.date
date price
date
2016-01-01 2016-01-01 10
2016-01-06 2016-01-06 3
2016-01-03 2016-01-03 11
2016-01-04 2016-01-04 9
2016-02-05 2016-02-05 4
2016-02-12 2016-02-12 3
2016-02-15 2016-02-15 6
</code></pre>
<p>And the quantile in each month looks something like this:</p>
<pre><code>dg = df.groupby(pd.TimeGrouper('1M')).quantile([0.1,0.75])
price
date
2016-01-31 0.10 4.80
0.75 10.25
2016-02-29 0.10 3.20
0.75 5.00
</code></pre>
<p>After filtering, I would like my final dataframe to look like:</p>
<pre><code> date price
date
2016-01-01 2016-01-01 10
2016-01-04 2016-01-04 9
2016-02-05 2016-02-05 4
</code></pre>
<p>I am guessing there is probably an easier way to do this than what I am currently thinking - any help would be much appreciated.</p>
|
<p>First, define a function to check whether a Series is between the specified quantiles:</p>
<pre><code>def in_qrange(ser, q):
return ser.between(*ser.quantile(q=q))
</code></pre>
<p>This returns a boolean Series. If you pass it to <code>resample(...).transform(...)</code>, you get:</p>
<pre><code>df.resample('1M')['price'].transform(in_qrange, q=[0.1, 0.75])
Out:
date
2016-01-01 True
2016-01-03 False
2016-01-04 True
2016-01-06 False
2016-02-05 True
2016-02-12 False
2016-02-15 False
Name: price, dtype: bool
</code></pre>
<p>You can use this to filter the original DataFrame:</p>
<pre><code>df.loc[df.resample('1M')['price'].transform(in_qrange, q=[0.1, 0.75])]
Out:
date price
date
2016-01-01 2016-01-01 10
2016-01-04 2016-01-04 9
2016-02-05 2016-02-05 4
</code></pre>
|
python|pandas|numpy
| 3
|
4,392
| 57,084,473
|
Return rows with max/min values at bottom of dataframe (python/pandas)
|
<p>I want to write a function that can look at a dataframe, find the max or min value in a specified column, then return the entire dataframe with the row(s) containing the max or min value appended at the bottom.</p>
<p>I have made it so that the rows with the max or min value alone get returned.</p>
<pre><code>def findAggregate(df, transType, columnName=None):
if transType == 'max1Column':
return df[df[columnName] == df[columnName].max()]
elif transType == 'min1Column':
return df[df[columnName] == df[columnName].min()]
</code></pre>
<p>Given the dataframe below, I want to check col2 for the MIN value</p>
<p>Original Dataframe:</p>
<pre><code>col1 col2 col3
blue 2 dog
orange 18 cat
black 6 fish
</code></pre>
<p>Expected output:</p>
<pre><code>col1 col2 col3
blue 2 dog
orange 18 cat
black 6 fish
blue 2 dog
</code></pre>
<p>Actual output:</p>
<pre><code>col1 col2 col3
blue 2 dog
</code></pre>
|
<h3>Focus on the index values</h3>
<p>And use one <code>loc</code></p>
<pre><code>i = df.col2.idxmin()
df.loc[[*df.index] + [i]]
col1 col2 col3
0 blue 2 dog
1 orange 18 cat
2 black 6 fish
0 blue 2 dog
</code></pre>
<hr>
<p>Same idea but with Numpy and <code>iloc</code></p>
<pre><code>i = np.arange(len(df))
a = df.col2.to_numpy().argmin()
df.iloc[np.append(i, a)]
col1 col2 col3
0 blue 2 dog
1 orange 18 cat
2 black 6 fish
0 blue 2 dog
</code></pre>
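<p>If you want to keep the function from the question, a minimal sketch (assuming <code>pandas</code> is imported as <code>pd</code>) that appends the matching row(s) to the bottom of the original frame could look like this:</p>
<pre><code>def findAggregate(df, transType, columnName=None):
    if transType == 'max1Column':
        extreme = df[df[columnName] == df[columnName].max()]
    elif transType == 'min1Column':
        extreme = df[df[columnName] == df[columnName].min()]
    else:
        return df
    # concatenate the extreme row(s) onto the end of the original frame
    return pd.concat([df, extreme])
</code></pre>
<p>Using the boolean mask rather than <code>idxmin</code>/<code>idxmax</code> keeps all tied rows, which matches the "row(s)" wording in the question.</p>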
|
python|pandas|csv|max|min
| 6
|
4,393
| 57,165,842
|
Extract info from original dataframe after doing some analysis on it
|
<p>I had a dataframe and had to do some cleansing to minimize the duplicates. To do that I created a dataframe with only 8 of the original 40 columns. Now I need two more columns from the original dataframe for further analysis, but including them earlier would have messed with the deduplication. Does anyone have an idea how to "extract" these columns based on the new "clean" dataframe I have?</p>
|
<p>You can merge the new "clean" dataframe with the other two columns by using the indexes. Let me use a practical example. Suppose the "initial" dataframe, called "df", is:</p>
<pre><code>df
name year reports location
0 Jason 2012 4 Cochice
1 Molly 2012 24 Pima
2 Tina 2013 31 Santa Cruz
3 Jake 2014 2 Maricopa
4 Amy 2014 3 Yuma
</code></pre>
<p>while the "clean" dataframe is:</p>
<pre><code>d1
year location
0 2012 Cochice
2 2013 Santa Cruz
3 2014 Maricopa
</code></pre>
<p>The remaining columns are saved in dataframe "d2" (<code>d2 = df[['name','reports']]</code>):</p>
<pre><code>d2
name reports
0 Jason 4
1 Molly 24
2 Tina 31
3 Jake 2
4 Amy 3
</code></pre>
<p>By using an inner join on the indexes, <code>d1.merge(d2, how='inner', left_index=True, right_index=True)</code>, you get the following result:</p>
<pre><code> name year reports location
0 Jason 2012 4 Cochice
2 Tina 2013 31 Santa Cruz
3 Jake 2014 2 Maricopa
</code></pre>
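<p>For reference, a minimal end-to-end sketch (just an illustration, assuming the example frames above):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
                   'year': [2012, 2012, 2013, 2014, 2014],
                   'reports': [4, 24, 31, 2, 3],
                   'location': ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma']})

d1 = df.loc[[0, 2, 3], ['year', 'location']]  # the "clean" dataframe after deduplication
d2 = df[['name', 'reports']]                  # the two columns needed later

# the shared index lines the rows back up
result = d1.merge(d2, how='inner', left_index=True, right_index=True)
print(result)
</code></pre>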
|
python|pandas
| 1
|
4,394
| 56,944,018
|
One-hot encoding in pytorch/torchtext
|
<p>I have a <code>BucketIterator</code> from <code>torchtext</code> that I feed to a model in <code>pytorch</code>. An example of how the iterator is constructed:</p>
<pre><code>train_iter, val_iter = BucketIterator.splits((train,val),
batch_size=batch_size,
sort_within_batch = True,
device = device,
shuffle=True,
sort_key=lambda x: (len(x.src), len(x.trg)))
</code></pre>
<p>The data is then fed to a model like this, where I use the <code>nn.Embedding</code> layer. </p>
<pre><code>class encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.input_dim = input_dim
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src sent len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src sent len, batch size, emb dim]
hidden_enc = []
outputs, hidden = self.rnn(embedded[0,:,:].unsqueeze(0))
for i in range(1,len(embedded[:,1,1])):
outputs, hidden = self.rnn(embedded[i,:,:].unsqueeze(0),hidden)
hidden_cpu = []
for k in range(len(hidden)):
hidden_cpu.append(hidden[k])
hidden_cpu[k] = hidden[k].cpu()
hidden_enc.append(tuple(hidden_cpu))
#outputs, hidden = self.rnn(embedded)
#outputs = [src sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
None
#outputs are always from the top hidden layer
return hidden, hidden_enc
</code></pre>
<p>But what if I wanted the embedding to be one-hot encoded? I work on formal languages and it would be nice to preserve orthogonality between tokens. It doesn't seem like <code>pytorch</code> or <code>torchtext</code> has any functionality for doing this. </p>
|
<p>You can build the one-hot tensor yourself. A cleaned-up version of the function (the original was posted without imports or a return statement; this sketch handles both 1d and 2d inputs):</p>
<pre><code>import numpy as np
import torch

def get_one_hot_torch_tensor(in_tensor):
    """Convert a 1d or 2d integer torch tensor to a one-hot encoded tensor."""
    in_tensor = in_tensor.long()
    n_channels = int(in_tensor.max()) + 1  # number of distinct classes
    if in_tensor.dim() == 1:
        out_one_hot = torch.zeros(n_channels, in_tensor.shape[0])
        out_one_hot[in_tensor, torch.arange(in_tensor.shape[0])] = 1
    elif in_tensor.dim() == 2:
        out_one_hot = torch.zeros(n_channels, in_tensor.shape[0], in_tensor.shape[1])
        index = np.indices((in_tensor.shape[0], in_tensor.shape[1]))  # grids of row/col indices
        x = torch.from_numpy(index[0]).long()
        y = torch.from_numpy(index[1]).long()
        out_one_hot[in_tensor, x, y] = 1  # set the channel given by each entry to 1
    else:
        raise ValueError("expected a 1d or 2d tensor")
    return out_one_hot
</code></pre>
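<p>As a side note, newer PyTorch versions also ship a built-in for this; if your version has it, it is usually simpler (the one-hot encoding is placed along the last dimension):</p>
<pre><code>import torch
import torch.nn.functional as F

src = torch.tensor([[1, 0, 3], [2, 1, 0]])
one_hot = F.one_hot(src, num_classes=4)  # shape (2, 3, 4)
</code></pre>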
|
python|machine-learning|deep-learning|pytorch|torchtext
| 0
|
4,395
| 46,003,577
|
pandas: how to save to hdf dataframe with string columns containing np.nan
|
<p>I am wondering if there is a good way to save a pandas dataframe to hdf when it contains string columns.</p>
<p>Given the dataframe :</p>
<pre><code>In [6]: df.head()
Out[6]:
Protocol Src Bytes
10 ICMP NaN 1062
11 ICMP 10.2.0.74 2146
12 ICMP 10.100.100.1 857520
13 ICMP 10.100.100.2 857520
14 ICMP 10.100.100.2 7000
</code></pre>
<p><code>df.to_hdf('save.h5' ,'table')</code> results in:</p>
<pre><code>/home/lpuggini/MyApps/python_2_7_numerical/lib/python2.7/site-packages/pandas/core/generic.py:1138: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed,key->block0_values] [items->['Protocol', 'Src']]
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
</code></pre>
<p>This message can be avoided casting the columns to <code>str</code> as:</p>
<p><code>df['Src'] = df['Src'].apply(str)</code></p>
<p>but then the <code>np.nan</code> values will also be saved as the string <code>'nan'</code>.</p>
<p>Is there a better way to save a dataframe containing columns with strings and <code>np.nan</code>?</p>
|
<p>Columns in an HDF file must be of a single dtype. <code>nan</code> is represented internally by numpy as a <code>float</code>. You could replace the <code>nan</code> values with empty strings via:</p>
<pre><code>df['Src'] = df['Src'].fillna('')
</code></pre>
<p>HDF performs much better on numeric types than strings, so it may make more sense to convert your IP address to an integer type.</p>
<p>Edit: see @Jeff's note below. The above is true for format='fixed'. </p>
<p>Edit2: According to the <a href="https://pandas.pydata.org/pandas-docs/stable/io.html#storing-mixed-types-in-a-table" rel="nofollow noreferrer">docs</a>, you can specify the on-disk representation for nan for string dtype cols:</p>
<pre><code>df.to_hdf((...), nan_rep='whatever you want')
</code></pre>
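<p>Putting it together, a minimal sketch (assuming PyTables is installed, using a small frame shaped like the one in the question):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'Protocol': ['ICMP', 'ICMP', 'ICMP'],
                   'Src': [np.nan, '10.2.0.74', '10.100.100.1'],
                   'Bytes': [1062, 2146, 857520]})

# choose how NaN in string columns is written to disk
df.to_hdf('save.h5', 'table', format='table', nan_rep='')

restored = pd.read_hdf('save.h5', 'table')
print(restored)
</code></pre>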
|
python|pandas|hdf5
| 3
|
4,396
| 45,819,258
|
numpy inverse matrix not working for full rank matrix - hessian in logistic regression using newtons-method
|
<p>I am trying to compute the inverse of a full-rank matrix using numpy, but when I test the dot product, I find that it does not result in the identity matrix - which means it did not invert properly. </p>
<p>My code:</p>
<pre><code>H = calculateLogisticHessian(theta, X) #returns a 5x5 matrix
Hinv = np.linalg.inv(H)
print("H = " + str(H))
print("Hinv = " + str(Hinv))
I = np.dot(H, Hinv)
isIdentity = np.allclose(I , np.eye(5))
print("invdotinv = " + str(isIdentity) + "\n" + str(I))
</code></pre>
<p>and the output:</p>
<pre><code>H = [[ 77.88167948 81.49914902 85.11661855 88.73408809 92.35155763]
[ 81.49914902 85.36097831 89.2228076 93.0846369 96.94646619]
[ 85.11661855 89.2228076 93.32899665 97.4351857 101.54137475]
[ 88.73408809 93.0846369 97.4351857 101.7857345 106.1362833 ]
[ 92.35155763 96.94646619 101.54137475 106.1362833 110.73119186]]
Hinv = [[ 1.41918134e+02 1.00000206e+08 -1.00000632e+08 -9.99999204e+07
1.00000205e+08]
[ 1.00000347e+08 1.00000647e+08 -4.00001421e+08 9.99994941e+07
1.00000932e+08]
[ -1.00000916e+08 -4.00001424e+08 8.00003700e+08 5.68436971e+02
-3.00001928e+08]
[ -9.99997780e+07 1.00000065e+08 -5.72321511e+02 1.00000063e+08
-9.99997769e+07]
[ 1.00000205e+08 1.00000505e+08 -3.00001073e+08 -1.00000205e+08
2.00000567e+08]]
invdotinv = False
[[ 1.00000000e+00 -3.81469727e-06 -7.62939453e-06 3.81469727e-06
3.81469727e-06]
[ 0.00000000e+00 1.00000191e+00 -1.52587891e-05 3.81469727e-06
0.00000000e+00]
[ -3.81469727e-06 1.90734863e-06 9.99992371e-01 3.81469727e-06
3.81469727e-06]
[ 1.90734863e-06 -1.90734863e-06 -7.62939453e-06 1.00000191e+00
3.81469727e-06]
[ 0.00000000e+00 -1.90734863e-06 0.00000000e+00 0.00000000e+00
1.00000000e+00]]
</code></pre>
<p>As you can see the <code>np.dot(H, Hinv)</code> matrix does not return identity and results in <code>False</code> when evaluating <code>np.allclose(I , np.eye(5))</code>.</p>
<p>What am I doing wrong?</p>
<p><strong>Later edit</strong></p>
<p>this is the function which calculates the hessian:</p>
<pre><code>def calculateLogisticHessian(theta, X):
'''
    calculate the Hessian matrix based on the given function, assuming it is some kind of logistic function
:param theta: the weights
:param x: 2d array of arguments
:return: the hessian matrix
'''
m, n = X.shape
H = np.zeros((n,n))
for i in range(0,m):
hxi = h(theta, X[i]) #in case of logistic, will return p(y|x)
xiDotxiT = np.outer(X[i], np.transpose(X[i]))
hxiTimesOneMinHxi = hxi*(1-hxi)
currh = np.multiply(hxiTimesOneMinHxi, xiDotxiT)
H = np.add(H, currh)
return np.divide(H, m)
</code></pre>
<p>which should match the Hessian formula in Andrew Ng's video on Newton's method for logistic regression:</p>
<p><a href="https://youtu.be/fF-6QnVB-7E?t=5m6s" rel="nofollow noreferrer">https://youtu.be/fF-6QnVB-7E?t=5m6s</a>
at 5:06</p>
<blockquote>
<p>H = (1/m) * SUM_{i=1..m} [ h(X[i]) * (1 - h(X[i])) * (X[i] * X[i]^T) ]</p>
</blockquote>
<p>where X is the 2-d matrix of data and h() is the function based on theta (the weights), which in this case returns the logistic function.</p>
<p>the inputs I used:</p>
<pre><code>theta = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
X = np.array(range(5*7))
X = X.reshape((7,5))
H = calculateLogisticHessian(theta, X)
</code></pre>
<p>So is there an error in the way I implemented the Hessian formula, or is the issue in the inputs, and what is it?</p>
<p>Thanks!</p>
|
<p>Hessian matrices are often <a href="https://en.wikipedia.org/wiki/Condition_number" rel="nofollow noreferrer">ill-conditioned</a>. <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.cond.html" rel="nofollow noreferrer"><code>numpy.linalg.cond</code></a>
lets you compute the <em>condition number</em>:</p>
<pre><code>In [188]: np.linalg.cond(H)
Out[188]: 522295671550.72644
</code></pre>
<p>Since the condition number of <code>H</code> is very large, computing its inverse is numerically unstable: small floating-point rounding errors are amplified enormously, which is why <code>np.dot(H, Hinv)</code> is not exactly the identity.</p>
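<p>If the goal is the Newton update rather than the inverse itself, a common workaround (just a sketch, reusing <code>H</code> and <code>theta</code> from the question, with a placeholder gradient <code>g</code> for illustration) is to solve the linear system directly, or fall back to the pseudo-inverse when the Hessian is near-singular:</p>
<pre><code>import numpy as np

g = np.ones(5)                    # placeholder gradient, not computed from your data
step = np.linalg.solve(H, g)      # solves H @ step = g without forming inv(H)
theta_new = theta - step

# for a (near-)singular Hessian, the Moore-Penrose pseudo-inverse is more robust
step_pinv = np.dot(np.linalg.pinv(H), g)
</code></pre>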
|
numpy|logistic-regression|matrix-inverse|newtons-method|hessian
| 2
|
4,397
| 46,078,287
|
Plotting lines and bars from pandas dataframe on same graph using matplotlib
|
<p>I want to plot temperature data as a line, with rainfall data as a bar. I can do this easily in <a href="https://ibb.co/hWy2Bv" rel="nofollow noreferrer">Excel</a> but I'd prefer a fancy python graph to show it in a nicer way. </p>
<p>Some sample code to illustrate the problem:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
dates = pd.date_range('20070101',periods=365)
df = pd.DataFrame(data=np.random.randint(0,50,(365,4)), columns =list('ABCD'))
df['date'] = dates
df = df[['date','A', 'B', 'C', 'D']]
df.set_index(['date'],inplace=True) #set date as the index so it will plot as x axis
</code></pre>
<p>This creates a dataframe with four columns (imagine A and B are temp and C and D are rainfall). </p>
<p>I want to plot rainfall as bars, and temp as a line, but when I try to do this:</p>
<pre><code>ax = df.plot(y="A", kind="bar")
df.plot(y="B", kind = "line", ax = ax)
</code></pre>
<p>The lines plot but <a href="https://ibb.co/gAfojF" rel="nofollow noreferrer">not the bars.</a></p>
<p>This is a much simpler version of what I'm trying to do but I think it illustrates the problem. </p>
<p>EDIT:</p>
<p>The following code works: </p>
<pre><code>fig, ax= plt.subplots()
ax.plot_date(df.index, df.iloc[:,2], '-')
for i in range(3):
diff = df.index[1]-df.index[0]
spacing = diff/(1.3*len(df.columns))
ax.bar(df.index+(-5+i)*spacing, df.iloc[:,i],
width=spacing/diff, label=df.columns[i])
plt.legend()
plt.gcf().autofmt_xdate()
plt.show()
</code></pre>
<p>Would really appreciate a less complex answer as this seems quite verbose but it seems to work! </p>
|
<p>A simple way would be to use the <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#suppressing-tick-resolution-adjustment" rel="nofollow noreferrer"><code>x_compat</code></a> property:</p>
<pre><code>ax = df.plot(y="A", x_compat=True)  # plot the line first
df.plot(y="B", kind="bar", ax=ax)
</code></pre>
<p>You can then adjust the tick frequency.</p>
<p>Hat-tip: <a href="https://stackoverflow.com/a/39907390/5276797">https://stackoverflow.com/a/39907390/5276797</a></p>
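<p>If the mixed bar/line tick handling still gives you trouble, a plain-matplotlib sketch (assuming the <code>df</code> from the question, with temperature in column A and rainfall in column C) sidesteps the problem by drawing both directly on one axes:</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(df.index, df["C"], width=1.0, color="lightblue", label="rainfall")  # bars on the date axis
ax.plot(df.index, df["A"], color="red", label="temperature")               # line on the same axis
ax.legend()
fig.autofmt_xdate()
plt.show()
</code></pre>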
|
python|pandas|matplotlib
| 2
|
4,398
| 23,190,799
|
Capturing Datetime Objects in Pandas Dataframe
|
<p>If I'm reading the docs correctly for Pandas 0.13.1, read_csv should yield columns of datetimes when <code>parse_dates = [<col1>,<col2>...]</code> is invoked during the read. What I'm getting instead is columns of Timestamp objects. Even with the application of .to_datetime, I still end up with Timestamp objects. What am I missing here? How can I read the strings and convert straight to datetime objects that are stored in the dataframe? It seems as if the datetime objects are getting converted to Timestamps in the dataframe.</p>
<pre><code>df = read_csv('Beijing_2010_HourlyPM2.5_created20140325.csv',parse_dates=['Date (LST)'])
df['Date (LST)'][0] yields
Timestamp('2010-01-01 23:00:00', tz=None)
df['Date (LST)'] = pd.to_datetime(df['Date (LST)'])
df['Date (LST)'][0] still yields
Timestamp('2010-01-01 23:00:00', tz=None)
</code></pre>
|
<p>Timestamps are the way pandas deals with datetimes; you can <a href="https://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64">move between Timestamp, datetime64 and datetime</a>, but <strong>most of the time using Timestamp is what you want</strong> (and pandas just converts to it for you by default).</p>
<p><em>Note: a Timestamp column is really just int64 epoch nanoseconds under the hood, i.e. the same as numpy datetime64[ns] (which you'll see as the dtype of Timestamp columns).</em></p>
<p>If you <em>must</em> have a column of Python <code>datetime</code> objects, you can use the <code>to_pydatetime</code> method and keep the result from being converted back by giving the Series the <code>object</code> dtype; however, this will be both slower and use more space than just using Timestamps (because datetimes are essentially tuples while Timestamps are int64).</p>
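<p>A minimal sketch of forcing Python <code>datetime</code> objects (assuming a pandas version with the <code>.dt</code> accessor; not usually recommended):</p>
<pre><code>import pandas as pd

s = pd.to_datetime(pd.Series(['2010-01-01 23:00:00', '2010-01-02 00:00:00']))

# to_pydatetime returns an array of datetime.datetime; object dtype stops the conversion back
py_dates = pd.Series(s.dt.to_pydatetime(), dtype=object)
print(type(py_dates[0]))  # <class 'datetime.datetime'>
</code></pre>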
|
python|datetime|pandas
| 3
|
4,399
| 23,298,751
|
Filtering on data with a cast string->float type
|
<p>A few issues here, but I think the code is relatively straightforward.</p>
<p>the code is as follows:</p>
<pre><code> import pandas as pd
def establishAdjustmentFactor(df):
df['adjFactor']=df['Adj Close']/df['Close'];
df['chgFactor']=df['adjFactor']/df['adjFactor'].shift(1);
return df;
def yahooFinanceAccessor(ticker,year_,month_,day_):
import datetime
now = datetime.datetime.now()
month = str(int(now.strftime("%m"))-1)
day = str(int(now.strftime("%d"))+1)
year = str(int(now.strftime("%Y")))
data = pd.read_csv('/Users/myDir/Downloads/' + ticker + '.csv');
data['Date']=float(str(data['Date']).replace('-',''));
data.set_index('Date')
data=data.sort(['Date'],ascending=[1]);
return data
def calculateLongReturn(df):
df['Ret']=df['Adj Close'].pct_change();
return df;
argStartYear = '2014';
argStartMonth = '01';
argStartDay='01';
argEndYear = '2014';
argEndMonth = '04';
argEndDay = '30';
#read data
underlying = yahooFinanceAccessor("IBM,"1900","01","01");
#Get one day return
underlying = establishAdjustmentFactor(calculateLongReturn(underlying));
#filter here
underlying = underlying[(underlying['Date'] > long(argStartYear + argStartMonth + argStartDay)) & underlying['Date']<long(argEndYear+argEndMonth+argEndDay)];
</code></pre>
<p>Where this will evolve to a function and argStart(End) would be arguments to the function.</p>
<p>The idea is that there will be some parent function call that will keep a global dataframe of the entire price history of an underlying, and later calls will access that dataframe and filter on dates needed to see if there were splits.</p>
<p>Now, when I read the data and attempt the conversion after the <code>read_csv</code> call, I get the following error:</p>
<pre><code> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Applications/Spyder.app/Contents/Resources/lib/python2.7/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/Users/myDir/Documents/PythonProjects/dailyOptionValuation.py", line 70, in <module>
underlying = yahooFinanceAccessor("SVXY","1900","01","01");
File "/Users/myDir/Documents/PythonProjects/dailyOptionValuation.py", line 37, in yahooFinanceAccessor
data['Date']=float(str(data['Date']).replace('-',''));
ValueError: invalid literal for float(): 0 20140424
1 20140423
2 20140422
3 20140421
4 20140417
5 20140416
6 20140415
7 20140414
8 20140411
9 20140410
10 20140409
11 20140408
12 20140407
</code></pre>
<p>Any input as to why would be extremely helpful!</p>
|
<p>It appears I have found the issue after poking around a bit and changing the way I was thinking about the problem.</p>
<p>Any input on whether there is a more efficient way to do this would be great.</p>
<pre><code> def operateOverSetToCreateEasyKey(df):
for i in df.index:
df.ix[i,'fmtDate']=int(str(df.ix[i]['Date']).replace('-',''));
return df;
</code></pre>
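<p>A more efficient, vectorized sketch (assuming the <code>Date</code> column holds strings like <code>'2014-04-24'</code>; note that <code>.ix</code> has since been removed from pandas, so <code>.loc</code> or string methods are preferable today):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Date': ['2014-04-24', '2014-04-23', '2014-04-22']})

# strip the dashes and cast the whole column at once instead of looping row by row
df['fmtDate'] = df['Date'].str.replace('-', '').astype(int)
print(df)
</code></pre>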
|
python|pandas
| 0
|