Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
2,400
| 42,460,530
|
How do I make keras predict something other than a one-hot matrix?
|
<p>I have a dictionary of 122 unique values. I'm feeding the program over 45,000 records with 33 data points to refer to when making a prediction about what the output should be. What I've noticed is that it's only predicting <code>[[1.]...]</code>. I need it to predict 1's, 2's, 3's, ... up to 122. All are floats as well, so I don't know if that matters.</p>
<p>Here is my code:</p>
<pre><code>Y = faults['FAILMODE']
del faults['FAILMODE']
X = faults
len(Y.unique())
122
</code></pre>
<p>This is how I intend to associate the output with the actual string value (like 'Exhaust'):</p>
<pre><code>classes = {}
n = 1.
for u in Y:
    # only advance n when a new unique value is seen, so the mapping
    # matches the codes assigned by factorize (in order of appearance)
    if u not in classes.values():
        classes[n] = u
        n += 1.
</code></pre>
<p>I use this to turn all data into unique floats in the DataFrame.</p>
<pre><code>for col in X:
    values = pd.Series(X[col])
    X[col] = (values.factorize()[0]+1).astype('float')

Y = (Y.factorize()[0]+1).astype('float')
Y = pd.DataFrame(Y, columns = ['FAILMODE'])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
X_train = X_train.as_matrix()
X_test = X_test.as_matrix()
Y_train = Y_train.as_matrix()
Y_test = Y_test.as_matrix()
model = Sequential()
model.add(Dense(12, input_dim=7, init='uniform', activation='relu'))
model.add(Dense(7, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, nb_epoch=150, batch_size=10)
predictions = model.predict(X_test)
print predictions[:20]
[[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]]
</code></pre>
<p>Thanks for your help!</p>
|
<p>Your model is wrong in this case, change it to:</p>
<pre><code>model = Sequential()
model.add(Dense(12, input_dim=7, init='uniform', activation='relu'))
model.add(Dense(7, init='uniform', activation='relu'))
model.add(Dense(122, init='uniform', activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>And make sure the target labels (not the input data) are one-hot encoded with the function keras.utils.np_utils.to_categorical. Then the model should train.</p>
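<p>A minimal sketch of that encoding step (my assumption: the factorized labels in <code>Y_train</code>/<code>Y_test</code> are 1-based, as produced above, so they are shifted to 0-based first):</p>
<pre><code>from keras.utils.np_utils import to_categorical

# 122 classes, shifted from 1..122 to 0..121 before encoding
Y_train_onehot = to_categorical(Y_train.ravel() - 1, 122)  # shape (n_samples, 122)
Y_test_onehot = to_categorical(Y_test.ravel() - 1, 122)
</code></pre>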
<p>To then recover the integer class indices, when you predict with the model, you take the argmax of the output probability distribution, so you get the array index with the maximum probability.</p>
<pre><code>x = some_test_point  # a single input sample, shape (1, 7) for this model
y = model.predict(x)[0]
predicted_class = np.argmax(y)  # `class` is a reserved word in Python
</code></pre>
|
python|pandas|numpy|keras
| 6
|
2,401
| 42,250,340
|
RuntimeError: No C++ shape function registered for standard op: NearestNeighbors
|
<p>Update:-
Please try this code:-</p>
<pre><code>from tensorflow.contrib.learn.python.learn.estimators import kmeans as kmeans_lib
import random
import numpy as np
x = np.array([[random.random() for i in range(198)] for j in range(2384)])
km = kmeans_lib.KMeansClustering(num_clusters=200)
km.fit(x)
</code></pre>
<p>The error changes when the type of x in the line km.fit(x) is changed,
for example to x.astype('float32') or x.astype('float64').</p>
<p>I am trying to implement KMeansClustering using tensorflow.contrib.learn.python.learn.estimators.kmeans</p>
<p>But I am getting the following error while using this code:</p>
<pre><code>def cluster_data(X, num_clusters):
    kmeans = KMeansClustering(num_clusters=num_clusters)
    kmeans.fit(X.astype('float32'))
    y = kmeans.predict(X)
</code></pre>
<p>"No C++ shape function registered for standard op: %s" % op.type)
RuntimeError: No C++ shape function registered for standard op: NearestNeighbors"</p>
<p>X is a numpy array of shape (2000,1000)</p>
<p>Stacktrace:-</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\####\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-123-4bb22963e7ed>", line 1, in <module>
kmeans.fit(x.astype('float32'))
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 191, in new_func
arg_spec: Output from inspect.getargspec on the called function.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 355, in fit
"""Initializes a BaseEstimator instance.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 733, in _train_model
self._features_info = tensor_signature.create_signatures(features)
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3517, in get_controller
# pylint: disable=g-doc-return-or-yield
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 733, in _train_model
self._features_info = tensor_signature.create_signatures(features)
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2945, in device
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 699, in _train_model
'2016-09-23',
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1052, in _get_train_ops
training hooks.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1021, in _call_model_fn
def __init__(self,
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\kmeans.py", line 201, in _model_fn
kmeans_plus_plus_num_retries=self.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\factorization\python\ops\clustering_ops.py", line 295, in training_graph
# Implementation of kmeans.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\factorization\python\ops\clustering_ops.py", line 198, in _infer_graph
with ops.colocate_with(clusters):
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2869, in colocate_with
in the variable `scope`. This value can be used to name an
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\factorization\python\ops\clustering_ops.py", line 195, in _infer_graph
# nearest_neighbors op.
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\contrib\factorization\python\ops\gen_clustering_ops.py", line 90, in nearest_neighbors
centers=centers, k=k, name=name)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
return op
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3517, in get_controller
# pylint: disable=g-doc-return-or-yield
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
return op
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 4057, in name_scope
ACTIVATIONS = "activations"
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3517, in get_controller
# pylint: disable=g-doc-return-or-yield
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 4057, in name_scope
ACTIVATIONS = "activations"
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2763, in name_scope
c = []
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 4057, in name_scope
ACTIVATIONS = "activations"
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
return op
File "C:\Users\####\Anaconda3\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 254, in _MaybeColocateWith
if not inputs:
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 759, in apply_op
with _MaybeColocateWith(must_colocate_inputs):
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2242, in create_op
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1617, in set_shapes_for_outputs
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1568, in call_with_requiring
return getattr(x, f)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "C:\Users\####\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 680, in _call_cpp_shape_fn_impl
"No C++ shape function registered for standard op: %s" % op.type)
RuntimeError: No C++ shape function registered for standard op: NearestNeighbors
</code></pre>
<p>I also tried it in an Ubuntu VirtualBox VM but got the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-0608497fa903>", line 1, in <module>
m.fit(x)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 410, in fit
SKCompat(self).fit(x, y, batch_size, steps, max_steps, monitors)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1353, in fit
monitors=all_monitors)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 426, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 981, in _train_model
config=self.config.tf_config) as mon_sess:
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 315, in MonitoredTrainingSession
return MonitoredSession(session_creator=session_creator, hooks=all_hooks)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 601, in __init__
session_creator, hooks, should_recover=True)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 434, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 767, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 772, in _create_session
return self._sess_creator.create_session()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 494, in create_session
self.tf_sess = self._session_creator.create_session()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 375, in create_session
init_fn=self._scaffold.init_fn)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/session_manager.py", line 262, in prepare_session
sess.run(init_op, feed_dict=init_feed_dict)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'input' with dtype float
[[Node: input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'input', defined at:
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/pydevconsole.py", line 526, in <module>
pydevconsole.start_server(pydev_localhost.get_localhost(), int(port), int(client_port))
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/pydevconsole.py", line 359, in start_server
process_exec_queue(interpreter)
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/pydevconsole.py", line 218, in process_exec_queue
more = interpreter.add_exec(code_fragment)
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/_pydev_bundle/pydev_console_utils.py", line 251, in add_exec
more = self.do_add_exec(code_fragment)
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/_pydev_bundle/pydev_ipython_console.py", line 41, in do_add_exec
res = bool(self.interpreter.add_exec(codeFragment.text))
File "/home/ubuntu/softwares/pycharm-community-2016.3.2/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 451, in add_exec
self.ipython.run_cell(line, store_history=True)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes
if self.run_code(code, result):
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-0608497fa903>", line 1, in <module>
m.fit(x)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 410, in fit
SKCompat(self).fit(x, y, batch_size, steps, max_steps, monitors)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1353, in fit
monitors=all_monitors)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 426, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 932, in _train_model
features, labels = input_fn()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py", line 430, in input_builder
self._input_dtype, 'input')
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py", line 426, in get_placeholder
dtypes.as_dtype(dtype), [None] + shape[1:], name=name_prepend)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1520, in placeholder
name=name)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2149, in _placeholder
name=name)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input' with dtype float
[[Node: input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
</code></pre>
<p>Update:-
As per the suggestion given by @Changming Sun I tried updating gen_clustering_ops.py; it solved the first problem but generated a new error.</p>
<pre><code> Traceback (most recent call last):
File "C:\Users\#####\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-21-4bb22963e7ed>", line 1, in <module>
kmeans.fit(x.astype('float32'))
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 410, in fit
SKCompat(self).fit(x, y, batch_size, steps, max_steps, monitors)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1353, in fit
monitors=all_monitors)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 426, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 933, in _train_model
self._check_inputs(features, labels)
File "C:\Users\#####\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 731, in _check_inputs
(str(features), str(self._features_info)))
ValueError: Features are incompatible with given information. Given features: Tensor("input:0", shape=(?, 198), dtype=float32), required signatures: TensorSignature(dtype=tf.float64, shape=TensorShape([Dimension(None), Dimension(198)]), is_sparse=False).
</code></pre>
<p>I am facing a similar issue with GMM in tensorflow</p>
<p><a href="https://stackoverflow.com/questions/42299378/valueerror-features-are-incompatible-with-given-information-given-features-te]">ValueError: Features are incompatible with given information. Given features: Tensor("input:0", shape=(?, 198), dtype=float32)</a></p>
|
<p>Open
"Lib\site-packages\tensorflow\contrib\factorization\python\ops\gen_clustering_ops.py"</p>
<p>Add</p>
<pre><code>ops.RegisterShape("NearestNeighbors")(None)
</code></pre>
<p>For any error like this, fix it in this way.</p>
|
python|python-3.x|machine-learning|tensorflow|k-means
| 0
|
2,402
| 69,946,381
|
Python data conversion
|
<p>Please help with the below request:</p>
<p>I need to clean up the df below into df_1: 'SKU' holds multiple required values, and this column needs to be exploded into multiple rows.</p>
<pre><code>df = pd.DataFrame([[1,'NaN','abj','1/1/2021'],
[2,'[{"Result":"00018"},{"Result":"00065"}]','abj','1/1/2021'],
[3,'','abj','1/1/2021']],
columns = ['ID','SKU','NOTES','Date'])
</code></pre>
<p>df</p>
<pre><code>df_1 = pd.DataFrame([['1','','abj'],
['2','00018','abj'],
['2','00065','abj'],
['3','','abj']],
columns = ['ID','SKU','NOTES'])
</code></pre>
<p>df_1</p>
|
<p>If all records follow the same pattern, this should clean it for you.</p>
<p>Beware that the code below is modifying <em>df</em>.</p>
<pre><code>import pandas as pd
import json
import numpy as np
def clean_SKU(x):
    if pd.isna(x) or x == "" or x == "NaN":
        return x
    else:
        return [i['Result'] for i in json.loads(x)]

df.SKU = df.SKU.apply(lambda x: clean_SKU(x))
df.explode('SKU')
</code></pre>
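<p>Since <code>explode</code> returns a new DataFrame rather than modifying <code>df</code> in place, assign the result to keep it:</p>
<pre><code>df_1 = df.explode('SKU')
</code></pre>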
|
python|pandas
| 0
|
2,403
| 69,758,754
|
Sum all elements in a column in pandas
|
<p>I have data in one column of a Python dataframe.</p>
<pre><code>1-2 3-4 8-9
4-5 6-2
3-1 4-2 1-4
</code></pre>
<p>I need to sum all the data available in that column.</p>
<p>I tried to apply the logic below but it's not working for a list of lists.</p>
<pre><code>lst=[]
str='5-7 6-1 6-3'
str2 = str.split(' ')
for ele in str2:
    lst.append(ele.split('-'))
print(lst)
sum(lst)
</code></pre>
<p>Can anyone please let me know the simplest method?</p>
<p>My expected result should be:</p>
<pre><code>27
17
15
</code></pre>
|
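<p>First, a minimal frame to reproduce (my assumption: the column is named <code>col</code>, matching the snippets below):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col': ['1-2 3-4 8-9', '4-5 6-2', '3-1 4-2 1-4']})
</code></pre>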
<p>I think we can do a split</p>
<pre><code>df.col.str.split(' |-').map(lambda x : sum(int(y) for y in x))
Out[149]:
0 27
1 17
2 15
Name: col, dtype: int64
</code></pre>
<p>Or</p>
<pre><code>pd.DataFrame(df.col.str.split(' |-').tolist()).astype(float).sum(1)
Out[156]:
0 27.0
1 17.0
2 15.0
dtype: float64
</code></pre>
|
python-3.x|pandas|dataframe
| 3
|
2,404
| 69,959,127
|
How to get 2 decimal places in prediction results in Python
|
<p>My outputs have too many decimal places, but I want float results with 2 decimal places. Can you help me?</p>
<p>EX: <code>42.44468745 -> 42.44 </code></p>
<pre><code>y_pred=ml.predict(x_test)
print(y_pred)
</code></pre>
<p>Output:</p>
<pre><code>[42.44468745 18.38280575 7.75539511 19.05326276 11.87002186 26.89180941
18.97589775 22.01291508 9.08079557 6.72623692 21.81657224 22.51415263
24.46456776 13.75392096 21.57583275 25.73401908 30.95880457 11.38970094
7.28188274 21.98202474 17.24708345 38.7390475 12.68345506 11.2247757
5.32814356 10.41623796 7.30681434]
</code></pre>
|
<p>Use <code>round(num, round_decimal)</code>.
For example:</p>
<pre><code>num = 42.44468745
print(round(num, 2))
</code></pre>
<p>Output:</p>
<pre><code>42.44
</code></pre>
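<p>For the whole prediction array from the question at once, NumPy's rounding works element-wise:</p>
<pre><code>import numpy as np

y_pred_rounded = np.round(y_pred, 2)  # round every element to 2 decimal places
print(y_pred_rounded)
</code></pre>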
|
python|scikit-learn|floating-point|predict|sklearn-pandas
| 0
|
2,405
| 43,390,222
|
One sided t-test for linear regression?
|
<p>I have problems with this. I am trying to do a linear regression and test the slope. The t-test checks if the slope is far away from 0. The slope can be negative or positive. I am only interested in negative slopes.</p>
<p>In this example, the slope is positive, which I am not interested in, so the P value should be large. But it is small because right now it tests whether the slope is far away from 0, in either direction. (I am forcing an intercept of zero, which is what I want.) Can someone help me with the syntax to test only whether the slope is negative? In that case the P value should be large.</p>
<p>And how can I change it to, say, a 99% or 95% confidence level?</p>
<pre><code>import statsmodels.api as sm
import matplotlib.pyplot as plt
import numpy
X = [-0.013459134, 0.01551033, 0.007354476, 0.014686473, -0.014274754, 0.007728445, -0.003034186, -0.007409397]
Y = [-0.010202462, 0.003297546, -0.001406498, 0.004377665, -0.009244517, 0.002136552, 0.006877126, -0.001494624]
regression_results = sm.OLS (Y, X, missing = "drop").fit ()
P_value = regression_results.pvalues [0]
R_squared = regression_results.rsquared
K_slope = regression_results.params [0]
conf_int = regression_results.conf_int ()
low_conf_int = conf_int [0][0]
high_conf_int = conf_int [0][1]
fig, ax = plt.subplots ()
ax.grid (True)
ax.scatter (X, Y, alpha = 1, color='orchid')
x_pred = numpy.linspace (min (X), max (X), 40)
y_pred = regression_results.predict (x_pred)
ax.plot (x_pred, y_pred, '-', color='darkorchid', linewidth=2)
</code></pre>
|
<p>The p-value for the two-sided t-test is calculated by:</p>
<pre><code>import scipy.stats as ss
df = regression_results.df_resid
ss.t.sf(regression_results.tvalues[0], df) * 2 # About the same as (1 - cdf) * 2.
# see @user333700's comment
Out[12]: 0.02903685649821508
</code></pre>
<p>Your modification would just be:</p>
<pre><code>ss.t.cdf(regression_results.tvalues[0], df)
Out[14]: 0.98548157175089246
</code></pre>
<p>since you are interested in the left-tail only. </p>
<p>For confidence interval, you just need to pass the alpha parameter:</p>
<pre><code>regression_results.conf_int(alpha=0.01)
</code></pre>
<p>for a 99% confidence interval.</p>
|
python|pandas|scikit-learn|statsmodels|t-test
| 3
|
2,406
| 43,094,084
|
hdf to ndarray in numpy - fast way
|
<p>I am looking for a fast way to load my collection of hdf files into a numpy array where each row is a flattened version of an image. What I exactly mean:</p>
<p>My hdf files store, beside other informations, images per frames. Each file holds 51 frames with 512x424 images. Now I have 300+ hdf files and I want the image pixels to be stored as one single vector per frame, where all frames of all images are stored in one numpy ndarray. The following picture should help to understand:</p>
<p><a href="https://i.stack.imgur.com/vaW8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vaW8t.png" alt="Visualized process of transforming many hdf files to one numpy array"></a></p>
<p>What I have so far is a very slow method, and I have no idea how to make it faster. The problem, I think, is that my final array is appended to too often: I observe that the first files are loaded into the array very fast, but speed then decreases rapidly (observed by printing the number of the current hdf file).</p>
<p>My current code:</p>
<pre><code>os.chdir(os.getcwd()+"\\datasets")

# predefine first row to use vstack later
numpy_data = np.ndarray((1,217088))

# search for all .hdf files
for idx, file in enumerate(glob.glob("*.hdf5")):
    f = h5py.File(file, 'r')
    # load all img data to imgs (=ndarray, but not flattened)
    imgs = f['img']['data'][:]

    # iterate over all frames (51)
    for frame in range(0, imgs.shape[0]):
        print("processing {}/{} (file/frame)".format(idx+1,frame+1))
        data = np.array(imgs[frame].flatten())
        numpy_data = np.vstack((numpy_data, data))

        # delete first row after another one is stored
        if idx == 0 and frame == 0:
            numpy_data = np.delete(numpy_data, 0, 0)

    f.close()
</code></pre>
<p>For further information: I need this for learning a decision tree. Since my hdf files are bigger than my RAM, I think converting into a numpy array saves memory and is therefore better suited.</p>
<p>Thanks for any input.</p>
|
<p>Do you really want to load all images into RAM instead of using a single HDF5 file? Accessing an HDF5 file can be quite fast if you don't make any mistakes (unnecessary fancy indexing, improper chunk-cache-size).
If you want the numpy way, this would be a possibility:</p>
<pre><code>os.chdir(os.getcwd()+"\\datasets")
img_per_file=51

# get all HDF5-Files
files=[]
for idx, file in enumerate(glob.glob("*.hdf5")):
    files.append(file)

# allocate memory for your final Array (change the datatype if your images have some other type)
numpy_data=np.empty((len(files)*img_per_file,217088),dtype=np.uint8)

# Now read all the data
ii=0
for i in range(0,len(files)):
    f = h5py.File(files[i], 'r')  # note: files[i], not files[0]
    imgs = f['img']['data'][:]
    f.close()
    numpy_data[ii:ii+img_per_file,:]=imgs.reshape((img_per_file,217088))
    ii=ii+img_per_file
</code></pre>
<p>Writing your data to a single HDF5-File would be quite similar:</p>
<pre><code>f_out=h5py.File(File_Name_HDF5_out,'w')

# create the dataset (change the datatype if your images have some other type)
dset_out = f_out.create_dataset(Dataset_Name_out, (len(files)*img_per_file,217088), chunks=(1,217088), dtype='uint8')

# Now read all the data
ii=0
for i in range(0,len(files)):
    f = h5py.File(files[i], 'r')  # note: files[i], not files[0]
    imgs = f['img']['data'][:]
    f.close()
    dset_out[ii:ii+img_per_file,:]=imgs.reshape((img_per_file,217088))
    ii=ii+img_per_file

f_out.close()
</code></pre>
<p>If you only want to access whole images afterwards, the chunk size should be okay. If not, you have to change it to fit your needs.</p>
<p>What you should do when accessing a HDF5-File:</p>
<ul>
<li><p>Use a chunk-size, which fits your needs.</p></li>
<li><p>Set a proper chunk-cache-size. This can be done with the h5py low level api or h5py_cache. <a href="https://pypi.python.org/pypi/h5py-cache/1.0" rel="nofollow noreferrer">https://pypi.python.org/pypi/h5py-cache/1.0</a></p></li>
<li><p>Avoid any type of fancy indexing. If your Dataset has n dimensions access it in a way that the returned Array has also n dimensions. </p>
<pre><code># Chunk size is [50,50] and we iterate over the first dimension
numpyArray=h5_dset[i,:] #slow
numpyArray=np.squeeze(h5_dset[i:i+1,:]) #does the same but is much faster
</code></pre></li>
</ul>
<p><strong>EDIT</strong>
This shows how to read your data into a memory-mapped numpy array. I think your method expects data of format np.float32.
<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html#numpy.memmap" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html#numpy.memmap</a></p>
<pre><code>numpy_data = np.memmap('Your_Data.npy', dtype=np.float32, mode='w+', shape=(len(files)*img_per_file, 217088))
</code></pre>
<p>Everything else could be kept the same. If it works I would also recommend using an SSD instead of a hard disk.</p>
|
python|numpy|hdf5|h5py
| 1
|
2,407
| 43,317,675
|
How to generate a cyclic sequence of numbers without using looping?
|
<p>I want to generate a cyclic sequence of numbers like <code>[A B C A B C]</code> with arbitrary length <code>N</code>. I tried:</p>
<pre><code>import numpy as np
def cyclic(N):
    x = np.array([1.0,2.0,3.0]) # The main sequence
    y = np.tile(x,N//3) # Repeats the sequence N//3 times
    return y
</code></pre>
<p>but the problem with my code is that if I enter an integer that isn't divisible by three, the result has a smaller length than the <code>N</code> I expected. I know this is a very newbish question but I really got stuck.</p>
|
<p>You can just use <code>numpy.resize</code></p>
<pre><code>x = np.array([1.0, 2.0, 3.0])
y = np.resize(x, 13)
y
Out[332]: array([ 1., 2., 3., 1., 2., 3., 1., 2., 3., 1., 2., 3., 1.])
</code></pre>
<p>WARNING: This answer does not extend to 2D, as <code>resize</code> flattens the array before repeating it.</p>
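<p>If you prefer to avoid <code>resize</code>, here is a small sketch of the same idea with <code>np.tile</code> plus slicing, fixing the original function for lengths not divisible by 3:</p>
<pre><code>import numpy as np

def cyclic(N):
    x = np.array([1.0, 2.0, 3.0])   # the main sequence
    reps = -(-N // len(x))          # ceiling division: enough full repeats
    return np.tile(x, reps)[:N]     # trim to exactly N elements
</code></pre>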
|
python|numpy|vectorization
| 5
|
2,408
| 72,200,158
|
How can I set the right shape to make predictions for my CNN model?
|
<p>I'm having a problem with the prediction phase of my image classifier model in Python.
With an input image size of 128x128 I created a model like this:</p>
<pre><code>model = Sequential()
model.add(Conv2D(32,3,padding="same", activation="relu", input_shape=(128,128,3)))
model.add(MaxPool2D())
model.add(Conv2D(32, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Conv2D(64, 3, padding="same", activation="relu"))
model.add(MaxPool2D())
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(2, activation="softmax"))
model.summary()
</code></pre>
<p>I'm getting no errors after training the model and using it for predicting one single image from my validation dataset:</p>
<pre><code>model = load_model('modelimi.h5')
image = load_img('sample_space.png', target_size=(128, 128))
img = np.array(image)
img = img / 255.0
img = img.reshape(1,128, 128,3)
label = model.predict(img)
</code></pre>
<p>The problem starts when I try to predict on the live screen capture:</p>
<pre><code>while True:
    image = grab_screen(region=(50, 130, 1000, 650))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    image = cv2.Canny(image, threshold1=200, threshold2=400)
    image = cv2.resize(image,(128,128))
    #cv2.imwrite(filename, image)
    img = np.array(image)
    img = img / 255.0
    img = img.reshape(1,128, 128,3)
    label = model.predict(img)
</code></pre>
<p>The error that I got is this:</p>
<blockquote>
<p>ValueError Traceback (most recent call
last) Input In [1], in <cell line: 30>()
36 img = np.array(image)
37 img = img / 255.0
---> 38 img = img.reshape(1,128, 128,3)
39 label = model.predict(img)
41 if label[0][0]<label[0][1]:</p>
<p>ValueError: cannot reshape array of size 16384 into shape
(1,128,128,3)</p>
</blockquote>
<p>I tried to reduce the number of channels of the captured screen data to one, but I couldn't apply that to the code either. The preprocessing and normalisation were done like this:</p>
<pre><code>x_train = []
y_train = []
x_val = []
y_val = []

for feature, label in train:
    x_train.append(feature)
    y_train.append(label)

for feature, label in val:
    x_val.append(feature)
    y_val.append(label)

x_train = np.array(x_train) / 255
x_val = np.array(x_val) / 255

x_train.reshape(-1, img_size, img_size, 1)
y_train = np.array(y_train)

x_val.reshape(-1, img_size, img_size, 1)
y_val = np.array(y_val)
</code></pre>
|
<p>You are using a (128,128, 1) image as input. Indeed, you use <code>cv2.COLOR_BGR2GRAY</code> converting your image to grayscale (hence 1 channel).</p>
<p>The error tells you it cannot reshape 16384 (=128x128) into (128,128,3).</p>
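<p>A minimal sketch of one fix that keeps the 3-channel model: repeat the single Canny channel three times before reshaping (alternatively, retrain the model with <code>input_shape=(128, 128, 1)</code> and reshape to <code>(1, 128, 128, 1)</code>). The <code>grab_screen</code> helper here is from the question, not a standard library.</p>
<pre><code>image = cv2.resize(image, (128, 128))   # Canny output, shape (128, 128)
img = np.stack([image] * 3, axis=-1)    # stack to (128, 128, 3)
img = img / 255.0
img = img.reshape(1, 128, 128, 3)
label = model.predict(img)
</code></pre>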
|
python|tensorflow|keras|reshape|image-classification
| 0
|
2,409
| 50,556,642
|
Remembering original image after patch extraction
|
<p>I apologize if this is too general. I am using the PatchExtractor function in scikit-learn to convert images - an array of size = (n_images x image_height x image_width) - into patches, so the resulting array has size = (n_patches, patch_height, patch_width). </p>
<p>However, with this function I lose track of which patch came from which image, which is important for later in the pipeline. Is there a way to keep track of the image from which a patch came?</p>
|
<p>The patches are extracted from images in sequence, so, if you know the count of images and patches, you can know which patch is from which image:</p>
<pre><code>import numpy as np
from sklearn.feature_extraction import image
images = np.zeros((5, 4, 4, 3))
images[:] = np.arange(5).reshape(-1, 1, 1, 1)
patches = image.PatchExtractor((2, 2)).transform(images)
n_patches = patches.shape[0] // images.shape[0]
index = np.repeat(np.arange(images.shape[0]), n_patches)
print(index, patches[:, 0, 0, 0])
</code></pre>
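<p>As a check on the counts (assuming the default of extracting all patches): each 4x4 image yields (4-2+1)*(4-2+1) = 9 patches of size 2x2, so <code>n_patches</code> above is 9 and <code>index</code> repeats each image id 9 times.</p>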
|
python|image|numpy|scikit-learn
| 1
|
2,410
| 50,475,183
|
Converting a numpy array into a dict of values mapped to rows
|
<p>Consider that I have a 2D numpy array where each row represents a unique item and each column within the row represents a label assigned to this item. For example, a 10 x 25 array in this instance would represent 10 items, each of which have up to 25 labels each.</p>
<p>What would be most efficient way to convert this to a dict (or another appropriate datatype, bonus points if it can be sorted by length) that maps labels to the rows indices in which that label occurs? For example, <code>dict[1]</code> would return a list of the row indices that contain <code>1</code> as a label.</p>
<p>For example,</p>
<pre><code>Given:
[1, 2, 3]
[1, 0, 0]
[1, 3, 0]
Result:
1: 0, 1, 2 # 1 occurs in rows 0, 1, 2
3: 0, 2 # 3 occurs in rows 0, 2
0: 1, 2 # 0 occurs in rows 1, 2 (0 is padding for lack of labels)
2: 0 # 2 occurs in row 0 only
</code></pre>
|
<p><strong>UPDATE</strong>: added ordering by length.</p>
<p>We can use advanced indexing to create a grid indexed by items and labels.
We can then iterate over columns and use <code>flatnonzero</code> to get the item id's:</p>
<pre><code>>>> ex = [[1, 2, 3],
... [1, 0, 0],
... [1, 3, 0]]
>>>
>>> m = len(ex)
>>> n = np.max(ex) + 1
>>> grid = np.zeros((m, n), int) # could also use a smaller dtype here
>>> grid[np.arange(m)[:, None], ex] = 1
>>> grid
array([[0, 1, 1, 1],
[1, 1, 0, 0],
[1, 1, 0, 1]])
>>> idx = np.argsort(np.count_nonzero(grid, 0))[::-1]
>>> dict(zip(idx, map(np.flatnonzero, grid.T[idx])))
{1: array([0, 1, 2]), 3: array([0, 2]), 0: array([1, 2]), 2: array([0])}
</code></pre>
<p>Note that dictionaries remember the insertion order of their keys. That is an implementation detail in 3.6 but will be a guaranteed feature in 3.7.</p>
|
python|arrays|numpy|dictionary|scipy
| 4
|
2,411
| 50,395,651
|
Why is accessing values in a dataframe different from a list?
|
<p>Suppose I have a list a defined as: <code>a =[[1,2,3,4],[5,6,7,8]]</code>;
then <code>a[0]</code> returns the first element in the list: <code>[1,2,3,4]</code>.</p>
<pre><code>df = pd.DataFrame([[1,2,3,4],[5,6,7,8]])
</code></pre>
<p><code>df</code> is represented as</p>
<pre><code>0 | 1 2 3 4
1 | 5 6 7 8
</code></pre>
<p>If I use <code>df[0]</code> the following value is printed:</p>
<pre><code>0 | 1
1 | 5
</code></pre>
<p>I would have expected the first row ie <code>0| 1 2 3 4</code> to be printed like in a list. Is it because dataframe is represented by <code>dataframe[cols][rows]</code> rather than <code>dataframe[rows][cols]</code>? </p>
|
<p><code>df[x]</code> accesses column(s) named <code>x</code>.</p>
<p><code>df.loc[y]</code> accesses row(s) with index <code>y</code>.</p>
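<p>For the DataFrame above:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]])
print(df[0])      # column labelled 0 -> values 1 and 5
print(df.loc[0])  # row with index 0  -> values 1, 2, 3, 4
</code></pre>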
<p>This is an issue with syntax, not how data is stored internally by <code>pandas</code>.</p>
<p>You should read <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer">Indexing and Selecting Data</a> to understand the various ways you can extract data from a <code>pd.DataFrame</code> object.</p>
|
python|pandas|dataframe
| 3
|
2,412
| 50,658,884
|
Why is this numba code 6x slower than numpy code?
|
<p>Is there any reason why the following code runs in 2s,</p>
<pre><code>def euclidean_distance_square(x1, x2):
return -2*np.dot(x1, x2.T) + np.expand_dims(np.sum(np.square(x1), axis=1), axis=1) + np.sum(np.square(x2), axis=1)
</code></pre>
<p>while the following numba code runs in 12s?</p>
<pre><code>@jit(nopython=True)
def euclidean_distance_square(x1, x2):
return -2*np.dot(x1, x2.T) + np.expand_dims(np.sum(np.square(x1), axis=1), axis=1) + np.sum(np.square(x2), axis=1)
</code></pre>
<p>My x1 is a matrix of dimension (1, 512) and x2 is a matrix of dimension (3000000, 512). It is quite weird that numba can be so much slower. Am I using it wrong?</p>
<p>I really need to speed this up because I need to run this function 3 million times and 2s is still way too slow.</p>
<p>I need to run this on CPU because as you can see the dimension of x2 is so huge, it cannot be loaded onto a GPU (or at least my GPU), not enough memory.</p>
|
<blockquote>
<p>It is quite weird that numba can be so much slower. </p>
</blockquote>
<p>It's not too weird. When you call NumPy functions inside a numba function you call the numba-version of these functions. These can be faster, slower or just as fast as the NumPy versions. You might be lucky or you can be unlucky (you were unlucky!). But even in the numba function you still create lots of temporaries because you use the NumPy functions (one temporary array for the dot result, one for each square and sum, one for the dot plus first sum) so you don't take advantage of the possibilities with numba.</p>
<blockquote>
<p>Am I using it wrong?</p>
</blockquote>
<p>Essentially: Yes.</p>
<blockquote>
<p>I really need to speed this up</p>
</blockquote>
<p>Okay, I'll give it a try.</p>
<p>Let's start with unrolling the sum of squares along axis 1 calls:</p>
<pre><code>import numba as nb

@nb.njit
def sum_squares_2d_array_along_axis1(arr):
    res = np.empty(arr.shape[0], dtype=arr.dtype)
    for o_idx in range(arr.shape[0]):
        sum_ = 0
        for i_idx in range(arr.shape[1]):
            sum_ += arr[o_idx, i_idx] * arr[o_idx, i_idx]
        res[o_idx] = sum_
    return res


@nb.njit
def euclidean_distance_square_numba_v1(x1, x2):
    return -2 * np.dot(x1, x2.T) + np.expand_dims(sum_squares_2d_array_along_axis1(x1), axis=1) + sum_squares_2d_array_along_axis1(x2)
</code></pre>
<p>On my computer that's already 2 times faster than the NumPy code and almost 10 times faster than your original Numba code.</p>
<p>Speaking from experience getting it 2x faster than NumPy is generally the limit (at least if the NumPy version isn't needlessly complicated or inefficient), however you can squeeze out a bit more by unrolling everything:</p>
<pre><code>import numba as nb

@nb.njit
def euclidean_distance_square_numba_v2(x1, x2):
    f1 = 0.
    for i_idx in range(x1.shape[1]):
        f1 += x1[0, i_idx] * x1[0, i_idx]

    res = np.empty(x2.shape[0], dtype=x2.dtype)
    for o_idx in range(x2.shape[0]):
        val = 0
        for i_idx in range(x2.shape[1]):
            val_from_x2 = x2[o_idx, i_idx]
            val += (-2) * x1[0, i_idx] * val_from_x2 + val_from_x2 * val_from_x2
        val += f1
        res[o_idx] = val
    return res
</code></pre>
<p>But that only gives a ~10-20% improvement over the latest approach.</p>
<p>At that point you might realize that you can simplify the code (even though it probably won't speed it up):</p>
<pre><code>import numba as nb

@nb.njit
def euclidean_distance_square_numba_v3(x1, x2):
    res = np.empty(x2.shape[0], dtype=x2.dtype)
    for o_idx in range(x2.shape[0]):
        val = 0
        for i_idx in range(x2.shape[1]):
            tmp = x1[0, i_idx] - x2[o_idx, i_idx]
            val += tmp * tmp
        res[o_idx] = val
    return res
</code></pre>
<p>Yeah, that looks pretty straightforward and it's not really slower.</p>
<p>However in all the excitement I forgot to mention the <strong>obvious</strong> solution: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="noreferrer"><code>scipy.spatial.distance.cdist</code></a> which has a <code>sqeuclidean</code> (squared euclidean distance) option:</p>
<pre><code>from scipy.spatial import distance
distance.cdist(x1, x2, metric='sqeuclidean')
</code></pre>
<p>It's not really faster than numba but it's available without having to write your own function... </p>
<h2>Tests</h2>
<p>Test for correctness and do the warmups:</p>
<pre><code>x1 = np.array([[1.,2,3]])
x2 = np.array([[1.,2,3], [2,3,4], [3,4,5], [4,5,6], [5,6,7]])
res1 = euclidean_distance_square(x1, x2)
res2 = euclidean_distance_square_numba_original(x1, x2)
res3 = euclidean_distance_square_numba_v1(x1, x2)
res4 = euclidean_distance_square_numba_v2(x1, x2)
res5 = euclidean_distance_square_numba_v3(x1, x2)
np.testing.assert_array_equal(res1, res2)
np.testing.assert_array_equal(res1, res3)
np.testing.assert_array_equal(res1[0], res4)
np.testing.assert_array_equal(res1[0], res5)
np.testing.assert_almost_equal(res1, distance.cdist(x1, x2, metric='sqeuclidean'))
</code></pre>
<p>Timings:</p>
<pre><code>x1 = np.random.random((1, 512))
x2 = np.random.random((1000000, 512))
%timeit euclidean_distance_square(x1, x2)
# 2.09 s Β± 54.1 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit euclidean_distance_square_numba_original(x1, x2)
# 10.9 s Β± 158 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit euclidean_distance_square_numba_v1(x1, x2)
# 907 ms Β± 7.11 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit euclidean_distance_square_numba_v2(x1, x2)
# 715 ms Β± 15 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit euclidean_distance_square_numba_v3(x1, x2)
# 731 ms Β± 34.5 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit distance.cdist(x1, x2, metric='sqeuclidean')
# 706 ms Β± 4.99 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>Note: If you have arrays of integers you might want to change the hard-coded <code>0.0</code> in the numba functions to <code>0</code>.</p>
|
python|numpy|numba
| 29
|
2,413
| 62,528,153
|
How to convert text table to dataframe
|
<p>I am trying to scrape the "PRINCIPAL STOCKHOLDERS" table from the linked <a href="https://www.sec.gov/Archives/edgar/data/1034239/0000950124-97-003372.txt" rel="nofollow noreferrer">text file</a> and convert it to a csv file. Right now I am only half successful: I can locate the table and parse it, but somehow I cannot convert the text table to a standard one. My code is attached. Can someone help me with it?</p>
<pre><code>import requests
import pandas as pd

url = r'https://www.sec.gov/Archives/edgar/data/1034239/0000950124-97-003372.txt'

# Different approach, the first approach does not work
filing_url = requests.get(url)
content = filing_url.text
splited_data = content.split('\n')
table_title = 'PRINCIPAL STOCKHOLDERS'
END_TABLE_LINE = '- ------------------------'

def find_no_line_start_table(table_title,splited_data):
    found_no_lines = []
    for index, line in enumerate(splited_data):
        if table_title in line:
            found_no_lines.append(index)
    return found_no_lines

table_start = find_no_line_start_table(table_title,splited_data)
# I need help with locating the table. If I locate the table using the above function,
# it will return two locations and I have to manually choose the correct one.
table_start = table_start[1]

def get_start_data_table(table_start, splited_data):
    for index, row in enumerate(splited_data[table_start:]):
        if '<C>' in row:
            return table_start + index

def get_end_table(start_table_data, splited_data):
    for index, row in enumerate(splited_data[start_table_data:]):
        if END_TABLE_LINE in row:
            return start_table_data + index

def row(l):
    l = l.split()
    number_columns = 8
    if len(l) >= number_columns:
        data_row = [''] * number_columns
        first_column_done = False
        index = 0
        for w in l:
            if not first_column_done:
                data_row[0] = ' '.join([data_row[0], w])
                if ':' in w:
                    first_column_done = True
            else:
                index += 1
                data_row[index] = w
        return data_row

start_line = get_start_data_table(table_start, splited_data)
end_line = get_end_table(start_line, splited_data)
table = splited_data[start_line : end_line]

# I also need help with converting the text table to a CSV file;
# somehow the following function does not recognize my columns.
def take_table(table):
    owner = []
    Num_share = []
    middle = []
    middle_1 = []
    middle_2 = []
    middle_3 = []
    prior_offering = []
    after_offering = []
    for r in table:
        data_row = row(r)
        if data_row:
            col_1, col_2, col_3, col_4, col_5, col_6, col_7, col_8 = data_row
            owner.append(col_1)
            Num_share.append(col_2)
            middle.append(col_3)
            middle_1.append(col_4)
            middle_2.append(col_5)
            middle_3.append(col_6)
            prior_offering.append(col_7)
            after_offering.append(col_8)
    table_data = {'owner': owner, 'Num_share': Num_share, 'middle': middle, 'middle_1': middle_1,
                  'middle_2': middle_2, 'middle_3': middle_3, 'prior_offering': prior_offering,
                  'after_offering': after_offering}
    return table_data

#print (table)
dict_table = take_table(table)
a = pd.DataFrame(dict_table)
a.to_csv('trail.csv')
</code></pre>
|
<p>I think what you need to do is</p>
<pre><code>pd.DataFrame.from_dict(dict_table)
</code></pre>
<p>instead of</p>
<pre><code>pd.DataFrame(dict_table)
</code></pre>
|
python|pandas|dataframe|parsing
| 0
|
2,414
| 62,702,691
|
ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). sklearn
|
<p>Here is my code:</p>
<pre><code>import pandas as pd
df = pd.read_csv('train.csv')
gender_dict = {"male": 1, "female": 2}
eye_color_dict = {"amber": 1, "blue": 2, "brown": 3, "gray": 4, "green": 5, "hazel": 6}
race_dict = {"black": 1, "white": 2, "middle_eastern": 3,"asian":4}
accommodation_type_dict = {"apartment": 1, "homeless": 2, "shared_residence": 3, "villa": 4, "other": 5}
education_status_dict = {"associate_degree": 1, "bachelors_degree": 2, "graduate_or_professional_degree": 3, "high_school": 4, "less_than_9th_grade": 5, "not_applicable": 6}
blood_type_dict = {"A+": 1, "A-": 2, "B+": 3, "B-": 4, "O+": 5, "O-": 6, "AB+": 7, "AB-": 8}
occupation_dict = {"agriculture": 1, "art": 2, "business": 3, "education": 4, "engineering": 5, "healthcare": 6, "unemployed": 7, "other": 8}
living_area_dict = {"suburbs": 1, "rural": 2, "urban": 3, "other": 4}
sports_engagement_dict = {"never": 1, "sometimes": 2, "seldom": 3, "regularly": 4}
favorite_music_genre_dict = {"r&b": 1, "rock": 2, "pop": 3, "country": 4, "other": 5, "edm": 6, "classical": 7}
favorite_color_dict = {"green": 1, "orange": 2, "yellow": 3, "purple": 4, "blue": 5, "pink": 6, "red": 7}
owned_car_brand_dict = {"audi": 1, "bmw": 2, "ford": 3, "honda": 4, "hyundai": 5, "kia": 6, "none": 7, "tesla": 8, "other": 9, "mitsubishi": 10}
hours_worked_each_week_dict = {"not_applicable": 1}
owns_a_pet_dict = {"yes": 1, "no": 2}
has_health_insurance_dict = {"yes": 1, "no": 2}
has_cancer_dict = {"yes": 1, "no": 2}
smokes_dict = {"yes": 1, "no": 2}
has_alzheimers_dict = {"yes": 1, "no": 2}
facial_hair_dict = {"long": 1, "short": 2, "none": 3}
diet_type_dict = {"regular": 1, "vegetarian": 2, "keto": 3, "vegan": 4, "low-carb": 5, "paleo": 6}
df['gender'] = df['gender'].map(gender_dict)
df['eye_color'] = df['eye_color'].map(eye_color_dict)
df['race'] = df['race'].map(race_dict)
df['accommodation_type'] = df['accommodation_type'].map(accommodation_type_dict)
df['education_status'] = df['education_status'].map(education_status_dict)
df['blood_type'] = df['blood_type'].map(blood_type_dict)
df['occupation'] = df['occupation'].map(occupation_dict)
df['living_area'] = df['living_area'].map(living_area_dict)
df['sports_engagement'] = df['sports_engagement'].map(sports_engagement_dict)
df['favorite_music_genre'] = df['favorite_music_genre'].map(favorite_music_genre_dict)
df['favorite_color'] = df['favorite_color'].map(favorite_color_dict)
df['owned_car_brand'] = df['owned_car_brand'].map(owned_car_brand_dict)
df['hours_worked_each_week'] = df['hours_worked_each_week'].map(hours_worked_each_week_dict)
df['owns_a_pet'] = df['owns_a_pet'].map(owns_a_pet_dict)
df['has_health_insurance'] = df['has_health_insurance'].map(has_health_insurance_dict)
df['has_cancer'] = df['has_cancer'].map(has_cancer_dict)
df['smokes'] = df['smokes'].map(smokes_dict)
df['has_alzheimers'] = df['has_alzheimers'].map(has_alzheimers_dict)
df['facial_hair'] = df['facial_hair'].map(facial_hair_dict)
df['diet_type'] = df['diet_type'].map(diet_type_dict)
import sklearn
from sklearn import svm, preprocessing
df = sklearn.utils.shuffle(df)
X = df.drop("infected", axis=1).values
X = preprocessing.scale(X)
y = df['infected'].values
test_size = 200
X_train = X[:-test_size]
y_train = y[:-test_size]
X_test = X[-test_size:]
y_test = y[-test_size:]
clf = svm.SVR(kernel="linear")
clf.fit(X_train,y_train)
clf.score(X_test,y_test)
for X,y in zip(X_test, y_test):
    print(f"Model: {clf.predict([X])[0]}, Actual: {y}")
</code></pre>
<p>I'm getting a ValueError:</p>
<blockquote>
<p>ValueError: Input contains NaN, infinity or a value too large for dtype('float64').</p>
</blockquote>
<p>And it told me :</p>
<blockquote>
<p><code><ipython-input-1-8b8c4c2d113b> in <module></code>
<code>62 </code>
<code>63 clf = svm.SVR(kernel="linear")</code>
<code>---> 64 clf.fit(X_train,y_train)</code>
<code>65 </code>66 clf.score(X_test,y_test)`</p>
</blockquote>
<p><a href="https://1drv.ms/u/s!Al32Y2Ae1QmOaJiT4urDkIfhVJo?e=0jJkFs" rel="nofollow noreferrer">this is the link to train.csv</a></p>
<p>I'm using jupyter-notebook and I'm new to sklearn and ML.
I attached the CSV file above; thank you for your help.</p>
|
<p>Looks like the column <code>hours_worked_each_week</code> contains nulls.</p>
<p>Do you get the same error if you drop that column?</p>
<pre><code>X = df.drop(['infected', 'hours_worked_each_week'], axis=1).values
</code></pre>
<p>Alternatively, you can replace nulls with 0</p>
<pre><code>df.fillna(0,inplace=True)
</code></pre>
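<p>To see which columns actually contain NaN before choosing between the two fixes, a quick check:</p>
<pre><code>print(df.isna().sum())  # NaN count per column
</code></pre>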
|
python|pandas|scikit-learn|jupyter-notebook
| 0
|
2,415
| 54,351,641
|
Skip the first group in a grouped dataframe
|
<p>I have a pandas df that I have grouped, like so:</p>
<p><code>gQ = df.groupby('Date', as_index=False)['Quantity']</code></p>
<p>and it returns:</p>
<pre><code>0  0      135.68
   1     1054.68
   2      101.12
1  3      131.74
   4     1025.47
   5       97.40
2  6     1078.07
   7      101.93
3  8     1075.92
   9      102.06
4  10    1085.37
   11     102.80
   12    1656.58
5  13    1081.65
   14     104.27
   15    1659.42
Name: Price, dtype: float64
</code></pre>
<p>I want values from all groups except the first group, i.e.,</p>
<pre><code>0  0     135.68
   1    1054.68
   2     101.12
</code></pre>
<p>Is it possible to filter out just that?</p>
|
<p>Use a MultiIndex and its levels - see <a href="https://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/advanced.html</a> - there are many examples; choose the best for your needs.</p>
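<p>A minimal sketch of one direct alternative (not using MultiIndex levels): since <code>groupby</code> sorts keys by default, the first group corresponds to the smallest <code>Date</code>, so filter those rows out before grouping.</p>
<pre><code>first_key = df['Date'].min()               # key of the first group
rest = df[df['Date'] != first_key]         # drop rows of the first group
gQ_rest = rest.groupby('Date', as_index=False)['Quantity']
</code></pre>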
|
python|pandas|dataframe|pandas-groupby
| 0
|
2,416
| 73,625,459
|
What does self(variable) do in Python?
|
<p>I'm trying to understand someone else's code in Python and I stumbled across a line I don't quite understand and which I can't find on the internet:</p>
<pre><code>x=self(k)
</code></pre>
<p>with k being a torch-array.
I know what self.something does but I haven't seen self(something) before.</p>
|
<p><code>self</code>, for these purposes, is just a variable like any other, and when we call a variable with parentheses, it invokes the <code>__call__</code> magic method. So</p>
<pre><code>x = self(k)
</code></pre>
<p>is effectively a shortcut for</p>
<pre><code>x = self.__call__(k)
</code></pre>
<hr />
<p>Footnote: I say "effectively", because it's really more like</p>
<pre><code>x = type(self).__call__(self, k)
</code></pre>
<p>due to the way magic methods work. This difference shouldn't matter unless you're doing funny things with singleton objects, though.</p>
|
python|oop|pytorch|self
| 3
|
2,417
| 73,610,419
|
Taking multiple slices of numpy 1d array from given indices, copying result into 2d array
|
<p>New to Python. Given in the code snippet below is a numpy 1d array called <em>randomWalk</em>. Given indices (which can be interpreted as start dates and end dates, both of which may vary from item to item), I want to do take multiple slices from that 1d array <em>randomWalk</em> and arrange the results in a 2d array of given shape.</p>
<p>I am trying to vectorize this. I was able to select the slices I wanted from the 1d array using <code>np.r_</code>, but failed to store these in the format I require for the output (a 2d array with rows representing items and columns representing time from <code>min(startDates)</code> to <code>max(endDates)</code>).</p>
<p>Below is the (ugly) code that works.</p>
<pre><code>import numpy as np

numItems = 20
numPeriods = 12

# Data
randomWalk = np.random.normal(loc = 0.0, scale = 0.05, size = (numPeriods,))
startDates = np.random.randint(low = 1, high = 5, size = numItems)
endDates = np.random.randint(low = 5, high = numPeriods + 1, size = numItems)
stochasticItems = np.random.choice([False, True], size=(numItems,), p = [0.9, 0.1])

# Result needs to be in this shape (code snippet is designed to capture that only
# a relatively small fraction of resultMatrix's elements will differ from unity)
resultMatrix = np.ones((numItems, numPeriods))

# Desired result (obtained via brute force)
for i in range(numItems):
    if stochasticItems[i]:
        resultMatrix[
            i, startDates[i]:endDates[i]] = np.cumprod(randomWalk[startDates[i]:endDates[i]] + 1.0)
|
<p>If vectorization is the goal, <a href="https://stackoverflow.com/a/73616900/13394817">Pig's answer</a> covers it. If vectorization itself does not matter (as the OP mentioned in the <a href="https://stackoverflow.com/questions/73610419/taking-multiple-slices-of-numpy-1d-array-from-given-indices-copying-result-into#comment130001178_73610419">comments</a> --> <em>The aim is improvement in performance</em>), I suggest using the numba library to accelerate the code. We can write numba code equivalent to <code>np.cumprod</code> and accelerate it using numba's no-python JIT:</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit
def nb_cumprod(arr):
    y = np.empty_like(arr)
    y[0] = arr[0]
    for i in range(1, arr.shape[0]):
        y[i] = arr[i] * y[i-1]
    return y

@nb.njit
def nb_(numItems, numPeriods, stochasticItems, startDates, endDates, randomWalk):
    resultMatrix = np.ones((numItems, numPeriods))
    for i in range(numItems):
        if stochasticItems[i]:
            resultMatrix[i, startDates[i]:endDates[i]] = nb_cumprod(randomWalk[startDates[i]:endDates[i]] + 1.0)
    return resultMatrix
</code></pre>
<p>In some benchmarks of mine, this code was <code>~10 times</code> faster than the OP's version.</p>
|
python|arrays|numpy|performance|vectorization
| 0
|
2,418
| 71,240,439
|
Reindexing only valid with uniquely valued Index objects
|
<p>I run this code</p>
<pre><code>esg_fm_barron = pd.concat([barron_clean.drop(columns = "10 year return", inplace = False),ESG_fixed.drop(columns = 'Name',inplace = False), financial_clean.drop(columns = 'Name',inplace = False)], axis = 'columns', join = 'inner')
esg_fm_barron.rename(columns={'Average (Current)': "Total ESG Score"}, inplace=True)
esg_fm_barron.head(3)
</code></pre>
<p>but I get this error: "Reindexing only valid with uniquely valued Index objects".
Does anyone know a solution? Thanks</p>
|
<p>When you run <em>pd.concat</em>, each source DataFrame must have a unique index.</p>
<p>First identify which source DataFrame has a non-unique index.
For each source DataFrame (assuming it is <em>df</em>) run:</p>
<pre><code>df.index.is_unique
</code></pre>
<p>(this is a <strong>property</strong>, not a method, so put no parentheses).</p>
<p>A <em>False</em> result means that the index in this DataFrame is non-unique.
Then drop the rows with duplicated index values:</p>
<pre><code>df = df.loc[~df.index.duplicated(keep='first')]
</code></pre>
<p>In order not to lose the original data, you may want to save each result under
a new temporary DataFrame and then use these temporary DataFrames for the concatenation.</p>
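<p>Putting it together - a minimal sketch, assuming the three source frames from the question are named <em>barron_clean</em>, <em>ESG_fixed</em> and <em>financial_clean</em> (adapt the names and any dropped columns to your case):</p>
<pre><code>deduped = [d.loc[~d.index.duplicated(keep='first')]
           for d in (barron_clean, ESG_fixed, financial_clean)]
esg_fm_barron = pd.concat(deduped, axis='columns', join='inner')
</code></pre>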
|
pandas|indexing|merge|concatenation|drop
| 3
|
2,419
| 71,260,909
|
Pandas dataframe writing to excel as list. But I don't want data as list in excel
|
<p>I have code which iterates through an Excel file and extracts column values into a dataframe as lists. When I write the dataframe back to Excel, I see the data wrapped in [] and, for strings, in quotes ['']. How can I remove the [''] when I write to Excel?
Also, I want to write only the first value of the product ID column to Excel. How can I do that?</p>
<pre><code>result = pd.DataFrame.from_dict(result) # result has list of data
df_t = result.T
writer = pd.ExcelWriter(path)
df_t.to_excel(writer, 'data')
writer.save()
</code></pre>
<p>My output to excel</p>
<p><a href="https://i.stack.imgur.com/3SwVi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3SwVi.png" alt="enter image description here" /></a></p>
<p>I am expecting output as below and Product_ID column should only have first value in list
<a href="https://i.stack.imgur.com/UlUSi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UlUSi.png" alt="enter image description here" /></a></p>
<p>I tried the following and am getting an error:</p>
<pre><code>path = path to excel
df = pd.read_excel(path, engine="openpyxl")
def data_clean(x):
for index, data in enumerate(x.values):
item = eval(data)
if len(item):
x.values[index] = item[0]
else:
x.values[index] = ""
return x
new_df = df.apply(data_clean, axis=1)
new_df.to_excel(path)
</code></pre>
<p>I am getting below error:
item = eval(data)
TypeError: eval() arg 1 must be a string, bytes or code object</p>
|
<pre><code>df_t['id'] = df_t['id'].str[0]  # shortcut if you only want the element at index 0 of each list
df_t['other_columns'] = df_t['other_columns'].apply(lambda x: " ".join(x))  # joins each list into one string, "unlisting" the lists stored in the column
</code></pre>
|
python|pandas
| 0
|
2,420
| 71,348,855
|
return Cosine Similarity not as single value
|
<p>How can I make a pure NumPy function that will return an array of the shape of the 2 arrays with the cosine similarities of all the pairwise comparisons of the rows of the input array?</p>
<p>I don't want to return a single value.</p>
<pre><code>dataSet1 = [5, 6, 7, 2]
dataSet2 = [2, 3, 1, 15]
def cosine_similarity(list1, list2):
# How to?
pass
print(cosine_similarity(dataSet1, dataSet2))
</code></pre>
|
<p>You can use <code>scipy</code> for this as stated in <a href="https://stackoverflow.com/questions/18424228/cosine-similarity-between-2-number-lists">this answer</a>.</p>
<pre><code>from scipy import spatial
dataSet1 = [5, 6, 7, 2]
dataSet2 = [2, 3, 1, 15]
result = 1 - spatial.distance.cosine(dataSet1, dataSet2)
</code></pre>
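<p>Since the question asks for a pure NumPy version, here is an equivalent sketch of the same single-value computation, using the cosine-similarity definition (dot product divided by the product of the norms):</p>
<pre><code>import numpy as np

def cosine_similarity(list1, list2):
    a, b = np.asarray(list1, dtype=float), np.asarray(list2, dtype=float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(dataSet1, dataSet2))
</code></pre>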
|
python|numpy
| 0
|
2,421
| 71,422,177
|
Squeeze dataframe rows with missing values
|
<p>I'd like to squeeze a dataframe like this:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame([[1,pd.NA,100],[2,20,np.nan],[np.nan,np.nan,300],[pd.NA,"bla",400]], columns=["A","B","C"])
df1
A B C
0 1 <NA> 100.0
1 2 20 NaN
2 NaN NaN 300.0
3 <NA> bla 400.0
</code></pre>
<p>into a left-side compact way:</p>
<pre><code> df_out
A B C
0 1 100 NaN
1 2 20 NaN
2 300 NaN NaN
3 bla 400 NaN
</code></pre>
<p>Also, it would be nice to convert all missing values from pd.NA to np.nan.
How can I do it? Thanks!</p>
|
<p>You can <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> a function with <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a> to remove the NaNs and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> to push (squeeze) the data to the "left" by removing the alignment.</p>
<p>Then <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rename.html" rel="nofollow noreferrer"><code>rename</code></a> to the original A/B/C... and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a> to reinstate any columns that were lost (here C would be missing, as it holds no non-NaN values after the squeeze):</p>
<pre><code>df2 = (df1
.apply(lambda x: x.dropna().reset_index(drop=True), axis=1)
.rename(columns=dict(enumerate(df1.columns)))
.reindex(columns=df1.columns)
)
</code></pre>
<p>Or another approach working at the Series level, dropping the NaNs and padding with NaN at the end:</p>
<pre><code>df2 = df1.apply(lambda x: pd.Series((y:=x.dropna().to_list())
+[float('nan')]*(len(df1.columns)-len(y)),
index=df1.columns), axis=1)
</code></pre>
<p>output:</p>
<pre><code> A B C
0 1 100.0 NaN
1 2 20.0 NaN
2 300.0 NaN NaN
3 bla 400.0 NaN
</code></pre>
<p>Or also possible but without changing the NA to NaN:</p>
<pre><code>df2 = df1.T.apply(sorted, key=pd.isna).T
</code></pre>
<p>output:</p>
<pre><code> A B C
0 1 100.0 <NA>
1 2.0 20.0 NaN
2 300.0 NaN NaN
3 bla 400.0 <NA>
</code></pre>
|
python|pandas|dataframe
| 4
|
2,422
| 71,140,439
|
add suffix based on multiple conditions from string values in another column
|
<p>I would like to add a suffix to strings in one column when a condition is met in another column. If a value is present in the "Market" column, the corresponding value in the "Symbol" column should be updated to the current ticker plus a suffix representing its market place. I guess I could create multiple masks and change the values with multiple lines of code, one per market, but I was wondering if there exists a more elegant way of doing this in one operation.</p>
<p>This is what I tried :</p>
<pre><code>conditions = [
    (df['Market'].str.contains("Oslo")),
    (df['Market'].str.contains("Paris")),
    (df['Market'].str.contains("Amsterdam")),
    (df['Market'].str.contains("Brussels")),
    (df['Market'].str.contains("Dublin")),
]
values = [str+'.OL', str+'.PA', str+'.AS', str+'.BR', str+'.IR']
df['Symbol'] = np.select(conditions, values)
print(df)
</code></pre>
<p>I get an error :</p>
<blockquote>
<p>unsupported operand type(s) for +: 'type' and 'str'</p>
</blockquote>
<p>any help welcome<br/></p>
<p>added after KingOtto's answer...<br/><br/>
the data frame :<br/></p>
<pre><code>| Name | Symbol | Market |
|:------------------|:---------|:-----------------------|
| 1000MERCIS | ALMIL | Euronext Growth Paris |
| 2020 BULKERS | 2020 | Oslo BΓΈrs |
| 2CRSI | 2CRSI | Euronext Paris |
| 2MX ORGANIC | 2MX | Euronext Paris |
| 2MX ORGANIC BS | 2MXBS | Euronext Paris |
| 5TH PLANET GAMES | 5PG | Euronext Expand Oslo |
| A TOUTE VITESSE | MLATV | Euronext Access Paris |
| A.S.T. GROUPE | ASP | Euronext Paris |
| AALBERTS NV | AALB | Euronext Amsterdam |
| AASEN SPAREBANK | AASB | Euronext Growth Oslo |
| AB INBEV | ABI | Euronext Brussels |
| AB SCIENCE | AB | Euronext Paris |
| ABATTOIR | - | Euronext Expert Market |
| ABC ARBITRAGE | ABCA | Euronext Paris |
| ABEO | ABEO | Euronext Paris |
</code></pre>
<p>The code to create a new df with a suffix column and then merge these values with the symbol column to obtain a ticker correctly understood by Yahoo Finance to be queried.<br/><br/></p>
<pre><code>suffix_list = pd.DataFrame({'Market': ['Euronext Growth Oslo','Euronext Expand Oslo', 'Oslo BΓΈrs', 'Euronext Brussels','Euronext Growth Brussels','Euronext Paris', 'Euronext Access Paris','Euronext Growth Paris', 'Euronext Lisbon', 'Euronext Access Lisbon', 'Euronext Dublin', 'Euronext Growth Dublin', 'Euronext Brussels', 'Euronext Access Brussels','Euronext Amsterdam'], 'suffix':['.OL','.OL','.OL','.BR','.BR','.PA','.PA','.PA','.LI','.LI','.IR','.IR','.BR','.BR','.AS']})
new_df = pd.merge(df, suffix_list, how='left', on='Market')
new_df['Ticker']=new_df['Symbol']+new_df['suffix']
</code></pre>
<br/>
<p>The new dataframe containing the Ticker column:<br/></p>
<pre><code>| Name | Symbol | Market | suffix | Ticker |
| :-----------------|:---------|:-----------------------|:---------|:---------|
| 1000MERCIS | ALMIL | Euronext Growth Paris | .PA | ALMIL.PA |
| 2020 BULKERS | 2020 | Oslo BΓΈrs | .OL | 2020.OL |
| 2CRSI | 2CRSI | Euronext Paris | .PA | 2CRSI.PA |
| 2MX ORGANIC | 2MX | Euronext Paris | .PA | 2MX.PA |
| 2MX ORGANIC BS | 2MXBS | Euronext Paris | .PA | 2MXBS.PA |
| 5TH PLANET GAMES | 5PG | Euronext Expand Oslo | .OL | 5PG.OL |
| A TOUTE VITESSE | MLATV | Euronext Access Paris | .PA | MLATV.PA |
| A.S.T. GROUPE | ASP | Euronext Paris | .PA | ASP.PA |
| AALBERTS NV | AALB | Euronext Amsterdam | .AS | AALB.AS |
| AASEN SPAREBANK | AASB | Euronext Growth Oslo | .OL | AASB.OL |
| AB INBEV | ABI | Euronext Brussels | .BR | ABI.BR |
| AB INBEV | ABI | Euronext Brussels | .BR | ABI.BR |
| AB SCIENCE | AB | Euronext Paris | .PA | AB.PA |
| ABATTOIR | - | Euronext Expert Market | nan | nan |
| ABC ARBITRAGE | ABCA | Euronext Paris | .PA | ABCA.PA |
| ABEO | ABEO | Euronext Paris | .PA | ABEO.PA |
</code></pre>
|
<p>You need to proceed in 3 steps</p>
<ol>
<li><p>Define an exhaustive <code>suffix_list</code> - a lookup DataFrame that holds the information only once for each market</p>
<p><code>suffix_list = pd.DataFrame({'Market': ['Oslo', 'Paris'], 'suffix':['OL','PA']})</code></p>
</li>
<li><p>Merge the <code>suffix_list</code> into your existing dataframe as a new column - one command for all markets (for each market that has a suffix in the list, you add that suffix):</p>
<p><code>df = pd.merge(df, suffix_list, how='left', on='Market')</code></p>
</li>
<li><p>Now that you have the 2 columns <code>'Symbol'</code> and <code>'suffix'</code> next to each other for all rows, you can apply 1 single operation for all rows</p>
<p><code>df['Ticker'] = df['Symbol'] + df['suffix']</code></p>
</li>
</ol>
|
python|pandas|dataframe|multiple-conditions|suffix
| 0
|
2,423
| 60,642,537
|
Report training loss for a specific sample in train dataset, not the average one in the training process (TensorFlow)
|
<p>I'm training an LSTM model using TensorFlow. We know that during training there is a report of <code>loss</code> and <code>val_loss</code> for every epoch, which are the average losses over the train and validation datasets. I intend to follow the loss of one specific sample in the train dataset (a specific date). It should also be noted that I'm shuffling the train data in the <code>fit</code> function.</p>
|
<p>Here is the code for tracking loss for a single sample: </p>
<pre><code>import tensorflow as tf
import numpy as np
import keras
x = tf.Variable(initial_value=np.ndarray(shape=(10, 10), dtype=np.float32)) # your sample input
y =np.random.randint(0, 9, size=(10, )) # your sample label
y_labels = keras.utils.to_categorical(y, 10)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_labels, logits=x)) # loss operation for that particular sample (one-hot labels vs. logits)
tf.summary.scalar('loss', loss) #logging loss op in summary
print('loss op', loss)
merge = tf.summary.merge_all()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
loss_val, merge_val = sess.run([loss, merge]) # no need to pass any feed_dict, loss value calculated is specific to that sample
print('loss val', loss_val)
    # merge_val could be passed to a tf summary writer
</code></pre>
|
python|tensorflow|keras|lstm|epoch
| 1
|
2,424
| 72,762,543
|
Sort index list in same way as list of pandas dataframes is sorted by length in python?
|
<p>Based on my question <a href="https://stackoverflow.com/questions/72760325/sort-or-remove-elements-from-corresponding-list-in-same-way-as-in-reference-list#72760325">here</a> and <a href="https://stackoverflow.com/questions/72761755/sort-list-of-pandas-dataframes-by-row-count/72761757#72761757">here</a> I want to sort a list of pandas dataframes and, based on the desired order (here <code>len</code>), I want to change the values of the <code>idx</code> variable in the same way as the values of <code>lst</code> are changed. That means: if lst = [df1, df2, df3] and idx = [1,2,3] and the ordered list (by <code>len</code>) is <code>lst_new = [df3, df1, df2]</code>, then <code>idx_new = [3,1,2]</code>. A small example to illustrate my problem is:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
columns=['a', 'b', 'c'])
df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [11, 12, 13]]),
columns=['a', 'b', 'c'])
df3 = pd.DataFrame(np.array([[1, 2, 3], ['x', 'y', 'z']]),
columns=['a', 'b', 'c'])
idx = [1,2,3]
lst = []
lst.append(df1)
lst.append(df2)
lst.append(df3)
lst = sorted(lst, key=len)
test = [i for j, i in sorted(zip(lst, idx))]
print(test)
</code></pre>
<p>gets the error message:</p>
<pre><code>ValueError: Can only compare identically-labeled DataFrame objects
</code></pre>
|
<p>I found a somewhat complicated solution:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
columns=['a', 'b', 'c'])
df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [11, 12, 13]]),
columns=['a', 'b', 'c'])
df3 = pd.DataFrame(np.array([[1, 2, 3], ['x', 'y', 'z']]),
columns=['a', 'b', 'c'])
idx = [1,2,3]
lst = []
lst.append(df1)
lst.append(df2)
lst.append(df3)
lst_srt = sorted(lst, key=len)
idx_lst = []
for a in lst_srt:
i = 0
for b in lst:
i = i + 1
if a.equals(b):
idx_lst.append(i)
break
print(idx_lst)
print(lst_srt)
</code></pre>
<p>with:</p>
<pre><code>[3, 1, 2]
[ a b c
0 1 2 3
1 x y z, a b c
0 1 2 3
1 4 5 6
2 7 8 9, a b c
0 1 2 3
1 4 5 6
2 7 8 9
3 11 12 13]
</code></pre>
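<p>A shorter alternative sketch: instead of comparing DataFrames, sort the list of positions by the length of the corresponding frame and reorder both lists with it:</p>
<pre><code>order = sorted(range(len(lst)), key=lambda k: len(lst[k]))
lst_srt = [lst[k] for k in order]
idx_lst = [idx[k] for k in order]
print(idx_lst)  # [3, 1, 2]
</code></pre>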
|
python|pandas|dataframe|sorting
| 1
|
2,425
| 72,704,462
|
Find and replace using * equivalent in pandas dataframe
|
<p>I want to replace the string "Private room in house" with "Private" in a column in a dataframe</p>
<p>I have tried</p>
<pre><code>df['room'] = df['room'].str.replace("Private[]","Private")
</code></pre>
<p>putting all the various regular expression characters in the [] but nothing works. All I have succeeded in doing is removing the space after Private.</p>
<p>I have looked at re.sub but haven't managed to get anything to work for me. I'm pretty new to Python so this is probably a simple problem but I can't find the answer anywhere</p>
|
<p>You can use:</p>
<pre><code>df['room'] = df['room'].str.replace('Private.*','Private', regex=True)
</code></pre>
<p>Or with a look behind:</p>
<pre><code>df['room'] = df['room'].str.replace('(?<=Private).*', '', regex=True)
</code></pre>
|
python|pandas|string
| 3
|
2,426
| 72,712,975
|
keras target_size and PIL resize inconsistency issue
|
<p>Does anyone have an idea why the two outputs below are different?</p>
<p>In the 1st code block the image is loaded and PIL's resize is used, while in the 2nd block the keras load_img parameter target_size is used. For the same steps it gives different output.</p>
<pre><code>from keras.preprocessing.image import load_img
import numpy as np
path = 'C:/Users/user/Downloads/random_colour_image.JPG' # actual snippet of image:https://wallpaperaccess.com/full/1523270.jpg
target_size = (3,3)
#code block 1
image = load_img(path)
image = image.resize(target_size)
image = np.asarray(image)
print(image)
</code></pre>
<p>Output 1:</p>
<pre><code>[[[132 99 79]
[146 80 68]
[165 15 81]]
[[116 102 94]
[133 101 69]
[198 28 53]]
[[ 82 129 108]
[119 89 112]
[166 87 51]]]
</code></pre>
<p>code block 2:</p>
<pre><code>image = load_img(path, target_size=target_size)
image = np.asarray(image)
print(image)
</code></pre>
<p>Output 2:</p>
<pre><code>[[[ 48 190 88]
[ 57 159 49]
[145 0 77]]
[[116 90 101]
[ 14 133 67]
[146 19 2]]
[[ 5 119 50]
[129 69 97]
[179 63 2]]]
</code></pre>
|
<p>Thanks I'mahdi, but target_size is also intended to load a resized image. <br> I found the solution:
Keras load_img's default interpolation is NEAREST(0), while PIL's Image.resize defaults to BICUBIC(3) - hence the discrepancy.</p>
<p>change in code block 1:</p>
<pre><code>image = image.resize(target_size, resample=0)
</code></pre>
<p>or change in code block 2:</p>
<pre><code>image = load_img(path1, target_size=target_size, interpolation='bicubic')
</code></pre>
<p>will solve it and produce the same result. <br>
Thanks.</p>
|
python|tensorflow|image-processing|keras
| 0
|
2,427
| 72,792,586
|
Replace None in pandas data frame with Null
|
<p>I have a pandas dataframe and I am getting None for many values. I need to write it to a SQL Server DB and want those to become NULL. How can I do that?</p>
<p>I cannot use df.to_sql to write to the DB, it is very slow. So I use pymssql: I convert the dataframe values to tuples and form a SQL insert statement. Hence I cannot have None, NaN, NaT etc.; I need to clear them before building the tuples.</p>
<pre><code>self.sqlconn = pymssql.connect(server=self.server, user=self.username, password=self.password,database=self.database)
# code for writing to sql db
cursor = self.sqlconn.cursor()
for i in sql_dataframe.values:
query = 'insert into ' + table_name + ' (' + ','.join(sql_dataframe.columns) + ') values ' + str(
tuple(i))
</code></pre>
|
<p>Try replacing None with explicit 'NULL' strings:</p>
<pre><code>df['col'] = df['col'].fillna('NULL')
</code></pre>
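<p>Note that <code>fillna('NULL')</code> inserts the literal string 'NULL' once the row is rendered through <code>str(tuple(i))</code>. If you want actual SQL NULLs, a hedged sketch using parameterized queries (assuming pymssql's <code>%s</code> placeholders, where Python's None is sent to the server as NULL):</p>
<pre><code>clean = df.astype(object).where(pd.notnull(df), None)  # None survives into the driver as NULL
placeholders = ','.join(['%s'] * len(clean.columns))
query = 'insert into ' + table_name + ' (' + ','.join(clean.columns) + ') values (' + placeholders + ')'
cursor.executemany(query, [tuple(row) for row in clean.values])
</code></pre>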
|
python|pandas|nullable|nonetype
| 0
|
2,428
| 59,882,667
|
Why does date_range give a result different from indexing [] for DataFrame Pandas dates?
|
<p>Here is a simple code with <code>date_range</code> and indexing [ ] I used with Pandas</p>
<pre><code>period_start = '2013-01-01'
period_end = '2019-12-24'
print(pd.DataFrame ({'close':aapl_close,
'returns':aapl_returns},index=pd.date_range(start=period_start,periods=6)))
print(pd.DataFrame ({'close':aapl_close,
'returns':aapl_returns})[period_start:'20130110'])
</code></pre>
<p><code>date_range</code> gives Nan results</p>
<pre><code> close returns
2013-01-01 NaN NaN
2013-01-02 NaN NaN
2013-01-03 NaN NaN
2013-01-04 NaN NaN
</code></pre>
<p>Indexing gives correct results</p>
<pre><code> close returns
2013-01-02 00:00:00+00:00 68.732 0.028322
2013-01-03 00:00:00+00:00 68.032 -0.010184
2013-01-04 00:00:00+00:00 66.091 -0.028531
</code></pre>
<p>Based on how the dates are shown by <code>date_range</code> - I suppose the date format of <code>date_range</code> does not match the date format in the Pandas DataFrame. </p>
<p>1) Can you please explain why it gives NaN?</p>
<p>2) What would you suggest to get a specific time range from the Panda DataFrame?</p>
|
<p>As I'm a beginner in Python and its libraries, I didn't understand that this question refers to the Quantopian library, not to Pandas. </p>
<p>I got a solution on their forum. All the times returned by methods on Quantopian are timezone aware with a timezone of 'UTC'. By default, the date_range method returns timezone-naive dates. Simply supply the timezone information to the date_range method, like this:</p>
<pre><code>pd.DataFrame ({
'close':aapl_close,
'returns':aapl_returns,},
index=pd.date_range(start=period_start, periods=6, tz='UTC'))
</code></pre>
<p>To get a specific date or time range in pandas perhaps the easiest is simple bracket notation. For example, to get dates between 2013-01-04 and 2013-01-08 (inclusive) simply enter this:</p>
<pre><code>df = pd.DataFrame ({'close':aapl_close, 'returns':aapl_returns,})
my_selected_dates = df['2013-01-04':'2013-01-08']
</code></pre>
<p>This bracket notation is really shorthand for using the loc method</p>
<pre><code>my_selected_dates = df.loc['2013-01-04':'2013-01-08']
</code></pre>
<p>Both work the same but the loc method has a bit more flexibility. This notation also works with datetimes if desired.</p>
|
python|pandas|dataframe
| 1
|
2,429
| 59,899,052
|
How can I match values on a matrix on python using pandas?
|
<p>I'm trying to match values in a matrix on python using pandas dataframes. Maybe this is not the best way to express it.</p>
<p>Imagine you have the following dataset:</p>
<pre><code>import pandas as pd
d = {'stores':['','','','',''],'col1': ['x','price','','',1],'col2':['y','quantity','',1,''], 'col3':['z','',1,'',''] }
df = pd.DataFrame(data=d)
</code></pre>
<pre class="lang-py prettyprint-override"><code> stores col1 col2 col3
0 NaN x y z
1 NaN price quantity NaN
2 NaN NaN Nan 1
3 NaN NaN 1 NaN
4 NaN 1 NaN NaN
</code></pre>
<p>I'm trying to get the following:</p>
<pre class="lang-py prettyprint-override"><code> stores col1 col2 col3
0 NaN x y z
1 NaN price quantity NaN
2 z NaN Nan 1
3 y NaN 1 NaN
4 x 1 NaN NaN
</code></pre>
<p>Any ideas how this might work? I've tried running loops on lists but I'm not quite sure how to do it. </p>
<p>This is what I have so far but it's just terrible (and obviously not working) and I am sure there is a much simpler way of doing this but I just can't get my head around it.</p>
<pre class="lang-py prettyprint-override"><code>stores = ['x','y','z']
for i in stores:
for v in df.iloc[0,:]:
if i==v :
df['stores'] = i
</code></pre>
<p>It yields the following:</p>
<pre class="lang-py prettyprint-override"><code>
stores col1 col2 col3
0 z x y z
1 z price quantity NaN
2 z NaN NaN 1
3 z NaN 1 NaN
4 z 1 NaN NaN
</code></pre>
<p>Thank you in advance.</p>
|
<p>You can fill the whole column at once, like this:</p>
<pre><code>df["stores"] = df[["col1", "col2", "col3"]].rename(columns=df.loc[0]).eq(1).idxmax(axis=1)
</code></pre>
<p>This first creates a version of the dataframe with the columns renamed "x", "y", and "z" after the values in the first row; then <code>idxmax(axis=1)</code> returns the column heading associated with the max value in each row (which is the True one).</p>
<p>However this adds an "x" in rows where none of the columns has a 1. If that is a problem you could do something like this:</p>
<pre><code>df["NA"] = 1 # add a column of ones
df["stores"] = df[["col1", "col2", "col3", "NA"]].rename(columns=df.loc[0]).eq(1).idxmax(axis=1)
df["stores"].replace(1, np.NaN, inplace=True) # replace the 1s with NaNs
</code></pre>
|
python|pandas|dataframe|matrix
| 0
|
2,430
| 32,525,345
|
Converting 3D matrix to cascaded 2D Matrices
|
<p>I have a <code>3D</code> matrix in python as the following:</p>
<pre><code>import numpy as np
a = np.ones((2,2,3))
a[0,0,0] = 2
a[0,0,1] = 3
a[0,0,2] = 4
</code></pre>
<p>I want to convert this <code>3D</code> matrix to a set of <code>2D</code> matrices. I have tried <code>np.reshape</code> but it did not solve my problem. The final shape I am interested in is the following cascaded vesrsion:</p>
<pre><code> [[ 2. 1. 3. 1. 4. 1.]
[ 1. 1. 1. 1. 1. 1.]]
</code></pre>
<p>However, <code>np.reshape</code> gives me the following</p>
<pre><code> [[ 2. 3. 4. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1.]]
</code></pre>
<p>How can I solve this?</p>
|
<p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html" rel="nofollow"><code>transpose</code></a> alongwith <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow"><code>reshape</code></a> -</p>
<pre><code>a.transpose([0,2,1]).reshape(a.shape[0],-1)
</code></pre>
<p>Or use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.swapaxes.html" rel="nofollow"><code>swapaxes</code></a> that does the same job as <code>transpose</code> alongwith <code>reshape</code> -</p>
<pre><code>a.swapaxes(2,1).reshape(a.shape[0],-1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [66]: a
Out[66]:
array([[[ 2., 3., 4.],
[ 1., 1., 1.]],
[[ 1., 1., 1.],
[ 1., 1., 1.]]])
In [67]: a.transpose([0,2,1]).reshape(a.shape[0],-1)
Out[67]:
array([[ 2., 1., 3., 1., 4., 1.],
[ 1., 1., 1., 1., 1., 1.]])
In [68]: a.swapaxes(2,1).reshape(a.shape[0],-1)
Out[68]:
array([[ 2., 1., 3., 1., 4., 1.],
[ 1., 1., 1., 1., 1., 1.]])
</code></pre>
|
python|numpy|matrix|reshape
| 2
|
2,431
| 40,331,510
|
How to stack multiple lstm in keras?
|
<p>I am using the deep learning library Keras and trying to stack multiple LSTMs, with no luck.
Below is my code</p>
<pre><code>model = Sequential()
model.add(LSTM(100,input_shape =(time_steps,vector_size)))
model.add(LSTM(100))
</code></pre>
<p>The above code raises an error at the third line: <code>Exception: Input 0 is incompatible with layer lstm_28: expected ndim=3, found ndim=2
</code></p>
<p>The input X is a tensor of shape (100,250,50). I am running keras on tensorflow backend</p>
|
<p>You need to add <code>return_sequences=True</code> to the first layer so that its output tensor has <code>ndim=3</code> (i.e. batch size, timesteps, hidden state).</p>
<p>Please see the following example:</p>
<pre><code># expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
</code></pre>
<p>From: <a href="https://keras.io/getting-started/sequential-model-guide/" rel="noreferrer">https://keras.io/getting-started/sequential-model-guide/</a> (search for "stacked lstm")</p>
|
tensorflow|deep-learning|keras|lstm|keras-layer
| 149
|
2,432
| 40,641,561
|
Error with arrays/matrix operations
|
<p>I am trying to run the following code. For those who know it, this is an attempt at the Ehrenfest urn simulation.</p>
<pre><code>import numpy as np
import random
C=5
L=2
# Here I create a matrix to be filled with zeros and after with numbers I want
b=np.zeros( (L,C) ) # line x column
A=[] # here I create 2 lists to put random integer numbers in
B=[]
i=1
while i<=10: # here I am filling list A (only) with 10 numbers
A.append(i)
i=i+1
for j in range(2):
for i in range(5): #here I want to choose random numbers between 1 and 10,
Sort=random.randint(1,10)
if i==0: # since there is no number on B, the first step, the number goes to B
B.append(Sort)
A.remove(Sort)
print(len(A))
if i>0: # now each list A and B have numbers on it, so I will choose one number and see in which list it is
if Sort in A:
B.append(Sort)
A.remove(Sort)
else:
A.append(Sort)
B.remove(Sort)
i=i+1
        b[j,i]=len(A) # here I want to add the length of the list A in a matrix, but then I get the error.
j=j+1
print(b)
</code></pre>
<p>But I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<ipython-input-26-4884a3da9648>", line 1, in <module>
runfile('C:/Users/Public/Documents/Python Scripts/33.py', wdir='C:/Users/Public/Documents/Python Scripts')
File "C:\Program Files\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Program Files\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Public/Documents/Python Scripts/33.py", line 42, in <module>
b[j,i]=len(A) # here I want to add the lenght of the list A in a matrix
IndexError: index 5 is out of bounds for axis 1 with size 5
</code></pre>
<p>What am I doing wrong with the arrays?
Does it have something to do with how the matrix is indexed?</p>
|
<p>The error implies that <code>b.shape[1]</code> (axis 1) is 5; but <code>i</code> is 5. Remember indexing starts at 0.</p>
<p>In the broader picture:</p>
<pre><code>while i<5: #here I want to choose random numbers between 1 and 10,
...
i=i+1
b[j,i] ...
</code></pre>
<p>at the last iteration <code>i==4</code>, you add one so it becomes 5, and that gives the error.</p>
<p>Typically in Python we iterate with</p>
<pre><code> for i in range(5):
b[j,i] ...
</code></pre>
<p><code>range(5)</code> produces <code>[0,1,2,3,4]</code>. You may have good reasons to use the while instead, but it's subject to the same bounds.</p>
|
python|python-3.x|numpy|runtime-error
| 1
|
2,433
| 61,806,725
|
Iterate over a pandas data frame or groupby object
|
<p>df_headlines = </p>
<p><img src="https://i.imgur.com/OnLfhQ5.png" alt="https://i.imgur.com/OnLfhQ5.png"></p>
<p>I want to group by the <code>date</code> column and then count how many times <code>-1</code>, <code>0</code>, and <code>1</code> appear by date and then whichever has the highest count, use that as the <code>daily_score</code>. </p>
<p>I started with a <code>groupby</code>: </p>
<pre><code>df_group = df_headlines.groupby('date')
</code></pre>
<p>This returns a groupby object and I'm not sure how to work with this given what I want to do above: </p>
<p><img src="https://i.imgur.com/jmTcrNG.png" alt="https://i.imgur.com/jmTcrNG.png"></p>
<p>Can I iterate through this using the following?: </p>
<pre><code>for index, row in df_group.iterrows():
daily_pos = []
daily_neg = []
daily_neu = []
</code></pre>
|
<p>As Ch3steR hinted in a comment, you can iterate through your groups in the following way: </p>
<pre><code>for name, group in headlines.groupby('date'):
daily_pos = len(group[group['score'] == 1])
daily_neg = len(group[group['score'] == -1])
daily_neu = len(group[group['score'] == 0])
print(name, daily_pos, daily_neg, daily_neu)
</code></pre>
<p>For each iteration, the variable <code>name</code> will contain a value from the <code>date</code> column (e.g. 4/13/20, 4/14/20, 5/13/20), and the variable <code>group</code> will contain a dataframe of all rows for the <code>date</code> contained in the <code>name</code> variable. </p>
|
python|pandas
| 0
|
2,434
| 61,856,322
|
traversing a tree from a list in python to do calculations?
|
<p>I have this nested dictionary (a decision tree):</p>
<pre><code>new_tree = {'cues': 'glucose_tol',
'directions': '<=',
'thresholds': '122.5',
'exits': 1.0,
'children': [{'cues': True},
{'cues': 'mass_index',
'directions': '<=',
'thresholds': '30.8',
'exits': 1.0,
'children': [{'cues': 'pedigree',
'directions': '<=',
'thresholds': '0.305',
'exits': 1.0,
'children': [{'cues': True},
{'cues': 'diastolic_pb',
'directions': '<=',
'thresholds': '77.0',
'exits': 1,
'children': [{'cues': True},
{'cues': 'insulin',
'directions': '<=',
'thresholds': '480',
'exits': '0.5',
'children': [{'cues': True}, {'cues': False}]}]}]}]}]}
</code></pre>
<p>and I want to get the path each data point takes through this tree, so that I can collect the cues it passes and then do some calculations.</p>
<p>I have data points in a <code>df</code> (2 datapoints just for illustration):</p>
<pre><code>print(df)
times_pregnant,glucose_tol,diastolic_pb,triceps,insulin,mass_index,pedigree,age,label
6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
</code></pre>
<p>The first one will go glucose_tol, mass_index, pedigree, diastolic_pb and will be classified as True. How do I get the 4 cues this data point went through and save them for future calculations? Any help would be highly appreciated.</p>
|
<p>This looks like a decision tree. </p>
<p>The way it works is that at each step you are either at a final decision state ('cues': True or 'cues': False) or you need to make a decision.</p>
<p>To make the decision you need to get the field named in 'cues' from your dataframe, then using direction and threshold you form a condition. The first one is basically <code>if glucose_tol <= 122.5</code>. Each node should have 2 children; I think the first one is for the true case and the second for the false (it should be obvious to you if you know the domain). You then pick the child based on your decision and continue.</p>
<p>Probably the easiest thing to do is to implement a recursive function. Once you have the function to evaluate the tree against a row of data, you can add functionality to store whatever you find interesting.</p>
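<p>A minimal sketch of such a recursive traversal, under the assumptions above (two children per decision node, the first taken when the '<=' condition holds; nodes with a single child simply continue into it; threshold strings are cast to float):</p>
<pre><code>def traverse(node, row, path=None):
    """Walk the tree for one data row, recording the cues visited."""
    if path is None:
        path = []
    cue = node['cues']
    if isinstance(cue, bool):          # leaf: final classification reached
        return cue, path
    path.append(cue)
    goes_left = row[cue] <= float(node['thresholds'])  # every direction here is '<='
    children = node['children']
    child = children[0] if (goes_left or len(children) == 1) else children[-1]
    return traverse(child, row, path)

label, cues_visited = traverse(new_tree, df.iloc[0])
print(label, cues_visited)
# True ['glucose_tol', 'mass_index', 'pedigree', 'diastolic_pb']
</code></pre>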
|
python|pandas|list
| 1
|
2,435
| 61,721,285
|
Vectorizing a function in Python
|
<p>I have a function that I am trying to vectorize:</p>
<pre><code>import pandas as pd
import numpy as np
import random
import statsmodels.api as sm
data = pd.DataFrame({
'state': ['a', 'b', 'c']*200,
'read': [random.uniform(10,50) for i in range(600)],
'write': [random.uniform(0,10) for i in range(600)],
'cansu': [random.uniform(11,20) for i in range(600)],
'brink': [random.uniform(2,10) for i in range(600)]
})
loop = pd.DataFrame({
'state': ['a','a','c','b','c'],
'x': [1,2,3,2,4],
'y': [2,3,4,4,1]
})
def regress(z,x,y):
X = data.query("state==@z").iloc[:,x].values
X = sm.add_constant(X)
Y = data.query("state==@z").iloc[:,y].values
result = sm.OLS(Y,X).fit()
return result.params[1]
</code></pre>
<p>I know I can use <code>apply, list comprehensions, itertools, map, filter, reduce, np.vectorize, etc.</code> and all the cool functions. However, I want to be able to do something like this:</p>
<pre><code>loop['slope'] = regress(loop['state'].values, loop['x'].values, loop['y'].values)
</code></pre>
<p>which is not working at the moment. Is this possible? If yes, how do I rewrite or modify my function to make this possible?</p>
|
<p>Try it this way.</p>
<p>Same as your code:</p>
<pre><code>import statsmodels.api as sm
data = pd.DataFrame({
'state': ['a', 'b', 'c']*200,
'read': [random.uniform(10,50) for i in range(600)],
'write': [random.uniform(0,10) for i in range(600)],
'cansu': [random.uniform(11,20) for i in range(600)],
'brink': [random.uniform(2,10) for i in range(600)]
})
loop = pd.DataFrame({
'state': ['a','a','c','b','c'],
'x': [1,2,3,2,4],
'y': [2,3,4,4,1]
})
def regress(z,x,y):
X = data.query("state==@z").iloc[:,x].values
X = sm.add_constant(X)
Y = data.query("state==@z").iloc[:,y].values
result = sm.OLS(Y,X).fit()
return result.params[1]
</code></pre>
<p>Then call it with plain lists:</p>
<pre><code>loop['slope'] = regress(list(loop['state'].values), list(loop['x'].values), list(loop['y'].values))
</code></pre>
|
python|pandas|numpy|vectorization
| 0
|
2,436
| 61,913,458
|
Pandas info for 100+ features
|
<p>I have a dataset at my disposal which consists of around 500 columns, which I need to explore and keep only the relevant ones. The pandas <code>info(verbose = True)</code> method does not even display this number of columns properly. I also used the missingno library to visualise nulls; however, it uses a lot of RAM. What should I use instead of matplotlib here?</p>
<p>How do you approach datasets with a lot of features (more than 100)? Any useful workflow to eliminate useless features? How do I use info() or any alternative?</p>
<p>I also used the expand options below to view everything. The question here is how to set them only locally?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
</code></pre>
<p>UPDATE:
Methods or solutions to explore the initial raw data are of interest - for instance, a one-cell script which summarises numerical features as distributions, categorical features as counts, and possibly more. I can write this myself, but maybe there is a library (or a function of yours) which does so?</p>
|
<p>Regarding the issue of useless features, you could easily estimate some metrics associated with feature effectiveness and filter it out using some threshold. Check out the <a href="https://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow noreferrer">sklearn feature selection docs</a>.</p>
<p>Of course before doing that you'll have to make sure features are numeric and their representation is fit for the tests of your choice. To do that I suggest you check out sklearn <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow noreferrer">pipelines</a> (optional) and <a href="https://scikit-learn.org/stable/modules/preprocessing.html" rel="nofollow noreferrer">preprocessing docs</a>. </p>
<p>Before estimating feature usefulness, make sure you cover missing data handling, encoding categorical variables and feature scaling.</p>
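<p>As a side note on the sub-question about setting the display options only locally: pandas provides the <code>option_context</code> context manager, which applies options inside a <code>with</code> block and restores the previous values afterwards. A minimal sketch, assuming <code>df</code> is your DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

with pd.option_context('display.max_rows', 500, 'display.max_columns', 500):
    print(df.describe(include='all'))  # options apply only inside this block
</code></pre>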
|
python|pandas
| 0
|
2,437
| 61,968,787
|
What is the purpose of this 'a' in this array slicing in Python ( W[: , : , : , a] )?
|
<p>Here is the code example:</p>
<pre><code>weights = W[:,:,:,a]
</code></pre>
<p><strong>Here, a is an integer number</strong></p>
<p>In array slicing, I need a good explanation (references are a plus) on Python's slice notation. I don't understand what is the purpose of this 'a'. We know that a 3D array is like a stack of matrices where:</p>
<ul>
<li>The first index, i, selects the matrix</li>
<li>The second index, j, selects the row</li>
<li>The third index, k, selects the column</li>
</ul>
|
<h1>Shapes:</h1>
<p>Let <strong>M</strong> be your <strong>n</strong>-dimensional array:</p>
<blockquote>
<p>Reference image: <a href="https://fgnt.github.io/python_crashkurs_doc/_images/numpy_array_t.png" rel="nofollow noreferrer">https://fgnt.github.io/python_crashkurs_doc/_images/numpy_array_t.png</a></p>
</blockquote>
<ol>
<li>A shape of <strong>M</strong> (x,) means your array have <strong>x lines</strong></li>
<li>A shape of <strong>M</strong> (x, y) means your array have <strong>x lines</strong> and <strong>y columns</strong></li>
<li>A shape of <strong>M</strong> (x, y, z) means your array have <strong>x lines</strong>, <strong>y columns</strong> and <strong>z "layers"</strong></li>
</ol>
<p>If you want you can think of shapes as (<em>lines</em>, <em>columns</em>, <em>layers</em>,...), but things become complicated when you talk about four-dimensional arrays or greater (maybe you could name them as <em>stacks of blocks</em> for the fourth dimension).</p>
<p>Anyway, a way better naming convention is the following:</p>
<p><strong>M</strong> (<em>axis 0</em>, <em>axis 1</em>, <em>axis 2</em>, ..., <em>axis n</em>) as shown in the reference image.</p>
<p>To find the shape of an array in Python, simply write: <code>M.shape</code></p>
<h1><strong>Slicing:</strong></h1>
<p>In array indexing, the comma separates the dimensions of an array:
<strong>M</strong> [<em>axis 0</em>, <em>axis 1</em>, <em>axis 2</em>, ..., <em>axis n</em>]
For each axis you can have the following slice structure:</p>
<p><strong>[ start : stop : step ]</strong> where:</p>
<ol>
<li><strong>start</strong>: the first index for the selected axis (included in the result)</li>
</ol>
<ul>
<li><strong>start = 0</strong> is the default start index (does not need to be specified)</li>
</ul>
<ol start="2">
<li><strong>stop</strong>: the last index for the selected axis (not included in the result)</li>
</ol>
<ul>
<li><strong>stop = len(axis)</strong> is the default end index (does not need to be specified)</li>
</ul>
<ol start="3">
<li><strong>step</strong>: the step of traversing the selected axis:</li>
</ol>
<ul>
<li><strong>step = 0</strong> is not allowed</li>
<li><strong>step = 1</strong> is the default step (does not need to be specified)</li>
<li><strong>step = -1</strong> means reverse traversing</li>
<li><strong>step = n</strong> means from <strong>n</strong> to <strong>n</strong> step</li>
</ul>
<p>The following slicings are equivalent:
<strong>M</strong> [0:len(M):1], <strong>M</strong> [:] and <strong>M</strong>[::], according to the <strong>default</strong> values.</p>
<p>Mixed together, now we can write in a generic slicing notation:</p>
<p><b>M</b>Β [<i>start-index-for-axis <b>0</b> : stop-index-for-axis <b>0</b> : step-for-axis <b>0</b></i>,<br>
Β Β Β Β Β <i>start-index-for-axis <b>1</b> : stop-index-for-axis <b>1</b> : step-for-axis <b>1</b></i>,<br>
Β Β Β Β Β <i>start-index-for-axis <b>2</b> : stop-index-for-axis <b>2</b> : step-for-axis <b>2</b></i>,<br>
Β Β Β Β Β ...<br>
Β Β Β Β Β <i>start-index-for-axis <b>n</b> : stop-index-for-axis <b>n</b> : step-for-axis <b>n</b>]</i>,<br></p>
<p>Enough theory, let's see some <strong>examples</strong>:</p>
<p>We have <strong>M</strong>, a two-dimensional array, with a (5, 5) shape:</p>
<pre><code>import numpy as np

M = np.arange(1, 26).reshape(5, 5)
print(M)
</code></pre>
<p>result:</p>
<pre><code>[[ 1 2 3 4 5]
[ 6 7 8 9 10]
[11 12 13 14 15]
[16 17 18 19 20]
[21 22 23 24 25]]
</code></pre>
<br/>
<pre><code>print('Traverse the matrix from the last line to the first one (axis=0):', M[::-1], sep='\n')
</code></pre>
<p>Result:</p>
<pre><code>[[21 22 23 24 25]
[16 17 18 19 20]
[11 12 13 14 15]
[ 6 7 8 9 10]
[ 1 2 3 4 5]]
</code></pre>
<br/>
<pre><code>print('The 3 columns in the middle of the matrix (take all data from axis=0, and take a slice from axis=1):', M[:, 1:4], sep='\n')
</code></pre>
<p>Result:</p>
<pre><code>[[ 2 3 4]
[ 7 8 9]
[12 13 14]
[17 18 19]
[22 23 24]]
</code></pre>
<p>Now, your slice: <strong>W</strong> [:, :, :, <em>a</em>], where <em>a</em> is an integer variable, can be interpreted as:</p>
<ul>
<li><strong>M</strong> is a four-dimensional array</li>
<li>you take all from <em>axis 0</em>, <em>axis 1</em> and <em>axis 2</em></li>
<li>you take just the index <em>a</em> from <em>axis 3</em></li>
</ul>
<p>A four-dimensional array can be imagined as a stack/array of three-dimensional blocks, and your slice means: take the <em>a</em> column from each matrix from each block, and ends up with a three-dimensional array.</p>
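<p>A small sketch with a made-up <strong>W</strong> to see this in action (the shape and the value of <em>a</em> are arbitrary here):</p>
<pre><code>import numpy as np

W = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)  # hypothetical 4-D array
a = 2
weights = W[:, :, :, a]
print(weights.shape)  # (2, 3, 4) - axis 3 is gone; one entry kept per position
</code></pre>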
|
python|arrays|numpy-slicing
| 3
|
2,438
| 58,021,252
|
Generating data associated with a trend
|
<p>I want to create 3 different datasets with a column each having dates (dd/mm/yyyy). These dates need to be in a range of 3 months like January 2019 to April 2019. The count for each date needs to represent the number of searches. The dataset should have 2000 entries and dates can be repititive as well. All 3 datasets are to be created such that one has a upward trend to the count, one has a downward trend to the count, and one is normally distributed. </p>
<pre><code>Upward trend with the time, i.e. increasing entries with time ( lower count in beginning and increasing moving forward.)
Declining trend with time i.e. decreasing entries with time (higher count in the beginning and decreasing moving forward)
</code></pre>
<p>I am able to generate a normal distribution using datagenerator plugin of </p>
<blockquote>
<p>www.generatedata.com</p>
</blockquote>
<p>I am now interested in the other 2 use cases, i.e. the upward trend and the declining trend. Can anyone advise me how to do the same? For a random distribution, I was able to achieve this using the faker library as well.</p>
<pre><code>from datetime import datetime
from faker import Factory
import random
import numpy as np
faker = Factory.create()
def date_between(d1, d2):
f = '%b%d-%Y'
return faker.date_time_between_dates(datetime.strptime(d1, f), datetime.strptime(d2, f))
def fakerecord():
return {'ID': faker.numerify('######'),
'S_date': date_between('jan01-2019', 'apr01-2019')
}
</code></pre>
<p>Can anyone advise how I can incorporate trends into the dataset?</p>
<p>Thanks</p>
|
<p>You can do it like below.</p>
<p>The trend function defines your trend: if start_weight is higher than end_weight it is a downward trend, and vice versa. You can also control the rate of the trend by changing the difference between start and end weights.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
dates = pd.date_range("2019-1-1", "2019-4-1", freq="D")
def trend(count, start_weight=1, end_weight=3):
lin_sp = np.linspace(start_weight, end_weight, count)
return lin_sp/sum(lin_sp)
date_trends = np.random.choice(dates,size=20000, p=trend(len(dates)))
print("Total dates", len(date_trends))
print("counts of each dates")
print(np.unique(date_trends, return_counts=True)[1])
</code></pre>
|
python|pandas|numpy
| 1
|
2,439
| 58,092,004
|
How to do sequence classification with pytorch nn.Transformer?
|
<p>I am doing a sequence classification task using <code>nn.TransformerEncoder()</code>, whose pipeline is similar to that of <code>nn.LSTM()</code>.</p>
<p>I have tried several temporal features fusion methods:</p>
<ol>
<li><p>Selecting the final outputs as the representation of the whole sequence.</p></li>
<li><p>Using an affine transformation to fuse these features.</p></li>
<li><p>Classifying the sequence frame by frame, and then select the max values to be the category of the whole sequence.</p></li>
</ol>
<p>But all these 3 methods got a terrible accuracy, only <strong>25%</strong> for a 4-category classification. Using <code>nn.LSTM()</code> with the last hidden state, I can achieve <strong>83%</strong> accuracy easily. I tried plenty of hyperparameters of <code>nn.TransformerEncoder()</code>, but without any improvement in accuracy.</p>
<p>I have no idea about how to adjust this model now. Could you give me some practical advice? Thanks.</p>
<p>For <code>LSTM</code>: the <code>forward()</code> is:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, x_in, x_lengths, apply_softmax=False):
# Embed
x_in = self.embeddings(x_in)
# Feed into RNN
out, h_n = self.LSTM(x_in) #shape of out: T*N*D
# Gather the last relevant hidden state
out = out[-1,:,:] # N*D
# FC layers
z = self.dropout(out)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
</code></pre>
<p>For <code>transformer</code>:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, x_in, x_lengths, apply_softmax=False):
# Embed
x_in = self.embeddings(x_in)
# Feed into RNN
out = self.transformer(x_in)#shape of out T*N*D
# Gather the last relevant hidden state
out = out[-1,:,:] # N*D
# FC layers
z = self.dropout(out)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
</code></pre>
|
<p>The accuracy you mentioned indicates that something is wrong. Since you are comparing LSTM with TransformerEncoder, I want to point to some crucial differences. </p>
<ol>
<li><p><strong>Positional embeddings</strong>: This is very important since the Transformer has no recurrence and so does not capture sequence order by itself. So, make sure you add positional information along with the input embeddings (see the sketch after this list).</p></li>
<li><p><strong>Model architecture</strong>: <code>d_model</code>, <code>n_head</code>, <code>num_encoder_layers</code> are important. Go with the default size as used in Vaswani et al., 2017. (<code>d_model=512</code>, <code>n_head=8</code>, <code>num_encoder_layers=6</code>)</p></li>
<li><p><strong>Optimization</strong>: In many scenarios, it has been found that the Transformer needs to be trained with smaller learning rate, large batch size, WarmUpScheduling.</p></li>
</ol>
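<p>For reference, a minimal sketch of the fixed sinusoidal positional encoding from Vaswani et al. (2017), matching the T*N*D layout used in the code above - add its output to the embeddings before feeding the encoder:</p>
<pre class="lang-py prettyprint-override"><code>import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Adds the fixed sinusoidal position signal from Vaswani et al. (2017)."""
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe.unsqueeze(1))  # shape: max_len x 1 x d_model

    def forward(self, x):  # x: T x N x D
        return x + self.pe[:x.size(0)]
</code></pre>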
<p>Last but not least, as a sanity check, make sure the parameters of the model are updating. You can also check the training accuracy to make sure it keeps increasing as training proceeds.</p>
<p>Although it is difficult to say exactly what is wrong in your code, I hope the above points will help!</p>
|
machine-learning|deep-learning|pytorch|text-classification|transformer-model
| 3
|
2,440
| 55,013,861
|
Create new column with sum of vector column in pandas
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame({'A':[[1,2,3],[4,5,6,7],[8,9]]})
</code></pre>
<p>All entries are integers. </p>
<p>I want to make a new column, 'B', that would read <code>[6,22,17]</code> (the sum of each list).</p>
<p>I can do this with a loop, but is there a one-line solution? Thanks!</p>
|
<p>To extract the rows from your DataFrame and sum each row as a builtin python list:</p>
<pre><code>res = [sum(x[0]) for x in df.values.tolist()]
res
[6, 22, 17]
</code></pre>
<p>To assign the row sums into a new column:</p>
<pre><code>df['B'] = [sum(x[0]) for x in df.values.tolist()]
df
A B
0 [1, 2, 3] 6
1 [4, 5, 6, 7] 22
2 [8, 9] 17
</code></pre>
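<p>An equivalent, arguably more idiomatic one-liner applies <code>sum</code> directly to the column:</p>
<pre><code>df['B'] = df['A'].apply(sum)
</code></pre>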
<p>As @roganjosh commented, try to avoid storing builtin python objects like lists in pandas DataFrames.</p>
|
python|pandas
| 1
|
2,441
| 49,413,824
|
Modifying an existing excel workbook's multiple worksheets based on pandas dataframe
|
<p>I currently have an excel file with, for a minimal viable example, say 3 sheets. I want to change 2 of those sheets to be based on new values coming from 2 pandas dataframes (1 dataframe for each sheet).</p>
<p>This is the code I currently have:</p>
<pre><code>from openpyxl.writer.excel import ExcelWriter
from openpyxl import load_workbook
path = r"Libraries\Documents\Current_Standings.xlsx"
book = load_workbook('Current_Standings.xlsx')
writer = pd.ExcelWriter(path, 'Current_Standings.xlsx',
engine='openpyxl')
writer.book = writer
Blank_Propensity_Scores.to_excel(writer, sheet_name =
'Blank_Propensity.xlsx')
Leads_by_Rep.to_excel(writer,sheet_name = 'Leads_by_Rep.xlsx')
writer.save()
</code></pre>
<p>when I run this I get the following error message, not sure why, because every stack overflow answer I have looked at has only 1 item for openpyxl:</p>
<pre><code>TypeError: __new__() got multiple values for argument 'engine'
</code></pre>
<p>I also tried playing around with getting rid of the engine='openpyxl' argument but when I do that I get the following error message instead:</p>
<pre><code>ValueError: No Excel writer 'Current_Standings.xlsx'
</code></pre>
|
<p>If you execute <code>help(pd.ExcelWriter)</code> on your Python command line, you will see the parameters in the first lines:</p>
<pre><code>class ExcelWriter(builtins.object)
| Class for writing DataFrame objects into excel sheets, default is to use
| xlwt for xls, openpyxl for xlsx. See DataFrame.to_excel for typical usage.
|
| Parameters
| ----------
| path : string
| Path to xls or xlsx file.
| engine : string (optional)
| Engine to use for writing. If None, defaults to
| ``io.excel.<extension>.writer``. NOTE: can only be passed as a keyword
| argument.
| date_format : string, default None
| Format string for dates written into Excel files (e.g. 'YYYY-MM-DD')
| datetime_format : string, default None
| Format string for datetime objects written into Excel files
| (e.g. 'YYYY-MM-DD HH:MM:SS')
|
</code></pre>
<p>In other words, the second positional parameter is the engine. So if you pass a string without a keyword, it is taken as the engine (despite the note in the help about passing this parameter only as a keyword, this seems to be the behaviour). If you then also pass engine='openpyxl', you are defining the parameter 'engine' twice.</p>
<p>This is the cause of the error</p>
<pre><code>TypeError: __new__() got multiple values for argument 'engine'
</code></pre>
<p>In summary, you should call ExcelWriter with only two arguments: the path of your Excel file (your variable 'path') and the keyword argument engine.</p>
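<p>A corrected sketch of the snippet from the question under these rules (for the pandas version in use at the time; note that <code>writer.book</code> should point at the loaded workbook, not at the writer, and the sheet names need not end in .xlsx):</p>
<pre><code>import pandas as pd
from openpyxl import load_workbook

path = r"Libraries\Documents\Current_Standings.xlsx"
book = load_workbook(path)
writer = pd.ExcelWriter(path, engine='openpyxl')  # path plus keyword engine only
writer.book = book                                # attach the existing workbook
Blank_Propensity_Scores.to_excel(writer, sheet_name='Blank_Propensity')
Leads_by_Rep.to_excel(writer, sheet_name='Leads_by_Rep')
writer.save()
</code></pre>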
|
python|python-3.x|pandas|openpyxl|pandas.excelwriter
| 1
|
2,442
| 73,239,947
|
How do I remove duplicates where one has a null value in Python?
|
<p><strong>Problem</strong></p>
<p>Sorry to all who have helped, but I have had to rephrase the question. I have a dataframe with duplicates for most of the columns, except the last column. Where I have duplicates, I want to apply the following rule:</p>
<ol>
<li>If both have valid entries in the last column, then keep both.</li>
<li>If both have null entries in the last column, then keep one.</li>
<li>If one has a valid entry and the other a null entry, then keep the valid entry.</li>
</ol>
<p>I then want to take the duplicate values out and create a separate dataframe with them. At the moment, my approach is laborious and deletes both duplicates where they are both null.</p>
<p><strong>Reprex</strong></p>
<p><em><strong>Starting Dataframe</strong></em></p>
<pre><code>import pandas as pd
import numpy as np
data_input = {'Student': ['A', 'A', 'B', 'B', 'C', 'D', 'E', 'F', 'F', 'G', "H", "H", "I", "I"],
"Subject": ["Law", "Law", "Maths", "Maths", "Maths", "Law", "Maths", "Music", "Music", "Music", "Art", "Art", "Dance", "Dance"],
"Checked": ["Bob", "James", np.nan, "Jack", "Laura", "Laura", np.nan, np.nan, "Tim", "Tim", "Tim", np.nan, np.nan, np.nan]}
# Create DataFrame
df1 = pd.DataFrame(data_input)
</code></pre>
<p><a href="https://i.stack.imgur.com/QdwMq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QdwMq.png" alt="enter image description here" /></a></p>
<p><em><strong>Desired Output</strong></em></p>
<p><a href="https://i.stack.imgur.com/JqwqA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JqwqA.png" alt="enter image description here" /></a></p>
<p><em><strong>First Attempt</strong></em></p>
<pre><code>attempt1 = df1.sort_values(['Student', 'Checked'], ascending=False).drop_duplicates(["Student", "Subject"]).sort_index()
</code></pre>
<p>I took this from another Q&A on Stack, but it does not give me the outcome I want and I don't understand it.</p>
<p><em><strong>Attempt 2</strong></em></p>
<pre><code>#Create Duplicate column
df1["Duplicates"] = df1.duplicated(subset=["Student", "Subject"], keep=False)
#Create list of rows with no duplicates
df_new1 = df1[df1["Duplicates"]==False]
#Create list of rows with duplicates & remove all those with null values
#HERE IS WHERE I GET STUCK. IF BOTH DUPLICATES ARE NULLS, I WANT TO KEEP ONE OF THEM
df_new2 = df1[df1["Duplicates"]==True]
df_new3 = df_new2[~df_new2["Checked"].isnull()]
#Combine unique rows, and duplicates without null values
#Keep duplicates without null values
df_new = df_new1.append(df_new3)
#Tidy up
df_new = df_new[["Student", "Subject", "Checked"]].sort_values(by="Student")
df_new
</code></pre>
<p><em><strong>I can then create a list of the duplicates that both appear valid</strong></em></p>
<pre><code>#Create separate list of duplicates with valid "Checked" values
df_new["Duplicates"] = df_new.duplicated(subset="Student", keep=False)
conflicting_duplicates = df_new[df_new["Duplicates"]==True]
conflicting_duplicates
</code></pre>
<p><em><strong>Help</strong></em></p>
<p>Thank you to everyone! Your answers helped, but I hadn't realised that I also want to keep one of the entries where both are null.</p>
<p>Is there a better way of doing this?</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and drop <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>NaN</code></a> values</p>
<pre><code>df.groupby("Student", sort=False).apply(lambda x : x if len(x) == 1 else x.dropna(subset=['Checked'])).reset_index(drop=True)
</code></pre>
<h4>Output :</h4>
<p>This gives us the expected output</p>
<pre><code> Student Subject Checked
0 A Law Bob
1 A Law James
2 B Maths Jack
3 C Maths Laura
4 D Law Laura
5 E Maths NaN
6 F Music Tim
7 G Music Tim
8 H Art Tim
</code></pre>
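<p>Note that, as written, this also drops a student whose duplicate rows are <em>all</em> null in <code>Checked</code> (student I above), whereas rule 2 in the question asks to keep one such row. A hedged variant of the same groupby idea that covers that case:</p>
<pre><code>keep = lambda g: g if len(g) == 1 else (
    g.dropna(subset=['Checked']) if g['Checked'].notna().any() else g.head(1))
out = df1.groupby('Student', sort=False).apply(keep).reset_index(drop=True)
</code></pre>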
|
python|pandas|duplicates
| 1
|
2,443
| 73,454,400
|
concatenate every n rows into one row pandas and keep other data
|
<p>I have a data frame that contains "userid", "gender" and "tweet", each user has 100 tweets:</p>
<p><a href="https://i.stack.imgur.com/w0qdy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w0qdy.png" alt="enter image description here" /></a></p>
<p>link to demo dataset:
<a href="https://drive.google.com/file/d/12FAek_k-8ofHCoR24IxhqkiGa3efrvpA/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/12FAek_k-8ofHCoR24IxhqkiGa3efrvpA/view?usp=sharing</a></p>
<p>How can I merge every 5 tweets of a user into a new row (each user has 100 tweets, so in the new dataset there will be 20 rows per user) and keep their user id and gender? So far I managed to group the tweets, but I need to keep the user id and gender as well.</p>
<pre><code>dfMerged = df.groupby(df.index // 5)['ctweet'].agg(' '.join).to_frame()
</code></pre>
|
<h2>Short answer</h2>
<pre><code>data = pd.DataFrame([[1,1,'Hi'],[1,1,'my name is'], [1,1,'Hal'],[1,1,'my name is'], [1,1,'Hal'],[2,0,'Ich bin'], [2,0,'ein kartoffeln'],[2,0,'!'], [2,0,'ein kartoffeln'],[2,0,'!'],[1,1,'my name is'], [1,1,'Obama'],[1,1,'president of USA'], [1,1,'Obama'],[1,1,'president of USA'],[2,0,'Hi'],[2,0,'my name is'], [2,0,'James webb'],[2,0,'my name is'], [2,0,'James webb'],[1,1,'Ich bin'], [1,1,'ein potatoe'], [1,1,'hello human'],[1,1,'ein potatoe'], [1,1,'hello human']])
data.columns = ['id', 'gender', 'tweet']
n = 5
block = int(round(len(data)/n, 0))
data['block'] = np.repeat(range(1, block+1), n)
data_block = data.groupby(['id','gender', 'block'])['tweet'].agg(lambda x: '-'.join(x.dropna())).reset_index()
data_block
</code></pre>
<h2>Long answer : Full explanations</h2>
<p>Creating fake data :</p>
<pre><code>data = pd.DataFrame([
[1,1,'Hi'],[1,1,'my name is'], [1,1,'Hal'],
[2,0,'Ich bin'], [2,0,'ein kartoffeln'],[2,0,'!'],
[1,1,'my name is'], [1,1,'Obama'],[1,1,'president of USA'],
[2,0,'Hi'],[2,0,'my name is'], [2,0,'James webb'],
[1,1,'Ich bin'], [1,1,'ein potatoe'], [1,1,'hello human']
])
data.columns = ['id', 'gender', 'tweet']
</code></pre>
<p><em><strong>Warning: this assumes each id has the right number of repetitions and that the data are sorted accordingly.</strong></em></p>
<p>==> If not, you can sort the data by user id, isolate each id as an independent dataframe (truncating it if the row count is not a multiple of what you want), and then merge them back into one. Mind the context of your analysis/work, of course :)</p>
<p>Define the number of repeated block labels you want: groups of n=5 in your case; I used n=3 for the example below.</p>
<pre><code>n = 3
block = int(round(len(data)/n, 0))
data['block'] = np.repeat(range(1, block+1), n)
data_block = data.groupby(['id','gender', 'block'])['tweet'].agg(lambda x: '-'.join(x.dropna())).reset_index()
</code></pre>
<p>It gives:</p>
<pre><code>>>> data_block
id gender block tweet
0 1 1 1 Hi-my name is-Hal
1 1 1 3 my name is-Obama-president of USA
2 1 1 5 Ich bin-ein potatoe-hello human
3 2 0 2 Ich bin-ein kartoffeln-!
4 2 0 4 Hi-my name is-James webb
</code></pre>
<p>Is that better?</p>
<p>A direct example with groups of 5 also works well:</p>
<pre><code>data = pd.DataFrame([
[1,1,'Hi'],[1,1,'my name is'], [1,1,'Hal'],[1,1,'my name is'], [1,1,'Hal'],
[2,0,'Ich bin'], [2,0,'ein kartoffeln'],[2,0,'!'], [2,0,'ein kartoffeln'],[2,0,'!'],
[1,1,'my name is'], [1,1,'Obama'],[1,1,'president of USA'], [1,1,'Obama'],[1,1,'president of USA'],
[2,0,'Hi'],[2,0,'my name is'], [2,0,'James webb'],[2,0,'my name is'], [2,0,'James webb'],
[1,1,'Ich bin'], [1,1,'ein potatoe'], [1,1,'hello human'], [1,1,'ein potatoe'], [1,1,'hello human']
])
data.columns = ['id', 'gender', 'tweet']
n = 5
block = int(round(len(data)/n, 0))
data['block'] = np.repeat(range(1, block+1), n)
data_block = data.groupby(['id','gender', 'block'])['tweet'].agg(lambda x: '-'.join(x.dropna())).reset_index()
data_block
>>> data_block
id gender block tweet
0 1 1 1 Hi-my name is-Hal-my name is-Hal
1 1 1 3 my name is-Obama-president of USA-Obama-presid...
2 1 1 5 Ich bin-ein potatoe-hello human-ein potatoe-he...
3 2 0 2 Ich bin-ein kartoffeln-!-ein kartoffeln-!
4 2 0 4 Hi-my name is-James webb-my name is-James webb
</code></pre>
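<p>If the rows can be interleaved across users, or the frame length is not an exact multiple of <code>n</code>, a sketch using a per-id cumulative count sidesteps <code>np.repeat</code> entirely:</p>
<pre><code>data['block'] = data.groupby('id').cumcount() // n   # 0,1,2,... within each id, then blocks of n
data_block = data.groupby(['id', 'gender', 'block'])['tweet'].agg('-'.join).reset_index()
</code></pre>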
|
python|pandas
| 1
|
2,444
| 73,194,756
|
Convert dictionary keys and values into rows and columns
|
<p>I have a list of dictionaries with multiple rows. I need to store the keys as columns and the values as rows.</p>
<pre><code> date model
22/01/2022 [{'vehicles': {'engine': 0, 'status': 5, 'size': 0, 'warranty': 2, 'type': 3, }}]
.
.
.
23/01/2022 [{'vehicles': {'engine': 3, 'status': 4, 'size': 1, 'warranty': 5, 'type': 1, }}]
df = pd.DataFrame.from_dict(df["model"]['vehicles'], orient="columns")
</code></pre>
<p>I tried to select the values but it's not working.</p>
|
<pre><code>df = pd.DataFrame({'date': ['22/01/2022', '23/01/2022'], 'model': [[{'vehicles': {'engine': 0, 'status': 5, 'size': 0, 'warranty': 2, 'type': 3, }}], [{'vehicles': {'engine': 3, 'status': 4, 'size': 1, 'warranty': 5, 'type': 1, }}]]})
df = df.explode('model')
df.model = [m['vehicles'] for m in df.model]
pd.concat([df, df.model.apply(pd.Series)], axis=1).drop('model', axis=1)
</code></pre>
<p>Output:</p>
<pre><code> date engine status size warranty type
0 22/01/2022 0 5 0 2 3
1 23/01/2022 3 4 1 5 1
</code></pre>
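<p>An alternative sketch with <code>pd.json_normalize</code>, starting again from the original <code>df</code> defined above; it flattens the nested dicts in one call (the resulting columns are prefixed with <code>vehicles.</code>, so we strip that):</p>
<pre><code>exploded = df.explode('model').reset_index(drop=True)
flat = pd.json_normalize(exploded['model'].tolist())          # columns like 'vehicles.engine'
flat.columns = flat.columns.str.replace('vehicles.', '', regex=False)
out = pd.concat([exploded[['date']], flat], axis=1)
</code></pre>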
|
python|python-3.x|pandas
| 1
|
2,445
| 35,301,262
|
Thresholded pixel indices of a NumPy array
|
<p>I'm sure this question is Googleable, but I don't know what keywords to use. I'm curious about a specific case, but also about how to do it in general. Let's say I have an RGB image as an array of shape <code>(width, height, 3)</code> and I want to find all the pixels where the red channel is greater than 100. I feel like <code>image > [100, 0, 0]</code> should give me an array of indices (and it would if I were comparing against a scalar and using a greyscale image), but this compares each element with the list. How do I compare over the first two dimensions, where each "element" is the last dimension?</p>
|
<p>To detect for red-channel only, you can do something like this -</p>
<pre><code>np.argwhere(image[:,:,0] > threshold)
</code></pre>
<p><strong>Explanation :</strong></p>
<ol>
<li>Compare the <code>red-channel</code> with the <code>threshold</code> to give us a boolean array of same shape as the input image without the third axis (color channel).</li>
<li>Use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.argwhere.html" rel="nofollow noreferrer"><code>np.argwhere</code></a> to get the indices of successful matches.</li>
</ol>
<p>For a case when you want to see if any channel is above some threshold, use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.any.html" rel="nofollow noreferrer"><code>.any(-1)</code></a> (any elements that satisfy the condition along the last axis/color channel).</p>
<pre><code>np.argwhere((image > threshold).any(-1))
</code></pre>
<p><strong>Sample run</strong></p>
<p>Input image :</p>
<pre><code>In [76]: image
Out[76]:
array([[[118, 94, 109],
[ 36, 122, 6],
[ 85, 91, 58],
[ 30, 2, 23]],
[[ 32, 47, 50],
[ 1, 105, 141],
[ 91, 120, 58],
[129, 127, 111]]], dtype=uint8)
In [77]: threshold
Out[77]: 100
</code></pre>
<p>Case #1: Red-channel only</p>
<pre><code>In [69]: np.argwhere(image[:,:,0] > threshold)
Out[69]:
array([[0, 0],
[1, 3]])
In [70]: image[0,0]
Out[70]: array([118, 94, 109], dtype=uint8)
In [71]: image[1,3]
Out[71]: array([129, 127, 111], dtype=uint8)
</code></pre>
<p>Case #2: Any-channel</p>
<pre><code>In [72]: np.argwhere((image > threshold).any(-1))
Out[72]:
array([[0, 0],
[0, 1],
[1, 1],
[1, 2],
[1, 3]])
In [73]: image[0,1]
Out[73]: array([ 36, 122, 6], dtype=uint8)
In [74]: image[1,1]
Out[74]: array([ 1, 105, 141], dtype=uint8)
In [75]: image[1,2]
Out[75]: array([ 91, 120, 58], dtype=uint8)
</code></pre>
<hr />
<h2>Faster alternative to <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.any.html" rel="nofollow noreferrer"><code>np.any</code></a> in <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a></h2>
<p><code>np.einsum</code> could be <em>tricked</em> to perform <code>np.any</code>'s work and as it turns out is a tad faster.</p>
<p>Thus, <strong><code>boolean_arr.any(-1)</code></strong> would be equivalent to <strong><code>np.einsum('ijk->ij',boolean_arr)</code></strong>.</p>
<p>Here are the associated runtimes across various datasizes -</p>
<pre><code>In [105]: image = np.random.randint(0,255,(30,30,3)).astype('uint8')
...: %timeit np.argwhere((image > threshold).any(-1))
...: %timeit np.argwhere(np.einsum('ijk->ij',image>threshold))
...: out1 = np.argwhere((image > threshold).any(-1))
...: out2 = np.argwhere(np.einsum('ijk->ij',image>threshold))
...: print np.allclose(out1,out2)
...:
10000 loops, best of 3: 79.2 Β΅s per loop
10000 loops, best of 3: 56.5 Β΅s per loop
True
In [106]: image = np.random.randint(0,255,(300,300,3)).astype('uint8')
...: %timeit np.argwhere((image > threshold).any(-1))
...: %timeit np.argwhere(np.einsum('ijk->ij',image>threshold))
...: out1 = np.argwhere((image > threshold).any(-1))
...: out2 = np.argwhere(np.einsum('ijk->ij',image>threshold))
...: print np.allclose(out1,out2)
...:
100 loops, best of 3: 5.47 ms per loop
100 loops, best of 3: 3.69 ms per loop
True
In [107]: image = np.random.randint(0,255,(3000,3000,3)).astype('uint8')
...: %timeit np.argwhere((image > threshold).any(-1))
...: %timeit np.argwhere(np.einsum('ijk->ij',image>threshold))
...: out1 = np.argwhere((image > threshold).any(-1))
...: out2 = np.argwhere(np.einsum('ijk->ij',image>threshold))
...: print np.allclose(out1,out2)
...:
1 loops, best of 3: 833 ms per loop
1 loops, best of 3: 640 ms per loop
True
</code></pre>
|
python|arrays|numpy|vectorization
| 3
|
2,446
| 30,998,305
|
Weird numpy.sum behavior when adding zeros
|
<p>I understand how mathematically-equivalent arithmetic operations can result in different results due to numerical errors (e.g. summing floats in different orders).</p>
<p>However, it surprises me that adding zeros to <code>sum</code> can change the result. I thought that this always holds for floats, no matter what: <code>x + 0. == x</code>.</p>
<p>Here's an example. I expected all the lines to be exactly zero. Can anybody please explain why this happens?</p>
<pre><code>M = 4 # number of random values
Z = 4 # number of additional zeros
for i in range(20):
a = np.random.rand(M)
b = np.zeros(M+Z)
b[:M] = a
print a.sum() - b.sum()
-4.4408920985e-16
0.0
0.0
0.0
4.4408920985e-16
0.0
-4.4408920985e-16
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
2.22044604925e-16
0.0
4.4408920985e-16
4.4408920985e-16
0.0
</code></pre>
<p>It seems not to happen for smaller values of <code>M</code> and <code>Z</code>.</p>
<p>I also made sure <code>a.dtype==b.dtype</code>.</p>
<p>Here is one more example, which also demonstrates python's builtin <code>sum</code> behaves as expected:</p>
<pre><code>a = np.array([0.1, 1.0/3, 1.0/7, 1.0/13, 1.0/23])
b = np.array([0.1, 0.0, 1.0/3, 0.0, 1.0/7, 0.0, 1.0/13, 1.0/23])
print a.sum() - b.sum()
=> -1.11022302463e-16
print sum(a) - sum(b)
=> 0.0
</code></pre>
<p>I'm using numpy V1.9.2.</p>
|
<p><strong>Short answer:</strong> You are seeing the difference between</p>
<pre><code>a + b + c + d
</code></pre>
<p>and</p>
<pre><code>(a + b) + (c + d)
</code></pre>
<p>which because of floating point inaccuracies is not the same.</p>
<p><strong>Long answer:</strong> Numpy implements pairwise summation to improve both speed (it allows for easier vectorization) and accuracy (it accumulates less rounding error).</p>
<p>The numpy sum-implementation can be found <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/loops.c.src">here</a> (function <code>pairwise_sum_@TYPE@</code>). It essentially does the following:</p>
<ol>
<li>If the length of the array is less than 8, a regular for-loop summation is performed. This is why the strange result is not observed when <code>M + Z < 8</code> in your case: both arrays are then summed with the same for-loop order.</li>
<li>If the length is between 8 and 128, it accumulates the sums in 8 bins <code>r[0]-r[7]</code> then sums them by <code>((r[0] + r[1]) + (r[2] + r[3])) + ((r[4] + r[5]) + (r[6] + r[7]))</code>.</li>
<li>Otherwise, it recursively sums two halves of the array.</li>
</ol>
<p>Therefore, in the first case you get <code>a.sum() = ((a[0] + a[1]) + a[2]) + a[3]</code> and in the second case <code>b.sum() = (a[0] + a[1]) + (a[2] + a[3])</code> (the trailing zeros add nothing), which leads to <code>a.sum() - b.sum() != 0</code>.</p>
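<p>A minimal sketch of the association effect with four random floats (whether the difference shows up depends on the particular values, so the print may well be exactly 0.0 on some runs):</p>
<pre><code>import numpy as np

a = np.random.rand(4)
loop_order = ((a[0] + a[1]) + a[2]) + a[3]   # how a.sum() accumulates (length < 8)
pairwise = (a[0] + a[1]) + (a[2] + a[3])     # how b.sum() effectively groups (length 8)
print(loop_order - pairwise)                 # 0.0 or a few ULPs, e.g. ~1e-16
</code></pre>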
|
python|numpy|sum|numerical-stability
| 10
|
2,447
| 67,523,574
|
Sort a data frame in python with duplicates by a string list
|
<p>I have a data frame with 250 names and values, imported in Python via pandas <code>read_csv</code>.
It reads in the data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>val1</th>
<th>val2</th>
<th>val3</th>
</tr>
</thead>
<tbody>
<tr>
<td>George</td>
<td>2.5</td>
<td>1.1</td>
<td>1.0</td>
</tr>
<tr>
<td>George</td>
<td>3.1</td>
<td>1.4</td>
<td>0.0</td>
</tr>
<tr>
<td>George</td>
<td>1.1</td>
<td>0.9</td>
<td>4.1</td>
</tr>
<tr>
<td>Tom</td>
<td>2.1</td>
<td>1.2</td>
<td>-3.0</td>
</tr>
<tr>
<td>Tom</td>
<td>3.0</td>
<td>-1.2</td>
<td>3.5</td>
</tr>
<tr>
<td>Tom</td>
<td>7.3</td>
<td>5.2</td>
<td>-1.2</td>
</tr>
<tr>
<td>Tom</td>
<td>0.1</td>
<td>0.1</td>
<td>0.1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>Sally</td>
<td>6.1</td>
<td>9.1</td>
<td>-5.6</td>
</tr>
<tr>
<td>Sally</td>
<td>5.7</td>
<td>4.7</td>
<td>9.1</td>
</tr>
</tbody>
</table>
</div>
<p>I want to reorder these by a particular order:</p>
<pre><code>neworder = ['Sally', ..., 'George', 'Tom']
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>val1</th>
<th>val2</th>
<th>val3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sally</td>
<td>6.1</td>
<td>9.1</td>
<td>-5.6</td>
</tr>
<tr>
<td>Sally</td>
<td>5.7</td>
<td>4.7</td>
<td>9.1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>George</td>
<td>2.5</td>
<td>1.1</td>
<td>1.0</td>
</tr>
<tr>
<td>George</td>
<td>3.1</td>
<td>1.4</td>
<td>0.0</td>
</tr>
<tr>
<td>George</td>
<td>1.1</td>
<td>0.9</td>
<td>4.1</td>
</tr>
<tr>
<td>Tom</td>
<td>2.1</td>
<td>1.2</td>
<td>-3.0</td>
</tr>
<tr>
<td>Tom</td>
<td>3.0</td>
<td>-1.2</td>
<td>3.5</td>
</tr>
<tr>
<td>Tom</td>
<td>7.3</td>
<td>5.2</td>
<td>-1.2</td>
</tr>
<tr>
<td>Tom</td>
<td>0.1</td>
<td>0.1</td>
<td>0.1</td>
</tr>
</tbody>
</table>
</div>
<p>In IDL I would do this with some <code>for</code> loops, but I suspect there's a sorting function in Python that my google skills have not been able to find.</p>
|
<p>Create a lookup dictionary for your sort somehow:</p>
<pre class="lang-py prettyprint-override"><code>name_order = {'Sally':1, ... , 'George':12, 'Tom':13} # hand-numbered
</code></pre>
<pre class="lang-py prettyprint-override"><code>neworder = ['Sally', ... , 'George', 'Tom']
name_order = {nm:ix for ix,nm in enumerate(neworder)} # generated
</code></pre>
<p>And then pass it in a lambda function to the key parameter:</p>
<pre class="lang-py prettyprint-override"><code>df.sort_values(by='name', key=lambda nm: nm.map(name_order))
</code></pre>
<p>I'd need to think a bit about what would happen if an unexpected name appeared; you might be able to cope with this by making <code>name_order</code> a <code>collections.defaultdict</code>, as sketched below.</p>
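<p>A minimal sketch of that idea (pandas' <code>map</code> honours a dict subclass that defines <code>__missing__</code>, so unknown names sort to the end):</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

name_order = defaultdict(lambda: len(neworder))   # any name not listed sorts last
name_order.update({nm: ix for ix, nm in enumerate(neworder)})
df.sort_values(by='name', key=lambda col: col.map(name_order))
</code></pre>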
|
python|pandas|sorting
| 3
|
2,448
| 67,212,324
|
merge two DataFrame with two columns and keep the same order with original indexes in the result
|
<p>I have two pandas data frames. Both data frames have two key columns and one value column for merge. I want to keep the same order with original indexes in the merged result.</p>
<ul>
<li>The keys and values might be missing or changed in the other data frame.</li>
<li>The order of the data is important. You can't sort by the keys or values in the merged result.</li>
</ul>
<p>It should looks like this:</p>
<p><a href="https://i.stack.imgur.com/4uWNO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4uWNO.png" alt="enter image description here" /></a></p>
<blockquote>
<p><code>df1_index</code> / <code>df2_index</code> / <code>results</code> are just used for demonstration.</p>
</blockquote>
<p>I tried to use <code>merge</code> with <code>outer</code>:</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.DataFrame({
"key1": ['K', 'K', 'A1', 'A2', 'B1', 'B9', 'C3'],
"key2": ['a5', 'a4', 'a7', 'a9', 'b2', 'b8', 'c1'],
"Value1": ['apple', 'guava', 'kiwi', 'grape', 'banana', 'peach', 'berry'],
})
df2 = pd.DataFrame({
"key1": ['K', 'A1', 'A3', 'B1', 'C2', 'C3'],
"key2": ['a9', 'a7', 'a9', 'b2', 'c7', 'c1'],
"Value2": ['apple', 'kiwi', 'grape', 'banana', 'guava', 'orange'],
})
merged_df = pd.merge(df1, df2, how="outer", on=['key1', 'key2'])
</code></pre>
<p>but it just appends the rows with unmatched keys at the end:</p>
<p><a href="https://i.stack.imgur.com/SHV5l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SHV5l.png" alt="enter image description here" /></a></p>
<p>How do I merge and align them up?</p>
|
<p>When constructing the merged dataframe, first reset the index on each input so the original row positions survive the merge as <code>index_x</code> and <code>index_y</code> columns:</p>
<pre><code>merged_df = pd.merge(df1.reset_index(), df2.reset_index(), how="outer", on=['key1', 'key2'])
</code></pre>
<p>Use <code>combine_first</code> to combine <code>index_x</code> and <code>index_y</code>:</p>
<pre><code>merged_df['combined_index'] = merged_df.index_x.combine_first(merged_df.index_y)
</code></pre>
<p>Sort using <code>combined_index</code> and <code>index_x</code>, dropping the columns which are no longer needed and resetting the index:</p>
<pre><code>output = merged_df.sort_values(
['combined_index', 'index_x']
).drop(
['index_x', 'index_y', 'combined_index'], axis=1
).reset_index(drop=True)
</code></pre>
<p>This results in the following output:</p>
<pre><code> key1 key2 Value1 Value2
0 K a5 apple NaN
1 K a9 NaN apple
2 K a4 guava NaN
3 A1 a7 kiwi kiwi
4 A3 a9 NaN grape
5 A2 a9 grape NaN
6 B1 b2 banana banana
7 C2 c7 NaN guava
8 B9 b8 peach NaN
9 C3 c1 berry orange
</code></pre>
|
pandas
| 1
|
2,449
| 34,480,630
|
Simple Torch7 equivalent to numpy.roll
|
<p>Is there any simple way of rolling a tensor in torch7 like numpy.roll and numpy.rollaxis in python?</p>
<p>Thanks!</p>
|
<p>You can achieve the effects of numpy's <code>rollaxis</code> with torch's <a href="https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-permutedim1-dim2--dimn" rel="nofollow">permute</a>. While <code>rollaxis</code> requires the start and end position of the single axis to move, <code>permute</code> requires the new positions of all axes. E.g. for a 3-dimensional tensor <code>np.rollaxis(x, 0, 3)</code> (move the 1st axis to the end) would be equivalent to <code>x:permute(2, 3, 1)</code>.</p>
<p>I'm not aware of an easy replacement for numpy's <code>roll</code>, but <a href="https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-scatterdim-index-srcval" rel="nofollow">scatter</a> seems like a decent candidate. Call it with the required dimension and the new order of elements after the shift. (Requires one new order of elements for each single row.)
The following example shifts each row of <code>x</code> (containing 2 rows and 4 columns with random values) 2 to the right along the last axis:</p>
<pre><code>th> x = torch.zeros(2, 4):uniform(0, 10)
th> y = torch.zeros(2, 4):scatter(2, torch.LongTensor{{3, 4, 1, 2}, {3, 4, 1, 2}}, x)
th> x
0.7295 3.2218 7.3979 5.5500
8.4354 3.6722 5.5463 3.4323
[torch.DoubleTensor of size 2x4]
th> y
7.3979 5.5500 0.7295 3.2218
5.5463 3.4323 8.4354 3.6722
[torch.DoubleTensor of size 2x4]
</code></pre>
|
python|numpy|lua|torch
| 0
|
2,450
| 60,217,992
|
Sort dataframe by month and find the first non-zero value in each column for each month
|
<p>I need to load a CSV with 200 columns (the first column is a date) into a pandas dataframe in Python. I need to sort through the data and return the first non-zero value in each column for each month. Should I make separate dataframes for each month, and then search? What's the best way to approach this problem?</p>
<pre><code>df = pd.read_csv('loaddata.csv')
df['DATE'] = pd.to_datetime(df['DATE'], format='%m/%d/%Y')
df['Month'] = pd.DatetimeIndex(df['DATE']).month
</code></pre>
<p>The data looks like this:</p>
<pre><code>Date    Data_1   Data_2   Data_3
1/d/y   0        0        1
2/d/y   0        1        2
3/d/y   2        6        0
1/d/y   5        3        45
2/d/y   20       7        90
3/d/y   25       12       18
</code></pre>
<p>Expected result:</p>
<pre><code>     Data_1   Data_2   Data_3
Jan  5        3        1
Feb  20       7        2
Mar  2        6        18
</code></pre>
|
<p>You got an error for <code>Feb</code>, column <code>Data_2</code>: the first non-zero is 1, not 7.</p>
<hr>
<p>Here's one way to do it:</p>
<pre><code>def first_non_zero(col):
    """Return the first non-zero value of a column, or nan if the column is all-zero"""
    non_zero = col[col != 0]
    return np.nan if non_zero.empty else non_zero.iloc[0]

df.groupby('Month')[['Data_1', 'Data_2', 'Data_3']].apply(lambda g: g.apply(first_non_zero))
</code></pre>
<p>Result:</p>
<pre><code> Data_1 Data_2 Data_3
Month
1 5 3 1
2 20 1 2
3 2 6 18
</code></pre>
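<p>A shorter, loop-free sketch: treat zeros as missing and let groupby's <code>first</code> (which skips nulls) pick the first non-zero per month. The columns come back as floats because of the intermediate NaNs:</p>
<pre><code>out = (df[['Data_1', 'Data_2', 'Data_3']]
       .replace(0, np.nan)
       .groupby(df['Month'])
       .first())
</code></pre>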
|
python|pandas|dataframe|datetime|pandas-groupby
| 0
|
2,451
| 60,221,426
|
How to produce an array with values in a specific shape?
|
<p>I would like to create an array with values that range from 0.0 to 1.0 as shown here:
<a href="https://i.stack.imgur.com/d2yBN.png" rel="nofollow noreferrer">weighting matrix</a></p>
<p>Basically, the left and top edges should remain close to 1.0 but slowly decay to 0.5 in the corners.
The bottom and right edges should remain close to 0.0
The middle region should be mostly 0.5, and the values should be decaying diagonally from 1.0 to 0.0.</p>
<p>This is what I've tried but it doesn't give me exactly what I would like.</p>
<pre><code>import math
import numpy as np
import matplotlib.pyplot as plt
def sigmoid(x):
y = np.zeros(len(x))
for i in range(len(x)):
y[i] = 1 / (1 + math.exp(-x[i]))
return y
sigmoid_ = sigmoid(np.linspace(20, 2.5, 30))
temp1 = np.repeat(sigmoid_.reshape((1,len(sigmoid_))), repeats=10, axis=0)
sigmoid_ = sigmoid(np.linspace(6, 3, 10))
temp2 = np.repeat(sigmoid_.reshape((len(sigmoid_),1)), repeats=30, axis=1)
alpha1 = temp1 + temp2
sigmoid_ = sigmoid(np.linspace(-2.5, -20, 30))
temp1 = np.repeat(sigmoid_.reshape((1,len(sigmoid_))), repeats=10, axis=0)
sigmoid_ = sigmoid(np.linspace(-3, -6, 10))
temp2 = np.repeat(sigmoid_.reshape((len(sigmoid_),1)), repeats=30, axis=1)
alpha2 = temp1 + temp2
alpha = alpha1 + alpha2
alpha = alpha - np.min(alpha)
alpha = alpha / np.max(alpha)
plt.matshow(alpha)
</code></pre>
<p>Which gives me this: <a href="https://i.stack.imgur.com/iGOWL.png" rel="nofollow noreferrer">results</a></p>
<p>Can someone help me?</p>
|
<p>This is the simplest function I can think of:</p>
<pre><code>tune_me = 101
x = np.linspace(0, 1, tune_me)
y = np.linspace(0, 1, tune_me)
xv, yv = np.meshgrid(x, y)
sig = 1/(1 + np.exp(tune_me * (xv + yv - 1)))  # ~1 top/left, 0.5 on the anti-diagonal, ~0 bottom/right
plt.matshow(sig)
</code></pre>
<p>But if you want something specific, you should probably figure out your math (maybe on the math stack exchange) before you try to implement it.</p>
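<p>A hedged variant that separates the grid size from the steepness of the transition (here <code>k</code> is a made-up knob, not part of the code above, and <code>xv</code>, <code>yv</code> are reused from it): a smaller <code>k</code> widens the 0.5 band in the middle.</p>
<pre><code>k = 5  # hypothetical steepness; larger -> sharper transition across the anti-diagonal
alpha = 0.5 * (1 - np.tanh(k * (xv + yv - 1)))  # ~1 top-left, 0.5 on the anti-diagonal, ~0 bottom-right
plt.matshow(alpha)
</code></pre>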
|
python|numpy|matrix
| 0
|
2,452
| 60,079,496
|
python (pandas) creating a new column based on values from different rows
|
<p>I have a data frame from a csv file looking like this:</p>
<pre><code> #F E G
0 1 n.e. 153
1 1 60 15
2 1 99 10
3 1 S 23
4 2 n.e. 190
5 2 60 44
6 2 99 22
7 2 S 67
</code></pre>
<p>I would like to add a new column to this. </p>
<p>For every [#F] group, the [G] value in each row should be divided by the [G]-value in the row where [E]='n.e.'
So, in the end, it should look like this:</p>
<pre><code> #F E G rel
0 1 n.e. 153 1.000
1 1 60 15 0.098
2 1 99 10 0.065
3 1 S 23 0.150
4 2 n.e. 190 1.000
5 2 60 44 0.232
6 2 99 22 0.116
7 2 S 67 0.353
</code></pre>
<p>I have tried several approaches using a function, groupby, and np.where, but the problem is a bit more complicated than anything I have experience with, and nothing has worked out in the end.</p>
<p>Thanks for your help.</p>
|
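<p>Copy the baseline <code>G</code> (the row where <code>E</code> is <code>'n.e.'</code>) into a helper column, forward-fill it down through the group's rows, then divide:</p>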
<pre><code>df.loc[df['E'] == 'n.e.', 'G_ne'] = df['G']
df['G_ne'] = df['G_ne'].fillna(method='ffill')
df['rel'] = df['G'] / df['G_ne']
print(df)
</code></pre>
<p>Output:</p>
<pre><code> #F E G G_ne rel
0 1 n.e. 153 153.0 1.000000
1 1 60 15 153.0 0.098039
2 1 99 10 153.0 0.065359
3 1 S 23 153.0 0.150327
4 2 n.e. 190 190.0 1.000000
5 2 60 44 190.0 0.231579
6 2 99 22 190.0 0.115789
7 2 S 67 190.0 0.352632
</code></pre>
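<p>If the <code>'n.e.'</code> row is not guaranteed to come first within each <code>#F</code> group, a sketch that is order-independent uses a grouped <code>transform</code> instead of the forward fill:</p>
<pre><code>base = df['G'].where(df['E'] == 'n.e.')                          # keep only baseline values, NaN elsewhere
df['rel'] = df['G'] / base.groupby(df['#F']).transform('first')  # first non-null per group, broadcast back
</code></pre>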
|
python|pandas|function|rows
| 0
|
2,453
| 59,938,154
|
numpy where - how to set condition on whole column?
|
<p>How to implement :</p>
<pre><code>t=np.where(<exists at least 1 zero in the same column of t>,t,np.zeros_like(t))
</code></pre>
<p>in the "pythonic" way?</p>
<p>this code should set all column to zero in t if t has at least 1 zero in that column</p>
<p>Example :</p>
<pre><code>1 1 1 1 1 1
0 1 1 1 1 1
1 1 0 1 0 1
</code></pre>
<p>should turn to</p>
<pre><code>0 1 0 1 0 1
0 1 0 1 0 1
0 1 0 1 0 1
</code></pre>
|
<p><code>any</code> is what you need</p>
<p><code>~(arr == 0).any(0, keepdims=True) * arr</code></p>
<pre><code>0 1 0 1 0 1
0 1 0 1 0 1
0 1 0 1 0 1
</code></pre>
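<p>Spelled out in the <code>np.where</code> shape from the question, a sketch:</p>
<pre><code>keep = ~(t == 0).any(axis=0)               # True for columns containing no zero
t = np.where(keep, t, np.zeros_like(t))    # the (ncols,) mask broadcasts over the rows
</code></pre>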
|
python|numpy
| 1
|
2,454
| 65,450,185
|
Fill np.nan with values based on other columns
|
<p>I try to match the <code>offer_id</code> to the corresponding transaction. This is the dataset:</p>
<pre><code> time event offer_id amount
2077 0 offer received f19421c1d4aa40978ebb69ca19b0e20d NaN
15973 6 offer viewed f19421c1d4aa40978ebb69ca19b0e20d NaN
15974 6 transaction NaN 3.43
18470 12 transaction NaN 6.01
18471 12 offer completed f19421c1d4aa40978ebb69ca19b0e20d NaN
43417 108 transaction NaN 11.00
44532 114 transaction NaN 1.69
50587 150 transaction NaN 3.23
55277 168 offer received 9b98b8c7a33c4b65b9aebfe6a799e6d9 NaN
96598 258 transaction NaN 2.18
</code></pre>
<p>The rule is that when the offer is viewed, the transaction belongs to this offer id. If the offer is received, but not viewed, the transaction does not belong to the offer id. I hope the <code>time</code> variable makes it clear. This is the desired result:</p>
<pre><code> time event offer_id amount
2077 0 offer received f19421c1d4aa40978ebb69ca19b0e20d NaN
15973 6 offer viewed f19421c1d4aa40978ebb69ca19b0e20d NaN
15974 6 transaction f19421c1d4aa40978ebb69ca19b0e20d 3.43
18470 12 transaction f19421c1d4aa40978ebb69ca19b0e20d 6.01
18471 12 offer completed f19421c1d4aa40978ebb69ca19b0e20d NaN
43417 108 transaction NaN 11.00
44532 114 transaction NaN 1.69
50587 150 transaction NaN 3.23
55277 168 offer received 9b98b8c7a33c4b65b9aebfe6a799e6d9 NaN
96598 258 transaction NaN 2.18
</code></pre>
|
<p>Example code:</p>
<pre><code>import pandas as pd
import numpy as np
d = {'time': [0, 6, 6, 12, 12, 108, 144, 150, 168, 258],
'event': ["offer received", "offer viewed", "transaction", "transaction", "offer completed", "transaction", "transaction", "transaction", "offer received", "transaction"],
'offer_id': ["f19421c1d4aa40978ebb69ca19b0e20d", "f19421c1d4aa40978ebb69ca19b0e20d", np.nan, np.nan, "f19421c1d4aa40978ebb69ca19b0e20d", np.nan, np.nan, np.nan, "9b98b8c7a33c4b65b9aebfe6a799e6d9", np.nan]}
df = pd.DataFrame(d)
print("Original data:\n{}\n".format(df))
is_offer_viewed = False
now_offer_id = np.nan
for index, row in df.iterrows():
if row['event'] == "offer viewed":
is_offer_viewed = True
now_offer_id = row['offer_id']
elif row['event'] == "transaction" and is_offer_viewed:
df.at[index, 'offer_id'] = now_offer_id
elif row['event'] == "offer completed":
is_offer_viewed = False
now_offer_id = np.nan
print("Processed data:\n{}\n".format(df))
</code></pre>
<p>Outputs:</p>
<pre><code>Original data:
time event offer_id
0 0 offer received f19421c1d4aa40978ebb69ca19b0e20d
1 6 offer viewed f19421c1d4aa40978ebb69ca19b0e20d
2 6 transaction NaN
3 12 transaction NaN
4 12 offer completed f19421c1d4aa40978ebb69ca19b0e20d
5 108 transaction NaN
6 144 transaction NaN
7 150 transaction NaN
8 168 offer received 9b98b8c7a33c4b65b9aebfe6a799e6d9
9 258 transaction NaN
Processed data:
time event offer_id
0 0 offer received f19421c1d4aa40978ebb69ca19b0e20d
1 6 offer viewed f19421c1d4aa40978ebb69ca19b0e20d
2 6 transaction f19421c1d4aa40978ebb69ca19b0e20d
3 12 transaction f19421c1d4aa40978ebb69ca19b0e20d
4 12 offer completed f19421c1d4aa40978ebb69ca19b0e20d
5 108 transaction NaN
6 144 transaction NaN
7 150 transaction NaN
8 168 offer received 9b98b8c7a33c4b65b9aebfe6a799e6d9
9 258 transaction NaN
</code></pre>
|
python|pandas
| 1
|
2,455
| 65,128,134
|
How to flip half of a numpy array
|
<p>I have a numpy array:</p>
<pre><code>arr=np.array([[1., 2., 0.],
[2., 4., 1.],
[1., 3., 2.],
[-1., -2., 4.],
[-1., -2., 5.],
[1., 2., 6.]])
</code></pre>
<p>I want to flip the second half of this array upward. I mean I want to have:</p>
<pre><code>flipped_arr=np.array([[-1., -2., 4.],
[-1., -2., 5.],
[1., 2., 6.],
[1., 2., 0.],
[2., 4., 1.],
[1., 3., 2.]])
</code></pre>
<p>When I try this code:</p>
<pre><code>fliped_arr=np.flip(arr, 0)
</code></pre>
<p>It gives me:</p>
<pre><code>fliped_arr= array([[1., 2., 6.],
[-1., -2., 5.],
[-1., -2., 4.],
[1., 3., 2.],
[2., 4., 1.],
[1., 2., 0.]])
</code></pre>
<p>In advance, I do appreciate any help.</p>
|
<p>You can simply concatenate rows below the <code>n</code>th row (included) with <a href="https://numpy.org/doc/stable/reference/generated/numpy.r_.html" rel="nofollow noreferrer">np.r_</a> for instance, with row index <code>n</code> of your choice, at the top and the other ones at the bottom:</p>
<pre><code>import numpy as np
n = 3
arr_flip_n = np.r_[arr[n:],arr[:n]]
>>> array([[-1., -2., 4.],
[-1., -2., 5.],
[ 1., 2., 6.],
[ 1., 2., 0.],
[ 2., 4., 1.],
[ 1., 3., 2.]])
</code></pre>
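<p>Equivalently, <code>np.roll</code> performs the same rotation along the first axis:</p>
<pre><code>flipped_arr = np.roll(arr, 3, axis=0)   # rows 3..5 wrap around to the top
</code></pre>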
|
python|numpy|flip
| 3
|
2,456
| 50,217,206
|
How to manage subplots in Pandas?
|
<p>My DataFrame is:</p>
<pre><code>df = pd.DataFrame({'A': range(0,-10,-1), 'B': range(10,20), 'C': range(10,30,2)})
</code></pre>
<p>and plot:</p>
<pre><code>df[['A','B','C']].plot(subplots=True, sharex=True)
</code></pre>
<p>I get one column with 3 subplots, each even height.</p>
<p>How can I plot it so that I have only two subplots, where 'A' is in the upper one and 'B' and 'C' are in the lower one, <em>and</em> the lower graph's height differs from that of graph 'A' (the x axis is shared)?</p>
|
<p>Use <code>subplots</code> with the <code>gridspec_kw</code> parameter to set up your grid, then use the <code>ax</code> parameter in pandas plot to draw into the axes defined in your subplots call:</p>
<pre><code>f, ax = plt.subplots(2,2, gridspec_kw={'height_ratios':[1,2]})
df[['A','B','C']].plot(subplots=True, sharex=True, ax=[ax[0,0],ax[0,1],ax[1,0]])
ax[1,1].set_visible(False)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/XC1AM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XC1AM.png" alt="enter image description here"></a></p>
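<p>If you instead want 'B' and 'C' drawn together in a single lower axes, as the question describes, a sketch with a 2x1 grid and unequal heights:</p>
<pre><code>f, (ax_top, ax_bottom) = plt.subplots(2, 1, sharex=True,
                                      gridspec_kw={'height_ratios': [1, 2]})
df['A'].plot(ax=ax_top, legend=True)
df[['B', 'C']].plot(ax=ax_bottom)
</code></pre>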
|
python|pandas|matplotlib
| 2
|
2,457
| 49,851,375
|
list and array is global by default in python3.6?
|
<p>Just a simple code below: </p>
<pre><code>import numpy as np
x=np.array([1,2])
y=[1,2]
L=1
def set_L(x,y,L):
x[0]+=1
y[0]+=1
L+=1
print(id(x))
print(id(y))
print(id(L))
</code></pre>
<p>I found that the array x and the list y are the same inside the function set_L(). Does this mean that lists and arrays are global variables by default? However, the variable L is not global in the function set_L(). I am confused about why Python is designed like this.</p>
|
<p><code>x[0] += 1</code> and <code>y[0] += 1</code> just modify the existing objects in place, while <code>L += 1</code> is an assignment that rebinds the local name <code>L</code> to a new object. See <a href="https://stackoverflow.com/a/11867500/7662112">https://stackoverflow.com/a/11867500/7662112</a></p>
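<p>A minimal sketch of the difference:</p>
<pre><code>def f(lst, n):
    lst.append(0)   # mutates the list object the caller also references
    n = n + 1       # rebinds the local name n only; the caller's variable is untouched

a_list, a_num = [], 1
f(a_list, a_num)
print(a_list, a_num)   # -> [0] 1
</code></pre>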
|
python|arrays|list|numpy|global-variables
| 0
|
2,458
| 50,208,918
|
Python Pandas find statistical difference between 2 distributions
|
<p>I have 2 columns with similar data. I plot them to compare their distributions, and I want to quantify their difference.</p>
<pre><code>df = pd.DataFrame({'a':['cat','dog','bird','cat','dog','dog','dog'],
'b':['cat','cat','cat','bird','dog','dog','dog']})
</code></pre>
<p>I then plot the 2 columns of my data frame to compare their distributions:</p>
<pre><code>ax = df['a'].value_counts().plot(kind='bar', color='blue', width=.75, legend=True, alpha=0.8)
df['b'].value_counts().plot(kind='bar', color='maroon', width=.5, alpha=1, legend=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/2dIaS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2dIaS.png" alt="enter image description here"></a></p>
<p>How can I quantify the difference in the distributions statistically, to say how similar they are?</p>
<p>Would it be a simple t-test or something else?</p>
|
<p>It is very common to use the <a href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov%E2%80%93Smirnov_test" rel="nofollow noreferrer">two-sided Kolmogorov-Smirnov test</a> for this. </p>
<p>In Python, you can do so with <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ks_2samp.html" rel="nofollow noreferrer"><code>scipy.stats.ks_2samp</code></a>:</p>
<pre><code>from scipy import stats
merged = pd.merge(
df.a.value_counts().to_frame(),
df.b.value_counts().to_frame(),
left_index=True,
right_index=True)
stats.ks_2samp(merged.a, merged.b)
</code></pre>
<p>Broadly speaking, if the second value of the returned tuple is small (say less than 0.05), you should reject the hypothesis that the distributions are the same.</p>
|
python|pandas|numpy|scipy
| 4
|
2,459
| 50,094,633
|
Keras TimeDistributed Not Masking CNN Model
|
<p>For the sake of example, I have an input consisting of 2 images, of total shape (2,299,299,3). I'm trying to apply inceptionv3 on each image, and then subsequently process the output with an LSTM. I'm using a masking layer to exclude a blank image from being processed (specified below).</p>
<p>The code is:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras import backend as K
from keras.models import Sequential,Model
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D, BatchNormalization, \
Input, GlobalAveragePooling2D, Masking,TimeDistributed, LSTM,Dense,Flatten,Reshape,Lambda, Concatenate
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.applications import inception_v3
IMG_SIZE=(299,299,3)
def create_base():
base_model = inception_v3.InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
base_model=Model(base_model.input,x)
return base_model
base_model=create_base()
#Image mask to ignore images with pixel values of -1
IMAGE_MASK = -2*np.expand_dims(np.ones(IMG_SIZE),0)
final_input=Input((2,IMG_SIZE[0],IMG_SIZE[1],IMG_SIZE[2]))
final_model = Masking(mask_value = -2.)(final_input)
final_model = TimeDistributed(base_model)(final_model)
final_model = Lambda(lambda x: x, output_shape=lambda s:s)(final_model)
#final_model = Reshape(target_shape=(2, 2048))(final_model)
#final_model = Masking(mask_value = 0.)(final_model)
final_model = LSTM(5,return_sequences=False)(final_model)
final_model = Model(final_input,final_model)
#Create a sample test image
TEST_IMAGE = np.ones(IMG_SIZE)
#Create a test sample input, consisting of a normal image and a masked image
TEST_SAMPLE = np.concatenate((np.expand_dims(TEST_IMAGE,axis=0),IMAGE_MASK))
inp = final_model.input # input placeholder
outputs = [layer.output for layer in final_model.layers] # all layer outputs
functors = [K.function([inp]+ [K.learning_phase()], [out]) for out in outputs]
layer_outs = [func([np.expand_dims(TEST_SAMPLE,0), 1.]) for func in functors]
</code></pre>
<p>This does not work correctly. Specifically, the model should mask the IMAGE_MASK part of the input, but it instead processes it with inception (giving a nonzero output). here are the details:</p>
<p>layer_out[-1] , the LSTM output is fine:</p>
<p><code>[array([[-0.15324114, -0.09620268, -0.01668587, 0.07938149, -0.00757846]], dtype=float32)]</code></p>
<p>layer_out[-2] and layer_out[-3] , the LSTM input is <em>wrong</em>, it should have all zeros in the second array:</p>
<p><code>[array([[[ 0.37713543, 0.36381325, 0.36197218, ..., 0.23298527,
0.43247852, 0.34844452],
[ 0.24972123, 0.2378867 , 0.11810347, ..., 0.51930511,
0.33289322, 0.33403745]]], dtype=float32)]</code></p>
<p>layer_out[-4], the input to the CNN is correctly masked:</p>
<pre><code>[[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
...,
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]]],
[[[-0., -0., -0.],
[-0., -0., -0.],
[-0., -0., -0.],
...,
[-0., -0., -0.],
[-0., -0., -0.],
[-0., -0., -0.]],
</code></pre>
<p>Note that the code seems to work <strong>correctly</strong> with a simpler base_model such as:</p>
<pre class="lang-py prettyprint-override"><code>def create_base():
input_layer=Input(IMG_SIZE)
base_model=Flatten()(input_layer)
base_model=Dense(2048)(base_model)
base_model=Model(input_layer,base_model)
return base_model
</code></pre>
<p>I have exhausted most online resources on this. Permutations of this question have been asked on Keras's github, such as <a href="https://github.com/keras-team/keras/issues/5533" rel="noreferrer">here</a>, <a href="https://github.com/keras-team/keras/issues/9788" rel="noreferrer">here</a> and <a href="https://github.com/keras-team/keras/issues/5934#issuecomment-288811716" rel="noreferrer">here</a>, but I can't seem to find any concrete resolution.</p>
<p>The links suggest that the issues seem to be stemming from a combination of TimeDistributed being applied to BatchNormalization, and the hacky fixes of either the Lambda identity layer, or Reshape layers remove errors but don't seem to output the correct model. </p>
<p>I've tried to force the base model to support masking via:</p>
<pre class="lang-py prettyprint-override"><code>base_model.__setattr__('supports_masking',True)
</code></pre>
<p>and I've also tried applying an identity layer via:</p>
<pre class="lang-py prettyprint-override"><code>TimeDistributed(Lambda(lambda x: base_model(x), output_shape=lambda s:s))(final_model)
</code></pre>
<p>but none of these seem to work. Note that I would like the final model to be trainable, in particular the CNN part of it should remain trainable. </p>
|
<p>It seems to be working as intended. Masking in Keras doesn't produce zeros as you would expect; instead, downstream layers such as LSTM and the loss calculation skip the masked timesteps. In the case of RNNs, Keras (at least the tensorflow backend) is implemented such that the states from the previous step are carried over; see <a href="https://github.com/keras-team/keras/blob/ba6f4945858bdf02bf10fc833430fb77894693db/keras/backend/tensorflow_backend.py#L2871" rel="nofollow noreferrer">tensorflow_backend.py</a>. This is done in part to preserve the shapes of tensors when dynamic input is given.</p>
<p>If you really want zeros you will have to implement your own layer with a similar logic to Masking and return zeros explicitly. To solve your problem, you need a mask before the final LSTM layer using the <code>final_input</code>:</p>
<pre><code>class MyMask(Masking):
"""Layer that adds a mask based on initial input."""
def compute_mask(self, inputs, mask=None):
# Might need to adjust shapes
return K.any(K.not_equal(inputs[0], self.mask_value), axis=-1)
def call(self, inputs):
# We just return input back
return inputs[1]
def compute_output_shape(self, input_shape):
return input_shape[1]
final_model = MyMask(mask_value=-2.)([final_input, final_model])
</code></pre>
<p>You probably can attach the mask in a simpler manner but this custom class essentially adds a mask based on your initial inputs and outputs a Keras tensor that now has a mask.</p>
<p>Your LSTM will ignore the second image in your example. To confirm, you can set <code>return_sequences=True</code> and check that the outputs for the 2 images are identical.</p>
|
tensorflow|deep-learning|keras|conv-neural-network
| 3
|
2,460
| 64,152,916
|
How to add row name to cell in pandas dataframe?
|
<p>How do I take a data frame like the following:</p>
<pre><code>d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
df
col1 col2
row0 1 3
row1 2 4
</code></pre>
<p>And produce a dataframe where the row name is added to the cell in the frame, like the following:</p>
<pre><code> col1 col2
row0 1,row0 3,row0
row1 2,row1 4,row1
</code></pre>
<p>Any help is appreciated.</p>
|
<p>You can convert the dtype to string and add the index to the dataframe:</p>
<pre><code>print(df,'\n')
print(df.astype(str).add(","+df.index,axis=0))
</code></pre>
<hr />
<pre><code> col1 col2
row0 1 3
row1 2 4
col1 col2
row0 1,row0 3,row0
row1 2,row1 4,row1
</code></pre>
|
python|python-3.x|pandas
| 3
|
2,461
| 63,836,145
|
Run a function for each element in two lists in Pandas Dataframe Columns
|
<p><strong>df</strong>:</p>
<pre><code>col1
['aa', 'bb', 'cc', 'dd']
['this', 'is', 'a', 'list', '2']
['this', 'list', '3']
col2
[['ee', 'ff', 'gg', 'hh'], ['qq', 'ww', 'ee', 'rr']]
[['list', 'a', 'not', '1'], ['not', 'is', 'this', '2']]
[['this', 'is', 'list', 'not'], ['a', 'not', 'list', '2']]
</code></pre>
<p><strong>What I'm trying to do:</strong></p>
<p>I am trying to run the code below on each element (word) in df <code>col1</code> against each corresponding element in each of the sublists in <code>col2</code>, and put the scores in a new column.</p>
<p>So for the first row in <code>col1</code>, run the <code>get_top_matches</code> function on this:</p>
<pre><code>`col1` "aa" and `col2` "ee" and "qq"
`col1` "bb" and `col2` "ff" and "ww"
`col1` "cc" and `col2` "gg" and "ee"
`col1` "dd" and `col2` "hh" and "rr"
</code></pre>
<p><strong>What the new column should look like:</strong></p>
<p><em>I don't know for sure what row 2 and 3 scores should be</em></p>
<pre><code>score_col
[1.0, 1.0, 1.0, 1.0]
[.34, .33, .27, .24, .23] #not sure
[.23, .13, .26] #not sure
</code></pre>
<p><strong>What I've tried before:</strong></p>
<p>I've done this when <code>col1</code> was just a string compared against each list element in <code>col2</code>, like this, but I don't have the slightest idea how to run it with list elements against the corresponding sublist elements:</p>
<pre><code>df.agg(lambda x: get_top_matches(*x), axis=1)
</code></pre>
<p><strong>The Function Code</strong></p>
<p>Here's the <code>get_top_matches</code> function - just run this whole thing; i'm only calling the last function for this question:</p>
<pre><code>import math
import re

#jaro version
def sort_token_alphabetically(word):
token = re.split('[,. ]', word)
sorted_token = sorted(token)
return ' '.join(sorted_token)
def get_jaro_distance(first, second, winkler=True, winkler_ajustment=True,
scaling=0.1, sort_tokens=True):
"""
:param first: word to calculate distance for
:param second: word to calculate distance with
:param winkler: same as winkler_ajustment
:param winkler_ajustment: add an adjustment factor to the Jaro of the distance
:param scaling: scaling factor for the Winkler adjustment
:return: Jaro distance adjusted (or not)
"""
if sort_tokens:
first = sort_token_alphabetically(first)
second = sort_token_alphabetically(second)
if not first or not second:
raise JaroDistanceException(
"Cannot calculate distance from NoneType ({0}, {1})".format(
first.__class__.__name__,
second.__class__.__name__))
jaro = _score(first, second)
cl = min(len(_get_prefix(first, second)), 4)
if all([winkler, winkler_ajustment]): # 0.1 as scaling factor
return round((jaro + (scaling * cl * (1.0 - jaro))) * 100.0) / 100.0
return jaro
def _score(first, second):
shorter, longer = first.lower(), second.lower()
if len(first) > len(second):
longer, shorter = shorter, longer
m1 = _get_matching_characters(shorter, longer)
m2 = _get_matching_characters(longer, shorter)
if len(m1) == 0 or len(m2) == 0:
return 0.0
return (float(len(m1)) / len(shorter) +
float(len(m2)) / len(longer) +
float(len(m1) - _transpositions(m1, m2)) / len(m1)) / 3.0
def _get_diff_index(first, second):
if first == second:
pass
if not first or not second:
return 0
max_len = min(len(first), len(second))
for i in range(0, max_len):
if not first[i] == second[i]:
return i
return max_len
def _get_prefix(first, second):
if not first or not second:
return ""
index = _get_diff_index(first, second)
if index == -1:
return first
elif index == 0:
return ""
else:
return first[0:index]
def _get_matching_characters(first, second):
common = []
limit = math.floor(min(len(first), len(second)) / 2)
for i, l in enumerate(first):
left, right = int(max(0, i - limit)), int(
min(i + limit + 1, len(second)))
if l in second[left:right]:
common.append(l)
second = second[0:second.index(l)] + '*' + second[
second.index(l) + 1:]
return ''.join(common)
def _transpositions(first, second):
return math.floor(
len([(f, s) for f, s in zip(first, second) if not f == s]) / 2.0)
def get_top_matches(reference, value_list, max_results=None):
scores = []
if not max_results:
max_results = len(value_list)
for val in value_list:
score_sorted = get_jaro_distance(reference, val)
score_unsorted = get_jaro_distance(reference, val, sort_tokens=False)
scores.append((val, max(score_sorted, score_unsorted)))
scores.sort(key=lambda x: x[1], reverse=True)
return scores[:max_results]
class JaroDistanceException(Exception):
def __init__(self, message):
super(Exception, self).__init__(message)
</code></pre>
<hr />
<p><strong>Attempt 1</strong>
Just trying to get this to compare to each word in the lists rather than each letter:</p>
<pre><code>[[[df1.agg(lambda x: get_top_matches(u,w), axis=1) for u,w in zip(x,v)]
   for v in y]
 for x,y in zip(df1['parent_org_name_list'], df1['children_org_name_sublists'])]
</code></pre>
<p><a href="https://i.stack.imgur.com/fPvSv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPvSv.png" alt="Results" /></a></p>
<p><strong>Attempt 2</strong>
Changing the <code>get_top_matches</code> function to say <code>for val in value_list.split():</code> resulted in this below - which grabs the first word and compares it to the first word in each sublist in <code>col2</code> 5 times (not sure why 5 times):</p>
<pre><code>[
[0 [(myalyk, 0.73)]1 [(myalyk, 0.73)]2 [(myalyk, 0.73)]3 [(myalyk, 0.73)]4 [(myalyk, 0.73)]dtype: object]
, [0 [(myliu, 0.79)]1 [(myliu, 0.79)]2 [(myliu, 0.79)]3 [(myliu, 0.79)]4 [(myliu, 0.79)]dtype: object]
, [0 [(myllc, 0.97)]1 [(myllc, 0.97)]2 [(myllc, 0.97)]3 [(myllc, 0.97)]4 [(myllc, 0.97)]dtype: object]
, [0 [(myloc, 0.88)]1 [(myloc, 0.88)]2 [(myloc, 0.88)]3 [(myloc, 0.88)]4 [(myloc, 0.88)]dtype: object]
]
</code></pre>
<p>Just need the function to run on each word in the sublists.</p>
<p><strong>Attempt 3</strong>
Removing the attempt-2 change from the <code>get_top_matches</code> function and modifying the attempt-1 list comprehension to the code below grabbed the first word in the first 3 sublists of <code>col2</code>; I still need to compare the <code>col1</code> list against each word in the <code>col2</code> sublists:</p>
<pre><code>[[df.agg(lambda x: get_top_matches(u,v), axis=1) for u in x ]
for v in zip(*y)]
for x,y in zip(df['col1'], df['col2'])
]
</code></pre>
<p><strong>results to attempt 3</strong></p>
<pre><code>[[0 [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79),
...1 [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79),
...2 [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79),
...3 [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79),
...4 [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79),
...dtype: object]]
</code></pre>
<p><strong>Expectation</strong>
(In this example, row 1 has 4 sublists and row 2 has 2 sublists. The function runs on each word in column 1 against each word in each sublist in column 2 and puts the results in a sublist in a new column.)</p>
<pre><code>[[['myalyk',.97], ['oleksandr',.54], ['nychyporovych',.3], ['pp',0]], [['myliu',.88], ['srl',.43]], [['myllc',1.0]], [['myloc',1.0], ['manag',.45], ['IT',.1], ['ag',0]]],
[[['ltd',.34], ['yuriapharm',.76]], [['yuriypra',.65], ['law',.54], ['offic',.45], ['pc',.34]]],
...
</code></pre>
|
<p>This works:</p>
<pre><code># Generate DataFrame
df = pd.DataFrame (data, columns = ['col1','col2'])
# Clean Data (strip out trailing commas on some words)
df['col1'] = df['col1'].map(lambda lst: [x.rstrip(',') for x in lst])
# 1. List comprehension Technique
# zip provides pairs of col1, col2 rows
result = [[get_top_matches(u, [v]) for u in x for w in y for v in w] for x, y in zip(df['col1'], df['col2'])]
# 2. DataFrame Apply Technique
def func(x, y):
return [get_top_matches(u, [v]) for u in x for w in y for v in w]
df['func_scores'] = df.apply(lambda row: func(row['col1'], row['col2']), axis = 1)
# Verify two methods are equal
print(df['func_scores'].equals(pd.Series(result))) # True
print(df['func_scores'].to_string(index=False))
</code></pre>
<p>Thanks all who helped</p>
|
python|pandas
| 2
|
2,462
| 32,701,977
|
calculating the curl of u and v wind components in satellite data - Python
|
<p>I am not sure how to take derivatives of the u and v components of the wind in satellite data. I thought I could use numpy.gradient in this way:</p>
<pre><code> from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
GridSat = Dataset('analysis_20040713_v11l30flk.nc4','r',format='NETCDF4')
missing_data = -9999.0
lat = GridSat.variables['lat']
lat = lat[:]
lat[np.where(lat==missing_data)] = np.nan
lat[np.where(lat > 90.0)] = np.nan
lon = GridSat.variables['lon']
lon = lon[:]
lon[np.where(lon==missing_data)] = np.nan
uwind_data = GridSat.variables['uwnd']
uwind = GridSat.variables['uwnd'][:]
uwind_sf = uwind_data.scale_factor
uwind_ao = uwind_data.add_offset
miss_uwind = uwind_data.missing_value
uwind[np.where(uwind==miss_uwind)] = np.nan
vwind_data = GridSat.variables['vwnd']
vwind = GridSat.variables['vwnd'][:]
vwind_sf = vwind_data.scale_factor
vwind_ao = vwind_data.add_offset
miss_vwind = vwind_data.missing_value
vwind[np.where(vwind==miss_vwind)] = np.nan
uwind = uwind[2,:,:]
vwind = vwind[2,:,:]
dx = 28400.0 # meters calculated from the 0.25 degree spatial gridding
dy = 28400.0 # meters calculated from the 0.25 degree spatial gridding
dv_dx, dv_dy = np.gradient(vwind, [dx,dy])
du_dx, du_dy = np.gradient(uwind, [dx,dy])
File "<ipython-input-229-c6a5d5b09224>", line 1, in <module>
np.gradient(vwind, [dx,dy])
File "/Users/anaconda/lib/python2.7/site-packages/nump/lib/function_base.py", line 1040, in gradient
out /= dx[axis]
ValueError: operands could not be broadcast together with shapes (628,1440) (2,) (628,1440)
</code></pre>
<p>Honestly, I am not sure how to calculate central differences of satellite data with (0.25x0.25) degree spacing. I don't think my dx and dy are correct either. I would really appreciate it if someone had a good idea on approaching these types of calculations on satellite data. Thank you!!</p>
|
<p>As <code>@moarningsun</code> commented, changing how you call <code>np.gradient</code> should correct the <code>ValueError</code></p>
<pre><code>dv_dx, dv_dy = np.gradient(vwind, dx,dy)
du_dx, du_dy = np.gradient(uwind, dx,dy)
</code></pre>
<p>How you got <code>vwind</code> from the file is not particularly important, especially since we don't have access to that file. The shape of <code>vwind</code> would have been useful, though we can guess that from the error message. The reference in the error to a <code>(2,)</code> array is to <code>[dx,dy]</code>. When you get <code>broadcasting</code> errors, check the shapes of the various arguments.</p>
<p>The <code>np.gradient</code> code is straightforward, only complicated by the fact that it can handle 1d, 2d, 3d and higher data. Basically it does calculations like</p>
<pre><code>(z[:,2:]-z[:,:-2])/2
(z[2:,:]-z[:-2,:])/2
</code></pre>
<p>for the interior values, and one-sided differences for the boundary values.</p>
<p>I'll leave the physics of deriving a <code>curl</code> from the gradients (or not) to others, but a minimal sketch follows.</p>
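<p>Assuming axis 0 of these grids is latitude (y) and axis 1 is longitude (x), note that <code>np.gradient</code> returns the axis-0 derivative first; the vertical curl component is then just a difference. Treat this as a sketch, not a validated meteorological calculation:</p>
<pre><code>dv_dy, dv_dx = np.gradient(vwind, dy, dx)   # axis-0 (y) derivative comes first
du_dy, du_dx = np.gradient(uwind, dy, dx)
curl = dv_dx - du_dy                        # vertical component of the curl (relative vorticity)
</code></pre>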
|
python|numpy|vector|signal-processing|weather
| 1
|
2,463
| 38,592,523
|
Checkbox to select/unselect all series in c3.js
|
<p>I have a fairly big amount of data displayed on graphs using c3.js and I was wondering if it was possible to implement a checkbox for each graph with the option to select or unselect all.</p>
<p>I couldn't find anything related on the documentation.</p>
<p>Thanks in advance</p>
|
<p>If I understand correctly, you want the ability to hide or show all data at once.<br>
In some of my charts I have buttons to show or hide all data </p>
<p>In a basic html/js example:</p>
<pre><code><div id='mychart'></div>
<button onclick="chart.show()">Show All</button>
<button onclick="chart.hide()">Hide All</button>
<script type="text/javascript">
var chart;
$(document).ready(function() {
chart = c3.generate(...);
});
</script>
</code></pre>
|
pandas|c3.js
| 1
|
2,464
| 63,030,561
|
Get nearest value that is out of datetime range if there is only one record in group
|
<p>Have such dataframe:</p>
<p><a href="https://i.stack.imgur.com/dOnXs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dOnXs.png" alt="enter image description here" /></a></p>
<p>I need first to filter data by <em>date_op</em> and then group by <em>key</em> column:</p>
<p><a href="https://i.stack.imgur.com/aCLfl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aCLfl.png" alt="enter image description here" /></a></p>
<p>As you can see, there are two records for <em>key2</em> and one for <em>key1</em>. Right here is the problem: I need no fewer than two records in a group. If there is only one record in a group, I would like to get the nearest record that is out of the <em>date_op</em> bounds. These records are under index 3 and 5:</p>
<p><a href="https://i.stack.imgur.com/UKFb4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UKFb4.png" alt="enter image description here" /></a></p>
<p>For this case this out-of-bound record is row with index 3. That means that expected result looks like this (despite the fact that it is less than filter datetime above):</p>
<p><a href="https://i.stack.imgur.com/Aki2S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aki2S.png" alt="enter image description here" /></a></p>
<p>Could you please tell me how I can achieve this?</p>
<p>DataFrame:</p>
<pre><code>data = [
{'date_op': '2020-07-15 00:03:00', 'key': 'key1', 'value': 10},
{'date_op': '2020-07-15 00:02:00', 'key': 'key2', 'value': 9},
{'date_op': '2020-07-15 00:01:00', 'key': 'key2', 'value': 7},
{'date_op': '2020-07-14 23:59:00', 'key': 'key1', 'value': 6},
{'date_op': '2020-07-14 23:59:00', 'key': 'key3', 'value': 3}]
df = pd.DataFrame(data)
</code></pre>
|
<p>Maybe the following can help you:</p>
<pre><code>data["Appearance"] = data.groupby("key").cumcount()
df2 = data[(data["date_op"]>'2020-07-15 00:01:00')].copy()
df2["filter"] = int(1)
df3 = pd.merge(data,df2[["key","filter"]],on="key", how = "left")
df3[(df3["date_op"]>"2020-07-15 00:00:00") | ((df3["filter"] == 1) & (df3["Appearance"] <= 1))][["date_op","key","value"]]
date_op key value
0 2020-07-15 00:03:00 key1 10
1 2020-07-15 00:02:00 key2 9
2 2020-07-15 00:01:00 key2 7
3 2020-07-14 23:59:00 key1 6
</code></pre>
<p>We just mark which keys appear in the filtered range via the <code>pd.merge</code>, and then filter, taking care to pick no more than two rows for each key.</p>
|
python|pandas
| 1
|
2,465
| 62,983,600
|
Plotting by groupby and average
|
<p>I have a dataframe with multiple columns and rows. One column, say 'name', has several rows with names; the same name is used multiple times. Other columns, say 'x', 'y', 'z', 'zz', have values. I want to group by name, get the mean of each column (x, y, z, zz) for each name, and then plot it on a bar chart.</p>
|
<p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>pandas.DataFrame.groupby</code></a> is an important data-wrangling skill. Let's first make a dummy Pandas data frame.</p>
<pre><code>df = pd.DataFrame({"name": ["John", "Sansa", "Bran", "John", "Sansa", "Bran"],
"x": [2, 3, 4, 5, 6, 7],
"y": [5, -3, 10, 34, 1, 54],
"z": [10.6, 99.9, 546.23, 34.12, 65.04, -74.29]})
>>>
name x y z
0 John 2 5 10.60
1 Sansa 3 -3 99.90
2 Bran 4 10 546.23
3 John 5 34 34.12
4 Sansa 6 1 65.04
5 Bran 7 54 -74.29
</code></pre>
<p>We can use the label of the column to group the data (here the label is "name"). Explicitly defining the by parameter can be omitted (c.f., <code>df.groupby("name")</code>).</p>
<pre><code>df.groupby(by = "name").mean().plot(kind = "bar")
</code></pre>
<p>which gives us a nice bar graph.</p>
<p><a href="https://i.stack.imgur.com/HXL6q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HXL6q.png" alt="enter image description here" /></a></p>
<p>Transposing the group by results using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transpose.html" rel="nofollow noreferrer"><code>T</code></a> (as also suggested by anky) yields a different visualization. We can also pass a dictionary as the by parameter to determine the groups. The by parameter can also be a function, Pandas series, or ndarray.</p>
<pre><code>df.groupby(by = {1: "Sansa", 2: "Bran"}).mean().T.plot(kind = "bar")
</code></pre>
<p><a href="https://i.stack.imgur.com/M8Xew.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M8Xew.png" alt="enter image description here" /></a></p>
|
pandas|matplotlib|pandas-groupby
| 1
|
2,466
| 62,982,528
|
How to Count Occurrence of Matrix in List of Matrices?
|
<p>I have a list of matrices, for example a list of numpy arrays:</p>
<pre><code>list = [np.array([[0,1],[1,1]]),
np.array([[1,0],[0,0]]),
np.array([[0,1],[1,1]])]
</code></pre>
<p>and I would like to count the occurrence of each matrix. Thus, the desired output is something like:</p>
<pre><code>np.array([[0,1],[1,1]]): 2
np.array([[1,0],[0,0]]): 1
</code></pre>
<p>I could imagine that this might be possible using numpy or pandas. As I need to use the matrices for arithmetic operations, I'm looking for a solution which avoids flattening the matrices. I'm aware that np.unique is able to count the occurrences of flat arrays in a list.</p>
|
<p>You can do:</p>
<pre><code>pd.Series(my_list).astype(str).value_counts()
</code></pre>
<hr />
<pre><code>[[0 1]\n [1 1]] 2
[[1 0]\n [0 0]] 1
dtype: int64
</code></pre>
<p>Or:</p>
<pre><code>from collections import defaultdict
d = defaultdict(int)
for arr in my_list:
d[str(arr)] += 1
d = dict(d)
print(d)
{'[[0 1]\n [1 1]]': 2, '[[1 0]\n [0 0]]': 1}
</code></pre>
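<p>To stay in NumPy and keep the matrices as arrays (no string round-trip), a sketch with <code>np.unique</code> and its <code>axis</code> argument:</p>
<pre><code>arrs = np.stack(my_list)                                   # shape (3, 2, 2)
uniq, counts = np.unique(arrs, axis=0, return_counts=True)
for m, c in zip(uniq, counts):
    print(m, c)
</code></pre>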
|
python|pandas|numpy|matrix
| 2
|
2,467
| 63,041,906
|
Python xlsx insert columns at cell location
|
<p>I am trying to copy data from a column and insert that data into another column at a specific cell location, preserving the data above it, while shifting the other column's data to the right.</p>
<p>I have been trying to do this in Openpyxl and with Pandas with no luck.
I'm attaching pictures of the desired outcome to help clarify the question.</p>
<p><strong>Before</strong>:</p>
<p><img src="https://i.stack.imgur.com/5F0wG.png" alt="Before" /></p>
<p><strong>After</strong>:</p>
<p><img src="https://i.stack.imgur.com/hgwrS.png" alt="After" /></p>
<p>--Updated with Code and output--</p>
<pre><code>from openpyxl import *
from datetime import datetime

def setupAfter():
    flag = False
    for cols in beforesheet.iter_cols():
        for cell in cols:
            if cell.row == 2 and cell.column == 2:
                aftersheet.insert_cols(cell.column, amount=1)
                flag = True
                break
        if flag:
            break
        else:
            continue
    aftersheet['B3'].value = beforesheet['A1'].value
    aftersheet['B4'].value = beforesheet['A2'].value
    aftersheet['B5'].value = beforesheet['A3'].value

outputwb = load_workbook(filename='aftertest.xlsx')
startwb = load_workbook(filename='beforetest.xlsx', keep_vba=True, data_only=True)
beforesheet = startwb.active
aftersheet = outputwb.active
setupAfter()
outputwb.save(filename=str(datetime.now().strftime("%Y%m%d_%H%M%S"))+"_aftertest.xlsx")
</code></pre>
</code></pre>
<p><a href="https://i.stack.imgur.com/zSBt5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zSBt5.png" alt="Script output" /></a></p>
<p>Updated with functionality I'm trying to replicate in Excel:
<a href="https://i.stack.imgur.com/RfjA7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RfjA7.png" alt="Excel Functionality" /></a></p>
|
<p>let's say you have this dataframe:</p>
<pre><code> a b c
0 1 2 3
1 4 5 6
2 7 8 9
</code></pre>
<p>and you are trying to copy rows 1 and 2 of column <code>a</code> into column <code>c</code>, like this:</p>
<pre><code> a b c
0 1 2 3
1 4 5 4
2 7 8 7
</code></pre>
<p>Here is how you do this:</p>
<pre><code>df.loc[1:2, 'c'] = df.loc[1:2, 'a']  # label-based .loc slices include both endpoints
</code></pre>
|
python|pandas|openpyxl
| 1
|
2,468
| 32,116,900
|
Python: numpy.var yields unknown number
|
<p>numpy.var yields this number: 6.0037250324777306e-28.</p>
<p>I suppose by looking at the data that this number is close to 0. Am I correct? If so, how could I interpret this number? </p>
|
<p>It is indeed a number very very close to 0. For example:</p>
<pre><code>import numpy as np
list_to_check_var = [2,2,2,2,2.00000000001]
np.var(list_to_check_var)
</code></pre>
<p>yields</p>
<pre><code>1.6000002679246418e-23
</code></pre>
<p>As you intuitively know, the variance of the list is very small. The <code>e-23</code> part at the end means you need to multiply the number by <code>10^-23</code>.</p>
|
python|numpy
| 2
|
2,469
| 41,663,885
|
How to translate a simple MATLAB equation to Python?
|
<p>I need to understand how I can translate these few lines of MATLAB code. I don't understand how to create a vector <code>n1</code> of <code>n</code> elements and how to fill it using the same formula as in MATLAB.</p>
<p>Here's the MATLAB code: </p>
<pre><code>nc = 200; ncmax = 600; dx = 0.15e-04;
r = (dx/2):dx:dx*(ncmax+3);
n1(1:nc) =(1 ./ (s.*sqrt(2*pi).*r(1:nc))).*exp(-((log(r(1:nc)) - med).^2)./(2*s^2));
</code></pre>
<p>I have the following in Python, but <code>n1</code> is always an empty array of <code>nc</code> elements:</p>
<pre><code>import numpy as np
r =np.arange((dx/2),(dx*(ncmax+3)),dx)
count=1
n1=np.empty(nc)
while count < nc:
    n1[count]=(1/(s*np.sqrt(2*pi)*r[count]))*np.exp(-((np.log(r[count]))-med)**2)/(2*s**2)
    count = count + 1
</code></pre>
|
<p>You have a beautifully vectorized solution in MATLAB. One of the main reason for using NumPy is that it also allows for vectorization - so you shouldn't be introducing loops.</p>
<p>As suggested in comments by lucianopaz, there is a <a href="https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html" rel="nofollow noreferrer">guide to NumPy for MATLAB users</a> which explains differences and similarities between the two. It further has a nice list of MATLAB functions and their NumPy equivalents. This may be of great help, when translating MATLAB programs.</p>
<p>Some hints and comments:</p>
<ul>
<li><p>Use the NumPy versions of all functions, i.e. <code>np.sqrt</code>,
<code>np.exp</code> (as you were previously) and <code>np.power</code> (instead of <code>**</code>). These functions can be called in a vectorized fashion, just like in MATLAB.</p></li>
<li><p>As noticed by @Elisha, you are missing the definitions of <code>s</code> and <code>med</code>, so I'll just assume these are scalars, and set them to <code>1</code>.</p></li>
<li><p>Instead of importing <code>math</code> just for the <code>math.pi</code>, you can also use <code>np.pi</code>, which is <a href="https://stackoverflow.com/q/12645547/4221706">exactly the same</a>.</p></li>
<li><p>You are creating a large <code>r</code> vector and only use the first <code>nc</code> elements. Why not make <code>r</code> only of size <code>nc</code> from the start, as shown below?</p></li>
</ul>
<p>Resulting NumPy code:</p>
<pre><code>import numpy as np
nc = 200
ncmax = 600
dx = 0.15e-04
s = 1
med = 1
r = np.arange(dx / 2, dx * nc, dx)
n1 = 1 / (s * np.sqrt(2 * np.pi) * r) * \
np.exp(-np.power(np.log(r) - med, 2) /
(2 * np.power(s, 2)))
</code></pre>
|
python|matlab|numpy
| 1
|
2,470
| 61,259,680
|
Change value of a column in a multi index dataframe
|
<p>I have a dataframe like this:</p>
<pre><code> holiday
YEAR MONTH DAY TIME
2012 10 2 00:00:00 0
06:00:00 0
12:00:00 0
18:00:00 0
2012 10 3 00:00:00 1
06:00:00 0
12:00:00 0
18:00:00 0
2012 10 4 00:00:00 0
06:00:00 0
12:00:00 0
18:00:00 0
</code></pre>
<p>Here 0 means that the day is not a holiday and 1 that it is. However, the 1 only appears in the 00:00:00 row, and I want to replace all the 0's on that day with 1's.</p>
<pre><code> holiday
YEAR MONTH DAY TIME
2012 10 2 00:00:00 0
06:00:00 0
12:00:00 0
18:00:00 0
2012 10 3 00:00:00 1
06:00:00 1
12:00:00 1
18:00:00 1
2012 10 4 00:00:00 0
06:00:00 0
12:00:00 0
18:00:00 0
</code></pre>
<p>Any idea on how this can be done?</p>
|
<p>Let us do </p>
<pre><code>df['holiday']=df.groupby(level=[0,1,2]).cumsum().values
</code></pre>
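<p>This works because the <code>1</code> always sits in the first row (<code>00:00:00</code>) of each day's group, so the cumulative sum carries it forward to the later times. A sketch that does not depend on where the flag appears within the day would be to broadcast the group maximum instead:</p>
<pre><code>df['holiday'] = df.groupby(level=['YEAR', 'MONTH', 'DAY'])['holiday'].transform('max')
</code></pre>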
|
python|pandas
| 2
|
2,471
| 68,608,755
|
How can I solve this problem? (vs code error)
|
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import check_util.checker as checker
from IPython.display import clear_output
from PIL import Image
import os
import time
import re
from glob import glob
import shutil
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow import keras
print('tensorflow version: {}'.format(tf.__version__))
print('GPU available: {}'.format(tf.test.is_gpu_available()))
</code></pre>
<blockquote>
<p>Error: Session cannot generate requests Error: Session cannot generate
requests at w.executeCodeCell
(c:\Users\ooche.vscode\extensions\ms-toolsai.jupyter-2021.8.1054968649\out\client\extension.js:90:320068)
at w.execute
(c:\Users\ooche.vscode\extensions\ms-toolsai.jupyter-2021.8.1054968649\out\client\extension.js:90:319389)
at w.start
(c:\Users\ooche.vscode\extensions\ms-toolsai.jupyter-2021.8.1054968649\out\client\extension.js:90:315205)
at runMicrotasks () at processTicksAndRejections
(internal/process/task_queues.js:93:5) at async
t.CellExecutionQueue.executeQueuedCells
(c:\Users\ooche.vscode\extensions\ms-toolsai.jupyter-2021.8.1054968649\out\client\extension.js:90:329732)
at async t.CellExecutionQueue.start
(c:\Users\ooche.vscode\extensions\ms-toolsai.jupyter-2021.8.1054968649\out\client\extension.js:90:329272)</p>
</blockquote>
<p>I think this problem has something to do with kernel.</p>
<p>But I can't find a solution, even though I've already reinstalled and restarted.</p>
<p>Please help me solve this problem.</p>
<p>I'm using Python 3.7.10, Tensorflow 2.3</p>
|
<p>Please try again by restarting the <strong>VS code</strong> or by changing the <strong>jupyter</strong> virtual environment (Change kernel for the notebook) while executing this code in <strong>VS code</strong>.</p>
<p>(I tried the same code as mentioned above in <strong>VS code</strong> using <code>python 3.7.10</code> and <code>tensorflow=2.3</code> and it executed successfully)</p>
<p>Please check this output:</p>
<pre><code>tensorflow version: 2.3.0
WARNING:tensorflow:From C:\Users\Hp\AppData\Local\Temp\ipykernel_6944\127915523.py:16: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
GPU available: False
</code></pre>
|
python-3.x|tensorflow|visual-studio-code|deep-learning|conv-neural-network
| 0
|
2,472
| 68,586,063
|
Python pandas add value to same row
|
<p>How to: in a for loop, insert value at specific column by name, on the same row each time. So I have a few hundred columns. And I want each value to be put on the same row (Using the DATE) in the appropriate column. Here are my dataframe columns:</p>
<pre><code>DATE, ABC, BGF, ATR
</code></pre>
<p>Here is my example value for <code>List</code>:</p>
<pre><code>[["VAL:ABC", 5, 6, 7], ["VAL:ATR", 7, 43]]
</code></pre>
<p>Here is how my dataframe should behave: put the list whose first element matches into the dataframe on a new row.</p>
<blockquote>
<pre><code>DATE, ABC, BGF, ATR
12-01-2011 5 - -
</code></pre>
</blockquote>
<p>On the same row still, put the next list item with the same column header in.</p>
<blockquote>
<pre><code>DATE, ABC, BGF, ATR
12-01-2011 5 - 7
</code></pre>
</blockquote>
<hr />
<p>Here's my code so far</p>
<pre><code>mod_df = df.append({'Date' : str(currentDate)}, ignore_index=True)
# append new values
for col in df.columns:
    #print(col)
    for aMrket in List_market_data: # [["VAL:ABC", 5, 6, 7], ["VAL:ATR", 7, 43]]
        if aMrket[1] == str(col):
            # if column header is the same as the last 3 characters in the list's first item
            # then add to same row (use the date)
            mod_df.loc[mod_df.Date == currentDate, col] = aMrket[-3]
</code></pre>
<h1>EDIT</h1>
<p>The final code line above is what I wanted; I got it working.</p>
|
<p>Reading the line piece by piece: <code>mod_df.Date == currentDate</code> selects only the row(s) where that specific value appears, <code>col</code> restricts the assignment to that one column, and <code>aMrket[-3]</code> is the value written into the cell:</p>
<pre><code>mod_df.loc[mod_df.Date == currentDate, col] = aMrket[-3]
</code></pre>
|
python|pandas
| 0
|
2,473
| 68,787,654
|
Keras Sequential model input: How significant are the dimensions?
|
<p>I am trying to build a multioutput classifier on 3D data structured like <code>[sampleID, timestamp, deviceID, sensorID]</code> with one-hot labels like <code>[sampleID, deviceID]</code> to determine which device "wins".</p>
<p>In a nutshell, it is a massive collection of timeseries readings from five sensors taken at regular intervals from each of four different devices. The objective is to determine which of the devices is most likely to be in a particular state at the end of each <code>sampleID</code>. The labels are a one-hot representation of the devices.</p>
<p>In a case like this where a human would find meaning in the structure of the dataset, does the training process derive similar benefit? Can I simplify my dataset by reducing it to <code>[dataset, deviceID, timestamp X sensor]</code> or even <code>[dataset, deviceID X timestamp X sensor]</code> and still get similar accuracy?</p>
<p>In other words would simplifying the following dataset:
<code>[10000, 1000, 4, 5]</code>
down to
<code>[10000, 4, 5000]</code>
or
<code>[10000, 1000, 20]</code>
or even
<code>[10000, 20000]</code>
significantly diminish the model's ability to classify output?</p>
<p><em>Edited to for detail and formatting.</em></p>
|
<p>IIUC, you are asking if using 1000 timesteps for 20 objects (device X sensor) is better than using 1000 timesteps for 4 devices for 5 sensors.</p>
<p>There is no way of actually determining which would better model your problem, but, we can quickly build some tests to see which models capture the complexity of the problem better.</p>
<hr />
<p><strong>Case 1: 1000 time steps, 20 objects -> Sequential LSTM based model</strong></p>
<p>If you consider the 20 sensors individually, you can simply use an LSTM-based model and let it handle the non-linear relationships between them. Since you have a 2D input, simply reshape your data and build a model with the following structure. Feel free to add more layers, activations, etc.</p>
<pre><code>from tensorflow.keras import layers, Model, utils
#Temporal model
inp = layers.Input((1000,20))
x = layers.LSTM(30, return_sequences=True)(inp)
x = layers.LSTM(30)(x)
out = layers.Dense(4, activation='softmax')(x)
model = Model(inp, out)
model.summary()
</code></pre>
<pre><code>Model: "model_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 1000, 20)] 0
_________________________________________________________________
lstm_4 (LSTM) (None, 1000, 30) 6120
_________________________________________________________________
lstm_5 (LSTM) (None, 30) 7320
_________________________________________________________________
dense_20 (Dense) (None, 4) 124
=================================================================
Total params: 13,564
Trainable params: 13,564
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<hr />
<p><strong>Case 2: 1000 time steps, 4x5 objects -> Conv-LSTM based model</strong></p>
<p>Since you have a 3D input, you want to treat the 4x5 as your spatial axes and the 1000 as your channels/feature maps/temporal features. Since your data is laid out channels-first, do specify <code>data_format</code> in the <code>Conv2D</code> as well as the <code>MaxPooling2D</code> layers.</p>
<p>Then, once you have convolved over the spatial axes, you can start working on the feature maps with an LSTM. Sample code below, feel free to modify and build on top of this.</p>
<pre><code>from tensorflow.keras import layers, Model, utils
#Conv-LSTM model
inp = layers.Input((1000,4,5))
x = layers.Conv2D(30,2, data_format="channels_first")(inp)
x = layers.MaxPooling2D(2, data_format="channels_first")(x)
x = layers.Reshape((-1,2))(x)
x = layers.LSTM(20)(x)
out = layers.Dense(4, activation='softmax')(x)
model = Model(inp, out)
model.summary()
</code></pre>
<pre><code>Model: "model_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_25 (InputLayer) [(None, 1000, 4, 5)] 0
_________________________________________________________________
conv2d_19 (Conv2D) (None, 30, 3, 4) 120030
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 30, 1, 2) 0
_________________________________________________________________
reshape_10 (Reshape) (None, 30, 2) 0
_________________________________________________________________
lstm_19 (LSTM) (None, 20) 1840
_________________________________________________________________
dense_30 (Dense) (None, 4) 84
=================================================================
Total params: 121,954
Trainable params: 121,954
Non-trainable params: 0
_________________________________________________________________
</code></pre>
|
tensorflow|keras
| 1
|
2,474
| 36,270,864
|
Append a row to a dataframe
|
<p>Fairly new to pandas and I have created a data frame called rollParametersDf:</p>
<pre><code> rollParametersDf = pd.DataFrame(columns=['insampleStart','insampleEnd','outsampleStart','outsampleEnd'], index=[])
</code></pre>
<p>with the 4 column headings given. Which I would like to hold the reference dates for a study I am running. I want to add rows of data (one at a time) with the index name roll1, roll2..rolln that is created using the following code:</p>
<pre><code> outsampleEnd = customCalender.iloc[[totalDaysAvailable]]
outsampleStart = customCalender.iloc[[totalDaysAvailable-outsampleLength+1]]
insampleEnd = customCalender.iloc[[totalDaysAvailable-outsampleLength]]
insampleStart = customCalender.iloc[[totalDaysAvailable-outsampleLength-insampleLength+1]]
print('roll',rollCount,'\t',outsampleEnd,'\t',outsampleStart,'\t',insampleEnd,'\t',insampleStart,'\t')
rollParametersDf.append({insampleStart,insampleEnd,outsampleStart,outsampleEnd})
</code></pre>
<p>I have tried using append but cannot get an individual row to append.</p>
<p>I would like the final dataframe to look like:</p>
<pre><code> insampleStart insampleEnd outsampleStart outsampleEnd
roll1 1 5 6 8
roll2 2 6 7 9
:
rolln
</code></pre>
|
<p>You pass key-value pairs to <code>append</code>:</p>
<pre><code>df = pd.DataFrame({'insampleStart':[], 'insampleEnd':[], 'outsampleStart':[], 'outsampleEnd':[]})
df = df.append({'insampleStart':[1,2], 'insampleEnd':[5,6], 'outsampleStart':[6,7], 'outsampleEnd':[8,9]}, ignore_index=True)
</code></pre>
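<p>If you also want the <code>roll1</code>, <code>roll2</code>, ... index labels from your example, one sketch is to append a named <code>Series</code> instead; its name becomes the row's index label (the values here are illustrative):</p>
<pre><code>row = pd.Series({'insampleStart': 1, 'insampleEnd': 5,
                 'outsampleStart': 6, 'outsampleEnd': 8}, name='roll1')
df = df.append(row)
</code></pre>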
|
python|python-3.x|pandas
| 2
|
2,475
| 36,532,390
|
How to avoid automatic pseudo coloring in matplotlib.pyplot imshow()
|
<p>I have a 28x28 numpy ndarray that I want to print out as an image. Since it is a grayscale picture, it only has one color value per pixel. These values are scaled from -0.5 to 0.5.
I use plt.imshow(array). When I do that, the image gets printed out with the jet colormap, instead of grayscale.</p>
<p>If I apply cmap = 'gray' I get my grayscale image, but why is the default imshow() using the jet colormap)?</p>
|
<p>To prevent matplotlib from using the "jet" colormap as default you need to modify the line corresponding to the default cmap in the matplotlibrc file, usually found in ~/.config/matplotlib/matplotlibrc:</p>
<pre><code>image.cmap : gray # gray | jet etc...
</code></pre>
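<p>If you only want to change the default for the current session rather than editing the config file, setting the rcParam at runtime should work too:</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams['image.cmap'] = 'gray'  # session-wide default colormap
</code></pre>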
<p>Also, I encourage everybody to see the upcoming changes regarding styling in <a href="http://matplotlib.org/style_changes.html" rel="nofollow">matplotlib 2.0</a></p>
|
python|numpy|matplotlib
| 1
|
2,476
| 53,231,882
|
Populating a dict with a list in the same loop
|
<p>I am trying to populate a dict with column-wise occurrences of characters in a pandas Series. The Series is as follows:</p>
<pre><code>>>> jkl
1 ATGC
2 GTCA
3 CATG
Name: 0, dtype: object
</code></pre>
<p>I want a dict that contains all the characters as keys and a list of their column-wise occurrence frequencies as the value, as shown below:</p>
<pre><code>{'A':[1,1,0,1],'C':[1,0,1,1],'G':[1,0,1,1],'T':[0,2,1,0]}
</code></pre>
<p>I have tried several codes and this is one of them:</p>
<pre><code>mylist = ['A', 'C', 'G','T']
dict = {key: None for key in mylist}
for i,(a,b) in enumerate(zip_longest(jkl[1],dict.keys())):
t=str(list(jkl.str[i]))
single_occurrences = Counter(t)
kl.append(single_occurrences.get(b))
dict[b]=kl
</code></pre>
<p>But this dict does not contain the desired output, is there a solution? </p>
|
<p>Using <code>crosstab</code> after re-creating the DataFrame from your series (<code>jkl</code> in your example):</p>
<pre><code>S = pd.DataFrame(jkl.map(list).tolist()).melt()
pd.crosstab(S.value,S.variable)
Out[338]:
variable 0 1 2 3
value
A 1 1 0 1
C 1 0 1 1
G 1 0 1 1
T 0 2 1 0
</code></pre>
<p>after adding <code>to_dict</code> </p>
<pre><code>pd.crosstab(S.value,S.variable).T.to_dict('l')
Out[342]: {'A': [1, 1, 0, 1], 'C': [1, 0, 1, 1], 'G': [1, 0, 1, 1], 'T': [0, 2, 1, 0]}
</code></pre>
|
python|python-3.x|pandas|dictionary
| 5
|
2,477
| 52,994,435
|
PyTorch: create non-fully-connected layer / concatenate output of hidden layers
|
<p>In PyTorch, I want to create a hidden layer whose neurons are not fully connected to the output layer. I try to concatenate the output of two linear layers but run into the following error:</p>
<blockquote>
<p>RuntimeError: size mismatch, m1: [2 x 2], m2: [4 x 4]</p>
</blockquote>
<p>my current code:</p>
<pre><code>class NeuralNet2(nn.Module):
    def __init__(self):
        super(NeuralNet2, self).__init__()
        self.input = nn.Linear(2, 40)
        self.hiddenLeft = nn.Linear(40, 2)
        self.hiddenRight = nn.Linear(40, 2)
        self.out = nn.Linear(4, 4)

    def forward(self, x):
        x = self.input(x)
        xLeft, xRight = torch.sigmoid(self.hiddenLeft(x)), torch.sigmoid(self.hiddenRight(x))
        x = torch.cat((xLeft, xRight))
        x = self.out(x)
        return x
</code></pre>
<p>I don't get why there is a size mismatch? Is there an alternative way to implement non-fully-connected layers in pytorch?</p>
|
<p>It turned out to be a simple misunderstanding of the concatenation function: without a <code>dim</code> argument, <code>torch.cat</code> concatenates along dimension 0 (the batch dimension), producing a tensor of shape <code>(2*batch, 2)</code> instead of the <code>(batch, 4)</code> that <code>nn.Linear(4, 4)</code> expects.
Changing
<code>x = torch.cat((xLeft, xRight))</code>
to
<code>x = torch.cat((xLeft, xRight), dim=1)</code>
did the trick.
Thanks @dennlinger</p>
|
python|neural-network|pytorch
| 2
|
2,478
| 52,952,905
|
python increment version number by 0.0.1
|
<p>We are using versioning. The current version is 0.2.3 and I would like to increment it by 0.0.1 using Python, but I am getting the error below:</p>
<p>tagNumber = 0.2.3
^
SyntaxError: invalid syntax</p>
|
<p>You could do something like this:</p>
<pre><code>def increment_ver(version):
    version = version.split('.')
    version[2] = str(int(version[2]) + 1)
    return '.'.join(version)
</code></pre>
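<p>For example:</p>
<pre><code>>>> increment_ver('0.2.3')
'0.2.4'
</code></pre>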
|
python|python-3.x|python-2.7|numpy
| 3
|
2,479
| 65,631,306
|
Python: How do I resolve this ImportError?
|
<p>I installed Tensorflow through <code>pip install</code> and it was successful but when i try to use it I have this ImportError:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\AKIN\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/AKIN/PythonProjects/sample_codes/trial_tf.py", line 2, in <module>
import tensorflow as tf
File "C:\Users\AKIN\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\AKIN\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "C:\Users\AKIN\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\AKIN\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.
</code></pre>
<p>I have checked this and I have seen Windows build failing but I don't know if that's what affecting it:</p>
<blockquote>
<p><a href="https://github.com/tensorflow/tensorflow" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow</a></p>
</blockquote>
<p>What is going on?</p>
|
<p>After hundreds of Google searches and Youtube videos, I found the solution to this problem about a month ago. Unlike other third-party modules in Python (e.g. Pandas, Matplotlib, etc.), which only require a <code>pip</code> install, there is a different set of steps, including installing the NVIDIA CUDA libraries if you have a CUDA-enabled GPU (TensorFlow also works on CPU alone) and activating a conda environment.</p>
<p>One youtube video I found to be particularly useful briefly explains each step to installing tensorflow without a <code>DLL</code> error. Here is the link: <a href="https://www.youtube.com/watch?v=5Ym-dOS9ssA&t=327s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=5Ym-dOS9ssA&t=327s</a></p>
<p>The instructor uses Pycharm, but I am sure you can easily follow each step with another IDE.</p>
<p>If you have any questions or want clarification, please do not hesitate to ask. Best of luck! :)</p>
|
python|windows|tensorflow|pip|python-import
| 1
|
2,480
| 63,586,976
|
Finding min values in a 3D array across one axis and replacing the non-corresponding values in another 3D array with 0 without loops in python
|
<p>Let's say we have two 3D arrays, A(x,y,z) and B(x,y,z) that x,y,z are dimensions. I want to identify all the minimum values across the z-axis in the A array and then based on those values and their indices choose the corresponding values in the B, keep them and replace other values with zero.</p>
|
<p>You can think of it a little differently. Finding the locations of the minima in <code>A</code> is straightforward:</p>
<pre><code>ind = np.expand_dims(np.argmin(A, axis=2), axis=2)
</code></pre>
<p>You can do one of the following things:</p>
<ol>
<li><p>Simplest: create a replacement of <code>B</code> and populate the relevant elements:</p>
<pre><code> C = np.zeros_like(B)
np.put_along_axis(C, ind, np.take_along_axis(B, ind, 2), 2)
</code></pre>
</li>
<li><p>Same thing, but in-place:</p>
<pre><code> values = np.take_along_axis(B, ind, 2)
B[:] = 0
np.put_along_axis(B, ind, values, 2)
</code></pre>
</li>
<li><p>Convert the index to a mask:</p>
<pre><code> mask = np.ones(B.shape, dtype=bool)
np.put_along_axis(mask, ind, False, 2)
B[mask] = 0
</code></pre>
</li>
</ol>
<p>You can replace the calls to <code>take_along_axis</code> and <code>put_along_axis</code> with suitable indexing expressions. In particular:</p>
<pre><code>indx, indy = np.indices(A.shape[:-1])
indz = np.argmin(A, axis=-1)
</code></pre>
<p>The examples above then transform into</p>
<ol>
<li><p>New array:</p>
<pre><code> C = np.zeros_like(B)
C[indx, indy, indz] = B[indx, indy, indz]
</code></pre>
</li>
<li><p>In-place:</p>
<pre><code> values = B[indx, indy, indz]
B[:] = 0
B[indx, indy, indz] = values
</code></pre>
</li>
<li><p>Masked:</p>
<pre><code> mask = np.ones(B.shape, dtype=bool)
mask[indx, indy, indz] = False
B[mask] = 0
</code></pre>
</li>
</ol>
|
python|numpy|indexing|min|substitution
| 1
|
2,481
| 63,619,153
|
How to reduce used memory for a repetitive optimization constraint in python?
|
<p>I have a mathematical optimization problem of the following simplified form:</p>
<pre><code>min  ∑ Pxy
s.t. Pxy ≥ Pyz,  ∀ x, y, z
     Pxy ∈ {0, 1}
</code></pre>
<p>This problem has X·Y·Z constraints. I wrote the following code to perform the optimization. The only way that came to my mind was introducing two new matrices whose multiplication with the vectors Pxy and Pyz repeats the constraints. These matrices have sizes (X·Y·Z × Y·Z) and (X·Y·Z × X·Y). As the dimensions of the problem increase, the matrices become huge and my RAM cannot handle them. Is it possible to re-write this code in a way that requires less memory for the constraints? (Less memory usage might also lead to faster speed.)</p>
<p>The following code used all of the RAM on google colab and crashed! (while the optimization problem is easy and can be solved by hand)</p>
<pre><code>import cvxpy as cp
import numpy as np
np.random.seed(55)
X_max, Y_max, Z_max = 70, 70, 50
P_yz = np.random.choice([0, 1], size=(Y_max, Z_max), p=[9./10, 1./10])
P_yz = P_yz.reshape(-1)
z_repetition = np.zeros((X_max, Y_max, Z_max, X_max, Y_max), dtype=np.int8)
for x in range(X_max):
    for y in range(Y_max):
        for z in range(Z_max):
            z_repetition[x,y,z,x,y] = 1
z_repetition = z_repetition.reshape(X_max * Y_max * Z_max, -1)
x_repetition = np.zeros((X_max, Y_max, Z_max, Y_max, Z_max), dtype=np.int8)
for x in range(X_max):
    for y in range(Y_max):
        for z in range(Z_max):
            x_repetition[x,y,z,y,z] = 1
x_repetition = x_repetition.reshape(X_max * Y_max * Z_max, -1)
P_xy = cp.Variable((X_max * Y_max), boolean=True)
constraints = []
constraints.append(z_repetition * P_xy >= np.matmul(x_repetition, P_yz))
problem = cp.Problem(cp.Minimize(cp.sum(P_xy)), constraints)
objective = problem.solve(verbose=True)
# print(P_xy.value.reshape(X_max,-1))
</code></pre>
|
<p>Thanks to the comment of @sascha, I have re-written the code using <code>scipy.sparse.coo_matrix</code> and the memory problem is solved.</p>
<p>I post the modified code here:</p>
<pre><code>import cvxpy as cp
import numpy as np
import scipy.sparse as sp
np.random.seed(55)
X_max, Y_max, Z_max = 70, 70, 50
P_yz = np.random.choice([0, 1], size=(Y_max, Z_max), p=[9./10, 1./10])
P_yz = P_yz.reshape(-1)
row = []
col = []
for x in range(X_max):
    for y in range(Y_max):
        for z in range(Z_max):
            ids = np.unravel_index(np.ravel_multi_index((x,y,z,x,y), (X_max, Y_max, Z_max, X_max, Y_max)), (X_max * Y_max * Z_max, X_max * Y_max))
            row.append(ids[0])
            col.append(ids[1])
z_repetition_sparse = sp.coo_matrix((np.ones(len(row)), (row, col)), shape=(X_max * Y_max * Z_max, X_max * Y_max))
row = []
col = []
for x in range(X_max):
    for y in range(Y_max):
        for z in range(Z_max):
            ids = np.unravel_index(np.ravel_multi_index((x,y,z,y,z), (X_max, Y_max, Z_max, Y_max, Z_max)), (X_max * Y_max * Z_max, Y_max * Z_max))
            row.append(ids[0])
            col.append(ids[1])
x_repetition_sparse = sp.coo_matrix((np.ones(len(row)), (row, col)), shape=(X_max * Y_max * Z_max, Y_max * Z_max))
P_xy = cp.Variable((X_max * Y_max), boolean=True)
constraints = []
constraints.append(z_repetition_sparse * P_xy >= sp.csr_matrix.dot(x_repetition_sparse, P_yz))
problem = cp.Problem(cp.Minimize(cp.sum(P_xy)), constraints)
objective = problem.solve(verbose=True)
print(P_xy.value.reshape(X_max,-1))
</code></pre>
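<p>As a further sketch (my own suggestion, not part of the accepted fix), the triple loops building the coordinate lists could themselves be vectorized, since the row index is just the raveled <code>(x, y, z)</code> and the column index the raveled <code>(x, y)</code> or <code>(y, z)</code>:</p>
<pre><code>xyz = np.arange(X_max * Y_max * Z_max)
x, y, z = np.unravel_index(xyz, (X_max, Y_max, Z_max))
z_repetition_sparse = sp.coo_matrix(
    (np.ones(xyz.size), (xyz, np.ravel_multi_index((x, y), (X_max, Y_max)))),
    shape=(X_max * Y_max * Z_max, X_max * Y_max))
x_repetition_sparse = sp.coo_matrix(
    (np.ones(xyz.size), (xyz, np.ravel_multi_index((y, z), (Y_max, Z_max)))),
    shape=(X_max * Y_max * Z_max, Y_max * Z_max))
</code></pre>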
|
python|numpy|optimization|ram|cvxpy
| 1
|
2,482
| 63,545,659
|
Numpy array: Remove and append values
|
<p>I have a 3D numpy array of the shape <code>(1, 60, 1)</code>. Now I need to remove the first value of the second dimension and instead append a new value at the end.</p>
<p>If it was a list, the code would look somewhat like this:</p>
<pre class="lang-py prettyprint-override"><code>x = [1, 2, 3, 4]
x = x[1:]
x.append(5)
</code></pre>
<p>resulting in this list: <code>[2, 3, 4, 5]</code></p>
<p>What would be the easiest way to do this with numpy?</p>
<p>I have basically never really worked with numpy before, so that's probably a pretty trivial problem, but thanks for your help!</p>
|
<pre><code>import numpy as np
arr = np.arange(60)  # create an ndarray with 60 values
arr = arr.reshape(1,60,1)  # shape it as mentioned in the question
arr = np.roll(arr, -1)  # np.roll circulates the array left or right (-1 is 1 step to the left)
#Now your last value is in the second last position, the second last value in the third last pos and so on (Your first value moves to the last position)
arr[:,-1,:] = 1000 # index the last location and add the values you want
print(arr)
</code></pre>
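<p>An alternative sketch that mirrors the list semantics more directly is to slice off the first entry along the second axis and concatenate the new value at the end:</p>
<pre><code>new_value = np.full((1, 1, 1), 1000)  # the value to append, shaped to match
arr = np.concatenate([arr[:, 1:, :], new_value], axis=1)  # still (1, 60, 1)
</code></pre>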
|
python|numpy|numpy-ndarray
| 3
|
2,483
| 21,533,706
|
Resolving Reindexing only valid with uniquely valued Index objects
|
<p>I have viewed many of the questions that come up with this error. I am running pandas '0.10.1'</p>
<pre><code>df = DataFrame({'A' : np.random.randn(5),
'B' : np.random.randn(5),'C' : np.random.randn(5),
'D':['a','b','c','d','e'] })
#gives error
df.take([2,0,1,2,3], axis=1).drop(['C'],axis=1)
#works fine
df.take([2,0,1,2,1], axis=1).drop(['C'],axis=1)
</code></pre>
<p>Only thing I can see is that in the former case I have the non-numeric column, which seems to be affecting the index somehow but the below command returns empty:</p>
<pre><code>df.take([2,0,1,2,3], axis=1).index.get_duplicates()
</code></pre>
<p><a href="https://stackoverflow.com/questions/16327412/reindexing-error-makes-no-sense">Reindexing error makes no sense</a> does not seem to apply as my old index is unique. </p>
<p>My index appears unique as far as I can tell using this command df.take([2,0,1,2,3], axis=1).index.get_duplicates() from this Q&A: <a href="https://stackoverflow.com/questions/14180615/problems-with-reindexing-dataframes-reindexing-only-valid-with-uniquely-valued">problems with reindexing dataframes: Reindexing only valid with uniquely valued Index objects</a></p>
<p><a href="https://stackoverflow.com/questions/15366949/reindexing-only-valid-with-uniquely-valued-index-objects">"Reindexing only valid with uniquely valued Index objects"</a> does not seem to apply</p>
<p>I think my pandas version# is ok so this should bug should not be the problem <a href="https://stackoverflow.com/questions/13352369/pandas-reindexing-only-valid-with-uniquely-valued-index-objects">pandas Reindexing only valid with uniquely valued Index objects</a></p>
|
<p>Firstly, I believe you meant to test for duplicates using the following command:</p>
<pre><code>df.take([2,0,1,2,3],axis=1).columns.get_duplicates()
</code></pre>
<p>because if you used index instead of columns, then it would obviously return an empty array because the random float values don't repeat. The above command returns, as expected:</p>
<pre><code>['C']
</code></pre>
<p>Secondly, I think you're right, the non-numeric column is throwing it off, because even if you use the following, there is still an error:</p>
<pre><code>df = DataFrame({'A' : np.random.randn(5), 'B' : np.random.randn(5),'C' :np.random.randn(5), 'D':[str(x) for x in np.random.randn(5) ]})
</code></pre>
<p>It could be a bug, because if you check out the core file called 'index.py', on line 86, and line 1228, the type it is expecting is either (respectively): </p>
<pre><code>_engine_type = _index.ObjectEngine
_engine_type = _index.Int64Engine
</code></pre>
<p>and neither of those seem to be expecting a string, if you look deeper into the documentation. That's the best I got, good luck!! Let me know if you solve this as I'm interested too.</p>
|
python|pandas
| 9
|
2,484
| 53,799,537
|
seaborn heatmap from pandas dataframe with NaNs
|
<p>Hi I really want to create a heatmap but am struggling:</p>
<pre><code># correlations between undergrad studies and occupation
data_uni = n.groupby(['Q5','Q6'])['Q6'].count().to_frame(name = 'count').reset_index()
# some participants did not answer the question in the survey
data_uni.fillna('Unknown', inplace=True)
data_uni.pivot(index='Q5', columns='Q6', values='count')
plt.figure(1, figsize=(14,10))
sns.heatmap(data_uni, cmap="YlGnBu")
</code></pre>
<p>The error message I get is <strong>"TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''"</strong>.<br/></p>
<p>Is this the right way to create a heatmap? If yes, what am I doing wrong, and if not, what would be the right way? Thank you for your help! </p>
|
<p>First, note that in your code the result of <code>data_uni.pivot(...)</code> is never assigned back, so <code>sns.heatmap</code> still receives the original long-format frame with string columns, which is what triggers the <code>isnan</code> TypeError. Assign the pivot first. Then, as per issue <a href="https://github.com/mwaskom/seaborn/issues/375" rel="noreferrer">GH375</a>, you can specify a mask, where data will not be shown for those cells whose mask values are <code>True</code>.</p>
<pre><code>data_uni = data_uni.pivot(index='Q5', columns='Q6', values='count')
sns.heatmap(data_uni, cmap="YlGnBu", mask=data_uni.isnull())
</code></pre>
|
python|pandas|seaborn|nan|heatmap
| 8
|
2,485
| 53,367,575
|
float object not subscriptable (python)
|
<p>So I am creating 10 dictionaries from a data-frame.</p>
<p>I have already done 3 for each row, but I have decided to do one for every column in my data-frame. When I add the 7 additional dictionaries, I get a float object not subscriptable error. What's confusing is, I had already added the additional 7 dictionary entries for a few other rows. Even more confusing, the error is on a line where the dictionary entries had already been successfully assigned and not for the entries I'm adding to one of the 7 additional dictionaries. Here's my code, please help if you can.</p>
<pre><code>pace[b[1]] = bList[1]
offEff[b[1]] = bList[9]
defEff[b[1]] = bList[10]
ast[b[1]] = bList[2]
to[b[1]] = bList[3]
orr[b[1]] = bList[4]
drr[b[1]] = bList[5]
rebr[b[1]] = bList[6]
effFG[b[1]] = bList[7]
tsPer[b[1]] = bList[8]
</code></pre>
<p>I'm using JupyterLab, if that helps.</p>
|
<p>You should check whether <code>bList</code> is actually a list object. According to your description, <code>bList</code> may be a float in your code:</p>
<pre><code>>>> a=1.0
>>> a[1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'float' object is not subscriptable
</code></pre>
|
python|pandas|error-handling|jupyter|jupyter-lab
| 0
|
2,486
| 53,644,937
|
Combine two columns in a DataFrame pandas
|
<p>I have a DataFrame with multiple columns, some of which are effectively the same (the same key at the trailing end, e.g. column1 = 'a/first', column2 = 'b/first'). I want to merge such columns. Please help me solve the problem.</p>
<p>My Dataframe looks like</p>
<pre><code>name g1/column1 g1/column2 g1/g2/column1 g2/column2
AAAA 10 20 nan nan
AAAA nan nan 30 40
</code></pre>
<p>My result will be like as follows </p>
<pre><code>name g1/column1 g1/column2
AAAA 10 20
AAAA 30 40
</code></pre>
<p>Thanks in advance</p>
|
<p>Use:</p>
<pre><code># move the identifier column into the index so only the data columns remain
df = df.set_index('name')
#MultiIndex by split last /
df.columns = df.columns.str.rsplit('/', n=1, expand=True)
#aggregate first no NaN values per second level of MultiIndex
df = df.groupby(level=1, axis=1).first()
print (df)
column1 column2
name
AAAA 10.0 20.0
AAAA 30.0 40.0
</code></pre>
|
python|pandas
| 2
|
2,487
| 53,399,550
|
Remove string from dataframe index values
|
<p>I want to remove strings from my index values:</p>
<pre><code>df.index.get_values().str.replace("and over", "").astype(int)
</code></pre>
<p>Doing this returns the following error:</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'str'
</code></pre>
<p>I've tried to find a similar function used by numpy to achieve this but I can't seem to find any. Here are my index values:</p>
<pre><code>['0' '1' '2' '3' '4' '5' '6' '7' '8' '9' '10' '11' '12' '13' '14' '15'
'16' '17' '18' '19' '20' '21' '22' '23' '24' '25' '26' '27' '28' '29'
'30' '31' '32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43'
'44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57'
'58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71'
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85'
'86' '87' '88' '89' '90 and over' 'All ages']
</code></pre>
|
<p>Something like the following code should work. But first, you would have to delete the 'All ages' entry.</p>
<pre><code>arr = np.array([x.replace(' and over', '') for x in arr]).astype(int)
</code></pre>
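<p>Note that a pandas <code>Index</code> also exposes the <code>.str</code> accessor directly (it is the NumPy array returned by <code>get_values()</code> that does not), so after dropping the <code>'All ages'</code> entry something like this should work as well:</p>
<pre><code>df = df.drop('All ages')
df.index = df.index.str.replace(' and over', '').astype(int)
</code></pre>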
|
numpy
| 0
|
2,488
| 17,166,601
|
Summing across rows of Pandas Dataframe
|
<p>I have a DataFrame of records that looks something like this:</p>
<pre><code>stocks = pd.Series(['A', 'A', 'B', 'C', 'C'], name = 'stock')
positions = pd.Series([ 100, 200, 300, 400, 500], name = 'positions')
same1 = pd.Series(['AA', 'AA', 'BB', 'CC', 'CC'], name = 'same1')
same2 = pd.Series(['AAA', 'AAA', 'BBB', 'CCC', 'CCC'], name = 'same2')
diff = pd.Series(['A1', 'A2', 'B3' ,'C1', 'C2'], name = 'different')
df = pd.DataFrame([stocks, same1, positions, same2, diff]).T
df
</code></pre>
<p>This gives a pandas DataFrame that looks like</p>
<pre><code> stock same1 positions same2 different
0 A AA 100 AAA A1
1 A AA 200 AAA A2
2 B BB 300 BBB B3
3 C CC 400 CCC C1
4 C CC 500 CCC C2
</code></pre>
<p>I'm not interested in the data in 'different' columns and want to sum the positions along the unique other columns. I am currently doing it by:</p>
<pre><code>df.groupby(['stock','same1','same2'])['positions'].sum()
</code></pre>
<p>which gives:</p>
<pre><code>stock same1 same2
A AA AAA 300
B BB BBB 300
C CC CCC 900
Name: positions
</code></pre>
<p>Problem is that this is a pd.Series (with Multi-Index). Currently I iterate over it to build a DataFrame again. I am sure that I am missing a method. Basically I want to drop 1 column from a DataFrame and then "rebuild it" so that one column is summed and the rest of the fields (which are the same) stay in place.</p>
<p>This groupby method breaks if there are empty positions. So I currently use an elaborate iteration over the DataFrame to build a new one. Is there a better approach?</p>
|
<p>Step 1. Use [['positions']] instead of ['positions']:</p>
<pre><code>In [30]: df2 = df.groupby(['stock','same1','same2'])[['positions']].sum()
In [31]: df2
Out[31]:
positions
stock same1 same2
A AA AAA 300
B BB BBB 300
C CC CCC 900
</code></pre>
<p>Step 2. And then use <code>reset_index</code> to move the index back to the column</p>
<pre><code>In [34]: df2.reset_index()
Out[34]:
stock same1 same2 positions
0 A AA AAA 300
1 B BB BBB 300
2 C CC CCC 900
</code></pre>
<h2>EDIT</h2>
<p>Seems my method is not so good.</p>
<p>Thanks to @Andy and @unutbu , you can achieve your goal by more elegant ways:</p>
<p>method 1:</p>
<pre><code>df.groupby(['stock', 'same1', 'same2'])['positions'].sum().reset_index()
</code></pre>
<p>method 2:</p>
<pre><code>df.groupby(['stock', 'same1', 'same2'], as_index=False)['positions'].sum()
</code></pre>
|
python|pandas|dataframe
| 10
|
2,489
| 19,865,974
|
Overflow in numpy
|
<p>I am implementing Harris corner detection and am getting an overflow error:</p>
<pre><code>harris.py:27: RuntimeWarning: overflow encountered in ubyte_scalars
Mat[0][1]=Ix[i][j]*Iy[i][j]
harris.py:28: RuntimeWarning: overflow encountered in ubyte_scalars
Mat[1][0]=Ix[i][j]*Iy[i][j]
</code></pre>
<p>Below is the whole source code. Where does the error come from, given that Ix's max is 255 and its min is 0?</p>
<pre><code>import cv2
import numpy as np
im=cv2.imread('image.png', cv2.CV_LOAD_IMAGE_GRAYSCALE)
(M,N)=im.shape
print M
print N
Gx=np.array([[1, 0, -1],[ 2, 0, -2], [1, 0, -1]])
Gy=np.array([[1, 2, 1],[0, 0, 0],[-1, -2, -1]])
Ix=cv2.filter2D(im, -1, Gx)
Iy=cv2.filter2D(im, -1, Gy)
print np.max(Ix)
print np.min(Ix)
print np.max(Iy)
print np.min(Iy)
Mat=np.zeros((2,2), dtype='float64')
R=np.zeros((M,N), dtype='float64')
for i in range(M):
    for j in range(N):
        Mat[0][0]=Ix[i][j]**2
        Mat[0][1]=Ix[i][j]*Iy[i][j]
        Mat[1][0]=Ix[i][j]*Iy[i][j]
        Mat[1][1]=Iy[i][j]**2
        R[i][j]=np.linalg.det(Mat)-(np.matrix.trace(Mat))**2
cv2.imshow("Ix",Ix)
cv2.imshow("Iy",Iy)
cv2.imshow("R",R)
cv2.waitKey(0)
</code></pre>
|
<p>What's happening is that your input data is <code>uint8</code>. Because you're multiplying two <code>uint8</code>'s, the result is a <code>uint8</code>, even though it will be upcasted when you assign it to an item in the float array <code>Mat</code>.</p>
<p>As an example:</p>
<pre><code>In [1]: import numpy as np
In [2]: np.uint8(255) * np.uint8(255)
./anaconda/bin/ipython:1: RuntimeWarning: overflow encountered in ubyte_scalars
Out[2]: 1
</code></pre>
<p>Notice that numpy will happily return the result (<code>1</code>), but it's not what you might expect if you're not familiar with limited-precision integers.</p>
<p>Newer versions of numpy raise a runtime warning, while older versions allow it to happen silently.</p>
<p>This is deliberate behavior. It's <em>very</em> useful when dealing with large arrays. You just have to be aware that numpy behaves similar to C when it comes to limited-precision data types.</p>
<p>You have several options.</p>
<ol>
<li>Cast your entire input array to floating point (e.g. <code>Ix, Iy = Ix.astype(float), Iy.astype(float)</code>).</li>
<li>Cast <code>Ix[i][j]</code> and <code>Iy[i][j]</code> to floats inside the loop.</li>
</ol>
|
python|numpy
| 4
|
2,490
| 15,835,358
|
how to solve this exercise with python numpy vectorization?
|
<p>how to solve this exercise 4.5 on page 2 with python Numpy vectorization?</p>
<p>Link to download:</p>
<p><a href="https://dl.dropbox.com/u/92795325/Python%20Scripting%20for%20Computational%20Scien%20-%20H.P.%20%20Langtangen.pdf" rel="nofollow">https://dl.dropbox.com/u/92795325/Python%20Scripting%20for%20Computational%20Scien%20-%20H.P.%20%20Langtangen.pdf</a></p>
<p>I tried this with a Python loop, but I need a vectorized version.</p>
<pre><code>from numpy import *
import time
def fun1(x):
    return 2*x+1

def integ(a,b,n):
    t0 = time.time()
    h = (b-a)/n
    a1 = (h/2)*fun1(a)
    b1 = (h/2)*fun1(b)
    c1 = 0
    for i in range(1,n,1):
        c1 = fun1((a+i*h))+c1
    t1 = time.time()
    return a1+b1+h*c1, t1-t0
</code></pre>
|
<p>To <em>"vectorize"</em> using <code>numpy</code>, all this means is that instead of doing an explicit loop like,</p>
<pre><code>for i in range(1, n):
    c = c + f(i)
</code></pre>
<p>Then instead you should make <code>i</code> into a numpy array, apply <code>f</code> to the whole array at once, and take the sum:</p>
<pre><code>i = np.arange(1, n)
c = f(i).sum()
</code></pre>
<p>And numpy automatically does the vectorization for you. The reason this is faster is because numpy loops are done in a better optimized way than a plain python loop, for a variety of reasons. Generally speaking, the longer the loop/array, the better the advantage. Here is your trapezoidal integration implemented:</p>
<pre><code>import numpy as np
def f1(x):
    return 2*x + 1

# Here's your original function modified just a little bit:
def integ(f, a, b, n):
    h = (b-a)/n
    a1 = (h/2)*f(a)
    b1 = (h/2)*f(b)
    c1 = 0
    for i in range(1, n, 1):
        c1 = f((a+i*h)) + c1
    return a1 + b1 + h*c1

# Here's the 'vectorized' function:
def vinteg(f, a, b, n):
    h = (b-a) / n
    ab = 0.5 * h * (f(a)+f(b))  # only multiply by h/2 once
    # use numpy to make `i` a 1d array:
    i = np.arange(1, n)
    # then, passing a numpy array to `f()` means that `f` returns an array
    c = f(a + h*i)  # now c is a numpy array
    return ab + h * c.sum()  # same as a1 + b1 + h*c1 in the loop version
</code></pre>
<p>Here I will import the file, which I named <code>trap.py</code>, into an <code>ipython</code> session for easier timing than using <code>time.time</code>:</p>
<pre><code>import trap
f = trap.f1
a = 0
b = 100
n = 1000
timeit trap.integ(f, a, b, n)
#1000 loops, best of 3: 378 us per loop
timeit trap.vinteg(f, a, b, n)
#10000 loops, best of 3: 51.6 us per loop
</code></pre>
<p>Wow, seven times faster.</p>
<p>See if it helps much for smaller <code>n</code></p>
<pre><code>n = 10
timeit trap.integ(f, a, b, n)
#100000 loops, best of 3: 6 us per loop
timeit trap.vinteg(f, a, b, n)
#10000 loops, best of 3: 43.4 us per loop
</code></pre>
<p>Nope, much slower for small loops! What about very large <code>n</code>?</p>
<pre><code>n = 10000
timeit trap.integ(f, a, b, n)
#100 loops, best of 3: 3.69 ms per loop
timeit trap.vinteg(f, a, b, n)
#10000 loops, best of 3: 111 us per loop
</code></pre>
<p>Thirty times faster!</p>
|
numpy
| 0
|
2,491
| 71,949,077
|
How to do append based on multiple filter on pandas dataframe more effectively
|
<p>Here's my dataset <code>df1</code></p>
<pre><code>Id Value month Year
1 672 4 2020
1 356 6 2020
2 682 6 2019
3 366 4 2021
</code></pre>
<p>Here's my dataset <code>df2</code></p>
<pre><code>Id Value month Year
1 671 4 2020
1 353 6 2020
2 682 6 2019
3 363 4 2021
</code></pre>
<p>Here's my expected dataset <code>df</code>, which uses <code>df2</code> from month=5, year=2020 onward and <code>df1</code> before that:</p>
<pre><code>Id Value month Year
1 671 4 2020
1 353 6 2020
2 682 6 2019
3 363 4 2021
</code></pre>
<p><em>Note</em>: the original requirement is in PySpark, but in this question I'm exploring pandas alternatives.</p>
<p>My Idea:</p>
<pre><code>df1['code'] = df1['Year']*100 + df1['month']
df2['code'] = df2['Year']*100 + df2['month']
df1 = df1[df1['code'] <= 202004]
df2 = df2[df2['code'] >= 202005]
df = df1.append(df2)
</code></pre>
<p>I think there's a way to do this more effectively.</p>
|
<p>Another option is to use <code>mask</code>:</p>
<pre><code>df = df1.mask((df1['month'].ge(5) & df1['Year'].eq(2020)) | df1['Year'].ge(2021), df2)
</code></pre>
<p>Output:</p>
<pre><code> Id Value month Year
0 1 672 4 2020
1 1 353 6 2020
2 2 682 6 2019
3 3 363 4 2021
</code></pre>
|
python|pandas|dataframe
| 3
|
2,492
| 72,099,844
|
mat1 and mat2 shapes cannot be multiplied (19x1 and 19x1)
|
<p>I have a handmade dataset and all I want to do is fit a linear regression model with PyTorch.
These are the codes I wrote:</p>
<pre><code>from torch.autograd import Variable
train_x = np.asarray([1,2,3,4,5,6,7,8,9,10,5,4,6,8,5,2,1,1,6])
train_y = train_x * 2
X = Variable(torch.from_numpy(train_x).type(torch.FloatTensor), requires_grad = False).view(19, 1)
y = Variable(torch.from_numpy(train_y).type(torch.FloatTensor), requires_grad = False)
from torch import nn
lr = nn.Linear(19, 1)
loss = nn.MSELoss()
optimizer = torch.optim.SGD(lr.parameters(), lr = 0.01)
output = lr(X) #error occurs here
</code></pre>
<p>I guess this is the simplest Pytorch neural network code in the world but it's still giving this error message:</p>
<pre><code>mat1 and mat2 shapes cannot be multiplied (19x1 and 19x1)
</code></pre>
<p>I just did all the things on the book but it's still giving this error. Can you help me?</p>
|
<p>If you are using a <a href="https://pytorch.org/docs/stable/generated/torch.nn.Linear.html" rel="nofollow noreferrer"><code>torch.nn.Linear(a,b)</code></a> as part of a network, then the input must be of shape <code>(n, a)</code>, and the output will be of shape <code>(n, b)</code>. Therefore you need to make sure that <code>X</code> has shape <code>(n, 19)</code> in your case, so modifying it with</p>
<pre><code>...).view(1, 19)
</code></pre>
<p>would do the trick.</p>
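<p>Alternatively, if you meant each of the 19 values to be a separate training sample (the usual setup for simple linear regression), keep <code>X</code> with shape <code>(19, 1)</code> and make the layer map one input feature to one output; this is a sketch of that variant, not the only valid fix:</p>
<pre><code>lr = nn.Linear(1, 1)   # one input feature, one output
output = lr(X)         # X of shape (19, 1) -> output of shape (19, 1)
</code></pre>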
|
python|deep-learning|pytorch|tensor
| 0
|
2,493
| 17,003,034
|
Missing data in pandas.crosstab
|
<p>I'm making some crosstabs with pandas:</p>
<pre><code>a = np.array(['foo', 'foo', 'foo', 'bar', 'bar', 'foo', 'foo'], dtype=object)
b = np.array(['one', 'one', 'two', 'one', 'two', 'two', 'two'], dtype=object)
c = np.array(['dull', 'dull', 'dull', 'dull', 'dull', 'shiny', 'shiny'], dtype=object)
pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])
b one two
c dull dull shiny
a
bar 1 1 0
foo 2 1 2
</code></pre>
<p>But what I actually want is the following:</p>
<pre><code>b one two
c dull shiny dull shiny
a
bar 1 0 1 0
foo 2 0 1 2
</code></pre>
<p>I found a workaround by adding a new column and setting the levels as a new MultiIndex, but it seems overly complicated...</p>
<p>Is there any way to pass a MultiIndex to the crosstab function to predefine the output columns?</p>
|
<p>The crosstab function has a parameter called dropna which is set to True by default. This parameter defines whether empty columns (such as the one-shiny column) should be displayed or not.</p>
<p>I tried calling the funcion like this:</p>
<pre><code>pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'], dropna = False)
</code></pre>
<p>and this is what I got:</p>
<pre><code>b one two
c dull shiny dull shiny
a
bar 1 0 1 0
foo 2 0 1 2
</code></pre>
<p>Hope that was still helpful.</p>
|
python|pandas
| 7
|
2,494
| 18,831,732
|
Error on trying to use Dataframe.to_json method
|
<p>I'm trying to export a pandas dataframe to JSON with no luck. I've tried:</p>
<p><code>all_data.to_json("spdata.json")</code> and <code>all_data.to_json()</code></p>
<p>I get the same attribute error on both: <strong>'DataFrame' object has no attribute 'to_json'</strong>. Just to make sure something isn't wrong with the DataFrame, I tested writing it with <code>to_csv</code> and that worked.</p>
<p>Is there something I'm missing in my syntax or a package I need to import? I am running Python version 2.7.5, which is part of an Enthought Canopy Express package. Imports at the beginning of my code are:</p>
<pre><code>from pandas import Series, DataFrame
import pandas as pd
import numpy as np
from sys import argv
from datetime import datetime, timedelta
from dateutil.parser import parse
</code></pre>
|
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow"><code>to_json</code></a> method was <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#i-o-enhancements" rel="nofollow">introduced in 0.12</a>, so you'll need to <a href="http://pandas.pydata.org/pandas-docs/stable/install.html" rel="nofollow">upgrade your pandas</a> to be able to use it.</p>
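<p>For example, with pip (Canopy users may prefer its own package manager):</p>
<pre><code>pip install --upgrade pandas
</code></pre>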
|
json|python-2.7|pandas
| 3
|
2,495
| 22,127,051
|
Noise while altering audio data
|
<p>I am playing audio with Python and I don't understand why I hear noise on the output when executing code like this:</p>
<pre><code>import pyaudio
import wave
import numpy as np
f = wave.open('blabla.wav',"r")
p = pyaudio.PyAudio()
# open stream
stream = p.open(format = p.get_format_from_width(f.getsampwidth()),
channels = f.getnchannels(),
rate = f.getframerate(),
output = True)
float_array = np.fromstring(f.readframes(10000000), dtype=np.uint16).astype('float32')
output = 0.9 * float_array
stream.write(output.astype('uint16').tostring())
</code></pre>
<p>When I multiply by <code>0.9</code> I expect the signal to be weakened a little. But where does this output noise come from?
I don't even add anything to the initial data!</p>
<p>Basically i want to add two signals : </p>
<pre><code> output signal = 0.5 * the origin one + 0.5 * shifted origin one
</code></pre>
<p>But I get a mess out of this process, because even multiplying the original array makes the signal sound almost entirely like noise.</p>
<p>Can you point out what I am doing wrong and how to make the formula</p>
<pre><code> output signal = 0.5 * the origin one + 0.5 * shifted origin one
</code></pre>
<p>work right?</p>
|
<p>I think 16 bit PCM is usually signed. Try using <code>int16</code> instead of <code>uint16</code></p>
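<p>Interpreting signed samples as unsigned makes every negative sample wrap around to a large positive value, so any arithmetic on them produces the distortion you hear. A sketch of the original snippet with only the dtype swapped:</p>
<pre><code>float_array = np.fromstring(f.readframes(10000000), dtype=np.int16).astype('float32')
output = 0.9 * float_array
stream.write(output.astype('int16').tostring())
</code></pre>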
|
python|audio|numpy|stream|real-time
| 1
|
2,496
| 22,053,050
|
Difference between numpy.array shape (R, 1) and (R,)
|
<p>In <code>numpy</code>, some of the operations return in shape <code>(R, 1)</code> but some return <code>(R,)</code>. This will make matrix multiplication more tedious since explicit <code>reshape</code> is required. For example, given a matrix <code>M</code>, if we want to do <code>numpy.dot(M[:,0], numpy.ones((1, R)))</code> where <code>R</code> is the number of rows (of course, the same issue also occurs column-wise). We will get <code>matrices are not aligned</code> error since <code>M[:,0]</code> is in shape <code>(R,)</code> but <code>numpy.ones((1, R))</code> is in shape <code>(1, R)</code>.</p>
<p>So my questions are:</p>
<ol>
<li><p>What's the difference between shape <code>(R, 1)</code> and <code>(R,)</code>. I know literally it's list of numbers and list of lists where all list contains only a number. Just wondering why not design <code>numpy</code> so that it favors shape <code>(R, 1)</code> instead of <code>(R,)</code> for easier matrix multiplication.</p></li>
<li><p>Are there better ways for the above example? Without explicitly reshape like this: <code>numpy.dot(M[:,0].reshape(R, 1), numpy.ones((1, R)))</code></p></li>
</ol>
|
<h3>1. The meaning of shapes in NumPy</h3>
<p>You write, "I know literally it's list of numbers and list of lists where all list contains only a number" but that's a bit of an unhelpful way to think about it.</p>
<p>The best way to think about NumPy arrays is that they consist of two parts, a <em>data buffer</em> which is just a block of raw elements, and a <em>view</em> which describes how to interpret the data buffer.</p>
<p>For example, if we create an array of 12 integers:</p>
<pre><code>>>> a = numpy.arange(12)
>>> a
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
</code></pre>
<p>Then <code>a</code> consists of a data buffer, arranged something like this:</p>
<pre><code>┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
</code></pre>
<p>and a view which describes how to interpret the data:</p>
<pre><code>>>> a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
>>> a.dtype
dtype('int64')
>>> a.itemsize
8
>>> a.strides
(8,)
>>> a.shape
(12,)
</code></pre>
<p>Here the <em>shape</em> <code>(12,)</code> means the array is indexed by a single index which runs from 0 to 11. Conceptually, if we label this single index <code>i</code>, the array <code>a</code> looks like this:</p>
<pre><code>i= 0    1    2    3    4    5    6    7    8    9   10   11
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
</code></pre>
<p>If we <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="noreferrer">reshape</a> an array, this doesn't change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after:</p>
<pre><code>>>> b = a.reshape((3, 4))
</code></pre>
<p>the array <code>b</code> has the same data buffer as <code>a</code>, but now it is indexed by <em>two</em> indices which run from 0 to 2 and 0 to 3 respectively. If we label the two indices <code>i</code> and <code>j</code>, the array <code>b</code> looks like this:</p>
<pre><code>i= 0    0    0    0    1    1    1    1    2    2    2    2
j= 0    1    2    3    0    1    2    3    0    1    2    3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
</code></pre>
<p>which means that:</p>
<pre><code>>>> b[2,1]
9
</code></pre>
<p>You can see that the second index changes quickly and the first index changes slowly. If you prefer this to be the other way round, you can specify the <code>order</code> parameter:</p>
<pre><code>>>> c = a.reshape((3, 4), order='F')
</code></pre>
<p>which results in an array indexed like this:</p>
<pre><code>i= 0 1 2 0 1 2 0 1 2 0 1 2
j= 0 0 0 1 1 1 2 2 2 3 3 3
ββββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ
β 0 β 1 β 2 β 3 β 4 β 5 β 6 β 7 β 8 β 9 β 10 β 11 β
ββββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ
</code></pre>
<p>which means that:</p>
<pre><code>>>> c[2,1]
5
</code></pre>
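<p>The strides make this difference concrete: they record how many bytes to step through the buffer for each index (8 bytes per <code>int64</code> element, as shown above). For <code>b</code> the first index steps a whole row, while for <code>c</code> it steps a single element:</p>
<pre><code>>>> b.strides
(32, 8)
>>> c.strides
(8, 24)
</code></pre>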
<p>It should now be clear what it means for an array to have a shape with one or more dimensions of size 1. After:</p>
<pre><code>>>> d = a.reshape((12, 1))
</code></pre>
<p>the array <code>d</code> is indexed by two indices, the first of which runs from 0Β toΒ 11, and the second index is alwaysΒ 0:</p>
<pre><code>i= 0 1 2 3 4 5 6 7 8 9 10 11
j= 0 0 0 0 0 0 0 0 0 0 0 0
ββββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ
β 0 β 1 β 2 β 3 β 4 β 5 β 6 β 7 β 8 β 9 β 10 β 11 β
ββββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ
</code></pre>
<p>and so:</p>
<pre><code>>>> d[10,0]
10
</code></pre>
<p>A dimension of length 1 is "free" (in some sense), so there's nothing stopping you from going to town:</p>
<pre><code>>>> e = a.reshape((1, 2, 1, 6, 1))
</code></pre>
<p>giving an array indexed like this:</p>
<pre><code>i= 0 0 0 0 0 0 0 0 0 0 0 0
j= 0 0 0 0 0 0 1 1 1 1 1 1
k= 0 0 0 0 0 0 0 0 0 0 0 0
l= 0 1 2 3 4 5 0 1 2 3 4 5
m= 0 0 0 0 0 0 0 0 0 0 0 0
ββββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ¬βββββ
β 0 β 1 β 2 β 3 β 4 β 5 β 6 β 7 β 8 β 9 β 10 β 11 β
ββββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ΄βββββ
</code></pre>
<p>and so:</p>
<pre><code>>>> e[0,1,0,0,0]
6
</code></pre>
<p>See the <a href="http://docs.scipy.org/doc/numpy/reference/internals.html" rel="noreferrer">NumPy internals documentation</a> for more details about how arrays are implemented.</p>
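<p>A quick way to verify that reshaping creates a view rather than a copy is to write through one array and observe the other (a small sketch; <code>shares_memory</code> requires NumPy 1.11 or later):</p>
<pre><code>>>> b = a.reshape((3, 4))
>>> numpy.shares_memory(a, b)
True
>>> b[2,1] = 99    # write through the view...
>>> a[9]           # ...and the shared data buffer changes too
99
</code></pre>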
<h3>2. What to do?</h3>
<p>Since <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="noreferrer"><code>numpy.reshape</code></a> just creates a new view, you shouldn't be scared about using it whenever necessary. It's the right tool to use when you want to index an array in a different way.</p>
<p>However, in a long computation it's usually possible to arrange to construct arrays with the "right" shape in the first place, and so minimize the number of reshapes and transposes. But without seeing the actual context that led to the need for a reshape, it's hard to say what should be changed.</p>
<p>The example in your question is:</p>
<pre><code>numpy.dot(M[:,0], numpy.ones((1, R)))
</code></pre>
<p>but this is not realistic. First, this expression:</p>
<pre><code>M[:,0].sum()
</code></pre>
<p>computes the result more simply. Second, is there really something special about column 0? Perhaps what you actually need is:</p>
<pre><code>M.sum(axis=0)
</code></pre>
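<p>And if you really do need a two-dimensional <code>(R, 1)</code> column, indexing with <code>numpy.newaxis</code> or a length-1 slice produces one without an explicit <code>reshape</code> (a small sketch using a 3×4 matrix):</p>
<pre><code>>>> M = numpy.arange(12).reshape((3, 4))
>>> M[:,0].shape
(3,)
>>> M[:,0,numpy.newaxis].shape
(3, 1)
>>> M[:,0:1].shape
(3, 1)
</code></pre>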
|
python|numpy|matrix|multidimensional-array
| 674
|
2,497
| 55,373,997
|
How to set the default parameters of Conv2D in tf.keras?
|
<p>Suppose I have a network with 5 convolutions, written in Keras.</p>
<pre class="lang-py prettyprint-override"><code>x = Input(shape=(None, None, 3))
y = Conv2D(10, 3, strides=1)(x)
y = Conv2D(16, 3, strides=1)(y)
y = Conv2D(32, 3, strides=1)(y)
y = Conv2D(48, 3, strides=1)(y)
y = Conv2D(64, 3, strides=1)(y)
</code></pre>
<p>I want to set every convolution's <code>kernel_initializer</code> to Xavier. One method is: </p>
<pre class="lang-py prettyprint-override"><code>x = Input(shape=(None, None, 3))
y = Conv2D(10, 3, strides=1, kernel_initializer=tf.glorot_uniform_initializer())(x)
y = Conv2D(16, 3, strides=1, kernel_initializer=tf.glorot_uniform_initializer())(y)
y = Conv2D(32, 3, strides=1, kernel_initializer=tf.glorot_uniform_initializer())(y)
y = Conv2D(48, 3, strides=1, kernel_initializer=tf.glorot_uniform_initializer())(y)
y = Conv2D(64, 3, strides=1, kernel_initializer=tf.glorot_uniform_initializer())(y)
</code></pre>
<p>But writing it this way is tedious and the code is very redundant. </p>
<p>Is there a better way to write this?</p>
|
<p>Keras provides no way to change the defaults, so you can just make a wrapper function:</p>
<pre><code>def myConv2D(filters, kernel):
    return Conv2D(filters, kernel, strides=1,
                  kernel_initializer=tf.glorot_uniform_initializer())
</code></pre>
<p>And then use it as:</p>
<pre><code>x = Input(shape=(None, None, 3))
y = myConv2D(10, 3)(x)
y = myConv2D(16, 3)(y)
y = myConv2D(32, 3)(y)
y = myConv2D(48, 3)(y)
y = myConv2D(64, 3)(y)
</code></pre>
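<p>Equivalently, if you'd rather not define a function, <code>functools.partial</code> from the standard library can pin the shared keyword arguments once (a sketch assuming the same TF 1.x initializer used in the question):</p>
<pre><code>from functools import partial

# Pin the shared defaults once; each call still takes filters and kernel size.
DefaultConv2D = partial(Conv2D, strides=1,
                        kernel_initializer=tf.glorot_uniform_initializer())

x = Input(shape=(None, None, 3))
y = DefaultConv2D(10, 3)(x)
y = DefaultConv2D(16, 3)(y)
</code></pre>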
|
python|tensorflow|keras|tf.keras
| 3
|
2,498
| 55,320,491
|
Pandas export to_excel error: 'DataFrame' object has no attribute 'data'
|
<p>I use the following code to try to make a dataframe from a Tf-Idf vectorizer. The output of the vectorizer's <code>fit_transform</code> is a sparse matrix, so I use <code>toarray()</code> to convert it to an array and then <code>pandas.DataFrame</code> to convert that to a dataframe. I also extract the list of features using <code>vectorizer.get_feature_names()</code> and use that as the column names for the dataframe.</p>
<pre><code>vect = TfidfVectorizer()
X = vect.fit_transform(text_list)
word_list = vect.get_feature_names()
df1 = pd.DataFrame(X.toarray())
df1.to_excel("temp1.xlsx")
df2 = pd.DataFrame(X.toarray(), columns = word_list)
df2.to_excel("temp2.xlsx")
</code></pre>
<p>In case-1, the dataframe df1 gets exported with no problem. However, the column names are missing; they are just labeled 0, 1, 2, ... </p>
<p>In case-2, I try to include the column names, but the export throws an error. </p>
<p>AttributeError: 'DataFrame' object has no attribute 'data'</p>
<p>Funnily, this error happens only in some cases and not all. For different text data, this problem does not arise. So I think it may have something to do with the <code>word_list</code> and maybe formatting. </p>
<p>After a bit more investigation, I found that one of the column names was "render" and that is creating the problem. How do I work around it? The following code throws the same error: <code>df = pd.DataFrame([1,2,3,4,5], columns=["render"])</code> followed by <code>df.to_excel("temp.xlsx")</code>. </p>
<p>Can someone explain why?</p>
|
<p>Solved it by passing the column names via the <code>header</code> parameter of <code>to_excel()</code> rather than setting them as the dataframe's column names. Still not sure how to overcome this problem at the root and make it accept "render" as a proper column heading.</p>
<pre><code>df2 = pd.DataFrame(X.toarray())
df2.to_excel("temp2.xlsx", header=word_list)
</code></pre>
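<p>Another possible workaround, sketched below and untested, is to rename the offending column before exporting so the header cell no longer triggers the failure (the replacement name <code>"render_"</code> is arbitrary):</p>
<pre><code>df2 = pd.DataFrame(X.toarray(), columns=word_list)
# Rename the column that provokes the AttributeError; any other label works.
df2 = df2.rename(columns={"render": "render_"})
df2.to_excel("temp2.xlsx")
</code></pre>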
|
python|pandas|scikit-learn|tfidfvectorizer
| 1
|
2,499
| 55,461,990
|
Fetching information from the different links on a web page and writing them to a .xls file using pandas,bs4 in Python
|
<p>I am a beginner in Python programming. I am practicing web scraping using the bs4 module in Python.</p>
<p>I have extracted some fields from a web page, but it extracts only 13 items whereas the web page has more than 13. I cannot understand why the rest of the items are not extracted. </p>
<p>Another thing is that I want to extract the contact number and the email address of each item on the web page, but they are only available on each item's own page. I am a beginner and, frankly speaking, I am stuck on how to access and scrape each item's individual page from within the given web page. Kindly tell me where I am going wrong and, if possible, suggest what to do.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.post('https://www.nelsonalexander.com.au/real-estate-agents/?office=&agent=A')
soup = bs(res.content, 'lxml')
data = soup.find_all("div", {"class": "agent-card large large-3 medium-4 small-12 columns text-center end"})
records = []
for item in data:
    name = item.find('h2').text.strip()
    position = item.find('h3').text.strip()
    records.append({'Names': name, 'Position': position})

df = pd.DataFrame(records, columns=['Names', 'Position'])
df = df.drop_duplicates()
df.to_excel(r'C:\Users\laptop\Desktop\NelsonAlexander.xls', sheet_name='MyData2', index=False, header=True)
</code></pre>
<p>The above code only extracts the names and the position of each item, but it scrapes just 13 records even though there are more on the web page. I could not write any code for extracting the contact number and the email address of each record because they sit inside each item's individual page, which is where I got stuck.</p>
<p>The Excel sheet looks like this:</p>
<p><a href="https://i.stack.imgur.com/508Mc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/508Mc.png" alt="enter image description here"></a></p>
|
<p>I am convinced that the emails are nowhere in the DOM. I made some modifications to @drec4s's code to instead keep going (dynamically) until there are no more entries.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
import itertools

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0', 'Referer': 'https://www.nelsonalexander.com.au/real-estate-agents/?office=&agent=A'}
records = []

with requests.Session() as s:
    # The listing is paginated behind an AJAX endpoint, so keep requesting
    # pages until one comes back with no agent cards.
    for i in itertools.count():
        res = s.get('https://www.nelsonalexander.com.au/real-estate-agents/page/{}/?ajax=1&agent=A'.format(i), headers=headers)
        soup = bs(res.content, 'lxml')
        data = soup.find_all("div", {"class": "agent-card large large-3 medium-4 small-12 columns text-center end"})
        if len(data) > 0:
            for item in data:
                name = item.find('h2').text.strip()
                position = item.find('h3').text.strip()
                # The phone number is stored in the card's "tel:" link.
                phone = item.find("div", {"class": "small-6 columns text-left"}).find("a").get('href').replace("tel:", "")
                records.append({'Names': name, 'Position': position, 'Phone': phone})
                print({'Names': name, 'Position': position, 'Phone': phone})
        else:
            break
</code></pre>
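<p>To match the spreadsheet output from the question, the collected records can then be exported with pandas (a small sketch; the output path is up to you):</p>
<pre><code>df = pd.DataFrame(records, columns=['Names', 'Position', 'Phone'])
df = df.drop_duplicates()
df.to_excel('NelsonAlexander.xls', sheet_name='MyData2', index=False, header=True)
</code></pre>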
|
python|pandas|web-scraping|beautifulsoup
| 0
|