| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
8,900
| 62,011,091
|
Index 38 is out of bounds for axis 1 with size 38 - Sklearn
|
<p>I am facing this error with the <code>Naive Bayes's</code> <code>CategoricalNB</code> algorithm</p>
<p>It gives the above error after the <strong>2nd attempt</strong> I run the cells. That means it works without any errors during the 1st time and when I try to change something (as small as a comment) and run the notebook again, it gives the error:</p>
<blockquote>
<p>IndexError: index 38 is out of bounds for axis 1 with size 38</p>
</blockquote>
<p>I don't know what is going wrong or how to fix it. When I restart the kernel and try again it works, and then every attempt after the <strong>1st attempt</strong> fails and gives the above error.</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
dataframe = pd.read_csv("hr_dataset.csv")
# dataframe = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
dataframe.head(2)
</code></pre>
<pre><code>from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# inputs = scaled_df
X_train, X_test, y_train, y_test = train_test_split(inputs, target, test_size=0.2)
categoricalNB_ = CategoricalNB()
categoricalNB_.fit(X_train, y_train)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
</code></pre>
<pre><code>pred = categoricalNB_.predict(X_test) # --------------> gives the error for every attempt after the 1st attempt. weird
</code></pre>
<pre><code>categoricalNB_.score(X_test, y_test)
# accuracy_score(y_test,pred)
</code></pre>
|
<p>I believe your issue is related to having a different set of values for a feature in the training set and the test set.</p>
<p>I went through your dataset and found that you have only one record where the total working years is 38. If that record ends up only in the test set, then the fit from the training set will not contain a probability for the value 38, thus throwing an out-of-bounds error.</p>
<p>You could either address this with the class_prior parameter (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.CategoricalNB.html" rel="nofollow noreferrer">for more details read the docs</a>), or you could make sure that every category of every feature is represented in the training set by at least a certain number of records.</p>
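<p>To make the failure mode concrete, here is a minimal sketch with made-up toy data. The <code>min_categories</code> parameter shown at the end is one possible remedy; it exists in scikit-learn 0.24+ and is not part of the answer above:</p>
<pre><code>import numpy as np
from sklearn.naive_bayes import CategoricalNB

X_train = np.array([[0], [1], [2]])    # categories 0..2 seen during fit
y_train = np.array([0, 1, 0])
X_test = np.array([[3]])               # category 3 never seen during fit

clf = CategoricalNB()
clf.fit(X_train, y_train)
# clf.predict(X_test)                  # raises IndexError: index 3 is out of bounds for axis 1 with size 3

clf = CategoricalNB(min_categories=4)  # reserve room for categories the training split did not contain
clf.fit(X_train, y_train)
print(clf.predict(X_test))             # works; the unseen category only receives the smoothed (Laplace) probability
</code></pre>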
|
python|pandas|scikit-learn
| 3
|
8,901
| 57,849,831
|
How to predict in multiple and simultaneous Keras classifier sessions in Django?
|
<p>I know similar questions have been asked before and I have read all of them but none solved my problem.</p>
<p>I have a Django project in which I am using <code>CNNSequenceClassifier</code> from <code>sequence_classifiers</code> which is a <code>Keras</code> model. The model files have been fit before and saved to particular destinations from which I again load them and predict what I want.</p>
<pre><code>clf = CNNSequenceClassifier(epochs=2)
</code></pre>
<p>When I load the model I do this, following suggestions I found while searching: declare the model as global before loading it, and then add the other two lines:</p>
<pre><code>global clf
clf = pickle.load(open(modelfilenameandpath, "rb"))
global graph
graph = tf.get_default_graph()
</code></pre>
<p>and before predicting I use graph.as_default()</p>
<pre><code>with graph.as_default():
    probs = clf.predict_proba(the_new_vecs)
K.clear_session()
</code></pre>
<p>I put <code>K.clear_session()</code> because I predict in a for loop and sometimes the next item of the for loop's prediction gets mixed up with the last one and raises tensorflow errors. But clearly <code>K.clear_session()</code> clears the session and makes it easy for the new item's prediction to work fine.</p>
<p>The problem is that in my <code>views.py</code> I have two functions which trigger prediction, and sometimes I need to use both simultaneously. But since the project is using the TensorFlow backend, only one session is defined and the predictions of these two functions get mixed up together. <code>K.clear_session()</code> does not help here because it only applies to that particular session and does not expect new work coming in on the same session.</p>
<p>I really do not know how to make each function start a new, independent session whenever it loads the model and/or wants to predict, so that nothing gets mixed up.
I have seen code like:</p>
<pre><code>with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
</code></pre>
<p>but I don't know where to put the <code>probs = clf.predict_proba(the_new_vecs)</code>. It seems that all of these approaches require the operation you want run in the session to be passed as an argument to <code>sess.run()</code>, which does not work for me, or at least I do not know how to make it work. I want the prediction to be done in a new session, and after that the values of <code>probs</code> are really important. Also, <code>global_variables_initializer()</code> seems to change the values of all my variables.</p>
<p>I have also tried <code>K.set_session()</code> or <code>K.get_session()</code>, but they did not work.</p>
<p>To sum up, I think the reason I am stuck is that I do not know how to use sessions for my purpose. I do not know where to use what code?!</p>
<p>Please help!</p>
|
<p>My understanding of <code>tensorflow</code> graphs and sessions is:</p>
<p>A <code>tensorflow</code> graph hosts operations, <code>placeholder</code>s and <code>Variable</code>s. A <code>tensorflow</code> graph lives inside a <code>tensorflow</code> session (that's why to save a trained model using <code>tensorflow.train.Saver</code>, you need something like <code>Saver.save(sess, the graph)</code>). </p>
<p>Here is a simple example to help you make sense of the relation between a <code>keras</code> model and a <code>tensorflow</code> graph:</p>
<pre><code>import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model

tf.reset_default_graph()
graph_1, graph_2 = tf.Graph(), tf.Graph()

with graph_1.as_default():
    x_in = Input(shape=(1, ), name='in_graph1')
    pred = Dense(5, name='dense1_graph1')(x_in)
    pred = Dense(1, name='dense2_graph1')(pred)
    model = Model(input=x_in, output=pred)

with graph_2.as_default():
    x_in = Input(shape=(1, ), name='in_graph2')
    pred = Dense(10, name='dense1_graph2')(x_in)
    pred = Dense(1, name='dense2_graph2')(pred)
    model = Model(input=x_in, output=pred)

with tf.Session() as sess:
    default_ops = sess.graph.get_operations()

with graph_1.as_default():
    with tf.Session() as sess:
        one_ops = sess.graph.get_operations()

with graph_2.as_default():
    with tf.Session() as sess:
        two_ops = sess.graph.get_operations()
</code></pre>
<p>As you can see by running the code, <code>default_ops</code> is an empty list which means there is no operation in the default graph. <code>one_ops</code> is a list of operations of the first <code>keras</code> model and <code>two_ops</code> is a list of operations of the second <code>keras</code> model. </p>
<p><strong>So, by using <code>with graph.as_default()</code>, a <code>keras</code> model can be exclusively embedded in a <code>tensorflow</code> graph.</strong></p>
<p>With this in mind, it becomes easy to load several <code>keras</code> models in a single script. I think the example script below will address your confusion:</p>
<pre><code>import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model
from keras.models import model_from_json

tf.reset_default_graph()

x = np.linspace(1, 4, 4)
y = np.random.rand(4)

models = {}
graph_1, graph_2 = tf.Graph(), tf.Graph()

# graph_1
with graph_1.as_default():
    x_in = Input(shape=(1, ), name='in_graph1')
    pred = Dense(5, name='dense1_graph1')(x_in)
    pred = Dense(1, name='dense2_graph1')(pred)
    model = Model(input=x_in, output=pred)
    models['graph_1'] = model

# graph_2
with graph_2.as_default():
    x_in = Input(shape=(1, ), name='in_graph2')
    pred = Dense(10, name='dense1_graph2')(x_in)
    pred = Dense(1, name='dense2_graph2')(pred)
    model = Model(input=x_in, output=pred)
    models['graph_2'] = model

# save the two models
with tf.Session(graph=graph_1) as sess:
    with open("model_1.json", "w") as source:
        source.write(models['graph_1'].to_json())
    models['graph_1'].save_weights("weights_1.h5")

with tf.Session(graph=graph_2) as sess:
    with open("model_2.json", "w") as source:
        source.write(models['graph_2'].to_json())
    models['graph_2'].save_weights("weights_2.h5")

####################################################
# play with the model
pred_one, pred_one_reloaded = [], []
pred_two, pred_two_reloaded = [], []

for _ in range(10):
    print(_)
    if _ % 2 == 0:
        with graph_1.as_default():
            with tf.Session() as sess:
                sess.run(tf.global_variables_initializer())
                pred_one.append(models['graph_1'].predict(x).ravel())

        with tf.Session() as sess:
            with open("model_1.json", "r") as f:
                model = model_from_json(f.read())
            model.load_weights("weights_1.h5")
            pred_one_reloaded.append(model.predict(x).ravel())
    else:
        with graph_2.as_default():
            with tf.Session() as sess:
                sess.run(tf.global_variables_initializer())
                pred_two.append(models['graph_2'].predict(x).ravel())

        with tf.Session() as sess:
            with open("model_2.json", "r") as f:
                model = model_from_json(f.read())
            model.load_weights("weights_2.h5")
            pred_two_reloaded.append(model.predict(x).ravel())

pred_one = np.array(pred_one)
pred_one_reloaded = np.array(pred_one_reloaded)
pred_two = np.array(pred_two)
pred_two_reloaded = np.array(pred_two_reloaded)

print(pred_one)
print(pred_one_reloaded)
print(pred_two)
print(pred_two_reloaded)
</code></pre>
|
python|django|tensorflow|keras|tf.keras
| 2
|
8,902
| 57,949,871
|
How to set/get Pandas dataframes into Redis using pyarrow
|
<p>Using </p>
<pre><code>dd = {'ID': ['H576','H577','H578','H600', 'H700'],
'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC','DDDDDD', 'EEEEEEE']}
df = pd.DataFrame(dd)
</code></pre>
<p>Pre Pandas 0.25, this below worked. </p>
<pre><code>set: redisConn.set("key", df.to_msgpack(compress='zlib'))
get: pd.read_msgpack(redisConn.get("key"))
</code></pre>
<p>Now, there are deprecation warnings:</p>
<pre><code>FutureWarning: to_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.
The read_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.
</code></pre>
<p>How does pyarrow work? And how do I get pyarrow objects into and back from Redis?</p>
<p>reference:
<a href="https://stackoverflow.com/questions/37943778/how-to-set-get-pandas-dataframe-to-from-redis">How to set/get pandas.DataFrame to/from Redis?</a></p>
|
<p>Here's a full example of using pyarrow to serialize a pandas dataframe for storage in redis:</p>
<pre><code>apt-get install python3 python3-pip redis-server
pip3 install pandas pyarrow redis
</code></pre>
<p>and then in python</p>
<pre><code>import pandas as pd
import pyarrow as pa
import redis
df=pd.DataFrame({'A':[1,2,3]})
r = redis.Redis(host='localhost', port=6379, db=0)
context = pa.default_serialization_context()
r.set("key", context.serialize(df).to_buffer().to_pybytes())
context.deserialize(r.get("key"))
#    A
# 0  1
# 1  2
# 2  3
</code></pre>
<p>I just submitted <a href="https://github.com/pandas-dev/pandas/pull/28494" rel="noreferrer">PR 28494</a> to pandas to include this pyarrow example in the docs.</p>
<p>Reference docs:</p>
<ul>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_msgpack.html" rel="noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_msgpack.html</a></li>
<li><a href="https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization" rel="noreferrer">https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization</a></li>
<li><a href="https://arrow.apache.org/docs/python/memory.html#pyarrow-buffer" rel="noreferrer">https://arrow.apache.org/docs/python/memory.html#pyarrow-buffer</a></li>
<li><a href="https://stackoverflow.com/a/37957490/4126114">https://stackoverflow.com/a/37957490/4126114</a></li>
</ul>
|
python|pandas|redis|pyarrow|py-redis
| 42
|
8,903
| 58,115,477
|
Pretrained CNN(tensorflow/darknet/caffe) weights for human/vehicle detection only
|
<p>I am making use of TensorFlow's pretrained weights from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models" rel="nofollow noreferrer">Tensorflow detection model zoo</a>, which are primarily trained on the <a href="http://cocodataset.org/" rel="nofollow noreferrer">COCO</a> dataset covering about 80 different classes, including humans. Using these models therefore comes with a higher computational cost, so is there any publicly available set of pre-trained weights that focuses on just one class, in this case either humans or vehicles (cars)?</p>
<p>If there are no such models available, then how is it possible to fine-tune or customize the existing models, such as</p>
<p>"<strong>ssd_inception_v2_coco_2018_01_28</strong>", which performs pretty well with a <em>mAP of 32 and is computationally efficient as well</em>, so that they detect only humans and no other objects?</p>
|
<p>Have you tried the model <a href="https://github.com/opencv/open_model_zoo/blob/develop/models/intel/person-vehicle-bike-detection-crossroad-1016/description/person-vehicle-bike-detection-crossroad-1016.md" rel="nofollow noreferrer">person-vehicle-bike-detection-crossroad-1016</a> ?</p>
|
python|tensorflow|caffe|yolo
| 1
|
8,904
| 58,101,279
|
Avoid using multiple if statements
|
<p>I am trying to create an if statement to check a condition for each iteration</p>
<pre><code>for _ in range(100):
    B10 = np.random.randint(0, precip.shape[0])
    T10 = np.random.randint(0, precip.shape[0])
    if np.abs(B10 - T10) <= 30:
        T10 = np.random.randint(0, precip.shape[0])
</code></pre>
<p>I want to create an if condition that will get new values of T10 until the condition above is met for every iteration. How can I do this?</p>
|
<p>Use a <code>while</code> loop instead of a for loop:</p>
<pre class="lang-py prettyprint-override"><code>B10 = np.random.randint(0, precip.shape[0])
T10 = np.random.randint(0, precip.shape[0])
while np.abs(B10-T10) <= 30:
    B10 = np.random.randint(0, precip.shape[0])
    T10 = np.random.randint(0, precip.shape[0])
</code></pre>
<p>or you can avoid redeclaring variables using the following:</p>
<pre class="lang-py prettyprint-override"><code>while True:
B10 = np.random.randint(0, precip.shape[0])
T10 = np.random.randint(0, precip.shape[0])
if not (np.abs(B10-T10) <=30):
break
</code></pre>
<p>In general, it is a good practice to use a <code>for</code> loop when you know the number of iterations of your loop or when you are using collections. However, when you don't know it, i.e., when it depends on a condition, you should use a <code>while</code> loop.</p>
|
python|pandas|numpy
| 3
|
8,905
| 58,108,913
|
Dataframe column won't convert from integer string to an actual integer
|
<p>I have a date string in microsecond resolution. I need it as an integer.</p>
<pre><code>import pandas as pd
import numpy as np

data = ["20181231235959383171", "20181231235959383172"]
df = pd.DataFrame(data=data, columns=["A"])
df["A"].astype(np.int)
</code></pre>
<p>Error:</p>
<pre><code>File "pandas\_libs\lib.pyx", line 545, in pandas._libs.lib.astype_intsafe
OverflowError: Python int too large to convert to C long
</code></pre>
<p>Same problem if I try to cast it to standard Python <code>int</code></p>
|
<p>Per <a href="https://stackoverflow.com/a/58108900/240443">my answer</a> in your previous question:</p>
<pre><code>import pandas as pd
import numpy as np

data = ["20181231235959383171", "20181231235959383172"]
df = pd.DataFrame(data=data, columns=["A"])

# slow but big enough
df["A_as_python_int"] = df["A"].apply(int)

# fast but has to be split to two integers
df["A_seconds"] = (df["A_as_python_int"] // 1000000).astype(np.int)
df["A_fractions"] = (df["A_as_python_int"] % 1000000).astype(np.int)
</code></pre>
|
pandas|numpy
| 1
|
8,906
| 34,168,764
|
How can I read a datetime from pandas
|
<p>I convert a string to date and save the CSV:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df.to_csv('dates.csv')
</code></pre>
<p>But when I try to read the CSV, it gets the column back as str:</p>
<pre><code>df = pd.read_csv('dates.csv')
type(df['date'].iloc[0])
<type 'str'>
</code></pre>
<p>How can I save it as a datetime and read as a datetime?</p>
|
<p>There is the <code>parse_dates</code> parameter in <code>read_csv</code>.</p>
<blockquote>
<p>parse_dates : boolean, list of ints or names, list of lists, or dict
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
{'foo' : [1, 3]} -> parse columns 1, 3 as date and call result 'foo'
A fast-path exists for iso8601-formatted dates.</p>
</blockquote>
<p>So:</p>
<pre><code>df = pd.read_csv(filename, parse_dates=['date_col_1', 'date_col2', etc...])
</code></pre>
<p>A specific example:</p>
<pre><code>df = pd.DataFrame({'date': ['2015-1-1', '2015-2-1', '2015-3-1']})
df['date'] = pd.to_datetime(df['date'])
df.to_csv('dates.csv')
df2 = pd.read_csv('dates.csv')
>>> type(df2['date'].iloc[0])
str
df2 = pd.read_csv('dates.csv', parse_dates=['date'])
>>> type(df2['date'].iloc[0])
pandas.tslib.Timestamp
</code></pre>
|
python|pandas
| 1
|
8,907
| 34,357,215
|
Add ticks for days on Pandas plot
|
<p>When plotting a DataFrame with pandas:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from StringIO import StringIO
mycsv = StringIO("""
time;openBid;highBid;lowBid;closeBid;openAsk;highAsk;lowAsk;closeAsk;volume
2015-12-06T22:00:00.000000Z;1.08703;1.08713;1.08703;1.08713;1.08793;1.08813;1.08793;1.08813;2
2015-12-07T22:00:05.000000Z;1.08682;1.08682;1.08662;1.08662;1.08782;1.08782;1.08762;1.08762;2
2015-12-08T22:01:20.000000Z;1.08683;1.08684;1.08681;1.08684;1.08759;1.08768;1.08759;1.08768;4
2015-12-09T22:01:30.000000Z;1.08676;1.08676;1.08676;1.08676;1.0876;1.0876;1.0876;1.0876;1
2015-12-10T00:03:00.000000Z;1.08675;1.08675;1.08675;1.08675;1.08737;1.08737;1.08737;1.08737;1
2015-12-06T22:03:10.000000Z;1.08675;1.08675;1.08675;1.08675;1.08728;1.08728;1.08728;1.08728;1
2015-12-06T22:03:50.000000Z;1.08676;1.08676;1.08676;1.08676;1.08728;1.08728;1.08728;1.08728;1
""")
df = pd.read_csv(mycsv, delimiter=';', parse_dates=True, index_col='time', header=0)
spr = df['lowAsk']-df['lowBid']
spr.plot()
plt.show()
</code></pre>
<p>I get this:</p>
<p><a href="https://i.stack.imgur.com/MIXPk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MIXPk.png" alt="enter image description here" /></a></p>
<h1>Question:</h1>
<p><strong>How to add ticks+labels+grid on x-axis for each day?</strong></p>
<p>Example: <code>Sun 06, Mon 07, Tue 08</code>, etc.</p>
<p>I tried various things like <code>plt.axis(...)</code> but it was mostly unsuccessful.</p>
|
<p>You can do this with <code>DayLocator</code> and <code>DateFormatter</code> from <code>matplotlib.dates</code>.</p>
<p>From the docs:</p>
<blockquote>
<p><a href="http://matplotlib.org/api/dates_api.html#matplotlib.dates.DayLocator" rel="nofollow noreferrer"><code>DayLocator</code></a></p>
<p>Make ticks on occurances of each day of the month</p>
</blockquote>
<p>And</p>
<blockquote>
<p><a href="http://matplotlib.org/api/dates_api.html#matplotlib.dates.DateFormatter" rel="nofollow noreferrer"><code>DateFormatter</code></a></p>
<p>Use a <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer"><code>strftime()</code></a> format string.</p>
</blockquote>
<p>So, in your case, we want to set the format string to <code>"%a %d"</code> to get <code>Mon 07</code>, etc.</p>
<p>We set these for the <code>major_locator</code> and <code>major_formatter</code> of the <code>xaxis</code> of the <code>Axes</code> returned by <code>pandas</code> when you call <code>spr.plot()</code></p>
<p>Here's your example:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from StringIO import StringIO
mycsv = StringIO("""
time;openBid;highBid;lowBid;closeBid;openAsk;highAsk;lowAsk;closeAsk;volume
2015-12-06T22:00:00.000000Z;1.08703;1.08713;1.08703;1.08713;1.08793;1.08813;1.08793;1.08813;2
2015-12-07T22:00:05.000000Z;1.08682;1.08682;1.08662;1.08662;1.08782;1.08782;1.08762;1.08762;2
2015-12-08T22:01:20.000000Z;1.08683;1.08684;1.08681;1.08684;1.08759;1.08768;1.08759;1.08768;4
2015-12-09T22:01:30.000000Z;1.08676;1.08676;1.08676;1.08676;1.0876;1.0876;1.0876;1.0876;1
2015-12-10T00:03:00.000000Z;1.08675;1.08675;1.08675;1.08675;1.08737;1.08737;1.08737;1.08737;1
2015-12-06T22:03:10.000000Z;1.08675;1.08675;1.08675;1.08675;1.08728;1.08728;1.08728;1.08728;1
2015-12-06T22:03:50.000000Z;1.08676;1.08676;1.08676;1.08676;1.08728;1.08728;1.08728;1.08728;1
""")
df = pd.read_csv(mycsv, delimiter=';', parse_dates=True, index_col='time', header=0)
spr = df['lowAsk']-df['lowBid']
ax=spr.plot()
ax.xaxis.set_major_locator(mdates.DayLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%a %d'))
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/dq1ts.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dq1ts.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|plot
| 1
|
8,908
| 37,041,696
|
HTML not rendering properly with Canopy 1.7.1.3323 / IPython 4.1.2
|
<p>I've just upgraded to Canopy 1.7.1; I think this problem stems from the change in IPython version from 2.4.1 to 4.1.2.</p>
<p>The issue I have is that calling a DataFrame object in Python seems to use the <code>__print__</code> method, i.e. there's no difference between typing <code>print df</code> and <code>df</code> into the interpreter, and unfortunately this gives me an all-text output rather than the nice tables I normally get.</p>
<p>So I get something that looks exactly like this when I call <code>df</code> rather than a table:
</p>
<pre><code> date flag
1 20151102 0
98663 20151101 1
</code></pre>
<p>This happened immediately after the upgrade, and I also tried updating all my packages. I've also looked at <a href="https://stackoverflow.com/questions/26873127/print-dataframe-as-table-in-ipython-notebook">this</a> and <a href="https://stackoverflow.com/questions/27801379/ipython-notebook-not-printing-dataframe-as-table?lq=1">this</a>, but none of the solutions there work for me. (<code>'display.notebook_repr_html'</code> is already <code>True</code>)</p>
<p>EDIT: The issue seems to do with rendering HTML; typing in </p>
<pre class="lang-python prettyprint-override"><code>from IPython.core.display import display, HTML
display(HTML('<h1>Hello, world!</h1>'))
</code></pre>
<p>returns</p>
<pre class="lang-python prettyprint-override"><code><IPython.core.display.HTML object>
</code></pre>
|
<p>This has purposely been disabled. I have requested a way to have it re-enabled, even if unsupported.</p>
<p>Please see the request. <a href="https://github.com/jupyter/qtconsole/issues/165" rel="nofollow noreferrer">https://github.com/jupyter/qtconsole/issues/165</a></p>
|
python|pandas|ipython|canopy
| 1
|
8,909
| 54,822,806
|
How to sort unique table in dataframe based on a single column?
|
<p>I have a df with these values:</p>
<pre><code> 0 | 1 | 2
0 sun | east | pass
1 moon | west | pass
2 mars | north | pass
3 saturn | east | pass
4 neptune| west | pass
</code></pre>
<p>I need to get a distinct df by looking at the values of column 1. Here in column 1 there are two east and two west entries, and their column 0 values are different.</p>
<p>the output should be</p>
<pre><code> 0 | 1 | 2
0 sun | east | pass
1 moon | west | pass
2 mars | north | pass
or
0 | 1 | 2
0 saturn | east | pass
1 neptune | west | pass
2 mars | north | pass
</code></pre>
<p>So in the output I need only a single value in column 0, not both; here I need sun and moon (or) saturn and neptune.</p>
|
<p>I believe you need <code>groupby</code> with <code>join</code> - this only works if the values of column <code>2</code> are the same within each group:</p>
<pre><code>df = df.groupby([1,2], sort=False)[0].apply(' (or) '.join).reset_index().sort_index(axis=1)
print (df)
0 1 2
0 sun (or) saturn east pass
1 moon (or) neptune west pass
2 mars north pass
</code></pre>
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 3
|
8,910
| 54,882,230
|
Update pandas dataframe values with if else condition?
|
<p>I want to update a pandas dataframe column depending on whether a boolean value is True / False. The JSON format is:</p>
<pre><code>[
{
"A":"value",
"TIME":1551052800000,
"C":35,
"D":36,
"E":34,
"F":35,
"G":33
},
...
...
]
</code></pre>
<p>Converted it into dataframe</p>
<p><code>df = pd.DataFrame.from_dict(content)</code></p>
<p>I can now access different columns in the original object via the dataframe as </p>
<p><code>df["TIME"] = '2019-12-0'</code></p>
<p>If I do the above, it basically gets set for all of df["TIME"] in the dataframe. I want to update only where a condition matches, say:</p>
<pre><code>If df["label"].bool() == True
then update 5 columns in x way
Else if df["label"].bool() == False
then update 6 columns in a different way
</code></pre>
<p>I run a simple if/else condition:</p>
<pre><code>if df["label"].bool() == True:
    df["A"] = df["G"]
    if df["A"] == 0:
        print(df["A"])
    else:
        df["C"] = (df["D"]/df["E"])*100
elif df["label"].bool() == False:
    ....
</code></pre>
<p>It works fine for 1 selection, but for multiple selections it returns:</p>
<p><code>ValueError: The truth value of a Series is ambiguous.</code></p>
<p>Note: df["label"] column is added from the client to check a specific row is selected or not</p>
|
<p>If I understand correctly, you are better off setting relevant slices of the dataframe (<a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer">indexing</a>), as opposed to setting the values with if/else statements.</p>
<p>For example, </p>
<pre><code># df = a pandas.DataFrame
df.loc[:,'A'][df['label']=='value'] = 0
</code></pre>
<p><strong>Note:</strong> If <code>df['label']</code> just contains Boolean values you don't need to check its value and could just use it directly to mask the dataframe:</p>
<pre><code>df.loc[:,'A'][df['label']] = 0
</code></pre>
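<p>As an aside, a minimal sketch of the same assignment done with a single <code>.loc</code> call, which avoids chained indexing that can silently fail to write (it assumes <code>df['label']</code> holds booleans, as in the question):</p>
<pre><code># select rows where label is True and set column 'A' for those rows in one step
df.loc[df['label'], 'A'] = 0
</code></pre>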
|
python|pandas
| 1
|
8,911
| 55,055,176
|
Adding values in a row to next row and deleting first row in pandas dataframe
|
<p>I have a DataFrame that looks something like:</p>
<pre><code> Geo Age 2010 2011 2012
0 toronto -1 ~ 7 2 1 5
1 toronto 0 ~ 4 5 3 4
2 toronto 5 ~ 9 4 5 5
3 bc -1 ~ 7 1 3 2
4 bc 0 ~ 4 2 3 1
5 bc 5 ~ 9 3 1 1
6 mt -1 ~ 7 4 3 4
7 mt 0 ~ 4 2 2 1
8 mt 5 ~ 9 6 6 6
</code></pre>
<p>I want to get rid of the -1 ~ 7 row for each city; however, I want to add its values to the 0 ~ 4 row before deleting it.</p>
<p>Desired output:</p>
<pre><code> Geo Age 2010 2011 2012
1 toronto 0 ~ 4 7 4 9
2 toronto 5 ~ 9 4 5 5
4 bc 0 ~ 4 3 6 3
5 bc 5 ~ 9 3 1 1
7 mt 0 ~ 4 6 5 5
8 mt 5 ~ 9 6 6 6
</code></pre>
<p>Don't care about the index. I will change them.</p>
<p>Thanks!</p>
|
<p>Assuming your df is ordered, you can just use a combination of <code>np.where</code> and <code>shift</code>, then filter:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame()
df['Geo'] = ['toronto','toronto','toronto']
df['Age'] = ['-1 ~ 7','0 ~ 4','5 ~ 9']
df['2010'] = [2,5,4]
df['2010'] = np.where(df['Age']=='0 ~ 4',df['2010']+df['2010'].shift(1),df['2010'])
df = df[~(df['Age']=='-1 ~ 7')]
display(df)
Geo Age 2010
1 toronto 0 ~ 4 7.0
2 toronto 5 ~ 9 4.0
</code></pre>
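<p>If the same logic should be applied to every year column at once, here is a sketch under the same ordering assumption (the column names <code>'2010'</code>, <code>'2011'</code>, <code>'2012'</code> are taken from the question, not from the small example above):</p>
<pre><code>year_cols = ['2010', '2011', '2012']
mask = df['Age'] == '0 ~ 4'
# add the previous row (the '-1 ~ 7' row of the same city) onto the '0 ~ 4' row
df.loc[mask, year_cols] = df[year_cols][mask] + df[year_cols].shift(1)[mask]
# then drop the '-1 ~ 7' rows
df = df[df['Age'] != '-1 ~ 7']
</code></pre>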
|
python|pandas|indexing
| 1
|
8,912
| 54,818,488
|
What happens to the Pandas Dataframe after groupby command in python?
|
<p>I am trying to figure some things out in Pandas. I have a dataFrame(df) with 109 rows and 2 distinct "owner_name" values.</p>
<p>Before the groupby command I am able to view the entire contents with:</p>
<pre><code>with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(df)
</code></pre>
<p>After I do the groupby using:</p>
<pre><code>rdf = df.groupby('owner_name')
</code></pre>
<p>Now when I do:</p>
<pre><code>with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(rdf)
</code></pre>
<p>I get:</p>
<pre><code><pandas.core.groupby.generic.DataFrameGroupBy object at 0x7fe40fc8d2b0>
</code></pre>
<p>How do I print out the contents of the rdf dataFrame?</p>
<p>Also how do I cycle through the various rows and columns now?</p>
<p>Thanks a lot.</p>
|
<p><code>pandas</code> <code>groupby</code> returns a <a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">groupby object</a>; if you want to see the detail of each <code>groupby</code> subset, wrap it in <code>list</code>:</p>
<pre><code>list(df.groupby('a'))
Out[48]:
[(1, id a b
0 a 1 1), (2, id a b
1 b 2 2
2 c 2 2)]
# in your case list(rdf)
</code></pre>
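<p>To cycle through the rows and columns after the <code>groupby</code>, a minimal sketch (<code>rdf</code> and <code>owner_name</code> are taken from the question):</p>
<pre><code>for owner, group_df in rdf:                 # iterating a GroupBy yields (key, sub-DataFrame) pairs
    print(owner)                            # the group key, one of the two owner_name values
    print(group_df)                         # all rows of df belonging to that owner
    for idx, row in group_df.iterrows():    # then iterate the rows of the sub-DataFrame if needed
        print(idx, row.to_dict())
</code></pre>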
|
python|pandas|dataframe|group-by
| 3
|
8,913
| 49,371,343
|
Identify cells with only whitespace
|
<p>I want to apply this function</p>
<pre><code>df.column.str.split(expand = True)
</code></pre>
<p>but the problem is that there are some "empty" cells, where by "empty" I mean that a cell contains, for example, 6 whitespace characters. Moreover, this runs in an iteration, so sometimes I instead have cells with 2 whitespace characters.</p>
<p>How can I identify these "empty" cells?</p>
<p>PD:</p>
<pre><code>df[df.column != '(6 spaces inside)']
</code></pre>
<p>works only for a particular case when there are 6 spaces.</p>
<p>EDIT 1: the df.column is an object type with people names (one or more than one, even errors)</p>
<p>EDIT 2: The idea is to remove this cell (row) in order to successfully applied the "str.split" function. This is an interation so sometimes I have cells with 6 spaces and other with 2 spaces.</p>
<p>EDIT 3: I can't remove all whitespaces because then I won't be able to apply the string separation (because I have names like "Jean Carlo" that I want to separate)</p>
<p>FINAL SOLUTION: I could solve the problem with the post that was pointed out, only adding a '+' because I have whitespace in other cells.</p>
<p>Solution:</p>
<pre><code>df = df.replace(['^\s+$'], np.nan, regex = True)
</code></pre>
|
<pre><code>df['Col1'] = df['Col1'].map(lambda x: x.strip())
</code></pre>
<p>This will remove all leading and trailing spaces in df['Col1']</p>
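<p>A related sketch that goes one step further and drops the rows that were only whitespace, so the split can then be applied; this mirrors the regex approach in the question's final solution:</p>
<pre><code>df['Col1'] = df['Col1'].str.strip()          # whitespace-only cells become empty strings
df = df[df['Col1'] != '']                    # drop the rows that are now empty
parts = df['Col1'].str.split(expand=True)    # the split now works on the remaining names
</code></pre>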
|
python|string|pandas
| 0
|
8,914
| 49,674,215
|
Fill missing values (na) with an list/series after modelling missing values
|
<p>I am trying to plug the predicted missing values into original df (of course to the column with missing value). How could I do so?</p>
<p>The predicted missing values are basically stored in a list/series whose length is the number of missing values in the original df. The order of the list matches the order in which missing values appear in the df, I think, since I split the test set from the df using <code>notnull()</code> on the series with missing values.</p>
<p>I have been trying <code>pd.Series.fillna</code>, but that only allows a single replacement value.</p>
|
<p>You can use numpy <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html" rel="nofollow noreferrer">where</a> and pandas <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="nofollow noreferrer">isnull</a> function to do that.</p>
<pre><code>import numpy as np

df['relevant_column'] = np.where(df['relevant_column'].isnull(),
                                 predicted_values,
                                 df['relevant_column'])
</code></pre>
<p>predicted_values should be a pandas series or 1d numpy array with the same length as the dataframe.</p>
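<p>An equivalent sketch using <code>.loc</code> with a boolean mask, for the case described in the question where <code>predicted_values</code> has only as many entries as there are missing values:</p>
<pre><code>mask = df['relevant_column'].isnull()
# the list must have exactly as many entries as there are NaNs, in the same order
df.loc[mask, 'relevant_column'] = list(predicted_values)
</code></pre>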
|
python|list|pandas|missing-data|fillna
| 0
|
8,915
| 49,601,263
|
Loss doesn't decrease in training the pytorch RNN
|
<p>Here is the RNN network I designed for a sentiment.</p>
<pre><code>class rnn(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.h2h = nn.Linear(hidden_size, hidden_size)
        self.relu = nn.Tanh()
        self.sigmoid = nn.LogSigmoid()

    def forward(self, input, hidden):
        hidden_new = self.relu(self.i2h(input) + self.h2h(hidden))
        output = self.h2o(hidden)
        output = self.sigmoid(output)
        return output, hidden_new

    def init_hidden(self):
        return Variable(torch.zeros(1, self.hidden_size))
</code></pre>
<p>Then, I create and train the network as :</p>
<pre><code>RNN = rnn(50, 50, 1)
learning_rate = 0.0005
criteria = nn.MSELoss()
optimizer = optim.Adam(RNN.parameters(), lr=learning_rate)
hidden = RNN.init_hidden()
epochs = 2

for epoch in range(epochs):
    for i in range(len(train['Phrase'])):
        input = convert_to_vectors(train['Phrase'][i])
        for j in range(len(input)):
            temp_input = Variable(torch.FloatTensor(input[j]))
            output, hidden = RNN(temp_input, hidden)
        temp_output = torch.FloatTensor([np.float64(train['Sentiment'][i])/4])
        loss = criteria(output, Variable(temp_output))
        loss.backward(retain_graph = True)
        if (i % 20 == 0):
            print('Current loss is ', loss)
</code></pre>
<p>The problem is that the loss of the network isn't decreasing. It increases, then decreases and so on. It isn't stable at all. I tried using a smaller learning rate but it doesn't seem to help.</p>
<p>Why is this happening and how can I rectify this?</p>
|
<p>You just need to call <code>optimizer.step()</code> after you do <code>loss.backward()</code>.</p>
<p>Which, by the way, illustrates a common misconception: <strong>Backpropagation is not a learning algorithm</strong>, it's just a cool way of computing the gradient of the loss w.r.t. your parameters. You then use some variant of Gradient Descent (eg. plain SGD, AdaGrad, etc., in your case Adam) to update the weights given the gradients.</p>
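<p>For concreteness, a sketch of what the corrected inner loop from the question could look like. It also zeroes the gradients with <code>optimizer.zero_grad()</code>, which the original snippet omits and which is usually needed so gradients don't accumulate across steps:</p>
<pre><code>for epoch in range(epochs):
    for i in range(len(train['Phrase'])):
        input = convert_to_vectors(train['Phrase'][i])
        optimizer.zero_grad()                  # reset gradients from the previous step
        for j in range(len(input)):
            temp_input = Variable(torch.FloatTensor(input[j]))
            output, hidden = RNN(temp_input, hidden)
        temp_output = torch.FloatTensor([np.float64(train['Sentiment'][i]) / 4])
        loss = criteria(output, Variable(temp_output))
        loss.backward(retain_graph=True)
        optimizer.step()                       # apply the Adam update to the parameters
</code></pre>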
|
python|machine-learning|pytorch|rnn
| 1
|
8,916
| 27,940,492
|
Backward compatibility of ewmvar in pandas
|
<p>It seems that the <code>ewmvar</code> is not always backward compatible. When using the settings <code>bias=True</code> in both pandas 0.14.1 and 0.15.2, we obtain the same result. However, when <code>bias=False</code>, as is the default, the results are no longer the same.</p>
<p>Is there a way to stay compatible in this case? I would like to make sure that it does stay compatible.</p>
<pre><code>from pandas import Series, ewmvar

s = Series(range(1, 11))
ewmvar(s, span=19, bias=False)
</code></pre>
<p>Gives in pandas 0.14.1:</p>
<pre><code>0 -2.343804e-16
1 2.631579e-01
2 6.998135e-01
3 1.307082e+00
4 2.080978e+00
5 3.016467e+00
6 4.107530e+00
7 5.347237e+00
8 6.727838e+00
9 8.240851e+00
</code></pre>
<p>However in pandas 0.15.2:</p>
<pre><code>0 NaN
1 0.500000
2 0.998155
3 1.658692
4 2.477992
5 3.451425
6 4.573407
7 5.837471
8 7.236344
9 8.762037
</code></pre>
<p>Thank you for any insights. The alternative is I set up my own ewmvar.</p>
|
<p>see the section on ewma changes here (a little ways down): <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#new-features" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#new-features</a></p>
<p>These were mostly bug fixes and inconsistency fixes. Any actual changes are explained there along with the rationale. I believe backwards compatibility was preserved where it was not a buggy case (IIRC, what you are showing was an incorrect calculation).</p>
|
python|pandas
| 1
|
8,917
| 28,101,317
|
How to self-reference column in pandas Data Frame?
|
<p>In Python's Pandas, I am using the Data Frame as such:</p>
<pre><code>drinks = pandas.read_csv(data_url)
</code></pre>
<p>Where data_url is a string URL to a CSV file</p>
<p>When indexing the frame for all "light drinkers" where light drinkers is constituted by 1 drink, the following is written:</p>
<pre><code>drinks.light_drinker[drinks.light_drinker == 1]
</code></pre>
<p>Is there a more DRY-like way to self-reference the "parent"? I.e. something like:</p>
<pre><code>drinks.light_drinker[self == 1]
</code></pre>
|
<p>You can now use <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.query.html#pandas-dataframe-query" rel="noreferrer">query</a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="noreferrer">assign</a> depending on what you need:</p>
<pre><code>drinks.query('light_drinker == 1')
</code></pre>
<p>or to mutate the df:</p>
<pre><code>df.assign(strong_drinker = lambda x: x.light_drinker + 100)
</code></pre>
<p><strong>Old answer</strong></p>
<p>Not at the moment, but an enhancement with your ideas is being discussed <a href="https://github.com/pydata/pandas/pull/9239" rel="noreferrer">here</a>. For simple cases <code>where</code> might be enough. The new API might look like this:</p>
<pre><code>df.set(new_column=lambda self: self.light_drinker*2)
</code></pre>
|
python|pandas|scipy
| 8
|
8,918
| 73,355,840
|
Where does pandas_datareader get its data from?
|
<p>I was working on a Python program that calculates stock indicators. I used the get_data_yahoo function to get most of the info. But after some research, I found out that the Yahoo Finance API had been discontinued for quite some time. So now I'm just curious about how pdr gets this info, since most of the info I get from it seems fairly accurate.</p>
|
<p>What does 'calculates stock indicators' mean? Maybe this will help.</p>
<pre><code>import pandas_datareader as web
import pandas as pd
df = web.DataReader('AAPL', data_source='yahoo', start='2011-01-01', end='2021-01-12')
df.head()
import yfinance as yf
aapl = yf.Ticker("AAPL")
aapl
# get stock info
aapl.info
# get historical market data
hist = aapl.history(period="max")
# show actions (dividends, splits)
aapl.actions
# show dividends
aapl.dividends
# show splits
aapl.splits
# show financials
aapl.financials
aapl.quarterly_financials
# show major holders
aapl.major_holders
# show institutional holders
aapl.institutional_holders
# show balance sheet
aapl.balance_sheet
aapl.quarterly_balance_sheet
# show cashflow
aapl.cashflow
aapl.quarterly_cashflow
# show earnings
aapl.earnings
aapl.quarterly_earnings
# show sustainability
aapl.sustainability
# show analysts recommendations
aapl.recommendations
# show next event (earnings, etc)
aapl.calendar
# show ISIN code - *experimental*
# ISIN = International Securities Identification Number
aapl.isin
# show options expirations
aapl.options
# get option chain for specific expiration
opt = aapl.option_chain('YYYY-MM-DD')
</code></pre>
|
python-3.x|pandas
| 0
|
8,919
| 73,458,161
|
Sum of the values in specific rows in dataframe
|
<p>I have a dataframe called 'test' like this:</p>
<pre><code>import pandas as pd
from collections import Counter
from nltk import ngrams
data = [['john tom hello text shine bright', 10], ['random text hello text shine bright', 15], ['random text hello text shine bright juli', 14],
['random text hello great shine bright', 15], ['random text hello great shine bright juli', 14]]
df = pd.DataFrame(data, columns=['Text', 'Value'])
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Text</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>john tom hello text shine bright</td>
<td>34</td>
</tr>
<tr>
<td>random text hello text shine bright</td>
<td>42</td>
</tr>
<tr>
<td>random text hello text shine bright juli</td>
<td>42</td>
</tr>
<tr>
<td>random text hello great shine bright</td>
<td>42</td>
</tr>
<tr>
<td>random text hello great shine bright juli</td>
<td>42</td>
</tr>
</tbody>
</table>
</div>
<p>I have the following code, which looks for the most common phrases of 4 words:</p>
<pre><code>vals_df_1 = [y for x in df['Text'] for y in x.split()]
c_fourgrams = Counter([' '.join(y) for x in [4] for y in ngrams(vals_df_1, x)])
df_1_fourgrams = pd.DataFrame({'ngrams': list(c_fourgrams.keys()),
'count': list(c_fourgrams.values())})
df_1_fourgrams = df_1_fourgrams.sort_values('count', ascending=False)
df_1_fourgrams = df_1_fourgrams.head()
df_1_fourgrams
</code></pre>
<p>Then the dataframe 'df_1_fourgrams' looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ngrams</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>hello text shine bright</td>
<td>3</td>
</tr>
<tr>
<td>shine bright random text</td>
<td>3</td>
</tr>
<tr>
<td>bright random text hello</td>
<td>3</td>
</tr>
<tr>
<td>random text hello great</td>
<td>2</td>
</tr>
<tr>
<td>text shine bright random</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>What I am missing is, for each phrase, the sum of the Value column. If a phrase is found in 5 rows, then I need to sum all of the values from the Value column in those 5 rows.</p>
<p>The resulting dataframe would look something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ngrams</th>
<th>count</th>
<th>Value sum</th>
</tr>
</thead>
<tbody>
<tr>
<td>hello text shine bright</td>
<td>3</td>
<td>118</td>
</tr>
<tr>
<td>shine bright random text</td>
<td>3</td>
<td>118</td>
</tr>
<tr>
<td>bright random text hello</td>
<td>3</td>
<td>118</td>
</tr>
<tr>
<td>random text hello great</td>
<td>2</td>
<td>84</td>
</tr>
<tr>
<td>text shine bright random</td>
<td>2</td>
<td>84</td>
</tr>
</tbody>
</table>
</div>
<p>Is this possible? How could I do this?</p>
|
<p>You can achieve everything with pandas from the original dataframe:</p>
<pre><code>out = (df
       .assign(ngrams=[[' '.join(x) for x in ngrams(s.split(), 4)]
                       for s in df['Text']])
       .explode('ngrams')
       .groupby('ngrams')['Value']
       .agg(['count', 'sum'])
       .sort_values('count', ascending=False)
       .head(5)
       )
</code></pre>
<p>output:</p>
<pre><code> count sum
ngrams
hello text shine bright 3 39
hello great shine bright 2 29
random text hello great 2 29
random text hello text 2 29
text hello great shine 2 29
</code></pre>
|
python|pandas|dataframe
| 0
|
8,920
| 34,997,018
|
How to get time/freq from FFT in Python
|
<p>I've got a little problem managing FFT data. I was looking for many examples of how to do FFT, but I couldn't get what I want from any of them. I have a random wave file with 44kHz sample rate and I want to get magnitude of N harmonics each X ms, let's say 100ms should be enough. I tried this code:</p>
<pre><code>import scipy.io.wavfile as wavfile
import numpy as np
import pylab as pl
rate, data = wavfile.read("sound.wav")
t = np.arange(len(data[:,0]))*1.0/rate
p = 20*np.log10(np.abs(np.fft.rfft(data[:2048, 0])))
f = np.linspace(0, rate/2.0, len(p))
pl.plot(f, p)
pl.xlabel("Frequency(Hz)")
pl.ylabel("Power(dB)")
pl.show()
</code></pre>
<p>This was last example I used, I found it somewhere on stackoverflow. The problem is, this gets magnitude which I want, gets frequency, but no time at all. FFT analysis is 3D as far as I know and this is "merged" result of all harmonics. I get this:</p>
<p><a href="http://i.stack.imgur.com/mhPEG.png" rel="nofollow">X-axis = Frequency, Y-axis = Magnitude, Z-axis = Time (invisible)</a></p>
<p>From my understanding of the code, t is time - and it seems like that, but it is not actually used in the code - we'll maybe need it though. p is an array of powers (or magnitudes), but it seems to be some kind of average over all magnitudes of each frequency f, which is an array of frequencies. I don't want an averaged/merged value, I want the magnitude of N harmonics every X milliseconds.</p>
<p>Long story short, we can get: 1 magnitude per frequency over the whole signal.</p>
<p>We want: all magnitudes of N frequencies, including the time at which each magnitude occurs.</p>
<p>Result should look like this array: [time,frequency,amplitude]
So in the end if we want 3 harmonics, it would look like:</p>
<pre><code>[0,100,2.85489] #100Hz harmonic has 2.85489 amplitude on 0ms
[0,200,1.15695] #200Hz ...
[0,300,3.12215]
[100,100,1.22248] #100Hz harmonic has 1.22248 amplitude on 100ms
[100,200,1.58758]
[100,300,2.57578]
[200,100,5.16574]
[200,200,3.15267]
[200,300,0.89987]
</code></pre>
<p>Visualization is not needed, result should be just arrays (or hashes/dictionaries) as listed above.</p>
|
<p>Further to @Paul R's answer, <code>scipy.signal.spectrogram</code> is a <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html" rel="noreferrer">spectrogram function</a> in <a href="http://docs.scipy.org/doc/scipy/reference/signal.html" rel="noreferrer">scipy's signal processing module</a>.</p>
<p>The example at the above link is as follows:</p>
<pre><code>from scipy import signal
import numpy as np
import matplotlib.pyplot as plt
# Generate a test signal, a 2 Vrms sine wave whose frequency linearly
# changes with time from 1kHz to 2kHz, corrupted by 0.001 V**2/Hz of
# white noise sampled at 10 kHz.
fs = 10e3
N = 1e5
amp = 2 * np.sqrt(2)
noise_power = 0.001 * fs / 2
time = np.arange(N) / fs
freq = np.linspace(1e3, 2e3, N)
x = amp * np.sin(2*np.pi*freq*time)
x += np.random.normal(scale=np.sqrt(noise_power), size=time.shape)
#Compute and plot the spectrogram.
f, t, Sxx = signal.spectrogram(x, fs)
plt.pcolormesh(t, f, Sxx)
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/JOFTi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JOFTi.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib|scipy|fft
| 6
|
8,921
| 35,240,328
|
Pythonic way of calculating A x A' (without numpy)
|
<p>So A is a list of lists containing only 0's and 1's. What is the most pythonic (and also fairly fast) way of calculating A * A' without using numpy or scipy?</p>
<p>The numpy equivalent of above would be:</p>
<pre><code>def foo(a):
    return a * a.T
</code></pre>
|
<p>Being that your data is zeroes and ones, probably the best non-numpy solution is to use bitarrays:</p>
<pre><code>def dot_self(matrix):
    """ Multiply a 0-1 matrix by its transpose.
    Use bitarrays to possibly speed up calculations.
    """
    from bitarray import bitarray
    rows = tuple(bitarray(row) for row in matrix)
    return [[(r & c).count() for c in rows] for r in rows]
</code></pre>
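<p>A quick usage sketch (the 2x3 example matrix here is made up for illustration):</p>
<pre><code>A = [[1, 0, 1],
     [0, 1, 1]]
print(dot_self(A))   # [[2, 1], [1, 2]], i.e. the 2x2 result of A * A'
</code></pre>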
|
python|numpy|matrix|multidimensional-array|matrix-multiplication
| 2
|
8,922
| 67,557,724
|
Coalescing rows from boolean mask
|
<p>I have a 2D array and a boolean mask of the same size. I want to use the mask to coalesce consecutive rows in the 2D array: By coalesce I mean to reduce the rows by taking the first occurrence. An example:</p>
<pre><code>rows = np.r_['1,2,0', :6, :6]
mask = np.tile([1, 1, 0, 0, 1, 1], (2,1)).T.astype(bool)
</code></pre>
<p>Expected output:</p>
<pre><code>array([[0, 0],
       [2, 2],
       [3, 3],
       [4, 4]])
</code></pre>
<p>And to illustrate how the output might be obtained:</p>
<pre><code>array([[0, 0],           array([[0, 0],           array([[0, 0],
       [1, 1],                  [0, 0],                  [2, 2],
       [2, 2],  -> select ->    [2, 2],  -> reduce ->    [3, 3],
       [3, 3],                  [3, 3],                  [4, 4]])
       [4, 4],                  [4, 4],
       [5, 5]])                 [4, 4]])
</code></pre>
<p>What I have tried:</p>
<pre><code>rows[~mask].reshape(-1,2)
</code></pre>
<p>But this will only select the rows which should not be reduced.</p>
|
<h1>Upgraded answer</h1>
<p>I realized that my initial submission did a lot of unnecessary operations. Given the mask</p>
<pre><code>mask = [1,1,0,0,1,1,0,0,1,1,1,0]
</code></pre>
<p>You simply want to negate the leading ones:</p>
<pre><code>#negate:v v v
mask = [0,1,0,0,0,1,0,0,0,1,1,0]
</code></pre>
<p>then negate the mask to get your wanted rows. This way is MUCH more efficient than doing a forward fill on indices and removing repeated indices (see old answer). Revised solution:</p>
<pre><code>import numpy as np
rows = np.r_['1,2,0', :6, :6]
mask = np.tile([1, 1, 0, 0, 1, 1], (2,1)).T.astype(bool)
def maskforwardfill(a: np.ndarray, mask: np.ndarray):
    mask = mask.copy()
    mask[1:] = mask[1:] & mask[:-1]  # Negate leading True values
    mask[0] = False  # First element should always be False: either it is False anyway, or it is a leading True value (which should be set to False)
    return a[~mask]  # index out wanted rows
# Reduce mask's dimension since I assume that you only do complete rows
print(maskforwardfill(rows, mask.any(1)))
#[[0 0]
# [2 2]
# [3 3]
# [4 4]]
</code></pre>
<h1>Old answer</h1>
<p>Here I assume that you only need complete rows (like in @Arne's answer). My idea is that given the mask and the corresponding array indices</p>
<pre><code>mask = [1,1,0,0,1,1]
indices = [0,1,2,3,4,5]
</code></pre>
<p>you can use <code>np.diff</code> to first obtain</p>
<pre><code>indices = [0,-1,2,3,4,-1]
</code></pre>
<p>Then a forward fill (where <code>-1</code> acts as <code>nan</code>) on the indices such that you get</p>
<pre><code>[0,0,2,3,4,4]
</code></pre>
<p>of which can use <code>np.unique</code> to remove repeated indices:</p>
<pre><code>[0,2,3,4] # The rows indices you want
</code></pre>
<h2>Code:</h2>
<pre><code>import numpy as np
rows = np.r_['1,2,0', :6, :6]
mask = np.tile([1, 1, 0, 0, 1, 1], (2,1)).T.astype(bool)
def maskforwardfill(a: np.ndarray, mask: np.ndarray):
    mask = mask.copy()
    indices = np.arange(len(a))
    mask[np.diff(mask, prepend=[0]) == 1] = False  # set leading True to False
    indices[mask] = -1
    indices = np.maximum.accumulate(indices)  # forward fill indices
    indices = np.unique(indices)  # remove repeats
    return a[indices]  # index out wanted rows
# Reduce mask's dimension since I assume that you only do complete rows
print(maskforwardfill(rows, mask.any(1)))
#[[0 0]
# [2 2]
# [3 3]
# [4 4]]
</code></pre>
|
numpy
| 1
|
8,923
| 60,271,606
|
In pandas, how to convert a numeric type to category type to use with seaborn hue
|
<p>I am stuck on what seems like an easy problem trying to color the different groups on a scatterplot I am creating. I have the following example dataframe and graph:</p>
<pre><code>test_df = pd.DataFrame({ 'A' : 1.,
'B' : np.array([1, 5, 9, 7, 3], dtype='int32'),
'C' : np.array([6, 7, 8, 9, 3], dtype='int32'),
'D' : np.array([2, 2, 3, 4, 4], dtype='int32'),
'E' : pd.Categorical(["test","train","test","train","train"]),
'F' : 'foo' })
# fix to category
# test_df['D'] = test_df["D"].astype('category')
# and test plot
f, ax = plt.subplots(figsize=(6,6))
ax = sns.scatterplot(x="B", y="C", hue="D", s=100,
data=test_df)
</code></pre>
<p>which creates this graph:</p>
<p><a href="https://i.stack.imgur.com/AS98q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AS98q.png" alt="enter image description here"></a>
However, instead of a continuous scale, I'd like a categorical scale for each of the 3 categories [2, 3, 4]. After I uncomment the line of code <code>test_df['D'] = ...</code>, to change this column to a category column-type for category-coloring in the seaborn plot, I receive the following error from the seaborn plot: <code>TypeError: data type not understood</code></p>
<p>Does anybody know the correct way to convert this numeric column to a factor / categorical column to use for coloring? </p>
<p>Thanks!</p>
|
<p>I copy/pasted your code, added the library imports and uncommented the conversion line, as it looked right to me. I get a plot with 'categorical' colouring for the values [2, 3, 4] without changing any of your code.</p>
<p>Try updating your seaborn module using: <code>pip install --upgrade seaborn</code></p>
<p>Here is a list of working libraries used with your code.</p>
<pre><code>matplotlib==3.1.2
numpy==1.18.1
seaborn==0.10.0
pandas==0.25.3
</code></pre>
<p>... which executed below code.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
test_df = pd.DataFrame({ 'A' : 1.,
'B' : np.array([1, 5, 9, 7, 3], dtype='int32'),
'C' : np.array([6, 7, 8, 9, 3], dtype='int32'),
'D' : np.array([2, 2, 3, 4, 4], dtype='int32'),
'E' : pd.Categorical(["test","train","test","train","train"]),
'F' : 'foo' })
# fix to category
test_df['D'] = test_df["D"].astype('category')
# and test plot
f, ax = plt.subplots(figsize=(6,6))
ax = sns.scatterplot(x="B", y="C", hue="D", s=100,
data=test_df)
plt.show()
</code></pre>
|
python|pandas|seaborn
| 2
|
8,924
| 60,082,869
|
Is there a Python alternative for len that returns 1 for simple float
|
<p>Is there a way in Python to let the <code>len(x)</code> function (or any similar function) return 1 if <code>x</code> is a simple float?</p>
<p>In my case, I have <code>x</code> as a input parameter to a function, and I want to make it robust to (NumPy) array type inputs of <code>x</code> as well as simple scalar float input of <code>x</code>. In the function, the <code>len(x)</code> function is used, but it throws an error <code>object of type 'float' has no len()</code> if <code>x</code> is a float, whereas I want it to return 1. Of course I can write an if statement, but I feel like there should be a shorter solution.</p>
<pre><code>def myfunc(x):
    y = np.zeros((5, len(x)))
    y[1, :] = x
    ....
    return y
</code></pre>
|
<p>No, there is no built-in function like this.</p>
<p>One of the core aspects of how Python is designed is that it is <a href="https://stackoverflow.com/questions/11328920/is-python-strongly-typed">strongly typed</a>, meaning that values are not implicitly coerced from one type to another. For example, you cannot do <code>'foo' + 3</code> to make a string <code>'foo3'</code>; you have to write <code>'foo' + str(3)</code> to explicitly convert the <code>int</code> to <code>str</code> in order to use string concatenation. So having built-in operators or functions which could treat a scalar value as if it's a sequence of length 1 would violate the principle of strong typing.</p>
<p>This is in contrast with weakly typed languages like Javascript and PHP, where type coercion is done with the idea that the programmer doesn't have to think so much about data types and how they are converted; in practice, if you write in these languages then you still do have to think about types and conversions, you just have to also know which conversions are or aren't done implicitly.</p>
<p>So, in Python if you want a function to work with multiple different data types, then you either have to do a conversion explicitly (e.g. <code>if isinstance(x, float): x = np.array([x])</code>) or you have to only use operations which are supported by every data type your function accepts (i.e. <a href="https://en.wikipedia.org/wiki/Duck_typing" rel="nofollow noreferrer">duck typing</a>).</p>
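<p>A minimal sketch of the explicit-conversion approach applied to the function from the question, using <code>np.atleast_1d</code>, which wraps a scalar into a length-1 array and leaves arrays unchanged:</p>
<pre><code>import numpy as np

def myfunc(x):
    x = np.atleast_1d(x)      # scalar float -> array of length 1, arrays pass through untouched
    y = np.zeros((5, len(x)))
    y[1, :] = x
    return y

print(myfunc(3.0).shape)        # (5, 1)
print(myfunc([1.0, 2.0]).shape) # (5, 2)
</code></pre>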
|
python|numpy
| 0
|
8,925
| 65,414,216
|
How to append multiple matrices in python
|
<p>I have read the following related discussions
<a href="https://stackoverflow.com/questions/877479/whats-the-simplest-way-to-extend-a-numpy-array-in-2-dimensions">What's the simplest way to extend a numpy array in 2 dimensions?</a></p>
<p>However, if I want to expend multiple matrices, for example</p>
<pre><code>A = np.matrix([[1,2],[3,4]])
B = np.matrix([[3,4],[5,6]])
C = np.matrix([[7,8],[5,6]])
F = np.append(A,[[B]],0)
</code></pre>
<p>However, python says</p>
<blockquote>
<p>ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 4 dimension(s)</p>
</blockquote>
<p>I want to put B "under" the matrix A, and put C "under" the matrix B.<br />
So, F should be a 6X2 matrix.</p>
<p>How to do this? Thanks!</p>
|
<p>Try using numpy.concatenate (<a href="https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html</a>):</p>
<pre><code>A = np.matrix([[1,2],[3,4]])
B = np.matrix([[3,4],[5,6]])
C = np.matrix([[7,8],[5,6]])
# F = np.append(A,[[B]],0)
F = np.concatenate((A, B, C), axis=1)
</code></pre>
<p>Change the axis parameter to 0 to combine the matrices 'vertically':</p>
<pre><code>print(np.concatenate((A, B, C), axis=1))
</code></pre>
<blockquote>
<p>[[1 2 3 4 7 8]<br />
[3 4 5 6 5 6]]</p>
</blockquote>
<pre><code>print(np.concatenate((A, B, C), axis=0))
</code></pre>
<blockquote>
<p>[[1 2]<br />
[3 4]<br />
[3 4]<br />
[5 6]<br />
[7 8]<br />
[5 6]]</p>
</blockquote>
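<p>Since the goal is the 6x2 result, a small additional sketch: <code>np.vstack</code> stacks along axis 0 directly and is equivalent to the <code>axis=0</code> call above.</p>
<pre><code>F = np.vstack((A, B, C))   # same as np.concatenate((A, B, C), axis=0)
print(F.shape)             # (6, 2)
</code></pre>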
|
python|numpy|matrix|append
| 2
|
8,926
| 50,111,111
|
Python Pandas finding column value based on multiple column values in same data frame
|
<p>df:</p>
<pre><code>no fruit price city
1 apple 10 Pune
2 apple 20 Mumbai
3 orange 5 Nagpur
4 orange 7 Delhi
5 Mango 20 Bangalore
6 Mango 15 Chennai
</code></pre>
<p>Now I want to get city name where "fruit= orange and price =5"</p>
<pre><code>df.loc[(df['fruit'] == 'orange') & (df['price'] == 5) , 'city'].iloc[0]
</code></pre>
<p>is not working and gives the following error:</p>
<pre><code>IndexError: single positional indexer is out-of-bounds
</code></pre>
<p>Versions used: Python 3.5</p>
|
<p>You could create the masks step-wise and see how they look:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([{'city': 'Pune', 'fruit': 'apple', 'no': 1, 'price': 10},
                   {'city': 'Mumbai', 'fruit': 'apple', 'no': 2, 'price': 20},
                   {'city': 'Nagpur', 'fruit': 'orange', 'no': 3, 'price': 5},
                   {'city': 'Delhi', 'fruit': 'orange', 'no': 4, 'price': 7},
                   {'city': 'Bangalore', 'fruit': 'Mango', 'no': 5, 'price': 20},
                   {'city': 'Chennai', 'fruit': 'Mango', 'no': 6, 'price': 15}])
m1 = df['fruit'] == 'orange'
m2 = df['price'] == 5
df[m1&m2]['city'].values[0] # 'Nagpur'
</code></pre>
|
python|pandas
| 2
|
8,927
| 63,924,563
|
Intercepting CUDA calls
|
<p>I am trying to intercept cudaMemcpy calls from the pytorch library for analysis. I noticed NVIDIA has a cuHook example in the CUDA toolkit samples. However that example requires one to modify the source code of the application itself which I cannot do in this case. So is there a way to write a hook to intercept CUDA calls without modifying the application source code?</p>
|
<p>A CUDA runtime API call can be hooked (on linux) using the <a href="https://osterlund.xyz/posts/2018-03-12-interceptiong-functions-c.html" rel="nofollow noreferrer">"LD_PRELOAD trick"</a> if the application that is being run is dynamically linked to the CUDA runtime library (<code>libcudart.so</code>).</p>
<p>Here is a simple example on linux:</p>
<pre><code>$ cat mylib.cpp
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>
#include <cuda_runtime.h>
cudaError_t cudaMemcpy ( void* dst, const void* src, size_t count, cudaMemcpyKind kind )
{
cudaError_t (*lcudaMemcpy) ( void*, const void*, size_t, cudaMemcpyKind) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind ))dlsym(RTLD_NEXT, "cudaMemcpy");
printf("cudaMemcpy hooked\n");
return lcudaMemcpy( dst, src, count, kind );
}
cudaError_t cudaMemcpyAsync ( void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t str )
{
cudaError_t (*lcudaMemcpyAsync) ( void*, const void*, size_t, cudaMemcpyKind, cudaStream_t) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind, cudaStream_t ))dlsym(RTLD_NEXT, "cudaMemcpyAsync");
printf("cudaMemcpyAsync hooked\n");
return lcudaMemcpyAsync( dst, src, count, kind, str );
}
$ g++ -I/usr/local/cuda/include -fPIC -shared -o libmylib.so mylib.cpp -ldl -L/usr/local/cuda/lib64 -lcudart
$ cat t1.cu
#include <stdio.h>
int main(){
int a, *d_a;
cudaMalloc(&d_a, sizeof(d_a[0]));
cudaMemcpy(d_a, &a, sizeof(a), cudaMemcpyHostToDevice);
cudaStream_t str;
cudaStreamCreate(&str);
cudaMemcpyAsync(d_a, &a, sizeof(a), cudaMemcpyHostToDevice);
cudaMemcpyAsync(d_a, &a, sizeof(a), cudaMemcpyHostToDevice, str);
cudaDeviceSynchronize();
}
$ nvcc -o t1 t1.cu -cudart shared
$ LD_LIBRARY_PATH=/usr/local/cuda/lib64 LD_PRELOAD=./libmylib.so cuda-memcheck ./t1
========= CUDA-MEMCHECK
cudaMemcpy hooked
cudaMemcpyAsync hooked
cudaMemcpyAsync hooked
========= ERROR SUMMARY: 0 errors
$
</code></pre>
<p>(CentOS 7, CUDA 10.2)</p>
<p>A simple test with pytorch seems to indicate that it works:</p>
<pre><code>$ docker run --gpus all -it nvcr.io/nvidia/pytorch:20.08-py3
...
Status: Downloaded newer image for nvcr.io/nvidia/pytorch:20.08-py3
=============
== PyTorch ==
=============
NVIDIA Release 20.08 (build 15516749)
PyTorch Version 1.7.0a0+8deb4fe
Container image Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
Copyright (c) 2014-2020 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.
NOTE: MOFED driver for multi-node communication was not detected.
Multi-node communication performance may be reduced.
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for PyTorch. NVIDIA recommends the use of the following flags:
nvidia-docker run --ipc=host ...
...
root@946934df529b:/workspace# cat mylib.cpp
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>
#include <cuda_runtime.h>
cudaError_t cudaMemcpy ( void* dst, const void* src, size_t count, cudaMemcpyKind kind )
{
cudaError_t (*lcudaMemcpy) ( void*, const void*, size_t, cudaMemcpyKind) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind ))dlsym(RTLD_NEXT, "cudaMemcpy");
printf("cudaMemcpy hooked\n");
return lcudaMemcpy( dst, src, count, kind );
}
cudaError_t cudaMemcpyAsync ( void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t str )
{
cudaError_t (*lcudaMemcpyAsync) ( void*, const void*, size_t, cudaMemcpyKind, cudaStream_t) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind, cudaStream_t ))dlsym(RTLD_NEXT, "cudaMemcpyAsync");
printf("cudaMemcpyAsync hooked\n");
return lcudaMemcpyAsync( dst, src, count, kind, str );
}
root@946934df529b:/workspace# g++ -I/usr/local/cuda/include -fPIC -shared -o libmylib.so mylib.cpp -ldl -L/usr/local/cuda/lib64 -lcudart
root@946934df529b:/workspace# cat tt.py
import torch
device = torch.cuda.current_device()
x = torch.randn(1024, 1024).to(device)
y = torch.randn(1024, 1024).to(device)
z = torch.matmul(x, y)
root@946934df529b:/workspace# LD_LIBRARY_PATH=/usr/local/cuda/lib64 LD_PRELOAD=./libmylib.so python tt.py
cudaMemcpyAsync hooked
cudaMemcpyAsync hooked
root@946934df529b:/workspace#
</code></pre>
<p>(using NVIDIA <a href="https://ngc.nvidia.com/catalog/containers/nvidia:pytorch" rel="nofollow noreferrer">NGC PyTorch container</a> )</p>
|
c++|cuda|pytorch|interceptor
| 4
|
8,928
| 47,029,099
|
pandas argsort how to leave nan as nan?
|
<p>Suppose <code>sr</code> is a <code>pandas.Series</code>, then unlike <code>sr.mean()</code> or <code>sr.std()</code> which skips <code>nan</code> <em>and</em> leaves them intact in the output, <code>sr.argsort()</code> will use <code>-1</code> to indicate where <code>nan</code> are present. But I don't want this conversion. I simply want <code>argsort</code> to work exactly like <code>mean</code> or <code>std</code> i.e. does not change <code>nan</code> values to <code>-1</code>. Unfortunately <code>argsort</code> doesn't have a <code>skipna</code> parameter. What can I do?</p>
<p>PS I know I can replace <code>-1</code> with <code>nan</code> but this is a bit clumsy.
<hr>
Example:</p>
<pre><code>sr = pd.Series(data=[2,0.5,99,np.nan])
sr
Out[61]:
0 2.0
1 0.5
2 99.0
3 NaN
dtype: float64
expected_sort = sr.argsort().replace(-1, np.nan)
expected_sort
Out[63]:
0 1.0
1 0.0
2 2.0
3 NaN
dtype: float64
</code></pre>
|
<p>Your current approach isn't bad...</p>
<p>Here are some alternatives</p>
<p><strong>Alt 1</strong> </p>
<pre><code>sr.argsort().mask(sr.isnull())
0 1.0
1 0.0
2 2.0
3 NaN
dtype: float64
</code></pre>
<hr>
<p><strong>Alt 2</strong> </p>
<pre><code>sr.dropna().argsort().reindex_like(sr)
0 1.0
1 0.0
2 2.0
3 NaN
dtype: float64
</code></pre>
<hr>
<p><strong>Alt 3</strong><br>
<em>AKA overkill</em> </p>
<pre><code>pd.Series(
np.where(
np.isnan(sr.values),
np.nan, sr.values.argsort()
), sr.index)
0 1.0
1 0.0
2 2.0
3 NaN
dtype: float64
</code></pre>
|
python|pandas|dataframe|nan
| 3
|
8,929
| 46,671,438
|
tensorflow.string_input_producer 'hello world'
|
<p>I can't get the following string_input_producer-<code>hello world</code> program to run:</p>
<pre><code>import tensorflow as tf
filename = tf.placeholder(dtype=tf.string, name='filename')
f_q = tf.train.string_input_producer(filename, num_epochs=1, shuffle=False)
filename_tf = f_q.dequeue()
with tf.Session() as S:
S.run(tf.local_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
print(S.run(filename_tf, feed_dict={filename: "hello world"}))
coord.request_stop()
coord.join(threads)
</code></pre>
<p>Seems simple enough, but tf tells me in an error message that I need to pass a string value to placeholder 'filename' (which I do). Does anyone see what I'm doing wrong here? Thanks</p>
<blockquote>
<p>Why does it say paper jam, when there is no paper jam!</p>
</blockquote>
|
<p>This can work.</p>
<pre><code>import tensorflow as tf
filename = ['hello world']
f_q = tf.train.string_input_producer(filename, num_epochs=1, shuffle=False)
filename_tf = f_q.dequeue()
with tf.Session() as S:
S.run(tf.local_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
print(S.run(filename_tf))
coord.request_stop()
coord.join(threads)
</code></pre>
<p>This works because <code>tf.train.string_input_producer</code> returns a <a href="https://www.tensorflow.org/programmers_guide/threading_and_queues" rel="nofollow noreferrer">queue</a> that is filled by background queue-runner threads; those enqueue calls never see your <code>feed_dict</code>, so the producer needs real string values (not a placeholder) to enqueue, and it then dequeues them in order.</p>
|
python|tensorflow
| 0
|
8,930
| 46,760,812
|
Use loss in the keras model function
|
<p>I am trying to build a very simple model with Keras using the Model function, like below, where the inputs and output of the Model function are [img, labels] and the loss.
I am confused about why this code is not working; is it because the output cannot be the loss? How does the Model function work, and when should we use it? Thanks.</p>
<pre><code>sess = tf.Session()
K.set_session(sess)
K.set_learning_phase(1)
img = Input((784,),name='img')
labels = Input((10,),name='labels')
# img = tf.placeholder(tf.float32, shape=(None, 784))
# labels = tf.placeholder(tf.float32, shape=(None, 10))
x = Dense(128, activation='relu')(img)
x = Dropout(0.5)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
preds = Dense(10, activation='softmax')(x)
from keras.losses import binary_crossentropy
#loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
loss = binary_crossentropy(labels, preds)
print(type(loss))
model = Model([img,labels], loss, name='squeezenet')
model.summary()
</code></pre>
|
<p>As @yu-yang pointed out, the loss is specified with <code>compile()</code>.
If you think about it, this makes sense: the real output of your model is its prediction, not the loss; the loss is only used to train the model.</p>
<p>A working example of your network:</p>
<pre><code>import keras
from keras.optimizers import Adam
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras.losses import categorical_crossentropy
img = Input((784,),name='img')
x = Dense(128, activation='relu')(img)
x = Dropout(0.5)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
preds = Dense(10, activation='softmax')(x)
model = Model(inputs=img, outputs=preds, name='squeezenet')
model.compile(optimizer=Adam(),
loss=categorical_crossentropy,
metrics=['acc'])
model.summary()
</code></pre>
<p>Output:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
img (InputLayer) (None, 784) 0
_________________________________________________________________
dense_32 (Dense) (None, 128) 100480
_________________________________________________________________
dropout_21 (Dropout) (None, 128) 0
_________________________________________________________________
dense_33 (Dense) (None, 128) 16512
_________________________________________________________________
dropout_22 (Dropout) (None, 128) 0
_________________________________________________________________
dense_34 (Dense) (None, 10) 1290
=================================================================
Total params: 118,282
Trainable params: 118,282
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>With MNIST dataset:</p>
<pre><code>from keras.datasets import mnist
from keras.utils import to_categorical
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784)
y_train = to_categorical(y_train, num_classes=10)
x_test = x_test.reshape(-1, 784)
y_test = to_categorical(y_test, num_classes=10)
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
</code></pre>
<p>Output:</p>
<pre><code>Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 4s - loss: 12.2797 - acc: 0.2360 - val_loss: 11.0902 - val_acc: 0.3116
Epoch 2/10
60000/60000 [==============================] - 4s - loss: 10.4161 - acc: 0.3527 - val_loss: 8.7122 - val_acc: 0.4589
Epoch 3/10
60000/60000 [==============================] - 4s - loss: 9.5797 - acc: 0.4051 - val_loss: 8.9226 - val_acc: 0.4460
Epoch 4/10
60000/60000 [==============================] - 4s - loss: 9.2017 - acc: 0.4285 - val_loss: 8.0564 - val_acc: 0.4998
Epoch 5/10
60000/60000 [==============================] - 4s - loss: 8.8558 - acc: 0.4501 - val_loss: 8.0878 - val_acc: 0.4980
Epoch 6/10
60000/60000 [==============================] - 5s - loss: 8.8239 - acc: 0.4521 - val_loss: 8.2495 - val_acc: 0.4880
Epoch 7/10
60000/60000 [==============================] - 4s - loss: 8.7842 - acc: 0.4547 - val_loss: 7.7146 - val_acc: 0.5211
Epoch 8/10
60000/60000 [==============================] - 4s - loss: 8.7395 - acc: 0.4575 - val_loss: 7.7944 - val_acc: 0.5163
Epoch 9/10
60000/60000 [==============================] - 5s - loss: 8.7109 - acc: 0.4593 - val_loss: 7.8235 - val_acc: 0.5145
Epoch 10/10
60000/60000 [==============================] - 4s - loss: 8.4927 - acc: 0.4729 - val_loss: 7.5933 - val_acc: 0.5288
</code></pre>
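<p>If you really do want the loss to depend on both the <code>img</code> and <code>labels</code> inputs (as in your original snippet), Keras also supports that via <code>model.add_loss</code>: the prediction stays the model output and the loss tensor is attached separately. A rough sketch (untested here, following the pattern used in the Keras VAE example):</p>
<pre><code>import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras.losses import categorical_crossentropy

img = Input((784,), name='img')
labels = Input((10,), name='labels')
x = Dense(128, activation='relu')(img)
x = Dropout(0.5)(x)
preds = Dense(10, activation='softmax')(x)

model = Model([img, labels], preds)
# attach a loss tensor that uses the labels input directly
model.add_loss(tf.reduce_mean(categorical_crossentropy(labels, preds)))
model.compile(optimizer='adam', loss=None)  # no loss= needed, add_loss supplies it
</code></pre>
<p>Training would then feed both arrays as inputs (for example <code>model.fit([x_train, y_train], epochs=10)</code>), since the labels are now a model input rather than a target.</p>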
|
tensorflow|keras
| 3
|
8,931
| 46,731,506
|
Remove exact rows and frequency of rows of a data.frame that are in another data.frame in python 3
|
<p>Consider the following two data.frames created using pandas in python 3:</p>
<pre><code>a1 = pd.DataFrame(({'A': [1, 2, 3, 4, 5, 2, 4, 2], 'B': ['a', 'b', 'c', 'd', 'e', 'b', 'd', 'b']}))
a2 = pd.DataFrame(({'A': [1, 2, 3, 2], 'B': ['a', 'b', 'c', 'b']}))
</code></pre>
<p>I would like to remove the exact rows of a1 that are in a2 so that the result should be:</p>
<pre><code>A B
4 d
5 e
4 d
2 b
</code></pre>
<p>Note that one row with 2 b in a1 is retained in the final result (essentially only one of them gets canceled with the one in a2). Is there any built-in function in pandas or any other library in python 3 to get this result?</p>
|
<p>Let's use <code>groupby</code> with <code>cumcount</code> to number repeated rows, so duplicates can be matched one-to-one:</p>
<pre><code>a1['count'] = a1.groupby(['A','B']).cumcount()
a2['count'] = a2.groupby(['A','B']).cumcount()
</code></pre>
<p><strong>Option 1</strong> - merge and query </p>
<pre><code>df = (pd.merge(a1,a2, indicator=True, how='left')
.query("_merge != 'both'")
.drop(['_merge','count'], 1))
</code></pre>
<p><strong>Option 2</strong> - with the index difference after merging, i.e.</p>
<pre><code>i = a1.index.difference(a1.merge(a2,on=['A','B','count']).index)
df = a1.loc[i].drop('count',1)
</code></pre>
<p><strong>Option 3</strong> - Completing @John Zwinck's approach</p>
<pre><code>df = pd.DataFrame(pd.Index(a1).difference(pd.Index(a2)).tolist(), columns=a2.columns).drop(['count'], 1)
</code></pre>
<p>Output : </p>
<pre>
A B
3 4 d
4 5 e
6 4 d
7 2 b
</pre>
|
python|python-3.x|pandas|dataframe
| 1
|
8,932
| 46,721,982
|
How can I increase the maximum query time?
|
<p>I ran a query which will eventually return roughly 17M rows in chunks of 500,000. Everything seemed to be going just fine, but I ran into the following error:</p>
<pre><code>Traceback (most recent call last):
File "sql_csv.py", line 22, in <module>
for chunk in pd.read_sql_query(hours_query, db.conn, chunksize = 500000):
File "/Users/michael.chirico/anaconda2/lib/python2.7/site-packages/pandas/io/sql.py", line 1424, in _query_iterator
data = cursor.fetchmany(chunksize)
File "/Users/michael.chirico/anaconda2/lib/python2.7/site-packages/jaydebeapi/\__init__.py", line 546, in fetchmany
row = self.fetchone()
File "/Users/michael.chirico/anaconda2/lib/python2.7/site-packages/jaydebeapi/\__init__.py", line 526, in fetchone
if not self._rs.next(): jpype._jexception.SQLExceptionPyRaisable: java.sql.SQLException: Query failed (#20171013_015410_01255_8pff8):
**Query exceeded maximum time limit of 60.00m**
</code></pre>
<p>Obviously such a query can be expected to take some time; I'm fine with this (and chunking means I know I won't be breaking any RAM limitations -- in fact the file output I was running shows the query finished 16M of the 17M rows before crashing!).</p>
<p>But I don't see any direct options for <code>read_sql_query</code>. <code>params</code> seems like a decent candidate, but I can't see in the <code>jaydebeapi</code> documentation any hint of what the right parameter to give to <code>execute</code> might be.</p>
<p>How can I overcome this and run my full query?</p>
|
<p>When executing queries, Presto restricts each query by CPU, memory, execution time and other constraints. You hit the execution time limit. Please make sure your query is sound, otherwise you can overload the cluster.</p>
<p>To increase the query execution time, set a larger value (anything above the current 60-minute limit) via <a href="https://prestodb.io/docs/current/sql/set-session.html" rel="nofollow noreferrer">session variables</a>, for example:</p>
<pre><code>SET SESSION query_max_execution_time = '8h';
</code></pre>
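<p>From the Python side this can usually be issued on the same connection before the big query runs (a sketch, assuming the Presto JDBC driver behind <code>jaydebeapi</code> accepts <code>SET SESSION</code> statements; <code>process_chunk</code> is a hypothetical placeholder for your own per-chunk handling):</p>
<pre><code>cur = db.conn.cursor()
cur.execute("SET SESSION query_max_execution_time = '8h'")  # anything above the 60m limit
cur.close()

for chunk in pd.read_sql_query(hours_query, db.conn, chunksize=500000):
    process_chunk(chunk)  # hypothetical placeholder
</code></pre>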
|
python|python-2.7|pandas|presto|jaydebeapi
| 2
|
8,933
| 32,855,650
|
generic scoring module using python pandas
|
<p>Hi, I'm trying to develop a generic scoring module for grading students based on a variety of attributes, using python pandas.</p>
<p>Input: a data frame with student ID, UG Major and the attributes to score (I called it df_input), plus a reference data frame that contains the scoring params (df_ref).</p>
<p>Process: based on the variable type, calculate a score for each attribute.</p>
<p>Output: the input data frame with added columns that capture each attribute score.</p>
<p>Example:</p>
<p>df_input</p>
<pre><code>+------------+-----------+----+------------+-----+------+
| STUDENT_ID | UG_MAJOR | C1 | C2 | C3 | C4 |
+------------+-----------+----+------------+-----+------+
| 123 | MATH | A | 8000-10000 | 12% | 9000 |
| 234 | ALL_OTHER | B | 1500-2000 | 10% | 1500 |
| 345 | ALL_OTHER | A | 2800-3000 | 8% | 2300 |
| 456 | ALL_OTHER | A | 8000-10000 | 12% | 3200 |
| 980 | ALL_OTHER | C | 1000-2500 | 15% | 2700 |
+------------+-----------+----+------------+-----+------+
</code></pre>
<p>df_ref</p>
<pre><code>+---------+---------+---------+
| REF_COL | REF_VAL | REF_SCR |
+---------+---------+---------+
| C1 | A | 10 |
| C1 | B | 20 |
| C1 | C | 30 |
| C1 | NULL | 0 |
| C1 | MISSING | 0 |
| C1 | A | 20 |
| C1 | B | 30 |
| C1 | C | 40 |
| C1 | NULL | 10 |
| C1 | MISSING | 10 |
| C2 | <1000 | 0 |
| C2 | >1000 | 20 |
| C2 | >7000 | 30 |
| C2 | >9500 | 40 |
| C2 | MISSING | 0 |
| C2 | NULL | 0 |
| C3 | <3% | 5 |
| C3 | >3% | 10 |
| C3 | >5% | 100 |
| C3 | >7% | 200 |
| C3 | >10% | 300 |
| C3 | NULL | 0 |
| C3 | MISSING | 0 |
| C4 | <5000 | 10 |
| C4 | >5000 | 20 |
| C4 | >10000 | 30 |
| C4 | >15000 | 40 |
+---------+---------+---------+
</code></pre>
<p>Req. Output:</p>
<pre><code>+------------+-----------+----+------------+-----+------+--------+--------+--------+---------+
| STUDENT_ID | UG_MAJOR | C1 | C2 | C3 | C4 | C1_SCR | C2_SCR | C3_SCR | TOT_SCR |
| 123 | MATH | A | 8000-10000 | 12% | 9000 | | | | |
| 234 | ALL_OTHER | B | 1500-2000 | 10% | 1500 | | | | |
| 345 | ALL_OTHER | A | 2800-3000 | 8% | 2300 | | | | |
| 456 | ALL_OTHER | A | 8000-10000 | 12% | 3200 | | | | |
| 980 | ALL_OTHER | C | 1000-2500 | 15% | 2700 | | | | |
+------------+-----------+----+------------+-----+------+--------+--------+--------+---------+
</code></pre>
<p>I want to see if something like a generic function can be developed to accomplish this.</p>
<p>Thank you
Pari</p>
|
<p>If I understand the question correctly, you are trying to store a collection of rules in <code>df_ref</code> that are to be applied to <code>df_input</code> to generate scores. While this certainly can be done, you should make sure that your rules are well defined. This would also guide you in writing the corresponding scoring function.</p>
<p>For instance, suppose one of the students gets a value of <code>10000</code> in column <code>C3</code>. <code>10000</code> is larger than <code>1000</code>, <code>7000</code> and <code>9500</code>. This means that the score is ambiguous. Suppose you want to choose the highest of all scores from this particular column. Then, you need another table specifying the choice rule for each column when multiple scores are selected.</p>
<p>Second, you should think about the type of Python value stored in the 'REF_VAL' column. If <code>>7000</code> is a string, you have to do extra work to determine the score. Consider storing this as <code>7000</code> instead and specifying the comparison operator elsewhere.</p>
<p>Finally, looking at your current rules, there seems to be a pattern. Each score is associated with <code>NULL</code>, <code>MISSING</code> or a range cutoff. This can be captured as follows:</p>
<pre><code>import pandas as pd
import numpy as np
from itertools import dropwhile
# stores values and scores for special values and cutoff values
sample_range_rule = {
'MISSING' : 0,
'NULL' : 0,
'VALS' : [
(0, 0),
(10, 50),
(70, 75),
(90, 100),
(100, 100)
]
}
# takes a dict with rules and produces a scoring function
def getScoringFunction(range_rule):
def score(val):
if val == 'MISSING':
return range_rule['MISSING']
elif val == 'NULL':
return range_rule['NULL']
else:
return dropwhile(lambda (cutoff, score): cutoff < val,
range_rule['VALS']).next()[1]
return score
sample_scoring_function = getScoringFunction(sample_range_rule)
for test_value in ['MISSING', 'NULL', 0, 12, 55, 66, 99]:
print 'Input', test_value,
print 'Output', sample_scoring_function(test_value)
</code></pre>
<p>After you have a dict specifying a rule for every column, you can do the following:</p>
<p><code>df['Ck_SCR'] = df['Ck'].apply(getScoringFunction(Ck_dct))</code></p>
<p>Converting a pandas DataFrame with two columns to a dict of this form should not be too difficult.</p>
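<p>A rough sketch of that conversion (my own addition, assuming <code>df_ref</code> from the question and that values such as <code>>7000</code> or percentage strings should become numeric cutoffs):</p>
<pre><code>def build_range_rule(ref_rows):
    # ref_rows: the slice of df_ref for a single REF_COL
    rule = {'VALS': []}
    for _, row in ref_rows.iterrows():
        val, scr = row['REF_VAL'], row['REF_SCR']
        if val in ('NULL', 'MISSING'):
            rule[val] = scr
        else:
            # strip comparison signs and percent signs, keep the numeric cutoff
            cutoff = float(str(val).lstrip('<>').rstrip('%'))
            rule['VALS'].append((cutoff, scr))
    rule['VALS'].sort()
    return rule

rules = {col: build_range_rule(grp) for col, grp in df_ref.groupby('REF_COL')}
</code></pre>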
|
python|pandas|scoring
| 2
|
8,934
| 63,189,074
|
Doubts about the functioning of a tf/keras net
|
<p>I'm studying quantile regression and I have some problem to understand how works the net below:</p>
<pre><code> z = tf.keras.layers.Input((len(features),), name="Patient")
x = tf.keras.layers.Dense(100, activation="relu", name="d1")(z)
x = tf.keras.layers.Dense(100, activation="relu", name="d2")(x)
p1 = tf.keras.layers.Dense(3, activation="relu", name="p1")(x)
p2 = tf.keras.layers.Dense(3, activation="relu", name="p2")(x)
preds = tf.keras.layers.Lambda(lambda x: x[0] + tf.cumsum(x[1], axis=1),
name="preds")([p1, p2])
model = tf.keras.Model(z, preds)
</code></pre>
<p>In particular, I don't understand the last layer "preds" when Lambda is used: I know how the lambda function works in python, but I don't understand how the Lambda layer combines the input tensor. I hope that someone can help me to understand this.
The net is largely used in a lot of notebooks in Kaggle competition "osic pulmonary fibrosis progression"</p>
|
<blockquote>
<p>In the line:</p>
<pre><code>preds = tf.keras.layers.Lambda(lambda x: x[0] + tf.cumsum(x[1], axis=1),
name="preds")([p1, p2])
</code></pre>
<p>The input to Lambda layer is <code>[p1, p2]</code>. So <code>x = [p1, p2]</code>. Thus,
<code>x[0] = p1</code>, <code>x[1] = p2</code>.</p>
<p>Thus the operation it does is <code>x[0] + tf.cumsum(x[1], axis=1)</code> i.e. <code>p1 + tf.cumsum(p2, axis=1)</code>. You could
refer <a href="https://www.tensorflow.org/api_docs/python/tf/math/cumsum" rel="nofollow noreferrer">here</a>, about the <code>tf.cumsum()</code> function.</p>
</blockquote>
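<p>A tiny numeric illustration (my own example, run eagerly in TF 2.x) of what that Lambda computes for a single patient:</p>
<pre><code>import tensorflow as tf

p1 = tf.constant([[10.0, 20.0, 30.0]])   # stands in for the "p1" head
p2 = tf.constant([[1.0, 2.0, 3.0]])      # stands in for the "p2" head

# tf.cumsum along axis=1 turns [1, 2, 3] into [1, 3, 6],
# so preds = p1 + cumsum(p2) = [[11., 23., 36.]]
preds = p1 + tf.cumsum(p2, axis=1)
print(preds.numpy())
</code></pre>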
<p>Hope this clears your doubts. If you have any more, feel free to ask in comments.</p>
|
python|tensorflow|keras|regression|data-science
| 1
|
8,935
| 63,195,178
|
In pandas dataframe, Need to split columns and add them back to other rows
|
<p>I have a <code>STATUS</code> column in data frame which I am getting the counts using <code>value_count</code> function</p>
<pre><code>df.STATUS.value_counts(sort=True)
</code></pre>
<p>Output:</p>
<pre><code>Verified 171
ErrTab; 9
WarKeyWord; 4
ErrTab; and WarKeyWord; 10
</code></pre>
<p>so now I want to break the last row and add the values to previous counts.</p>
<p>Expected:</p>
<pre><code>Verified 171
ErrTab; 19
WarKeyWord; 14
</code></pre>
<p>What would be the easiest way to do this? any ideas?</p>
|
<p>To keep the source DataFrame reasonably short, I defined it as:</p>
<pre><code> STATUS Amount
0 Verified 1
1 Verified 2
2 Verified 3
3 ErrTab; 1
4 ErrTab; 2
5 ErrTab; 3
6 ErrTab; 4
7 ErrTab; 5
8 ErrTab; 6
9 ErrTab; 7
10 ErrTab; 8
11 ErrTab; 9
12 WarKeyWord; 1
13 WarKeyWord; 2
14 WarKeyWord; 3
15 WarKeyWord; 4
16 ErrTab; and WarKeyWord; 1
17 ErrTab; and WarKeyWord; 2
18 ErrTab; and WarKeyWord; 3
</code></pre>
<p>(3, 9, 4 and 3 items with each <em>STATUS</em>).</p>
<p>Then, to get your expected result, run:</p>
<pre><code>df.STATUS.str.split(' and ').explode().value_counts(sort=True)
</code></pre>
<p>The result is:</p>
<pre><code>ErrTab; 12
WarKeyWord; 7
Verified 3
Name: STATUS, dtype: int64
</code></pre>
<p>Due to the different number of occurrences of each <em>STATUS</em>,
the result ordering is different (a side effect of my source data).</p>
|
python|pandas|numpy
| 0
|
8,936
| 67,704,903
|
Calculating average dataframe filtered by two columns
|
<p>I have this Dataframe</p>
<pre><code> Unnamed: 0 Datetime HomeTeam AwayTeam Ball PossessionMatch_H Ball PossessionMatch_A
0 0 2021-05-24 02:30:00 U. De Chile Everton 68 32
1 1 2021-05-23 21:00:00 Huachipato Colo Colo 48 52
2 2 2021-05-23 18:30:00 Melipilla Antofagasta 47 53
3 3 2021-05-23 02:30:00 U. Espanola U. Catolica 37 63
4 4 2021-05-23 00:00:00 S. Wanderers O'Higgins 29 71
... ... ... ... ... ... ...
57 57 2021-03-28 15:45:00 Palestino Antofagasta 58 42
58 58 2021-03-28 01:00:00 U. Espanola S. Wanderers 50 50
59 59 2021-03-27 22:30:00 Colo Colo Union La Calera 58 42
60 60 2021-03-27 20:00:00 Everton O'Higgins 54 46
61 61 2021-03-27 15:00:00 Curico Unido Melipilla 41 59
</code></pre>
<p>I want to split it in multiple dataframes and apply two criteria in "HomeTeam" and "AwayTeam", then calculate the average of Ball Possession and put it in a new column "Ball PossessionMatch_H/MP" if the team is in "HomeTeam" and "Ball PossessionMatch_A/MP" if the team is in "AwayTeam"</p>
<p>Code:</p>
<pre><code>teams = pd.unique(df[['HomeTeam', 'AwayTeam']].values.ravel('K'))
df_list3 = []
for team in teams:
if team in df['HomeTeam']:
current_df = df[(df.HomeTeam == team) | (df.AwayTeam == team)].copy()
current_df["Ball PossessionMatch_H/MP"] = current_df.apply(
lambda x: (
current_df[(current_df.HomeTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_H"].sum()
+ current_df[(current_df.AwayTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_A"].sum()
)
/ (
current_df[(current_df.HomeTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_H"].size
+ current_df[(current_df.AwayTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_A"].size
),
axis=1,
)
else:
current_df = df[(df.HomeTeam == team) | (df.AwayTeam == team)].copy()
current_df["Ball PossessionMatch_A/MP"] = current_df.apply(
lambda x: (
current_df[(current_df.HomeTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_H"].sum()
+ current_df[(current_df.AwayTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_A"].sum()
)
/ (
current_df[(current_df.HomeTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_H"].size
+ current_df[(current_df.AwayTeam == team)].loc[x.name + 1 :, "Ball PossessionMatch_A"].size
),
axis=1,
)
df_list3.append(current_df)
df = pd.concat(df_list3)
print(df)
</code></pre>
<p>It does not respect the <code>if</code>: it creates only the "Ball PossessionMatch_A/MP" column with calculated values and duplicates the rows. I would like to <code>append</code> the results to the original dataframe.</p>
<p>Expected Result:</p>
<pre><code> Unnamed: 0 Datetime HomeTeam AwayTeam Ball PossessionMatch_H Ball PossessionMatch_A Ball PossessionMatch_H/MP Ball PossessionMatch_A/MP
0 0 2021-05-24 02:30:00 U. De Chile Everton 68 32 50.33 43.5
</code></pre>
|
<p>How about the following?</p>
<pre><code>hometeam_count = df.groupby("HomeTeam")["Ball PossessionMatch_H"].count()
hometeam_sum = df.groupby("HomeTeam")["Ball PossessionMatch_H"].sum()
awayteam_count = df.groupby("AwayTeam")["Ball PossessionMatch_A"].count()
awayteam_sum = df.groupby("AwayTeam")["Ball PossessionMatch_A"].sum()
df["Ball PossessionMatch_H/MP"] = df["HomeTeam"].apply(lambda x: ((hometeam_sum.loc[x] if x in hometeam_sum.index else 0) + (awayteam_sum.loc[x] if x in awayteam_sum.index else 0)) / ((hometeam_count.loc[x] if x in hometeam_count.index else 0) + (awayteam_count.loc[x] if x in awayteam_count.index else 0)))
df["Ball PossessionMatch_A/MP"] = df["AwayTeam"].apply(lambda x: ((hometeam_sum.loc[x] if x in hometeam_sum.index else 0) + (awayteam_sum.loc[x] if x in awayteam_sum.index else 0)) / ((hometeam_count.loc[x] if x in hometeam_count.index else 0) + (awayteam_count.loc[x] if x in awayteam_count.index else 0)))
</code></pre>
<p>The code above utilises the <code>groupby</code> function to calculate the <code>sum</code> and the <code>count</code> for each team and for both <code>Ball PossessionMatch_A</code> and <code>Ball PossessionMatch_H</code> separately.<br />
The second part of the code looks up (<code>loc</code>) the corresponding <code>sum</code> and <code>count</code> values for each <code>HomeTeam</code> and <code>AwayTeam</code> respectively, and performs the average calculation as intended.</p>
<p>As you can tell, the code for <code>HomeTeam</code> and <code>AwayTeam</code> is pretty much identical, so it could also be cleaned up so that we <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">don't repeat ourselves</a>; I presented it this way purely for readability (a sketch of that clean-up follows below).</p>
<p>Although this does not solve the problem (as to why the values are not filled in the columns that you have created), I am providing an alternative solution to the question, which I think is more intuitive and more readable.</p>
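<p>For completeness, a possible DRY version (a sketch of my own, reusing the four groupby Series defined above; <code>avg_possession</code> is just an illustrative name):</p>
<pre><code>totals = hometeam_sum.add(awayteam_sum, fill_value=0)
counts = hometeam_count.add(awayteam_count, fill_value=0)
avg_possession = totals / counts   # one Series of average possession, indexed by team name

df["Ball PossessionMatch_H/MP"] = df["HomeTeam"].map(avg_possession)
df["Ball PossessionMatch_A/MP"] = df["AwayTeam"].map(avg_possession)
</code></pre>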
|
python|pandas|dataframe
| 1
|
8,937
| 67,807,675
|
Split a list from Dataframe column into specific column name
|
<p>I have a question regarding splitting a list in a dataframe column into multiple columns. But every value that is splitted needs to be placed in a specific column.</p>
<p>Let's say I have this Dataframe:</p>
<pre><code>date data
2020-01-01 00:00:00 [G07, G08, G10, G16]
2020-01-01 00:00:01 [G07, G08, G16]
2020-01-01 00:00:02 [G08, G10, G16, G20, G21]
2020-01-01 00:00:03 [G16, G20, G21, G26, G27, R02]
2020-01-01 00:00:04 [G07, G08, G26, G27]
</code></pre>
<p>And I'm looking for this kind of result:</p>
<pre><code>date G07 G08 G10 G16 G20 G21 G26 G27 R02
2020-01-01 00:00:00 G07 G08 G10 G16 NaN NaN NaN NaN NaN
2020-01-01 00:00:01 G07 G08 NaN G16 NaN NaN NaN NaN NaN
2020-01-01 00:00:02 NaN G08 G10 G16 G20 G21 NaN NaN NaN
2020-01-01 00:00:03 NaN NaN NaN G16 G20 G21 G26 G27 R02
2020-01-01 00:00:04 G07 G08 NaN NaN NaN NaN G26 G27 NaN
</code></pre>
<p>To finally get this kind of matrix:</p>
<pre><code>date G07 G08 G10 G16 G20 G21 G26 G27 R02
2020-01-01 00:00:00 1 1 1 1 0 0 0 0 0
2020-01-01 00:00:01 1 1 0 1 0 0 0 0 0
2020-01-01 00:00:02 0 1 1 1 1 1 0 0 0
2020-01-01 00:00:03 0 0 0 1 1 1 1 1 1
2020-01-01 00:00:04 1 1 0 0 0 0 1 1 0
</code></pre>
<p>By doing this type of command :</p>
<pre><code>In [1] pd.DataFrame(self.df['data'].to_list())
Out [1] date 1 2 3 4 5 6
2020-01-01 00:00:00 G07 G08 G10 G16
2020-01-01 00:00:01 G07 G08 G16
2020-01-01 00:00:02 G08 G10 G16 G20 G21
2020-01-01 00:00:03 G16 G20 G21 G26 G27 R02
2020-01-01 00:00:04 G07 G08 G26 G27
</code></pre>
<p>I'm only allowed to split the list into other columns. But I cannot find a way to place each value into a specific column.</p>
<p>I've been thinking of making loops over each values of each dates but it is very slow and I have datasets that are more than 1,000,000 rows.</p>
|
<p>You can use <code>MultiLabelBinarizer</code> from <code>sklearn</code> to one-hot encode the lists in one go:</p>
<pre><code>from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
s = pd.DataFrame(mlb.fit_transform(df['data']),columns=mlb.classes_, index=df.index)
df = df.join(s)
</code></pre>
|
python|pandas|dataframe
| 4
|
8,938
| 31,690,870
|
Pandas Groupby: Summarizing Activity Log
|
<p>I'm trying to parse an activity log that I have simplified below.</p>
<pre><code>df = pd.DataFrame({'Job_Id':[1,1,1,2,2,2],
'Activity': ['issued', 'assigned', 'complete', 'issued', 'assigned', 'complete'],
'Timestamp': ['2015-07-23 19:02:36', '2015-07-23 19:57:47', '2015-07-23 20:35:22','2015-07-23 18:10:11','2015-07-23 19:00:47', '2015-07-23 19:01:36']})
</code></pre>
<p>Looks like this...</p>
<pre><code> Activity Job_Id Timestamp
0 issued 1 2015-07-23 19:02:36
1 assigned 1 2015-07-23 19:57:47
2 complete 1 2015-07-23 20:35:22
3 issued 2 2015-07-23 18:10:11
4 assigned 2 2015-07-23 19:00:47
5 complete 2 2015-07-23 19:01:36
</code></pre>
<p>I would like to summarize each job into a single row like below...</p>
<pre><code>Job_Id Issued Assigned Complete
1 2015-07-23 19:02:36 2015-07-23 19:57:47 2015-07-23 20:35:22
2 2015-07-23 18:10:11 2015-07-23 19:00:47 2015-07-23 19:01:36
</code></pre>
<p>I've used groupby in the past but can't seem to get this to work. I would really appreciate some help or suggestions in how to transform this activity log into the format that I've highlighted. This groupby statement shows the "issued" timestamp but doesn't give me what I need. </p>
<pre><code>grouped = df.groupby(['Job_Id']).agg(lambda x: np.array(x[x['Activity'] == 'issued']['Timestamp'])[0])
</code></pre>
|
<p>It is a perfect use case for <code>pivot_table</code>:</p>
<pre><code>df.pivot_table(columns=['Activity'],values=['Timestamp'],index=['Job_Id'], aggfunc=lambda x : x)
</code></pre>
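<p>If the identity-lambda <code>aggfunc</code> complains on a newer pandas version (it relies on each Job_Id/Activity pair having exactly one timestamp), <code>aggfunc='first'</code> should give the same result here:</p>
<pre><code>df.pivot_table(index='Job_Id', columns='Activity', values='Timestamp', aggfunc='first')
</code></pre>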
|
python-2.7|pandas
| 1
|
8,939
| 31,888,871
|
Pandas - replacing column values
|
<p>I know there are a number of topics on this question, but none of the methods worked for me so I'm posting about my specific situation</p>
<p>I have a dataframe that looks like this:</p>
<pre><code>data = pd.DataFrame([[1,0],[0,1],[1,0],[0,1]], columns=["sex", "split"])
data['sex'].replace(0, 'Female')
data['sex'].replace(1, 'Male')
data
</code></pre>
<p>What I want to do is replace all 0's in the sex column with 'Female', and all 1's with 'Male', but the values within the dataframe don't seem to change when I use the code above</p>
<p>Am I using replace() incorrectly? Or is there a better way to do conditional replacement of values?</p>
|
<p>Yes, you are using it incorrectly: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html"><code>Series.replace()</code></a> is not an in-place operation by default; it returns the replaced dataframe/series, so you need to assign it back to your DataFrame/Series for the effect to stick. Alternatively, if you need to do it in place, specify the <code>inplace</code> keyword argument as <code>True</code>. Example -</p>
<pre><code>data['sex'].replace(0, 'Female',inplace=True)
data['sex'].replace(1, 'Male',inplace=True)
</code></pre>
<p>Also, you can combine the above into a single <code>replace</code> function call by using <code>list</code> for both <code>to_replace</code> argument as well as <code>value</code> argument , Example -</p>
<pre><code>data['sex'].replace([0,1],['Female','Male'],inplace=True)
</code></pre>
<p>Example/Demo -</p>
<pre><code>In [10]: data = pd.DataFrame([[1,0],[0,1],[1,0],[0,1]], columns=["sex", "split"])
In [11]: data['sex'].replace([0,1],['Female','Male'],inplace=True)
In [12]: data
Out[12]:
sex split
0 Male 0
1 Female 1
2 Male 0
3 Female 1
</code></pre>
<hr>
<p>You can also use a dictionary, Example -</p>
<pre><code>In [15]: data = pd.DataFrame([[1,0],[0,1],[1,0],[0,1]], columns=["sex", "split"])
In [16]: data['sex'].replace({0:'Female',1:'Male'},inplace=True)
In [17]: data
Out[17]:
sex split
0 Male 0
1 Female 1
2 Male 0
3 Female 1
</code></pre>
|
python|pandas
| 66
|
8,940
| 31,819,737
|
Numpy put array in Nth dimension
|
<p>I often end up trying to take a bunch of arrays and putting them in different dimensions as below,</p>
<pre><code>x = x.reshape((x.size, 1, 1))
y = y.reshape((1, y.size, 1))
z = z.reshape((1, 1, z.size))
return x + y + z
</code></pre>
<p>I have two problems, I would like to do something like,</p>
<pre><code>x = x.todim(0)
y = y.todim(1)
z = z.todim(2)
</code></pre>
<p>And achieve the same as above.</p>
<p>Also, I would like to do "tensor products" with different operators and have it be lazy evaluated because doing what I am doing often exploded the memory usage. But I do have a good reason for doing these kinds of things ... my insanity is justifiable.</p>
<p>EDIT:</p>
<p>Here is code I wrote to do it, but a built in would be nice is such exists</p>
<pre><code>def todim(a, ndims, axis=0):
nshape = [a.shape[i-axis]
if i >= axis and (i-axis) < len(a.shape)
else 1
for i in range(ndims)]
return a.reshape(tuple(nshape))
</code></pre>
|
<p>First, you are doing something similar to <code>np.ix_</code>.</p>
<pre><code>In [899]: x,y,z=np.ix_(np.arange(3),np.arange(4),np.arange(5))
In [900]: x.shape,y.shape,z.shape
Out[900]: ((3, 1, 1), (1, 4, 1), (1, 1, 5))
</code></pre>
<p><code>numpy.lib.index_tricks.py</code> has this and other indexing functions and classes.</p>
<p>A function like your <code>todim</code> could be:</p>
<pre><code>def todim(x, n, i):
    ind = [1]*n
    ind[i] = x.shape[0]
    return x.reshape(ind)
</code></pre>
<p>I'm not trying to make it a array method. A standalone function is easier. I also need to define the <code>n</code>, the target number of dimensions. <code>np.ix_</code> does something like this.</p>
<hr>
<p>The <code>todim</code> that you added (while I wrote my answer) is similar, but lets <code>x</code> be something other than 1d.</p>
<p><code>np.r_</code> takes an initial string argument that might allow a similar specification.</p>
<pre><code>x,y,z = np.r_['0,3,0',np.arange(3)], np.r_['0,3,1',np.arange(4)], np.r_['0,3,2',np.arange(5)]
</code></pre>
<p>produces the same 3 arrays as my initial <code>ix_</code>. It takes a string input, but you could easily insert numbers:</p>
<pre><code>np.r_['0,%s,%s'%(3,1), np.arange(4)]
</code></pre>
|
python|numpy|multidimensional-array
| 1
|
8,941
| 41,664,280
|
Understanding Numpy Multi-dimensional Array Indexing
|
<p>Please, can anyone explain the difference between these three indexing operations: </p>
<pre><code>y = np.arange(35).reshape(5,7)
# Operation 1
y[np.array([0,2,4]),1:3]
# Operation 2
y[np.array([0,2,4]), np.array([[1,2]])]
# Operation 3
y[np.array([0,2,4]), np.array([[1],[2]])]
</code></pre>
<p>What I don't get is:</p>
<ul>
<li>why Operation 2 is not working while Operation 1 is working fine?</li>
<li>why Operation 3 is working, but returning the transpose of what I expect (that is, the result of Operation 1)?</li>
</ul>
<p>According to the numpy reference:</p>
<blockquote>
<p>If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised.</p>
</blockquote>
<p>Ok, so this means I cannot do:</p>
<pre><code>y[np.array([0,2,4]), np.array([1,2])]
</code></pre>
<p>But numpy reference also says about Operation 1:</p>
<blockquote>
<p>In effect, the slice is converted to an index array np.array([[1,2]])
(shape (1,2)) that is broadcast with the index array to produce a
resultant array of shape (3,2).</p>
</blockquote>
<p>So why can not I do:</p>
<pre><code>y[np.array([0,2,4]), np.array([[1,2]])]
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (1,2)</p>
</blockquote>
|
<pre><code>In [1]: import numpy as np; y = np.arange(35).reshape(5,7)
</code></pre>
<h2>Operation 1</h2>
<pre><code>In [2]: y[np.array([0,2,4]), 1:3]
Out[2]:
array([[ 1, 2],
[15, 16],
[29, 30]])
</code></pre>
<p>Here we have a mix of advanced indexing (with the array) and basic indexing (with the slice) with only one advanced index. According to the <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow noreferrer">reference</a> </p>
<blockquote>
<p>[a] single advanced index can for example replace a slice and the result
array will be the same [...]</p>
</blockquote>
<p>And that is true as the following code shows:</p>
<pre><code>In [3]: y[::2, 1:3]
Out[3]:
array([[ 1, 2],
[15, 16],
[29, 30]])
</code></pre>
<p>The only difference between <code>Out[2]</code> and <code>Out[3]</code> is that the former is a copy of the data in <code>y</code> (advanced indexing always produces a copy) while the latter is a view sharing the same memory with <code>y</code> (basic indexing only always produces a view).</p>
<p>So with operation 1 we have selected the rows via <code>np.array([0,2,4])</code> and the columns via <code>1:3</code>.</p>
<h2>Operation 2</h2>
<pre><code>In [4]: y[np.array([0,2,4]), np.array([[1,2]])]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-4-bf9ee1361144> in <module>()
----> 1 y[np.array([0,2,4]), np.array([[1,2]])]
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (1,2)
</code></pre>
<p>This fails and to understand why we first have to realize that the very nature of indexing in this example is fundamentally different from operation 1. Now we have advanced indexing only (and more than a single advanced index). This means that the indexing arrays must have the same shape or at least shapes that are compatible for <a href="https://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html#general-broadcasting-rules" rel="nofollow noreferrer">broadcasting</a>. Lets have a look at the shapes.</p>
<pre><code>In [5]: np.array([0,2,4]).shape
Out[5]: (3,)
In [6]: np.array([[1,2]]).shape
Out[6]: (1, 2)
</code></pre>
<p>This means that the broadcasting mechanism will try to combine those two arrays:</p>
<pre><code>np.array([0,2,4]) (1d array): 3
np.array([[1,2]]) (2d array): 1 x 2
Result (2d array): 1 x F
</code></pre>
<p>The <code>F</code> at the end of the last line indicates that the shapes are not compatible. This is the reason for the <code>IndexError</code> in operation 2.</p>
<h2>Operation 3</h2>
<pre><code>In [7]: y[np.array([0,2,4]), np.array([[1],[2]])]
Out[7]:
array([[ 1, 15, 29],
[ 2, 16, 30]])
</code></pre>
<p>Again, we have advanced indexing only. Lets see if the shapes are compatible now:</p>
<pre><code>In [8]: np.array([0,2,4]).shape
Out[8]: (3,)
In [9]: np.array([[1],[2]]).shape
Out[9]: (2, 1)
</code></pre>
<p>This means that broadcasting will work like this:</p>
<pre><code>np.array([0,2,4]) (1d array): 3
np.array([[1],[2]]) (2d array): 2 x 1
Result (2d array): 2 x 3
</code></pre>
<p>So now broadcasting works! And since our indexing arrays are broadcast to a 2x3 array, this will also be the shape of the result. So it also explains the shape of the result which is different than that of operation 1.</p>
<p>To get a result of shape 3x2 as in operation 1, we can do</p>
<pre><code>In [10]: y[np.array([[0],[2],[4]]), np.array([1, 2])]
Out[10]:
array([[ 1, 2],
[15, 16],
[29, 30]])
</code></pre>
<p>Now the broadcasting mechanism works like this:</p>
<pre><code>np.array([[0],[2],[4]]) (2d array): 3 x 1
np.array([1, 2]) (1d array): 2
Result (2d array): 3 x 2
</code></pre>
<p>giving a 3x2 array. Instead of <code>np.array([1, 2])</code> also </p>
<pre><code>In [11]: y[np.array([[0],[2],[4]]), np.array([[1, 2]])]
Out[11]:
array([[ 1, 2],
[15, 16],
[29, 30]])
</code></pre>
<p>would work because of</p>
<pre><code>np.array([[0],[2],[4]]) (2d array): 3 x 1
np.array([[1, 2]]) (2d array): 1 x 2
Result (2d array): 3 x 2
</code></pre>
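<p>As a closing note (standard NumPy, not covered above): <code>np.ix_</code> builds these broadcastable index arrays for you, which is handy when you want the operation-1 style selection (rows 0, 2, 4 by columns 1, 2) with pure advanced indexing:</p>
<pre><code>In [12]: y[np.ix_([0, 2, 4], [1, 2])]
Out[12]:
array([[ 1,  2],
       [15, 16],
       [29, 30]])
</code></pre>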
|
python|numpy|indexing|array-broadcasting
| 3
|
8,942
| 41,476,436
|
Pandas transform() vs apply()
|
<p>I don't understand why <code>apply</code> and <code>transform</code> return different dtypes when called on the same data frame. The way I explained the two functions to myself before went something along the lines of "<code>apply</code> collapses the data, and <code>transform</code> does exactly the same thing as <code>apply</code> but preserves the original index and doesn't collapse." Consider the following.</p>
<pre><code>df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
'cat': [1,1,0,0,1,0,0,0,0,1]})
</code></pre>
<p>Let's identify those <code>id</code>s which have a nonzero entry in the <code>cat</code> column.</p>
<pre><code>>>> df.groupby('id')['cat'].apply(lambda x: (x == 1).any())
id
1 True
2 True
3 False
4 True
Name: cat, dtype: bool
</code></pre>
<p>Great. If we wanted to create an indicator column, however, we could do the following.</p>
<pre><code>>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 0
8 0
9 1
Name: cat, dtype: int64
</code></pre>
<p>I don't understand why the dtype is now <code>int64</code> instead of the boolean returned by the <code>any()</code> function.</p>
<p>When I change the original data frame to contain some booleans (note that the zeros remain), the transform approach returns booleans in an <code>object</code> column. This is an extra mystery to me since all of the values are boolean, but it's listed as <code>object</code> apparently to match the <code>dtype</code> of the original mixed-type column of integers and booleans.</p>
<pre><code>df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
'cat': [True,True,0,0,True,0,0,0,0,True]})
>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
Name: cat, dtype: object
</code></pre>
<p>However, when I use all booleans, the transform function returns a boolean column.</p>
<pre><code>df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
'cat': [True,True,False,False,True,False,False,False,False,True]})
>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
Name: cat, dtype: bool
</code></pre>
<p>Using my acute pattern-recognition skills, it appears that the <code>dtype</code> of the resulting column mirrors that of the original column. I would appreciate any hints about why this occurs or what's going on under the hood in the <code>transform</code> function. Cheers.</p>
|
<p>It looks like <code>SeriesGroupBy.transform()</code> tries to cast the result dtype to the same one as the original column has, but <code>DataFrameGroupBy.transform()</code> doesn't seem to do that:</p>
<pre><code>In [139]: df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
Out[139]:
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 0
8 0
9 1
Name: cat, dtype: int64
# v v
In [140]: df.groupby('id')[['cat']].transform(lambda x: (x == 1).any())
Out[140]:
cat
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
In [141]: df.dtypes
Out[141]:
cat int64
id int64
dtype: object
</code></pre>
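<p>So if you need the boolean dtype back from the Series version, the simplest workaround is probably to cast the result afterwards:</p>
<pre><code>df.groupby('id')['cat'].transform(lambda x: (x == 1).any()).astype(bool)
</code></pre>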
|
python|pandas|transform|apply
| 11
|
8,943
| 27,630,715
|
Doing a scipy.optimize.root runs out of memory, what are good alternatives?
|
<p>So I'm trying to use <code>scipy.optimize.root</code> but I'm running out of memory, the reason being there isn't enough memory to calculate the jacobian.</p>
<p>I was wondering what alternative I might be able to use given my memory constraint?, or is there a way to circumvent it somehow?</p>
<p>My input size is ~400,000 and the output is similar, meaning the jacobian is 400,000^2 which is a killer...</p>
<p>Please let me know if you have any suggestions to make the question clearer.</p>
<p><strong>UPDATE</strong></p>
<p>I think I've figured out a way to calculate the jacobian at any given point efficiently. The documentation of <code>scipy.optimize.root</code> states the following:</p>
<blockquote>
<p>If jac is a Boolean and <strong>is True, fun is assumed to return the value of Jacobian along with the objective function</strong>. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun.</p>
</blockquote>
<p>I'm guessing from the highlighted point what it means is if <code>fun(x)</code> is my funcion it gives something like:</p>
<pre><code>f, jac = fun(x)
</code></pre>
<p>where f = f(x) and jac = jacobian(x).</p>
<p>right?</p>
|
<p>Try using <code>method='krylov'</code>.</p>
<p>See also <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#root-finding-for-large-problems" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#root-finding-for-large-problems</a></p>
<p>If you know the jacobian, you can write your function so that <code>f, jac = fun(x)</code> and give the <code>jac=True</code> option to <code>root</code>. However, given that several of the methods do not support sparse jacobians, this may not help you.</p>
<p>The next best thing is then to use the sparse Jacobian as a preconditioner for the Krylov method.</p>
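<p>A minimal call sketch (where <code>fun</code> and <code>x0</code> stand in for your residual function and initial guess, both of length ~400,000):</p>
<pre><code>from scipy.optimize import root

# 'krylov' approximates Jacobian-vector products instead of forming the
# full 400,000 x 400,000 Jacobian, so memory use stays manageable.
sol = root(fun, x0, method='krylov', options={'disp': True})
print(sol.success, sol.message)
</code></pre>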
|
python|numpy|scipy
| 1
|
8,944
| 68,865,728
|
Difference between a = Loss and a = Loss()
|
<p>I'm curious what the difference between the following lines of code are:</p>
<pre><code>a = torch.nn.BCELoss
</code></pre>
<p>and</p>
<pre><code>b = torch.nn.BCELoss()
</code></pre>
<p>I find it very interesting, that both ways work for PyTorch's BCE Loss. However, if I try to do this with a custom function, I can not do this:</p>
<pre><code>b2 = my_func()
</code></pre>
<p>This throws an error about the missing arguments. Only</p>
<pre><code>a2 = my_func
</code></pre>
<p>works. My function is defined as follows:</p>
<pre><code>def my_func(m,n):
return m+n
</code></pre>
<p>Also, why does</p>
<pre><code>b(x,y)
</code></pre>
<p>work perfectly fine but</p>
<pre><code>a(x,y)
</code></pre>
<p>Throw a error? (RuntimeError: Boolean value of Tensor with more than one value is ambiguous).</p>
|
<p>Classes are first-class objects (no pun intended) in Python; you can treat them as values like any other value. <code>torch.nn.BCELoss</code> is just an expression that evaluates to a class reference. Classes are callable, so <code>torch.nn.BCELoss()</code> is a <em>call</em> to that class.</p>
<p>You could also have written</p>
<pre><code>x = torch.nn.BCELoss
b = x()
</code></pre>
<p>and it would work the same. <code>x</code> and <code>torch.nn.BCELoss</code> both evaluate to the same <code>class</code> object.</p>
<p>Similarly,</p>
<pre><code>a2 = my_func
</code></pre>
<p>only makes <code>a2</code> another name for the function referred to by <code>my_func</code>. (Functions are also first-class objects.)</p>
<p><code>b(x,y)</code> is the same as <code>torch.nn.BCELoss()(x,y)</code>, while <code>a(x,y)</code> is the same as <code>torch.nn.BCELoss(x,y)</code>. The former calls an <em>instance</em> of the class; the latter the class itself.</p>
|
python|function|pytorch
| 0
|
8,945
| 68,517,398
|
Excessive CPU RAM being used by Pytorch even inside .cuda() mode
|
<p>I am having issue with excessive CPU RAM usage with <a href="https://github.com/promach/gdas" rel="nofollow noreferrer">this coding</a> even inside <em>.cuda()</em> mode</p>
<p>Could anyone advise ?</p>
|
<p>Problem is now <a href="https://github.com/promach/gdas/commit/9f1e0d24e077d094ebe21f71b6a1d3b5227b8e8b" rel="nofollow noreferrer">solved using this github commit</a></p>
|
python|pytorch
| -1
|
8,946
| 65,854,475
|
Perceptron on multi-dimensional tensor
|
<p>I'm trying to use Perceptron to reduce a tensor of size: <code>[1, 24, 768]</code> to another tensor with size of <code>[1, 1, 768]</code>. The only way I could use was to first reshape the input tensor to <code>[1, 1, 24*768]</code> and then pass it through linear layers. I'm wondering if there's a more elegant way of this transformation --other than using RNNs cause I do not want to use that. Are there other methods generally for the transformation that I want to make? Here is my code for doing the above operation:</p>
<pre><code>lin = nn.Linear(24*768, 768)
# x is in shape of [1, 24, 768]
# out is in shape of [1, 1, 768]
x = x.view(1,1,-1)
out = lin(x)
</code></pre>
|
<p>If the broadcasting is what's bothering you, you could use a <a href="https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html" rel="nofollow noreferrer"><code>nn.Flatten</code></a> to do it:</p>
<pre><code>>>> m = nn.Sequential(
... nn.Flatten(),
... nn.Linear(24*768, 768))
>>> x = torch.rand(1, 24, 768)
>>> m(x).shape
torch.Size([1, 768])
</code></pre>
<p>If you really want the extra dimension you can unsqueeze the tensor on <code>axis=1</code>:</p>
<pre><code>>>> m(x).unsqueeze(1).shape
torch.Size([1, 1, 768])
</code></pre>
|
python|pytorch|perceptron
| 0
|
8,947
| 65,653,192
|
IndexError: index 2047 is out of bounds for axis 0 with size 1638
|
<p>I want to train my dataset with training data and validation data.
Total data is <em>2048</em>, train data is <em>1638</em>, and validation data is <em>410</em> (20% of total).</p>
<p>Here are my codes</p>
<ol>
<li><p>loading data (org: total training data)</p>
<pre><code> org_x = train_csv.drop(['id', 'digit', 'letter'], axis=1).values
org_x = org_x.reshape(-1, 28, 28, 1)
org_x = org_x/255
org_x = np.array(org_x)
org_x = org_x.reshape(-1, 1, 28, 28)
org_x = torch.Tensor(org_x)
x_test = test_csv.drop(['id','letter'], axis=1).values
x_test = x_test.reshape(-1, 28, 28, 1)
x_test = x_test/255
x_test = np.array(x_test)
x_test = x_test.reshape(-1, 1, 28, 28)
x_test = torch.Tensor(x_test)
y = train_csv['digit']
y = list(y)
print(len(y))
org_y = np.zeros([len(y), 1])
for i in range(len(y)):
org_y[i] = y[i]
org1 = np.array(org_y, dtype=object)
</code></pre>
</li>
<li><p>splitting data (org: total training data)</p>
<pre><code> from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(
org, org1, test_size=0.2, random_state=42)
</code></pre>
</li>
<li><p>transform</p>
<pre><code> transform = transforms.Compose([transforms.ToPILImage(),
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, )) ])
</code></pre>
</li>
<li><p>dataset</p>
<pre><code> class kmnistDataset(data.Dataset):
def __init__(self, images, labels=None, transforms=None):
self.x = images
self.y = labels
self.transforms = transforms
def __len__(self):
return (len(self.x))
def __getitem__(self, idx):
data = np.asarray(self.x[idx][0:]).astype(np.uint8)
if self.transforms:
data = self.transforms(data)
if self.y is not None:
return (data, self.y[i])
else:
return data
train_data = kmnistDataset(x_train, y_train, transform)
valid_data = kmnistDataset(x_valid, y_valid, transform)
train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=16, shuffle = False)
</code></pre>
</li>
</ol>
<p>I'll skip the model structure.</p>
<ol start="5">
<li><p>training(Here, I got the error message)</p>
<pre><code>n_epochs = 30
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
train_loss = 0
valid_loss = 0
###################
# train the model #
###################
model.train()
for data in train_loader:
inputs, labels = data[0], data[1]
optimizer.zero_grad()
output = model(inputs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()*data.size(0)
#####################
# validate the model#
#####################
model.eval()
for data in valid_loader:
inputs, labels = data[0], data[1]
output = model(inputs)
loss = criterion(output, labels)
valid_loss += loss.item()*data.size(0)
train_loss = train_loss/ len(train_loader.dataset)
valid_loss = valid_loss / len(valid_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
</code></pre>
</li>
</ol>
<p>Although I checked the data size, I got the error message below.</p>
<blockquote>
<p>index 2047 is out of bounds for axis 0 with size 1638</p>
</blockquote>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-42-b8783819421f> in <module>
11 ###################
12 model.train()
---> 13 for data in train_loader:
14 inputs, labels = data[0], data[1]
15 optimizer.zero_grad()
/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self,
possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-38-e5c87dd8a7ff> in __getitem__(self, idx)
17
18 if self.y is not None:
---> 19 return (data, self.y[i])
20 else:
21 return data
IndexError: index 2047 is out of bounds for axis 0 with size 1638
</code></pre>
<p>Can you explain why and how to solve it?</p>
|
<p>At first glance, you are using incorrect shapes: <code>org_x = org_x.reshape(-1, 28, 28, 1)</code>. The channel axis should be the second one (unlike in <em>TensorFlow</em>), i.e. <code>(batch_size, channels, height, width)</code>:</p>
<pre><code>org_x = org_x.reshape(-1, 1, 28, 28)
</code></pre>
<p>Same with <code>x_test</code></p>
<pre><code>x_test = x_test.reshape(-1, 1, 28, 28)
</code></pre>
<hr />
<p>Also, you are accessing a list out of bounds. Inside <code>__getitem__</code> you index <code>self.y</code> with <code>i</code>, which is not the item index but the leftover global from your earlier <code>for i in range(len(y))</code> loop (hence the constant index 2047). It seems to me you should be returning <code>(data, self.y[idx])</code> instead.</p>
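<p>For reference, a minimal sketch of the corrected <code>__getitem__</code> (everything else in the class left exactly as you posted it):</p>
<pre><code>def __getitem__(self, idx):
    data = np.asarray(self.x[idx][0:]).astype(np.uint8)
    if self.transforms:
        data = self.transforms(data)
    if self.y is not None:
        return (data, self.y[idx])  # index with idx, not the leftover global i
    return data
</code></pre>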
|
deep-learning|pytorch|conv-neural-network
| 1
|
8,948
| 63,532,429
|
Missing data after scraping
|
<p>I am trying to scrape Google data on the top 250 IMDB movie ratings.</p>
<pre><code>movie_list = top_250_imdb["Title"]
base_url = 'https://www.google.com/search?q='

streaming = []
title = []
price = []

for movie in movie_list:
    query_url = (f'{base_url}{movie}')
    browser.visit(query_url)
    time.sleep(5)
    soup = bs(browser.html, 'lxml')

    results1 = soup.find_all('div', class_ = 'ellip bclEt')
    for result in results1:
        streaming.append(result.text)
        title.append(movie.capitalize())

    results2 = soup.find_all('div', class_ = 'ellip rsj3fb')
    for result in results2:
        price.append(result.text)
</code></pre>
<p>After scraping, I got both the <code>len(streaming)</code> and <code>len(title)</code> = 1297
but the <code>len(price)</code> = 1296</p>
<p>I couldn't create a DataFrame because they are not in the same length.</p>
<p>What went wrong and how do I fix it?</p>
|
<p>I think one of the results is missing its price, so the <code>price</code> list ends up one element short (hence 1296 vs 1297).</p>
<p>You could build a DataFrame from <code>price</code> on its own, fill the missing value with <code>fillna</code>, and then join that price DataFrame back onto your main DataFrame.</p>
<p>A bit long, but it might work.</p>
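<p>As a rough sketch of that idea (it assumes only one price is missing and simply pads at the end), you could equalise the list lengths before building the DataFrame and then fill the gap:</p>
<pre><code>import pandas as pd

# pad the shorter lists with None so all three have the same length
n = max(len(title), len(streaming), len(price))
price += [None] * (n - len(price))
streaming += [None] * (n - len(streaming))
title += [None] * (n - len(title))

df = pd.DataFrame({'title': title, 'streaming': streaming, 'price': price})
df['price'] = df['price'].fillna('N/A')  # or drop the incomplete row instead
</code></pre>
<p>Note this only fixes the lengths; the more robust fix is to scrape title, streaming and price together per result block so a missing price stays attached to the right movie.</p>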
|
python|pandas|web-scraping|beautifulsoup|splinter
| 0
|
8,949
| 63,499,158
|
How to remove marker from plot and make it smooth
|
<p>I have been trying to plot a smooth graph, and here is my code</p>
<pre><code>
import matplotlib.pyplot as plt
#fig,axes= plt.subplots(nrows=6, ncols=1, squeeze=False)
x = df["DOY"]
y = df["By"]
z = df["Bz"]
a = df["Vsw"]
b = df["Nsw"]
c = df["magnetopause_distance"]
d = df["reconnection_rate"]
</code></pre>
<p>And after that, I used the following logic to plot the same</p>
<pre><code>#create a figure
fig=plt.figure()
#define subplots and define their position
plt1=fig.add_subplot(611)
plt2=fig.add_subplot(612)
plt3=fig.add_subplot(613)
plt4=fig.add_subplot(614)
plt5=fig.add_subplot(615)
plt6=fig.add_subplot(616)
plt1.plot(x,y,'black',linewidth=0.5,marker=None)
plt1.set_ylabel("By")
plt1.set_title("3-6 July 2003")
plt2.plot(x,z,'black',linewidth=0.5)
plt2.set_ylabel("Bz")
plt3.plot(x,a,'black',linewidth=0.5)
plt3.set_ylabel("Vsw")
plt4.plot(x,b,'black',linewidth=0.5)
plt4.set_ylabel("Nsw")
plt5.plot(x,c,'black',linewidth=0.5)
plt5.set_ylabel("MD")
plt6.plot(x,d,'black',linewidth=0.5)
plt6.set_ylabel("MRR")
plt.subplots_adjust(hspace = 2,wspace = 2)
#saving plot in .jpg format
plt.savefig('myplot01.jpg', format='jpeg',dpi=500, bbox_inches='tight')
</code></pre>
<p>Finally, I am getting a plot like this:
<a href="https://i.stack.imgur.com/NpkJ8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NpkJ8.jpg" alt="enter image description here" /></a></p>
<p>What I want is something like this:
<a href="https://i.stack.imgur.com/RAOKt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RAOKt.png" alt="enter image description here" /></a></p>
<p>Sorry for the typos. Thanks for your time :)</p>
|
<p>Use:</p>
<pre><code>from scipy.interpolate import UnivariateSpline
import numpy as np
list_x_new = np.linspace(min(x), max(x), 1000)
spline = UnivariateSpline(x, y)
list_y_smooth = spline(list_x_new)
plt.plot(list_x_new, list_y_smooth)
plt.show()
</code></pre>
<p>This is for one of the graphs; substitute <code>z</code>, <code>a</code>, <code>b</code>, etc. in place of <code>y</code> to smooth whichever series you want to plot.</p>
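<p>If you want to apply the same idea to every subplot, a small helper along these lines should work (a sketch; it assumes each series is plotted against <code>x = df["DOY"]</code> and that <code>x</code> is increasing):</p>
<pre><code>def smooth(x, y, n=1000):
    # fit a spline to (x, y) and evaluate it on a dense grid
    x_new = np.linspace(min(x), max(x), n)
    return x_new, UnivariateSpline(x, y)(x_new)

xs, ys = smooth(x, z)
plt2.plot(xs, ys, 'black', linewidth=0.5)  # instead of plt2.plot(x, z, ...)
</code></pre>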
|
python|pandas|matplotlib|plot|linspace
| 0
|
8,950
| 63,549,478
|
Error loading tensorflow_datasets: error with Google cloud
|
<p>When loading a dataset with <code>tensorflow_datasets</code>, I got the following error:</p>
<pre><code>ds = tfds.load('mnist', split='train', shuffle_files=True)
---------------------------------------------------------------------------
FailedPreconditionError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/utils/py_utils.py in try_reraise(*args, **kwargs)
398 try:
--> 399 yield
400 except Exception: # pylint: disable=broad-except
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/registered.py in builder(name, **builder_init_kwargs)
243 prefix="Failed to construct dataset {}".format(name)):
--> 244 return builder_cls(name)(**builder_kwargs)
245
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/api_utils.py in disallow_positional_args_dec(fn, instance, args, kwargs)
68 _check_required(fn, kwargs)
---> 69 return fn(*args, **kwargs)
70
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py in __init__(self, data_dir, config, version)
205 else: # Use the code version (do not restore data)
--> 206 self.info.initialize_from_bucket()
207
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_info.py in initialize_from_bucket(self)
422 tmp_dir = tempfile.mkdtemp("tfds")
--> 423 data_files = gcs_utils.gcs_dataset_info_files(self.full_name)
424 if not data_files:
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py in gcs_dataset_info_files(dataset_dir)
70 """Return paths to GCS files in the given dataset directory."""
---> 71 return gcs_listdir(posixpath.join(GCS_DATASET_INFO_DIR, dataset_dir))
72
~/anaconda3/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py in gcs_listdir(dir_name)
63 root_dir = gcs_path(dir_name)
---> 64 if is_gcs_disabled() or not tf.io.gfile.exists(root_dir):
65 return None
~/anaconda3/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py in file_exists_v2(path)
266 try:
--> 267 _pywrap_file_io.FileExists(compat.as_bytes(path))
268 except errors.NotFoundError:
FailedPreconditionError: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations:
CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
when reading metadata of gs://tfds-data/dataset_info/mnist/3.0.1
</code></pre>
<p>In addition, the following appeared on cmd line:</p>
<p>2020-08-23 12:12:46.444195: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".</p>
<p>I am behind a corporate firewall. Is there anything I can do to resolve this issue, which looks like a Google Cloud authentication/certificate problem?</p>
<p>Thanks.</p>
|
<p>Running <code>ln -s /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt</code> (so the CA bundle exists at the path libcurl expects) resolved the issue.</p>
|
python|tensorflow
| 1
|
8,951
| 63,680,146
|
Pandas Dataframe from a nested dictionary with list as values
|
<p>I'm new to Python and pandas and I can't figure out a way to push this dict into a dataframe:</p>
<pre><code>a_dict = {'position': [{'points': '57.95', 'name': 'Def'}, {'points': '121', 'name': 'PK'}, {'points': '383.1', 'name': 'RB'}, {'points': '299.96', 'name': 'QB'}, {'points': '177.8', 'name': 'TE'}, {'points': '616.42', 'name': 'WR'}], 'id': 'MIN'}
</code></pre>
<p>I have tried multiple FOR loops to iterate through the dict but the list keeps me from organizing it. The data is originally in a JSON format. Thank you!</p>
|
<p>I'm guessing you want the points and names as columns</p>
<pre><code>import pandas as pd

points = []
name = []
for dct in a_dict['position']:
    points.append(dct['points'])
    name.append(dct['name'])

pd.DataFrame({'points':points,'name':name})
</code></pre>
<p>With the output</p>
<pre><code> points name
0 57.95 Def
1 121 PK
2 383.1 RB
3 299.96 QB
4 177.8 TE
5 616.42 WR
</code></pre>
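<p>Since <code>a_dict['position']</code> is already a list of flat dicts, you can also skip the loop and let the constructor do the work (the column order may differ):</p>
<pre><code>pd.DataFrame(a_dict['position'])
</code></pre>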
|
pandas|dictionary
| 0
|
8,952
| 63,548,669
|
Can I combine values of two rows if labels are not identical in pandas
|
<p>Here are the 2 dataframes I want to combine, but their labels differ from each other:</p>
<pre><code>df1
Date Campaign Sales
11/07/2020 AMZ CT BR Leather Shoes ABCDEFG1234 $10
11/07/2020 AMZ CT NB Leather Shoes ABCDEFG1234 $20
11/07/2020 AMZ OG BR Bag HGIJK567 $30
11/07/2020 AMZ OG NB Bag HGIJK567 Desktop $40
df2
Date Campaign Spend
11/07/2020 GA BR Leather Shoes ABCDEFG1234 $5
11/07/2020 GA NB Leather Shoes ABCDEFG1234 $6
11/07/2020 GA BR Bag HGIJK567 $7
11/07/2020 GA NB Bag HGIJK567 Desktop $8
</code></pre>
<p>Here's the output I want</p>
<pre><code>df3
Date Campaign Spend Sales
11/07/2020 CT BR Leather Shoes ABCDEFG1234 $5 $10
11/07/2020 CT NB Leather Shoes ABCDEFG1234 $6 $20
11/07/2020 OG BR Bag HGIJK567 $7 $30
11/07/2020 OG NB Bag HGIJK567 Desktop $8 $40
</code></pre>
|
<p>I would create an extra column to perform the <code>merge</code> on. From what I can see, merging should be done based on the product name without the leading acronyms.</p>
<pre><code>df1['Campaign_j'] = df1['Campaign'].map(lambda x: ' '.join(x.split()[3:]))
df2['Campaign_j'] = df2['Campaign'].map(lambda x: ' '.join(x.split()[2:]))
print(df1)
print(df2)
df3 = df1.merge(df2,how='left',on=['Campaign_j'],suffixes=('','_x')).drop_duplicates('Campaign_x')[['Campaign','Sales','Spend']]
</code></pre>
<p>After the joining, we will drop the duplicates from the first Campaign column (Campaign_x) and finally select the desired columns. I have not added the <code>date</code> column because it has no effect in this problem. Output:</p>
<pre><code> Campaign Sales Spend
0 AMZ CT BR Leather Shoes ABCDEFG1234 10 5
2 AMZ CT NB Leather Shoes ABCDEFG1234 20 6
4 AMZ OG BR Bag HGIJK567 30 7
5 AMZ OG NB Bag HGIJK567 Desktop 40 8
</code></pre>
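<p>If you also want to drop the leading <code>AMZ</code>/<code>GA</code> token so the campaigns look exactly like your desired <code>df3</code>, one possible follow-up (a sketch) is:</p>
<pre><code>df3['Campaign'] = df3['Campaign'].str.split().str[1:].str.join(' ')
</code></pre>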
|
python|pandas|dataframe
| 1
|
8,953
| 63,392,997
|
Split and replace special characters from column names in Pandas
|
<p>I have a dataframe which has column names like this:</p>
<pre><code>id, xxx>xxx>x, yy>y, zzzz>zzz>zz>z, ...
</code></pre>
<p>I need to split by the second <code>></code> from the right side, replace <code>></code> with <code>-</code>, and then take the last element as new column names, <code>id, xxx-x, yy-y, zz-z, ....</code></p>
<p>I have used: <code>"-".join('zzzz>zzz>zz>z'.rsplit(">", 2)[-2:])</code> and it gives: <code>zz-z</code>, but when I apply this to all column names with: <code>"-".join(df.columns.str.rsplit(">")[-2:])</code></p>
<p>Out:</p>
<pre><code>TypeError: sequence item 0: expected str instance, list found
</code></pre>
|
<p>Using Regex.</p>
<p><strong>Ex:</strong></p>
<pre><code>import re
c = ['id', 'xxx>xxx>x', 'yy>y', 'zzzz>zzz>zz>z']
print([re.sub(r"(.*?)([A-Za-z]+)>([A-Za-z]+)$", r"\2-\3", i) for i in c])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>['id', 'xxx-x', 'yy-y', 'zz-z']
</code></pre>
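<p>To rename the columns of your dataframe in place with the same substitution (assuming <code>df</code> is the frame), something like this should work:</p>
<pre><code>df.columns = [re.sub(r"(.*?)([A-Za-z]+)>([A-Za-z]+)$", r"\2-\3", c) for c in df.columns]
</code></pre>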
|
python-3.x|pandas|dataframe|split
| 1
|
8,954
| 63,452,674
|
Binning Data in Python versus R?
|
<p>I'm trying to rewrite some R code into Python but I'm not getting the same output. I would be grateful if somebody could please point me in the right direction:</p>
<p>R Code:</p>
<pre><code>Data_2017_18$ageband3 <- cut(Data_2017_18$age,
breaks = c(0, 30, 50, Inf),
labels = c(1,2,3))
</code></pre>
<p>R Output:</p>
<pre><code>> freq(as.ordered(Data_2017_18$ageband3))
as.ordered(Data_2017_18$ageband3)
Frequency Percent Cum Percent
1 5123 14.76 14.76
2 11308 32.57 47.33
3 18284 52.67 100.00
Total 34715 100.00
</code></pre>
<p>Python Code:</p>
<pre><code>import pandas as pd
Data_2017_18['ageband3'] = pd.cut(x=NVF_2017_18['age'],
bins=[0,30,50,100])
</code></pre>
<p>Python Output:</p>
<pre><code>Data_2017_18.ageband3.value_counts()
(50, 100] 18175
(30, 50] 11308
(0, 30] 5123
Name: ageband3, dtype: int64
</code></pre>
<p>Is there a way to match the Python output with the R output, please?</p>
<p>Much appreciated.</p>
|
<p>If you don't know the maximum possible value of your data, you can use <code>numpy.inf</code>, or more commonly <code>np.inf</code> (if you <code>import numpy as np</code>).</p>
<p>Take the following data</p>
<pre><code>np.random.seed(123)
df = pd.DataFrame(np.random.randint(1, 110, 35000), columns=['age'])
>>>print(df.describe())
age
count 35000.000000
mean 55.183771
std 31.362987
min 1.000000
25% 28.000000
50% 55.000000
75% 82.000000
max 109.000000
</code></pre>
<p>Then cut to your desired bins, notice the <code>np.inf</code> in the bins list</p>
<pre><code>banded = (
df.groupby( # setup to group original data by the ageband
pd.cut(df.age, bins=[0, 30, 50, np.inf]) # get the actual binning
# notice the last bin ends in infinite
).size().rename('frequency').to_frame() # count and reshape
)
>>> print(banded.sum())
frequency 35000
</code></pre>
<p>And if you wish to add those pct/cumpct columns</p>
<pre><code>banded['percent'] = banded / banded.sum()
banded['cum_percent'] = banded.percent.cumsum()
>>> print(banded)
frequency percent cum_percent
age
(0.0, 30.0] 9529 0.272257 0.272257
(30.0, 50.0] 6394 0.182686 0.454943
(50.0, inf] 19077 0.545057 1.000000
</code></pre>
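<p>And if you also want the numeric band labels <code>1, 2, 3</code> from the R code, <code>pd.cut</code> accepts a <code>labels</code> argument:</p>
<pre><code>df['ageband3'] = pd.cut(df.age, bins=[0, 30, 50, np.inf], labels=[1, 2, 3])
df.ageband3.value_counts().sort_index()
</code></pre>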
|
python|r|pandas
| 1
|
8,955
| 63,548,409
|
Run same script in different threads python
|
<p>I have a script that recognizes plates from a camera, and now I need the same script to recognize from another camera, so in short it needs to recognize from two cameras at once. I am using TensorFlow/Keras and YOLO object detection. Can someone suggest a solution to this? I tried with different threads but I could not start the second thread. I will post what I have tried.</p>
<pre class="lang-py prettyprint-override"><code>import sys, os
import threading
import keras
import cv2
import traceback
import numpy as np
import time
import sqlite3
import pyodbc
import time
from imutils.video import VideoStream
from pattern import apply_pattern
import darknet.python.darknet as dn
from os.path import splitext, basename
from glob import glob
from darknet.python.darknet import detect
from src.label import dknet_label_conversion
from src.utils import nms
from src.keras_utils import load_model
from glob import glob
from os.path import splitext, basename
from src.utils import im2single
from src.keras_utils import load_model, detect_lp
from src.label import Shape, writeShapes
import imutils
cam_vlez ="rtsp://"
cam_izlez = "rtsp://a"
def adjust_pts(pts,lroi):
return pts*lroi.wh().reshape((2,1)) + lroi.tl().reshape((2,1))
def start_vlez(cam):
while True:
cap = VideoStream(cam).start()
start_time = time.time()
sky = cap.read()
frame = sky[100:700, 300:1800]
w = frame.shape[0]
h = frame.shape[1]
ratio = float(max(frame.shape[:2])) / min(frame.shape[:2])
side = int(ratio * 288.)
bound_dim = min(side + (side % (2 ** 4)), 608)
Llp,LlpImgs,_ = detect_lp(wpod_net,im2single(frame),bound_dim,2**4,(240,80),lp_threshold)
cv2.imshow('detected_plate', frame)
if len(LlpImgs):
Ilp = LlpImgs[0]
s = Shape(Llp[0].pts)
for shape in [s]:
ptsarray = shape.pts.flatten()
try:
frame = cv2.rectangle(frame,(int(ptsarray[0]*h), int(ptsarray[5]*w)),(int(ptsarray[1]*h),int(ptsarray[6]*w)),(0,255,0),3)
cv2.imshow('detected_plate', frame)
except:
traceback.print_exc()
sys.exit(1)
Ilp = cv2.cvtColor(Ilp, cv2.COLOR_BGR2GRAY)
Ilp = cv2.cvtColor(Ilp, cv2.COLOR_GRAY2BGR)
cv2.imwrite('%s/_lp.png' % (output_dir),Ilp*255.)
cv2.imshow('lp_bic', Ilp)
R,(width,height) = detect(ocr_net, ocr_meta, 'lp_images/_lp.png' ,thresh=ocr_threshold, nms=None)
if len(R):
L = dknet_label_conversion(R,width,height)
L = nms(L,.45)
L.sort(key=lambda x: x.tl()[0])
lp_str = ''.join([chr(l.cl()) for l in L])
result =apply_pattern(lp_str)
write_to_database(result)
print("License Plate Detected: ", lp_str)
print("Written in database: ", result)
print("--- %s seconds ---" % (time.time() - start_time))
#updateSqliteTable(lp_str)
def start_izlez(cam):
while True:
cap = VideoStream(cam).start()
start_time = time.time()
sky = cap.read()
frame = sky[100:700, 300:1800]
w = frame.shape[0]
h = frame.shape[1]
ratio = float(max(frame.shape[:2])) / min(frame.shape[:2])
side = int(ratio * 288.)
bound_dim = min(side + (side % (2 ** 4)), 608)
Llp,LlpImgs,_ = detect_lp(wpod_net,im2single(frame),bound_dim,2**4,(240,80),lp_threshold)
cv2.imshow('detected_plate1', frame)
if len(LlpImgs):
Ilp = LlpImgs[0]
s = Shape(Llp[0].pts)
for shape in [s]:
ptsarray = shape.pts.flatten()
try:
frame = cv2.rectangle(frame,(int(ptsarray[0]*h), int(ptsarray[5]*w)),(int(ptsarray[1]*h),int(ptsarray[6]*w)),(0,255,0),3)
cv2.imshow('detected_plate1', frame)
except:
traceback.print_exc()
sys.exit(1)
Ilp = cv2.cvtColor(Ilp, cv2.COLOR_BGR2GRAY)
Ilp = cv2.cvtColor(Ilp, cv2.COLOR_GRAY2BGR)
cv2.imwrite('%s/_lp.png' % (output_dir),Ilp*255.)
cv2.imshow('lp_bic', Ilp)
R,(width,height) = detect(ocr_net, ocr_meta, 'lp_images/_lp.png' ,thresh=ocr_threshold, nms=None)
if len(R):
L = dknet_label_conversion(R,width,height)
L = nms(L,.45)
L.sort(key=lambda x: x.tl()[0])
lp_str = ''.join([chr(l.cl()) for l in L])
result =apply_pattern(lp_str)
write_to_database(result)
print("License Plate Detected: ", lp_str)
print("Written in database: ", result)
print("--- %s seconds ---" % (time.time() - start_time))
#updateSqliteTable(lp_str)
if __name__ == '__main__':
try:
output_dir = 'lp_images/'
lp_threshold = .5
wpod_net_path = "./my-trained-model/my-trained-model1_final.json"
wpod_net = load_model(wpod_net_path)
ocr_threshold = .6
ocr_weights = b'data/ocr/ocr-net.weights'
ocr_netcfg = b'data/ocr/ocr-net.cfg'
ocr_dataset = b'data/ocr/ocr-net.data'
ocr_net = dn.load_net(ocr_netcfg, ocr_weights, 0)
ocr_meta = dn.load_meta(ocr_dataset)
t = threading.Thread(target=start_vlez(cam_izlez))
t1 = threading.Thread(target=start_izlez(cam_vlez))
t.start()
t1.start()
except:
print ("Error: unable to start thread")
</code></pre>
|
<p><code>target=</code> in <code>Thread</code> needs function's name without <code>()</code> and arguments - and it will later use <code>()</code> to start it.</p>
<p>Your current code doesn't run the functions in threads; instead it effectively works like this:</p>
<pre><code>result = start_vlez(cam_izlez)
result1 = start_izlez(cam_vlez)
t = threading.Thread(target=result)
t1 = threading.Thread(target=result1)
t.start()
t1.start()
</code></pre>
<p>So it runs the first function in the main thread and waits for it to finish, then runs the second function (also in the main thread) and waits for it to finish, and only after that does it try to use <code>Thread</code>.</p>
<p>If you have arguments, then you need to use the function's name without <code>()</code> in <code>target=</code> and pass a tuple with the arguments in <code>args=</code>:</p>
<pre><code>t = threading.Thread(target=start_vlez, args=(cam_izlez,))
t1 = threading.Thread(target=start_izlez, args=(cam_vlez,))
</code></pre>
<p><code>args=</code> needs a tuple even for a single argument, so there is a trailing <code>,</code> in <code>(cam_izlez,)</code> and <code>(cam_vlez,)</code>.</p>
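<p>Putting it together, the end of your <code>__main__</code> block could look like this (a sketch; it keeps your names and adds <code>join()</code> so the main thread waits for both workers):</p>
<pre><code>t = threading.Thread(target=start_vlez, args=(cam_izlez,))
t1 = threading.Thread(target=start_izlez, args=(cam_vlez,))
t.start()
t1.start()
t.join()
t1.join()
</code></pre>
<p>One caveat: calling <code>cv2.imshow</code> from several threads at once can be unreliable on some platforms, so if the windows stop updating you may need to move the display calls into the main thread.</p>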
|
python|tensorflow|keras|yolo
| 1
|
8,956
| 29,846,658
|
Computing MAD(mean absolute deviation) GroupBy Pandas
|
<p>I have a dataframe:</p>
<pre><code>Type Name Cost
A X 545
B Y 789
C Z 477
D X 640
C X 435
B Z 335
A X 850
B Y 152
</code></pre>
<p>I have all such combinations in my dataframe with Type ['A','B','C','D'] and Names ['X','Y','Z']. I used the groupby method to get stats on a specific combination together, like <strong>A-X, A-Y, A-Z</strong>. Here's some code:</p>
<pre><code>df = pd.DataFrame({'Type':['A','B','C','D','C','B','A','B'] ,'Name':['X','Y','Z','X','X','Z','X','Y'], 'Cost':[545,789,477,640,435,335,850,152]})
df.groupby(['Name','Type']).agg([mean,std])
#need to use mad instead of std
</code></pre>
<p>I need to eliminate the observations that are more than 3 MADs away ; something like: </p>
<pre><code>test = df[np.abs(df.Cost-df.Cost.mean())<=(3*df.Cost.mad())]
</code></pre>
<p>I am confused with this as df.Cost.mad() returns the MAD for the Cost on the entire data rather than a specific Type-Name category. How could I combine both? </p>
|
<p>You can use <code>groupby</code> and <code>transform</code> to create new data series that can be used to filter out your data.</p>
<pre><code>groups = df.groupby(['Name','Type'])
mad = groups['Cost'].transform(lambda x: x.mad())
dif = groups['Cost'].transform(lambda x: np.abs(x - x.mean()))
df2 = df[dif <= 3*mad]
</code></pre>
<p>However, in this case, no row is filtered out since the difference is equal to the mean absolute deviation (the groups have only two rows at most).</p>
|
python|pandas|group-by|dataframe|aggregate
| 4
|
8,957
| 29,877,226
|
How to define colors for histogram in "groupby"?
|
<p>I need to define the colors per group (2 colors, for 'F' and 'M') for the following sample:</p>
<pre><code>d = {'gender' : Series(['M', 'F', 'F', 'F', 'M']),'year' : Series([1900, 1910, 1920, 1920, 1920])}
df = DataFrame(d)
grouped = df.groupby('gender').year
grouped.plot(kind='hist',legend=True)
</code></pre>
|
<p>If you don't need groupby (I don't see that it gains you anything in this case), then you can easily set colors:</p>
<pre><code>ax1 = plt.subplot(111)
df[df['gender']=='M'].hist(ax=ax1, color='red', label='M')
df[df['gender']=='F'].hist(ax=ax1, color='blue', label='F')
ax1.legend(loc='best')
</code></pre>
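<p>If you do want to keep the <code>groupby</code>, you can iterate over the groups and pick the colour per gender (a sketch, assuming <code>matplotlib.pyplot</code> is imported as <code>plt</code>):</p>
<pre><code>colors = {'M': 'red', 'F': 'blue'}
fig, ax = plt.subplots()
for gender, grp in df.groupby('gender'):
    grp['year'].hist(ax=ax, color=colors[gender], label=gender)
ax.legend(loc='best')
</code></pre>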
|
python|pandas|colors|histogram
| 1
|
8,958
| 53,742,201
|
TensorFlow Hub caching model - permission denied when loading
|
<p>I am trying to save a TensorFlow Module to disk to avoid downloading it for every use.</p>
<p>I read about caching modules here: <a href="https://www.tensorflow.org/hub/basics" rel="nofollow noreferrer">https://www.tensorflow.org/hub/basics</a></p>
<pre><code>$ export TFHUB_CACHE_DIR=/tf_models
$ echo $TFHUB_CACHE_DIR
/tf_models
</code></pre>
<p>So the environment variable is set, I also added it to .bashrc, and reloaded .bashrc with <code>source</code>. </p>
<p>In python:</p>
<pre><code>import tensorflow_hub as hub
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
~/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
526 None, None,
527 compat.as_text(c_api.TF_Message(self.status.status)),
--> 528 c_api.TF_GetCode(self.status.status))
529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to
PermissionDeniedError: /tf_models; Permission denied
</code></pre>
<p>I can run the hub modules fine when TFHUB_CACHE_DIR is default.</p>
<p>Why do I get permission denied? </p>
|
<p>Solved by removing the leading <code>/</code>, i.e. using a relative path:</p>
<pre><code>export TFHUB_CACHE_DIR=tf_models
</code></pre>
|
tensorflow|anaconda
| 0
|
8,959
| 53,718,842
|
can't pickle _thread.RLock objects when running tune of ray packge for python (hyper parameter tuning)
|
<p>I am trying to do a hyper parameter tuning with the <a href="https://ray.readthedocs.io/en/latest/tune.html" rel="nofollow noreferrer">tune</a> package of Ray.</p>
<p>Shown below is my code:</p>
<pre><code># Disable linter warnings to maintain consistency with tutorial.
# pylint: disable=invalid-name
# pylint: disable=g-bad-import-order
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import matplotlib as mplt
mplt.use('agg') # Must be before importing matplotlib.pyplot or pylab!
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from math import sqrt
import csv
import argparse
import sys
import tempfile
import pandas as pd
import time
import ray
from ray.tune import grid_search, run_experiments, register_trainable
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
import tensorflow as tf
import ray
from ray import tune
class RNNconfig():
num_steps = 14
lstm_size = 32
batch_size = 8
init_learning_rate = 0.01
learning_rate_decay = 0.99
init_epoch = 5 # 5
max_epoch = 60 # 100 or 50
hidden1_nodes = 30
hidden2_nodes = 15
hidden1_activation = tf.nn.relu
hidden2_activation = tf.nn.relu
lstm_activation = tf.nn.relu
status_reporter = None
FLAGS = None
input_size = 1
num_layers = 1
fileName = 'store2_1.csv'
graph = tf.Graph()
column_min_max = [[0,11000], [1,7]]
columns = ['Sales', 'DayOfWeek','SchoolHoliday', 'Promo']
features = len(columns)
rnn_config = RNNconfig()
def segmentation(data):
seq = [price for tup in data[rnn_config.columns].values for price in tup]
seq = np.array(seq)
# split into items of features
seq = [np.array(seq[i * rnn_config.features: (i + 1) * rnn_config.features])
for i in range(len(seq) // rnn_config.features)]
# split into groups of num_steps
X = np.array([seq[i: i + rnn_config.num_steps] for i in range(len(seq) - rnn_config.num_steps)])
y = np.array([seq[i + rnn_config.num_steps] for i in range(len(seq) - rnn_config.num_steps)])
# get only sales value
y = [[y[i][0]] for i in range(len(y))]
y = np.asarray(y)
return X, y
def scale(data):
for i in range (len(rnn_config.column_min_max)):
data[rnn_config.columns[i]] = (data[rnn_config.columns[i]] - rnn_config.column_min_max[i][0]) / ((rnn_config.column_min_max[i][1]) - (rnn_config.column_min_max[i][0]))
return data
def rescle(test_pred):
prediction = [(pred * (rnn_config.column_min_max[0][1] - rnn_config.column_min_max[0][0])) + rnn_config.column_min_max[0][0] for pred in test_pred]
return prediction
def pre_process():
store_data = pd.read_csv(rnn_config.fileName)
store_data = store_data.drop(store_data[(store_data.Open == 0) & (store_data.Sales == 0)].index)
#
# store_data = store_data.drop(store_data[(store_data.Open != 0) & (store_data.Sales == 0)].index)
# ---for segmenting original data --------------------------------
original_data = store_data.copy()
## train_size = int(len(store_data) * (1.0 - rnn_config.test_ratio))
validation_len = len(store_data[(store_data.Month == 6) & (store_data.Year == 2015)].index)
test_len = len(store_data[(store_data.Month == 7) & (store_data.Year == 2015)].index)
train_size = int(len(store_data) - (validation_len+test_len))
train_data = store_data[:train_size]
validation_data = store_data[(train_size-rnn_config.num_steps): validation_len+train_size]
test_data = store_data[((validation_len+train_size) - rnn_config.num_steps): ]
original_val_data = validation_data.copy()
original_test_data = test_data.copy()
# -------------- processing train data---------------------------------------
scaled_train_data = scale(train_data)
train_X, train_y = segmentation(scaled_train_data)
# -------------- processing validation data---------------------------------------
scaled_validation_data = scale(validation_data)
val_X, val_y = segmentation(scaled_validation_data)
# -------------- processing test data---------------------------------------
scaled_test_data = scale(test_data)
test_X, test_y = segmentation(scaled_test_data)
# ----segmenting original validation data-----------------------------------------------
nonescaled_val_X, nonescaled_val_y = segmentation(original_val_data)
# ----segmenting original test data-----------------------------------------------
nonescaled_test_X, nonescaled_test_y = segmentation(original_test_data)
return train_X, train_y, test_X, test_y, val_X, val_y, nonescaled_test_y,nonescaled_val_y
def generate_batches(train_X, train_y, batch_size):
num_batches = int(len(train_X)) // batch_size
if batch_size * num_batches < len(train_X):
num_batches += 1
batch_indices = range(num_batches)
for j in batch_indices:
batch_X = train_X[j * batch_size: (j + 1) * batch_size]
batch_y = train_y[j * batch_size: (j + 1) * batch_size]
assert set(map(len, batch_X)) == {rnn_config.num_steps}
yield batch_X, batch_y
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
itemindex = np.where(y_true == 0)
y_true = np.delete(y_true, itemindex)
y_pred = np.delete(y_pred, itemindex)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def RMSPE(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.sqrt(np.mean(np.square(((y_true - y_pred) / y_pred)), axis=0))
def deepnn(inputs):
cell = tf.contrib.rnn.LSTMCell(rnn_config.lstm_size, state_is_tuple=True, activation= rnn_config.lstm_activation)
val1, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
val = tf.transpose(val1, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1, name="last_lstm_output")
# hidden layer
hidden1 = tf.layers.dense(last, units=rnn_config.hidden1_nodes, activation=rnn_config.hidden2_activation)
hidden2 = tf.layers.dense(hidden1, units=rnn_config.hidden2_nodes, activation=rnn_config.hidden1_activation)
weight = tf.Variable(tf.truncated_normal([rnn_config.hidden2_nodes, rnn_config.input_size]))
bias = tf.Variable(tf.constant(0.1, shape=[rnn_config.input_size]))
prediction = tf.matmul(hidden2, weight) + bias
return prediction
def main():
train_X, train_y, test_X, test_y, val_X, val_y, nonescaled_test_y, nonescaled_val_y = pre_process()
# Create the model
inputs = tf.placeholder(tf.float32, [None, rnn_config.num_steps, rnn_config.features], name="inputs")
targets = tf.placeholder(tf.float32, [None, rnn_config.input_size], name="targets")
learning_rate = tf.placeholder(tf.float32, None, name="learning_rate")
# Build the graph for the deep net
prediction = deepnn(inputs)
with tf.name_scope('loss'):
model_loss = tf.losses.mean_squared_error(targets, prediction)
with tf.name_scope('adam_optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
graph_location = "checkpoints_sales/sales_pred.ckpt"
# graph_location = tempfile.mkdtemp()
print('Saving graph to: %s' % graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(tf.get_default_graph())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
learning_rates_to_use = [
rnn_config.init_learning_rate * (
rnn_config.learning_rate_decay ** max(float(i + 1 -rnn_config.init_epoch), 0.0)
) for i in range(rnn_config.max_epoch)]
for epoch_step in range(rnn_config.max_epoch):
current_lr = learning_rates_to_use[epoch_step]
i = 0
for batch_X, batch_y in generate_batches(train_X, train_y, rnn_config.batch_size):
train_data_feed = {
inputs: batch_X,
targets: batch_y,
learning_rate: current_lr,
}
train_loss, _ = sess.run([model_loss, optimizer], train_data_feed)
if i % 10 == 0:
val_data_feed = {
inputs: val_X,
targets: val_y,
learning_rate: 0.0,
}
val_prediction = prediction.eval(feed_dict=val_data_feed)
meanSquaredError = mean_squared_error(val_y, val_prediction)
val_rootMeanSquaredError = sqrt(meanSquaredError)
print('epoch %d, step %d, training accuracy %g' % (i, epoch_step, val_rootMeanSquaredError))
if rnn_config.status_reporter:
rnn_config.status_reporter(timesteps_total= epoch_step, mean_accuracy= val_rootMeanSquaredError)
i += 1
test_data_feed = {
inputs: test_X,
targets: test_y,
learning_rate: 0.0,
}
test_prediction = prediction.eval(feed_dict=val_data_feed)
meanSquaredError = mean_squared_error(val_y, test_prediction)
test_rootMeanSquaredError = sqrt(meanSquaredError)
print('training accuracy %g' % (test_rootMeanSquaredError))
# !!! Entrypoint for ray.tune !!!
def train(config, reporter=None):
rnn_config.status_reporter = reporter
rnn_config.num_steps= getattr(config["num_steps"])
rnn_config.lstm_size = getattr(config["lstm_size"])
rnn_config.hidden1_nodes = getattr(config["hidden1_nodes"])
rnn_config.hidden2_nodes = getattr(config["hidden2_nodees"])
rnn_config.lstm_activation = getattr(tf.nn, config["lstm_activation"])
rnn_config.init_learning_rate = getattr(config["learning_rate"])
rnn_config.hidden1_activation = getattr(tf.nn, config['hidden1_activation'])
rnn_config.hidden2_activation = getattr(tf.nn, config['hidden2_activation'])
rnn_config.learning_rate_decay = getattr(config["learning_rate_decay"])
rnn_config.max_epoch = getattr(config["max_epoch"])
rnn_config.init_epoch = getattr(config["init_epoch"])
rnn_config.batch_size = getattr(config["batch_size"])
parser = argparse.ArgumentParser()
parser.add_argument(
'--data_dir',
type=str,
default='/tmp/tensorflow/mnist/input_data',
help='Directory for storing input data')
rnn_config.FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
# !!! Example of using the ray.tune Python API !!!
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--smoke-test', action='store_true', help='Finish quickly for testing')
args, _ = parser.parse_known_args()
register_trainable('train_mnist', train)
mnist_spec = {
'run': 'train_mnist',
'stop': {
'mean_accuracy': 0.99,
},
'config': {
"num_steps": tune.grid_search([1, 2, 3,4,5,6,7,8,9,10,11,12,13,14,15]),
"lstm_size": tune.grid_search([8,16,32,64,128]),
"hidden1_nodes" : tune.grid_search([4,8,16,32,64]),
"hidden2_nodees" : tune.grid_search([2,4,8,16,32]),
"lstm_activation" : tune.grid_search(['relu', 'elu', 'tanh']),
"learning_rate" : tune.grid_search([0.01,0.1,0.5,0.05]),
"hidden1_activation" : tune.grid_search(['relu', 'elu', 'tanh']),
"hidden2_activation" : tune.grid_search(['relu', 'elu', 'tanh']),
"learning_rate_decay" : tune.grid_search([0.99,0.8,0.7]),
"max_epoch" : tune.grid_search([60,50,100,120,200]),
"init_epoch" : tune.grid_search([5,10,15,20]),
"batch_size" : tune.grid_search([5,8,16,32,64])
},
}
if args.smoke_test:
mnist_spec['stop']['training_iteration'] = 2
ray.init()
run_experiments({'tune_mnist_test': mnist_spec})
</code></pre>
<p>When I try to run this, I am getting an error. Shown below is the stack trace. This is my first time using Tune, so I am not sure what I'm doing wrong here. Also note that the <a href="https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/tune_mnist_ray.py" rel="nofollow noreferrer">example algorithm</a> given by ray works fine on my machine.</p>
<blockquote>
<p>/home/suleka/anaconda3/lib/python3.6/site-packages/h5py/<strong>init</strong>.py:36:
FutureWarning: Conversion of the second argument of issubdtype from
<code>float</code> to <code>np.floating</code> is deprecated. In future, it will be treated
as <code>np.float64 == np.dtype(float).type</code>. from ._conv import
register_converters as _register_converters WARNING: Not updating
worker name since <code>setproctitle</code> is not installed. Install this with
<code>pip install setproctitle</code> (or ray[debug]) to enable monitoring of
worker processes. Traceback (most recent call last): File
"/home/suleka/anaconda3/lib/python3.6/pickle.py", line 918, in
save_global
obj2, parent = _getattribute(module, name) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 266, in
_getattribute
.format(name, obj)) AttributeError: Can't get local attribute 'wrap_function..WrappedFunc' on </p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last): File
"/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 639, in save_global
return Pickler.save_global(self, obj, name=name) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 922, in
save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle .WrappedFunc'>: it's not
found as ray.tune.trainable.wrap_function..WrappedFunc</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last): File
"/home/suleka/Documents/sales_prediction/auto_LSTM_withoutZero.py",
line 322, in
register_trainable('train_mnist', train) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/tune/registry.py",
line 38, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/tune/registry.py",
line 77, in register
self._to_flush[(category, key)] = pickle.dumps(value) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 881, in dumps
cp.dump(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 268, in dump
return Pickler.dump(self, obj) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 409, in dump
self.save(obj) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 648, in save_global
return self.save_dynamic_class(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 495, in save_dynamic_class
save(clsdict) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 410, in save_function
self.save_function_tuple(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 553, in save_function_tuple
save(state) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 781, in
save_list
self._batch_appends(obj) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 808, in
_batch_appends
save(tmp[0]) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 405, in save_function
self.save_function_tuple(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 553, in save_function_tuple
save(state) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 405, in save_function
self.save_function_tuple(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 553, in save_function_tuple
save(state) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 405, in save_function
self.save_function_tuple(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 553, in save_function_tuple
save(state) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 605, in
save_reduce
save(cls) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 636, in save_global
return self.save_dynamic_class(obj) File "/home/suleka/anaconda3/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle.py",
line 495, in save_dynamic_class
save(clsdict) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 634, in
save_reduce
save(state) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 821, in
save_dict
self._batch_setitems(obj.items()) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 847, in
_batch_setitems
save(v) File "/home/suleka/anaconda3/lib/python3.6/pickle.py", line 496, in save
rv = reduce(self.proto) TypeError: can't pickle _thread.RLock objects</p>
</blockquote>
|
<p>You will have to call <code>rnn_config = RNNconfig()</code> in <code>def train(config, reporter=None)</code> function. Most importantly, the <code>tf.Graph()</code> needs to be initialized within <code>train</code> because it is not easily pickleable. </p>
<p>Note that the rest of your code may also need to be adjusted accordingly.</p>
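<p>A rough sketch of that suggestion (not a drop-in fix: it assumes you also change <code>main()</code> to accept the config object as an argument instead of reading the module-level one):</p>
<pre><code>def train(config, reporter=None):
    rnn_config = RNNconfig()                 # fresh config per trial, created inside the trainable
    rnn_config.status_reporter = reporter
    rnn_config.num_steps = config["num_steps"]
    rnn_config.lstm_size = config["lstm_size"]
    rnn_config.lstm_activation = getattr(tf.nn, config["lstm_activation"])
    # ...copy the remaining hyperparameters the same way...

    with tf.Graph().as_default():            # build the graph inside the trainable, not at module level
        main(rnn_config)
</code></pre>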
|
python-3.x|tensorflow|deep-learning|hyperparameters|ray
| 1
|
8,960
| 53,366,735
|
Combine two numpy arrays into matrix with a two-argument function
|
<p>Roughly I want to convert this (non-numpy) for-loop:</p>
<pre><code>N = len(left)
M = len(right)
matrix = np.zeros(N, M)
for i in range(N):
for j in range(M):
matrix[i][j] = scipy.stats.binom.pmf(left[i], C, right[j])
</code></pre>
<p>It's sort of like a dot product but of course mathematically not a dot product. How would I normally vectorize or make something like this pythonic/numpythonic?</p>
|
<p><strong><code>scipy.stats.binom.pmf</code></strong> already is vectorized. However, you have to <strong><code>broadcast</code></strong> your inputs in order to get your desired result.</p>
<pre><code>broadcast_out = scipy.stats.binom.pmf(left[:, None], C, right)
</code></pre>
<hr>
<p><strong><em>Validation</em></strong></p>
<pre><code>np.random.seed(314)
left = np.arange(5, dtype=float)
right = np.random.rand(5)
C = 5
broadcast_out = scipy.stats.binom.pmf(left[:, None], C, right)
N = len(left)
M = len(right)
matrix = np.zeros((N, M))
for i in range(N):
for j in range(M):
matrix[i][j] = scipy.stats.binom.pmf(left[i], C, right[j])
print(np.array_equal(matrix, broadcast_out))
</code></pre>
<p></p>
<pre><code>True
</code></pre>
|
python|numpy|matrix|scipy
| 3
|
8,961
| 53,680,932
|
Pandas: Find the max value in one column containing lists
|
<p>I have a dataframe like this:</p>
<pre><code>fly_frame:
day plcae
0 [1,2,3,4,5] A
1 [1,2,3,4] B
2 [1,2] C
3 [1,2,3,4] D
</code></pre>
<p>If I want to find the max value in each entry in the day column.</p>
<p>For example:</p>
<pre><code>fly_frame:
day plcae
0 5 A
1 4 B
2 2 C
3 4 D
</code></pre>
<p>What should I do?<br>
Thanks for your help.</p>
|
<pre><code>df.day.apply(max)
#0 5
#1 4
#2 2
#3 4
</code></pre>
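<p>If you want to keep the other column and just replace <code>day</code>, assign the result back:</p>
<pre><code>df['day'] = df['day'].apply(max)
</code></pre>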
|
python|pandas|dataframe
| 10
|
8,962
| 53,594,769
|
Applying a function to an array using Numpy when the function contains a condition
|
<p>I am having a difficulty with applying a function to an array when the function contains a condition. I have an inefficient workaround and am looking for an efficient (fast) approach. In a simple example:</p>
<pre><code>pts = np.linspace(0,1,11)
def fun(x, y):
if x > y:
return 0
else:
return 1
</code></pre>
<p>Now, if I run:</p>
<pre><code>result = fun(pts, pts)
</code></pre>
<p>then I get the error</p>
<blockquote>
<p>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
</blockquote>
<p>raised at the <code>if x > y</code> line. My inefficient workaround, which gives the correct result but is too slow is:</p>
<pre><code>result = np.full([len(pts)]*2, np.nan)
for i in range(len(pts)):
for j in range(len(pts)):
result[i,j] = fun(pts[i], pts[j])
</code></pre>
<p>What is the best way to obtain this in a nicer (and more importantly, faster) way?</p>
<p><strong>EDIT</strong>: using </p>
<pre><code>def fun(x, y):
if x > y:
return 0
else:
return 1
x = np.array(range(10))
y = np.array(range(10))
xv,yv = np.meshgrid(x,y)
result = fun(xv, yv)
</code></pre>
<p>still raises the same <code>ValueError</code>. </p>
|
<p>The error is quite explicit - suppose you have</p>
<pre><code>x = np.array([1,2])
y = np.array([2,1])
</code></pre>
<p>such that </p>
<pre><code>(x>y) == np.array([0,1])
</code></pre>
<p>what should be the result of your <code>if np.array([0,1])</code> statement? is it true or false? <code>numpy</code> is telling you this is ambiguous. Using</p>
<pre><code>(x>y).all()
</code></pre>
<p>or </p>
<pre><code>(x>y).any()
</code></pre>
<p>is explicit, and thus <code>numpy</code> is offering you solutions - either any cell pair fulfills the condition, or all of them - both an unambiguous truth value. You have to define for yourself exactly what you meant by <em>vector x is larger than vector y</em>.</p>
<p>The <code>numpy</code> solution to operate on all pairs of <code>x</code> and <code>y</code> such that <code>x[i]>y[j]</code> is to use mesh grid to generate all pairs:</p>
<pre><code>>>> import numpy as np
>>> x=np.array(range(10))
>>> y=np.array(range(10))
>>> xv,yv=np.meshgrid(x,y)
>>> xv[xv>yv]
array([1, 2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 3, 4, 5, 6, 7, 8,
9, 4, 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 6, 7, 8, 9, 7, 8, 9, 8, 9, 9])
>>> yv[xv>yv]
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2,
2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8])
</code></pre>
<p>Either send <code>xv</code> and <code>yv</code> to <code>fun</code>, or create the mesh in the function, depending on what makes more sense. This generates all pairs <code>xi,yj</code> such that <code>xi>yj</code>. If you want the actual indices, just return <code>xv>yv</code>, where each cell <code>ij</code> corresponds to <code>x[i]</code> and <code>y[j]</code>. In your case:</p>
<pre><code>def fun(x, y):
xv,yv=np.meshgrid(x,y)
return xv>yv
</code></pre>
<p>will return a matrix where <code>fun(x,y)[i][j]</code> is True if <code>x[i]>y[j]</code>, or False otherwise. Alternatively</p>
<pre><code>return np.where(xv>yv)
</code></pre>
<p>will return a tuple of two arrays of pairs of the indices, such that</p>
<pre><code>for i, j in zip(*fun(x, y)):
</code></pre>
<p>will iterate over exactly the index pairs that satisfy the condition.</p>
|
python|numpy|lambda|conditional|vectorization
| 1
|
8,963
| 53,527,688
|
Tabular formatting for JSON file
|
<p>I have a JSON which is making requests from <a href="http://api.worldweatheronline.com/" rel="nofollow noreferrer">http://api.worldweatheronline.com/</a> via their useful API.</p>
<p>I am struggling to convert the JSON into a tabular format such as a pandas data frame. I think the issue is due to the nested structure of the JSON.</p>
<p>I have tried <code>pd.DataFrame(json)</code>, however this doesn't format correctly as it struggles with the nested structure as 'weather hourly time' spans over many rows, whereas the initial lines span a single row.</p>
<p>I have also tried exporting as a JSON and reading in as <code>pd.read_json</code>, however this has also run into similar issues.</p>
<p>Would really appreciate some help!</p>
<p>JSON is as follows:</p>
<pre><code>{'data': {'request': [{'type': 'UK Postcode', 'query': 'E4'}],
'weather': [{'date': '2018-11-28',
'astronomy': [{'sunrise': '07:39 AM',
'sunset': '03:57 PM',
'moonrise': '09:46 PM',
'moonset': '12:21 PM',
'moon_phase': 'Last Quarter',
'moon_illumination': '69'}],
'maxtempC': '13',
'maxtempF': '56',
'mintempC': '10',
'mintempF': '51',
'totalSnow_cm': '0.0',
'sunHour': '3.1',
'uvIndex': '0',
'hourly': [{'time': '0',
'tempC': '9',
'tempF': '48',
'windspeedMiles': '4',
'windspeedKmph': '7',
'winddirDegree': '212',
'winddir16Point': 'SSW',
'weatherCode': '296',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png'}],
'weatherDesc': [{'value': 'Light rain'}],
'precipMM': '1.6',
'humidity': '93',
'visibility': '11',
'pressure': '1010',
'cloudcover': '100',
'HeatIndexC': '9',
'HeatIndexF': '48',
'DewPointC': '8',
'DewPointF': '46',
'WindChillC': '8',
'WindChillF': '46',
'WindGustMiles': '8',
'WindGustKmph': '12',
'FeelsLikeC': '8',
'FeelsLikeF': '46'},
{'time': '300',
'tempC': '9',
'tempF': '48',
'windspeedMiles': '7',
'windspeedKmph': '11',
'winddirDegree': '174',
'winddir16Point': 'S',
'weatherCode': '266',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png'}],
'weatherDesc': [{'value': 'Light drizzle'}],
'precipMM': '0.5',
'humidity': '92',
'visibility': '15',
'pressure': '1009',
'cloudcover': '100',
'HeatIndexC': '9',
'HeatIndexF': '48',
'DewPointC': '8',
'DewPointF': '46',
'WindChillC': '7',
'WindChillF': '45',
'WindGustMiles': '11',
'WindGustKmph': '17',
'FeelsLikeC': '7',
'FeelsLikeF': '45'},
{'time': '600',
'tempC': '11',
'tempF': '51',
'windspeedMiles': '11',
'windspeedKmph': '18',
'winddirDegree': '175',
'winddir16Point': 'S',
'weatherCode': '266',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png'}],
'weatherDesc': [{'value': 'Light drizzle'}],
'precipMM': '0.7',
'humidity': '95',
'visibility': '14',
'pressure': '1008',
'cloudcover': '100',
'HeatIndexC': '11',
'HeatIndexF': '51',
'DewPointC': '10',
'DewPointF': '50',
'WindChillC': '8',
'WindChillF': '47',
'WindGustMiles': '19',
'WindGustKmph': '31',
'FeelsLikeC': '8',
'FeelsLikeF': '47'},
{'time': '900',
'tempC': '12',
'tempF': '54',
'windspeedMiles': '12',
'windspeedKmph': '19',
'winddirDegree': '208',
'winddir16Point': 'SSW',
'weatherCode': '296',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png'}],
'weatherDesc': [{'value': 'Light rain'}],
'precipMM': '0.8',
'humidity': '93',
'visibility': '14',
'pressure': '1008',
'cloudcover': '100',
'HeatIndexC': '12',
'HeatIndexF': '54',
'DewPointC': '11',
'DewPointF': '52',
'WindChillC': '10',
'WindChillF': '51',
'WindGustMiles': '20',
'WindGustKmph': '32',
'FeelsLikeC': '10',
'FeelsLikeF': '51'},
{'time': '1200',
'tempC': '13',
'tempF': '56',
'windspeedMiles': '16',
'windspeedKmph': '26',
'winddirDegree': '209',
'winddir16Point': 'SSW',
'weatherCode': '266',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png'}],
'weatherDesc': [{'value': 'Light drizzle'}],
'precipMM': '0.2',
'humidity': '87',
'visibility': '15',
'pressure': '1008',
'cloudcover': '100',
'HeatIndexC': '13',
'HeatIndexF': '56',
'DewPointC': '11',
'DewPointF': '52',
'WindChillC': '11',
'WindChillF': '52',
'WindGustMiles': '25',
'WindGustKmph': '41',
'FeelsLikeC': '11',
'FeelsLikeF': '52'},
{'time': '1500',
'tempC': '13',
'tempF': '56',
'windspeedMiles': '19',
'windspeedKmph': '31',
'winddirDegree': '205',
'winddir16Point': 'SSW',
'weatherCode': '266',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png'}],
'weatherDesc': [{'value': 'Light drizzle'}],
'precipMM': '0.5',
'humidity': '84',
'visibility': '15',
'pressure': '1007',
'cloudcover': '100',
'HeatIndexC': '13',
'HeatIndexF': '56',
'DewPointC': '11',
'DewPointF': '51',
'WindChillC': '11',
'WindChillF': '52',
'WindGustMiles': '30',
'WindGustKmph': '48',
'FeelsLikeC': '11',
'FeelsLikeF': '52'},
{'time': '1800',
'tempC': '10',
'tempF': '51',
'windspeedMiles': '19',
'windspeedKmph': '31',
'winddirDegree': '216',
'winddir16Point': 'SW',
'weatherCode': '122',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0004_black_low_cloud.png'}],
'weatherDesc': [{'value': 'Overcast'}],
'precipMM': '0.0',
'humidity': '82',
'visibility': '15',
'pressure': '1007',
'cloudcover': '100',
'HeatIndexC': '12',
'HeatIndexF': '54',
'DewPointC': '11',
'DewPointF': '51',
'WindChillC': '10',
'WindChillF': '51',
'WindGustMiles': '30',
'WindGustKmph': '49',
'FeelsLikeC': '10',
'FeelsLikeF': '51'},
{'time': '2100',
'tempC': '6',
'tempF': '44',
'windspeedMiles': '15',
'windspeedKmph': '23',
'winddirDegree': '219',
'winddir16Point': 'SW',
'weatherCode': '353',
'weatherIconUrl': [{'value': 'http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0025_light_rain_showers_night.png'}],
'weatherDesc': [{'value': 'Light rain shower'}],
'precipMM': '0.5',
'humidity': '82',
'visibility': '15',
'pressure': '1008',
'cloudcover': '70',
'HeatIndexC': '8',
'HeatIndexF': '47',
'DewPointC': '10',
'DewPointF': '50',
'WindChillC': '6',
'WindChillF': '44',
'WindGustMiles': '24',
'WindGustKmph': '39',
'FeelsLikeC': '6',
'FeelsLikeF': '44'}]}]}}
</code></pre>
<p>Example csv of what I'd like:</p>
<pre><code>"request__type", "request__query", "weather__date", "weather__astronomy__sunrise", "weather__astronomy__sunset", "weather__astronomy__moonrise", "weather__astronomy__moonset", "weather__astronomy__moon_phase", "weather__astronomy__moon_illumination", "weather__maxtempC", "weather__maxtempF", "weather__mintempC", "weather__mintempF", "weather__totalSnow_cm", "weather__sunHour", "weather__uvIndex", "weather__hourly__time", "weather__hourly__tempC", "weather__hourly__tempF", "weather__hourly__windspeedMiles", "weather__hourly__windspeedKmph", "weather__hourly__winddirDegree", "weather__hourly__winddir16Point", "weather__hourly__weatherCode", "weather__hourly__weatherIconUrl__value", "weather__hourly__weatherDesc__value", "weather__hourly__precipMM", "weather__hourly__humidity", "weather__hourly__visibility", "weather__hourly__pressure", "weather__hourly__cloudcover", "weather__hourly__HeatIndexC", "weather__hourly__HeatIndexF", "weather__hourly__DewPointC", "weather__hourly__DewPointF", "weather__hourly__WindChillC", "weather__hourly__WindChillF", "weather__hourly__WindGustMiles", "weather__hourly__WindGustKmph", "weather__hourly__FeelsLikeC", "weather__hourly__FeelsLikeF"
"UK Postcode", "E4", "2018-11-28", "07:39 AM", "03:57 PM", "09:46 PM", "12:21 PM", "Last Quarter", "69", "13", "56", "10", "51", "0.0", "3.1", "0", "0", "9", "48", "4", "7", "212", "SSW", "296", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png", "Light rain", "1.6", "93", "11", "1010", "100", "9", "48", "8", "46", "8", "46", "8", "12", "8", "46"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "300", "9", "48", "7", "11", "174", "S", "266", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png", "Light drizzle", "0.5", "92", "15", "1009", "100", "9", "48", "8", "46", "7", "45", "11", "17", "7", "45"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "600", "11", "51", "11", "18", "175", "S", "266", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0033_cloudy_with_light_rain_night.png", "Light drizzle", "0.7", "95", "14", "1008", "100", "11", "51", "10", "50", "8", "47", "19", "31", "8", "47"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "900", "12", "54", "12", "19", "208", "SSW", "296", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png", "Light rain", "0.8", "93", "14", "1008", "100", "12", "54", "11", "52", "10", "51", "20", "32", "10", "51"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "1200", "13", "56", "16", "26", "209", "SSW", "266", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png", "Light drizzle", "0.2", "87", "15", "1008", "100", "13", "56", "11", "52", "11", "52", "25", "41", "11", "52"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "1500", "13", "56", "19", "31", "205", "SSW", "266", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0017_cloudy_with_light_rain.png", "Light drizzle", "0.5", "84", "15", "1007", "100", "13", "56", "11", "51", "11", "52", "30", "48", "11", "52"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "1800", "10", "51", "19", "31", "216", "SW", "122", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0004_black_low_cloud.png", "Overcast", "0.0", "82", "15", "1007", "100", "12", "54", "11", "51", "10", "51", "30", "49", "10", "51"
"", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "2100", "6", "44", "15", "23", "219", "SW", "353", "http://cdn.worldweatheronline.net/images/wsymbols01_png_64/wsymbol_0025_light_rain_showers_night.png", "Light rain shower", "0.5", "82", "15", "1008", "70", "8", "47", "10", "50", "6", "44", "24", "39", "6", "44"
</code></pre>
|
<p>I saved your JSON to a file ('test.json'; by the way, the single quotes need to be swapped for double quotes so that the <code>json</code> module can parse it) and read it with the <code>json</code> module. You want to end up with a pandas DataFrame, and we will need defaultdicts to make the flattening easy:</p>
<pre><code>import json
import pandas as pd
from collections import defaultdict
</code></pre>
<p>Since you will be working with a nested structure that looks different in every new context, you first need a function that flattens a general nested object composed of arbitrarily nested <code>dict</code> and <code>list</code> objects until you get a shallow <code>dict</code>. My suggestion:</p>
<pre><code>def flatten_json(json_obj):
    flattened_dict = defaultdict(list)

    def flatten(structure):
        # `key` is deliberately kept global: when we reach a leaf value, `key`
        # still holds the last dict key we descended through, so the leaf is
        # appended under that column name.
        global key
        if type(structure) is dict:
            for key in structure:
                flatten(structure[key])
        elif type(structure) is list:
            for item in structure:
                flatten(item)
        else:
            flattened_dict[key].append(structure)

    flatten(json_obj)
    return flattened_dict
</code></pre>
<p>Almost there. The only thing we still need to fix is that pandas can only build a DataFrame out of a dictionary if all the values have the same length. So we need a function that pads the shorter lists up to the length of the longest:</p>
<pre><code>def fill_up_dict(dicti):
    # pad every list with empty strings so that all columns end up the same length
    max_length = max(len(liist) for liist in dicti.values())
    for value in dicti.values():
        remaining_length = max_length - len(value)
        value.extend([""] * remaining_length)
    return dicti
</code></pre>
<p>Done. Just read your JSON, flatten it, fill it up and dump it into a Dataframe:</p>
<pre><code>with open('test.json', 'r') as f:
json_data = json.load(f)
flat_dict = flatten_json(json_data)
homogeneous_dict = fill_up_dict(flat_dict)
df = pd.DataFrame(homogeneous_dict)
</code></pre>
<p>where the resulting <code>df</code> looks as we want:</p>
<pre><code> DewPointC DewPointF FeelsLikeC FeelsLikeF HeatIndexC HeatIndexF WindChillC
0 8 46 8 46 9 48 8
1 8 46 7 45 9 48 7
2 10 50 8 47 11 51 8
3 11 52 10 51 12 54 10
4 11 52 11 52 13 56 11
5 11 51 11 52 13 56 11
6 11 51 10 51 12 54 10
7 10 50 6 44 8 47 6
8
9
10
11
12
13
14
15
WindChillF WindGustKmph WindGustMiles ... totalSnow_cm \
0 46 12 8 ... 0.0
1 45 17 11 ...
2 47 31 19 ...
3 51 32 20 ...
4 52 41 25 ...
5 52 48 30 ...
6 51 49 30 ...
7 44 39 24 ...
8 ...
9 ...
10 ...
11 ...
12 ...
13 ...
14 ...
15 ...
type uvIndex value \
0 UK Postcode 0 http://cdn.worldweatheronline.net/images/wsymb...
1 Light rain
2 http://cdn.worldweatheronline.net/images/wsymb...
3 Light drizzle
4 http://cdn.worldweatheronline.net/images/wsymb...
5 Light drizzle
6 http://cdn.worldweatheronline.net/images/wsymb...
7 Light rain
8 http://cdn.worldweatheronline.net/images/wsymb...
9 Light drizzle
10 http://cdn.worldweatheronline.net/images/wsymb...
11 Light drizzle
12 http://cdn.worldweatheronline.net/images/wsymb...
13 Overcast
14 http://cdn.worldweatheronline.net/images/wsymb...
15 Light rain shower
visibility weatherCode winddir16Point winddirDegree windspeedKmph \
0 11 296 SSW 212 7
1 15 266 S 174 11
2 14 266 S 175 18
3 14 296 SSW 208 19
4 15 266 SSW 209 26
5 15 266 SSW 205 31
6 15 122 SW 216 31
7 15 353 SW 219 23
8
9
10
11
12
13
14
15
windspeedMiles
0 4
1 7
2 11
3 12
4 16
5 19
6 19
7 15
8
9
10
11
12
13
14
15
[16 rows x 40 columns]
</code></pre>
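<p>If you also want the CSV file itself, a final <code>to_csv</code> call is enough. A minimal sketch (the file name is just a placeholder); note that the column names and their order come from the flattened keys, so they will not exactly match the <code>request__type</code>-style headers shown in the question unless you rename the columns first:</p>
<pre><code>df.to_csv('weather_flat.csv', index=False)
</code></pre>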
<p>Good luck with your project!</p>
<p>D.</p>
|
python|json|pandas
| 1
|
8,964
| 17,556,913
|
Python buggy histogram?
|
<p>I am seeing strange behavior with this very simple code</p>
<pre><code>import numpy as np
[y, binEdges] = np.histogram(x, xout)
</code></pre>
<p>where x and xout are numpy arrays (xout describes the edges of the bins that are equally spaced).</p>
<p>If I do</p>
<pre><code>np.sum(y)
</code></pre>
<p>the value is not equal to the number of elements in x (x.shape); it is a lot less than x.shape and I cannot figure out why. Is it a bug in np.histogram? If you need them I can upload the x and xout numpy arrays, but they are very long (x.shape is 19133 float64 and xout.shape is 1360 float64). Let me know if I did something wrong in the above code.</p>
|
<p><code>np.histogram</code> only counts the values that fall within the range spanned by the bin edges; anything below <code>xout[0]</code> or above <code>xout[-1]</code> is silently dropped, which is why the counts do not add up to the size of x. Try this:</p>
<pre><code>y.sum() + (x < xout[0]).sum() + (x > xout[-1]).sum()
</code></pre>
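<p>A minimal sketch that reproduces the effect with made-up data (these arrays are purely illustrative, not the asker's):</p>
<pre><code>import numpy as np

x = np.array([-1.0, 0.5, 1.5, 2.5, 10.0])
xout = np.array([0.0, 1.0, 2.0, 3.0])   # bin edges

y, edges = np.histogram(x, xout)
print(y.sum())   # 3, not 5: -1.0 and 10.0 lie outside the outermost edges
print(y.sum() + (x < xout[0]).sum() + (x > xout[-1]).sum())   # 5 == x.size
</code></pre>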
|
python|numpy|histogram
| 1
|
8,965
| 17,315,881
|
How can I check if a Pandas dataframe's index is sorted
|
<p>I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again.</p>
<p>e.g. I can test whether an index is unique with <code>index.is_unique</code>; is there a similar way for testing sorted?</p>
|
<p>How about:</p>
<p><code>df.index.is_monotonic</code></p>
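<p>Note that recent pandas versions deprecate (and eventually remove) <code>Index.is_monotonic</code> in favour of the more explicit properties, so the forward-compatible spelling is:</p>
<pre><code>df.index.is_monotonic_increasing   # True if the index is sorted ascending
df.index.is_monotonic_decreasing   # True if the index is sorted descending
</code></pre>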
|
python|pandas
| 86
|
8,966
| 20,115,680
|
Dict not hashable python
|
<p>I looked online and can't seem to understand much of it. I'm new to Python and was wondering how I can fix this.</p>
<p>when running:</p>
<pre><code>results = getRecommendations(userCompare[0], userCompare[0]['1'], sim_distance)
</code></pre>
<p>I get this error:</p>
<pre><code> TypeError Traceback (most recent call last)
<ipython-input-147-4d74cac55074> in <module>()
----> 1 results = getRecommendations(userCompare[0], userCompare[0]['1'], sim_distance)
<ipython-input-54-5f2d7e0dd3ba> in getRecommendations(data, person, similarity)
5 for other in data:
6 if other==person: continue #dont compare self
----> 7 sim=similarity(data, person, other)
8 if sim<=0: continue #ignore scores of 0 or lower
9 for item in data[other]:
<ipython-input-146-b30288308fee> in sim_distance(data, c1, c2)
2 def sim_distance(data, c1, c2):
3 si = {} #get the list of shared items
----> 4 for item in data[c1]:
5 if item in data[c2]:
6 si[item] = 1
TypeError: unhashable type: 'dict'
</code></pre>
<p>To create userCompare I did the following:</p>
<pre><code> movies = {}
prefsList = []
def loadMovieLens(path = directory):
# Get movie titles
for line in open(path + 'u.item'):
(id, title) = line.split('|')[0:2]
movies[id] = title
# Load data
for k in range(len(centroidsM)):
prefs ={}
for rows in range(len(centroidsM[k])):
for columns in range(len(centroidsM[k][0,:])):
user = str(rows+1)
movieid =str(columns+1)
prefs.setdefault(user,{})
prefs[user][movies[movieid]] = float(centroidsM[k][rows,columns])
prefsList.append(prefs)
return prefsList
</code></pre>
<p>I basically had an array of centroids with different K values; each K value is a kx1682 matrix (k meaning number of clusters), so I loaded that into a list of dicts. I hope this makes sense. I'm starting to hate Python, or at least dicts.</p>
|
<p>You can't use a <code>dict</code> as a dictionary key. What would happen if I did:</p>
<pre><code>d = {}
k1 = {1: 2}
k2 = {2: 1}
d[k1] = "a"
d[k2] = "b"
k1[2] = 1
k2[1] = 2
</code></pre>
<p>I now have <code>k2 == k1</code>, so what does <code>d[{1:2, 2:1}]</code> do? Well, that's why you can't use a <code>dict</code> as a key.</p>
<p>If you really need to do that (e.g. to use in a <code>Counter</code>), here's an option: freezing the <code>dict</code>:</p>
<pre><code>#coding:utf-8
FROZEN_TAG = "__frozen__"

def freeze_dict(obj):
    # recursively convert a dict into a hashable tuple of (key, value) pairs
    if isinstance(obj, dict):
        dict_items = list(obj.items())
        dict_items.append((FROZEN_TAG, True))
        return tuple([(k, freeze_dict(v)) for k, v in dict_items])
    return obj

def unfreeze_dict(obj):
    # reverse of freeze_dict: turn the tagged tuples back into dicts
    if isinstance(obj, tuple):
        if (FROZEN_TAG, True) in obj:
            out = dict((k, unfreeze_dict(v)) for k, v in obj)
            del out[FROZEN_TAG]
            return out
    return obj
</code></pre>
<p>It's from <a href="https://github.com/Scalr/cloudbench/blob/master/cloudbench/utils/freeze.py" rel="nofollow">here</a>.</p>
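<p>A quick usage sketch (my own example values, not from the question):</p>
<pre><code>scores = {}
key = freeze_dict({'user': '1', 'movie': 'Toy Story'})
scores[key] = 4.5                 # works, because the frozen form is hashable
print(unfreeze_dict(key))         # {'user': '1', 'movie': 'Toy Story'}
</code></pre>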
|
python|dictionary|hash|numpy
| 1
|
8,967
| 12,374,781
|
How to find all neighbors of a given point in a delaunay triangulation using scipy.spatial.Delaunay?
|
<p>I have been searching for an answer to this question but cannot find anything useful.</p>
<p>I am working with the python scientific computing stack (scipy,numpy,matplotlib) and I have a set of 2 dimensional points, for which I compute the Delaunay traingulation (<a href="https://en.wikipedia.org/wiki/Delaunay_triangulation" rel="noreferrer">wiki</a>) using <code>scipy.spatial.Delaunay</code>.</p>
<p>I need to write a function that, given any point <code>a</code>, will return all other points which are vertices of any simplex (i.e. triangle) that <code>a</code> is also a vertex of (the neighbors of <code>a</code> in the triangulation). However, the documentation for <code>scipy.spatial.Delaunay</code> (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html" rel="noreferrer">here</a>) is pretty bad, and I can't for the life of me understand how the simplices are being specified or I would go about doing this. Even just an explanation of how the <code>neighbors</code>, <code>vertices</code> and <code>vertex_to_simplex</code> arrays in the Delaunay output are organized would be enough to get me going.</p>
<p>Much thanks for any help.</p>
|
<p>I figured it out on my own, so here's an explanation for any future person who is confused by this.</p>
<p>As an example, let's use the simple lattice of points that I was working with in my code, which I generate as follows</p>
<pre><code>import numpy as np
import itertools as it
from matplotlib import pyplot as plt
import scipy as sp
import scipy.spatial            # needed so that sp.spatial is available below

inputs = list(it.product([0,1,2],[0,1,2]))
# mksite is a small helper of mine (not shown here) that maps an (i, j) index
# pair to the (x, y) coordinates of a site on a triangular lattice
lattice = [None] * len(inputs)  # a plain list, so that we can assign into it
i = 0
for pair in inputs:
    lattice[i] = mksite(pair[0], pair[1])
    i = i + 1
</code></pre>
<p>The details here are not really important; suffice it to say that this generates a regular triangular lattice in which the distance between a point and any of its six nearest neighbors is 1.</p>
<p>To plot it</p>
<pre><code>plt.plot(*np.transpose(lattice), marker='o', ls='')
plt.gca().set_aspect('equal')
</code></pre>
<p><a href="https://i.stack.imgur.com/xJFev.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xJFev.png" alt="enter image description here"></a></p>
<p>Now compute the triangulation:</p>
<pre><code>dela = sp.spatial.Delaunay
triang = dela(lattice)
</code></pre>
<p>Let's look at what this gives us.</p>
<pre><code>triang.points
</code></pre>
<p>output:</p>
<pre><code>array([[ 0. , 0. ],
[ 0.5 , 0.8660254 ],
[ 1. , 1.73205081],
[ 1. , 0. ],
[ 1.5 , 0.8660254 ],
[ 2. , 1.73205081],
[ 2. , 0. ],
[ 2.5 , 0.8660254 ],
[ 3. , 1.73205081]])
</code></pre>
<p>Simple: just an array of all nine points in the lattice illustrated above. Now let's look at:</p>
<pre><code>triang.vertices
</code></pre>
<p>output:</p>
<pre><code>array([[4, 3, 6],
[5, 4, 2],
[1, 3, 0],
[1, 4, 2],
[1, 4, 3],
[7, 4, 6],
[7, 5, 8],
[7, 5, 4]], dtype=int32)
</code></pre>
<p>In this array, each row represents one simplex (triangle) in the triangulation. The three entries in each row are the indices of the vertices of that simplex in the points array we just saw. So for example the first simplex in this array, <code>[4, 3, 6]</code> is composed of the points:</p>
<pre><code>[ 1.5 , 0.8660254 ]
[ 1. , 0. ]
[ 2. , 0. ]
</code></pre>
<p>It's easy to see this by drawing the lattice on a piece of paper, labeling each point according to its index, and then tracing through each row in <code>triang.vertices</code>.</p>
<p>This is all the information we need to write the function I specified in my question.
It looks like:</p>
<pre><code>def find_neighbors(pindex, triang):
    neighbors = list()
    for simplex in triang.vertices:
        if pindex in simplex:
            neighbors.extend([simplex[i] for i in range(len(simplex)) if simplex[i] != pindex])
            '''
            this is a one-liner: if a simplex contains the point we're interested in,
            extend the neighbors list with all the *other* point indices in the simplex
            '''
    # now we just have to strip out all the duplicate indices and return the neighbors list:
    return list(set(neighbors))
</code></pre>
<p>And that's it! I'm sure the function above could do with some optimization; it's just what I came up with in a few minutes. If anyone has any suggestions, feel free to post them. Hopefully this helps somebody in the future who is as confused about this as I was.</p>
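<p>As a side note (not part of the approach above): newer SciPy versions expose this lookup directly through <code>Delaunay.vertex_neighbor_vertices</code>, which avoids looping over every simplex; also, the <code>triang.vertices</code> attribute used above is called <code>triang.simplices</code> in newer releases. A minimal sketch:</p>
<pre><code>def find_neighbors_fast(pindex, triang):
    indptr, indices = triang.vertex_neighbor_vertices
    return indices[indptr[pindex]:indptr[pindex + 1]]

print(find_neighbors_fast(4, triang))   # indices of the neighbors of point 4
</code></pre>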
|
python|numpy|scipy|triangulation|delaunay
| 18
|
8,968
| 12,418,234
|
logarithmically spaced integers
|
<p>Say I have a 10,000 pt vector that I want to take a slice of only 100 logarithmically spaced points. I want a function to give me integer values for the indices. Here's a simple solution that just uses around + logspace and then gets rid of duplicates.</p>
<pre><code>from numpy import around, logspace, log10, array, uint64   # names used below

def genLogSpace(array_size, num):
    lspace = around(logspace(0, log10(array_size), num)).astype(uint64)
    return array(sorted(set(lspace.tolist()))) - 1

ls = genLogSpace(1e4, 100)
print ls.size
>>84
print ls
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 13, 14, 15, 17, 19, 21, 23, 25, 27, 30,
33, 37, 40, 44, 49, 54, 59, 65, 71, 78, 86,
94, 104, 114, 125, 137, 151, 166, 182, 200, 220, 241,
265, 291, 319, 350, 384, 422, 463, 508, 558, 613, 672,
738, 810, 889, 976, 1071, 1176, 1291, 1416, 1555, 1706, 1873,
2056, 2256, 2476, 2718, 2983, 3274, 3593, 3943, 4328, 4750, 5213,
5721, 6279, 6892, 7564, 8301, 9111, 9999], dtype=uint64)
</code></pre>
<p>Notice that there were 16 duplicates, so now I only have 84 points.</p>
<p>Does anyone have a solution that will efficiently ensure the number of output samples is num? For this specific example, input values for num of 121 and 122 give 100 output points.</p>
|
<p>This is a bit tricky. You can't always get exactly logarithmically spaced numbers. As in your example, the first part is rather linear. If you are OK with that, I have a solution. But for the solution, you should understand why you have duplicates.</p>
<p>Logarithmic scale satisfies the condition:</p>
<pre><code>s[n+1]/s[n] = constant
</code></pre>
<p>Let's call this constant <code>r</code> for <code>ratio</code>. For <code>n</code> of these numbers between range <code>1...size</code>, you'll get:</p>
<pre><code>1, r, r**2, r**3, ..., r**(n-1)=size
</code></pre>
<p>So this gives you:</p>
<pre><code>r = size ** (1/(n-1))
</code></pre>
<p>In your case, with <code>n=100</code> and <code>size=10000</code>, <code>r</code> will be <code>~1.0974987654930561</code>, which means that if you start with <code>1</code>, your next number will be <code>1.0974987654930561</code>, which is then rounded to <code>1</code> again. Thus your duplicates. This issue only affects small numbers; once the values get large enough, multiplying by the ratio lands on a different rounded integer.</p>
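<p>A one-line check of that ratio, just plugging the question's numbers into the formula above:</p>
<pre><code>>>> 10000 ** (1.0 / 99)   # size ** (1 / (n - 1)) with size=10000, n=100
1.0974987654930561
</code></pre>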
<p>Keeping this in mind, your best bet is to add consecutive integers up to a certain point so that this multiplication with the ratio is no longer an issue. Then you can continue with the logarithmic scaling. The following function does that:</p>
<pre><code>import numpy as np
def gen_log_space(limit, n):
result = [1]
if n>1: # just a check to avoid ZeroDivisionError
ratio = (float(limit)/result[-1]) ** (1.0/(n-len(result)))
while len(result)<n:
next_value = result[-1]*ratio
if next_value - result[-1] >= 1:
# safe zone. next_value will be a different integer
result.append(next_value)
else:
# problem! same integer. we need to find next_value by artificially incrementing previous value
result.append(result[-1]+1)
# recalculate the ratio so that the remaining values will scale correctly
ratio = (float(limit)/result[-1]) ** (1.0/(n-len(result)))
# round, re-adjust to 0 indexing (i.e. minus 1) and return np.uint64 array
return np.array(list(map(lambda x: round(x)-1, result)), dtype=np.uint64)
</code></pre>
<p><em>Python 3 update: Last line used to be</em> <code>return np.array(map(lambda x: round(x)-1, result), dtype=np.uint64)</code> <em>in Python 2</em></p>
<p>Here are some examples using it:</p>
<pre><code>In [157]: x = gen_log_space(10000, 100)
In [158]: x.size
Out[158]: 100
In [159]: len(set(x))
Out[159]: 100
In [160]: y = gen_log_space(2000, 50)
In [161]: y.size
Out[161]: 50
In [162]: len(set(y))
Out[162]: 50
In [163]: y
Out[163]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11,
13, 14, 17, 19, 22, 25, 29, 33, 38, 43, 49,
56, 65, 74, 84, 96, 110, 125, 143, 164, 187, 213,
243, 277, 316, 361, 412, 470, 536, 612, 698, 796, 908,
1035, 1181, 1347, 1537, 1753, 1999], dtype=uint64)
</code></pre>
<p>And just to show you how logarithmic the results are, here is a semilog plot of the output for <code>x = gen_log_space(10000, 100)</code> (as you can see, the left part is not really logarithmic):</p>
<p><img src="https://i.stack.imgur.com/5SGXk.png" alt="enter image description here"></p>
|
python|numpy|resampling
| 19
|
8,969
| 72,058,273
|
How to install mediapipe with miniforge3?
|
<p>I am on a new Mac M1 trying to install mediapipe and TensorFlow in the same Conda env. Installing both libraries on M1 appears to have a lot of issues. I was finally able to get TensorFlow to install using this tutorial:</p>
<p><a href="https://betterprogramming.pub/installing-tensorflow-on-apple-m1-with-new-metal-plugin-6d3cb9cb00ca" rel="nofollow noreferrer">https://betterprogramming.pub/installing-tensorflow-on-apple-m1-with-new-metal-plugin-6d3cb9cb00ca</a></p>
<p>This tutorial requires the Miniforge3 package manager and python 3.9.</p>
<p>I created a Conda env using miniforge3 and TensorFlow works great now.</p>
<p>Now when I try to install mediapipe into this env, with either of these commands:</p>
<pre><code>pip install mediapipe
</code></pre>
<p>or</p>
<pre><code> ~/miniforge3/envs/vision/bin/pip install mediapipe
</code></pre>
<p>I get this error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement mediapipe (from versions: none)
ERROR: No matching distribution found for mediapipe
</code></pre>
<p>I've done some looking and found that mediapipe has issues with python >3.7.</p>
<p>I tried downgrading python using this command:</p>
<pre><code>conda install python=3.x
</code></pre>
<p>I was able to downgrade to 3.8, but no lower. Python 3.6 and 3.7 were not found by Conda:</p>
<pre><code>(base) % conda install python=3.7
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- python=3.7
Current channels:
- https://conda.anaconda.org/conda-forge/osx-arm64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
<p>What should I do? I need to use both mediapipe and TensorFlow.</p>
|
<p>My solution was also to create a conda environment with Python 3.7 and x86_64 architecture. Python 3.7 is required for Mediapipe to work with TensorFlow (<a href="https://google.github.io/mediapipe/getting_started/install.html" rel="nofollow noreferrer">https://google.github.io/mediapipe/getting_started/install.html</a>).</p>
<ol>
<li><p>Follow this tutorial for installing tensorflow on M1 (<a href="https://towardsdatascience.com/installing-tensorflow-on-the-m1-mac-410bb36b776" rel="nofollow noreferrer">https://towardsdatascience.com/installing-tensorflow-on-the-m1-mac-410bb36b776</a>) up to point 3.</p>
</li>
<li><p>For point 4, download the environment.yml file with tensorflow dependencies (<a href="https://raw.githubusercontent.com/mwidjaja1/DSOnMacARM/main/environment.yml" rel="nofollow noreferrer">https://raw.githubusercontent.com/mwidjaja1/DSOnMacARM/main/environment.yml</a>), but modify the line ‘python=3.8’ to ‘python=3.7’. Then create the conda environment using x86_64 architecture:</p>
</li>
</ol>
<pre><code>CONDA_SUBDIR=osx-64 conda env create --file=environment.yml --name my_env
</code></pre>
<ol start="3">
<li>Activate the environment</li>
</ol>
<pre><code>conda activate my_env
</code></pre>
<ol start="4">
<li>Install the mediapipe package, and other packages like opencv.</li>
</ol>
<pre><code>pip install mediapipe
pip install opencv-python
</code></pre>
<ol start="5">
<li>Check that all packages are installed.</li>
</ol>
<pre><code>python
>>> import tensorflow
>>> import mediapipe
>>> import cv2
</code></pre>
|
python|tensorflow|opencv|conda|mediapipe
| 0
|
8,970
| 72,123,992
|
panda df not showing all rows after loading from MS SQL
|
<p>I'm using <em>Pandas</em> with latest <em>sqlalchemy</em> (<code>1.4.36</code>) to query a MS SQL DB, using the following Python <code>3.10.3</code> [Win] snippet:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
# ...
def get_table_columns():
SQLA = 'SELECT TABLE_NAME,COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE \'pa%\' ORDER BY TABLE_NAME;'
# Use pandas for getting named table & columns
conn_str = set_db_info()
conn_url = URL.create("mssql+pyodbc", query={"odbc_connect": conn_str})
engine = create_engine(conn_url)
df = pd.read_sql(SQLA, engine)
# Permanently changes the pandas settings
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
print(df)
return df
</code></pre>
<p>However, this only prints the first 292 rows, not all 2351 rows. Using the REPL, I can check this with:</p>
<pre class="lang-py prettyprint-override"><code>>>> z = get_table_columns()
>>> z
TABLE_NAME COLUMN_NAME
0 paacc accesscd
... # <-- I added these
292 paapepi piapeheadat
>>> z.count()
TABLE_NAME 2351
COLUMN_NAME 2351
dtype: int64
>>> z.shape[0]
2351
>>> z.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2351 entries, 0 to 2350
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 TABLE_NAME 2351 non-null object
1 COLUMN_NAME 2351 non-null object
dtypes: object(2)
memory usage: 36.9+ KB
</code></pre>
<hr />
<p><strong>Q: What is going on, and why can't I print/show all the rows?</strong></p>
|
<p>To display all the rows in pandas, you should set the display option either to <code>None</code> or to one more than the dataframe's length, as you have done in your code:</p>
<pre><code>pd.set_option('display.max_rows', None)
pd.set_option('display.max_rows', z.shape[0] + 1)   # or: one more row than the dataframe has
</code></pre>
<p>Given that this is not the problem, it may be that the IDE or program you use automatically crops the output from the view (for example, IPython truncates very large outputs).</p>
<p>Another thing to try is to explicitly print the dataframe instead of relying on its repr:</p>
<pre><code>>>> print(z)
</code></pre>
<p>To inspect everything, I would recommend dumping the dataframe to a CSV/Excel file instead.</p>
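<p>For example (the file name here is just a placeholder):</p>
<pre><code>z.to_csv('table_columns.csv', index=False)
# or, to dump the full table to the console without any truncation:
print(z.to_string())
</code></pre>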
|
python|sql-server|pandas|dataframe|sqlalchemy
| 1
|
8,971
| 16,960,068
|
Populate predefined numpy array with arrays as columns
|
<p>Something I can't figure out by reading the Python documentation and Stack Overflow. Probably I'm thinking in the wrong direction.</p>
<p>Let's say I've a predefined 2D Numpy array as follow:</p>
<pre><code>a = np.zeros(shape=(3,2))
print a
array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])
</code></pre>
<p>Now I would like to populate each column of this 2D array with a 1D data array (one by one), as in:</p>
<pre><code>b = np.array([1,2,3])
# Some code, that I just can't figure out. I've studied insert, column_stack,
# h_stack, append. Nothing seems to do what I need
print a
array([[ 1., 0.],
[ 2., 0.],
[ 3., 0.]])
c = np.array([4,5,6])
# Some code, that I just can't figure out. I've studied insert, column_stack,
# h_stack, append. Nothing seems to do what I need
print a
array([[ 1., 4.],
[ 2., 5.],
[ 3., 6.]])
</code></pre>
<p>Any suggestions would be appreciated!</p>
|
<p>You can assign to columns with slicing:</p>
<pre><code>>>> a[:,0] = b
>>> a
array([[ 1., 0.],
[ 2., 0.],
[ 3., 0.]])
</code></pre>
<p>To assign them all at once instead of one at a time, use <code>np.column_stack</code>:</p>
<pre><code>>>> np.column_stack((b, c))
array([[1, 4],
[2, 5],
[3, 6]])
</code></pre>
<p>If you need it back in the same array, rather than just having the same name, you can assign to a slice containing the whole matrix (as is common with lists):</p>
<pre><code>>>> a[:] = np.column_stack((b, c))
>>> a
array([[ 1., 4.],
[ 2., 5.],
[ 3., 6.]])
</code></pre>
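<p>And if the columns really do arrive one at a time, the same slice assignment works in a loop (a small illustrative variant of the above):</p>
<pre><code>for j, col in enumerate((b, c)):
    a[:, j] = col
</code></pre>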
|
python|arrays|numpy|populate
| 1
|
8,972
| 17,871,031
|
Modifying 3D array in Python
|
<p>I am trying to perform operations on specific elements within a 3d array in python. Here is an example of the array:</p>
<pre><code>[[[ 0.5 0.5 50. ]
[ 50.5 50.5 100. ]
[ 0.5 100.5 50. ]
[ 135. 90. 45. ]]
[[ 50.5 50.5 100. ]
[ 100.5 0.5 50. ]
[ 100.5 100.5 50. ]
[ 45. 90. 45. ]]
[[ 100.5 100.5 50. ]
[ 100.5 100.5 0. ]
[ 0.5 100.5 50. ]
[ 90. 0. 90. ]]
</code></pre>
<p>An example of what I need to do is: take the three values seen in the array, i.e. 0.5, 0.5, 50., take the first element from the 4th row, i.e. 135., and send those four elements into a function. The function then returns new values for the 3 elements, which need to be put back into the array.</p>
<p>I am quite new to Python so I'm having trouble getting it to work. Should I be using a loop? Or something else?</p>
<p>Thanks
Nick</p>
<p>An attempt at a solution:</p>
<pre><code>b = shape(a)              # assumes `from numpy import *`; otherwise use np.shape(a)
triangles = b[0]
for k in range(0,triangles):
    for i in range(0,2):  # note: this only visits rows 0 and 1, not all three vertex rows
        a[k,i,:] = VectMath.rotate_x(a[k,i,0],a[k,i,1],a[k,i,2],a[k,3,2])
</code></pre>
|
<p>You can make your <code>VectMath.rotate_x</code> function rotate an array of vectors, then use slicing to get and put the data in <code>a</code>:</p>
<pre><code>a = np.array(
[[[ 0.5, 0.5, 50., ],
[ 50.5, 50.5, 100., ],
[ 0.5, 100.5, 50., ],
[ 135. , 90. , 45., ]],
[[ 50.5, 50.5, 100., ],
[ 100.5, 0.5, 50., ],
[ 100.5, 100.5, 50., ],
[ 45. , 90. , 45., ]],
[[ 100.5, 100.5, 50., ],
[ 100.5, 100.5, 0., ],
[ 0.5, 100.5, 50., ],
[ 90. , 0. , 90., ]]])
def rotate_x(v, deg):
r = np.deg2rad(deg)
c = np.cos(r)
s = np.sin(r)
m = np.array([[1, 0, 0],
[0, c,-s],
[0, s, c]])
return np.dot(m, v)
vectors = a[:, :-1, :]
angles = a[:, -1, 0]
for i, (vec, angle) in enumerate(zip(vectors, angles)):
vec_rx = rotate_x(vec.T, angle).T
a[i, :-1, :] = vec_rx
print a
</code></pre>
<p>output:</p>
<pre><code>[[[ 5.00000000e-01 -3.57088924e+01 -3.50017857e+01]
[ 5.05000000e+01 -1.06419571e+02 -3.50017857e+01]
[ 5.00000000e-01 -1.06419571e+02 3.57088924e+01]
[ 1.35000000e+02 9.00000000e+01 4.50000000e+01]]
[[ 5.05000000e+01 -3.50017857e+01 1.06419571e+02]
[ 1.00500000e+02 -3.50017857e+01 3.57088924e+01]
[ 1.00500000e+02 3.57088924e+01 1.06419571e+02]
[ 4.50000000e+01 9.00000000e+01 4.50000000e+01]]
[[ 1.00500000e+02 -5.00000000e+01 1.00500000e+02]
[ 1.00500000e+02 6.15385017e-15 1.00500000e+02]
[ 5.00000000e-01 -5.00000000e+01 1.00500000e+02]
[ 9.00000000e+01 0.00000000e+00 9.00000000e+01]]]
</code></pre>
<p>If there are many triangles, it may be faster to rotate all the vectors without a Python for loop.</p>
<p>Here I do the rotation calculation by expanding the matrix product:</p>
<pre><code>x' = x
y' = cos(t)*y - sin(t)*z
z' = sin(t)*y + cos(t)*z
</code></pre>
<p>So we can vectorize these formulas:</p>
<pre><code>a2 = np.array(
[[[ 0.5, 0.5, 50., ],
[ 50.5, 50.5, 100., ],
[ 0.5, 100.5, 50., ],
[ 135. , 90. , 45., ]],
[[ 50.5, 50.5, 100., ],
[ 100.5, 0.5, 50., ],
[ 100.5, 100.5, 50., ],
[ 45. , 90. , 45., ]],
[[ 100.5, 100.5, 50., ],
[ 100.5, 100.5, 0., ],
[ 0.5, 100.5, 50., ],
[ 90. , 0. , 90., ]]])
vectors = a2[:, :-1, :]
angles = a2[:, -1:, 0]
def rotate_x_batch(vectors, angles):
rad = np.deg2rad(angles)
c = np.cos(rad)
s = np.sin(rad)
x = vectors[:, :, 0]
y = vectors[:, :, 1]
z = vectors[:, :, 2]
yr = c*y - s*z
zr = s*y + c*z
vectors[:, :, 1] = yr
vectors[:, :, 2] = zr
rotate_x_batch(vectors, angles)
print np.allclose(a, a2)
</code></pre>
|
python|arrays|list|numpy
| 1
|
8,973
| 17,777,482
|
How to remove every other element of an array in python? (The inverse of np.repeat()?)
|
<p>If I have an array x, and do an <code>np.repeat(x,2)</code>, I'm practically duplicating the array.</p>
<pre><code>>>> x = np.array([1,2,3,4])
>>> np.repeat(x, 2)
array([1, 1, 2, 2, 3, 3, 4, 4])
</code></pre>
<p>How can I do the opposite so that I end up with the original array?</p>
<p>It should also work with a random array y:</p>
<pre><code>>>> y = np.array([1,7,9,2,2,8,5,3,4])
</code></pre>
<p>How can I delete every other element so that I end up with the following?</p>
<pre><code>array([7, 2, 8, 3])
</code></pre>
|
<p><code>y[1::2]</code> should do the job. Starting from the second element (index 1), every second element is taken.</p>
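<p>And for the <code>np.repeat</code> case from the question, slicing with a step of 2 starting at index 0 recovers the original array (a quick check):</p>
<pre><code>>>> x = np.array([1, 2, 3, 4])
>>> np.repeat(x, 2)[::2]
array([1, 2, 3, 4])
>>> y = np.array([1, 7, 9, 2, 2, 8, 5, 3, 4])
>>> y[1::2]
array([7, 2, 8, 3])
</code></pre>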
|
python|arrays|numpy
| 55
|
8,974
| 8,407,749
|
removing baseline signal using fourier transforms
|
<p>I have time-series data for many terms, an example of which is below:</p>
<pre><code>term1 = [0.0, 0.0, 0.0, 0.0, 2.2384935833581433e-06, 3.938767914008819e-06, 0.0, 0.0, 1.1961851263949013e-06, 0.0, 2.278384397623645e-06, 1.100158422812885e-06, 0.0, 1.095521835393462e-06, 0.0, 0.0, 1.6933152148605343e-06, 0.0, 8.460737945563612e-07, 8.949410770794851e-07, 0.0, 2.8698467119209605e-06, 0.0, 0.0, 0.0, 3.9163008188985015e-06, 2.2244961516216576e-06, 0.0, 0.0, 1.9407903674692482e-06, 0.0, 0.0, 0.0, 0.0, 9.514657329616274e-07, 1.94463053478312e-06, 0.0, 0.0, 0.0, 2.0373216961518047e-06, 1.8835690620014428e-06, 0.0, 0.0, 0.0, 0.0, 9.707946148081127e-07, 0.0, 0.0, 1.6121985390256838e-06, 1.9547361301697883e-06, 0.0, 2.2876018840689116e-06, 2.208826914114183e-06, 1.9640500282823203e-06, 0.0, 2.6234669115235785e-06, 0.0, 0.0, 0.0, 1.986207773222741e-06, 1.049193537387487e-06, 1.090723073046815e-06, 0.0, 1.0257546476943088e-06, 9.179053033814713e-07, 0.0, 0.0, 0.0, 0.0, 9.335621182897889e-07, 0.0, 0.0, 0.0, 0.0, 2.1267500494469387e-06, 2.215050381320923e-06, 2.163720040591388e-06, 1.937729136470388e-06, 1.6037643556956889e-06, 1.313906783569333e-06, 0.0, 1.0064645216223805e-06, 1.876346865234201e-06, 9.504447606257348e-07, 2.017974095266539e-06, 0.0, 2.120782823355757e-06, 0.0, 0.0, 0.0, 0.0, 9.216394491176685e-07, 0.0, 0.0, 1.0401357169083422e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0089962853658684e-06, 1.8249773702806084e-06, 0.0, 1.2890950295073852e-06, 5.42812725267281e-06, 1.9185480428411778e-06, 2.6955316172381044e-06, 0.0, 0.0, 1.0070239923466176e-06, 0.0, 1.021152145542773e-06, 9.919749228739498e-07, 1.9293082175989564e-06, 9.802489636317832e-07, 1.0483850676418046e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 1.9369409504181854e-06, 0.0, 4.619620451983665e-06, 0.0, 6.0795324434248845e-06, 0.0, 1.5312669396405198e-06, 1.2797051559320733e-06, 1.1002903666277531e-06, 0.0, 1.0054768323055684e-06, 2.060260561153169e-06, 1.0898719291496056e-06, 3.4605907920600203e-06, 3.3500051925080486e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.5521496510980315e-06, 0.0, 0.0, 0.0, 3.01862187836765e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.849817053093449e-06, 6.5552277941658475e-06, 1.985771944021089e-06, 1.010233667047188e-06, 9.802307070992228e-07, 5.605931075077432e-06, 3.651067480854715e-06, 0.0, 0.0, 2.9476960807432912e-06, 1.834478659509754e-06, 0.0, 0.0, 0.0, 0.0, 3.3801712394749917e-06, 0.0, 2.2884970981856794e-06, 1.02014792144861e-06, 2.906143199237428e-06, 9.807873564740302e-07, 0.0, 2.106593638087213e-06, 3.0329622335542676e-06, 2.9093758515985565e-06, 0.0, 2.12762335960239e-06, 9.614820669172289e-07, 9.264114341404848e-07, 0.0, 0.0, 9.073611487918033e-07, 0.0, 0.0, 0.0, 6.0360958532021484e-06, 0.0, 4.553288270957079e-06, 2.0712553257152562e-06, 3.292603824030081e-06, 2.690786880261329e-06, 2.301011409565074e-06, 2.029661472762958e-06, 0.0, 9.657114492818003e-07, 9.948942029504583e-07, 1.028682761437152e-06, 2.0694207898151387e-06, 3.845369982272845e-06, 9.048250701691842e-07, 1.7726379156614332e-06, 0.0, 9.238711680133629e-07, 9.231112912203808e-07, 9.422814896339613e-07, 0.0, 1.2123519263665934e-06, 0.0, 0.0, 2.1675188628329036e-06, 0.0, 4.498718989767663e-06, 0.0, 0.0, 2.650273839544471e-06, 1.1954029583832415e-06, 4.180999656112778e-06, 1.9036523473937095e-06, 9.75877289286136e-07, 0.0, 2.093618232902467e-06, 1.032899928523325e-06, 0.0, 4.473312219299659e-06, 8.762705923589204e-07, 0.0, 0.0, 1.792797436299666e-06, 0.0, 0.0, 1.1974513445582422e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1404264054329915e-06, 3.061324451410658e-06, 9.84829683554526e-07, 
2.932895354293759e-06, 2.0897069394988045e-06, 0.0, 2.128187093183736e-06, 0.0, 4.686861415132188e-06, 6.37755683086446e-06, 1.8420463661490824e-06, 2.8347403094402523e-06, 1.9033842171380715e-06, 6.909144746582441e-06, 0.0, 0.0, 1.5479612576256442e-06, 5.621978186724636e-06, 2.087185930697078e-06, 1.3168406359462377e-05, 1.9676130885622652e-05, 1.9988766313331908e-05, 3.1801079228204546e-05, 3.322824899588385e-05, 2.0358501231090545e-05, 1.2383952049337664e-05, 1.8052256532066507e-05, 7.770543617518302e-06, 9.226179797741636e-06, 4.400430362089412e-06, 4.333084180992927e-06, 7.477274426653279e-06, 3.0526428255261993e-06, 4.952368123389242e-06, 1.2578584707962998e-05, 0.0, 2.121750274236223e-06, 0.0, 2.38940918273843e-06, 0.0, 1.5693511988273807e-06, 0.0, 0.0, 4.520448247648237e-06, 4.0303440122522456e-05, 2.8660979509446863e-05, 2.4793768971660722e-05, 3.957070185234852e-05, 2.64488881248099e-05, 6.428381095168035e-05, 5.6557662521419976e-05, 6.855540059858658e-05, 7.079288025889968e-05, 7.135683422742382e-05, 5.5663480860112103e-05, 8.088436527379357e-05, 7.142268494354861e-05, 8.243171356847987e-05, 7.658173644233611e-05, 5.4275733753644613e-05, 2.7329513031804995e-05, 1.8666856995404658e-05, 2.5061514626811264e-05, 9.707359513272993e-06, 2.233654188450612e-05, 2.0577084330035857e-05, 6.037067595033506e-05, 5.358585847760433e-05, 6.353114888415205e-05, 4.913406130358561e-05, 6.253876100291326e-05, 5.783647108547192e-05, 5.29265883017118e-05, 4.295770587763158e-05, 0.00012513639867455526, 0.0001264425725280477, 0.00010075697417828198, 7.700585441944497e-05, 6.390017630639553e-05, 6.862379380485504e-05, 8.118867124374998e-05, 8.928305705187346e-05, 8.923668314113125e-05, 5.0862818355003976e-05, 2.5192448399293734e-05, 1.9491995287268695e-05, 1.1397180337584482e-05, 1.8548131739430545e-05, 2.8274146120787152e-05, 2.9861740143137274e-05, 5.749201435920551e-05, 8.676081065218611e-05, 0.00011692016691003383, 6.18107213073443e-05, 8.31986307882476e-05, 5.661490072734421e-05, 6.637785526376392e-05, 6.189842468509176e-05, 5.077848495281155e-05, 3.7630726455798414e-05, 6.325167842846687e-05, 7.447442335517917e-05, 7.881778491014126e-05, 8.347575938861497e-05, 6.553610066345062e-05, 6.209221186256924e-05, 4.671174184109858e-05, 4.583301504850661e-05, 2.9423292949863758e-05, 1.9969520206001368e-05, 1.3386836054765546e-05, 1.0233804045678584e-05, 2.3371876153986385e-05, 3.701784260013326e-05, 2.6804842191646374e-05, 3.729558727386808e-05, 7.011179438698544e-05, 4.616049584765358e-05, 6.019787395273405e-05, 8.312188292939014e-05, 6.281596430043117e-05, 6.370630077282333e-05, 6.169767733530766e-05, 6.099512039036877e-05, 7.192322709245217e-05, 6.727547574464268e-05, 4.891125624919348e-05, 8.775231227342841e-05, 9.349358010749929e-05, 4.85363097385816e-05, 4.475820776946539e-05, 1.9528637281926147e-05, 1.5243002033035396e-05, 1.4322461630125293e-05, 1.0492122514416176e-05, 1.1956759574674148e-05, 1.5232250274180506e-05, 3.394641638997643e-05, 2.6115894879792267e-05, 4.868559048521277e-05, 5.612535494090208e-05, 3.269545148571978e-05, 4.967751016319062e-05, 4.8382804751191425e-05, 5.1860846075881435e-05, 4.4034258653232213e-05, 5.362193446127224e-05, 6.213052893181175e-05, 8.561827093901839e-05, 5.877682625663455e-05, 0.0, 0.0, 7.105805443046969e-05, 0.0, 0.0, 2.31393994554528e-05, 7.05044594070575e-06, 2.21491300929156e-05, 4.926848615186025e-06, 1.0752514744385843e-05, 1.4745260873155369e-05, 1.976297604068538e-05, 3.094705732168692e-05, 5.068338091939653e-05, 2.655137469742496e-05, 
3.0142790705685793e-05, 3.89279249469607e-05, 6.264176821226988e-05, 3.598536226187379e-05, 4.430195278344506e-05, 2.7501831818440764e-05, 1.7243328268903956e-05, 1.2049184772240285e-05, 2.1016880758625327e-05, 3.411070201956675e-05, 3.1893789428697184e-05, 1.8509911029027654e-05, 3.920735117199027e-05, 3.700840501998454e-05, 8.529330234343347e-06, 1.1007881643256571e-05, 4.661265813344272e-06, 7.306007242688513e-06, 2.6772256446090046e-06, 3.0075821145106816e-06, 6.713527085725027e-06, 2.204123915846549e-05, 7.880065404542858e-06, 4.3539870647002475e-05, 6.0898558226633984e-05, 7.956054903697144e-05, 4.80968670903199e-05, 3.476307626484116e-05, 3.233622280581405e-05, 4.097520999795124e-05, 1.6048981491512094e-05, 3.4725910431663494e-05, 2.3840743831207534e-05, 4.194630872483221e-05, 3.472531193096608e-05, 2.9240209403218155e-05, 2.5871727972711297e-05, 1.1918039641386187e-05, 1.2189485552920143e-05, 8.254477280067191e-06, 5.343416003103456e-06, 0.0, 4.795714549478586e-06, 6.705621859254362e-06, 9.484831383410081e-06, 2.503719812292549e-05, 1.9037212038371403e-05, 2.448114104715256e-05, 3.2063674685728836e-05, 2.73499598297465e-05, 2.6255716088190032e-05, 2.930473870366029e-05, 2.490020970307041e-05, 2.4037259675477766e-05, 1.8683888229243836e-05, 9.573344744760269e-06, 2.01589736663327e-05, 2.8955116521484698e-05, 1.934869527601605e-05, 2.1111566182648825e-05, 1.0035410663340645e-05, 4.154485944681635e-06, 8.468739061212046e-06, 8.415088253238056e-06, 1.3883239181832948e-06, 0.0, 2.9995080806747692e-06, 1.6303266848611124e-06, 3.714448088730736e-06, 8.976418947425114e-06, 9.729566693747293e-06, 3.3588780313874236e-05, 1.7154466165266127e-05, 1.9646193877372823e-05, 9.475852684603824e-06, 9.763432041631274e-06, 2.5840349706066022e-05, 1.4272109443725072e-05, 2.262309793162043e-05, 1.733067926359468e-05, 8.405046389852468e-06, 1.6489619195801272e-05, 6.6721749177376435e-06, 5.2645543870584616e-06, 5.563468043439559e-06, 5.668953522517651e-06, 2.564151874715539e-06, 4.72535638047152e-06, 1.1322053548784465e-06, 4.683593955822e-06, 5.170243182388084e-06, 1.4458242427134072e-06, 5.110793484760465e-06, 8.06295555698897e-06, 1.7613618850094893e-05, 1.3702227753862316e-05, 1.2582942563061514e-05, 1.5863866870429223e-05, 5.763738591399926e-06, 5.010013765012819e-06, 3.355941190486578e-06, 1.2264709219075303e-05, 3.0533139142568385e-06, 5.2266756983622735e-06, 3.0845411025383717e-06, 7.013177761012944e-06, 1.5042033081191253e-05, 7.918060391926394e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0146814988996414e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 1.4253135689851767e-05, 0.0, 1.4218885523752648e-05, 0.0, 1.539361472861057e-05, 0.0, 3.981789947307646e-05, 0.0, 1.2433017120264575e-05, 1.2777756481516975e-05, 1.2764382267720154e-05, 0.0, 1.2131652695046646e-05, 0.0, 0.0, 1.9324418335008117e-05, 0.0, 5.3279343598486864e-05, 1.316118503310038e-05, 8.202637968370628e-06, 8.606938339893733e-06, 2.2898281255009e-06, 0.0, 3.510274573677153e-06, 1.6317872149471709e-06, 4.578600840631114e-06, 3.877291479264245e-06, 2.8741881616021876e-06, 0.0, 1.0729671296519832e-05, 4.808871405969733e-06, 4.534612698729401e-06, 4.3333188889370365e-06, 0.0, 2.743032696949748e-06, 4.019804235533729e-06, 1.8078426019917002e-06, 5.444991968636846e-06]
</code></pre>
<p>Each element is the combined signal for an hour and the list covers 24 days. Therefore, it should have 24 days * 24 hours = 576 elements.</p>
<p>I am interested in the dynamics of the signal of each term over the course of days. However, confounding this are the baseline changes of the signal within a day. I also have time series for basic terms that capture this baseline signal during a day, such as the following.</p>
<pre><code>baseline = [0.0056738537419516195, 0.005420397434626666, 0.005019676698052322, 0.004214006968007205, 0.004143451622795924, 0.00373395198248036, 0.0037080495988714344, 0.0036409523281401525, 0.003919898659196092, 0.004388163261294729, 0.004595501330006892, 0.005097033972892097, 0.0052221285817481335, 0.005184009325081863, 0.005273633551787361, 0.005053393415305126, 0.004444952439008902, 0.004552838940992971, 0.004808237374463801, 0.004895327691624783, 0.005059086256629757, 0.005598114319387153, 0.005952632334681949, 0.005717805004263755, 0.006126432469142252, 0.005592477569387059, 0.004920585487387107, 0.004318038669070883, 0.003877225571288378, 0.0036583898426795327, 0.0037336953886437474, 0.0037760782770061294, 0.0042338376814351954, 0.004192003050341723, 0.0046450557083186645, 0.004900468947653463, 0.005272546953959605, 0.005265723105151999, 0.0052304537716869855, 0.005121826744125637, 0.005224078793461002, 0.00501027352884918, 0.004871995260153345, 0.004863044486978714, 0.005310347911635811, 0.0058606870895965765, 0.00596470801561322, 0.005997180909289017, 0.00588291246890472, 0.005328610690842843, 0.004941976393965633, 0.004426509645673344, 0.0041172533679088375, 0.0038888190559989945, 0.003785501144341545, 0.0038683019610415165, 0.003826474222198437, 0.004178336738982966, 0.004574137078717032, 0.004854291797756379, 0.005216590267890586, 0.00514712218170792, 0.005217487414098377, 0.005239554740422529, 0.005138433888329476, 0.0050591314342241745, 0.005099277335119803, 0.00469742744667216, 0.005140145739820509, 0.005534156237221868, 0.006098190503066302, 0.00627542293362276, 0.005859315099582288, 0.0055863100804189264, 0.005193523620749424, 0.004680401455731111, 0.0041370327176107335, 0.003790198190936078, 0.0037143182477912154, 0.0037406926128218908, 0.0038838040372017974, 0.00413455625482474, 0.004309030576010342, 0.004768381364059312, 0.004905695025592956, 0.0050965947056771715, 0.005178951654634759, 0.005250840996574289, 0.005083679897873849, 0.0050438189257025106, 0.00465730975130931, 0.004511425103430987, 0.004631293617276186, 0.0049417738509291015, 0.005495036992426772, 0.0056409591251836465, 0.005487421237451456, 0.005093572532544252, 0.005043698439924855, 0.004837771685295603, 0.0038251289273366134, 0.003817852658627033, 0.003792612420533331, 0.003922716174790973, 0.00412646233748187, 0.004534488299255124, 0.004712687777471286, 0.005096266809417225, 0.00523394116440575, 0.005257672264041691, 0.005305086574696615, 0.005100654966986151, 0.004965826463906992, 0.005073115958176456, 0.00469441228683261, 0.004553136348768357, 0.004723653823501124, 0.004726168081415059, 0.005290955031631742, 0.005325956759690025, 0.005453676000994151, 0.005531903354394338, 0.005121462750913455, 0.004790546408707061, 0.004460326025284444, 0.003982093750443299, 0.0036151869988949132, 0.0035295702958713982, 0.003722662298606401, 0.004089779292755358, 0.004116707488056058, 0.004463311658604419, 0.004863245054602056, 0.005019950105663084, 0.005111292599872651, 0.0050328244675445916, 0.004886511461492081, 0.005017059119637564, 0.004997550003214928, 0.004989853142609061, 0.004888243576205561, 0.004801721031771264, 0.005142349216675533, 0.0053550501391269115, 0.00510410900976245, 0.005113311675603742, 0.004865951202283446, 0.004739388247627576, 0.004314592960862043, 0.003932197365205607, 0.0036889365827877003, 0.003444247563217489, 0.0033695476656641706, 0.003779678994400599, 0.004182362477080399, 0.004650999598571368, 0.004964528816231351, 0.005246502668776329, 0.005150211093436487, 
0.0051813375657147505, 0.005326590813316477, 0.00501407415865325, 0.004920848192186853, 0.005020741681762219, 0.005108871853087233, 0.004991922013198609, 0.005551866678436957, 0.005681472655730911, 0.005624204122058199, 0.005202581478369662, 0.00490495583623749, 0.0043628317352519584, 0.0037568042368143423, 0.0035018559432594323, 0.0035627004864066413, 0.003560172130774401, 0.003604382929642445, 0.003782708492731446, 0.003958167037361377, 0.004405696805281344, 0.004888234197579893, 0.004849378554876764, 0.005035728295111269, 0.005150565049279978, 0.005104177573029002, 0.005540331228404623, 0.005146813504207926, 0.004991504807148932, 0.0050371760815936415, 0.005174258383207836, 0.005598418288045426, 0.0056576481335463774, 0.00561832393839059, 0.005408391628077189, 0.0052292710408241285, 0.004705309149638305, 0.003924934489565002, 0.003854606161156092, 0.0038935040219155712, 0.003830335124052002, 0.003746046574771941, 0.003865490274877053, 0.004168222873979538, 0.0045871293840885514, 0.004915772256778214, 0.005072434696646597, 0.00492522147976003, 0.004978792784547765, 0.004963870334948144, 0.004955409293231536, 0.004709890770618299, 0.004888202349958703, 0.0051805005663287775, 0.005568883603736712, 0.005781789868618008, 0.006061759631832967, 0.005730308168750368, 0.0055273545529884146, 0.005050318950400666, 0.004505314632141857, 0.0041320733921015994, 0.0037557073980650723, 0.0034979193552635043, 0.0037461620721961097, 0.0036352203964434373, 0.003974040173135196, 0.004094756199243869, 0.004649079406159152, 0.004920019940715673, 0.005231951964023264, 0.005121117845618645, 0.005064423379922766, 0.00498326981229982, 0.004871188222923238, 0.004660839287914527, 0.0047034466283560495, 0.004866548640835444, 0.005578880008506938, 0.0059683185805929845, 0.006061498706153822, 0.005800490254423062, 0.0054633509277901724, 0.004921961696040911, 0.004376719066835311, 0.00393610914724284, 0.0037954515471031775, 0.003581690980473693, 0.003563708289302751, 0.0037463007418473766, 0.00403278474399164, 0.004356886520045223, 0.004787462849992179, 0.005179338649547787, 0.005143654461390953, 0.005203417442834235, 0.005153892139635152, 0.005114303176192244, 0.00504646961230832, 0.00478839952880454, 0.004711338394289699, 0.004911682972324793, 0.005442432018950797, 0.005865476365139558, 0.006157467255298909, 0.005776991413458904, 0.00537648513923766, 0.005215877640811999, 0.004586994881879395, 0.00404235177861292, 0.0038098588593210615, 0.003611933103919232, 0.003782482344031445, 0.003847756732676113, 0.004015496451997738, 0.004222327790973872, 0.004767228509347478, 0.005026217727591916, 0.005032992226639765, 0.0051856184936032845, 0.005070660243331873, 0.005025667638424633, 0.004771111450073196, 0.0049169687623427365, 0.0, 0.004725137860724068, 0.00480564403797717, 0.004993865191923319, 0.005382243541508231, 0.005436232552738047, 0.005416886729676188, 0.004777014387860352, 0.0048255785043644925, 0.004081842852408802, 0.004090331218562488, 0.00378104976817826, 0.003521792464859018, 0.0036283065618489215, 0.003818665737661915, 0.003988803567300145, 0.004483523199147563, 0.004696601941747573, 0.005206918843848881, 0.005231253931233336, 0.005154439277447777, 0.005107271378732522, 0.004862372011026066, 0.005097539245443387, 0.004771922511620435, 0.004800155668906229, 0.004886324331150043, 0.005186594367994167, 0.0055550364814704704, 0.00565254113064783, 0.00542892074907446, 0.005216026402108949, 0.0050842262523550985, 0.004506330112231061, 0.004262871158699087, 0.004073705404217544, 0.003562133424289835, 
0.003499455612234611, 0.0037587992642927636, 0.004170545895025578, 0.004646029409170125, 0.004941082109950799, 0.005336110809450001, 0.005238846272634943, 0.0051019151317224926, 0.004828998520466023, 0.00470819320853546, 0.004974373055097931, 0.004975308413634935, 0.005266317039295838, 0.005489162450620279, 0.005606273008057806, 0.00603476714807901, 0.0061970275556501725, 0.0058349840239690235, 0.005192678736923442, 0.004639151581343363, 0.004229911816211891, 0.003727661961919841, 0.00375780482393585, 0.0033937487713780225, 0.003400171769633621, 0.003719857252709842, 0.0037474521895174925, 0.004410321140619574, 0.00505109832021614, 0.00506160098731807, 0.005046922423918226, 0.005300710721177051, 0.005104647840739084, 0.004974276083656935, 0.004902745159619985, 0.005039594632444682, 0.005189007878086687, 0.005840559146565768, 0.005924790523904985, 0.006041782063467494, 0.006054874048959406, 0.005728511142370623, 0.005014567400775691, 0.004479858189014036, 0.004064222403658478, 0.0038690888337760544, 0.0038101713160671666, 0.0038192317788082945, 0.003855643888760465, 0.004151893395194348, 0.004439198456142054, 0.004868610511159107, 0.005164087705238066, 0.0052260812748906515, 0.005049306708959293, 0.005295364532855441, 0.004976241631407976, 0.005325257379808529, 0.004981215539676753, 0.004904617253355752, 0.005133080934624669, 0.005474999665809228, 0.006018281474269119, 0.0059556619441451936, 0.00582564335486158, 0.0057773567703702745, 0.005185870554701607, 0.004927387470357575, 0.004290471577704514, 0.003894605250504856, 0.0036579206650162693, 0.0037227880322444513, 0.0037587839308041025, 0.004025131552347727, 0.0043915455477435165, 0.004973183367291931, 0.005602412946227073, 0.005438255876982902, 0.005057281453194344, 0.0055819722968782305, 0.0052582960278547575, 0.0060302188495155494, 0.003969113083640037, 0.004874700151948723, 0.0048366059153241445, 0.005174590517957408, 0.005237240077942745, 0.005935388138900985, 0.006375850801552381, 0.006218749794135666, 0.005833520305137985, 0.005325978611613252, 0.00473056992525788, 0.0039874605990664344, 0.0038460789847597175, 0.003587065463944717, 0.00384212944765237, 0.004264645875837948, 0.004969973892903938, 0.005856983835337711, 0.006181231788159266, 0.006313470979891048, 0.006097287985997557, 0.005694104539336737, 0.005355534257732001, 0.005274420505031954, 0.004712403572544698, 0.004584515000959549, 0.004766412751530095, 0.0048104263193712886, 0.005309031929686986, 0.006042498279882524, 0.006496377367343072, 0.005619222170751848, 0.005418471293122766, 0.005015661629991529, 0.005062499505228742, 0.004308572994534354, 0.0038880894398937347, 0.003538125785331658, 0.0034843298748529253, 0.003774099147478583, 0.003896742470805163, 0.004541861762097922, 0.004553179667775172, 0.004948038015149709, 0.0050269339456022605, 0.00522398911361471, 0.005050975431726277, 0.005007174429180125, 0.004833758244552214, 0.004670604547693902, 0.00477521510651887, 0.004939453753268834, 0.005239435739336397, 0.005820798534429634, 0.006094069145690364, 0.005673509972509797, 0.005375844111251002, 0.005187640280456208, 0.00476628984541101, 0.004247493846603608, 0.003794806926377327, 0.003435122854871529, 0.003587919312587277, 0.003811897320196127, 0.0042459415490763925, 0.00460744683733153, 0.004807733730818607, 0.005155657515164588, 0.005405463068510853, 0.005224147724524333, 0.005351078308428722, 0.005384714635929638, 0.005362056525935763, 0.0051377016971353075, 0.004941059319359612, 0.004966034655341646, 0.005026256144832193, 0.005442607412384369, 
0.0059898202401797275, 0.005612531062072142, 0.005603529527930128, 0.0051493731726657554, 0.004544820351700367, 0.004496920773323335, 0.00424357787751253, 0.0036690501594786006, 0.003700340743778253, 0.0038846659058119253, 0.004159671170598417, 0.004794839922729552, 0.005004852590193807, 0.005163099925195087, 0.005645338914676821, 0.005432262412191398, 0.0050802949835114155, 0.005169574505964038, 0.0052347116826927985, 0.0052757424822272225, 0.0056125420409050475, 0.005578375783486106, 0.005944651628427074, 0.006010407699526147, 0.0061534279769882615, 0.005756457061668538, 0.005283251628717022, 0.004694029423550289, 0.0042271372620665245, 0.003995084263772031, 0.003916612465121526, 0.00385882298694225, 0.0039353658124175695, 0.00403048536977438, 0.0039523025458470164, 0.004692943486212761, 0.005099144811322234, 0.005182029052264465, 0.005496327599559573, 0.0053953892408097875, 0.005256712315751134, 0.004628655585945719, 0.005255300089578979, 0.004727544165215228, 0.005365188522646431, 0.006321448616075385, 0.005962859901186893, 0.0064913517773445605, 0.006403310018717368, 0.005985231247570929, 0.005536676822123271, 0.005652983876148263, 0.0053962798830303575, 0.0036360246130896887, 0.0034235996705107084, 0.004421584551524996, 0.003810299791511898, 0.0038131330853627154, 0.0038483466362599773, 0.005120205311426739, 0.0048344210780759, 0.005090949889906456, 0.005557094917028417, 0.005276073619631902, 0.0056143238257037814, 0.005700457782933553, 0.00584351804652435, 0.004893880732421001, 0.005475919992851946, 0.005248580353868141, 0.005350058838515571, 0.006083169087767963, 0.005703392826945841, 0.006319084795547654, 0.005231157508317081, 0.005381213703447174, 0.005027572682644346, 0.0042202572347266884, 0.004068212855323105, 0.003991170422748069, 0.0037477607718658665, 0.004077183917326014, 0.00408925876065761, 0.004650332253801002, 0.004960348232472058, 0.005144796809267916, 0.00597460791635549, 0.005407754333445995, 0.005265714189536858, 0.005391654498789258, 0.00495731680894397, 0.005033086804203971, 0.00511026991441738, 0.005391897414595909, 0.006005653123816428, 0.0066265552258310415]
</code></pre>
<p>I think that a good way to extract the signal I am interested in would be to do a spectral analysis on the time series for my terms. The high frequencies should be the daily patterns, which I want to get rid of, and the lower frequencies should be what I'm interested in. I want to somehow 'divide' my observed signal for a term by the baseline daily signal.</p>
<p>This is my baseline's original signal
<img src="https://i.stack.imgur.com/TFKo6.png" alt="baseline original">
and this is my term's original signal
<img src="https://i.stack.imgur.com/lB7O8.png" alt="term's original signal">
and what I'm trying to do is get something like this in a general way without introducing artefacts, i.e. remove the ups and downs that happen every day anyway and capture the general trend.
<img src="https://i.stack.imgur.com/pOGwU.png" alt="term tranformed"></p>
<p>The naive way I thought of doing this is to first generate FFTs for both using numpy (below).</p>
<p>baselines fft
<img src="https://i.stack.imgur.com/fIx6K.png" alt="baseline's fft"></p>
<p>term1s fft
<img src="https://i.stack.imgur.com/Uec5q.png" alt="term1's fft"></p>
<p>and then create a filter like below</p>
<pre><code># assumes pylab-style imports, e.g. from numpy.fft import fft, ifft and from pylab import plot
fft2 = fft(term1, n=t)
mgft2 = abs(fft2)                # magnitude spectrum
plot(mgft2[0:t/2+1])
bp = fft2.copy()                 # work on an explicit copy so fft2 stays intact
for i in range(len(bp)):
    if i >= 22:                  # zero out everything above an arbitrary cutoff bin
        bp[i] = 0
ibp = ifft(bp)                   # back to the time domain
</code></pre>
<p>But from what I understand that introduces artefacts, changes the magnitudes, and I am not sure how to pick a cutoff point. I was hoping for some guidance, with respect to implementation in numpy, on a better way to divide out my baseline frequencies from my term's frequencies.
Thanks</p>
|
<p>Multiplying (or dividing) in the frequency domain is equivalent to convolving in the time domain. In other words, a high-pass FIR filter would remove your low-frequency components directly without going into the frequency domain. If you <em>do</em> go to the frequency domain first, be aware that simply removing some frequency components and converting back to time will introduce artifacts. FIR filter design is actually based on choosing filters you <em>can</em> multiply by which also meet your desired frequency specs.</p>
<p>All that said, it sounds like you already know the baselines and you could apply your adjustments directly from your known baselines to your data. The point of filtering would be that it would work <em>without knowing the baselines</em>.</p>
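<p>Not part of the original answer, but a minimal sketch of designing and applying an FIR filter with scipy.signal. Here a low-pass is used to keep the slow trend the question asks for; the sample rate, cutoff, filter length and the variable name <code>term1</code> are all assumptions, and the <code>fs</code> argument of <code>firwin</code> requires a recent scipy.</p>

<pre><code>from scipy import signal

fs = 24.0                                   # assumed: 24 samples per day
cutoff = 0.5                                # assumed: keep anything slower than 0.5 cycles/day
taps = signal.firwin(numtaps=101, cutoff=cutoff, fs=fs)  # low-pass FIR design
trend = signal.filtfilt(taps, [1.0], term1)              # zero-phase filtering, no lag
daily_component = term1 - trend                           # the part the filter removed
</code></pre>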
|
numpy|signal-processing|fft
| 2
|
8,975
| 55,550,421
|
Saving each dataframe from a list to separate csv files
|
<p>I'm trying to save each element of a list to its own separate CSV file; each element is a dataframe.</p>
<p>I used the following two code snippets; however, the files they save contain only the first or the last element of the list, respectively (e.g. the output files are all identical).</p>
<pre><code>for x in allcity:
for a in range(0,20):
x.to_csv('msd{}.csv'.format(a))
for a in range(0,20):
for x in allcity:
x.to_csv('msd{}.csv'.format(a))
</code></pre>
|
<p>IIUC, I think you need <code>enumerate</code>. In your loops every dataframe gets written to every one of the 20 filenames, so each file ends up holding whichever dataframe was written to it last:</p>
<pre><code>for a, x in enumerate(allcity):
x.to_csv('msd{}.csv'.format(a))
</code></pre>
|
python|pandas|export-to-csv
| 2
|
8,976
| 55,522,976
|
how can select data of coefficient of 3 columns from csv file
|
<p>I would like to plot a number of columns for 2 different scenarios based on the <strong>index of rows</strong> in my dataset, preferably via <code>pandas.DataFrame</code>:</p>
<p><strong>1st scenario:</strong> columns index[2,5,8,..., n+2]</p>
<p><strong>2nd scenario:</strong> the last 480 columns or column index [961-1439]
<img src="https://i.imgur.com/AGFXH5W.jpg" alt="img"> <a href="https://i.imgur.com/AGFXH5W.jpg" rel="nofollow noreferrer">picture</a></p>
<p>I've tried to play with the column indices, as follows:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
dft = pd.read_csv("D:\Test.csv" , header=None)
dft.head()
id_set = dft[dft.index % 2 == 0].astype('int').values
A = dft[dft.index % 2 == 1].values
B = dft[dft.index % 2 == 2].values
C = dft[dft.index % 2 == 3].values
data = {'A': A[:,0], 'B': B[:,0], 'C': C[:,0]}
df = pd.DataFrame(data, columns=['A','B','C'], index = id_set[:,0])
#1st scenario
j=0
index=[]
for i in range(1439):
if j==2:
j=0
continue
else:
index.append(i)
j+=1
print(index)
#2nd scenario
last_480 = df[0:480][::-1]
</code></pre>
<p>I've found this <a href="https://stackoverflow.com/questions/31011403/select-column-by-index-for-headless-csv-file">post1</a> and <a href="https://stackoverflow.com/questions/48337371/selecting-specific-range-of-columns-from-csv-file">post2</a> but they weren't my case!</p>
<p>I would appreciate if someone can help me.</p>
|
<p>1st scenario:</p>
<pre><code>df.iloc[:, 2::3]
</code></pre>
<p>The slicing here means all rows, columns starting from the 2nd, and every 3 after that.</p>
<p>2nd scenario:</p>
<pre><code>df.iloc[:, :961:-1]
</code></pre>
<p>The slicing here means all rows, and columns taken from the end of the frame walking backwards down to (but not including) column 961.</p>
<p>EDIT:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
senario1 = df.iloc[:, 2::3].copy()
sns.lineplot(data = senario1.T)
</code></pre>
<p>You can save a copy of the slice to another variable; then, since you want to graph row-wise, you need to take the transpose of the sliced matrix (this will turn your rows into columns).</p>
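<p>Not part of the original answer, but a small demonstration of both slices on a toy frame (the 12-column size is an arbitrary assumption):</p>

<pre><code>import pandas as pd
import numpy as np

toy = pd.DataFrame(np.arange(24).reshape(2, 12))
print(toy.iloc[:, 2::3])   # columns 2, 5, 8, 11: every third column starting at index 2
print(toy.iloc[:, :8:-1])  # last columns, walking backwards down to (not including) column 8,
                           # analogous to :961:-1 on the full data
</code></pre>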
|
python|pandas|csv|dataframe|indexing
| 2
|
8,977
| 55,229,848
|
Apache BEAM pipeline fails when writing TF Records - AttributeError: 'str' object has no attribute 'iteritems'
|
<p>The issue started appearing over the weekend. For some reason, it seems to be a Dataflow issue.</p>
<p>Previously, I was able to execute the script and write TF records just fine. However, now, I am unable to initialize the computation graph to process the data.</p>
<p>The traceback is:</p>
<pre><code>Traceback (most recent call last):
File "my_script.py", line 1492, in <module>
MyBeamClass()
File "my_script.py", line 402, in __init__
self.run()
File "my_script.py", line 514, in run
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/pipeline.py", line 426, in __exit__
self.run().wait_until_finish()
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1238, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 176, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 531, in apache_beam.runners.worker.operations.DoOperation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 532, in apache_beam.runners.worker.operations.DoOperation.start
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 533, in apache_beam.runners.worker.operations.DoOperation.start
super(DoOperation, self).start()
File "apache_beam/runners/worker/operations.py", line 202, in apache_beam.runners.worker.operations.Operation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 206, in apache_beam.runners.worker.operations.Operation.start
self.setup()
File "apache_beam/runners/worker/operations.py", line 480, in apache_beam.runners.worker.operations.DoOperation.setup
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 485, in apache_beam.runners.worker.operations.DoOperation.setup
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 247, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 317, in loads
return load(file, ignore)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 305, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1232, in load_build
for k, v in state.iteritems():
AttributeError: 'str' object has no attribute 'iteritems'
</code></pre>
<p>I am using tensorflow==1.13.1 and tensorflow-transform==0.9.0 and apache_beam==2.7.0</p>
<pre><code>with beam.Pipeline(options=self.pipe_opt) as p:
with beam_impl.Context(temp_dir=self.google_cloud_options.temp_location):
# rest of the script
_ = (
transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
</code></pre>
|
<p>I was experiencing the same error. </p>
<p>It seems to be triggered by a mismatch between the <code>tensorflow-transform</code> version on your local (or master) machine and the one on the workers (specified in the setup.py file).</p>
<p>In my case I was running <code>tensorflow-transform==0.13</code> on my local machine whereas the workers were running <code>0.8</code>.</p>
<p>Downgrading the local version to <code>0.8</code> fixed the issue.</p>
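<p>Not part of the original answer, but a minimal sketch of what pinning the worker dependencies might look like; the package list and versions are assumptions (pin whatever versions you actually run locally), and the file is passed to the pipeline via the <code>--setup_file</code> option:</p>

<pre><code># setup.py - a sketch; the package list and versions are assumptions.
# Pin the same versions you run locally so the Dataflow workers match.
import setuptools

setuptools.setup(
    name='my_pipeline',
    version='0.1',
    install_requires=[
        'tensorflow-transform==0.8.0',
        'apache-beam[gcp]==2.7.0',
    ],
    packages=setuptools.find_packages(),
)
</code></pre>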
|
tensorflow|apache-beam|tensorflow-transform
| 0
|
8,978
| 55,348,786
|
How can I change the values in a dataframe column based off the index of a list?
|
<p>lets say I have a data frame:</p>
<pre><code>x y
1 3
2 0
4 1
7 2
</code></pre>
<p>and I have a list:</p>
<pre><code>[1,2,7,5]
</code></pre>
<p>can I change the values of the Y column based on the value of the index of the list?</p>
<p>for instance, for value 2 of the y column its 0. is it anyway to take that 0 and look at the 0th index value of the list and changes it to that value so the 4 would become 1 in the y column?</p>
<p>and for the rest of the values?</p>
<p>so it would be:</p>
<pre><code>x y
1 5
2 1
4 2
7 7
</code></pre>
<p>thanks guys</p>
<p>EDIT: FIXED INDEXING (started from 1 instead of 0)</p>
|
<p>Just do with </p>
<pre><code>df.y=np.array(l)[df.y-1]# here i subtract 1 since the index from pandas or numpy is from 0 by default
df
Out[52]:
x y
0 1 7
1 2 5
2 4 1
3 7 2
</code></pre>
|
python|pandas|numpy
| 1
|
8,979
| 55,274,076
|
PyTorch - GPU is not used by tensors despite CUDA support is detected
|
<p>As the title of the question clearly describes, even though <code>torch.cuda.is_available()</code> returns <code>True</code>, <code>CPU</code> is used instead of <code>GPU</code> by tensors. I have set the <code>device</code> of the tensor to <code>GPU</code> through the <code>images.to(device)</code> function call after defining the <code>device</code>. When I debug my code, I am able to see that the <code>device</code> is set to <code>cuda:0</code>; but the tensor's <code>device</code> is still set to <code>cpu</code>.</p>
<p>Defining the device:</p>
<pre><code>use_cuda = torch.cuda.is_available() # returns True
device = torch.device('cuda:0' if use_cuda else 'cpu')
</code></pre>
<p>Determining the device of the tensors:</p>
<pre><code>for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images.to(device)
labels.to(device)
# both of images and labels' devices are set to cpu
</code></pre>
<p>The software stack:</p>
<pre><code>Python 3.7.1
torch 1.0.1
Windows 10 64-bit
</code></pre>
<p>p.s. <code>PyTorch</code> is installed with the option of <strong>Cuda 9.0 support</strong>.</p>
|
<p><code>tensor.to()</code> does not modify the tensor in place. It returns a new tensor that's stored
on the specified device.</p>
<p>Use the following instead.</p>
<pre class="lang-py prettyprint-override"><code> images = images.to(device)
labels = labels.to(device)
</code></pre>
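<p>Not part of the original answer, but a minimal sketch that makes the difference visible (it assumes a CUDA device is available):</p>

<pre><code>import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

t = torch.zeros(3)             # created on the CPU
moved = t.to(device)           # returns a copy on `device`; `t` itself is unchanged
print(t.device, moved.device)  # e.g. "cpu cuda:0" when a GPU is available
</code></pre>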
|
python|python-3.x|pytorch|torch
| 3
|
8,980
| 9,857,340
|
Adding a colorbar and a line to multiple imshow() plots
|
<p>I have this source code:</p>
<pre><code>idx=0
b=plt.psd(dOD[:,idx],Fs=self.fs,NFFT=512)
B=np.zeros((2*len(self.Chan),len(b[0])))
B[idx,:]=20*log10(b[0])
c=plt.psd(dOD_filt[:,idx],Fs=self.fs,NFFT=512)
C=np.zeros((2*len(self.Chan),len(b[0])))
C[idx,:]=20*log10(c[0])
for idx in range(2*len(self.Chan)):
b=plt.psd(dOD[:,idx],Fs=self.fs,NFFT=512)
B[idx,:]=20*log10(b[0])
c=plt.psd(dOD_filt[:,idx],Fs=self.fs,NFFT=512)
C[idx,:]=20*log10(c[0])
## Calculate the color scaling for the imshow()
aux1 = max(max(B[i,:]) for i in range(size(B,0)))
aux2 = min(min(B[i,:]) for i in range(size(B,0)))
bux1 = max(max(C[i,:]) for i in range(size(C,0)))
bux2 = min(min(C[i,:]) for i in range(size(C,0)))
scale1 = 0.75*max(aux1,bux1)
scale2 = 0.75*min(aux2,bux2)
fig, axes = plt.subplots(nrows=2, ncols=1,figsize=(7,7))#,sharey='True')
fig.subplots_adjust(wspace=0.24, hspace=0.35)
ii=find(c[1]>frange)[0]
## Making the plots
cax=axes[0].imshow(B, origin = 'lower',vmin=scale2,vmax=scale1)
axes[0].set_ylim((0,2*len(self.Chan)))
axes[0].set_xlabel(' Frequency (Hz) ')
axes[0].set_ylabel(' Channel Number ')
axes[0].set_title('Pre-Filtered')
cax2=axes[1].imshow(C, origin = 'lower',vmin=scale2,vmax=scale1)
axes[1].set_ylim(0,2*len(self.Chan))
axes[1].set_xlabel(' Frequency (Hz) ')
axes[1].set_ylabel(' Channel Number ')
axes[1].set_title('Post-Filtered')
axes[0].annotate('690nm', xy=((ii+1)/2, len(self.Chan)/2-1),
xycoords='data', va='center', ha='right')
axes[0].annotate('830nm', xy=((ii+1)/2, len(self.Chan)*3/2-1 ),
xycoords='data', va='center', ha='right')
axes[1].annotate('690nm', xy=((ii+1)/2, len(self.Chan)/2-1),
xycoords='data', va='center', ha='right')
axes[1].annotate('830nm', xy=((ii+1)/2, len(self.Chan)*3/2-1 ),
xycoords='data', va='center', ha='right')
axes[0].axis('tight')
axes[1].axis('tight')
## Set up the xlim to aprox frange Hz
axes[0].set_xlim(left=0,right=ii)
axes[1].set_xlim(left=0,right=ii)
## Make the xlabels become the actual frequency number
tickslabel=np.zeros((ii))
ticks = r_[0:ii:5]
tickslabel = linspace(0.,2.,size(ticks))
axes[0].set_xticks(ticks)
axes[0].set_xticklabels(tickslabel)
axes[1].set_xticks(ticks)
axes[1].set_xticklabels(tickslabel)
## Draw a line to separate the two different wave lengths, and name each region
l1 = Line2D([0,ii],[28,10],ls=':',color='black')
axes[0].add_line(l1)
axes[1].add_line(l1)
</code></pre>
<p>This code generates this figure:
<img src="https://i.stack.imgur.com/ygUhD.png" alt="The ploted Figure"> </p>
<p>The fixed code to make the xticks look right is already included above, and the new plot is also shown.</p>
<p>How can I add a single colorbar (and give it a title) to both of these subplots? (They are on the same scale.)
This colorbar should occupy the whole left side of the figure.</p>
<p>Inside the code there's a place where I try to draw a line in both figures (at the same position), but neither of them is shown. Why is that?</p>
<p>If you need any more information about my code (like the size of the data entered), just ask.</p>
|
<p>Your ticks variable appears to be all zeros:</p>
<pre><code>ticks=np.zeros((ii))
</code></pre>
<p>but it should enumerate X locations (in axis coordinates) where you'd like the tick marks to go. When you call set_xticklabels, the list gives the text to show for each tick.</p>
<p>Here's a simple example showing how xlim, set_xticks, and set_xticklabels interact:</p>
<pre><code>from pylab import *
x = arange(128*128).reshape((128,128))
matshow(x)
xlim(right=64)
# xticks: where the xticks should go (indexes into x's columns)
xticks = r_[0:64:25]
gca().set_xticks(xticks)
# labels: text to show for each element of xticks
# here, we apply a multiplier just to show how the
# labels can differ from the xticks.
labels = ['%.1f' % (x,) for x in xticks * pi / 2]
gca().set_xticklabels(labels)
show()
</code></pre>
|
python|numpy|matplotlib|scipy
| 2
|
8,981
| 67,024,157
|
Slicing pandas multiindex dataframe using max of second level
|
<p>Supposing that I have this MultiIndex dataframe called <code>df</code>:</p>
<pre><code> | |Value
Year |Month|
1992 | 1 | 3
| 2 | 5
| 3 | 8
-----------------
1993 | 1 | 2
| 2 | 7
----------------
1994 | 1 | 20
| 2 | 50
| 3 | 10
| 4 | 5
</code></pre>
<p>How do I select all years and max month for each of those years?</p>
<p>I'd like the following result:</p>
<pre><code> | |Value
Year |Month|
1992 | 3 | 8
-----------------
1993 | 2 | 7
----------------
1994 | 4 | 5
</code></pre>
<p>I've tried to use</p>
<pre><code>df.loc[(slice(None), [3, 2, 4]),:]
</code></pre>
<p>This works, but it's hard-coded. How do I set it to bring always the maximum month level instead of saying it manually?</p>
<p>My index are sorted, so it would be take the last month for each year.</p>
<p>I've also tried to use the <code>.iloc</code> but it doesn't work with multiindex</p>
<pre><code>>>> df.iloc[(slice(None), -1),:]
...
IndexingError: Too many indexers
...
</code></pre>
|
<p>You can group on the first level, take the last value of the second level, and then use <code>df.loc[]</code>:</p>
<pre><code>df.loc[pd.DataFrame.from_records(df.index).groupby(0)[1].last().items()]
</code></pre>
<hr />
<pre><code> Value
Year Month
1992 3 8
1993 2 7
1994 4 5
</code></pre>
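<p>Not part of the original answer: since the question says the index is sorted, a shorter alternative that keeps the MultiIndex intact is to group on the first level and take the last row of each group, which gives the same three rows:</p>

<pre><code>df.groupby(level=0).tail(1)
</code></pre>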
|
python|pandas|multi-index
| 2
|
8,982
| 47,297,585
|
Building a Transition Matrix using words in Python/Numpy
|
<p>Im trying to build a 3x3 transition matrix with this data</p>
<pre><code>days=['rain', 'rain', 'rain', 'clouds', 'rain', 'sun', 'clouds', 'clouds',
'rain', 'sun', 'rain', 'rain', 'clouds', 'clouds', 'sun', 'sun',
'clouds', 'clouds', 'rain', 'clouds', 'sun', 'rain', 'rain', 'sun',
'sun', 'clouds', 'clouds', 'rain', 'rain', 'sun', 'sun', 'rain',
'rain', 'sun', 'clouds', 'clouds', 'sun', 'sun', 'clouds', 'rain',
'rain', 'rain', 'rain', 'sun', 'sun', 'sun', 'sun', 'clouds', 'sun',
'clouds', 'clouds', 'sun', 'clouds', 'rain', 'sun', 'sun', 'sun',
'clouds', 'sun', 'rain', 'sun', 'sun', 'sun', 'sun', 'clouds',
'rain', 'clouds', 'clouds', 'sun', 'sun', 'sun', 'sun', 'sun', 'sun',
'clouds', 'clouds', 'clouds', 'clouds', 'clouds', 'sun', 'rain',
'rain', 'rain', 'clouds', 'sun', 'clouds', 'clouds', 'clouds', 'rain',
'clouds', 'rain', 'sun', 'sun', 'clouds', 'sun', 'sun', 'sun', 'sun',
'sun', 'sun', 'rain']
</code></pre>
<p>Currently, I'm doing it with some temp dictionaries and lists that calculate the probability of each weather type separately. It's not a pretty solution. Can someone please guide me towards a more reasonable solution to this problem?</p>
<pre><code>self.transitionMatrix=np.zeros((3,3))
#the columns are today
sun_total_count = 0
temp_dict={'sun':0, 'clouds':0, 'rain':0}
total_runs = 0
for (x, y), c in Counter(zip(data, data[1:])).items():
#if column 0 is sun
if x is 'sun':
#find the sum of all the numbers in this column
sun_total_count += c
total_runs += 1
if y is 'sun':
temp_dict['sun'] = c
if y is 'clouds':
temp_dict['clouds'] = c
if y is 'rain':
temp_dict['rain'] = c
if total_runs is 3:
self.transitionMatrix[0][0] = temp_dict['sun']/sun_total_count
self.transitionMatrix[1][0] = temp_dict['clouds']/sun_total_count
self.transitionMatrix[2][0] = temp_dict['rain']/sun_total_count
return self.transitionMatrix
</code></pre>
<p>For every type of weather I need to calculate the probabilities for the next day.</p>
|
<p>If you don't mind using <code>pandas</code>, there's a one-liner for extracting the transition probabilities:</p>
<pre><code>pd.crosstab(pd.Series(days[1:],name='Tomorrow'),
pd.Series(days[:-1],name='Today'),normalize=1)
</code></pre>
<p>Output:</p>
<pre><code>Today clouds rain sun
Tomorrow
clouds 0.40625 0.230769 0.309524
rain 0.28125 0.423077 0.142857
sun 0.31250 0.346154 0.547619
</code></pre>
<p>Here the (forward) probability that tomorrow will be sunny given that today it rained is found at the column 'rain', row 'sun'. If you would like to have backward probabilities (<em>what might have been the weather yesterday given the weather today</em>), switch the first two parameters. </p>
<p>If you would like to have the probabilities stored in rows rather than columns, then set <code>normalize=0</code>, but note that if you do that directly on this example, you obtain backward probabilities stored as rows. If you would like to obtain the same result as above but transposed, you could a) simply transpose, or b) switch the order of the first two parameters and set <code>normalize</code> to 0.</p>
<p>If you just want to keep the result as a <code>numpy</code> 2-d array (and not as a pandas dataframe), type <code>.values</code> after the last parenthesis.</p>
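<p>Not part of the original answer, but a short sketch of using the result (it reuses the <code>days</code> list from the question, and the state order chosen for the plain array is an arbitrary assumption):</p>

<pre><code>import pandas as pd

trans = pd.crosstab(pd.Series(days[1:], name='Tomorrow'),
                    pd.Series(days[:-1], name='Today'), normalize=1)

p = trans.loc['sun', 'rain']                  # P(tomorrow = sun | today = rain)

states = ['sun', 'clouds', 'rain']            # fix a state order for the plain matrix
matrix = trans.reindex(index=states, columns=states).values
</code></pre>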
|
python|numpy|markov-chains
| 20
|
8,983
| 68,358,751
|
ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory* | I have cuda9.0 in my system and not cuda 8
|
<p>I have CUDA 9.0 and tensorflow-gpu==1.5; while running a script I am getting the error below.</p>
<pre><code> Traceback (most recent call last):
File "test.py", line 13, in <module>
from lib.networks.factory import get_network
File "/faster_rcnn/../lib/__init__.py", line 1, in <module>
import fast_rcnn
File "/faster_rcnn/../lib/fast_rcnn/__init__.py", line 10, in <module>
from . import train
File "/faster_rcnn/../lib/fast_rcnn/train.py", line 15, in <module>
from lib.fast_rcnn.nms_wrapper import nms_wrapper
File "/faster_rcnn/../lib/fast_rcnn/nms_wrapper.py", line 10, in <module>
from ..nms.gpu_nms import gpu_nms
**ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory**
[root@ ld.so.conf.d]# cat cuda-9-0.conf
/usr/local/cuda-9.0/targets/x86_64-linux/lib
/usr/local/cuda-9.0/lib64
[root@ ld.so.conf.d]# pwd
/etc/ld.so.conf.d
[root@ ld.so.conf.d]# cat cuda-9-0.conf
/usr/local/cuda-9.0/targets/x86_64-linux/lib
/usr/local/cuda-9.0/lib64
[root@ profile.d]# cat cuda90.sh
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
[root@ lib64]# ls -ltr libcudart.*
-rwxr-xr-x. 1 root root 442392 Sep 2 2017 libcudart.so.9.0.176
lrwxrwxrwx. 1 root root 20 Jul 12 11:31 libcudart.so.9.0 -> libcudart.so.9.0.176
lrwxrwxrwx. 1 root root 16 Jul 12 11:31 libcudart.so -> libcudart.so.9.0
</code></pre>
<p>I checked many posts and there are various flavors of answers for the same CUDA and libcudart issue, but in my case I have CUDA 9 installed and it is still showing an error for libcudart.so.8.0.</p>
<p>Also, <a href="https://forums.developer.nvidia.com/t/cuda-9-0-importerror-libcublas-so-8-0/54996" rel="nofollow noreferrer">https://forums.developer.nvidia.com/t/cuda-9-0-importerror-libcublas-so-8-0/54996</a> talks about compiling TF from source in 2017, but it's almost 4 years later and it should work now.</p>
|
<p>Each TensorFlow version is tested and built against a specific version of CUDA and cuDNN. In this case, your TensorFlow build is demanding CUDA 8.0. Please take a look at the screenshot below for the tested build configurations.
<a href="https://i.stack.imgur.com/1QpjQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1QpjQ.png" alt="enter image description here" /></a></p>
|
python|tensorflow
| 0
|
8,984
| 59,403,256
|
Pandas not working: DataFrameGroupBy ; PanelGroupBy
|
<p>I have just upgraded python and I cannot get pandas to run properly, please see below. Nothing appears to work.</p>
<blockquote>
<p>Traceback (most recent call last): File
"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tqdm/_tqdm.py",
line 613, in pandas
from pandas.core.groupby.groupby import DataFrameGroupBy, \ ImportError: cannot import name 'DataFrameGroupBy' from
'pandas.core.groupby.groupby'
(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/groupby/groupby.py)</p>
</blockquote>
<p>During handling of the above exception, another exception occurred:</p>
<blockquote>
<p>Traceback (most recent call last): File
"code/analysis/get_cost_matrix.py", line 23, in
tqdm.pandas() # Gives us nice progress bars File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tqdm/_tqdm.py",
line 616, in pandas
from pandas.core.groupby import DataFrameGroupBy, \ ImportError: cannot import name 'PanelGroupBy' from 'pandas.core.groupby'
(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/groupby/<strong>init</strong>.py)</p>
</blockquote>
|
<p>I guess you are using an older version of tqdm. Try upgrading to at least tqdm 4.23.4.</p>
<p>The command using pip would be:</p>
<p><code>pip install tqdm --upgrade</code> </p>
|
python-3.x|pandas
| 3
|
8,985
| 59,277,637
|
Installing TensorFlow with pipenv gives error
|
<p>I'm trying to install <strong>TensorFlow</strong> using <strong>pipenv</strong>.</p>
<p>This is my Pipfile:</p>
<pre><code>[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
pylint = "*"
[packages]
python-telegram-bot = "*"
imdbpy = "*"
matplotlib = "*"
scikit-image = "*"
scikit-learn = "*"
tensorflow = "*"
[requires]
python_version = "3.8"
</code></pre>
<p>I then run:</p>
<pre><code>pipenv install tensorflow
</code></pre>
<p>Which outputs:</p>
<pre><code>Installing tensorflow…
Adding tensorflow to Pipfile's [packages]…
Installation Succeeded
Pipfile.lock (989c3d) out of date, updating to (0d6760)…
Locking [dev-packages] dependencies…
Success!
Locking [packages] dependencies…
Locking Failed!
</code></pre>
<p>Followed by a big traceback that ends with:</p>
<pre><code>pipenv.patched.notpip._internal.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in C:\Users\lucas\AppData\Local\Temp\tmpyh639mq4build\functools32\
</code></pre>
<p>My virtual environment uses <strong>Python 3.8.0 64 bit</strong>.</p>
<p>What am I doing wrong?</p>
|
<p>As the comments pointed out, TensorFlow only supports up to Python 3.7 (as of March 2020). You can find more info in the <a href="https://www.tensorflow.org/install/pip#system-requirements" rel="nofollow noreferrer">system requirements page</a> of the documentation.</p>
<p>So, to fix your issue:</p>
<ol>
<li>Remove the virtual environment with <code>pipenv --rm</code></li>
<li>Remove the Pipfile.lock</li>
<li>Change the last lines of your Pipfile to
<pre><code>[requires]
python_version = "3.7"
</code></pre></li>
<li>Run <code>pipenv install --dev</code> to recreate the environment again and <code>pipenv install tensorflow</code> to install tensorflow</li>
</ol>
<p>Done!</p>
|
python|tensorflow|pipenv
| 4
|
8,986
| 59,243,087
|
How to solve python import another python file but the its imports missing
|
<p>I want to import a Python file called feature.py and call the functions in it, so I did 'from feature import *'.</p>
<pre><code>from feature import *
</code></pre>
<p>In the feature.py, I import pandas as pd and define the functions that I would like to call in the main python file. </p>
<pre><code>import pandas as pd
# time features
def add_time_features(df):
df["date"] = pd.to_datetime(data.Timestamp, unit='s').dt.date
df["month"] = pd.to_datetime(data.Timestamp, unit='s').dt.month
df["weekday"] = pd.to_datetime(data.Timestamp, unit='s').dt.weekday_name
df["hour"] = pd.to_datetime(data.Timestamp, unit='s').dt.hour
</code></pre>
<p>However, when I run the main Python program and call the function, I get an error message saying pd is not defined.
<a href="https://i.stack.imgur.com/NctbE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NctbE.png" alt="enter image description here"></a></p>
<p>I thought I did define pd by using "import pandas as pd" in both main file and the feature.py. Bu it does not work. So what is the correct method to do this?</p>
|
<p>When you import code from another file in the form “import filename” then it doesn’t do the same thing as when you extract functions/classes from a file in the form “from filename import *”.</p>
<p>The code you’ve shown appears to be taking the functions from the file you’re importing without actually running the import statement. A simple fix should be to put an “import pandas as pd” statement in the main file.</p>
<p>Essentially the functions you have imported act in a similar/identical way to the functions written in the main file so likely obey the same rules and have access to the same import statements.</p>
<p>Does this help?</p>
|
python|pandas|dataframe
| 0
|
8,987
| 59,307,463
|
Python Business Days within loop
|
<p>I am trying to loop through a dataframe that has two columns, both containing datetime values, and produce a new column with the count of business days between the two dates. I have tried using np.busday_count, but this returned errors like the following.</p>
<pre><code>df_Temp['Maturity(Stlm -Report Date)'] = np.busday_count(df_Temp['Today'],df_Temp['STLMT_DTE2'])
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<__array_function__ internals>", line 6, in busday_count
TypeError: Iterator operand 0 dtype could not be cast from dtype('<M8[ns]') to dtype('<M8[D]') according to the rule 'safe'
</code></pre>
<p>i have also tried using the following function :</p>
<pre><code>import datetime
def working_days(start_dt,end_dt):
num_days = (end_dt -start_dt).days +1
num_weeks =(num_days)//7
a=0
#condition 1
if end_dt.strftime('%a')=='Sat':
if start_dt.strftime('%a') != 'Sun':
a= 1
#condition 2
if start_dt.strftime('%a')=='Sun':
if end_dt.strftime('%a') !='Sat':
a =1
#condition 3
if end_dt.strftime('%a')=='Sun':
if start_dt.strftime('%a') not in ('Mon','Sun'):
a =2
#condition 4
if start_dt.weekday() not in (0,6):
if (start_dt.weekday() -end_dt.weekday()) >=2:
a =2
working_days =num_days -(num_weeks*2)-a
return working_days
</code></pre>
<p>Please can you suggest another method, or an adaptation of the working_days function, that will allow this to work? So far I have the code below. I hope I covered this in enough detail.</p>
<pre><code>for ns in (NettingSets):
df_Temp = dfNetY[dfNetY['ACCT_NUM'] == ns]
df_Temp['Current Credit Exposure'] = np.where(df_Temp['All NPV Flags']==1,0,df_Temp['MTM_AMT'])
df_Temp['Positive Current Credit Exposure'] = np.where(df_Temp['Current Credit Exposure'] > 0,df_Temp['Current Credit Exposure'],0)
df_Temp['SupervisoryFactor'] = 0.04
df_Temp['STLMT_DTE2'] = pd.to_datetime(df_Temp['STLMT_DTE2'].astype(str), format='%Y-%m-%d')
df_Temp['Today'] = date1
df_Temp['Today'] = pd.to_datetime(df_Temp['Today'].astype(str), format='%Y-%m-%d')
for rows in df_Temp:
df_Temp['Maturity(Stlm -Report Date)'] = np.busday_count(df_Temp['Today'],df_Temp['STLMT_DTE2'])
</code></pre>
|
<p>For np.busday_count to work, both dates need to be cast to the 'M8[D]' (datetime64[D]) dtype.</p>
<pre><code>import datetime
import pandas as pd
import numpy as np
# Create a toy data frame
dates_1 = pd.date_range(datetime.datetime(2018, 4, 5, 0,
0), datetime.datetime(2018, 4, 20, 7, 0),freq='D')
dates_2 = pd.date_range(datetime.datetime(2019, 4, 5, 0,
0), datetime.datetime(2019, 4, 20, 7, 0),freq='D')
df_Temp = pd.DataFrame({'STLMT_DTE2': dates_1, 'Today': dates_2})
df_Temp.head()
df_Temp['Maturity(Stlm-Report Date)'] = np.abs(
np.busday_count(df_Temp['Today'].values.astype('M8[D]'),
df_Temp['STLMT_DTE2'].values.astype('M8[D]')))
df_Temp.head()
</code></pre>
<p>Output:</p>
<pre><code>df_Temp.head()
Out[16]: # Before calculating business days
STLMT_DTE2 Today
0 2018-04-05 2019-04-05
1 2018-04-06 2019-04-06
2 2018-04-07 2019-04-07
3 2018-04-08 2019-04-08
4 2018-04-09 2019-04-09
df_Temp.head()
Out[17]: # After calculating business days
STLMT_DTE2 Today Maturity(Stlm-Report Date)
0 2018-04-05 2019-04-05 261
1 2018-04-06 2019-04-06 261
2 2018-04-07 2019-04-07 260
3 2018-04-08 2019-04-08 260
4 2018-04-09 2019-04-09 261
</code></pre>
|
python|pandas|datetime|weekend
| 3
|
8,988
| 13,857,769
|
Better way to compare neighboring cells in matrix
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/13805286/numpy-python-array-iteration-without-for-loop">Numpy/Python: Array iteration without for-loop</a> </p>
</blockquote>
<p>Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbor (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size.
A sample code in Python/Numpy could look like the following:
(the comparison >0.5 has no meaning, I just want to give a working example for some operation while comparing the neighbors)</p>
<pre><code>import numpy as np
my_matrix = np.random.rand(100,100)
new_matrix = np.array((100,100))
my_range = np.arange(1,99)
for i in my_range:
for j in my_range:
if my_matrix[i,j+1] > 0.5:
new_matrix[i,j+1] = 1
if my_matrix[i,j-1] > 0.5:
new_matrix[i,j-1] = 1
if my_matrix[i+1,j] > 0.5:
new_matrix[i+1,j] = 1
if my_matrix[i-1,j] > 0.5:
new_matrix[i-1,j] = 1
if my_matrix[i+1,j+1] > 0.5:
new_matrix[i+1,j+1] = 1
if my_matrix[i+1,j-1] > 0.5:
new_matrix[i+1,j-1] = 1
if my_matrix[i-1,j+1] > 0.5:
new_matrix[i-1,j+1] = 1
</code></pre>
<p>This can get really nasty if I want to step into one neighboring cell and start from it to compare it to its neighbors ... Do you have some suggestions how this can be done in a more efficient manner? Is this even possible?</p>
|
<p>I'm not 100% sure what you're aiming for with your code, which, ignoring indexing issues at boundaries, is equivalent to</p>
<pre><code>new_matrix = my_matrix > 0.5
</code></pre>
<p>but you can do advanced versions of these calculations quickly with morphological operations:</p>
<pre><code>import numpy as np
from scipy.ndimage import morphology
a = np.random.rand(5,5)
b = a > 0.5
element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
result = morphology.binary_dilation(b, element) * 1
</code></pre>
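<p>Not part of the original answer: the loop in the question also marks diagonal neighbours, so if that behaviour is wanted, a full 3x3 structuring element can be used instead (this is an assumption about the intended behaviour):</p>

<pre><code>element_8 = np.ones((3, 3), dtype=bool)              # 8-connected neighbourhood, including diagonals
result_8 = morphology.binary_dilation(b, element_8) * 1
</code></pre>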
|
python|matlab|matrix|numpy|scipy
| 2
|
8,989
| 45,091,991
|
How can I load a data frame saved in pandas as an HDF5 file in R without losing integers larger than 32 bit?
|
<p>I'm getting this warning message when I try to load a data frame saved in pandas as an HDF5 file in R:</p>
<blockquote>
<p>Warning message: In H5Dread(h5dataset = h5dataset, h5spaceFile =
h5spaceFile, h5spaceMem = h5spaceMem, : NAs produced by integer
overflow while converting 64-bit integer or unsigned 32-bit integer
from HDF5 to a 32-bit integer in R. Choose bit64conversion='bit64' or
bit64conversion='double' to avoid data loss and see the vignette
'rhdf5' for more details about 64-bit integers.</p>
</blockquote>
<p>For example, if I create HDF5 file in pandas with:</p>
<pre><code>import pandas as pd
frame = pd.DataFrame({
'time':[1234567001,1234515616515167005],
'X2':[23.88,23.96]
},columns=['time','X2'])
store = pd.HDFStore('a.hdf5')
store['df'] = frame
store.close()
print(frame)
</code></pre>
<p>which returns:</p>
<pre><code> time X2
0 1234567001 23.88
1 1234515616515167005 23.96
</code></pre>
<p>and try to load it in R:</p>
<pre><code>#source("http://bioconductor.org/biocLite.R")
#biocLite("rhdf5")
library(rhdf5)
loadhdf5data <- function(h5File) {
# Function taken from [How can I load a data frame saved in pandas as an HDF5 file in R?](https://stackoverflow.com/a/45024089/395857)
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
print(idx)
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx], bit64conversion='bit64'))
#names <- t(h5read(h5File, name_paths[idx], bit64conversion='double'))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
frame = loadhdf5data("a.hdf5")
</code></pre>
<p>I get this warning message:</p>
<pre><code>> frame = loadhdf5data("a.hdf5")
[1] 1
[1] 2
Warning message:
In H5Dread(h5dataset = h5dataset, h5spaceFile = h5spaceFile, h5spaceMem = h5spaceMem, :
NAs produced by integer overflow while converting 64-bit integer or unsigned 32-bit integer from HDF5 to a 32-bit integer in R. Choose bit64conversion='bit64' or bit64conversion='double' to avoid data loss and see the vignette 'rhdf5' for more details about 64-bit integers.
</code></pre>
<p>and I can see that one of the time values became NA:</p>
<pre><code>> frame
X2 time
1 23.88 1234567001
2 23.96 NA
</code></pre>
<p>How can I fix this issue? Choosing <code>bit64conversion='bit64'</code> or <code>bit64conversion='double'</code> doesn't change anything. </p>
<pre><code>> R.version
_
platform x86_64-w64-mingw32
arch x86_64
os mingw32
system x86_64, mingw32
status
major 3
minor 4.0
year 2017
month 04
day 21
svn rev 72570
language R
version.string R version 3.4.0 (2017-04-21)
nickname You Stupid Darkness
</code></pre>
|
<p><a href="https://rdrr.io/github/Bioconductor-mirror/rhdf5/man/H5D.html" rel="nofollow noreferrer">HDF5 Dataset Interface's documentation</a> says:</p>
<blockquote>
<p>bit64conversion: Defines, how 64-bit integers are converted. Internally, R does not support 64-bit integers. All integers in R are 32-bit integers. By setting bit64conversion='int', a coercing to 32-bit integers is enforced, with the risc of data loss, but with the insurance that numbers are represented as integers. bit64conversion='double' coerces the 64-bit integers to floating point numbers. doubles can represent integers with up to 54-bits, but they are not represented as integer values anymore. For larger numbers there is again a data loss. bit64conversion='bit64' is recommended way of coercing. It represents the 64-bit integers as objects of class 'integer64' as defined in the package 'bit64'. Make sure that you have installed 'bit64'. The datatype 'integer64' is not part of base R, but defined in an external package. This can produce unexpected behaviour when working with the data.</p>
</blockquote>
<p>You should therefore install bit64 (<code>install.packages("bit64")</code>) and load it (<code>library(bit64)</code>). You can check that <code>integer64</code> is loaded:</p>
<pre><code>> integer64
Function (length = 0)
{
ret <- double(length)
oldClass(ret) <- "integer64"
ret
}
<bytecode: 0x000000001a7a95f0>
<environment: namespace:bit64>
</code></pre>
<p>Now you can run:</p>
<pre><code>library(bit64)
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
print(idx)
data <- data.frame(t(h5read(h5File, data_paths[idx], bit64conversion='bit64')))
names <- t(h5read(h5File, name_paths[idx], bit64conversion='bit64'))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
frame = loadhdf5data("a.hdf5")
</code></pre>
<p>which gives: </p>
<pre><code>> frame
X2 time
1 23.88 1234567001
2 23.96 1234515616515167005
</code></pre>
|
python|r|pandas|dataframe|hdfs
| 1
|
8,990
| 44,889,508
|
If correlation is greater than 0.75 remove the column from dataframe of pandas
|
<p>I have a dataframe name <code>data</code> for which I plotted correlation matrix by using</p>
<pre><code>corr = data.corr()
</code></pre>
<p>If the <code>corr</code> between two columns is greater than 0.75, I want to remove one of them from the dataframe <code>data</code>. I tried some options:</p>
<pre><code>raw =corr[(corr.abs()>0.75) & (corr.abs() < 1.0)]
</code></pre>
<p>but it did not help; I need the column numbers from raw for which the value is nonzero. Basically, I need a Python replacement for the following R command:</p>
<pre><code>{hc=findCorrelation(corr,cutoff = 0.75)
hc = sort(hc)
data <- data[,-c(hc)]}
</code></pre>
<p>If anyone can help me get a command in Python pandas similar to the above-mentioned R command, that would be helpful.</p>
|
<p>Use <code>np.eye</code> to ignore the diagonal values and find all columns that have some value whose absolute value is greater than the threshold. Use the logical negation as a mask for the index and columns.</p>
<hr>
<p><strong>Your example</strong> </p>
<pre><code>m = ~(corr.mask(np.eye(len(corr), dtype=bool)).abs() > 0.75).any()
raw = corr.loc[m, m]
</code></pre>
<p><strong>Working example</strong> </p>
<pre><code>np.random.seed([3,1415])
data = pd.DataFrame(
np.random.randint(10, size=(10, 10)),
columns=list('ABCDEFGHIJ'))
data
A B C D E F G H I J
0 0 2 7 3 8 7 0 6 8 6
1 0 2 0 4 9 7 3 2 4 3
2 3 6 7 7 4 5 3 7 5 9
3 8 7 6 4 7 6 2 6 6 5
4 2 8 7 5 8 4 7 6 1 5
5 2 8 2 4 7 6 9 4 2 4
6 6 3 8 3 9 8 0 4 3 0
7 4 1 5 8 6 0 8 7 4 6
8 3 5 8 5 1 5 1 4 3 9
9 5 5 7 0 3 2 5 8 8 9
</code></pre>
<hr>
<pre><code>corr = data.corr()
corr
A B C D E F G H I J
A 1.00 0.22 0.42 -0.12 -0.17 -0.16 -0.11 0.35 0.13 -0.06
B 0.22 1.00 0.10 -0.08 -0.18 0.07 0.33 0.12 -0.34 0.17
C 0.42 0.10 1.00 -0.08 -0.41 -0.12 -0.42 0.55 0.20 0.34
D -0.12 -0.08 -0.08 1.00 -0.05 -0.29 0.27 0.02 -0.45 0.11
E -0.17 -0.18 -0.41 -0.05 1.00 0.47 0.00 -0.38 -0.19 -0.86
F -0.16 0.07 -0.12 -0.29 0.47 1.00 -0.62 -0.67 -0.08 -0.54
G -0.11 0.33 -0.42 0.27 0.00 -0.62 1.00 0.22 -0.40 0.07
H 0.35 0.12 0.55 0.02 -0.38 -0.67 0.22 1.00 0.50 0.59
I 0.13 -0.34 0.20 -0.45 -0.19 -0.08 -0.40 0.50 1.00 0.40
J -0.06 0.17 0.34 0.11 -0.86 -0.54 0.07 0.59 0.40 1.00
</code></pre>
<hr>
<pre><code>m = ~(corr.mask(np.eye(len(corr), dtype=bool)).abs() > 0.5).any()
m
A True
B True
C False
D True
E False
F False
G False
H False
I True
J False
dtype: bool
</code></pre>
<hr>
<pre><code>raw = corr.loc[m, m]
raw
A B D I
A 1.00 0.22 -0.12 0.13
B 0.22 1.00 -0.08 -0.34
D -0.12 -0.08 1.00 -0.45
I 0.13 -0.34 -0.45 1.00
</code></pre>
|
python|r|pandas|machine-learning|scikit-learn
| 13
|
8,991
| 45,202,659
|
ignore NaN in .diff() with Pandas
|
<p>I need to compute differences between elements along axis=1 for each row ignoring the missing values (NaN). For example:</p>
<pre><code> 0 1 2 3 4 5
20 NaN 7.0 5.0 NaN NaN 8.0
21 7.0 5.0 NaN NaN 8.0 NaN
22 5.0 NaN NaN 8.0 NaN 7.0
23 NaN NaN 8.0 NaN 7.0 NaN
24 NaN 8.0 NaN 7.0 NaN 10.0
25 8.0 NaN 7.0 NaN 10.0 NaN
26 NaN 7.0 NaN 10.0 NaN NaN
27 7.0 NaN 10.0 NaN NaN 9.0
28 NaN 10.0 NaN NaN 9.0 6.0
29 10.0 NaN NaN 9.0 6.0 6.0
</code></pre>
<p>so, ideally I need to get :</p>
<pre><code> 7.0 5.0 8.0
7.0 5.0 8.0
5.0 8.0 7.0
8.0 7.0
8.0 7.0 10.0
</code></pre>
<p>then I can apply the standard .diff(axis=1) and get what I need. However, I'm struggling to extract the non-NaN values from each row. Any ideas?</p>
|
<p>I assume that you already know how to compute differences when all the values are filled in. Use that process, but modify the comparison step. Whatever you use to compare existing values, include a filter to accept only <code>item</code>s for which <code>item == item</code>.</p>
<p>By definition, <code>NaN</code> fails any equality or ordering comparison. <code>NaN == NaN</code> is <code>False</code>, while <code>NaN != NaN</code> is <code>True</code>. If you include a condition that the item must be equal to itself, you filter out the <code>NaN</code> entries.</p>
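<p>A minimal sketch of that idea applied to one row from the question (the use of <code>dropna</code> on the result and the row-wise <code>apply</code> are assumptions, not part of the original answer):</p>

<pre><code>import numpy as np
import pandas as pd

row = pd.Series([np.nan, 7.0, 5.0, np.nan, np.nan, 8.0])
valid = row[row == row]        # NaN fails the self-comparison, so those entries drop out
print(valid.diff().dropna())   # differences between the remaining values: -2.0, 3.0

# applied row-wise to a whole DataFrame `df`:
# diffs = df.apply(lambda r: r[r == r].diff(), axis=1)
</code></pre>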
<p>Is that enough to let you continue?</p>
|
python|pandas|diff|nan
| 2
|
8,992
| 57,293,542
|
Pre-defined point colours in seaboarn scatter plot
|
<p>I am doing scatterplots with seaborn and I want the points to have 'pre-defined' colours. I am looping through my dataframe, and when I set <code>hue=df['category']</code> it uses the default palette. This is fine, but I would like the categories to carry the same colour through each plot, i.e. if one category is not being plotted the colours do not change.</p>
<p>I thought I could use something like the following, but it doesn't seem to work:</p>
<pre><code>category_colour = {'Netflix':'Blue', 'TV':'Red', 'DVD':'Yellow', 'Radio':'Pink'}
plot = sns.scatterplot(x="Popularity", y="Likelihood", hue=colour, data=df)
</code></pre>
<p>Any help would be much appreciated.</p>
|
<p>You can try:</p>
<pre><code>category_colour = {'Netflix':'Blue', 'TV':'Red', 'DVD':'Yellow', 'Radio':'Pink'}
plot = sns.scatterplot(x="Popularity",
y="Likelihood",
hue=df['category'].map(category_colour),
data=df)
</code></pre>
<p>but make sure the colors in <code>category_colour</code> are valid.</p>
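<p>Not part of the original answer: seaborn can also take the mapping through the <code>palette</code> argument, which keeps the category names in the legend. This assumes the dataframe has a <code>category</code> column, that every category present in the data has an entry in the dict, and that the colour names are valid matplotlib colours:</p>

<pre><code>plot = sns.scatterplot(x="Popularity", y="Likelihood",
                       hue="category",
                       palette=category_colour,
                       data=df)
</code></pre>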
|
python|pandas|matplotlib|seaborn
| 1
|
8,993
| 57,122,015
|
How to read column of Nan from csvfile into python so data can be used?
|
<p>I am trying to read in columns of data from a CSV file, then use them to do some calculations. The problem is that my timestamps are in hexadecimal. I need to read them in and convert them to decimal, but I don't know how to get them into Python as anything but NaN.</p>
<p>I have tried making it a string first.</p>
<pre><code>colnames = [ 'sensor', 'x', 'y', 'z', 'azimuth', 'elevation', 'roll', 'timestamp']
data = pd.read_csv('The Project- 7-19 SS Arc Test.csv', names = colnames)
hexa_time_initial = data.timestamp.tolist()
</code></pre>
<p>It needs to be a list of hexadecimal strings, but it is just a list of NaN. When the conversion loop runs, I get the error that it can't convert a non-string with explicit base.</p>
<p><a href="https://i.stack.imgur.com/1tbWQ.png" rel="nofollow noreferrer">Sample of excel file</a></p>
<p><code>1, 0.614, -7.798, -1.465, -6.117, 3.050, 5.231,0x42ef4,
1, 0.615, -7.798, -1.465, -6.109, 3.049, 5.231,0x42f05,
1, 0.616, -7.798, -1.465, -6.097, 3.045, 5.232,0x42f15,
1, 0.615, -7.798, -1.465, -6.108, 3.048, 5.232,0x42f26,
1, 0.614, -7.798, -1.465, -6.121, 3.051, 5.230,0x42f37,
1, 0.615, -7.798, -1.465, -6.107, 3.048, 5.230,0x42f47,
1, 0.616, -7.798, -1.465, -6.100, 3.046, 5.230,0x42f58,
1, 0.614, -7.798, -1.465, -6.116, 3.049, 5.230,0x42f69,</code></p>
|
<p>Thank you for the example data. I post here, not because I am sure, I found the solution, but because I couldn't show the output in a comment. But I have a suggestion, which might help.</p>
<p>When I read your csv data as you show it in your post, I get the following output:</p>
<pre><code> sensor x y z azimuth elevation roll timestamp
1 0.614 -7.798 -1.465 -6.117 3.050 5.231 0x42ef4 NaN
1 0.615 -7.798 -1.465 -6.109 3.049 5.231 0x42f05 NaN
1 0.616 -7.798 -1.465 -6.097 3.045 5.232 0x42f15 NaN
1 0.615 -7.798 -1.465 -6.108 3.048 5.232 0x42f26 NaN
1 0.614 -7.798 -1.465 -6.121 3.051 5.230 0x42f37 NaN
1 0.615 -7.798 -1.465 -6.107 3.048 5.230 0x42f47 NaN
1 0.616 -7.798 -1.465 -6.100 3.046 5.230 0x42f58 NaN
1 0.614 -7.798 -1.465 -6.116 3.049 5.230 0x42f69 NaN
</code></pre>
<p>I noticed that the <code>timestamp</code> column is <code>NaN</code>, and also that the sensor column is not the first column. I think this is because the CSV lines created from Excel end with a comma. Pandas then acts as if there were an (empty) extra column at the end. And because there is one more column than you have names, it seems to use the first column as the index, which also shifts the column names by one. This behavior seems odd to me, but could also be intended. To be sure, I just created a <a href="https://github.com/pandas-dev/pandas/issues/27515" rel="nofollow noreferrer">bug ticket for this</a>. What pandas version are you using?</p>
<p>If you just change your reading code a bit, you can avoid that:</p>
<pre><code># `raw` here is the CSV text as a string; with a file you would pass the path instead
df = pd.read_csv(io.StringIO(raw), sep=',\s*', names=colnames, index_col=False)
</code></pre>
<p>After reading <code>df</code> like this, it looks better:</p>
<pre><code> sensor x y z azimuth elevation roll timestamp
0 1 0.614 -7.798 -1.465 -6.117 3.050 5.231 0x42ef4
1 1 0.615 -7.798 -1.465 -6.109 3.049 5.231 0x42f05
2 1 0.616 -7.798 -1.465 -6.097 3.045 5.232 0x42f15
3 1 0.615 -7.798 -1.465 -6.108 3.048 5.232 0x42f26
4 1 0.614 -7.798 -1.465 -6.121 3.051 5.230 0x42f37
5 1 0.615 -7.798 -1.465 -6.107 3.048 5.230 0x42f47
6 1 0.616 -7.798 -1.465 -6.100 3.046 5.230 0x42f58
7 1 0.614 -7.798 -1.465 -6.116 3.049 5.230 0x42f69
</code></pre>
<p>Now the column names are assigned correctly. This is because of the <code>index_col=False</code> option, which tells pandas not to use the first colum of the file as index.</p>
<p>If you like, you can also add something like <code>usecols=range(len(colnames))</code>, which tells pandas to use only as many columns from your file as you have names; so if Excel runs amok and adds dozens of commas at the end of the lines, you don't get problems from the many empty and unnamed columns in your dataframe.
You should check whether you really want to use <code>sep=',\s*'</code> or rather <code>sep=','</code>. The first just makes sure that the leading whitespace before each value is stripped...</p>
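<p>Not part of the original answer, but once the timestamp column reads in as strings, the hex-to-decimal conversion the question is after could look like this (a sketch using the <code>df</code> from above; it assumes the values really are strings such as <code>0x42ef4</code>):</p>

<pre><code>df['timestamp'] = df['timestamp'].str.strip().apply(lambda s: int(s, 16))
</code></pre>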
|
python|pandas|csv|hex|nan
| 0
|
8,994
| 45,873,063
|
Pandas DataFrame from list/dict/list
|
<p>I have some data in this form:</p>
<pre><code>a = [{'table': 'a', 'field':['apple', 'pear']},
{'table': 'b', 'field':['grape', 'berry']}]
</code></pre>
<p>I want to create a dataframe that looks like this:</p>
<pre><code> field table
0 apple a
1 pear a
2 grape b
3 berry b
</code></pre>
<p>When I try this:</p>
<pre><code>pd.DataFrame.from_records(a)
</code></pre>
<p>I get this:</p>
<pre><code> field table
0 [apple, pear] a
1 [grape, berry] b
</code></pre>
<p>I'm using a loop to restructure my original data, but I think there must be a more straightforward and simpler method.</p>
|
<p>You can use a list comprehension to concatenate a series of dataframes, one for each dictionary in <code>a</code>. </p>
<pre><code>>>> pd.concat([pd.DataFrame({'table': d['table'], # Per @piRSquared for simplification.
'field': d['field']})
for d in a]).reset_index(drop=True)
field table
0 apple a
1 pear a
2 grape b
3 berry b
</code></pre>
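<p>Not part of the original answer: in pandas 0.25 or newer, <code>explode</code> offers another way to expand the list column (the column order in the result may differ):</p>

<pre><code>df = pd.DataFrame(a).explode('field').reset_index(drop=True)
</code></pre>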
|
python|pandas
| 4
|
8,995
| 45,947,505
|
Is there a method to perform a derivative by inbuilt function in python 3.5?
|
<p>I don't want to use numpy because it isn't permitted in competitive programming. I want to take the derivative of, for instance, m=a(a**2-24a).<br/>
Now, a=0 and a=8 are critical points, and I want to find these values. Is it possible using any Python library function excluding numpy?</p>
|
<p><code>m=a(a**2-24a)</code> does not make sense as a Python expression, regardless of what <code>a</code> is. The first <code>a(</code> implies a function; <code>a**</code> implies a number, and <code>24a</code> is I-don't-know-what. </p>
<p>Derivative is a mathematical concept that's implemented in <code>sympy</code> (symbolic math). <code>numpy</code> also implements it for polynomials. <code>numpy</code> also implements a finite difference approximation for arrays.</p>
<p>For a list of numbers (or two lists) you could calculate a finite difference approximation (without <code>numpy</code>).</p>
<p>But for a general function, there isn't a derivative method for Python nor <code>numpy</code>. </p>
<p>Your question might make more sense if you give examples of the function or lists that you are working with.</p>
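<p>For reference, a minimal sketch of the symbolic route with <code>sympy</code> mentioned above (reading the expression as <code>m = a*(a**2 - 24*a)</code> is an assumption about what was meant):</p>

<pre><code>import sympy as sp

a = sp.symbols('a')
m = a * (a**2 - 24*a)
dm = sp.diff(m, a)           # symbolic derivative
critical = sp.solve(dm, a)   # values of a where the derivative is zero
print(dm, critical)
</code></pre>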
|
python-3.x|numpy|python-3.5|standard-library
| 0
|
8,996
| 45,925,034
|
Add a new column to a CSV file using python
|
<p>I have a <code>CSV</code> file that has the following headers :</p>
<pre><code>model ,years ,engine ,power_kW,power_hp,torque_Nm,torque_ft-lb,0-100 km/h
</code></pre>
<p>and I want to write the following list :</p>
<pre><code>power_hp = [113.99, 120.69, 127.4, 134.1, 140.81, 147.51, 154.22, 167.63, 170.31, 174.33, 199.81, 214.56, 214.56, 230.66, 254.79, 268.2, 301.73, 301.73, 321.84, 414.38, 443.88]
</code></pre>
<p>into the <code>power_hp</code> column.</p>
<p>My code is below:</p>
<pre><code>import csv
import shutil
import unit_converter
shutil.copy('bmw_e90_in.csv', 'bmw_e90_out.csv')
path = 'bmw_e90_out.csv'
file = open(path, newline='')
reader = csv.reader(file)
header_row = next(reader) # First line in header
power_hp = []
torque_ft_lb = []
# row = [Model, Years, Engine, Power_kW, Power_hp, Torque_Nm, Torque_ft-lb,
# 0-100 km/h]
for row in reader:
power_kw = float(row[3])
torque_nm = float(row[5])
power_hp.append(unit_converter.power(power_kw))
torque_ft_lb.append(unit_converter.torque(torque_nm))
print(power_hp)
# compute an store power and torque values
return_path = 'bmw_e90_out.csv'
file = open(return_path, 'w', newline='')
writer = csv.writer(file)
</code></pre>
|
<p>Using <code>pandas</code>, load in your dataframe using <code>pd.read_csv</code>, add the new column, and write to the same file using <code>df.to_csv</code>.</p>
<pre><code>import pandas as pd
power_hp = [113.99, 120.69, 127.4, 134.1, 140.81, 147.51, 154.22, 167.63, 170.31, 174.33, 199.81, 214.56, 214.56, 230.66, 254.79, 268.2, 301.73, 301.73, 321.84, 414.38, 443.88]
df = pd.read_csv('bmw_e90_out.csv')
df['power_hp'] = power_hp
df.to_csv('bmw_e90_out.csv')
</code></pre>
|
python|python-3.x|pandas|csv
| 0
|
8,997
| 45,735,230
|
How to replace a list of values in a numpy array?
|
<p>I have an unsorted array of numbers. </p>
<p>I need to replace certain numbers (given in a list) with specific alternatives (also given in a corresponding list) </p>
<p>I wrote the following code (which seems to works): </p>
<pre><code>import numpy as np
numbers = np.arange(0,40)
np.random.shuffle(numbers)
problem_numbers = [33, 23, 15] # table, night_stand, plant
alternative_numbers = [12, 14, 26] # desk, dresser, flower_pot
for i in range(len(problem_numbers)):
idx = numbers == problem_numbers[i]
numbers[idx] = alternative_numbers[i]
</code></pre>
<p>However, this seems highly inefficient (this needs to be done several millions of times for much larger arrays).</p>
<p>I found <a href="https://stackoverflow.com/questions/12122639/find-indices-of-a-list-of-values-in-a-numpy-array">this</a> question which answers a similar problem however in my case the numbers are not sorted and they need to maintain their original location. </p>
<p>Note: <code>numbers</code> may contain multiple or no occurrences of elements in <code>problem_numbers</code></p>
|
<p>EDIT: I implemented a TensorFlow version of this in <a href="https://stackoverflow.com/a/56805806/1782792">this answer</a> (almost exactly the same, except replacements are a dict).</p>
<hr>
<p>Here is a simple way to do it:</p>
<pre><code>import numpy as np
numbers = np.arange(0,40)
np.random.shuffle(numbers)
problem_numbers = [33, 23, 15] # table, night_stand, plant
alternative_numbers = [12, 14, 26] # desk, dresser, flower_pot
# Replace values
problem_numbers = np.asarray(problem_numbers)
alternative_numbers = np.asarray(alternative_numbers)
n_min, n_max = numbers.min(), numbers.max()
replacer = np.arange(n_min, n_max + 1)
# Mask replacements out of range
mask = (problem_numbers >= n_min) & (problem_numbers <= n_max)
replacer[problem_numbers[mask] - n_min] = alternative_numbers[mask]
numbers = replacer[numbers - n_min]
</code></pre>
<p>This works well an should be efficient as long as the range of the values in <code>numbers</code> (the difference between the smallest and the biggest) is not huge (e.g you don't have something like <code>1</code>, <code>7</code> and <code>10000000000</code>).</p>
<p><strong>Benchmarking</strong></p>
<p>I've compared the code in the OP with the three (as of now) proposed solutions with this code:</p>
<pre><code>import numpy as np
def method_itzik(numbers, problem_numbers, alternative_numbers):
numbers = np.asarray(numbers)
for i in range(len(problem_numbers)):
idx = numbers == problem_numbers[i]
numbers[idx] = alternative_numbers[i]
return numbers
def method_mseifert(numbers, problem_numbers, alternative_numbers):
numbers = np.asarray(numbers)
replacer = dict(zip(problem_numbers, alternative_numbers))
numbers_list = numbers.tolist()
numbers = np.array(list(map(replacer.get, numbers_list, numbers_list)))
return numbers
def method_divakar(numbers, problem_numbers, alternative_numbers):
numbers = np.asarray(numbers)
problem_numbers = np.asarray(problem_numbers)
    alternative_numbers = np.asarray(alternative_numbers)
# Pre-process problem_numbers and correspondingly alternative_numbers
# such that repeats and no matches are taken care of
sidx_pn = problem_numbers.argsort()
pn = problem_numbers[sidx_pn]
mask = np.concatenate(([True],pn[1:] != pn[:-1]))
an = alternative_numbers[sidx_pn]
minN, maxN = numbers.min(), numbers.max()
mask &= (pn >= minN) & (pn <= maxN)
pn = pn[mask]
an = an[mask]
# Pre-pocessing done. Now, we need to use pn and an in place of
# problem_numbers and alternative_numbers repectively. Map, index and assign.
sidx = numbers.argsort()
idx = sidx[np.searchsorted(numbers, pn, sorter=sidx)]
valid_mask = numbers[idx] == pn
numbers[idx[valid_mask]] = an[valid_mask]
def method_jdehesa(numbers, problem_numbers, alternative_numbers):
numbers = np.asarray(numbers)
problem_numbers = np.asarray(problem_numbers)
alternative_numbers = np.asarray(alternative_numbers)
n_min, n_max = numbers.min(), numbers.max()
replacer = np.arange(n_min, n_max + 1)
# Mask replacements out of range
mask = (problem_numbers >= n_min) & (problem_numbers <= n_max)
replacer[problem_numbers[mask] - n_min] = alternative_numbers[mask]
numbers = replacer[numbers - n_min]
return numbers
</code></pre>
<p>The results:</p>
<pre><code>import numpy as np
np.random.seed(100)
MAX_NUM = 100000
numbers = np.random.randint(0, MAX_NUM, size=100000)
problem_numbers = np.unique(np.random.randint(0, MAX_NUM, size=500))
alternative_numbers = np.random.randint(0, MAX_NUM, size=len(problem_numbers))
%timeit method_itzik(numbers, problem_numbers, alternative_numbers)
10 loops, best of 3: 63.3 ms per loop
# This method expects lists
problem_numbers_l = list(problem_numbers)
alternative_numbers_l = list(alternative_numbers)
%timeit method_mseifert(numbers, problem_numbers_l, alternative_numbers_l)
10 loops, best of 3: 20.5 ms per loop
%timeit method_divakar(numbers, problem_numbers, alternative_numbers)
100 loops, best of 3: 9.45 ms per loop
%timeit method_jdehesa(numbers, problem_numbers, alternative_numbers)
1000 loops, best of 3: 822 µs per loop
</code></pre>
|
python|arrays|performance|numpy
| 4
|
8,998
| 23,012,455
|
in Python trying to use cv2.matchShapes() from OpenCV
|
<p>I have done a random drawing on a whiteboard and NAO robot has taken a picture and tried to re-create the same drawing.</p>
<p>My drawing:</p>
<p><img src="https://i.stack.imgur.com/TFs44.jpg" alt="enter image description here" /></p>
<p>NAO's drawing:</p>
<p><img src="https://i.stack.imgur.com/Hm4Xd.jpg" alt="enter image description here" /></p>
<p>At this point I would like to write some conclusions about it, specifically I want to extract the contours from both pictures and match the contours using the <code>OpenCV</code> function <code>cv2.matchShapes()</code>.</p>
<p>However, I wrote a small <code>Python</code> script for this and it gives me some errors. Here is the code:</p>
<pre><code>import numpy as np
import cv2
# get the pictures from the folder
original = cv2.imread('eightgon.jpg')
drawn = cv2.imread('eightgon1.jpg')
#make them gray
originalGray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
drawnGray = cv2.cvtColor(drawn, cv2.COLOR_BGR2GRAY)
#apply erosion
kernel = np.ones((2, 2),np.uint8)
originalErosion = cv2.erode(originalGray, kernel, iterations = 1)
drawnErosion = cv2.erode(drawnGray, kernel, iterations = 1)
#retrieve edges with Canny
thresh = 175
originalEdges = cv2.Canny(originalErosion, thresh, thresh*2)
drawnEdges = cv2.Canny(drawnErosion, thresh, thresh*2)
#extract contours
originalContours, Orighierarchy = cv2.findContours(originalEdges, cv2.cv.CV_RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
drawnContours, Drawnhierarchy = cv2.findContours(drawnEdges, cv2.cv.CV_RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
print cv2.matchShapes(drawnContours,originalContours,cv2.cv.CV_CONTOURS_MATCH_I1, 0.0)
</code></pre>
<p>When I run this simple code, it returns me this error:</p>
<pre><code>File "C:/Python27/getResults.py", line 32, in <module>
ret = cv2.matchShapes(drawnContours,originalContours,cv2.cv.CV_CONTOURS_MATCH_I1, 0.0)
TypeError: contour1 is not a numpy array, neither a scalar
</code></pre>
<p>Since the error tells me that the contours should be arrays, I made a slight change to the code like this:</p>
<pre><code>cnt1 = np.asarray(drawnContours, np.int0)
cnt2 = np.asarray(originalContours, np.int0)
print cv2.matchShapes(cnt1,cnt2,cv2.cv.CV_CONTOURS_MATCH_I1, 0.0)
</code></pre>
<p>and in this case it returns me this error: <code>ValueError: setting an array element with a sequence.</code></p>
<p>What am I doing wrong?
Any help is appreciated!</p>
|
<p>I faced a similar problem. The <code>matchShapes</code> function takes a single pair of contours, not the two whole lists of contours.</p>
<pre><code>cv2.matchShapes(drawnContours[i], originalContours[i], cv2.cv.CV_CONTOURS_MATCH_I1, 0.0)
</code></pre>
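<p>A minimal sketch of how that could look in the OP's code (assuming you simply want to compare the largest contour from each image; the variable names are just illustrative):</p>

<pre><code># pick the biggest contour from each image by area
best_drawn = max(drawnContours, key=cv2.contourArea)
best_original = max(originalContours, key=cv2.contourArea)

# 0.0 means identical shapes; larger values mean less similar
print cv2.matchShapes(best_drawn, best_original, cv2.cv.CV_CONTOURS_MATCH_I1, 0.0)
</code></pre>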
<p>Hope that helps.</p>
|
python|opencv|numpy
| 4
|
8,999
| 23,314,500
|
Best way to iterate through a numpy array returning the columns as 2d arrays
|
<p>EDIT: Thank you all for the good solutions; I think if I had to pick one, it would be <code>A[:,[0]]</code>.</p>
<p>I have now collected 7 approaches and put them into an <a href="http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/benchmarks/timeit_tests.ipynb?create=1#row_vectors" rel="nofollow">IPython notebook</a>. The timeit benchmarks are not surprising: they are all roughly the same in terms of speed.</p>
<p>Thanks a lot for your suggestions!</p>
<hr>
<p>I am looking for a good way to iterate through the columns of a matrix and return them as d x 1 column vectors. I have some ideas, but I don't think those are good solutions. I think I am missing something here. Which way would you recommend? E.g., let's say I have the following matrix and want to return the first column as a column vector:</p>
<pre><code>A = np.array([ [1,2,3], [4,5,6], [7,8,9] ])
>>> A
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
</code></pre>
<p>By default, numpy returns it like this:</p>
<pre><code>>>> A[:,0]
array([1, 4, 7])
>>> A[:,0].shape
(3,)
</code></pre>
<p>And what I want is this:</p>
<pre><code>array([[1],
[4],
[7]])
</code></pre>
<p>with <code>.shape</code> = (3,1)</p>
<hr>
<p>Transposing doesn't turn it into a column vector, since the slice is a 1-D array and transposing a 1-D array is a no-op.</p>
<pre><code>>>> A[:,0].T
array([1, 4, 7])
>>> A[:,0]
array([1, 4, 7])
</code></pre>
<p>I would have to create a new axis every time</p>
<pre><code>>>> A[:,0][:,np.newaxis].shape
(3, 1)
>>> A[:,0][:,np.newaxis]
array([[1],
[4],
[7]])
</code></pre>
<p>Or after doing some experimenting, I came up with other workarounds like this:</p>
<pre><code>>>> A[:,0:1]
array([[1],
[4],
[7]])
>>> A[:,0].reshape(A.shape[0],1)
array([[1],
[4],
[7]])
</code></pre>
|
<p>My favorite solution is slicing. You have several options:</p>
<pre><code>A[:,0:1] # not so clear
A[:,:1] # black magic
A[:,[0]] # clearest syntax imho
</code></pre>
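<p>Since the question is about iterating over the columns, any of these can be used in a plain loop, e.g. (a small sketch):</p>

<pre><code>for j in range(A.shape[1]):
    col = A[:, [j]]   # each col has shape (n, 1)
    print col.shape
</code></pre>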
<p>Concerning the <code>reshape</code> solution, you can enhance the syntax like this:</p>
<pre><code>A[:,0].reshape(A.shape[0],1)
A[:,0].reshape(-1,1)
</code></pre>
<p>You can also merge the indexing and the new axis into a single step:</p>
<pre><code>A[:,0][:,np.newaxis] # ->
A[:,0,np.newaxis] # or
A[:,np.newaxis,0]
</code></pre>
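<p>A quick sanity check (using the <code>A</code> from the question) that each of these really gives a column with shape <code>(3, 1)</code>:</p>

<pre><code>>>> A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> A[:, [0]].shape
(3, 1)
>>> A[:, :1].shape
(3, 1)
>>> A[:, 0].reshape(-1, 1).shape
(3, 1)
>>> A[:, 0, np.newaxis].shape
(3, 1)
</code></pre>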
|
python|numpy|matrix
| 1
|