title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Python & Matplotlib: How to plot the range of bootstrap histogram plot? | 38,558,921 | <p>I want to examine the variance of a dataset by bootstrapping (resampling) the data.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy.random import randn

fig, ax = plt.subplots()
bins = np.arange(-5, 6, 0.5)
df = pd.DataFrame(randn(3000))
df.hist(ax=ax, bins=bins, alpha=0.7, normed=True)

count_collection = []
for i in xrange(1, 100):
    temp_df = df.sample(frac=0.5, replace=True)
    temp_df.hist(ax=ax, bins=bins, alpha=0.25, normed=True)
    count, division = np.histogram(temp_df, bins=bins)
    count_collection.append(count)
</code></pre>
<p><a href="http://i.stack.imgur.com/Ba8xD.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ba8xD.png" alt="enter image description here"></a></p>
<p>However, in such a plot it is hard to see the limits. Is it possible to plot the upper/lower limit of the histogram so it can be seen more clearly, maybe something like a boxplot for each bin?</p>
<p><img src="http://matplotlib.org/_images/boxplot_demo_06.png" alt=""></p>
<p>or just curves with upper/lower limit to indicate range?</p>
<p><a href="http://i.stack.imgur.com/dVOWj.png" rel="nofollow"><img src="http://i.stack.imgur.com/dVOWj.png" alt="enter image description here"></a></p>
<p>My main difficulty is extracting the max/min value for each bin (The <code>count_collection</code>)</p>
<p>UPDATE:</p>
<p>What would be a good way to plot the range?</p>
<pre><code>count_collection = np.array(count_collection)
mx = np.max(count_collection,0)
mn = np.min(count_collection,0)
ax.plot(division[1:]-0.25, mx, '_', mew=1)
ax.plot(division[1:]-0.25, mn, '_', mew=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/WJYOU.png" rel="nofollow"><img src="http://i.stack.imgur.com/WJYOU.png" alt="enter image description here"></a></p>
<p>I find this is still hard to read; any suggestions?</p>
| 0 | 2016-07-25T02:33:01Z | 38,560,667 | <p>To extract the max and min you may use the following:</p>
<pre><code>count_collection = np.array(count_collection)
mx = np.max(count_collection,0)
mn = np.min(count_collection,0)
</code></pre>
<p>The first line just changes a list of 1d arrays into a 2d array, so that <code>max</code> and <code>min</code> can operate along axis 0.</p>
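<p>To see what that reshaping buys you, here is a minimal standalone sketch (with made-up counts, not the poster's data):</p>
<pre><code>import numpy as np

# Each entry of count_collection is a 1d array of per-bin counts
# from one bootstrap resample.
count_collection = [np.array([1, 5, 2]), np.array([3, 2, 4]), np.array([2, 3, 3])]

# Stack into a 2d array: rows = resamples, columns = bins.
stacked = np.array(count_collection)          # shape (3, 3)

# Reducing over axis 0 gives the per-bin extremes across resamples.
mx = np.max(stacked, 0)                       # [3, 5, 4]
mn = np.min(stacked, 0)                       # [1, 2, 2]
</code></pre>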
<p><strong>edit:</strong></p>
<p>Since the original plot was normalized, it is hard to make sense of the max and min of half the sample size. But you can do something like this:</p>
<pre><code>import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import pandas as pd

fig, ax = plt.subplots()
bins = np.arange(-5,6,0.5)
df = pd.DataFrame(randn(3000))
#df.hist(ax=ax, bins=bins, alpha = 0.7, normed=True)
histval, _ = np.histogram(df, bins=bins)
count_collection = []
for i in np.arange(1, 100):
    temp_df = df.sample(frac=0.5, replace=True)
    # temp_df.hist(ax=ax, bins=bins, alpha=0.25, normed=True)
    count, division = np.histogram(temp_df, bins=bins)
    count_collection.append(count)
count_collection = np.array(count_collection)
mx = np.max(count_collection,0)
mn = np.min(count_collection,0)
plt.bar(bins[:-1], histval, 0.5)
plt.plot(bins[:-1] + 0.25, mx*2)
plt.plot(bins[:-1] + 0.25, mn*2)
</code></pre>
<p>The 2x factor is due to the 2x smaller sample size when calculating the max and min.
<a href="http://i.stack.imgur.com/BPSCv.png" rel="nofollow"><img src="http://i.stack.imgur.com/BPSCv.png" alt="enter image description here"></a></p>
| 1 | 2016-07-25T06:01:44Z | [
"python",
"numpy",
"matplotlib"
] |
Tensorflow Wide & Deep Example Not Working | 38,558,976 | <p>I am trying to get the <a href="https://www.tensorflow.org/versions/r0.9/tutorials/wide_and_deep/index.html" rel="nofollow">Wide & Deep</a> tutorial working but the following line keeps giving me issues when copying and pasting the code from github and the website. </p>
<pre><code>df_train["income_bracket"].apply(lambda x: ">50K" in x)).astype(int)
</code></pre>
<p>I get the below error </p>
<blockquote>
<p>TypeError: argument of type 'float' is not iterable</p>
</blockquote>
<p>I am not too familiar with lambda functions, but I think it is making a dummy variable, so I tried that using </p>
<pre><code>for i in range(len(df_train)):
    if df_train.loc[i, 'income_bracket'] == '>50k':
        df_train.loc[i, LABEL_COLUMN] = 1
    else:
        df_train.loc[i, LABEL_COLUMN] = 0
</code></pre>
<p>But got the error </p>
<blockquote>
<p>TypeError: Expected binary or unicode string, got nan</p>
</blockquote>
<p>How do I get this tutorial working?</p>
<p>EDIT:
first line of data and headers
<a href="http://i.stack.imgur.com/uITFy.png" rel="nofollow"><img src="http://i.stack.imgur.com/uITFy.png" alt="enter image description here"></a></p>
| 0 | 2016-07-25T02:43:05Z | 39,359,972 | <p>The lambda function is quite useful and simple; it won't create dummy variables.
I've noticed that you import the original data from a CSV file. Try not to do that and just use the original downloaded data shown in the tutorial code; I have successfully tried it that way.
But I also got the same problem when I changed to other data sets for training, so I still hope someone can solve this problem in a deeper way.</p>
| 1 | 2016-09-07T01:24:31Z | [
"python",
"python-2.7",
"tensorflow"
] |
Tensorflow Wide & Deep Example Not Working | 38,558,976 | <p>I am trying to get the <a href="https://www.tensorflow.org/versions/r0.9/tutorials/wide_and_deep/index.html" rel="nofollow">Wide & Deep</a> tutorial working but the following line keeps giving me issues when copying and pasting the code from github and the website. </p>
<pre><code>df_train["income_bracket"].apply(lambda x: ">50K" in x)).astype(int)
</code></pre>
<p>I get the below error </p>
<blockquote>
<p>TypeError: argument of type 'float' is not iterable</p>
</blockquote>
<p>I am not too familiar with lambda functions, but I think it is making a dummy variable, so I tried that using </p>
<pre><code>for i in range(len(df_train)):
    if df_train.loc[i, 'income_bracket'] == '>50k':
        df_train.loc[i, LABEL_COLUMN] = 1
    else:
        df_train.loc[i, LABEL_COLUMN] = 0
</code></pre>
<p>But got the error </p>
<blockquote>
<p>TypeError: Expected binary or unicode string, got nan</p>
</blockquote>
<p>How do I get this tutorial working?</p>
<p>EDIT:
first line of data and headers
<a href="http://i.stack.imgur.com/uITFy.png" rel="nofollow"><img src="http://i.stack.imgur.com/uITFy.png" alt="enter image description here"></a></p>
| 0 | 2016-07-25T02:43:05Z | 39,405,734 | <p>It's an issue with the data, or with the TensorFlow code. We have submitted an issue about it at <a href="https://github.com/tensorflow/tensorflow/issues/4293" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/4293</a></p>
<p>You can download the files manually and remove the broken lines. Then run with this command.</p>
<pre><code>python ./wide_n_deep_tutorial.py --train_data /home/data/train_data --test_data /home/data/test_data
</code></pre>
| 0 | 2016-09-09T07:07:18Z | [
"python",
"python-2.7",
"tensorflow"
] |
Multi-threading vs single thread calculations | 38,559,118 | <pre><code>import random
import threading
import time

def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

#do the work!
threads = []
for x in range(0,4): #4 threads
    threads.append(threading.Thread(target=dowork))
for x in threads:
    x.start() # and they are off
</code></pre>
<p>Results:</p>
<pre><code> 23949968699026357507152486869104218631097704347109...
--- 11.899 seconds --- Thread-2
10632599432628604090664113776561125984322566079319...
--- 11.924 seconds --- Thread-4
20488842520966388603734530904324501550532057464424...
--- 12.073 seconds --- Thread-1
17247910051860808132548857670360685101748752056479...
--- 12.115 seconds --- Thread-3
[Finished in 12.2s]
</code></pre>
<p>And now let's do it in 1 thread:</p>
<pre><code>import random
import threading
import time

def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

# print(threadtest())
threads = []
for x in range(0,4):
    threads.append(True)
for x in threads:
    dowork()
</code></pre>
<p>Results:</p>
<pre><code> 14283744921265630410246013584722456869128720814937...
--- 2.8463 seconds --- MainThread
13487957813644386002497605118558198407322675045349...
--- 2.7690 seconds --- MainThread
15058500261169362071147461573764693796710045625582...
--- 2.7372 seconds --- MainThread
77481355564746169357229771752308217188584725215300...
--- 2.7168 seconds --- MainThread
[Finished in 11.1s]
</code></pre>
<p>Why do the single-threaded and multi-threaded scripts have the <strong>same</strong> processing time?
Shouldn't the multi-threaded implementation take only 1/(number of threads) of the time? (I know there are diminishing returns once you exceed your CPU's maximum thread count.) </p>
<p>Did I mess up my implementation?</p>
| 0 | 2016-07-25T03:03:52Z | 38,559,332 | <p>In CPython, threads don't run in parallel because of the Global Intepreter Lock. From the Python wiki (<a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">https://wiki.python.org/moin/GlobalInterpreterLock</a>):</p>
<blockquote>
<p>In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe</p>
</blockquote>
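<p>For CPU-bound work you can sidestep the GIL by using processes instead of threads, for example with <code>multiprocessing.Pool</code> (a minimal sketch, not the poster's code; timings will vary by machine):</p>
<pre><code>import time
from multiprocessing import Pool

def burn(n):
    # CPU-bound work; with threads this would serialize on the GIL,
    # but each worker process has its own interpreter and its own GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    start = time.time()
    with Pool(4) as pool:
        results = pool.map(burn, [2000000] * 4)   # runs in 4 processes
    print("4 processes: %.4f seconds" % (time.time() - start))
</code></pre>
<p>Note that <code>multiprocessing</code> pays a cost for pickling arguments and results between processes, so it only helps when each task is large enough.</p>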
| 0 | 2016-07-25T03:35:12Z | [
"python",
"multithreading",
"python-3.x"
] |
Multi-threading vs single thread calculations | 38,559,118 | <pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

#do the work!
threads = []
for x in range(0,4): #4 threads
    threads.append(threading.Thread(target=dowork))
for x in threads:
    x.start() # and they are off
</code></pre>
<p>Results:</p>
<pre><code> 23949968699026357507152486869104218631097704347109...
--- 11.899 seconds --- Thread-2
10632599432628604090664113776561125984322566079319...
--- 11.924 seconds --- Thread-4
20488842520966388603734530904324501550532057464424...
--- 12.073 seconds --- Thread-1
17247910051860808132548857670360685101748752056479...
--- 12.115 seconds --- Thread-3
[Finished in 12.2s]
</code></pre>
<p>And now let's do it in 1 thread:</p>
<pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

# print(threadtest())
threads = []
for x in range(0,4):
    threads.append(True)
for x in threads:
    dowork()
</code></pre>
<p>Results:</p>
<pre><code> 14283744921265630410246013584722456869128720814937...
--- 2.8463 seconds --- MainThread
13487957813644386002497605118558198407322675045349...
--- 2.7690 seconds --- MainThread
15058500261169362071147461573764693796710045625582...
--- 2.7372 seconds --- MainThread
77481355564746169357229771752308217188584725215300...
--- 2.7168 seconds --- MainThread
[Finished in 11.1s]
</code></pre>
<p>Why do the single-threaded and multi-threaded scripts have the <strong>same</strong> processing time?
Shouldn't the multi-threaded implementation take only 1/(number of threads) of the time? (I know there are diminishing returns once you exceed your CPU's maximum thread count.) </p>
<p>Did I mess up my implementation?</p>
| 0 | 2016-07-25T03:03:52Z | 38,559,358 | <p><a href="http://www.dabeaz.com/GIL/" rel="nofollow">Here is a link to presentations about the GIL</a> <a href="http://www.dabeaz.com/GIL/" rel="nofollow">http://www.dabeaz.com/GIL/</a></p>
<p>The author of these presentations explained GIL in detail with examples. He also has a few videos posted on Youtube</p>
<p>In addition to using threads you might also be interested in asynchronous programming. In Python 3, <a href="https://docs.python.org/3/library/asyncio.html" rel="nofollow">the asyncio library</a> was added to provide asynchronous concurrency</p>
| 2 | 2016-07-25T03:38:48Z | [
"python",
"multithreading",
"python-3.x"
] |
Multi-threading vs single thread calculations | 38,559,118 | <pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

#do the work!
threads = []
for x in range(0,4): #4 threads
    threads.append(threading.Thread(target=dowork))
for x in threads:
    x.start() # and they are off
</code></pre>
<p>Results:</p>
<pre><code> 23949968699026357507152486869104218631097704347109...
--- 11.899 seconds --- Thread-2
10632599432628604090664113776561125984322566079319...
--- 11.924 seconds --- Thread-4
20488842520966388603734530904324501550532057464424...
--- 12.073 seconds --- Thread-1
17247910051860808132548857670360685101748752056479...
--- 12.115 seconds --- Thread-3
[Finished in 12.2s]
</code></pre>
<p>And now let's do it in 1 thread:</p>
<pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

# print(threadtest())
threads = []
for x in range(0,4):
    threads.append(True)
for x in threads:
    dowork()
</code></pre>
<p>Results:</p>
<pre><code> 14283744921265630410246013584722456869128720814937...
--- 2.8463 seconds --- MainThread
13487957813644386002497605118558198407322675045349...
--- 2.7690 seconds --- MainThread
15058500261169362071147461573764693796710045625582...
--- 2.7372 seconds --- MainThread
77481355564746169357229771752308217188584725215300...
--- 2.7168 seconds --- MainThread
[Finished in 11.1s]
</code></pre>
<p>Why do the single-threaded and multi-threaded scripts have the <strong>same</strong> processing time?
Shouldn't the multi-threaded implementation take only 1/(number of threads) of the time? (I know there are diminishing returns once you exceed your CPU's maximum thread count.) </p>
<p>Did I mess up my implementation?</p>
| 0 | 2016-07-25T03:03:52Z | 38,559,374 | <p>Multithreading in Python does not work like it does in other languages; it has something to do with the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">global interpreter lock</a> if I recall correctly. There are a lot of different workarounds though, for example you can use <a href="http://sdiehl.github.io/gevent-tutorial/" rel="nofollow">gevent's coroutine based "threads"</a>. I myself prefer <a href="http://dask.pydata.org/en/latest/index.html" rel="nofollow">dask</a> for work that needs to run concurrently. For example</p>
<pre><code>import time

import dask.bag as db

start = time.time()
(db.from_sequence(range(4), npartitions=4)
   .map(lambda _: dowork())
   .compute())
print('total time: {} seconds'.format(time.time() - start))

start = time.time()
threads = []
for x in range(0,4):
    threads.append(True)
for x in threads:
    dowork()
print('total time: {} seconds'.format(time.time() - start))
</code></pre>
<p>and the output</p>
<pre><code> 19016975777667561989667836343447216065093401859905...
--- 2.4172 seconds --- MainThread
32883203981076692018141849036349126447899294175228...
--- 2.4685 seconds --- MainThread
34450410116136243300565747102093690912732970152596...
--- 2.4901 seconds --- MainThread
50964938446237359434550325092232546411362261338846...
--- 2.5317 seconds --- MainThread
total time: 2.5557193756103516 seconds
10380860937556820815021239635380958917582122217407...
--- 2.3711 seconds --- MainThread
13309313630078624428079401365574221411759423165825...
--- 2.2861 seconds --- MainThread
27410752090906837219181398184615017013303570495018...
--- 2.2853 seconds --- MainThread
73007436394172372391733482331910124459395132986470...
--- 2.3136 seconds --- MainThread
total time: 9.256525993347168 seconds
</code></pre>
<p>In this case dask uses <code>multiprocessing</code> to do the work, which may or may not be desirable for your case.</p>
<p>Also, instead of using CPython, you can try other implementations of Python, for example <a href="http://pypy.org/" rel="nofollow">pypy</a>, <a href="https://bitbucket.org/stackless-dev/stackless/wiki/Home" rel="nofollow">stackless python</a> etc., which claim to provide a workaround/solution to the problem.</p>
| 2 | 2016-07-25T03:41:50Z | [
"python",
"multithreading",
"python-3.x"
] |
Multi-threading vs single thread calculations | 38,559,118 | <pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

#do the work!
threads = []
for x in range(0,4): #4 threads
    threads.append(threading.Thread(target=dowork))
for x in threads:
    x.start() # and they are off
</code></pre>
<p>Results:</p>
<pre><code> 23949968699026357507152486869104218631097704347109...
--- 11.899 seconds --- Thread-2
10632599432628604090664113776561125984322566079319...
--- 11.924 seconds --- Thread-4
20488842520966388603734530904324501550532057464424...
--- 12.073 seconds --- Thread-1
17247910051860808132548857670360685101748752056479...
--- 12.115 seconds --- Thread-3
[Finished in 12.2s]
</code></pre>
<p>And now let's do it in 1 thread:</p>
<pre><code>def dowork():
    y = []
    z = []
    ab = 0
    start_time = time.time()
    t = threading.current_thread()
    for x in range(0,1500):
        y.append(random.randint(0,100000))
    for x in range(0,1500):
        z.append(random.randint(0,1000))
    for x in range(0,100):
        for k in range(0,len(z)):
            ab += y[k] ** z[k]
    print(" %.50s..." % ab)
    print("--- %.6s seconds --- %s" % (time.time() - start_time, t.name))

# print(threadtest())
threads = []
for x in range(0,4):
    threads.append(True)
for x in threads:
    dowork()
</code></pre>
<p>Results:</p>
<pre><code> 14283744921265630410246013584722456869128720814937...
--- 2.8463 seconds --- MainThread
13487957813644386002497605118558198407322675045349...
--- 2.7690 seconds --- MainThread
15058500261169362071147461573764693796710045625582...
--- 2.7372 seconds --- MainThread
77481355564746169357229771752308217188584725215300...
--- 2.7168 seconds --- MainThread
[Finished in 11.1s]
</code></pre>
<p>Why do the single-threaded and multi-threaded scripts have the <strong>same</strong> processing time?
Shouldn't the multi-threaded implementation take only 1/(number of threads) of the time? (I know there are diminishing returns once you exceed your CPU's maximum thread count.) </p>
<p>Did I mess up my implementation?</p>
| 0 | 2016-07-25T03:03:52Z | 38,596,340 | <p>Here is a complete test and example comparing multithreading and multiprocessing against a single thread/process.</p>
<p>You can pick any computation you want for the work function.</p>
<pre><code>import time, os, threading, random, multiprocessing
def dowork():
    total = 0
    start_time = time.time()
    t = threading.current_thread()
    p = multiprocessing.current_process()
    for x in range(0,100):
        total += random.randint(1000000-1,1000000) ** random.randint(37000-1,37000)
    print("--- %.6s seconds DONE --- %s | %s" % (time.time() - start_time, p.name, t.name))
</code></pre>
<p>The test:</p>
<pre><code>t, p = [], []
for x in range(0,4):
    #create thread
    t.append(threading.Thread(target=dowork))
    #create child process
    p.append(multiprocessing.Process(target=dowork))

#multi-thread
start_time = time.time()
for l in t:
    l.start()
for l in t:
    l.join()
print("===== %.6s seconds Multi-Threads =====" % (time.time() - start_time))

start_time = time.time()
#multi-process
for l in p:
    l.start()
for l in p:
    l.join()
print("===== %.6s seconds Multi-Processes =====" % (time.time() - start_time))

start_time = time.time()
# Sequential
for l in p:
    dowork()
print("===== %.6s seconds Single Process/Thread =====" % (time.time() - start_time))
</code></pre>
<p>And here is the sample output:</p>
<pre><code>#Sample Output:
--- 2.6412 seconds DONE --- MainProcess | Thread-1
--- 2.5712 seconds DONE --- MainProcess | Thread-2
--- 2.5774 seconds DONE --- MainProcess | Thread-3
--- 2.5973 seconds DONE --- MainProcess | Thread-4
===== 10.388 seconds Multi-Threads =====
--- 2.4816 seconds DONE --- Process-4 | MainThread
--- 2.4841 seconds DONE --- Process-3 | MainThread
--- 2.4965 seconds DONE --- Process-2 | MainThread
--- 2.5182 seconds DONE --- Process-1 | MainThread
===== 2.5241 seconds Multi-Processes =====
--- 2.4624 seconds DONE --- MainProcess | MainThread
--- 2.6447 seconds DONE --- MainProcess | MainThread
--- 2.5716 seconds DONE --- MainProcess | MainThread
--- 2.4369 seconds DONE --- MainProcess | MainThread
===== 10.115 seconds Single Process/Thread =====
[Finished in 23.1s]
</code></pre>
| 0 | 2016-07-26T17:27:54Z | [
"python",
"multithreading",
"python-3.x"
] |
iterate over only two keys of python dictionary | 38,559,145 | <p>What is the pythonic way to iterate over a dictionary with a setup like this:</p>
<pre><code>dict = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': 6}
</code></pre>
<p>if I only wanted to iterate a for loop over all the values in <code>a</code> and <code>b</code> and skip <code>c</code>. There's obviously a million ways to solve this but I'd prefer to avoid something like:</p>
<pre><code>for each in dict['a']:
    # do something
    pass

for each in dict['b']:
    # do something
    pass
</code></pre>
<p>or something destructive like:</p>
<pre><code>del dict['c']
for k, v in dict.iteritems():
    pass
</code></pre>
| 0 | 2016-07-25T03:07:37Z | 38,559,202 | <p>You can use <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain" rel="nofollow"><code>chain</code></a> from the <code>itertools</code> module to do this:</p>
<pre><code>In [29]: from itertools import chain
In [30]: mydict = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': 6}
In [31]: for item in chain(mydict['a'], mydict['b']):
...: print(item)
...:
1
2
3
3
4
5
</code></pre>
<p>To iterate over only those values in the dictionary that are instances of <code>list</code>, simply use <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>chain.from_iterable</code></a>.</p>
<pre><code>wanted_keys = ['a', 'b']
for item in chain.from_iterable(mydict[key] for key in wanted_keys if isinstance(mydict[key], list)):
    # do something with the item
    pass
</code></pre>
| 1 | 2016-07-25T03:17:35Z | [
"python",
"dictionary"
] |
iterate over only two keys of python dictionary | 38,559,145 | <p>What is the pythonic way to iterate over a dictionary with a setup like this:</p>
<pre><code>dict = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': 6}
</code></pre>
<p>if I only wanted to iterate a for loop over all the values in <code>a</code> and <code>b</code> and skip <code>c</code>. There's obviously a million ways to solve this but I'd prefer to avoid something like:</p>
<pre><code>for each in dict['a']:
    # do something
    pass

for each in dict['b']:
    # do something
    pass
</code></pre>
<p>or something destructive like:</p>
<pre><code>del dict['c']
for k, v in dict.iteritems():
    pass
</code></pre>
| 0 | 2016-07-25T03:07:37Z | 38,559,234 | <p>The more generic way is to use filter-like approaches by putting an <code>if</code> at the end of a generator expression.</p>
<p>If you want to iterate over every iterable value, filter with <code>hasattr</code>:</p>
<pre><code>for key in (k for k in dict if hasattr(dict[k], '__iter__')):
    for item in dict[key]:
        print(item)
</code></pre>
<p>If you want to exclude some keys, use a "not in" filter:</p>
<pre><code>invalid = set(['c', 'd'])
for key in (k for k in dict if k not in invalid):
    ....
</code></pre>
<p>If you want to select only specific keys, use a "in" filter:</p>
<pre><code>valid = set(['a', 'b'])
for key in (k for k in dict if k in valid):
    ....
</code></pre>
| 2 | 2016-07-25T03:22:27Z | [
"python",
"dictionary"
] |
iterate over only two keys of python dictionary | 38,559,145 | <p>What is the pythonic way to iterate over a dictionary with a setup like this:</p>
<pre><code>dict = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': 6}
</code></pre>
<p>if I only wanted to iterate a for loop over all the values in <code>a</code> and <code>b</code> and skip <code>c</code>. There's obviously a million ways to solve this but I'd prefer to avoid something like:</p>
<pre><code>for each in dict['a']:
    # do something
    pass

for each in dict['b']:
    # do something
    pass
</code></pre>
<p>or something destructive like:</p>
<pre><code>del dict['c']
for k, v in dict.iteritems():
    pass
</code></pre>
| 0 | 2016-07-25T03:07:37Z | 38,559,294 | <p>Similar to SSDMS's solution you can also just do:</p>
<pre><code>mydict = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': 6}
for each in mydict['a'] + mydict['b']:
    ....
</code></pre>
| 2 | 2016-07-25T03:30:23Z | [
"python",
"dictionary"
] |
How to speed up 4 million set intersections? | 38,559,245 | <p>I'm an inexperienced programmer working through a number of bioinformatics exercises in Python.</p>
<p>One problem area counts elements in the set intersection between name groups, and stores that count in a dictionary. There are two lists of 2000 name groups each; names in the name groups are Latin names of species. For example:</p>
<pre><code>list__of_name_groups_1 = [
['Canis Lupus', 'Canis Latrans'],
['Euarctos Americanus', 'Lynx Rufus'],
...
]
list__of_name_groups_2 = [
['Nasua Narica', 'Odocoileus Hemionus'],
['Felis Concolor', 'Peromyscus Eremicus'],
['Canis Latrans', 'Cervus Canadensis']
...
]
</code></pre>
<p>And I need a dictionary that contains all intersection sizes between the name groups, e.g.</p>
<pre><code>>>> intersections
{ (0, 0): 0, (0, 1): 0, (0, 2): 1, (1, 0): 0, (1, 1): 0, (1, 2): 0,
  (2, 0): 1, (2, 1): 0, (2, 2): 0 }
</code></pre>
<p>(<code>'Canis Latrans'</code> occurs in element <code>0</code> in the first list, element <code>2</code> in the second list.)</p>
<p>I've got an implementation of an algorithm that works, but it runs too slowly.</p>
<pre><code>overlap = {}
for i in list_of_lists_of_names_1:
    for j in list_of_lists_of_names_2:
        overlap[(i, j)] = len(set(i) & set(j))
</code></pre>
<p>Is there a faster way to count the number of elements in set intersections?</p>
<p>(Hello moderators... Nick, this revised post is actually asking a slightly different question than the one I'm working on. While your answer is a very good one for addressing that question, I'm afraid that the method you suggest is actually not useful for what I'm trying to do. I very much appreciate the time and effort you put into your answer, and into editing this post, but I would request that the post be reverted to the original.)</p>
| 4 | 2016-07-25T03:23:36Z | 38,581,603 | <p>Depending on the specifics of your data, an alternative option is to record, for each possible data item, which lists contain it.</p>
<p>With such a data structure, for each data item you can quickly determine which pairs of lists contain it, and increment the corresponding entries of <code>overlap</code>.</p>
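<p>A sketch of that idea (an assumed implementation with made-up groups, not the poster's data): build an inverted index from each name to the positions of the groups in the second list that contain it, then increment counts only for pairs that actually share a name. Pairs that never co-occur are simply absent, which reads back as a count of 0 with a <code>defaultdict</code>.</p>
<pre><code>from collections import defaultdict

groups_1 = [['Canis Lupus', 'Canis Latrans'], ['Euarctos Americanus', 'Lynx Rufus']]
groups_2 = [['Nasua Narica'], ['Canis Latrans', 'Cervus Canadensis']]

# Map each name to the indices of the groups in list 2 that contain it.
index_2 = defaultdict(list)
for j, group in enumerate(groups_2):
    for name in set(group):
        index_2[name].append(j)

# Only pairs that actually share a name are ever touched.
overlap = defaultdict(int)
for i, group in enumerate(groups_1):
    for name in set(group):
        for j in index_2[name]:
            overlap[(i, j)] += 1

print(dict(overlap))   # {(0, 1): 1} -- 'Canis Latrans' is shared
</code></pre>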
| 0 | 2016-07-26T05:23:51Z | [
"python",
"set",
"bioinformatics"
] |
How to speed up 4 million set intersections? | 38,559,245 | <p>I'm an inexperienced programmer working through a number of bioinformatics exercises in Python.</p>
<p>One problem area counts elements in the set intersection between name groups, and stores that count in a dictionary. There are two lists of 2000 name groups each; names in the name groups are Latin names of species. For example:</p>
<pre><code>list__of_name_groups_1 = [
['Canis Lupus', 'Canis Latrans'],
['Euarctos Americanus', 'Lynx Rufus'],
...
]
list__of_name_groups_2 = [
['Nasua Narica', 'Odocoileus Hemionus'],
['Felis Concolor', 'Peromyscus Eremicus'],
['Canis Latrans', 'Cervus Canadensis']
...
]
</code></pre>
<p>And I need a dictionary that contains all intersection sizes between the name groups, e.g.</p>
<pre><code>>>> intersections
{ (0, 0): 0, (0, 1): 0, (0, 2): 1, (1, 0): 0, (1, 1): 0, (1, 2): 0,
  (2, 0): 1, (2, 1): 0, (2, 2): 0 }
</code></pre>
<p>(<code>'Canis Latrans'</code> occurs in element <code>0</code> in the first list, element <code>2</code> in the second list.)</p>
<p>I've got an implementation of an algorithm that works, but it runs too slowly.</p>
<pre><code>overlap = {}
for i in list_of_lists_of_names_1:
    for j in list_of_lists_of_names_2:
        overlap[(i, j)] = len(set(i) & set(j))
</code></pre>
<p>Is there a faster way to count the number of elements in set intersections?</p>
<p>(Hello moderators... Nick, this revised post is actually asking a slightly different question than the one I'm working on. While your answer is a very good one for addressing that question, I'm afraid that the method you suggest is actually not useful for what I'm trying to do. I very much appreciate the time and effort you put into your answer, and into editing this post, but I would request that the post be reverted to the original.)</p>
| 4 | 2016-07-25T03:23:36Z | 38,581,702 | <p>In fact you can express each name group as one long integer (a bitmask), with one bit per distinct name. For example, a group containing the first and second names but not the third can be expressed as the binary number 011, i.e. (1 << 1) + (1 << 0).</p>
<p>Then you can compute the set intersection with just an integer & operation (and count the shared names with a bit count).</p>
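<p>Concretely, that could look something like this (a sketch that assumes every distinct name has first been assigned a bit position):</p>
<pre><code># Assign every distinct name a bit position, encode each group as an
# integer bitmask, then count shared names with a single AND + popcount.
all_names = ['Canis Lupus', 'Canis Latrans', 'Lynx Rufus']
bit = {name: i for i, name in enumerate(all_names)}

def encode(group):
    mask = 0
    for name in group:
        mask |= 1 << bit[name]
    return mask

a = encode(['Canis Lupus', 'Canis Latrans'])   # bits 0 and 1 -> 0b011
b = encode(['Canis Latrans', 'Lynx Rufus'])    # bits 1 and 2 -> 0b110
print(bin(a & b).count('1'))                   # 1 name in common
</code></pre>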
| 0 | 2016-07-26T05:33:15Z | [
"python",
"set",
"bioinformatics"
] |
How to speed up 4 million set intersections? | 38,559,245 | <p>I'm an inexperienced programmer working through a number of bioinformatics exercises in Python.</p>
<p>One problem area counts elements in the set intersection between name groups, and stores that count in a dictionary. There are two lists of 2000 name groups each; names in the name groups are Latin names of species. For example:</p>
<pre><code>list__of_name_groups_1 = [
['Canis Lupus', 'Canis Latrans'],
['Euarctos Americanus', 'Lynx Rufus'],
...
]
list__of_name_groups_2 = [
['Nasua Narica', 'Odocoileus Hemionus'],
['Felis Concolor', 'Peromyscus Eremicus'],
['Canis Latrans', 'Cervus Canadensis']
...
]
</code></pre>
<p>And I need a dictionary that contains all intersection sizes between the name groups, e.g.</p>
<pre><code>>>> intersections
{ (0, 0): 0, (0, 1): 0, (0, 2): 1, (1, 0): 0, (1, 1): 0, (1, 2): 0,
  (2, 0): 1, (2, 1): 0, (2, 2): 0 }
</code></pre>
<p>(<code>'Canis Latrans'</code> occurs in element <code>0</code> in the first list, element <code>2</code> in the second list.)</p>
<p>I've got an implementation of an algorithm that works, but it runs too slowly.</p>
<pre><code>overlap = {}
for i in list_of_lists_of_names_1:
    for j in list_of_lists_of_names_2:
        overlap[(i, j)] = len(set(i) & set(j))
</code></pre>
<p>Is there a faster way to count the number of elements in set intersections?</p>
<p>(Hello moderators... Nick, this revised post is actually asking a slightly different question than the one I'm working on. While your answer is a very good one for addressing that question, I'm afraid that the method you suggest is actually not useful for what I'm trying to do. I very much appreciate the time and effort you put into your answer, and into editing this post, but I would request that the post be reverted to the original.)</p>
| 4 | 2016-07-25T03:23:36Z | 38,582,948 | <p><strong>First,</strong> Python <code>set</code>s are good at finding intersections (they use hashing), but your code constructs the same <code>set</code>s over and over again. E.g. if the two <code>list</code>s contain 2000 elements each [Did you mean the outer or inner <code>list</code>s are that long?], there are only 4000 different <code>set</code>s to compute but your code computes 2000 x 2000 x 2 = 8 million <code>set</code>s.</p>
<p>So compute those 4000 sets once:</p>
<pre><code>list_of_name_tuples_1 = [("a", "aa"), ("b", "bbb"), ("c", "cc", "ccc")]
list_of_name_tuples_2 = [("a", "AA"), ("b", "BBB"), ("c", "cc", "CCC")]
name_sets_1 = [set(i) for i in list_of_name_tuples_1]
name_sets_2 = [set(i) for i in list_of_name_tuples_2]
overlap = {}
for l1, s1 in zip(list_of_name_tuples_1, name_sets_1):
for l2, s2 in zip(list_of_name_tuples_2, name_sets_2):
overlap[(l1, l2)] = len(s1 & s2)
</code></pre>
<p>Python <code>list</code>s are unhashable, thus they can't be used for <code>dict</code> keys, so I changed the lists-of-lists-of-names into lists-of-tuples-of-names.</p>
<p>(This code assumes you're using Python 3, where <code>zip()</code> returns an iterator. If you're using Python 2, then call <code>itertools.izip()</code> to get an iterator over the paired elements.)</p>
<p><strong>Second,</strong> consider restructuring <code>overlap</code> as a <code>dict</code> of <code>dict</code>s rather than a <code>dict</code> indexed by tuples.</p>
<pre><code>list_of_name_tuples_1 = [("a", "aa"), ("b", "bbb"), ("c", "cc", "ccc")]
list_of_name_tuples_2 = [("a", "AA"), ("b", "BBB"), ("c", "cc", "CCC")]
name_sets_1 = [set(i) for i in list_of_name_tuples_1]
name_sets_2 = [set(i) for i in list_of_name_tuples_2]
overlap = {}
for l1, s1 in zip(list_of_name_tuples_1, name_sets_1):
d = overlap.setdefault(l1, {})
for l2, s2 in zip(list_of_name_tuples_2, name_sets_2):
d[l2] = len(s1 & s2)
</code></pre>
<p>This could save a lot of work in the follow-on code which would access it via <code>overlap[l1][l2]</code> instead of <code>overlap[(l1, l2)]</code> (without tuple construction or hash generation), and nested loops could fetch <code>d = overlap[l1]</code> in an outer loop then access <code>d[l2]</code> in an inner loop.</p>
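<p>A toy sketch of the difference between the two layouts (data made up):</p>

```python
# Same counts stored two ways.
overlap_tuples = {('g1', 'g2'): 3, ('g1', 'g3'): 0}
overlap_nested = {'g1': {'g2': 3, 'g3': 0}}

# Tuple keys: every lookup builds a tuple and hashes the pair.
count_a = overlap_tuples[('g1', 'g2')]

# Nested dicts: fetch the inner dict once, then do plain lookups.
row = overlap_nested['g1']
count_b = row['g2']
```

<p>In a tight double loop, hoisting <code>row = overlap['g1']</code> out of the inner loop is what saves the repeated tuple construction and hashing.</p>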
| 1 | 2016-07-26T06:54:33Z | [
"python",
"set",
"bioinformatics"
] |
How to speed up 4 million set intersections? | 38,559,245 | <p>I'm an inexperienced programmer working through a number of bioinformatics exercises in Python.</p>
<p>One problem area counts elements in the set intersection between name groups, and stores that count in a dictionary. There are two lists of 2000 name groups each; names in the name groups are Latin names of species. For example:</p>
<pre><code>list_of_name_groups_1 = [
['Canis Lupus', 'Canis Latrans'],
['Euarctos Americanus', 'Lynx Rufus'],
...
]
list_of_name_groups_2 = [
['Nasua Narica', 'Odocoileus Hemionus'],
['Felis Concolor', 'Peromyscus Eremicus'],
['Canis Latrans', 'Cervus Canadensis']
...
]
</code></pre>
<p>And I need a dictionary that contains all intersection sizes between the name groups, e.g.</p>
<pre><code>>>> intersections
{ (0, 0): 0, (0, 1): 0, (0, 2): 1, (1, 0): 0, (1, 1): 0, (1, 2): 0 }
</code></pre>
<p>(<code>'Canis Latrans'</code> occurs in element <code>0</code> in the first list, element <code>2</code> in the second list.)</p>
<p>I've got an implementation of an algorithm that works, but it runs too slowly.</p>
<pre><code>overlap = {}
for i in list_of_lists_of_names_1:
for j in list_of_lists_of_names_2:
overlap[(i,j)] = len(set(i) & set(j))
</code></pre>
<p>Is there a faster way to count the number of elements in set intersections?</p>
<p>(Hello moderators... Nick, this revised post is actually asking a slightly different question than the one I'm working on. While your answer is a very good one for addressing that question, I'm afraid that the method you suggest is actually not useful for what I'm trying to do. I very much appreciate the time and effort you put into your answer, and into editing this post, but I would request that the post be reverted to the original.)</p>
| 4 | 2016-07-25T03:23:36Z | 38,632,673 | <p>This depends on the nature of your data, but may give you a big saving if there are relatively few Latin names common to both super-lists. The method is:</p>
<ol>
<li>Find common names between the lists</li>
<li>compute set intersections only between name groups that contain one of those common names</li>
<li>The rest of the set intersection counts will be <code>0</code>.</li>
</ol>
<p>Your own solution is slow because of the <em>number of set operations you perform</em>: 2,000 x 2,000 == 4,000,000. No matter how quickly Python performs each one, it's going to take time. My method reduces the number of set intersections computed by a factor of 1000, at the expense of some other, smaller calculations.</p>
<p>My back-of-envelope calculation is that you might improve performance by a factor of 4 or better if there are relatively few common names. The improvement will be more the fewer common names there are.</p>
<p>I've used a few things here that may be new to you: list comprehensions and <code>enumerate()</code>, <code>defaultdict</code>, list membership using <code>in</code> and the itertools methods. That chucks you in at the deep end. Happy researching, and let me know if you'd like some explanations.</p>
<pre><code>from collections import defaultdict
import itertools
list_of_name_groups_1 = [
['Canis Lupus', 'Canis Latrans'],
['Euarctos Americanus', 'Lynx Rufus'],
]
list_of_name_groups_2 = [
['Nasua Narica', 'Odocoileus Hemionus'],
['Felis Concolor', 'Peromyscus Eremicus'],
['Canis Latrans', 'Cervus Canadensis']
]
def flatten(list_of_lists):
return itertools.chain.from_iterable(list_of_lists)
def unique_names(list_of_name_groups):
return set(flatten(list_of_name_groups))
def get_matching_name_groups(name, list_of_name_groups):
return (list_index for list_index, name_group
in enumerate(list_of_name_groups)
if name in name_group)
list1_candidates = set()
list2_candidates = set()
common_names = unique_names(list_of_name_groups_1) & unique_names(list_of_name_groups_2)
for name in common_names:
list1_candidates.update(tuple(get_matching_name_groups(name, list_of_name_groups_1)))
list2_candidates.update(tuple(get_matching_name_groups(name, list_of_name_groups_2)))
intersections = defaultdict(lambda: 0)
for i, j in itertools.product(list1_candidates, list2_candidates):
intersections[(i, j)] = len(set(list_of_name_groups_1[i]) & set(list_of_name_groups_2[j]))
print(intersections)
$ python intersections.py
defaultdict(<function <lambda> at 0x0000000000DC7620>, {(0, 2): 1})
</code></pre>
| 0 | 2016-07-28T09:42:25Z | [
"python",
"set",
"bioinformatics"
] |
change hostname for continuous integration testing | 38,559,252 | <p>I have some functionality that only runs in specific physical locations and it is known by the hostname. This is produced in a cython module that calls socket.gethostname().</p>
<p>Is there any way to make a test using <code>socket.gethostname()</code> see different data from the host the test is running on?</p>
| 0 | 2016-07-25T03:24:39Z | 38,565,885 | <p>You could use the <a href="https://docs.python.org/dev/library/unittest.mock.html" rel="nofollow"><code>mock</code> module</a>:</p>
<pre><code>import mock
import socket
with mock.patch("socket.gethostname", return_value="completely fake"):
print socket.gethostname()
</code></pre>
<p>This prints <code>completely fake</code> on <code>stdout</code>.</p>
<p><code>mock</code> is bundled with Python 3.3 and over (as <code>unittest.mock</code>) and is available as a backport for Python 2.6.x and up. The above code runs as-is with Python 2.7.x. </p>
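<p>Under Python 3 the same patch can also be applied as a decorator via <code>unittest.mock</code>, which is handy inside a test suite (the hostname string here is made up for the example):</p>

```python
import socket
from unittest import mock

@mock.patch("socket.gethostname", return_value="ci-test-host")
def check_hostname(mock_gethostname):
    # Inside the decorated function, socket.gethostname is the mock;
    # the mock object itself is injected as the first argument.
    return socket.gethostname()

result = check_hostname()  # hostname is faked only for this call
```

<p>Once the decorated function returns, the real <code>socket.gethostname</code> is restored automatically.</p>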
| 1 | 2016-07-25T10:55:20Z | [
"python",
"nose"
] |
Matplotlib: scrolling plot | 38,559,270 | <p>I'm new to Python and I want to implement a scrolling plot for a very long time series data. I've found an example from Matplotlib as follows.</p>
<p><a href="http://scipy-cookbook.readthedocs.io/items/Matplotlib_ScrollingPlot.html" rel="nofollow">http://scipy-cookbook.readthedocs.io/items/Matplotlib_ScrollingPlot.html</a></p>
<p>When I run the example from the link, every time I scroll the plot and release the scrollbar, the scrollbar returns to the beginning. If I want to scroll to the next position, I need to start scrolling from the beginning again.</p>
<p>I want to understand why it happens and how to fix it.</p>
| 1 | 2016-07-25T03:27:06Z | 38,564,453 | <p>Here's an improved version of the example. (Disclaimer: I started digging into it half an hour ago, never before used wx/matplotlib scrollbars so there might be a much better solution.)</p>
<p>The path I took: first I checked the <a href="http://docs.wxwidgets.org/trunk/classwx_scroll_win_event.html" rel="nofollow" title="scroll events">wx scroll events</a>, then found out that the canvas is <a href="http://matplotlib.org/api/backend_wxagg_api.html" rel="nofollow" title="FigureCanvasWxAgg">FigureCanvasWxAgg</a> derived from wxPanel, inheriting <a href="http://docs.wxwidgets.org/trunk/classwx_window.html" rel="nofollow" title="wxWindow">wxWindow</a> methods. There you may find the scroll position handling methods <code>GetScrollPos</code> and <code>SetScrollPos</code>.</p>
<pre><code>from numpy import arange, sin, pi, float, size
import matplotlib
matplotlib.use('WXAgg')
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg
from matplotlib.figure import Figure
import wx
class MyFrame(wx.Frame):
def __init__(self, parent, id):
wx.Frame.__init__(self,parent, id, 'scrollable plot',
style=wx.DEFAULT_FRAME_STYLE ^ wx.RESIZE_BORDER,
size=(800, 400))
self.panel = wx.Panel(self, -1)
self.fig = Figure((5, 4), 75)
self.canvas = FigureCanvasWxAgg(self.panel, -1, self.fig)
self.scroll_range = 400
self.canvas.SetScrollbar(wx.HORIZONTAL, 0, 5,
self.scroll_range)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.canvas, -1, wx.EXPAND)
self.panel.SetSizer(sizer)
self.panel.Fit()
self.init_data()
self.init_plot()
self.canvas.Bind(wx.EVT_SCROLLWIN, self.OnScrollEvt)
def init_data(self):
# Generate some data to plot:
self.dt = 0.01
self.t = arange(0,5,self.dt)
self.x = sin(2*pi*self.t)
# Extents of data sequence:
self.i_min = 0
self.i_max = len(self.t)
# Size of plot window:
self.i_window = 100
# Indices of data interval to be plotted:
self.i_start = 0
self.i_end = self.i_start + self.i_window
def init_plot(self):
self.axes = self.fig.add_subplot(111)
self.plot_data = \
self.axes.plot(self.t[self.i_start:self.i_end],
self.x[self.i_start:self.i_end])[0]
def draw_plot(self):
# Update data in plot:
self.plot_data.set_xdata(self.t[self.i_start:self.i_end])
self.plot_data.set_ydata(self.x[self.i_start:self.i_end])
# Adjust plot limits:
self.axes.set_xlim((min(self.t[self.i_start:self.i_end]),
max(self.t[self.i_start:self.i_end])))
self.axes.set_ylim((min(self.x[self.i_start:self.i_end]),
max(self.x[self.i_start:self.i_end])))
# Redraw:
self.canvas.draw()
def update_scrollpos(self, new_pos):
self.i_start = self.i_min + new_pos
self.i_end = self.i_min + self.i_window + new_pos
self.canvas.SetScrollPos(wx.HORIZONTAL, new_pos)
self.draw_plot()
def OnScrollEvt(self, event):
evtype = event.GetEventType()
if evtype == wx.EVT_SCROLLWIN_THUMBTRACK.typeId:
pos = event.GetPosition()
self.update_scrollpos(pos)
elif evtype == wx.EVT_SCROLLWIN_LINEDOWN.typeId:
pos = self.canvas.GetScrollPos(wx.HORIZONTAL)
self.update_scrollpos(pos + 1)
elif evtype == wx.EVT_SCROLLWIN_LINEUP.typeId:
pos = self.canvas.GetScrollPos(wx.HORIZONTAL)
self.update_scrollpos(pos - 1)
elif evtype == wx.EVT_SCROLLWIN_PAGEUP.typeId:
pos = self.canvas.GetScrollPos(wx.HORIZONTAL)
self.update_scrollpos(pos - 10)
elif evtype == wx.EVT_SCROLLWIN_PAGEDOWN.typeId:
pos = self.canvas.GetScrollPos(wx.HORIZONTAL)
self.update_scrollpos(pos + 10)
else:
print "unhandled scroll event, type id:", evtype
class MyApp(wx.App):
def OnInit(self):
self.frame = MyFrame(parent=None,id=-1)
self.frame.Show()
self.SetTopWindow(self.frame)
return True
if __name__ == '__main__':
app = MyApp()
app.MainLoop()
</code></pre>
<p>You may adjust e.g. the increments for PAGEUP/PAGEDOWN if you feel it too slow.</p>
<p>Also if you wish, the events can be handled separately setting up the specific event handlers instead of their collection <code>EVT_SCROLLWIN</code>, then instead of if/elifs there will be OnScrollPageUpEvt etc.</p>
| 2 | 2016-07-25T09:45:23Z | [
"python",
"matplotlib",
"scroll"
] |
Do instance variables in Python have to be initialized in the __init__ method for a Class? | 38,559,314 | <p>Why is it that if I don't set <code>self.ser = False</code> first, I cannot pass around the <code>self.ser</code> variable as a handle to write with <code>pyserial</code>?</p>
<pre><code>import serial
import time
class TestClass(object):
def __init__(self, port='/dev/ttyUSB0', baud=9600):
self.baud = baud
self.port = port
self.ser = False
def connect(self):
"""Opens serial connection to Device"""
self.ser = serial.Serial(self.port, self.baud, timeout=.5)
def writeuart(self, message):
self.ser.flushInput()
self.ser.write(message)
time.sleep(1)
return self.ser.read(self.ser.inWaiting())
</code></pre>
<p>So in other words, can I not create <code>self.ser</code> from within the <code>connect()</code> method and then make use of <code>self.ser</code> in other methods of the same class?</p>
| 1 | 2016-07-25T03:32:41Z | 38,559,408 | <pre><code>def __init__(self, port='/dev/ttyUSB0', baud=9600):
self.baud = baud
self.port = port
    self.connect()  # this is fine: connect() creates self.ser before it is used
</code></pre>
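<p>To answer the title question directly: no. Instance attributes can be created in any method, as long as that method runs before the attribute is used. A serial-free sketch of the pattern (names are made up):</p>

```python
class Connection(object):
    def connect(self):
        # Attribute created here, not in __init__.
        self.ser = "fake-handle"

    def write(self, message):
        # Raises AttributeError if connect() was never called.
        return "{} <- {}".format(self.ser, message)

conn = Connection()
conn.connect()
out = conn.write("hello")
```

<p>Initializing <code>self.ser = False</code> in <code>__init__</code> is still good practice: it guarantees the attribute always exists, so you get a clear "not connected" state instead of an <code>AttributeError</code>.</p>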
| -1 | 2016-07-25T03:46:31Z | [
"python",
"class",
"pyserial"
] |
Too many open files urllib | 38,559,318 | <p>I'm having a problem with processing over 2700 files.
This works if I only have a few files, a couple of hundred or so, and I'm guessing it has to do with Windows limiting open files, the way ulimit can be defined system-wide on Linux. I'm sure files are not being closed and this is why I am getting this error.</p>
<p>I have a function that sends a file via post:</p>
<pre><code>def upload_photos(url_photo, dict, timeout):
photo = dict['photo']
data_photo = dict['data']
name = dict['name']
conn = requests.post(url_photo, data=data_photo, files=photo, timeout=timeout)
return {'json': conn.json(), 'name': name}
</code></pre>
<p>which is called from a loop of a directory listing:</p>
<pre><code>for photo_path in [p.lower() for p in photos_path]:
if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path:
nr_photos_upload +=1
print("Found " + str(nr_photos_upload) + " pictures to upload")
local_count = 0
list_to_upload = []
for photo_path in [p.lower() for p in photos_path]:
local_count += 1
if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path and local_count > count:
total_img = nr_photos_upload
photo_name = os.path.basename(photo_path)
try :
photo = {'photo': (photo_name, open(path + photo_path, 'rb'), 'image/jpeg')}
try:
latitude, longitude, compas = get_gps_lat_long_compass(path + photo_path)
except ValueError as e:
if e != None:
try:
tags = exifread.process_file(open(path + photo_path, 'rb'))
latitude, longitude = get_exif_location(tags)
compas = -1
except Exception:
continue
if compas == -1:
data_photo = {'coordinate' : str(latitude) + "," + str(longitude),
'sequenceId' : id_sequence,
'sequenceIndex' : count
}
else :
data_photo = {'coordinate' : str(latitude) + "," + str(longitude),
'sequenceId' : id_sequence,
'sequenceIndex' : count,
'headers' : compas
}
info_to_upload = {'data': data_photo, 'photo':photo, 'name': photo_name}
list_to_upload.append(info_to_upload)
count += 1
except Exception as ex:
print(ex)
count_uploaded = 0
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
# Upload feature called from here
future_to_url = {executor.submit(upload_photos, url_photo, dict, 100): dict for dict in list_to_upload}
for future in concurrent.futures.as_completed(future_to_url):
try:
data = future.result()['json']
name = future.result()['name']
print("processing {}".format(name))
if data['status']['apiCode'] == "600":
percentage = float((float(count_uploaded) * 100) / float(total_img))
print(("Uploaded - " + str(count_uploaded) + ' of total :' + str(
total_img) + ", percentage: " + str(round(percentage, 2)) + "%"))
elif data['status']['apiCode'] == "610":
print("skipping - a requirement arguments is missing for upload")
elif data['status']['apiCode'] == "611":
print("skipping - image does not have GPS location metadata")
elif data['status']['apiCode'] == "660":
print("skipping - duplicate image")
else :
print("skipping - bad image")
count_uploaded += 1
with open(path + "count_file.txt", "w") as fis:
fis.write((str(count_uploaded)))
except Exception as exc:
            print('generated an exception: %s' % exc)
</code></pre>
| 0 | 2016-07-25T03:33:16Z | 38,560,316 | <p>You can set <code>_setmaxstdio</code> in C to change the number of files which can be opened at a time.</p>
<p>For python you have to use <code>win32file</code> from <code>pywin32</code> as:</p>
<pre><code>import win32file
win32file._setmaxstdio(1024) #set max number of files to 1024
</code></pre>
<p>The default is <code>512</code>. And make sure you check max value you set is supported by your platform.</p>
<p>Reference: <a href="https://msdn.microsoft.com/en-us/library/6e3b887c.aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/6e3b887c.aspx</a></p>
| 1 | 2016-07-25T05:30:47Z | [
"python",
"urllib"
] |
grab serial input line and move them to a shell script | 38,559,412 | <p>I'm trying to grab a UART line and pass this string to a shell script:</p>
<pre><code>#!/usr/bin/env python
import os
import serial
ser = serial.Serial('/dev/ttyAMA0', 4800)
while True :
try:
state=ser.readline()
print(state)
except:
pass
</code></pre>
<p>So, "state" should given to a shell script now,
like: <code>myscript.sh "This is the serial input..."</code>
but how can I do this?</p>
<pre><code>print(os.system('myscript.sh ').ser.readline())
</code></pre>
<p>doesn't work.</p>
| 0 | 2016-07-25T03:47:08Z | 38,559,526 | <p>There are different ways to combine two strings (namely <code>"./myscript.sh"</code> and <code>ser.readLine()</code>), which will then give you the full command to be run by use of <code>os.system</code>. E.g. strings can be arguments of the <code>string.format</code> method:</p>
<pre><code>os.system('myscript.sh {}'.format(ser.readline()))
</code></pre>
<p>Also you can just add two strings:</p>
<pre><code>os.system('myscript.sh ' + ser.readline())
</code></pre>
<p>I am not sure what you want to achieve with the <code>print</code> statement. A better way to handle the call and input/output of your code would be to switch from os to the subprocess module (<a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">https://docs.python.org/2/library/subprocess.html</a>).</p>
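<p>As a hedged sketch of that <code>subprocess</code> suggestion, here <code>echo</code> stands in for <code>myscript.sh</code> so the snippet runs anywhere; passing an argument list avoids shell quoting problems with the serial data:</p>

```python
import subprocess

def send_line(command, line):
    # Argument-list form: no shell is involved, so spaces and quotes
    # in the serial data are passed through safely as one argument.
    return subprocess.check_output([command, line])

# 'echo' is a stand-in for myscript.sh in this example.
result = send_line('echo', 'This is the serial input...')
```

<p>With the real script you would call <code>send_line('./myscript.sh', state)</code> inside the read loop.</p>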
| 0 | 2016-07-25T04:03:01Z | [
"python",
"shell",
"variables",
"os.system"
] |
grab serial input line and move them to a shell script | 38,559,412 | <p>I'm trying to grab a UART line and pass this string to a shell script:</p>
<pre><code>#!/usr/bin/env python
import os
import serial
ser = serial.Serial('/dev/ttyAMA0', 4800)
while True :
try:
state=ser.readline()
print(state)
except:
pass
</code></pre>
<p>So, "state" should given to a shell script now,
like: <code>myscript.sh "This is the serial input..."</code>
but how can I do this?</p>
<pre><code>print(os.system('myscript.sh ').ser.readline())
</code></pre>
<p>doesn't work.</p>
| 0 | 2016-07-25T03:47:08Z | 38,559,540 | <p>Just simple string concatenation passed to the os.system function.</p>
<pre><code>import os
os.system("myscript.sh " + ser.readline())
</code></pre>
| 0 | 2016-07-25T04:04:27Z | [
"python",
"shell",
"variables",
"os.system"
] |
grab serial input line and move them to a shell script | 38,559,412 | <p>I'm trying to grab a UART line and pass this string to a shell script:</p>
<pre><code>#!/usr/bin/env python
import os
import serial
ser = serial.Serial('/dev/ttyAMA0', 4800)
while True :
try:
state=ser.readline()
print(state)
except:
pass
</code></pre>
<p>So, "state" should given to a shell script now,
like: <code>myscript.sh "This is the serial input..."</code>
but how can I do this?</p>
<pre><code>print(os.system('myscript.sh ').ser.readline())
</code></pre>
<p>doesn't work.</p>
| 0 | 2016-07-25T03:47:08Z | 38,559,590 | <p>If <code>myscript</code> can continuously read additional input, you have a much more efficient pipeline.</p>
<pre><code>from subprocess import Popen, PIPE
sink = Popen(['myscript.sh'], stdin=PIPE, stdout=PIPE)
while True:
    sink.stdin.write(ser.readline())
    sink.stdin.flush()
</code></pre>
<p>If you have to start a new <code>myscript.sh</code> for every input line, (you'll really want to rethink your design, but) you can, of course:</p>
<pre><code>import subprocess
while True:
    subprocess.check_call(['myscript.sh', ser.readline()])
</code></pre>
<p>Notice how in both cases we <a href="http://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess">avoid a pesky shell</a>.</p>
| 0 | 2016-07-25T04:10:58Z | [
"python",
"shell",
"variables",
"os.system"
] |
Why I can't open and read the file on python in powershell? | 38,559,424 | <p><strong>I'm pretty sure I got the ex15_sample.txt in mystuff filefolder!</strong></p>
<p>Here is the error I encounter:
<a href="http://i.stack.imgur.com/d6nkN.png" rel="nofollow"><img src="http://i.stack.imgur.com/d6nkN.png" alt="the powershell"></a></p>
<pre><code>IOError: [Errno 2] No such file or directory: 'ex15_sample.txt'
</code></pre>
<p>That's my code</p>
<pre><code>from sys import argv
script, filename = argv
txt = open(filename)
print "Here's your file %r:" % filename
print txt.read()
print "Type the filename again:"
file_again = raw_input("> ")
txt_again = open(file_again)
print txt_again.read()
</code></pre>
 | 0 | 2016-07-25T03:48:34Z | 38,565,173 | <p>You can use this:</p>
<pre><code>import os
__location__ = os.path.realpath(
    os.path.join(os.getcwd(), os.path.dirname(__file__)))
file_name = raw_input("type the file name")
txt = open(os.path.join(__location__, file_name))
print txt.read()
</code></pre>
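<p>The same idea can be sketched without <code>__file__</code>: build an absolute path from a known base directory so the lookup no longer depends on the current working directory (the directory and file here are created just for the example):</p>

```python
import os
import tempfile

# The IOError usually means the relative name isn't found in the
# process's current working directory. An absolute path built from a
# known base directory avoids that entirely.
base = tempfile.mkdtemp()
path = os.path.join(base, 'ex15_sample.txt')
with open(path, 'w') as f:
    f.write('sample text\n')

# Opening by absolute path works no matter where PowerShell was started.
with open(path) as f:
    content = f.read()
```

<p>So if the open fails, print <code>os.getcwd()</code> first; the script is almost certainly not running from the directory you think it is.</p>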
| 0 | 2016-07-25T10:20:42Z | [
"python",
"powershell"
] |
Initialize a Python object from command line input | 38,559,464 | <p>I'm new to Python and am learning input/output. Right now, I'm trying to add attributes to an object from a file that I specify from command line input. </p>
<p>For example, I want to run the following: <code>$ /...filepath.../ python3 myCode.py < 06</code> to pass the contents of file <code>06</code> </p>
<pre><code>$ /...filepath.../ cat 06
7
4 5 2 7 88 2 1
</code></pre>
<p>to <code>myCode.py</code>. Both <code>myCode.py</code> and <code>06</code> are located in the same directory.</p>
<p>I'm trying to create a <code>MyClass</code> object from a command line call with attributes to be as follows:</p>
<pre><code>## myCode.py ##
# create class
class MyClass(object):
def __init__(self):
self._num_one = int(sys.stdin.readline())
self._num_many = [int(x) for x in sys.stdin.readline().split()]
# print attributes
print(MyClass()._num_many)
print(MyClass()._num_one)
</code></pre>
<p>but I'm getting the following error <code>ValueError: invalid literal for int() with base 10: ''</code> for <code>self._num_one</code> but am able to print <code>self._num_many</code> and am not sure why. If I swap the order of <code>self._num_one</code> and <code>self._num_many</code>, then I can get <code>self._num_one</code>. Since <code>06</code> is only two lines long, is there a first line that I'm not initially reading? Why can I only print one of the two attributes, and how would I print both?</p>
<p>Many thanks.</p>
 | 0 | 2016-07-25T03:52:56Z | 38,559,533 | <p>That's because your input stream is not an int; it's a string with spaces and a "\n" delimiter, so you have to deal with those first. </p>
<p>The quick and easy way is changing your code to something like this (not tested):</p>
<pre><code>self._num_one = int("".join(sys.stdin.readline().split()))
</code></pre>
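<p>A quick sanity check of that suggestion on a simulated line (the value is made up):</p>

```python
# A raw line as readline() would deliver it: digits plus whitespace
# and a trailing newline.
line = "  7  \n"

# split() drops all whitespace; joining the pieces leaves the digits.
num_one = int("".join(line.split()))
```

<p>Note this trick only suits a line holding a single number; for the second line of the file you still need to split into separate fields.</p>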
| 0 | 2016-07-25T04:03:56Z | [
"python",
"class",
"input",
"command-line"
] |
Initialize a Python object from command line input | 38,559,464 | <p>I'm new to Python and am learning input/output. Right now, I'm trying to add attributes to an object from a file that I specify from command line input. </p>
<p>For example, I want to run the following: <code>$ /...filepath.../ python3 myCode.py < 06</code> to pass the contents of file <code>06</code> </p>
<pre><code>$ /...filepath.../ cat 06
7
4 5 2 7 88 2 1
</code></pre>
<p>to <code>myCode.py</code>. Both <code>myCode.py</code> and <code>06</code> are located in the same directory.</p>
<p>I'm trying to create a <code>MyClass</code> object from a command line call with attributes to be as follows:</p>
<pre><code>## myCode.py ##
# create class
class MyClass(object):
def __init__(self):
self._num_one = int(sys.stdin.readline())
self._num_many = [int(x) for x in sys.stdin.readline().split()]
# print attributes
print(MyClass()._num_many)
print(MyClass()._num_one)
</code></pre>
<p>but I'm getting the following error <code>ValueError: invalid literal for int() with base 10: ''</code> for <code>self._num_one</code> but am able to print <code>self._num_many</code> and am not sure why. If I swap the order of <code>self._num_one</code> and <code>self._num_many</code>, then I can get <code>self._num_one</code>. Since <code>06</code> is only two lines long, is there a first line that I'm not initially reading? Why can I only print one of the two attributes, and how would I print both?</p>
<p>Many thanks.</p>
 | 0 | 2016-07-25T03:52:56Z | 38,559,587 | <p>There are some problems with your code that stop it from working.</p>
<p>First things first, you need to remove the surrounding whitespace if you are reading the input file one line at a time. This means a call to the <code>.strip()</code> method after <code>.readline()</code>. You are getting the <code>ValueError</code> because <code>int()</code> cannot process a non-numeric string.</p>
<pre><code>>>> int('x')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'x'
>>> int(' ')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: ' '
>>> int('\n')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '\n'
>>> int('1')
1
</code></pre>
<p>So the code becomes</p>
<pre><code>class MyClass(object):
def __init__(self):
self._num_one = int(sys.stdin.readline().strip())
self._num_many = [int(x) for x in sys.stdin.readline().strip().split()]
</code></pre>
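<p>The stripping can be exercised without wiring up real stdin by feeding the same two lines through an in-memory stream (a sketch, not part of the original answer):</p>

```python
import io

# Simulated stdin carrying the two lines from the question's file `06`.
stream = io.StringIO("7\n4 5 2 7 88 2 1\n")

num_one = int(stream.readline().strip())
num_many = [int(x) for x in stream.readline().strip().split()]
```

<p>The same pattern also makes the class easy to unit-test: pass the stream in as a parameter instead of reading <code>sys.stdin</code> directly.</p>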
<p>Secondly, whenever you do <code>MyClass()</code> it instantiates a new object. If I understand the code correctly, it is expecting to read two lines <strong>twice</strong> (4 lines in total). Given the input file, I am assuming you are interested in instantiating <strong>ONE</strong> object. Therefore we instantiate one, and tag it to a variable, then use that reference later.</p>
<pre><code>instantiatedObject = MyClass()
print(instantiatedObject._num_many)
print(instantiatedObject._num_one)
</code></pre>
| 0 | 2016-07-25T04:10:34Z | [
"python",
"class",
"input",
"command-line"
] |
Click on link using selenium | 38,559,469 | <p>I am very new to Python programming and I need help clicking a link on the TripAdvisor website. I need to extract full reviews; my current code takes only partial reviews with a More link. The HTML code for the More link is below; I clicked on Inspect element and copied it.</p>
<pre><code><span class="taLnk hvrIE6 tr395770395 moreLink ulBlueLinks" onclick=" var options = {
flow: 'CORE_COMBINED',
pid: 39415,
onSuccess: function() { ta.util.cookie.setPIDCookie(2247); ta.call('ta.servlet.Reviews.expandReviews', {type: 'dummy'}, ta.id('review_395770395'), 'review_395770395', '1', 2247);; window.location.hash = 'review_395770395'; }
};
ta.call('ta.registration.RegOverlay.show', {type: 'dummy'}, ta.id('review_395770395'), options);
return false;
">
More&nbsp; </span>
</code></pre>
<p>Thanks!</p>
 | -2 | 2016-07-25T03:54:32Z | 38,560,092 | <p>There are basically two approaches to clicking on <code>More</code>, as below:</p>
<ul>
<li><p>using <code>find_element_by_xpath</code> as below :</p>
<pre><code>driver.find_element_by_xpath("//span[contains(.,'More')]").click()
</code></pre></li>
<li><p>using <code>find_element_by_css_selector</code> as below :</p>
<pre><code>driver.find_element_by_css_selector("span.moreLink").click()
</code></pre></li>
</ul>
<p><strong>Note</strong> : Before finding element and clicking make sure this element is not inside any <code>frame</code> or <code>iframe</code>. If is then you need to switch that <code>frame</code> or <code>iframe</code> before finding element and clicking as : <code>driver.switch_to_frame("frame name or id")</code></p>
<p>Hope it works..:)</p>
| 0 | 2016-07-25T05:06:19Z | [
"python",
"selenium"
] |
Concatenate a set of column values based on another column in Pandas | 38,559,541 | <p>Given a Pandas dataframe which has a few labeled series in it, say <em>Name</em> and <em>Villain</em>.</p>
<p>Say the dataframe has values such as: <br/>
<strong>Name</strong>: {'Batman', 'Batman', 'Spiderman', 'Spiderman', 'Spiderman', 'Spiderman'} <br/>
<strong>Villain</strong>: {'Joker', 'Bane', 'Green Goblin', 'Electro', 'Venom', 'Dr Octopus'}</p>
<p>In total the above dataframe has 2 series(or columns) each with six datapoints. </p>
<p>Now, based on the <em>Name</em>, I want to concatenate 3 more columns: <em>FirstName, LastName, LoveInterest</em> to each datapoint. </p>
<p>The result of which adds 'Bruce; Wayne; Catwoman' to every row which has Name as Batman. And 'Peter; Parker; MaryJane' to every row which has Name as Spiderman. </p>
<p>The final result should be a dataframe containing 5 columns(series) and 6 rows each. </p>
| 0 | 2016-07-25T04:04:30Z | 38,559,873 | <p>This is a classic inner-join scenario. In <code>pandas</code>, use the <code>merge</code> module-level function:</p>
<pre><code>In [13]: df1
Out[13]:
Name Villain
0 Batman Joker
1 Batman Bane
2 Spiderman Green Goblin
3 Spiderman Electro
4 Spiderman Venom
5 Spiderman Dr. Octopus
In [14]: df2
Out[14]:
FirstName LastName LoveInterest Name
0 Bruce Wayne Catwoman Batman
1 Peter Parker MaryJane Spiderman
In [15]: pd.merge(df1, df2, on='Name')
Out[15]:
Name Villain FirstName LastName LoveInterest
0 Batman Joker Bruce Wayne Catwoman
1 Batman Bane Bruce Wayne Catwoman
2 Spiderman Green Goblin Peter Parker MaryJane
3 Spiderman Electro Peter Parker MaryJane
4 Spiderman Venom Peter Parker MaryJane
5 Spiderman Dr. Octopus Peter Parker MaryJane
</code></pre>
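<p>For reference, the transcript above can be reproduced as a runnable script, with the column values taken from the question:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'Name': ['Batman', 'Batman', 'Spiderman',
             'Spiderman', 'Spiderman', 'Spiderman'],
    'Villain': ['Joker', 'Bane', 'Green Goblin',
                'Electro', 'Venom', 'Dr Octopus'],
})
df2 = pd.DataFrame({
    'Name': ['Batman', 'Spiderman'],
    'FirstName': ['Bruce', 'Peter'],
    'LastName': ['Wayne', 'Parker'],
    'LoveInterest': ['Catwoman', 'MaryJane'],
})

# Inner join on Name: every row of df1 picks up its hero's details.
merged = pd.merge(df1, df2, on='Name')
```

<p>The result has 6 rows and 5 columns, exactly the shape the question asks for: the two-row lookup table is replicated across all matching rows.</p>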
| 1 | 2016-07-25T04:42:55Z | [
"python",
"pandas"
] |
Raising an exception during for loop and continuing at next index in python | 38,559,545 | <p>I've been breaking my head all day, and I cannot seem to solve this. I'm supposed to write an Exception class then raise it during an iteration and continue where I left off.</p>
<pre><code> class OhNoNotTrueException(Exception):
"""Exception raised when False is encountered
Attributes:
message -- explanation of the error"""
def __init__(self, value):
self.value = value
am_i_true_or_false = [True, None, False, "True", 0, "", 8, "False", "True", "0.0"]
try:
for i in am_i_true_or_false:
if i is False:
raise OhNoNotTrueException(i)
continue #<--this continue does not work
else:
print(i, "is True")
except OhNoNotTrueException as e:
print(e.value, "is False")
</code></pre>
<p>However, I can't get the iteration back to the last index, even after putting continue. I'm not sure if this is the only way to do it, but I'm breaking my head over here. Anyone want to take a crack at it?</p>
<p>I'm supposed to get the following output:</p>
<p>True is true.</p>
<p>None is false</p>
<p>False is false</p>
<p>True is true.</p>
<p>0 is false</p>
<p>is false</p>
<p>8 is true.</p>
<p>False is true.</p>
<p>True is true.</p>
<p>0.0 is true.</p>
| 0 | 2016-07-25T04:04:52Z | 38,559,627 | <p>You should do the loop outside of the try/except block. Then one individual list entry will be checken by the try catch and afterwards the loop will continue with the next one:</p>
<pre><code>for i in am_i_true_or_false:
try:
if i is False:
raise OhNoNotTrueException(i)
else:
print("{} is True".format(i))
except OhNoNotTrueException as e:
print("{} is False".format(e.value))
</code></pre>
<p>The way you are doing it, the loop is executed until the first exception and then the except block is executed and the program ends. The <code>continue</code> is not reached in this case because you threw an exception which will be caught in the except block.</p>
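<p>Note that the expected output listed in the question also treats <code>None</code>, <code>0</code> and the empty string as false, which an <code>i is False</code> check will not do. A minimal sketch using a plain truthiness check that reproduces that exact output (this deviates from the question's <code>is False</code> test, which only matches the literal <code>False</code>):</p>

```python
class OhNoNotTrueException(Exception):
    """Raised when a falsy value is encountered."""
    def __init__(self, value):
        self.value = value

values = [True, None, False, "True", 0, "", 8, "False", "True", "0.0"]
results = []
for i in values:
    try:
        if not i:  # truthiness check: None, 0, "" and False all raise
            raise OhNoNotTrueException(i)
        results.append("{} is true.".format(i))
    except OhNoNotTrueException as e:  # caught inside the loop, so iteration continues
        results.append("{} is false".format(e.value))

for line in results:
    print(line)
```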
| 1 | 2016-07-25T04:15:36Z | [
"python",
"for-loop",
"exception",
"iteration",
"continue"
] |
Raising an exception during for loop and continuing at next index in python | 38,559,545 | <p>I've been breaking my head all day, and I cannot seem to solve this. I'm supposed to write an Exception class then raise it during an iteration and continue where I left off.</p>
<pre><code> class OhNoNotTrueException(Exception):
"""Exception raised when False is encountered
Attributes:
message -- explanation of the error"""
def __init__(self, value):
self.value = value
am_i_true_or_false = [True, None, False, "True", 0, "", 8, "False", "True", "0.0"]
try:
for i in am_i_true_or_false:
if i is False:
raise OhNoNotTrueException(i)
continue #<--this continue does not work
else:
print(i, "is True")
except OhNoNotTrueException as e:
print(e.value, "is False")
</code></pre>
<p>However, I can't get the iteration back to the last index, even after putting continue. I'm not sure if this is the only way to do it, but I'm breaking my head over here. Anyone want to take a crack at it?</p>
<p>I'm supposed to get the following output:</p>
<p>True is true.</p>
<p>None is false</p>
<p>False is false</p>
<p>True is true.</p>
<p>0 is false</p>
<p>is false</p>
<p>8 is true.</p>
<p>False is true.</p>
<p>True is true.</p>
<p>0.0 is true.</p>
| 0 | 2016-07-25T04:04:52Z | 38,559,631 | <p>Everything after the exception is raised will never be reached, and you will be taken outside the loop, as if everything in the try-block had never happened. Thus, you need to try/except inside the loop:</p>
<pre><code>In [5]: for i in am_i_true_or_false:
...: try:
...: if i is False:
...: raise OhNoNotTrueException(i)
...: else:
...: print(i, "is not False")
...: except OhNoNotTrueException as e:
...: print(e.value, "is False")
...:
True is not False
None is not False
False is False
True is not False
0 is not False
is not False
8 is not False
False is not False
True is not False
0.0 is not False
</code></pre>
<p>Notice what happens if your try-block contains the loop:</p>
<pre><code>In [2]: try:
...: for i in am_i_true_or_false:
...: if i is False:
...: raise Exception()
...: else:
...: print(i,"is not False")
...: except Exception as e:
...: continue
...:
File "<ipython-input-2-97971e491461>", line 8
continue
^
SyntaxError: 'continue' not properly in loop
</code></pre>
| 2 | 2016-07-25T04:15:44Z | [
"python",
"for-loop",
"exception",
"iteration",
"continue"
] |
How to plot coarse-grained average of a set of data points? | 38,559,557 | <p>I have a set of discrete 2-dimensional data points. Each of these points has a measured value associated with it. I would like to get a scatter plot with points colored by their measured values. But the data points are so dense that points with different colors would overlap with each other, which may not be good for visualization. So I am wondering if I could assign the color for each point based on a coarse-grained average of the measured values of nearby points. Does anyone know how to implement this in Python? </p>
<p>Thanks!</p>
| 0 | 2016-07-25T04:06:55Z | 38,578,500 | <p>I got it done by using <code>sklearn.neighbors.RadiusNeighborsRegressor()</code>; the idea is to take the average of the values of the neighbors within a specific radius. Suppose the coordinates of the data points are in the list <code>temp_coors</code> and the values associated with these points are in <code>coloring</code>; then <code>coloring</code> can be coarse-grained in the following way:</p>
<pre><code>from sklearn.neighbors import RadiusNeighborsRegressor

r_neigh = RadiusNeighborsRegressor(radius=smoothing_radius, weights='uniform')
r_neigh.fit(temp_coors, coloring)
coloring = r_neigh.predict(temp_coors)
</code></pre>
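<p>For reference, the same smoothing idea can be sketched without scikit-learn; this is a naive O(n²) version for illustration only (each point's value is replaced by the mean over all points within the radius, including itself):</p>

```python
import math

def radius_average(coords, values, radius):
    """For each point, average the values of all points within `radius` (inclusive)."""
    smoothed = []
    for x0, y0 in coords:
        neighbors = [v for (x, y), v in zip(coords, values)
                     if math.hypot(x - x0, y - y0) <= radius]
        smoothed.append(sum(neighbors) / len(neighbors))
    return smoothed

coords = [(0, 0), (0, 1), (10, 10)]
values = [1.0, 3.0, 100.0]
print(radius_average(coords, values, radius=2))  # [2.0, 2.0, 100.0]
```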
| 0 | 2016-07-25T22:43:14Z | [
"python"
] |
Python contextmanager() vs closing(): which is appropriate for a stream object? | 38,559,608 | <p>In <a href="http://stackoverflow.com/a/17603000/760905">another answer here</a> that uses contextlib to define a custom "open" function for use with <code>with</code>, <code>contextmanager</code> from contextlib is used to define a function that handles opening and streaming of data and finally closing of the stream.</p>
<p>In learning about this, I see there is also a <code>closing</code> function that seems to work similarly, with a specific focus on closing the stream when done.</p>
<p>I understand how the <code>contextmanager</code> construction presented works (explicitly closing the stream as necessary), but I wonder if it is incomplete - for correctness (and to be Pythonic), should <code>closing</code> be involved as well, or preferred?</p>
<p>Edit: that answer I referred to currently calls fh.close() - I am wondering if somehow <code>closing</code> ought to be involved here in some way instead of that. The documentation on <code>contextlib</code> didn't help me in this either-or-both question in the first place, thus this question.</p>
| 0 | 2016-07-25T04:13:25Z | 38,559,652 | <p>It would be completely inappropriate to stick <code>contextlib.closing</code> around the context manager in that answer, for many reasons:</p>
<ol>
<li>They don't always want to close the file! That context manager is specifically designed to sometimes leave the file open. This is the entire reason that context manager was written.</li>
<li>When they do want to close the file, the context manager already does that.</li>
<li>Wrapping <code>closing</code> around the context manager would attempt to close the wrong object.</li>
</ol>
<p>In the case where you do always want to close the file, you usually need neither <code>closing</code> nor a custom context manager, because files are already context managers. Sticking a file in a <code>with</code> statement will close it at the end without needing any special wrappers.</p>
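<p>To illustrate where <code>closing</code> <em>is</em> appropriate — an object that has a <code>close()</code> method but is not itself a context manager — here is a minimal sketch with a hypothetical stream class (the class is invented for the example):</p>

```python
from contextlib import closing

class Stream(object):
    """Hypothetical stream: has close(), but no __enter__/__exit__."""
    def __init__(self):
        self.closed = False
    def read(self):
        return "data"
    def close(self):
        self.closed = True

stream = Stream()
with closing(stream) as s:  # closing() guarantees stream.close() runs on exit
    payload = s.read()

print(payload, stream.closed)  # data True
```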
| 1 | 2016-07-25T04:19:29Z | [
"python",
"contextmanager"
] |
regarding setting up ogrid and fillup the corresponding multidimensional array in numpy | 38,559,676 | <p>I am trying to understand the following Python segment. </p>
<pre><code>def upsample_filt(size):
factor = (size + 1) // 2
if size % 2 == 1:
center = factor - 1
else:
center = factor - 0.5
og = np.ogrid[:size, :size]
return (1 - abs(og[0] - center) / factor) * \
(1 - abs(og[1] - center) / factor)
</code></pre>
<p>According to <code>numpy</code>, <code>ogrid</code> returns mesh-grid ndarrays with only one dimension. I think the program wants to generate a <code>size*size</code> array. Why is it written as <code>og = np.ogrid[:size, :size]</code>? Or what does <code>:size</code> mean?</p>
<p>As a test, I setup <code>size=4</code>, and <code>print((1 - abs(og[0] - center) / factor)*(1 - abs(og[1] - center) / factor))</code>, the output is as follows:</p>
<pre><code>[[ 0.0625 0.1875 0.1875 0.0625]
[ 0.1875 0.5625 0.5625 0.1875]
[ 0.1875 0.5625 0.5625 0.1875]
[ 0.0625 0.1875 0.1875 0.0625]]
</code></pre>
<p>I am not very clear how does <code>(1 - abs(og[0] - center) / factor)*(1 - abs(og[1] - center) / factor)</code> fillup this multi-dimensional array?</p>
| 2 | 2016-07-25T04:21:53Z | 38,560,012 | <p>Let's make it simpler:</p>
<pre><code>In [264]: og=np.ogrid[:3,:2]
In [265]: og
Out[265]:
[array([[0],
[1],
[2]]), array([[0, 1]])]
</code></pre>
<p>The shape of these 2 is <code>(3,1)</code> and <code>(1,2)</code>. They are 2d; 'o' for 'open'.</p>
<pre><code>In [266]: og[0]*og[1]
Out[266]:
array([[0, 0],
[0, 1],
[0, 2]])
</code></pre>
<p>They broadcast together to form a (3,2) array </p>
<p>(3,1), (1,2) => (3,2), (3,2) => (3,2)</p>
<p>Look at what <code>mgrid</code> produces:</p>
<pre><code>In [271]: np.mgrid[:3,:2]
Out[271]:
array([[[0, 0],
[1, 1],
[2, 2]],
[[0, 1],
[0, 1],
[0, 1]]])
</code></pre>
<p>2 (3,2) arrays, that produce the same combination</p>
<p><code>ogrid</code> and <code>mgrid</code> are class objects with a unique indexing method. <code>[:3, :2]</code> looks to Python like regular indexing.</p>
<p><code>meshgrid</code> produces the same thing, but with a regular function syntax</p>
<pre><code>In [275]: np.meshgrid(np.arange(3), np.arange(2),sparse=True,indexing='ij')
Out[275]:
[array([[0],
[1],
[2]]), array([[0, 1]])]
</code></pre>
<p>Another way of performing the same calculation - by using <code>[:,None]</code> to turn the 1st range into a (3,1) array. Here the broadcasting is (3,1),(2,) => (3,1),(1,2) => (3,2)</p>
<pre><code>In [276]: np.arange(3)[:,None]*np.arange(2)
Out[276]:
array([[0, 0],
[0, 1],
[0, 2]])
</code></pre>
<p>===================</p>
<pre><code>(1 - abs(og[0] - center) / factor) *
(1 - abs(og[1] - center) / factor)
</code></pre>
<p>This just scales the two ranges and then multiplies them together:</p>
<pre><code>In [292]: a=(1-abs(np.arange(4)-1.5)/2)
In [293]: a[:,None]*a
Out[293]:
array([[ 0.0625, 0.1875, 0.1875, 0.0625],
[ 0.1875, 0.5625, 0.5625, 0.1875],
[ 0.1875, 0.5625, 0.5625, 0.1875],
[ 0.0625, 0.1875, 0.1875, 0.0625]])
</code></pre>
| 1 | 2016-07-25T04:59:22Z | [
"python",
"numpy"
] |
How to get current available GPUs in tensorflow? | 38,559,755 | <p>I have a plan to use distributed TensorFlow, and I saw TensorFlow can use GPUs for training and testing. In a cluster environment, each machine could have 0 or 1 or more GPUs, and I want to run my TensorFlow graph into GPUs on as many machines as possible.</p>
<p>I found that when running <code>tf.Session()</code> TensorFlow gives information about GPU in the log messages like below:</p>
<pre><code>I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
</code></pre>
<p>My question is: how do I get information about the currently available GPUs from TensorFlow? I can get loaded GPU information from the log, but I want to do it in a more sophisticated, programmatic way.
I could also restrict the GPUs intentionally using the CUDA_VISIBLE_DEVICES environment variable, so I don't want to get GPU information from the OS kernel.</p>
<p>In short, I want a function like <code>tf.get_available_gpus()</code> that will return <code>['/gpu:0', '/gpu:1']</code> if there are two GPUs available in the machine. How can I implement this?</p>
| 3 | 2016-07-25T04:30:38Z | 38,580,201 | <p>There is an undocumented method called <a href="https://github.com/tensorflow/tensorflow/blob/d42facc3cc9611f0c9722c81551a7404a0bd3f6b/tensorflow/python/client/device_lib.py#L27"><code>device_lib.list_local_devices()</code></a> that enables you to list the devices available in the local process. (<strong>N.B.</strong> As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of <a href="https://github.com/tensorflow/tensorflow/blob/8a4f6abb395b3f1bca732797068021c786c1ec76/tensorflow/core/framework/device_attributes.proto"><code>DeviceAttributes</code> protocol buffer</a> objects. You can extract a list of string device names for the GPU devices as follows:</p>
<pre><code>from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
</code></pre>
| 6 | 2016-07-26T02:34:21Z | [
"python",
"gpu",
"tensorflow"
] |
How can I make a Pandas data_frame column based on "If any observation in a particular column meets a condition, then True?" | 38,559,757 | <p>I have a variable that I want to define as <code>True</code> if any item in its containing group meets the condition. For example, in the below <code>.csv</code> frame, a column <code>D</code> corresponding to the condition I'm looking at would be True for all rows where <code>A==1</code> because it's true in the first row, <code>False</code> for <code>A==4</code> because it's False in the only row there, <code>True</code> for <code>A==6</code> because two values are <code>True</code>, and <code>False</code> for all rows where <code>A==8</code> because none are true.</p>
<pre><code>A,B,C
1,2,True
1,4,False
1,5,False
4,5,False
6,7,True
6,4,False
6,5,True
8,9,False
8,11,False
8,20,False
</code></pre>
<p>I've tried using the <code>.any()</code> method, but it keeps returning an empty data frame.</p>
| 1 | 2016-07-25T04:30:53Z | 38,559,816 | <p>You could try</p>
<pre><code>In [7]: df.C.groupby(df.A).max()
Out[7]:
A
1 True
4 False
6 True
8 False
Name: C, dtype: bool
</code></pre>
<p>Your question didn't specify what should happen if some of the rules contradict others, e.g., if there's also a row</p>
<pre><code>1,2,False
</code></pre>
<p>The code above would still decide that the value of 1 is <code>True</code>, as <em>some</em> of the rows had</p>
<pre><code>1,2,True
</code></pre>
<p>You can change it to require that <em>all</em> of the rows must be <code>True</code>, by changing <code>max</code> to <code>min</code> in the above.</p>
<hr>
<p>Finally, to add a new column based on the results, you can <code>merge</code>:</p>
<pre><code>pd.merge(
df,
df.C.groupby(df.A).max().reset_index().rename(columns={'C': 'is_true'}))
</code></pre>
| 1 | 2016-07-25T04:37:15Z | [
"python",
"pandas"
] |
How can I make a Pandas data_frame column based on "If any observation in a particular column meets a condition, then True?" | 38,559,757 | <p>I have a variable that I want to define as <code>True</code> if any item in its containing group meets the condition. For example, in the below <code>.csv</code> frame, a column <code>D</code> corresponding to the condition I'm looking at would be True for all rows where <code>A==1</code> because it's true in the first row, <code>False</code> for <code>A==4</code> because it's False in the only row there, <code>True</code> for <code>A==6</code> because two values are <code>True</code>, and <code>False</code> for all rows where <code>A==8</code> because none are true.</p>
<pre><code>A,B,C
1,2,True
1,4,False
1,5,False
4,5,False
6,7,True
6,4,False
6,5,True
8,9,False
8,11,False
8,20,False
</code></pre>
<p>I've tried using the <code>.any()</code> method, but it keeps returning an empty data frame.</p>
| 1 | 2016-07-25T04:30:53Z | 38,559,914 | <p>You can group on <code>A</code> and then use <code>transform</code> which keeps the the same shape as the original dataframe. Apply a <code>lambda</code> function where you test if any member of the corresponding group in column <code>C</code> is True.</p>
<pre><code>df['D'] = df.groupby('A').C.transform(lambda group: group.any())
>>> df
A B C D
0 1 2 True True
1 1 4 False True
2 1 5 False True
3 4 5 False False
4 6 7 True True
5 6 4 False True
6 6 5 True True
7 8 9 False False
8 8 11 False False
9 8 20 False False
</code></pre>
| 1 | 2016-07-25T04:48:51Z | [
"python",
"pandas"
] |
Tkinter print specified item from list in loop | 38,559,778 | <p>I've code like below:</p>
<pre><code>from tkinter import *
root = Tk()
root.title("sample program")
def print_item_from_list(event):
print(variable)
list = [1, 2, 3, 4, 5]
seclist = []
print(list)
for i in range(0,5):
variable = list[i]
sample = Label(text=variable)
sample.pack()
sample.bind('<Enter>', print_item_from_list)
root.mainloop()
</code></pre>
<p>What I want to achieve is that every time my pointer enters the 'sample' label, the specified item from the list is printed (i.e. when I hover over label '2', I want the second object from my list to get printed). I already tried changing <strong>variable</strong> to <strong>list[i]</strong> (just to test if it would work) and creating a second list and appending to it, but with no luck. My guess is it's somehow connected to Tkinter behaviour.</p>
| 0 | 2016-07-25T04:32:30Z | 38,559,976 | <p>You can make use of closures:</p>
<pre><code>for i in range(0,5):
variable = list[i]
sample = Label(text=variable)
sample.pack()
def connect_callback(variable):
sample.bind('<Enter>', lambda event:print(variable))
connect_callback(variable)
</code></pre>
<p>This creates a new callback function with a fixed value for each label. In your code, all callbacks refer to the same <code>variable</code>, but with this solution every callback has its own <code>variable</code>.</p>
| 1 | 2016-07-25T04:55:32Z | [
"python",
"tkinter"
] |
Tkinter print specified item from list in loop | 38,559,778 | <p>I've code like below:</p>
<pre><code>from tkinter import *
root = Tk()
root.title("sample program")
def print_item_from_list(event):
print(variable)
list = [1, 2, 3, 4, 5]
seclist = []
print(list)
for i in range(0,5):
variable = list[i]
sample = Label(text=variable)
sample.pack()
sample.bind('<Enter>', print_item_from_list)
root.mainloop()
</code></pre>
<p>What I want to achieve is that every time my pointer enters the 'sample' label, the specified item from the list is printed (i.e. when I hover over label '2', I want the second object from my list to get printed). I already tried changing <strong>variable</strong> to <strong>list[i]</strong> (just to test if it would work) and creating a second list and appending to it, but with no luck. My guess is it's somehow connected to Tkinter behaviour.</p>
| 0 | 2016-07-25T04:32:30Z | 38,560,907 | <p><strong>With your code :</strong></p>
<pre><code>from tkinter import *
root = Tk()
root.title("sample program")
def print_item_from_list(event):
print(event.widget.config("text")[-1])
list = [1, 2, 3, 4, 5]
seclist = []
print(list)
for i in range(0,5):
variable = list[i]
sample = Label(text=variable)
sample.pack()
sample.bind('<Enter>', print_item_from_list)
root.mainloop()
</code></pre>
| 2 | 2016-07-25T06:21:06Z | [
"python",
"tkinter"
] |
how to scrapping all pages (page 1 until infinity) on website | 38,559,791 | <p>guys i wanna scrapping from <a href="https://n0where.net/" rel="nofollow">this link</a>
everythings its okay, my scrapping its succes</p>
<p>then i was thinking, how about if i want to scrapping all of pages (page one till infinity depends on database article)</p>
<p>i am new to using python and scrapy, before this i using java & c#...their two its so different with python but its okay for me</p>
<p>this is my source</p>
<pre><code>import datetime
import urlparse
import socket
import scrapy
from scrapy.loader.processors import MapCompose, Join
from scrapy.loader import ItemLoader
from scrapy.http import Request
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from thehack.items import NowItem
class MySpider(BaseSpider):
name = "nowhere"
allowed_domains = ["n0where.net"]
start_urls = ["https://n0where.net/"]
def parse(self, response):
# Get the next index URLs and yield Requests
next_selector = response.xpath('/html/body/div[4]/div[3]/div/div/div/div/div[1]/div/div[6]/div/a[8]')
for url in next_selector.extract():
yield Request(urlparse.urljoin(response.url, url))
def parse(self, response):
for article in response.css('.loop-panel'):
item = NowItem()
item['title'] = article.css('.article-title::text').extract_first()
item['link'] = article.css('.overlay-link::attr(href)').extract_first()
item['body'] ='' .join(article.css('.excerpt p::text').extract()).strip()
yield item
</code></pre>
<p>Does anybody know how to fix my problem? My source is okay, but it only scrapes page 1. How do I scrape the next page automatically?</p>
<p>thanks before mate :)</p>
| 0 | 2016-07-25T04:34:03Z | 38,562,226 | <p>The pagination is pretty tough on this website. If you inspect what your browser is doing, you'll see it's making an AJAX POST request with a bunch of parameters to <a href="https://n0where.net/wp-admin/admin-ajax.php" rel="nofollow">https://n0where.net/wp-admin/admin-ajax.php</a> </p>
<p><a href="http://i.stack.imgur.com/jrWxJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/jrWxJ.png" alt="firebug inspect tab"></a></p>
<p>You can replicate this request in few ways. One way would be to convert the parameters your inspector shows into a <code>dict</code> and create a <code>scrapy.FormRequest</code> with it.</p>
<pre><code>formdata = {'rating': '', 'layout': 'd', 'excerpt': '1', 'paginated': '2', 'award': '', 'sorter': 'recent', 'disabletrending': '', 'numarticles': '12', 'disablecategory': '', 'meta': '1', 'location': 'loop', 'disablecompare': '', 'action': 'itajax-sort', 'authorship': '', 'size': '', 'badge': '', 'thumbnail': '1', 'loop': 'main', 'icon': '1'}
next_page = 3 # figure out what next page will be
formdata.update({'paginated': str(next_page)}) # update the page number (FormRequest expects string values)
req = FormRequest('https://n0where.net/wp-admin/admin-ajax.php', formdata=formdata, callback=self.parse_next_page)
yield req
</code></pre>
<p>Now, the response you'll get is a JSON response with a bunch of data, but in short all you want is the HTML code that is in 'content'; parse it as if it were your new page.
<a href="http://i.stack.imgur.com/ff8q7.png" rel="nofollow"><img src="http://i.stack.imgur.com/ff8q7.png" alt="firebug network tab"></a></p>
<p>Afterwards rinse and repeat.</p>
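<p>A hedged sketch of that last step — pulling the HTML out of the JSON reply so it can be fed back into the item-parsing code. The 'content' key comes from the inspector screenshot above; the fake reply (including the 'found' field) is invented here purely to make the snippet self-contained:</p>

```python
import json

def extract_content(response_text):
    """Extract the 'content' HTML fragment from the AJAX JSON reply."""
    payload = json.loads(response_text)
    return payload.get("content", "")

# In a real Scrapy callback this would be json.loads(response.text);
# here a faked server reply stands in for the AJAX response.
fake_reply = '{"content": "<div class=\\"loop-panel\\">...</div>", "found": 12}'
print(extract_content(fake_reply))  # <div class="loop-panel">...</div>
```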
| 0 | 2016-07-25T07:45:46Z | [
"python",
"xpath",
"web-scraping",
"css-selectors",
"scrapy"
] |
How to have a dictionary field in a django model and use aggregation (min/max) on that field across dictionary keys | 38,559,899 | <p>I am trying to build a model in django 1.9 that has a key, value pair (dictionary) field that also allows query set aggregation (min, max, etc).
I've tried to use the JSONField:</p>
<pre><code>#models.py
from django.contrib.postgres import fields as pgfields
class Entry(models.Model):
pass
class Scorer(models.Model):
name = models.CharField(max_length=100)
class EntryScoreSet(models.Model):
scorer = models.ForeignKey(Scorer)
entry = models.ForeignKey(Entry, related_name="scorecard")
scores = pgfields.JSONField(default={})
....
# shell test
import random
entry = Entry()
scorer,_ = Scorer.objects.get_or_create(name="scorer1")
entry.save()
for i in range(0,10):
scores = dict(scoreA=random.random(),
scoreB=random.random(),
scoreC=random.random(),
)
entry_score_set=EntryScoreSet(scores=scores, entry=entry, scorer=scorer)
entry_score_set.save()
entry.scorecard.filter(scorer__name="scorer1").aggregate(Max("scores__scoreA"))
</code></pre>
<p>But I run into the error from <a href="https://code.djangoproject.com/ticket/25828" rel="nofollow">this ticket</a> (basically, aggregation is not supported).</p>
<p>A second option is to use a key, value pair model (similar to <a href="http://stackoverflow.com/questions/402217/how-to-store-a-dictionary-on-a-django-model">this answer</a>):</p>
<pre><code>class Score(models.Model):
entry_score_set = models.ForeignKey(EntryScoreSet, db_index=True,
related_name="scores")
key = models.CharField(max_length=64, db_index=True)
value = models.FloatField(db_index=True)
</code></pre>
<p>But I don't know how one would get an aggregation across a query set for a particular a key value.</p>
<p>How would I implement a key, value pair field in Django that allows aggregation on a query set for a particular key's value?</p>
<p><strong>EDIT:</strong></p>
<p>Here is a snippet that demonstrates what I want to do using pandas and the second option (key, pair model):</p>
<pre><code>import django_pandas.io as djpdio
scds=Scorecard.objects.filter(
entry__in=Entry.objects.order_by('?')[:10],
scorer__name="scorer1")
scorecard_base=djpdio.read_frame(scds,fieldnames=["id","entry__id","scorer__name","scores__id"])
scores=djpdio.read_frame(Score.objects.filter(scorecard__in=scds),fieldnames=["id","key","value"])
scorecard_=(scorecard_base
.merge(scores,left_on="scores__id",right_on="id")
.pivot_table(index="entry__id",columns="key",values="value").reset_index())
scorecard=scorecard_base.merge(scorecard_,on="entry__id")
scorecard["scoreA"].max()
</code></pre>
<p>Is something like this possible using django's ORM? How would the efficiency compare to using pandas pivot function?</p>
| 1 | 2016-07-25T04:47:45Z | 38,580,709 | <p>You can do this with <a href="https://docs.djangoproject.com/en/1.9/ref/models/conditional-expressions/" rel="nofollow">conditional expressions</a>, using the second model structure you proposed (<code>Score</code> with a foreign key to <code>EntryScoreSet</code>).</p>
<pre><code>from django.db.models import Case, When, Max, FloatField
entry.scorecard.all().annotate(
max_score_key1=Max(
Case(
When(scores__key='key1', then='scores__value'),
default=0,
output_field=FloatField()
)
),
max_score_key2=Max(
Case(
When(scores__key='key2', then='scores__value'),
default=0,
output_field=FloatField()
)
)
)
</code></pre>
<p>This would add a <code>max_score_key1</code> property to the resulting <code>EntryScoreSet</code> objects, which gives you the maximum value for all <code>Scores</code> that have a <code>key</code> of <code>key1</code>. Similarly <code>max_score_key2</code> for <code>Scores</code> with <code>key2</code>, etc.</p>
<hr>
<p><strong>Edit</strong>: based on conversation in comments it looks like you want to get the maximum for each key in <code>Score</code> across the whole queryset. You can do that like so:</p>
<pre><code>entry.scorecard.filter(scorer=some_scorer).values('scores__key')\
.annotate(Max('scores__value')).order_by()
</code></pre>
<p>This will give you output like so:</p>
<pre><code>[
{'scores__key': 'key1', 'scores__value__max': 16.0},
{'scores__key': 'key2', 'scores__value__max': 15.0},
....
]
</code></pre>
| 1 | 2016-07-26T03:47:18Z | [
"python",
"django",
"orm"
] |
How do you get each element of list/tuple, where list is passed as parameter to function in python? | 38,559,915 | <p>Here is the code snippet, which is unable to print items of the tuple. [Using python versions 2.6.8/2.7.10]</p>
<pre><code>def lists (var, *st):
print type(st)
for item in range(1,len(st)):
print "Items is:" + item
st = ['udf','var_udf']
lists("a",st)
</code></pre>
<p>Thanks in advance</p>
| 1 | 2016-07-25T04:48:57Z | 38,559,999 | <p>This does not print anything because <code>*st</code> packs the list you pass into a one-element tuple, so <code>range(1,len(st))</code> is empty (and even if it weren't, <code>item</code> would be an integer, so <code>"Items is:" + item</code> would raise a <code>TypeError</code>). Instead, do something like:</p>
<pre><code>for item in st:
</code></pre>
<p>CODE:</p>
<pre><code>def lists (var, *st):
print type(st)
for item in st:
print "Items is:"
print ' '.join(item)
st = ['','udf/','var_udf']
lists("a",st)
</code></pre>
| 1 | 2016-07-25T04:57:53Z | [
"python",
"python-2.7"
] |
How do you get each element of list/tuple, where list is passed as parameter to function in python? | 38,559,915 | <p>Here is the code snippet, which is unable to print items of the tuple. [Using python versions 2.6.8/2.7.10]</p>
<pre><code>def lists (var, *st):
print type(st)
for item in range(1,len(st)):
print "Items is:" + item
st = ['udf','var_udf']
lists("a",st)
</code></pre>
<p>Thanks in advance</p>
| 1 | 2016-07-25T04:48:57Z | 38,560,061 | <pre><code>a = [1,2,3,4,5,6,7,8,9] #list
print (a)
def b(items): #renamed from 'list' to avoid shadowing the built-in
    for i in items:
        print (i) #each element of the list printed here
b(a) #calling def
</code></pre>
<p>If you are passing a list as an argument to the function, the function below will print each element of the list. If you want to perform any operations like append etc., you can add the code after the <code>for</code> loop.</p>
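<p>The root of the original confusion is how <code>*st</code> packs arguments: passing a list produces a one-element tuple containing that list, while unpacking the list with <code>*</code> spreads its items into separate arguments. A quick sketch:</p>

```python
def lists(var, *st):
    return st  # whatever extra positional arguments were packed into st

items = ['udf', 'var_udf']
print(lists("a", items))   # (['udf', 'var_udf'],) -- one element: the list itself
print(lists("a", *items))  # ('udf', 'var_udf')   -- each item is its own argument
```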
| 0 | 2016-07-25T05:03:31Z | [
"python",
"python-2.7"
] |
How to apply a function to every value in a column in a pandas dataframe? | 38,559,967 | <p>I had tried doing a somewhat manual approach using a loop like below:</p>
<pre><code>data = pd.read_csv('data/training.csv')
for idx,imageString in enumerate(data.iloc[:,-1]):
# print(imageString[0:10])
data[idx,-1] = imageString.split(" ")
</code></pre>
<p>But this errors out on the last line with:</p>
<blockquote>
<p>ValueError: Length of values does not match length of index</p>
</blockquote>
<p>So my questions are:</p>
<ol>
<li>Can anyone explain why I am getting the above error and how can I
get around it? </li>
<li>Is this the proper way to apply a <code>split</code> to every
value in the last column of my data frame?</li>
</ol>
<p>Regarding #2 - I saw some people using <code>applymap</code> but I think this creates a new column, I really just want to replace the value in the existing column with another list.</p>
| 1 | 2016-07-25T04:54:37Z | 38,560,006 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a>:</p>
<pre><code>data = pd.read_csv('data/training.csv')
data.iloc[:,-1] = data.iloc[:,-1].str.split(expand=False)
</code></pre>
<p>Then select first or some another elements of lists with <code>str[1]</code> or <code>str[n]</code>:</p>
<pre><code>data.iloc[:,-1] = data.iloc[:,-1].str.split(expand=False).str[0]
data.iloc[:,-1] = data.iloc[:,-1].str.split(expand=False).str[n]
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'C':[7,8,9],
'D':[1,3,5],
'E':[5,3,6],
'F':['aa aa','ss uu','ee tt']})
print (data)
A B C D E F
0 1 4 7 1 5 aa aa
1 2 5 8 3 3 ss uu
2 3 6 9 5 6 ee tt
print (data.iloc[:,-1].str.split(expand=False))
0 [aa, aa]
1 [ss, uu]
2 [ee, tt]
Name: F, dtype: object
data.iloc[:,-1] = data.iloc[:,-1].str.split(expand=False).str[0]
print (data)
A B C D E F
0 1 4 7 1 5 aa
1 2 5 8 3 3 ss
2 3 6 9 5 6 ee
</code></pre>
<hr>
<pre><code>data.iloc[:,-1] = data.iloc[:,-1].str.split(expand=False).str[1]
print (data)
A B C D E F
0 1 4 7 1 5 aa
1 2 5 8 3 3 uu
2 3 6 9 5 6 tt
</code></pre>
<blockquote>
<p>Can anyone explain why I am getting the above error and how can I get around it? </p>
</blockquote>
<p>The problem is that <code>imageString.split(" ")</code> returns a <code>list</code>, and when you assign it to <code>data[idx,-1]</code>, the length of that list does not match the length of the whole DataFrame's index.</p>
<blockquote>
<p>Is this the proper way to apply a split to every value in the last column of my data frame?</p>
</blockquote>
<p>Better is use string methods, see <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#splitting-and-replacing-strings" rel="nofollow">pandas documentation</a>.</p>
| 2 | 2016-07-25T04:58:30Z | [
"python",
"python-3.x",
"pandas"
] |
How to apply a function to every value in a column in a pandas dataframe? | 38,559,967 | <p>I had tried doing a somewhat manual approach using a loop like below:</p>
<pre><code>data = pd.read_csv('data/training.csv')
for idx,imageString in enumerate(data.iloc[:,-1]):
# print(imageString[0:10])
data[idx,-1] = imageString.split(" ")
</code></pre>
<p>But this errors out on the last line with:</p>
<blockquote>
<p>ValueError: Length of values does not match length of index</p>
</blockquote>
<p>So my questions are:</p>
<ol>
<li>Can anyone explain why I am getting the above error and how can I
get around it? </li>
<li>Is this the proper way to apply a <code>split</code> to every
value in the last column of my data frame?</li>
</ol>
<p>Regarding #2 - I saw some people using <code>applymap</code> but I think this creates a new column, I really just want to replace the value in the existing column with another list.</p>
| 1 | 2016-07-25T04:54:37Z | 38,560,277 | <p>You are not accessing the values correctly.</p>
<p>To correct your code, the last line should be:</p>
<pre><code>df.iat[idx, -1] = imageString.split(" ")
</code></pre>
<p><code>iat</code> is used for scalar getting and setting.</p>
<p>This is probably a simpler way to accomplish your objective:</p>
<pre><code>df.iloc[:, -1] = df.iloc[:, -1].str.split()
</code></pre>
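<p>For illustration, a minimal, self-contained sketch of the vectorized version on a toy frame (the column names here are made up):</p>

```python
import pandas as pd

# toy stand-in for the training data; column names are hypothetical
df = pd.DataFrame({'other': [1, 2], 'Image': ['1 2 3', '4 5 6']})

# split the last column's strings into lists, in one vectorized step
df.iloc[:, -1] = df.iloc[:, -1].str.split()

print(df['Image'].tolist())  # [['1', '2', '3'], ['4', '5', '6']]
```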
| 0 | 2016-07-25T05:27:22Z | [
"python",
"python-3.x",
"pandas"
] |
Sum a column of values including letters in Python | 38,560,126 | <p>I have an input CSV file and need to add all the values in one of the columns, but the values are not plain integers and I'm not sure how to go about it.</p>
<p>The total output should be around 15k, which is the sum of the entire column. I am using pandas dataframe to store .csv file.</p>
<p>Here is the one of the columns in my input <code>.csv</code> file:</p>
<pre><code>DAMAGE_PROPERTY
0K
0K
2.5K
2.5K
.25K
.25K
2.5K
25K
2.5K
.25K
25K
25K
250K
2.5K
25K
2.5K
2.5K
2.5K
0K
2.5K
.25K
2.5K
25K
</code></pre>
 | 2 | 2016-07-25T05:11:23Z | 38,560,162 | <p>I think you first need to remove the <code>K</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>, then cast to <code>float</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a>, and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sum.html" rel="nofollow"><code>sum</code></a>:</p>
<pre><code>print (df.DAMAGE_PROPERTY.str.replace('K','').astype(float).sum())
401.0
</code></pre>
<p>Then you can multiply by <code>1000</code>:</p>
<pre><code>print (df.DAMAGE_PROPERTY.str.replace('K','').astype(float).sum() * 1000)
401000.0
</code></pre>
<p>If you need to add the <code>K</code> back:</p>
<pre><code>print (str(df.DAMAGE_PROPERTY.str.replace('K','').astype(float).sum()) + 'K')
401.0K
</code></pre>
<hr>
<p>EDIT by comment:</p>
<p>If you need the output in <code>K</code>:</p>
<pre><code>print (df)
DAMAGE_PROPERTY
0 2.5K
1 2.5K
2 25M
#create mask where values contain `M`
mask = df.DAMAGE_PROPERTY.str.contains('M')
print (mask)
0 False
1 False
2 True
Name: DAMAGE_PROPERTY, dtype: bool
#multiply by 1000 where the mask is True
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.str.replace(r'[KM]','').astype(float)
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.mask(mask, df.DAMAGE_PROPERTY*1000)
print (df)
DAMAGE_PROPERTY
0 2.5
1 2.5
2 25000.0
print (df['DAMAGE_PROPERTY'].sum())
25005.0
print (str(df['DAMAGE_PROPERTY'].sum()) + 'K' )
25005.0K
</code></pre>
<p>If you need the output as a number:</p>
<pre><code>df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.str.replace(r'[KM]','').astype(float)
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.mask(mask, df.DAMAGE_PROPERTY*1000) * 1000
print (df)
DAMAGE_PROPERTY
0 2500.0
1 2500.0
2 25000000.0
print (df['DAMAGE_PROPERTY'].sum())
25005000.0
</code></pre>
<p>EDIT1:</p>
<p>If there are values with <code>B</code>:</p>
<pre><code>print (df)
DAMAGE_PROPERTY
0 2.5K
1 2.5B
2 25M
maskM = df.DAMAGE_PROPERTY.str.contains('M')
print (maskM)
0 False
1 False
2 True
Name: DAMAGE_PROPERTY, dtype: bool
maskB = df.DAMAGE_PROPERTY.str.contains('B')
print (maskB)
0 False
1 True
2 False
Name: DAMAGE_PROPERTY, dtype: bool
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.str.replace(r'[KMB]','').astype(float)
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.mask(maskM, df.DAMAGE_PROPERTY*1000)
df['DAMAGE_PROPERTY'] = df.DAMAGE_PROPERTY.mask(maskB, df.DAMAGE_PROPERTY*1000000)
print (df)
DAMAGE_PROPERTY
0 2.5
1 2500000.0
2 25000.0
print (df['DAMAGE_PROPERTY'])
0 2.5
1 2500000.0
2 25000.0
Name: DAMAGE_PROPERTY, dtype: float64
</code></pre>
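<p>An alternative sketch that handles all three suffixes in one pass with a multiplier mapping (the sample values are made up):</p>

```python
import pandas as pd

# map each suffix to its multiplier
factors = {'K': 1e3, 'M': 1e6, 'B': 1e9}

s = pd.Series(['2.5K', '25M', '2.5B'])

# strip the last character for the number, map the last character to a factor
numbers = s.str[:-1].astype(float) * s.str[-1].map(factors)
print(numbers.sum())  # 2525002500.0
```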
| 3 | 2016-07-25T05:16:01Z | [
"python",
"csv",
"pandas",
"dataframe"
] |
Sum a column of values including letters in Python | 38,560,126 | <p>I have an input CSV file and need to add all the values in one of the columns, but the values are not plain integers and I'm not sure how to go about it.</p>
<p>The total output should be around 15k, which is the sum of the entire column. I am using pandas dataframe to store .csv file.</p>
<p>Here is the one of the columns in my input <code>.csv</code> file:</p>
<pre><code>DAMAGE_PROPERTY
0K
0K
2.5K
2.5K
.25K
.25K
2.5K
25K
2.5K
.25K
25K
25K
250K
2.5K
25K
2.5K
2.5K
2.5K
0K
2.5K
.25K
2.5K
25K
</code></pre>
| 2 | 2016-07-25T05:11:23Z | 38,560,177 | <p>I'm not familiar with pandas/dataframe, but you can use simple Python logic for this. Assuming your file follows the same pattern of having the <code>"K"</code> as the last character in each line, consider the following:</p>
<pre><code>>>> float("2.0K"[:-1])
2.0
>>> float("2.0M"[:-1])
2.0
</code></pre>
<p>You can use the bit above on each line. For example:</p>
<pre><code># assuming you've read the contents into a list called "lines"
values = []
for s in lines:
try:
        values.append(float(s[:-1]))
except ValueError:
# found something else; log it or something
pass
</code></pre>
<p>Finally, you just add them together with Python's built-in <code>sum</code> function:</p>
<pre><code>total = sum(values)
</code></pre>
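<p>Putting the pieces together, a self-contained sketch (the sample lines are made up):</p>

```python
# hypothetical lines, as if read from the file
lines = ['0K', '2.5K', '.25K', '25K']

values = []
for s in lines:
    try:
        values.append(float(s[:-1]))
    except ValueError:
        pass  # skip anything that is not a number plus a suffix

total = sum(values)
print(total)  # 27.75
```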
| 0 | 2016-07-25T05:17:24Z | [
"python",
"csv",
"pandas",
"dataframe"
] |
Sum a column of values including letters in Python | 38,560,126 | <p>I have an input CSV file and need to add all the values in one of the columns, but the values are not plain integers and I'm not sure how to go about it.</p>
<p>The total output should be around 15k, which is the sum of the entire column. I am using pandas dataframe to store .csv file.</p>
<p>Here is the one of the columns in my input <code>.csv</code> file:</p>
<pre><code>DAMAGE_PROPERTY
0K
0K
2.5K
2.5K
.25K
.25K
2.5K
25K
2.5K
.25K
25K
25K
250K
2.5K
25K
2.5K
2.5K
2.5K
0K
2.5K
.25K
2.5K
25K
</code></pre>
| 2 | 2016-07-25T05:11:23Z | 38,560,970 | <p>Try this:</p>
<p>Following this pattern, you can add "B" for billions, and do nothing for values that don't have "K" or "M". </p>
<pre><code>def chgFormat(x):
newFormat = 0
if x[-1] == 'K': newFormat = float(x[:-1])
elif x[-1] == 'H': newFormat = float(x[:-1])/10
elif x[-1] == 'M': newFormat = float(x[:-1])*1000
elif x[-1] == 'B': newFormat = float(x[:-1])*1000000
return newFormat
print str(sum(df['DAMAGE_PROPERTY'].dropna().apply(chgFormat)))+'K'
 print str(sum(df['DAMAGE_PROPERTY'].dropna().apply(chgFormat))/1000)+'M'
Results:
401.0K
0.401M
</code></pre>
<p>Use this if there are NaNs: </p>
<pre><code> print str(sum(df3['DAMAGE_PROPERTY'].dropna().apply(chgFormat)))+'K'
print str(sum(df3['DAMAGE_PROPERTY'].dropna().apply(chgFormat))/1000)+'M'
</code></pre>
<p>Edited #3: </p>
<pre><code> print sum(df3['DAMAGE_PROPERTY'].dropna().apply(chgFormat))
</code></pre>
| 3 | 2016-07-25T06:26:28Z | [
"python",
"csv",
"pandas",
"dataframe"
] |
Sum a column of values including letters in Python | 38,560,126 | <p>I have an input CSV file and need to add all the values in one of the columns, but the values are not plain integers and I'm not sure how to go about it.</p>
<p>The total output should be around 15k, which is the sum of the entire column. I am using pandas dataframe to store .csv file.</p>
<p>Here is the one of the columns in my input <code>.csv</code> file:</p>
<pre><code>DAMAGE_PROPERTY
0K
0K
2.5K
2.5K
.25K
.25K
2.5K
25K
2.5K
.25K
25K
25K
250K
2.5K
25K
2.5K
2.5K
2.5K
0K
2.5K
.25K
2.5K
25K
</code></pre>
| 2 | 2016-07-25T05:11:23Z | 38,561,717 | <p>I'd write these functions:</p>
<pre><code>import re
import numpy as np
mapper = dict(k=1e3, K=1e3,
m=1e6, M=1e6,
b=1e9, B=1e9)
pot = ('K', 'M', 'B')
def revmap(value):
powers_of_K = int(np.log10(value) // 3)
if powers_of_K > len(pot):
suffix = pot[-1]
else:
suffix = pot[powers_of_K - 1]
k = mapper[suffix]
f = ("%f" % (value / k)).rstrip('0').rstrip('.')
return f + suffix
def sum_with_units(s):
regex = r'(?P<value>.*)(?P<unit>k|m)'
s_ = s.str.extract(regex, expand=True, flags=re.IGNORECASE)
summed = (s_.value.astype(float) * s_.unit.map(mapper)).sum()
return revmap(summed)
sum_with_units(df.DAMAGE_PROPERTY)
'401K'
</code></pre>
<p>To demonstrate the larger units, take:</p>
<pre><code>df_plus = pd.concat([df for _ in range(2500)])
sum_with_units(df_plus.DAMAGE_PROPERTY)
'1.0025B'
</code></pre>
| 2 | 2016-07-25T07:15:02Z | [
"python",
"csv",
"pandas",
"dataframe"
] |
Why does my function in python adds headers improperly? | 38,560,224 | <p>So, I have this function that makes files, in this case i'm doing .csv files. In the function I want to add the feature to add a header in the new written file. However, it seems that when it writes the file, the new header row is added to the previous row instead of moving that first row down to add the header row(see picture). </p>
<p><a href="http://i.stack.imgur.com/DovjH.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/DovjH.jpg" alt="enter image description here"></a></p>
<p>Here is my code:</p>
<pre><code>def fileMaker(name, data):
header = ['UTC Time','ALtitude']
newFile = open(name,'w')
    newFile.write(",".join(header)+"\n")
for row in data:
for each in row:
newFile.write(each)
newFile.write('\n')
newFile.close()
</code></pre>
<p>If you can help get the header in the first row and move the old first row onto the second row, and second row onto the third row and so on... It would be much appreciated seriously! If the question needs formatting please don't hesitate. </p>
| -1 | 2016-07-25T05:22:49Z | 38,560,267 | <p>When you use <code>str()</code> on a list, it will return literally what you see. E.g.:</p>
<pre><code>>>> a = [1, 2, 3]
>>> str(a)
'[1, 2, 3]'
</code></pre>
<p>What you want is to use the <code>.join()</code> function with on a string and join the list:</p>
<pre><code>>>> a = ['1', '2', '3']
>>> "".join(a)
'123'
</code></pre>
<p>Note that if those were normal integers, it would raise an error:</p>
<pre><code>>>> a = [1, 2, 3]
>>> "".join(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence item 0: expected string, int found
</code></pre>
<p>In your case, you can actually use this:</p>
<pre><code>newFile.write(",".join(header)+"\n")
</code></pre>
<p>This will separate each value by a comma.</p>
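<p>As an aside, the standard library <code>csv</code> module handles the joining and quoting for you — a minimal sketch (the sample rows are made up):</p>

```python
import csv
import io

header = ['UTC Time', 'Altitude']
rows = [['12:00:00', '100'], ['12:00:01', '105']]

buf = io.StringIO()  # stands in for an open file
writer = csv.writer(buf)
writer.writerow(header)   # header lands on its own first row
writer.writerows(rows)    # data rows follow

print(buf.getvalue().splitlines()[0])  # UTC Time,Altitude
```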
| 1 | 2016-07-25T05:26:33Z | [
"python",
"csv",
"header"
] |
Parsing struct within struct in C using pycparser? | 38,560,264 | <p>I have this example c file I want to parse:</p>
<pre><code>StructWithinStruct.c
// simple struct within a struct example
struct A {
int a;
};
struct B {
A a;
int b;
};
</code></pre>
<p>I'm running <a href="https://github.com/eliben/pycparser" rel="nofollow">pcyparser</a> to parse it, with the following code </p>
<pre><code>exploreStruct.py
#parse StructWithinStruct
from pycparser import parse_file
ast = parse_file(filename='..\StructWithinStruct.c')
ast.show()
</code></pre>
<p>As a result, I got the following:</p>
<pre><code>Traceback (most recent call last):
File "exploreStruct.py", line 3, in <module>
ast = parse_file(filename='...\StructWithinStruct.c')
File "D:\...\pycparser\__init__.py", line 93, in parse_file
return parser.parse(text,filename)
File "D:\...\pycparser\c_parser.py", line 146, in parse
debug=debug_level)
File "D:\...\pycparser\yacc.py", line 331, in parse
return self.parseropt_notrack(input, lexer, debug, tracking, tokenfunc)
File "D:\...\pycparser\yacc.py", line 1181, in parseropt_notrack
tok=call_errorfunc(self.errorfunc, errtoken, self)
File "D:\...\pycparser\yacc.py", line 193, in call_errorfunc
r=errorfunc(token)
File "D:\...\pycparser\c_parser.py", line 1699, in p_error
column=self.clex.find_tok_column(p)))
File "D:\...\pycparser\plyparser.py", line 55, in _parse_error
raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError: D:...\StructWithinStruct.c:7:2: Before A
</code></pre>
<p>So, can pycparser handle a struct within a struct, or not?
I thought this was a basic requirement, so I'm pretty sure the problem lies somewhere in my configuration...</p>
<p><strong>One more thing:</strong> I know that pcypareser author, @<a href="http://stackoverflow.com/users/8206/eli-bendersky">Eli Bendersky</a>, says that one should <a href="http://eli.thegreenplace.net/2011/07/03/parsing-c-in-python-with-clang" rel="nofollow">use Clang to parse C++</a>, but I will like to know if there's another option nowadays to parse C++ (preferably over Python), and is user-friendly.</p>
<p>Thanks.</p>
 | 1 | 2016-07-25T05:26:17Z | 38,560,367 | <p>In C, <code>A</code> by itself is not a type name. In C++ <code>A</code> alone would suffice, but in C you need to add the <code>struct</code> keyword:</p>
<pre><code>struct A {
int a;
};
struct B {
struct A a;
int b;
};
</code></pre>
<p>Or, you can declare a synonym with a <code>typedef</code> keyword:</p>
<pre><code>struct A {
int a;
};
typedef struct A A;
</code></pre>
<p>or, shorter:</p>
<pre><code>typedef struct A {
int a;
} A;
</code></pre>
<p>From that point the declaration</p>
<pre><code>A a;
</code></pre>
<p>should compile properly.</p>
| 2 | 2016-07-25T05:36:11Z | [
"python",
"c++",
"c",
"parsing",
"clang"
] |
Indentation Error: unindent does not match any outer indentation level | 38,560,338 | <p>I can't seem to figure out what's going on here. When I compile I get the <em>does not match</em> error.</p>
<p>It gives me the error about indentation mismatch on the line with <code>bgB = 0;</code></p>
<pre><code>def calcBG(ftemp):
"This calculates the color value for the background"
variance = ftemp - justRight; # Calculate the variance
adj = calcColorAdj(variance); # Scale it to 8 bit int
bgList = [0,0,0] # initialize the color array
if(variance < 0):
bgR = 0; # too cold, no red bgB = adj; # green and blue slide equally with adj bgG = 255 - adj; elif(variance == 0): # perfect, all on green bgR = 0; bgB = 0; bgG = 255; elif(variance > 0): # too hot - no blue
bgB = 0;
bgR = adj; # red and green slide equally with Adj
bgG = 255 - adj;
</code></pre>
<p>so after updating the code with what @Downshift suggested and adding a few elifs i got the same thing
<code>
def calcBG(ftemp):
"This calculates the color value for the background"
variance = ftemp - justRight; # Calculate the variance
adj = calcColorAdj(variance); # Scale it to 8 bit int
bgList = [0,0,0] # initialize the color array
if(variance < 0):<br>
bgR = 0; # too cold, no red<br>
bgB = adj; # green and blue slide equally with adj<br>
bgG = 255 - adj;<br>
elif(variance == 0): # perfect, all on green<br>
bgR = 0;<br>
bgB = 0;<br>
bgG = 255;<br>
elif(variance > 0): # too hot - no blue
bgB = 0;
bgR = adj; # red and green slide equally with Adj
bgG = 255 - adj;
</code></p>
<p>ALSO: if someone could point out/explain to me exactly what i'm failing at that'd be great. because i can't seem to find my issue in this second portion. which is the same issue as the first. </p>
| -4 | 2016-07-25T05:32:32Z | 38,560,755 | <p>As the interpreter tells you your indentation levels are not consistent. Be sure to indent after first line of method definitions and <code>if</code> statements, With no changes to your code other than fixing indents:</p>
<pre><code>def calcBG(ftemp):
"""This calculates the color value for the background"""
variance = ftemp - justRight; # Calculate the variance
adj = calcColorAdj(variance); # Scale it to 8 bit int
bgList = [0,0,0] # initialize the color array
if(variance < 0):
bgR = 0; # too cold, no red bgB = adj; # green and blue slide equally with adj bgG = 255 - adj; elif(variance == 0): # perfect, all on green bgR = 0; bgB = 0; bgG = 255; elif(variance > 0): # too hot - no blue
bgB = 0;
bgR = adj; # red and green slide equally with Adj
bgG = 255 - adj;
</code></pre>
| 0 | 2016-07-25T06:08:53Z | [
"python",
"indentation"
] |
Django select query with limited fields | 38,560,347 | <p>I want to select an object with just one returned field. I can do it using values, but the problem is that when not using values it returns an object, while using values returns a dictionary. Is there a reason for this difference? And is there a way to get objects back with just one or two fields?</p>
<pre><code> obj=UserProfile.objects.get(pk=1)
obj=UserProfile.objects.values('my_field').get(pk=1)
</code></pre>
| 0 | 2016-07-25T05:33:28Z | 38,560,407 | <p>You can use <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#django.db.models.query.QuerySet.only" rel="nofollow">only()</a> method and enter fields which you need </p>
<pre><code>obj=UserProfile.objects.only('my_field').get(pk=1)
</code></pre>
| 4 | 2016-07-25T05:39:21Z | [
"python",
"django",
"django-models"
] |
python threading: max number of threads to run | 38,560,470 | <p>let's say i have something similar to:</p>
<pre><code>def worker(name):
time.sleep(10)
print name
return
thrs = []
for i in range(1000):
t1 = threading.Thread(target=worker, args=(i,))
thrs.append(t1)
for t in thrs:
t.start()
</code></pre>
<p>Is there a way to specify how many threads can run in parallel? In the above case, all 1000 will run in parallel.</p>
| 1 | 2016-07-25T05:44:39Z | 38,560,689 | <p>This can be done using <a href="https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.dummy" rel="nofollow"><code>multiprocessing.dummy</code></a> which provides a threaded version of the <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow"><code>multiprocessing</code></a> api.</p>
<pre><code>from multiprocessing.dummy import Pool
pool = Pool(10)
result = pool.map(worker, range(1000))
</code></pre>
<p>In python 3, <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor" rel="nofollow"><code>concurrent.futures.ThreadPoolExecutor</code></a> usually provides a nicer interface.</p>
| 2 | 2016-07-25T06:03:36Z | [
"python",
"multithreading",
"python-2.7"
] |
How to connect my python bot to microsoft bot connector | 38,560,546 | <p>I want to write a python bot and want to know whether it is possible to connect my bot to the microsoft bot connector?</p>
| 0 | 2016-07-25T05:51:29Z | 39,328,480 | <p>Yes it's possible. Please checkout <a href="https://github.com/ahmadfaizalbh/Microsoft-chatbot" rel="nofollow">Microsoft bot built on Django (python web framework)</a> for implementation.</p>
<p>Below is python code that replies back to the Microsoft bot connector:</p>
<pre><code>import requests
import datetime
app_client_id = `<Microsoft App ID>`
app_client_secret = `<Microsoft App Secret>`
def sendMessage(serviceUrl,channelId,replyToId,fromData, recipientData,message,messageType,conversation):
url="https://login.microsoftonline.com/common/oauth2/v2.0/token"
data = {"grant_type":"client_credentials",
"client_id":app_client_id,
"client_secret":app_client_secret,
"scope":"https://graph.microsoft.com/.default"
}
response = requests.post(url,data)
resData = response.json()
responseURL = serviceUrl + "v3/conversations/%s/activities/%s" % (conversation["id"],replyToId)
chatresponse = requests.post(
responseURL,
json={
"type": messageType,
"timestamp": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f%zZ"),
"from": fromData,
"conversation": conversation,
"recipient": recipientData,
"text": message,
"replyToId": replyToId
},
headers={
"Authorization":"%s %s" % (resData["token_type"],resData["access_token"])
}
)
</code></pre>
<p>In the above example please replace <code><Microsoft App ID></code> and <code><Microsoft App Secret></code> with appropriate <code>App ID</code> and <code>App secret</code>.
For more APIs, check out the <a href="https://docs.botframework.com/en-us/restapi/connector/#navtitle" rel="nofollow">Microsoft Bot Connector REST API - v3.0</a>.</p>
| 1 | 2016-09-05T10:20:19Z | [
"python",
"bots",
"connector"
] |
Good Design Principles: Where to put text processing code? | 38,560,645 | <p>I have a situation where I have to do a lot of text processing on data before I use the processed text to create an object instance. The question I have is whether it is better oop design to do the text processing first, and then create the object instance, or pass the unprocessed text to the object constructor and do it there. Basically:</p>
<p>Method 1:</p>
<pre><code>lots_of_text = "................."
class_properties = process_text(lots_of_text)
newobject = MyObject(class_properties)
</code></pre>
<p>Method 2:</p>
<pre><code>newobject = MyObject(lots_of_text)
</code></pre>
<p>It seems like a trivial question when both would work, but when the text processing in reality can be hundreds of lines of code, I think it is worth considering. Thanks for any thoughts.</p>
| 0 | 2016-07-25T06:00:01Z | 38,560,976 | <p>When coming up with the design, consider the separation of responsibilities. Is text processing the responsibility of <code>MyObject</code>? Does it ever need contents of <code>lots_of_text</code>, except to process it in the constructor? (Could it ever make sense to defer processing for later, e.g. for performance reasons?) If the answer to these questions is "no", then text processing does not belong to <code>MyObject</code>.</p>
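<p>One pattern that keeps both options open is a named constructor — parsing stays out of <code>__init__</code> but still lives next to the class. A hedged sketch (all names here are hypothetical):</p>

```python
def process_text(text):
    # hypothetical stand-in for the hundreds of lines of parsing
    return text.split()

class MyObject(object):
    def __init__(self, properties):
        # __init__ stays trivial: it only stores ready-made properties
        self.properties = properties

    @classmethod
    def from_text(cls, text):
        # the parsing step is explicit and can also be invoked separately
        return cls(process_text(text))

obj = MyObject.from_text("a b c")
print(obj.properties)  # ['a', 'b', 'c']
```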
| 0 | 2016-07-25T06:27:04Z | [
"python",
"python-3.x",
"oop"
] |
Python Pandas dataframe reading exact specified range in an excel sheet | 38,560,748 | <p>I have a lot of different tables (and other unstructured data) in an excel sheet. I need to create a dataframe out of range 'A3:D20' from 'Sheet2' of Excel sheet 'data'</p>
<p>All the examples that I come across drill down only to the sheet level, not how to pick data from an exact range.</p>
<pre><code>import openpyxl
import pandas as pd
wb = openpyxl.load_workbook('data.xlsx')
sheet = wb.get_sheet_by_name('Sheet2')
range = ['A3':'D20'] #<-- how to specify this?
spots = pd.DataFrame(sheet.range) #what should be the exact syntax for this?
print (spots)
</code></pre>
<p>Once I get this, then I plan to lookup for some data in column A and find its corresponding value in column B</p>
<p>EDIT: I realised that openpyxl takes too long, and so have changed that to <code>pandas.read_excel('data.xlsx','Sheet2')</code> instead, and it is much faster at that stage at least</p>
<p>Edit2: For the time being, I have put my data in just one sheet and removed all other info..added column names, Applied <code>index_col</code> on my leftmost column.. and then using wb.loc[] which solves it for me</p>
| 0 | 2016-07-25T06:08:37Z | 38,561,012 | <p>Use the following arguments from <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow">pandas read_excel documentation</a>:</p>
<blockquote>
<ul>
<li>skiprows : list-like
<ul>
<li>Rows to skip at the beginning (0-indexed)</li>
</ul></li>
<li>parse_cols : int or list, default None
<ul>
<li>If None then parse all columns,</li>
<li>If int then indicates last column to be parsed</li>
<li>If list of ints then indicates list of column numbers to be parsed</li>
<li>If string then indicates comma separated list of column names and column ranges (e.g. "A:E" or "A,C,E:F")</li>
</ul></li>
</ul>
</blockquote>
<p>I imagine the call will look like:</p>
<pre><code>df = pd.read_excel(filename, 'Sheet2', skiprows=2, parse_cols='A:D')
</code></pre>
| 0 | 2016-07-25T06:29:06Z | [
"python",
"excel",
"pandas"
] |
Python Pandas dataframe reading exact specified range in an excel sheet | 38,560,748 | <p>I have a lot of different tables (and other unstructured data) in an excel sheet. I need to create a dataframe out of range 'A3:D20' from 'Sheet2' of Excel sheet 'data'</p>
<p>All the examples that I come across drill down only to the sheet level, not how to pick data from an exact range.</p>
<pre><code>import openpyxl
import pandas as pd
wb = openpyxl.load_workbook('data.xlsx')
sheet = wb.get_sheet_by_name('Sheet2')
range = ['A3':'D20'] #<-- how to specify this?
spots = pd.DataFrame(sheet.range) #what should be the exact syntax for this?
print (spots)
</code></pre>
<p>Once I get this, then I plan to lookup for some data in column A and find its corresponding value in column B</p>
<p>EDIT: I realised that openpyxl takes too long, and so have changed that to <code>pandas.read_excel('data.xlsx','Sheet2')</code> instead, and it is much faster at that stage at least</p>
<p>Edit2: For the time being, I have put my data in just one sheet and removed all other info..added column names, Applied <code>index_col</code> on my leftmost column.. and then using wb.loc[] which solves it for me</p>
| 0 | 2016-07-25T06:08:37Z | 38,561,536 | <p>One way to do this is to use the <a href="https://pypi.python.org/pypi/openpyxl/" rel="nofollow">openpyxl</a> module.</p>
<p>Here's an example:</p>
<pre><code>from openpyxl import load_workbook
wb = load_workbook(filename='data.xlsx',
read_only=True)
ws = wb['Sheet2']
# Read the cell values into a list of lists
data_rows = []
for row in ws['A3':'D20']:
data_cols = []
for cell in row:
data_cols.append(cell.value)
data_rows.append(data_cols)
# Transform into dataframe
import pandas as pd
df = pd.DataFrame(data_rows)
</code></pre>
| 0 | 2016-07-25T07:03:52Z | [
"python",
"excel",
"pandas"
] |
Python: clear items from PriorityQueue | 38,560,760 | <p><a href="http://stackoverflow.com/questions/6517953/clear-all-items-from-the-queue">Clear all items from the queue </a></p>
<p>I read the above answer </p>
<p>Im using python 2.7</p>
<pre><code>import Queue
pq = Queue.PriorityQueue()
pq.clear()
</code></pre>
<p>I get the following error:</p>
<pre><code>AttributeError: PriorityQueue instance has no attribute 'clear'
</code></pre>
<p>Is there a way to easily empty the priority queue instead of manually popping out all the items? Or would re-instantiating it work (i.e. it wouldn't mess with <code>join()</code>)?</p>
| 0 | 2016-07-25T06:09:05Z | 38,560,911 | <p>It's actually <code>pq.queue.clear()</code>. However, as mentioned in the answers to the question you referenced, this is not documented and potentially unsafe.</p>
<p>The cleanest way is described in <a href="http://stackoverflow.com/a/18873213/3165737">this answer</a>:</p>
<pre><code>while not q.empty():
try:
q.get(False)
    except Queue.Empty:
continue
q.task_done()
</code></pre>
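<p>For illustration, a version-agnostic sketch of that drain loop on a <code>PriorityQueue</code> (note that <code>Empty</code> lives on the <code>Queue</code> module):</p>

```python
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

pq = queue.PriorityQueue()
for item in (3, 1, 2):
    pq.put(item)

# pop everything off instead of touching the undocumented pq.queue
while not pq.empty():
    try:
        pq.get(False)
    except queue.Empty:
        continue
    pq.task_done()

print(pq.empty())  # True
```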
<p>Re-instantiating the queue would work too of course (the object would simple be removed from memory), as long as no other part of your code holds on to a reference to the old queue.</p>
| 1 | 2016-07-25T06:21:20Z | [
"python",
"python-2.7",
"priority-queue"
] |
Draw ROC curve in python using confusion matrix only | 38,560,815 | <p>I need to draw a ROC curve using a confusion matrix only. My system crashed (all information was lost), so I cannot get the data back; I only have the values of the confusion matrix. I know how to create a ROC curve (<a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html</a>), but I have no clue how to draw one from a confusion matrix.
Please help me in this regard.</p>
| -1 | 2016-07-25T06:13:06Z | 38,566,612 | <p>Unfortunately you can't build a ROC curve from a single contingency matrix.</p>
<p>A ROC curve shows how the sensitivity and specificity vary as you change the decision threshold. In order to do that, it is necessary to calculate these values at <em>all possible thresholds</em> (at least those where the values step).</p>
<p>A contingency matrix reports the performance of <em>one specific threshold</em>. You can calculate sensitivity and specificity from it, but for a single threshold only. The information regarding the other thresholds has been lost, and therefore you cannot build the ROC curve.</p>
| 0 | 2016-07-25T11:31:30Z | [
"python",
"roc",
"confusion-matrix"
] |
How to read 10 records each time from csv file using pandas? | 38,560,897 | <p>I want to read a csv file which has 1000 rows, so I decided to read this file in chunks. But I'm facing issues while reading this csv file. </p>
<p>I want to read the first 10 records in the 1st iteration and convert specific columns to a python dictionary; in the 2nd iteration, skip the first 10 records and read the next 10, and so on.</p>
<p><strong>Input.csv-</strong></p>
<pre><code>time,line_id,high,low,avg,total,split_counts
1468332421098000,206,50879,50879,50879,2,"[50000,2]"
1468332421195000,206,39556,39556,39556,2,"[30000,2]"
1468332421383000,206,61636,61636,61636,2,"[60000,2]"
1468332423568000,206,47315,38931,43123,4,"[30000,2][40000,2]"
1468332423489000,206,38514,38445,38475,6,"[30000,6]"
1468332421672000,206,60079,60079,60079,2,"[60000,2]"
1468332421818000,206,44664,44664,44664,2,"[40000,2]"
1468332422164000,206,48500,48500,48500,2,"[40000,2]"
1468332423490000,206,39469,37894,38206,12,"[30000,12]"
1468332422538000,206,44023,44023,44023,2,"[40000,2]"
1468332423491000,206,38813,38813,38813,2,"[30000,2]"
1468332423528000,206,75970,75970,75970,2,"[70000,2]"
1468332423533000,206,42546,42470,42508,4,"[40000,4]"
1468332423536000,206,41065,40888,40976,4,"[40000,4]"
1468332423566000,206,66401,62453,64549,6,"[60000,6]"
</code></pre>
<p><strong>Program Code-</strong></p>
<pre><code>if __name__ == '__main__':
s = 0
while(True):
n = 10
df = pandas.read_csv('Input.csv', skiprows=s, nrows=n)
d = dict(zip(df.time, df.split_counts))
print d
s += n
</code></pre>
<p><strong>I'm facing this Issue-</strong> </p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'time'
</code></pre>
<p>I know that in the 2nd iteration it is unable to identify the time and split_counts attributes, but is there any way to do what I want?</p>
| 0 | 2016-07-25T06:19:51Z | 38,561,252 | <p>The first iteration should work fine, but any further iterations are problematic.</p>
<p><code>read_csv</code> has an <code>headers</code> kwarg with default value <code>infer</code> (which is basically <code>0</code>). This means that the first row in the parsed csv will be used as the columns' names in the dataframe.</p>
<p>The <code>read_csv</code> also has another kwarg, <code>names</code>. </p>
<p>As explained in the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>header : int or list of ints, default "infer"
Row number(s) to use as the column names, and the start of the data. Default behavior is as if set to 0 if no names passed, otherwise None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.</p>
<p>names : array-like, default None
List of column names to use. If file contains no header row, then you should explicitly pass header=None
<br/></p>
</blockquote>
<p>You should pass <code>header=None</code> and <code>names=['time', 'line_id', 'high', 'low', 'avg', 'total', 'split_counts']</code> to <code>read_csv</code>.</p>
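<p>A minimal sketch of the corrected call on an in-memory chunk (only two sample rows shown):</p>

```python
import io
import pandas as pd

names = ['time', 'line_id', 'high', 'low', 'avg', 'total', 'split_counts']

# stands in for one 10-row chunk of Input.csv, read past the header
chunk = io.StringIO(
    '1468332421098000,206,50879,50879,50879,2,"[50000,2]"\n'
    '1468332421195000,206,39556,39556,39556,2,"[30000,2]"\n'
)

df = pd.read_csv(chunk, header=None, names=names)
d = dict(zip(df.time, df.split_counts))
print(d)
```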
| 1 | 2016-07-25T06:44:58Z | [
"python",
"csv",
"pandas",
"dictionary",
"dataframe"
] |
How to read 10 records each time from csv file using pandas? | 38,560,897 | <p>I want to read a csv file which has 1000 rows, so I decided to read this file in chunks. But I'm facing issues while reading this csv file. </p>
<p>I want to read the first 10 records in the 1st iteration and convert specific columns to a python dictionary; in the 2nd iteration, skip the first 10 records and read the next 10, and so on.</p>
<p><strong>Input.csv-</strong></p>
<pre><code>time,line_id,high,low,avg,total,split_counts
1468332421098000,206,50879,50879,50879,2,"[50000,2]"
1468332421195000,206,39556,39556,39556,2,"[30000,2]"
1468332421383000,206,61636,61636,61636,2,"[60000,2]"
1468332423568000,206,47315,38931,43123,4,"[30000,2][40000,2]"
1468332423489000,206,38514,38445,38475,6,"[30000,6]"
1468332421672000,206,60079,60079,60079,2,"[60000,2]"
1468332421818000,206,44664,44664,44664,2,"[40000,2]"
1468332422164000,206,48500,48500,48500,2,"[40000,2]"
1468332423490000,206,39469,37894,38206,12,"[30000,12]"
1468332422538000,206,44023,44023,44023,2,"[40000,2]"
1468332423491000,206,38813,38813,38813,2,"[30000,2]"
1468332423528000,206,75970,75970,75970,2,"[70000,2]"
1468332423533000,206,42546,42470,42508,4,"[40000,4]"
1468332423536000,206,41065,40888,40976,4,"[40000,4]"
1468332423566000,206,66401,62453,64549,6,"[60000,6]"
</code></pre>
<p><strong>Program Code-</strong></p>
<pre><code>if __name__ == '__main__':
s = 0
while(True):
n = 10
df = pandas.read_csv('Input.csv', skiprows=s, nrows=n)
d = dict(zip(df.time, df.split_counts))
print d
s += n
</code></pre>
<p><strong>I'm facing this Issue-</strong> </p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'time'
</code></pre>
<p>I know that in the 2nd iteration it is unable to identify the time and split_counts attributes, but is there any way to do what I want?</p>
| 0 | 2016-07-25T06:19:51Z | 38,561,432 | <p>You can instead use the <code>chunksize</code> parameter of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u'''time,line_id,high,low,avg,total,split_counts
1468332421098000,206,50879,50879,50879,2,"[50000,2]"
1468332421195000,206,39556,39556,39556,2,"[30000,2]"
1468332421383000,206,61636,61636,61636,2,"[60000,2]"
1468332423568000,206,47315,38931,43123,4,"[30000,2][40000,2]"
1468332423489000,206,38514,38445,38475,6,"[30000,6]"
1468332421672000,206,60079,60079,60079,2,"[60000,2]"
1468332421818000,206,44664,44664,44664,2,"[40000,2]"
1468332422164000,206,48500,48500,48500,2,"[40000,2]"
1468332423490000,206,39469,37894,38206,12,"[30000,12]"
1468332422538000,206,44023,44023,44023,2,"[40000,2]"
1468332423491000,206,38813,38813,38813,2,"[30000,2]"
1468332423528000,206,75970,75970,75970,2,"[70000,2]"
1468332423533000,206,42546,42470,42508,4,"[40000,4]"
1468332423536000,206,41065,40888,40976,4,"[40000,4]"
1468332423566000,206,66401,62453,64549,6,"[60000,6]"'''
#after testing, replace io.StringIO(temp) with the filename
#chunksize=2 is used here only to keep the demo output short
reader = pd.read_csv(io.StringIO(temp), chunksize=2)
print (reader)
<pandas.io.parsers.TextFileReader object at 0x000000000AD1CD68>
</code></pre>
<pre><code>for df in reader:
print(dict(zip(df.time, df.split_counts)))
{1468332421098000: '[50000,2]', 1468332421195000: '[30000,2]'}
{1468332421383000: '[60000,2]', 1468332423568000: '[30000,2][40000,2]'}
{1468332423489000: '[30000,6]', 1468332421672000: '[60000,2]'}
{1468332421818000: '[40000,2]', 1468332422164000: '[40000,2]'}
{1468332423490000: '[30000,12]', 1468332422538000: '[40000,2]'}
{1468332423491000: '[30000,2]', 1468332423528000: '[70000,2]'}
{1468332423533000: '[40000,4]', 1468332423536000: '[40000,4]'}
{1468332423566000: '[60000,6]'}
</code></pre>
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk" rel="nofollow">pandas documentation</a>.</p>
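If you want one combined dictionary instead of one per chunk, you can merge the per-chunk dictionaries as they are produced. A small sketch, with a shortened inline sample standing in for the file:

```python
import io

import pandas as pd

# Shortened inline sample standing in for the real file.
temp = u'''time,line_id,high,low,avg,total,split_counts
1,206,50879,50879,50879,2,"[50000,2]"
2,206,39556,39556,39556,2,"[30000,2]"
3,206,61636,61636,61636,2,"[60000,2]"'''

merged = {}
for df in pd.read_csv(io.StringIO(temp), chunksize=2):
    # tolist() converts the numpy values to plain Python objects
    merged.update(zip(df['time'].tolist(), df['split_counts'].tolist()))

print(merged)  # {1: '[50000,2]', 2: '[30000,2]', 3: '[60000,2]'}
```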
| 1 | 2016-07-25T06:57:07Z | [
"python",
"csv",
"pandas",
"dictionary",
"dataframe"
] |
(YouTube API v3) GET request to Search.list returns empty responses | 38,561,148 | <p>I am working on a python program to retrieve information (video id, video author, etc) on all videos that show up as result to a search (<strong>q="cancer+vlog"</strong>).</p>
<p>I have the following <strong>GET</strong> request that runs first:</p>
<pre><code> results = youtube.search().list(
order="relevance",
part="snippet",
publishedAfter="2015-06-01T00:00:00Z",
maxResults=50,
type="video",
q="cancer+vlog"
).execute()
</code></pre>
<p>After processing the first batch of <strong>results</strong> (up to 50 videos as specified in maxResults=50), I check to see if <strong>results</strong> contains the <strong>nextPageToken</strong> key. If so, then I run another <strong>GET</strong> request with the <strong>nextPageToken</strong> from the previous run: </p>
<pre><code> results = youtube.search().list(
pageToken = results["nextPageToken"],
order="relevance",
part="snippet",
publishedAfter="2015-06-01T00:00:00Z",
maxResults=50,
type="video",
q="cancer+vlog"
).execute()
</code></pre>
<p>Since I want to retrieve <em>all</em> the videos from my search result, I repeat the <strong>GET</strong> request with the <strong>pageToken</strong> until <strong>results</strong> does not contain the <strong>nextPageToken</strong> key. This seems to work fine until the program reaches about ~600 videos; after that, the server continues giving 200 responses without any video information, essentially an empty <strong>results["items"]</strong> array (even though there are about 600K videos to be retrieved). I'm wondering if anyone has experienced this?</p>
<p>I didn't want to make this post any longer, but if anyone is interested in the entire code base, it's here: <a href="http://pastebin.com/vXeiQ6cz" rel="nofollow">http://pastebin.com/vXeiQ6cz</a></p>
| 0 | 2016-07-25T06:38:07Z | 38,616,395 | <p>Actually, you are putting a heavy load on the YouTube Data API (officially, YouTube itself does not use this API; it is provided for external users like us). Your script sends too many queries (GET requests) to the API in quick succession, and that is why it responds this way. I am not a Python developer; I am a PHP guy. In PHP there is a <code>sleep</code> function to delay execution for some time; if Python has something like that, you can use it to pause between requests. Hope it helps you.</p>
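For reference, Python's equivalent of PHP's <code>sleep</code> is <code>time.sleep</code>. A hypothetical sketch of a paginated loop that pauses between requests — here <code>fetch_page</code> is only a stand-in for <code>youtube.search().list(pageToken=token, ...).execute()</code>, not a real API function:

```python
import time

def fetch_all_pages(fetch_page, delay_seconds=1.0):
    # fetch_page(token) is a hypothetical stand-in for
    # youtube.search().list(pageToken=token, ...).execute()
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get("items", []))
        token = page.get("nextPageToken")
        if not token:
            return items
        time.sleep(delay_seconds)  # pause between requests to reduce the load
```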
| 0 | 2016-07-27T14:49:59Z | [
"python",
"youtube",
"youtube-api",
"youtube-data-api",
"youtube-api-v3"
] |
Neural network XOR gate not learning | 38,561,182 | <p>I'm trying to make an XOR gate using a two-layer perceptron network, but for some reason the network is not learning; when I plot the change of error in a graph, the error settles at a static level and oscillates in that region. </p>
<p>I did not add any bias to the network at the moment.</p>
<pre><code>import numpy as np
def S(x):
return 1/(1+np.exp(-x))
win = np.random.randn(2,2)
wout = np.random.randn(2,1)
eta = 0.15
# win = [[1,1], [2,2]]
# wout = [[1],[2]]
obj = [[0,0],[1,0],[0,1],[1,1]]
target = [0,1,1,0]
epoch = int(10000)
emajor = ""
for r in range(0,epoch):
for xy in range(len(target)):
tar = target[xy]
fdata = obj[xy]
fdata = S(np.dot(1,fdata))
hnw = np.dot(fdata,win)
hnw = S(np.dot(fdata,win))
out = np.dot(hnw,wout)
out = S(out)
diff = tar-out
E = 0.5 * np.power(diff,2)
emajor += str(E[0]) + ",\n"
delta_out = (out-tar)*(out*(1-out))
nindelta_out = delta_out * eta
wout_change = np.dot(nindelta_out[0], hnw)
for x in range(len(wout_change)):
change = wout_change[x]
wout[x] -= change
delta_in = np.dot(hnw,(1-hnw)) * np.dot(delta_out[0], wout)
nindelta_in = eta * delta_in
for x in range(len(nindelta_in)):
midway = np.dot(nindelta_in[x][0], fdata)
for y in range(len(win)):
win[y][x] -= midway[y]
f = open('xor.csv','w')
f.write(emajor) # python will convert \n to os.linesep
f.close() # you can omit in most cases as the destructor will call it
</code></pre>
<p>This is how the error changes over the learning rounds. Is this correct? The red line is how I was expecting the error to change.</p>
<p><a href="http://i.stack.imgur.com/OZ6nd.png" rel="nofollow"><img src="http://i.stack.imgur.com/OZ6nd.png" alt="enter image description here"></a></p>
<p>Is there anything wrong in my code? I can't seem to figure out what's causing the error. Help much appreciated. </p>
<p>Thanks in advance</p>
| 0 | 2016-07-25T06:39:53Z | 38,563,035 | <p>The error calculated in each epoch should be the sum of the squared errors over all targets (i.e. the error for every target):</p>
<pre><code>import numpy as np
def S(x):
return 1/(1+np.exp(-x))
win = np.random.randn(2,2)
wout = np.random.randn(2,1)
eta = 0.15
# win = [[1,1], [2,2]]
# wout = [[1],[2]]
obj = [[0,0],[1,0],[0,1],[1,1]]
target = [0,1,1,0]
epoch = int(10000)
emajor = ""
for r in range(0,epoch):
# ***** initialize final error *****
finalError = 0
for xy in range(len(target)):
tar = target[xy]
fdata = obj[xy]
fdata = S(np.dot(1,fdata))
hnw = np.dot(fdata,win)
hnw = S(np.dot(fdata,win))
out = np.dot(hnw,wout)
out = S(out)
diff = tar-out
E = 0.5 * np.power(diff,2)
# ***** sum all errors *****
finalError += E
delta_out = (out-tar)*(out*(1-out))
nindelta_out = delta_out * eta
wout_change = np.dot(nindelta_out[0], hnw)
for x in range(len(wout_change)):
change = wout_change[x]
wout[x] -= change
delta_in = np.dot(hnw,(1-hnw)) * np.dot(delta_out[0], wout)
nindelta_in = eta * delta_in
for x in range(len(nindelta_in)):
midway = np.dot(nindelta_in[x][0], fdata)
for y in range(len(win)):
win[y][x] -= midway[y]
# ***** Save final error *****
emajor += str(finalError[0]) + ",\n"
f = open('xor.csv','w')
f.write(emajor) # python will convert \n to os.linesep
f.close() # you can omit in most cases as the destructor will call it
</code></pre>
| 0 | 2016-07-25T08:35:06Z | [
"python",
"numpy",
"machine-learning",
"neural-network",
"artificial-intelligence"
] |
Neural network XOR gate not learning | 38,561,182 | <p>I'm trying to make an XOR gate using a two-layer perceptron network, but for some reason the network is not learning; when I plot the change of error in a graph, the error settles at a static level and oscillates in that region. </p>
<p>I did not add any bias to the network at the moment.</p>
<pre><code>import numpy as np
def S(x):
return 1/(1+np.exp(-x))
win = np.random.randn(2,2)
wout = np.random.randn(2,1)
eta = 0.15
# win = [[1,1], [2,2]]
# wout = [[1],[2]]
obj = [[0,0],[1,0],[0,1],[1,1]]
target = [0,1,1,0]
epoch = int(10000)
emajor = ""
for r in range(0,epoch):
for xy in range(len(target)):
tar = target[xy]
fdata = obj[xy]
fdata = S(np.dot(1,fdata))
hnw = np.dot(fdata,win)
hnw = S(np.dot(fdata,win))
out = np.dot(hnw,wout)
out = S(out)
diff = tar-out
E = 0.5 * np.power(diff,2)
emajor += str(E[0]) + ",\n"
delta_out = (out-tar)*(out*(1-out))
nindelta_out = delta_out * eta
wout_change = np.dot(nindelta_out[0], hnw)
for x in range(len(wout_change)):
change = wout_change[x]
wout[x] -= change
delta_in = np.dot(hnw,(1-hnw)) * np.dot(delta_out[0], wout)
nindelta_in = eta * delta_in
for x in range(len(nindelta_in)):
midway = np.dot(nindelta_in[x][0], fdata)
for y in range(len(win)):
win[y][x] -= midway[y]
f = open('xor.csv','w')
f.write(emajor) # python will convert \n to os.linesep
f.close() # you can omit in most cases as the destructor will call it
</code></pre>
<p>This is how the error changes over the learning rounds. Is this correct? The red line is how I was expecting the error to change.</p>
<p><a href="http://i.stack.imgur.com/OZ6nd.png" rel="nofollow"><img src="http://i.stack.imgur.com/OZ6nd.png" alt="enter image description here"></a></p>
<p>Is there anything wrong in my code? I can't seem to figure out what's causing the error. Help much appreciated. </p>
<p>Thanks in advance</p>
| 0 | 2016-07-25T06:39:53Z | 38,767,930 | <p>Here is a one hidden layer network with backpropagation which can be customized to run experiments with relu, sigmoid and other activations. After several experiments it was concluded that with relu the network performed better and reached convergence sooner, while with sigmoid the loss value fluctuated. This happens because, "<a href="http://stats.stackexchange.com/questions/126238/what-are-the-advantages-of-relu-over-sigmoid-function-in-deep-neural-network">the gradient of sigmoids becomes increasingly small as the absolute value of x increases</a>".</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from operator import xor
class neuralNetwork():
def __init__(self):
# Define hyperparameters
self.noOfInputLayers = 2
self.noOfOutputLayers = 1
self.noOfHiddenLayerNeurons = 2
# Define weights
self.W1 = np.random.rand(self.noOfInputLayers,self.noOfHiddenLayerNeurons)
self.W2 = np.random.rand(self.noOfHiddenLayerNeurons,self.noOfOutputLayers)
def relu(self,z):
return np.maximum(0,z)
def sigmoid(self,z):
return 1/(1+np.exp(-z))
def forward (self,X):
self.z2 = np.dot(X,self.W1)
self.a2 = self.relu(self.z2)
self.z3 = np.dot(self.a2,self.W2)
yHat = self.relu(self.z3)
return yHat
def costFunction(self, X, y):
#Compute cost for given X,y, use weights already stored in class.
self.yHat = self.forward(X)
J = 0.5*sum((y-self.yHat)**2)
return J
def costFunctionPrime(self,X,y):
# Compute derivative with respect to W1 and W2
delta3 = np.multiply(-(y-self.yHat),self.sigmoid(self.z3))
djw2 = np.dot(self.a2.T, delta3)
delta2 = np.dot(delta3,self.W2.T)*self.sigmoid(self.z2)
djw1 = np.dot(X.T,delta2)
return djw1,djw2
if __name__ == "__main__":
EPOCHS = 6000
SCALAR = 0.01
nn= neuralNetwork()
COST_LIST = []
inputs = [ np.array([[0,0]]), np.array([[0,1]]), np.array([[1,0]]), np.array([[1,1]])]
for epoch in xrange(1,EPOCHS):
cost = 0
for i in inputs:
X = i #inputs
y = xor(X[0][0],X[0][1])
cost += nn.costFunction(X,y)[0]
djw1,djw2 = nn.costFunctionPrime(X,y)
nn.W1 = nn.W1 - SCALAR*djw1
nn.W2 = nn.W2 - SCALAR*djw2
COST_LIST.append(cost)
plt.plot(np.arange(1,EPOCHS),COST_LIST)
plt.ylim(0,1)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title(str('Epochs: '+str(EPOCHS)+', Scalar: '+str(SCALAR)))
plt.show()
inputs = [ np.array([[0,0]]), np.array([[0,1]]), np.array([[1,0]]), np.array([[1,1]])]
print "X\ty\ty_hat"
for inp in inputs:
print (inp[0][0],inp[0][1]),"\t",xor(inp[0][0],inp[0][1]),"\t",round(nn.forward(inp)[0][0],4)
</code></pre>
<p>End Result:</p>
<p><a href="http://i.stack.imgur.com/yaRY5.png" rel="nofollow"><img src="http://i.stack.imgur.com/yaRY5.png" alt="enter image description here"></a></p>
<pre><code>X y y_hat
(0, 0) 0 0.0
(0, 1) 1 0.9997
(1, 0) 1 0.9997
(1, 1) 0 0.0005
</code></pre>
<p>The weights obtained after training were:</p>
<p>nn.w1</p>
<pre><code>[ [-0.81781753 0.71323677]
[ 0.48803631 -0.71286155] ]
</code></pre>
<p>nn.w2</p>
<pre><code>[ [ 2.04849235]
[ 1.40170791] ]
</code></pre>
<p>I found the following youtube series extremely helpful for understanding neural nets: <a href="http://www.youtube.com/watch?v=bxe2T-V8XRs&list=PLiaHhY2iBX9hdHaRr6b7XevZtgZRa1PoU" rel="nofollow">Neural networks demystified</a></p>
<p>There is only so much that I know and that can be explained in this answer. If you want an even better understanding of neural nets, I would suggest going through the following link: <a href="http://cs231n.github.io/neural-networks-1/" rel="nofollow">cs231n: Modelling one neuron</a></p>
| 0 | 2016-08-04T12:54:12Z | [
"python",
"numpy",
"machine-learning",
"neural-network",
"artificial-intelligence"
] |
Django library Unresolved Import LiClipse | 38,561,207 | <p>I am creating my first Django project from docs.djangoproject.com. After completing tutorial 4, I tried to import my project in LiClipse. But LiClipse shows an Unresolved Import error, even though my project works perfectly fine.
I have added django as an external library.
Please help me with this issue.
LiClipse shows the error only with django libraries and not with any other python library.</p>
| 0 | 2016-07-25T06:42:15Z | 38,581,034 | <p>Instead of adding the django package as an external library, add the folder that contains django. For example, if the folder hierarchy is something like /site-packages/django, then add site-packages as the external library, not django.</p>
| 0 | 2016-07-26T04:29:58Z | [
"python",
"django",
"liclipse"
] |
parsing data using pandas with fixed sequence of strings | 38,561,268 | <p>I have data that looks like the following in the file a.dat:</p>
<pre><code>01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g
</code></pre>
<p>I wish to parse them into three columns: <strong>timeline, floating number, string (either None or g)</strong></p>
<p>I have tried: </p>
<pre><code>df=pd.read_csv('a.dat',sep=' | ',engine='python')
</code></pre>
<p>which ends up with 4 columns: date, time , float and g</p>
<pre><code>df=pd.read_csv('a.dat',sep=' | (g)',engine='python')
</code></pre>
<p>which gives 5 columns, with columns 1 and 4 as NaN.</p>
<p>Is there any better way to create the dataframe without any post-processing?</p>
| 2 | 2016-07-25T06:46:21Z | 38,561,323 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp),
sep='\s+',
names=['date','time','float','string'],
parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<p>Or:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp),
delim_whitespace=True,
names=['date','time','float','string'],
parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<hr>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html" rel="nofollow"><code>read_fwf</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_fwf(io.StringIO(temp),
names=['date','time','float','string'],
parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<p>You can also specify the widths of the columns (note the parameter is named <code>widths</code>):</p>
<pre><code>df = pd.read_fwf(io.StringIO(temp),
                 widths=[12, 9, 7, 1],
names=['date','time','float','string'],
parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
| 2 | 2016-07-25T06:49:56Z | [
"python",
"csv",
"datetime",
"pandas",
"dataframe"
] |
How to read, manipulate and rewrite text in file using python | 38,561,344 | <p>How can I read data from a file, manipulate the data and rewrite it back to the file without wasting memory on a temp file if the file is too big to be processed in one chunk?</p>
| 0 | 2016-07-25T06:51:15Z | 38,561,345 | <p>The following code should work:</p>
<pre><code>chunksize = 64*1024 #arbitrary number
offset = 0
with open(path, 'r+b') as file:
while True:
file.seek(chunksize*offset) # sets pointer to reading spot
chunk = file.read(chunksize)
if len(chunk) == 0: # checks if EoF
break
        elif len(chunk) % 16 != 0: # pads the last chunk so its size divides by 16 (when processing fixed-size blocks, 16 bytes in my case)
            chunk += b' ' * (16 - len(chunk) % 16)  # the file is open in binary mode, so pad with bytes
file.seek(chunksize*offset) # returns pointer to beginning of the chunk in order to rewrite the data that was encrypted
file.write(do_something(chunk)) # edits and writes data to file
offset += 1
</code></pre>
<p>The code reads the data, returns to the beginning of the chunk and overwrites it. It will not work if the manipulated data is larger than the data that was read.</p>
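Since the file is opened in binary mode, <code>do_something</code> must map bytes to the same number of bytes. A hypothetical example of such a length-preserving transform (a simple XOR, standing in for whatever processing you actually need):

```python
def do_something(chunk):
    # Hypothetical length-preserving transform: XOR every byte with a fixed key.
    key = 0x5A
    return bytes(b ^ key for b in chunk)

data = b"hello world     "  # padded to 16 bytes, as in the loop above
out = do_something(data)
assert len(out) == len(data)      # same size, so it can be rewritten in place
assert do_something(out) == data  # XOR with the same key is its own inverse
```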
| 1 | 2016-07-25T06:51:15Z | [
"python",
"file",
"io"
] |
war/ear file deployment using jython/python scrpting from remote location in webspehere | 38,561,479 | <p>I'm new to jython and python scripts.</p>
<p>My new requirement is to deploy a war file from windows client to windows server, using scripts.</p>
<p>I have done this using Ant, and it works in a local environment. For the remote case I have done some R&D, but I didn't find a solution.</p>
<p>That's why I moved to Jython scripting; local deployment works,
but remote deployment is not working.</p>
<p>Can you please share any ideas on how to deploy the war file from my environment to a remote location, please?</p>
| 0 | 2016-07-25T06:59:41Z | 38,561,642 | <p>Murali, refer to <a href="http://www.jroller.com/holy/entry/was_6_0_ant_tasks" rel="nofollow">http://www.jroller.com/holy/entry/was_6_0_ant_tasks</a>. Did this help? Are you getting a specific error? For details, you can also refer to <a href="https://stackoverflow.com/questions/3117785/using-ant-to-deploy-ear-to-remote-websphere-application-server">Using Ant to deploy EAR to remote websphere application server</a>.</p>
<p>You can deploy using these tasks; you can specify the remote server's host name and ports for the deployment:</p>
<pre><code><target name="_startEarApp" depends="" description="Start a deployed ear app on the WAS server">
<wsStartApp wasHome="${was.root}" conntype="SOAP"
application="AppName"
host="my-remote-server"
port="remote-server-soap-port"
user="me" password="pass"
failonerror="true"
    />
</target>
</code></pre>
<p>Found this as well for other WS versions : <a href="https://www.ibm.com/developerworks/websphere/library/samples/SampleScripts.html" rel="nofollow">https://www.ibm.com/developerworks/websphere/library/samples/SampleScripts.html</a></p>
| 0 | 2016-07-25T07:09:30Z | [
"java",
"python",
"maven",
"deployment",
"ant"
] |
war/ear file deployment using jython/python scrpting from remote location in webspehere | 38,561,479 | <p>I'm new to jython and python scripts.</p>
<p>My new requirement is to deploy a war file from windows client to windows server, using scripts.</p>
<p>I have done this using Ant, and it works in a local environment. For the remote case I have done some R&D, but I didn't find a solution.</p>
<p>That's why I moved to Jython scripting; local deployment works,
but remote deployment is not working.</p>
<p>Can you please share any ideas on how to deploy the war file from my environment to a remote location, please?</p>
| 0 | 2016-07-25T06:59:41Z | 38,612,101 | <p>Everyone suggests setting the classpath to wasanttask.jar or com.ibm.websphere.v61_6.1.100.ws_runtime.jar and getting the tasks from there, but there are no jars available with those names in WAS 8.5.</p>
| 0 | 2016-07-27T11:47:45Z | [
"java",
"python",
"maven",
"deployment",
"ant"
] |
Why is this basic GPA Calculator returning a KeyError? | 38,561,521 | <pre><code>Grade1 = input ('Grade for Class 1?')
Grade2 = input ('Grade for Class 2?')
Grade3 = input ('Grade for Class 3?')
Grade4 = input ('Grade for Class 4?')
Grades = (str(Grade1), str(Grade2), str(Grade3), str(Grade4))
def average(numbers):
total= sum(numbers)
return total/len(numbers)
def RealGPA(semestergrades):
PointValues = {'A+': 4.2, 'A':4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0, 'B-': 2.7, 'C+': 2.3, 'C':2.0, 'C-': 1.7, 'D+': 1.3, 'D': 1.0, 'D-': 0.7, 'F': 0.00}
PointsEarned= []
for Grade in Grades:
Values=(PointValues[Grades])
PointsEarned.append(Values)
return average(PointsEarned)
print (RealGPA(Grades))
</code></pre>
<p>It says "KeyError: ('A', 'A', 'A', 'A')" if you type in A for all the inputs, which is weird, because 'A' is present in PointValues. Am I doing inputs incorrectly?</p>
| -1 | 2016-07-25T07:02:45Z | 38,561,582 | <p><code>PointValues[Grades]</code> looks for the tuple <code>(str(Grade1), str(Grade2), str(Grade3), str(Grade4))</code> in the <code>PointValues</code> dictionary, which obviously generates a <code>KeyError</code>. </p>
<p>Perhaps you meant <code>PointValues[Grade]</code>.</p>
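A minimal illustration of the difference between the two lookups (with a shortened point table):

```python
point_values = {'A': 4.0, 'B': 3.0}
grades = ('A', 'A', 'B', 'A')

try:
    point_values[grades]          # looks up the whole tuple -> KeyError
except KeyError as err:
    print('KeyError:', err.args[0])

points = [point_values[grade] for grade in grades]  # look up each grade
print(sum(points) / len(points))  # 3.75
```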
| 0 | 2016-07-25T07:06:30Z | [
"python",
"list"
] |
Why is this basic GPA Calculator returning a KeyError? | 38,561,521 | <pre><code>Grade1 = input ('Grade for Class 1?')
Grade2 = input ('Grade for Class 2?')
Grade3 = input ('Grade for Class 3?')
Grade4 = input ('Grade for Class 4?')
Grades = (str(Grade1), str(Grade2), str(Grade3), str(Grade4))
def average(numbers):
total= sum(numbers)
return total/len(numbers)
def RealGPA(semestergrades):
PointValues = {'A+': 4.2, 'A':4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0, 'B-': 2.7, 'C+': 2.3, 'C':2.0, 'C-': 1.7, 'D+': 1.3, 'D': 1.0, 'D-': 0.7, 'F': 0.00}
PointsEarned= []
for Grade in Grades:
Values=(PointValues[Grades])
PointsEarned.append(Values)
return average(PointsEarned)
print (RealGPA(Grades))
</code></pre>
<p>It says "KeyError: ('A', 'A', 'A', 'A')" if you type in A for all the inputs, which is weird, because 'A' is present in PointValues. Am I doing inputs incorrectly?</p>
| -1 | 2016-07-25T07:02:45Z | 38,561,598 | <p>A simple typo - you have written <code>Grades</code> instead of <code>Grade</code>.</p>
<p>Added a few other changes as well (have a look at the PEP8 style guide):</p>
<pre><code>grade1 = input ('Grade for Class 1?')
grade2 = input ('Grade for Class 2?')
grade3 = input ('Grade for Class 3?')
grade4 = input ('Grade for Class 4?')
grades = (str(grade1), str(grade2), str(grade3), str(grade4))
def average(numbers):
total = sum(numbers)
return total/len(numbers)
def real_GPA(semestergrades):
point_values = {'A+': 4.2, 'A':4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0,
'B-': 2.7, 'C+': 2.3, 'C':2.0, 'C-': 1.7, 'D+': 1.3,
'D': 1.0, 'D-': 0.7, 'F': 0.00}
points_earned = []
for grade in grades:
values = point_values[grade]
points_earned.append(values)
return average(points_earned)
print(real_GPA(grades))
</code></pre>
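As a side note, the lookup-and-average step can be written more compactly with a generator expression (a shortened point table is used here for illustration):

```python
point_values = {'A+': 4.2, 'A': 4.0, 'B': 3.0, 'F': 0.0}

def real_gpa(grades):
    # Sum the point value of each grade, then divide by the number of grades.
    return sum(point_values[g] for g in grades) / len(grades)

print(real_gpa(('A', 'A', 'B', 'A')))  # 3.75
```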
| 0 | 2016-07-25T07:07:40Z | [
"python",
"list"
] |
Why is this basic GPA Calculator returning a KeyError? | 38,561,521 | <pre><code>Grade1 = input ('Grade for Class 1?')
Grade2 = input ('Grade for Class 2?')
Grade3 = input ('Grade for Class 3?')
Grade4 = input ('Grade for Class 4?')
Grades = (str(Grade1), str(Grade2), str(Grade3), str(Grade4))
def average(numbers):
total= sum(numbers)
return total/len(numbers)
def RealGPA(semestergrades):
PointValues = {'A+': 4.2, 'A':4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0, 'B-': 2.7, 'C+': 2.3, 'C':2.0, 'C-': 1.7, 'D+': 1.3, 'D': 1.0, 'D-': 0.7, 'F': 0.00}
PointsEarned= []
for Grade in Grades:
Values=(PointValues[Grades])
PointsEarned.append(Values)
return average(PointsEarned)
print (RealGPA(Grades))
</code></pre>
<p>It says "KeyError: ('A', 'A', 'A', 'A')" if you type in A for all the inputs, which is weird, because 'A' is present in PointValues. Am I doing inputs incorrectly?</p>
| -1 | 2016-07-25T07:02:45Z | 38,561,681 | <p>There is a small typo: <code>Values=(PointValues[Grades])</code> should be <code>Values=(PointValues[Grade])</code>. Also, by convention, function and variable names should be all lower case.</p>
| 0 | 2016-07-25T07:12:47Z | [
"python",
"list"
] |
Code feedback for python guessing game | 38,561,588 | <p>I am new to Python (and coding in general) and, after about a week of reading "Thinking Like a Computer Scientist: Learning with Python", I decided to try and build a version of the classic "guessing game". I added some extra features, such as counting the number of guesses the user takes and playing against a simulated "computer" player, to make the program slightly more interesting. Also, the number of guesses the computer takes is based on the mean number of guesses needed to guess a number in a given range (which is logarithmic, base 2, for a range of n) and varies according to the standard deviation. Any feedback on the structure of my code or on the way I generate the number of guesses the computer takes would be much appreciated!!!</p>
<p>Anywayyysss.... here is my code</p>
<pre><code>import random
def get_number(level): #selects a random number in range depending on difficulty selected
if level == "e":
number = random.randint(1,20)
if level == "m":
number = random.randint(1,100)
if level == "h":
number = random.randint(1,1000)
elif level != "e" and level != "m" and level != "h":
print ("Invalid input!")
get_number()
return number
def select_level(): #prompts the user to select a difficulty to play on
level = str(input("Would you like to play on easy, medium, or hard? \n"
"Type 'e' for easy, 'm' for medium, or 'h' for hard!\n"))
return level
def guess_number(level): #function that prompts the user to guess within range depending on chosen difficulty
if level == "e":
guess = int(input("Guess a number between 1 and 20:\n"))
if level == "m":
guess = int(input("Guess a number between 1 and 100:\n"))
if level == "h":
guess = int(input("Guess a number between 1 and 1000:\n"))
return guess
def check_guess(guess,number): #processes the users guess and evaluates if it is too high, too low, or bang on
if guess > number:
print ("your guess is too high! Try again! \n")
if guess < number:
print ("your guess is too low! Try again! \n")
if guess == number:
print("\n{0} was the number!".format(number))
def com_num_guesses(level): #function to get the number of guesses taken by the computer
if level == "e":
com_guesses = round(random.normalvariate(3.7,1.1))
if level == "m":
com_guesses = round(random.normalvariate(5.8,1.319))
if level == "h":
com_guesses = round(random.normalvariate(8.99,1.37474))
print("The computer guessed the number in {0} guesses! Can you beat that?".format(com_guesses))
return com_guesses
def mainloop():
level = select_level()
number = get_number(level)
com_guesses = com_num_guesses(level)
guess = guess_number(level)
check_guess(guess,number)
num_guesses = 1
if guess == number: #tells program what to do if first guess is correct
print("You got it in {0} guesses.".format(num_guesses))
if num_guesses == com_guesses:
print("It took the computer {0} guesses too!\nIt's a tie!\n".format(com_guesses))
if num_guesses > com_guesses:
print("It took the computer {0} guesses.\nThe computer wins!\n".format((com_guesses)))
if num_guesses < com_guesses:
print("It took the computer {0} guesses.\nYou win!\n".format(com_guesses))
play_again = str(input("To play again type 'yes'. To exit type 'no'. \n"))
if play_again == "yes":
mainloop()
if play_again == "no":
raise SystemExit(0)
while True: #tells program how to handle guesses after the first guess
guess2 = guess_number(level)
check_guess(guess2,number)
num_guesses += 1
if guess2== number:
print( "You got it in {0} guesses.".format(num_guesses))
if num_guesses == com_guesses:
print("It took the computer {0} guesses too!\nIt's a tie!\n".format(com_guesses))
if num_guesses > com_guesses:
print("It took the computer {0} guesses.\nThe computer wins!\n".format((com_guesses)))
if num_guesses < com_guesses:
print("It took the computer {0} guesses.\nYou win!\n".format(com_guesses))
play_again = str(input("To play again type 'yes'. To exit type 'no'. \n"))
if play_again == "yes":
mainloop()
if play_again == "no":
raise SystemExit(0)
break
mainloop()
</code></pre>
| -2 | 2016-07-25T07:06:56Z | 38,562,199 | <p>In <code>get_number(level)</code>:</p>
<ul>
<li><code>elif</code> could be used for all <code>if</code> expressions after the first one. This would make the execution faster because, once an expression is true, the later expressions are not evaluated. (The same applies in <code>guess_number(level)</code> and <code>check_guess(guess,number)</code>.)</li>
<li>The <code>elif</code> could be made an <code>else</code>.</li>
<li>What is <code>get_number()</code> supposed to do? I think you want to write <code>number = get_number(level)</code> or you could use a <code>while</code> loop for the whole block of statements.</li>
</ul>
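A sketch of that refactor, combining the <code>elif</code> chain with input validation (raising <code>ValueError</code> instead of re-prompting is just an assumption for illustration):

```python
import random

def get_number(level):
    # elif chain: later branches are skipped once one matches
    if level == "e":
        upper = 20
    elif level == "m":
        upper = 100
    elif level == "h":
        upper = 1000
    else:
        raise ValueError("Invalid input!")
    return random.randint(1, upper)
```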
<p>What is the <code>while</code> loop in <code>mainloop()</code> supposed to do? You implement repeated execution of the code by calling <code>mainloop()</code> within itself. However, I would prefer the <code>while</code> loop over your implementation.</p>
<p>Is the behaviour when neither yes nor no is input (breaking out of the loop and getting to the end of <code>mainloop</code>) intended?</p>
<p>Why do you differentiate between the first guess and later guesses? They could be handled with the same code.</p>
| 0 | 2016-07-25T07:43:50Z | [
"python",
"python-3.x",
"random",
"while-loop",
"feedback"
] |
find regex match individually | 38,561,643 | <p>I have variables in a file, as below, which I have to find:</p>
<pre><code>#define varName (test_0x5F_u8)
#define varName1 test_0xFF_u16
</code></pre>
<p>I am unable to match the below-mentioned expressions:</p>
<pre><code>test_0xFF_u16 and (test_0x5F_u8)
</code></pre>
<ul>
<li>I want to find these variables individually as
<ul>
<li>test is common text and first match</li>
<li>0x5F is second match</li>
<li>u8 is third match</li>
</ul></li>
</ul>
<p>Python code</p>
<pre><code>re.compile(r'^#define\s+(?i)(\w+)\s+[test_0[xX][0-9a-fA-F][a-z0-9]]+')
</code></pre>
<ul>
<li><p>search result should give this</p>
<ul>
<li>group(1) = varName </li>
<li>group(2) = test_0x5F_u8</li>
</ul></li>
</ul>
<p>It is not finding the variables in the file. Can anyone help me with it?</p>
| 2 | 2016-07-25T07:09:32Z | 38,561,716 | <pre><code>re.compile(r'^#define\s+(?i)(\w+)\s+[test_0[xX][0-9a-fA-F][a-z0-9]]+')
</code></pre>
<p>There are many things missing in this regex: the opening parenthesis, the underscore after 5F, and the last <code>[a-z0-9]</code> will only match one character while at the end you have "u8", which is 2 characters. Obviously it won't match.</p>
<p>this should work fine:</p>
<pre><code>re.compile(r'^#define\s+(\w+)\s+\(?test_0[xX][0-9a-fA-F]+_[a-z0-9]+\)?')
</code></pre>
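<p>A quick demo of that pattern, plus a variant with extra capture groups that yields the individual pieces the question asks for (the variant's parentheses are made optional with <code>\(?</code> / <code>\)?</code> so it matches both sample lines):</p>

```python
import re

pattern = re.compile(r'^#define\s+(\w+)\s+\(test_0[xX][0-9a-fA-F]+_[a-z0-9]+\)')
m = pattern.search('#define varName (test_0x5F_u8)')
print(m.group(1))  # varName

# extra groups split the value into its pieces
parts = re.compile(r'^#define\s+(\w+)\s+\(?(test)_(0[xX][0-9a-fA-F]+)_([a-z0-9]+)\)?')
m2 = parts.search('#define varName1 test_0xFF_u16')
print(m2.groups())  # ('varName1', 'test', '0xFF', 'u16')
```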
| 0 | 2016-07-25T07:14:56Z | [
"python",
"regex"
] |
find regex match individually | 38,561,643 | <p>I have variables in a file which I have to find:</p>
<pre><code>#define varName (test_0x5F_u8)
#define varName1 test_0xFF_u16
</code></pre>
<p>I am unable to match the below-mentioned expressions:</p>
<pre><code>test_0xFF_u16 and (test_0x5F_u8)
</code></pre>
<ul>
<li>I want to find these variables individually, as
<ul>
<li>test is common text and first match</li>
<li>0x5F is second match</li>
<li>u8 is third match</li>
</ul></li>
</ul>
<p>Python code</p>
<pre><code>re.compile(r'^#define\s+(?i)(\w+)\s+[test_0[xX][0-9a-fA-F][a-z0-9]]+')
</code></pre>
<ul>
<li><p>search result should give this</p>
<ul>
<li>group(1) = varName </li>
<li>group(2) = test_0x5F_u8</li>
</ul></li>
</ul>
<p>It is not finding the variable in the file. Can anyone help me with it?</p>
| 2 | 2016-07-25T07:09:32Z | 38,562,713 | <p>You need to add an optional group to match the longer pattern. Also, to use optional parentheses, you need to add <code>\)?</code> and <code>\(?</code>.</p>
<pre><code>^#define\s+(\w+)\s+((?:test_0x[0-9a-f]+_[a-z0-9]+\s+or\s+)?\(?test_0x[0-9a-f]+_[a-z0-9]+\)?)
</code></pre>
<p>See <a href="https://regex101.com/r/gS1rD3/4" rel="nofollow">this regex demo</a>. Note it should be used with the <code>re.I</code> flag to make matching case-insensitive.</p>
<p><strong>Pattern explanation</strong>:</p>
<ul>
<li><code>^</code> - start of string</li>
<li><code>#define</code> - a literal text <code>#define</code></li>
<li><code>\s+</code> - 1+ whitespaces</li>
<li><code>(\w+)</code> - Group 1 capturing 1+ word chars</li>
<li><code>\s+</code> - ibid.</li>
<li><code>((?:test_0x[0-9a-f]+_[a-z0-9]+\s+or\s+)?\(?test_0x[0-9a-f]+_[a-z0-9]+\)?)</code> - Group 2 capturing:
<ul>
<li><code>(?:test_0x[0-9a-f]+_[a-z0-9]+\s+or\s+)?</code> - an optional (1 or 0 times due to <code>?</code> at the end) sequence of
<ul>
<li><code>test_0x</code> - <code>test_0x</code> substring</li>
<li><code>[0-9a-f]+</code> - 1 or more hex chars</li>
<li><code>_[a-z0-9]+</code> - an underscore and 1+ alphanumeric chars</li>
<li><code>\s+or\s+</code> - <code>or</code> enclosed with 1+ whitespaces</li>
</ul></li>
</ul></li>
<li><code>\(?</code> - an optional <code>(</code></li>
<li><code>test_0x[0-9a-f]+_[a-z0-9]+</code> - ibid.</li>
<li><code>\)?</code> - an optional <code>)</code>.</li>
</ul>
<p><a href="https://ideone.com/11IFCc" rel="nofollow">Python demo</a>:</p>
<pre><code>import re
p = re.compile(ur'^#define\s+(\w+)\s+((?:test_0x[0-9a-f]+_[a-z0-9]+\s+or\s+)?\(?test_0x[0-9a-f]+_[a-z0-9]+\)?)', re.IGNORECASE | re.MULTILINE)
s = u"#define varName test_0x5F_u8 or (test_0x5F_u8)\n#define varName (test_0x5F_u8)\n#define varName test_0x5F_u8"
print([x for x in p.findall(s)])
# => [(u'varName', u'test_0x5F_u8 or (test_0x5F_u8)'), (u'varName', u'(test_0x5F_u8)'), (u'varName', u'test_0x5F_u8')]
</code></pre>
| 2 | 2016-07-25T08:16:32Z | [
"python",
"regex"
] |
Remove contents inside parentheses of data frame | 38,561,705 | <p>I am trying to remove all the contents inside parentheses in all the columns of a data frame using the following code, but I can't figure out how to do it correctly. Any help is highly appreciated.</p>
<pre><code>def clean_text(data):
if data.find('(')!=-1:
        st=data[data.find("(") + 1:data.find(")")]
data.replace(st,'') # cant use this
return data.lower()
no_dup_cols = no_dup.columns.values
for col in no_dup_cols:
no_dup[col] = no_dup[col].apply(clean_text)
</code></pre>
| 3 | 2016-07-25T07:14:32Z | 38,561,732 | <p>Solution with loop columns and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>replace</code></a>:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'A':['(1)','2','3'],
'B':['(B) 77','s gg','d'],
'C':['s','(d) 44','f']})
print (data)
A B C
0 (1) (B) 77 s
1 2 s gg (d) 44
2 3 d f
for col in data:
data[col] = data[col].str.replace(r'\(.*\)', '')
print (data)
A B C
0 77 s
1 2 s gg 44
2 3 d f
</code></pre>
<p>Solution with list comprehension and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p>
<pre><code>data = pd.concat([data[col].str.replace(r'\(.*\)', '') for col in data], axis=1)
print (data)
A B C
0 77 s
1 2 s gg 44
2 3 d f
</code></pre>
| 4 | 2016-07-25T07:16:22Z | [
"python",
"pandas",
"dataframe"
] |
Remove contents inside parentheses of data frame | 38,561,705 | <p>I am trying to remove all the contents inside parentheses in all the columns of a data frame using the following code, but I can't figure out how to do it correctly. Any help is highly appreciated.</p>
<pre><code>def clean_text(data):
if data.find('(')!=-1:
        st=data[data.find("(") + 1:data.find(")")]
data.replace(st,'') # cant use this
return data.lower()
no_dup_cols = no_dup.columns.values
for col in no_dup_cols:
no_dup[col] = no_dup[col].apply(clean_text)
</code></pre>
| 3 | 2016-07-25T07:14:32Z | 38,562,203 | <p>I'm not really familiar with pandas, but if data is a string type, then you should do </p>
<pre><code>data = data.replace(st, '')
</code></pre>
<p>instead of</p>
<pre><code>data.replace(st,'')
</code></pre>
<p>cf. <a href="https://docs.python.org/2/library/string.html#string.replace" rel="nofollow">https://docs.python.org/2/library/string.html#string.replace</a></p>
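<p>A quick check of the difference (strings are immutable, so <code>replace</code> returns a new string and leaves the original untouched):</p>

```python
s = 'foo (bar)'
s.replace('(bar)', '')        # return value is discarded; s is unchanged
print(s)                      # foo (bar)
s = s.replace('(bar)', '')    # rebind the name to keep the change
print(s)                      # foo
```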
<p>Is it possible to have a data sample and a more precise example of what you expect to have as a result please? :)</p>
| 0 | 2016-07-25T07:44:00Z | [
"python",
"pandas",
"dataframe"
] |
Remove contents inside parentheses of data frame | 38,561,705 | <p>I am trying to remove all the contents inside parentheses in all the columns of a data frame using the following code, but I can't figure out how to do it correctly. Any help is highly appreciated.</p>
<pre><code>def clean_text(data):
if data.find('(')!=-1:
        st=data[data.find("(") + 1:data.find(")")]
data.replace(st,'') # cant use this
return data.lower()
no_dup_cols = no_dup.columns.values
for col in no_dup_cols:
no_dup[col] = no_dup[col].apply(clean_text)
</code></pre>
| 3 | 2016-07-25T07:14:32Z | 38,562,413 | <p>I'd stack the entire thing into a <code>pd.Series</code></p>
<pre><code>sk = range(df.columns.nlevels)
df = df.stack(sk)
</code></pre>
<p>Then perform a <code>str.replace</code></p>
<pre><code>df = df.str.replace(r'\(.*\)', '')
</code></pre>
<p>Then unstack back</p>
<pre><code>uk = [i * -1 - 1 for i in sk]
df = df.unstack(uk)
</code></pre>
<p>Altogether in a nice function</p>
<pre><code>def df_replace(df, *args, **kwargs):
sk = range(df.columns.nlevels)
uk = [i * -1 - 1 for i in sk]
return df.stack(sk).astype(str).str.replace(*args, **kwargs).unstack(uk)
</code></pre>
<p>Use it like you would <code>str.replace</code></p>
<pre><code>df_replace(df, r'\(.*\)', '')
</code></pre>
<hr>
<h3>Timing</h3>
<p>Conclusion is that my solution looks clever but is a bit slow... Or put another way, jezrael's solutions are faster.</p>
<p><strong>code</strong></p>
<pre><code>data = pd.DataFrame({'A':['(1)','2','3'],
'B':['(B) 77','s gg','d'],
'C':['s','(d) 44','f']})
def jez1(data):
data = data.copy()
for col in data:
data[col] = data[col].str.replace(r'\(.*\)', '')
return data
def jez2(data):
return pd.concat([data[col].str.replace(r'\(.*\)', '') for col in data], axis=1)
def pir(data):
return df_replace(data, r'\(.*\)', '')
</code></pre>
<p><a href="http://i.stack.imgur.com/fa739.png" rel="nofollow"><img src="http://i.stack.imgur.com/fa739.png" alt="enter image description here"></a></p>
| 4 | 2016-07-25T07:55:28Z | [
"python",
"pandas",
"dataframe"
] |
PyQt MainWindow closes after initialization | 38,561,760 | <p>I wanted to start a new project using PyQt5 and QtDesigner. To start, I just copied the code I had from previous projects in PyQt4 and tweaked it to the changes in PyQt5. So, the code to start the Main Window and a Timer which updates the application looks like this:</p>
<pre class="lang-python prettyprint-override"><code># ====Python=============================================================
# SticksNStones
# =======================================================================
import ...
FPS = 45
dt = 1000.0 / FPS
class SNSMainWindow(WindowBaseClass, Ui_Window):
def __init__(self, parent=None):
WindowBaseClass.__init__(self, parent)
Ui_Window.__init__(self)
self.setupUi(self)
self.paused = False
self.timer = None
self.init()
def init(self):
# Setup Display
self.display.setup()
# Setup timer
self.timer = QtCore.QTimer(self)
self.timer.timeout.connect(self.update_loop)
        self.timer.start(dt)
def update_loop(self):
if not self.paused:
self.display.update(dt)
else:
pass
# ==================================
# Start Application
# ==================================
_dialog = None
def start_sns():
global _dialog
# Start App and frame
app = QtWidgets.QApplication(sys.argv)
_dialog = SNSMainWindow()
_dialog.show()
# Exit if window is closed
sys.exit(app.exec_())
if __name__ == "__main__":
start_sns()
</code></pre>
<p>But as soon as I start the application, it closes after initialization. Debugging showed that the timer is active, but the update_loop is never called.</p>
<p>The PyQt4 Code from which I copied works just fine and I just can't get my head around why this does not work, since all examples I found online have the same code.</p>
<p>The question being: Why does the application close itself upon start?</p>
<h2>Update</h2>
<p>The problem is not the timer, but the usage of a custom .ui. If I run the code with</p>
<pre><code>class SNSMainWindow(QtWidgets.QFrame):
def __init__(self, parent=None):
QtWidgets.QFrame.__init__(self, parent)
...
</code></pre>
<p>a window opens and it stays open until I close it. But a barebone</p>
<pre><code>ui_path = os.path.dirname(os.path.abspath(__file__)) + "/ui/sns_main.ui"
Ui_Window, WindowBaseClass = uic.loadUiType(ui_path)
class SNSMainWindow(WindowBaseClass, Ui_Window):
def __init__(self, parent=None):
WindowBaseClass.__init__(self, parent)
Ui_Window.__init__(self)
self.setupUi(self)
# ==================================
if __name__ == "__main__":
# Start App and frame
app = QtWidgets.QApplication(sys.argv)
_dialog = SNSMainWindow()
_dialog.show()
# Exit if window is closed
sys.exit(app.exec_())
</code></pre>
<p>just disappears within milliseconds after showing. Then again, using the custom widget in PyQt4 stays open, too. I added the uic.load part, which operates just fine. Am I missing something when converting to PyQt5? </p>
<h2>Solution</h2>
<p>I found the solution of the problem in my custom display class. In case of a paintEvent, the display would try to get a (yet) undefined property. But instead of raising an exception that the property was not defined, the window just closed.</p>
<p>Defining the property while initializing the widget solved the problem.
This just keeps me wondering why no exception is raised in this case, since the widget clearly tries to access some undefined properties. A simple</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'xxx'
</code></pre>
<p>would have been enough.</p>
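<p>The failure mode can be sketched without Qt (all names here are made up for illustration):</p>

```python
class Display(object):
    def __init__(self):
        self.pixmap = None            # define the attribute up front (hypothetical name)

    def paint(self):
        # a paint handler can now test for "not ready yet"
        # instead of dying on an undefined attribute
        if self.pixmap is None:
            return 'skipped'
        return 'painted'

d = Display()
print(d.paint())  # skipped
d.pixmap = object()
print(d.paint())  # painted
```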
| 3 | 2016-07-25T07:18:22Z | 38,565,717 | <p>I'd try to change some lines, first try to change <code>app</code> definition to </p>
<pre><code>app = QtGui.QApplication(sys.argv)
</code></pre>
<p>Then remove <code>Ui_Window</code> init and set it to <code>self.ui = Ui_Window()</code></p>
<pre><code>class SNSMainWindow(WindowBaseClass):
def __init__(self, parent=None):
WindowBaseClass.__init__(self, parent)
self.ui = Ui_Window()
self.ui.setupUi(self)
self.paused = False
self.timer = None
self.init()
</code></pre>
| 0 | 2016-07-25T10:47:32Z | [
"python",
"qt",
"pyqt5",
"qt-designer",
"qapplication"
] |
Beautifulsoup to extract within tags and output as a JSON | 38,561,808 | <p>As mentioned in the previous question, I am using Beautiful soup with python to retrieve weather data from a website.</p>
<p>Here's what the website looks like:</p>
<pre><code><channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
<channel>
</code></pre>
<p>I managed to retrieve the information I need using this code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import urllib3
import json
weather = []
#getting the time
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?dataset=2hr_nowcast&keyref=<keyrefno>')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "date: " + element['date']
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "time: " + element['time']
for area in soup.find('weatherForecast').find_all('area'):
print area
#file writing
with open("c:/scripts/nea.json", 'w') as outfile:
json.dumps(weather, outfile)
#outfile.write(",")
</code></pre>
<p>This is the output I got (in CMD) :</p>
<pre><code>C:\scripts>python neaweather.py
2.30 pm to 4.30 pm
date: 25-07-2016
time: 02:30 PM
<area forecast="LR" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="LR" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="LR" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="LR" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="LR" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="LR" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>
</code></pre>
<p>I have a few questions that I'm not sure how to solve:</p>
<ol>
<li><p>Is there any way to retrieve the attributes in <em>area forecast="LR" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"</em> <strong>without</strong> its tags?</p>
<p>I tried adding ".text" to my code, but there would always be an error.</p></li>
<li><p>I would like the output to be in JSON format, as my data isn't in the table format shown in tutorials on how to create a JSON file with python :/</p></li>
</ol>
<p><strong>EDIT:</strong> I have managed to write the data to a JSON file, but how do I convert the unicode strings into normal strings, as the result contains u' prefixes?</p>
| 0 | 2016-07-25T07:20:33Z | 38,562,293 | <p>Try this in your code:</p>
<pre><code>with open("nea.json",'a+') as fs:
for area in soup.find('weatherForecast').find_all('area'):
fs.write(str(area.attrs))
</code></pre>
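<p>Regarding the asker's follow-up about the <code>u'</code> prefixes: those appear because <code>str(area.attrs)</code> writes Python's repr of the dict, not JSON. <code>json.dumps</code> produces plain JSON instead. A sketch, using a literal dict as a stand-in for one tag's <code>.attrs</code>:</p>

```python
import json

# stand-in for one <area> tag's .attrs dict
attrs = {u'forecast': u'LR', u'lat': u'1.37500000', u'name': u'Ang Mo Kio'}

as_json = json.dumps(attrs, sort_keys=True)
print(as_json)  # {"forecast": "LR", "lat": "1.37500000", "name": "Ang Mo Kio"}
```

<p>For the real file, collect the dicts first and dump them in one go, e.g. <code>json.dump([a.attrs for a in areas], fs)</code>.</p>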
| 0 | 2016-07-25T07:48:54Z | [
"python",
"json",
"beautifulsoup"
] |
python pandas read various dataframe from same excel sheet | 38,562,027 | <p>Currently with pandas, I can save various dataframes (of different sizes) to the same excel sheet, with startrow and startcol to specify the location.</p>
<pre><code>with pd.ExcelWriter(dump_excel) as writer:
dataframe1.to_excel(writer, sheet_name='sheet1', startrow=40, startcol=0)
dataframe2.to_excel(writer, sheet_name='sheet1', startrow=0, startcol=0)
dataframe3.to_excel(writer, sheet_name='sheet2', startrow=0, startcol=0)
</code></pre>
<p>I would like to know if I can read back dataframe1 and dataframe2, respectively.</p>
| 1 | 2016-07-25T07:33:38Z | 38,562,115 | <p>Have a look at the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow">documentation</a> of <code>read_excel</code>; you will be surprised by some of its arguments (<code>skiprows</code>, <code>skip_footer</code> and <code>parse_cols</code>).</p>
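<p>The idea in miniature (using <code>read_csv</code> on an in-memory buffer so it runs anywhere; <code>read_excel</code> accepts the same <code>skiprows</code>, and recent pandas versions also take <code>nrows</code> — parameter names can vary between versions):</p>

```python
import io
import pandas as pd

# two tables stacked in one "sheet", mirroring startrow=0 and startrow=3
buf = io.StringIO(
    "a,b\n1,2\n3,4\n"   # dataframe2 region (rows 0-2)
    "c,d\n5,6\n"        # dataframe1 region (rows 3-4)
)
df2 = pd.read_csv(buf, nrows=2)        # header plus two data rows
buf.seek(0)
df1 = pd.read_csv(buf, skiprows=3)     # skip past the first table
print(df2.columns.tolist(), df1.columns.tolist())  # ['a', 'b'] ['c', 'd']
```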
| 0 | 2016-07-25T07:38:59Z | [
"python",
"excel",
"pandas",
"dataframe"
] |
Listing users for certain DB with PyMongo | 38,562,042 | <h1>What I'm trying to achieve</h1>
<p>I'm trying to fetch users for a certain database.</p>
<h1>What I did so far</h1>
<p>I was able to find functions to list the databases or create users, but none for listing the users. I thought about invoking an arbitrary command such as <code>show users</code>, but I couldn't find any way to do it.</p>
<h1>Current code</h1>
<pre><code>#/usr/bin/python
from pymongo import MongoClient
client = MongoClient("localhost",27017)
db = client.this_mongo
</code></pre>
<h1>Trial and error</h1>
<p>I can see the DB names and print them but nothing further:</p>
<pre><code>db_names = client.database_names()
#users = db.command("show users")
for document in db_names:
print(document)
#cursor = db.add_user('TestUser','Test123',roles={'role':'read'})
</code></pre>
<p>If only there were a function that could fetch a users cursor so I can iterate over it, that would be great.</p>
<h1>EDIT</h1>
<h2>Working solution</h2>
<pre><code>#/usr/bin/python
from pymongo import MongoClient
client = MongoClient("localhost",27017)
db = client.this_mongo
# This is the line I added with the help of @salmanwahed
listing = db.command('usersInfo')
for document in listing['users']:
print document['user'] +" "+ document['roles'][0]['role']
</code></pre>
<p>Thank you all and @salmanwahed specifically!</p>
| 2 | 2016-07-25T07:34:19Z | 38,562,592 | <p>You can execute the <a href="https://docs.mongodb.com/manual/reference/command/usersInfo/#dbcmd.usersInfo" rel="nofollow"><code>usersInfo</code></a> command to fetch the users data. Like:</p>
<pre><code>db.command('usersInfo')
</code></pre>
<p>It will return you a result like this: (I had created the <code>testingdb</code> for testing) </p>
<pre><code>{u'ok': 1.0,
u'users': [{u'_id': u'testingdb.TestUser',
u'db': u'testingdb',
u'roles': [{u'db': u'testingdb', u'role': u'read'}],
u'user': u'TestUser'}]}
</code></pre>
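<p>Picking the fields out of that structure is then plain dict/list access (the literal below just mirrors the sample result):</p>

```python
info = {u'ok': 1.0,
        u'users': [{u'_id': u'testingdb.TestUser',
                    u'db': u'testingdb',
                    u'roles': [{u'db': u'testingdb', u'role': u'read'}],
                    u'user': u'TestUser'}]}

lines = []
for user in info['users']:
    roles = ', '.join(r['role'] for r in user['roles'])
    lines.append('%s %s' % (user['user'], roles))
print(lines)  # ['TestUser read']
```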
| 3 | 2016-07-25T08:09:00Z | [
"python",
"database",
"mongodb",
"pymongo"
] |
Simulating a logarithmic spiral galaxy in python | 38,562,144 | <p>I am simulating a logarithmic spiral galaxy using python. Using the parametric equations,</p>
<p><code>x= a*exp(b*theta)*cos(theta)</code>
and
<code>y= a*exp(b*theta)*sin(theta)</code></p>
<p>I used numpy.random for getting the random distribution of stars. The sample code is given below. </p>
<pre><code>import random
from math import *
from pylab import *
import numpy as np
n=100000
a= 1
b=0.6
th =np.random.randn(n)
x= a*exp(b*th)*cos(th)
y=a*exp(b*th)*sin(th)
x1 = a*exp(b*(th))*cos(th+ pi)
y1=a*exp(b*(th))*sin(th + pi)
plot(x,y,"*")
plot(x1, y1,"*")
show()
</code></pre>
<p>The resulting image is shown below
<a href="http://i.stack.imgur.com/ZXQKU.png" rel="nofollow">spiral galaxy with two arms</a></p>
<p>What I need:
1) stars should be radially distributed in the spiral galaxy. I got the distribution only along the arms.
2) Both arms should be blue. Here I have one arm with blue color and other with green.</p>
<p>After simulating this, I need to rotate the galaxy. Any help regarding this would be appreciated.</p>
<p>**edit: I got both the arms in blue color using <code>plot(x1, y1,"b*")</code> </p>
| 1 | 2016-07-25T07:40:45Z | 38,563,658 | <p>To rotate the image, I would calculate the new positions of the stars using a rotation matrix, which you have to do for each star like</p>
<pre><code>R = [ [ np.cos(phi), -np.sin(phi) ], [ np.sin(phi), np.cos(phi) ] ]
[x_new, y_new] = np.dot( [x_old, y_old], R )
</code></pre>
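<p>A standard-library sketch of the same idea, rotating a batch of points at once (with numpy arrays you would build <code>R</code> once and apply the dot product to the whole coordinate array):</p>

```python
import math

def rotate(points, phi):
    """Rotate (x, y) pairs by phi radians counter-clockwise about the origin."""
    c, s = math.cos(phi), math.sin(phi)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

stars = [(1.0, 0.0), (0.0, 2.0)]
rotated = rotate(stars, math.pi / 2)
print(rotated)  # approximately [(0, 1), (-2, 0)]
```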
<p>What exactly do you mean with "radially distributed"? Could you draw an example image?</p>
| 1 | 2016-07-25T09:06:28Z | [
"python",
"numpy",
"random",
"galaxy",
"spiral"
] |
Simulating a logarithmic spiral galaxy in python | 38,562,144 | <p>I am simulating a logarithmic spiral galaxy using python. Using the parametric equations,</p>
<p><code>x= a*exp(b*theta)*cos(theta)</code>
and
<code>y= a*exp(b*theta)*sin(theta)</code></p>
<p>I used numpy.random for getting the random distribution of stars. The sample code is given below. </p>
<pre><code>import random
from math import *
from pylab import *
import numpy as np
n=100000
a= 1
b=0.6
th =np.random.randn(n)
x= a*exp(b*th)*cos(th)
y=a*exp(b*th)*sin(th)
x1 = a*exp(b*(th))*cos(th+ pi)
y1=a*exp(b*(th))*sin(th + pi)
plot(x,y,"*")
plot(x1, y1,"*")
show()
</code></pre>
<p>The resulting image is shown below
<a href="http://i.stack.imgur.com/ZXQKU.png" rel="nofollow">spiral galaxy with two arms</a></p>
<p>What I need:
1) stars should be radially distributed in the spiral galaxy. I got the distribution only along the arms.
2) Both arms should be blue. Here I have one arm with blue color and other with green.</p>
<p>After simulating this, I need to rotate the galaxy. Any help regarding this would be appreciated.</p>
<p>**edit: I got both the arms in blue color using <code>plot(x1, y1,"b*")</code> </p>
| 1 | 2016-07-25T07:40:45Z | 38,570,946 | <p>If an approximation is good enough, try adding some noise the points before plotting them. For starters I would start with a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html" rel="nofollow">normal (Gaussian) distribution</a>. For example, this tweaked version:</p>
<pre><code>import random
from math import *
from pylab import *
import numpy as np
n=1000
a=0.5
b=0.6
th=np.random.randn(n)
x=a*exp(b*th)*cos(th)
y=a*exp(b*th)*sin(th)
x1=a*exp(b*(th))*cos(th+pi)
y1=a*exp(b*(th))*sin(th+pi)
sx=np.random.normal(0, a*0.25, n)
sy=np.random.normal(0, a*0.25, n)
plot(x+sy,y+sx,"*")
plot(x1+sx, y1+sy,"*")
show()
</code></pre>
<p>Gives this output:<a href="http://i.stack.imgur.com/o2osO.png" rel="nofollow"><img src="http://i.stack.imgur.com/o2osO.png" alt="enter image description here"></a>
You might need to play around with the variables a bit to adjust the output to your needs. Also, as mentioned in the comments, this isn't true radial noise.</p>
| 1 | 2016-07-25T14:49:50Z | [
"python",
"numpy",
"random",
"galaxy",
"spiral"
] |
Pandas rank based on several columns | 38,562,205 | <p>I have the following dataframe :</p>
<pre><code>event_id occurred_at user_id
19148 2015-10-01 1
19693 2015-10-05 2
20589 2015-10-12 1
20996 2015-10-15 1
20998 2015-10-15 1
23301 2015-10-23 2
23630 2015-10-26 1
25172 2015-11-03 1
31699 2015-12-11 1
32186 2015-12-14 2
43426 2016-01-13 1
68300 2016-04-04 2
71926 2016-04-19 1
</code></pre>
<p>I would like to rank the events by chronological order (1 to n), for each user.</p>
<p>I can achieve this by doing :</p>
<pre><code>df.groupby('user_id')['occurred_at'].rank(method='dense')
</code></pre>
<p>However, for those 2 lines, that occurred on the same date (for the same user), I end up with the same rank :</p>
<pre><code> 20996 2015-10-15 1
20998 2015-10-15 1
</code></pre>
<p>In case the event date is the same, I would like to compare the <code>event_id</code> and arbitrarily rank lower the event with the lowest <code>event_id</code>. How can I achieve this easily ?</p>
<p>I can post process the ranks to make sure every rank is only used once, but this seems pretty bulky...</p>
<p><strong>Edit</strong> : how to reproduce :</p>
<p>Copy paste the data in <code>data.csv</code> file.
Then :</p>
<pre><code>import pandas as pd
df = pd.read_csv('data.csv', delim_whitespace=True)
df['rank'] = df.groupby('user_id')['occurred_at'].rank(method='dense')
>>> df[df['user_id'] == 1]
event_id occurred_at user_id rank
0 19148 2015-10-01 1 1.0
2 20589 2015-10-12 1 2.0
3 20996 2015-10-15 1 3.0 <--
4 20998 2015-10-15 1 3.0 <--
6 23630 2015-10-26 1 4.0
7 25172 2015-11-03 1 5.0
8 31699 2015-12-11 1 6.0
10 43426 2016-01-13 1 7.0
12 71926 2016-04-19 1 8.0
</code></pre>
<p>Am using python3 and pandas 0.18.1</p>
| 2 | 2016-07-25T07:44:09Z | 38,562,820 | <p><code>sort_values('event_id')</code> prior to grouping then pass <code>method='first'</code> to <code>rank</code></p>
<p>Also note that if <code>occurred_at</code> isn't already <code>datetime</code>, make it <code>datetime</code>.</p>
<h3> </h3>
<pre><code># unnecessary if already datetime, but doesn't hurt to do it anyway
df.occurred_at = pd.to_datetime(df.occurred_at)
df['rank'] = df.sort_values('event_id') \
.groupby('user_id').occurred_at \
.rank(method='first')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/Khroq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Khroq.png" alt="enter image description here"></a></p>
<hr>
<h3>Reference for complete verifiable code</h3>
<pre><code>from io import StringIO  # Python 3 (as in the question); on Python 2: from StringIO import StringIO
import pandas as pd
text = """event_id occurred_at user_id
19148 2015-10-01 1
19693 2015-10-05 2
20589 2015-10-12 1
20996 2015-10-15 1
20998 2015-10-15 1
23301 2015-10-23 2
23630 2015-10-26 1
25172 2015-11-03 1
31699 2015-12-11 1
32186 2015-12-14 2
43426 2016-01-13 1
68300 2016-04-04 2
71926 2016-04-19 1"""
df = pd.read_csv(StringIO(text), delim_whitespace=True)
df.occurred_at = pd.to_datetime(df.occurred_at)
df['rank'] = df.sort_values('event_id').groupby('user_id').occurred_at.rank(method='first')
df
</code></pre>
| 3 | 2016-07-25T08:22:30Z | [
"python",
"python-3.x",
"pandas"
] |
Ubuntu Package Installation via Python Code | 38,562,285 | <p>I have written the following code to install some packages. I don't want the script to show installation progress messages in the output. When installation of a package finishes, I want just a prompt printed in the output. How can I rewrite the following code to accomplish this task?</p>
<pre><code> def package_installation(self):
self.apt = "apt install -y "
self.packages = "python-pip python-sqlalchemy mongodb python-bson python-dpkt python-jinja2 python-magic python-gridfs python-libvirt python-bottle python-pefile python-chardet git build-essential autoconf automake libtool dh-autoreconf libcurl4-gnutls-dev libmagic-dev python-dev tcpdump libcap2-bin virtualbox dkms python-pyrex"
self.color.print_green("[+] Phase 2 : Installation of the ubuntu packages is starting:")
for self.items in self.packages.split():
self.command = str(self.apt) + str(self.items)
subprocess.run(self.command.split())
self.color.print_blue("\t[+] Package [{}] Installed".format(str(self.items)))
self.color.print_green("[+] Phase 2 Accomplished. ")
</code></pre>
| -1 | 2016-07-25T07:48:41Z | 38,562,790 | <p>Fixed : </p>
<pre><code> def package_installation(self):
self.apt = "apt install -y "
self.packages = "python-pip python-sqlalchemy mongodb python-bson python-dpkt python-jinja2 python-magic python-gridfs python-libvirt python-bottle python-pefile python-chardet git build-essential autoconf automake libtool dh-autoreconf libcurl4-gnutls-dev libmagic-dev python-dev tcpdump libcap2-bin virtualbox dkms python-pyrex"
self.color.print_green("[+] Phase 2 : Installation of the ubuntu packages is starting:")
for self.items in self.packages.split():
self.command = str(self.apt) + str(self.items)
            # subprocess.run() returns a CompletedProcess, which is always truthy, so test returncode
            if subprocess.run(self.command.split(), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0:
self.color.print_blue("\t[+] Package [{}] Installed".format(str(self.items)))
else:
                self.color.print_red("\t[+] Package [{}] Not Installed".format(str(self.items)))
self.color.print_red("[+] Phase 2 Accomplished.\n")
</code></pre>
| 0 | 2016-07-25T08:21:14Z | [
"python",
"python-3.x",
"install",
"packageinstaller"
] |
Install python packages on a remote server without access to root | 38,562,310 | <p>I want to install some python packages on a remote server where I can actually log in and work on some existing python packages. Sometimes I need a new python package like easydict, and then I have to install it. However, I don't have access to root (I mean I cannot sudo). How can I solve this problem? Is it impossible to debug on someone else's computer where you cannot even "sudo"?</p>
| 0 | 2016-07-25T07:50:18Z | 38,562,772 | <p>There is no need for sudo if you want to install packages locally. Generally, you should always use a virtualenv; once that is activated, all packages install within that virtualenv only, with no need for admin privileges.</p>
| 1 | 2016-07-25T08:20:02Z | [
"python"
] |
How to print data with the same situation in separate divs through a for loop in a flask template | 38,562,312 | <p>There is a table like this in my database:</p>
<pre><code>-------------------------------------
. id . part . text .
-------------------------------------
. 1 . 1 . different text .
. 2 . 1 . different text .
. 3 . 1 . different text .
. 4 . 1 . different text .
. 5 . 2 . different text .
. 6 . 2 . different text .
. 7 . 3 . different text .
. 8 . 3 . different text .
. 9 . 3 . different text .
. 10 . 3 . different text .
. 11 . 3 . different text .
. 12 . 4 . different text .
. 13 . 4 . different text .
. 14 . 5 . different text .
. 15 . 5 . different text .
-------------------------------------
</code></pre>
<p>In app.py, <code>result2</code> will contain something like this: (a list of tuples)</p>
<blockquote>
<p>((1, 1, 'text'), (2, 1, 'text'), (3, 1, 'text'), (4, 1, 'text'), (5, 2, 'text'), (6, 2, 'text'), (7, 3, 'text'), (8, 3, 'text'), etc..)</p>
</blockquote>
<pre><code>@app.route('/shop/<data>')
def shop(data):
db =MySQLdb.connect("localhost","myusername","mypassword","mydbname" )
cursor = db.cursor()
cursor2 = db.cursor()
query_string = "SELECT * from p_div_chest1"
query_string2 = "SELECT * from p_div_content1"
cursor.execute(query_string)
cursor2.execute(query_string2)
result = cursor.fetchall()
result2 = cursor2.fetchall()
db.close()
return render_template('shop.html', result=result, result2=result2)
</code></pre>
<p>Now in shop.html I want to have a for loop to print all text with the same <code>part</code> number in separate <code><div></div></code> tags.</p>
<p>For example, all text whose <code>part</code> number is 1 should be printed in a separate <code>div</code>.</p>
<p>And all text with <code>part</code> number 2 should go into the next separate <code>div</code> tag.</p>
<p>For example:</p>
<p>It's as if I select * from the table <code>where part = 1</code> and print it in the first <code>div</code> tag. The output would look like:</p>
<pre><code>--------------------
- different text -
- different text -
- different text -
- different text -
--------------------
</code></pre>
<p>and next I select * from the table <code>where part = 2</code> and print it in the next <code>div</code> tag. The output would look like:</p>
<pre><code>--------------------
- different text -
- different text -
--------------------
</code></pre>
<p>shop.html is like this for now:</p>
<pre><code>{% for each in result2 %}
{{ each }}<br>
{% endfor %}
</code></pre>
<p>I want to put something like <code>{%if each.1 == 1 %}</code> in that for loop to check <code>each.1</code> every time; while <code>each.1 == 1</code>, the for loop prints the texts in the first div, and once <code>each.1 == 2</code> it closes the last div tag and opens the next div tag to print all texts where <code>each.1 == 2</code>, and so on.</p>
<p>Hope my explanation helps more.</p>
| 0 | 2016-07-25T07:50:20Z | 38,564,134 | <p>i'll suggest you keep your template as clean as possible, and put all the logic in your view, first use itertools.groupby to group the elements into seperate lists to make it easier for printing, the elements in result2 are already sorted so you don't need to sort them again, if they're not sorted groupby won't work</p>
<pre><code>from itertools import groupby
x = [(1, 1, 'text'), (2, 1, 'text'), (3, 1, 'text'), (4, 1, 'text'), (5, 2, 'text'), (6, 2, 'text'), (7, 3, 'text'), (8, 3, 'text')]
result2 = [list(value) for key, value in groupby(x, lambda y: y[1])]
</code></pre>
<p>with raw python code, we have this</p>
<pre><code>for elems in result:
print('<div>')
for elem in elems:
print(elem)
print('</div>')
</code></pre>
<p>Output</p>
<pre><code><div>
(1, 1, 'text')
(2, 1, 'text')
(3, 1, 'text')
(4, 1, 'text')
</div>
<div>
(5, 2, 'text')
(6, 2, 'text')
</div>
<div>
(7, 3, 'text')
(8, 3, 'text')
</div>
</code></pre>
<p>so translating that to jinja we have this</p>
<pre><code>{% for elems in result2 %}
<div>
{% for elem in elems %}
{{ elem }}
{% endfor %}
</div>
{%endfor%}
</code></pre>
| 0 | 2016-07-25T09:29:17Z | [
"python",
"html",
"templates",
"for-loop",
"flask"
] |
MySQL Workbench Migration Wizard python error | 38,562,464 | <p>There are two bugs in bugs.mysql.com related to this same error (<a href="https://bugs.mysql.com/bug.php?id=66861" rel="nofollow">1</a> and <a href="https://bugs.mysql.com/bug.php?id=67831" rel="nofollow">2</a>). They either provide no solution (#2) or a replacement of the .py that does not solve the problem (#1).</p>
<p>The error:</p>
<blockquote>
<p>File "C:\Program Files\MySQL\MySQL Workbench 6.3 CE\modules\db_mysql_re_grt.py", line 288, in wrap_routine_sql</p>
<p>return "DELIMITER $$\n"+sql</p>
<p>TypeError: cannot concatenate 'str' and 'NoneType' objects</p>
</blockquote>
<p>So: the line <code>"DELIMITER $$\n"+sql</code> produces the error <code>cannot concatenate 'str' and 'NoneType' objects</code>.</p>
<p>The error is in the line 288 of the file <code>db_mysql_re_grt.py</code>. <a href="https://github.com/mysql/mysql-workbench/blob/master/modules/db.mysql/db_mysql_re_grt.py#L288" rel="nofollow">This is the original .py file</a> from the mysql-workbench's github.</p>
<p>The call to <code>wrap_routine_sql</code> comes from <a href="https://github.com/mysql/mysql-workbench/blob/master/modules/db.mysql/db_mysql_re_grt.py#L347" rel="nofollow">this other line</a>:</p>
<pre><code>sql = result.stringByName("Create Procedure")
grt.begin_progress_step(0.1 + 0.9 * (i / total), 0.1 + 0.9 * ((i+0.5) / total))
grt.modules.MySQLParserServices.parseSQLIntoCatalogSql(context, catalog, wrap_sql(wrap_routine_sql(sql), schema_name), options)
grt.end_progress_step()
i += 0.5
</code></pre>
| 0 | 2016-07-25T07:59:10Z | 38,563,928 | <p>(not exactly a fix, but an alternative way to circumvent the error in my own question)</p>
<p>An alternative to the migration is: dump source to files -> import dump to destination db.</p>
<p>From the <a href="https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html" rel="nofollow">original info</a>, the steps are:</p>
<ul>
<li>Open MySQL Workbench</li>
<li>Open source db</li>
<li>Server -> Data Export</li>
<li>Open destination db</li>
<li>Create schema (the import does not create the schema for you)</li>
<li>Server -> Data Import</li>
</ul>
<p>In case of error <code>"Error querying security information" on Data Export</code>, the solution proposed <a href="http://stackoverflow.com/questions/34521822/mysql-workbench-error-1142-error-querying-security-information-on-data-export">here</a> was to download the version 6.3.7 of the Workbench (and it worked).</p>
| 0 | 2016-07-25T09:18:59Z | [
"python",
"mysql",
"mysql-workbench"
] |
HTTP client asynch calls with delay | 38,562,471 | <p>I'm using the httpclient.HTTPRequest library to send async requests, but need to add a delay between requests.
This means, let's say I configure RPS (requests per second) = 5. Then I send a request every 0.2 seconds, but asynchronously. How can I send the requests asynchronously without waiting for each request's response?</p>
<p>This is my code:</p>
<pre><code>def process_campaign(self, campaign_instance):
ioloop.IOLoop.current().run_sync(lambda: start_campaign(campaign_instance))
@gen.coroutine
def start_campaign(campaign_instance):
...
while True:
try:
log.info("start_campaign() Requests in Queue: {}".format(len(web_requests)))
web_request = web_requests.pop()
time.sleep(delay)
headers = {'Content-Type': 'application/json'}
request = httpclient.HTTPRequest(auth_username=settings.api_account,
auth_password=settings.api_password,
url=settings.api_url,
body=json.dumps(web_request),
headers=headers,
request_timeout=15,
method="POST")
response = yield http_client.fetch(request)
except httpclient.HTTPError, e:
log.exception("start_campaign() " + str(e))
except IndexError:
log.info('start_campaign() Campaign web requests completed. Errors {}'.format(api_errors))
break
</code></pre>
<p>But it seems to wait for the HTTP response before proceeding.</p>
| 1 | 2016-07-25T07:59:37Z | 38,569,629 | <p>You can try:</p>
<pre><code>class WebRequest(RequestHandler):
def __init__(self, web_request):
self.delay = 0
self.web_request = web_request
@asynchronous
def post(self):
IOLoop.instance().add_timeout(self.delay, self._process)
@gen.coroutine
def _process(self):
try:
http_client = httpclient.AsyncHTTPClient()
log.info("start_campaign() Web request: {}".format(self.web_request))
headers = {'Content-Type': 'application/json'}
request = httpclient.HTTPRequest(auth_username=settings.api_account,
auth_password=settings.api_password,
url=settings.api_url,
body=json.dumps(self.web_request),
headers=headers,
request_timeout=15,
method="POST")
response = yield http_client.fetch(request)
except Exception, exception:
log.exception(exception)
</code></pre>
<p>Re-use your while Loop:</p>
<pre><code>while True:
try:
web_request = web_requests.pop()
time.sleep(delay)
client = WebRequest(web_request)
client.post()
except IndexError:
break
</code></pre>
| 0 | 2016-07-25T13:53:03Z | [
"python",
"multithreading",
"http",
"asynchronous",
"client"
] |
python list of dictionaries group by and filter issue | 38,562,484 | <p>I have the following list of dictionaries. The list is already sorted. Now I have to group by worker and get his "prskill", but if there is no other option he gets None.</p>
<p>For each worker there are max two dictionaries, one with "prskill" None and one with the actual value. If there is only one dictionary, his "prskill" is None.</p>
<p>my list</p>
<pre><code>sorted = [{worker_nick: 1B prskill: None },
{worker_nick: B1 prskill: None },
{worker_nick: B2 prskill: None },
{worker_nick: Božič prskill: None },
{worker_nick: Božič prskill: Bolničar },
{worker_nick: Cimermančič prskill: None },
{worker_nick: Cimermančič prskill: Bolničar },
{worker_nick: CindričJ prskill: None },
{worker_nick: CindričJ prskill: razno },
{worker_nick: CipuričA prskill: None },
{worker_nick: CipuričA prskill: Strežnik },
{worker_nick: Dančulovič prskill: None },
{worker_nick: Dančulovič prskill: Strežnik },
{worker_nick: Dragovan prskill: Bolničar },
{worker_nick: Dragovan prskill: None },
{worker_nick: Fofana prskill: SestraOdd },
{worker_nick: Fofana prskill: None },
{worker_nick: GovednikM prskill: None },
{worker_nick: GovednikM prskill: Strežnik },
{worker_nick: Hoenigman prskill: None },
{worker_nick: Hoenigman prskill: SestraOdd },
{worker_nick: Husič prskill: None },
{worker_nick: Huskič prskill: Bolničar },
{worker_nick: Huskič prskill: None },
{worker_nick: JD-Šuligoj prskill: JD },
{worker_nick: JD-Šuligoj prskill: None },
{worker_nick: Jakša prskill: Gospodinja },
{worker_nick: Jakša prskill: None },
{worker_nick: Kastelic prskill: SestraOdd },
{worker_nick: Kastelic prskill: None },
{worker_nick: Lukinič prskill: SestraOdd },
{worker_nick: Lukinič prskill: None },
{worker_nick: MaceleJ prskill: None },
{worker_nick: MaceleJ prskill: Bolničar },
{worker_nick: MaceleM prskill: SestraAmb },
{worker_nick: MaceleM prskill: None },
{worker_nick: Miketič prskill: Bolničar },
{worker_nick: Miketič prskill: None },
{worker_nick: MikešičG prskill: SestraOdd },
{worker_nick: MikešičG prskill: None },
{worker_nick: Muc prskill: None },
{worker_nick: Muc prskill: Bolničar },
{worker_nick: Petraš prskill: None },
{worker_nick: Petraš prskill: Terapevt },
{worker_nick: Pezdirc prskill: SestraOdd },
{worker_nick: Pezdirc prskill: None },
{worker_nick: Prevalšek prskill: Bolničar },
{worker_nick: Prevalšek prskill: None },
{worker_nick: Ramuščak prskill: SestraAmb },
{worker_nick: Ramuščak prskill: None },
{worker_nick: S-T1 prskill: None },
{worker_nick: S-T2 prskill: None },
{worker_nick: S1 prskill: None },
{worker_nick: Slanc prskill: Terapevt },
{worker_nick: Slanc prskill: None },
{worker_nick: Sneljer prskill: Terapevt },
{worker_nick: Sneljer prskill: None },
{worker_nick: Stepan prskill: SestraOdd },
{worker_nick: Stepan prskill: None },
{worker_nick: Sudac prskill: None },
{worker_nick: Sudac prskill: Bolničar },
{worker_nick: Tkalac prskill: Bolničar },
{worker_nick: Tkalac prskill: None },
{worker_nick: Vidovič prskill: SestraOdd },
{worker_nick: Vidovič prskill: None },
{worker_nick: VukšiničM prskill: None },
{worker_nick: VukšiničM prskill: Bolničar },
{worker_nick: Vučič prskill: Bolničar },
{worker_nick: Vučič prskill: None },
{worker_nick: Đurđi prskill: None },
{worker_nick: Đurđi prskill: Bolničar },
{worker_nick: Šterk prskill: None },
{worker_nick: Šterk prskill: Namestnik direktorja }]
</code></pre>
<p>Any suggestions?</p>
<p>Thank you</p>
| -2 | 2016-07-25T08:00:51Z | 38,563,064 | <p>You can try the code below:</p>
<pre><code>filteredResult = {}
for sortedDict in sorted:
if sortedDict['worker_nick'] in filteredResult:
if None is not sortedDict['prskill']:
filteredResult[sortedDict['worker_nick']] = sortedDict
else:
filteredResult[sortedDict['worker_nick']] = sortedDict
filteredResult.values()
</code></pre>
<p>The output of this is unordered; if you want it ordered, use an OrderedDict instead of a plain dict.</p>
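<p>A minimal sketch of that idea with an OrderedDict (field names follow the question; the sample records are made up): keep one record per worker, preferring a non-None prskill, while preserving the order in which workers first appear.</p>

```python
from collections import OrderedDict

records = [
    {'worker_nick': 'Bozic', 'prskill': None},
    {'worker_nick': 'Bozic', 'prskill': 'Bolnicar'},
    {'worker_nick': 'B1', 'prskill': None},
]

# One entry per worker; a non-None prskill overwrites an earlier None.
best = OrderedDict()
for rec in records:
    nick = rec['worker_nick']
    if nick not in best or rec['prskill'] is not None:
        best[nick] = rec

result = list(best.values())
```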
| 0 | 2016-07-25T08:36:49Z | [
"python",
"dictionary",
"group-by"
] |
Access class variables in functions - Django | 38,562,571 | <p>I have the following code:</p>
<pre><code>class MyView(View):
var2 = Choices.objects.get(id=1)
my_strings = ['0','1','2','3']
@login_required
def myfunction(self,request):
return render(request,
'app/submit.html',{'my_strings':my_strings, 'var2':var2})
</code></pre>
<p>I want to access "var2" and "my_string" variables and display them in the template submit.html. If I use only the function without putting it in a class, everything works fine. But inside the class it shows errors. </p>
<p>Can anybody tell me how to access "var2" and "my_string" class variables in "myfunction" ?</p>
| 0 | 2016-07-25T08:07:19Z | 38,562,659 | <p>You have to use <code>self.</code> in front of class variables.</p>
<p>Your method names in class-based views should correspond to the HTTP method they handle (get, post, etc.).</p>
<pre><code>@login_required
def get(self,request):
return render(request,
'app/submit.html',{'my_strings':self.my_strings, 'var2':self.var2})
</code></pre>
<p>Please also read:
<a href="https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/</a></p>
| 1 | 2016-07-25T08:13:22Z | [
"python",
"django"
] |
Access class variables in functions - Django | 38,562,571 | <p>I have the following code:</p>
<pre><code>class MyView(View):
var2 = Choices.objects.get(id=1)
my_strings = ['0','1','2','3']
@login_required
def myfunction(self,request):
return render(request,
'app/submit.html',{'my_strings':my_strings, 'var2':var2})
</code></pre>
<p>I want to access "var2" and "my_string" variables and display them in the template submit.html. If I use only the function without putting it in a class, everything works fine. But inside the class it shows errors. </p>
<p>Can anybody tell me how to access "var2" and "my_string" class variables in "myfunction" ?</p>
| 0 | 2016-07-25T08:07:19Z | 38,563,216 | <p>You don't have to write a custom function to dispatch the request; Django internally provides get and post methods for that. Also, the preferred way to require login on a class-based view is <code>method_decorator</code>:</p>
<pre><code>from django.utils.decorators import method_decorator
@method_decorator(login_required, name='dispatch')
class MyView(View):
string = "your string"
def dispatch(self, *args, **kwargs):
return super(MyView, self).dispatch(*args, **kwargs)
def get(self, request):
return render(request, 'template', {'string': self.string})
</code></pre>
| 0 | 2016-07-25T08:45:24Z | [
"python",
"django"
] |
Run python file on shared drive without installing python or python frameworks | 38,562,591 | <p>I have placed a Python file on a shared drive that I want any user to be able to start. I don't want the user to install Python or any of the needed libraries (e.g. pandas).</p>
<p>I want it to be easy for the user to start the program. How should I do that?</p>
<p>I have tried to create a bat-file (Z: is the shared drive location. All users will have it as their Z-drive):</p>
<pre><code>@echo off
Z:\python27\python.exe Z:\main.py %*
pause
</code></pre>
<p>I tried to install python at the shared drive (as specified) and placed all needed imports at the shared drive. </p>
<p>From my computer this is runnable. But from a users computer I get the error message: </p>
<pre><code>ImportError: C extension: DLL load failed: The specified module could not be fou
nd. not built. If you want to import pandas from the source directory, you may n
eed to run 'python setup.py build_ext --inplace' to build the C extensions first
</code></pre>
<p>I have Python 2.7, the Teradata module and Pandas installed. What can I do to make this runnable?</p>
| 0 | 2016-07-25T08:08:57Z | 38,562,683 | <p><code>PyPy</code> does that out of the box.</p>
<p><code>CPython</code> has some sort of redistributable bundle which should suit your use-case.</p>
<p>Finally, there are these projects:</p>
<ul>
<li><a href="http://www.voidspace.org.uk/python/movpy/" rel="nofollow">http://www.voidspace.org.uk/python/movpy/</a></li>
<li><a href="http://portablepython.com/" rel="nofollow">http://portablepython.com/</a></li>
</ul>
| 0 | 2016-07-25T08:15:00Z | [
"python"
] |
Ridge Regression: Scikit-learn vs. direct calculation does not match for alpha > 0 | 38,562,701 | <p>In Ridge Regression, we are solving <code>Ax=b</code> with <code>L2</code> Regularization. The direct calculation is given by:</p>
<blockquote>
<p>x = (A<sup>T</sup>A + alpha * I)<sup>-1</sup>A<sup>T</sup>b</p>
</blockquote>
<p>I have looked at the scikit-learn code and they do implement the same calculation. But, I can't seem to get the same results for <code>alpha > 0</code></p>
<p>The minimal code to reproduce this.</p>
<pre><code>import numpy as np
A = np.asmatrix(np.c_[np.ones((10,1)),np.random.rand(10,3)])
b = np.asmatrix(np.random.rand(10,1))
I = np.identity(A.shape[1])
alpha = 1
x = np.linalg.inv(A.T*A + alpha * I)*A.T*b
print(x.T)
>>> [[ 0.37371021 0.19558433 0.06065241 0.17030177]]
from sklearn.linear_model import Ridge
model = Ridge(alpha = alpha).fit(A[:,1:],b)
print(np.c_[model.intercept_, model.coef_])
>>> [[ 0.61241566 0.02727579 -0.06363385 0.05303027]]
</code></pre>
<p>Any suggestions on what I can do to resolve this discrepancy?</p>
| 2 | 2016-07-25T08:15:46Z | 38,584,218 | <p>This modification seems to yield the same result for the direct version and the numpy version:</p>
<pre><code>import numpy as np
A = np.asmatrix(np.random.rand(10,3))
b = np.asmatrix(np.random.rand(10,1))
I = np.identity(A.shape[1])
alpha = 1
x = np.linalg.inv(A.T*A + alpha * I)*A.T*b
print (x.T)
from sklearn.linear_model import Ridge
model = Ridge(alpha = alpha, tol=0.1, fit_intercept=False).fit(A ,b)
print model.coef_
print model.intercept_
</code></pre>
<p>It seems the main reason for the difference is the class <code>Ridge</code> has the parameter <code>fit_intercept=True</code> (by inheritance from class <code>_BaseRidge</code>) (<a href="https://github.com/scikit-learn/scikit-learn/blob/51a765a/sklearn/linear_model/ridge.py#L601" rel="nofollow">source</a>)</p>
<p>This is applying a data centering procedure before passing the matrices to the <code>_solve_cholesky</code> function. </p>
<p>Here's the line in ridge.py that does it</p>
<pre><code> X, y, X_mean, y_mean, X_std = self._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X,
sample_weight=sample_weight)
</code></pre>
<p>Also, it seems you were trying to implicitly account for the intercept by adding the column of 1's. As you see, this is not necessary if you specify <code>fit_intercept=False</code></p>
<p>Appendix: Does the Ridge class actually implement the direct formula?</p>
<p>It depends on the choice of the <code>solver</code> parameter. </p>
<p>Effectively, if you do not specify the <code>solver</code> parameter in <code>Ridge</code>, it takes by default <code>solver='auto'</code> (which internally resorts to <code>solver='cholesky'</code>). This should be equivalent to the direct computation. </p>
<p>Rigorously, <code>_solve_cholesky</code> uses <code>numpy.linalg.solve</code> instead of <code>numpy.inv</code>. But it can be easily checked that</p>
<pre><code>np.linalg.solve(A.T*A + alpha * I, A.T*b)
</code></pre>
<p>yields the same as </p>
<pre><code>np.linalg.inv(A.T*A + alpha * I)*A.T*b
</code></pre>
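<p>This equivalence is easy to verify numerically (a quick standalone check with arbitrary random data, not part of scikit-learn):</p>

```python
import numpy as np

# For ridge regression, M = A^T A + alpha*I is symmetric positive
# definite for alpha > 0, so both solve() and the explicit inverse
# are well defined and should agree.
rng = np.random.RandomState(0)
A = rng.rand(10, 3)
b = rng.rand(10, 1)
alpha = 1.0

M = A.T.dot(A) + alpha * np.eye(3)
x_solve = np.linalg.solve(M, A.T.dot(b))
x_inv = np.linalg.inv(M).dot(A.T.dot(b))
agree = np.allclose(x_solve, x_inv)
```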
| 0 | 2016-07-26T08:00:51Z | [
"python",
"scikit-learn",
"linear-regression"
] |
adding extra information to filename - python | 38,562,839 | <p>I used the following line to rename my file by adding a timestamp and replacing extra spaces with a dash (-).
Now I would like to add extra information, like a label, alongside the timestamp:</p>
<pre><code>filename = ("%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()))).replace(" ", "-")
</code></pre>
<p>the current output looks like </p>
<pre><code>testfile_2016-07-25_12:17:14.mp4
</code></pre>
<p>I'm looking to have the file output as:</p>
<pre><code>testfile_2016-07-25_12:17:14-MediaFile.mp4
</code></pre>
<p>I tried the following:</p>
<pre><code>filename = ("%s_%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S","Mediafile",time.localtime()))).replace(" ", "-")
</code></pre>
<p>What did I miss here?</p>
| 0 | 2016-07-25T08:23:43Z | 38,562,918 | <p>You're using the function strftime incorrectly. Strftime only takes 2 arguments and you're passing it 3. </p>
<p>You would need to generate the string from the time and apply some string operations to append the extra info. </p>
<p>If you want to add MediaFile to the end of the filename simply do something like this.</p>
<pre><code>filename = ("%s_%s-MediaFile.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()))).replace(" ", "-")
</code></pre>
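<p>To see why the original line failed (a small standalone demonstration, not tied to the asker's variables): <code>time.strftime(format[, t])</code> accepts a format string plus an optional struct_time, so a third positional argument raises a TypeError.</p>

```python
import time

# strftime(format[, t]) takes at most two arguments.
stamp = time.strftime("%Y-%m-%d_%H:%M:%S", time.localtime(0))

got_type_error = False
try:
    # Passing an extra string, as in the question, is a TypeError.
    time.strftime("%Y", "Mediafile", time.localtime())
except TypeError:
    got_type_error = True
```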
| 2 | 2016-07-25T08:28:51Z | [
"python"
] |
adding extra information to filename - python | 38,562,839 | <p>I used the following line to rename my file by adding a timestamp and replacing extra spaces with a dash (-).
Now I would like to add extra information, like a label, alongside the timestamp:</p>
<pre><code>filename = ("%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()))).replace(" ", "-")
</code></pre>
<p>the current output looks like </p>
<pre><code>testfile_2016-07-25_12:17:14.mp4
</code></pre>
<p>I'm looking to have the file output as:</p>
<pre><code>testfile_2016-07-25_12:17:14-MediaFile.mp4
</code></pre>
<p>I tried the following:</p>
<pre><code>filename = ("%s_%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S","Mediafile",time.localtime()))).replace(" ", "-")
</code></pre>
<p>What did I miss here?</p>
| 0 | 2016-07-25T08:23:43Z | 38,562,937 | <pre><code>filename = ("%s_%s-%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()), 'MediaFile')).replace(' ', '-')
# 'testfile_2016-07-25_10:29:28-MediaFile.mp4'
</code></pre>
<p>To understand better how this works and slightly improve readability, you can define your time stamp in a separate variable:</p>
<pre><code>timestr = time.strftime("%Y-%m-%d_%H:%M:%S", time.localtime()) # 2016-07-25_10:31:03
filename = ("%s_%s-%s" %(pfile, timestr, 'MediaFile')).replace(' ', '-')
# 'testfile_2016-07-25_10:31:03-MediaFile.mp4'
</code></pre>
<p>or</p>
<pre><code>filename = ("%s_%s-MediaFile.mp4" %(pfile, timestr)).replace(' ', '-')
</code></pre>
<p>For completeness, you can also use the <code>format()</code> method:</p>
<pre><code>filename = '{0}_{1}-MediaFile.mp4'.format(pfile, timestr).replace(' ', '-')
</code></pre>
| 1 | 2016-07-25T08:29:45Z | [
"python"
] |
adding extra information to filename - python | 38,562,839 | <p>I used the following line to rename my file by adding a timestamp and replacing extra spaces with a dash (-).
Now I would like to add extra information, like a label, alongside the timestamp:</p>
<pre><code>filename = ("%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()))).replace(" ", "-")
</code></pre>
<p>the current output looks like </p>
<pre><code>testfile_2016-07-25_12:17:14.mp4
</code></pre>
<p>I'm looking to have the file output as:</p>
<pre><code>testfile_2016-07-25_12:17:14-MediaFile.mp4
</code></pre>
<p>I tried the following:</p>
<pre><code>filename = ("%s_%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S","Mediafile",time.localtime()))).replace(" ", "-")
</code></pre>
<p>What did I miss here?</p>
| 0 | 2016-07-25T08:23:43Z | 38,563,476 | <p>What you are looking for should be :</p>
<pre><code>filename = ("%s_%s_%s.mp4" %(pfile, time.strftime("%Y-%m-%d_%H:%M:%S",time.localtime()),"Mediafile")).replace(" ", "-")
</code></pre>
<p>In your original code, the 'Mediafile' string was not in the right place: you put it as an argument of strftime(), when you should put it as one of the values substituted into the format string, in the second level of parentheses.</p>
| 0 | 2016-07-25T08:58:21Z | [
"python"
] |
Speeding up the light processing of ~50GB CSV file | 38,562,864 | <p>I have a ~50GB csv file with which I have to</p>
<ul>
<li>Take several subsets of the columns of the CSV</li>
<li>Apply a different format string specification to each subset of columns of the CSV. </li>
<li>Output a new CSV for each subset with its own format specification. </li>
</ul>
<p>I opted to use Pandas, and have a general approach of iterating over chunks of a handy chunk-size (of just over half a million lines) to produce a DataFrame, and appending the chunk to each output CSV. So something like this:</p>
<pre><code>_chunk_size = 630100
column_mapping = {
'first_output_specification' : ['Scen', 'MS', 'Time', 'CCF2', 'ESW10'],
# ..... similar mappings for rest of output specifications
}
union_of_used_cols = ['Scen', 'MS', 'Time', 'CCF1', 'CCF2', 'VS', 'ESW 0.00397', 'ESW0.08',
'ESW0.25', 'ESW1', 'ESW 2', 'ESW3', 'ESW 5', 'ESW7', 'ESW 10', 'ESW12',
'ESW 15', 'ESW18', 'ESW 20', 'ESW22', 'ESW 25', 'ESW30', 'ESW 35',
'ESW40']
chnk_iter = pd.read_csv('my_big_csv.csv', header=0, index_col=False,
iterator=True, na_filter=False, usecols=union_of_used_cols)
cnt = 0
while cnt < 100:
chnk = chnk_iter.get_chunk(_chunk_size)
chnk.to_csv('first_output_specification', float_format='%.8f',
columns=column_mapping['first_output_specification'],
mode='a',
header=True,
index=False)
# ..... do the same thing for the rest of the output specifications
cnt += 1
</code></pre>
<p><strong>My problem</strong> is that this is <em>really</em> slow. Each chunk takes about a minute to generate and append to the CSV files, and thus I'm looking at almost 2 hours for the task to complete.</p>
<p>I have tried a few optimizations, such as only reading in the union of the column subsets from the CSV and setting <code>na_filter=False</code>, but it still isn't acceptable.</p>
<p>I was wondering if there is a faster way to do this light processing of a CSV file in Python, either by means of an optimization or correction to my approach, or perhaps there is simply a better tool suited for this kind of job than Pandas... to me (<em>an inexperienced Pandas user</em>) this looks like it is as fast as it could get with Pandas, but I may very well be mistaken.</p>
| 8 | 2016-07-25T08:25:20Z | 39,213,959 | <p>I don't think you're getting any advantage from a Pandas dataframe, so it is just adding overhead. Instead, you can use Python's own <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">CSV module</a>, which is easy to use and nicely optimized in C.</p>
<p>Consider reading much larger chunks into memory (perhaps 10MB at a time), then writing-out each of the reformatted column subsets before advancing to the next chunk. That way, the input file only gets read and parsed once.</p>
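<p>A rough sketch of that single-pass idea with the csv module (the column names, subsets, and in-memory buffers below are made up for illustration; a real run would use open file handles):</p>

```python
import csv
import io

# One pass over the input: each parsed row is written to every
# output subset before the next row is read.
src = io.StringIO("a,b,c\n1,2,3\n4,5,6\n")
subsets = {'first.csv': ['a', 'c'], 'second.csv': ['b']}
outputs = {name: io.StringIO() for name in subsets}

reader = csv.DictReader(src)
writers = {}
for name, cols in subsets.items():
    # extrasaction='ignore' drops columns not in this subset.
    w = csv.DictWriter(outputs[name], fieldnames=cols, extrasaction='ignore')
    w.writeheader()
    writers[name] = w

for row in reader:
    for name, w in writers.items():
        w.writerow(row)

first_out = outputs['first.csv'].getvalue()
```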
<p>One other approach you could try is to preprocess the data with the Unix <a href="http://linux.die.net/man/1/cut" rel="nofollow"><em>cut</em></a> command to extract only the relevant columns (so that Python doesn't have to create objects and allocate memory for data in the unused columns): <code>cut -d, -f1,3,5 somedata.csv</code></p>
<p>Lastly, try running the code under <a href="http://pypy.org/" rel="nofollow">PyPy</a> so that the CPU bound portion of your script gets optimized through their tracing JIT.</p>
| 5 | 2016-08-29T19:42:57Z | [
"python",
"csv",
"pandas"
] |
Speeding up the light processing of ~50GB CSV file | 38,562,864 | <p>I have a ~50GB csv file with which I have to</p>
<ul>
<li>Take several subsets of the columns of the CSV</li>
<li>Apply a different format string specification to each subset of columns of the CSV. </li>
<li>Output a new CSV for each subset with its own format specification. </li>
</ul>
<p>I opted to use Pandas, and have a general approach of iterating over chunks of a handy chunk-size (of just over half a million lines) to produce a DataFrame, and appending the chunk to each output CSV. So something like this:</p>
<pre><code>_chunk_size = 630100
column_mapping = {
'first_output_specification' : ['Scen', 'MS', 'Time', 'CCF2', 'ESW10'],
# ..... similar mappings for rest of output specifications
}
union_of_used_cols = ['Scen', 'MS', 'Time', 'CCF1', 'CCF2', 'VS', 'ESW 0.00397', 'ESW0.08',
'ESW0.25', 'ESW1', 'ESW 2', 'ESW3', 'ESW 5', 'ESW7', 'ESW 10', 'ESW12',
'ESW 15', 'ESW18', 'ESW 20', 'ESW22', 'ESW 25', 'ESW30', 'ESW 35',
'ESW40']
chnk_iter = pd.read_csv('my_big_csv.csv', header=0, index_col=False,
iterator=True, na_filter=False, usecols=union_of_used_cols)
cnt = 0
while cnt < 100:
chnk = chnk_iter.get_chunk(_chunk_size)
chnk.to_csv('first_output_specification', float_format='%.8f',
columns=column_mapping['first_output_specification'],
mode='a',
header=True,
index=False)
# ..... do the same thing for the rest of the output specifications
cnt += 1
</code></pre>
<p><strong>My problem</strong> is that this is <em>really</em> slow. Each chunk takes about a minute to generate and append to the CSV files, and thus I'm looking at almost 2 hours for the task to complete.</p>
<p>I have tried a few optimizations, such as only reading in the union of the column subsets from the CSV and setting <code>na_filter=False</code>, but it still isn't acceptable.</p>
<p>I was wondering if there is a faster way to do this light processing of a CSV file in Python, either by means of an optimization or correction to my approach, or perhaps there is simply a better tool suited for this kind of job than Pandas... to me (<em>an inexperienced Pandas user</em>) this looks like it is as fast as it could get with Pandas, but I may very well be mistaken.</p>
| 8 | 2016-07-25T08:25:20Z | 39,215,634 | <p>I would try using the python csv module and generators.</p>
<p>I've found generators much faster than other approaches for parsing huge server logs and such.</p>
<pre><code>import csv
def reader(csv_filename):
with open(csv_filename, 'r') as f:
csvreader = csv.reader(f, delimiter=',', quotechar="'")
for line in csvreader:
yield line # line is a list of fields
def formatter(lines):
for line in lines:
# format line according to specs
yield formatted_line
def write(lines, csv_filename):
with open(csv_filename, 'w') as f:
writer = csv.writer(f)
for line in lines:
writer.writerow(line)
lines = reader('myfile.in.csv')
formatted_lines = formatter(lines)
write(formatted_lines, 'myfile.out.csv')
</code></pre>
<p>This is just for reading a transforming a single input csv into a single output csv, but you could write the formatter and writer to output several files.</p>
<p>(I now see that this question is a month old - not sure if you've solved your problem already - if not and if you want more detailed explanations/examples let me know.)</p>
| 0 | 2016-08-29T21:40:43Z | [
"python",
"csv",
"pandas"
] |
Speeding up the light processing of ~50GB CSV file | 38,562,864 | <p>I have a ~50GB csv file with which I have to</p>
<ul>
<li>Take several subsets of the columns of the CSV</li>
<li>Apply a different format string specification to each subset of columns of the CSV. </li>
<li>Output a new CSV for each subset with its own format specification. </li>
</ul>
<p>I opted to use Pandas, and have a general approach of iterating over chunks of a handy chunk-size (of just over half a million lines) to produce a DataFrame, and appending the chunk to each output CSV. So something like this:</p>
<pre><code>_chunk_size = 630100
column_mapping = {
'first_output_specification' : ['Scen', 'MS', 'Time', 'CCF2', 'ESW10'],
# ..... similar mappings for rest of output specifications
}
union_of_used_cols = ['Scen', 'MS', 'Time', 'CCF1', 'CCF2', 'VS', 'ESW 0.00397', 'ESW0.08',
'ESW0.25', 'ESW1', 'ESW 2', 'ESW3', 'ESW 5', 'ESW7', 'ESW 10', 'ESW12',
'ESW 15', 'ESW18', 'ESW 20', 'ESW22', 'ESW 25', 'ESW30', 'ESW 35',
'ESW40']
chnk_iter = pd.read_csv('my_big_csv.csv', header=0, index_col=False,
iterator=True, na_filter=False, usecols=union_of_used_cols)
cnt = 0
while cnt < 100:
chnk = chnk_iter.get_chunk(_chunk_size)
chnk.to_csv('first_output_specification', float_format='%.8f',
columns=column_mapping['first_output_specification'],
mode='a',
header=True,
index=False)
# ..... do the same thing for the rest of the output specifications
cnt += 1
</code></pre>
<p><strong>My problem</strong> is that this is <em>really</em> slow. Each chunk takes about a minute to generate and append to the CSV files, and thus I'm looking at almost 2 hours for the task to complete.</p>
<p>I have tried a few optimizations, such as only reading in the union of the column subsets from the CSV and setting <code>na_filter=False</code>, but it still isn't acceptable.</p>
<p>I was wondering if there is a faster way to do this light processing of a CSV file in Python, either by means of an optimization or correction to my approach, or perhaps there is simply a better tool suited for this kind of job than Pandas... to me (<em>an inexperienced Pandas user</em>) this looks like it is as fast as it could get with Pandas, but I may very well be mistaken.</p>
| 8 | 2016-07-25T08:25:20Z | 39,215,781 | <p>CPU is faster than disk access. One trick is to gzip your file and read from that.</p>
<pre><code>import gzip
with gzip.open('input.gz','r') as fin:
for line in fin:
print('got line', line)
</code></pre>
| 0 | 2016-08-29T21:54:26Z | [
"python",
"csv",
"pandas"
] |
Command failed: tar xzf android-sdk_r20-linux.tgz | 38,562,910 | <p>I was trying to build a kivy app for android and got this error:</p>
<pre><code># Check configuration tokens
# Ensure build layout
# Check configuration tokens
# Preparing build
# Check requirements for android
# Install platform
# Apache ANT found at /home/ali/.buildozer/android/platform/apache-ant-1.9.4
# Android SDK is missing, downloading
# Unpacking Android SDK
# Command failed: tar xzf android-sdk_r20-linux.tgz
#
# Buildozer failed to execute the last command
# If the error is not obvious, please raise the log_level to 2
# and retry the latest command.
# In case of a bug report, please add a full log with log_level = 2
</code></pre>
<p>command</p>
<pre><code>$ buildozer android_new debug
</code></pre>
<p>log:
<a href="http://paste.ubuntu.com/20850804/">http://paste.ubuntu.com/20850804/</a></p>
<p>Want any details? Request them in the comments.</p>
| 6 | 2016-07-25T08:28:13Z | 38,612,319 | <p>The machine fails to properly download the android SDK.</p>
<p>You can confirm this by checking the md5 sum of the file :</p>
<pre><code>wget -O - http://dl.google.com/android/android-sdk_r20-linux.tgz | md5sum
</code></pre>
<p>This should output : 22a81cf1d4a951c62f71a8758290e9bb</p>
<p>If it doesn't, my first guess would be that you're blocked by some kind of proxy or firewall. A proxy can be configured to limit the maximum size of a a file you're trying to download. Check the logs or contact your sysadmins if you're not the administrator of the machine.</p>
| 3 | 2016-07-27T11:58:56Z | [
"android",
"python",
"kivy",
"buildozer"
] |