title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
is there a function to get axis interval | 38,605,027 | <p>I have an axes variable <code>ax</code>.</p>
<pre><code>ax.get_yticks()
Out[37]: array([ 0. , 0.2, 0.4, 0.6, 0.8, 1. ])
</code></pre>
<p>I want to get the tick interval (0.2 - 0). Is there a function that returns this value (0.2)?</p>
| 1 | 2016-07-27T06:11:40Z | 38,605,135 | <p>Use this:</p>
<pre><code>intervalSize = ax.get_yticks()[1] - ax.get_yticks()[0]
</code></pre>
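Since <code>get_yticks</code> is a method, it needs to be called (with parentheses) before indexing. Here is the same idea sketched with plain NumPy, so it runs without a figure; the array stands in for what <code>ax.get_yticks()</code> returns:

```python
import numpy as np

# stand-in for the ax.get_yticks() result shown in the question
ticks = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

interval = ticks[1] - ticks[0]   # tick spacing, assuming evenly spaced ticks
gaps = np.diff(ticks)            # every gap, in case the spacing ever varies
print(interval)  # 0.2
```

With a real axes object this becomes <code>ax.get_yticks()[1] - ax.get_yticks()[0]</code>.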
| 0 | 2016-07-27T06:19:25Z | [
"python",
"matplotlib"
] |
python pandas from 0/1 dataframe to an itemset list | 38,605,111 | <p>What is the most efficient way to go from a 0/1 pandas/numpy dataframe of this form::</p>
<pre><code>>>> dd
{'a': {0: 1, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1},
'b': {0: 1, 1: 1, 2: 0, 3: 0, 4: 1, 5: 1},
'c': {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1},
'd': {0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: 1},
'e': {0: 0, 1: 0, 2: 1, 3: 0, 4: 0, 5: 0}}
>>> df = pd.DataFrame(dd)
>>> df
a b c d e
0 1 1 0 0 0
1 0 1 1 1 0
2 1 0 1 1 1
3 0 0 0 1 0
4 1 1 1 0 0
5 1 1 1 1 0
>>>
</code></pre>
<p>To an itemset list of lists?::</p>
<pre><code>itemset = [['a', 'b'],
['b', 'c', 'd'],
['a', 'c', 'd', 'e'],
['d'],
['a', 'b', 'c'],
['a', 'b', 'c', 'd']]
</code></pre>
<p>df.shape ~ <code>(1e6, 500)</code></p>
| 2 | 2016-07-27T06:18:01Z | 38,605,258 | <p>You can first multiply by the column names with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html" rel="nofollow"><code>mul</code></a> and convert the <code>DataFrame</code> to a <code>numpy array</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow"><code>values</code></a>:</p>
<pre><code>print (df.mul(df.columns.to_series()).values)
[['a' 'b' '' '' '']
['' 'b' 'c' 'd' '']
['a' '' 'c' 'd' 'e']
['' '' '' 'd' '']
['a' 'b' 'c' '' '']
['a' 'b' 'c' 'd' '']]
</code></pre>
<p>Remove the empty strings with a nested list comprehension:</p>
<pre><code>print ([[y for y in x if y != ''] for x in df.mul(df.columns.to_series()).values])
[['a', 'b'],
['b', 'c', 'd'],
['a', 'c', 'd', 'e'],
['d'],
['a', 'b', 'c'],
['a', 'b', 'c', 'd']]
</code></pre>
| 2 | 2016-07-27T06:26:02Z | [
"python",
"numpy",
"pandas"
] |
python pandas from 0/1 dataframe to an itemset list | 38,605,111 | <p>What is the most efficient way to go from a 0/1 pandas/numpy dataframe of this form::</p>
<pre><code>>>> dd
{'a': {0: 1, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1},
'b': {0: 1, 1: 1, 2: 0, 3: 0, 4: 1, 5: 1},
'c': {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1},
'd': {0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: 1},
'e': {0: 0, 1: 0, 2: 1, 3: 0, 4: 0, 5: 0}}
>>> df = pd.DataFrame(dd)
>>> df
a b c d e
0 1 1 0 0 0
1 0 1 1 1 0
2 1 0 1 1 1
3 0 0 0 1 0
4 1 1 1 0 0
5 1 1 1 1 0
>>>
</code></pre>
<p>To an itemset list of lists?::</p>
<pre><code>itemset = [['a', 'b'],
['b', 'c', 'd'],
['a', 'c', 'd', 'e'],
['d'],
['a', 'b', 'c'],
['a', 'b', 'c', 'd']]
</code></pre>
<p>df.shape ~ <code>(1e6, 500)</code></p>
| 2 | 2016-07-27T06:18:01Z | 38,605,434 | <p>Simple list comprehension:</p>
<pre><code>itemset = [[df.columns.values[j] # the output based on the following logic:
for j in range(0, len(df.iloc[i]))
if df.iloc[i][j] == 1]
for i in range(0, len(df.index))]
print (itemset)
</code></pre>
<p>Gives the result:</p>
<pre><code>$ python test.py
[['a', 'b'], ['b', 'c', 'd'], ['a', 'c', 'd', 'e'], ['d'], ['a', 'b', 'c'], ['a', 'b', 'c', 'd']]
</code></pre>
<p>For prettier output, append this after the list comprehension:</p>
<pre><code>print ('[', end='')
for i in range(0, len(itemset)):
    if i == len(itemset) - 1:
        print (itemset[i], end='')
    else:
        print (itemset[i], end=',\n ')
print (']')
</code></pre>
<p>Output:</p>
<pre><code>$ python test.py
[['a', 'b'],
['b', 'c', 'd'],
['a', 'c', 'd', 'e'],
['d'],
['a', 'b', 'c'],
['a', 'b', 'c', 'd']]
</code></pre>
| 0 | 2016-07-27T06:37:02Z | [
"python",
"numpy",
"pandas"
] |
python pandas from 0/1 dataframe to an itemset list | 38,605,111 | <p>What is the most efficient way to go from a 0/1 pandas/numpy dataframe of this form::</p>
<pre><code>>>> dd
{'a': {0: 1, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1},
'b': {0: 1, 1: 1, 2: 0, 3: 0, 4: 1, 5: 1},
'c': {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1},
'd': {0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: 1},
'e': {0: 0, 1: 0, 2: 1, 3: 0, 4: 0, 5: 0}}
>>> df = pd.DataFrame(dd)
>>> df
a b c d e
0 1 1 0 0 0
1 0 1 1 1 0
2 1 0 1 1 1
3 0 0 0 1 0
4 1 1 1 0 0
5 1 1 1 1 0
>>>
</code></pre>
<p>To an itemset list of lists?::</p>
<pre><code>itemset = [['a', 'b'],
['b', 'c', 'd'],
['a', 'c', 'd', 'e'],
['d'],
['a', 'b', 'c'],
['a', 'b', 'c', 'd']]
</code></pre>
<p>df.shape ~ <code>(1e6, 500)</code></p>
| 2 | 2016-07-27T06:18:01Z | 38,607,200 | <p>Here's a NumPy based vectorized approach to get a list of arrays as output -</p>
<pre><code>In [47]: df
Out[47]:
a b c d e
0 1 1 0 0 0
1 0 1 1 1 0
2 1 0 1 1 1
3 0 0 0 1 0
4 1 1 1 0 0
5 1 1 1 1 0
In [48]: cols = df.columns.values.astype(str)
In [49]: R,C = np.where(df.values==1)
In [50]: np.split(cols[C],np.unique(R,return_index=True)[1])[1:]
Out[50]:
[array(['a', 'b'],
dtype='|S1'), array(['b', 'c', 'd'],
dtype='|S1'), array(['a', 'c', 'd', 'e'],
dtype='|S1'), array(['d'],
dtype='|S1'), array(['a', 'b', 'c'],
dtype='|S1'), array(['a', 'b', 'c', 'd'],
dtype='|S1')]
</code></pre>
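To trace what each step produces, here is the same R/C/split mechanics on a tiny hand-made 0/1 array (note that a row containing no 1s would simply be absent from the output, since it never appears in <code>R</code>):

```python
import numpy as np

vals = np.array([[1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]])
cols = np.array(['a', 'b', 'c'])

R, C = np.where(vals == 1)                    # R=[0 0 1 1 2], C=[0 1 1 2 2]
starts = np.unique(R, return_index=True)[1]   # first index of each row: [0 2 4]
itemsets = np.split(cols[C], starts)[1:]      # [1:] drops the leading empty split
print([s.tolist() for s in itemsets])  # [['a', 'b'], ['b', 'c'], ['c']]
```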
| 1 | 2016-07-27T08:03:10Z | [
"python",
"numpy",
"pandas"
] |
Why Python Pandas append to DataFrame like this? | 38,605,182 | <p>I want to add l to column 'A', but it creates a new column and adds l there instead. Why is this happening? And how can I achieve what I want?</p>
<pre><code>import pandas as pd
l=[1,2,3]
df = pd.DataFrame(columns =['A'])
df = df.append(l, ignore_index=True)
df = df.append(l, ignore_index=True)
print(df)
A 0
0 NaN 1.0
1 NaN 2.0
2 NaN 3.0
3 NaN 1.0
4 NaN 2.0
5 NaN 3.0
</code></pre>
| 0 | 2016-07-27T06:22:22Z | 38,605,221 | <p><strong>Edited</strong>
Is this what you want to do:</p>
<pre><code>In[6]:df=df.A.append(pd.Series(l)).reset_index().drop('index',1).rename(columns={0:'A'})
In[7]:df
Out[7]:
A
0 1
1 2
2 3
</code></pre>
<p>Then you can append a list of any length.
Suppose:</p>
<pre><code>a=[9,8,7,6,5]
In[11]:df=df.A.append(pd.Series(a)).reset_index().drop('index',1).rename(columns={0:'A'})
In[12]:df
Out[12]:
A
0 1
1 2
2 3
3 9
4 8
5 7
6 6
7 5
</code></pre>
<p><strong>Previously</strong>
are you looking for this :</p>
<pre><code>df=pd.DataFrame(l,columns=['A'])
df
Out[5]:
A
0 1
1 2
2 3
</code></pre>
| 2 | 2016-07-27T06:24:29Z | [
"python",
"pandas"
] |
Why Python Pandas append to DataFrame like this? | 38,605,182 | <p>I want to add l to column 'A', but it creates a new column and adds l there instead. Why is this happening? And how can I achieve what I want?</p>
<pre><code>import pandas as pd
l=[1,2,3]
df = pd.DataFrame(columns =['A'])
df = df.append(l, ignore_index=True)
df = df.append(l, ignore_index=True)
print(df)
A 0
0 NaN 1.0
1 NaN 2.0
2 NaN 3.0
3 NaN 1.0
4 NaN 2.0
5 NaN 3.0
</code></pre>
| 0 | 2016-07-27T06:22:22Z | 38,605,283 | <p>You can just pass a dictionary to the DataFrame constructor, if I understand your question correctly.</p>
<pre><code>l = [1,2,3]
df = pd.DataFrame({'A': l})
df
A
0 1
1 2
2 3
</code></pre>
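If you later need to extend the column with another list, concatenating a second one-column DataFrame keeps everything in 'A' (a sketch; <code>more</code> is just an example list):

```python
import pandas as pd

l = [1, 2, 3]
df = pd.DataFrame({'A': l})

more = [9, 8, 7]                     # another list to tack on
df = pd.concat([df, pd.DataFrame({'A': more})], ignore_index=True)
print(df['A'].tolist())              # [1, 2, 3, 9, 8, 7]
```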
| 2 | 2016-07-27T06:28:06Z | [
"python",
"pandas"
] |
Cyclic import error | 38,605,374 | <p>Whatever I try to import from my <code>pupils</code> app, I get an import error. For example:</p>
<p><strong>offices/models.py</strong></p>
<pre><code>from pupils.models import Pupils # => ImportError: cannot import name Pupils
</code></pre>
<p>I am sure the path is right; PyCharm resolves it. Everything imports fine from other apps, by the way.</p>
| 0 | 2016-07-27T06:33:52Z | 38,605,476 | <p>It looks like you have hit the <code>cyclic imports</code> problem.</p>
<p>An easy way to fix it is to import <code>Pupils</code> lazily, inside the function that needs it:</p>
<pre><code>def where_you_need_pupils():
    from pupils.models import Pupils
    # do something
</code></pre>
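The pattern works because the import statement runs only when the function is first called, by which time both modules already exist. A trivially runnable illustration of the deferred import, using a stdlib module in place of <code>pupils.models</code>:

```python
def get_pupil_count():
    # deferred import: executed at call time, not at module load time,
    # so a module-level cycle never gets the chance to bite
    import json
    return len(json.loads('[1, 2, 3]'))

print(get_pupil_count())  # 3
```

For ForeignKey fields specifically, Django also accepts a string reference such as <code>models.ForeignKey('pupils.Pupils')</code>, which avoids the import entirely.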
| 1 | 2016-07-27T06:39:48Z | [
"python",
"django",
"import",
"python-import"
] |
calculate the length of a sequence after adding the length of previous sequences | 38,605,751 | <p>I want to determine length of individual sequences in a multifasta file. I got this biopython code from the bio manual as: </p>
<pre><code>from Bio import SeqIO
import sys
cmdargs = str(sys.argv)
for seq_record in SeqIO.parse(str(sys.argv[1]), "fasta"):
    output_line = '%s\t%i' % \
        (seq_record.id, len(seq_record))
    print(output_line)
</code></pre>
<p>My input file is like:</p>
<pre><code>>Protein1
MNT
>Protein2
TSMN
>Protein3
TTQRT
</code></pre>
<p>And the code yields:</p>
<pre><code>Protein1 3
Protein2 4
Protein3 5
</code></pre>
<p>But I want to calculate the length of a sequence after adding the length of previous sequences. It would be like:</p>
<pre><code>Protein1 1-3
Protein2 4-7
Protein3 8-12
</code></pre>
<p>I don't know in which of the above line in the code I need to change to get that output. I'd appreciate any help on this issue, thanks!!!!</p>
| 0 | 2016-07-27T06:54:34Z | 38,606,984 | <p>It is easy just to get the total length:</p>
<pre><code>from Bio import SeqIO
import sys
cmdargs = str(sys.argv)
total_len = 0
for seq_record in SeqIO.parse(str(sys.argv[1]), "fasta"):
    total_len += len(seq_record)
    output_line = '%s\t%i' % (seq_record.id, total_len)
    print(output_line)
</code></pre>
<p>To get a range:</p>
<pre><code>from Bio import SeqIO
import sys
cmdargs = str(sys.argv)
total_len = 0
for seq_record in SeqIO.parse(str(sys.argv[1]), "fasta"):
    previous_total_len = total_len
    total_len += len(seq_record)
    output_line = '%s\t%i - %i' % (seq_record.id, previous_total_len + 1, total_len)
    print(output_line)
</code></pre>
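The running-range logic is independent of Biopython; here it is on plain (id, sequence) pairs matching the example FASTA, so the expected output can be checked directly:

```python
records = [("Protein1", "MNT"), ("Protein2", "TSMN"), ("Protein3", "TTQRT")]

lines = []
total_len = 0
for rec_id, seq in records:
    start = total_len + 1          # ranges are 1-based and inclusive
    total_len += len(seq)
    lines.append("%s\t%i-%i" % (rec_id, start, total_len))

print("\n".join(lines))
# Protein1  1-3
# Protein2  4-7
# Protein3  8-12
```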
| 0 | 2016-07-27T07:52:09Z | [
"python",
"biopython",
"fasta"
] |
Read a file multi-threaded in python in chunks of 2KB. | 38,605,752 | <p>I have to read a file in chunks of 2KB and do some operation on those chunks. Now where I'm actually stuck is, when the data needs to be thread-safe. From what I've seen in online tutorials and StackOverflow answers, we define a worker thread, and override its run method. The run method uses data from a queue which we pass as an argument, and which contains the actual data. But to load that queue with data, I'll have to go through the file serially, which eliminates parallelism. I want that multiple threads read the file in parallel manner. So I'll have to cover the read part in the run function only. But I'm not sure how to go with that. Help needed.</p>
| 0 | 2016-07-27T06:54:37Z | 38,606,759 | <p>Reading the file serially is your best option, since (hardware-wise) it gives you the best read throughput.</p>
<p><em>Usually</em> the slow part is not in the data reading but in its processing...</p>
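A common shape for this is a single reader (here the main thread) feeding a bounded queue, with several worker threads doing the processing in parallel. A minimal sketch, using an in-memory buffer in place of a real file:

```python
import io
import queue
import threading

CHUNK_SIZE = 2048
q = queue.Queue(maxsize=16)        # bounded, so the reader can't race ahead
results = []
results_lock = threading.Lock()

def worker():
    while True:
        chunk = q.get()
        if chunk is None:          # sentinel: no more data
            break
        with results_lock:         # protect shared state
            results.append(len(chunk))   # stand-in for real processing

data = io.BytesIO(b"x" * 5000)     # stands in for open(path, "rb")
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

while True:                        # the serial read loop
    chunk = data.read(CHUNK_SIZE)
    if not chunk:
        break
    q.put(chunk)

for _ in threads:                  # one sentinel per worker
    q.put(None)
for t in threads:
    t.join()

print(sorted(results))             # [904, 2048, 2048]
```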
| 1 | 2016-07-27T07:41:19Z | [
"python",
"multithreading",
"thread-safety",
"python-multithreading"
] |
Use QPropertyAnimation on main window | 38,605,771 | <p>I'm having an issue trying to animate a QMainWindow. I'm trying to make a "slide" animation for a side panel. It works fine if I call it before "app.exec"; however, calling the "animate_out" function seems to do nothing. Any ideas?</p>
<p>PS: You can un-comment the code towards the bottom to see an example of what I'm looking for.</p>
<p>Thanks</p>
<pre><code># PYQT IMPORTS
from PyQt4 import QtCore, QtGui
import sys
import UI_HUB
# MAIN HUB CLASS
class HUB(QtGui.QMainWindow, UI_HUB.Ui_HUB):
    def __init__(self):
        super(self.__class__, self).__init__()
        self.setupUi(self)
        self.setCentralWidget(self._Widget)
        self.setWindowTitle('HUB - 0.0')
        self._Widget.installEventFilter(self)
        self.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.WindowStaysOnTopHint)
        self.set_size()
        self.animate_out()

    def set_size(self):
        # Finds available and total screen resolution
        resolution_availabe = QtGui.QDesktopWidget().availableGeometry()
        ava_height = resolution_availabe.height()
        self.resize(380, ava_height)

    def animate_out(self):
        animation = QtCore.QPropertyAnimation(self, "pos")
        animation.setDuration(400)
        animation.setStartValue(QtCore.QPoint(1920, 22))
        animation.setEndValue(QtCore.QPoint(1541, 22))
        animation.setEasingCurve(QtCore.QEasingCurve.OutCubic)
        animation.start()
if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    form = HUB()
    form.show()
    form.raise_()
    form.activateWindow()

    # Doing the animation here works just fine
    # animation = QtCore.QPropertyAnimation(form, "pos")
    # animation.setDuration(400)
    # animation.setStartValue(QtCore.QPoint(1920, 22))
    # animation.setEndValue(QtCore.QPoint(1541, 22))
    # animation.setEasingCurve(QtCore.QEasingCurve.OutCubic)
    # animation.start()
    app.exec_()
</code></pre>
| 1 | 2016-07-27T06:55:31Z | 38,692,042 | <p>The problem is that the <code>animation</code> object does not outlive the scope of the <code>animate_out</code> function: it is garbage-collected as soon as the function returns, so the animation never runs.
To solve this, store the <code>animation</code> object as a member of the <code>HUB</code> class.</p>
<p>In my example code I also split the creation and the playing of the animation into separate functions.</p>
<pre><code># [...] skipped
class HUB(QtGui.QMainWindow, UI_HUB.Ui_HUB):
    def __init__(self):
        # [...] skipped
        self.create_animations()  # see code below
        self.animate_out()

    def set_size(self):
        # [...] skipped

    def create_animations(self):
        # set up the animation object
        self.animation = QtCore.QPropertyAnimation(self, "pos")
        self.animation.setDuration(400)
        self.animation.setStartValue(QtCore.QPoint(1920, 22))
        self.animation.setEndValue(QtCore.QPoint(1541, 22))
        self.animation.setEasingCurve(QtCore.QEasingCurve.OutCubic)

    def animate_out(self):
        # use the animation object
        self.animation.start()

# [...] skipped
</code></pre>
| 0 | 2016-08-01T06:14:07Z | [
"python",
"qt",
"pyqt",
"pyside"
] |
python script to convert video to image sequence using FFmpeg | 38,605,822 | <p>I have 100 uncompressed MOV video files and I want to convert them all to SGI image sequences.</p>
<p>I have a list of all the MOV file paths.</p>
<p>How do I convert .mov (video) to .sgi (image sequence) using <a href="https://www.python.org/" rel="nofollow">python</a> and <a href="https://ffmpeg.org/" rel="nofollow">FFmpeg</a>?</p>
| 0 | 2016-07-27T06:57:40Z | 38,607,558 | <p>You can use FFmpeg to convert a video to SGI image sequences with this command:</p>
<pre><code>ffmpeg -i inputVideo outputFrames_%04d.sgi
</code></pre>
<p>- replace <code>inputVideo</code> with your input file path and name</p>
<p>- replace <code>outputFrames</code> with your output file path and name</p>
<p>- replace the '4' in <code>_%04d</code> with the number of digits you want for sequential image file naming.</p>
<p>One way to process your files from Python is to launch ffmpeg as a subprocess, providing the command you want it to execute:</p>
<pre><code>import subprocess as sp
cmd='ffmpeg -i inputVideo outputFrames_%04d.sgi'
sp.call(cmd,shell=True)
</code></pre>
<p>Remember to use double backslashes in the file paths inside the command string (at least on Windows).
To process all 100 movie files, write a loop that builds the command string with the appropriate input and output file names for each movie.</p>
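A sketch of such a loop, building one command per clip; the paths here are invented for illustration, and passing the argument list directly (instead of a single string with <code>shell=True</code>) sidesteps the shell-quoting and backslash issues:

```python
import os

mov_files = ["/footage/clip01.mov", "/footage/clip02.mov"]  # your 100 paths

commands = []
for mov in mov_files:
    stem = os.path.splitext(os.path.basename(mov))[0]
    out_pattern = os.path.join("/frames", stem + "_%04d.sgi")
    commands.append(["ffmpeg", "-i", mov, out_pattern])

print(commands[0])
# ['ffmpeg', '-i', '/footage/clip01.mov', '/frames/clip01_%04d.sgi']  (POSIX paths)
```

Each entry can then be run with <code>sp.call(cmd)</code>; no <code>shell=True</code> is needed when passing a list.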
| 1 | 2016-07-27T08:21:30Z | [
"python",
"ffmpeg"
] |
No. of occurrences of maximum item in a list | 38,605,838 | <p>Suppose I have a <code>list</code></p>
<pre><code>L= [3 2 1 3 5 4 5 3 5 3]
</code></pre>
<p>The output should be <code>3</code>, because <code>5</code> is the maximum item in the list and its number of occurrences is <code>3</code>.</p>
<p>This is what I have tried so far:</p>
<pre><code>from collections import defaultdict
d = defaultdict(int)
for i in height:
    d[i] += 1
result = max(d.iteritems(), key=lambda x: x[1])
print len(result)
</code></pre>
<p>But this does not work for every list: it finds the item with the most occurrences, which is not always the maximum item.</p>
| -1 | 2016-07-27T06:58:22Z | 38,606,043 | <p>Check this code:</p>
<pre><code>L = [3, 2, 1, 3, 5, 4, 5, 3, 5, 3]
newDict = {}
for i in L:
    newDict.setdefault(i, 0)
    newDict[i] += 1
max_count = max(newDict.values())  # compute the maximum count once
print(list(filter(lambda x: newDict[x] == max_count, newDict))[0])
</code></pre>
| 1 | 2016-07-27T07:08:14Z | [
"python"
] |
No. of occurrences of maximum item in a list | 38,605,838 | <p>Suppose I have a <code>list</code></p>
<pre><code>L= [3 2 1 3 5 4 5 3 5 3]
</code></pre>
<p>The output should be <code>3</code>, because <code>5</code> is the maximum item in the list and its number of occurrences is <code>3</code>.</p>
<p>This is what I have tried so far:</p>
<pre><code>from collections import defaultdict
d = defaultdict(int)
for i in height:
    d[i] += 1
result = max(d.iteritems(), key=lambda x: x[1])
print len(result)
</code></pre>
<p>But this does not work for every list: it finds the item with the most occurrences, which is not always the maximum item.</p>
| -1 | 2016-07-27T06:58:22Z | 38,606,210 | <p>You were picking the maximum <em>count</em>, rather than the maximum <em>item</em>. You could have solved this by dropping the <code>key</code> argument to <code>max()</code>, and then just print the result (not the length of it, that'll <em>always</em> be 2!):</p>
<pre><code>result = max(d.iteritems())
print result # prints the (maxvalue, count) pair.
</code></pre>
<p>Alternatively, print <code>result[1]</code> to just print the count for the maximum value.</p>
<p>Use a <a href="https://docs.python.org/2/library/collections.html#itertools.Counter" rel="nofollow"><code>collections.Counter()</code> object</a> to count your items, then find the maximum key-value pair in that:</p>
<pre><code>from collections import Counter
counts = Counter(L)
max_key, max_key_count = max(counts.iteritems())
print max_key_count
</code></pre>
<p>Like your own, this is a O(KN) approach where K is the length of <code>L</code> and <code>N</code> is the number of unique items. This is slightly more efficient than the <code>max_element = max(L); count = L.count(max_element)</code> approach in that it avoids looping over all of <code>L</code> twice. Which one is faster in practice depends on how much smaller <code>N</code> is to <code>K</code>.</p>
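If what you need is specifically the count of the largest item (3 occurrences of 5 in the example), a single Counter lookup keyed by the maximum also works:

```python
from collections import Counter

L = [3, 2, 1, 3, 5, 4, 5, 3, 5, 3]
counts = Counter(L)
print(counts[max(counts)])   # 3 -- the largest item, 5, occurs 3 times
```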
| 2 | 2016-07-27T07:15:13Z | [
"python"
] |
No. of occurrences of maximum item in a list | 38,605,838 | <p>Suppose I have a <code>list</code></p>
<pre><code>L= [3 2 1 3 5 4 5 3 5 3]
</code></pre>
<p>The output should be <code>3</code>, because <code>5</code> is the maximum item in the list and its number of occurrences is <code>3</code>.</p>
<p>This is what I have tried so far:</p>
<pre><code>from collections import defaultdict
d = defaultdict(int)
for i in height:
    d[i] += 1
result = max(d.iteritems(), key=lambda x: x[1])
print len(result)
</code></pre>
<p>But this does not work for every list: it finds the item with the most occurrences, which is not always the maximum item.</p>
| -1 | 2016-07-27T06:58:22Z | 38,606,237 | <p>Use <code>max</code> and <code>list.count</code>:</p>
<pre><code>max_element= max(L)
count= L.count(max_element)
print(count)
</code></pre>
| 3 | 2016-07-27T07:16:33Z | [
"python"
] |
Using python pandas timestamps in sklearn.cross_validation.cross_val_score | 38,605,916 | <p>One of my dataframe columns is dates. In order to use it in my analysis I convert it to datetime as follows:</p>
<pre><code>datetime_columns = ['my_dates']
for c in datetime_columns:
    df[c] = pd.to_datetime(df[c], infer_datetime_format=False)
</code></pre>
<p>Conversion does the job:</p>
<pre><code>print df['my_dates'].dtype
datetime64[ns]
</code></pre>
<p>However, when I use it further, <code>sklearn.cross_validation.cross_val_score()</code> throws a <code>TypeError</code>:</p>
<pre><code>features = df[list(feature_columns)] # Includes 'my_dates'
labels = df[list(target_columns)]
cross_val_score(LinearRegression(), features.values, labels.values, cv=5)
TypeError: float() argument must be a string or a number
</code></pre>
<p>All my other columns (without my_dates) have numeric format:</p>
<pre><code>print list((set(features.dtypes).union(set(labels.dtypes))))
[dtype('int8'), dtype('int64'), dtype('float64')]
</code></pre>
<p>This error occurs only if the 'my_dates' column is included in the features. <code>cross_val_score()</code> seems not to work with timestamps, but I need them in my analysis. What is the pythonic or pandastic way to make it work?</p>
| 2 | 2016-07-27T07:02:30Z | 38,607,335 | <p>Try to convert your <code>my_dates</code> column into <code>np.int64</code> dtype in order to make <code>cross_val_score()</code> happy</p>
<p>Demo:</p>
<pre><code>In [330]: df = pd.DataFrame({'my_dates':pd.date_range('2001-01-01', periods=10, freq='55555T')})
In [331]: df
Out[331]:
my_dates
0 2001-01-01 00:00:00
1 2001-02-08 13:55:00
2 2001-03-19 03:50:00
3 2001-04-26 17:45:00
4 2001-06-04 07:40:00
5 2001-07-12 21:35:00
6 2001-08-20 11:30:00
7 2001-09-28 01:25:00
8 2001-11-05 15:20:00
9 2001-12-14 05:15:00
In [333]: df.my_dates.astype(np.int64) // 10**9
Out[333]:
0 978307200
1 981640500
2 984973800
3 988307100
4 991640400
5 994973700
6 998307000
7 1001640300
8 1004973600
9 1008306900
Name: my_dates, dtype: int64
</code></pre>
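The same conversion as a self-contained snippet (the two dates are arbitrary examples; the first matches the 2001-01-01 epoch value shown above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'my_dates': pd.to_datetime(['2001-01-01', '2001-01-02'])})
seconds = df['my_dates'].astype(np.int64) // 10**9   # ns since epoch -> seconds
print(seconds.tolist())   # [978307200, 978393600]
```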
| 0 | 2016-07-27T08:10:07Z | [
"python",
"pandas",
"timestamp",
"scikit-learn"
] |
Accessing <td> elements in a table that do not have ID or class when using python and BeautifulSoup on html/css pages | 38,605,967 | <p>I am scraping a page using Selenium, Python and Beautiful Soup, and I want to output the rows of a table as comma-delimited values. Unfortunately the HTML of the page is all over the place. So far I have managed to extract two columns by using the IDs of their elements. The rest of the values are just contained in table cells without an identifier such as a class or id. Here is a sample of the results.</p>
<pre><code><table id="tblResults" style="z-index: 102; left: 18px; width: 956px;
height: 547px" cellspacing="1" width="956" border="0">
<tr style="color:Black;background-color:LightSkyBlue;font-family:Arial;font-weight:normal;font-style:normal;text-decoration:none;">
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>Select</td>
<td><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl00&#39;,&#39;&#39;)" style="color:Black;">T</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl01&#39;,&#39;&#39;)" style="color:Black;">Party</a></td>
<td>Opposite Party</td>
<td style="width:50px;"><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl02&#39;,&#39;&#39;)" style="color:Black;">Type</a></td>
<td style="width:100px;"><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl03&#39;,&#39;&#39;)" style="color:Black;">Book-Page</a></td>
<td style="width:70px;"><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl04&#39;,&#39;&#39;)" style="color:Black;">Date</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$ContentPlaceHolder1$grdResults$ctl02$ctl05&#39;,&#39;&#39;)" style="color:Black;">Town</a></td>
</tr>
<tr style="font-family:Arial;font-size:Smaller;font-weight:normal;font-style:normal;text-decoration:none;">
<td align="left" valign="top" style="font-weight:normal;font-style:normal;text-decoration:none;">
<input type="submit" name="ctl00$ContentPlaceHolder1$grdResults$ctl03$btnView" value="View" id="ContentPlaceHolder1_grdResults_btnView_0" title="Click to view this document" style="width:50px;" />
</td>
<td align="left" valign="top" style="font-weight:normal;font-style:normal;text-decoration:none;">
<input type="submit" name="ctl00$ContentPlaceHolder1$grdResults$ctl03$btnMyDoc" value="My Doc" id="ContentPlaceHolder1_grdResults_btnMyDoc_0" title="Click to add this document to My Documents" style="width:60px;" />
</td>
<td valign="top">
<span title="Click here to select this document"><input id="ContentPlaceHolder1_grdResults_CheckBox1_0" type="checkbox" name="ctl00$ContentPlaceHolder1$grdResults$ctl03$CheckBox1" /></span>
</td>
<td>1</td>
<td>
<span id="ContentPlaceHolder1_grdResults_lblParty1_0" title="Grantors:
ALBERT G MOSES FARM
MOSES ALBERT G
Grantees:
">MOSES ALBERT G</span>
</td>
<td>
<span id="ContentPlaceHolder1_grdResults_lblParty2_0" title="Grantors:
ALBERT G MOSES FARM
MOSES ALBERT G
Grantees:
"></span>
</td>
<td valign="top">MAP</td>
<td valign="top">- </td>
<td valign="top">01/16/1953</td>
<td valign="top">TOWN OF BINGHAMTON</td>
</tr>
<tr style="background-color:Gainsboro;font-family:Arial;font-size:Smaller;font-weight:normal;font-style:normal;text-decoration:none;">
<td align="left" valign="top" style="font-weight:normal;font-style:normal;text-decoration:none;">
<input type="submit" name="ctl00$ContentPlaceHolder1$grdResults$ctl04$btnView" value="View*" id="ContentPlaceHolder1_grdResults_btnView_1" title="Click to view this document" style="width:50px;" />
</td>
<td align="left" valign="top" style="font-weight:normal;font-style:normal;text-decoration:none;">
<input type="submit" name="ctl00$ContentPlaceHolder1$grdResults$ctl04$btnMyDoc" value="My Doc" id="ContentPlaceHolder1_grdResults_btnMyDoc_1" title="Click to add this document to My Documents" style="width:60px;" />
</td>
<td valign="top">
<span title="Click here to select this document"><input id="ContentPlaceHolder1_grdResults_CheckBox1_1" type="checkbox" name="ctl00$ContentPlaceHolder1$grdResults$ctl04$CheckBox1" /></span>
</td>
<td>1</td>
<td>
<span id="ContentPlaceHolder1_grdResults_lblParty1_1" title="Grantors:
MOSS EMMY-IND&amp;GDN
MOSES ALEXANDRA/GDN
Grantees:
GOODRICH MERLE L
GOODRICH CHARITY M
">MOSES ALEXANDRA/GDN</span>
</td>
<td>
<span id="ContentPlaceHolder1_grdResults_lblParty2_1" title="Grantors:
MOSS EMMY-IND&amp;GDN
MOSES ALEXANDRA/GDN
Grantees:
GOODRICH MERLE L
GOODRICH CHARITY M
">GOODRICH MERLE L</span>
</td>
</table>
</code></pre>
<p>This is the script that I have written so far; it works for the two columns:</p>
<pre><code>import re
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = open('searched.html')
bsObj = BeautifulSoup(html)
myTable = bsObj.findAll("tr",{ "style":re.compile("font-family:Arial;font-size:Smaller;font-weight:normal;font-style:normal;text-decoration:none;")} )
for table_ in myTable:
    party = table_.find("span", {"id": re.compile("Party1_*")})
    oppositeParty = table_.find("span", {"id": re.compile("Party2_*")})
    print(party.get_text() + "," + oppositeParty.get_text())
</code></pre>
<p>I have also tried using the children of myTable, as follows:</p>
<p><code>myTable.children</code></p>
| 0 | 2016-07-27T07:04:39Z | 38,612,036 | <p>If all you want is to dump out the content, something like this should do (note that the element lookups are Selenium calls on <code>driver</code>, not on the BeautifulSoup object):</p>
<pre><code>table = driver.find_element_by_tag_name("table")
rows = table.find_elements_by_tag_name("tr")
for row_ in rows:
    columns = row_.find_elements_by_tag_name("td")
    # print the comma-delimited text of this row's cells
    print(", ".join(column.text for column in columns))
</code></pre>
<p>If you're really wanting to scrape specific information, you'll need to provide us with more instructions about what your ultimate goal is.</p>
| 0 | 2016-07-27T11:45:08Z | [
"python",
"html",
"css",
"selenium",
"beautifulsoup"
] |
How to get output of Python Selenium code in Excel format? | 38,606,024 | <pre><code>import urllib2
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
url = ("http://www.justdial.com/Mumbai/CA")
driver = webdriver.Firefox()
driver.get(url)
driver
elements = driver.find_elements_by_xpath('//div[@class="col-md-12 col-xs-12 padding0"]')
for e in elements:
    print e.text
url = driver.current_url
company_name = driver.find_element_by_xpath('//span[@class="jcn"]').text
contact_number = driver.find_element_by_xpath('//p[@class="contact_info"]').text
address = driver.find_element_by_xpath('//p[@class="adress_info"]').text
address_info = driver.find_element_by_xpath('//p[@class="address-info adinfoex"]').text
estd = driver.find_element_by_xpath('//li[@class="fr"]').text
ratings = driver.find_element_by_xpath('//li[@class="last"]').text
tf = 'textfile.csv'
f2 = open(tf, 'a+')
f2.write(', '.join([data.encode('utf-8') for data in [company_name]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [contact_number]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [address]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [address_info]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [estd, ratings]]) + '\n')
f2.close()
</code></pre>
| 0 | 2016-07-27T07:07:19Z | 38,610,461 | <p>The following should get you started. The important thing is to ensure your xpath entries select exactly what you need. Python's <a href="https://docs.python.org/2/library/csv.html#module-csv" rel="nofollow"><code>csv module</code></a> can then write the resulting lists as comma-separated rows without you needing to add your own commas:</p>
<pre><code>import csv
import urllib2
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
def get_elements_by_xpath(driver, xpath):
    return [entry.text for entry in driver.find_elements_by_xpath(xpath)]
url = ("http://www.justdial.com/Mumbai/CA")
driver = webdriver.Firefox()
driver.get(url)
search_entries = [
("CompanyName", "//span[@class='jcn']"),
("ContactNumber", "//p[@class='contact-info']/span/a"),
("Address", "//span[@class='desk-add jaddt']"),
("AddressInfo", "//p[@class='address-info adinfoex']"),
("Estd", "//li[@class='fr']"),
("Ratings", "//li[@class='last']/a/span[@class='rt_count']")]
with open('textfile.csv', 'wb') as f_output:
    csv_output = csv.writer(f_output)
    # Write header
    csv_output.writerow([name for name, xpath in search_entries])
    entries = []
    for name, xpath in search_entries:
        entries.append(get_elements_by_xpath(driver, xpath))
    csv_output.writerows(zip(*entries))
</code></pre>
<p>Which would give you a CSV file looking something like:</p>
<pre class="lang-none prettyprint-override"><code>CompanyName,ContactNumber,Address,AddressInfo,Estd,Ratings
Bansal Investment & Consult...,+(91)-22-38578062,Manpada-thane West.. | more..,"CA, Tax Consultants, more...",Estd.in 2003,27 Ratings
G.Kedia & Associates,+(91)-22-38555914,"Station Road, Thane We.. | more..","CA, Company Registration Consultants, more...",Estd.in 2010,17 Ratings
Tarun Shah & Associates,+(91)-22-38552775,"Mogra Lane, Andheri Ea.. | more..","CA, Income Tax Consultants, more...",Estd.in 2000,12 Ratings
Hemant Shah And Associates LLP,+(91)-22-38588696,"Azad Road, Andheri Eas.. | more..","CA, Company Secretary, more...",Estd.in 1988,65 Ratings
</code></pre>
<p>The loop runs each of your xpath searches and appends the resulting list of matches to <code>entries</code>. Each search returns one list per column, so you end up with a list of lists.</p>
<p>This then needs to be written to a CSV file. <code>entries</code> is in column order, while the CSV file must be written in row order, so <code>zip(*entries)</code> is used to transpose it. As the whole array is then in the correct order, a single call to <code>writerows</code> writes the whole file in one go.</p>
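The transpose step in isolation, to make the column-to-row flip concrete:

```python
columns = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]   # one list per column
rows = list(zip(*columns))                             # one tuple per row
print(rows)   # [('A1', 'B1', 'C1'), ('A2', 'B2', 'C2')]
```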
<p>An additional benefit of using Python's CSV library is that if any field contains commas, it automatically quotes that field so Excel does not interpret it as another column. Note that you will probably need to change the default cell formatting when the file is loaded, as Excel will try to guess each cell's type.</p>
| 0 | 2016-07-27T10:31:34Z | [
"python"
] |
REST Framework: how to serialize objects? | 38,606,028 | <p>I want to create a ListView with a array of nested objects. Here what I've tried so far:</p>
<p><strong>rest.py</strong></p>
<pre><code>class GroupDetailSerializer(serializers.ModelSerializer):
    class Meta:
        model = Group
        fields = (
            'id',
            'num',
            'students',
        )

@permission_classes((permissions.IsAdminUser,))
class GroupDetailView(mixins.ListModelMixin, viewsets.GenericViewSet):
    serializer_class = GroupDetailSerializer

    def get_queryset(self):
        return Group.objects.all()
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class Group(models.Model):
office = models.ForeignKey(Offices)
num = models.IntegerField()
@property
def students(self):
from pupils.models import Pupils
return Pupils.objects.filter(group=self)
</code></pre>
<p>But it returns a type error:</p>
<blockquote>
<p><code><Pupils: John Doe> is not JSON serializable</code></p>
</blockquote>
<p>I guess I need to use another serializer on my <code>students</code> field, but how?</p>
| 0 | 2016-07-27T07:07:35Z | 38,606,368 | <p>The error occurs because your model instance is not JSON serializable.</p>
<p>You can follow @yuwang's comment and use a nested serializer: <a href="http://www.django-rest-framework.org/api-guide/serializers/#dealing-with-nested-objects" rel="nofollow">http://www.django-rest-framework.org/api-guide/serializers/#dealing-with-nested-objects</a></p>
<p>Or, for now, particularly for this case, you can change your code to:</p>
<pre><code>@property
def students(self):
from pupils.models import Pupils
return list(Pupils.objects.filter(group=self).values())
</code></pre>
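As a hedged, framework-free illustration of why this works (the `Pupil` class below is a stand-in, not the real Django model): `json` can serialize plain dicts but not arbitrary objects, and `QuerySet.values()` yields dicts, which `list(...)` then collects.

```python
import json

class Pupil:                       # stand-in for the Django model
    def __init__(self, name):
        self.name = name

pupils = [Pupil('John Doe')]

try:
    json.dumps(pupils)             # TypeError: objects are not serializable
    serializable = True
except TypeError:
    serializable = False
print(serializable)                # False

as_dicts = [{'name': p.name} for p in pupils]   # the shape .values() gives
print(json.dumps(as_dicts))        # [{"name": "John Doe"}]
```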
| 1 | 2016-07-27T07:21:39Z | [
"python",
"json",
"django",
"django-rest-framework"
] |
Error occured while building recommendation Engine using pio build --verbose in predictionio | 38,606,214 | <p>I am trying to Integrate Predicitionio with my App. I followed the steps in <a href="http://docs.prediction.io/templates/recommendation/quickstart/" rel="nofollow">Quick start</a>
I am stuck in step 5, Deploying Engine as a Service. I couldn't build the Recommendation using <code>pio build --verbose</code>. Following logs were printed when I execute <code>pio build --verbose</code>.</p>
<pre><code>Warning: pio-env.sh was not found in /home/PredictionIO/conf. Using system environment variables instead.
=:/home/PredictionIO/PredictionIO-0.9.6/conf is probably an Apache Spark development tree. Please make sure you are using at least 1.3.0.
[INFO] [Console$] Using existing engine manifest JSON at /home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/manifest.json
[INFO] [Console$] Using command '/home/PredictionIO/sbt/sbt' at the current working directory to build.
[INFO] [Console$] If the path above is incorrect, this process will fail.
[INFO] [Console$] Uber JAR disabled. Making sure lib/pio-assembly-0.9.6.jar is absent.
[INFO] [Console$] Going to run: /home/PredictionIO/sbt/sbt package assemblyPackageDependency
[DEBUG] [UpgradeCheckRunner] java.net.UnknownHostException: {e.getMessage}
[INFO] [Console$] [info] Loading project definition from /home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/project
[INFO] [Console$] [info] Set current project to template-scala-parallel-recommendation (in build file:/home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/)
[INFO] [Console$] [warn] No main class detected
[INFO] [Console$] [success] Total time: 1 s, completed Jul 27, 2016 12:20:53 PM
[INFO] [Console$] [warn] No main class detected
[INFO] [Console$] [info] Including from cache: scala-library.jar
[INFO] [Console$] [info] Checking every *.class/*.jar file's SHA-1.
[INFO] [Console$] [info] Merging files...
[INFO] [Console$] [warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[INFO] [Console$] [warn] Strategy 'discard' was applied to a file
[INFO] [Console$] [info] Assembly up to date: /home/PredictionIO/PredictionIO-0.9.6/bin/MyRecommendation/target/scala-2.10/template-scala-parallel-recommendation-assembly-0.1-SNAPSHOT-deps.jar
[INFO] [Console$] [success] Total time: 1 s, completed Jul 27, 2016 12:20:54 PM
[INFO] [Console$] Build finished successfully.
[INFO] [Console$] Looking for an engine...
[INFO] [Console$] Found template-scala-parallel-recommendation_2.10-0.1-SNAPSHOT.jar
[INFO] [Console$] Found template-scala-parallel-recommendation-assembly-0.1-SNAPSHOT-deps.jar
[WARN] [Storage$] There is no properly configured data source.
[WARN] [Storage$] There is no properly configured repository.
[ERROR] [Storage$] Required repository (METADATA) configuration is missing.
[ERROR] [Storage$] There were 1 configuration errors. Exiting.
</code></pre>
<p>Please help me to figure this out.</p>
| 2 | 2016-07-27T07:15:26Z | 38,634,140 | <p>The reason was that <code>pio</code> was not set as an environment variable (i.e. not on the PATH), so I executed the engine build with the full path as follows:</p>
<pre><code>/home/PredictionIO/PredictionIO-0.9.6/bin/pio build --verbose
</code></pre>
<p>Then it worked for me.</p>
| 0 | 2016-07-28T10:47:39Z | [
"python",
"statistics",
"prediction",
"predictionio"
] |
Python, Correct way to formulate Binary Data ready to be sent over serial | 38,606,244 | <p>I am in need of creating a device which talks to a different device over serial. Pretty basic stuff.</p>
<p>However, All I need to do is pass down specific binary data and the device will handle the rest.</p>
<p>The data has to be in binary format and I have seen various way to do it across the internet but really unsure of what is the correct way of representing binary data and NOT a string.</p>
<p>Here are a few examples of what I found:</p>
<pre><code>b'01001011' # Is this a packed string though?
bytes(4) # This creates bytes. How do I manipulate the bits?, is this data able to send over serial?
int('01001011', 2) # Will this be treated as an integer over serial?
binascii.hexify() # This produces ASCII representation
</code></pre>
<p>I need to formulate a few bytes of information which will involve me setting certain bits in each byte and I'm rather confused how to go about it</p>
| 0 | 2016-07-27T07:16:49Z | 38,606,664 | <p>Binary literals in python look like this:</p>
<pre><code>>>> 0b11
3
>>> 0b10
2
>>> 0b100
4
</code></pre>
<p>you can manipulate bits using bitwise operators:</p>
<pre><code>>>> 0b1000
8
>>> 0b1000 | 0b1
9
</code></pre>
<p><code>|</code> is just the <code>or</code> operator. See other operators here: <a href="https://wiki.python.org/moin/BitwiseOperators" rel="nofollow">BitwiseOperators</a></p>
<p>To see the numbers binary representation you can use string.format:</p>
<pre><code>>>> "{0:b}".format(9)
'1001'
>>> "{0:b}".format(65)
'1000001'
>>> "{0:b}".format(234)
'11101010'
</code></pre>
<p><strong>EDIT</strong>
Example of setting or clearing a particular bit:</p>
<pre><code># clearing the third bit (the mask has a 0 there)
>>> bin(0b1100 & 0b1011)
'0b1000'
# setting the second bit
>>> bin(0b1100 | 0b0010)
'0b1110'
</code></pre>
<p>Note that binary literals give you a int:</p>
<pre><code>>>> type(0b1)
<type 'int'>
</code></pre>
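To go from these ints to actual bytes for the wire, here is a hedged Python 3 sketch; the flag meanings and frame layout below are invented for illustration.

```python
import struct

status = 0b0000
status |= 0b0001          # set a hypothetical "ready" flag (bit 0)
status |= 0b0100          # set a hypothetical "error" flag (bit 2)

command = 0x4B            # an arbitrary one-byte command id

# bytes() accepts an iterable of ints in range 0..255
frame = bytes([command, status])
print(frame)              # b'K\x05'

# struct.pack handles multi-byte fields with explicit endianness:
# ">BH" = big-endian, one unsigned byte then one unsigned 16-bit int
packed = struct.pack(">BH", status, 0x1234)
print(packed)             # b'\x05\x124'
```

With pyserial you would then pass such a bytes object to the port's write method to send those exact bytes.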
| 1 | 2016-07-27T07:36:08Z | [
"python",
"serial-port",
"byte",
"bit"
] |
How to get pseudo-private attribute through inner class in Python? | 38,606,265 | <pre><code>class Test:
__x = 1
class C:
def test(self):
print(Test.__x)
c = C()
a = Test()
a.c.test()
</code></pre>
<p>I get Error Information like this</p>
<blockquote>
<p>AttributeError: type object 'Test' has no attribute '_C__x'</p>
</blockquote>
<p>So, is it that an inner class cannot access the outer class?
Or can it, using some other technique?</p>
<p>This question comes from reading Learning Python, where the author writes about CardHolder, an inner class used as a descriptor that accesses instance.__name to reach the outer class's attribute. So what is the rule for when we can access a __X attribute?</p>
<p>Thank you for reading my problem.</p>
| 3 | 2016-07-27T07:17:43Z | 38,606,477 | <p>Adding two leading underscores is the proper way to declare pseudo-private attributes in Python; it triggers name mangling. Your code would work fine if you changed the name of <code>__x</code> to <code>_Test__x</code> when you call it from the other class.</p>
<pre><code>class Test:
__x = 1
class C:
def test(self):
print(Test._Test__x)
c = C()
def test2(self):
print self.__x
a = Test()
a.test2() # prints 1
a.c.test() # prints 1
</code></pre>
<p><a href="http://www.bogotobogo.com/python/python_private_attributes_methods.php" rel="nofollow">This tutorial</a> goes into the details. The rule is: you have to be inside the defining class to refer to the attribute directly. Subclasses won't work (they mangle to their own class name), but methods of the defining class will.</p>
| 1 | 2016-07-27T07:26:34Z | [
"python",
"python-3.x"
] |
How to get pseudo-private attribute through inner class in Python? | 38,606,265 | <pre><code>class Test:
__x = 1
class C:
def test(self):
print(Test.__x)
c = C()
a = Test()
a.c.test()
</code></pre>
<p>I get Error Information like this</p>
<blockquote>
<p>AttributeError: type object 'Test' has no attribute '_C__x'</p>
</blockquote>
<p>So, is it that an inner class cannot access the outer class?
Or can it, using some other technique?</p>
<p>This question comes from reading Learning Python, where the author writes about CardHolder, an inner class used as a descriptor that accesses instance.__name to reach the outer class's attribute. So what is the rule for when we can access a __X attribute?</p>
<p>Thank you for reading my problem.</p>
| 3 | 2016-07-27T07:17:43Z | 38,606,797 | <p>From <a href="https://docs.python.org/3/tutorial/classes.html#private-variables" rel="nofollow">Private Variables</a></p>
<blockquote>
<p>Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped. <em>This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</em></p>
</blockquote>
<pre><code>class Test:
__x = 1 # <= A
class C:
def test(self):
print(Test.__x) # <= B
c = C()
</code></pre>
<ul>
<li>In <code>A</code> the class is <code>Test</code> and therefore <code>__x</code> is replaced with <code>_Test__x</code>, So, <code>Test</code> actually have the attribute <code>_Test__x</code></li>
<li>In <code>B</code> the class is <code>C</code> and therefore <code>__x</code> is replaced with <code>_C__x</code>, So the attribute you actually access is <code>Test._C__x</code></li>
</ul>
<p>To access the 'private' <code>__x</code> attribute of the <code>Test</code> class outside the <code>Test</code> class definition, you should use <code>Test._Test__x</code>:</p>
<pre><code>print(Test._Test__x)
</code></pre>
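A small self-contained demonstration of the mangling rule quoted above: the attribute is actually stored under its mangled name, and spelling that name out lets the inner class reach it.

```python
class Test:
    __x = 1

    class C:
        def test(self):
            # writing Test.__x here would mangle to Test._C__x and fail;
            # the explicit mangled name of the outer class works
            return Test._Test__x

print('_Test__x' in Test.__dict__)   # True: the stored key is mangled
print(Test.C().test())               # 1
```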
| 2 | 2016-07-27T07:43:10Z | [
"python",
"python-3.x"
] |
Looping through HTML tags using BeautifulSoup | 38,606,290 | <p>As mentioned in the previous questions, I am using Beautiful soup with python to retrieve weather data from a website.</p>
<p>Here's how the website looks like:</p>
<pre><code><channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
..
..
<area forecast="PC" lat="1.41800000" lon="103.83900000" name="Yishun"/>
<channel>
</code></pre>
<p>I managed to retrieve the information I need using these codes :</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import urllib3
import csv
import sys
import json
#getting the Validtime
area_attrs_li = []
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?
dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time
#getting the date
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "date: " + element['date']
#getting the time
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "time: " + element['time']
#print area
for area in soup.select('area'):
area_attrs_li.append(area)
print area
#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
f = open("C:\\scripts\\testing\\testingnea.csv" , 'wt')
try:
for area in area_attrs_li:
#print str(area) + "\n"
writer = csv.writer(f)
writer.writerow( (time, element['date'], element['time'], area, name))
finally:
f.close()
print open("C:/scripts/testing/testingnea.csv", 'rt').read()
</code></pre>
<p>I managed to get the data in a CSV, however when I run this part of the codes:</p>
<pre><code>#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
</code></pre>
<p>This is the result:</p>
<p><a href="http://i.stack.imgur.com/8T1gg.png" rel="nofollow"><img src="http://i.stack.imgur.com/8T1gg.png" alt="This is what I got"></a></p>
<p>Apparently, my loop is not working as it keeps printing the last area of the last record over and over again. </p>
<p><strong>EDIT</strong>: I tried looping through the data for area in the list :</p>
<pre><code>for area in area_attrs_li:
name = (area.get('name'))
print name
</code></pre>
<p>However, its still not looping.</p>
<p>I'm not sure where did the codes go wrong :/</p>
| 0 | 2016-07-27T07:18:30Z | 38,606,655 | <p>The problem is in the line: <code>writer.writerow( (time, element['date'], element['time'], area, name))</code> — the <code>name</code> never changes; it still holds the value from the last iteration of the earlier loop.</p>
<p>A way to fix it:</p>
<pre><code>try:
for index, area in enumerate(area_attrs_li):
# print str(area) + "\n"
writer = csv.writer(f)
writer.writerow((time, element['date'], element['time'], area, areas[index].get('name')))
finally:
f.close()
</code></pre>
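A hedged sketch of the same pairing idea using zip() instead of index lookups, so that each forecast code always travels with its matching name (the sample values below are made up):

```python
import csv
import io

forecasts = ['TL', 'SH']
names = ['Ang Mo Kio', 'Bedok']

buf = io.StringIO()
writer = csv.writer(buf)
# zip pairs the i-th forecast with the i-th name on every iteration
for forecast, name in zip(forecasts, names):
    writer.writerow(('3.30 pm to 5.30 pm', '18-07-2016', forecast, name))

rows = buf.getvalue().splitlines()
print(rows)
```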
| 1 | 2016-07-27T07:35:44Z | [
"python",
"beautifulsoup"
] |
Looping through HTML tags using BeautifulSoup | 38,606,290 | <p>As mentioned in the previous questions, I am using Beautiful soup with python to retrieve weather data from a website.</p>
<p>Here's how the website looks like:</p>
<pre><code><channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
..
..
<area forecast="PC" lat="1.41800000" lon="103.83900000" name="Yishun"/>
<channel>
</code></pre>
<p>I managed to retrieve the information I need using these codes :</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import urllib3
import csv
import sys
import json
#getting the Validtime
area_attrs_li = []
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?
dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time
#getting the date
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "date: " + element['date']
#getting the time
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "time: " + element['time']
#print area
for area in soup.select('area'):
area_attrs_li.append(area)
print area
#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
f = open("C:\\scripts\\testing\\testingnea.csv" , 'wt')
try:
for area in area_attrs_li:
#print str(area) + "\n"
writer = csv.writer(f)
writer.writerow( (time, element['date'], element['time'], area, name))
finally:
f.close()
print open("C:/scripts/testing/testingnea.csv", 'rt').read()
</code></pre>
<p>I managed to get the data in a CSV, however when I run this part of the codes:</p>
<pre><code>#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
</code></pre>
<p>This is the result:</p>
<p><a href="http://i.stack.imgur.com/8T1gg.png" rel="nofollow"><img src="http://i.stack.imgur.com/8T1gg.png" alt="This is what I got"></a></p>
<p>Apparently, my loop is not working as it keeps printing the last area of the last record over and over again. </p>
<p><strong>EDIT</strong>: I tried looping through the data for area in the list :</p>
<pre><code>for area in area_attrs_li:
name = (area.get('name'))
print name
</code></pre>
<p>However, its still not looping.</p>
<p>I'm not sure where did the codes go wrong :/</p>
| 0 | 2016-07-27T07:18:30Z | 38,606,680 | <p>This is because when you are writing, you are referring to the last value left over from the loop. Try this:</p>
<pre><code>writer.writerow( (time, element['date'], element['time'], area, area['name']))
</code></pre>
| 1 | 2016-07-27T07:36:51Z | [
"python",
"beautifulsoup"
] |
Looping through HTML tags using BeautifulSoup | 38,606,290 | <p>As mentioned in the previous questions, I am using Beautiful soup with python to retrieve weather data from a website.</p>
<p>Here's how the website looks like:</p>
<pre><code><channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
..
..
<area forecast="PC" lat="1.41800000" lon="103.83900000" name="Yishun"/>
<channel>
</code></pre>
<p>I managed to retrieve the information I need using these codes :</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import urllib3
import csv
import sys
import json
#getting the Validtime
area_attrs_li = []
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?
dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time
#getting the date
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "date: " + element['date']
#getting the time
for currentdate in soup.find_all('item'):
element = currentdate.find('forecastIssue')
print "time: " + element['time']
#print area
for area in soup.select('area'):
area_attrs_li.append(area)
print area
#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
f = open("C:\\scripts\\testing\\testingnea.csv" , 'wt')
try:
for area in area_attrs_li:
#print str(area) + "\n"
writer = csv.writer(f)
writer.writerow( (time, element['date'], element['time'], area, name))
finally:
f.close()
print open("C:/scripts/testing/testingnea.csv", 'rt').read()
</code></pre>
<p>I managed to get the data in a CSV, however when I run this part of the codes:</p>
<pre><code>#print area name
areas = soup.select('area')
for data in areas:
name = (data.get('name'))
print name
</code></pre>
<p>This is the result:</p>
<p><a href="http://i.stack.imgur.com/8T1gg.png" rel="nofollow"><img src="http://i.stack.imgur.com/8T1gg.png" alt="This is what I got"></a></p>
<p>Apparently, my loop is not working as it keeps printing the last area of the last record over and over again. </p>
<p><strong>EDIT</strong>: I tried looping through the data for area in the list :</p>
<pre><code>for area in area_attrs_li:
name = (area.get('name'))
print name
</code></pre>
<p>However, its still not looping.</p>
<p>I'm not sure where did the codes go wrong :/</p>
| 0 | 2016-07-27T07:18:30Z | 38,606,750 | <p>After the loop, the <code>name</code> variable holds only one value. You need a list. Try this:</p>
<pre><code>areas = soup.select('area')
name=[]
for data in areas:
name.append(data.get('name'))
print name
l=len(name)
</code></pre>
<p>and in try finally</p>
<pre><code>i=0
try:
for area in area_attrs_li:
writer = csv.writer(f)
writer.writerow( (time, element['date'], element['time'], area, name[i]))
i=i+1
</code></pre>
| 1 | 2016-07-27T07:40:44Z | [
"python",
"beautifulsoup"
] |
How do I select a random index in a print statement python | 38,606,291 | <p>I have this print statement and I want to select a random index from the list provided</p>
<pre><code>print("{randint(1, 10)}".format(player_r1, player_r2, player_r3, player_r4, player_r5, player_r6, player_r7, player_r8, player_r9, player_r10))
</code></pre>
| 0 | 2016-07-27T07:18:32Z | 38,606,394 | <p>Code expressions inside strings are not evaluated, it's just text. </p>
<pre><code>import random
print("{}".format(random.choice([player_r1, player_r2, player_r3, player_r4, player_r5, player_r6, player_r7, player_r8, player_r9, player_r10])))
</code></pre>
<p>Or something more readable:</p>
<pre><code>players = [player_r1, player_r2, player_r3, player_r4, player_r5, player_r6, player_r7, player_r8, player_r9, player_r10]
print("{}".format(random.choice(players)))
</code></pre>
| 1 | 2016-07-27T07:22:44Z | [
"python",
"indexing"
] |
Python:Syntax Error while using Lambda Function inside List | 38,606,293 | <p>I am relatively new to python. I am trying to filter data in List using a Lambda, but the compiler gives me a syntax error for the commented code.</p>
<pre><code># documents = [(list(filter(lambda w:w if not w in stop_words,movie_reviews.words(fileid))),category)
# for category in movie_reviews.categories()
# for fileid in movie_reviews.fileids(category)]
#
documents = [(list(movie_reviews.words(fileid)),category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
</code></pre>
<p>The uncommented section works, but the commented section gives a syntax error. Any inputs what i am doing wrong here?</p>
| 0 | 2016-07-27T07:18:34Z | 38,606,397 | <p><code>x if y</code> expressions require an <code>else</code>. It's an expression which must return a value, and without <code>else</code> it's undefined what's supposed to happen in the event that the <code>if</code> condition does not apply.</p>
<p>At the very least you need:</p>
<pre><code>w if w not in stop_words else None
</code></pre>
<p>(Also <code>x not in</code> is the preferred direct operation as opposed to <code>not x in</code>.)</p>
| 1 | 2016-07-27T07:22:58Z | [
"python",
"list",
"lambda"
] |
Python:Syntax Error while using Lambda Function inside List | 38,606,293 | <p>I am relatively new to python. I am trying to filter data in List using a Lambda, but the compiler gives me a syntax error for the commented code.</p>
<pre><code># documents = [(list(filter(lambda w:w if not w in stop_words,movie_reviews.words(fileid))),category)
# for category in movie_reviews.categories()
# for fileid in movie_reviews.fileids(category)]
#
documents = [(list(movie_reviews.words(fileid)),category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
</code></pre>
<p>The uncommented section works, but the commented section gives a syntax error. Any inputs what i am doing wrong here?</p>
| 0 | 2016-07-27T07:18:34Z | 38,606,411 | <p>The problem is here:</p>
<pre><code>w if not w in stop_words
</code></pre>
<p>This is the first half of a <a href="http://stackoverflow.com/questions/394809/does-python-have-a-ternary-conditional-operator">ternary condition operator</a>, but it's missing the <code>else</code> block.</p>
<p>You actually don't need this operator at all, your lambda should look like this:</p>
<pre><code>lambda w:not w in stop_words
</code></pre>
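A quick self-contained check of the corrected predicate with filter(); the word lists below are made up for illustration.

```python
stop_words = {'the', 'a', 'of'}
words = ['the', 'plot', 'of', 'a', 'movie']

# filter() keeps the items for which the predicate returns True
kept = list(filter(lambda w: w not in stop_words, words))
print(kept)   # ['plot', 'movie']
```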
| 2 | 2016-07-27T07:23:28Z | [
"python",
"list",
"lambda"
] |
Skimage: how to show image | 38,606,372 | <p>I am a novice at skimage, and I am trying to show an image in my IPython notebook:</p>
<pre><code>from skimage import data, io
coins = data.coins()
io.imshow(coins)
</code></pre>
<p>But I see only the following string:</p>
<pre><code><matplotlib.image.AxesImage at 0x7f8c9c0cc6d8>
</code></pre>
<p>Can anybody explain how to show the image right under the code, like here:
<a href="http://i.stack.imgur.com/yvE9y.png" rel="nofollow">Correct output</a></p>
| 2 | 2016-07-27T07:21:42Z | 38,606,757 | <p>Just add <code>matplotlib.pyplot.show()</code> after the <code>io.imshow(coins)</code> line. </p>
<pre><code>from skimage import data, io
from matplotlib import pyplot as plt
coins = data.coins()
io.imshow(coins)
plt.show()
</code></pre>
| 2 | 2016-07-27T07:41:10Z | [
"python",
"skimage"
] |
Pandas pivot table arrangement no aggregation | 38,606,393 | <p>I want to pivot a pandas dataframe without aggregation, and instead of presenting the pivot index column vertically I want to present it horizontally. I tried with <code>pd.pivot_table</code> but I'm not getting exactly what I wanted.</p>
<pre><code>data = {'year': [2011, 2011, 2012, 2013, 2013],
'A': [10, 21, 20, 10, 39],
'B': [12, 45, 19, 10, 39]}
df = pd.DataFrame(data)
print df
A B year
0 10 12 2011
1 21 45 2011
2 20 19 2012
3 10 10 2013
4 39 39 2013
</code></pre>
<p>But I want to have:</p>
<pre><code>year 2011 2012 2013
cols A B A B A B
0 10 12 20 19 10 10
1 21 45 NaN NaN 39 39
</code></pre>
| 4 | 2016-07-27T07:22:41Z | 38,606,514 | <p>You can first create column for new index by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow"><code>cumcount</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>df['g'] = df.groupby('year')['year'].cumcount()
df1 = df.set_index(['g','year']).stack().unstack([1,2])
print (df1)
year 2011 2012 2013
A B A B A B
g
0 10.0 12.0 20.0 19.0 10.0 10.0
1 21.0 45.0 NaN NaN 39.0 39.0
</code></pre>
<p>If you need to set the column names, use <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>):</p>
<pre><code>df['g'] = df.groupby('year')['year'].cumcount()
df1 = df.set_index(['g','year'])
.stack()
.unstack([1,2])
.rename_axis(None)
.rename_axis(('year','cols'), axis=1)
print (df1)
year 2011 2012 2013
cols A B A B A B
0 10.0 12.0 20.0 19.0 10.0 10.0
1 21.0 45.0 NaN NaN 39.0 39.0
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot.html" rel="nofollow"><code>pivot</code></a>, but you need to swap the first and second levels of the <code>MultiIndex</code> in the columns with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.swaplevel.html" rel="nofollow"><code>swaplevel</code></a> and then sort them with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>:</p>
<pre><code>df['g'] = df.groupby('year')['year'].cumcount()
df1 = df.pivot(index='g', columns='year')
df1 = df1.swaplevel(0,1, axis=1).sort_index(axis=1)
print (df1)
year 2011 2012 2013
A B A B A B
g
0 10.0 12.0 20.0 19.0 10.0 10.0
1 21.0 45.0 NaN NaN 39.0 39.0
</code></pre>
| 2 | 2016-07-27T07:28:11Z | [
"python",
"pandas",
"dataframe",
"pivot"
] |
Pandas pivot table arrangement no aggregation | 38,606,393 | <p>I want to pivot a pandas dataframe without aggregation, and instead of presenting the pivot index column vertically I want to present it horizontally. I tried with <code>pd.pivot_table</code> but I'm not getting exactly what I wanted.</p>
<pre><code>data = {'year': [2011, 2011, 2012, 2013, 2013],
'A': [10, 21, 20, 10, 39],
'B': [12, 45, 19, 10, 39]}
df = pd.DataFrame(data)
print df
A B year
0 10 12 2011
1 21 45 2011
2 20 19 2012
3 10 10 2013
4 39 39 2013
</code></pre>
<p>But I want to have:</p>
<pre><code>year 2011 2012 2013
cols A B A B A B
0 10 12 20 19 10 10
1 21 45 NaN NaN 39 39
</code></pre>
| 4 | 2016-07-27T07:22:41Z | 38,606,794 | <p><code>groupby('year')</code> so I can <code>reset_index</code> to get index values of <code>0</code> and <code>1</code>. Then do a bunch of clean up.</p>
<pre><code>df.groupby('year')['A', 'B'] \
.apply(lambda df: df.reset_index(drop=True)) \
.unstack(0).swaplevel(0, 1, 1).sort_index(1)
</code></pre>
<p><a href="http://i.stack.imgur.com/Fmol8.png" rel="nofollow"><img src="http://i.stack.imgur.com/Fmol8.png" alt="enter image description here"></a></p>
| 1 | 2016-07-27T07:42:50Z | [
"python",
"pandas",
"dataframe",
"pivot"
] |
How does python decorator work on this code? | 38,606,403 | <p>I'm trying to understand Python decorators. I thought I somehow understood them until I wrote this code.</p>
<pre><code>def func():
def wrapper(x):
return x()
return wrapper
@func()
def b():
return sum
a = b([1,2,5])
print a # Result: 8 How?
e = b # pass b function to variable e
f = e([3,4,8]) # called function b stored in variable e
print f # Result: 15
# I understand how 15 is derived here
</code></pre>
| 1 | 2016-07-27T07:23:12Z | 38,606,644 | <p>You used <code>func</code> as a decorator <em>factory</em>, which produces a decorator that <em>called</em> the original <code>b()</code> to produce the decoration result. Here's what happens:</p>
<ul>
<li><code>@func()</code> executes <code>func()</code> <em>first</em>, then uses the return value as the decorator. <code>func()</code> returns <code>wrapper</code>, so <code>wrapper</code> is used as the decorator.</li>
<li><code>wrapper(b)</code> sets <code>x = b</code>, and returns <code>x()</code>. So the result of the decorator is <code>b()</code>, which is <code>sum</code>. Python sets <code>b = sum</code></li>
<li>You called <code>b([1, 2, 5])</code> where <code>b = sum</code>. So <code>sum([1, 2, 5])</code> is returned.</li>
</ul>
<p>The important part here is that you used <code>func</code> not as a decorator, but as a decorator factory (calling it produces the actual decorator), which adds a layer of indirection.</p>
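The whole walkthrough can be reproduced as a runnable snippet:

```python
def func():
    def wrapper(x):
        return x()        # calls the decorated function immediately
    return wrapper

@func()                   # func() runs first; wrapper decorates b
def b():
    return sum            # so b is rebound to b(), which is sum

print(b is sum)           # True: the name b now refers to sum
print(b([1, 2, 5]))       # 8
```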
| 2 | 2016-07-27T07:35:15Z | [
"python",
"python-2.7",
"decorator",
"python-decorators"
] |
Fitting empirical distribution against a hyperbolic distribution using scipy.stats | 38,606,505 | <p>At the moment, I am fitting empirical distributions against theoretical ones as explained in <a href="http://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python">Fitting empirical distribution to theoretical ones with Scipy (Python)?</a></p>
<p>Using the <a href="http://docs.scipy.org/doc/scipy/reference/stats.html" rel="nofollow">scipy.stats</a> distributions, the results show a good fit for the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypsecant.html#scipy.stats.hypsecant" rel="nofollow">hyperbolic secant</a> distribution.</p>
<p>Here's my current approach using some of scipys distributions:</p>
<pre><code># -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
# Sample data with random numbers of hypsecant distribution
data = scipy.stats.hypsecant.rvs(size=8760, loc=1.93, scale=7.19)
# Distributions to check
dist_names = ['gausshyper', 'norm', 'gamma', 'hypsecant']
for dist_name in dist_names:
dist = getattr(scipy.stats, dist_name)
# Fit a distribution to the data
param = dist.fit(data)
# Plot the histogram
plt.hist(data, bins=100, normed=True, alpha=0.8, color='g')
# Plot and save the PDF
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = dist.pdf(x, *param[:-2], loc=param[-2], scale=param[-1])
plt.plot(x, p, 'k', linewidth=2)
title = 'Distribution: ' + dist_name
plt.title(title)
plt.savefig('fit_' + dist_name + '.png')
plt.close()
</code></pre>
<p>which delivers plots like the following:</p>
<p><a href="http://i.stack.imgur.com/lPAsV.png" rel="nofollow"><img src="http://i.stack.imgur.com/lPAsV.png" alt="enter image description here"></a></p>
<p>But I would like to test the fit against a (generalized) <a href="https://en.wikipedia.org/wiki/Hyperbolic_distribution" rel="nofollow">hyperbolic distribution</a> as well, since I suspect it might deliver an even better fit.</p>
<p>Is there a hyperbolic distribution in scipy.stats that I can use? Or is there any workaround?</p>
<p>Using other packages would also be an option.</p>
<p>Thanks in advance!</p>
| 1 | 2016-07-27T07:27:40Z | 38,624,040 | <p>As your distribution is not in <code>scipy.stats</code> you can either add it to the package or try doing things "by hand".</p>
<p>For the former have a look at the <a href="https://github.com/scipy/scipy/tree/master/scipy/stats" rel="nofollow">source code</a> of the <code>scipy.stats</code> package - it might not be all that much work to add a new distribution!</p>
<p>For the latter option you can use a maximum likelihood approach. To do so, first define a method giving you the pdf of the distribution. Based on the pdf, construct a function calculating the log likelihood of the data given specific parameters of the distribution. Finally, fit your model to the data by maximizing this log likelihood function using <code>scipy.optimize</code>.</p>
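<p>A minimal sketch of that recipe (illustrative assumptions: a Gaussian pdf stands in for the hyperbolic density, and the start values are arbitrary; substitute the (generalized) hyperbolic pdf and sensible start parameters for the real fit):</p>

```python
import numpy as np
from scipy import optimize

def pdf(x, mu, sigma):
    # stand-in density; replace with the (generalized) hyperbolic pdf
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def neg_log_likelihood(params, data):
    mu, sigma = params
    if sigma <= 0:                     # keep the scale parameter valid
        return np.inf
    return -np.sum(np.log(pdf(data, mu, sigma)))

data = np.random.RandomState(0).normal(loc=2.0, scale=3.0, size=2000)
res = optimize.minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(data,),
                        method='Nelder-Mead')
print(res.x)   # fitted (mu, sigma), close to the true (2.0, 3.0)
```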
| 2 | 2016-07-27T22:10:54Z | [
"python",
"scipy",
"statistics",
"statsmodels",
"scientific-computing"
] |
Measure current time in seconds? | 38,606,572 | <p>How can I take the current time in seconds in Python? I was using <code>calendar.timegm(time.gmtime())</code> but I'm not sure if this given value is in seconds?</p>
| 0 | 2016-07-27T07:31:41Z | 38,606,630 | <p>Try this : </p>
<pre><code>import time
print(round(time.time()))
</code></pre>
| 2 | 2016-07-27T07:34:29Z | [
"python",
"python-2.7",
"time",
"calendar"
] |
Measure current time in seconds? | 38,606,572 | <p>How can I take the current time in seconds in Python? I was using <code>calendar.timegm(time.gmtime())</code> but I'm not sure if this given value is in seconds?</p>
| 0 | 2016-07-27T07:31:41Z | 38,606,719 | <p>Use:</p>
<pre><code>from datetime import datetime
now = datetime.utcnow()
# datetime objects have no total_seconds(); that is a timedelta method.
# This computes the seconds elapsed since midnight (UTC):
sec = now.hour * 3600 + now.minute * 60 + now.second
</code></pre>
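<p>To the point of the question: <code>calendar.timegm(time.gmtime())</code> does give whole seconds since the epoch (UTC), i.e. the same clock as <code>time.time()</code> with the sub-second part dropped. A quick check:</p>

```python
import calendar
import time

a = calendar.timegm(time.gmtime())   # integer seconds since the epoch, UTC
b = time.time()                      # float seconds since the epoch
print(a, b)
assert abs(a - b) < 2                # same clock, up to rounding
```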
| 0 | 2016-07-27T07:39:14Z | [
"python",
"python-2.7",
"time",
"calendar"
] |
Error in Calling python module from Java | 38,606,613 | <p>Java Code to call Python:</p>
<pre><code>//arguments to be passed to the script
String[] patchArguments = { patchFileDirectory,centralPatchStagePath,patchId,patchFileName, action };
//initialize the interpreter with properties and arguments
PythonInterpreter.initialize(System.getProperties(), System.getProperties(), patchArguments);
pythonInterpreter = new PythonInterpreter();
//invoke python interpreter to execute the script
pythonInterpreter.execfile(opatchScriptPath + opatchScript);
</code></pre>
<p>Traceback (innermost last):</p>
<blockquote>
<p>File "/scratch/app/product/fmw/obpinstall/patching/scripts/PatchUtility.py", line 4, in ?</p>
<p>ImportError: no module named subprocess</p>
</blockquote>
<p>But subprocess is already installed and it runs if I execute the python file directly using terminal <code>python PatchUtility.py</code></p>
<p>Update: I found something</p>
<blockquote>
<p>Jython has some limitations:</p>
<p>There are a number of differences. First, Jython programs cannot use CPython
extension modules written in C. These modules usually have files with the
extension .so, .pyd or .dll.</p>
</blockquote>
<p>Does subprocess internally call C extensions?</p>
| 0 | 2016-07-27T07:33:48Z | 38,610,674 | <p>In short: No. Or Maybe. Or Yes. <strong>But most relevant for you, in Jython, No.</strong></p>
<p>TLDR: Jython has its own implementation of subprocess</p>
<p>The details are a little sketchy in the Python documentation; however, the PEP has more details ( <a href="https://www.python.org/dev/peps/pep-0324/" rel="nofollow">https://www.python.org/dev/peps/pep-0324/</a> ). This is the specification for how it should work, not the actual implementation: an implementation of Python can do whatever it likes as long as it is functionally the same (which, ok, makes it not 'whatever' it likes, but ... you get the idea).</p>
<p>From the spec:</p>
<blockquote>
<ul>
<li>On POSIX platforms, no extension module is required: the module
uses os.fork(), os.execvp() etc.</li>
<li>On Windows platforms, the module requires either Mark Hammond's
Windows extensions[5], or a small extension module called
_subprocess.</li>
</ul>
</blockquote>
<p>The Subprocess PEP aimed to prevent the weirdnesses that were happening when using the <code>os.popen</code>-type functions. However, I also note in the Jython docs that this is implemented for Jython, both os.fork and the entire subprocess module in its own right: <a href="http://www.jython.org/docs/library/subprocess.html" rel="nofollow">http://www.jython.org/docs/library/subprocess.html</a></p>
<p>I suspect you have another bug somewhere, perhaps an import error which makes it look like it is subprocess that is failing to import.</p>
<p>The C modules you refer to are custom Python C extension modules. These don't work because they bind against CPython's internals, whereas Jython implements its internals in Java. All core functions provided by the language have to have been ported to Java for the Java interactions to work.</p>
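<p>A quick diagnostic sketch to support that diagnosis: run this both from the terminal and through the embedded <code>PythonInterpreter</code>; differing <code>sys.path</code> values between the two would explain why the import fails only when called from Java:</p>

```python
import sys
print(sys.path)             # where this interpreter looks for modules

import subprocess           # succeeds on CPython and on modern Jython
print(subprocess.__file__)  # which subprocess implementation was found
```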
| 1 | 2016-07-27T10:41:40Z | [
"java",
"python",
"jython-2.7"
] |
Private Variables and Class-local References | 38,606,804 | <p>English is not my mother language, and I am learning Python at <a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references</a></p>
<p>In section 9.6, Private Variables and Class-local References, the last paragraph states: </p>
<blockquote>
<p>Notice that code passed to exec, eval() or execfile() does not
consider the classname of the invoking class to be the current class;
this is similar to the effect of the global statement, the effect of
which is likewise restricted to code that is byte-compiled together.
The same restriction applies to getattr(), setattr() and delattr(), as
well as when referencing <strong>dict</strong> directly.</p>
</blockquote>
<p>I get totally nothing from that text. Please explain, or give me some example to demonstrate the idea.</p>
| 1 | 2016-07-27T07:43:27Z | 38,607,063 | <p>Imagine you have a class with a private member:</p>
<pre><code>class Foo:
__attr= 5
</code></pre>
<p>Inside the class, this attribute can be referenced as <code>__attr</code>:</p>
<pre><code>class Foo:
__attr= 5
print(__attr) # prints 5
</code></pre>
<p>But not outside of the class:</p>
<pre><code>print(Foo.__attr) # raises AttributeError
</code></pre>
<p>But it's different if you use <code>eval</code>, <code>exec</code>, or <code>execfile</code> inside the class:</p>
<pre><code>class Foo:
__attr= 5
print(__attr) # prints 5
exec 'print(__attr)' # raises NameError
</code></pre>
<p>This is explained by the paragraph you quoted. <code>exec</code> does not consider <code>Foo</code> to be the "current class", so the private attribute cannot be referenced (unless you reference it as <code>Foo._Foo__attr</code>).</p>
| 2 | 2016-07-27T07:56:47Z | [
"python",
"class",
"reference",
"local",
"private"
] |
AttributeError: 'NoneType' object has no attribute 'browse' | 38,606,863 | <p>I am having trouble getting data from res.company.
Can someone tell me why this code gives me an error?</p>
<pre><code> def refresh_calculation(self,cr,uid,ids, context=None):
company_pool = self.pool.get('res.company')
company_id = self.pool.get('res.company')._company_default_get(cr, uid, 'res.company', context=context)
loan = company_pool.browse(cr, uid, company_id)
administration_fee = loan.administration_fee.id
interest_rate = percentage_to_float(loan.interest_rate.id)
trade_mark = percentage_to_float(loan.trade_mark.id)
return self.write(cr, uid, ids, {'monthly_installment': administration_fee})
</code></pre>
<p>Thanks in advance,</p>
| 0 | 2016-07-27T07:46:30Z | 38,606,920 | <p><code>self.pool.get</code> is returning None for "res.company". If that's a dict, it doesn't have that key.</p>
| 3 | 2016-07-27T07:49:09Z | [
"python",
"openerp",
"openerp-7"
] |
AttributeError: 'NoneType' object has no attribute 'browse' | 38,606,863 | <p>I am having trouble getting data from res.company.
Can someone tell me why this code gives me an error?</p>
<pre><code> def refresh_calculation(self,cr,uid,ids, context=None):
company_pool = self.pool.get('res.company')
company_id = self.pool.get('res.company')._company_default_get(cr, uid, 'res.company', context=context)
loan = company_pool.browse(cr, uid, company_id)
administration_fee = loan.administration_fee.id
interest_rate = percentage_to_float(loan.interest_rate.id)
trade_mark = percentage_to_float(loan.trade_mark.id)
return self.write(cr, uid, ids, {'monthly_installment': administration_fee})
</code></pre>
<p>Thanks in advance,</p>
| 0 | 2016-07-27T07:46:30Z | 38,606,957 | <p>Your <code>company_pool</code> parameter has <code>None</code> stored in it for some reason. To prevent this error, a simple if statement would suffice.</p>
<pre><code>if company_pool is not None:
# doSomething
else:
print "You did not enter the parameter res.company"
</code></pre>
| 1 | 2016-07-27T07:50:50Z | [
"python",
"openerp",
"openerp-7"
] |
AttributeError: 'NoneType' object has no attribute 'browse' | 38,606,863 | <p>I am having trouble getting data from res.company.
Can someone tell me why this code gives me an error?</p>
<pre><code> def refresh_calculation(self,cr,uid,ids, context=None):
company_pool = self.pool.get('res.company')
company_id = self.pool.get('res.company')._company_default_get(cr, uid, 'res.company', context=context)
loan = company_pool.browse(cr, uid, company_id)
administration_fee = loan.administration_fee.id
interest_rate = percentage_to_float(loan.interest_rate.id)
trade_mark = percentage_to_float(loan.trade_mark.id)
return self.write(cr, uid, ids, {'monthly_installment': administration_fee})
</code></pre>
<p>Thanks in advance,</p>
| 0 | 2016-07-27T07:46:30Z | 38,606,999 | <p><code>company_pool</code> holds the value <code>None</code>, because <code>self.pool</code> does not contain the key 'res.company'. When <code>company_pool.browse(...)</code> is used, <code>None.browse(...)</code> is called, and this throws an error because <code>NoneType</code> does not have a browse attribute. Populate the value map <code>pool</code> before <code>refresh_calculation(...)</code> is called, or perform a None check before accessing this function: <code>if company_pool is not None:</code> </p>
| 2 | 2016-07-27T07:52:59Z | [
"python",
"openerp",
"openerp-7"
] |
Pandas histogram (counts) on grouped (by) values | 38,606,912 | <p>I have a <code>DataFrame</code> which looks like this:</p>
<pre><code>>>> df
type value
0 1 0.698791
1 3 0.228529
2 3 0.560907
3 1 0.982690
4 1 0.997881
5 1 0.301664
6 1 0.877495
7 2 0.561545
8 1 0.167920
9 1 0.928918
10 2 0.212339
11 2 0.092313
12 4 0.039266
13 2 0.998929
14 4 0.476712
15 4 0.631202
16 1 0.918277
17 3 0.509352
18 1 0.769203
19 3 0.994378
</code></pre>
<p>I would like to group on the <code>type</code> column and obtain histogram bins for the column <code>value</code> in 10 new columns, e.g. something like that:</p>
<pre><code> 1 3 9 6 8 10 5 4 7 2
type
1 0 1 0 0 0 2 1 1 0 1
2 2 1 1 0 0 1 1 0 0 0
3 2 0 0 0 0 1 1 0 0 0
4 1 1 0 0 0 1 0 0 0 1
</code></pre>
<p>Where column <code>1</code> is the count for the first bin (<code>0.0</code> to <code>0.1</code>) and so on...</p>
<p>Using <code>numpy.histogram</code>, I can only obtain the following:</p>
<pre><code>>>> df.groupby('type')['value'].agg(lambda x: numpy.histogram(x, bins=10, range=(0, 1)))
type
1 ([0, 1, 1, 1, 1, 0, 0, 0, 0, 2], [0.0, 0.1, 0....
2 ([2, 0, 1, 0, 1, 0, 0, 0, 1, 1], [0.0, 0.1, 0....
3 ([2, 0, 0, 0, 1, 0, 0, 0, 0, 1], [0.0, 0.1, 0....
4 ([1, 1, 1, 0, 0, 0, 0, 0, 0, 1], [0.0, 0.1, 0....
Name: value, dtype: object
</code></pre>
<p>Which I do not manage to put in the correct format afterwards (at least not in a simple way).</p>
<p>I found a trick to do what I want, but it is very ugly:</p>
<pre><code>>>> d = {str(k): lambda x, _k = k: ((x >= (_k - 1)/10) & (x < _k/10)).sum() for k in range(1, 11)}
>>> df.groupby('type')['value'].agg(d)
1 3 9 6 8 10 5 4 7 2
type
1 0 1 0 0 0 2 1 1 0 1
2 2 1 1 0 0 1 1 0 0 0
3 2 0 0 0 0 1 1 0 0 0
4 1 1 0 0 0 1 0 0 0 1
</code></pre>
<p>Is there a better way to do what I want? I know that in <code>R</code>, the <code>aggregate</code> method can return a <code>DataFrame</code>, but not in python... </p>
| 2 | 2016-07-27T07:48:43Z | 38,607,474 | <p>is that what you want?</p>
<pre><code>In [98]: %paste
bins = np.linspace(0, 1.0, 11)
labels = list(range(1,11))
(df.assign(q=pd.cut(df.value, bins=bins, labels=labels, right=False))
.pivot_table(index='type', columns='q', aggfunc='size', fill_value=0)
)
## -- End pasted text --
Out[98]:
q 1 2 3 4 5 6 7 8 9 10
type
1 0 1 0 1 0 0 1 1 1 4
2 1 0 1 0 0 1 0 0 0 1
3 0 0 1 0 0 2 0 0 0 1
4 1 0 0 0 1 0 1 0 0 0
</code></pre>
| 1 | 2016-07-27T08:16:16Z | [
"python",
"pandas",
"aggregate",
"histogram"
] |
Playing, Opening and Pausing VLC command line executed from Python scripts | 38,606,973 | <p>I am trying to create a small Python app that syncs up 2 computers via a TCP socket, over which I send a play or pause command. Both scripts will/should execute a command line to pause, play, or open VLC with a file. Both computers are the latest MacOSX with a VLC installed within the past 3 weeks.</p>
<p>I have been reading the documentation using <code>.../vlc -H</code> for the long help, but I still don't seem to get <code>--global-key-play-pauses</code> to pause or play. I got it to open a video, but I wasn't able to send any commands while it was running.</p>
<p>I tried some examples I saw online, to no avail. I have the 2 scripts ready, just not the VLC commands.</p>
<pre><code>c-mbp:~ c$ /Applications/VLC.app/Contents/MacOS/VLC -I --global-key-play-pauses
VLC media player 2.2.2 Weatherwax (revision 2.2.2-3-gf8c9253)
[0000000100604a58] core interface error: no suitable interface module
[000000010050d928] core libvlc error: interface "default" initialization failed
</code></pre>
| 1 | 2016-07-27T07:51:41Z | 38,668,805 | <p>I suspect the best way to do this on MacOS would be to use the <a href="https://wiki.videolan.org/documentation:modules/rc/" rel="nofollow">VLC remote control interface</a>.</p>
<p>This allows you to control the behavior of a launched VLC instance using commands which you send to the process' stdin.</p>
<p>You could then use the Python <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess module</a> to launch VLC and then send the appropriate commands to the stdin of this process.</p>
<p>If you were using Linux, this could likely be achieved more simply through the VLC DBUS interface; however, the remote control through stdin should still give you sufficient control for what you are doing.</p>
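<p>A rough sketch of that approach. The VLC path and the rc commands (<code>pause</code>, <code>play</code>, ...) are assumptions to adapt to your setup; here <code>cat</code> stands in for VLC so the snippet can run anywhere and simply echoes the commands back:</p>

```python
import subprocess

def drive_via_stdin(argv, commands):
    """Start a process and feed it newline-terminated commands on stdin.

    For VLC on macOS you would use something like
    argv = ["/Applications/VLC.app/Contents/MacOS/VLC", "-I", "rc", "movie.mp4"]
    and commands such as ["pause", "play"] (see the rc interface docs).
    """
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    payload = "".join(c + "\n" for c in commands).encode()
    out, _ = proc.communicate(payload)   # write the commands, wait for exit
    return out

# `cat` just echoes its stdin, demonstrating the plumbing:
print(drive_via_stdin(["cat"], ["pause", "play"]).decode())
```

<p>For a long-running sync app you would keep the pipe open and write commands to <code>proc.stdin</code> as the socket messages arrive, instead of using <code>communicate()</code>.</p>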
| 2 | 2016-07-29T23:24:09Z | [
"python",
"ipc",
"vlc"
] |
How to plot image none-inline in Pandas | 38,606,975 | <p>There are lots of posts telling you how to display your chart/image inline. Yet once you have used the magical <code>%pylab inline</code>, you can no longer make it display images in a new window. </p>
<p>Is there a magical line like <code>%pylab none-inline</code> to again make pandas display graphs in a new window?</p>
| 1 | 2016-07-27T07:51:44Z | 38,607,006 | <p><a href="https://ipython.org/ipython-doc/3/interactive/magics.html#magic-matplotlib" rel="nofollow">This</a> is what you're looking for:</p>
<pre><code>%matplotlib [backend]
</code></pre>
<p>When you specify <code>%matplotlib inline</code> you're opting into IPython's inline backend renderer. </p>
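<p>For example, to get windowed figures back (backend names vary by platform: <code>qt</code>, <code>osx</code>, <code>tk</code>, ...; note that on some IPython versions switching away from inline requires a kernel restart):</p>

```
%matplotlib qt        # figures now open in separate windows
%matplotlib --list    # show the backend names available on your system
```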
| 1 | 2016-07-27T07:53:26Z | [
"python",
"matplotlib",
"ipython",
"jupyter-notebook"
] |
Django pickle.dumps(model.query) hitting db | 38,606,989 | <p>I am trying to pickle a Django Query object to save it in Redis. </p>
<pre><code>materials = Material.objects.prefetch_related('tags_applied').prefetch_related('materialdata_set').prefetch_related('source')
materials_ids = MaterialData.objects.filter(tag_id__in=tags).values_list('material_id', flat=True)
materials = materials.filter(pk__in=materials_ids)
key_name = SAMPLES_UUID + ':' + str(redis_uuid)
redis_cl.set_key(key_name, pickle.dumps(materials.query))
redis_cl.expire(key_name, SAMPLES_TIMEOUT)
</code></pre>
<p>Here is the trace from debug_panel (I use lazy pagination).
The source query is:</p>
<blockquote>
<p>SELECT "san_material"."id", "san_material"."created_at",
"san_material"."title", "san_material"."author", "san_material"."url",
"san_material"."publication_datetime", "san_material"."text",
"san_material"."size", "san_material"."source_id",
"san_material"."material_type", "san_material"."updated_at",
"san_material"."status", "san_material"."elastic_sync",
"san_material"."tokens", "san_material"."detection_datetime",
"san_material"."article_title",
"san_material"."publication_datetime_article",
"san_material"."author_article", "san_material"."highlight_data" FROM
"san_material" WHERE ("san_material"."detection_datetime" BETWEEN
'2016-07-01T00:00:00+03:00'::timestamptz AND
'2016-07-27T10:39:00+03:00'::timestamptz AND "san_material"."id" IN
(SELECT U0."material_id" FROM "san_materialdata" U0 WHERE U0."tag_id"
IN (660))) ORDER BY "san_material"."detection_datetime" DESC LIMIT 51</p>
</blockquote>
<p>But it is the subquery that hits the DB:</p>
<blockquote>
<p>SELECT U0."material_id" FROM "san_materialdata" U0 WHERE U0."tag_id"
IN (660)</p>
</blockquote>
<p>in here:</p>
<pre><code>/home/maxx/analize/san/utils.py in wrapper(82)
result = method_to_decorate(*args, **kwds)
/home/maxx/analize/san/views/flux.py in flux(111)
redis_cl.set_key(key_name, pickle.dumps(materials.query))
/usr/lib/python2.7/pickle.py in dumps(1393)
Pickler(file, protocol).dump(obj)
/usr/lib/python2.7/pickle.py in dump(225)
self.save(obj)
/usr/lib/python2.7/pickle.py in save(333)
self.save_reduce(obj=obj, *rv)
/usr/lib/python2.7/pickle.py in save_reduce(421)
save(state)
/usr/lib/python2.7/pickle.py in save(288)
f(self, obj) # Call unbound method with explicit self
/usr/lib/python2.7/pickle.py in save_dict(657)
self._batch_setitems(obj.iteritems())
/usr/lib/python2.7/pickle.py in _batch_setitems(675)
save(v)
/usr/lib/python2.7/pickle.py in save(333)
self.save_reduce(obj=obj, *rv)
/usr/lib/python2.7/pickle.py in save_reduce(421)
save(state)
/usr/lib/python2.7/pickle.py in save(288)
f(self, obj) # Call unbound method with explicit self
/usr/lib/python2.7/pickle.py in save_dict(657)
self._batch_setitems(obj.iteritems())
/usr/lib/python2.7/pickle.py in _batch_setitems(675)
save(v)
/usr/lib/python2.7/pickle.py in save(288)
f(self, obj) # Call unbound method with explicit self
/usr/lib/python2.7/pickle.py in save_list(604)
self._batch_appends(iter(obj))
/usr/lib/python2.7/pickle.py in _batch_appends(620)
save(x)
/usr/lib/python2.7/pickle.py in save(333)
self.save_reduce(obj=obj, *rv)
/usr/lib/python2.7/pickle.py in save_reduce(421)
save(state)
/usr/lib/python2.7/pickle.py in save(288)
f(self, obj) # Call unbound method with explicit self
/usr/lib/python2.7/pickle.py in save_dict(657)
self._batch_setitems(obj.iteritems())
/usr/lib/python2.7/pickle.py in _batch_setitems(675)
save(v)
/usr/lib/python2.7/pickle.py in save(308)
rv = reduce(self.proto)
/home/maxx/venv/analize/lib/python2.7/copy_reg.py in _reduce_ex(84)
dict = getstate()
</code></pre>
<p>How can I fix it?</p>
<p>P.S. I measured the time spent saving each argument in def _batch_setitems:</p>
<pre><code>('Save obj time:', 2.5215649604797363, 'arg:', 'rhs')
('Save obj time:', 2.5219039916992188, 'arg:', 'children')
('Save obj time:', 2.5219550132751465, 'arg:', 'where')
</code></pre>
<p>That is 3 entries at 2.5 secs each. Why?</p>
| 1 | 2016-07-27T07:52:23Z | 38,607,723 | <p>Django querysets are lazy, but let me explain what your code does:</p>
<pre><code>materials = Material.objects.prefetch_related('tags_applied'
).prefetch_related('materialdata_set').prefetch_related('source')
materials_ids = MaterialData.objects.filter(tag_id__in=tags).values_list('material_id', flat=True)
# up to this point materials_ids is a queryset, so it has not hit the DB;
# the DB is hit as soon as the next line of code uses materials_ids.
materials = materials.filter(pk__in=materials_ids)
# so you can avoid hitting the DB if you do not need to use materials
key_name = SAMPLES_UUID + ':' + str(redis_uuid)
redis_cl.set_key(key_name, pickle.dumps(materials.query))
redis_cl.expire(key_name, SAMPLES_TIMEOUT)
</code></pre>
<p>You can correct this by using proper joins in django:</p>
<p>I guess your <strong>MaterialData</strong> model has <strong>material</strong> as a foreign key to the <strong>Material</strong> model.</p>
<pre><code>materials = MaterialData.objects.filter(tag_id__in=tags).prefetch_related(
'material__tags_applied'
).prefetch_related('material__materialdata_set').prefetch_related('material__source').values(*all the Material values you need can go here, prefixing each Material field with material__ *)
# to fetch a foreign key attribute, use the field name followed by a double underscore
key_name = SAMPLES_UUID + ':' + str(redis_uuid)
redis_cl.set_key(key_name, pickle.dumps(materials.query))
</code></pre>
| 1 | 2016-07-27T08:29:41Z | [
"python",
"django"
] |
Simplest Way To Deploy Javascript Application On Linux | 38,607,079 | <p>I have a folder, say 'mywebapp', on a Windows machine. This folder has an index.html page, a js directory with JavaScript files and a css directory with CSS files.</p>
<p>Now when I open this index.html in a browser, the browser displays the contents pretty well, as if I had deployed this application on a server, which is not the case.</p>
<p>Now I wanted to do the same on my Linux VM, logged in through PuTTY. I tried using Python's SimpleHTTPServer, which gave me the same result. But as soon as I exit the PuTTY session, the webpage no longer displays; it seems the SimpleHTTPServer connection is broken once I exit the PuTTY session.
Please help me,
or suggest another professional and easy way to get my webpage displayed. Tomcat seems a good option, but I don't have root permission and don't want a hectic deployment process.
I heard about node.js, but I don't have root permission to install node.</p>
| 0 | 2016-07-27T07:57:15Z | 38,607,466 | <p>The simplest way I can suggest is to download the tar.gz from this location:</p>
<p><a href="https://tomcat.apache.org/download-70.cgi" rel="nofollow">https://tomcat.apache.org/download-70.cgi</a></p>
<p>1. gunzip and untar the downloaded file.</p>
<p>2. Go to the conf/Catalina/localhost folder.</p>
<p>3. Create an XML file named after your application, e.g. mywebapp.xml,
and put the following in it:</p>
<pre><code><Context path="/mywebapp" reloadable="false" docBase="<root-path of your application folder>"/>
</code></pre>
<p>here "root-path of your application folder" will be the root folder of ypur HTML, js and css files.
Then just start this tomcat using /bin/startup.sh command and check on browser using localhost:8080/mywebapp</p>
| 0 | 2016-07-27T08:16:02Z | [
"javascript",
"python",
"linux",
"web-deployment",
"putty"
] |
Accessing data frame and printing custom error messages | 38,607,202 | <p>I have this dictionary having <code>Error_ID</code> and <code>Error_Messages</code> mapping, and these error messages have <code>{}</code> so that they can have dynamic data while printing</p>
<pre><code>dict = {'101': 'Invalid table name {}', '102': 'Invalid pair {} and {}'}
</code></pre>
<p>I have this function which I'll call every time I have an error</p>
<pre><code>def print_error(error_id, error_data):
    print(error_id, dict[error_id].format("sample_table"))
error_id='101'
print(error_id,dict[error_id].format("sample_table"))
Invalid table name sample_table
</code></pre>
<p>But for the second error, what should I do so that I can pass two values through the single print statement in my <code>print_error</code> function, so that the output will be like:</p>
<pre><code>102 Invalid pair Sample_pair1 and Sample_pair2
</code></pre>
| 1 | 2016-07-27T08:03:17Z | 38,607,409 | <p>You can use python's iterable unpacking feature to pass a variable number of arguments to <code>str.format</code>:</p>
<pre><code>def print_error(error_id,error_data):
if not isinstance(error_data, tuple): # if error_data isn't a tuple
error_data= (error_data,) # make it a tuple so we can unpack it
print(error_id,dict[error_id].format(*error_data)) # unpack the tuple
print_error('101',"sample_table")
print_error('102',('a','b'))
</code></pre>
| 2 | 2016-07-27T08:13:44Z | [
"python",
"dictionary",
"printing"
] |
Wrap quotes around many strings in Python | 38,607,346 | <pre><code>Afghanistan
Albania
Algeria
etc
</code></pre>
<p>I have a list of countries, as above, copied from a txt file. I would like to create a list called <code>countries</code> containing all of these entries, without having to go down the list one-by-one wrapping each entry in quotes and adding commas.</p>
<p>How can this be done in an efficient and quick way?</p>
<p>Final list should look like:</p>
<pre><code>countries = [
"Afghanistan",
"Albanian",
"Algeria"....
]
</code></pre>
<p>There are lines with 2+ separate strings, <code>Puerto Rico</code> for example. splitlines() seems to separate both words, instead of creating a list entry for each line.</p>
| -2 | 2016-07-27T08:10:27Z | 38,607,441 | <p>You don't need to add quotes and commas to every entry; just wrap everything in triple quotes and then split it.</p>
<pre><code>text = \
'''
Afghanistan
Albania
Algeria
'''
my_list = []
for line in text.strip().split('\n'):
my_list.append(line)
print my_list
['Afghanistan', 'Albania', 'Algeria']
</code></pre>
<p>Or the compact version:</p>
<pre><code>my_list = \
'''
Afghanistan
Albania
Algeria
'''.strip().splitlines()  # unlike .split(), this keeps multi-word lines like 'Puerto Rico' intact
</code></pre>
| 1 | 2016-07-27T08:14:50Z | [
"python"
] |
Wrap quotes around many strings in Python | 38,607,346 | <pre><code>Afghanistan
Albania
Algeria
etc
</code></pre>
<p>I have a list of countries, as above, copied from a txt file. I would like to create a list called <code>countries</code> containing all of these entries, without having to go down the list one-by-one wrapping each entry in quotes and adding commas.</p>
<p>How can this be done in an efficient and quick way?</p>
<p>Final list should look like:</p>
<pre><code>countries = [
"Afghanistan",
"Albanian",
"Algeria"....
]
</code></pre>
<p>There are lines with 2+ separate strings, <code>Puerto Rico</code> for example. splitlines() seems to separate both words, instead of creating a list entry for each line.</p>
| -2 | 2016-07-27T08:10:27Z | 38,607,489 | <p>Supposing your input file is <code>test.txt</code> and you want to write the output to <code>out.txt</code>:</p>
<pre><code>if __name__ == '__main__':
with open('test.txt', 'rb') as fr:
reader = fr.readlines()
with open('out.txt', 'wb') as fw:
for line in reader:
fw.write('\''+line.strip('\n').strip('\r')+'\'\n')
</code></pre>
<p>This will write in <code>out.txt</code>:</p>
<pre><code>'Afghanistan'
'Albania'
'Algeria'
'etc'
</code></pre>
<p>Or if you want to get it in a list:</p>
<pre><code>if __name__ == '__main__':
with open('test.txt', 'rb') as fr:
reader = fr.readlines()
res = list()
for line in reader:
res.append(line.strip('\n').strip('\r'))
</code></pre>
<p>At the end, we have</p>
<pre><code>res = ['Afghanistan', 'Albania', 'Algeria', 'etc']
</code></pre>
| 0 | 2016-07-27T08:17:21Z | [
"python"
] |
Wrap quotes around many strings in Python | 38,607,346 | <pre><code>Afghanistan
Albania
Algeria
etc
</code></pre>
<p>I have a list of countries, as above, copied from a txt file. I would like to create a list called <code>countries</code> containing all of these entries, without having to go down the list one-by-one wrapping each entry in quotes and adding commas.</p>
<p>How can this be done in an efficient and quick way?</p>
<p>Final list should look like:</p>
<pre><code>countries = [
"Afghanistan",
"Albanian",
"Algeria"....
]
</code></pre>
<p>There are lines with 2+ separate strings, <code>Puerto Rico</code> for example. splitlines() seems to separate both words, instead of creating a list entry for each line.</p>
| -2 | 2016-07-27T08:10:27Z | 38,607,492 | <p>Quotes and commas are for lists and strings specified as <em>literals</em> in code. You don't need that for data you are reading programmatically.</p>
<p>Just read the lines and strip off the tailing newlines.</p>
<pre><code>with open('countries.text') as src:
countries = [row.strip('\n') for row in src]
</code></pre>
| 2 | 2016-07-27T08:17:32Z | [
"python"
] |
Can I extract comments of any page from https://www.rt.com/ using python3? | 38,607,502 | <p>I am writing a web crawler. I extracted the heading and Main Discussion of this <a href="https://www.rt.com/usa/353493-clinton-speech-affairs-silence/" rel="nofollow">link</a>, but I am unable to find any of the comments (Ctrl+U -> Ctrl+F, comment text). I think the comments are written in JavaScript. Can I extract them? </p>
| 0 | 2016-07-27T08:18:01Z | 38,607,642 | <p>Yes, if it can be viewed with a web browser, you can extract it.</p>
<p>If you look at the source, it is really an iframe that loads a piece of javascript, which then creates a new script tag in the document whose source loads bundle.js, the actual commenting software. This in turn fetches the comments.</p>
<p>Instead of going through this manually, you could consider using for example webkit to create a headless browser that executes the javascript like an ordinary browser. Then you can scrape from that instead of having to manually make your crawler fetch the external resources.</p>
<p>Examples of such headless browsers could be <a href="https://github.com/makinacorpus/spynner" rel="nofollow">Spynner</a>, <a href="https://github.com/niklasb/dryscrape" rel="nofollow">dryscrape</a>, or the PhantomJS-derived <a href="https://github.com/niwinz/phantompy" rel="nofollow">PhantomPy</a> (the latter seems to be an abandoned project now). </p>
| 1 | 2016-07-27T08:25:34Z | [
"javascript",
"python",
"beautifulsoup",
"web-crawler"
] |
Can I extract comments of any page from https://www.rt.com/ using python3? | 38,607,502 | <p>I am writing a web crawler. I extracted the heading and Main Discussion of this <a href="https://www.rt.com/usa/353493-clinton-speech-affairs-silence/" rel="nofollow">link</a>, but I am unable to find any of the comments (Ctrl+U -> Ctrl+F, comment text). I think the comments are written in JavaScript. Can I extract them? </p>
| 0 | 2016-07-27T08:18:01Z | 38,614,486 | <p>RT are using a service from <a href="https://www.spot.im/" rel="nofollow">spot.im</a> for comments</p>
<p>You need to make two POST requests: first to <code>https://api.spot.im/me/network-token/spotim</code> to get a token, then to <code>https://api.spot.im/conversation-read/spot/sp_6phY2k0C/post/353493/get</code> to get the comments as JSON.</p>
<p>I wrote a quick script to do this:</p>
<pre><code>import requests
import re
import json
def get_rt_comments(article_url):
    spotim_spotId = 'sp_6phY2k0C'  # spotim id for RT
    post_id = re.search('([0-9]+)', article_url).group(0)
    r1 = requests.post('https://api.spot.im/me/network-token/spotim').json()
    spotim_token = r1['token']
    payload = {
        "count": 25,  # number of comments to fetch
        "sort_by": "best",
        "cursor": {"offset": 0, "comments_read": 0},
        "host_url": article_url,
        "canonical_url": article_url
    }
    r2_url = 'https://api.spot.im/conversation-read/spot/' + spotim_spotId + '/post/' + post_id + '/get'
    r2 = requests.post(r2_url, data=json.dumps(payload), headers={'X-Spotim-Token': spotim_token, "Content-Type": "application/json"})
    return r2.json()
if __name__ == '__main__':
    url = 'https://www.rt.com/usa/353493-clinton-speech-affairs-silence/'
    comments = get_rt_comments(url)
    print(comments)
</code></pre>
| 0 | 2016-07-27T13:32:54Z | [
"javascript",
"python",
"beautifulsoup",
"web-crawler"
] |
Create 3D array from a 2D array by replicating/repeating along the first axis | 38,607,546 | <p>Suppose I have an <code>n × m</code> array, i.e.:</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
</code></pre>
<p>And I want to generate a 3D array <code>k × n × m</code>, where all the arrays along the new axis are equal, i.e. the same array but now <code>3 × 3 × 3</code>.</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]])
</code></pre>
<p>How can I get it?</p>
| 2 | 2016-07-27T08:20:49Z | 38,607,674 | <p>Introduce a new axis at the start with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow"><code>None/np.newaxis</code></a> and replicate along it with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow"><code>np.repeat</code></a>. This should work for extending any <code>n</code> dim array to <code>n+1</code> dim array. The implementation would be -</p>
<pre><code>np.repeat(arr[None,...],k,axis=0)
</code></pre>
<p>Sample run -</p>
<pre><code>In [143]: arr
Out[143]:
array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
In [144]: np.repeat(arr[None,...],3,axis=0)
Out[144]:
array([[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]])
</code></pre>
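<p>If the replicated copies never need to be modified independently, <code>np.broadcast_to</code> gives the same <code>k × n × m</code> result as a read-only <em>view</em>, without allocating <code>k</code> copies. A minimal sketch (the array values are just the example from the question):</p>

```python
import numpy as np

a = np.arange(1., 10.).reshape(3, 3)

# read-only view: no data is copied, unlike np.repeat
b = np.broadcast_to(a, (3, 3, 3))
```

<p>Call <code>b.copy()</code> afterwards if a writable array is needed.</p>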
| 1 | 2016-07-27T08:26:58Z | [
"python",
"arrays",
"numpy"
] |
Create 3D array from a 2D array by replicating/repeating along the first axis | 38,607,546 | <p>Suppose I have an <code>n × m</code> array, i.e.:</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
</code></pre>
<p>And I want to generate a 3D array <code>k × n × m</code>, where all the arrays along the new axis are equal, i.e. the same array but now <code>3 × 3 × 3</code>.</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]])
</code></pre>
<p>How can I get it?</p>
| 2 | 2016-07-27T08:20:49Z | 38,607,907 | <p>if you have:</p>
<pre><code>a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
</code></pre>
<p>You can use a list comprehension to generate the duplicate array:</p>
<pre><code>b = [a for x in range(3)]
</code></pre>
<p>Then (for numpy, assuming <code>import numpy as np</code>):</p>
<pre><code>c = np.array(b)
</code></pre>
| 0 | 2016-07-27T08:39:12Z | [
"python",
"arrays",
"numpy"
] |
Create 3D array from a 2D array by replicating/repeating along the first axis | 38,607,546 | <p>Suppose I have a <code>n à m</code> array, i.e.:</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
</code></pre>
<p>And I want to generate a 3D array <code>k × n × m</code>, where all the arrays along the new axis are equal, i.e. the same array but now <code>3 × 3 × 3</code>.</p>
<pre><code>array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]])
</code></pre>
<p>How can I get it?</p>
| 2 | 2016-07-27T08:20:49Z | 38,618,809 | <p>One possibility would be to use default broadcasting to replicate your array:</p>
<pre><code>a = np.arange(1, 10).reshape(3,3)
n = 3
b = np.ones((n, 3, 3)) * a
</code></pre>
<p>Which results in the array you wanted:</p>
<pre><code>array([[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]],
[[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]]])
</code></pre>
<p>This won't work by default if you want to replicate it along another axis. In that case you would need to be explicit with the dimensions to ensure correct broadcasting.</p>
| 0 | 2016-07-27T16:43:02Z | [
"python",
"arrays",
"numpy"
] |
Delete columns based on repeat value in one row in numpy array | 38,607,586 | <p>I'm hoping to delete columns in my arrays that have repeat entries in row 1 as shown below (row 1 has repeats of values 1 & 2.5, so one of each of those values has been deleted, together with the column each deleted value lies within).</p>
<pre><code>initial_array =
row 0 [[ 1, 1, 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2.5, 2, 1, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 3, 2.5, 1.5, 4,]
row 3 [228, 314, 173, 452, 168, 351, 300, 396]]
final_array =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 314, 173, 452, 351, 396]]
</code></pre>
<p>Ways I was thinking of included using some function that checked for repeats, giving a True response for the second (or more) time a value turned up in the dataset, then using that response to delete the row. That or possibly using the return indices function within numpy.unique. I just can't quite find a way through it or find the right function though.</p>
<p>If I could find a way to return a mean value in row 3 of the retained repeat and the deleted one, that would be even better (see below).</p>
<pre><code>final_array_averaged =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 307, 170.5, 452, 351, 396]]
</code></pre>
<p>Thanks in advance for any help you can give to a beginner who is stumped!</p>
| 3 | 2016-07-27T08:22:54Z | 38,607,962 | <p>You can find the indices of wanted columns using <code>unique</code>:</p>
<pre><code>>>> indices = np.sort(np.unique(A[1], return_index=True)[1])
</code></pre>
<p>Then use a simple indexing to get the desire columns:</p>
<pre><code>>>> A[:,indices]
array([[ 1. , 1. , 1. , 1. , 1. , 1. ],
[ 0.5, 1. , 2.5, 4. , 2. , 3.5],
[ 1. , 1.5, 3. , 4.5, 2.5, 4. ],
[ 228. , 314. , 173. , 452. , 351. , 396. ]])
</code></pre>
| 1 | 2016-07-27T08:41:22Z | [
"python",
"arrays",
"numpy"
] |
Delete columns based on repeat value in one row in numpy array | 38,607,586 | <p>I'm hoping to delete columns in my arrays that have repeat entries in row 1 as shown below (row 1 has repeats of values 1 & 2.5, so one of each of those values has been deleted, together with the column each deleted value lies within).</p>
<pre><code>initial_array =
row 0 [[ 1, 1, 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2.5, 2, 1, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 3, 2.5, 1.5, 4,]
row 3 [228, 314, 173, 452, 168, 351, 300, 396]]
final_array =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 314, 173, 452, 351, 396]]
</code></pre>
<p>Ways I was thinking of included using some function that checked for repeats, giving a True response for the second (or more) time a value turned up in the dataset, then using that response to delete the row. That or possibly using the return indices function within numpy.unique. I just can't quite find a way through it or find the right function though.</p>
<p>If I could find a way to return a mean value in row 3 of the retained repeat and the deleted one, that would be even better (see below).</p>
<pre><code>final_array_averaged =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 307, 170.5, 452, 351, 396]]
</code></pre>
<p>Thanks in advance for any help you can give to a beginner who is stumped!</p>
| 3 | 2016-07-27T08:22:54Z | 38,608,165 | <p>You can use the optional arguments that come with <code>np.unique</code> and then use <code>np.bincount</code> to use the last row as weights to get the final averaged output, like so -</p>
<pre><code>_,unqID,tag,C = np.unique(arr[1],return_index=1,return_inverse=1,return_counts=1)
out = arr[:,unqID]
out[-1] = np.bincount(tag,arr[3])/C
</code></pre>
<p>Sample run -</p>
<pre><code>In [212]: arr
Out[212]:
array([[ 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. ],
[ 0.5, 1. , 2.5, 4. , 2.5, 2. , 1. , 3.5],
[ 1. , 1.5, 3. , 4.5, 3. , 2.5, 1.5, 4. ],
[ 228. , 314. , 173. , 452. , 168. , 351. , 300. , 396. ]])
In [213]: out
Out[213]:
array([[ 1. , 1. , 1. , 1. , 1. , 1. ],
[ 0.5, 1. , 2. , 2.5, 3.5, 4. ],
[ 1. , 1.5, 2.5, 3. , 4. , 4.5],
[ 228. , 307. , 351. , 170.5, 396. , 452. ]])
</code></pre>
<p>As can be seen that the output has now an order with the second row being sorted. If you are looking to keep the order as it was originally, use <code>np.argsort</code> of <code>unqID</code>, like so -</p>
<pre><code>In [221]: out[:,unqID.argsort()]
Out[221]:
array([[ 1. , 1. , 1. , 1. , 1. , 1. ],
[ 0.5, 1. , 2.5, 4. , 2. , 3.5],
[ 1. , 1.5, 3. , 4.5, 2.5, 4. ],
[ 228. , 307. , 170.5, 452. , 351. , 396. ]])
</code></pre>
| 2 | 2016-07-27T08:50:52Z | [
"python",
"arrays",
"numpy"
] |
Delete columns based on repeat value in one row in numpy array | 38,607,586 | <p>I'm hoping to delete columns in my arrays that have repeat entries in row 1 as shown below (row 1 has repeats of values 1 & 2.5, so one of each of those values has been deleted, together with the column each deleted value lies within).</p>
<pre><code>initial_array =
row 0 [[ 1, 1, 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2.5, 2, 1, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 3, 2.5, 1.5, 4,]
row 3 [228, 314, 173, 452, 168, 351, 300, 396]]
final_array =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 314, 173, 452, 351, 396]]
</code></pre>
<p>Ways I was thinking of included using some function that checked for repeats, giving a True response for the second (or more) time a value turned up in the dataset, then using that response to delete the row. That or possibly using the return indices function within numpy.unique. I just can't quite find a way through it or find the right function though.</p>
<p>If I could find a way to return a mean value in row 3 of the retained repeat and the deleted one, that would be even better (see below).</p>
<pre><code>final_array_averaged =
row 0 [[ 1, 1, 1, 1, 1, 1,]
row 1 [0.5, 1, 2.5, 4, 2, 3.5,]
row 2 [ 1, 1.5, 3, 4.5, 2.5, 4,]
row 3 [228, 307, 170.5, 452, 351, 396]]
</code></pre>
<p>Thanks in advance for any help you can give to a beginner who is stumped!</p>
| 3 | 2016-07-27T08:22:54Z | 38,608,790 | <p>This is a typical grouping problem, which can be solve elegantly and efficiently using the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author):</p>
<pre><code>import numpy_indexed as npi
unique, final_array = npi.group_by(initial_array[1]).mean(initial_array, axis=1)
</code></pre>
<p>Note that there are many other reductions than mean; if you want the original behavior you described, you could replace 'mean' with 'first', for instance.</p>
| 0 | 2016-07-27T09:19:36Z | [
"python",
"arrays",
"numpy"
] |
Part of data gets lost on adding to session and committing in Flask | 38,607,812 | <p>The model is </p>
<pre><code>class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    artistname = db.Column(db.String(64))
    photourl = db.Column(db.String(1000))
    contactInfo = db.Column(db.String(20))
    description = db.Column(db.String(500))
    date = db.Column(db.Date)

    def __repr__(self):
        return '<User %r>' % (self.photourl)
</code></pre>
<p>Here photourl is the url of photos posted.</p>
<p>After form submission.</p>
<pre><code>user = User(artistname = form.artist.data,photourl = "",
            description = form.description.data, contactInfo = form.contactinfo.data,
            date = datetime.datetime.utcnow().date())
</code></pre>
<p>I add all the details without photourl.</p>
<p>Now I make a list of all the filenames, which is stored in the <strong>filename</strong> variable in the code below, and join them with * in between.</p>
<pre><code>filename = "*".join(filename)
print(filename)
</code></pre>
<p>The sample output appeared in terminal of printed filename is</p>
<pre><code>mic16.jpg*nepal_earthquake_death6.png
</code></pre>
<p>After combining all the filenames, I store the result in the database with:</p>
<pre><code>user.photourl = filename
print(user)
db.session.add(user)
db.session.commit()
</code></pre>
<p>Here printed output of user in terminal is </p>
<pre><code><User u'mic16.jpg*nepal_earthquake_death6.png'>
</code></pre>
<p>which shows that infomation is loaded correctly.</p>
<p>Now when I do <strong>db.session.add(user) followed by db.session.commit()</strong>, in the <strong>user table of the database, under the photourl column, only the mic16.jpg part is stored and the rest is omitted</strong>, i.e. only the part before * is stored. </p>
<p>There is no error entry in the database. My database is a MySQL database and I am using phpMyAdmin. I am reading the database using:</p>
<pre><code>posts = User.query.order_by(User.date.desc()).limit(5).all()
photourls = []
for i in posts:
    photourls.append(i.photourl.split('*'))
</code></pre>
<p>The required URLs are supposed to be in photourls, but only a <strong>single URL is present</strong> for each post.</p>
<p>I am completely stumped and don't have a clue what's going on.</p>
| 0 | 2016-07-27T08:34:17Z | 38,636,970 | <p>Judging by the size of your photourl string, you want to save several image filenames inside a string separated by an asterisk *. A better alternative would be storing the filenames in a JSON array with each filename as a string.</p>
<pre><code>class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    artistname = db.Column(db.String(64))
    photourls = db.Column(JSON)
</code></pre>
<p>You can use <a href="http://werkzeug.pocoo.org/docs/0.11/datastructures/#werkzeug.datastructures.MultiDict.getlist" rel="nofollow">getlist</a> to upload several image files at once.</p>
<pre><code>def upload():
    uploaded_images = flask.request.files.getlist("file")
</code></pre>
<p>The JSON would be stored as shown below.</p>
<pre><code>{
"photourls":["mic16.jpg", "nepal_earthquake_death6.png"]
}
</code></pre>
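<p>If switching the column to a JSON type is not an option (for example on an older MySQL), a hedged alternative is to keep an ordinary <code>String</code> column and serialize the list yourself with <code>json.dumps</code>/<code>json.loads</code>; unlike the <code>*</code> separator, this round-trips filenames safely. A sketch with the example filenames:</p>

```python
import json

filenames = ["mic16.jpg", "nepal_earthquake_death6.png"]

# store as a JSON string in an ordinary String column ...
stored = json.dumps(filenames)

# ... and decode it back when reading the posts
decoded = json.loads(stored)
```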
| 1 | 2016-07-28T12:53:04Z | [
"python",
"mysql",
"session",
"flask",
"flask-sqlalchemy"
] |
Testing stepper motor with python code with easydriver | 38,607,901 | <p>I am having some problems with my Python code on my Raspberry Pi to move my stepper motor. </p>
<p>I am new to the Python language and hope I can get some help moving my stepper motor.</p>
<p>I have attached a photo of my setup:
<a href="http://i.stack.imgur.com/0hgFZ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0hgFZ.jpg" alt="Stepper Motor Setup"></a></p>
<p>I am using a Nema 17 stepper motor, an EasyDriver stepper motor driver and the Raspberry Pi 3.</p>
<p>Python version : 2.7.9</p>
<p>Installed RPi.GPIO</p>
<pre><code>import RPi.GPIO as gpio
import time
import sys
gpio.setmode(gpio.BCM)
gpio.setup(14, gpio.OUT) #step
gpio.setup(15, gpio.OUT) #dir
gpio.setup(23, gpio.OUT) #ms1
gpio.setup(24, gpio.OUT) #ms2
def set_stepper_on():
    gpio.output(14, 0)
    time.sleep(0.05)
    gpio.output(14, 1)
    time.sleep(0.05)

def set_cw():
    gpio.output(15, 0)

def set_anticw():
    gpio.output(15, 1)

def ms_steps():
    gpio.output(23, 0)
    gpio.output(24, 0)

ms_steps()
set_cw()

infinite_loop = True
steps = 0
while (infinite_loop == True):
    set_stepper_on()
    steps += 1
    print steps
</code></pre>
<p>I do not know why my motor doesn't work... :( </p>
<p>Edited: I have seen many guides saying that I have to turn the step pin on and off in the while loop for the motor to take a step, but it still doesn't work... :(</p>
| 3 | 2016-07-27T08:39:02Z | 38,608,814 | <p>You need to define <code>set_stepper</code>, <code>set_cw</code> and <code>set_anticw</code> as functions and not variables. The way it is working now is that you initially define the two GPIO outputs to be false, and nothing happens in the while loop.</p>
<pre><code>def set_stepper():
    gpio.output(14, False)

def set_cw():
    gpio.output(15, False)

def set_anticw():
    gpio.output(15, True)
</code></pre>
<p>And then call them in the while loop as:</p>
<pre><code>set_stepper()
set_cw()
</code></pre>
| 0 | 2016-07-27T09:20:45Z | [
"python",
"python-2.7",
"gpio"
] |
Testing stepper motor with python code with easydriver | 38,607,901 | <p>I am having some problems with my Python code on my Raspberry Pi to move my stepper motor. </p>
<p>I am new to the Python language and hope I can get some help moving my stepper motor.</p>
<p>I have attached a photo of my setup:
<a href="http://i.stack.imgur.com/0hgFZ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0hgFZ.jpg" alt="Stepper Motor Setup"></a></p>
<p>I am using a Nema 17 stepper motor, an EasyDriver stepper motor driver and the Raspberry Pi 3.</p>
<p>Python version : 2.7.9</p>
<p>Installed RPi.GPIO</p>
<pre><code>import RPi.GPIO as gpio
import time
import sys
gpio.setmode(gpio.BCM)
gpio.setup(14, gpio.OUT) #step
gpio.setup(15, gpio.OUT) #dir
gpio.setup(23, gpio.OUT) #ms1
gpio.setup(24, gpio.OUT) #ms2
def set_stepper_on():
    gpio.output(14, 0)
    time.sleep(0.05)
    gpio.output(14, 1)
    time.sleep(0.05)

def set_cw():
    gpio.output(15, 0)

def set_anticw():
    gpio.output(15, 1)

def ms_steps():
    gpio.output(23, 0)
    gpio.output(24, 0)

ms_steps()
set_cw()

infinite_loop = True
steps = 0
while (infinite_loop == True):
    set_stepper_on()
    steps += 1
    print steps
</code></pre>
<p>I do not know why my motor doesn't work... :( </p>
<p>Edited: I have seen many guides saying that I have to turn the step pin on and off in the while loop for the motor to take a step, but it still doesn't work... :(</p>
| 3 | 2016-07-27T08:39:02Z | 38,805,187 | <p>Silly me! I checked the datasheet of my motor and did a multimeter test.
I had wired the A+ and A- wrongly to the EasyDriver board... Now it's working, my code works fine.</p>
| 0 | 2016-08-06T14:12:35Z | [
"python",
"python-2.7",
"gpio"
] |
How to split string into repeated substrings | 38,607,916 | <p>I have strings each of which is one or more copies of some string. For example:</p>
<pre><code>L = "hellohellohello"
M = "good"
N = "wherewhere"
O = "antant"
</code></pre>
<p>I would like to split such strings into a list so that each element just has the part that was repeated. For example:</p>
<pre><code>splitstring(L) ---> ["hello", "hello", "hello"]
splitstring(M) ---> ["good"]
splitstring(N) ---> ["where", "where"]
splitstring(O) ---> ["ant", "ant"]
</code></pre>
<p>As the strings are each about 1000 characters long it would be great if this was reasonably fast as well.</p>
<p>Note that in my case the repetitions all start at the start of the string and have no gaps in between them so it's much simpler than the general problem of finding maximal repetitions in a string.</p>
<p>How can one do this?</p>
| 3 | 2016-07-27T08:39:33Z | 38,608,105 | <p>The approach I would use:</p>
<pre><code>import re
L = "hellohellohello"
N = "good"
N = "wherewhere"
cnt = 0
result = ''
for i in range(1,len(L)+1):
if cnt <= len(re.findall(L[0:i],L)):
cnt = len(re.findall(L[0:i],L))
result = re.findall(L[0:i],L)[0]
print(result)
</code></pre>
<p>Gives the following outputs with the corresponding variable:</p>
<pre><code>hello
good
where
</code></pre>
| 0 | 2016-07-27T08:48:42Z | [
"python"
] |
How to split string into repeated substrings | 38,607,916 | <p>I have strings each of which is one or more copies of some string. For example:</p>
<pre><code>L = "hellohellohello"
M = "good"
N = "wherewhere"
O = "antant"
</code></pre>
<p>I would like to split such strings into a list so that each element just has the part that was repeated. For example:</p>
<pre><code>splitstring(L) ---> ["hello", "hello", "hello"]
splitstring(M) ---> ["good"]
splitstring(N) ---> ["where", "where"]
splitstring(O) ---> ["ant", "ant"]
</code></pre>
<p>As the strings are each about 1000 characters long it would be great if this was reasonably fast as well.</p>
<p>Note that in my case the repetitions all start at the start of the string and have no gaps in between them so it's much simpler than the general problem of finding maximal repetitions in a string.</p>
<p>How can one do this?</p>
| 3 | 2016-07-27T08:39:33Z | 38,608,219 | <p>Try this. Instead of cutting your list, it concentrates on finding the shortest pattern, then just creates a new list by repeating this pattern an appropriate number of times.</p>
<pre><code>def splitstring(s):
    # searching the number of characters to split on
    proposed_pattern = s[0]
    for i, c in enumerate(s[1:], 1):
        if proposed_pattern == s[i:(i+len(proposed_pattern))]:
            # found it
            break
        else:
            proposed_pattern += c
    else:
        # no repetition found: the whole string is the pattern
        proposed_pattern = s

    # generating the list
    n = len(proposed_pattern)
    return [proposed_pattern]*(len(s)//n)


if __name__ == '__main__':
    L = 'hellohellohellohello'
    print splitstring(L)  # prints ['hello', 'hello', 'hello', 'hello']
</code></pre>
| 1 | 2016-07-27T08:53:24Z | [
"python"
] |
How to split string into repeated substrings | 38,607,916 | <p>I have strings each of which is one or more copies of some string. For example:</p>
<pre><code>L = "hellohellohello"
M = "good"
N = "wherewhere"
O = "antant"
</code></pre>
<p>I would like to split such strings into a list so that each element just has the part that was repeated. For example:</p>
<pre><code>splitstring(L) ---> ["hello", "hello", "hello"]
splitstring(M) ---> ["good"]
splitstring(N) ---> ["where", "where"]
splitstring(O) ---> ["ant", "ant"]
</code></pre>
<p>As the strings are each about 1000 characters long it would be great if this was reasonably fast as well.</p>
<p>Note that in my case the repetitions all start at the start of the string and have no gaps in between them so it's much simpler than the general problem of finding maximal repetitions in a string.</p>
<p>How can one do this?</p>
| 3 | 2016-07-27T08:39:33Z | 38,608,257 | <p>Using regex to find the repeating word, then simply creating a list of the appropriate length:</p>
<pre><code>import re

def splitstring(string):
    match = re.match(r'(.*?)(?:\1)*$', string)
    word = match.group(1)
    return [word] * (len(string)//len(word))
</code></pre>
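<p>A quick self-contained check of this regex approach against the examples from the question (the function is repeated here so the snippet runs on its own):</p>

```python
import re

def splitstring(string):
    match = re.match(r'(.*?)(?:\1)*$', string)
    word = match.group(1)
    return [word] * (len(string) // len(word))

print(splitstring("hellohellohello"))
print(splitstring("good"))
print(splitstring("antant"))
```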
| 4 | 2016-07-27T08:55:13Z | [
"python"
] |
How to split string into repeated substrings | 38,607,916 | <p>I have strings each of which is one or more copies of some string. For example:</p>
<pre><code>L = "hellohellohello"
M = "good"
N = "wherewhere"
O = "antant"
</code></pre>
<p>I would like to split such strings into a list so that each element just has the part that was repeated. For example:</p>
<pre><code>splitstring(L) ---> ["hello", "hello", "hello"]
splitstring(M) ---> ["good"]
splitstring(N) ---> ["where", "where"]
splitstring(O) ---> ["ant", "ant"]
</code></pre>
<p>As the strings are each about 1000 characters long it would be great if this was reasonably fast as well.</p>
<p>Note that in my case the repetitions all start at the start of the string and have no gaps in between them so it's much simpler than the general problem of finding maximal repetitions in a string.</p>
<p>How can one do this?</p>
| 3 | 2016-07-27T08:39:33Z | 38,608,329 | <p>Assuming that the length of the repeated word is longer than 1 this would work:</p>
<pre><code>a = "hellohellohello"
def splitstring(string):
    for number in range(1, len(string) + 1):
        # check that the whole string is this prefix repeated
        if len(string) % number == 0 and string[:number] * (len(string)//number) == string:
            return [string[:number]] * (len(string)//number)
    # in case there is no repetition
    return [string]

splitstring(a)
</code></pre>
| 0 | 2016-07-27T08:58:41Z | [
"python"
] |
How to split string into repeated substrings | 38,607,916 | <p>I have strings each of which is one or more copies of some string. For example:</p>
<pre><code>L = "hellohellohello"
M = "good"
N = "wherewhere"
O = "antant"
</code></pre>
<p>I would like to split such strings into a list so that each element just has the part that was repeated. For example:</p>
<pre><code>splitstring(L) ---> ["hello", "hello", "hello"]
splitstring(M) ---> ["good"]
splitstring(N) ---> ["where", "where"]
splitstring(O) ---> ["ant", "ant"]
</code></pre>
<p>As the strings are each about 1000 characters long it would be great if this was reasonably fast as well.</p>
<p>Note that in my case the repetitions all start at the start of the string and have no gaps in between them so it's much simpler than the general problem of finding maximal repetitions in a string.</p>
<p>How can one do this?</p>
| 3 | 2016-07-27T08:39:33Z | 38,609,264 | <pre><code>#_*_ coding:utf-8 _*_
import re
'''
refer to the code of Gábor Erds below
'''
N = "wherewhere"
cnt = 0
result = ''
countN = 0
showresult = []
for i in range(1, len(N)+1):
    if cnt <= len(re.findall(N[0:i], N)):
        cnt = len(re.findall(N[0:i], N))
        result = re.findall(N[0:i], N)[0]

countN = len(N)/len(result)
for i in range(0, countN):
    showresult.append(result)
print showresult
</code></pre>
| 0 | 2016-07-27T09:40:35Z | [
"python"
] |
Shell command works in shell, but not when fired from Python subprocess | 38,608,121 | <p>I am trying to execute a shell command from a Python script.
I have tried the usual suspects, subprocess.call, Popen, os.system etc.</p>
<p>The command i am trying to execute is admittedly rather long (7k characters), since one of the parameters is a json string. From what I've read length should not be the issue here.</p>
<p>The command looks like this:</p>
<pre><code>phantomjs /some/path/visualizer_interface.js -path /another/path/chart.svg -type chart_pie -id 0 -language de -data '{...}'
</code></pre>
<p>The visualizer interface is a script I wrote myself that basically renders a requested chart in a PhantomJS context, grabs the SVG and writes it to the specified path. When I execute the exact same command in a shell I get a flawless chart, but in Python the subprocess never returns, and I don't get any form of feedback, not even on the subprocess's stdout.</p>
<pre><code>with open('/home/max/stdout.txt', 'w') as out:
    res = subprocess.Popen(command, shell=True, stdout=out)
    res.wait()
</code></pre>
<p>I am able to execute other shell commands, so it's not a fundamental Python problem.</p>
<p>Any ideas very much appreciated.</p>
| 1 | 2016-07-27T08:49:06Z | 38,740,554 | <p>Turns out I had a very slight error in the PhantomJS script that behaved differently depending on where it was executed.</p>
| 0 | 2016-08-03T10:08:09Z | [
"python",
"shell",
"phantomjs"
] |
Why metaclass __getattribute__ invoked here? | 38,608,223 | <p>Here is a code snippet retrieved from Python 2.7.12 documentation (<strong>3.4.12. Special method lookup for new-style classes¶</strong>):</p>
<blockquote>
<p>In addition to bypassing any instance attributes in the interest of
correctness, implicit special method lookup generally also bypasses
the <code>__getattribute__()</code> method even of the object's metaclass:</p>
<pre><code>>>> class Meta(type):
...     def __getattribute__(*args):
...         print "Metaclass getattribute invoked"
...         return type.__getattribute__(*args)
...
>>> class C(object):
...     __metaclass__ = Meta
...     def __len__(self):
...         return 10
...     def __getattribute__(*args):
...         print "Class getattribute invoked"
...         return object.__getattribute__(*args)
...
>>> c = C()
>>> c.__len__() # Explicit lookup via instance
Class getattribute invoked
10
>>> type(c).__len__(c) # Explicit lookup via type
Metaclass getattribute invoked
10
>>> len(c) # Implicit lookup
10
</code></pre>
</blockquote>
<p>My question is, why metaclass <code>__getattribute__</code> is invoked when executing <code>type(c).__len__(c)</code>?</p>
<p>Since <code>type(c)</code> yields <code>C</code>, this statement can be rewritten as <code>C.__len__(c)</code>. <code>C.__len__</code> is an unbound method defined in class <code>C</code>, and it can be found in <code>C.__dict__</code>, so why is <code>Meta</code> involved in the lookup?</p>
| 0 | 2016-07-27T08:53:28Z | 38,608,763 | <p>Quotation from the same documentation, 3.4.2.1. More attribute access for new-style classes:</p>
<blockquote>
<p><code>object.__getattribute__(self, name)</code></p>
<p>Called <strong>unconditionally</strong> to implement attribute accesses for
<strong>instances</strong> of the class. ...</p>
</blockquote>
<p>Class <code>C</code> is an instance of metaclass <code>Meta</code>, so <code>Meta.__getattribute__</code> is called when <code>C.__len__</code> is accessed, even though the latter can be found in <code>C.__dict__</code>.</p>
<p>In fact, accessing <code>C.__dict__</code> is also an attribute access, and thus <code>Meta.__getattribute__</code> would still be invoked.</p>
| 1 | 2016-07-27T09:18:21Z | [
"python",
"python-2.7",
"class",
"attributes"
] |
Convert strings to variable references | 38,608,263 | <p>I have a list of strings:</p>
<pre><code>A = ['a', 'b', 'c', 'd']
</code></pre>
<p>I would like to have a list B of variable references:</p>
<pre><code>B = [a, b, c, d]
</code></pre>
<p>How can I do that?</p>
<p><strong>Edit1:</strong></p>
<p>I have </p>
<pre><code>df_wgt_dict["Freq"+vars_dict['varName'+str(I)]]
</code></pre>
<p>here I ranges from 1 to 4.</p>
<p>I want to have a list like below:</p>
<pre><code>[df_wgt_dict["Freq"+vars_dict['varName'+str(1)],df_wgt_dict["Freq"+vars_dict['varName'+str(2)],df_wgt_dict["Freq"+vars_dict['varName'+str(3)],df_wgt_dict["Freq"+vars_dict['varName'+str(4)]]
</code></pre>
| -7 | 2016-07-27T08:55:35Z | 38,608,499 | <pre><code>A = ['a', 'b', 'c', 'd']
B = []
for s in A:
    exec('%s = %d' % (s, my_value))  # creates a variable named after the string s
    B.append(eval(s))                # look the newly created variable up by name
</code></pre>
<p>This builds list B holding the values of the variables named in A instead of the strings themselves (assuming <code>my_value</code> is defined beforehand).</p>
| -1 | 2016-07-27T09:06:28Z | [
"python"
] |
Convert strings to variable references | 38,608,263 | <p>I have a list of strings:</p>
<pre><code>A = ['a', 'b', 'c', 'd']
</code></pre>
<p>I would like to have a list B of variable references:</p>
<pre><code>B = [a, b, c, d]
</code></pre>
<p>How can I do that?</p>
<p><strong>Edit1:</strong></p>
<p>I have </p>
<pre><code>df_wgt_dict["Freq"+vars_dict['varName'+str(I)]]
</code></pre>
<p>here I ranges from 1 to 4.</p>
<p>I want to have a list like below:</p>
<pre><code>[df_wgt_dict["Freq"+vars_dict['varName'+str(1)],df_wgt_dict["Freq"+vars_dict['varName'+str(2)],df_wgt_dict["Freq"+vars_dict['varName'+str(3)],df_wgt_dict["Freq"+vars_dict['varName'+str(4)]]
</code></pre>
| -7 | 2016-07-27T08:55:35Z | 38,608,504 | <p>You can't do this directly. However, if you want to perform some operation with these variables, you can still use the keyword <a href="https://docs.python.org/3/library/functions.html#exec" rel="nofollow"><code>exec</code></a> or the function <a href="https://docs.python.org/2/library/functions.html#eval" rel="nofollow"><code>eval</code></a>(you can see <a class='doc-link' href="http://stackoverflow.com/documentation/python/2251/dynamic-code-execution-with-exec-and-eval#t=201607270913188190817">examples of this</a> in the <a class='doc-link' href="http://stackoverflow.com/documentation/python/topics">SO Documentation</a>). These are two different things, that produce different results, but here you can use on or the other without problems.</p>
<p>For example, this will print <code>5</code></p>
<pre><code>a = 5
exec 'print a'
</code></pre>
<p>A simple trick to do what you want:</p>
<pre><code>a = 5
b = 6
c = 2
d = 3
L = ['a', 'b', 'c', 'd'] # the input list
M = list() # the result list
for e in L:
    exec 'M.append('+e+')' # or eval('M.append('+e+')')
print M
</code></pre>
<p>This will print</p>
<pre><code>[5, 6, 2, 3]
</code></pre>
<p><strong>NB: This will not create a list of variable references, but a list of the values of your variables at the moment you call this. You'll have to call it again if you want updated values.</strong></p>
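<p>If the values can live in a dictionary instead of individual variables, a plain lookup avoids <code>exec</code>/<code>eval</code> entirely, which is generally safer since no string is executed as code. A sketch with made-up values:</p>

```python
values = {'a': 5, 'b': 6, 'c': 2, 'd': 3}
A = ['a', 'b', 'c', 'd']

# look each name up instead of executing it
B = [values[name] for name in A]
print(B)
```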
| -1 | 2016-07-27T09:06:42Z | [
"python"
] |
Create an if-else condition column in dask dataframe | 38,608,446 | <p>I need to create a column which is based on some condition on dask dataframe. In pandas it is fairly straightforward:</p>
<pre><code>ddf['TEST_VAR'] = ['THIS' if x == 200607 else
                   'NOT THIS' if x == 200608 else
                   'THAT' if x == 200609 else 'NONE'
                   for x in ddf['shop_week']]
</code></pre>
<p>While in dask I have to do same thing like below:</p>
<pre><code>def f(x):
if x == 200607:
y= 'THIS'
elif x == 200608 :
y= 'THAT'
else :
y= 1
return y
ddf1 = ddf.assign(col1 = list(ddf.shop_week.apply(f).compute()))
ddf1.compute()
</code></pre>
<p>Questions:</p>
<ol>
<li>Is there a better/more straightforward way to achieve it?</li>
<li>I can't modify the first dataframe ddf; I need to create ddf1 to see the changes. Is a dask dataframe an immutable object?</li>
</ol>
| 3 | 2016-07-27T09:03:54Z | 38,609,010 | <p>You could just use:</p>
<pre><code>f = lambda x: 'THIS' if x == 200607 else 'NOT THIS' if x == 200608 else 'THAT' if x == 200609 else 'NONE'
</code></pre>
<p>And then:</p>
<pre><code>ddf1 = ddf.assign(col1 = list(ddf.shop_week.apply(f).compute()))
</code></pre>
<p>Unfortunately I don't have an answer to the second question or I don't understand it...</p>
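<p>As a quick sanity check of the one-liner on plain Python values (independent of dask), you can call it directly:</p>

```python
# The same lambda as above, applied to a few sample codes
f = lambda x: 'THIS' if x == 200607 else 'NOT THIS' if x == 200608 else 'THAT' if x == 200609 else 'NONE'

print([f(v) for v in (200607, 200608, 200609, 123)])
# ['THIS', 'NOT THIS', 'THAT', 'NONE']
```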
| 1 | 2016-07-27T09:29:39Z | [
"python",
"pandas",
"dask"
] |
Create an if-else condition column in dask dataframe | 38,608,446 | <p>I need to create a column which is based on some condition on dask dataframe. In pandas it is fairly straightforward:</p>
<pre><code>ddf['TEST_VAR'] = ['THIS' if x == 200607 else
'NOT THIS' if x == 200608 else
'THAT' if x == 200609 else 'NONE'
for x in ddf['shop_week'] ]
</code></pre>
<p>While in dask I have to do same thing like below:</p>
<pre><code>def f(x):
if x == 200607:
y= 'THIS'
elif x == 200608 :
y= 'THAT'
else :
y= 1
return y
ddf1 = ddf.assign(col1 = list(ddf.shop_week.apply(f).compute()))
ddf1.compute()
</code></pre>
<p>Questions:</p>
<ol>
<li>Is there a better/more straightforward way to achieve it?</li>
<li>I can't modify the first dataframe ddf; I need to create ddf1 to see the changes. Is a dask dataframe an immutable object?</li>
</ol>
| 3 | 2016-07-27T09:03:54Z | 38,613,444 | <p>Answers:</p>
<ol>
<li><p>What you're doing now is almost ok. You don't need to call <code>compute</code> until you're ready for your final answer.</p>
<pre><code># ddf1 = ddf.assign(col1 = list(ddf.shop_week.apply(f).compute()))
ddf1 = ddf.assign(col1 = ddf.shop_week.apply(f))
</code></pre>
<p>For some cases <code>dd.Series.where</code> might be a good fit</p>
<pre><code>ddf1 = ddf.assign(col1 = ddf.shop_week.where(cond=ddf.balance > 0, other=0))
</code></pre></li>
<li><p>As of version 0.10.2 you can now insert columns directly into dask.dataframes</p>
<pre><code>ddf['col'] = ddf.shop_week.apply(f)
</code></pre></li>
</ol>
| 2 | 2016-07-27T12:48:02Z | [
"python",
"pandas",
"dask"
] |
Pandas repeated values | 38,608,453 | <p>Is there a more idiomatic way of doing this in Pandas?</p>
<p>I want to set-up a column that repeats the integers 1 to 48, for an index of length 2000:</p>
<pre><code>df = pd.DataFrame(np.zeros((2000, 1)), columns=['HH'])
h = 1
for i in range(0,2000) :
df.loc[i,'HH'] = h
if h >=48 : h =1
else : h += 1
</code></pre>
| 0 | 2016-07-27T09:04:05Z | 38,608,547 | <p>Here is a more direct and faster way:</p>
<pre><code>pd.DataFrame(np.tile(np.arange(1, 49), 2000 // 48 + 1)[:2000], columns=['HH'])
</code></pre>
<hr>
<p>The detailed step:</p>
<ol>
<li><code>np.arange(1, 49)</code> creates an array from <code>1</code> to <code>48</code> (included)</li>
</ol>
<pre><code>>>> l = np.arange(1, 49)
>>> l
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48])
</code></pre>
<ol start="2">
<li><code>np.tile(A, N)</code> repeats the array <code>A</code> <code>N</code> times, so in this case you get <code>[1 2 3 ... 48 1 2 3 ... 48 ... 1 2 3 ... 48]</code>. You should repeat the array <code>2000 // 48 + 1</code> times in order to get at least 2000 values.</li>
</ol>
<pre><code>>>> r = np.tile(l, 2000 // 48 + 1)
>>> r
array([ 1, 2, 3, ..., 46, 47, 48])
>>> r.shape # The array is slightly larger than 2000
(2016,)
</code></pre>
<ol start="3">
<li><code>[:2000]</code> retrieves the 2000 first values from the generated array to create your <code>DataFrame</code>.</li>
</ol>
<pre><code>>>> d = pd.DataFrame(r[:2000], columns=['HH'])
</code></pre>
| 3 | 2016-07-27T09:08:40Z | [
"python",
"pandas",
"dataframe"
] |
Pandas repeated values | 38,608,453 | <p>Is there a more idiomatic way of doing this in Pandas?</p>
<p>I want to set-up a column that repeats the integers 1 to 48, for an index of length 2000:</p>
<pre><code>df = pd.DataFrame(np.zeros((2000, 1)), columns=['HH'])
h = 1
for i in range(0,2000) :
df.loc[i,'HH'] = h
if h >=48 : h =1
else : h += 1
</code></pre>
| 0 | 2016-07-27T09:04:05Z | 38,609,037 | <pre><code>df = pd.DataFrame({'HH':np.append(np.tile(range(1,49),int(2000/48)), range(1,np.mod(2000,48)+1))})
</code></pre>
<p>That is, appending 2 arrays:</p>
<p>(1) <code>np.tile(range(1,49),int(2000/48))</code></p>
<pre><code>len(np.tile(range(1,49),int(2000/48)))
1968
</code></pre>
<p>(2) <code>range(1,np.mod(2000,48)+1)</code></p>
<pre><code>len(range(1,np.mod(2000,48)+1))
32
</code></pre>
<p>And constructing the <code>DataFrame</code> from a corresponding dictionary.</p>
| 0 | 2016-07-27T09:30:53Z | [
"python",
"pandas",
"dataframe"
] |
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() | 38,608,498 | <p>I am new to pandas.
I am trying to assign a negative sign to one of the columns in a data frame with the code below, but when I do that I get the error shown below.
This is what I tried:</p>
<pre><code>INTERNAL_DEBIT = InternalTxns[InternalTxns['type'].isin(['INTERNAL_DEBIT','INTERNAL'])]
new_amnt = (INTERNAL_DEBIT['amount']*-1)
</code></pre>
<p>but I need to assign the negative value only where the condition matches, while keeping the entire data. I am looking for something simpler with less code.</p>
<p>I read through other posts about similar errors, but most of them are not for a similar requirement.</p>
<p>Thanks in advance.</p>
<p><a href="http://i.stack.imgur.com/rcFAh.png" rel="nofollow"><img src="http://i.stack.imgur.com/rcFAh.png" alt="enter image description here"></a></p>
<pre><code>InternalTxns[[InternalTxns["type"] in ["INTERNAL_DEBIT","INTERNAL_TRANSFER_REVERSAL"]],'amount']=InternalTxns[[InternalTxns["type"] in ["INTERNAL_DEBIT","INTERNAL_TRANSFER_REVERSAL"]],'amount']*-1
</code></pre>
| 1 | 2016-07-27T09:06:27Z | 38,610,365 | <p>You can use the original mask you used from <code>isin</code> with <code>loc</code> to only overwrite those values:</p>
<pre><code>InternalTxns.loc[InternalTxns["type"].isin(["INTERNAL_DEBIT","INTERNAL_TRANSFER_REVERSAL"]),'amount'] *= -1
</code></pre>
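<p>A toy illustration of the same pattern with made-up data (column names borrowed from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({'type': ['INTERNAL_DEBIT', 'EXTERNAL', 'INTERNAL_TRANSFER_REVERSAL'],
                   'amount': [10.0, 20.0, 30.0]})

# Negate only the rows whose type matches; every other row is untouched
df.loc[df['type'].isin(['INTERNAL_DEBIT', 'INTERNAL_TRANSFER_REVERSAL']), 'amount'] *= -1

print(df['amount'].tolist())  # [-10.0, 20.0, -30.0]
```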
| 0 | 2016-07-27T10:27:16Z | [
"python",
"pandas",
"dataframe"
] |
How to split a column into multiple columns with the index of the string with pandas? | 38,608,563 | <p>I have data frame, it looks like:</p>
<pre><code>df = pd.DataFrame({"a":["sea001", "seac002"]})
print(df)
a
0 sea001
1 seac002
</code></pre>
<p>I want to split the a column into two columns, the first three characters in column "b", the rest in column "c"</p>
<pre><code> a b c
0 sea001 sea 001
1 seac002 sea c002
</code></pre>
<p>I want to use df.a.str.split(), but there is no option for me to separate the words after the index. How can I do this cleverly?</p>
| 1 | 2016-07-27T09:09:26Z | 38,608,614 | <p>You can use <code>str</code> with slicing semantics to do this:</p>
<pre><code>In [102]:
df['b'], df['c'] = df['a'].str[:3], df['a'].str[3:]
df
Out[102]:
a b c
0 sea001 sea 001
1 seac002 sea c002
</code></pre>
| 1 | 2016-07-27T09:11:32Z | [
"python",
"pandas"
] |
How to split a column into multiple columns with the index of the string with pandas? | 38,608,563 | <p>I have data frame, it looks like:</p>
<pre><code>df = pd.DataFrame({"a":["sea001", "seac002"]})
print(df)
a
0 sea001
1 seac002
</code></pre>
<p>I want to split the a column into two columns, the first three characters in column "b", the rest in column "c"</p>
<pre><code> a b c
0 sea001 sea 001
1 seac002 sea c002
</code></pre>
<p>I want to use df.a.str.split(), but there is no option for me to separate the words after the index. How can I do this cleverly?</p>
| 1 | 2016-07-27T09:09:26Z | 38,608,657 | <p>try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow">.str.extract()</a> method:</p>
<pre><code>In [104]: df[['b','c']] = df.a.str.extract(r'(.{3})(.*)', expand=True)
In [105]: df
Out[105]:
a b c
0 sea001 sea 001
1 seac002 sea c002
</code></pre>
| 1 | 2016-07-27T09:13:25Z | [
"python",
"pandas"
] |
Error with pip install scikit-image | 38,608,698 | <p>I am using Windows 8.1 64 bit and Python 2.7. While trying to install <code>scikit-image</code> from the shell </p>
<p><code>pip install scikit-image</code></p>
<p>I have encountered this error:</p>
<p><code>Command "python setup.py egg_info" failed with error code 1 in c:\users\france~1\appdata\local\temp\pip-buildtksnfe\scikit-image\</code></p>
<p>The download is fine but the installation fails. <strong>What is the problem here and how to solve it?</strong></p>
<p><strong>EDIT</strong></p>
<p>After upgrading my pip with</p>
<p><code>python -m pip install -U pip setuptools</code></p>
<p>and trying again, I got:</p>
<p><code>Command "python setup.py egg_info" failed with error code 1 in c:\users\france~1\appdata\local\temp\pip-build-nbemct\scikit-image\</code></p>
<p>What is wrong?</p>
| 2 | 2016-07-27T09:15:16Z | 38,618,044 | <p>install numpy</p>
<pre><code>pip install numpy
</code></pre>
<p>If you face installation issues for numpy, get the pre-built windows installers from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a> for your python version (python version is different from windows version).</p>
<p>numpy 32-bit: numpy-1.11.1+mkl-cp27-cp27m-win32.whl
numpy 64-bit: numpy-1.11.1+mkl-cp27-cp27m-win_amd64.whl</p>
<p>If you are later told that Microsoft Visual C++ 9.0 is required, get it from <a href="http://aka.ms/vcpython27" rel="nofollow">http://aka.ms/vcpython27</a></p>
<p>Then install</p>
<pre><code>pip install scikit-image
</code></pre>
<p>It will install below list before installing scikit-image</p>
<p>pyparsing, six, python-dateutil, pytz, cycler, matplotlib, scipy, decorator, networkx, pillow, toolz, dask</p>
<p>If it fails at the installation of scipy, follow the steps below:
get the pre-built windows installers from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a> for your python version (python version is different from windows version).</p>
<p>Scipy 32-bit: scipy-0.18.0-cp27-cp27m-win32.whl
Scipy 64-bit: scipy-0.18.0-cp27-cp27m-win_amd64.whl</p>
<p>If it fails saying <strong>whl is not supported wheel on this platform</strong> , then upgrade pip using <strong>python -m pip install --upgrade pip</strong> and try installing scipy</p>
<p>Now try </p>
<pre><code>pip install scikit-image
</code></pre>
<p>It should work like a charm</p>
| 4 | 2016-07-27T16:02:11Z | [
"python",
"installation",
"pip",
"scikit-image"
] |
Linear interpolation of the 4D array in Python/NumPy | 38,608,881 | <p>I have a question about linear interpolation in Python/NumPy.
I have a 4D array with the data (all data in binary files) arranged in this way:
t - time (let's say each hour for a month = 720)
Z - levels (let's say Z'=7)
Y - data1 (one for each t and Z)
X - data2 (one for each t and Z)</p>
<p>So, I want to obtain a new Y and X data for the Z'=25 with the same t.</p>
<p>First, I have some trouble with the right way to read my data from the binary file. Second, I have to interpolate the first 3 levels to Z'=15 and the others to the other values. </p>
<p>If anyone has an idea how to do it and can help it will be great.
Thank you for your attention! </p>
| 1 | 2016-07-27T09:23:41Z | 38,609,288 | <p>You can create different interpolation formulas for different combinations of z' and t. </p>
<p>For example, for <code>z=7</code>, and a specific value of <code>t</code>, you can create an interpolation formula:</p>
<pre><code>formula = scipy.interp1d(x,y)
</code></pre>
<p>Another one for say <code>z=25</code> and so on. </p>
<p>Then, given any combination of z and t, you can refer to the specific interpolation formula and do the interpolation.</p>
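<p>As a concrete one-dimensional sketch (using NumPy's <code>np.interp</code>, which performs the same piecewise-linear interpolation as <code>scipy.interp1d</code>; the level and data values here are made up):</p>

```python
import numpy as np

# Hypothetical known levels and data values at one fixed t (made-up numbers)
z_known = np.array([0.0, 5.0, 10.0, 20.0])
y_known = np.array([1.0, 2.0, 4.0, 8.0])

# Interpolate to a new level z' = 15, which lies between z = 10 and z = 20
y_new = np.interp(15.0, z_known, y_known)
print(y_new)  # 6.0
```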
| 0 | 2016-07-27T09:41:34Z | [
"python",
"arrays",
"numpy",
"4d"
] |
Linear interpolation of the 4D array in Python/NumPy | 38,608,881 | <p>I have a question about linear interpolation in Python/NumPy.
I have a 4D array with the data (all data in binary files) arranged in this way:
t - time (let's say each hour for a month = 720)
Z - levels (let's say Z'=7)
Y - data1 (one for each t and Z)
X - data2 (one for each t and Z)</p>
<p>So, I want to obtain a new Y and X data for the Z'=25 with the same t.</p>
<p>First, I have some trouble with the right way to read my data from the binary file. Second, I have to interpolate the first 3 levels to Z'=15 and the others to the other values. </p>
<p>If anyone has an idea how to do it and can help it will be great.
Thank you for your attention! </p>
| 1 | 2016-07-27T09:23:41Z | 38,610,635 | <p>In 2D for instance there is <a href="https://en.wikipedia.org/wiki/Bilinear_interpolation" rel="nofollow">bilinear interpolation</a> - with an example on the unit square with the z-values 0, 1, 1 and 0.5 as indicated. Interpolated values in between represented by colour:</p>
<p><a href="http://i.stack.imgur.com/NVWIh.png" rel="nofollow"><img src="http://i.stack.imgur.com/NVWIh.png" alt="enter image description here"></a></p>
<p>Then <a href="https://en.wikipedia.org/wiki/Trilinear_interpolation" rel="nofollow">trilinear</a>, and so on... </p>
<p><strong><em>Follow the pattern and you'll see that you can nest interpolations to any dimension you require</em></strong>...</p>
<p>:)</p>
| 0 | 2016-07-27T10:39:51Z | [
"python",
"arrays",
"numpy",
"4d"
] |
Test the equality of multiple arguments with Numpy | 38,608,956 | <p>I would like to test the equality of multiple args (i.e. it should return <code>True</code> if all args are equal and <code>False</code> if at least one argument differs).</p>
<p>As <code>numpy.equal</code> can only handle two arguments, I would have tried reduce but it, obviously, fails:</p>
<pre><code>reduce(np.equal, (4, 4, 4)) # return False because...
reduce(np.equal, (True, 4)) # ... is False
</code></pre>
| 2 | 2016-07-27T09:26:52Z | 38,609,133 | <p>You can use <code>np.unique</code> to check if the length of unique items within your array is 1:</p>
<pre><code>np.unique(array).size == 1
</code></pre>
<p>Or <code>np.all()</code> in order to check if all of the items are equal with one of your items (for example the first one):</p>
<pre><code>np.all(array == array[0])
</code></pre>
<p>Demo:</p>
<pre><code>>>> a = np.array([1, 1, 1, 1])
>>> b = np.array([1, 1, 1, 2])
>>> np.unique(a).size == 1
True
>>> np.unique(b).size == 1
False
>>> np.all(a==a[0])
True
>>> np.all(b==b[0])
False
</code></pre>
| 2 | 2016-07-27T09:34:39Z | [
"python",
"numpy"
] |
Test the equality of multiple arguments with Numpy | 38,608,956 | <p>I would like to test the equality of multiple args (i.e. it should return <code>True</code> if all args are equal and <code>False</code> if at least one argument differs).</p>
<p>As <code>numpy.equal</code> can only handle two arguments, I would have tried reduce but it, obviously, fails:</p>
<pre><code>reduce(np.equal, (4, 4, 4)) # return False because...
reduce(np.equal, (True, 4)) # ... is False
</code></pre>
| 2 | 2016-07-27T09:26:52Z | 38,609,892 | <p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package has a builtin function for this. Note that it also works on multidimensional arrays, ie you can use it to check if a stack of images are all identical, for instance.</p>
<pre><code>import numpy_indexed as npi
npi.all_equal(array)
</code></pre>
| 1 | 2016-07-27T10:06:58Z | [
"python",
"numpy"
] |
Test the equality of multiple arguments with Numpy | 38,608,956 | <p>I would like to test the equality of multiple args (i.e. it should return <code>True</code> if all args are equal and <code>False</code> if at least one argument differs).</p>
<p>As <code>numpy.equal</code> can only handle two arguments, I would have tried reduce but it, obviously, fails:</p>
<pre><code>reduce(np.equal, (4, 4, 4)) # return False because...
reduce(np.equal, (True, 4)) # ... is False
</code></pre>
| 2 | 2016-07-27T09:26:52Z | 38,610,186 | <p>If your args are floating point values the equality test can produce weird results due to round off errors. In this case you should use a more robust approach, for example <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html" rel="nofollow"><code>numpy.allclose</code></a>:</p>
<pre><code>In [636]: x = [2./3., .2/.3]
In [637]: x
Out[637]: [0.6666666666666666, 0.6666666666666667]
In [638]: xarr = np.array(x)
In [639]: np.unique(xarr).size == 1
Out[639]: False
In [640]: np.all(xarr == xarr[0])
Out[640]: False
In [641]: reduce(np.allclose, x)
Out[641]: True
</code></pre>
<p><em>Note:</em> Python 3 users will need to include the line <code>from functools import reduce</code>, since <code>reduce</code> is no longer a built-in function in Python 3.</p>
| 1 | 2016-07-27T10:19:47Z | [
"python",
"numpy"
] |
Iterating over Index in Python | 38,608,970 | <p>I am trying to write a for loop to iterate through my index and only keep the ones that have duplicates.</p>
<p>My current dataframe is two merged together</p>
<pre><code> 0.0102700 0.0308099 0.0616199 0.123240 \
5000000000010 4.330760e-05 4.442720e-05 9.232970e-05 1.994190e-04
5000000000238 6.006910e-04 6.041130e-04 1.220220e-03 2.500240e-03
...
</code></pre>
<p>.</p>
<pre><code> 0.00902317 0.0270695 0.0451159 0.0631622 \
5000000000010 6.962980e-05 7.063750e-05 7.165970e-05 7.269680e-05
5000000000234 4.638970e-04 4.716010e-04 4.794320e-04 4.873930e-04
</code></pre>
<p>.</p>
<pre><code> New = pd.concat([SFR_low, SFR_high])
New = New.sort_index()
print(New)
0.00902317 0.0102700 0.0270695 0.0308099 \
5000000000010 6.962980e-05 NaN 7.063750e-05 NaN
5000000000010 NaN 4.330760e-05 NaN 4.442720e-05
5000000000081 6.299210e-05 NaN 6.299320e-05 NaN
5000000000082 NaN 8.176550e-04 NaN 8.172630e-04
</code></pre>
<p>I need to a new dataframe that only keeps the rows with duplicate indices. </p>
| -1 | 2016-07-27T09:27:46Z | 38,609,000 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.duplicated.html" rel="nofollow"><code>Index.duplicated</code></a> with parameter <code>keep=False</code>:</p>
<pre><code>print (df.index[df.index.duplicated(keep=False)])
Int64Index([1000, 1000, 1002, 1002], dtype='int64')
for i in df.index[df.index.duplicated(keep=False)]:
print (i)
1000
1000
1002
1002
</code></pre>
<p>If need filter rows with duplicated index, use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>print (New.index.duplicated(keep=False))
[ True True False False]
print (New[New.index.duplicated(keep=False)])
0.00902317 0.0102700 0.0270695 0.0308099 0.0451159 \
5000000000010 NaN 0.000043 NaN 0.000044 NaN
5000000000010 0.00007 NaN 0.000071 NaN 0.000072
0.0616199 0.0631622 0.123240
5000000000010 0.000092 NaN 0.000199
5000000000010 NaN 0.000073 NaN
</code></pre>
| 0 | 2016-07-27T09:29:22Z | [
"python",
"loops",
"pandas",
"indexing",
"iteration"
] |
Iterating over Index in Python | 38,608,970 | <p>I am trying to write a for loop to iterate through my index and only keep the ones that have duplicates.</p>
<p>My current dataframe is two merged together</p>
<pre><code> 0.0102700 0.0308099 0.0616199 0.123240 \
5000000000010 4.330760e-05 4.442720e-05 9.232970e-05 1.994190e-04
5000000000238 6.006910e-04 6.041130e-04 1.220220e-03 2.500240e-03
...
</code></pre>
<p>.</p>
<pre><code> 0.00902317 0.0270695 0.0451159 0.0631622 \
5000000000010 6.962980e-05 7.063750e-05 7.165970e-05 7.269680e-05
5000000000234 4.638970e-04 4.716010e-04 4.794320e-04 4.873930e-04
</code></pre>
<p>.</p>
<pre><code> New = pd.concat([SFR_low, SFR_high])
New = New.sort_index()
print(New)
0.00902317 0.0102700 0.0270695 0.0308099 \
5000000000010 6.962980e-05 NaN 7.063750e-05 NaN
5000000000010 NaN 4.330760e-05 NaN 4.442720e-05
5000000000081 6.299210e-05 NaN 6.299320e-05 NaN
5000000000082 NaN 8.176550e-04 NaN 8.172630e-04
</code></pre>
<p>I need a new dataframe that only keeps the rows with duplicate indices. </p>
| -1 | 2016-07-27T09:27:46Z | 38,609,320 | <pre><code>li = [1000,1000,1001,1002,1002]
for i in li:
temp = i
count = 0
for j in li:
        if j == temp:  # compare by value, not identity
count +=1
if count > 1:
print i
</code></pre>
<p>Does this solve your requirement?</p>
| 0 | 2016-07-27T09:42:57Z | [
"python",
"loops",
"pandas",
"indexing",
"iteration"
] |
Iterating over Index in Python | 38,608,970 | <p>I am trying to write a for loop to iterate through my index and only keep the ones that have duplicates.</p>
<p>My current dataframe is two merged together</p>
<pre><code> 0.0102700 0.0308099 0.0616199 0.123240 \
5000000000010 4.330760e-05 4.442720e-05 9.232970e-05 1.994190e-04
5000000000238 6.006910e-04 6.041130e-04 1.220220e-03 2.500240e-03
...
</code></pre>
<p>.</p>
<pre><code> 0.00902317 0.0270695 0.0451159 0.0631622 \
5000000000010 6.962980e-05 7.063750e-05 7.165970e-05 7.269680e-05
5000000000234 4.638970e-04 4.716010e-04 4.794320e-04 4.873930e-04
</code></pre>
<p>.</p>
<pre><code> New = pd.concat([SFR_low, SFR_high])
New = New.sort_index()
print(New)
0.00902317 0.0102700 0.0270695 0.0308099 \
5000000000010 6.962980e-05 NaN 7.063750e-05 NaN
5000000000010 NaN 4.330760e-05 NaN 4.442720e-05
5000000000081 6.299210e-05 NaN 6.299320e-05 NaN
5000000000082 NaN 8.176550e-04 NaN 8.172630e-04
</code></pre>
<p>I need a new dataframe that only keeps the rows with duplicate indices. </p>
| -1 | 2016-07-27T09:27:46Z | 38,609,607 | <p>try some code first before asking:
there are lot of duplicate questions </p>
<pre><code>a = [1000,1000,1001,1002,1002]
c = [x for x in a if a.count(x) > 1]
print c
</code></pre>
| 0 | 2016-07-27T09:55:11Z | [
"python",
"loops",
"pandas",
"indexing",
"iteration"
] |
Runtime Error: Unknown MATLAB location when using a wrapper in python | 38,609,112 | <p>I am trying to use a Matlab wrapper for my code in the ipython spyder IDE.
I get the following error when I run the code:</p>
<pre><code>raise RuntimeError("Unknown MATLAB location: try to initialize MatlabSession with matlab_root set properly.")
RuntimeError: Unknown MATLAB location: try to initialize MatlabSession with matlab_root set properly.
</code></pre>
<p>I'm hoping this is a simple fix to import a module or change a setting in spyder. Or if anyone knows how to set the matlab_root? Any help would be much appreciated, Thank You! </p>
| 0 | 2016-07-27T09:33:46Z | 38,610,531 | <p>Basically matlab_root is a variable inside <code>MatlabSession</code>, that points to the folder where MATLAB is installed. What is happening here is that probably matlab_root has a default value that is not where your MATLAB is installed. you need to change/set this variable to the folder where MATLAB is installed.</p>
| 0 | 2016-07-27T10:34:42Z | [
"python",
"matlab",
"ipython",
"spyder"
] |
Adding Digits in a Number (Need Code Explanation) | 38,609,125 | <p>I came across this code segment elsewhere. It simply adds all the digits in a given number:</p>
<pre><code>def sumDigits(n):
sum = 0
while n > 0:
sum += n % 10
n //= 10
return sum
</code></pre>
<p>Problem is, I don't get the logic behind it at all. In particular, I don't get exactly what the loop does:</p>
<pre><code> while n > 0:
sum += n % 10 # Why n % 10?
n //= 10 # Again, not sure why we divide the number by 10
</code></pre>
<p>Could someone provide me with an example of how the algorithm works? </p>
<p>Thanks!</p>
| 0 | 2016-07-27T09:34:18Z | 38,609,229 | <p>You should understand 2 things:</p>
<ol>
<li><code>n % 10</code> give you the rightmost digit of a number. For example: <code>123 % 10 = 3</code>. More about <code>%</code> operator <a class='doc-link' href="http://stackoverflow.com/documentation/python/298/simple-math/12336/modulus#t=201607270940153745501">here</a></li>
<li><code>n // 10</code> remove the rightmost digit of a number. For example: <code>123 // 10 = 12</code>. More about <code>//</code> operator <a class='doc-link' href="http://stackoverflow.com/documentation/python/298/simple-math/1065/division#t=201607270940153745501">here</a></li>
</ol>
<p>so if you repeat that process you get the desired result</p>
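<p>Tracing the function from the question on <code>n = 123</code> makes the mechanism visible:</p>

```python
def sum_digits(n):
    total = 0
    while n > 0:
        total += n % 10  # peel off the rightmost digit
        n //= 10         # drop the rightmost digit
    return total

# Trace for n = 123:
# 123 % 10 = 3, 123 // 10 = 12 -> total = 3
#  12 % 10 = 2,  12 // 10 = 1  -> total = 5
#   1 % 10 = 1,   1 // 10 = 0  -> total = 6
print(sum_digits(123))  # 6
```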
| 9 | 2016-07-27T09:39:02Z | [
"python",
"algorithm",
"sum-of-digits"
] |
Uploading file with python to bottle server | 38,609,146 | <p>I've got two computers in the same network and I'm trying to pass a file from one to the other with python (in the context of a bigger project). </p>
<p>On the <strong>server side</strong> I've got the following bottle script:</p>
<pre class="lang-py prettyprint-override"><code>import bottle
import json
@bottle.hook('after_request')
def enable_cors():
"""
You need to add some headers to each request.
Don't use the wildcard '*' for Access-Control-Allow-Origin in production.
"""
bottle.response.headers['Access-Control-Allow-Origin'] = '*'
bottle.response.headers['Access-Control-Allow-Methods'] = 'PUT, GET, POST, DELETE, OPTIONS'
bottle.response.headers['Access-Control-Allow-Headers'] = 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token'
# LANDING (IT IS NOT REALLY NEEDED JUST TO CHECK STUFF)
@bottle.route('/', method='GET')
def root():
return {
'api': 'api/'
}
@bottle.route('/api', method='POST')
def upload():
upload = bottle.request.files.get('file')
print upload.filename
upload.save('input.txt')
if __name__ == '__main__':
bottle.run(host='0.0.0.0', port=8080, debug=True)
</code></pre>
<p>On the <strong>client side</strong> I'm trying to send the file through the request library such as: </p>
<pre class="lang-py prettyprint-override"><code>import sys
import requests
r = requests.post('http://ip:port/api/', files={'file': open(sys.argv[1], 'rb')})
print r
</code></pre>
<p>(<em>ip</em> and <em>port</em> corresponding to its respective values). </p>
<p>I am getting this error, which I'm not sure how to handle. </p>
<pre><code>Traceback (most recent call last):
File "loopmatch.py", line 4, in <module>
r = requests.post('http://ip:port/api/', files={'file': open(sys.argv[1], 'rb')})
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 111, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 57, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 453, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', error(32, 'Broken pipe'))
</code></pre>
<p>Any help will be highly appreciated.<br>
Thanks!</p>
| 0 | 2016-07-27T09:35:24Z | 38,611,520 | <p>So... The thing is pretty silly... Just leaving the answer here in case someone else stomps with this stupid mistake... </p>
<p>The <strong>bottle.route</strong> and the <strong>requests.url</strong> need to match <em>exactly</em>. In may case, the <strong>route</strong> was <code>api</code> while the <strong>url</strong> was <code>api/</code>... this is why they were not working... </p>
<p>One needs to move both to <code>api/</code> or to <code>api</code>. </p>
| 0 | 2016-07-27T11:19:53Z | [
"python",
"python-2.7",
"file-upload",
"request",
"bottle"
] |
impossible to resample date with python | 38,609,254 | <p>I have a dataframe called df1 : </p>
<pre><code>df1 = pd.read_csv('C:/Users/Demonstrator/Desktop/equipement3.csv',delimiter=';', usecols = ['TIMESTAMP','ACT_TIME_AERATEUR_1_F1'])
</code></pre>
<blockquote>
<p>TIMESTAMP;ACT_TIME_AERATEUR_1_F1</p>
<p>2015-07-31 23:00:00;90 </p>
<p>2015-07-31 23:10:00;0 </p>
<p>2015-07-31 23:20:00;0 </p>
<p>2015-07-31 23:30:00;0 </p>
<p>2015-07-31 23:40:00;0 </p>
<p>2015-07-31 23:50:00;0 </p>
<p>2015-08-01 00:00:00;0 </p>
<p>2015-08-01 00:10:00;50 </p>
<p>2015-08-01 00:20:00;0 </p>
<p>2015-08-01 00:30:00;0</p>
<p>2015-08-01 00:40:00;0 </p>
</blockquote>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib import style
import pandas as pd
style.use('ggplot')
df1.index = pd.to_datetime(df1['TIMESTAMP'], format='%Y-%m-%d %H:%M:%S.%f')
df1 = df1.drop('TIMESTAMP', axis=1)
df1 = df1.resample('resamplestring', how='mean')
</code></pre>
<p>I got this kind of error : </p>
<blockquote>
<p>IndexError: only integers, slices (<code>:</code>), ellipsis (<code>...</code>),
numpy.newaxis (<code>None</code>) and integer or boolean arrays are valid indices</p>
</blockquote>
<p>Could you help me, please? </p>
<p>Thank you</p>
| 1 | 2016-07-27T09:39:58Z | 38,611,327 | <p>You can add parameter <code>parse_dates</code> and <code>index_col</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow"><code>resample</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u"""TIMESTAMP;ACT_TIME_AERATEUR_1_F1
2015-07-31 23:00:00;90
2015-07-31 23:10:00;0
2015-07-31 23:20:00;0
2015-07-31 23:30:00;0
2015-07-31 23:40:00;0
2015-07-31 23:50:00;0
2015-08-01 00:00:00;0
2015-08-01 00:10:00;50
2015-08-01 00:20:00;0
2015-08-01 00:30:00;0
2015-08-01 00:40:00;0"""
#after testing replace io.StringIO(temp) to filename
df1 = pd.read_csv(io.StringIO(temp),
sep=";",
usecols = ['TIMESTAMP','ACT_TIME_AERATEUR_1_F1'],
parse_dates=['TIMESTAMP'],
index_col=['TIMESTAMP'] )
</code></pre>
<pre><code>print (df1)
ACT_TIME_AERATEUR_1_F1
TIMESTAMP
2015-07-31 23:00:00 90
2015-07-31 23:10:00 0
2015-07-31 23:20:00 0
2015-07-31 23:30:00 0
2015-07-31 23:40:00 0
2015-07-31 23:50:00 0
2015-08-01 00:00:00 0
2015-08-01 00:10:00 50
2015-08-01 00:20:00 0
2015-08-01 00:30:00 0
2015-08-01 00:40:00 0
print (df1.index)
DatetimeIndex(['2015-07-31 23:00:00', '2015-07-31 23:10:00',
'2015-07-31 23:20:00', '2015-07-31 23:30:00',
'2015-07-31 23:40:00', '2015-07-31 23:50:00',
'2015-08-01 00:00:00', '2015-08-01 00:10:00',
'2015-08-01 00:20:00', '2015-08-01 00:30:00',
'2015-08-01 00:40:00'],
dtype='datetime64[ns]', name='TIMESTAMP', freq=None)
#pandas 0.18.0 and more
print (df1.resample('30Min').mean())
ACT_TIME_AERATEUR_1_F1
TIMESTAMP
2015-07-31 23:00:00 30.000000
2015-07-31 23:30:00 0.000000
2015-08-01 00:00:00 16.666667
2015-08-01 00:30:00 0.000000
</code></pre>
<hr>
<pre><code>#pandas bellow 0.18.0
print (df1.resample('30Min', how='mean'))
TIMESTAMP
2015-07-31 23:00:00 30.000000
2015-07-31 23:30:00 0.000000
2015-08-01 00:00:00 16.666667
2015-08-01 00:30:00 0.000000
</code></pre>
| 0 | 2016-07-27T11:11:39Z | [
"python",
"date",
"pandas",
"resampling"
] |
Conversion array with float values in the array with the values of str | 38,609,263 | <p>There is an array with the values of <code>float</code> which was prepared as follows:</p>
<pre><code>import numpy as np
result256 = np.linspace(0, 2, 20)
</code></pre>
<p>It is necessary to get an array of <code>result 256</code> with values <code>str</code>.</p>
| 0 | 2016-07-27T09:40:26Z | 38,610,788 | <p>Do you mean like this?</p>
<pre><code>result256str = []
for num in result256: result256str.append(str(num))
</code></pre>
| 0 | 2016-07-27T10:46:44Z | [
"python",
"numpy"
] |
Pandas Dataframe to Seaborn Grouped Barchart | 38,609,339 | <p>I have the following dataframe which I have obtained from a larger dataframe which lists the worst 10 "Benchmark Returns" and their corresponding portfolio returns and dates:</p>
<p><a href="http://i.stack.imgur.com/XGgE8.png" rel="nofollow"><img src="http://i.stack.imgur.com/XGgE8.png" alt="enter image description here"></a></p>
<p>I've managed to create a Seaborn bar plot which lists Benchmark Returns against their corresponding dates with this script:</p>
<pre><code>import pandas as pd
import seaborn as sns
df = pd.read_csv('L:\\My Documents\\Desktop\\Data NEW.csv', parse_dates = True)
df = df.nsmallest(10, columns = 'Benchmark Returns')
df = df[['Date', 'Benchmark Returns', 'Portfolio Returns']]
p6 = sns.barplot(x = 'Date', y = 'Benchmark Returns', data = df)
p6.set(ylabel = 'Return (%)')
for x_ticks in p6.get_xticklabels():
x_ticks.set_rotation(90)
</code></pre>
<p>And it produces this plot:</p>
<p><a href="http://i.stack.imgur.com/Q3Snf.png" rel="nofollow"><img src="http://i.stack.imgur.com/Q3Snf.png" alt="enter image description here"></a></p>
<p>However, what I'd like is a grouped bar plot that contains both Benchmark Returns and Portfolio Returns, where two different colours are used to distinguish between these two categories. </p>
<p>I've tried several different methods but nothing seems to work.</p>
<p>Thanks in advance for all your help! </p>
| 1 | 2016-07-27T09:43:45Z | 38,611,976 | <p>Please look if this is what you wanted to see.</p>
<p>The trick is to transform the pandas <code>df</code> from wide to long format</p>
<p><strong>Step 1: Prepare data</strong></p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

np.random.seed(123)
index = np.random.randint(1,100,10)
x1 = pd.date_range('2000-01-01','2015-01-01').map(lambda t: t.strftime('%Y-%m-%d'))
dts = np.random.choice(x1,10)
benchmark = np.random.randn(10)
portfolio = np.random.randn(10)
df = pd.DataFrame({'Index': index,
'Dates': dts,
'Benchmark': benchmark,
'Portfolio': portfolio},
columns = ['Index','Dates','Benchmark','Portfolio'])
</code></pre>
<p><strong>Step 2: From "wide" to "long" format</strong></p>
<pre><code>df1 = pd.melt(df, id_vars=['Index','Dates']).sort_values(['variable','value'])
df1
Index Dates variable value
9 48 2012-06-13 Benchmark -1.410301
1 93 2002-07-31 Benchmark -1.301489
8 97 2005-01-21 Benchmark -1.100985
0 67 2011-06-01 Benchmark 0.126526
4 84 2003-09-25 Benchmark 0.465645
3 18 2009-07-13 Benchmark 0.522742
5 58 2007-12-04 Benchmark 0.724915
7 98 2002-12-28 Benchmark 0.746581
6 87 2009-02-07 Benchmark 1.495827
2 99 2000-04-21 Benchmark 2.207427
16 87 2009-02-07 Portfolio -2.750224
14 84 2003-09-25 Portfolio -1.855637
15 58 2007-12-04 Portfolio -1.779455
19 48 2012-06-13 Portfolio -1.774134
11 93 2002-07-31 Portfolio -0.984868
12 99 2000-04-21 Portfolio -0.748569
10 67 2011-06-01 Portfolio -0.747651
18 97 2005-01-21 Portfolio -0.695981
17 98 2002-12-28 Portfolio -0.234158
13 18 2009-07-13 Portfolio 0.240367
</code></pre>
<p><strong>Step 3: Plot</strong></p>
<pre><code>sns.barplot(x='Dates', y='value', hue='variable', data=df1)
plt.xticks(rotation=90)
plt.ylabel('Returns')
plt.title('Portfolio vs Benchmark Returns');
</code></pre>
<p><a href="http://i.stack.imgur.com/qPCzZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/qPCzZ.png" alt="enter image description here"></a></p>
| 2 | 2016-07-27T11:42:33Z | [
"python",
"pandas",
"visualization",
"seaborn"
] |
Gmail API python implementation | 38,609,383 | <p>I'm implementing a bot where I can read emails, and I'm following the <code>Gmail API</code>. I can read all the labels and have stored them in a list, but I am not able to read the messages inside a label.</p>
<pre><code>credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('gmail', 'v1', http=http)
results = service.users().labels().get('me',"INBOX").execute()
print (results.getName())
</code></pre>
<p>and I get an error - </p>
<pre><code>results = service.users().labels().get('me',"INBOX").execute()
TypeError: method() takes exactly 1 argument (3 given)
</code></pre>
<p>The official API docs implementation of <code>get label</code> is in Java.
Can someone please tell me what I'm doing wrong?</p>
<pre><code>SCOPES = 'https://www.googleapis.com/auth/gmail.readonly','https://mail.google.com/','https://www.googleapis.com/auth/gmail.modify','https://www.googleapis.com/auth/gmail.labels'
</code></pre>
| 1 | 2016-07-27T09:45:04Z | 38,609,479 | <p>I think this is what you are supposed to do:</p>
<pre><code> results = service.users().labels().list(userId='me').execute()
</code></pre>
<p>From the official documentation:
<a href="https://developers.google.com/gmail/api/quickstart/python" rel="nofollow">https://developers.google.com/gmail/api/quickstart/python</a></p>
<p>Upon further reading, this seems to be a 2 stage process.</p>
<p>First you need to grab the list of messages with a query:</p>
<pre><code>response = service.users().messages().list(userId=user_id, q=query,
pageToken=page_token).execute()
</code></pre>
<p>Then you grab the message by its ID:</p>
<pre><code>message = service.users().messages().get(userId=user_id, id=msg_id).execute()
</code></pre>
| 1 | 2016-07-27T09:48:50Z | [
"python",
"gmail-api"
] |
Gmail API python implementation | 38,609,383 | <p>I'm implementing a bot where I can read emails, and I'm following the <code>Gmail API</code>. I can read all the labels and have stored them in a list, but I am not able to read the messages inside a label.</p>
<pre><code>credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('gmail', 'v1', http=http)
results = service.users().labels().get('me',"INBOX").execute()
print (results.getName())
</code></pre>
<p>and I get an error - </p>
<pre><code>results = service.users().labels().get('me',"INBOX").execute()
TypeError: method() takes exactly 1 argument (3 given)
</code></pre>
<p>The official API docs implementation of <code>get label</code> is in Java.
Can someone please tell me what I'm doing wrong?</p>
<pre><code>SCOPES = 'https://www.googleapis.com/auth/gmail.readonly','https://mail.google.com/','https://www.googleapis.com/auth/gmail.modify','https://www.googleapis.com/auth/gmail.labels'
</code></pre>
| 1 | 2016-07-27T09:45:04Z | 38,792,583 | <p>The mistake here is that you are using the 'get' method for labels rather than for messages. This get method is used to find out information about a label, such as the number of unread messages inside that label.</p>
<p>You can see on the line below you are asking for the .labels</p>
<pre><code>results = service.users().labels().get('me',"INBOX").execute()
</code></pre>
<p>You are correct that this example is only shown in Java. If you want the get (for labels) method to work in python here is an example of the code to use.</p>
<pre><code>results = service.users().labels().get(userId='me',id='Label_15').execute()
</code></pre>
| 0 | 2016-08-05T15:13:18Z | [
"python",
"gmail-api"
] |
Regex not working as required | 38,609,390 | <p>Here is my HTML code:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="../searchbyaddress.aspx">Search by address</a></li>
<li><a href="../searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
<p>Here is my Python code:</p>
<pre><code>import re, os
from urllib.parse import urlparse
url = "http://www.phonebook.com.pk/dynamic/search.aspx?searchtype=cat&class_id=2566"
path = urlparse(url)
lpath = os.path.dirname(path.path)
html = u"<ul class=\"hide menuSearchType\">\n <li><a href=\"../../dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"../../searchbyphone.aspx\">Search by phone</a></li>\n <li><a href=\"../searchbyaddress.aspx\">Search by address</a></li>\n <li><a href=\"../searchbybrand.aspx\">Search by brand</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"searchbybrand.aspx\">Search by brand</a></li>\n</ul>"
linkList1 = re.findall(re.compile(u'(?<=href=")../.*?(?=")'), str(html))
for link1 in linkList1:
html = re.sub(link1, path.scheme + "://" + os.path.normpath(path.netloc + os.path.abspath(lpath + "/" + link1)), str(html))
print (html)
</code></pre>
<p>The problem is that it detects the links with "../" as intended, but "../../" is also changed. Is there any way I can restrict my regex to pick only the URLs with a single "../"?</p>
<p>Expected output:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="http://www.phonebook.com.pk/searchbyaddress.aspx">Search by address</a></li>
<li><a href="http://www.phonebook.com.pk/searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
| -3 | 2016-07-27T09:45:37Z | 38,609,864 | <p>Try using the following:</p>
<pre><code>linkList1 = re.findall(re.compile(u'(?<=href=")../\w.*?(?=")'), str(html))
</code></pre>
<p>That guarantees that there has to be a word character after the slash.</p>
| 1 | 2016-07-27T10:05:55Z | [
"python",
"html",
"regex"
] |
Regex not working as required | 38,609,390 | <p>Here is my HTML code:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="../searchbyaddress.aspx">Search by address</a></li>
<li><a href="../searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
<p>Here is my Python code:</p>
<pre><code>import re, os
from urllib.parse import urlparse
url = "http://www.phonebook.com.pk/dynamic/search.aspx?searchtype=cat&class_id=2566"
path = urlparse(url)
lpath = os.path.dirname(path.path)
html = u"<ul class=\"hide menuSearchType\">\n <li><a href=\"../../dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"../../searchbyphone.aspx\">Search by phone</a></li>\n <li><a href=\"../searchbyaddress.aspx\">Search by address</a></li>\n <li><a href=\"../searchbybrand.aspx\">Search by brand</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"searchbybrand.aspx\">Search by brand</a></li>\n</ul>"
linkList1 = re.findall(re.compile(u'(?<=href=")../.*?(?=")'), str(html))
for link1 in linkList1:
html = re.sub(link1, path.scheme + "://" + os.path.normpath(path.netloc + os.path.abspath(lpath + "/" + link1)), str(html))
print (html)
</code></pre>
<p>The problem is that it detects the links with "../" as intended, but "../../" is also changed. Is there any way I can restrict my regex to pick only the URLs with a single "../"?</p>
<p>Expected output:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="http://www.phonebook.com.pk/searchbyaddress.aspx">Search by address</a></li>
<li><a href="http://www.phonebook.com.pk/searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
| -3 | 2016-07-27T09:45:37Z | 38,610,024 | <p>You can replace the string using a regex:</p>
<pre><code>output = re.sub(r'(?is)(href="../)([^.])', r'href="http://www.phonebook.com.pk/\2', str(html))
</code></pre>
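A quick check of a substitution of this shape on two lines from the question's HTML (editor's sketch, not part of the original answer). Note that the replacement string must keep the <code>href="</code> prefix so the attribute itself survives, and that <code>[^.]</code> blocks the "../../" links:

```python
import re

line1 = '<li><a href="../searchbyaddress.aspx">Search by address</a></li>'
line2 = '<li><a href="../../searchbyphone.aspx">Search by phone</a></li>'

# single "../" gets rewritten; "../../" fails the [^.] check and is untouched
fixed = re.sub(r'(?is)(href="../)([^.])',
               r'href="http://www.phonebook.com.pk/\2', line1)
untouched = re.sub(r'(?is)(href="../)([^.])',
                   r'href="http://www.phonebook.com.pk/\2', line2)
print(fixed)
print(untouched)
```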
| 1 | 2016-07-27T10:12:56Z | [
"python",
"html",
"regex"
] |
Regex not working as required | 38,609,390 | <p>Here is my HTML code:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="../searchbyaddress.aspx">Search by address</a></li>
<li><a href="../searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
<p>Here is my Python code:</p>
<pre><code>import re, os
from urllib.parse import urlparse
url = "http://www.phonebook.com.pk/dynamic/search.aspx?searchtype=cat&class_id=2566"
path = urlparse(url)
lpath = os.path.dirname(path.path)
html = u"<ul class=\"hide menuSearchType\">\n <li><a href=\"../../dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"../../searchbyphone.aspx\">Search by phone</a></li>\n <li><a href=\"../searchbyaddress.aspx\">Search by address</a></li>\n <li><a href=\"../searchbybrand.aspx\">Search by brand</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"searchbybrand.aspx\">Search by brand</a></li>\n</ul>"
linkList1 = re.findall(re.compile(u'(?<=href=")../.*?(?=")'), str(html))
for link1 in linkList1:
html = re.sub(link1, path.scheme + "://" + os.path.normpath(path.netloc + os.path.abspath(lpath + "/" + link1)), str(html))
print (html)
</code></pre>
<p>The problem is that it detects the links with "../" as intended, but "../../" is also changed. Is there any way I can restrict my regex to pick only the URLs with a single "../"?</p>
<p>Expected output:</p>
<pre><code><ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="http://www.phonebook.com.pk/searchbyaddress.aspx">Search by address</a></li>
<li><a href="http://www.phonebook.com.pk/searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
</code></pre>
| -3 | 2016-07-27T09:45:37Z | 38,611,137 | <p>By using BeautifulSoup as requested:</p>
<pre><code>import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'html.parser')
items = soup.select('li')
for item in items:
    try:
        output = re.sub(r'(?is)(href="../)([^.])',
                        r'href="http://www.phonebook.com.pk/\2', str(item))
    except Exception:
        output = item
    print(output)
</code></pre>
| 2 | 2016-07-27T11:02:03Z | [
"python",
"html",
"regex"
] |
Theano partial derivative over element in symbolic vector | 38,609,421 | <p>I recently hit some performance bottlenecks with symbolic matrix derivatives in Sympy (specifically, the single line of code evaluating symbolic matrices via substitution using lambdas was taking ~90% of the program's runtime), so I decided to give Theano a go.</p>
<p>Its previous application was evaluating the partial derivatives over the hyperparameters of a Gaussian process, where using a (1, k) dimension matrix of Sympy symbols (MatrixSymbol) worked nicely in terms of iterating over this list and differentiating the matrix on each item.</p>
<p>However, this doesn't carry over into Theano, and the documentation doesn't seem to detail how to do this. Indexing a symbolic vector in Theano returns the Subtensor type, which is invalid for calculating the gradient on.</p>
<p>Below is a simple (but entirely algorithmically incorrect - stripped down to the functionality I'm trying to obtain) version of what I'm attempting to do.</p>
<p>EDIT: I have modified the code sample to include the data as a tensor to be passed into the function as suggested below, and included an alternate attempt at instead using a list of separate scalar tensors as I cannot index the values of a symbolic Theano vector, though also to no avail.</p>
<pre><code>import theano
import numpy as np
# Sample data
data = np.array(10*np.random.rand(5, 3), dtype='int64')
# Not including data as tensor, incorrect/invalid indexing of symbolic vector
l_scales_sym = theano.tensor.dvector('l_scales')
x = theano.tensor.dmatrix('x')
f = x/l_scales_sym
f_eval = theano.function([x, l_scales_sym], f)
df_dl = theano.gradient.jacobian(f.flatten(), l_scales_sym[0])
df_dl_eval = theano.function([x, l_scales_sym], df_dl)
</code></pre>
<p>The second last line of the code snippet is where I am trying to get a partial derivative over one of the elements in the list of 'length scale' variables, but this sort of indexing is inapplicable to the symbolic vectors.</p>
<p>Any help would be greatly appreciated!</p>
| 0 | 2016-07-27T09:46:54Z | 38,614,977 | <p>When using Theano, all variables should be defined as Theano tensors (or shared variables); otherwise, the variable does not become part of the computational graph. In <code>f = data/l_scales_sym</code> the variable <code>data</code> is a numpy array. Try to also define it as a tensor; it should work.</p>
| 1 | 2016-07-27T13:54:04Z | [
"python",
"theano",
"derivative"
] |
compare an array with two other arrays in python | 38,609,502 | <p>I have a set of 3 arrays which I would like to compare to each other.
Array a contains a set of values, and the values of arrays b and c should be partly the same.</p>
<p>It is such that if, let's say, <code>a[0] == b[0]</code>, then <code>c[0]</code> is always a wrong value.</p>
<p>For a better explanation, I will try to show what I mean.</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([2, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [True True False False]
</code></pre>
<p>As you can see, from both sets I have two True and two False values. So if one of the two is true, the total should be true. When I do the following I get a single True/False for an array, and the answers are what I want...</p>
<pre><code>print((a == b).all())
# return: False
print((a == c).all())
# return: False
print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>When I make arrays b and c such that I have one value that is wrong in both cases, I should end up with False:</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([5, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [False True False False]
print((a == b).all())
# return: False
print((a == c).all())
# return: False
</code></pre>
<p>So far, so good.</p>
<pre><code>print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>This part is not good; this should be False!!
How do I get an OR function like this, so that I end up with True when for each value in <code>a</code> a same value exists in <code>b</code> or <code>c</code>?</p>
<p>EDIT:
Explanation about "<code>a[0] == b[0]</code> then <code>c[0]</code>":
I have a Python function where phase information comes in and I have to perform some actions.
Before that I want to check whether I am dealing with an array of imaginary values or with an array of phase angles. I want to check this before I do something. The problem is the phase angle, because the right side is the inverted phase +/- pi, so for every value I have two options. And yes, most of the time it is an exclusive or, but in the case of +/- pi/2 it is not, since both are true, and that is also fine...</p>
| 2 | 2016-07-27T09:50:02Z | 38,609,576 | <p>You can try this way instead: </p>
<pre><code>(a==b).all() or (a==c).all()
</code></pre>
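Worth noting (editor's aside, not in the original answer): this expression asks whether one of the arrays matches <code>a</code> <em>entirely</em>, which is different from an element-wise OR. With the question's first example the two disagree:

```python
import numpy as np

a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([2, 2, 4, 3])

whole_array = (a == b).all() or (a == c).all()  # neither b nor c equals a fully
elementwise = ((a == b) | (a == c)).all()       # every position matches b or c
print(whole_array, elementwise)
```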
| 0 | 2016-07-27T09:53:51Z | [
"python",
"arrays",
"python-3.x",
"numpy"
] |
compare an array with two other arrays in python | 38,609,502 | <p>I have a set of 3 arrays which I would like to compare to each other.
Array a contains a set of values, and the values of arrays b and c should be partly the same.</p>
<p>It is such that if, let's say, <code>a[0] == b[0]</code>, then <code>c[0]</code> is always a wrong value.</p>
<p>For a better explanation, I will try to show what I mean.</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([2, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [True True False False]
</code></pre>
<p>As you can see, from both sets I have two True and two False values. So if one of the two is true, the total should be true. When I do the following I get a single True/False for an array, and the answers are what I want...</p>
<pre><code>print((a == b).all())
# return: False
print((a == c).all())
# return: False
print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>When I make arrays b and c such that I have one value that is wrong in both cases, I should end up with False:</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([5, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [False True False False]
print((a == b).all())
# return: False
print((a == c).all())
# return: False
</code></pre>
<p>So far, so good.</p>
<pre><code>print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>This part is not good; this should be False!!
How do I get an OR function like this, so that I end up with True when for each value in <code>a</code> a same value exists in <code>b</code> or <code>c</code>?</p>
<p>EDIT:
Explanation about "<code>a[0] == b[0]</code> then <code>c[0]</code>":
I have a Python function where phase information comes in and I have to perform some actions.
Before that I want to check whether I am dealing with an array of imaginary values or with an array of phase angles. I want to check this before I do something. The problem is the phase angle, because the right side is the inverted phase +/- pi, so for every value I have two options. And yes, most of the time it is an exclusive or, but in the case of +/- pi/2 it is not, since both are true, and that is also fine...</p>
| 2 | 2016-07-27T09:50:02Z | 38,609,626 | <p>You want a logical <code>OR</code>:</p>
<pre><code>np.logical_or(a==b, a==c).all()
</code></pre>
| 2 | 2016-07-27T09:55:54Z | [
"python",
"arrays",
"python-3.x",
"numpy"
] |
compare an array with two other arrays in python | 38,609,502 | <p>I have a set of 3 arrays which I would like to compare to each other.
Array a contains a set of values, and the values of arrays b and c should be partly the same.</p>
<p>It is such that if, let's say, <code>a[0] == b[0]</code>, then <code>c[0]</code> is always a wrong value.</p>
<p>For a better explanation, I will try to show what I mean.</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([2, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [True True False False]
</code></pre>
<p>As you can see, from both sets I have two True and two False values. So if one of the two is true, the total should be true. When I do the following I get a single True/False for an array, and the answers are what I want...</p>
<pre><code>print((a == b).all())
# return: False
print((a == c).all())
# return: False
print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>When I make arrays b and c such that I have one value that is wrong in both cases, I should end up with False:</p>
<pre><code>import numpy as np
a = np.array([2, 2, 3, 4])
b = np.array([1, 3, 3, 4])
c = np.array([5, 2, 4, 3])
print(a == b)
# return: [False False True True]
print(a == c)
# return: [False True False False]
print((a == b).all())
# return: False
print((a == c).all())
# return: False
</code></pre>
<p>So far, so good.</p>
<pre><code>print(a.all() == (b.all() or c.all()))
# return: True
</code></pre>
<p>This part is not good; this should be False!!
How do I get an OR function like this, so that I end up with True when for each value in <code>a</code> a same value exists in <code>b</code> or <code>c</code>?</p>
<p>EDIT:
Explanation about "<code>a[0] == b[0]</code> then <code>c[0]</code>":
I have a Python function where phase information comes in and I have to perform some actions.
Before that I want to check whether I am dealing with an array of imaginary values or with an array of phase angles. I want to check this before I do something. The problem is the phase angle, because the right side is the inverted phase +/- pi, so for every value I have two options. And yes, most of the time it is an exclusive or, but in the case of +/- pi/2 it is not, since both are true, and that is also fine...</p>
| 2 | 2016-07-27T09:50:02Z | 38,609,750 | <p>From your example and explanation, I guess what you want is:</p>
<blockquote>
<p>For each position, <strong>exactly</strong> one of <code>b</code> or <code>c</code> has the same value as <code>a</code></p>
</blockquote>
<p>If that's the case, that can be done with the following code:</p>
<pre><code>def is_exclusively_jointly_same(a, b, c):
return np.logical_xor(a==b, a==c).all() # or use ^ below
# return ((a==b)^(a==c)).all()
</code></pre>
<p>The <code>^</code> is exclusive or (XOR) operator, which returns <code>True</code> if and only if exactly one of its argument is <code>True</code>.</p>
<p>So the expression <code>(a==b)^(a==c)</code> means either <code>a==b</code> or <code>a==c</code>, but not both. Then the <code>.all()</code> checks whether this is true for all positions in the array.</p>
<p>Examples:</p>
<pre>
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([1,2,0,0,5])
>>> c=np.array([0,0,3,4,0])
>>> is_exclusively_jointly_same(a, b, c)
True
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([0,2,0,0,5]) # First value both not 1
>>> c=np.array([0,0,3,4,0])
>>> is_exclusively_jointly_same(a, b, c)
False
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([1,2,0,0,5]) # First value both 1
>>> c=np.array([1,0,3,4,0])
>>> is_exclusively_jointly_same(a, b, c)
False
</pre>
<p>If what you want is that:</p>
<blockquote>
<p>For each position, <strong>at least</strong> one of <code>b</code> and <code>c</code> should have the same value as <code>a</code></p>
</blockquote>
<p>, then you need to change to OR instead of XOR, as follows:</p>
<pre><code>def is_jointly_same(a, b, c):
return np.logical_or(a==b, a==c).all() # or use | below
# return ((a==b) | (a==c)).all()
</code></pre>
<p>Examples:</p>
<pre>
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([1,2,0,0,5])
>>> c=np.array([0,0,3,4,0])
>>> is_jointly_same(a, b, c)
True
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([0,2,0,0,5]) # First value both not 1
>>> c=np.array([0,0,3,4,0])
>>> is_jointly_same(a, b, c)
False
>>> a=np.array([1,2,3,4,5])
>>> b=np.array([1,2,0,0,5]) # First value both 1
>>> c=np.array([1,0,3,4,0])
>>> is_jointly_same(a, b, c)
True
</pre>
<p>The key here is that <code>.all()</code> should be applied <em>once</em> as final aggregator, when each individual values has already been calculated. So when you see that you are applying <code>.all()</code> multiple times, you should be concerned.</p>
| 1 | 2016-07-27T10:00:18Z | [
"python",
"arrays",
"python-3.x",
"numpy"
] |