title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Add header #ifndef into C header files in current directory | 38,778,409 | <p>In the current directory, none of the C header files (*.h) contain the include-guard preprocessor macro</p>
<pre><code>#ifndef FILENAME_H
#define FILENAME_H
...
#endif
</code></pre>
<p>It is too tedious to add the guard to each header manually. How can I do that automatically via Python or shell?</p>
| -1 | 2016-08-04T22:30:51Z | 38,778,739 | <p>Assuming you are in a <code>unix</code> shell with <code>find</code>, <code>cut</code>, and <code>sed</code> available, you can get each filename using <code>find</code> and then use <code>sed</code> to edit those files in place.</p>
<p>You can save the below script in a file called <code>addifndef.sh</code>.</p>
<pre><code>for fn in $(find . -maxdepth 1 -type f -regex '.*\.h$' | cut -f 2 -d '/');
do
    # basename without the extension; the guard uses it verbatim, so pipe
    # through `tr a-z A-Z` if you want the conventional uppercase macro
    dn=$(echo $fn | cut -f 1 -d '.');
    # insert the #ifndef/#define pair before line 1 and append #endif, in place
    sed -i -e "1 i#ifndef ${dn}_H\n#define ${dn}_H" -e "$ a#endif" "$fn";
done
</code></pre>
<p>Then run the script with <code>sh addifndef.sh</code> at your shell prompt.</p>
<p>Alternatively, you can run the loop directly on the command line.</p>
<p>For more information, see the man pages of <code>find</code>, <code>cut</code>, and <code>sed</code>.</p>
| 1 | 2016-08-04T23:07:29Z | [
"python",
"c",
"shell",
"c-preprocessor"
] |
Add header #ifndef into C header files in current directory | 38,778,409 | <p>In the current directory, none of the C header files (*.h) contain the include-guard preprocessor macro</p>
<pre><code>#ifndef FILENAME_H
#define FILENAME_H
...
#endif
</code></pre>
<p>It is too tedious to add the guard to each header manually. How can I do that automatically via Python or shell?</p>
| -1 | 2016-08-04T22:30:51Z | 38,778,750 | <p>I'm going to second @kaylum's advice and also not give you the whole thing, but here is some pseudocode that might set you on the right path</p>
<pre><code>for each file in the directory
if filename doesn't end with .h
continue
open the file
store its contents in a variable
create the header guard by taking the filename, removing the '.', and replacing it with a '_'
create new contents = headerGuard + contents + "\n#endif"
write file back out to the same name
</code></pre>
<p>Each of these things should be answerable with a quick google/stack overflow search, and if you can't figure out any of those parts, a specific stack overflow question about that bit would be better suited for this site. <a href="http://stackoverflow.com/questions/3964681/find-all-files-in-directory-with-extension-txt-in-python">And here is one link to a relevant question to get you started.</a></p>
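<p>For reference, one minimal Python sketch of the pseudocode above (the function name <code>add_guards</code> and the exact guard format are my own choices; treat it as a starting point rather than a finished tool):</p>

```python
# Hypothetical sketch of the pseudocode: wrap every .h file in a
# directory in an include guard built from its filename (foo.h -> FOO_H).
import os


def add_guards(directory):
    for filename in os.listdir(directory):
        if not filename.endswith('.h'):
            continue
        path = os.path.join(directory, filename)
        with open(path) as f:
            contents = f.read()
        # build the guard macro: replace '.' with '_' and uppercase
        guard = filename.replace('.', '_').upper()
        new_contents = ('#ifndef %s\n#define %s\n\n' % (guard, guard)
                        + contents + '\n#endif\n')
        with open(path, 'w') as f:  # write back out to the same name
            f.write(new_contents)
```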
| 1 | 2016-08-04T23:09:23Z | [
"python",
"c",
"shell",
"c-preprocessor"
] |
Add header #ifndef into C header files in current directory | 38,778,409 | <p>In the current directory, none of the C header files (*.h) contain the include-guard preprocessor macro</p>
<pre><code>#ifndef FILENAME_H
#define FILENAME_H
...
#endif
</code></pre>
<p>It is too tedious to add the guard to each header manually. How can I do that automatically via Python or shell?</p>
| -1 | 2016-08-04T22:30:51Z | 38,789,861 | <p>Based on Cody's answer, I implemented <code>guardHeader.py</code>:</p>
<pre><code>#!/usr/bin/python3

import glob
import os
import sys


def clearContent(pfile):
    pfile.seek(0)
    pfile.truncate()


def guard(fileName):
    with open(fileName, 'r+') as file:
        content = file.read()

        fileNameUp = fileName.split(".")[0].upper() + '_H'
        guardBegin = '#ifndef ' + fileNameUp + '\n' \
                     '#define ' + fileNameUp + '\n\n'
        guardEnd = '\n#endif'
        newContent = guardBegin + content + guardEnd

        clearContent(file)
        file.write(newContent)


if __name__ == '__main__':
    if len(sys.argv) == 1:
        print('Please provide a directory')
    else:
        search_dir = sys.argv[1]

        # enter the search directory
        os.chdir(search_dir)

        for file in glob.glob("*.h"):
            guard(file)
</code></pre>
| 2 | 2016-08-05T12:58:30Z | [
"python",
"c",
"shell",
"c-preprocessor"
] |
Separate data from one column into three columns | 38,778,498 | <p>I have a column in an Excel sheet which contains a mix of first names, last names and job titles. The only observable pattern is that in each set of 3 rows, the 1st row is a first name, the 2nd row is a last name and the 3rd row is a job title. I want to create 3 different columns and segregate this data.
Sample data:</p>
<pre><code>John
Bush
Manager
Katrina
Cohn
Secretary
</code></pre>
<p>I want John, Bush, Manager as one row, going into three different columns under First Name, Last Name and Job Title respectively. Like this:</p>
<pre><code>First Name Last Name Job Title
John Bush Manager
Katrina Cohn Secretary
</code></pre>
<p>How can we achieve this task?</p>
| 2 | 2016-08-04T22:41:17Z | 38,778,571 | <p>You can use <a href="https://docs.python.org/2/library/functions.html#range" rel="nofollow">this notation</a> to get every third element with different starting points.</p>
<pre><code>l = ['John', 'Bush', 'Manager', 'Katrina', 'Cohn', 'Secretary']
pd.DataFrame({'First Name': l[::3], 'Last Name': l[1::3], 'Job Title': l[2::3]})
</code></pre>
<p>outputs</p>
<pre><code> First Name Job Title Last Name
0 John Manager Bush
1 Katrina Secretary Cohn
</code></pre>
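<p>The slicing itself is plain Python and works without pandas; a third slice argument of 3 takes every third element, starting at the given offset:</p>

```python
# Extended slicing: list[start::step] takes every step-th element
# beginning at index start.
l = ['John', 'Bush', 'Manager', 'Katrina', 'Cohn', 'Secretary']
first_names = l[0::3]  # offset 0 -> ['John', 'Katrina']
last_names = l[1::3]   # offset 1 -> ['Bush', 'Cohn']
job_titles = l[2::3]   # offset 2 -> ['Manager', 'Secretary']
```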
| 3 | 2016-08-04T22:48:57Z | [
"python",
"pandas"
] |
Separate data from one column into three columns | 38,778,498 | <p>I have a column in an Excel sheet which contains a mix of first names, last names and job titles. The only observable pattern is that in each set of 3 rows, the 1st row is a first name, the 2nd row is a last name and the 3rd row is a job title. I want to create 3 different columns and segregate this data.
Sample data:</p>
<pre><code>John
Bush
Manager
Katrina
Cohn
Secretary
</code></pre>
<p>I want John, Bush, Manager as one row, going into three different columns under First Name, Last Name and Job Title respectively. Like this:</p>
<pre><code>First Name Last Name Job Title
John Bush Manager
Katrina Cohn Secretary
</code></pre>
<p>How can we achieve this task?</p>
| 2 | 2016-08-04T22:41:17Z | 38,778,820 | <pre><code>s = pd.Series([
'John',
'Bush',
'Manager',
'Katrina',
'Cohn',
'Secretary'])
df = pd.DataFrame(s.values.reshape(-1, 3),
columns=['First Name', 'Last Name', 'Job Title'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/gN5Bu.png" rel="nofollow"><img src="http://i.stack.imgur.com/gN5Bu.png" alt="enter image description here"></a></p>
<hr>
<p>If the length of your data isn't a multiple of 3, you can force it like this:</p>
<pre><code>s = pd.Series([
    'John',
    'Bush',
    'Manager',
    'Katrina',
    'Cohn',
    'Secretary',
    'Bogus'])

s_ = s.iloc[:s.shape[0] // 3 * 3]
df = pd.DataFrame(s_.values.reshape(-1, 3), columns=['First Name', 'Last Name', 'Job Title'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/gN5Bu.png" rel="nofollow"><img src="http://i.stack.imgur.com/gN5Bu.png" alt="enter image description here"></a></p>
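<p>The truncation trick can also be checked in plain Python; <code>// 3 * 3</code> rounds the length down to a multiple of 3, which is what lets <code>reshape(-1, 3)</code> succeed:</p>

```python
# Plain-Python equivalent of truncate-then-reshape: drop the trailing
# partial row, then chunk the flat list into rows of three.
s = ['John', 'Bush', 'Manager', 'Katrina', 'Cohn', 'Secretary', 'Bogus']
n = len(s) // 3 * 3  # 7 // 3 * 3 == 6, so 'Bogus' is dropped
rows = [s[i:i + 3] for i in range(0, n, 3)]
```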
| 1 | 2016-08-04T23:18:24Z | [
"python",
"pandas"
] |
Ignore elements of a list up to a certain index | 38,778,546 | <p>I am creating two lists by reading from a text file with two columns, which always has the same layout, but the lengths of the columns differ.</p>
<pre><code>def read_columns(x,y,filename,rw):
    print(filename)
    if(not os.path.isfile(filename)):
        sys.exit(filename + " does not exist")
    file = open(filename,rw)
    left_column = True
    for line in file:
        # print(line)
        if ("X" not in line and line is not ""):
            s = line.split()
            for floats in s:
                if left_column:
                    left_column = False
                    x.append(float(floats))
                else:
                    y.append(float(floats))
                    left_column = True
</code></pre>
<p>Then I'd like to find the minimum value in the Y list</p>
<pre><code>def find_minforce(y):
    min_force = min(y)
    return min_force
</code></pre>
<p>However, the corresponding x-value of the minimum y-value should be higher than a certain threshold; that is, ignore all y-values whose corresponding x-value is lower than, say, 0.01. Any suggestions?</p>
<p>For example:</p>
<pre><code>x y
0 -8
1 -9
2 -4
2.5 -6
2.71 -3
</code></pre>
<p>I should get minimum_y = -6 in case I want to ignore all x<2</p>
| 0 | 2016-08-04T22:46:10Z | 38,778,733 | <p>You have a couple of problems in your code besides what you're trying to do. I've tried to fix the errors and explain what I was doing.</p>
<pre><code>import logging
import sys
from collections import namedtuple

logging.basicConfig(level=logging.DEBUG)

# See https://docs.python.org/3/library/collections.html#collections.namedtuple
# but namedtuples are an awesome way to group related
# data that allows you to use attribute lookup.
# In this case - point.x and point.y.
# Point was my best guess for a name, since you
# had `x` and `y`.
Point = namedtuple('Point', ('x', 'y'))


# There was no need to pass in any of these values.
# Returning lists is better than passing them in to
# be mutated. Also you're not writing to the file in
# here that I see. It'd be better to pass in a file
# object if you need to control that from outside
# the function.
def read_columns(filename):
    data = []  # creating a list of data here
    # Better to use logging - that way you can just
    # change the log level if you don't want to see
    # the messages any more
    logging.debug('Reading from file %r', filename)
    try:
        with open(filename) as file:
            for line in file:
                line = line.strip()
                logging.debug('Line: %r', line)
                # Parentheses aren't necessary.
                # `line is not ""` is checking for
                # identity and not equality. Also
                # an empty line will evaluate to False,
                # but iterating over lines will always
                # include the `\n`, so we call line.strip()
                # earlier.
                if line and 'X' not in line:
                    left, right = line.split()
                    point = Point(x=float(left), y=float(right))
                    if point.x > 2:
                        data.append(point)
    except FileNotFoundError:
        sys.exit("File {!r} does not exist".format(filename))
    return data
</code></pre>
<p>Note that if I weren't worried about the file existing, I'd probably do something like this:</p>
<pre><code>with open(filename) as f:
    data = [point for point in
            (Point(*(float(_) for _ in line.split()))
             for line in f if line.strip() and 'X' not in line)
            if point.x > 2]

# could be compressed into one line, but that's not very readable:
with open(filename) as f:
    data = [point for point in (Point(*(float(_) for _ in line.split())) for line in f if line.strip() and 'X' not in line) if point.x > 2]
</code></pre>
<p>That approach uses a list comprehension, a generator comprehension, and argument unpacking. They're very powerful techniques.</p>
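<p>Once the file has been read into <code>Point</code>s, the minimum the question asks for is a one-liner; a sketch using the sample values from the question, with the cutoff at 2:</p>

```python
# Restrict to points whose x is at least the cutoff, then take the
# smallest y among them.
from collections import namedtuple

Point = namedtuple('Point', ('x', 'y'))
data = [Point(0, -8), Point(1, -9), Point(2, -4),
        Point(2.5, -6), Point(2.71, -3)]

min_y = min(p.y for p in data if p.x >= 2)  # ignore all x < 2
```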
| 0 | 2016-08-04T23:06:37Z | [
"python",
"list"
] |
Django Rest framework 3.4.0, ViewSet with custom create() and Many-to-many relationship | 38,778,587 | <p>I have the following Django models:</p>
<pre><code>class Task(models.Model):
    '''Task needed to complete a goal'''
    title = models.CharField(max_length=200)


class Issue(models.Model):
    '''Issue from different forges'''
    title = models.CharField(max_length=200, blank=True)
    tasks = models.ManyToManyField(Task)
</code></pre>
<p>I have the following serializers:</p>
<pre><code>class TaskSerializer(serializers.ModelSerializer):
    '''Serializer to represent the Task model'''

    class Meta:
        model = Task
        fields = ("id", "title")


class IssueSerializer(serializers.ModelSerializer):
    '''Serializer to represent the issue model'''
    tasks = TaskSerializer(read_only=True, many=True)

    class Meta:
        model = Issue
        fields = ("title", "tasks")
</code></pre>
<p>Now my POST request is the following one:</p>
<pre><code>{
    "id": null,
    "title": "",
    "tasks": [14]
}
</code></pre>
<p>with an empty title for the issue, because I'll provide this one server-side. So I need to redefine the create() of the IssueViewSet as follows:</p>
<pre><code>class IssueViewSet(viewsets.ModelViewSet):
    '''ViewSet for viewing and editing Issue objects'''
    queryset = Issue.objects.all().order_by('id')
    serializer_class = IssueSerializer

    def create(self, request):
        serializer = IssueSerializer(data=request.data)
        if serializer.is_valid():
            issue = Issue(title='test', tasks=serializer.validated_data['tasks'])
            issue.save()
            return Response({'status': 'issue created'})
        else:
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>For now I only get the following error:</p>
<pre><code>  File "/views.py", line 32, in create
    issue = Issue(title='test', tasks=serializer.validated_data['tasks'])
KeyError: 'tasks'
</code></pre>
<p>And indeed it seems serializer.validated_data does not have a tasks key. I'm missing something, either in defining my serializers or in my create().</p>
| 0 | 2016-08-04T22:51:02Z | 38,778,638 | <p>You marked <code>tasks</code> as <code>read_only</code> in <code>IssueSerializer</code>. That's why you can't find the <code>tasks</code> key in <code>validated_data</code>.</p>
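<p>If you want <code>tasks</code> to be writable as a list of primary keys (matching the POST body in the question), one common approach is <code>PrimaryKeyRelatedField</code>. The sketch below is untested and assumes the models and imports from the question:</p>

```python
# Untested sketch (requires a configured Django project): accept task ids
# on write. PrimaryKeyRelatedField validates each id against the queryset,
# so serializer.validated_data['tasks'] becomes a list of Task instances.
class IssueSerializer(serializers.ModelSerializer):
    tasks = serializers.PrimaryKeyRelatedField(
        queryset=Task.objects.all(), many=True)

    class Meta:
        model = Issue
        fields = ("title", "tasks")
```

<p>Note that a many-to-many field cannot be passed to the model constructor; save the issue first and then attach the tasks, e.g. <code>issue.save()</code> followed by <code>issue.tasks.add(*serializer.validated_data['tasks'])</code>.</p>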
| 0 | 2016-08-04T22:56:56Z | [
"python",
"django",
"django-rest-framework"
] |
Python script for web scraping from web pages to find ip address for urls present in it | 38,778,590 | <p>I have started writing a script, as shown below:</p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
trg_url='http://timesofindia.indiatimes.com/'
req=urllib2.Request(trg_url)
handle=urllib2.urlopen(req)
page_content=handle.read()
soup=BeautifulSoup(page_content,"html")
new_list=soup.find_all('a')
for link in new_list:
    print link.get('href')
</code></pre>
<p>But now I am stuck, as I am getting the output below:</p>
<pre><code>http://mytimes.indiatimes.com/?channel=toi
https://www.facebook.com/TimesofIndia
https://twitter.com/timesofindia
https://plus.google.com/117150671992820587865?prsrc=3
http://timesofindia.indiatimes.com/rss.cms
https://www.youtube.com/user/TimesOfIndiaChannel
javascript:void(0);
http://timesofindia.indiatimes.com
javascript://
http://beautypageants.indiatimes.com/
http://photogallery.indiatimes.com/
http://timesofindia.indiatimes.com/videos/entertainment/videolist/3812908.cms
javascript://
/life/fashion/articlelistls/2886715.cms
/life-style/relationship/specials/lsspeciallist/6247311.cms
/debatelist/3133631.cms
</code></pre>
<p>Please guide me on how to extract the different URLs present in the web page and their IP addresses.</p>
| -1 | 2016-08-04T22:51:30Z | 38,779,255 | <p>Use the <code>socket</code> module to get the IP address:</p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
import socket
import csv

trg_url = 'http://timesofindia.indiatimes.com/'
req = urllib2.Request(trg_url)
handle = urllib2.urlopen(req)
page_content = handle.read()

soup = BeautifulSoup(page_content, "lxml")
new_list = soup.find_all('a')

final_list = []
for link in new_list:
    l = link.get('href')
    try:
        # resolve the host part of an absolute http://host/... URL
        final_list.append([l, socket.gethostbyname(l.split('/')[2])])
    except Exception:
        # relative links, javascript: hrefs, etc. have no resolvable host
        final_list.append([l, []])

with open('output.csv', 'wb') as f:
    wr = csv.writer(f)
    for row in final_list:
        wr.writerow(row)
</code></pre>
</code></pre>
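<p>One caveat: <code>l.split('/')[2]</code> only yields a hostname for absolute <code>http://host/...</code> links; the relative and <code>javascript:</code> hrefs in the question's output all fall into the <code>except</code> branch. A sturdier hostname extraction uses <code>urlparse</code> (a sketch; the helper name <code>hostname_of</code> is my own):</p>

```python
# Hypothetical helper: extract the hostname (or None) from an href, so
# the caller can resolve it with socket.gethostbyname only when present.
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2, as used in the answer


def hostname_of(href):
    netloc = urlparse(href).netloc
    return netloc or None  # None for relative links, javascript: hrefs, etc.
```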
| -1 | 2016-08-05T00:12:54Z | [
"python",
"scripting",
"web-crawler"
] |
building python-boost hello world on Mac OS X with Homebrew - Makefile vs. Resolution | 38,778,676 | <p>While not directly an explicit question - I am trying to make a post that addresses some of the bizarre issues encountered when trying to install <code>boost-python</code> as a novice</p>
<p>The post is based on <a href="https://feralchicken.wordpress.com/2013/12/07/boost-python-hello-world-example-using-cmake/" rel="nofollow">this example</a> which can directly be obtained by</p>
<pre><code>wget https://dl.dropboxusercontent.com/u/508241/wordpress/BoostPythonHelloWorld.tar.gz
tar xf BoostPythonHelloWorld.tar.gz
cd BoostPythonHelloWorld
cmake .
make
./test.py
</code></pre>
<p>A bold but accurate claim is that I have systematically read <em>all</em> relevant google search results for <code>"boost-python hello world Makefile"</code> (there aren't that many) and through every SO post that comes up in a search for <code>boost-python hello world Makefile</code>. The methods I have tried are obviously fairly exhaustive and I have also tried a large number of examples.</p>
<h2>Issues</h2>
<p>My best success with a <code>Makefile</code> is at the end of this post for readability. These issues are largely based on that.</p>
<ul>
<li>The first is reasonably trivial that <code>brew install boost-python</code> will install boost somewhere like <code>/usr/Cellar/boost/1.61.0/</code> and this can cause most online resources to fail unless it is explicitly linked</li>
<li><code>Mac OS X</code> comes with a native Python. Installing Python via <code>brew install python</code>, as most users will have done, causes <code>cmake</code> to link against the native Python by mistake.</li>
<li>We must explicitly link the <code>python</code> library <code>libpython2.7.dylib</code></li>
<li><p>There was some combination of the below <code>Makefile</code> that actually compiled to <code>100%</code> but then I had the following, which seemed to come from <code>boost-python</code> failing to link to <code>cpython</code>:</p>
<pre><code> [ 25%] Building CXX object CMakeFiles/greet.dir/greet.cpp.o
[ 50%] Linking CXX shared library libgreet.dylib
[ 50%] Built target greet
[ 75%] Building CXX object CMakeFiles/greet_ext.dir/greet_ext.cpp.o
[100%] Linking CXX shared library greet_ext.dylib
Undefined symbols for architecture x86_64:
"_PyArg_ParseTupleAndKeywords", referenced from:
</code></pre>
<p>I changed my <code>Makefile</code> without cleaning my <code>Build/</code> a few times and by the time I realised I had forgotten to purge the old files I have changed my <code>Makefile</code> too much to recreate this level of success</p></li>
<li>Removing <code>SHARED</code> compiles without error but I cannot import as I should be able to with <code>import greet_ext</code></li>
</ul>
<h2><code>Makefile</code></h2>
<pre><code>project( BoostPythonHelloWorld )
cmake_minimum_required(VERSION 3.3)
set(Boost_REALPATH ON)
set(Boost_USE_STATIC_LIBS ON)
set(Boost_USE_MULTITHREADED ON)
find_package(Boost COMPONENTS
regex
filesystem
system
thread
python
chrono
date_time
atomic
REQUIRED)
# include extras
message("")
message("CMAKE finds wrong dirs of Boost (Mac OSX default)...")
message("... Include Include of boost: " ${Boost_INCLUDE_DIRS} )
set(Boost_INCLUDE_DIRS "/usr/Cellar/boost/1.61.0/")
set(Boost_INCLUDE_DIRS "/usr/Cellar/boost/1.61.0/include")
message("... Actual Include of boost: " ${Boost_INCLUDE_DIRS}} )
find_package(PythonLibs 2.7 REQUIRED)
message("")
message("CMAKE finds wrong dirs of Python (Mac OSX default)...")
message("... Include dirs of Python: " ${PYTHON_INCLUDE_DIRS} )
message("... Libs of Python: " ${PYTHON_LIBRARIES} )
set(PYTHON_LIBRARIES "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/libpython2.7.dylib")
set(PYTHON_INCLUDE_DIRS "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/include/python2.7")
message("... Actual Include: " ${PYTHON_INCLUDE_DIRS} )
message("... Actual lib: " ${PYTHON_LIBRARIES} )
# Build our library
add_library( greet SHARED greet.cpp )
# Define the wrapper library that wraps our library
add_library( greet_ext SHARED greet_ext.cpp )
target_link_libraries( greet_ext ${Boost_LIBRARIES} greet )
# don't prepend wrapper library name with lib
set_target_properties( greet_ext PROPERTIES PREFIX "" )
</code></pre>
<h3><code>Makefile</code> output</h3>
<p>using the command: <code>rm -rf Build; mkdir Build; cd Build; cmake ..; make;</code> in the directory <code>BoostPythonHelloWorld</code> I obtain:</p>
<pre><code>CMAKE finds wrong dirs of Boost (Mac OSX default)...
... Include Include of boost: /usr/local/include
... Actual Include of boost: /usr/Cellar/boost/1.61.0/include}
-- Found PythonLibs: /usr/lib/libpython2.7.dylib (found suitable version "2.7.10", minimum required is "2.7")
CMAKE finds wrong dirs of Python (Mac OSX default)...
... Include dirs of Python: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/python2.7
... Libs of Python: /usr/lib/libpython2.7.dylib
... Actual Include: /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/include/python2.7
... Actual lib: /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/libpython2.7.dylib
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/alex/Downloads/BoostPythonHelloWorld/Build
Scanning dependencies of target greet
[ 25%] Building CXX object CMakeFiles/greet.dir/greet.cpp.o
[ 50%] Linking CXX shared library libgreet.dylib
clang: error: invalid argument '-bundle' not allowed with '-dynamiclib'
make[2]: *** [libgreet.dylib] Error 1
make[1]: *** [CMakeFiles/greet.dir/all] Error 2
make: *** [all] Error 2
</code></pre>
| 0 | 2016-08-04T23:00:30Z | 38,778,677 | <p>This doesn't directly address the <code>Makefile</code> issues, but is rather to make other Python users aware of my breakthrough that a <code>Makefile</code> was not necessary. </p>
<p>I'll still accept any answers leading to a fix with the <code>Makefile</code> attempt.</p>
<p>Using a different example <code>hello_ext.cpp</code>:</p>
<pre><code>char const* greet()
{
    return "hello, world";
}

#include <boost/python.hpp>

BOOST_PYTHON_MODULE(hello_ext)
{
    using namespace boost::python;
    def("greet", greet);
}
</code></pre>
<p>I was able to get it to import into python using this <code>setup.py</code> taken from <a href="http://stackoverflow.com/a/21751764/4013571">this post</a>:</p>
<pre><code>from distutils.core import setup
from distutils.extension import Extension

hello_ext = Extension(
    'hello_ext',
    sources=['hello_ext.cpp'],
    libraries=['boost_python-mt'])

setup(
    name='hello-world',
    version='0.1',
    ext_modules=[hello_ext])
</code></pre>
<p>Building with <code>python setup.py build_ext --inplace</code>, I was able to import successfully!</p>
<pre><code>In [1]: ls
Makefile    build/    hello_ext.cpp    hello_ext.so*    setup.py

In [2]: import hello_ext
   ...: hello_ext.greet()
## -- End pasted text --
Out[2]: 'hello, world'
</code></pre>
<p>As a side note: The following are in my <code>~/.bash_profile</code></p>
<pre><code>export CPLUS_INCLUDE_PATH="$CPLUS_INCLUDE_PATH:/usr/include/python2.7/"
export BOOST_ROOT='/usr/Cellar/boost/1.61.0/'
export BOOST_INC="/usr/Cellar/boost/1.61.0/include"
export BOOST_LIB="/usr/Cellar/boost/1.61.0/lib"
</code></pre>
| 0 | 2016-08-04T23:00:30Z | [
"python",
"c++",
"osx",
"makefile",
"boost-python"
] |
PyQt: QLineEdit input mask for hexadecimal | 38,778,755 | <p>How can I use an input mask on a PyQt <code>QLineEdit</code> to limit input to 9-digit hexadecimal numbers only? For example,
I want to limit the user to entering hex values from 0x300000000 to 0x400000000 only.</p>
| 0 | 2016-08-04T23:10:14Z | 38,786,754 | <p>For this particular example you can use the <code>QValidator</code> class, which provides validation of input text; please see the example below:</p>
<pre><code>import sys

from PyQt4 import QtGui, QtCore
from PyQt4.QtCore import Qt, QRegExp
from PyQt4.QtGui import QRegExpValidator


def window():
    app = QtGui.QApplication(sys.argv)
    win = QtGui.QWidget()
    flo = QtGui.QFormLayout()

    e1 = QtGui.QLineEdit("0x300000000")
    # note: this regex constrains the format (0x3... or 0x4..., 2 to 9 hex
    # digits) but not the exact numeric bounds; e.g. 0x4FFFFFFFF still
    # passes, so an exact range check would need a custom QValidator
    validator = QRegExpValidator(QRegExp("0x[3-4][0-9A-Fa-f]{1,8}"))
    e1.setValidator(validator)

    flo.addRow("Hexadecimal", e1)
    win.setLayout(flo)
    win.setWindowTitle("PyQt")
    win.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    window()
</code></pre>
| 0 | 2016-08-05T10:15:27Z | [
"python",
"pyqt",
"pyqt4"
] |
Trying to insert a table into jinja2 template from flask | 38,778,758 | <p>I have been trying to create my first Flask website, using super simple templates in <code>Jinja2</code>. <em>(I started with my regular html and abandoned it quickly because figuring out where to put things was a nightmare. I will go back in after this hard part and tweak the Flask code; for now, I have eliminated EVERYTHING but the html declaration and the attempt at dropping in the table.)</em></p>
<p><strong>Flask Code:</strong></p>
<pre><code>from flask import Flask
from flask import render_template
import redis
import yaml
from flask import json
from dateutil.parser import parse
from flask_table import Table, Col, DatetimeCol

r = redis.StrictRedis(host='localhost', port=6379, db=0)
app = Flask(__name__)


@app.route('/')
@app.route('/p/')
def p():
    raw = r.get('rData')
    raw = yaml.load(raw)
    raw = json.loads(raw)
    for item in raw:
        item['date_time'] = parse(item['date_time'])

    class ItemTable(Table):
        Sensor = Col("Sensor")
        date_time = DatetimeCol("Date & Time")
        Reading = Col("Reading")

    table = ItemTable(raw)
    return render_template('p.html'), table


if __name__ == '__main__':
    app.run()
</code></pre>
<p>This code will call the page /p/ locally and, <em>if I ask it to show</em> <code>raw</code>, it will print out the data for this table (which I have included below). </p>
<p><code>{{ raw }}</code></p>
<p>I get my dictionary back!</p>
<p>But when I try to show the table that
<code>ItemTable</code> should create, it crashes my page or nothing shows at all. I have also tried using the return code this way:</p>
<pre><code> return render_template('p.html', table=table)
</code></pre>
<p>passing it as a variable of fully executed html, nothing good comes of it.
<code>{{ table }}</code> </p>
<p><strong>Actual Data in a list of dictionaries:</strong> </p>
<pre><code>[{"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 27.5, "Sensor": "Water Temperature (degrees Celsius)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 0.91, "Sensor": "Lake Gage Height (in feet)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 53.0, "Sensor": "Specific Conductivity in microSiemens"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 7.3, "Sensor": "Dissolved Oxygen (ppm)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 6.9, "Sensor": "pH (standard field)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 27.7, "Sensor": "Air Temperature (in degrees Celsius)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 2.5, "Sensor": "Wind Speed (in mph)"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 238.0, "Sensor": "Wind Direction (degrees CW from North)"}, {"Reading": 30.7, "Sensor": "Relative Humidity", "date-Time": "2016-08-03T14:00:00.000-04:00"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 1.26, "Sensor": "Marsh Gage Height in feet"}, {"date_time": "2016-08-03T14:00:00.000-04:00", "Reading": 0.01, "Sensor": "24 Cumulative Precipication (inches)"}]
</code></pre>
<p>I pull this out of a redis cache I created, but you can just set <code>raw=</code> to the above list and substitute it. </p>
<p><strong>Conundrum:</strong></p>
<p>What do I need to insert in the Jinja2 to make the table show up from one of these two <code>return</code> presentations, or is there some other way to tell flask to port the converted list of dictionaries to the template? </p>
<p>I have searched everywhere and the page on flask-table does not show how to present it on the other side.</p>
<p>My goal is to drop it in a container that is a child to the base layout. but just getting it to show up as a table is enough, I can figure the rest out from there.</p>
<p><strong>Simplest HTML/Jinja template I could create:</strong></p>
<pre><code><!doctype html>
<h1> test </h1>
{{table}}
</code></pre>
<p>I tried using <code>{% table %}</code> also to no avail. </p>
<p>I am not sure if the problem is that I generated html in flask and tried to send it to the template or if I am not telling Jinja how to present this <code>table</code> element that I have created and included with the <code>render_template</code> method.</p>
<p>the <code><h1> test </h1></code> is just to allow me to see if the page is rendering at all when I send the table. It is not important. </p>
<p>I believe the problem that I am having is that most tutorials are over simplified and each set of documentation (<code>Flask</code> & <code>Jinja2</code>) are light on what happens on the other side of the equation. I found <strong>ONE</strong> explanation of <code>flask-table</code> presented on two pages but the same stuff. Neither made a suggestion about calling it with a template.</p>
<p>Update:</p>
<p>I am getting a KeyError when I run it now: the changes are as follows:</p>
<pre><code> raw= yaml.load(raw) #new to make a list for reading after next line
 raw = json.loads(raw)  # makes it iterable in UTF-8
 for item in raw:
     # creates dates, but throws errors which stop Flask
     item['date_time'] = parse(item['date_time'])
</code></pre>
<p>New Edit: Part of the Data after json.loads:</p>
<pre><code> [{u'Reading': 25.9,
u'Sensor': u'Water Temperature (degrees Celsius)',
u'date_time': u'2016-08-05T08:45:00.000-04:00'},
{u'Reading': 0.88,
u'Sensor': u'Lake Gage Height (in feet)',
u'date_time': u'2016-08-05T08:45:00.000-04:00'}]
</code></pre>
<p>Data after <code>parse(item['date_time'])</code>:</p>
<pre><code>raw[1]['date_time']
datetime.datetime(2016, 8, 5, 8, 45, tzinfo=tzoffset(None, -14400))
</code></pre>
<p><strong>New Update:</strong> I somehow missed a capital <strong>T</strong> in the <code>date_time</code> field, so correcting that fixed the KeyError...but I still cannot get it to pass over into the HTML. I get a whole bunch of lines referring to the folders and files, then I get:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'datetime_formats'
</code></pre>
<p>Why are dates and times so much fun?</p>
| 1 | 2016-08-04T23:10:24Z | 38,784,353 | <p>You need to convert timestamps to datetime objects</p>
<pre><code>from dateutil.parser import parse # pip install python-dateutil
raw = json.loads(raw)
for item in raw:
    item['date_time'] = parse(item['date_time'])
</code></pre>
<p>And pass <code>table</code> to template</p>
<pre><code>return render_template('p.html', table=table)
</code></pre>
<p>Upd: All items must have a <code>date_time</code> key. Some items in your example have <code>date-Time</code> instead.</p>
| 1 | 2016-08-05T08:12:00Z | [
"python",
"html",
"flask",
"jinja2"
] |
Tensorflow: No gradients provided for any variable | 38,778,760 | <p>I am new to <code>tensorflow</code> and I am building a network but failing to compute/apply the gradients for it. I get the error:</p>
<pre><code>ValueError: No gradients provided for any variable: ((None, tensorflow.python.ops.variables.Variable object at 0x1025436d0), ... (None, tensorflow.python.ops.variables.Variable object at 0x10800b590))
</code></pre>
<p>I tried using a <a href="http://imgur.com/a/CHuo6" rel="nofollow">tensorboard graph</a> to see if there was something that made it impossible to trace the graph and get the gradients, but I could not see anything.</p>
<p><strong>Here's part of the code:</strong></p>
<pre><code>sess = tf.Session()
X = tf.placeholder(type, [batch_size,feature_size])
W = tf.Variable(tf.random_normal([feature_size, elements_size * dictionary_size]), name="W")
target_probabilties = tf.placeholder(type, [batch_size * elements_size, dictionary_size])
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * number_of_layers)
initial_state = state = stacked_lstm.zero_state(batch_size, type)
output, state = stacked_lstm(X, state)
pred = tf.matmul(output,W)
pred = tf.reshape(pred, (batch_size * elements_size, dictionary_size))
# instead of calculating this, I will calculate the difference between the target_W and the current W
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(target_probabilties, pred)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess.run(optimizer, feed_dict={X:my_input, target_probabilties:target_prob})
</code></pre>
<p>I will appreciate any help on figuring this out.</p>
| 1 | 2016-08-04T23:10:58Z | 38,783,256 | <p>I always call <code>tf.nn.softmax_cross_entropy_with_logits()</code> with the logits as the first argument and the labels as the second; in your code that would be <code>(pred, target_probabilties)</code> rather than the other way round. Can you try this?</p>
| 1 | 2016-08-05T07:10:15Z | [
"python",
"machine-learning",
"tensorflow"
] |
Bind a Dictionary to a SQLAlchemy Model for Update | 38,778,791 | <p>I have a Dictionary (from a WTForm) that has keys in it that match fields in my SQLAlchemy Model.</p>
<pre><code>class Company(database.Model):
__tablename__ = "company"
id = database.Column(database.Integer, primary_key=True, autoincrement=True)
name = database.Column(database.String(255), nullable=False)
address = database.Column(database.String(255), nullable=False)
...
</code></pre>
<p>And a Dictionary:</p>
<pre><code>{"name": "Apple Inc", "address": "1 Infinite Loop", ...}
</code></pre>
<p>Is there any easy way to set the Model's attributes to the matching Dictionary values, or do I need to follow the x = y pattern?</p>
<pre><code>company.name = company_dict["name"]
company.address = company_dict["address"]...
</code></pre>
| 1 | 2016-08-04T23:15:08Z | 38,778,828 | <p>Assuming you actually want to create the record it should be as simple as:</p>
<pre><code>company = Company.create(**company_dict)
</code></pre>
<p>Or for an update:</p>
<pre><code>company.update(**company_dict)
</code></pre>
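<p>Note that plain SQLAlchemy declarative models don't ship <code>create()</code>/<code>update()</code> helpers out of the box - those usually come from a custom base-model mixin. A fallback that works on any object is a <code>setattr</code> loop; here is a minimal Python sketch using a stand-in class rather than a real mapped model:</p>

```python
# Hypothetical stand-in for a mapped model; the same setattr loop
# works unchanged on a real SQLAlchemy instance.
class Company(object):
    name = None
    address = None

company = Company()
company_dict = {"name": "Apple Inc", "address": "1 Infinite Loop"}

for key, value in company_dict.items():
    setattr(company, key, value)  # company.name = ..., company.address = ...

print(company.name)     # Apple Inc
print(company.address)  # 1 Infinite Loop
```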
| 0 | 2016-08-04T23:19:12Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy",
"wtforms"
] |
Python - Fabric maximum 10 parallel SSH connections | 38,778,998 | <p>I am using <code>Fabric</code> with the parallel decorator like so:</p>
<pre><code>@parallel(pool_size=100)
def mytask():
# do work
</code></pre>
<p>I was hoping the program would open 100 distinct SSH connections and run the <code>Fabric</code> task on all those servers in parallel.</p>
<p>However, monitoring the number of open SSH connections always gives me an average of 10. I am running on a powerful enough <strong>CentOS</strong> instance.</p>
<p>I am getting the number of concurrent outgoing SSH connections with:</p>
<pre><code>sudo netstat -atp | grep "/python" | grep 10. | grep ssh | wc -l
</code></pre>
<p>I tried to increase <strong><em>MaxSessions</em></strong> and <strong><em>MaxStartups</em></strong> in <code>/etc/ssh/sshd_config</code> but I might not have understood those settings (I am feeling these are setting limits on <strong><em>incoming</em></strong> SSH connections instead of outgoing).</p>
<p><strong>Is there a system limit that I need to increase to be able have more than 10 open SSH connections ?</strong></p>
<p><em>Related (no answers):</em>
<a href="http://stackoverflow.com/questions/26308621/python-fabric-parallel-pool-restrictions">python fabric parallel pool restrictions</a></p>
| 0 | 2016-08-04T23:38:14Z | 38,790,988 | <p>The <code>get_pool_size</code> method in the fabric.tasks.Task class is a bit convoluted, trying to guess a not too large pool_size. It returns an integer, after picking values from the global config, task config, default passed, number of hosts.</p>
<p>By my reading of it, it <em>should</em> return the minimum of the number_of_hosts and the value you configure in your <code>parallel</code> decorator.</p>
<p>Maybe you could just "brute-force" patch that method prior to running the task - Maybe Python's "unittest.mock.patch" decorator can do a prettier job out of this - but it is somewhat complex, and I have no idea how it would interact with the parallel decorator itself.</p>
<p>So, just monkey patch <code>get_pool_size</code> to return 100 at the beginning of your file, and it should work:</p>
<pre><code>import fabric.tasks
fabric.tasks.Task.get_pool_size = lambda self: 100
...
</code></pre>
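<p>The monkey-patching idea itself can be illustrated without Fabric at all (plain Python, with a hypothetical class standing in for <code>fabric.tasks.Task</code>):</p>

```python
class Task(object):
    def get_pool_size(self):
        return 10  # stand-in for Fabric's conservative default

# Patching the method on the class makes every existing and future
# instance pick up the new behaviour:
Task.get_pool_size = lambda self: 100

print(Task().get_pool_size())  # 100
```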
| 0 | 2016-08-05T13:53:08Z | [
"python",
"ssh",
"fabric"
] |
python libusb1: asynchronous TRANSFER_NO_DEVICE status just after successful synchronous transfers | 38,779,019 | <h1>Story</h1>
<p>I'm programming a driver for a scientific camera. It uses the Cypress FX3 usb peripheral controller. In order to communicate with it I'm using libusb1 for python, specifically the module usb1. My OS is Ubuntu 16.04 LTS.</p>
<p>The communication has two steps:</p>
<ul>
<li><p>The camera is configured. The computer synchronously sends instructions to program the camera, and after each instruction the camera responds with a status word, which is read synchronously.</p></li>
<li><p>A photo is taken. The computer synchronously sends a single instruction and the camera starts streaming data. The computer reads this data in an asynchronous manner.</p></li>
</ul>
<p>The asynchronous communication is done in the main thread. So even if the communication itself is asynchronous, the operation is blocking.</p>
<h1>Problem</h1>
<p>I'm getting TRANSFER_NO_DEVICE status for each asynchronous transfer, which is strange given that I have just communicated with the camera in the configuration step. I have similar code in C# on Windows using the Cypress library and it works correctly, so I can rule out the camera. Also, part of the image data appears in the FX3 buffer after trying to take a photo, which I can recover using an example application provided by Cypress.</p>
<p>I've built a minimal example script. Notice the configure and take_picture functions:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: UTF-8 -*-
#
# StackOverflow.py
import usb1 as usb1 # Libusb, needed to provide a usb context
import GuideCamLib.binary as binary # Handles bytecode formatting
import GuideCamLib.ccd as ccd # Generates configuration information
import GuideCamLib.usb as usb # Manages communication
# Camera usb parameters
vid = 0x04B4;
pid = 0x00F1;
read_addr = 0x81;
write_addr = 0x01;
interface = 0;
success = [0x55, 0x55, 0x55, 0x55] + [0]*(512 - 4); # A successful response
# Objects that generate instructions for the camera
ccd0 = ccd.CCD_47_10();
formatter = binary.ByteCode();
def configure(context):
# Generating bytes to transfer, outputs a list of int lists
bytecode_lines = ccd0.create_configuration_bytecode(formatter);
# Opens device
with usb.Device(vid=vid, pid=pid, context= context) as dev:
# Opens read / write ports
port_write = dev.open_port(write_addr);
port_read = dev.open_port(read_addr);
print(" Start configuration...")
# Transfer synchronously
for line in bytecode_lines:
written_bytes = port_write.write_sync(line);
response = port_read.read_sync(512);
if(response != success):
raise RuntimeError(" Configuration failed. (" + str(response) + ")");
print(" End configuration")
def take_picture(context):
# Generating bytes to transfer, generates a list of ints
take_photo_bytecode = formatter.seq_take_photo(ccd0.get_take_photo_mode_address());
# Opens device
with usb.Device(vid=vid, pid=pid, context= context) as dev:
# Opens read / write ports
port_write = dev.open_port(write_addr);
port_read = dev.open_port(read_addr, 10000); # 10 sec timeout
# Prepare asynchronous read
print(" Prepare read")
with port_read.read_async(512) as data_collector:
print(" Writing")
written_bytes = port_write.write_sync(take_photo_bytecode); # Write synchronously
print(" Reading...")
recieved_image = data_collector(); # Read asynchronously (but in a blocking manner)
print " Recieved: " + str(len(recieved_image)) + " bytes.";
with usb1.USBContext() as context:
print "Configuring camera:"
configure(context); # Configure camera
print "Taking picture:"
take_picture(context); # Take picture
print "Done."
</code></pre>
<p>Here is GuideCamLib/usb.py for the needed contextualization. The class _TransferCollector does most of the work, while _AsyncReader is just a function with state. Port and Device are just helper classes, to reduce boilerplate code in each transfer:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: UTF-8 -*-
#
# GuideCamLib/usb.py
import usb1 as usb
import six as six
import traceback
# For human-readable printing
transfer_status_dict = \
{ \
usb.TRANSFER_COMPLETED : "TRANSFER_COMPLETED",
usb.TRANSFER_ERROR : "TRANSFER_ERROR",
usb.TRANSFER_TIMED_OUT : "TRANSFER_TIMED_OUT",
usb.TRANSFER_CANCELLED : "TRANSFER_CANCELLED",
usb.TRANSFER_STALL : "TRANSFER_STALL",
usb.TRANSFER_NO_DEVICE : "TRANSFER_NO_DEVICE",
usb.TRANSFER_OVERFLOW : "TRANSFER_OVERFLOW" \
};
# Callback to accumulate successive transfer calls
class _AsyncReader:
def __init__(self):
self.transfers = [];
def __call__(self, transfer):
print "Status: " + transfer_status_dict[transfer.getStatus()]; # Prints the status of the transfer
if(transfer.getStatus() != usb.TRANSFER_COMPLETED):
return;
else:
self.transfers.append(transfer.getBuffer()[:transfer.getActualLength()]);
transfer.submit();
# A collector of asynchronous transfers' data.
# Stops collection after port.timeout has elapsed since receiving the last transfer.
class _TransferCollector:
# Prepare data collection
def __init__(self, transfer_size, pararell_transfers, port):
self.interface_handle = port.device.dev.claimInterface(port.interface);
self.reader = _AsyncReader();
self.port = port;
transfers = [];
# Queue transfers
for ii in range(pararell_transfers):
transfer = port.device.dev.getTransfer();
transfer.setBulk(
port.address,
transfer_size,
callback=self.reader,
timeout=port.timeout );
transfer.submit();
transfers.append(transfer);
self.transfers = transfers;
def __enter__(self):
self.interface_handle.__enter__();
return self;
def __exit__(self, exception_type, exception_value, traceback):
self.interface_handle.__exit__(exception_type, exception_value, traceback);
# Activate data collection
def __call__(self):
        # Collect transfers with _AsyncReader while there are active transfers.
while any(x.isSubmitted() for x in self.transfers):
try:
self.port.device.context.handleEvents();
except usb.USBErrorInterrupted:
pass;
return [six.byte2int(d) for data in self.reader.transfers for d in data];
# Port class for creating synchronous / asynchronous transfers
class Port:
def __init__(self, device, address, timeout = None):
self.device = device;
self.address = address;
self.interface = self.device.interface;
self.timeout = timeout;
if(timeout is None):
self.timeout = 0;
def read_sync(self, length):
with self.device.dev.claimInterface(self.interface):
data = self.device.dev.bulkRead(self.address, length, timeout=self.timeout);
return [six.byte2int(d) for d in data];
def write_sync(self, data):
data = [six.int2byte(d) for d in data];
with self.device.dev.claimInterface(self.interface):
return self.device.dev.bulkWrite(self.address, data, timeout=self.timeout);
    # Make asynchronous transfers blocking. Collects data as long as the device
    # sends data more frequently than self.timeout or all the transfers fail
def read_async(self, length, pararell_transfers = 32):
return _TransferCollector(length, pararell_transfers, self);
# Device class for creating ports
class Device:
def __init__(self, vid = None, pid = None, context = None, interface = 0):
if(not context):
self.backend = usb.USBContext();
context = self.backend.__enter__();
self.context = context;
self.interface = interface;
self.dev = context.openByVendorIDAndProductID(vid, pid, skip_on_error = False);
if self.dev is None:
raise RuntimeError('Device not found');
def __enter__(self):
return self;
def __exit__(self, exception_type, exception_value, traceback):
if(hasattr(self, "backend")):
self.backend.__exit__(exception_type, exception_value, traceback);
def open_port(self, address, timeout = None):
return Port(self, address, timeout);
</code></pre>
<p>The script outputs the following, which clearly shows the synchronous transfers are successful but each of the queued asynchronous transfers fail with a NO_DEVICE:</p>
<pre><code>>>> python StackOverflow.py
Configuring camera:
Start configuration...
End configuration
Taking picture:
Prepare read
Writing
Reading...
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Status: TRANSFER_NO_DEVICE
Traceback (most recent call last):
File "StackOverflow.py", line 70, in <module>
take_picture(context); # Take picture
File "StackOverflow.py", line 62, in take_picture
recieved_image = data_collector();
File "/media/jabozzo/Data/user_data/jabozzo/desktop/sigmamin/code/workspace_Python/USB/USB wxglade libusb1/GuideCamLib/usb.py", line 62, in __exit__
self.interface_handle.__exit__(exception_type, exception_value, traceback);
File "/home/jabozzo/.local/lib/python2.7/site-packages/usb1.py", line 1036, in __exit__
self._handle.releaseInterface(self._interface)
File "/home/jabozzo/.local/lib/python2.7/site-packages/usb1.py", line 1171, in releaseInterface
libusb1.libusb_release_interface(self.__handle, interface),
File "/home/jabozzo/.local/lib/python2.7/site-packages/usb1.py", line 121, in mayRaiseUSBError
raiseUSBError(value)
File "/home/jabozzo/.local/lib/python2.7/site-packages/usb1.py", line 117, in raiseUSBError
raise STATUS_TO_EXCEPTION_DICT.get(value, USBError)(value)
usb1.USBErrorNotFound: LIBUSB_ERROR_NOT_FOUND [-5]
</code></pre>
<h1>Update</h1>
<p>I've changed the Device and Port classes so the interface is opened when the device is openned. That way the interface is only openned (and closed) once, independently of the number of ports openned:</p>
<pre><code># Port class for creating synchronous / asynchronous transfers
class Port:
def __init__(self, device, address, timeout = None):
self.device = device;
self.address = address;
self.timeout = timeout;
if(timeout is None):
self.timeout = 0;
def read_sync(self, length):
data = self.device.dev.bulkRead(self.address, length, timeout=self.timeout);
return [six.byte2int(d) for d in data];
def write_sync(self, data):
data = [six.int2byte(d) for d in data];
return self.device.dev.bulkWrite(self.address, data, timeout=self.timeout);
    # Make asynchronous transfers blocking. Collects data as long as the device
    # sends data more frequently than self.timeout or all the transfers fail
def read_async(self, length, pararell_transfers = 32):
return _TransferCollector(length, pararell_transfers, self);
# Device class for creating ports
class Device:
def __init__(self, vid = None, pid = None, context = None, interface = 0):
if(not context):
self.backend = usb.USBContext();
context = self.backend.__enter__();
self.context = context;
self.interface = interface;
self.dev = context.openByVendorIDAndProductID(vid, pid, skip_on_error = False);
if self.dev is None:
raise RuntimeError('Device not found');
self.interface_handle = self.dev.claimInterface(self.interface);
def __enter__(self):
self.interface_handle.__enter__();
return self;
def __exit__(self, exception_type, exception_value, traceback):
self.interface_handle.__exit__(exception_type, exception_value, traceback);
if(hasattr(self, "backend")):
self.backend.__exit__(exception_type, exception_value, traceback);
def open_port(self, address, timeout = None):
return Port(self, address, timeout);
</code></pre>
<p>I still have the same error. But the printing shows me it fails earlier, at the read preparation:</p>
<pre><code>>>> python StackOverflow.py
Configuring camera:
Start configuration...
End configuration
Taking picture:
Prepare read
Traceback (most recent call last):
...
</code></pre>
<p>I'm beginning to suspect I don't need to open an interface in order to perform asynchronous transfers. </p>
| 0 | 2016-08-04T23:41:12Z | 38,796,487 | <p>As <a href="http://stackoverflow.com/users/5546740/dryman">dryman</a> pointed out, I was freeing the context before finishing (because I opened the context twice). If we inline the read_async and write_sync calls in the code extract:</p>
<pre><code>print(" Prepare read")
with port_read.read_async(512) as data_collector:
print(" Writing")
written_bytes = port_write.write_sync(take_photo_bytecode); # Write synchronously
print(" Reading...")
recieved_image = data_collector(); # Read asynchronously (but in a blocking manner)
</code></pre>
<p>We would obtain something like the following pseudo-code:</p>
<pre><code>print(" Prepare read")
with port_read.claimInterface(0) as ?:
# Read preparation code
print(" Writing")
with port_write.claimInterface(0) as ?:
written_bytes = # Write synchronously
# Here the port_write.claimInterface context has exited,
    # leaving the prepared read transfers in an invalid state.
print(" Reading...")
recieved_image = # Read asynchronously (Fails, out of interface context)
</code></pre>
<p>In my question update I forgot to remove the interface claim on _TransferCollector, so I had a similar issue. Applying the question update and defining _TransferCollector as:</p>
<pre><code># A collector of asynchronous transfers' data.
# Stops collection after port.timeout has elapsed since receiving the last transfer.
class _TransferCollector:
# Prepare data collection
def __init__(self, transfer_size, pararell_transfers, port):
self.reader = _AsyncReader();
self.port = port;
transfers = [];
# Queue transfers
for ii in range(pararell_transfers):
transfer = port.device.dev.getTransfer();
transfer.setBulk(
port.address,
transfer_size,
callback=self.reader,
timeout=port.timeout );
transfer.submit();
transfers.append(transfer);
self.transfers = transfers;
# Activate data collection
def __call__(self):
        # Collect transfers with _AsyncReader while there are active transfers.
while any(x.isSubmitted() for x in self.transfers):
try:
self.port.device.context.handleEvents();
except usb.USBErrorInterrupted:
pass;
return [six.byte2int(d) for data in self.reader.transfers for d in data];
</code></pre>
<p>Fixes the issue.</p>
<p>Note that a little change has to be made to call read_async now:</p>
<pre><code># Prepare asynchronous read
print(" Prepare read")
data_collector = port_read.read_async(512);
print(" Writing")
written_bytes = port_write.write_sync(take_photo_bytecode); # Write synchronously
print(" Reading...")
recieved_image = data_collector(); # Read asynchronously (but in a blocking manner)
print " Recieved: " + str(len(recieved_image)) + " bytes.";
</code></pre>
| 0 | 2016-08-05T19:24:45Z | [
"python",
"linux",
"asynchronous",
"libusb-1.0"
] |
Two blank lines after function | 38,779,052 | <p>The PEP 8 style guide says to surround top level functions with <a href="https://www.python.org/dev/peps/pep-0008/#blank-lines" rel="nofollow">two blank lines</a>. I have Sublime Text configured with Anaconda, so it highlights the need to put two blank lines after each function in Flask. But I noticed on GitHub that nobody is following this style guideline. Should I stop following it?</p>
<p>How do I tell Anaconda to stop identifying the lack of two blank lines as errors?</p>
<p>I found that I can disable the error in Sublime Text by editing Anaconda.sublime-settings and adding "E302":</p>
<pre><code> "pep8_ignore":
[
"E309",
"E302"
],
</code></pre>
| 2 | 2016-08-04T23:44:16Z | 39,435,551 | <p>The PEP 8 style guidance to add two blank lines after classes and functions is followed by major open source projects. I will keep doing it! </p>
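<p>For reference, this is the layout E302 asks for - exactly two blank lines between top-level definitions:</p>

```python
def first():
    return 1


def second():  # two blank lines above, as PEP 8 / E302 expects
    return 2


print(first() + second())  # 3
```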
| -2 | 2016-09-11T11:21:58Z | [
"python",
"anaconda",
"pep8"
] |
Using Python dict comprehension to index list of words by first letter | 38,779,082 | <p>Conceptually, this is pretty easy, but I can't seem to figure it out. </p>
<p>I want to turn a list of strings into a dict with each key being the first letter of the list of words associated with it.</p>
<pre><code># My list of sounds
sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
# My dict comprehension which isn't working
indexed = {s[0]: [s] for s in sounds}
</code></pre>
<p>My output looks like this:</p>
<pre><code>{'h': ['heck'], 's': ['sniff'], 'b': ['bore']}
</code></pre>
<p>I'm missing an append function here, but each time I try to implement it fails to give me the correct output, or it throws a SyntaxError. What am I missing?</p>
| 1 | 2016-08-04T23:47:58Z | 38,779,153 | <p>Is this what you are trying to achieve?</p>
<pre><code>firsts = {s[0] for s in sounds}
indexed = {first: [s for s in sounds if s[0]==first] for first in firsts}
</code></pre>
| 0 | 2016-08-04T23:56:07Z | [
"python"
] |
Using Python dict comprehension to index list of words by first letter | 38,779,082 | <p>Conceptually, this is pretty easy, but I can't seem to figure it out. </p>
<p>I want to turn a list of strings into a dict with each key being the first letter of the list of words associated with it.</p>
<pre><code># My list of sounds
sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
# My dict comprehension which isn't working
indexed = {s[0]: [s] for s in sounds}
</code></pre>
<p>My output looks like this:</p>
<pre><code>{'h': ['heck'], 's': ['sniff'], 'b': ['bore']}
</code></pre>
<p>I'm missing an append function here, but each time I try to implement it fails to give me the correct output, or it throws a SyntaxError. What am I missing?</p>
| 1 | 2016-08-04T23:47:58Z | 38,779,163 | <p>No problem, <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow">itertools to the rescue</a>. You can group the elements by their first letter, then create a dict out of them.</p>
<pre><code>sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
import itertools
grouped = itertools.groupby(sorted(sounds), key=lambda x: x[0])
d = {k: list(v) for k,v in grouped}
print(d)
</code></pre>
| 3 | 2016-08-04T23:57:46Z | [
"python"
] |
Using Python dict comprehension to index list of words by first letter | 38,779,082 | <p>Conceptually, this is pretty easy, but I can't seem to figure it out. </p>
<p>I want to turn a list of strings into a dict with each key being the first letter of the list of words associated with it.</p>
<pre><code># My list of sounds
sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
# My dict comprehension which isn't working
indexed = {s[0]: [s] for s in sounds}
</code></pre>
<p>My output looks like this:</p>
<pre><code>{'h': ['heck'], 's': ['sniff'], 'b': ['bore']}
</code></pre>
<p>I'm missing an append function here, but each time I try to implement it fails to give me the correct output, or it throws a SyntaxError. What am I missing?</p>
| 1 | 2016-08-04T23:47:58Z | 38,779,249 | <p>This can be done in one go with just the standard library:</p>
<pre><code>>>> sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
>>> result=dict()
>>> for s in sounds:
result.setdefault(s[0],[]).append(s)
>>> result
{'b': ['bark', 'bork', 'blork', 'borf', 'bjork', 'boo', 'bre', 'bore'], 's': ['sniff'], 'h': ['heck']}
>>>
</code></pre>
<p>The solution with itertools is fine, but it requires the extra step of sorting the list, making it O(n log n), whereas this builds the result in a single pass, so it is O(n).</p>
<p>The <a href="https://docs.python.org/3/library/collections.html" rel="nofollow">collections</a> module offers <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a>, which has this default-value behavior built in (no <code>setdefault</code> call needed):</p>
<pre><code>>>> from collections import defaultdict
>>> result=defaultdict(list)
>>> for s in sounds:
result[s[0]].append(s)
>>> result
defaultdict(<class 'list'>, {'b': ['bark', 'bork', 'blork', 'borf', 'bjork', 'boo', 'bre', 'bore'], 's': ['sniff'], 'h': ['heck']})
>>>
</code></pre>
| 4 | 2016-08-05T00:12:13Z | [
"python"
] |
Using Python dict comprehension to index list of words by first letter | 38,779,082 | <p>Conceptually, this is pretty easy, but I can't seem to figure it out. </p>
<p>I want to turn a list of strings into a dict with each key being the first letter of the list of words associated with it.</p>
<pre><code># My list of sounds
sounds = ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']
# My dict comprehension which isn't working
indexed = {s[0]: [s] for s in sounds}
</code></pre>
<p>My output looks like this:</p>
<pre><code>{'h': ['heck'], 's': ['sniff'], 'b': ['bore']}
</code></pre>
<p>I'm missing an append function here, but each time I try to implement it fails to give me the correct output, or it throws a SyntaxError. What am I missing?</p>
| 1 | 2016-08-04T23:47:58Z | 38,782,332 | <p>This is not a good use for a dict comprehension - you will end up with more loops than you need. If you write it directly then you only scan the input list once:</p>
<pre><code>dict1 = {}
for s in ['sniff', 'bark', 'bork', 'blork', 'heck', 'borf', 'bjork', 'boo', 'bre', 'bore']:
    if s[0] not in dict1:
dict1[ s[0] ] = []
dict1[ s[0] ].append(s)
print dict1
</code></pre>
| 0 | 2016-08-05T06:15:02Z | [
"python"
] |
How to connect the QFileSystemModel dataChanged signal in PyQt5? | 38,779,112 | <p>I'm trying to connect the <code>QFileSystemModel.dataChanged</code> signal, but with no luck so far. The code below is spawning this error:</p>
<blockquote>
<p>TypeError: bytes or ASCII string expected not 'list'</p>
</blockquote>
<pre><code>import sys
from PyQt5 import QtGui, QtWidgets, QtCore
from PyQt5.QtWidgets import QFileSystemModel, QTreeView
from PyQt5.QtCore import QDir
class DirectoryTreeWidget(QTreeView):
def __init__(self, path=QDir.currentPath(), *args, **kwargs):
super(DirectoryTreeWidget, self).__init__(*args, **kwargs)
self.model = QFileSystemModel()
self.model.dataChanged[QtCore.QModelIndex,QtCore.QModelIndex,[]].connect(self.dataChanged)
def dataChanged(self, topLeft, bottomRight, roles):
print('dataChanged', topLeft, bottomRight, roles)
def main():
app = QtWidgets.QApplication(sys.argv)
ex = DirectoryTreeWidget()
ex.set_extensions(["*.txt"])
sys.exit(app.exec_())
if __name__ == "__main__":
main()
</code></pre>
<p>How can i connect this signal in PyQt5?</p>
| 1 | 2016-08-04T23:51:35Z | 38,779,472 | <p>You don't need to explicitly select the signal if it doesn't have any overloads. So the correct way to connect the signal is like this:</p>
<pre><code> self.model.dataChanged.connect(self.dataChanged)
</code></pre>
<p>But in any case, when you <strong>do</strong> need to select the signature, you must pass in either type objects or strings that represent a type. In your particular case, a string <em>must</em> be used, because the third parameter does not have a corresponding type object. So the explicit version of the above signal connection would be:</p>
<pre><code> self.model.dataChanged[QtCore.QModelIndex, QtCore.QModelIndex, "QVector<int>"].connect(self.dataChanged)
</code></pre>
| 1 | 2016-08-05T00:38:56Z | [
"python",
"pyqt",
"signals-slots",
"pyqt5",
"qfilesystemmodel"
] |
Extra fields on CSV | 38,779,138 | <p>I'm trying to write my first Python script, which prints which clients are probing and which APs they're probing for. My problem comes when a client (well, not a client yet) probes for more than one AP. </p>
<pre><code>import csv
import sys
if len(sys.argv) != 2:
print("usage: ./scriptpy.py csvfile")
pass
else:
with open(sys.argv[1], 'rb') as csvfile:
lector = csv.DictReader(csvfile, restkey='extra')
for row in lector:
print(row['Station MAC'] + " probes for " + row[' Probed ESSIDs'] + row['extra'])
pass
</code></pre>
<p>This raises a KeyError on <code>row['extra']</code>.</p>
<p>Thanks in advance.</p>
| 0 | 2016-08-04T23:54:40Z | 38,779,180 | <p>One way to deal with key errors is <code>.get('my_key', 'alternate_value')</code>, like this:</p>
<pre><code>row.get('extra', '') # rather than row['extra']
</code></pre>
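<p>A self-contained Python 3 sketch of the behaviour (made-up MAC addresses): with <code>restkey</code>, the <code>'extra'</code> key only exists on rows that actually carry extra fields, which is why <code>.get</code> is needed for the rest:</p>

```python
import csv
import io

# Two field names in the header; the first data row has a third field
# that DictReader files under restkey='extra' as a list.
data = "Station MAC, Probed ESSIDs\naa:bb:cc,HomeAP,ExtraAP\n11:22:33,CafeAP\n"
rows = list(csv.DictReader(io.StringIO(data), restkey='extra'))

print(rows[0]['extra'])          # ['ExtraAP']
print(rows[1].get('extra', []))  # [] - the 'extra' key is absent here
```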
| 1 | 2016-08-05T00:00:18Z | [
"python",
"csv"
] |
Ansible: validating a variable inside a multi-dimensional array/dict in one statement | 38,779,196 | <p>With the <code>uri</code> module (though this issue isn't limited to that) you can query a web endpoint and get a result back, and I'm trying to validate (with a <code>when</code> conditional) an object in the JSON response. Unfortunately the actual value is (roughly) in:</p>
<pre><code>registered_variable_name.json.result.my_key
</code></pre>
<p>The issue is that to test <code>my_key</code> it seems like I first need to verify:</p>
<ol>
<li>is <code>registered_variable_name</code> defined?</li>
<li>is <code>registered_variable_name.json</code> defined?</li>
<li>is <code>registered_variable_name.json.result</code> defined?</li>
</ol>
<p>then</p>
<ol start="4">
<li>is <code>registered_variable_name.json.result.my_key</code>'s value what I want?</li>
</ol>
<p>That seems over-complicated, so I'm hoping there is an easier way of just testing <code>my_key</code>'s value, because if any level of that <code>dict</code> is missing it seems to either throw an error or just not validate correctly (e.g. there is no JSON response due to an error, or there is a JSON response but no <code>result</code> element, and so on).</p>
| 0 | 2016-08-05T00:02:24Z | 38,780,623 | <p>This seems to work just fine (with ansible 2.1.1.0):</p>
<pre><code>- debug:
    msg: "The key is defined"
  when: registered_variable_name.json.result.my_key is defined
        and registered_variable_name.json.result.my_key == 'target value'
</code></pre>
<p>Here's a complete example:</p>
<pre><code>- hosts: localhost
name: Key exists and matches
gather_facts: false
vars:
varname:
json:
result:
my_key: target value
tasks:
- debug:
msg: "The key exists."
when: varname.json.result.my_key is defined
and varname.json.result.my_key == "target value"
- hosts: localhost
name: Key exists but no match
gather_facts: false
vars:
varname:
json:
result:
my_key: "these are not the droids you're looking for"
tasks:
- debug:
msg: "The key exists."
when: varname.json.result.my_key is defined
and varname.json.result.my_key == "target value"
- hosts: localhost
name: Key does not exist
gather_facts: false
vars:
varname: nothing to see here
tasks:
- debug:
msg: "The key exists."
when: varname.json.result.my_key is defined
and varname.json.result.my_key == "target value"
</code></pre>
| 1 | 2016-08-05T03:28:54Z | [
"python",
"dictionary",
"ansible"
] |
For Loop to determine weighted average python | 38,779,242 | <p>I'm new to Python and am having trouble crafting the correct for loop for a situation.</p>
<p>I have a dataframe <code>dfclean</code> that contains two columns: a restaurant star rating <code>"Star_Rating"</code> and total number of reviews <code>"Review_Count"</code>.</p>
<p>I want to find weighted averages for these star ratings (Star_Rating * (Review_Count / total number of reviews)) and add them to a new column called <code>"weightedavg"</code>.</p>
<p>Here's what I have so far along with notes of what I <em>think</em> I'm doing with each step:</p>
<pre><code>#get total number of reviews
totalreviews = dfclean.Review_Count.sum()
#create empty list to append values to
weightedavg = []
#for loop
for row in range(len(dfclean)):
weightedavg.append(dfclean.Star_Rating[row] * (dfclean.Review_Count[row] / totalreviews))
#make a new column in df consisting of weightedavg
dfclean['weightedavg'] = weightedavg
</code></pre>
<p>Any help would be greatly appreciated!</p>
| 3 | 2016-08-05T00:11:19Z | 38,779,270 | <p>You shouldn't use a for loop. You can take advantage of broadcasting to do something like the following:</p>
<pre><code>dfclean['weightedavg'] = dfclean['Star_Rating'] * dfclean['Review_Count'] / dfclean['Review_Count'].sum()
</code></pre>
<p>This is much faster than using a Python loop and is also syntactically cleaner. You can read about broadcasting in <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">the numpy docs</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html" rel="nofollow">the pandas docs</a>.</p>
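<p>The same element-wise arithmetic written out with plain Python lists (no pandas), just to make the broadcasting explicit with made-up numbers:</p>

```python
star_rating = [4.0, 3.0, 5.0]   # made-up ratings
review_count = [10, 30, 60]     # made-up review counts

total = sum(review_count)       # 100
weightedavg = [s * c / total for s, c in zip(star_rating, review_count)]
print(weightedavg)  # [0.4, 0.9, 3.0]
```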
| 5 | 2016-08-05T00:15:17Z | [
"python",
"pandas",
"for-loop",
"dataframe"
] |
Random Forest handle negation | 38,779,262 | <p>I'm using Random Forest to apply a sentiment to a string. So basically after cleaning the reviews, which essentially means that stop words (<code>nltk.corpus -> stopwords</code> from where I remove words as <em>no, not, nor, won, wasn, weren</em>) are removed, as well as non-letter characters, and everything is put is lowercase. The <code>CountVectorizer</code> with arguments <code>(analyzer = "word", tokenizer = None, preprocessor = None, ngram_range=(1, 3), stop_words = None, max_features = 5500)</code> builds the vocabulary and adds it to a <code>numpy</code> array. Also I'm using 100 trees.</p>
<p>After splitting the data with <code>test_size = .1</code> the classifier is trained, fitted and scored. </p>
<p><code>score = forest.score(X_test, y_test)</code>:
0.882180882181</p>
<p>Confusion matrix, without normalization:</p>
<pre><code>[[2256 503]
[ 519 5269]]
</code></pre>
<p>Normalized confusion matrix:</p>
<pre><code>[[ 0.82 0.18]
[ 0.09 0.91]]
</code></pre>
<p>ROC curve showing RandomForest (RF) and RandomForest with LinearRegression(RF + LR):</p>
<p><a href="http://i.stack.imgur.com/XSJ1M.png" rel="nofollow"><img src="http://i.stack.imgur.com/XSJ1M.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/gSZ7y.png" rel="nofollow"><img src="http://i.stack.imgur.com/gSZ7y.png" alt="enter image description here"></a></p>
<p>So the problem is that even though the results look very good I get wrong results, for instance:</p>
<p><em>"The movie is no good"</em> -> <em>negative</em></p>
<p><em>"The movie is not bad"</em> -> <em>negative</em></p>
<p><em>"The music and imagery aren't good"</em> -> <em>positive</em></p>
<p><em>"The movie didn't make sense"</em> -> positive</p>
<p>So the above are only some of the problematic cases, but you can get an overall idea of the problem I am facing at the moment (even using 3-grams, the classifier cannot predict negation properly). I thought it could also be the training set, which may not contain enough cases of negation for the classifier to learn them.</p>
<p>Would you have any suggestion what could be improved or changed so that negation is classified correctly?</p>
| 2 | 2016-08-05T00:13:41Z | 38,783,100 | <p>I believe this question is better suited for the Cross Validated stack exchange, but anyhow.</p>
<p>There are several things that might improve your results:</p>
<ol>
<li><p>For sentiment analysis it doesn't feel right to remove negation stopwords like 'no', 'not', etc., since they can totally change the positive/negative sentiment of the sentence when constructing the n-grams. In your examples, "not bad", "aren't good", etc. would be transformed into "bad", "good" etc.</p></li>
<li><p>If you think the negative class is under-represented in your training set, you can balance it by undersampling the positive class.</p></li>
<li><p>Instead of directly using <code>predict</code>, use <code>predict_proba</code> and try setting different probability thresholds for separating positive from negative examples.</p></li>
<li><p>Try a boosting method like AdaBoost or Gradient Boosted Trees, that are better suited to learning exceptions. For example, for learning that a sentence with the word "bad" is generally negative, but if "not bad" is also present, it's positive.</p></li>
</ol>
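<p>Point 1 can be seen with a tiny library-free sketch (the review text and stopword lists here are made up for illustration):</p>

```python
def tokens(text, stopwords):
    # Lowercase, split on whitespace, drop stopwords.
    return [w for w in text.lower().split() if w not in stopwords]

def bigrams(words):
    return list(zip(words, words[1:]))

review = "The movie is not bad"

# Stopword list that also drops negations (what the original pipeline risks):
drop_negations = {"the", "is", "not", "no", "nor"}
# Stopword list that keeps negations:
keep_negations = {"the", "is"}

print(bigrams(tokens(review, drop_negations)))  # [('movie', 'bad')]
print(bigrams(tokens(review, keep_negations)))  # [('movie', 'not'), ('not', 'bad')]
```

<p>With negations removed, the only surviving bigram is <code>('movie', 'bad')</code>, which points the classifier in exactly the wrong direction.</p>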
| 2 | 2016-08-05T07:02:25Z | [
"python",
"scikit-learn",
"random-forest",
"sklearn-pandas"
] |
Python program for rearranging an array swapping 2 at a time doesn't work, could someone help me? | 38,779,326 | <p>This is the code I've written for a Python program (please keep in mind that I learnt Python from the Internet a couple of days ago). It is supposed to rearrange an inputted list (say, [3,2,4,1,5]) into, in the example case, [1,2,3,4,5].
For some reason, it's not working, and I can't figure the reason out. I've tried many different ways to do it, but none is working.
In this case, the program returns the same value as the original one.
Here's the code:</p>
<pre><code> #General purpose string solver (by 2swapping + sune combos)
def solvepermutation():
#Setup
global pos
pos = raw_input('Ingrese permutacion.\n')
orientation = raw_input('Ingrese orientacion.\n')
#Generating solved position
solved=[]
for i in range(len(pos)):
solved.append(int(i+1))
#Solving pos
solvepos()
#Printing pos solved
print(pos)
#Function which solves pos
def solvepos():
global pos
for z in range(len(pos)-1):
for q in range(len(pos)):
if pos[z] == q+1:
pos[z],pos[q]=pos[q],pos[z]
continue
else:
continue
</code></pre>
| 1 | 2016-08-05T00:21:31Z | 38,779,454 | <p><code>solvepos()</code> operates on the <code>global pos</code>, but that's a string - when you do <code>if pos[z] == q+1:</code>, <code>q+1</code> is an integer (one of the values from the <code>range</code>), but <code>pos[z]</code> is a string (one of the characters of the input). These will never compare equal, so no swap occurs.</p>
<p>There are many things that are wrong with this code, starting with the fact that <code>sort</code> is built in and it does no good to re-implement it. You also don't do anything with the <code>solved</code> list or <code>orientation</code> input; and you should be using parameters and return values instead of trying to communicate through global variables. But most importantly, <a href="http://stackoverflow.com/questions/4383250/why-should-i-use-foreach-instead-of-for-int-i-0-ilength-i-in-loops/4383321#4383321">don't iterate like that</a>.</p>
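<p>To see the type mismatch concretely (input string chosen arbitrarily):</p>

```python
pos = "32415"  # raw_input() returns a string, so pos[z] is a character

print(pos[0] == 3)       # False: '3' (str) never equals 3 (int)
print(int(pos[0]) == 3)  # True once the character is converted

# Converting to a real list of ints and using the built-in sort avoids
# the problem entirely:
digits = sorted(int(c) for c in pos)
print(digits)  # [1, 2, 3, 4, 5]
```

<p>Once the input is converted to actual integers, the built-in sort does the whole job in one line.</p>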
| 0 | 2016-08-05T00:36:55Z | [
"python"
] |
Reorder pandas groupby dataframe | 38,779,335 | <p>I have the following groupby dataframe in pandas</p>
<pre><code>Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Tropical 77.5
Temperate 13.5
soybean_1 Temperate 18.5
Tropical 35.0
</code></pre>
<p>How can I sort it so that in the 'Region' column, Temperate precedes Tropical?</p>
<p>-- EDIT: expected answer is</p>
<pre><code>Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Temperate 13.5
Tropical 77.5
soybean_1 Temperate 18.5
Tropical 35.0
</code></pre>
| 2 | 2016-08-05T00:23:13Z | 38,779,365 | <p>Check out <code>sortlevel</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sortlevel.html" rel="nofollow">docs</a>) or <code>sort_index</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow">docs</a>).</p>
<pre><code>df.sortlevel(level=[0, 1])
</code></pre>
<p>and</p>
<pre><code>df.sort_index(level=[0, 1])
</code></pre>
<p>both output</p>
<pre><code>Crop Region
maize_1 Temperate 30
Tropical 46.0
maize_2 Temperate 13.5
Tropical 77.5
soybean_1 Temperate 18.5
Tropical 35.0
</code></pre>
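<p>For reference, a self-contained version that builds the index from the data in the question (<code>sort_index</code> is the spelling that has remained supported; <code>sortlevel</code> was later deprecated):</p>

```python
import pandas as pd

# Rebuild the grouped Series from the question's values.
idx = pd.MultiIndex.from_tuples(
    [('maize_1', 'Temperate'), ('maize_1', 'Tropical'),
     ('maize_2', 'Tropical'), ('maize_2', 'Temperate'),
     ('soybean_1', 'Temperate'), ('soybean_1', 'Tropical')],
    names=['Crop', 'Region'])
s = pd.Series([30.0, 46.0, 77.5, 13.5, 18.5, 35.0], index=idx)

# Sort on both index levels: Crop first, then Region within each crop.
result = s.sort_index(level=[0, 1])
print(result)
```

<p>Within each crop, Temperate now precedes Tropical, matching the expected output in the question.</p>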
| 0 | 2016-08-05T00:27:30Z | [
"python",
"pandas"
] |
Reorder pandas groupby dataframe | 38,779,335 | <p>I have the following groupby dataframe in pandas</p>
<pre><code>Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Tropical 77.5
Temperate 13.5
soybean_1 Temperate 18.5
Tropical 35.0
</code></pre>
<p>How can I sort it so that in the 'Region' column, Temperate precedes Tropical?</p>
<p>-- EDIT: expected answer is</p>
<pre><code>Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Temperate 13.5
Tropical 77.5
soybean_1 Temperate 18.5
Tropical 35.0
</code></pre>
| 2 | 2016-08-05T00:23:13Z | 38,779,379 | <h3>Setup</h3>
<pre><code>idx = pd.MultiIndex.from_tuples([('maize_1', 'Temperate'), ('maize_1', 'Tropical'),
('maize_2', 'Tropical'), ('maize_2', 'Temperate'),
('soybean_1', 'Temperate'), ('soybean_1', 'Tropical')],
names=['Crop', 'Region'])
s = pd.Series([30., 46., 77.5, 13.5, 18.5, 34.], idx)
s
Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Tropical 77.5
Temperate 13.5
soybean_1 Temperate 18.5
Tropical 34.0
dtype: float64
</code></pre>
<h3>Solution</h3>
<p>IIUC you want to sort by <code>'Region'</code> and leave <code>'Crop'</code> alone.</p>
<pre><code>s.unstack().sort_index(1).stack()
Crop Region
maize_1 Temperate 30.0
Tropical 46.0
maize_2 Temperate 13.5
Tropical 77.5
soybean_1 Temperate 18.5
Tropical 34.0
dtype: float64
</code></pre>
<p>You can also sort the index as is, but that will also sort <code>'Crop'</code>. It so happens your <code>'Crop'</code>s are already in order; if they weren't, this solution would preserve that order.</p>
| 3 | 2016-08-05T00:28:56Z | [
"python",
"pandas"
] |
Slow equality evaluation for identical objects (x == x) | 38,779,421 | <p>Is there any reason <code>x == x</code> is not evaluated quickly? I was hoping <code>__eq__</code> would check if its two arguments are identical, and if so return True instantly. But it doesn't do it:</p>
<pre><code>s = set(range(100000000))
s == s # this doesn't short-circuit, so takes ~1 sec
</code></pre>
<p>For built-ins, <code>x == x</code> always returns True I think? For user-defined classes, I guess someone could define <code>__eq__</code> that doesn't satisfy this property, but is there any reasonable use case for that?</p>
<p>The reason I want <code>x == x</code> to be evaluated quickly is because it's a huge performance hit when <a href="http://stackoverflow.com/a/12726843/336527">memoizing functions with very large arguments</a>:</p>
<pre><code>from functools import lru_cache
@lru_cache()
def f(s):
return sum(s)
large_obj = frozenset(range(50000000))
f(large_obj) # this takes >1 sec every time
</code></pre>
<p>Note that the reason @lru_cache is <em>repeatedly</em> slow for large objects is not because it needs to calculate <code>__hash__</code> (this is only done once and is then hard-cached as <a href="http://stackoverflow.com/a/12726793/336527">pointed out</a> by @jsbueno), but because the dictionary's hash table needs to execute <code>__eq__</code> <em>every time</em> to make sure it found the right object in the bucket (equality of hashes is obviously insufficient).</p>
<p>UPDATE:</p>
<p>It seems it's worth considering this question separately for three situations.</p>
<p>1) User-defined types (i.e., not built-in / standard library).</p>
<p>As @donkopotamus pointed out, there are cases where <code>x == x</code> should not evaluate to True. For example, for <code>numpy.array</code> and <code>pandas.Series</code> types, the result is intentionally not convertible to boolean because it's unclear what the natural semantics should be (does False mean the container is empty, or does it mean all items in it are False?).</p>
<p>But here, there's no need for python to do anything, since the users can always short-circuit <code>x == x</code> comparison themselves if it's appropriate:</p>
<pre><code>def __eq__(self, other):
if self is other:
return True
# continue normal evaluation
</code></pre>
<p>2) Python built-in / standard library types.</p>
<p>a) Non-containers.</p>
<p>For all I know the short-circuit may already be implemented for this case - I can't tell since either way it's super fast.</p>
<p>b) Containers (including <code>str</code>).</p>
<p>As @Karl Knechtel commented, adding short-circuit may hurt total performance if the savings from short-circuit are outweighed by the extra overhead in cases where <code>self is not other</code>. While theoretically possible, even in that case the overhead is small in relative terms (container comparison is never super-fast). And of course, in cases where short-circuit helps, the savings can be dramatic.</p>
<p>BTW, it turns out that <code>str</code> does short-circuit: comparing huge identical strings is instant.</p>
| 15 | 2016-08-05T00:32:55Z | 38,779,482 | <p>As you say, someone could quite easily define an <code>__eq__</code> that you personally don't happen to approve of ... for example, the <a href="https://en.wikipedia.org/wiki/IEEE_floating_point" rel="nofollow">Institute of Electrical and Electronics Engineers</a> might be so foolish as to do that:</p>
<pre><code>>>> float("NaN") == float("NaN")
False
</code></pre>
<p>Another "unreasonable use case":</p>
<pre><code>>>> bool(numpy.ma.masked == numpy.ma.masked)
False
</code></pre>
<p>Or even:</p>
<pre><code>>>> numpy.arange(10) == numpy.arange(10)
array([ True, True, True, True, True, True, True, True, True, True], dtype=bool)
</code></pre>
<p>which has the audacity to not even be convertible to <code>bool</code>!</p>
<p>So there is certainly practical scope for <code>x == x</code> to <strong>not</strong> automagically be short-circuited to be true. </p>
<h3>Going Off Course</h3>
<p>However the following is perhaps a good question:</p>
<blockquote>
<p>Why doesn't <code>set.__eq__</code> check for instance identity?</p>
</blockquote>
<p>Well, one might think ... because a set <code>S</code> might contain <code>NaN</code> and since <code>NaN</code> cannot equal itself then surely such a set <code>S</code> cannot equal itself? Investigating:</p>
<pre><code>>>> s = set([float("NaN")])
>>> s == s
True
</code></pre>
<p>Hmm, that's interesting, especially since:</p>
<pre><code>>>> {float("NaN")} == {float("NaN")}
False
</code></pre>
<p>This behaviour is due to Python's desire for <a href="https://docs.python.org/3/reference/expressions.html#value-comparisons" rel="nofollow">sequences to be reflexive</a>.</p>
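<p>The identity-before-equality rule can be made visible with a deliberately non-reflexive class whose <code>__eq__</code> counts how often it is consulted (a contrived class, purely for illustration):</p>

```python
class LoudNaN(object):
    """Non-reflexive like NaN, but records every __eq__ call."""
    def __init__(self):
        self.eq_calls = 0

    def __eq__(self, other):
        self.eq_calls += 1
        return False

x = LoudNaN()

print([x] == [x])          # True: lists compare elements by identity first...
print(x.eq_calls)          # 0:    ...so __eq__ was never even called
print([x] == [LoudNaN()])  # False: distinct objects do fall through to __eq__
print(x.eq_calls)          # 1
```

<p>So a container built from the <em>same</em> object never consults <code>__eq__</code> at all, which is exactly why <code>s == s</code> is true for a set containing NaN.</p>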
| 5 | 2016-08-05T00:40:32Z | [
"python",
"python-3.x",
"python-internals"
] |
Python CGI downloads the file instead of executing on CentOS | 38,779,463 | <p>I have worked with CGI in Python on Ubuntu and got satisfactory results. I have an HTML file with a simple form whose only input is a button, with an action pointing to a <code>.py</code> file. The <code>.py</code> file has <code>755</code> permissions and is located in <code>/var/www/cgi-bin</code>, and the HTML is located in <code>/var/www/html</code>. I run the <code>CGIHTTPServer</code> by executing the following command in the <code>/var/www/</code> directory. But when I click the button, instead of executing the <code>.py</code> file, the browser tries to download it, and it's annoying me. I tried reconfiguring the <code>http.conf</code> file with a <code><Directory></Directory></code> tag for <code>cgi-bin</code> many times but got no result! Please HELP ME!!!!</p>
<h2>show_database.py</h2>
<pre><code>#!/usr/bin/env python2
import cgi
import cgitb; cgitb.enable()
from MySQLdb import *
# Creates a connection with database
dbconnection = connect(host="localhost", user="root", passwd="", db="testdb")
# Creates a cursor of db
cursor = dbconnection.cursor()
# Selects data from database
cursor.execute("SELECT * FROM inp;")
# HTML view by printing codes as strings
print("Content-type: text/html\r\n\r\n")
print("")
print("<html>")
# Body of HTML Page
print("<body>")
print("<h1>Database!</h1>")
# A table that shows the database
print("<table>")
print("<th>ID</th><th>input</th>")
for row in cursor.fetchall():
print("<tr>")
print("<td>" + str(row['ID']) + "</td>")
print("<td>" + str(row['inpt']) + "</td>")
print("</tr>")
print("</table>")
# End of HTML
print("</body></html>")
</code></pre>
<h2>display_database.html</h2>
<pre><code><html>
<head>
<meta charset='utf-8' />
</head>
<style>
body { background: #ececec; }
body, p { text-align: center; direction: rtl; }
</style>
<body>
<h1>A web view of the database</h1>
<p>Press the button below to view the database!</p>
<form action="/cgi-bin/show_database.py">
<input type="Submit" value="View database" name="data">
</form>
</body>
</html>
</code></pre>
| 0 | 2016-08-05T00:37:42Z | 38,981,249 | <p>Are you running the html file like <code>http://localhost/display_database.html</code>?
Well, if it's still not working you can try <a href="https://www.apachefriends.org" rel="nofollow">XAMPP</a>. It's hassle-free to set up the same Apache server with MariaDB/PHP for home/testing usage, but it needs some tweaking for production.</p>
| 0 | 2016-08-16T17:41:30Z | [
"python",
"centos",
"cgi"
] |
how to dynamically generate methods for proxy class? | 38,779,590 | <p>I have an object like:</p>
<pre><code>class Foo(object):
def __init__(self,instance):
self.instance = instance
</code></pre>
<p>with</p>
<pre><code>>>> instance = SomeOtherObject()
>>> f = Foo(instance)
</code></pre>
<p>I want to be able to do</p>
<pre><code>>>> f.some_method()
</code></pre>
<p>and have the following call,</p>
<pre><code>>>> f.instance.some_method()
</code></pre>
<p>For complicated reasons, I cannot simply chain the attributes as in the above. I need to dynamically create an instance function on <code>f</code> with the same function signature as the embedded <code>instance</code>. That is, I need to do <code>f.some_method()</code> and then dynamically create the <code>some_method</code> instance-method for the <code>f</code> instance when it is invoked that pushes <code>some_method</code> down to the embedded object <code>instance</code>. </p>
<p>I hope that made sense. This is for Python 2.7. Any help appreciated.</p>
| 0 | 2016-08-05T00:58:00Z | 38,779,644 | <p>Write a <code>__getattr__()</code> method for your proxy class. This will be called when an attribute is accessed that doesn't exist on your instance. Return your contained object's attribute of the same name (or a wrapper if you insist, but there's no need if you just want to call the contained object's method and don't need to do anything else). Bonus: works with data as well as callables.</p>
<pre><code>def __getattr__(self, name):
return getattr(self.instance, name)
</code></pre>
<p>Does not work with <code>__</code> methods, however.</p>
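<p>A complete runnable sketch of this approach, reusing the names from the question (the wrapped class's method and attribute are invented for the demo):</p>

```python
class SomeOtherObject(object):
    label = "wrapped"

    def some_method(self):
        return "called on the wrapped object"

class Foo(object):
    def __init__(self, instance):
        self.instance = instance

    def __getattr__(self, name):
        # Only invoked for names *not* found on Foo itself,
        # so self.instance lookups don't recurse.
        return getattr(self.instance, name)

f = Foo(SomeOtherObject())
print(f.some_method())  # forwarded to the embedded instance
print(f.label)          # plain data attributes forward too
```

<p>No method is ever created on <code>Foo</code>; <code>__getattr__</code> simply hands back the bound method of the embedded object, so the signature is automatically the same.</p>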
| 3 | 2016-08-05T01:06:36Z | [
"python",
"metaprogramming"
] |
how to dynamically generate methods for proxy class? | 38,779,590 | <p>I have an object like:</p>
<pre><code>class Foo(object):
def __init__(self,instance):
self.instance = instance
</code></pre>
<p>with</p>
<pre><code>>>> instance = SomeOtherObject()
>>> f = Foo(instance)
</code></pre>
<p>I want to be able to do</p>
<pre><code>>>> f.some_method()
</code></pre>
<p>and have the following call,</p>
<pre><code>>>> f.instance.some_method()
</code></pre>
<p>For complicated reasons, I cannot simply chain the attributes as in the above. I need to dynamically create an instance function on <code>f</code> with the same function signature as the embedded <code>instance</code>. That is, I need to do <code>f.some_method()</code> and then dynamically create the <code>some_method</code> instance-method for the <code>f</code> instance when it is invoked that pushes <code>some_method</code> down to the embedded object <code>instance</code>. </p>
<p>I hope that made sense. This is for Python 2.7. Any help appreciated.</p>
| 0 | 2016-08-05T00:58:00Z | 38,779,703 | <p>You should look at the <code>wrapt</code> module. It is purpose-built for creating transparent object proxies where you can selectively override certain aspects of the wrapped object. For example:</p>
<pre><code>class Test(object):
def some_method(self):
print 'original'
import wrapt
proxy = wrapt.ObjectProxy(Test())
proxy.some_method()
print
class TestWrapper(wrapt.ObjectProxy):
def some_method(self):
self.__wrapped__.some_method()
print 'override'
wrapper = TestWrapper(Test())
wrapper.some_method()
</code></pre>
<p>This yields:</p>
<pre><code>original
original
override
</code></pre>
<p>The default behaviour of the <code>ObjectProxy</code> class is to proxy all method calls or attribute access. Updating attributes via the proxy will also update the wrapped object. Works for special <code>__</code> methods and lots of other stuff as well.</p>
<p>For details on <code>wrapt</code> see:</p>
<ul>
<li><a href="http://wrapt.readthedocs.io" rel="nofollow">http://wrapt.readthedocs.io</a></li>
</ul>
<p>Specific details on the object proxy can be found in:</p>
<ul>
<li><a href="http://wrapt.readthedocs.io/en/latest/wrappers.html" rel="nofollow">http://wrapt.readthedocs.io/en/latest/wrappers.html</a></li>
</ul>
<p>There are so many traps and pitfalls in doing this correctly, so it is recommended that you use <code>wrapt</code> if you can.</p>
| 1 | 2016-08-05T01:15:25Z | [
"python",
"metaprogramming"
] |
pandas "stacked" bar plot with values not added to give height | 38,779,638 | <p>I am trying to display a bar plot in <code>pandas 0.18.1</code> where the values for the different columns are displayed on top of each other but not added. So this is I think a stacked bar chart without the <strong>"stacking"</strong> which adds all stack values.</p>
<p>So in the example below</p>
<pre><code>import pandas
from pandas import DataFrame
so_example = DataFrame( [(15 , 0 , 0 , 4),(16, 0, 1, 4),(17 , 0 , 0 , 6)]).set_index(0)
so_example.plot.bar(stacked=True)
</code></pre>
<p>This gives the <code>Dataframe</code></p>
<pre><code>>>> so_example
1 2 3
0
15 0 0 4
16 0 1 4
17 0 0 6
</code></pre>
<p>I get for the second point "16" a max height of <code>1 + 4 = 5</code>. Instead I want the max height to be 4 and the "1" shown in green like it is now.</p>
<p><a href="http://i.stack.imgur.com/t2pbv.png" rel="nofollow"><img src="http://i.stack.imgur.com/t2pbv.png" alt="stacked bar plot"></a></p>
<p>How do I achieve this without subtracting artificially? Sorry, I don't know what these "stacked" plots are called, so all my searching failed to yield a simple solution.</p>
| 2 | 2016-08-05T01:05:53Z | 38,780,304 | <p>Please check out the following code; it is not a comprehensive solution, but it basically achieves what you want.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
so_example = pd.DataFrame( [(15 , 0 , 0 , 4),(16, 0, 1, 4),(17 , 0 , 0 , 6)]).set_index(0)
fig = plt.figure()
ax = fig.add_subplot(111)
_max = so_example.values.max()+1
ax.set_ylim(0, _max)
so_example.ix[:,1].plot(kind='bar', alpha=0.8, ax=ax, color='r')
ax2 = ax.twinx()
ax2.set_ylim(0, _max)
so_example.ix[:,2].plot(kind='bar', alpha=0.8, ax=ax2, color='b')
ax3 = ax.twinx()
ax3.set_ylim(0, _max)
so_example.ix[:,3].plot(kind='bar', alpha=0.8, ax=ax3, color='g')
fig.savefig('c:\haha.png')
fig.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/1PL8S.png" rel="nofollow"><img src="http://i.stack.imgur.com/1PL8S.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Here is my thinking:</strong></p>
<ol>
<li>First of all, I tried the same thing as you did, tried finding some <code>plug and play</code> solutions, but it seems no</li>
<li>Then I tried to play with the values, but you clearly said you don't want to artificially play with the values. I personally think it really depends on how you define <code>artifical</code>, I mean do some data processing for the <code>Dataframe</code> before plotting it would not be that difficult. </li>
<li>Anyway, here we jump into the third solution, which is to play with the <code>axis</code>. Since basically, your request is to <strong>make the bar plot in stacked way but with overlapping</strong>. I mean usually the <code>stacked bar</code> means you stack everthing on top of each other without any overlapping, that is why it is called <code>stack</code>. But since you want to organize the bar in a way that the <strong>smallest value is at the very front, the second smallest value is at the 2nd, so on so forth...</strong></li>
</ol>
<p>So here, I use <code>twinx()</code> to create different axis layer for each data set, and to make things a bit easier for me, I didn't sort them but just use <code>alpha=0.8</code> to change the transparency only. and I didn't use functions for the whole thing. Anyway, I think this is one approach.</p>
| 2 | 2016-08-05T02:44:16Z | [
"python",
"pandas",
"matplotlib",
"bar-chart"
] |
setAttr of a list in maya python | 38,779,640 | <p>I'm still figuring out how Python and Maya work together so forgive ignorance on my part. So I'm trying to change the attributes of a list of joints in maya using a loop like so:</p>
<p><code>for p in jointList:
cmd.getAttr(p, 'radius', .5)</code></p>
<p>and I get this error:</p>
<p><code>Invalid argument 1, '[u'joint1']'. Expected arguments of type ( list, )</code></p>
<p>I have no idea what I'm doing wrong.</p>
| 0 | 2016-08-05T01:05:56Z | 38,791,535 | <p>You need to specify both the node and the channel as your first argument, like 'joint1.radius'.</p>
<p>To set the radius to .5 on all your joints, your code would be:</p>
<pre><code>for p in jointList:
cmd.setAttr(p + '.radius', .5)
</code></pre>
| 0 | 2016-08-05T14:20:25Z | [
"python",
"maya",
"mel"
] |
setAttr of a list in maya python | 38,779,640 | <p>I'm still figuring out how Python and Maya work together so forgive ignorance on my part. So I'm trying to change the attributes of a list of joints in maya using a loop like so:</p>
<p><code>for p in jointList:
cmd.getAttr(p, 'radius', .5)</code></p>
<p>and I get this error:</p>
<p><code>Invalid argument 1, '[u'joint1']'. Expected arguments of type ( list, )</code></p>
<p>I have no idea what I'm doing wrong.</p>
| 0 | 2016-08-05T01:05:56Z | 38,794,243 | <p>Unless you work with PyMEL, you need to specify both the node and the attribute name to get or set.</p>
<p><strong>for getAttr :</strong></p>
<pre><code>for p in jointList:
val = cmd.getAttr('%s.radius' % (p))
</code></pre>
<p><strong>for setAttr :</strong></p>
<pre><code>for p in jointList:
cmd.setAttr('%s.radius' % (p), .5)
</code></pre>
| 2 | 2016-08-05T16:51:42Z | [
"python",
"maya",
"mel"
] |
setAttr of a list in maya python | 38,779,640 | <p>I'm still figuring out how Python and Maya work together so forgive ignorance on my part. So I'm trying to change the attributes of a list of joints in maya using a loop like so:</p>
<p><code>for p in jointList:
cmd.getAttr(p, 'radius', .5)</code></p>
<p>and I get this error:</p>
<p><code>Invalid argument 1, '[u'joint1']'. Expected arguments of type ( list, )</code></p>
<p>I have no idea what I'm doing wrong.</p>
| 0 | 2016-08-05T01:05:56Z | 39,084,531 | <pre><code># let's have a look at the valid/available attributes
# and change some attributes
# create list based on your selection
item_list = cmds.ls(selection=True)
for item in item_list:
# iterate all keyable and unlocked attributes
for key in cmds.listAttr(item, keyable = True, unlocked=True):
# get attr
value = cmds.getAttr("{0}.{1}".format(item, key))
print "{0}:{1}".format(key, value)
# lets set some attributes
attr_id = "radius"
attr_value = 5
for item in item_list:
# check object exists
if cmds.objExists(item):
# check object type
if cmds.objectType(item, isType="transform"):
# check objects attr exists
if cmds.attributeQuery(attr_id, node = item, exists=True):
print "set Attr"
cmds.setAttr("{0}.{1}".format(item,attr_id), attr_value)
</code></pre>
| -1 | 2016-08-22T16:32:11Z | [
"python",
"maya",
"mel"
] |
setAttr of a list in maya python | 38,779,640 | <p>I'm still figuring out how Python and Maya work together so forgive ignorance on my part. So I'm trying to change the attributes of a list of joints in maya using a loop like so:</p>
<p><code>for p in jointList:
cmd.getAttr(p, 'radius', .5)</code></p>
<p>and I get this error:</p>
<p><code>Invalid argument 1, '[u'joint1']'. Expected arguments of type ( list, )</code></p>
<p>I have no idea what I'm doing wrong.</p>
| 0 | 2016-08-05T01:05:56Z | 40,136,254 | <p>Going from examples in the documentation:</p>
<blockquote>
<p><a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/getAttr.html" rel="nofollow">http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/getAttr.html</a></p>
<p><a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/setAttr.html" rel="nofollow">http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/setAttr.html</a></p>
</blockquote>
<p>You need to specify the object name and attribute, as a string, when you pass it into the getAttr() function.</p>
<p>e.g. </p>
<pre><code>translate = cmds.getAttr('pSphere1.translate')
</code></pre>
<p>will return the attribute value for the translate on pSphere1</p>
<p>or </p>
<pre><code>jointList = cmds.ls(type='joint')
for joint in jointList:
jointRadius = cmds.getAttr('{}.radius'.format(joint))
#Do something with the jointRadius below
</code></pre>
<p>And if you want to set it</p>
<pre><code>newJointRadius = 20
jointList = cmds.ls(type='joint')
for joint in jointList:
cmds.setAttr('{}.radius'.format(joint), newJointRadius)
</code></pre>
| 0 | 2016-10-19T15:51:59Z | [
"python",
"maya",
"mel"
] |
Tensorflow doesn't like pandas dataframe? | 38,779,654 | <p>I was trying out tensorflow with the Titanic data from Kaggle:<a href="https://www.kaggle.com/c/titanic" rel="nofollow">https://www.kaggle.com/c/titanic</a></p>
<p>Here's the code I tried to implement from Sendex:<a href="https://www.youtube.com/watch?v=PwAGxqrXSCs&index=46&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v#t=398.046664" rel="nofollow">https://www.youtube.com/watch?v=PwAGxqrXSCs&index=46&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v#t=398.046664</a></p>
<pre><code>import tensorflow as tf
import cleanData
import numpy as np
train, test = cleanData.read_and_clean()
train = train[['Pclass', 'Sex', 'Age', 'Fare', 'Child', 'Fam_size', 'Title', 'Mother', 'Survived']]
# one hot
train['Died'] = int('0')
train["Died"][train["Survived"] == 0] = 1
print(train.head())
n_nodes_hl1 = 500
n_classes = 2
batch_size = 100
# tf graph input
x = tf.placeholder("float", [None, 8])
y = tf.placeholder("float")
def neural_network_model(data):
hidden_layer_1 = {'weights':tf.Variable(tf.random_normal([8, n_nodes_hl1])),
'biases':tf.Variable(tf.random_normal(n_nodes_hl1))}
output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_classes])),
'biases':tf.Variable(tf.random_normal([n_classes]))}
l1 = tf.add(tf.matmul(data, hidden_layer_1['weights']), hidden_layer_1['biases'])
l1 = tf.nn.relu(l1)
output = tf.matmul(l1, output_layer['weights']) + output_layer['biases']
return output
def train_neural_network(x):
prediction = neural_network_model(x)
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
desired_epochs = 10
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
for epoch in range(desired_epochs):
epoch_loss = 0
for _ in range(int(train.shape[0])/batch_size):
x_epoch, y_epoch = train.next_batch(batch_size)
_, c = sess.run([optimizer, cost], feed_dict= {x:x, y:y})
epoch_loss += c
print('Epoch', epoch, 'completed out of', desired_epochs, 'loss:', epoch_loss)
correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Training accuracy:', accuracy.eval({x:x, y:y}))
train_neural_network(x)
</code></pre>
<p>When I ran the code I got an error that said: "W tensorflow/core/framework/op_kernel.cc:909] Invalid argument: shape must be a vector of {int32,int64}, got shape []"</p>
<p>Is there a way around this? I saw a post on GitHub for tensorflow's code, and apparently the library doesn't take a pandas dataframe as input.</p>
| 0 | 2016-08-05T01:08:49Z | 38,779,816 | <p>I think the error is on this line:</p>
<pre><code> hidden_layer_1 = {'weights': tf.Variable(tf.random_normal([8, n_nodes_hl1])),
'biases': tf.Variable(tf.random_normal(n_nodes_hl1))}
</code></pre>
<p>The <code>shape</code> argument to <code>tf.random_normal()</code> must be a 1-D vector (or list, or array) of integers. For the <code>'biases'</code> variable, you're passing a single integer, <code>n_nodes_hl1</code>. The fix is simple, just wrap that argument in a list:</p>
<pre><code> hidden_layer_1 = {...,
'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
</code></pre>
| 1 | 2016-08-05T01:36:59Z | [
"python",
"pandas",
"tensorflow"
] |
Comparison of collections containing non-reflexive elements | 38,779,705 | <p>In python, a value <code>x</code> is not always constrained to equal itself. Perhaps the best known example is <code>NaN</code>:</p>
<pre><code>>>> x = float("NaN")
>>> x == x
False
</code></pre>
<p>Now consider a list of exactly one item. We might consider two such lists to be <em>equal</em> if and only if the items they contained were <em>equal</em>. For example:</p>
<pre><code>>>> ["hello"] == ["hello"]
True
</code></pre>
<p>But this does not appear to be the case with <code>NaN</code>:</p>
<pre><code>>>> x = float("NaN")
>>> x == x
False
>>> [x] == [x]
True
</code></pre>
<p>So these lists of items that are "not equal", are "equal". But only sometimes ... in particular:</p>
<ul>
<li>two lists consisting of the same instance of <code>NaN</code> are considered equal; while </li>
<li>two separate lists consisting of different instances of <code>NaN</code> are not equal</li>
</ul>
<p>Observe: </p>
<pre><code>>>> x = float("NaN")
>>> [x] == [x]
True
>>> [x] == [float("NaN")]
False
</code></pre>
<p>This general behaviour also applies to other collection types such as tuples and sets. Is there a good rationale for this?</p>
| 8 | 2016-08-05T01:15:56Z | 38,779,764 | <p>Per <a href="https://docs.python.org/3/reference/expressions.html#value-comparisons" rel="nofollow">the docs</a>,</p>
<blockquote>
<p>In enforcing reflexivity of elements, <strong>the comparison of collections assumes that for a collection element x, x == x is always true</strong>. Based on that assumption, element identity is compared first, and element comparison is performed only for distinct elements. This approach yields the same result as a strict element comparison would, if the compared elements are reflexive. For non-reflexive elements, the result is different than for strict element comparison, and may be surprising: The non-reflexive not-a-number values for example result in the following comparison behavior when used in a list: </p>
<pre><code> >>> nan = float('NaN')
>>> nan is nan
True
>>> nan == nan
False <-- the defined non-reflexive behavior of NaN
>>> [nan] == [nan]
True <-- list enforces reflexivity and tests identity first
</code></pre>
</blockquote>
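<p>The identity-first shortcut described above is easy to verify directly (a small self-contained sketch; <code>nan</code> and <code>other</code> are just local names here):</p>

```python
nan = float("nan")

# NaN is non-reflexive under ==
assert nan != nan

# but container comparison tests identity first, so the *same* NaN object
# makes the containers compare equal
assert [nan] == [nan]
assert (nan,) == (nan,)
assert nan in [nan]          # the `in` operator also tests identity first

# two *distinct* NaN objects fall back to ==, which is False
other = float("nan")
assert [nan] != [other]
assert other not in [nan]
```

The same identity shortcut (via <code>PyObject_RichCompareBool</code>) is what makes <code>x in seq</code> and <code>list.index</code> behave this way as well.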
| 7 | 2016-08-05T01:27:31Z | [
"python"
] |
python list - convert to list of list | 38,779,723 | <p>I am hoping someone can point me in the right direction along with any documentation that can provide even more info than just the answer. Here we go, I have a list of strings:</p>
<pre><code>arr = ["abcd","abcdef","def","abcdef"]
</code></pre>
<p>I want to turn that list into a list of lists so that the new element will be its order of appearance</p>
<pre><code>arr = [("abcd",1),("abcdef",1),("def",1),("abcdef",2)]
</code></pre>
<p>the reason for this is because I would like to then sort that list by length of string, and in case any are of identical length, I can use the 2nd element of the list to know which one was first from my original list.</p>
<pre><code>when "abcdef" appears twice, it also contains 1 or 2 in its 2nd element
</code></pre>
<p>hope that makes sense. thanks!</p>
| 1 | 2016-08-05T01:19:23Z | 38,779,752 | <p><code>sorted(["abcd","abcdef","def","abcdef"], key=len)</code> will do the job.</p>
| 0 | 2016-08-05T01:25:36Z | [
"python",
"list",
"sorting",
"indexing"
] |
python list - convert to list of list | 38,779,723 | <p>I am hoping someone can point me in the right direction along with any documentation that can provide even more info than just the answer. Here we go, I have a list of strings:</p>
<pre><code>arr = ["abcd","abcdef","def","abcdef"]
</code></pre>
<p>I want to turn that list into a list of lists so that the new element will be its order of appearance</p>
<pre><code>arr = [("abcd",1),("abcdef",1),("def",1),("abcdef",2)]
</code></pre>
<p>the reason for this is because I would like to then sort that list by length of string, and in case any are of identical length, I can use the 2nd element of the list to know which one was first from my original list.</p>
<pre><code>when "abcdef" appears twice, it also contains 1 or 2 in its 2nd element
</code></pre>
<p>hope that makes sense. thanks!</p>
| 1 | 2016-08-05T01:19:23Z | 38,779,768 | <p>Simple and pythonic.</p>
<pre><code>[(v, lst[:i].count(v)+1) for i,v in enumerate(lst)]
</code></pre>
<p>where <code>lst</code> is your list.</p>
<pre><code>>>> lst = ["abcd","abcdef","def","abcdef"]
>>> [(v, lst[:i].count(v)+1) for i,v in enumerate(lst)]
[('abcd', 1), ('abcdef', 1), ('def', 1), ('abcdef', 2)]
</code></pre>
| -1 | 2016-08-05T01:29:09Z | [
"python",
"list",
"sorting",
"indexing"
] |
python list - convert to list of list | 38,779,723 | <p>I am hoping someone can point me in the right direction along with any documentation that can provide even more info than just the answer. Here we go, I have a list of strings:</p>
<pre><code>arr = ["abcd","abcdef","def","abcdef"]
</code></pre>
<p>I want to turn that list into a list of lists so that the new element will be its order of appearance</p>
<pre><code>arr = [("abcd",1),("abcdef",1),("def",1),("abcdef",2)]
</code></pre>
<p>the reason for this is because I would like to then sort that list by length of string, and in case any are of identical length, I can use the 2nd element of the list to know which one was first from my original list.</p>
<pre><code>when "abcdef" appears twice, it also contains 1 or 2 in its 2nd element
</code></pre>
<p>hope that makes sense. thanks!</p>
| 1 | 2016-08-05T01:19:23Z | 38,779,779 | <p>Try the following for loop:</p>
<pre><code>>>> arr = ["abcd","abcdef","def","abcdef"]
>>> counts = {}
>>> new = []
>>> for item in arr:
... if item not in counts:
... new.append((item, 1))
... counts[item] = 1
... else:
... counts[item]+=1
... new.append((item, counts[item]))
...
>>> new
[('abcd', 1), ('abcdef', 1), ('def', 1), ('abcdef', 2)]
>>>
</code></pre>
| 4 | 2016-08-05T01:30:43Z | [
"python",
"list",
"sorting",
"indexing"
] |
python list - convert to list of list | 38,779,723 | <p>I am hoping someone can point me in the right direction along with any documentation that can provide even more info than just the answer. Here we go, I have a list of strings:</p>
<pre><code>arr = ["abcd","abcdef","def","abcdef"]
</code></pre>
<p>I want to turn that list into a list of lists so that the new element will be its order of appearance</p>
<pre><code>arr = [("abcd",1),("abcdef",1),("def",1),("abcdef",2)]
</code></pre>
<p>the reason for this is because I would like to then sort that list by length of string, and in case any are of identical length, I can use the 2nd element of the list to know which one was first from my original list.</p>
<pre><code>when "abcdef" appears twice, it also contains 1 or 2 in its 2nd element
</code></pre>
<p>hope that makes sense. thanks!</p>
| 1 | 2016-08-05T01:19:23Z | 38,779,841 | <p>this look like a job for <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">Counter</a></p>
<pre><code>>>> from collections import Counter
>>> arr = ["abcd","abcdef","def","abcdef"]
>>> result = []
>>> current_count = Counter()
>>> for x in arr:
current_count[x] += 1
result.append( (x,current_count[x]) )
>>> result
[('abcd', 1), ('abcdef', 1), ('def', 1), ('abcdef', 2)]
>>>
</code></pre>
| 0 | 2016-08-05T01:39:46Z | [
"python",
"list",
"sorting",
"indexing"
] |
python list - convert to list of list | 38,779,723 | <p>I am hoping someone can point me in the right direction along with any documentation that can provide even more info than just the answer. Here we go, I have a list of strings:</p>
<pre><code>arr = ["abcd","abcdef","def","abcdef"]
</code></pre>
<p>I want to turn that list into a list of lists so that the new element will be its order of appearance</p>
<pre><code>arr = [("abcd",1),("abcdef",1),("def",1),("abcdef",2)]
</code></pre>
<p>the reason for this is because I would like to then sort that list by length of string, and in case any are of identical length, I can use the 2nd element of the list to know which one was first from my original list.</p>
<pre><code>when "abcdef" appears twice, it also contains 1 or 2 in its 2nd element
</code></pre>
<p>hope that makes sense. thanks!</p>
| 1 | 2016-08-05T01:19:23Z | 38,779,875 | <p>Python's sort is stable, per <a href="https://docs.python.org/3.3/library/stdtypes.html?highlight=sort#list.sort" rel="nofollow">docs</a>:</p>
<blockquote>
<p>The sort() method is guaranteed to be stable. A sort is stable if it guarantees not to change the relative order of elements that compare equal</p>
</blockquote>
<p>So just sort the list like the deleted answer of @JulienBernu:</p>
<pre><code>>>> sorted(["abcd","abcdeg","def","abcdef"], key=len)
['def', 'abcd', 'abcdeg', 'abcdef']
>>> sorted(["abcd","abcdef","def","abcdeg"], key=len)
['def', 'abcd', 'abcdef', 'abcdeg']
</code></pre>
<p>Note that the items of equal length remain in the original order. You don't need to track it.</p>
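<p>If you still want the <code>(item, occurrence)</code> pairs in the final output, you can combine the two ideas — a running <code>Counter</code> for the occurrence number plus the stable sort for the ordering (a sketch using the question's sample data):</p>

```python
from collections import Counter

arr = ["abcd", "abcdef", "def", "abcdef"]

seen = Counter()
pairs = []
for s in arr:
    seen[s] += 1
    pairs.append((s, seen[s]))

# stable sort: equal-length items keep their original relative order
pairs.sort(key=lambda p: len(p[0]))
assert pairs == [('def', 1), ('abcd', 1), ('abcdef', 1), ('abcdef', 2)]
```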
| 3 | 2016-08-05T01:43:47Z | [
"python",
"list",
"sorting",
"indexing"
] |
summation using List Comprehensions | 38,779,770 | <p>I am trying to perform a sum as following:</p>
<pre><code>list=[b_{i}{j}=SUMMATION(|d_{i}{j}| - |g_{j}{k}|)] or simply
list=[SUMMATION(|d_{i}{j}| - |g_{j}{k}|)]
</code></pre>
<p>for this using list comprehension I am trying following:</p>
<pre><code>d=Function ('d', IntSort(), IntSort(),RealSort())
g=Function ('g', IntSort(), IntSort(),RealSort())
b=Function ('b', IntSort(),RealSort())
drug=[d(i,j)==randint(1,5) for i in range (input) for j in range (input)]
gene=[g(i,j)==randint(1,5) for i in range (input) for j in range (input)]
benefit=[[[(b(i) == b(i) + abs(d(i)(j)) - abs(g(j)(k))) for k in range(j) ] for j in range(i) ] for i in range(input) ]
</code></pre>
<p>but I am getting following error I think my list comprehension is wrong as I am getting following error, any suggestion?</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 28, in <module>
benifit=[ [[(b(i)== b(i)+abs(d(i,j)) - abs(g(j)(k))) for k in range(j)] for j in range(i)] for i in range(input) ]
TypeError: 'int' object is not iterable
</code></pre>
<p>Any help?</p>
| 0 | 2016-08-05T01:29:31Z | 38,779,912 | <p>I think what you want is</p>
<pre><code>benefit=[[[(b(i) == b(i) + abs(d[i][j]) - abs(g[j][k])) for k in range(j) ] for j in range(i) ] for i in range(input) ]
</code></pre>
<p>You cannot iterate over an integer variable, i.e. running <code>f(k) for k in j</code> where <code>j</code> is an int is invalid. You must create an iterable object to iterate over, e.g. <code>f(k) for k in range(j)</code>.</p>
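<p>The <code>TypeError</code> from the traceback can be reproduced in isolation with plain ints (a minimal illustration, no z3py involved):</p>

```python
j = 3

try:
    [k for k in j]               # iterating an int raises TypeError
    raise AssertionError("expected a TypeError")
except TypeError:
    pass                         # 'int' object is not iterable

assert [k for k in range(j)] == [0, 1, 2]   # range(j) is iterable
```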
| 0 | 2016-08-05T01:51:06Z | [
"python",
"python-2.7",
"python-3.x",
"z3py"
] |
summation using List Comprehensions | 38,779,770 | <p>I am trying to perform a sum as following:</p>
<pre><code>list=[b_{i}{j}=SUMMATION(|d_{i}{j}| - |g_{j}{k}|)] or simply
list=[SUMMATION(|d_{i}{j}| - |g_{j}{k}|)]
</code></pre>
<p>for this using list comprehension I am trying following:</p>
<pre><code>d=Function ('d', IntSort(), IntSort(),RealSort())
g=Function ('g', IntSort(), IntSort(),RealSort())
b=Function ('b', IntSort(),RealSort())
drug=[d(i,j)==randint(1,5) for i in range (input) for j in range (input)]
gene=[g(i,j)==randint(1,5) for i in range (input) for j in range (input)]
benefit=[[[(b(i) == b(i) + abs(d(i)(j)) - abs(g(j)(k))) for k in range(j) ] for j in range(i) ] for i in range(input) ]
</code></pre>
<p>but I am getting following error I think my list comprehension is wrong as I am getting following error, any suggestion?</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 28, in <module>
benifit=[ [[(b(i)== b(i)+abs(d(i,j)) - abs(g(j)(k))) for k in range(j)] for j in range(i)] for i in range(input) ]
TypeError: 'int' object is not iterable
</code></pre>
<p>Any help?</p>
| 0 | 2016-08-05T01:29:31Z | 38,780,895 | <p>I'm not currently convinced your code is doing what you think it's doing. However, I'm not familiar with z3py, so I could be mistaken here.</p>
<p>First, take a look at this snippet.</p>
<pre><code>d=Function ('d', IntSort(), IntSort(),RealSort())
</code></pre>
<p>I assume <code>Function</code> is a callable defined by z3py which produces another callable.</p>
<pre><code>d(i,j)==randint(1,5)
</code></pre>
<p>In this snippet, you're calling your function d with two arguments, then you're comparing it to a random number. It seems like you might think you're assigning the value to a square matrix called d; perhaps I'm wrong. Either way, unless calling d somehow modifies state, you may as well have a 1 in 5 chance of being true, otherwise false. (alternately, if you expect calling d to produce values outside of 1, 2, 3, 4, or 5, the distribution would be different). </p>
<p>Regardless, that line populates the variable <code>drug</code> with a list of booleans whose length is the square of the value of <code>input</code>.</p>
<pre><code>d[i][j]
</code></pre>
<p>This snippet is what makes me think the above. It's the classic way to index into a list of lists.</p>
<pre><code>b(i) == b(i) + abs(d[i][j]) - abs(g[j][k])
</code></pre>
<p>This is very suspicious to me. Unless you somehow overrode the equality operator, this also produces a boolean value. Further, unless calling the function <code>b</code> modifies state or otherwise returns different results for the same arguments, it's strictly equivalent to <code>bool(abs(d[i][j]) - abs(g[j][k]))</code>. This was the kicker that makes me think <strong>you may be confusing equivalence checking with assignment</strong>. The result is <code>benefit</code> will also be a flat list of booleans.</p>
<p>As a final note, I think good style would dictate that you turn your last comprehension inside out to simplify it.</p>
<pre><code>benefit=[[[(b(i) == b(i) + abs(d[i][j]) - abs(g[j][k])) for k in range(j) ] for j in range(i) ] for i in range(input) ]
</code></pre>
<p>should instead read</p>
<pre><code>benefit=[(b(i) == b(i) + abs(d[i][j]) - abs(g[j][k])) for i in range(input) for j in range(i) for k in range(j)]
</code></pre>
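<p>Flattening the comprehension this way preserves the iteration order; the equivalence is easy to check with plain tuples standing in for the z3 expressions (illustrative values only):</p>

```python
n = 5

# nested version: a list of lists of lists
nested = [[[(i, j, k) for k in range(j)] for j in range(i)] for i in range(n)]

# flattened version: one flat list, with the loops read left to right
flat = [(i, j, k) for i in range(n) for j in range(i) for k in range(j)]

# flattening the nested result yields exactly the flat list, in the same order
assert flat == [t for plane in nested for row in plane for t in row]
```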
| 1 | 2016-08-05T04:01:16Z | [
"python",
"python-2.7",
"python-3.x",
"z3py"
] |
summation using List Comprehensions | 38,779,770 | <p>I am trying to perform a sum as following:</p>
<pre><code>list=[b_{i}{j}=SUMMATION(|d_{i}{j}| - |g_{j}{k}|)] or simply
list=[SUMMATION(|d_{i}{j}| - |g_{j}{k}|)]
</code></pre>
<p>for this using list comprehension I am trying following:</p>
<pre><code>d=Function ('d', IntSort(), IntSort(),RealSort())
g=Function ('g', IntSort(), IntSort(),RealSort())
b=Function ('b', IntSort(),RealSort())
drug=[d(i,j)==randint(1,5) for i in range (input) for j in range (input)]
gene=[g(i,j)==randint(1,5) for i in range (input) for j in range (input)]
benefit=[[[(b(i) == b(i) + abs(d(i)(j)) - abs(g(j)(k))) for k in range(j) ] for j in range(i) ] for i in range(input) ]
</code></pre>
<p>but I am getting following error I think my list comprehension is wrong as I am getting following error, any suggestion?</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 28, in <module>
benifit=[ [[(b(i)== b(i)+abs(d(i,j)) - abs(g(j)(k))) for k in range(j)] for j in range(i)] for i in range(input) ]
TypeError: 'int' object is not iterable
</code></pre>
<p>Any help?</p>
| 0 | 2016-08-05T01:29:31Z | 38,781,154 | <p>Anything you can do with a comprehension can also be done (using a few more lines) with nested loops. I can't tell quite what you're trying to accomplish, and also have a slight suspicion that what you actually want to do isn't best done with list comprehensions.</p>
<p>Can you re-write your <code>benefit = ...</code> line just using loops? It'll make it clearer to us what you want to do, and it might just be the right thing to do anyway.</p>
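<p>For comparison, the triple comprehension unrolled into explicit loops would look like this (a sketch with plain nested lists <code>d</code> and <code>g</code> standing in for the z3 functions, and <code>n</code> for the question's <code>input</code>):</p>

```python
n = 4  # stand-in for the question's `input`
d = [[i + j for j in range(n)] for i in range(n)]
g = [[i - j for j in range(n)] for i in range(n)]

benefit = []
for i in range(n):
    for j in range(i):
        for k in range(j):
            benefit.append(abs(d[i][j]) - abs(g[j][k]))

# the equivalent flattened comprehension produces the same list
assert benefit == [abs(d[i][j]) - abs(g[j][k])
                   for i in range(n) for j in range(i) for k in range(j)]
```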
| 0 | 2016-08-05T04:30:47Z | [
"python",
"python-2.7",
"python-3.x",
"z3py"
] |
Python Reportlab VerticalBarChart - add chart title | 38,779,796 | <p>I'm using VerticalBarChart from reportlab.graphics.charts.barcharts, and I can't find a documentation to simply add to a title to the chart's bottom. Please help! </p>
<pre><code>chart = VerticalBarChart()
</code></pre>
| 0 | 2016-08-05T01:33:25Z | 38,785,040 | <p>According to the user guide the axis title attribute is not implemented yet, so no there is no quick solution for this. </p>
<blockquote>
<p><strong>title</strong></p>
<p>Not Implemented Yet. This needs to be like a label, but also
lets you set the text directly. It would have a default
location below the axis.</p>
<p><em>(From the user guide Table 11-5 - XCategoryAxis properties)</em></p>
</blockquote>
<p>The next best solution is probably to add a paragraph under the chart and use a <code>KeepTogether</code> to make sure they will always be on the same page. Like this:</p>
<pre><code>combined_flowable = KeepTogether([chart_flowable, bottom_title_paragraph])
</code></pre>
| 0 | 2016-08-05T08:48:13Z | [
"python",
"reportlab"
] |
Passing a "pointer to a virtual function" as argument in Python | 38,779,876 | <p>Compare the following code in <strong>C++</strong>:</p>
<pre><code>#include <iostream>
#include <vector>
struct A
{
virtual void bar(void) { std::cout << "one" << std::endl; }
};
struct B : public A
{
virtual void bar(void) { std::cout << "two" << std::endl; }
};
void test(std::vector<A*> objs, void (A::*fun)())
{
for (auto o = objs.begin(); o != objs.end(); ++o)
{
A* obj = (*o);
(obj->*fun)();
}
}
int main()
{
std::vector<A*> objs = {new A(), new B()};
test(objs, &A::bar);
}
</code></pre>
<p>and in <strong>Python</strong>:</p>
<pre><code>class A:
def bar(self):
print("one")
class B(A):
def bar(self):
print("two")
def test(objs, fun):
for o in objs:
fun(o)
objs = [A(), B()]
test(objs, A.bar)
</code></pre>
<p>The <strong>C++</strong> code will print:</p>
<pre><code>one
two
</code></pre>
<p>while the <strong>Python</strong> code will print </p>
<pre><code>one
one
</code></pre>
<p>How can I pass "a pointer to a method" and resolve it to the overridden one, achieving the same behavior in Python as in C++?</p>
<p>To add some context and explain why I initially thought about this pattern. I have a tree consisting of nodes that can be subclassed. I would like to create a generic graph traversal function which takes a node of the graph as well as a function which might be overridden in subclasses of graph nodes. The function calculates some value for a node, given values calculated for adjacent nodes. The goal is to return a value calculated for the given node (which requires traversing the whole graph). </p>
| 11 | 2016-08-05T01:43:56Z | 38,780,044 | <p>The following produces the output you want:</p>
<pre><code>class A:
def bar(self):
print("one")
class B(A):
def bar(self):
print("two")
def test(objs, funcname):
noop = lambda: None
for o in objs:
getattr(o, funcname, noop)()
objs = [A(), B()]
test(objs, "bar")
</code></pre>
| 4 | 2016-08-05T02:08:02Z | [
"python",
"c++",
"reference",
"virtual-functions",
"dispatch"
] |
Passing a "pointer to a virtual function" as argument in Python | 38,779,876 | <p>Compare the following code in <strong>C++</strong>:</p>
<pre><code>#include <iostream>
#include <vector>
struct A
{
virtual void bar(void) { std::cout << "one" << std::endl; }
};
struct B : public A
{
virtual void bar(void) { std::cout << "two" << std::endl; }
};
void test(std::vector<A*> objs, void (A::*fun)())
{
for (auto o = objs.begin(); o != objs.end(); ++o)
{
A* obj = (*o);
(obj->*fun)();
}
}
int main()
{
std::vector<A*> objs = {new A(), new B()};
test(objs, &A::bar);
}
</code></pre>
<p>and in <strong>Python</strong>:</p>
<pre><code>class A:
def bar(self):
print("one")
class B(A):
def bar(self):
print("two")
def test(objs, fun):
for o in objs:
fun(o)
objs = [A(), B()]
test(objs, A.bar)
</code></pre>
<p>The <strong>C++</strong> code will print:</p>
<pre><code>one
two
</code></pre>
<p>while the <strong>Python</strong> code will print </p>
<pre><code>one
one
</code></pre>
<p>How can I pass "a pointer to a method" and resolve it to the overridden one, achieving the same behavior in Python as in C++?</p>
<p>To add some context and explain why I initially thought about this pattern. I have a tree consisting of nodes that can be subclassed. I would like to create a generic graph traversal function which takes a node of the graph as well as a function which might be overridden in subclasses of graph nodes. The function calculates some value for a node, given values calculated for adjacent nodes. The goal is to return a value calculated for the given node (which requires traversing the whole graph). </p>
| 11 | 2016-08-05T01:43:56Z | 38,780,855 | <p>Regarding your edit, one thing you could do is use a little wrapper lambda that calls the method you want to reference. This way the method call looks like "regular python code" instead of being something complicated based on string-based access.</p>
<p>In your example, the only part that would need to change is the call to the <code>test</code> function:</p>
<pre><code>test(objs, (lambda x: x.bar()))
</code></pre>
| 9 | 2016-08-05T03:56:17Z | [
"python",
"c++",
"reference",
"virtual-functions",
"dispatch"
] |
Random order of returned values in python dictionary | 38,779,921 | <p>I don't understand this and it's going to bother me until I do. </p>
<p>This python code counts the number of times each character appears in the 'message' variable: </p>
<pre><code>message = 'Some random string of words'
dictionary= {}
for character in message.upper():
dictionary.setdefault(character,0)
dictionary[character] = dictionary[character] + 1
print(dictionary)
</code></pre>
<p>If you run this multiple times, you will notice the counts are returned in seemingly random order each time. Why is this? I would think that the loop should start at the beginning of the character string each time and return the values in a consistent order...but they don't. Is there some element of randomness in the <code>setdefault()</code>, <code>print()</code>, or <code>upper()</code> methods that impacts the order of processing of the string?</p>
| 1 | 2016-08-05T01:51:54Z | 38,779,940 | <p><code>dict</code>s are inherently unordered.</p>
<p>From the <a href="https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects" rel="nofollow">Python docs</a>:</p>
<blockquote>
<p>Keys and values are iterated over in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionaryâs history of insertions and deletions.</p>
</blockquote>
<p><strong>EDIT</strong></p>
<p>An alternative to your code that correctly accomplishes your goal is to use an <code>OrderedCounter</code>:</p>
<pre><code>from collections import Counter, OrderedDict
class OrderedCounter(Counter, OrderedDict):
'Counter that remembers the order elements are first encountered'
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
def __reduce__(self):
return self.__class__, (OrderedDict(self),)
message = 'Some random string of words'
print(OrderedCounter(message.upper()))
</code></pre>
| 2 | 2016-08-05T01:54:32Z | [
"python"
] |
Random order of returned values in python dictionary | 38,779,921 | <p>I don't understand this and it's going to bother me until I do. </p>
<p>This python code counts the number of times each character appears in the 'message' variable: </p>
<pre><code>message = 'Some random string of words'
dictionary= {}
for character in message.upper():
dictionary.setdefault(character,0)
dictionary[character] = dictionary[character] + 1
print(dictionary)
</code></pre>
<p>If you run this multiple times, you will notice the counts are returned in seemingly random order each time. Why is this? I would think that the loop should start at the beginning of the character string each time and return the values in a consistent order...but they don't. Is there some element of randomness in the <code>setdefault()</code>, <code>print()</code>, or <code>upper()</code> methods that impacts the order of processing of the string?</p>
| 1 | 2016-08-05T01:51:54Z | 38,780,037 | <p>The way that <code>dict</code> is implemented is designed for look ups to be quick and efficient. Even as the size of the <code>dict</code> increases. Under the hood this means that the key order may change. </p>
<p>If the order of the keys is important to you, try using an <code>ordereddict</code> from <code>collections</code>.</p>
| 1 | 2016-08-05T02:06:49Z | [
"python"
] |
Random order of returned values in python dictionary | 38,779,921 | <p>I don't understand this and it's going to bother me until I do. </p>
<p>This python code counts the number of times each character appears in the 'message' variable: </p>
<pre><code>message = 'Some random string of words'
dictionary= {}
for character in message.upper():
dictionary.setdefault(character,0)
dictionary[character] = dictionary[character] + 1
print(dictionary)
</code></pre>
<p>If you run this multiple times, you will notice the counts are returned in seemingly random order each time. Why is this? I would think that the loop should start at the beginning of the character string each time and return the values in a consistent order...but they don't. Is there some element of randomness in the <code>setdefault()</code>, <code>print()</code>, or <code>upper()</code> methods that impacts the order of processing of the string?</p>
| 1 | 2016-08-05T01:51:54Z | 38,780,069 | <p>Because of two things:</p>
<ul>
<li>Dictionaries "aren't ordered". You of course get <strong>some</strong> order, but it depends, among other things, on the hash values of the keys.</li>
<li>You use (single-character) strings as keys, and <strong>string hashes are randomized</strong>. If you do <code>print(hash(message))</code> or even just <code>print(hash('c'))</code> then you'll see that that differs from one run to the next as well.</li>
</ul>
<p>So since the order depends on the hashes and the hashes change from one run to the next, of course you can get different orders.</p>
<p>On the other hand, if you repeat it <strong>in the same run</strong>, you'll likely get the same order:</p>
<pre><code>message = 'Some random string of words'
for _ in range(10):
dictionary= {}
for character in message:
dictionary.setdefault(character,0)
dictionary[character] = dictionary[character] + 1
print(dictionary)
</code></pre>
<p>I just ran that and it printed the exact same order all ten times, as expected. Then I ran it again, and it printed a different order, but again all ten times the same. As expected.</p>
| 3 | 2016-08-05T02:10:42Z | [
"python"
] |
Random order of returned values in python dictionary | 38,779,921 | <p>I don't understand this and it's going to bother me until I do. </p>
<p>This python code counts the number of times each character appears in the 'message' variable: </p>
<pre><code>message = 'Some random string of words'
dictionary= {}
for character in message.upper():
dictionary.setdefault(character,0)
dictionary[character] = dictionary[character] + 1
print(dictionary)
</code></pre>
<p>If you run this multiple times, you will notice the counts are returned in seemingly random order each time. Why is this? I would think that the loop should start at the beginning of the character string each time and return the values in a consistent order...but they don't. Is there some element of randomness in the <code>setdefault()</code>, <code>print()</code>, or <code>upper()</code> methods that impacts the order of processing of the string?</p>
| 1 | 2016-08-05T01:51:54Z | 38,780,110 | <p>This happens due to security. When you're writing any application where external user can provide data which ends up in a dictionary, you need to make sure they don't know what the result of hashing will be. If they do, they can make sure that every new entry they provide will hash to the same bin. When they do that, you end up with your "amortized <code>O(1)</code>" retrievals taking <code>O(n)</code> instead, because every <code>get()</code> from a dictionary will get the same bin and will have to traverse all items in it. (or possibly longer considering other processing of the request)</p>
<p>Have a look at <a href="https://131002.net/siphash/siphashdos_appsec12_slides.pdf" rel="nofollow">https://131002.net/siphash/siphashdos_appsec12_slides.pdf</a> for some more info.</p>
<p>Almost all languages prevent this by generating a random number at startup and using that as the hash seed, rather than starting from some predefined number like <code>0</code>.</p>
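<p>The degradation itself is easy to see with a toy key type whose hashes all collide (an illustration only, not the actual attack):</p>

```python
class BadKey:
    """Every instance hashes to the same bucket."""
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 0          # all keys collide into one bucket
    def __eq__(self, other):
        return self.v == other.v

d = {BadKey(i): i for i in range(100)}
# lookups still work, but each one must walk the collision chain: O(n)
assert d[BadKey(50)] == 50
assert len(d) == 100
```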
| 2 | 2016-08-05T02:15:59Z | [
"python"
] |
How to make axis tick labels visible on the other side of the plot in gridspec? | 38,779,933 | <p>Plotting my favourite example dataframe, which looks like this:</p>
<pre><code> x val1 val2 val3
0 0.0 10.0 NaN NaN
1 0.5 10.5 NaN NaN
2 1.0 11.0 NaN NaN
3 1.5 11.5 NaN 11.60
4 2.0 12.0 NaN 12.08
5 2.5 12.5 12.2 12.56
6 3.0 13.0 19.8 13.04
7 3.5 13.5 13.3 13.52
8 4.0 14.0 19.8 14.00
9 4.5 14.5 14.4 14.48
10 5.0 NaN 19.8 14.96
11 5.5 15.5 15.5 15.44
12 6.0 16.0 19.8 15.92
13 6.5 16.5 16.6 16.40
14 7.0 17.0 19.8 18.00
15 7.5 17.5 17.7 NaN
16 8.0 18.0 19.8 NaN
17 8.5 18.5 18.8 NaN
18 9.0 19.0 19.8 NaN
19 9.5 19.5 19.9 NaN
20 10.0 20.0 19.8 NaN
</code></pre>
<p>I have two subplots, for some other reasons it is best for me to use gridspec. The plotting code is as follows (it is quite comprehensive, so I would like to avoid major changes in the code that otherwise works perfectly and just doesn't do one unimportant detail):</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
import matplotlib as mpl
df = pd.read_csv('H:/DocumentsRedir/pokus/dataframe.csv', delimiter=',')
# setting limits for x and y
ylimit=(0,10)
yticks1=np.arange(0,11,1)
xlimit1=(10,20)
xticks1 = np.arange(10,21,1)
# general plot formatting (axes colour, background etc.)
plt.style.use('ggplot')
plt.rc('axes',edgecolor='black')
plt.rc('axes', facecolor = 'white')
plt.rc('grid', color = 'grey')
plt.rc('grid', alpha = 0.3) # alpha is percentage of transparency
colours = ['g','b','r']
title1 = 'The plot'
# GRIDSPEC INTRO - rows, cols, distance of individual plots
fig = plt.figure(figsize=(6,4))
gs=gridspec.GridSpec(1,2, hspace=0.15, wspace=0.08,width_ratios=[1,1])
## SUBPLOT of GRIDSPEC with lines
# the first plot
axes1 = plt.subplot(gs[0,0])
for count, vals in enumerate(df.columns.values[1:]):
X = np.asarray(df[vals])
h = vals
p1 = plt.plot(X,df.index,color=colours[count],linestyle='-',linewidth=1.5,label=h)
# formatting
p1 = plt.ylim(ylimit)
p1 = plt.yticks(yticks1, yticks1, rotation=0)
p1 = axes1.yaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p1 = plt.setp(axes1.get_yticklabels(),fontsize=8)
p1 = plt.gca().invert_yaxis()
p1 = plt.ylabel('x [unit]', fontsize=14)
p1 = plt.xlabel("Value [unit]", fontsize=14)
p1 = plt.tick_params('both', length=5, width=1, which='minor', direction = 'in')
p1 = axes1.xaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p1 = plt.xlim(xlimit1)
p1 = plt.xticks(xticks1, xticks1, rotation=0)
p1 = plt.setp(axes1.get_xticklabels(),fontsize=8)
p1 = plt.legend(loc='best',fontsize = 8, ncol=2) #
# the second plot (something random)
axes2 = plt.subplot(gs[0,1])
for count, vals in enumerate(df.columns.values[1:]):
nonans = df[vals].dropna()
result=nonans-0.5
p2 = plt.plot(result,nonans.index,color=colours[count],linestyle='-',linewidth=1.5)
p2 = plt.ylim(ylimit)
p2 = plt.yticks(yticks1, yticks1, rotation=0)
p2 = axes2.yaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p2 = plt.gca().invert_yaxis()
p2 = plt.xlim(xlimit1)
p2 = plt.xticks(xticks1, xticks1, rotation=0)
p2 = axes2.xaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p2 = plt.setp(axes2.get_xticklabels(),fontsize=8)
p2 = plt.xlabel("Other value [unit]", fontsize=14)
p2 = plt.tick_params('x', length=5, width=1, which='minor', direction = 'in')
p2 = plt.setp(axes2.get_yticklabels(), visible=False)
fig.suptitle(title1, size=16)
plt.show()
</code></pre>
<p>However, is it possible to show the y tick labels of the second subplot on the right hand side? The current code produces this:</p>
<p><a href="http://i.stack.imgur.com/7BBjL.png" rel="nofollow"><img src="http://i.stack.imgur.com/7BBjL.png" alt="enter image description here"></a></p>
<p>And I would like to know if there is an easy way to get this:
<a href="http://i.stack.imgur.com/PutvW.png" rel="nofollow"><img src="http://i.stack.imgur.com/PutvW.png" alt="enter image description here"></a></p>
| 0 | 2016-08-05T01:53:22Z | 38,780,417 | <p>Try something like:</p>
<pre><code>axes2.yaxis.tick_right()
</code></pre>
<p>See also <a href="http://stackoverflow.com/questions/10354397/python-matplotlib-y-axis-labels-on-right-side-of-plot">Python Matplotlib Y-Axis Labels on Right Side of Plot</a>.</p>
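<p>In the question's code that means calling it on <code>axes2</code> and dropping the <code>visible=False</code> line. A minimal standalone sketch (hypothetical axes, using the non-interactive Agg backend):</p>

```python
import matplotlib
matplotlib.use("Agg")          # headless backend, just for the example
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax2.yaxis.tick_right()                  # ticks + tick labels on the right
ax2.yaxis.set_label_position("right")   # move the y-axis label too, if wanted

assert ax2.yaxis.get_ticks_position() == "right"
```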
| 2 | 2016-08-05T03:01:37Z | [
"python",
"matplotlib"
] |
How to make axis tick labels visible on the other side of the plot in gridspec? | 38,779,933 | <p>Plotting my favourite example dataframe, which looks like this:</p>
<pre><code> x val1 val2 val3
0 0.0 10.0 NaN NaN
1 0.5 10.5 NaN NaN
2 1.0 11.0 NaN NaN
3 1.5 11.5 NaN 11.60
4 2.0 12.0 NaN 12.08
5 2.5 12.5 12.2 12.56
6 3.0 13.0 19.8 13.04
7 3.5 13.5 13.3 13.52
8 4.0 14.0 19.8 14.00
9 4.5 14.5 14.4 14.48
10 5.0 NaN 19.8 14.96
11 5.5 15.5 15.5 15.44
12 6.0 16.0 19.8 15.92
13 6.5 16.5 16.6 16.40
14 7.0 17.0 19.8 18.00
15 7.5 17.5 17.7 NaN
16 8.0 18.0 19.8 NaN
17 8.5 18.5 18.8 NaN
18 9.0 19.0 19.8 NaN
19 9.5 19.5 19.9 NaN
20 10.0 20.0 19.8 NaN
</code></pre>
<p>I have two subplots, for some other reasons it is best for me to use gridspec. The plotting code is as follows (it is quite comprehensive, so I would like to avoid major changes in the code that otherwise works perfectly and just doesn't do one unimportant detail):</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
import matplotlib as mpl
df = pd.read_csv('H:/DocumentsRedir/pokus/dataframe.csv', delimiter=',')
# setting limits for x and y
ylimit=(0,10)
yticks1=np.arange(0,11,1)
xlimit1=(10,20)
xticks1 = np.arange(10,21,1)
# general plot formatting (axes colour, background etc.)
plt.style.use('ggplot')
plt.rc('axes',edgecolor='black')
plt.rc('axes', facecolor = 'white')
plt.rc('grid', color = 'grey')
plt.rc('grid', alpha = 0.3) # alpha is percentage of transparency
colours = ['g','b','r']
title1 = 'The plot'
# GRIDSPEC INTRO - rows, cols, distance of individual plots
fig = plt.figure(figsize=(6,4))
gs=gridspec.GridSpec(1,2, hspace=0.15, wspace=0.08,width_ratios=[1,1])
## SUBPLOT of GRIDSPEC with lines
# the first plot
axes1 = plt.subplot(gs[0,0])
for count, vals in enumerate(df.columns.values[1:]):
X = np.asarray(df[vals])
h = vals
p1 = plt.plot(X,df.index,color=colours[count],linestyle='-',linewidth=1.5,label=h)
# formatting
p1 = plt.ylim(ylimit)
p1 = plt.yticks(yticks1, yticks1, rotation=0)
p1 = axes1.yaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p1 = plt.setp(axes1.get_yticklabels(),fontsize=8)
p1 = plt.gca().invert_yaxis()
p1 = plt.ylabel('x [unit]', fontsize=14)
p1 = plt.xlabel("Value [unit]", fontsize=14)
p1 = plt.tick_params('both', length=5, width=1, which='minor', direction = 'in')
p1 = axes1.xaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p1 = plt.xlim(xlimit1)
p1 = plt.xticks(xticks1, xticks1, rotation=0)
p1 = plt.setp(axes1.get_xticklabels(),fontsize=8)
p1 = plt.legend(loc='best',fontsize = 8, ncol=2) #
# the second plot (something random)
axes2 = plt.subplot(gs[0,1])
for count, vals in enumerate(df.columns.values[1:]):
nonans = df[vals].dropna()
result=nonans-0.5
p2 = plt.plot(result,nonans.index,color=colours[count],linestyle='-',linewidth=1.5)
p2 = plt.ylim(ylimit)
p2 = plt.yticks(yticks1, yticks1, rotation=0)
p2 = axes2.yaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p2 = plt.gca().invert_yaxis()
p2 = plt.xlim(xlimit1)
p2 = plt.xticks(xticks1, xticks1, rotation=0)
p2 = axes2.xaxis.set_minor_locator(mpl.ticker.MultipleLocator(0.1))
p2 = plt.setp(axes2.get_xticklabels(),fontsize=8)
p2 = plt.xlabel("Other value [unit]", fontsize=14)
p2 = plt.tick_params('x', length=5, width=1, which='minor', direction = 'in')
p2 = plt.setp(axes2.get_yticklabels(), visible=False)
fig.suptitle(title1, size=16)
plt.show()
</code></pre>
<p>However, is it possible to show the y tick labels of the second subplot on the right hand side? The current code produces this:</p>
<p><a href="http://i.stack.imgur.com/7BBjL.png" rel="nofollow"><img src="http://i.stack.imgur.com/7BBjL.png" alt="enter image description here"></a></p>
<p>And I would like to know if there is an easy way to get this:
<a href="http://i.stack.imgur.com/PutvW.png" rel="nofollow"><img src="http://i.stack.imgur.com/PutvW.png" alt="enter image description here"></a></p>
| 0 | 2016-08-05T01:53:22Z | 38,987,039 | <p>No, ok, found out it is not precisely what I wanted.
I want the TICKS to be on BOTH sides, just the LABELS to be on the right. The solution above removes my ticks from the left side of the subplot, which doesn't look good. However, this <a href="http://stackoverflow.com/a/20481365/5553319">answer</a> seems to give the right solution :)</p>
<p>To sum up: to get the ticks on both sides and the labels on the right, this is what fixes it:</p>
<pre><code>axes2.yaxis.tick_right()
axes2.yaxis.set_ticks_position('both')
</code></pre>
<p>And if you need the same for the x axis, it's <code>axes2.xaxis.tick_top()</code></p>
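<p>A minimal, self-contained sketch of that combination (toy data and plain <code>pyplot</code> here instead of the question's gridspec setup):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([10, 15, 20], [0, 5, 10])
ax2.plot([11, 14, 19], [0, 5, 10])

ax2.yaxis.tick_right()                 # move the right subplot's y tick LABELS to the right
ax2.yaxis.set_ticks_position("both")   # but keep the tick MARKS on both sides

fig.savefig("ticks_both_sides.png")
```

<p>The key detail is the order: <code>tick_right()</code> moves both ticks and labels, and <code>set_ticks_position('both')</code> then re-enables the tick marks on both sides without touching the labels.</p>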
| 0 | 2016-08-17T02:04:19Z | [
"python",
"matplotlib"
] |
Collection comparison is reflexive, yet does not short circuit. Why? | 38,779,970 | <p>In python, the built in collections compare elements with the explicit assumption that they are reflexive:</p>
<blockquote>
<p>In enforcing reflexivity of elements, <strong>the comparison of collections assumes that for a collection element x, x == x is always true</strong>. Based on that assumption, element identity is compared first, and element comparison is performed only for distinct elements. </p>
</blockquote>
<p>Logically, this means that for any list <code>L</code>, <code>L == L</code> must be <code>True</code>. Given this, why doesn't the implementation check for identity to short circuit the evaluation?</p>
<pre><code>In [1]: x = list(range(10000000))
In [2]: y = list(range(int(len(x)) // 10))
In [3]: z = [1]
# evaluation time likes O(N)
In [4]: %timeit x == x
10 loops, best of 3: 21.8 ms per loop
In [5]: %timeit y == y
100 loops, best of 3: 2.2 ms per loop
In [6]: %timeit z == z
10000000 loops, best of 3: 36.4 ns per loop
</code></pre>
<p>Clearly, child classes could choose to make an identity check, and clearly an identity check would add a very small overhead to every such comparison. </p>
<p>Was a historical decision explicitly made <em>not</em> to make such a check in the built in sequences to avoid this expense?</p>
| 6 | 2016-08-05T01:59:20Z | 38,780,091 | <p>While I'm not privy to the developers' thinking, my guess is that they might have felt comparing <code>L == L</code> does not happen often enough to warrant a special check, and moreover, the user can always use <code>(L is L) or (L==L)</code> to build a
short-circuiting check himself if he deems that advantageous.</p>
<pre><code>In [128]: %timeit (x is x) or (x == x)
10000000 loops, best of 3: 36.1 ns per loop
In [129]: %timeit (y is y) or (y == y)
10000000 loops, best of 3: 34.8 ns per loop
</code></pre>
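<p>In CPython you can watch that element-level identity check in action with <code>float('nan')</code>, the canonical non-reflexive value:</p>

```python
nan = float('nan')

# NaN is deliberately non-reflexive under ==
assert nan != nan

# yet a list holding that same NaN object compares equal to itself,
# because collections check element *identity* before element equality
assert [nan] == [nan]

# two *distinct* NaN objects fail both the identity and equality checks
assert [float('nan')] != [float('nan')]
```

<p>So the short circuit the question asks about does exist, just one level down: it is applied per element, not to the container as a whole.</p>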
| 2 | 2016-08-05T02:13:58Z | [
"python"
] |
XML parsing issue in Python using xml.etree or Dom API | 38,780,126 | <p>I have the following simple code that is parsing an XML file.
The issue is that if an attribute name in the XML file contains ":" (a namespace prefix), I get an error; without it, there is no issue. It happens when I have ":" between "junos" and "style"; when I remove the ":" from the XML it works perfectly.
Please advise.</p>
<p><strong>Fail with this:</strong></p>
<pre><code><interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" **junos:style**="brief">
</code></pre>
<p><strong>Works with This:</strong></p>
<pre><code><interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" **junosstyle**="brief">
</code></pre>
<p>Python Script:</p>
<pre><code>from xml.dom.minidom import parse
import xml.dom.minidom
DOMTree = xml.dom.minidom.parse("test.xml")
collection = DOMTree.documentElement
if collection.hasAttribute("xmlns"):
print "Root element : %s" % collection.getAttribute("xmlns")
Interfaces = collection.getElementsByTagName("logical-interface")
for rname in Interfaces:
print "*****Interface*****"
rtype = rname.getElementsByTagName('name')[0]
print "Type: %s" % rtype.childNodes[0].data
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 48, in <module>
DOMTree = xml.dom.minidom.parse("test.xml")
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/minidom.py", line 1921, in parse
return expatbuilder.parse(file)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 924, in parse
result = builder.parseFile(fp)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 207, in parseFile
parser.Parse(buffer, 0)
xml.parsers.expat.ExpatError: unbound prefix: line 2, column 0
</code></pre>
<p>It works not having ":" in XML between "Junos and Style"</p>
<p>Here is complete XML:</p>
<pre><code><?xml version="1.0"?>
<interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" junos:style="brief">
<logical-interface>
<name>reth4.10</name>
<description>
Test description
</description>
<if-config-flags>
<iff-snmp-traps/>
<internal-flags>
0x0
</internal-flags>
</if-config-flags>
<encapsulation>
ENET2
</encapsulation>
<filter-information>
</filter-information>
<logical-interface-zone-name>
Test2
</logical-interface-zone-name>
<allowed-host-inbound-traffic>
<inbound-ping/>
</allowed-host-inbound-traffic>
<address-family>
<address-family-name>
inet
</address-family-name>
<interface-address>
</interface-address>
</address-family>
<address-family>
<address-family-name>
multiservice
</address-family-name>
</address-family>
</logical-interface>
</interface-information>
</code></pre>
| 1 | 2016-08-05T02:19:02Z | 38,781,070 | <p>Just found what the issue is.</p>
<p>I had to define xmlns:junos in XML.</p>
<p></p>
<p>Don't know why but somehow i had omitted this line in my XML. I think it happened while i was copy pasting the XML.</p>
<p>Thanks for reply.</p>
| 1 | 2016-08-05T04:21:54Z | [
"python",
"xml"
] |
XML parsing issue in Python using xml.etree or Dom API | 38,780,126 | <p>I have the following simple code that is parsing an XML file.
The issue is that if an attribute name in the XML file contains ":" (a namespace prefix), I get an error; without it, there is no issue. It happens when I have ":" between "junos" and "style"; when I remove the ":" from the XML it works perfectly.
Please advise.</p>
<p><strong>Fail with this:</strong></p>
<pre><code><interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" **junos:style**="brief">
</code></pre>
<p><strong>Works with This:</strong></p>
<pre><code><interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" **junosstyle**="brief">
</code></pre>
<p>Python Script:</p>
<pre><code>from xml.dom.minidom import parse
import xml.dom.minidom
DOMTree = xml.dom.minidom.parse("test.xml")
collection = DOMTree.documentElement
if collection.hasAttribute("xmlns"):
print "Root element : %s" % collection.getAttribute("xmlns")
Interfaces = collection.getElementsByTagName("logical-interface")
for rname in Interfaces:
print "*****Interface*****"
rtype = rname.getElementsByTagName('name')[0]
print "Type: %s" % rtype.childNodes[0].data
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 48, in <module>
DOMTree = xml.dom.minidom.parse("test.xml")
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/minidom.py", line 1921, in parse
return expatbuilder.parse(file)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 924, in parse
result = builder.parseFile(fp)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 207, in parseFile
parser.Parse(buffer, 0)
xml.parsers.expat.ExpatError: unbound prefix: line 2, column 0
</code></pre>
<p>It works not having ":" in XML between "Junos and Style"</p>
<p>Here is complete XML:</p>
<pre><code><?xml version="1.0"?>
<interface-information xmlns="http://xml.juniper.net/junos/12.1X47/junos-interface" junos:style="brief">
<logical-interface>
<name>reth4.10</name>
<description>
Test description
</description>
<if-config-flags>
<iff-snmp-traps/>
<internal-flags>
0x0
</internal-flags>
</if-config-flags>
<encapsulation>
ENET2
</encapsulation>
<filter-information>
</filter-information>
<logical-interface-zone-name>
Test2
</logical-interface-zone-name>
<allowed-host-inbound-traffic>
<inbound-ping/>
</allowed-host-inbound-traffic>
<address-family>
<address-family-name>
inet
</address-family-name>
<interface-address>
</interface-address>
</address-family>
<address-family>
<address-family-name>
multiservice
</address-family-name>
</address-family>
</logical-interface>
</interface-information>
</code></pre>
| 1 | 2016-08-05T02:19:02Z | 38,783,181 | <p>Adding the definition for <strong><em>junos</em></strong> apparently resolves the parsing issue.</p>
<pre><code><rpc-reply xmlns:junos="http://xml.juniper.net/junos/12.1X47/junos">
</code></pre>
<p>I have used the following working sample for <a href="http://pastebin.com/peJVU6Zj" rel="nofollow">reference</a>. </p>
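<p>A stdlib-only reduction of both cases (document trimmed to a single element for brevity):</p>

```python
from xml.dom.minidom import parseString
from xml.parsers.expat import ExpatError

# prefix used without any xmlns:junos declaration -> "unbound prefix"
bad = '<interface-information junos:style="brief"/>'
failed = False
try:
    parseString(bad)
except ExpatError:
    failed = True
assert failed

# with the declaration the very same attribute parses fine
good = ('<interface-information '
        'xmlns:junos="http://xml.juniper.net/junos/12.1X47/junos" '
        'junos:style="brief"/>')
doc = parseString(good)
print(doc.documentElement.getAttribute('junos:style'))
```

<p>minidom performs namespace processing by default, which is why the undeclared prefix fails exactly as in the question's traceback.</p>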
| 0 | 2016-08-05T07:06:35Z | [
"python",
"xml"
] |
Not getting video from cv2.VideoCapture from Parrot AR Drone | 38,780,140 | <p>I am not getting video from <code>cv2.VideoCapture</code> from Parrot AR Drone: <code>ret</code> is always <code>False</code></p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture("tcp://192.168.1.1:5555")
ret, frame = cap.read()
print ret
cap.release()
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-08-05T02:21:00Z | 38,780,350 | <pre><code>I am not used to Python, but If it was in Windows I will first check whether cap is getting frames from the given IP Address using:
if(!cap.IsOpened())
{
cerr <<"No video Frames were read, please check your IP and Port" <<endl;
return -1;
}
This will easily help me know whether I am fetching from the Drone or not.
//Secondly, we have to check for the case where we are able to fetch from the drone..
while(true)
{
cap >>frame;
char c = waitkey(10);
if( c== 27) break
}
</code></pre>
| 0 | 2016-08-05T02:51:00Z | [
"python",
"opencv",
"ar.drone"
] |
OpenSSL.crypto.X509.sign() throws " 'bytes' object has no attribute 'encode' " | 38,780,150 | <p>So I'm trying to use the OpenSSL crypto module to generate a new CA certificate with this code:</p>
<pre><code>#warning: this block is background information, probably not
#where my real problem is
#generate the key pair
key=OpenSSL.crypto.PKey()
key.generate_key(OpenSSL.crypto.TYPE_RSA,2048)
#print the private and public keys as PEMs
print(codecs.decode(OpenSSL.crypto.dump_publickey(OpenSSL.crypto.FILETYPE_PEM,key),'utf8'))
print(codecs.decode(OpenSSL.crypto.dump_privatekey(OpenSSL.crypto.FILETYPE_PEM,key),'utf8'))
#generate a new x509 certificate
ca=OpenSSL.crypto.X509()
#fill it with goodies
ca.set_version(3)
ca.set_serial_number(1)
ca.get_subject().CN = "CA.test.com"
ca.gmtime_adj_notBefore(0)
ca.gmtime_adj_notAfter(60 * 60 * 24 * 365 * 10)
ca.set_issuer(ca.get_subject())
ca.set_pubkey(key)
#print the new certificate as a PEM
print(codecs.decode(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM,ca),'utf8'))
</code></pre>
<p>The certificate that prints out decodes OK at the <a href="https://www.sslshopper.com/certificate-decoder.html" rel="nofollow">SSLShopper certificate decoder</a> so I'm feeling pretty confident about that part. The trouble really starts when I try to sign the certificate with </p>
<pre><code>ca.sign(key, 'sha1')
</code></pre>
<p>because I get an " expected type 'bytes', got 'str' instead " from the IDE. Check the <a href="http://www.pyopenssl.org/en/stable/api/crypto.html#OpenSSL.crypto.X509.sign" rel="nofollow">OpenSSL.crypto.X509.sign()</a> documentation and confirm it really expects a bytes object, switch to </p>
<pre><code>digestname='sha1'.encode('utf-8')
ca.sign(key, digestname)
</code></pre>
<p>and I get an " AttributeError: 'bytes' object has no attribute 'encode' " exception. Stepping through the code I find the exception is thrown in OpenSSL._util.byte_string() because</p>
<pre><code>if PY3:
def byte_string(s):
return s.encode("charmap")
else:
def byte_string(s):
return s
</code></pre>
<p>where PY3=True and s={bytes}b'sha1', which of course has no .encode method.</p>
<p>Thus began my demoralizing 'bytes' vs 'str' struggle. I'd like to think I'm not the only one to be having this problem but my very best Google-fu has convinced me otherwise. At this point I don't even know what to go read about to get this one figured out.</p>
| 1 | 2016-08-05T02:22:27Z | 38,885,306 | <p>It turns out that my IDE (PyCharm) was leading me astray. <code>ca.sign(key, 'sha1')</code> is really the correct way to do it. Even though PyCharm gives a type error program execution flows right through the statement and the output is correct.</p>
| 0 | 2016-08-10T23:54:16Z | [
"python",
"python-3.x",
"pyopenssl"
] |
How to use a dict for resampling a multindex pandas data frame? (>0.18.0) | 38,780,181 | <p>Before pandas 0.18.0 I used to be able to do resample my deeply pivoted table like so:</p>
<pre><code>pivot_df = SoilSensorDf.pivot(status_df, ['A', 'B', 'C'])
resample_methods = map({'A': 'sum',
'B': 'mean',
'C': 'min'}.get,
[x[0] for x in pivot_df.columns])
resample_method_tuple_list = zip(pivot_df.columns, resample_methods)
resample_dict = dict(resample_method_tuple_list)
pivot_df.resample('D', how=resample_dict)
</code></pre>
<p>But with pandas 0.18.1, the docs suggest I do it this way</p>
<pre><code>pivot_df.resample('D').agg(resample_dict)
</code></pre>
<p>But this gives me the following error which I can't seem to figure out how to resolve. Anyone have any ideas?</p>
<pre><code>File "/.../lib/python2.7/site-packages/pandas/tseries/resample.py", line 293, in aggregate
result, how = self._aggregate(arg, *args, **kwargs)
File "/.../lib/python2.7/site-packages/pandas/core/base.py", line 545, in _aggregate
result = _agg(arg, _agg_1dim)
File "/.../lib/python2.7/site-packages/pandas/core/base.py", line 496, in _agg
result[fname] = func(fname, agg_how)
File "/.../lib/python2.7/site-packages/pandas/core/base.py", line 475, in _agg_1dim
colg = self._gotitem(name, ndim=1, subset=subset)
File "/.../lib/python2.7/site-packages/pandas/tseries/resample.py", line 352, in _gotitem
return grouped[key]
File "/.../lib/python2.7/site-packages/pandas/core/base.py", line 330, in __getitem__
if len(self.obj.columns.intersection(key)) != len(key):
File "/.../lib/python2.7/site-packages/pandas/indexes/multi.py", line 2031, in intersection
other, result_names = self._convert_can_do_setop(other)
File "/.../lib/python2.7/site-packages/pandas/indexes/multi.py", line 2095, in _convert_can_do_setop
raise TypeError(msg)
TypeError: other must be a MultiIndex or a list of tuples
</code></pre>
<p>Example to demonstrate:</p>
<pre><code>df = pd.DataFrame(np.random.randint(0, 100, (10, 3)), columns=list('ABC'),
index=pd.date_range('2016-01-01', freq='1400T', periods=10))
df['D'] = ['x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y']
df.index.name = "timestamp"
df_pivot = pd.pivot_table(df.reset_index(), values=['A', 'B'], index="timestamp",
columns=['C', 'D'])
resample_methods = map({'A': 'sum', 'B': 'mean'}.get, [x[0] for x in df_pivot.columns])
resample_method_tuple_list = zip(df_pivot.columns, resample_methods)
resample_dict = dict(resample_method_tuple_list)
df_pivot.resample('D').agg(resample_dict)
</code></pre>
| 1 | 2016-08-05T02:27:41Z | 38,794,088 | <p>Looks like this is a bug that will be worked on by the pandas team:
<a href="https://github.com/pydata/pandas/issues/13914" rel="nofollow">https://github.com/pydata/pandas/issues/13914</a></p>
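<p>Until that is resolved, one workaround that sidesteps the MultiIndex/dict interaction entirely is to resample each tuple-keyed column on its own and concatenate the pieces. A sketch on a toy frame (the column tuples and values here are invented for illustration):</p>

```python
import numpy as np
import pandas as pd

idx = pd.to_datetime(['2016-01-01 00:00', '2016-01-01 12:00',
                      '2016-01-02 00:00', '2016-01-02 12:00'])
cols = pd.MultiIndex.from_tuples([('A', 'x'), ('B', 'x')])
df_pivot = pd.DataFrame(np.arange(8).reshape(4, 2), index=idx, columns=cols)
resample_dict = {('A', 'x'): 'sum', ('B', 'x'): 'mean'}

# resample column-by-column, then stitch the pieces back together
pieces = {col: df_pivot[col].resample('D').agg(how)
          for col, how in resample_dict.items()}
daily = pd.concat(pieces, axis=1)
print(daily)
```

<p>Each <code>df_pivot[col]</code> is a plain Series, so the per-column aggregation never has to match a dict key against the MultiIndex, and <code>concat</code> rebuilds the tuple column labels at the end.</p>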
| 0 | 2016-08-05T16:40:46Z | [
"python",
"python-2.7",
"pandas",
"pivot-table"
] |
Vanilla Django Query to show 'flattened' User/Group (ManyToMany) Listing | 38,780,190 | <p>I'm trying to write a simple Django query to show all users in all groups within the vanilla django.contrib.auth models:</p>
<pre><code>User.username | Group.name
------------- | -----------
user1 | groupA
user1 | groupB
user2 | groupA
user2 | groupC
user3 | groupA
</code></pre>
<p>I'd like to do this somehow using as 'plain' of Django magic as possible here because the goal is to fit this report into an already existing reporting framework that doesn't readily support custom queries and whatnot.</p>
<p>So I'd like to just do something along the lines of:</p>
<pre><code>from django.contrib.auth import UserGroup
for x in UserGroup.objects.all():
print '%s\t%s' % (x.user.username, x.group.name)
</code></pre>
<p>... if that ManyToMany table tying User and Group objects were accessible through standard Django model import. </p>
| 0 | 2016-08-05T02:28:51Z | 38,780,279 | <p>This will get you a list of all users in all groups:</p>
<pre><code>from django.contrib.auth.models import Group
for group in Group.objects.all():
for user in group.user_set.all():
print '%s\t%s' % (user.username, group.name)
</code></pre>
<p>You can also get the same information in one single query with:</p>
<pre><code>user_groups = Group.objects.values_list('user__username', 'name')
</code></pre>
<p>Which will return a list of tuples like so:</p>
<pre><code>[
('user1', 'GroupA'),
('user1', 'GroupB'),
('user2', 'GroupC'),
...
]
</code></pre>
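<p>The tuples that <code>values_list</code> hands back drop straight into the report format from the question; sketched here with made-up rows standing in for a real queryset:</p>

```python
# shape of what Group.objects.values_list('user__username', 'name') returns
user_groups = [
    ('user1', 'groupA'),
    ('user1', 'groupB'),
    ('user2', 'groupA'),
    ('user2', 'groupC'),
    ('user3', 'groupA'),
]

for username, groupname in user_groups:
    print('%s\t%s' % (username, groupname))
```
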
| 1 | 2016-08-05T02:41:33Z | [
"python",
"django"
] |
Vanilla Django Query to show 'flattened' User/Group (ManyToMany) Listing | 38,780,190 | <p>I'm trying to write a simple Django query to show all users in all groups within the vanilla django.contrib.auth models:</p>
<pre><code>User.username | Group.name
------------- | -----------
user1 | groupA
user1 | groupB
user2 | groupA
user2 | groupC
user3 | groupA
</code></pre>
<p>I'd like to do this somehow using as 'plain' of Django magic as possible here because the goal is to fit this report into an already existing reporting framework that doesn't readily support custom queries and whatnot.</p>
<p>So I'd like to just do something along the lines of:</p>
<pre><code>from django.contrib.auth import UserGroup
for x in UserGroup.objects.all():
print '%s\t%s' % (x.user.username, x.group.name)
</code></pre>
<p>... if that ManyToMany table tying User and Group objects were accessible through standard Django model import. </p>
| 0 | 2016-08-05T02:28:51Z | 38,780,451 | <p>For what it's worth, I also found that:</p>
<pre><code>from django.contrib.auth.models import User, Group
qs = User.objects.raw("""select *, g.name as groupname
from auth_user_groups ug
inner join auth_user u on ug.user_id = u.id
inner join auth_group g on ug.group_id = g.id""")
for x in qs:
print '%s\t%s' % (x.username, x.groupname)
</code></pre>
<p>works as desired and returns an actual queryset full of working 'User' object references. </p>
<p>It certainly doesn't satisfy my original question criteria of wanting do do this "somehow using as 'plain' of Django magic as possible" though so I've accepted solarissmoke's answer. Figured I'd add my hack here as a potential answer for posterity's sake anyway though.</p>
| 0 | 2016-08-05T03:05:17Z | [
"python",
"django"
] |
Can a LP created on a Windows platform be run on a Linux platform? | 38,780,223 | <p>I have a huge MILP in Matlab, which I want to re-program in Gurobi using python language, on a Windows desktop. But after that I want to run it on a super computer which has a Linux os. I know python is cross-platform. Does this mean anything I create in Gurobi on Windows will run on Linux too? If this question is dumb I'm sorry, I just want to know for sure.</p>
| 1 | 2016-08-05T02:33:54Z | 38,780,526 | <p>Yes, you can write Gurobi Python code on one system, then copy it and run it on another. You can go from Windows to Linux, Mac to Windows, etc. Alternately, if you have Gurobi Compute Server, your Windows computer can be a client of your Linux server.</p>
| 0 | 2016-08-05T03:16:07Z | [
"python",
"linux",
"windows",
"matlab",
"gurobi"
] |
Can a LP created on a Windows platform be run on a Linux platform? | 38,780,223 | <p>I have a huge MILP in Matlab, which I want to re-program in Gurobi using python language, on a Windows desktop. But after that I want to run it on a super computer which has a Linux os. I know python is cross-platform. Does this mean anything I create in Gurobi on Windows will run on Linux too? If this question is dumb I'm sorry, I just want to know for sure.</p>
| 1 | 2016-08-05T02:33:54Z | 38,789,742 | <p>You could also export the MILP from Matlab as a <a href="https://en.wikipedia.org/wiki/MPS_(format)" rel="nofollow">mps</a> file, using e.g. the <a href="https://www.mathworks.com/matlabcentral/fileexchange/19618-mps-format-exporting-tool" rel="nofollow">MPS format exporting tool</a> and then load the file to Gurobi on the system of your choice.</p>
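<p>Since MPS is plain text, a file written on Windows loads unchanged on Linux. To give a feel for the format, a tiny hand-written example (minimize x1 + 2*x2 subject to x1 + x2 at most 4, variables non-negative) could look like this; classic MPS is column-position sensitive, but modern readers are generally tolerant of free-format spacing:</p>

```
NAME          TINYLP
ROWS
 N  COST
 L  LIM1
COLUMNS
    X1        COST         1.0   LIM1         1.0
    X2        COST         2.0   LIM1         1.0
RHS
    RHS1      LIM1         4.0
ENDATA
```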
| 0 | 2016-08-05T12:53:33Z | [
"python",
"linux",
"windows",
"matlab",
"gurobi"
] |
I typed python -v in my terminal and something weird happened | 38,780,240 | <p>Thinking I was about to check the version of Python installed on my computer, I typed </p>
<pre><code>python -v
</code></pre>
<p>in my terminal and I got a first line saying </p>
<blockquote>
<p>"installing zipimport hook", but then also a whole bunch of text (probably 50 or so lines of text), among which "import errno # builtin", "import posix # builtin", "import _codecs # builtin", and toward the end "Python 2.7.8 |Anaconda 2.1.0 (x86_64)| (default, Aug 21 2014, 15:21:46)"</p>
</blockquote>
<p>What did I do? And what did that command install?</p>
<p><strong>EDIT</strong>: the <code>v</code> I typed in <code>python -v</code> was a lowercase <code>v</code>. When I now try an uppercase <code>V</code>, I do get the version of Python on my computer.</p>
| 0 | 2016-08-05T02:35:44Z | 38,780,265 | <p>You want <code>python -V</code> (uppercase) or <code>python --version</code>. The lowercase <code>-v</code> means âverboseâ and adds a bunch of diagnostic information to the output that you can safely ignore.</p>
| 5 | 2016-08-05T02:39:13Z | [
"python",
"terminal"
] |
Improving Python Threads Performance based on Resource Locking | 38,780,252 | <p>The difference between Java and Python threads is </p>
<ul>
<li>Java is designed to lock on the resources</li>
<li>Python is designed to lock the thread itself(GIL)</li>
</ul>
<p>So Python's implementation performs better on a machine with a single-core processor. That was fine 10-20 years ago. With today's computing capacity, if we run the same piece of code on a multiprocessor machine, it performs very badly.</p>
<p>Is there any hack to disable the GIL and use resource locking in Python (like the Java implementation)? </p>
<p>P.S. My application is currently running on Python 2.7.12. It is compute intensive with little I/O and network blocking. Assume that I can't use <code>multiprocessing</code> for my use case.</p>
| 0 | 2016-08-05T02:37:26Z | 38,780,573 | <p>I think the most straight way for you, that will give you also a nice performance increase is to use Cython. </p>
<p>Cython is a Python superset that compiles Python-like code to C code (which makes use of the cPython API), and from there to executable. It allows one to optionally type variables, that then can use native C types instead of Python objects - and also allows one direct control of the GIL. </p>
<p>It does support a <code>with nogil:</code> statement in which the <code>with</code> block runs with the GIL turned off - if there are other threads running (you use the normal Python threading library), they won't be blocked while code is running on the marked with block.</p>
<p>Just keep in mind that the GIL is there for a reason: it is thanks to it that global complex objects like lists and dictionaries work without the danger of getting into an inconsistent state between treads. But if your "nogil" blocks restrict themselves to local data structures, you should have no problems.</p>
<p>Check the <a href="http://cython.org/" rel="nofollow">Cython</a> project - and here is an specific example of turning off the GIL:
<a href="https://lbolla.info/blog/2013/12/23/python-threads-cython-gil" rel="nofollow">https://lbolla.info/blog/2013/12/23/python-threads-cython-gil</a></p>
| 1 | 2016-08-05T03:22:45Z | [
"python",
"multithreading"
] |
max of two dates comes in a totally different format | 38,780,269 | <p>I have 2 columns with dates in them, i want to create a 3rd column with the max of these 2 dates:</p>
<pre><code>df['xxx_MaxSettDate'][0]
Out[186]: Timestamp('2017-01-20 00:00:00')
df['yyy_MaxSettDate'][0]
Out[166]: NaT
</code></pre>
<p>here is my max function:</p>
<pre><code> df['MaxSettDate']=df[['xxx_MaxSettDate','yyy_MaxSettDate']].max(axis=1)
</code></pre>
<p>output:</p>
<pre><code>df['MaxSettDate'][0]
Out[187]: 1.4848704e+18
</code></pre>
<p>I want to be able to do operations on this date, such as removing all dates that are less than one month away</p>
<p>so I do this:</p>
<pre><code>onemonthdate = date.today() + timedelta(30)
df = df[(df['MaxSettDate']>onemonthdate)]
</code></pre>
<p>This results in the error:</p>
<pre><code>TypeError: unorderable types: float() > datetime.date()
</code></pre>
<p>Thoughts on how I could achieve this please? I am getting very confused over all the solutions provided; you could also just point me to something I could read to understand the whole dates paradigm in Python better. Thanks very much!</p>
| 0 | 2016-08-05T02:39:48Z | 38,780,347 | <p>UPDATE:</p>
<p>you can convert your <code>MaxSettDate</code> column to datetime first:</p>
<pre><code>df['MaxSettDate'] = pd.to_datetime(df['MaxSettDate'])
</code></pre>
<p>Demo:</p>
<pre><code>In [41]: pd.to_datetime(1.4848704e+18)
Out[41]: Timestamp('2017-01-20 00:00:00')
</code></pre>
<p>OLD amswer:</p>
<p>I would use pandas Timedelta for that:</p>
<pre><code>df = df[df['MaxSettDate'] > pd.Timestamp.now() + pd.Timedelta('30 days')]
</code></pre>
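<p>The mysterious float is just the same instant counted in nanoseconds since the Unix epoch, which the stdlib can confirm:</p>

```python
from datetime import datetime, timezone

ns = 1.4848704e18                          # the raw value seen in MaxSettDate
dt = datetime.fromtimestamp(ns / 1e9, tz=timezone.utc)
print(dt)                                  # 2017-01-20 00:00:00+00:00
```

<p>That is why converting the column back with <code>pd.to_datetime</code> recovers the original timestamp exactly.</p>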
| 1 | 2016-08-05T02:50:42Z | [
"python",
"datetime",
"pandas"
] |
Predicting new data using sklearn after standardizing the training data | 38,780,302 | <p>I am using Sklearn to build a linear regression model (or any other model) with the following steps:</p>
<p>X_train and Y_train are the training data</p>
<ol>
<li><p>Standardize the training data</p>
<pre><code> X_train = preprocessing.scale(X_train)
</code></pre></li>
<li><p>fit the model</p>
<pre><code> model.fit(X_train, Y_train)
</code></pre></li>
</ol>
<p>Once the model is fit with scaled data, how can I predict with new data (either one or more data points at a time) using the fit model?</p>
<p>What I am using is</p>
<ol>
<li><p>Scale the data</p>
<pre><code>NewData_Scaled = preprocessing.scale(NewData)
</code></pre></li>
<li><p>Predict the data</p>
<pre><code>PredictedTarget = model.predict(NewData_Scaled)
</code></pre></li>
</ol>
<p>I think I am missing a transformation function with <code>preprocessing.scale</code> so that I can save it with the trained model and then apply it on the new unseen data? any help please.</p>
| 1 | 2016-08-05T02:43:43Z | 38,782,801 | <p>Take a look at <a href="http://scikit-learn.org/stable/modules/preprocessing.html" rel="nofollow">these docs</a>. </p>
<p>You can use the <code>StandardScaler</code> class of the preprocessing module to remember the scaling of your training data so you can apply it to future values.</p>
<pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[ 1., -1., 2.],
                    [ 2., 0., 0.],
                    [ 0., 1., -1.]])
scaler = StandardScaler().fit(X_train)
</code></pre>
<p><code>scaler</code> has calculated the mean and scaling factor to standardize each feature.</p>
<pre><code>>>>scaler.mean_
array([ 1. ..., 0. ..., 0.33...])
>>>scaler.scale_
array([ 0.81..., 0.81..., 1.24...])
</code></pre>
<p>To apply it to a dataset:</p>
<pre><code>import numpy as np
X_train_scaled = scaler.transform(X_train)
new_data = np.array([[-1., 1., 0.]])   # 2-D: one row of three features
new_data_scaled = scaler.transform(new_data)
>>>new_data_scaled
array([[-2.44..., 1.22..., -0.26...]])
</code></pre>
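<p>To see exactly what the scaler remembers, the same arithmetic can be done by hand with nothing but the stdlib; the numbers below reproduce the <code>mean_</code>, <code>scale_</code> and transformed row shown above:</p>

```python
from statistics import mean, pstdev

X_train = [[1.0, -1.0, 2.0],
           [2.0, 0.0, 0.0],
           [0.0, 1.0, -1.0]]

columns = list(zip(*X_train))
means = [mean(col) for col in columns]      # what StandardScaler stores as mean_
scales = [pstdev(col) for col in columns]   # and as scale_ (population std dev)

def transform(row):
    return [(x - m) / s for x, m, s in zip(row, means, scales)]

print(means)                          # [1.0, 0.0, 0.333...]
print(scales)                         # [0.816..., 0.816..., 1.247...]
print(transform([-1.0, 1.0, 0.0]))    # [-2.449..., 1.224..., -0.267...]
```

<p>So "saving the transformation with the model" really just means saving those two per-feature vectors and reapplying them to any new data.</p>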
| 3 | 2016-08-05T06:44:34Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Python3 src encodings of Emojis | 38,780,324 | <p><strong>I'd like to print emojis from python(3) src</strong></p>
<p>I'm working on a project that analyzes <a href="https://www.facebook.com/help/131112897028467/" rel="nofollow">Facebook Message histories</a> and in the raw htm data file downloaded I find a lot of emojis are displayed as boxes with question marks, as happens when the value can't be displayed. If I copy paste these symbols into terminal as strings, I get values such as <code>\U000fe328</code>. This is also the output I'm getting when I run the htm files through BeautifulSoup and output the data.</p>
<p>I Googled this string (and others), and consistently one of the only sites that comes up with them is iemoji.com, in the case of the above string <a href="http://www.iemoji.com/view/emoji/822/people/face-with-look-of-triumph" rel="nofollow">this page</a>, that lists the string as a Python Src. I want to be able to print out these strings as their corresponding emojis (after all, they were originallly emojis when being messaged), and after looking around I found a mapping of src encodings <a href="https://github.com/suzukitakafumi/emojicodecs/blob/master/emojicodecs/emojidata.py" rel="nofollow">at this page</a>, that mapped the above like strings to emoji string names. I then found <a href="https://github.com/carpedm20/emoji/blob/master/emoji/unicode_codes.py" rel="nofollow">this emoji string names to Unicode</a> list, that for the most part seems to map the emoji names to Unicode. If I try printing out these values, I get good output. Like following</p>
<pre><code>>>> print(u'\U0001F624')
😤
</code></pre>
<p>Is there a way to map these "Python src" encodings to their unicode values? Chaining both libraries would work if not for the fact that the original src mapping is missing around 50% of the unicode values found in the unicode library. And if I do end up having to do that, is there a good way to find the Python Src value of a given emoji? From my testing emoji as strings equal their Unicode, such as <code>'😤' == u'\U0001F624'</code>, but I can't seem to get any sort of relation to <code>\U000fe328</code></p>
| 4 | 2016-08-05T02:47:44Z | 38,782,246 | <p>This has nothing to do with Python. An escape like <code>\U000fe328</code> just contains the hexadecimal representation of a code point, so this one is <code>U+0FE328</code> (which is a private use character).</p>
<p>These days a lot of emoji are assigned to code points, eg. 😤 is <code>U+01F624 FACE WITH LOOK OF TRIUMPH</code>.</p>
<p>Before these were assigned, various programs used various code points in the <a href="https://en.wikipedia.org/wiki/Private_Use_Areas" rel="nofollow">private use ranges</a> to represent emoji. Facebook apparently used the private use character <code>U+0FE328</code>. The mapping from these code points to the standard code points is arbitrary. Some of them may not have a standard equivalent at all.</p>
<p>So what you have to look for is a table which tells you which of these old assignments correspond to which standard code point.</p>
<p>There's <a href="https://github.com/iamcal/php-emoji/" rel="nofollow">php-emoji</a> on GitHub which appears to contain these mappings. But note that this is PHP code, and the characters are represented as UTF-8 (eg. the character above would be <code>"\xf3\xbe\x8c\xa8"</code>).</p>
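<p>A minimal Python sketch of such a translation table. The single entry below is the U+0FE328 to U+01F624 pair from the question; a real remapping would need the full, arbitrary table from a resource like php-emoji:</p>

```python
# Hypothetical partial table: legacy private-use code point -> standard emoji.
# Only the pair from the question is included; extend it from a real mapping.
PUA_TO_STANDARD = {
    0x0FE328: 0x1F624,  # FACE WITH LOOK OF TRIUMPH
}

def remap_emoji(text):
    # str.translate maps code point to code point, leaving unknown ones untouched.
    return text.translate(PUA_TO_STANDARD)

print(remap_emoji('\U000fe328') == '\U0001F624')  # True
```

<p>Since <code>str.translate</code> takes a dict of ordinals, any code point not in the table passes through unchanged.</p>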
| 2 | 2016-08-05T06:08:48Z | [
"python",
"string",
"unicode",
"encoding"
] |
What state_union class is used for | 38,780,440 | <p>Could anyone tell me what this class' return type is, what its public methods are, and what its general application is? Or point me to a place where I can read about it? I couldn't find it in the <a href="http://www.nltk.org/" rel="nofollow">http://www.nltk.org/</a> docs at all, while other classes, like <code>PunktSentenceTokenizer</code>, are present.</p>
| 0 | 2016-08-05T03:03:49Z | 38,780,495 | <p>In a python console exec: </p>
<pre><code>import nltk
help(nltk)
help(nltk.PunktSentenceTokenizer)
</code></pre>
| 0 | 2016-08-05T03:13:06Z | [
"python",
"nltk"
] |
How to use pandas read_excel() for excel file with multi sheets? | 38,780,485 | <p>I have one excel file with many sheets. There is only one column in every sheet, which is column A. I plan to read the excel file with the <code>read_excel()</code> method. Here is the code:</p>
<pre><code>import pandas as PD
ExcelFile = "C:\\AAA.xlsx"
SheetNames = ['0', '1', 'S', 'B', 'U']
# There are five sheets in this excel file. Those are the sheet names.
PageTotal = len(SheetNames)
for Page in range(PageTotal):
df = PD.read_excel(ExcelFile, header=None, squeeze = True, parse_cols = "A" ,sheetname=str(SheetNames[Page]))
print df
#do something with df
</code></pre>
<p>The problem is, the <code>for loop</code> runs only once. By running the second item in the <code>for loop</code> it shows me the following error text:</p>
<pre><code> File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 170, in read_excel
io = ExcelFile(io, engine=engine)
File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 227, in __init__
self.book = xlrd.open_workbook(io)
File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 422, in open_workbook
ragged_rows=ragged_rows,
File "C:\Python27\lib\site-packages\xlrd\xlsx.py", line 824, in open_workbook_2007_xml
x12sst.process_stream(zflo, 'SST')
File "C:\Python27\lib\site-packages\xlrd\xlsx.py", line 432, in process_stream_iterparse
for event, elem in ET.iterparse(stream):
File "<string>", line 103, in next
IndexError: pop from empty stack
</code></pre>
<p>As a beginner I have no idea about this error. Could anybody please help me to correct the codes? Thanks.</p>
UPDATE Question:
<p>If it is because that the excel file contains many formulars and external links, why the <code>for loop</code> could still run its first item? Confused.</p>
| 0 | 2016-08-05T03:10:38Z | 38,780,512 | <p>Referring to the answer here:
<a href="http://stackoverflow.com/questions/26521266/using-pandas-to-pd-read-excel-for-multiple-worksheets-of-the-same-workbook">Using Pandas to pd.read_excel() for multiple worksheets of the same workbook</a></p>
<p>Perhaps you can try this:</p>
<pre><code>import pandas as pd
xls = pd.ExcelFile("C:\\AAA.xlsx")
dfs = []
for x in ['0', '1', 'S', 'B', 'U'] :
dfs.append(xls.parse(x))
</code></pre>
<p>Or this as a dict instead of list so you can easily get a particular sheet out to work with</p>
<pre><code>import pandas as pd
xls = pd.ExcelFile("C:\\AAA.xlsx")
dfs = {}
for x in ['0', '1', 'S', 'B', 'U'] :
dfs[x] = xls.parse(x)
</code></pre>
| 0 | 2016-08-05T03:14:44Z | [
"python",
"pandas"
] |
How to use pandas read_excel() for excel file with multi sheets? | 38,780,485 | <p>I have one excel file with many sheets. There is only one column in every sheet, which is column A. I plan to read the excel file with the <code>read_excel()</code> method. Here is the code:</p>
<pre><code>import pandas as PD
ExcelFile = "C:\\AAA.xlsx"
SheetNames = ['0', '1', 'S', 'B', 'U']
# There are five sheets in this excel file. Those are the sheet names.
PageTotal = len(SheetNames)
for Page in range(PageTotal):
df = PD.read_excel(ExcelFile, header=None, squeeze = True, parse_cols = "A" ,sheetname=str(SheetNames[Page]))
print df
#do something with df
</code></pre>
<p>The problem is, the <code>for loop</code> runs only once. By running the second item in the <code>for loop</code> it shows me the following error text:</p>
<pre><code> File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 170, in read_excel
io = ExcelFile(io, engine=engine)
File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 227, in __init__
self.book = xlrd.open_workbook(io)
File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 422, in open_workbook
ragged_rows=ragged_rows,
File "C:\Python27\lib\site-packages\xlrd\xlsx.py", line 824, in open_workbook_2007_xml
x12sst.process_stream(zflo, 'SST')
File "C:\Python27\lib\site-packages\xlrd\xlsx.py", line 432, in process_stream_iterparse
for event, elem in ET.iterparse(stream):
File "<string>", line 103, in next
IndexError: pop from empty stack
</code></pre>
<p>As a beginner I have no idea about this error. Could anybody please help me to correct the codes? Thanks.</p>
UPDATE Question:
<p>If it is because that the excel file contains many formulars and external links, why the <code>for loop</code> could still run its first item? Confused.</p>
| 0 | 2016-08-05T03:10:38Z | 38,781,007 | <p>Why are you using <code>sheetname=str(SheetNames[Page])</code>?</p>
<p>If I understand your question properly I think what you want is:</p>
<pre><code>import pandas as pd
excel_file = r"C:\\AAA.xlsx"
sheet_names = ['0', '1', 'S', 'B', 'U']
for sheet_name in sheet_names:
df = pd.read_excel(excel_file, header=None, squeeze=True, parse_cols="A", sheetname=sheet_name)
print(df)
#do something with df
</code></pre>
| 0 | 2016-08-05T04:14:59Z | [
"python",
"pandas"
] |
Using the python sh module, how can I show the output of a command as it's happening? | 38,780,563 | <p>The python <code>sh</code> module seem to wait until a command or at least a line is finished before it can show any of the output. How can I show the output of a command as it's happening? </p>
<p>Here's what I've tried so far. This is for git cloning something using <code>sh</code>. </p>
<pre><code>for line in sh.git.clone(url, '--progress', '--recursive', _err_to_out=True, _iter=True):
print(line)
</code></pre>
<p>And this prints every <em>line</em> after it's completed, but doesn't print it out in real time. So when I'm git cloning something, it doesn't show the progress of the clone, since it waits for the line to finish before it returns it. How can I have the <code>sh</code> module print out a command's progress in real time? </p>
| 0 | 2016-08-05T03:21:07Z | 38,780,923 | <p>Use the <code>_out_bufsize</code> parameter.
In my case, works OK with 100...</p>
<pre><code>for line in sh.git.clone(url, '--progress', '--recursive', _err_to_out=True, _iter=True, _out_bufsize=100):
print(line)
</code></pre>
<p><strong>Output:</strong></p>
<blockquote>
<p>Cloning into 'some_project'...<br>
remote: Counting objects: 70796, done. <br>
remote: Compressing objects: 0<br>
remote: Compressing objects: 1% (203/20259)<br>
remote: Compressing objects: 3% (608/20259)<br>
remote: Compressing objects: 5% (1013/20259)<br>
remote: Compressing objects: 7% (1419/20259)<br>
remote: Compressing objects: 9% (1824/20259) </p>
</blockquote>
| 3 | 2016-08-05T04:04:41Z | [
"python"
] |
Matplotlib: how to update the figure title after user has zoomed? | 38,780,641 | <p>Is there any way to <strong>update the title</strong> of a Matplotlib figure after the user has zoomed in? For example, I would like the title to display the exact extends of the x-axis,</p>
<pre><code>import pylab as pl
import numpy as np
x = np.arange(10,step=0.1)
y = np.sin(x)
f = pl.figure()
pl.plot(x,y)
def my_ondraw(ev):
x1,x2 = f.gca().get_xlim() # FIXME value hasn't been updated yet
pl.title("x = [%f, %f]" % (x1,x2))
f.canvas.mpl_connect('draw_event', my_ondraw)
pl.show()
</code></pre>
<p>As noted, my code doesn't get the right values back from get_xlim() because the re-draw hasn't been done at the time my_ondraw is called...</p>
<p>Any suggestions?</p>
<hr>
<p>Modified code that <strong>works</strong> based on Ilya's suggestion below:</p>
<pre><code>import pylab as pl
import numpy as np
x = np.arange(10,step=0.1)
y = np.sin(x)
f = pl.figure()
ax = f.gca()
pl.plot(x,y)
def my_ondraw(ev):
print "my_ondraw: %s" % ev.name
    x1,x2 = f.gca().get_xlim()
pl.title("x = [%f, %f]" % (x1,x2))
ax.callbacks.connect('xlim_changed', my_ondraw)
pl.show()
</code></pre>
| 1 | 2016-08-05T03:30:28Z | 38,780,707 | <p>You can register callback functions on the <code>xlim_changed</code> and <code>ylim_changed</code> events. Try something like this:</p>
<pre><code>def on_xylims_change(axes):
    x1, x2 = axes.get_xlim()  # the limits are already updated inside this callback
    axes.set_title("x = [%f, %f]" % (x1, x2))
fig, ax = pl.subplots(1, 1)
ax.callbacks.connect('xlim_changed', on_xylims_change)
ax.callbacks.connect('ylim_changed', on_xylims_change)
</code></pre>
<p>You can read more about it here: <a href="http://matplotlib.org/users/event_handling.html" rel="nofollow">Event handling and picking in Matplotlib</a>.</p>
| 1 | 2016-08-05T03:39:40Z | [
"python",
"matplotlib"
] |
How can I git clone a repository with python, and get the progress of the clone process? | 38,780,693 | <p>I'd like to be able to git clone a large repository using python, using some library, but importantly I'd like to be able to see the progress of the clone as it's happening. I tried pygit2 and GitPython, but they don't seem to show their progress. Is there another way? </p>
| 0 | 2016-08-05T03:37:54Z | 38,780,917 | <p>You can use <a href="http://gitpython.readthedocs.io/en/stable/reference.html#git.remote.RemoteProgress" rel="nofollow"><code>RemoteProgress</code></a> from <a href="http://gitpython.readthedocs.io/en/stable/" rel="nofollow">GitPython</a>. Here is a crude example:</p>
<pre><code>import git
class Progress(git.remote.RemoteProgress):
def update(self, op_code, cur_count, max_count=None, message=''):
print 'update(%s, %s, %s, %s)'%(op_code, cur_count, max_count, message)
repo = git.Repo.clone_from(
'https://github.com/gitpython-developers/GitPython',
'./git-python',
progress=Progress())
</code></pre>
<p>Or use this <code>update()</code> function for a slightly more refined message format:</p>
<pre><code> def update(self, op_code, cur_count, max_count=None, message=''):
print self._cur_line
</code></pre>
| 0 | 2016-08-05T04:03:44Z | [
"python",
"git"
] |
Tkinter Scrollbar not working | 38,780,724 | <p>I have a piece of tkinter code running on python 3.4 that is a large frame placed in a canvas with a vertical scrollbar, however the scrollbar is grayed out and doesn't seem to be linked to the size of the frame. The code I'm using is essentially:</p>
<pre><code>class EntryWindow:
def __init__(self, master):
self.master = master
self.master.minsize(750, 800)
self.master.maxsize(1000, 800)
self.canvas = tk.Canvas(self.master, borderwidth=0, bg='#ffffff')
self.vsb = tk.Scrollbar(self.master)
self.master_frame = tk.Frame(self.canvas)
self.vsb.pack(side="right", fill='y')
self.canvas.pack(side='left', fill='both', expand=True)
self.canvas.create_window((0,0), window=self.master_frame, anchor='nw', tags='self.master_frame')
self.canvas.config(yscrollcommand=self.vsb.set)
self.master_frame.grid()
###build widgets and place into master_frame
...
</code></pre>
<p>The <code>master_frame</code> is filled with about 36 widgets placed using the grid geometry manager and reaches a vertical height of about 2000 pixels, but the scrollbar doesn't work. </p>
<p>I saw a post about using ttk scrollbar, but when I imported that I couldn't get it to work. The input statement I used was:</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
</code></pre>
<p>and then replaced the line <code>self.vsb = tk.Scrollbar(self.master)</code> with <code>self.vsb = ttk.Scrollbar(self.master)</code> but that didn't fix it either. I also tried removing the min/max sizing on master (those aren't the final values, I've been playing with it).</p>
<p>Is there something I'm doing wrong? I feel like it might be in the line where I set the canvas.config() but I read the documentation and it seems to be right. Tomorrow I plan on trying to load the frame into the canvas after I've built the frame.</p>
<p>But in the meantime, and in case that doesn't work, any help would be great! Thanks!</p>
| 0 | 2016-08-05T03:41:53Z | 38,788,177 | <p>You must do three things when configuring a scrolling canvas:</p>
<ol>
<li>have the canvas notify the scrollbar when it scrolls, by configuring the <code>yscrollcommand</code> attribute of the canvas to point to the scrollbar</li>
<li>have the scrollbar control the canvas when it is dragged, by configuring the <code>command</code> attribute of the scrollbar</li>
<li><strong>tell the canvas what part of the virtual canvas should be scrollable by configuring the <code>scrollregion</code> attribute of the canvas</strong></li>
</ol>
<p>You are neglecting to do #3. After adding the widgets to <code>master_frame</code> you should do this:</p>
<pre><code>self.canvas.configure(scrollregion=self.canvas.bbox("all"))
</code></pre>
<p>The above will tell the canvas and scrollbar that the area of the canvas that is scrollable is the area that encompasses all of the objects on the canvas.</p>
<p>Finally, you need to remove the following line of code, because you are already adding the frame to the canvas with <code>create_window</code>. Frames added to a canvas with <code>pack</code>, <code>place</code> or <code>grid</code> won't scroll:</p>
<pre><code># remove this line
self.master_frame.grid()
</code></pre>
<p>For a working example of a scrollable frame, see <a href="http://stackoverflow.com/q/3085696/7432">Adding a scrollbar to a group of widgets in Tkinter</a></p>
| 2 | 2016-08-05T11:29:20Z | [
"python",
"python-3.x",
"tkinter",
"tk",
"tkinter-canvas"
] |
How to create a list from a text file in Python | 38,780,764 | <p>I have a text file called "test", and I would like to create a list in Python and print it. I have the following code, but it does not print a list of words; it prints the whole document in one line.</p>
<pre><code>file = open("test", 'r')
lines = file.readlines()
my_list = [line.split(' , ')for line in open ("test")]
print (my_list)
</code></pre>
| 0 | 2016-08-05T03:46:12Z | 38,780,845 | <p>You could do </p>
<pre><code>my_list = open("filename.txt").readlines()
</code></pre>
| 1 | 2016-08-05T03:55:15Z | [
"python",
"python-3.x"
] |
How to create a list from a text file in Python | 38,780,764 | <p>I have a text file called "test", and I would like to create a list in Python and print it. I have the following code, but it does not print a list of words; it prints the whole document in one line.</p>
<pre><code>file = open("test", 'r')
lines = file.readlines()
my_list = [line.split(' , ')for line in open ("test")]
print (my_list)
</code></pre>
| 0 | 2016-08-05T03:46:12Z | 38,780,865 | <p>When you do this:</p>
<pre><code>file = open("test", 'r')
lines = file.readlines()
</code></pre>
<p>Lines is a list of lines. If you want to get a list of words for each line you can do:</p>
<pre><code>list_word = []
for l in lines:
list_word.append(l.split(" "))
</code></pre>
| 1 | 2016-08-05T03:57:36Z | [
"python",
"python-3.x"
] |
How to create a list from a text file in Python | 38,780,764 | <p>I have a text file called "test", and I would like to create a list in Python and print it. I have the following code, but it does not print a list of words; it prints the whole document in one line.</p>
<pre><code>file = open("test", 'r')
lines = file.readlines()
my_list = [line.split(' , ')for line in open ("test")]
print (my_list)
</code></pre>
| 0 | 2016-08-05T03:46:12Z | 38,780,886 | <p>I believe you are trying to achieve something like this:</p>
<pre><code>data = [word.split(',') for word in open("test", 'r').readlines()]
</code></pre>
<p>It would also help if you were to specify what type of text file you are trying to read as there are several modules(i.e. csv) that would produce the result in a much simpler way.</p>
<p>As pointed out, you may also <code>strip</code> a new line(depends on what line ending you are using) and you'll get something like this:</p>
<pre><code>data = [word.strip('\n').split(',') for word in open("test", 'r').readlines()]
</code></pre>
<p>This produces a list of lines with a list of words.</p>
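<p>As a self-contained check of that idea (the file path and contents below are made up for the example):</p>

```python
import os
import tempfile

# Create a throwaway comma-separated file to parse.
path = os.path.join(tempfile.mkdtemp(), 'test')
with open(path, 'w') as f:
    f.write('a,b,c\nd,e,f\n')

# Strip the newline from each line before splitting on commas.
data = [line.strip('\n').split(',') for line in open(path)]
print(data)  # [['a', 'b', 'c'], ['d', 'e', 'f']]
```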
| 2 | 2016-08-05T04:00:14Z | [
"python",
"python-3.x"
] |
Iterate a tuple with dict inside | 38,780,801 | <p>Having trouble iterating over tuples such as this:</p>
<pre><code>t = ('a','b',{'c':'d'})
for a,b,c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 3, got 1)
for a,b,*c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 2, got 1)
for a,b,**c in t:
print (a,b,c) # Syntax error (can't do **c)
</code></pre>
<p>Anyone know how I can preserve the dictionary value? I would like to see <code>a='a', b='b', and c={'c':'d'}</code></p>
| 0 | 2016-08-05T03:50:24Z | 38,780,824 | <p>You'd need to put <code>t</code> inside some other iterable container:</p>
<pre><code>for a, b, c in [t]:
print(a, b, c)
</code></pre>
<p>The problem with your attempts is that each on is iterating over a single element from <code>t</code> and trying to unpack that. e.g. the first turn of the loop is trying to unpack <code>'a'</code> into three places (<code>a</code>, <code>b</code> and <code>c</code>).</p>
<hr>
<p>Obviously, it's also probably better to just unpack directly (no loop required):</p>
<pre><code>a, b, c = t
print(a, b, c)
</code></pre>
| 2 | 2016-08-05T03:53:25Z | [
"python",
"python-3.x",
"python-3.4"
] |
Iterate a tuple with dict inside | 38,780,801 | <p>Having trouble iterating over tuples such as this:</p>
<pre><code>t = ('a','b',{'c':'d'})
for a,b,c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 3, got 1)
for a,b,*c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 2, got 1)
for a,b,**c in t:
print (a,b,c) # Syntax error (can't do **c)
</code></pre>
<p>Anyone know how I can preserve the dictionary value? I would like to see <code>a='a', b='b', and c={'c':'d'}</code></p>
| 0 | 2016-08-05T03:50:24Z | 38,780,826 | <p>Try this:</p>
<pre><code>for a,b,c in [t]:
print(a,b,c)
</code></pre>
<p>Putting <code>t</code> inside a list will allow you to unpack it.</p>
| 1 | 2016-08-05T03:53:37Z | [
"python",
"python-3.x",
"python-3.4"
] |
Iterate a tuple with dict inside | 38,780,801 | <p>Having trouble iterating over tuples such as this:</p>
<pre><code>t = ('a','b',{'c':'d'})
for a,b,c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 3, got 1)
for a,b,*c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 2, got 1)
for a,b,**c in t:
print (a,b,c) # Syntax error (can't do **c)
</code></pre>
<p>Anyone know how I can preserve the dictionary value? I would like to see <code>a='a', b='b', and c={'c':'d'}</code></p>
| 0 | 2016-08-05T03:50:24Z | 38,780,831 | <p>try this</p>
<pre><code>for x in t:
print(x)
</code></pre>
<p><code>x</code> will take all the values in <code>t</code> iteratively, so <code>'a'</code>, then <code>'b'</code> and finally <code>{'c':'d'}</code>.</p>
<p>And to print exactly <code>a = 'a'</code> etc you can do:</p>
<pre><code>for param, val in zip(["a","b","c"], t):
print(param,"=",val)
</code></pre>
| 0 | 2016-08-05T03:54:07Z | [
"python",
"python-3.x",
"python-3.4"
] |
Iterate a tuple with dict inside | 38,780,801 | <p>Having trouble iterating over tuples such as this:</p>
<pre><code>t = ('a','b',{'c':'d'})
for a,b,c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 3, got 1)
for a,b,*c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 2, got 1)
for a,b,**c in t:
print (a,b,c) # Syntax error (can't do **c)
</code></pre>
<p>Anyone know how I can preserve the dictionary value? I would like to see <code>a='a', b='b', and c={'c':'d'}</code></p>
| 0 | 2016-08-05T03:50:24Z | 38,780,862 | <p>Why are you iterating at all when it's a single <code>tuple</code>? Just unpack the single <code>tuple</code>, if that's what you need to do:</p>
<pre><code>a, b, c = t
print(a, b, c)
</code></pre>
<p>Or, if it's just printing you want to do, unpack in the call itself:</p>
<pre><code>print(*t)
</code></pre>
| 2 | 2016-08-05T03:56:46Z | [
"python",
"python-3.x",
"python-3.4"
] |
Iterate a tuple with dict inside | 38,780,801 | <p>Having trouble iterating over tuples such as this:</p>
<pre><code>t = ('a','b',{'c':'d'})
for a,b,c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 3, got 1)
for a,b,*c in t:
print(a,b,c) # ValueError: not enough values to unpack (expected 2, got 1)
for a,b,**c in t:
print (a,b,c) # Syntax error (can't do **c)
</code></pre>
<p>Anyone know how I can preserve the dictionary value? I would like to see <code>a='a', b='b', and c={'c':'d'}</code></p>
| 0 | 2016-08-05T03:50:24Z | 38,780,866 | <p>You can use tuple unpacking with multiple assignment:</p>
<pre><code>a, b, c = t
print(a, b, c)
</code></pre>
| 0 | 2016-08-05T03:57:38Z | [
"python",
"python-3.x",
"python-3.4"
] |
regex to set the group length in python | 38,780,964 | <p>I have a version 'major.minor.patch'<br>
major version range <code>[0-99999]</code><br>
minor version range <code>[0-9999]</code><br>
patch version range <code>[0-999999]</code><br>
but on the whole, 'major.minor.patch' should not exceed 16 characters including the . (dot) separators.
I have tried the following regular expression:</p>
<pre><code>^(\d{1,5}[.]\d{1,4}[.]\d{1,6}){1,16}$
</code></pre>
<p>but <code>{1,16}</code> means 1 to 16 repetitions of the previous expression, not the length of the previous group.
How can I limit the length of the following group to 16?</p>
<pre><code>(\d{1,5}[.]\d{1,4}[.]\d{1,6})
</code></pre>
 | 2 | 2016-08-05T04:09:25Z | 38,781,230 | <p>Add a lookahead at the beginning that allows a match only if the whole string is within the 1-16 length range, using a <code>$</code> to anchor at the end: <code>(?=^.{1,16}$)</code></p>
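<p>A short sketch of that approach, combining the lookahead with the version pattern from the question:</p>

```python
import re

# The lookahead (?=^.{1,16}$) only succeeds when the whole string is at most
# 16 characters; the rest of the pattern then validates each component.
version_re = re.compile(r'(?=^.{1,16}$)^\d{1,5}\.\d{1,4}\.\d{1,6}$')

print(bool(version_re.match('11111.1111.11111')))   # True  (16 chars)
print(bool(version_re.match('11111.1111.111111')))  # False (17 chars)
```

<p>The lookahead runs at the starting position, so it checks the total length before the component pattern is tried.</p>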
| 1 | 2016-08-05T04:38:02Z | [
"python",
"regex"
] |
regex to set the group length in python | 38,780,964 | <p>I have a version 'major.minor.patch'<br>
major version range <code>[0-99999]</code><br>
minor version range <code>[0-9999]</code><br>
patch version range <code>[0-999999]</code><br>
but on the whole, 'major.minor.patch' should not exceed 16 characters including the . (dot) separators.
I have tried the following regular expression:</p>
<pre><code>^(\d{1,5}[.]\d{1,4}[.]\d{1,6}){1,16}$
</code></pre>
<p>but <code>{1,16}</code> means 1 to 16 repetitions of the previous expression, not the length of the previous group.
How can I limit the length of the following group to 16?</p>
<pre><code>(\d{1,5}[.]\d{1,4}[.]\d{1,6})
</code></pre>
| 2 | 2016-08-05T04:09:25Z | 38,781,235 | <p>You have two regular expressions here that you want to combine</p>
<ol>
<li><code>^[\d.]{,16}$</code> </li>
<li><code>^\d{1,5}[.]\d{1,4}[.]\d{1,6}$</code></li>
</ol>
<p>Neither is sufficient on its own: (1) can match more than 2 dots and does not honour the individual length limits on each component, while (2) does not enforce the overall limit of 16 characters including the '.' separators.</p>
<p>A lesser-known feature of regex, the lookahead, can be used to combine (AND together) both of the above expressions, giving something like:</p>
<pre><code>^(?=[\d.]{,16}$)\d{1,5}[.]\d{1,4}[.]\d{1,6}$
</code></pre>
<p>Example:</p>
<pre><code>import re

exp = r'^(?=[\d.]{,16}$)\d{1,5}[.]\d{1,4}[.]\d{1,6}$'
vers = ['111.111.111',
'111111.1111.1111',
'11111.1111.111111',
'11111.1111.11111']
["{} Matches ? {}".format(v, "YES" if re.match(exp, v) else "NO" )
for v in vers]
</code></pre>
<p>Output</p>
<pre><code>['111.111.111 Matches ? YES',
'111111.1111.1111 Matches ? NO',
'11111.1111.111111 Matches ? NO',
'11111.1111.11111 Matches ? YES']
</code></pre>
| 1 | 2016-08-05T04:39:06Z | [
"python",
"regex"
] |
Why is my .pythonrc file being run in non-interactive programs? | 38,781,067 | <p>Context: I started using OSX about a year ago, and I had a kind of screwy python installation. That is, I was using system python, and installed packages with sudo when that seemed to make things work. Now, I'm starting from a fresh OSX install, and trying to do it the Right Way. I've installed python and python3 from brew, and trying to use python3 whenever possible. </p>
<p>Problem: I have a .pythonrc file, which just imports a handful of commonly used packages - mostly standard lib, a few popular nonstandard packages, and a few of my own. In the past, this file has only been run when I start an interactive shell. Now, when using brew python, it is run whenever I run any python program.</p>
<p>There must be some gap in my understanding of the rc file - I thought the purpose was specifically for interactive use. Still, when I use system python, the rc file isn't used - so something is different about my system python (2.7.10 at /usr/bin/python) vs brew python (2.7.12 at /usr/local/bin/python; 3.5.2 at /usr/local/bin/python3). The behavior is the same if I remove everything except a print statement from the rc file.</p>
<p>edit: I realized that the rc file is running because I'm importing ipdb. This makes sense I suppose, but I still don't understand why that would happen in some of the python environments/versions but not others.</p>
<p>edit: <a href="https://github.com/gotcha/ipdb/blob/master/ipdb/__main__.py#L44" rel="nofollow">https://github.com/gotcha/ipdb/blob/master/ipdb/<strong>main</strong>.py#L44</a> this line fails, not sure if this means anything.</p>
<p>Full stacktrace from within .pythonrc:</p>
<pre><code> File "hello.py", line 1, in <module>
from ipdb import set_trace
File "/usr/local/lib/python2.7/site-packages/ipdb/__init__.py", line 7, in <module>
from ipdb.__main__ import set_trace, post_mortem, pm, run # noqa
File "/usr/local/lib/python2.7/site-packages/ipdb/__main__.py", line 51, in <module>
ipapp.initialize([])
File "<decorator-gen-109>", line 2, in initialize
File "/usr/local/lib/python2.7/site-packages/traitlets/config/application.py", line 74, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 315, in initialize
self.init_code()
File "/usr/local/lib/python2.7/site-packages/IPython/core/shellapp.py", line 263, in init_code
self._run_startup_files()
File "/usr/local/lib/python2.7/site-packages/IPython/core/shellapp.py", line 342, in _run_startup_files
self._exec_file(python_startup)
File "/usr/local/lib/python2.7/site-packages/IPython/core/shellapp.py", line 328, in _exec_file
raise_exceptions=True)
File "/usr/local/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2469, in safe_execfile
self.compile if kw['shell_futures'] else None)
File "/usr/local/lib/python2.7/site-packages/IPython/utils/py3compat.py", line 288, in execfile
builtin_mod.execfile(filename, *where)
File "~/.pythonrc", line 57, in <module>
import traceback; traceback.print_stack()
</code></pre>
| 0 | 2016-08-05T04:20:54Z | 38,814,182 | <p>You can ask the traceback module:</p>
<pre><code>$ cat .pythonrc
import traceback; traceback.print_stack()
$ cat test.py
import ipdb
</code></pre>
<p>Then by running</p>
<pre><code>$ PYTHONSTARTUP="$HOME/.pythonrc" python test.py
</code></pre>
<p>you should get a traceback that tells you exactly from where the startup script is being run. Most likely, this is due to a call</p>
<pre><code>start_ipython()
</code></pre>
<p>somewhere in the ipdb import.</p>
| 1 | 2016-08-07T12:34:04Z | [
"python",
"osx",
"ipython",
"homebrew"
] |
Tensorflow softmax function returning one-hot encoded array | 38,781,155 | <p>I have this piece of code which computes the softmax function on the output predictions from my convnet. </p>
<pre><code>pred = conv_net(x, weights, biases, keep_prob, batchSize)
softmax = tf.nn.softmax(pred)
</code></pre>
<p>My prediction array is of shape [batch_size, number_of_classes] = [128,6]
An example row from this array is...</p>
<pre><code>[-2.69500896e+08 4.84445800e+07 1.99136800e+08 6.12981480e+07
2.33545440e+08 1.19338824e+08]
</code></pre>
<p>After running the softmax function I will get a result that is a one hot encoded array...</p>
<pre><code>[ 0 0 0 0 1 0 ]
</code></pre>
<p>I would think this is because I am taking the exponential of very large values. I was just wondering if I am doing something wrong or if I should be scaling my values first before applying the softmax function. My loss function is </p>
<pre><code>cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
</code></pre>
<p>and I am minimizing this with the the Adam Optimizer</p>
<pre><code>optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
</code></pre>
<p>My network is able to learn just fine.</p>
<p>My reasoning for applying the softmax function is to obtain the probability values for each class on the test data.</p>
<p><strong>EDIT</strong></p>
<p>It seems that, to fix these very large values going into my softmax function, I should add normalization and regularization. I have added the design code for my convnet; any help on where to place regularization and normalization would be great.</p>
<pre><code># Create model
def conv_net(x, weights, biases, dropout, batchSize):
# Reshape input picture
x = tf.reshape(x, shape=[-1, 150, 200, 1])
x = tf.random_crop(x, size=[batchSize, 128, 192, 1])
# Convolution Layer 1
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
# Convolution Layer 2
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
# Convolution Layer 3
conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
# Max Pooling (down-sampling)
conv3 = maxpool2d(conv3, k=2)
# Convolution Layer 4
conv4 = conv2d(conv3, weights['wc4'], biases['bc4'])
# Max Pooling (down-sampling)
conv4 = maxpool2d(conv4, k=2)
# Convolution Layer 5
conv5 = conv2d(conv4, weights['wc5'], biases['bc5'])
# Max Pooling (down-sampling)
conv5 = maxpool2d(conv5, k=2)
# Fully connected layer
# Reshape conv5 output to fit fully connected layer input
fc1 = tf.reshape(conv5, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
</code></pre>
| 0 | 2016-08-05T04:30:49Z | 38,794,752 | <p>You have a serious need for some regularization. Your outputs are on the order of 10^8. Usually, we deal with much smaller numbers. If you add more regularization your classifier won't be so certain about everything and it won't give outputs that look like one-hot encodings.</p>
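<p>A side note on the overflow itself: the standard fix is the max-subtraction trick, which numerically stable softmax implementations (including TensorFlow's) apply internally. A minimal pure-Python sketch of the idea, for illustration only, not the poster's TensorFlow graph:</p>

```python
import math

def stable_softmax(logits):
    # Subtracting the max logit first keeps every exp() argument <= 0,
    # so exp() can never overflow no matter how large the logits are.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# With the naive formula, math.exp(1000.0) alone would raise OverflowError.
probs = stable_softmax([1000.0, 1001.0, 1002.0])
```

<p>The result is identical to the naive softmax wherever the naive version does not overflow, because multiplying numerator and denominator by <code>exp(-m)</code> cancels out.</p>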
| 0 | 2016-08-05T17:22:14Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning",
"softmax"
] |
fill in missing fields using import csv | 38,781,211 | <p>My data set looks like the below:</p>
<pre><code>W000000457,,
,9/18/2016 11:28,37
,4/21/2016 0:07,54
,11/5/2016 12:05,42
,7/14/2016 15:43,54
W000000457 - Count,,100
2069320,,
,12/10/2016 0:22,12
,9/25/2016 14:07,28
,1/24/2016 6:54,59
2069320 - Count,,100
111,,
,1/16/2016 10:25,58
,6/11/2016 4:17,43
,4/21/2016 7:56,47
,3/17/2016 3:48,20
111 - Count,,100
</code></pre>
<p>The columns are ID, Date, Value. I do 2 main cleansing/massaging of the data.</p>
<p>1) Using the ID in row 1, I populate the rows below.<br>
2) Remove any rows with "Count" in row[0].</p>
<p>My goal is to get something like this:</p>
<pre><code>W000000457,9/18/2016 11:28,37
W000000457,4/21/2016 0:07,54
W000000457,11/5/2016 12:05,42
W000000457,7/14/2016 15:43,54
2069320,12/10/2016 0:22,12
2069320,9/25/2016 14:07,28
2069320,1/24/2016 6:54,59
111,1/16/2016 10:25,58
111,6/11/2016 4:17,43
111,4/21/2016 7:56,47
111,3/17/2016 3:48,20
</code></pre>
<p>This is the code I have so far:</p>
<pre><code>import csv
with open('data.txt','rb') as f_in:
reader = csv.reader(f_in)
row = next(reader)
last_row = row
for row in reader:
row = [x if x else y for x, y in zip(row, last_row)]
if 'COUNT' not in row[0].upper():
print row
last_row = row
</code></pre>
<p>This gets me close, but the problem is handling the records in between the different IDs. Example: </p>
<pre><code>W000000457,,
,1/24/2016 6:54,59
2069320 - Count,,100
111,,
,1/16/2016 10:25,58
</code></pre>
<p>Will become (using my code):</p>
<pre><code>W000000457,1/24/2016 6:54,59
111,1/24/2016 6:54,100
111,1/16/2016 10:25,58
</code></pre>
<p>The first instance of ID 111 is not a real value; it was carried over from the previous ID's existing values. </p>
<p>Or in the example above i get:</p>
<pre><code>W000000457,9/18/2016 11:28,37
W000000457,4/21/2016 0:07,54
W000000457,11/5/2016 12:05,42
W000000457,7/14/2016 15:43,54
**2069320,7/14/2016 15:43,100**
2069320,12/10/2016 0:22,12
2069320,9/25/2016 14:07,28
2069320,1/24/2016 6:54,59
**111,1/24/2016 6:54,100**
111,1/16/2016 10:25,58
111,6/11/2016 4:17,43
111,4/21/2016 7:56,47
111,3/17/2016 3:48,20
</code></pre>
<p>fields in ** are fake values</p>
<p>Any ideas on how I should handle this? </p>
<p>I was thinking of removing the first instance of each ID or looking for a way to only replace [0] of my csvreader instead of every field. </p>
| 0 | 2016-08-05T04:36:11Z | 38,781,431 | <p>With csv type data, use <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>.</p>
<p>Reading the data:</p>
<pre><code>import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO('''W000000457,,
,9/18/2016 11:28,37
,4/21/2016 0:07,54
,11/5/2016 12:05,42
,7/14/2016 15:43,54
W000000457 - Count,,100
2069320,,
,12/10/2016 0:22,12
,9/25/2016 14:07,28
,1/24/2016 6:54,59
2069320 - Count,,100
111,,
,1/16/2016 10:25,58
,6/11/2016 4:17,43
,4/21/2016 7:56,47
,3/17/2016 3:48,20
111 - Count,,100'''), names=['col1', 'col2', 'col3'])
</code></pre>
<p>Forward fill NaN items in the first column:</p>
<pre><code>df['col1'] = df['col1'].fillna(method='ffill')
</code></pre>
<p>Filter out items where the first column contains 'Count'</p>
<pre><code>df = df[~df['col1'].str.contains('Count')]
</code></pre>
<p>Drop rows that still have NaN:</p>
<pre><code>df = df.dropna()
</code></pre>
<p>Final result:</p>
<pre><code> col1 col2 col3
1 W000000457 9/18/2016 11:28 37.0
2 W000000457 4/21/2016 0:07 54.0
3 W000000457 11/5/2016 12:05 42.0
4 W000000457 7/14/2016 15:43 54.0
7 2069320 12/10/2016 0:22 12.0
8 2069320 9/25/2016 14:07 28.0
9 2069320 1/24/2016 6:54 59.0
12 111 1/16/2016 10:25 58.0
13 111 6/11/2016 4:17 43.0
14 111 4/21/2016 7:56 47.0
15 111 3/17/2016 3:48 20.0
</code></pre>
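<p>If pandas is not available, the same forward-fill-and-filter logic can be done with just the standard-library <code>csv</code> module the question started from. A minimal sketch on an abbreviated sample of the data:</p>

```python
import csv
import io

# Abbreviated sample in the same three-column shape as the question.
raw = """W000000457,,
,9/18/2016 11:28,37
W000000457 - Count,,100
111,,
,1/16/2016 10:25,58
111 - Count,,100
"""

rows = []
last_id = None
for rec in csv.reader(io.StringIO(raw)):
    if rec[0] and 'Count' not in rec[0]:
        last_id = rec[0]              # remember the most recent real ID
    if not rec[0] and rec[1]:         # data row: carry the ID forward
        rows.append([last_id, rec[1], rec[2]])
```

<p>Rows whose first field contains "Count" never update <code>last_id</code> and are never appended, which avoids the fake carried-over rows described in the question.</p>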
| 1 | 2016-08-05T04:59:19Z | [
"python"
] |
Is it possible to increase the depth of the sphinx search page? | 38,781,268 | <p><img src="http://i.stack.imgur.com/gntPs.png" alt="Sphinx search page"></p>
<p>The above image is the search feature of the Sphinx documentation generator. </p>
<p>As you can see in the image, the search page doesn't show the 'objects' under <code>Definitions</code> and <code>Orders</code>. Is there a setting which makes the search results one level deeper?</p>
| 0 | 2016-08-05T04:43:16Z | 38,819,575 | <p>I assume with <code>objects</code> you refer to a snippet of the article's text that contains the search hit (<a href="http://www.sphinx-doc.org/en/stable/search.html?q=text&check_keywords=yes&area=default" rel="nofollow">example</a>). Let's call it a <em>search summary</em>.</p>
<p>1) The length of the summary text the Sphinx search returns is hard-coded, as you can see in the <a href="https://github.com/sphinx-doc/sphinx/blob/master/sphinx/themes/basic/static/searchtools.js_t" rel="nofollow">source code</a> (function: <code>makeSearchSummary</code>, l. 457ff).
It's typically 240 characters long (plus two times <code>...</code> = 246).
To change this, you can <a href="http://www.sphinx-doc.org/en/stable/theming.html#creating-themes" rel="nofollow">create your own Sphinx theme</a> with a custom search function.</p>
<p>2) You possibly don't see any search summaries because you are opening the search locally on your file system. In such a case, the search tries to request the files of the search hits dynamically. Some browsers (e.g. Google Chrome) regard these requests as illegal <code>cross origin requests</code> and block them. Open the search in Firefox or Internet Explorer, or try serving the files with a (local) static file server - for example with <a href="https://github.com/GaretJax/sphinx-autobuild" rel="nofollow">sphinx-autobuild</a>. Now the search summary should be displayed.</p>
<p>3) There are <a href="https://github.com/sphinx-doc/sphinx/issues/1618" rel="nofollow">known issues</a> with the Sphinx search summary. And there is a <a href="https://github.com/TimKam/sphinx-pretty-searchresults" rel="nofollow">Sphinx extension trying to fix this</a> (<em>Disclaimer: I wrote the extension</em>).</p>
| 1 | 2016-08-07T23:41:26Z | [
"python",
"documentation",
"python-sphinx"
] |
Pro-Football-Reference Team Stats XPath | 38,781,357 | <p>I am using the scrapy shell on this page <a href="http://www.pro-football-reference.com/boxscores/201509100nwe.htm" rel="nofollow">Pittsburgh Steelers at New England Patriots - September 10th, 2015</a> to pull individual team stats. For example, I want to pull total yards for the away team (464) which, when inspecting the element and copying the XPath yields</p>
<pre><code>//*[@id="team_stats"]/tbody/tr[5]/td[1]
</code></pre>
<p>but when I run</p>
<pre><code>response.xpath('//*[@id="team_stats"]/tbody/tr[5]/td[1]')
</code></pre>
<p>nothing is returned. I noticed that this table is in a separate div from the initial data so I'm not sure if I need to be starting higher up. Even just a search on the</p>
<pre><code>//*[@id="team_stats"]
</code></pre>
<p>xpath returns nothing. Any help would be greatly appreciated.</p>
| 0 | 2016-08-05T04:52:29Z | 38,781,659 | <p>The problem you encounter is (as in most of cases like this) that the website uses JavaScript to render the complete information of the game. This means that Scrapy does not see the website as you see it when you open it in your browser.</p>
<p>Because Scrapy does not run any JavaScript after loading the page, it does not render the right table with the ID <code>team_stats</code>. The contents of the "Team Stats" table are there in the loaded page; however, they are commented out.</p>
<p>One solution would be to extract the comment which contains the team statistics and convert that comment text to HTML and extract the data found there.</p>
<pre><code>response.xpath('//div[@id="all_team_stats"]//comment()').extract()
</code></pre>
<p>The text above extracts the comments which contains your required table.</p>
<p>For future analysis I recommend using Chrome's Developer Tools, where you can disable JavaScript, and loading the site with that option. This will show you the page's content as Scrapy would see it.</p>
<p><strong>EDIT</strong></p>
<p>After you extract the comment you can feed it into a new selector just like Markus mentioned in his comment:</p>
<pre><code>new_selector = Selector(text=extracted_text)
</code></pre>
<p>And on this new selector you can use again <code>.xpath()</code> as you would do on the <code>response</code> object.</p>
<p>Removing the comment delimiter is easy: you have to remove it from the beginning and from the end of the extracted text which is a string. And comments in HTML start with <code><!--</code> and end with <code>--></code>. You need to feed the text between these characters to the new selector.</p>
<p>Extending the example from above:</p>
<pre><code>extracted_text = response.xpath('//div[@id="all_team_stats"]//comment()').extract()[0]
new_selector = Selector(text=extracted_text[4:-3].strip())
new_selector.xpath('//*[@id="team_stats"]/tbody/tr[5]/td[1]').extract()
</code></pre>
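<p>The comment-stripping step itself needs nothing from Scrapy; a minimal pure-Python sketch on a made-up fragment (the real page's markup is much larger):</p>

```python
# A made-up fragment mimicking how the stats table is shipped inside
# an HTML comment on the page.
html = 'stuff <!-- <table id="team_stats"><tr><td>464</td></tr></table> --> more'

start = html.index('<!--') + len('<!--')   # skip past the comment opener
end = html.index('-->', start)             # find the matching closer
inner = html[start:end].strip()            # hidden table markup, ready to re-parse
```

<p>The <code>inner</code> string is exactly what gets fed into the new <code>Selector(text=...)</code> in the example above.</p>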
| 1 | 2016-08-05T05:24:59Z | [
"python",
"xpath",
"scrapy"
] |
Trouble with Django urls | 38,781,389 | <p>I'm new to Django and using version 1.9.8. I followed the official tutorial, and now I'm trying <a href="https://thinkster.io/django-angularjs-tutorial#registering-new-users" rel="nofollow">this</a> more advanced one. I'm at the end/checkpoint of the "registering users" section. When I visit <code>http://localhost:8000/register</code>, Django is displaying the content I have on my index.html page located at <code>authentication/templates/authentication.html</code> rather than the one created during the tutorial at <code>static/templates/authentication/register.html</code>. </p>
<p>When I initially got to where I am, I was receiving the following error <code>ImportError: cannot import name 'IndexView'</code>, referencing the urls.py</p>
<pre><code>#urls.py
from django.conf.urls import include,url
from django.contrib import admin
from django.conf.urls import patterns
from rest_framework_nested import routers
from authentication.views import AccountViewSet
router = routers.SimpleRouter()
router.register(r'accounts', AccountViewSet)
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^api/v1/', include(router.urls)),
url('^.*$', IndexView.as_view(), name='index'), #this line was causing the error
]
</code></pre>
<p>I came across <a href="http://stackoverflow.com/questions/31287331/django-name-indexview-is-not-defined">this post</a> from someone else who was following the same tutorial. I added the IndexView import to my urls.py as such</p>
<pre><code>from authentication.views import AccountViewSet, IndexView
</code></pre>
<p>And then I added an IndexView class to my views.py </p>
<pre><code># authentication/views.py
....
class IndexView(TemplateView):
template_name = 'mytestproject/index.html' #this page is showing when I visit http://localhost:8000/register rather than the one located at static/templates/authentication/register.html
</code></pre>
<p>The IndexView error went away and the server ran without errors, but when I visited <code>http://localhost:8000/register</code> nothing was being displayed. I opened up that index.html page and added content, and then it displayed that content. Django is clearly using the index file located at <code>authentication/templates/authentication.html</code> instead of the register page I created. How do I get Django to use the template located at <code>static/templates/authentication/register.html</code> when I visit the register url? I'm confused, mainly because no methods named 'register' were defined in the view during the tutorial, nor designated in the urls.py file. </p>
<p>Thanks</p>
| 0 | 2016-08-05T04:56:10Z | 38,782,639 | <pre><code>url('^.*$', IndexView.as_view(), name='index'),
</code></pre>
<p>The <code>.*</code> in <code>'^.*$'</code> matches everything, so any URL will be routed to this <code>IndexView</code>.</p>
<p>Also, you didn't add a URL pattern for your register page in <code>urls.py</code>. Add one like this, and make sure it comes <em>before</em> the catch-all pattern, since Django uses the first pattern that matches:</p>
<pre><code>url(r'^register/',views.yourview,name='givenameforthisurl')
</code></pre>
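<p>To see why the order of the patterns matters, here is a toy sketch (not Django's actual resolver) of first-match-wins resolution:</p>

```python
import re

# Toy resolver: the first pattern whose regex matches wins, which
# mirrors how Django walks urlpatterns top to bottom.
patterns = [
    (r'^register/', 'register_view'),
    (r'^.*$', 'index_view'),   # catch-all, so it must come last
]

def resolve(path):
    for regex, view in patterns:
        if re.match(regex, path):
            return view
```

<p>If the catch-all entry were listed first, every path would resolve to <code>index_view</code>, which is exactly the behaviour described in the question.</p>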
| 1 | 2016-08-05T06:35:19Z | [
"python",
"django",
"django-views",
"django-urls"
] |
How to count the most frequent character from various strings inputted inside a while loop? | 38,781,400 | <p>I have to write a code that finds the most frequent letter/character used (including special characters like . , / ), while allowing the user to keep inputting new inputs until he/she enters a "!". I wrote the following, but it only counts the most frequent letter in the last string inputted. Does anyone know how I can fix it so that it counts the most frequently used letter out of all of the inputs? Thanks!</p>
<p>Update: I solved the issue by adding all the inputs to a list, converting the list to a string and finding the most common value in the string. No idea if there's something more efficient, but this worked!</p>
<pre><code>while w.count != 2:
w = input("Enter here: ")
w.count("!")
max_letter = w[0]
min_letter = w[0]
max = w.count(w[0])
min = w.count(w[0])
for char in w:
if char is not " ":
if w.count(char) > max:
max_letter = char
max = w.count(char)
print(max,max_letter)
</code></pre>
| 0 | 2016-08-05T04:57:04Z | 38,781,454 | <p>Use the print statement inside the <code>while</code> loop.</p>
<pre><code>while w.count != 2:
w = input("Enter here: ")
w.count("!")
max_letter = w[0]
min_letter = w[0]
max = w.count(w[0])
min = w.count(w[0])
for char in w:
if char is not " ":
if w.count(char) > max:
max_letter = char
max = w.count(char)
print(max,max_letter)
</code></pre>
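<p>The accumulate-across-inputs approach the asker settled on is exactly what <code>collections.Counter</code> is built for; a minimal sketch with hard-coded strings standing in for the <code>input()</code> loop:</p>

```python
from collections import Counter

counts = Counter()
for line in ["hello world", "good day!"]:   # stand-ins for the input() calls
    # Accumulate every non-space character across all inputs.
    counts.update(ch for ch in line if ch != ' ')

char, freq = counts.most_common(1)[0]
```

<p>Because the <code>Counter</code> lives outside the loop, it keeps counting across every input, so there is no need to join the strings first.</p>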
| 1 | 2016-08-05T05:01:39Z | [
"python",
"python-3.x"
] |
global name 'reqparse' is not defined | 38,781,416 | <p>I am trying to develop an API in python to create user. Below is my code.</p>
<pre><code> from flask import Flask
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
class CreateUser(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('email', type=str, help='Email address to create user')
parser.add_argument('password', type=str, help='Password to create user')
args = parser.parse_args()
_userEmail = args['email']
_userPassword = args['password']
return {'Email': args['email'], 'Password': args['password']}
except Exception as e:
return {'error': str(e)}
api.add_resource(CreateUser, '/CreateUser')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>However, when I run it on my REST client, I post the email and password in JSON format as <code>{ "Email" : "abc@xyz.com", "Password" : "abc" }</code>.</p>
<p>I get this error in the REST client body: <strong><code>{ "error": "global name 'reqparse' is not defined" }</code></strong></p>
<p>I have Python 2.7 64bit with the flask-restful and all flask libraries installed. Can someone tell me the fix ??? </p>
| 0 | 2016-08-05T04:58:25Z | 38,787,368 | <p>You will want to instantiate your parser and add the arguments before your request handler. I mean move</p>
<pre><code># Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('email', type=str, help='Email address to create user')
parser.add_argument('password', type=str, help='Password to create user')
</code></pre>
<p>to the line after <code>api = Api(app)</code></p>
| 0 | 2016-08-05T10:47:27Z | [
"python",
"api",
"rest"
] |
global name 'reqparse' is not defined | 38,781,416 | <p>I am trying to develop an API in python to create user. Below is my code.</p>
<pre><code> from flask import Flask
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
class CreateUser(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('email', type=str, help='Email address to create user')
parser.add_argument('password', type=str, help='Password to create user')
args = parser.parse_args()
_userEmail = args['email']
_userPassword = args['password']
return {'Email': args['email'], 'Password': args['password']}
except Exception as e:
return {'error': str(e)}
api.add_resource(CreateUser, '/CreateUser')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>However, when I run it on my REST client, I post the email and password in JSON format as <code>{ "Email" : "abc@xyz.com", "Password" : "abc" }</code>.</p>
<p>I get this error in the REST client body: <strong><code>{ "error": "global name 'reqparse' is not defined" }</code></strong></p>
<p>I have Python 2.7 64bit with the flask-restful and all flask libraries installed. Can someone tell me the fix ??? </p>
| 0 | 2016-08-05T04:58:25Z | 38,792,127 | <p>As said in the comments of your post, you need to import reqparse:</p>
<pre><code>from flask_restful import Resource, Api, reqparse
</code></pre>
<p>And for your other problem of null received, you need to be careful to the case.
If you send: <code>{ "Email" : "abc@xyz.com" , "Password" : "abc" }</code>, your python code needs to be like that: </p>
<pre><code>parser.add_argument('Email', type=str, help='Email address to create user')
parser.add_argument('Password', type=str, help='Password to create user')
</code></pre>
<p>(sorry I can not directly respond in the comments, I don't have enough reputation)</p>
| 0 | 2016-08-05T14:50:04Z | [
"python",
"api",
"rest"
] |
Calculating Fibonacci numbers in Python | 38,781,423 | <p>While doing <a href="http://www.mathblog.dk/project-euler-25-fibonacci-sequence-1000-digits/" rel="nofollow">Project Euler Problem 25</a>, I came across various techniques to compute the nth Fibonacci number. Memoization seemed to be the fastest amongst them all, and intuitively, I expected memoization to be faster than creating a list from bottom-up. </p>
<p>The code for the two functions is: </p>
<pre><code>def fib3(n): #FASTEST YET
fibs= [0,1] #list from bottom up
for i in range(2, n+1):
fibs.append(fibs[-1]+fibs[-2])
return fibs[n]
def fib4(n, computed= {0:0, 1:1}): #memoization
if n not in computed:
computed[n]= fib4(n-1, computed)+ fib4(n-2, computed)
return computed[n]
print fib3(1000)
print fib4(1000)
</code></pre>
<p>fib3 was approximately 8 times faster than fib4. I am unable to figure out the reason behind this. Essentially both are storing the values as they compute, one in a list, the other in a dictionary so that they can access them as "cached" later. Why the huge difference?</p>
| 3 | 2016-08-05T04:58:52Z | 38,781,474 | <p>Developing on thesonyman101's point: <code>my_list[i]</code> gives you immediate access to the element, while <code>my_dict[key]</code> requires computing the hash function and checking for collisions before looking at what's in the bucket. Your memoization also sets up a potentially deep recursion stack, which has some cost too.</p>
<p>Even faster (provided you don't need to recompute several values, which I know is not the case for Euler problems :) is to only keep track of the last two terms, so you don't waste any time on list management.</p>
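<p>A sketch of that last suggestion, keeping only the previous two terms:</p>

```python
def fib_last_two(n):
    # Only the previous two terms are kept, so there is no list to grow
    # and no dict to hash into.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

<p>This is O(n) time like fib3 but O(1) extra space, since nothing is appended or cached.</p>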
| 0 | 2016-08-05T05:04:21Z | [
"python",
"python-2.7",
"fibonacci",
"series",
"memoization"
] |
Calculating Fibonacci numbers in Python | 38,781,423 | <p>While doing <a href="http://www.mathblog.dk/project-euler-25-fibonacci-sequence-1000-digits/" rel="nofollow">Project Euler Problem 25</a>, I came across various techniques to compute the nth Fibonacci number. Memoization seemed to be the fastest amongst them all, and intuitively, I expected memoization to be faster than creating a list from bottom-up. </p>
<p>The code for the two functions is: </p>
<pre><code>def fib3(n): #FASTEST YET
fibs= [0,1] #list from bottom up
for i in range(2, n+1):
fibs.append(fibs[-1]+fibs[-2])
return fibs[n]
def fib4(n, computed= {0:0, 1:1}): #memoization
if n not in computed:
computed[n]= fib4(n-1, computed)+ fib4(n-2, computed)
return computed[n]
print fib3(1000)
print fib4(1000)
</code></pre>
<p>fib3 was approximately 8 times faster than fib4. I am unable to figure out the reason behind this. Essentially both are storing the values as they compute, one in a list, the other in a dictionary so that they can access them as "cached" later. Why the huge difference?</p>
| 3 | 2016-08-05T04:58:52Z | 38,781,486 | <p>take a look at this <a href="http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence">stackoverflow question</a>.</p>
<p>as you can see, the complexity of fibonacci algorithm(with recursion) comes out to be approximately O(2^n).</p>
<p>while for the <code>fib3</code> it'll be O(n).</p>
<p>now you can compute, if your input size is 3, <code>fib3</code> will have a complexity of O(3) but <code>fib4</code> will have it O(8). </p>
<p>you can see, why it's slower.</p>
| -2 | 2016-08-05T05:05:54Z | [
"python",
"python-2.7",
"fibonacci",
"series",
"memoization"
] |
Calculating Fibonacci numbers in Python | 38,781,423 | <p>While doing <a href="http://www.mathblog.dk/project-euler-25-fibonacci-sequence-1000-digits/" rel="nofollow">Project Euler Problem 25</a>, I came across various techniques to compute the nth Fibonacci number. Memoization seemed to be the fastest amongst them all, and intuitively, I expected memoization to be faster than creating a list from bottom-up. </p>
<p>The code for the two functions is: </p>
<pre><code>def fib3(n): #FASTEST YET
fibs= [0,1] #list from bottom up
for i in range(2, n+1):
fibs.append(fibs[-1]+fibs[-2])
return fibs[n]
def fib4(n, computed= {0:0, 1:1}): #memoization
if n not in computed:
computed[n]= fib4(n-1, computed)+ fib4(n-2, computed)
return computed[n]
print fib3(1000)
print fib4(1000)
</code></pre>
<p>fib3 was approximately 8 times faster than fib4. I am unable to figure out the reason behind this. Essentially both are storing the values as they compute, one in a list, the other in a dictionary so that they can access them as "cached" later. Why the huge difference?</p>
| 3 | 2016-08-05T04:58:52Z | 38,781,578 | <p>You are using recursion in the fib4 function, which is exponential in terms of time complexity.</p>
<p>EDIT, after someone said memoization makes fib4 linear:
except it does not. </p>
<p>The memoization only works to reduce calculation time for repeated calls. The first time, the Fibonacci value of a number n is calculated by recursion alone. </p>
<p>Try this out yourself </p>
<pre><code>import timeit
setup ='''
def fib3(n): #FASTEST YET
fibs= [0,1] #list from bottom up
for i in range(2, n+1):
fibs.append(fibs[-1]+fibs[-2])
return fibs[n]
def fib4(n, computed= {0:0, 1:1}): #memoization
if n not in computed:
computed[n]= fib4(n-1, computed)+ fib4(n-2, computed)
return computed[n]
'''
print (min(timeit.Timer('fib3(600)', setup=setup).repeat(3, 1)))
print (min(timeit.Timer('fib4(600)', setup=setup).repeat(3, 1)))
</code></pre>
<p>This will show fib4 taking longer. </p>
<pre><code>0.00010111480978441638
0.00039419570581368307
[Finished in 0.1s]
</code></pre>
<p>If you change the last two lines, that is, repeat each call 100 times, the result changes. Now fib4 becomes faster, as if there were not only no recursion, but almost no additional time to compute at all.</p>
<pre><code>print (min(timeit.Timer('fib3(600)', setup=setup).repeat(3, 100)))
print (min(timeit.Timer('fib4(600)', setup=setup).repeat(3, 100)))
</code></pre>
<p>Results with 50 repeats</p>
<pre><code>0.00501430622506104
0.00045805769094068097
[Finished in 0.1s]
</code></pre>
<p>Results with 100 repeats</p>
<pre><code>0.01583016969421893
0.0006815746388851851
[Finished in 0.2s]
</code></pre>
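<p>The reason repeated calls to fib4 look almost free is that its memo dict is a mutable default argument: it is created once, when the function is defined, and shared by every later call. A small sketch demonstrating the persistence:</p>

```python
def fib4(n, computed={0: 0, 1: 1}):
    if n not in computed:
        computed[n] = fib4(n - 1, computed) + fib4(n - 2, computed)
    return computed[n]

fib4(300)
# The default dict was built once, at definition time, so after this
# call it holds every value up to n=300 and is reused by all later calls.
cached_entries = len(fib4.__defaults__[0])
```

<p>This is why the timeit numbers with many repeats collapse to near zero: after the first repeat, every call is a single dict lookup.</p>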
| -1 | 2016-08-05T05:16:50Z | [
"python",
"python-2.7",
"fibonacci",
"series",
"memoization"
] |