title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Adding nodes to a linked list in Python | 38,679,388 | <p>The addToList() function takes a value as an argument. Values are passed to the addToList() function many times consecutively. </p>
<pre><code>def addToList(value):
    node = {}
    node["data"] = value
    node["next"] = None
    head = node
</code></pre>
<p>The question is: how do I create many different nodes? Each value that comes in should be assigned to a node. Only the first node should be equal to head. As of now, I'm creating one node, giving it a value, and then when the next value comes in I overwrite the node I had made instead of creating a new one. How would each new node point to a different ["next"]? If I know my values ahead of time, I can program each node manually. I don't understand how to generate new and unique nodes when I don't know how many values I will have, and they are being fed to a function.</p>
| 0 | 2016-07-30T23:23:11Z | 38,679,445 | <p>There are pretty much two possibilities (unless you want to go to object-oriented code, in which case there is a third option). For simplicity I assume that we add elements at the front (and not at the end) of the list.</p>
<h3>global scope</h3>
<p>Meaning that you have just one list at a time</p>
<pre><code>head = None

def addToList(value):
    global head
    head = {'value': value, 'next': head}

addToList(3)
addToList(5)
addToList(11)
print head
</code></pre>
<p>or using actual lists:</p>
<pre><code>actual_list = []

def addToList(value):
    global actual_list
    actual_list.append(value)

addToList(3)
addToList(5)
addToList(11)
print actual_list
</code></pre>
<h3>two arguments</h3>
<pre><code>def addToList(value, head):
    return {'value': value, 'next': head}

head = None
head = addToList(3, head)
head = addToList(5, head)
head = addToList(11, head)
print head
</code></pre>
<p>and again using actual lists</p>
<pre><code>def addToList(value, actual_list):
    actual_list.append(value)

actual_list = []
addToList(3, actual_list)
addToList(5, actual_list)
addToList(11, actual_list)
print actual_list
</code></pre>
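<p>The object-oriented third option mentioned at the start of this answer could look like the sketch below (Python 3 syntax; the <code>Node</code>/<code>LinkedList</code> names are my own choices, not part of the question):</p>

```python
class Node:
    """One linked-list node: a value plus a reference to the next node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


class LinkedList:
    """Owns the head pointer, so callers never touch it directly."""
    def __init__(self):
        self.head = None

    def addToList(self, value):
        # Prepend: the new node becomes the head and points at the old head.
        self.head = Node(value, self.head)

    def values(self):
        # Walk the chain from the head, collecting each value.
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next
        return out


lst = LinkedList()
for v in (3, 5, 11):
    lst.addToList(v)
print(lst.values())  # newest first: [11, 5, 3]
```

<p>As with the dict version, insertion is at the front, so the values come back newest-first.</p>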
| 1 | 2016-07-30T23:32:57Z | [
"python",
"linked-list",
"nodes"
] |
Python Script to Find and Move Files | 38,679,463 | <p>The purpose of the below python code is to find all the .jpg files included in any folder within the rootdir1 path and move all the files to the targetfolder. </p>
<p>When I spot checked the work of the code, I found a few jpg that were not copied to the target folder. What am I missing in the code?</p>
<pre><code>import os
import shutil
RootDir1 = r'/Users/brianwuollet/Desktop/Takeout'
TargetFolder = r'/Users/brianwuollet/Desktop/Photos'
for root, dirs, files in os.walk((os.path.normpath(RootDir1)), topdown=False):
    for name in files:
        if name.endswith('.jpg'):
            print "Found"
    SourceFolder = os.path.join(root,name)
    shutil.copy2(SourceFolder, TargetFolder) #copies file to target folder
</code></pre>
| 0 | 2016-07-30T23:34:36Z | 38,679,484 | <p>You should fix the indentation:</p>
<pre><code>import os
import shutil
RootDir1 = r'/Users/brianwuollet/Desktop/Takeout'
TargetFolder = r'/Users/brianwuollet/Desktop/Photos'
for root, dirs, files in os.walk((os.path.normpath(RootDir1)), topdown=False):
    for name in files:
        if name.endswith('.jpg'):
            print "Found"
            SourceFolder = os.path.join(root,name) #<--- Here Is The Change
            shutil.copy2(SourceFolder, TargetFolder) #<--- Here Is The Change
</code></pre>
<p>Right now you're copying outside the loop, so not all the files are copied.</p>
| 1 | 2016-07-30T23:39:34Z | [
"python",
"shutil"
] |
Python Script to Find and Move Files | 38,679,463 | <p>The purpose of the below python code is to find all the .jpg files included in any folder within the rootdir1 path and move all the files to the targetfolder. </p>
<p>When I spot checked the work of the code, I found a few jpg that were not copied to the target folder. What am I missing in the code?</p>
<pre><code>import os
import shutil
RootDir1 = r'/Users/brianwuollet/Desktop/Takeout'
TargetFolder = r'/Users/brianwuollet/Desktop/Photos'
for root, dirs, files in os.walk((os.path.normpath(RootDir1)), topdown=False):
    for name in files:
        if name.endswith('.jpg'):
            print "Found"
    SourceFolder = os.path.join(root,name)
    shutil.copy2(SourceFolder, TargetFolder) #copies file to target folder
</code></pre>
| 0 | 2016-07-30T23:34:36Z | 38,679,502 | <p>The indentation of your code is incorrect. The lines:</p>
<pre><code>SourceFolder = os.path.join(root,name)
shutil.copy2(SourceFolder, TargetFolder) #copies file to target folder
</code></pre>
<p>will be executed only once for each directory traversed by <code>os.walk()</code> resulting in just one file being copied from each directory. Change your code to this:</p>
<pre><code>for root, dirs, files in os.walk((os.path.normpath(RootDir1)), topdown=False):
    for name in files:
        if name.endswith('.jpg'):
            print "Found"
            SourceFolder = os.path.join(root,name)
            shutil.copy2(SourceFolder, TargetFolder) #copies file to target folder
</code></pre>
<p>Now each file that ends with '.jpg' will be copied.</p>
<p>It's also possible that you might overwrite files with the same base name, and this could result in lost files if you were actually <em>moving</em> the file instead of just copying it. You could check whether a file with the same name already exists in the target directory, and then print a warning, or somehow rename the file when copying it. </p>
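<p>A sketch of the duplicate-name check suggested above (Python 3 syntax; <code>safe_copy</code> and the <code>name_1.jpg</code> renaming scheme are illustrative assumptions, not part of the question):</p>

```python
import os
import shutil
import tempfile

def safe_copy(src, target_dir):
    """Copy src into target_dir; on a name clash, fall back to name_1.jpg, name_2.jpg, ..."""
    stem, ext = os.path.splitext(os.path.basename(src))
    dest = os.path.join(target_dir, stem + ext)
    n = 1
    while os.path.exists(dest):
        # A file with this name already exists: pick a new name instead of overwriting.
        dest = os.path.join(target_dir, "%s_%d%s" % (stem, n, ext))
        n += 1
    shutil.copy2(src, dest)
    return dest

# Tiny demonstration with throwaway directories.
src_dir, target = tempfile.mkdtemp(), tempfile.mkdtemp()
src = os.path.join(src_dir, "photo.jpg")
with open(src, "w") as f:
    f.write("fake jpeg bytes")
first = safe_copy(src, target)   # lands as photo.jpg
second = safe_copy(src, target)  # same base name again, renamed to photo_1.jpg
```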
| 1 | 2016-07-30T23:43:17Z | [
"python",
"shutil"
] |
How to load data in chunks from a pandas dataframe to a spark dataframe | 38,679,474 | <p>I have read data in chunks over a pyodbc connection using something like this :</p>
<pre><code>import pandas as pd
import pyodbc
conn = pyodbc.connect("Some connection Details")
sql = "SELECT * from TABLES;"
df1 = pd.read_sql(sql,conn,chunksize=10)
</code></pre>
<p>Now I want to read all these chunks into one single spark dataframe using something like:</p>
<pre><code>i = 0
for chunk in df1:
    if i==0:
        df2 = sqlContext.createDataFrame(chunk)
    else:
        df2.unionAll(sqlContext.createDataFrame(chunk))
    i = i+1
</code></pre>
<p>The problem is when I do a <code>df2.count()</code> I get the result as 10, which means only the i=0 case is working. Is this a bug with unionAll? Am I doing something wrong here?</p>
| 1 | 2016-07-30T23:37:18Z | 38,679,524 | <p>The documentation for <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.unionAll" rel="nofollow"><code>.unionAll()</code></a> states that it returns a new dataframe so you'd have to assign back to the <code>df2</code> DataFrame: </p>
<pre><code>i = 0
for chunk in df1:
    if i==0:
        df2 = sqlContext.createDataFrame(chunk)
    else:
        df2 = df2.unionAll(sqlContext.createDataFrame(chunk))
    i = i+1
</code></pre>
<p>Furthermore you can instead use <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate()</code></a> to avoid having to manage the <code>i</code> variable yourself:</p>
<pre><code>for i,chunk in enumerate(df1):
    if i == 0:
        df2 = sqlContext.createDataFrame(chunk)
    else:
        df2 = df2.unionAll(sqlContext.createDataFrame(chunk))
</code></pre>
<p>Furthermore the documentation for <code>.unionAll()</code> states that <code>.unionAll()</code> is deprecated and now you should use <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.union" rel="nofollow"><code>.union()</code></a> which acts like UNION ALL in SQL:</p>
<pre><code>for i,chunk in enumerate(df1):
    if i == 0:
        df2 = sqlContext.createDataFrame(chunk)
    else:
        df2 = df2.union(sqlContext.createDataFrame(chunk))
</code></pre>
<p>Edit:<br>
Furthermore I'll stop saying furthermore but not before I say furthermore: As @zero323 says let's not use <code>.union()</code> in a loop. Let's instead do something like:</p>
<pre><code>def unionAll(*dfs):
    ' by @zero323 from here: http://stackoverflow.com/a/33744540/42346 '
    first, *rest = dfs  # Python 3.x, for 2.x you'll have to unpack manually
    return first.sql_ctx.createDataFrame(
        first.sql_ctx._sc.union([df.rdd for df in dfs]),
        first.schema
    )

df_list = []
for chunk in df1:
    df_list.append(sqlContext.createDataFrame(chunk))
df_all = unionAll(*df_list)
</code></pre>
| 3 | 2016-07-30T23:46:19Z | [
"python",
"pandas",
"apache-spark",
"pyspark"
] |
Using user input from arcGis to chose certain csv column | 38,679,516 | <p>I'm currently a beginner in coding and very new in python coding.</p>
<p>I actually wanted to rewrite csv file (<em>read csv and save it to another csv</em>)</p>
<p>What I'm gonna ask you is how to read a certain csv column from arcGis User input.</p>
<p>I use "field" datatypes and "multivalues" for input so i can choose which column i want to display.</p>
<p>For example, i have this csv:</p>
<pre><code> key, province, population, area
A1, provA, 100, 20
B1, provB, 200, 10
C1, provC, 50, 30
</code></pre>
<p>Now from the input I choose only the key and area columns; I use:</p>
<pre><code> GetParameterAsText(0)
</code></pre>
<p>then i use this code:</p>
<pre><code> headers = arcpy.GetParameterAsText(0)
newhead = headers.replace("'", "")
headersList = newhead.split(";")
arcpy.AddMessage(headersList)
</code></pre>
<p>I got this result (<em>headersList</em>):</p>
<pre><code> [u'key', u'area']
</code></pre>
<p>The <strong>problem</strong> starts when I want to read that certain column from the above result:</p>
<pre><code>data = []
with csvdata as csvfile:
    reader = csv.DictReader(csvfile, delimiter=",")
    for row in reader:
        try:
            new_row = [headersList]  # <--------------- This One
            data.append(new_row)
        except IndexError as e:
            print e
            pass

with open(csvout, "w+") as to_file:
    writer = csv.writer(to_file, delimiter=",")
    for new_row in data:
        writer.writerow(new_row)
</code></pre>
<p>I want to make new_row like this (building it from <em>headersList</em>):</p>
<pre><code> new_row = [row['key'], row['area']]
</code></pre>
<p>But I just don't know how; can anyone help?</p>
| 0 | 2016-07-30T23:44:55Z | 38,679,636 | <p>From @jedwards comment, the answer is like this:</p>
<pre><code>data = []
with csvdata as csvfile:
    reader = csv.DictReader(csvfile, delimiter=",")
    for row in reader:
        try:
            new_row = [row[k] for k in headersList]
            data.append(new_row)
        except IndexError as e:
            print e
            pass

with open(csvout, "wb") as to_file:
    writer = csv.writer(to_file, delimiter=",")
    writer.writerow(headersList)
    for new_row in data:
        writer.writerow(new_row)
</code></pre>
<p>Thanks Again!</p>
| 1 | 2016-07-31T00:10:18Z | [
"python",
"csv",
"arcgis"
] |
Setting GenericForeignKey field value in Django | 38,679,577 | <p>I've been reading lately about generic relations. I know that <code>GenericForeignKey</code> is used to define and manage the generic relation using <code>ForeignKey</code> and <code>PositiveIntegerField</code> fields. I dove into the source code in search of the <code>__set__</code> method of the <code>GenericForeignKey</code> to see how it works.</p>
<p>Here is the snippet for <code>GenericForeignKey.__set__()</code>:</p>
<pre><code>def __set__(self, instance, value):
    ct = None
    fk = None
    if value is not None:
        ct = self.get_content_type(obj=value)
        fk = value._get_pk_val()
    setattr(instance, self.ct_field, ct)
    setattr(instance, self.fk_field, fk)
    setattr(instance, self.cache_attr, value)
</code></pre>
<p>and model definition from django docs <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/contenttypes/" rel="nofollow">example</a>:</p>
<pre><code>class TaggedItem(models.Model):
    tag = models.SlugField()
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    content_object = GenericForeignKey('content_type', 'object_id')
</code></pre>
<h2>Question:</h2>
<p>When I assign the value <code>guido</code> to <code>content_object</code>, what is the value of each of these parameters: <code>self</code>, <code>instance</code> and <code>value</code> in <code>GenericForeignKey.__set__()</code>?</p>
<p>Is <code>self=<GenericForeignKey: 1></code>, <code>instance='content_object'</code>, and <code>value=<User: guido></code>?</p>
<pre><code>>>> guido = User.objects.get(username='Guido')
>>> t = TaggedItem(content_object=guido, tag='bdfl')
>>> t.save()
</code></pre>
| 0 | 2016-07-30T23:57:13Z | 38,679,609 | <p>The <code>__set__</code> method is for <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">descriptors</a>.</p>
<p>The following simple example will show what the arguments passed to <code>__set__</code> are:</p>
<pre><code>class MyDescriptor:
    def __set__(self, instance, value):
        print((self, instance, value))

class MyClass:
    attr = MyDescriptor()

inst = MyClass()
inst.attr = "foo"
</code></pre>
<p>You'll get something like:</p>
<pre><code><__main__.MyDescriptor object at 0x000002017192AD68>, # self
<__main__.MyClass object at 0x000002017192ACF8>, # instance
'foo' # value
</code></pre>
<p>Specifically:</p>
<ul>
<li><code>self</code> is the instance of the <code>MyDescriptor</code> descriptor (<code>MyClass.attr</code>), </li>
<li><code>instance</code> is the instance of the <code>MyClass</code> class (<code>inst</code>), and</li>
<li><code>value</code> is what you're setting the attribute to (<code>"foo"</code>).</li>
</ul>
<p>See a more thorough example <a href="https://docs.python.org/3/howto/descriptor.html#descriptor-example" rel="nofollow">here</a>.</p>
<p>So, without similarly diving into the Django code, it would seem that:</p>
<ul>
<li><code>self</code> is the instance of the <code>GenericForeignKey</code> descriptor (<code>TaggedItem.content_object</code>), </li>
<li><code>instance</code> is the instance of the <code>TaggedItem</code> class, and</li>
<li><code>value</code> is what you're setting the attribute to.</li>
</ul>
<p>But note that, with this line:</p>
<pre><code>t = TaggedItem(content_object=guido, tag='bdfl')
</code></pre>
<p>It looks like you're creating a <code>TaggedItem</code>, which <em>creates</em> a descriptor with this line</p>
<pre><code>content_object = GenericForeignKey('content_type', 'object_id')
</code></pre>
<p>So, at least from the code you posted, the <code>__set__</code> method won't be called. Instead <code>GenericForeignKey</code>'s <code>__init__</code> method would be called.</p>
<p>To call <code>GenericForeignKey</code>'s <code>__set__</code> method, you'd need to have an instance of a class (call it <code>inst</code>) that had a <code>GenericForeignKey</code> descriptor as an attribute (call it <code>attr</code>), then write something like:</p>
<pre><code>inst.attr = "not guido"
</code></pre>
<p>Then, the <code>__set__</code> method of the <code>GenericForeignKey</code> descriptor <em>would</em> be called.</p>
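<p>For completeness, here is a minimal runnable sketch of a <em>data</em> descriptor with both <code>__get__</code> and <code>__set__</code> (storing the value in the instance's <code>__dict__</code> is just one common pattern; the names are illustrative):</p>

```python
class Stored:
    """Minimal data descriptor: __set__ writes into the instance __dict__,
    __get__ reads it back (data descriptors take precedence over the instance dict)."""
    def __set_name__(self, owner, name):
        # Called once at class creation time (Python 3.6+); remembers the attribute name.
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class: return the descriptor itself
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value


class MyClass:
    attr = Stored()


inst = MyClass()
inst.attr = "foo"   # routed through Stored.__set__
print(inst.attr)    # routed through Stored.__get__
```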
| 0 | 2016-07-31T00:03:25Z | [
"python",
"django",
"django-contenttypes"
] |
Python iterating through large json file | 38,679,607 | <p>So I've recently been trying to learn how to use Django by creating a website that enables the user to create Magic: The Gathering decks. This question is more related to just Python, though, and not the Django framework itself.</p>
<p>Anyway, I'm trying to parse a huge json file that contains around 200 sets for MTG, and each set has multiple cards, and each card has multiple types, as you can also see from the image below. So it's a fairly complex data structure.</p>
<p><a href="http://i.stack.imgur.com/KuG1M.png" rel="nofollow"><img src="http://i.stack.imgur.com/KuG1M.png" alt="enter image description here"></a></p>
<p>Now the current way I'm parsing all the data is using for loops like this:</p>
<pre><code>def InsertFormats():
    json_object = setJson()
    for sets in json_object:
        for cards in json_object[sets]['cards']:
            if 'legalities' in cards:
                for legalities in cards['legalities']:
                    cardFormat = legalities['format']
                    legalType = legalities['legality']
                    obj, created = CardFormat.objects.get_or_create(cardFormat=cardFormat)
                    obj, created = LegalTypes.objects.get_or_create(legalType=legalType)
</code></pre>
<p>But the issue with this is that it will just randomly time out with this error </p>
<blockquote>
<p>Process finished with exit code -1073741819</p>
</blockquote>
<p>which I'm only assuming is occurring due to the number of iterations this function is making. I have multiple functions like this to insert the data from the json object into my database.</p>
<p>Is there any other way to iterate through a large json object without needing to go through so many for loops just to reach the data I need, or at least so it won't crash?</p>
| 0 | 2016-07-31T00:02:52Z | 38,679,970 | <p>It's a memory allocation error. Python dicts aren't good at allocating memory for large datasets.</p>
<p>You can try out other container datatypes. As namedtuples (lighter than dict):</p>
<pre><code>from collections import namedtuple
import json

with open(yourfile) as f:
    json_object = json.load(
        f,
        object_hook = lambda x: namedtuple('JsonObject', x.keys())(**x)
    )
</code></pre>
<p>Or tries: <a href="http://bugs.python.org/issue9520" rel="nofollow">http://bugs.python.org/issue9520</a></p>
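<p>A quick demonstration of the <code>object_hook</code> trick above on an in-memory string (note: this only works when the JSON keys are valid Python identifiers, since <code>namedtuple</code> rejects anything else; the sample data is made up):</p>

```python
import json
from collections import namedtuple

raw = '{"name": "Black Lotus", "legalities": [{"format": "Vintage", "legality": "Restricted"}]}'

card = json.loads(
    raw,
    object_hook=lambda d: namedtuple('JsonObject', d.keys())(**d)
)

# Nested objects become namedtuples too, so attribute access works all the way down.
print(card.name)                    # Black Lotus
print(card.legalities[0].legality)  # Restricted
```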
| 0 | 2016-07-31T01:29:09Z | [
"python",
"json",
"django"
] |
Finding values based on specific categories | 38,679,614 | <p>I was wondering how one would find estimated values based on several different categories. Two of the columns are categorical, one of the other columns contains two strings of interest, and the last contains numeric values.
I have a csv file called sports.csv</p>
<pre><code>import pandas as pd
import numpy as np
#loading the data into data frame
df = pd.read_csv('sports.csv')
</code></pre>
<p>I'm trying to find a suggested <code>price</code> for a <code>Gym</code> that has both baseball and basketball, as well as <code>enrollment</code> from 240 to 260, given it is from <code>region</code> 4 and of <code>type</code> 1</p>
<pre><code>Region Type enroll estimates price Gym
2 1 377 0.43 40 Football|Baseball|Hockey|Running|Basketball|Swimming|Cycling|Volleyball|Tennis|Ballet
4 2 100 0.26 37 Baseball|Tennis
4 1 347 0.65 61 Basketball|Baseball|Ballet
4 1 264 0.17 12 Swimming|Ballet|Cycling|Basketball|Volleyball|Hockey|Running|Tennis|Baseball|Football
1 1 286 0.74 78 Swimming|Basketball
0 1 210 0.13 29 Baseball|Tennis|Ballet|Cycling|Basketball|Football|Volleyball|Swimming
0 1 263 0.91 31 Tennis
2 2 271 0.39 54 Tennis|Football|Ballet|Cycling|Running|Swimming|Baseball|Basketball|Volleyball
3 3 247 0.51 33 Baseball|Hockey|Swimming|Cycling
0 1 109 0.12 17 Football|Hockey|Volleyball
</code></pre>
<p>I don't know how to piece everything together. I apologize if the syntax is incorrect; I'm just beginning Python. So far I have:</p>
<pre><code>import pandas as pd
import numpy as np
#loading the data into data frame
df = pd.read_csv('sports.csv')
#group 4th region and type 1 together where enrollment is in between 240 and 260
group = df[df['Region'] == 4] df[df['Type'] == 1] df[240>=df['Enrollment'] <=260 ]
#split by pipe chars to find gyms that contain both Baseball and Basketball
df['Gym'] = df['Gym'].str.split('|')
df['Gym'] = df['Gym'].str.contains('Baseball'& 'Basketball')
price = df.loc[df['Gym'], 'Price']
</code></pre>
<p>Should I do a groupby instead? If so, how would I include the columns <code>Type</code> == 1, <code>Region</code> == 4, and enrollment from 240 to 260?</p>
| 0 | 2016-07-31T00:05:57Z | 38,679,753 | <p>You can create a <code>mask</code> with all your conditions specified and then use the mask for subsetting:</p>
<pre><code>mask = (df['Region'] == 4) & (df['Type'] == 1) & \
       (df['enroll'] <= 260) & (df['enroll'] >= 240) & \
       df['Gym'].str.contains('Baseball') & df['Gym'].str.contains('Basketball')

df['price'][mask]
# Series([], name: price, dtype: int64)
</code></pre>
<p>which returns empty, since there is no record satisfying all conditions as above.</p>
| 0 | 2016-07-31T00:34:37Z | [
"python",
"csv",
"pandas"
] |
Finding values based on specific categories | 38,679,614 | <p>I was wondering how one would find estimated values based on several different categories. Two of the columns are categorical, one of the other columns contains two strings of interest, and the last contains numeric values.
I have a csv file called sports.csv</p>
<pre><code>import pandas as pd
import numpy as np
#loading the data into data frame
df = pd.read_csv('sports.csv')
</code></pre>
<p>I'm trying to find a suggested <code>price</code> for a <code>Gym</code> that has both baseball and basketball, as well as <code>enrollment</code> from 240 to 260, given it is from <code>region</code> 4 and of <code>type</code> 1</p>
<pre><code>Region Type enroll estimates price Gym
2 1 377 0.43 40 Football|Baseball|Hockey|Running|Basketball|Swimming|Cycling|Volleyball|Tennis|Ballet
4 2 100 0.26 37 Baseball|Tennis
4 1 347 0.65 61 Basketball|Baseball|Ballet
4 1 264 0.17 12 Swimming|Ballet|Cycling|Basketball|Volleyball|Hockey|Running|Tennis|Baseball|Football
1 1 286 0.74 78 Swimming|Basketball
0 1 210 0.13 29 Baseball|Tennis|Ballet|Cycling|Basketball|Football|Volleyball|Swimming
0 1 263 0.91 31 Tennis
2 2 271 0.39 54 Tennis|Football|Ballet|Cycling|Running|Swimming|Baseball|Basketball|Volleyball
3 3 247 0.51 33 Baseball|Hockey|Swimming|Cycling
0 1 109 0.12 17 Football|Hockey|Volleyball
</code></pre>
<p>I don't know how to piece everything together. I apologize if the syntax is incorrect; I'm just beginning Python. So far I have:</p>
<pre><code>import pandas as pd
import numpy as np
#loading the data into data frame
df = pd.read_csv('sports.csv')
#group 4th region and type 1 together where enrollment is in between 240 and 260
group = df[df['Region'] == 4] df[df['Type'] == 1] df[240>=df['Enrollment'] <=260 ]
#split by pipe chars to find gyms that contain both Baseball and Basketball
df['Gym'] = df['Gym'].str.split('|')
df['Gym'] = df['Gym'].str.contains('Baseball'& 'Basketball')
price = df.loc[df['Gym'], 'Price']
</code></pre>
<p>Should I do a groupby instead? If so, how would I include the columns <code>Type</code> == 1, <code>Region</code> == 4, and enrollment from 240 to 260?</p>
| 0 | 2016-07-31T00:05:57Z | 38,679,782 | <p>I had to add an instance that would actually meet your criteria, or else you will get an empty result. You want to use <code>df.loc</code> with conditions as follows:</p>
<pre><code>In [1]: import pandas as pd, numpy as np, io
In [2]: in_string = io.StringIO("""Region Type enroll estimates price Gym
...: 2 1 377 0.43 40 Football|Baseball|Hockey|Running|Basketball|Swimming|Cycling|Volleyball|Tennis|Ballet
...: 4 2 100 0.26 37 Baseball|Tennis
...: 4 1 247 0.65 61 Basketball|Baseball|Ballet
...: 4 1 264 0.17 12 Swimming|Ballet|Cycling|Basketball|Volleyball|Hockey|Running|Tennis|Baseball|Football
...: 1 1 286 0.74 78 Swimming|Basketball
...: 0 1 210 0.13 29 Baseball|Tennis|Ballet|Cycling|Basketball|Football|Volleyball|Swimming
...: 0 1 263 0.91 31 Tennis
...: 2 2 271 0.39 54 Tennis|Football|Ballet|Cycling|Running|Swimming|Baseball|Basketball|Volleyball
...: 3 3 247 0.51 33 Baseball|Hockey|Swimming|Cycling
...: 0 1 109 0.12 17 Football|Hockey|Volleyball""")
In [3]: df = pd.read_csv(in_string,delimiter=r"\s+")
In [4]: df.loc[df.Gym.str.contains(r"(?=.*Baseball)(?=.*Basketball)")
...: & (df.enroll <= 260) & (df.enroll >= 240)
...: & (df.Region == 4) & (df.Type == 1), 'price']
Out[4]:
2    61
Name: price, dtype: int64
</code></pre>
<p>Note I used a regex pattern for contains that essentially acts as an AND operator for regex. You could simply have done another conjunction of <code>.contains</code> conditions for Basketball and Baseball.</p>
| 0 | 2016-07-31T00:42:29Z | [
"python",
"csv",
"pandas"
] |
Slicing a Python list with a NumPy array of indices -- any fast way? | 38,679,666 | <p>I have a regular <code>list</code> called <code>a</code>, and a NumPy array of indices <code>b</code>.<br>
(No, it is not possible for me to convert <code>a</code> to a NumPy array.)</p>
<p>Is there any way for me to achieve the same effect as "<code>a[b]</code>" efficiently? To be clear, this implies that I don't want to extract every individual <code>int</code> in <code>b</code> due to its performance implications.</p>
<p>(Yes, this is a bottleneck in my code. That's why I'm using NumPy arrays to begin with.)</p>
| 3 | 2016-07-31T00:15:49Z | 38,679,861 | <pre><code>a = list(range(1000000))
b = np.random.randint(0, len(a), 10000)
%timeit np.array(a)[b]
10 loops, best of 3: 84.8 ms per loop
%timeit [a[x] for x in b]
100 loops, best of 3: 2.93 ms per loop
%timeit operator.itemgetter(*b)(a)
1000 loops, best of 3: 1.86 ms per loop
%timeit np.take(a, b)
10 loops, best of 3: 91.3 ms per loop
</code></pre>
<p>I had high hopes for <code>numpy.take()</code> but it is far from optimal. I tried some Numba solutions as well, and they yielded similar times--around 92 ms.</p>
<p>So a simple list comprehension is not far from the best here, but <code>operator.itemgetter()</code> wins, at least for input sizes at these orders of magnitude.</p>
| 3 | 2016-07-31T01:00:39Z | [
"python",
"arrays",
"performance",
"numpy",
"optimization"
] |
Slicing a Python list with a NumPy array of indices -- any fast way? | 38,679,666 | <p>I have a regular <code>list</code> called <code>a</code>, and a NumPy array of indices <code>b</code>.<br>
(No, it is not possible for me to convert <code>a</code> to a NumPy array.)</p>
<p>Is there any way for me to achieve the same effect as "<code>a[b]</code>" efficiently? To be clear, this implies that I don't want to extract every individual <code>int</code> in <code>b</code> due to its performance implications.</p>
<p>(Yes, this is a bottleneck in my code. That's why I'm using NumPy arrays to begin with.)</p>
| 3 | 2016-07-31T00:15:49Z | 38,680,028 | <p>Write a cython function:</p>
<pre><code>import cython
from cpython cimport PyList_New, PyList_SET_ITEM, Py_INCREF

@cython.wraparound(False)
@cython.boundscheck(False)
def take(list alist, Py_ssize_t[:] arr):
    cdef:
        Py_ssize_t i, idx, n = arr.shape[0]
        list res = PyList_New(n)
        object obj
    for i in range(n):
        idx = arr[i]
        obj = alist[idx]
        PyList_SET_ITEM(res, i, alist[idx])
        Py_INCREF(obj)
    return res
<p>The result of %timeit:</p>
<pre><code>import numpy as np

al = list(range(10000))
aa = np.array(al)
ba = np.random.randint(0, len(al), 10000)
bl = ba.tolist()

%timeit [al[i] for i in bl]
%timeit np.take(aa, ba)
%timeit take(al, ba)

1000 loops, best of 3: 1.68 ms per loop
10000 loops, best of 3: 51.4 µs per loop
1000 loops, best of 3: 254 µs per loop
</code></pre>
<p><code>numpy.take()</code> is the fastest if both of the arguments are ndarray objects. The cython version is 5x faster than the list comprehension.</p>
| 3 | 2016-07-31T01:45:55Z | [
"python",
"arrays",
"performance",
"numpy",
"optimization"
] |
how could i generate larger sample that its population? | 38,679,725 | <pre><code>import random
random.sample(range(10,18),100)
Traceback (most recent call last):
  File "<ipython-input-6-f5d60cc38869>", line 1, in <module>
    random.sample(range(10,18),100)
  File "C:\Users\shamsul\Anaconda3\lib\random.py", line 315, in sample
    raise ValueError("Sample larger than population")
ValueError: Sample larger than population
</code></pre>
| 0 | 2016-07-31T00:27:44Z | 38,679,745 | <pre><code>sample = [random.randrange(10,18) for _ in range(100)]
</code></pre>
<p>So <a href="https://en.wikipedia.org/wiki/Pigeonhole_principle" rel="nofollow">obviously you're going to have repeats</a>, because the sample is larger than the population, but this will give you an evenly distributed sample. Generally a random sample is taking a random subset of a population, so by the usual definition of a sample you can't have a sample that's larger than a population, but if you want just a uniformly distributed list of random numbers in a certain range, this will do it.</p>
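<p>As a side note, on Python 3.6+ <code>random.choices()</code> does sampling <em>with</em> replacement directly, which is exactly this use case (a sketch):</p>

```python
import random

sample = random.choices(range(10, 18), k=100)  # 100 draws with replacement

print(len(sample))        # 100
print(min(sample) >= 10)  # True: every draw stays within 10..17
print(max(sample) <= 17)  # True
```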
| 2 | 2016-07-31T00:31:51Z | [
"python"
] |
FindRootTest not working | 38,679,763 | <pre class="lang-none prettyprint-override"><code>File "/Users/SalamonCreamcheese/Documents/4.py", line 31, in <module>
    testFindRoot()
  File "/Users/SalamonCreamcheese/Documents/4.py", line 29, in testFindRoot
    print " ", result**power, " ~= ", x
TypeError: unsupported operand type(s) for ** or pow(): 'tuple' and 'int'
</code></pre>
<p>Any help would be highly appreciated, I don't understand why it's saying that result**power is of type(s), I'm assuming meaning string, and why that's an error. Thanks in advance for any feedback.</p>
<pre><code>def findRoot(x, power, epsilon):
    """Assumes x and epsilon int or float, power an int,
       epsilon > 0 and power >= 1
       Returns float y such that y**power is within epsilon of x
       If such a float does not exist, returns None"""
    if x < 0 and power % 2 == 0:
        return None
    low = min(-1.0, x)
    high = max(1.0, x)
    ans = (high + low) / 2.0
    while abs(ans**power - x) > epsilon:
        if ans**power < x:
            low = ans
        else:
            high = ans
        ans = (high + low) / 2.0
    return ans

def testFindRoot():
    for x in (0.25, -0.25, 2, -2, 8, -8):
        epsilon = 0.0001
        for power in range(1, 4):
            print 'Testing x = ' + str(x) +\
                  ' and power = ' + str(power)
            result = (x, power, epsilon)
            if result == None:
                print 'No result was found!'
            else:
                print " ", result**power, " ~= ", x

testFindRoot()
</code></pre>
| 1 | 2016-07-31T00:36:30Z | 38,679,805 | <p>After</p>
<pre><code>result = (x, power, epsilon)
</code></pre>
<p><code>result</code> is bound to a 3-element tuple. So the error message is thoroughly accurate: you're later trying to raise that tuple to the integer power <code>power</code>. Python doesn't define <code>__pow__</code> for tuples, and that's all there is to it.</p>
<p>Presumably you <em>intended</em> to code:</p>
<pre><code> result = findRoot(x, power, epsilon)
</code></pre>
<p>instead.</p>
| 5 | 2016-07-31T00:46:57Z | [
"python"
] |
How can I call a Subclass object from a Superclass and is there a better way to do this? (Python 3) | 38,679,784 | <p>I am creating a game in Python 3 and I have a main class with the game loop and a bunch of variables in it. I have a subclass of this to create "path" objects, and I need the variables from the main class. I also need to call the subclass from the main, superclass. Every time I call the subclass, it also calls the main class's <strong>init</strong> method to pass variables through. The problem is when this happens, it resets the values of all my variables in the main class.</p>
<pre><code>class Main:
    def __init__(self):
        foo = 0
        sub1 = Sub()

    def foo_edit(self):
        self.foo += 5

    def main(self):
        sub2 = Sub()

class Sub(Main):
    def __init__(self):
        super(Sub, self).__init__()
        self.bar = 0

    def foo_edit(self):
        self.foo += 10
</code></pre>
<p>I've looked at many other similar questions, but none have given me the answer I need. I tried sub1 in my code (in the <strong>init</strong> function of Main) and this creates a recursion loop error because the <strong>init</strong> functions call each other forever. When I call it in the game loop (or main in this example) it re-initializes Main each time it is called, wiping the variables needed in Sub. Before, I only had one instance of the "path", so I had no need of a class and used a function. Now that I need multiple "path" objects I am using a subclass of Main to get the variables I need.</p>
<p>A solution to this problem that does or does not answer my question (calling a subclass from a superclass might be a bad idea) would be appreciated. Thanks in advance.</p>
<p>EDIT: I could pass the variables needed into the "Sub" class; however, I need them to be updated, so I would need some sort of update method to make sure the variables passed in are changed in Sub when they are in Main. This is easily doable, but I was wondering if there was an easier/better way to do it.</p>
| 0 | 2016-07-31T00:43:03Z | 38,679,902 | <p>You are misunderstanding the purpose of subclasses. Inheritance isn't for sharing data, it's for making special cases of a general type. </p>
<p>If <code>Sub</code> should be exactly like <code>Main</code> with a few specializations then you should use inheritance. When you say that "Sub" is a subclass of "Main", you are saying "Sub is a Main", not "Sub uses part of Main". </p>
<p>If <code>Sub</code> merely needs access to data in <code>Main</code>, use composition. If <code>Sub</code> needs data from <code>Main</code>, pass an instance of <code>Main</code> to <code>Sub</code>, and have <code>Sub</code> operate on the instance.</p>
<p>For example:</p>
<pre><code>class Main():
def __init__(self):
self.foo = 42
self.sub1 = Sub(self)
self.sub1.foo_edit()
...
class Sub():
def __init__(self, main):
self.main = main
def foo_edit(self):
self.main.foo += 10
</code></pre>
| 2 | 2016-07-31T01:11:05Z | [
"python",
"class",
"python-3.x",
"oop"
] |
Gmail API: Python Email Dict appears to be Missing Keys | 38,679,825 | <p>I'm experiencing a strange issue that seems to be inconsistent with google's gmail API:</p>
<p>If you look <a href="https://developers.google.com/gmail/api/v1/reference/users/messages#resource" rel="nofollow" title="here">here</a>, you can see that gmail's representation of an email has keys "snippet" and "id", among others. Here's some code that I use to generate the complete list of all my emails:</p>
<pre><code>response = service.users().messages().list(userId='me').execute()
messageList = []
messageList.extend(response['messages'])
while 'nextPageToken' in response:
pagetoken = response['nextPageToken']
response = service.users().messages().list(userId='me', pageToken=pagetoken).execute()
messageList.extend(response['messages'])
for message in messageList:
if 'snippet' in message:
print(message['snippet'])
else:
print("FALSE")
</code></pre>
<p>The code works!... Except for the fact that I get output "FALSE" for every single one of the emails. 'snippet' doesn't exist! However, if I run the same code with "id" instead of snippet, I get a whole bunch of ids!</p>
<p>I decided to just print out the 'message' objects/dicts themselves, and each one only had an "id" and a "threadId", even though the API claims there should be more in the object... What gives?</p>
<p>Thanks for your help!</p>
| 0 | 2016-07-31T00:51:45Z | 38,680,283 | <p>As @jedwards said in his comment, just because a message 'can' contain all of the fields specified in the documentation doesn't mean it will. 'list' provides the bare minimum of information for each message, because it returns a lot of messages and wants to be as lazy as possible. For individual messages that I want to know more about, I'd then use 'messages.get' with the id that I got from 'list'.</p>
<p>Running get for each email in your inbox seems very expensive, but to my knowledge there's no way to run a batch 'get' command.</p>
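<p>A sketch of that follow-up step — looping <code>messages().get()</code> over the ids returned by <code>list</code>. The helper name here is hypothetical (not from the answer), and <code>service</code> is assumed to be the authorized Gmail API resource from the question:</p>

```python
# Hypothetical helper: messages().list() only returns 'id'/'threadId', so
# fetch the full resource per id to obtain 'snippet' and the other fields.
# `service` is assumed to be an authorized googleapiclient Gmail resource.
def fetch_full_messages(service, message_ids, user_id='me'):
    full = []
    for msg_id in message_ids:
        msg = service.users().messages().get(userId=user_id, id=msg_id).execute()
        full.append(msg)
    return full
```

<p>This costs one request per message, so it is slow for a large mailbox; limiting what each <code>get</code> returns (e.g. the <code>format</code> parameter, if your client version supports it) can reduce the payload somewhat.</p>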
| 1 | 2016-07-31T02:39:41Z | [
"python",
"gmail-api"
] |
Flask Security rest API | 38,679,884 | <p><strong>Context</strong><br>
I'm creating a quote generation script using Python and Digital Ocean server (Ubuntu 16.04). Here's how it works: </p>
<ol>
<li>End user fills out a form on HubSpot hosted website </li>
<li>the form submission triggers a POST request to my server X.X.X.X:SpecificSocket</li>
<li>that request is read into my python script using dictionaries</li>
<li>using the ReportLab library the script creates a PDF quote for the user</li>
<li>Said PDF is sent to the user in an email (using Postfix) </li>
</ol>
<p>My script is isolated in the server using virtualenv. It is my understanding that POST requests can be intercepted and replicated byte for byte if the server does not have some form of encryption. Since the POST request contains names, emails, addresses even - it is super important that I protect that information from spambots. </p>
<p><strong>Question</strong><br>
I can't seem to find any straightforward information regarding POST requests to a Flask application. Does the Flask framework apply some basic encryption by default, or do I need to add that myself? What recommendations would you make to ensure this is secure and safe for both the end user and my server? </p>
<p><strong>Update</strong><br>
Thanks to <a href="http://stackoverflow.com/users/996056/david-gomes">David Gomes</a> for answering the black and white part of my question. Mucho appreciated</p>
<blockquote>
<p>The POST request will be sent unencrypted and so you should set up SSL (HTTPS). There is some very good documentation on that <a href="http://flask.pocoo.org/snippets/111/" rel="nofollow">here</a> which allows you to set up OpenSSL to serve requests with HTTPS with Flask only. Without this, all your requests are prone to being listened to by third parties. </p>
</blockquote>
<p><a href="http://stackoverflow.com/users/4675937/roy">Roy</a>, thanks for linking that digital ocean tutorial. I was previously using <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-16-04" rel="nofollow">this one</a> and had very little luck with actually getting nginx and uWSGI to talk properly. Hopefully I will have more luck here and I'll update with the full solution once I get things up and running! </p>
| 2 | 2016-07-31T01:07:40Z | 38,680,009 | <p>The POST request will be sent <strong>unencrypted</strong> and so you should set up SSL (HTTPS). There is some very good documentation on that <a href="http://flask.pocoo.org/snippets/111/" rel="nofollow">here</a> which allows you to set up OpenSSL to serve requests with HTTPS with Flask only. Without this, all your requests are prone to being listened to by third parties.</p>
| 0 | 2016-07-31T01:42:21Z | [
"python",
"security",
"post",
"flask"
] |
Flask Security rest API | 38,679,884 | <p><strong>Context</strong><br>
I'm creating a quote generation script using Python and Digital Ocean server (Ubuntu 16.04). Here's how it works: </p>
<ol>
<li>End user fills out a form on HubSpot hosted website </li>
<li>the form submission triggers a POST request to my server X.X.X.X:SpecificSocket</li>
<li>that request is read into my python script using dictionaries</li>
<li>using the ReportLab library the script creates a PDF quote for the user</li>
<li>Said PDF is sent to the user in an email (using Postfix) </li>
</ol>
<p>My script is isolated in the server using virtualenv. It is my understanding that POST requests can be intercepted and replicated byte for byte if the server does not have some form of encryption. Since the POST request contains names, emails, addresses even - it is super important that I protect that information from spambots. </p>
<p><strong>Question</strong><br>
I can't seem to find any straightforward information regarding POST requests to a Flask application. Does the Flask framework apply some basic encryption by default, or do I need to add that myself? What recommendations would you make to ensure this is secure and safe for both the end user and my server? </p>
<p><strong>Update</strong><br>
Thanks to <a href="http://stackoverflow.com/users/996056/david-gomes">David Gomes</a> for answering the black and white part of my question. Mucho appreciated</p>
<blockquote>
<p>The POST request will be sent unencrypted and so you should set up SSL (HTTPS). There is some very good documentation on that <a href="http://flask.pocoo.org/snippets/111/" rel="nofollow">here</a> which allows you to set up OpenSSL to serve requests with HTTPS with Flask only. Without this, all your requests are prone to being listened to by third parties. </p>
</blockquote>
<p><a href="http://stackoverflow.com/users/4675937/roy">Roy</a>, thanks for linking that digital ocean tutorial. I was previously using <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-16-04" rel="nofollow">this one</a> and had very little luck with actually getting nginx and uWSGI to talk properly. Hopefully I will have more luck here and I'll update with the full solution once I get things up and running! </p>
| 2 | 2016-07-31T01:07:40Z | 38,685,457 | <p>Here's a <a href="https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04" rel="nofollow">guide</a> from Digital Ocean on how to set up SSL. If you want to do it via Werkzeug's built-in development server, try this:</p>
<pre><code>if __name__ == "__main__":
context = ('cert.crt', 'key.key')
app.run(host='0.0.0.0', port=80, ssl_context=context, threaded=True, debug=True)
</code></pre>
<p>Note that the <code>ssl_context</code> tuple should observe the order <em>cert, key</em>. Read <a href="http://werkzeug.pocoo.org/docs/0.10/serving/" rel="nofollow">here</a> on the <code>ssl_context</code> parameter.</p>
| 0 | 2016-07-31T15:22:09Z | [
"python",
"security",
"post",
"flask"
] |
How Does Deque Work in Python | 38,679,914 | <p>I am having trouble understanding how the deque works in the snippet of code below, while trying to recreate a queue and a stack in Python.</p>
<p><strong>Stack Example - Understood</strong></p>
<pre><code>stack = ["a", "b", "c"]
# push operation
stack.append("e")
print(stack)
# pop operation
stack.pop()
print(stack)
</code></pre>
<p>As expected when pushing and popping, the "e" goes Last In, First Out (LIFO). My question is with the example below.</p>
<p><strong>Queue Example - Not Understanding</strong></p>
<pre><code>from collections import deque
dq = deque(['a','b','c'])
print(dq)
# push
dq.append('e')
print(dq)
# pop
dq.pop()
print(dq)
</code></pre>
<p>When pushing and popping, the "e" goes Last In, First Out (LIFO). Shouldn't it be First In, First Out (FIFO)?</p>
| 1 | 2016-07-31T01:14:11Z | 38,679,934 | <p><a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow">A deque is a generalization of a stack and a queue (it is short for "double-ended queue")</a>.</p>
<p>Thus, the pop() operation still causes it to act like a stack, just as it would have as a list. To make it act like a queue, use the popleft() command. Deques are made to support both behaviors, and this way the pop() function is consistent across data structures. In order to make the deque act like a queue, you must use the functions that correspond to queues. So, replace pop() with popleft() in your second example, and you should see the FIFO behavior that you expect.</p>
<p>Deques also support a maximum length (<code>maxlen</code>): when you append items beyond that length, items are "dropped" off the opposite end so the deque keeps its maximum size.</p>
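<p>A short, standard-library-only illustration of both ends:</p>

```python
from collections import deque

dq = deque(['a', 'b', 'c'])
dq.append('e')             # push on the right

assert dq.pop() == 'e'     # stack (LIFO): the last item in comes out first
assert dq.popleft() == 'a' # queue (FIFO): the first item in comes out first
assert list(dq) == ['b', 'c']

# maxlen: appending past the limit drops items from the opposite end
limited = deque(['a', 'b', 'c'], maxlen=3)
limited.append('d')        # 'a' falls off the left
assert list(limited) == ['b', 'c', 'd']
```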
| 4 | 2016-07-31T01:18:50Z | [
"python",
"queue"
] |
Beautiful Soup just extracts the header of a table | 38,680,057 | <p>I want to extract information from the table on the following website using Beautiful Soup in Python 3.5. </p>
<pre><code>http://www.askapatient.com/viewrating.asp?drug=19839&name=ZOLOFT
</code></pre>
<p>I have to save the web-page first, since my program needs to work off-line. </p>
<p>I saved the web-page on my computer and used the following code to extract the table information. But the problem is that the code just extracts the heading of the table. </p>
<p>This is my code: </p>
<pre><code>from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
url = "file:///Users/MD/Desktop/ZoloftPage01.html"
home_page= urlopen(url)
soup = BeautifulSoup(home_page, "html.parser")
table = soup.find("table", attrs={"class":"ratingsTable" } )
comments = [td.get_text() for td in table.findAll("td")]
print(comments)
</code></pre>
<p>And this is the output of the code: </p>
<pre><code>['RATING', '\xa0 REASON', 'SIDE EFFECTS FOR ZOLOFT', 'COMMENTS', 'SEX', 'AGE', 'DURATION/DOSAGE', 'DATE ADDED ', '\xa0â]
</code></pre>
<p>I need all the information in the table's rows.
Thanks for your help!</p>
| 1 | 2016-07-31T01:51:59Z | 38,684,961 | <p>This is because of the <em>broken HTML</em> of the page. You need to switch to a more <em>lenient parser</em> like <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser" rel="nofollow"><code>html5lib</code></a>. Here is what works for me:</p>
<pre><code>from pprint import pprint
import requests
from bs4 import BeautifulSoup
url = "http://www.askapatient.com/viewrating.asp?drug=19839&name=ZOLOFT"
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'})
# HTML parsing part
soup = BeautifulSoup(response.content, "html5lib")
table = soup.find("table", attrs={"class":"ratingsTable"})
comments = [[td.get_text() for td in row.find_all("td")]
for row in table.find_all("tr")]
pprint(comments)
</code></pre>
| 1 | 2016-07-31T14:25:23Z | [
"python",
"python-3.x",
"beautifulsoup",
"bs4"
] |
How do I make a new line every use in openpyxl? | 38,680,073 | <p>I am trying to set up an easy way to mark my business' records through a python program and I am using the module openpyxl for the excel sheets part and I am wondering <strong><em>how I can make it so every time I use the program it uses the next line in the excel sheet</em></strong>. Thank you!</p>
<pre><code>from openpyxl import Workbook
wb = Workbook()
# grab the active worksheet
ws = wb.active
item = raw_input('Item: ')
sold = raw_input('Sold for: ')
percentage = raw_input('Percentage (in decimals): ')
date = raw_input('Date of Sale: ')
customer = raw_input('Customer: ')
# Data can be assigned directly to cells
ws['B2'] = item
ws['C2'] = sold
ws['D2'] = percentage
ws['E2'] = date
ws['F2'] = customer
wb.save("sample.xlsx")
</code></pre>
| 1 | 2016-07-31T01:56:14Z | 38,680,095 | <p>You can use <a href="http://openpyxl.readthedocs.io/en/default/_modules/openpyxl/worksheet/worksheet.html" rel="nofollow"><code>ws.max_row</code></a> here. Also make sure you load the previously saved file instead of opening up a new file each time.</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook('sample.xlsx')
# grab the active worksheet
ws = wb.active
item = raw_input('Item: ')
sold = raw_input('Sold for: ')
percentage = raw_input('Percentage (in decimals): ')
date = raw_input('Date of Sale: ')
customer = raw_input('Customer: ')
# Data can be assigned directly to cells
input_row = ws.max_row + 1
ws['B{}'.format(input_row)] = item
ws['C{}'.format(input_row)] = sold
ws['D{}'.format(input_row)] = percentage
ws['E{}'.format(input_row)] = date
ws['F{}'.format(input_row)] = customer
wb.save("sample.xlsx")
</code></pre>
<p>You also might consider implementing a while loop here:</p>
<pre><code>import openpyxl
enter_more = 'y'
while enter_more == 'y':
wb = openpyxl.load_workbook('sample.xlsx')
# grab the active worksheet
ws = wb.active
item = raw_input('Item: ')
sold = raw_input('Sold for: ')
percentage = raw_input('Percentage (in decimals): ')
date = raw_input('Date of Sale: ')
customer = raw_input('Customer: ')
# Data can be assigned directly to cells
input_row = ws.max_row + 1
ws['B{}'.format(input_row)] = item
ws['C{}'.format(input_row)] = sold
ws['D{}'.format(input_row)] = percentage
ws['E{}'.format(input_row)] = date
ws['F{}'.format(input_row)] = customer
wb.save("sample.xlsx")
enter_more = raw_input('Enter "y" to enter more data...').lower()
</code></pre>
<p>Edit:<br>
As @CharlieClark mentions in a comment you can just use <code>.append()</code>:</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook('sample.xlsx')
# grab the active worksheet
ws = wb.active
item = raw_input('Item: ')
sold = raw_input('Sold for: ')
percentage = raw_input('Percentage (in decimals): ')
date = raw_input('Date of Sale: ')
customer = raw_input('Customer: ')
# Data can be assigned directly to cells
ws.append([None, item, sold, percentage, date, customer])
wb.save("sample.xlsx")
</code></pre>
| 1 | 2016-07-31T01:58:46Z | [
"python",
"excel",
"openpyxl"
] |
using count method to count a certain word in text file | 38,680,111 | <p>I'm trying to count the number of times the word 'the' appears in two books saved as text files. The code I'm running returns zero for each book.</p>
<p>Here's my code:</p>
<pre><code>def word_count(filename):
"""Count specified words in a text"""
try:
with open(filename) as f_obj:
contents = f_obj.readlines()
for line in contents:
word_count = line.lower().count('the')
print (word_count)
except FileNotFoundError:
msg = "Sorry, the file you entered, " + filename + ", could not be found."
print (msg)
dracula = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\dracula.txt'
siddhartha = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\siddhartha.txt'
word_count(dracula)
word_count(siddhartha)
</code></pre>
<p>What am I doing wrong here?</p>
| 1 | 2016-07-31T02:02:05Z | 38,680,137 | <p>Unless the word 'the' appears on the last line of each file, you'll see zeros.</p>
<p>You likely want to initialize the <code>word_count</code> variable to zero then use augmented addition (<code>+=</code>):</p>
<p>For example:</p>
<pre><code>def word_count(filename):
"""Count specified words in a text"""
try:
word_count = 0 # <- change #1 here
with open(filename) as f_obj:
contents = f_obj.readlines()
for line in contents:
word_count += line.lower().count('the') # <- change #2 here
print(word_count)
except FileNotFoundError:
msg = "Sorry, the file you entered, " + filename + ", could not be found."
print(msg)
dracula = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\dracula.txt'
siddhartha = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\siddhartha.txt'
word_count(dracula)
word_count(siddhartha)
</code></pre>
<p>Augmented addition isn't necessary, just helpful. This line:</p>
<pre><code>word_count += line.lower().count('the')
</code></pre>
<p>could be written as</p>
<pre><code>word_count = word_count + line.lower().count('the')
</code></pre>
<p>But you also don't need to read the lines all into memory at once. You can iterate over the lines right from the file object. For example:</p>
<pre><code>def word_count(filename):
"""Count specified words in a text"""
try:
word_count = 0
with open(filename) as f_obj:
for line in f_obj: # <- change here
word_count += line.lower().count('the')
print(word_count)
except FileNotFoundError:
msg = "Sorry, the file you entered, " + filename + ", could not be found."
print(msg)
dracula = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\dracula.txt'
siddhartha = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\siddhartha.txt'
word_count(dracula)
word_count(siddhartha)
</code></pre>
| 1 | 2016-07-31T02:06:20Z | [
"python",
"file",
"text",
"count"
] |
using count method to count a certain word in text file | 38,680,111 | <p>I'm trying to count the number of times the word 'the' appears in two books saved as text files. The code I'm running returns zero for each book.</p>
<p>Here's my code:</p>
<pre><code>def word_count(filename):
"""Count specified words in a text"""
try:
with open(filename) as f_obj:
contents = f_obj.readlines()
for line in contents:
word_count = line.lower().count('the')
print (word_count)
except FileNotFoundError:
msg = "Sorry, the file you entered, " + filename + ", could not be found."
print (msg)
dracula = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\dracula.txt'
siddhartha = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\siddhartha.txt'
word_count(dracula)
word_count(siddhartha)
</code></pre>
<p>What am I doing wrong here?</p>
| 1 | 2016-07-31T02:02:05Z | 38,680,154 | <p>You are re-assigning <code>word_count</code> on each iteration. That means that at the end it will be the same as the number of occurrences of <code>the</code> in the last line of the file. You should be getting the sum. Another thing: should <code>there</code> match? Probably not. You probably want to use <code>line.split()</code>. Also, you can iterate through a file object directly; no need for <code>.readlines()</code>. One last thing: use a generator expression to simplify. My first example is without the generator expression; the second is with it:</p>
<pre><code>def word_count(filename):
with open(filename) as f_obj:
total = 0
for line in f_obj:
total += line.lower().split().count('the')
print(total)
</code></pre>
<pre><code>def word_count(filename):
with open(filename) as f_obj:
total = sum(line.lower().split().count('the') for line in f_obj)
print(total)
</code></pre>
| 3 | 2016-07-31T02:09:22Z | [
"python",
"file",
"text",
"count"
] |
using count method to count a certain word in text file | 38,680,111 | <p>I'm trying to count the number of times the word 'the' appears in two books saved as text files. The code I'm running returns zero for each book.</p>
<p>Here's my code:</p>
<pre><code>def word_count(filename):
"""Count specified words in a text"""
try:
with open(filename) as f_obj:
contents = f_obj.readlines()
for line in contents:
word_count = line.lower().count('the')
print (word_count)
except FileNotFoundError:
msg = "Sorry, the file you entered, " + filename + ", could not be found."
print (msg)
dracula = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\dracula.txt'
siddhartha = 'C:\\Users\\HP\\Desktop\\Programming\\Python\\Python Crash Course\\TEXT files\\siddhartha.txt'
word_count(dracula)
word_count(siddhartha)
</code></pre>
<p>What am I doing wrong here?</p>
| 1 | 2016-07-31T02:02:05Z | 38,680,158 | <pre><code>import os
def word_count(filename):
"""Count specified words in a text"""
if os.path.exists(filename):
if not os.path.isdir(filename):
with open(filename) as f_obj:
            print(f_obj.read().lower().count('the'))
else:
print("is path to folder, not to file '%s'" % filename)
else:
print("path not found '%s'" % filename)
</code></pre>
| 0 | 2016-07-31T02:10:04Z | [
"python",
"file",
"text",
"count"
] |
Executing a while loop while constantly checking sensor input | 38,680,152 | <p>What I'm trying to do is have my RC car respond to user keystrokes and then drive accordingly (forward, reverse, turn left and right, etc.). However, I have also mounted a sensor at the front. I want to constantly measure that reading, and if it's below a threshold, break out of the loop and stop the program. I'm having trouble with constantly reading the sensor.</p>
<p>The program currently just gets one reading when the user enters a keystroke. Please help.</p>
<pre><code>d1 = distance()
while (d1 >= 20):
d1 = distance()
if (d1 <= 20):
drive("stop")
char = getch()
if (char == "w"):
drive("forward")
char""
GPIO.cleanup()
</code></pre>
| 0 | 2016-07-31T02:08:59Z | 38,680,247 | <p>If the <code>distance()</code> function is OK, this should work:</p>
<pre><code>d1 = distance()
while (d1 >= 20):
d1 = distance()
char = getch()
if (char == "w"):
drive("forward")
char = "" #maybe this typo ?
drive("stop") #there is no need for overlaping logic in case of 20
GPIO.cleanup()
</code></pre>
<p>For continuous readings while waiting on input, you will have to use a different approach:</p>
<pre><code>from threading import Thread
import time
dist = 0
def distance(): # your measurement function, run in a background thread
    global dist
    while True: # keep the shared reading up to date
        dist = ...  # your measurement goes here
        time.sleep(0.1)
Thread(target = distance, daemon = True).start() # daemon=True (Python 3) so the thread won't block exit
time.sleep(1) #give it time to do some readings
while(dist >= 20):
char = getch()
if(char == "w"):
drive("forward")
drive("stop")
GPIO.cleanup()
</code></pre>
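<p>A self-contained variant of the same idea that actually runs end to end — a background thread keeps a shared reading fresh while the main loop only inspects it. The sensor here is simulated with a shrinking counter; on the Pi you would swap <code>read_sensor()</code> for the real measurement:</p>

```python
import threading
import time

_reading = {'d': 100}

def read_sensor():
    # Simulated sensor: the distance shrinks on each call, as if an
    # obstacle were approaching. Replace this with the real measurement.
    _reading['d'] -= 10
    return _reading['d']

class SensorPoller:
    """Polls the sensor in a daemon thread and exposes the latest value."""
    def __init__(self, interval=0.01):
        self.distance = read_sensor()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run,
                                        args=(interval,), daemon=True)

    def _run(self, interval):
        while not self._stop.is_set():
            self.distance = read_sensor()
            time.sleep(interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

poller = SensorPoller()
poller.start()
while poller.distance >= 20:   # the main loop never blocks on the sensor
    time.sleep(0.01)           # here you would call getch()/drive() instead
poller.stop()
```

<p>The main loop stays responsive because the (possibly slow) measurement happens in its own thread; stopping via an <code>Event</code> plus <code>join()</code> gives a clean shutdown before <code>GPIO.cleanup()</code>.</p>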
| 0 | 2016-07-31T02:31:26Z | [
"python",
"while-loop",
"raspberry-pi",
"sensor",
"car"
] |
In Python, how do I specify the directory separator in the os.path.join() function? | 38,680,155 | <p>I tried the code below, attempting to change the directory separator to a forward slash <code>/</code>, but it is still stuck with the backslash <code>\</code>. The documentation says the function joins paths using the directory separator <code>os.sep</code>, but this didn't work in my case.</p>
<pre><code>import os
os.sep = '/'
print(os.sep)
print(os.path.join('.', 'path'))
</code></pre>
| 0 | 2016-07-31T02:09:33Z | 38,680,185 | <p>You can take a look at the source code for the different operating systems. For example, the Mac version is:</p>
<pre><code>def join(s, *p):
path = s
for t in p:
if (not s) or isabs(t):
path = t
continue
if t[:1] == ':':
t = t[1:]
if ':' not in path:
path = ':' + path
if path[-1:] != ':':
path = path + ':'
path = path + t
return path
</code></pre>
<p>You can see that it is placed directly into the function. It does not depend on <code>os.sep</code>. Each Python installation includes the <code>os.path</code> functions for every operating system. They are available in the Python directory under <code>macpath.py</code>, <code>ntpath.py</code>, and <code>posixpath.py</code>. If you look at each one, you will notice that the <code>posixpath</code> module has what you want:</p>
<pre><code>import posixpath
print(posixpath.join('.', 'path'))
</code></pre>
| 1 | 2016-07-31T02:17:42Z | [
"python",
"operating-system"
] |
In Python, how do I specify the directory separator in the os.path.join() function? | 38,680,155 | <p>I tried the code below, attempting to change the directory separator to a forward slash <code>/</code>, but it is still stuck with the backslash <code>\</code>. The documentation says the function joins paths using the directory separator <code>os.sep</code>, but this didn't work in my case.</p>
<pre><code>import os
os.sep = '/'
print(os.sep)
print(os.path.join('.', 'path'))
</code></pre>
| 0 | 2016-07-31T02:09:33Z | 38,680,204 | <p>I think <a href="http://stackoverflow.com/a/12086287/3165737">this</a> answers the question of why Python uses a particular separator.</p>
<p>That said, you can use the <a href="https://docs.python.org/3.5/library/pathlib.html#module-pathlib" rel="nofollow"><code>pathlib</code></a> module to construct your paths and specify whether you want a POSIX or Windows path.</p>
<p>Example:</p>
<pre><code>from pathlib import PurePosixPath, PureWindowsPath
print(PurePosixPath('some', 'silly', 'long', 'path'))
>> some/silly/long/path
print(PureWindowsPath('some', 'silly', 'long', 'path'))
>> some\silly\long\path
</code></pre>
<p>Make sure you use the <code>pure</code> version of <code>PosixPath</code> and <code>WindowsPath</code>. If you're trying to use <code>WindowsPath</code> on a Posix system, you'll get the following error:</p>
<pre><code>NotImplementedError: cannot instantiate 'WindowsPath' on your system
</code></pre>
<p>This is also specified in the <a href="https://docs.python.org/3.5/library/pathlib.html#module-pathlib" rel="nofollow">docs</a>:</p>
<blockquote>
<p>If you want to manipulate Windows paths on a Unix machine (or vice versa). You cannot instantiate a <code>WindowsPath</code> when running on Unix, but you can instantiate <code>PureWindowsPath</code>.</p>
</blockquote>
| 1 | 2016-07-31T02:22:42Z | [
"python",
"operating-system"
] |
In Python, how do I specify the directory separator in the os.path.join() function? | 38,680,155 | <p>I tried the code below, attempting to change the directory separator to a forward slash <code>/</code>, but it is still stuck with the backslash <code>\</code>. The documentation says the function joins paths using the directory separator <code>os.sep</code>, but this didn't work in my case.</p>
<pre><code>import os
os.sep = '/'
print(os.sep)
print(os.path.join('.', 'path'))
</code></pre>
| 0 | 2016-07-31T02:09:33Z | 38,680,533 | <p>You can replace the function in <code>os.path</code> with your own:</p>
<pre><code>import os
path = "public\\INSTALL\\"
print("Initial unmodified join return: '%s'" % os.path.join('.', path) )
native_os_path_join = os.path.join
def modified_join(*args, **kwargs):
return native_os_path_join(*args, **kwargs).replace('\\', '/')
os.path.join = modified_join
print("Modified join return: '%s'" % os.path.join('.', path) )
</code></pre>
<p>Output:</p>
<pre><code>Initial unmodified join return: '.\public\INSTALL\'
Modified join return: './public/INSTALL/'
</code></pre>
| 0 | 2016-07-31T03:38:36Z | [
"python",
"operating-system"
] |
Python: TypeError: unbound method, must be called with (class) instance | 38,680,172 | <p>Could someone help me understand this bug? Inside my class "Creature" I have this:</p>
<pre><code>def mate_action(self):
if self.mate == 1:
for creature in creature_list:
if creature.mate == 1:
self.str_nb = (self.str + creature.str) / 2
self.attr_nb = (self.attr + creature.attr) / 2
self.cons_nb = (self.cons + creature.cons) / 2
self.size_nb = (self.size + creature.size) / 2
creature.mate = 0
creature_list.append(Creature)
for creature in creature_list:
if creature.alive == 0:
creature.alive = 1
creature.str = self.str_nb
creature.attr = self.attr_nb
creature.cons = self.cons_nb
creature.size = self.size_nb
creature.nb = 1
</code></pre>
<p>When I do this:</p>
<pre><code>for creature in creature_list:
creature.mate_action()
</code></pre>
<p>And I receive this error:</p>
<pre><code>TypeError: unbound method mate_action() must be called with Creature instance as first argument (got nothing instead)
</code></pre>
<p>Thank you for any and all help!</p>
| 0 | 2016-07-31T02:14:12Z | 38,680,223 | <p>The answer was given by @zondo. I needed to change <code>creature_list.append(Creature)</code> to <code>creature_list.append(Creature())</code></p>
| 0 | 2016-07-31T02:27:43Z | [
"python"
] |
Modifying a style attribute in Selenium with execute_script, but the attribute's value doesn't change | 38,680,356 | <p>Using: Selenium with PhantomJS in Python</p>
<p>I need to set a style attribute of an input tag to '' because it is set to "display:None" which prevents me from filling the input with send_keys in Selenium.</p>
<p>I am using execute_script to achieve this. execute_script runs, but the style attribute remains unaltered. Why isn't PhantomJS changing the style attribute?</p>
<p><strong>HTML with style attribute I want to remove</strong>:</p>
<pre><code><input type="password" size="10" id="navbar_password" name="vb_login_password" tabindex="102" class="textbox" style="display: none;">
</code></pre>
<p><strong>Python Selenium script:</strong></p>
<p>Why isn't the style attribute's value being changed by execute_script?</p>
<pre><code>password = driver.find_element_by_name("vb_login_password")
driver.execute_script("arguments[0]['style'] = arguments[1]", password, '')
print(password.get_attribute("style"))
//display:none;
</code></pre>
| 3 | 2016-07-31T02:59:47Z | 38,680,564 | <p>Try the following:</p>
<pre><code>password = driver.find_element_by_name("vb_login_password")
password = driver.execute_script("arguments[0].style.display = 'block'; return arguments[0];", password)
print(password.value_of_css_property("display"))
# now you can set the value using send_keys
password.send_keys("your value")
</code></pre>
<p>Hope it helps...:)</p>
| 0 | 2016-07-31T03:45:21Z | [
"python",
"selenium-webdriver",
"phantomjs"
] |
Python-3x Unicode Print vs Write | 38,680,361 | <p>I created the following program to try to work through what I believe to be a unicode issue:</p>
<pre><code>s = '7/02/16;07:30:00;São Paulo-8;Reachability: 18.5%'
s_type = type(s)
print ("variable s contains: ",s)
print ("variable s type is: ", s_type)
text_file = open("test_file.txt", "w")
text_file.write(s)
text_file.close()
</code></pre>
<p>The print statements provide the following output when the program is run:</p>
<pre class="lang-none prettyprint-override"><code>variable s contains: 7/02/16;07:30:00;São Paulo-8;Reachability: 18.5%
variable s type is: <class 'str'>
</code></pre>
<p>When it comes time to write to the file, I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/tglund/Projects/Python/thousandeyes/unicode.py", line 6, in <module>
text_file.write(s)
UnicodeEncodeError: 'ascii' codec can't encode character '\xe3' in position 18: ordinal not in range(128)
</code></pre>
<p>I have read the unicode documentation from beginning to end at <a href="https://docs.python.org/3/howto/unicode.html" rel="nofollow">https://docs.python.org/3/howto/unicode.html</a></p>
<p>but have not successfully cracked the code.</p>
<p>I can copy the string assigned to the variable <code>s</code> and paste it into a file, save the file, then 'more' the file. I am on a Mac, and the string shows on my screen correctly. The Python print statement shows the string correctly.</p>
<p>My goal of all this is to create a csv text file where the delimiter is ";". The issue appears to be the accented second character in the location field. The string for <code>s</code> contains the following fields: Date, Location, Message</p>
<p>Any assistance in how to resolve the issue would be greatly appreciated.</p>
| 1 | 2016-07-31T03:01:40Z | 38,680,479 | <p>Your error can be reproduced, even on systems that have different defaults, with something like:</p>
<pre><code>text_file = open("test_file.txt", "w", encoding='ascii')
text_file.write('\xe3')
</code></pre>
<p>The issue is that your default text encoding is ascii. Or at least that's what Python is understanding it to be. See "encoding" under <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow"><code>open()</code></a>, and <a href="https://docs.python.org/3/library/locale.html#locale.getpreferredencoding" rel="nofollow"><code>locale.getpreferredencoding()</code></a>.</p>
<p>The easiest way to fix this is to tell Python to open your file with a compatible encoding. For example UTF-8 (because your character is unicode encoded):</p>
<pre><code>text_file = open("test_file.txt", "w")
# Becomes
text_file = open("test_file.txt", "w", encoding='utf_8')
</code></pre>
<p>And you should be done.</p>
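<p>Since the stated goal is a ";"-delimited csv file, here is a small sketch using the <code>csv</code> module with an explicit encoding (the filename and the split of the string into separate fields are my own illustration, not part of the original answer):</p>

```python
import csv

# The fields from the question's string, as separate values:
row = ['7/02/16', '07:30:00', 'São Paulo-8', 'Reachability: 18.5%']

# newline='' is what the csv docs recommend for file objects given to csv.writer
with open('test_file.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerow(row)

with open('test_file.csv', encoding='utf-8') as f:
    print(f.read().strip())   # 7/02/16;07:30:00;São Paulo-8;Reachability: 18.5%
```

<p>Opening both the writer and the reader with <code>encoding='utf-8'</code> avoids the ascii-codec error entirely, accented characters included.</p>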
| 2 | 2016-07-31T03:28:20Z | [
"python",
"unicode"
] |
How to select several rows in a dataframe based on index in pandas | 38,680,381 | <p>I have a DataFrame object whose label index is not a position integer but a name, how can I extract several rows:</p>
<p>e.g. select the 3rd, 4th row</p>
<pre><code>df.iloc[[2],[3]]
</code></pre>
<p>This gives me an error, telling me I could only extract one row at a time.</p>
| 0 | 2016-07-31T03:06:28Z | 38,681,063 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow">loc</a> is a standard way to extract data via the labels of an index.</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow">iloc</a> is a standard way to extract data via the positions of an index.</p>
<p>Consider the following dataframe <code>df</code></p>
<pre><code>df = pd.DataFrame(np.random.rand(4, 4), list('abcd'), list('ABCD'))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/kSgOW.png" rel="nofollow"><img src="http://i.stack.imgur.com/kSgOW.png" alt="enter image description here"></a></p>
<p>What you said you did seems to work just fine:</p>
<pre><code>df.iloc[[2], [3]]
</code></pre>
<p><a href="http://i.stack.imgur.com/6JT5Y.png" rel="nofollow"><img src="http://i.stack.imgur.com/6JT5Y.png" alt="enter image description here"></a></p>
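<p>To pull several rows at once — the 3rd and 4th from the question — put both positions (or both labels) in a single list. A sketch against the same 4x4 frame as above:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(4, 4), list('abcd'), list('ABCD'))

print(df.iloc[[2, 3]])           # 3rd and 4th rows, by position
print(df.loc[['c', 'd']])        # the same rows, by label
print(df.iloc[[2, 3], [0, 1]])   # those rows restricted to the first two columns
```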
| 0 | 2016-07-31T05:32:30Z | [
"python",
"pandas",
"dataframe"
] |
In Python, bool(a.append(3)) is False. Why? | 38,680,430 | <p>It seems <code>bool(a.append)</code> and <code>bool(a)</code> are both <code>True</code>, so why is <code>bool(a.append(3))</code> <code>False</code>?</p>
<p>My question is from the code here:</p>
<pre><code>class MovingAverage(object):
def __init__(self, size):
self.next = lambda v, q=collections.deque((), size): q.append(v) or 1.*sum(q)/len(q)
</code></pre>
| -1 | 2016-07-31T03:18:13Z | 38,680,496 | <p>In Python, the following things are considered <code>False</code>. In other words, if you call <code>bool</code> with them as arguments, you get <code>False</code> back:</p>
<ul>
<li><code>False</code> itself.</li>
<li><code>0</code>, in integer or floating point form.</li>
<li>empty sequence types; strings, lists, sets, dictionaries, and anything that's a subclass of them</li>
<li><code>None</code></li>
</ul>
<p>The last one is the most important to us here. The reason that <code>bool(a_list.append(3))</code> is <code>False</code> has to do with how <code>append</code> works. The <code>append</code> method updates the existing list. It doesn't return anything in particular. </p>
<p>In Python, any function which does not explicitly return anything implicitly returns the <code>None</code> value. That means that something like this won't work.</p>
<pre><code>my_list = []
for i in range(10):
if i % 2 == 0:
my_list = my_list.append(i)
</code></pre>
<p>That code I just made up will actually throw an exception (<code>AttributeError</code>) the second time the <code>if</code> block gets executed because <code>my_list</code> gets set to <code>None</code>, and <code>None</code> doesn't have an <code>append</code> method, because it's not a list. </p>
<p>And just to be clear, <em>anything</em> not on that list gets considered <code>True</code>. You can change this for custom objects by overriding the <code>__bool__</code> special method (it's called <code>__nonzero__</code> in Python 2).</p>
<p>So let's just finish up by clarifying why <code>bool(a.append)</code> is <code>True</code>. If you leave the parentheses (and argument list) off a method call, the method doesn't get called. So <code>bool(a.append)</code> is just passing the method <code>a.append</code> to <code>bool</code> without calling it.</p>
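<p>A short demo of all of the above, which you can paste into a REPL:</p>

```python
a = [1, 2]

result = a.append(3)       # append mutates a in place and returns None
print(result)              # None
print(a)                   # [1, 2, 3]

print(bool(a.append(4)))   # False, because bool(None) is False
print(bool(a.append))      # True: a bound method object is truthy
print(bool(a))             # True: non-empty list
```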
| 6 | 2016-07-31T03:31:02Z | [
"python",
"boolean"
] |
datastax opscenter 'API' metrics error | 38,680,440 | <p>I am using the datastax opscenter api to retrieve metrics through a python script and trying to match the results with the graphs on opscenter.<br>
While I am trying to get data for 'TBL : LiveDisk Used', as you can see in the graph below:
<a href="http://i.stack.imgur.com/nk3ec.png" rel="nofollow">enter image description here</a></p>
<p>function in python script is as follows :</p>
<pre><code>def diskUsage(url11, cluster_id, start_time, end_time, node_ip1):
p = {'metrics': 'cf-live-disk-used',
'columnfamilies': 'all',
'nodes': node_ip1,
'step': '120',
'start': start_time,
'end': end_time }
url="http://"+url11+"/"+cluster_id+"/metrics/"+node_ip1+"/cf-live-disk-used"
MetricSingleNode = session.get(url, params=p)
DataC = json.loads(MetricSingleNode.content)
print "DataC is ", DataC
</code></pre>
<p>Output:</p>
<pre><code>DataC is {u'{node_ip}': {u'MAX': [[1469930400, None]],
u'AVERAGE': [[1469930400, None]],
u'MIN': [[1469930400, None]]
}
}
</code></pre>
<p>Why the output is none while opscenter is giving the data?</p>
<p>Help will be highly appreciated </p>
| 1 | 2016-07-31T03:19:26Z | 38,680,544 | <p>In your screenshot, opscenter is reading the 1 minute period metrics. Your query is pulling the 2 hour period which may not have data in your start/end range (yet). Try running with <code>step:1</code>.</p>
| 0 | 2016-07-31T03:40:54Z | [
"python",
"apache",
"cassandra",
"datastax",
"opscenter"
] |
Making a GET request JSON with parameters using Python | 38,680,442 | <p>I was wondering how do I make a GET request to a specific url with two query parameters? These query parameters contain two id numbers.
So far I have:</p>
<pre><code>import json, requests
url = 'http://'
requests.post(url)
</code></pre>
<p>But they gave me query parameters first_id=### and last_id=###. I don't know how to include these parameters.</p>
| -2 | 2016-07-31T03:19:56Z | 38,680,450 | <p>To make a GET request you need the <a href="http://docs.python-requests.org/en/master/api/#requests.get" rel="nofollow"><code>get()</code> method</a>, for parameters use <code>params</code> argument:</p>
<pre><code>response = requests.get(url, params={'first_id': 1, 'last_id': 2})
</code></pre>
<hr>
<p>If the response is of a JSON content type, you can use the <a href="http://docs.python-requests.org/en/master/api/#requests.Response.json" rel="nofollow"><code>json()</code></a> shortcut method to get it loaded into a Python object for you:</p>
<pre><code>data = response.json()
print(data)
</code></pre>
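<p>Just to illustrate what <code>params</code> turns into on the wire, the standard library's <code>urllib.parse.urlencode</code> builds the same kind of query string (you don't need to do this yourself when using <code>requests</code>; the URL here is a placeholder):</p>

```python
from urllib.parse import urlencode

base_url = 'http://example.com/items'   # placeholder URL
query = urlencode({'first_id': 1, 'last_id': 2})
print(base_url + '?' + query)           # http://example.com/items?first_id=1&last_id=2
```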
| 2 | 2016-07-31T03:22:03Z | [
"python",
"json",
"get"
] |
Polar to Cartesian returning strange results | 38,680,461 |
<p>I'm not sure if it's my maths or my Python which isn't up to scratch... but the code below is giving unexpected results. It still plots a circle of points, but in a strange order and a non-uniform manner (even allowing for int rounding errors) i.e. the points aren't sequential around the circle as degree increases, they jump to entirely different points on the circle?</p>
<pre class="lang-python prettyprint-override"><code>def pol2cart(distance, angle):
x = distance * numpy.cos(angle)
y = distance * numpy.sin(angle)
return(x, y)
for fixedangle in xrange(0,360,10):
x, y = pol2cart(50,fixedangle)
print str(int(x)) + ", " + str(int(y)) + " " + str(fixedangle) + "\xb0"
</code></pre>
<p>A sample of the result:</p>
<pre class="lang-python prettyprint-override"><code>50, 0 0°
-41, -27 10°
20, 45 20°
7, -49 30°
-33, 37 40°
48, -13 50°
-47, -15 60°
31, 38 70°
-5, -49 80°
-22, 44 90°
43, -25 100°
-49, -2 110°
40, 29 120°
-18, -46 130°
-9, 49 140°
34, -35 150°
-48, 10 160°
46, 17 170°
-29, -40 180°
</code></pre>
<p>If 0 degrees = (50,0) then I would expect 10 degrees to be around (49,9) not (-41,-27). And i'd expect 20 degrees to be ~(47,18) not (20,45)... etc. Just with those three examples you can see the Cartesian point has jumped to a completely different quadrant then back again. Even if my ideas about rotation direction, or starting point are completely wrong, I still expect each point to be rotationally sequential either clockwise or anti-clockwise from the 0 degree start point. Plus you can tell from the "square" angles 90 and 180 that the Cartesian point is far from perfectly horizontal or vertical in relation to a (0,0) central point?</p>
| 2 | 2016-07-31T03:24:06Z | 38,680,481 | <p>It looks like numpy is working in radians, not degrees.</p>
| 4 | 2016-07-31T03:28:38Z | [
"python",
"python-2.7",
"math",
"coordinates",
"polar-coordinates"
] |
Polar to Cartesian returning strange results | 38,680,461 |
<p>I'm not sure if it's my maths or my Python which isn't up to scratch... but the code below is giving unexpected results. It still plots a circle of points, but in a strange order and a non-uniform manner (even allowing for int rounding errors) i.e. the points aren't sequential around the circle as degree increases, they jump to entirely different points on the circle?</p>
<pre class="lang-python prettyprint-override"><code>def pol2cart(distance, angle):
x = distance * numpy.cos(angle)
y = distance * numpy.sin(angle)
return(x, y)
for fixedangle in xrange(0,360,10):
x, y = pol2cart(50,fixedangle)
print str(int(x)) + ", " + str(int(y)) + " " + str(fixedangle) + "\xb0"
</code></pre>
<p>A sample of the result:</p>
<pre class="lang-python prettyprint-override"><code>50, 0 0°
-41, -27 10°
20, 45 20°
7, -49 30°
-33, 37 40°
48, -13 50°
-47, -15 60°
31, 38 70°
-5, -49 80°
-22, 44 90°
43, -25 100°
-49, -2 110°
40, 29 120°
-18, -46 130°
-9, 49 140°
34, -35 150°
-48, 10 160°
46, 17 170°
-29, -40 180°
</code></pre>
<p>If 0 degrees = (50,0) then I would expect 10 degrees to be around (49,9) not (-41,-27). And i'd expect 20 degrees to be ~(47,18) not (20,45)... etc. Just with those three examples you can see the Cartesian point has jumped to a completely different quadrant then back again. Even if my ideas about rotation direction, or starting point are completely wrong, I still expect each point to be rotationally sequential either clockwise or anti-clockwise from the 0 degree start point. Plus you can tell from the "square" angles 90 and 180 that the Cartesian point is far from perfectly horizontal or vertical in relation to a (0,0) central point?</p>
| 2 | 2016-07-31T03:24:06Z | 38,680,489 | <p>numpy's <code>sin()</code> and <code>cos()</code> functions take inputs in radians instead of degrees. Converting the degrees to radians should solve your problem.</p>
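<p>A minimal sketch of that fix, shown here with the standard library's <code>math.radians</code> (<code>numpy.radians</code>/<code>numpy.deg2rad</code> do the same conversion and also accept whole arrays):</p>

```python
import math

def pol2cart(distance, angle_deg):
    angle = math.radians(angle_deg)   # degrees -> radians before cos/sin
    return distance * math.cos(angle), distance * math.sin(angle)

for fixedangle in range(0, 40, 10):
    x, y = pol2cart(50, fixedangle)
    print(int(x), int(y), fixedangle)
```

<p>With the conversion in place the points move sequentially around the circle as the angle increases, starting from (50, 0).</p>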
| 2 | 2016-07-31T03:30:20Z | [
"python",
"python-2.7",
"math",
"coordinates",
"polar-coordinates"
] |
Polar to Cartesian returning strange results | 38,680,461 |
<p>I'm not sure if it's my maths or my Python which isn't up to scratch... but the code below is giving unexpected results. It still plots a circle of points, but in a strange order and a non-uniform manner (even allowing for int rounding errors) i.e. the points aren't sequential around the circle as degree increases, they jump to entirely different points on the circle?</p>
<pre class="lang-python prettyprint-override"><code>def pol2cart(distance, angle):
x = distance * numpy.cos(angle)
y = distance * numpy.sin(angle)
return(x, y)
for fixedangle in xrange(0,360,10):
x, y = pol2cart(50,fixedangle)
print str(int(x)) + ", " + str(int(y)) + " " + str(fixedangle) + "\xb0"
</code></pre>
<p>A sample of the result:</p>
<pre class="lang-python prettyprint-override"><code>50, 0 0°
-41, -27 10°
20, 45 20°
7, -49 30°
-33, 37 40°
48, -13 50°
-47, -15 60°
31, 38 70°
-5, -49 80°
-22, 44 90°
43, -25 100°
-49, -2 110°
40, 29 120°
-18, -46 130°
-9, 49 140°
34, -35 150°
-48, 10 160°
46, 17 170°
-29, -40 180°
</code></pre>
<p>If 0 degrees = (50,0) then I would expect 10 degrees to be around (49,9) not (-41,-27). And i'd expect 20 degrees to be ~(47,18) not (20,45)... etc. Just with those three examples you can see the Cartesian point has jumped to a completely different quadrant then back again. Even if my ideas about rotation direction, or starting point are completely wrong, I still expect each point to be rotationally sequential either clockwise or anti-clockwise from the 0 degree start point. Plus you can tell from the "square" angles 90 and 180 that the Cartesian point is far from perfectly horizontal or vertical in relation to a (0,0) central point?</p>
| 2 | 2016-07-31T03:24:06Z | 38,680,499 | <p>Your code is fine; the only problem is that numpy.cos(angle) takes its argument in radians, not degrees. You can either change the tester to range from 0 to <code>2*numpy.pi</code> or convert the degrees to radians by adding <code>angle = numpy.pi*angle/180</code> on line 2.</p>
| 2 | 2016-07-31T03:31:42Z | [
"python",
"python-2.7",
"math",
"coordinates",
"polar-coordinates"
] |
pickle.dump blocks the main thread in multithreaded python application because of GIL | 38,680,485 | <p>Currently I'm using python 3.4.3 and developing PyQt5 application.</p>
<p>In my app, there's a QThread, and some large object(100MB) is being (pickle) dumped by the thread.</p>
<p>However, dumping that object requires 1~2 seconds, and it blocks the main thread about 1~2 seconds because of GIL.</p>
<p>How can I solve this issue(non-blocking the main thread)?</p>
<p>I think that serializing my object to a string takes time and requires the GIL, which eventually blocks the main thread. (As far as I know, writing to a file does not require the GIL.)</p>
<p>I'm thinking about using Cython, but since I'm the beginner in cython, I'm not sure whether or not using Cython will solve this issue.</p>
<p>Is there any way to work around this issue?</p>
<p>Edit: I tried <code>multiprocessing</code> module, but the intercommunication time (passing shared memory variables across processes) also takes about 1~2 seconds, which eventually gives no advantages.</p>
| 0 | 2016-07-31T03:29:39Z | 38,712,208 | <p>I solved my issue.</p>
<p>The solution was </p>
<ol>
<li><p>Making my object really simple. In my case, I converted my object to an array of simple stringified dictionaries.</p></li>
<li><p>I used <code>file.write(stringified_dictionaries)</code> directly instead of using pickle. This reduced the time spent serializing the python object to a string.</p></li>
</ol>
<p>Since disk I/O does not require the GIL in python, the only moment the main thread blocked was while converting my object, which was really short.</p>
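<p>For reference, a sketch of that pattern with <code>json.dumps</code> per record (the records here are made-up toy data; the point is that each per-record stringify step is tiny, so the GIL is only held briefly between writes):</p>

```python
import json

records = [{'id': i, 'value': i * i} for i in range(3)]   # toy stand-in data

with open('dump.txt', 'w') as f:
    for rec in records:
        f.write(json.dumps(rec) + '\n')   # stringify one small record at a time

with open('dump.txt') as f:
    print([json.loads(line) for line in f])
```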
| 1 | 2016-08-02T05:15:13Z | [
"python",
"multithreading",
"cython",
"pickle",
"gil"
] |
How to vstack efficiently a sequence of large numpy array chunks? | 38,680,508 | <p>I am generating a sequence of numpy arrays as follows:</p>
<pre><code>def chunker(seq, size):
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
for i in chunker(X,10000):
e = function(i)
    print('new matrix',e)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
...
new matrix (10000, 3208)
</code></pre>
<p>I would like to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">vstack</a> the above <code>n</code> matrices into a single one. Thus, I tried the following:</p>
<pre><code> X = np.vstack(e)
</code></pre>
<p>However, when I print <code>X</code> I am getting again:</p>
<pre><code>new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
new matrix (10000, 3208)
...
new matrix (10000, 3208)
</code></pre>
<p>Instead of a new vstacked single matrix. Any idea of how to vstack this sequence of numpy arrays?</p>
<p><strong>Update</strong></p>
<p>From jedward's answer I edited my code as follows:</p>
<pre><code>import numpy as np

def chunker(seq, size):
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
for (r,i) in enumerate(chunker(X,10000)):
e = function(i)
print('new matrix',e)
X[r,:] = e
print(X)
</code></pre>
| 1 | 2016-07-31T03:34:02Z | 38,680,587 | <p>One way, although probably not the most efficient, would be to create a list of the lists you want to stack, then stack once, outside the loop. </p>
<p>For example:</p>
<pre><code>import numpy as np
def chunker(seq, size):
return (seq[pos:pos + size] for pos in range(0, len(seq), size))
# Some fake function (n.b. this is a silly way to reverse a list)
def function(arr):
arr.reverse()
return arr
# Generate fake X
X = list(range(100))
chunks = []
for i in chunker(X,10):
e = function(i)
print('new matrix',e)
chunks.append(e)
merged = np.vstack(chunks)
print(merged)
</code></pre>
<p>Output:</p>
<pre>
new matrix [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
new matrix [19, 18, 17, 16, 15, 14, 13, 12, 11, 10]
new matrix [29, 28, 27, 26, 25, 24, 23, 22, 21, 20]
new matrix [39, 38, 37, 36, 35, 34, 33, 32, 31, 30]
new matrix [49, 48, 47, 46, 45, 44, 43, 42, 41, 40]
new matrix [59, 58, 57, 56, 55, 54, 53, 52, 51, 50]
new matrix [69, 68, 67, 66, 65, 64, 63, 62, 61, 60]
new matrix [79, 78, 77, 76, 75, 74, 73, 72, 71, 70]
new matrix [89, 88, 87, 86, 85, 84, 83, 82, 81, 80]
new matrix [99, 98, 97, 96, 95, 94, 93, 92, 91, 90]
[[ 9 8 7 6 5 4 3 2 1 0]
[19 18 17 16 15 14 13 12 11 10]
[29 28 27 26 25 24 23 22 21 20]
[39 38 37 36 35 34 33 32 31 30]
[49 48 47 46 45 44 43 42 41 40]
[59 58 57 56 55 54 53 52 51 50]
[69 68 67 66 65 64 63 62 61 60]
[79 78 77 76 75 74 73 72 71 70]
[89 88 87 86 85 84 83 82 81 80]
[99 98 97 96 95 94 93 92 91 90]]
</pre>
<p>Or <em>not</em> creating an intermediate list:</p>
<pre><code>merged = np.zeros([0,10])
for i in chunker(X,10):
e = function(i)
print('new matrix',e)
merged = np.vstack([merged, e])
print(merged)
</code></pre>
<p>But the most efficient would be to initialize a numpy array prior to the loop, and then set rows of that array inside the loop. You'd need to calculate the dimensions of the final <code>merged</code> array first (here I just created it as a 10x10 matrix, because I knew the size).</p>
<pre><code>merged = np.zeros([10,10])
for (r,i) in enumerate(chunker(X,10)):
e = function(i)
print('new matrix',e)
merged[r,:] = e
print(merged)
</code></pre>
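<p>One caveat for the original question: there each chunk produces a whole <code>(10000, 3208)</code> block rather than a single row, so the preallocated version would fill row <em>slices</em>. A sketch with made-up small sizes:</p>

```python
import numpy as np

def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

X = np.arange(40.0).reshape(20, 2)   # toy input: 20 rows instead of n*10000
step = 5

merged = np.empty_like(X)            # preallocate the final shape once
for k, chunk in enumerate(chunker(X, step)):
    e = chunk * 2                    # stand-in for `function`
    merged[k * step:(k + 1) * step, :] = e

print(merged.shape)                  # (20, 2)
```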
| 1 | 2016-07-31T03:50:14Z | [
"python",
"python-3.x",
"numpy",
"scipy"
] |
ImportError: No module named google.protobuf | 38,680,593 | <p>I am following this guide (<a href="https://developers.google.com/protocol-buffers/docs/pythontutorial" rel="nofollow">https://developers.google.com/protocol-buffers/docs/pythontutorial</a>) and using the exact sample of addressbook.proto.</p>
<p>I post the content of the compiler-generated addressbook_pb2.py file below as well.
When I run the following simple program, there is an error saying it cannot find google.protobuf. Any ideas how to resolve this issue? Thanks.</p>
<p>BTW, using Python 2.7 on Mac OSX.</p>
<pre><code>from addressbook_pb2 import Person
p = Person()
p.email = "abc"
print p.email
</code></pre>
<p><strong>Here is the automated generated file addressbook_pb2.py,</strong></p>
<pre><code># Generated by the protocol buffer compiler. DO NOT EDIT!
# source: addressbook.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='addressbook.proto',
package='tutorial',
syntax='proto2',
serialized_pb=_b('\n\x11\x61\x64\x64ressbook.proto\x12\x08tutorial\"\xda\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\n\n\x02id\x18\x02 \x02(\x05\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12+\n\x05phone\x18\x04 \x03(\x0b\x32\x1c.tutorial.Person.PhoneNumber\x1aM\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x02(\t\x12.\n\x04type\x18\x02 \x01(\x0e\x32\x1a.tutorial.Person.PhoneType:\x04HOME\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"/\n\x0b\x41\x64\x64ressBook\x12 \n\x06person\x18\x01 \x03(\x0b\x32\x10.tutorial.Person')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_PERSON_PHONETYPE = _descriptor.EnumDescriptor(
name='PhoneType',
full_name='tutorial.Person.PhoneType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='MOBILE', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HOME', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='WORK', index=2, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=207,
serialized_end=250,
)
_sym_db.RegisterEnumDescriptor(_PERSON_PHONETYPE)
_PERSON_PHONENUMBER = _descriptor.Descriptor(
name='PhoneNumber',
full_name='tutorial.Person.PhoneNumber',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='number', full_name='tutorial.Person.PhoneNumber.number', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='type', full_name='tutorial.Person.PhoneNumber.type', index=1,
number=2, type=14, cpp_type=8, label=1,
has_default_value=True, default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=128,
serialized_end=205,
)
_PERSON = _descriptor.Descriptor(
name='Person',
full_name='tutorial.Person',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='tutorial.Person.name', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='id', full_name='tutorial.Person.id', index=1,
number=2, type=5, cpp_type=1, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='email', full_name='tutorial.Person.email', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='phone', full_name='tutorial.Person.phone', index=3,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[_PERSON_PHONENUMBER, ],
enum_types=[
_PERSON_PHONETYPE,
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=32,
serialized_end=250,
)
_ADDRESSBOOK = _descriptor.Descriptor(
name='AddressBook',
full_name='tutorial.AddressBook',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='person', full_name='tutorial.AddressBook.person', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=252,
serialized_end=299,
)
_PERSON_PHONENUMBER.fields_by_name['type'].enum_type = _PERSON_PHONETYPE
_PERSON_PHONENUMBER.containing_type = _PERSON
_PERSON.fields_by_name['phone'].message_type = _PERSON_PHONENUMBER
_PERSON_PHONETYPE.containing_type = _PERSON
_ADDRESSBOOK.fields_by_name['person'].message_type = _PERSON
DESCRIPTOR.message_types_by_name['Person'] = _PERSON
DESCRIPTOR.message_types_by_name['AddressBook'] = _ADDRESSBOOK
Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict(
PhoneNumber = _reflection.GeneratedProtocolMessageType('PhoneNumber', (_message.Message,), dict(
DESCRIPTOR = _PERSON_PHONENUMBER,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.Person.PhoneNumber)
))
,
DESCRIPTOR = _PERSON,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.Person)
))
_sym_db.RegisterMessage(Person)
_sym_db.RegisterMessage(Person.PhoneNumber)
AddressBook = _reflection.GeneratedProtocolMessageType('AddressBook', (_message.Message,), dict(
DESCRIPTOR = _ADDRESSBOOK,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.AddressBook)
))
_sym_db.RegisterMessage(AddressBook)
# @@protoc_insertion_point(module_scope)
</code></pre>
<p><strong>Edit 1</strong>,</p>
<p>Tried <code>pip install protobuf</code>, and got the following output,</p>
<pre><code>Requirement already satisfied (use --upgrade to upgrade): protobuf in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.9 in /Users/foo/miniconda2/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/foo/miniconda2/lib/python2.7/site-packages (from protobuf)
</code></pre>
<p>Here is the output of python version,</p>
<pre><code>python -V
Python 2.7.11 :: Continuum Analytics, Inc.
</code></pre>
<p><strong>Edit 2</strong>,</p>
<p>Post exact error message,</p>
<pre><code>Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1531, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 938, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/foo/personal/featureExtraction/protobuf_test.py", line 1, in <module>
from addressbook_pb2 import Person
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_monkey_qt.py", line 71, in patched_import
return original_import(name, *args, **kwargs)
File "/Users/foo/personal/featureExtraction/addressbook_pb2.py", line 6, in <module>
from google.protobuf import descriptor as _descriptor
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_monkey_qt.py", line 71, in patched_import
return original_import(name, *args, **kwargs)
ImportError: No module named google.protobuf
</code></pre>
<p><strong>Edit 3</strong>,</p>
<p>error message when <code>import google</code>,</p>
<p><a href="http://i.stack.imgur.com/v2gS3.png" rel="nofollow"><img src="http://i.stack.imgur.com/v2gS3.png" alt="enter image description here"></a></p>
<p><strong>Edit 4</strong>, </p>
<p>Output of <code>which pip</code>,</p>
<pre><code>which pip
/Users/foo/miniconda2/bin/pip
</code></pre>
<p>Output of <code>sys.executable</code>,</p>
<p>/Users/foo/anaconda/bin/python</p>
<p><strong>Edit 5</strong>,</p>
<pre><code>foo-mn1:featureExtraction foo$ sudo /Users/foo/miniconda2/bin/pip install protobuf
Password:
The directory '/Users/foo/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/foo/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied (use --upgrade to upgrade): protobuf in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.9 in /Users/foo/miniconda2/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/foo/miniconda2/lib/python2.7/site-packages (from protobuf)
foo-mn1:featureExtraction foo$ sudo /Users/foo/miniconda2/bin/pip install google
The directory '/Users/foo/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/foo/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied (use --upgrade to upgrade): google in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): beautifulsoup4 in /Users/foo/miniconda2/lib/python2.7/site-packages (from google)
</code></pre>
| 1 | 2016-07-31T03:51:48Z | 38,680,611 | <p>You should run:</p>
<pre><code>pip install protobuf
</code></pre>
<p>That will install Google protobuf and after that you can run that Python script.</p>
<p>As per <a href="https://pypi.python.org/pypi/protobuf" rel="nofollow">this link</a>.</p>
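<p>Given Edit 4 above — <code>which pip</code> points at miniconda while <code>sys.executable</code> points at anaconda — <code>pip</code> and <code>python</code> may belong to two different installs, so the package lands where the script never looks. A small diagnostic sketch (the install command is left commented out because it modifies your environment):</p>

```python
import sys

# The interpreter that actually runs your script; pip must install into this one.
print(sys.executable)

# Running pip through that same interpreter sidesteps the PATH mismatch:
#   python -m pip install protobuf

try:
    from google.protobuf import descriptor  # the import the traceback failed on
    print('google.protobuf is importable')
except ImportError:
    print('google.protobuf is NOT importable for', sys.executable)
```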
| 1 | 2016-07-31T03:54:19Z | [
"python",
"python-2.7",
"protocol-buffers",
"google-protobuf"
] |
ImportError: No module named google.protobuf | 38,680,593 | <p>I am following this guide (<a href="https://developers.google.com/protocol-buffers/docs/pythontutorial" rel="nofollow">https://developers.google.com/protocol-buffers/docs/pythontutorial</a>) and using the exact sample of addressbook.proto.</p>
<p>I post the content of the compiler-generated addressbook_pb2.py file below as well.
When I run the following simple program, there is an error saying it cannot find google.protobuf. Any ideas how to resolve this issue? Thanks.</p>
<p>BTW, using Python 2.7 on Mac OSX.</p>
<pre><code>from addressbook_pb2 import Person
p = Person()
p.email = "abc"
print p.email
</code></pre>
<p><strong>Here is the automated generated file addressbook_pb2.py,</strong></p>
<pre><code># Generated by the protocol buffer compiler. DO NOT EDIT!
# source: addressbook.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='addressbook.proto',
package='tutorial',
syntax='proto2',
serialized_pb=_b('\n\x11\x61\x64\x64ressbook.proto\x12\x08tutorial\"\xda\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\n\n\x02id\x18\x02 \x02(\x05\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12+\n\x05phone\x18\x04 \x03(\x0b\x32\x1c.tutorial.Person.PhoneNumber\x1aM\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x02(\t\x12.\n\x04type\x18\x02 \x01(\x0e\x32\x1a.tutorial.Person.PhoneType:\x04HOME\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"/\n\x0b\x41\x64\x64ressBook\x12 \n\x06person\x18\x01 \x03(\x0b\x32\x10.tutorial.Person')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_PERSON_PHONETYPE = _descriptor.EnumDescriptor(
name='PhoneType',
full_name='tutorial.Person.PhoneType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='MOBILE', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HOME', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='WORK', index=2, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=207,
serialized_end=250,
)
_sym_db.RegisterEnumDescriptor(_PERSON_PHONETYPE)
_PERSON_PHONENUMBER = _descriptor.Descriptor(
name='PhoneNumber',
full_name='tutorial.Person.PhoneNumber',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='number', full_name='tutorial.Person.PhoneNumber.number', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='type', full_name='tutorial.Person.PhoneNumber.type', index=1,
number=2, type=14, cpp_type=8, label=1,
has_default_value=True, default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=128,
serialized_end=205,
)
_PERSON = _descriptor.Descriptor(
name='Person',
full_name='tutorial.Person',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='tutorial.Person.name', index=0,
number=1, type=9, cpp_type=9, label=2,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='id', full_name='tutorial.Person.id', index=1,
number=2, type=5, cpp_type=1, label=2,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='email', full_name='tutorial.Person.email', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='phone', full_name='tutorial.Person.phone', index=3,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[_PERSON_PHONENUMBER, ],
enum_types=[
_PERSON_PHONETYPE,
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=32,
serialized_end=250,
)
_ADDRESSBOOK = _descriptor.Descriptor(
name='AddressBook',
full_name='tutorial.AddressBook',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='person', full_name='tutorial.AddressBook.person', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=252,
serialized_end=299,
)
_PERSON_PHONENUMBER.fields_by_name['type'].enum_type = _PERSON_PHONETYPE
_PERSON_PHONENUMBER.containing_type = _PERSON
_PERSON.fields_by_name['phone'].message_type = _PERSON_PHONENUMBER
_PERSON_PHONETYPE.containing_type = _PERSON
_ADDRESSBOOK.fields_by_name['person'].message_type = _PERSON
DESCRIPTOR.message_types_by_name['Person'] = _PERSON
DESCRIPTOR.message_types_by_name['AddressBook'] = _ADDRESSBOOK
Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict(
PhoneNumber = _reflection.GeneratedProtocolMessageType('PhoneNumber', (_message.Message,), dict(
DESCRIPTOR = _PERSON_PHONENUMBER,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.Person.PhoneNumber)
))
,
DESCRIPTOR = _PERSON,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.Person)
))
_sym_db.RegisterMessage(Person)
_sym_db.RegisterMessage(Person.PhoneNumber)
AddressBook = _reflection.GeneratedProtocolMessageType('AddressBook', (_message.Message,), dict(
DESCRIPTOR = _ADDRESSBOOK,
__module__ = 'addressbook_pb2'
# @@protoc_insertion_point(class_scope:tutorial.AddressBook)
))
_sym_db.RegisterMessage(AddressBook)
# @@protoc_insertion_point(module_scope)
</code></pre>
<p><strong>Edit 1</strong>,</p>
<p>Tried <code>pip install protobuf</code>, met with the following error,</p>
<pre><code>Requirement already satisfied (use --upgrade to upgrade): protobuf in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.9 in /Users/foo/miniconda2/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/foo/miniconda2/lib/python2.7/site-packages (from protobuf)
</code></pre>
<p>Here is the output of python version,</p>
<pre><code>python -V
Python 2.7.11 :: Continuum Analytics, Inc.
</code></pre>
<p><strong>Edit 2</strong></p>
<p>Here is the exact error message:</p>
<pre><code>Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1531, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 938, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/foo/personal/featureExtraction/protobuf_test.py", line 1, in <module>
from addressbook_pb2 import Person
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_monkey_qt.py", line 71, in patched_import
return original_import(name, *args, **kwargs)
File "/Users/foo/personal/featureExtraction/addressbook_pb2.py", line 6, in <module>
from google.protobuf import descriptor as _descriptor
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_monkey_qt.py", line 71, in patched_import
return original_import(name, *args, **kwargs)
ImportError: No module named google.protobuf
</code></pre>
<p><strong>Edit 3</strong>,</p>
<p>error message when <code>import google</code>,</p>
<p><a href="http://i.stack.imgur.com/v2gS3.png" rel="nofollow"><img src="http://i.stack.imgur.com/v2gS3.png" alt="enter image description here"></a></p>
<p><strong>Edit 4</strong>, </p>
<p>Output of <code>which pip</code>,</p>
<pre><code>which pip
/Users/foo/miniconda2/bin/pip
</code></pre>
<p>Output of <code>sys.executable</code>,</p>
<p>/Users/foo/anaconda/bin/python</p>
<p><strong>Edit 5</strong>,</p>
<pre><code>foo-mn1:featureExtraction foo$ sudo /Users/foo/miniconda2/bin/pip install protobuf
Password:
The directory '/Users/foo/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/foo/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied (use --upgrade to upgrade): protobuf in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.9 in /Users/foo/miniconda2/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from protobuf)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /Users/foo/miniconda2/lib/python2.7/site-packages (from protobuf)
foo-mn1:featureExtraction foo$ sudo /Users/foo/miniconda2/bin/pip install google
The directory '/Users/foo/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/foo/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied (use --upgrade to upgrade): google in /Users/foo/miniconda2/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): beautifulsoup4 in /Users/foo/miniconda2/lib/python2.7/site-packages (from google)
</code></pre>
| 1 | 2016-07-31T03:51:48Z | 38,682,585 | <p>When <code>pip</code> tells you that you already have <code>protobuf</code>,
but PyCharm (or other) tells you that you don't have it,
it means that <code>pip</code> and PyCharm are using a different Python interpreter.
This is a very common issue, especially on a Mac, with no standard Python package management.</p>
<p>The best way to completely eliminate such issues is using a <code>virtualenv</code> per Python project, which is essentially a directory of Python packages and environment variable settings to isolate the Python environment of the project from everything else.</p>
<p>Create a <code>virtualenv</code> for your project like this:</p>
<pre><code>cd project
virtualenv --distribute virtualenv -p /path/to/python/executable
</code></pre>
<p>This creates a directory called <code>virtualenv</code> inside your project.
(Make sure to configure your VCS (for example Git) to ignore this directory.)</p>
<p>To install packages in this <code>virtualenv</code>, you need to activate the environment variable settings:</p>
<pre><code>. virtualenv/bin/activate
</code></pre>
<p>Verify that <code>pip</code> will use the right Python executable inside the <code>virtualenv</code>, by running <code>pip -V</code>. It should tell you the Python library path used, which should be inside the <code>virtualenv</code>.</p>
<p>Now you can use <code>pip</code> to install <code>protobuf</code> as you did.</p>
<p>And finally, you need to make PyCharm use this <code>virtualenv</code> instead of the system libraries. Somewhere in the project settings you can configure an interpreter for the project; select the Python executable inside the <code>virtualenv</code>.</p>
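<p>A quick way to confirm which interpreter a given environment actually uses is to print <code>sys.executable</code>; run this sketch from both the terminal and PyCharm and compare the paths (illustrative, standard library only):</p>

```python
import sys

def interpreter_info():
    """Summarize which Python interpreter and site-packages are in use."""
    return {
        "executable": sys.executable,
        "version": "{}.{}.{}".format(*sys.version_info[:3]),
        "site_packages": [p for p in sys.path if "site-packages" in p],
    }

info = interpreter_info()
print(info["executable"])
print(info["version"])
```

<p>If the two runs print different <code>executable</code> paths, you have found the mismatch that makes <code>pip</code> and the IDE disagree.</p>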
| 2 | 2016-07-31T09:28:18Z | [
"python",
"python-2.7",
"protocol-buffers",
"google-protobuf"
] |
Python concurrent.futures calling same function twice | 38,680,621 | <p>I have no problem using concurrent.futures if all my processes are started from different functions. But if I want to call the same function with different parameters I can't seem to get the syntax right. This is what I got so far but it doesn't work:</p>
<pre><code>tasks = ((serial_port_local, serial_options_local, serial_port_remote, serial_options_remote, "local"),
(serial_port_remote, serial_options_remote, serial_port_local, serial_options_local, "remote"))
for task in zip(tasks, executor.map(serial_cross_over, tasks)):
print (task)
</code></pre>
<p>This is the error but I don't grok it:</p>
<pre><code>TypeError: serial_cross_over() missing 4 required positional arguments: 'receive_serial_options', 'send_serial_port', 'send_serial_options', and 'source'
</code></pre>
<p>Actually I don't really grok why it's complicated at all. Shouldn't I just be able to do:</p>
<pre><code>executor.submit(some_function(parameter1))
executor.submit(some_function(parameter2))
</code></pre>
<p>But that doesn't work. The program hangs on the second submit. Why?</p>
| -1 | 2016-07-31T03:56:00Z | 38,681,645 | <p>It seems that <code>serial_cross_over</code> takes five arguments, but <code>executor.map</code> passes each tuple from <code>tasks</code> as a single argument, so the remaining four positional arguments are never provided, hence the <code>TypeError</code>. You need to unpack each tuple before the call;
see this answer: <a href="http://stackoverflow.com/questions/6785226/pass-multiple-parameters-to-concurrent-futures-executor-map">Pass multiple parameters to concurrent.futures.Executor.map?</a></p>
<pre><code>tasks = ((serial_port_local, serial_options_local, serial_port_remote, serial_options_remote, "local"), (serial_port_remote, serial_options_remote, serial_port_local, serial_options_local, "remote"))
for result in executor.map(lambda p: serial_cross_over(*p), tasks):
    print(result)
</code></pre>
<p>As for <code>executor.submit</code>: writing <code>executor.submit(some_function(parameter1))</code> calls the function immediately in the current thread and submits only its return value, which can block before the task is even scheduled. Pass the callable and its arguments separately:</p>
<pre><code>with ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(some_function, parameter1)
print(future.result())
</code></pre>
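<p>Putting both fixes together, here is a runnable sketch; the worker function below is a hypothetical stand-in for <code>serial_cross_over</code>, since its real body isn't shown:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def cross_over(recv_port, recv_opts, send_port, send_opts, source):
    # Hypothetical stand-in for serial_cross_over: it just labels its inputs.
    return "{}: {} -> {}".format(source, recv_port, send_port)

tasks = (("ttyA", {}, "ttyB", {}, "local"),
         ("ttyB", {}, "ttyA", {}, "remote"))

with ThreadPoolExecutor(max_workers=2) as executor:
    # Each 5-tuple is unpacked into the five positional parameters.
    results = list(executor.map(lambda args: cross_over(*args), tasks))

print(results)  # → ['local: ttyA -> ttyB', 'remote: ttyB -> ttyA']
```

<p><code>executor.map</code> returns results in the order of the input tuples, regardless of which task finishes first.</p>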
| 0 | 2016-07-31T07:09:39Z | [
"python",
"concurrent-programming"
] |
How to use BeautifulSoup to get content between<hr class = 'calibre2'> ... <hr class="calibre2" /> | 38,680,626 | <pre><code><hr class="calibre2" />
<h3 class="calibre5">-ability</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
<h3 class="calibre5">-ibility</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
</code></pre>
<p>Above is part of my soup. I want to get the content between two <code><hr></code> tags. Because <code>hr</code> is a void tag with no closing tag, I couldn't use a simple method. I thought I could use find_next_elements, but how can I make it stop when it reaches the next <code><hr class = 'calibre2'></code>, so that I can collect the content in between? Thank you.</p>
| 1 | 2016-07-31T03:57:21Z | 38,680,650 | <p>You can loop over all <code>hr</code> elements and use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-next-siblings-and-find-next-sibling" rel="nofollow"><code>.find_next_siblings()</code></a> to iterate over the next sibling elements. Then, if you meet <code>hr</code>, break the loop:</p>
<pre><code>for hr in soup.find_all("hr", class_="calibre2"):
for item in hr.find_next_siblings():
if item.name == "hr":
break
print(item)
print("-----")
</code></pre>
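<p>If you want to see the same grouping idea without BeautifulSoup, a sketch with the standard library's <code>html.parser</code> (illustrative only) collects the text between successive <code><hr></code> tags:</p>

```python
from html.parser import HTMLParser

class HrSectionSplitter(HTMLParser):
    """Accumulates the text found between successive <hr> tags."""
    def __init__(self):
        super().__init__()
        self.sections = []
        self._current = []

    def handle_starttag(self, tag, attrs):
        if tag == "hr":
            if self._current:
                self.sections.append("".join(self._current).strip())
            self._current = []

    def handle_data(self, data):
        self._current.append(data)

html = ('<hr class="calibre2"/><h3>-ability</h3><p>capability</p>'
        '<hr class="calibre2"/><h3>-ibility</h3><p>responsibility</p>'
        '<hr class="calibre2"/>')
parser = HrSectionSplitter()
parser.feed(html)
print(parser.sections)
```

<p>Each entry in <code>sections</code> is the concatenated text of one <code><hr></code>-delimited block.</p>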
| 1 | 2016-07-31T04:03:09Z | [
"python",
"beautifulsoup",
"html-parsing"
] |
How to use BeautifulSoup to get content between<hr class = 'calibre2'> ... <hr class="calibre2" /> | 38,680,626 | <pre><code><hr class="calibre2" />
<h3 class="calibre5">-ability</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
<h3 class="calibre5">-ibility</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
</code></pre>
<p>Above is part of my soup. I want to get the content between two <code><hr></code> tags. Because <code>hr</code> is a void tag with no closing tag, I couldn't use a simple method. I thought I could use find_next_elements, but how can I make it stop when it reaches the next <code><hr class = 'calibre2'></code>, so that I can collect the content in between? Thank you.</p>
| 1 | 2016-07-31T03:57:21Z | 38,681,116 | <p>You can check for the <code>hr</code> tag and the <code>calibre2</code> class in conjunction with <code>find_all_next</code>:
<a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next" rel="nofollow">https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next</a> </p>
<pre><code>from bs4 import BeautifulSoup
testStr = """
<hr class="calibre2" />
<h3 class="calibre5">-ability</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
<h3 class="calibre5">-ibility</h3> (in nouns 构成名词) ： <br class="calibre4" />
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ capability 能力 </span></p></blockquote>
<blockquote class="calibre6"><p class="calibre_1"><span class="italic">→ responsibility 责任 </span></p></blockquote>
<hr class="calibre2" />
""";
soup = BeautifulSoup(testStr, 'lxml')
hrTag = soup.hr
nextTags = hrTag.find_all_next()
content = []
for item in nextTags:
# check if we have reached the second calibre2 hr
print("Name %s ; Class %s" % (item.name, item['class'][0]))
if item.name == 'hr' and item['class'][0] == 'calibre2':
break
content.append(item)
print(content)
</code></pre>
| 0 | 2016-07-31T05:41:35Z | [
"python",
"beautifulsoup",
"html-parsing"
] |
tkinter window and background image don't align properly | 38,680,668 | <p>I am writing a Simpsons trivia game as my first big programming project. My question is twofold:</p>
<ol>
<li>Is this the right way to go about creating a background image? Keep in mind that my plan is to include the Simpsons theme song playing in the background as well as one or two buttons on top of the background image. </li>
<li>Assuming the code below is the right approach given what I want to accomplish, why am I getting a thin gray line on the left of my image and window? Ie. Why is the image not filling up the window perfectly like it is on the right side?</li>
</ol>
<p>Here is my code:</p>
<pre><code>from tkinter import *
from tkinter import ttk
from PIL import Image, ImageTk
root = Tk()
root.title("The Simpsons Trivia Game")
root.geometry('400x600')
root.resizable(0,0)
def resize_image(event):
new_width = event.width
new_height = event.height
image = copy_of_image.resize((new_width, new_height))
photo = ImageTk.PhotoImage(image)
label.config(image = photo)
label.image = photo
image = Image.open('E:\Simpsons Trivia Game\SimpsonsPic.png')
copy_of_image = image.copy()
photo = ImageTk.PhotoImage(image)
label = ttk.Label(root, image = photo)
label.bind('<Configure>', resize_image)
label.pack(fill=BOTH, expand = YES)
root.mainloop()
</code></pre>
<p><a href="http://i.stack.imgur.com/XHVa7.png" rel="nofollow">tkinter window with background image (left side of window not perfectly aligned with background image)</a></p>
| 1 | 2016-07-31T04:07:24Z | 38,684,629 | <p>I'm not sure I understand everything, but I managed to get rid of the border (at least on Linux) by doing:</p>
<pre><code>from tkinter import *
from tkinter import ttk
from PIL import Image, ImageTk
root = Tk()
root.title("The Simpsons Trivia Game")
root.geometry("400x600")
root.resizable(0,0)
image = Image.open('/tmp/foobar.png')
photo = ImageTk.PhotoImage(image)
label = ttk.Label(root, image = photo)
label.pack()
label.place(relx=0.5, rely=0.5, anchor="center")
root.mainloop()
</code></pre>
| 0 | 2016-07-31T13:50:39Z | [
"python",
"tkinter",
"python-3.4"
] |
Using a proxy with phantomjs (selenium webdriver) | 38,680,743 | <p>I'm using phantomJS as a driver for selenium. My code is written in python. I followed the advice from similar questions, and am using the following: </p>
<pre><code>service_args = [
'--proxy=78.23.244.145:80',
'--proxy-type=http',
]
driver = webdriver.PhantomJS(service_args=service_args)
driver.get('http://www.whatismyip.com/')
</code></pre>
<p>However, when I print the html, barely anything shows up: </p>
<pre><code>print driver.page_source
</code></pre>
<p>OUTPUT: </p>
<pre><code><html><head></head><body></body></html>
</code></pre>
<p>If I do this with just the usual call to phantomJS, the website shows up as usual: </p>
<pre><code>driver = webdriver.PhantomJS()
</code></pre>
<p>For reference, I've tried this with a bunch of proxies from this list: </p>
<p><a href="http://proxylist.hidemyass.com/search-1291972#listable" rel="nofollow">http://proxylist.hidemyass.com/search-1291972#listable</a></p>
<p>I'm wondering how to get the page to properly display when using a proxy. Any help would be appreciated! </p>
| 0 | 2016-07-31T04:28:57Z | 38,681,429 | <p>I suspect that the proxy you are using is not working. I tried the following, and the proxy behaved sanely on Windows 8:</p>
<pre><code>from selenium.webdriver.common.proxy import *
from selenium import webdriver
from selenium.webdriver.common.by import By
phantomjs_path = r"E:\Software & Tutorial\Phantom\phantomjs-2.1.1-windows\bin\phantomjs.exe"
service_args = [
'--proxy=217.156.252.118:8080',
'--proxy-type=https',
]
driver = webdriver.PhantomJS(executable_path=phantomjs_path,service_args=service_args)
driver.get("https://www.google.com.bd/?gws_rd=ssl#q=what+is+my+ip")
print driver.page_source.encode('utf-8')
print "="*70
print driver.title
driver.save_screenshot(r"E:\Software & Tutorial\Phantom\test.png")
driver.quit()
</code></pre>
<p>Open the saved screenshot (test.png) and check the result. If the proxy IP is blacklisted, Google prompts with a captcha box; either way, the screenshot shows that the IP has been changed.</p>
| 1 | 2016-07-31T06:35:43Z | [
"python",
"selenium",
"proxy",
"phantomjs"
] |
Dataframe operations involving nan | 38,680,797 | <p>I want to subtract row with nan values from all rows in a Dataframe. For this I am using </p>
<pre><code>dataframe.sub(row, axis= 1)
</code></pre>
<p>this ignores nan values, i.e. if either of the values in two rows is nan, the result is nan. I want that if either of the values is not nan, the subtraction should proceed taking the nan value to be 0. If both are not nan, the result should be the difference.If both are nan, the result should be nan. For example, subtraction of the following two rows should be as below,</p>
<pre><code>[1, 2, nan, nan, 5] - [nan, 5, 1, nan, 2] = [1 , -3, -1, nan, 3]
</code></pre>
<p>How can I do this?</p>
| 2 | 2016-07-31T04:39:42Z | 38,680,852 | <blockquote>
<p>I want that if either of the values is not nan, the subtraction should proceed taking the nan value to be 0. If both are not nan, the result should be the difference.</p>
</blockquote>
<p>Use <code>fillna</code> to set <code>nan</code>-values to 0, then apply a mask to reset the result to <code>nan</code> where both input values were <code>nan</code>.</p>
<pre><code>import pandas as pd
import numpy as np
# sample data
nan = np.nan
df = pd.DataFrame({ 'a': [1, 2, nan, nan, 5],
'b': [nan, 5, 1, nan, 2] })
# get all rows with both values nan
nan_mask = df.a.isnull() & df.b.isnull()
# calculate with all nans set to 0
result = df.a.fillna(0) - df.b.fillna(0)
# set rows with both nans to nan
result[nan_mask] = nan
print list(result)
=> [1.0, -3.0, -1.0, nan, 3.0]
</code></pre>
<p><strong>Update</strong></p>
<p>If you are looking for a more concise solution it turns out that <code>df.sub(other, fill_value=0.0)</code> achieves the same thing:</p>
<pre><code>df = pd.DataFrame({ 'a': [1, 2, nan, nan, 5],
'b': [nan, 5, 1, nan, 2]})
result = df.a.sub(df.b, fill_value=0.0)
=> [1.0, -3.0, -1.0, nan, 3.0]
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html" rel="nofollow">From the docs</a>:</p>
<blockquote>
<p>fill_value : None or float value, default None (NaN)
Fill missing (NaN) values with this value. If both Series are
missing, the result will be missing</p>
</blockquote>
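<p>For reference, the same fill-then-mask rule can also be written without pandas at all; a pure-Python sketch of the logic (illustrative only):</p>

```python
import math

nan = math.nan

def sub_with_nan(a, b):
    """Elementwise a - b: a lone nan counts as 0; result is nan only if both are nan."""
    out = []
    for x, y in zip(a, b):
        if math.isnan(x) and math.isnan(y):
            out.append(nan)
        else:
            x0 = 0.0 if math.isnan(x) else float(x)
            y0 = 0.0 if math.isnan(y) else float(y)
            out.append(x0 - y0)
    return out

print(sub_with_nan([1, 2, nan, nan, 5], [nan, 5, 1, nan, 2]))
# → [1.0, -3.0, -1.0, nan, 3.0]
```

<p>This mirrors what <code>sub(..., fill_value=0.0)</code> does internally: missing values are filled with 0 unless both operands are missing.</p>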
| 3 | 2016-07-31T04:50:37Z | [
"python",
"numpy",
"pandas"
] |
Django Raw SQL query does not return todays records | 38,680,844 | <p>I was making an appointment system using Django! I wanted to display today's appointments. This raw query should return today's records, but it isn't returning any records!</p>
<p><strong>Query looks like this:</strong></p>
<pre><code>todays_appointments = Appointment.objects.raw("""
SELECT appointment_Appointment.id,
loginReg_User.user_name,
appointment_Appointment.task,
appointment_Appointment.date,
appointment_Appointment.time,
appointment_Appointment.status
FROM appointment_Appointment
LEFT JOIN loginReg_User
ON appointment_Appointment.user_id = loginReg_User.id
WHERE loginReg_User.id={}
AND appointment_Appointment.date={}
    ORDER BY appointment_Appointment.time""".format(user_id, today))
</code></pre>
<p>Also, <code>today</code> looks something like this:</p>
<p><code>today = datetime.date.today()</code>.</p>
<p>I tried running similar query in the shell using <code>Appointment.objects.filter(date=today)</code> and it worked fine!</p>
<p><strong>Models.py</strong></p>
<pre><code>from __future__ import unicode_literals
from ..loginReg.models import User
from django.db import models
import datetime
class AppointmentValidator(models.Manager):
def task_validator(self, task):
if len(task) < 1:
return False
else:
return True
def date_validator(self, date_time):
present = datetime.now().date()
if date_time < present:
return False
else:
return True
def status(self, status):
if status == "missed" or status == "done" or status == "pending":
return True
else:
return False
class Queries(models.Manager):
def todays_appointments(self, user_id):
today = datetime.date.today()
todays_appointments = Appointment.objects.raw("SELECT appointment_Appointment.id, loginReg_User.user_name, appointment_Appointment.task, appointment_Appointment.date, appointment_Appointment.time, appointment_Appointment.status FROM appointment_Appointment LEFT JOIN loginReg_User ON appointment_Appointment.user_id = loginReg_User.id WHERE loginReg_User.id={} AND appointment_Appointment.date={} ORDER BY appointment_Appointment.time".format(user_id, today))
return todays_appointments
def other_appointments(self, user_id):
today = datetime.date.today()
        other_appointments = Appointment.objects.raw("SELECT appointment_Appointment.id, loginReg_User.user_name, appointment_Appointment.task, appointment_Appointment.date, appointment_Appointment.time, appointment_Appointment.status FROM appointment_Appointment LEFT JOIN loginReg_User ON appointment_Appointment.user_id = loginReg_User.id WHERE loginReg_User.id={} AND appointment_Appointment.date>{} ORDER BY appointment_Appointment.time".format(user_id, today))
return other_appointments
class Appointment(models.Model):
task = models.CharField(max_length=150)
date = models.DateField(null=False, blank=False)
time = models.TimeField()
status = models.CharField(max_length=45)
user = models.ForeignKey(User)
created_at = models.DateTimeField(auto_now_add = True)
updated_at = models.DateTimeField(auto_now = True)
appointmentManager = AppointmentValidator()
queries = Queries()
objects = models.Manager()
</code></pre>
| 0 | 2016-07-31T04:47:34Z | 38,680,927 | <p>Date is a DateField, so you need to compare it to a date object, not the formatted string representation of the date (see this answer: <a href="http://stackoverflow.com/a/18505739/5679413">http://stackoverflow.com/a/18505739/5679413</a>). By using <code>format</code>, you are converting <code>today</code> into a plain string of numbers and dashes, which the database then no longer treats as a date. Your SQL query needs to turn it back into a date; Django presumably takes care of this for you when you pass the date as a query parameter.</p>
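<p>The effect is easy to demonstrate with the standard library's <code>sqlite3</code> (a sketch, not Django; note that Django's <code>Manager.raw()</code> also accepts a <code>params</code> argument so you can bind values instead of interpolating them):</p>

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appt (task TEXT, date TEXT)")
today = datetime.date(2016, 7, 31)  # fixed date, just for the demo
conn.execute("INSERT INTO appt VALUES (?, ?)", ("demo", str(today)))

# Interpolated without quotes, 2016-07-31 is parsed by SQL as the arithmetic
# expression 2016 - 7 - 31, so the comparison never matches the stored text.
bad = conn.execute("SELECT * FROM appt WHERE date={}".format(today)).fetchall()

# Bound as a parameter, the value stays a date string and matches.
good = conn.execute("SELECT * FROM appt WHERE date=?", (str(today),)).fetchall()
print(len(bad), len(good))  # → 0 1
```

<p>Parameter binding also removes the SQL-injection risk that <code>str.format</code> interpolation introduces.</p>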
| 0 | 2016-07-31T05:03:17Z | [
"python",
"django",
"django-models"
] |
What is the AUC score in sklearn.metrics? | 38,680,879 | <p>In <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html" rel="nofollow">here</a> it is discussed the auc score but this is different from the regular roc_auc_score. I see no description of this, what is it and what it is used for?</p>
| 2 | 2016-07-31T04:53:59Z | 38,680,912 | <p>As the documentation says, it is the area under an arbitrary curve, i.e., the definite integral (computed with the trapezoidal approximation). Some examples are linked at the bottom of the documentation page showing its use.</p>
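<p>For intuition, the trapezoidal rule that <code>sklearn.metrics.auc</code> applies can be sketched in a few lines of plain Python (illustrative, not sklearn's actual implementation):</p>

```python
def trapezoid_area(x, y):
    """Area under the piecewise-linear curve through the (x, y) points."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for x0, x1, y0, y1 in zip(x, x[1:], y, y[1:]))

# Area under y = x on [0, 1]; the trapezoidal rule is exact for straight lines.
print(trapezoid_area([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))  # → 0.5
```

<p>Given the false positive rates as <code>x</code> and the true positive rates as <code>y</code>, this is exactly the ROC AUC.</p>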
| 1 | 2016-07-31T04:59:45Z | [
"python",
"scikit-learn",
"auc"
] |
What is the AUC score in sklearn.metrics? | 38,680,879 | <p>In <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html" rel="nofollow">here</a> it is discussed the auc score but this is different from the regular roc_auc_score. I see no description of this, what is it and what it is used for?</p>
| 2 | 2016-07-31T04:53:59Z | 38,705,540 | <p><code>sklearn.auc</code> is a general function to calculate the area under a curve using the trapezoidal rule. It is used to calculate <code>sklearn.metrics.roc_auc_score</code>. </p>
<p>To calculate roc_auc_score, sklearn evaluates the false positive and true positive rates using the <code>sklearn.metrics.roc_curve</code> at different threshold settings. Then it uses <code>sklearn.metrics.auc</code> to calculate the area under the curves, and finally returns their average binary score. </p>
| 1 | 2016-08-01T18:18:35Z | [
"python",
"scikit-learn",
"auc"
] |
Parsing JSON to CSV with Python - AttributeError: 'str' object has no attribute 'keys' | 38,680,884 | <p>This JSON to CSV code example is working great:</p>
<pre><code>employee_data = '{"info":[{"employee_name": "James", "email": "james@gmail.com", "job_profile": "Sr. Developer"},{"employee_name": "Smith", "email": "Smith@gmail.com", "job_profile": "Project Lead"}]}'
#This can be parsed and converted to CSV using python as shown below:
import json
import csv
employee_parsed = json.loads(employee_data)
emp_data = employee_parsed['info']
# open a file for writing
employ_data = open('Employee.csv', 'w')
# create the csv writer object
csvwriter = csv.writer(employ_data)
count = 0
for emp in emp_data:
if count == 0:
header = emp.keys()
csvwriter.writerow([header])
count += 1
csvwriter.writerow([emp.values()])
employ_data.close()
</code></pre>
<p>My trouble comes in when I'm trying to use the following JSON data below...</p>
<p>I get an AttributeError: 'str' object has no attribute 'keys'. Please keep your response simple as this is my Python "Hello World". :-)</p>
<pre><code>employee_data = '{"info": {"arch": "x86_64","platform": "linux"},"packages": {"_license-1.1-py27_0.tar.bz2": {"build": "py27_0","build_number": 0,"date": "2013-03-01","depends": ["python 2.7*"],"license": "proprietary - Continuum Analytics, Inc.","license_family": "Proprietary","md5": "5b13c8cd498ce15b76371ed85278e3a4","name": "_license","requires": [],"size": 194947,"version": "1.1"}}}'
</code></pre>
<p>Thank you for any direction on this.</p>
| 0 | 2016-07-31T04:55:25Z | 38,680,945 | <p>The problem is that your code expects the top-level JSON values to be arrays. This loop here:</p>
<pre><code>for emp in emp_data:
</code></pre>
<p>expects each top-level value to be iterable (loopable). In your original JSON, the key <code>info</code> maps to a list:</p>
<pre><code>[{"employee_name": "James", "email (...)
</code></pre>
<p>However, the <code>info</code> key on the second JSON example does not map to a list but instead to a dictionary:</p>
<pre><code>"info": {"arch": "x86_64","platform": "linux"}
</code></pre>
<p>Changing the <code>info</code> key to a list fixes it:</p>
<pre><code>"info": [{"arch": "x86_64","platform": "linux"}]
</code></pre>
<p>In further depth, your <code>emp_data</code> variable looks like this:</p>
<pre><code>{'platform': 'linux', 'arch': 'x86_64'}
</code></pre>
<p>And so when you iterate it (<code>for emp in emp_data</code>), <code>emp</code> will be <code>"arch"</code> or <code>"platform"</code>, which you cannot get <code>.keys()</code> from.</p>
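<p>A minimal, standalone sketch of the difference, using trimmed-down versions of both JSON shapes:</p>

```python
import json

# Shape 1: "info" maps to a list -- iterating yields dictionaries
list_shape = json.loads('{"info": [{"arch": "x86_64"}]}')["info"]
for emp in list_shape:
    print(sorted(emp.keys()))   # ['arch'] -- dicts have .keys()

# Shape 2: "info" maps to a dict -- iterating yields plain key strings
dict_shape = json.loads('{"info": {"arch": "x86_64", "platform": "linux"}}')["info"]
for emp in dict_shape:
    print(type(emp).__name__)   # str -- calling emp.keys() raises AttributeError
```

<p>Wrapping the dict in a list, as suggested above, restores the one-record-per-iteration behaviour your loop assumes.</p>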
| 0 | 2016-07-31T05:06:56Z | [
"python",
"json",
"csv"
] |
Python Tkinter: If-statement not working | 38,680,932 | <pre><code>from tkinter import *
def callbackX(button, win, buttonNR):
print("Pressed button", buttonNR)
player1.append(buttonNR)
win.destroy()
gameScreen()
def gameScreen():
win = Tk()
#f = Frame(win)
if '1' in player1 == 'True':
b1 = Button(win, text="X", command=lambda: callbackX(b1, win, '1'))
b1.grid(row=0, column=0)
if '2' in player1 == 'True':
b2 = Button(win, text="X", command=lambda: callbackX(b2, win, '2'))
b2.grid(row=0, column=1)
if '3' in player1 == 'True':
b3 = Button(win, text="X", command=lambda: callbackX(b3, win, '3'))
b3.grid(row=0, column=2)
if '4' in player1 == 'True':
b4 = Button(win, text="X", command=lambda: callbackX(b4, win, '4'))
b4.grid(row=1, column=0)
if '5' in player1 == 'True':
b5 = Button(win, text="X", command=lambda: callbackX(b5, win, '5'))
b5.grid(row=1, column=1)
if '6' in player1 == 'True':
b6 = Button(win, text="X", command=lambda: callbackX(b6, win, '6'))
b6.grid(row=1, column=2)
if '7' in player1 == 'True':
b7 = Button(win, text="X", command=lambda: callbackX(b7, win, '7'))
b7.grid(row=2, column=0)
if '8' in player1 == 'True':
b8 = Button(win, text="X", command=lambda: callbackX(b8, win, '8'))
b8.grid(row=2, column=1)
if '9' in player1 == 'True':
b9 = Button(win, text="X", command=lambda: callbackX(b9, win, '9'))
b9.grid(row=2, column=2)
player1 = []; player2 = []
gameScreen()
</code></pre>
<p>The program doesn't seem to recognize the if-statement criterion being met. Is this some sort of Tkinter quirk? How can this be fixed?</p>
<p>The code should open a tic-tac-toe game screen, for player1, which closes and reopens the window, without the button that was previously pressed.</p>
| 0 | 2016-07-31T05:04:32Z | 38,680,980 | <p><code>'True'</code> is a string, not the boolean <code>True</code>, so the comparison can never succeed. There is also a second trap: Python chains comparisons, so <code>'1' in player1 == 'True'</code> evaluates as <code>('1' in player1) and (player1 == 'True')</code>, which is always <code>False</code> because the list <code>player1</code> never equals a string.</p>
<p>Actually, just simply use</p>
<p><code>if '1' in player1:</code></p>
<p>which is all you need in your case. (If the goal is to hide buttons that were already pressed, the test you probably want is <code>if '1' not in player1:</code>.)</p>
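<p>A quick standalone check of how the original condition actually evaluates (Python chains comparisons):</p>

```python
player1 = ['1']

# The original test is a chained comparison, expanding to
# ('1' in player1) and (player1 == 'True')
print('1' in player1 == 'True')   # False, even though '1' is in the list

# The simple membership test that does what was intended
print('1' in player1)             # True
```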
| 1 | 2016-07-31T05:13:50Z | [
"python",
"if-statement",
"button",
"tkinter",
"tic-tac-toe"
] |
Python parse datestring to date | 38,681,032 | <p>I am trying to parse a datetime string to date in Python. The input value is of the form:</p>
<pre><code> "February 19, 1989"
</code></pre>
<p>I have been trying so far </p>
<pre><code> datetime.datetime.strptime("February 19, 1989", "%B %d, %y")
</code></pre>
<p>but I am always getting error. What is the right way to parse such a date? Thank you!</p>
| 1 | 2016-07-31T05:25:24Z | 38,681,042 | <p>The following works (changed small case <code>y</code> to uppercase <code>Y</code>):</p>
<pre><code>datetime.datetime.strptime("February 19, 1989", "%B %d, %Y")
</code></pre>
<p>The reason is that <code>%y</code> matches two-digit years like <code>99</code> or <code>98</code>, while <code>%Y</code> matches the full four-digit year. You can read the full documentation on this <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">here</a>.</p>
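<p>A short sketch contrasting the two directives (<code>%y</code> pivots two-digit years: 69-99 into the 1900s, 00-68 into the 2000s):</p>

```python
import datetime

# %Y parses four-digit years
full = datetime.datetime.strptime("February 19, 1989", "%B %d, %Y")
# %y parses two-digit years such as "89"
short = datetime.datetime.strptime("February 19, 89", "%B %d, %y")

print(full.date(), short.date())   # both 1989-02-19
```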
| 2 | 2016-07-31T05:28:07Z | [
"python",
"date"
] |
Python parse datestring to date | 38,681,032 | <p>I am trying to parse a datetime string to date in Python. The input value is of the form:</p>
<pre><code> "February 19, 1989"
</code></pre>
<p>I have been trying so far </p>
<pre><code> datetime.datetime.strptime("February 19, 1989", "%B %d, %y")
</code></pre>
<p>but I am always getting error. What is the right way to parse such a date? Thank you!</p>
| 1 | 2016-07-31T05:25:24Z | 38,681,105 | <p>In addition to David's <a href="http://stackoverflow.com/a/38681042/189134">answer</a>, if you have a list of strings that have a variety of formats (say, some have two digit years and others have four), you can utilize the <a href="https://pypi.python.org/pypi/python-dateutil" rel="nofollow"><code>dateutil</code></a> package to help parse these strings into datetime objects automagically.</p>
<pre><code>from dateutil import parser
dates = ["February 19, 1989", "February 19, 89", "February 19, 01"]
for d in dates:
print(parser.parse(d))
</code></pre>
<p>This yields three datetime objects:</p>
<pre><code>datetime.datetime(1989, 2, 19, 0, 0)
datetime.datetime(1989, 2, 19, 0, 0)
datetime.datetime(2001, 2, 19, 0, 0)
</code></pre>
<p>The downside to this, compared to the previously mentioned answer, is that it requires an extra package. The upside is that it is fairly good at determining your date without your needing to know the format string.</p>
| 1 | 2016-07-31T05:39:38Z | [
"python",
"date"
] |
python string replacement using tuple values | 38,681,035 | <p>This code will return the text that I am looking for. But I am not able to adjust the punctuation marks.</p>
<pre><code>fieldnames = ("user","messageid","message","destination", "code","mobile","mytimestamp")
newData = ["\'%s\': %s" % (tup, tup) for tup in fieldnames if tup!='destination' ]
</code></pre>
<p>newData will return</p>
<pre><code>["'user': user", "'messageid': messageid", "'message': message", "'code': code", "'mobile': mobile", "'mytimestamp': mytimestamp"]
</code></pre>
<p>I need to remove the double quotes and add \</p>
<p>The expected string (not list) output is this...</p>
<pre><code>"[\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]"
</code></pre>
| -2 | 2016-07-31T05:26:46Z | 38,681,069 | <p>You can change the code to something like</p>
<pre><code>newData = ("\"[" +
", ".join("\\'%s\\': %s" % (tup, tup)
for tup in fieldnames if tup!='destination') +
"]\"")
</code></pre>
<p>Note: I am assuming you want the double quotes, backslashes and single quotes to be part of the string. In any case, the output is valid neither as Python nor as JavaScript: <code>:</code> is used for dictionaries/objects, but those require braces <code>{</code>/<code>}</code>, not brackets <code>[</code>/<code>]</code>.</p>
| 0 | 2016-07-31T05:33:25Z | [
"python"
] |
python string replacement using tuple values | 38,681,035 | <p>This code will return the text that I am looking for. But I am not able to adjust the punctuation marks.</p>
<pre><code>fieldnames = ("user","messageid","message","destination", "code","mobile","mytimestamp")
newData = ["\'%s\': %s" % (tup, tup) for tup in fieldnames if tup!='destination' ]
</code></pre>
<p>newData will return</p>
<pre><code>["'user': user", "'messageid': messageid", "'message': message", "'code': code", "'mobile': mobile", "'mytimestamp': mytimestamp"]
</code></pre>
<p>I need to remove the double quotes and add \</p>
<p>The expected string (not list) output is this...</p>
<pre><code>"[\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]"
</code></pre>
| -2 | 2016-07-31T05:26:46Z | 38,681,118 | <p>What about:</p>
<pre><code>fieldnames = ("user","messageid","message","destination", "code","mobile","mytimestamp")
s = "[%s]" % ', '.join(["\\'%s\\': %s" % (t,t) for t in fieldnames if t != 'destination'])
print(s)
</code></pre>
| 2 | 2016-07-31T05:42:13Z | [
"python"
] |
python string replacement using tuple values | 38,681,035 | <p>This code will return the text that I am looking for. But I am not able to adjust the punctuation marks.</p>
<pre><code>fieldnames = ("user","messageid","message","destination", "code","mobile","mytimestamp")
newData = ["\'%s\': %s" % (tup, tup) for tup in fieldnames if tup!='destination' ]
</code></pre>
<p>newData will return</p>
<pre><code>["'user': user", "'messageid': messageid", "'message': message", "'code': code", "'mobile': mobile", "'mytimestamp': mytimestamp"]
</code></pre>
<p>I need to remove the double quotes and add \</p>
<p>The expected string (not list) output is this...</p>
<pre><code>"[\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]"
</code></pre>
| -2 | 2016-07-31T05:26:46Z | 38,686,521 | <p>You have it almost done. You only need these minor changes:</p>
<ol>
<li><p>Use raw strings, just add an <code>r</code> before the string you already have:</p>
<p><code>r"\'%s\': %s"</code></p></li>
<li><p>You have a list with all the string groups you need. Just join them using <code>str.join</code>:</p>
<p><code>', '.join(newData)</code></p></li>
<li><p>The only thing missing are your opening and closing brackets.</p></li>
</ol>
<p>Your whole code would be:</p>
<pre><code>fieldnames = ("user","messageid","message","destination", "code","mobile","mytimestamp")
newData = [r"\'%s\': %s" % (tup, tup) for tup in fieldnames if tup!='destination' ]
print('[' + ', '.join(newData) + ']')
# [\'user\': user, \'messageid\': messageid, \'message\': message, \'code\': code, \'mobile\': mobile, \'mytimestamp\': mytimestamp]
</code></pre>
| 1 | 2016-07-31T17:16:25Z | [
"python"
] |
Weird behaviour when trying to estimate an unknown variable | 38,681,102 | <p>I'm trying to estimate an unknown variable (p) with a very high precision. What I have is a large number of ordered values (I call them t-values). Each value has a sequence number (n). Each of those t-values is basically the result of multiplying n with p and then adding a random offset ("noise"). My idea is to simply order the t-values according to their sequence number and then take the mean of all the offsets. It works very well. Here are 10 examples of estimates (true p is 1.0 and the number of t-values is 100000):</p>
<pre><code>1.0000737485173519
0.9999987583319258
1.0000688058361697
1.0002021529901506
0.9999391175701831
1.000012370796987
0.9999891218161053
1.0001566049086157
0.9999818309412788
0.9999594118399372
</code></pre>
<p>Close enough for what I want.</p>
<p>But in practice, a certain amount of t-values will also be lost. If I introduce a random loss of t-values the precision goes down dramatically, even if the number of lost t-values is as low as 0.001% - 0.01% and, this is the weird part, even if I compensate by generating more t-values so the number of t-values used in calculating the mean is the same!</p>
<p>Here are 10 examples when about 1% of the values were dropped:</p>
<pre><code>1.0024257205135292
1.0019969333070318
1.0019520792036436
1.001061555944925
0.997728342781954
1.000205614588305
0.9964173869854615
1.0028314864552466
1.0014389330965119
0.9954499027939065
</code></pre>
<p>Why is this?</p>
<p>I have made a simulation in Python to demonstrate. To see the difference, first run it as is. Then change drop_probability to 0.01 and run again.</p>
<p>Python:</p>
<pre><code>#!/usr/bin/python3
import random
random.seed(42)
runs = 10
effective_number_of_values = 100000
real_period=1
static_offset=0.5
lambd=0.2
drop_probability=0.00000001
#drop_probability=0.0001
#drop_probability=0.001
#drop_probability=0.01
#drop_probability=0.1
#drop_probability=0.5
for run in range(0, runs):
values = []
dropped_ts = 0
last_was_dropped = False
num_values = 0
n = 1
t = 0
while num_values < effective_number_of_values + 1:
actual_t = t
noise = static_offset + random.expovariate(lambd)
effective_t = actual_t + noise
if drop_probability is not None and \
random.random() <= drop_probability:
values.append((n, effective_t, True))
dropped_ts += 1
last_was_dropped = True
else:
values.append((n, effective_t, False))
if not last_was_dropped:
num_values += 1
last_was_dropped = False
t += real_period
n += 1
values.sort()
last_n = 0
last_t = 0
last_was_dropped = False
avg_sum = 0
avg_n = 0
for v in values:
n, t, dropped = v
if n > 1:
if not dropped and not last_was_dropped:
avg_sum += t - last_t
avg_n += 1
last_t = t
last_n = n
last_was_dropped = dropped
print(avg_sum / avg_n, "(values used: %d, dropped along the way: %.2f%% (%d))" % (avg_n, (dropped_ts/len(values))*100, dropped_ts))
<br>
</code></pre>
| 0 | 2016-07-31T05:39:32Z | 38,681,764 | <p>Your problem is due to the nature of your sampling. As you increase the percentage chance of dropping values, the overall percentage of dropped values will increase <em>exponentially</em> and will drastically reduce your accuracy and precision. </p>
<p>Needless to say, such a significant change in the sampling population causes your measurements to become exponentially more imprecise as the percentage of lost samples increases. As you increase the population, this problem becomes less apparent. If you expect more values to be dropped, take a much larger sample. If your means of sampling is so lossy that it drops more than ~10% of samples, then you must either rectify the problem causing the loss, take significantly more samples, or reconsider whether you really need less than 1% variance in your estimates.</p>
<p>Much of this is rooted in statistical theory. A cursory study of probabilities and random sampling will yield many helpful equations and rules of thumb to help ensure accurate estimates of unknown parameters. </p>
<p>The primary equation you'll need to use for this purpose is the one calculating the Margin of Error for a Normal Distribution to represent your sampling method: <code>ME = z * sqrt( (p_hat * q_hat) / n)</code>. </p>
<p>You will also need the Margin of Error for a Poisson distribution to represent the error introduced by the noise: the <a href="http://stats.stackexchange.com/questions/15371/how-to-calculate-a-confidence-level-for-a-poisson-distribution">formula, given large values of n*lambd</a> is <code>ME = z * sqrt( lambd / n )</code>. You'll need to include this value in your total error after sampling, and with 95% confidence, 10,000 samples, and lambd of 0.2, you find that it goes as high as <code>0.45%</code>, explaining a large proportion of the unexpected error.</p>
<p>However, this method of determining margin for error of the Poisson distribution is only a crude approximation that treats it as if it were a normal distribution. In your situation, with such a small lambd, you may wish to consider one of the 19 approximations contained within <a href="https://www.ine.pt/revstat/pdf/rs120203.pdf" rel="nofollow">this paper</a>.</p>
<h2>In Summary</h2>
<p>It does appear that you are correct about losing accuracy (assuming a normal distribution), however it may be due to the use of <code>random.expovariate(lambd)</code>:</p>
<blockquote>
<p>"Exponential distribution... Returned values range from 0 to positive infinity if lambd is positive."</p>
</blockquote>
<p>Using a mean function will <em>not</em> yield a valid result because <code>random.expovariate(0.2)</code> samples an exponential distribution (the inter-arrival counterpart of the Poisson), which at such a low rate is strongly non-symmetrical, just as <a href="https://ned.ipac.caltech.edu/level5/Leo/Stats2_2.html" rel="nofollow">W.R. Leo</a> of CalTech notes for the Poisson distribution at small µ:</p>
<blockquote>
<p>Note that the distribution is not symmetric. The peak or maximum of the distribution <strong>does not, therefore, correspond to the mean.</strong> However, as µ becomes large, the distribution becomes more and more symmetric and approaches a Gaussian form.</p>
</blockquote>
| 0 | 2016-07-31T07:29:34Z | [
"python",
"statistics"
] |
Weird behaviour when trying to estimate an unknown variable | 38,681,102 | <p>I'm trying to estimate an unknown variable (p) with a very high precision. What I have is a large number of ordered values (I call them t-values). Each value has a sequence number (n). Each of those t-values is basically the result of multiplying n with p and then adding a random offset ("noise"). My idea is to simply order the t-values according to their sequence number and then take the mean of all the offsets. It works very well. Here are 10 examples of estimates (true p is 1.0 and the number of t-values is 100000):</p>
<pre><code>1.0000737485173519
0.9999987583319258
1.0000688058361697
1.0002021529901506
0.9999391175701831
1.000012370796987
0.9999891218161053
1.0001566049086157
0.9999818309412788
0.9999594118399372
</code></pre>
<p>Close enough for what I want.</p>
<p>But in practice, a certain amount of t-values will also be lost. If I introduce a random loss of t-values the precision goes down dramatically, even if the number of lost t-values is as low as 0.001% - 0.01% and, this is the weird part, even if I compensate by generating more t-values so the number of t-values used in calculating the mean is the same!</p>
<p>Here are 10 examples when about 1% of the values were dropped:</p>
<pre><code>1.0024257205135292
1.0019969333070318
1.0019520792036436
1.001061555944925
0.997728342781954
1.000205614588305
0.9964173869854615
1.0028314864552466
1.0014389330965119
0.9954499027939065
</code></pre>
<p>Why is this?</p>
<p>I have made a simulation in Python to demonstrate. To see the difference, first run it as is. Then change drop_probability to 0.01 and run again.</p>
<p>Python:</p>
<pre><code>#!/usr/bin/python3
import random
random.seed(42)
runs = 10
effective_number_of_values = 100000
real_period=1
static_offset=0.5
lambd=0.2
drop_probability=0.00000001
#drop_probability=0.0001
#drop_probability=0.001
#drop_probability=0.01
#drop_probability=0.1
#drop_probability=0.5
for run in range(0, runs):
values = []
dropped_ts = 0
last_was_dropped = False
num_values = 0
n = 1
t = 0
while num_values < effective_number_of_values + 1:
actual_t = t
noise = static_offset + random.expovariate(lambd)
effective_t = actual_t + noise
if drop_probability is not None and \
random.random() <= drop_probability:
values.append((n, effective_t, True))
dropped_ts += 1
last_was_dropped = True
else:
values.append((n, effective_t, False))
if not last_was_dropped:
num_values += 1
last_was_dropped = False
t += real_period
n += 1
values.sort()
last_n = 0
last_t = 0
last_was_dropped = False
avg_sum = 0
avg_n = 0
for v in values:
n, t, dropped = v
if n > 1:
if not dropped and not last_was_dropped:
avg_sum += t - last_t
avg_n += 1
last_t = t
last_n = n
last_was_dropped = dropped
print(avg_sum / avg_n, "(values used: %d, dropped along the way: %.2f%% (%d))" % (avg_n, (dropped_ts/len(values))*100, dropped_ts))
<br>
</code></pre>
| 0 | 2016-07-31T05:39:32Z | 38,685,440 | <p>Not sure I fully understand your question, but I'm trying to be helpful.</p>
<p>I do believe the result your seeing is to be expected. Suppose the drop rate increase such that in average every second measurement is dropped. The average difference between two consecutive remaining measurements will be twice the value it had been before. So the drop rate does affect the result. Similarly, If you drop just 10%, then the difference should increase by ~10%.</p>
<p>Here is the way I rewrote your code. In this version I always drop a fixed amount of measurements using the <code>random.sample</code> function.</p>
<pre><code>import random
#random.seed(42)
effective_number_of_values = 100000
real_period = 1
static_offset = 0.5
lambd = 0.2
drop_probabilities = [0.00001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.5, 0.9, 0.99]
for drop_probability in drop_probabilities:
    values = []  # reset for each drop probability so earlier runs don't accumulate
desiredlen = round(effective_number_of_values * (1 + drop_probability))
for t in range(desiredlen):
noise = static_offset + random.expovariate(lambd)
effective_t = t + noise
values.append((t, effective_t))
values_after_drop = random.sample(values, effective_number_of_values)
values_after_drop.sort()
diff_t = [values_after_drop[i][1] - values_after_drop[i-1][1]
for i in range(1, len(values_after_drop))]
avg = sum(diff_t)/len(diff_t)
print("avg = {}. {} dropped out of {} at {} probability".
format(avg, len(values) - effective_number_of_values,
len(values), drop_probability))
</code></pre>
| 0 | 2016-07-31T15:20:43Z | [
"python",
"statistics"
] |
Weird behaviour when trying to estimate an unknown variable | 38,681,102 | <p>I'm trying to estimate an unknown variable (p) with a very high precision. What I have is a large number of ordered values (I call them t-values). Each value has a sequence number (n). Each of those t-values is basically the result of multiplying n with p and then adding a random offset ("noise"). My idea is to simply order the t-values according to their sequence number and then take the mean of all the offsets. It works very well. Here are 10 examples of estimates (true p is 1.0 and the number of t-values is 100000):</p>
<pre><code>1.0000737485173519
0.9999987583319258
1.0000688058361697
1.0002021529901506
0.9999391175701831
1.000012370796987
0.9999891218161053
1.0001566049086157
0.9999818309412788
0.9999594118399372
</code></pre>
<p>Close enough for what I want.</p>
<p>But in practice, a certain amount of t-values will also be lost. If I introduce a random loss of t-values the precision goes down dramatically, even if the number of lost t-values is as low as 0.001% - 0.01% and, this is the weird part, even if I compensate by generating more t-values so the number of t-values used in calculating the mean is the same!</p>
<p>Here are 10 examples when about 1% of the values were dropped:</p>
<pre><code>1.0024257205135292
1.0019969333070318
1.0019520792036436
1.001061555944925
0.997728342781954
1.000205614588305
0.9964173869854615
1.0028314864552466
1.0014389330965119
0.9954499027939065
</code></pre>
<p>Why is this?</p>
<p>I have made a simulation in Python to demonstrate. To see the difference, first run it as is. Then change drop_probability to 0.01 and run again.</p>
<p>Python:</p>
<pre><code>#!/usr/bin/python3
import random
random.seed(42)
runs = 10
effective_number_of_values = 100000
real_period=1
static_offset=0.5
lambd=0.2
drop_probability=0.00000001
#drop_probability=0.0001
#drop_probability=0.001
#drop_probability=0.01
#drop_probability=0.1
#drop_probability=0.5
for run in range(0, runs):
values = []
dropped_ts = 0
last_was_dropped = False
num_values = 0
n = 1
t = 0
while num_values < effective_number_of_values + 1:
actual_t = t
noise = static_offset + random.expovariate(lambd)
effective_t = actual_t + noise
if drop_probability is not None and \
random.random() <= drop_probability:
values.append((n, effective_t, True))
dropped_ts += 1
last_was_dropped = True
else:
values.append((n, effective_t, False))
if not last_was_dropped:
num_values += 1
last_was_dropped = False
t += real_period
n += 1
values.sort()
last_n = 0
last_t = 0
last_was_dropped = False
avg_sum = 0
avg_n = 0
for v in values:
n, t, dropped = v
if n > 1:
if not dropped and not last_was_dropped:
avg_sum += t - last_t
avg_n += 1
last_t = t
last_n = n
last_was_dropped = dropped
print(avg_sum / avg_n, "(values used: %d, dropped along the way: %.2f%% (%d))" % (avg_n, (dropped_ts/len(values))*100, dropped_ts))
<br>
</code></pre>
| 0 | 2016-07-31T05:39:32Z | 38,760,158 | <p>For completeness here is the actual problem I'm trying to solve:</p>
<p>Two computers are connected to the internet. Computer A sends a special kind of packet to computer B at a fixed interval (the packets are UDP-based). Computer B needs to estimate this interval with very high accuracy.</p>
<p>The packets have sequence numbers. Each packet will obviously be more or less delayed and some will be lost. Some will arrive in the wrong order.</p>
<p>So what computer B knows is this: the list of arrived packets with their sequence numbers and the times they arrived. From that I hoped it would be possible to estimate the interval to at least four decimal places of accuracy, using not more than around 10000 samples (a typical interval will be 1 second, and since I want to account for small fluctuations in the computers' clocks during the day, I don't want to use more than that). I hoped this would be possible even with a high rate of lost packets (for example 50%).</p>
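<p>One standard approach for this setup (a sketch of ordinary least squares, not something proposed elsewhere in this thread) is to fit arrival time against sequence number: the delay noise averages out, and missing packets simply contribute no data point.</p>

```python
import random

# Hypothetical sketch: least-squares slope of arrival time vs. sequence
# number, with simulated exponential delay noise and ~50% packet loss.
random.seed(1)
true_p = 1.0   # the unknown sending interval we want to recover
pairs = [(n, n * true_p + 0.5 + random.expovariate(0.2))
         for n in range(10000) if random.random() > 0.5]

mean_n = sum(n for n, _ in pairs) / len(pairs)
mean_t = sum(t for _, t in pairs) / len(pairs)
slope = (sum((n - mean_n) * (t - mean_t) for n, t in pairs) /
         sum((n - mean_n) ** 2 for n, _ in pairs))
print(slope)   # close to 1.0 despite the heavy loss
```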
| 0 | 2016-08-04T06:40:57Z | [
"python",
"statistics"
] |
Why won't Django load my static CSS files? | 38,681,270 | <p>I'm having trouble getting Django to load my CSS for my html template. I'm aware that there are a lot of posts like this, such as <a href="http://stackoverflow.com/questions/15428146/django-server-error-loading-static-files">here</a> and <a href="http://stackoverflow.com/questions/13446325/django-css-is-not-not-working">here</a>, but I'm not sure what else to do here as I've tried several of these types of these solutions to no avail.</p>
<p>Loading up my website using <code>python manage.py runserver</code> returns this log:</p>
<pre><code>[31/Jul/2016 01:58:29] "GET / HTTP/1.1" 200 1703
[31/Jul/2016 01:58:29] "GET /static/css/about.css HTTP/1.1" 404 1766
</code></pre>
<p>I'm not sure if the 404 on the end of the second line of the log refers to a 404 error or not. </p>
<p>I have tried adding the static files to my <code>urls.py</code>:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', TemplateView.as_view(template_name='about.html'))
]
urlpatterns += staticfiles_urlpatterns()
</code></pre>
<p>I have tried modifying my <code>settings.py</code>:</p>
<pre><code>STATIC_ROOT = ''
STATIC_URL = '/static/'
STATICFILES_DIR = os.path.dirname(os.path.abspath(__file__)) + "/static/"
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder'
)
</code></pre>
<p>And I of course used the proper loading methods in the actual template itself:</p>
<pre><code>{% load staticfiles %}
...
<link rel="stylesheet" href="{% static 'css/about.css' %}">
</code></pre>
<p>Here's my file structure:</p>
<pre><code>Personal_Website
|Personal_Website
||settings.py
||urls.py
||etc...
|static
||css
|||about.css
</code></pre>
<p>Frankly, I'm not sure what else to do here. I feel like I've tried everything with the settings and still I'm getting that same log.</p>
| 0 | 2016-07-31T06:10:59Z | 38,681,375 | <p>The setting name is <code>STATICFILES_DIRS</code> (plural, with a trailing S); your <code>settings.py</code> defines <code>STATICFILES_DIR</code>, which Django ignores. To serve static files from the project directory you should have this:</p>
<pre><code>STATICFILES_DIRS = [os.path.join(BASE_DIR, "static"),]
</code></pre>
<p>Also, <code>os.path.dirname(os.path.abspath(__file__))</code> points to the inner Personal_Website folder (the one containing <code>settings.py</code>), not the project root where your <code>static</code> directory lives, so build the path from <code>BASE_DIR</code> as above.</p>
| 1 | 2016-07-31T06:26:44Z | [
"python",
"html",
"css",
"django"
] |
django create custom shell command to create app with custom layout | 38,681,307 | <p>I have a different layout for the apps in my Django project, i.e. my app layout is as below:</p>
<pre><code>myapp/
models.py/
__init__.py
rest_api/
__init__.py
services/
__init__.py
__init__.py
admin.py
apps.py
models.py
views.py
tests.py
</code></pre>
<p>everytime i create app using './manage.py create app' and then i have to add those directories manually,i want to create an './manage.py .....' command to create app with the layout i have given above,is it possible in django to create an customize command to create custom layout's django app?</p>
| 0 | 2016-07-31T06:16:54Z | 38,681,371 | <p>Django allows your to create an custom project structure using template flag. For eg:</p>
<pre><code>django-admin.py startproject \
--template=https://github.com/caktus/django-project-template/zipball/master \
--extension=py,rst,yml \
--name=Makefile,gulpfile.js,package.json
<project_name>
</code></pre>
<p><a href="https://github.com/caktus/django-project-template" rel="nofollow">https://github.com/caktus/django-project-template</a></p>
<p>You can use something like this or you can try </p>
<pre><code>usage: django-admin startapp [-h] [--version] [-v {0,1,2,3}]
[--settings SETTINGS] [--pythonpath PYTHONPATH]
[--traceback] [--no-color] [--template TEMPLATE]
[--extension EXTENSIONS] [--name FILES]
name [directory]
</code></pre>
<p>It also allows for passing a similar template for creating app with custom structure.</p>
<p>Hope this answers your query.</p>
| 2 | 2016-07-31T06:25:56Z | [
"python",
"django"
] |
installing opencv3 for python3 in OSX | 38,681,312 | <p>So I followed <a href="http://tsaith.github.io/install-opencv-3-for-python-3-on-osx.html" rel="nofollow">this tutorial</a> and got this error:</p>
<pre><code>brew link --overwrite eigen
brew install opencv3 --with-python3 --with-contrib
</code></pre>
<p>wrote this in <code>~/.profile</code> and sourced it:</p>
<pre><code>export PYTHONPATH=$PYTHONPATH:/usr/local/Cellar/opencv3/3.0.0/lib/python3.4/site-packages
source ~/.profile
python3
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'cv2'
</code></pre>
<p>Can someone please guide me what is wrong and how to fix it?</p>
<p>UPDATE:</p>
<p>I tried this too:</p>
<pre><code>Monas-MacBook-Pro:build mona$ brew ln --force opencv3
Linking /usr/local/Cellar/opencv3/3.1.0_3...
Error: Could not symlink bin/opencv_annotation
Target /usr/local/bin/opencv_annotation
is a symlink belonging to opencv. You can unlink it:
brew unlink opencv
To force the link and overwrite all conflicting files:
brew link --overwrite opencv3
To list all files that would be deleted:
brew link --overwrite --dry-run opencv3
Monas-MacBook-Pro:build mona$ brew link --overwrite opencv3
Warning: opencv3 is keg-only and must be linked with --force
Note that doing so can interfere with building software.
Monas-MacBook-Pro:build mona$ brew link --force --overwrite opencv3
Linking /usr/local/Cellar/opencv3/3.1.0_3... 551 symlinks created
</code></pre>
| 0 | 2016-07-31T06:17:35Z | 38,685,867 | <p>Since you installed OpenCV3 using Homebrew, your symlink might be corrupted.</p>
<p>Since the current version of OpenCV in homebrew/science is 3.1.0, your symlink probably should point to</p>
<pre><code>/usr/local/Cellar/opencv3/3.1.0_3/lib/python3.5/site-packages
</code></pre>
<p>3.1.0_3 and python3.5 might differ on your system, so just see for yourself what directories are in /usr/local/Cellar.</p>
<p>BTW I didn't need to link anything after doing </p>
<pre><code>brew ln --force opencv3
</code></pre>
| 0 | 2016-07-31T16:04:12Z | [
"python",
"osx",
"python-3.x",
"opencv",
"opencv3.0"
] |
installing opencv3 for python3 in OSX | 38,681,312 | <p>So I followed <a href="http://tsaith.github.io/install-opencv-3-for-python-3-on-osx.html" rel="nofollow">this tutorial</a> and got this error:</p>
<pre><code>brew link --overwrite eigen
brew install opencv3 --with-python3 --with-contrib
</code></pre>
<p>wrote this in <code>~/.profile</code> and sourced it:</p>
<pre><code>export PYTHONPATH=$PYTHONPATH:/usr/local/Cellar/opencv3/3.0.0/lib/python3.4/site-packages
source ~/.profile
python3
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 00:54:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'cv2'
</code></pre>
<p>Can someone please guide me what is wrong and how to fix it?</p>
<p>UPDATE:</p>
<p>I tried this too:</p>
<pre><code>Monas-MacBook-Pro:build mona$ brew ln --force opencv3
Linking /usr/local/Cellar/opencv3/3.1.0_3...
Error: Could not symlink bin/opencv_annotation
Target /usr/local/bin/opencv_annotation
is a symlink belonging to opencv. You can unlink it:
brew unlink opencv
To force the link and overwrite all conflicting files:
brew link --overwrite opencv3
To list all files that would be deleted:
brew link --overwrite --dry-run opencv3
Monas-MacBook-Pro:build mona$ brew link --overwrite opencv3
Warning: opencv3 is keg-only and must be linked with --force
Note that doing so can interfere with building software.
Monas-MacBook-Pro:build mona$ brew link --force --overwrite opencv3
Linking /usr/local/Cellar/opencv3/3.1.0_3... 551 symlinks created
</code></pre>
| 0 | 2016-07-31T06:17:35Z | 38,688,201 | <p>You need to <strong>link</strong> your opencv to python site-packages</p>
<p>Link <strong>cv.py</strong> and <strong>cv2.so</strong> using <code>ln -s [cellar-opencv-site-packages-path] [lib-python-site-packages-path]</code> </p>
<pre><code>ln -s /usr/local/Cellar/opencv3/3.1.0_3/lib/python3.4.1/site-packages/cv.py /usr/local/lib/python3.4.1/site-packages/cv.py
ln -s /usr/local/Cellar/opencv3/3.1.0_3/lib/python3.4.1/site-packages/cv2.so /usr/local/lib/python3.4.1/site-packages/cv2.so
</code></pre>
<p>The paths depend on your opencv and python versions.</p>
<p>Here it is - <code>[opencv3/3.1.0_3]</code> and <code>[python3.4.1]</code></p>
<hr>
<p>In short, Copy/Link <strong>cv.py</strong> and <strong>cv2.so</strong> from <code>/usr/local/Cellar/opencv3/[[version]]/lib/python[[version]]/site-packages/</code> to <code>/usr/local/lib/python[[version]]/site-packages/</code></p>
<hr>
<p>OR you can also add opencv site-packages to PYTHONPATH</p>
<pre><code>export PYTHONPATH=$PYTHONPATH:/usr/local/Cellar/opencv3/3.1.0_3/lib/python3.4.1/site-packages/
source ~/.profile
</code></pre>
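<p>If the <code>export</code> does not seem to take effect, a quick sanity check from inside the interpreter (the Cellar path below is illustrative and depends on your opencv/python versions) is to confirm the directory is on <code>sys.path</code>, appending it at runtime if needed:</p>

```python
import sys

# illustrative path -- substitute your actual opencv3 site-packages directory
extra = "/usr/local/Cellar/opencv3/3.1.0_3/lib/python3.4/site-packages"

# runtime equivalent of extending PYTHONPATH, for this session only
if extra not in sys.path:
    sys.path.append(extra)
```

<p>If <code>import cv2</code> succeeds after the append but not before, the problem is only the environment variable, not the build.</p>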
| 0 | 2016-07-31T20:38:54Z | [
"python",
"osx",
"python-3.x",
"opencv",
"opencv3.0"
] |
Pandas `period_range` gives strange results | 38,681,319 | <p>I want a pandas period range with a 25-hour offset, and I saw there are two ways to do this (see <a href="https://github.com/AileenNielsen/TimeSeriesAnalysisWithPython/blob/master/1.%20Dates%20%26%20Times.ipynb" rel="nofollow">here</a>):</p>
<p>The first way is to use <code>freq=25H</code>, which I tried, and gave me the right answer:</p>
<pre><code>import pandas as pd
pd.period_range(start='2016-01-01 10:00', freq = '25H', periods = 10)
</code></pre>
<p>and the result is </p>
<pre><code>PeriodIndex(['2016-01-01 10:00', '2016-01-02 11:00', '2016-01-03 12:00',
'2016-01-04 13:00', '2016-01-05 14:00', '2016-01-06 15:00',
'2016-01-07 16:00', '2016-01-08 17:00', '2016-01-09 18:00',
'2016-01-10 19:00'],
dtype='int64', freq='25H')
</code></pre>
<p>The second way, using <code>freq=1D1H</code>, however, gave me a rather strange result:</p>
<pre><code>pd.period_range(start='2016-01-01 10:00', freq = '1D1H', periods = 10)
</code></pre>
<p>and I got</p>
<pre><code> PeriodIndex(['1971-12-02 01:00', '1971-12-02 02:00', '1971-12-02 03:00',
'1971-12-02 04:00', '1971-12-02 05:00', '1971-12-02 06:00',
'1971-12-02 07:00', '1971-12-02 08:00', '1971-12-02 09:00',
'1971-12-02 10:00'],
dtype='int64', freq='25H')
</code></pre>
<p>So maybe <code>1D1H</code> is not a valid way to specify the frequency? How did <code>1971</code> come up? (I also tried to use <code>1D1H</code> as the frequency for the <code>date_range()</code> method, which did yield the right answer.)</p>
<pre><code>pd.date_range('2016-01-01 10:00', freq = '1D1H', periods = 10)
DatetimeIndex(['2016-01-01 10:00:00', '2016-01-02 11:00:00',
'2016-01-03 12:00:00', '2016-01-04 13:00:00',
'2016-01-05 14:00:00', '2016-01-06 15:00:00',
'2016-01-07 16:00:00', '2016-01-08 17:00:00',
'2016-01-09 18:00:00', '2016-01-10 19:00:00'],
dtype='datetime64[ns]', freq='25H')
</code></pre>
<p>EDIT: it appears that with <code>period_range()</code>, though <code>freq=1D1H</code> doesn't work, <code>freq=1H1D</code> does. The reason is still unknown.</p>
<p>EDIT2: this has been identified as a bug, see the answer below.</p>
| 4 | 2016-07-31T06:18:04Z | 38,685,169 | <p>The bug has already been identified and <a href="https://github.com/pydata/pandas/issues/13730" rel="nofollow">reported on GitHub</a>.</p>
<p>EDIT: <a href="https://github.com/pydata/pandas/commit/81819b7aa2537469448fbaeb4cd9e3d500f4e2a1" rel="nofollow">A fix</a> has been merged and will be included in v0.19.</p>
| 4 | 2016-07-31T14:51:14Z | [
"python",
"pandas"
] |
I get errors and problems when installing "matplotlib" | 38,681,412 | <p>I have a virtual env at '/home/name/pyenv' for python2.7.9.
Now I want to install 'matplotlib' for it,
so I activate the virtual env and try to install 'matplotlib' as below:</p>
<ul>
<li>by the command "sudo apt-get install python-matplotlib"
(without "sudo", permission is denied); it runs well and "matplotlib" is indeed installed, but for the default python, not for the virtual env (pyenv);</li>
<li><p>by command "pip install matplotlib"</p>
<p>I get the error below:</p>
<pre><code> * The following required packages can not be built:
* freetype
</code></pre></li>
</ul>
<hr>
<pre><code>Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-tYCFkL/matplotlib
Exception information:
Traceback (most recent call last):
  File "/home/caofa/odoo-9.0/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/home/caofa/odoo-9.0/local/lib/python2.7/site-packages/pip/commands/install.py", line 290, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/home/caofa/odoo-9.0/local/lib/python2.7/site-packages/pip/req.py", line 1230, in prepare_files
    req_to_install.run_egg_info()
  File "/home/caofa/odoo-9.0/local/lib/python2.7/site-packages/pip/req.py", line 326, in run_egg_info
    command_desc='python setup.py egg_info')
  File "/home/caofa/odoo-9.0/local/lib/python2.7/site-packages/pip/util.py", line 716, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-tYCFkL/matplotlib
</code></pre>
<p>I want to install it by method 1, but I don't know how to install it into the virtual env.</p>
| 0 | 2016-07-31T06:32:33Z | 38,681,636 | <p>One possibility is to install matplotlib globally and then create your virtualenv <strong>with</strong> the site packages; see <a href="http://stackoverflow.com/questions/12079607/make-virtualenv-inherit-specific-packages-from-your-global-site-packages">here</a> for someone with exactly the same problem. By using <code>virtualenv --system-site-packages</code> you can then activate your virtualenv and add additional packages, or update them, within your virtualenv only.</p>
<p>I am reasonably sure that you can even uninstall globally installed packages within your virtualenv without impacting your global installation but suggest you pick a small package that you can easily reinstall to test this on early on.</p>
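<p>A small sketch to check, from inside the interpreter, whether you are running the virtualenv's Python at all (classic <code>virtualenv</code> sets <code>sys.real_prefix</code>; the newer <code>venv</code> sets <code>sys.base_prefix</code>):</p>

```python
import sys

# True when running inside a virtualenv/venv, False for the system interpreter
in_virtualenv = hasattr(sys, 'real_prefix') or \
    sys.prefix != getattr(sys, 'base_prefix', sys.prefix)

print(sys.prefix)
print(in_virtualenv)
```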
| 0 | 2016-07-31T07:08:10Z | [
"python",
"matplotlib",
"pip",
"virtualenv"
] |
Working with calculating GCD - Python function return | 38,681,432 | <p>I wrote code that calculates the GCD of two numbers. The gcd of (24,12) is 12. The function <code>compute_gcd</code> computes the GCD and returns it, which gets printed in the main function. However, the output is <code>None</code> when I return it to the main function, while it is 12 when I print it inside the <code>compute_gcd</code> function itself. </p>
<p>Where am I going wrong while returning the GCD?</p>
<pre><code>def compute_gcd(a,b):
if(b==0):
return a # Prints 12 if I replace with print a
else:
compute_gcd(b,a%b)
def main():
a=24
b=12
print compute_gcd(a,b) # Prints none
main()
</code></pre>
| 1 | 2016-07-31T06:35:58Z | 38,681,452 | <p>You forgot to put a <code>return</code> in the <code>else</code> branch. This works:</p>
<pre><code>def compute_gcd(a,b):
if b == 0:
return a
else:
return compute_gcd(b,a%b)
def main():
a=24
b=12
print compute_gcd(a,b) # Prints 12
main()
</code></pre>
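<p>For comparison, an iterative version sidesteps the issue entirely, since there is no recursive call whose result could be dropped (a sketch):</p>

```python
def compute_gcd(a, b):
    # Euclid's algorithm, iterative form: nothing to forget to return
    while b != 0:
        a, b = b, a % b
    return a

print(compute_gcd(24, 12))  # prints 12
```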
| 4 | 2016-07-31T06:38:40Z | [
"python",
"python-2.7",
"return",
"return-value",
"greatest-common-divisor"
] |
Working with calculating GCD - Python function return | 38,681,432 | <p>I wrote code that calculates the GCD of two numbers. The gcd of (24,12) is 12. The function <code>compute_gcd</code> computes the GCD and returns it, which gets printed in the main function. However, the output is <code>None</code> when I return it to the main function, while it is 12 when I print it inside the <code>compute_gcd</code> function itself. </p>
<p>Where am I going wrong while returning the GCD?</p>
<pre><code>def compute_gcd(a,b):
if(b==0):
return a # Prints 12 if I replace with print a
else:
compute_gcd(b,a%b)
def main():
a=24
b=12
print compute_gcd(a,b) # Prints none
main()
</code></pre>
| 1 | 2016-07-31T06:35:58Z | 38,681,526 | <p>Your <code>else</code> branch has no <code>return</code>, hence the output is <code>None</code>. If you change it to</p>
<pre><code>else:
return compute_gcd(b,a%b)
</code></pre>
<p>You'll get <code>12</code>.</p>
| -2 | 2016-07-31T06:52:09Z | [
"python",
"python-2.7",
"return",
"return-value",
"greatest-common-divisor"
] |
Working with calculating GCD - Python function return | 38,681,432 | <p>I wrote code that calculates the GCD of two numbers. The gcd of (24,12) is 12. The function <code>compute_gcd</code> computes the GCD and returns it, which gets printed in the main function. However, the output is <code>None</code> when I return it to the main function, while it is 12 when I print it inside the <code>compute_gcd</code> function itself. </p>
<p>Where am I going wrong while returning the GCD?</p>
<pre><code>def compute_gcd(a,b):
if(b==0):
return a # Prints 12 if I replace with print a
else:
compute_gcd(b,a%b)
def main():
a=24
b=12
print compute_gcd(a,b) # Prints none
main()
</code></pre>
| 1 | 2016-07-31T06:35:58Z | 38,681,847 | <p>Try this: you need a <code>return</code> inside the <code>else</code> branch.</p>
<pre><code>def compute_gcd(a,b):
if(b==0):
return a
else:
return compute_gcd(b,a%b)
def main():
a = 24
b = 12
print(compute_gcd(a,b))
main()
</code></pre>
| 0 | 2016-07-31T07:41:36Z | [
"python",
"python-2.7",
"return",
"return-value",
"greatest-common-divisor"
] |
Can this be done with SQLAlchemy / Python? | 38,681,548 | <p>I'm trying to create a database in SQL, with Python. I was wondering if it would be possible to group things together like the below image? (I know we can do this in Excel. But I was wondering if it would be possible to do it in SQL?)</p>
<p><a href="http://i.stack.imgur.com/BNSkC.png" rel="nofollow"><img src="http://i.stack.imgur.com/BNSkC.png" alt="capture"></a></p>
<p>If it is not possible to do this in SQL, what do you suggest that would provide similar results?</p>
| 1 | 2016-07-31T06:55:32Z | 38,681,680 | <p>Yes, it is possible to achieve that with SQL. All you have to do is create a table like this:</p>
<pre><code>CREATE TABLE data (
names TEXT,
counter INT,
ratio1_2012 INT,
ratio2_2012 INT,
ratio3_2012 INT,
ratio4_2012 INT,
ratio1_2013 INT,
...
);
</code></pre>
<p>Of course, this is not a very "pretty" solution and so it might be better to have one table per year:</p>
<pre><code>CREATE TABLE data2012 (
names TEXT,
counter INT,
ratio1 INT,
...
)
...
</code></pre>
<p>In the end, which approach you want to take depends on how much data you have and how much of a concern is space vs query execution speed to you. If you have two tables, you have a lot of duplicated data (all names for instance).</p>
<p>Visually, either method works fine; you just have to adapt the frontend code accordingly.</p>
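<p>Since the question mentions Python, here is a minimal runnable sketch of the flattened one-table layout using the standard-library <code>sqlite3</code> module (column names and values are illustrative; with SQLAlchemy you would declare the same columns on a <code>Table</code> or mapped class):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

# flattened layout: one column per (ratio, year) pair
cur.execute("""
    CREATE TABLE data (
        name TEXT,
        counter INTEGER,
        ratio1_2012 REAL,
        ratio1_2013 REAL
    )
""")
cur.execute("INSERT INTO data VALUES (?, ?, ?, ?)", ('Acme', 1, 0.5, 0.7))

row = cur.execute("SELECT ratio1_2013 FROM data WHERE name = 'Acme'").fetchone()
print(row)  # prints (0.7,)
```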
| 0 | 2016-07-31T07:16:30Z | [
"python",
"sql",
"sqlalchemy"
] |
HMM loaded from pickle looks untrained | 38,681,698 | <p>I am trying to serialise nltk.tag.hmm.HiddenMarkovModelTagger into a pickle to use it when needed without re-training. However, after loading from .pkl my HMM looks untrained. My two questions here are:</p>
<ol>
<li>What am I doing wrong? </li>
<li>Is it a good idea at all to serialise HMM
when one has a <em>big</em> dataset?</li>
</ol>
<p>Here's the code:</p>
<pre><code>In [1]: import nltk
In [2]: from nltk.probability import *
In [3]: from nltk.util import unique_list
In [4]: import json
In [5]: with open('data.json') as data_file:
...: corpus = json.load(data_file)
...:
In [6]: corpus = [[tuple(l) for l in sentence] for sentence in corpus]
In [7]: tag_set = unique_list(tag for sent in corpus for (word,tag) in sent)
In [8]: symbols = unique_list(word for sent in corpus for (word,tag) in sent)
In [9]: trainer = nltk.tag.HiddenMarkovModelTrainer(tag_set, symbols)
In [10]: train_corpus = corpus[:4]
In [11]: test_corpus = [corpus[4]]
In [12]: hmm = trainer.train_supervised(train_corpus, estimator=LaplaceProbDist)
In [13]: print('%.2f%%' % (100 * hmm.evaluate(test_corpus)))
100.00%
</code></pre>
<p>As you can see HMM is trained. Now I pickle it:</p>
<pre><code>In [14]: import pickle
In [16]: output = open('hmm.pkl', 'wb')
In [17]: pickle.dump(hmm, output)
In [18]: output.close()
</code></pre>
<p>After reset and load the model looks dumber than a box of rocks:</p>
<pre><code>In [19]: %reset
Once deleted, variables cannot be recovered. Proceed (y/[n])? y
In [20]: import pickle
In [21]: import json
In [22]: with open('data.json') as data_file:
....: corpus = json.load(data_file)
....:
In [23]: test_corpus = [corpus[4]]
In [24]: pkl_file = open('hmm.pkl', 'rb')
In [25]: hmm = pickle.load(pkl_file)
In [26]: pkl_file.close()
In [27]: type(hmm)
Out[27]: nltk.tag.hmm.HiddenMarkovModelTagger
In [28]: print('%.2f%%' % (100 * hmm.evaluate(test_corpus)))
0.00%
</code></pre>
| 1 | 2016-07-31T07:18:25Z | 38,685,226 | <p>1) After In[22], you need to add:</p>
<pre><code>corpus = [[tuple(l) for l in sentence] for sentence in corpus]
</code></pre>
<p>2) Re-training the model every time for testing purposes is time-consuming,
so it is good to <code>pickle.dump</code> your model once and load it back when needed.</p>
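<p>A minimal round-trip sketch (the dict is a hypothetical stand-in for a trained tagger; a real nltk model pickles the same way):</p>

```python
import os
import pickle
import tempfile

# hypothetical stand-in for a trained HMM tagger object
model = {'tag_set': ['NN', 'VB'], 'transitions': {('NN', 'VB'): 0.5}}

path = os.path.join(tempfile.mkdtemp(), 'hmm.pkl')
with open(path, 'wb') as f:
    pickle.dump(model, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == model)  # prints True
```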
| 0 | 2016-07-31T14:57:20Z | [
"python",
"nltk",
"pickle"
] |
Locust, upload test | 38,681,714 | <p>I am new to Locust and trying to write my first test: uploading a simple file with headers to a given path. I can't seem to make it work.</p>
<p>I will be glad for any help, thanks!</p>
<p>My current test is:</p>
<pre><code>class UserBehavior(TaskSet):
@task
def post_img(self):
self.client.headers['1'] = "1"
self.client.headers['1'] = "1"
test_file = 'PATH/TO.FILE'
self.client.post("address", files={'file': open(test_file, 'rb')})
class WebsiteUser(HttpLocust):
host = 'IP'
task_set = UserBehavior
min_wait = 100
max_wait = 300
</code></pre>
| 0 | 2016-07-31T07:22:05Z | 38,684,143 | <p>I managed to write a test that uploads a file:</p>
<pre><code>class HttpSession(TaskSet):
@task
def post_img(self):
headers = {'1': '1', '2': '2'}
test_file = '/pathTo/file.jpg'
self.client.request('POST', 'url', files={'file': open(test_file, 'rb')}, headers=headers)
class WebsiteUser(HttpLocust):
host = 'http://IP'
task_set = HttpSession
min_wait = 100
max_wait = 300
</code></pre>
| 1 | 2016-07-31T12:49:51Z | [
"python",
"upload",
"locust"
] |
Python replace value in text file | 38,681,740 | <p>I'm trying to replace a value in a specific line in a text file.</p>
<p>Each line of my text file contains the count of the search term, the search term, and the date and time.</p>
<p>text file:</p>
<pre><code>MemTotal,5,2016-07-30 12:02:33,781
model name,3,2016-07-30 13:37:59,074
model,3,2016-07-30 15:39:59,075
</code></pre>
<p>How can I replace, for example, the count of the search term on line 2 (model name,3,2016-07-30 13:37:59,074)?</p>
<p>This is what i have already:</p>
<pre><code>f = open('file.log','r')
filedata = f.read()
f.close()
newdata = filedata.replace("2","3")
f = open('file.log','w')
f.write(newdata)
f.close()
</code></pre>
<p>But this replaces every occurrence of the value 2, not just the one on the target line.</p>
| -2 | 2016-07-31T07:25:33Z | 38,682,440 | <p>You have to change three things in your code to get the job done:</p>
<ol>
<li><p>Read the file using <code>readlines</code>. </p>
<pre><code>filedata = f.readlines()
</code></pre></li>
<li><p>Modify the line you want to change (keep in mind that Python indices start at 0 and don't forget to add a newline character <code>\n</code> at the end of the string): </p>
<pre><code>filedata[1] = 'new count,new search term,new date and time\n'
</code></pre></li>
<li><p>Save the file using a for loop: </p>
<pre><code>for line in filedata:
f.write(line)
</code></pre></li>
</ol>
<p>Here is the full code (notice I used the <code>with</code> context manager to open/close the file):</p>
<pre><code>with open('file.log', 'r') as f:
filedata = f.readlines()
filedata[1] = 'new count,new search term,new date and time\n'
with open('file.log', 'w') as f:
for line in filedata:
f.write(line)
</code></pre>
| 0 | 2016-07-31T09:08:23Z | [
"python",
"search",
"replace"
] |
Python replace value in text file | 38,681,740 | <p>I'm trying to replace a value in a specific line in a text file.</p>
<p>Each line of my text file contains the count of the search term, the search term, and the date and time.</p>
<p>text file:</p>
<pre><code>MemTotal,5,2016-07-30 12:02:33,781
model name,3,2016-07-30 13:37:59,074
model,3,2016-07-30 15:39:59,075
</code></pre>
<p>How can I replace, for example, the count of the search term on line 2 (model name,3,2016-07-30 13:37:59,074)?</p>
<p>This is what i have already:</p>
<pre><code>f = open('file.log','r')
filedata = f.read()
f.close()
newdata = filedata.replace("2","3")
f = open('file.log','w')
f.write(newdata)
f.close()
</code></pre>
<p>But this replaces every occurrence of the value 2, not just the one on the target line.</p>
| -2 | 2016-07-31T07:25:33Z | 38,683,072 | <p>My solution:</p>
<pre><code>count = 0
line_number = 0
replacement = ""
term = "MemTotal"

# first pass: find the line containing the search term and build its replacement
f = open('examen.log', 'r')
for line in f.read().split('\n'):
    if term in line:
        replacement = line.replace("5", "25", 1)
        line_number = count
    count = count + 1
f.close()
print line_number

# second pass: rewrite the file with the modified line
f = open('examen.log', 'r')
filedata = f.readlines()
f.close()
filedata[line_number] = replacement + '\n'
print filedata[line_number]

f = open('examen.log', 'w')
for line in filedata:
    f.write(line)
f.close()
</code></pre>
<p>You only need to define the search term & the replacement value.</p>
| 0 | 2016-07-31T10:33:55Z | [
"python",
"search",
"replace"
] |
pandas not condition with filtering | 38,681,802 | <p>How can I implement a NOT condition in the filtering?</p>
<pre><code>grouped = store_ids_with_visits.groupby(level=[0, 1, 2])
grouped.filter(lambda x: (len(x) == 1 and x['template_fk'] == exterior_template))
</code></pre>
<p>I want to get all entries that do not satisfy the condition.</p>
<p>I tried doing:</p>
<pre><code>grouped.filter(lambda x: ~(len(x) == 1 and x['template_fk'] == exterior_template))
</code></pre>
<p>But got the following error:</p>
<pre><code>filter function returned a int, but expected a scalar bool
</code></pre>
| 1 | 2016-07-31T07:34:15Z | 38,682,103 | <p>IIUC, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin</code></a> to check for bool conditions and take only the <code>NOT(~)</code> values of the grouped dataframe: </p>
<pre><code> df[~df.isin(grouped.filter(lambda x: (len(x) == 1 and x['template_fk'] == exterior_template)))]
</code></pre>
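<p>Alternatively, the inversion can be done inside the lambda with <code>not</code>, which yields a scalar bool; <code>~False</code> evaluates to the int <code>-1</code>, which is exactly what triggers the "expected a scalar bool" error. A toy sketch (the column grouping and values are invented for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'store': ['a', 'a', 'b', 'c'],
                   'template_fk': [1, 2, 7, 3]})
exterior_template = 7

# keep the groups that are NOT a single exterior-template row
out = df.groupby('store').filter(
    lambda x: not (len(x) == 1 and
                   (x['template_fk'] == exterior_template).all()))
print(out['store'].tolist())  # prints ['a', 'a', 'c']
```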
| 1 | 2016-07-31T08:22:01Z | [
"python",
"pandas",
"dataframe"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,682,055 | <p>This is not exactly what you were asking but here is a way to print the data frame as you wanted:</p>
<pre><code>df = pd.DataFrame(data1)
for name, jobs in df.groupby('Name').groups.iteritems():
print '{0:<15}'.format(name),
for job in jobs:
print '{0:<15}{1:<15}'.format(df['Job'].ix[job], df['Job Eff Date'].ix[job]),
print
## Jane Analyst 1/1/2015 Manager 1/1/2016
## Joe Analyst 1/1/2015 Manager 1/1/2016 Director 7/1/2016
</code></pre>
| 0 | 2016-07-31T08:15:56Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,682,179 | <p>Say you start by unstacking:</p>
<pre><code>df2 = df2.set_index(['Name', 'Job']).unstack()
>>> df2
Job Eff Date
Job Analyst Director Manager
Name
Jane 1/1/2015 None 1/1/2016
Joe 1/1/2015 7/1/2016 1/1/2016
</code></pre>
<p>Now, to make things easier, flatten the multi-index:</p>
<pre><code>df2.columns = df2.columns.get_level_values(1)
>>> df2
Job Analyst Director Manager
Name
Jane 1/1/2015 None 1/1/2016
Joe 1/1/2015 7/1/2016 1/1/2016
</code></pre>
<p>Now, just manipulate the columns:</p>
<pre><code>cols = []
for i, c in enumerate(df2.columns):
col = 'Job %d' % i
df2[col] = c
cols.append(col)
col = 'Eff Date %d' % i
df2[col] = df2[c]
cols.append(col)
>>> df2[cols]
Job Job 0 Eff Date 0 Job 1 Eff Date 1 Job 2 Eff Date 2
Name
Jane Analyst 1/1/2015 Director None Manager 1/1/2016
Joe Analyst 1/1/2015 Director 7/1/2016 Manager 1/1/2016
</code></pre>
<p><strong>Edit</strong></p>
<p>Jane was never a director (alas). The above code states that Jane became Director at <code>None</code> date. To change the result so that it specifies that Jane became <code>None</code> at <code>None</code> date (which is a matter of taste), replace</p>
<pre><code>df2[col] = c
</code></pre>
<p>by</p>
<pre><code>df2[col] = [None if d is None else c for d in df2[c]]
</code></pre>
<p>This gives</p>
<pre><code>Job Job 0 Eff Date 0 Job 1 Eff Date 1 Job 2 Eff Date 2
Name
Jane Analyst 1/1/2015 None None Manager 1/1/2016
Joe Analyst 1/1/2015 Director 7/1/2016 Manager 1/1/2016
</code></pre>
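<p>Put end to end, the pipeline above condenses to a short runnable sketch (toy data, and without the column-renaming loop):</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Joe', 'Joe', 'Jane'],
                   'Job': ['Analyst', 'Manager', 'Analyst'],
                   'Job Eff Date': ['1/1/2015', '1/1/2016', '1/1/2015']})

wide = df.set_index(['Name', 'Job']).unstack()
wide.columns = wide.columns.get_level_values(1)  # flatten the MultiIndex
print(wide)
```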
| 2 | 2016-07-31T08:30:58Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,682,183 | <p>Here is a possible workaround. Here, I first create a dictionary of the proper form and create a DataFrame based on the new dictionary:</p>
<pre><code>df = pd.DataFrame(data1)
dic = {}
for name, jobs in df.groupby('Name').groups.iteritems():
if not dic:
dic['Name'] = []
dic['Name'].append(name)
for j, job in enumerate(jobs, 1):
jobstr = 'Job {0}'.format(j)
jobeffdatestr = 'Job Eff Date {0}'.format(j)
if jobstr not in dic:
dic[jobstr] = ['']*(len(dic['Name'])-1)
dic[jobeffdatestr] = ['']*(len(dic['Name'])-1)
dic[jobstr].append(df['Job'].ix[job])
dic[jobeffdatestr].append(df['Job Eff Date'].ix[job])
df2 = pd.DataFrame(dic).set_index('Name')
## Job 1 Job 2 Job 3 Job Eff Date 1 Job Eff Date 2 Job Eff Date 3
## Name
## Jane Analyst Manager 1/1/2015 1/1/2016
## Joe Analyst Manager Director 1/1/2015 1/1/2016 7/1/2016
</code></pre>
| 1 | 2016-07-31T08:31:17Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,682,517 | <pre><code>g = df2.groupby('Name').groups
names = list(g.keys())
data2 = {'Name': names}
cols = ['Name']
temp1 = [g[y] for y in names]
job_str = 'Job'
job_date_str = 'Job Eff Date'
for i in range(max([len(x) for x in g.values()])):
temp = [x[i] if len(x) > i else '' for x in temp1]
job_str_curr = job_str + str(i+1)
job_date_curr = job_date_str + str(i + 1)
data2[job_str + str(i+1)] = df2[job_str].ix[temp].values
data2[job_date_str + str(i+1)] = df2[job_date_str].ix[temp].values
cols.extend([job_str_curr, job_date_curr])
df3 = pd.DataFrame(data2, columns=cols)
df3 = df3.fillna('')
print(df3)
</code></pre>
<blockquote>
<pre><code> Name Job1 Job Eff Date1 Job2 Job Eff Date2 Job3 Job Eff Date3
0 Jane Analyst 1/1/2015 Manager 1/1/2016
1 Joe Analyst 1/1/2015 Manager 1/1/2016 Director 7/1/2016
</code></pre>
</blockquote>
| 1 | 2016-07-31T09:19:44Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,685,439 | <p><code>.T</code> within <code>groupby</code></p>
<pre><code>def tgrp(df):
df = df.drop('Name', axis=1)
return df.reset_index(drop=True).T
df2.groupby('Name').apply(tgrp).unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/4b3Nx.png" rel="nofollow"><img src="http://i.stack.imgur.com/4b3Nx.png" alt="enter image description here"></a></p>
<hr>
<h3>Explanation</h3>
<p><code>groupby</code> returns an object that contains information on how the original series or dataframe has been grouped. Instead of performing a <code>groupby</code> with a subsquent action of some sort, we could first assign the <code>df2.groupby('Name')</code> to a variable (I often do), say <code>gb</code>.</p>
<pre><code>gb = df2.groupby('Name')
</code></pre>
<p>On this object <code>gb</code> we could call <code>.mean()</code> to get an average of each group. Or <code>.last()</code> to get the last element (row) of each group. Or <code>.transform(lambda x: (x - x.mean()) / x.std())</code> to get a zscore transformation within each group. When there is something you want to do within a group that doesn't have a predefined function, there is still <code>.apply()</code>.</p>
<p><code>.apply()</code> for a <code>groupby</code> object is different than it is for a <code>dataframe</code>. For a dataframe, <code>.apply()</code> takes callable object as its argument and applies that callable to each column (or row) in the object. the object that is passed to that callable is a <code>pd.Series</code>. When you are using <code>.apply</code> in a <code>dataframe</code> context, it is helpful to keep this fact in mind. In the context of a <code>groupby</code> object, the object passed to the callable argument is a dataframe. In fact, that dataframe is one of the groups specified by the <code>groupby</code>.</p>
<p>When I write such functions to pass to <code>groupby.apply</code>, I typically define the parameter as <code>df</code> to reflect that it is a dataframe.</p>
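<p>That fact is easy to verify on a toy frame (a sketch; recent pandas versions may additionally warn about the grouping column being passed through):</p>

```python
import pandas as pd

toy = pd.DataFrame({'g': ['x', 'x', 'y'], 'v': [1, 2, 3]})

# the callable handed to GroupBy.apply receives one sub-DataFrame per group
kinds = toy.groupby('g').apply(lambda d: type(d).__name__)
print(kinds.tolist())  # prints ['DataFrame', 'DataFrame']
```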
<p>Ok, so we have:</p>
<pre><code>df2.groupby('Name').apply(tgrp)
</code></pre>
<p>This generates a sub-dataframe for each <code>'Name'</code> and passes that sub-dataframe to the function <code>tgrp</code>. Then the <code>groupby</code> object recombines all such groups having gone through the <code>tgrp</code> function back together again.</p>
<p>It'll look like this.</p>
<p><a href="http://i.stack.imgur.com/k86Bd.png" rel="nofollow"><img src="http://i.stack.imgur.com/k86Bd.png" alt="enter image description here"></a></p>
<p>I took the OP's original attempt to simply transpose to heart. But I had to do some things first. Had I simply done:</p>
<pre><code>df2[df2.Name == 'Jane'].T
</code></pre>
<p><a href="http://i.stack.imgur.com/TkmnY.png" rel="nofollow"><img src="http://i.stack.imgur.com/TkmnY.png" alt="enter image description here"></a></p>
<pre><code>df2[df2.Name == 'Joe'].T
</code></pre>
<p><a href="http://i.stack.imgur.com/aWDLD.png" rel="nofollow"><img src="http://i.stack.imgur.com/aWDLD.png" alt="enter image description here"></a></p>
<p>Combining these manually (without <code>groupby</code>):</p>
<pre><code>pd.concat([df2[df2.Name == 'Jane'].T, df2[df2.Name == 'Joe'].T])
</code></pre>
<p><a href="http://i.stack.imgur.com/ra879.png" rel="nofollow"><img src="http://i.stack.imgur.com/ra879.png" alt="enter image description here"></a></p>
<p>Whoa! Now that's ugly. Obviously the index values of <code>[0, 1, 2]</code> don't mesh with <code>[3, 4]</code>. So let's reset.</p>
<pre><code>pd.concat([df2[df2.Name == 'Jane'].reset_index(drop=True).T,
df2[df2.Name == 'Joe'].reset_index(drop=True).T])
</code></pre>
<p><a href="http://i.stack.imgur.com/NXJN4.png" rel="nofollow"><img src="http://i.stack.imgur.com/NXJN4.png" alt="enter image description here"></a></p>
<p>That's much better. But now we are getting into the territory <code>groupby</code> was intended to handle. So let it handle it. </p>
<p>Back to </p>
<pre><code>df2.groupby('Name').apply(tgrp)
</code></pre>
<p>The only thing missing here is that we want to unstack the results to get the desired output.</p>
<p><a href="http://i.stack.imgur.com/k86Bd.png" rel="nofollow"><img src="http://i.stack.imgur.com/k86Bd.png" alt="enter image description here"></a></p>
| 6 | 2016-07-31T15:20:33Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
Reshape pandas dataframe from rows to columns | 38,681,821 | <p>I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.</p>
<p><strong>Use Case</strong></p>
<p>I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.</p>
<p>Here's the data:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe","Jane","Jane"],
'Job': ["Analyst","Manager","Director","Analyst","Manager"],
'Job Eff Date': ["1/1/2015","1/1/2016","7/1/2016","1/1/2015","1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
</code></pre>
<p>Here's what I want it to look like:
<a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow">Desired Output Table</a></p>
<p><a href="http://i.stack.imgur.com/qmCpb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qmCpb.png" alt="enter image description here"></a></p>
| 7 | 2016-07-31T07:37:37Z | 38,689,330 | <p>Diving into @piRSquared's answer... </p>
<pre><code>def tgrp(df):
df = df.drop('Name', axis=1)
print df, '\n'
out = df.reset_index(drop=True)
print out, '\n'
out.T
print out.T, '\n\n'
return out.T
dfxx = df2.groupby('Name').apply(tgrp).unstack()
dfxx
</code></pre>
<p>The output of the above is shown below. Why does pandas repeat the first group? Is this a bug? </p>
<pre><code> Job Job Eff Date
3 Analyst 1/1/2015
4 Manager 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
0 1
Job Analyst Manager
Job Eff Date 1/1/2015 1/1/2016
Job Job Eff Date
3 Analyst 1/1/2015
4 Manager 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
0 1
Job Analyst Manager
Job Eff Date 1/1/2015 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
2 Director 7/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
2 Director 7/1/2016
0 1 2
Job Analyst Manager Director
Job Eff Date 1/1/2015 1/1/2016 7/1/2016
</code></pre>
| 0 | 2016-07-31T23:44:28Z | [
"python",
"pandas",
"group-by",
"reshape"
] |
find the shortest path between two points in a circular list in python | 38,681,844 | <p>I have a list like the following :
<code>a =[1,2,3,4]</code></p>
<p>The list is a circular list.
The <strong>values in the list do not represent nodes</strong>, but the indices of the list represent nodes.
So the list may contain duplicate elements.
For example,</p>
<pre><code>if i take index (1,3)
(ie source is at index 1,and destination is at index 3) .
the shortest path is 1->4
if i take index (0,2) , i get two shortest paths
1->2->3 and
1->4->3
</code></pre>
<p>How may I proceed with this in Python?</p>
| -3 | 2016-07-31T07:41:18Z | 38,971,144 | <p>It looks like this is what you want:</p>
<pre><code>L = [1, 2, 3, 4, 5, 6]
a = L[1]
b = L[3]
cnt1 = []
cnt2 = []
for x in range(L.index(a), L.index(b) + 1):
cnt1.append(L[x])
for x in range(L.index(a), L.index(b) -(len(L) + 1), -1):
cnt2.append(L[x])
if len(cnt1) <= len(cnt2):
print(cnt1)
else:
print(cnt2)
</code></pre>
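Since the list may contain duplicate values, a value-based lookup with <code>L.index()</code> is ambiguous. Below is a sketch that works purely with indices, as the question asks; the function names are just illustrative:

```python
def circular_paths(lst, src, dst):
    """Return the forward and backward index paths between two positions."""
    n = len(lst)
    forward = [(src + k) % n for k in range((dst - src) % n + 1)]
    backward = [(src - k) % n for k in range((src - dst) % n + 1)]
    return forward, backward

def shortest_value_paths(lst, src, dst):
    """Return the value sequence(s) along the shortest direction(s)."""
    forward, backward = circular_paths(lst, src, dst)
    if len(forward) < len(backward):
        index_paths = [forward]
    elif len(backward) < len(forward):
        index_paths = [backward]
    else:                       # tie: both directions are equally short
        index_paths = [forward, backward]
    return [[lst[i] for i in path] for path in index_paths]
```

For <code>a = [1, 2, 3, 4]</code> and indices <code>(0, 2)</code> this yields both <code>[1, 2, 3]</code> and <code>[1, 4, 3]</code>, matching the example in the question.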
| 0 | 2016-08-16T09:25:02Z | [
"python",
"algorithm"
] |
What's the correct way to add Python.h and/or python3-dev in Ubuntu 14.04? | 38,681,862 | <p>On Ubuntu 14.04.4 LTS I was trying to install <a href="https://github.com/coursera-dl/coursera-dl" rel="nofollow">courseara-dl</a> with the default python 3.4.3 and met the error:</p>
<pre><code>src/MD2.c:31:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pycrypto
</code></pre>
<p>I have seen some answers mentioning installing <code>python3.4-dev</code> to solve this problem. But there is dependency error:</p>
<pre><code>The following packages have unmet dependencies:
python3.4-dev : Depends: python3.4 (= 3.4.0-2ubuntu1) but 3.4.3-1ubuntu1~14.04.3 is to be installed
Depends: libpython3.4-dev (= 3.4.0-2ubuntu1) but it is not going to be installed
Depends: libpython3.4 (= 3.4.0-2ubuntu1) but 3.4.3-1ubuntu1~14.04.3 is to be installed
Depends: libexpat1-dev but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
</code></pre>
<p>What is the recommended way to fix this? Do I have to downgrade to <code>python3.4.0</code>? If so, what's the proper way to do that?</p>
<p>When I run <code>dpkg -l | grep python 3</code>, I see both 3.4.0 and 3.4.3. Should I try <code>apt-get uninstall 3.4.3</code>?</p>
<pre><code>ii python3 3.4.0-0ubuntu2 amd64 interactive high-level object-oriented language (default python3 version)
ii python3.4 3.4.3-1ubuntu1~14.04.3 amd64 Interactive high-level object-oriented language (version 3.4)
</code></pre>
| 1 | 2016-07-31T07:44:24Z | 38,684,203 | <p>According to this: <a href="http://packages.ubuntu.com/trusty/python3" rel="nofollow">http://packages.ubuntu.com/trusty/python3</a> the official default python3 version of 14.04 from Ubuntu is 3.4.0. According to this: <a href="http://packages.ubuntu.com/trusty-updates/python3.4" rel="nofollow">http://packages.ubuntu.com/trusty-updates/python3.4</a> the newer version came from the updates repo. It is common for distros to have a base repo and an updates repo with newer packages (Fedora does this too). Somehow you installed both, possibly because aptitude failed in some way here.</p>
<p>You have two options here:</p>
<ol>
<li>Remove the updates package of python3.4 so that you can use the default python-dev.</li>
<li>Remove the default package of python3 so that you can use the updates python3.4-dev.</li>
</ol>
<p>There is another possibility here, which is that python3.4-dev was built with the wrong dependencies or gathered the wrong dependencies when being built, but the output you have seems to imply otherwise.</p>
<p>The other possibility, which will be a headache, is if apt is failing hard at correctly discerning the dependencies for python3.4-dev.</p>
| 1 | 2016-07-31T12:56:42Z | [
"python",
"ubuntu"
] |
What's the correct way to add Python.h and/or python3-dev in Ubuntu 14.04? | 38,681,862 | <p>On Ubuntu 14.04.4 LTS I was trying to install <a href="https://github.com/coursera-dl/coursera-dl" rel="nofollow">courseara-dl</a> with the default python 3.4.3 and met the error:</p>
<pre><code>src/MD2.c:31:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pycrypto
</code></pre>
<p>I have seen some answers mentioning installing <code>python3.4-dev</code> to solve this problem. But there is dependency error:</p>
<pre><code>The following packages have unmet dependencies:
python3.4-dev : Depends: python3.4 (= 3.4.0-2ubuntu1) but 3.4.3-1ubuntu1~14.04.3 is to be installed
Depends: libpython3.4-dev (= 3.4.0-2ubuntu1) but it is not going to be installed
Depends: libpython3.4 (= 3.4.0-2ubuntu1) but 3.4.3-1ubuntu1~14.04.3 is to be installed
Depends: libexpat1-dev but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
</code></pre>
<p>What is the recommended way to fix this? Do I have to downgrade to <code>python3.4.0</code>? If so, what's the proper way to do that?</p>
<p>When I run <code>dpkg -l | grep python 3</code>, I see both 3.4.0 and 3.4.3. Should I try <code>apt-get uninstall 3.4.3</code>?</p>
<pre><code>ii python3 3.4.0-0ubuntu2 amd64 interactive high-level object-oriented language (default python3 version)
ii python3.4 3.4.3-1ubuntu1~14.04.3 amd64 Interactive high-level object-oriented language (version 3.4)
</code></pre>
| 1 | 2016-07-31T07:44:24Z | 38,728,781 | <p>I don't remember exactly how I got python 3.4.3 in my Ubuntu, maybe through ubuntu auto upgrade?</p>
<p>The problem was <code>libexpat1</code>:</p>
<pre><code>apt-cache policy libexpat1
libexpat1:
Installed: 2.1.0-4ubuntu1.1
Candidate: 2.1.0-4ubuntu1.1
Version table:
*** 2.1.0-4ubuntu1.1 0
100 /var/lib/dpkg/status
2.1.0-4ubuntu1 0
500 http://us.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
</code></pre>
<p>version <code>2.1.0-4ubuntu1</code> was needed for python3-dev and version <code>2.1.0-4ubuntu1.1</code> was installed.</p>
<p>I also removed python3.4.3 and had to reinstall python3 (python3.4.0).</p>
<p>After that, I was able to install <code>python3-dev</code>.</p>
<p>Similar problems exist for my python2 where I had python2.7.6 but <code>python-dev</code> requires <code>python2.7.5</code>. I did not bother to downgrade python2 since I am not really using it at the moment.</p>
<p>Thanks @matt-schuchard (Matt Schuchard) for pointing me in the right direction. I am still not sure everything is correct, but at least I was able to install <code>python3-dev</code>.</p>
| 0 | 2016-08-02T19:17:36Z | [
"python",
"ubuntu"
] |
gurobi milp model to maximize npv | 38,681,871 | <p>I am trying to model a simple inventory problem, maximizing income, as a Gurobi MILP,
but I am having trouble writing the objective function for maximizing net present value.</p>
<p>An array <code>A = np.random.randint(100, 1500, 100)</code>
holds the value of each of the 100 items in the inventory.</p>
<pre><code>from gurobipy import *
val=A
m = Model()
n = len(val) # number of items
# Indicator variable for each item
x = {}
for i in range(n):
x[i] = m.addVar(vtype=GRB.BINARY, name="x%d" % i)
#Indicator variable for each period of operation
prd={}
for u in range(7):
    prd[u]=m.addVar(vtype=GRB.BINARY, name="prd%d" % u)
m.update()
# Set objective
m.setObjective(quicksum(quicksum(val[i]*x[i] for i in range(n)) / (1 + 0.1**(u+1)) * prd[u] for u in range(7)), GRB.MAXIMIZE)
</code></pre>
<p>If this is the right way to model this type of problem, the next step is to add constraints so that each item is used in only a single period of time.</p>
| 0 | 2016-07-31T07:45:49Z | 38,690,807 | <p><code>quicksum()</code> evaluates a linear expression; you need to convert your nested expressions into a single linear expression. You can do this by computing the coefficient values.</p>
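For illustration, here is one way those coefficients could be pre-computed in plain Python before building a single flat expression. This is only a sketch: the function name is made up, and it assumes the intended NPV discount factor is <code>(1 + rate) ** (u + 1)</code> rather than the <code>1 + 0.1 ** (u + 1)</code> written in the question:

```python
def npv_coefficients(values, rate=0.1, periods=7):
    """coeff[u][i]: discounted contribution of item i if it is sold in period u."""
    return [[v / (1.0 + rate) ** (u + 1) for v in values]
            for u in range(periods)]

coeff = npv_coefficients([110.0, 121.0], rate=0.1, periods=2)
```

Each coefficient would then multiply one variable per (item, period) pair in a single <code>quicksum</code>, replacing the nested generators and the nonlinear product <code>x[i] * prd[u]</code> from the question.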
| 0 | 2016-08-01T03:57:19Z | [
"python",
"optimization",
"gurobi"
] |
'libavformat/avformat.h' file not found #include <libavformat/avformat.h> | 38,681,912 | <p>When following the <a href="https://gist.github.com/ranveeraggarwal/f1499a586dabafb63392" rel="nofollow">below tutorial</a>:</p>
<blockquote>
<pre><code>brew install python3
pip3 install numpy
brew install cmake
git clone --depth=1 https://github.com/Itseez/opencv.git
cd opencv
mkdir build
cd build
# note: in the next line, adjust paths to point to the correct python version
cmake -DBUILD_opencv_python3=YES -DBUILD_opencv_python2=NO \
      -DINSTALL_PYTHON_EXAMPLES=YES \
      -DPYTHON3_EXECUTABLE=/usr/local/bin/python3 \
      -DPYTHON3_INCLUDE_DIR=/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/include/python3.5m/ \
      -DPYTHON3_LIBRARY=/usr/local/Cellar/python3/3.5.0/Frameworks/Python.framework/Versions/3.5/lib/libpython3.5.dylib \
      -DPYTHON3_NUMPY_INCLUDE_DIRS=/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/core/include/ \
      -DPYTHON3_PACKAGES_PATH=/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/ \
      ..
make -j8
make install
python3 -c "import cv2; print(cv2.__version__)"
</code></pre>
</blockquote>
<p>I get this error for line 10 <code>$ make -j8</code>:</p>
<pre><code>[ 39%] Built target opencv_shape
In file included from /Users/mona/Downloads/opencv/modules/videoio/src/cap_ffmpeg.cpp:47:
In file included from /Users/mona/Downloads/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:65:
/Users/mona/Downloads/opencv/modules/videoio/src/ffmpeg_codecs.hpp:77:12: fatal error:
'libavformat/avformat.h' file not found
#include <libavformat/avformat.h>
^
1 error generated.
make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ffmpeg.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 40%] Linking CXX shared library ../../lib/libopencv_photo.dylib
[ 40%] Built target opencv_photo
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
make: *** [all] Error 2
Monas-MacBook-Pro:build mona$
</code></pre>
<p>Why is this tutorial not working on my OS X?</p>
| 0 | 2016-07-31T07:51:50Z | 38,758,097 | <p>So this is how I solved the problem. Please let me know if you might have further questions:</p>
<pre><code>~/computer_vision/Face_Recognition $ ls
3.1.0.zip 3.1.0.zip.1 opencv-3.1.0 opencv_contrib-3.1.0 papers
-------------------------------------------------------
In opencv-3.1.0/modules/viz/src/vtk/vtkCocoaInteractorFix.mm
#if VTK_MAJOR_VERSION >= 6 && VTK_MINOR_VERSION >=2
If we use VTK 7.0.0, although it is a newer version, the minor version is less than two, so the vtkCocoaInteractorFix.mm will handle this as older VTK versions and that causes the problem.
So to fix this we just need to add the version number 7 to the line.
#if VTK_MAJOR_VERSION >= 6 && VTK_MINOR_VERSION >=2 || VTK_MAJOR_VERSION >=7
---------------------------------------------------------------------------
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D BUILD_opencv_java=OFF \
-D WITH_IPP=OFF -D WITH_1394=OFF \
-D WITH_FFMPEG=OFF\
-D BUILD_EXAMPLES=ON \
-D BUILD_TESTS=ON \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_DOCS=ON \
-D BUILD_opencv_python3=ON \
-D BUILD_opencv_video=ON \
-D BUILD_opencv_videoio=ON \
-D BUILD_opencv_videostab=ON \
-D PYTHON_EXECUTABLE=$(which python) \
-D OPENCV_EXTRA_MODULES_PATH=/Users/mona/computer_vision/Face_Recognition/opencv_contrib-3.1.0/modules ..
-------------------------------------------------------
make -j4
-------------------------------------------------------
sudo make install
</code></pre>
<p>Main points are <code>-D WITH_FFMPEG=OFF</code> and also giving the absolute path to <code>opencv</code> modules in <code>-D OPENCV_EXTRA_MODULES_PATH=/Users/mona/computer_vision/Face_Recognition/opencv_contrib-3.1.0/modules ..</code></p>
| 0 | 2016-08-04T03:38:39Z | [
"python",
"osx",
"python-3.x",
"opencv",
"cmake"
] |
python-re.sub() and unicode | 38,681,921 | <p>I want to replace all emoji with <code>''</code> but my regEx doesn't work.<BR>For example, </p>
<pre><code>content= u'?\u86cb\u767d12\U0001f633\uff0c\u4f53\u6e29\u65e9\u6668\u6b63\u5e38\uff0c\u5348\u540e\u665a\u95f4\u53d1\u70ed\uff0c\u6211\u73b0\u5728\u8be5\u548b\U0001f633?'
</code></pre>
<p>and I want to replace all the forms like <code>\U0001f633</code> with <code>''</code> so I write the code:</p>
<p><code>print re.sub(ur'\\U[0-9a-fA-F]{8}','',content)</code><br></p>
<p>But it doesn't work.<br>
Thanks a lot. </p>
| 2 | 2016-07-31T07:52:36Z | 38,682,213 | <p>You won't be able to recognize properly decoded unicode codepoints that way (as strings containing <code>\uXXXX</code>, etc.). Properly decoded, by the time the regex parser gets to them, each is a single character.</p>
<p>Depending on whether your python was compiled with only 16-bit unicode code points or not, you'll want a pattern something like either:</p>
<pre><code># 16-bit codepoints
re_strip = re.compile(u'[\uD800-\uDBFF][\uDC00-\uDFFF]')
# 32-bit* codepoints
re_strip = re.compile(u'[\U00010000-\U0010FFFF]')
</code></pre>
<p>And your code would look like:</p>
<pre><code>import re
# Pick a pattern, adjust as necessary
#re_strip = re.compile(u'[\uD800-\uDBFF][\uDC00-\uDFFF]')
re_strip = re.compile(u'[\U00010000-\U0010FFFF]')
content= u'[\u86cb\u767d12\U0001f633\uff0c\u4f53\u6e29\u65e9\u6668\u6b63\u5e38\uff0c\u5348\u540e\u665a\u95f4\u53d1\u70ed\uff0c\u6211\u73b0\u5728\u8be5\u548b\U0001f633]'
print(content)
stripped = re_strip.sub('', content)
print(stripped)
</code></pre>
<p>Both expressions, reduce the number of characters in the <code>stripped</code> string to 26.</p>
<p>These expressions strip out the emojis you were after, but may also strip out other things you <em>do</em> want. It may be worth reviewing a unicode codepoint range listing (e.g. <a href="https://en.wikipedia.org/wiki/Plane_(Unicode)" rel="nofollow">here</a>) and adjusting them.</p>
<p>You can determine whether your python install will only recognize 16-bit codepoints by doing something like:</p>
<pre><code>import sys
print(sys.maxunicode.bit_length())
</code></pre>
<p>If this displays 16, you'll need the first regex expression. If it displays something greater than 16 (for me it says 21), the second one is what you want.</p>
<p>Neither expression will work when used on a python install with the wrong <code>sys.maxunicode</code>.</p>
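As a quick self-contained check of the wide-build pattern, using a fragment of the question's string:

```python
import re

content = u'\u86cb\u767d\U0001f633\uff0c'
astral = re.compile(u'[\U00010000-\U0010FFFF]')

# the astral-plane emoji is removed; the BMP characters are untouched
stripped = astral.sub(u'', content)
```

On a narrow (16-bit) build this <code>re.compile</code> call itself fails, which is another way to tell which pattern your install needs.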
<p>See also: <a href="http://stackoverflow.com/questions/28783420/cannot-compile-8-digit-unicode-regex-ranges-in-python-2-7-re">this</a> related.</p>
| 3 | 2016-07-31T08:35:10Z | [
"python",
"python-2.7",
"unicode"
] |
Python pip command returning 'Command "python setup.py egg_info" failed with error code1' | 38,681,940 | <p>I keep getting the error "Python pip command returning 'Command "python setup.py egg_info" failed with error code1'" when trying to install PyEZ/junos-eznc for some reason. My setuptool and ez-setup are all up to date. Here is a snap of the error:</p>
<pre><code>C:\Users\???>py -m pip install junos-eznc
Collecting junos-eznc
Using cached junos-eznc-1.3.1.tar.gz
Collecting lxml>=3.2.4 (from junos-eznc)
Using cached lxml-3.6.1.tar.gz
Collecting ncclient>=0.4.6 (from junos-eznc)
Using cached ncclient-0.5.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\???~1\AppData\Local\Temp\pip-build-bb5l5lel\ncclient\setup.py", line 32, in <mod
ule>
long_description = file.read()
File "C:\Users\???\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp1252.py", line
23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 4336: character maps to <unde
fined>
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\???~1\AppData\Local\Temp\pip-
build-bb5l5lel\ncclient/
</code></pre>
<p>Any thoughts?</p>
| 1 | 2016-07-31T07:55:48Z | 38,685,578 | <p>It looks like the issue is that the description in <code>ncclient</code>'s <code>setup.py</code> file is being loaded in the wrong encoding. A few other packages seem to have similar issues like the one <a href="https://github.com/lxyu/pinyin/issues/9" rel="nofollow">here</a>, so I suggest you try this:</p>
<ul>
<li><p><a href="https://pypi.python.org/pypi/ncclient" rel="nofollow">Download</a> and unzip the latest version directly from PyPI.</p></li>
<li><p>Delete lines <a href="https://github.com/ncclient/ncclient/blob/master/setup.py#L31-L32" rel="nofollow">31 and 32</a> from <code>setup.py</code>.</p></li>
<li><p>Replace it with the following line:</p></li>
</ul>
<hr>
<pre><code>long_description = "Placeholder"
</code></pre>
<ul>
<li>Open a command line prompt, <code>cd</code> to the directory where you extracted the code and run <code>py -m pip install .</code></li>
</ul>
<p>It might also be helpful if you file an issue on <code>ncclient</code>'s GitHub page - it might help them if you link this thread if this solves the problem.</p>
| 2 | 2016-07-31T15:36:03Z | [
"python",
"python-3.x",
"pip",
"setuptools"
] |
Python pip command returning 'Command "python setup.py egg_info" failed with error code1' | 38,681,940 | <p>I keep getting the error "Python pip command returning 'Command "python setup.py egg_info" failed with error code1'" when trying to install PyEZ/junos-eznc for some reason. My setuptool and ez-setup are all up to date. Here is a snap of the error:</p>
<pre><code>C:\Users\???>py -m pip install junos-eznc
Collecting junos-eznc
Using cached junos-eznc-1.3.1.tar.gz
Collecting lxml>=3.2.4 (from junos-eznc)
Using cached lxml-3.6.1.tar.gz
Collecting ncclient>=0.4.6 (from junos-eznc)
Using cached ncclient-0.5.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\???~1\AppData\Local\Temp\pip-build-bb5l5lel\ncclient\setup.py", line 32, in <mod
ule>
long_description = file.read()
File "C:\Users\???\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp1252.py", line
23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 4336: character maps to <unde
fined>
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\???~1\AppData\Local\Temp\pip-
build-bb5l5lel\ncclient/
</code></pre>
<p>Any thoughts?</p>
| 1 | 2016-07-31T07:55:48Z | 38,697,469 | <p>Finally found a solution for this problem.</p>
<p>Step 1) As Aurora0001 stated, download zip file, delete lines 31 and 32 and replace with: </p>
<pre><code>long_description = "Placeholder"
</code></pre>
<p>then, use pip to install.</p>
<p>Step 2) execute this command:</p>
<pre><code>set STATICBUILD=true && pip install lxml
</code></pre>
<p>Step 3) install junos-eznc using pip</p>
<p>I hope it works for everyone else that has the same problem; and thank you Aurora.</p>
| 0 | 2016-08-01T11:16:47Z | [
"python",
"python-3.x",
"pip",
"setuptools"
] |
Why the installed numpy in my system does not have matmul? | 38,682,115 | <p>I have installed numpy as following in ubuntu 14.04, but as is indicated in the sample code using matmul leads to error.</p>
<pre><code>sudo apt-get install python3-numpy
$ python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.8.2'
>>> a = [[1, 0], [0, 1]]
>>> b = [[4, 1], [2, 2]]
>>> np.matmul(a, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'matmul'
>>>
</code></pre>
<p>What is my mistake? <br>
Thanks.</p>
| 1 | 2016-07-31T08:24:41Z | 38,682,136 | <p><code>np.matmul</code> was added in <code>numpy 1.10.0</code>, as per the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html" rel="nofollow">docs</a>:</p>
<blockquote>
<p>New in version 1.10.0</p>
</blockquote>
| 4 | 2016-07-31T08:26:30Z | [
"python",
"numpy"
] |
Condition Codes In Python Programing | 38,682,200 | <p>I typed in Microsoft Visual Studio that</p>
<pre><code>answer = input("What Is Your Age ? ")
if answer >= "18" :
print("You Have The Right To Cast Vote In India.")
else :
print("You Have No Right To Cast Vote In India.")
</code></pre>
<p>When the user types a number greater than or equal to 18, it shows "You Have The Right To Cast Vote In India.", and when the user types a number greater than 9 and less than 18, it shows "You Have No Right To Cast Vote In India." Up to this point it works correctly, but when the user types a number less than 10 it shows "You Have The Right To Cast Vote In India."<br>
But I want it to show "You Have No Right To Cast Vote In India." whenever the user types any number less than 18.<br>
Please give me the solution.</p>
| -4 | 2016-07-31T08:33:33Z | 38,682,220 | <p>You are comparing strings, not integers.
When comparing strings the comparison is lexicographic:</p>
<pre><code>print('101' > '18')
>> False
</code></pre>
<p>You should convert <code>answer</code> to <code>int</code> and then compare as integers:</p>
<pre><code>answer = int(input("What Is Your Age ? "))
if answer >= 18 :
print("You Have The Right To Cast Vote In India.")
else :
print("You Have No Right To Cast Vote In India.")
</code></pre>
<p>This will raise a <code>ValueError</code> if the input isn't a valid number, which you will have to account for.</p>
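One way to account for that <code>ValueError</code>, as a small sketch (the function name is just for illustration):

```python
def can_vote(raw_answer):
    """Return True/False for a valid age, or None for non-numeric input."""
    try:
        age = int(raw_answer)
    except ValueError:
        return None
    return age >= 18
```

A loop around <code>input()</code> could then re-prompt whenever <code>None</code> comes back.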
| 0 | 2016-07-31T08:35:56Z | [
"python"
] |
Condition Codes In Python Programing | 38,682,200 | <p>I typed in Microsoft Visual Studio that</p>
<pre><code>answer = input("What Is Your Age ? ")
if answer >= "18" :
print("You Have The Right To Cast Vote In India.")
else :
print("You Have No Right To Cast Vote In India.")
</code></pre>
<p>When the user types a number greater than or equal to 18, it shows "You Have The Right To Cast Vote In India.", and when the user types a number greater than 9 and less than 18, it shows "You Have No Right To Cast Vote In India." Up to this point it works correctly, but when the user types a number less than 10 it shows "You Have The Right To Cast Vote In India."<br>
But I want it to show "You Have No Right To Cast Vote In India." whenever the user types any number less than 18.<br>
Please give me the solution.</p>
| -4 | 2016-07-31T08:33:33Z | 38,682,227 | <p>You're trying to compare a string <code>answer</code> to a string <code>"18"</code> as if they were both numbers. This won't work - Python tries to compare them based on alphabetical order instead of numerically.</p>
<p>To achieve what you want, convert <code>answer</code> to an integer number using the <code>int</code> function, and take <code>18</code> out of quotes to make it an integer too:</p>
<pre><code>answer = input("What Is Your Age ? ")
if int(answer) >= 18 :
print("You Have The Right To Cast Vote In India.")
else :
print("You Have No Right To Cast Vote In India.")
</code></pre>
| 0 | 2016-07-31T08:36:31Z | [
"python"
] |
How to nest numba jitclass | 38,682,260 | <p>I'm trying to understand how the @jitclass decorator works with nested classes. I have written two dummy classes: fifi and toto.
fifi has a toto attribute. Both classes have the @jitclass decorator, but compilation fails. Here's the code:</p>
<p>fifi.py</p>
<pre><code>from numba import jitclass, float64
from toto import toto
spec = [('a',float64),('b',float64),('c',toto)]
@jitclass(spec)
class fifi(object):
def __init__(self, combis):
self.a = combis
self.b = 2
self.c = toto(combis)
def mySqrt(self,x):
s = x
for i in xrange(self.a):
s = (s + x/s) / 2.0
return s
</code></pre>
<p>toto.py:</p>
<pre><code>from numba import jitclass,int32
spec = [('n',int32)]
@jitclass(spec)
class toto(object):
def __init__(self,n):
self.n = 42 + n
def work(self,y):
return y + self.n
</code></pre>
<p>The script that launches the code:</p>
<pre><code>from datetime import datetime
from fifi import fifi
from numba import jit
@jit(nopython = True)
def run(n,results):
for i in xrange(n):
q = fifi(200)
results[i+1] = q.mySqrt(i + 1)
if __name__ == '__main__':
n = int(1e6)
results = [0.0] * (n+1)
starttime = datetime.now()
run(n,results)
endtime = datetime.now()
print("Script running time: %s"%str(endtime-starttime))
print("Sqrt of 144 is %f"%results[145])
</code></pre>
<p>When I run the script, I get [...] </p>
<blockquote>
<p>TypingError: Untyped global name 'toto'
File "fifi.py", line 11</p>
</blockquote>
<p>Note that if I remove any reference to 'toto' in 'fifi', the code works fine and I get a x16 speed up thanks to numba.</p>
| 4 | 2016-07-31T08:40:16Z | 38,684,908 | <p>It is possible to use a jitclass as a member of another jitclass, although the way of doing this isn't well documented. You need to use a <code>deferred_type</code> instance. This works in Numba 0.27 and possibly earlier. Change <code>fifi.py</code> to:</p>
<pre><code>from numba import jitclass, float64, deferred_type
from toto import toto
toto_type = deferred_type()
toto_type.define(toto.class_type.instance_type)
spec = [('a',float64),('b',float64),('c',toto_type)]
@jitclass(spec)
class fifi(object):
def __init__(self, combis):
self.a = combis
self.b = 2
self.c = toto(combis)
def mySqrt(self,x):
s = x
for i in xrange(self.a):
s = (s + x/s) / 2.0
return s
</code></pre>
<p>I then get as output:</p>
<pre><code>$ python test.py
Script running time: 0:00:01.991600
Sqrt of 144 is 12.041595
</code></pre>
<p>This functionality can be seen in some of the more advanced jitclass examples of data structures, for example:</p>
<ul>
<li><a href="https://github.com/numba/numba/blob/a4237562b78e9c4183173983051e5383dfab901c/examples/stack.py" rel="nofollow">stack.py</a></li>
<li><a href="https://github.com/numba/numba/blob/44aca4325d3a0f1ad4b8f8f9ebf8af3572b59321/examples/linkedlist.py" rel="nofollow">linkedlist.py</a></li>
<li><a href="https://github.com/numba/numba/blob/a4237562b78e9c4183173983051e5383dfab901c/examples/binarytree.py" rel="nofollow">binarytree.py</a></li>
</ul>
| 2 | 2016-07-31T14:18:00Z | [
"python",
"jit",
"numba"
] |
Printing passwords as encrypted in log file | 38,682,261 | <p>I have a Python code like below:</p>
<pre><code>myargs = [param, '/PASSWORD="{}"'.format(myData['PASSWORD'])]
</code></pre>
<p>When I print it in my log file I use the following statement:</p>
<pre><code>logging.info(myargs)
</code></pre>
<p>It prints the statement correctly, what I need is all passwords should be printed as <code>XXXX</code> or encrypted<code>(base64)</code> </p>
| -1 | 2016-07-31T08:40:18Z | 38,684,727 | <p>First off, <code>base 64</code> is not encrypted, it's encoded. There is a big difference. Secondly, use hashing. </p>
<p>Check out the <a href="https://docs.python.org/2/library/hashlib.html" rel="nofollow">hashlib</a> module. There are various secure hash algorithms you can use. SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSAâs MD5 algorithm. </p>
<p>Though alone doing hashing is not secure due to <a href="https://www.google.com/search?q=Rainbow%20table&oq=Rainbow%20table&aqs=chrome..69i57j69i60&sourceid=chrome&ie=UTF-8" rel="nofollow">Rainbow Table</a> attacks. Use salting along with hashing to make the passwords more secure. Short implementation of it taken from <a href="http://pythoncentral.io/hashing-strings-with-python/" rel="nofollow">here</a>:</p>
<pre><code>import uuid
import hashlib
def hash_password(password):
# uuid is used to generate a random number
salt = uuid.uuid4().hex
return hashlib.sha256(salt.encode() + password.encode()).hexdigest() + ':' + salt
def check_password(hashed_password, user_password):
password, salt = hashed_password.split(':')
return password == hashlib.sha256(salt.encode() + user_password.encode()).hexdigest()
new_pass = raw_input('Please enter a password: ')
hashed_password = hash_password(new_pass)
print('The string to store in the db is: ' + hashed_password)
old_pass = raw_input('Now please enter the password again to check: ')
if check_password(hashed_password, old_pass):
print('You entered the right password')
else:
print('I am sorry but the password does not match')
</code></pre>
<p>Also, you can use werkzeug to help you with this. <a href="http://flask.pocoo.org/snippets/54/" rel="nofollow">This</a> is a good snippet that you can modify and implement.</p>
| 0 | 2016-07-31T14:01:28Z | [
"python",
"logging",
"passwords",
"password-encryption"
] |
Many-to-many fields with intermediate tables must not be symmetrical | 38,682,303 | <p>I'm trying to figure out how to store prices between cities in my project so I can work with them comfortably and an admin can change those prices easily. </p>
<p>I've decided to create a <code>through</code> model, according to this <a href="http://stackoverflow.com/a/4098843/3371056">ANSWER</a>, which is called <code>Ride</code>. </p>
<p>But when I do <code>makemigrations</code>, Django returns:</p>
<blockquote>
<p>va_app.City.rides: (fields.E332) Many-to-many fields with intermediate tables must not be symmetrical.</p>
</blockquote>
<pre><code>class City(models.Model):
name = models.CharField(max_length=80)
country = models.ForeignKey('Country')
_close_cities = models.ManyToManyField('City', blank=True, related_name='close_cities_set',symmetrical=True)
rides = models.ManyToManyField('self',through='Ride')
class Ride(models.Model):
price = models.DecimalField(max_digits=8, decimal_places=2, blank=True, null=True)
</code></pre>
<p>Do you know how to make it work?</p>
<p>PS: The only thing I want is to be able to simply access the price (like <code>City.price(City)</code> or something else) and for the admin to be able to change prices.</p>
 | 0 | 2016-07-31T08:45:17Z | 38,683,712 | <p>The error is pretty clear: you can't have an M2M relation with an <code>intermediate</code> table and <code>symmetrical=True</code>; it must be <code>symmetrical=False</code>.</p>
<p>So try with: </p>
<pre><code>rides = models.ManyToManyField('self', through='Ride', symmetrical=False)
</code></pre>
<p>However, I think something is wrong with your model structure: you have two M2M fields pointing to <code>self</code>. I'm not sure what the purpose of the <code>Ride</code> model is, but maybe this model should only have <code>FKs</code> to <code>City</code>.</p>
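<p>For instance, <code>Ride</code> could carry the two city endpoints explicitly. This is only a sketch: the field names <code>origin</code>/<code>destination</code> are illustrative, a recursive M2M with two FKs to the same model also needs <code>through_fields</code>, and the fragment is not runnable outside a configured Django project.</p>

```
class City(models.Model):
    name = models.CharField(max_length=80)
    rides = models.ManyToManyField(
        'self', through='Ride', symmetrical=False,
        through_fields=('origin', 'destination'))

class Ride(models.Model):
    origin = models.ForeignKey(City, related_name='rides_from')
    destination = models.ForeignKey(City, related_name='rides_to')
    price = models.DecimalField(max_digits=8, decimal_places=2,
                                blank=True, null=True)
```

<p>With that in place, looking up a price is a single query on <code>Ride</code> filtered by origin and destination.</p>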
| 0 | 2016-07-31T11:54:28Z | [
"python",
"django",
"django-models"
] |
Why favor object.__setattr__(self, name, value) only in new style classes? | 38,682,318 | <p>According to Python 2.7.12 documentation:</p>
<blockquote>
<p>If <code>__setattr__()</code> wants to assign to an instance attribute, it should
not simply execute <code>self.name = value</code> — this would cause a recursive
call to itself. Instead, it should insert the value in the dictionary
of instance attributes, e.g., <code>self.__dict__[name] = value</code>. <strong>For
new-style classes, rather than accessing the instance dictionary, it
should call the base class method with the same name, for example,
<code>object.__setattr__(self, name, value)</code></strong>.</p>
</blockquote>
<p>However, the following code works as one would expect:</p>
<pre><code>class Class(object):
def __setattr__(self, name, val):
self.__dict__[name] = val;
c = Class()
c.val = 42
print c.val
</code></pre>
<p>I know <code>super(Class, obj).__setattr__(name, value)</code> can ensure the <code>__setattr__</code> methods of all base classes to be called, but classic class can also inherit from bases classes. So why is it only recommended for new style classes?</p>
<p>Or, on the other hand, why is doing so not recommended for classic classes?</p>
| 0 | 2016-07-31T08:48:03Z | 38,682,359 | <p>New-style classes could be using <em>slots</em>, at which point there is no <code>__dict__</code> to assign to. New-style classes also support other <em>data descriptors</em>, objects defined on the class that handle attribute setting or deletion for certain names.</p>
<p>From the <a href="https://docs.python.org/2/reference/datamodel.html#slots" rel="nofollow">documentation on slots</a>:</p>
<blockquote>
<p>By default, instances of both old and new-style classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.</p>
<p>The default can be overridden by defining <code>__slots__</code> in a new-style class definition. The <code>__slots__</code> declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because <code>__dict__</code> is not created for each instance.</p>
</blockquote>
<p>Access to slots is instead implemented by adding <a href="https://docs.python.org/2/howto/descriptor.html" rel="nofollow">data descriptors</a> on the class; an object with <code>__set__</code> and / or <code>__del__</code> methods for each such attribute.</p>
<p>Another example of data descriptors are <a href="https://docs.python.org/2/library/functions.html#property" rel="nofollow"><code>property()</code> objects</a> that have a setter or deleter function attached. Setting a key with the same name as such a descriptor object in the <code>__dict__</code> would be ignored as data descriptors cause attribute lookup to bypass the <code>__dict__</code> altogether.</p>
<p><code>object.__setattr__()</code> knows how to handle data descriptors, which is why you should just call that.</p>
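<p>A small sketch of why this matters: with <code>__slots__</code> there is no instance <code>__dict__</code> to write into, but <code>object.__setattr__()</code> still works (the class used here is illustrative).</p>

```python
class Point(object):
    __slots__ = ('x', 'y')  # instances have no __dict__

    def __setattr__(self, name, value):
        # self.__dict__[name] = value would raise AttributeError here;
        # object.__setattr__ goes through the slot data descriptors instead.
        object.__setattr__(self, name, value)

p = Point()
p.x = 1
assert p.x == 1
assert not hasattr(p, '__dict__')  # no per-instance dictionary at all
```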
| 3 | 2016-07-31T08:55:21Z | [
"python",
"class",
"inheritance",
"setattr",
"new-style-class"
] |
Comparing two date as string | 38,682,320 | <p>I need to compare two dates in a server with python on every row of data. I used <code>datetime</code> in this case but due to some limitations it will consume a lot of time on big data. I used below code to create a <code>datetime</code> object and use in further:</p>
<pre><code>first_date = datetime.strptime(line_content[3], '%Y-%m-%dT%H:%M:%S.000000Z')
second_date = datetime.strptime(line_content[4].strip(), '%Y-%m-%dT%H:%M:%S.000000Z')
</code></pre>
<p>I want to compare dates with their string and don't use <code>datetime</code>, if I do so there would be a lot of time cost reduction in these kind of data. so use below tests in this regards in python:</p>
<pre><code>>>> "2016-07-28T06:04:12.000000Z" < "2016-04-28T06:04:13.000000Z"
False
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T06:04:13.000000Z"
True
>>>
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T06:04:11.000000Z"
False
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T06:04:12.000000Z"
False
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T07:04:12.000000Z"
True
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T06:04:12.000000Z"
False
>>>
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-26T06:04:12.000000Z"
False
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-29T06:04:12.000000Z"
True
>>> "2016-07-28T06:04:12.000000Z" < "2016-07-28T06:04:12.000000Z"
False
>>>
</code></pre>
<p>Is this a good way to compare date. I mean can you show me an example that this code won't work?</p>
| 3 | 2016-07-31T08:48:11Z | 38,682,515 | <p>Yes - date parsing with Python is pretty slow because dates and times are <a href="https://news.ycombinator.com/item?id=2725015" rel="nofollow">complex things</a>. According to <a href="http://stackoverflow.com/a/14163523/2550354">this</a> stackoverflow thread, regex might be faster for parsing.</p>
<p>I would think again about whether you really need to parse the strings at all. Since your data looks clean and uses a single fixed format, plain string comparison might work in your case.</p>
<p>Things to keep in mind before going with this approach:</p>
<ul>
<li>Do you know the format of your string?</li>
<li>Does it really go from year > month > day > hour > minute > second, etc.?</li>
<li>Does all of your data have the same format?</li>
<li>Is all of your data in the same timezone?</li>
</ul>
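<p>A quick sanity check of the idea: for zero-padded strings in one fixed ISO-8601-style format and a single timezone, lexicographic order matches chronological order, but it breaks as soon as the zero-padding varies.</p>

```python
from datetime import datetime

fmt = '%Y-%m-%dT%H:%M:%S.000000Z'
a = '2016-07-28T06:04:12.000000Z'
b = '2016-07-29T06:04:12.000000Z'

# Same fixed format: string comparison agrees with datetime comparison.
assert (a < b) == (datetime.strptime(a, fmt) < datetime.strptime(b, fmt))

# Counterexample with unpadded fields: as text, "7" sorts after "10",
# so July ends up "greater than" October.
assert '2016-7-9' > '2016-10-28'
```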
| 2 | 2016-07-31T09:19:28Z | [
"python",
"datetime"
] |
PyGObject: How to work with objects in thread? | 38,682,348 | <p>Is there a way to create objects and work with them not in the main thread? I've read <a href="https://wiki.gnome.org/Projects/PyGObject/Threading" rel="nofollow">this link</a>, but don't understand how to apply it to my example (see below).</p>
<pre class="lang-py prettyprint-override"><code>import threading
from gi.repository import Gtk, Gdk, GObject, GLib
class Foo:
def bar(self):
pass
class ListeningThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.cls_in_thread = None
def foo(self, _):
self.cls_in_thread = Foo()
print(threading.current_thread())
print('Instance created')
def bar(self, _):
self.cls_in_thread.bar()
print(threading.current_thread())
print('Instance method called')
def run(self):
print(threading.current_thread())
def main_quit(_):
Gtk.main_quit()
if __name__ == '__main__':
GObject.threads_init()
window = Gtk.Window()
box = Gtk.Box(spacing=6)
window.add(box)
lt = ListeningThread()
lt.daemon = True
lt.start()
button = Gtk.Button.new_with_label("Create instance in thread")
button.connect("clicked", lt.foo)
button2 = Gtk.Button.new_with_label("Call instance method in thread")
button2.connect("clicked", lt.bar)
box.pack_start(button, True, True, 0)
box.pack_start(button2, True, True, 0)
window.show_all()
window.connect('destroy', main_quit)
print(threading.current_thread())
Gtk.main()
</code></pre>
<p>To be more precise here is the output I get now:</p>
<pre class="lang-py prettyprint-override"><code><ListeningThread(Thread-1, started daemon 28268)>
<_MainThread(MainThread, started 23644)>
<_MainThread(MainThread, started 23644)>
Instance created
<_MainThread(MainThread, started 23644)>
Instance method called
</code></pre>
<p>And I would like it to be somewhat like this:</p>
<pre class="lang-py prettyprint-override"><code><ListeningThread(Thread-1, started daemon 28268)>
<_MainThread(MainThread, started 23644)>
<ListeningThread(Thread-1, started daemon 28268)>
Instance created
<ListeningThread(Thread-1, started daemon 28268)>
Instance method called
</code></pre>
<p>Moreover I would like to be sure that <code>cls_in_thread</code> exists in that same thread (In docs I've found <code>threading.local()</code>, but I'm not sure if it's needed). Is there a way to achieve such behaviour?</p>
| 0 | 2016-07-31T08:54:23Z | 38,707,344 | <p>Here is an example of one way to do it: <a href="https://github.com/pithos/pithos/blob/f2e0ddaf44a3451907efc45b2c23cd0551d05aaa/pithos/gobject_worker.py" rel="nofollow">pithos/gobject_worker.py</a></p>
<p>You put the jobs you want run on another thread into a Queue, and your callback is then called on the main thread once a job is done.</p>
<p>Also, realize that you should not modify objects that belong to the main thread (such as GTK widgets) from another thread.</p>
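<p>Stripped to its essentials, the pattern looks like the sketch below. <code>GLib.idle_add</code> is replaced by a direct call purely so the sketch runs without GTK; in a real app the callback must be dispatched back to the main loop with <code>GLib.idle_add</code>.</p>

```python
import threading
try:
    import queue  # Python 3
except ImportError:
    import Queue as queue  # Python 2

class Worker(object):
    """Run jobs on a background thread and report results via a callback.

    In a real GTK app the callback would be scheduled with
    GLib.idle_add(callback, result) so it runs on the main loop;
    here it is called directly for brevity."""

    def __init__(self):
        self.jobs = queue.Queue()
        t = threading.Thread(target=self._run)
        t.daemon = True
        t.start()

    def submit(self, fn, args, callback):
        self.jobs.put((fn, args, callback))

    def _run(self):
        while True:
            fn, args, callback = self.jobs.get()
            callback(fn(*args))  # real code: GLib.idle_add(callback, result)
            self.jobs.task_done()

w = Worker()
results = []
w.submit(lambda x: x * 2, (21,), results.append)
w.jobs.join()  # wait until the job has been processed
assert results == [42]
```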
| 0 | 2016-08-01T20:14:00Z | [
"python",
"multithreading",
"pygtk",
"pygobject"
] |
Instances & classes: requiring x arguments when x-1 given | 38,682,435 | <p>I have written the following classes to be able to test different encryption schemes. However, I'm having trouble instantiating objects from the different encryption scheme. Could someone point out to something that doesn't make sense that I'm not catching atm? I'm not sure why it doesn't work. It gives a <code>TypeError: encrypt() takes exactly 3 arguments (2 given)</code> but it does have self passed, so I don't know how to fix it on the basis of the rest of them. </p>
<pre><code>class AXU:
def __init__(self, sec_param):
self.sec_param = sec_param
def getHash(self):
# sample a, b and return hash function
a = random.randrange(self.sec_param)
b = random.randrange(self.sec_param)
return lambda x : a*x+b % sec_param
class BC(object):
def __init__(self, sec_param):
# generate a key
self.sec_param = sec_param
def encrypt(self, message, key):
#encrypt with AES?
cipher = AES.new(key, MODE_CFB, sec_param)
msg = iv + cipher.encrypt(message)
return msg
class tBC(object):
def __init__(self, sec_param):
self.sec_param = sec_param
def encrypt(self, tweak, message):
#pass
return AES.new(message, tweak)
class Trivial(tBC):
def __init__(self):
self.bcs = {}
def encrypt(self, tweak, message):
if tweak not in self.bcs.keys():
bc = BC()
self.bcs[tweak] = bc
return self.bcs[tweak].encrypt(message)
class Our(tBC):
def __init__(self, sec_param):
self.bc1 = BC(sec_param)
self.bc2 = BC(sec_param)
self.bc3 = BC(sec_param)
self.bc4 = BC(sec_param)
# encryption over GF field
def encrypt(self, tweak, message):
return self.bc1.encrypt(self.bc2.encrypt(tweak) * self.bc3.encrypt(message) + self.bc4.encrypt(tweak))
</code></pre>
| 0 | 2016-07-31T09:06:47Z | 38,682,444 | <p>You are passing in <strong>one</strong> argument to a bound method:</p>
<pre><code>return self.bc1.encrypt(
self.bc2.encrypt(tweak) * self.bc3.encrypt(message) +
self.bc4.encrypt(tweak))
</code></pre>
<p>That's one argument to each <code>BC.encrypt()</code> call, but that method takes two arguments beyond <code>self</code>: <code>message</code> and <code>key</code>.</p>
<p>Either pass in a value for <code>key</code>, or remove that argument from the <code>BC.encrypt()</code> method definition (and get the key from some place else; perhaps from an instance attribute set in <code>__init__</code>).</p>
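<p>A minimal sketch of the second option: store the key in <code>__init__</code> so that <code>encrypt()</code> only needs the message. The Caesar-style shift below merely stands in for the real AES call.</p>

```python
class BC(object):
    def __init__(self, key):
        self.key = key  # key fixed at construction time

    def encrypt(self, message):
        # Placeholder cipher; the real class would call AES here.
        return ''.join(chr((ord(c) + self.key) % 256) for c in message)

bc = BC(3)
assert bc.encrypt('abc') == 'def'  # only one argument besides self
```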
| 1 | 2016-07-31T09:08:41Z | [
"python",
"python-3.x",
"instantiation",
"self",
"pycrypto"
] |
Why is using the Python mmap module much slower than calling POSIX mmap from C++? | 38,682,501 | <p>C++ code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <string>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <sys/time.h>
using namespace std;
#define FILE_MODE (S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)
int main() {
timeval tv1, tv2, tv3, tve;
gettimeofday(&tv1, 0);
int size = 0x1000000;
int fd = open("data", O_RDWR | O_CREAT | O_TRUNC, FILE_MODE);
ftruncate(fd, size);
char *data = (char *) mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
for(int i = 0; i < size; i++) {
data[i] = 'S';
}
munmap(data, size);
close(fd);
gettimeofday(&tv2, 0);
timersub(&tv2, &tv1, &tve);
printf("Time elapsed: %ld.%06lds\n", (long int) tve.tv_sec, (long int) tve.tv_usec);
}
</code></pre>
<p>Python code:</p>
<pre class="lang-py prettyprint-override"><code>import mmap
import time
t1 = time.time()
size = 0x1000000
f = open('data/data', 'w+')
f.truncate(size)
f.close()
file = open('data/data', 'r+b')
buffer = mmap.mmap(file.fileno(), 0)
for i in xrange(size):
buffer[i] = 'S'
buffer.close()
file.close()
t2 = time.time()
print "Time elapsed: %.3fs" % (t2 - t1)
</code></pre>
<p>I think these two program are the essentially same since C++ and Python call the same system call(<code>mmap</code>).</p>
<p>But the Python version is much slower than C++'s:</p>
<pre><code>Python: Time elapsed: 1.981s
C++: Time elapsed: 0.062143s
</code></pre>
<p><strong>Could any one please explain the reason why the mmap Python of is much slower than C++?</strong></p>
<hr>
<p>Environment:</p>
<p>C++:</p>
<pre><code>$ c++ --version
Apple LLVM version 7.3.0 (clang-703.0.31)
Target: x86_64-apple-darwin15.5.0
</code></pre>
<p>Python:</p>
<pre><code>$ python --version
Python 2.7.11 :: Anaconda 4.0.0 (x86_64)
</code></pre>
 | 0 | 2016-07-31T09:18:00Z | 38,682,698 | <p>It is not <code>mmap</code> that is slower, but filling the array with values one byte at a time. Python is known to be slow at such primitive per-element operations. Use a higher-level operation instead:</p>
<pre><code>buffer[:] = 'S' * size
</code></pre>
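<p>A rough way to see the difference (timings vary by machine, but the bulk slice assignment should beat the per-byte loop by a wide margin; the temp-file handling here is illustrative):</p>

```python
import mmap
import os
import tempfile
import time

size = 0x40000  # 256 KiB is already enough to show the gap
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)
buf = mmap.mmap(fd, 0)

t = time.time()
buf[:] = b'S' * size                 # one bulk copy
bulk = time.time() - t

t = time.time()
for i in range(size):                # per-byte Python loop
    buf[i:i + 1] = b'S'
loop = time.time() - t

buf.close()
os.close(fd)
os.remove(path)
assert bulk < loop
```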
| 5 | 2016-07-31T09:44:43Z | [
"python",
"c++",
"performance",
"posix",
"mmap"
] |