title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Reading in string of nested JSON lists and dictionaries with Python | 38,805,125 | <p>I am having trouble reading data in python. A sample of one of the rows is:</p>
<pre><code>foo_brackets='{"KEY2":[{"KEY2a":[{"KEY2a1":"4","KEY2a2":"5"},{"KEY2a1":"6","KEY2a2":"7"}],"KEY2b":"8"}],"KEY3":"9"}'
</code></pre>
<p>When I load with <code>json</code>, the value for <code>KEY2</code> is read in as a list, because of the brackets, which then prevents me from getting at my desired result, which is the value of <code>KEY2b</code>:</p>
<pre><code>>>> import json
>>> foo_brackets_json=json.loads(foo_brackets)
>>> foo_brackets_json['KEY2']['KEY2b']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers, not str
</code></pre>
<p>I could just try to remove the brackets, but there actually is a value that should be a list, <code>KEY2a</code>. You can see this if I strip out all the brackets and try to convert to JSON:</p>
<pre><code>>>> foo_no_brackets='{"KEY2":{"KEY2a":{"KEY2a1":"4","KEY2a2":"5"},{"KEY2a1":"6","KEY2a2":"7"},"KEY2b":"8"},"KEY3":"9"}'
>>> json.loads(foo_no_brackets)
# Traceback omitted since it's just the python error
ValueError: Expecting property name: line 1 column 45 (char 45)
</code></pre>
<p><code>foo_brackets</code> does appear to be valid JSON. I tested it <a href="https://jsonformatter.curiousconcept.com/" rel="nofollow">here</a> (with the outer quotes removed) and got the following:</p>
<pre><code>{
"KEY2":[
{
"KEY2a":[
{
"KEY2a1":"4",
"KEY2a2":"5"
},
{
"KEY2a1":"6",
"KEY2a2":"7"
}
],
"KEY2b":"8"
}
],
"KEY3":"9"
}
</code></pre>
<h1>Question:</h1>
<p>Is there a way for me to read objects like <code>foo_brackets</code> so that I can call <code>foo_brackets_json['KEY2']['KEY2b']</code>?</p>
| -1 | 2016-08-06T14:07:36Z | 38,805,160 | <p><code>foo_brackets_json['KEY2']</code> references a <em>list</em>, here with one element. </p>
<p>You'll have to use integer indices to reference the dictionaries contained in that list:</p>
<pre><code>foo_brackets_json['KEY2'][0]['KEY2b']
</code></pre>
<p>Don't try to remove the brackets; there could be 0 or more nested dictionaries here. You'll have to determine what should happen in those cases where you don't have just 1 nested dictionary.</p>
<p>The above hardcoded reference assumes there is always at least one such dictionary in the list, and doesn't care whether there are more.</p>
<p>You could use <em>looping</em> to handle the 0 or more case:</p>
<pre><code>for nested in foo_brackets_json['KEY2']:
print(nested['KEY2b'])
</code></pre>
<p>Now you are handling each nested dictionary, one by one. This'll work for the empty list case, and if there is more than one.</p>
<p>You could make having 0 or more than one an error:</p>
<pre><code>if len(foo_brackets_json['KEY2']) != 1:
raise ValueError('Unexpected number of results')
</code></pre>
<p>etc. etc. It all depends on your actual use-case.</p>
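<p>To make the choices above concrete, here is a small helper that collects every <code>KEY2b</code> value in the list (a sketch; the helper name is mine, not from the original answer):</p>

```python
import json

foo_brackets = '{"KEY2":[{"KEY2a":[{"KEY2a1":"4","KEY2a2":"5"},{"KEY2a1":"6","KEY2a2":"7"}],"KEY2b":"8"}],"KEY3":"9"}'

def get_key2b_values(data):
    # 'KEY2' maps to a list of dicts; collect 'KEY2b' from each one,
    # which copes with 0, 1, or many nested dictionaries
    return [nested['KEY2b'] for nested in data.get('KEY2', []) if 'KEY2b' in nested]

values = get_key2b_values(json.loads(foo_brackets))
# values == ['8'] for the sample string from the question
```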
| 1 | 2016-08-06T14:10:37Z | [
"python",
"json",
"python-2.7"
] |
Check whether a nested dict is False in the leaf nodes in python | 38,805,223 | <p>Let's say I have a nested dict, e.g. the following</p>
<pre><code>{'agent1': {'status': True},
'block1': {'status': True, 'number': False, 'usable_by': True, 'location': True, 'skill':
{'speed': False, 'flexibility': True}}}
</code></pre>
<p>At the lowest key level (the leaves) of this dict, the values are only boolean (True or False). The input dict can basically have any kind of nested structure with different names as keys and no fixed depth.</p>
<p>How can I check, in general, whether there is <code>False</code> in a given dict?</p>
| -2 | 2016-08-06T14:17:16Z | 38,805,405 | <p>To traverse the nested dict, you can use recursion. See (<a href="http://stackoverflow.com/questions/10756427/loop-through-all-nested-dictionary-values">Loop through all nested dictionary values?</a>)</p>
<pre class="lang-py prettyprint-override"><code>def contains_false(d):
for k,v in d.iteritems():
if isinstance(v, dict):
# recurse into nested-dict
if contains_false(v):
return True
# Check value of leaf-node. Exit early
# if we find a 'False' value.
if v is False:
return True
# no 'False' values found
return False
>>> d = {1:True, 2:{1:True, 2:True}, 3:{1:True, 2:True, 3:{1:False}}}
>>> contains_false(d)
True
>>> d[3][3][1] = True
>>> contains_false(d)
False
</code></pre>
<p>This is assuming that you don't need to know the key(s) to reach the 'False' value.</p>
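<p>If you later do need the key path(s) leading to each <code>False</code> leaf (an extension of the answer above; this variant assumes Python 3 for <code>yield from</code>), a generator can report them:</p>

```python
def false_paths(d, path=()):
    # yield the tuple of keys leading to every False leaf
    for k, v in d.items():
        if isinstance(v, dict):
            yield from false_paths(v, path + (k,))
        elif v is False:
            yield path + (k,)

d = {'agent1': {'status': True},
     'block1': {'number': False, 'skill': {'speed': False}}}
paths = list(false_paths(d))
# paths == [('block1', 'number'), ('block1', 'skill', 'speed')]
```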
| 1 | 2016-08-06T14:37:17Z | [
"python",
"dictionary",
"nested"
] |
Check whether a nested dict is False in the leaf nodes in python | 38,805,223 | <p>Let's say I have a nested dict, e.g. the following</p>
<pre><code>{'agent1': {'status': True},
'block1': {'status': True, 'number': False, 'usable_by': True, 'location': True, 'skill':
{'speed': False, 'flexibility': True}}}
</code></pre>
<p>At the lowest key level (the leaves) of this dict, the values are only boolean (True or False). The input dict can basically have any kind of nested structure with different names as keys and no fixed depth.</p>
<p>How can I check, in general, whether there is <code>False</code> in a given dict?</p>
| -2 | 2016-08-06T14:17:16Z | 38,805,412 | <p>A quick solution can be:</p>
<pre><code>d = {'block1': {'status': True, 'usable_by': True, 'skill': {'flexibility': True, 'speed': False}, 'number': False, 'location': True}, 'agent1': {'status': True}}
values = []
def find_false(d):
for k in d.keys():
if isinstance(d[k], dict):
find_false(d[k])
else:
values.append(d[k])
find_false(d)
print(False in values)
</code></pre>
<p>Hope this helps.</p>
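<p>One caveat worth noting (my observation, not part of the original answer): the module-level <code>values</code> list persists between calls, so calling <code>find_false</code> twice would mix leaves from both dicts. A sketch that keeps the same approach without the shared state:</p>

```python
def collect_leaves(d, values=None):
    # a fresh accumulator is created on each top-level call
    if values is None:
        values = []
    for v in d.values():
        if isinstance(v, dict):
            collect_leaves(v, values)
        else:
            values.append(v)
    return values

d = {'agent1': {'status': True}, 'block1': {'number': False}}
print(False in collect_leaves(d))                   # True
print(False in collect_leaves({'x': {'y': True}}))  # False, no stale state
```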
| 0 | 2016-08-06T14:38:11Z | [
"python",
"dictionary",
"nested"
] |
Check whether a nested dict is False in the leaf nodes in python | 38,805,223 | <p>Let's say I have a nested dict, e.g. the following</p>
<pre><code>{'agent1': {'status': True},
'block1': {'status': True, 'number': False, 'usable_by': True, 'location': True, 'skill':
{'speed': False, 'flexibility': True}}}
</code></pre>
<p>At the lowest key level (the leaves) of this dict, the values are only boolean (True or False). The input dict can basically have any kind of nested structure with different names as keys and no fixed depth.</p>
<p>How can I check, in general, whether there is <code>False</code> in a given dict?</p>
| -2 | 2016-08-06T14:17:16Z | 38,805,443 | <p>You'd have to traverse the dictionaries, and you want to exit <em>early</em>:</p>
<pre><code>def any_false_leaf(d):
if isinstance(d, dict):
return any(any_false_leaf(v) for v in d.values())
return not d
</code></pre>
<p>This <em>recurses</em> through your dictionaries, and returns <code>True</code> if there is a nested false value in the structure. Using the <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow"><code>any()</code> function</a> and a <a href="https://docs.python.org/2/tutorial/classes.html#generator-expressions" rel="nofollow">generator expression</a> guarantees that the result is produced as soon as such a value is found.</p>
<p>Demo:</p>
<pre><code>>>> d = {'agent1': {'status': True},
... 'block1': {'status': True, 'number': False, 'usable_by': True, 'location': True, 'skill':
... {'speed': False, 'flexibility': True}}}
>>> any_false_leaf(d)
True
>>> any_false_leaf({'foo': True})
False
>>> any_false_leaf({'foo': {'bar': True}})
False
>>> any_false_leaf({'foo': {'bar': True, 'spam': False}})
True
</code></pre>
| 1 | 2016-08-06T14:40:29Z | [
"python",
"dictionary",
"nested"
] |
Convert comma to space in list | 38,805,232 | <p>How can we convert a string <code>[0.0034596999, 0.0034775001, 0.0010091923]</code> to the form <code>[0.0034596999 0.0034775001 0.0010091923]</code> in Python? I tried using <code>map</code>, <code>join</code>, and other functions but I am unable to do so. Can anyone help?</p>
| 1 | 2016-08-06T14:17:49Z | 38,805,277 | <p><code>"[0.0034596999, 0.0034775001, 0.0010091923]".replace(",", "")</code> returns <code>"[0.0034596999 0.0034775001 0.0010091923]"</code></p>
<p>Have a look at the <a href="https://docs.python.org/3/library/stdtypes.html#string-methods" rel="nofollow">string methods</a> - there are many useful ones.</p>
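<p>If the eventual goal is to work with the numbers rather than just reformat the text (an assumption on my part), parsing the string into an actual list first is another option:</p>

```python
import ast

s = '[0.0034596999, 0.0034775001, 0.0010091923]'
numbers = ast.literal_eval(s)  # safely parses the literal into a list of floats
result = '[{}]'.format(' '.join(str(n) for n in numbers))
# result == '[0.0034596999 0.0034775001 0.0010091923]'
```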
| 1 | 2016-08-06T14:22:05Z | [
"python"
] |
Convert comma to space in list | 38,805,232 | <p>How can we convert a string <code>[0.0034596999, 0.0034775001, 0.0010091923]</code> to the form <code>[0.0034596999 0.0034775001 0.0010091923]</code> in Python? I tried using <code>map</code>, <code>join</code>, and other functions but I am unable to do so. Can anyone help?</p>
| 1 | 2016-08-06T14:17:49Z | 38,805,291 | <p>Just use the string <code>replace()</code> method:</p>
<pre><code>s = '[0.0034596999, 0.0034775001, 0.0010091923]'
s = s.replace(',', '')
print(s) # -> [0.0034596999 0.0034775001 0.0010091923]
</code></pre>
| 0 | 2016-08-06T14:23:32Z | [
"python"
] |
Convert comma to space in list | 38,805,232 | <p>How can we convert a string <code>[0.0034596999, 0.0034775001, 0.0010091923]</code> to the form <code>[0.0034596999 0.0034775001 0.0010091923]</code> in Python? I tried using <code>map</code>, <code>join</code>, and other functions but I am unable to do so. Can anyone help?</p>
| 1 | 2016-08-06T14:17:49Z | 38,805,328 | <p>If it's a string you could do as the other suggested. If it is a list of strings you could do:</p>
<pre><code>new_list_without_comma = [x.replace(",", "") for x in list_with_comma]
</code></pre>
| 0 | 2016-08-06T14:27:16Z | [
"python"
] |
Convert comma to space in list | 38,805,232 | <p>How can we convert a string <code>[0.0034596999, 0.0034775001, 0.0010091923]</code> to the form <code>[0.0034596999 0.0034775001 0.0010091923]</code> in Python? I tried using <code>map</code>, <code>join</code>, and other functions but I am unable to do so. Can anyone help?</p>
| 1 | 2016-08-06T14:17:49Z | 38,805,716 | <p>Using the string method <code>replace()</code> is an efficient solution; however thought I'd offer an alternate using <code>split()</code> and <code>join()</code>:</p>
<pre><code>print ''.join(i for i in '[0.0034596999, 0.0034775001, 0.0010091923]'.split(','))
>>> [0.0034596999 0.0034775001 0.0010091923]
</code></pre>
| 2 | 2016-08-06T15:10:37Z | [
"python"
] |
Convert sqlite3 to mysql under python | 38,805,433 | <p>I am converting a Python script to use MySQL instead of sqlite3, and I am having a lot of issues with MySQL syntax errors which have really stumped me. I don't have much experience with databases. It seems the lines will work for a short time, then they throw errors.</p>
<p>This is the line that gives me the first error. I think once I get the syntax right I can change all of the others as well.</p>
<pre><code> elif 'Sensors' in line:
Sensors,pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,TankTotal,Tank1,Tank2,Tank3,Tank4,WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4=line.split(",")
Sensors = Sensors.replace("Read fail", "")
WaterTempP4 = WaterTempP4.rstrip()
elapsedTime = now-startTime
elapsedSeconds = (elapsedTime.microseconds+(elapsedTime.days*24*3600+elapsedTime.seconds)*10**6)/10**6
print("\033[10;0H\r")
print("\033[10;0H(" + now.strftime("%Y/%m/%d %H:%M:%S") + ") Sensors: %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s"%(pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,TankTotal,Tank1,Tank2,Tank3,Tank4,WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4))
now = datetime.now()
delta = float(now.strftime('%s')) - float(LastDataPoint_Time.strftime('%s'))
if (delta < 0):
TimeString = LastDataPoint_Time.strftime("%Y-%m-%d %H:%M:%S")
update_sql("DELETE FROM Sensors_Log WHERE Time='" + TimeString + "'")
LastDataPoint_Time = datetime.now()
addMessageLog("Negative Delta - Deleting Last Record (Wrong Time?)")
printMessageLog()
if (delta >= TakeDataPoint_Every) or (Datapoint_count == 0 and first_timesync == True):
addMessageLog("Added a data point to the sensor values log.")
printMessageLog()
update_sql("INSERT INTO Sensors_Log (Time,pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,TankTotal,Tank1,Tank2,Tank3,Tank4,WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4) VALUES ('" + now.strftime("%Y-%m-%d %H:%M:%S") + "'," + pH1 + "," + pH2+ "," + Temp + "," + RH + "," + TDS1 + "," + TDS2 + "," + CO2 + "," + Light + "," + Water + "," + MagX + "," + MagY + "," + MagZ + "," + TankTotal + "," + Tank1 + "," + Tank2 + "," + Tank3 + "," + Tank4 + "," + WaterTempP1 + "," + WaterTempP2 + "," + WaterTempP3 + "," + WaterTempP4 + ")")
LastDataPoint_Time = datetime.now()
timesync = 0 #do a timesync
Datapoint_count = Datapoint_count + 1
#SENSOR VALUES
update_sql("UPDATE `Sensors` SET pH1 = " + pH1 + ", pH2 = " + pH2 + ", Temp = " + Temp + ", RH = " + RH + ", TDS1 = " + TDS1 + ", TDS2 = " + TDS2 + ", CO2 = " + CO2 + ", Light = " + Light + ", Water = " + Water + ", MagX = " + MagX + ", MagY = " + MagY + ", MagZ = " + MagZ + ", TankTotal = " + TankTotal + ", Tank1 = " + Tank1 + ", Tank2 = " + Tank2 + ", Tank3 = " + Tank3 + ", Tank4 = " + Tank4 + ", WaterTempP1 = " + WaterTempP1 + ", WaterTempP2 = " + WaterTempP2 + ", WaterTempP3 = " + WaterTempP3 + ", WaterTempP4 = " + WaterTempP4 + "")
db.commit()
</code></pre>
<p>It's the <code>update_sql("UPDATE ...")</code> call at the bottom that throws a 1064 error; any help to set me on the right track with the format would be greatly appreciated. The code above runs when a serial string comes in with 'Sensors' in it, followed by sensor readings separated by commas. The code seems to work when inserting but not when updating.</p>
<p>thanks in advance</p>
<p>here is the full error code</p>
<pre><code>Traceback (most recent call last):
File "yieldbuddy.py", line 1211, in <module>
serialerr=checkSerial()
File "yieldbuddy.py", line 481, in checkSerial
update_sql("UPDATE `Sensors` SET pH1 = " + pH1 + ", pH2 = " + pH2 + ", Temp = " + Temp + ", RH = " + RH + ", TDS1 = " + TDS1 + ", TDS2 = " + TDS2 + ", CO2 = " + CO2 + ", Light = " + Light + ", Water = " + Water + ", MagX = " + MagX + ", MagY = " + MagY + ", MagZ = " + MagZ + ", TankTotal = " + TankTotal + ", Tank1 = " + Tank1 + ", Tank2 = " + Tank2 + ", Tank3 = " + Tank3 + ", Tank4 = " + Tank4 + ", WaterTempP1 = " + WaterTempP1 + ", WaterTempP2 = " + WaterTempP2 + ", WaterTempP3 = " + WaterTempP3 + ", WaterTempP4 = " + WaterTempP4 + "")
File "yieldbuddy.py", line 33, in update_sql
cursor.execute(query)
File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1")
</code></pre>
<p>I have been using the third example below to set up the SQL update statements. This seemed to be working until I reached the following code:</p>
<pre><code> elif 'SetPoint_pH2' in line:
if oldSetPoint_pH2 != line:
oldSetPoint_pH2 = line
#print("%s"%(line)) For Debugging
SetPoint_pH2,pH2Value_Low,pH2Value_High,pH2_Status=line.split(",")
SetPoint_pH2 = SetPoint_pH2.replace("Read fail", "")
pH2_Status = pH2_Status.rstrip()
print("\033[16;0H ")
print("\033[16;0H(" + now.strftime("%Y/%m/%d %H:%M:%S") + ") SetPoint_pH2: %s,%s,%s"%(pH2Value_Low,pH2Value_High,pH2_Status))
#SetPoint_pH
# update_sql("UPDATE `pH2` SET Low='" + pH2Value_Low + "',High='" + pH2Value_High + "',Status='" + pH2_Status + "'")
update_sql("UPDATE `pH2` SET Low = {}, High = {}, Status = {}".format(pH2Value_Low,pH2Value_High,pH2_Status))
</code></pre>
<p>For some reason it doesn't recognize the status field as being the Status column in the database; instead it's throwing this error.</p>
<pre><code>(2016/08/07 12:10:42) SetPoint_pH2: -1.00,6.20,OK
Traceback (most recent call last):
File "yieldbuddy.py", line 1310, in <module>
serialerr=checkSerial()
File "yieldbuddy.py", line 607, in checkSerial
update_sql("UPDATE `pH2` SET Low = {}, High = {}, Status = {}".format(pH2Value_Low,pH2Value_High,pH2_Status))
File "yieldbuddy.py", line 33, in update_sql
cursor.execute(query)
File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1054, "Unknown column 'OK' in 'field list'")
</code></pre>
<p>I can't see the difference between this line and the others I have already done; it tries to use a column "OK", which is actually the value I want to put into the "Status" column.</p>
| 0 | 2016-08-06T14:39:44Z | 38,807,248 | <p>Syntactically, your UPDATE SQL statement is correct according to the <code>SET</code> clause and placement of equal signs and commas. Most likely what's happening is one or more of your 21 variables is returning a zero length string (<code>''</code>) which MySQL cannot set to a numeric column. Consider converting such zero length strings conditionally to <code>None</code> which translates in MySQL as <code>NULL</code>.</p>
<p>As example, below shows an empty value for <code>v5</code></p>
<pre><code>line = "this,is,a,test,"
v1, v2, v3, v4, v5 = line.split(",")
print(v5=='')
# TRUE
</code></pre>
<p>To resolve, conditionally generate <code>None</code> for such zero-length strings:</p>
<pre><code>v1, v2, v3, v4, v5 = [None if i == '' else i for i in [v1, v2, v3, v4, v5]]
print(v5==None) # ALTERNATIVELY: print(v5 is None)
# TRUE
</code></pre>
<hr>
<p>Therefore, consider the following adjustment at the top when variables are initialized:</p>
<pre><code>...
Sensors,pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,TankTotal,\
Tank1,Tank2,Tank3,Tank4,WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4 = line.split(",")
Sensors,pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,TankTotal,\
Tank1,Tank2,Tank3,Tank4,WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4 = \
[None if i == '' else i for i in [Sensors,pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,\
MagX,MagY,MagZ,TankTotal,Tank1,Tank2,Tank3,Tank4,\
WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4]]
</code></pre>
<p>Then, wrap each variable in <code>str()</code> since the <code>NoneType</code> object requires explicit conversion of string not implicit with concatenation. Update below will set MySQL columns to <code>NULL</code> at every instance of Python's <code>None</code>:</p>
<pre><code>update_sql("UPDATE `Sensors` SET pH1 = " + str(pH1) + ", " + \
"pH2 = " + str(pH2) + ", Temp = " + str(Temp) + ", RH = " + str(RH) + ", " + \
"TDS1 = " + str(TDS1) + ", TDS2 = " + str(TDS2) + ", CO2 = " + str(CO2) + ", " + \
"Light = " + str(Light) + ", Water = " + str(Water) + ", MagX = " + str(MagX) + ", " + \
"MagY = " + str(MagY) + ", MagZ = " + str(MagZ) + ", TankTotal = " + str(TankTotal) + ", " + \
"Tank1 = " + str(Tank1) + ", Tank2 = " + str(Tank2) + ", Tank3 = " + str(Tank3) + ", " + \
"Tank4 = " + str(Tank4) + ", WaterTempP1 = " + str(WaterTempP1) + ", " + \
"WaterTempP2 = " + str(WaterTempP2) + ", WaterTempP3 = " + str(WaterTempP3) + ", " + \
"WaterTempP4 = " + str(WaterTempP4) + "")
</code></pre>
<p>Alternatively, consider string formatting which also accommodates <code>None</code>:</p>
<pre><code>update_sql("UPDATE `Sensors` SET pH1 = {}, \
pH2 = {}, Temp = {}, RH = {}, \
TDS1 = {}, TDS2 = {}, CO2 = {}, \
Light = {}, Water = {}, MagX = {}, \
MagY = {}, MagZ = {}, TankTotal = {}, \
Tank1 = {}, Tank2 = {}, Tank3 = {}, \
Tank4 = {}, WaterTempP1 = {}, \
WaterTempP2 = {}, WaterTempP3 = {}, \
WaterTempP4 = {} ".format(pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,\
MagX,MagY,MagZ,TankTotal,Tank1,Tank2,Tank3,Tank4,\
WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4))
</code></pre>
<p>Even better, run the update as a parameterized query within <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html" rel="nofollow">cursor.execute()</a> method and do so if source data comes from external sources to avoid sql injection:</p>
<pre><code>cur.execute("UPDATE `Sensors` SET pH1 = %s, \
pH2 = %s, Temp = %s, RH = %s, \
TDS1 = %s, TDS2 = %s, CO2 = %s, \
Light = %s, Water = %s, MagX = %s, \
MagY = %s, MagZ = %s, TankTotal = %s, \
Tank1 = %s, Tank2 = %s, Tank3 = %s, \
Tank4 = %s, WaterTempP1 = %s, \
WaterTempP2 = %s, WaterTempP3 = %s, \
WaterTempP4 = %s ", \
(pH1,pH2,Temp,RH,TDS1,TDS2,CO2,Light,Water,MagX,MagY,MagZ,\
TankTotal,Tank1,Tank2,Tank3,Tank4,\
            WaterTempP1,WaterTempP2,WaterTempP3,WaterTempP4))
db.commit()
</code></pre>
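<p>As for the 1054 error (<code>Unknown column 'OK'</code>) from the question's update, my reading is that it is a close cousin of the same problem: string values interpolated without quotes are parsed by MySQL as identifiers, i.e. column names. No database is needed to see the difference in the generated SQL:</p>

```python
# sample values taken from the question's error output
pH2Value_Low, pH2Value_High, pH2_Status = '-1.00', '6.20', 'OK'

# .format() drops OK into the SQL bare, so MySQL reads it as a column name
broken = "UPDATE `pH2` SET Low = {}, High = {}, Status = {}".format(
    pH2Value_Low, pH2Value_High, pH2_Status)

# quoting the string value by hand avoids the 1054 error,
# but the parameterized form above remains the safer choice,
# since the driver handles quoting and escaping for you
quoted = "UPDATE `pH2` SET Low = {}, High = {}, Status = '{}'".format(
    pH2Value_Low, pH2Value_High, pH2_Status)
```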
| 0 | 2016-08-06T17:59:31Z | [
"python",
"mysql",
"sqlite3"
] |
How to sort contours left to right, while going top to bottom, using Python and OpenCV | 38,805,462 | <p>I'm finding the contours for an image with digits and characters, for OCR. So, I need the contours to be sorted left to right, while going line to line, i.e. top to bottom. Right now, the contours aren't sorted that way.</p>
<p><a href="http://i.stack.imgur.com/DOXds.png" rel="nofollow">PIC: Contours are detected as shown here, including dots above i, full stop, comma, etc.</a></p>
<p>For example, the contours for the above image is sorted randomly.</p>
<p>What I need is the sorting as D,o,y,o,u,k,n,o,w,s,o,m,e,o,n,e,r,.(dot),i(without dot),c,h...and so on. I've tried a couple of methods where we first observe the y-coordinate and then use some keys and the x-coordinate. Like right now, I have the following sorting code. It works for the first 2 lines. Then in the 3rd line, the sorting somehow doesn't happen. The main problem seems to be in the letters such as i, j, ?, (dot), (comma), etc., where the y-axis of the (dot) varies, despite belonging to the same line. So what might be a good solution for this?</p>
<pre><code>for ctr in contours:
if cv2.contourArea(ctr) > maxArea * areaRatio:
rect.append(cv2.boundingRect(cv2.approxPolyDP(ctr,1,True)))
#rect contains the contours
for i in rect:
x = i[0]
y = i[1]
w = i[2]
h = i[3]
if(h>max_line_height):
max_line_height = h
mlh = max_line_height*2
max_line_width = raw_image.shape[1] #width of the input image
mlw = max_line_width
rect = np.asarray(rect)
s = rect.astype( np.uint32 ) #prevent overflows
order= mlw*(s[:,1]/mlh)+s[:,0]
sort_order= np.argsort( order )
rect = rect[ sort_order ]
</code></pre>
| 0 | 2016-08-06T14:42:17Z | 38,865,392 | <p>I like your trying to solve the problem with a single sorting. But as you said, the variation of y in each line might break your algorithm, plus, the <code>max_line_height</code> is something you probably have to tweak based on different inputs.</p>
<p>So instead, I would propose a slightly different algorithm, but with decent computational complexity. The idea is that, if you only look at the boxes' vertical extents, all the boxes from line <code>N+1</code> will never intersect with the boxes from lines <code>1</code> to <code>N</code>, but boxes within one line do intersect each other. So you can sort all the boxes by their <code>y</code> first, walk through them one by one looking for 'breaking points' (grouping them into lines), then within each line sort them by their <code>x</code>.</p>
<p>Here is a less Pythonic solution:</p>
<pre><code># sort all rect by their y
rect.sort(key=lambda b: b[1])
# initially the line bottom is set to be the bottom of the first rect
line_bottom = rect[0][1]+rect[0][3]-1
line_begin_idx = 0
for i in xrange(len(rect)):
# when a new box's top is below current line's bottom
# it's a new line
if rect[i][1] > line_bottom:
# sort the previous line by their x
rect[line_begin_idx:i] = sorted(rect[line_begin_idx:i], key=lambda b: b[0])
line_begin_idx = i
# regardless if it's a new line or not
# always update the line bottom
line_bottom = max(rect[i][1]+rect[i][3]-1, line_bottom)
# sort the last line
rect[line_begin_idx:] = sorted(rect[line_begin_idx:], key=lambda b: b[0])
</code></pre>
<p>Now <code>rect</code> should be sorted in the way you want.</p>
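<p>A quick, self-contained sanity check of the grouping logic (the sample boxes are made up; same <code>(x, y, w, h)</code> box format as above, wrapped in a function for testing):</p>

```python
def sort_boxes(rect):
    # sort by y, then split into lines and sort each line by x,
    # mirroring the in-place loop from the answer
    rect = sorted(rect, key=lambda b: b[1])
    line_bottom = rect[0][1] + rect[0][3] - 1
    line_begin_idx = 0
    for i in range(len(rect)):
        if rect[i][1] > line_bottom:
            rect[line_begin_idx:i] = sorted(rect[line_begin_idx:i], key=lambda b: b[0])
            line_begin_idx = i
        line_bottom = max(rect[i][1] + rect[i][3] - 1, line_bottom)
    rect[line_begin_idx:] = sorted(rect[line_begin_idx:], key=lambda b: b[0])
    return rect

boxes = [(50, 5, 10, 12), (10, 2, 10, 12), (30, 40, 10, 12), (5, 42, 10, 12)]
ordered = sort_boxes(boxes)
# two lines are detected, each sorted left to right:
# [(10, 2, 10, 12), (50, 5, 10, 12), (5, 42, 10, 12), (30, 40, 10, 12)]
```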
| 0 | 2016-08-10T06:03:26Z | [
"python",
"sorting",
"opencv",
"contour"
] |
AWS Lambda Python package - no module named redis | 38,805,518 | <p>I have a <code>python</code> package that I would like to upload to <code>AWS Lambda</code>.
The package works on two different machines with no dependency issues at all.</p>
<p>However, when uploading the same folder to <code>AWS Lambda</code>, I get the following error:</p>
<blockquote>
<p>Unable to import module 'tweet_analyzer_python/lambda_handler': No module named redis</p>
</blockquote>
<p>Here is a list of the files in the package:</p>
<pre><code>.
|-- event.json
|-- lambda_handler.py
|-- redis
| |-- client.py
| |-- client.pyc
| |-- _compat.py
| |-- _compat.pyc
| |-- connection.py
| |-- connection.pyc
| |-- exceptions.py
| |-- exceptions.pyc
| |-- __init__.py
| |-- __init__.pyc
| |-- lock.py
| |-- lock.pyc
| |-- sentinel.py
| |-- utils.py
| `-- utils.pyc
|-- redis-2.10.5-py2.7.egg-info
| |-- dependency_links.txt
| |-- installed-files.txt
| |-- PKG-INFO
| |-- SOURCES.txt
| `-- top_level.txt
|-- retrying-1.3.3-py2.7.egg-info
| |-- dependency_links.txt
| |-- installed-files.txt
| |-- PKG-INFO
| |-- requires.txt
| |-- SOURCES.txt
| `-- top_level.txt
|-- retrying.py
|-- retrying.pyc
|-- six-1.10.0-py2.7.egg-info
| |-- dependency_links.txt
| |-- installed-files.txt
| |-- PKG-INFO
| |-- SOURCES.txt
| `-- top_level.txt
|-- six.py
`-- six.pyc
</code></pre>
<p>For double-checking, I have downloaded the same <code>zip</code> file that was uploaded to <code>AWS Lambda</code> and put it on a clean linux machine.
When running:</p>
<blockquote>
<p>python tweet_analyzer_python/lambda_handler</p>
</blockquote>
<p>I had no issues at all.</p>
<p>Can someone explain what I am doing wrong?</p>
<p>Thanks!</p>
| 0 | 2016-08-06T14:47:58Z | 39,664,369 | <p>When you run 'lambda_handler.py' locally you are running a main method within the Python file. The Lambda function, however, calls the <code>lambda_handler</code> method within lambda_handler.py directly.</p>
<p>Your Lambda handler is not configured to run 'lambda_handler.lambda_handler' and is failing on 'tweet_analyzer_python/lambda_handler'.</p>
<p>Either:</p>
<ul>
<li>1) rename lambda_handler.py to tweet_analyzer_python or </li>
<li>2) change your lambda handler to 'lambda_handler.lambda_handler'</li>
</ul>
<p>To change your handler, go to your Lambda in AWS, select Configuration, update the handler, and save the function.</p>
<p>Also ensure your redis dep is packaged in your zipped lambda function.</p>
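<p>For reference, a minimal sketch of how the handler string maps onto the file (the body of the function here is made up): if <code>lambda_handler.py</code> contains the function below, the Lambda handler setting would be <code>lambda_handler.lambda_handler</code>, i.e. <code>&lt;module&gt;.&lt;function&gt;</code>:</p>

```python
# lambda_handler.py
def lambda_handler(event, context):
    # Lambda invokes this function directly; no __main__ block is involved
    return {'statusCode': 200, 'body': 'ok'}
```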
| 0 | 2016-09-23T15:23:26Z | [
"python",
"amazon-web-services",
"aws-lambda"
] |
Converting cURL call with encode to python request | 38,805,646 | <p>Now before this gets marked as duplicate and downvoted, I've tried all the other links (see below) and they don't help.</p>
<p><a href="http://stackoverflow.com/questions/36943604/converting-curl-call-to-python-requests">converting curl call to python requests</a></p>
<p><a href="http://stackoverflow.com/questions/32585800/convert-curl-request-to-python-requests-request">Convert cURL request to Python-Requests request</a></p>
<p>The cURL call I'm making is below (there's $ interpolation as it's in an .sh file):</p>
<pre><code>COOKIE_JAR=./program.cookies
LOGIN_URL= 'URL'
USER_ID = 'USERID'
PASSWORD = 'password'
VIEWSTATE = 'long string of text'
$(curl"$LOGIN_URL"-L -b "$COOKIE_JAR" -c "$COOKIE_JAR"
--data-urlencode "__VIEWSTATE=$viewstate" --data-urlencode "userid=$USER_ID"
--data-urlencode "password=$PASSWORD") || printf >2 "failed to get token:\n%s"
"$token" && printf "your token is:\n%s\n" "$token"
</code></pre>
<p>How can I translate this to the Python requests form? Any help will be greatly appreciated! :)</p>
| -1 | 2016-08-06T15:02:13Z | 38,805,847 | <p>So I managed to get it working, if anyone needs it:</p>
<pre><code> payload = {
'userid': USERID,
'password': PASSWORD,
'__VIEWSTATE':viewstate # defined earlier
}
with requests.Session() as s:
x = s.get(URL, params=payload,headers=headers)
token = x.text
</code></pre>
<p>Didn't need the cookies after all!</p>
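<p>One observation on the translation itself: curl's <code>--data-urlencode</code> builds a percent-encoded POST body, whereas the snippet above sends the values as GET query parameters, so the server here apparently accepts both. The standard library shows what the encoded body looks like (values below mirror the question's placeholders):</p>

```python
from urllib.parse import urlencode

payload = {'userid': 'USERID', 'password': 'password',
           '__VIEWSTATE': 'long string of text'}
body = urlencode(payload)  # what --data-urlencode puts on the wire
# body == 'userid=USERID&password=password&__VIEWSTATE=long+string+of+text'

# the closer requests translation would therefore be a POST:
#     x = s.post(URL, data=payload, headers=headers)
```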
| 0 | 2016-08-06T15:27:19Z | [
"python",
"curl"
] |
Why is my blitted character not moving in pygame? | 38,805,680 | <p>I am making an RPG in Python using Pygame. My first step is to create my main character and let it move. But it isn't moving. This is my code:</p>
<pre><code>import pygame,random
from pygame.locals import *
pygame.init()
black = (0,0,0)
white = (255,255,255)
red = (255,0,0)
blue = (0,255,0)
green = (0,0,255)
global screen, size, winWidth, winHeight, gameExit, pressed, mainChar, x, y
size = winWidth,winHeight = (1350,668)
screen = pygame.display.set_mode(size)
pygame.display.set_caption("RPG")
gameExit = False
pressed = pygame.key.get_pressed()
mainChar = pygame.image.load("Main Character.png")
x,y = 655,500
def surroundings():
stoneTile = pygame.image.load("Stone Tile.png")
stoneTileSize = stoneTile.get_rect()
def move():
if pressed[K_LEFT]: x -= 1
if pressed[K_RIGHT]: x += 1
if pressed[K_UP]: y -= 1
if pressed[K_DOWN]: y += 1
def player():
move()
screen.fill(black)
screen.blit(mainChar,(x,y))
while not gameExit:
for event in pygame.event.get():
if event.type == QUIT:
gameExit = True
surroundings()
move()
player()
pygame.display.update()
pygame.quit()
quit()
</code></pre>
<p>Please help me and explain why it isn't working, too. Thanks.</p>
| -2 | 2016-08-06T15:06:53Z | 38,805,808 | <p>You will have to update your <code>pressed</code> variable on each iteration of the loop:</p>
<pre><code>while not gameExit:
for event in pygame.event.get():
if event.type == QUIT:
gameExit = True
pressed = pygame.key.get_pressed()
surroundings()
move()
player()
pygame.display.update()
</code></pre>
<p>The values x and y that you have used within the move function are being treated as local variables; you will have to tell the interpreter that they are global variables:</p>
<pre><code>def move():
global x,y
if pressed[K_LEFT]: x -= 1
if pressed[K_RIGHT]: x += 1
if pressed[K_UP]: y -= 1
if pressed[K_DOWN]: y += 1
</code></pre>
| 1 | 2016-08-06T15:22:33Z | [
"python",
"pygame",
"blit"
] |
add timedelta data within a group in pandas dataframe | 38,805,744 | <p>I am working on a dataframe in pandas with four columns of <code>user_id</code>, <code>time_stamp1</code>, <code>time_stamp2</code>, and <code>interval</code>. Time_stamp1 and time_stamp2 are of type datetime64[ns] and interval is of type timedelta64[ns]. </p>
<p>I want to sum up interval values for each user_id in the dataframe and I tried to calculate it in many ways as:</p>
<pre><code>1)df["duration"]= df.groupby('user_id')['interval'].apply (lambda x: x.sum())
2)df ["duration"]= df.groupby('user_id').aggregate (np.sum)
3)df ["duration"]= df.groupby('user_id').agg (np.sum)
</code></pre>
<p>but none of them work and the value of the <code>duration</code> will be <code>NaT</code> after running the codes.</p>
| 1 | 2016-08-06T15:13:54Z | 38,805,988 | <p><strong>UPDATE:</strong> you can use the <code>transform()</code> method:</p>
<pre><code>In [291]: df['duration'] = df.groupby('user_id')['interval'].transform('sum')
In [292]: df
Out[292]:
a user_id b interval duration
0 2016-01-01 00:00:00 0.01 2015-11-11 00:00:00 51 days 00:00:00 838 days 08:00:00
1 2016-03-10 10:39:00 0.01 2015-12-08 18:39:00 NaT 838 days 08:00:00
2 2016-05-18 21:18:00 0.01 2016-01-05 13:18:00 134 days 08:00:00 838 days 08:00:00
3 2016-07-27 07:57:00 0.01 2016-02-02 07:57:00 176 days 00:00:00 838 days 08:00:00
4 2016-10-04 18:36:00 0.01 2016-03-01 02:36:00 217 days 16:00:00 838 days 08:00:00
5 2016-12-13 05:15:00 0.01 2016-03-28 21:15:00 259 days 08:00:00 838 days 08:00:00
6 2017-02-20 15:54:00 0.02 2016-04-25 15:54:00 301 days 00:00:00 1454 days 00:00:00
7 2017-05-01 02:33:00 0.02 2016-05-23 10:33:00 342 days 16:00:00 1454 days 00:00:00
8 2017-07-09 13:12:00 0.02 2016-06-20 05:12:00 384 days 08:00:00 1454 days 00:00:00
9 2017-09-16 23:51:00 0.02 2016-07-17 23:51:00 426 days 00:00:00 1454 days 00:00:00
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>Demo:</p>
<pre><code>In [260]: df
Out[260]:
a b interval user_id
0 2016-01-01 00:00:00 2015-11-11 00:00:00 51 days 00:00:00 1
1 2016-03-10 10:39:00 2015-12-08 18:39:00 NaT 1
2 2016-05-18 21:18:00 2016-01-05 13:18:00 134 days 08:00:00 1
3 2016-07-27 07:57:00 2016-02-02 07:57:00 176 days 00:00:00 1
4 2016-10-04 18:36:00 2016-03-01 02:36:00 217 days 16:00:00 1
5 2016-12-13 05:15:00 2016-03-28 21:15:00 259 days 08:00:00 1
6 2017-02-20 15:54:00 2016-04-25 15:54:00 301 days 00:00:00 2
7 2017-05-01 02:33:00 2016-05-23 10:33:00 342 days 16:00:00 2
8 2017-07-09 13:12:00 2016-06-20 05:12:00 384 days 08:00:00 2
9 2017-09-16 23:51:00 2016-07-17 23:51:00 426 days 00:00:00 2
In [261]: df.dtypes
Out[261]:
a datetime64[ns]
b datetime64[ns]
interval timedelta64[ns]
user_id int64
dtype: object
In [262]: df.groupby('user_id')['interval'].sum()
Out[262]:
user_id
1 838 days 08:00:00
2 1454 days 00:00:00
Name: interval, dtype: timedelta64[ns]
In [263]: df.groupby('user_id')['interval'].apply(lambda x: x.sum())
Out[263]:
user_id
1 838 days 08:00:00
2 1454 days 00:00:00
Name: interval, dtype: timedelta64[ns]
In [264]: df.groupby('user_id').agg(np.sum)
Out[264]:
interval
user_id
1 838 days 08:00:00
2 1454 days 00:00:00
</code></pre>
<p>So check your data...</p>
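<p>A minimal, self-contained sketch of the <code>transform()</code> approach (toy data and column names assumed; note that the sum skips <code>NaT</code> values):</p>

```python
import pandas as pd

# toy frame: per-user rows with a timedelta column (one missing value)
df = pd.DataFrame({
    "user_id": [1, 1, 2, 2],
    "interval": pd.to_timedelta(["1 days", "2 days", None, "4 days"]),
})

# transform broadcasts each group's sum back onto every row of that group
df["duration"] = df.groupby("user_id")["interval"].transform("sum")
print(df)
```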
| 0 | 2016-08-06T15:43:36Z | [
"python",
"pandas",
"dataframe",
"group-by",
"timedelta"
] |
Rosalind Profile and Consensus: Writing long strings to one line in Python (Formatting) | 38,805,770 | <p>I'm trying to tackle a problem on Rosalind where, given a FASTA file of at most 10 sequences at 1kb, I need to give the consensus sequence and profile (how many of each base do all the sequences have in common at each nucleotide). In the context of formatting my response, what I have as my code works for small sequences (verified). </p>
<p>However, I have issues in formatting my response when it comes to large sequences.
What I expect to return, regardless of length, is:</p>
<pre><code>"consensus sequence"
"A: one line string of numbers without commas"
"C: one line string """" "
"G: one line string """" "
"T: one line string """" "
</code></pre>
<p>All aligned with each other and on their own respective lines, or at least some formatting that allows me to carry this formatting as a unit onward to maintain the integrity of aligning.</p>
<p>but when I run my code for a large sequence, I get each separate string below the consensus sequence broken up by a newline, presumably because the string itself is too long. I've been struggling to think of ways to circumvent the issue, but my searches have been fruitless. I'm thinking about some iterative writing algorithm that can just write the entirety of the above expectation, but in chunks. Any help would be greatly appreciated. I have attached the entirety of my code below for the sake of completeness, with block comments as needed. </p>
<pre><code>def cons(file):
#returns consensus sequence and profile of a FASTA file
import os
path = os.path.abspath(os.path.expanduser(file))
with open(path,"r") as D:
F=D.readlines()
#initialize list of sequences, list of all strings, and a temporary storage
#list, respectively
SEQS=[]
mystrings=[]
temp_seq=[]
#get a list of strings from the file, stripping the newline character
for x in F:
mystrings.append(x.strip("\n"))
#if the string in question is a nucleotide sequence (without ">")
#i'll store that string into a temporary variable until I run into a string
#with a ">", in which case I'll join all the strings in my temporary
#sequence list and append to my list of sequences SEQS
for i in range(1,len(mystrings)):
if ">" not in mystrings[i]:
temp_seq.append(mystrings[i])
else:
SEQS.append(("").join(temp_seq))
temp_seq=[]
SEQS.append(("").join(temp_seq))
#set up list of nucleotide counts for A,C,G and T, in that order
ACGT= [[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))]]
#assumed to be equal length sequences. Counting amount of shared nucleotides
#in each column
for i in range(0,len(SEQS[0])-1):
for j in range(0, len(SEQS)):
if SEQS[j][i]=="A":
ACGT[0][i]+=1
elif SEQS[j][i]=="C":
ACGT[1][i]+=1
elif SEQS[j][i]=="G":
ACGT[2][i]+=1
elif SEQS[j][i]=="T":
ACGT[3][i]+=1
ancstr=""
TR_ACGT=list(zip(*ACGT))
acgt=["A: ","C: ","G: ","T: "]
for i in range(0,len(TR_ACGT)-1):
comp=TR_ACGT[i]
if comp.index(max(comp))==0:
ancstr+=("A")
elif comp.index(max(comp))==1:
ancstr+=("C")
elif comp.index(max(comp))==2:
ancstr+=("G")
elif comp.index(max(comp))==3:
ancstr+=("T")
'''
writing to file... trying to get it to write as
consensus sequence
A: blah(1line)
C: blah(1line)
G: blah(1line)
T: blah(line)
which works for small sequences. but for larger sequences
python keeps adding newlines if the string in question is very long...
'''
myfile="myconsensus.txt"
writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
with open(myfile,'w') as D:
D.writelines(ancstr)
D.writelines("\n")
for i in range(0,len(writing_strings)):
D.writelines(writing_strings[i])
D.writelines("\n")
</code></pre>
<p>cons("rosalind_cons.txt") </p>
| 0 | 2016-08-06T15:16:58Z | 38,806,558 | <p>Your code is totally fine except for this line:</p>
<pre><code>writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
</code></pre>
<p>You accidentally replicate your data inside the nested generator expression. Try replacing it with:</p>
<pre><code>writing_strings = [acgt[i] + str(ACGT[i])[1:-1] for i in range(0, len(ACGT))]
</code></pre>
<p>and then write each one to your output file as follows:</p>
<pre><code>D.write(writing_strings[i])
</code></pre>
<p>The <code>[1:-1]</code> slice is a lazy way to get rid of the brackets from the list's string form.</p>
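<p>If the goal is the Rosalind profile format (space-separated counts rather than Python list syntax), a <code>join</code> per row avoids bracket-slicing entirely; here is a sketch with toy counts:</p>

```python
acgt = ["A: ", "C: ", "G: ", "T: "]                    # row labels
ACGT = [[5, 1, 0], [0, 0, 1], [1, 1, 6], [1, 5, 0]]    # toy profile counts

# one long line per base, no matter how many columns there are
lines = [label + " ".join(str(n) for n in counts)
         for label, counts in zip(acgt, ACGT)]
for line in lines:
    print(line)
```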
| 0 | 2016-08-06T16:45:25Z | [
"python",
"python-3.x",
"rosalind"
] |
Reading Error Message when Clicking on Python File | 38,805,797 | <p>If I run the python script file from IDLE or Windows command prompt, I am able to view the error message.</p>
<p>Script file:</p>
<pre><code>print(3/0)
input()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "...\TEST.py", line 1, in <module>
print(3/0)
ZeroDivisionError: division by zero
</code></pre>
<p>But if I run the file by double clicking on it, the window just closes and I do not know what the error is. How can I see it?</p>
<p>I am running Python 3.4.</p>
| -1 | 2016-08-06T15:20:44Z | 38,805,839 | <p>If you just double-click the file, once you hit an error, it terminates the program, thus closing the console window. To see the errors, you should run it from the command prompt.</p>
| 0 | 2016-08-06T15:26:04Z | [
"python",
"error-handling"
] |
Reading Error Message when Clicking on Python File | 38,805,797 | <p>If I run the python script file from IDLE or Windows command prompt, I am able to view the error message.</p>
<p>Script file:</p>
<pre><code>print(3/0)
input()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "...\TEST.py", line 1, in <module>
print(3/0)
ZeroDivisionError: division by zero
</code></pre>
<p>But if I run the file by double clicking on it, the window just closes and I do not know what the error is. How can I see it?</p>
<p>I am running Python 3.4.</p>
| -1 | 2016-08-06T15:20:44Z | 38,805,841 | <p>This is how it's supposed to work.
When you double-click the <code>.py</code> file, Windows sees an executable file, invokes a cmd shell, and runs your Python interpreter inside it (even though it's only a text file, it's treated as executable; that's how it is set up on your system. This can be changed, say, to make a double-click simply open the file in a text editor of your choice, like Notepad, Notepad++, or Python's default IDLE editor). Since division by zero is an error, the cmd shell is killed as soon as the error is hit in the <code>.py</code> file; your <code>.py</code> file is treated as an executable, much like how an <code>.exe</code> application crash doesn't wait for you before it's killed.</p>
<p>If you do not want to lose the window and wish to see the error, then you already seem to know what to do: run it by invoking Python from a cmd shell manually, or better still, use the built-in IDLE editor (press F5 to run your script from IDLE).</p>
| 0 | 2016-08-06T15:26:35Z | [
"python",
"error-handling"
] |
Why find_one is returning all documents in my case? | 38,805,923 | <p>I'm trying to use BCryptAuth to protect resources as well as for the login system.
I'm trying to fetch only one document based on the user's email entered at the login page. </p>
<pre><code>class BCryptAuth(BasicAuth):
def check_auth(self, email, password, allowed_roles, resource, method):
account = app.data.driver.db['users'].find_one({'email': email})
return account and \
bcrypt.hashpw(password.encode('utf-8'),account['salt'].encode('utf-8')) == account['password']
</code></pre>
<p>But when I try to access the users endpoint via Postman, it authenticates but returns all documents. I'm a bit confused. If my approach is wrong, please suggest a correct one.</p>
| 0 | 2016-08-06T15:35:35Z | 38,830,059 | <p>The Auth class you mention only allows or denies access to the API. It does no resource filtering. </p>
<p>If you want resource filtering when getting <code>users</code>, you can create an event hook and make a pre-GET dynamic filter. Check the <a href="http://python-eve.org/features.html#dynamic-lookup-filters" rel="nofollow">documentation</a>, it should help.</p>
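<p>A sketch of such a pre-GET hook (the hook name <code>on_pre_GET_users</code> assumes a resource called <code>users</code>; the stand-in request object below is just for illustration):</p>

```python
from types import SimpleNamespace

def restrict_users_to_requester(request, lookup):
    """Eve pre-GET event hook: mutate the lookup dict so the query only
    matches the authenticated user's own document.
    Register with:  app.on_pre_GET_users += restrict_users_to_requester
    """
    auth = getattr(request, "authorization", None)
    if auth and auth.username:
        lookup["email"] = auth.username

# stand-in for the Flask request object Eve passes to the hook
fake_request = SimpleNamespace(authorization=SimpleNamespace(username="me@example.com"))
lookup = {}
restrict_users_to_requester(fake_request, lookup)
print(lookup)  # the extra filter Eve would apply to the users query
```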
| 0 | 2016-08-08T13:08:06Z | [
"python",
"eve"
] |
In Pandas, how can I count consecutive positive and negatives in a row? | 38,805,928 | <p>In python pandas or numpy, is there a built-in function or a combination of functions that can count the number of positive or negative values in a row? </p>
<p>This could be thought of as similar to a roulette wheel with the number of blacks or reds in a row.</p>
<p>Example input series data: </p>
<pre><code>Date
2000-01-07 -3.550049
2000-01-10 28.609863
2000-01-11 -2.189941
2000-01-12 4.419922
2000-01-13 17.690185
2000-01-14 41.219971
2000-01-18 0.000000
2000-01-19 -16.330078
2000-01-20 7.950195
2000-01-21 0.000000
2000-01-24 38.370117
2000-01-25 6.060059
2000-01-26 3.579834
2000-01-27 7.669922
2000-01-28 2.739991
2000-01-31 -8.039795
2000-02-01 10.239990
2000-02-02 -1.580078
2000-02-03 1.669922
2000-02-04 7.440186
2000-02-07 -0.940185
</code></pre>
<p>Desired output:</p>
<pre><code>- in a row 5 times
+ in a row 4 times
++ in a row once
++++ in a row once
+++++++ in a row once
</code></pre>
| 2 | 2016-08-06T15:36:04Z | 38,809,760 | <p>You can use <a href="https://docs.python.org/dev/library/itertools.html#itertools.groupby" rel="nofollow">itertools.groupby()</a> function.</p>
<pre><code>import itertools
l = [-3.550049, 28.609863, -2.189941, 4.419922, 17.690185, 41.219971, 0.000000, -16.330078, 7.950195, 0.000000, 38.370117, 6.060059, 3.579834, 7.669922, 2.739991, -8.039795, 10.239990, -1.580078, 1.669922, 7.440186, -0.940185]
r_pos = {}
r_neg = {}
for k, v in itertools.groupby(l, lambda e:e>0):
count = len(list(v))
r = r_pos
if k == False:
r = r_neg
if count not in r.keys():
r[count] = 0
r[count] += 1
for k, v in r_neg.items():
print '%s in a row %s time(s)' % ('-'*k, v)
for k, v in r_pos.items():
print '%s in a row %s time(s)' % ('+'*k, v)
</code></pre>
<p>output</p>
<pre><code>- in a row 6 time(s)
+ in a row 2 time(s)
++ in a row 1 time(s)
++++ in a row 1 time(s)
+++++++ in a row 1 time(s)
</code></pre>
<p>depending on what you consider as a positive value, you can change the line <code>lambda e:e>0</code></p>
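<p>For a pandas-native take on the same run-length idea, a <code>shift</code>/<code>cumsum</code> trick labels each streak (variable names here are assumptions; zeros are lumped with negatives, as above):</p>

```python
import pandas as pd

s = pd.Series([-3.55, 28.61, -2.19, 4.42, 17.69, 41.22, 0.0, -16.33, 7.95,
               0.0, 38.37, 6.06, 3.58, 7.67, 2.74, -8.04, 10.24, -1.58,
               1.67, 7.44, -0.94])

sign = s > 0                                 # True for positives, False otherwise
run_id = (sign != sign.shift()).cumsum()     # new id every time the sign flips
runs = sign.groupby(run_id).agg(["first", "size"])

# how many times each streak length occurs, per sign
counts = runs.groupby(["first", "size"]).size()
print(counts)
```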
| 1 | 2016-08-07T00:13:21Z | [
"python",
"pandas",
"numpy"
] |
In Pandas, how can I count consecutive positive and negatives in a row? | 38,805,928 | <p>In python pandas or numpy, is there a built-in function or a combination of functions that can count the number of positive or negative values in a row? </p>
<p>This could be thought of as similar to a roulette wheel with the number of blacks or reds in a row.</p>
<p>Example input series data: </p>
<pre><code>Date
2000-01-07 -3.550049
2000-01-10 28.609863
2000-01-11 -2.189941
2000-01-12 4.419922
2000-01-13 17.690185
2000-01-14 41.219971
2000-01-18 0.000000
2000-01-19 -16.330078
2000-01-20 7.950195
2000-01-21 0.000000
2000-01-24 38.370117
2000-01-25 6.060059
2000-01-26 3.579834
2000-01-27 7.669922
2000-01-28 2.739991
2000-01-31 -8.039795
2000-02-01 10.239990
2000-02-02 -1.580078
2000-02-03 1.669922
2000-02-04 7.440186
2000-02-07 -0.940185
</code></pre>
<p>Desired output:</p>
<pre><code>- in a row 5 times
+ in a row 4 times
++ in a row once
++++ in a row once
+++++++ in a row once
</code></pre>
| 2 | 2016-08-06T15:36:04Z | 38,809,764 | <p>Nonnegatives:</p>
<pre><code>from functools import reduce # For Python 3.x
ser = df['x'] >= 0
c = ser.expanding().apply(lambda r: reduce(lambda x, y: x + 1 if y else x * y, r))
c[ser & (ser != ser.shift(-1))].value_counts()
Out:
1.0 2
7.0 1
4.0 1
2.0 1
Name: x, dtype: int64
</code></pre>
<p>Negatives:</p>
<pre><code>ser = df['x'] < 0
c = ser.expanding().apply(lambda r: reduce(lambda x, y: x + 1 if y else x * y, r))
c[ser & (ser != ser.shift(-1))].value_counts()
Out:
1.0 6
Name: x, dtype: int64
</code></pre>
<hr>
<p>Basically, it creates a boolean series takes the cumulative count between the turning points (when the sign changes, it starts over). For example, for nonnegatives, <code>c</code> is:</p>
<pre><code>Out:
0 0.0
1 1.0 # turning point
2 0.0
3 1.0
4 2.0
5 3.0
6 4.0 # turning point
7 0.0
8 1.0
9 2.0
10 3.0
11 4.0
12 5.0
13 6.0
14 7.0 # turning point
15 0.0
16 1.0 # turning point
17 0.0
18 1.0
19 2.0 # turning point
20 0.0
Name: x, dtype: float64
</code></pre>
<p>Now, in order to identify the turning points the condition is that the current value is different than the next and it is True. If you select those, you have the counts.</p>
| 1 | 2016-08-07T00:13:49Z | [
"python",
"pandas",
"numpy"
] |
In Pandas, how can I count consecutive positive and negatives in a row? | 38,805,928 | <p>In python pandas or numpy, is there a built-in function or a combination of functions that can count the number of positive or negative values in a row? </p>
<p>This could be thought of as similar to a roulette wheel with the number of blacks or reds in a row.</p>
<p>Example input series data: </p>
<pre><code>Date
2000-01-07 -3.550049
2000-01-10 28.609863
2000-01-11 -2.189941
2000-01-12 4.419922
2000-01-13 17.690185
2000-01-14 41.219971
2000-01-18 0.000000
2000-01-19 -16.330078
2000-01-20 7.950195
2000-01-21 0.000000
2000-01-24 38.370117
2000-01-25 6.060059
2000-01-26 3.579834
2000-01-27 7.669922
2000-01-28 2.739991
2000-01-31 -8.039795
2000-02-01 10.239990
2000-02-02 -1.580078
2000-02-03 1.669922
2000-02-04 7.440186
2000-02-07 -0.940185
</code></pre>
<p>Desired output:</p>
<pre><code>- in a row 5 times
+ in a row 4 times
++ in a row once
++++ in a row once
+++++++ in a row once
</code></pre>
| 2 | 2016-08-06T15:36:04Z | 38,829,042 | <p>So far this is what I've come up with; it works and outputs a count of how many times each of the negative, positive, and zero values occur in a row. Maybe someone can make it more concise using some of the suggestions posted by ayhan and Ghilas above. </p>
<pre><code>from collections import Counter
ser = [-3.550049, 28.609863, -2.1, 89941,4.419922,17.690185,41.219971,0.000000,-16.330078,7.950195,0.000000,38.370117,6.060059,3.579834,7.669922,2.739991,-8.039795,10.239990,-1.580078, 1.669922, 7.440186,-0.940185]
c = 0
zeros, neg_counts, pos_counts = [], [], []
for i in range(len(ser)):
c+=1
s = np.sign(ser[i])
try:
if s != np.sign(ser[i+1]):
if s == 0:
zeros.append(c)
elif s == -1:
neg_counts.append(c)
elif s == 1:
pos_counts.append(c)
c = 0
except IndexError:
pos_counts.append(c) if s == 1 else neg_counts.append(c) if s ==-1 else zeros.append(c)
print(Counter(neg_counts))
print(Counter(pos_counts))
print(Counter(zeros))
</code></pre>
<p>Out: </p>
<pre><code>Counter({1: 5})
Counter({1: 3, 2: 1, 4: 1, 5: 1})
Counter({1: 2})
</code></pre>
| 1 | 2016-08-08T12:18:14Z | [
"python",
"pandas",
"numpy"
] |
cartesian product of overlapping list of intervals | 38,805,963 | <p>I have two lists of lists consisting of intervals on the real line </p>
<pre><code>I = [[8,12], [18,24], [3,5]]
J = [[7,10], [2,6], [18,22]]
</code></pre>
<p>I want to generate a list that contains the pairs of intervals from I and J that overlap. For example, one element of the list would be [[8,12],[7,10]]. I have a loop that does this</p>
<pre><code>res=[]
for i in range(len(I)):
des=[]
for j in range(len(J)):
if (I[i][1]<=J[j][1] and I[i][1]>=J[j][0]) or (J[j][1]<=I[i][1] and J[j][1]>=I[i][0]):
z=[I[i],J[j]]
res.append(z)
</code></pre>
<p>which yields</p>
<pre><code>res=[[[8, 12], [7, 10]], [[18, 24], [18, 22]], [[3, 5], [2, 6]]]
</code></pre>
<p>but I am trying to find a cleaner more efficient version</p>
<p>It is possible to have overlapping intervals in each separate list. For example we could have </p>
<pre><code>I= [ [2,5], [1,4] ]
</code></pre>
<p>and </p>
<pre><code>J= [[3,7], [10,12]]
</code></pre>
<p>in this case the result would be </p>
<pre><code>[ [[1,4], [3,7]], [[2,5], [3,7]] ]
</code></pre>
</code></pre>
| 0 | 2016-08-06T15:41:01Z | 38,806,684 | <p>This shall do</p>
<pre><code>import itertools
I = [[8,12], [18,24], [3,5]]
J = [[7,10], [2,6], [18,22]]
z = []
for x,y in itertools.product(I,J):
#find intersection via sets
if set(range(x[0],x[1])) & set(range(y[0],y[1])):
z.append([x,y])
print z
</code></pre>
| -1 | 2016-08-06T16:58:15Z | [
"python"
] |
cartesian product of overlapping list of intervals | 38,805,963 | <p>I have two lists of lists consisting of intervals on the real line </p>
<pre><code>I = [[8,12], [18,24], [3,5]]
J = [[7,10], [2,6], [18,22]]
</code></pre>
<p>I want to generate a list that contains the pairs of intervals from I and J that overlap. For example, one element of the list would be [[8,12],[7,10]]. I have a loop that does this</p>
<pre><code>res=[]
for i in range(len(I)):
des=[]
for j in range(len(J)):
if (I[i][1]<=J[j][1] and I[i][1]>=J[j][0]) or (J[j][1]<=I[i][1] and J[j][1]>=I[i][0]):
z=[I[i],J[j]]
res.append(z)
</code></pre>
<p>which yields</p>
<pre><code>res=[[[8, 12], [7, 10]], [[18, 24], [18, 22]], [[3, 5], [2, 6]]]
</code></pre>
<p>but I am trying to find a cleaner more efficient version</p>
<p>It is possible to have overlapping intervals in each separate list. For example we could have </p>
<pre><code>I= [ [2,5], [1,4] ]
</code></pre>
<p>and </p>
<pre><code>J= [[3,7], [10,12]]
</code></pre>
<p>in this case the result would be </p>
<pre><code>[ [[1,4], [3,7]], [[2,5], [3,7]] ]
</code></pre>
</code></pre>
| 0 | 2016-08-06T15:41:01Z | 38,808,347 | <p>This can be made a little more readable by using <code>enumerate</code> to pull the index and values out at the same time. Also, semantically the number pairs might be better described as tuples. </p>
<pre><code>I = [(8,12), (18,24), (3,5)]
J = [(7,10), (2,6), (18,22)]
def overlap(I, J):
res=[]
for i, ival in enumerate(I):
for j, jval in enumerate(J):
if (ival[1]<=jval[1] and ival[1]>=jval[0]) or (jval[1]<=ival[1] and jval[1]>=ival[0]):
z = (ival, jval)
res.append(z)
return res
res = overlap(I, J)
print(res)
assert res == [((8, 12), (7, 10)), ((18, 24), (18, 22)), ((3, 5), (2, 6))]
</code></pre>
<p>But since the index doesn't have to be used, this can be simplified even further by just looping over the values.</p>
<pre><code>def overlap(I, J):
res=[]
for ival in I:
for jval in J:
if (ival[1]<=jval[1] and ival[1]>=jval[0]) or (jval[1]<=ival[1] and jval[1]>=ival[0]):
z = (ival, jval)
res.append(z)
return res
</code></pre>
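<p>The overlap condition itself can be collapsed to the standard interval test (two closed intervals overlap iff each starts before the other ends), which reads more cleanly than the four-clause check:</p>

```python
from itertools import product

def overlaps(a, b):
    """Closed intervals [a0, a1] and [b0, b1] overlap iff each starts before the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

I = [(8, 12), (18, 24), (3, 5)]
J = [(7, 10), (2, 6), (18, 22)]

pairs = [(i, j) for i, j in product(I, J) if overlaps(i, j)]
print(pairs)
```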
| 0 | 2016-08-06T20:14:39Z | [
"python"
] |
Create submodels with pandas groupby and locate each model with test data | 38,806,021 | <p>I have a pandas dataframe in which values in a column are used as the group-by basis to create submodels.</p>
<pre><code>import pandas as pd
from sklearn.linear_model import Ridge
data = pd.DataFrame({"Name": ["A", "A", "A", "B", "B", "B"], "Score": [90, 80, 90, 92, 87, 80], "Age": [10, 12, 14, 9, 11, 12], "Training": [0, 1, 2, 0, 1, 2]})
</code></pre>
<p><code>"Name"</code> is used as the basis to create submodel for each individual. I want o use variable <code>"Age"</code> and <code>"Training"</code> to predict <code>"Score"</code> of one individual <code>"Name"</code> (i.e <code>"A"</code> and <code>"B"</code> in this case). That is, if I have <code>"A"</code> and know the <code>"Age"</code> and <code>"Training"</code> of <code>"A"</code>, I would love to use <code>"A"</code>, <code>"Age"</code>, <code>"Training"</code> to predict <code>"Score"</code>. However, <code>"A"</code> should be used to access to the model that <code>"A"</code> belongs to other than other model. </p>
<pre><code>grouped_df = data.groupby(['Name'])
for key, item in grouped_df:
Score = grouped_df['Score']
Y = grouped_df['Age', 'Training']
Score_item = Score.get_group(key)
Y_item = Y.get_group(key)
model = Ridge(alpha = 1.2)
modelfit = model.fit(Y_item, Score_item)
modelpred = model.predict(Y_item)
modelscore = model.score(Y_item, Score_item)
print modelscore
</code></pre>
<p>Up to here, I have built simple Ridge models to sub-groups <code>A</code> and <code>B</code>.</p>
<p>My question is, with test data as below:</p>
<pre><code>test_data = [u"A, 13, 0", u"B, 12, 1", u"A 10, 0"] ##each element, respectively, represents `Name`, `Age` and `Training`
</code></pre>
<p>How to feed the data to the prediction models?
I have </p>
<pre><code>line = test_data
Name = [line[i].split()[0] for i in range(len(line))]
Age = [line[i].split()[1] for i in range(len(line))]
Training = [line[i].split()[2] for i in range(len(line))]
Y = pd.DataFrame({"Name": Name, "Age": Age, "Training": Training})
</code></pre>
<p>This gives me the pandas dataframe of the test data. However, I am not sure how to proceed further to feed the test data to the model. I highly appreciate your help. Thank you!!</p>
<p><strong>UPDATE</strong></p>
<p>After I adopted Parfait's code, it looks better now. Here I did not, however, create another pandas dataframe of the test data (as I am not sure how to deal with <code>row</code> in there). Instead, I feed in the test values by splitting strings. I obtained an error as indicated below. I searched and found a related post here: <a href="http://stackoverflow.com/questions/35082140/preprocessing-in-scikit-learn-single-sample-depreciation-warning">Preprocessing in scikit learn - single sample - Depreciation warning</a>. However, I tried to reshape the test data, but it is in list form so it does not have a reshape attribute. I think I misunderstand something. I would highly appreciate it if you could let me know how to fix this error. Thank you.</p>
<pre><code>import pandas as pd
from sklearn.linear_model import Ridge
import numpy as np
data = pd.DataFrame({"Name": ["A", "A", "A", "B", "B", "B"], "Score": [90, 80, 90, 92, 87, 80], "Age": [10, 12, 14, 9, 11, 12], "Training": [0, 1, 2, 0,$
modeldict = {} # INITIALIZE DICT
grouped_df = data.groupby(['Name'])
for key, item in grouped_df:
Score = grouped_df['Score']
Y = grouped_df['Age', 'Training']
Score_item = Score.get_group(key)
Y_item = Y.get_group(key)
model = Ridge(alpha = 1.2)
modelfit = model.fit(Y_item, Score_item)
modelpred = model.predict(Y_item)
modelscore = model.score(Y_item, Score_item)
modeldict[key] = modelfit # SAVE EACH FITTED MODEL TO DICT
line = [u"A, 13, 0", u"B, 12, 1", u"A, 10, 0"]
Name = [line[i].split(",")[0] for i in range(len(line))]
Age = [line[i].split(",")[1] for i in range(len(line))]
Training = [line[i].split(",")[2] for i in range(len(line))]
for i in range(len(line)):
Name = line[i].split(",")[0]
Age = line[i].split(",")[1]
Training = line[i].split(",")[2]
model = modeldict[Name]
ip = [float(Age), float(Training)]
score = model.predict(ip)
print score
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code>/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample. DeprecationWarning)
86.6666666667
/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.DeprecationWarning)
83.5320600273
/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.DeprecationWarning)
86.6666666667
/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.DeprecationWarning)
[ 86.66666667]
/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.DeprecationWarning)
[ 83.53206003]
/opt/conda/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and willraise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample. DeprecationWarning)
[ 86.66666667]
</code></pre>
| 0 | 2016-08-06T15:45:46Z | 38,810,102 | <p>Consider saving submodels in a dictionary with <em>Name</em> as the key and then run a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">pandas.DataFrame.apply()</a> to run operations on each row aligning row's <em>Name</em> to corresponding model. </p>
<p><strong>NOTE:</strong> Below is untested code, but it hopefully gives a general idea that you can adjust accordingly. The main issue might be the <code>model.predict()</code> input and output in the defined function, <code>runModel</code>, used in the <code>apply()</code>. A numpy matrix of <em>Age</em> and <em>Training</em> values is used in <code>model.predict()</code>, which hopefully returns an array with one value per sample (i.e., each row). See <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html" rel="nofollow">Ridge model</a>:</p>
<pre><code>modeldict = {} # INITIALIZE DICT
grouped_df = data.groupby(['Name'])
for key, item in grouped_df:
Score = grouped_df['Score']
Y = grouped_df['Age', 'Training']
Score_item = Score.get_group(key)
Y_item = Y.get_group(key)
model = Ridge(alpha = 1.2)
modelfit = model.fit(Y_item, Score_item)
modelpred = model.predict(Y_item)
modelscore = model.score(Y_item, Score_item)
print modelscore
modeldict[key] = modelfit # SAVE EACH FITTED MODEL TO DICT
line = [u"A, 13, 0", u"B, 12, 1", u"A 10, 0"]
Name = [line[i].split()[0] for i in range(len(line))]
Age = [line[i].split()[1] for i in range(len(line))]
Training = [line[i].split()[2] for i in range(len(line))]
testdata = pd.DataFrame({"Name": Name, "Age": Age, "Training": Training})
def runModel(row):
# LOCATE MODEL BY NAME KEY
model = modeldict[row['Name']]
# PREDICT VALUES
    score = model.predict(np.matrix([row['Age'], row['Training']]))
# RETURN SCALAR FROM score ARRAY
return(score[0])
testdata['predictedScore'] = testdata.apply(runModel, axis=1)
</code></pre>
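<p>The dict-lookup-inside-<code>apply</code> pattern, stripped down to toy stand-in models so it runs without scikit-learn (the linear functions below are invented for illustration only):</p>

```python
import pandas as pd

# one fitted "model" per Name; here just toy linear functions standing in
# for the fitted Ridge estimators stored in modeldict
modeldict = {
    "A": lambda age, training: 2 * age + training,
    "B": lambda age, training: 3 * age - training,
}

testdata = pd.DataFrame({
    "Name": ["A", "B", "A"],
    "Age": [13.0, 12.0, 10.0],
    "Training": [0.0, 1.0, 0.0],
})

def run_model(row):
    model = modeldict[row["Name"]]          # pick the submodel by group key
    return model(row["Age"], row["Training"])

testdata["predictedScore"] = testdata.apply(run_model, axis=1)
print(testdata)
```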
| 0 | 2016-08-07T01:28:02Z | [
"python",
"pandas",
"grouping",
"prediction"
] |
md5 hash of file calculated not correct in Python | 38,806,027 | <p>I have a function for calculating the md5 hashes of all the files in a drive. A hash is calculated but it's different from the hash I got using other programs or online services that are designed for that.</p>
<pre><code>def md5_files(path, blocksize = 2**20):
hasher = hashlib.md5()
hashes = {}
for root, dirs, files in os.walk(path):
for file in files:
file_path = os.path.join(root, file)
print(file_path)
with open(file_path, "rb") as f:
data = f.read(blocksize)
if not data:
break
hasher.update(data)
hashes[file_path] = hasher.hexdigest()
return hashes
</code></pre>
<p>the <code>path</code> provided is the drive letter, for example "K:\" then I navigate through the files and I open the file for binary read. I read chunks of data of the size specified in <code>blocksize</code>. Then I store the filename and md5 hash of every file in a dictionary called <code>hashes</code>. The code looks okay, I also checked other questions on Stack Overflow. I don't know why the generated md5 hash is wrong.</p>
| 1 | 2016-08-06T15:46:31Z | 38,806,131 | <p>You need to construct a new md5 object for each file and read each file completely, e.g. like so:</p>
<pre><code>def md5_files(path, blocksize = 2**20):
hashes = {}
for root, dirs, files in os.walk(path):
for file in files:
file_path = os.path.join(root, file)
print(file_path)
with open(file_path, "rb") as f:
data = f.read(blocksize)
hasher = hashlib.md5(data)
while data:
data = f.read(blocksize)
hasher.update(data)
hashes[file_path] = hasher.hexdigest()
return hashes
</code></pre>
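<p>An equivalent, slightly more idiomatic way to drive the read loop is <code>iter()</code> with a sentinel, which removes the duplicated <code>read</code> call (a per-file sketch; wiring it into the <code>os.walk</code> loop is left as in the answer above):</p>

```python
import hashlib

def md5_file(file_path, blocksize=2**20):
    """Hash one file in blocksize chunks; note the fresh md5 object per file."""
    hasher = hashlib.md5()
    with open(file_path, "rb") as f:
        # iter() calls the lambda until it returns the sentinel b"" (end of file)
        for chunk in iter(lambda: f.read(blocksize), b""):
            hasher.update(chunk)
    return hasher.hexdigest()
```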
| 1 | 2016-08-06T15:58:02Z | [
"python",
"hash",
"md5",
"hashlib"
] |
Taking one positional argument but more than one given | 38,806,100 | <p>I have some code that uses pybing image_search to download pictures but my threading seems to have a problem </p>
<pre><code> import pickle
from urllib.request import urlretrieve
import threading
search_terms = ['Clint Eastwood', 'George Clooney']
def downloader():
for search_term in search_terms:
t = threading.Thread(target=getter, args=(str(search_term)))
print("started thread for %s" %search_term)
t.start()
def getter(search_term):
list_of_lists = pickle.load(open('%s/pickle_dump.p' %search_term, 'rb'))
count = 1
print(list_of_lists)
for list in list_of_lists:
print(list)
for i in list:
try:
link = i.media_url
print('retrieving %s-%s' %(str(count),str(i)))
urlretrieve(link, "%s/%s-%s.jpg" % (search_term, str(count), str(i)))
except:
pass
count += 1
</code></pre>
<p>getter() opens a list of lists; each place in the list holds a pybing image object which I can use to get a link for an image. But when running downloader(), it says getter takes one positional argument but 14 were given, which is confusing, as I only pass a string into it for each thread. Any help?</p>
| 0 | 2016-08-06T15:54:36Z | 38,806,129 | <p><code>(str(search_term))</code> is not a tuple; <code>(str(search_term),)</code> is. In your code, <code>getter</code> is receiving a list of arguments, one character of <code>search_term</code> per argument.</p>
| 0 | 2016-08-06T15:57:54Z | [
"python",
"multithreading",
"image"
] |
Intercept all queries on a model in SQLAlchemy | 38,806,196 | <p>I need to intercept all queries that concern a model in SQLAlchemy, in a way that I can inspect it at the point where any of the query methods (<code>all()</code>, <code>one()</code>, <code>scalar()</code>, etc.) is executed.</p>
<p>I have thought about the following approaches:</p>
<h1>1. Subclass the Query class</h1>
<p>I could subclass <code>sqlalchemy.orm.Query</code> and override the execution code, starting basically from something like <a href="http://derrickgilland.com/posts/demystifying-flask-sqlalchemy/" rel="nofollow">this</a>.</p>
<p>However, I am writing a library that can be used in other SQLAlchemy applications, and thus the creation of the declarative base, let alone engines and sessions, is outside my scope.</p>
<p>Maybe I have missed something and it is possible to override the Query class for my models without knowledge of the session?</p>
<h1>2. Use the before_execute Core Event</h1>
<p>I have also thought of hooking into execution with the <a href="http://docs.sqlalchemy.org/en/latest/core/events.html#sqlalchemy.events.ConnectionEvents.before_execute" rel="nofollow">before_execute</a> event.</p>
<p>The problem is that it is bound to an engine (see above). Also, I need to modify objects in the session, and I got the impression that I do not have access to a session from within this event.</p>
<hr>
<p>What I want to be able to do is something like:</p>
<ol>
<li><code>session.query(MyModel).filter_by(foo="bar").all()</code> is executed.</li>
<li>Intercept that query and do something like storing the query in a log table within the same database (not literally that, but a set of different things that basically need the exact same functionality as this example operation)</li>
<li>Let the query execute like normal.</li>
</ol>
<p>What I am trying to do in the end is inject items from another data store into the SQLAlchemy database on-the-fly upon querying. While this seems stupid - trust me, it might be less stupid than it sounds (or even more stupid) ;).</p>
| 1 | 2016-08-06T16:05:58Z | 38,809,415 | <p>The <a href="http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.QueryEvents.before_compile" rel="nofollow"><code>before_compile</code></a> query event might be useful for you.</p>
<pre><code>from weakref import WeakSet

from sqlalchemy import event
from sqlalchemy.orm import Query

visited_queries = WeakSet()

@event.listens_for(Query, 'before_compile')
def log_query(query):
    # You can get the session
    session = query.session
    # Prevent recursion if you want to compile the query to log it!
    if query not in visited_queries:
        visited_queries.add(query)
        # do something with query.statement
</code></pre>
<p>You can look at <code>query.column_descriptions</code> to see if your model is being queried.</p>
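<p>Each entry of <code>column_descriptions</code> is a dict with an <code>'entity'</code> key holding the mapped class, so a guard inside the listener could look like the sketch below (the helper name and the stand-in <code>FakeQuery</code> are mine; only the attribute shape mirrors SQLAlchemy):</p>

```python
# Hypothetical helper: True if `model` is one of the queried entities.
def involves_model(query, model):
    return any(desc.get('entity') is model
               for desc in query.column_descriptions)

class FakeQuery:
    # mimics the shape of Query.column_descriptions for demonstration
    column_descriptions = [{'name': 'MyModel', 'entity': int}]

print(involves_model(FakeQuery(), int))    # True
print(involves_model(FakeQuery(), float))  # False
```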
| 1 | 2016-08-06T22:49:38Z | [
"python",
"sqlalchemy"
] |
What's the time complexity of functions in heapq library | 38,806,202 | <p>My question is from the solution in leetcode below, I can't understand why it is <code>O(k+(n-k)log(k))</code>.</p>
<p>Supplement: Maybe the complexity isn't that; in fact, I don't know the time complexity of <code>heappush()</code> and <code>heappop()</code>.</p>
<pre><code># O(k+(n-k)lgk) time, min-heap
def findKthLargest(self, nums, k):
    heap = []
    for num in nums:
        heapq.heappush(heap, num)
    for _ in xrange(len(nums)-k):
        heapq.heappop(heap)
    return heapq.heappop(heap)
</code></pre>
| 0 | 2016-08-06T16:06:35Z | 38,833,175 | <p><code>heapq</code> is a binary heap, with O(log n) <code>push</code> and O(log n) <code>pop</code>. See the <a href="https://fossies.org/dox/Python-3.5.2/heapq_8py_source.html" rel="nofollow">heapq source code</a>.</p>
<p>The algorithm you show takes O(n log n) to push all the items onto the heap, and then O((n-k) log n) to find the kth largest element. So the complexity would be O(n log n). It also requires O(n) extra space.</p>
<p>You can do this in O(n log k), using O(k) extra space by modifying the algorithm slightly. I'm not a Python programmer, so you'll have to translate the pseudocode:</p>
<pre><code>create a new min-heap
push the first k nums onto the heap
for the rest of the nums:
    if num > heap.peek()
        heap.pop()
        heap.push(num)

// at this point, the k largest items are on the heap.
// The kth largest is the root:
return heap.pop()
</code></pre>
<p>The key here is that the heap contains just the largest items seen so far. If an item is smaller than the kth largest seen so far, it's never put onto the heap. The worst case is O(n log k).</p>
<p>Actually, <code>heapq</code> has a <code>heapreplace</code> method, so you could replace this:</p>
<pre><code>if num > heap.peek()
    heap.pop()
    heap.push(num)
</code></pre>
<p>with</p>
<pre><code>if num > heap.peek()
    heap.replace(num)
</code></pre>
<p>Also, an alternative to pushing the first <code>k</code> items is to create a list of the first <code>k</code> items and call <code>heapify</code>. A more optimized (but still O(n log k)) algorithm is:</p>
<pre><code>create array of first `k` items
heap = heapify(array)
for remaining nums
    if (num > heap.peek())
        heap.replace(num)
return heap.pop()
</code></pre>
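<p>A possible Python translation of that pseudocode (my wording; the function name is mine):</p>

```python
import heapq

def find_kth_largest(nums, k):
    heap = nums[:k]                      # array of the first k items
    heapq.heapify(heap)                  # O(k)
    for num in nums[k:]:                 # remaining n - k items
        if num > heap[0]:                # heap[0] plays the role of heap.peek()
            heapq.heapreplace(heap, num) # pop-and-push in O(log k)
    return heap[0]                       # the kth largest is the heap's root

print(find_kth_largest([3, 2, 1, 5, 6, 4], 2))   # 5
```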
<p>You could also call <code>heapify</code> on the entire array, then pop the first <code>n-k</code> items, and then take the top:</p>
<pre><code>heapify(nums)
for i = 0 to n-k
    heapq.heappop(nums)
return heapq.heappop(nums)
</code></pre>
<p>That's simpler. Not sure if it's faster than my previous suggestion, but it modifies the original array. The complexity is O(n) to build the heap, then O((n-k) log n) for the pops, so it's O(n + (n-k) log n). Worst case O(n log n).</p>
| 2 | 2016-08-08T15:29:38Z | [
"python",
"heap"
] |
Get headlines from web archive | 38,806,208 | <p>I am trying to get headlines from <code>www.bbc.co.uk/news</code>. The code I have works fine and it is as below:</p>
<pre><code>from bs4 import BeautifulSoup, SoupStrainer
import urllib2
import re
opener = urllib2.build_opener()
url = 'http://www.bbc.co.uk/news'
soup = BeautifulSoup(opener.open(url), "lxml")
titleTag = soup.html.head.title
print(titleTag.string)
titles = soup.find_all('span', {'class' : 'title-link__title-text'})
headlines = [t.text for t in titles]
print(headlines)
</code></pre>
<p>But I would like to build a dataset from a given date, let's say 1st April 2016. However, the headlines keep changing during the day, and the BBC does not keep the history.</p>
<p>So I thought to get it from <code>web archive</code>. For example, I would like to get headlines from this <a href="http://web.archive.org/web/20160203074646/http://www.bbc.co.uk/news" rel="nofollow">url</a> (<code>http://web.archive.org/web/20160203074646/http://www.bbc.co.uk/news</code>) for the timestamp <code>20160203074646</code>.</p>
<p>When I paste the url in my code, the output contains the headlines.</p>
<p><strong>EDIT</strong></p>
<p>But how do I automate this process for all the timestamps?</p>
| 0 | 2016-08-06T16:07:12Z | 38,817,485 | <p>To see all snapshots for a given URL, replace the timestamp with an asterisk:</p>
<blockquote>
<p><a href="http://web.archive.org/web/*/http://www.bbc.co.uk" rel="nofollow">http://web.archive.org/web/*/http://www.bbc.co.uk</a></p>
</blockquote>
<p>then screen scrape that.</p>
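<p>Once you have collected the snapshot timestamps from that listing, building each archive URL is plain string formatting, following the URL pattern in the question (the timestamps below are illustrative):</p>

```python
# Wayback URL pattern: /web/<14-digit timestamp>/<original url>
timestamps = ['20160203074646', '20160401000000']
urls = ['http://web.archive.org/web/%s/http://www.bbc.co.uk/news' % ts
        for ts in timestamps]
for url in urls:
    print(url)
```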
<p>A few things to consider: </p>
<ul>
<li>The <a href="https://archive.org/help/wayback_api.php" rel="nofollow">Wayback API</a> will give you the nearest single snapshot to a given timestamp. You seem like you want all available snapshots, which is why I suggested screen scraping.</li>
<li>The BBC might change headlines faster than the Wayback Machine can snapshot them. </li>
<li>The BBC provides <a href="http://news.bbc.co.uk/2/hi/help/rss/default.stm" rel="nofollow">RSS feeds</a> which can be <a href="http://pythonhosted.org/feedparser/" rel="nofollow">parsed</a> more reliably. There is a listing under "Choose a Feed". </li>
</ul>
<p>EDIT: have a look at the <code>feedparser</code> <a href="http://pythonhosted.org/feedparser/common-rss-elements.html" rel="nofollow">docs</a></p>
<pre><code>import feedparser
d = feedparser.parse('http://feeds.bbci.co.uk/news/rss.xml?edition=uk')
d.entries[0]
</code></pre>
<p>Output </p>
<pre><code>{'guidislink': False,
'href': u'',
'id': u'http://www.bbc.co.uk/news/world-europe-37003819',
'link': u'http://www.bbc.co.uk/news/world-europe-37003819',
'links': [{'href': u'http://www.bbc.co.uk/news/world-europe-37003819',
'rel': u'alternate',
'type': u'text/html'}],
'media_thumbnail': [{'height': u'432',
'url': u'http://c.files.bbci.co.uk/12A34/production/_90704367_mediaitem90704366.jpg',
'width': u'768'}],
'published': u'Sun, 07 Aug 2016 21:24:36 GMT',
'published_parsed': time.struct_time(tm_year=2016, tm_mon=8, tm_mday=7, tm_hour=21, tm_min=24, tm_sec=36, tm_wday=6, tm_yday=220, tm_isdst=0),
'summary': u"Turkey's President Erdogan tells a huge rally in Istanbul that he would approve the return of the death penalty if it was backed by parliament and the public.",
'summary_detail': {'base': u'http://feeds.bbci.co.uk/news/rss.xml?edition=uk',
'language': None,
'type': u'text/html',
'value': u"Turkey's President Erdogan tells a huge rally in Istanbul that he would approve the return of the death penalty if it was backed by parliament and the public."},
'title': u'Turkey death penalty: Erdogan backs return at Istanbul rally',
'title_detail': {'base': u'http://feeds.bbci.co.uk/news/rss.xml?edition=uk',
'language': None,
'type': u'text/plain',
'value': u'Turkey death penalty: Erdogan backs return at Istanbul rally'}}
</code></pre>
| 1 | 2016-08-07T18:57:49Z | [
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
Convert lines of a File into CSV | 38,806,244 | <p>This is an algorithm to get all rows, that begin with "BO_ " in a text-file....</p>
<pre><code>with open("FILE.txt") as f:
    for line in f:
        if line.startswith('BO_ '):
            array += line

print(array)
</code></pre>
<p>this code gives me the following result:</p>
<pre><code>BO_ 1
BO_ 2
BO_ 3
BO_ 4
BO_ 5
BO_ 6
....
</code></pre>
<p>Now.... is it possible to convert this into a <code>csv</code> format like this:</p>
<pre><code>string=['BO_1','BO_2','BO_3',...]
</code></pre>
<p>I tried already the <code>csv</code> module, but wasn't able to manage it....</p>
| 0 | 2016-08-06T16:11:17Z | 38,806,282 | <p>Well, the format you described isn't really CSV format per se, but:</p>
<pre><code>with open("FILE.txt") as f:
    bo_lines = [line for line in f if line.startswith('BO_')]
</code></pre>
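<p>If an actual <code>.csv</code> file is the goal, the <code>csv</code> module can write the collected values as a single row (a sketch; the sample lines are made up, and <code>buf</code> could just as well be an opened output file):</p>

```python
import csv
import io

bo_lines = ['BO_ 1\n', 'BO_ 2\n', 'BO_ 3\n']   # as collected above

buf = io.StringIO()                            # stand-in for open('out.csv', 'w')
csv.writer(buf).writerow(line.strip() for line in bo_lines)
print(buf.getvalue())                          # BO_ 1,BO_ 2,BO_ 3
```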
| 0 | 2016-08-06T16:16:35Z | [
"python",
"csv"
] |
Convert lines of a File into CSV | 38,806,244 | <p>This is an algorithm to get all rows, that begin with "BO_ " in a text-file....</p>
<pre><code>with open("FILE.txt") as f:
    for line in f:
        if line.startswith('BO_ '):
            array += line

print(array)
</code></pre>
<p>this code gives me the following result:</p>
<pre><code>BO_ 1
BO_ 2
BO_ 3
BO_ 4
BO_ 5
BO_ 6
....
</code></pre>
<p>Now.... is it possible to convert this into a <code>csv</code> format like this:</p>
<pre><code>string=['BO_1','BO_2','BO_3',...]
</code></pre>
<p>I tried already the <code>csv</code> module, but wasn't able to manage it....</p>
| 0 | 2016-08-06T16:11:17Z | 38,806,332 | <p>You can use the join function like so: </p>
<pre><code>string = ','.join(array.split('\n'))
</code></pre>
<p>this will give you:</p>
<pre><code>"BO_1,BO_2,BO_3,..."
</code></pre>
<p>This could then be saved as a .csv. From the way your question is phrased though, as others have pointed out, you might not be looking for a csv.</p>
<blockquote>
<p><strong>Edit:</strong> as code apprentice pointed out, you might want to use .append() instead to create an array of all the lines of the file. Currently you are concatenating every line onto <strong>"array"</strong> which is really a string, not an array.</p>
</blockquote>
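<p>Putting that together, a small sketch of the append-then-join version (the sample lines stand in for the file's contents):</p>

```python
# array is a real list here, not a string being concatenated.
array = []
for line in ['BO_ 1\n', 'other\n', 'BO_ 2\n']:
    if line.startswith('BO_ '):
        array.append(line.strip())

string = ','.join(array)
print(string)   # BO_ 1,BO_ 2
```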
| 0 | 2016-08-06T16:22:52Z | [
"python",
"csv"
] |
Using numpy.fromfile to read scattered binary data | 38,806,285 | <p>There are different blocks in a binary that I want to read using a single call of <code>numpy.fromfile</code>. Each block has the following format:</p>
<pre><code>OES=[
('EKEY','i4',1),
('FD1','f4',1),
('EX1','f4',1),
('EY1','f4',1),
('EXY1','f4',1),
('EA1','f4',1),
('EMJRP1','f4',1),
('EMNRP1','f4',1),
('EMAX1','f4',1),
('FD2','f4',1),
('EX2','f4',1),
('EY2','f4',1),
('EXY2','f4',1),
('EA2','f4',1),
('EMJRP2','f4',1),
('EMNRP2','f4',1),
('EMAX2','f4',1)]
</code></pre>
<p>Here is the format of the binary:</p>
<pre><code> Data I want (OES format repeating n times)
------------------------
Useless Data
------------------------
Data I want (OES format repeating m times)
------------------------
etc..
</code></pre>
<p>I know the byte increment between the data I want and the useless data. I also know the size of each data block I want.</p>
<p>So far, I have accomplished my goal by seeking on the file object <code>f</code> and then calling:</p>
<pre><code>nparr = np.fromfile(f,dtype=OES,count=size)
</code></pre>
<p>So I have a different <code>nparr</code> for each data block I want and concatenated all the <code>numpy</code> arrays into one new array. </p>
<p>My goal is to have a single array with all the blocks i want without concatenating (for memory purposes). That is, I want to call <code>nparr = np.fromfile(f,dtype=OES)</code> only once. Is there a way to accomplish this goal?</p>
| 0 | 2016-08-06T16:16:47Z | 38,806,512 | <blockquote>
<p>That is, I want to call <code>nparr = np.fromfile(f,dtype=OES)</code> only once. Is there a way to accomplish this goal?</p>
</blockquote>
<p>No, not with a single call to <code>fromfile()</code>.</p>
<p>But if you know the complete layout of the file in advance, you can preallocate the array, and then use <code>fromfile</code> and <code>seek</code> to read the OES blocks directly into the preallocated array. Suppose, for example, that you know the file positions of each OES block, and you know the number of records in each block. That is, you know:</p>
<pre><code>file_positions = [position1, position2, ...]
numrecords = [n1, n2, ...]
</code></pre>
<p>Then you could do something like this (assuming <code>f</code> is the already opened file):</p>
<pre><code>total = sum(numrecords)
nparr = np.empty(total, dtype=OES)
current_index = 0
for pos, n in zip(file_positions, numrecords):
    f.seek(pos)
    nparr[current_index:current_index+n] = np.fromfile(f, count=n, dtype=OES)
    current_index += n
</code></pre>
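<p>A self-contained toy run of the same pattern (a 2-field stand-in for the full OES dtype; the file layout is invented for the demo):</p>

```python
import os
import tempfile
import numpy as np

dt = np.dtype([('EKEY', '<i4'), ('FD1', '<f4')])

block1 = np.array([(1, 1.5), (2, 2.5)], dtype=dt)
junk = np.zeros(3, dtype='<i4')                 # "useless data" between blocks
block2 = np.array([(3, 3.5)], dtype=dt)

path = os.path.join(tempfile.mkdtemp(), 'demo.bin')
with open(path, 'wb') as f:
    block1.tofile(f)
    junk.tofile(f)
    block2.tofile(f)

file_positions = [0, block1.nbytes + junk.nbytes]
numrecords = [2, 1]

nparr = np.empty(sum(numrecords), dtype=dt)
current_index = 0
with open(path, 'rb') as f:
    for pos, n in zip(file_positions, numrecords):
        f.seek(pos)
        nparr[current_index:current_index + n] = np.fromfile(f, count=n, dtype=dt)
        current_index += n

print(nparr['EKEY'])   # [1 2 3]
```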
| 2 | 2016-08-06T16:41:17Z | [
"python",
"numpy",
"binary",
"records"
] |
Python Package ImportError in Windows 10 | 38,806,288 | <p>I am having difficulty installing Python packages on Windows 10. The package name is Tabular. I have been trying over and over, and it doesn't work out. Here is what I get when I try to install it using pip. Any help with it? Thanks.</p>
<pre>
C:\Python27\Scripts>pip install tabular
Collecting tabular
Using cached tabular-0.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "c:\users\pc\appdata\local\temp\pip-build-5mggv5\tabular\setup.py", line 50, in
raise ImportError("distribute was not found and fallback to setuptools was not allowed")
ImportError: distribute was not found and fallback to setuptools was not allowed
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\pc\appdata\local\temp\pip-build-5mggv5\tabular\
C:\Python27\Scripts>pip install distribute
Requirement already satisfied (use --upgrade to upgrade): distribute in c:\python27\lib\site-packages
Requirement already satisfied (use --upgrade to upgrade): setuptools>=0.7 in c:\python27\lib\site-packages (from distribute)
C:\Python27\Scripts>pip install --upgrade distribute
Requirement already up-to-date: distribute in c:\python27\lib\site-packages
Collecting setuptools>=0.7 (from distribute)
Downloading setuptools-25.1.6-py2.py3-none-any.whl (442kB)
100% |################################| 450kB 191kB/s
Installing collected packages: setuptools
Found existing installation: setuptools 25.1.1
Uninstalling setuptools-25.1.1:
Successfully uninstalled setuptools-25.1.1
Successfully installed setuptools-25.1.6
</pre>
| 1 | 2016-08-06T16:17:05Z | 38,807,583 | <p>Tabular 0.1 has issues with Windows 10. Please fall back to 0.0.8</p>
<pre><code>pip install tabular==0.0.8
</code></pre>
<p><strong>Edit</strong></p>
<p>For scipy installation on Windows 10 with python 2.7, instructions are at <a href="http://stackoverflow.com/a/38618044/5334188">http://stackoverflow.com/a/38618044/5334188</a></p>
| 0 | 2016-08-06T18:37:05Z | [
"python"
] |
Itertools.product raises "Error in argument" | 38,806,295 | <p>I am a bit lost here:</p>
<p>I cannot use <code>itertools.product</code> in my code. This happens at a breakpoint in a unittest <code>setUp</code> method:</p>
<pre><code>ipdb> import itertools
ipdb> itertools
<module 'itertools' (built-in)>
ipdb> itertools.product
<class 'itertools.product'>
ipdb> list(itertools.product([2,7], [1,4]))
*** Error in argument: '(itertools.product([2,7], [1,4]))'
</code></pre>
<p>I am pretty sure that I'm not doing anything weird with the module itself, since this is in my codebase (no uncommitted changes there):</p>
<pre><code>$ git grep itertools
simple_wbd/climate.py:import itertools
</code></pre>
<p>If I try this in the IPython interpreter, it works fine.</p>
<pre><code>In [1]: import itertools
In [2]: list(itertools.product([2,7], [1,4]))
Out[2]: [(2, 1), (2, 4), (7, 1), (7, 4)]
</code></pre>
<p>I don't even know how to debug this. Any help would be nice.</p>
<p>Thank you.</p>
| 1 | 2016-08-06T16:18:05Z | 38,806,337 | <p>In this debugger, <code>list</code> is a command. For access to the builtin name you were intending, prepend an exclam:</p>
<pre><code>ipdb> list(itertools.product([2,7], [1,4]))
*** Error in argument: '(itertools.product([2,7], [1,4]))'
ipdb> !list(itertools.product([2,7], [1,4]))
[(2, 1), (2, 4), (7, 1), (7, 4)]
</code></pre>
<p>This should not be an issue in the code itself, only within the debugger.</p>
| 5 | 2016-08-06T16:23:32Z | [
"python",
"python-3.x",
"ipython",
"itertools",
"ipdb"
] |
Web Scraping - No content displayed | 38,806,398 | <p>I am trying to fetch the stock of a company specified by a user by taking the input. I am using requests to get the source code and BeautifulSoup to scrape. I am fetching the data from <em>google.com</em>. I am trying to fetch only the last stock price (806.93 in the picture). When I run my script, it prints <code>None</code>. None of the data is being fetched. What am I missing?</p>
<p><a href="http://i.stack.imgur.com/lvLhH.png" rel="nofollow"><img src="http://i.stack.imgur.com/lvLhH.png" alt="enter image description here"></a></p>
<pre><code># -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests
company = raw_input("Enter the company name:")
URL = "https://www.google.co.in/?gfe_rd=cr&ei=-AKmV6eqC-LH8AfRqb_4Aw#newwindow=1&safe=off&q="+company+"+stock"
request = requests.get(URL)
soup = BeautifulSoup(request.content,"lxml")
code = soup.find('span',{'class':'_Rnb fmob_pr fac-l','data-symbol':'GOOGL'})
print code.contents[0]
</code></pre>
<p>The source code of the page looks like this : </p>
<p><a href="http://i.stack.imgur.com/nUOsZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/nUOsZ.png" alt="The source code "></a></p>
| 1 | 2016-08-06T16:29:46Z | 38,807,846 | <p>Looks like that source is from inspecting the element, not the actual source. A couple of suggestions. Use google finance to get rid of some noise - <strong><a href="https://www.google.com/finance?q=googl" rel="nofollow">https://www.google.com/finance?q=googl</a></strong> would be the URL. On that page there is a section that looks like this:</p>
<pre><code><div class=g-unit>
<div id=market-data-div class="id-market-data-div nwp g-floatfix">
<div id=price-panel class="id-price-panel goog-inline-block">
<div>
<span class="pr">
<span id="ref_694653_l">806.93</span>
</span>
<div class="id-price-change nwp">
<span class="ch bld"><span class="chg" id="ref_694653_c">+9.68</span>
<span class="chg" id="ref_694653_cp">(1.21%)</span>
</span>
</div>
</div>
</code></pre>
<p>You should be able to pull the number out of that.</p>
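<p>For instance, with BeautifulSoup on that fragment, trimmed to the price span (keep in mind the surrounding markup may change at any time):</p>

```python
from bs4 import BeautifulSoup

# The relevant piece of the finance-page source quoted above.
html = '''
<span class="pr">
<span id="ref_694653_l">806.93</span>
</span>
'''
soup = BeautifulSoup(html, 'html.parser')
price = soup.find('span', {'class': 'pr'}).find('span').text
print(price)   # 806.93
```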
| 1 | 2016-08-06T19:09:24Z | [
"python",
"web-scraping",
"beautifulsoup",
"python-requests"
] |
Web Scraping - No content displayed | 38,806,398 | <p>I am trying to fetch the stock of a company specified by a user by taking the input. I am using requests to get the source code and BeautifulSoup to scrape. I am fetching the data from <em>google.com</em>. I am trying to fetch only the last stock price (806.93 in the picture). When I run my script, it prints <code>None</code>. None of the data is being fetched. What am I missing?</p>
<p><a href="http://i.stack.imgur.com/lvLhH.png" rel="nofollow"><img src="http://i.stack.imgur.com/lvLhH.png" alt="enter image description here"></a></p>
<pre><code># -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests
company = raw_input("Enter the company name:")
URL = "https://www.google.co.in/?gfe_rd=cr&ei=-AKmV6eqC-LH8AfRqb_4Aw#newwindow=1&safe=off&q="+company+"+stock"
request = requests.get(URL)
soup = BeautifulSoup(request.content,"lxml")
code = soup.find('span',{'class':'_Rnb fmob_pr fac-l','data-symbol':'GOOGL'})
print code.contents[0]
</code></pre>
<p>The source code of the page looks like this : </p>
<p><a href="http://i.stack.imgur.com/nUOsZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/nUOsZ.png" alt="The source code "></a></p>
| 1 | 2016-08-06T16:29:46Z | 38,808,106 | <p>I went to
<a href="https://www.google.com/?gfe_rd=cr&ei=-AKmV6eqC-LH8AfRqb_4Aw#newwindow=1&safe=off&q=+google+stock" rel="nofollow">https://www.google.com/?gfe_rd=cr&ei=-AKmV6eqC-LH8AfRqb_4Aw#newwindow=1&safe=off&q=+google+stock</a>
, did a right-click, chose "View Page Source", but did not see the code that you screenshotted.</p>
<p>Then I typed out a section of your code screenshot and created a BeautifulSoup object with it and then ran your find on it:</p>
<pre><code>test_screenshot = BeautifulSoup('<div class="_F0c" data-tmid="/m/07zln7n"><span class="_Rnb fmob_pr fac-l" data-symbol="GOOGL" data-tmid="/m/07zln7n" data-value="806.93">806.93.</span> = $0<span class ="_hgj">USD</span>')
test_screenshot.find('span',{'class':'_Rnb fmob_pr fac-l','data-symbol':'GOOGL'})
</code></pre>
<p>Which will output what you want:
<code><span class="_Rnb fmob_pr fac-l" data-symbol="GOOGL" data-tmid="/m/07zln7n" data-value="806.93">806.93.</span></code></p>
<p>This means that the code you are getting is not the code you expect to get. </p>
<p>I suggest using the google finance page:
<code>https://www.google.com/finance?q=google</code> (replace 'google' with what you want to search), which will give you what you are looking for:</p>
<pre><code>request = requests.get(URL)
soup = BeautifulSoup(request.content,"lxml")
code = soup.find("span",{'class':'pr'})
print code.contents
</code></pre>
<p>Will give you
<code>[u'\n', <span id="ref_694653_l">806.93</span>, u'\n']</code>.</p>
<p>In general, scraping Google search results can get really nasty, so try to avoid it if you can.</p>
<p>You might also want to look into <a href="https://pypi.python.org/pypi/yahoo-finance/1.1.4" rel="nofollow">Yahoo Finance Python API</a>.</p>
| 1 | 2016-08-06T19:40:38Z | [
"python",
"web-scraping",
"beautifulsoup",
"python-requests"
] |
django 404 when opening static folder | 38,806,439 | <p>I have created a /static/ folder in my project's root and changed the settings thusly:</p>
<pre><code>STATIC_URL = '/static/'
STATICFILES_DIR = [os.path.join(BASE_DIR, "static")]
</code></pre>
<p>But when I open my localhost/static, it produces a 404 error. Why is that?</p>
| 0 | 2016-08-06T16:34:08Z | 38,806,469 | <p>You have to run <code>manage.py collectstatic</code> first. </p>
<p>Note: It's also possible that going to <code>/static</code> is the cause (missing the trailing slash) though django should redirect. </p>
| 0 | 2016-08-06T16:37:15Z | [
"python",
"django"
] |
django 404 when opening static folder | 38,806,439 | <p>I have created a /static/ folder in my project's root and changed the settings thusly:</p>
<pre><code>STATIC_URL = '/static/'
STATICFILES_DIR = [os.path.join(BASE_DIR, "static")]
</code></pre>
<p>But when I open my localhost/static, it produces a 404 error. Why is that?</p>
| 0 | 2016-08-06T16:34:08Z | 38,806,476 | <p>Are you aiming at a particular file? If not, then there's your problem. If you are aiming at a file, are you sure it's in the correct directory?</p>
| 1 | 2016-08-06T16:37:56Z | [
"python",
"django"
] |
django 404 when opening static folder | 38,806,439 | <p>I have created a /static/ folder in my project's root and changed the settings thusly:</p>
<pre><code>STATIC_URL = '/static/'
STATICFILES_DIR = [os.path.join(BASE_DIR, "static")]
</code></pre>
<p>But when I open my localhost/static, it produces a 404 error. Why is that?</p>
| 0 | 2016-08-06T16:34:08Z | 38,806,707 | <p>For Django 1.3+:</p>
<pre><code>STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static")
]

STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
</code></pre>
<p>For apache:</p>
<pre><code>Alias /static /var/my/site/static

<Directory /var/my/site/static>
    Require all granted
</Directory>
</code></pre>
| 2 | 2016-08-06T17:00:37Z | [
"python",
"django"
] |
How to install gnu gettext (>0.15) on windows? So I can produce .po/.mo files in Django | 38,806,553 | <p>When running Django's <code>makemessages</code>:</p>
<pre><code>./manage.py makemessages -l pt
</code></pre>
<p>I get:</p>
<pre><code>CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.15 or newer installed.
</code></pre>
<p>I tried to install but the last version I find with an Instalation Setup is 0.14. Where may I find a recent version and how do I install it?</p>
| 0 | 2016-08-06T16:44:51Z | 38,806,554 | <p>Django removed this explanation from the recent docs, and it took me some time to find it, so I pasted it here before this old documentation goes offline:</p>
<p>Source: <a href="https://docs.djangoproject.com/en/1.7/topics/i18n/translation/#gettext-on-windows" rel="nofollow">Django Docs 1.7</a></p>
<p>Download the following zip files from the <a href="http://ftp.gnome.org/pub/gnome/binaries/win32/dependencies/" rel="nofollow">GNOME servers</a></p>
<ul>
<li>gettext-runtime-X.zip</li>
<li>gettext-tools-X.zip</li>
</ul>
<blockquote>
<p>X is the version number (It needs to be 0.15 or higher)</p>
</blockquote>
<p>Extract the contents of the <code>bin\</code> directories in both files to the same folder on your system (i.e. <code>C:\Program Files\gettext-utils</code>)</p>
<p><strong>Update the system PATH:</strong></p>
<p><code>Control Panel > System > Advanced > Environment Variables</code></p>
<p>In the System variables list, click Path, click Edit and then New.
Add <code>C:\Program Files\gettext-utils\bin</code> value.</p>
<blockquote>
<p>You may also use gettext binaries you have obtained elsewhere, so long as the xgettext --version command works properly. Do not attempt to use Django translation utilities with a gettext package if the command xgettext --version entered at a Windows command prompt causes a popup window saying âxgettext.exe has generated errors and will be closed by Windowsâ.</p>
</blockquote>
<p>After doing this I tested and <code>./manage.py makemessages -l pt</code> works</p>
| 0 | 2016-08-06T16:44:51Z | [
"python",
"django",
"windows",
"gnu",
"gettext"
] |
Program error for this solution | 38,806,625 | <p>This is what is available right now</p>
<pre><code>l = [list[0]]
list.pop(0)
x = len(list)
for i in range(x+1):
    n = len(l)
    j = 0
    if list[j] > l[n-1]:
        l.append(list[j])
        list.pop(j)
        j = j+1
    elif i == len(list):
        l.append(list[j])
        list.pop(j)
print(list)
print(l)
</code></pre>
<p>`</p>
<p>Some pointers would be really helpful!</p>
| -1 | 2016-08-06T16:52:44Z | 38,806,817 | <p>Make a <code>copy</code> of the list. Iterate through the copy in <em>successions</em> and check for monotonically increasing entries to be added to <code>result</code> in the current iteration of the <code>while</code> loop. Delete the entries as they are added to the <code>result</code>. </p>
<p>Repeat the <em>check-append-delete</em> cycle in the next iteration of the while loop; until the copy of the list is empty:</p>
<pre><code>lst = [1, 2, 2, 2, 11, 5, 9, 8, 19]
lst_cp = lst.copy()
result = []

while lst_cp:  # keep iterating until list is empty
    result.append(lst_cp.pop(0))
    for v in lst_cp[:]:
        if v > result[-1]:
            result.append(v)
            lst_cp.remove(v)
print(result)
# [1, 2, 11, 19, 2, 5, 9, 2, 8]
</code></pre>
| 1 | 2016-08-06T17:13:14Z | [
"python"
] |
Importing from a Package in IDLE vs Shell | 38,806,673 | <p>Importing a whole package works in IDLE, but not in shell. The following works fine in IDLE:</p>
<pre><code>import tkinter as tk
tk.filedialog.askopenfilename()
</code></pre>
<p>In shell, I get this error:</p>
<pre><code>AttributeError: 'module' object has no attribute 'filedialog'
</code></pre>
<p>I understand that I have to <code>import tkinter.filedialog</code> to make this work in shell.</p>
<p>Why the difference between IDLE and shell? How can I make IDLE act like shell? It can be frustrating to have a script working in IDLE, and failing in shell.</p>
<p>I am using Python 3.4.</p>
| 1 | 2016-08-06T16:57:23Z | 38,808,437 | <p>This is an IDLE bug which I fixed for future 3.5.3 and 3.6.0a4 releases. <a href="https://bugs.python.org/issue25507" rel="nofollow">Tracker issue.</a></p>
<p>For an existing 3.5 or 3.4 release, add the following to idlelib/run.py just before the LOCALHOST line.</p>
<pre><code>for mod in ('simpledialog', 'messagebox', 'font',
            'dialog', 'filedialog', 'commondialog',
            'colorchooser'):
    delattr(tkinter, mod)
    del sys.modules['tkinter.' + mod]
</code></pre>
<p>I presume that this will work with earlier 3.x releases, but do not have them installed to test. For existing 3.6.0a_ releases, replace 'colorchooser' with 'ttk'.</p>
| 1 | 2016-08-06T20:28:49Z | [
"python",
"python-import",
"python-idle",
"python-packaging"
] |
"Method Not Allowed The method is not allowed for the requested URL." | 38,806,699 | <p>I have read through the related posts to this question, but haven't found an answer that fixes my problem or seems to match my case (apologies if I missed it; I looked through about 10 posts).</p>
<p>Writing a search page that looks for entries in a DB. Initially I wrote this as two separate functions. One to display the search box, and the second to do the actual search and return the results. This works fine, but I'm trying to make this more "user friendly" by keeping the search box at the top of the page and just returning results if there are any.</p>
<p>Seems to be a simple thing to do, but not working.</p>
<p>Python Code in views.app</p>
<pre><code>@app.route('/search', methods=['POST'])
def SearchForm():
    if request.method == "POST":
        output = []
        searchterm = request.form['lookingfor']
        whichName = request.form['name']
        if searchterm:
            conn = openDB()
            results = findClient(conn, searchterm, whichName)
            for r in results:
                output.append({'id': r[0], 'fname': r[1], 'lname': r[2], 'phonen': r[3], 'email': r[4], 'started': r[5],
                               'opt': r[6], 'signup': r[7], 'enddate': findEndDate(r[7], r[5])})
            closeDB(conn)
            if output:
                message = "Record(s) Found"
            else:
                message = "Nothing found, sorry."
            return render_template('search.html', message=message, output=output)
        else:
            output = []
            message = "Please enter a name in the search box"
            return render_template('search.html', message=message, output=output)
    else:
        return render_template('search.html')
</code></pre>
<p>HTML for search.html</p>
<pre><code>{% extends "baseadmin.html" %}
{% block content %}
<div>
<form action="{{url_for('search')}}" method="post">
<p>Search for a Client: <input type="text" name="lookingfor"/></p>
<input type="radio" name="name" value="fname" id="fname"><label for="fname">First Name</label>
<input type="radio" name="name" value="lname" id="lname"><label for="lname">Last Name</label>
<input type="submit" value="submit"/>
</form>
</div>
<h2>{{ message }}</h2>
<div>
<table>
<tr>
<th>Name</th>
<th>Email Address</th>
<th>Phone Number</th>
<th>Trial Method</th>
<th>Start Date</th>
<th>EndDate</th>
</tr>
{% for client in output %}
<tr>
<td>{{ client['fname'] }} {{ client['lname'] }}</td>
<td>{{ client['email'] }}</td>
<td>{{ client['phonen'] }}</td>
<td>{{ client['started'] }}</td>
<td>{{ client['signup'] }}</td>
<td>{{ client['enddate'] }}</td>
</tr>
{% endfor %}
</table>
</div>
{% endblock %}
</code></pre>
| -1 | 2016-08-06T16:59:39Z | 38,807,552 | <p>As @dirn already mentioned in his <a href="http://stackoverflow.com/questions/38806699/method-not-allowed-the-method-is-not-allowed-for-the-requested-url#comment64981818_38806699">comment</a>, the <code>methods=['POST']</code> in <code>@app.route('/search', methods=['POST'])</code> means the function <code>SearchForm</code> and the URL <code>'/search'</code> will only accept POST requests. If someone tries to access the page using simply the URL, they would be doing so with a GET request.</p>
<p>Changing the line to <code>@app.route('/search', methods=['GET', 'POST'])</code> should fix the error. </p>
<p>(Answered mainly to (1) show the complete solution and to (2) bring visibility to the <a href="http://stackoverflow.com/questions/38806699/method-not-allowed-the-method-is-not-allowed-for-the-requested-url#comment64981818_38806699">solution provided by @dirn</a>.)</p>
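To see the mechanism outside of Flask, here is a toy, Flask-free sketch (my own stand-in router, not Flask's actual implementation): typing a URL in the browser issues a GET request, so a path registered only for POST answers it with 405, while registering both methods lets the plain visit render the form.

```python
# Toy stand-in for method-restricted routing; names '/search' and
# 'search' mirror the question, everything else is illustrative only.
routes = {}

def route(path, methods):
    def decorator(view):
        routes[path] = (set(methods), view)
        return view
    return decorator

def dispatch(path, method):
    allowed, view = routes[path]
    if method not in allowed:
        # This is the situation behind the original 405 error.
        return 405, 'Method Not Allowed'
    return 200, view(method)

@route('/search', methods=['GET', 'POST'])
def search(method):
    # Mirrors the fixed view: POST performs the lookup, GET shows the form.
    return 'results' if method == 'POST' else 'form'
```

With only `['POST']` registered, `dispatch('/search', 'GET')` would return the 405; with both methods, a direct URL visit (a GET) renders the form.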
| 1 | 2016-08-06T18:34:31Z | [
"python",
"methods",
"flask",
"http-status-code-405"
] |
Make console-friendly string a useable pandas dataframe python | 38,806,750 | <p>A quick question as I'm currently changing from R to pandas for some projects:</p>
<p>I get the following print output from <code>metrics.classification_report</code> from <code>scikit-learn</code>:</p>
<pre><code>             precision    recall  f1-score   support

          0       0.67      0.67      0.67         3
          1       0.50      1.00      0.67         1
          2       1.00      0.80      0.89         5

avg / total       0.83      0.78      0.79         9
</code></pre>
<p>I want to use this (and similar ones) as a matrix/dataframe so that I can subset it to extract, say, the precision of class 0.</p>
<p>In R, I'd give the first "column" a name like 'outcome_class' and then subset it:
<code>my_dataframe[my_dataframe$class_outcome == 1, 'precision']</code></p>
<p>And I can do this in pandas, but the <code>dataframe</code> that I want to use is simply a string (<a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html" rel="nofollow">see scikit-learn's docs</a>).</p>
<p>How can I turn the table output here into a usable dataframe in pandas?</p>
| 0 | 2016-08-06T17:04:36Z | 38,806,888 | <p>Assign it to a variable, <code>s</code>:</p>
<pre><code>s = classification_report(y_true, y_pred, target_names=target_names)
</code></pre>
<p>Or directly:</p>
<pre><code>s = '''
             precision    recall  f1-score   support

    class 0       0.50      1.00      0.67         1
    class 1       0.00      0.00      0.00         1
    class 2       1.00      0.67      0.80         3

avg / total       0.70      0.60      0.61         5
'''
</code></pre>
<p>Use that as the string input for StringIO:</p>
<pre><code>import io # For Python 2.x use import StringIO
df = pd.read_table(io.StringIO(s), sep='\s{2,}') # For Python 2.x use StringIO.StringIO(s)
df
Out:
             precision  recall  f1-score  support
class 0            0.5    1.00      0.67        1
class 1            0.0    0.00      0.00        1
class 2            1.0    0.67      0.80        3
avg / total        0.7    0.60      0.61        5
</code></pre>
<p>Now you can slice it like an R data.frame:</p>
<pre><code>df.loc['class 2']['f1-score']
Out: 0.80000000000000004
</code></pre>
<p>Here, classes are the index of the DataFrame. You can use <code>reset_index()</code> if you want to use it as a regular column:</p>
<pre><code>df = df.reset_index().rename(columns={'index': 'outcome_class'})
df.loc[df['outcome_class']=='class 1', 'support']
Out:
1 1
Name: support, dtype: int64
</code></pre>
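Two side notes, not part of the original answer: newer scikit-learn releases (0.20+) can skip the string parsing entirely via <code>classification_report(..., output_dict=True)</code>, and when you do index the parsed frame, a single <code>.loc</code> call with both labels is preferable to the chained <code>df.loc[...][...]</code> form. A small sketch of the latter on a hand-built frame standing in for the parsed report:

```python
import pandas as pd

# Hand-built stand-in for the parsed classification report.
df = pd.DataFrame(
    {'precision': [0.50, 0.00, 1.00],
     'recall':    [1.00, 0.00, 0.67],
     'f1-score':  [0.67, 0.00, 0.80],
     'support':   [1, 1, 3]},
    index=['class 0', 'class 1', 'class 2'])

# One .loc call with both row and column labels: no chained indexing,
# and safe to use on the left-hand side of an assignment too.
score = df.loc['class 2', 'f1-score']
```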
| 2 | 2016-08-06T17:21:34Z | [
"python",
"pandas",
"dataframe"
] |
How to use R's assignment methods in rpy2? | 38,806,898 | <p>I'm working with rpy2, and I need to use an assignment method on an R object. For example, starting with this object:</p>
<pre><code># Python code
from rpy2.robjects import r
myvar = r('c(a=1,b=2,c=3)')
</code></pre>
<p>suppose that I want to assign to <code>names(myvar)</code>. (Note: Ignore the fact that rpy2 provides an alternate way to access names via <code>myvar.names</code>. This only works for names, not arbitrary assignment methods.) In R, I would do:</p>
<pre><code># R code
names(myvar) <- c("x", "y", "z")
</code></pre>
<p>However, this won't work in Python:</p>
<pre><code># Python code
> names(myvar) = ['x', 'y', 'z']
In [62]: names(myvar) = ['x', 'y', 'z']
File "<ipython-input-62-aa3f7998cdcb>", line 1
names(myvar) = ['x', 'y', 'z']
^
SyntaxError: can't assign to function call
</code></pre>
<p>Of course, I can run arbitrary code via rpy2's string eval:</p>
<pre><code># Python code
r('''names(myvar) <- c("x", "y", "z")''')
</code></pre>
<p>but interpolating values into a string to be evaluated doesn't sound fun or safe. So is there a way to safely do the equivalent of <code>method(object) <- value</code> through rpy2? </p>
| 0 | 2016-08-06T17:23:06Z | 38,808,519 | <p>In R, "setter" functions follow a naming convention: the "setter" name is the name of the "getter" followed by <code><-</code>. For example, when doing</p>
<pre class="lang-r prettyprint-override"><code>names(myvar) <- c("x", "y", "z")
</code></pre>
<p>the following is happening:</p>
<pre class="lang-r prettyprint-override"><code>myvar <- "names<-"(myvar, c("x","y","z"))
</code></pre>
<p>If we break it down:</p>
<pre class="lang-r prettyprint-override"><code>> myvar = c(a=1,b=2,c=3)
> # call the assignment function "names<-"
> "names<-"(myvar, c("x","y","z"))
x y z
1 2 3
> # the "names" are stored as an attribute
> attributes(myvar)
$names
[1] "x" "y" "z"
> attributes(myvar)$names <- c("a","b","c")
> myvar
a b c
1 2 3
> # note that the function does have a side effect
> # (unlike what I wrote in a previous version of this answer):
> # the names are changed in place. I think that this is a C-level
> # optimization specific to "names" and this may not always be
> # the case for all "setters"
> "names<-"(myvar, c("x","y","z"))
x y z
1 2 3
> myvar
x y z
1 2 3
</code></pre>
<p>Doing something like <code>method(object) <- value</code> from rpy2 is straightforward. The Python code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>set_method = r("`method<-`")
my_object = set_method(my_object, value)
</code></pre>
| 2 | 2016-08-06T20:40:37Z | [
"python",
"rpy2"
] |
How to use R's assignment methods in rpy2? | 38,806,898 | <p>I'm working with rpy2, and I need to use an assignment method on an R object. For example, starting with this object:</p>
<pre><code># Python code
from rpy2.robjects import r
myvar = r('c(a=1,b=2,c=3)')
</code></pre>
<p>suppose that I want to assign to <code>names(myvar)</code>. (Note: Ignore the fact that rpy2 provides an alternate way to access names via <code>myvar.names</code>. This only works for names, not arbitrary assignment methods.) In R, I would do:</p>
<pre><code># R code
names(myvar) <- c("x", "y", "z")
</code></pre>
<p>However, this won't work in Python:</p>
<pre><code># Python code
> names(myvar) = ['x', 'y', 'z']
In [62]: names(myvar) = ['x', 'y', 'z']
File "<ipython-input-62-aa3f7998cdcb>", line 1
names(myvar) = ['x', 'y', 'z']
^
SyntaxError: can't assign to function call
</code></pre>
<p>Of course, I can run arbitrary code via rpy2's string eval:</p>
<pre><code># Python code
r('''names(myvar) <- c("x", "y", "z")''')
</code></pre>
<p>but interpolating values into a string to be evaluated doesn't sound fun or safe. So is there a way to safely do the equivalent of <code>method(object) <- value</code> through rpy2? </p>
| 0 | 2016-08-06T17:23:06Z | 38,809,052 | <p>Consider importing R's base package and directly using the <code>c()</code> function; to assign names, import R's stats package and directly use the <code>setNames()</code> function. Below shows how assigning with <code>r()</code> and <code>base.c()</code> yields equivalent values:</p>
<pre><code>from rpy2.robjects import r
from rpy2.robjects.packages import importr
base = importr('base')
myvar1 = r("c('x','y','z')")
myvar2 = base.c('x', 'y', 'z')
# SAME CLASS TYPE
print(type(myvar1))
# <class 'rpy2.robjects.vectors.StrVector'>
print(type(myvar2))
# <class 'rpy2.robjects.vectors.StrVector'>
from rpy2.robjects import pandas2ri
pandas2ri.activate()
# CONVERT TO PYTHON NUMPY ARRAY
py_myvar1 = pandas2ri.ri2py(myvar1)
py_myvar2 = pandas2ri.ri2py(myvar2)
print(py_myvar1==py_myvar2)
# [ True True True]
print(py_myvar1)
# ['x' 'y' 'z']
print(py_myvar2)
# ['x' 'y' 'z']
</code></pre>
<p>And for assigning names with output vectors of names and values:</p>
<pre><code>stats = importr('stats')
# EQUIVALENT TO R: myvar <- setNames(c('a', 'b', 'c'), c(1,2,3))
myvar3 = stats.setNames(base.c(1,2,3), base.c('a', 'b', 'c'))
print(type(myvar3))
# <class 'rpy2.robjects.vectors.IntVector'>
# NAME VECTOR
py_myvar3 = pandas2ri.ri2py(base.names(myvar3))
print(py_myvar3)
# ['a' 'b' 'c']
# VALUES VECTOR
py_myvar3 = pandas2ri.ri2py(myvar3)
print(py_myvar3)
# [1 2 3]
</code></pre>
<p>Altogether, Python does not allow function calls to be assigned to. So find the appropriate method to create an object and assign the right-hand-side values to it, in line with Python's conventions.</p>
| 0 | 2016-08-06T21:55:11Z | [
"python",
"rpy2"
] |
Python __getattr__ to create attr if it does not exist and return it | 38,806,907 | <p>A little background: You'll notice my comments describe what I'll go through later. Let's say I have the following object...</p>
<pre><code>#!/usr/bin/python
import os
import sys

class ContainerField(object):
    ''' An attribute/object storage device '''
    def __init__(self, field=None, value=None):
        self.m_field = field
        self.m_value = value

    def __getattr__(self, key):
        '''
        What can we do here that runs the .get() command but -only- if the key
        does not exist.
        '''
        # super(ContainerField, self).__getattr__(key)

    def __call__(self):
        return self.get()

    def value(self):
        return self.m_value

    def setValue(self, value):
        self.m_value = value

    def _recurseSetAttr(self, attr, values):
        '''Generate our attributes/objects and store them succinctly.'''
        # Container
        #  \_Container
        #    \_Container
        #      \_Container...
        for field, value in values.items():
            if not hasattr(attr, field):
                setattr(attr,
                        field,
                        # field type is known from model caching
                        ContainerField(value=value, field=field_type(field)))
            fdbf = getattr(attr, field)
            if isinstance(value, dict):
                self._recurseSetAttr(fdbf, value)
            else:
                fdbf.setValue(value)

    def get(self):
        # Create the new object from scratch and proliferate its
        # attributes recursively. 'values' come in the form of a
        # dictionary that we can then use to setattr().
        # So... Create container, set value, find keys for this
        # and create containers that hold the values of those keys
        # and repeat...
        self._recurseSetAttr(self, attr, values)
</code></pre>
<p>Now, when generating the objects I can have a dict that looks something like this: <code>{"myContainer" : { "id" : 2, "foo" : { "id" : 3, "bar" : 1 } }}</code> that, once created, can be called like this: <code>myContainer.foo.id.value()</code></p>
<p>In the scenario there's the <code>self.m_field</code>, which tells the application what data type the object really is. This references Django models, but the same idea applies to any Python.</p>
<p>All containers will have an <code>id</code> (or <code>pk</code>) key to them as part of their instantiation. This is mandatory.</p>
<hr>
<h3>The Rub</h3>
<p>Ideally, we fill out the top-level attributes, and only when the user requests the attributes that lie underneath do we construct them, based on the <code>id</code> value and the <code>field</code> type.</p>
<p>So finally, let's say the <code>myContainer.foo.bar</code> attribute has a foreign-key field type. If we call <code>myContainer.foo.bar.newkey.value()</code>, the app should understand that the 'newkey' attribute does not exist, query against our Django instance, store the <code>bar</code> attribute as the now more fully filled-out Container, and return the <code>newkey</code> value that has been put in memory.</p>
<h3>The Python Pitfall</h3>
<p>I'd hoped it would be a simple <code>hasattr()</code>, but Python seems to just use <code>getattr()</code> with a default of <code>None</code> (the recursion is real!). I've also had loads of trouble getting a <code>try: except:</code> to work.</p>
<p>As I write this I'm realizing how much more complicated it may be, due to the recursive attribute setting relying on <code>getattr()</code> and <code>hasattr()</code>. Any suggestions would be greatly appreciated. - Cheers</p>
| 0 | 2016-08-06T17:24:08Z | 38,807,023 | <p>You could consider using the @property decorator with private internal fields. The idea would be something like:</p>
<pre><code>class ContainerField(object):
    def __init__(self, field=None, value=None):
        self._m_field = field
        self._m_value = value

    @property
    def m_field(self):
        if self._m_field is None:
            self._m_field = self.function_to_populate_m_field()
        return self._m_field

    @property
    def m_value(self):
        if self._m_value is None:
            self._m_value = self.function_to_populate_m_value()
        return self._m_value

    ...
</code></pre>
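For completeness, a runnable, self-contained version of the same lazy pattern; <code>_populate_value</code> here is a hypothetical stand-in for the question's Django lookup, not a real API:

```python
class ContainerField(object):
    def __init__(self, field=None, value=None):
        self._m_field = field
        self._m_value = value
        self.lookups = 0  # counts how often the expensive path runs

    def _populate_value(self):
        # Hypothetical stand-in for the question's database query.
        self.lookups += 1
        return 'fetched'

    @property
    def m_value(self):
        # First access triggers the lookup; later accesses hit the cache.
        if self._m_value is None:
            self._m_value = self._populate_value()
        return self._m_value

cf = ContainerField()
first = cf.m_value   # runs _populate_value once
second = cf.m_value  # served from the cached self._m_value
```

The expensive path runs exactly once, which is the point of guarding on the private field before populating it.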
| 1 | 2016-08-06T17:36:30Z | [
"python",
"django"
] |
Python __getattr__ to create attr if it does not exist and return it | 38,806,907 | <p>A little background: You'll notice my comments describe what I'll go through later. Let's say I have the following object...</p>
<pre><code>#!/usr/bin/python
import os
import sys

class ContainerField(object):
    ''' An attribute/object storage device '''
    def __init__(self, field=None, value=None):
        self.m_field = field
        self.m_value = value

    def __getattr__(self, key):
        '''
        What can we do here that runs the .get() command but -only- if the key
        does not exist.
        '''
        # super(ContainerField, self).__getattr__(key)

    def __call__(self):
        return self.get()

    def value(self):
        return self.m_value

    def setValue(self, value):
        self.m_value = value

    def _recurseSetAttr(self, attr, values):
        '''Generate our attributes/objects and store them succinctly.'''
        # Container
        #  \_Container
        #    \_Container
        #      \_Container...
        for field, value in values.items():
            if not hasattr(attr, field):
                setattr(attr,
                        field,
                        # field type is known from model caching
                        ContainerField(value=value, field=field_type(field)))
            fdbf = getattr(attr, field)
            if isinstance(value, dict):
                self._recurseSetAttr(fdbf, value)
            else:
                fdbf.setValue(value)

    def get(self):
        # Create the new object from scratch and proliferate its
        # attributes recursively. 'values' come in the form of a
        # dictionary that we can then use to setattr().
        # So... Create container, set value, find keys for this
        # and create containers that hold the values of those keys
        # and repeat...
        self._recurseSetAttr(self, attr, values)
</code></pre>
<p>Now, when generating the objects I can have a dict that looks something like this: <code>{"myContainer" : { "id" : 2, "foo" : { "id" : 3, "bar" : 1 } }}</code> that, once created, can be called like this: <code>myContainer.foo.id.value()</code></p>
<p>In the scenario there's the <code>self.m_field</code>, which tells the application what data type the object really is. This references Django models, but the same idea applies to any Python.</p>
<p>All containers will have an <code>id</code> (or <code>pk</code>) key to them as part of their instantiation. This is mandatory.</p>
<hr>
<h3>The Rub</h3>
<p>Ideally, we fill out the top-level attributes, and only when the user requests the attributes that lie underneath do we construct them, based on the <code>id</code> value and the <code>field</code> type.</p>
<p>So finally, let's say the <code>myContainer.foo.bar</code> attribute has a foreign-key field type. If we call <code>myContainer.foo.bar.newkey.value()</code>, the app should understand that the 'newkey' attribute does not exist, query against our Django instance, store the <code>bar</code> attribute as the now more fully filled-out Container, and return the <code>newkey</code> value that has been put in memory.</p>
<h3>The Python Pitfall</h3>
<p>I'd hoped it would be a simple <code>hasattr()</code>, but Python seems to just use <code>getattr()</code> with a default of <code>None</code> (the recursion is real!). I've also had loads of trouble getting a <code>try: except:</code> to work.</p>
<p>As I write this I'm realizing how much more complicated it may be, due to the recursive attribute setting relying on <code>getattr()</code> and <code>hasattr()</code>. Any suggestions would be greatly appreciated. - Cheers</p>
| 0 | 2016-08-06T17:24:08Z | 38,807,026 | <p>So to answer the first part of the question: how to have <code>__getattr__</code> call <code>self.get()</code> only when the attribute is not defined already. There are two attribute access methods in python classes: <code>__getattribute__</code> and <code>__getattr__</code>. The first is called every time an attribute lookup is attempted, the second is called only when the normal attribute lookup system fails (including lookups in superclasses). Since you're defining <code>__getattr__</code>, which is only called when the attribute doesn't already exist, you can simply proxy it to a call to <code>.get</code>. Where you run into recursion issues is if you try to look up another attribute of <code>self</code>, that also doesn't yet exist, inside of <code>__getattr__</code>. The way to avoid this is to have a list of keys that require special handling and check if the current attribute requested is one of them. This typically is only needed when implementing <code>__getattribute__</code>.</p>
<p>Note that your <code>.get</code> method has a problem: <code>attr</code> and <code>values</code> are undefined. I'd give a slightly more concrete answer for what to put in <code>__getattr__</code> if I knew what values <code>.get</code>'s <code>attr</code> and <code>values</code> are meant to hold.</p>
| 1 | 2016-08-06T17:36:49Z | [
"python",
"django"
] |
Python __getattr__ to create attr if it does not exist and return it | 38,806,907 | <p>A little background: You'll notice my comments describe what I'll go through later. Let's say I have the following object...</p>
<pre><code>#!/usr/bin/python
import os
import sys

class ContainerField(object):
    ''' An attribute/object storage device '''
    def __init__(self, field=None, value=None):
        self.m_field = field
        self.m_value = value

    def __getattr__(self, key):
        '''
        What can we do here that runs the .get() command but -only- if the key
        does not exist.
        '''
        # super(ContainerField, self).__getattr__(key)

    def __call__(self):
        return self.get()

    def value(self):
        return self.m_value

    def setValue(self, value):
        self.m_value = value

    def _recurseSetAttr(self, attr, values):
        '''Generate our attributes/objects and store them succinctly.'''
        # Container
        #  \_Container
        #    \_Container
        #      \_Container...
        for field, value in values.items():
            if not hasattr(attr, field):
                setattr(attr,
                        field,
                        # field type is known from model caching
                        ContainerField(value=value, field=field_type(field)))
            fdbf = getattr(attr, field)
            if isinstance(value, dict):
                self._recurseSetAttr(fdbf, value)
            else:
                fdbf.setValue(value)

    def get(self):
        # Create the new object from scratch and proliferate its
        # attributes recursively. 'values' come in the form of a
        # dictionary that we can then use to setattr().
        # So... Create container, set value, find keys for this
        # and create containers that hold the values of those keys
        # and repeat...
        self._recurseSetAttr(self, attr, values)
</code></pre>
<p>Now, when generating the objects I can have a dict that looks something like this: <code>{"myContainer" : { "id" : 2, "foo" : { "id" : 3, "bar" : 1 } }}</code> that, once created, can be called like this: <code>myContainer.foo.id.value()</code></p>
<p>In the scenario there's the <code>self.m_field</code>, which tells the application what data type the object really is. This references Django models, but the same idea applies to any Python.</p>
<p>All containers will have an <code>id</code> (or <code>pk</code>) key to them as part of their instantiation. This is mandatory.</p>
<hr>
<h3>The Rub</h3>
<p>Ideally, we fill out the top-level attributes, and only when the user requests the attributes that lie underneath do we construct them, based on the <code>id</code> value and the <code>field</code> type.</p>
<p>So finally, let's say the <code>myContainer.foo.bar</code> attribute has a foreign-key field type. If we call <code>myContainer.foo.bar.newkey.value()</code>, the app should understand that the 'newkey' attribute does not exist, query against our Django instance, store the <code>bar</code> attribute as the now more fully filled-out Container, and return the <code>newkey</code> value that has been put in memory.</p>
<h3>The Python Pitfall</h3>
<p>I'd hoped it would be a simple <code>hasattr()</code>, but Python seems to just use <code>getattr()</code> with a default of <code>None</code> (the recursion is real!). I've also had loads of trouble getting a <code>try: except:</code> to work.</p>
<p>As I write this I'm realizing how much more complicated it may be, due to the recursive attribute setting relying on <code>getattr()</code> and <code>hasattr()</code>. Any suggestions would be greatly appreciated. - Cheers</p>
| 0 | 2016-08-06T17:24:08Z | 38,807,053 | <p>Check this out:</p>
<pre><code>class Test(object):
    def __init__(self):
        self.a = 5
        self.b = 6

    def __getattr__(self, key):
        return 'created a new key: {}'.format(key)

obj = Test()
print(obj.a, obj.b)
print(obj.c)
</code></pre>
<p>Here, instead of returning <code>'created a new key...'</code>, you create a new attribute and return it.</p>
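Taking that one step further, a hedged sketch (my own <code>Node</code> class, not the question's <code>ContainerField</code>) where the missing attribute is actually created, stored with <code>setattr</code>, and returned, so arbitrary attribute chains work on first access:

```python
class Node(object):
    def __getattr__(self, key):
        # Dunder lookups should fail normally rather than auto-create.
        if key.startswith('__'):
            raise AttributeError(key)
        # Create a fresh child node, store it, and return it, so
        # chains like obj.a.b.c spring into existence on demand.
        child = Node()
        setattr(self, key, child)
        return child

obj = Node()
obj.foo.bar.value = 42          # each missing link is created on the fly
same = obj.foo.bar is obj.foo.bar  # cached: repeat access returns the same object
```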
| 0 | 2016-08-06T17:39:17Z | [
"python",
"django"
] |
Model field value does not get updated m2m_changed(Django) | 38,806,987 | <p>I have been searching for an answer for hours; however, none of the answers I found worked. Trying to find the bug on my own didn't bring me any results either.</p>
<p>I have created a <code>receiver</code> function which should update the model's <code>total_likes</code> attribute (based on the count of the <code>users_like</code> relation) every time a user clicks the like button of a specific image. (This is part of the 'Django by Example' book.) But the field's value always stays the same, equal to the default value of 0. Even if I try to assign a value to the field manually in the Django shell, it does not change (code example in the 'Update' section).</p>
<p>Can someone please have a look at the code and point me in the right direction if I am doing something wrong?</p>
<p>I am using Django 1.9.</p>
<pre><code># models.py
class Image(models.Model):
    ...
    users_like = models.ManyToManyField(settings.AUTH_USER_MODEL,
                                        related_name='images_liked',
                                        blank=True)
    total_likes = models.PositiveIntegerField(default=5)

    def save(self, *args, **kwargs):
        if not self.slug:
            self.slug = slugify(self.title)
        super(Image, self).save(*args, **kwargs)


# signals.py
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

from .models import Image


@receiver(m2m_changed, sender=Image.users_like.through)
def users_like_changed(sender, instance, **kwargs):
    instance.total_likes = instance.users_like.count()
    instance.save()


# apps.py
from django.apps import AppConfig


class ImagesConfig(AppConfig):
    name = 'images'
    verbose_name = 'Image bookmarks'

    def ready(self):
        # import signal handlers
        import images.signals


# __init__.py
default_app_config = 'images.apps.ImagesConfig'
</code></pre>
<h2>Update:</h2>
<p>When I run the code below from the Django shell, it does change the <code>total_likes</code> value, but apparently only temporarily:</p>
<pre><code>from images.models import Image

for image in Image.objects.all():
    print(image.total_likes)
    image.total_likes = image.users_like.count()
    print(image.total_likes)
    image.save()
    print(image.total_likes)
</code></pre>
<p>Output from code above:</p>
<pre><code>0 #initial/default value of 0
3 #current number of users who like the picture
3
</code></pre>
<p>Because when I run the for-loop code again to see the results (or even check the field value in the admin interface), I still get the initial/default value of 0.</p>
<p>Can someone see the problem why the field does not get updated?</p>
| 1 | 2016-08-06T17:32:42Z | 38,815,857 | <p>Ok, so the problem was with the custom <code>save()</code> method on the model class.</p>
<p>I needed to call the <code>save()</code> method of the parent class like this:</p>
<pre><code>def save(self, *args, **kwargs):
    if not self.slug:
        self.slug = slugify(self.title)
    super(Image, self).save(*args, **kwargs)
</code></pre>
<p>and that made it work.</p>
| 0 | 2016-08-07T15:50:53Z | [
"python",
"django",
"django-models",
"django-signals"
] |
Django 1.9.2 AssertionError: database connection isn't set to UTC | 38,807,296 | <p>I have set up 3 servers with PostgreSQL now and have so far not seen this issue. I am now setting up the first server that is not hosted in Denmark, and I am getting errors when accessing the database from the web.</p>
<p>I could run <code>createsuperuser</code> without issues and it created my superuser. But when I try to use it to log in to my site, I get the error below.</p>
<pre><code>File "/usr/lib64/python3.4/site-packages/django/db/models/sql/compiler.py", line 1239, in cursor_iter
sentinel):
File "/usr/lib64/python3.4/site-packages/django/db/models/sql/compiler.py", line 1238, in <lambda>
for rows in iter((lambda: cursor.fetchmany(GET_ITERATOR_CHUNK_SIZE)),
File "/usr/lib64/python3.4/site-packages/django/db/utils.py", line 102, in inner
return func(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/db/backends/postgresql/utils.py", line 6, in utc_tzinfo_factory
raise AssertionError("database connection isn't set to UTC")
AssertionError: database connection isn't set to UTC
</code></pre>
<p>I have been looking at the Django source; the error comes from this code:</p>
<pre><code>from django.utils.timezone import utc


def utc_tzinfo_factory(offset):
    if offset != 0:
        raise AssertionError("database connection isn't set to UTC")
    return utc
</code></pre>
<p>However, I cannot find where <code>offset</code> is set, so I cannot figure out how Django decides that my offset is wrong.</p>
<p>My PostgreSQL database has its timezone set to UTC. I have verified all the PostgreSQL parameters listed in the Django documentation, and I am now running out of ideas as to why this happens.</p>
<p>I hope someone here can help?</p>
<p>python3.4 -V: Python 3.4.3</p>
<p>psql -V: psql (PostgreSQL) 9.2.15</p>
<p>django-admin --version: 1.9.2</p>
<p><strong>UPDATED 11/8-2016 - Full Stack from DEBUG view</strong></p>
<p>I found the value of <code>offset</code> in <code>utc_tzinfo_factory</code>: it is 120. However, I cannot explain how or why it gets this value.</p>
<p>Below is a copy from the debug page of Django for the error with full stack and variables.</p>
<pre><code>Environment:
Request Method: POST
Request URL: http://myweb.dk/accounts/login/
Django Version: 1.9.2
Python Version: 3.4.3
Installed Applications:
['polls.apps.PollsConfig',
'teamTournamentApp.apps.TeamtournamentappConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/usr/lib64/python3.4/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/usr/lib64/python3.4/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/views.py" in inner
49. return func(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/views/decorators/debug.py" in sensitive_post_parameters_wrapper
76. return view(request, *args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/utils/decorators.py" in _wrapped_view
149. response = view_func(request, *args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/views.py" in login
69. if form.is_valid():
File "/usr/lib64/python3.4/site-packages/django/forms/forms.py" in is_valid
161. return self.is_bound and not self.errors
File "/usr/lib64/python3.4/site-packages/django/forms/forms.py" in errors
153. self.full_clean()
File "/usr/lib64/python3.4/site-packages/django/forms/forms.py" in full_clean
363. self._clean_form()
File "/usr/lib64/python3.4/site-packages/django/forms/forms.py" in _clean_form
390. cleaned_data = self.clean()
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/forms.py" in clean
159. password=password)
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/__init__.py" in authenticate
74. user = backend.authenticate(**credentials)
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/backends.py" in authenticate
17. user = UserModel._default_manager.get_by_natural_key(username)
File "/usr/lib64/python3.4/site-packages/django/contrib/auth/base_user.py" in get_by_natural_key
45. return self.get(**{self.model.USERNAME_FIELD: username})
File "/usr/lib64/python3.4/site-packages/django/db/models/manager.py" in manager_method
122. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/db/models/query.py" in get
381. num = len(clone)
File "/usr/lib64/python3.4/site-packages/django/db/models/query.py" in __len__
240. self._fetch_all()
File "/usr/lib64/python3.4/site-packages/django/db/models/query.py" in _fetch_all
1074. self._result_cache = list(self.iterator())
File "/usr/lib64/python3.4/site-packages/django/db/models/query.py" in __iter__
68. for row in compiler.results_iter(results):
File "/usr/lib64/python3.4/site-packages/django/db/models/sql/compiler.py" in results_iter
805. for rows in results:
File "/usr/lib64/python3.4/site-packages/django/db/models/sql/compiler.py" in cursor_iter
1239. sentinel):
File "/usr/lib64/python3.4/site-packages/django/db/models/sql/compiler.py" in <lambda>
1238. for rows in iter((lambda: cursor.fetchmany(GET_ITERATOR_CHUNK_SIZE)),
File "/usr/lib64/python3.4/site-packages/django/db/utils.py" in inner
102. return func(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/django/db/backends/postgresql/utils.py" in utc_tzinfo_factory
6. raise AssertionError("database connection isn't set to UTC")
Exception Type: AssertionError at /accounts/login/
Exception Value: database connection isn't set to UTC
Request information
GET
No GET data
POST
Variable Value
next
''
password
'xxxxxxx'
username
'admin'
csrfmiddlewaretoken
'f8E50d9kpS2j4Wlc7O9KsKtUXHxbuX58'
FILES
No FILES data
COOKIES
Variable Value
_ga
'GA1.2.1308578855.1465289038'
csrftoken
'f8E50d9kpS2j4Wlc7O9KsKtUXHxbuX58'
META
Variable Value
UNIQUE_ID
'xxxxx'
HTTP_USER_AGENT
('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like '
'Gecko) Chrome/51.0.2704.103 Safari/537.36')
mod_wsgi.total_requests
1
REMOTE_ADDR
'xx.yy.zz.tt'
mod_wsgi.handler_script
''
mod_wsgi.script_name
''
HTTP_ACCEPT
'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
REQUEST_SCHEME
'http'
mod_wsgi.script_start
'1470934394985429'
HTTP_REFERER
'http://myweb.dk/accounts/login/'
mod_wsgi.version
(4, 5, 3)
SERVER_PROTOCOL
'HTTP/1.1'
HTTP_HOST
'myweb.dk'
wsgi.url_scheme
'http'
HTTP_ACCEPT_ENCODING
'gzip, deflate'
PATH_INFO
'/accounts/login/'
wsgi.multiprocess
True
HTTP_CONNECTION
'keep-alive'
mod_wsgi.listener_port
'80'
mod_wsgi.path_info
'/accounts/login/'
CONTEXT_DOCUMENT_ROOT
'/var/www/vhosts/default/htdocs'
REMOTE_PORT
'59723'
wsgi.errors
<_io.TextIOWrapper encoding='utf-8'>
mod_wsgi.callable_object
'application'
SCRIPT_NAME
''
REQUEST_URI
'/accounts/login/'
SCRIPT_FILENAME
'/var/www/vhosts/myweb.dk/httpdocs/TeamTournament/TeamTournament/wsgi.py'
SERVER_ADMIN
'.....'
mod_wsgi.request_start
'1470934394985053'
mod_wsgi.listener_host
''
mod_wsgi.enable_sendfile
'0'
HTTP_UPGRADE_INSECURE_REQUESTS
'1'
mod_wsgi.script_reloading
'1'
SERVER_SIGNATURE
''
mod_wsgi.application_group
'myweb.dk|'
mod_wsgi.thread_requests
0
wsgi.input
<mod_wsgi.Input object at 0x7f6266286920>
QUERY_STRING
''
SERVER_ADDR
'xx.yy.zz.tt'
wsgi.multithread
True
wsgi.version
(1, 0)
CONTEXT_PREFIX
''
wsgi.run_once
False
REQUEST_METHOD
'POST'
HTTP_ORIGIN
'http://myweb.dk'
SERVER_NAME
'myweb.dk'
mod_wsgi.request_handler
'wsgi-script'
mod_wsgi.process_group
''
CONTENT_TYPE
'application/x-www-form-urlencoded'
HTTP_CACHE_CONTROL
'max-age=0'
SERVER_SOFTWARE
'Apache'
HTTP_COOKIE
'_ga=GA1.2.1308578855.1465289038; csrftoken=f8E50d9kpS2j4Wlc7O9KsKtUXHxbuX58'
HTTP_ACCEPT_LANGUAGE
'da-DK,da;q=0.8,en-US;q=0.6,en;q=0.4,sv;q=0.2'
SERVER_PORT
'80'
wsgi.file_wrapper
''
apache.version
(2, 4, 6)
PATH_TRANSLATED
'/var/www/vhosts/myweb.dk/httpdocs/TeamTournament/TeamTournament/wsgi.py/accounts/login/'
CONTENT_LENGTH
'91'
mod_wsgi.thread_id
2
CSRF_COOKIE
'f8E50d9kpS2j4Wlc7O9KsKtUXHxbuX58'
GATEWAY_INTERFACE
'CGI/1.1'
DOCUMENT_ROOT
'/var/www/vhosts/default/htdocs'
Settings
Using settings module TeamTournament.settings
Setting Value
LOGIN_REDIRECT_URL
'/accounts/profile/'
FILE_UPLOAD_HANDLERS
['django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler']
SECURE_SSL_HOST
None
DATETIME_FORMAT
'N j, Y, P'
EMAIL_HOST
'localhost'
SESSION_COOKIE_PATH
'/'
FORMAT_MODULE_PATH
None
DEFAULT_TABLESPACE
''
DATE_INPUT_FORMATS
['%Y-%m-%d',
'%m/%d/%Y',
'%m/%d/%y',
'%b %d %Y',
'%b %d, %Y',
'%d %b %Y',
'%d %b, %Y',
'%B %d %Y',
'%B %d, %Y',
'%d %B %Y',
'%d %B, %Y']
TEMPLATE_DIRS
[]
DATETIME_INPUT_FORMATS
['%Y-%m-%d %H:%M:%S',
'%Y-%m-%d %H:%M:%S.%f',
'%Y-%m-%d %H:%M',
'%Y-%m-%d',
'%m/%d/%Y %H:%M:%S',
'%m/%d/%Y %H:%M:%S.%f',
'%m/%d/%Y %H:%M',
'%m/%d/%Y',
'%m/%d/%y %H:%M:%S',
'%m/%d/%y %H:%M:%S.%f',
'%m/%d/%y %H:%M',
'%m/%d/%y']
FILE_UPLOAD_DIRECTORY_PERMISSIONS
None
FILE_UPLOAD_MAX_MEMORY_SIZE
2621440
FIRST_DAY_OF_WEEK
0
STATICFILES_STORAGE
'django.contrib.staticfiles.storage.StaticFilesStorage'
SESSION_ENGINE
'django.contrib.sessions.backends.db'
TIME_FORMAT
'P'
FORCE_SCRIPT_NAME
None
SECURE_SSL_REDIRECT
False
ALLOWED_INCLUDE_ROOTS
[]
SHORT_DATETIME_FORMAT
'm/d/Y P'
DEFAULT_CONTENT_TYPE
'text/html'
NUMBER_GROUPING
0
DEFAULT_EXCEPTION_REPORTER_FILTER
'django.views.debug.SafeExceptionReporterFilter'
SESSION_EXPIRE_AT_BROWSER_CLOSE
False
LANGUAGE_CODE
'en-us'
TIME_INPUT_FORMATS
['%H:%M:%S', '%H:%M:%S.%f', '%H:%M']
SESSION_COOKIE_NAME
'sessionid'
ALLOWED_HOSTS
['xx.yy.zz.tt', 'myweb.net', 'myweb.dk']
SESSION_COOKIE_DOMAIN
None
EMAIL_SSL_CERTFILE
None
DEFAULT_FROM_EMAIL
'webmaster@localhost'
EMAIL_PORT
25
DATE_FORMAT
'N j, Y'
ABSOLUTE_URL_OVERRIDES
{}
USE_ETAGS
False
CSRF_FAILURE_VIEW
'django.views.csrf.csrf_failure'
EMAIL_SSL_KEYFILE
'********************'
CSRF_COOKIE_HTTPONLY
False
SESSION_CACHE_ALIAS
'default'
LANGUAGES
[('af', 'Afrikaans'),
('ar', 'Arabic'),
('ast', 'Asturian'),
('az', 'Azerbaijani'),
('bg', 'Bulgarian'),
('be', 'Belarusian'),
('bn', 'Bengali'),
('br', 'Breton'),
('bs', 'Bosnian'),
('ca', 'Catalan'),
('cs', 'Czech'),
('cy', 'Welsh'),
('da', 'Danish'),
('de', 'German'),
('el', 'Greek'),
('en', 'English'),
('en-au', 'Australian English'),
('en-gb', 'British English'),
('eo', 'Esperanto'),
('es', 'Spanish'),
('es-ar', 'Argentinian Spanish'),
('es-co', 'Colombian Spanish'),
('es-mx', 'Mexican Spanish'),
('es-ni', 'Nicaraguan Spanish'),
('es-ve', 'Venezuelan Spanish'),
('et', 'Estonian'),
('eu', 'Basque'),
('fa', 'Persian'),
('fi', 'Finnish'),
('fr', 'French'),
('fy', 'Frisian'),
('ga', 'Irish'),
('gd', 'Scottish Gaelic'),
('gl', 'Galician'),
('he', 'Hebrew'),
('hi', 'Hindi'),
('hr', 'Croatian'),
('hu', 'Hungarian'),
('ia', 'Interlingua'),
('id', 'Indonesian'),
('io', 'Ido'),
('is', 'Icelandic'),
('it', 'Italian'),
('ja', 'Japanese'),
('ka', 'Georgian'),
('kk', 'Kazakh'),
('km', 'Khmer'),
('kn', 'Kannada'),
('ko', 'Korean'),
('lb', 'Luxembourgish'),
('lt', 'Lithuanian'),
('lv', 'Latvian'),
('mk', 'Macedonian'),
('ml', 'Malayalam'),
('mn', 'Mongolian'),
('mr', 'Marathi'),
('my', 'Burmese'),
('nb', 'Norwegian Bokmal'),
('ne', 'Nepali'),
('nl', 'Dutch'),
('nn', 'Norwegian Nynorsk'),
('os', 'Ossetic'),
('pa', 'Punjabi'),
('pl', 'Polish'),
('pt', 'Portuguese'),
('pt-br', 'Brazilian Portuguese'),
('ro', 'Romanian'),
('ru', 'Russian'),
('sk', 'Slovak'),
('sl', 'Slovenian'),
('sq', 'Albanian'),
('sr', 'Serbian'),
('sr-latn', 'Serbian Latin'),
('sv', 'Swedish'),
('sw', 'Swahili'),
('ta', 'Tamil'),
('te', 'Telugu'),
('th', 'Thai'),
('tr', 'Turkish'),
('tt', 'Tatar'),
('udm', 'Udmurt'),
('uk', 'Ukrainian'),
('ur', 'Urdu'),
('vi', 'Vietnamese'),
('zh-hans', 'Simplified Chinese'),
('zh-hant', 'Traditional Chinese')]
X_FRAME_OPTIONS
'SAMEORIGIN'
AUTH_USER_MODEL
'auth.User'
SILENCED_SYSTEM_CHECKS
[]
LOGOUT_URL
'/accounts/logout/'
STATICFILES_FINDERS
['django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder']
TEMPLATES
[{'APP_DIRS': True,
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['/var/www/vhosts/myweb.dk/httpdocs/TeamTournament/templates'],
'OPTIONS': {'context_processors': ['django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'django.template.context_processors.request']}}]
SERVER_EMAIL
'root@localhost'
SECURE_BROWSER_XSS_FILTER
False
TEMPLATE_CONTEXT_PROCESSORS
['django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages']
DEBUG_APPS
False
USE_X_FORWARDED_PORT
False
ADMINS
[]
SIGNING_BACKEND
'django.core.signing.TimestampSigner'
CSRF_COOKIE_SECURE
False
EMAIL_USE_SSL
False
CACHES
{'default': {'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'}}
LOCALE_PATHS
[]
TEMPLATE_STRING_IF_INVALID
''
MESSAGE_STORAGE
'django.contrib.messages.storage.fallback.FallbackStorage'
PRODUCTION
False
FIXTURE_DIRS
[]
CSRF_COOKIE_PATH
'/'
MIDDLEWARE_CLASSES
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
MANAGERS
[]
CSRF_TRUSTED_ORIGINS
[]
CACHE_MIDDLEWARE_SECONDS
600
APPEND_SLASH
True
TEST_NON_SERIALIZED_APPS
[]
SECURE_HSTS_INCLUDE_SUBDOMAINS
False
MIGRATION_MODULES
{}
LANGUAGE_COOKIE_AGE
None
TEMPLATE_LOADERS
['django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader']
STATIC_URL
'/static/'
SESSION_COOKIE_AGE
1209600
SETTINGS_MODULE
'TeamTournament.settings'
DECIMAL_SEPARATOR
'.'
YEAR_MONTH_FORMAT
'F Y'
EMAIL_TIMEOUT
None
SESSION_SAVE_EVERY_REQUEST
False
BASE_DIR
'/var/www/vhosts/myweb.dk/httpdocs/TeamTournament'
SECURE_CONTENT_TYPE_NOSNIFF
False
FILE_UPLOAD_TEMP_DIR
None
CACHE_MIDDLEWARE_KEY_PREFIX
'********************'
DEBUG
True
SESSION_COOKIE_HTTPONLY
True
CSRF_HEADER_NAME
'HTTP_X_CSRFTOKEN'
USE_L10N
True
STATICFILES_DIRS
[]
SESSION_SERIALIZER
'django.contrib.sessions.serializers.JSONSerializer'
USE_THOUSAND_SEPARATOR
False
EMAIL_BACKEND
'django.core.mail.backends.smtp.EmailBackend'
USE_X_FORWARDED_HOST
False
STATIC_ROOT
'/var/www/vhosts/myweb.dk/httpdocs/static/'
SECRET_KEY
'********************'
PASSWORD_RESET_TIMEOUT_DAYS
'********************'
MEDIA_ROOT
''
TIME_ZONE
'CET'
DATABASES
{'default': {'ATOMIC_REQUESTS': False,
'AUTOCOMMIT': True,
'CONN_MAX_AGE': 0,
'ENGINE': 'django.db.backends.postgresql',
'HOST': '127.0.0.1',
'NAME': 'user',
'OPTIONS': {},
'PASSWORD': '********************',
'PORT': '5432',
'TEST': {'CHARSET': None,
'COLLATION': None,
'MIRROR': None,
'NAME': None},
'TIME_ZONE': None,
'USER': 'user'}}
DEFAULT_INDEX_TABLESPACE
''
EMAIL_USE_TLS
False
LOGIN_URL
'/accounts/login/'
SHORT_DATE_FORMAT
'm/d/Y'
CSRF_COOKIE_NAME
'csrftoken'
LANGUAGE_COOKIE_DOMAIN
None
USE_I18N
True
SESSION_COOKIE_SECURE
False
CACHE_MIDDLEWARE_ALIAS
'default'
DEFAULT_CHARSET
'utf-8'
TEMPLATE_DEBUG
False
ROOT_URLCONF
'TeamTournament.urls'
SECURE_PROXY_SSL_HEADER
None
EMAIL_HOST_PASSWORD
'********************'
FILE_UPLOAD_PERMISSIONS
None
CSRF_COOKIE_AGE
31449600
DEBUG_PROPAGATE_EXCEPTIONS
False
WSGI_APPLICATION
'TeamTournament.wsgi.application'
PASSWORD_HASHERS
'********************'
SECURE_REDIRECT_EXEMPT
[]
LANGUAGES_BIDI
['he', 'ar', 'fa', 'ur']
CSRF_COOKIE_DOMAIN
None
DEFAULT_FILE_STORAGE
'django.core.files.storage.FileSystemStorage'
POSTGRES
True
PREPEND_WWW
False
EMAIL_SUBJECT_PREFIX
'[Django] '
LOGGING
{'disable_existing_loggers': False,
'filters': {'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'}},
'handlers': {'logfile': {'class': 'logging.handlers.WatchedFileHandler',
'filename': '/var/log/django/error.log'},
'mail_admins': {'class': 'django.utils.log.AdminEmailHandler',
'filters': ['require_debug_false'],
'level': 'ERROR'}},
'loggers': {'django': {'handlers': ['logfile'],
'level': 'ERROR',
'propagate': False},
'django.request': {'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True}},
'version': 1}
SESSION_FILE_PATH
None
TEST_RUNNER
'django.test.runner.DiscoverRunner'
INTERNAL_IPS
[]
DATABASE_ROUTERS
[]
FILE_CHARSET
'utf-8'
LANGUAGE_COOKIE_NAME
'django_language'
INSTALLED_APPS
['polls.apps.PollsConfig',
'teamTournamentApp.apps.TeamtournamentappConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles']
LANGUAGE_COOKIE_PATH
'/'
IGNORABLE_404_URLS
[]
MEDIA_URL
''
DISALLOWED_USER_AGENTS
[]
LOG_PATH
'/var/log/django/error.log'
LOGGING_CONFIG
'logging.config.dictConfig'
THOUSAND_SEPARATOR
','
MONTH_DAY_FORMAT
'F j'
USE_TZ
True
EMAIL_HOST_USER
''
AUTH_PASSWORD_VALIDATORS
'********************'
SECURE_HSTS_SECONDS
0
AUTHENTICATION_BACKENDS
['django.contrib.auth.backends.ModelBackend']
</code></pre>
| 4 | 2016-08-06T18:04:22Z | 39,649,018 | <p>I encountered this same problem, also on a server that normally runs at UTC+2 (in my case, Europe/Oslo).</p>
<p>It turned out that the system zoneinfo files on my server (Centos 7) were corrupted, which became evident in <code>pg_timezone_names</code>.</p>
<pre><code>postgres=# select * from pg_timezone_names where name like 'UTC';
name | abbrev | utc_offset | is_dst
------+--------+------------+--------
UTC | CEST | 02:00:00 | t
(1 row)
</code></pre>
<p>After running <code>yum update tzdata</code> to update my server's timezone files, and restarting the PostgreSQL server, the issue appears to be resolved.</p>
<pre><code>postgres=# select * from pg_timezone_names where name like 'UTC';
name | abbrev | utc_offset | is_dst
------+--------+------------+--------
UTC | UTC | 00:00:00 | f
(1 row)
</code></pre>
<p>My guess is that I might previously have run <code>cat /usr/share/zoneinfo/Europe/Oslo > /etc/localtime</code> without first removing <code>/etc/localtime</code> to change the timezone on the system, effectively overwriting the zoneinfo for UTC with the zoneinfo for Europe/Oslo.</p>
| 0 | 2016-09-22T21:16:35Z | [
"python",
"django",
"postgresql"
] |
Why are pool.map() and map() returning varying results? | 38,807,432 | <p>I have the following program:</p>
<pre><code>import string
import itertools
import multiprocessing as mp
def test(word_list):
return list(map(lambda xy: (xy[0], len(list(xy[1]))),
itertools.groupby(sorted(word_list))))
def f(x):
return (x[0], len(list(x[1])))
def test_parallel(word_list):
w = mp.cpu_count()
pool = mp.Pool(w)
return (pool.map(f, itertools.groupby(sorted(word_list))))
def main():
test_list = ["test", "test", "test", "this", "this", "that"]
print(test(test_list))
print(test_parallel(test_list))
return
if __name__ == "__main__":
main()
</code></pre>
<p>The output is :</p>
<pre><code>[('test', 3), ('that', 1), ('this', 2)]
[('test', 0), ('that', 0), ('this', 1)]
</code></pre>
<p>The first line is the expected and correct result. My question is, why isn't pool.map() returning the same results as map()?</p>
<p>Also, I'm aware a 6 item list isn't the perfect case for multiprocessing. This is simply a demonstration of the issue I am having while implementing in a larger application.</p>
<p>I'm using Python 3.5.1.</p>
| 1 | 2016-08-06T18:19:57Z | 38,807,534 | <p><code>groupby()</code> returns <em>iterators</em> per group, and these are <em>not independent</em> from the underlying iterator passed in. You can't independently iterate over these groups in parallel; any preceding group will be prematurely ended the moment you access the next.</p>
<p><code>pool.map()</code> will try to read all of the <code>groupby()</code> iterator results to send those results to separate functions; merely trying to get a second group will cause the first to be empty.</p>
<p>You can see the same result without <code>pool.map()</code> simply by iterating to the next result from <code>groupby()</code>:</p>
<pre><code>>>> from itertools import groupby
>>> word_list = ["test", "test", "test", "this", "this", "that"]
>>> iterator = groupby(sorted(word_list))
>>> first = next(iterator)
>>> next(first[1])
'test'
>>> second = next(iterator)
>>> list(first[1])
[]
</code></pre>
<p>The remainder of the first group is 'empty' because the second group has been requested.</p>
<p>This is clearly <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow">documented</a>:</p>
<blockquote>
<p>Because the source is shared, when the <code>groupby()</code> object is advanced, the previous group is no longer visible.</p>
</blockquote>
<p>You'd have to 'materialise' each group <em>before</em> sending it to the function:</p>
<pre><code>return pool.map(lambda kg: f((kg[0], list(kg[1]))), itertools.groupby(sorted(word_list)))
</code></pre>
<p>or</p>
<pre><code>return pool.map(f, (
(key, list(group)) for key, group in itertools.groupby(sorted(word_list))))
</code></pre>
<p>where the generator expression takes care of the materialising as <code>pool.map()</code> iterates.</p>
| 2 | 2016-08-06T18:32:49Z | [
"python",
"group-by",
"multiprocessing",
"itertools",
"pool"
] |
Why are pool.map() and map() returning varying results? | 38,807,432 | <p>I have the following program:</p>
<pre><code>import string
import itertools
import multiprocessing as mp
def test(word_list):
return list(map(lambda xy: (xy[0], len(list(xy[1]))),
itertools.groupby(sorted(word_list))))
def f(x):
return (x[0], len(list(x[1])))
def test_parallel(word_list):
w = mp.cpu_count()
pool = mp.Pool(w)
return (pool.map(f, itertools.groupby(sorted(word_list))))
def main():
test_list = ["test", "test", "test", "this", "this", "that"]
print(test(test_list))
print(test_parallel(test_list))
return
if __name__ == "__main__":
main()
</code></pre>
<p>The output is :</p>
<pre><code>[('test', 3), ('that', 1), ('this', 2)]
[('test', 0), ('that', 0), ('this', 1)]
</code></pre>
<p>The first line is the expected and correct result. My question is, why isn't pool.map() returning the same results as map()?</p>
<p>Also, I'm aware a 6 item list isn't the perfect case for multiprocessing. This is simply a demonstration of the issue I am having while implementing in a larger application.</p>
<p>I'm using Python 3.5.1.</p>
| 1 | 2016-08-06T18:19:57Z | 38,807,542 | <p>From <a href="https://docs.python.org/3.5/library/itertools.html#itertools.groupby" rel="nofollow">https://docs.python.org/3.5/library/itertools.html#itertools.groupby</a>:</p>
<blockquote>
<p>The returned group is itself an iterator that shares the underlying
iterable with groupby(). Because the source is shared, when the
groupby() object is advanced, the previous group is no longer visible.
So, if that data is needed later, it should be stored as a list:</p>
<pre><code>groups = []
uniquekeys = []
data = sorted(data, key=keyfunc)
for k, g in groupby(data, keyfunc):
groups.append(list(g)) # Store group iterator as a list
uniquekeys.append(k)
</code></pre>
</blockquote>
<p>I think the issue here is that <code>Pool.map</code> tries to chop up its input, and in doing so, it iterates through the result of <code>groupby</code>, which effectively skips over the elements from all but the last group.</p>
<p>One fix for your code would be to use something like <code>[(k, list(v)) for k, v in itertools.groupby(sorted(word_list))]</code>, but I don't know how applicable that is to your real-world use case.</p>
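That fix can be seen working even without a pool; once each group is stored as a list, the groups are independent and counting behaves as expected:

```python
from itertools import groupby

words = ["test", "test", "test", "this", "this", "that"]
# Store each group as a list while the groupby iterator is still on it.
materialized = [(k, list(v)) for k, v in groupby(sorted(words))]
# The lists no longer share state with the groupby iterator.
counts = [(k, len(v)) for k, v in materialized]
print(counts)  # [('test', 3), ('that', 1), ('this', 2)]
```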
| 3 | 2016-08-06T18:33:29Z | [
"python",
"group-by",
"multiprocessing",
"itertools",
"pool"
] |
Can't get a value of a variable from a class | 38,807,571 | <p>This is some code I wrote to understand how a global variable works.</p>
<p>I can't get any value for <code>aaa</code> in <code>print('fuera ' , aaa)</code>. I am not sure how the sequence of execution happens either.</p>
<pre><code>import tkinter as tk
global aaa
def primero():
winda = tk.Toplevel()
def on_button():
global aaa
aaa = entry.get()
winda.destroy()
entry = tk.Entry(winda)
button = tk.Button(winda, text="Get", command=on_button)
button.pack()
entry.insert(0,'nada')
entry.pack()
entry.focus_set()
windo = tk.Tk()
primero()
print ('fuera ', aaa)
windo.mainloop()
</code></pre>
| -2 | 2016-08-06T18:36:04Z | 38,807,654 | <blockquote>
<p>I am not sure how the sequence of execution happens </p>
</blockquote>
<p>Basically, you start a window with a button first. The code after the window is initialized, i.e. the print statement, runs right away, but <code>aaa</code> is only ever initialized when you click the button. </p>
<p>Hence the error. </p>
<p>In the bigger picture, the GUI events here make learning about global variables more complex than necessary. </p>
<p>If you'd like to fix the problem, simply initialize <code>aaa</code>: </p>
<pre><code>import tkinter as tk
aaa = None
</code></pre>
<p>If you'd like to watch this global variable change, add another print statement inside the button click </p>
| 0 | 2016-08-06T18:45:16Z | [
"python"
] |
Python 3.5 virtual environment not working on Ubuntu | 38,807,752 | <p>I've installed Python 3.5 on my Ubuntu VPS.</p>
<p>Command:</p>
<pre><code>python3.5 --version
</code></pre>
<p>Gives:</p>
<pre><code>3.5.0+
</code></pre>
<p>and then I create a simple Flask application, set up a virtual environment, and activate it:</p>
<pre><code>virtualenv -p python3.5 envname
source envname/bin/activate
</code></pre>
<p>But if I print the Python version, it returns 3.4.3:</p>
<pre><code>from flask import Flask
import sys
app = Flask(__name__)
@app.route("/")
def hello():
return sys.version
if __name__ == "__main__":
app.run()
</code></pre>
<p>This part: </p>
<pre><code>sys.executable
</code></pre>
<p>Returns:</p>
<pre><code>/usr/bin/python3
</code></pre>
<p>Not 3.5.</p>
| 0 | 2016-08-06T18:57:07Z | 38,808,780 | <p>Kind of workaround, but the simplest solution that comes to my mind:</p>
<pre><code>cp envname/bin/python3.5 envname/bin/python
</code></pre>
<p>That way even if Python scripts are executed with <code>python</code> command it will use Python 3.5.</p>
| 0 | 2016-08-06T21:16:16Z | [
"python",
"python-3.x"
] |
Fetching URL and converting to UTF-8 Python | 38,807,809 | <p>I would like to do my first project in Python but I have a problem with encoding. When I fetch data it shows encoded bytes instead of my native letters, for example '\xc4\x87' instead of 'ć'. The code is below:</p>
<pre><code>import urllib.request
import sys
page = urllib.request.urlopen("http://olx.pl/")
test = page.read()
print(test)
print(sys.stdin.encoding)
z = "Å"
print(z)
print(z.encode("utf-8"))
</code></pre>
<p>I know that code here is poor but I tried many options to change encoding. I wrote z = "Å" to check if it can print any 'special' letter and it shows. I tried to encode it and it works also as it should. Sys.stdin.encoding shows cp852. </p>
| 0 | 2016-08-06T19:04:51Z | 38,807,852 | <p>The data you read from a <code>urlopen()</code> response is <em>encoded data</em>. You'd need to first <em>decode</em> that data using the right encoding.</p>
<p>You appear to have downloaded UTF-8 data; you'd have to decode that data first before you had text:</p>
<pre><code>test = page.read().decode('utf8')
</code></pre>
<p>However, it is up to the server to tell you what data was received. Check for a character set in the headers:</p>
<pre><code>encoding = page.info().get_content_charset()
</code></pre>
<p>This can still be <code>None</code>; many data formats include the encoding <em>as part of the format</em>. XML for example is UTF-8 by default but the XML declaration at the start can contain information about what codec was used for that document. An XML parser would extract that information to ensure you get properly decoded Unicode text when parsing.</p>
<p>You may not be able to print that data; the 852 codepage can only handle 256 different codepoints, while the Unicode standard is far larger.</p>
| 0 | 2016-08-06T19:10:20Z | [
"python",
"python-3.x",
"urllib"
] |
Fetching URL and converting to UTF-8 Python | 38,807,809 | <p>I would like to do my first project in Python but I have a problem with encoding. When I fetch data it shows encoded bytes instead of my native letters, for example '\xc4\x87' instead of 'ć'. The code is below:</p>
<pre><code>import urllib.request
import sys
page = urllib.request.urlopen("http://olx.pl/")
test = page.read()
print(test)
print(sys.stdin.encoding)
z = "Å"
print(z)
print(z.encode("utf-8"))
</code></pre>
<p>I know that code here is poor but I tried many options to change encoding. I wrote z = "Å" to check if it can print any 'special' letter and it shows. I tried to encode it and it works also as it should. Sys.stdin.encoding shows cp852. </p>
| 0 | 2016-08-06T19:04:51Z | 38,807,968 | <p>The <em>urlopen</em> is returning to you a <em>bytes</em> object. That means it's a raw, encoded stream of bytes. Python 3 prints that in a <em>repr</em> format, which uses escape codes for non-ASCII characters. To get the canonical unicode you would have to decode it. The right way to do that would be to inspect the header and look for the encoding declaration. But for this we can assume UTF-8 and you can simply <em>decode</em> it as such, not encode it.</p>
<pre><code>import urllib.request
import sys
page = urllib.request.urlopen("http://olx.pl/")
test = page.read()
print(test.decode("utf-8")) # <- note change
</code></pre>
<p>Now, Python 3 defaults to UTF-8 source encoding. So you can embed non-ASCII like this if your editor supports unicode and saving as UTF-8. </p>
<pre><code>z = "Å"
print(z)
</code></pre>
<p>Printing it will only work if your terminal supports UTF-8 encoding. On Linux and OSX they do, so this is not a problem there. </p>
| 0 | 2016-08-06T19:23:25Z | [
"python",
"python-3.x",
"urllib"
] |
Fetching URL and converting to UTF-8 Python | 38,807,809 | <p>I would like to do my first project in Python but I have a problem with encoding. When I fetch data it shows encoded bytes instead of my native letters, for example '\xc4\x87' instead of 'ć'. The code is below:</p>
<pre><code>import urllib.request
import sys
page = urllib.request.urlopen("http://olx.pl/")
test = page.read()
print(test)
print(sys.stdin.encoding)
z = "Å"
print(z)
print(z.encode("utf-8"))
</code></pre>
<p>I know that code here is poor but I tried many options to change encoding. I wrote z = "Å" to check if it can print any 'special' letter and it shows. I tried to encode it and it works also as it should. Sys.stdin.encoding shows cp852. </p>
| 0 | 2016-08-06T19:04:51Z | 38,808,244 | <p>The others are correct, but I'd like to offer a simpler solution. Use <a href="http://docs.python-requests.org/en/master/user/quickstart/" rel="nofollow"><code>requests</code></a>. It's 3rd party, so you'll need to install it via pip:</p>
<pre><code>pip install requests
</code></pre>
<p>But it's a lot simpler to use than the <code>urllib</code> libraries. For your particular case, it handles the decoding for you out of the box:</p>
<pre><code>import requests
r = requests.get("http://olx.pl/")
print(r.encoding)
print(type(r.text))
print(r.text)
</code></pre>
<p>Breakdown:</p>
<ul>
<li><code>get</code> sends an HTTP <code>GET</code> request to the server and returns the respose.</li>
<li>We <code>print</code> the encoding <code>requests</code> thinks the text is in. It chooses this based on the response header Martijin mentions.</li>
<li>We show that <code>r.text</code> is already a decoded text type (<code>unicode</code> in Python 2 and <code>str</code> in Python 3)</li>
<li>Then we actually <code>print</code> the response.</li>
</ul>
<p>Note that we don't <em>have</em> to <code>print</code> the encoding or type; I've just done so for diagnostic purposes to show what <code>requests</code> is doing. <code>requests</code> is designed to simplify a lot of other details of working with HTTP requests, and it does a good job of it.</p>
| 0 | 2016-08-06T20:00:09Z | [
"python",
"python-3.x",
"urllib"
] |
pandas merge dataframes on closest timestamp | 38,807,890 | <p>I want to merge two dataframes on three columns: email, subject and timestamp.
The timestamps between the dataframes differ and I therefore need to identify the closest matching timestamp for a group of email & subject. </p>
<p>Below is a reproducible example using a function for closest match suggested for <a href="http://stackoverflow.com/questions/24614474/pandas-merge-on-name-and-closest-date?noredirect=1&lq=1">this</a> question.</p>
<pre><code>import numpy as np
import pandas as pd
from pandas.io.parsers import StringIO
def find_closest_date(timepoint, time_series, add_time_delta_column=True):
# takes a pd.Timestamp() instance and a pd.Series with dates in it
# calcs the delta between `timepoint` and each date in `time_series`
# returns the closest date and optionally the number of days in its time delta
deltas = np.abs(time_series - timepoint)
idx_closest_date = np.argmin(deltas)
res = {"closest_date": time_series.iloc[idx_closest_date]}
idx = ['closest_date']
if add_time_delta_column:
res["closest_delta"] = deltas[idx_closest_date]
idx.append('closest_delta')
return pd.Series(res, index=idx)
a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 02:01:02,a@gmail.com,welcome
2016-07-01 14:45:04,a@gmail.com,subject3
2016-07-01 08:14:02,a@gmail.com,subject2
2016-07-01 16:26:35,a@gmail.com,subject4
2016-07-01 10:17:00,b@gmail.com,subject3
2016-07-01 02:01:02,b@gmail.com,welcome
2016-07-01 14:45:04,b@gmail.com,subject3
2016-07-01 08:14:02,b@gmail.com,subject2
2016-07-01 16:26:35,b@gmail.com,subject4
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 02:01:14,a@gmail.com,welcome,1,1
2016-07-01 08:15:48,a@gmail.com,subject2,2,2
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 14:46:01,a@gmail.com,subject3,1,2
2016-07-01 16:27:28,a@gmail.com,subject4,1,2
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
2016-07-01 02:01:03,b@gmail.com,welcome,0,0
2016-07-01 14:45:05,b@gmail.com,subject3,0,0
2016-07-01 08:16:00,b@gmail.com,subject2,0,0
2016-07-01 17:00:00,b@gmail.com,subject4,0,0
"""
</code></pre>
<p>Notice that for a@gmail.com the closest matched timestamp is 10:17:39, whereas for b@gmail.com the closest match is 10:17:05.</p>
<pre><code>a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 10:17:00,b@gmail.com,subject3
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
"""
df1 = pd.read_csv(StringIO(a), parse_dates=['timestamp'])
df2 = pd.read_csv(StringIO(b), parse_dates=['timestamp'])
df1[['closest', 'time_bt_x_and_y']] = df1.timestamp.apply(find_closest_date, args=[df2.timestamp])
df1
df3 = pd.merge(df1, df2, left_on=['email','subject','closest'], right_on=['email','subject','timestamp'],how='left')
df3
timestamp_x email subject closest time_bt_x_and_y timestamp_y clicks var1
2016-07-01 10:17:00 a@gmail.com subject3 2016-07-01 10:17:05 00:00:05 NaT NaN NaN
2016-07-01 02:01:02 a@gmail.com welcome 2016-07-01 02:01:03 00:00:01 NaT NaN NaN
2016-07-01 14:45:04 a@gmail.com subject3 2016-07-01 14:45:05 00:00:01 NaT NaN NaN
2016-07-01 08:14:02 a@gmail.com subject2 2016-07-01 08:15:48 00:01:46 2016-07-01 08:15:48 2.0 2.0
2016-07-01 16:26:35 a@gmail.com subject4 2016-07-01 16:27:28 00:00:53 2016-07-01 16:27:28 1.0 2.0
2016-07-01 10:17:00 b@gmail.com subject3 2016-07-01 10:17:05 00:00:05 2016-07-01 10:17:05 0.0 0.0
2016-07-01 02:01:02 b@gmail.com welcome 2016-07-01 02:01:03 00:00:01 2016-07-01 02:01:03 0.0 0.0
2016-07-01 14:45:04 b@gmail.com subject3 2016-07-01 14:45:05 00:00:01 2016-07-01 14:45:05 0.0 0.0
2016-07-01 08:14:02 b@gmail.com subject2 2016-07-01 08:15:48 00:01:46 NaT NaN NaN
2016-07-01 16:26:35 b@gmail.com subject4 2016-07-01 16:27:28 00:00:53 NaT NaN NaN
</code></pre>
<p>The result is wrong, mainly because the closest date is incorrect since it does not take into account email & subject.</p>
<p>The expected result is</p>
<p><a href="http://i.stack.imgur.com/DeVc5.png" rel="nofollow"><img src="http://i.stack.imgur.com/DeVc5.png" alt="enter image description here"></a></p>
<p>Amending the function to give the closest timesstamps for a given email and subject would be helpful. </p>
<pre><code>df1.groupby(['email','subject'])['timestamp'].apply(find_closest_date, args=[df1.timestamp])
</code></pre>
<p>But that gives an error as the function is not defined for a group object.
What's the best way of doing this? </p>
| 2 | 2016-08-06T19:15:57Z | 38,807,965 | <p>You want to apply the closest timestamp logic to each group of 'email' and 'subject'</p>
<pre><code>a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 02:01:02,a@gmail.com,welcome
2016-07-01 14:45:04,a@gmail.com,subject3
2016-07-01 08:14:02,a@gmail.com,subject2
2016-07-01 16:26:35,a@gmail.com,subject4
2016-07-01 10:17:00,b@gmail.com,subject3
2016-07-01 02:01:02,b@gmail.com,welcome
2016-07-01 14:45:04,b@gmail.com,subject3
2016-07-01 08:14:02,b@gmail.com,subject2
2016-07-01 16:26:35,b@gmail.com,subject4
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 02:01:14,a@gmail.com,welcome,1,1
2016-07-01 08:15:48,a@gmail.com,subject2,2,2
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 14:46:01,a@gmail.com,subject3,1,2
2016-07-01 16:27:28,a@gmail.com,subject4,1,2
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
2016-07-01 02:01:03,b@gmail.com,welcome,0,0
2016-07-01 14:45:05,b@gmail.com,subject3,0,0
2016-07-01 08:16:00,b@gmail.com,subject2,0,0
2016-07-01 17:00:00,b@gmail.com,subject4,0,0
"""
df1 = pd.read_csv(StringIO(a), parse_dates=['timestamp'])
df2 = pd.read_csv(StringIO(b), parse_dates=['timestamp'])
df2 = df2.set_index(['email', 'subject'])
def find_closest_date(timepoint, time_series, add_time_delta_column=True):
# takes a pd.Timestamp() instance and a pd.Series with dates in it
# calcs the delta between `timepoint` and each date in `time_series`
# returns the closest date and optionally the number of days in its time delta
time_series = time_series.values
timepoint = np.datetime64(timepoint)
deltas = np.abs(np.subtract(time_series, timepoint))
idx_closest_date = np.argmin(deltas)
res = {"closest_date": time_series[idx_closest_date]}
idx = ['closest_date']
if add_time_delta_column:
res["closest_delta"] = deltas[idx_closest_date]
idx.append('closest_delta')
return pd.Series(res, index=idx)
# Then group df1 as needed
grouped = df1.groupby(['email', 'subject'])
# Finally loop over the group items, finding the closest timestamps
join_ts = pd.DataFrame()
for name, group in grouped:
try:
join_ts = pd.concat([join_ts, group['timestamp']\
.apply(find_closest_date, time_series=df2.loc[name, 'timestamp'])],
axis=0)
except KeyError:
pass
df3 = pd.merge(pd.concat([df1, join_ts], axis=1), df2, left_on=['closest_date'], right_on=['timestamp'])
</code></pre>
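As an aside, newer pandas versions provide <code>pd.merge_asof</code>, which performs this nearest-timestamp, per-group join directly; both frames must be sorted on the join key. A self-contained sketch on the two-row sample from the question:

```python
import pandas as pd
from io import StringIO

a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 10:17:00,b@gmail.com,subject3
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
"""
df1 = pd.read_csv(StringIO(a), parse_dates=["timestamp"]).sort_values("timestamp")
df2 = pd.read_csv(StringIO(b), parse_dates=["timestamp"]).sort_values("timestamp")

# `by` restricts candidate matches to rows with the same email and subject;
# `direction="nearest"` picks the closest timestamp on either side.
merged = pd.merge_asof(df1, df2, on="timestamp",
                       by=["email", "subject"], direction="nearest")
print(merged[["email", "clicks", "var1"]])
```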
| 1 | 2016-08-06T19:23:14Z | [
"python",
"pandas",
"merge"
] |
pandas merge dataframes on closest timestamp | 38,807,890 | <p>I want to merge two dataframes on three columns: email, subject and timestamp.
The timestamps between the dataframes differ and I therefore need to identify the closest matching timestamp for a group of email & subject. </p>
<p>Below is a reproducible example using a function for closest match suggested for <a href="http://stackoverflow.com/questions/24614474/pandas-merge-on-name-and-closest-date?noredirect=1&lq=1">this</a> question.</p>
<pre><code>import numpy as np
import pandas as pd
from pandas.io.parsers import StringIO
def find_closest_date(timepoint, time_series, add_time_delta_column=True):
# takes a pd.Timestamp() instance and a pd.Series with dates in it
# calcs the delta between `timepoint` and each date in `time_series`
# returns the closest date and optionally the number of days in its time delta
deltas = np.abs(time_series - timepoint)
idx_closest_date = np.argmin(deltas)
res = {"closest_date": time_series.ix[idx_closest_date]}
idx = ['closest_date']
if add_time_delta_column:
res["closest_delta"] = deltas[idx_closest_date]
idx.append('closest_delta')
return pd.Series(res, index=idx)
a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 02:01:02,a@gmail.com,welcome
2016-07-01 14:45:04,a@gmail.com,subject3
2016-07-01 08:14:02,a@gmail.com,subject2
2016-07-01 16:26:35,a@gmail.com,subject4
2016-07-01 10:17:00,b@gmail.com,subject3
2016-07-01 02:01:02,b@gmail.com,welcome
2016-07-01 14:45:04,b@gmail.com,subject3
2016-07-01 08:14:02,b@gmail.com,subject2
2016-07-01 16:26:35,b@gmail.com,subject4
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 02:01:14,a@gmail.com,welcome,1,1
2016-07-01 08:15:48,a@gmail.com,subject2,2,2
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 14:46:01,a@gmail.com,subject3,1,2
2016-07-01 16:27:28,a@gmail.com,subject4,1,2
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
2016-07-01 02:01:03,b@gmail.com,welcome,0,0
2016-07-01 14:45:05,b@gmail.com,subject3,0,0
2016-07-01 08:16:00,b@gmail.com,subject2,0,0
2016-07-01 17:00:00,b@gmail.com,subject4,0,0
"""
</code></pre>
<p>Notice that for a@gmail.com the closest matched timestamp is 10:17:39, whereas for b@gmail.com the closest match is 10:17:05.</p>
<pre><code>a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 10:17:00,b@gmail.com,subject3
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
"""
df1 = pd.read_csv(StringIO(a), parse_dates=['timestamp'])
df2 = pd.read_csv(StringIO(b), parse_dates=['timestamp'])
df1[['closest', 'time_bt_x_and_y']] = df1.timestamp.apply(find_closest_date, args=[df2.timestamp])
df1
df3 = pd.merge(df1, df2, left_on=['email','subject','closest'], right_on=['email','subject','timestamp'],how='left')
df3
timestamp_x email subject closest time_bt_x_and_y timestamp_y clicks var1
2016-07-01 10:17:00 a@gmail.com subject3 2016-07-01 10:17:05 00:00:05 NaT NaN NaN
2016-07-01 02:01:02 a@gmail.com welcome 2016-07-01 02:01:03 00:00:01 NaT NaN NaN
2016-07-01 14:45:04 a@gmail.com subject3 2016-07-01 14:45:05 00:00:01 NaT NaN NaN
2016-07-01 08:14:02 a@gmail.com subject2 2016-07-01 08:15:48 00:01:46 2016-07-01 08:15:48 2.0 2.0
2016-07-01 16:26:35 a@gmail.com subject4 2016-07-01 16:27:28 00:00:53 2016-07-01 16:27:28 1.0 2.0
2016-07-01 10:17:00 b@gmail.com subject3 2016-07-01 10:17:05 00:00:05 2016-07-01 10:17:05 0.0 0.0
2016-07-01 02:01:02 b@gmail.com welcome 2016-07-01 02:01:03 00:00:01 2016-07-01 02:01:03 0.0 0.0
2016-07-01 14:45:04 b@gmail.com subject3 2016-07-01 14:45:05 00:00:01 2016-07-01 14:45:05 0.0 0.0
2016-07-01 08:14:02 b@gmail.com subject2 2016-07-01 08:15:48 00:01:46 NaT NaN NaN
2016-07-01 16:26:35 b@gmail.com subject4 2016-07-01 16:27:28 00:00:53 NaT NaN NaN
</code></pre>
<p>The result is wrong, mainly because the closest date is incorrect since it does not take into account email & subject.</p>
<p>The expected result is</p>
<p><a href="http://i.stack.imgur.com/DeVc5.png" rel="nofollow"><img src="http://i.stack.imgur.com/DeVc5.png" alt="enter image description here"></a></p>
<p>Amending the function to give the closest timestamps for a given email and subject would be helpful. </p>
<pre><code>df1.groupby(['email','subject'])['timestamp'].apply(find_closest_date, args=[df1.timestamp])
</code></pre>
<p>But that gives an error as the function is not defined for a group object.
What's the best way of doing this? </p>
| 2 | 2016-08-06T19:15:57Z | 38,808,718 | <p>Notice that if you merge <code>df1</code> and <code>df2</code> on <code>email</code> and <code>subject</code>, then the result
has all the possible <em>relevant</em> timestamp pairings:</p>
<pre><code>In [108]: result = pd.merge(df1, df2, how='left', on=['email','subject'], suffixes=['', '_y']); result
Out[108]:
timestamp email subject timestamp_y clicks var1
0 2016-07-01 10:17:00 a@gmail.com subject3 2016-07-01 10:17:39 1 7
1 2016-07-01 10:17:00 a@gmail.com subject3 2016-07-01 14:46:01 1 2
2 2016-07-01 02:01:02 a@gmail.com welcome 2016-07-01 02:01:14 1 1
3 2016-07-01 14:45:04 a@gmail.com subject3 2016-07-01 10:17:39 1 7
4 2016-07-01 14:45:04 a@gmail.com subject3 2016-07-01 14:46:01 1 2
5 2016-07-01 08:14:02 a@gmail.com subject2 2016-07-01 08:15:48 2 2
6 2016-07-01 16:26:35 a@gmail.com subject4 2016-07-01 16:27:28 1 2
7 2016-07-01 10:17:00 b@gmail.com subject3 2016-07-01 10:17:05 0 0
8 2016-07-01 10:17:00 b@gmail.com subject3 2016-07-01 14:45:05 0 0
9 2016-07-01 02:01:02 b@gmail.com welcome 2016-07-01 02:01:03 0 0
10 2016-07-01 14:45:04 b@gmail.com subject3 2016-07-01 10:17:05 0 0
11 2016-07-01 14:45:04 b@gmail.com subject3 2016-07-01 14:45:05 0 0
12 2016-07-01 08:14:02 b@gmail.com subject2 2016-07-01 08:16:00 0 0
13 2016-07-01 16:26:35 b@gmail.com subject4 2016-07-01 17:00:00 0 0
</code></pre>
<p>You could now take the absolute value of the difference in timestamps for each row:</p>
<pre><code>result['diff'] = (result['timestamp_y'] - result['timestamp']).abs()
</code></pre>
<p>and then use </p>
<pre><code>idx = result.groupby(['timestamp','email','subject'])['diff'].idxmin()
result = result.loc[idx]
</code></pre>
<p>to find the rows with the minimum difference for each group based on <code>['timestamp','email','subject']</code>.</p>
<hr>
<pre><code>import numpy as np
import pandas as pd
from pandas.io.parsers import StringIO
a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject3
2016-07-01 02:01:02,a@gmail.com,welcome
2016-07-01 14:45:04,a@gmail.com,subject3
2016-07-01 08:14:02,a@gmail.com,subject2
2016-07-01 16:26:35,a@gmail.com,subject4
2016-07-01 10:17:00,b@gmail.com,subject3
2016-07-01 02:01:02,b@gmail.com,welcome
2016-07-01 14:45:04,b@gmail.com,subject3
2016-07-01 08:14:02,b@gmail.com,subject2
2016-07-01 16:26:35,b@gmail.com,subject4
"""
b = """timestamp,email,subject,clicks,var1
2016-07-01 02:01:14,a@gmail.com,welcome,1,1
2016-07-01 08:15:48,a@gmail.com,subject2,2,2
2016-07-01 10:17:39,a@gmail.com,subject3,1,7
2016-07-01 14:46:01,a@gmail.com,subject3,1,2
2016-07-01 16:27:28,a@gmail.com,subject4,1,2
2016-07-01 10:17:05,b@gmail.com,subject3,0,0
2016-07-01 02:01:03,b@gmail.com,welcome,0,0
2016-07-01 14:45:05,b@gmail.com,subject3,0,0
2016-07-01 08:16:00,b@gmail.com,subject2,0,0
2016-07-01 17:00:00,b@gmail.com,subject4,0,0
"""
df1 = pd.read_csv(StringIO(a), parse_dates=['timestamp'])
df2 = pd.read_csv(StringIO(b), parse_dates=['timestamp'])
result = pd.merge(df1, df2, how='left', on=['email','subject'], suffixes=['', '_y'])
result['diff'] = (result['timestamp_y'] - result['timestamp']).abs()
idx = result.groupby(['timestamp','email','subject'])['diff'].idxmin()
result = result.loc[idx].drop(['timestamp_y','diff'], axis=1)
result = result.sort_index()
print(result)
</code></pre>
<p>yields</p>
<pre><code> timestamp email subject clicks var1
0 2016-07-01 10:17:00 a@gmail.com subject3 1 7
2 2016-07-01 02:01:02 a@gmail.com welcome 1 1
4 2016-07-01 14:45:04 a@gmail.com subject3 1 2
5 2016-07-01 08:14:02 a@gmail.com subject2 2 2
6 2016-07-01 16:26:35 a@gmail.com subject4 1 2
7 2016-07-01 10:17:00 b@gmail.com subject3 0 0
9 2016-07-01 02:01:02 b@gmail.com welcome 0 0
11 2016-07-01 14:45:04 b@gmail.com subject3 0 0
12 2016-07-01 08:14:02 b@gmail.com subject2 0 0
13 2016-07-01 16:26:35 b@gmail.com subject4 0 0
</code></pre>
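<p>As an aside: on recent pandas versions (0.19+ for <code>merge_asof</code>, 0.20+ for <code>direction='nearest'</code>) this grouped nearest-timestamp match can be done in a single call. A sketch on toy data (the values below are illustrative, trimmed from the question's frames):</p>

```python
import pandas as pd

# merge_asof requires both frames to be sorted on the merge key
left = pd.DataFrame({
    'timestamp': pd.to_datetime(['2016-07-01 10:17:00', '2016-07-01 10:17:00']),
    'email': ['a@gmail.com', 'b@gmail.com'],
    'subject': ['subject3', 'subject3'],
}).sort_values('timestamp')

right = pd.DataFrame({
    'timestamp': pd.to_datetime(['2016-07-01 10:17:05', '2016-07-01 10:17:39']),
    'email': ['b@gmail.com', 'a@gmail.com'],
    'subject': ['subject3', 'subject3'],
    'clicks': [0, 1],
}).sort_values('timestamp')

# by= restricts matching to rows sharing the same email/subject,
# direction='nearest' picks the closest timestamp in either direction
merged = pd.merge_asof(left, right, on='timestamp',
                       by=['email', 'subject'], direction='nearest')
print(merged)
```

<p>With this, a@gmail.com pairs with 10:17:39 and b@gmail.com with 10:17:05, exactly the per-group nearest matches computed above.</p>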
| 2 | 2016-08-06T21:07:40Z | [
"python",
"pandas",
"merge"
] |
Seaborn multiple barplots | 38,807,895 | <p>I have a pandas dataframe that looks like this:</p>
<pre><code> class men woman children
0 first 0.91468 0.667971 0.660562
1 second 0.30012 0.329380 0.882608
2 third 0.11899 0.189747 0.121259
</code></pre>
<p>How would I create a plot using seaborn that looks like this? Do I have to rearrange my data in some way?</p>
<p><img src="https://stanford.edu/~mwaskom/software/seaborn/_images/seaborn-factorplot-7.png"></p>
| 1 | 2016-08-06T19:16:05Z | 38,808,042 | <p>Yes you need to reshape the DataFrame:</p>
<pre><code>df = pd.melt(df, id_vars="class", var_name="sex", value_name="survival rate")
df
Out:
class sex survival rate
0 first men 0.914680
1 second men 0.300120
2 third men 0.118990
3 first woman 0.667971
4 second woman 0.329380
5 third woman 0.189747
6 first children 0.660562
7 second children 0.882608
8 third children 0.121259
</code></pre>
<p>Now, you can use factorplot:</p>
<pre><code>sns.factorplot(x='class', y='survival rate', hue='sex', data=df, kind='bar')
</code></pre>
<p><a href="http://i.stack.imgur.com/bMQGA.png" rel="nofollow"><img src="http://i.stack.imgur.com/bMQGA.png" alt="enter image description here"></a></p>
| 3 | 2016-08-06T19:32:57Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] |
readlines() error with for-loop in python | 38,807,928 | <p>This error is hard to describe because I can't figure out how the loop is even affecting the <code>readline()</code> and <code>readlines()</code> methods. When I try using the former, I get these unexpected Traceback errors. When I try the latter, my code runs and nothing happens. I have determined that the bug is located in the first eight lines. The first few lines of the <code>Topics.txt</code> file are posted.</p>
<p><strong><code>Code</code></strong></p>
<pre><code>import requests
from html.parser import HTMLParser
from bs4 import BeautifulSoup
Url = "https://ritetag.com/best-hashtags-for/"
Topicfilename = "Topics.txt"
Topicfile = open(Topicfilename, 'r')
Line = Topicfile.readlines()
Linenumber = 0
for Line in Topicfile:
Linenumber += 1
print("Reading line", Linenumber)
Topic = Line
Newtopic = Topic.strip("\n").replace(' ', '').replace(',', '')
print(Newtopic)
Link = Url.join(Newtopic)
print(Link)
Sourcecode = requests.get(Link)
</code></pre>
<p>When I run this bit here, it prints the the URL preceded by the first character of the line.For example, it prints as 2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ etc. for 24 Hour Fitness.</p>
<p><strong><code>Topics.txt</code></strong></p>
<ul>
<li>21st Century Fox</li>
<li>24 Hour Fitness</li>
<li>2K Games</li>
<li>3M</li>
</ul>
<p><strong><code>Full Error</code></strong></p>
<blockquote>
<p>Reading line 1 24HourFitness
2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s</p>
<p>Traceback (most recent call last): File
"C:\Users\Caden\Desktop\Programs\LususStudios\AutoDealBot\HashtagScanner.py",
line 17, in
Sourcecode = requests.get(Link) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 71, in get
return request('get', url, params=params, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 57, in request
return session.request(method=method, url=url, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 475, in request
resp = self.send(prep, **send_kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 579, in send
adapter = self.get_adapter(url=request.url) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 653, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url) requests.exceptions.InvalidSchema: No connection adapters were
found for
'2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s'</p>
</blockquote>
| 0 | 2016-08-06T19:20:11Z | 38,807,994 | <p>Firstly, python conventions are to lowercase all variable names. </p>
<p>Secondly, you are exhausting the file pointer when you read all the lines at first, then continue to loop over the file. </p>
<p>Try to simply open the file, then loop over it </p>
<pre><code>linenumber = 0
with open("Topics.txt") as topicfile:
for line in topicfile:
# do work
linenumber += 1
</code></pre>
<p>Then, for the issue in the traceback: if you look closely, you are building up a really long URL string that is definitely not a valid URL, so requests throws an error </p>
<p><code>InvalidSchema: No connection adapters were found for '2https://ritetag.com/best-hashtags-for/4https://ritetag.com/...</code></p>
<p>And you can debug to see that <code>Url.join(Newtopic)</code> is "interleaving" the <code>Url</code> string between each character of the <code>Newtopic</code> string, which is what <code>str.join</code> will do with any iterable </p>
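<p>The interleaving is easy to reproduce in isolation (the values below are stand-ins, not from the original script):</p>

```python
url = "https://example.com/"   # hypothetical stand-ins for Url / Newtopic
topic = "3M"

# str.join puts the "separator" string BETWEEN the items of the iterable
# it receives; iterating a string yields its characters, so the URL ends
# up wedged between every pair of characters.
interleaved = url.join(topic)
print(interleaved)        # -> 3https://example.com/M

# Plain concatenation is what was actually intended here.
print(url + topic)        # -> https://example.com/3M
```

<p>With a longer topic string the URL is repeated between every pair of characters, which is exactly the pattern visible in the traceback.</p>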
| 0 | 2016-08-06T19:26:19Z | [
"python",
"python-3.x",
"for-loop",
"readlines"
] |
readlines() error with for-loop in python | 38,807,928 | <p>This error is hard to describe because I can't figure out how the loop is even affecting the <code>readline()</code> and <code>readlines()</code> methods. When I try using the former, I get these unexpected Traceback errors. When I try the latter, my code runs and nothing happens. I have determined that the bug is located in the first eight lines. The first few lines of the <code>Topics.txt</code> file are posted.</p>
<p><strong><code>Code</code></strong></p>
<pre><code>import requests
from html.parser import HTMLParser
from bs4 import BeautifulSoup
Url = "https://ritetag.com/best-hashtags-for/"
Topicfilename = "Topics.txt"
Topicfile = open(Topicfilename, 'r')
Line = Topicfile.readlines()
Linenumber = 0
for Line in Topicfile:
Linenumber += 1
print("Reading line", Linenumber)
Topic = Line
Newtopic = Topic.strip("\n").replace(' ', '').replace(',', '')
print(Newtopic)
Link = Url.join(Newtopic)
print(Link)
Sourcecode = requests.get(Link)
</code></pre>
<p>When I run this bit here, it prints the URL preceded by the first character of the line. For example, it prints as 2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ etc. for 24 Hour Fitness.</p>
<p><strong><code>Topics.txt</code></strong></p>
<ul>
<li>21st Century Fox</li>
<li>24 Hour Fitness</li>
<li>2K Games</li>
<li>3M</li>
</ul>
<p><strong><code>Full Error</code></strong></p>
<blockquote>
<p>Reading line 1 24HourFitness
2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s</p>
<p>Traceback (most recent call last): File
"C:\Users\Caden\Desktop\Programs\LususStudios\AutoDealBot\HashtagScanner.py",
line 17, in
Sourcecode = requests.get(Link) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 71, in get
return request('get', url, params=params, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 57, in request
return session.request(method=method, url=url, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 475, in request
resp = self.send(prep, **send_kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 579, in send
adapter = self.get_adapter(url=request.url) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 653, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url) requests.exceptions.InvalidSchema: No connection adapters were
found for
'2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s'</p>
</blockquote>
| 0 | 2016-08-06T19:20:11Z | 38,808,018 | <p>I think there are two issues:</p>
<ol>
<li>You seem to be iterating over <code>Topicfile</code> instead of <code>Topicfile.readlines()</code>.</li>
<li><code>Url.join(Newtopic)</code> isn't returning what you think it is. <code>.join</code> takes an iterable (in this case a string, which is a sequence of characters) and will insert <code>Url</code> in between each one.</li>
</ol>
<p>Here is code with these problems addressed:</p>
<pre><code>import requests
Url = "https://ritetag.com/best-hashtags-for/"
Topicfilename = "topics.txt"
Topicfile = open(Topicfilename, 'r')
Lines = Topicfile.readlines()
Linenumber = 0
for Line in Lines:
Linenumber += 1
print("Reading line", Linenumber)
Topic = Line
Newtopic = Topic.strip("\n").replace(' ', '').replace(',', '')
print(Newtopic)
Link = '{}{}'.format(Url, Newtopic)
print(Link)
Sourcecode = requests.get(Link)
</code></pre>
<p>As an aside, I also recommend using lowercased variable names since camel case is generally reserved for class names in Python :)</p>
| 1 | 2016-08-06T19:29:29Z | [
"python",
"python-3.x",
"for-loop",
"readlines"
] |
readlines() error with for-loop in python | 38,807,928 | <p>This error is hard to describe because I can't figure out how the loop is even affecting the <code>readline()</code> and <code>readlines()</code> methods. When I try using the former, I get these unexpected Traceback errors. When I try the latter, my code runs and nothing happens. I have determined that the bug is located in the first eight lines. The first few lines of the <code>Topics.txt</code> file are posted.</p>
<p><strong><code>Code</code></strong></p>
<pre><code>import requests
from html.parser import HTMLParser
from bs4 import BeautifulSoup
Url = "https://ritetag.com/best-hashtags-for/"
Topicfilename = "Topics.txt"
Topicfile = open(Topicfilename, 'r')
Line = Topicfile.readlines()
Linenumber = 0
for Line in Topicfile:
Linenumber += 1
print("Reading line", Linenumber)
Topic = Line
Newtopic = Topic.strip("\n").replace(' ', '').replace(',', '')
print(Newtopic)
Link = Url.join(Newtopic)
print(Link)
Sourcecode = requests.get(Link)
</code></pre>
<p>When I run this bit here, it prints the the URL preceded by the first character of the line.For example, it prints as 2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ etc. for 24 Hour Fitness.</p>
<p><strong><code>Topics.txt</code></strong></p>
<ul>
<li>21st Century Fox</li>
<li>24 Hour Fitness</li>
<li>2K Games</li>
<li>3M</li>
</ul>
<p><strong><code>Full Error</code></strong></p>
<blockquote>
<p>Reading line 1 24HourFitness
2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s</p>
<p>Traceback (most recent call last): File
"C:\Users\Caden\Desktop\Programs\LususStudios\AutoDealBot\HashtagScanner.py",
line 17, in
Sourcecode = requests.get(Link) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 71, in get
return request('get', url, params=params, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\api.py",
line 57, in request
return session.request(method=method, url=url, **kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 475, in request
resp = self.send(prep, **send_kwargs) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 579, in send
adapter = self.get_adapter(url=request.url) File "C:\Python34\lib\site-packages\requests-2.10.0-py3.4.egg\requests\sessions.py",
line 653, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url) requests.exceptions.InvalidSchema: No connection adapters were
found for
'2https://ritetag.com/best-hashtags-for/4https://ritetag.com/best-hashtags-for/Hhttps://ritetag.com/best-hashtags-for/ohttps://ritetag.com/best-hashtags-for/uhttps://ritetag.com/best-hashtags-for/rhttps://ritetag.com/best-hashtags-for/Fhttps://ritetag.com/best-hashtags-for/ihttps://ritetag.com/best-hashtags-for/thttps://ritetag.com/best-hashtags-for/nhttps://ritetag.com/best-hashtags-for/ehttps://ritetag.com/best-hashtags-for/shttps://ritetag.com/best-hashtags-for/s'</p>
</blockquote>
| 0 | 2016-08-06T19:20:11Z | 38,812,303 | <p>What you are attempting to do violates the RiteTag TOS. To get associated hashtags from a root hashtag, please use our API: <a href="http://docs.ritekit.apiary.io/#" rel="nofollow">http://docs.ritekit.apiary.io/#</a> and <a href="http://docs.ritekit.apiary.io/#reference/0/ritetag" rel="nofollow">http://docs.ritekit.apiary.io/#reference/0/ritetag</a> should serve your purposes.</p>
| 0 | 2016-08-07T08:24:45Z | [
"python",
"python-3.x",
"for-loop",
"readlines"
] |
Is there an R/caret equivalent of scikit-learn's labeled kfold cross validation? | 38,808,058 | <p>I'm augmenting my data and I want to make sure that related data are not separated into different folds during cross validation. </p>
<p>I know scikit-learn has a labeled k-fold algorithm that takes in a list of labels along with the data set and assures that the same label is not found in 2 different folds. Is there an equivalent of this in R? I'm using the caret package for my regression modeling.</p>
| 1 | 2016-08-06T19:34:56Z | 38,811,183 | <p>The <a href="https://mlr-org.github.io/mlr-tutorial/release/html/index.html" rel="nofollow">mlr package</a> seems to have that sort of functionality. The <a href="https://mlr-org.github.io/mlr-tutorial/release/html/task/index.html#further-settings" rel="nofollow">'blocking'</a> option specifies that all observations within a block must be kept together when resampling occurs. If you aren't too attached to the caret package, you could consider using this.</p>
| 0 | 2016-08-07T05:27:02Z | [
"python",
"scikit-learn",
"r-caret",
"cross-validation"
] |
why is function executing twice? | 38,808,079 | <p>I am experiencing a bizarre problem with my Python script: the file is being read twice.</p>
<p>Script:</p>
<pre><code>import platform as serverPlatform
class platform:
@staticmethod
def kernel():
return serverPlatform.release()
@staticmethod
def cpu():
with open('/proc/cpuinfo', 'r') as f:
print("x")
for line in f:
if line.strip():
if line.rstrip('\n').split(':')[0].startswith('model name'):
model_name = line.rstrip('\n').split(':')[1]
print platform.cpu()
</code></pre>
<p>The code above prints "x" twice:</p>
<pre><code>[root@localhost lib]# python platform.py
x
x
</code></pre>
<p>However, if I remove the class and run the code found inside the <code>cpu()</code> method directly, it prints "x" only once (Python script without the class):</p>
<pre><code>with open('/proc/cpuinfo', 'r') as f:
print("x")
for line in f:
if line.strip():
if line.rstrip('\n').split(':')[0].startswith('model name'):
model_name = line.rstrip('\n').split(':')[1]
</code></pre>
<p>What am I doing wrong in my initial script, and why is it printing "x" twice? Thanks in advance</p>
<p><strong><em>UPDATE</em></strong></p>
<p>OK, I realised my mistake. As silly as it may sound, I imported the <code>platform</code> module in a script containing a custom class also named <code>platform</code>. So I changed the name of the class from <strong>platform</strong> to <strong>platforms</strong>:</p>
<pre><code>import platform as serverPlatform
class platforms:
@staticmethod
def kernel():
return serverPlatform.release()
@staticmethod
def cpu():
with open('/proc/cpuinfo', 'r') as f:
print("x")
for line in f:
if line.strip():
if line.rstrip('\n').split(':')[0].startswith('model name'):
model_name = line.rstrip('\n').split(':')[1]
print platforms.cpu()
</code></pre>
| 1 | 2016-08-06T19:37:11Z | 38,808,194 | <p>When Python imports a script, it executes all of its top-level statements: function declarations, class declarations and executable statements (like <code>print</code>). So when you import <code>platform</code>, it executes <code>platform.cpu()</code> once during the import, and there is one more call from the file in which you imported it.</p>
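<p>A minimal sketch of the usual fix (the function body below is a placeholder, not the original <code>/proc/cpuinfo</code> parsing): rename the script so it no longer shadows the stdlib <code>platform</code> module, and guard the top-level call so it only runs when the file is executed directly:</p>

```python
# hypothetical my_platform.py -- renamed so that `import platform`
# no longer imports this very file

def cpu():
    # placeholder for the real /proc/cpuinfo parsing
    return "model name: example"

if __name__ == "__main__":
    # Runs only when the file is executed directly (python my_platform.py),
    # not when it is imported by another module, so nothing prints twice.
    print(cpu())
```

<p>With the guard in place, importing the module defines <code>cpu()</code> but produces no output.</p>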
| 2 | 2016-08-06T19:54:32Z | [
"python"
] |
Dictionary: Revert/remove setdefault | 38,808,229 | <p>I currently have an object passed to my callback which is a dictionary and which I have to return.</p>
<p>The invoker called <code>obj.setdefault("Server", "Something")</code> on the dictionary so that it has a default value even if the key/value pair does not exist.</p>
<p>How can I revert/remove that <code>setdefault</code> (remove the default value)? I mean I simply don't want that key/value pair in the dict, and it seems that it doesn't have complications if the key doesn't exist, but it is always added because of the default.</p>
| 2 | 2016-08-06T19:58:32Z | 38,808,423 | <p>Assuming the value is not in the dictionary when passed to the callback, <code>dict.setdefault</code> is not really the problem - this operation is one of many available for changing a Python dictionary. Specifically, it ensures SOMETHING is stored for the given key, which can be done directly anyway with an indexed assignment. As long as your code and the invoker are both maintaining a reference to the same dictionary, you have no real choice but to trust the invoker (and any and all other reference holders to this dictionary).</p>
<p>The mutability of the dictionary is the problem, so the possible solutions orient around the leeway in the design. I can only think of:</p>
<ul>
<li>copy the dictionary when it is passed into the callback, and/or</li>
<li>change the return value to a copy or a (readonly) view of the dictionary</li>
</ul>
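<p>A minimal sketch of both options (all names here are illustrative, not from the original code):</p>

```python
import types

def callback(obj):
    # Option 1: work on a shallow copy, so the invoker's dict, including
    # any value planted by setdefault, is never mutated.
    local = dict(obj)
    local.pop("Server", None)   # safe whether or not the key exists
    return local

original = {"Server": "Something", "Port": 8080}
print(callback(original))   # the copy, without the "Server" key
print(original)             # the invoker's dict is unchanged

# Option 2: hand out a read-only view instead of the dict itself;
# any attempt to write through the view raises TypeError.
view = types.MappingProxyType(original)
```

<p>The proxy stays in sync with later changes to the underlying dict, but rejects writes made through the view itself.</p>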
| 0 | 2016-08-06T20:26:14Z | [
"python",
"python-3.x",
"dictionary"
] |
Dictionary: Revert/remove setdefault | 38,808,229 | <p>I currently have an object passed to my callback which is a dictionary and which I have to return.</p>
<p>The invoker called <code>obj.setdefault("Server", "Something")</code> on the dictionary so that it has a default value even if the key/value pair does not exist.</p>
<p>How can I revert/remove that <code>setdefault</code> (remove the default value)? I mean I simply don't want that key/value pair in the dict, and it seems that it doesn't have complications if the key doesn't exist, but it is always added because of the default.</p>
| 2 | 2016-08-06T19:58:32Z | 38,808,450 | <p>The <code>setdefault</code> method sets the value of the <code>Server</code> key to <code>Something</code> (as long as the <code>Server</code> key is not already in the dictionary). You can simply delete the <code>Server</code> key from the dictionary:</p>
<pre><code>if 'Server' in obj: del obj['Server']
</code></pre>
<p>If you want to remove the <code>Server</code> key only when its value is <code>Something</code>, do:</p>
<pre><code>if obj['Server'] == 'Something': del obj['Server']
</code></pre>
<p>However, you cannot tell whether the value of <code>Server</code> was added to the dictionary as a default value or as a plain setting of a key-value pair. That's because after invoking <code>setdefault</code>, the dictionary holds the key-value pair without any indication as to how it was added.</p>
<p>Demonstration:</p>
<pre><code>>>>d = {}
>>>d.setdefault("Server", "Something")
>>>d
{'Server': 'Something'}
>>>del d['Server']
>>>d
{}
</code></pre>
| 5 | 2016-08-06T20:31:08Z | [
"python",
"python-3.x",
"dictionary"
] |
formatting date, time and string for filename | 38,808,311 | <p>I want to create a csv file with a filename of the following format:</p>
<p>"day-month-year hour:minute-malware_scan.csv"</p>
<p>Example:" 6-8-2016 21:45-malware_scan.csv"</p>
<p>The first part of the filename is formed by the actual date and time at file creation time, while "-malware_scan.csv" is a fixed string.</p>
<p>I know that in order to get the date and time I should use the time or datetime module and the strftime() function for formatting.</p>
<p>At first I tried with:</p>
<pre><code>t = datetime.datetime.now()
formatted_time = t.strftime("%d-%m-%y %H:%M")
filename = formatted_time + "-malware_scan.csv"
with open(filename, "a") as f:
...............
</code></pre>
<p>I didn't get the expected result, so I tried another way:</p>
<pre><code>i = datetime.datetime.now()
file_to_open = "{day}-{month}-{year} {hour}:{minute}-malware_scan.csv".format(day = i.day, month = i.month, year = i.year, hour = i.hour, minute = i.minute)
with open(file_to_open, "a") as f:
.......................
</code></pre>
<p>Also using the code above I don't get the expected result.
I get a filename of this kind: "6-8-2016 21". Day, month, year and hour are displayed, but the minutes and the rest of the string (-malware_scan.csv) aren't displayed.</p>
<p>I'm focusing only on the filename with this question, not on the csv writing itself, whose code is omitted.</p>
| 1 | 2016-08-06T20:10:31Z | 38,808,427 | <p>The <code>:</code> character is not allowed in filenames on Windows. You could discard the <code>:</code> separator entirely:</p>
<pre><code>>>> from datetime import datetime
>>> t = datetime.now()
>>> formatted_time = t.strftime('%d-%m-%y %H%M')
>>> formatted_time
'06-08-16 2226'
>>> datetime.strptime(formatted_time, '%d-%m-%y %H%M')
datetime.datetime(2016, 8, 6, 22, 26)
</code></pre>
<p>Or replace that character with an underscore or hyphen.</p>
| 1 | 2016-08-06T20:26:36Z | [
"python",
"string",
"datetime",
"formatting",
"filenames"
] |
formatting date, time and string for filename | 38,808,311 | <p>I want to create a csv file with a filename of the following format:</p>
<p>"day-month-year hour:minute-malware_scan.csv"</p>
<p>Example:" 6-8-2016 21:45-malware_scan.csv"</p>
<p>The first part of the filename is formed by the actual date and time at file creation time, instead "-malware_scan.csv" is a fixed string.</p>
<p>I know that in order to get the date and time I should use the time or datetime module and the strftime() function for formatting.</p>
<p>At first I tried with:</p>
<pre><code>t = datetime.datetime.now()
formatted_time = t.strftime("%d-%m-%y %H:%M")
filename = formatted_time + "-malware_scan.csv"
with open(filename, "a") as f:
...............
</code></pre>
<p>I didn't get the expected result, so I tried another way:</p>
<pre><code>i = datetime.datetime.now()
file_to_open = "{day}-{month}-{year} {hour}:{minute}-malware_scan.csv".format(day = i.day, month = i.month, year = i.year, hour = i.hour, minute = i.minute)
with open(file_to_open, "a") as f:
.......................
</code></pre>
<p>Also using the code above I don't get the expected result.
I get a filename of this kind: "6-8-2016 21". Day, month, year and hour are displayed, but the minutes and the rest of the string (-malware_scan.csv) aren't displayed.</p>
<p>I'm focusing only on the filename with this question, not on the csv writing itself, whose code is omitted.</p>
| 1 | 2016-08-06T20:10:31Z | 38,808,496 | <p>Thanks to Moses Koledoye for spotting the problem. I was thinking I had made a mistake in the Python code, but actually the problem was with the characters in the filename.</p>
<p>According to <a href="https://msdn.microsoft.com/en-us/library/aa365247" rel="nofollow">MSDN</a> the following are reserved characters that cannot be used in a filename on Windows:</p>
<pre><code>< (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)
</code></pre>
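<p>As a hedged sketch (not from the original answer), a timestamp containing reserved characters can be sanitized before being used as a Windows filename, e.g. by replacing every reserved character with a hyphen:</p>

```python
import re

# Replace any Windows-reserved filename character with a hyphen.
# The character class below lists exactly the reserved set from MSDN.
def sanitize_filename(name):
    return re.sub(r'[<>:"/\\|?*]', '-', name)

print(sanitize_filename('6-8-2016 21:45-malware_scan.csv'))
# 6-8-2016 21-45-malware_scan.csv
```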
| 0 | 2016-08-06T20:37:10Z | [
"python",
"string",
"datetime",
"formatting",
"filenames"
] |
How can I find a comment with specified text string | 38,808,461 | <p>I'm using robobrowser to parse some HTML content. It has BeautifulSoup inside. How can I find a comment with a specified string inside?</p>
<pre><code><html>
<body>
<div>
<!-- some commented code here!!!<div><ul><li><div id='ANY_ID'>TEXT_1</div></li>
<li><div>other text</div></li></ul></div>-->
</div>
</body>
</html>
</code></pre>
<p>In fact, I need to get TEXT_1 if I know ANY_ID.
Thanks</p>
| 0 | 2016-08-06T20:32:40Z | 38,808,493 | <p>Use <a class='doc-link' href="http://stackoverflow.com/documentation/beautifulsoup/1940/locating-elements/18725/locating-comments#t=201608062136151957535">the <code>text</code> argument and check the type to be <code>Comment</code></a>. Then, load the contents with <code>BeautifulSoup</code> again and find the desired element by <code>id</code>:</p>
<pre><code>from bs4 import BeautifulSoup
from bs4 import Comment
data = """
<html>
<body>
<div>
<!-- some commented code here!!!<div><ul><li><div id='ANY_ID'>TEXT_1</div></li>
<li><div>other text</div></li></ul></div>-->
</div>
</body>
</html>
"""
soup = BeautifulSoup(data, "html.parser")
comment = soup.find(text=lambda text: isinstance(text, Comment) and "ANY_ID" in text)
soup_comment = BeautifulSoup(comment, "html.parser")
text = soup_comment.find("div", id="ANY_ID").get_text()
print(text)
</code></pre>
<p>Prints <code>TEXT_1</code>.</p>
| 0 | 2016-08-06T20:36:59Z | [
"python",
"beautifulsoup",
"robobrowser"
] |
Python Error~ List index out of range | 38,808,547 | <pre><code>def getSublists(L,n):
List=L
sublists=[]
for i in range(len(L)-(n-1)):
ii=0
sub=[]
while ii<= n:
a=List[ii+i]
sub.append(a)
ii+=1
sublists.append(sub)
return sublists
</code></pre>
<p>I am trying to get all of the possible sublists of a list L and of sublist size n. I am getting an <code>IndexError: list index out of range</code> when I try to run my program. I've messed around with it to no avail and have read other relevant posts. Can someone help me out? </p>
| 0 | 2016-08-06T20:45:22Z | 38,808,638 | <p>Your <code>while</code> condition should be modified:</p>
<pre><code>def getSublists(L,n):
List=L
sublists=[]
for i in range(len(L)-(n-1)):
print ['i: ', i]
ii=0
sub=[]
while ii<= n-1:
print ['ii: ', ii]
a=List[ii+i]
sub.append(a)
ii+=1
sublists.append(sub)
return sublists
</code></pre>
<p>For a list of 10 elements, you can now query all sublists of size 5:</p>
<pre><code>a=range(10)
b=getSublists(a,5)
out:
[[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8],
 [5, 6, 7, 8, 9]]
</code></pre>
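<p>For comparison, the same consecutive sublists can be produced with slicing, which sidesteps the index bookkeeping entirely (a sketch, not part of the original answer):</p>

```python
def get_sublists(L, n):
    # One slice per valid starting index; no manual index arithmetic needed.
    return [L[i:i + n] for i in range(len(L) - n + 1)]

# Same six sublists of size 5 as the output above.
print(get_sublists(list(range(10)), 5))
```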
| 0 | 2016-08-06T20:57:07Z | [
"python"
] |
Python Error~ List index out of range | 38,808,547 | <pre><code>def getSublists(L,n):
List=L
sublists=[]
for i in range(len(L)-(n-1)):
ii=0
sub=[]
while ii<= n:
a=List[ii+i]
sub.append(a)
ii+=1
sublists.append(sub)
return sublists
</code></pre>
<p>I am trying to get all of the possible sublists of a list L and of sublist size n. I am getting an <code>IndexError: list index out of range</code> when I try to run my program. I've messed around with it to no avail and have read other relevant posts. Can someone help me out? </p>
| 0 | 2016-08-06T20:45:22Z | 38,808,681 | <p>You'll want to only append to the sublists array if the sublist is of length n, then break once it is. Otherwise, <code>ii+i</code> could increase beyond <code>len(L) - 1</code>, leading to the index error.</p>
<pre><code> if len(sub) == n:
sublists.append(sub)
break
</code></pre>
<p>Here's a working solution (assuming what you want is what L3viathan asked in comments).</p>
<pre><code>def getSublists(L,n):
List=L
sublists=[]
for i in range(len(L)-(n-1)):
ii=0
sub=[]
while ii<= n:
a=List[ii+i]
sub.append(a)
ii+=1
if len(sub) == n:
sublists.append(sub)
break
return sublists
</code></pre>
| 0 | 2016-08-06T21:02:16Z | [
"python"
] |
Python Error~ List index out of range | 38,808,547 | <pre><code>def getSublists(L,n):
List=L
sublists=[]
for i in range(len(L)-(n-1)):
ii=0
sub=[]
while ii<= n:
a=List[ii+i]
sub.append(a)
ii+=1
sublists.append(sub)
return sublists
</code></pre>
<p>I am trying to get all of the possible sublists of a list L and of sublist size n. I am getting an <code>IndexError: list index out of range</code> when I try to run my program. I've messed around with it to no avail and have read other relevant posts. Can someone help me out? </p>
| 0 | 2016-08-06T20:45:22Z | 38,808,809 | <p>Your code can be fixed changing the while statement to <b>while ii < n</b>, in case you're trying to obtain this result: </p>
<pre><code>l = [1, 2, 3, 4] n = 2
nl = [[1, 2], [2, 3], [3, 4]]
</code></pre>
<p>But I think this is not what you want. Now, if you're lazy like me, you can use itertools.combinations to get all possible combinations. </p>
<p><a href="https://docs.python.org/2/library/itertools.html#itertools.combinations" rel="nofollow">https://docs.python.org/2/library/itertools.html#itertools.combinations</a></p>
<p>Demonstration:</p>
<pre><code>import itertools
l = [1, 2, 3, 4]
nl = itertools.combinations(l, 2)
nl = list(nl)
print nl
</code></pre>
<p>Result = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]</p>
| 0 | 2016-08-06T21:19:24Z | [
"python"
] |
Django "./manage.py bower install" tells me bower isn't installed when it is | 38,808,548 | <p>I'm following the instructions from the django-bower setup readme <a href="https://github.com/nvbn/django-bower" rel="nofollow">here</a>. I've installed django-bower (v5.1.0) via <code>$ pip install -r requirements.txt</code> (django-bower==5.1.0 is in my requirements.txt). Now I'm trying to run <code>$ ./manage.py bower install</code> (as per the instructions) but I'm getting this error:</p>
<blockquote>
<p>BowerNotInstalled: Bower not installed, read instruction here - <a href="http://bower.io/" rel="nofollow">http://bower.io/</a></p>
</blockquote>
<p>Trying to run <code>$ pip install django-bower</code> gives me a <code>Requirement already satisfied</code> message. </p>
<p>What am I missing?</p>
| 0 | 2016-08-06T20:45:23Z | 38,808,672 | <p>That error message indicates that <code>bower</code> cannot be found. <code>django-bower</code> is properly installed.</p>
<p>Check the instructions here: <a href="https://bower.io/#install-bower" rel="nofollow">https://bower.io/#install-bower</a>:</p>
<pre><code>npm install -g bower
</code></pre>
| 2 | 2016-08-06T21:01:33Z | [
"python",
"django",
"pip",
"bower"
] |
tf.contrib.layers.embedding_column from tensor flow | 38,808,643 | <p>I am going through tensorflow tutorial <a href="https://www.tensorflow.org/versions/r0.10/tutorials/wide_and_deep/index.html#tensorflow-wide-deep-learning-tutorial" rel="nofollow">tensorflow</a>. I would like to find description of the following line:</p>
<pre><code>tf.contrib.layers.embedding_column
</code></pre>
<p>I wonder if it uses word2vec or anything else, or maybe I am thinking in completely wrong direction. I tried to click around on GibHub, but found nothing. I am guessing looking on GitHub is not going to be easy, since python might refer to some C++ libraries. Could anybody point me in the right direction?</p>
| 2 | 2016-08-06T20:58:10Z | 38,980,687 | <p>I've been wondering about this too. It's not really clear to me what they're doing, but this is what I found.</p>
<p>In the <a href="http://arxiv.org/pdf/1606.07792v1.pdf" rel="nofollow">paper on wide and deep learning</a>, they describe the embedding vectors as being randomly initialized and then adjusted during training to minimize error.</p>
<p>Normally when you do embeddings, you take some arbitrary vector representation of the data (such as one-hot vectors) and then multiply it by a matrix that represents the embedding. This matrix can be found by PCA or while training by something like t-SNE or word2vec.</p>
<p>The actual code for the embedding_column is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py" rel="nofollow">here</a>, and it's implemented as a class called _EmbeddingColumn which is a subclass of _FeatureColumn. It stores the embedding matrix inside its sparse_id_column attribute. Then, the method to_dnn_input_layer applies this embedding matrix to produce the embeddings for the next layer.</p>
<pre><code> def to_dnn_input_layer(self,
input_tensor,
weight_collections=None,
trainable=True):
output, embedding_weights = _create_embedding_lookup(
input_tensor=self.sparse_id_column.id_tensor(input_tensor),
weight_tensor=self.sparse_id_column.weight_tensor(input_tensor),
vocab_size=self.length,
dimension=self.dimension,
weight_collections=_add_variable_collection(weight_collections),
initializer=self.initializer,
combiner=self.combiner,
trainable=trainable)
</code></pre>
<p>So as far as I can see, it seems like the embeddings are formed by applying whatever learning rule you're using (gradient descent, etc.) to the embedding matrix.</p>
| 2 | 2016-08-16T17:06:35Z | [
"python",
"tensorflow",
"embedding"
] |
Where can I get pycharm-debug.egg for Idea? | 38,808,690 | <p>I can't find <code>pycharm-debug.egg</code> in IntelliJ Idea (2016.2) installation directory, where can I get it?</p>
| 1 | 2016-08-06T21:03:22Z | 38,808,691 | <p>It is distributed as part of PyCharm - in directory <code>debug-eggs</code> - that if available at <a href="https://www.jetbrains.com/pycharm/download" rel="nofollow">https://www.jetbrains.com/pycharm/download</a>.</p>
<p>The file is also available in <a href="https://github.com/JetBrains/intellij-community" rel="nofollow">JetBrains/intellij-community</a> github repo: <a href="https://github.com/JetBrains/intellij-community/blob/162.1628/python/testData/debug/pycharm-debug.egg" rel="nofollow">https://github.com/JetBrains/intellij-community/blob/162.1628/python/testData/debug/pycharm-debug.egg</a>. Optionally change the branch to match your version appropriately.</p>
<p>One has to pay attention to versions since if version of the egg doesn't match the version of Idea warning</p>
<pre class="lang-none prettyprint-override"><code>Warning: wrong debugger version. Use pycharm-debugger.egg from PyCharm installation folder.
</code></pre>
<p>may be printed or the debugger may even refuse connection.</p>
| 1 | 2016-08-06T21:03:22Z | [
"python",
"debugging",
"intellij-idea",
"pycharm",
"remote-debugging"
] |
Why is my 'for' loop not looping? | 38,808,739 | <p>The following Python script should do the following:</p>
<ul>
<li>wait for a key press, then</li>
<li>send <code>X1650 Y0 Z0</code> to an embedded device, then</li>
<li>fill the variable <code>line</code> byte by byte with the response</li>
</ul>
<p>Although <code>print (ser.in_waiting)</code> claims that the input buffer is properly filled, the <code>for</code> loop is not iterating over all of it.</p>
<h2>Code:</h2>
<pre><code>import serial
import time
# configure the serial connections
ser = serial.Serial(
port='COM3',
baudrate=9600,
)
while 1 :
# Wait until user presses a key
eingabe = input("PROMPT >> ")
# Send text string to embedded device
destination_position = 'X1650 Y0 Z0\r\n'
ser.write(destination_position.encode('ascii'))
# Wait until embedded device responds
while ser.in_waiting == 0:
time.sleep(0.1)
# How long is the response?
print ('The response is: ')
print (ser.in_waiting)
print (' bytes long')
# Traverse through the queue
line = []
for c in ser.read():
line.append(chr(c))
print(line)
</code></pre>
<h2>Output:</h2>
<pre><code>D:\7-Thema\Programmieren\projects\robot\remote-control-scripts>python test.py
PROMPT >> GO!
The response is:
39
bytes long
['X']
PROMPT >>
</code></pre>
| 0 | 2016-08-06T21:10:14Z | 38,808,863 | <p>You must specify the number of bytes to be read: with no argument, <code>read()</code> returns just a single byte, which is why only 'X' shows up. Print the line with <code>%r</code> to see exactly what is returned, and then you can convert accordingly:</p>
<pre><code>line = ser.read(ser.in_waiting)
print("%r"%line)
</code></pre>
<p>Here is a docs <a href="http://pyserial.readthedocs.io/en/latest/pyserial_api.html#serial.Serial.read" rel="nofollow">link</a></p>
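<p>Once the bytes are read, they still need decoding before string comparisons. A minimal sketch, assuming the device replies with ASCII text (the sample bytes below are hypothetical stand-ins for <code>ser.read(ser.in_waiting)</code>):</p>

```python
# Hypothetical raw response bytes, standing in for ser.read(ser.in_waiting)
raw = b'X1650 Y0 Z0\r\n'

# Decode to str and drop the trailing line terminator.
line = raw.decode('ascii').strip()
print(line)  # X1650 Y0 Z0
```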
| 2 | 2016-08-06T21:27:23Z | [
"python",
"pyserial"
] |
How do I pass the elements of a Python list to HTML for different HREF links? | 38,808,759 | <p>I have a list of filenames in Python, and I want to pass that to HTML. The HTML page should have different href links for each of the filenames in the list. If I pass the whole list then the href link is again a list (which does not allow me to click on different links), and if I use a for loop to pass the list elements one by one, it is getting displayed as different HTML pages.</p>
<p>EDIT: Here is my code. 'docs' is a list of filenames. This is printing "Results" followed by the first link, then "Results" followed by the second link, and so on. I want it to display "Results" and then all the links one below the other. Basically, I want the loop only for the 'a href' part.</p>
<pre><code>def results(docs):
template = """
<html>
<head>
<title> Results </title>
</head>
<body>
<a href="file://%s"> %s </a>
</body>
</html>
"""
html = ""
for i in range(len(docs)):
html = '\n'.join([html, template % (docs[i], docs[i])])
return html
</code></pre>
<p>PS: Sorry if the question is unclear, it is the first time I am posting a question on stackoverflow.</p>
| 0 | 2016-08-06T21:12:38Z | 38,808,896 | <p>I'd actually use a <em>template engine</em>, like <a href="http://www.makotemplates.org/" rel="nofollow"><code>Mako</code></a>. Working example:</p>
<pre><code>from mako.template import Template
def results(docs):
template = """
<html>
<head>
<title> Results </title>
</head>
<body>
% for doc in docs:
<a href="file://${doc}"> ${doc} </a>
% endfor
</body>
</html>
"""
return Template(template).render(docs=docs)
print(results(["link1", "link2"]))
</code></pre>
<p>Prints:</p>
<pre><code><html>
<head>
<title> Results </title>
</head>
<body>
<a href="file://link1"> link1 </a>
<a href="file://link2"> link2 </a>
</body>
</html>
</code></pre>
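<p>If pulling in a template engine is not an option, the same page can be assembled with plain string joining — a stdlib-only sketch of the idea, with the loop confined to the <code>a href</code> part:</p>

```python
def results(docs):
    # Build one <a> element per filename, then splice them into a single page.
    links = '\n'.join('<a href="file://%s"> %s </a>' % (doc, doc) for doc in docs)
    return ('<html>\n<head>\n<title> Results </title>\n</head>\n'
            '<body>\n%s\n</body>\n</html>' % links)

print(results(['link1', 'link2']))
```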
| 0 | 2016-08-06T21:32:02Z | [
"python",
"html",
"python-2.7"
] |
How pandas.DataFrame.groupby actually work | 38,808,873 | <p>I need to group a <code>pandas.DataFrame</code> by one, two, and three columns and compute the mean of the "groups".</p>
<p>Something like:</p>
<pre class="lang-py prettyprint-override"><code> col1 col2 col3 col4
0 A 17 R 3
1 B 5 T 7
2 F 25 R 11
3 A 33 R 15
4 B 17 T 19
5 F 25 R 23
6 F 25 E 27
</code></pre>
<p><strong>Group by one column: col1</strong></p>
<p>Here I want the result to be (col3 is dropped as it's not numeric):</p>
<pre class="lang-py prettyprint-override"><code> col2 col4
col1 = A | 0 (17+33)/2 (3+15)/2
col1 = B | 1 (5+17)/2 (7+19)/2
col1 = F | 2 (25+25+25)/3 (11+23+27)/3
</code></pre>
<p><strong>Group by two columns: col1 & col3</strong></p>
<pre class="lang-py prettyprint-override"><code> col2 col4
col1 = A & col3 = R | 0 (17+33)/2 (3+15)/2
col1 = B & col3 = T | 1 (5+17)/2 (7+19)/2
col1 = F & col3 = R | 2 (25+25)/2 (11+23)/2
col1 = F & col3 = E | 4 25 27
</code></pre>
<p>And the same thing for grouping by 3 columns.</p>
<p>I found the <code>pandas.DataFrame.groupby().mean()</code> method but I can't figure out how it works exactly.</p>
<p>For example, for this simple dataframe:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: df
Out[2]:
v1 v2 v3 v4
0 0 17 2 3
1 4 5 6 7
2 8 25 10 11
3 12 33 14 15
4 16 17 18 19
5 20 25 22 23
6 24 25 26 27
7 28 29 30 31
8 32 5 34 35
9 36 5 38 39
In [2]: df.groupby(["v2"]).mean()
Out[2]:
v1 v3 v4
v2
5 24.000000 26.000000 27.000000
17 8.000000 10.000000 11.000000
25 17.333333 19.333333 20.333333
29 28.000000 30.000000 31.000000
33 12.000000 14.000000 15.000000
## For this first case it's ok...
In [3]: df.groupby(["v2","v3"]).mean()
Out[3]:
v1 v4
v2 v3
5 6 4 7
34 32 35
38 36 39
17 2 0 3
18 16 19
25 10 8 11
22 20 23
26 24 27
29 30 28 31
33 14 12 15
</code></pre>
<p>How exactly does the <code>groupby</code> function work, and why does this result (Out[3]) not have the same length as the original dataframe (as there is no common (v2, v3) couple in the dataframe)?</p>
| -1 | 2016-08-06T21:29:15Z | 38,809,401 | <p>For your first 2 examples at the top, here is the syntax you are looking for:</p>
<pre><code>>>>df.groupby(['col1'])['col2', 'col4'].mean()
col2 col4
col1
A 25 9.000000
B 11 13.000000
F 25 20.333333
>>>df.groupby(['col1','col3'])['col2', 'col4'].mean()
col2 col4
col1 col3
A R 25 9
B T 11 13
F E 25 27
R 25 17
</code></pre>
<p>Does that help you get the group that you are looking for?</p>
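<p>Conceptually, <code>groupby(...).mean()</code> just buckets the rows by the key columns and averages each bucket's numeric columns. A pure-Python sketch of that idea (not pandas internals), using the data from the question:</p>

```python
from collections import defaultdict

rows = [('A', 17, 3), ('B', 5, 7), ('F', 25, 11), ('A', 33, 15),
        ('B', 17, 19), ('F', 25, 23), ('F', 25, 27)]

# Bucket the (col2, col4) pairs by col1, then average each column per bucket.
groups = defaultdict(list)
for col1, col2, col4 in rows:
    groups[col1].append((col2, col4))

means = {key: tuple(sum(col) / len(col) for col in zip(*vals))
         for key, vals in groups.items()}
print(means['A'])  # (25.0, 9.0)
```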
| 1 | 2016-08-06T22:46:35Z | [
"python",
"pandas",
"dataframe"
] |
Sending Notifications in to multiple users FCM | 38,808,880 | <p>I am configuring my mobile applications with Firebase Cloud Messaging.
I've finally figured out how to send these annoying-to-configure notifications.
My Python code looks like this: </p>
<pre><code>url = 'https://fcm.googleapis.com/fcm/send'
body = {
"data":{
"title":"mytitle",
"body":"mybody",
"url":"myurl"
},
"notification":{
"title":"My web app name",
"body":"message",
"content_available": "true"
},
"to":"device_id_here"
}
headers = {"Content-Type":"application/json",
"Authorization": "key=api_key_here"}
requests.post(url, data=json.dumps(body), headers=headers)
</code></pre>
<p>I would think that putting this in a for loop and swapping device ids to send thousands of notifications would be an immense strain on the server and a bad programming practice (correct me if I'm wrong).
Now the documentation tells me to create "device groups" <a href="https://firebase.google.com/docs/cloud-messaging/notifications" rel="nofollow">https://firebase.google.com/docs/cloud-messaging/notifications</a> which store device ids to send in bulk... this is annoying and inefficient, as my groups for my web application are constantly changing. </p>
<p>Plain and Simple</p>
<p>How do I send the notification above to an array of device id's that I specify in my python code so that i can make only 1 post to FCM instead of thousands.</p>
| 0 | 2016-08-06T21:30:09Z | 38,869,090 | <p>Instead of "to":"device_id" you should use "to":"topic".</p>
<p>Topics are used for group messaging in FCM or GCM:</p>
<p><a href="https://developers.google.com/cloud-messaging/topic-messaging" rel="nofollow">https://developers.google.com/cloud-messaging/topic-messaging</a></p>
| 0 | 2016-08-10T09:17:15Z | [
"python",
"push-notification",
"google-cloud-messaging"
] |
Sending Notifications in to multiple users FCM | 38,808,880 | <p>I am configuring my mobile applications with Firebase Cloud Messaging.
I've finally figured out how to send these annoying-to-configure notifications.
My Python code looks like this: </p>
<pre><code>url = 'https://fcm.googleapis.com/fcm/send'
body = {
"data":{
"title":"mytitle",
"body":"mybody",
"url":"myurl"
},
"notification":{
"title":"My web app name",
"body":"message",
"content_available": "true"
},
"to":"device_id_here"
}
headers = {"Content-Type":"application/json",
"Authorization": "key=api_key_here"}
requests.post(url, data=json.dumps(body), headers=headers)
</code></pre>
<p>I would think that putting this in a for loop and swapping device ids to send thousands of notifications would be an immense strain on the server and a bad programming practice (correct me if I'm wrong).
Now the documentation tells me to create "device groups" <a href="https://firebase.google.com/docs/cloud-messaging/notifications" rel="nofollow">https://firebase.google.com/docs/cloud-messaging/notifications</a> which store device ids to send in bulk... this is annoying and inefficient, as my groups for my web application are constantly changing. </p>
<p>Plain and Simple</p>
<p>How do I send the notification above to an array of device id's that I specify in my python code so that i can make only 1 post to FCM instead of thousands.</p>
| 0 | 2016-08-06T21:30:09Z | 39,447,372 | <p>To send FCM to multiple device you use the key <strong>"registration_ids"</strong> instead of <strong>"to"</strong></p>
<pre><code>"registration_ids": ["fcm_token1", "fcm_token2"]
</code></pre>
<p>Have a look at <a href="https://github.com/olucurious/PyFCM" rel="nofollow">this</a> package and see how they implemented it.</p>
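<p>For illustration, a single request body covering several devices might be built like this — a sketch only, with hypothetical tokens, reusing the structure from the question:</p>

```python
import json

body = {
    "notification": {"title": "My web app name", "body": "message"},
    # All target device tokens go in one list instead of one "to" per request
    "registration_ids": ["fcm_token1", "fcm_token2"],  # hypothetical tokens
}
payload = json.dumps(body)  # POST this once instead of thousands of times
```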
| 0 | 2016-09-12T09:37:12Z | [
"python",
"push-notification",
"google-cloud-messaging"
] |
Keydown function not working | 38,808,937 | <p>I have just downloaded pygame 1.9.2 for python 3.3 and the <code>keydown</code> function is not working. The IDLE shell keeps telling me the same error:</p>
<pre><code>NameError: name 'KEYDOWN' is not defined
</code></pre>
<p>How do I solve this problem? I am kinda new to programming so could you please explain this to me.</p>
| -1 | 2016-08-06T21:38:25Z | 38,811,495 | <p><strong>NOTE: Sorry for deleting my previous answer but Stack Overflow just randomly posted my answer before I was done.</strong></p>
<p>Well, there could be several things wrong. You could be using <code>pygame.key.keydown()</code>, which is wrong, and you should be using <code>pygame.key.get_pressed()</code>.
But what I suspect is wrong with your code is that you're not using <code>pygame.KEYDOWN</code> properly.</p>
<p>If you're trying to do something while a key is being held down, use <code>pygame.key.get_pressed()</code>, and do something like this: </p>
<pre><code>key = pygame.key.get_pressed()
if key[pygame.K_WHATEVER_KEY_YOU_WANT]:
# do code
</code></pre>
<p>and just repeat that for every key you want to check.</p>
<p>If you're just trying to check if a key is being pressed, use <code>pygame.KEYDOWN</code> with an if statement, then check if the key you want is being pressed in a nested if statement <strong>under</strong> the first if statement. Like so:</p>
<pre><code>if event.type == pygame.KEYDOWN:
if event.key == pygame.K_WHATEVER_KEY_YOU_WANT:
#do code
</code></pre>
<p>I'm assuming you have an event variable your using to iterate over <code>pygame.event.get()</code> function that 'gets' events. If not then here is the code <strong>above</strong> in a game/window loop:</p>
<pre><code>running = True # variable to control while loop
while running: # while the variable running is equal to true
for event in pygame.event.get(): # use the variable 'event' to iterate over user events
if event.type == pygame.QUIT: # test if the user is trying to close the window
running = False # break the loop if the user is trying to close the window
pygame.quit() # de-initialize pygame module
quit() # quit() the IDE shell
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_WHATEVER_KEY_YOU_WANT:
#do code
</code></pre>
<p>This is basically how you use <code>pygame.KEYDOWN</code>. It's probably not the only way however. I recommend (since you're new to programming) to read the <a href="http://www.pygame.org/docs/ref/key.html" rel="nofollow">pygame docs</a> on their official website or the <a class='doc-link' href="http://stackoverflow.com/documentation/pygame/5110/event-handling">pygame docs on Stack Overflow</a> for more info about checking key input in pygame. And read some of the <a href="http://stackoverflow.com/questions/25494726/how-to-use-pygame-keydown">answers</a> to this question.</p>
| 1 | 2016-08-07T06:18:52Z | [
"python",
"python-3.x",
"pygame"
] |
In a inorder traversal of a binary search tree, where in the code does it traverse up? | 38,808,959 | <p>I see how it goes down the tree, but I don't see how it traverses back up and onto the right side of the root. Can someone explain? This is fully functional inorder traversal code in Python.</p>
<pre class="lang-py prettyprint-override"><code>def inorder(self):
if self:
if self.leftChild:
self.leftChild.inorder()
print(str(self.value))
if self.rightChild:
self.rightChild.inorder()
</code></pre>
<p>Where in this code specifically does it go back in the tree?</p>
| 0 | 2016-08-06T21:41:46Z | 38,811,413 | <p>Reaching the end of a function is the same thing as executing <code>return</code> which is the same thing as executing <code>return None</code>.</p>
<p>For functions that do not return a meaningful value, it is preferred to let execution reach the end of the function rather than place a superfluous <code>return</code> at the end of the function.</p>
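<p>Concretely, the "traversing up" is just each recursive call returning to its caller; the parent's method resumes right after the line that made the call. A runnable sketch of the same traversal, adapted to collect values instead of printing (the <code>Node</code> class and the <code>out</code> parameter are scaffolding added here, not from the question):</p>

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.leftChild = left
        self.rightChild = right

    def inorder(self, out):
        if self.leftChild:
            self.leftChild.inorder(out)   # descend left
        out.append(self.value)            # runs after the left call *returns*
        if self.rightChild:
            self.rightChild.inorder(out)  # descend right
        # falling off the end implicitly returns None to the caller

tree = Node(2, Node(1), Node(3))
order = []
tree.inorder(order)
print(order)  # [1, 2, 3]
```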
| 1 | 2016-08-07T06:07:29Z | [
"python",
"algorithm",
"data-structures",
"binary-tree",
"binary-search-tree"
] |
Remove some x labels with Seaborn | 38,809,061 | <p>In the screenshot below, all my x-labels are overlapping each other.</p>
<pre><code>g = sns.factorplot(x='Age', y='PassengerId', hue='Survived', col='Sex', kind='strip', data=train);
</code></pre>
<p>I know that I can remove all the labels by calling <code>g.set(xticks=[])</code>, but is there a way to just show some of the Age labels, like 0, 20, 40, 60, 80?</p>
<p><a href="http://i.stack.imgur.com/UUdH4.png" rel="nofollow"><img src="http://i.stack.imgur.com/UUdH4.png" alt="enter image description here"></a></p>
| 3 | 2016-08-06T21:56:00Z | 38,809,632 | <p>I am not sure why there aren't sensible default ticks and values like there are on the y-axis. At any rate you can do something like the following:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
titanic = sns.load_dataset('titanic')
sns.factorplot(x='age',y='fare',hue='survived',col='sex',data=titanic,kind='strip')
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%d'))
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=20))
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="http://i.stack.imgur.com/B3G76.png" rel="nofollow"><img src="http://i.stack.imgur.com/B3G76.png" alt="enter image description here"></a></p>
| 3 | 2016-08-06T23:40:22Z | [
"python",
"seaborn"
] |
python variable concatination with different equators | 38,809,069 | <p>How would I be able to make a variable that contains another = sign in it, like this:</p>
<pre><code>newval = dict[key_to_find] = int(change)
</code></pre>
| 0 | 2016-08-06T21:57:26Z | 38,809,132 | <p>I'm basing the answer off of</p>
<blockquote>
<p>variable that <strong>contains</strong> another = sign in it</p>
</blockquote>
<p>I think you are trying to concatenate <code>dict[key_to_find]</code> and <code>int(change)</code> with an equal sign in the middle. This can be done with the following</p>
<pre><code>newval = str(dict[key_to_find]) + " = " + str(int(change))
</code></pre>
<p>The reason that I'm leaving in the <code>int</code> cast is because if you had change as <code>7.5</code>, then you would want it in the string as <code>7</code>.</p>
| 0 | 2016-08-06T22:06:32Z | [
"python"
] |
python variable concatination with different equators | 38,809,069 | <p>How would I be able to make a variable that contains another = sign in it, like this:</p>
<pre><code>newval = dict[key_to_find] = int(change)
</code></pre>
| 0 | 2016-08-06T21:57:26Z | 38,809,231 | <p>The problem with doing this is that in Python the '=' is an assignment operator rather than a symbol.</p>
<p>Python is reading what you wrote as trying to assign to two variables. For example:</p>
<p><code>foo = "test"</code> and <code>bar = "test"</code> can be written as <code>foo = bar = "test"</code></p>
<p><strong>Creating a variable name with an '=' it can't be done.</strong></p>
<p>If you are trying to create a new variable that has a string value that comes from two other sources you can use the format that intboolstring suggested, or the <code>format</code> function is pretty handy.</p>
<p>Going along with your example: </p>
<p><code>newval = "{dict} = {int}".format(dict=dict[key_to_find], int = int(change))</code></p>
<p>To give a simpler example of how this works:</p>
<pre><code>var_a = 'Hello'
var_b = 'world'
variable = "{a} {b}!".format(a = var_a, b = var_b)
</code></pre>
<p>Variable will print:</p>
<blockquote>
<p>"Hello world!"</p>
</blockquote>
| 0 | 2016-08-06T22:20:18Z | [
"python"
] |
pyqt QThread and Signal and Emit not working for this answer | 38,809,212 | <p>So I've been looking around for an answer for how to use QThreads and Signals, and came across an answer from:</p>
<p><a href="http://stackoverflow.com/questions/16919472/how-to-access-gui-elements-from-another-thread-in-pyqt?rq=1">How to access GUI elements from another thread in PyQt</a></p>
<p>And I'm wondering if it's not working for anyone else. The window just freezes. Is there anything wrong with my computer, or is it the answer?</p>
<p>The code is:</p>
<pre><code>from PyQt4 import QtGui as gui
from PyQt4 import QtCore as core
import sys
import time
class ServerThread(core.QThread):
def __init__(self, parent=None):
core.QThread.__init__(self)
def start_server(self):
for i in range(1,6):
time.sleep(1)
self.emit(core.SIGNAL("dosomething(QString)"), str(i))
def run(self):
self.start_server()
class MainApp(gui.QWidget):
def __init__(self, parent=None):
super(MainApp,self).__init__(parent)
self.label = gui.QLabel("hello world!!")
layout = gui.QHBoxLayout(self)
layout.addWidget(self.label)
self.thread = ServerThread()
self.thread.start()
self.connect(self.thread, core.SIGNAL("dosomething(QString)"), self.doing)
def doing(self, i):
self.label.setText(i)
if i == "5":
self.destroy(self, destroyWindow =True, destroySubWindows = True)
sys.exit()
app = gui.QApplication(sys.argv)
form = MainApp()
form.show()
app.exec_()
</code></pre>
| 0 | 2016-08-06T22:17:02Z | 38,850,118 | <p>That code works for me <code>Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:40:30) [MSC v.1500 64 bit (AMD64)] on win32</code>. There is one little bug in that snippet though:</p>
<p>You should change this line <code>self.destroy(self, destroyWindow =True, destroySubWindows = True)</code> to this: <code>self.destroy(destroyWindow =True, destroySubWindows = True)</code>, otherwise you'll get this error:</p>
<blockquote>
<p>TypeError: QWidget.destroy(bool destroyWindow=True, bool destroySubWindows=True): argument 1 has unexpected type 'MainApp'</p>
</blockquote>
| 0 | 2016-08-09T11:59:39Z | [
"python",
"pyqt"
] |
Using comprehensions instead of a for loop | 38,809,247 | <p>The following is a simplified example of my code.</p>
<pre><code>>>> def action(num):
print "Number is", num
>>> items = [1, 3, 6]
>>> for i in [j for j in items if j > 4]:
action(i)
Number is 6
</code></pre>
<p><strong>My question is the following:</strong> is it bad practice (for reasons such as code clarity) to simply replace the <code>for</code> loop with a comprehension which will still call the <code>action</code> function? That is:</p>
<pre><code>>>> (action(j) for j in items if j > 2)
Number is 6
</code></pre>
| 1 | 2016-08-06T22:21:43Z | 38,809,404 | <p>This shouldn't use a generator or comprehension at all.</p>
<pre><code>def action(num):
print "Number is", num
items = [1, 3, 6]
for j in items:
if j > 4:
        action(j)
</code></pre>
<p>Generators evaluate lazily. The expression <code>(action(j) for j in items if j > 2)</code> will merely return a generator expression to the caller. Nothing will happen in it unless you explicitly exhaust it. List comprehensions evaluate eagerly, but, in this particular case, you are left with a <code>list</code> with no purpose. Just use a regular loop.</p>
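<p>A quick sketch of that laziness: the generator expression performs no calls at all until something consumes it.</p>

```python
calls = []
gen = (calls.append(j) for j in [1, 3, 6] if j > 2)

assert calls == []  # nothing has executed yet
list(gen)           # exhausting the generator triggers the appends
assert calls == [3, 6]
```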
| 4 | 2016-08-06T22:46:51Z | [
"python",
"list-comprehension"
] |
Using comprehensions instead of a for loop | 38,809,247 | <p>The following is a simplified example of my code.</p>
<pre><code>>>> def action(num):
print "Number is", num
>>> items = [1, 3, 6]
>>> for i in [j for j in items if j > 4]:
action(i)
Number is 6
</code></pre>
<p><strong>My question is the following:</strong> is it bad practice (for reasons such as code clarity) to simply replace the <code>for</code> loop with a comprehension which will still call the <code>action</code> function? That is:</p>
<pre><code>>>> (action(j) for j in items if j > 2)
Number is 6
</code></pre>
| 1 | 2016-08-06T22:21:43Z | 38,809,502 | <p>While I personally favour <a href="http://stackoverflow.com/a/38809404/1636276">Tigerhawk's solution</a>, there might be a middle ground between his and willywonkadailyblah's solution (now deleted).</p>
<p>One of willywonkadailyblah's points was:</p>
<blockquote>
<p>Why create a new list instead of just using the old one? You already have the condition to filter out the correct elements, so why put them away in memory and come back for them?</p>
</blockquote>
<p>One way to avoid this problem is to use <em>lazy evaluation</em> of the filtering i.e. have the filtering done only when iterating using the for loop by making the filtering part of a generator expression rather than a list comprehension:</p>
<pre><code>for i in (j for j in items if j > 4):
action(i)
</code></pre>
<p><strong>Output</strong></p>
<pre class="lang-none prettyprint-override"><code>Number is 6
</code></pre>
<p>In all honesty, I think Tigerhawk's solution is the best for this, though. This is just one possible alternative.</p>
<p>The reason that I proposed this is that it reminds me a lot of LINQ queries in C#, where you define a lazy way to extract, filter and project elements from a sequence in one statement (the LINQ expression) and can then use a separate <code>foreach</code> loop with that query to perform some action on each element.</p>
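<p>Python offers the same deferred-pipeline style through <code>filter</code> (or <code>itertools</code>): like a LINQ query, it builds a lazy sequence that a separate loop then consumes (a small sketch):</p>

```python
items = [1, 3, 6]
seen = []

# filter() is lazy, like a generator expression or a LINQ query;
# the predicate runs only as the for loop pulls items through it.
query = filter(lambda j: j > 4, items)
for i in query:
    seen.append(i)

print(seen)  # [6]
```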
| 1 | 2016-08-06T23:06:53Z | [
"python",
"list-comprehension"
] |
Using comprehensions instead of a for loop | 38,809,247 | <p>The following is a simplified example of my code.</p>
<pre><code>>>> def action(num):
print "Number is", num
>>> items = [1, 3, 6]
>>> for i in [j for j in items if j > 4]:
action(i)
Number is 6
</code></pre>
<p><strong>My question is the following:</strong> is it bad practice (for reasons such as code clarity) to simply replace the <code>for</code> loop with a comprehension which will still call the <code>action</code> function? That is:</p>
<pre><code>>>> (action(j) for j in items if j > 2)
Number is 6
</code></pre>
| 1 | 2016-08-06T22:21:43Z | 38,809,513 | <p>This is bad practice. Firstly, your code fragment does not produce the desired output. You would instead get something like: <code><generator object <genexpr> at 0x03D826F0></code>. </p>
<p>Secondly, a list comprehension is for creating sequences, and generators are for creating streams of objects. Typically, they do not have side effects. Your action function is a prime example of a side effect -- it prints its input and returns nothing. Rather, for each item it generates, a generator should take an input and compute some output. e.g.</p>
<pre><code>doubled_odds = [x*2 for x in range(10) if x % 2 != 0]
</code></pre>
<p>By using a generator you are obfuscating the purpose of your code, which is to mutate global state (printing something), not to create a stream of objects.
In contrast, a plain for loop makes the code slightly longer (basically just more whitespace), but you can immediately see that the purpose is to apply a function to a selection of items (as opposed to creating a new stream/list of items).</p>
<pre><code>for i in items:
    if i > 4:
action(i)
</code></pre>
<p>Remember that generators are still looping constructs and that the underlying bytecode is more or less the same (if anything, generators are marginally less efficient), <em>and</em> you lose clarity. Generators and list comprehensions are great, but this is not the right situation for them.</p>
| 1 | 2016-08-06T23:09:00Z | [
"python",
"list-comprehension"
] |
Python 2.7 check if a file is encoded with UTF-8 | 38,809,491 | <p>My current solution is just read all bytes of a file, try to decode, if any exception, I will say this file is not properly encoded. Any other more elegant ways? Thanks.</p>
<pre><code>utfbytes.decode('utf-8')
</code></pre>
<p>regards,
Lin</p>
| 0 | 2016-08-06T23:05:36Z | 38,809,519 | <p><strong><a href="http://stackoverflow.com/a/436299/2483271">No</a></strong>. From that answer:</p>
<blockquote>
<p>Correctly detecting the encoding all times is impossible.</p>
<p>(From chardet FAQ:)</p>
<blockquote>
<p>However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will instantly recognize that that isn't English (even though it is composed entirely of English letters). By studying lots of “typical” text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text's language.</p>
</blockquote>
</blockquote>
<p>However, <a href="https://www.crummy.com/software/BeautifulSoup/bs3/documentation.html#Beautiful%20Soup%20Gives%20You%20Unicode,%20Dammit" rel="nofollow">some libraries</a> do exist that make a best-effort attempt to detect the encoding.</p>
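<p>Since exact detection is impossible, a common pragmatic fallback is to try a short list of candidate encodings and take the first that decodes cleanly (a stdlib-only sketch; the candidate list is an assumption -- adjust it for your data, or use a statistical library such as chardet instead):</p>

```python
def guess_encoding(data, candidates=("utf-8", "utf-16", "latin-1")):
    """Return the first candidate encoding that decodes `data` without error.

    latin-1 maps every possible byte, so keep it last as a catch-all fallback.
    """
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeError:  # UnicodeDecodeError is a subclass
            continue
    return None

print(guess_encoding("héllo".encode("utf-8")))    # utf-8
print(guess_encoding("héllo".encode("latin-1")))  # latin-1
```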
| 1 | 2016-08-06T23:10:35Z | [
"python",
"python-2.7",
"utf-8"
] |
Filling empty DataFrame with numpy structured array | 38,809,506 | <p>I created an empty <code>DataFrame</code> by doing the following:</p>
<pre><code>In [581]: df=pd.DataFrame(np.empty(8,dtype=([('f0', '<i8'), ('f1', '<f8'),('f2', '<i8'), ('f3', '<f8'),('f4', '<f8'),('f5', '<f8'), ('f6', '<f8'),('f7', '<f8')])))
In [582]: df
Out[582]:
f0 f1 f2 f3 f4 \
0 3714580581 2.448187e-316 3928263553 2.447690e-316 0.000000e+00
1 0 0.000000e+00 0 0.000000e+00 0.000000e+00
2 0 0.000000e+00 0 0.000000e+00 3.284339e-315
3 0 0.000000e+00 0 0.000000e+00 0.000000e+00
4 0 0.000000e+00 298532785 4.341609e-315 0.000000e+00
5 0 0.000000e+00 1178683509 2.448189e-316 0.000000e+00
6 0 0.000000e+00 0 0.000000e+00 7.659812e-315
7 0 0.000000e+00 4211786525 2.448192e-316 0.000000e+00
f5 f6 f7
0 0.000000e+00 0.000000e+00 0.000000e+00
1 0.000000e+00 0.000000e+00 0.000000e+00
2 2.447692e-316 9.702437e-315 2.448246e-316
3 0.000000e+00 0.000000e+00 0.000000e+00
4 0.000000e+00 0.000000e+00 0.000000e+00
5 0.000000e+00 0.000000e+00 0.000000e+00
6 4.341599e-315 0.000000e+00 0.000000e+00
7 0.000000e+00 0.000000e+00 0.000000e+00
</code></pre>
<p>Now i am trying to change the data of the first 4 rows using a <code>numpy</code> <code>structured array</code>:</p>
<pre><code>In [583]: x=np.ones(4,dtype=([('f0', '<i8'), ('f1', '<f8'),('f2', '<i8'), ('f3', '<f8'),('f4', '<f8'),('f5', '<f8'), ('f6', '<f8'),('f7', '<f8')]))
In [584]: x
Out[584]:
array([(1L, 1.0, 1L, 1.0, 1.0, 1.0, 1.0, 1.0),
(1L, 1.0, 1L, 1.0, 1.0, 1.0, 1.0, 1.0),
(1L, 1.0, 1L, 1.0, 1.0, 1.0, 1.0, 1.0),
(1L, 1.0, 1L, 1.0, 1.0, 1.0, 1.0, 1.0)],
dtype=[('f0', '<i8'), ('f1', '<f8'), ('f2', '<i8'), ('f3', '<f8'), ('f4', '<f8'), ('f5', '<f8'), ('f6', '<f8'), ('f7', '<f8')])
In [585]: df[0:4]=x
ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>Is there a different way to accomplish this?</p>
<p>This would partially work if i filled the <code>DataFrame</code> with a view of the <code>structured array</code>:</p>
<pre><code>In [587]: df[0:4]=x.view(np.float64).reshape(x.shape + (-1,))
In [588]: df
Out[588]:
f0 f1 f2 f3 f4 f5 f6 f7
0 0 1.0 0 1.000000e+00 1.000000e+00 1.000000e+00 1.0 1.0
1 0 1.0 0 1.000000e+00 1.000000e+00 1.000000e+00 1.0 1.0
2 0 1.0 0 1.000000e+00 1.000000e+00 1.000000e+00 1.0 1.0
3 0 1.0 0 1.000000e+00 1.000000e+00 1.000000e+00 1.0 1.0
4 0 0.0 298532785 4.341609e-315 0.000000e+00 0.000000e+00 0.0 0.0
5 0 0.0 1178683509 2.448189e-316 0.000000e+00 0.000000e+00 0.0 0.0
6 0 0.0 0 0.000000e+00 7.659812e-315 4.341599e-315 0.0 0.0
7 0 0.0 4211786525 2.448192e-316 0.000000e+00 0.000000e+00 0.0 0.0
</code></pre>
<p>But as you can see, the <code>f0</code> and <code>f2</code> columns are now 0, because the view reinterprets the int64 bit pattern of 1 as a float64 (a tiny denormal that displays as 0) instead of converting the value.</p>
| 2 | 2016-08-06T23:07:35Z | 39,867,984 | <p>The obvious solution is to give pandas a pandas dataframe:</p>
<pre><code>df[0:4] = pd.DataFrame(x)
</code></pre>
<p>This is very performance heavy, but in your example it is probably not noticeable.</p>
<p>I would suggest you use the <code>.iloc</code> method as it is more explicit.</p>
<pre><code>df.iloc[0:4] = pd.DataFrame(x)
</code></pre>
<p>Of course, the performance drop comes from instantiating a new object, the pandas DataFrame, so this has the same performance flaw.</p>
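<p>If preserving the integer columns matters, a field-by-field assignment sidesteps the float-reinterpretation problem entirely, because each column keeps its own dtype (a sketch with a trimmed two-field dtype, assuming pandas and numpy are available):</p>

```python
import numpy as np
import pandas as pd

dt = [('f0', '<i8'), ('f1', '<f8')]
df = pd.DataFrame(np.zeros(8, dtype=dt))
x = np.ones(4, dtype=dt)

# Assign each structured-array field into the matching column slice;
# .loc with a label slice is inclusive, so 0:3 covers the first four rows.
for name in x.dtype.names:
    df.loc[0:3, name] = x[name]

print(df.dtypes)  # f0 stays int64, f1 stays float64
```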
| 0 | 2016-10-05T07:34:30Z | [
"python",
"pandas",
"numpy",
"dataframe",
"structured-array"
] |
How to make a test class or function for fibonacci with pytest? | 38,809,617 | <pre><code>def fibR(n):
if n==1 or n==2:
return 1
    return fibR(n-1)+fibR(n-2)
print (fibR(5))
</code></pre>
<p>How could I make a test class for Fibonacci, for example?</p>
| 1 | 2016-08-06T23:34:22Z | 38,809,670 | <p>This is the closed-form equation (Binet's formula) for the nth Fibonacci number:</p>
<p><a href="http://i.stack.imgur.com/fY87P.png" rel="nofollow"><img src="http://i.stack.imgur.com/fY87P.png" alt="enter image description here"></a></p>
<p>You need to create a function that returns that value. Then</p>
<pre><code>from math import sqrt

def fibequation(n):
    return ((1+sqrt(5))**n-(1-sqrt(5))**n)/(2**n*sqrt(5))

# Testing fibR; round() compensates for floating-point error
assert fibR(10) == round(fibequation(10))
</code></pre>
<p>Or you can make a test for known fib values </p>
<pre><code>f12 = 144
f14 = 377
assert fibR(12) == f12
assert fibR(14) == f14
</code></pre>
<p>If your fibR works for those values, it is doing well.</p>
| 2 | 2016-08-06T23:48:04Z | [
"python",
"python-3.x",
"fibonacci"
] |
How to make a test class or function for fibonacci with pytest? | 38,809,617 | <pre><code>def fibR(n):
if n==1 or n==2:
return 1
    return fibR(n-1)+fibR(n-2)
print (fibR(5))
</code></pre>
<p>How could I make a test class for Fibonacci, for example?</p>
| 1 | 2016-08-06T23:34:22Z | 38,809,702 | <p>I stuck your existing code in a file called <code>fib.py</code>:</p>
<pre><code>def fibR(n):
if n==1 or n==2:
return 1
return fibR(n-1)+fibR(n-2)
</code></pre>
<p>In the same directory I created a file called <code>test_fib.py</code>:</p>
<pre><code>import pytest
from fib import fibR
def test_fib_1_equals_1():
assert fibR(1) == 1
def test_fib_2_equals_1():
assert fibR(2) == 1
def test_fib_6_equals_8():
assert fibR(6) == 8
</code></pre>
<p>If I run <code>py.test</code> in this directory from the command line, I can automatically check the correctness of <code>fibR</code> using these tests:</p>
<pre><code>collected 3 items
test_fib.py ...
================= 3 passed in 0.01 seconds ===========
</code></pre>
| 2 | 2016-08-06T23:57:26Z | [
"python",
"python-3.x",
"fibonacci"
] |
Name conversion is not defined | 38,809,622 | <p>I have an error when running the following in python</p>
<blockquote>
<p>Runtime error
Traceback (most recent call last):
File "&lt;stdin&gt;", line 6, in &lt;module&gt;
File "C:\Python27\ArcGIS10.3\Lib\rpy2\robjects\__init__.py", line 55, in &lt;module&gt;
conversion.ri2py = default_ri2py
NameError: name 'conversion' is not defined</p>
</blockquote>
<p>this is my script <code>__init__.py</code></p>
<pre><code>import rpy2.rinterface as rinterface
import rpy2.rlike.container as rlc
import rpy2.robjects.conversion
conversion.ri2py = default_ri2py
</code></pre>
| -1 | 2016-08-06T23:35:40Z | 38,810,091 | <p>The variable is simply not defined in that namespace. Use the snippet below to define it:</p>
<pre><code>import rpy2.robjects as robjects
robjects.conversion.ri2py = default_ri2py
</code></pre>
<p><a href="http://rpy.sourceforge.net/rpy2/doc-2.1/html/robjects_convert.html" rel="nofollow">http://rpy.sourceforge.net/rpy2/doc-2.1/html/robjects_convert.html</a></p>
| 0 | 2016-08-07T01:25:31Z | [
"python"
] |
When does ctypes free memory? | 38,809,624 | <p>In Python I'm using ctypes to exchange data with a C library, and the call interface involves nested pointers-to-structs. </p>
<p>If the memory was allocated in C, then Python should (deeply) extract a copy of any needed values and then explicitly ask the C library to deallocate the memory.</p>
<p>If the memory was allocated in Python, presumably the memory will be deallocated soon after the corresponding ctypes object passes out of scope. How does this work for pointers? If I create a pointer object from a string buffer, then do I need to keep a variable referencing that original buffer object in scope, to prevent this pointer from dangling? Or does the pointer object itself automatically do this for me (even though it won't return the original object)? Does it make any difference whether I'm using <code>pointer</code>, <code>POINTER</code>, <code>cast</code>, <code>c_void_p</code>, or <code>from_address(addressof)</code>? </p>
| 1 | 2016-08-06T23:36:18Z | 38,826,578 | <p>Nested pointers to simple objects seem fine. The documentation is explicit that ctypes doesn't support "original object return", but implies that the pointer does store a python-reference in order to keep-alive its target object (the precise mechanics might be implementation-specific).</p>
<pre><code>>>> from ctypes import *
>>> x = c_int(7)
>>> triple_ptr = pointer(pointer(pointer(x)))
>>> triple_ptr.contents.contents.contents.value == x.value
True
>>> triple_ptr.contents.contents.contents is x
False
>>> triple_ptr._objects['1']._objects['1']._objects['1'] is x # CPython 3.5
True
</code></pre>
<p>Looks like the pointer function is no different to the POINTER template constructor (like how <code>create_string_buffer</code> relates to <code>c_char * size</code>).</p>
<pre><code>>>> type(pointer(x)) is type(POINTER(c_int)(x))
True
</code></pre>
<p>Casting to void also seems to keep the reference (but I'm not sure why it modifies the original pointer?).</p>
<pre><code>>>> ptr = pointer(x)
>>> ptr._objects
{'1': c_int(7)}
>>> pvoid = cast(ptr, c_void_p)
>>> pvoid._objects is ptr._objects
True
>>> pvoid._objects
{139665053613048: <__main__.LP_c_int object at 0x7f064de87bf8>, '1': c_int(7)}
>>> pvoid._objects['1'] is x
True
</code></pre>
<p>Creating an object directly from a memory buffer (or address thereof) looks more fraught.</p>
<pre><code>>>> v = c_void_p.from_buffer(triple_ptr)
>>> v2 = c_void_p.from_buffer_copy(triple_ptr)
>>> type(v._objects)
<class 'memoryview'>
>>> POINTER(POINTER(POINTER(c_int))).from_buffer(v)[0][0][0] == x.value
True
>>> p3 = POINTER(POINTER(POINTER(c_int))).from_address(addressof(triple_ptr))
>>> v2._objects is None is p3._objects is p3._b_base_
True
</code></pre>
<p>Incidentally, <code>byref</code> probably keeps alive the memory it references.</p>
<pre><code>>>> byref(x)._obj is x
True
</code></pre>
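<p>The practical consequence of these keep-alive references is that dropping your own name for the target does not invalidate the pointer (a minimal sketch):</p>

```python
import ctypes

x = ctypes.c_int(7)
p = ctypes.pointer(x)

del x  # the pointer's internal reference keeps the c_int alive
assert p.contents.value == 7  # still valid: no dangling pointer

# By contrast, a raw integer address (e.g. from addressof) carries no such
# reference, so objects built via from_address must have their target kept
# alive manually by the caller.
```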
| 0 | 2016-08-08T10:14:42Z | [
"python",
"memory-management",
"ctypes"
] |
Keras import error Nadam | 38,809,686 | <p>I am getting an import error when trying to import the Keras module Nadam:</p>
<pre><code>>>> from keras.optimizers import Nadam
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name Nadam
</code></pre>
<p>I can import and use SGD, Adam, etc, just not this optimizer. Any help appreciated.</p>
<p>I installed Keras using:</p>
<pre><code>git clone https://github.com/fchollet/keras.git
sudo python2.7 setup.py install
</code></pre>
<p>I have just found that, if I try to import it using the shell immediately after installation, the Nadam import works. But Nadam won't import in my script. So it's a path issue? </p>
| 10 | 2016-08-06T23:51:51Z | 39,250,535 | <p>If you can import something in one place but not another, it's definitely an issue with the import system. So, carefully check the relevant variables (<code>sys.path</code>, <code>PYTHON_PATH</code>) and where the modules in each case are being imported from (<code>sys.modules</code>).</p>
<p>For a more in-depth reading, I direct you to the <a href="https://docs.python.org/3/reference/import.html">Python import system docs</a> and <a href="http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html" rel="nofollow">an overview of common traps in the system</a>.</p>
<p>You may also have an old version of Keras installed somewhere: Nadam is <a href="https://github.com/fchollet/keras/commit/1312ed1a9cbfdf18d53e15f0e54329523debd70c" rel="nofollow">a fairly recent addition</a> (2016-05), so this may be the cause for the "can import other optimizers but not this one" behaviour.</p>
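<p>A quick way to see which copy of a package a given interpreter would actually import (a stdlib-only sketch, using <code>json</code> as a stand-in module name):</p>

```python
import sys
import importlib.util

# Where would THIS interpreter load the module from?
spec = importlib.util.find_spec("json")
print(spec.origin)  # filesystem path of the module that would be imported
print(sys.path)     # the search path that produced it
```

Running this under each interpreter (the shell that works and the one your script uses) quickly shows whether they resolve the package to different installations.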
| 3 | 2016-08-31T13:08:02Z | [
"python",
"path",
"theano",
"keras"
] |
Keras import error Nadam | 38,809,686 | <p>I am getting an import error when trying to import the Keras module Nadam:</p>
<pre><code>>>> from keras.optimizers import Nadam
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name Nadam
</code></pre>
<p>I can import and use SGD, Adam, etc, just not this optimizer. Any help appreciated.</p>
<p>I installed Keras using:</p>
<pre><code>git clone https://github.com/fchollet/keras.git
sudo python2.7 setup.py install
</code></pre>
<p>I have just found that, if I try to import it using the shell immediately after installation, the Nadam import works. But Nadam won't import in my script. So it's a path issue? </p>
| 10 | 2016-08-06T23:51:51Z | 39,351,292 | <p>It can happen if you're using a different version of Python. Let's say you have Python 2.7.x installed globally with the package, but your script runs under Python 3.x. In that case you'll be able to import the module from the 2.7 shell, but the script, running under the other version, won't find it.</p>
| 1 | 2016-09-06T14:18:06Z | [
"python",
"path",
"theano",
"keras"
] |
df.to_dict() only get one row of original dataframe (df) | 38,809,705 | <p>I have the following dataframe:</p>
<p>Note: date is the index</p>
<pre><code>city morning afternoon evening midnight
date
2014-05-01 YVR 2.32 4.26 -4.87 6.58
2014-05-01 YYZ 24.78 2.90 -50.55 6.64
2014-05-01 DFW 24.78 2.90 -50.55 6.64
2014-05-01 PDX 2.40 4.06 -4.06 6.54
2014-05-01 SFO 30.35 9.96 64.24 6.66
</code></pre>
<p>I try to save this df to a dict by df.to_dict() but I only get one row:</p>
<pre><code>df.to_dict():
{'city': {Timestamp('2014-05-01 00:00:00'): 'SFO'},
'morning': {Timestamp('2014-05-01 00:00:00'): 9.9600000000000009},
'afternoon': {Timestamp('2014-05-01 00:00:00'): 6.6600000000000001},
'evening': {Timestamp('2014-05-01 00:00:00'): 30.350000000000001},
'midnight': {Timestamp('2014-05-01 00:00:00'): 64.239999999999995}}
</code></pre>
<p>Shouldn't the entire dataframe be output in the dict? </p>
| 1 | 2016-08-06T23:59:08Z | 38,809,763 | <p>The output you're getting is using the index as a key in a dictionary. You have a repeated index, and dictionary keys must be unique, so it's not going to work.</p>
<p>You'll have to choose another output format. I find <code>records</code> useful, as in</p>
<pre><code>In [65]: df.reset_index().to_dict("records")
Out[65]:
[{'afternoon': 4.26,
'city': 'YVR',
'date': '2014-05-01',
'evening': -4.87,
'midnight': 6.58,
'morning': '2.32'},
{'afternoon': 2.9,
'city': 'YYZ',
'date': '2014-05-01',
'evening': -50.55,
'midnight': 6.64,
'morning': '24.78'},
{'afternoon': 2.9,
'city': 'DFW',
'date': '2014-05-01',
'evening': -50.55,
'midnight': 6.64,
'morning': '24.78'},
{'afternoon': 4.06,
'city': 'PDX',
'date': '2014-05-01',
'evening': -4.06,
'midnight': 6.54,
'morning': '2.40'},
{'afternoon': 9.96,
'city': 'SFO',
'date': '2014-05-01',
'evening': 64.24,
'midnight': 6.66,
'morning': '30.35'}]
</code></pre>
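<p>The key-collision behaviour is easy to demonstrate: with the default orientation, later rows silently overwrite earlier ones that share an index label (a small sketch with a toy frame):</p>

```python
import pandas as pd

df = pd.DataFrame({"city": ["YVR", "YYZ"]}, index=["2014-05-01", "2014-05-01"])

# Default orient="dict" keys each column's mapping on the index,
# so only the last row for a duplicated label survives.
flat = df.to_dict()
print(flat)  # {'city': {'2014-05-01': 'YYZ'}}

# orient="records" keeps every row (the index is dropped unless reset first).
records = df.reset_index().to_dict("records")
print(len(records))  # 2
```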
| 3 | 2016-08-07T00:13:48Z | [
"python",
"pandas",
"dictionary"
] |