title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Pandas interpreting datetime wrong | 38,490,144 | <p>This probably has a quick answer and I'm being silly, but after one hour of struggling and 3 cups of coffee...</p>
<p>I have this (not very well formatted) time series which is indexing some readings, here's part of it:</p>
<pre><code>71670 9/6/16 8:58:1
71671 9/6/16 8:59:1
71672 9/6/16 9:0:1
71673 9/6/16 9:1:1
71674 9/6/16 9:2:1
71675 9/6/16 9:3:1
71676 9/6/16 9:4:1
71677 9/6/16 9:5:1
71678 9/6/16 9:6:1
71679 9/6/16 9:7:1
71680 9/6/16 9:8:1
71681 9/6/16 9:9:1
71682 9/6/16 9:10:1
71683 9/6/16 9:11:1
</code></pre>
<p>Now this is actually the <strong>9th of June</strong> but when I call pd.to_datetime it interprets this as the 6th of September. How do I tell it what I mean?</p>
| 0 | 2016-07-20T20:20:41Z | 38,490,208 | <p>Use the dayfirst parameter:</p>
<pre><code>pd.to_datetime(df[col], dayfirst=True)
</code></pre>
| 4 | 2016-07-20T20:24:22Z | [
"python",
"pandas"
] |
Pandas interpreting datetime wrong | 38,490,144 | <p>This probably has a quick answer and I'm being silly, but after one hour of struggling and 3 cups of coffee...</p>
<p>I have this (not very well formatted) time series which is indexing some readings, here's part of it:</p>
<pre><code>71670 9/6/16 8:58:1
71671 9/6/16 8:59:1
71672 9/6/16 9:0:1
71673 9/6/16 9:1:1
71674 9/6/16 9:2:1
71675 9/6/16 9:3:1
71676 9/6/16 9:4:1
71677 9/6/16 9:5:1
71678 9/6/16 9:6:1
71679 9/6/16 9:7:1
71680 9/6/16 9:8:1
71681 9/6/16 9:9:1
71682 9/6/16 9:10:1
71683 9/6/16 9:11:1
</code></pre>
<p>Now this is actually the <strong>9th of June</strong> but when I call pd.to_datetime it interprets this as the 6th of September. How do I tell it what I mean?</p>
| 0 | 2016-07-20T20:20:41Z | 38,490,209 | <p>Use the <code>dayfirst</code> parameter.</p>
<pre><code>In [42]: pd.to_datetime('9/6/16 8:58:1', dayfirst=True)
Out[42]: Timestamp('2016-06-09 08:58:01')
</code></pre>
| 0 | 2016-07-20T20:24:35Z | [
"python",
"pandas"
] |
Pandas interpreting datetime wrong | 38,490,144 | <p>This probably has a quick answer and I'm being silly, but after one hour of struggling and 3 cups of coffee...</p>
<p>I have this (not very well formatted) time series which is indexing some readings, here's part of it:</p>
<pre><code>71670 9/6/16 8:58:1
71671 9/6/16 8:59:1
71672 9/6/16 9:0:1
71673 9/6/16 9:1:1
71674 9/6/16 9:2:1
71675 9/6/16 9:3:1
71676 9/6/16 9:4:1
71677 9/6/16 9:5:1
71678 9/6/16 9:6:1
71679 9/6/16 9:7:1
71680 9/6/16 9:8:1
71681 9/6/16 9:9:1
71682 9/6/16 9:10:1
71683 9/6/16 9:11:1
</code></pre>
<p>Now this is actually the <strong>9th of June</strong> but when I call pd.to_datetime it interprets this as the 6th of September. How do I tell it what I mean?</p>
| 0 | 2016-07-20T20:20:41Z | 38,490,214 | <p>Use the <code>format</code> argument of the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code> function</a>:</p>
<pre><code>pd.to_datetime(df.iloc[:, 1], format='%d/%m/%y %H:%M:%S')
</code></pre>
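<p>For instance, with one of the timestamps from the question, the explicit format parses the day first:</p>

```python
import pandas as pd

# parse day-first with an explicit format string
ts = pd.to_datetime("9/6/16 8:58:1", format="%d/%m/%y %H:%M:%S")
print(ts)  # 2016-06-09 08:58:01
```

<p>Giving an explicit <code>format</code> is also noticeably faster than letting pandas guess when parsing a long column.</p>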
| 1 | 2016-07-20T20:24:54Z | [
"python",
"pandas"
] |
Connecting to a JDBC database in Python vs. MATLAB | 38,490,233 | <p>I am trying to connect to a SQL database in Python, but I am have difficult finding documentation/examples of connecting to a JDBC. I can do this in MATLAB using the following code:</p>
<pre><code>Name = 'ServerName';
Username = '';
Password = '';
Server = ['jdbc:sqlserver://ServerName:1433;'...
          'database=DB;',...
          'applicationIntent=ReadOnly;',...
          'integratedSecurity=true;'];
Connection = database('DB', Username, Password,...
                      'com.microsoft.sqlserver.jdbc.SQLServerDriver', Server);
</code></pre>
<p>I would like to do this in Python. Because of the JDBC, I don't think I can use pymssql or pyodbc (I have tried). I have tried, and failed, using the following:</p>
<pre><code>import jaydebeapi
conn = jaydebeapi.connect('com.microsoft.sqlserver.jdbc.SQLServerDriver',
                          [Server, Username, Password])
</code></pre>
<p>Any help in implementing this in Python would be great, thanks!</p>
| 2 | 2016-07-20T20:26:18Z | 38,510,073 | <p>I think that MATLAB requires the JDBC driver because it runs on Java, but it is unnecessary in Python. My solution uses pyodbc:</p>
<pre><code>conn = pyodbc.connect(driver='{SQL Server}', host=Server, database=DB,
                      trusted_connection='yes', Username='', Password='', readonly=True)
</code></pre>
<p>It doesn't look like pymssql can pass a ReadOnly argument, which is why I use pyodbc.</p>
| 1 | 2016-07-21T16:51:32Z | [
"python",
"database",
"jdbc",
"jaydebeapi"
] |
Obtaining access token error 401 in Twython | 38,490,314 | <p>I am trying to verify a user's Twitter account via Twython.</p>
<pre><code>def twitter_view(request):
    twitter = Twython(APP_KEY, APP_SECRET)
    auth = twitter.get_authentication_tokens(callback_url='http://127.0.0.1:8000/confirm/', force_login=True)
    request.session['oauth_token'] = auth['oauth_token']
    request.session['oauth_token_secret'] = auth['oauth_token_secret']
    return HttpResponseRedirect(auth['auth_url'])

def redirect_view(request):
    oauth_verifier = request.GET['oauth_verifier']
    twitter = Twython(APP_KEY, APP_SECRET)
    final_step = twitter.get_authorized_tokens(oauth_verifier)
    request.user.twitter_oauth_token = final_step['oauth_token']
    request.user.twitter_oauth_token_secret = final_step['oauth_token_secret']
    request.user.save()
    return redirect('twitterapp:homepage')
</code></pre>
<p>I am getting </p>
<blockquote>
<p>Twitter API returned a 401 (Unauthorized), Invalid / expired Token</p>
</blockquote>
<p>Traceback (most recent call last):</p>
<blockquote>
<p>File
"/Users/bharatagarwal/my-venv/lib/python2.7/site-packages/django/core/handlers/base.py",
line 149, in get_response
response = self.process_exception_by_middleware(e, request)</p>
<p>File
"/Users/bharatagarwal/my-venv/lib/python2.7/site-packages/django/core/handlers/base.py",
line 147, in get_response response = wrapped_callback(request,
*callback_args, **callback_kwargs)</p>
<p>File
"/Users/bharatagarwal/projects/twitterproject/mysite/twitterapp/views.py",
line 100, in redirect_view<br>
final_step = twitter.get_authorized_tokens(str(oauth_verifier))</p>
<p>File
"/Users/bharatagarwal/my-venv/lib/python2.7/site-packages/twython/api.py",
line 379, in get_authorized_tokens
ken'), error_code=response.status_code)</p>
  <p>TwythonError: Twitter API returned a 401 (Unauthorized), Invalid /
  expired Token</p>
</blockquote>
| 0 | 2016-07-20T20:31:22Z | 38,922,006 | <p>On the second Twython instantiation, you have to include the OAUTH_TOKEN and the OAUTH_TOKEN_SECRET obtained in the first step.</p>
<pre><code>twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
</code></pre>
<p>It is returning Invalid Token because the instantiation you're using didn't include the tokens you received.</p>
| 0 | 2016-08-12T15:53:11Z | [
"python",
"django",
"twython"
] |
Trying to Detect Circle with Houghcircles in OpenCV | 38,490,457 | <p>I am trying to detect circles in OpenCV with the HoughCircles method, but when I try to run my code I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "detect_circles.py", line 19, in <module>
circles = cv2.cv.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 100)
AttributeError: 'module' object has no attribute 'cv'
</code></pre>
<p>I am currently following the tutorial from this website: <a href="http://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/" rel="nofollow">http://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/</a></p>
<p>I was wondering what was causing this error and what I could do to fix it.</p>
<p>I am running the code like this:</p>
<pre><code>python detect_circles.py --image images/simple.png
</code></pre>
<p>And this is my code:</p>
<pre><code># import the necessary packages
import numpy as np
import argparse
import cv2
import copy

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
original_img = cv2.imread(args["image"])
clone_img = copy.copy(original_img)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.cv.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

    # show the output image
    cv2.imshow("output", np.hstack([image, output]))
    cv2.waitKey(0)
</code></pre>
| 0 | 2016-07-20T20:40:50Z | 38,490,809 | <pre><code>cv2.cv.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 100)
</code></pre>
<p>The error message says it all: your <code>cv2</code> module has no <code>cv</code> attribute, because the legacy <code>cv2.cv</code> namespace was removed in OpenCV 3. Call the function directly on <code>cv2</code>, with the renamed constant:</p>
<pre><code>cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)
</code></pre>
| 1 | 2016-07-20T21:03:31Z | [
"python",
"opencv"
] |
Re-assigning pandas col by dictionary has no effect on original DataFrame? | 38,490,479 | <p>I have a huge <code>pandas</code> <code>DataFrame</code> that looks like this (sample):</p>
<pre><code>df = pd.DataFrame({"col1":{0:"There ARE NO ERRORS!!!", 1:"EVERYTHING is failing", 2:"There ARE NO ERRORS!!!"}, "col2":{0:"WE HAVE SOME ERRORS", 1:"EVERYTHING is failing", 2:"System shutdown!"}})
</code></pre>
<p>I have a function called <code>cleanMessage</code> that strips punctuation and returns a lowercase string. For example, <code>cleanMessage("THERE may be some errors, I don't know!!")</code> would return <code>there may be some errors i dont know</code>.</p>
<p>I am trying to replace every message in <code>col1</code> with whatever <code>cleanMessage</code> returns for that particular message (basically cleaning these message columns up). <code>pd.DataFrame.iterrows</code> worked OK for me, but was a bit slow. I'm trying to basically map new values to the keys in the original <code>df</code>, something like this:</p>
<pre><code>message_set = set(df["col1"])
message_dict = dict((original, cleanMessage(original)) for original in message_set)
df = df.replace("col1", message_dict)
</code></pre>
<p>So, the original <code>df</code> would look like:</p>
<pre><code>>>> df
                      col1                     col2
0    "There ARE NO ERRORS"    "WE HAVE SOME ERRORS"
1  "EVERYTHING is failing"  "EVERYTHING is failing"
2  "There ARE NO ERRORS!!!"      "System shutdown!"
</code></pre>
<p>And the "after" <code>df</code> should look like:</p>
<pre><code>>>> df
                     col1                     col2
0    "there are no errors"    "WE HAVE SOME ERRORS"
1  "everything is failing"  "EVERYTHING is failing"
2    "there are no errors"       "System shutdown!"
</code></pre>
<p>Am I missing something with the <code>replace</code> portion of my code?</p>
<p>Edit:</p>
<p>For future viewers, here's the code I got to work:</p>
<pre><code>df["col1"] = df["col1"].map(message_dict)
</code></pre>
| 1 | 2016-07-20T20:42:29Z | 38,490,629 | <p><code>replace</code> works well with <code>regex</code> - consider putting the logic of <code>cleanMessage()</code> into a chain of nested <code>replace()</code> calls:</p>
<pre><code>df["col1"] = df["col1"].replace(...).replace(...)
</code></pre>
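<p>A concrete version of the same idea (just a sketch, assuming <code>cleanMessage</code> only lowercases and strips every character that isn't a letter or a space, since the question doesn't show its body):</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": ["There ARE NO ERRORS!!!", "EVERYTHING is failing"]})
# lowercase, then drop every character that is not a letter or a space
df["col1"] = df["col1"].str.lower().str.replace(r"[^a-z\s]", "", regex=True)
print(df["col1"].tolist())  # ['there are no errors', 'everything is failing']
```

<p>This stays vectorized, so it should also be much faster than <code>iterrows</code>.</p>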
| 1 | 2016-07-20T20:51:32Z | [
"python",
"pandas"
] |
Re-assigning pandas col by dictionary has no effect on original DataFrame? | 38,490,479 | <p>I have a huge <code>pandas</code> <code>DataFrame</code> that looks like this (sample):</p>
<pre><code>df = pd.DataFrame({"col1":{0:"There ARE NO ERRORS!!!", 1:"EVERYTHING is failing", 2:"There ARE NO ERRORS!!!"}, "col2":{0:"WE HAVE SOME ERRORS", 1:"EVERYTHING is failing", 2:"System shutdown!"}})
</code></pre>
<p>I have a function called <code>cleanMessage</code> that strips punctuation and returns a lowercase string. For example, <code>cleanMessage("THERE may be some errors, I don't know!!")</code> would return <code>there may be some errors i dont know</code>.</p>
<p>I am trying to replace every message in <code>col1</code> with whatever <code>cleanMessage</code> returns for that particular message (basically cleaning these message columns up). <code>pd.DataFrame.iterrows</code> worked OK for me, but was a bit slow. I'm trying to basically map new values to the keys in the original <code>df</code>, something like this:</p>
<pre><code>message_set = set(df["col1"])
message_dict = dict((original, cleanMessage(original)) for original in message_set)
df = df.replace("col1", message_dict)
</code></pre>
<p>So, the original <code>df</code> would look like:</p>
<pre><code>>>> df
                      col1                     col2
0    "There ARE NO ERRORS"    "WE HAVE SOME ERRORS"
1  "EVERYTHING is failing"  "EVERYTHING is failing"
2  "There ARE NO ERRORS!!!"      "System shutdown!"
</code></pre>
<p>And the "after" <code>df</code> should look like:</p>
<pre><code>>>> df
                     col1                     col2
0    "there are no errors"    "WE HAVE SOME ERRORS"
1  "everything is failing"  "EVERYTHING is failing"
2    "there are no errors"       "System shutdown!"
</code></pre>
<p>Am I missing something with the <code>replace</code> portion of my code?</p>
<p>Edit:</p>
<p>For future viewers, here's the code I got to work:</p>
<pre><code>df["col1"] = df["col1"].map(message_dict)
</code></pre>
| 1 | 2016-07-20T20:42:29Z | 38,490,712 | <pre><code>df.col1 = df.col1.str.lower().str.replace(r'([^a-z ])', '')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/0dnzu.png" rel="nofollow"><img src="http://i.stack.imgur.com/0dnzu.png" alt="enter image description here"></a></p>
| 0 | 2016-07-20T20:56:58Z | [
"python",
"pandas"
] |
Django Sort ManyToMany Field in A Nonsignificant Order | 38,490,533 | <p>I have two models as below:</p>
<pre><code>class Stop(models.Model):
    """
    Showing bus stops in İzmir.
    """
    code = models.PositiveIntegerField(
        unique=True,
        primary_key=True,
        verbose_name="Code"
    )
    label = models.CharField(
        null=False,
        blank=False,
        max_length=64,
        verbose_name="Label"
    )
    coor = ArrayField(
        models.FloatField(),
        size=2,
        verbose_name="Coordination"
    )

    class Meta:
        verbose_name = "Stop"
        verbose_name_plural = "Stops"
        ordering = ["label"]

    def __str__(self):
        return self.label


class Route(models.Model):
    """
    Bus routes of İzmir.
    """
    code = models.PositiveSmallIntegerField(
        unique=True,
        primary_key=True,
        verbose_name="Code"
    )
    stops = models.ManyToManyField(
        Stop,
        null=True,
        blank=True,
        related_name="routes",
        verbose_name="Stops"
    )
    terminals = ArrayField(
        models.CharField(
            null=False,
            blank=False,
            max_length=32,
        ),
        size=2,
        default=[],
        verbose_name="Terminals"
    )
    departure_times = ArrayField(
        ArrayField(
            models.TimeField(
                null=False,
                blank=False
            ),
            null=True,
            default=[]
        ),
        default=[],
        size=6,
        verbose_name="Departure Times"
    )

    class Meta:
        verbose_name = "Route"
        verbose_name_plural = "Routes"
        ordering = ["code"]

    def __str__(self):
        return "{}: {} - {}".format(str(self.code), self.terminals[0], self.terminals[1])
</code></pre>
<p>As you can see, <code>Route</code> has a <code>ManyToManyFields</code> which takes <code>Stop</code> instances.</p>
<p>I populate the instances with a script that scrapes a couple of web pages; it seems I will use crontab to keep them updated. In the data I am scraping, <code>Stop</code> objects are ordered. The thing is, there is no meaningful field to sort by, e.g. nothing that says one <code>Stop</code> instance comes after another.</p>
<p>Django (or Django Rest Framework) returns <code>Stop</code> instances of <code>Route</code> instance in alphabetic order, e.g.</p>
<pre><code>{
"code": 285,
"terminals": [
"EVKA 1",
"KONAK"
],
"stops": [
40586,
40633,
12066,
40645,
40627,
40647,
40588,
40592,
40623,
40016,
40506,
40508,
40528,
40462,
40631,
40014,
40619,
40530,
12060,
40661,
40504,
40488,
40653,
40590,
40512,
40464,
10240,
10036,
12068,
40514,
40510,
40658,
40002,
40649,
12070,
40004,
40010,
40656,
12064,
40614,
40012
],
...
}
</code></pre>
<p>In which <code>stops[0]</code> returns a <code>Stop</code> instance beginning with <code>A</code> and sorts like that.</p>
<p>So, is there a way to order them like a <code>list</code> in Python? That is, with no significant sort key: you just append to the end and get the items back in insertion order.</p>
<hr>
<h1>Environment</h1>
<ul>
<li>python 3.5.1</li>
<li>django 1.9.7</li>
<li>djangorestframework 3.3.3</li>
<li>psycopg2 2.6.2</li>
<li>postgresql 9.5</li>
</ul>
| 0 | 2016-07-20T20:46:51Z | 38,491,369 | <p>The <code>position</code> of a <code>stop</code> is relative to a <code>Route</code>, e.g. one stop can be first for <code>route 1</code>, 2nd for <code>route 2</code>, etc. So this is a perfect example of needing more <code>metadata</code> about the <code>Route-Stop</code> relation. Django solves this by letting you provide an <a href="https://docs.djangoproject.com/en/1.9/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow">Intermediate Table</a> with two <code>ForeignKey</code> fields and the <code>metadata</code> you need for the relation.</p>
<pre><code>class Stop(models.Model):
    # ...

class Route(models.Model):
    # ...
    stops = models.ManyToManyField(Stop, through='RouteStop', blank=True, related_name="routes", verbose_name="Stops")

class RouteStop(models.Model):
    stop = models.ForeignKey(Stop)
    route = models.ForeignKey(Route)
    position = models.PositiveSmallIntegerField()

    class Meta:
        unique_together = (("stop", "route"),)
</code></pre>
<p>Now when you get <code>Routes</code> you can order <code>route.stops</code> by <code>RouteStop.position</code>, something like:</p>
<pre><code>from django.db.models import Prefetch

Route.objects.all().prefetch_related(
    Prefetch('stops', queryset=Stop.objects.all().order_by('routestop__position'))
)
</code></pre>
| 1 | 2016-07-20T21:44:28Z | [
"python",
"django",
"python-3.x",
"orm",
"manytomanyfield"
] |
Python - Getting the average of n lines in a txt file | 38,490,602 | <p>I've taken a look online to try and solve this problem and used solutions from other posts to build towards my solution; however, from here I don't know what to do next.</p>
<p>I basically want to grab the final 5 lines of the PastWinners text file and then get the average of those numbers. What I currently have gets the average of the entire document and also prints out the final line in the text file.</p>
<pre><code>with open('PastWinners.txt') as f:
    data = [float(line.rstrip()) for line in f]
    first = f.readline()           # Read the first line.
    f.seek(-2, 2)                  # Jump to the second last byte.
    while f.read(1) != b"\n":      # Until EOL is found...
        f.seek(-2, 1)              # ...jump back the read byte plus one more.
    last = f.readline()            # Read last line.

biggest = max(data)
smallest = min(data)

print(sum(data)/len(data))
print(last)
</code></pre>
<p>Thanks for the help.</p>
| 1 | 2016-07-20T20:50:00Z | 38,490,715 | <p>You can read the file in reverse order and use <em>break</em> after 5 reads to get out of the loop. You can also just run 5 iterations of the loop starting from the end of the file, so you will not have to use <em>break</em> at all.</p>
<p><a href="http://stackoverflow.com/questions/2301789/read-a-file-in-reverse-order-using-python">Here</a> is how to read in reverse in Python.</p>
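<p>A minimal sketch of the no-<em>break</em> variant using <code>collections.deque</code> with <code>maxlen</code>, which keeps only the final <code>n</code> lines while streaming through the file (the file written here is just illustrative data):</p>

```python
from collections import deque

def average_of_last(path, n=5):
    """Average the numbers on the last n lines of a file."""
    with open(path) as f:
        tail = deque(f, maxlen=n)  # discards all but the final n lines
    values = [float(line) for line in tail]
    return sum(values) / len(values)

# illustrative data: the numbers 1 .. 10, one per line
with open("PastWinners.txt", "w") as f:
    f.write("\n".join(str(i) for i in range(1, 11)))

print(average_of_last("PastWinners.txt"))  # 8.0 (mean of 6, 7, 8, 9, 10)
```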
| 1 | 2016-07-20T20:57:18Z | [
"python",
"file",
"text",
"numbers",
"average"
] |
Python - Getting the average of n lines in a txt file | 38,490,602 | <p>I've taken a look online to try and solve this problem and used solutions from other posts to build towards my solution; however, from here I don't know what to do next.</p>
<p>I basically want to grab the final 5 lines of the PastWinners text file and then get the average of those numbers. What I currently have gets the average of the entire document and also prints out the final line in the text file.</p>
<pre><code>with open('PastWinners.txt') as f:
    data = [float(line.rstrip()) for line in f]
    first = f.readline()           # Read the first line.
    f.seek(-2, 2)                  # Jump to the second last byte.
    while f.read(1) != b"\n":      # Until EOL is found...
        f.seek(-2, 1)              # ...jump back the read byte plus one more.
    last = f.readline()            # Read last line.

biggest = max(data)
smallest = min(data)

print(sum(data)/len(data))
print(last)
</code></pre>
<p>Thanks for the help.</p>
| 1 | 2016-07-20T20:50:00Z | 38,490,754 | <p>You can use <a href="http://techearth.net/python/index.php5?title=Python:Basics:Slices" rel="nofollow">slicing</a> to get only the 5 last numbers (I deleted all the irrelevant code):</p>
<pre><code>with open('PastWinners.txt') as f:
data = [float(line.rstrip()) for line in f]
print(sum(data[-5:])/len(data[-5:]))
</code></pre>
<p>The slice <code>data[-5:]</code> takes only the last five items of the list.</p>
| 2 | 2016-07-20T20:59:57Z | [
"python",
"file",
"text",
"numbers",
"average"
] |
Python - Getting the average of n lines in a txt file | 38,490,602 | <p>I've taken a look online to try and solve this problem and used solutions from other posts to build towards my solution; however, from here I don't know what to do next.</p>
<p>I basically want to grab the final 5 lines of the PastWinners text file and then get the average of those numbers. What I currently have gets the average of the entire document and also prints out the final line in the text file.</p>
<pre><code>with open('PastWinners.txt') as f:
    data = [float(line.rstrip()) for line in f]
    first = f.readline()           # Read the first line.
    f.seek(-2, 2)                  # Jump to the second last byte.
    while f.read(1) != b"\n":      # Until EOL is found...
        f.seek(-2, 1)              # ...jump back the read byte plus one more.
    last = f.readline()            # Read last line.

biggest = max(data)
smallest = min(data)

print(sum(data)/len(data))
print(last)
</code></pre>
<p>Thanks for the help.</p>
| 1 | 2016-07-20T20:50:00Z | 38,490,963 | <p>This will work when the file has fewer than 5 lines, too, and will format your average to 2 decimal places.</p>
<pre><code>with open('PastWinners.txt') as f:
    data = [float(line.rstrip()) for line in f]

if len(data) > 5:
    data = data[-5:]

print min(data), max(data), "{:.2f}".format(sum(data)/len(data))
</code></pre>
| 0 | 2016-07-20T21:14:52Z | [
"python",
"file",
"text",
"numbers",
"average"
] |
Assign the same value to multiple keys in a dictionary | 38,490,606 | <p>I want my dictionary to look like this:</p>
<pre><code>{'A': {('B','C'): 'D'}}
</code></pre>
<p>The code I'm using currently doesn't seem to work to achieve this result.</p>
<pre><code>dict1 = {}
with open('foo.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        for key in ['B','C']:
            dict1.setdefault(row['A'], {}).update({row[key]: row['D']})
</code></pre>
<p>Currently, I'm getting a result like this:</p>
<pre><code>{'A': {'B': 'D','C': 'D'}}
</code></pre>
<p>Basically, I want B & C to be represented as the key with D as its value.</p>
<p>What am I doing wrong here? Can someone help me correct this code? </p>
| 0 | 2016-07-20T20:50:10Z | 38,490,720 | <p>You can put both items right where you want them, in the tuple:</p>
<pre><code>dict1 = {}
with open('foo.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        # the (row['B'], row['C']) tuple itself becomes the key
        dict1.setdefault(row['A'], {}).update({(row['B'], row['C']): row['D']})
</code></pre>
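<p>A file-free sketch of the same pattern, with a couple of made-up rows standing in for the CSV:</p>

```python
rows = [
    {"A": "A", "B": "B", "C": "C", "D": "D"},  # made-up rows in DictReader shape
    {"A": "A", "B": "X", "C": "Y", "D": "Z"},
]
dict1 = {}
for row in rows:
    # the (B, C) tuple itself is the key; tuples are hashable, lists are not
    dict1.setdefault(row["A"], {})[(row["B"], row["C"])] = row["D"]
print(dict1)  # {'A': {('B', 'C'): 'D', ('X', 'Y'): 'Z'}}
```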
| 0 | 2016-07-20T20:57:43Z | [
"python",
"dictionary",
"key-value"
] |
Creating weight logic in django based off past 7 days | 38,490,612 | <p>I currently have a <code>Restaurant</code> model with associated models <code>Review</code> and <code>Comment</code>. Users can comment and review a restaurant. </p>
<p>I'm trying to create weight logic in Django in which I display the top three restaurants with the largest weight. </p>
<p>The current logic looks like this:</p>
<pre><code>restaurants = Restaurant.objects.all()
top_3 = restaurants.annotate(weight=(Count('review')) + F('views') + (Count('comment'))).order_by('-weight')
</code></pre>
<p>How can I update this logic so that only the reviews and comments for the past 7 days are factored into the weight?</p>
<p><strong>Edit</strong>
The Review and Comment models both have a field for tracking when the object was created:
<code>pub_date = models.DateTimeField(default=timezone.now, blank=True)</code></p>
| 1 | 2016-07-20T20:50:32Z | 38,491,395 | <p>I hope this will help:</p>
<pre><code>import datetime
from django.db.models import Q
from django.utils import timezone
week_ago = timezone.now() - datetime.timedelta(days=7)
top_3 = Restaurant.objects.filter(
    Q(review__isnull=True) | Q(review__pub_date__gt=week_ago),
    Q(comment__isnull=True) | Q(comment__pub_date__gt=week_ago),
).annotate(weight=...).order_by('-weight')[:3]
</code></pre>
<p><code>review__isnull=True</code> and <code>comment__isnull=True</code> are there so that <code>restaurants</code> without <code>reviews</code> and <code>comments</code> are not filtered out. If you don't care about those <code>restaurants</code>, you can use this filter:</p>
<pre><code>filter(review__pub_date__gt=week_ago, comment__pub_date__gt=week_ago)
</code></pre>
<p>Docs</p>
<ul>
<li><a href="https://docs.djangoproject.com/en/1.9/topics/db/aggregation/#filter-and-exclude" rel="nofollow"><code>filter()</code> and <code>exclude()</code> with annotations</a></li>
<li><a href="https://docs.djangoproject.com/en/1.9/topics/db/queries/#lookups-that-span-relationships" rel="nofollow">Lookups that span relationships</a></li>
<li><a href="https://docs.djangoproject.com/en/1.9/topics/db/queries/#complex-lookups-with-q-objects" rel="nofollow"><code>Q()</code></a></li>
</ul>
| 3 | 2016-07-20T21:45:59Z | [
"python",
"django"
] |
Creating NaN values in Pandas (instead of Numpy) | 38,490,717 | <p>I'm converting a .ods spreadsheet to a Pandas DataFrame. I have whole columns and rows I'd like to drop because they contain only "None". As "None" is a <code>str</code>, I have:</p>
<p><code>pandas.DataFrame.replace("None", numpy.nan)</code></p>
<p>...on which I call: <code>.dropna(how='all')</code></p>
<p>Is there a <code>pandas</code> equivalent to <code>numpy.nan</code>?</p>
<p>Is there a way to use <code>.dropna()</code> with the string "None" rather than <code>NaN</code>?</p>
| 0 | 2016-07-20T20:57:33Z | 38,490,842 | <p>You can use <code>float('nan')</code> if you really want to avoid importing things from the numpy namespace:</p>
<pre><code>>>> import pandas as pd
>>> s = pd.Series([1, 2, 3])
>>> s[1] = float('nan')
>>> s
0    1.0
1    NaN
2    3.0
dtype: float64
>>>
>>> s.dropna()
0    1.0
2    3.0
dtype: float64
</code></pre>
<p>Moreover, if you have a string value "None", you can <code>.replace("None", float("nan"))</code>:</p>
<pre><code>>>> s[1] = "None"
>>> s
0       1
1    None
2       3
dtype: object
>>>
>>> s.replace("None", float("nan"))
0    1.0
1    NaN
2    3.0
dtype: float64
</code></pre>
| 1 | 2016-07-20T21:05:31Z | [
"python",
"pandas"
] |
Creating NaN values in Pandas (instead of Numpy) | 38,490,717 | <p>I'm converting a .ods spreadsheet to a Pandas DataFrame. I have whole columns and rows I'd like to drop because they contain only "None". As "None" is a <code>str</code>, I have:</p>
<p><code>pandas.DataFrame.replace("None", numpy.nan)</code></p>
<p>...on which I call: <code>.dropna(how='all')</code></p>
<p>Is there a <code>pandas</code> equivalent to <code>numpy.nan</code>?</p>
<p>Is there a way to use <code>.dropna()</code> with the string "None" rather than <code>NaN</code>?</p>
| 0 | 2016-07-20T20:57:33Z | 38,491,658 | <p>If you are trying to drop the rows containing a "None" string value directly (without converting these "None" cells to <code>NaN</code> values), it can be done without using <code>replace</code> + <code>dropna</code>:</p>
<p>Considering a DataFrame like :</p>
<pre><code>In [3]: df = pd.DataFrame({
"foo": [1,2,3,4],
"bar": ["None",5,5,6],
"baz": [8, "None", 9, 10]
})
In [4]: df
Out[4]:
    bar   baz  foo
0  None     8    1
1     5  None    2
2     5     9    3
3     6    10    4
</code></pre>
<p>Using <code>replace</code> and <code>dropna</code> will return:</p>
<pre><code>In [5]: df.replace('None', float("nan")).dropna()
Out[5]:
   bar   baz  foo
2  5.0   9.0    3
3  6.0  10.0    4
</code></pre>
<p>Which can also be obtained by simply selecting the rows you need:</p>
<pre><code>In [7]: df[df.eval("foo != 'None' and bar != 'None' and baz != 'None'")]
Out[7]:
  bar baz  foo
2   5   9    3
3   6  10    4
</code></pre>
<p>You can also use the <code>drop</code> method of your dataframe, selecting the targeted axis/labels appropriately:</p>
<pre><code>In [9]: df.drop(df[(df.baz == "None") |
(df.bar == "None") |
(df.foo == "None")].index)
Out[9]:
  bar baz  foo
2   5   9    3
3   6  10    4
</code></pre>
<p>These two methods are more or less interchangeable, as you can also do, for example:<br>
<code>df[(df.baz != "None") & (df.bar != "None") & (df.foo != "None")]</code><br>
(Note that a comparison like <code>df.somecolumn == "Some string"</code> is only possible if the column type allows it. Unlike with <code>eval</code>, before these last 2 examples I had to do <code>df = df.astype(object)</code>, as the <code>foo</code> column was of type <code>int64</code>.)</p>
| 1 | 2016-07-20T22:06:45Z | [
"python",
"pandas"
] |
Horizontally layering LSTM cells | 38,490,811 | <p>I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically in TensorFlow.</p>
<p>In the tutorial, they have an object <code>tf.nn.rnn_cell.MultiRNNCell</code>, which, from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. However, I couldn't find anything about <strong>horizontal</strong> LSTM cells, in which the output of one cell is the input of another.</p>
<p>I understand that because the cells are recurrent, they wouldn't need to do this, but I was just trying to see if this is outright possible.</p>
<p>Cheers!</p>
| 0 | 2016-07-20T21:03:47Z | 38,638,969 | <blockquote>
<p>However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another.</p>
</blockquote>
<p>This is the definition of recurrence. All RNNs do this. </p>
| 1 | 2016-07-28T14:11:37Z | [
"python",
"neural-network",
"artificial-intelligence",
"tensorflow",
"recurrent-neural-network"
] |
Refresh Excel external data connection using python | 38,490,829 | <p>I have an Excel file with an external data connection. I need to refresh the connection data using Python.</p>
<p>I tried a solution:</p>
<pre><code>import win32com.client
import os
fileName="testconn.xlsx"
xl = win32com.client.DispatchEx("Excel.Application")
wb = xl.workbooks.open(fileName)
xl.Visible = True
wb.RefreshAll()
wb.Save()
xl.Quit()
</code></pre>
<p>But this solution requires Excel to be installed on the machine.</p>
<p>Another approach I thought of: if I could somehow get a mapping of the data connection URLs to the named ranges they are loaded into, I could download the data from each URL and update the data in the named range using openpyxl.</p>
<p>Is there a better way to do this? Does any Python library have functionality to retrieve the connection and refresh it?</p>
<p>Thanks in advance :)</p>
| -1 | 2016-07-20T21:04:25Z | 38,491,462 | <p>Refreshing an Excel worksheet definitely requires Excel to be installed, so your approach seems fine, although there may be easier ways. Pandas has good support for manipulating Excel spreadsheets via its <code>pandas.read_excel</code> and <code>pandas.DataFrame.to_excel</code> functions (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html</a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html</a>), but writing to a named range is not possible using Pandas alone: it makes you specify at least the worksheet and start cell (fine if your named ranges don't change the start cell / worksheet). However, you can add another package, <code>xlsxwriter</code>, and use it along with Pandas (or by itself) to write to a named range; see here:
<a href="http://xlsxwriter.readthedocs.io/example_defined_name.html#ex-defined-name" rel="nofollow">http://xlsxwriter.readthedocs.io/example_defined_name.html#ex-defined-name</a>
<a href="https://xlsxwriter.readthedocs.io/working_with_pandas.html" rel="nofollow">https://xlsxwriter.readthedocs.io/working_with_pandas.html</a></p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('defined_range.xlsx')
worksheet = workbook.add_worksheet()
workbook.define_name('defined_range', '=Sheet1!$A$1')  # create the named range
worksheet.write('A1', data)  # write to the cell the named range refers to
workbook.close()
</code></pre>
<p>Pandas also lets you read your data directly from a URL which would help in your proposed solution <code>pandas.read_html</code>. <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html</a></p>
| 0 | 2016-07-20T21:51:25Z | [
"python",
"excel"
] |
Does Numpy allocate temporary arrays in expressions like x += 2 * y? | 38,490,937 | <p>When evaluating expressions like</p>
<pre><code>x += 2 * y
</code></pre>
<p>does <code>Numpy</code> first allocate a new temporary array to hold <code>2*y</code>, add it to <code>x</code> and then delete it, or can it perform this whole operation in-place?</p>
| 4 | 2016-07-20T21:12:52Z | 38,491,048 | <p>Yup, that makes a temporary array.</p>
<p>If you find yourself needing to mitigate NumPy's love of giant scratch arrays, additional libraries like <a href="https://github.com/pydata/numexpr" rel="nofollow">Numexpr</a> can help quite a bit, but make sure you're attributing performance problems to the right causes. Naive attempts to save allocations usually cause massive slowdowns instead of performance improvement.</p>
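<p>A small illustration on hypothetical arrays: the temporary for <code>2 * y</code> can be avoided by splitting the update into purely in-place steps, at some cost to readability:</p>

```python
import numpy as np

x = np.ones(4)
y = np.full(4, 3.0)

# x += 2 * y first materializes a temporary array holding 2 * y:
x += 2 * y                    # x is now [7. 7. 7. 7.]

# The same update without the temporary, using only in-place adds:
x2 = np.ones(4)
x2 += y
x2 += y                       # adding y twice == adding 2 * y

print(np.array_equal(x, x2))  # True
```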
| 2 | 2016-07-20T21:20:33Z | [
"python",
"performance",
"numpy"
] |
Pyspark command in terminal launches Jupyter notebook | 38,490,946 | <p>I have run into an issue with spark-submit , throws an error is not a Jupyter Command i.e, pyspark launches a web ui instead of pyspark shell </p>
<p>Background info:</p>
<ol>
<li>Installed Scala , Spark using brew on MAC</li>
<li>Installed Conda Python 3.5</li>
<li>Spark commands work on Jupyter Notebook</li>
<li>'pyspark' on terminal launches notebook instead of shell</li>
</ol>
<p>Any help is much appreciated.</p>
| 0 | 2016-07-20T21:13:34Z | 38,510,399 | <p>The PYSPARK_DRIVER_PYTHON variable is set to start ipython/jupyter automatically (probably as intended.) Run <code>unset PYSPARK_DRIVER_PYTHON</code> and then try pyspark again.</p>
<p>If you wish this to be the default, you'll probably need to modify your profile scripts. </p>
| 0 | 2016-07-21T17:10:33Z | [
"python",
"pyspark",
"jupyter",
"jupyter-notebook"
] |
Passing Too Many Arguments to Object Method - Python | 38,491,056 | <p>So I think I have an incredibly simple case but I'm utterly lost about why it won't execute.</p>
<p>Here's my source - this is my main.py file that I'm running by just executing 'python main.py' from the command line:</p>
<pre><code>import player
import enemy
p1 = player.playerClass(health = 200.0, position = [1, 0, 0], damage = 30.0)
e1 = enemy.enemyClass(position = [3, 0, 0], damage = 35.0)
print("Before attacking: " + str(e1.Health))
p1.Attack(e1, p1.Damage) # Error is here
print("After attacking: " + str(e1.Health))
</code></pre>
<p>The code for the Attack method is found within the playerClass class. The enemyClass is exactly the same without the Attack() method (trying to keep everything as simple as possible)</p>
<pre><code>import character
class playerClass(character.CharacterBaseClass): # derived class used to contain an 'Attack' method, but does not anymore
def __init__(self, health = 100.0, position = [0, 0, 0], damage = 10.0):
self.Health = health
self.pos = position
self.Damage = damage
def Attack(self, target):
self.CurrentTarget = target
target.Health -= self.Damage
self.CurrentTarget = None
</code></pre>
<p>From what I've been reading, I've applied the 'self' keyword to the class definition of Attack, and 'p1' should be being passed as such when 'p1.Attack(...)' is called, but for some reason I'm getting the error:</p>
<pre><code>TypeError: Attack() takes exactly 2 arguments (3 given)
</code></pre>
<p>Any suggestions on what I'm missing? I know there are some related questions about the same issue but those questions explain <em>why</em> 'self' is needed, which I already (sort of) get.</p>
| 0 | 2016-07-20T21:21:02Z | 38,491,196 | <p>Try defining your player.playerClass like this</p>
<pre><code>class playerClass:
def __init__ (self, health, position, damage):
self.health = health
self.position = position
self.damage = damage
def Attack(self, target, damage):
self.CurrentTarget = target
target.Health -= damage
self.CurrentTarget = None
</code></pre>
<h2>What does <code>self</code> mean?</h2>
<p><code>self</code> is basically an argument that Python passes automatically to the methods of a class instance. The value of <code>self</code> is the instance itself. So in your case, in the method <code>Attack()</code>, <code>self</code> is equal to <code>p1</code>. Running <code>self.CurrentTarget = target</code> does the same thing as <code>p1.CurrentTarget = target</code>.</p>
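<p>A minimal sketch (with hypothetical names) of why the original call <code>p1.Attack(e1, p1.Damage)</code> sends three arguments to a method that accepts two:</p>

```python
class Player(object):
    def attack(self, target):
        return (self, target)

p1 = Player()
e1 = "enemy"

# p1 is supplied as self automatically, so this call passes 2 arguments:
assert p1.attack(e1) == (p1, "enemy")

# p1.attack(e1, 30.0) would pass 3 arguments (p1, e1, 30.0) and raise
# TypeError: attack() takes exactly 2 arguments (3 given)
```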
| 1 | 2016-07-20T21:31:30Z | [
"python",
"arguments"
] |
Passing Too Many Arguments to Object Method - Python | 38,491,056 | <p>So I think I have an incredibly simple case but I'm utterly lost about why it won't execute.</p>
<p>Here's my source - this is my main.py file that I'm running by just executing 'python main.py' from the command line:</p>
<pre><code>import player
import enemy
p1 = player.playerClass(health = 200.0, position = [1, 0, 0], damage = 30.0)
e1 = enemy.enemyClass(position = [3, 0, 0], damage = 35.0)
print("Before attacking: " + str(e1.Health))
p1.Attack(e1, p1.Damage) # Error is here
print("After attacking: " + str(e1.Health))
</code></pre>
<p>The code for the Attack method is found within the playerClass class. The enemyClass is exactly the same without the Attack() method (trying to keep everything as simple as possible)</p>
<pre><code>import character
class playerClass(character.CharacterBaseClass): # derived class used to contain an 'Attack' method, but does not anymore
def __init__(self, health = 100.0, position = [0, 0, 0], damage = 10.0):
self.Health = health
self.pos = position
self.Damage = damage
def Attack(self, target):
self.CurrentTarget = target
target.Health -= self.Damage
self.CurrentTarget = None
</code></pre>
<p>From what I've been reading, I've applied the 'self' keyword to the class definition of Attack, and 'p1' should be being passed as such when 'p1.Attack(...)' is called, but for some reason I'm getting the error:</p>
<pre><code>TypeError: Attack() takes exactly 2 arguments (3 given)
</code></pre>
<p>Any suggestions on what I'm missing? I know there are some related questions about the same issue but those questions explain <em>why</em> 'self' is needed, which I already (sort of) get.</p>
| 0 | 2016-07-20T21:21:02Z | 38,491,710 | <p>I did end up finding the solution and it has a little to do with @martijin's suggestion.</p>
<p>The issue was that I had compiled python files in the directory that my main.py file was run from - in turn, when python was executing, it was referring to these compiled .pyc files and not the actual script files that I was editing.</p>
<p>Deleting these compiled .pyc files solved my problem as it appears python is now referencing the correct .py script files.</p>
<p>Thanks for input everyone!</p>
| 0 | 2016-07-20T22:11:04Z | [
"python",
"arguments"
] |
Flask-Testing signals not supported error | 38,491,075 | <p>When running my tests I am getting the following traceback.</p>
<pre><code>in get_context_variable
raise RuntimeError("Signals not supported")
RuntimeError: Signals not supported
</code></pre>
<p><strong>__init__.py</strong></p>
<pre><code>from flask_testing import TestCase
from app import create_app, db
class BaseTest(TestCase):
BASE_URL = 'http://localhost:5000/'
def create_app(self):
return create_app('testing')
def setUp(self):
db.create_all()
def tearDown(self):
db.session.remove()
db.drop_all()
def test_setup(self):
response = self.client.get(self.BASE_URL)
self.assertEqual(response.status_code, 200)
</code></pre>
<p><strong>test_routes.py</strong></p>
<pre><code>from . import BaseTest
class TestMain(BaseTest):
def test_empty_index(self):
r = self.client.get('/')
self.assert200(r)
self.assertEqual(self.get_context_variable('partners'), None)
</code></pre>
<p>It appears that the <code>get_context_variable</code> function call is where the error is coming from. I also receive this error if I try and use <code>assert_template_used</code>. Having a rather difficult time finding any resolution to this.</p>
| 1 | 2016-07-20T21:22:16Z | 38,491,198 | <p>Flask only provides signals as an optional dependency. Flask-Testing requires signals in some places and raises an error if you try to do something without them. For some reason, some messages are more vague than others Flask-Testing raises elsewhere. <sub>(This is a good place for a beginner to contribute a pull request.)</sub></p>
<p>You need to install the <a href="https://pythonhosted.org/blinker/" rel="nofollow">blinker</a> library to enable <a href="http://flask.pocoo.org/docs/0.11/signals/" rel="nofollow">signal support</a> in Flask.</p>
<pre><code>$ pip install blinker
</code></pre>
| 1 | 2016-07-20T21:31:35Z | [
"python",
"flask",
"flask-testing"
] |
Python unexpected character after line continuation character | 38,491,082 | <p>I am very new to Python. I am constructing a string that is nothing but a path to the network location as follows. But it outputs the error: "Python unexpected character after line continuation character". Please help. I saw this post but I am not sure if it applies to my scenario:</p>
<p><a href="http://stackoverflow.com/questions/7791913/syntaxerror-unexpected-character-after-line-continuation-character-in-python">syntaxerror: "unexpected character after line continuation character in python" math</a></p>
<pre><code>s_path_publish_folder = r"\\" + s_host + "\" + s_publish_folder "\" + s_release_name
</code></pre>
| 0 | 2016-07-20T21:22:55Z | 38,491,119 | <p>One of your <code>\</code> backslashes <em>escapes</em> the <code>"</code> double quote following it. The rest of the string then <em>ends</em> just before the next <code>\</code> backslash, and that second backslash is seen as a line-continuation character. Because there's another <code>"</code> right after that you get your error:</p>
<pre><code>s_path_publish_folder = r"\\" + s_host + "\" + s_publish_folder "\" + s_release_name
# ^^ not end of string ||
# ^--- actual string ---^||
# line continuation /|
# extra character /
</code></pre>
<p>You need to <em>double</em> those backslashes (note the original is also missing a <code>+</code> before the last segment):</p>
<pre><code>s_path_publish_folder = r"\\" + s_host + "\\" + s_publish_folder + "\\" + s_release_name
</code></pre>
<p>Better yet, use the <code>os.path</code> module here; for example, you could use <code>os.path.join()</code>:</p>
<pre><code>s_path_publish_folder = r"\\" + os.path.join(s_host, s_publish_folder, s_release_name)
</code></pre>
<p>or you could use string templating:</p>
<pre><code>s_path_publish_folder = r"\\{}\{}\{}".format(s_host, s_publish_folder, s_release_name)
</code></pre>
| 3 | 2016-07-20T21:25:04Z | [
"python",
"python-3.x"
] |
how can I remove a widget in kivy? | 38,491,124 | <p>I am trying to remove a widget the same way I added it, but without success.
I am working with the kv language and the bind function. With the code below it is possible to add buttons dynamically, but not to remove them.</p>
<p>.py</p>
<pre><code>class PrimeiroScreen(Screen):
def __init__(self, **kwargs):
self.name = 'um'
super(Screen,self).__init__(**kwargs)
def fc2(self):
btn = Button(text="Botão",size_hint=(.1,.1))
self.ids.grade2.add_widget(btn)
btn.bind(on_press=self.printa)
def printa(self,*args):
#btn2 = Button(text="Btn2",size_hint=(.1,.1))#I can add another btn succesfully
self.ids.grade2.add_widget(btn2)#but I can do the same by this way
self.remove_widget(btn)
grade2.remove_widget(self.btn)
</code></pre>
<p>and the .kv</p>
<pre><code><RootScreen>:
PrimeiroScreen:
<PrimeiroScreen>:
GridLayout:
cols: 1
size_hint: (.5,1)
id: grade
Button:
text: "hi!"
on_press: root.fc2()
StackLayout:
orientation: 'bt-rl'
GridLayout:
cols: 2
size_hint: (.5,1)
id: grade2
</code></pre>
<p>Have anybody any idea of the bullshit I did? The python show me the message below: "self.remove_widget(btn)
NameError: global name 'btn' is not defined"
Thanks anybody.</p>
| 0 | 2016-07-20T21:25:18Z | 38,491,700 | <p>Change<br>
<code>btn = Button(text="Botão",size_hint=(.1,.1))</code><br>
to<br>
<code>self.btn = Button(text="Botão",size_hint=(.1,.1))</code><br>
So you make it a class attribute. </p>
<p>And then remove it from the layout you added it to, like this<br>
<code>self.ids.grade2.remove_widget(self.btn)</code></p>
| 0 | 2016-07-20T22:10:30Z | [
"python",
"kivy"
] |
Numpy: "<<" and ">>" operators | 38,491,204 | <p>I was doing some Numpy exercises and came across this example:</p>
<pre><code>z = np.arange(10)
2 << z
</code></pre>
<p>It outputs:
array([ 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024])</p>
<p>There are a few other operators like this for example: z >> 2, z <- z, z >- z</p>
<p>I did a search and surprisingly found nothing on Google.</p>
<p>Can anyone explain what these operators do? Any documentations?</p>
| -3 | 2016-07-20T21:32:05Z | 38,491,310 | <p>The operators <code><<</code> and <code>>></code> are bit shift operators (left and right, respectively). In your particular example (with <code><<</code>), you are performing <code>x = x * 2^z</code> for each array element, resulting in your modified output. The operator <code>>></code> in the same example would yield an output characterized by <code>x = x / 2^z</code> for each array element. </p>
<p>As mentioned earlier, <code><-X</code> is the same as <code>< (-X)</code> (and vice versa for <code>>-</code>); these are not defined Python operators. </p>
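<p>A quick sketch of both operators on a small array:</p>

```python
import numpy as np

z = np.arange(5)

print(2 << z)    # [ 2  4  8 16 32]  ->  2 * 2**z
print(32 >> z)   # [32 16  8  4  2]  -> 32 // 2**z (integer division)
```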
| 1 | 2016-07-20T21:40:27Z | [
"python",
"numpy"
] |
Python: Replace keys in a nested dictionary | 38,491,318 | <p>I have a nested dictionary <code>{1: {2: {3: None}}}</code> and a dictionary that maps keys of the nested dictionary to a set of values such as <code>{1: x, 2: y, 3: z}</code>. I want to transform the nested dictionary to this form <code>{x: {y: {z: None}}}</code>. I have tried a couple of recursive functions but I keep going in circles and confusing myself. What is the best way to achieve this?</p>
<p>Edit: I forgot to mention that the level of nesting is arbitrary. The above is a simple example.</p>
| 2 | 2016-07-20T21:40:42Z | 38,491,565 | <p>You need to recurse through the dictionary while building a new one with new keys. Note that if you have a list or tuple in there somewhere that has other dictionaries in it, they won't be processed - you'd have to add some code to do that. You can actually do this without building a new dictionary, but I think this way is simpler.</p>
<pre><code>od = { 1: { 2: { 3: None }}}
kd = { 1: 'x', 2: 'y', 3: 'z' }
def replace_keys(old_dict, key_dict):
new_dict = { }
for key in old_dict.keys():
new_key = key_dict.get(key, key)
if isinstance(old_dict[key], dict):
new_dict[new_key] = replace_keys(old_dict[key], key_dict)
else:
new_dict[new_key] = old_dict[key]
return new_dict
nd = replace_keys(od, kd)
print nd
</code></pre>
<p>outputs:</p>
<pre><code>{'x': {'y': {'z': None}}}
</code></pre>
| 1 | 2016-07-20T21:59:26Z | [
"python",
"dictionary",
"recursion",
"nested"
] |
Efficient way for python date string manipulation | 38,491,376 | <p>I want to turn '07/18/2013' to '07/2013' and there are a lot of these strings to be processed. What would be the most efficient way to do it?</p>
<p>I am thinking of using </p>
<pre><code>''.join(['07/18/2013'[0:3],'07/18/2013'[6:]])
</code></pre>
| 1 | 2016-07-20T21:44:43Z | 38,491,452 | <p>Use datetime module:</p>
<pre><code>import datetime
print datetime.datetime.strptime("07/18/2013", '%m/%d/%Y').strftime('%m/%Y')
</code></pre>
| 1 | 2016-07-20T21:50:49Z | [
"python",
"string",
"date"
] |
Efficient way for python date string manipulation | 38,491,376 | <p>I want to turn '07/18/2013' to '07/2013' and there are a lot of these strings to be processed. What would be the most efficient way to do it?</p>
<p>I am thinking of using </p>
<pre><code>''.join(['07/18/2013'[0:3],'07/18/2013'[6:]])
</code></pre>
| 1 | 2016-07-20T21:44:43Z | 38,491,471 | <p>Look into strftime and strptime.</p>
<p>Assuming you start with the string <em>s</em> you can put it into a datetime object using strptime then take that back out into a string with only the necessary fields using strftime. I didn't actually run this code so I don't know if it is perfect, but the idea is here.</p>
<pre><code>from datetime import datetime

temp = datetime.strptime(s, "%m/%d/%Y")
final = temp.strftime("%m/%Y")
</code></pre>
<p>You can find info on the datetime functions here <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">https://docs.python.org/2/library/datetime.html</a></p>
| 3 | 2016-07-20T21:52:11Z | [
"python",
"string",
"date"
] |
Pyspark Dataframe Join using UDF | 38,491,377 | <p>I'm trying to create a custom join for two dataframes (df1 and df2) in PySpark (similar to <a href="http://stackoverflow.com/questions/30132533/custom-join-with-non-equal-keys">this</a>), with code that looks like this:</p>
<pre><code>my_join_udf = udf(lambda x, y: isJoin(x, y), BooleanType())
my_join_df = df1.join(df2, my_join_udf(df1.col_a, df2.col_b))
</code></pre>
<p>The error message I'm getting is:</p>
<pre><code>java.lang.RuntimeException: Invalid PythonUDF PythonUDF#<lambda>(col_a#17,col_b#0), requires attributes from more than one child
</code></pre>
<p>Is there a way to write a PySpark UDF that can process columns from two separate dataframes?</p>
| 2 | 2016-07-20T21:44:47Z | 38,498,610 | <p>Theoretically you can join and filter:</p>
<pre><code>df1.join(df2).where(my_join_udf(df1.col_a, df2.col_b))
</code></pre>
<p>but in general you shouldn't to it all. Any type of <code>join</code> which is not based on equality requires a full Cartesian product (same as the answer) which is rarely acceptable (see also <a href="http://stackoverflow.com/q/32952080/1560062">Why using a UDF in a SQL query leads to cartesian product?</a>).</p>
| 3 | 2016-07-21T08:13:13Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] |
Python efficient way to copy one dictionary as value of key | 38,491,461 | <p>I am working on a monitoring tool in python.
It has a global dictionary with all hosts as keys and their details as values.</p>
<p>At the moment I do it like this:</p>
<pre><code>host = hostname
hostDictionary["a"] = a
hostDictionary["b"] = b
hostDictionary["c"] = c
globalDictionary[host] = hostDictionary
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Is this copying the whole hostDictionary into global's key or is it just referencing it?</p></li>
<li><p>Is there a better way to do this? </p></li>
<li><p>Is this faster?</p>
<pre><code>globalDictionary[host]["a"] = a
globalDictionary[host]["b"] = b
globalDictionary[host]["c"] = c
</code></pre></li>
</ol>
| 1 | 2016-07-20T21:51:23Z | 38,491,542 | <p>1/2 - It is a reference; to make a copy, do it this way: </p>
<pre><code>from copy import copy
globalDictionary[host] = copy(hostDictionary)
</code></pre>
<p>3 - I don't think so; copying the whole object at once can be quicker than accessing each element individually.</p>
| 1 | 2016-07-20T21:57:59Z | [
"python",
"dictionary"
] |
Python efficient way to copy one dictionary as value of key | 38,491,461 | <p>I am working on a monitoring tool in python.
It has a global dictionary with all hosts as keys and their details as values.</p>
<p>At the moment I do it like this:</p>
<pre><code>host = hostname
hostDictionary["a"] = a
hostDictionary["b"] = b
hostDictionary["c"] = c
globalDictionary[host] = hostDictionary
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Is this copying the whole hostDictionary into global's key or is it just referencing it?</p></li>
<li><p>Is there a better way to do this? </p></li>
<li><p>Is this faster?</p>
<pre><code>globalDictionary[host]["a"] = a
globalDictionary[host]["b"] = b
globalDictionary[host]["c"] = c
</code></pre></li>
</ol>
| 1 | 2016-07-20T21:51:23Z | 38,491,564 | <ol>
<li><p>Nope, it ain't copying the whole dictionary, it's just referencing it. To check this, the best method is to do this - </p>
<pre><code>host = 'hostname'
hostDictionary = {}
globalDictionary = {}
hostDictionary["a"] = a
hostDictionary["b"] = b
hostDictionary["c"] = c
globalDictionary[host] = hostDictionary`
</code></pre>
<p>Checking it - </p>
<pre><code>globalDictionary[host] is hostDictionary
</code></pre>
<p>Output - </p>
<pre><code>True
</code></pre>
<p>Which means both have the same id.</p></li>
<li><p>To copy you have to use, <code>copy()</code> or <code>deepcopy()</code></p>
<pre><code>globalDictionary[host] = hostDictionary.copy()
globalDictionary[host] is hostDictionary
</code></pre>
<p>which gives youe <code>False</code> and proves its getting copied.</p></li>
<li><p>It depends on your purpose actually. But in any case direct assignment is obviously faster.</p></li>
</ol>
| 1 | 2016-07-20T21:59:00Z | [
"python",
"dictionary"
] |
Python efficient way to copy one dictionary as value of key | 38,491,461 | <p>I am working on a monitoring tool in python.
It has a global dictionary with all hosts as keys and their details as values.</p>
<p>At the moment I do it like this:</p>
<pre><code>host = hostname
hostDictionary["a"] = a
hostDictionary["b"] = b
hostDictionary["c"] = c
globalDictionary[host] = hostDictionary
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Is this copying the whole hostDictionary into global's key or is it just referencing it?</p></li>
<li><p>Is there a better way to do this? </p></li>
<li><p>Is this faster?</p>
<pre><code>globalDictionary[host]["a"] = a
globalDictionary[host]["b"] = b
globalDictionary[host]["c"] = c
</code></pre></li>
</ol>
| 1 | 2016-07-20T21:51:23Z | 38,491,591 | <p>I get this from Python CLI:</p>
<pre><code>>>> a = {}
>>> b = {}
>>> c = {}
>>> a["foo"] = 1
>>> b = a
>>> c = a.copy()
>>> b["bar"] = 2
>>> a
{'foo': 1, 'bar': 2}
>>> b
{'foo': 1, 'bar': 2}
>>> c
{'foo': 1}
</code></pre>
<p>Which means:</p>
<ol>
<li><p>You are just referencing <code>hostDictionary</code> in the key for <code>globalDictionary</code>. If you want a copy, call <code>hostDictionary.copy()</code> or <code>hostDictionary.deepcopy()</code> like hashcode55 said.</p></li>
<li><p>"Better" depends on your use case. But you could have just declared <code>hostDictionary = {'a': a, 'b': b, 'c': c}</code>, for example.</p></li>
<li><p>It's slower on the Python interpreter side to do what you put there, since (assuming there's nothing built in to optimize things back to your original example) the engine will need to look up the object referenced by <code>globalDictionary[host]</code> for each operation before writing.</p></li>
</ol>
| 1 | 2016-07-20T22:01:03Z | [
"python",
"dictionary"
] |
Python efficient way to copy one dictionary as value of key | 38,491,461 | <p>I am working on a monitoring tool in python.
It has a global dictionary with all hosts as keys and their details as values.</p>
<p>At the moment I do it like this:</p>
<pre><code>host = hostname
hostDictionary["a"] = a
hostDictionary["b"] = b
hostDictionary["c"] = c
globalDictionary[host] = hostDictionary
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Is this copying the whole hostDictionary into global's key or is it just referencing it?</p></li>
<li><p>Is there a better way to do this? </p></li>
<li><p>Is this faster?</p>
<pre><code>globalDictionary[host]["a"] = a
globalDictionary[host]["b"] = b
globalDictionary[host]["c"] = c
</code></pre></li>
</ol>
| 1 | 2016-07-20T21:51:23Z | 38,491,642 | <p>I think it will be faster if you initialize the dict like this:</p>
<pre><code>globalDictionary[host] = {
"a": a,
"b": b,
"c": c
}
</code></pre>
<p>Also if you don't want to use copy or deepcopy, you can copy your dictionary like this:</p>
<pre><code>globalDictionary[host] = dict(hostDictionary.items())
</code></pre>
<p>or</p>
<pre><code>globalDictionary[host] = {k:v for k,v in hostDictionary.items()}
</code></pre>
| 1 | 2016-07-20T22:05:12Z | [
"python",
"dictionary"
] |
Internal Server Error when get the tags of the virtual guest | 38,491,538 | <p>The API is returning Internal Server Error when I try to get the tags of the virtual guests of a customer account.</p>
<p>Code example using the SoftLayer API library: </p>
<pre><code>api = SoftLayer.Client(username=customer_id, api_key=customer_apikey)
api['Account'].getVirtualGuests(mask='fullyQualifiedDomainName,tagReferences.tag.name')
</code></pre>
<p>The exception is:</p>
<pre><code>File "scripts/getting_tags.py", line 16, in <module>
for item in func(mask='fullyQualifiedDomainName,tagReferences.tag.name'):
File "/home/mfilipe/workspace/SoftLayerBilling/venv/lib/python2.7/site-packages/SoftLayer/API.py", line 362, in call_handler
return self(name, *args, **kwargs)
File "/home/mfilipe/workspace/SoftLayerBilling/venv/lib/python2.7/site-packages/SoftLayer/API.py", line 330, in call
return self.client.call(self.name, name, *args, **kwargs)
File "/home/mfilipe/workspace/SoftLayerBilling/venv/lib/python2.7/site-packages/SoftLayer/API.py", line 226, in call
return self.transport(request)
File "/home/mfilipe/workspace/SoftLayerBilling/venv/lib/python2.7/site-packages/SoftLayer/transports.py", line 162, in __call__
raise exceptions.TransportError(ex.response.status_code, str(ex))
SoftLayer.exceptions.TransportError: TransportError(500): 500 Server Error: Internal Server Error
</code></pre>
<p>Couple months ago that API call was working properly. When I execute the same call for the hardware (api['Account'].getHardware) or remove tagReferences from mask, it works.</p>
| 0 | 2016-07-20T21:57:47Z | 38,492,656 | <p>It looks like the error occurs because the response contains a large amount of data; try adding limits to your request:</p>
<pre><code>api = SoftLayer.Client(username=customer_id, api_key=customer_apikey)
api['Account'].getVirtualGuests(mask='fullyQualifiedDomainName,tagReferences.tag.name',limit=10, offset=0)
</code></pre>
<p>for more information about limits see:</p>
<p><a href="http://softlayer-api-python-client.readthedocs.io/en/latest/api/client/" rel="nofollow">http://softlayer-api-python-client.readthedocs.io/en/latest/api/client/</a></p>
<p>Regards</p>
| 0 | 2016-07-20T23:54:30Z | [
"python",
"softlayer"
] |
Improving runtime of weighted moving average filter function? | 38,491,572 | <p>I have a weighted moving average function which smooths a curve by averaging 3*width values to the left and to the right of each point using a gaussian weighting mechanism. I am only worried about smoothing a region bounded by [start, end]. The following code works, but the problem is runtime with large arrays.</p>
<pre><code>import numpy as np
def weighted_moving_average(x, y, start, end, width = 3):
def gaussian(x, a, m, s):
        return a*np.exp(-(x-m)**2/(2*s**2))
cut = (x>=start-3*width)*(x<=end+3*width)
x, y = x[cut], y[cut]
x_avg = x[(x>=start)*(x<=end)]
y_avg = np.zeros(len(x_avg))
bin_vals = np.arange(-3*width,3*width+1)
weights = gaussian(bin_vals, 1, 0, width)
for i in range(len(x_avg)):
y_vals = y[i:i+6*width+1]
y_avg[i] = np.average(y_vals, weights = weights)
return x_avg, y_avg
</code></pre>
<p>From my understanding, it is generally inefficient to loop through a NumPy array. I was wondering if anyone had an idea to replace the for loop with something more runtime efficient.</p>
<p>Thanks</p>
| 1 | 2016-07-20T21:59:56Z | 38,491,834 | <p>That slicing and summing/averaging on a weighted window basically corresponds to 1D convolution with the kernel being flipped. Now, for <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow"><code>1D</code> convolution</a>, NumPy has a very efficient implementation in <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow"><code>np.convolve</code></a> and that could be used to get rid of the loop and give us <code>y_avg</code>. Thus, we would have a vectorized implementation like so -</p>
<pre><code>y_sums = np.convolve(y,weights[::-1],'valid')
y_avg = np.true_divide(y_sums,weights.sum())
</code></pre>
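<p>As a quick sanity check on hypothetical data, the vectorized result matches the explicit loop from the question:</p>

```python
import numpy as np

width = 3
y = np.arange(40, dtype=float)
bin_vals = np.arange(-3 * width, 3 * width + 1)
weights = np.exp(-bin_vals**2 / (2.0 * width**2))   # Gaussian window

# explicit loop, as in the question
loop = np.array([np.average(y[i:i + 6 * width + 1], weights=weights)
                 for i in range(len(y) - 6 * width)])

# vectorized equivalent: correlation == convolution with a flipped kernel
vec = np.convolve(y, weights[::-1], 'valid') / weights.sum()

print(np.allclose(loop, vec))  # True
```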
| 2 | 2016-07-20T22:21:20Z | [
"python",
"numpy",
"runtime",
"ipython",
"vectorization"
] |
Improving runtime of weighted moving average filter function? | 38,491,572 | <p>I have a weighted moving average function which smooths a curve by averaging 3*width values to the left and to the right of each point using a gaussian weighting mechanism. I am only worried about smoothing a region bounded by [start, end]. The following code works, but the problem is runtime with large arrays.</p>
<pre><code>import numpy as np
def weighted_moving_average(x, y, start, end, width = 3):
def gaussian(x, a, m, s):
        return a*np.exp(-(x-m)**2/(2*s**2))
cut = (x>=start-3*width)*(x<=end+3*width)
x, y = x[cut], y[cut]
x_avg = x[(x>=start)*(x<=end)]
y_avg = np.zeros(len(x_avg))
bin_vals = np.arange(-3*width,3*width+1)
weights = gaussian(bin_vals, 1, 0, width)
for i in range(len(x_avg)):
y_vals = y[i:i+6*width+1]
y_avg[i] = np.average(y_vals, weights = weights)
return x_avg, y_avg
</code></pre>
<p>From my understanding, it is generally inefficient to loop through a NumPy array. I was wondering if anyone had an idea to replace the for loop with something more runtime efficient.</p>
<p>Thanks</p>
| 1 | 2016-07-20T21:59:56Z | 38,491,867 | <p>The main concern with looping over a large array is that the memory allocation for the large array can be expensive, and the whole thing has to be initialized before the loop can start.</p>
<p>In this particular case I'd go with what Divakar is saying.</p>
<p>In general, if you find yourself in a circumstance where you <em>really need</em> to iterate over a large collection, use <em>iterators</em> instead of arrays. For a relatively simple case like this, just replace <code>range</code> with <code>xrange</code> (see <a href="https://docs.python.org/2/library/functions.html#xrange" rel="nofollow">https://docs.python.org/2/library/functions.html#xrange</a>).</p>
| 0 | 2016-07-20T22:24:58Z | [
"python",
"numpy",
"runtime",
"ipython",
"vectorization"
] |
Python script: problems with shebang line (unix) | 38,491,587 | <p>I am trying to get a feel for the Flask microframework by launching a test application to local server. When trying to run my code, <code>app.py</code>, I keep getting the error message:</p>
<pre><code>-bash: ./app.py: /flask/bin/python: bad interpreter: No such file or directory
</code></pre>
<p>Here is the basic code <a href="http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask" rel="nofollow">(taken from here)</a> for <code>app.py</code>, which lives in my todo-api directory:</p>
<pre><code>#!/flask/bin/python
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Hello, World!"
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I've checked the file path to the python interpreter, and it should exist:</p>
<pre><code>:bin $ pwd python
Users/me/Documents/Python/todo-api/flask/bin
</code></pre>
<p>I have followed the tutorial to the T; I've tried changing the shebang line to:</p>
<pre><code>#!/flask/bin/python2.x
#!flask/bin/python
#!/flask/bin/env python
</code></pre>
<p>But to no avail. I am not that knowledgeable about bash, and have tried looking up what is going on, but the solutions to folks with similar problems have not worked for me; is there something going on behind the scenes that I am not understanding?</p>
| 0 | 2016-07-20T22:00:45Z | 38,491,625 | <p><code>pwd</code> tells you the current directory. It doesn't tell you where a command is located. The output from that command is a red herring.</p>
<p>You may be looking for <code>which python</code>. Put that path into your shebang line. Note that this will give you the Python interpreter from your <code>$PATH</code>, which may or may not be the right one.</p>
<p>The standard shebang line for Python scripts is</p>
<pre><code>#!/usr/bin/env python
</code></pre>
<p>or</p>
<pre><code>#!/usr/bin/python
</code></pre>
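If you are unsure which absolute interpreter path to put on the shebang line, the interpreter itself can report it. A small illustrative sketch of mine (not part of the original answer):

```python
import os.path
import sys

# sys.executable is the absolute path of the interpreter running this
# script; that is exactly the kind of path a shebang line expects
# (useful, for example, to find the python inside a virtualenv).
print(sys.executable)
print(os.path.isabs(sys.executable))  # True on a normal installation
```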
| 1 | 2016-07-20T22:04:07Z | [
"python",
"bash",
"shebang"
] |
Python script: problems with shebang line (unix) | 38,491,587 | <p>I am trying to get a feel for the Flask microframework by launching a test application to local server. When trying to run my code, <code>app.py</code>, I keep getting the error message:</p>
<pre><code>-bash: ./app.py: /flask/bin/python: bad interpreter: No such file or directory
</code></pre>
<p>Here is the basic code <a href="http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask" rel="nofollow">(taken from here)</a> for <code>app.py</code>, which lives in my todo-api directory:</p>
<pre><code>#!/flask/bin/python
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Hello, World!"
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I've checked the file path to the python interpreter, and it should exist:</p>
<pre><code>:bin $ pwd python
Users/me/Documents/Python/todo-api/flask/bin
</code></pre>
<p>I have followed the tutorial to the T; I've tried changing the shebang line to:</p>
<pre><code>#!/flask/bin/python2.x
#!flask/bin/python
#!/flask/bin/env python
</code></pre>
<p>But to no avail. I am not that knowledgeable about bash, and have tried looking up what is going on, but the solutions to folks with similar problems have not worked for me; is there something going on behind the scenes that I am not understanding?</p>
| 0 | 2016-07-20T22:00:45Z | 38,491,659 | <p>Bash shebangs expect an absolute path to the interpreter. So in your case you need to specify the full path to your Python interpreter i.e.:</p>
<pre><code>#!/Users/me/Documents/Python/todo-api/flask/bin/python
</code></pre>
<p>You might want to investigate the use of <code>/usr/bin/env python</code> to be able to use the interpreter that is available in your user's <code>$PATH</code> environment variable. See <a href="http://unix.stackexchange.com/questions/12736/how-does-usr-bin-env-know-which-program-to-use/12751#12751">http://unix.stackexchange.com/questions/12736/how-does-usr-bin-env-know-which-program-to-use/12751#12751</a></p>
| 3 | 2016-07-20T22:06:48Z | [
"python",
"bash",
"shebang"
] |
Python script: problems with shebang line (unix) | 38,491,587 | <p>I am trying to get a feel for the Flask microframework by launching a test application to local server. When trying to run my code, <code>app.py</code>, I keep getting the error message:</p>
<pre><code>-bash: ./app.py: /flask/bin/python: bad interpreter: No such file or directory
</code></pre>
<p>Here is the basic code <a href="http://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask" rel="nofollow">(taken from here)</a> for <code>app.py</code>, which lives in my todo-api directory:</p>
<pre><code>#!/flask/bin/python
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Hello, World!"
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I've checked the file path to the python interpreter, and it should exist:</p>
<pre><code>:bin $ pwd python
Users/me/Documents/Python/todo-api/flask/bin
</code></pre>
<p>I have followed the tutorial to the T; I've tried changing the shebang line to:</p>
<pre><code>#!/flask/bin/python2.x
#!flask/bin/python
#!/flask/bin/env python
</code></pre>
<p>But to no avail. I am not that knowledgeable about bash, and have tried looking up what is going on, but the solutions to folks with similar problems have not worked for me; is there something going on behind the scenes that I am not understanding?</p>
| 0 | 2016-07-20T22:00:45Z | 39,044,051 | <p>I was having a similar issue with trying to setup a python script as an executable for testing some things and realized that bash was getting in the way more than it was helping. I ended up setting up pyinstaller (which is incredibly easy) and then making my script a stand alone executable without bash being in the mix. </p>
<p><strong>Here's what I did (only takes a couple of minutes and no config):</strong>
First; pyinstaller needs: build-essential & python-dev</p>
<p><em>apt-get install build-essential python-dev</em><br>
(or yum, etc... depending on your OS)</p>
<p>Then use the built-in Python package manager to set up pyinstaller:
<em>pip install pyinstaller</em></p>
<p>That's it. Run: pyinstaller --onefile myapp.py (or pyinstaller.exe if your OS needs exe)</p>
<p>If it's successful (and it usually is), your new executable will be in a "dist" folder in the same directory where you ran pyinstaller.</p>
| 0 | 2016-08-19T16:43:31Z | [
"python",
"bash",
"shebang"
] |
change endian of hex in bitstring unpack | 38,491,602 | <p>I'm using the module bitstring to unpack a 24 byte boundary file. I don't have control over the input file. The default interpretation of the module is apparently big-endian, which is easy to fix when unpacking data types like int or float, but some data I want represented as hex values. When unpacking as hex, it displays the incorrect byte ordering. Is there a fix for this? Example input: <code>D806</code>, desired output: <code>06D8</code></p>
<pre><code>from bitstring import ConstBitStream
fp = ConstBitStream(filename="testfile.bin")
firstChunk = fp.read(2*8)
data=firstChunk.unpack('hex:16')
print(data)
</code></pre>
| 1 | 2016-07-20T22:02:22Z | 38,787,831 | <p>You could use ordinary Python formatting on a little-endian integer interpretation.</p>
<p>Rather than a <code>read</code> then <code>unpack</code> you also can do both together:</p>
<pre><code>print('{:0>4X}'.format(fp.read('uintle:16')))
</code></pre>
<p>This reads the next 16 bits from the stream, interprets them as an unsigned little-endian integer, then formats the value as four hexadecimal characters, right-aligned and padded with zeros.</p>
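The same byte swap can be illustrated with only the standard library's <code>struct</code> module, with no bitstring dependency. This is a sketch of mine using the two example bytes from the question (D8 06):

```python
import struct

raw = b'\xd8\x06'  # the two bytes as they appear in the file: D8 06

big = struct.unpack('>H', raw)[0]     # big-endian reading: 0xD806
little = struct.unpack('<H', raw)[0]  # little-endian reading: 0x06D8

print('{:04X}'.format(big))     # D806
print('{:04X}'.format(little))  # 06D8
```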
| 0 | 2016-08-05T11:11:47Z | [
"python",
"unpack",
"bitstring"
] |
Attaching/Cloning a submenu to more than one menu item | 38,491,714 | <p>So I have a basic menu structure:</p>
<pre><code>menu = gtk.Menu()
item1 = gtk.MenuItem('Item 1')
item2 = gtk.MenuItem('Item 2')
menu.append(item1)
menu.append(item2)
menu.show_all()
</code></pre>
<p>And I have a submenu:</p>
<pre><code>submenu = gtk.Menu()
subitem1 = gtk.MenuItem('Option 1')
subitem2 = gtk.MenuItem('Option 2')
submenu.append(subitem1)
submenu.append(subitem2)
submenu.show_all()
</code></pre>
<p>Now I want to attach this same submenu structure to both of the top level menu items. My first thought was that simply adding <code>.set_submenu(submenu)</code> to both <code>item1</code> and <code>item2</code> should work, but it gives me the error:</p>
<pre><code>Gtk-WARNING **: gtk_menu_attach_to_widget(): menu already attached to GtkMenuItem
</code></pre>
<p>and the submenu only shows up on the last item it was attached to.</p>
<p>In practice I have a lot more than two top level items, and I need to attach the same submenu structure to most of them. So defining the same submenu structure for each item is not really an option. What is the correct way of doing this?</p>
| 1 | 2016-07-20T22:11:24Z | 38,568,650 | <p>As the warning you get on the terminal specifies, you cannot attach the same instance of GtkMenu to different menus â just like you cannot add the same widget to multiple containers.</p>
<p>Your menu hierarchy seems to be overly complex if you find the need to have the same sub-menu in multiple places; it will undoubtedly confuse users, who rely on positional memory to find actions in a hierarchical menu structure.</p>
<p>In any case, if you still want to repeat menus, you can use a simple "menu factory" function, and generate multiple instances from a common GtkBuilder XML description of the menu.</p>
| 0 | 2016-07-25T13:07:19Z | [
"python",
"menu",
"gtk",
"submenu"
] |
What will be the Hadoop Streaming Run Command to access the files in the Sub Directory | 38,491,716 | <p>I have written a mapper program in python for hadoop Map-Reduce framework.</p>
<p>And I am executing it through the command:</p>
<p><code>hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming.jar -mapper "python wordcount_mapper.py" -file wordcount_mapper.py -input inputfile -output outputfile3</code> </p>
<p>It is working properly if the directory <strong>inputfile</strong> contains only file.</p>
<p>But it is not working and shows an error if there are subdirectories inside the directory <strong>inputfile</strong>. For example, I have two subdirectories (KAKA and KAKU) in <strong>inputfile</strong>.</p>
<p>And the error is showing :</p>
<blockquote>
<p>16/07/20 17:01:40 ERROR streaming.StreamJob: Error Launching job : Not
a file: hdfs://secondary/user/team/inputfile/kaka</p>
</blockquote>
<p>So, my question is: what will be the command to reach the files in the subdirectories?</p>
| 0 | 2016-07-20T22:11:48Z | 38,517,426 | <p>Use glob (wildcard) patterns: </p>
<p><code>inputfile/*</code> - will work for 1 level of sub directories</p>
<p><code>inputfile/*/*</code> - will work for 2 level of sub directories </p>
<p>Run As:</p>
<p><code>hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming.jar -mapper "python wordcount_mapper.py" -file wordcount_mapper.py -input inputfile/* -output outputfile3</code></p>
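HDFS glob expansion on <code>-input</code> paths behaves much like local shell globbing. As a rough illustration (a sketch of mine using Python's <code>glob</code> on a temporary local directory, not HDFS itself), one wildcard level matches the subdirectories and two levels match the files inside them:

```python
import glob
import os
import tempfile

# Local stand-in for inputfile/{kaka,kaku}/<part files>; HDFS glob
# expansion on input paths follows the same one-star-per-level idea.
root = tempfile.mkdtemp()
for sub in ('kaka', 'kaku'):
    os.makedirs(os.path.join(root, sub))
    with open(os.path.join(root, sub, 'part-00000'), 'w') as f:
        f.write('data\n')

one_level = sorted(glob.glob(os.path.join(root, '*')))       # the subdirectories
two_level = sorted(glob.glob(os.path.join(root, '*', '*')))  # files inside them

print(len(one_level))  # 2
print(len(two_level))  # 2
```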
| 0 | 2016-07-22T02:55:32Z | [
"python",
"python-2.7",
"hadoop",
"hadoop-streaming"
] |
python how to find if a dictionary contains data from other dictionary | 38,491,717 | <p>In Python, how do I find whether a dictionary contains the data from another dictionary?</p>
<p>My data is assigned to a variable like this:</p>
<pre><code>childDict = {
"assignee" : {
"first":"myFirstName",
"last":"myLastName"
},
"status" : "alive"
}
</code></pre>
<p>I have another dictionary named masterDict with similar hierarchy but with some more data in it.</p>
<pre><code>masterDict = {
"description": "sample description",
"assignee" : {
"first" : "myFirstName",
"last" : "myLastName"
},
"status" : "dead",
"identity": 1234
}
</code></pre>
<p>Now I need to read through childDict and find out if masterDict has these values in them or not.</p>
<p>The data is nested and can have more depth.
In the above example, since the status didn't match, it should return false; otherwise it should have returned true. How do I compare them? I am new to Python. Thanks for your help.</p>
| 2 | 2016-07-20T22:11:52Z | 38,491,928 | <p>Note that there were some errors in your dictionary (missing commas).</p>
<pre><code>childDict1 = {
"assignee": {
"first":"myFirstName",
"last":"myLastName"
},
"status" : "alive"
}
childDict2 = {
"assignee": {
"first":"myFirstName",
"last":"myLastName"
},
"status" : "dead"
}
masterDict = {
"description": "sample description",
"assignee": {
"first":"myFirstName",
"last":"myLastName"
},
"status": "dead",
"identity": 1234
}
def contains_subdict(master, child):
if isinstance(master, dict) and isinstance(child, dict):
for key in child.keys():
            if key in master:
                if not contains_subdict(master[key], child[key]):
                    return False
            else:
                # a key present in child but missing from master means no match
                return False
        return True
else:
if child == master:
return True
return False
print contains_subdict(masterDict, childDict1)
print contains_subdict(masterDict, childDict2)
</code></pre>
<p>Running the code produces the output:</p>
<pre><code>False
True
</code></pre>
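An equivalent, more compact formulation of the same idea (a sketch of mine, not from the original answer) uses <code>all()</code>; it also explicitly treats a key that is missing from the master dictionary as a mismatch:

```python
def is_subset(master, child):
    # A dict "child" is contained in "master" if every key exists in
    # master and its value is (recursively) contained as well; leaf
    # values must compare equal.
    if isinstance(child, dict):
        return isinstance(master, dict) and all(
            key in master and is_subset(master[key], value)
            for key, value in child.items()
        )
    return master == child

master = {'assignee': {'first': 'myFirstName', 'last': 'myLastName'},
          'status': 'dead', 'identity': 1234}

print(is_subset(master, {'status': 'alive'}))                   # False
print(is_subset(master, {'status': 'dead',
                         'assignee': {'last': 'myLastName'}}))  # True
print(is_subset(master, {'missing': 1}))                        # False: absent key
```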
| 2 | 2016-07-20T22:30:12Z | [
"python",
"dictionary",
"compare",
"contains"
] |
Reading a github file using python returns HTML tags | 38,491,722 | <p>I am trying to read a text file saved in github using requests package.
Here is the python code I am using:</p>
<pre><code> import requests
url = 'https://github.com/...../filename'
page = requests.get(url)
print page.text
</code></pre>
<p>Instead of getting the text, I am reading HTML tags.
How can I read the text from the file instead of HTML tags?</p>
| 1 | 2016-07-20T22:12:31Z | 38,491,755 | <p>You can access a text version by changing the beginning of your link to </p>
<pre><code>https://raw.githubusercontent.com/
</code></pre>
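The URL rewrite can also be done programmatically. A small sketch of mine with a hypothetical repository URL (raw URLs drop the <code>/blob</code> segment in addition to changing the host):

```python
# Hypothetical "blob" URL of a file on GitHub's web interface.
url = 'https://github.com/someuser/somerepo/blob/master/docs/readme.txt'

# Raw file URLs move to raw.githubusercontent.com and drop '/blob'.
raw = url.replace('https://github.com/', 'https://raw.githubusercontent.com/', 1)
raw = raw.replace('/blob/', '/', 1)

print(raw)  # https://raw.githubusercontent.com/someuser/somerepo/master/docs/readme.txt
```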
| 3 | 2016-07-20T22:14:52Z | [
"python"
] |
Reading a github file using python returns HTML tags | 38,491,722 | <p>I am trying to read a text file saved in github using requests package.
Here is the python code I am using:</p>
<pre><code> import requests
url = 'https://github.com/...../filename'
page = requests.get(url)
print page.text
</code></pre>
<p>Instead of getting the text, I am reading HTML tags.
How can I read the text from the file instead of HTML tags?</p>
| 1 | 2016-07-20T22:12:31Z | 38,497,199 | <p>There are some good solutions already, but if you use <code>requests</code> just follow Github's <a href="https://developer.github.com/v3/repos/contents/#get-contents" rel="nofollow">API</a>.</p>
<p>The endpoint for all content is</p>
<pre><code>GET /repos/:owner/:repo/contents/:path
</code></pre>
<p>But keep in mind that the default behavior of Github's API is to encode the content using <code>base64</code>.</p>
<p>In your case you would do the following:</p>
<pre><code>#!/usr/bin/env python3
import base64
import requests
url = 'https://api.github.com/repos/{user}/{repo_name}/contents/{path_to_file}'
req = requests.get(url)
if req.status_code == requests.codes.ok:
req = req.json() # the response is a JSON
# req is now a dict with keys: name, encoding, url, size ...
# and content. But it is encoded with base64.
        content = base64.b64decode(req['content'])
else:
print('Content was not found.')
</code></pre>
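The base64 decode step can be exercised without hitting the network. This sketch of mine round-trips a made-up payload the way the API's <code>content</code> field would be decoded (<code>b64decode</code> accepts either str or bytes and returns bytes):

```python
import base64

# Stand-in for the 'content' field of the API response (base64 text).
payload = base64.b64encode(b'Hello from the contents API!\n').decode('ascii')

# Decode it back to the original file bytes.
decoded = base64.b64decode(payload)
print(decoded.decode('utf-8'))
```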
| 1 | 2016-07-21T07:04:01Z | [
"python"
] |
How can i select a file with python? | 38,491,810 | <p><a href="http://i.stack.imgur.com/mMIsq.png" rel="nofollow">screenshot</a></p>
<p>I can do it like this in Java, but I can't do it in Python:</p>
<pre><code>StringSelection ss = new StringSelection("C:\\Users\\Mert\\Desktop\\hello.png");
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(ss, null);
Robot robot = new Robot();
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
robot.keyPress(KeyEvent.VK_CONTROL);
robot.keyPress(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_CONTROL);
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
</code></pre>
| -2 | 2016-07-20T22:20:11Z | 38,492,642 | <p>You can open a file GUI similar to Java using Python's <code>tkinter</code>:</p>
<pre><code>from Tkinter import Tk
from tkFileDialog import askopenfilename
Tk().withdraw()
filename = askopenfilename()
print(filename)
</code></pre>
<p>And the Python3 equivalent:</p>
<pre><code>from tkinter import Tk
from tkinter.filedialog import askopenfilename
Tk().withdraw()
filename = askopenfilename()
</code></pre>
| 1 | 2016-07-20T23:53:08Z | [
"python",
"selenium"
] |
Django and MySQL, cannot gain access to connect with the database ERROR:1045 | 38,491,869 | <p>Hello, I am working on a project with a few people. We are using Django 1.9 with MySQL, and someone set up Django to connect with MySQL and then wrote instructions for the rest of us on what we need to download to connect Python with MySQL. I correctly installed MySQL but I have issues with accessing the database. We are all supposed to have access to the DB but I keep getting <code>django.db.utils.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")</code>. Even when I just type mysql into the terminal I get <code>ERROR 1045 (28000): Access denied for user 'stevenJing'@'localhost' (using password: NO)</code>, and typing <code>mysql -u root -p</code> I get <code>ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)</code>. I am not sure what I need to do to fix this, for I cannot even run <code>python3 manage.py runserver</code>; I get a big error, which is:</p>
<pre><code> Performing system checks...
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x10434a510>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 264, in get_new_connection
conn = Database.connect(**conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 116, in inner_run
self.check(display_num_errors=True)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/checks/model_checks.py", line 28, in check_all_models
errors.extend(model.check(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/base.py", line 1178, in check
errors.extend(cls._check_fields(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/base.py", line 1255, in _check_fields
errors.extend(field.check(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 925, in check
errors = super(AutoField, self).check(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 208, in check
errors.extend(self._check_backend_specific_checks(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 317, in _check_backend_specific_checks
return connections[db].validation.check_field(self, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/validation.py", line 18, in check_field
field_type = field.db_type(connection)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 625, in db_type
return connection.data_types[self.get_internal_type()] % data
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 184, in data_types
if self.features.supports_microsecond_precision:
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/features.py", line 53, in supports_microsecond_precision
return self.connection.mysql_version >= (5, 6, 4) and Database.version_info >= (1, 2, 5)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 359, in mysql_version
with self.temporary_connection():
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 564, in temporary_connection
cursor = self.cursor()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 264, in get_new_connection
conn = Database.connect(**conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")
</code></pre>
<p>Here is my settings.py for Django</p>
<pre><code>DATABASES = {
'default': {
# 'ENGINE': 'mysql.connector.django',
'ENGINE': 'django.db.backends.mysql',
'NAME': 'lolProject',
'USER': 'django',
'PASSWORD': 'django-pass',
'HOST': '127.0.0.1',
#'OPTIONS':{'read_default_file': ''},
#'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
</code></pre>
<p>The person who set up the DB is new with MySQL and so am I, how could we get it so if anyone pulls or clones from our git repo they have access to the DB.</p>
| 0 | 2016-07-20T22:25:03Z | 38,492,016 | <p>If people are pulling your code, and you want them to access YOUR database, then they probably shouldn't be using "localhost." If you're getting this error while running on your localhost, too, then it's a permission problem.</p>
<p>The first thing to do is see if you can use the command line MySQL, or another MySQL client, to log in using those credentials.</p>
<p>I get these kinds of errors a lot (unfortunately) and it's usually because I didn't add the user correctly in MySQL. I noticed just putting '%' doesn't account for localhost, I usually have to add 'user'@'localhost' in addition to 'user'@'%' when granting a user privileges.</p>
<p>EDIT: in order to grant privileges, you need to log into mysql as root, typically you'd use something like:</p>
<pre><code>% mysql -u root -p mysql
</code></pre>
<p>The "-p" flag means you will be prompted for the root/admin password; the trailing "mysql" is the schema to use. Obviously you need administrator access in order to do that.</p>
<p>Then, in more recent versions of MySQL, you first create the user:</p>
<pre><code>> create user 'stevenJing'@'localhost' identified by 'password';
</code></pre>
<p>This is actually a different user than 'stevenJing'@'%', so if you're going to connect both from localhost and remote hosts, you actually need both.</p>
<p>Then you need to grant privileges to that user:</p>
<pre><code>> grant all privileges on <schema>.* to 'stevenJing'@'localhost';
</code></pre>
<p>Where schema is obviously your Django schema (lolProject in your Django settings above). Additionally, of course, to grant that user privileges to log in from elsewhere:</p>
<pre><code>> grant all privileges on <schema>.* to 'stevenJing'@'%';
</code></pre>
| 1 | 2016-07-20T22:37:50Z | [
"python",
"mysql",
"django",
"connector"
] |
Django and MySQL, cannot gain access to connect with the database ERROR:1045 | 38,491,869 | <p>Hello, I am working on a project with a few people. We are using Django 1.9 with MySQL, and someone set up Django to connect with MySQL and then wrote instructions for the rest of us on what we need to download to connect Python with MySQL. I correctly installed MySQL but I have issues with accessing the database. We are all supposed to have access to the DB but I keep getting <code>django.db.utils.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")</code>. Even when I just type mysql into the terminal I get <code>ERROR 1045 (28000): Access denied for user 'stevenJing'@'localhost' (using password: NO)</code>, and typing <code>mysql -u root -p</code> I get <code>ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)</code>. I am not sure what I need to do to fix this, for I cannot even run <code>python3 manage.py runserver</code>; I get a big error, which is:</p>
<pre><code> Performing system checks...
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x10434a510>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 264, in get_new_connection
conn = Database.connect(**conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 116, in inner_run
self.check(display_num_errors=True)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/checks/model_checks.py", line 28, in check_all_models
errors.extend(model.check(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/base.py", line 1178, in check
errors.extend(cls._check_fields(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/base.py", line 1255, in _check_fields
errors.extend(field.check(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 925, in check
errors = super(AutoField, self).check(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 208, in check
errors.extend(self._check_backend_specific_checks(**kwargs))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 317, in _check_backend_specific_checks
return connections[db].validation.check_field(self, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/validation.py", line 18, in check_field
field_type = field.db_type(connection)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 625, in db_type
return connection.data_types[self.get_internal_type()] % data
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 184, in data_types
if self.features.supports_microsecond_precision:
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/features.py", line 53, in supports_microsecond_precision
return self.connection.mysql_version >= (5, 6, 4) and Database.version_info >= (1, 2, 5)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 359, in mysql_version
with self.temporary_connection():
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 564, in temporary_connection
cursor = self.cursor()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 264, in get_new_connection
conn = Database.connect(**conn_params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (1045, "Access denied for user 'django'@'localhost' (using password: YES)")
</code></pre>
<p>Here is my settings.py for Django</p>
<pre><code>DATABASES = {
'default': {
# 'ENGINE': 'mysql.connector.django',
'ENGINE': 'django.db.backends.mysql',
'NAME': 'lolProject',
'USER': 'django',
'PASSWORD': 'django-pass',
'HOST': '127.0.0.1',
#'OPTIONS':{'read_default_file': ''},
#'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
</code></pre>
<p>The person who set up the DB is new to MySQL and so am I. How could we set things up so that anyone who pulls or clones from our git repo has access to the DB?</p>
| 0 | 2016-07-20T22:25:03Z | 38,492,873 | <h2>Mysql Privileges</h2>
<p>This error is because the <code>django</code> user does not have access to the <code>lolProject</code> database. You can rectify that by using the <a href="http://dev.mysql.com/doc/refman/5.7/en/grant.html" rel="nofollow">GRANT</a> command. The django user needs a lot of privileges because it needs to create and drop tables as part of migrations, so something like this ought to do it.</p>
<pre><code>GRANT ALL ON lolProject.* TO 'django'@'localhost';
</code></pre>
<p><strong>Update:</strong> Typically you do this by opening the mysql console as <code>mysql -u root</code> where root is an account that's installed by default and usually does not have a password associated with it.</p>
<p>Now this will be fine if everyone has a mysql server on their own computer. If everyone connects to the same computer, it ought to be </p>
<pre><code>GRANT ALL ON lolProject.* TO 'django'@'%';
</code></pre>
<p>Since everyone who pulls from your repo needs access, you ought to use an IP instead of localhost in your <code>settings.py</code></p>
<h2>Sqlite</h2>
<p>Have you considered using sqlite? It has almost no setup required and every one who clones or pulls from your repo gets a copy of the current database (if it has been added to the repo).</p>
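<p>If you go the sqlite route, the settings change is small. A minimal sketch of the <code>DATABASES</code> setting (the <code>BASE_DIR</code> variable is the one <code>django-admin startproject</code> normally generates; adjust the path to taste):</p>

```python
# Sketch of an sqlite DATABASES setting for settings.py.
# Assumes the usual BASE_DIR that startproject generates.
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
```

<p>With this in place there is no user or password to share; the database file travels with the repo if you commit it.</p>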
| 1 | 2016-07-21T00:21:58Z | [
"python",
"mysql",
"django",
"connector"
] |
Grouping and ungrouping based on a column | 38,491,881 | <p>My goal is to be able to group rows of a CSV file by a column value, and also to perform the inverse operation. To give an example, it is desired to be able to transform back and forth between these two formats:</p>
<pre><code>uniqueId, groupId, feature_1, feature_2
1, 100, text of 1, 10
2, 100, some text of 2, 20
3, 200, text of 3, 30
4, 200, more text of 4, 40
5, 100, another text of 5, 50
</code></pre>
<p>Grouped on the groupId:</p>
<pre><code>uniqueId, groupId, feature_1, feature_2
1|2|5, 100, text of 1|some text of 2|another text of 5, 10|20|50
3|4, 200, text of 3|more text of 4, 30|40
</code></pre>
<p>The delimiter (here |) is assumed to not exist anywhere in the data.</p>
<p>I am trying to use Pandas to perform this transformation. My code so far can access the cell of rows grouped by a groupId, but I do not know how to populate the new dataframe.</p>
<p>How can my method be completed to accomplish the transformation into the desired new df?</p>
<p>What would an inverse method look like, that transforms the new df back to the original one?</p>
<p>If R is a better tool for this job, I am also open to suggestions in R.</p>
<pre><code>import pandas as pd
def getGroupedDataFrame(df, groupByField, delimiter):
''' Create a df with the rows grouped on groupByField, values separated by delimiter'''
groupIds = set(df[groupByField])
df_copy = pd.DataFrame(index=groupIds,columns=df.columns)
# iterate over the different groupIds
for groupId in groupIds:
groupRows = df.loc[df[groupByField] == groupId]
# for all rows of the groupId
for index, row in groupRows.iterrows():
# for all columns in the df
for column in df.columns:
print row[column]
# this prints the value the cell
# here append row[column] to its cell in the df_copy row of groupId, separated by delimiter
</code></pre>
| 1 | 2016-07-20T22:26:20Z | 38,492,356 | <p>To perform the grouping, you can <code>groupby</code> on <code>'groupId'</code>, and then within each group perform a join with your given delimiter on each column:</p>
<pre><code>def group_delim(grp, delim='|'):
"""Join each columns within a group by the given delimiter."""
return grp.apply(lambda col: delim.join(col))
# Make sure the DataFrame consists of strings, then apply grouping function.
grouped = df.astype(str).groupby('groupId').apply(group_delim)
# Drop the grouped groupId column, and replace it with the index groupId.
grouped = grouped.drop('groupId', axis=1).reset_index()
</code></pre>
<p>The grouped output:</p>
<pre><code> groupId uniqueId feature_1 feature_2
0 100 1|2|5 text of 1|some text of 2|another text of 5 10|20|50
1 200 3|4 text of 3|more text of 4 30|40
</code></pre>
<p>Similar idea for the inverse process, but since each row is a unique group you can just use a regular <code>apply</code>, no need for a <code>groupby</code>:</p>
<pre><code>def ungroup_delim(col, delim='|'):
"""Split elements in a column by the given delimiter, stacking columnwise"""
return col.str.split(delim, expand=True).stack()
# Apply the ungrouping function, and forward fill elements that aren't grouped.
ungrouped = grouped.apply(ungroup_delim).ffill()
# Drop the unwieldy altered index for a new one.
ungrouped = ungrouped.reset_index(drop=True)
</code></pre>
<p>And ungrouping yields the original data:</p>
<pre><code> groupId uniqueId feature_1 feature_2
0 100 1 text of 1 10
1 100 2 some text of 2 20
2 100 5 another text of 5 50
3 200 3 text of 3 30
4 200 4 more text of 4 40
</code></pre>
<p>To use different delimiters, you'd just pass <code>delim</code> as an argument to <code>apply</code>:</p>
<pre><code>foo.apply(group_delim, delim=';')
</code></pre>
<p>As a side note, in general iterating over DataFrames is quite slow. Whenever possible you'll want to use a vectorized approach like what I've done above.</p>
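<p>As an aside on that point: on recent pandas versions the grouping half can also be collapsed into a single <code>agg</code> call (a sketch equivalent in spirit to <code>group_delim</code> above, shown on toy data mirroring the question):</p>

```python
# Sketch: the grouping step written as one agg call on recent pandas.
import pandas as pd

df = pd.DataFrame({
    "uniqueId":  ["1", "2", "3", "4", "5"],
    "groupId":   ["100", "100", "200", "200", "100"],
    "feature_1": ["text of 1", "some text of 2", "text of 3",
                  "more text of 4", "another text of 5"],
    "feature_2": ["10", "20", "30", "40", "50"],
})

# "|".join is applied to each non-key column within each group.
grouped = df.groupby("groupId", as_index=False).agg("|".join)
print(grouped)
```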
| 4 | 2016-07-20T23:13:31Z | [
"python",
"csv",
"pandas"
] |
Grouping and ungrouping based on a column | 38,491,881 | <p>My goal is to be able to group rows of a CSV file by a column value, and also to perform the inverse operation. To give an example, it is desired to be able to transform back and forth between these two formats:</p>
<pre><code>uniqueId, groupId, feature_1, feature_2
1, 100, text of 1, 10
2, 100, some text of 2, 20
3, 200, text of 3, 30
4, 200, more text of 4, 40
5, 100, another text of 5, 50
</code></pre>
<p>Grouped on the groupId:</p>
<pre><code>uniqueId, groupId, feature_1, feature_2
1|2|5, 100, text of 1|some text of 2|another text of 5, 10|20|50
3|4, 200, text of 3|more text of 4, 30|40
</code></pre>
<p>The delimiter (here |) is assumed to not exist anywhere in the data.</p>
<p>I am trying to use Pandas to perform this transformation. My code so far can access the cell of rows grouped by a groupId, but I do not know how to populate the new dataframe.</p>
<p>How can my method be completed to accomplish the transformation into the desired new df?</p>
<p>What would an inverse method look like, that transforms the new df back to the original one?</p>
<p>If R is a better tool for this job, I am also open to suggestions in R.</p>
<pre><code>import pandas as pd
def getGroupedDataFrame(df, groupByField, delimiter):
''' Create a df with the rows grouped on groupByField, values separated by delimiter'''
groupIds = set(df[groupByField])
df_copy = pd.DataFrame(index=groupIds,columns=df.columns)
# iterate over the different groupIds
for groupId in groupIds:
groupRows = df.loc[df[groupByField] == groupId]
# for all rows of the groupId
for index, row in groupRows.iterrows():
# for all columns in the df
for column in df.columns:
print row[column]
# this prints the value the cell
# here append row[column] to its cell in the df_copy row of groupId, separated by delimiter
</code></pre>
| 1 | 2016-07-20T22:26:20Z | 38,492,679 | <p>A solution in R:</p>
<p>I define the initial data frame (for clarity)</p>
<pre><code>df <- data.frame(uniqueID = c(1,2,3,4,5),
groupID = c(100,100,200,200,100),
feature_1 = c("text of 1","some text of 2",
"text of 3", "more text of 4",
"another text of 5"),
feature_2 = c(10,20,30,40,50), stringsAsFactors = F)
</code></pre>
<p>To obtain the grouped data frame:</p>
<pre><code># Group and summarise using dplyr
library(dplyr)
grouped <- df %>% group_by(groupID) %>% summarise_each(funs(paste(.,collapse = "|")))
</code></pre>
<p>Output:</p>
<pre><code>grouped
groupID uniqueID feature_1 feature_2
(dbl) (chr) (chr) (chr)
1 100 1|2|5 text of 1|some text of 2|another text of 5 10|20|50
2 200 3|4 text of 3|more text of 4 30|40
</code></pre>
<p>To ungroup and go back to the original data frame:</p>
<pre><code>library(stringr)
apply(grouped, 1, function(x) {
temp <- data.frame(str_split(x, '\\|'), stringsAsFactors = F)
colnames(temp) <- names(x)
temp
}) %>%
bind_rows()
</code></pre>
<p>Output:</p>
<pre><code> groupID uniqueID feature_1 feature_2
(chr) (chr) (chr) (chr)
1 100 1 text of 1 10
2 100 2 some text of 2 20
3 100 5 another text of 5 50
4 200 3 text of 3 30
5 200 4 more text of 4 40
</code></pre>
| 2 | 2016-07-20T23:58:34Z | [
"python",
"csv",
"pandas"
] |
Why can't Python find the built-in print function when calling from an imported function? | 38,491,927 | <p>Take this <strong>main.py</strong>:</p>
<pre><code>from __future__ import print_function
from sub import print
print("hello, world")
</code></pre>
<p>and this <strong>sub.py</strong>:</p>
<pre><code>from __future__ import print_function
def print(*args, **kwargs):
return __builtins__.print(*args, **kwargs)
</code></pre>
<p>Using Python 2.7.9, run <code>main.py</code> and you get:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 5, in <module>
print("hello, world")
File "/Users/ien/Studio/songifier/sub.py", line 4, in print
return __builtins__.print(*args, **kwargs)
AttributeError: 'dict' object has no attribute 'print'
</code></pre>
<p>Why does this happen, and how can I make it work?</p>
<p>NOTE: This is an artificial example to isolate the problem, which has arisen in a logging context, where the <code>print</code> function sometimes does some fancy logging, and other times wants to just call the built-in print function.</p>
| 2 | 2016-07-20T22:30:07Z | 38,492,040 | <p>Try this:</p>
<pre><code>from __future__ import print_function
import __builtin__
def print(*args, **kwargs):
return __builtin__.print(*args, **kwargs)
</code></pre>
<hr>
<pre><code>>>> print
<function print at 0x7f80cd622668>
>>> print("Hello", "world", sep="\n")
Hello
world
</code></pre>
<p>The reason for the error you were seeing can be explained better by this excerpt from the <a href="https://docs.python.org/2.7/reference/executionmodel.html" rel="nofollow">python docs</a>:</p>
<blockquote>
<p>By default, when in the <code>__main__</code> module, <code>__builtins__</code> is the
built-in module <code>__builtin__</code> (note: no 's'); when in any other
module, <code>__builtins__</code> is an alias for the dictionary of the
<code>__builtin__</code> module itself. </p>
<p><code>__builtins__</code> can be set to a user-created dictionary to create a
weak form of restricted execution.</p>
<p><strong>CPython implementation detail:</strong> Users should not touch <code>__builtins__</code>; it is strictly an implementation detail. Users
wanting to override values in the builtins namespace should <code>import</code>
the <code>__builtin__</code> (no 's') module and modify its attributes
appropriately. The namespace for a module is automatically created the
first time a module is imported.</p>
</blockquote>
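<p>For reference, a version of the wrapper that follows this advice and also runs unchanged on Python 3 might look like this (a sketch — the <code>try/except</code> covers the <code>__builtin__</code>/<code>builtins</code> rename, and in Python 2 the future import must come before any other statement):</p>

```python
# Sketch of a 2/3-portable wrapper around the built-in print.
from __future__ import print_function  # must be first in a Py2 module

try:
    import builtins                  # Python 3 name
except ImportError:
    import __builtin__ as builtins   # Python 2 name

def print(*args, **kwargs):
    # Delegate to the real built-in print
    return builtins.print(*args, **kwargs)

print("hello, world")
```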
| 3 | 2016-07-20T22:39:55Z | [
"python"
] |
Why can't Python find the built-in print function when calling from an imported function? | 38,491,927 | <p>Take this <strong>main.py</strong>:</p>
<pre><code>from __future__ import print_function
from sub import print
print("hello, world")
</code></pre>
<p>and this <strong>sub.py</strong>:</p>
<pre><code>from __future__ import print_function
def print(*args, **kwargs):
return __builtins__.print(*args, **kwargs)
</code></pre>
<p>Using Python 2.7.9, run <code>main.py</code> and you get:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 5, in <module>
print("hello, world")
File "/Users/ien/Studio/songifier/sub.py", line 4, in print
return __builtins__.print(*args, **kwargs)
AttributeError: 'dict' object has no attribute 'print'
</code></pre>
<p>Why does this happen, and how can I make it work?</p>
<p>NOTE: This is an artificial example to isolate the problem, which has arisen in a logging context, where the <code>print</code> function sometimes does some fancy logging, and other times wants to just call the built-in print function.</p>
| 2 | 2016-07-20T22:30:07Z | 38,492,042 | <p>To quote an answer on this question:
<a href="http://stackoverflow.com/questions/550470/overload-print-python">overload print python</a></p>
<blockquote>
<p>In Python 2.x you can't, because print isn't a function, it's a
statement. In Python 3 print is a function, so I suppose it could be
overridden (haven't tried it, though).</p>
</blockquote>
| -1 | 2016-07-20T22:40:06Z | [
"python"
] |
How to apply RANSAC in Python OpenCV | 38,491,959 | <p>Can someone show me how to apply RANSAC to find the best 4 feature matching points and their corresponding (x,y) coordinate so I can use them in my homography code?</p>
<p>The feature matching points were obtained by SIFT and here is the code:</p>
<pre><code>import numpy as np
import cv2
from matplotlib import pyplot as plt
def drawMatches(img1, kp1, img2, kp2, matches):
rows1 = img1.shape[0]
cols1 = img1.shape[1]
rows2 = img2.shape[0]
cols2 = img2.shape[1]
out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')
# Place the first image to the left
out[:rows1,:cols1] = np.dstack([img1, img1, img1])
# Place the next image to the right of it
out[:rows2,cols1:] = np.dstack([img2, img2, img2])
# For each pair of points we have between both images
# draw circles, then connect a line between them
for mat in matches:
# Get the matching keypoints for each of the images
img1_idx = mat.queryIdx
img2_idx = mat.trainIdx
# x - columns
# y - rows
(x1,y1) = kp1[img1_idx].pt
(x2,y2) = kp2[img2_idx].pt
# Draw a small circle at both co-ordinates
# radius 4
# colour blue
# thickness = 1
cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)
# Draw a line in between the two points
# thickness = 1
# colour blue
cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)
# Show the image
cv2.imshow('Matched Features', out)
cv2.waitKey(0)
cv2.destroyWindow('Matched Features')
# Also return the image if you'd like a copy
return out
img1 = cv2.imread("C://Users//user//Desktop//research//img1.2.jpg")
img2 = cv2.imread("C://Users//user//Desktop//research//img3.jpg")
name = cv2.COLOR_YUV2BGRA_YV12
print name
gray1 = cv2.cvtColor(img1,cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT()
kp1,des1 = sift.detectAndCompute(gray1, None)
kp2,des2 = sift.detectAndCompute(gray2, None)
bf = cv2.BFMatcher()
matches=bf.match(des1,des2)
matches=sorted(matches,key=lambda x:x.distance)
img3 = drawMatches(gray1,kp1,gray2,kp2,matches[:100])
plt.imshow(img3),plt.show()
print(matches)
cv2.imwrite('sift_matching1.png',img3)
</code></pre>
<p>And here is the outcome:
<a href="http://i.stack.imgur.com/odNcV.jpg" rel="nofollow">click here</a></p>
<p>Here is my homography code:</p>
<pre><code>import cv2
import numpy as np
if __name__ == '__main__' :
# Read source image.
im_src = cv2.imread('C://Users//user//Desktop//research//img1.2.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])
# Read destination image.
im_dst = cv2.imread('C://Users//user//Desktop//research//img3.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])
# Calculate Homography
h, status = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC,5.0)
# Warp source image to destination based on homography
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
# Display images
cv2.imshow("Warped Source Image", im_out)
cv2.waitKey(0)
</code></pre>
<p>The four points that I chose randomly:</p>
<blockquote>
<p>pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])</p>
</blockquote>
<p>same thing here:</p>
<blockquote>
<p>pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])</p>
</blockquote>
<p>So yeah, basically, I just need to replace those random points with the best feature matching points that will be obtainned by RANSAC. </p>
| 0 | 2016-07-20T22:33:03Z | 38,496,220 | <p>You don't have to use RANSAC before <code>findHomography</code>. RANSAC is applied inside the function. Just pass two arrays of features that match each other (no need to only pass the four best).</p>
<p>However, what you can do is filter out the matches that have large distances. Usually, you find the two nearest matches for each feature and keep a match only if its distance is significantly smaller than the distance of the second-best match (Lowe's ratio test). Take a look at <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html" rel="nofollow">this OpenCV tutorial</a> to see some code on how to do that.</p>
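<p>The filtering logic itself is short. Here is a sketch of the ratio test on stand-in match objects (<code>Match</code>, <code>knn_pairs</code>, and the 0.75 threshold are illustrative assumptions — with OpenCV you would get the pairs from <code>bf.knnMatch(des1, des2, k=2)</code>, whose elements expose a <code>.distance</code> just like these stand-ins):</p>

```python
# Sketch of Lowe's ratio test on knnMatch-style (best, second_best) pairs.
# Match stands in for cv2.DMatch here; only .distance matters.
class Match:
    def __init__(self, distance):
        self.distance = distance

knn_pairs = [
    (Match(10.0), Match(50.0)),  # distinctive best match -> kept
    (Match(30.0), Match(32.0)),  # ambiguous best match   -> dropped
]

good = [m for m, n in knn_pairs if m.distance < 0.75 * n.distance]
print(len(good))  # 1
```

<p>The surviving <code>good</code> matches are then what you pass to <code>cv2.findHomography</code>.</p>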
| 1 | 2016-07-21T06:13:02Z | [
"python",
"opencv",
"homography",
"ransac"
] |
understanding kivy code snippet | 38,491,963 | <p>I am using the following code from web:</p>
<pre><code>from kivy.app import App
from kivy.uix.widget import Widget
from kivy.graphics import Line
class DrawInput(Widget):
def on_touch_down(self, touch):
print(touch)
with self.canvas:
touch.ud["line"] = Line(points=(touch.x, touch.y))
def on_touch_move(self, touch):
print(touch)
touch.ud["line"].points += (touch.x, touch.y)
def on_touch_up(self, touch):
print("RELEASED!", touch)
class SimpleKivy4(App):
def build(self):
return DrawInput()
if __name__ == "__main__":
SimpleKivy4().run()
</code></pre>
<p>Why does the method <code>on_touch_down(self, touch)</code> get executed even though it is not being called explicitly anywhere?</p>
<p>Edit: If <code>Widget</code> fires based on when the touch happens, then how does the <code>DrawInput</code> method <code>on_touch_down</code> get fired? Class <code>Widget</code> doesn't know about the <code>DrawInput</code> class or any of its methods.</p>
| -1 | 2016-07-20T22:33:18Z | 38,492,700 | <p>They get called because they are Widget event-handler methods that you override: Kivy fires them on the widget whenever a touch occurs.</p>
<p>From the <a href="https://kivy.org/docs/api-kivy.uix.widget.html#kivy.uix.widget.Widget" rel="nofollow">Widget</a> docs:</p>
<blockquote>
<p>Events: </p>
<p>on_touch_down:<br>
Fired when a new touch event occurs </p>
<p>on_touch_move:<br>
Fired when an existing touch moves </p>
<p>on_touch_up:<br>
Fired when an existing touch disappears </p>
</blockquote>
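<p>Regarding the edit in the question: the base class doesn't need to know about <code>DrawInput</code>. The event is dispatched on <code>self</code>, and ordinary dynamic dispatch finds the subclass override at runtime. A Kivy-free sketch of the mechanism (class and method names chosen to mirror the question):</p>

```python
# Plain-Python sketch of how a base class can fire a subclass override.
class Widget:
    def dispatch_touch(self, touch):
        # `self` is whatever instance received the touch, so this lookup
        # finds the most-derived on_touch_down at runtime.
        self.on_touch_down(touch)

    def on_touch_down(self, touch):
        print("Widget default handler")

class DrawInput(Widget):
    def on_touch_down(self, touch):
        print("DrawInput handler:", touch)

DrawInput().dispatch_touch((10, 20))  # prints: DrawInput handler: (10, 20)
```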
| 0 | 2016-07-21T00:00:46Z | [
"python",
"kivy"
] |
django forloopcounter in template | 38,491,977 | <p>How can I auto-increment an item of my list?</p>
<p>I have something like this:</p>
<p>data = {'port': [22,80,443], 'banner': ['OpenSSH','Apache2','Apache'], 'protocol': ['tcp','tcp','tcp'] }</p>
<pre><code>{% for key, value in data.items %}
<tr>
<th class="white">{{key}}</th>
<th class="black">{{value.0}}</th>
</tr>
{% endfor %}
</code></pre>
<p>How can I auto-increment "value.0" to "value.1", "value.2", etc.?</p>
<p>Maybe I can do it with forloop.counter0? But how?</p>
<p>thanks!</p>
| 1 | 2016-07-20T22:34:37Z | 38,492,017 | <p>You can iterate over the values using another <code>for</code> loop:</p>
<pre><code>{% for key, value in data.items %}
<tr>
<th class="white">{{key}}</th>
{% for v in value %}
<th class="black">{{v}}</th>
{% endfor %}
</tr>
{% endfor %}
</code></pre>
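<p>If the goal is instead one table row per index across the parallel lists (port, banner and protocol lined up), a common pattern is to zip them in the view and pass the rows to the template — a sketch using the question's <code>data</code> (the <code>rows</code> name is illustrative):</p>

```python
# In the view: combine the parallel lists into row tuples for the template.
data = {'port': [22, 80, 443],
        'banner': ['OpenSSH', 'Apache2', 'Apache'],
        'protocol': ['tcp', 'tcp', 'tcp']}

rows = list(zip(data['port'], data['banner'], data['protocol']))
print(rows[0])  # (22, 'OpenSSH', 'tcp')
```

<p>In the template that becomes <code>{% for port, banner, protocol in rows %} ... {% endfor %}</code>, rendering one <code>&lt;tr&gt;</code> per tuple.</p>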
| 1 | 2016-07-20T22:37:57Z | [
"python",
"django",
"templates"
] |
Adding and Renaming a Column in a Multiindex DataFrame | 38,491,990 | <p>The purpose of this post is to understand how to add a column to a level in a <code>MultiIndex.DataFrame</code> using <code>apply()</code> and <code>shift()</code></p>
<p><strong>Create the DataFrame</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame(
[
[5777, 100, 5385, 200, 5419, 4887, 100, 200],
[4849, 0, 4539, 0, 3381, 0, 0, ],
[4971, 0, 3824, 0, 4645, 3424, 0, 0, ],
[4827, 200, 3459, 300, 4552, 3153, 100, 200, ],
[5207, 0, 3670, 0, 4876, 3358, 0, 0, ],
],
index=pd.to_datetime(['2010-01-01',
'2010-01-02',
'2010-01-03',
'2010-01-04',
'2010-01-05']),
columns=pd.MultiIndex.from_tuples(
[('Portfolio A', 'GBP', 'amount'), ('Portfolio A', 'GBP', 'injection'),
('Portfolio B', 'EUR', 'amount'), ('Portfolio B', 'EUR', 'injection'),
('Portfolio A', 'USD', 'amount'), ('Portfolio A', 'USD', 'injection'),
('Portfolio B', 'JPY', 'amount'), ('Portfolio B', 'JPY', 'injection')])
).sortlevel(axis=1)
print df
</code></pre>
<p>I would like to use the following method to add a new column to each currency at level 2 named daily_added_value:</p>
<pre><code>def do_nothing(group):
return group
def calc_daily_added_value(group):
g = (group['amount'] - group['amount'].shift(periods=1, freq=None, axis=0)
-df['injection'].shift(periods=1, freq=None, axis=0)).round(decimals=2)
g.index = ['daily_added_value']
return g
pd.concat([df.T.groupby(level=0).apply(f).T for f in [calc_daily_added_value,do_nothing ]], axis=1).sort_index(axis=1)
</code></pre>
<p>However this throws a key error: <code>KeyError: 'amount'</code></p>
<p>What is the correct syntax for the method <code>calc_daily_added_value()</code>?</p>
<hr>
<p><strong>Following on from the answer below there is still an issue</strong></p>
<p><strong>Adding the daily return works</strong></p>
<pre><code>dav = df.loc[:, pd.IndexSlice[:, :, 'daily_added_value']]
amount = df.loc[:, pd.IndexSlice[:, :, 'amount']]
dr = (dav.values / amount.shift()) * 100
dr.columns.set_levels(['daily_return'], level=2, inplace=True)
df = pd.concat([df, dr], axis=1).sortlevel(axis=1)
</code></pre>
<p><strong>Adding the cumulative compounded returns FAILS</strong></p>
<pre><code>dr = df.loc[:, pd.IndexSlice[:, :, 'daily_return']]
drc = 100*((1+dr / 100).cumprod()-1)
drc.columns.set_levels(['daily_return_cumulative'], level=2, inplace=True)
df = pd.concat([df, drc], axis=1).sort_index(axis=1)
df.head()
</code></pre>
<p>this fails because it is missing the .values, but if I add this it becomes an array?</p>
<p>What is strange here though is that drc is in fact a DataFrame of correct shaped etc. and appears to contain correct results.</p>
<p>This fails on this line:</p>
<pre><code>drc.columns.set_levels(['daily_return_cumulative'], level=2, inplace=True)
</code></pre>
<p>Error is <code>ValueError: On level 2, label max (2) >= length of level (1). NOTE: this index is in an inconsistent state</code></p>
<p><strong>How can the index be placed back into a consistent state?</strong></p>
| 2 | 2016-07-20T22:35:23Z | 38,493,363 | <p>Skip the <code>groupby</code> it is not necessary</p>
<pre><code>amount = df.loc[:, pd.IndexSlice[:, :, 'amount']]
inject = df.loc[:, pd.IndexSlice[:, :, 'injection']]
dav = amount - amount.shift() - inject.shift().values
#dav.columns.set_levels(['daily_added_value'], level=2, inplace=True)
pd.concat([df, dav], axis=1).sort_index(axis=1).T
</code></pre>
<h3>Note: I used <code>T</code> to get a picture that would easily fit</h3>
<p><a href="http://i.stack.imgur.com/kXaEW.png" rel="nofollow"><img src="http://i.stack.imgur.com/kXaEW.png" alt="enter image description here"></a></p>
<p>there appears to be a <a href="https://github.com/pydata/pandas/issues/13754" rel="nofollow">bug</a> in <code>set_levels</code> and as such it is not advised to use it.</p>
<p><strong>Workaround to rename the MultiIndex Column in the DataFrame dav</strong></p>
<pre><code>def map_level(df, dct, level=2):
index = df.index
index.set_levels([[dct.get(item, item) for item in names] if i==level else names
for i, names in enumerate(index.levels)], inplace=True)
dct = {'amount':'daily_added_value'}
map_level(dav.T, dct, level=2)
</code></pre>
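<p>As an alternative to the workaround, newer pandas versions can rename one level of a column MultiIndex without touching <code>set_levels</code> at all, via <code>DataFrame.rename</code> with the <code>level</code> argument — a sketch on a toy frame:</p>

```python
# Sketch: DataFrame.rename with level= targets just one MultiIndex level.
import pandas as pd

cols = pd.MultiIndex.from_tuples([
    ("Portfolio A", "GBP", "amount"),
    ("Portfolio A", "USD", "amount"),
])
toy = pd.DataFrame([[1, 2]], columns=cols)

renamed = toy.rename(columns={"amount": "daily_added_value"}, level=2)
print(renamed.columns.get_level_values(2).tolist())
```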
| 2 | 2016-07-21T01:21:59Z | [
"python",
"pandas"
] |
virtualenv with --always-copy throws error "Operation not permitted" | 38,491,992 | <p>I'm trying to run virtualenv without symlinking python2.7 but I get permission errors when I use the flag --always-copy.</p>
<pre><code>virtualenv --always-copy myenv
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 11, in <module>
sys.exit(main())
File "/Library/Python/2.7/site-packages/virtualenv.py", line 711, in main
symlink=options.symlink)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 924, in create_environment
site_packages=site_packages, clear=clear, symlink=symlink))
File "/Library/Python/2.7/site-packages/virtualenv.py", line 1129, in install_python
copyfile(join(stdlib_dir, fn), join(lib_dir, fn), symlink)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 355, in copyfile
copyfileordir(src, dest, symlink)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 327, in copyfileordir
shutil.copytree(src, dest, symlink)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
shutil.Error:
[('/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config/config.c', '/Users/user1/myenv/lib/python2.7/config/config.c', "[Errno 1] Operation not permitted: '/Users/user1/myenv/lib/python2.7/config/config.c'"), ...
</code></pre>
| 1 | 2016-07-20T22:35:48Z | 39,237,120 | <p>Hard to answer without additional details but my guess would be you need to sudo first.</p>
<p>so <code>sudo virtualenv --always-copy myenv</code> </p>
| 0 | 2016-08-30T21:08:32Z | [
"python",
"python-2.7",
"virtualenv"
] |
Unclear IndexError on Python | 38,492,045 | <pre><code>def scan_for_match(T1, T2):
i = 0
j = 0
while i <= (len(T1)):
if T1[i] == T2[j]:
keywords = open('keywords.txt', 'w+')
keywords.write(T1.pop(i))
T2.pop(j)
if i > (len(T1)):
i = 0
j += 1
if j > (len(T2)):
print "All words have been scanned through"
print "These are the matches found:\n ", keywords.readlines()
i += 1
</code></pre>
<p>I thought that this was a pretty straight forward piece of code, but...</p>
<pre><code>T1 = ["me", "gusta", "espanol"]; T2 = ["si", "no", "espanol"]; scan_for_match(T1, T2)
</code></pre>
<p>Will just give me:</p>
<pre><code>Traceback (most recent call last):
File "stdin", line 1, in <module>
File "stdin", line 5, in scan_for_match
IndexError: list index out of range
</code></pre>
<p>The line in question is just a harmless <code>if T1[i] == T2[j]:</code>
which just doesn't make sense to me, since:</p>
<pre><code>i = 0
j = 0
T1[i] = 'me'
T2[j] = 'si'
</code></pre>
<p>So this should just return a False result instead of an IndexError, right?</p>
| 1 | 2016-07-20T22:40:18Z | 38,492,058 | <p>Change the condition on the <code>while</code> to:</p>
<pre><code>while i < len(T1)
# ^
</code></pre>
<p>When <code>i = len(T1)</code> and you try to index your list, you'll get an <code>IndexError</code> because your index starts counting from zero.</p>
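<p>A quick demonstration of why the <code><=</code> bound fails (using the list from the question):</p>

```python
# Valid indexes run from 0 to len(T1) - 1; len(T1) itself is out of range.
T1 = ["me", "gusta", "espanol"]

assert T1[len(T1) - 1] == "espanol"   # last valid index

try:
    T1[len(T1)]                       # index 3 does not exist
except IndexError as e:
    print("IndexError:", e)
```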
| 1 | 2016-07-20T22:41:43Z | [
"python",
"python-2.7",
"runtime-error"
] |
Unclear IndexError on Python | 38,492,045 | <pre><code>def scan_for_match(T1, T2):
i = 0
j = 0
while i <= (len(T1)):
if T1[i] == T2[j]:
keywords = open('keywords.txt', 'w+')
keywords.write(T1.pop(i))
T2.pop(j)
if i > (len(T1)):
i = 0
j += 1
if j > (len(T2)):
print "All words have been scanned through"
print "These are the matches found:\n ", keywords.readlines()
i += 1
</code></pre>
<p>I thought that this was a pretty straight forward piece of code, but...</p>
<pre><code>T1 = ["me", "gusta", "espanol"]; T2 = ["si", "no", "espanol"]; scan_for_match(T1, T2)
</code></pre>
<p>Will just give me:</p>
<pre><code>Traceback (most recent call last):
File "stdin", line 1, in <module>
File "stdin", line 5, in scan_for_match
IndexError: list index out of range
</code></pre>
<p>The line in question is just a harmless <code>if T1[i] == T2[j]:</code>
which just doesn't make sense to me, since:</p>
<pre><code>i = 0
j = 0
T1[i] = 'me'
T2[j] = 'si'
</code></pre>
<p>So this should just return a False result instead of an IndexError, right?</p>
| 1 | 2016-07-20T22:40:18Z | 38,492,071 | <p><code>while i <= (len(T1)):</code> is wrong: when i equals the length, indexing raises an IndexError. Change it to <code><</code>. Indexes run from <strong>0</strong> to <strong>(length - 1)</strong>.</p>
<p>I suggest not using the <code>pop()</code> method: it removes elements from your list, and scanning for a match doesn't require the matching elements to be removed, right? :)</p>
<p>Alternatively, you can find the match in this way:</p>
<pre><code>>>> t2= ["si", "no", "espanol"]
>>> t1= ["me", "gusta", "espanol"]
>>> set(t2) & set(t1)
{'espanol'}
</code></pre>
| 2 | 2016-07-20T22:42:26Z | [
"python",
"python-2.7",
"runtime-error"
] |
strange python index out of range | 38,492,126 | <p>I got a strange problem with my python code </p>
<p>When I test it with:</p>
<pre><code>4
Prashant
32
Pallavi
36
Dheeraj
39
Shivam
40
</code></pre>
<p>It works well, but when I try to test it with this:</p>
<pre><code>5
Harry
37.21
Berry
37.21
Tina
37.2
Akriti
41
Harsh
39
</code></pre>
<p>It fails with this error:</p>
<pre><code>Runtime Error
Traceback (most recent call last):
  File "solution.py", line 48, in <module>
    final = find(nested,find(nested,minimum(nested))[0][1])
IndexError: list index out of range
</code></pre>
<p>Here is the code. I don't understand why it crashes; I have tried lots of different solutions without any result.</p>
<pre><code>def compteur (list,sch):
nb=0
for t in list:
if t[1] == sch:
nb += 1
return nb
def minimum (list):
minim = list[0][1]
for t in list :
if t[1] < minim :
minim = t[1]
return minim
def find (list,sch):
ret=[]
for t in list:
if t[1] == sch:
ret.append(t)
return ret
def rmv (list,sch):
ret = []
fd = find(list,sch)
for t in list :
if not fd[0][1]==t[1]:
ret.append(t)
return ret
nested = []
number = int(raw_input())
for i in range(number+1) :
try:
nom = raw_input()
except (EOFError):
break
note = float(raw_input())
nested.append([nom,note])
mini = find(nested,minimum(nested))
for i in mini:
nested = rmv(nested,i[1])
final = find(nested,find(nested,minimum(nested))[0][1])
final.sort(key=str)
for e in final :
print e[0]
</code></pre>
<p>Thank you for helping!</p>
| -3 | 2016-07-20T22:48:40Z | 38,492,406 | <p>It pays to unit test your functions or at least add a few print statements to see what they return. I added</p>
<pre><code>print find(nested,minimum(nested))
</code></pre>
<p>which printed</p>
<pre><code>[]
</code></pre>
<p><code>find</code> isn't working the way you want it to and after inspection, it appears to be a problem with the <code>return</code> statement inside a <code>for</code> loop. This makes it work</p>
<pre><code>def find (list,sch):
ret=[]
for t in list:
if t[1] == sch:
ret.append(t)
return ret
</code></pre>
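<p>To see the difference concretely — the early <code>return</code> exits on the very first loop iteration, so any later matches are lost (toy data taken from the failing test case):</p>

```python
# Buggy vs fixed placement of the return statement in find().
def find_buggy(lst, sch):
    ret = []
    for t in lst:
        if t[1] == sch:
            ret.append(t)
        return ret            # inside the loop: gives up after one element

def find_fixed(lst, sch):
    ret = []
    for t in lst:
        if t[1] == sch:
            ret.append(t)
    return ret                # after the loop: sees every element

data = [["Harry", 37.21], ["Tina", 37.2], ["Berry", 37.21]]
print(find_buggy(data, 37.2))   # []
print(find_fixed(data, 37.2))   # [['Tina', 37.2]]
```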
| 2 | 2016-07-20T23:20:22Z | [
"python",
"indexing",
"range"
] |
Audio corrective feedback on psychopy (older version that supports sound) | 38,492,137 | <p>As some of you may know, psychopy has errors playing sound files on the newer versions, so I have the oldest version installed currently and am trying to make audio corrective feedback for a behavioral test meant to be in an fMRI scanner. The feedback is needed to keep the participants awake. I'm having a lot of trouble finding how to code it correctly, as nothing I've tried works (I've gotten plenty of errors, the main one being invalid Syntax), as I am not using the builder. I am very new at coding. This is what I have so far:</p>
<pre><code>cue = visual.TextStim(myWin, pos = [0.0,0.0],color = -1)
fix = visual.TextStim(myWin,text = '+',height = 1.0,color = -1)
begin_text = ('Press the space bar to begin')
hit_sound = sound.Sound(value='G', secs=0.5, octave=4, sampleRate=44100, bits=16)
miss_sound = sound.Sound(value='G', secs=0.5, octave=3, sampleRate=44100, bits=16)
cue_type = ['either_color', 'either_shape', 'either_nochange', 'either_nochange', 'color', 'color_nochange', 'shape', 'shape_nochange']#trial types
cue_type*=num_trials #number of trials
random.shuffle(cue_type) #randomizing trials
if key_resp.corr
feedback= hit_sound.play
else:
feedback= miss_sound.play
</code></pre>
<p>Thank you in advance. </p>
| 0 | 2016-07-20T22:50:34Z | 38,492,240 | <p>Are you coding this from scratch or transferred from builder to coder view and making edits? If you are making edits in the coder view, make sure the indents and punctuations remain the same.</p>
<p>From what you've shared, I can see that</p>
<pre><code> if key_resp.corr
feedback= hit_sound.play
else:
feedback= miss_sound.play
</code></pre>
<p>should be:</p>
<pre><code> if key_resp.corr:
hit_sound.play()
else:
miss_sound.play()
</code></pre>
| 1 | 2016-07-20T23:00:59Z | [
"python",
"psychopy"
] |
Group by in SFrame without installing graphlab | 38,492,159 | <p>How can I use the groupby operation in SFrame without installing GraphLab?</p>
<p>I would love to do some aggregation, but in all the examples I have seen on the internet, the aggregation functions come from GraphLab.</p>
<p>Like:</p>
<pre><code>import graphlab.aggregate as agg
user_rating_stats = sf.groupby(key_columns='user_id',
operations={
'mean_rating': agg.MEAN('rating'),
'std_rating': agg.STD('rating')
})
</code></pre>
<p>How can I use, say, <code>numpy.mean</code> and not <code>agg.MEAN</code> in the above example?</p>
| 1 | 2016-07-20T22:52:30Z | 38,492,735 | <p>The <code>sframe</code> package contains the same aggregation module as the <code>graphlab</code> package, so you shouldn't need to resort to numpy.</p>
<pre><code>import sframe
import sframe.aggregate as agg
sf = sframe.SFrame({'user_id': [1, 1, 2],
'rating': [3.3, 3.6, 4.1]})
grp = sf.groupby('user_id', {'mean_rating': agg.MEAN('rating'),
'std_rating': agg.STD('rating')})
print(grp)
+---------+---------------------+-------------+
| user_id | std_rating | mean_rating |
+---------+---------------------+-------------+
| 2 | 0.0 | 4.1 |
| 1 | 0.15000000000000024 | 3.45 |
+---------+---------------------+-------------+
[2 rows x 3 columns]
</code></pre>
| 2 | 2016-07-21T00:05:25Z | [
"python",
"numpy",
"group-by",
"graphlab",
"sframe"
] |
Local variable referenced before assignment Python 3.4.5 | 38,492,223 | <p>I'm going through the Learn Python The Hard Way book and I'm stuck on ex35. I decided to create my own game, as the Study Drills ask. I have a <code>gold_room</code> function that is just like his, but it raises the error in the title with both versions of the code (his and mine).</p>
<pre><code>def gold_room():
print("You enter a room full of gold.")
print("Do you take the gold and run to the exit or you just walk out with nothing in your hands?")
choice = input("> ")
if choice == "take":
print("How much do you take?")
choice_two = input("> ")
if "0" in choice_two or "1" in choice_two:
how_much = int(choice_two)
else:
print("Man, learn to type a number.")
if how_much < 50:
print("You're not greedy. You win!")
exit(0)
else:
print("You greedy bastard!")
exit(0)
elif choice == "walk":
print("You're not greedy. You win!")
exit(0)
else:
print("I don't know what that means")
</code></pre>
<blockquote>
<p>UnboundLocalError: local variable 'how_much' referenced before assignment</p>
</blockquote>
| -2 | 2016-07-20T22:59:35Z | 38,492,273 | <p>You are receiving that error because you are referencing the variable <code>how_much</code> before any value is assigned to it. :)</p>
<p>This happens at line: <code>if how_much < 50:</code></p>
<p>At that point in code execution, whether <code>how_much</code> is defined depends on whether the previous condition (<code>if "0" in choice_two or "1" in choice_two:</code>) was satisfied.</p>
<p>The code as written doesn't really make sense; you should only be checking the value of <code>how_much</code> if the user <strong>has</strong> entered a number, which is what that first condition is supposed to determine.</p>
<p>Try something like this, instead:</p>
<pre><code>if "0" in choice_two or "1" in choice_two:
how_much = int(choice_two)
if how_much < 50:
print("You're not greedy. You win!")
exit(0)
else:
print("You greedy bastard!")
exit(0)
else:
print("Man, learn to type a number.")
</code></pre>
| 2 | 2016-07-20T23:04:15Z | [
"python"
] |
Python error: cannot concatenate 'str' and 'builtin_function_or_method' objects | 38,492,226 | <p>I am currently in the process of programming a text-based adventure in Python as a learning exercise. So far, the player can name themselves, the value of which is stored in a dictionary key. However, when I attempt to allow the player to choose their race, I get the following error:</p>
<p><em>cannot concatenate 'str' and 'builtin_function_or_method' objects</em></p>
<p>I have gone over my code time and time again and can't seem to figure out what's wrong. I'm somewhat new to Python, so I assume it's something simple I'm overlooking. </p>
<pre><code>player = {
"name": "",
"gender": "",
"race": "",
"class": "",
"HP": 10,
}
def error():
print "Error: Unknown Command"
print "You will have to forgive me, " + player['name'] + ". My eyesight isn't what it used to be. What are you, exactly?."
print "- A mighty HUMAN"
print "- A hardy DWARF"
print "- An ingenious GNOME "
print "- or an elegant ELF"
print "(Hint: If you would like to know more about each race, consult the manual, whatever that means)"
player_race = raw_input(">> ").lower
while race_confirm == False:
if player_race != "elf":
print "You say you're a " + player_race + ". Is that correct? Remember, you will be unable to change your race later. (Y/N)"
response = raw_input(">> ").lower()
else:
print "You say you're an " + player_race + ". Is that correct? Remember, you will be unable to change your race later. (Y/N)"
response = raw_input(">> ").lower()
if response == "y":
player_race = player['race']
print "It is nice to meet you, ", player['name'] + "the" + player['race'] + "."
race_confirm = True
elif response == "n":
print "Oh, I'm terribly sorry. I must have misheard you. What did you say you were again?"
player_name = raw_input(">> ")
else:
error()
</code></pre>
| 0 | 2016-07-20T22:59:39Z | 38,492,236 | <p>You need to call that lower method, it's a <em>callable</em> attribute:</p>
<pre><code>player_race = raw_input(">> ").lower()
# ^^
</code></pre>
| 0 | 2016-07-20T23:00:43Z | [
"python",
"command-line",
"text-based"
] |
MatPlotLib: Control Ind Increments | 38,492,292 | <p>Given this chart:</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
# Example data
people = ('Tom', 'Dick', 'Harry', 'Slim')
y_pos = np.arange(len(people))
performance = 3 + 10 * np.random.rand(len(people))
plt.barh(y_pos, performance, align='center', alpha=0.4)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/ixlw1.png" rel="nofollow"><img src="http://i.stack.imgur.com/ixlw1.png" alt="enter image description here"></a>
You'll notice that the y-axis ind (index?) is in increments of 0.5.
However, if I add a person to the list of people and re-run the code, the index is in increments of 1.</p>
<p><strong>Is there any way to control the increments so that they are always in units of 1?</strong> </p>
<p><em>Update</em></p>
<p>If I can somehow get the increment (0.5 or 1) from the y-axis, that would be just as good.</p>
| 0 | 2016-07-20T23:06:47Z | 38,492,353 | <p>Use the <code>yticks</code> function like this:</p>
<pre><code>plt.yticks(range(len(y_pos)))
</code></pre>
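<p>For example, a sketch following the question's variable names (the extra name in the list is added just to show that the tick spacing stays at 1 regardless of how many bars there are):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import numpy as np
import matplotlib.pyplot as plt

people = ('Tom', 'Dick', 'Harry', 'Slim', 'Eve')
y_pos = np.arange(len(people))
performance = 3 + 10 * np.random.rand(len(people))

plt.barh(y_pos, performance, align='center', alpha=0.4)
plt.yticks(y_pos, people)  # one tick per bar, in steps of 1, labeled by name
```

<p>Passing the labels as the second argument also replaces the numeric ticks with the names, which is usually what you want for a bar chart anyway.</p>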
| 1 | 2016-07-20T23:13:18Z | [
"python",
"matplotlib"
] |
reordering cluster numbers for correct correspondence | 38,492,314 | <p>I have a dataset that I clustered using two different clustering algorithms. The results are about the same, but the cluster numbers are permuted.
Now, for displaying the color-coded labels, I want the label ids to be the same for the same clusters.
How can I get the correct permutation between the two sets of label ids?</p>
<p>I can do this using brute force, but perhaps there is a better/faster method. I would greatly appreciate any help or pointers. If possible I am looking for a python function.</p>
| 0 | 2016-07-20T23:08:55Z | 38,499,466 | <p>The most well-known algorithm for finding the optimum matching is the <strong>hungarian method</strong>.</p>
<p>Because it cannot be explained in a few sentences, I have to refer you to a book of your choice, or <a href="https://en.wikipedia.org/wiki/Hungarian_algorithm" rel="nofollow">Wikipedia article "Hungarian algorithm"</a>.</p>
<p>You can probably get good results (even perfect if the difference is indeed tiny) by simply picking the maximum of the correspondence matrix and then removing that row and column.</p>
| 1 | 2016-07-21T08:53:01Z | [
"python",
"python-2.7",
"cluster-analysis",
"permutation"
] |
Is it superfluous to declare # -*- coding: utf-8 -*- after #!/usr/bin/python3? | 38,492,358 | <p>I have been writing: </p>
<pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
</code></pre>
<p>But I believe Python 3 uses UTF-8 as the default source encoding.</p>
| 0 | 2016-07-20T23:13:42Z | 38,492,402 | <p>The default encoding for python3 code <em>is</em> utf-8. See <a href="https://docs.python.org/3/howto/unicode.html#python-s-unicode-support" rel="nofollow">python's unicode support</a>.</p>
<p>If you want to support python2.x in the same file or if you want to use a coding other than utf-8, you need that comment, otherwise you can leave it off without any repercussions.</p>
| 3 | 2016-07-20T23:18:55Z | [
"python",
"utf-8"
] |
How do I reference adjacent elements in a matrix? | 38,492,553 | <p>I am trying to make some code so that a single element of a matrix can reference the elements adjacent to it. Sort of like how you can only move game pieces to adjacent spaces. For example, I want to be able to say something like:</p>
<pre><code>if x matrix element is adjacent to y element:
y_element_adjacent = True
</code></pre>
<p>I want to know how to accomplish the 'is adjacent' portion of this.</p>
<p>EDIT: I have tried making a list and assigning each element a number. So in a 100 element list (or a 10x10 game board), space 1x1 would be the 0th element and space 1x3 would be the 2nd element... Like this:</p>
<p>1, 2, 3, 4, 5, 6, 7, 8, 9, 10</p>
<p>11, 12, 13, 14, 15, 16, 17, 18, 19, 20</p>
<p>However, the problem with this is that if the x element was 10, my code would take + 1 to find an adjacent space, and 11 is NOT adjacent. I then realized that lists are not the correct way to do this and matrices would be a the way to go. I'm just a little confused on how I can use them differently than lists.</p>
<p>Any help is appreciated! Thanks.</p>
| -1 | 2016-07-20T23:41:17Z | 38,492,593 | <p>Here's a simple way to obtain indices of all adjacent elements. </p>
<pre><code>from itertools import product, starmap
x, y = (x_coordinate, y_coordinate) # matrix values here
cells = starmap(lambda a,b: (x+a, y+b), product((0,-1,+1), (0,-1,+1)))
</code></pre>
<p>For an input of <code>x,y = (1,1)</code>, this returns <code>(list(cells)[1:])</code> containing <code>[(1, 0), (1, 2), (0, 1), (0, 0), (0, 2), (2, 1), (2, 0), (2, 2)]</code>. </p>
<p>Such an implementation might be of interest in your particular scenario (determining to which places a certain game piece may move). If you want to include border checking, you might try</p>
<pre><code>X = max_board_X
Y = max_board_Y
neighbors = lambda x, y : [(x2, y2) for x2 in range(x-1, x+2)
for y2 in range(y-1, y+2)
if (-1 < x <= X and
-1 < y <= Y and
(x != x2 or y != y2) and
(0 <= x2 <= X) and
(0 <= y2 <= Y))]
</code></pre>
<p>Other solutions (and sources for these) may be found here: <a href="http://stackoverflow.com/questions/1620940/determining-neighbours-of-cell-two-dimensional-list">Determining neighbours of cell two dimensional list</a></p>
| 0 | 2016-07-20T23:46:42Z | [
"python",
"algorithm"
] |
get key and value from a list with dict | 38,492,725 | <p>I have a list of dict: </p>
<pre><code>dividends=[
{"2005":0.18},
{"2006":0.21},
{"2007":0.26},
{"2008":0.31},
{"2009":0.34},
{"2010":0.38},
{"2011":0.38},
{"2012":0.38},
{"2013":0.38},
{"2014":0.415},
{"2015":0.427}
]
</code></pre>
<p>I want to retrieve the key and value to two lists, like:</p>
<p>yearslist = [2005,2006, 2007,2008,2009,2010...]
dividendlist = [0.18,0.21, 0.26....]</p>
<p>any way to implement this?</p>
<p>thanks.</p>
| 1 | 2016-07-21T00:04:25Z | 38,492,779 | <p>Assuming your dictionaries always have a single key,value pair that you are extracting, you could use two list comprehensions:</p>
<pre><code>l1 = [d.keys()[0] for d in dividends]
# ['2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']
l2 = [d.values()[0] for d in dividends]
# [0.18, 0.21, 0.26, 0.31, 0.34, 0.38, 0.38, 0.38, 0.38, 0.415, 0.427]
</code></pre>
| 3 | 2016-07-21T00:10:08Z | [
"python",
"list",
"dictionary"
] |
get key and value from a list with dict | 38,492,725 | <p>I have a list of dict: </p>
<pre><code>dividends=[
{"2005":0.18},
{"2006":0.21},
{"2007":0.26},
{"2008":0.31},
{"2009":0.34},
{"2010":0.38},
{"2011":0.38},
{"2012":0.38},
{"2013":0.38},
{"2014":0.415},
{"2015":0.427}
]
</code></pre>
<p>I want to retrieve the key and value to two lists, like:</p>
<p>yearslist = [2005,2006, 2007,2008,2009,2010...]
dividendlist = [0.18,0.21, 0.26....]</p>
<p>any way to implement this?</p>
<p>thanks.</p>
| 1 | 2016-07-21T00:04:25Z | 38,492,789 | <p>You can create two lists and append the keys to yearlist and the values to dividendlist.</p>
<p>here is the code.</p>
<pre><code>dividends=[
{"2005":0.18},
{"2006":0.21},
{"2007":0.26},
{"2008":0.31},
{"2009":0.34},
{"2010":0.38},
{"2011":0.38},
{"2012":0.38},
{"2013":0.38},
{"2014":0.415},
{"2015":0.427}
]
yearlist = []
dividendlist = []
for dividend_dict in dividends:
for key, value in dividend_dict.iteritems():
yearlist.append(key)
dividendlist.append(value)
print 'yearlist = ', yearlist
print 'dividendlist = ', dividendlist
</code></pre>
<p>Output:</p>
<pre><code>yearlist = ['2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']
dividendlist = [0.18, 0.21, 0.26, 0.31, 0.34, 0.38, 0.38, 0.38, 0.38, 0.415, 0.427]
</code></pre>
<p>second way you can use <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehensions</a></p>
<pre><code>dividends=[
{"2005":0.18},
{"2006":0.21},
{"2007":0.26},
{"2008":0.31},
{"2009":0.34},
{"2010":0.38},
{"2011":0.38},
{"2012":0.38},
{"2013":0.38},
{"2014":0.415},
{"2015":0.427}
]
yearlist = [dividend_dict.keys()[0] for dividend_dict in dividends]
dividendlist = [dividend_dict.values()[0] for dividend_dict in dividends]
print 'yearlist = ', yearlist
print 'dividendlist = ', dividendlist
</code></pre>
<p>Output:</p>
<pre><code>yearlist = ['2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']
dividendlist = [0.18, 0.21, 0.26, 0.31, 0.34, 0.38, 0.38, 0.38, 0.38, 0.415, 0.427]
</code></pre>
| 1 | 2016-07-21T00:10:59Z | [
"python",
"list",
"dictionary"
] |
get key and value from a list with dict | 38,492,725 | <p>I have a list of dict: </p>
<pre><code>dividends=[
{"2005":0.18},
{"2006":0.21},
{"2007":0.26},
{"2008":0.31},
{"2009":0.34},
{"2010":0.38},
{"2011":0.38},
{"2012":0.38},
{"2013":0.38},
{"2014":0.415},
{"2015":0.427}
]
</code></pre>
<p>I want to retrieve the key and value to two lists, like:</p>
<p>yearslist = [2005,2006, 2007,2008,2009,2010...]
dividendlist = [0.18,0.21, 0.26....]</p>
<p>any way to implement this?</p>
<p>thanks.</p>
| 1 | 2016-07-21T00:04:25Z | 38,492,798 | <p>Try:</p>
<pre><code>yearslist = dictionary.keys()
dividendlist = dictionary.values()
</code></pre>
<p>For both keys and values:</p>
<pre><code>items = dictionary.items()
</code></pre>
<p>Which can be used to split them as well:</p>
<pre><code>yearslist, dividendlist = zip(*dictionary.items())
</code></pre>
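<p>Note the snippets above assume a single dictionary. With the list of single-entry dicts from the question, one approach (a sketch) is to merge them into one dictionary first:</p>

```python
dividends = [{"2005": 0.18}, {"2006": 0.21}, {"2007": 0.26}]

# merge the one-entry dicts into a single dictionary
merged = {}
for d in dividends:
    merged.update(d)

# sort by year so the two tuples stay aligned and ordered
yearslist, dividendlist = zip(*sorted(merged.items()))
print(yearslist)     # ('2005', '2006', '2007')
print(dividendlist)  # (0.18, 0.21, 0.26)
```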
| 0 | 2016-07-21T00:11:57Z | [
"python",
"list",
"dictionary"
] |
fail to visualize a network in python - issue with pygraphviz? | 38,492,870 | <p>While running some code to visualize a network, I encountered this error:</p>
<pre><code>Traceback (most recent call last):
File "networkplot.py", line 21, in <module>
pos = graphviz_layout(G, prog='neato')
File "/opt/conda/lib/python2.7/site-packages/networkx/drawing/nx_agraph.py", line 228, in graphviz_layout
return pygraphviz_layout(G,prog=prog,root=root,args=args)
File "/opt/conda/lib/python2.7/site-packages/networkx/drawing/nx_agraph.py", line 258, in pygraphviz_layout
'http://pygraphviz.github.io/')
ImportError: ('requires pygraphviz ', 'http://pygraphviz.github.io/')
</code></pre>
<p>I therefore downloaded <code>pygraphviz-1.3.1.tar.gz</code> and ran <code>pip install pygraphviz</code>. It showed:</p>
<pre><code>Failed building wheel for pygraphviz
Running setup.py clean for pygraphviz
Failed to build pygraphviz
Installing collected packages: pygraphviz
Running setup.py install for pygraphviz ... error
Complete output from command /opt/conda/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-0RJB9q/pygraphviz/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-vo59d4-record/install-record.txt --single-version-externally-managed --compile:
running install
Trying pkg-config
Package libcgraph was not found in the pkg-config search path.
Perhaps you should add the directory containing `libcgraph.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libcgraph' found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-0RJB9q/pygraphviz/setup.py", line 87, in <module>
tests_require=['nose>=0.10.1', 'doctest-ignore-unicode>=0.1.0',],
File "/opt/conda/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/opt/conda/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/opt/conda/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup_commands.py", line 44, in modified_run
self.include_path, self.library_path = get_graphviz_dirs()
File "setup_extra.py", line 121, in get_graphviz_dirs
include_dirs, library_dirs = _pkg_config()
File "setup_extra.py", line 44, in _pkg_config
output = S.check_output(['pkg-config', '--libs-only-L', 'libcgraph'])
File "/opt/conda/lib/python2.7/subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['pkg-config', '--libs-only-L', 'libcgraph']' returned non-zero exit status 1
----------------------------------------
Command "/opt/conda/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-0RJB9q/pygraphviz/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-vo59d4-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0RJB9q/pygraphviz/
</code></pre>
<p>The code that I used:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
plt.switch_backend('agg')
from networkx.drawing.nx_agraph import graphviz_layout
import graphs
A = graphs.create_graph()
graph = A.graph
G, labels = A.networkList()
fig = plt.figure()
pos = graphviz_layout(G, prog='neato')
</code></pre>
<p>Could anyone let me know how to solve this? I highly appreciate your assistance. Thank you very much!</p>
| 0 | 2016-07-21T00:21:39Z | 38,492,902 | <p><code>libcgraph</code> is part of the graphviz package from <a href="http://graphviz.org/" rel="nofollow">http://graphviz.org/</a>. You'll need to install that before you install <code>pygraphviz</code>. The <code>graphviz</code> package you are installing with pip is a Python package and not the graphviz package you need.</p>
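<p>For example, on a Debian/Ubuntu system (the package names are an assumption; check your own distribution's package manager), the system-level Graphviz libraries can be installed before retrying pip:</p>

```shell
# install the Graphviz C libraries, headers, and pkg-config metadata
sudo apt-get update
sudo apt-get install -y graphviz libgraphviz-dev pkg-config

# verify pkg-config can now locate libcgraph, then retry the Python binding
pkg-config --libs-only-L libcgraph
pip install pygraphviz
```

<p>On other platforms the idea is the same: install Graphviz itself so that <code>libcgraph</code> and its headers are visible to <code>pkg-config</code>, and only then build <code>pygraphviz</code>.</p>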
| 0 | 2016-07-21T00:25:40Z | [
"python",
"networkx",
"pygraphviz"
] |
Counting how many times each word occurs in a text | 38,492,913 | <p>What I wanted this code to do is read a text file and print how often each word occurs, as a percentage. It <em>almost</em> works.</p>
<p>I can't figure out how to sort the printout from highest occurrence to least (I had it at one point when I was copy/pasting other people's code; I think I imported collections and Counter, I have no idea).</p>
<p>Another problem is that it reads through my whole word list, which is fine with smaller text files, but larger ones just eat up my terminal. I'd like each word to be printed only once, instead of once for each instance.</p>
<pre><code>name = raw_input('Enter file:')
handle = open(name, 'r')
text = handle.read()
words = text.split()
def percent(part, whole):
return 100 * float(part)/float(whole)
total = len(words)
counts = dict()
for word in words:
counts[word] = counts.get(word,0) + 1
print "\n"
print"Total Words\n", total
print"\n"
for word in words:
print word, percent(counts[word],total),"%"
</code></pre>
| 0 | 2016-07-21T00:27:29Z | 38,493,020 | <p>You can iterate through your dictionary like this:</p>
<pre><code>for word in counts:
print word, counts[word]
</code></pre>
<p>This will print each key in the dictionary once.
For sorting, you should look at the built-in <code>sorted()</code> function: <a href="https://docs.python.org/3.4/library/functions.html#sorted" rel="nofollow">https://docs.python.org/3.4/library/functions.html#sorted</a></p>
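<p>For example, a small Python 3 sketch of that approach (the sample counts are made up):</p>

```python
counts = {"the": 3, "cat": 1, "sat": 2}
total = sum(counts.values())

# sort the (word, count) pairs by count, highest first, printing each word once
for word, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(word, 100.0 * n / total, "%")
```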
| 1 | 2016-07-21T00:39:00Z | [
"python",
"python-2.7"
] |
Counting how many times each word occurs in a text | 38,492,913 | <p>What I wanted this code to do is read a text file and print how often each word occurs, as a percentage. It <em>almost</em> works.</p>
<p>I can't figure out how to sort the printout from highest occurrence to least (I had it at one point when I was copy/pasting other people's code; I think I imported collections and Counter, I have no idea).</p>
<p>Another problem is that it reads through my whole word list, which is fine with smaller text files, but larger ones just eat up my terminal. I'd like each word to be printed only once, instead of once for each instance.</p>
<pre><code>name = raw_input('Enter file:')
handle = open(name, 'r')
text = handle.read()
words = text.split()
def percent(part, whole):
return 100 * float(part)/float(whole)
total = len(words)
counts = dict()
for word in words:
counts[word] = counts.get(word,0) + 1
print "\n"
print"Total Words\n", total
print"\n"
for word in words:
print word, percent(counts[word],total),"%"
</code></pre>
| 0 | 2016-07-21T00:27:29Z | 38,493,046 | <p>For your first problem you can use a <code>collections.OrderedDict</code> built from the sorted items; pass <code>reverse=True</code> to order from highest occurrence to least:</p>
<pre><code>sortedCounts = collections.OrderedDict(sorted(counts.items(), key=lambda t: t[1], reverse=True))
</code></pre>
<p>To only print each word once:</p>
<pre><code>for key, value in sortedCounts.iteritems():
print key, percent(value,total),"%"
</code></pre>
<p>Hope that helps</p>
| 0 | 2016-07-21T00:42:28Z | [
"python",
"python-2.7"
] |
Counting how many times each word occurs in a text | 38,492,913 | <p>What I wanted this code to do is read a text file and print how often each word occurs, as a percentage. It <em>almost</em> works.</p>
<p>I can't figure out how to sort the printout from highest occurrence to least (I had it at one point when I was copy/pasting other people's code; I think I imported collections and Counter, I have no idea).</p>
<p>Another problem is that it reads through my whole word list, which is fine with smaller text files, but larger ones just eat up my terminal. I'd like each word to be printed only once, instead of once for each instance.</p>
<pre><code>name = raw_input('Enter file:')
handle = open(name, 'r')
text = handle.read()
words = text.split()
def percent(part, whole):
return 100 * float(part)/float(whole)
total = len(words)
counts = dict()
for word in words:
counts[word] = counts.get(word,0) + 1
print "\n"
print"Total Words\n", total
print"\n"
for word in words:
print word, percent(counts[word],total),"%"
</code></pre>
| 0 | 2016-07-21T00:27:29Z | 38,493,163 | <p>Your code is very close to being workable; I just see a few issues that are causing your problems:</p>
<p><strong>P1:</strong> Your code doesn't take non-word characters into account. For example, <code>word;</code>, <code>word.</code>, and <code>word</code> would all be treated as unique words. </p>
<pre><code>text = handle.read()
words = text.split()
</code></pre>
<p><br/></p>
<p><strong>P2:</strong> You iterate over the <em>entire</em> list of words, which includes duplicates, instead of your unique list in <code>counts</code>. So of course you'll be printing each word multiple times.</p>
<pre><code>for word in words:
</code></pre>
<p><br/></p>
<p><strong>P3:</strong> You open the file but never close it. Not exactly a problem with your code, but something to be improved. This is why using <code>with open(...):</code> syntax is generally encouraged, as it handles closing the file for you.</p>
<pre><code>handle = open(name, 'r')
</code></pre>
<p><br/>
Here's your code with some fixes:</p>
<pre><code>#!/usr/bin/python
import re
name = raw_input('Enter file:')
def percent(part, whole):
return 100 * float(part)/float(whole)
# better way to open files, handles closing the file
with open(name, 'r') as handle:
text = handle.read()
words = text.split()
# get rid of non-word characters that are messing up count
formatted = []
for w in words:
formatted.extend(re.findall(r'\w+', w))
total = len(formatted)
counts = dict()
for word in formatted:
counts[word] = counts.get(word,0) + 1
print "\n"
print"Total Words\n", total
print"\n"
# iterate over the counts dict instead of the original word list
# this way each word is only printed once
for word,count in counts.iteritems():
    print word, percent(count,total),"%"
</code></pre>
<p>Output when run on this program:</p>
<pre><code>Total Words
79
text 2.53164556962 %
float 2.53164556962 %
as 1.26582278481 %
file 1.26582278481 %
in 3.79746835443 %
handle 2.53164556962 %
counts 6.32911392405 %
total 3.79746835443 %
open 1.26582278481 %
findall 1.26582278481 %
for 3.79746835443 %
0 1.26582278481 %
percent 2.53164556962 %
formatted 5.06329113924 %
1 1.26582278481 %
re 2.53164556962 %
dict 1.26582278481 %
usr 1.26582278481 %
Words 1.26582278481 %
print 5.06329113924 %
import 1.26582278481 %
split 1.26582278481 %
bin 1.26582278481 %
return 1.26582278481 %
extend 1.26582278481 %
get 1.26582278481 %
python 1.26582278481 %
len 1.26582278481 %
iteritems 1.26582278481 %
part 2.53164556962 %
words 2.53164556962 %
Enter 1.26582278481 %
100 1.26582278481 %
with 1.26582278481 %
count 1.26582278481 %
word 7.59493670886 %
name 2.53164556962 %
read 1.26582278481 %
raw_input 1.26582278481 %
n 3.79746835443 %
r 1.26582278481 %
w 3.79746835443 %
Total 1.26582278481 %
whole 2.53164556962 %
def 1.26582278481 %
</code></pre>
<h1>Edit -- added explanation of the word formatting</h1>
<p>Breakdown of <code>formatted.extend(re.findall(r'\w+', w))</code>:</p>
<p>1: the <code>extend</code> function of lists takes a list and appends it the given list. For example:</p>
<pre><code>listA = [1,2,3]
listB = [4,5,6]
listA.extend(listB)
print(listA)
# [1, 2, 3, 4, 5, 6]
</code></pre>
<p>2: <code>re.findall(r'\w+', w))</code></p>
<p>This expression uses <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expressions</a> to extract only the part of the string we care about. Here is a <a href="http://www.tutorialspoint.com/python/python_reg_expressions.htm" rel="nofollow">tutorial</a> on python's regular expressions.</p>
<p>Basically, <code>re.findall(x, y)</code> returns a list of all the substrings in <code>y</code> that match the regular expression pattern outlined in <code>x</code>. In our case, <code>\w</code> means all word characters (i.e. alphanumeric chars), and the <code>+</code> means one or more of the preceding pattern. So put together, <code>\w+</code> means one or more word characters.</p>
<p>I probably made it somewhat confusing by naming the string variable we're doing the search on <code>w</code>, but just keep in mind that the <code>\w</code> in the pattern is not related to the <code>w</code> variable that is the string.</p>
<pre><code>word = 'heres some1; called s0mething!'
re.findall(r'\w+', word)
# ['heres', 'some1', 'called', 's0mething']
</code></pre>
| 0 | 2016-07-21T00:55:23Z | [
"python",
"python-2.7"
] |
Python MySQLdb response times are extremely different on similar sets | 38,492,932 | <p>I am designing a crontab job with Python+MySQLdb to extract data from MySQL, generate XML files, and zip them. Yes, it is an archive task that happens at noon every day.</p>
<p>My code:</p>
<pre><code>#!/usr/bin/env python
#encoding: utf-8
from dmconfig import DmConf
#from dmdb import Dmdb
import redis
import MySQLdb
import dawnutils
import time
from datetime import datetime, timedelta, date
conf = DmConf().loadConf()
db = MySQLdb.connect(host=conf["DbHost"],user=conf['DbAccount'],passwd=conf['DbPassword'],\
db=conf['DbName'],charset=conf['DbCharset'])
cache = redis.Redis(host=conf['RedisHost'], port=conf['RedisPort'],
db=conf['Redisdbid'], password=conf['RedisPassword'])
#cursor = db.cursor()
def try_reconnect(conn):
try:
conn.ping()
except:
conn = MySQLdb.connect(host=conf["DbHost"],user=conf['DbAccount'],passwd=conf['DbPassword'],\
db=conf['DbName'],charset=conf['DbCharset'])
def zip_task(device, start, stop):
#cursor = db.cursor()
format = "%Y%m%d%H%M%S"
begin = time.strftime("%Y-%m-%d %H:%M:%S",time.strptime(start,format))
end = time.strftime("%Y-%m-%d %H:%M:%S",time.strptime(stop,format))
print "%s (%s,%s)"%(device, begin, end)
sql = "SELECT * from `period` WHERE `snrCode` = \"%s\" AND `time` > \"%s\" AND `time` < \"%s\" ORDER BY `recId` DESC"%(device, begin, end)
print sql
cursor = db.cursor()
try_reconnect(db)
t1 = time.time()
try:
cursor.execute(sql)
results = cursor.fetchall()
except MySQLdb.Error,e:
print "Error %s"%(e)
print ("SQL takes %f seconds"%(time.time()-t1))
print ("len of reconds, %d"%len(results))
#for row in results:
#print row
def dispatcher(devSet, start, stop):
print "size of set: %d"%len(devSet)
print devSet
for dev in devSet:
zip_task(dev, start, stop)
def archive_task_queue():
today = datetime.now()
oneday = timedelta(days=1)
yesterday = today - oneday
format = "%Y%m%d%H%M%S"
begin = time.strftime(format, yesterday.timetuple())[:8] + '120000'
end = time.strftime(format, today.timetuple())[:8] + '120000'
sql = "SELECT * from `logbook` WHERE `login` > \"%s\" AND `login` < \"%s\" AND `logout` > \"%s\" AND `logout` < \"%s\""%(begin, end, begin, end)
print sql
cursor = db.cursor()
reclist = []
try:
cursor.execute(sql)
results = cursor.fetchall()
for row in results:
#print row
reclist.append(row[1])
except MySQLdb.Error,e:
print "Error %s"%(e)
#reclist = [u'A2H300001']
if len(reclist):
dispatcher(set(reclist), begin, end)
db.close()
if __name__ == '__main__':
archive_task_queue()
</code></pre>
<p>In my code, I query the logbook for device activity and get the set of active devices for that day, then query the dataset for each device one by one. The issue comes with the second-stage queries. Check out my console output after running:</p>
<pre><code>SELECT * from `logbook` WHERE `login` > "20160720120000" AND `login` < "20160721 120000" AND `logout` > "20160720120000" AND `logout` < "20160721120000"
size of set: 4
set([u'B1H700001', u'B1H700002', u'A1E500018', u'A2H300001'])
B1H700001 (2016-07-20 12:00:00,2016-07-21 12:00:00)
SELECT * from `period` WHERE `snrCode` = "B1H700001" AND `time` > "2016-07-20 12 :00:00" AND `time` < "2016-07-21 12:00:00" ORDER BY `recId` DESC
SQL takes 0.018232 seconds
len of reconds, 597
B1H700002 (2016-07-20 12:00:00,2016-07-21 12:00:00)
SELECT * from `period` WHERE `snrCode` = "B1H700002" AND `time` > "2016-07-20 12 :00:00" AND `time` < "2016-07-21 12:00:00" ORDER BY `recId` DESC
SQL takes 0.974020 seconds
len of reconds, 4642
A1E500018 (2016-07-20 12:00:00,2016-07-21 12:00:00)
SELECT * from `period` WHERE `snrCode` = "A1E500018" AND `time` > "2016-07-20 12 :00:00" AND `time` < "2016-07-21 12:00:00" ORDER BY `recId` DESC
SQL takes 0.342373 seconds
len of reconds, 0
A2H300001 (2016-07-20 12:00:00,2016-07-21 12:00:00)
SELECT * from `period` WHERE `snrCode` = "A2H300001" AND `time` > "2016-07-20 12 :00:00" AND `time` < "2016-07-21 12:00:00" ORDER BY `recId` DESC
SQL takes 68.173677 seconds
len of reconds, 5794
</code></pre>
<p>Query time is weird. It takes 0.9s for B1H700002's 4642 data points, but it takes 68 seconds for A2H300001's 5794 data points.</p>
<p>Then I narrowed my issue down to querying that specific device ID only, which you can find in my previous code. The result is the same: it takes 65 seconds for that query.</p>
<p>Any clue?</p>
| 0 | 2016-07-21T00:29:25Z | 38,516,060 | <p>I ran more experiments on this SQL query. In the end I found it is related to the memory usage of MySQLdb. Although the result set has only 5794 rows, the query takes only 0.3 seconds if I add LIMIT 5000; otherwise it takes 60+ seconds.</p>
<p>So, as a workaround, I use LIMIT and a pagination method to query a limited number of rows per query and append each page to the previous results. The total time drops to under 1 second.</p>
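<p>The pagination loop can be sketched in pure Python; <code>fetch_page</code> below is a hypothetical stand-in for running the SELECT with LIMIT/OFFSET through MySQLdb, not a real database call:</p>

```python
# Hypothetical sketch of the LIMIT-based pagination workaround.
# fetch_page stands in for "SELECT ... LIMIT %d OFFSET %d";
# here it just slices an in-memory list.
PAGE_SIZE = 5000

def fetch_page(data, offset, limit):
    return data[offset:offset + limit]

def fetch_all_paginated(data):
    rows = []
    offset = 0
    while True:
        page = fetch_page(data, offset, PAGE_SIZE)
        rows.extend(page)
        if len(page) < PAGE_SIZE:
            break  # last (possibly empty) page reached
        offset += PAGE_SIZE
    return rows

print(len(fetch_all_paginated(list(range(5794)))))  # 5794
```

Each query stays under the size where MySQLdb's memory use blows up, and the pages concatenate to the full result set.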
| 0 | 2016-07-21T23:44:42Z | [
"python",
"mysql"
] |
Which way is better to read/write data from/to large files? | 38,492,942 | <p>If we need to read/write some data from/to a large file each time before/after processing, which of the following ways (with some demonstration Python code) is better?</p>
<ol>
<li><p>Open the file each time we need to read/write it, and close it immediately after reading/writing. This way seems safer, but slower, since we need to open and close the file many times:
<code>
for i in processing_loop:
with open(datafile) as f:
read_data(...)
process_data(...)
with open(resultfile,'a') as f:
save_data(...)
</code>
This looks awkward, but it seems MATLAB takes this approach in its <code>.mat</code> file IO functions <code>load</code> and <code>save</code>. We call <code>load</code> and <code>save</code> directly without an explicit <code>open</code> or <code>close</code>.</p></li>
<li><p>Open the file and keep it open until we finish all the work; this is faster, but at the risk of the file remaining open if the program raises an error, or of the file being <strong>corrupted</strong> if the program is terminated unexpectedly:
<code>
fr = open(datafile)
fw = open(resultfile,'a')
for i in processing_loop:
read_data(...)
process_data(...)
save_data(...)
fr.close()
fw.close()
</code>
In fact, I had several <code>hdf5</code> files corrupted in this way when the program was killed.</p></li>
</ol>
<p>It seems people prefer the second way, wrapping the loop in <code>with</code>:</p>
<pre><code> with open(...) as f:
...
</code></pre>
<p>or in an exception catch block.</p>
<p>I knew these two things and I did use them. But my <code>hdf5</code> files were still corrupted when the program was killed.</p>
<ul>
<li><p>Once I was trying to write a huge array into an hdf5 file and the program was stuck for a long time, so I killed it; the file was then corrupted.</p></li>
<li><p>Many times, the program was terminated because the server suddenly went down or the running time exceeded the wall time.</p></li>
</ul>
<p>I didn't pay attention to whether the corruption occurs only when the program is terminated while writing data to the file. If so, it means the file structure is corrupted because it's incomplete. So I wonder if it would help to flush the data every time; this increases the IO load but could reduce the chance of being killed in the middle of writing data to the file.</p>
<p>I tried the first way, accessing the file only when reading/writing data is necessary. But obviously the speed slowed down. What happens in the background when we open/close a file handle? Is it not just creating/destroying a pointer? Why do <code>open/close</code> operations cost so much?</p>
| 1 | 2016-07-21T00:30:07Z | 38,493,128 | <p>You should wrap your code in solution 2 in a <code>try/except/finally</code> and always close the files in the <code>finally</code> block. This way, even if there are errors, your files will still be closed.</p>
<p>EDIT: as someone else pointed out, you can use <code>with</code> to handle that for you.</p>
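<p>A minimal sketch of the pattern (the filenames and the uppercasing step are placeholders for the asker's own read/process/save logic):</p>

```python
# Create a small input file so the demo is self-contained.
with open("data.txt", "w") as f:
    f.write("hello\nworld\n")

fr = fw = None
try:
    fr = open("data.txt")
    fw = open("result.txt", "w")
    for line in fr:
        fw.write(line.upper())  # placeholder for process_data/save_data
finally:
    # This block runs whether or not an exception was raised above.
    if fr is not None:
        fr.close()
    if fw is not None:
        fw.close()

print(open("result.txt").read())  # HELLO / WORLD, one per line
```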
| 0 | 2016-07-21T00:51:58Z | [
"python",
"matlab",
"file",
"io",
"hdf5"
] |
Which way is better to read/write data from/to large files? | 38,492,942 | <p>If we need to read/write some data from/to a large file each time before/after processing, which of the following ways (with some demonstration Python code) is better?</p>
<ol>
<li><p>Open the file each time we need to read/write it, and close it immediately after reading/writing. This way seems safer, but slower, since we need to open and close the file many times:
<code>
for i in processing_loop:
with open(datafile) as f:
read_data(...)
process_data(...)
with open(resultfile,'a') as f:
save_data(...)
</code>
This looks awkward, but it seems MATLAB takes this approach in its <code>.mat</code> file IO functions <code>load</code> and <code>save</code>. We call <code>load</code> and <code>save</code> directly without an explicit <code>open</code> or <code>close</code>.</p></li>
<li><p>Open the file and keep it open until we finish all the work; this is faster, but at the risk of the file remaining open if the program raises an error, or of the file being <strong>corrupted</strong> if the program is terminated unexpectedly:
<code>
fr = open(datafile)
fw = open(resultfile,'a')
for i in processing_loop:
read_data(...)
process_data(...)
save_data(...)
fr.close()
fw.close()
</code>
In fact, I had several <code>hdf5</code> files corrupted in this way when the program was killed.</p></li>
</ol>
<p>It seems people prefer the second way, wrapping the loop in <code>with</code>:</p>
<pre><code> with open(...) as f:
...
</code></pre>
<p>or in an exception catch block.</p>
<p>I knew these two things and I did use them. But my <code>hdf5</code> files were still corrupted when the program was killed.</p>
<ul>
<li><p>Once I was trying to write a huge array into an hdf5 file and the program was stuck for a long time, so I killed it; the file was then corrupted.</p></li>
<li><p>Many times, the program was terminated because the server suddenly went down or the running time exceeded the wall time.</p></li>
</ul>
<p>I didn't pay attention to whether the corruption occurs only when the program is terminated while writing data to the file. If so, it means the file structure is corrupted because it's incomplete. So I wonder if it would help to flush the data every time; this increases the IO load but could reduce the chance of being killed in the middle of writing data to the file.</p>
<p>I tried the first way, accessing the file only when reading/writing data is necessary. But obviously the speed slowed down. What happens in the background when we open/close a file handle? Is it not just creating/destroying a pointer? Why do <code>open/close</code> operations cost so much?</p>
| 1 | 2016-07-21T00:30:07Z | 38,493,152 | <p>If you are concerned about using multiple files within the "with" statement, you can open more than one file with a compound statement, or nest the "with" blocks. This is detailed in the answer here:</p>
<p><a href="http://stackoverflow.com/questions/9282967/how-to-open-a-file-using-the-open-with-statement">How to open a file using the open with statement</a></p>
<p>As for what happens when the program raises errors, that's what try/except blocks are for. If you know what errors are expected, you can easily surround your process_data() calls. Again, one except block can catch multiple exceptions.</p>
<p><a href="https://docs.python.org/3/tutorial/errors.html#handling-exceptions" rel="nofollow">https://docs.python.org/3/tutorial/errors.html#handling-exceptions</a></p>
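<p>For instance, a compound <code>with</code> statement (the filenames here are placeholders) closes both files even if the processing between them raises an exception:</p>

```python
# Create a small input file so the demo is self-contained.
with open("data_in.txt", "w") as f:
    f.write("1\n2\n3\n")

# Both files are closed automatically when the block exits,
# even if an exception is raised inside it.
with open("data_in.txt") as fr, open("data_out.txt", "w") as fw:
    for line in fr:
        fw.write(str(int(line) * 2) + "\n")

print(open("data_out.txt").read())  # 2, 4, 6 on separate lines
```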
| 0 | 2016-07-21T00:54:14Z | [
"python",
"matlab",
"file",
"io",
"hdf5"
] |
Problems in datatype while csv file as panda dataframe | 38,492,963 | <p>I have been trying to import my dataset using read_csv in a Python notebook.
However, on importing my dataset I see that the datatype of each column becomes object.
<a href="http://i.stack.imgur.com/eayYS.png" rel="nofollow">Please click this image to see the issue</a></p>
<p>Is there a way that I can retain the datatypes of the columns same as that of the csv file ?</p>
<p>I tried multiple other ways but it didn't work out. It would help if anyone could point me to the right function for this. If there is a way to control the datatype of each column while importing, that would be great.</p>
| 2 | 2016-07-21T00:32:42Z | 38,493,025 | <p>This occurs when you have inconsistent datatypes, e.g. integers and characters such as a blank space. It is difficult to tell without viewing a sample of your actual data, but I suspect this is the issue. For example,</p>
<pre><code>>>> pd.DataFrame([1, 2, '']).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 0 to 2
Data columns (total 1 columns):
0 3 non-null object
dtypes: object(1)
memory usage: 48.0+ bytes
</code></pre>
<p>To circumvent this issue, you need to replace these values such as "" with a sentinel value such as -1 (the actual value would depend on your use case).</p>
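<p>For example, assuming -1 is a safe sentinel for your data:</p>

```python
import pandas as pd

s = pd.Series([1, 2, ''])
print(s.dtype)  # object, because of the empty string

# Replace the empty string with the sentinel -1, then cast to int.
cleaned = s.replace('', -1).astype(int)
print(cleaned.tolist())  # [1, 2, -1]
```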
| 1 | 2016-07-21T00:39:20Z | [
"python",
"csv",
"pandas",
"dataframe",
"import"
] |
Multiple boolean tests in IF statement with 'all()' function | 38,492,991 | <pre><code>def common(num):
#returns true if num is divisible by all the 'i' integers
if all(num%divisor==0 for divisor in (1,10)):
return True
else:
return False
print(common(2520)) --> True
print(common(10)) --> True
print(common(17)) --> False
</code></pre>
<p>Hi all, this function is supposed to tell whether 'num' is a common multiple of all the 'i' numbers. I can't figure out why 10 is returning True? Doesn't 'all()' mean every test has to be True for the whole thing to be True?
Am I using it wrong? Any better functions I should use? Thanks for any insights.</p>
<p>-wT</p>
| 1 | 2016-07-21T00:36:05Z | 38,493,068 | <p>The way to find out for yourself:</p>
<pre><code>num=10
for divisor in (1,10):
print(divisor, num%divisor , num%divisor==0)
</code></pre>
<p>gives</p>
<pre><code>1 0 True
10 0 True
</code></pre>
<p>and makes you understand that <code>(1,10)</code> is a tuple, while you certainly wanted <code>range(1,10)</code>.</p>
<p>Note that the function <code>common()</code> has the same return values as </p>
<pre><code> all(num%divisor==0 for divisor in range(1,10))
</code></pre>
<p>then you can define it as: </p>
<pre><code>def common(num):
return all(num%divisor==0 for divisor in range(1,10))
</code></pre>
| 3 | 2016-07-21T00:45:58Z | [
"python",
"function",
"boolean-logic",
"boolean-expression",
"boolean-operations"
] |
Global Not Working Properly [Python] | 38,493,045 | <p>It's kind of hard to explain my situation. I'll try the best I can.</p>
<p>I'm making a program to make using driftnet easier for new users.
You first enter the gateway IP, then the target IP, then your interface. In the program, once you enter all those, it opens a new terminal window; you then start the second program, which flips the original order of the IPs. I could have the user manually switch them, but I want it to switch automatically. To do that I have to use <code>global</code> to keep the input information, so it will just switch. The problem is that when I run the second program it just starts the first one all over again.</p>
<pre><code>#This it the first program
#"Driftnet.py"
import os
import time
from subprocess import call
def drift():
global gateway
gateway = raw_input("Gateway IP > ")
time.sleep(0.5)
global target
target = raw_input("Target IP > ")
time.sleep(0.5)
global inter
inter = raw_input("Interface > ")
drift()
call(["gnome-terminal"])
os.system("arpspoof -i " + inter + " -t " + gateway + " " + target)
</code></pre>
<p>I run that, input everything, then it opens the second terminal, and I run the second program where it switches the IPs.</p>
<pre><code>#This is the second program
#"Driftnet2.py"
import os
import time
from subprocess import call
import Driftnet
os.system("arpspoof -i " + Driftnet.inter + " -t " + Driftnet.target + " " + Driftnet.gateway)
</code></pre>
<p>When I run that, it pretty much just runs the first program, starting with the question "Gateway IP > "</p>
<p>I have utterly no clue what I'm doing wrong.</p>
<p>Thanks</p>
| 2 | 2016-07-21T00:42:25Z | 38,493,073 | <p>Have you tried defining the global variables after the imports, then declaring them as <code>global</code> inside your functions?</p>
<pre><code>import os
import time
from subprocess import call
gateway = ''
target = ''
inter = ''
</code></pre>
| 0 | 2016-07-21T00:46:44Z | [
"python"
] |
Global Not Working Properly [Python] | 38,493,045 | <p>It's kind of hard to explain my situation. I'll try the best I can.</p>
<p>I'm making a program to make using driftnet easier for new users.
You first enter the gateway IP, then the target IP, then your interface. In the program, once you enter all those, it opens a new terminal window; you then start the second program, which flips the original order of the IPs. I could have the user manually switch them, but I want it to switch automatically. To do that I have to use <code>global</code> to keep the input information, so it will just switch. The problem is that when I run the second program it just starts the first one all over again.</p>
<pre><code>#This it the first program
#"Driftnet.py"
import os
import time
from subprocess import call
def drift():
global gateway
gateway = raw_input("Gateway IP > ")
time.sleep(0.5)
global target
target = raw_input("Target IP > ")
time.sleep(0.5)
global inter
inter = raw_input("Interface > ")
drift()
call(["gnome-terminal"])
os.system("arpspoof -i " + inter + " -t " + gateway + " " + target)
</code></pre>
<p>I run that, input everything, then it opens the second terminal, and I run the second program where it switches the IPs.</p>
<pre><code>#This is the second program
#"Driftnet2.py"
import os
import time
from subprocess import call
import Driftnet
os.system("arpspoof -i " + Driftnet.inter + " -t " + Driftnet.target + " " + Driftnet.gateway)
</code></pre>
<p>When I run that, it pretty much just runs the first program, starting with the question "Gateway IP > "</p>
<p>I have utterly no clue what I'm doing wrong.</p>
<p>Thanks</p>
| 2 | 2016-07-21T00:42:25Z | 38,493,153 | <p>I don't know what "Driftnet" is, and I'm not sure exactly what you're trying to do, but I can explain the behavior you're seeing.</p>
<p>In Driftnet2.py, you call:</p>
<pre><code>import Driftnet
</code></pre>
<p>which causes the Python code in Driftnet.py to be evaluated. That is what the <code>import</code> statement does. All of your code is top-level (except the <code>drift()</code> method, which is called from the top-level), so importing it runs it.</p>
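<p>A tiny self-contained demonstration (<code>driftmod</code> is a throwaway module name invented for this sketch):</p>

```python
import sys
sys.path.insert(0, ".")  # make sure the current directory is importable

# Write a throwaway module whose top-level code prints something.
with open("driftmod.py", "w") as f:
    f.write(
        "print('top-level code ran')\n"
        "value = 42\n"
        "if __name__ == '__main__':\n"
        "    print('only when run as a script')\n"
    )

import driftmod        # executes the module's top-level code
print(driftmod.value)  # 42; the guarded print did NOT run
```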
<p>You have only one method, and no top-level variables, so the <code>global</code> declarations are useless. (It almost seems like you think the <code>global</code> keyword is for IPC, but I'm not sure.)</p>
<p>To design this program, you need to first step back and answer some basic questions. Like, if you want to transfer information from Driftnet.py in one process to Driftnet2.py in another process, how is that transfer going to occur? (Command-line flags? Environmental variables? Unix domain sockets?) Once you know what you want your program to do, we can help you implement it.</p>
| 1 | 2016-07-21T00:54:19Z | [
"python"
] |
Global Not Working Properly [Python] | 38,493,045 | <p>It's kind of hard to explain my situation. I'll try the best I can.</p>
<p>I'm making a program to make using driftnet easier for new users.
You first enter the gateway IP, then the target IP, then your interface. In the program, once you enter all those, it opens a new terminal window; you then start the second program, which flips the original order of the IPs. I could have the user manually switch them, but I want it to switch automatically. To do that I have to use <code>global</code> to keep the input information, so it will just switch. The problem is that when I run the second program it just starts the first one all over again.</p>
<pre><code>#This it the first program
#"Driftnet.py"
import os
import time
from subprocess import call
def drift():
global gateway
gateway = raw_input("Gateway IP > ")
time.sleep(0.5)
global target
target = raw_input("Target IP > ")
time.sleep(0.5)
global inter
inter = raw_input("Interface > ")
drift()
call(["gnome-terminal"])
os.system("arpspoof -i " + inter + " -t " + gateway + " " + target)
</code></pre>
<p>I run that, input everything, then it opens the second terminal, and I run the second program where it switches the IPs.</p>
<pre><code>#This is the second program
#"Driftnet2.py"
import os
import time
from subprocess import call
import Driftnet
os.system("arpspoof -i " + Driftnet.inter + " -t " + Driftnet.target + " " + Driftnet.gateway)
</code></pre>
<p>When I run that, it pretty much just runs the first program, starting with the question "Gateway IP > "</p>
<p>I have utterly no clue what I'm doing wrong.</p>
<p>Thanks</p>
| 2 | 2016-07-21T00:42:25Z | 38,497,934 | <p>This is what the <a href="http://stackoverflow.com/questions/419163/what-does-if-name-main-do"><code>if __name__=='__main__':</code></a> construct exists for. Change the code in Driftnet.py to this:</p>
<pre><code>import os
import time
from subprocess import call
def drift():
global gateway
gateway = raw_input("Gateway IP > ")
time.sleep(0.5)
global target
target = raw_input("Target IP > ")
time.sleep(0.5)
global inter
inter = raw_input("Interface > ")
if __name__=='__main__':
drift()
call(["gnome-terminal"])
os.system("arpspoof -i " + inter + " -t " + gateway + " " + target)
</code></pre>
<hr>
<p>It's possible that you also have to modify Driftnet2.py a little. I'm not sure if you'll be able to access the global variables by doing <code>import Driftnet</code>. If you're getting errors, try <code>import __main__ as Driftnet</code> instead.</p>
| 0 | 2016-07-21T07:39:27Z | [
"python"
] |
Getting Django for Python 3 Started for Mac django-admin not working | 38,493,057 | <p>I have been trying to set up Django for Python 3 for 2 days now. I have installed Python 3.5.2 on my Mac Mini and have pip3 installed successfully. I have installed Django using <code>pip3 install Django</code>. The problem is that when I try to start my project by typing <code>django-admin startproject mysite</code>, I get the error <code>-bash: django-admin: command not found</code>. If you need any more info, just let me know; I am also new to Mac, so I may be missing something simple. How do I get django-admin working? I have tried pretty much everything I could find on the web.</p>
| 1 | 2016-07-21T00:44:06Z | 38,493,996 | <p>Activate <code>virtualenv</code> and install Django there (with <code>python -m pip install django</code>). Try <code>python -m django startproject mysite</code>. You can use <code>python -m django</code> instead of <code>django-admin</code> since Django 1.9.</p>
| 0 | 2016-07-21T02:42:17Z | [
"python",
"django",
"python-3.x",
"django-admin"
] |
Python 2.7 Pip module not installing or setting paths via cmd? | 38,493,144 | <p>I've been having some really odd issues trying to install and use the Python "pip" module. Firstly, I installed pip by downloading the get-pip.py file and running it, which replaced my pre-existing pip; that seemed to work fine. However, whenever I try to use pip it always comes up with "pip is not recognized as an internal or external command" etc. I've set the path for python by using setx PATH "%PATH%;C:\Python27\python" and then used C:\Python27\Scripts\pip the second time to try and set the path for pip. But neither of these seems to work. I can't use pip in cmd, and now I can't use python either.</p>
<p>Does anyone know how to make this work? I'm trying to run the command "pip install -r requirements.txt" in the right folder, but pip is not recognized. Any suggestions? Thanks.</p>
| 0 | 2016-07-21T00:53:25Z | 38,493,199 | <p>You're using the wrong path: pip resides in the <code>Scripts</code> subdirectory. Set PATH to include <code>C:\Python27\Scripts</code>, then restart cmd.</p>
| 0 | 2016-07-21T00:59:47Z | [
"python",
"windows",
"python-2.7"
] |
Using BeautifulSoup to find all tags containing a AND NOT containing b | 38,493,154 | <p>I am using bs4 to pull out li tags with a class of js-stream-item but not containing scroll-bump-user-card from the following (i.e. getting a and b only):</p>
<pre><code><li class="js-stream-item stream-item ">a<li>
<li class="js-stream-item stream-item stream-item ">b<li>
<li class="js-stream-item stream-item scroll-bump-user-card ">c<li>
</code></pre>
<p>There are two ways I am thinking of.</p>
<ol>
<li><p>Using <code>soup.find_all('li', class_=re.compile('js-stream-item'))</code> to get all tags and removed the tag with scroll-bump-user-card then. </p></li>
<li><p>Using <code>[tag.extract() for tag in soup.find_all('li', class_=re.compile('scroll-bump-user-card'))]</code> to delete first, then find all after.</p></li>
</ol>
<p><strong>The question is</strong> if there's a decent way to get a, b by editing the regex with AND NOT syntax in <code>re.compile()</code>.</p>
<p><strong>Update</strong> I re-wrote first option of alecxe's answer into one single long line as following:</p>
<pre><code>soup.find_all(lambda tag: re.compile('js-stream-item').search(str(tag))
and not re.compile('scroll-bump-user-card').search(str(tag))
and tag.name == 'li')
</code></pre>
| 1 | 2016-07-21T00:54:20Z | 38,493,584 | <p>First of all, <code>class</code> is a special <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#multi-valued-attributes" rel="nofollow">multi-valued attribute</a> which requires <a href="http://stackoverflow.com/a/34294195/771848">special handling</a>.</p>
<p>One option is to use a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function" rel="nofollow">searching function</a> and check the presence of <code>js-stream-item</code> class and absence of <code>scroll-bump-user-card</code> class:</p>
<pre><code>def search_function(tag):
if tag.name == "li":
class_ = tag.get("class", [])
return "js-stream-item" in class_ and "scroll-bump-user-card" not in class_
for li in soup.find_all(search_function):
print(li.get_text(strip=True))
</code></pre>
<hr>
<p>Another option would be to find all <code>li</code> with <code>js-stream-item</code> class and just skip <code>li</code> elements having <code>scroll-bump-user-card</code> class:</p>
<pre><code>for li in soup.select("li.js-stream-item"):
if "scroll-bump-user-card" in li["class"]:
continue
print(li.get_text(strip=True))
</code></pre>
<hr>
<p>Another, to check that <code>class</code> ends with <code>stream-item</code> with a <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> (don't use this):</p>
<pre><code>for li in soup.select("li[class$=' stream-item ']"):
print(li.get_text(strip=True))
</code></pre>
<hr>
<p>Note that a better CSS selector for this use case would be:</p>
<pre><code>li.js-stream-item:not(.scroll-bump-user-card)
</code></pre>
<p>but it is not going to work because of the limited CSS selector support in <code>BeautifulSoup</code>.</p>
| 0 | 2016-07-21T01:51:27Z | [
"python",
"regex",
"python-2.7",
"beautifulsoup"
] |
Get distinct datatypes within the same Series in Pandas | 38,493,213 | <p>Let's say we have a pandas series which contains a mixture of datatypes as below (string, int and datetime)</p>
<p>If I check the dtype of diff_series (<code>diff_series.dtype</code>), it shows me that it's object, as expected. I would like to get the distinct datatypes that are in the series. I have the following list comprehension, which gets me the unique dtypes in a series.</p>
<pre><code>import datetime
import pandas as pd
>> diff_series = pd.Series(['1','2',3,"random_text",datetime.datetime.now()])
>> set([type(i) for i in diff_series])
set([<type 'str'>, <type 'datetime.datetime'>, <type 'int'>])
</code></pre>
<p>But I get a feeling that there should be a more efficient (pandonic) way of going about doing this?</p>
<p>I tried </p>
<pre><code>>> diff_series.get_dtype_counts()
object 1
dtype: int64
</code></pre>
<p>which is not what I'm looking for. Any ideas?</p>
| 2 | 2016-07-21T01:01:45Z | 38,493,419 | <p>This should be more pandonic:</p>
<pre><code>diff_series.apply(type)
0 <type 'str'>
1 <type 'str'>
2 <type 'int'>
3 <type 'str'>
4 <type 'datetime.datetime'>
dtype: object
</code></pre>
| 4 | 2016-07-21T01:30:09Z | [
"python",
"numpy",
"pandas",
"types",
"series"
] |
Get distinct datatypes within the same Series in Pandas | 38,493,213 | <p>Let's say we have a pandas series which contains a mixture of datatypes as below (string, int and datetime)</p>
<p>If I check the dtype of diff_series (<code>diff_series.dtype</code>), it shows me that it's object, as expected. I would like to get the distinct datatypes that are in the series. I have the following list comprehension, which gets me the unique dtypes in a series.</p>
<pre><code>import datetime
import pandas as pd
>> diff_series = pd.Series(['1','2',3,"random_text",datetime.datetime.now()])
>> set([type(i) for i in diff_series])
set([<type 'str'>, <type 'datetime.datetime'>, <type 'int'>])
</code></pre>
<p>But I get a feeling that there should be a more efficient (pandonic) way of going about doing this?</p>
<p>I tried </p>
<pre><code>>> diff_series.get_dtype_counts()
object 1
dtype: int64
</code></pre>
<p>which is not what I'm looking for. Any ideas?</p>
| 2 | 2016-07-21T01:01:45Z | 38,493,484 | <p>We could do something like:</p>
<pre><code>In [4]: diff_series.map(type).value_counts()
Out[4]:
<class 'str'> 3
<class 'datetime.datetime'> 1
<class 'int'> 1
dtype: int64
</code></pre>
<p>Or, might as well "go all out": </p>
<pre><code>In [5]: diff_series.map(type).value_counts().index.values
Out[5]: array([<class 'str'>, <class 'datetime.datetime'>, <class 'int'>], dtype=object)
</code></pre>
| 3 | 2016-07-21T01:37:18Z | [
"python",
"numpy",
"pandas",
"types",
"series"
] |
Convert .xls to .xlsx so that Openpyxl can work with it | 38,493,257 | <p>I wrote a Flask application that will be used to process some Excel files; however, I wrote it for .xlsx files. The inputted files might be .xls files, which I know Openpyxl can't open. How can I convert the file to .xlsx before processing it with Openpyxl in my application?</p>
<p>I saw something online about using xlrd to write the original .xls to a .xlsx file that Openpyxl can process but I was having trouble tailoring it to fit my specific application.</p>
<p>Thanks in advance!</p>
<pre><code>from openpyxl import load_workbook
from openpyxl.styles import Style, Font
from flask import Flask, request, render_template, redirect, url_for, send_file
import os
app = Flask(__name__)
@app.route('/')
def index():
return """<center><body bgcolor="#FACC2E">
<h1><p>Automated TDX Report</h1></p><br>
<form action="/upload" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form></center></body>"""
@app.route('/upload', methods = ['GET', 'POST'])
def upload():
if request.method == 'POST':
f = request.files['file']
f.save(f.filename)
return process(f.filename)
def process(filename):
VP = ['ZYJD', 'ZYJC', 'ZYKC', 'ZYKA', 'ZYKB', 'ZYKD', 'ZYKE', 'ZYKF', 'ZYJB', 'ZYJX', 'ZYKG', 'ZYKH', 'ZYJE', 'ZYJA',
'ZYKI', 'ZYKX', 'ZYKK', 'ZYKJ', 'ZYJF', 'ZYJK', 'ZYJG', 'ZYJJ', 'ZYKL', 'ZYKM', 'ZYKN']
VA = ['ZYIC', 'ZYIB', 'ZYHC', 'ZYIA', 'ZYHA', 'ZYHG', 'ZYHB', 'ZYID', 'ZYDA', 'ZYIE', 'ZYHD', 'ZYIG', 'ZYIX', 'ZYHE',
'ZYIF', 'ZYHX', 'ZYDE', 'ZYHF', 'ZYLB', 'ZYAC', 'ZYCF', 'ZYDF', 'ZYBG', 'ZYDG', 'ZYDD', 'ZYDH', 'ZYCB', 'ZYCA',
'ZYWA', 'ZYWB', 'ZYWC', 'ZYWD', 'ZYWE', 'ZYWF', 'ZYWG', 'ZYWI', 'ZYWJ']
Gordon = ['ZYDB', 'ZYDX', 'ZYEB', 'ZYED', 'ZYEC', 'ZYEA', 'ZYEX', 'ZYFE', 'ZYFX', 'ZYFD', 'ZYFA', 'ZYFC', 'ZYFB',
'ZYGC', 'ZYGA', 'ZYGX', 'ZYGB', 'ZYGF', 'ZYGG', 'ZYGD', 'ZYLA', 'ZYBF', 'ZYBE', 'ZYLD', 'ZYKM', 'ZYKN',
'ZYCC', 'ZYCE']
Pete = ['ZYAD', 'ZYBX', 'ZYAX', 'ZYAB', 'ZYBC', 'ZYBA', 'ZYBB', 'ZYAA', 'ZYBD', 'ZYLE', 'ZYCX', 'ZYAE', 'ZYCC', 'ZYCE',
'ZYLA', 'ZYBF', 'ZYBE', 'ZYLD']
Mike = ['ZYKP', 'ZYAP', 'ZYHP', 'ZYJP', 'ZYFP', 'ZYJR', 'ZYCP', 'ZYIR', 'ZYAR', 'ZYBP', 'ZYKR', 'ZYJS', 'ZYIP', 'ZYHR',
'ZYEP', 'ZYFF', 'ZYGP', 'ZYKS', 'ZYEE', 'ZYJH', 'ZYII', 'ZYHH', 'ZYJW']
workbook = load_workbook(filename)
worksheet = workbook.active
worksheet.column_dimensions.group('B', hidden=True)
worksheet.column_dimensions.group('D', hidden=True)
worksheet.column_dimensions.group('E', hidden=True)
worksheet.column_dimensions.group('F', hidden=True)
worksheet.column_dimensions.group('G', hidden=True)
worksheet.column_dimensions.group('H', hidden=True)
worksheet.column_dimensions.group('I', hidden=True)
worksheet.column_dimensions.group('K', hidden=True)
worksheet.column_dimensions.group('L', hidden=True)
worksheet.column_dimensions.group('M', hidden=True)
worksheet.column_dimensions.group('N', hidden=True)
worksheet.column_dimensions.group('O', hidden=True)
worksheet.column_dimensions.group('P', hidden=True)
worksheet.column_dimensions.group('Q', hidden=True)
worksheet.column_dimensions.group('R', hidden=True)
worksheet.column_dimensions.group('S', hidden=True)
worksheet.column_dimensions.group('T', hidden=True)
worksheet.column_dimensions.group('U', hidden=True)
worksheet.column_dimensions.group('V', hidden=True)
worksheet.column_dimensions.group('W', hidden=True)
worksheet.column_dimensions.group('X', hidden=True)
worksheet.column_dimensions.group('Y', hidden=True)
worksheet.column_dimensions.group('Z', hidden=True)
worksheet.column_dimensions.group('AA', hidden=True)
worksheet.column_dimensions.group('AB', hidden=True)
worksheet.column_dimensions.group('AC', hidden=True)
worksheet.column_dimensions.group('AD', hidden=True)
worksheet.column_dimensions.group('AE', hidden=True)
worksheet.column_dimensions.group('AF', hidden=True)
worksheet.column_dimensions.group('AG', hidden=True)
worksheet.column_dimensions.group('AH', hidden=True)
worksheet.column_dimensions.group('AI', hidden=True)
worksheet.column_dimensions.group('AJ', hidden=True)
worksheet.column_dimensions.group('AK', hidden=True)
worksheet.column_dimensions.group('AM', hidden=True)
worksheet.column_dimensions.group('AN', hidden=True)
worksheet.column_dimensions.group('AP', hidden=True)
worksheet.column_dimensions.group('AQ', hidden=True)
worksheet.column_dimensions.group('AR', hidden=True)
worksheet.column_dimensions.group('AS', hidden=True)
worksheet.column_dimensions.group('AT', hidden=True)
worksheet.column_dimensions.group('AU', hidden=True)
worksheet.column_dimensions.group('AV', hidden=True)
worksheet.column_dimensions.group('AW', hidden=True)
worksheet.column_dimensions.group('AX', hidden=True)
worksheet.column_dimensions.group('AY', hidden=True)
worksheet.column_dimensions.group('AZ', hidden=True)
worksheet.column_dimensions.group('BA', hidden=True)
worksheet.column_dimensions.group('BB', hidden=True)
worksheet.column_dimensions.group('BC', hidden=True)
worksheet.column_dimensions.group('BD', hidden=True)
worksheet.column_dimensions.group('BE', hidden=True)
worksheet.column_dimensions.group('BF', hidden=True)
worksheet.column_dimensions.group('BH', hidden=True)
worksheet.column_dimensions.group('BI', hidden=True)
worksheet.column_dimensions.group('BJ', hidden=True)
worksheet.column_dimensions.group('BK', hidden=True)
worksheet.column_dimensions.group('BL', hidden=True)
worksheet.column_dimensions.group('BM', hidden=True)
worksheet.column_dimensions.group('BN', hidden=True)
worksheet.column_dimensions.group('BO', hidden=True)
worksheet.column_dimensions.group('BP', hidden=True)
worksheet.column_dimensions.group('BQ', hidden=True)
worksheet.column_dimensions.group('BR', hidden=True)
worksheet.column_dimensions.group('BS', hidden=True)
worksheet.column_dimensions.group('BT', hidden=True)
worksheet.column_dimensions.group('BU', hidden=True)
worksheet.column_dimensions.group('BV', hidden=True)
worksheet.column_dimensions.group('BW', hidden=True)
worksheet.column_dimensions.group('BX', hidden=True)
worksheet.column_dimensions.group('BY', hidden=True)
worksheet.column_dimensions.group('BZ', hidden=True)
worksheet.column_dimensions.group('CA', hidden=True)
worksheet.column_dimensions.group('CB', hidden=True)
worksheet.column_dimensions.group('CC', hidden=True)
worksheet.column_dimensions.group('CD', hidden=True)
worksheet.column_dimensions.group('CE', hidden=True)
worksheet.column_dimensions.group('CF', hidden=True)
worksheet.column_dimensions.group('CG', hidden=True)
worksheet.column_dimensions.group('CH', hidden=True)
worksheet.column_dimensions.group('CI', hidden=True)
worksheet.column_dimensions.group('CJ', hidden=True)
worksheet.column_dimensions.group('CK', hidden=True)
worksheet.column_dimensions.group('CL', hidden=True)
worksheet.column_dimensions.group('CM', hidden=True)
worksheet.column_dimensions.group('CN', hidden=True)
worksheet.column_dimensions.group('CO', hidden=True)
worksheet.column_dimensions.group('CP', hidden=True)
worksheet.column_dimensions.group('CQ', hidden=True)
worksheet.column_dimensions.group('CR', hidden=True)
worksheet.column_dimensions.group('CS', hidden=True)
worksheet.column_dimensions.group('CU', hidden=True)
routecolumn = worksheet.columns[58]
i = 2
supervisorheader = worksheet.cell("CV1")
s = Style(font=Font(bold=True))
supervisorheader.style = s
worksheet['CV1'] = 'Supervisor'
for route in routecolumn:
if route.value == 'Responsible Route':
continue
if route.value in Gordon:
pos = Gordon.index(route.value)
worksheet['CV' + str(i)].value = 'Gordon'
i += 1
elif route.value in VA:
pos = VA.index(route.value)
worksheet['CV' + str(i)].value = 'Vinny A'
i += 1
elif route.value in VP:
pos = VP.index(route.value)
worksheet['CV' + str(i)].value = 'Vinny P'
i += 1
elif route.value in Pete:
pos = Pete.index(route.value)
worksheet['CV' + str(i)].value = 'Pete'
i += 1
elif route.value in Mike:
pos = Mike.index(route.value)
worksheet['CV' + str(i)].value = 'Mike'
i += 1
elif route.value not in Gordon or route.value not in VA or route.value not in VP or route.value not in Pete \
or route.value not in Mike:
worksheet['CV' + str(i)].value = 'Building'
i += 1
workbook.save(filename)
newfilename = filename.strip(".xlsx")
newfilename = newfilename + ".xls"
os.rename('/home/MY NAME/'+filename, '/home/MY NAME/'+newfilename)
return send_file(newfilename, attachment_filename='tdx.xls', as_attachment=True), os.remove ('/home/MY NAME/'+newfilename)
if __name__ == '__main__':
app.run(debug = True, host = '0.0.0.0')
</code></pre>
| 0 | 2016-07-21T01:07:11Z | 38,494,890 | <p>My recommendation: Use xlrd to read the values you need, and openpyxl to create a new xlsx workbook. You just need to be able to get all of the relevant info using xlrd.</p>
<p>The only thing you're doing which isn't just reading values from cells is selecting the "active" worksheet. I found <a href="https://groups.google.com/forum/#!topic/python-excel/-e0UkBDMPHo" rel="nofollow">this Google Groups post</a> in a thread titled "xlrd: Get active sheet" which says to try using either the <code>.sheet_selected</code> attribute or the <code>.sheet_visible</code> attribute, with the following code for checking which works:</p>
<pre><code>import xlrd
b = xlrd.open_workbook("vis.xls") # where vis.xls is your test file
for i, s in enumerate(b.sheets()):
print(i, s.name, s.sheet_selected, s.sheet_visible)
</code></pre>
<p>It also notes:</p>
<blockquote>
<p>These two attributes are documented in <code>sheet.py</code> in the dict <code>_WINDOW_2</code>
which starts at about line 2. <code>sheet_visible</code> is probably what you want.
Ignore the MS docs which show this bit as undefined.</p>
</blockquote>
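<p>A rough sketch of the recommended approach (names like <code>xls_to_xlsx</code> are mine, and it assumes reasonably recent xlrd and openpyxl): read each sheet of the <code>.xls</code> with xlrd and copy the cell values into a fresh openpyxl workbook. Only values are carried over — styles, merged cells and formulas are not.</p>

```python
import xlrd
from openpyxl import Workbook

def xls_to_xlsx(src_path, dest_path):
    # Copy every cell of every sheet from an .xls file into a new
    # .xlsx workbook. Cell values only; formatting is not preserved.
    book = xlrd.open_workbook(src_path)
    out = Workbook()
    out.remove(out.active)  # drop the default empty sheet
    for sheet in book.sheets():
        ws = out.create_sheet(title=sheet.name)
        for r in range(sheet.nrows):
            for c in range(sheet.ncols):
                ws.cell(row=r + 1, column=c + 1,
                        value=sheet.cell_value(r, c))
    out.save(dest_path)
```

<p>After the copy, the <code>.xlsx</code> file can be loaded with <code>load_workbook</code> and processed as in the question.</p>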
| 0 | 2016-07-21T04:29:27Z | [
"python",
"openpyxl",
"xlrd"
] |
Convert .xls to .xlsx so that Openpyxl can work with it | 38,493,257 | <p>I wrote a flask application that will be used to process some excel files, however, I wrote it for .xlsx files. The inputted files might be .xls files, which I know Openpyxl can't open. How can I convert the file to .xlsx before processing it with Openpyxl in my application?</p>
<p>I saw something online about using xlrd to write the original .xls to a .xlsx file that Openpyxl can process but I was having trouble tailoring it to fit my specific application.</p>
<p>Thanks in advance!</p>
<pre><code>from openpyxl import load_workbook
from openpyxl.styles import Style, Font
from flask import Flask, request, render_template, redirect, url_for, send_file
import os
app = Flask(__name__)
@app.route('/')
def index():
return """<center><body bgcolor="#FACC2E">
<h1><p>Automated TDX Report</h1></p><br>
<form action="/upload" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form></center></body>"""
@app.route('/upload', methods = ['GET', 'POST'])
def upload():
if request.method == 'POST':
f = request.files['file']
f.save(f.filename)
return process(f.filename)
def process(filename):
VP = ['ZYJD', 'ZYJC', 'ZYKC', 'ZYKA', 'ZYKB', 'ZYKD', 'ZYKE', 'ZYKF', 'ZYJB', 'ZYJX', 'ZYKG', 'ZYKH', 'ZYJE', 'ZYJA',
'ZYKI', 'ZYKX', 'ZYKK', 'ZYKJ', 'ZYJF', 'ZYJK', 'ZYJG', 'ZYJJ', 'ZYKL', 'ZYKM', 'ZYKN']
VA = ['ZYIC', 'ZYIB', 'ZYHC', 'ZYIA', 'ZYHA', 'ZYHG', 'ZYHB', 'ZYID', 'ZYDA', 'ZYIE', 'ZYHD', 'ZYIG', 'ZYIX', 'ZYHE',
'ZYIF', 'ZYHX', 'ZYDE', 'ZYHF', 'ZYLB', 'ZYAC', 'ZYCF', 'ZYDF', 'ZYBG', 'ZYDG', 'ZYDD', 'ZYDH', 'ZYCB', 'ZYCA',
'ZYWA', 'ZYWB', 'ZYWC', 'ZYWD', 'ZYWE', 'ZYWF', 'ZYWG', 'ZYWI', 'ZYWJ']
Gordon = ['ZYDB', 'ZYDX', 'ZYEB', 'ZYED', 'ZYEC', 'ZYEA', 'ZYEX', 'ZYFE', 'ZYFX', 'ZYFD', 'ZYFA', 'ZYFC', 'ZYFB',
'ZYGC', 'ZYGA', 'ZYGX', 'ZYGB', 'ZYGF', 'ZYGG', 'ZYGD', 'ZYLA', 'ZYBF', 'ZYBE', 'ZYLD', 'ZYKM', 'ZYKN',
'ZYCC', 'ZYCE']
Pete = ['ZYAD', 'ZYBX', 'ZYAX', 'ZYAB', 'ZYBC', 'ZYBA', 'ZYBB', 'ZYAA', 'ZYBD', 'ZYLE', 'ZYCX', 'ZYAE', 'ZYCC', 'ZYCE',
'ZYLA', 'ZYBF', 'ZYBE', 'ZYLD']
Mike = ['ZYKP', 'ZYAP', 'ZYHP', 'ZYJP', 'ZYFP', 'ZYJR', 'ZYCP', 'ZYIR', 'ZYAR', 'ZYBP', 'ZYKR', 'ZYJS', 'ZYIP', 'ZYHR',
'ZYEP', 'ZYFF', 'ZYGP', 'ZYKS', 'ZYEE', 'ZYJH', 'ZYII', 'ZYHH', 'ZYJW']
workbook = load_workbook(filename)
worksheet = workbook.active
worksheet.column_dimensions.group('B', hidden=True)
worksheet.column_dimensions.group('D', hidden=True)
worksheet.column_dimensions.group('E', hidden=True)
worksheet.column_dimensions.group('F', hidden=True)
worksheet.column_dimensions.group('G', hidden=True)
worksheet.column_dimensions.group('H', hidden=True)
worksheet.column_dimensions.group('I', hidden=True)
worksheet.column_dimensions.group('K', hidden=True)
worksheet.column_dimensions.group('L', hidden=True)
worksheet.column_dimensions.group('M', hidden=True)
worksheet.column_dimensions.group('N', hidden=True)
worksheet.column_dimensions.group('O', hidden=True)
worksheet.column_dimensions.group('P', hidden=True)
worksheet.column_dimensions.group('Q', hidden=True)
worksheet.column_dimensions.group('R', hidden=True)
worksheet.column_dimensions.group('S', hidden=True)
worksheet.column_dimensions.group('T', hidden=True)
worksheet.column_dimensions.group('U', hidden=True)
worksheet.column_dimensions.group('V', hidden=True)
worksheet.column_dimensions.group('W', hidden=True)
worksheet.column_dimensions.group('X', hidden=True)
worksheet.column_dimensions.group('Y', hidden=True)
worksheet.column_dimensions.group('Z', hidden=True)
worksheet.column_dimensions.group('AA', hidden=True)
worksheet.column_dimensions.group('AB', hidden=True)
worksheet.column_dimensions.group('AC', hidden=True)
worksheet.column_dimensions.group('AD', hidden=True)
worksheet.column_dimensions.group('AE', hidden=True)
worksheet.column_dimensions.group('AF', hidden=True)
worksheet.column_dimensions.group('AG', hidden=True)
worksheet.column_dimensions.group('AH', hidden=True)
worksheet.column_dimensions.group('AI', hidden=True)
worksheet.column_dimensions.group('AJ', hidden=True)
worksheet.column_dimensions.group('AK', hidden=True)
worksheet.column_dimensions.group('AM', hidden=True)
worksheet.column_dimensions.group('AN', hidden=True)
worksheet.column_dimensions.group('AP', hidden=True)
worksheet.column_dimensions.group('AQ', hidden=True)
worksheet.column_dimensions.group('AR', hidden=True)
worksheet.column_dimensions.group('AS', hidden=True)
worksheet.column_dimensions.group('AT', hidden=True)
worksheet.column_dimensions.group('AU', hidden=True)
worksheet.column_dimensions.group('AV', hidden=True)
worksheet.column_dimensions.group('AW', hidden=True)
worksheet.column_dimensions.group('AX', hidden=True)
worksheet.column_dimensions.group('AY', hidden=True)
worksheet.column_dimensions.group('AZ', hidden=True)
worksheet.column_dimensions.group('BA', hidden=True)
worksheet.column_dimensions.group('BB', hidden=True)
worksheet.column_dimensions.group('BC', hidden=True)
worksheet.column_dimensions.group('BD', hidden=True)
worksheet.column_dimensions.group('BE', hidden=True)
worksheet.column_dimensions.group('BF', hidden=True)
worksheet.column_dimensions.group('BH', hidden=True)
worksheet.column_dimensions.group('BI', hidden=True)
worksheet.column_dimensions.group('BJ', hidden=True)
worksheet.column_dimensions.group('BK', hidden=True)
worksheet.column_dimensions.group('BL', hidden=True)
worksheet.column_dimensions.group('BM', hidden=True)
worksheet.column_dimensions.group('BN', hidden=True)
worksheet.column_dimensions.group('BO', hidden=True)
worksheet.column_dimensions.group('BP', hidden=True)
worksheet.column_dimensions.group('BQ', hidden=True)
worksheet.column_dimensions.group('BR', hidden=True)
worksheet.column_dimensions.group('BS', hidden=True)
worksheet.column_dimensions.group('BT', hidden=True)
worksheet.column_dimensions.group('BU', hidden=True)
worksheet.column_dimensions.group('BV', hidden=True)
worksheet.column_dimensions.group('BW', hidden=True)
worksheet.column_dimensions.group('BX', hidden=True)
worksheet.column_dimensions.group('BY', hidden=True)
worksheet.column_dimensions.group('BZ', hidden=True)
worksheet.column_dimensions.group('CA', hidden=True)
worksheet.column_dimensions.group('CB', hidden=True)
worksheet.column_dimensions.group('CC', hidden=True)
worksheet.column_dimensions.group('CD', hidden=True)
worksheet.column_dimensions.group('CE', hidden=True)
worksheet.column_dimensions.group('CF', hidden=True)
worksheet.column_dimensions.group('CG', hidden=True)
worksheet.column_dimensions.group('CH', hidden=True)
worksheet.column_dimensions.group('CI', hidden=True)
worksheet.column_dimensions.group('CJ', hidden=True)
worksheet.column_dimensions.group('CK', hidden=True)
worksheet.column_dimensions.group('CL', hidden=True)
worksheet.column_dimensions.group('CM', hidden=True)
worksheet.column_dimensions.group('CN', hidden=True)
worksheet.column_dimensions.group('CO', hidden=True)
worksheet.column_dimensions.group('CP', hidden=True)
worksheet.column_dimensions.group('CQ', hidden=True)
worksheet.column_dimensions.group('CR', hidden=True)
worksheet.column_dimensions.group('CS', hidden=True)
worksheet.column_dimensions.group('CU', hidden=True)
routecolumn = worksheet.columns[58]
i = 2
supervisorheader = worksheet.cell("CV1")
s = Style(font=Font(bold=True))
supervisorheader.style = s
worksheet['CV1'] = 'Supervisor'
for route in routecolumn:
if route.value == 'Responsible Route':
continue
if route.value in Gordon:
pos = Gordon.index(route.value)
worksheet['CV' + str(i)].value = 'Gordon'
i += 1
elif route.value in VA:
pos = VA.index(route.value)
worksheet['CV' + str(i)].value = 'Vinny A'
i += 1
elif route.value in VP:
pos = VP.index(route.value)
worksheet['CV' + str(i)].value = 'Vinny P'
i += 1
elif route.value in Pete:
pos = Pete.index(route.value)
worksheet['CV' + str(i)].value = 'Pete'
i += 1
elif route.value in Mike:
pos = Mike.index(route.value)
worksheet['CV' + str(i)].value = 'Mike'
i += 1
elif route.value not in Gordon or route.value not in VA or route.value not in VP or route.value not in Pete \
or route.value not in Mike:
worksheet['CV' + str(i)].value = 'Building'
i += 1
workbook.save(filename)
newfilename = filename.strip(".xlsx")
newfilename = newfilename + ".xls"
os.rename('/home/MY NAME/'+filename, '/home/MY NAME/'+newfilename)
return send_file(newfilename, attachment_filename='tdx.xls', as_attachment=True), os.remove ('/home/MY NAME/'+newfilename)
if __name__ == '__main__':
app.run(debug = True, host = '0.0.0.0')
</code></pre>
| 0 | 2016-07-21T01:07:11Z | 38,502,394 | <p>Best thing is to pipe the XLS file through OpenOffice / LibreOffice in <a href="https://ask.libreoffice.org/en/question/53173/libreoffice-444-and-the-headless-install/" rel="nofollow">headless mode</a> to do the conversion and then load the file in Python for processing.</p>
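<p>A minimal sketch of driving that conversion from Python, assuming a <code>libreoffice</code> binary is on the PATH (on some installs it is named <code>soffice</code> instead):</p>

```python
import shutil
import subprocess

def libreoffice_convert_cmd(xls_path, outdir="."):
    # Headless LibreOffice conversion: writes an .xlsx copy of
    # xls_path into outdir, keeping the original file name stem.
    return ["libreoffice", "--headless", "--convert-to", "xlsx",
            "--outdir", outdir, xls_path]

def convert_to_xlsx(xls_path, outdir="."):
    cmd = libreoffice_convert_cmd(xls_path, outdir)
    if shutil.which(cmd[0]) is None:
        raise FileNotFoundError("LibreOffice binary not found on PATH")
    subprocess.run(cmd, check=True)
```

<p>Once converted, the resulting <code>.xlsx</code> opens normally with openpyxl's <code>load_workbook</code>.</p>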
| 0 | 2016-07-21T10:57:11Z | [
"python",
"openpyxl",
"xlrd"
] |
I am getting the error 'Response' object has no attribute 'fromstring' | 38,493,295 | <p>I am trying to use xpath in my django app and I keep getting the error</p>
<pre><code>'Response' object has no attribute 'fromstring'
</code></pre>
<p>I don't understand why. I have researched, and the only thing I saw was people having problems with <code>text</code> instead of <code>fromstring</code>. Here's my code:</p>
<pre><code>def panties():
from lxml import html
pan_url = 'http://www.panvideos.com'
html = requests.get(pan_url, headers=headers)
soup = BeautifulSoup(html.text, 'html5lib')
video_row = soup.find_all('div', {'class': 'video'})
def youtube_link(url):
youtube_page = requests.get(url, headers=headers)
tree = html.fromstring(youtube_page.content)
the_link = tree.xpath('//*[@id="wrapper"]/div[1]/div[2]/div[3]/div[1]/div[1]/h1')
return the_link
entries = [{'text': div.h4.text,
'href': div.a.get('href'),
'tube': youtube_link(div.a.get('href')),
} for div in video_row][:1]
return entries
</code></pre>
<p>any help would be great.</p>
<p>Edit:
I'm following The Hitchhiker's Guide to Python, but while I search for answers I keep seeing people use etree and not the way he's using it</p>
| 0 | 2016-07-21T01:12:07Z | 38,493,400 | <p>Don't nest functions like this. Un-nest your function and it will work fine.</p>
<p>You define <code>html = requests.get(*)</code></p>
<p>What this returns is a response object.</p>
<p>In your nested function you're using this <code>html</code> instead of what you imported <code>from lxml import html</code> because of the namespace(s).</p>
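<p>A minimal illustration of the shadowing problem (using a hard-coded snippet rather than a live request): as long as nothing rebinds the name <code>html</code> — in the question, <code>html = requests.get(...)</code> did — <code>html.fromstring</code> keeps referring to the lxml module. Binding the response to a different name such as <code>response</code> avoids the error.</p>

```python
from lxml import html

def headings(page_bytes):
    # `html` still names the lxml module here, because nothing has
    # rebound it. In the question, `html = requests.get(...)` hid the
    # module, so `html.fromstring` hit the Response object instead.
    tree = html.fromstring(page_bytes)
    return tree.xpath("//h1/text()")

print(headings(b"<html><body><h1>Hi</h1></body></html>"))
```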
| 3 | 2016-07-21T01:27:44Z | [
"python",
"django",
"xpath",
"web-scraping"
] |
Python: Even if the if statement equals false it still executes the code inside the if statement | 38,493,326 | <p>Even if the if statement equals false it still executes the code inside the if statement.
Here is the code:</p>
<pre><code># Imports:
import time
# Code Starts Here:
print("Welcome to a test application By: David(Toxicminibro)")
time.sleep(1.25)
Donald = "Donald Trump"
donald = "donald trump"
Hillary = "Hillary Clinton"
hillary = "hillary clinton"
name = input("What is your name?: ")
if name == Donald or donald or Hillary or hillary:
print("No. Stop it.")
else:
print("Hello " + name + " !")
</code></pre>
| 1 | 2016-07-21T01:17:38Z | 38,493,371 | <p>I think you mean to do :</p>
<pre><code>if name == Donald or name == donald or name == Hillary or name == hillary:
</code></pre>
<p>Have a <a href="https://docs.python.org/2.4/lib/truth.html" rel="nofollow">look at this link</a>; it explains how various values are considered to be "true" or "false"</p>
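<p>To see why the original condition always fires: <code>or</code> returns the first truthy operand, and a non-empty string is truthy on its own. A small demonstration using the question's names:</p>

```python
Donald = "Donald Trump"
donald = "donald trump"
name = "Alice"

# The original test parses as (name == Donald) or donald or ...
# As soon as Python reaches the bare string `donald`, the whole
# expression is truthy, regardless of what `name` is.
broken = name == Donald or donald
fixed = name == Donald or name == donald

print(bool(broken))  # True for any name
print(fixed)         # False for "Alice"
```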
| 2 | 2016-07-21T01:22:43Z | [
"python",
"if-statement"
] |
Python: Even if the if statement equals false it still executes the code inside the if statement | 38,493,326 | <p>Even if the if statement equals false it still executes the code inside the if statement.
Here is the code:</p>
<pre><code># Imports:
import time
# Code Starts Here:
print("Welcome to a test application By: David(Toxicminibro)")
time.sleep(1.25)
Donald = "Donald Trump"
donald = "donald trump"
Hillary = "Hillary Clinton"
hillary = "hillary clinton"
name = input("What is your name?: ")
if name == Donald or donald or Hillary or hillary:
print("No. Stop it.")
else:
print("Hello " + name + " !")
</code></pre>
| 1 | 2016-07-21T01:17:38Z | 38,493,382 | <pre><code>if name in (Donald, donald, Hillary, hillary):
print("No. Stop it.")
else:
print("Hello " + name + " !")
</code></pre>
| 2 | 2016-07-21T01:25:26Z | [
"python",
"if-statement"
] |
Python: Even if the if statement equals false it still executes the code inside the if statement | 38,493,326 | <p>Even if the if statement equals false it still executes the code inside the if statement.
Here is the code:</p>
<pre><code># Imports:
import time
# Code Starts Here:
print("Welcome to a test application By: David(Toxicminibro)")
time.sleep(1.25)
Donald = "Donald Trump"
donald = "donald trump"
Hillary = "Hillary Clinton"
hillary = "hillary clinton"
name = input("What is your name?: ")
if name == Donald or donald or Hillary or hillary:
print("No. Stop it.")
else:
print("Hello " + name + " !")
</code></pre>
| 1 | 2016-07-21T01:17:38Z | 38,493,390 | <p>A more Pythonic way of doing this is: <code>if name.lower() in [donald, hillary]:</code></p>
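<p>For example, with the question's lowercase variables this also catches mixed-case input:</p>

```python
donald = "donald trump"
hillary = "hillary clinton"

for name in ("DONALD Trump", "hillary clinton", "Alice"):
    # Lower-casing the input before the membership test makes the
    # check case-insensitive.
    print(name, name.lower() in [donald, hillary])
```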
| 1 | 2016-07-21T01:26:37Z | [
"python",
"if-statement"
] |
multiple conditions in conditional | 38,493,383 | <p>So i'm not exactly sure how to ask this question exactly but basically I want to see if any value between two variables is also in between two other variables. So for instance here is a sample of what code might look like of what I'm explaining</p>
<pre><code>var1 = 0
var2 = 20
var3 = 5
var4 = 15
if var3 <= [any value in range var1 to var2] <= var4:
do something
</code></pre>
<p>so that's basically it, but I'm not sure what would go in place of the brackets or if there is another way to do it. Sorry if there's an easy solution, I'm pretty tired. Thanks!</p>
 | 1 | 2016-07-21T01:25:32Z | 38,493,426 | <p>Always remember: when you want a condition for some value, let's say 'a', to lie between x and y, you can write <code>a>x and a<y</code>, which is what you want:</p>
<pre><code>if var3 >= var1 and var3 <= var2 and var3 <= var4:
do something
</code></pre>
<p>I am not 100% sure whether you want var3 to satisfy both bounds, <code>var3 >= var1 and var3 <= var2 and var3 <= var4</code>, or either one, <code>var3 >= var1 and var3 <= var2 or var3 <= var4</code>; please adjust to match your expected output.</p>
<p>Hope this helps, let me know if it doesn't work for you. This is a classic example but not the pythonic way though :)</p>
| 0 | 2016-07-21T01:30:39Z | [
"python",
"variables",
"if-statement",
"conditional"
] |
multiple conditions in conditional | 38,493,383 | <p>So i'm not exactly sure how to ask this question exactly but basically I want to see if any value between two variables is also in between two other variables. So for instance here is a sample of what code might look like of what I'm explaining</p>
<pre><code>var1 = 0
var2 = 20
var3 = 5
var4 = 15
if var3 <= [any value in range var1 to var2] <= var4:
do something
</code></pre>
<p>so that's basically it, but I'm not sure what would go in place of the brackets or if there is another way to do it. Sorry if there's an easy solution, I'm pretty tired. Thanks!</p>
| 1 | 2016-07-21T01:25:32Z | 38,493,439 | <p>You mean to do: </p>
<pre><code>for i in range(var1, var2+1):
    if var3 <= i <= var4:
        do something
</code></pre>
| 0 | 2016-07-21T01:31:36Z | [
"python",
"variables",
"if-statement",
"conditional"
] |
multiple conditions in conditional | 38,493,383 | <p>So i'm not exactly sure how to ask this question exactly but basically I want to see if any value between two variables is also in between two other variables. So for instance here is a sample of what code might look like of what I'm explaining</p>
<pre><code>var1 = 0
var2 = 20
var3 = 5
var4 = 15
if var3 <= [any value in range var1 to var2] <= var4:
do something
</code></pre>
<p>so that's basically it, but I'm not sure what would go in place of the brackets or if there is another way to do it. Sorry if there's an easy solution, I'm pretty tired. Thanks!</p>
| 1 | 2016-07-21T01:25:32Z | 38,493,489 | <p>Let's use a bit of mathematical notation. So you have two number ranges, [<em>a</em>, <em>b</em>] and [<em>x</em>, <em>y</em>], where [<em>a</em>, <em>b</em>] represents the concept of "all numbers between <em>a</em> and <em>b</em>".</p>
<p>One interpretation is you want to see if [<em>a</em>, <em>b</em>] is a <strong>subset</strong> of [<em>x</em>, <em>y</em>].</p>
<pre><code>if a >= x and b <= y:
...
</code></pre>
<p>Another is that you want to see if [<em>a</em>, <em>b</em>] <strong>intersects</strong> [<em>x</em>, <em>y</em>] in any way. That happens when either of the two endpoints <em>a</em> or <em>b</em> is contained within [<em>x</em>, <em>y</em>], or vice versa.</p>
<pre><code>if ((x <= a <= y or x <= b <= y) or
(a <= x <= b or a <= y <= b)):
...
</code></pre>
| 1 | 2016-07-21T01:37:51Z | [
"python",
"variables",
"if-statement",
"conditional"
] |
multiple conditions in conditional | 38,493,383 | <p>So i'm not exactly sure how to ask this question exactly but basically I want to see if any value between two variables is also in between two other variables. So for instance here is a sample of what code might look like of what I'm explaining</p>
<pre><code>var1 = 0
var2 = 20
var3 = 5
var4 = 15
if var3 <= [any value in range var1 to var2] <= var4:
do something
</code></pre>
<p>so that's basically it, but I'm not sure what would go in place of the brackets or if there is another way to do it. Sorry if there's an easy solution, I'm pretty tired. Thanks!</p>
| 1 | 2016-07-21T01:25:32Z | 38,493,505 | <p>Assuming that there isn't an ordering among the variables that's known ahead of time....</p>
<pre><code>min34 = min(var3, var4)
max34 = max(var3, var4)
if (min34 < var1 and max34 > var1) or (min34 < var2 and max34 > var2):
do something
</code></pre>
<p>Use "<=" and ">=" if the edge of the range counts as "in between".</p>
| 0 | 2016-07-21T01:39:47Z | [
"python",
"variables",
"if-statement",
"conditional"
] |
multiple conditions in conditional | 38,493,383 | <p>So i'm not exactly sure how to ask this question exactly but basically I want to see if any value between two variables is also in between two other variables. So for instance here is a sample of what code might look like of what I'm explaining</p>
<pre><code>var1 = 0
var2 = 20
var3 = 5
var4 = 15
if var3 <= [any value in range var1 to var2] <= var4:
do something
</code></pre>
<p>so that's basically it, but I'm not sure what would go in place of the brackets or if there is another way to do it. Sorry if there's an easy solution, I'm pretty tired. Thanks!</p>
| 1 | 2016-07-21T01:25:32Z | 38,811,854 | <p>I'm surprised at the complexity of other answers. This should simply be:</p>
<pre><code>def intersect(a, A, x, X):
'''Return True if any number in range(a, A+1) is in range(x, X+1).'''
return a <= X and x <= A
</code></pre>
<p>Note that intersection is symmetric, so <code>intersect(a,b,x,y) == intersect(x,y,a,b)</code> always holds.</p>
<hr>
<p>All intersection possibilities:</p>
<pre><code> a...A
x..X
a...A
x..X
a...A
x..X
a...A
x..X
a...A
x..X
a...A
x..X
a...A
x..X
a...A
x..X
</code></pre>
<p>Which corresponds to the above function.</p>
<hr>
<p>Finally, to make sure this is not different than John Kugelman's answer:</p>
<pre><code>def their(a, b, x, y):
return ((x <= a <= y or x <= b <= y) or (a <= x <= b or a <= y <= b))
def my(a, A, x, X):
return a <= X and x <= A
from itertools import product
for x in product(range(5), repeat=4):
if my(*x) != their(*x):
if x[0] <= x[1] and x[2] <= x[3]:
print('[{1}, {2}] and [{3}, {4}] intersect according to {0}.'
.format('me' if my(*x) else 'them', *x))
else:
print('{} say it intersects, when input is incorrect.'
.format('I' if my(*x) else 'They'))
</code></pre>
<p>Running this as <code>python intersect.py | uniq -c</code> outputs:</p>
<pre class="lang-none prettyprint-override"><code>140 They say it intersects, when input is incorrect.
</code></pre>
| 2 | 2016-08-07T07:15:45Z | [
"python",
"variables",
"if-statement",
"conditional"
] |