How to add values to keys in a dictionary for keys found in a file? Python
38,588,876
<p>So I have a csv file, from which I have to find the average price for all products grouped by category. I managed to put all lines from the file into a list. Now I'm trying this:</p> <pre><code>FILE_NAME = 'catalog_sample.csv' full_catalog = [] with open(FILE_NAME, encoding='utf-8') as file: for line in file: one_record = line.split(',') full_catalog.append(one_record) category_dict = {} prices = [] for i in full_catalog: if str(i[-2]) not in category_dict: category_name = str(i[-2]) category_dict[category_name] = float(i[-1]) else: prices.append(float(i[-1])) </code></pre> <p>So far I'm getting a dictionary with all the categories from the file as keys, but the value is the price from the first occurrence of the key in the file:</p> <pre><code>'Men': 163.99 'Women': 543.99 </code></pre> <p>It seems that "else" is not working as I'm expecting (adding values to the keys). Any suggestions? Thanks! </p>
-1
2016-07-26T11:37:54Z
38,593,832
<p>I suggest creating your dictionary while going through the file, instead of adding the lines to a list and going back through it to construct the dictionary.</p> <pre><code>category_dict = {} full_catalog = [] with open(FILE_NAME, encoding='utf-8') as file: for line in file: item = line.split(',') # Unpack the last 2 items from list category = item[-2].strip() price = float(item[-1]) # Try to get the list of prices for the category # If there is no key matching category in dict # Then return an empty list prices = category_dict.get(category, []) # Append the price to the list prices.append(price) # Set the list as the value for the category # If there was no key then a key is created # The value is the list with the new price category_dict[category] = prices full_catalog.append(item) </code></pre> <p>Edit: Fixed to match the provided line format. <code>full_catalog</code> has been included in case you still require the whole list.</p>
0
2016-07-26T15:18:19Z
[ "python", "list", "file", "python-3.x", "dictionary" ]
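Following up on the answer above: the question actually asks for the average price per category, which falls out in a couple of lines once prices are grouped. A minimal sketch with made-up rows standing in for the CSV (`collections.defaultdict` removes the need for the get-with-default step):

```python
from collections import defaultdict

# Hypothetical records shaped like the question's CSV rows:
# ..., category (second-to-last field), price (last field)
rows = [
    ["shirt", "Men", "163.99"],
    ["dress", "Women", "543.99"],
    ["socks", "Men", "36.01"],
]

prices = defaultdict(list)  # category -> list of prices
for row in rows:
    prices[row[-2]].append(float(row[-1]))

# Average per category
averages = {cat: sum(p) / len(p) for cat, p in prices.items()}
print(averages)
```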
Can't save ManyToManyField
38,588,878
<p>So the problem is that I can't save ManyToManyField from form.</p> <p>forms.py</p> <pre><code>class addGoods(forms.Form): ... permission = form['permission'], ... </code></pre> <p>models.py</p> <pre><code>class Add_good(models.Model): ... permission = models.ManyToManyField(Permission, related_name="+") ... </code></pre> <p>views.py</p> <pre><code>if request.method == "POST": form = addGoods(request.POST) if form.is_valid(): form = form.cleaned_data newGood = Add_good(permission = form['permission']) </code></pre> <p>I tried to do something like this, but there are also errors</p> <p>views.py</p> <pre><code>if request.method == "POST": form = addGoods(request.POST) if form.is_valid(): form = form.cleaned_data newGood = Add_good(permission = form['permission']) to_do_list = newGood.save(commit=False) for permis in form['permission']: to_do_list.permission.add(permis) to_do_list.save() newGood.save_m2m() </code></pre> <p>Traceback:</p> <pre><code>Exception Type: TypeError at /goods/add Exception Value: 'permission' is an invalid keyword argument for this function </code></pre>
-1
2016-07-26T11:37:58Z
38,589,172
<p>The problem is here: <code>newGood = Add_good(permission = form['permission'])</code></p> <p>Permission is a ManyToMany field, so your table doesn't directly have a <code>permission</code> field, so it can't take that argument. You can create the model entry and then add the permission:</p> <pre><code>newGood = Add_good.objects.create(...) newGood.permission.add(Permission.objects.get(...)) </code></pre> <p>Also, your form won't make model instances from the permissions, since you just did <code>permission = form['permission']</code>, so you will need to make a manual query in your view instead of (or inside) the for loop.</p>
2
2016-07-26T11:53:08Z
[ "python", "django", "forms" ]
Python netcdf - convert specified values to NaN
38,588,902
<p>I'm plotting pcolourmesh of wind data from satellites and from a weather model. The values are all stored in a netcdf file. Below I try to replace values equal to 70 or 0 with NaN, this doesn't give an error but it doesn't create NaNs either, <code>nozeros</code> is the same size as the original dataset. I have looked at the data and it does have values ==70 and 0.</p> <pre><code> import netCDF4 as nc import numpy as np import matplotlib.pyplot as plt import csv as cs import pandas as pd ncfile = nc.Dataset('C:\Users\mmso2\Google Drive\ENVI_I-PAC_2007_10_21_21_22_47.nc') SARwind = ncfile.variables['sar_wind'] ModelWind = ncfile.variables['model_speed'] LON = ncfile.variables['longitude'] LAT = ncfile.variables['latitude'] LandMask = ncfile.variables['mask'] #clean the data of values = 70 SARwind_nan = SARwind for i in SARwind_nan: if i.any() == 70: i = np.nan elif i.any()==0: i = np.nan nozeros=np.count_nonzero(~np.isnan(SARwind_nan)) </code></pre> <p>Also, I want to convert areas where LandMask >=0 into NaN, is there a better way to do this?</p> <p>Thanks</p>
0
2016-07-26T11:39:18Z
38,593,207
<p>There are several issues in your code, setting aside the indentation syntax errors.</p> <p>The code below will do nothing. What is <code>i</code> ? The result is not saved.</p> <pre><code>for i in SARwind_nan: if i.any() == 70: i = np.nan ... </code></pre> <p>Here's an example which should do what you want.</p> <pre><code>SARwind = np.array([ [1,2,0,-4,-5], [6,0,70,-9,-15], [10,11,-12,70,-14], [0,17,70,-19,-20], ], dtype=np.float32) SARwind_nan = SARwind.copy() SARwind_nan[SARwind_nan == 0.0] = np.nan SARwind_nan[SARwind_nan == 70.0] = np.nan print SARwind_nan nozeros=np.count_nonzero(~np.isnan(SARwind_nan)) print nozeros </code></pre>
1
2016-07-26T14:51:00Z
[ "python", "python-2.7", "netcdf" ]
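The replace-with-NaN step from the answer can also be written as a single boolean mask. `np.isin` is used below (available in modern NumPy; on older versions `(arr == 0) | (arr == 70)` does the same), and the identical masking idea covers the `LandMask` follow-up, e.g. `cleaned[landmask >= 0] = np.nan`. A small, self-contained check:

```python
import numpy as np

arr = np.array([[1.0, 70.0, 3.0],
                [0.0, 5.0, 70.0]])

cleaned = arr.copy()
# Mask both sentinel values (0 and 70) in one step
cleaned[np.isin(cleaned, [0.0, 70.0])] = np.nan

valid = np.count_nonzero(~np.isnan(cleaned))
print(valid)  # 3 valid values remain (1, 3 and 5)
```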
Python, Better way of coding. Using Loop Array?
38,588,946
<p>I'm just starting to learn python and was wondering if there was a better way to code the following.</p> <pre><code>user1 = "username" userkey1 = "userkey" user2 = "username" userkey2 = "userkey" user3 = "username" userkey3 = "userkey" user4 = "username" userkey4 = "userkey" user5 = "username" userkey5 = "userkey" user6 = "username" userkey6 = "userkey" user7 = "username" userkey7 = "userkey" level = ["variable2", "variable2"] connect = ipaddress for article in connect.link("var1", "var2"): if article["variable"] == '' and article["creator"] in level: try: action = dr.create( **{"person": user1,} ) output = output.sign([userkey1]) output = tx.JsonObj(tx) # Broadcast to network broadcast if article["variable"] == '' and article["creator"] in level: try: action = dr.create( **{"person": user2,} ) output = output.sign([userkey2]) output = tx.JsonObj(tx) # Broadcast to network broadcast if article["variable"] == '' and article["creator"] in level: try: action = dr.create( **{"person": user3,} ) output = output.sign([userkey3]) output = tx.JsonObj(tx) # Broadcast to network broadcast </code></pre> <p>I am not yet familiar with while loops, loops or arrays, but it seems to me the code above can be a bit more efficient?</p> <p>The code is supposed to grab input and just cycle continuously; when it finds what it's looking for, it's supposed to create an action for each user and their respective key.</p> <p>Any help would be appreciated.</p>
0
2016-07-26T11:41:07Z
38,591,189
<p>If your code is working the way you want it, good. But the long list of variable definitions is too much. Look up (google) Python lists and dictionaries. Check out this post: <a href="http://stackoverflow.com/questions/9495950/how-to-implement-associative-array-not-dictionary-in-python">How to implement associative array in Python?</a></p>
0
2016-07-26T13:27:21Z
[ "python" ]
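To make the answer's pointer concrete: pairing each username with its key in one list collapses the repeated blocks into a single loop. A hedged sketch (the names and the `create_action` body are placeholders for the question's `dr.create` / sign / broadcast sequence):

```python
# Placeholder credentials; in real code these would be the actual usernames/keys
credentials = [
    ("username1", "userkey1"),
    ("username2", "userkey2"),
    ("username3", "userkey3"),
]

def create_action(person, key):
    # Stand-in for: action = dr.create(person=...); output.sign([key]); broadcast
    return {"person": person, "signed_with": key}

# One action per (user, key) pair instead of one hand-written block each
actions = [create_action(user, key) for user, key in credentials]
print(len(actions))
```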
How to count sentences taking into account the occurrence of ellipses
38,588,985
<p>I've written the following script to count the number of sentences in a text file:</p> <pre><code>import re filepath = 'sample_text_with_ellipsis.txt' with open(filepath, 'r') as f: read_data = f.read() sentences = re.split(r'[.{1}!?]+', read_data.replace('\n','')) sentences = sentences[:-1] sentence_count = len(sentences) </code></pre> <p>However, if I run it on a <code>sample_text_with_ellipsis.txt</code> with the following content:</p> <pre><code>Wait for it... awesome! </code></pre> <p>I get <code>sentence_count = 2</code> instead of <code>1</code>, because it does not ignore the ellipsis (i.e., the "...").</p> <p>What I tried to do in the regex is to make it match only one occurrence of a period through <code>.{1}</code>, but this apparently doesn't work the way I intended it. How can I get the regex to ignore ellipses?</p>
1
2016-07-26T11:43:16Z
38,589,115
<p>Splitting sentences with a regex like this is not enough. See <a href="http://stackoverflow.com/questions/4576077/python-split-text-on-sentences"><em>Python split text on sentences</em></a> to see how NLTK can be leveraged for this.</p> <p>As for your question: a three-dot sequence is called an ellipsis. Thus, you need to use</p> <pre><code>[!?]+|(?&lt;!\.)\.(?!\.) </code></pre> <p>See the <a href="https://regex101.com/r/iW1eV2/2" rel="nofollow">regex demo</a>. The <code>.</code> is moved out of the character class since <strong>you can't use quantifiers inside one</strong>, and only a <code>.</code> that is not surrounded by other dots is matched.</p> <ul> <li><code>[!?]+</code> - 1 or more <code>!</code> or <code>?</code></li> <li><code>|</code> - or</li> <li><code>(?&lt;!\.)\.(?!\.)</code> - a dot that is neither preceded (<code>(?&lt;!\.)</code>) nor followed (<code>(?!\.)</code>) by a dot.</li> </ul> <p>See the <a href="https://ideone.com/k0sqWV" rel="nofollow">Python demo</a>:</p> <pre><code>import re sentences = re.split(r'[!?]+|(?&lt;!\.)\.(?!\.)', "Wait for it... awesome!".replace('\n','')) sentences = sentences[:-1] sentence_count = len(sentences) print(sentence_count) # =&gt; 1 </code></pre>
4
2016-07-26T11:49:39Z
[ "python", "regex" ]
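The regex from the answer also counts correctly when several sentence types are mixed; a quick check of the split-and-count approach (the sample text below is made up for the demo):

```python
import re

text = "Wait for it... awesome! And then? Done."
# Same idea as the answer: runs of !/? , or a dot not adjacent to another dot
pattern = r'[!?]+|(?<!\.)\.(?!\.)'

# Drop empty/whitespace-only pieces left at the end of the split
parts = [s for s in re.split(pattern, text) if s.strip()]
print(len(parts))  # 3 sentences
```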
How to count sentences taking into account the occurrence of ellipses
38,588,985
<p>I've written the following script to count the number of sentences in a text file:</p> <pre><code>import re filepath = 'sample_text_with_ellipsis.txt' with open(filepath, 'r') as f: read_data = f.read() sentences = re.split(r'[.{1}!?]+', read_data.replace('\n','')) sentences = sentences[:-1] sentence_count = len(sentences) </code></pre> <p>However, if I run it on a <code>sample_text_with_ellipsis.txt</code> with the following content:</p> <pre><code>Wait for it... awesome! </code></pre> <p>I get <code>sentence_count = 2</code> instead of <code>1</code>, because it does not ignore the ellipsis (i.e., the "...").</p> <p>What I tried to do in the regex is to make it match only one occurrence of a period through <code>.{1}</code>, but this apparently doesn't work the way I intended it. How can I get the regex to ignore ellipses?</p>
1
2016-07-26T11:43:16Z
38,593,479
<p>Following Wiktor's suggestion to use NLTK, I also came up with the following alternative solution:</p> <pre><code>import nltk read_data="Wait for it... awesome!" sentence_count = len(nltk.tokenize.sent_tokenize(read_data)) </code></pre> <p>This yields a sentence count of 1 as expected.</p>
0
2016-07-26T15:03:01Z
[ "python", "regex" ]
how to slice a dataframe from the end to the beginning?
38,588,991
<p>I am slicing a dataframe like this:</p> <pre><code>for i in xrange(0,len(df),100000): print("slice %d!!!!" % i) slice= df.iloc[i:(i+100000)] </code></pre> <p>I would like now to get <code>100k</code> slices <strong>from the end to the beginning</strong> of the df (without necessarily sorting my dataframe). </p> <p>How can I do that?</p> <p>Thanks!</p>
0
2016-07-26T11:43:24Z
38,589,061
<p>Simply use <code>tail</code>:</p> <pre><code>df.tail(100000) </code></pre>
2
2016-07-26T11:47:21Z
[ "python", "pandas", "indexing", "dataframe", "slice" ]
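If the goal is to keep the chunked loop but walk the frame back to front (rather than just grab the final chunk), stepping the end index backwards works; a small sketch with a toy frame and a chunk size standing in for the question's 100k:

```python
import pandas as pd

df = pd.DataFrame({"x": range(10)})
chunk = 4  # stands in for the question's 100000

slices = []
for end in range(len(df), 0, -chunk):  # end takes 10, 6, 2
    slices.append(df.iloc[max(0, end - chunk):end])

print([len(s) for s in slices])  # [4, 4, 2]
```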
how to slice a dataframe from the end to the beginning?
38,588,991
<p>I am slicing a dataframe like this:</p> <pre><code>for i in xrange(0,len(df),100000): print("slice %d!!!!" % i) slice= df.iloc[i:(i+100000)] </code></pre> <p>I would like now to get <code>100k</code> slices <strong>from the end to the beginning</strong> of the df (without necessarily sorting my dataframe). </p> <p>How can I do that?</p> <p>Thanks!</p>
0
2016-07-26T11:43:24Z
38,590,484
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a>:</p> <pre><code>df.iloc[-100000:] </code></pre>
1
2016-07-26T12:54:41Z
[ "python", "pandas", "indexing", "dataframe", "slice" ]
how to find the black region in near the edge
38,589,119
<p>I have an image like below. It has black borders/regions on the top and right side of the image. I want to be able to find these regions like shown in the 2nd image. Note these regions should always be straight (i.e. rectangle shaped). I want to be able to do this using image processing with code, not with Photoshop (such as matlab, c# or opencv). </p> <p><a href="http://i.stack.imgur.com/0k4Kh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0k4Kh.jpg" alt=" Input image "></a></p> <p><a href="http://i.stack.imgur.com/5BJkH.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/5BJkH.jpg" alt=" output image"></a></p> <p>I am very new to image processing. I have tried to find all the points that have (0,0,0) rgb values. But because there are so many of these black values in the noise part (and in other places in the image), my result also contains these unwanted regions....</p> <p>---------- Edit --------------- Thanks for all the comments/answers. However, I have lots of these images. Some of them are rotated, which is a bit more difficult to deal with. I have just uploaded one as shown below.</p> <p><a href="http://i.stack.imgur.com/N6vK9.png" rel="nofollow"><img src="http://i.stack.imgur.com/N6vK9.png" alt="enter image description here"></a></p>
-5
2016-07-26T11:50:03Z
38,589,682
<p>Using Python 2.7 + OpenCV 3. The idea is to keep only non-zero rows and columns. The code follows.</p> <pre><code>import cv2 import numpy as np #Read in the image arr = np.array(cv2.imread('image.jpg')) #Convert to grayscale gray = np.sum(arr, axis=2) print gray.shape #(496, 1536) filter_row = np.sum(gray,axis=1)!=0 # Assuming the first few values are all False, find index of first True, and set all values True after that filter_row[list(filter_row).index(True):,] = True # Keep only non-zero rows horiz = gray[filter_row,:] filter_column = np.sum(gray,axis=0)!=0 # Assuming the trailing values are all False, find index of first False, and set all values True before that filter_column[:list(filter_column).index(False),] = True # Keep only non-zero columns vert = horiz[:,filter_column] print vert.shape #(472, 1528) bordered = cv2.rectangle(cv2.imread('image.jpg'), (0, gray.shape[0]-vert.shape[0]), (vert.shape[1],gray.shape[0] ), (255,0,0), 2) cv2.imwrite('result.jpg', bordered) # filename first, then the image </code></pre>
1
2016-07-26T12:16:05Z
[ "python", "matlab", "opencv", "image-processing" ]
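The row/column filtering in the answer can be written more compactly with `np.any`; note that this variant keeps every row/column that contains any signal, rather than assuming the zero region is a single leading/trailing block. A self-contained toy check:

```python
import numpy as np

# Toy grayscale image: all-zero top row and all-zero right column
img = np.array([[0, 0, 0, 0],
                [5, 7, 0, 0],
                [3, 9, 2, 0]])

rows = np.any(img != 0, axis=1)  # rows with any non-zero pixel
cols = np.any(img != 0, axis=0)  # columns with any non-zero pixel
trimmed = img[rows][:, cols]     # drop the all-zero borders

print(trimmed.shape)  # (2, 3)
```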
how to find the black region in near the edge
38,589,119
<p>I have an image like below. It has black borders/regions on the top and right side of the image. I want to be able to find these regions like shown in the 2nd image. Note these regions should always be straight (i.e. rectangle shaped). I want to be able to do this using image processing with code, not with Photoshop (such as matlab, c# or opencv). </p> <p><a href="http://i.stack.imgur.com/0k4Kh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0k4Kh.jpg" alt=" Input image "></a></p> <p><a href="http://i.stack.imgur.com/5BJkH.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/5BJkH.jpg" alt=" output image"></a></p> <p>I am very new to image processing. I have tried to find all the points that have (0,0,0) rgb values. But because there are so many of these black values in the noise part (and in other places in the image), my result also contains these unwanted regions....</p> <p>---------- Edit --------------- Thanks for all the comments/answers. However, I have lots of these images. Some of them are rotated, which is a bit more difficult to deal with. I have just uploaded one as shown below.</p> <p><a href="http://i.stack.imgur.com/N6vK9.png" rel="nofollow"><img src="http://i.stack.imgur.com/N6vK9.png" alt="enter image description here"></a></p>
-5
2016-07-26T11:50:03Z
38,590,485
<pre><code>color_img = imread('0k4Kh.jpg'); img = rgb2gray(color_img); [x, y] = size(img); for i = 1:x if length(find(img(i, :))) ~= 0 lastmarginalrow = i-1; break; end end for ii = y:-1:1 if length(find(img(:, ii))) ~= 0 lastmarginalcol = ii-1; break; end end figure; fig = imshow(color_img); h = impoly(gca, [0,x; lastmarginalcol,x; lastmarginalcol,lastmarginalrow; 0,lastmarginalrow]); api = iptgetapi(h); api.setColor('red'); saveas(fig, 'test.jpg'); close all; </code></pre> <p>Here is an implementation in MATLAB: find the all-zero columns and rows and draw a border using them.</p> <p><strong>For rotated images (should work for non-rotated ones also)</strong></p> <pre><code>color_img = imread('N6vK9.png'); img = rgb2gray(color_img); [x, y] = size(img); verts = []; % traversing through all columns for i = 1:y % find all non-zero pixels in each column nonzeros = find(img(:,i)); % if all pixels in a column are black, the if condition below skips it if length(nonzeros) ~= 0 % if there is at least one non-zero pixel, note that co-ordinate/position in the matrix by appending verts = [i, nonzeros(1); verts]; end end figure; fig = imshow(color_img); % polygon based on the first and last vertex/co-ordinate of the found non-zero co-ordinates % Assumed it is a slanted straight line, so the last and first co-ordinates are used. If it is a curvy border, we anyway have all vertices/co-ordinates of the first non-zero pixel in all columns. h = impoly(gca, [verts(1,:); verts(length(verts), :); 1,x; verts(1),x]); api = iptgetapi(h); api.setColor('red'); saveas(fig, 'test.jpg'); close all; </code></pre> <p><a href="http://i.stack.imgur.com/wa814.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/wa814.jpg" alt="enter image description here"></a></p>
1
2016-07-26T12:54:41Z
[ "python", "matlab", "opencv", "image-processing" ]
Pythonic way of zeroing numpy array axis
38,589,133
<p>Bit annoyed I haven't been able to get this myself, so here goes. </p> <p>Say I have a 2 dimensional numpy array</p> <pre><code>import numpy as np a = np.array([[1,2,3],[4,5,6],[7,8,9]]) </code></pre> <p>I am in a situation where I want to zero a column in this array, but maintain the original array. I can do this by</p> <pre><code>b = a.copy() b[:,1] = 0 </code></pre> <p>to get</p> <pre><code>array([[1, 0, 3], [4, 0, 6], [7, 0, 9]]) </code></pre> <p>or if I want to zero a series of columns, I can do</p> <pre><code>b = a.copy() b[:,[0,2]] = 0 </code></pre> <p>to get</p> <pre><code>array([[0, 2, 0], [0, 5, 0], [0, 8, 0]]) </code></pre> <p>The b array will only be used once and then discarded. </p> <p>Is there any more pythonic way of doing this that can be done as a one liner? I only want the zero valued b array to pass to a plotting routine, after which it is not needed. Essentially, I don't want to have an extra two lines before calling my plotting function - if I can do it as I call my routine it would be much cleaner. For example</p> <pre><code>plotting_func(&lt;numpy_magic_here&gt;) </code></pre> <p>instead of</p> <pre><code>b = a.copy() b[:,1] = 0 plotting_func(b) </code></pre> <p>only for b to never be used again</p>
1
2016-07-26T11:50:57Z
38,589,360
<p>To set certain columns to zero, you can use <code>np.in1d</code> along with <code>np.arange</code> to create a mask of invalid elements, which when multiplied with the input array would set the invalid columns to zero using <code>NumPy broadcasting</code>. Thus, we would have a one-liner implementation, like so -</p> <pre><code>a*~np.in1d(np.arange(a.shape[1]),cols_to_be_reset) </code></pre> <p>Alternatively, one can use <code>np.where</code> to choose instead of multiplying, like so -</p> <pre><code>np.where(np.in1d(np.arange(a.shape[1]),cols_to_be_reset),0,a) </code></pre> <p>Please note that this is not meant for performance, but just as a one-liner.</p> <p>Sample run -</p> <pre><code>In [546]: a Out[546]: array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) In [547]: a*~np.in1d(np.arange(a.shape[1]),1) Out[547]: array([[1, 0, 3], [4, 0, 6], [7, 0, 9]]) In [548]: a*~np.in1d(np.arange(a.shape[1]),[0,2]) Out[548]: array([[0, 2, 0], [0, 5, 0], [0, 8, 0]]) </code></pre>
2
2016-07-26T12:02:00Z
[ "python", "numpy" ]
PHP exec command runs python script but not python3
38,589,183
<p>I currently have python 2.7 installed as part of OSX, and recently installed 3.5.2. </p> <p>I'm running a local webserver on my mac using XAMPP, and when I execute the python script from within apache, it loads fine:</p> <pre><code>$executePython = "python " . __DIR__ . "/cycle/cutoff.py $device_id $processPreviousMinutes"; exec("$executePython"); </code></pre> <p>However, when I replace <code>python</code> with <code>python3</code> my script refuses to run. I can invoke it manually from the command line using both versions, however it seems like the apache account/daemon doesn't have access to python3. Would this be something to do with a configuration file that I've overlooked?</p>
2
2016-07-26T11:53:36Z
38,610,901
<p>We don't want to mess with the system-wide path on the latest OSX. What if you add the python3 path in your script like this, and then do your normal stuff:</p> <pre><code>putenv("PATH=/usr/local/bin/:" . exec('echo $PATH')); $executePython = "python3 " . __DIR__ . "/cycle/cutoff.py $device_id $processPreviousMinutes"; exec("$executePython"); </code></pre> <p><code>putenv</code> just prepends the python3 path to whatever the current path is in your XAMPP Apache.</p>
2
2016-07-27T10:51:37Z
[ "php", "python", "apache", "python-3.x", "xampp" ]
Scrapy: Images Pipeline, download images
38,589,188
<p>Following: <a href="http://doc.scrapy.org/en/latest/topics/media-pipeline.html" rel="nofollow">scrapy's</a> tutorial i made a simple image crawler (scrapes images of Bugattis). Which is illustrated below in <strong>EXAMPLE</strong>.</p> <p>However, following the guide has left me with a non functioning crawler! It finds all of the urls but it does not download the images.</p> <p>I found a duck tape solution: replace <code>ITEM_PIPELINES</code> and <code>IMAGES_STORE</code> such that;</p> <p><code>ITEM_PIPELINES['scrapy.pipeline.images.FilesPipeline'] = 1</code> and </p> <p><code>IMAGES_STORE</code> -> <code>FILES_STORE</code></p> <p>But I do not know why this works? I would like to use the ImagePipeline as documented by scrapy.</p> <p><strong>EXAMPLE</strong></p> <p><strong>settings.py</strong></p> <pre><code>BOT_NAME = 'imagespider' SPIDER_MODULES = ['imagespider.spiders'] NEWSPIDER_MODULE = 'imagespider.spiders' ITEM_PIPELINES = { 'scrapy.pipelines.images.ImagesPipeline': 1, } IMAGES_STORE = "/home/user/Desktop/imagespider/output" </code></pre> <p><strong>items.py</strong></p> <pre><code>import scrapy class ImageItem(scrapy.Item): file_urls = scrapy.Field() files = scrapy.Field() </code></pre> <p><strong>imagespider.py</strong></p> <pre><code>from imagespider.items import ImageItem import scrapy class ImageSpider(scrapy.Spider): name = "imagespider" start_urls = ( "https://www.find.com/search=bugatti+veyron", ) def parse(self, response): for elem in response.xpath("//img"): img_url = elem.xpath("@src").extract_first() yield ImageItem(file_urls=[img_url]) </code></pre>
0
2016-07-26T11:53:48Z
38,590,546
<p>The item your spider returns must contain the field <code>"file_urls"</code> for files and/or <code>"image_urls"</code> for images. In your code you specify settings for the Images pipeline, but you return the urls in <code>"file_urls"</code>.</p> <p>Simply change this line: </p> <pre><code>yield ImageItem(file_urls=[img_url]) # to yield {'image_urls': [img_url]} </code></pre> <p>* scrapy can return dictionary objects instead of items, which saves time when you only have one or two fields.</p>
1
2016-07-26T12:58:03Z
[ "python", "scrapy", "scrapy-spider", "scraper" ]
Recursively searching an object and editing a string when found
38,589,221
<p>I am new to python and having issues dealing with immutable strings. My problem is as follows:</p> <p>I have a tree where each node is a dictionary and each node has variable number of children. I have multiple operations I wish to perform on this tree and therefore recursively traverse it.</p> <p>The setup is that I have class that traverses the tree with the following function like so:</p> <pre><code>def __recursiveDive(self, node, enterFunc, leaveFunc, parentNode): if self.__break: return if not self.__skipNode: enterFunc(node, parentNode, self) if isinstance(node, dict): for key, value in node.items(): self.__recursiveDive(value, enterFunc, leaveFunc, node) elif isinstance(node, list): for child in node: if isinstance(child, dict): self.__recursiveDive(child, enterFunc, leaveFunc, node) leaveFunc(node, parentNode, self) else: self.__skipNode = False </code></pre> <p>enterFunc and leaveFunc are defined externally and perform the required work on the tree/node.</p> <p>My issue is that since python strings are immutable I feel like I am unable to modify any string fields in the tree. The enterFunc, which is a function belonging to another class and is passed to the class is as follows:</p> <pre><code>def enter(self, node, parentNode, traverser): if isinstance(node, str): search = re.search(self.regexPattern, node) if search: node = node.replace(search.group(2),self.modifyString(search.group(2))) </code></pre> <p>The changes to node here are local only. Is my only solution to have the enter and leave functions return the node?</p> <p>What is the correct/pythonic way to approach this problem? </p> <h2>For those who want a TL;DR of the solution</h2> <p>Make sure your pattern returns the variable which has work done on it.</p>
0
2016-07-26T11:54:51Z
38,589,568
<p>Your <code>enterFunc</code> and <code>leaveFunc</code> should return the modified object instead of attempting to modify it in-place. Then the <code>__recursiveDive</code> function can replace the original object with the returned object.</p> <p>Usually you'd implement <code>__recursiveDive</code> in such a way that it knows <em>both</em> the key and the value of a node, but that doesn't seem to be the case in your code - it gets passed a <code>node</code> variable, but not the corresponding key. It should work kind of like this (pseudo-code, obviously):</p> <pre><code>def __recursiveDive(self, enter, leave): for key, value in self.nodes: new_value= enter(value) self.nodes[key]= new_value if isinstance(new_value, dict): new_value.__recursiveDive(enter, leave) new_value= leave(new_value) self.nodes[key]= new_value </code></pre>
1
2016-07-26T12:11:47Z
[ "python", "string", "design-patterns", "recursion", "immutability" ]
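A runnable sketch of the return-the-value pattern the answer describes: instead of mutating strings in place (impossible in Python), rebuild each container from the transformed children. `str.upper` stands in for the question's regex replacement:

```python
def transform(node, edit):
    # Return a new tree: strings are replaced, containers rebuilt, rest kept
    if isinstance(node, str):
        return edit(node)
    if isinstance(node, dict):
        return {k: transform(v, edit) for k, v in node.items()}
    if isinstance(node, list):
        return [transform(v, edit) for v in node]
    return node

tree = {"name": "foo", "children": [{"name": "bar"}, {"count": 3}]}
result = transform(tree, str.upper)
print(result)  # {'name': 'FOO', 'children': [{'name': 'BAR'}, {'count': 3}]}
```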
Adapting binary stacking example to multiclass
38,589,230
<p>I have been studying <a href="https://github.com/emanuele/kaggle_pbr/blob/master/blend.py" rel="nofollow">this example of stacking</a>. In this case, each set of K-folds produces one column of data, and this is repeated for each classifier. I.e: the matrices for blending are:</p> <pre><code>dataset_blend_train = np.zeros((X.shape[0], len(clfs))) dataset_blend_test = np.zeros((X_submission.shape[0], len(clfs))) </code></pre> <p>I need to stack predictions from a multiclass problem (probs 15 different classes per sample). This will produce an n*15 matrix for each clf.</p> <p>Should these matrices just be concatenated horizontally? Or should they be combined in some other way, before logistic regression is applied? Thanks. </p>
4
2016-07-26T11:55:19Z
38,796,945
<p>You can adapt the code to the multi-class problem in two ways:</p> <ol> <li>Concatenate horizontally the probabilities, that is you will need to create: <code>dataset_blend_train = np.zeros((X.shape[0], len(clfs)*numOfClasses))</code> <code>dataset_blend_test = np.zeros((X_submission.shape[0], len(clfs)*numOfClasses))</code></li> <li>Instead of using probabilities, use the class prediction for the base models. That way you keep the arrays the same size, but instead of <code>predict_proba</code> you just use <code>predict</code>. </li> </ol> <p>I have used both successfully, but which works better may depend on the dataset. </p>
2
2016-08-05T19:58:38Z
[ "python", "matrix", "machine-learning", "ensemble-learning", "ensembles" ]
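Option 1 above boils down to giving each base model its own block of columns; a NumPy-only sketch of that bookkeeping (uniform probabilities stand in for each model's `predict_proba` output, which would normally come from a fitted classifier):

```python
import numpy as np

n_samples, n_classes, n_clfs = 4, 3, 2

# One block of n_classes columns per base model
blend = np.zeros((n_samples, n_clfs * n_classes))
for j in range(n_clfs):
    probs = np.full((n_samples, n_classes), 1.0 / n_classes)  # fake predict_proba
    blend[:, j * n_classes:(j + 1) * n_classes] = probs       # model j's block

print(blend.shape)  # (4, 6)
```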
tensorflow : .eval() never ends
38,589,255
<p>i am loading the cifar-10 data set , the methods adds the data to tensor array , so to access the data i used .eval() with session , on a normal tf constant it return the value , but on the labels and the train set which are tf array it wont</p> <p>1- i am using docker tensorflow-jupyter</p> <p>2- it uses python 3</p> <p>3- the batch file must be added to data folder</p> <p>i am using the first batch [data_batch_1.bin]from this file</p> <p><a href="http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz" rel="nofollow">http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz</a></p> <p>As notebook:</p> <p><a href="https://drive.google.com/open?id=0B_AFMME1kY1obkk1YmJHcjV0ODA" rel="nofollow">https://drive.google.com/open?id=0B_AFMME1kY1obkk1YmJHcjV0ODA</a></p> <p>The code[As in tensorflow site but modified to read 1 patch] [check the last 7 lines for the data loading] :</p> <pre><code>from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import urllib import tensorflow as tf from six.moves import xrange # pylint: disable=redefined-builtin # Global constants describing the CIFAR-10 data set. NUM_CLASSES = 10 NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 5000 NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 1000 IMAGE_SIZE = 32 def _generate_image_and_label_batch(image, label, min_queue_examples, batch_size, shuffle): """Construct a queued batch of images and labels. Args: image: 3-D Tensor of [height, width, 3] of type.float32. label: 1-D Tensor of type.int32 min_queue_examples: int32, minimum number of samples to retain in the queue that provides of batches of examples. batch_size: Number of images per batch. shuffle: boolean indicating whether to use a shuffling queue. Returns: images: Images. 4D tensor of [batch_size, height, width, 3] size. labels: Labels. 1D tensor of [batch_size] size. """ # Create a queue that shuffles the examples, and then # read 'batch_size' images + labels from the example queue. 
num_preprocess_threads = 2 if shuffle: images, label_batch = tf.train.shuffle_batch( [image, label], batch_size=batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * batch_size, min_after_dequeue=min_queue_examples) else: images, label_batch = tf.train.batch( [image, label], batch_size=batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * batch_size) # Display the training images in the visualizer. tf.image_summary('images', images) return images, tf.reshape(label_batch, [batch_size]) def read_cifar10(filename_queue): """Reads and parses examples from CIFAR10 data files. Recommendation: if you want N-way read parallelism, call this function N times. This will give you N independent Readers reading different files &amp; positions within those files, which will give better mixing of examples. Args: filename_queue: A queue of strings with the filenames to read from. Returns: An object representing a single example, with the following fields: height: number of rows in the result (32) width: number of columns in the result (32) depth: number of color channels in the result (3) key: a scalar string Tensor describing the filename &amp; record number for this example. label: an int32 Tensor with the label in the range 0..9. uint8image: a [height, width, depth] uint8 Tensor with the image data """ class CIFAR10Record(object): pass result = CIFAR10Record() # Dimensions of the images in the CIFAR-10 dataset. # See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the # input format. label_bytes = 1 # 2 for CIFAR-100 result.height = 32 result.width = 32 result.depth = 3 image_bytes = result.height * result.width * result.depth # Every record consists of a label followed by the image, with a # fixed number of bytes for each. record_bytes = label_bytes + image_bytes # Read a record, getting filenames from the filename_queue. 
No # header or footer in the CIFAR-10 format, so we leave header_bytes # and footer_bytes at their default of 0. reader = tf.FixedLengthRecordReader(record_bytes=record_bytes) result.key, value = reader.read(filename_queue) # Convert from a string to a vector of uint8 that is record_bytes long. record_bytes = tf.decode_raw(value, tf.uint8) # The first bytes represent the label, which we convert from uint8-&gt;int32. result.label = tf.cast( tf.slice(record_bytes, [0], [label_bytes]), tf.int32) # The remaining bytes after the label represent the image, which we reshape # from [depth * height * width] to [depth, height, width]. depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]), [result.depth, result.height, result.width]) # Convert from [depth, height, width] to [height, width, depth]. result.uint8image = tf.transpose(depth_major, [1, 2, 0]) return result def inputs(eval_data, data_dir, batch_size): """Construct input for CIFAR evaluation using the Reader ops. Args: eval_data: bool, indicating if one should use the train or eval data set. data_dir: Path to the CIFAR-10 data directory. batch_size: Number of images per batch. Returns: images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size. labels: Labels. 1D tensor of [batch_size] size. """ filenames=[]; filenames.append(os.path.join(data_dir, 'data_batch_1.bin') ) num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN print(filenames) # Create a queue that produces the filenames to read. filename_queue = tf.train.string_input_producer(filenames) # Read examples from files in the filename queue. read_input = read_cifar10(filename_queue) reshaped_image = tf.cast(read_input.uint8image, tf.float32) height = IMAGE_SIZE width = IMAGE_SIZE # Image processing for evaluation. # Crop the central [height, width] of the image. resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, width, height) # Subtract off the mean and divide by the variance of the pixels. 
float_image = tf.image.per_image_whitening(resized_image) # Ensure that the random shuffling has good mixing properties. min_fraction_of_examples_in_queue = 0.4 min_queue_examples = int(num_examples_per_epoch * min_fraction_of_examples_in_queue) # Generate a batch of images and labels by building up a queue of examples. return _generate_image_and_label_batch(float_image, read_input.label, min_queue_examples, batch_size, shuffle=False) sess = tf.InteractiveSession() train_data,train_labels = inputs(False,"data",6000) print (train_data,train_labels) train_data=train_data.eval() train_labels=train_labels.eval() print(train_data) print(train_labels) sess.close() </code></pre>
0
2016-07-26T11:56:28Z
38,595,382
<p>You must call <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/train.html#start_queue_runners" rel="nofollow"><code>tf.train.start_queue_runners(sess)</code></a> before you call <code>train_data.eval()</code> or <code>train_labels.eval()</code>.</p> <p>This is a(n unfortunate) consequence of how TensorFlow input pipelines are implemented: the <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/io_ops.html#string_input_producer" rel="nofollow"><code>tf.train.string_input_producer()</code></a>, <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/io_ops.html#shuffle_batch" rel="nofollow"><code>tf.train.shuffle_batch()</code></a>, and <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/io_ops.html#batch" rel="nofollow"><code>tf.train.batch()</code></a> functions internally create queues that buffer records between different stages in the input pipeline. The <code>tf.train.start_queue_runners()</code> call tells TensorFlow to start fetching records into these buffers; without calling it the buffers remain empty and <code>eval()</code> hangs indefinitely.</p>
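<p>The hang can be reproduced with a plain-Python analogue of the queue-runner pattern (no TensorFlow involved; the background thread below plays the role that <code>tf.train.start_queue_runners()</code> plays in the input pipeline):</p>

```python
import queue
import threading

buf = queue.Queue()

def runner():
    """Background filler thread, analogous to a TF queue runner."""
    for i in range(5):
        buf.put(i)

# If the runner thread is never started, buf.get() blocks forever --
# exactly how train_data.eval() hangs when start_queue_runners(sess)
# is never called.
t = threading.Thread(target=runner)
t.start()           # the start_queue_runners() moment
first = buf.get()   # succeeds once the filler is running
t.join()
```

<p>In the original code, the fix is to call <code>tf.train.start_queue_runners(sess)</code> after creating the session and before the <code>eval()</code> calls.</p>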
1
2016-07-26T16:35:09Z
[ "python", "eval", "tensorflow", "jupyter-notebook" ]
Selenium find button element
38,589,434
<p>I want to find a button element on a website with <code>selenium</code> on <code>python 3</code> . I try some different method but all failed . I use <code>Xpath</code> to find my element but i don't know if it's the better method :</p> <p>This is the <code>HTML</code> code :</p> <pre><code>&lt;div id="review-buttons-container"&gt; &lt;div class="columns"&gt; &lt;div class="please-wait" id="review-please-wait" style="display:none;"&gt; &lt;span&gt;PROCESSING...&lt;/span&gt; &lt;/div&gt; &lt;input id="place_order" type="button" value="Complete Order" class="button end"/&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>This what i already try on python :</p> <pre><code>br.find_element_by_xpath("//input[@id='place_order']").click() </code></pre> <p>return :</p> <blockquote> <p>selenium.common.exceptions.WebDriverException: Message: unknown error: Element is not clickable at point (606, 678). Other element would receive the click :<li>...</li></p> </blockquote> <pre><code>//div[@id='review-buttons-container']/div/input </code></pre> <p>return : </p> <blockquote> <p>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@id='review-buttons-container']/div/input"}</p> </blockquote> <pre><code>br.find_element_by_xpath("//form[2]/div[8]/div/input").click() </code></pre> <blockquote> <p>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//form[2]/div[8]/div/input"}</p> </blockquote> <p>Any idea ? thanks </p>
0
2016-07-26T12:05:12Z
38,590,490
<p>You can use <code>ActionChains</code> to move to the element before clicking on it:</p> <pre><code>from selenium.webdriver.common.action_chains import ActionChains element = br.find_element_by_xpath("//input[@id='place_order']") ActionChains(br).move_to_element(element).perform() # I assume br is your webdriver element.click() </code></pre> <p>If you don't want to use <code>xpath</code>, you can use <code>find_element_by_id('place_order')</code>.</p> <p>You can find more ways to locate elements <a href="http://selenium-python.readthedocs.io/locating-elements.html" rel="nofollow">here</a>.</p>
1
2016-07-26T12:54:57Z
[ "python", "html", "python-3.x", "selenium", "selenium-webdriver" ]
Selenium find button element
38,589,434
<p>I want to find a button element on a website with <code>selenium</code> on <code>python 3</code> . I try some different method but all failed . I use <code>Xpath</code> to find my element but i don't know if it's the better method :</p> <p>This is the <code>HTML</code> code :</p> <pre><code>&lt;div id="review-buttons-container"&gt; &lt;div class="columns"&gt; &lt;div class="please-wait" id="review-please-wait" style="display:none;"&gt; &lt;span&gt;PROCESSING...&lt;/span&gt; &lt;/div&gt; &lt;input id="place_order" type="button" value="Complete Order" class="button end"/&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>This what i already try on python :</p> <pre><code>br.find_element_by_xpath("//input[@id='place_order']").click() </code></pre> <p>return :</p> <blockquote> <p>selenium.common.exceptions.WebDriverException: Message: unknown error: Element is not clickable at point (606, 678). Other element would receive the click :<li>...</li></p> </blockquote> <pre><code>//div[@id='review-buttons-container']/div/input </code></pre> <p>return : </p> <blockquote> <p>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@id='review-buttons-container']/div/input"}</p> </blockquote> <pre><code>br.find_element_by_xpath("//form[2]/div[8]/div/input").click() </code></pre> <blockquote> <p>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//form[2]/div[8]/div/input"}</p> </blockquote> <p>Any idea ? thanks </p>
0
2016-07-26T12:05:12Z
38,632,277
<p>You can try scrolling to the button before clicking, using its location and JavaScript:</p> <pre><code>element = driver.find_element_by_id("place_order") element_position = element.location["y"] driver.execute_script("window.scroll(0, {})".format(element_position)) time.sleep(1) #may not be required element.click() </code></pre>
0
2016-07-28T09:26:21Z
[ "python", "html", "python-3.x", "selenium", "selenium-webdriver" ]
Pip Install - "Invalid requirement"
38,589,570
<p>I run the following command in my terminal:</p> <pre><code>pip install -r requirements-dev.txt </code></pre> <p>I get the following error:</p> <pre><code>Invalid requirement: 'nose=1.3.7' = is not a valid operator. Did you mean == ? </code></pre> <p>requirements-dev.txt looks like this:</p> <pre><code>nose=1.3.7 pyflakes=0.9.2 pep8=1.5.6 </code></pre> <p>Why am I getting this error? I'm not too familiar with the pip command.</p>
-5
2016-07-26T12:11:48Z
38,589,856
<p><code>pip</code>'s requirement specification defines no <code>=</code> operator; what you intend is <code>==</code>, referred to as <a href="https://www.python.org/dev/peps/pep-0440/#version-matching" rel="nofollow">version matching</a>.</p> <p>For later use, the version specifiers available as of pip 8.1 are:</p> <blockquote> <p>Version matching <code>==</code></p> <p>Compatible release <code>~=</code> </p> <p>Version exclusion <code>!=</code></p> <p>Exclusive ordered comparison <code>&lt;</code> , <code>&gt;</code> </p> <p>Inclusive ordered comparison <code>&lt;=</code> , <code>&gt;=</code> </p> <p>Arbitrary equality <code>===</code></p> </blockquote>
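<p>For reference, the corrected <code>requirements-dev.txt</code> would look like the following (the extra specifiers at the bottom are illustrative only, not part of the asker's file):</p>

```text
# requirements-dev.txt, corrected: '==' pins an exact version
nose==1.3.7
pyflakes==0.9.2
pep8==1.5.6

# other specifiers, shown for illustration only:
# somepkg~=1.4.2        # compatible release: >=1.4.2, ==1.4.*
# otherpkg>=1.0,<2.0    # ordered comparisons can be combined
```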
1
2016-07-26T12:24:16Z
[ "python", "pip" ]
getting last n items from queue
38,589,668
<p>Everything I see is about lists, but this is about <code>events = queue.Queue()</code>, a queue holding objects that I want to extract. How would I go about getting the last N elements from that queue?</p>
1
2016-07-26T12:15:35Z
38,589,764
<p>By definition, you can't.</p> <p>What you can do is use a loop or a comprehension to <code>get</code> the <strong>first</strong> (you can't <code>get</code> from the end of a <code>queue</code>) N elements:</p> <pre><code>N = 2 first_N_elements = [my_queue.get() for _ in range(N)] </code></pre>
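<p>A minimal sketch of that approach with the standard-library <code>queue</code> module (the event values are placeholders):</p>

```python
import queue

def get_first_n(q, n):
    """Remove and return up to the first n items of a FIFO queue."""
    return [q.get() for _ in range(min(n, q.qsize()))]

events = queue.Queue()
for item in ["a", "b", "c", "d"]:
    events.put(item)

first_two = get_first_n(events, 2)  # ["a", "b"]; "c" and "d" remain queued
```

<p>Note that <code>qsize()</code> is only reliable here because no other thread is touching the queue concurrently.</p>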
1
2016-07-26T12:20:35Z
[ "python", "queue" ]
getting last n items from queue
38,589,668
<p>Everything I see is about lists, but this is about <code>events = queue.Queue()</code>, a queue holding objects that I want to extract. How would I go about getting the last N elements from that queue?</p>
1
2016-07-26T12:15:35Z
38,590,207
<p>If you're multi-threading, "the last N elements from that queue" is undefined and the question doesn't make sense.</p> <p>If there is no multi-threading, it depends on whether you care about the other elements (not the last N).</p> <p>If you don't:</p> <pre><code>for i in range(events.qsize() - N): events.get() </code></pre> <p>After that, get the N items.</p> <p>If you don't want to throw away the other items, you'll just have to move everything to a different data structure (like a list). The whole point of a queue is to get things in a certain order.</p>
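<p>When it is acceptable to drain the whole queue (single-threaded use), a bounded <code>collections.deque</code> keeps just the last N items; a sketch:</p>

```python
import queue
from collections import deque

def drain_last_n(q, n):
    """Drain the entire queue, returning only the last n items in arrival order.

    Only safe when no other thread is putting items concurrently.
    """
    tail = deque(maxlen=n)  # older items fall off the left automatically
    while not q.empty():
        tail.append(q.get())
    return list(tail)

events = queue.Queue()
for i in range(10):
    events.put(i)

last_three = drain_last_n(events, 3)  # [7, 8, 9]
```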
0
2016-07-26T12:41:36Z
[ "python", "queue" ]
Django Developer Server can't be reached from foreign IP
38,589,723
<p>I'm trying to access the Django development server from a foreign IP, but it is not working.</p> <p>I have created a test application on the server, along with a test virtual environment.</p> <p>I start the development server using the following command:</p> <pre><code>env/bin/python manage.py runserver 0.0.0.0:8080 </code></pre> <p>The server starts without any problems, but when I go to my server's IP address (not forgetting the port at the end), I get a "<em>This site can't be reached</em>".</p> <p>Any ideas what the problem may be? I need help narrowing down toward a solution, because I don't know where to start.</p> <p>Any help would be appreciated, thanks.</p> <p>Just to add, I have edited the main urls.py to show the app in the index. </p> <pre><code>urlpatterns = [ url(r'^$', views.index, name='index'), url(r'^admin/', admin.site.urls), ] </code></pre> <p>Edit to add basic server information: Linux hosting.nonprofit.net.nz 3.13.0-44-generic #73-Ubuntu SMP Tue Dec 16 00:22:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux</p>
1
2016-07-26T12:18:18Z
38,591,265
<p>First find your machine's IP address; then the following may work for you.</p> <p>Try running the server as <code>env/bin/python manage.py runserver YourMachineIp:8080</code>. If you still get the same error, try a different port number, for example <code>env/bin/python manage.py runserver YourMachineIp:8134</code>. </p> <p>You can then access the server from another machine at <code>http://YourMachineIp:8134</code>.</p>
-1
2016-07-26T13:30:06Z
[ "python", "django" ]
Django testing, mock valid requests to urls
38,589,774
<p>I have a bunch of urls in my urls.py file that have the login_required decorator</p> <pre><code># Index Page url(r'^$', login_required(views.IndexPage.as_view()), name='index'), # Schedule urls url(r'^schedules/$', login_required(views.ScheduleListView.as_view()), name='schedule-list'), url(r'^schedule/(?P&lt;pk&gt;[\d]+)/$', login_required(views.ScheduleDetailView.as_view()), name='schedule-detail'), url(r'^schedule-freeze/(?P&lt;pk&gt;[\d]+)/$', login_required(views.freezeSchedule), name='schedule-freeze'), url(r'^schedule-create/$', login_required(views.ScheduleCreate.as_view()), name='schedule-create'), url(r'^schedule-delete/(?P&lt;pk&gt;[\d]+)$', login_required(views.ScheduleDelete.as_view()), name='schedule-delete'), url(r'^schedule-update/(?P&lt;pk&gt;[\d]+)/$', login_required(views.ScheduleUpdate.as_view()), name='schedule-update'), url(r'^schedule-generate/(?P&lt;pk&gt;[\d]+)/$', login_required(views.scheduleGenerate), name='schedule-generate'), # Client urls url(r'^clients/$', login_required(views.ClientList.as_view()), name='client-list'), url(r'^client/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientDetail.as_view()), name='client-detail'), url(r'^client-create/$', login_required(views.ClientCreate.as_view()), name='client-create'), url(r'^client-delete/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientDelete.as_view()), name='client-delete'), url(r'^client-update/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientUpdate.as_view()), name='client-update'), # And so on .... </code></pre> <p>For every restricted view I'm trying to write a test which ensures unauthorized users are redirected to the login page when trying to access the view. 
If possible I'd like to be able to achieve this in a single block of code, instead of writing a single test for every single URL.</p> <p>I've tried something like the following:</p> <pre><code>list_urls = [e for e in get_resolver(urls).reverse_dict.keys() if isinstance(e, str)] for url in list_urls: # Fetches the urlpath e.g. 'client-list' namedspaced_url = 'reports:' + url path = reverse(namedspaced_url) response = self.client.get(path) self.assertEqual(response.status_code, 302) self.assertRedirects(response, reverse('login') + '?next=' + path) </code></pre> <p><code>list_urls</code> returns a list of all the named urls inside my urls.py file i.e. <code>['schedule-create', 'server-detail', 'schedule-list', 'schedule-update', 'index', ....]</code></p> <p><strong>The Problem</strong></p> <p>this piece of code: <code>reverse(namedspaced_url)</code></p> <p>Where this causes issues is that each url has a different regular expression pattern, i.e. some take slugs some take pk's</p> <p>so the line <code>path = reverse(namedspaced_url)</code> will work for simple URLs like those which point at ListViews but will fail for more complex URLs, such as those that point at DetailViews which require slug's/pk's, i.e. 
<code>path = reverse(namedspaces_url, args=[1945])</code></p> <p>Is it possible to temporarily override / ignore Django's pattern matching / routing to force a request to go through (regardless of passed args) </p> <p>Or do I have to manually write a test for each URL with valid kwargs/args to satisfy regex?</p> <p>Is there another completely different approach I can take to write tests for all my login_required() views?</p> <p><strong>Update</strong> Using introspection I came up with the following monstrosity to solve my problem</p> <pre><code>def test_page_redirects_for_unauthorised_users(self): url_dict = get_resolver(urls).reverse_dict url_list = [e for e in get_resolver(urls).reverse_dict.keys() if isinstance(e, str)] for url in url_list: patterns = url_dict[url][0][0][1] matches = [1 if e == 'pk' else "slug" if e == 'slug' else None for e in patterns] path = reverse('reports:' + url, args=matches) response = self.client.get(path) self.assertEqual(response.status_code, 302) self.assertRedirects(response, reverse('login') + '?next=' + path) </code></pre>
1
2016-07-26T12:20:58Z
38,589,858
<p>In project_name/project_name/urls.py:</p> <pre><code>urlpatterns = [ url(r'', login_required(include('app_name.urls'))), ] </code></pre> <p>This will apply login_required to all URLs in project_name/app_name/urls.py.</p>
1
2016-07-26T12:24:20Z
[ "python", "django", "unit-testing" ]
Django testing, mock valid requests to urls
38,589,774
<p>I have a bunch of urls in my urls.py file that have the login_required decorator</p> <pre><code># Index Page url(r'^$', login_required(views.IndexPage.as_view()), name='index'), # Schedule urls url(r'^schedules/$', login_required(views.ScheduleListView.as_view()), name='schedule-list'), url(r'^schedule/(?P&lt;pk&gt;[\d]+)/$', login_required(views.ScheduleDetailView.as_view()), name='schedule-detail'), url(r'^schedule-freeze/(?P&lt;pk&gt;[\d]+)/$', login_required(views.freezeSchedule), name='schedule-freeze'), url(r'^schedule-create/$', login_required(views.ScheduleCreate.as_view()), name='schedule-create'), url(r'^schedule-delete/(?P&lt;pk&gt;[\d]+)$', login_required(views.ScheduleDelete.as_view()), name='schedule-delete'), url(r'^schedule-update/(?P&lt;pk&gt;[\d]+)/$', login_required(views.ScheduleUpdate.as_view()), name='schedule-update'), url(r'^schedule-generate/(?P&lt;pk&gt;[\d]+)/$', login_required(views.scheduleGenerate), name='schedule-generate'), # Client urls url(r'^clients/$', login_required(views.ClientList.as_view()), name='client-list'), url(r'^client/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientDetail.as_view()), name='client-detail'), url(r'^client-create/$', login_required(views.ClientCreate.as_view()), name='client-create'), url(r'^client-delete/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientDelete.as_view()), name='client-delete'), url(r'^client-update/(?P&lt;slug&gt;[\w-]+)/$', login_required(views.ClientUpdate.as_view()), name='client-update'), # And so on .... </code></pre> <p>For every restricted view I'm trying to write a test which ensures unauthorized users are redirected to the login page when trying to access the view. 
If possible I'd like to be able to achieve this in a single block of code, instead of writing a single test for every single URL.</p> <p>I've tried something like the following:</p> <pre><code>list_urls = [e for e in get_resolver(urls).reverse_dict.keys() if isinstance(e, str)] for url in list_urls: # Fetches the urlpath e.g. 'client-list' namedspaced_url = 'reports:' + url path = reverse(namedspaced_url) response = self.client.get(path) self.assertEqual(response.status_code, 302) self.assertRedirects(response, reverse('login') + '?next=' + path) </code></pre> <p><code>list_urls</code> returns a list of all the named urls inside my urls.py file i.e. <code>['schedule-create', 'server-detail', 'schedule-list', 'schedule-update', 'index', ....]</code></p> <p><strong>The Problem</strong></p> <p>this piece of code: <code>reverse(namedspaced_url)</code></p> <p>Where this causes issues is that each url has a different regular expression pattern, i.e. some take slugs some take pk's</p> <p>so the line <code>path = reverse(namedspaced_url)</code> will work for simple URLs like those which point at ListViews but will fail for more complex URLs, such as those that point at DetailViews which require slug's/pk's, i.e. 
<code>path = reverse(namedspaces_url, args=[1945])</code></p> <p>Is it possible to temporarily override / ignore Django's pattern matching / routing to force a request to go through (regardless of passed args) </p> <p>Or do I have to manually write a test for each URL with valid kwargs/args to satisfy regex?</p> <p>Is there another completely different approach I can take to write tests for all my login_required() views?</p> <p><strong>Update</strong> Using introspection I came up with the following monstrosity to solve my problem</p> <pre><code>def test_page_redirects_for_unauthorised_users(self): url_dict = get_resolver(urls).reverse_dict url_list = [e for e in get_resolver(urls).reverse_dict.keys() if isinstance(e, str)] for url in url_list: patterns = url_dict[url][0][0][1] matches = [1 if e == 'pk' else "slug" if e == 'slug' else None for e in patterns] path = reverse('reports:' + url, args=matches) response = self.client.get(path) self.assertEqual(response.status_code, 302) self.assertRedirects(response, reverse('login') + '?next=' + path) </code></pre>
1
2016-07-26T12:20:58Z
38,591,036
<p>You're trying to test something very complicated because you've decided to use <code>login_required</code> to <a href="https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/#decorating-in-urlconf" rel="nofollow">decorate the urlconf</a>. </p> <p>Why not <a href="https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/#decorating-the-class" rel="nofollow">decorate the class</a> instead? That way you can simply test each class to make sure it has the <code>login_required</code> decorator. This eliminates the need for mocking slug and pk regex values.</p>
2
2016-07-26T13:20:41Z
[ "python", "django", "unit-testing" ]
mongoengine: how to query on a non-ascii StringField in EmbeddedDocumentListField
38,589,820
<p>When I run a query like </p> <pre><code>answerSheet = answerSheet.subAnswerSheets.get( title=subquiz.title) </code></pre> <p>and my title is non-ASCII, I get an encode error in this method of the EmbeddedDocumentList class in mongoengine's datastructures.py:</p> <pre><code>@classmethod def __match_all(cls, i, kwargs): items = kwargs.items() return all([ getattr(i, k) == v or str(getattr(i, k)) == v for k, v in items ]) </code></pre> <p>When I remove the str cast, it works fine. So is it my fault, or does the source code need some change?</p>
0
2016-07-26T12:23:06Z
38,591,114
<p>This may work for you: drop the non-ASCII characters from the title before querying.</p> <pre><code>title = subquiz.title.encode('ascii', errors='ignore') </code></pre>
0
2016-07-26T13:23:58Z
[ "python", "django", "mongoengine" ]
python getattr built_in method executes default arguments
38,589,823
<p>I don't know whether this is expected behavior of the <code>getattr</code> built-in method. <code>getattr</code> evaluates the default (3rd) argument even when the actual (2nd) argument satisfies the condition. Example:</p> <pre><code> def func(): print 'In Function' class A: def __init__(self): self.param = 12 a = A() </code></pre> <p>When I run <code>getattr(a, 'param', func())</code> it gives a result, but note the <code>In Function</code>, which I don't want:</p> <pre><code>In Function 12 </code></pre> <p>It works as expected when I execute <code>getattr(a, 'param1', func())</code>, i.e. the output is</p> <pre><code>In Function </code></pre> <p>But I only want <code>12</code> as the result when the attribute exists. Please let me know why <code>getattr</code> behaves this way and whether we can stop it (that is, not evaluate the 3rd argument if the 2nd attribute exists). I would also appreciate an alternate, Pythonic way of doing it. The first thing that comes to mind is to check whether <code>param1</code> exists using <code>hasattr</code> and then act accordingly.</p>
2
2016-07-26T12:23:11Z
38,590,162
<p>Before <code>getattr</code> gets executed, all the passed parameters have to be evaluated. <code>func()</code> is one of those parameters, and evaluating it executes the <code>print</code> statement. Whether the attribute will be found or not, <code>func()</code> must be evaluated beforehand.</p> <p>This isn't peculiar to <code>getattr</code>; it's how functions and their parameters work in Python.</p> <hr> <p>Consider the following:</p> <pre><code>&gt;&gt;&gt; def does_nothing(any_arg): pass ... &gt;&gt;&gt; def f(): print("I'll get printed") ... &gt;&gt;&gt; &gt;&gt;&gt; does_nothing(f()) I'll get printed </code></pre> <p>Function <code>does_nothing</code> actually does nothing with the passed parameter. But the parameter has to be evaluated before the function call can go through.</p> <hr> <p>The <code>print</code> statement, however, will not affect the outcome of <code>getattr</code>; it is merely a <em>side effect</em>. In the event the attribute is not found, the <code>return</code> value of the function is used.</p>
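<p>The eager evaluation, and a lazy alternative using a sentinel object, can be demonstrated with a small sketch:</p>

```python
calls = []

def expensive_default():
    """Stand-in for a costly default value; records each invocation."""
    calls.append("called")
    return 0

class A(object):
    def __init__(self):
        self.param = 12

a = A()

# Eager: the argument expression runs before getattr does,
# so expensive_default() executes even though 'param' exists.
eager = getattr(a, 'param', expensive_default())   # eager == 12, one call made

# Lazy: compute the default only when the attribute is really missing.
_missing = object()
found = getattr(a, 'param', _missing)
lazy = found if found is not _missing else expensive_default()  # no extra call
```

<p>The sentinel avoids the double lookup that a <code>hasattr</code> check would do, while still never evaluating the default unnecessarily.</p>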
4
2016-07-26T12:39:35Z
[ "python", "python-2.7", "getattr" ]
Django crontab not executing test function
38,589,830
<p>I have the following:</p> <ol> <li><pre><code> python manage.py crontab show # this command gives the following Currently active jobs in crontab: 12151a7f59f3f0be816fa30b31e7cc4d -&gt; ('*/1 * * * *', 'media_api_server.cron.cronSendEmail') </code></pre></li> <li><p>My app's virtual environment (env) is active</p></li> <li><p>In my media_api_server/cron.py I have the following function:</p> <pre><code>def cronSendEmail(): print("Hello") return True </code></pre></li> <li><p>In settings.py module:</p> <pre><code>INSTALLED_APPS = ( ...... 'django_crontab', ) CRONJOBS = [ ('*/1 * * * *', 'media_api_server.cron.cronSendEmail') ] </code></pre></li> </ol> <p>To me everything is defined in place, but when I run python manage.py runserver in the virtual environment, the console doesn't print anything.</p> <pre><code> System check identified no issues (0 silenced). July 26, 2016 - 12:12:52 Django version 1.8.1, using settings 'mediaserver.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. </code></pre> <p>The 'django-crontab' module is not working. I followed its documentation here: <a href="https://pypi.python.org/pypi/django-crontab" rel="nofollow">https://pypi.python.org/pypi/django-crontab</a></p>
0
2016-07-26T12:23:35Z
38,592,589
<p>Your code actually works. You may expect <code>print("Hello")</code> to appear in stdout, but it doesn't work that way: a cron job's stdout and stderr do not go to your console. To see the actual output, point each job at a log file in the <code>CRONJOBS</code> list by passing <code>'&gt;&gt; /path/to/log/file.log'</code> as the last argument, e.g.:</p> <pre><code>CRONJOBS = [ ('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '&gt;&gt; /path/to/log/file.log') ] </code></pre> <p>It might also be helpful to redirect errors to stdout. For this you need to add <code>CRONTAB_COMMAND_SUFFIX = '2&gt;&amp;1'</code> to your <code>settings.py</code>.</p>
0
2016-07-26T14:26:21Z
[ "python", "django" ]
How to understand docs and use the module
38,589,834
<p>I've been struggling for a while to understand the documentation of any module:</p> <p>I want to use the <code>selenium</code> module.</p> <p>Looking at the documentation, I can't really implement anything.</p> <p>Given this part of the Docs API : <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.chrome.webdriver" rel="nofollow">http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.chrome.webdriver</a></p> <p>I see this : <code>class selenium.webdriver.chrome.webdriver.WebDriver(...)</code></p> <p>When I try implementing this in my Python code it says : <code>AttributeError: module 'selenium' has no attribute 'webdriver'</code></p> <p>python 3.x code: </p> <pre><code>import selenium browser = selenium.webdriver.chrome.webdriver.WebDriver(executable_path='C:/Users/chromedriver') </code></pre> <p>Can someone explain to me how to read, understand, and use a piece of documentation?</p>
0
2016-07-26T12:23:39Z
38,590,253
<p>I am not able to explain how to read 'any piece of documentation' (I don't think anyone can). However, I can point you in the right direction by giving you an idea of how <em>I</em> start with new modules.</p> <p>Normally, each module not only has a documentation part but also a 'Getting Started' part, which is <a href="http://selenium-python.readthedocs.io/getting-started.html" rel="nofollow">here</a> for <code>selenium</code>.</p> <p>Only when I have the feeling that I understand the basics of the module do I go and read the docs for fine-tuning what I really wanted to do. As you said: they are hard to read.</p> <p>====================================================</p> <p>In this specific case:</p> <pre><code>import selenium browser = selenium.webdriver.chrome.webdriver.WebDriver(executable_path='C:/Users/chromedriver') </code></pre> <p>results in</p> <blockquote> <p>AttributeError: module 'selenium' has no attribute 'webdriver'</p> </blockquote> <p>However, following the <a href="http://selenium-python.readthedocs.io/getting-started.html" rel="nofollow">'Getting started' tutorial</a>:</p> <pre><code>from selenium import webdriver driver = webdriver.Firefox() </code></pre> <p>works fine. This works because <code>webdriver</code> is not an attribute of the module <code>selenium</code> but a <a href="https://stackoverflow.com/questions/37801823/attributeerror-module-object-has-no-attribute-webdriver">module itself</a>, which means you have to import it explicitly to use it. (This is noted, rather implicitly, at the top of the documentation page: <em>Recommended Import Style</em>, showing <code>from selenium import webdriver</code>.)</p> <p>Using Google Chrome:</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome('/path/to/chromedriver') # Optional argument, if not specified will search path. 
</code></pre> <p>works also fine, and this example is given by <a href="https://sites.google.com/a/chromium.org/chromedriver/getting-started" rel="nofollow">Google itself</a>.</p>
0
2016-07-26T12:43:44Z
[ "python", "python-3.x", "selenium", "selenium-webdriver", "documentation" ]
what does " ...:" mean in Ipython console anaconda?
38,589,901
<p>I am trying to print a dataframe into a csv directly from Ipython Console, but I get this symbol and then nothing " ...:". <strong>What does the symbol mean?</strong> </p> <p><strong>Is there anyway I can force my csv to print ?</strong> </p> <p>Code:</p> <pre><code>import ET_Client import pandas as pd AggreateDF = pd.DataFrame() try: debug = False stubObj = ET_Client.ET_Client(False, debug) print '&gt;&gt;&gt;BounceEvents' getBounceEvent = ET_Client.ET_BounceEvent() getBounceEvent.auth_stub = stubObj getResponse1 = getBounceEvent.get() ResponseResultsBounces = getResponse1.results Results_Message = getResponse1.message print "This is orginial " + str(Results_Message) #print ResponseResultsBounces i = 1 while (Results_Message == 'MoreDataAvailable'): if i &gt; 5: break print Results_Message results1 = getResponse1.results i = i + 1 ClientIDBounces = [] partner_keys1 = [] created_dates1 = [] modified_date1 = [] ID1 = [] ObjectID1 = [] SendID1 = [] SubscriberKey1 = [] EventDate1 = [] EventType1 = [] TriggeredSendDefinitionObjectID1 = [] BatchID1 = [] SMTPCode = [] BounceCategory = [] SMTPReason = [] BounceType = [] for BounceEvent in ResponseResultsBounces: ClientIDBounces.append(str(BounceEvent['Client']['ID'])) partner_keys1.append(BounceEvent['PartnerKey']) created_dates1.append(BounceEvent['CreatedDate']) modified_date1.append(BounceEvent['ModifiedDate']) ID1.append(BounceEvent['ID']) ObjectID1.append(BounceEvent['ObjectID']) SendID1.append(BounceEvent['SendID']) SubscriberKey1.append(BounceEvent['SubscriberKey']) EventDate1.append(BounceEvent['EventDate']) EventType1.append(BounceEvent['EventType']) TriggeredSendDefinitionObjectID1.append(BounceEvent['TriggeredSendDefinitionObjectID']) BatchID1.append(BounceEvent['BatchID']) SMTPCode.append(BounceEvent['SMTPCode']) BounceCategory.append(BounceEvent['BounceCategory']) SMTPReason.append(BounceEvent['SMTPReason']) BounceType.append(BounceEvent['BounceType']) df1 = pd.DataFrame({'ClientID': ClientIDBounces, 
'PartnerKey': partner_keys1, 'CreatedDate' : created_dates1, 'ModifiedDate': modified_date1, 'ID':ID1, 'ObjectID': ObjectID1,'SendID':SendID1,'SubscriberKey':SubscriberKey1, 'EventDate':EventDate1,'EventType':EventType1,'TriggeredSendDefinitionObjectID':TriggeredSendDefinitionObjectID1, 'BatchID':BatchID1,'SMTPCode':SMTPCode,'BounceCategory':BounceCategory,'SMTPReason':SMTPReason,'BounceType':BounceType}) #print(df1['ID'].max()) AggreateDF = AggreateDF.append(df1) print(AggreateDF) #print df1 df_masked1 = df1[(df1.EventDate &gt; "2016-02-20") &amp; (df1.EventDate &lt; "2016-07-25")] </code></pre>
-1
2016-07-26T12:26:59Z
38,591,825
<h1>Display Sizing</h1> <p>When <code>pandas</code> is printing to the console in iPython/Jupyter, it uses <code>...</code> to show that there is data omitted between the rows displayed in the output. This is useful when the data is too large to print every single value. This is the default behavior unless you override the display options.</p> <p>From <a href="http://pandas.pydata.org/pandas-docs/stable/options.html#frequently-used-options" rel="nofollow">Frequently Used Options</a></p> <pre><code> df = pd.DataFrame(np.random.randn(7,2)) pd.set_option('max_rows', 7) df </code></pre> <hr> <pre><code> 0 1 0 0.469112 -0.282863 1 -1.509059 -1.135632 2 1.212112 -0.173215 3 0.119209 -1.044236 4 -0.861849 -2.104569 5 -0.494929 1.071804 6 0.721555 -0.706771 </code></pre> <hr> <pre><code>pd.set_option('max_rows', 5) df </code></pre> <hr> <pre><code> 0 1 0 0.469112 -0.282863 1 -1.509059 -1.135632 .. ... ... 5 -0.494929 1.071804 6 0.721555 -0.706771 [7 rows x 2 columns] </code></pre>
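If the goal is to get the complete data out rather than a truncated console view, here is a small sketch of two options (assuming a reasonably recent pandas; the tiny frame is a stand-in for the question's data):

```python
import io

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['a', 'b'])

# Option 1: lift the row limit so the console repr never truncates with "..."
pd.set_option('display.max_rows', None)

# Option 2: skip the console entirely and serialize the whole frame as CSV
buf = io.StringIO()
df.to_csv(buf, index=False)
csv_text = buf.getvalue()
print(csv_text)
```

Passing a filename instead of the `StringIO` buffer (e.g. `df.to_csv('out.csv', index=False)`) writes the file directly.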
2
2016-07-26T13:52:58Z
[ "python", "pandas", "dataframe", "anaconda", "export-to-csv" ]
Python - pysqlite or sqlite3 must be installed
38,589,963
<p>I have python 2.7.12 installed on my server. I'm using PuTTY to connect to my server. When running my python script I get the following. </p> <blockquote> <p>File "home/myuser/python/lib/python2.7/site-packages/peewee.py", line 3657, in _connect raise ImproperlyConfigured('pysqlite or sqlite3 must be installed.') peewee.ImproperlyConfigured: pysqlite or sqlite3 must be installed.</p> </blockquote> <p>I thought sqlite was installed with python 2.7.12, so I'm assuming the issue is something else. Haven't managed to find any posts on here yet that have been helpful.</p> <p>Am I missing something?</p> <p>Thanks in advance</p>
1
2016-07-26T12:30:00Z
38,715,072
<p>Peewee will use the standard library <code>sqlite3</code> module or, if your Python was not compiled with SQLite support, it will look for <code>pysqlite2</code>.</p> <p>The problem is most definitely <strong>not</strong> with Peewee on this one, as Peewee requires a SQLite driver to use the <code>SqliteDatabase</code> class... If that driver does not exist, then you need to install it.</p>
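A quick way to check which driver a given Python has is a probe like the following — a sketch that mirrors the general fallback described above (the exact probing order here is an assumption for illustration, not Peewee's verbatim code):

```python
# Try the stdlib driver first, then fall back to the standalone pysqlite2.
try:
    import sqlite3 as sqlite_driver
except ImportError:
    try:
        from pysqlite2 import dbapi2 as sqlite_driver
    except ImportError:
        sqlite_driver = None

if sqlite_driver is None:
    # This is the situation that makes Peewee raise ImproperlyConfigured
    raise ImportError('pysqlite or sqlite3 must be installed.')

# A working driver can open an in-memory database
conn = sqlite_driver.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')
conn.close()
```

If this script raises on your server, the Python build there lacks SQLite support, and you need to install the SQLite development headers and rebuild Python (or install pysqlite).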
0
2016-08-02T08:10:34Z
[ "python", "sqlite" ]
My Python Scrapy cannot scrape out the "keyword" content
38,590,058
<p>I cannot scrapy the "keyword" content. >"&lt; I've tried many methods but still failed.</p> <p>I've successfully retrieved other contents, but still failed to get the "keyword" content.</p> <p>Can anyone help to fix this bug?? The keyword content is located at "#keyword_table a", or XPath "//*[@id="keyword_table"]/tbody/tr/td[2]/a"</p> <p>Picture of the keyword content:</p> <p><a href="http://i.stack.imgur.com/k7B01.png" rel="nofollow"><img src="http://i.stack.imgur.com/k7B01.png" alt="enter image description here"></a></p> <p>My code:</p> <pre><code>import scrapy from bs4 import BeautifulSoup from digitimes.items import DigitimesItem class digitimesCrawler(scrapy.Spider): name = 'digitimes' start_urls = ["http://www.digitimes.com.tw/tw/dt/n/shwnws.asp?id=435000"] def parse(self, response): soup = BeautifulSoup(response.body,'html.parser') soupXml = BeautifulSoup(response.body, "lxml") simpleList = [] item = DigitimesItem() timeSel=soup.select('.insubject .small') tmpTime = timeSel[0].text time = tmpTime[:10] item['time'] = time #處理完時間啦 print(time) titleSel = soup.select('title') title = titleSel[0].text item['title'] = title #處理完時間啦 print(title) #================== To Resolve ================== keywordOutput="" for k in soupXml.select('#keyword_table a'): for key in k: keywordOutput = keywordOutput + key + " " item['keyword'] = keywordOutput print(keywordOutput) #================== To Resolve ================== categoryOutput="" for m in soup.select('#sitemaptable tr td a'): for cate in m: if(cate!="DIGITIMES"): categoryOutput = categoryOutput + cate + " " item['cate'] = categoryOutput print(categoryOutput) simpleList.append(item) return simpleList </code></pre>
0
2016-07-26T12:34:04Z
38,590,335
<p>Is there any particular reason you are using BeautifulSoup over scrapy selectors? The response your method receives already acts as a scrapy selector, which can do both xpath and css selections.</p> <p>There seem to be 3 keywords in the table. You can select them with either xpath or css selectors:</p> <pre><code>response.css("#keyword_table a::text").extract() # or with xpath response.xpath("//*[@id='keyword_table']//a/text()").extract() # both return &gt;&gt;&gt; [u'Sony', u'\u5f71\u50cf\u611f\u6e2c\u5668', u'\u80a1\u7968\u4ea4\u6613'] </code></pre>
0
2016-07-26T12:48:12Z
[ "python", "scrapy", "web-crawler" ]
Assign command to button - Tkinter
38,590,087
<p>I want to find out which command I have to assign to the button in my Tkinter GUI in order to print the results.</p> <p>The setup is to run aaaa.py.</p> <pre><code>def question(): import xxxx return xxxx.hoeveel() if __name__ == '__main__': from bbbb import response def juist(): return response() print juist() </code></pre> <p>When running aaaa.py I get a Tkinter GUI based on script xxxx.py</p> <pre><code>from Tkinter import * import ttk def hoeveel(): return int(x=gewicht.get(), base=10) frame = Tk() gewicht = StringVar() a = Label(frame, text="Wat is uw gewicht in kilogram?:").grid(row=1, column=1, sticky='w') aa = Entry(frame, text="value", textvariable=gewicht, justify='center', width=10) aa.grid(row=1, column=2, padx=15) bereken = ttk.Button(frame, text='Bereken') bereken.grid(column=1, row=2, columnspan=2, ipadx=15, pady=25) mainloop() </code></pre> <p>The input given in the Tkinter GUI xxxx.py is sent to bbbb.py for some calculations.</p> <pre><code>from aaaa import question mass_stone = question() * 2.2 / 14 def response(): return str("Uw gewicht in kilograms is gelijk aan " + ("{0:.5}".format(mass_stone)) + " stone.") </code></pre> <p>My issue is that I only get the output "Uw gewicht in kilograms is gelijk aan "x (depending on the value input)" stone, when I close the Tkinter window.</p> <p>I want to get the results when I press the button.</p> <p>Any tips?</p>
1
2016-07-26T12:35:31Z
38,591,349
<p>I think I've found an answer, but I'm not sure it's the most elegant way to do it. If you know a different and more correct way, please let me know.</p> <p>So, you need to run aaaa.py</p> <pre><code>import xxxx def question(): return xxxx.hoeveel() </code></pre> <blockquote> <p>xxxx.py</p> </blockquote> <pre><code>from Tkinter import * import ttk import bbbb def hoeveel(): return int(x=gewicht.get(), base=10) frame = Tk() gewicht = StringVar() a = Label(frame, text="Wat is uw gewicht in kilogram?:").grid(row=1, column=1, sticky='w') aa = Entry(frame, text="value", textvariable=gewicht, justify='center', width=10) aa.grid(row=1, column=2, padx=15) bereken = ttk.Button(frame, text='Bereken', command=bbbb.berekening) bereken.grid(column=1, row=2, columnspan=2, ipadx=15, pady=25) mainloop() </code></pre> <p>and finally </p> <blockquote> <p>bbbb.py</p> </blockquote> <pre><code>from aaaa import question def berekening(): mass_stone = question() * 2.2 / 14 print mass_stone </code></pre> <p>When you run aaaa.py you get the Tkinter GUI with the question. Fill in your weight, and press "Bereken". You get the answer printed like I wanted.</p>
0
2016-07-26T13:33:18Z
[ "python", "tkinter" ]
What is the output of np.asarray(scalar)?
38,590,208
<p>For a long time, I always use <code>np.array</code>, <code>np.asarray</code> and <code>np.asanyarray</code> to convert array_like list to array. </p> <p>But when converting a scalar to numpy array, I know <code>np.atleast_1d(123)</code> gives rise to the right thing, <code>array([123])</code>.</p> <p>But I'm confused about the output of <code>np.array</code> and <code>np.asarray</code></p> <pre><code>i = 123 x = np.array(i, dtype=np.int) print x # array(123) print x.shape # () print x.size # 0 </code></pre> <p>Since <code>x.shape</code> indicates <code>x</code> is empty, what is <code>array(123)</code>? It's a 0-dimension array still contains <code>123</code> in its <code>__str__</code>.</p> <p>A real empty array of <code>size=0</code> should be <code>array([])</code>,</p> <pre><code>print np.array([]).nbytes # 0 print np.array(123).nbytes # 8 print type(np.array(123)) # numpy.ndarray </code></pre> <p>Apparently they are different, though the size of them is both <code>0</code>.</p>
-1
2016-07-26T12:41:37Z
38,596,297
<p>I see this <code>0d</code> case as a natural continuation of <code>nd</code>. MATLAB makes 2d the lower bound. <code>numpy</code> could have used <code>1d</code>, but instead chose <code>0d</code>. </p> <p>An array consists of a data buffer, where the value bytes are stored, a dtype (how to interpret those bytes), and <code>shape</code> (plus <code>strides</code>). <code>shape</code> is (displayed as) a tuple. Python allows tuples to have 0, 1, 2 or more elements, so why shouldn't shape have the same flexibility?</p> <p>Look at what <code>atleast_1d</code> does:</p> <pre><code>res = [] for ary in arys: ary = asanyarray(ary) if len(ary.shape) == 0: result = ary.reshape(1) else: result = ary res.append(result) if len(res) == 1: return res[0] else: return res </code></pre> <p>It can work with a list of inputs (scalar, array, list, etc.)</p> <pre><code>In [374]: np.atleast_1d(np.array(1),np.array([1]),np.array([[1]])) Out[374]: [array([1]), array([1]), array([[1]])] </code></pre> <p>It converts each to an array (as needed) and then checks the dim (len of shape). If 0d it reshapes it to (1,). This reshape does not change the data buffer. <code>atleast_2d</code> does <code>result = ary.reshape(1, 1)</code>.</p> <p>You could also use <code>ndmin</code>:</p> <pre><code>In [382]: np.array(1,ndmin=1) Out[382]: array([1]) </code></pre> <p><code>np.array(1)</code> is in many ways like <code>np.int32(1)</code>. Both have <code>()</code> shape, both have methods like <code>sum()</code>. The only obvious difference is in their print format.</p> <p>I don't know of any reason to purposefully construct a 0d array. It's just as easy to write <code>np.array([1])</code> if I really want a 1d array. But you should know how to handle one if it comes up. That includes using <code>.item()</code> to extract the scalar value, and indexing with <code>[()]</code>.</p> <p>I've encountered it most often in SO questions about loading MATLAB files with <code>scipy.io.loadmat</code>. 
Some <code>MATLAB</code> constructs are returned as 0d object arrays.</p> <p>Another way of thinking about a 0d array is that it adds (or retains) the whole suite of array methods to a scalar - without having to explicitly specify the <code>dtype</code>.</p> <p>I mentioned the similarity to <code>np.int32(1)</code>. I've seen it in beginners' code, but have not needed it myself. </p>
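The behavior described above can be verified directly — a short sketch (assuming NumPy is installed):

```python
import numpy as np

zero_d = np.array(123)

assert zero_d.shape == ()      # 0-d: shape is the empty tuple
assert zero_d.ndim == 0
assert zero_d.size == 1        # one element, even though the shape is ()

# Getting the scalar back out
assert zero_d.item() == 123
assert zero_d[()] == 123

# Three ways to get a 1-d array instead
assert np.atleast_1d(zero_d).shape == (1,)
assert np.array(123, ndmin=1).shape == (1,)
assert zero_d.reshape(1).shape == (1,)
```

Note that `size` is 1 rather than 0 here: a 0-d array holds exactly one element. The genuinely empty case is `np.array([])`, whose `size` is 0.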
2
2016-07-26T17:25:13Z
[ "python", "arrays", "numpy", "data-type-conversion" ]
Why is the dynamodb performance different between these queries?
38,590,218
<p>I have 3 tables:</p> <pre><code>table 1 ======= Size 458.54 MB Count 2,887,152 table 2 ======= Size 161.05 MB Count 1,060,444 table 3 ======= Size 4.10 GB Count 2,629,162 </code></pre> <p>I've provisioned all 3 tables to 500 read capacity units (RCU), and paginate through 20 pages using python2.7 and a simple boto3.scan(). </p> <p>Why is the duration so different among them, while consuming exactly the same RCU's?</p> <pre><code>table 1 ======= seconds: 65.7738468647 row_count: 131925 scanned_count: 131925 consumed_capacity: 2570.0 table 2 ======= seconds: 97.8232300282 row_count: 138092 scanned_count: 138092 consumed_capacity: 2570.0 table 3 ======= seconds: 37.8384461403 row_count: 13416 scanned_count: 13416 consumed_capacity: 2571.0 </code></pre>
1
2016-07-26T12:41:59Z
38,595,269
<p>The difference is in the boto3 response parser. Larger, more complicated objects will take longer to parse. I imagine if you look at the objects in each of those tables you'll see a correlation between more complicated objects and query speed. Transfer time also will impact things significantly.</p>
1
2016-07-26T16:29:11Z
[ "python", "amazon-web-services", "amazon-dynamodb", "boto3" ]
Python 3.4: PyQt on Windows: Crash on exit only on some computers
38,590,263
<p>I have a Python program that I packaged with <code>cx_freeze</code> to make executable. The program is strictly a desktop program for data acquisition. It works fine and exits fine on every computer, but on one desktop of one of our collaborators with Windows 7 on it, it crashes only on exit (I emphasize that no pythonic errors are given. Just a low-level crash with zero information about it). Simply starting and exiting the program crashes it!</p> <p>I got the guy to create a memory dump for me, and he did. The weird part is the following: Creating a memory dump out of this and analyzing it with WinDbg gives the following chain of errors:</p> <pre><code>STACK_TEXT: WARNING: Stack unwind information not available. Following frames may be wrong. 0020f940 5c51b34e 5c7bd640 9d7a3385 03c93748 QtCore4!QHashData::free_helper+0x26 0020f974 76e314bd 00b30000 00000000 03e0c4c0 QtGui4!QGestureRecognizer::reset+0x1f9e 0020f9a0 5c51c968 03c93748 5d3608c2 00000001 kernel32!HeapFree+0x14 0020f9a8 5d3608c2 00000001 03c93748 03891250 QtGui4!QGestureRecognizer::reset+0x35b8 0020f9c0 5d3627b5 9d0dae1c 03891250 03cac0a0 QtCore4!QObjectPrivate::deleteChildren+0x72 00000000 00000000 00000000 00000000 00000000 QtCore4!QObject::~QObject+0x3e5 </code></pre> <p>Now what surprises me is that a complaint from <code>QGestureRecognizer</code> (which is a <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qgesturerecognizer.html" rel="nofollow">part of QtGUI apparently</a>) is given! But why? I don't use any touch capabilities! The modules I use are: <code>QtCore</code> and <code>QtGUI</code>. Where is this coming from? Can I, like, force disable everything related to that class: <code>QGestureRecognizer</code>? What would you do in this case?</p> <p>Update:</p> <p>This issue seems to happen ONLY on Windows 7 computers. It was tested on 2 computers with Windows 7 and the same crash happened.</p>
1
2016-07-26T12:44:18Z
38,590,440
<p>It seems to have trouble freeing memory. You could try to do it manually with some function like this:</p> <pre><code>def clean(item): """Clean up the memory by closing and deleting the item if possible.""" if isinstance(item, (list, dict)): values = item.values() if isinstance(item, dict) else item for sub in list(values): clean(sub) else: try: item.close() except (RuntimeError, AttributeError): # deleted or no close method try: item.deleteLater() except (RuntimeError, AttributeError): # deleted or no deleteLater method pass </code></pre> <p>Then you define a cleaning method in your main widget.</p> <pre><code>class MyWindow(QWidget): def cleanUp(self): # Clean up everything for i in self.__dict__: item = self.__dict__[i] clean(item) </code></pre> <p>Finally, before calling <code>qt_app.exec_()</code>, you'll have to connect like this:</p> <pre><code>qt_app.aboutToQuit.connect(app.cleanUp) </code></pre> <p>where <code>app</code> is your main window.</p> <hr> <p><strong>EDIT:</strong></p> <p>Wrapping everything under the <code>if __name__ == '__main__'</code> line into a single <code>main()</code> function works sometimes, but I have no idea why.</p>
1
2016-07-26T12:53:07Z
[ "python", "crash", "pyqt", "exit", "crash-dumps" ]
Python 3.4: PyQt on Windows: Crash on exit only on some computers
38,590,263
<p>I have a Python program that I packaged with <code>cx_freeze</code> to make executable. The program is strictly a desktop program for data acquisition. It works fine and exits fine on every computer, but on one desktop of one of our collaborators with Windows 7 on it, it crashes only on exit (I emphasize that no pythonic errors are given. Just a low-level crash with zero information about it). Simply starting and exiting the program crashes it!</p> <p>I got the guy to create a memory dump for me, and he did. The weird part is the following: Creating a memory dump out of this and analyzing it with WinDbg gives the following chain of errors:</p> <pre><code>STACK_TEXT: WARNING: Stack unwind information not available. Following frames may be wrong. 0020f940 5c51b34e 5c7bd640 9d7a3385 03c93748 QtCore4!QHashData::free_helper+0x26 0020f974 76e314bd 00b30000 00000000 03e0c4c0 QtGui4!QGestureRecognizer::reset+0x1f9e 0020f9a0 5c51c968 03c93748 5d3608c2 00000001 kernel32!HeapFree+0x14 0020f9a8 5d3608c2 00000001 03c93748 03891250 QtGui4!QGestureRecognizer::reset+0x35b8 0020f9c0 5d3627b5 9d0dae1c 03891250 03cac0a0 QtCore4!QObjectPrivate::deleteChildren+0x72 00000000 00000000 00000000 00000000 00000000 QtCore4!QObject::~QObject+0x3e5 </code></pre> <p>Now what surprises me is that a complaint from <code>QGestureRecognizer</code> (which is a <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qgesturerecognizer.html" rel="nofollow">part of QtGUI apparently</a>) is given! But why? I don't use any touch capabilities! The modules I use are: <code>QtCore</code> and <code>QtGUI</code>. Where is this coming from? Can I, like, force disable everything related to that class: <code>QGestureRecognizer</code>? What would you do in this case?</p> <p>Update:</p> <p>This issue seems to happen ONLY on Windows 7 computers. It was tested on 2 computers with Windows 7 and the same crash happened.</p>
1
2016-07-26T12:44:18Z
40,110,481
<p>It turned out that <strong>ALL</strong> the problems I used to have with this program crashing were because of QThread (on Windows). All the users I know of that used QThread on Windows got similar issues, and for some reason no one is fixing it.</p> <p>Avoid QThread in Python. It's completely useless and more harmful than useful. I have now switched to <code>multiprocessing</code>. It's much better and isn't affected by the GIL.</p>
0
2016-10-18T14:07:40Z
[ "python", "crash", "pyqt", "exit", "crash-dumps" ]
How to kill subprocesses with another command python
38,590,274
<p>I am running two subprocesses in a Python server script. The purpose of the subprocesses is to stream video from my Raspberry Pi.</p> <p>My question is how to kill the subprocesses when another command is sent to the server. I am currently using Popen() to start the subprocesses. </p> <p>This is my code for when the server receives the command "startStream". I am using Twisted library as server protocol.</p> <pre><code>class Echo(Protocol): def connectionMade(self): #self.transport.write("""connected""") self.factory.clients.append(self) print "clients are ", self.factory.clients def connectionLost(self, reason): self.factory.clients.remove(self) def dataReceived(self, data): print "data is ", data if data == "startStream": p = subprocess.Popen("raspistill --nopreview -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 &amp;", shell=True) pn = subprocess.Popen("LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i %s -o %s &amp;" % (x,y), shell=True) </code></pre> <p>What I would like is something like this.</p> <pre><code>if data == "startStream": p = subprocess.Popen("raspistill --nopreview -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 &amp;", shell=True) pn = subprocess.Popen("LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i %s -o %s &amp;" % (x,y), shell=True) elif data == "stopStream": os.kill(p.pid) os.kill(pn.pid) </code></pre> <p>Many thanks!</p>
0
2016-07-26T12:44:50Z
38,590,906
<p>You're missing some context here, but basically the server would do something like:</p> <pre><code>while True: data = wait_for_request() if data == 'startStream': p = subprocess.Popen("raspistill --nopreview -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 &amp;", shell=True) pn = subprocess.Popen("LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i %s -o %s &amp;" % (x,y), shell=True) elif data == 'stopStream': p.terminate() pn.terminate() </code></pre> <p>The crucial part is that the names <code>p</code> and <code>pn</code> exist in the same scope and therefore they are accessible without using any kind of global state. If your code structure is different, you need to outline it in the question.</p> <p>Since <code>data_received</code> has its own scope on each call, you need to pass a reference to your Popen object in a different way. Fortunately, you may keep references in the class instance.</p> <pre><code>def dataReceived(self, data): if data=='startStream': self.p = subprocess.Popen() # ... self.pn = subprocess.Popen() # ... elif data=='stopStream': self.p.terminate() self.pn.terminate() </code></pre> <p><a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate" rel="nofollow"><code>Popen.terminate</code></a> is available from Python 2.6 and should work just fine - I'm not sure what the issue in the question's comments is. </p>
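The start/terminate pattern can be exercised on its own, outside Twisted — a minimal sketch where a sleeping child process stands in for the raspistill/mjpg_streamer commands:

```python
import subprocess
import sys

# Stand-in for the long-running streaming command
p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

# ... later, when "stopStream" arrives ...
p.terminate()
returncode = p.wait()  # reap the child so no zombie process is left behind
```

Because `shell=True` is used in the question, `terminate()` there would signal the shell rather than raspistill itself; passing the command as an argument list (as above) avoids that extra shell layer.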
1
2016-07-26T13:14:16Z
[ "python", "server", "raspberry-pi", "subprocess", "kill" ]
Django manage.py runserver got AppRegistryNotReady: Models aren't loaded yet
38,590,316
<p>I hit a problem when run django from command line with manage.py runserver.</p> <p>The same code is fine with Django 1.5 several months ago.</p> <p>Today I wanna pickup the code again and run upon Django 1.8.3 and python2.7.10 .</p> <p>Now, got error here:</p> <pre><code>Traceback (most recent call last): File "manage.py", line 29, in &lt;module&gt; execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 108, in populate app_config.import_models(all_models) File "/usr/local/lib/python2.7/dist-packages/django/apps/config.py", line 198, in import_models self.models_module = import_module(models_module_name) File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/media/wni/study/workspace4320151111/weichun/mytheme/models.py", line 8, in &lt;module&gt; from mezzanine.pages.models import Page File "/media/wni/study/workspace4320151111/weichun/mezzanine/pages/models.py", line 34, in &lt;module&gt; class Page(BasePage): File "/media/wni/study/workspace4320151111/weichun/mezzanine/core/models.py", line 350, in __new__ return super(OrderableBase, cls).__new__(cls, name, bases, attrs) File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 298, in __new__ new_class.add_to_class(field.name, copy.deepcopy(field)) File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 324, in add_to_class value.contribute_to_class(cls, name) File "/media/wni/study/workspace4320151111/weichun/mezzanine/generic/fields.py", line 226, in contribute_to_class super(KeywordsField, 
self).contribute_to_class(cls, name) File "/media/wni/study/workspace4320151111/weichun/mezzanine/generic/fields.py", line 84, in contribute_to_class cls._meta.get_fields_with_model()]: File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 56, in wrapper return fn(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 432, in get_fields_with_model return [self._map_model(f) for f in self.get_fields()] File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 740, in get_fields return self._get_fields(include_parents=include_parents, include_hidden=include_hidden) File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 802, in _get_fields all_fields = self._relation_tree File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 60, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 709, in _relation_tree return self._populate_directed_relation_graph() File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 681, in _populate_directed_relation_graph all_models = self.apps.get_models(include_auto_created=True) File "/usr/local/lib/python2.7/dist-packages/django/utils/lru_cache.py", line 101, in wrapper result = user_function(*args, **kwds) File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 168, in get_models self.check_models_ready() File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 131, in check_models_ready raise AppRegistryNotReady("Models aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet. </code></pre> <p>Anyone knows how to fix it?</p> <p>Thanks.</p> <p>Wesley</p>
0
2016-07-26T12:46:55Z
38,590,704
<p>I think you need to change your wsgi.py file as you move to a different version of Django.</p> <pre><code>import os import sys from django.core.handlers.wsgi import WSGIHandler os.environ['DJANGO_SETTINGS_MODULE'] = 'YourAppName.settings' application = WSGIHandler() </code></pre> <p>And try commenting out all third-party applications imported in settings.py.</p> <p>1] ./manage.py runserver will use your wsgi.py; however, it looks like the stack trace you've shown at the top does not include the wsgi file. Therefore the error is occurring before the wsgi file is loaded.</p> <p>2] This could well be an issue with your Django settings. For example, LOGGING may point to a filename in a non-existent directory.</p> <p>3] or check <a href="http://stackoverflow.com/questions/25680803/django-1-7-upgrade-error-appregistrynotready-models-arent-loaded-yet">this</a></p>
0
2016-07-26T13:04:08Z
[ "python", "django" ]
Export from python 3.5 to csv
38,590,391
<p>I scraped six different values with Python 3.5 using BeautifulSoup. Now I have the following six variables with values:</p> <ol> <li>project_titles</li> <li>project_href</li> <li>project_desc</li> <li>project_per</li> <li>project_mon</li> <li>project_loc</li> </ol> <p>The data for e.g. "project_titles" looks like this: ['Formula Pi - Self-driving robot racing with the Raspberry Pi', 'The Superbook: Turn your smartphone into a laptop for $99'] --> separated by a comma.</p> <p>Now I want to export this data to a csv. </p> <p>The headlines should be in A1 (project_titles), B1 (project_href) and so on. And in A2 I need the first value of "project_titles". In B2 the first value of "project_href".</p> <p>I think I need a loop for this, but I didn't get it. Please help me...</p>
0
2016-07-26T12:50:23Z
38,590,656
<p>Here are a few hints:</p> <p>When you have a string that you want to split on a given character, use <a class='doc-link' href="http://stackoverflow.com/documentation/python/278/string-methods/1007/split-a-string-based-on-a-delimiter-into-a-list-of-strings#t=201607261257387173865"><code>string.split</code> to get a list</a>, from which you can <a class='doc-link' href="http://stackoverflow.com/documentation/python/209/list/782/accessing-list-values#t=201607261259386799125">get the first value using <code>lst[0]</code></a>.</p> <p>Then, take a look at the <a class='doc-link' href="http://stackoverflow.com/documentation/python/2116/reading-and-writing-csv/6947/reading-and-writing-csv-files#t=201607261300200894077">csv module</a> to do your export.</p>
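Putting the hints together — a sketch of the export loop, with two short placeholder lists standing in for the question's six scraped variables:

```python
import csv
import io

# Placeholder data; in the real script these are the scraped lists
project_titles = ['Formula Pi - Self-driving robot racing', 'The Superbook']
project_href = ['http://example.com/1', 'http://example.com/2']

buf = io.StringIO()
writer = csv.writer(buf)

# Row 1: the headlines
writer.writerow(['project_titles', 'project_href'])

# One row per project: zip walks the parallel lists in step
for title, href in zip(project_titles, project_href):
    writer.writerow([title, href])

csv_text = buf.getvalue()
```

For a real file, replace the `StringIO` with `open('projects.csv', 'w', newline='')` and extend both the header row and the `zip(...)` call to all six lists.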
0
2016-07-26T13:02:08Z
[ "python", "python-3.x", "csv", "beautifulsoup", "export" ]
Django one-to-one relationships in one form
38,590,568
<p>I'm trying to make a registration form for an extension of Django's user model. It has two relationships, one to <code>User</code> and one to <code>Address</code>. The form needs to have all the fields from <code>User</code>, <code>UserDetails</code> and <code>Address</code>. But I'm struggling to get the correct view and form. Just having a <code>ModelForm</code> for <code>UserDetails</code> combined with <code>FormView</code> doesn't add the fields for <code>User</code> and <code>Address</code>.</p> <pre><code>class User(AbstractBaseUser, PermissionMixin): email = models.EmailField(unique=True) class UserDetails(model.Model): date_of_birth = models.DateField() user = models.ForeignKey(User, on_delete=models.CASCADE) address = models.OneToOneField(Address, on_delete=models.CASCADE) class Address(model.Model): field = models.CharField(max_length=100) field2 = models.CharField(max_length=100) </code></pre>
0
2016-07-26T12:58:37Z
38,598,196
<p>You missed the <code>__unicode__</code> or <code>__str__</code> method declaration in the model classes. You should always declare it. Remember that <code>__str__</code> is for Python 3.x, and <code>__unicode__</code> for Python 2.x.</p> <p>Here is an example for class Address and Python 2:</p> <pre><code>def __unicode__(self): return '%s %s' % (self.field, self.field2) </code></pre>
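For completeness, a Django-free sketch of the same idea in Python 3, where `__str__` plays the role `__unicode__` played in Python 2 (the field names follow the question's Address model):

```python
class Address(object):
    def __init__(self, field, field2):
        self.field = field
        self.field2 = field2

    # Python 3 counterpart of the __unicode__ example above
    def __str__(self):
        return '%s %s' % (self.field, self.field2)

addr = Address('Main Street', '42')
label = str(addr)  # invokes __str__, just as Django does when rendering objects
```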
0
2016-07-26T19:19:06Z
[ "python", "django" ]
Django one-to-one relationships in one form
38,590,568
<p>I'm trying to make a registration form for an extension of Django's user model. It has two relationships, one to <code>User</code> and one to <code>Address</code>. The form needs to have all the fields from <code>User</code>, <code>UserDetails</code> and <code>Address</code>. But I'm struggling to get the correct view and form. Just having a <code>ModelForm</code> for <code>UserDetails</code> combined with <code>FormView</code> doesn't add the fields for <code>User</code> and <code>Address</code>.</p> <pre><code>class User(AbstractBaseUser, PermissionMixin): email = models.EmailField(unique=True) class UserDetails(model.Model): date_of_birth = models.DateField() user = models.ForeignKey(User, on_delete=models.CASCADE) address = models.OneToOneField(Address, on_delete=models.CASCADE) class Address(model.Model): field = models.CharField(max_length=100) field2 = models.CharField(max_length=100) </code></pre>
0
2016-07-26T12:58:37Z
38,633,719
<p>I had to create different forms for each model. Then combine all the instances created from the forms. See <a href="http://stackoverflow.com/a/24011448/2315022">this answer for more details</a></p>
0
2016-07-28T10:27:31Z
[ "python", "django" ]
Why can't I build this code in Sublime text 2?
38,590,600
<p>I need to build a chart, so I found an example (see In [13], Out [13])</p> <p><a href="http://playittodeath.ru/%D0%B0%D0%BD%D0%B0%D0%BB%D0%B8%D0%B7-%D0%B4%D0%B0%D0%BD%D0%BD%D1%8B%D1%85-%D0%BF%D1%80%D0%B8-%D0%BF%D0%BE%D0%BC%D0%BE%D1%89%D0%B8-python-%D0%B3%D1%80%D0%B0%D1%84%D0%B8%D0%BA%D0%B8-%D0%B2-pandas/" rel="nofollow">http://playittodeath.ru/анализ-данных-при-помощи-python-графики-в-pandas/</a></p> <p>However, when I copy it into Sublime Text 2 I get this output:</p> <pre><code>sh: sysctl: command not found sh: grep: command not found sh: sw_vers: command not found sh: grep: command not found /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment. warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.') [Finished in 1.4s] </code></pre> <p>I use MacOS and Python 2.7.</p>
-1
2016-07-26T12:59:29Z
38,593,661
<p>It worked when I modified my Sublime build file. Here is the original:</p> <pre><code>{ "cmd": ["python", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "selector": "source.python" } </code></pre> <p>And here is the modified one:</p> <pre><code>{ "path": "/Library/Frameworks/Python.framework/Versions/2.7/bin/", "cmd": ["python2.7", "-u", "$file"], "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "selector": "source.python" } </code></pre> <p>To get the path you need, run <code>which python</code> in your terminal and copy it into "path". Then run <code>open *your path*</code> in your terminal, find the Python executable there (it may be named <code>python</code> or <code>python2.7</code>) and put its name in "cmd".</p>
-1
2016-07-26T15:10:50Z
[ "python", "python-2.7", "pandas", "matplotlib", "sublimetext2" ]
Can two threads use the same embedded python interpreter simultaneously?
38,590,720
<p>The title has it, but here are some elaborations. Suppose the main thread spawns another thread, where some code is loaded into the Python interpreter, and then another thread is called which executes some more code through the same Python interface (through PyImport or PyRun). Is such a scenario feasible?</p>
5
2016-07-26T13:04:41Z
38,591,001
<p>If I'm following what you are asking, then yes you can do this, but the Python interpreter itself is not fully thread safe. To get around this, you must make sure each thread obtains the interpreter's <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">GIL</a> before calling any Python code and then releases it afterwards. i.e. Each thread needs to do the following when executing Python code:</p> <pre><code>PyGILState_STATE gstate; gstate = PyGILState_Ensure(); // Do any needed Python API operations, execute python code // Release the GIL. No Python API allowed beyond this point. PyGILState_Release(gstate); </code></pre> <p>Also you should do the following after starting the Python interpreter to ensure threads/GIL are properly initialized:</p> <pre><code>if (! PyEval_ThreadsInitialized()) { PyEval_InitThreads(); } </code></pre> <p>See <a href="https://docs.python.org/2/c-api/init.html#thread-state-and-the-global-interpreter-lock" rel="nofollow" title="Non Python Created Threads">Non Python Created Threads</a> for more info on this.</p> <p>As mentioned in the comments, it's worth noting that this is really just serializing access to the interpreter, but it's the best you can do assuming you are using the CPython implementation of Python.</p>
2
2016-07-26T13:18:51Z
[ "python", "c++", "multithreading", "interpreter", "python-c-api" ]
Django - ManyToMany table pulling the wrong info
38,590,751
<p>I'm trying to create a linker table for a many to many relationship between these two tables. The third table will have the <code>cand_id</code> from the first table and the <code>p_id</code> from the second table. Here is my code in the candidate and <code>P_user classes</code> in models and the tables from MariaDB. When I use the ManyToManyField option in my Candidate model it keeps pulling the Users' first name instead of their <code>p_id</code>.</p> <p>First model:</p> <pre><code>class P_user(models.Model): ''' Proxy database model to extend Django User with profile data.''' user = models.ForeignKey(User) p_id = models.CharField(primary_key = True) phone = models.CharField(max_length = 20, default = '', blank = True) password = models.CharField(max_length = 30, default = '', blank = True) confirm_password = models.CharField(max_length = 30, default = '', blank = True) roles = models.CharField(max_length = 30, default = '', blank = True) </code></pre> <p>Second Model:</p> <pre><code>class Candidate(models.Model) .... associated_hm = models.ManyToManyField(P_user, blank = True) .... 
</code></pre> <p>DB representation:</p> <pre><code>MariaDB [mantis]&gt; describe prog_port_candidate; +------------------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------------+--------------+------+-----+---------+----------------+ | cand_id | int(11) | NO | PRI | NULL | auto_increment | | first_name | varchar(20) | NO | | NULL | | | last_name | varchar(20) | NO | | NULL | | +------------------------+--------------+------+-----+---------+----------------+ </code></pre> <p>Next one:</p> <pre><code>MariaDB [mantis]&gt; describe prog_port_p_user; +------------------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +------------------+-------------+------+-----+---------+-------+ | p_id | varchar(8) | NO | PRI | NULL | | | phone | varchar(20) | NO | | NULL | | | password | varchar(30) | NO | | NULL | | | confirm_password | varchar(30) | NO | | NULL | | | roles | varchar(30) | NO | | NULL | | | user_id | int(11) | NO | MUL | NULL | | +------------------+-------------+------+-----+---------+-------+ </code></pre>
0
2016-07-26T13:05:42Z
38,591,985
<p>This might be a longshot, since your question doesn't make the context of your incorrect output clear, but if you're just referencing yourmodel.associated_hm, you will see a reference to the associated_hm object. </p> <p>To get the <code>p_id</code>, you would need to explicitly reference it in your query like so <code>yourmodel.associated_hm.p_id</code>. </p>
0
2016-07-26T14:00:03Z
[ "python", "mysql", "django", "foreign-keys", "mariadb" ]
Binning and then combining bins with minimum number of observations?
38,591,000
<p>Let's say I create some data and then create bins of different sizes:</p> <pre><code>from __future__ import division x = np.random.rand(1,20) new, = np.digitize(x,np.arange(1,x.shape[1]+1)/100) new_series = pd.Series(new) print(new_series.value_counts()) </code></pre> <p>reveals:</p> <pre><code>20 17 16 1 4 1 2 1 dtype: int64 </code></pre> <p>I basically want to transform the underlying data, if I set a minimum threshold of at least 2 per bin, so that <code>new_series.value_counts()</code> is this:</p> <pre><code>20 17 16 3 dtype: int64 </code></pre>
1
2016-07-26T13:18:50Z
38,592,142
<p><strong>EDITED:</strong></p> <pre><code>x = np.random.rand(1,100) bins = np.arange(1,x.shape[1]+1)/100 new = np.digitize(x,bins) n = new.copy()[0] # this will hold the result threshold = 2 for i in np.unique(n): if sum(n == i) &lt;= threshold: n[n == i] += 1 n = n.clip(0, bins.size) # clip returns a copy, so reassign; avoids adding beyond the last bin n = n.reshape(1,-1) </code></pre> <p>This can move counts up multiple times, until a bin is filled sufficiently.</p> <p>Instead of using <code>np.digitize</code>, it might be simpler to use <code>np.histogram</code> instead, because it will directly give you the counts, so that we don't need to <code>sum</code> ourselves.</p>
1
2016-07-26T14:06:51Z
[ "python", "performance", "numpy", "pandas", "binning" ]
Arduino + Raspy3 serial problems
38,591,219
<p>I am new to python, and I'm trying to connect an <code>Arduino</code> Uno with a Raspberry Pi3 using python. Arduino sends data (ID, Temp and Humidity) every 1 second. </p> <p>The problem is that I want <code>raspberry</code> to read serial port every 5 seconds and raspy is losing data... it only gets IDs: 2,4,6,8,etc, so I'm losing data, and I also discovered that when raspy reads, it doesn't get the latest data, it seems that it's reading a buffer of the serial data(I also tried reading every second and the problem was the same). </p> <p>Below is part of the code:</p> <pre class="lang-python prettyprint-override"><code>import numpy import sys import time from PyQt4.QtCore import * from PyQt4.QtCore import pyqtSignal as Signal from PyQt4.QtGui import * import serial class Ventana(QMainWindow, ui_SQL.Ui_Ventana): port1 = serial.Serial(3) # port1 = serial.Serial('/dev/ttyUSB0') port1.baudrate = 9600 port1.timeout = 1 def __init__(self, parent=None): self.l1 = [] self.l2 = [] self.l3 = [] super(Ventana, self).__init__(parent) self.setupUi(self) self.cajita.clicked.connect(self.cancel1) timer = QTimer(self) timer.timeout.connect(self.medir) timer.start(5000) def medir(self): texto = self.port1.readline() texto1 = texto.split(" ") num1 = int(texto1[0]) num2 = float(texto1[1]) num3 = float(texto1[2]) self.lect1.setText(str(num2)) self.lect2.setText(str(num3)) dato1 = round(num2/num3, 2) num4 = self.blancoSpin.value() dato2 = round(num4/num3, 2) self.muestraDo.setText(str(dato1)) self.guardarTxt() def guardarTxt(self): guardar = self.port1.readline() if self.cajita.isChecked(): with open(self.lineEdit.text()+'.txt', 'a') as yourFile: yourFile.write("%s\n" % guardar) </code></pre> <p>This is the arduino code. 
It just sends random values.</p> <pre><code>int n=1; float a; float b; void setup(){ Serial.begin(9600); } void loop() { a = random(10, 30000); a /= 100; b = random(900,1100); b /= 100; Serial.print(n); Serial.print(" "); Serial.print(a); Serial.print(" "); Serial.println(b); n +=1; delay(1000); } </code></pre> <p>Could someone please help me?</p>
1
2016-07-26T13:28:13Z
38,685,511
<p>For those having the same problem, I finally solved this issue using threading. Here I paste part of the modified code:</p> <pre><code>class guardarTxt (QtCore.QThread): def __init__(self, texto, nombre): QtCore.QThread.__init__(self) self.nombre = nombre self.texto = texto def __del__(self): self.wait() def run (self): with open(self.nombre+'.txt', 'a') as yourFile: yourFile.write("%s\n" % self.texto) self.terminate() </code></pre> <p>And, in <code>class Ventana</code>, I've changed <code>self.guardarTxt()</code> to the following:</p> <pre><code>nombre = self.lineEdit.text() self.guardarTxt = guardarTxt(texto, nombre) self.guardarTxt.start() </code></pre>
0
2016-07-31T15:28:04Z
[ "python", "arduino", "raspberry-pi" ]
Suppress warning Eclipse when developing with Pydev
38,591,420
<p>I want to suppress Eclipse warnings when defining decorators.</p> <p>For example:</p> <pre><code>def tool_wrapper(func): def inner(self): cmd="test" cmd+=func(self) return inner @tool_wrapper def list_peer(self): return "testing " </code></pre> <p>I get a warning on the decorator definition: "Method 'tool_wrapper' should have self as first parameter".</p> <p>I define the decorator inside a class, so this is the only way it's working properly. </p> <p>Thanks</p>
1
2016-07-26T13:36:07Z
38,591,576
<p>Just define your decorator outside the class and pass the instance as an argument, it will work just fine.</p> <pre><code>def tool_wrapper(func): def inner(inst): # inst : instance of the object cmd="test" cmd+=func(inst) return cmd return inner class Test(): def __init__(self): pass @tool_wrapper def list_peer(self): return "testing " if __name__ == '__main__': t = Test() print t.list_peer() </code></pre> <p>This script prints <code>testtesting</code>.</p>
1
2016-07-26T13:43:13Z
[ "python", "eclipse", "decorator", "pydev" ]
Passing a value into a python variable via Splinter
38,591,436
<p>I'm looking at a web page and making a yes/no decision. I'm trying to create a prompt that will allow me to pass the "yes" / "no" to a python variable via Splinter.</p> <p>1.) Page loads</p> <p>2.) Execute something like <code>browser.execute_script("window.prompt()")</code> with a yes/ no to a variable</p> <p>3.) Some business logic is done based on that variable</p> <p>ie - </p> <pre><code>data = browser.execute_script("window.prompt()") if data == 'yes': print('the value is good') else: print('the value is bad') </code></pre> <p>Is there a good way to go about doing this?</p>
0
2016-07-26T13:36:50Z
38,591,550
<blockquote> <p>Is there a good way to go about doing this?</p> </blockquote> <p>No. Use Python's <code>input</code> (or <code>raw_input</code> if using Python 2), or get that value as a command line argument. </p>
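For example, a minimal sketch of that approach (the yes/no branch and the printed strings come from the question; the prompt text is made up, and the logic is factored into a function so it can be exercised without a terminal):

```python
def decide(data):
    # business logic from the question, factored into a testable function
    if data == 'yes':
        return 'the value is good'
    return 'the value is bad'

# data = input('Is the page OK? (yes/no): ')  # use raw_input(...) on Python 2
data = 'yes'  # stand-in for the interactive answer
print(decide(data))
```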
1
2016-07-26T13:41:46Z
[ "python" ]
Use both Recursive Feature Eliminiation and Grid Search in SciKit-Learn
38,591,469
<p>I have a machine learning problem and want to optimize my SVC estimators as well as the feature selection.</p> <p>For optimizing SVC estimators I use essentially the code from the <a href="http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_digits.html#example-model-selection-grid-search-digits-py" rel="nofollow">docs</a>. Now my question is, how can I combine this with <a href="http://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_with_cross_validation.html#example-feature-selection-plot-rfe-with-cross-validation-py" rel="nofollow">recursive feature elimination with cross-validation (RFECV)</a>? That is, for each estimator combination I want to do the RFECV in order to determine the best combination of estimators and features.</p> <p>I tried the solution from <a href="http://stackoverflow.com/questions/31784392/how-can-i-avoid-using-estimator-params-when-using-rfecv-nested-within-gridsearch">this thread</a>, but it yields the following error:</p> <pre><code>ValueError: Invalid parameter C for estimator RFECV. Check the list of available parameters with `estimator.get_params().keys()`. </code></pre> <p>My code looks like this: </p> <pre><code>tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-4,1e-3],'C': [1,10]}, {'kernel': ['linear'],'C': [1, 10]}] estimator = SVC(kernel="linear") selector = RFECV(estimator, step=1, cv=3, scoring=None) clf = GridSearchCV(selector, tuned_parameters, cv=3) clf.fit(X_train, y_train) </code></pre> <p>The error appears at <code>clf = GridSearchCV(selector, tuned_parameters, cv=3)</code>.</p>
0
2016-07-26T13:38:13Z
38,593,101
<p>I would use a Pipeline, but here is a more suitable answer:</p> <p><a href="http://stackoverflow.com/questions/23815938/recursive-feature-elimination-and-grid-search-using-scikit-learn">Recursive feature elimination and grid search using scikit-learn</a></p>
1
2016-07-26T14:46:37Z
[ "python", "machine-learning", "scikit-learn", "feature-selection" ]
Can't open pip in Python version 3.5
38,591,547
<p>Using Python version 3.5 on Windows 10 64bit, I'm unable to run the pip command. When I try running the application, the window will just open for a brief second and then closes. I already tried adding the directory to the <code>PATH</code> environment variable and rebooting the system - didn't work.</p>
0
2016-07-26T13:41:39Z
38,591,837
<blockquote> <p>when i try running the application the window will just open for brief second and then closes</p> </blockquote> <p>Sounds like you are trying to open the <code>pip.exe</code> file and expect an interactive interface of some kind. </p> <p>Unfortunately, that's not how you use <code>pip</code>. Open up a <code>cmd</code>, type your pip commands there. The command prompt will print an error, and not close, if there is a problem. </p>
1
2016-07-26T13:53:36Z
[ "python", "python-3.x", "pip" ]
How to know what are the threads running : python
38,591,551
<p>Is there any way to know what threads are running using the Python threading module? With the following piece of code, I am able to get the thread name, current thread and active thread count.</p> <p>But my doubt here is that ACTIVE_THREADS is 2 while the CURRENT THREAD is always "MainThread". What could be the other thread running in the background?</p> <pre><code>import threading import time for _ in range(10): time.sleep(3) print("\n", threading.currentThread().getName()) print("current thread", threading.current_thread()) print("active threads ", threading.active_count()) </code></pre> <hr> <h2>Output of the above code:</h2> <p>MainThread</p> <p>current thread &lt;_MainThread(MainThread, started 11008)></p> <p>active threads 2</p>
2
2016-07-26T13:41:53Z
38,591,662
<p>You can access all current thread objects using <a href="https://docs.python.org/3/library/threading.html#threading.enumerate" rel="nofollow"><code>threading.enumerate()</code></a>, e.g.</p> <pre><code>for thread in threading.enumerate(): print(thread.name) </code></pre>
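For example, a small runnable sketch that spawns one extra thread and lists everything currently alive (the thread name <code>my-worker</code> is arbitrary):

```python
import threading
import time

def worker():
    time.sleep(0.2)  # keep the thread alive long enough to be listed

t = threading.Thread(target=worker, name="my-worker")
t.start()

names = [thread.name for thread in threading.enumerate()]
print(names)  # contains at least 'MainThread' and 'my-worker'
t.join()
```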
2
2016-07-26T13:46:09Z
[ "python", "multithreading" ]
How can i find the minimum value and key in a scipy dictionary of keys?
38,591,595
<p>I'm trying to find the shortest distance between 2 kd-trees and am using the scipy function 'sparse_distance_matrix'. The result is returned in a dictionary-of-keys matrix of the form <code>{(1,2):4.54}</code>.</p> <p>It's possible to retrieve the value using the following code, but no method seems to work to get the key since it's in tuple form:</p> <pre><code>sparsemin = sp.KDTree.sparse_distance_matrix(aKD,bKD,20) m = min(sparsemin.itervalues()) </code></pre>
0
2016-07-26T13:43:48Z
38,591,795
<p><code>min(sparsemin.items(), key=lambda item: (item[1], item[0]))</code> will return a <code>(key, value)</code> tuple for the minimum value.</p> <pre><code>a = {(1,2): 2.54, (1, 0): 4.52} min(a.items(), key=lambda item: (item[1], item[0])) &gt;&gt; ((1, 2), 2.54) </code></pre>
1
2016-07-26T13:51:52Z
[ "python", "scipy", "kdtree" ]
SQLAlchemy, without explicitly stating a rollback call, are the uncommitted changes of the session automatically rolled back?
38,591,713
<p>For example this code:</p> <pre><code>db = SQLAlchemy() def myfunction(a): #somechanges in database if a == 2: return db.session.commit() myfunction(2) # there were some changes here that were not committed neither rolled back myfunction(4) # Here the changes were committed. </code></pre> <p>My question is, are the first changes committed along with the second changes in the second call?</p> <p>Thanks in advance</p>
0
2016-07-26T13:48:49Z
38,591,845
<p>If you read <a href="http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#adding-and-updating-objects" rel="nofollow">the tutorial</a> you'll see that the session has a <code>dirty</code> property:</p> <pre><code>db = SQLAlchemy() def myfunction(a): #somechanges in database if a == 2: return db.session.commit() myfunction(2) # there were some changes here that were not committed neither rolled back print(db.session.dirty) myfunction(4) # Here the changes were committed. </code></pre> <p>What does it tell you?</p>
0
2016-07-26T13:53:58Z
[ "python", "sqlalchemy" ]
SQLAlchemy, without explicitly stating a rollback call, are the uncommitted changes of the session automatically rolled back?
38,591,713
<p>For example this code:</p> <pre><code>db = SQLAlchemy() def myfunction(a): #somechanges in database if a == 2: return db.session.commit() myfunction(2) # there were some changes here that were not committed neither rolled back myfunction(4) # Here the changes were committed. </code></pre> <p>My question is, are the first changes committed along with the second changes in the second call?</p> <p>Thanks in advance</p>
0
2016-07-26T13:48:49Z
38,591,854
<p>The changes performed in the call to <code>myfunction(4)</code> will overwrite the changes performed in the call to <code>myfunction(2)</code>. This is the case for updating data, commits or not.</p> <p>For adding rows and columns, there is no over-writing of data. Committing before the function call to <code>myfunction(4)</code> will not make a difference.</p>
1
2016-07-26T13:54:11Z
[ "python", "sqlalchemy" ]
Getting survival function estimates group by attribute level in Lifelines
38,591,748
<p>I have a challenge with using Lifelines for KM estimates. I have a variable column called worker type (Full Time, Part Time, etc) that I would like to group the KM estimates for, then output to a <code>CSV</code> file. Here's a snippet:</p> <pre><code>worker_types = df['Emp_Status'].unique() for i, worker_type in enumerate(worker_types): ix = df['Emp_Status'] == worker_type kmf.fit(T[ix], C[ix]) kmf.survival_function_['worker'] = worker_type #print kmf.survival_function_ kmf.survival_function_.to_csv('C:\Users\Downloads\test.csv') </code></pre> <p>When I use the print function, I get each iteration of the KM estimate per <code>worker_type</code>; however, when trying to export to a <code>csv</code> file, I only get the last estimate of worker type. </p> <p>I've read the lifelines docs, and seen the examples for the plotting of different levels, but not sure how to bridge that to exporting to <code>csv</code>.</p>
1
2016-07-26T13:50:05Z
38,592,178
<p>You can open the file in append mode before the loop and then append each row, e.g.:</p> <pre><code>worker_types = df['Emp_Status'].unique() with open('C:/Users/Downloads/test.csv', 'a') as fou: for i, worker_type in enumerate(worker_types): ix = df['Emp_Status'] == worker_type kmf.fit(T[ix], C[ix]) kmf.survival_function_['worker'] = worker_type if i == 0: kmf.survival_function_.to_csv(fou) # write header on first iteration else: kmf.survival_function_.to_csv(fou, header=False) </code></pre> <p>Side note: Please do not use backslashes for Windows paths within Python. Instead use forward slashes.</p>
0
2016-07-26T14:08:30Z
[ "python", "pandas", "dataframe" ]
rename the categories and add the missing categories to a Series PANDAS
38,591,868
<p>I want to rename the categories and add the missing categories to a Series.</p> <p>My code:</p> <pre><code>codedCol = bdAu['Bordersite'] print 'pre:' print codedCol.head(10) codedCol = codedCol.astype('category') codedCol = codedCol.cat.set_categories(['a','b','c','d','e','f','g','h','i','j']) print 'post:' print codedCol.head(10) </code></pre> <p>When I do this I get NaN as the result.</p> <pre><code>pre: 0 3 1 3 2 2 3 2 4 3 5 4 6 5 7 3 8 3 9 3 Name: Bordersite, dtype: int64 post: 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN dtype: category Categories (10, object): [a, b, c, d, ..., g, h, i, j] </code></pre> <p>What am I doing wrong here?</p> <p>Thanks Kheeran</p>
1
2016-07-26T13:55:04Z
38,592,028
<p>You've set the categories to the following: <code>['a','b','c','d','e','f','g','h','i','j']</code>. The current values in the column in <code>codedCol</code> do not match any of the categories. Therefore, they get re-set to <code>NaN</code>. For further reading, consider this example <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html" rel="nofollow">from the docs</a>:</p> <pre><code>In [10]: raw_cat = pd.Categorical(["a","b","c","a"], categories=["b","c","d"], ....: ordered=False) ....: In [11]: s = pd.Series(raw_cat) In [12]: s Out[12]: 0 NaN 1 b 2 c 3 NaN dtype: category Categories (3, object): [b, c, d] </code></pre> <p>Since <code>"a"</code> is not a category, it gets re-set to <code>NaN</code>.</p>
0
2016-07-26T14:02:25Z
[ "python", "pandas", "dataframe" ]
rename the categories and add the missing categories to a Series PANDAS
38,591,868
<p>I want to rename the categories and add the missing categories to a Series.</p> <p>My code:</p> <pre><code>codedCol = bdAu['Bordersite'] print 'pre:' print codedCol.head(10) codedCol = codedCol.astype('category') codedCol = codedCol.cat.set_categories(['a','b','c','d','e','f','g','h','i','j']) print 'post:' print codedCol.head(10) </code></pre> <p>When I do this I get NaN as the result.</p> <pre><code>pre: 0 3 1 3 2 2 3 2 4 3 5 4 6 5 7 3 8 3 9 3 Name: Bordersite, dtype: int64 post: 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN dtype: category Categories (10, object): [a, b, c, d, ..., g, h, i, j] </code></pre> <p>What am I doing wrong here?</p> <p>Thanks Kheeran</p>
1
2016-07-26T13:55:04Z
38,592,355
<p>For creating categories you can use <code>.astype('category')</code>, but then the categories are taken from the values in your column; alternatively use <code>Categorical</code> with the <code>categories</code> parameter, where they are defined explicitly.</p> <p>You can use:</p> <pre><code>codedCol = bdAu['Bordersite'] codedCol = pd.Series(pd.Categorical(codedCol, categories=[0,1,2,3,4,5,6,7,8,9])) print (codedCol) 0 3 1 3 2 2 3 2 4 3 5 4 6 5 7 3 8 3 9 3 dtype: category Categories (10, int64): [0, 1, 2, 3, ..., 6, 7, 8, 9] </code></pre> <p>And then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cat.rename_categories.html" rel="nofollow"><code>rename_categories</code></a>, but the number of new items has to be the same as the number of old categories, otherwise you get an error:</p> <blockquote> <p>ValueError: new categories need to have the same number of items than the old categories!</p> </blockquote> <pre><code>codedCol = codedCol.cat.rename_categories(['a','b','c','d','e','f','g','h','i','j']) print (codedCol) 0 d 1 d 2 c 3 c 4 d 5 e 6 f 7 d 8 d 9 d dtype: category Categories (10, object): [a, b, c, d, ..., g, h, i, j] </code></pre>
0
2016-07-26T14:16:16Z
[ "python", "pandas", "dataframe" ]
List of dicts: remove duplicates and stack together unique values
38,591,938
<p>I have a list similar to this: </p> <pre><code>[ {'code': 'ABCDEFGH', 'message':'Everything is not OK', 'name': 'Tom Sawyer', 'course': 'Networks'}, {'code': 'ABCDEFGH', 'message':'Alright', 'name': 'Julien Sorel', 'course': 'Networks'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, ] </code></pre> <p>And I want it to look like this:</p> <pre><code>[ {'code': 'ABCDEFGH', 'message': ['Everything is not OK', 'Alright'], 'name': ['Tom Sawyer', 'Julien Sorel'], 'course': 'Networks'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, ] </code></pre> <p>So that fully duplicate entries would be removed and those, which have unique dictionary values, would be merged in a list. The order is not important.</p> <p>This looks like a huge hack for me. Thanks in advance.</p> <p><em><strong>! Python 3.5 !</strong></em></p>
-2
2016-07-26T13:58:08Z
38,592,411
<p>To guarantee uniqueness with rather simple, readable code, one could use <a class='doc-link' href="http://stackoverflow.com/documentation/python/497/set#t=201607261412544468519">sets</a>. See comments in the code.</p> <p>I suggest, supposing <code>lst</code> contains your list of dictionaries:</p> <pre><code>res = list() # start from scratch for k in lst: for d in res: # check the previously created entries if d['code'] == k['code']: # add to entry for field in ['message', 'name', 'course']: d[field].add(k[field]) # will not do anything if value is already here break else: # this will be executed if the for didn't break # create new entry tmp = {'code': k['code']} for field in ['message', 'name', 'course']: tmp[field] = set([k[field]]) res.append(tmp) print(res) </code></pre> <p>With your list of dictionaries as input, I get the following:</p> <pre><code>[ {'course': {'Networks'}, 'message': {'Alright', 'Everything is not OK'}, 'code': 'ABCDEFGH', 'name': {'Julien Sorel', 'Tom Sawyer'}}, {'course': {'Cooking'}, 'message': {'Hello there'}, 'code': 'KQPRADBC', 'name': {'Jacques Paganel'}} ] </code></pre> <p>If you want to have exactly the same output as you wrote in your question, you might want to add something like this at the end:</p> <pre><code>for d in res: for field in ['message', 'name', 'course']: if len(d[field]) &gt; 1: d[field] = list(d[field]) else: d[field] = list(d[field])[0] </code></pre>
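The same grouping can also be written a bit more compactly with <code>setdefault</code>, collapsing singleton sets back to plain strings at the end (field names and sample data taken from the question):

```python
records = [
    {'code': 'ABCDEFGH', 'message': 'Everything is not OK', 'name': 'Tom Sawyer', 'course': 'Networks'},
    {'code': 'ABCDEFGH', 'message': 'Alright', 'name': 'Julien Sorel', 'course': 'Networks'},
    {'code': 'KQPRADBC', 'message': 'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'},
    {'code': 'KQPRADBC', 'message': 'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'},
]

merged = {}
for rec in records:
    # one entry per unique 'code'; sets deduplicate the field values
    entry = merged.setdefault(rec['code'], {'code': rec['code']})
    for field in ('message', 'name', 'course'):
        entry.setdefault(field, set()).add(rec[field])

# collapse singleton sets back to plain strings
result = []
for entry in merged.values():
    for field in ('message', 'name', 'course'):
        vals = sorted(entry[field])
        entry[field] = vals[0] if len(vals) == 1 else vals
    result.append(entry)

print(result)
```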
2
2016-07-26T14:19:06Z
[ "python", "list", "python-3.x", "dictionary" ]
List of dicts: remove duplicates and stack together unique values
38,591,938
<p>I have a list similar to this: </p> <pre><code>[ {'code': 'ABCDEFGH', 'message':'Everything is not OK', 'name': 'Tom Sawyer', 'course': 'Networks'}, {'code': 'ABCDEFGH', 'message':'Alright', 'name': 'Julien Sorel', 'course': 'Networks'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, ] </code></pre> <p>And I want it to look like this:</p> <pre><code>[ {'code': 'ABCDEFGH', 'message': ['Everything is not OK', 'Alright'], 'name': ['Tom Sawyer', 'Julien Sorel'], 'course': 'Networks'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, ] </code></pre> <p>So that fully duplicate entries would be removed and those, which have unique dictionary values, would be merged in a list. The order is not important.</p> <p>This looks like a huge hack for me. Thanks in advance.</p> <p><em><strong>! Python 3.5 !</strong></em></p>
-2
2016-07-26T13:58:08Z
38,593,622
<pre><code>a = [ {'code': 'ABCDEFGH', 'message':'Everything is not OK', 'name': 'Tom Sawyer', 'course': 'Networks'}, {'code': 'ABCDEFGH', 'message':'Alright', 'name': 'Julien Sorel', 'course': 'Networks'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, {'code': 'KQPRADBC', 'message':'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'}, ] out=[] for i in set([tuple(d.items()) for d in a]): out.append(dict(i)) print(out) </code></pre> <p><strong>output:</strong></p> <pre><code>[{'course': 'Cooking', 'message': 'Hello there', 'code': 'KQPRADBC', 'name': 'Jacques Paganel'}, {'course': 'Networks', 'message': 'Everything is not OK', 'code': 'ABCDEFGH', 'name': 'Tom Sawyer'}, {'course': 'Networks', 'message': 'Alright', 'code': 'ABCDEFGH', 'name': 'Julien Sorel'}] </code></pre>
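One caveat: <code>tuple(d.items())</code> relies on every dict iterating its items in the same order. Hashing a <code>frozenset</code> of the items instead sidesteps that, and also preserves the original list order:

```python
a = [
    {'code': 'ABCDEFGH', 'message': 'Everything is not OK', 'name': 'Tom Sawyer', 'course': 'Networks'},
    {'code': 'ABCDEFGH', 'message': 'Alright', 'name': 'Julien Sorel', 'course': 'Networks'},
    {'code': 'KQPRADBC', 'message': 'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'},
    {'code': 'KQPRADBC', 'message': 'Hello there', 'name': 'Jacques Paganel', 'course': 'Cooking'},
]

seen = set()
out = []
for d in a:
    key = frozenset(d.items())  # order-insensitive fingerprint of the dict
    if key not in seen:
        seen.add(key)
        out.append(d)
print(out)
```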
0
2016-07-26T15:08:52Z
[ "python", "list", "python-3.x", "dictionary" ]
List within a string and print formatting
38,592,007
<p>I am creating something which takes a tuple, converts it into a string and then reorganises the string using print formatting. 'other' can sometimes have 2 names, hence why I have used <code>*</code> and the <code>" ".join(other)</code> in this function:</p> <pre><code>def strFormat(x): #Convert to string s=' ' s = s.join(x) print(s) #Split string into different parts payR, dep, sal, *other, surn = s.split() payR, dep, sal, " ".join(other), surn #Print formatting! print (surn , other, payR, dep, sal) </code></pre> <p>The problem with this is that it prints a list of 'other' within the string like this:</p> <pre><code>Jones ['David', 'Peter'] 84921 Python 63120 </code></pre> <p>But I want it more like this:</p> <pre><code>Jones David Peter 84921 Python 63120 </code></pre> <p>So that it is ready for formatting into something like this:</p> <pre><code>Jones, David Peter 84921 Python £63120 </code></pre> <p>Am I going about this the right way and how do I stop the list appearing within the string?</p>
0
2016-07-26T14:01:16Z
38,592,077
<p>You're close. Change this line (which does nothing):</p> <pre><code>payR, dep, sal, " ".join(other), surn </code></pre> <p>to</p> <pre><code>other = " ".join(other) </code></pre>
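Applying that fix, a runnable sketch of the whole function (the tuple order is inferred from the question's desired output, and the last line mimics its final formatting):

```python
def strFormat(x):
    s = ' '.join(x)
    payR, dep, sal, *other, surn = s.split()
    other = ' '.join(other)  # rebind the joined string instead of discarding it
    return '{}, {} {} {} £{}'.format(surn, other, payR, dep, sal)

print(strFormat(('84921', 'Python', '63120', 'David', 'Peter', 'Jones')))
```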
3
2016-07-26T14:03:59Z
[ "python" ]
How to handle clicks on Links in Python with Gtk 3.0 and WebKit2 4.0?
38,592,026
<p>I have created my view (wrapped in a window) and loaded an URL like this:</p> <pre><code>self.web_view = WebKit2.WebView() self.web_view.load_uri("https://en.wikipedia.org") </code></pre> <p>My "Mini-Browser" starts and I can click on local links (links which are bound to JavaScript events or links to other pages on the same domain). But when the links point to other domains, nothing happens. How do I catch clicks on external links? Or how can I open these links in the system default browser? </p> <p>UPDATE: Cross site links are not handled by the "Mini-Browser". Can I write an event hook(onclick) to interrupt the "Mini-Browser" and act based on custom logic or is there a way to configure cross-site links.</p>
6
2016-07-26T14:02:17Z
38,723,324
<p>Did you use a <a href="https://developer.gnome.org/gtk3/stable/GtkLinkButton.html" rel="nofollow">GtkLinkButton</a>? According to the doc of <a href="https://developer.gnome.org/gtk3/stable/gtk3-Filesystem-utilities.html#gtk-show-uri" rel="nofollow">gtk-show-uri</a>, which uses the default browser to open links, you additionally need to install <a href="https://en.wikipedia.org/wiki/GVfs" rel="nofollow">gvfs</a> to get support for URI schemes such as <code>http://</code> or <code>ftp://</code>.</p> <p>For Debian-based distributions you can install gvfs like this:</p> <p><code>sudo apt-get install gvfs gvfs-backends gvfs-fuse</code></p> <p>If that does not help, you can additionally check the error message of <code>gtk_show_uri</code>, in case it returns FALSE. </p> <p>For custom browsers, like yours, according to the doc of <a href="https://developer.gnome.org/gtk3/stable/GtkLinkButton.html" rel="nofollow">GtkLinkButton</a> you need to connect to the <a href="https://developer.gnome.org/gtk3/stable/GtkLinkButton.html#GtkLinkButton-activate-link" rel="nofollow">activate-link signal</a> and return TRUE from the handler ... probably you already did so.</p>
2
2016-08-02T14:31:31Z
[ "python", "webkit", "gtk", "gtk3", "webkitgtk" ]
Scaling images generated by imshow
38,592,101
<p>The following code snippet</p> <pre><code>import matplotlib.pyplot as plt import numpy as np arr1 = np.arange(100).reshape((10,10)) arr2 = np.arange(25).reshape((5,5)) fig, (ax1, ax2, ) = plt.subplots(nrows=2, figsize=(3,5)) ax1.imshow(arr1, interpolation="none") ax2.imshow(arr2, interpolation="none") plt.tight_layout() plt.show() </code></pre> <p>produces two images with the same size, but a much lower "pixel density" in the second one. </p> <p><a href="http://i.stack.imgur.com/eEQW6.png" rel="nofollow"><img src="http://i.stack.imgur.com/eEQW6.png" alt="enter image description here"></a></p> <p>I would like to have the second image plotted at the same scale (i.e. pixel density) of the first, without filling the subfigure, possibly correctly aligned (i.e. the origin of the image in the same subplot position as the first one.)</p> <p><strong>Edit</strong></p> <p>The shapes of <code>arr1</code> and <code>arr2</code> are only an example to show the problem. What I'm looking for is a way to ensure that two different images generated by <code>imshow</code> in different portions of the figure are drawn at exactly the same scale.</p>
0
2016-07-26T14:05:13Z
38,598,273
<p>The simplest thing I could think of didn't work, but <code>gridspec</code> does. The origins here aren't aligned explicitly, it just takes advantage of how gridspec fills rows (and there's an unused subplot as a spacer). </p> <pre><code>import matplotlib.pyplot as plt import numpy as np from matplotlib import gridspec sizes = (10, 5) arr1 = np.arange(sizes[0]*sizes[0]).reshape((sizes[0],sizes[0])) arr2 = np.arange(sizes[1]*sizes[1]).reshape((sizes[1],sizes[1])) # Maybe sharex, sharey? No, we pad one and lose data in the other #fig, (ax1, ax2, ) = plt.subplots(nrows=2, figsize=(3,5), sharex=True, sharey=True) fig = plt.figure(figsize=(3,5)) # wspace so the unused lower-right subplot doesn't squeeze lower-left gs = gridspec.GridSpec(2, 2, height_ratios = [sizes[0], sizes[1]], wspace = 0.0) ax1 = plt.subplot(gs[0,:]) ax2 = plt.subplot(gs[1,0]) ax1.imshow(arr1, interpolation="none") ax2.imshow(arr2, interpolation="none") plt.tight_layout() plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/DGqtn.png" rel="nofollow"><img src="http://i.stack.imgur.com/DGqtn.png" alt="enter image description here"></a></p>
1
2016-07-26T19:23:13Z
[ "python", "matplotlib", "imshow" ]
How can i get sum of queryset lists and make jsonresponse?
38,592,103
<p><strong>views.py</strong></p> <pre><code>from itertools import chain def post_list(request): i=1 while i: list_i = Post.objects.filter(title__startswith="i") post_list = list(chain('' + ',' + 'list_i')) if len(post_list) &gt;= 5 : break return JsonResponse(serializers.serialize('json', post_list), safe=False) </code></pre> <p>I want make post_list that is sum of list_1, list_2, ..,list_i and make it serialized. </p> <p>But it gives me AttributeError as follows.</p> <pre><code>Environment: Request Method: GET Request URL: http://127.0.0.1:8000/ Django Version: 1.9.7 Python Version: 3.5.2 Installed Applications: [...I omitted] Installed Middleware: [...I omitted] Traceback: File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response 147. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/keepair/djangogirls/blog/views.py" in post_list 33. return JsonResponse(serializers.serialize('json', post_list), safe=False) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/serializers/__init__.py" in serialize 129. s.serialize(queryset, **options) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/serializers/base.py" in serialize 83. concrete_model = obj._meta.concrete_model Exception Type: AttributeError at / Exception Value: 'str' object has no attribute '_meta' </code></pre> <p>How can i solve this?</p> <p>Thanks for reading my question.</p>
-1
2016-07-26T14:05:15Z
38,592,476
<p>You shouldn't be using <code>chain</code>. The <code>chain</code> method chains lists together, but you don't want all of them together, so you should just use plain list methods:</p> <pre><code>def post_list(request):
    result = []
    i = 1
    while True:
        list_i = Post.objects.filter(title__startswith=str(i))
        result.extend(list_i)
        if len(result) &gt;= 5:
            result = result[:5]
            break
        i += 1
    return JsonResponse(serializers.serialize('json', result), safe=False)
</code></pre> <p>No offence, but reading your code I feel like you lack some fundamental knowledge of Python/programming. I would suggest learning some basics of Python before jumping into Django development; it would save you a lot of time figuring out stuff like this.</p>
0
2016-07-26T14:21:55Z
[ "python", "json", "django" ]
How can i get sum of queryset lists and make jsonresponse?
38,592,103
<p><strong>views.py</strong></p> <pre><code>from itertools import chain def post_list(request): i=1 while i: list_i = Post.objects.filter(title__startswith="i") post_list = list(chain('' + ',' + 'list_i')) if len(post_list) &gt;= 5 : break return JsonResponse(serializers.serialize('json', post_list), safe=False) </code></pre> <p>I want make post_list that is sum of list_1, list_2, ..,list_i and make it serialized. </p> <p>But it gives me AttributeError as follows.</p> <pre><code>Environment: Request Method: GET Request URL: http://127.0.0.1:8000/ Django Version: 1.9.7 Python Version: 3.5.2 Installed Applications: [...I omitted] Installed Middleware: [...I omitted] Traceback: File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response 147. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/keepair/djangogirls/blog/views.py" in post_list 33. return JsonResponse(serializers.serialize('json', post_list), safe=False) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/serializers/__init__.py" in serialize 129. s.serialize(queryset, **options) File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/serializers/base.py" in serialize 83. concrete_model = obj._meta.concrete_model Exception Type: AttributeError at / Exception Value: 'str' object has no attribute '_meta' </code></pre> <p>How can i solve this?</p> <p>Thanks for reading my question.</p>
-1
2016-07-26T14:05:15Z
38,592,739
<p>It seems like you have a serious disconnect between string literals and variables.</p> <pre><code>In [126]: x = 3

In [127]: print("x")
x

In [128]: print(x)
3

In [129]: print(str(x))
3
</code></pre> <p>Based on your code, I'm <em>pretty sure</em> you thought that <code>print("x")</code> would print out <code>3</code>. That's never going to be true the way it's written. I say that because you have</p> <pre><code>i=1
while i:
    list_i = Post.objects.filter(title__startswith="i")
</code></pre> <p>Aside from just being wrong in general, looking at your other code, I'm <em>pretty sure</em> that you expect this to return <code>Post</code>s that begin with <code>1</code>. It won't, it will only return posts that begin with <code>'i'</code>.</p> <p>Here, you're also creating a variable <code>i</code>, and you're using <code>while i</code>, but you never change <code>i</code> anywhere. If what you're looking for is to make sure that your post list has at least 5 items in it, then you're going about this all wrong.</p> <p>First, what you need is a post list:</p> <pre><code>post_list = []
</code></pre> <p>If you want to filter by numbers where the post titles start with <code>1</code> then <code>2</code>, and so on, you're going to need a counter, too:</p> <pre><code>count = 0
</code></pre> <p>Now, you need to loop while the list has less than 5 elements:</p> <pre><code>while len(post_list) &lt; 5:
</code></pre> <p>Then you need to append stuff to the list. What stuff? The posts you get. 
But we're also going to want to bump up that count each time through so we're not adding the same posts:</p> <pre><code>    count += 1
    post_list.extend(Post.objects.filter(title__startswith=str(count)))
</code></pre> <p>Putting that all together, you get:</p> <pre><code>def post_list(request):
    post_list = []
    count = 0
    while len(post_list) &lt; 5:
        count += 1
        post_list.extend(Post.objects.filter(title__startswith=str(count)))
    return JsonResponse(serializers.serialize('json', post_list), safe=False)
</code></pre> <p>However, there's one thing we haven't considered yet - what if you <em>never</em> get more than 5 posts? What if your system only has 4? This is going to (never) end badly - you're going to enter an infinite loop. So we should add another sentinel - we know from counting in grade school:</p> <pre><code>1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ... 19, 20, 21, ...
</code></pre> <p>Every number above 9 is going to begin with a number 1-9. So we can definitely state that if our count goes to 10+ that one of our filters would have already picked it up. Even 222,392,138,902 x 10^20 starts with 2. So we should amend our <code>while</code> condition to this:</p> <pre><code>while count &lt; 10 and len(post_list) &lt; 5:
</code></pre> <p>And we'll have a solution that I <em>think</em> does what you want, unless you had some different ideas about filtering.</p>
1
2016-07-26T14:32:17Z
[ "python", "json", "django" ]
Single Link List in Python Add, Remove, Insert
38,592,218
<p>I'm trying to achieve a singly linked list with add, remove and insert methods. I'm confused with the insertion method.</p> <pre><code>class Node(object): def __init__(self, data, next): self.data = data self.next = next class SingleList(object): head = None tail = None def printList(self): print "Link List:" current_node = self.head while current_node is not None: print current_node.data, " --&gt; ", current_node = current_node.next print None def add(self, data): node = Node(data, None) if self.head is None: self.head = self.tail = node else: self.tail.next = node self.tail = node def insert(self, before, nextdata): #nextdata is to be inserted before 'before' #before is actually a data #but it has dif name in this def to avoid confusion with 'data' current_node = self.head previous_node = None while current_node is not None: if current_node.data == before: if previous_node is not None: current_node.next = current_node previous_node = current_node current_node = current_node.next def remove(self, node_value): current_node = self.head previous_node = None while current_node is not None: if current_node.data == node_value: # if this is the first node (head) if previous_node is not None: previous_node.next = current_node.next else: self.head = current_node.next # needed for the next iteration previous_node = current_node current_node = current_node.next </code></pre> <p>Link List:</p> <pre><code>1 --&gt; 2 --&gt; 3 --&gt; 4 --&gt; 5 --&gt; None </code></pre> <p>Link List:</p> <pre><code>3 --&gt; 4 --&gt; 6 --&gt; 10 --&gt; None </code></pre> <p>For example if I'm trying to do insert (4,9) which inserts number 9 before 4.</p> <pre><code>s = SingleList() s.add(1) s.add(2) s.add(3) s.add(4) s.add(5) s.printList() s.add(6) s.add(10) s.remove(5) s.remove(2) s.remove(1) s.printList() s.insert(4,9) s.printList() </code></pre> <p>Any help will do, snippets, advice anything not spoon feeding. Thanks!</p>
0
2016-07-26T14:10:25Z
38,593,253
<p>You have to rewrite your insert as:</p> <pre><code>def insert(self, before, nextdata): #nextdata is to be inserted before 'before' #before is actually a data #but it has dif name in this def to avoid confusion with 'data' current_node = self.head previous_node = None while current_node is not None: if current_node.data == before: if previous_node is not None: temp_node = current_node current_node = Node(nextdata, temp_node) previous_node.next = current_node else: new_node = Node(nextdata, current_node) self.head = new_node break previous_node = current_node current_node = current_node.next </code></pre> <p>This will take care of it. When you insert the new node you have to break your loop otherwise it will be infinite.</p>
0
2016-07-26T14:52:56Z
[ "python", "data-structures", "linked-list" ]
Efficient accessing in sparse matrices
38,592,255
<p>I'm working with recommender systems but I'm struggling with the access times of the scipy sparse matrices.</p> <p>In this case, I'm implementing <em>TrustSVD</em> so I need an efficient structure to operate both in columns and rows (CSR, CSC). I've thought about using both structures, dictionaries,... but either way this is always too slow, especially compared with the numpy matrix operations.</p> <pre class="lang-py prettyprint-override"><code>for u, j in zip(*ratings.nonzero()): items_rated_by_u = ratings[u, :].nonzero()[1] users_who_rated_j = ratings[:, j].nonzero()[0] # More code... </code></pre> <p><strong>Extra:</strong> Each loop takes around 0.033s, so iterating once through 35,000 ratings means to wait 19min per iteration (SGD) and for a minimum of 25 iterations we're talking about 8h. Moreover, here I'm just talking about accessing, if I include the factorization part it would take around 2 days.</p>
1
2016-07-26T14:12:01Z
38,596,728
<p>When you index a sparse matrix, especially just asking for a row or column, it not only has to select the values, but it also has to construct a new sparse matrix. <code>np.ndarray</code> construction is done in compiled code, but most of the sparse construction is pure Python. The <code>nonzero()[1]</code> construct requires converting the matrix to <code>coo</code> format and picking the <code>row</code> and <code>col</code> attributes (look at its code).</p> <p>I think you could access your row columns faster by looking at the <code>rows</code> attribute of the <code>lil</code> format, or its transpose:</p> <pre><code>In [418]: sparse.lil_matrix(np.matrix('0,1,0;1,0,0;0,1,1')) Out[418]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.int32'&gt;' with 4 stored elements in LInked List format&gt; In [419]: M=sparse.lil_matrix(np.matrix('0,1,0;1,0,0;0,1,1')) In [420]: M.A Out[420]: array([[0, 1, 0], [1, 0, 0], [0, 1, 1]], dtype=int32) In [421]: M.rows Out[421]: array([[1], [0], [1, 2]], dtype=object) In [422]: M[1,:].nonzero()[1] Out[422]: array([0], dtype=int32) In [423]: M[2,:].nonzero()[1] Out[423]: array([1, 2], dtype=int32) In [424]: M.T.rows Out[424]: array([[1], [0, 2], [2]], dtype=object) </code></pre> <p>You could also access these values in the <code>csr</code> format, but it's a bit more complicated</p> <pre><code>In [425]: M.tocsr().indices Out[425]: array([1, 0, 1, 2], dtype=int32) </code></pre>
2
2016-07-26T17:52:43Z
[ "python", "numpy", "matrix", "scipy", "sparse-matrix" ]
Python incrementing array index through loop
38,592,307
<p>Im currently using xlwings to display graphs and their corresponding values in microsoft excel. I have 5 graphs and their coordinates(in an array) that i've successfully been able to print through a loop. Unfortunately the columns in which they appeared in had to be hard coded and would be affected if i were to add more plots, so i changed my code to:</p> <pre><code>for i in range(1, 6): Columns = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S"] Range(Columns[0] + str(1)).value = list(zip(Xvalues)) Range(Columns[1] + str(1)).value = list(zip(Yvalues)) </code></pre> <p>Currently it will take the first plot and print the x-coordinates vertically in Column A("A1") and then the y-coordinates also vertically in Column B("B1") and then continues. </p> <p>My question is how can i increment the index of Columns[] within the loop so that my next values are Columns[3] and Columns[4]?</p>
1
2016-07-26T14:14:00Z
38,592,348
<p>You can use another variable to keep track of the column index. In this case <code>j</code>, advanced by two each iteration (one column for the X values, one for the Y values), and not reused as the <code>for</code> loop variable, which would overwrite it every pass:</p> <pre><code>Columns = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S"]
j = 0
for i in range(1, 6):
    Range(Columns[j] + str(1)).value = list(zip(Xvalues))
    Range(Columns[j + 1] + str(1)).value = list(zip(Yvalues))
    j += 2
</code></pre> <p>Also you should not declare the list of columns within the loop.</p>
0
2016-07-26T14:15:55Z
[ "python", "xlwings" ]
One Hot Encoding using numpy
38,592,324
<p>If the input is zero I want to make an array which looks like this-</p> <p><code>[1,0,0,0,0,0,0,0,0,0]</code></p> <p>and if it is 5-</p> <p><code>[0,0,0,0,0,1,0,0,0,0]</code></p> <p>For the above I wrote:</p> <p><code>np.put(np.zeros(10),5,1)</code></p> <p>but it did not work.</p> <p>Is there any way in which, this can be implemented in one line.</p>
1
2016-07-26T14:15:01Z
38,592,416
<p>Something like:</p> <pre><code>np.array([int(i == 5) for i in range(10)])
</code></pre> <p>Should do the trick. But I suppose there exist other solutions using numpy.</p> <p>Edit: the reason your formula does not work: <code>np.put</code> does not return anything; it modifies, in place, the array given as its first parameter. The correct way to use <code>np.put()</code> is:</p> <pre><code>a = np.zeros(10)
np.put(a,5,1)
</code></pre> <p>The problem is that it can't be done in one line, as you need to define the array before passing it to <code>np.put()</code>.</p>
1
2016-07-26T14:19:15Z
[ "python", "numpy", "one-hot", "one-hot-encoding" ]
One Hot Encoding using numpy
38,592,324
<p>If the input is zero I want to make an array which looks like this-</p> <p><code>[1,0,0,0,0,0,0,0,0,0]</code></p> <p>and if it is 5-</p> <p><code>[0,0,0,0,0,1,0,0,0,0]</code></p> <p>For the above I wrote:</p> <p><code>np.put(np.zeros(10),5,1)</code></p> <p>but it did not work.</p> <p>Is there any way in which, this can be implemented in one line.</p>
1
2016-07-26T14:15:01Z
38,592,615
<p>The problem here is that you save your array nowhere. The <code>put</code> function works in place on the array and returns nothing. Since you never give your array a name you can not address it later. So this</p> <pre><code>one_pos = 5 x = np.zeros(10) np.put(x, one_pos, 1) </code></pre> <p>would work, but then you could just use indexing:</p> <pre><code>one_pos = 5 x = np.zeros(10) x[one_pos] = 1 </code></pre> <p>In my opinion that would be the correct way to do this if no special reason exists to do this as a one liner. This might also be easier to read and readable code is good code.</p>
1
2016-07-26T14:27:29Z
[ "python", "numpy", "one-hot", "one-hot-encoding" ]
One Hot Encoding using numpy
38,592,324
<p>If the input is zero I want to make an array which looks like this-</p> <p><code>[1,0,0,0,0,0,0,0,0,0]</code></p> <p>and if it is 5-</p> <p><code>[0,0,0,0,0,1,0,0,0,0]</code></p> <p>For the above I wrote:</p> <p><code>np.put(np.zeros(10),5,1)</code></p> <p>but it did not work.</p> <p>Is there any way in which, this can be implemented in one line.</p>
1
2016-07-26T14:15:01Z
38,592,629
<p>Taking a quick look at <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.put.html" rel="nofollow">the manual</a>, you will see that <code>np.put</code> does not return a value. While your technique is fine, you are accessing <code>None</code> instead of your result array.</p> <p>For a 1-D array it is better to just use direct indexing, especially for such a simple case.</p> <p>Here is how to rewrite your code with minimal modification:</p> <pre><code>arr = np.zeros(10) np.put(arr, 5, 1) </code></pre> <p>Here is how to do the second line with indexing instead of <code>put</code>:</p> <pre><code>arr[5] = 1 </code></pre>
1
2016-07-26T14:27:48Z
[ "python", "numpy", "one-hot", "one-hot-encoding" ]
One Hot Encoding using numpy
38,592,324
<p>If the input is zero I want to make an array which looks like this-</p> <p><code>[1,0,0,0,0,0,0,0,0,0]</code></p> <p>and if it is 5-</p> <p><code>[0,0,0,0,0,1,0,0,0,0]</code></p> <p>For the above I wrote:</p> <p><code>np.put(np.zeros(10),5,1)</code></p> <p>but it did not work.</p> <p>Is there any way in which, this can be implemented in one line.</p>
1
2016-07-26T14:15:01Z
38,593,112
<p>The <code>np.put</code> mutates its array arg <em>in-place</em>. It's conventional in Python for functions / methods that perform in-place mutation to return <code>None</code>; <code>np.put</code> adheres to that convention. So if <code>a</code> is a 1D array and you do</p> <pre><code>a = np.put(a, 5, 1) </code></pre> <p>then <code>a</code> will get replaced by <code>None</code>.</p> <p>Your code is similar to that, but it passes an un-named array to <code>np.put</code>.</p> <p>A compact &amp; efficient way to do what you want is with a simple function, eg:</p> <pre><code>import numpy as np def one_hot(i): a = np.zeros(10, 'uint8') a[i] = 1 return a a = one_hot(5) print(a) </code></pre> <p><strong>output</strong></p> <pre><code>[0 0 0 0 0 1 0 0 0 0] </code></pre>
0
2016-07-26T14:47:10Z
[ "python", "numpy", "one-hot", "one-hot-encoding" ]
One Hot Encoding using numpy
38,592,324
<p>If the input is zero I want to make an array which looks like this-</p> <p><code>[1,0,0,0,0,0,0,0,0,0]</code></p> <p>and if it is 5-</p> <p><code>[0,0,0,0,0,1,0,0,0,0]</code></p> <p>For the above I wrote:</p> <p><code>np.put(np.zeros(10),5,1)</code></p> <p>but it did not work.</p> <p>Is there any way in which, this can be implemented in one line.</p>
1
2016-07-26T14:15:01Z
38,593,472
<pre><code>import time start_time = time.time() z=[] for l in [1,2,3,4,5,6,1,2,3,4,4,6,]: a= np.repeat(0,10) np.put(a,l,1) z.append(a) print("--- %s seconds ---" % (time.time() - start_time)) #--- 0.00174784660339 seconds --- import time start_time = time.time() z=[] for l in [1,2,3,4,5,6,1,2,3,4,4,6,]: z.append(np.array([int(i == l) for i in range(10)])) print("--- %s seconds ---" % (time.time() - start_time)) #--- 0.000400066375732 seconds --- </code></pre>
0
2016-07-26T15:02:40Z
[ "python", "numpy", "one-hot", "one-hot-encoding" ]
How to install another version of python on Linux?
38,592,329
<p>There already exists a version(2.6) of python on my linux server. I want to install another version(maybe 2.7 or 3) to another directory(maybe "/home/zhangxudong") and then run my python script using this new python. How can I do the above through command line? Very thanks. </p>
-2
2016-07-26T14:15:11Z
38,592,644
<p>First of all, I'm just pointing out your question is awkward; there are many ways to do this. You are probably looking to install Python from source (google it), which can be done to a particular directory, or by using virtual environments (also google it). If you just want, say, python3 installed, you can do this easily.</p> <p>Get prerequisites:</p> <pre><code>sudo apt-get install python-setuptools python-dev build-essential
</code></pre> <p><strong>Note:</strong> <em>This is specific to Ubuntu and other Debian distros; you can use a built-in package manager or install in the distro of your choice by replacing apt-get.</em></p> <p>Now install python 3:</p> <pre><code>sudo apt-get install python3
</code></pre> <p>You can also use VirtualEnv or Docker, which create virtual instances on your machine. They are handy but a little involved to set up.</p> <p><a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a></p> <p>Alternatively, you could use pip to install different Python interpreters such as pypy once prerequisites are met. The nice thing about this is once Python and setuptools are installed, it's consistent between OS's, including Windows:</p> <pre><code>pip install -U pypy
</code></pre> <p>specific to a version of python:</p> <pre><code>python3 -m pip install -U pypy
</code></pre> <p>P.S. If you have access to a graphical desktop, I'd suggest using PyCharm, where you can switch between versions of Python (python2, python3, cython, pypy, etc) on the fly. This requires a little bit of setup and learning, but it isn't bad at all.</p> <p>Good luck!</p>
0
2016-07-26T14:28:18Z
[ "python", "linux", "install" ]
Python eMail Maibox - Read any "Received" keys
38,592,347
<p>I'm developing a script that allows me to read the "Received" field from the header of a file .mbox.</p> <p>This is a small part of the code:</p> <pre><code>mbox = mailbox.mbox(filename) print message.keys() print message["Received"] </code></pre> <p>The print of the Keys takes me this result:</p> <pre><code>['Return-Path', 'Delivered-To', 'Received', 'Delivered-To', 'Received', 'X-Received', 'Received', 'Received-SPF', 'Authentication-Results', 'DKIM-Signature', 'Received', 'To', 'From', 'Subject', 'Message-ID', 'Disposition-Notification-To', 'Date', 'User-Agent', 'MIME-Version', 'Content-Type', 'Content-Transfer-Encoding', 'X-AntiAbuse', 'X-AntiAbuse', 'X-AntiAbuse', 'X-AntiAbuse', 'X-AntiAbuse', 'X-Get-Message-Sender-Via', 'X-Authenticated-Sender', 'X-Source', 'X-Source-Args', 'X-Source-Dir', 'X-getmail-retrieved-from-mailbox', 'X-GMAIL-THRID', 'X-GMAIL-MSGID'] </code></pre> <p>From this I see that there are 3 fields "Received", but if I execute:</p> <pre><code>print message["Received"] </code></pre> <p>It only displays the first field, how do I print/view them all?</p> <p>Thanks Andrea</p>
1
2016-07-26T14:15:54Z
38,593,363
<p>When you access a header with <code>message["Received"]</code> (i.e. <code>__getitem__</code>), it linearly scans the list of message headers and returns the first one with a matching name. To get all of them, use the <code>items()</code> method, i.e.:</p> <pre><code>print [v for k, v in message.items() if k == "Received"]
</code></pre>
0
2016-07-26T14:58:14Z
[ "python" ]
matplotlib - ValueError: weight is invalid
38,592,358
<p>I am running <code>python/3.3.2</code> with <code>matplotlib/1.5.1</code></p> <p>if I run </p> <pre><code> x = linspace(0,1,10) plot(x,x) </code></pre> <p>I get </p> <pre><code>ValueError: weight is invalid </code></pre> <p>but actually it happens with any matplotlib command. It looks like something in the installation is broken or maybe some configuration. I am looking to some hint on what may be wrong, or maybe how I can override the value of weight to something meaningful. I think it refers to</p> <pre><code>In [1]: matplotlib.rcParams['font.weight'] Out[2]: "['bold']" </code></pre>
0
2016-07-26T14:16:35Z
38,592,531
<p>The value of <code>font.weight</code> <code>rcParams</code> should be one of many strings: <code>'normal'</code>, <code>'bold'</code>, <code>'bolder'</code>, etc. </p> <p>Based on that value you have shown, it's somehow the string-representation of a list containing the string <code>bold</code>. </p> <pre><code>str(['bold']) # "['bold']" </code></pre> <p>You need to change it to simply <code>'bold'</code>.</p> <pre><code>matplotlib.rcParams['font.weight'] = 'bold' </code></pre>
2
2016-07-26T14:24:20Z
[ "python", "matplotlib" ]
Why does py.test show my xfail tests as skipped?
38,592,457
<p>I have some python code and a bunch of pytest tests for it.</p> <p>Some of the tests I expect to fail (that is, I am testing error handling, so I pass in inputs that should cause my code to throw exceptions.)</p> <p>In order to do this, I have code like this:</p> <pre><code>CF_TESTDATA = [('data/1_input.csv', 'data/1_input.csv', 'utf-8'),
               pytest.mark.xfail(('data/1_input_unknown_encoding.txt', '', 'utf-8')),
               ('data/1_input_macintosh.txt', 'data/1_converted_macroman.csv', 'macroman')]

@pytest.mark.parametrize('input_file, expected_output, encoding', CF_TESTDATA)
def test_convert_file(testdir, tmpdir, input_file, expected_output, encoding):
    ''' Test the function that converts a file to UTF-8. '''
    ...
</code></pre> <p>The idea here is that the second run of this test, with input_file = 'data/1_input_unknown_encoding.txt', I expect the code under test to fail.</p> <p>This seems to work fine, and when I run from the command line, pytest tells me that the test has xfailed. I can follow the code in the debugger and see that it is throwing the expected exception. So that's all well and good.</p> <p>But Jenkins is showing this test as skipped. When I look at the output I see the message:</p> <pre><code>Skip Message
expected test failure
</code></pre> <p>Why does the test show as skipped? It seems like the test runs, and fails as expected. That's not the same as the test being skipped.</p>
0
2016-07-26T14:20:49Z
39,021,921
<p>The answer is that I am using xfail incorrectly. It should not be used to expect test failure. It does not do that. If the test fails or does not fail, it is recorded as passing.</p> <p>What I need to do instead is change the tests to catch the exception I expect, and error out if the exception is not thrown. Then I will remove the xfails.</p> <p>xfail seems to be for cases when you have a test temporarily failing, and you want your test suite to pass. So it does not so much expect a failure, as not care about a failure.</p> <p>While the test is still run, the results are not checked, and so it's reasonable for the report to show that this test was skipped.</p> <p>My tests now look like this:</p> <pre><code># Test the run function. ex1 = False try: result = main.run(base_dir + '/'+ input_file, \ client_config['properties']) except ValueError as e: ex1 = True print e if expect_fail: assert ex1 else: assert not ex1 assert filecmp.cmp(result, base_dir + '/' + expected_output), \ "Ouput file is not the same as the expected file: " + expected_output </code></pre>
1
2016-08-18T15:19:43Z
[ "python", "unit-testing", "jenkins", "py.test" ]
Beautifulsoup return list for attribute "class" while value for other attribute
38,592,497
<p><a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">Beautifulsoup</a> is handy for html parsing in python, and below code result cofuse me.</p> <pre><code>from bs4 import BeautifulSoup tr =""" &lt;table&gt; &lt;tr class="passed" id="row1"&gt;&lt;td&gt;t1&lt;/td&gt;&lt;/tr&gt; &lt;tr class="failed" id="row2"&gt;&lt;td&gt;t2&lt;/td&gt;&lt;/tr&gt; &lt;/table&gt; """ table = BeautifulSoup(tr,"html.parser") for row in table.findAll("tr"): print row["class"] print row["id"] </code></pre> <p>result:</p> <pre><code>[u'passed'] row1 [u'failed'] row2 </code></pre> <p>Why the attribute <code>class</code> returns as array ? while <code>id</code> is normal value ?</p> <p><code>beautifulsoup4-4.5.0</code> is used with <code>python 2.7</code></p>
1
2016-07-26T14:22:48Z
38,592,593
<p><a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class" rel="nofollow"><code>class</code></a> is a special <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#multi-valued-attributes" rel="nofollow">multi-valued attribute</a> in <code>BeautifulSoup</code>:</p> <blockquote> <p>HTML 4 defines a few attributes that can have multiple values. HTML 5 removes a couple of them, but defines a few more. The most common multi-valued attribute is <code>class</code> (that is, a tag can have more than one CSS class)</p> </blockquote> <p>Sometimes, this is problematic to deal with - for instance, when you want to apply a regular expression to <code>class</code> attribute value as a whole:</p> <ul> <li><a href="http://stackoverflow.com/questions/34288969/beautifulsoup-returns-empty-list-when-searching-by-compound-class-names">BeautifulSoup returns empty list when searching by compound class names</a></li> </ul> <p>You can <a href="http://stackoverflow.com/a/34294195/771848">turn this behavior off by tweaking the tree builder</a>, but I would not recommend doing it.</p>
1
2016-07-26T14:26:29Z
[ "python", "beautifulsoup", "html-parsing" ]
Beautifulsoup return list for attribute "class" while value for other attribute
38,592,497
<p><a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">Beautifulsoup</a> is handy for html parsing in python, and below code result cofuse me.</p> <pre><code>from bs4 import BeautifulSoup tr =""" &lt;table&gt; &lt;tr class="passed" id="row1"&gt;&lt;td&gt;t1&lt;/td&gt;&lt;/tr&gt; &lt;tr class="failed" id="row2"&gt;&lt;td&gt;t2&lt;/td&gt;&lt;/tr&gt; &lt;/table&gt; """ table = BeautifulSoup(tr,"html.parser") for row in table.findAll("tr"): print row["class"] print row["id"] </code></pre> <p>result:</p> <pre><code>[u'passed'] row1 [u'failed'] row2 </code></pre> <p>Why the attribute <code>class</code> returns as array ? while <code>id</code> is normal value ?</p> <p><code>beautifulsoup4-4.5.0</code> is used with <code>python 2.7</code></p>
1
2016-07-26T14:22:48Z
38,592,596
<p>Because elements may have multiple classes.</p> <p>Consider this example:</p> <pre><code>from bs4 import BeautifulSoup tr =""" &lt;table&gt; &lt;tr class="passed a b c" id="row1"&gt;&lt;td&gt;t1&lt;/td&gt;&lt;/tr&gt; &lt;tr class="failed" id="row2"&gt;&lt;td&gt;t2&lt;/td&gt;&lt;/tr&gt; &lt;/table&gt; """ table = BeautifulSoup(tr,"html.parser") for row in table.findAll("tr"): print row["class"] print row["id"] ['passed', 'a', 'b', 'c'] row1 ['failed'] row2 </code></pre>
1
2016-07-26T14:26:32Z
[ "python", "beautifulsoup", "html-parsing" ]
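The multi-valued handling above is BeautifulSoup-specific. As a rough stdlib sketch (no bs4 required; the `RowCollector` class name is made up for illustration), Python's built-in `html.parser` hands back `class` as one raw string, and whitespace-splitting that string reproduces the list BeautifulSoup returns:

```python
from html.parser import HTMLParser

# Illustrative sketch: the stdlib parser hands back `class` as one raw
# string; BeautifulSoup splits that string for you because HTML defines
# `class` as a whitespace-separated, multi-valued attribute.
class RowCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append(dict(attrs))

parser = RowCollector()
parser.feed('<table>'
            '<tr class="passed a b c" id="row1"><td>t1</td></tr>'
            '<tr class="failed" id="row2"><td>t2</td></tr>'
            '</table>')

for row in parser.rows:
    raw = row["class"]              # a single string, e.g. "passed a b c"
    print(raw, "->", raw.split())   # .split() gives what BeautifulSoup returns
```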
Convert a list of lists into a nested dictionary
38,592,504
<p>I am trying to convert a list of lists into a nested dictionary:</p> <p>My code:</p> <pre><code>csv_data={} for key, value in csv_files.iteritems(): if key in desired_keys: csv_data[key]=[] for element in value: csv_data[key].append(element[1:]) </code></pre> <p>This code gives me the following:</p> <pre><code>{ 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ] } </code></pre> <p>So in this case each "value" is a list containing two lists: a "title" list and a "numerical value" list.</p> <p>However I want to produce a format like:</p> <pre><code>{ 'Network': { 'Total KB/sec':0.3, 'Sent KB/sec':0.1, 'Received KB/sec':0.3 }, 'CPU': { 'Processor Time':'13.8', 'User Time': '6.7', 'Privileged Time': '7.2' } } </code></pre> <p>How should I change my code to produce this output?</p>
1
2016-07-26T14:23:17Z
38,592,603
<p>Suppose I demonstrate the use of <code>zip()</code> on one of your keys, <code>Network</code>:</p> <pre><code>&gt;&gt;&gt; network = [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ] </code></pre> <p><code>zip()</code>ing the two lists will yield a set of tuples that can be turned into a dict, by simply calling <code>dict()</code> on it. In other words,</p> <pre><code>&gt;&gt;&gt; dict(zip(network[0], network[1])) {'Received KB/sec': '0.3', 'Sent KB/sec': '0.1', 'Total KB/sec': '0.3'} </code></pre> <p>Repeat for your <code>CPU</code> key.</p>
4
2016-07-26T14:26:50Z
[ "python", "list", "dictionary" ]
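The `zip()`-based conversion above can be applied to the whole structure from the question in one dict comprehension. A small self-contained sketch (values stay strings, as in the original data):

```python
# Apply dict(zip(titles, values)) to every key of the structure from the
# question in a single dict comprehension.
csv_data = {
    'Network': [
        ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'],
        ['0.3', '0.1', '0.3'],
    ],
    'CPU': [
        ['Processor Time', 'User Time', 'Privileged Time'],
        ['13.8', '6.7', '7.2'],
    ],
}

# Each value is a two-element list, so it unpacks directly into (titles, values).
nested = {key: dict(zip(titles, values))
          for key, (titles, values) in csv_data.items()}
print(nested)
```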
Convert a list of lists into a nested dictionary
38,592,504
<p>I am trying to convert a list of lists into a nested dictionary:</p> <p>My code:</p> <pre><code>csv_data={} for key, value in csv_files.iteritems(): if key in desired_keys: csv_data[key]=[] for element in value: csv_data[key].append(element[1:]) </code></pre> <p>This code gives me the following:</p> <pre><code>{ 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ] } </code></pre> <p>So in this case each "value" is a list containing two lists: a "title" list and a "numerical value" list.</p> <p>However I want to produce a format like:</p> <pre><code>{ 'Network': { 'Total KB/sec':0.3, 'Sent KB/sec':0.1, 'Received KB/sec':0.3 }, 'CPU': { 'Processor Time':'13.8', 'User Time': '6.7', 'Privileged Time': '7.2' } } </code></pre> <p>How should I change my code to produce this output?</p>
1
2016-07-26T14:23:17Z
38,592,701
<p>Approach without <code>zip</code></p> <pre><code>dct = { 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ] } for key, val in dct.items(): placeholder_dct= {} for i in range(len(val[0])): placeholder_dct[val[0][i]] = val[1][i] dct[key] = placeholder_dct print(dct) </code></pre>
0
2016-07-26T14:30:37Z
[ "python", "list", "dictionary" ]
Convert a list of lists into a nested dictionary
38,592,504
<p>I am trying to convert a list of lists into a nested dictionary:</p> <p>My code:</p> <pre><code>csv_data={} for key, value in csv_files.iteritems(): if key in desired_keys: csv_data[key]=[] for element in value: csv_data[key].append(element[1:]) </code></pre> <p>This code gives me the following:</p> <pre><code>{ 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ] } </code></pre> <p>So in this case each "value" is a list containing two lists: a "title" list and a "numerical value" list.</p> <p>However I want to produce a format like:</p> <pre><code>{ 'Network': { 'Total KB/sec':0.3, 'Sent KB/sec':0.1, 'Received KB/sec':0.3 }, 'CPU': { 'Processor Time':'13.8', 'User Time': '6.7', 'Privileged Time': '7.2' } } </code></pre> <p>How should I change my code to produce this output?</p>
1
2016-07-26T14:23:17Z
38,592,853
<p><code>zip()</code> comes in very handy for iterating the lists at the same time, and converting to a dictionary becomes very easy with <code>dict()</code>.</p> <pre><code>def to_dict(dic): for key, value in dic.iteritems(): dic[key] = dict(zip(* value)) return dic </code></pre> <p><strong>Sample output:</strong></p> <pre><code>d = {'Network': [['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3']], 'CPU': [['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2']]} print to_dict(d) &gt;&gt;&gt; {'Network': {'Sent KB/sec': '0.1', 'Total KB/sec': '0.3', 'Received KB/sec': '0.3'}, 'CPU': {'Processor Time': '13.8', 'Privileged Time': '7.2', 'User Time': '6.7'}} </code></pre> <p><strong>How does it work?</strong></p> <p>When you use the zip function on lists it returns a list of <strong>tuple pairs</strong>, treating the inputs as <em>parallel lists</em> by coupling their elements at each <em>respective</em> index. So if we isolate the <code>zip(* value)</code> operation we can clearly see its result:</p> <pre><code>&gt;&gt;&gt; [('Total KB/sec', '0.3'), ('Sent KB/sec', '0.1'), ('Received KB/sec', '0.3')] [('Processor Time', '13.8'), ('User Time', '6.7'), ('Privileged Time', '7.2')]</code></pre>
3
2016-07-26T14:36:38Z
[ "python", "list", "dictionary" ]
Convert a list of lists into a nested dictionary
38,592,504
<p>I am trying to convert a list of lists into a nested dictionary:</p> <p>My code:</p> <pre><code>csv_data={} for key, value in csv_files.iteritems(): if key in desired_keys: csv_data[key]=[] for element in value: csv_data[key].append(element[1:]) </code></pre> <p>This code gives me the following:</p> <pre><code>{ 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ] } </code></pre> <p>So in this case each "value" is a list containing two lists: a "title" list and a "numerical value" list.</p> <p>However I want to produce a format like:</p> <pre><code>{ 'Network': { 'Total KB/sec':0.3, 'Sent KB/sec':0.1, 'Received KB/sec':0.3 }, 'CPU': { 'Processor Time':'13.8', 'User Time': '6.7', 'Privileged Time': '7.2' } } </code></pre> <p>How should I change my code to produce this output?</p>
1
2016-07-26T14:23:17Z
38,593,067
<p>Try this code:</p> <pre><code>{x: dict(zip(y[0], y[1])) for x, y in data.items()} </code></pre> <p>Given the input:</p> <pre><code>data={ 'Network': [ ['Total KB/sec', 'Sent KB/sec', 'Received KB/sec'], ['0.3', '0.1', '0.3'] ], 'CPU': [ ['Processor Time', 'User Time', 'Privileged Time'], ['13.8', '6.7', '7.2'] ]} </code></pre> <p>the output is:</p> <pre><code>{'Network': {'Sent KB/sec': '0.1', 'Total KB/sec': '0.3', 'Received KB/sec': '0.3'}, 'CPU': {'Processor Time': '13.8', 'Privileged Time': '7.2', 'User Time': '6.7'}} </code></pre>
0
2016-07-26T14:45:01Z
[ "python", "list", "dictionary" ]
Multithreading Issues
38,592,576
<p>This is more of a hypothetical question; I'm having some issues with a program and I'm wondering if it might be because of multithreading.</p> <p>I have a main thread and a worker thread. The worker thread communicates with a machine through a serial port, and when it receives output from the machine, it emits a Pyqt signal. There is a slot in the main thread which receives the signal, and processes that output. The processing is a lengthy process which includes creating another object from the output. </p> <p>If the worker thread were to call the main thread two times before the first output has finished getting processed, what would happen?</p>
0
2016-07-26T14:25:50Z
38,592,947
<p>While your main thread is executing a long task (originated from an event), it will not process any new events. All new events will be queued in a thread-specific queue, and will be processed later when the event loop is executed. If the targeted thread is <em>sleeping</em>, the new event will be queued, and the thread will be awaken to process it.</p> <p>You can read the documentation on <a href="http://doc.qt.io/qt-4.8/eventsandfilters.html#sending-events" rel="nofollow">The Event System</a>.</p>
1
2016-07-26T14:40:31Z
[ "python", "multithreading", "pyqt4" ]
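The queuing behaviour described in this answer is Qt-specific, but it can be mimicked roughly with the standard library: while the "main thread" is busy, results from the worker accumulate in a FIFO and are processed later, one at a time and in order. This is only an illustrative sketch, not how PyQt is implemented:

```python
import queue
import threading
import time

# Rough stdlib stand-in for Qt's per-thread event queue: signals emitted
# while the receiver is busy are not lost, they queue up in order.
events = queue.Queue()

def worker():
    for n in (1, 2, 3):
        events.put(f"output-{n}")   # stands in for emitting a pyqtSignal

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)   # the "main thread" is busy; the worker emits meanwhile
t.join()

processed = []
while not events.empty():
    processed.append(events.get())  # stands in for the slot being called

print(processed)
```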
IndexError: only integers, slices (`:`), ellipsis (`...`) . .
38,592,581
<p>I am using pymc3 to find a best fit for a 3D surface. This is the code that I am using.</p> <pre><code>with Model() as model: # specify glm and pass in data. The resulting linear model, its likelihood and # and all its parameters are automatically added to our model. glm.glm('z ~ x**2 + y**2 + x + y + np.sin(x) + np.cos(y)' , flatimage) start = find_MAP() step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm trace = sample(2000, step, progressbar=False) # draw 2000 posterior samples using NUTS sampling </code></pre> <p>I got an error in this line:</p> <pre><code>glm.glm('z ~ x**2 + y**2 + x + y + np.sin(x) + np.cos(y)' , flatimage) </code></pre> <p>The error is :</p> <pre><code>IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices </code></pre> <p>I had tried to fix it by changing sin(x) and cos(y) to np.sin(x) and np.cos(y), but that didn't work, and I don't know what else to do.</p>
0
2016-07-26T14:26:06Z
38,628,221
<p>I think the problem is related to your definition of <code>flatimage</code>. You need your data labeled for the glm module to work. Something like this:</p> <pre class="lang-py prettyprint-override"><code># synthetic data (just an example) x = np.random.normal(size=100) y = np.random.normal(size=100) z = x**2 + y**2 + x + y + np.sin(x) + np.cos(y) data = dict(x=x, y=y, z=z) # a pandas dataframe will also work with pm.Model() as model: pm.glm.glm('z ~ x**2 + y**2 + x + y + np.sin(x) + np.cos(y)' , data) start = pm.find_MAP() step = pm.NUTS(scaling=start) trace = pm.sample(2000, step, start) </code></pre> <p>Check <a href="http://twiecki.github.io/blog/2013/08/12/bayesian-glms-1/" rel="nofollow">this</a> example for other details.</p>
1
2016-07-28T06:02:18Z
[ "python", "pymc3", "best-fit-curve" ]
Splinter find_by_css() not working as expected
38,592,707
<p>I tried this in the browser and it works fine:</p> <pre><code>('button[data-item-id="1054079703"]')[0].click() </code></pre> <p>When I try it with Splinter:</p> <pre><code>browser.find_by_css('button[data-item-id="1054079703"]') </code></pre> <p>returns a Splinter object:</p> <pre><code>[&lt;splinter.driver.webdriver.WebDriverElement object at 0x1108c6c90&gt;] </code></pre> <p>I can see that it's finding the right element:</p> <pre><code>browser.find_by_css('button[data-item-id="1054079703"]').first.html u'this_is_what_im_looking_for' </code></pre> <p>But when I go to click it:</p> <pre><code>browser.find_by_css('button[data-item-id="1054079703"]').first.click() </code></pre> <p>I'm getting the error:</p> <pre><code>selenium.common.exceptions.ElementNotVisibleException: Message: element not visible </code></pre> <p>To verify, this returns <code>False</code>:</p> <pre><code>browser.find_by_css('button[data-item-id="1054079703"]').first.visible </code></pre> <p>How come I can select it in the browser using jQuery, but it's not visible through Splinter? </p>
0
2016-07-26T14:30:53Z
38,952,732
<p>Sometimes, for whatever reason, Selenium will determine that an element is not visible when it actually is.</p> <p>It is best to check your CSS to make sure nothing is overlaying the element.</p> <p>If you are sure that it is visible, try using <a href="http://splinter.readthedocs.io/en/latest/api/driver-and-element-api.html#splinter.driver.DriverAPI.execute_script" rel="nofollow">execute_script</a>:</p> <pre><code>browser.execute_script("document.getElementsByClassName('myclass')[0].click()") </code></pre>
0
2016-08-15T09:41:29Z
[ "python", "splinter" ]
Resize matplotlib object within gridspec cell (matshow and colorbar size mismatched)
38,592,719
<p>The matshow and colorbar objects do not fill the same space inside the gridspec cells and therefore they are different heights.</p> <p>Usually I would use the colorbar 'shrink' argument, but this does not seem to work when nested in a gridspec object</p> <p>How can I shrink the colorbar object without resizing the matshow heat map?</p> <p>Thanks in advance</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import gridspec df = pd.DataFrame((np.random.randint(0, 3, 10000).reshape(100, 100))) fig = plt.figure(figsize=(15,15)) gs = gridspec.GridSpec(10, 10) #### other axes removed for simplicity ax2 = fig.add_subplot(gs[2:,:8]) # plot heatmap cax = ax2.matshow(df, interpolation='nearest', cmap=plt.cm.YlGn, aspect='equal') ax2.set_xticks([]) ax2.set_yticks([]) ax3 = fig.add_subplot(gs[2:,8]) fig.colorbar(cax, cax=ax3) plt.tight_layout() gs.update(wspace=2, hspace=0.1) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/KLxSQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/KLxSQ.png" alt="matshow and colorbar are slightly (annoyingly) different heights"></a></p> <h1>EDIT: Annotated image for clarification</h1> <p><a href="http://i.stack.imgur.com/k78Cw.png" rel="nofollow"><img src="http://i.stack.imgur.com/k78Cw.png" alt="enter image description here"></a></p>
0
2016-07-26T14:31:25Z
38,594,239
<p>You could use matplotlib <a href="http://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html#axesdivider%22AxisDivider%22" rel="nofollow">AxesDivider</a>. Below is an example using the data from your question:</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import gridspec from mpl_toolkits.axes_grid1 import make_axes_locatable fig = plt.figure(figsize=(15,15)) df = pd.DataFrame((np.random.randint(0, 3, 10000).reshape(100, 100))) gs = gridspec.GridSpec(10, 10) ax2 = fig.add_subplot(gs[2:,:8]) im = ax2.matshow(df,interpolation='nearest',cmap=plt.cm.YlGn, aspect='equal') divider = make_axes_locatable(ax2) cax = divider.append_axes("right", size="5%", pad=0.05) plt.colorbar(im, cax=cax) plt.show() </code></pre> <p>This produces the following graph, which to my eye looks like they are the same size:</p> <p><a href="http://i.stack.imgur.com/ameQS.png" rel="nofollow"><img src="http://i.stack.imgur.com/ameQS.png" alt="enter image description here"></a></p>
1
2016-07-26T15:36:01Z
[ "python", "matplotlib", "colorbar", "imshow" ]
How to implement insert for OrderedDict in python 3
38,592,779
<p>I want to insert an item into an OrderedDict at a certain position. Using the <a href="https://gist.github.com/jaredks/6276032" rel="nofollow">gist</a> of <a href="http://stackoverflow.com/a/18326914/1504082">this</a> SO answer i have the problem that it doesn't work on python 3.</p> <p>This is the implementation used</p> <pre><code>from collections import OrderedDict class ListDict(OrderedDict): def __init__(self, *args, **kwargs): super(ListDict, self).__init__(*args, **kwargs) def __insertion(self, link_prev, key_value): key, value = key_value if link_prev[2] != key: if key in self: del self[key] link_next = link_prev[1] self._OrderedDict__map[key] = link_prev[1] = link_next[0] = [link_prev, link_next, key] dict.__setitem__(self, key, value) def insert_after(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key], key_value) def insert_before(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key][0], key_value) </code></pre> <p>Using it like</p> <pre><code>ld = ListDict([(1,1), (2,2), (3,3)]) ld.insert_before(2, (1.5, 1.5)) </code></pre> <p>gives</p> <pre><code>File "...", line 35, in insert_before self.__insertion(self._OrderedDict__map[existing_key][0], key_value) AttributeError: 'ListDict' object has no attribute '_OrderedDict__map' </code></pre> <p>It works with python 2.7. What is the reason that it fails in python 3? Checking the source code of the <a href="https://hg.python.org/cpython/file/3.5/Lib/collections/__init__.py" rel="nofollow">OrderedDict</a> implementation shows that <code>self.__map</code> is used instead of <code>self._OrderedDict__map</code>. Changing the code to the usage of <code>self.__map</code> gives</p> <pre><code>AttributeError: 'ListDict' object has no attribute '_ListDict__map' </code></pre> <p>How come? And how can i make this work in python 3? OrderedDict uses the internal <code>__map</code> attribute to store a doubly linked list. 
So how can I access this attribute properly?</p>
2
2016-07-26T14:33:59Z
38,599,494
<p>Note that in Python 3, <code>items()</code> returns a view, so it has to be converted to a list before inserting:</p> <pre><code>from collections import OrderedDict od1 = OrderedDict([ ('a', 1), ('b', 2), ('d', 4), ]) items = list(od1.items()) items.insert(2, ('c', 3)) od2 = OrderedDict(items) print(od2) # OrderedDict([('a', 1), ('b', 2), ('c', 3), ('d', 4)]) </code></pre>
-2
2016-07-26T20:41:22Z
[ "python", "ordereddictionary" ]
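The rebuild idea above works on Python 3 once the items view is converted to a list. Wrapped in a small helper (the name `insert_at` is made up for illustration, not a library function):

```python
from collections import OrderedDict

# Rebuild the OrderedDict with the new pair spliced in at a positional
# index. O(n) per insert, but simple and Python-3-safe.
def insert_at(od, index, key, value):
    items = list(od.items())      # .items() is a view in Python 3
    items.insert(index, (key, value))
    return OrderedDict(items)

ld = OrderedDict([(1, 1), (2, 2), (3, 3)])
ld = insert_at(ld, 1, 1.5, 1.5)   # insert before key 2
print(list(ld.items()))           # [(1, 1), (1.5, 1.5), (2, 2), (3, 3)]
```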
How to implement insert for OrderedDict in python 3
38,592,779
<p>I want to insert an item into an OrderedDict at a certain position. Using the <a href="https://gist.github.com/jaredks/6276032" rel="nofollow">gist</a> of <a href="http://stackoverflow.com/a/18326914/1504082">this</a> SO answer i have the problem that it doesn't work on python 3.</p> <p>This is the implementation used</p> <pre><code>from collections import OrderedDict class ListDict(OrderedDict): def __init__(self, *args, **kwargs): super(ListDict, self).__init__(*args, **kwargs) def __insertion(self, link_prev, key_value): key, value = key_value if link_prev[2] != key: if key in self: del self[key] link_next = link_prev[1] self._OrderedDict__map[key] = link_prev[1] = link_next[0] = [link_prev, link_next, key] dict.__setitem__(self, key, value) def insert_after(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key], key_value) def insert_before(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key][0], key_value) </code></pre> <p>Using it like</p> <pre><code>ld = ListDict([(1,1), (2,2), (3,3)]) ld.insert_before(2, (1.5, 1.5)) </code></pre> <p>gives</p> <pre><code>File "...", line 35, in insert_before self.__insertion(self._OrderedDict__map[existing_key][0], key_value) AttributeError: 'ListDict' object has no attribute '_OrderedDict__map' </code></pre> <p>It works with python 2.7. What is the reason that it fails in python 3? Checking the source code of the <a href="https://hg.python.org/cpython/file/3.5/Lib/collections/__init__.py" rel="nofollow">OrderedDict</a> implementation shows that <code>self.__map</code> is used instead of <code>self._OrderedDict__map</code>. Changing the code to the usage of <code>self.__map</code> gives</p> <pre><code>AttributeError: 'ListDict' object has no attribute '_ListDict__map' </code></pre> <p>How come? And how can i make this work in python 3? OrderedDict uses the internal <code>__map</code> attribute to store a doubly linked list. 
So how can I access this attribute properly?</p>
2
2016-07-26T14:33:59Z
38,602,395
<p>I'm not sure you wouldn't be better served just keeping up with a separate list and dict in your code, but here is a stab at a pure Python implementation of such an object. This will be an order of magnitude slower than an actual <code>OrderedDict</code> in Python 3.5, which as I pointed out in my comment <a href="https://bugs.python.org/issue16991" rel="nofollow">has been rewritten in C</a>.</p> <pre><code>""" A list/dict hybrid; like OrderedDict with insert_before and insert_after """ import collections.abc class MutableOrderingDict(collections.abc.MutableMapping): def __init__(self, iterable_or_mapping=None, **kw): # This mimics dict's initialization and accepts the same arguments # Of course, you have to pass an ordered iterable or mapping unless you # want the order to be arbitrary. Garbage in, garbage out and all :) self.__data = {} self.__keys = [] if iterable_or_mapping is not None: try: iterable = iterable_or_mapping.items() except AttributeError: iterable = iterable_or_mapping for key, value in iterable: self.__keys.append(key) self.__data[key] = value for key, value in kw.items(): self.__keys.append(key) self.__data[key] = value def insert_before(self, key, new_key, value): try: self.__keys.insert(self.__keys.index(key), new_key) except ValueError: raise KeyError(key) from ValueError else: self.__data[new_key] = value def insert_after(self, key, new_key, value): try: self.__keys.insert(self.__keys.index(key) + 1, new_key) except ValueError: raise KeyError(key) from ValueError else: self.__data[new_key] = value def __getitem__(self, key): return self.__data[key] def __setitem__(self, key, value): self.__keys.append(key) self.__data[key] = value def __delitem__(self, key): del self.__data[key] self.__keys.remove(key) def __iter__(self): return iter(self.__keys) def __len__(self): return len(self.__keys) def __contains__(self, key): return key in self.__keys def __eq__(self, other): try: return (self.__data == dict(other.items()) and self.__keys == 
list(other.keys())) except AttributeError: return False def keys(self): for key in self.__keys: yield key def items(self): for key in self.__keys: yield key, self.__data[key] def values(self): for key in self.__keys: yield self.__data[key] def get(self, key, default=None): try: return self.__data[key] except KeyError: return default def pop(self, key, default=None): value = self.get(key, default) self.__delitem__(key) return value def popitem(self): try: return self.__data.pop(self.__keys.pop()) except IndexError: raise KeyError('%s is empty' % self.__class__.__name__) def clear(self): self.__keys = [] self.__data = {} def update(self, mapping): for key, value in mapping.items(): self.__keys.append(key) self.__data[key] = value def setdefault(self, key, default): try: return self[key] except KeyError: self[key] = default return self[key] def __repr__(self): return 'MutableOrderingDict(%s)' % ', '.join(('%r: %r' % (k, v) for k, v in self.items())) </code></pre> <p>I ended up implementing the whole <code>collections.abc.MutableMapping</code> contract because none of the methods were very long, but you probably won't use all of them. In particular, <code>__eq__</code> and <code>popitem</code> are a little arbitrary. I changed your signature on the <code>insert_*</code> methods to a 4-argument one that feels a little more natural to me. Final note: Only tested on Python 3.5. Certainly will not work on Python 2 without some (minor) changes.</p>
1
2016-07-27T01:44:40Z
[ "python", "ordereddictionary" ]
How to implement insert for OrderedDict in python 3
38,592,779
<p>I want to insert an item into an OrderedDict at a certain position. Using the <a href="https://gist.github.com/jaredks/6276032" rel="nofollow">gist</a> of <a href="http://stackoverflow.com/a/18326914/1504082">this</a> SO answer i have the problem that it doesn't work on python 3.</p> <p>This is the implementation used</p> <pre><code>from collections import OrderedDict class ListDict(OrderedDict): def __init__(self, *args, **kwargs): super(ListDict, self).__init__(*args, **kwargs) def __insertion(self, link_prev, key_value): key, value = key_value if link_prev[2] != key: if key in self: del self[key] link_next = link_prev[1] self._OrderedDict__map[key] = link_prev[1] = link_next[0] = [link_prev, link_next, key] dict.__setitem__(self, key, value) def insert_after(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key], key_value) def insert_before(self, existing_key, key_value): self.__insertion(self._OrderedDict__map[existing_key][0], key_value) </code></pre> <p>Using it like</p> <pre><code>ld = ListDict([(1,1), (2,2), (3,3)]) ld.insert_before(2, (1.5, 1.5)) </code></pre> <p>gives</p> <pre><code>File "...", line 35, in insert_before self.__insertion(self._OrderedDict__map[existing_key][0], key_value) AttributeError: 'ListDict' object has no attribute '_OrderedDict__map' </code></pre> <p>It works with python 2.7. What is the reason that it fails in python 3? Checking the source code of the <a href="https://hg.python.org/cpython/file/3.5/Lib/collections/__init__.py" rel="nofollow">OrderedDict</a> implementation shows that <code>self.__map</code> is used instead of <code>self._OrderedDict__map</code>. Changing the code to the usage of <code>self.__map</code> gives</p> <pre><code>AttributeError: 'ListDict' object has no attribute '_ListDict__map' </code></pre> <p>How come? And how can i make this work in python 3? OrderedDict uses the internal <code>__map</code> attribute to store a doubly linked list. 
So how can I access this attribute properly?</p>
2
2016-07-26T14:33:59Z
40,118,488
<p>Since Python 3.2, <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict.move_to_end" rel="nofollow"><code>move_to_end</code></a> can be used to move items around in an <a href="https://docs.python.org/3/library/collections.html#ordereddict-objects" rel="nofollow"><code>OrderedDict</code></a>. The following code will implement the <code>insert</code> functionality by moving all items after the provided index to the end.</p> <p>Note that this isn't very efficient and should be used sparingly (if at all).</p> <pre><code>def ordered_dict_insert(ordered_dict, index, key, value): if key in ordered_dict: raise KeyError("Key already exists") if index &lt; 0 or index &gt; len(ordered_dict): raise IndexError("Index out of range") keys = list(ordered_dict.keys())[index:] ordered_dict[key] = value for k in keys: ordered_dict.move_to_end(k) </code></pre> <p>There are obvious optimizations and improvements that could be made, but that's the general idea.</p>
0
2016-10-18T21:35:30Z
[ "python", "ordereddictionary" ]
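A variant of the `move_to_end()` approach that mirrors the `insert_before(existing_key, ...)` call from the question; the helper name and signature here are illustrative, not from any library:

```python
from collections import OrderedDict

# Add the new pair at the end, then rotate every key from the anchor key
# onwards to the end, which leaves the new pair just before the anchor.
def insert_before(od, existing_key, key, value):
    if existing_key not in od:
        raise KeyError(existing_key)
    keys = list(od.keys())
    tail = keys[keys.index(existing_key):]
    od[key] = value
    for k in tail:
        od.move_to_end(k)

ld = OrderedDict([(1, 1), (2, 2), (3, 3)])
insert_before(ld, 2, 1.5, 1.5)
print(list(ld.items()))   # [(1, 1), (1.5, 1.5), (2, 2), (3, 3)]
```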
Multiple quotechars in pandas
38,592,786
<p>I want to parse an nginx access log using the pandas library's read_csv function. I'm using the following code:</p> <pre><code>pd.read_csv('lb-access_cache.log', delim_whitespace=True, quotechar='"') </code></pre> <p>Would it be possible to specify more than one quotechar, so that the elements inside brackets or square brackets are also treated as columns?</p> <p>For example, in a string like the following I want to obtain 3 columns:</p> <p>hello "world hello" [world is beautifull]</p>
1
2016-07-26T14:34:15Z
38,593,025
<p>This will do; you need to use a regex in place of <code>sep</code>:</p> <pre><code>df = pd.read_csv(log_file, sep=r'\s(?=(?:[^"]*"[^"]*")*[^"]*$)(?![^\[]*\])', engine='python', usecols=[0, 3, 4, 5, 6, 7, 8], names=['ip', 'time', 'request', 'status', 'size', 'referer', 'user_agent'], na_values='-', header=None ) </code></pre>
3
2016-07-26T14:43:29Z
[ "python", "csv", "pandas" ]
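Since `engine='python'` hands the `sep` pattern to Python's own regex engine, its behaviour can be sanity-checked on a sample line with the stdlib `re` module alone (the log line below is made up):

```python
import re

# Split on whitespace only when it is outside double quotes and outside
# [...] brackets -- the same pattern passed to pandas above.
pattern = r'\s(?=(?:[^"]*"[^"]*")*[^"]*$)(?![^\[]*\])'

# A made-up sample line in common nginx/combined log format.
line = ('127.0.0.1 - - [26/Jul/2016:14:39:06 +0000] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')

fields = re.split(pattern, line)
print(fields)   # 9 fields; quoted and bracketed groups stay intact
```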
Precision when converting int32 to other types
38,592,794
<p>The code below</p> <pre><code>import numpy as np i = np.iinfo(np.int32).max # 2147483647 x = np.asanyarray(i) # array([2147483647]) dtypes = (np.int32, np.float16, np.float32, np.float64) for dtp in dtypes: print('%s : %s'%(dtp, x.astype(dtp))) </code></pre> <p>outputs</p> <pre><code>&lt;type 'numpy.int32'&gt; : 2147483647 &lt;type 'numpy.float16'&gt; : inf &lt;type 'numpy.float32'&gt; : 2147483648.0 &lt;type 'numpy.float64'&gt; : 2147483647.0 </code></pre> <p>Now we see <code>2147483648.0</code> for <code>numpy.float32</code> and <code>2147483647.0</code> for <code>numpy.float64</code>. I googled and found <a href="https://en.wikipedia.org/wiki/Single-precision_floating-point_format" rel="nofollow">here</a> </p> <blockquote> <p>All integers with six or fewer significant decimal digits can be converted to an IEEE 754 floating point value without loss of precision, some integers up to nine significant decimal digits can be converted to an IEEE 754 floating point value without loss of precision, but no more than nine significant decimal digits can be stored. As an example, the 32-bit integer 2,147,483,647 converts to 2,147,483,650 in IEEE 754 form.</p> </blockquote> <p>which mentioned another value <code>2,147,483,650</code>.</p> <p>I'm not clear about how this happens. <code>float32</code> is valid up to <code>3.402823e38</code>, much beyond the max <code>int32</code>. And <code>float64</code> can give the exact value.</p> <pre><code>--------------------------------------------------------------- </code></pre> <p>Emmm..... after reading the comments below, I began to read more stuff about how int and float numbers are represented in binary. 
I still haven't made this very clear.</p> <p>Maybe someone can explain how to determine the precision/resolution of float numbers in more general terms, which would also help in understanding the problem in the original question.</p> <pre><code>print np.finfo(np.float32) [out]: Machine parameters for float32 --------------------------------------------------------------- precision= 6 resolution= 1.0000000e-06 machep= -23 eps= 1.1920929e-07 negep = -24 epsneg= 5.9604645e-08 minexp= -126 tiny= 1.1754944e-38 maxexp= 128 max= 3.4028235e+38 nexp = 8 min= -max --------------------------------------------------------------- </code></pre>
3
2016-07-26T14:34:32Z
38,594,632
<p>Floating point values consist of two parts, an integer and an exponent. To get the value, you take 2 to the power of the exponent and multiply it by the integer part.</p> <p>For an IEEE 32-bit floating point value, the integer part is <a href="https://en.wikipedia.org/wiki/IEEE_floating_point#Basic_and_interchange_formats" rel="nofollow">only 24 bits long</a>. Larger values can be obtained by compensating with the exponent, but only if their bottom bits beyond the 24th are all zero.</p> <p>2147483647 does not have zero in the bottom bits, but 2147483648 does.</p>
1
2016-07-26T15:54:44Z
[ "python", "numpy", "floating-point", "precision", "data-type-conversion" ]
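The same rounding effect can be demonstrated without numpy by packing the integer into IEEE 754 single and double precision with the stdlib `struct` module:

```python
import struct

# Round-trip the integer through IEEE 754 single ('f') and double ('d')
# precision. float32 has a 24-bit significand, so 2**31 - 1 is not
# representable and rounds up to 2**31; float64 (53-bit significand)
# holds it exactly.
i = 2**31 - 1   # 2147483647, the np.int32 maximum

as_f32 = struct.unpack('<f', struct.pack('<f', float(i)))[0]
as_f64 = struct.unpack('<d', struct.pack('<d', float(i)))[0]

print(as_f32)   # 2147483648.0
print(as_f64)   # 2147483647.0
```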
XLWings / Excel problems handling "multi-cell array formulas"
38,592,907
<h1>scope</h1> <p>I'm using XLWings for the obvious: to retrieve data from external sources, do a bit of transformation in Python, and then pump it into Excel.</p> <p>I'm using UDFs to do so (Windows 10, Excel 2016 32bit).</p> <p>The only way I'm aware of is to use "multi-cell array formulas" to add the 2-dimensional data (Pandas dataframes) into Excel worksheets.</p> <h1>issue</h1> <p>"Multi-cell array formulas" seem to have multiple limitations for which I have not found a solution, namely how to:</p> <ul> <li>handle dynamic sizes of the returned dataframe (which is the rule and not the exception, see my <a href="https://github.com/ZoomerAnalytics/xlwings/issues/511#issuecomment-234969441" rel="nofollow" title="comment on github">comment on github</a> too)</li> <li>format as tables (not possible) to apply coloring, sorting and filtering</li> <li>add to the data model (not possible) for e.g. joining</li> <li>(what else is not working?)</li> </ul> <h1>question</h1> <p>How do others handle this?</p>
1
2016-07-26T14:39:06Z
39,591,649
<p>xlwings v0.10 introduces dynamic arrays, see the <a href="http://docs.xlwings.org/en/stable/whatsnew.html#dynamic-array-formulas" rel="nofollow">release notes</a>.</p> <p>Now you can do:</p> <pre><code>import xlwings as xw import numpy as np @xw.func @xw.ret(expand='table') def dynamic_array(r, c): return np.random.randn(int(r), int(c)) </code></pre> <p>and you can use that formula by simply writing it into the top left cell, without the need to make it an Excel array formula.</p>
1
2016-09-20T10:23:13Z
[ "python", "excel", "pandas", "xlwings" ]
How do you setup simple timer between two times when the other time is the next day?
38,592,908
<p>Python noob here</p> <pre><code>from datetime import datetime, time now = datetime.now() now_time = now.time() if now_time &gt;= time(10,30) and now_time &lt;= time(13,30): print "yes, within the interval" </code></pre> <p>I would like the timer to work between 10,30 AM today and 10 AM the next day. Changing time(13,30) to time(10,00) will not work, because I need to tell python 10,00 is the next day. I should use datetime function but don't know how. Any tips or examples appreciated.</p>
0
2016-07-26T14:39:06Z
38,593,155
<p>The <code>combine</code> method on the <code>datetime</code> class will help you a lot, as will the <code>timedelta</code> class. Here's how you would use them:</p> <pre><code>from datetime import datetime, timedelta, date, time today = date.today() tomorrow = today + timedelta(days=1) interval_start = datetime.combine(today, time(10,30)) interval_end = datetime.combine(tomorrow, time(10,00)) time_to_check = datetime.now() # Or any other datetime if interval_start &lt;= time_to_check &lt;= interval_end: print "Within the interval" </code></pre> <p>Notice how I did the comparison. Python lets you "nest" comparisons like that, which is usually more succinct than writing <code>if start &lt;= x and x &lt;= end</code>.</p> <p>P.S. Read <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">https://docs.python.org/2/library/datetime.html</a> for more details about these classes.</p>
1
2016-07-26T14:49:01Z
[ "python" ]
How do you setup simple timer between two times when the other time is the next day?
38,592,908
<p>Python noob here</p> <pre><code>from datetime import datetime, time now = datetime.now() now_time = now.time() if now_time &gt;= time(10,30) and now_time &lt;= time(13,30): print "yes, within the interval" </code></pre> <p>I would like the timer to work between 10,30 AM today and 10 AM the next day. Changing time(13,30) to time(10,00) will not work, because I need to tell python 10,00 is the next day. I should use datetime function but don't know how. Any tips or examples appreciated.</p>
0
2016-07-26T14:39:06Z
38,593,160
<p>Consider this:</p> <pre><code>from datetime import datetime, timedelta now = datetime.now() today_10 = now.replace(hour=10, minute=30, second=0, microsecond=0) tomorrow_10 = (now + timedelta(days=1)).replace(hour=10, minute=0, second=0, microsecond=0) if today_10 &lt;= now &lt;= tomorrow_10: print "yes, within the interval" </code></pre> <p>The logic is to create 3 <code>datetime</code> objects: one for today 10:30 AM, one for right now and one for tomorrow 10 AM. Then simply checking for the condition. (Zeroing <code>second</code> and <code>microsecond</code> keeps the endpoints exact, rather than inheriting whatever seconds <code>now</code> happens to carry.)</p>
1
2016-07-26T14:49:13Z
[ "python" ]
How do you setup simple timer between two times when the other time is the next day?
38,592,908
<p>Python noob here</p> <pre><code>from datetime import datetime, time now = datetime.now() now_time = now.time() if now_time &gt;= time(10,30) and now_time &lt;= time(13,30): print "yes, within the interval" </code></pre> <p>I would like the timer to work between 10,30 AM today and 10 AM the next day. Changing time(13,30) to time(10,00) will not work, because I need to tell python 10,00 is the next day. I should use datetime function but don't know how. Any tips or examples appreciated.</p>
0
2016-07-26T14:39:06Z
38,593,314
<p>An alternative to creating time objects for the sake of comparison is to simply query the <code>hour</code> and <code>minute</code> attributes (note <code>&gt;=</code>, so that 10:30 itself counts as inside the interval):</p> <pre><code>now = datetime.now().time() if now.hour&lt;10 or now.hour&gt;10 or (now.hour==10 and now.minute&gt;=30): print('hooray') </code></pre>
0
2016-07-26T14:55:59Z
[ "python" ]
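Taken together, the three answers above reduce to one pattern: anchor the interval's endpoints to concrete dates and compare full `datetime` objects. A small sketch that packages this as a rolling-window check (the function name `in_window` and the idea of deriving the start date from the tested value are my own, not from any answer):

```python
from datetime import datetime, timedelta, time

def in_window(dt, start_t=time(10, 30), end_t=time(10, 0)):
    """True if dt falls between start_t on its own day (or the previous
    day, when dt is earlier than start_t) and end_t on the following day."""
    start = datetime.combine(dt.date(), start_t)
    if dt < start:
        # dt is before today's 10:30, so the relevant window began yesterday
        start -= timedelta(days=1)
    end = datetime.combine(start.date() + timedelta(days=1), end_t)
    return start <= dt <= end
```

For example, 11:00 AM and 9:59 AM the next day are inside the window, while 10:15 AM (between the 10:00 end and the 10:30 start) is not.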
Django get_or_create() is creating multiple rows with the same id
38,592,999
<p>I have a get_or_create() in my django app that's creating duplicate rows and assigning them the same id.</p> <pre><code>stock_search, created = SearchRequest.objects.get_or_create(quote=quote, salesperson=user) </code></pre> <p>count() doesn't count these rows more than once but any queries I run on the data returns the duplicated rows.</p> <p>Any ideas what could be causing this to happen?</p> <p>Model Definition</p> <pre><code>class SearchRequest(models.Model): salesperson = models.ForeignKey(User, blank=True, null=True, related_name='sales') purchaser = models.ManyToManyField(User, blank=True, null=True, related_name='purchaser') datesent = models.DateTimeField(auto_now_add=False, verbose_name=("Date Sent"), blank=True, null=True) notes = models.TextField(default='', blank=True, null=True) full_search = models.BooleanField(verbose_name=("Full Search"), blank=True, default=False) quote = models.ForeignKey('Quote.Quote', blank=True, null=True) lead_time = models.ForeignKey('Logistics.LeadTime', blank=True, null=True) call_list = models.BooleanField(verbose_name=("Call List"), blank=True, default=False) email_list = models.BooleanField(verbose_name=("Email List"), blank=True, default=False) accepted = models.ForeignKey(User, blank=True, null=True, related_name='search_accepted') dateaccepted = models.DateTimeField(auto_now_add=False, verbose_name=("Date Accepted"), blank=True, null=True) </code></pre> <p>Cheers</p>
-3
2016-07-26T14:42:23Z
38,601,454
<p>As mentioned in the <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#get-or-create" rel="nofollow">docs</a>, you need a unique index for get_or_create to work</p> <blockquote> <p>This method is atomic assuming correct usage, correct database configuration, and correct behavior of the underlying database. However, if uniqueness is not enforced at the database level for the kwargs used in a get_or_create call (see unique or unique_together), this method is prone to a race-condition which can result in multiple rows with the same parameters being inserted simultaneously.</p> </blockquote> <p>So your class needs </p> <pre><code>class SearchRequest(models.Model): class Meta: unique_together = ('quote', 'salesperson') </code></pre> <p>which should be placed after the field definitions. Note that <code>unique_together</code> is an attribute assignment, not a function call.</p>
1
2016-07-26T23:28:54Z
[ "python", "django", "postgresql", "python-2.7" ]
alembic db migration concurrently
38,593,034
<p>I have multiple flask apps connect to the same db backend for high availability purpose. There is a problem when app instances try to migrate db concurrently using alembic! What's the best way to prevent it?</p>
1
2016-07-26T14:43:44Z
38,596,490
<p>Alembic <a href="https://groups.google.com/forum/#!msg/sqlalchemy-alembic/I2AAEUdF2dQ/mM2dokcZCAAJ" rel="nofollow">does not</a> support running migrations concurrently. Your best bet is to run the migration once, from a single place (for example a deploy step), rather than letting every app instance attempt it.</p>
1
2016-07-26T17:37:01Z
[ "python", "flask", "concurrency", "alembic" ]
lxml working with namespaces
38,593,176
<p>I'm working with a pretty complex XML like this:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;!-- ***** Configuration Data exported at 20160623T110335 ***** --&gt; &lt;impex:ExportData xmlns:impex="urn:swift:saa:xsd:impex"&gt; &lt;!-- *** Exported Data for Operator *** --&gt; &lt;OperatorData xmlns="urn:swift:saa:xsd:impex:operator"&gt; &lt;ns2:OperatorDefinition xmlns="urn:swift:saa:xsd:operatorprofile" xmlns:ns2="urn:swift:saa:xsd:impex:operator" xmlns:ns3="urn:swift:saa:xsd:unit" xmlns:ns4="urn:swift:saa:xsd:licenseddestination" xmlns:ns5="urn:swift:saa:xsd:operator" xmlns:ns6="urn:swift:saa:xsd:authenticationservergroup"&gt; &lt;ns2:Operator&gt; &lt;ns5:Identifier&gt; &lt;ns5:Name&gt;jdoe&lt;/ns5:Name&gt; &lt;/ns5:Identifier&gt; &lt;ns5:Description&gt;John Doe&lt;/ns5:Description&gt; &lt;ns5:OperatorType&gt;HUMAN&lt;/ns5:OperatorType&gt; &lt;ns5:AuthenticationType&gt;LDAP&lt;/ns5:AuthenticationType&gt; &lt;ns5:AuthenticationServerGroup&gt; &lt;ns6:Type&gt;LDAP&lt;/ns6:Type&gt; &lt;ns6:Name&gt;LDAP_GROUP1&lt;/ns6:Name&gt; &lt;/ns5:AuthenticationServerGroup&gt; &lt;ns5:LdapUserId&gt;jdoe&lt;/ns5:LdapUserId&gt; &lt;ns5:Profile&gt; &lt;Name&gt;DEV Users&lt;/Name&gt; &lt;/ns5:Profile&gt; &lt;ns5:Unit&gt; &lt;ns3:Name&gt;None&lt;/ns3:Name&gt; &lt;/ns5:Unit&gt; &lt;/ns2:Operator&gt; &lt;/ns2:OperatorDefinition&gt; &lt;/OperatorData&gt; &lt;/impex:ExportData&gt; </code></pre> <p>In this XML there are numerous <code>&lt;ns2:OperatorDefinition&gt;</code> elements like the one I included. I'm having a hard time understanding how to pull out something like <code>&lt;ns5:Description&gt;</code> using lxml. All the examples for namespaces I'm finding are not this complex.</p> <p>I'm trying to simply find the tags doing something like this - </p> <pre><code>from lxml import etree doc = etree.parse('c:/robin/Operators_out.xml') r = doc.xpath('/x:OperatorData/ns2:OperatorDefinition', namespaces={'x': 'urn:swift:saa:xsd:impex:operator'}) print len(r) print r[0].text print r[0].tag </code></pre> <p>I get <code>Undefined namespace prefix</code>.</p>
1
2016-07-26T14:49:47Z
38,593,480
<p>You may not need namespaces for your use-case, <a href="http://stackoverflow.com/q/18159221/771848">remove them</a> to make parsing easier:</p> <pre><code>from lxml import etree, objectify tree = etree.parse("input.xml") root = tree.getroot() # remove namespaces ---- for elem in root.getiterator(): if not hasattr(elem.tag, 'find'): continue i = elem.tag.find('}') if i &gt;= 0: elem.tag = elem.tag[i+1:] objectify.deannotate(root, cleanup_namespaces=True) # ---- name = root.findtext(".//OperatorDefinition/Operator/Identifier/Name") print(name) </code></pre> <p>Prints <code>jdoe</code>.</p>
1
2016-07-26T15:03:02Z
[ "python", "python-2.7", "lxml" ]
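For the record, the `Undefined namespace prefix` error in the question happens because the XPath uses the `ns2:` prefix without an entry for it in the `namespaces` mapping; every prefix used in the expression must be declared there, though the prefix names themselves are arbitrary and need not match the document's own. A minimal sketch of the mapping idea using the stdlib `xml.etree.ElementTree` on a trimmed copy of the question's XML (the same style of `namespaces` dict works with `lxml`'s `xpath`):

```python
import xml.etree.ElementTree as ET

XML = """<impex:ExportData xmlns:impex="urn:swift:saa:xsd:impex">
  <OperatorData xmlns="urn:swift:saa:xsd:impex:operator">
    <ns2:OperatorDefinition xmlns:ns2="urn:swift:saa:xsd:impex:operator"
                            xmlns:ns5="urn:swift:saa:xsd:operator">
      <ns2:Operator>
        <ns5:Description>John Doe</ns5:Description>
      </ns2:Operator>
    </ns2:OperatorDefinition>
  </OperatorData>
</impex:ExportData>"""

# Local prefix names: they only have to map to the right namespace URIs.
NS = {
    'op': 'urn:swift:saa:xsd:impex:operator',
    'o':  'urn:swift:saa:xsd:operator',
}

root = ET.fromstring(XML)
descriptions = [e.text for e in root.findall('.//op:Operator/o:Description', NS)]
print(descriptions)  # ['John Doe']
```

This avoids the namespace-stripping step entirely when you are willing to spell out the URIs once.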
python error:list.remove(x) x not in list
38,593,206
<p>I was trying to compare integer from a list and remove it from the list and append it on an array with python. Every time I run my code there is an error occured. "list.remove(x) x not in list". I can't figure it out whats happening on such an error. Can anybody give me some advice? Thank you.</p> <pre><code>def maxcompare(n): lis = map(int, n.split(',')) threeans = [] for i in range(3): maxnum = [0] for j in n[1:]: if j &gt; maxnum: maxnum = j lis.remove(maxnum) threeans.append(maxnum) return maxnum </code></pre> <p>comparing integers print out the three biggest integers maxcompare('2,8,9,7,6,10,5')</p>
-1
2016-07-26T14:50:57Z
38,593,653
<pre><code>&gt;&gt;&gt; def maxcompare(n): ... return sorted( map(int,n.split(',')) )[-3:] ... &gt;&gt;&gt; maxcompare('2,8,9,7,6,10,5') [8, 9, 10] </code></pre>
1
2016-07-26T15:10:25Z
[ "python", "python-2.7" ]
python error:list.remove(x) x not in list
38,593,206
<p>I was trying to compare integer from a list and remove it from the list and append it on an array with python. Every time I run my code there is an error occured. "list.remove(x) x not in list". I can't figure it out whats happening on such an error. Can anybody give me some advice? Thank you.</p> <pre><code>def maxcompare(n): lis = map(int, n.split(',')) threeans = [] for i in range(3): maxnum = [0] for j in n[1:]: if j &gt; maxnum: maxnum = j lis.remove(maxnum) threeans.append(maxnum) return maxnum </code></pre> <p>comparing integers print out the three biggest integers maxcompare('2,8,9,7,6,10,5')</p>
-1
2016-07-26T14:50:57Z
38,593,684
<p>There are a few problems with your code. Try this:</p> <pre><code>def maxcompare(n): lis = map(int, n.split(',')) threeans = [] for i in range(3): maxnum = lis[0] for j in lis[1:]: # iterate lis, not n if int(j) &gt; maxnum: # cast j to int maxnum = j lis.remove(maxnum) threeans.append(maxnum) return threeans # return threeans </code></pre> <p>This way, <code>maxcompare('2,8,9,7,6,10,5')</code> will return <code>[10, 9, 8]</code>. However, you could do the same much easier using builtin functions such as <a href="https://docs.python.org/3.5/library/functions.html#max" rel="nofollow"><code>max</code></a> or <a href="https://docs.python.org/3.5/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a>, e.g. you could replace the four lines for finding <code>maxnum</code> with just <code>maxnum = max(lis)</code>, or just sort the entire list in reverse order and return the first three elements.</p>
0
2016-07-26T15:11:41Z
[ "python", "python-2.7" ]
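Both answers above boil down to "sort, then slice". For long inputs, `heapq.nlargest` selects the top three without a full sort; a small sketch, reusing the question's function name:

```python
import heapq

def maxcompare(n):
    """Return the three largest integers from a comma-separated string,
    largest first."""
    return heapq.nlargest(3, (int(x) for x in n.split(',')))

print(maxcompare('2,8,9,7,6,10,5'))  # [10, 9, 8]
```

`heapq.nlargest` returns its results in descending order, so no extra reversal is needed.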