matplotlib imshow - default colour normalisation
I have consistently had problems with my colour maps when using imshow, some colours seem to just become black. I have finally realised that imshow seems to, by default, normalise the matrix of floating point values I give it. I would have expected an array such as [[0,0.25],[0.5,0.75]] to display the appropriate colours from the map, corresponding to those absolute values but the 0.75 will be interpreted as a 1. In the extreme case, an N x N array of 0.2 (for example), would just produce one big black square, rather than whatever one would expect 0.2 to correspond to in the colour map (perhaps a 20% grey). Is there a way to prevent this behaviour? It is particularly annoying when custom colour maps have many discontinuities, a small change in scale could cause all the colours to completely change.
Just specify vmin=0, vmax=1. By default, imshow normalizes the data to its min and max. You can control this with either the vmin and vmax arguments or with the norm argument (if you want a non-linear scaling). As a quick example:

import matplotlib.pyplot as plt

data = [[0, 0.25], [0.5, 0.75]]

fig, ax = plt.subplots()
im = ax.imshow(data, cmap=plt.get_cmap('hot'), interpolation='nearest',
               vmin=0, vmax=1)
fig.colorbar(im)
plt.show()
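If you want the non-linear scaling mentioned above, here is a minimal sketch using a logarithmic norm (the data values are made up for illustration):

import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

data = [[0.001, 0.01], [0.1, 1.0]]  # hypothetical data spanning several decades

fig, ax = plt.subplots()
# LogNorm maps data to colors on a logarithmic scale instead of a linear one
im = ax.imshow(data, cmap=plt.get_cmap('hot'), interpolation='nearest',
               norm=LogNorm(vmin=0.001, vmax=1))
fig.colorbar(im)
plt.show()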
How is HDF5 different from a folder with files?
I'm working on an open source project dealing with adding metadata to folders. The provided (Python) API lets you browse and access metadata like it was just another folder. Because it is just another folder. \folder\.meta\folder\somedata.json Then I came across HDF5 and its derivation Alembic. Reading up on HDF5 in the book Python and HDF5, I was looking for benefits to using it compared to using files in folders, but most of what I came across spoke about the benefits of a hierarchical file-format in terms of its simplicity in adding data via its API:

>>> import h5py
>>> f = h5py.File("weather.hdf5")
>>> f["/15/temperature"] = 21

Or its ability to read only certain parts of it upon request (e.g. random access), and parallel execution of a single HDF5 file (e.g. for multiprocessing). You could mount HDF5 files, https://github.com/zjttoefs/hdfuse5 It even boasts a strong yet simple foundation concept of Groups and Datasets, which from wiki reads: Datasets, which are multidimensional arrays of a homogeneous type; Groups, which are container structures which can hold datasets and other groups. Replace Dataset with File and Group with Folder and the whole feature-set sounds to me like what files in folders are already fully capable of doing. For every benefit I came across, not one stood out as being exclusive to HDF5. So my question is, if I were to give you one HDF5 file and one folder with files, both with identical content, in which scenario would HDF5 be better suited? Edit: Having gotten some responses about the portability of HDF5. It sounds lovely and all, but I still haven't been given an example, a scenario, in which an HDF5 file would outdo a folder with files. Why would someone consider using HDF5 when a folder is readable on any computer, any file-system, over a network, supports "parallel I/O", and is readable by humans without an HDF5 interpreter? I would go as far as to say that a folder with files is far more portable than any HDF5. Edit 2: Thucydides411 just gave an example of a scenario where portability matters: http://stackoverflow.com/a/28512028/478949 I think what I'm taking away from the answers in this thread is that HDF5 is well suited for when you need the organisational structure of files and folders, as in the example scenario above, with lots (millions) of small (~1 byte) data structures, like individual numbers or strings. It makes up for what file-systems lack by providing a "sub file-system" favouring the small and many as opposed to the few and large. In computer graphics, we use it to store geometric models and arbitrary data about individual vertices, which seems to align quite well with its use in the scientific community.
As someone who developed a scientific project that went from using folders of files to HDF5, I think I can shed some light on the advantages of HDF5. When I began my project, I was operating on small test datasets, and producing small amounts of output, in the range of kilobytes. I began with the easiest data format, tables encoded as ASCII. For each object I processed, I produced one ASCII table. I began applying my code to groups of objects, which meant writing multiple ASCII tables at the end of each run, along with an additional ASCII table containing output related to the entire group. For each group, I now had a folder that looked like:

+ group
|
|-- object 1
|
|-- object 2
|
|-- ...
|
|-- object N
|
|-- summary

At this point, I began running into my first difficulties. ASCII files are very slow to read and write, and they don't pack numeric information very efficiently, because each digit takes a full byte to encode, rather than ~3.3 bits. So I switched over to writing each object as a custom binary file, which sped up I/O and decreased file size. As I scaled up to processing large numbers (tens of thousands to millions) of groups, I suddenly found myself dealing with an extremely large number of files and folders. Having too many small files can be a problem for many filesystems (many filesystems are limited in the number of files they can store, regardless of how much disk space there is). I also began to find that when I would try to do post-processing on my entire dataset, the disk I/O to read many small files was starting to take up an appreciable amount of time. I tried to solve these problems by consolidating my files, so that I only produced two files for each group:

+ group 1
|
|-- objects
|
|-- summary

+ group 2
|
|-- objects
|
|-- summary

...

I also wanted to compress my data, so I began creating .tar.gz files for collections of groups. At this point, my whole data scheme was getting very cumbersome, and there was a risk that if I wanted to hand my data to someone else, it would take a lot of effort to explain to them how to use it. The binary files that contained the objects, for example, had their own internal structure that existed only in a README file in a repository and on a pad of paper in my office. Whoever wanted to read one of my combined object binary files would have to know the byte offset, type and endianness of each metadata entry in the header, and the byte offset of every object in the file. If they didn't, the file would be gibberish to them. The way I was grouping and compressing data also posed problems. Let's say I wanted to find one object. I would have to locate the .tar.gz file it was in, unzip the entire contents of the archive to a temporary folder, navigate to the group I was interested in, and retrieve the object with my own custom API to read my binary files. After I was done, I would delete the temporarily unzipped files. It was not an elegant solution. At this point, I decided to switch to a standard format. HDF5 was attractive for a number of reasons. Firstly, I could keep the overall organization of my data into groups, object datasets and summary datasets. Secondly, I could ditch my custom binary file I/O API, and just use a multidimensional array dataset to store all the objects in a group. I could even create arrays of more complicated datatypes, like arrays of C structs, without having to meticulously document the byte offsets of every entry.
Next, HDF5 has chunked compression, which can be completely transparent to the end user of the data. Because the compression is chunked, if I think users are going to want to look at individual objects, I can have each object compressed in a separate chunk, so that only the part of the dataset the user is interested in needs to be decompressed. Chunked compression is an extremely powerful feature. Finally, I can just give a single file to someone now, without having to explain much about how it's internally organized. The end user can read the file in Python, C or Fortran, or inspect it with h5ls on the command line or the HDFView GUI, and see what's inside. That wasn't possible with my custom binary format, not to mention my .tar.gz collections. Sure, it's possible to replicate everything you can do with HDF5 with folders, ASCII and custom binary files. That's what I originally did, but it became a major headache, and in the end, HDF5 did everything I was kluging together in an efficient and portable way.
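As a rough sketch of the chunked-compression idea described above (file name, dataset names and sizes are illustrative, not from the project):

import numpy as np
import h5py

# 1000 objects, 64 values each; one compressed chunk per object, so
# reading a single object decompresses only that one chunk
objects = np.random.randn(1000, 64)
with h5py.File('group.hdf5', 'w') as f:
    f.create_dataset('objects', data=objects,
                     chunks=(1, 64), compression='gzip')
    f['summary'] = objects.mean(axis=0)

with h5py.File('group.hdf5', 'r') as f:
    obj = f['objects'][42]  # only chunk 42 is decompressed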
Opposite of melt in python pandas
I cannot figure out how to do "reverse melt" using Pandas in python. This is my starting data:

import pandas as pd
from StringIO import StringIO

origin = pd.read_table(StringIO('''label type value
x a 1
x b 2
x c 3
y a 4
y b 5
y c 6
z a 7
z b 8
z c 9'''))

origin
Out[5]:
  label type  value
0     x    a      1
1     x    b      2
2     x    c      3
3     y    a      4
4     y    b      5
5     y    c      6
6     z    a      7
7     z    b      8
8     z    c      9

This is what I would like to have:

label  a  b  c
x      1  2  3
y      4  5  6
z      7  8  9

I'm sure there is an easy way to do this, but I don't know how.
There are a few ways; using .pivot:

>>> origin.pivot(index='label', columns='type')['value']
type   a  b  c
label
x      1  2  3
y      4  5  6
z      7  8  9

[3 rows x 3 columns]

using pivot_table:

>>> origin.pivot_table(values='value', index='label', columns='type')
value
type   a  b  c
label
x      1  2  3
y      4  5  6
z      7  8  9

[3 rows x 3 columns]

or .groupby followed by .unstack:

>>> origin.groupby(['label', 'type'])['value'].aggregate('mean').unstack()
type   a  b  c
label
x      1  2  3
y      4  5  6
z      7  8  9

[3 rows x 3 columns]
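A hedged side note on choosing between them: .pivot raises an error if a (label, type) pair occurs more than once, while pivot_table (and the groupby route) aggregates the duplicates, taking the mean by default. A small sketch:

# duplicate one (label, type) pair; .pivot would now raise
# "Index contains duplicate entries, cannot reshape"
dup = origin.append(origin.iloc[[0]])

# pivot_table averages the duplicated entries instead
dup.pivot_table(values='value', index='label', columns='type')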
How to find max value in a numpy array column?
I can find quite a few permutations of this question, but not this (rather simple) one: how do I find the maximum value of a specific column of a numpy array (in the most pythonic way)? a = array([[10, 2], [3, 4], [5, 6]]) What I want is the max value in the first column and second column (these are x,y coordinates and I eventually need the height and width of each shape), so max x coordinate is 10 and max y coordinate is 6. I've tried: xmax = numpy.amax(a,axis=0) ymax = numpy.amax(a,axis=1) but these yield array([10, 6]) array([10, 4, 6]) ...not what I expected. My solution is to use slices: xmax = numpy.max(a[:,0]) ymax = numpy.max(a[:,1]) Which works but doesn't seem to the best approach. Suggestions?
Just unpack the list: In [273]: xmax, ymax = a.max(axis=0) In [274]: print xmax, ymax #10 6
How to calculate the sentence similarity using word2vec model of gensim with python
According to the Gensim Word2Vec, I can use the word2vec model in gensim package to calculate the similarity between 2 words. e.g. trained_model.similarity('woman', 'man') 0.73723527 However, the word2vec model fails to predict the sentence similarity. I find out the LSI model with sentence similarity in gensim, but, which doesn't seem that can be combined with word2vec model. The length of corpus of each sentence I have is not very long (shorter than 10 words). So, are there any simple ways to achieve the goal?
This is actually a pretty challenging problem that you are asking. Computing sentence similarity requires building a grammatical model of the sentence, understanding equivalent structures (e.g. "he walked to the store yesterday" and "yesterday, he walked to the store"), finding similarity not just in the pronouns and verbs but also in the proper nouns, finding statistical co-occurrences / relationships in lots of real textual examples, etc. The simplest thing you could try -- though I don't know how well this would perform and it would certainly not give you the optimal results -- would be to first remove all "stop" words (words like "the", "an", etc. that don't add much meaning to the sentence) and then run word2vec on the words in both sentences, sum up the vectors in the one sentence, sum up the vectors in the other sentence, and then find the difference between the sums. By summing them up instead of doing a word-wise difference, you'll at least not be subject to word order. That being said, this will fail in lots of ways and isn't a good solution by any means (though good solutions to this problem almost always involve some amount of NLP, machine learning, and other cleverness). So, the short answer is, no, there's no easy way to do this (at least not to do it well).
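A minimal sketch of the summing heuristic described above, assuming model is an already-trained gensim Word2Vec model and stop words have already been stripped (all names here are illustrative):

import numpy as np

def sentence_vector(model, sentence):
    # sum the word vectors, skipping out-of-vocabulary words
    words = [w for w in sentence.split() if w in model]
    return np.sum([model[w] for w in words], axis=0)

def sentence_similarity(model, s1, s2):
    v1 = sentence_vector(model, s1)
    v2 = sentence_vector(model, s2)
    # cosine similarity between the two summed vectors
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))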
Django logging to console
I'm trying to set up a logger that will log to the console (I want this because I'm using Heroku with Papertrail (Heroku's logging addon) and stuff written to the console will show up in Papertrail, making it filterable and all the nice Papertrail features.) In settings I was first trying the following:

LOGGING = {
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'mysite.log',
            'formatter': 'verbose'
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    (...)
    'loggers': {
        (...)
        'page_processors': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
        }
    }
    (...)
}

as per Django's logging page (for those who don't use Mezzanine, page_processors are what Mezzanine runs whenever you open a page; you can think of them as being like Django's views, but they only do the context, not the rendering). In page_processors.py I have:

import logging
logger = logging.getLogger(__name__)

@process_for(MyPage)
def myfunc(request, Page):
    logger.info('page_processor logging test')
    print 'my page_processor print'
    (...)

When I refresh the page I don't see the logger output on the console, but I see the print AND the log to the file:

[02/Mar/2014 23:07:10] INFO [myApp.page_processors:13] page_processor logging test

and so I know the logic is working. After googling a bit, I found this and this page that addresses precisely this issue. He says that by default logging.StreamHandler logs to STDERR. If we want to log to STDOUT we should add the keyword argument 'stream' to the logging.StreamHandler constructor, and so configure the handler as such:

'handlers': {
    (...)
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'stream': sys.stdout
    },
}

Turns out this still doesn't work, and I don't get any error or anything; I still see the print and the file log, just not the console logger. What's going on? EDIT: I tried this, doesn't make a difference.
I finally got it. Here's what was happening. When you define a logger using getLogger, you give the logger a name, in this case

logger = logging.getLogger(__name__)

and you then have to define how a logger with that name behaves in the LOGGING configuration. In this case, since that file is inside a module, the logger's name becomes myApp.page_processors, not page_processors, so the logger named 'page_processors' in the LOGGING dict is never called. So why was the logging to the file working? Because in the (...) that I show in the code there is another logger named 'myApp' that apparently gets called instead, and that one writes to the file. So the solution to this question is just to properly name the logger:

LOGGING = {
    # (...)
    'loggers': {
        # (...)
        'myApp.page_processors': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
        }
    }
    # (...)
}
A plethora of Python OSC modules - which one to use?
Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. It is particularly common to use OSC with MAX/MSP -- which in fact is what I am doing, using OSC with Python to talk to another subsystem in MAX. There are a bunch of python modules that support OSC. Great. And they all claim to be simple, useful, and perfect. At the risk of verging into subjective territory, what use cases does your experience suggest for the following modules? python-osc pyOSC SimpleOSC (though this seems like an older module) I suppose a simple implementation would serve me best since I have only a glancing familiarity with OSC. And I'm using Python 2.7.
I have used pyOSC with great success on OSX. The code isn't under much development, but this is most likely due to its stability and simplicity. I briefly tried txosc and it may warrant further testing. My usage of pyOSC is limited but it works well. e.g.

import OSC

c = OSC.OSCClient()
c.connect(('127.0.0.1', 57120))   # connect to SuperCollider
oscmsg = OSC.OSCMessage()
oscmsg.setAddress("/startup")
oscmsg.append('HELLO')
c.send(oscmsg)
Convert number strings with commas in pandas DataFrame to float
I have a DataFrame that contains numbers as strings with commas for the thousands marker. I need to convert them to floats.

a = [['1,200', '4,200'], ['7,000', '-0.03'], ['5', '0']]
df = pandas.DataFrame(a)

I am guessing I need to use locale.atof. Indeed df[0].apply(locale.atof) works as expected. I get a Series of floats. But when I apply it to the DataFrame, I get an error:

df.apply(locale.atof)
TypeError: ("cannot convert the series to <type 'float'>", u'occurred at index 0')

and df[0:1].apply(locale.atof) gives the error

ValueError: ('invalid literal for float(): 1,200', u'occurred at index 0')

So, how do I convert this DataFrame of strings to a DataFrame of floats?
You need to set the locale first:

In [11]: locale.setlocale(locale.LC_NUMERIC, '')
Out[11]: 'en_GB.UTF-8'

In [12]: df.applymap(locale.atof)
Out[12]:
      0        1
0  1200  4200.00
1  7000    -0.03
2     5     0.00

If you're reading in from csv then you can use the thousands arg:

pd.read_csv('foo.tsv', sep='\t', thousands=',')
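If the commas are guaranteed to be plain thousands separators, a locale-free alternative is also possible (a sketch, assuming every cell is a string):

# strip the commas and convert each cell to a float
df = df.applymap(lambda x: float(x.replace(',', '')))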
making django server accessible in LAN
I have installed the Django server and it can be accessed as below:

http://localhost:8000/get-sms/
http://127.0.0.1:8000/get-sms/

Suppose my IP is x.x.x.x. From another PC on the same network, when I go to my-ip:8000/get-sms/ it is not working. I can easily ping my IP from that computer. Moreover, on port 81 I have Apache, which is easily accessible like below:

http://my-ip:81

What could be the issue? Do I need something extra in Django?
Running the Django Development Server: this is what you're looking for. To help you further, here is what you should do:

python manage.py runserver 0.0.0.0:8000

By the way, this may be a duplicate of that question. Here is what the documentation says: Note that the default IP address, 127.0.0.1, is not accessible from other machines on your network. To make your development server viewable to other machines on the network, use its own IP address (e.g. 192.168.2.1) or 0.0.0.0.
What does axis in pandas mean?
Here is my code to generate a dataframe:

import numpy as np
import pandas as pd
dff = pd.DataFrame(np.random.randn(1,2), columns=list('AB'))

then I got the dataframe:

+------------+----------+---------+
|            |     A    |    B    |
+------------+----------+---------+
|          0 | 0.626386 | 1.52325 |
+------------+----------+---------+

When I type the command dff.mean(axis=1) I got:

0    1.074821
dtype: float64

According to the reference of pandas, axis=1 stands for columns and I expect the result of the command to be

A    0.626386
B    1.523255
dtype: float64

So here is my question: what does axis in pandas mean?
It specifies the axis along which the means are computed. By default axis=0. This is consistent with the numpy.mean usage when axis is specified explicitly (in numpy.mean, axis==None by default, which computes the mean value over the flattened array), in which axis=0 runs along the rows (namely, the index in pandas), and axis=1 runs along the columns.

+------------+---------+--------+
|            |    A    |    B   |
+------------+---------+--------+
|          0 | 0.626386| 1.52325|  ----axis=1----->
+------------+---------+--------+
                  |         |
                  | axis=0  |
                  ↓         ↓
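A quick way to see this, with made-up values:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(2, 3), columns=list('ABC'))
print df.mean(axis=0)  # one value per column: the mean runs down the rows
print df.mean(axis=1)  # one value per row: the mean runs across the columns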
How can I break a for loop in jinja2?
How can I break out of a for loop in jinja2? my code is like this: <a href="#"> {% for page in pages if page.tags['foo'] == bar %} {{page.title}} {% break %} {% endfor %} </a> I have more than one page that has this condition and I want to end the loop, once the condition has been met.
You can't use break; you'd filter instead. From the Jinja2 documentation on {% for %}: Unlike in Python it's not possible to break or continue in a loop. You can however filter the sequence during iteration which allows you to skip items. The following example skips all the users which are hidden:

{% for user in users if not user.hidden %}
    <li>{{ user.username|e }}</li>
{% endfor %}

In your case, however, you appear to only need the first element; just filter and pick the first:

{{ (pages|selectattr('tags.foo', 'equalto', bar)|first).title }}

This filters the list using the selectattr() filter (the second argument names the test to apply to the attribute, here equalto), the result of which is passed to the first filter. The selectattr() filter produces an iterator, so using first here will only iterate over the input up to the first matching element, and no further.
Flask blueprint static directory does not work?
According to the Flask readme, blueprint static files are accessible at blueprintname/static. But for some reason, it doesn't work. My blueprint is like this:

app/frontend/views.py:

frontend = Blueprint('frontend', __name__, template_folder='templates', static_folder='static')
@frontend.route('/')
etc...

app/frontend/js/app.js: my javascript

The blueprint is registered in the Flask app (routes work and everything). When I go to abc.com/frontend/static/js/app.js, it just gives a 404. When I follow the Flask readme to get my static files:

<script src="{{url_for('frontend.static', filename='js/app.js')}}"></script>

the output is

<script src="/static/js/app.js"></script>

which doesn't work either. There's nothing in my root app/static/ folder. I can't access any static files in my blueprint! The Flask readme says that it should work:

admin = Blueprint('admin', __name__, static_folder='static')

By default the rightmost part of the path is where it is exposed on the web. Because the folder is called static here it will be available at the location of the blueprint + /static. Say the blueprint is registered for /admin, the static folder will be at /admin/static.
You probably registered your Blueprint to sit at the root of your site: app.register_blueprint(core, url_prefix='') but the static view in the Blueprint is no different from all your other Blueprint views; it uses that url_prefix value to make the URL unique. The core static view is also active, so you now have two routes that want to handle /static/ URLs. So if you are registering your Blueprint without a URL prefix, you have to give one of these two a unique path. Either give the Blueprint a custom static_url_path value, or the core Flask app.
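A minimal sketch of the two fixes described above (all names are illustrative, not taken from the question):

from flask import Flask, Blueprint

app = Flask(__name__)

# Option 1: give the blueprint's static view its own URL path
frontend = Blueprint('frontend', __name__, static_folder='static',
                     static_url_path='/frontend/static')
app.register_blueprint(frontend, url_prefix='')

# Option 2: move the core app's static path out of the way instead
# app = Flask(__name__, static_url_path='/core-static')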
How to subclass pandas DataFrame?
Subclassing pandas classes seems a common need but I could not find references on the subject. (It seems that pandas developers are still working on it: https://github.com/pydata/pandas/issues/60). There are some SO threads on the subject, but I am hoping that someone here can provide a more systematic account on currently the best way to subclass pandas.DataFrame that satisfies two, I think, general requirements: import numpy as np import pandas as pd class MyDF(pd.DataFrame): # how to subclass pandas DataFrame? pass mydf = MyDF(np.random.randn(3,4), columns=['A','B','C','D']) print type(mydf) # <class '__main__.MyDF'> # Requirement 1: Instances of MyDF, when calling standard methods of DataFrame, # should produce instances of MyDF. mydf_sub = mydf[['A','C']] print type(mydf_sub) # <class 'pandas.core.frame.DataFrame'> # Requirement 2: Attributes attached to instances of MyDF, when calling standard # methods of DataFrame, should still attach to the output. mydf.myattr = 1 mydf_cp1 = MyDF(mydf) mydf_cp2 = mydf.copy() print hasattr(mydf_cp1, 'myattr') # False print hasattr(mydf_cp2, 'myattr') # False And is there any significant differences for subclassing pandas.Series? Thank you.
For Requirement 1, just define _constructor:

import pandas as pd
import numpy as np

class MyDF(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDF

mydf = MyDF(np.random.randn(3,4), columns=['A','B','C','D'])
print type(mydf)
mydf_sub = mydf[['A','C']]
print type(mydf_sub)

I think there is no simple solution for Requirement 2; you need to define __init__, copy, or do something in _constructor, for example:

import pandas as pd
import numpy as np

class MyDF(pd.DataFrame):
    _attributes_ = "myattr1,myattr2"

    def __init__(self, *args, **kw):
        super(MyDF, self).__init__(*args, **kw)
        if len(args) == 1 and isinstance(args[0], MyDF):
            args[0]._copy_attrs(self)

    def _copy_attrs(self, df):
        for attr in self._attributes_.split(","):
            df.__dict__[attr] = getattr(self, attr, None)

    @property
    def _constructor(self):
        def f(*args, **kw):
            df = MyDF(*args, **kw)
            self._copy_attrs(df)
            return df
        return f

mydf = MyDF(np.random.randn(3,4), columns=['A','B','C','D'])
print type(mydf)
mydf_sub = mydf[['A','C']]
print type(mydf_sub)

mydf.myattr1 = 1
mydf_cp1 = MyDF(mydf)
mydf_cp2 = mydf.copy()
print mydf_cp1.myattr1, mydf_cp2.myattr1
Pandas MultiIndex versus Panel
Using Pandas, what are the reasons to use a Panel versus a MultiIndex DataFrame? I have personally found significant difference between the two in the ease of accessing different dimensions/levels, but that may just be my being more familiar with the interface for one versus the other. I assume there are more substantive differences, however.
In my practice, the strongest, easiest-to-see difference is that a Panel needs to be homogeneous in every dimension. If you look at a Panel as a stack of Dataframes, you cannot create it by stacking Dataframes of different sizes or with different indexes/columns. You can indeed handle more non-homogeneous type of data with multiindex. So the first choice has to be made based on how your data is to be organized.
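To illustrate the homogeneity point, a small sketch: the two groups below have different lengths, which a MultiIndex frame holds naturally but a Panel could only represent by padding with NaN:

import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('item1', 'a'), ('item1', 'b'), ('item1', 'c'),
     ('item2', 'a'), ('item2', 'b')])
df = pd.DataFrame({'value': [1, 2, 3, 4, 5]}, index=idx)

print df.xs('item1')  # select one "stacked" frame: 3 rows
print df.xs('item2')  # 2 rows, no padding needed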
Read Excel File in Python
I have an Excel file:

Arm_id  DSPName  DSPCode  HubCode  PinCode  PPTL
1       JaVAS    01       AGR      282001   1,2
2       JaVAS    01       AGR      282002   3,4
3       JaVAS    01       AGR      282003   5,6

I want to save a string in the form Arm_id,DSPCode,Pincode. This format is configurable, i.e. it might change to DSPCode,Arm_id,Pincode. I save the format in a list like:

FORMAT = ['Arm_id', 'DSPName', 'Pincode']

How do I read the content of a specific column with a provided name, given that the FORMAT is configurable? This is what I tried. Currently I'm able to read all the content in the file:

from xlrd import open_workbook

wb = open_workbook('sample.xls')
for s in wb.sheets():
    #print 'Sheet:',s.name
    values = []
    for row in range(s.nrows):
        col_value = []
        for col in range(s.ncols):
            value = (s.cell(row,col).value)
            try:
                value = str(int(value))
            except:
                pass
            col_value.append(value)
        values.append(col_value)
print values

My output is

[[u'Arm_id', u'DSPName', u'DSPCode', u'HubCode', u'PinCode', u'PPTL'],
 ['1', u'JaVAS', '1', u'AGR', '282001', u'1,2'],
 ['2', u'JaVAS', '1', u'AGR', '282002', u'3,4'],
 ['3', u'JaVAS', '1', u'AGR', '282003', u'5,6']]

Then I loop over values[0] trying to find the FORMAT content in values[0], getting the index of Arm_id, DSPName and Pincode in values[0], so that from the next loop on I know the index of all the FORMAT factors, thereby getting to know which value I need to get. But this is such a poor solution. How do I get the values of a specific column by name in an excel file?
This is one approach: from xlrd import open_workbook class Arm(object): def __init__(self, id, dsp_name, dsp_code, hub_code, pin_code, pptl): self.id = id self.dsp_name = dsp_name self.dsp_code = dsp_code self.hub_code = hub_code self.pin_code = pin_code self.pptl = pptl def __str__(self): return("Arm object:\n" " Arm_id = {0}\n" " DSPName = {1}\n" " DSPCode = {2}\n" " HubCode = {3}\n" " PinCode = {4} \n" " PPTL = {5}" .format(self.id, self.dsp_name, self.dsp_code, self.hub_code, self.pin_code, self.pptl)) wb = open_workbook('sample.xls') for sheet in wb.sheets(): number_of_rows = sheet.nrows number_of_columns = sheet.ncols items = [] rows = [] for row in range(1, number_of_rows): values = [] for col in range(number_of_columns): value = (sheet.cell(row,col).value) try: value = str(int(value)) except ValueError: pass finally: values.append(value) item = Arm(*values) items.append(item) for item in items: print item print("Accessing one single value (eg. DSPName): {0}".format(item.dsp_name)) print You don't have to use a custom class, you can simply take a dict(). If you use a class however, you can access all values via dot-notation, as you see above. Here is the output of the script above: Arm object: Arm_id = 1 DSPName = JaVAS DSPCode = 1 HubCode = AGR PinCode = 282001 PPTL = 1 Accessing one single value (eg. DSPName): JaVAS Arm object: Arm_id = 2 DSPName = JaVAS DSPCode = 1 HubCode = AGR PinCode = 282002 PPTL = 3 Accessing one single value (eg. DSPName): JaVAS Arm object: Arm_id = 3 DSPName = JaVAS DSPCode = 1 HubCode = AGR PinCode = 282003 PPTL = 5 Accessing one single value (eg. DSPName): JaVAS
What does enumerate mean?
I am using tkinter in Python and came across the following code: for row_number, row in enumerate(cursor): I was wondering whether anyone could explain what enumerate means in this context?
The enumerate() function adds a counter to an iterable. So for each element in cursor, a tuple is produced with (counter, element); the for loop binds that to row_number and row, respectively. Demo: >>> elements = ('foo', 'bar', 'baz') >>> for elem in elements: ... print elem ... foo bar baz >>> for count, elem in enumerate(elements): ... print count, elem ... 0 foo 1 bar 2 baz By default, enumerate() starts counting at 0 but if you give it a second integer argument, it'll start from that number instead: >>> for count, elem in enumerate(elements, 42): ... print count, elem ... 42 foo 43 bar 44 baz
matplotlib: make plus sign thicker
In Matplotlib, I would like to draw a thick plus sign (or a cross), but the one provided in the marker set is too thin. Even as I increase its size, it doesn't get any thicker. The lines of code drawing the red plus sign are:

# Draw median marker.
if plot_opts.get('bean_show_median', True):
    ax.plot(pos, np.median(pos_data),
            marker=plot_opts.get('bean_median_marker', '+'),
            color=plot_opts.get('bean_median_color', 'r'))

If I add an extra parameter markersize=20, the marker will only stretch. It will be as thin as before. Can I make it thick?
You can use markeredgewidth (or mew). You'll want to combine it with markersize, otherwise you get thick but tiny markers. For example: plt.plot([2,4,6,1,3,5], '+', mew=10, ms=20)
Mixing file.readline() and file.next()
I noticed some strange behavior today playing around with next() and readline(). It seems that both functions produce the same results (which is what I expect). However, when I mix them, I get a ValueError. Here's what I did: >>> f = open("text.txt", 'r') >>> f.readline() 'line 0\n' >>> f.readline() 'line 1\n' >>> f.readline() 'line 2\n' >>> f.next() 'line 3\n' >>> f.next() 'line 4\n' >>> f.readline() Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Mixing iteration and read methods would lose data >>> >>> f = open("text.txt", 'r') >>> f.next() 'line 0\n' >>> f.next() 'line 1\n' >>> f.next() 'line 2\n' >>> f.readline() Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Mixing iteration and read methods would lose data So the overall question here is what's going on underneath the hood that causes this error? Some questions that might get answered along with but I would like to hear an answer for if not: What are the differences between next() and readline()? When I do for f in file: which function am I calling (and does it matter)? Why can I call next() after readline(), but not the other way around? Thanks in advance, I don't think it matters, but in case this is version dependent, I'm on Python 2.7.6 for Windows
According to Python's docs (emphasis is mine): A file object is its own iterator, for example iter(f) returns f (unless f is closed). When a file is used as an iterator, typically in a for loop (for example, for line in f: print line.strip()), the next() method is called repeatedly. This method returns the next input line, or raises StopIteration when EOF is hit when the file is open for reading (behavior is undefined when the file is open for writing). In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the next() method uses a hidden read-ahead buffer. As a consequence of using a read-ahead buffer, combining next() with other file methods (like readline()) does not work right. However, using seek() to reposition the file to an absolute position will flush the read-ahead buffer. The next() method reads more than is needed, for efficiency reasons, and this breaks readline(). So the answers to your questions are: next() is faster than readline() because of the read-ahead buffer; for line in f: calls next() under the hood; and you can call next() after readline() (but not the other way around) because readline() uses the standard, slower read on the file and leaves no read-ahead buffer behind, so there is no conflict.
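The quoted paragraph also points at the escape hatch: an explicit seek() flushes the read-ahead buffer, after which readline() is allowed again. A small sketch (treat this as an assumption based on the quoted docs, not a guaranteed behavior of every Python 2 build):

f = open('text.txt')
f.next()      # 'line 0\n' -- iteration fills the read-ahead buffer
f.seek(0)     # repositioning to an absolute offset flushes the buffer
f.readline()  # 'line 0\n' -- legal again, no ValueError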
Pandas Dataframe display on a webpage
Ok, I am using Flask but this probably applies to a lot of similar products. I construct a pandas Dataframe, e.g. @app.route('/analysis/<filename>') def analysis(filename): x = pd.DataFrame(np.random.randn(20, 5)) return render_template("analysis.html", name=filename, data=x) The template analysis.html looks like {% extends "base.html" %} {% block content %} <h1>{{name}}</h1> {{data}} {% endblock %} This works but the output looks horrible. It doesn't use linebreaks etc. I have played with data.to_html() and data.to_string() What's the easiest way to display a frame? Thomas
The following should work: @app.route('/analysis/<filename>') def analysis(filename): x = pd.DataFrame(np.random.randn(20, 5)) return render_template("analysis.html", name=filename, data=x.to_html()) # ^^^^^^^^^ Check the documentation for additional options like CSS styling. Additionally, you need to adjust your template like so: {% extends "base.html" %} {% block content %} <h1>{{name}}</h1> {{data | safe}} {% endblock %} in order to tell Jinja you're passing in markup. Thanks to @SeanVieira for the tip.
Using utf-8 characters in a Jinja2 template
I'm trying to use utf-8 characters when rendering a template with Jinja2. Here is what my template looks like:

<!DOCTYPE HTML>
<html manifest="" lang="en-US">
<head>
    <meta charset="UTF-8">
    <title>{{title}}</title>
...

The title variable is set something like this:

index_variables = {'title':''}
index_variables['title'] = myvar.encode("utf8")
template = env.get_template('index.html')
index_file = open(preview_root + "/" + "index.html", "w")
index_file.write( template.render(index_variables) )
index_file.close()

Now, the problem is that myvar is a message read from a message queue and can contain those special utf8 characters (e.g. "Séptimo Cine"). The rendered template looks something like:

...
<title>S\u00e9ptimo Cine</title>
...

and I want it to be:

...
<title>Séptimo Cine</title>
...

I have made several tests but I can't get this to work. I have tried to set the title variable without .encode("utf8"), but it throws an exception (ValueError: Expected a bytes object, not a unicode object), so my guess is that the initial message is unicode. I have used chardet.detect to get the encoding of the message (it's "ascii"), then did the following: myvar.decode("ascii").encode("cp852"), but the title is still not rendered correctly. I also made sure that my template is a UTF-8 file, but it didn't make a difference. Any ideas on how to do this?
TL;DR: Pass Unicode to template.render(), and encode the rendered unicode result to a bytestring before writing it to a file. This had me puzzled for a while. Because you do

index_file.write( template.render(index_variables) )

in one statement, that's basically just one line where Python is concerned, so the traceback you get is misleading: the exception I got when recreating your test case didn't happen in template.render(index_variables), but in index_file.write() instead. So splitting the code up like this

output = template.render(index_variables)
index_file.write(output)

was the first step to diagnose where exactly the UnicodeEncodeError happens. Jinja returns unicode when you let it render the template. Therefore you need to encode the result to a bytestring before you can write it to a file:

index_file.write(output.encode('utf-8'))

The second error is that you pass an utf-8 encoded bytestring to template.render() - Jinja wants unicode. So assuming your myvar contains UTF-8, you need to decode it to unicode first:

index_variables['title'] = myvar.decode('utf-8')

So, to put it all together, this works for me:

# -*- coding: utf-8 -*-
from jinja2 import Environment, PackageLoader

env = Environment(loader=PackageLoader('myproject', 'templates'))

# Make sure we start with an utf-8 encoded bytestring
myvar = 'Séptimo Cine'

index_variables = {'title':''}

# Decode the UTF-8 string to get unicode
index_variables['title'] = myvar.decode('utf-8')

template = env.get_template('index.html')

with open("index_file.html", "w") as index_file:
    output = template.render(index_variables)
    # jinja returns unicode - so `output` needs to be encoded to a
    # bytestring before writing it to a file
    index_file.write(output.encode('utf-8'))
How to delete all instances of a character in a string in python?
How do I delete all the instances of a character in this string? Here is my code: def findreplace(char, string): place = string.index(char) string[place] = '' return string However, if I run this, this is what happens: >>> findreplace('i', 'it is icy') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in findreplace TypeError: 'str' object does not support item assignment Why is this?
Strings are immutable in Python, which means once a string is created, you cannot alter its contents. If you need to change it, a new instance of the string will be created with the alterations. Having that in mind, we have many ways to solve this. Using str.replace:

>>> "it is icy".replace("i", "")
't s cy'

Using str.translate:

>>> "it is icy".translate(None, "i")
't s cy'

Using a regular expression:

>>> import re
>>> re.sub(r'i', "", "it is icy")
't s cy'

Using a comprehension as a filter:

>>> "".join([char for char in "it is icy" if char != "i"])
't s cy'

Using the filter function:

>>> "".join(filter(lambda char: char != "i", "it is icy"))
't s cy'

Timing comparison:

def findreplace(m_string, char):
    m_string = list(m_string)
    for k in m_string:
        if k == char:
            del(m_string[m_string.index(k)])
    return "".join(m_string)

def replace(m_string, char):
    return m_string.replace("i", "")

def translate(m_string, char):
    return m_string.translate(None, "i")

from timeit import timeit
print timeit("findreplace('it is icy','i')", "from __main__ import findreplace")
print timeit("replace('it is icy','i')", "from __main__ import replace")
print timeit("translate('it is icy','i')", "from __main__ import translate")

Result:

1.64474582672
0.29278588295
0.311302900314

The str.replace and str.translate methods are 8 and 5 times faster than the accepted answer. Note: the comprehension and filter methods are expected to be slower for this case, since they have to create a list and then traverse it again to construct a string. And re is a bit of an overkill for a single-character replacement. So they are all excluded from the timing comparison.
Python circular importing?
So I'm getting this error:

Traceback (most recent call last):
  File "/Users/alex/dev/runswift/utils/sim2014/simulator.py", line 3, in <module>
    from world import World
  File "/Users/alex/dev/runswift/utils/sim2014/world.py", line 2, in <module>
    from entities.field import Field
  File "/Users/alex/dev/runswift/utils/sim2014/entities/field.py", line 2, in <module>
    from entities.goal import Goal
  File "/Users/alex/dev/runswift/utils/sim2014/entities/goal.py", line 2, in <module>
    from entities.post import Post
  File "/Users/alex/dev/runswift/utils/sim2014/entities/post.py", line 4, in <module>
    from physics import PostBody
  File "/Users/alex/dev/runswift/utils/sim2014/physics.py", line 21, in <module>
    from entities.post import Post
ImportError: cannot import name Post

and you can see that I use the same import statement further up and it works. Is there some unwritten rule about circular importing? How do I use the same class further down the call stack?
I think the currently accepted answer by jpmc26 comes down too heavily on circular imports. They can work just fine, if you set them up correctly. The easiest way to do so is to use import my_module syntax, rather than from my_module import some_object. The former will almost always work, even if my_module included imports us back. The latter only works if my_object is already defined in my_module, which in a circular import may not be the case. To be specific to your case: Try changing entities/post.py to do import physics and then refer to physics.PostBody rather than just PostBody directly. Similarly, change physics.py to do import post and then use post.Post rather than just Post.
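Concretely, the suggested change to entities/post.py would look something like this (a sketch based only on the names visible in the traceback):

# entities/post.py
import physics            # instead of: from physics import PostBody

class Post(object):
    def __init__(self):
        # the attribute is looked up when this line actually runs,
        # by which time the physics module has finished importing
        self.body = physics.PostBody()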
How could I use requests in asyncio?
I want to do parallel http request tasks in asyncio, but I find that python-requests would block the event loop of asyncio. I've found aiohttp but it couldn't provide the service of http request using a http proxy. So I want to know if there's a way to do asynchronous http requests with the help of asyncio.
To use requests (or any other blocking libraries) with asyncio, you can use BaseEventLoop.run_in_executor to run a function in another thread and yield from it to get the result. For example:

import asyncio
import requests

@asyncio.coroutine
def main():
    loop = asyncio.get_event_loop()
    future1 = loop.run_in_executor(None, requests.get, 'http://www.google.com')
    future2 = loop.run_in_executor(None, requests.get, 'http://www.google.co.uk')
    response1 = yield from future1
    response2 = yield from future2
    print(response1.text)
    print(response2.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

This will get both responses in parallel. With python 3.5 you can use the new await/async syntax:

import asyncio
import requests

async def main():
    loop = asyncio.get_event_loop()
    future1 = loop.run_in_executor(None, requests.get, 'http://www.google.com')
    future2 = loop.run_in_executor(None, requests.get, 'http://www.google.co.uk')
    response1 = await future1
    response2 = await future2
    print(response1.text)
    print(response2.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

See PEP0492 for more.
I get an error in python3 when importing mechanize
I get an error in python3 when importing mechanize. I've just installed mechanize into my virtualenv where python3 is installed. $ which python3 /Users/myname/.virtualenvs/python3/bin/python3 $ pip freeze mechanize==0.2.5 But, when I try to import mechanize in my python code, I get this error. import mechanize --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-6b82e40e2c8e> in <module>() ----> 1 import mechanize /Users/myname/.virtualenvs/python3/lib/python3.3/site-packages/mechanize/__init__.py in <module>() 117 import sys 118 --> 119 from _version import __version__ 120 121 # high-level stateful browser-style interface ImportError: No module named '_version' Does anyone know how to fix this problem? I'm new to python and I've been studying how to program in python these days. Thanks for your help in advance! update I've installed mechanize for python3. Now, I have an another error. In [1]: import mechanize --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-6b82e40e2c8e> in <module>() ----> 1 import mechanize /Users/myname/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/__init__.py in <module>() 120 121 # high-level stateful browser-style interface --> 122 from ._mechanize import \ 123 Browser, History, \ 124 BrowserStateError, LinkNotFoundError, FormNotFoundError /Users/myname/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/_mechanize.py in <module>() /Users/myname/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/_html.py in <module>() ImportError: cannot import name _sgmllib_copy I checked what I've installed in my working virtualenv. I found some warnings. $ pip freeze cssselect==0.9.1 httplib2==0.8 ipython==1.1.0 lxml==3.2.4 ## FIXME: could not find svn URL in dependency_links for this package: mechanize==0.2.6.dev-20140305 pyquery==1.2.8 Warning: cannot find svn location for mechanize==0.2.6.dev-20140305
Alas, mechanize doesn't support Python 3. http://wwwsearch.sourceforge.net/mechanize/faq.html Python 2.4, 2.5, 2.6, or 2.7. Python 3 is not yet supported. You might like to comment on the issue at https://github.com/jjlee/mechanize/issues/96 Update: I wrote my own automating library MechanicalSoup. It's Python 3 compatible https://github.com/hickford/MechanicalSoup
Python - how to find files and skip directories in os.listdir
I use os.listdir and it works fine, but I get sub-directories in the list also, which is not what I want: I need only files. What function do I need to use for that? I looked also at os.walk and it seems to be what I want, but I'm not sure of how it works.
You need to filter out directories; os.listdir() lists all names in a given path. You can use os.path.isdir() for this:

basepath = '/path/to/directory'
for fname in os.listdir(basepath):
    path = os.path.join(basepath, fname)
    if os.path.isdir(path):
        # skip directories
        continue

os.walk() does the same work under the hood; unless you need to recurse down subdirectories, you don't need to use os.walk() here.
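If you only need the list of files, the same filter fits in a comprehension:

import os

basepath = '/path/to/directory'
# keep only regular files, dropping anything that is a directory
files = [fname for fname in os.listdir(basepath)
         if os.path.isfile(os.path.join(basepath, fname))]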
Python, pandas: how to sort dataframe by index
When there is a DataFrame like the following:

import pandas as pd
df = pd.DataFrame([1, 1, 1, 1, 1], index=[100, 29, 234, 1, 150], columns=['A'])

How can I sort this dataframe by index, with each combination of index and column value intact?
Dataframes have a sort_index method which returns a copy by default. Pass inplace=True to operate in place.

import pandas as pd
df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150], columns=['A'])
df.sort_index(inplace=True)
print(df.to_string())

Gives me:

     A
1    4
29   2
100  1
150  5
234  3
grouping rows in list in pandas groupby
I have a pandas data frame like:

A 1
A 2
B 5
B 5
B 4
C 6

I want to group by the first column and get second column as lists in rows:

A [1,2]
B [5,5,4]
C [6]

Is it possible to do something like this using pandas groupby?
You can do this using groupby to group on the column of interest and then apply list to every group:

In [1]: # create the dataframe
        df = pd.DataFrame({'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]})
        df
Out[1]:
   a  b
0  A  1
1  A  2
2  B  5
3  B  5
4  B  4
5  C  6

[6 rows x 2 columns]

In [76]: df.groupby('a')['b'].apply(list)
Out[76]:
a
A       [1, 2]
B    [5, 5, 4]
C          [6]
Name: b, dtype: object
Confusing with the usage of regex in Python
I'm confused with the following three patterns, would someone explain it in more detail? ## IPython with Python 2.7.3 In [62]: re.findall(r'[a-z]*',"f233op") Out[62]: ['f', '', '', '', 'op', ''] ## why does the last '' come out? In [63]: re.findall(r'([a-z])*',"f233op") Out[63]: ['f', '', '', '', 'p', ''] ## why does the character 'o' get lost? In [64]: re.findall(r'([a-z]*)',"f233op") Out[64]: ['f', '', '', '', 'op', ''] ## what's the different than line 63 above?
Example 1

re.findall(r'[a-z]*',"f233op")

This pattern matches zero-or-more instances of lower case alphabet characters. The ZERO-or-more part is key here, since a match of nothing, starting from every index position in the string, is just as valid as a match of f or op. The last empty string returned is the match starting from the end of the string (the position between p and $, the end of the string).

Example 2

re.findall(r'([a-z])*',"f233op")

Now you are matching character groups, consisting of a single lower-case alphabet character. The o is no longer returned because this is a greedy search, and only the last valid matched group is returned. So if you changed the string to f233op12fre, the final e would be returned, but not the preceding f or r. Likewise, if you take out the p in your string, you will see that o is returned as a valid match. Conversely, if you tried to make this regex non-greedy by adding a ? (e.g. ([a-z])*?), the returned set of matches would all be empty strings, since a valid match of nothing has a higher precedence than a valid match of something.

Example 3

re.findall(r'([a-z]*)',"f233op")

Nothing is different in the matched characters, but now you are returning character groups instead of raw matches. The output of this regex query will be the same as your first example, but you'll notice that if you add an additional matching group, you will suddenly see the results of each match attempt grouped into tuples:

IN : re.findall(r'([a-z]*)([0-9]*)',"f233op")
OUT: [('f', '233'), ('op', ''), ('', '')]

Contrast this with the same pattern, minus the parentheses (groups), and you'll see why they are important:

IN : re.findall(r'[a-z]*[0-9]*',"f233op")
OUT: ['f233', 'op', '']

Also... It can be useful to plug regex patterns like these into regex diagram generators like Regexplained to see how the pattern matching logic works. For example, as an explanation as to why your regex always returns empty string matches, take a look at the difference between the patterns [a-z]* and [a-z]+. Don't forget to check the Python docs for the re library if you get stuck; they actually give a pretty stellar explanation of the standard regex syntax.
Pandas: drop a level from a multi-level column index?
If I've got a multi-level column index:

>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> pd.DataFrame([[1,2], [3,4]], columns=cols)

      a
   ---+--
    b | c
--+---+--
0 | 1 | 2
1 | 3 | 4

How can I drop the "a" level of that index, so I end up with:

    b | c
--+---+--
0 | 1 | 2
1 | 3 | 4
You can use MultiIndex.droplevel:

>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> df = pd.DataFrame([[1,2], [3,4]], columns=cols)
>>> df
   a
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]

>>> df.columns = df.columns.droplevel()
>>> df
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]
Calculate summary statistics of columns in dataframe
I have a dataframe of the following form (for example):

shopper_num,is_martian,number_of_items,count_pineapples,birth_country,tranpsortation_method
1,FALSE,0,0,MX,
2,FALSE,1,0,MX,
3,FALSE,0,0,MX,
4,FALSE,22,0,MX,
5,FALSE,0,0,MX,
6,FALSE,0,0,MX,
7,FALSE,5,0,MX,
8,FALSE,0,0,MX,
9,FALSE,4,0,MX,
10,FALSE,2,0,MX,
11,FALSE,0,0,MX,
12,FALSE,13,0,MX,
13,FALSE,0,0,CA,
14,FALSE,0,0,US,

How can I use Pandas to calculate summary statistics of each column (column data types are variable; some columns have no information)? And then return a dataframe of the form:

columnname, max, min, median
is_martian, NA, NA, FALSE

and so on.
describe may give you everything you want; otherwise you can perform aggregations using groupby and pass a list of agg functions: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once

In [43]: df.describe()
Out[43]:
       shopper_num is_martian  number_of_items  count_pineapples
count      14.0000         14        14.000000                14
mean        7.5000          0         3.357143                 0
std         4.1833          0         6.452276                 0
min         1.0000      False         0.000000                 0
25%         4.2500          0         0.000000                 0
50%         7.5000          0         0.000000                 0
75%        10.7500          0         3.500000                 0
max        14.0000      False        22.000000                 0

[8 rows x 4 columns]

Note that some columns cannot be summarised as there is no logical way to summarise them, for instance columns containing string data. If you prefer, you can transpose the result:

In [47]: df.describe().transpose()
Out[47]:
                  count      mean       std    min   25%  50%    75%    max
shopper_num          14       7.5    4.1833      1  4.25  7.5  10.75     14
is_martian           14         0         0  False     0    0      0  False
number_of_items      14  3.357143  6.452276      0     0    0    3.5     22
count_pineapples     14         0         0      0     0    0      0      0

[4 rows x 8 columns]
Execution of Python code with -m option or not
The python interpreter has a -m module option that "Runs library module module as a script". With this python code a.py:

if __name__ == "__main__":
    print __package__
    print __name__

I tested python -m a to get

"" <-- Empty String
__main__

whereas python a.py returns

None <-- None
__main__

To me, those two invocations seem to be the same except __package__ is not None when invoked with the -m option. Interestingly, with python -m runpy a, I get the same as python -m a with the python module compiled to get a.pyc. What's the (practical) difference between these invocations? Any pros and cons between them? Also, David Beazley's Python Essential Reference explains it as "The -m option runs a library module as a script which executes inside the __main__ module prior to the execution of the main script". What does it mean?
When you use the -m command-line flag, Python will import a module or package for you, then run it as a script. When you don't use the -m flag, the file you named is run as just a script. The distinction is important when you try to run a package. There is a big difference between:

python foo/bar/baz.py

and

python -m foo.bar.baz

as in the latter case, foo.bar is imported and relative imports will work correctly with foo.bar as the starting point. Demo:

$ mkdir -p test/foo/bar
$ touch test/foo/__init__.py
$ touch test/foo/bar/__init__.py
$ cat << EOF > test/foo/bar/baz.py
> if __name__ == "__main__":
>     print __package__
>     print __name__
>
> EOF
$ PYTHONPATH=test python test/foo/bar/baz.py
None
__main__
$ PYTHONPATH=test bin/python -m foo.bar.baz
foo.bar
__main__

As a result, Python has to actually care about packages when using the -m switch. A normal script can never be a package, so __package__ is set to None. But run a package or module inside a package with -m and now there is at least the possibility of a package, so the __package__ variable is set to a string value; in the above demonstration it is set to foo.bar; for plain modules not inside a package, it is set to an empty string. As for the __main__ module: Python imports scripts being run as it would a regular module. A new module object is created to hold the global namespace, stored in sys.modules['__main__']. This is what the __name__ variable refers to; it is a key in that structure. For packages, you can create a __main__.py module and have that run when running python -m package_name; in fact that's the only way you can run a package as a script:

$ PYTHONPATH=test python -m foo.bar
python: No module named foo.bar.__main__; 'foo.bar' is a package and cannot be directly executed
$ cp test/foo/bar/baz.py test/foo/bar/__main__.py
$ PYTHONPATH=test python -m foo.bar
foo.bar
__main__

So, when naming a package for running with -m, Python looks for a __main__ module contained in that package and executes that as a script. Its name is then still set to __main__, and the module object is still stored in sys.modules['__main__'].
What does self = None do?
I'm reading the source code of the incoming asyncio package. Note that at the end of the method, there is a self = None statement. What does it do?

def _run(self):
    try:
        self._callback(*self._args)
    except Exception as exc:
        msg = 'Exception in callback {}{!r}'.format(self._callback, self._args)
        self._loop.call_exception_handler({
            'message': msg,
            'exception': exc,
            'handle': self,
        })
    self = None  # Needed to break cycles when an exception occurs.

I thought it would erase the instance, but the following test doesn't suggest so:

class K:
    def haha(self):
        self = None

a = K()
a.haha()
print(a)  # a is still an instance
It simply clears the local reference to self, making sure that if an exception occurs the reference passed to self._loop.call_exception_handler() is the only remaining reference and no cycle has been created. This is still needed here because the local namespace is referenced by the exception traceback; it will not be cleared up when the function exits as there is a reference to the locals alive still. This is documented in the sys.exc_info() function documentation with a warning: Warning: Assigning the traceback return value to a local variable in a function that is handling an exception will cause a circular reference. This will prevent anything referenced by a local variable in the same function or by the traceback from being garbage collected. Since most functions don’t need access to the traceback, the best solution is to use something like exctype, value = sys.exc_info()[:2] to extract only the exception type and value. If you do need the traceback, make sure to delete it after use (best done with a try ... finally statement) or to call exc_info() in a function that does not itself handle an exception. Because tulip handlers form a fundamental framework class the code handles the traceback circular reference case by removing self from the local namespace instead, as it cannot guarantee that the _callback or call_exception_handler functions will clear up their references. In CPython, objects are destroyed when their reference count drops to 0, but a cyclic reference (a series of objects referencing themselves in a cycle) will never see their reference count drop to 0. The garbage collector does try to break such cycles but it cannot always do this or not fast enough. Explicitly clearing references avoids creating cycles. For example, if there is a __del__ method, the garbage collector will not break a cycle as it won't know in what order to break a cycle safely in that case. Even if there is no __del__ method (which a framework class should never assume will not be the case) it's best to not rely on the garbage collector eventually clearing cycles.
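The pattern recommended by that warning looks like this (risky() and process() are hypothetical placeholders):

import sys

def handler():
    try:
        risky()
    except Exception:
        exctype, value, tb = sys.exc_info()
        try:
            process(exctype, value, tb)
        finally:
            del tb  # break the frame -> traceback -> frame cycle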
How to lowercase a python dataframe string column if it has missing values?
The following code does not work. import pandas as pd import numpy as np df=pd.DataFrame(['ONE','Two', np.nan],columns=['x']) xLower = df["x"].map(lambda x: x.lower()) How should I tweak it to get xLower = ['one','two',np.nan] ? Efficiency is important since the real data frame is huge.
Use pandas vectorized string methods; as the documentation says, these methods exclude missing/NA values automatically. .str.lower() is the very first example there: >>> df['x'].str.lower() 0 one 1 two 2 NaN Name: x, dtype: object
from ... import OR import ... as for modules
Should I use from foo import bar OR import foo.bar as bar when importing a module and there is no need/wish for changing the name (bar)? Are there any differences? Does it matter?
Assuming that bar is a module or package in foo, there is no difference; it doesn't matter. The two statements have exactly the same result: >>> import os.path as path >>> path <module 'posixpath' from '/Users/mj/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc'> >>> from os import path >>> path <module 'posixpath' from '/Users/mj/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc'> If bar is not a module or package, the second form will not work; an ImportError is raised instead: >>> import os.walk as walk Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named walk
Stop pip from failing on single package when installing with requirements.txt
I am installing packages from requirements.txt pip install -r requirements.txt The requirements.txt file reads: Pillow lxml cssselect jieba beautifulsoup nltk lxml is the only package failing to install and this leads to everything failing (expected results as pointed out by larsks in the comments). However, after lxml fails pip still runs through and downloads the rest of the packages. From what I understand the pip install -r requirements.txt command will fail if any of the packages listed in the requirements.txt fail to install. Is there any argument I can pass when running pip install -r requirements.txt to tell it to install what it can and skip the packages that it cannot, or to exit as soon as it sees something fail?
Running pip install separately for each line may be a workaround, since each package then installs (or fails) independently: xargs -a requirements.txt -n 1 pip install (On systems whose xargs lacks the -a flag, such as macOS, cat requirements.txt | xargs -n 1 pip install does the same thing.)
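If you'd rather stay in Python, a small sketch that does the same thing, assuming a plain requirements.txt with one requirement per line:

import subprocess

with open('requirements.txt') as reqs:
    for line in reqs:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        # each install runs on its own; one failure doesn't stop the loop
        subprocess.call(['pip', 'install', line])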
ImportError: No module named MySQLdb
I am referring to the following tutorial to make a login page for my web application. http://code.tutsplus.com/tutorials/intro-to-flask-signing-in-and-out--net-29982 I am having an issue with the database. I am getting an ImportError: No module named MySQLdb when I execute http://127.0.0.1:5000/testdb I have tried all possible ways to install python mysql, the one mentioned in the tutorial, easy_install, sudo apt-get install. I have installed mysql in my virtual env. My directory structure is just the same as what's explained in the tutorial. The module is successfully installed in my system and still I am getting this error. Please help. What could be causing this?
If you're having issues compiling the binary extension, or are on a platform where you can't, you can try using the pure-Python PyMySQL bindings. Simply pip install pymysql and switch your SQLAlchemy URI to start like this: SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://.....' There are some other drivers you could also try.
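To sanity-check the driver outside SQLAlchemy, a minimal sketch; the host, user, password, and database here are placeholders for your own settings:

import pymysql

conn = pymysql.connect(host='localhost', user='user',
                       password='secret', db='mydb')
try:
    with conn.cursor() as cur:
        cur.execute('SELECT VERSION()')
        print(cur.fetchone())  # e.g. ('5.6.x',)
finally:
    conn.close()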
Python: understanding (None for g in g if (yield from g) and False)
James Powell, in his short description for an upcoming presentation, says he is the proud inventor of one of the gnarliest Python one-liners: (None for g in g if (yield from g) and False) I am trying to figure out this generator, and since I live with Python 2.7.x, I'm also tripping over the (yield from g) expression. How do I read this, and what would be the python 2.7.x analog? What a great discussion below! I want to check if I have this correct in the main. >>> l = [10, 11, iter(xrange(5)), 12, 13] >>> g = iter(l) >>> flat_g = (None for g in g if (yield from g) and False) >>> list(flat_g) [10, 11, 0, 1, 2, 3, 4, 12, 13] Is that about right?
This expression seems to be a code-golf way of writing: (a for b in g for a in b) (Or maybe the motivation was taking advantage of generator delegation, but IMHO readability really suffers.) For example: #! /usr/bin/python3.3 g = ['abc', 'def', 'ghi'] a = (None for g in g if (yield from g) and False) for x in a: print (x) b = (a for b in g for a in b) for x in b: print (x) Prints the flattened list twice. I think it becomes more legible if you use different variable names: (None for sublist in g if (yield from sublist) and False) Which is the same as (42 for sublist in g if (yield from sublist) and False) because, due to the something and False, the outer generator doesn't yield anything of its own, while the inner generator yields all elements of all sublists (subgenerators, subiterables). Maybe this clarifies a bit how it works: ('Sublist {}'.format(idx) for idx, sublist in enumerate(g) if (yield from sublist) or True) Apparently the original generator can be simplified to this, omitting the trailing and False: (None for sublist in g if (yield from sublist) ) Revision: Thanks to Martijn Pieters fighting my stubbornness, I managed to see that (None for sublist in g if (yield from sublist) and False) and (None for sublist in g if (yield from sublist) ) are not equivalent. Here is an example of a g which makes a difference: def f(): yield 1 return 2 def g(): yield f() a = (None for sublist in g() if (yield from sublist) ) for x in a: print(x) a = (None for sublist in g() if (yield from sublist) and False) for x in a: print(x) This prints: 1 None 1
Homebrew brew doctor warning about /Library/Frameworks/Python.framework, even with brew's Python installed
When I ran Homebrew's brew doctor (Mac OS X 10.9.2), I get the following warning message: Warning: Python is installed at /Library/Frameworks/Python.framework Homebrew only supports building against the System-provided Python or a brewed Python. In particular, Pythons installed to /Library can interfere with other software installs. Therefore, I ran brew install and followed the steps provided in the installation's caveats output to install Homebrew's version of Python. Running which python confirms that Homebrew's version of it is indeed at the top of my PATH. Output is /usr/local/bin/python. Despite all this, when I rerun brew doctor, I am still getting the same warning message. How do I suppress this warning? Do I need to delete the /Library/Frameworks/Python.framework directory from my computer? Am I just supposed to ignore it? Is there a different application on my computer that may be causing this warning to emit? Note that I don't have any applications in particular that are running into errors due to this warning from brew doctor. Also note that this warning message didn't always print out when I ran brew doctor, it was something that started to appear recently. Also, I am using Python 2.7 on my computer, trying to stay away from Python 3.
I also received this message. Something, at some point, installed /Library/Frameworks/Python.framework on my machine (the folder date was about 4 years old). I've chosen to remove it; if nothing on your system depends on it, sudo rm -rf /Library/Frameworks/Python.framework does the job, but double-check first. Please note that the Apple-provided framework lives in /System/Library/Frameworks/Python.framework/ and should be left untouched.
Numpy hstack - "ValueError: all the input arrays must have same number of dimensions" - but they do
I am trying to join two numpy arrays. In one I have a set of columns/features after running TF-IDF on a single column of text. In the other I have one column/feature which is an integer. So I read in a column of train and test data, run TF-IDF on this, and then I want to add another integer column because I think this will help my classifier learn more accurately how it should behave. Unfortunately, I am getting the error in the title when I try and run hstack to add this single column to my other numpy array. Here is my code : #reading in test/train data for TF-IDF traindata = list(np.array(p.read_csv('FinalCSVFin.csv', delimiter=";"))[:,2]) testdata = list(np.array(p.read_csv('FinalTestCSVFin.csv', delimiter=";"))[:,2]) #reading in labels for training y = np.array(p.read_csv('FinalCSVFin.csv', delimiter=";"))[:,-2] #reading in single integer column to join AlexaTrainData = p.read_csv('FinalCSVFin.csv', delimiter=";")[["alexarank"]] AlexaTestData = p.read_csv('FinalTestCSVFin.csv', delimiter=";")[["alexarank"]] AllAlexaAndGoogleInfo = AlexaTestData.append(AlexaTrainData) tfv = TfidfVectorizer(min_df=3, max_features=None, strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1) #tf-idf object rd = lm.LogisticRegression(penalty='l2', dual=True, tol=0.0001, C=1, fit_intercept=True, intercept_scaling=1.0, class_weight=None, random_state=None) #Classifier X_all = traindata + testdata #adding test and train data to put into tf-idf lentrain = len(traindata) #find length of train data tfv.fit(X_all) #fit tf-idf on all our text X_all = tfv.transform(X_all) #transform it X = X_all[:lentrain] #reduce to size of training set AllAlexaAndGoogleInfo = AllAlexaAndGoogleInfo[:lentrain] #reduce to size of training set X_test = X_all[lentrain:] #reduce to size of training set #printing debug info, output below : print "X.shape => " + str(X.shape) print "AllAlexaAndGoogleInfo.shape => " + str(AllAlexaAndGoogleInfo.shape) print "X_all.shape => " + str(X_all.shape) #line we get error on X = np.hstack((X, AllAlexaAndGoogleInfo)) Below is the output and error message : X.shape => (7395, 238377) AllAlexaAndGoogleInfo.shape => (7395, 1) X_all.shape => (10566, 238377) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-2b310887b5e4> in <module>() 31 print "X_all.shape => " + str(X_all.shape) 32 #X = np.column_stack((X, AllAlexaAndGoogleInfo)) ---> 33 X = np.hstack((X, AllAlexaAndGoogleInfo)) 34 sc = preprocessing.StandardScaler().fit(X) 35 X = sc.transform(X) C:\Users\Simon\Anaconda\lib\site-packages\numpy\core\shape_base.pyc in hstack(tup) 271 # As a special case, dimension 0 of 1-dimensional arrays is "horizontal" 272 if arrs[0].ndim == 1: --> 273 return _nx.concatenate(arrs, 0) 274 else: 275 return _nx.concatenate(arrs, 1) ValueError: all the input arrays must have same number of dimensions What is causing my problem here? How can I fix this? As far as I can see I should be able to join these columns? What have I misunderstood? Thank you. 
Edit : Using the method in the answer below gets the following error : --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-16-640ef6dd335d> in <module>() ---> 36 X = np.column_stack((X, AllAlexaAndGoogleInfo)) 37 sc = preprocessing.StandardScaler().fit(X) 38 X = sc.transform(X) C:\Users\Simon\Anaconda\lib\site-packages\numpy\lib\shape_base.pyc in column_stack(tup) 294 arr = array(arr,copy=False,subok=True,ndmin=2).T 295 arrays.append(arr) --> 296 return _nx.concatenate(arrays,1) 297 298 def dstack(tup): ValueError: all the input array dimensions except for the concatenation axis must match exactly Interestingly, I tried to print the dtype of X and this worked fine : X.dtype => float64 However, trying to print the dtype of AllAlexaAndGoogleInfo like so : print "AllAlexaAndGoogleInfo.dtype => " + str(AllAlexaAndGoogleInfo.dtype) produces : 'DataFrame' object has no attribute 'dtype'
Use .column_stack. Like so: X = np.column_stack((X, AllAlexaAndGoogleInfo)) From the docs: Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with hstack.
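Note that, given the follow-up error in the question, X here is a sparse matrix (the output of TfidfVectorizer) and AllAlexaAndGoogleInfo is a DataFrame, so a dense stack can still fail or exhaust memory. A sketch of the sparse route instead, assuming you want to keep X sparse:

from scipy.sparse import hstack

# stack the sparse TF-IDF matrix with the dense one-column array;
# .values pulls the raw numpy array out of the DataFrame
X = hstack((X, AllAlexaAndGoogleInfo.values))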
Read a small random sample from a big CSV file into a Python data frame
The CSV file that I want to read does not fit into main memory. How can I read a few (~10K) random lines of it and do some simple statistics on the selected data frame?
Assuming no header in the CSV file: import pandas import random n = 1000000 #number of records in file s = 10000 #desired sample size filename = "data.txt" skip = sorted(random.sample(xrange(n),n-s)) df = pandas.read_csv(filename, skiprows=skip) It would be better if read_csv had a keeprows argument, or if skiprows took a callback function instead of a list. (In Python 3, use range instead of xrange.) With a header and unknown file length: import pandas import random filename = "data.txt" n = sum(1 for line in open(filename)) - 1 #number of records in file (excludes header) s = 10000 #desired sample size skip = sorted(random.sample(xrange(1,n+1),n-s)) #the 0-indexed header will not be included in the skip list df = pandas.read_csv(filename, skiprows=skip)
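Once the sample is loaded, the "simple statistics" part of the question is direct:

print(df.describe())  # count, mean, std, min, quartiles, max per numeric column
print(df.mean())      # just the column means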
Application not picking up .css file (flask/python)
I am rendering a template, that I am attempting to style with an external style sheet. File structure is as follows. /app - app_runner.py /services - app.py /templates - mainpage.html /styles - mainpage.css mainpage.html looks like this <html> <head> <link rel= "stylesheet" type= "text/css" href= "../styles/mainpage.css"> </head> <body> <!-- content --> None of my styles are being applied though. Does it have something to do with the fact that the html is a template I am rendering? The python looks like this. return render_template("mainpage.html", variables..) I know this much is working, because I am still able to render the template. However, when I tried to move my styling code from a "style" block within the html's "head" tag to an external file, all the styling went away, leaving a bare html page. Anyone see any errors with my file structure?
You need to have a 'static' folder setup (for css/js files) unless you specifically override it during Flask initialization. I am assuming you did not override it. Your directory structure for css should be like: /app - app_runner.py /services - app.py /templates - mainpage.html /static /styles - mainpage.css Notice that your /styles directory should be under /static Then, do this <link rel= "stylesheet" type= "text/css" href= "{{ url_for('static',filename='styles/mainpage.css') }}"> Flask will now look for the css file under static/styles/mainpage.css
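For reference, the override mentioned above is set when constructing the app; the folder names here are just examples:

from flask import Flask

# serve static files from ./assets at the URL path /assets
# instead of the default ./static and /static
app = Flask(__name__, static_folder='assets', static_url_path='/assets')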
How is order of items in matplotlib legend determined?
I am having to reorder items in a legend, when I don't think I should have to. I try: from pylab import * clf() ax=gca() ht=ax.add_patch(Rectangle((1,1),1,1,color='r',label='Top',alpha=.01)) h1=ax.bar(1,2,label='Middle') hb=ax.add_patch(Rectangle((1,1),1,1,color='k',label='Bottom',alpha=.01)) legend() show() and end up with Bottom above Middle. How can I get the right order? Is it not determined by creation order? Update: The following can be used to force the order. I think this may be the simplest way to do it, and that seems awkward. The question is what determines the original order? hh=[ht,h1,hb] legend([ht,h1.patches[0],hb],[H.get_label() for H in hh])
As for what determines the original order: matplotlib builds the default legend from the Axes' artists grouped by type (lines, then patches, then collections, then containers), not strictly in creation order, which is why your bar (a container) ends up after the two Rectangle patches. Here's a quick snippet to sort the entries in a legend. It assumes that you've already added your plot elements with a label, for example, something like ax.plot(..., label='label1') ax.plot(..., label='label2') and then the main bit: handles, labels = ax.get_legend_handles_labels() # sort both labels and handles by labels labels, handles = zip(*sorted(zip(labels, handles), key=lambda t: t[0])) ax.legend(handles, labels) This is just a simple adaptation from the code listed at http://matplotlib.org/users/legend_guide.html
What is the difference between os.path.basename() and os.path.dirname()?
I'm new to Python programming and while studying I had a doubt about these two functions. I already searched for answers and read some links, but didn't understand. Can anyone give a simple explanation?
Both functions use the os.path.split(path) function to split the pathname path into a pair (head, tail). The os.path.dirname(path) function returns the head of the path. E.g.: the dirname of '/foo/bar/item' is '/foo/bar'. The os.path.basename(path) function returns the tail of the path. E.g.: the basename of '/foo/bar/item' is 'item'. From: http://docs.python.org/2/library/os.path.html#os.path.basename
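A quick interactive check ties the three together:

>>> import os.path
>>> os.path.split('/foo/bar/item')
('/foo/bar', 'item')
>>> os.path.dirname('/foo/bar/item')
'/foo/bar'
>>> os.path.basename('/foo/bar/item')
'item'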
Label python data points on plot
I searched for ages (hours which is like ages) to find the answer to a really annoying (seemingly basic) problem, and because I can't find a question that quite fits the answer I am posting a question and answering it in the hope that it will save someone else the huge amount of time I just spent on my noobie plotting skills. If you want to label your plot points using python matplotlib from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = anyarray B = anyotherarray plt.plot(A,B) for i,j in zip(A,B): ax.annotate('%s)' %j, xy=(i,j), xytext=(30,0), textcoords='offset points') ax.annotate('(%s,' %i, xy=(i,j)) plt.grid() plt.show() I know that xytext=(30,0) goes along with textcoords; you use those 30,0 values to position the data label point, so it's on the 0 y axis and 30 over on the x axis in its own little area. You need both the lines plotting i and j, otherwise you only plot the x or y data label. You get something like this out (note the labels only): It's not ideal; there is still some overlap, but it's better than nothing, which is what I had.
How about printing (x, y) at once? from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = -0.75, -0.25, 0, 0.25, 0.5, 0.75, 1.0 B = 0.73, 0.97, 1.0, 0.97, 0.88, 0.73, 0.54 plt.plot(A,B) for xy in zip(A, B): # <-- ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data') # <-- plt.grid() plt.show()
pySerial write() won't take my string
Using Python 3.3 and pySerial for serial communications. I'm trying to write a command to my COM PORT but the write method won't take my string. (Most of the code is from here: Full examples of using Pyserial package.) What's going on? import time import serial ser = serial.Serial( port='\\\\.\\COM4', baudrate=115200, parity=serial.PARITY_ODD, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS ) if ser.isOpen(): ser.close() ser.open() ser.isOpen() ser.write("%01#RDD0010000107**\r") out = '' # let's wait one second before reading output (let's give device time to answer) time.sleep(1) while ser.inWaiting() > 0: out += ser.read(40) if out != '': print(">>" + out) ser.close() The error is at ser.write("%01#RDD0010000107**\r"). The traceback ends with: data = to_bytes(data) b.append(item) TypeError: an integer is required.
It turns out that the string needed to be converted to bytes, and to do this I edited the code to ser.write("%01#RDD0010000107**\r".encode()) This solved the problem.
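Note that on Python 3 reads come back as bytes as well, so the read loop needs a matching decode; a sketch of the adjusted loop, assuming the device answers in ASCII:

out = ''
time.sleep(1)
while ser.inWaiting() > 0:
    out += ser.read(40).decode('ascii')  # bytes -> str
if out != '':
    print(">>" + out)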
AttributeError: 'module' object has no attribute 'request'
When I run the following code in Python 3.3: import urllib tempfile = urllib.request.urlopen("http://yahoo.com") I get the error named in the title: AttributeError: 'module' object has no attribute 'request' What am I doing wrong? Thanks in advance!
Import urllib.request instead of urllib. import urllib.request
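In Python 3, urllib is a package split into submodules (urllib.request, urllib.parse, urllib.error), and importing the bare urllib package does not pull them in. A minimal working version of the snippet:

import urllib.request

tempfile = urllib.request.urlopen("http://yahoo.com")
print(tempfile.read()[:100])  # first 100 bytes of the response body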
How do I activate a virtualenv inside PyCharm's terminal?
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
Create a file .pycharmrc in your home folder with the following contents source ~/.bashrc source ~/pycharmvenv/bin/activate Using your virtualenv path as the last parameter. Then set the shell Preferences->Project Settings->Shell path to /bin/bash --rcfile ~/.pycharmrc
Plotting a decision boundary separating 2 classes using Matplotlib's pyplot
I could really use a tip to help me plot a decision boundary to separate two classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1-column vector consisting of 2 rows. E.g., [ 1 2 ] Let's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples). mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T When I plot the data points for each class, it would look like this: Now, I came up with an equation for a decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function: def decision_boundary(x_vec, mu_vec1, mu_vec2): g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1)) g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) ) return g1 - g2 I would really appreciate any help! EDIT: Intuitively (if I did my math right) I would expect the decision boundary to look somewhat like this red line when I plot the function...
Your question is more complicated than a simple plot: you need to draw the contour which will maximize the inter-class distance. Fortunately it's a well-studied field, particularly for SVM machine learning. The easiest method is to download the scikit-learn module, which provides a lot of cool methods to draw boundaries: http://scikit-learn.org/stable/modules/svm.html Code: # -*- coding: utf-8 -*- import numpy as np import matplotlib from matplotlib import pyplot as plt import scipy from sklearn import svm mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T fig = plt.figure() plt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+') plt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o') X = np.concatenate((x1_samples,x2_samples), axis = 0) Y = np.array([0]*100 + [1]*100) C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'linear', gamma=0.7, C=C ) clf.fit(X, Y) Linear Plot (taken from http://scikit-learn.org/stable/auto_examples/svm/plot_svm_margin.html) w = clf.coef_[0] a = -w[0] / w[1] xx = np.linspace(-5, 5) yy = a * xx - (clf.intercept_[0]) / w[1] plt.plot(xx, yy, 'k-') MultiLinear Plot (taken from http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html) C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C ) clf.fit(X, Y) h = .02 # step size in the mesh # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, cmap=plt.cm.Paired) Implementation If you want to implement it yourself, you need to solve the quadratic equation you get by setting your decision_boundary function to zero (shown as an image in the original answer; see the Wikipedia article on quadratic functions). Unfortunately, for non-linear boundaries like the one you drew, it's a difficult problem relying on a kernel trick, and there isn't a clear-cut solution.
Skip multiple iterations in loop python
I have a list in a loop and I want to skip 3 elements after look has been reached. In this answer a couple of suggestions were made but I fail to make good use of them: song = ['always', 'look', 'on', 'the', 'bright', 'side', 'of', 'life'] for sing in song: if sing == 'look': print sing continue continue continue continue print 'a' + sing print sing Four times continue is nonsense of course and using four times next() doesn't work. The output should look like: always look aside of life
for uses iter(song) to loop; you can do this in your own code and then advance the iterator inside the loop. Calling iter() on an iterator just returns the same iterator object, so you can advance the iterator inside the loop, with for following right along in the next iteration. Advance the iterator with the next() function; it works correctly in both Python 2 and 3 without having to adjust syntax: song = ['always', 'look', 'on', 'the', 'bright', 'side', 'of', 'life'] song_iter = iter(song) for sing in song_iter: print sing if sing == 'look': next(song_iter) next(song_iter) next(song_iter) print 'a' + next(song_iter) By moving the print sing line up we can avoid repeating ourselves too. Using next() this way can raise a StopIteration exception, if the iterable is out of values. You could catch that exception, but it'd be easier to give next() a second argument, a default value to ignore the exception and return the default instead: song = ['always', 'look', 'on', 'the', 'bright', 'side', 'of', 'life'] song_iter = iter(song) for sing in song_iter: print sing if sing == 'look': next(song_iter, None) next(song_iter, None) next(song_iter, None) print 'a' + next(song_iter, '') I'd use itertools.islice() to skip 3 elements instead; it saves repeated next() calls: from itertools import islice song = ['always', 'look', 'on', 'the', 'bright', 'side', 'of', 'life'] song_iter = iter(song) for sing in song_iter: print sing if sing == 'look': print 'a' + next(islice(song_iter, 3, 4), '') The islice(song_iter, 3, 4) iterable will skip 3 elements, then return the 4th, then be done. Calling next() on that object thus retrieves the 4th element from song_iter. Demo: >>> from itertools import islice >>> song = ['always', 'look', 'on', 'the', 'bright', 'side', 'of', 'life'] >>> song_iter = iter(song) >>> for sing in song_iter: ... print sing ... if sing == 'look': ... print 'a' + next(islice(song_iter, 3, 4), '') ... always look aside of life
Add element to a json in python
I am trying to add an element to a json file in python but I am not able to do it. This is what I tried until now (with some variation which I deleted): import json data = [ { 'a':'A', 'b':(2, 4), 'c':3.0 } ] print 'DATA:', repr(data) var = 2.4 data.append({'f':var}) print 'JSON', json.dumps(data) But, what I get is: DATA: [{'a': 'A', 'c': 3.0, 'b': (2, 4)}] JSON [{"a": "A", "c": 3.0, "b": [2, 4]}, {"f": 2.4}] Which is fine, because I also need this to add a new row instead of an element, but I want to get something like this: [{'a': 'A', 'c': 3.0, 'b': (2, 4), "f":2.4}] How should I add the new element?
You can do this: data[0]['f'] = var Your data is a list holding a single dict; append() adds a new dict to the list (a new "row"), while indexing into the existing dict and assigning a key adds the element to it.
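Put together with the snippet from the question, the JSON then comes out as a single object:

import json

data = [{'a': 'A', 'b': (2, 4), 'c': 3.0}]
var = 2.4
data[0]['f'] = var
print(json.dumps(data))
# [{"a": "A", "c": 3.0, "b": [2, 4], "f": 2.4}]  (key order may differ)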
Can I speed up this basic linear algebra code?
I was wondering whether it is possible to optimise the following using Numpy or mathematical trickery. def f1(g, b, dt, t1, t2): p = np.copy(g) for i in range(dt): p += t1*np.tanh(np.dot(p, b)) + t2*p return p where g is a vector of length n, b is an nxn matrix, dt is the number of iterations, and t1 and t2 are scalars. I have quickly run out of ideas on how to optimise this further, because p is used within the loop, in all three terms of the equation: when added to itself; in the dot product; and in a scalar multiplication. But maybe there is a different way to represent this function or there are other tricks to improve its efficiency. If possible, I would prefer not to use Cython etc., but I'd be willing to use it if the speed improvements are significant. Thanks in advance, and apologies if the question is out of scope somehow. Update: The answers provided so far are more focused on what the values of the input/output could be to avoid unnecessary operations. I have now updated the MWE with proper initialisation values for the variables (I didn't expect the optimisation ideas to come from that side -- apologies). g will be in the range [-1, 1] and b will be in the range [-infinity, infinity]. Approximating the output is not an option because the returned vectors are later given to an evaluation function; approximation may return the same vector for fairly similar input, so it is not an option. MWE: import numpy as np import timeit iterations = 10000 setup = """ import numpy as np n = 100 g = np.random.uniform(-1, 1, (n,)) # Updated. b = np.random.uniform(-1, 1, (n,n)) # Updated. dt = 10 t1 = 1 t2 = 1/2 def f1(g, b, dt, t1, t2): p = np.copy(g) for i in range(dt): p += t1*np.tanh(np.dot(p, b)) + t2*p return p """ functions = [ """ p = f1(g, b, dt, t1, t2) """ ] if __name__ == '__main__': for function in functions: print(function) print('Time = {}'.format(timeit.timeit(function, setup=setup, number=iterations)))
Getting the code to run much faster without Cython or a JIT will be very hard; some mathematical trickery may be the easier approach. It appears to me that if we define k(g, b) = f1(g, b, n+1, t1, t2)/f1(g, b, n, t1, t2) for n in positive N, the k function should have a limit of t1+t2 (I don't have a solid proof yet, just a gut feeling; it may also be a special case for E(g)=0 & E(p)=0). For t1=1 and t2=0.5, k() appears to approach the limit fairly quickly; for N>100, it is almost a constant of 1.5. So I think a numerical approximation approach should be the easiest one. In [81]: t2=0.5 data=[f1(g, b, i+2, t1, t2)/f1(g, b, i+1, t1, t2) for i in range(1000)] In [82]: plt.figure(figsize=(10,5)) plt.plot(data[0], '.-', label='1') plt.plot(data[4], '.-', label='5') plt.plot(data[9], '.-', label='10') plt.plot(data[49], '.-', label='50') plt.plot(data[99], '.-', label='100') plt.plot(data[999], '.-', label='1000') plt.xlim(xmax=120) plt.legend() plt.savefig('limit.png') In [83]: data[999] Out[83]: array([ 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5])
Why does locals() return a strange self referential list?
So I'm using locals() to grab some arguments in the function. Works nicely: def my_function(a, b): print locals().values() >>> my_function(1,2) [1, 2] Standard stuff. But now let's introduce a list comprehension: def my_function(a, b): print [x for x in locals().values()] >>> my_function(1,2) [[...], 1, 2] Ehh? Why has it inserted a self-reference?
Python versions before 2.7 and 3.1 used suboptimal bytecode to produce a list comprehension. In those Python versions, the list comprehension was stored in a local variable (or even a global, if at module scope): >>> import dis >>> def foo(): ... return [x for x in y] ... >>> dis.dis(foo) 2 0 BUILD_LIST 0 3 DUP_TOP 4 STORE_FAST 0 (_[1]) 7 LOAD_GLOBAL 0 (y) 10 GET_ITER >> 11 FOR_ITER 13 (to 27) 14 STORE_FAST 1 (x) 17 LOAD_FAST 0 (_[1]) 20 LOAD_FAST 1 (x) 23 LIST_APPEND 24 JUMP_ABSOLUTE 11 >> 27 DELETE_FAST 0 (_[1]) 30 RETURN_VALUE The _[1] local variable is the list-in-progress. When nesting list comprehensions it would use increasing integers to refer to the result: >>> def bar(): ... return [[x for x in y] for z in spam] ... >>> dis.dis(bar) 2 0 BUILD_LIST 0 3 DUP_TOP 4 STORE_FAST 0 (_[1]) 7 LOAD_GLOBAL 0 (spam) 10 GET_ITER >> 11 FOR_ITER 40 (to 54) 14 STORE_FAST 1 (z) 17 LOAD_FAST 0 (_[1]) 20 BUILD_LIST 0 23 DUP_TOP 24 STORE_FAST 2 (_[2]) 27 LOAD_GLOBAL 1 (y) 30 GET_ITER >> 31 FOR_ITER 13 (to 47) 34 STORE_FAST 3 (x) 37 LOAD_FAST 2 (_[2]) 40 LOAD_FAST 3 (x) 43 LIST_APPEND 44 JUMP_ABSOLUTE 31 >> 47 DELETE_FAST 2 (_[2]) 50 LIST_APPEND 51 JUMP_ABSOLUTE 11 >> 54 DELETE_FAST 0 (_[1]) 57 RETURN_VALUE By looping over locals().values() you included a reference to the list-in-progress in the return value. Note that the bytecode uses a DELETE_FAST to clean up the local name to try and avoid the namespace pollution. This was optimized for Python 3.1 and 2.7, see issue 2183. The list result under construction was moved to the stack instead. The optimization changed the LIST_APPEND bytecode to reference what list on the stack to append to, removing the need to use DUP_TOP -> STORE_FAST at the start, LOAD_FAST each iteration and DELETE_FAST after the list comprehension.
Install python packages to correct anaconda environment
I've setup anaconda and created a python 3.3 environment. Now I wanted to install some package (dataset). The install instructions ask to clone the git repo and run python setup.py install but now the packages are not installed to the environment's site-packages folder but to a different anaconda location. What are the normal steps to solve that problem? Newbie-compatible solutions are preferred. The OS is MacOSX, just in case it is relevant.
It looks like conda automatically adds pip to your conda environment, so after you source your conda environment, i.e.: source activate ~/anaconda/envs/dataset you should be able to install it like this: git clone git://github.com/pudo/dataset.git pip install ./dataset EDIT Here are the exact steps I took: $ conda create -p ~/anaconda/envs/py33 python=3.3 anaconda pip $ source activate ~/anaconda/envs/py33 $ which pip ~/anaconda/envs/py33/bin/pip $ pip install ./dataset/
clang error: unknown argument: '-mno-fused-madd' (python package installation failure)
I get the following error when attempting to install psycopg2 via pip on Mavericks 10.9: clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] Not sure how to proceed and have searched here and elsewhere for this particular error. Any help is much appreciated! Here is the complete output from pip: $ pip install psycopg2 Downloading/unpacking psycopg2 Downloading psycopg2-2.5.2.tar.gz (685kB): 685kB downloaded Running setup.py (path:/private/var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip_build_tino/psycopg2/setup.py) egg_info for package psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090303 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/local/Cellar/postgresql/9.3.3/include -I/usr/local/Cellar/postgresql/9.3.3/include/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.9-intel-2.7/psycopg/psycopgmodule.o clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future error: command 'cc' failed with exit status 1 Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip_build_tino/psycopg2/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip-bnWiwB-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/__init__.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/_json.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/_range.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/errorcodes.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/extensions.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/extras.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/pool.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/psycopg1.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 copying lib/tz.py -> build/lib.macosx-10.9-intel-2.7/psycopg2 creating build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/__init__.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/dbapi20.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/dbapi20_tpc.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_async.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_bug_gc.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_bugX000.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_cancel.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_connection.py -> 
build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_copy.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_cursor.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_dates.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_extras_dictcursor.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_green.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_lobject.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_module.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_notify.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_psycopg2_dbapi20.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_quote.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_transaction.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_types_basic.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_types_extras.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/test_with.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/testconfig.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests copying tests/testutils.py -> build/lib.macosx-10.9-intel-2.7/psycopg2/tests running build_ext building 'psycopg2._psycopg' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/psycopg cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090303 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/local/Cellar/postgresql/9.3.3/include -I/usr/local/Cellar/postgresql/9.3.3/include/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.9-intel-2.7/psycopg/psycopgmodule.o clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip_build_tino/psycopg2/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip-bnWiwB-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/0z/ljjwsjmn4v9_zwm81vhxj69m0000gn/T/pip_build_tino/psycopg2
You can tell clang to not raise this as an error by setting the following environment variables prior to compilation: export CFLAGS=-Qunused-arguments export CPPFLAGS=-Qunused-arguments Then pip install psycopg2 should work. I had the same when trying to pip install lxml. Edit: if you are installing as superuser (which will likely be the case if you are trying to append to /Library/Python/2.7/site-packages, the native Apple factory-installed Python distribution which ships with OS X, rather than to some other Python distribution which you have subsequently installed yourself), then you will need to do, as described by @Thijs Kuipers in comments below: sudo -E pip install psycopg2 or the equivalent, for whatever other package name you may be substituting in place of psycopg2. UPDATE [2014-05-16]: Apple has fixed this problem with updated system Pythons (2.7, 2.6, and 2.5) in OS X 10.9.3 so the workaround is no longer necessary when using the latest Mavericks and Xcode 5.1+. However, as of now, the workaround is still required for OS X 10.8.x (Mountain Lion, currently 10.8.5) if you are using Xcode 5.1+ there.
Installing Pillow/PIL on Mavericks
I'm getting a strange error when trying to install Pillow using pip/easy_install: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DHAVE_LIBJPEG -DHAVE_LIBZ -DHAVE_LIBTIFF -I/System/Library/Frameworks/Tcl.framework/Versions/8.5/Headers -I/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -I/usr/local/Cellar/freetype/2.5.3/include/freetype2 -I/private/var/folders/c_/r7sp373509jdb6_1xmmzvl9c0000gn/T/pip_build_tills13/Pillow/libImaging -I/System/Library/Frameworks/Python.framework/Versions/2.7/include -I/usr/local/include -I/usr/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _imaging.c -o build/temp.macosx-10.9-intel-2.7/_imaging.o clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future error: command 'cc' failed with exit status 1 I've read all the symlink answers, I've installed commandline tools, nothing seems to be working. I always get that error.
I solved that problem the following way. It probably has something to do with today's Mavericks command-line tools update. Try adding the following in the terminal before executing pip install: export CFLAGS=-Qunused-arguments export CPPFLAGS=-Qunused-arguments
In Python, why can a lambda expression refer to the variable being defined but not a list?
This is more a curiosity than anything, but I just noticed the following. If I am defining a self-referential lambda, I can do it easily: >>> f = lambda: f >>> f() is f True But if I am defining a self-referential list, I have to do it in more than one statement: >>> a = [a] Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'a' is not defined >>> a = [] >>> a.append(a) >>> a[0] is a True >>> a [[...]] I also noticed that this is not limited to lists; it seems like any expression other than a lambda cannot reference the variable left of the assignment. For example, if you have a cyclic linked-list with one node, you can't simply go: >>> class Node(object): ... def __init__(self, next_node): ... self.next = next_node ... >>> n = Node(n) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'n' is not defined Instead, you have to do it in two statements: >>> n = Node(None) >>> n.next = n >>> n is n.next True Does anyone know what the philosophy behind this difference is? I understand that recursive lambdas are used much more frequently, and hence supporting self-reference is important for lambdas, but why not allow it for any assignment? EDIT: The answers below clarify this quite nicely. The reason is that variables in lambdas in Python are evaluated each time the lambda is called, not when it's defined. In this sense they are exactly like functions defined using def. I wrote the following bit of code to experiment with how this works, both with lambdas and def functions in case it might help clarify it for anyone. >>> f = lambda: f >>> f() is f True >>> g = f >>> f = "something else" >>> g() 'something else' >>> f = "hello" >>> g() 'hello' >>> f = g >>> g() is f True >>> def f(): ... print(f) ... >>> f() <function f at 0x10d125560> >>> g = f >>> g() <function f at 0x10d125560> >>> f = "test" >>> g() test >>> f = "something else" >>> g() something else
The expression inside a lambda is evaluated when the function is called, not when it is defined. In other words, Python will not evaluate the f inside your lambda until you call it. And, by then, f is already defined in the current scope (it is the lambda itself). Hence, no NameError is raised. Note that this is not the case for a line like this: a = [a] When Python interprets this type of line (known as an assignment statement), it will evaluate the expression on the right of the = immediately. Moreover, a NameError will be raised for any name used on the right that is undefined in the current scope.
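The same contrast, reduced to a four-line demonstration:

f = lambda: f   # fine: the body is only evaluated when f() is called
try:
    a = [a]     # fails: the list display is evaluated immediately
except NameError as e:
    print(e)    # name 'a' is not defined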
Django: List field in model?
In my model, I want a field that has a list of triplets, e.g. [[1, 3, 4], [4, 2, 6], [8, 12, 3], [3, 3, 9]]. Is there a field that can store this data in the database?
You can convert it into a string using JSON and store it as a string. For example: In [3]: json.dumps([[1, 3, 4], [4, 2, 6], [8, 12, 3], [3, 3, 9]]) Out[3]: '[[1, 3, 4], [4, 2, 6], [8, 12, 3], [3, 3, 9]]' You can add methods to your class to convert it automatically for you. import json class Foobar(models.Model): foo = models.CharField(max_length=200) def setfoo(self, x): self.foo = json.dumps(x) def getfoo(self): return json.loads(self.foo) If you're using Django 1.9 and PostgreSQL, there is a new class called JSONField; you should use it instead (see the Django documentation). There is a good talk about PostgreSQL JSONs and Arrays on YouTube; watch it, it has very good information.
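A sketch of the JSONField version (the field name here is just an example):

from django.contrib.postgres.fields import JSONField
from django.db import models

class Foobar(models.Model):
    # stores the list of triplets natively as JSON in PostgreSQL
    triplets = JSONField(default=list)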
get list from pandas dataframe column
I have an excel document which looks like this:

cluster  load_date  budget  actual  fixed_price
A        1/1/2014   1000    4000    Y
A        2/1/2014   12000   10000   Y
A        3/1/2014   36000   2000    Y
B        4/1/2014   15000   10000   N
B        4/1/2014   12000   11500   N
B        4/1/2014   90000   11000   N
C        7/1/2014   22000   18000   N
C        8/1/2014   30000   28960   N
C        9/1/2014   53000   51200   N

I want to be able to return the contents of column 1 - cluster as a list, so I can run a for loop over it, and create an excel worksheet for every cluster. Is it also possible to return the contents of a whole row to a list? e.g. list = [], list[column1] or list[df.ix(row1)]
Pandas Dataframe columns are a Pandas Series when you pull them out, which you can then call .tolist() on to turn them into a python list from pandas import * d = {'one' : Series([1., 2., 3.], index=['a', 'b', 'c']), 'two' : Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])} df = DataFrame(d) #print df print "DF", type(df['one']), "\n", df['one'] dfList = df['one'].tolist() print "DF list", dfList, type(dfList) This question might be helpful. And the Pandas docs are actually quite good once you get your head around their style. So in your case you could: my_list = df["cluster"].tolist() and then go from there
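For the second part of the question (a whole row as a list), the same .tolist() applies to a row Series, and .values covers the whole frame:

row_list = df.iloc[0].tolist()   # the first row as a list
all_rows = df.values.tolist()    # every row, as a list of lists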
Pythonic way of detecting outliers in one dimensional observation data
For the given data, I want to set the outlier values (defined by the 95% confidence level or 95% quantile function or anything that is required) as nan values. Following is my data and the code that I am using right now. I would be glad if someone could explain this to me further. import numpy as np, matplotlib.pyplot as plt data = np.random.rand(1000)+5.0 plt.plot(data) plt.xlabel('observation number') plt.ylabel('recorded value') plt.show()
The problem with using percentile is that the number of points identified as outliers is a function of your sample size. There are a huge number of ways to test for outliers, and you should give some thought to how you classify them. Ideally, you should use a-priori information (e.g. "anything above/below this value is unrealistic because...") However, a common, not-too-unreasonable outlier test is to remove points based on their "median absolute deviation". Here's an implementation for the N-dimensional case (from some code for a paper here: https://github.com/joferkington/oost_paper_code/blob/master/utilities.py): def is_outlier(points, thresh=3.5): """ Returns a boolean array with True if points are outliers and False otherwise. Parameters: ----------- points : A numobservations by numdimensions array of observations thresh : The modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers. Returns: -------- mask : A numobservations-length boolean array. References: ---------- Boris Iglewicz and David Hoaglin (1993), "Volume 16: How to Detect and Handle Outliers", The ASQC Basic References in Quality Control: Statistical Techniques, Edward F. Mykytka, Ph.D., Editor. """ if len(points.shape) == 1: points = points[:,None] median = np.median(points, axis=0) diff = np.sum((points - median)**2, axis=-1) diff = np.sqrt(diff) med_abs_deviation = np.median(diff) modified_z_score = 0.6745 * diff / med_abs_deviation return modified_z_score > thresh This is very similar to one of my previous answers, but I wanted to illustrate the sample size effect in detail. Let's compare a percentile-based outlier test (similar to @CTZhu's answer) with a median-absolute-deviation (MAD) test for a variety of different sample sizes: import numpy as np import matplotlib.pyplot as plt import seaborn as sns def main(): for num in [10, 50, 100, 1000]: # Generate some data x = np.random.normal(0, 0.5, num-3) # Add three outliers... x = np.r_[x, -3, -10, 12] plot(x) plt.show() def mad_based_outlier(points, thresh=3.5): if len(points.shape) == 1: points = points[:,None] median = np.median(points, axis=0) diff = np.sum((points - median)**2, axis=-1) diff = np.sqrt(diff) med_abs_deviation = np.median(diff) modified_z_score = 0.6745 * diff / med_abs_deviation return modified_z_score > thresh def percentile_based_outlier(data, threshold=95): diff = (100 - threshold) / 2.0 minval, maxval = np.percentile(data, [diff, 100 - diff]) return (data < minval) | (data > maxval) def plot(x): fig, axes = plt.subplots(nrows=2) for ax, func in zip(axes, [percentile_based_outlier, mad_based_outlier]): sns.distplot(x, ax=ax, rug=True, hist=False) outliers = x[func(x)] ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False) kwargs = dict(y=0.95, x=0.05, ha='left', va='top') axes[0].set_title('Percentile-based Outliers', **kwargs) axes[1].set_title('MAD-based Outliers', **kwargs) fig.suptitle('Comparing Outlier Tests with n={}'.format(len(x)), size=14) main() Notice that the MAD-based classifier works correctly regardless of sample size, while the percentile-based classifier classifies more points the larger the sample size is, regardless of whether or not they are actually outliers.
Access-Control-Allow-Origin in Django app when accessed with Phonegap
I'm developing a Phonegap app for my Django based app, but when trying to make Ajax calls I get this error: XMLHttpRequest cannot load http://domain.herokuapp.com/getcsrf/?tags=jquery%2Cjavascript&tagmode=any&format=json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. How can I make it so my Django app allows cross origin for some urls? Here's my Ajax code: get: function() { $.getJSON("http://domain.herokuapp.com/getcsrf/", { tags: "jquery,javascript", tagmode: "any", format: "json" }, function(data) { $.each(data.items, function(item){ console.log(item); }); }); }
Django by default does not provide the headers necessary to allow cross-origin requests. The easiest way would be to just use this Django app that handles it for you: https://github.com/ottoyiu/django-cors-headers You can then set whichever domains you want whitelisted using the setting CORS_ORIGIN_WHITELIST = ( 'google.com', 'hostname.example.com' ) To allow all origins, just use the setting... CORS_ORIGIN_ALLOW_ALL = True and then do any filtering of the request in middleware or in the view.
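A sketch of the full wiring in settings.py; the app and middleware names below follow the project's README:

INSTALLED_APPS = (
    # ...
    'corsheaders',
)

MIDDLEWARE_CLASSES = (
    'corsheaders.middleware.CorsMiddleware',  # place it as high as possible
    'django.middleware.common.CommonMiddleware',
    # ...
)

CORS_ORIGIN_WHITELIST = (
    'google.com',
    'hostname.example.com',
)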
Is there a datetime ± infinity?
For floats we have special objects like -inf (and +inf), and which are guaranteed to compare less than (and greater than) other numbers. I need something similar for datetimes, is there any such thing? In-db ordering must work correctly with django queryset filters, and ideally it should be db-agnostic (but at the very least it must work with mysql and sqlite) and timezone-agnostic. At the moment I'm using null/None, but it is creating very messy queries because None is doing the job of both -inf and +inf and I have to explicitly account for all those cases in the queries.
Try this: >>> import datetime >>> datetime.datetime.max datetime.datetime(9999, 12, 31, 23, 59, 59, 999999) You can get min/max for datetime, date, and time.
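The corresponding minimum exists as well:

>>> datetime.datetime.min
datetime.datetime(1, 1, 1, 0, 0)
>>> datetime.date.max
datetime.date(9999, 12, 31)

One caveat for the MySQL side: datetime.min falls in year 1, while MySQL's DATETIME type only supports the range '1000-01-01 00:00:00' to '9999-12-31 23:59:59', so check that your sentinel minimum survives a round trip through the database.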
Get Traceback of warnings
In numpy we can do np.seterr(invalid='raise') to make warnings raise an error instead, which gives a traceback (see this post). Is there a general way of tracing warnings? Can I make Python give a traceback when a warning is raised?
You can get what you want by assigning to warnings.showwarning. The warnings module documentation itself recommends that you do that, so it's not that you're being tempted by the dark side of the source. :) You may replace this function with an alternative implementation by assigning to warnings.showwarning. You can define a new function that does what warnings.showwarning normally does and additionally prints the stack. Then you place it instead of the original: import traceback import warnings import sys def warn_with_traceback(message, category, filename, lineno, file=None, line=None): traceback.print_stack() log = file if hasattr(file,'write') else sys.stderr log.write(warnings.formatwarning(message, category, filename, lineno, line)) warnings.showwarning = warn_with_traceback After this, every warning will print the stack trace as well as the warning message. Take into account, however, that if the warning is ignored because it is not the first one, nothing will happen, so you still need to execute: warnings.simplefilter("always") You can get control similar to what numpy.seterr gives through the warnings module's filters. If what you want is for Python to report a warning every time it is triggered and not only the first time, you can include something like: import warnings warnings.simplefilter("always") You can get other behaviours by passing different strings as arguments. Using the same function you can also specify different behaviours for warnings depending on the module that raised them, the message they provide, the warning class, the line of code that is causing it and so on... You can check the list in the module documentation. As an example, you can set all the warnings to raise exceptions, except the DeprecationWarnings that should be ignored completely: import warnings warnings.simplefilter("error") warnings.simplefilter("ignore", DeprecationWarning) This way you get the full traceback for each warning raised as an error (only the first one, since execution will stop... but you can address them one by one, and create a filter to ignore the ones you don't want to hear about again).
How to obtain values of parameters of get request in flask?
The answer that I found on the web is to use request.args.get. However, I cannot get it to work. I have the following simple example: from flask import Flask app = Flask(__name__) @app.route("/hello") def hello(): print request.args['x'] return "Hello World!" if __name__ == "__main__": app.run() I go to 127.0.0.1:5000/hello?x=2 in my browser and as a result I get: Internal Server Error The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application. What am I doing wrong?
The simple answer is you have not imported the request global object from the flask package. from flask import Flask, request This is easy to determine yourself by running the development server in debug mode by doing app.run(debug=True) This will give you a stacktrace including: print request.args['x'] NameError: global name 'request' is not defined
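With the import in place, a version of the view that also avoids a KeyError when the parameter is absent:

from flask import Flask, request

app = Flask(__name__)

@app.route("/hello")
def hello():
    x = request.args.get('x')  # None if 'x' was not supplied
    print(x)
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True)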
Problems with pip install numpy - RuntimeError: Broken toolchain: cannot link a simple C program
I'm trying to install numpy (and scipy and matplotlib) into a virturalenv. I keep getting these errors though: RuntimeError: Broken toolchain: cannot link a simple C program ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 I have the command line tools for xcode installed $ which gcc /usr/bin/gcc $ which cc /usr/bin/cc I'm on Mac OSX 10.9 Using a brew installed python Edit Yes, trying to install with pip. The whole traceback is huge (>400 lines) Here is a section of it: C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' cc: _configtest.c clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "<string>", line 17, in <module> File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/setup.py", line 192, in <module> setup_package() File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/setup.py", line 185, in setup_package configuration=configuration ) File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/core.py", line 169, in setup return old_setup(**new_attr) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/command/egg_info.py", line 10, in run self.run_command("build_src") File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/command/build_src.py", line 153, in run self.build_sources() File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/command/build_src.py", line 164, in build_sources self.build_library_sources(*libname_info) File "/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/command/build_src.py", line 299, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File 
"/Users/bdhammel/Documents/research_programming/julia_env/build/numpy/numpy/distutils/command/build_src.py", line 386, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 674, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program
While it's ugly, it appears to work: sudo ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install --upgrade numpy Note that if you are getting this error for a package other than numpy (such as lxml), specify that package name instead of numpy at the end of the command. I saw a similar issue someone was having with installing a gem: Ruby Gem install Json fails on Mavericks and Xcode 5.1 - unknown argument: '-multiply_definedsuppress' This is only a temporary fix; at some point the compiler options themselves will have to be fixed.
Setting initial Django form field value in the __init__ method
Django 1.6 I have a working block of code in a Django form class as shown below. The data set from which I'm building the form field list can include an initial value for any of the fields, and I'm having no success in setting that initial value in the form. The if field_value: block below does indeed populate the initial form dictionary attribute, but the initial value is not being displayed. Note that (in case you are wondering) the .initial attribute does not exist until after the super() call. Can this be done? If so, what am I not doing right to make this work? Thanks! def __init__(self, *args, **kwargs): id = kwargs.pop('values_id', 0) super(LaunchForm, self).__init__(*args, **kwargs) # Lotsa code here that uses the id value # to execute a query and build the form # fields and their attributes from the # result set if field_value: self.initial[field_name] = field_value
I had that exact same problem, and I solved it by doing this: def __init__(self, *args, **kwargs): instance = kwargs.get('instance', None) kwargs.update(initial={ # 'field': 'value' 'km_partida': '1020' }) super(ViagemForm, self).__init__(*args, **kwargs) # all other stuff The important part is that the initial values go into kwargs before super().__init__() is called, rather than being assigned to self.initial afterwards.
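A sketch of the same idea adapted to the LaunchForm from the question (field_name and field_value stand in for whatever the values_id query actually produces):

def __init__(self, *args, **kwargs):
    id = kwargs.pop('values_id', 0)
    # ... run the query keyed on id here and collect field_name / field_value ...
    initial = kwargs.get('initial', {})
    if field_value:
        initial[field_name] = field_value
    kwargs['initial'] = initial
    super(LaunchForm, self).__init__(*args, **kwargs)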
What is the difference between curly brace and square bracket in Python?
What is the difference between curly braces and square brackets in Python? A = {1,2} B = [1,2] When I print A and B on my terminal, they seem to make no difference. Is that real? And sometimes, I have noticed some code uses {} and [] to initialize different variables. E.g. A = [], B = {} Is there any difference there?
Curly braces create dictionaries or sets. Square brackets create lists. They are called literals; a set literal: aset = {'foo', 'bar'} or a dictionary literal: adict = {'foo': 42, 'bar': 81} empty_dict = {} or a list literal: alist = ['foo', 'bar', 'bar'] empty_list = [] To create an empty set, you can only use set(). Sets are collections of unique elements and you cannot order them. Lists are ordered sequences of elements, and values can be repeated. Dictionaries map keys to values, and keys must be unique. Set elements and dictionary keys must meet other restrictions as well, so that Python can actually keep track of them efficiently and know they are and will remain unique. There is also the tuple type, using a comma for 1 or more elements, with parentheses being optional in many contexts: atuple = ('foo', 'bar') another_tuple = 'spam', empty_tuple = () WARNING_not_a_tuple = ('eggs') Note the comma in the another_tuple definition; it is that comma that makes it a tuple, not the parentheses. WARNING_not_a_tuple is not a tuple: it has no comma. Without the parentheses, all you have left is a string instead. See the data structures chapter of the Python tutorial for more details; lists are introduced in the introduction chapter. Literals for containers such as these are also called displays, and the syntax allows for procedural creation of the contents based on looping, called comprehensions.
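A quick interactive check of the empty-literal pitfall (Python 2 shown, matching the other examples):

>>> type({})        # an empty {} is a dict, never a set
<type 'dict'>
>>> type(set())     # the only way to write an empty set
<type 'set'>
>>> type([])
<type 'list'>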
count the frequency that a value occurs in a dataframe column
I have a dataset |category| cat a cat b cat a I'd like to be able to return something like (showing unique values and frequency) category | freq | cat a 2 cat b 1
Use groupby and count: In [37]: df = pd.DataFrame({'a':list('abssbab')}) df.groupby('a').count() Out[37]: a a a 2 b 3 s 2 [3 rows x 1 columns] See the online docs: http://pandas.pydata.org/pandas-docs/stable/groupby.html There is also value_counts(), as @DSM has commented - many ways to skin a cat here: In [38]: df['a'].value_counts() Out[38]: b 3 a 2 s 2 dtype: int64 If you wanted to add the frequency back to the original dataframe, use transform to return an aligned index: In [41]: df['freq'] = df.groupby('a')['a'].transform('count') df Out[41]: a freq 0 a 2 1 b 3 2 s 2 3 s 2 4 b 3 5 a 2 6 b 3 [7 rows x 2 columns]
How to add a new row to an empty numpy array
Using standard Python lists, I can do the following: arr = [] arr.append([1,2,3]) arr.append([4,5,6]) # arr is now [[1,2,3],[4,5,6]] However, I cannot do the same thing in numpy. For example: arr = np.array([]) arr = np.append(arr, np.array([1,2,3])) arr = np.append(arr, np.array([4,5,6])) # arr is now [1,2,3,4,5,6] I also looked into vstack, but when I use vstack on an empty array, I get: ValueError: all the input array dimensions except for the concatenation axis must match exactly So how do I append a new row to an empty array in numpy?
The way to "start" the array that you want is: arr = np.empty((0,3), int) Which is an empty array but it has the proper dimensionality. >>> arr array([], shape=(0, 3), dtype=int64) Then be sure to append along axis 0: arr = np.append(arr, np.array([[1,2,3]]), axis=0) arr = np.append(arr, np.array([[4,5,6]]), axis=0) But, @jonrsharpe is right. In fact, if you're going to be appending in a loop, it would be much faster to append to a list as in your first example, then convert to a numpy array at the end, since you're really not using numpy as intended during the loop: In [210]: %%timeit .....: l = [] .....: for i in xrange(1000): .....: l.append([3*i+1,3*i+2,3*i+3]) .....: l = np.asarray(l) .....: 1000 loops, best of 3: 1.18 ms per loop In [211]: %%timeit .....: a = np.empty((0,3), int) .....: for i in xrange(1000): .....: a = np.append(a, 3*i+np.array([[1,2,3]]), 0) .....: 100 loops, best of 3: 18.5 ms per loop In [214]: np.allclose(a, l) Out[214]: True The numpythonic way to do it depends on your application, but it would be more like: In [220]: timeit n = np.arange(1,3001).reshape(1000,3) 100000 loops, best of 3: 5.93 µs per loop In [221]: np.allclose(a, n) Out[221]: True
Invalid control character with Python json.loads
Below is my string that is getting printed out with the below code - jsonString = data.decode("utf-8") print jsonString And below is the string that got printed out on the console - {"description":"Script to check testtbeat of TEST 1 server.", "script":"#!/bin/bash\nset -e\n\nCOUNT=60 #number of 10 second timeouts in 10 minutes\nSUM_SYNCS=0\nSUM_SYNCS_BEHIND=0\nHOSTNAME=$hostname \n\nwhile [[ $COUNT -ge \"0\" ]]; do\n\necho $HOSTNAME\n\n#send the request, put response in variable\nDATA=$(wget -O - -q -t 1 http://$HOSTNAME:8080/heartbeat)\n\n#grep $DATA for syncs and syncs_behind\nSYNCS=$(echo $DATA | grep -oE 'num_syncs: [0-9]+' | awk '{print $2}')\nSYNCS_BEHIND=$(echo $DATA | grep -oE 'num_syncs_behind: [0-9]+' | awk '{print $2}')\n\necho $SYNCS\necho $SYNCS_BEHIND\n\n#verify conditionals\nif [[ $SYNCS -gt \"8\" && $SYNCS_BEHIND -eq \"0\" ]]; then exit 0; fi\n\n#decrement the counter\nlet COUNT-=1\n\n#wait another 10 seconds\nsleep 10\n\ndone\n"} But when I load this out using python json.loads as shown below- jStr = json.loads(jsonString) I am getting this error - ERROR Invalid control character at: line 1 column 202 (char 202) I looked at char 202 but I have no idea why that is causing an issue? char 202 in my notepad++ is e I guess.. Or may be I am calculating it wrong Any idea what is wrong? How do I find out which one is causing problem. UPDATE:- jsonString = {"description":"Script to check testtbeat of TIER 1 server.", "script":"#!/bin/bash\nset -e\n\nCOUNT=60 #number of 10 second timeouts in 10 minutes\nSUM_SYNCS=0\nSUM_SYNCS_BEHIND=0\nHOSTNAME=$hostname \n\nwhile [[ $COUNT -ge \"0\" ]]; do\n\necho $HOSTNAME\n\n#send the request, put response in variable\nDATA=$(wget -O - -q -t 1 http://$HOSTNAME:8080/heartbeat)\n\n#grep $DATA for syncs and syncs_behind\nSYNCS=$(echo $DATA | grep -oE 'num_syncs: [0-9]+' | awk '{print $2}')\nSYNCS_BEHIND=$(echo $DATA | grep -oE 'num_syncs_behind: [0-9]+' | awk '{print $2}')\n\necho $SYNCS\necho $SYNCS_BEHIND\n\n#verify conditionals\nif [[ $SYNCS -gt \"8\" && $SYNCS_BEHIND -eq \"0\" ]]; then exit 0; fi\n\n#decrement the counter\nlet COUNT-=1\n\n#wait another 10 seconds\nsleep 10\n\ndone\n"} print jsonString[202] Below error I got - KeyError: 202
There is no error in your JSON text itself. You can get the error if you copy-paste the string into your Python source code as a regular string literal: in that case each \n is interpreted as a single character (a literal newline), and literal control characters are not allowed inside JSON strings. You can fix it by using a raw string literal instead (r'...'); use triple quotes, r'''...''', to avoid having to escape the " and ' quotes inside the string literal.
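A quick sketch of the difference; strict=False is a documented json.loads option that permits literal control characters inside strings:

import json

ok = json.loads(r'{"script": "line1\nline2"}')   # raw literal: \n reaches the parser as two chars
print(ok['script'])                              # prints two lines

bad = '{"script": "line1\nline2"}'               # regular literal: a real newline inside the string
# json.loads(bad) raises ValueError: Invalid control character
print(json.loads(bad, strict=False))             # strict=False accepts the raw control character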
Finding the Values of the Arrow Keys in Python: Why are they triples?
I am trying to find the values that my local system assigns to the arrow keys, specifically in Python. I am using the following script to do this: import sys,tty,termios class _Getch: def __call__(self): fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(sys.stdin.fileno()) ch = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch def get(): inkey = _Getch() while(1): k=inkey() if k!='':break print 'you pressed', ord(k) def main(): for i in range(0,25): get() if __name__=='__main__': main() Then I ran the script, and hit UP DOWN RIGHT LEFT, which gave me this output: $ python getchar.py you pressed 27 you pressed 91 you pressed 65 you pressed 27 you pressed 91 you pressed 66 you pressed 27 you pressed 91 you pressed 67 you pressed 27 you pressed 91 you pressed 68 This is anomalous because it suggests that the arrow keys are registered as some form of triple (27-91-6x) on my system, as each press of an arrow key takes up three instances of get(). By comparison, pressing a, b, c and CTRL-C gives: you pressed 97 you pressed 98 you pressed 99 you pressed 3 Can anyone explain to me why the values of my arrow keys seem to be stored as triples? Why is this so? Is this the same across all platforms? (I'm using Debian Linux.) If not, how should I go about storing the values of the arrow keys? The end goal here is that I'm trying to write a program which needs to correctly recognize arrow keys and perform a function depending on which arrow key was pressed.
I think I figured it out. I learned from here that each arrow key is represented by a unique ANSI escape code. Then I learned that the ANSI escape codes vary by system and application: in my terminal, running cat and pressing the up arrow gives ^[[A, in C it seems to be \033[A, etc. The latter part, the [A, remains the same, but the code for the preceding Escape can be written in hex (beginning with an x), octal (beginning with a 0), or decimal (no leading prefix). Then I opened the Python console and plugged in the triples I had previously received, trying to find their character values. As it turned out, chr(27) gave \x1b, chr(91) gave [, and calling chr on 65, 66, 67, 68 returned A, B, C, D respectively. Then it was clear: \x1b was the escape code! Then I noted that an arrow key, represented in ANSI as a triple, is of course represented as three characters, so I needed to amend my code to read in three characters at a time. Here is the result: import sys,tty,termios class _Getch: def __call__(self): fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(sys.stdin.fileno()) ch = sys.stdin.read(3) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch def get(): inkey = _Getch() while(1): k=inkey() if k!='':break if k=='\x1b[A': print "up" elif k=='\x1b[B': print "down" elif k=='\x1b[C': print "right" elif k=='\x1b[D': print "left" else: print "not an arrow key!" def main(): for i in range(0,20): get() if __name__=='__main__': main()
Locate first and last non NaN values in a Pandas DataFrame
I have a Pandas DataFrame indexed by date. There a number of columns but many columns are only populated for part of the time series. I'd like to find where the first and last values non-NaN values are located so that I can extracts the dates and see how long the time series is for a particular column. Could somebody point me in the right direction as to how I could go about doing something like this? Thanks in advance.
@behzad.nouri's solution worked perfectly to return the first and last non-NaN values using Series.first_valid_index and Series.last_valid_index, respectively.
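A small sketch of those two methods on a toy series (for a DataFrame you would apply them column by column):

import pandas as pd
import numpy as np

s = pd.Series([np.nan, np.nan, 1.0, 2.0, np.nan, 3.0, np.nan],
              index=pd.date_range('2014-01-01', periods=7))
print(s.first_valid_index())  # 2014-01-03 00:00:00
print(s.last_valid_index())   # 2014-01-06 00:00:00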
Fortran - Cython Workflow
I would like to set up a workflow to reach Fortran routines from Python using Cython on a Windows machine. After some searching I found: http://www.fortran90.org/src/best-practices.html#interfacing-with-c and http://stackoverflow.com/tags/fortran-iso-c-binding/info and some code pieces: Fortran side: pygfunc.h: void c_gfunc(double x, int n, int m, double *a, double *b, double *c); pygfunc.f90 module gfunc1_interface use iso_c_binding use gfunc_module implicit none contains subroutine c_gfunc(x, n, m, a, b, c) bind(c) real(C_FLOAT), intent(in), value :: x integer(C_INT), intent(in), value :: n, m type(C_PTR), intent(in), value :: a, b type(C_PTR), value :: c real(C_FLOAT), dimension(:), pointer :: fa, fb real(C_FLOAT), dimension(:,:), pointer :: fc call c_f_pointer(a, fa, (/ n /)) call c_f_pointer(b, fb, (/ m /)) call c_f_pointer(c, fc, (/ n, m /)) call gfunc(x, fa, fb, fc) end subroutine end module gfunc.f90 module gfunc_module use iso_c_binding implicit none contains subroutine gfunc(x, a, b, c) real, intent(in) :: x real, dimension(:), intent(in) :: a, b real, dimension(:,:), intent(out) :: c integer :: i, j, n, m n = size(a) m = size(b) do j=1,m do i=1,n c(i,j) = exp(-x * (a(i)**2 + b(j)**2)) end do end do end subroutine end module Cython side: pygfunc.pyx cimport numpy as cnp import numpy as np cdef extern from "./pygfunc.h": void c_gfunc(double, int, int, double *, double *, double *) cdef extern from "./pygfunc.h": pass def f(float x, a=-10.0, b=10.0, n=100): cdef cnp.ndarray ax, c ax = np.arange(a, b, (b-a)/float(n)) n = ax.shape[0] c = np.ndarray((n,n), dtype=np.float64, order='F') c_gfunc(x, n, n, <double *> ax.data, <double *> ax.data, <double *> c.data) return c and the setup file: from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext import numpy as np ext_modules = [Extension('pygfunc', ['pygfunc.pyx'])] setup( name = 'pygfunc', include_dirs = [np.get_include()], cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) All the files are in one directory. The Fortran files compile (using NAG Fortran Builder) and pygfunc compiles, but linking them throws a: error LNK2019: unresolved external symbol _c_gfunc referenced in function ___pyx_pf_7pygfunc_f and of course: fatal error LNK1120: 1 unresolved externals What am I missing? Or is this way to set up a workflow between Python and Fortran damned from the beginning? THX Martin
Here's a minimum working example. I used gfortran and wrote the compile commands directly into the setup file. gfunc.f90 module gfunc_module implicit none contains subroutine gfunc(x, n, m, a, b, c) double precision, intent(in) :: x integer, intent(in) :: n, m double precision, dimension(n), intent(in) :: a double precision, dimension(m), intent(in) :: b double precision, dimension(n, m), intent(out) :: c integer :: i, j do j=1,m do i=1,n c(i,j) = exp(-x * (a(i)**2 + b(j)**2)) end do end do end subroutine end module pygfunc.f90 module gfunc1_interface use iso_c_binding, only: c_double, c_int use gfunc_module, only: gfunc implicit none contains subroutine c_gfunc(x, n, m, a, b, c) bind(c) real(c_double), intent(in) :: x integer(c_int), intent(in) :: n, m real(c_double), dimension(n), intent(in) :: a real(c_double), dimension(m), intent(in) :: b real(c_double), dimension(n, m), intent(out) :: c call gfunc(x, n, m, a, b, c) end subroutine end module pygfunc.h extern void c_gfunc(double* x, int* n, int* m, double* a, double* b, double* c); pygfunc.pyx from numpy import linspace, empty from numpy cimport ndarray as ar cdef extern from "pygfunc.h": void c_gfunc(double* x, int* n, int* m, double* a, double* b, double* c) def f(double x, double a=-10.0, double b=10.0, int n=100): cdef: ar[double] ax = linspace(a, b, n) ar[double,ndim=2] c = empty((n, n), order='F') c_gfunc(&x, &n, &n, <double*> ax.data, <double*> ax.data, <double*> c.data) return c setup.py from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext # This line only needed if building with NumPy in Cython file. from numpy import get_include from os import system # compile the fortran modules without linking fortran_mod_comp = 'gfortran gfunc.f90 -c -o gfunc.o -O3 -fPIC' print fortran_mod_comp system(fortran_mod_comp) shared_obj_comp = 'gfortran pygfunc.f90 -c -o pygfunc.o -O3 -fPIC' print shared_obj_comp system(shared_obj_comp) ext_modules = [Extension(# module name: 'pygfunc', # source file: ['pygfunc.pyx'], # other compile args for gcc extra_compile_args=['-fPIC', '-O3'], # other files to link to extra_link_args=['gfunc.o', 'pygfunc.o'])] setup(name = 'pygfunc', cmdclass = {'build_ext': build_ext}, # Needed if building with NumPy. # This includes the NumPy headers when compiling. include_dirs = [get_include()], ext_modules = ext_modules) test.py # A script to verify correctness from pygfunc import f print f(1., a=-1., b=1., n=4) import numpy as np a = np.linspace(-1, 1, 4)**2 A, B = np.meshgrid(a, a, copy=False) print np.exp(-(A + B)) Most of the changes I made aren't terribly fundamental. Here are the important ones. You were mixing double precision and single precision floating point numbers. Don't do that. Use real (Fortran), float (Cython), and float32 (NumPy) together, and use double precision (Fortran), double (Cython), and float64 (NumPy) together. Try not to mix them unintentionally. I assumed you wanted doubles in my example. You should pass all variables to Fortran as pointers; Fortran does not match the C calling convention in that regard. The iso_c_binding module in Fortran only matches the C naming convention. Pass arrays as pointers with their size as a separate value. There may be other ways of doing this, but I don't know any. I also added some stuff in the setup file to show where you can add some of the more useful extra arguments when building. To compile, run python setup.py build_ext --inplace. To verify that it works, run the test script.
Here is the example shown on fortran90.org: mesh_exp Here are two more that I put together some time ago: ftridiag, fssor I'm certainly not an expert at this, but these examples may be a good place to start.
Named colors in matplotlib
What named colors are available in matplotlib for use in plots? I can find a list on the matplotlib documentation that claims that these are the only names: b: blue g: green r: red c: cyan m: magenta y: yellow k: black w: white However, I've found that these colors can also be used, at least in this context: scatter(X,Y, color='red') scatter(X,Y, color='orange') scatter(X,Y, color='darkgreen') but these are not on the above list. Does anyone know an exhaustive list of the named colors that are available?
There is a complete dictionary in colors.py in the matplotlib directory. import matplotlib for name, hex in matplotlib.colors.cnames.iteritems(): print(name, hex) This is the complete dictionary: cnames = { 'aliceblue': '#F0F8FF', 'antiquewhite': '#FAEBD7', 'aqua': '#00FFFF', 'aquamarine': '#7FFFD4', 'azure': '#F0FFFF', 'beige': '#F5F5DC', 'bisque': '#FFE4C4', 'black': '#000000', 'blanchedalmond': '#FFEBCD', 'blue': '#0000FF', 'blueviolet': '#8A2BE2', 'brown': '#A52A2A', 'burlywood': '#DEB887', 'cadetblue': '#5F9EA0', 'chartreuse': '#7FFF00', 'chocolate': '#D2691E', 'coral': '#FF7F50', 'cornflowerblue': '#6495ED', 'cornsilk': '#FFF8DC', 'crimson': '#DC143C', 'cyan': '#00FFFF', 'darkblue': '#00008B', 'darkcyan': '#008B8B', 'darkgoldenrod': '#B8860B', 'darkgray': '#A9A9A9', 'darkgreen': '#006400', 'darkkhaki': '#BDB76B', 'darkmagenta': '#8B008B', 'darkolivegreen': '#556B2F', 'darkorange': '#FF8C00', 'darkorchid': '#9932CC', 'darkred': '#8B0000', 'darksalmon': '#E9967A', 'darkseagreen': '#8FBC8F', 'darkslateblue': '#483D8B', 'darkslategray': '#2F4F4F', 'darkturquoise': '#00CED1', 'darkviolet': '#9400D3', 'deeppink': '#FF1493', 'deepskyblue': '#00BFFF', 'dimgray': '#696969', 'dodgerblue': '#1E90FF', 'firebrick': '#B22222', 'floralwhite': '#FFFAF0', 'forestgreen': '#228B22', 'fuchsia': '#FF00FF', 'gainsboro': '#DCDCDC', 'ghostwhite': '#F8F8FF', 'gold': '#FFD700', 'goldenrod': '#DAA520', 'gray': '#808080', 'green': '#008000', 'greenyellow': '#ADFF2F', 'honeydew': '#F0FFF0', 'hotpink': '#FF69B4', 'indianred': '#CD5C5C', 'indigo': '#4B0082', 'ivory': '#FFFFF0', 'khaki': '#F0E68C', 'lavender': '#E6E6FA', 'lavenderblush': '#FFF0F5', 'lawngreen': '#7CFC00', 'lemonchiffon': '#FFFACD', 'lightblue': '#ADD8E6', 'lightcoral': '#F08080', 'lightcyan': '#E0FFFF', 'lightgoldenrodyellow': '#FAFAD2', 'lightgreen': '#90EE90', 'lightgray': '#D3D3D3', 'lightpink': '#FFB6C1', 'lightsalmon': '#FFA07A', 'lightseagreen': '#20B2AA', 'lightskyblue': '#87CEFA', 'lightslategray': '#778899', 'lightsteelblue': '#B0C4DE', 'lightyellow': '#FFFFE0', 'lime': '#00FF00', 'limegreen': '#32CD32', 'linen': '#FAF0E6', 'magenta': '#FF00FF', 'maroon': '#800000', 'mediumaquamarine': '#66CDAA', 'mediumblue': '#0000CD', 'mediumorchid': '#BA55D3', 'mediumpurple': '#9370DB', 'mediumseagreen': '#3CB371', 'mediumslateblue': '#7B68EE', 'mediumspringgreen': '#00FA9A', 'mediumturquoise': '#48D1CC', 'mediumvioletred': '#C71585', 'midnightblue': '#191970', 'mintcream': '#F5FFFA', 'mistyrose': '#FFE4E1', 'moccasin': '#FFE4B5', 'navajowhite': '#FFDEAD', 'navy': '#000080', 'oldlace': '#FDF5E6', 'olive': '#808000', 'olivedrab': '#6B8E23', 'orange': '#FFA500', 'orangered': '#FF4500', 'orchid': '#DA70D6', 'palegoldenrod': '#EEE8AA', 'palegreen': '#98FB98', 'paleturquoise': '#AFEEEE', 'palevioletred': '#DB7093', 'papayawhip': '#FFEFD5', 'peachpuff': '#FFDAB9', 'peru': '#CD853F', 'pink': '#FFC0CB', 'plum': '#DDA0DD', 'powderblue': '#B0E0E6', 'purple': '#800080', 'red': '#FF0000', 'rosybrown': '#BC8F8F', 'royalblue': '#4169E1', 'saddlebrown': '#8B4513', 'salmon': '#FA8072', 'sandybrown': '#FAA460', 'seagreen': '#2E8B57', 'seashell': '#FFF5EE', 'sienna': '#A0522D', 'silver': '#C0C0C0', 'skyblue': '#87CEEB', 'slateblue': '#6A5ACD', 'slategray': '#708090', 'snow': '#FFFAFA', 'springgreen': '#00FF7F', 'steelblue': '#4682B4', 'tan': '#D2B48C', 'teal': '#008080', 'thistle': '#D8BFD8', 'tomato': '#FF6347', 'turquoise': '#40E0D0', 'violet': '#EE82EE', 'wheat': '#F5DEB3', 'white': '#FFFFFF', 'whitesmoke': '#F5F5F5', 'yellow': '#FFFF00', 'yellowgreen': '#9ACD32'} 
You could plot them like this: import matplotlib.pyplot as plt import matplotlib.patches as patches import matplotlib.colors as colors import math fig = plt.figure() ax = fig.add_subplot(111) ratio = 1.0 / 3.0 count = math.ceil(math.sqrt(len(colors.cnames))) x_count = count * ratio y_count = count / ratio x = 0 y = 0 w = 1 / x_count h = 1 / y_count for c in colors.cnames: pos = (x / x_count, y / y_count) ax.add_patch(patches.Rectangle(pos, w, h, color=c)) ax.annotate(c, xy=pos) if y >= y_count-1: x += 1 y = 0 else: y += 1 plt.show()
Get the first element of each tuple in a list in Python
An SQL query gives me a list of tuples, like this: [(elt1, elt2), (elt1, elt2), (elt1, elt2), (elt1, elt2), (elt1, elt2), ...] I'd like to have all the first elements of each tuple. Right now I use this: rows = cur.fetchall() res_list = [] for row in rows: res_list += [row[0]] But I think there might be a better syntax to do it. Do you know a better way?
Use a list comprehension: res_list = [x[0] for x in rows] Below is a demonstration: >>> rows = [(1, 2), (3, 4), (5, 6)] >>> [x[0] for x in rows] [1, 3, 5] >>> Alternately, you could use unpacking instead of x[0]: res_list = [x for x,_ in rows] Below is a demonstration: >>> lst = [(1, 2), (3, 4), (5, 6)] >>> [x for x,_ in lst] [1, 3, 5] >>> Both methods practically do the same thing, so you can choose whichever you like.
Can't install python mysql library on Mac Mavericks
It was working like a charm before the update from Mountain Lion. After the update it is broken and I cannot get the environment up again. Does anybody know how to fix this? The error is bolded, below. fedorius@this:~$ pip install mysql-python Downloading/unpacking mysql-python Downloading MySQL-python-1.2.5.zip (108kB): 108kB downloaded Running setup.py (path:/private/var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip_build_fedorius/mysql-python/setup.py) egg_info for package mysql-python Installing collected packages: mysql-python Running setup.py install for mysql-python building '_mysql' extension cc -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.9-intel-2.7/_mysql.o -Os -g -fno-strict-aliasing -arch x86_64 clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future error: command 'cc' failed with exit status 1 Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip_build_fedorius/mysql-python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip-_yi6sy-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 copying _mysql_exceptions.py -> build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/__init__.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/converters.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/connections.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/cursors.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/release.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb copying MySQLdb/times.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb creating build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/__init__.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/CR.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/ER.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/REFRESH.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.9-intel-2.7/MySQLdb/constants running build_ext building '_mysql' extension creating build/temp.macosx-10.9-intel-2.7 cc -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 
-I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.9-intel-2.7/_mysql.o -Os -g -fno-strict-aliasing -arch x86_64 **clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future error: command 'cc' failed with exit status 1** ---------------------------------------- Cleaning up... Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip_build_fedorius/mysql-python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip-_yi6sy-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/pip_build_fedorius/mysql-python Storing debug log for failure in /var/folders/21/zjvwzn891jnf4rnp526y13200000gn/T/tmp5QBn55 UPDATE: As suggested, I've added export CFLAGS=-Qunused-arguments export CPPFLAGS=-Qunused-arguments But it changed the error to error: /Library/Python/2.7/site-packages/_mysql.so: Permission denied I just chmoded this directory to allow writing and it worked :) This is due to mixing macports, easy_install and pip... shame on me.
The problem is due to changes introduced in Xcode 5.1 and due to the way the Apple-supplied system Python 2.7 is built. Try adding these environment variable values before running pip: export CFLAGS=-Qunused-arguments export CPPFLAGS=-Qunused-arguments See clang error: unknown argument: '-mno-fused-madd' (python package installation failure) for more information. UPDATE [2014-05-16]: As expected, Apple has fixed this problem with updated system Pythons (2.7, 2.6, and 2.5) in OS X 10.9.3 so the workaround is no longer necessary when using the latest Mavericks and Xcode 5.1+. However, as of now, the workaround is still required for OS X 10.8.x (Mountain Lion, currently 10.8.5) if you are using Xcode 5.1+ there.
Best way to initialize and fill a numpy array?
I want to initialize and fill a numpy array. What is the best way? This works as I expect: >>> import numpy as np >>> np.empty(3) array([ -1.28822975e-231, -1.73060252e-077, 2.23946712e-314]) But this doesn't: >>> np.empty(3).fill(np.nan) >>> Nothing? >>> type(np.empty(3)) <type 'numpy.ndarray'> It seems to me that the np.empty() call is returning the correct type of object, so I don't understand why .fill() is not working? Assigning the result of np.empty() first works fine: >>> a = np.empty(3) >>> a.fill(np.nan) >>> a array([ nan, nan, nan]) Why do I need to assign to a variable in order to use np.fill()? Am I missing a better alternative?
You could also try: In [79]: np.full(3, np.nan) Out[79]: array([ nan, nan, nan]) The pertinent doc: Definition: np.full(shape, fill_value, dtype=None, order='C') Docstring: Return a new array of given shape and type, filled with `fill_value`. Although I think this might be only available in numpy 1.8+
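Incidentally, the reason the chained version in the question appears to do nothing is that fill() works in-place and returns None, so np.empty(3).fill(np.nan) builds an array, fills it, and immediately discards it. On numpy versions before 1.8 (where np.full may be unavailable), broadcasting assignment gives the same result without keeping a separate name around for the fill step:

import numpy as np

a = np.empty(3)
a[:] = np.nan   # broadcast-assign into the existing array
print(a)        # [ nan  nan  nan]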
Python Gensim: how to calculate document similarity using the LDA model?
I've got a trained LDA model and I want to calculate the similarity score between two documents from the corpus I trained my model on. After studying all the Gensim tutorials and functions, I still can't get my head around it. Can somebody give me a hint? Thanks!
Depends what similarity metric you want to use. Cosine similarity is universally useful & built-in: sim = gensim.matutils.cossim(vec_lda1, vec_lda2) Hellinger distance is useful for similarity between probability distributions (such as LDA topics): import numpy as np dense1 = gensim.matutils.sparse2full(lda_vec1, lda.num_topics) dense2 = gensim.matutils.sparse2full(lda_vec2, lda.num_topics) sim = np.sqrt(0.5 * ((np.sqrt(dense1) - np.sqrt(dense2))**2).sum())
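Both snippets assume you already have the two documents as LDA topic vectors. If you have them as bag-of-words vectors (corpus here is assumed to be the one the model was trained on), you get the topic vectors by passing them through the trained model:

# corpus is assumed to be the bag-of-words corpus used to train the model
vec_lda1 = lda[corpus[0]]  # sparse list of (topic_id, probability) pairs
vec_lda2 = lda[corpus[1]]
sim = gensim.matutils.cossim(vec_lda1, vec_lda2)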
Django Rest Framework and JSONField
Given a Django model with a JSONField, what is the correct way of serializing and deserializing it using Django Rest Framework? I've already tried creating a custom serializers.WritableField and overriding to_native and from_native: from json_field.fields import JSONEncoder, JSONDecoder from rest_framework import serializers class JSONFieldSerializer(serializers.WritableField): def to_native(self, obj): return json.dumps(obj, cls = JSONEncoder) def from_native(self, data): return json.loads(data, cls = JSONDecoder) But when I try updating the model using partial=True, all the floats in the JSONField objects become strings.
If you're using Django Rest Framework >= 3.3, then the JSONField serializer is now included. This is now the correct way. If you're using Django Rest Framework < 3.0, then see gzerone's answer. If you're using DRF 3.0 - 3.2 AND you can't upgrade AND you don't need to serialize binary data, then follow these instructions. First declare a field class: from rest_framework import serializers class JSONSerializerField(serializers.Field): """ Serializer for JSONField -- required to make field writable""" def to_internal_value(self, data): return data def to_representation(self, value): return value And then add in the field into the model like class MySerializer(serializers.ModelSerializer): json_data = JSONSerializerField() And, if you do need to serialize binary data, you can always the copy official release code
In numpy, calculating a matrix where each cell contains the product of all the other entries in that row
I have a matrix A = np.array([[0.2, 0.4, 0.6], [0.5, 0.5, 0.5], [0.6, 0.4, 0.2]]) I want a new matrix, where the value of the entry in row i and column j is the product of all the entries of the ith row of A, except for the cell of that row in the jth column. array([[ 0.24, 0.12, 0.08], [ 0.25, 0.25, 0.25], [ 0.08, 0.12, 0.24]]) The solution that first occurred to me was np.repeat(np.prod(A, 1, keepdims = True), 3, axis = 1) / A But this only works so long as no entries have values zero. Any thoughts? Thank you! Edit: I have developed B = np.zeros((3, 3)) for i in range(3): for j in range(3): B[i, j] = np.prod(A[i, [x for x in range(3) if x != j]]) but surely there is a more elegant way to accomplish this, which makes use of numpy's efficient C backend instead of inefficient Python loops?
If you're willing to tolerate a single loop: B = np.empty_like(A) for col in range(A.shape[1]): B[:,col] = np.prod(np.delete(A, col, 1), 1) That computes what you need, a single column at a time. It is not as efficient as theoretically possible because np.delete() creates a copy; if you care a lot about memory allocation, use a mask instead: B = np.empty_like(A) mask = np.ones(A.shape[1], dtype=bool) for col in range(A.shape[1]): mask[col] = False B[:,col] = np.prod(A[:,mask], 1) mask[col] = True
How to inspect variables after Traceback?
My Python script is crashing. To debug it, I ran it in interactive mode python -i example.py Traceback (most recent call last): File "example.py", line 5, in <module> main() File "example.py", line 3, in main message[20] IndexError: string index out of range At this point, I would like to inspect the variable message. I tried >>> message Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'message' is not defined Alas message is not in scope (though main is). That's frustrating. How can I inspect the variable? Is there a more useful version of python -i that keeps what's in scope at the crash (rather than the top level)? Code used for example.py above. Needless to say, this is a simplification. def main(): message = "hello world" message[20] main()
To drop to a debugger only if there is an exception you could define a custom excepthook: import sys def excepthook(type_, value, tb): import traceback import pdb traceback.print_exception(type_, value, tb) pdb.post_mortem(tb) sys.excepthook = excepthook def main(): message = "hello world" message[20] main() Running the script drops you into pdb and into the frame which raised the exception: % script.py Traceback (most recent call last): File "/home/unutbu/pybin/script.py", line 16, in <module> main() File "/home/unutbu/pybin/script.py", line 14, in main message[20] IndexError: string index out of range > /home/unutbu/pybin/script.py(14)main() -> message[20] (Pdb) p message 'hello world' (Pdb) p message[20] *** IndexError: IndexError('string index out of range',) (Pdb) p len(message) 11 If defining the excepthook seems like too much code, you could tuck it away in a utility module, such as utils_debug.py: import sys def enable_pdb(): def excepthook(type_, value, tb): import traceback import pdb traceback.print_exception(type_, value, tb) pdb.post_mortem(tb) sys.excepthook = excepthook and then you would only need to add import utils_debug as UDBG UDBG.enable_pdb() to your script.py. Or, if you are using IPython, you could use the %pdb magic function (which drops you into ipdb when there is an exception). If inspecting a variable at the pdb prompt gives you a NameError, it may be defined in a different frame than the one pdb is currently in: try using bt (backtrace) to inspect the frame stack, and u (up) to go up to the frame where the variable is defined.
What is the difference between sorted(list) and list.sort() in Python?
list.sort() sorts the list and saves the sorted list, while sorted(list) returns a sorted list without changing the original list. But when should I use which? Which is faster, and how much faster? Can a list's original positions be retrieved after list.sort()?
sorted() returns a new sorted list, leaving the original list unaffected. list.sort() sorts the list in-place, mutating the list itself, and returns None (like all in-place operations). sorted() works on any iterable, not just lists: strings, tuples, dictionaries (you'll get the keys), generators, etc., returning a list containing all the elements, sorted. Use list.sort() when you want to mutate the list, and sorted() when you want a new sorted object back. Use sorted() when you want to sort something that is an iterable, not a list yet. For lists, list.sort() is faster than sorted() because it doesn't have to create a copy. For any other iterable, you have no choice. No, you cannot retrieve the original positions. Once you have called list.sort(), the original order is gone.
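A quick demonstration of the difference:

>>> a = [3, 1, 2]
>>> sorted(a)        # new list, a untouched
[1, 2, 3]
>>> a
[3, 1, 2]
>>> a.sort()         # in-place, returns None
>>> a
[1, 2, 3]
>>> sorted('bca')    # works on any iterable, always returns a list
['a', 'b', 'c']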
Predicting how long an scikit-learn classification will take to run
Is there a way to predict how long it will take to run a classifier from sci-kit learn based on the parameters and dataset? I know, pretty meta, right? Some classifiers/parameter combinations are quite fast, and some take so long that I eventually just kill the process. I'd like a way to estimate in advance how long it will take. Alternatively, I'd accept some pointers on how to set common parameters to reduce the run time.
There are very specific classes of classifier or regressor that directly report remaining time or progress of your algorithm (number of iterations etc.). Most of this can be turned on by passing verbose=2 (any high number > 1) to the constructor of individual models. Note: this behavior is according to sklearn-0.14. Earlier versions have a bit different verbose output (still useful though). The best examples of this are ensemble.RandomForestClassifier or ensemble.GradientBoostingClassifier, which print the number of trees built so far and the remaining time. clf = ensemble.GradientBoostingClassifier(verbose=3) clf.fit(X, y) Out: Iter Train Loss Remaining Time 1 0.0769 0.10s ... Or clf = ensemble.RandomForestClassifier(verbose=3) clf.fit(X, y) Out: building tree 1 of 100 ... This progress information is fairly useful for estimating the total time. Then there are other models like SVMs that print the number of optimization iterations completed, but do not directly report the remaining time. clf = svm.SVC(verbose=2) clf.fit(X, y) Out: * optimization finished, #iter = 1 obj = -1.802585, rho = 0.000000 nSV = 2, nBSV = 2 ... Models like linear models don't provide such diagnostic information as far as I know. Check this thread to know more about what the verbosity levels mean: scikit-learn fit remaining time
Python built-in function "compile". What is it used for?
I came across the built-in function compile today. Though I read the documentation, I still do not understand its usage or where it is applicable. Please can anyone explain the use of this function with an example? I will really appreciate examples. From the documentation, the function takes some parameters as shown below. compile(source, filename, mode[, flags[, dont_inherit]])
It is not that commonly used. It is used when you have Python source code in string form, and you want to make it into a Python code object that you can keep and use. Here's a trivial example: >>> codeobj = compile('x = 2\nprint "X is", x', 'fakemodule', 'exec') >>> exec(codeobj) X is 2 Basically, compile converts a string into a code object, which you can later call exec on to run the source code in the string. (This is for "exec" mode; the "eval" mode allows use of eval instead, if the string contains code for a single expression.) This is not a common task, which is why you may never run across a need for it. The main use for it is in metaprogramming or embedding situations. For instance, if you have a Python program that allows users to script its behavior with custom Python code, you might use compile and exec to store and execute these user-defined scripts. Another reason compile is rarely used is that, like exec, eval, and their ilk, compile is a potential security hole. If you take user code in string form and compile it and later exec it, you could be running unsafe code. (For instance, imagine that in my example above the code was formatYourHardDrive() instead of print x.)
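And the "eval" mode counterpart, for a string holding a single expression:

>>> expr = compile('x * 2', '<string>', 'eval')
>>> x = 21
>>> eval(expr)   # evaluates the precompiled expression against current globals
42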
Configure pep8.py command line options in pycharm
Can I configure the command line arguments that PyCharm sends to pep8.py when it does its automatic PEP8 style checking? I would like to do something like $ pep8 --ignore=E231 foo.py However, in PyCharm under Project Settings -> Inspections I only see options to enable/disable PEP8 style checks in aggregate, but no option to enable/disable specific PEP8 violations.
Found the solution here: http://iambigblind.blogspot.de/2013/02/configuring-pep8py-support-in-pycharm-27.html Just add the error code you want suppressed (E231 in your case; the blog post uses E501) to the PEP8 inspection's list of ignored errors, and the warning will go away in PyCharm 3 (and 4).
demystify Flask app.secret_key
If app.secret_key is not set, the Flask framework will not allow you to set or access the session dictionary. This is all that the flask user guide has to say on the subject. I am very new to web development and I have no idea how/why any security stuff works. I would like to understand what flask is doing under the hood. Why does flask force us to set this secret_key property? How does flask use the secret_key property?
Anything that requires encryption (for safe-keeping against tampering by attackers) requires the secret key to be set. For just Flask itself, that 'anything' is the Session object, but other extensions can make use of the same secret. secret_key is merely the value set for the SECRET_KEY configuration key, or you can set it directly. The Sessions section in the Quickstart has good, sane advice on what kind of server-side secret you should set. Encryption relies on secrets; if you didn't set a server-side secret for the encryption to use, everyone would be able to break your encryption; it's like the password to your computer. The secret plus the data-to-sign are used to create a signature string, a hard-to-recreate value using a cryptographic hashing algorithm; only if you have the exact same secret and the original data can you recreate this value, letting Flask detect if anything has been altered without permission. Since the secret is never included with data Flask sends to the client, a client cannot tamper with session data and hope to produce a new, valid signature. Flask uses the itsdangerous library to do all the hard work; sessions use the itsdangerous.URLSafeTimedSerializer class with a customized JSON serializer.
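You can watch the signing mechanism in isolation with itsdangerous itself; this sketch uses a toy secret standing in for app.secret_key (never hard-code a real one like this):

from itsdangerous import URLSafeTimedSerializer, BadSignature

s = URLSafeTimedSerializer('toy-secret')   # stand-in for app.secret_key
token = s.dumps({'user': 'alice'})         # signed, URL-safe string
print(s.loads(token))                      # {'user': 'alice'}
try:
    s.loads(token + 'x')                   # tampered token: signature no longer matches
except BadSignature:
    print('tampering detected')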
get list of pandas dataframe columns based on data type
If I have a dataframe with the following columns: 1. NAME object 2. On_Time object 3. On_Budget object 4. %actual_hr float64 5. Baseline Start Date datetime64[ns] 6. Forecast Start Date datetime64[ns] I would like to be able to say: here is a dataframe, give me a list of the columns which are of type Object or of type DateTime? I have a function which converts numbers (Float64) to two decimal places, and I would like to use this list of dataframe columns, of a particular type, and run it through this function to convert them all to 2dp. Maybe: For c in col_list: if c.dtype = "Something" list[] List.append(c)?
If you want a list of columns of a certain type, you can use groupby: >>> df = pd.DataFrame([[1, 2.3456, 'c', 'd', 78]], columns=list("ABCDE")) >>> df A B C D E 0 1 2.3456 c d 78 [1 rows x 5 columns] >>> df.dtypes A int64 B float64 C object D object E int64 dtype: object >>> g = df.columns.to_series().groupby(df.dtypes).groups >>> g {dtype('int64'): ['A', 'E'], dtype('float64'): ['B'], dtype('O'): ['C', 'D']} >>> {k.name: v for k, v in g.items()} {'object': ['C', 'D'], 'int64': ['A', 'E'], 'float64': ['B']}
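Newer pandas versions (0.14.1+) also have a dedicated method that returns the matching columns directly:

>>> df.select_dtypes(include=['object']).columns.tolist()
['C', 'D']
>>> df.select_dtypes(include=['float64']).columns.tolist()
['B']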
Plot mean and standard deviation
I have several values of a function at different x points. I want to plot the mean and std in python, like the answer of this SO question. I know this must be easy using matplotlib, but I have no idea of the function's name that can do that. Does anyone know it?
plt.errorbar can be used to plot x, y, error data (as opposed to the usual plt.plot) import matplotlib.pyplot as plt import numpy as np x = np.array([1, 2, 3, 4, 5]) y = np.power(x, 2) # Effectively y = x**2 e = np.array([1.5, 2.6, 3.7, 4.6, 5.5]) plt.errorbar(x, y, e, linestyle='None', marker='^') plt.show() plt.errorbar accepts the same arguments as plt.plot with additional yerr and xerr which default to None (i.e. if you leave them blank it will act as plt.plot).
How can I plot separate Pandas DataFrames as subplots?
I have a few Pandas DataFrames sharing the same value scale, but having different columns and indices. When invoking df.plot(), I get separate plot images. What I really want is to have them all in the same figure as subplots, but I'm unfortunately failing to come up with a solution for how to do that, and would highly appreciate some help.
You can manually create the subplots with matplotlib, and then plot the dataframes on a specific subplot using the ax keyword. For example for 4 subplots (2x2): import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=2, ncols=2) df1.plot(ax=axes[0,0]) df2.plot(ax=axes[0,1]) ... Here axes is an array which holds the different subplot axes, and you can access one just by indexing axes. If you want a shared x-axis, then you can provide sharex=True to plt.subplots.
Efficiently select rows that match one of several values in Pandas DataFrame
Problem Given data in a Pandas DataFrame like the following: Name Amount --------------- Alice 100 Bob 50 Charlie 200 Alice 30 Charlie 10 I want to select all rows where the Name is one of several values in a collection {Alice, Bob} Name Amount --------------- Alice 100 Bob 50 Alice 30 Question What is an efficient way to do this in Pandas? Options as I see them Loop through rows, handling the logic with Python Select and merge many statements like the following merge(df[df.name = specific_name] for specific_name in names) # something like this Perform some sort of join What are the performance trade-offs here? When is one solution better than the others? What solutions am I missing? While the example above uses strings my actual job uses matches on 10-100 integers over millions of rows and so fast NumPy operations may be relevant.
You can use the isin Series method: In [11]: df['Name'].isin(['Alice', 'Bob']) Out[11]: 0 True 1 True 2 False 3 True 4 False Name: Name, dtype: bool In [12]: df[df.Name.isin(['Alice', 'Bob'])] Out[12]: Name Amount 0 Alice 100 1 Bob 50 3 Alice 30
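For the millions-of-rows, small-set-of-integers case mentioned in the question, you can also run the membership test directly on the underlying NumPy array; isin essentially does this internally, so treat it as a variant to benchmark rather than a guaranteed speedup:

import numpy as np

mask = np.in1d(df['Name'].values, ['Alice', 'Bob'])
result = df[mask]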
multiprocessing in python - sharing large object (e.g. pandas dataframe) between multiple processes
I am using Python multiprocessing, more precisely from multiprocessing import Pool p = Pool(15) args = [(df, config1), (df, config2), ...] #list of args - df is the same object in each tuple res = p.map_async(func, args) #func is some arbitrary function p.close() p.join() This approach has a huge memory consumption; eating up pretty much all my RAM (at which point it gets extremely slow, hence making the multiprocessing pretty useless). I assume the problem is that df is a huge object (a large pandas dataframe) and it gets copied for each process. I have tried using multiprocessing.Value to share the dataframe without copying shared_df = multiprocessing.Value(pandas.DataFrame, df) args = [(shared_df, config1), (shared_df, config2), ...] (as suggested in Python multiprocessing shared memory), but that gives me TypeError: this type has no size (same as Sharing a complex object between Python processes?, to which I unfortunately don't understand the answer). I am using multiprocessing for the first time and maybe my understanding is not (yet) good enough. Is multiprocessing.Value actually even the right thing to use in this case? I have seen other suggestions (e.g. queue) but am by now a bit confused. What options are there to share memory, and which one would be best in this case?
The first argument to Value is typecode_or_type. That is defined as: typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type. So, you simply cannot put a pandas dataframe in a Value, it has to be a ctypes type. You could instead use a multiprocessing.Manager to serve your singleton dataframe instance to all of your processes. There's a few different ways to end up in the same place - probably the easiest is to just plop your dataframe into the manager's Namespace. from multiprocessing import Manager mgr = Manager() ns = mgr.Namespace() ns.df = my_dataframe # now just give your processes access to ns, i.e. most simply # p = Process(target=worker, args=(ns, work_unit)) Now your dataframe instance is accessible to any process that gets passed a reference to the Manager. Or just pass a reference to the Namespace, it's cleaner. One thing I didn't/won't cover is events and signaling - if your processes need to wait for others to finish executing, you'll need to add that in. Here is a page with some Event examples which also cover with a bit more detail how to use the manager's Namespace. (note that none of this addresses whether multiprocessing is going to result in tangible performance benefits, this is just giving you the tools to explore that question)
Python code works, but eclipse shows error - Syntax error while detecting tuple
I am new to Python. I use Python 3.3 in Eclipse Kepler. This is my code snippet: f = Fibonacci(0,1) for r in f.series(): if r > 100: break print(r, end=' ') At the line print(r, end=' '), Eclipse reports a syntax error - Syntax error while detecting tuple. However, the program runs perfectly. Why does this happen and how do I fix the error?
You need to specify the correct Grammar Version in Eclipse. See here: print function in Python3 Is Grammar Version 3.3 in your setup? Steps - Project > Properties > Python Interpreter/Grammar. You might have to restart Eclipse to see the changes.
GridSpec with shared axes in Python
This solution to another thread suggests using gridspec.GridSpec instead of plt.subplots. However, when I share axes between subplots, I usually use a syntax like the following fig, axes = plt.subplots(N, 1, sharex='col', sharey=True, figsize=(3,18)) How can I specify sharex and sharey when I use GridSpec ?
First off, there's an easier workaround for your original problem, as long as you're okay with being slightly imprecise. Just reset the top extent of the subplots to the default after calling tight_layout: fig, axes = plt.subplots(ncols=2, sharey=True) plt.setp(axes, title='Test') fig.suptitle('An overall title', size=20) fig.tight_layout() fig.subplots_adjust(top=0.9) plt.show() However, to answer your question, you'll need to create the subplots at a slightly lower level to use gridspec. If you want to replicate the hiding of shared axes like subplots does, you'll need to do that manually, by using the sharey argument to Figure.add_subplot and hiding the duplicated ticks with plt.setp(ax.get_yticklabels(), visible=False). As an example: import matplotlib.pyplot as plt from matplotlib import gridspec fig = plt.figure() gs = gridspec.GridSpec(1,2) ax1 = fig.add_subplot(gs[0]) ax2 = fig.add_subplot(gs[1], sharey=ax1) plt.setp(ax2.get_yticklabels(), visible=False) plt.setp([ax1, ax2], title='Test') fig.suptitle('An overall title', size=20) gs.tight_layout(fig, rect=[0, 0, 1, 0.97]) plt.show()
Python "from [dot]package import ..." syntax
Looking through a Django tutorial I saw the following syntax: from .models import Recipe, Ingredient, Instruction Can someone explain how .models works / what it does exactly? Usually I have: from myapp.models import How does it work without the myapp part in front of .models?
Possible duplicate: What does a . in an import statement in Python mean? The . is a shortcut that tells Python to search in the current package before the rest of the PYTHONPATH. So, if a same-named models module exists somewhere else on your PYTHONPATH, it won't be loaded.
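For illustration, suppose a hypothetical package layout like:

myapp/
    __init__.py
    models.py
    views.py

Inside myapp/views.py, these two imports then refer to the same module:

from .models import Recipe      # explicit relative import
from myapp.models import Recipe  # absolute import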
cpython vs cython vs numpy array performance
I am doing some performance tests on a variant of the prime numbers generator from http://docs.cython.org/src/tutorial/numpy.html. The performance measures below are with kmax=1000. Pure Python implementation, running in CPython: 0.15s Pure Python implementation, running in Cython: 0.07s def primes(kmax): p = [] k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i = i + 1 if i == k: p.append(n) k = k + 1 n = n + 1 return p Pure Python+Numpy implementation, running in CPython: 1.25s import numpy def primes(kmax): p = numpy.empty(kmax, dtype=int) k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i = i + 1 if i == k: p[k] = n k = k + 1 n = n + 1 return p Cython implementation using int*: 0.003s from libc.stdlib cimport malloc, free def primes(int kmax): cdef int n, k, i cdef int *p = <int *>malloc(kmax * sizeof(int)) result = [] k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i = i + 1 if i == k: p[k] = n k = k + 1 result.append(n) n = n + 1 free(p) return result The above performs great but looks horrible, as it holds two copies of the data... so I tried reimplementing it: Cython + Numpy: 1.01s import numpy as np cimport numpy as np cimport cython DTYPE = np.int ctypedef np.int_t DTYPE_t @cython.boundscheck(False) def primes(DTYPE_t kmax): cdef DTYPE_t n, k, i cdef np.ndarray p = np.empty(kmax, dtype=DTYPE) k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i = i + 1 if i == k: p[k] = n k = k + 1 n = n + 1 return p Questions: why is the numpy array so much slower than a python list, when running on CPython? what did I do wrong in the Cython+Numpy implementation? Cython is obviously NOT treating the numpy array as an int[] as it should. how do I cast a numpy array to an int*? The below doesn't work: cdef numpy.nparray a = numpy.zeros(100, dtype=int) cdef int * p = <int *>a.data
Adding cdef DTYPE_t [:] p_view = p and using p_view instead of p in the calculations reduced the runtime from 580 ms down to 2.8 ms for me - about the same runtime as the implementation using int*, and about the best you can expect from this approach. import numpy as np cimport numpy as np cimport cython DTYPE = np.int ctypedef np.int_t DTYPE_t @cython.boundscheck(False) def primes(DTYPE_t kmax): cdef DTYPE_t n, k, i cdef np.ndarray p = np.empty(kmax, dtype=DTYPE) cdef DTYPE_t [:] p_view = p k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p_view[i] != 0: i = i + 1 if i == k: p_view[k] = n k = k + 1 n = n + 1 return p
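Because the module cimports numpy, building it needs numpy's headers on the include path. A minimal setup.py sketch (the file name primes_cy.pyx is hypothetical; build with python setup.py build_ext --inplace):

from distutils.core import setup
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize("primes_cy.pyx"),
    include_dirs=[numpy.get_include()],  # needed because of `cimport numpy`
)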
Python, remove all non-alphabet chars from string
I am writing a Python MapReduce word count program. The problem is that there are many non-alphabet chars strewn about in the data. I found this post, Stripping everything but alphanumeric chars from a string in Python, which shows a nice solution using regex, but I am not sure how to implement it: def mapfn(k, v): print v import re, string pattern = re.compile('[\W_]+') v = pattern.match(v) print v for w in v.split(): yield w, 1 I'm afraid I am not sure how to use the re library, or even regex for that matter. I am not sure how to apply the regex pattern to the incoming string v (a line of a book) to retrieve a new line without any non-alphanumeric chars. Suggestions?
Use re.sub: import re regex = re.compile('[^a-zA-Z]') # First parameter is the replacement, second parameter is your input string regex.sub('', 'ab3d*E') # Out: 'abdE' Alternatively, if you only want to remove a certain set of characters (as an apostrophe might be okay in your input...): regex = re.compile('[,\.!?]') # etc.
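Tying that back to the mapfn from the question, one possible sketch - note it keeps whitespace in the character class so that stripping the junk doesn't merge adjacent words:

import re

def mapfn(k, v):
    # drop everything except letters and whitespace, then split into words
    cleaned = re.sub(r'[^a-zA-Z\s]', '', v)
    for w in cleaned.split():
        yield w, 1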
Selecting a value from a drop-down option using selenium python
I want to select a value from a drop-down option. The html is as follows: <span id="searchTypeFormElementsStd"> <label for="numReturnSelect"></label> <select id="numReturnSelect" name="numReturnSelect"> <option value="200"></option> <option value="250"></option> <option value="500"></option> <option selected="" value="200"></option> <option value="800"></option> <option value="15000"></option> <option value="85000"></option> </select> </span> I tried the following: find_element_by_xpath("//select[@name='numReturnSelect']/option[text()='15000']").click() What is wrong with it? Please help me!
Adrian Ratnapala is right, and I would also choose id over name, so you can try the following: find_element_by_xpath("//select[@id='numReturnSelect']/option[@value='15000']").click() OR find_element_by_css_selector("select#numReturnSelect > option[value='15000']").click() OR you can use select_by_value(value): Select(driver.find_element_by_css_selector("select#numReturnSelect")).select_by_value("15000") Note that select_by_value takes a string and performs the selection itself, so no trailing .click() is needed. Click here for more info on Select.
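Putting the Select approach together as a self-contained sketch (the URL is hypothetical; the import path is selenium's standard one):

from selenium import webdriver
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get("http://example.com/search")  # hypothetical page containing the dropdown

select = Select(driver.find_element_by_id("numReturnSelect"))
select.select_by_value("15000")  # takes a string and clicks the option itself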
No module named setuptools
I want to install the setup file of twilio. When I run it with the command below, it gives me the error "No module named setuptools". Could you please let me know what I should do? I am using Python 2.7. Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Python27>python D:\test\twilio-twilio-python-26f6707\setup.py install Traceback (most recent call last): File "D:\test\twilio-twilio-python-26f6707\setup.py", line 2, in <module> from setuptools import setup, find_packages ImportError: No module named setuptools
Install setuptools (https://pypi.python.org/pypi/setuptools) and try again.
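Two common ways to do that on Windows at the time were to run the ez_setup.py bootstrap script downloadable from the page above, or to use pip if it is already installed (commands are a sketch, assuming the C:\Python27 layout from the question):

C:\Python27>python ez_setup.py
C:\Python27>python -m pip install setuptools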
Django: reverse accessors for foreign keys clashing
I have two Django models, Inquiry and Analysis, which inherit from a base class, Request. Request has two foreign keys to the built-in User model: create_user = models.ForeignKey(User, related_name='requests_created') assign_user = models.ForeignKey(User, related_name='requests_assigned') For some reason I'm getting the error Reverse accessor for 'Analysis.assign_user' clashes with reverse accessor for 'Inquiry.assign_user'. Everything I've read says that setting the related_name should prevent the clash, but I'm still getting the same error. Can anyone think of why this would be happening? Thanks!
The related_name would ensure that the two fields don't conflict with each other, but you have two models, each of which has both of those fields, so their reverse accessors collide across models. You need to put the name of the concrete model in each related_name, which you can do with some special string substitution: create_user = models.ForeignKey(User, related_name='%(class)s_requests_created') and likewise for assign_user - see the sketch below.
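Put together, a minimal sketch assuming Request is an abstract base class (model names are taken from the question; on Django versions of this era, on_delete is not yet required):

from django.contrib.auth.models import User
from django.db import models

class Request(models.Model):
    create_user = models.ForeignKey(User, related_name='%(class)s_requests_created')
    assign_user = models.ForeignKey(User, related_name='%(class)s_requests_assigned')

    class Meta:
        abstract = True

class Inquiry(Request):
    pass

class Analysis(Request):
    pass

# reverse accessors become e.g. user.inquiry_requests_created and
# user.analysis_requests_created, so the two models no longer clash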
ggplot styles in Python
When I look at the plotting style in the Pandas documentation, the plots look different from the default one. They seem to mimic the ggplot "look and feel", and the same goes for the seaborn package. How can I load that style (even if I am not using a notebook)?
Update: If you have matplotlib >= 1.4, there is a new style module which has a ggplot style by default. To activate this, use: from matplotlib import pyplot as plt plt.style.use('ggplot') This is recommended over the styling through the pandas options explained below (and is also used in the pandas docs now). For pandas, use: pd.options.display.mpl_style = 'default' and this will give you the 'ggplot-like' style for matplotlib figures (note that the name is a bit confusing, as this style is not enabled by default - this should really be clarified in the docs). For seaborn, as Paul H commented, it is enough to import seaborn. By the way, if you really want something like ggplot in Python with ggplot syntax (and not only ggplot-like style), there is also a python ggplot library based on pandas: https://github.com/yhat/ggplot/
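As a quick sanity check that the style took effect, a minimal sketch with made-up data:

from matplotlib import pyplot as plt

plt.style.use('ggplot')
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
plt.title('ggplot-styled figure')
plt.show()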
using pandas to select rows conditional on multiple equivalencies
I have a pandas df and would like to accomplish something along these lines (in SQL terms): SELECT * FROM df WHERE column1 = 'a' OR column2 = 'b' OR column3 = 'c' etc... This works for one column/value pair: foo = df.ix[df['column']==value] However, I'm not sure how to expand that to multiple column/value pairs.
You need to enclose each condition in parentheses due to operator precedence, and use the bitwise and (&) and or (|) operators: foo = df.ix[(df['column1'] == 'a') | (df['column2'] == 'b') | (df['column3'] == 'c')] The bitwise operators are needed because if you use and or or instead, pandas will raise an error complaining that the comparison is ambiguous: when a whole series is compared against a condition, it is unclear what the overall truth value should be if only one, or all but one, of the values match. In those situations you should use the bitwise operators, or numpy's np.all or np.any, to specify the matching criteria. There is also the query method: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.query.html but it has some limitations, mainly to do with potential ambiguity between column names and index values.
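For completeness, the query form of the same filter (column names taken from the question's SQL sketch):

foo = df.query("column1 == 'a' or column2 == 'b' or column3 == 'c'")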