I have a text file with the following structure

```
ID,operator,a,b,c,d,true
WCBP12236,J1,75.7,80.6,65.9,83.2,82.1
WCBP12236,J2,76.3,79.6,61.7,81.9,82.1
WCBP12236,S1,77.2,81.5,69.4,84.1,82.1
WCBP12236,S2,68.0,68.0,53.2,68.5,82.1
WCBP12234,J1,63.7,67.7,72.2,71.6,75.3
WCBP12234,J2,68.6,68.4,41.4,68.9,75.3
WCBP12234,S1,81.8,82.7,67.0,87.5,75.3
WCBP12234,S2,66.6,67.9,53.0,70.7,75.3
WCBP12238,J1,78.6,79.0,56.2,82.1,84.1
WCBP12239,J2,66.6,72.9,79.5,76.6,82.1
WCBP12239,S1,86.6,87.8,23.0,23.0,82.1
WCBP12239,S2,86.0,86.9,62.3,89.7,82.1
WCBP12239,J1,70.9,71.3,66.0,73.7,82.1
WCBP12238,J2,75.1,75.2,54.3,76.4,84.1
WCBP12238,S1,65.9,66.0,40.2,66.5,84.1
WCBP12238,S2,72.7,73.2,52.6,73.9,84.1
```

Each `ID` corresponds to a dataset which is analysed by an operator several times, i.e. `J1` and `J2` are the first and second attempt by operator J. The measures `a`, `b`, `c` and `d` use 4 slightly different algorithms to measure a value whose true value lies in the column `true`.

What I would like to do is to create 3 new text files comparing the results for `J1` vs `J2`, `S1` vs `S2` and `J1` vs `S1`. Example output for `J1` vs `J2`:

```
ID,operator,a1,a2,b1,b2,c1,c2,d1,d2,true
WCBP12236,75.7,76.3,80.6,79.6,65.9,61.7,83.2,81.9,82.1
WCBP12234,63.7,68.6,67.7,68.4,72.2,41.4,71.6,68.9,75.3
```

where `a1` is measurement `a` for `J1`, etc. Another example is for `S1` vs `S2`:

```
ID,operator,a1,a2,b1,b2,c1,c2,d1,d2,true
WCBP12236,77.2,68.0,81.5,68.0,69.4,53.2,84.1,68.5,82.1
WCBP12234,81.8,66.6,82.7,67.9,67.0,53,87.5,70.7,75.3
```

The IDs will not be in alphanumerical order, nor will the operators be clustered for the same ID. I'm not certain how best to approach this task - using Linux tools or a scripting language like Perl/Python.
My initial attempt using Linux quickly hit a brick wall. First find all unique IDs (sorted):

```
awk -F, '/^WCBP/ {print $1}' file | uniq | sort -k 1.5n > unique_ids
```

Loop through these IDs and sort `J1`, `J2`:

```
foreach i (`more unique_ids`)
    grep $i test.txt | egrep 'J[1-2]' | sort -t',' -k2
end
```

This gives me the data sorted

```
WCBP12234,J1,63.7,67.7,72.2,71.6,75.3
WCBP12234,J2,68.6,68.4,41.4,68.9,80.4
WCBP12236,J1,75.7,80.6,65.9,83.2,82.1
WCBP12236,J2,76.3,79.6,61.7,81.9,82.1
WCBP12238,J1,78.6,79.0,56.2,82.1,82.1
WCBP12238,J2,75.1,75.2,54.3,76.4,82.1
WCBP12239,J1,70.9,71.3,66.0,73.7,75.3
WCBP12239,J2,66.6,72.9,79.5,76.6,75.3
```

I'm not sure how to rearrange this data to get the desired structure. I tried adding an additional pipe to `awk` in the `foreach` loop:

`awk 'BEGIN {RS="\n\n"} {print $1, $3,$10,$4,$11,$5,$12,$6,$13,$7}'`

Any ideas? I'm sure this can be done in a less cumbersome manner using `awk`, although it may be better using a proper scripting language.
You can use the Perl CSV module [Text::CSV](http://search.cpan.org/perldoc?Text%3a%3aCSV) to extract the fields, and then store them in a hash, where ID is the main key, the second field is the secondary key and all the fields are stored as the value. It should then be trivial to do whatever comparisons you want. If you want to retain the original order of your lines, you can use an array inside the first loop.

```
use strict;
use warnings;

use Text::CSV;

my %data;
my $csv = Text::CSV->new({
    binary => 1,  # safety precaution
    eol    => $/, # important when using $csv->print()
});

while ( my $row = $csv->getline(*ARGV) ) {
    my ($id, $J) = @$row;  # first two fields
    $data{$id}{$J} = $row; # store line
}
```
Python way:

```
import itertools

info = ["WCBP12236,J1,75.7,80.6,65.9,83.2,82.1",
        "WCBP12236,J2,76.3,79.6,61.7,81.9,82.1",
        "WCBP12236,S1,77.2,81.5,69.4,84.1,82.1",
        "WCBP12236,S2,68.0,68.0,53.2,68.5,82.1",
        "WCBP12234,J1,63.7,67.7,72.2,71.6,75.3",
        "WCBP12234,J2,68.6,68.4,41.4,68.9,80.4",
        "WCBP12234,S1,81.8,82.7,67.0,87.5,75.3",
        "WCBP12234,S2,66.6,67.9,53.0,70.7,72.7",
        "WCBP12238,J1,78.6,79.0,56.2,82.1,82.1",
        "WCBP12239,J2,66.6,72.9,79.5,76.6,75.3",
        "WCBP12239,S1,86.6,87.8,23.0,23.0,82.1",
        "WCBP12239,S2,86.0,86.9,62.3,89.7,82.1",
        "WCBP12239,J1,70.9,71.3,66.0,73.7,75.3",
        "WCBP12238,J2,75.1,75.2,54.3,76.4,82.1",
        "WCBP12238,S1,65.9,66.0,40.2,66.5,80.4",
        "WCBP12238,S2,72.7,73.2,52.6,73.9,72.7"
        ]

def extract_data(operator_1, operator_2):
    operator_index = 1
    id_index = 0
    data = {}
    for line in info:
        conv_list = line.split(",")
        if len(conv_list) > operator_index and \
           (operator_1.strip().upper() == conv_list[operator_index].strip().upper() or
            operator_2.strip().upper() == conv_list[operator_index].strip().upper()):
            if data.has_key(conv_list[id_index]):
                # interleave the stored values with the new operator's values
                iters = [iter(conv_list[operator_index + 1:]),
                         iter(data[conv_list[id_index]])]
                data[conv_list[id_index]] = list(it.next() for it in itertools.cycle(iters))
                continue
            data[conv_list[id_index]] = conv_list[operator_index + 1:]
    return data

ret = extract_data("j1", "s2")
print ret
```

O/P:

> {'WCBP12239': ['70.9', '86.0', '71.3', '86.9', '66.0', '62.3', '73.7', '89.7', '75.3', '82.1'], 'WCBP12238': ['72.7', '78.6', '73.2', '79.0', '52.6', '56.2', '73.9', '82.1', '72.7', '82.1'], 'WCBP12234': ['66.6', '63.7', '67.9', '67.7', '53.0', '72.2', '70.7', '71.6', '72.7', '75.3'], 'WCBP12236': ['68.0', '75.7', '68.0', '80.6', '53.2', '65.9', '68.5', '83.2', '82.1', '82.1']}
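For completeness, here is a Python 3 sketch of the same grouping idea, assuming the CSV layout shown in the question (the function name and the pairing logic are illustrative, not part of the original answers):

```python
import csv
from collections import defaultdict

def pair_rows(path, op1, op2):
    """Group rows by ID, then interleave the measurements of two operators."""
    by_id = defaultdict(dict)
    with open(path) as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            rec_id, operator, measures = row[0], row[1], row[2:]
            by_id[rec_id][operator] = measures
    out = []
    for rec_id in sorted(by_id):
        ops = by_id[rec_id]
        if op1 in ops and op2 in ops:
            a, b = ops[op1], ops[op2]
            # interleave a1,a2,b1,b2,c1,c2,d1,d2; 'true' is the shared last column
            merged = [v for pair in zip(a[:-1], b[:-1]) for v in pair]
            out.append([rec_id] + merged + [a[-1]])
    return out
```

Writing the three comparison files is then a matter of calling `pair_rows` with `('J1', 'J2')`, `('S1', 'S2')` and `('J1', 'S1')` and feeding each result to `csv.writer`.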
Combine lines with matching keys
[ "", "python", "linux", "perl", "awk", "" ]
I have a big table containing trillions of records of the following schema (here serial no. is the key):

```
MyTable
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
serial_number | int                      |
name          | character varying(255)   |
Designation   | character varying(255)   |
place         | character varying(255)   |
timeOfJoining | timestamp with time zone |
timeOfLeaving | timestamp with time zone |
```

Now I want to fire queries of the form given below on this table:

```
select place from myTable where Designation='Manager' and timeOfJoining>'1930-10-10' and timeOfLeaving<'1950-10-10';
```

My aim is to achieve fast query execution times. Since I am designing my own database from scratch, I have the following options. Please guide me as to which of the two options will be faster.

1. Create 2 separate tables. Here, table1 contains the schema (serial_no, name, Designation, place) and table2 contains the schema (serial_no, timeOfJoining, timeOfLeaving). Then perform a merge join between the two tables. Here, serial_no is the key in both tables.
2. Keep one single table MyTable. And run the following plan: create an index Designation_place_name and, using the Designation_place_name index, find rows that fit the index condition relation = 'Manager' (the rows on disc are accessed randomly), and then using the filter function keep only rows that match the timeOfJoining criteria.

Please help me figure out which one will be faster. It'll be great if you could also tell me the respective pros and cons.

EDIT: I intend to use my table as read-only.
If you are dealing with lots and lots of rows and you want to use a relational database, then your best bet for such a query is to satisfy it entirely in an index. The example query is:

```
select place
from myTable
where Designation='Manager' and timeOfJoining > '1930-10-10' and timeOfLeaving < '1950-10-10';
```

The index should contain the four fields mentioned in the query. This suggests an index like: `mytable(Designation, timeOfJoining, timeOfLeaving, place)`. Note that only the first two will be used for the `where` clause, because of the inequality. However, most databases will do an index scan on the appropriate data.

With such a large amount of data, you have other problems. Although memory is getting cheaper and machines bigger, indexes often speed up queries because an index is smaller than the original table and faster to load into memory. For "trillions" of records, you are talking about tens of trillions of bytes of memory just for the index -- and I don't know which databases are able to manage that amount of memory. Because this is such a large system, just the hardware costs are going to be rather expensive.

I would suggest a custom solution that stores the data in a compressed format with special-purpose indexing for the queries. Off-the-shelf databases are great products applicable to almost all data problems. However, this seems to be going near the limit of their applicability. Even small efficiencies over an off-the-shelf database start to add up with such a large volume of data. For instance, the layout of records on pages invariably leaves empty space on a page (records don't exactly fit on a page, the database has overhead that you may not need such as bits for nullability, and so on). Say the overhead of the page structure and empty space amounts to 5% of the size of a page. For most applications, this is in the noise. But 5% of 100 trillion bytes is 5 trillion bytes -- a lot of extra I/O time and wasted storage.
EDIT: The real answer to the choice between the two options is to test them. This shouldn't be hard, because you don't need to test them on trillions of rows -- and if you have the hardware for that, you have the hardware for smaller tests. Take a few billion rows on a machine with correspondingly less memory and fewer CPUs and see which performs better. Once you are satisfied with the results, multiply the data by 10 and try again. You might want to do this one more time if you are not convinced of the results.

My opinion, though, is that the second is faster. The first duplicates the "serial number" in both tables, adding 8 bytes to each row ("int" is typically 4 bytes and that isn't big enough, so you need bigint). That alone will increase the I/O time and the size of indexes for any analysis. If you were considering a columnar data store (such as Vertica), then this space might be saved. The savings on removing one or two columns come at the expense of reading in more bytes in total.

Also, don't store the raw form of any of the variables in the table. The "Designation" should be in a lookup table, as should "place" and "name", so each would be 4 bytes (that should be big enough for the dimensions, unless one is all people on earth).

But . . . the "best" solution in terms of cost, maintainability, and scalability is probably something like Hadoop. That is how companies like Google and Yahoo manage vast quantities of data, and it seems apt here too.
For the most part a single table makes some sense, but it would be ridiculous to store all those values as strings. Depending on the uniqueness of your name/designation/place fields you could use something like this:

```
serial_number  | BIGINT
name_ID        | INT
Designation_ID | INT
place_ID       | INT
timeOfJoining  | timestamp with time zone
timeOfLeaving  | timestamp with time zone
```

Without knowing the data it's impossible to know which lookups would be practical. As others have mentioned, you've got some challenges ahead. Regarding indexing, I agree with Gordon.
Query plan for database table containing trillions of records
[ "", "mysql", "sql", "sql-server", "postgresql", "" ]
This behavior has me puzzled:

```
import code

class foo():
    def __init__(self):
        self.x = 1
    def interact(self):
        v = globals()
        v.update(vars(self))
        code.interact(local=v)

c = foo()
c.interact()
```

```
Python 2.6.6 (r266:84292, Sep 11 2012, 08:34:23)
(InteractiveConsole)
>>> id(x)
29082424
>>> id(c.x)
29082424
>>> x
1
>>> c.x
1
>>> x=2
>>> c.x
1
```

Why doesn't 'c.x' behave like an alias for 'x'? If I understand the id() function correctly, they are both at the same memory address.
Small integers from -5 to 256 are cached in Python, i.e. their `id()` is always going to be the same. From the [docs](http://docs.python.org/2/c-api/int.html#PyInt_FromLong):

> The current implementation keeps an array of integer objects for all
> integers between -5 and 256, when you create an int in that range you
> actually just get back a reference to the existing object.

```
>>> x = 1
>>> y = 1 # same id() here, as the integer 1 is cached by Python
>>> x is y
True
```

# Update:

> If two identifiers return the same value of **id()**, it doesn't mean they can act as aliases of
> each other; it depends entirely on the type of the object they point to.

For **immutable** objects you cannot create an alias in Python. Modifying one of the references to an immutable object will simply make it point to a new object, while other references to that older object will still remain intact.

```
>>> x = y = 300
>>> x is y # x and y point to the same object
True
>>> x += 1 # modify x
>>> x # x now points to a different object
301
>>> y # y still points to the old object
300
```

A **mutable** object can be modified from any of its references, but those modifications must be in-place modifications.

```
>>> x = y = []
>>> x is y
True
>>> x.append(1) # list.append is an in-place operation
>>> y.append(2) # in-place operation
>>> x
[1, 2]
>>> y # works fine so far
[1, 2]
>>> x = x + [1] # not an in-place operation; assigns a new object to x
>>> x
[1, 2, 1]
>>> y # y still points to the same old object
[1, 2]
```
> If I understand the id() function correctly, they are both at the same memory address. You don't understand it correctly. `id` returns an integer in respect of which the following identity is guaranteed: if `id(x) == id(y)` then `x is y` is guaranteed (and vice versa). Accordingly, `id` tells you about the objects (values) that variables point to, not about the variables themselves. Any relationship to memory addresses is purely an implementation detail. Python, unlike, e.g. C, does not assume any particular relationship to the underlying machine (whether physical or virtual). Variables in python are both opaque, and not language accessible (i.e. not first class).
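A minimal illustration of the point (variable names here are arbitrary):

```python
a = [1, 2, 3]
b = a                    # b is bound to the same list object
assert id(a) == id(b) and a is b

b = [1, 2, 3]            # rebinding b creates a new, equal object
assert a == b            # equal values
assert a is not b        # but distinct objects, so different id()
```

The same holds for `c.x` and `x` in the question: they happen to point at the same cached int object, but assigning to `x` only rebinds the name `x`; it does nothing to the attribute `c.x`.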
Variables and aliases with Python's code.interact
[ "", "python", "" ]
First timer on StackExchange. I am working with ArcGIS Server and Python. While trying to execute a query using the REST endpoint to a map service, I am getting the values for a field that is esriFieldTypeDate as a negative epoch in the JSON response. The JSON response looks like this:

```
{
  "feature": {
    "attributes": {
      "OBJECTID": 11,
      "BASIN": "North Atlantic",
      "TRACK_DATE": -3739996800000
    },
    "geometry": {
      "paths": [
        [
          [-99.9999999999999, 30.0000000000001],
          [-100.1, 30.5000000000001]
        ]
      ]
    }
  }
}
```

The field I am referring to is "TRACK_DATE" in the above JSON. The values returned by ArcGIS Server are always in milliseconds since epoch. ArcGIS Server also provides an HTML response, and the TRACK_DATE field for the same query is displayed as "TRACK_DATE: 1851/06/27 00:00:00 UTC". So, the date is pre-1900, and I understand the Python in-built datetime module is not able to handle dates before 1900. I am using 32-bit Python v2.6. I am trying to convert it to a datetime by using

`datetime.datetime.utcfromtimestamp(float(-3739996800000)/1000)`

However, this fails with

```
ValueError: timestamp out of range for platform localtime()/gmtime() function
```

How does one work with epochs that are negative and pre-1900 in Python 2.6? I have looked at similar posts, but could not find one that explains working with negative epochs.
This works for me: ``` datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=(-3739996800000/1000)) ``` → `datetime.datetime(1851, 6, 27, 0, 0)` This would have been better asked on StackOverflow since it is more Python specific than it is GIS-specific.
```
from datetime import datetime, timedelta

def to_datetime(timestamp):  # timestamp in seconds since epoch
    if timestamp < 0:
        return datetime(1970, 1, 1) + timedelta(seconds=timestamp)
    else:
        return datetime.utcfromtimestamp(timestamp)
```
how to create datetime from a negative epoch in Python
[ "", "python", "arcpy", "" ]
So I was having problems with some code before, because I was getting an empty line when I iterated through the foodList. Someone suggested using the `if x.strip():` method as seen below.

```
for x in split:
    if x.strip():
        foodList = foodList + [x.split(",")]
```

It works fine, but I would just like to know what it actually means. I know it deletes whitespace, but wouldn't the above if statement be saying "if x has empty space then true"? Which would be the opposite of what I wanted? I'd just like to wrap my head around the terminology and what it is doing behind the scenes.
In Python, "empty" objects --- empty list, empty dict, and, as in this case, empty string --- are considered false in a boolean context (like `if`). Any string that is not empty will be considered true. `strip` returns the string after stripping whitespace. If the string contains only whitespace, then `strip()` will strip everything away and return the empty string. So `if strip()` means "if the result of `strip()` is not an empty string" --- that is, if the string contains something besides whitespace.
> The method strip() returns a copy of the string in which all chars
> have been stripped from the beginning and the end of the string
> (default whitespace characters).

So, it trims whitespace from the beginning and end of a string if no input char is specified. At that point, the `if` simply checks whether the string `x` is empty or not, ignoring surrounding spaces, because an empty string is interpreted as `false` in Python.
what does 'if x.strip( )' mean?
[ "", "python", "if-statement", "strip", "" ]
In my Google App Engine app, I'm getting the error

> ImportError: No module named main

when going to the URL `/foo`. All the files in my app are in the parent directory. Here is my `app.yaml`:

```
application: foobar
version: 1
runtime: python27
api_version: 1
threadsafe: no

handlers:
- url: /foo.*
  script: main.application

- url: /
  static_files: index.html

- url: /(.*\.(html|css|js|gif|jpg|png|ico))
  static_files: \1
  upload: .*
  expiration: "1d"
```

Here is my `main.py`:

```
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util

class Handler(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello world!')

def main():
    application = webapp.WSGIApplication([('/foo', Handler)], debug=False)
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
```

I get the same error when I change `main.application` to `main.py` or just `main`. Why is this error occurring?
As the [documentation](https://developers.google.com/appengine/docs/python/config/appconfig#Static_File_Pattern_Handlers) says,

> Static files cannot be the same as application code files. If a static
> file path matches a path to a script used in a dynamic handler, the
> script will not be available to the dynamic handler.

In my case, the problem was that the line

```
upload: .*
```

matched all files in my parent directory, including main.py. This meant that main.py was not available to the dynamic handler. The fix was to change this line to only recognize the same files that this rule's URL line recognized:

```
upload: .*\.(html|css|js|gif|jpg|png|ico)
```
Your configuration is OK - there is only a small misstep in `main.py`: the config `main.application` means App Engine needs to access an `application` object at module level in the `main` module. This change should do the trick:

```
application = webapp.WSGIApplication([('/foo', Handler)], debug=False)

def main():
    util.run_wsgi_app(application)
```

Don't worry - the `application` object will not *run* on creation, nor on import from this module; it will run only on an explicit call such as `run_wsgi_app`, or inside Google's internal architecture.
ImportError - No module named main in GAE
[ "", "python", "google-app-engine", "importerror", "" ]
I have a RESTful API that I have exposed using an implementation of Elasticsearch on an EC2 instance to index a corpus of content. I can query the search by running the following from my terminal (MacOSX):

```
curl -XGET 'http://ES_search_demo.com/document/record/_search?pretty=true' -d '{
  "query": {
    "bool": {
      "must": [
        { "text": { "record.document": "SOME_JOURNAL" } },
        { "text": { "record.articleTitle": "farmers" } }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 50,
  "sort": [],
  "facets": {}
}'
```

How do I turn the above into an API request using `python/requests` or `python/urllib2` (not sure which one to go for - have been using urllib2, but hear that requests is better...)? Do I pass it as a header or otherwise?
Using [requests](https://requests.readthedocs.io):

```
import requests

url = 'http://ES_search_demo.com/document/record/_search?pretty=true'
data = '''{
  "query": {
    "bool": {
      "must": [
        { "text": { "record.document": "SOME_JOURNAL" } },
        { "text": { "record.articleTitle": "farmers" } }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 50,
  "sort": [],
  "facets": {}
}'''
response = requests.post(url, data=data)
```

Depending on what kind of response your API returns, you will then probably want to look at `response.text` or `response.json()` (or possibly inspect `response.status_code` first). See the quickstart docs [here](https://requests.readthedocs.io/en/master/user/quickstart/), especially [this section](https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests).
Using [requests](http://www.python-requests.org/en/latest/) and [json](https://docs.python.org/2/library/json.html) makes it simple.

1. Call the API.
2. Assuming the API returns JSON, parse the JSON object into a Python dict using the `json.loads` function.
3. Loop through the dict to extract information.

The [requests](http://www.python-requests.org/en/latest/) module provides useful helpers for checking success and failure:

`if myResponse.ok:` tells you whether your API call was successful (response code 200).

`myResponse.raise_for_status()` raises an exception carrying the HTTP error code returned from the API, if the call failed.

Below is sample code for making such API calls. It can also be found on [github](https://gist.github.com/vinovator/98b0fb7eb30805595bd6). The code assumes that the API uses digest authentication. You can either skip this or use other appropriate authentication modules to authenticate the client invoking the API.

```
# Python 2.7.6
# RestfulClient.py

import requests
from requests.auth import HTTPDigestAuth
import json

# Replace with the correct URL
url = "http://api_url"

# It is good practice not to hardcode the credentials, so ask the user to enter them at runtime
myResponse = requests.get(url, auth=HTTPDigestAuth(raw_input("username: "), raw_input("Password: ")), verify=True)
# print (myResponse.status_code)

# For a successful API call, the response code will be 200 (OK)
if(myResponse.ok):

    # Loading the response data into a dict variable
    # json.loads takes only binary or string input, so use .content to fetch the raw body
    # loads ("load string") takes a JSON string and converts it into a Python data structure (dict or list, depending on the JSON)
    jData = json.loads(myResponse.content)

    print("The response contains {0} properties".format(len(jData)))
    print("\n")
    for key in jData:
        print key + " : " + jData[key]
else:
    # If the response code is not ok (200), print the resulting http error code with description
    myResponse.raise_for_status()
```
Making a request to a RESTful API using Python
[ "", "python", "rest", "" ]
Trying to work out the best way to store a phone # in Django. At the moment I'm using a CharField and checking whether it's a number...
I always use a simple CharField, since phone numbers differ so greatly from region to region and country to country. Some people might even use characters instead of numbers - according to the numeric keyboard on phones. Maybe adding a Choicefield for country prefix is a good idea, but that is as far as I would go. I would never check a phone number field for any "invalid" data like dashes, spaces etc, because your users might dislike receiving an error message and because of that do not submit a phone number at all. After all a phone number will be dialled by a person in your office. And they can - and should - verify the number personally.
I store phone numbers in CharField, and use [phonenumbers](https://pypi.python.org/pypi/phonenumbers) for validation. In forms I allow the user to enter the number any way he wants and then parse,format and validate it using `phonenumbers` lib.
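If pulling in a third-party library is not an option, a naive normalization pass can at least canonicalize what gets stored before saving the CharField. This sketch is illustrative only (the function name is made up) and is no substitute for the real validation that `phonenumbers` provides:

```python
import re

def normalize_phone(raw):
    """Keep a leading '+' and the digits; drop spaces, dashes, parentheses."""
    raw = raw.strip()
    digits = re.sub(r"\D", "", raw)  # strip every non-digit character
    return ("+" if raw.startswith("+") else "") + digits
```

For example, `normalize_phone("+1 (415) 555-0100")` yields `"+14155550100"`, so equal numbers entered with different punctuation compare equal in the database.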
What's the recommended way for storing a phone number?
[ "", "python", "django", "django-models", "phone-number", "" ]
I am using PyCharm on Windows and want to change the settings to limit the maximum line length to `79` characters, as opposed to the default limit of `120` characters. Where can I change the maximum amount of characters per line in PyCharm?
Here is a screenshot of my PyCharm. The required setting is at the following path: `File -> Settings -> Editor -> Code Style -> General: Right margin (columns)` [![Pycharm 4 Settings Screenshot](https://i.stack.imgur.com/V3BLg.png)](https://i.stack.imgur.com/V3BLg.png)
For PyCharm 2018.1 on Mac: Preferences (`⌘`+`,`), then `Editor -> Code Style`: [![enter image description here](https://i.stack.imgur.com/tQVsX.png)](https://i.stack.imgur.com/tQVsX.png) For PyCharm 2018.3 on Windows: File -> Settings (`Ctrl`+`Alt`+`S`), then `Editor -> Code Style`: To follow PEP-8 set `Hard wrap at` to 80.
How do I set the maximum line length in PyCharm?
[ "", "python", "pycharm", "pep8", "" ]
I have crawled txt files from different websites; now I need to glue them into one file. Many lines from the various websites are similar to each other, and I want to remove the repetitions. Here is what I have tried:

```
import difflib

sourcename = 'xiaoshanwujzw'
destname = 'bindresult'

sourcefile = open('%s.txt' % sourcename)
sourcelines = sourcefile.readlines()
sourcefile.close()

for sourceline in sourcelines:
    destfile = open('%s.txt' % destname, 'a+')
    destlines = destfile.readlines()
    similar = False
    for destline in destlines:
        ratio = difflib.SequenceMatcher(None, destline, sourceline).ratio()
        if ratio > 0.8:
            print destline
            print sourceline
            similar = True
    if not similar:
        destfile.write(sourceline)
    destfile.close()
```

I will run it for every source, and write line by line to the same file. The result is that, even if I run it for the same file multiple times, the line is always appended to the destination file.

EDIT: I have tried the code from the answer. It's still very slow. Even if I minimize the IO, I still need O(n^2) comparisons, especially when you have 1000+ lines. I have on average 10,000 lines per file. Any other ways to remove the duplicates?
Here is a short version that does minimal IO and cleans up after itself.

```
import difflib

sourcename = 'xiaoshanwujzw'
destname = 'bindresult'

# open for append so existing content is preserved, then rewind to read it
with open('%s.txt' % destname, 'a+') as destfile:
    destfile.seek(0)
    # we read in the file so that on subsequent runs of this script, we
    # won't duplicate the lines.
    known_lines = set(destfile.readlines())
    with open('%s.txt' % sourcename) as sourcefile:
        for line in sourcefile:
            similar = False
            for known in known_lines:
                ratio = difflib.SequenceMatcher(None, line, known).ratio()
                if ratio > 0.8:
                    print ratio
                    print line
                    print known
                    similar = True
                    break
            if not similar:
                destfile.write(line)
                known_lines.add(line)
```

Instead of reading the known lines each time from the file, we save them to a set, which we use for comparison against. The set is essentially a mirror of the contents of 'destfile'.

### A note on complexity

By its very nature, this problem has O(n²) complexity. Because you're looking for *similarity* with known strings, rather than identical strings, you have to look at every previously seen string. If you were looking to remove exact duplicates, rather than fuzzy matches, you could use a simple lookup in a set, with complexity O(1), making your entire solution have O(n) complexity.

There might be a way to reduce the fundamental complexity by using lossy compression on the strings so that two similar strings compress to the same result. This is however both out of scope for a Stack Overflow answer, and beyond my expertise. It is [an active research area](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.7621), so you might have some luck digging through the literature.

You could also reduce the time taken by `ratio()` by using the less accurate alternatives `quick_ratio()` and `real_quick_ratio()`.
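As a contrast, the exact-duplicate case mentioned above reduces to a set lookup, keeping the whole pass at O(n) (the function name here is illustrative):

```python
def dedupe_exact(lines):
    """Drop exact repeats while preserving first-seen order."""
    seen = set()
    out = []
    for line in lines:
        if line not in seen:  # O(1) membership test
            seen.add(line)
            out.append(line)
    return out
```

For example, `dedupe_exact(["a", "b", "a", "c", "b"])` returns `["a", "b", "c"]`. It only helps if your crawled lines repeat verbatim, but it can be a cheap first pass before the fuzzy comparison.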
Basically what you need to do is check every line in the source file to see if it has a potential match against every line of the destination file.

```
##xiaoshanwujzw.txt
##-----------------
##radically different thing
##this is data
##and more data

##bindresult.txt
##--------------
##a website line
##this is data
##and more data

from difflib import SequenceMatcher

sourcefile = open('xiaoshanwujzw.txt', 'r')
sourcelines = sourcefile.readlines()
sourcefile.close()

destfile = open('bindresult.txt', 'a+')
destlines = destfile.readlines()

has_matches = {k: False for k in sourcelines}

for d_line in destlines:
    for s_line in sourcelines:
        if SequenceMatcher(None, d_line, s_line).ratio() > 0.8:
            has_matches[s_line] = True
            break

for k in has_matches:
    if has_matches[k] == False:
        destfile.write(k)

destfile.close()
```

This will add the line `radically different thing` to the destination file.
python similar string removal from multiple files
[ "", "python", "" ]
I am trying to create a properties file like this...

```
firstname=Jon
lastname=Snow
occupation=Nights_Watch
family=Stark
```

...from a query like this...

```
SELECT a.fname as firstname, a.lname as lastname, b.occ as occupation...
FROM names a, occupation b, family c...
WHERE...
```

How can I do this? I only know how to use `spool` to produce a CSV file, which won't work here. These property files will be picked up by shell scripts to run automated tasks. I am using Oracle DB.
Since you mentioned `spool`, I will assume you are running on Oracle. This should produce a result in the desired format, which you can `spool` straight away.

```
SELECT 'firstname=' || firstname || CHR(10) ||
       'lastname=' || lastname || CHR(10)
       -- and so on for all fields
FROM your_tables;
```

The same approach should be possible with all database engines, if you know the correct incantation for a literal new line and the syntax for string concatenation.
Perhaps something like this? ``` psql -c 'select id, name from test where id = 1' -x -t -A -F = dbname -U dbuser ``` Output would be like: ``` id=1 name=test1 ``` (For the full list of options: `man psql`.)
Generate a Properties File using Shell Script and Results from a SQL Query
[ "", "sql", "oracle", "shell", "plsql", "formatting", "" ]
I want to build a table in Python with three columns and later on fetch the values as necessary. I am thinking dictionaries are the best way to do it, with a key mapping to two values.

```
| column1 | column 2    | column 3 |
| MAC     | PORT NUMBER | DPID     |
| Key     | Value 1     | Value 2  |
```

Proposed way:

```
# define a global learning table
globe_learning_table = defaultdict(set)

# add port number and dpid of a switch based on its MAC address as a key
# packet.src will give you the MAC address in this case
globe_learning_table[packet.src].add(event.port)
globe_learning_table[packet.src].add(dpid_to_str(connection.dpid))

# getting value of DPID based on its MAC address
globe_learning_table[packet.src][????]
```

I am not sure, if one key points to two values, how I can get the particular value associated with that key. I am open to using another data structure as well, if it can build this dynamic table and give me the particular values when necessary.
Why a dictionary? Why not a list of named tuples, or a collection (list, dictionary) of objects from some class which you define (with attributes for each column)? What's wrong with:

```
class myRowObj(object):
    def __init__(self, mac, port, dpid):
        self.mac = mac
        self.port = port
        self.dpid = dpid

myTable = list()
for each in some_inputs:
    myTable.append(myRowObj(*each.split()))
```

... or something like that? (Note: myTable can be a list, or a dictionary, or whatever is suitable to your needs. Obviously if it's a dictionary then you have to ask what sort of key you'll use to access these "rows").

The advantage of this approach is that your "row objects" (which you'd name in some way that made more sense to your application domain) can implement whatever semantics you choose. These objects can validate and convert any values supplied at instantiation, compute any derived values, etc. You can also define string and code representations of your object (implicit conversions for when one of your rows is used as a string or in certain types of development and debugging or serialization; the `__str__` and `__repr__` special methods, for example).

The named tuples (added in Python 2.6) are a sort of lightweight object class which can offer some performance advantages and a lighter memory footprint over normal custom classes (for situations where you only want the named fields without binding custom methods to these objects, for example).
Something like this perhaps? ```
>>> import collections
>>> PortDpidPair = collections.namedtuple("PortDpidPair", ["port", "dpid"])
>>> global_learning_table = collections.defaultdict(PortDpidPair)
>>> global_learning_table["ff:" * 5 + "ff"] = PortDpidPair(80, 1234)
>>> global_learning_table
defaultdict(<class '__main__.PortDpidPair'>, {'ff:ff:ff:ff:ff:ff': PortDpidPair(port=80, dpid=1234)})
>>>
``` (Note that the `defaultdict` factory is never actually triggered here, since the key is assigned explicitly.) Named tuples might be appropriate for each row, but depending on how large this table is going to be, you may be better off with a sqlite db or something similar.
Creating a table in python
[ "", "python", "" ]
Assuming I have the following list: ``` array1 = ['A', 'C', 'Desk'] ``` and another array that contains: ``` array2 = [{'id': 'A', 'name': 'Greg'}, {'id': 'Desk', 'name': 'Will'}, {'id': 'E', 'name': 'Craig'}, {'id': 'G', 'name': 'Johnson'}] ``` What is a good way to remove items from the list? The following does not appear to work ``` for item in array2: if item['id'] in array1: array2.remove(item) ```
You could also use a list comprehension for this: ```
>>> array2 = [{'id': 'A', 'name': 'Greg'},
...           {'id': 'Desk', 'name': 'Will'},
...           {'id': 'E', 'name': 'Craig'},
...           {'id': 'G', 'name': 'Johnson'}]
>>> array1 = ['A', 'C', 'Desk']
>>> filtered = [item for item in array2 if item['id'] not in array1]
>>> filtered
[{'id': 'E', 'name': 'Craig'}, {'id': 'G', 'name': 'Johnson'}]
```
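A small refinement on the comprehension above: if `array1` can grow large, converting it to a `set` first keeps each membership test O(1) instead of O(n). A runnable sketch with the question's data:

```python
array1 = ['A', 'C', 'Desk']
array2 = [{'id': 'A', 'name': 'Greg'},
          {'id': 'Desk', 'name': 'Will'},
          {'id': 'E', 'name': 'Craig'},
          {'id': 'G', 'name': 'Johnson'}]

# Set lookup is O(1) per test, vs O(n) for scanning a list each time.
unwanted = set(array1)
filtered = [item for item in array2 if item['id'] not in unwanted]
print(filtered)  # the 'A' and 'Desk' entries are dropped
```

For three items the difference is negligible; it starts to matter when both lists have thousands of entries.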
You can use filter: ``` array2 = filter(lambda x: x['id'] not in array1, array2) ```
How to delete from list?
[ "", "python", "" ]
I'm new to development with Django, and I'm trying to modify an OpenStack Horizon Dashboard application (which is based on Django). I implemented one function and now I'm trying to build a form, but I'm having some problems with the request. In my code I'm using the POST method. First, I want to show in the same view what is in the form, and I'm doing it like this. ```
from django import http
from django.utils.translation import ugettext_lazy as _
from django.views.generic import TemplateView
from django import forms

class TesteForm(forms.Form):
    name = forms.CharField()

class IndexView(TemplateView):
    template_name = 'visualizations/validar/index.html'

    def get_context_data(request):
        if request.POST:
            form = TesteForm(request.POST)
            if form.is_valid():
                instance = form.save()
        else:
            form = TesteForm()
        return {'form':form}

class IndexView2(TemplateView):
    template_name = 'visualizations/validar/index.html'

    def get_context_data(request):
        text = None
        if request.POST:
            form = TesteForm(request.POST)
            if form.is_valid():
                text = form.cleaned_data['name']
        else:
            form = TesteForm()
        return {'text':text,'form':form}
``` My urls.py file is like this ```
from django.conf.urls.defaults import patterns, url
from .views import IndexView
from .views import IndexView2

urlpatterns = patterns('',
    url(r'^$',IndexView.as_view(), name='index'),
    url(r'teste/',IndexView2.as_view()),
)
``` and my template is like this ```
{% block main %}
<form action="teste/" method="POST">{% csrf_token %}{{ form.as_p }}
<input type="submit" name="OK"/>
</form>
<p>{{ texto }}</p>
{% endblock %}
``` I searched Django's docs about this, but the examples aren't clear and only use function-based views, while the Horizon Dashboard uses classes (as in my code above). When I execute this, an error message appears.
The message says: ```
AttributeError at /visualizations/validar/
'IndexView' object has no attribute 'POST'
Request Method: GET
Request URL: http://127.0.0.1:8000/visualizations/validar/
Django Version: 1.4.5
Exception Type: AttributeError
Exception Value:'IndexView' object has no attribute 'POST'
Exception Location: /home/labsc/Documentos/horizon/openstack_dashboard/dashboards/visualizations/validar/views.py in get_context_data, line 14
Python Executable: /home/labsc/Documentos/horizon/.venv/bin/python
Python Version: 2.7.3
``` I searched about this error but found nothing. If someone can help me, I'd be thankful.
Your signature is wrong: ```
def get_context_data(request)
``` should be ```
def get_context_data(self, **kwargs):
    request = self.request
``` Check the docs for [get\_context\_data](https://docs.djangoproject.com/en/dev/ref/class-based-views/mixins-simple/#django.views.generic.base.ContextMixin.get_context_data) and the section on [dynamic filtering](https://docs.djangoproject.com/en/dev/topics/class-based-views/generic-display/#dynamic-filtering). Because the first positional argument passed to a method is the view instance itself, your parameter named `request` actually receives the `self` object, which is why you are getting the error.
If you read the error message more carefully, it appears that the URL was retrieved using a *GET* method, not *POST*: ```
AttributeError at /visualizations/validar/
'IndexView' object has no attribute 'POST'
Request Method: GET
Request URL: http://127.0.0.1:8000/visualizations/validar/
``` See the following link for an in-depth explanation of [GET vs POST](http://www.w3schools.com/tags/ref_httpmethods.asp)
How to get data from form using POST in django (Horizon Dashboard)?
[ "", "python", "django", "development-environment", "" ]
I have a number in my python script that I want to use as part of the title of a plot in matplotlib. Is there a function that converts a float to a formatted TeX string? Basically, ``` str(myFloat) ``` returns ``` 3.5e+20 ``` but I want ``` $3.5 \times 10^{20}$ ``` or at least for matplotlib to format the float like the second string would be formatted. I'm also stuck using python 2.4, so code that runs in old versions is especially appreciated.
You can do something like: ``` ax.set_title( "${0} \\times 10^{{{1}}}$".format('3.5','+20')) ``` in the old style: ``` ax.set_title( "$%s \\times 10^{%s}$" % ('3.5','+20')) ```
With old-style formatting: ```
print r'$%s \times 10^{%s}$' % tuple('3.5e+20'.split('e+'))
``` with the new format: ```
print r'${} \times 10^{{{}}}$'.format(*'3.5e+20'.split('e+'))
```
How can I format a float using matplotlib's LaTeX formatter?
[ "", "python", "matplotlib", "tex", "python-2.4", "" ]
**Objective:** Write Python 2.7 code to extract IPv4 addresses from a string. **String content example:** --- The following are IP addresses: 192.168.1.1, 8.8.8.8, 101.099.098.000. These can also appear as 192.168.1[.]1 or 192.168.1(.)1 or 192.168.1[dot]1 or 192.168.1(dot)1 or 192 .168 .1 .1 or 192. 168. 1. 1. and these censorship methods could apply to any of the dots (Ex: 192[.]168[.]1[.]1). --- As you can see from the above, I am struggling to find a way to parse through a txt file that may contain IPs depicted in multiple forms of "censorship" (to prevent hyper-linking). I'm thinking that a regex is the way to go. Maybe something along the lines of: any grouping of four ints 0-255 or 000-255 separated by anything in the 'separators list', which would consist of periods, brackets, parentheses, or any of the other aforementioned examples. This way, the 'separators list' could be updated as needed. Not sure if this is the proper way to go, or even possible, so any help with this is greatly appreciated. --- **Update:** Thanks to recursive's answer below, I now have the following code working for the above example. It will... * find the IPs * place them into a list * clean them of the spaces/braces/etc * and replace the uncleaned list entry with the cleaned one. **Caveat:** The code below does not account for incorrect/non-valid IPs such as 192.168.0.256 or 192.168.1.2.3. Currently, it will drop the trailing 6 and 3 from the aforementioned. If the first octet is invalid (ex: 256.10.10.10) it will drop the leading 2 (resulting in 56.10.10.10).
```
import re

def extractIPs(fileContent):
    pattern = r"((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)([ (\[]?(\.|dot)[ )\]]?(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3})"
    ips = [each[0] for each in re.findall(pattern, fileContent)]
    # Clean each match in place: strip spaces/braces/parens, then map "dot" back to "."
    # (enumerate avoids the index/remove/insert dance, which misbehaves on duplicate IPs)
    for i, item in enumerate(ips):
        ip = re.sub(r"[ ()\[\]]", "", item)
        ips[i] = re.sub("dot", ".", ip)
    return ips

myFile = open('***INSERT FILE PATH HERE***')
fileContent = myFile.read()
myFile.close()

IPs = extractIPs(fileContent)
print "Original file content:\n{0}".format(fileContent)
print "--------------------------------"
print "Parsed results:\n{0}".format(IPs)
```
The code below will... * find IPs in strings even when censored (ex: 192.168.1[dot]20 or 10.10.10 .21) * place them into a list * clean them of the censorship (spaces/braces/parentheses) * and replace the uncleaned list entry with the cleaned one. **Caveat:** The code below does not account for incorrect/non-valid IPs such as 192.168.0.256 or 192.168.1.2.3. Currently, it will drop the trailing digit (6 and 3 from the aforementioned). If the first octet is invalid (ex: 256.10.10.10), it will drop the leading digit (resulting in 56.10.10.10). ```
import re

def extractIPs(fileContent):
    pattern = r"((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)([ (\[]?(\.|dot)[ )\]]?(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3})"
    ips = [each[0] for each in re.findall(pattern, fileContent)]
    # Clean each match in place: strip spaces/braces/parens, then map "dot" back to "."
    for i, item in enumerate(ips):
        ip = re.sub(r"[ ()\[\]]", "", item)
        ips[i] = re.sub("dot", ".", ip)
    return ips

myFile = open('***INSERT FILE PATH HERE***')
fileContent = myFile.read()
myFile.close()

IPs = extractIPs(fileContent)
print "Original file content:\n{0}".format(fileContent)
print "--------------------------------"
print "Parsed results:\n{0}".format(IPs)
```
Here is a regex that works: ``` import re pattern = r"((([01]?[0-9]?[0-9]|2[0-4][0-9]|25[0-5])[ (\[]?(\.|dot)[ )\]]?){3}([01]?[0-9]?[0-9]|2[0-4][0-9]|25[0-5]))" text = "The following are IP addresses: 192.168.1.1, 8.8.8.8, 101.099.098.000. These can also appear as 192.168.1[.]1 or 192.168.1(.)1 or 192.168.1[dot]1 or 192.168.1(dot)1 or 192 .168 .1 .1 or 192. 168. 1. 1. " ips = [match[0] for match in re.findall(pattern, text)] print ips # output: ['192.168.1.1', '8.8.8.8', '101.099.098.000', '192.168.1[.]1', '192.168.1(.)1', '192.168.1[dot]1', '192.168.1(dot)1', '192 .168 .1 .1', '192. 168. 1. 1'] ``` The regex has a few main parts, which I will explain here: * `([01]?[0-9]?[0-9]|2[0-4][0-9]|25[0-5])` This matches the numerical parts of the ip address. `|` means "or". The first case handles numbers from 0 to 199 with or without leading zeroes. The second two cases handle numbers over 199. * `[ (\[]?(\.|dot)[ )\]]?` This matches the "dot" parts. There are three sub-components: + `[ (\[]?` The "prefix" for the dot. Either a space, an open paren, or open square brace. The trailing `?` means that this part is optional. + `(\.|dot)` Either "dot" or a period. + `[ )\]]?` The "suffix". Same logic as the prefix. * `{3}` means repeat the previous component 3 times. * The final element is another number, which is the same as the first, except it is not followed by a dot.
Python - parse IPv4 addresses from string (even when censored)
[ "", "python", "regex", "python-2.7", "ipv4", "data-extraction", "" ]
Is there an easy way in python of creating a list of substrings from a list of strings? Example: original list: `['abcd','efgh','ijkl','mnop']` list of substrings: `['bc','fg','jk','no']` I know this could be achieved with a simple loop but is there an easier way in python (Maybe a one-liner)?
Use `slicing` and a `list comprehension`: ```
>>> lis = ['abcd','efgh','ijkl','mnop']
>>> [x[1:3] for x in lis]
['bc', 'fg', 'jk', 'no']
``` Slicing: ```
>>> s = 'abcd'
>>> s[1:3]  # returns the substring from index 1 to 2 (index 3 is not inclusive)
'bc'
```
With a mix of slicing and list comprehensions you can do it like this ``` listy = ['abcd','efgh','ijkl','mnop'] [item[1:3] for item in listy] >> ['bc', 'fg', 'jk', 'no'] ```
Create new list of substrings from list of strings
[ "", "python", "list", "substring", "" ]
I have a table with this structure: ![enter image description here](https://i.stack.imgur.com/IJzJF.png) I use this script to query the requests: ``` SELECT D.DELIVERY_REQUEST_ID AS "REQUEST_ID", 'Delivery' AS "REQUEST_TYPE" FROM DELIVERY_REQUEST D UNION SELECT I.INVOICE_REQUEST_ID AS "REQUEST_ID", 'Invoice' AS "REQUEST_TYPE" FROM INVOICE_TRX I ``` The result would be like this: ``` REQUEST_ID | REQUEST_TYPE __________________|____________________ | 1 | Delivery 1 | Invoice 2 | Delivery 2 | Invoice ``` What I want to do is to query (or create a view) this with a unique key (should be an INT and like an auto number) at the beginning like this: ``` ID | REQUEST_ID | REQUEST_TYPE ____|________________|____________________ | | 1 | 1 | Delivery 2 | 1 | Invoice 3 | 2 | Delivery 4 | 2 | Invoice ``` Thank you in advance.
Firstly, as you're adding a string use UNION ALL so Oracle doesn't try to do a distinct sort. To actually answer the question you can use the analytic function [ROW\_NUMBER()](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions156.htm#SQLRF06100) ``` select row_number() over ( order by request_id, request_type ) as id , a.* from ( select d.delivery_request_id as request_id , 'delivery' as request_type from delivery_request d union all select i.invoice_request_id as request_id , 'invoice' as request_type from invoice_trx i ) a ```
Why don't you concatenate `REQUEST_TYPE + REQUEST_ID` and put the result in the `ID` column instead of generating IDs? ```
ID  | REQUEST_ID     | REQUEST_TYPE
____|________________|____________________
    |                |
D1  | 1              | Delivery
I1  | 1              | Invoice
D2  | 2              | Delivery
I2  | 2              | Invoice
```
Insert a UNIQUE DUMMY COLUMN when creating VIEW with UNION
[ "", "sql", "oracle", "oracle11g", "" ]
I have 2 dictionaries: ```
a = {'abc': 12}
b = {'abcd': 13, 'abc': 99}
``` I want to check if a certain key exists in both dictionaries. In this case I want to check if both `a` and `b` contain the key `'abc'`. I have the following code: ```
if 'abc' in a:
    if 'abc' in b:
        print(True)
    else:
        print(False)
else:
    print(False)
``` and: ```
if ('abc' in a) and ('abc' in b):
    print(True)
else:
    print(False)
``` but is there a better way to do this?
Nope - that's pretty much as good as it gets... It's readable and obvious as to what's happening. If the number of `dict`s grows: ```
all('abc' in d for d in (d1, d2, d3, d4))
``` Or, just pre-compute the common keys first, and access that: ```
common_keys = set(d1).intersection(d2, d3, d4)
'abc' in common_keys
```
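Both variants from the answer above, run against the question's two dicts (a minimal sketch; `d3`/`d4` from the answer are omitted since only two dicts exist here):

```python
a = {'abc': 12}
b = {'abcd': 13, 'abc': 99}

# Scales to any number of dicts without nesting if-statements:
in_all = all('abc' in d for d in (a, b))

# Or precompute the shared keys once and reuse them for many lookups:
common_keys = set(a).intersection(b)

print(in_all)                # True
print('abc' in common_keys)  # True; 'abcd' is only in b, so it is not common
```

The `all(...)` form is better for a one-off check; the `intersection` form pays off when the same set of common keys is queried repeatedly.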
One liner :) ``` print ('abc' in a) and ('abc' in b) ```
check to see if key exist in 2 dictionary in same line
[ "", "python", "dictionary", "" ]
I am attempting to open a new tab OR a new window in a browser using selenium for python. It is of little importance if a new tab or new window is opened, it is only important that a second instance of the browser is opened. I have tried several different methods already and none have succeeded. 1. Switching to a window that does not exist with hopes that it would then open a new window upon failure to locate said window: `driver.switch_to_window(None)` 2. Iterating through open windows (although there is currently only one) ``` for handle in driver.window_handles: driver.switch_to_window(handle) ``` 3. Attempting to simulate a keyboard key press ``` from selenium.webdriver.common.keys import Keys driver.send_keys(Keys.CONTROL + 'T') ``` The problem with this one in particular was that it does not seem possible to send keys directly to the browser, only to a specific element like this: ``` driver.find_element_by_id('elementID').send_keys(Keys.CONTROL + 'T') ``` However, when a command such as this is sent to an element, it appears to do absolutely nothing. I attempted to locate the topmost HTML element on the page and send the keys to that, but was again met with failure: ``` driver.find_element_by_id('wrapper').send_keys(Keys.CONTROL + 'T') ``` Another version of this I found online, and was not able to verify its validity or lack thereof because I'm not sure what class/module which needs importing ``` act = ActionChains(driver) act.key_down(browserKeys.CONTROL) act.click("").perform() act.key_up(browserKeys.CONTROL) ``` Something very similar with different syntax (I'm not sure if one or both of these is correct syntax) ``` actions.key_down(Keys.CONTROL) element.send_keys('t') actions.key_up(Keys.CONTROL) ```
How about you do something like this ``` driver = webdriver.Firefox() #First FF window second_driver = webdriver.Firefox() #The new window you wanted to open ``` Depending on which window you want to interact with, you send commands accordingly ``` print driver.title #to interact with the first driver print second_driver.title #to interact with the second driver ``` --- **For all down voters:** --- The OP asked for "`it is only important that a second instance of the browser is opened.`". This answer does not encompass ALL possible requirements of each and everyone's use cases. The other answers below may suit your particular need.
You can use `execute_script` to open new window. ``` driver = webdriver.Firefox() driver.get("https://linkedin.com") # open new tab driver.execute_script("window.open('https://twitter.com')") print driver.current_window_handle # Switch to new window driver.switch_to.window(driver.window_handles[-1]) print " Twitter window should go to facebook " print "New window ", driver.title driver.get("http://facebook.com") print "New window ", driver.title # Switch to old window driver.switch_to.window(driver.window_handles[0]) print " Linkedin should go to gmail " print "Old window ", driver.title driver.get("http://gmail.com") print "Old window ", driver.title # Again new window driver.switch_to.window(driver.window_handles[1]) print " Facebook window should go to Google " print "New window ", driver.title driver.get("http://google.com") print "New window ", driver.title ```
How to open a new window on a browser using Selenium WebDriver for python?
[ "", "python", "selenium", "selenium-webdriver", "window", "" ]
I'm using Google's `jsapi` to draw an area chart, so I have to get two different averages. The first one is for a specific person and the second one is for the entire company except for that person. I am using this query to get the last 26 weeks for the specified person. ```
SELECT TOP 26 DATE_GIVEN
FROM CHECKS
WHERE PERSON_NO='001'
ORDER BY DATE_GIVEN DESC
``` But I need to modify that to get the last 26 weeks even if the person skips a week, and for missing weeks fill in a 0 and include it in the average. But the second one is super hard and I don't know how to do it. Here is what I want to do: 1. Select all of the checks in the table except for person\_no=001 2. Group all of them per week 3. Select only the last 26 check weeks 4. If a week is missing for a person, fill in the value 0 and include it in the average. I tried something like this but it's wrong: ```
SELECT TOP 26 AVG(CHECK_AMOUNT) AS W2
FROM CHECKS
WHERE NOT PERSON_NO='001'
GROUP BY Datepart(week,DATE_GIVEN)
ORDER BY DATE_GIVEN DESC
``` To make it a little more clear: I'm trying to get the weekly average for one person vs. the average of the rest of the company not including that person. The table name is `CHECKS` with columns `CHECK_NO, DATE_GIVEN, AMOUNT, PERSON_NO`. I also tried something like this but I don't know if this is correct: ```
SELECT TOP 26 AVG(CHECK_AMOUNT) AS W1
FROM CHECKS
WHERE PERSON_NO='001'
GROUP BY Datepart(week, DATE_GIVEN)

SELECT TOP 26 AVG(CHECK_AMOUNT) AS W2
FROM CHECKS
WHERE NOT PERSON_NO='001'
GROUP BY Datepart(week, DATE_GIVEN)
``` ![enter image description here](https://i.stack.imgur.com/spq6N.png)
1st one. As you need to get data for the *last 26 weeks* you need to subtract 26 weeks from the current date. Since you want to include 0 for missing weeks, it is the same as dividing the SUM of whatever you've got by 26. ```
Declare @weeks int = 26;

SELECT sum(CHECK_AMOUNT)/@weeks
FROM CHECKS
WHERE PERSON_NO='001'
and DATE_GIVEN >= dateadd(ww, -@weeks, getdate())
``` For the rest of the company: (i) get the average checks for everyone except your person for the last 26 weeks; (ii) get the average of those averages (no need to place 0 for a missing person here) ```
select avg (Person_Check_Amount)
from
(
  SELECT PERSON_NO, SUM(CHECK_AMOUNT)/@weeks as Person_Check_Amount
  FROM CHECKS
  WHERE PERSON_NO <> '001'
  and DATE_GIVEN >= dateadd(ww, -@weeks, getdate())
  GROUP BY PERSON_NO
) t
``` **UPDATE** I have added `/COUNT (distinct PERSON_NO)` because the number of people in the company varies from one week to another. Now we can combine these queries to get a single table for comparison. It can be done in a single query with common table expressions, which make the logic more visible.
Here I change `DATEPART` to `DATEDIFF`, so when we go back into the previous year we keep counting the number of weeks from today (25, 26, ..., 58, 59, ...), not the week number in the year (like 52) ```
DECLARE @weeks int = 26

;WITH Person AS
(
   SELECT datediff(ww, DATE_GIVEN, getdate())+1 AS Week, AVG(CHECK_AMOUNT) AS Person_Check_Amount
   FROM CHECKS
   WHERE PERSON_NO=11
   AND DATE_GIVEN >= dateadd(ww, -@weeks, getdate())
   GROUP BY datediff(ww, DATE_GIVEN, getdate()) +1
)
, Company AS
(
  SELECT week, AVG (COMPANY_Check_Amount) AS COMPANY_Check_Amount
  FROM
  (
    SELECT datediff(ww, DATE_GIVEN, getdate())+1 AS Week,
           SUM(CHECK_AMOUNT)/COUNT(DISTINCT PERSON_NO) AS COMPANY_Check_Amount
    FROM CHECKS
    WHERE PERSON_NO<>11
    AND DATE_GIVEN >= dateadd(ww, -@weeks, getdate())
    GROUP BY datediff(ww, DATE_GIVEN, getdate())+1
  ) t
  GROUP BY Week
)
SELECT c.week
  , isnull(Person_Check_Amount,0) Person_Check_Amount
  , isnull(Company_Check_Amount,0) Company_Check_Amount
FROM Person p
FULL OUTER JOIN Company c ON c.week = p.week
ORDER BY Week DESC
``` **[SQLFiddle](http://sqlfiddle.com/#!3/2f889/4)**
You want to do the `top` in a subquery and then do the average: ```
select avg(check_amount) as w1
from (SELECT TOP 26 *
      FROM CHECKS
      WHERE PERSON_NO='001'
      ORDER BY DATE_GIVEN DESC
     ) c
```
Get Average from last 26 entries
[ "", "sql", "sql-server", "" ]
Pretend I have a `cupcake_rating` table: ``` id | cupcake | delicious_rating -------------------------------------------- 1 | Strawberry | Super Delicious 2 | Strawberry | Mouth Heaven 3 | Blueberry | Godly 4 | Blueberry | Super Delicious ``` I want to find all the cupcakes that have a 'Super Delicious' AND 'Mouth Heaven' rating. I feel like this is easily achievable using a `group by` clause and maybe a `having`. I was thinking: ``` select distinct(cupcake) from cupcake_rating group by cupcake having delicious_rating in ('Super Delicious', 'Mouth Heaven') ``` I know I can't have two separate AND statements. I was able to achieve my goal using: ``` select distinct(cupcake) from cupcake_rating where cupcake in ( select cupcake from cupcake_rating where delicious_rating = 'Super Delicious' ) and cupcake in ( select cupcake from cupcake_rating where delicious_rating = 'Mouth Heaven' ) ``` This will not be satisfactory because once I add a third type of rating I am looking for, the query will take hours (there are a lot of cupcake ratings).
You're correct, you can use a HAVING clause; there's no need to use a self-join either. You want only a cupcake with two ratings, so restrict to those two ratings and then check that the DISTINCT number of ratings is equal to two: ``` select cupcake from cupcake_rating where delicious_rating in ('Super Delicious', 'Mouth Heaven') group by cupcake having count(distinct delicious_rating) = 2 ``` [SQL Fiddle](http://www.sqlfiddle.com/#!4/55152/1) This is far more easily extensible as you don't need to do a new self-join for every delicious rating, you just have to check that you have the number you want.
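The accepted query can be exercised end to end with an in-memory SQLite database. SQLite is only a stand-in here (the question is tagged Oracle), but this particular `GROUP BY` / `HAVING COUNT(DISTINCT ...)` pattern is standard SQL and runs unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cupcake_rating (id INTEGER, cupcake TEXT, delicious_rating TEXT);
    INSERT INTO cupcake_rating VALUES
        (1, 'Strawberry', 'Super Delicious'),
        (2, 'Strawberry', 'Mouth Heaven'),
        (3, 'Blueberry',  'Godly'),
        (4, 'Blueberry',  'Super Delicious');
""")

rows = conn.execute("""
    SELECT cupcake
    FROM cupcake_rating
    WHERE delicious_rating IN ('Super Delicious', 'Mouth Heaven')
    GROUP BY cupcake
    HAVING COUNT(DISTINCT delicious_rating) = 2
""").fetchall()

print(rows)  # only Strawberry carries both required ratings
```

Blueberry matches only one of the two ratings inside the `IN` filter, so its distinct count is 1 and the `HAVING` clause drops it.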
You can join all "Super Delicious" ratings to "Mouth Heaven" ratings on `cupcake`. This way you find all cupcakes that had both "Super Delicious" and "Mouth Heaven" ratings. ``` SELECT DISTINCT cr.cupcake FROM cupcake_rating cr JOIN cupcake_rating cr2 ON cr.cupcake = cr2.cupcake WHERE cr.delicious_rating = 'Super Delicious' AND cr2.delicious_rating = 'Mouth Heaven' ```
SQL Group By equivalent
[ "", "sql", "oracle", "" ]
I'm using the django rest framework to create an API. I have the following models: ``` class Category(models.Model): name = models.CharField(max_length=100) def __unicode__(self): return self.name class Item(models.Model): name = models.CharField(max_length=100) category = models.ForeignKey(Category, related_name='items') def __unicode__(self): return self.name ``` To create a serializer for the categories I'd do: ``` class CategorySerializer(serializers.ModelSerializer): items = serializers.RelatedField(many=True) class Meta: model = Category ``` ... and this would provide me with: ``` [{'items': [u'Item 1', u'Item 2', u'Item 3'], u'id': 1, 'name': u'Cat 1'}, {'items': [u'Item 4', u'Item 5', u'Item 6'], u'id': 2, 'name': u'Cat 2'}, {'items': [u'Item 7', u'Item 8', u'Item 9'], u'id': 3, 'name': u'Cat 3'}] ``` How would I go about getting the reverse from an Item serializer, ie: ``` [{u'id': 1, 'name': 'Item 1', 'category_name': u'Cat 1'}, {u'id': 2, 'name': 'Item 2', 'category_name': u'Cat 1'}, {u'id': 3, 'name': 'Item 3', 'category_name': u'Cat 1'}, {u'id': 4, 'name': 'Item 4', 'category_name': u'Cat 2'}, {u'id': 5, 'name': 'Item 5', 'category_name': u'Cat 2'}, {u'id': 6, 'name': 'Item 6', 'category_name': u'Cat 2'}, {u'id': 7, 'name': 'Item 7', 'category_name': u'Cat 3'}, {u'id': 8, 'name': 'Item 8', 'category_name': u'Cat 3'}, {u'id': 9, 'name': 'Item 9', 'category_name': u'Cat 3'}] ``` I've read through the docs on [reverse relationships](https://www.django-rest-framework.org/api-guide/relations/#reverse-relations) for the rest framework but that appears to be the same result as the non-reverse fields. Am I missing something obvious?
Just use a related field without setting `many=True`. Note that also because you want the output named `category_name`, but the actual field is `category`, you need to use the `source` argument on the serializer field. The following should give you the output you need... ``` class ItemSerializer(serializers.ModelSerializer): category_name = serializers.RelatedField(source='category', read_only=True) class Meta: model = Item fields = ('id', 'name', 'category_name') ```
In the DRF version 3.6.3 this worked for me ``` class ItemSerializer(serializers.ModelSerializer): category_name = serializers.CharField(source='category.name') class Meta: model = Item fields = ('id', 'name', 'category_name') ``` More info can be found here: [Serializer Fields core arguments](https://www.django-rest-framework.org/api-guide/fields/#source)
Retrieving a Foreign Key value with django-rest-framework serializers
[ "", "python", "django", "django-rest-framework", "" ]
I need to include the table name in a SELECT statement, together with some columns and the unique identifier of the table. I don't know if it is possible to take the table name from a select on that table, or some kind of unique identifier. How can I achieve this?
I thank you for your responses but I fixed this in this way (it was too easy actually) ``` select 'table1' as tableName, col1, col2 from anyTable; ```
You will need to query the system catalog of the database to find the primary key and all unique constraints of the table, then choose one that best suits your needs. You can expect to find 0, 1, or more such constraints. For an Oracle database you'd use something like ```
select c.constraint_name, col.column_name
from dba_constraints c, dba_cons_columns col
where c.table_name = 'YOURTABLE'
  and c.constraint_type in ('P', 'U')
  and c.constraint_name = col.constraint_name
order by c.constraint_name, col.position
``` For MySQL you would query the INFORMATION\_SCHEMA.TABLE\_CONSTRAINTS and INFORMATION\_SCHEMA.KEY\_COLUMN\_USAGE views in a similar manner.
Include the table name in a select statement
[ "", "mysql", "sql", "oracle", "" ]
I use genfromtxt to read in an array from a text file, and I need to split this array in half, do a calculation on the halves, and recombine them. However, I am struggling with recombining the two arrays. Here is my code: ```
X2WIN_IMAGE = np.genfromtxt('means.txt').T[1]
X2WINa = X2WIN_IMAGE[0:31]
z = np.mean(X2WINa)
X2WINa = X2WINa-z
X2WINb = X2WIN_IMAGE[31:63]
ww = np.mean(X2WINb)
X2WINb = X2WINb-ww
X2WIN = str(X2WINa)+str(X2WINb)
print X2WIN
``` How do I go about recombining X2WINa and X2WINb into one array? I just want one array with 62 components
```
combined_array = np.concatenate((X2WINa, X2WINb))
```
``` X2WINc = np.append(X2WINa, X2WINb) ```
Combining two arrays in numpy
[ "", "python", "arrays", "numpy", "" ]
I have an array of functions [f(x),g(x),...] What I want to do is call the appropriate function based on the range that the value of x is in. ```
f = lambda x: x+1
g = lambda x: x-1
h = lambda x: x*x

funcs = [f,g,h]

def superFunction(x):
    if x <= 20:
        return(funcs[0](x))
    if 20 < x <= 40:
        return(funcs[1](x))
    if x > 40:
        return(funcs[2](x))
``` Is there a nicer/more pythonic way to do this, handling a dynamic number of functions? The plan is to dynamically generate n polyfit functions along sections of data, then combine them into a single callable function.
You'd use a dispatch sequence: ``` funcs = ( (20, f), (40, g), (float('inf'), h), ) def superFunction(x): for limit, f in funcs: if x <= limit: return f(x) ``` or if the list of functions and limits is large, use a `bisect` search to find the closest limit.
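A runnable sketch of the `bisect` variant mentioned at the end of the answer, using the question's `f`, `g`, `h` (the name `super_function` is mine; the top band has no explicit limit and falls through to the last function):

```python
import bisect

f = lambda x: x + 1
g = lambda x: x - 1
h = lambda x: x * x

limits = [20, 40]       # sorted upper bounds; one fewer entry than funcs
funcs = [f, g, h]       # funcs[-1] handles everything above the last limit

def super_function(x):
    # bisect_left returns the index of the first limit >= x,
    # i.e. the band whose upper bound x falls under.
    return funcs[bisect.bisect_left(limits, x)](x)

print(super_function(10))  # band x <= 20, so f: 11
print(super_function(30))  # band 20 < x <= 40, so g: 29
print(super_function(50))  # band x > 40, so h: 2500
```

Because `bisect` does a binary search over the limits, this stays fast even when the dynamically generated list of polyfit segments grows large.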
Using NumPy to do super-fast selection, in case you have a lot of choices (otherwise, why not stick with "if" statements): ``` import numpy as np funcs = np.array([(20,f), (40,g), (np.inf,h)]) def superFunction(x): idx = np.argmax(x <= funcs[:,0]) return funcs[idx,1](x) ``` This works like your original code, but the function selection happens in C rather than a Python loop.
Is there a pythonic way to group range based segmented functions into a single function?
[ "", "python", "function", "lambda", "" ]
I have a really strange problem. Here is the sample code: ```
class SomeClass(object):
    a = []
    b = []

    def __init__(self, *args, **kwargs):
        self.a = [(1,2), (3,4)]
        self.b = self.a
        self.a.append((5,6))
        print self.b

SomeClass()
``` The print outputs [(1, 2), (3, 4), (5, 6)], but why isn't the result [(1,2), (3,4)]? Do you know how I can keep the old value of self.a in self.b? Thank you!
You are assigning *the same list* to `self.b`, not a copy. If you wanted `self.b` to refer to a copy of the list, create one using either `list()` or a full slice: ``` self.b = self.a[:] ``` or ``` self.b = list(self.a) ``` You can test this easily from the interactive interpreter: ``` >>> a = b = [] # two references to the same list >>> a [] >>> a is b True >>> a.append(42) >>> b [42] >>> b = a[:] # create a copy >>> a.append(3.14) >>> a [42, 3.14] >>> b [42] >>> a is b False ```
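The alias-versus-copy distinction above, condensed into one runnable snippet mirroring the question's class (plain variables here instead of attributes):

```python
a = [(1, 2), (3, 4)]
b = a        # alias: b and a are the SAME list object
c = a[:]     # copy: a new list with the same contents

a.append((5, 6))

print(b)  # sees the append, because b IS a
print(c)  # unchanged snapshot taken before the append
```

`b is a` is `True` while `c is a` is `False`; mutations through either alias are visible through both, but never through the copy.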
As many have already mentioned it, you end up having *two* references to the *same* list. Modifying the list through one reference or the other just modifies *the* list. Here is an illustration to make things clearer if needed: ![Shared reference](https://www.chicoree.fr/pub/python/python-mem.png) * Step "A" is just after ```
self.a = [(1,2), (3,4)]
``` * Step "B" is just after ```
self.b = self.a
``` * Step "C" is just after ```
self.a.append((5,6))
```
Python list append causes strange result
[ "", "python", "list", "" ]
I want Notepad++'s wonderful "Delete Surplus blank lines" feature in Python. Say I have a file like this ```
A



B
C


D
``` I want ```
A

B
C

D
``` What is the pythonic way of doing this? Here is what I tried ```
A=['a','\n','\n','\n','a','b','\n','\n','C','\n','\n','\n','\n','\n','\n','D']
B=[]
count=0
for l in range(0,len(A)):
    if A[l]=='\n':
        count=count+1
    else:
        count=0
    if count>1:
        if A[l+1]=='\n':
            continue
        else:
            B.append('\n')
    else:
        if A[l]!='\n':
            B.append(A[l])
print B
```
Here is a one-liner: ``` In [35]: A=['a','\n','\n','\n','a','b','\n','\n','C','\n','\n','\n','\n','\n','\n','D'] In [36]: B = [A[0]] + [A[i] for i in range(1, len(A)) if A[i] != '\n' or A[i-1] != '\n'] In [37]: B Out[37]: ['a', '\n', 'a', 'b', '\n', 'C', '\n', 'D'] ``` It basically omits newlines that follow other newlines.
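The same "drop a blank that follows a blank" idea works just as well streamed line by line over a whole file, without the index arithmetic. A small sketch (the function name and sample data are mine):

```python
def squeeze_blank_lines(lines):
    """Collapse every run of blank lines down to a single blank line."""
    out = []
    prev_blank = False
    for line in lines:
        blank = (line.strip() == '')
        if blank and prev_blank:
            continue          # surplus blank line: skip it
        out.append(line)
        prev_blank = blank
    return out

text = ['A', '', '', '', 'B', 'C', '', '', 'D']
print(squeeze_blank_lines(text))  # ['A', '', 'B', 'C', '', 'D']
```

Fed from `open(path)` instead of a list, this processes arbitrarily large files in one pass with O(1) state.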
Make sure there's no more than `\n\n`, eg: ``` import re print re.sub('\n{3,}', '\n\n', your_string, flags=re.M) ``` And, using `itertools.groupby` for large files: ``` from itertools import groupby with open('your_file') as fin: for has_value, lines in groupby(fin, lambda L: bool(L.strip())): if not has_value: print continue for line in lines: print line, ```
Deleting surplus blank lines using Python
[ "python" ]
If I have this function: ``` def foo(arg_one, arg_two): pass ``` I can wrap it like so: ``` def bar(arg_one, arg_two): return foo(arg_one, arg_two) foo = bar ``` Is it possible to do this without knowing foo's required arguments, and if so, how?
You can use `*args` and `**kwargs`: ``` def bar(*args, **kwargs): return foo(*args, **kwargs) ``` `args` is a tuple of positional arguments. `kwargs` is a dictionary of keyword arguments. Note that calling those variables `args` and `kwargs` is just a naming convention. `*` and `**` do all the magic for unpacking the arguments. Also see: * [documentation](http://docs.python.org/2/tutorial/controlflow.html#arbitrary-argument-lists) * [What do \*args and \*\*kwargs mean?](https://stackoverflow.com/questions/287085/what-do-args-and-kwargs-mean) * [\*args and \*\*kwargs?](https://stackoverflow.com/questions/3394835/args-and-kwargs)
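One refinement worth knowing: a plain `*args, **kwargs` wrapper loses `foo`'s name and docstring. `functools.wraps` copies them over. A small runnable sketch (the `foo` body here is invented for the demo):

```python
import functools

def foo(arg_one, arg_two):
    """Add two things."""
    return arg_one + arg_two

@functools.wraps(foo)            # copy foo's name and docstring onto the wrapper
def bar(*args, **kwargs):
    return foo(*args, **kwargs)

print(bar(1, arg_two=2))         # 3 -- positional and keyword args both pass through
print(bar.__name__)              # 'foo', thanks to functools.wraps
```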
You can use the argument unpacking operators (or whatever [they're called](https://stackoverflow.com/questions/2322355/proper-name-for-python-operator)): ``` def bar(*args, **kwargs): return foo(*args, **kwargs) ``` If you don't plan on passing any keyword arguments, you can remove `**kwargs`.
Python wrap function with unknown arguments
[ "python", "function", "arguments" ]
I have a list of strings from the Amazon S3 API service which contain the full file path, like this: ``` fileA.jpg fileB.jpg images/ ``` I want to partition folders and files into different lists. How can I divide them? I was thinking of a regex like this: ``` for path in paths: if re.search("/$", path): dir_list.append(path) else: file_list.append(path) ``` Is there any better way?
Don't use a regular expression; just use `.endswith('/')`: ``` for path in lst: if path.endswith('/'): dir_list.append(path) else: file_list.append(path) ``` `.endswith()` performs better than a regular expression and is simpler to boot: ``` >>> sample = ['fileA.jpg', 'fileB.jpg', 'images/'] * 30 >>> import random >>> random.shuffle(sample) >>> from timeit import timeit >>> import re >>> def re_partition(pattern=re.compile(r'/$')): ... for e in sample: ... if pattern.search(e): pass ... else: pass ... >>> def endswith_partition(): ... for e in sample: ... if e.endswith('/'): pass ... else: pass ... >>> timeit('f()', 'from __main__ import re_partition as f, sample', number=10000) 0.2553541660308838 >>> timeit('f()', 'from __main__ import endswith_partition as f, sample', number=10000) 0.20675897598266602 ```
From [Filter a list into two parts](http://nedbatchelder.com/blog/201306/filter_a_list_into_two_parts.html), an iterable version: ``` from itertools import tee a, b = tee((p.endswith("/"), p) for p in paths) dirs = (path for isdir, path in a if isdir) files = (path for isdir, path in b if not isdir) ``` It makes it possible to consume an infinite stream of paths from the service, as long as both the `dirs` and `files` generators are advanced nearly in sync.
What is best to distinguish between file and dir path in a string
[ "python", "regex" ]
I've two tables as follow: `tag` table (only tags in english): ``` ID title ------------- 1 tag_1 2 tag_2 3 tag_3 ``` `tag_translation` table: ``` ID title locale tag_id (foreign key) ----------------------------------------------- 1 tag_1_fr FR 1 2 tag_1_de DE 1 3 tag_2_es ES 3 ``` How to do a SQL query returning all tags in french, and if no tag found in french, fallback to english? Example of result (select all tags in french, fallback to english): ``` ID title --------------- 1 tag_1_fr 2 tag_2 3 tag_3 ```
``` SELECT T.ID ,COALESCE(TT.TITLE, T.TITLE) AS TITLE FROM tag T LEFT JOIN tag_translation TT ON T.ID = TT.tag_id AND TT.locale = 'FR'; ``` This assumes that tag\_id and locale are unique in tag\_translation.
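As a sanity check, here is the same query run against the question's sample data in an in-memory SQLite database from Python. SQLite is used only for convenience; the join and `COALESCE` logic is identical in PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tag (ID INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tag_translation (
        ID INTEGER PRIMARY KEY, title TEXT, locale TEXT,
        tag_id INTEGER REFERENCES tag(ID));
    INSERT INTO tag VALUES (1, 'tag_1'), (2, 'tag_2'), (3, 'tag_3');
    INSERT INTO tag_translation VALUES
        (1, 'tag_1_fr', 'FR', 1),
        (2, 'tag_1_de', 'DE', 1),
        (3, 'tag_2_es', 'ES', 3);
""")

rows = conn.execute("""
    SELECT T.ID, COALESCE(TT.title, T.title) AS title
    FROM tag T
    LEFT JOIN tag_translation TT
        ON T.ID = TT.tag_id AND TT.locale = 'FR'
    ORDER BY T.ID
""").fetchall()

print(rows)   # [(1, 'tag_1_fr'), (2, 'tag_2'), (3, 'tag_3')]
```

Tag 1 has a French row, so it wins; tags 2 and 3 have no `FR` row, so the `LEFT JOIN` yields `NULL` and `COALESCE` falls back to the English title, matching the expected output in the question.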
Try this with a `CASE` expression: ``` SELECT T.ID, CASE WHEN TT.title IS NOT NULL THEN TT.title ELSE T.title END AS title FROM tag T LEFT JOIN tag_translation TT ON T.ID = TT.tag_id AND TT.locale = 'FR' ``` Note that the locale filter belongs in the `ON` clause; otherwise every translation row (DE, ES, ...) would also join and produce duplicate rows per tag.
JOIN only if column is NULL
[ "sql", "postgresql", "join" ]
I have some Matlab code that I'm trying to translate into Python. I'm new to Python but I have been able to answer a lot of questions by googling a little bit. But now, I'm trying to figure out the following: I have a for loop where I apply different things to each column, but you don't know the number of columns. For example. In Matlab, nothing easier than this: ``` for n = 1:size(x,2); y(n) = mean(x(:,n)); end ``` But I have no idea how to do it in Python when, for example, the number of columns is 1, because I can't do x[:,1] in Python. Any idea? Thanks
Try [`numpy`](http://www.numpy.org/). It provides Python bindings for a high-performance math library written in C. I believe it has the same concepts of matrix slice operations, and it is significantly faster than the same code written in pure Python (in most cases). Regarding your example, I think the closest would be something using [`numpy.mean`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html). In pure Python it is harder to calculate the mean of a column, but if you are able to transpose the matrix you could do it using something like this: ``` # there is no builtin avg function def avg(lst): return sum(lst)/len(lst) rows = list(avg(row) for row in a) ```
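For the original question, a mean per *column* without knowing the column count, `zip(*x)` transposes a list of rows into columns, so plain Python handles any width, including a single column. With numpy, the equivalent would be `x.mean(axis=0)`. A minimal sketch with made-up data:

```python
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]

# zip(*x) yields the columns of x, however many there are
y = [sum(col) / len(col) for col in zip(*x)]
print(y)   # [2.5, 3.5, 4.5]
```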
Yes, if you use numpy you can use x[:,1], and you also get other data structures (vectors instead of lists). The main difference between Matlab and numpy is that Matlab uses matrices for calculations and numpy uses vectors, but you get used to it. I think [this guide](https://web.archive.org/web/20150818181140/http://wiki.scipy.org/NumPy_for_Matlab_Users) will help you out.
unknown vector size python
[ "python", "matlab", "vector" ]
I am trying to create a TimeLapse creator in python using OpenCV, and have the following code: ``` import cv2 import os import string directory = (r"C:\Users\Josh\Desktop\20130216") video = cv2.VideoWriter(r"C:\Users\Josh\Desktop\video.avi", cv2.cv.CV_FOURCC('F','M','P', '4'), 15, (1536, 1536), 1) for filename in os.listdir(directory): if filename.endswith(".jpg"): video.write(cv2.imread(filename)) cv2.destroyAllWindows() video.release() ``` The folder has 1,440 pictures in it, yet video.avi is only 5.54kb in size, and has an empty output when played. Can anyone see any flaws in my code, and give me any help?
It seems that you have Windows without FFmpeg support. I had the same problem and [OpenCV 2.4 VideoCapture not working on Windows](https://stackoverflow.com/questions/11699298/opencv-2-4-videocapture-not-working-on-windows) helped me with that. The opencv246\opencv\3rdparty\ffmpeg\opencv\_ffmpeg.dll should be copied to c:\Python27\opencv\_ffmpeg246.dll Your frame size is defined in your code as 1536x1536, so all of your .jpg files should match that size. ``` video = cv2.VideoWriter(r"C:\Users\Josh\Desktop\video.avi", cv2.cv.CV_FOURCC('F','M','P', '4'), 15, (1536, 1536), 1) ```
I had the same issue because the input frame size (`width` and `height`) did not match the output file. Mine shows 6KB. I believe this is because OpenCV is not primarily intended for this kind of job (**recording/saving video**). To fix this, just match the `width` and `height`; note that `frame.shape` is `(rows, cols[, channels])`, i.e. height first: ``` height, width = frame.shape[:2] out = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc('M','J','P','G'), fps, (width, height)) ```
OpenCV VideoWriter with python gives 5.54kb file
[ "python", "opencv", "video" ]
I have just finished my script in Python and I want my colleagues to use it as well. It runs in a Python 2.7, Windows 7 64-bit Professional environment. Now comes the question: how can I make them use my script in an easy way? * The first choice is the hard way: making them install Python on their machines and then install paramiko and Tkinter. I had a very hard time finding & installing these modules (especially the Windows binary files) and do not want to suffer from the same issues again. * I am a newbie in this environment and I think there would be practical solutions to this problem. So I wanted to ask you guys; any ideas appreciated.
Use cx\_Freeze; I checked it. cx\_Freeze is a module used to turn Python scripts into an executable (.exe) file. It is a very easy method. 1. Download and install the cx\_Freeze Windows binary for your Python version from <http://www.lfd.uci.edu/~gohlke/pythonlibs/> 2. Find the location of your source code folder. For example, I created a test.py and stored it in c:\samp. c:\samp\test.py ``` x="hai this is an exe file created from python scripts using cxfreeze. Press Enter to exit >> " y=input(x) ``` 3. Create a folder to store the build files (.exe and other files). E.g. I created a folder c:\samp\build\ 4. Open a Command Prompt (Start->Run, type "cmd", press Enter) and type ``` C:\Documents and Settings\suh>c:\python32\scripts\cxfreeze c:\samp\test.py --target-dir=c:\samp\build ``` For more options type `C:\Documents and Settings\suh>c:\python32\scripts\cxfreeze -help`
You can use [`py2exe`](http://www.py2exe.org/)(windows), [`py2app`](http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html)(Mac OS X) or [`cx_freeze`](http://cx-freeze.sourceforge.net/) to convert your application to an executable. `cx_Freeze` is cross platform and should work on any platform that Python itself works on.
How to make others use my Python Script
[ "python", "tkinter", "windows-7-x64", "paramiko" ]
I am trying to walk a directory tree and exclude certain directories. Now, according to [os.walk exclude .svn folders](https://stackoverflow.com/questions/4276255/os-walk-exclude-svn-folders) for example, I should be able to modify the 'dirs' list which would then let me prune the tree. I tried the following: ``` import sys import os if __name__ == "__main__": for root, dirs, files in os.walk("/usr/lib"): print root dirs = [] ``` I would have expected not to enter ANY subdirectories but I do: ``` /usr/lib /usr/lib/akonadi /usr/lib/akonadi/contact /usr/lib/akonadi/contact/editorpageplugins /usr/lib/os-prober /usr/lib/gnome-settings-daemon-3.0 /usr/lib/gnome-settings-daemon-3.0/gtk-modules /usr/lib/git-core /usr/lib/git-core/mergetools /usr/lib/gold-ld /usr/lib/webkitgtk-3.0-0 /usr/lib/webkitgtk-3.0-0/libexec ``` What am I missing?
Try one of the following: ``` dirs[:] = [] ``` OR ``` del dirs[:] ```
``` dirs = [] ``` rebinds the local name `dirs`. You can modify the contents of the list instead eg. like this: ``` dirs[:] = [] ```
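A self-contained way to see the pruning work is to build a throwaway tree and filter `dirs` with slice assignment. The directory names here are made up for the demo:

```python
import os
import tempfile

# build a tiny tree: root/keep/sub and root/.svn
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "keep", "sub"))
os.makedirs(os.path.join(root, ".svn"))

exclude = {".svn"}
visited = []
for dirpath, dirs, files in os.walk(root):
    # slice assignment mutates the very list os.walk is holding,
    # so pruned directories are never descended into
    dirs[:] = [d for d in dirs if d not in exclude]
    visited.append(os.path.basename(dirpath))

print(sorted(visited[1:]))   # ['keep', 'sub'] -- '.svn' was never entered
```

Replacing the slice assignment with `dirs = [...]` would only rebind the local name, and `.svn` would show up in the output.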
python walk directory tree with excluding certain directories
[ "python" ]
I'm trying to create a program that finds the average of n numbers inputted but I'm having trouble getting the recursions to work. The program sort of works but it doesn't exit out of the while statement when I want it to. ``` print("This program 'is' designed to find the average of n numbers you input\n") #print statement that introduces the average finder counter = 0 #this counter will count how many numbers the user has inserted into the program and will be used as denominator sum_of_numbers = 0 #this number is set as 0 as currently the sum is 0, as more numbers are inputed, they will be added together first_question = input('''Would you like to enter a number? Type "yes" if you do, and "no" if you don't. \n\n''') #takes input of yes or no to see whether user wants to find average of numbers while first_question == "yes" : ent_num = int(input("Enter your number here:")) sum_of_numbers = sum_of_numbers + ent_num counter = counter + 1 second_question = input('''Would you like to enter another number after this? Type "yes" if you do, and "no" if you don't. \n''') while second_question == "yes" : ent_num = int(input("Enter your next number here: ")) sum_of_numbers = sum_of_numbers + ent_num counter = counter + 1 else : print("Your average is " + str(sum_of_numbers/counter)) ``` Can someone please help me figure it out? I can't use functions such as try or eval or len its all really basic stuff like the 3rd day in my class
You only need one loop to work. Just ask you question, get the input, and loop. When you enter no, then the loop will exit and compute your average. ``` print("This program 'is' designed to find the average of n numbers you input\n") #print statement that introduces the average finder counter = 0 #this counter will count how many numbers the user has inserted into the program and will be used as denominator sum_of_numbers = 0 #this number is set as 0 as currently the sum is 0, as more numbers are inputed, they will be added together first_question = input('''Would you like to enter a number? Type "yes" if you do, and "no" if you don't. \n\n''') #takes input of yes or no to see whether user wants to find average of numbers while first_question == "yes" : ent_num = int(input("Enter your number here:")) sum_of_numbers = sum_of_numbers + ent_num counter = counter + 1 first_question = input('''Would you like to enter another number after this? Type "yes" if you do, and "no" if you don't. \n''') print("Your average is " + str(sum_of_numbers/counter)) ```
first\_question is never changed, and you should also do the math after you break out of the second while loop, indicating you are done. first\_question always stays "yes" since you assign the input to second\_question. Also note that second\_question must be asked again inside the loop, or it can never become "no". Since you're never going to ask the first\_question again, simply: ``` print("This program 'is' designed to find the average of n numbers you input\n") #print statement that introduces the average finder counter = 0 #this counter will count how many numbers the user has inserted into the program and will be used as denominator sum_of_numbers = 0 #this number is set as 0 as currently the sum is 0, as more numbers are inputed, they will be added together first_question = input('''Would you like to enter a number? Type "yes" if you do, and "no" if you don't. \n\n''') #takes input of yes or no to see whether user wants to find average of numbers. if first_question == "yes" : ent_num = int(input("Enter your number here:")) sum_of_numbers = sum_of_numbers + ent_num counter = counter + 1 second_question = input('''Would you like to enter another number after this? Type "yes" if you do, and "no" if you don't. \n''') while second_question == "yes" : ent_num = int(input("Enter your next number here: ")) sum_of_numbers = sum_of_numbers + ent_num counter = counter + 1 second_question = input('''Would you like to enter another number after this? Type "yes" if you do, and "no" if you don't. \n''') print("Your average is " + str(sum_of_numbers/counter)) else : print("Well if you're just going to answer no off the bat why did you bother running me\n") ```
Finding the average of n numbers with Python
[ "python", "python-3.x" ]
I am making a basic text adventure RPG game in Python. My problem is with the movement system, where the dungeon is broken into squares each with a coordinate. The player is asked for a coordinate of where to move which is then passed into the function move2, then the program checks a bunch of if statements, similar to the one below, and prints the according map of the floor. There is an if statement for each coordinate, so there are 40 different if statements, each with an image of the map. The problem is that nothing happens after the player is asked for a coordinate. The program ends after asking for a coordinate, but does not give any error (and I know I'm inputting a correct coordinate.) ``` move = input("\n To move around the room Hero, simply think the coordinates of the tile you want to move to! However, you can only move one tile at a time Ex: You are at tile 5,4. You can move to 5,3 5,5 or 4,4") move2(move) ``` I apologize for the bad code. I'm sure there is a far better method to do this, but none that I know of yet... ``` def move2 (move): while move == "5,1" "5,2" "5,3" "5,4" "5,5" "5,6" "5,7" "5,8" "4,1" "4,2" "4,3" "4,4" "4,5" "4,6" "4,7" "4,8" "3,1" "3,2" "3,3" "3,4" "3,5" "3,6" "3,7" "3,8" "2,1" "2,2" "2,3" "2,4" "2,5" "2,6" "2,7" "2,8" "1,1" "1,2" "1,3" "1,4" "1,5" "1,6" "1,7" "1,8": if move == "5,3": move = input(""" 1 2 3 4 5 6 7 8 __ __ __ D_ __ __ __ __ 1 |_ |_ |_ |_ |_ |_ |_ |_C| 2 |_ |_ |_ |_ |_ |_ |_ |_ | 3 |_ |?_|_ |_ |_ |_ |_ |_ | 4 |_ |_ |_ |_ |_ |_ |_ |_ | 5 |_ |_ |_x|_ |_ |_ |_ |_ | D""") ```
This will help a bit, but you should really read a tutorial: ``` while move in ("5,1", "5,2", "5,3", "5,4", ... etc): # body ```
As others have pointed out, ``` while move == "5,1" "5,2" "5,3" "5,4" "5,5" "5,6" "5,7" "5,8" "4,1" "4,2" "4,3" "4,4" "4,5" "4,6" "4,7" "4,8" "3,1" "3,2" "3,3" "3,4" "3,5" "3,6" "3,7" "3,8" "2,1" "2,2" "2,3" "2,4" "2,5" "2,6" "2,7" "2,8" "1,1" "1,2" "1,3" "1,4" "1,5" "1,6" "1,7" "1,8": ``` concatenates (smashes together) all the strings. What you want instead is: ``` while move in ("5,1", "5,2", "5,3", "5,4", "5,5", "5,6", "5,7", "5,8", "4,1", "4,2", "4,3", "4,4", "4,5", "4,6", "4,7", "4,8", "3,1", "3,2", "3,3", "3,4", "3,5", "3,6", "3,7", "3,8", "2,1", "2,2", "2,3", "2,4", "2,5", "2,6", "2,7", "2,8", "1,1", "1,2", "1,3", "1,4", "1,5", "1,6", "1,7", "1,8"): ``` but that's not so great either. Instead I'd use better string-matching: ``` import re while re.match(r'\d,\d', move): ```
While loop not executing in Python
[ "python", "while-loop" ]
Is it possible to have a foreign key that requires either column A or column B to have a value, but not both. And the foreign key for column A matches Table 1 and the foreign key for column B matches Table 2?
A check constraint can handle this. If this is SQL Server, something like this will work: ``` create table A (Id int not null primary key) go create table B (Id int not null primary key) go create table C (Id int not null primary key, A_Id int null, B_Id int null) go alter table C add constraint FK_C_A foreign key (A_Id) references A (Id) go alter table C add constraint FK_C_B foreign key (B_Id) references B (Id) go alter table C add constraint CK_C_OneIsNotNull check (A_Id is not null or B_Id is not null) go alter table C add constraint CK_C_OneIsNull check (A_Id is null or B_Id is null) go ```
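The same pair of constraints can be sanity-checked with SQLite from Python. The syntax differs slightly from the SQL Server statements above, but the either-or-not-both logic is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite only enforces FKs when asked
conn.executescript("""
    CREATE TABLE A (Id INTEGER PRIMARY KEY);
    CREATE TABLE B (Id INTEGER PRIMARY KEY);
    CREATE TABLE C (
        Id   INTEGER PRIMARY KEY,
        A_Id INTEGER REFERENCES A(Id),
        B_Id INTEGER REFERENCES B(Id),
        CHECK (A_Id IS NOT NULL OR B_Id IS NOT NULL),  -- at least one is set
        CHECK (A_Id IS NULL OR B_Id IS NULL)           -- at most one is set
    );
    INSERT INTO A VALUES (1);
    INSERT INTO B VALUES (1);
""")

conn.execute("INSERT INTO C VALUES (1, 1, NULL)")   # exactly one key: accepted

def rejected(sql):
    """Return True if the insert violates a constraint."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

print(rejected("INSERT INTO C VALUES (2, 1, 1)"))        # both set -> True
print(rejected("INSERT INTO C VALUES (3, NULL, NULL)"))  # neither set -> True
```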
It depends on which database you're working with. If you want a table `Foo` that has FK relationships to `Table1` and to `Table2` but only one at a time, then you'll need to set up either some sort of [trigger](http://msdn.microsoft.com/en-us/library/ms189799.aspx) (my links assume SQL Server, but the ideas's the same) or [Constraint](http://msdn.microsoft.com/en-us/library/ms189862%28v=sql.105%29.aspx) to enforce your rule that only one column have a value.
Foreign Key for either-or column?
[ "sql" ]
So, I have a problem with a SQL Query. It's about getting weather data for German cities. I have 4 tables: staedte (the cities with primary key loc\_id), gehoert\_zu (contains the city-key and the key of the weather station that is closest to this city (stations\_id)), wettermessung (contains all the weather information and the station's key value) and wetterstation (contains the stations key and location). And I'm using PostgreSQL Here is how the tables look like: ``` wetterstation s_id[PK] standort lon lat hoehe ---------------------------------------- 10224 Bremen 53.05 8.8 4 wettermessung stations_id[PK] datum[PK] max_temp_2m ...... ---------------------------------------------------- 10224 2013-3-24 -0.4 staedte loc_id[PK] name lat lon ------------------------------- 15 Asch 48.4 9.8 gehoert_zu loc_id[PK] stations_id[PK] ----------------------------- 15 10224 ``` What I'm trying to do is to get the name of the city with the (for example) highest temperature at a specified date (could be a whole month, or a day). Since the weather data is bound to a station, I actually need to get the station's ID and then just choose one of the corresponding to this station cities. A possible question would be: "In which city was it hottest in June ?" and, say, the highest measured temperature was in station number 10224. As a result I want to get the city Asch. What I got so far is this ``` SELECT name, MAX (max_temp_2m) FROM wettermessung, staedte, gehoert_zu WHERE wettermessung.stations_id = gehoert_zu.stations_id AND gehoert_zu.loc_id = staedte.loc_id AND wettermessung.datum BETWEEN '2012-8-1' AND '2012-12-1' GROUP BY name ORDER BY MAX (max_temp_2m) DESC LIMIT 1 ``` There are two problems with the results: 1) it's taking waaaay too long. The tables are not that big (cities has about 70k entries), but it needs between 1 and 7 minutes to get things done (depending on the time span) 2) it ALWAYS produces the same city and I'm pretty sure it's not the right one either. 
I hope I managed to explain my problem clearly enough and I'd be happy for any kind of help. Thanks in advance ! :D
If you want to get the max temperature per city use this statement: ``` SELECT * FROM ( SELECT gz.loc_id, MAX(max_temp_2m) as temperature FROM wettermessung as wm INNER JOIN gehoert_zu as gz ON wm.stations_id = gz.stations_id WHERE wm.datum BETWEEN '2012-8-1' AND '2012-12-1' GROUP BY gz.loc_id) as subselect INNER JOIN staedte as std ON std.loc_id = subselect.loc_id ORDER BY subselect.temperature DESC ``` Use this statement to get the city with the highest temperature (only 1 city): ``` SELECT * FROM( SELECT name, MAX(max_temp_2m) as temp FROM wettermessung as wm INNER JOIN gehoert_zu as gz ON wm.stations_id = gz.stations_id INNER JOIN staedte as std ON gz.loc_id = std.loc_id WHERE wm.datum BETWEEN '2012-8-1' AND '2012-12-1' GROUP BY name ORDER BY MAX(max_temp_2m) DESC LIMIT 1) as subselect ORDER BY temp desc LIMIT 1 ``` For performance reasons, always use explicit joins (LEFT, RIGHT, INNER JOIN) and avoid implicit joins with comma-separated table names, so your SQL server does not have to guess your table references.
This is a general example of how to get the item with the highest, lowest, biggest, smallest, whatever value. You can adjust it to your particular situation. ``` select fred, barney, wilma from bedrock join (select fred, max(dino) maxdino from bedrock where whatever group by fred ) flinstone on bedrock.fred = flinstone.fred where dino = maxdino and other conditions ```
How do I use the MAX function over three tables?
[ "sql", "postgresql" ]
I am thinking about a problem I haven't encountered before and I'm trying to determine the most efficient algorithm to use. I am iterating over two lists, using each pair of elements to calculate a value that I wish to sort on. My end goal is to obtain the top twenty results. I could store the results in a third list, sort that list by absolute value, and simply slice the top twenty, but that is not ideal. Since these lists have the potential to become extremely large, I'd ideally like to only store the top twenty absolute values, evicting old values as a new top value is calculated. What would be the most efficient way to implement this in python?
Take a look at [`heapq.nlargest`](http://docs.python.org/2/library/heapq.html#heapq.nlargest): > `heapq.nlargest(n, iterable[, key])` > > Return a list with the *n* largest elements from the dataset defined by *iterable*. *key*, if provided, specifies a function of one argument that is used to extract a comparison key from each element in the iterable: `key=str.lower` Equivalent to: `sorted(iterable, key=key, reverse=True)[:n]`
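For the question's setup, pairing two lists and keeping only the top results by absolute value, this can be a sketch. The pairwise calculation is just a stand-in; the point is that the generator never materialises the full result list:

```python
import heapq

a = [10, 1, 5, 2]
b = [1, 9, 2, 30]

# values are computed lazily, one pair at a time
diffs = (x - y for x, y in zip(a, b))

# keep only the 3 largest by absolute value
top3 = heapq.nlargest(3, diffs, key=abs)
print(top3)   # [-28, 9, -8]
```

Internally `nlargest` keeps a heap of at most n items, which is exactly the "evict old values as new top values arrive" behaviour asked for, in O(N log n) time.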
You can use `izip` to iterate the two lists in parallel, and build a generator to lazily do a calculation over them, then `heapq.nlargest` to effectively keep the top `n`: ``` from itertools import izip import heapq list_a = [1, 2, 3] list_b = [3, 4, 7] vals = (abs(a - b) for a, b in izip(list_a, list_b)) print heapq.nlargest(2, vals) ```
Python Sort On The Fly
[ "python", "algorithm", "sorting" ]
I need to rewrite some Python script in Objective-C. It's not that hard since Python is easily readable but this piece of code struggles me a bit. ``` def str_to_a32(b): if len(b) % 4: # pad to multiple of 4 b += '\0' * (4 - len(b) % 4) return struct.unpack('>%dI' % (len(b) / 4), b) ``` What is this function supposed to do?
I'm not positive, but I'm using the [documentation](http://docs.python.org/2/library/struct.html#struct.unpack) to take a stab at it. Looking at the docs, we're going to return a tuple based on the format string: > Unpack the string (presumably packed by pack(fmt, ...)) according to the given format. The result is a tuple even if it contains exactly one item. The string must contain exactly the amount of data required by the format (len(string) must equal calcsize(fmt)). The item coming in (`b`) is *probably* a byte buffer (represented as a string) - looking at the examples they are represented with the `\x` escape, [which consumes](http://www.python.org/dev/peps/pep-0223/) the next two characters as hex. It appears the format string is ``` '>%dI' % (len(b) / 4) ``` The `%` and `%d` are going to put a number into the format string, so if the length of b is 32 the format string becomes ``` >8I ``` The first part of the format string is `>`, which the [documentation](http://docs.python.org/2/library/struct.html#byte-order-size-and-alignment) says is setting the byte order to big-endian and size to standard. The `I` says it will be an unsigned int with size 4 ([docs](http://docs.python.org/2/library/struct.html#format-characters)), and the 8 in front of it means it will be repeated 8 times. ``` >IIIIIIII ``` --- So I think this is saying: take this byte buffer, make sure it's a multiple of 4 by appending as many `0x00`s as is necessary, then unpack that into a tuple with as many unsigned integers as there are blocks of 4 bytes in the buffer.
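A concrete run of the function's logic, using Python 3 `bytes` and values chosen for the demo:

```python
import struct

b = b'\x00\x00\x00\x2a\x00\x00\x00\x01\xff'   # 9 bytes: not a multiple of 4
if len(b) % 4:
    b += b'\x00' * (4 - len(b) % 4)           # pad to 12 bytes with zero bytes

# '>3I' -> three big-endian unsigned 32-bit ints
values = struct.unpack('>%dI' % (len(b) // 4), b)
print(values)   # (42, 1, 4278190080) -- the last int is 0xff000000
```

The trailing `0xff` ends up as the most significant byte of the final integer, which is why the padding goes on the right.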
Looks like it's supposed to take an input array of bytes represented as a string and unpack them as big-endian (the ">") unsigned ints (the 'I'). The formatting codes are explained in <http://docs.python.org/2/library/struct.html>
str_to_a32 - What does this function do?
[ "python" ]
Can someone tell me what a reverse relationship means? I have started using Django and in a lot of places in the documentation I see 'reverse relationship' being mentioned. What does it mean exactly? Why is it useful? What does it have to do with related\_name, in reference to [this post](https://stackoverflow.com/questions/2642613/what-is-the-related-name-mean-in-django)?
Here is the documentation on [related\_name](https://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects) Lets say you have 2 models ``` class Group(models.Model): #some attributes class Profile(models.Model): group = models.ForeignKey(Group) #more attributes ``` Now, from a profile object, you can do `profile.group`. But if you want the profile objects given the `group` object, How would you do that? Thats' where `related name` or the `reverse relationship` comes in. Django, by defaults gives you a default `related_name` which is the ModelName (in lowercase) followed by `_set` - In this case, It would be `profile_set`, so `group.profile_set`. However, you can override it by specifying a `related_name` in the `ForeignKey` field. ``` class Profile(models.Model): group = models.ForeignKey(Group, related_name='profiles') #more attributes ``` Now, you can access the foreign key as follows: ``` group.profiles.all() ```
For a clearer picture you can assume that when we use a reverse relationship, it adds an extra field to the referenced model. For example: ``` class Employee(models.Model): name = models.CharField() email = models.EmailField() class Salary(models.Model): amount = models.IntegerField() employee = models.ForeignKey(Employee, on_delete=models.CASCADE, related_name='salary') ``` After using related\_name in the Salary model, you can now assume the Employee model has one more field: `salary`. For example, the available fields would now be: `name`, `email`, and `salary`. To find an employee, we can simply query in this way: `e = Employee.objects.filter(some filter).first()` To check their salary, we write `e.salary` (we can now use salary as an attribute or field on the Employee model). This will give you the salary instance of that employee, and you can find the amount by writing `e.salary.amount`. In the case of a many-to-many relationship we can use `.all()` and then iterate over it.
Django What is reverse relationship?
[ "python", "django" ]
I'm looking to remove several indices from a list, and want to filter them by content. For example: ``` L= [(1, 2, 3), (etc, etc, etc), (......)] if L[i] == 1: L[i] == nan >>>L [(nan), (etc, etc.......] ``` I know this code isn't correct, but it's just an example to help illustrate what I want to do; any help is appreciated, thanks.
You use a list comprehension: ``` L = [float('nan') if el[0] == 1 else el for el in L] ``` The `if .. else ..` part is called a [conditional expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions). This *replaces* the list with a new list. If you have multiple references to the same list, you can replace the *elements* of the list instead with a slice assignment: ``` L[:] = [float('nan') if el[0] == 1 else el for el in L] ``` Now all elements in `L` will be replaced with all the elements produced by the list comprehension. The difference between the two expressions is subtle but crucial; the first rebinds `L` to point to a new list, the second retains that list but only replaces the elements contained in the list itself.
L[:] will modify the same list. This checks if the item contains 1 at 0th index, if `True` use `nan` else use the item as it is. ``` nan = float('nan') L[:] = [nan if item[0] == 1 else item for item in L] ``` Perhaps you're trying to do something like this this: This checks if the item contains 1 at any position not just 0th, if True use nan else use the item as it is. ``` L[:] = [nan if 1 in item else item for item in L] ```
Removing an index from a list if it contains a specific value
[ "python", "list" ]
`os.fork()` command is not supported under windows, and gives the following error: ``` AttributeError: 'module' object has no attribute 'fork' ``` So the general question is **How to run a script that contain a call to `os.fork()` under Windows?**. I don't mind using something that only mocks up the behavior and runs much slower, it's only for testing. I also prefer not to change the script as it is a 3rd party module. To give you a wider perspective, I'm trying to use the module `rq` a.k.a `redis queue` on Windows. Eventually I will run the code on heroku server which is a Linux machine, but during the development of the web app I'm using Windows.
There is no easy way to emulate `fork()` on systems that don't have it, such as Windows. If the code only uses `fork()` to start a new process with `exec`, you can port it to use `subprocess`. But this [doesn't appear to be the case](https://github.com/nvie/rq/blob/master/rq/worker.py#L357) in rq, so you have several options: 1. Port `rq` to Windows, or ask someone to do it for you. The easiest way to port the part of the code that calls `fork()` might be by using the `multiprocessing` module. However, you will still need to replace other parts of the code that depend on Unix, such as uses of `signal.alarm()` in the `timeouts` module. 2. Use Python under Cygwin, which emulates a fully functional (though slowish) `fork()`, so Cygwin Python has a working `os.fork()`. Note that to get `os.fork()`, you will need to use a Cygwin-built Python, such as the one that Cygwin ships, and *not* simply run the normal Windows Python under Cygwin. 3. Test the application on a Linux running in a virtual machine. Among these unhappy options I'd recommend the last one.
On Windows, you can install Cygwin with Python. That Python installation will have an os module which supports the os.fork() call.
how to run python script with os.fork on windows?
[ "python", "compatibility" ]
I am trying to setup a LAMP server in my lab, and I'm having trouble getting Apache to execute the .py files. It instead just downloads them. At first I thought my header [might be wrong](https://stackoverflow.com/questions/17329194/which-python-intrpetor-should-i-be-putting-in-py-header), but when I changed it, unfortunately I'm still not executing the .py. On the plus side I can load the site, run the PHP, and CRUD the MySQL. I think the problem might be in how I'm setting up my Virtual Host. Here is the Apache2.conf: ``` <VirtualHost *:80> Alias "/SiteIwant" "/var/www/SiteIwant" ServerName localhost DocumentRoot /var/www/SiteIwant CustomLog /var/www/SiteIwant/my_access.log combined ErrorLog /var/www/SiteIwant/my_error.log AddType application/x-httpd-php .php SetEnv LD_LIBRARY_PATH /etc/init.d/mysql <Directory /var/www/SiteIwant> Options None ExecCGI AddHandler cgi-script .cgi .pl .py #AddHandler mod_python .py DirectoryIndex index.php AllowOverride AuthConfig Order deny,allow Deny from All Allow from 999.999.999.0/24 #anonymized for posting here, but working </Directory> # <Directory /var/www/SiteIwant/cgi/> # AllowOverride All # Options +ExecCGI +SymLinksIfOwnerMatch # Order allow,deny # Allow from all #</Directory> </VirtualHost> ``` I've tried it with and without the specification on the cgi folder, and I've `chmod +rwx *.py` in `/var/www/SiteIwant/cgi`. Just for kicks (after that didn't help), I also changed the mode of the python interpreter in /usr/bin and /usr/local/bin to +rwx. Everything else in the apache2.conf file is as it comes out of the current Ubuntu Server LAMP-option install. I feel very stuck and like I'm missing something stupid/small. **Edit: Should this really be asked on Server Fault?** If I put an `AddHandler cgi-script .cgi .pl .py` outside the Virtual Host, I get a 403 permission error, despite chmod 777 on the folder.
Late answer: I ran into this too and got it working by adding `ExecCGI` to the directory options, or, for more security, like this: ``` <Directory /path/to/www/yourfile.py> Options +ExecCGI </Directory> ```
Do you have the apache wsgi module installed and enabled?
Setting up Apache on Ubuntu LAMP Server to execute Python in Virtual Host
[ "", "python", "apache", "ubuntu", "lamp", "addhandler", "" ]
I have a dictionary: ``` import math import random d = {1: ["Spices", math.floor(random.gauss(40, 5))], 2: ["Other stuff", math.floor(random.gauss(20, 5))], 3: ["Tea", math.floor(random.gauss(50, 5))], 10: ["Contraband", math.floor(random.gauss(1000, 5))], 5: ["Fruit", math.floor(random.gauss(10, 5))], 6: ["Textiles", math.floor(random.gauss(40, 5))] } ``` I want to print it out so it lines up nicely with headers. Can I add the headers to the dictionary and always be sure they come out on top? I've seen a few ways to do it vertically but I'd like to have it come out with max column widths close to the max str() or int(). Example: Key\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_Label\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_Number 1\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_Spices\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_42 2\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_Other Stuff\_\_\_\_\_\_\_\_\_\_\_\_\_16 etc Apparently I can't even do this inside of this editor manually, but I hope the idea comes across. I also don't really want the \_\_ either. Just a place holder. Thanks all.
I would prefer a pandas DataFrame: ``` import pandas as pd data = {'Name': ['a', 'b', 'c'], 'Age': [10, 11, 12]} df = pd.DataFrame(data) print(df) ``` **Output:** ``` Name Age 0 a 10 1 b 11 2 c 12 ``` Check out more about pretty-printing a DataFrame [here](https://stackoverflow.com/questions/18528533/pretty-printing-a-pandas-dataframe)
You can use [string formatting in python2](http://docs.python.org/2/library/string.html#formatspec): ``` print "{:<8} {:<15} {:<10}".format('Key','Label','Number') for k, v in d.iteritems(): label, num = v print "{:<8} {:<15} {:<10}".format(k, label, num) ``` Or, [string formatting in python3](http://docs.python.org/3/library/string.html#formatspec): ``` print("{:<8} {:<15} {:<10}".format('Key','Label','Number')) for k, v in d.items(): label, num = v print("{:<8} {:<15} {:<10}".format(k, label, num)) ``` **Output:** ``` Key Label Number 1 Spices 38.0 2 Other stuff 24.0 3 Tea 44.0 5 Fruit 5.0 6 Textiles 37.0 10 Contraband 1000.0 ```
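If the fixed column widths in the format strings above are a concern, they can be computed from the data itself. A small sketch (sample numbers hard-coded, since the question generates them randomly):

```python
d = {1: ["Spices", 42], 2: ["Other stuff", 16], 3: ["Tea", 44]}

# Build all rows, header first, with every cell as a string.
rows = [("Key", "Label", "Number")] + [
    (str(k), label, str(num)) for k, (label, num) in sorted(d.items())
]
# Width of each column = widest cell in that column.
widths = [max(len(cell) for cell in col) for col in zip(*rows)]
table = ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows]
print("\n".join(table))
```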
Python - Printing a dictionary as a horizontal table with headers
[ "", "python", "python-3.x", "dictionary", "" ]
I would like to understand how the built-in function `property` works. What confuses me is that `property` can also be used as a decorator, but it only takes arguments when used as a built-in function and not when used as a decorator. This example is from the [documentation](http://docs.python.org/3/library/functions.html#property): ``` class C: def __init__(self): self._x = None def getx(self): return self._x def setx(self, value): self._x = value def delx(self): del self._x x = property(getx, setx, delx, "I'm the 'x' property.") ``` `property`'s arguments are `getx`, `setx`, `delx` and a doc string. In the code below `property` is used as a decorator. The object of it is the `x` function, but in the code above there is no place for an object function in the arguments. ``` class C: def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x ``` How are the `x.setter` and `x.deleter` decorators created in this case?
The `property()` function returns a special [descriptor object](https://docs.python.org/howto/descriptor.html): ``` >>> property() <property object at 0x10ff07940> ``` It is this object that has *extra* methods: ``` >>> property().getter <built-in method getter of property object at 0x10ff07998> >>> property().setter <built-in method setter of property object at 0x10ff07940> >>> property().deleter <built-in method deleter of property object at 0x10ff07998> ``` These act as decorators *too*. They return a new property object: ``` >>> property().getter(None) <property object at 0x10ff079f0> ``` that is a copy of the old object, but with one of the functions replaced. Remember, that the `@decorator` syntax is just syntactic sugar; the syntax: ``` @property def foo(self): return self._foo ``` really means the same thing as ``` def foo(self): return self._foo foo = property(foo) ``` so `foo` the function is replaced by `property(foo)`, which we saw above is a special object. Then when you use `@foo.setter()`, what you are doing is call that `property().setter` method I showed you above, which returns a new copy of the property, but this time with the setter function replaced with the decorated method. The following sequence also creates a full-on property, by using those decorator methods. First we create some functions: ``` >>> def getter(self): print('Get!') ... >>> def setter(self, value): print('Set to {!r}!'.format(value)) ... >>> def deleter(self): print('Delete!') ... 
``` Then, we create a `property` object with only a getter: ``` >>> prop = property(getter) >>> prop.fget is getter True >>> prop.fset is None True >>> prop.fdel is None True ``` Next we use the `.setter()` method to add a setter: ``` >>> prop = prop.setter(setter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is None True ``` Last we add a deleter with the `.deleter()` method: ``` >>> prop = prop.deleter(deleter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is deleter True ``` Last but not least, the `property` object acts as a [descriptor object](https://docs.python.org/reference/datamodel.html#implementing-descriptors), so it has [`.__get__()`](https://docs.python.org/reference/datamodel.html#object.__get__), [`.__set__()`](http://docs.python.org/reference/datamodel.html#object.__set__) and [`.__delete__()`](http://docs.python.org/reference/datamodel.html#object.__delete__) methods to hook into instance attribute getting, setting and deleting: ``` >>> class Foo: pass ... >>> prop.__get__(Foo(), Foo) Get! >>> prop.__set__(Foo(), 'bar') Set to 'bar'! >>> prop.__delete__(Foo()) Delete! 
``` The Descriptor Howto includes a [pure Python sample implementation](http://docs.python.org/howto/descriptor.html#properties) of the `property()` type: > ``` > class Property: > "Emulate PyProperty_Type() in Objects/descrobject.c" > > def __init__(self, fget=None, fset=None, fdel=None, doc=None): > self.fget = fget > self.fset = fset > self.fdel = fdel > if doc is None and fget is not None: > doc = fget.__doc__ > self.__doc__ = doc > > def __get__(self, obj, objtype=None): > if obj is None: > return self > if self.fget is None: > raise AttributeError("unreadable attribute") > return self.fget(obj) > > def __set__(self, obj, value): > if self.fset is None: > raise AttributeError("can't set attribute") > self.fset(obj, value) > > def __delete__(self, obj): > if self.fdel is None: > raise AttributeError("can't delete attribute") > self.fdel(obj) > > def getter(self, fget): > return type(self)(fget, self.fset, self.fdel, self.__doc__) > > def setter(self, fset): > return type(self)(self.fget, fset, self.fdel, self.__doc__) > > def deleter(self, fdel): > return type(self)(self.fget, self.fset, fdel, self.__doc__) > ```
The [documentation says](http://docs.python.org/3/library/functions.html#property) it's just a shortcut for creating read-only properties. So ``` @property def x(self): return self._x ``` is equivalent to ``` def getx(self): return self._x x = property(getx) ```
How does the @property decorator work in Python?
[ "", "python", "properties", "decorator", "python-decorators", "python-internals", "" ]
I have a column (varchar(400)) in the following form in a SQL table: ``` Info UserID=1123456,ItemID=6685642 ``` The column is created via our point of sale application, and so I cannot do the normal thing of simply splitting it into two columns, as this would cause an obscene amount of work. My problem is that this column is used to store attributes of products in our database, and so while I am only concerned with UserID and ItemID, there may be superfluous information stored here, for example: ``` Info IrrelevantID=666,UserID=123124,AnotherIrrelevantID=1232342,ItemID=1213124. ``` What I want to retrieve is simply two columns, with no error thrown if either of these attributes is missing from the `Info` column: ``` UserID ItemID 123124 1213124 ``` Would it be possible to do this effectively, with error checking, given that the lengths of the IDs are all variable, but all of the attributes are comma-separated and follow a uniform style (i.e. "UserID=number")? Can anyone tell me the best way of dealing with my problem? Thanks a lot.
Try this ``` declare @infotable table (info varchar(4000)) insert into @infotable select 'IrrelevantID=666,UserID=123124,AnotherIrrelevantID=1232342,ItemID=1213124.' union all select 'UserID=1123456,ItemID=6685642' -- convert info column to xml type ; with cte as ( select cast('<info ' + REPLACE(REPLACE(REPLACE(info,',', '" '),'=','="'),'.','') + '" />' as XML) info, ROW_NUMBER() over (order by info) id from @infotable ) select userId, ItemId from ( select T.N.value('local-name(.)', 'varchar(max)') as Name, T.N.value('.', 'varchar(max)') as Value, id from cte cross apply info.nodes('//@*') as T(N) ) v pivot (max(value) for Name in ([UserID], [ItemId])) p ``` [SQL DEMO](http://sqlfiddle.com/#!3/d41d8/16199)
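If preprocessing outside the database is ever an option, the same key=value extraction is a short regular expression in a scripting language. A sketch in Python (not T-SQL; `extract_ids` is just an illustrative name, not anything from the question's application):

```python
import re

def extract_ids(info):
    # Pull UserID/ItemID out of a comma-separated key=value string.
    # Missing keys yield None instead of raising an error.
    found = dict(re.findall(r'(\w+)=(\d+)', info))
    return found.get('UserID'), found.get('ItemID')

print(extract_ids('IrrelevantID=666,UserID=123124,AnotherIrrelevantID=1232342,ItemID=1213124.'))
```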
You can try this split function: <http://www.sommarskog.se/arrays-in-sql-2005.html>
Splitting a variable length column in SQL server safely
[ "", "sql", "sql-server", "regex", "sql-server-2005", "select", "" ]
I have a question in SQL that I am trying to solve. I know the answer is very simple, but I just cannot get it right. I have two tables, one with customers and the other with orders. The two tables are connected using customer_id. The task is to list all the customers that did not make any order! The query is to be run in MapInfo Professional, a GIS desktop application, so not every SQL command is applicable to that program. In other words, I would be thankful for more than one approach to solving the problem. Here is how I have been thinking: ``` SELECT customer_id from customers WHERE order_id not in (select order_id from order) and customer.customer_id = order.customer_id ```
How about this: ``` SELECT * from customers WHERE customer_id not in (select customer_id from order) ``` The logic: if a customer_id never appears in `order`, that customer has never placed an order. Since customer_id is the common key, as you mentioned, the query above fetches the desired result.
``` SELECT c.customer_id FROM customers c LEFT JOIN orders o ON (o.customer_id = c.customer_id) WHERE o.order_id IS NULL ```
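Both formulations are easy to sanity-check in any SQL engine. For instance, a quick SQLite session driven from Python (note that `order` is a reserved word and needs quoting):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
    CREATE TABLE "order" (order_id INTEGER, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
    INSERT INTO "order" VALUES (10, 1), (11, 3);
""")
# Customers with no matching row in "order" -- the NOT IN formulation.
rows = conn.execute(
    'SELECT customer_id FROM customers '
    'WHERE customer_id NOT IN (SELECT customer_id FROM "order")'
).fetchall()
print(rows)
```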
A simple nested SQL statement
[ "", "sql", "" ]
I'm trying to get this column of words in input.txt: ``` Suzuki music Chinese music Conservatory Blue grass Rock n roll Rhythm Composition Contra Instruments ``` into this format: ``` "suzuki music", "chinese music", "conservatory music", "blue grass", "rock n roll", "rhythm"... ``` This code: ``` with open ('artsplus_stuff.txt', 'r') as f: list.append(", ".join(['%s' % row for row in f.read().splitlines()])) for item in list: item.lower() print list ``` returns a list, but the first letters are capitalized. ['Suzuki music, Chinese music, Conservatory, Blue grass, Rock n roll, Rhythm, Composition, Contra, Instruments '] How do I get all the items lower-cased? Thanks! --- Answer isn't working on this list: ``` Chess Guitar Woodworking Gardening Car_restoration Metalworking Marksman Camping Backpacking_(wilderness) Hunting Fishing Whittling Geocaching Sports Model_Building Leatherworking Bowling Archery Hiking Connoisseur Photography Pool_(cue_sports) Mountaineering Cooking Blacksmith Aviator Magic_(illusion) Foreign_language Card_game Blog Paintball Fencing Brewing Amateur_Astronomy Genealogy Adventure_racing Knitting Computer_Programming Amateur_radio Audiophile Baking Bboying Baton_twirling Chainmail Constructed_language Coloring Crocheting Creative_writing Drawing Fantasy_Football Fishkeeping Home_automation Home_Movies Jewelry Knapping Lapidary_club Locksport Musical_Instruments Painting RC_cars Scrapbooking Sculpting Sewing Singing Writing Air_sports Boardsport Backpacking Bonsai Canoeing Cycling Driving Freerunning Jogging Kayaking Motor_sports Mountain_biking Machining Parkour Rock_climbing Running Sailing Sand_castle Sculling Rowing_(sport) Human_swimming Tai_Chi Vehicle_restoration Water_sports Antiques Coin_collecting Element_collecting Stamp_collecting Vintage_car Vintage_clothing Record_Collecting Antiquities Car_audio Fossil_collecting Insect_collecting Leaf Metal_detectorist Mineral_collecting Petal Rock_(geology) Seaglass Seashell Boxing 
Combination_puzzle Contract_Bridge Cue_sports Darts Table_football Team_Handball Airsoft American_football Association_football Auto_racing Badminton Climbing Cricket Disc_golf Figure_skating Footbag Kart_racing Plank_(exercise) Racquetball Rugby_league Table_tennis Microscopy Reading_(process) Shortwave_listening Videophile Aircraft_spotting Amateur_geology Birdwatching Bus_spotting Gongoozler Meteorology Travel Board_game Airbrush Advocacy Acting model_aircraft Pets Aquarium Astrology Astronomy Backgammon Base_Jumping Sun_tanning Beachcombing Beadwork Beatboxing Campanology Belly_dance cycle_Polo Bicycle_motocross Boating Boomerang Volunteering Carpentry Butterfly_Watching Button_Collecting Cake_Decorating Calligraphy Candle Cartoonist Casino Cave_Diving Ceramic Church Cigar_Smoking Cloud_Watching Antique Hat album Gramophone_record trading_card Musical_composition Worldbuilding Cosplay Craft Cross-Stitch Crossword_Puzzle Diecast Digital_Photography Dodgeball Doll Dominoes Dumpster_Diving restaurant education Electronics Embroidery Entertainment physical_exercise Falconry List_of_fastest_production_cars Felt Poi_(performance_art) Floorball Floristry Fly_Tying off-roading ultimate_(sport) Game Garage_sale Ghost_Hunting Glowsticking Gunsmith Gyotaku Handwriting Hang_gliding Herping HomeBrewing Home_Repair Home_Theater Hot_air_ballooning Hula_Hoop Ice_skating Impersonator Internet Invention Jewellery Jigsaw_Puzzle Juggling diary skipping_rope amateur_Chemistry Kite snowkiting knot Laser Lawn_Dart poker Leather_crafting Lego Macramé Model_Car Matchstick_Model Meditation Metal_Detector Rail_transport_modelling Model_Rocket ship_model scale_model Motorcycle Needlepoint Origami Papermaking Papier-mâché Parachuting Paragliding Pinochle Pipe_Smoking Pottery Powerbocking Demonstration_(people) Puppetry Pyrotechnics Quilting pigeon_racing Rafting Railfan Rapping remote_control Relaxation Renaissance_Fair Renting_movies Robotics Rock_Balancing Role-playing sand_art_and_play 
Scuba_Diving Self-Defense Skeet_Shooting Skiing Shopping choir Skateboarding Sketch_(drawing) SlackLining Sleep Slingshot Slot_Car_Racing Snorkeling Soap Rubik's_Cube caving Family Storm_Chasing Storytelling String_Figure Surf_Fishing Survival_skills Tatting Taxidermy Tea_Tasting Tesla_Coil Tetris Textile stone_Rubbing Antique_tool Toy Treasure_Hunting Trekkie tutoring Urban_Exploration Video_Game Violin Volunteer Walking Weightlifting Windsurfing WineMaking Wrestling Zip-line traveling ``` error: list.append(", ".join(['"%s"' % row for row in f.read().splitlines()])) TypeError: descriptor 'append' requires a 'list' object but received a 'str' logout
Instead of ``` for item in list: item.lower() ``` change the name of the variable `list` to `l` or whatever you like that *doesn't* shadow the Python built-in `list`, and use the following line, obviously substituting whatever you name the list for `l`. ``` l = [item.lower() for item in l] ``` The `lower` method returns a copy of the string in all lowercase letters. Once a string has been created, nothing can modify its contents, so you need to create a new string with what you want in it.
Here is how it can be done: ``` In [6]: l = ['Suzuki music', 'Chinese music', 'Conservatory', 'Blue grass'] In [7]: map(str.lower, l) Out[7]: ['suzuki music', 'chinese music', 'conservatory', 'blue grass'] ``` One of the reasons your code doesn't behave as expected is that `item.lower()` doesn't modify the string (in Python, strings are immutable). Instead, it *returns* the lowercase version of the string, which your code then disregards.
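One caveat with the `map` version: in Python 3, `map` returns a lazy iterator rather than a list, so wrap it in `list()` if you need the list itself:

```python
l = ['Suzuki music', 'Chinese music', 'Conservatory', 'Blue grass']

m = map(str.lower, l)   # in Python 3 this is a lazy map object, not a list
lowered = list(m)       # materialize it into a list
print(lowered)
```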
Convert list to lower-case
[ "", "python", "lowercase", "" ]
I'm using this [gist's](https://gist.github.com/hrldcpr/2012250/) tree, and now I'm trying to figure out how to prettyprint to a file. Any tips?
What you need is the Pretty Print ([`pprint`](http://docs.python.org/2/library/pprint.html)) module: ``` from pprint import pprint # Build the tree somehow with open('output.txt', 'wt') as out: pprint(myTree, stream=out) ```
Another general-purpose alternative is Pretty Print's `pformat()` method, which creates a pretty string. You can then send that out to a file. For example: ``` import pprint data = dict(a=1, b=2) output_s = pprint.pformat(data) # ^^^^^^^^^^^^^^^ with open('output.txt', 'w') as file: file.write(output_s) ```
Prettyprint to a file?
[ "", "python", "hash", "dictionary", "tree", "pretty-print", "" ]
Say I have the following code: ``` for i in range(100): print i ``` In general I can add one line to the code, as in: ``` for i in range(100): import ipdb;ipdb.set_trace() print i ``` However, now I want to break only on the condition `i == 10`, and I don't want to type `c` 10 times in ipdb. How should I do this? In the documentation I found `condition bpnumber [condition]`, but how can I know the `bpnumber` if there is no listing of breakpoint numbers? The documentation also says `b(reak) ([file:]lineno | function) [, condition]`. For example, assume the line number of `print i` is `xx`. I entered the following in the ipdb shell: `b xx, i == 10`, but nothing happened as expected.
I did some exploration myself; here is my new understanding of `pdb`. When you put `import ipdb;ipdb.set_trace()` on a line, you actually add an entry point into `ipdb` at that line, not really a breakpoint. After you enter `ipdb`, you can then set up breakpoints. So, to get the conditional debugging I want, I should do this: ``` import ipdb;ipdb.set_trace() for i in range(100): print i ``` Then, after I enter `ipdb`, I can input `b xx, i == 10`, and then `c` or `r` to run the code. The code will stop when the condition is met. When I input `l`, the `bpnumber` is shown for the line as: ``` xx-1 for i in range(100): bpnumber> xx print i xx+1 ... ``` I have to say, the documentation and all the other explanations are very confusing; I hope my answer here clarifies the difference between a "debug entry point" and a "debug breakpoint"
There's a quick dirty way like this: ``` for i in range(100): if i == 10: import ipdb;ipdb.set_trace() print i ``` It works and don't have to busy your mind with any other commands :)
pdb/ipdb for python break on editable condition
[ "", "python", "pdb", "" ]
I don't understand this code snippet. When I try another excel file it doesn't compile properly. Why did the programmer use F and is it referring to the column or row? ``` string strProvider; if (dbFilePath.LastIndexOf("xlsx") > -1) strProvider = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + dbFilePath + @";Extended Properties=""Excel 12.0 Xml;HDR=YES"";"; else strProvider = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=""" + dbFilePath + @""";Persist Security Info=False;Extended Properties=""Excel 8.0;"""; string strQuery = "SELECT " + "F9 AS EmpNbr, " + "F10 AS ACCTSTATUS, " + "F1 AS PROJECTEDAGE, " + "F14 AS UPDBALDUE, " + "F15 AS AMTCURRENT, " + "F16 AS AMT30DAY, " + "F17 AS AMT60DAY, " + "F18 AS AMT90DAY, " + "F19 AS AMT120DAY, " + "F20 AS AMT150DAY, " + "F21 AS AMT180DAY " + "FROM [Sheet1$A8:V] " + "WHERE F9 IS NOT NULL AND " + "(F17 > 0 OR F18 > 0 OR F19 > 0 OR F20 > 0 OR F21 > 0) " + " AND Trim(F10) <> 'RETURN MAIL/LEFT COMPANY' " + " AND Trim(Left(F1,3)) IN ('60','90','120','150','180') " + "ORDER BY F9 "; ```
`F1, F2, F3 etc` are simply generated column headings and are **NOT** cell references. Even though within the connection string you have `HDR=YES`, if the first row is blank then the provider will autogenerate column names from `F1` to `F(n)`. Why it chooses F I do not know, but `n` is simply the column ordinal (starting at 1), and the name is prefixed so it can't be confused with a number. If you revert to thinking of your dataset as a database rather than a spreadsheet, it would not make sense to refer to a specific row in a query, e.g If you imagine a small table ``` ID | A | B | C | ------------------- 1 | x | y | z | 2 | d | e | f | 3 | j | k | l | ``` Then think in terms of SQL rather than excel the following query does not make sense to get just the first row because `A1` = X: ``` SELECT A, B, C FROM Sheet1 WHERE A1 = 'x'; ``` You would have to use columns as so: ``` SELECT A, B, C FROM Sheet1 WHERE ID = 1 AND A = 'x'; ```
F9, F10 and so on are cells, with F (or any other letter/combination) being the column indicator, and 9 (or any other number) the line index.
What is this SQL query doing on an Excel file?
[ "", "sql", "excel", "" ]
I need help with the following task, for a file of around 1000 lines. INPUT ``` ./create.pl 1eaj.out ./create.pl 1ezg.out ./create.pl 1f41.out ... ``` OUTPUT ``` ./create.pl 1eaj.out mv complex.* 1eaj ./create.pl 1ezg.out mv complex.* 1ezg ./create.pl 1f41.out mv complex.* 1f41 ... ``` I know the following command can add the new line and the first part, which makes the output look like below. ``` awk ' {print;} NR % 1 == 0 { print "mv complex.* "; }' ./create.pl 1eaj.out mv complex.* ./create.pl 1ezg.out mv complex.* ./create.pl 1f41.out mv complex.* ... ``` How do I do the rest? Thanks a lot in advance.
My attempt: ``` sed -n 's/^\(\.\/create\.pl\)\s*\(.*\)\.out$/\1 \2.out\nmv complex.* \2/p' s.txt ``` or using `&&` between `./create.pl` and `mv` (since mv is likely needed only when `./create.pl` is correctly executed): ``` sed -n 's/^\(\.\/create\.pl\)\s*\(.*\)\.out$/\1 \2.out \&\& mv complex.* \2/p' s.txt ``` which gives: ``` ./create.pl 1eaj.out && mv complex.* 1eaj ./create.pl 1ezg.out && mv complex.* 1ezg ./create.pl 1f41.out && mv complex.* 1f41 ```
You were nearly there: ``` $ awk '{print $1, $2, "\nmv complex.*", $2}' file ./create.pl 1eaj.out mv complex.* 1eaj.out ./create.pl 1ezg.out mv complex.* 1ezg.out ./create.pl 1f41.out mv complex.* 1f41.out ```
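Since the question is also tagged python, the same transformation as a short script; here the input lines are inlined rather than read from your file, so adjust accordingly:

```python
lines = """./create.pl 1eaj.out
./create.pl 1ezg.out
./create.pl 1f41.out""".splitlines()

out = []
for line in lines:
    out.append(line)
    # Second field with its .out extension stripped, e.g. "1eaj.out" -> "1eaj"
    base = line.split()[1].rsplit('.out', 1)[0]
    out.append('mv complex.* ' + base)
print('\n'.join(out))
```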
adding extra lines after each line in a file
[ "", "python", "sed", "awk", "" ]
Given the following 2 tables: ``` PROJ_CUSTOM PSR_FINAL_DATA _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ PROJ_ID PROJ_NAME PROJ_ID ACCT_ID 61000.001.ABC Accounting 61000.001.ABC 610-10-23 61000.001.ABD Marketing 61000.001.ABD 610-20-23 61000.001.ABE Applications 61000.001.ABE 610-30-23 61000.001.ABF HR 61000.001.ABF 610-40-23 61000.001.ABG Foo 61000.001.ABG 610-50-23 61000.001.ABC Accounting 61000.001.ABC 610-10-23 61000.001.ABD Marketing 61000.001.ABD 610-20-23 61000.001.ABE Applications 61000.001.ABE 610-30-23 61000.001.ABF HR 61000.001.ABF 610-40-23 61000.001.ABG Foo 61000.001.ABG 610-50-23 ``` And the following sql: ``` SELECT PROJ_ID, PROJ_NAME FROM DELTEK.PROJ_CUSTOM INNER JOIN ( SELECT PROJ_ID AS Project FROM DELTEK.PSR_FINAL_DATA GROUP BY PROJ_ID ) FinalDb ON FinalDb.Project = PROJ_ID WHERE PROJ_ID LIKE '61000.001.[A-Z]%'; ``` The sql returns the columns PROJ\_ID, and PROJ\_NAME. How would I be able to add the ACCT\_ID column from table 2 (PSR\_FINAL\_DATA)?
As long as ACCT\_ID is always the same for each PROJ\_ID you can try this: ``` SELECT Custom.PROJ_ID, Custom.PROJ_NAME, FinalDb.ACCT_ID FROM DELTEK.PROJ_CUSTOM Custom INNER JOIN ( SELECT DISTINCT PROJ_ID, ACCT_ID FROM DELTEK.PSR_FINAL_DATA ) FinalDb ON FinalDb.PROJ_ID = Custom.PROJ_ID WHERE Custom.PROJ_ID LIKE '61000.001.[A-Z]%'; ```
Make it return ACCT_ID too. You are only returning PROJ_ID and PROJ_NAME here: `SELECT PROJ_ID, PROJ_NAME`. Include ACCT_ID there and see if it works.
SQL: return additional column from inner join subquery with 2 tables
[ "", "sql", "t-sql", "" ]
I want to show an info window from my Python script running on Ubuntu. I'm using the following code: ``` import tkMessageBox tkMessageBox.showinfo("Say Hello", "Hello World") ``` This works, but there's an empty window displayed, with the message box on top. How can I get rid of the window and just centre the message box on the screen (the window manager is GNOME 2)? This is just to display some info from a command line script (a password, which is why I don't want to just echo it to the console).
Tkinter must have a root window. If you don't create one, one will be created for you. If you don't want this root window, create it and then hide it: ``` import Tkinter as tk root = tk.Tk() root.withdraw() tkMessageBox.showinfo("Say Hello", "Hello World") ``` Your other choice is to *not* use tkMessageBox, but instead put your message in the root window. The advantage of this approach is you can make the window look exactly like you want it to look. ``` import Tkinter as tk root = tk.Tk() root.title("Say Hello") label = tk.Label(root, text="Hello World") label.pack(side="top", fill="both", expand=True, padx=20, pady=20) button = tk.Button(root, text="OK", command=lambda: root.destroy()) button.pack(side="bottom", fill="none", expand=True) root.mainloop() ``` (personally I would choose a more object-oriented approach, but I'm trying to keep the code small for this example)
To avoid a "flash" as the root window is created, use this slight variation on the accepted answer: ``` import Tkinter as tk root = tk.Tk() root.overrideredirect(1) root.withdraw() tkMessageBox.showinfo("Say Hello", "Hello World") ```
Tkinter messagebox without window?
[ "", "python", "tkinter", "" ]
I am trying to change the current working directory in Python using os.chdir. I have the following code: ``` import os os.chdir("C:\Users\Josh\Desktop\20130216") ``` However, when I run it, it seems unable to change the directory, and it comes out with the following error message: ``` Traceback (most recent call last): File "C:\Users\Josh\Desktop\LapseBot 1.0\LapseBot.py", line 3, in <module> os.chdir("C:\Users\Josh\Desktop\20130216") WindowsError: [Error 2] The system cannot find the file specified 'C:\\Users\\Josh\\Desktop\x8130216' ``` Can anyone help me?
Python is interpreting the `\2013` part of the path as the *escape sequence* `\201`, which maps to the character `\x81`, which is ü (and of course, `C:\Users\Josh\Desktopü30216` doesn't exist). Use a raw string, to make sure that Python doesn't try to interpret anything following a `\` as an escape sequence. ``` os.chdir(r"C:\Users\Josh\Desktop\20130216") ```
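The behaviour is easy to verify interactively; an octal escape consumes up to three digits after the backslash, while a raw string keeps the backslash literal:

```python
# In a normal string literal, \201 is ONE character (octal escape 0o201 = 0x81):
assert len("\201") == 1
assert "\201" == "\x81"

# In a raw string literal, it stays as four literal characters:
assert len(r"\201") == 4

# So the raw path keeps all four backslashes intact:
path = r"C:\Users\Josh\Desktop\20130216"
print(len(path), path.count("\\"))
```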
You could also use `os.path.join` ([documentation](http://docs.python.org/2/library/os.path.html#os.path.join)). Example: ``` os.chdir(os.path.join('C:\Users\Josh\Desktop', '20130216')) ``` This is more elegant + it's compatible with different operating systems.
Python os.chdir is modifying the passed directory name
[ "", "python", "python-2.7", "directory", "chdir", "" ]
My query has this structure: ``` SELECT DISTINCT (CO.CateringOrderId), CO.CateringOrderNumber, MC.FirstName + ' ' + MC.LastName AS "CustomerName", CO.EventDate AS EventDate, CO.IsCompleted, CO.IsVerified, MC.EmailId, CAT.OfficePhone, CAT.Mobile, CAT.Fax, CO.TotalInvoiceAmount, CO.BarterCharityId, (SELECT Sum (Amount) FROM Catering_Order_Payment_Trans WHERE CateringOrderId = CO.CateringOrderId) AS AmountReceived FROM Catering_Orders CO, Master_Customer MC, Customer_Address_Trans CAT, Catering_Order_Employee_Trans COET WHERE MC.CompanyId = @p_CompanyId AND (MC.CustomerId = @p_CustomerId OR @p_CustomerId = -1) AND (CO.CateringOrderNumber LIKE '%' + @p_CateringOrderNumber + '%') AND (CO.EventDate >= CONVERT (DATETIME, @p_FromDate) OR @p_FromDate = '') AND (CO.EventDate <= CONVERT (DATETIME, @p_ToDate) OR @p_ToDate = '') AND (CO.IsCompleted = @p_IsCompleted OR @p_IsCompleted = -1) AND (COET.EmployeeId = @p_CatererId OR @p_CatererId = -1) AND MC.CustomerId = CO.CustomerId AND MC.PersonalAddressId = CAT.CustomerAddressId AND (COET.CateringOrderId = CO.CateringOrderId OR CO.CateringOrderId NOT IN (SELECT CateringOrderId FROM Catering_Order_Employee_Trans)) AND (CAT.Mobile like '%' + @p_ContactNumber + '%' ) AND (CO.IsActive is null or CO.IsActive=1) ORDER BY CO.CateringOrderId DESC ``` I think the `SUM` sub-query is slowing it down. Please suggest how to speed it up. Currently its execution time is around 7-10 seconds.
Try something like this - ``` SELECT DISTINCT CO.CateringOrderId, CO.CateringOrderNumber, MC.FirstName + ' ' + MC.LastName AS CustomerName, CO.EventDate AS EventDate, CO.IsCompleted, CO.IsVerified, MC.EmailId, CAT.OfficePhone, CAT.Mobile, CAT.Fax, CO.TotalInvoiceAmount, CO.BarterCharityId, AmountReceived = ( SELECT SUM(t.Amount) FROM dbo.Catering_Order_Payment_Trans t WHERE t.CateringOrderId = CO.CateringOrderId ) FROM ( SELECT * FROM dbo.Catering_Orders WHERE ISNULL(IsActive, 1) = 1 AND (IsCompleted = @p_IsCompleted OR @p_IsCompleted = -1) AND CateringOrderNumber LIKE '%' + @p_CateringOrderNumber + '%' AND EventDate BETWEEN CONVERT(DATETIME, ISNULL(NULLIF(@p_FromDate, ''), '18000101')) AND CONVERT(DATETIME, ISNULL(NULLIF(@p_ToDate, ''), '30000101')) ) CO JOIN dbo.Master_Customer MC ON MC.CustomerId = CO.CustomerId JOIN dbo.Customer_Address_Trans CAT ON MC.PersonalAddressId = CAT.CustomerAddressId LEFT JOIN ( SELECT * FROM dbo.Catering_Order_Employee_Trans WHERE EmployeeId = @p_CatererId OR @p_CatererId = -1 ) COET ON COET.CateringOrderId = CO.CateringOrderId WHERE MC.CompanyId = @p_CompanyId AND (MC.CustomerId = @p_CustomerId OR @p_CustomerId = -1) AND CAT.Mobile LIKE '%' + @p_ContactNumber + '%' ``` The main problems are in ``` AND (COET.CateringOrderId = CO.CateringOrderId OR CO.CateringOrderId NOT IN (SELECT CateringOrderId FROM Catering_Order_Employee_Trans)) ``` and ``` (SELECT Sum (Amount) FROM Catering_Order_Payment_Trans WHERE CateringOrderId = CO.CateringOrderId) AS AmountReceived ```
Try: ``` SELECT CO.CateringOrderId, CO.CateringOrderNumber, MC.FirstName + ' ' + MC.LastName AS "CustomerName", CO.EventDate AS EventDate, CO.IsCompleted, CO.IsVerified, MC.EmailId, CAT.OfficePhone, CAT.Mobile, CAT.Fax, CO.TotalInvoiceAmount, CO.BarterCharityId, COPT.AmountReceived FROM Catering_Orders CO JOIN Master_Customer MC ON MC.CustomerId = CO.CustomerId JOIN Customer_Address_Trans CAT ON MC.PersonalAddressId = CAT.CustomerAddressId LEFT JOIN (SELECT CateringOrderId, Sum(Amount) AS AmountReceived FROM Catering_Order_Payment_Trans GROUP BY CateringOrderId) COPT ON COPT.CateringOrderId = CO.CateringOrderId WHERE MC.CompanyId = @p_CompanyId AND (MC.CustomerId = @p_CustomerId OR @p_CustomerId = -1) AND (CO.CateringOrderNumber LIKE '%' + @p_CateringOrderNumber + '%') AND (CO.EventDate >= CONVERT (DATETIME, @p_FromDate) OR @p_FromDate = '') AND (CO.EventDate <= CONVERT (DATETIME, @p_ToDate) OR @p_ToDate = '') AND (CO.IsCompleted = @p_IsCompleted OR @p_IsCompleted = -1) AND EXISTS (SELECT NULL FROM Catering_Order_Employee_Trans COET WHERE COET.CateringOrderId = CO.CateringOrderId AND (COET.EmployeeId = @p_CatererId OR @p_CatererId = -1) ) AND (CAT.Mobile like '%' + @p_ContactNumber + '%' ) AND (CO.IsActive is null or CO.IsActive=1) ORDER BY CO.CateringOrderId DESC ```
Need to speed up this SQL query
[ "", "sql", "sql-server", "subquery", "query-optimization", "" ]
``` CREATE TABLE AverageStudents AS (SELECT * FROM StudentData WHERE GPA > 3.0); ``` I keep getting the error ``` Incorrect syntax near the keyword 'AS'. ``` Does my simple code look alright to you? I really want a table (not a view, thanks for suggestion though).
Try this one - ``` SELECT * INTO AverageStudents FROM StudentData WHERE GPA > 3.0 ``` Or this - ``` CREATE VIEW AverageStudents AS SELECT * FROM StudentData WHERE GPA > 3.0 ```
I think you're looking for [a view](http://msdn.microsoft.com/en-us/library/ms187956.aspx): ``` CREATE VIEW AverageStudents AS SELECT * FROM StudentData WHERE GPA > 3.0; ```
Why can't I create this table in SQL?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I'm opening a file named in the following format: ``` ex130626.log exYYMMDD.log ``` The following code produces a 4-digit year. How do I get the two-digit year, like 13? ``` today = datetime.date.today() filename = 'ex{0}{1:02d}{2:02d}.log'.format(today.year, today.month, today.day) ```
Just take the modulus of the year: ``` >>> import datetime >>> today = datetime.date.today() >>> filename = 'ex{:02}{:02}{:02}.log'.format(today.year%100, today.month, today.day) >>> filename 'ex130625.log' ``` But an easier way is `strftime`: ``` >>> today.strftime('ex%y%m%d.log') 'ex130625.log' ```
You can use `strftime`: ``` filename = 'ex' + today.strftime("%y%m%d") + '.log' ```
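Both snippets above name the file the same way; here is a quick deterministic check (a sketch using a fixed date instead of `datetime.date.today()` so the result is reproducible; Python 3 syntax):

```python
import datetime

# A fixed date stands in for datetime.date.today() so the output is stable.
day = datetime.date(2013, 6, 26)

# Modulus approach: year % 100 yields the two-digit year.
name_mod = 'ex{:02d}{:02d}{:02d}.log'.format(day.year % 100, day.month, day.day)

# strftime approach: %y is the zero-padded two-digit year.
name_strf = day.strftime('ex%y%m%d.log')

print(name_mod, name_strf)  # -> ex130626.log ex130626.log
```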
open file named with two digit year-Python
[ "", "python", "" ]
I work in Python. Recently, I discovered a wonderful little package called [fn](https://github.com/kachayev/fn.py). I've been using it for function composition. For example, instead of: ``` baz(bar(foo(x))) ``` with fn, you can write: ``` (F() >> foo >> bar >> baz)(x) ``` When I saw this, I immediately thought of Clojure: ``` (-> x foo bar baz) ``` But notice how, in Clojure, the input is on the left. I wonder if this is possible in python/fn.
You can't replicate the exact syntax, but you can make something similar: ``` def f(*args): result = args[0] for func in args[1:]: result = func(result) return result ``` Seems to work: ``` >>> f('a test', reversed, sorted, ''.join) ' aestt' ```
You can't get that exact syntax, although you can get something like `F(x)(foo, bar, baz)`. Here's a simple example: ``` class F(object): def __init__(self, arg): self.arg = arg def __call__(self, *funcs): arg = self.arg for f in funcs: arg = f(arg) return arg def a(x): return x+2 def b(x): return x**2 def c(x): return 3*x >>> F(2)(a, b, c) 48 >>> F(2)(c, b, a) 38 ``` This is a bit different from Blender's answer since it stores the argument, which can later be re-used with different functions. This is sort of like the opposite of normal function application: instead of specifying the function up front and leaving some arguments to be specified later, you specify the argument and leave the function(s) to be specified later. It's an interesting toy but it's hard to think why you'd really want this.
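Both answers loop over the functions by hand; the same left-to-right pipeline can also be phrased with `functools.reduce`. A sketch (Python 3; `pipe`, `foo`, `bar`, `baz` are made-up names, not part of fn):

```python
from functools import reduce

def pipe(x, *funcs):
    """Thread x through funcs left to right: pipe(x, f, g) == g(f(x))."""
    return reduce(lambda acc, f: f(acc), funcs, x)

def foo(n): return n + 2
def bar(n): return n * 10
def baz(n): return n - 1

print(pipe(3, foo, bar, baz))  # -> 49, i.e. baz(bar(foo(3)))
```

With no functions given, `pipe(x)` simply returns `x`, mirroring Clojure's `(-> x)`.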
Better Function Composition in Python
[ "", "python", "clojure", "functional-programming", "function-composition", "" ]
Part of my app requires the client to request files. Now, a well-behaved client will only request files that are safe to give, but I don't want a user to go about supplying `"../../../creditCardInfo.xls"`, instead. What's the best practice for/simplest way to secure a filename to make sure that no files are served that would be higher than a certain point in the directory hierarchy? First instinct is to disallow filenames with `..` in them but that seems... incomplete and unsatisfactory. The current questions about filename safety on SO focus on making a writable/readable filename, not ensuring that files that shouldn't be accessed are accessed.
If you're running in a UNIX variant, you might want a [chroot jail](https://en.wikipedia.org/wiki/Chroot) to prevent access to the system outside your application. This approach would avoid you having to write your own code to deal with the problem and let you handle it with infrastructure setup. It might not be appropriate if you need to restrict access to some area within the application as it changes what the process thinks is the system root.
This seems like it would work, provided that `open` uses the same mechanism to resolve paths as `os.path.abspath`. Are there any flaws to this approach? ``` import os def is_safe(filename): here = os.path.abspath(".") there = os.path.abspath(filename) return there.startswith(here) >>> is_safe("foo.txt") True >>> is_safe("foo/bar/baz") True >>> is_safe("../../goodies") False >>> is_safe("/hax") False ```
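One caveat with a raw `startswith` comparison is that a sibling directory can share the string prefix (`/srv/app-secrets` starts with `/srv/app`). Comparing whole path components, and resolving symlinks with `realpath`, tightens the check. A sketch (POSIX paths; the function and directory names are illustrative, not from the question):

```python
import os

def is_safe(base, filename):
    """Return True only if filename resolves to a location inside base.

    realpath also collapses symlinks, which abspath alone does not.
    """
    base = os.path.realpath(base)
    target = os.path.realpath(os.path.join(base, filename))
    # Compare path components, not raw string prefixes, so a sibling
    # directory like base + '-secrets' is not accepted by accident.
    return target == base or target.startswith(base + os.sep)

print(is_safe('/srv/app', 'foo/bar.txt'))       # -> True
print(is_safe('/srv/app', '../../etc/passwd'))  # -> False
print(is_safe('/srv/app', '../app-secrets/x'))  # -> False
```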
Secure user-provided filename
[ "", "python", "file", "security", "directory", "" ]
Say I have a list ``` Q = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] ``` I believe I can extract the first and every ninth value thereafter using the extended slice notation: ``` Q[::9] ``` Which should give: ``` [0,9,18] ``` But how can I similarly select all the elements *apart from* those?
You mean this? ``` >>> lis = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] >>> lis[1::9] [1, 10] ``` Extended slice notation: ``` lis[start : stop : step] #default values : start = 0, stop = len(lis), step = 1 ``` You can pass your own value for `start` (by default 0 is used) **Update:** ``` >>> lis = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] >>> se = set(range(0, len(lis), 9)) #use a list if the lis is not huge. >>> [x for i,x in enumerate(lis) if i not in se] [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17] #for your example even this will work: >>> [x for i,x in enumerate(lis) if i%9 != 0] [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17] ```
In case you have not repeated numbers, this is a general solution for any collection of numbers (not necessarily consecutive): ``` >>> Q = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] >>> list(set(Q).difference(Q[::9])) [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17] >>> ``` It uses `set.difference`s to get the `set` that is the difference between the original list and the sublist to be removed.
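Because the unwanted positions form a regular slice, `del` also accepts the same extended slice notation directly, which is arguably the shortest route. A sketch (working on a copy so the original list is preserved):

```python
Q = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]

rest = Q[:]    # copy, so Q itself is untouched
del rest[::9]  # removes the elements at index 0, 9 and 18 in one step

print(rest)  # -> [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17]
```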
Selecting list elements that aren't in a slice
[ "", "python", "list", "slice", "" ]
I have written a simple load test using Locust (<http://locust.io>). Now I noticed that sometimes (using a higher load) the response I get from a post call has a status\_code of 0 and a `None` content. The 0 status code is not automatically recognized as a failure in Locust, so I have to test it manually. My code fragment is this: ``` with self.client.get(path, catch_response=True) as response: if response.status_code != 200: response.failure(path + ": returned " + str(response.status_code)) elif check not in response.content: response.failure(path + ": wrong response, missing '" + check + "'") ``` Note: `check` is a variable for a part of the expected response content. The question is: is this an expected behaviour? Is this a problem of Locust (or Python) or is this a fault in the tested application?
From the locust documentation in: <http://docs.locust.io/en/latest/writing-a-locustfile.html> > Safe mode > > > The HTTP client is configured to run in safe\_mode. What this does is that any request that fails due to a connection error, timeout, or similar will not raise an exception, but rather return an empty dummy Response object. The request will be reported as a failure in Locust’s statistics. The returned dummy Response’s content attribute will be set to None, and it’s status\_code will be 0. It looks like the server can't handle all the requests you are swarming it with and some of them are timing out.
Those errors are being swallowed and replaced with a dummy Response: > HttpSession catches any requests.RequestException thrown by Session > (caused by connection errors, timeouts or similar), instead returning > a dummy Response object with status\_code set to 0 and content set to > None. <https://docs.locust.io/en/latest/writing-a-locustfile.html#client-attribute-httpsession> It doesn't look like they have any way to not swallow errors.
Locust: got 0 response status_code and None content
[ "", "python", "http", "locust", "" ]
I have an SQL query like this: ``` SELECT DISTINCT(id) FROM users WHERE ... ``` and I would like to display the results like that: ``` user=12355 user=78949 user=9898 user=489891 ``` Basically with "user=" prepended. Is it possible to do this with PostgreSQL? I've tried with `STRING_AGG('user=', DISTINCT(id))` but got an error on `DISTINCT(id)`. Any idea?
You should be able to use `||` for string concatenation: ``` SELECT DISTINCT('user=' || id) FROM users WHERE ... ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!1/39727/1) This might be useful as well: <http://www.postgresql.org/docs/current/static/functions-string.html>
I'd use a plain `GROUP BY` for this. ``` SELECT format('user=%s',id) FROM users GROUP BY id; ``` <http://sqlfiddle.com/#!1/39727/3> This will be considerably more efficient than using `DISTINCT` on the string concatenation.
Concatenating strings in PostgreSQL result
[ "", "sql", "postgresql", "string-concatenation", "" ]
I want to run a program on several platforms (including Mac OS), so I try to keep it as platform independent as possible. I use Windows myself, and I have a line `os.startfile(file)`. That works for me, but not on other platforms (I read in the documentation, I haven't tested for myself). Is there an equivalent that works for all platforms? By the way, the file is a `.wav` file, but I want users to be able to use their standard media player, so they can pause/rewind the file. That's why I use `os.startfile()`. I might be able to work with libraries that also allow playing/pausing/rewinding media files.
It appears that a cross-platform file opening module does not yet exist, but you can rely on existing infrastructure of the popular systems. This snippet covers Windows, MacOS and Unix-like systems (Linux, FreeBSD, Solaris...): ``` import os, sys, subprocess def open_file(filename): if sys.platform == "win32": os.startfile(filename) else: opener = "open" if sys.platform == "darwin" else "xdg-open" subprocess.call([opener, filename]) ```
Just use [`webbrowser.open(filename)`](http://docs.python.org/2/library/webbrowser.html). It can call `os.startfile()`, `open`, `xdg-open` where appropriate. Beware, there is a [scary text in the docs](http://docs.python.org/2/library/webbrowser.html#webbrowser.open): > Note that on some platforms, trying to open a filename using this > function, may work and start the operating system’s associated > program. However, this is neither supported nor portable. It works fine for me. Test it in your environment. Look at [`webbrowser`'s source code](http://hg.python.org/cpython/file/656d0e273ccb/Lib/webbrowser.py#l412) to see how much work needs to be done to be portable. There is also an open issue on the Python bug tracker -- [Add shutil.open](http://bugs.python.org/issue3177). A "portable os.startfile()" interface turned out to be more complex than expected. You could try the submitted patches e.g., [`shutil.launch()`](http://bugs.python.org/file25311/shutil_launch.py).
Is there an platform independent equivalent of os.startfile()?
[ "", "python", "platform", "" ]
I'm programming a method insertOrUpdate in my android application and I can't do this: ``` database.execSQL('IF (select count(*) from RegNascimento where codigoAnimal = ?) > 0 then begin update RegNascimento set cgnRegNascimento = ? where codigoAnimal = ?; end else begin insert into RegNascimento(cgnRegNascimento, dataInspecaoRegNascimento) values (?,?); end;'); ``` I'm getting this error: ``` 06-26 09:24:58.835: E/SQLiteLog(3924): (1) near "if": syntax error 06-26 09:24:58.835: W/System.err(3924): android.database.sqlite.SQLiteException: near "if": syntax error (code 1): , while compiling: if (select count(*) from RegDefinitivo where codigoAnimal = ?) > 0 then begin update RegDefinitivo set cgdRegDefinitivo = ?, seloRegDefinitivo = ? where codigoAnimal = ?; end else begin insert into RegDefinitivo(cgdRegDefinitivo, seloRegDefinitivo, dataInspecaoRegDefinitivo) values (?,?,?); end; 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteConnection.nativePrepareStatement(Native Method) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteConnection.acquirePreparedStatement(SQLiteConnection.java:1013) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteConnection.prepare(SQLiteConnection.java:624) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteSession.prepare(SQLiteSession.java:588) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:58) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:37) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:44) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1314) 06-26 09:24:58.835: W/System.err(3924): at android.database.sqlite.SQLiteDatabase.rawQuery(SQLiteDatabase.java:1253) 06-26 
09:24:58.840: W/System.err(3924): at org.apache.cordova.Storage.executeSql(Storage.java:173) 06-26 09:24:58.840: W/System.err(3924): at org.apache.cordova.Storage.execute(Storage.java:83) 06-26 09:24:58.840: W/System.err(3924): at org.apache.cordova.api.CordovaPlugin.execute(CordovaPlugin.java:66) 06-26 09:24:58.840: W/System.err(3924): at org.apache.cordova.api.PluginManager.exec(PluginManager.java:224) 06-26 09:24:58.840: W/System.err(3924): at org.apache.cordova.ExposedJsApi.exec(ExposedJsApi.java:51) 06-26 09:24:58.840: W/System.err(3924): at android.webkit.JWebCoreJavaBridge.sharedTimerFired(Native Method) 06-26 09:24:58.840: W/System.err(3924): at android.webkit.JWebCoreJavaBridge.sharedTimerFired(Native Method) 06-26 09:24:58.840: W/System.err(3924): at android.webkit.JWebCoreJavaBridge.fireSharedTimer(JWebCoreJavaBridge.java:92) 06-26 09:24:58.840: W/System.err(3924): at android.webkit.JWebCoreJavaBridge.handleMessage(JWebCoreJavaBridge.java:108) 06-26 09:24:58.840: W/System.err(3924): at android.os.Handler.dispatchMessage(Handler.java:99) 06-26 09:24:58.840: W/System.err(3924): at android.os.Looper.loop(Looper.java:137) 06-26 09:24:58.840: W/System.err(3924): at android.webkit.WebViewCore$WebCoreThread.run(WebViewCore.java:1064) 06-26 09:24:58.840: W/System.err(3924): at java.lang.Thread.run(Thread.java:856) ``` PS: I'm using sencha touch, but the sql is being executed in android via plugin. Thanks
These three questions might help you figure out the syntax problem. [IF-Statement in SQLite: update or insert?](https://stackoverflow.com/questions/7861663/if-statement-in-sqlite-update-or-insert) [IF() statement alternative in SQLite](https://stackoverflow.com/questions/4874285/if-statement-alternative-in-sqlite) [Does sqlite support any kind of IF(condition) statement in a select](https://stackoverflow.com/questions/1294619/does-sqlite-supports-any-kind-of-ifcondition-statement-in-the-select) Hope this helps
You can try using a CASE expression. Quoting [the SQLite documentation](http://www.sqlite.org/lang_expr.html): > A CASE expression serves a role similar to IF-THEN-ELSE in other programming languages. Otherwise, you can do this as three separate SQLite operations, with the if/else logic in JavaScript. Bear in mind that SQLite is local, and so there is no network round-trip overhead per operation, so the cost of doing the if/else in JavaScript versus in the database should not be dramatic.
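The `CASE` route can be tried out against the same engine Android embeds, via Python's stdlib `sqlite3`. A sketch with a throwaway in-memory table (table and column names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE reg (codigoAnimal INTEGER, cgn TEXT)')
con.execute("INSERT INTO reg VALUES (1, 'old')")

# CASE plays the role of IF/ELSE inside a single SQL statement.
row = con.execute(
    "SELECT CASE WHEN EXISTS (SELECT 1 FROM reg WHERE codigoAnimal = ?) "
    "THEN 'update' ELSE 'insert' END", (1,)
).fetchone()
print(row[0])  # -> update
```

For the insert-or-update itself, SQLite's `INSERT OR REPLACE` often avoids branching entirely.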
How to execute a query with IF and ELSE in Android sqlite?
[ "", "android", "sql", "sqlite", "if-statement", "conditional-statements", "" ]
Using matplotlib, it seems the only time to set the `sharex` or `sharey` axis parameters are during sub\_plot creation (`add_subplot()`, `subplot()`, `subplots()`). For an `axes` class there are methods for getting axis sharing (`get_shared_x_axes()`, `get_shared_y_axes()`), but no corresponding methods for setting sharing. Maybe this is an API oversight, or perhaps it did not fit architecturally. Is there a way to change the shared axis parameter? For those that ask why: I'm using a matrix of plots dynamically, and can control this using view limits, but it just seems like there could be an easier way, and turning sharing on/off and using autoscale would be it. Thanks.
The answer is that the way shared axes are set up is to share some of the internal state of the two axes. It is a tad tricky to get right and the code to do it on-the-fly (both linking and unlinking) doesn't exist in the library yet. See this [PR](https://github.com/matplotlib/matplotlib/pull/1312) for on-going work on un-linking axes. Help testing and developing this feature would be appreciated.
Just to mention that a method for sharing axes after their creation does exist by now. For two axes `ax1` and `ax2` you can use ``` ax1.get_shared_x_axes().join(ax1, ax2) ``` See [How share x axis of two subplots after they are created?](https://stackoverflow.com/questions/42973223/how-share-x-axis-of-two-subplots-after-they-are-created).
matplotlib set shared axis
[ "", "python", "matplotlib", "" ]
I have a table with the following pieces of ordered data: ``` Employee TimeX TimeY Type Date ----------------------------------------------- 1 0800 0900 'A' 1/1/2013 1 0930 1300 'B' 1/1/2013 1 0600 0845 'A' 1/2/2013 1 0925 1300 'B' 1/2/2013 1 1100 1400 'A' 1/3/2013 1 0500 0700 'A' 1/4/2013 1 0715 0800 'B' 1/4/2013 ``` What I need is to get a count of minutes between TimeY of Type A and TimeX of Type B, for each matching pair. Due to design beyond my control, I have no way to link an 'A' and a 'B' together other than just sequentially by timestamp. And sadly, no, I cannot guarantee that all rows of type 'A' will be followed by rows of type 'B', so any 'A' not followed by a 'B' should be ignored. However, no 'B' will ever be followed by another 'B'. Basically, this is what I'd like to see: ``` Employee Duration --------------------- 1 30 1 40 1 15 ``` Is there a way to do this easily? The closest solution I've found here involves joining on the dates, but that's not going to work in this case. The only possible solutions I'm coming up with late in the afternoon are overly complicated and not panning out. **Edit:** Thanks for the responses! That's some pretty impressive SQL wrangling! I went with Marc's answer, as it was the easiest to read, but thanks to Gordon for providing the inspiration to Marc's answer and to Nenad for the effort along the lines that I was trying.
``` SELECT a.Employee, a.TimeY, b.TimeX FROM Table1 a CROSS APPLY ( SELECT TOP(1) t.TimeX FROM Table1 t WHERE a.[Date] = t.[Date] AND a.Employee = t.Employee AND a.TimeY < t.TimeX AND t.[Type] = 'B' ORDER BY t.TimeX ASC ) b WHERE a.[Type] = 'A' ORDER BY a.Employee ASC ; ``` This doesn't actually do the subtraction, since I'm not clear on the types of TimeX and TimeY. This is similar to the correlated subquery answer, but I think the CROSS APPLY makes it easier to read.
It's 2AM, this is probably one of the ugliest queries I have ever written and I am pretty sure that there are ways to simplify some parts a bit. But, important - it's working :) ``` ;WITH CTE1 AS ( --first CTE is simply to get row numbering over all dates SELECT *, ROW_NUMBER() OVER (ORDER BY [Date],[Type]) RN FROM Table1 ) , RCTE1 AS ( --recursive cte is going row-by-row checking if next type is same or different SELECT *, 1 AS L FROM CTE1 WHERE RN =1 UNION ALL --assigning same L if next is same, L+1 if different SELECT c.*, CASE WHEN r.Type = c.Type THEN L ELSE L+1 END AS L FROM RCTE1 r INNER JOIN CTE1 c ON r.RN +1 = c.RN ) , CTE2 AS ( --here we search for same L values SELECT *, ROW_NUMBER() OVER (PARTITION BY L ORDER BY RN DESC) RN2 FROM RCTE1 ) , CTE3 AS ( --and eliminate the rows not needed (ie A in front of A) SELECT *, ROW_NUMBER() OVER (PARTITION BY [Type] ORDER BY L) RN3 FROM CTE2 WHERE RN2 =1 ) -- at the end join CTE3 based on same RN3 and different type SELECT * -- and some datetime operations to get times from strings , DATEDIFF(MI,DATEADD(MI,CAST(RIGHT(A.TimeY,2) AS INT) , DATEADD(HH,CAST(LEFT(A.TimeY,2) AS INT),0)), DATEADD(MI,CAST(RIGHT(B.TimeX,2) AS INT) , DATEADD(HH,CAST(LEFT(B.TimeX,2) AS INT),0))) AS Goal FROM CTE3 a INNER JOIN CTE3 B ON a.RN3 = b.RN3 AND a.[Type] = 'A' AND b.[Type] = 'B' -- maxrecursion off so Recursive CTE can work OPTION (MAXRECURSION 0) ``` **[SQLFiddle DEMO](http://www.sqlfiddle.com/#!6/f019e/6)**
Combining consecutive SQL rows, every other one, with a caveat
[ "", "sql", "sql-server", "" ]
What is an efficient way for working with large prime numbers with Python? You search on here or on google, and you find many different methods for doing so... sieves, primality test algorithms... Which ways work for larger primes?
For determining if a number is a prime, there are sieves and primality tests. ``` # for large numbers, xrange will throw an error. # OverflowError: Python int too large to convert to C long # to get over this: def mrange(start, stop, step): while start < stop: yield start start += step # benchmarked on an old single-core system with 2GB RAM. from math import sqrt def is_prime(num): if num == 2: return True if (num < 2) or (num % 2 == 0): return False return all(num % i for i in mrange(3, int(sqrt(num)) + 1, 2)) # benchmark is_prime(100**10-1) using mrange # 10000 calls, 53191 per second. # 60006 function calls in 0.190 seconds. ``` This seems to be the fastest. There is another version you will often see, using `not any`: ``` def is_prime(num): # ... return not any(num % i == 0 for i in mrange(3, int(sqrt(num)) + 1, 2)) ``` However, in the benchmarks I got `70006 function calls in 0.272 seconds.` with `not any`, versus `60006 function calls in 0.190 seconds.` with `all`, while testing if `100**10-1` was prime. If you need to find the next highest prime, this method will not work for you. You need to go with a primality test; I have found the [Miller-Rabin](http://en.wikipedia.org/wiki/Miller-Rabin_primality_test) algorithm to be a good choice. It is a little slower than the [Fermat](http://en.wikipedia.org/wiki/Fermat_primality_test) method, but more accurate against pseudoprimes. Using the above-mentioned trial-division method takes 5+ minutes on this system.
`Miller-Rabin` algorithm: ``` from random import randrange def is_prime(n, k=10): if n == 2: return True if not n & 1: return False def check(a, s, d, n): x = pow(a, d, n) if x == 1: return True for i in xrange(s - 1): if x == n - 1: return True x = pow(x, 2, n) return x == n - 1 s = 0 d = n - 1 while d % 2 == 0: d >>= 1 s += 1 for i in xrange(k): a = randrange(2, n - 1) if not check(a, s, d, n): return False return True ``` `Fermat` algorithm: ``` def is_prime(num): if num == 2: return True if not num & 1: return False return pow(2, num-1, num) == 1 ``` To get the next highest prime: ``` def next_prime(num): if (not num & 1) and (num != 2): num += 1 if is_prime(num): num += 2 while True: if is_prime(num): break num += 2 return num print next_prime(100**10-1) # returns `100000000000000000039` # benchmark next_prime(100**10-1) using Miller-Rabin algorithm. 1000 calls, 337 per second. 258669 function calls in 2.971 seconds ``` Using the `Fermat` test, we got a benchmark of `45006 function calls in 0.885 seconds.`, but you run a higher chance of pseudoprimes. So, if you just need to check whether a number is prime or not, the first method for `is_prime` works just fine. It is the fastest, if you use the `mrange` method with it. Ideally, you would want to store the primes generated by `next_prime` and just read from that. For example, using `next_prime` with the `Miller-Rabin` algorithm: ``` print next_prime(10**301) # prints in 2.9s on the old single-core system, as opposed to Fermat's 2.8s 1000000000000000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000531 ``` You wouldn't be able to do this with `return all(num % i for i in mrange(3, int(sqrt(num)) + 1, 2))` in a timely fashion. I can't even do it on this old system.
And to be sure that `next_prime(10**301)` and `Miller-Rabin` yield a correct value, this was also tested using the `Fermat` and the `Solovay-Strassen` algorithms. See: [fermat.py](https://gist.github.com/bnlucas/5857437), [miller\_rabin.py](https://gist.github.com/bnlucas/5857478), and [solovay\_strassen.py](https://gist.github.com/bnlucas/5857525) on *gist.github.com*. Edit: Fixed a bug in `next_prime`.
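A cheap way to build confidence in a probabilistic test like Miller-Rabin is to cross-check it against plain trial division over a small range. A sketch (restated for Python 3, so `range` replaces `xrange`; the seed only makes the random witness choice reproducible):

```python
import random

random.seed(0)  # reproducible choice of witnesses

def is_prime_mr(n, k=10):
    """Miller-Rabin, restated from the answer above in Python 3."""
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    s, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

def is_prime_trial(n):
    # Slow but unarguably correct reference for small n.
    return n >= 2 and all(n % i for i in range(2, int(n ** 0.5) + 1))

mismatches = [n for n in range(2, 2000) if is_prime_mr(n) != is_prime_trial(n)]
print(mismatches)  # -> []
```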
In response to the possible inaccuracy of `math.sqrt` I have benchmarked two different methods for performing an `isqrt(n)` call. `isqrt_2(n)` is coming from [this article](http://en.wikipedia.org/wiki/Integer_square_root) and [this C code](http://home.utah.edu/~nahaj/factoring/isqrt.c.html). The most common method seen: ``` def isqrt_1(n): x = n while True: y = (n // x + x) // 2 if x <= y: return x x = y cProfile.run('isqrt_1(10**308)') ``` Benchmark results: ``` isqrt_1 at 10000 iterations: 12.25 Can perform 816 calls per second. 10006 function calls in 12.904 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 12.904 12.904 <string>:1(<module>) 1 0.690 0.690 12.904 12.904 math.py:10(func) 10000 12.213 0.001 12.213 0.001 math.py:24(isqrt_1) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1 0.000 0.000 0.000 0.000 {range} 2 0.000 0.000 0.000 0.000 {time.time} ``` This method is incredibly slow. So we try the next method: ``` def isqrt_2(n): if n < 0: raise ValueError('Square root is not defined for negative numbers.') x = int(n) if x == 0: return 0 a, b = divmod(x.bit_length(), 2) n = 2 ** (a + b) while True: y = (n + x // n) >> 1 if y >= n: return n n = y cProfile.run('isqrt_2(10**308)') ``` Benchmark results: ``` isqrt_2 at 10000 iterations: 0.391000032425 Can perform 25575 calls per second. 
30006 function calls in 1.059 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 1.059 1.059 <string>:1(<module>) 1 0.687 0.687 1.059 1.059 math.py:10(func) 10000 0.348 0.000 0.372 0.000 math.py:34(isqrt_2) 10000 0.013 0.000 0.013 0.000 {divmod} 10000 0.011 0.000 0.011 0.000 {method 'bit_length' of 'long' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1 0.000 0.000 0.000 0.000 {range} 2 0.000 0.000 0.000 0.000 {time.time} ``` As you can see, the difference between `isqrt_1(n)` and `isqrt_2(n)` is an amazing `11.858999967575 seconds` in favour of the latter. You can see this in action [on Ideone.com](http://ideone.com/d5Hvn) or get [the code](https://gist.github.com/bnlucas/5879594). **note: Ideone.com resulted in execution timeout for `isqrt_1(n)` so the benchmark was reduced to `10**200`**
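Since this benchmark was written, CPython (3.8+) gained an exact arbitrary-precision `math.isqrt`, which makes a handy correctness reference for the hand-rolled versions. A sketch restating `isqrt_2` and checking it against the stdlib:

```python
import math

def isqrt_2(n):
    """Integer Newton's method, as in the answer above."""
    if n < 0:
        raise ValueError('Square root is not defined for negative numbers.')
    x = int(n)
    if x == 0:
        return 0
    a, b = divmod(x.bit_length(), 2)
    n = 2 ** (a + b)  # initial guess >= sqrt(x)
    while True:
        y = (n + x // n) >> 1
        if y >= n:
            return n
        n = y

big = 10 ** 308
print(isqrt_2(big) == math.isqrt(big))  # -> True
print(isqrt_2(15), math.isqrt(15))      # -> 3 3
```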
Working with large primes in Python
[ "", "python", "algorithm", "primes", "" ]
I read an Excel Sheet into a pandas DataFrame this way: ``` import pandas as pd xl = pd.ExcelFile("Path + filename") df = xl.parse("Sheet1") ``` the first cell's value of each column is selected as the column name for the dataFrame, I want to specify my own column names, How do I do this?
call `.parse` with `header=None` keyword argument. ``` df = xl.parse("Sheet1", header=None) ```
This thread is 5 years old and outdated now, but still shows up at the top of the list from a generic search. So I am adding this note. Pandas now (v0.22) has a keyword to specify column names when parsing Excel files. Use: ``` import pandas as pd xl = pd.ExcelFile("Path + filename") df = xl.parse("Sheet 1", header=None, names=['A', 'B', 'C']) ``` If header=None is not set, pd seems to consider the first row as the header and delete it during parsing. If there is indeed a header, but you don't want to use it, you have two choices: either (1) use the "names" kwarg only; or (2) use "names" with header=None and skiprows=1. I personally prefer the second option, since it clearly makes note that the input file is not in the format I want, and that I am doing something to work around it.
How to specify column names while reading an Excel file using Pandas?
[ "", "python", "pandas", "" ]
I have a celery project connected to a MySQL databases. One of the tables is defined like this: ``` class MyQueues(Base): __tablename__ = 'accepted_queues' id = sa.Column(sa.Integer, primary_key=True) customer = sa.Column(sa.String(length=50), nullable=False) accepted = sa.Column(sa.Boolean, default=True, nullable=False) denied = sa.Column(sa.Boolean, default=True, nullable=False) ``` Also, in the settings I have ``` THREADS = 4 ``` And I am stuck in a function in `code.py`: ``` def load_accepted_queues(session, mode=None): #make query pool = session.query(MyQueues.customer, MyQueues.accepted, MyQueues.denied) #filter conditions if (mode == 'XXX'): pool = pool.filter_by(accepted=1) elif (mode == 'YYY'): pool = pool.filter_by(denied=1) elif (mode is None): pool = pool.filter(\ sa.or_(MyQueues.accepted == 1, MyQueues.denied == 1) ) #generate a dictionary with data for i in pool: #<---------- line 90 in the error l.update({i.customer: {'customer': i.customer, 'accepted': i.accepted, 'denied': i.denied}}) ``` When running this I get an error: ``` [20130626 115343] Traceback (most recent call last): File "/home/me/code/processing/helpers.py", line 129, in wrapper ret_value = func(session, *args, **kwargs) File "/home/me/code/processing/test.py", line 90, in load_accepted_queues for i in pool: #generate a dictionary with data File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2341, in instances fetch = cursor.fetchall() File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3205, in fetchall l = self.process_rows(self._fetchall_impl()) File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3174, in _fetchall_impl self._non_result() File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3179, in _non_result "This result object does not return rows. " ResourceClosedError: This result object does not return rows. 
It has been closed automatically ``` So mainly it is the part ``` ResourceClosedError: This result object does not return rows. It has been closed automatically ``` and sometimes also this error: > DBAPIError: (Error) (, AssertionError('Result length not requested > length:\nExpected=1. Actual=0. Position: 21. Data Length: 21',)) > 'SELECT accepted\_queues.customer AS accepted\_queues\_customer, > accepted\_queues.accepted AS accepted\_queues\_accepted, > accepted\_queues.denied AS accepted\_queues\_denied \nFROM > accepted\_queues \nWHERE accepted\_queues.accepted = %s OR > accepted\_queues.denied = %s' (1, 1) I cannot reproduce the errror properly as it normally happens when processing a lot of data. I tried to change `THREADS = 4` to `1` and errors disappeared. Anyway, it is not a solution as I need the number of threads to be kept on `4`. Also, I am confused about the need to use ``` for i in pool: #<---------- line 90 in the error ``` or ``` for i in pool.all(): #<---------- line 90 in the error ``` and could not find a proper explanation of it. All together: any advise to skip these difficulties?
> All together: any advise to skip these difficulties? Yes. You absolutely *cannot* use a Session (or any objects which are associated with that Session), or a Connection, in more than one thread simultaneously, especially with MySQL-Python whose DBAPI connections are very thread-unsafe\*. You must organize your application such that each thread deals with its own, dedicated MySQL-Python connection (and therefore SQLAlchemy Connection/ Session / objects associated with that Session) with no leakage to any other thread. * Edit: alternatively, you can make use of mutexes to limit access to the Session/Connection/DBAPI connection to just one of those threads at a time, though this is less common because the high degree of locking needed tends to defeat the purpose of using multiple threads in the first place.
I got the same error while making a query to `SQL-Server` procedure using `SQLAlchemy`. In my case, **adding `SET NOCOUNT ON` to the stored procedure** fixed the problem. ``` ALTER PROCEDURE your_procedure_name AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for your procedure here SELECT * FROM your_table_name; END; ``` Check out [this article](https://www.sqlshack.com/set-nocount-on-statement-usage-and-performance-benefits-in-sql-server/) for more details
Celery and SQLAlchemy - This result object does not return rows. It has been closed automatically
[ "", "python", "multithreading", "sqlalchemy", "celery", "" ]
I’m working with large text datasets, size of about 1 GB (the smallest file has about 2 million lines). Each line is supposed to be split into a number of columns. I say *supposed* because there are exceptions; while the normal lines end with `\r\n`, a good number of them are incorrectly divided onto 2 to 3 lines. Given that there are 10 columns, each line is supposed to have the following format: ``` col_1 | col_2 | col_3 | ... | col_10\r\n ``` The exceptions have this format: ``` 1. col_1 | col_2 | col_3 ...\n ... | col_10\r\n 2. col_1 | col_2 | col_3 ...\n ... | col_10\n \r\n ``` What would be the fastest way to correct these exceptions? I did a simple find/replace in a text editor (TextMate, on Mac) on a sample of 1000 lines using the regular expression `(^[^\r\n]*)\n` (replacing with `$1`), and it works perfectly. But the text editor apparently cannot handle the big files (>= 2 million lines). Can those be done with `sed` or `grep` (or in some other command-line tool, or even in Python) using equivalent regular expressions, and how?
Your approach: ``` perl -pe 's/(^[^\r\n]*)\n/\1/' input > output ``` Or, a negative lookbehind: ``` perl -pe 's/(?<!\r)\n//' input > output ``` Or, remove all `\n` and replace each `\r` with `\r\n`: ``` perl -pe 's/\n//; s/\r/\r\n/' input > output ```
Why not awk? : ``` awk 'BEGIN{RS="\r\n"; FS="\n"; OFS=" "; ORS="\r\n";} {print $1,$2}' input ``` Or tr + sed: ``` cat input | tr '\n' ' ' | tr '\r' '\n' | sed 's/^ \(.*\)/\1\r/g' ```
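Since the question also mentions Python, here is a hedged sketch of the same fix there; the data is read as bytes so `\r` survives untouched, and the sample input is invented to mirror the question's two exception cases:

```python
import re

def fix_line_breaks(data: bytes) -> bytes:
    """Remove every newline that is NOT preceded by a carriage return,
    i.e. the stray \\n characters splitting records in the question."""
    return re.sub(rb"(?<!\r)\n", b"", data)

# Two broken records shaped like the question's exception cases 1 and 2:
broken = b"a|b|c\n|d\r\na|b|c\n|d\n\r\n"
print(fix_line_breaks(broken))  # b'a|b|c|d\r\na|b|c|d\r\n'
```

For a file in the gigabyte range this needs the whole file in memory (`open(path, 'rb').read()`), which is usually fine at 1 GB; streaming in chunks would need care at chunk boundaries so a lookbehind is never split.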
Fast multiline regex find/replace of \r and \n
[ "", "python", "regex", "command-line", "sed", "grep", "" ]
I have a table with an EntryDate and a ChecklistDay column. They are Date and Integer columns. I want to return two columns, one named StartDate and another EndDate, both getting their values from the EntryDate column. StartDate would be returned from... ``` SELECT EntryDate AS StartDate FROM TABLE WHERE ChecklistDay = 1 ``` EndDate would be returned from... ``` SELECT EntryDate AS EndDate FROM TABLE WHERE ChecklistDay = 10 ``` Now I only want to return a row if a value is returned for both StartDate and EndDate, meaning that a ChecklistDay must have both values 1 and 10 in order for it to select the two EntryDates and return a row. What kind of query do I use here?
You can join to the same table twice. ``` select startDt.EntryDate as StartDate, endDt.EntryDate as EndDate from table startDt inner join table endDt on startDt.id = endDt.id where startDt.ChecklistDay = 1 and endDt.CheckListDay = 10 ```
Would this help: ``` SELECT CASE ChecklistDay WHEN 1 THEN EntryDate ELSE NULL END AS StartDate, CASE ChecklistDay WHEN 10 THEN EntryDate ELSE NULL END AS EndDate FROM Table ```
Query To Return Multiple Values From A Single Column Based On Value Of Another Column
[ "", "sql", "" ]
I'm trying to send an email with a delay, since the condition that triggers the email can be ON for quite some time and I don't want to receive an infinite amount of email alerts... For this I'm trying threading.Timer to give it a delay and only send the email every 15 minutes... I tried the long 900-second delay on the .timer object and it works (used a time script)... but when I run it to send the email, it first sends the email and then enters the timer, not running the other parts of the script... The email function works just fine... running Python 2.6.6 ``` #!/usr/bin/env python import time import smtplib #for sending emails when warning import threading if True: #the possibility exists that the conditional is met several times thus sending lots of emails t = threading.Timer(300,send_email('Tank temperature Overheat',tank_temp)) t.start() # after 300 seconds, the email will be sent but the script will keep running print "rest of the script keeps running" print "keeps running the script and after 300s the email is sent" ``` Any ideas on why it's not working, or another workaround? After playing with it... it does the sleep but then sends all the emails... not one email per the interval set... i.e. ``` n=300 start = time.time() while (time.time() - start < n): led_temp = 56 if led_temp > 55: t = threading.Timer(100, lambda: send_email('Lights temperature Overheat',led_temp)) t.start() ``` Instead of receiving one email every 100 seconds I get 36 emails after 300 seconds... Any idea why? (Reformat from comment below) After reading the answer on threading I understood the problem... I'm still new to Python and had never used threading, so I guess that was the root cause of the 36 emails I received when creating endless amounts of threads...
I fixed it by using a flag, and tested the code like this: ``` def raise_flag(): global start interval = 300 if start > interval: start = 0 flag = True print "Flag True, sending email" return flag else: flag = False start = start + 1 print "Flag OFF", start time.sleep(1) return flag led_temp = 27 while led_temp > 26: flag = raise_flag() if flag: send_email('Tank temperature Overheat',led_temp) flag = False print "Sent email" ```
As the second argument to `Timer` you should pass a callable, but in your case you are calling the function and passing the result to `threading.Timer`. You should use a `lambda` expression instead: ``` #!/usr/bin/env python import time import smtplib import threading if True: t = threading.Timer(300, lambda: send_email('Tank temperature Overheat',tank_temp)) t.start() print "rest of the script keeps running" print "keeps running the script and after 300s the email is sent" ``` The expression: ``` lambda: send_email('Tank temperature Overheat',tank_temp) ``` evaluates to a function without arguments that, when called, executes `send_email` with those arguments, while in your code you have: ``` t = threading.Timer(300,send_email('Tank temperature Overheat',tank_temp)) ``` This will first evaluate all the arguments, and hence call `send_email`, and *then* create the `Timer` object. --- Regarding the issue with the 36 emails in 300 seconds, in your code: ``` n=300 start = time.time() while (time.time() - start < n): led_temp = 56 if led_temp > 55: t = threading.Timer(100, lambda: send_email('Lights temperature Overheat',led_temp)) t.start() ``` The `while` will create tons of threads during the 300 seconds of iterations. I have no idea why you think this loop should send you an email every 100 seconds. Each thread will send you an email after 100 seconds, but there are a lot of threads. If you want to send only 3 emails, then you should `break` out of the loop after three iterations. Also, the iterations are probably too fast, hence all timers will send all the emails almost at once, since their timeouts are almost equal. You can see this problem with the following code: ``` >>> import threading >>> def send_email(): print("email sent!") ... >>> for _ in range(5): ... t = threading.Timer(7, send_email) ... t.start() ... >>> email sent! email sent! email sent! email sent! email sent! ``` Even though 5 different `Timer`s are created, they time out almost at the same time, so you'll see all the `email sent!` appear at the same time. You could modify the timeout to take this into account: ``` >>> for i in range(5): ... t = threading.Timer(7 + i, send_email) ... t.start() ... >>> email sent! email sent! email sent! email sent! email sent! ``` In the above code you'll see `email sent!` appear one at a time with an interval of about 1 second. Lastly I'd add that you have no way to control when an email will be received. This is handled in different ways by the different services, so there is no way to guarantee that, when running your code, the recipient will receive an email every 100 seconds.
threading.Timer can be used in two ways: The first option is to pass the callable and its arguments separately: ``` t = threading.Timer(interval=300, function=send_email, args=['Tank temperature Overheat', tank_temp]) t.start() ``` The second option is to use a lambda function (the interval must then be passed positionally, since a positional argument cannot follow a keyword argument): ``` t = threading.Timer(300, lambda: send_email('Tank temperature Overheat', tank_temp)) t.start() ```
Python, threading.timer object will not run function timer?
[ "", "python", "function", "timer", "delay", "sleep", "" ]
I print out an (m x n) table of values for debugging; however, I do not want the debug messages to be printed out in non-debugging mode. In C, this can be done with "#ifdef \_DEBUG" in code and defining \_DEBUG in the preprocessor definitions. What is the equivalent way in Python?
Python has module called "logging" See this question: [Using print statements only to debug](https://stackoverflow.com/questions/6579496/using-print-statements-only-to-debug-python) Or the basic tutorial: <http://docs.python.org/2/howto/logging.html>
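A minimal sketch of how `logging` replaces the `#ifdef _DEBUG` idiom: the call sites stay unchanged and the configured level decides whether debug output appears. The logger name and setup here are illustrative, not from the original answer (the API is the same in Python 2 and 3):

```python
import io
import logging

def make_logger(debug):
    """Configure a logger whose level decides whether debug output appears."""
    logger = logging.getLogger("table_dump")
    logger.handlers = []                 # reset so the example can run twice
    stream = io.StringIO()               # stand-in for stderr, easy to inspect
    logger.addHandler(logging.StreamHandler(stream))
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    return logger, stream

# Debug mode: the (m x n) table rows are emitted.
logger, out = make_logger(debug=True)
logger.debug("row 1: %s", [1, 2, 3])
print(bool(out.getvalue()))   # True

# Non-debug mode: the very same call produces nothing.
logger, out = make_logger(debug=False)
logger.debug("row 1: %s", [1, 2, 3])
print(bool(out.getvalue()))   # False
```

Unlike the C preprocessor approach, the choice is made at runtime, so the debug table can be switched on without rebuilding anything.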
You could define a global variable someplace, if that's what you want. However, probably the cleaner and more standard way is to read a config file (easy because you can write a config file in plain Python) and define DEBUG in there. So you've got a config file that looks like this: ``` # program.cfg # Other comments # And maybe other configuration settings DEBUG = True # Or False ``` And then in your code, you can either `import` your config file (if it's in a directory on the Python path and has a Python extension), or else you can `execfile` it. ``` cfg = {} execfile('program.cfg', cfg) # Execute the config file in the new "cfg" namespace. print cfg.get('DEBUG') # Access configuration settings like this. ```
Python - hide debugging message in non-debug mode
[ "", "python", "python-2.7", "" ]
This is a two part question: 1) Is it possible to retrieve the name of the partition that data lives in using a select statement, based on its `ROWID` or some other identifier? eg. ``` SELECT DATA_ID, CATEGORY, VALUE, **PARTITION_NAME** FROM MYTABLE WHERE CATEGORY = 'ABC' ``` 2) Is it possible to truncate a single partition of a table, without deleting the data stored in the other partitions? I have a table with over a billion rows, hash partitioned by category. Only a handful of the categories have problems with their data, so it does not make sense to recreate the entire table, but deleting data from the table, even with all constraints inactive, is taking far too long.
Thanks to your hint about the *rowid*, I found a solution. If you have the rowid, it should be possible to determine the object the row belongs to. A minimal example with 4 hash partitions: ``` CREATE TABLE pt (i NUMBER) PARTITION BY HASH (i) (PARTITION pt1, PARTITION pt2, PARTITION pt3, PARTITION pt4); INSERT INTO pt SELECT ROWNUM FROM all_objects WHERE ROWNUM < 20; ``` Now, each row has a `ROWID`. You can find out the object number via `DBMS_ROWID.ROWID_OBJECT`. The dictionary table `USER_OBJECTS` has then the object\_name (= the name of the table) and the subobject\_name (= the name of the partition): ``` SELECT i, ROWID AS row_id, dbms_rowid.rowid_object(ROWID) AS object_no, (SELECT subobject_name FROM user_objects WHERE object_id = dbms_rowid.rowid_object(pt.ROWID)) AS partition_name FROM pt ORDER BY 3; I ROW_ID OBJECT_NO PARTITION_NAME 6 AAALrYAAEAAAATRAAA 47832 PT1 11 AAALrYAAEAAAATRAAB 47832 PT1 13 AAALrYAAEAAAATRAAC 47832 PT1 9 AAALrZAAEAAAATZAAA 47833 PT2 10 AAALrZAAEAAAATZAAB 47833 PT2 12 AAALrZAAEAAAATZAAC 47833 PT2 17 AAALrZAAEAAAATZAAD 47833 PT2 19 AAALrZAAEAAAATZAAE 47833 PT2 2 AAALraAAEAAAAThAAA 47834 PT3 5 AAALraAAEAAAAThAAB 47834 PT3 18 AAALraAAEAAAAThAAD 47834 PT3 8 AAALraAAEAAAAThAAC 47834 PT3 1 AAALrbAAEAAAATpAAA 47835 PT4 3 AAALrbAAEAAAATpAAB 47835 PT4 4 AAALrbAAEAAAATpAAC 47835 PT4 7 AAALrbAAEAAAATpAAD 47835 PT4 ```
1) No, you cannot do that; you will have to query `all_tab_partitions` to find out the partition for a certain value. 2) `alter table x truncate partition y`
Oracle SQL: Selecting data and partition name from table and truncating partitions
[ "", "sql", "database", "oracle", "partitioning", "" ]
This is a two-parter: first, define a function, distance\_from\_zero, with one parameter (choose any parameter name you like). Second, have that function do the following: Check the type of the input it receives. If the type is int or float, the function should return the absolute value of the function input. If the type is any other type, the function should return "Not an integer or float!" code: ``` def distance_from_zero(n): print type(n) if type(n) == int or type(n) == float: var = abs(n) print var return n else: print "no!" return n var = input("Enter number:") print var distance_from_zero(var) ```
First, as the others have suggested, use `raw_input()`. Then, you can try to convert it to an `int` or `float`. If both of those fail, you don't have a number. If you want to keep the "original" type, you can use this: ``` def distance_from_zero(n): try: n = int(n) except ValueError: try: n = float(n) except ValueError: print "Not a number!" n = float("NaN") return abs(n) ``` If you don't mind if the input `1` is converted to `1.0`, then you can simplify the function: ``` def distance_from_zero(n): try: return abs(float(n)) except ValueError: print "Not a number!" return float("NaN") # or remove this line to return None ``` In both cases, you would call the function like this: ``` var = raw_input("Enter number: ") print distance_from_zero(var) ```
A couple of things: `input()` in Python 2.7 is equivalent to `eval(raw_input())`. So if you input "hello", it will raise a `NameError` (unless there is a variable hello). If you're working with Python 2.7, use `raw_input()`. However, if you are using python 3, then use `input()`, because `raw_input()` does not exist in Python 3 (and input is the exact same as raw\_input in 3) You also returned n and not `var`, the absolute value. ``` def distance_from_zero(n): try: return abs(float(n)) except ValueError: return "That is not an integer or float!" var = raw_input("Enter number:") print var distance_from_zero(var) ``` Also, for checking types, you should be using [`isinstance()`](http://docs.python.org/2/library/functions.html#isinstance).
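Building on this answer's closing note, here is a sketch of the `isinstance` variant, which is closer to the assignment's literal wording. Note that `bool` is a subclass of `int`, so it is excluded explicitly here (a judgment call, not part of the original answer):

```python
def distance_from_zero(n):
    # bool is a subclass of int, so True/False would otherwise slip through
    if isinstance(n, (int, float)) and not isinstance(n, bool):
        return abs(n)
    return "Not an integer or float!"

print(distance_from_zero(-7))     # 7
print(distance_from_zero(-3.5))   # 3.5
print(distance_from_zero("hi"))   # Not an integer or float!
print(distance_from_zero(True))   # Not an integer or float!
```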
Review: Built-In Functions
[ "", "python", "function", "python-2.7", "python-3.x", "" ]
I'm trying to rewrite a function using numpy which is originally in MATLAB. There's a logical indexing part which is as follows in MATLAB: ``` X = reshape(1:16, 4, 4).'; idx = [true, false, false, true]; X(idx, idx) ans = 1 4 13 16 ``` When I try to make it in numpy, I can't get the correct indexing: ``` X = np.arange(1, 17).reshape(4, 4) idx = [True, False, False, True] X[idx, idx] # Output: array([6, 1, 1, 6]) ``` What's the proper way of getting a grid from the matrix via logical indexing?
You could also write: ``` >>> X[np.ix_(idx,idx)] array([[ 1, 4], [13, 16]]) ```
``` In [1]: X = np.arange(1, 17).reshape(4, 4) In [2]: idx = np.array([True, False, False, True]) # note that here idx has to # be an array (not a list) # or boolean values will be # interpreted as integers In [3]: X[idx][:,idx] Out[3]: array([[ 1, 4], [13, 16]]) ```
Getting a grid of a matrix via logical indexing in Numpy
[ "", "python", "matlab", "numpy", "matrix-indexing", "" ]
Looks like my question is not clear enough so I'll try to rewrite it: I have a "file" table including, among other fields, "album" and "rating". Each album has a given number of entries in the file table, each having a rating. Rated files have a rating > 0. What I want is to get the average rating for each album (this can be done with the query below). BUT, I only want to include albums for which at least a given percentage (75% for instance) of their files are rated (rating > 0) ``` select album, avg(rating) AS avgRating from file group by album order by avgRating DESC ``` Can someone help with adding the missing clause?
(Updated, following comments): ``` select album, avg(rating) AS avgRating from file group by album having sum(case when rating > 0 then 1 end)*1.0 / count(*) > 0.5 order by avgRating DESC ```
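A small sqlite3 reproduction of this answer's query, with the threshold set to the 75% the question mentions (table contents invented for illustration): album A has only 2 of 3 files rated and is filtered out, while album B has all files rated and survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file (album TEXT, rating REAL)")
conn.executemany("INSERT INTO file VALUES (?, ?)",
                 [("A", 4), ("A", 5), ("A", 0),   # 2 of 3 rated -> ~67%
                  ("B", 3), ("B", 5)])            # 2 of 2 rated -> 100%

rows = conn.execute("""
    SELECT album, AVG(rating) AS avgRating
    FROM file
    GROUP BY album
    HAVING SUM(CASE WHEN rating > 0 THEN 1 END) * 1.0 / COUNT(*) >= 0.75
    ORDER BY avgRating DESC
""").fetchall()
print(rows)  # [('B', 4.0)]
```

The `CASE` without an `ELSE` yields NULL for unrated files, which `SUM` ignores, so the ratio is rated-count over total-count exactly as in the answer.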
If you mean average rating you need `HAVING avg(rating) > 0`
SQL (Sqlite) GROUP BY and COUNT
[ "", "sql", "sqlite", "" ]
``` def group(l,size): length = len(l) new = [] for i in range(0,length): for j in range(i,size): new[i].append(l[j]) print new ``` The function group(l, size) above takes a list and splits it into smaller lists of a given size. I need something like this > input: group([1, 2, 3, 4, 5, 6, 7, 8, 9], 3) > output: [[1, 2, 3], [4, 5, 6], [7, 8, 9]] But the above code raises an index-out-of-range error. What's wrong with the code? How do I append elements to a list of lists?
Use slice. ``` >>> def group(l, size): ... return [l[i:i+size] for i in range(0, len(l), size)] ... >>> group([1, 2, 3, 4, 5, 6, 7, 8, 9], 3) [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ```
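One behavior of this one-liner worth noting: when the length is not a multiple of `size`, the last sub-list is simply shorter, and an empty input yields an empty result:

```python
def group(l, size):
    # slice out consecutive windows of at most `size` elements
    return [l[i:i + size] for i in range(0, len(l), size)]

print(group([1, 2, 3, 4, 5, 6, 7, 8, 9], 3))  # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(group([1, 2, 3, 4, 5, 6, 7], 3))        # [[1, 2, 3], [4, 5, 6], [7]]
print(group([], 3))                           # []
```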
try: ``` def group(l,size): length = len(l) new = [] for i in range(0, length/size): new.append([]) for i in range(0,len(new)): for j in range(i*size,(i*size)+size): new[i].append(l[j]) print new ``` edit: no, don't do this. use slice like [falsetru illustrates](https://stackoverflow.com/a/17336342/2190131).
splitting list into smaller lists
[ "", "python", "list", "python-2.7", "" ]
I have two tables, **Subjects** and **UserDetails**. Columns of those two tables: * `Subjects` (RegNo, IndexNo, Subject\_1, Subject\_2, Subject\_3, Subject\_4) and * `UserDetails` (Name, DOB, RegNo, Address, ID) I want to get `Name`, `DOB`, `RegNo`, `Address`, `ID` from `UserDetails` and **IndexNo** from Subjects according to a specific subject. I created a query but it is not working; \_subject is the string variable which contains the subject name ``` SELECT UserDetails.Name,UserDetails.DOB, UserDetails.RegNo,UserDetails.Address, UserDetails.ID, Subjects.IndexNo FROM UserDetails INNER JOIN Subjects ON UserDetails.RegNo = Subjects.RegNo WHERE Subjects Subject_1 OR Subject_2 OR Subject_3 OR Subject_4 ='"+_subject+"'" ```
You will probably figure this out on your own, but you need to specify each part of the WHERE clause, like this: ``` WHERE Subjects.Subject_1 = '"+_subject+"' OR Subjects.Subject_2 = '"+_subject+"' OR Subjects.Subject_3 = '"+_subject+"' OR Subjects.Subject_4 = '"+_subject+"'" ```
Your where clause is invalid. Try ``` WHERE Subject_1 = '"+_subject+"' OR Subject_2 = '"+_subject+"' OR Subject_3 = '"+_subject+"' OR Subject_4 = '"+_subject+"'" ``` I'm assuming that your \_subject string has been sanitized and that a user couldn't just use it for a SQL injection attack ...
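On this answer's sanitization caveat: rather than concatenating `_subject` into the SQL text, a parameterized query avoids injection entirely. A hedged sketch using Python's DB-API placeholders against an illustrative sqlite table (SQL Server client libraries offer the same pattern with their own placeholder style); the `value IN (col1, ..., col4)` form also collapses the four ORs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Subjects
                (RegNo TEXT, IndexNo TEXT,
                 Subject_1 TEXT, Subject_2 TEXT,
                 Subject_3 TEXT, Subject_4 TEXT)""")
conn.execute("INSERT INTO Subjects VALUES ('R1', 'I1', 'Math', 'Art', 'Bio', 'PE')")
conn.execute("INSERT INTO Subjects VALUES ('R2', 'I2', 'Math', 'Geo', 'Bio', 'PE')")

subject = "Art"  # user input: passed as a parameter, never spliced into SQL
rows = conn.execute(
    """SELECT RegNo, IndexNo FROM Subjects
       WHERE ? IN (Subject_1, Subject_2, Subject_3, Subject_4)""",
    (subject,),
).fetchall()
print(rows)  # [('R1', 'I1')]
```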
MS sql inner join
[ "", "sql", "sql-server-2008", "" ]
Please consider the scenario below. I have a table like below ``` Tag | Id | Client | ....and more columns c 30 X c 40 Y c 50 X c 60 A c 30 B c 40 C d 50 D d 70 E d 80 X d 90 Z i 30 X i 90 Z i 100 X i 40 M ``` I want to select records from the table in such a way that if tag=i the rows below get removed from the result set ``` i 30 X i 90 Z ``` This is because the row with id=90 has already appeared with tag=d and client=Z. But the row ``` i 40 M ``` must not be deleted even though id=40 has already appeared with client=C, because the client column values are different. ``` DELETE FROM myTable WHERE tag=i AND id IN( SELECT id FROM myTable t1 INNER JOIN myTable t2 ON t1.id=t2.id WHERE tag=d or tag=c ) ```
You can use the following `CTE` with [`ROW_NUMBER`](http://msdn.microsoft.com/en-us/library/ms189798.aspx) to detect and delete duplicates according to your rule: ``` WITH CTE AS ( SELECT [Tag], [Id], [Client], RN=ROW_NUMBER()OVER(PARTITION BY [Id], [Client] ORDER BY [Tag]) FROM dbo.Tags ) DELETE FROM CTE WHERE RN > 1 AND [Tag] = @Tag; ``` [**DEMO**](http://sqlfiddle.com/#!6/6cb49/12/1) Deletes these records: ``` TAG ID CLIENT RN i 30 X 2 i 90 Z 2 ``` [**Over Clause**](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.110%29.aspx)
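To sanity-check which rows this rule targets, here is a pure-Python rendering of it (partition by `(Id, Client)`, order by `Tag`, and within each partition every row after the first with the requested tag is a duplicate); the row data is a subset of the question's table:

```python
from collections import defaultdict

rows = [("c", 30, "X"), ("c", 40, "Y"), ("d", 90, "Z"),
        ("i", 30, "X"), ("i", 90, "Z"), ("i", 40, "M")]

def rows_to_delete(rows, tag="i"):
    """Mimic ROW_NUMBER() OVER (PARTITION BY Id, Client ORDER BY Tag):
    within each (Id, Client) group, Tag-ordered rows after the first
    have rn > 1 and are deleted if their Tag matches."""
    groups = defaultdict(list)
    for r in sorted(rows, key=lambda r: r[0]):   # ORDER BY Tag
        groups[(r[1], r[2])].append(r)           # PARTITION BY Id, Client
    return [r for g in groups.values() for r in g[1:] if r[0] == tag]

print(rows_to_delete(rows))  # [('i', 30, 'X'), ('i', 90, 'Z')]
```

Note that `("i", 40, "M")` survives: it is alone in its `(40, "M")` partition, matching the question's requirement.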
This should do the trick... ``` declare @Tag as varchar(10) set @Tag = 'i' -- The select statement to view the record to be deleted select MyTable.Tag,MyTable.Id,MyTable.Client,RecordToDelete.totalCount from [SampleDB].dbo.[MyTable] inner join ( SELECT [Id] ,[Client], TotalCount = count(id) FROM [SampleDB].dbo.[MyTable] group by id,Client ) as RecordToDelete on RecordToDelete.Id=MyTable.Id and RecordToDelete.Client =MyTable.Client where RecordToDelete.totalCount>1 and MyTable.Tag = @Tag -- The delete statement delete [SampleDB].dbo.[MyTable] where MyTable.Tag = @Tag and MyTable.Id in ( --select MyTable.Tag,MyTable.Id,MyTable.Client,RecordToDelete.totalCount from [SampleDB].dbo.[MyTable] select MyTable.Id from [SampleDB].dbo.[MyTable] inner join ( SELECT [Id] ,[Client], TotalCount = count(id) FROM [SampleDB].dbo.[MyTable] group by id,Client ) as RecordToDelete on RecordToDelete.Id=MyTable.Id and RecordToDelete.Client =MyTable.Client where RecordToDelete.totalCount>1 and MyTable.Tag = @Tag ) ```
Delete duplicate record SQL Server
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have a simple join table with two id columns in SQL Server. Is there any way to select all rows in the exact order they were inserted? If I try a SELECT \*, even if I don't specify an ORDER BY clause, the rows are not returned in the order they were inserted, but ordered by the first key column. I know it's a weird question, but this table is very big and I need to check exactly when a strange behavior began, and unfortunately I don't have a timestamp column in my table. **UPDATE #1** I'll try to explain why I'm saying that the rows are not returned in 'natural' order when I SELECT \* FROM table without an ORDER BY clause. My table was something like this: ``` id1 id2 --------------- 1 1 2 2 3 3 4 4 5 5 5 6 ... and so on, with about 90.000+ rows ``` Now, I don't know why (probably a software bug inserted these rows), but my table has 4.5 million rows and looks like this: ``` id1 id2 --------------- 1 1 1 35986 1 44775 1 60816 1 62998 1 67514 1 67517 1 67701 1 67837 ... 1 75657 (100+ "strange" rows) 2 2 2 35986 2 44775 2 60816 2 62998 2 67514 2 67517 2 67701 2 67837 ... 2 75657 (100+ "strange" rows) ``` Crazy: my table now has millions of rows. I have to take a look at when this happened (when the rows were inserted) because I have to delete them, but I can't just delete using \*WHERE id2 IN (strange\_ids)\* because there are "right" id1 columns that belong to these id2 columns, and I can't delete those, so I'm trying to see exactly when these rows were inserted in order to delete them. When I SELECT \* FROM table, it returns them ordered by id1, like the table above, and the rows were not inserted in this order in my table. I think my table is not corrupted, because this is the second time this strange behavior has happened in the same way, but now I have so many rows that I can't delete them manually like I did the first time. Why are the rows not being returned in the order they were inserted?
These "strange rows" were definitely inserted yesterday and should be returned near the end of my table if I do a SELECT \* without an ORDER BY, shouldn't they?
As others have already written, you will not be able to get the rows out of the link table in the order they were inserted. If there is some sort of internal ordering of the rows in one or both of the tables that this link table is *joining*, then you can use that to try to figure out when the link table rows were created. Basically, they cannot have been created **BEFORE** both of the rows containing the PKs were created. But on the other hand, you will not be able to find out *how long* after that they were created. If you have decent backups, you could try to restore one or a few backups of varying age and then see if those backups also contain this *strange behaviour*. It could give you at least some clue about when the strangeness started. But the bottom line is that using just a select, there is no way to get the rows out of a table like this in the order they were inserted.
A `select` query with no `order by` does *not* retrieve the rows in any particular order. You have to have an `order by` to get an order. SQL Server does not have any default method for retrieving by insert order. You can do it, if you have the information in the row. The best way is a primary key identity column: ``` TableId int identity(1, 1) not null primary key ``` Such a column is incremented as each row is inserted. You can also have a `CreatedAt` column: ``` CreatedAt datetime default getdate() ``` However, this could have duplicates for simultaneous inserts. The key point, though, is that a `select` with no `order by` clause returns an unordered set of rows.
Return rows in the exact order they were inserted
[ "", "sql", "sql-server", "" ]
I'm looking for a way performing some custom escaping whenever certain properties are set, and corresponding unescaping code when those properties are retrieved from one of my objects. I've looked at \_\_setattr\_\_, \_\_getattr\_\_ and \_\_getattribute\_\_ but can't see how they relate to properties. For example I have a class such as: ``` class Bundle(object): def __init__(self): self.storage = Storage() @property def users_name(self): """ Get the user's name """ return self.storage.users_name @users_name.setter def users_name(self, users_name): """ Set the user's name """ # only permit letters and spaces. if not re.match('^[A-Za-z ]+$', users_name): msg = "'%s' is not a valid users_name. It must only contain letters and spaces" % users_name log.fatal(msg) raise ConfigurationError(msg) self.storage.users_name = users_name ... ``` There are several such properties on the object. What I want is a method that will only affect some of my properties (such as users\_name, but not 'storage'), and will escape/unescape the value when set/retrieved. 1. \_\_getattr\_\_ doesn't seem right because this code should be called for new and existing properties. 2. \_\_setattr\_\_ might work but I don't understand how it fits in with my @property methods. For example, will \_\_setattr\_\_ always be called instead of my custom methods? How can I call my custom methods from within \_\_setattr\_\_? Is there an easy way of only affecting my @property properties without storing a list of properties to work on? 3. \_\_getattribute\_\_ I tried using this but ended up in infinite loops. That aside, my questions are roughly the same as for \_\_setattr\_\_ regarding whether it's possible to delegate to the methods declared with @property (and how to do that?), and how best to say "only work for these properties and not just *any* property of this class. What's the best way of approaching this? 
-- update -- To elaborate: This particular 'Bundle' class acts as a proxy, storing config values in different backends - in this case a FileConfigBackend that wraps ConfigParser. ConfigParser uses '%' signs for interpolation. However, some of the values I want to set can legitimately contain percent signs so I need to escape them. However, instead of escaping them explicitly for every property, I want to have some sort of magic method that will be called every time a property is set. It will then run `replace('%', '%%')` on the string. Similarly, I want a method that every time a property is retrieved will run `replace('%%', '%')` on the value before returning it. Also, since the majority of the properties in 'Bundle' are simply proxied to the backend storage, it'd be nice if there was a way to say 'if the property is in this list, just call `self.storage.PROPERTY_NAME = VALUE`'. Sometimes though I want to be able to override that assignment, for example to run a regex on the value to set. So really, I'm looking for a way of saying 'when a property is set, *always* call this method. Then if a concrete @property.setter exists, call that instead of doing `self.storage.key = value`'. (doh! I meant \_\_getattr\_\_ & \_\_setattr\_\_ not \_\_get\_\_ and \_\_set\_\_ - updated my question to reflect that)
The `@property` decorator creates an object following the [descriptor](http://docs.python.org/2/howto/descriptor.html) protocol, which is where the `__get__`, `__set__` methods live. The best way I can think of to add additional behavior to some properties would be creating your own decorator. This decorator would follow the same protocol, wrap the property originally created by Python, and add your desired escaping/unescaping behavior. This would allow you to mark the 'special' escaped properties as such: ``` @escaped @property def users_name(self): ... ``` Only those properties would get the special treatment. Here is a quick example of how this might be implemented: ``` class escaped: def __init__(self, property_to_wrap): # we wrap the property object created by the other decorator self.property = property_to_wrap def __get__(self, instance, objtype=None): # delegate to the original property original_value = self.property.__get__(instance, objtype) # ... change the data however you like return frotz(original_value) def __set__(self, instance, new_value): actual_value = frob(new_value) self.property.__set__(instance, actual_value) ... ``` The same should be repeated for all the descriptor methods. You will also have to delegate the `getter`, `setter`, `deleter` methods of `property` itself (to allow you to use syntax like `@users_name.setter` with your wrapped property). You can look at the [descriptor guide](http://docs.python.org/2/howto/descriptor.html#properties) for help. Some details can be found here too: [How do Python properties work?](https://stackoverflow.com/questions/6193556/how-do-python-properties-work).
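A self-contained sketch that fills in the parts this answer elides, with the abstract `frotz`/`frob` transformations replaced by the %-escaping from the question's update (all names illustrative). To sidestep the `getter`/`setter` delegation the answer mentions, the property is built in a single call instead of via `@users_name.setter`:

```python
class escaped:
    """Descriptor wrapping a property; escapes '%' on the way in and out."""
    def __init__(self, prop):
        self.prop = prop

    def __get__(self, instance, objtype=None):
        if instance is None:          # accessed on the class, not an instance
            return self
        return self.prop.__get__(instance, objtype).replace("%%", "%")

    def __set__(self, instance, value):
        self.prop.__set__(instance, value.replace("%", "%%"))


class Bundle(object):
    def __init__(self):
        self._users_name = ""

    def _get_users_name(self):
        return self._users_name

    def _set_users_name(self, value):
        self._users_name = value

    users_name = escaped(property(_get_users_name, _set_users_name))


b = Bundle()
b.users_name = "50% done"
print(b._users_name)   # stored escaped:   50%% done
print(b.users_name)    # read unescaped:   50% done
```

Because `escaped` defines both `__get__` and `__set__`, it is a data descriptor, so attribute access on instances always routes through it.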
I provided a different answer using a decorator, but if you really prefer to go with the magic methods, there is a way: ``` class Magic(object): @staticmethod def should_be_escaped(property_name): return not "__" in property_name def __getattribute__(self, property): value = object.__getattribute__(self, property) if Magic.should_be_escaped(property): value = value.replace("%%", "%") return value def __setattr__(self, property, value): if Magic.should_be_escaped(property): value = value.replace("%", "%%") object.__setattr__(self, property, value) ``` Calling `object.__getattribute__` and `object.__setattr__` allows you to reach the standard behavior from your magic methods, which should include resolving the properties. Using those is a way to avoid the infinite loops you mentioned. The reason this is the worse solution lies in `should_be_escaped`, where you have to decide whether a property should be escaped or not based on the name. This is difficult to do correctly, and you have to care about special names, your `storage` variable, etc. For example, the implementation of `should_be_escaped` above is so incomplete that it will try to call `replace` on your object's **methods**, so a check should be added for the property type. There is a lot of those corner-cases. This is why I suggested the decorator solution as cleaner - it explicitly marks who receives the special behavior and has no nasty unexpected consequences down the road.
How to automatically escape properties when they are being set/retrieved?
[ "", "python", "" ]
I have an input file consisting of lines with numbers and word sequences, structured like this: ``` \1-grams: number w1 number number w2 number \2-grams: number w1 w2 number number w1 w3 number number w2 w3 number \end\ ``` I want to store the word sequences (so-called n-grams) in such a way that I can easily retrieve both numbers for each unique n-gram. What I do now is the following: ``` all = {} ngrams = {} for line in open(file): m = re.search('\\\([1-9])-grams:',line.strip()) # find nr of words in sequence if m != None: n = int(m.group(1)) ngrams = {} # reinitialize dict for new n else: m = re.search('(-[0-9]+?[\.]?[0-9]+)\t([^\t]+)\t?(-[0-9]+\.[0-9]+)?',line.strip()) #find numbers and word sequence if m != None: ngrams[m.group(2)] = '{0}|{1}'.format(m.group(1), m.group(3)) elif "\end\\" == line.strip(): all[int(n)] = ngrams ``` In this way I can easily and quite quickly find the numbers for e.g. the sequence s='w1 w2' this way: ``` all[2][s] ``` The problem is that this way of storing is rather slow, especially when there are a lot (>100k) of n-grams, and I'm wondering whether there is a faster way to achieve the same result without a decrease in access speed. Am I doing something suboptimal here? Where can I improve? Thanks in advance, Joris
I would try doing fewer regexp searches. It's worth considering a few other things: * Storing all the data in a single dictionary may speed things up; a data hierarchy with extra layers doesn't help, perhaps counterintuitively. * Storing a tuple lets you avoid calling `.format()`. * In CPython, code in functions is faster than global code. Here's what it might look like: ``` def load(filename): ngrams = {} for line in open(filename): if line[0] == '\\': pass # just ignore all these lines else: first, rest = line.split(None, 1) middle, last = rest.rsplit(None, 1) ngrams[middle] = first, last return ngrams ngrams = load("ngrams.txt") ``` I would want to store `int(first), int(last)` rather than `first, last`. That would speed up access, but slow down load time. So it depends on your workload. I disagree with johnthexii: doing this in Python should be much faster than talking to a database, even sqlite, as long as the data set fits in memory. (If you use a database, that means you can do the load once and not have to repeat it, so sqlite may end up being exactly what you want—but you can't do that with a :memory: database.)
Regarding optimization of your code: 1) Compile the regular expressions before the loop; see the help for `re.compile`. 2) Avoid regular expressions whenever possible. For example, the `-grams` string prefixed with a number can be checked by a simple string comparison.
Fastest way to store n-grams (strings with variable amount of words) in python
[ "", "python", "n-gram", "" ]
I am creating a 'Euromillions Lottery generator' just for fun and I keep getting the same numbers printing out. How can I make it so that I get random numbers and never get the same number popping up: ``` from random import randint numbers = randint(1,50) stars = randint(1,11) print "Your lucky numbers are: ", numbers, numbers, numbers, numbers, numbers print "Your lucky stars are: " , stars, stars ``` The output is just: ``` >>> Your lucky numbers are: 41 41 41 41 41 >>> Your lucky stars are: 8 8 >>> Good bye! ``` How can I fix this? Regards
Build a list of the numbers, then shuffle and slice it. Since every number appears in the list exactly once, the slice can never contain duplicates: ``` from random import shuffle numbers = list(range(1,51)) shuffle(numbers) draw = numbers[:5] # first five shuffled numbers, all distinct print(draw) ```
You are generating **one** number then printing that out several times. Generate several numbers instead: ``` print "Your lucky numbers are: ", randint(1,50), randint(1,50), randint(1,50), randint(1,50), randint(1,50) ``` or generate a list: ``` numbers = [randint(1,50) for _ in range(5)] print "Your lucky numbers are: ", ' '.join(map(str, numbers)) ``` or better still, generate all permissible numbers (using `range()`), then pick a sample from that: ``` import random possible_numbers = range(1, 51) # end point is not included numbers = random.sample(possible_numbers, 5) print "Your lucky numbers are: ", ' '.join(map(str, numbers)) ``` Now `numbers` is guaranteed to consist of entirely unique numbers, because `random.sample` picks without replacement. The `numbers` variable does not magically update every time you print it because it refers only to the *result* of `random.randint(1, 50)`.
How to get unique numbers using randint in python?
[ "", "python", "loops", "random", "while-loop", "int", "" ]
If I have the following: ``` if a(my_var) and b(my_var): do something ``` Can I assume that `b()` is only evaluated if `a()` is `True`? Or might it do `b()` first? Asking because evaluating `b()` will cause an exception when `a()` is `False`.
With the wonderful help of [`help()`](http://docs.python.org/2/library/functions.html#help) (hah): ``` >>> help('and') Boolean operations ****************** or_test ::= and_test | or_test "or" and_test and_test ::= not_test | and_test "and" not_test not_test ::= comparison | "not" not_test ... The expression ``x and y`` first evaluates *x*; if *x* is false, its value is returned; otherwise, *y* is evaluated and the resulting value is returned. ... ``` So yes, if `a(my_var)` returns False, then the function `b` will not be called.
`b()` will only be evaluated if `a(my_var)` is `True`, yes. The `and` operator short-circuits if `a(my_var)` is falsey. From the [boolean operators documentation](http://docs.python.org/2/reference/expressions.html#boolean-operations): > The expression `x and y` first evaluates `x`; if `x` is false, its value is returned; otherwise, `y` is evaluated and the resulting value is returned. You can test this yourself with a function that prints something when called: ``` >>> def noisy(retval): ... print "Called, returning {!r}".format(retval) ... return retval ... >>> noisy(True) and noisy('whatever') Called, returning True Called, returning 'whatever' 'whatever' >>> noisy(False) and noisy('whatever') Called, returning False False ``` Python considers empty containers and numeric 0 values as false: ``` >>> noisy(0) and noisy('whatever') Called, returning 0 0 >>> noisy('') and noisy('whatever') Called, returning '' '' >>> noisy({}) and noisy('whatever') Called, returning {} {} ``` Custom classes can implement a [`__nonzero__` hook](http://docs.python.org/2/reference/datamodel.html#object.__nonzero__) to return a boolean flag for the same test, or implement a [`__len__` hook](http://docs.python.org/2/reference/datamodel.html#object.__len__) if they are a container type instead; returning `0` means the container is empty and is to be considered false. On a closely related note, the `or` operator does the same thing, but in reverse. If the first expression evaluates to true the second expression will not be evaluated: ``` >>> noisy('Non-empty string is true') or noisy('whatever') Called, returning 'Non-empty string is true' 'Non-empty string is true' >>> noisy('') or noisy('But an empty string is false') Called, returning '' Called, returning 'But an empty string is false' 'But an empty string is false' ```
python: order of AND execution
[ "", "python", "boolean-logic", "" ]
Ok here we go, i've been looking at this all day and i'm going crazy, i thought i'd done the hard bit but now i'm stuck. I'm making a highscores list for a game and i've already created a binary file that store the scores and names in order. Now i have to do the same thing but store the scores and names in a text file. This is the binary file part but i have no idea where to start with using a text file. ``` def newbinfile(): if not os.path.exists('tops.dat'): hs_data = [] make_file = open('tops.dat', 'wb') pickle.dump(hs_data, make_file) make_file.close else: None def highscore(score, name): entry = (score, name) hs_data = open('tops.dat', 'rb') highsc = pickle.load(hs_data) hs_data.close() hs_data = open('tops.dat', 'wb+') highsc.append(entry) highsc.sort(reverse=True) highsc = highsc[:5] pickle.dump(highsc, hs_data) hs_data.close() return highsc ``` Any help on where to start with this would be appreciated. Thanks
Python has built-in methods for writing to files that you can use to write to a text file. ``` writer = open("filename.txt", 'w+') # 'w+' truncates (overwrites) the file if it already exists # 'a+' opens for appending instead, keeping the existing contents t = (val1, val2) # a tuple of values you want to save for elem in t: writer.write(str(elem) + ', ') writer.write('\n') # write() doesn't automatically put a newline at the end writer.close() ```
I think you should use the `with` keyword, which closes the file for you even if an exception occurs. You'll find examples corresponding to what you want to do [here](http://preshing.com/20110920/the-python-with-statement-by-example). ``` with open('output.txt', 'w') as f: for l in ['Hi','there','!']: f.write(l + '\n') ```
Writing user input to a text file in python
[ "", "python", "" ]
I'm trying to create a stored procedure that goes through a "SALES" table and returns the best two customers of a pharmacy (the two customers who have spent more money). Here's some code: Table creation: ``` create table Customer ( Id_customer int identity(1,1) Primary Key, Name varchar(30), Address varchar(30), DOB datetime, ID_number int not null check (ID_number > 0), Contributor int not null check (Contributor > 0), Customer_number int not null check (Customer_number > 0) ) create table Sale ( Id_sale int identity(1,1) Primary Key, Id_customer int not null references Customer(Id_customer), Sale_date datetime, total_without_tax money, total_with_tax money ) ``` Well, I don't know if this is useful but I have a function that returns the total amount spent by a customer as long as I provide the customer's ID. Here it is: ``` CREATE FUNCTION [dbo].[fGetTotalSpent] ( @Id_customer int ) RETURNS money AS BEGIN declare @total money set @total = (select sum(total_with_tax) as 'Total Spent' from Sale where Id_customer=@Id_customer) return @total END ``` Can someone help me get the two top customers? Thanks Chiapa PS: Here's some data to insert so you can test it better: ``` insert into customer values ('Jack', 'Big street', '1975.02.01', 123456789, 123456789, 2234567891) insert into customer values ('Jim', 'Little street', '1985.02.01', 223456789, 223456789, 2234567891) insert into customer values ('John', 'Large street', '1977.02.01', 323456789, 323456789, 3234567891) insert into customer values ('Jenny', 'Huge street', '1979.02.01', 423456789, 423456789, 4234567891) insert into sale values (1, '2013.04.30', null, 20) insert into sale values (2, '2013.05.22', null, 10) insert into sale values (3, '2013.03.29', null, 30) insert into sale values (1, '2013.05.19', null, 34) insert into sale values (1, '2013.06.04', null, 21) insert into sale values (2, '2013.06.01', null, 10) insert into sale values (2, '2013.05.08', null, 26) ```
You can do this with a single query without any special functions: ``` select top 2 c.id_customer, c.name, sum(s.total_with_tax) from customer c join sale s on c.id_customer = s.id_customer group by c.id_customer, c.name order by sum(s.total_with_tax) desc ```
This joins onto a CTE with the top customers. Remove the `WITH TIES` option if you want exactly 2 and don't want to include customers tied with the same spend. ``` WITH Top2 AS (SELECT TOP 2 WITH TIES Id_customer, SUM(total_with_tax) AS total_with_tax FROM Sale GROUP BY Id_customer ORDER BY SUM(total_with_tax) DESC) SELECT * FROM Customer C JOIN Top2 T ON C.Id_customer = T.Id_customer ```
SQL Stored procedure to obtain top customers
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a dictionary and I would like to get some values from it based on some keys. For example, I have a dictionary for users with their first name, last name, username, address, age and so on. Let's say, I only want to get one value (name) - either last name or first name or username but in descending priority like shown below: (1) last name: if key exists, get value and stop checking. If not, move to next key. (2) first name: if key exists, get value and stop checking. If not, move to next key. (3) username: if key exists, get value or return null/empty ``` #my dict looks something like this myDict = {'age': ['value'], 'address': ['value1, value2'], 'firstName': ['value'], 'lastName': ['']} #List of keys I want to check in descending priority: lastName > firstName > userName keySet = ['lastName', 'firstName', 'userName'] ``` What I tried doing is to get all the possible values and put them into a list so I can retrieve the first element in the list. Obviously it didn't work out. ``` tempList = [] for key in keys: get_value = myDict.get(key) tempList .append(get_value) ``` Is there a better way to do this without using if else block?
One option if the number of keys is small is to use chained gets: ``` value = myDict.get('lastName', myDict.get('firstName', myDict.get('userName'))) ``` But if you have keySet defined, this might be clearer: ``` value = None for key in keySet: if key in myDict: value = myDict[key] break ``` The chained `get`s do not short-circuit, so all keys will be checked but only one used. If you have enough possible keys that the extra lookups matter, use the `for` loop.
Use `.get()`, which if the key is not found, returns `None`. ``` for i in keySet: temp = myDict.get(i) if temp is not None: print temp break ```
How to check if keys exist and retrieve value from Dictionary in descending priority
[ "", "python", "dictionary", "key-value", "" ]
I have a table of Date and a column of ID,FromDate and ToDate shown at below ``` ID FromDate ToDate -- -------------- -------------- 1 2013-06-10 00:00:00 2013-06-30 00:00:00 2 2013-05-10 00:00:00 2013-06-10 00:00:00 3 2012-08-01 00:00:00 2012-12-31 00:00:00 4 2013-07-10 00:00:00 2013-07-30 00:00:00 ``` I doing a checking query and make a column as Result whether is active, inactive or expired by using [getdate] shown at below ``` Select ID, FromDate, ToDate, (case when convert(varchar(8),FromDate,112) <= convert(varchar(8),getdate(),112) and convert(varchar(8),ToDate,112) <= convert(varchar(8),getdate(),112) then 'Expired' when convert(varchar(8),FromDate,112) <= convert(varchar(8),getdate(),112) and convert(varchar(8),ToDate,112) >= convert(varchar(8),getdate(),112) then 'Active' when convert(varchar(8),FromDate,112) >= convert(varchar(8),getdate(),112) and convert(varchar(8),ToDate,112) >= convert(varchar(8),getdate(),112) then 'Inactive' end )'Result' from Date ``` It will show the correct result ``` ID FromDate ToDate Result -- -------------- -------------- -------- 1 2013-06-10 00:00:00 2013-06-30 00:00:00 Active 2 2013-05-10 00:00:00 2013-06-10 00:00:00 Expired 3 2012-08-01 00:00:00 2012-12-31 00:00:00 Expired 4 2013-07-10 00:00:00 2013-07-30 00:00:00 Inactive ``` But my Case query is too long, is there anyway to shorten the code or improve the performance?
How about the query: ``` Select ID, FromDate, ToDate, (case when convert(varchar(8),ToDate,112) <= convert(varchar(8),getdate(),112) then 'Expired' when convert(varchar(8),getdate(),112) between convert(varchar(8),FromDate,112) and convert(varchar(8),ToDate,112) then 'Active' when convert(varchar(8),FromDate,112) >= convert(varchar(8),getdate(),112) then 'Inactive' end )'Result' from Date ``` Given ToDate is greater than FromDate.
``` SELECT ID, FromDate, ToDate, CASE WHEN ToDateStr <= NowStr THEN 'Expired' WHEN NowStr BETWEEN FromDateStr AND ToDateStr THEN 'Active' ELSE 'Inactive' END AS Result FROM ( SELECT ID, FromDate, ToDate, CONVERT(varchar(8),FromDate,112) AS FromDateStr, CONVERT(varchar(8),ToDate,112) AS ToDateStr, CONVERT(varchar(8),getdate(),112) AS NowStr FROM Date) A ``` Given that the From date is always lower or equal than the To Date
improve or shorten sql query select Result on table [Date]
[ "", "sql", "select", "" ]
Here I tried to cut first and second 30sec long video file from "path/connect.webm" to the strings out and out1. It works. But what I need to do is to concatenate these two strings and write that to a file "path/final.webm". So that I get a 60sec long video file "final.webm" at the end. But now i get first 30sec long video only as the output. Please help me. Thanks a lot in advance. Code in python: ``` import subprocess,os fname = "/home/xincoz/test/final.webm" fp = open(fname,'wb') ffmpeg_command = ["ffmpeg", "-i", "/home/xincoz/test/connect.webm", "-acodec", "copy", "-ss", "00:00:00", "-t", "00:00:30","-f", "webm", "pipe:1"] p = subprocess.Popen(ffmpeg_command,stdout=subprocess.PIPE) out, err = p.communicate() ffmpeg_command1 = ["ffmpeg", "-i", "/home/xincoz/test/connect.webm", "-acodec", "copy", "-ss", "00:00:31", "-t", "00:00:30","-f", "webm", "pipe:1"] p1 = subprocess.Popen(ffmpeg_command1,stdout=subprocess.PIPE) out1, err1 = p1.communicate() string = out + out1 print len(out) print len(out1) print len(string) fp.write(string) fp.close() ``` Please help me.
This code works for me. Thanks all for your great help. Thanks a lot. ``` import subprocess ffmpeg_command1 = ["ffmpeg", "-i", "/home/xincoz/test/connect.webm", "-acodec", "copy", "-ss", "00:00:00", "-t", "00:00:30", "/home/xincoz/test/output1.webm"] ffmpeg_command2 = ["ffmpeg", "-i", "/home/xincoz/test/connect.webm", "-acodec", "copy", "-ss", "00:00:30", "-t", "00:00:30", "/home/xincoz/test/output2.webm"] ffmpeg_command3 = ["mencoder", "-forceidx", "-ovc", "copy", "-oac", "pcm", "-o", "/home/xincoz/test/output.webm", "/home/xincoz/test/output1.webm", "/home/xincoz/test/output2.webm"] subprocess.call(ffmpeg_command1) subprocess.call(ffmpeg_command2) subprocess.Popen(ffmpeg_command3) ```
This seems like one of the two questions any reasonable person would ask when first trying to deal with video, programmatically. 'Why can't I just cut and paste the parts I want?' No one answers because the people who really can explain, are sick of the question, and people like me who have figured it out somewhat on their own, don't want to look stupid. But I don't mind - so here's the practical answer. To clip and join complex container formats, it's always more complicated than you think, and requires, at the very least, a solution-per-container. If you read the ffmpeg faq, in theory you can concatenate videos by reformatting them as mpg-v1 (maybe mpg-v2 also works) and then doing more or less what you're doing. ``` cat first_part.mpg second_part.mpg > joined_movie.mpg ``` In practice, the joined\_movie.mpg, may or may not run smoothly. Even in this very simple format, there's some data upfront, apparently, saying "this file is a minute long" or something like that. So you may open it and discover it's only 30 seconds, but find that it plays for a minute (or not, depending on the player). It can be easily righted (and I assume losslessly, or it wouldn't be recommended in the ffmpeg faq). But you probably don't want to work with mpg-v1, ultimately. Webm may be a reasonable choice. From what I gather, webm container is derived from MKV. For audio, it uses vorbis and for video it uses vp8. One layman to another: vp8 ~ H264 (I apologize to anyone from doom9 forum who reads this and has a heart attack). Anyway, for us laypeople, the important point is: this means it's not only not simple, but it's actually super complicated-- even just understanding all the encoder parameters is a life's work. I know mp4box can do something pretty close to what you want with h264 video inside an mp4 container. If you mainly wish to be able to programmatically cut and join video, you could certainly just adopt mp4/h264 instead, but you may be pro-freedom and whatnot, and wish to use webm for ideological or monetary reasons. If you find a solution within webm, I'll be curious. Perhaps mkvtool would work, given its proximity to mkv container? I'm guessing your files are prepped for streaming, given you're talking about web video. So it may seem like you really really should be able to just add them together. But even with everything interleaved by chunk, it has to be quite a bit more complex than just adding them, or even adding them then adjusting the header/metadata for total playing time. I'm inferring the complexity because it seems like there aren't many tools that will work and even mp4box couldn't always do this reliably/accurately. If you do go with mp4, you can tell mp4box to join files with: ``` mp4box -cat file1 -cat file2 -new joined ``` Perhaps a free-software patriot will post how to cut and join webm files from the command line without reencoding too. Good luck with your project.
How to join two video files using Python?
[ "", "python", "video", "ffmpeg", "webm", "" ]
The sixth record in this sample is missing from my CTE output; I'm guessing it's because it only appears once? Is there a way to get it to appear? Apologies if this a stupid question, I'm only just getting my head around CTEs. ``` CREATE TABLE #T (MONTH INT, YEAR INT, CC VARCHAR(4), CO_CC VARCHAR(7), VALUE INT) INSERT INTO #T VALUES (1, 2011, '0000', 'P1-0000', 10) INSERT INTO #T VALUES (2, 2011, '0000', 'P1-0000', 20) INSERT INTO #T VALUES (3, 2011, '0000', 'P1-0000', 30) INSERT INTO #T VALUES (4, 2011, '0000', 'P1-0000', 40) INSERT INTO #T VALUES (5, 2011, '0000', 'P1-0000', 50) INSERT INTO #T VALUES (5, 2011, '0017', 'P1-0017', 50) INSERT INTO #T VALUES (1, 2012, '0000', 'P1-0000', 10) INSERT INTO #T VALUES (2, 2012, '0000', 'P1-0000', 20) INSERT INTO #T VALUES (3, 2012, '0000', 'P1-0000', 30) INSERT INTO #T VALUES (4, 2012, '0000', 'P1-0000', 40) INSERT INTO #T VALUES (5, 2012, '0000', 'P1-0000', 50) INSERT INTO #T VALUES (1, 2011, '0006', 'P1-0006', 10) INSERT INTO #T VALUES (2, 2011, '0006', 'P1-0006', 20) INSERT INTO #T VALUES (3, 2011, '0006', 'P1-0006', 30) INSERT INTO #T VALUES (4, 2011, '0006', 'P1-0006', 40) INSERT INTO #T VALUES (5, 2011, '0006', 'P1-0006', 50) INSERT INTO #T VALUES (1, 2012, '0006', 'P1-0006', 10) INSERT INTO #T VALUES (2, 2012, '0006', 'P1-0006', 20) INSERT INTO #T VALUES (3, 2012, '0006', 'P1-0006', 30) INSERT INTO #T VALUES (4, 2012, '0006', 'P1-0006', 40) INSERT INTO #T VALUES (5, 2012, '0006', 'P1-0006', 50) GO WITH TEST AS (SELECT *, VALUE AS RUNNING_SUM FROM #T WHERE MONTH = 1 UNION ALL SELECT w.*, w.VALUE + t.RUNNING_SUM FROM #T w INNER JOIN TEST t ON w.MONTH = t.MONTH + 1 AND w.YEAR = t.YEAR AND w.CC = t.CC AND w.CO_CC = t.CO_CC WHERE w.MONTH > 1) SELECT * FROM TEST ORDER BY YEAR, MONTH OPTION (MAXRECURSION 0) DROP TABLE #T ``` Plus, if I declare VALUE as DECIMAL (15, 2) the CTE falls over with some error about anchors and recursive types being incompatible?
Ok, you are missing the row because of your `JOIN` conditions: ``` INNER JOIN TEST t ON w.MONTH = t.MONTH + 1 AND w.YEAR = t.YEAR AND w.CC = t.CC AND w.CO_CC = t.CO_CC ``` So, you are saying that you need the rows for the next month (though this won't work for December), on the same year and the **same `CC`**. That particular row has a value for `CC` of `'0017'`, which doesn't exist on the previous month, hence it won't appear on your recursive CTE. As to the incompatibility issue, I'm not sure exactly why this is happening, but if you use an explicit conversion on the second `SELECT`, then there is no problem: ``` SELECT w.*, CAST(w.VALUE + t.RUNNING_SUM AS DECIMAL(15,2)) ``` **UPDATE** So, as Martin Smith said on a comment, the reason for the incompatibility issue is that the CTE defined the columns data type according to your first `SELECT`: ``` SELECT *, VALUE AS RUNNING_SUM ``` So, `RUNNING_SUM` would be a `DECIMAL(15,2)`. On your second select, that column comes from this calculation: ``` w.VALUE + t.RUNNING_SUM ``` Since both columns are `DECIMAL(15,2)`, then the result is a `DECIMAL(16,2)`, according to [this](http://msdn.microsoft.com/en-us/library/ms190476.aspx), so both columns are incompatible, hence, the need for an explicit `CAST`.
Your WHERE criteria is excluding that line because MONTH never = 1 for CC = '0017' Instead of starting with MONTH = 1, you could use a `ROW_NUMBER OVER (PARTITION BY CC,YEAR ORDER BY MONTH)` to identify the first month.
CTE not generating what I'm expecting?
[ "", "sql", "sql-server", "common-table-expression", "" ]
I have a `dictionary` that looks like this: ``` {'items': [{'id': 1}, {'id': 2}, {'id': 3}]} ``` and I'm looking for a way to directly get the inner dictionary with `id = 1`. Is there a way to reach this other than looping the `list` items and comparing the `id`?
``` first_with_id_or_none = \ next((value for value in dictionary['items'] if value['id'] == 1), None) ```
You will *have* to loop through the list. The good news is is that you can use a generator expression with `next()` to do that looping: ``` yourdict = next(d for d in somedict['items'] if d['id'] == 1) ``` This *can* raise a `StopIteration` exception if there is no such matching dictionary. Use ``` yourdict = next((d for d in somedict['items'] if d['id'] == 1), None) ``` to return a default instead for that edge-case (here `None` is used, but pick what you need).
Reaching a dictionary inside a list of dictionaries by key
[ "", "python", "dictionary", "" ]
I want to filter distinct fields from multiple tables. For example, the user input is "python,web developer". I want to display both the "python" keyskills-matched jobs and the "web developer" title-matched jobs. ``` In User table id | username | password ----------------------------- 1 | employer1 | sffddgfd 2 | employer2 | dfggfgfd In Company Table id | emp_id | companyname ----------------------------- 1 | 1 | abc 2 | 2 | xyz In jobs table +----+--------+--------------------+ | id | emp_id | title | +----+--------+--------------------+ | 1 | 1 | Software Developer | | 2 | 1 | Software Developer | | 3 | 1 | testing | | 4 | 1 | webdeveloper | | 5 | 2 | Software Developer | | 6 | 2 | testing | | 7 | 2 | software | +----+--------+--------------------+ In employerkeyskills table +----+--------+--------+--------------------+ | id | emp_id | job_id | keyskills | +----+--------+--------+--------------------+ | 1 | 1 | 2 | python | | 2 | 1 | 2 | django | | 3 | 1 | 2 | html | | 4 | 1 | 3 | manual testing | | 5 | 1 | 3 | automation testing | | 6 | 1 | 4 | css | | 7 | 1 | 4 | javascript | | 8 | 1 | 4 | html | | 9 | 1 | 4 | php | | 10 | 2 | 5 | python | | 11 | 2 | 5 | php | | 12 | 2 | 6 | SQL | | 13 | 2 | 6 | Manual Testing | | 14 | 2 | 7 | sql | | 15 | 2 | 7 | testing | | 16 | 2 | 7 | python | | 17 | 2 | 7 | html | +----+--------+--------+--------------------+ ``` *models.py* ``` class User(models.Model): username = models.CharField(max_length=100) password = models.CharField(max_length=100) class company(models.Model): emp = models.ForeignKey(User, unique=False) companyname = models.CharField(max_length=100) class jobs(models.Model): emp = models.ForeignKey(User, unique=False) title = models.CharField(max_length=100) class employerkeyskills(models.Model): emp=models.ForeignKey(User,unique=False) job=models.ForeignKey(jobs,unique=False) keyskills=models.CharField(max_length=50) ``` *views.py* ``` details = employerkeyskills.objects.filter(keyskills__icontains=search) ``` Using the above command I got duplicate results: at first I filtered only on keyskills, but the same job appears twice. I want to filter out the duplicate rows.
You can split the input string first: ``` key_skill, title = "python,web developer".split(',') ``` Then filter jobs across both models in one query, following the reverse relation from `jobs` to `employerkeyskills` (using `job_list` here so the model name `jobs` isn't shadowed): ``` job_list = jobs.objects.filter(title=title, employerkeyskills__keyskills=key_skill) ```
You can try: ``` details = employerkeyskills.objects.filter(keyskills__icontains=search).distinct() ```
Django Filter distinct field from multiple table
[ "", "python", "django", "django-views", "" ]
My program calls a method `baz` in a new thread from my `spam` method. Now I need to check the value of self.track every second from within this new thread. ``` import threading class A(threading.Thread): def __init__(self, player): self.track = True def Baz(self): // how do i check the value of self.track every 1 second here while self.track: // do something def Spam(self): player_thread = Thread(target=self.Baz) player_thread.start() ``` How do I check it, say every one second ?
If you want to poll, just use `time.sleep` inside the loop. ``` import time ... while self.track: time.sleep(1) ``` If you want more accuracy, for instance if you are doing other operations inside the loop that take non-trivial amounts of time, you can do: ``` ... while self.track: start = time.time() # Do things here sleep_time = start - time.time() + 1 if sleep_time > 0: time.sleep(sleep_time) ``` You could also come at this from another angle altogether if you are looking to block the thread until `self.track` is set to `True`, and all the workers will be created before that happens. ``` class A(object): def __init__(self, player): self._track = False self.player = player self._track_condition = threading.Condition() @property def track(self): return self._track @track.setter def track(self, value): self._track = value if value: self._track_condition.acquire() self._track_condition.notify_all() self._track_condition.release() def baz(self): self._track_condition.acquire() self._track_condition.wait() self._track_condition.release() while self.track: # Do stuff def spam(self): player_thread = threading.Thread(target=self.baz) player_thread.start() ``` I would note though that unless your player actions are I/O bound, `threading` is probably not going to do much for performance, and may even hurt it. `threading.Thread`s all live within the [GIL](http://wiki.python.org/moin/GlobalInterpreterLock), so they can't actually execute simultaneously in parallel. If you want that, you need [`multiprocessing`](http://docs.python.org/2/library/multiprocessing.html).
You check, then sleep for one second, in a loop.
Periodic Callback from Thread in Python
[ "", "python", "" ]
I want to query something like this: ``` SELECT id FROM reparacoes WHERE edit_data= (select max(edit_data) from reparacoes where id_reparacao= (select id_reparacao from reparacoes)); ``` However the subquery `select id_reparacao from reparacoes` returns more than 1 row (as expected) and the query doesn't work. Should I do this with `joins`? if yes, how? So here's how it works: every time I edit a "reparacoes" row(at my website), I `INSERT` a new one(i don't `UPDATE`) and a new ID is created as well as a edit\_data(which is the datetime of the edit) but the id\_reparacoes stays the same. I want a query that returns the ID from reparacoes where edit\_data is the most recent from a given id\_reparacao. but I have more then one id\_reparacao and i want this to query for all id\_reparacao(that's why i tried `...where id_reparacao= (select id_reparacao from reparacoes)` PS: sorry, I know this has already been questioned/answered many times but I couldn't find anything that could help me.
I think you want something like this: ``` SELECT rep.ID FROM reparacoes rep INNER JOIN ( SELECT id_reparacao, MAX(Edit_Data) AS Edit_Data FROM reparacoes GROUP BY id_reparacao ) MaxRep ON MaxRep.id_reparacao = rep.id_reparacao AND MaxRep.Edit_Data = rep.Edit_Data; ``` The subquery simply gets the last `edit_data` per `id_reparacao`, then by doing an `INNER JOIN` back to the table you limit the rows to only those where the `Edit_data` matches the latest per `id_reparacao`.
Use `in` instead of `=` if the inner query returns more than one row. ``` SELECT id FROM reparacoes WHERE edit_data in (select max(edit_data) from reparacoes where id_reparacao in (select id_reparacao from reparacoes) ); ```
MySQL multiple subquery
[ "", "mysql", "sql", "subquery", "" ]
Query below currently shows the total as item by item but what I want is to calculate subtotal of all items in a purchase order. Thanks **Output should be:** ``` POID Item ItemQTY ItemPrice ItemTotal SubTotal 1 A 1 15.00 15.00 80.50 1 B 1 25.50 25.50 80.50 1 C 2 20.00 40.00 80.50 2 X 6 5.00 30.00 50.00 2 Y 2 10.00 20.00 50.00 ``` **Relationship**: `purchase_order 1 - N purchase_order_items` ``` SELECT purchase_order.id AS POID, purchase_order_items.description AS Item, purchase_order_items.quantity AS ItemQTY, purchase_order_items.price AS ItemPrice, (purchase_order_items.quantity*purchase_order_items.price) AS ItemTotal /* Here, Subtotal should be calculated and displayed */ FROM purchase_order INNER JOIN purchase_order_items ON purchase_order.id = purchase_order_items.fk_purchase_order ``` I looked at [MySQL finding subtotals](https://stackoverflow.com/questions/397384/mysql-finding-subtotals) and [Subtotals and SQL](https://stackoverflow.com/questions/4516261/subtotals-and-sql) but couldn't apply to my query.
Try ``` SELECT i.fk_purchase_order POID, description Item, quantity ItemQTY, price ItemPrice, quantity * price ItemTotal, s.subtotal SubTotal FROM purchase_order_items i JOIN ( SELECT fk_purchase_order, SUM(quantity * price) subtotal FROM purchase_order_items GROUP BY fk_purchase_order ) s ON i.fk_purchase_order = s.fk_purchase_order ``` Output: ``` | POID | ITEM | ITEMQTY | ITEMPRICE | ITEMTOTAL | SUBTOTAL | ------------------------------------------------------------ | 1 | A | 1 | 15 | 15 | 80.5 | | 1 | B | 1 | 25.5 | 25.5 | 80.5 | | 1 | C | 2 | 20 | 40 | 80.5 | | 2 | X | 6 | 5 | 30 | 50 | | 2 | Y | 2 | 10 | 20 | 50 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/755ea8/4)** demo
You can use the `WITH ROLLUP` feature to get subtotals: ``` SELECT purchase_order.id AS POID, purchase_order_items.description AS Item, purchase_order_items.quantity AS ItemQTY, purchase_order_items.price AS ItemPrice, SUM(purchase_order_items.quantity*purchase_order_items.price) AS ItemTotal FROM purchase_order INNER JOIN purchase_order_items ON purchase_order.id = purchase_order_items.fk_purchase_order GROUP BY POID, Item WITH ROLLUP ``` This will create a result set that has `Item = NULL` for the PO subtotal, and `POID = NULL` for a grand total. These subtotals and grand totals go in the `ItemTotal` column of those rows.
Subtotal of all items in a query
[ "mysql", "sql" ]
I am getting a list in which I save the results in the following way ``` City Percentage Mumbai 98.30 London 23.23 Agra 12.22 ..... ``` The list structure is `[["Mumbai",98.30],["London",23.23], ...]` I am saving these records as a list, and I need to sort it and keep only the top\_ten records. Even if I only get the cities, that would be fine. I am trying to use the following logic, but it fails to produce accurate data ``` if (condition): if b not in top_ten: top_ten.append(b) top_ten.remove(tmp) ``` Any other solution or approach is also welcome. **EDIT 1** ``` for a in sc_percentage: print a ``` The list I am getting ``` (<ServiceCenter: DELHI-DLC>, 100.0) (<ServiceCenter: DELHI-DLE>, 75.0) (<ServiceCenter: DELHI-DLN>, 90.909090909090907) (<ServiceCenter: DELHI-DLS>, 83.333333333333343) (<ServiceCenter: DELHI-DLW>, 92.307692307692307) ```
Sort the list first and then slice it: ``` >>> lis = [['Mumbai', 98.3], ['London', 23.23], ['Agra', 12.22]] >>> print sorted(lis, key=lambda x: x[1], reverse=True)[:10] #[:10] returns the first ten items [['Mumbai', 98.3], ['London', 23.23], ['Agra', 12.22]] ``` To get the data in list form from a file like that, use this: ``` with open('abc') as f: next(f) #skip header lis = [[city, float(val)] for city, val in (line.split() for line in f)] print lis #[['Mumbai', 98.3], ['London', 23.23], ['Agra', 12.22]] ``` **Update:** ``` new_lis = sorted(sc_percentage, key=lambda x: x[1], reverse=True)[:10] for item in new_lis: print item ``` `sorted` returns a new sorted list; as we need to sort the list based on the second item of each element, we use the `key` parameter. `key=lambda x: x[1]` means use the value at index 1 (i.e. 100.0, 75.0, etc.) of each item for comparison. `reverse=True` is used for descending order.
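A small aside (my addition, not from the original answer): `operator.itemgetter(1)` is a common stand-in for the lambda key and works the same way on the `(object, percentage)` tuples from the edit. Plain strings stand in for the `ServiceCenter` objects here, since that class isn't shown in the question (Python 3 syntax):

```python
# itemgetter(1) is equivalent to `lambda x: x[1]` as a sort key.
from operator import itemgetter

sc_percentage = [  # values from the question's EDIT 1
    ('DELHI-DLC', 100.0),
    ('DELHI-DLE', 75.0),
    ('DELHI-DLN', 90.909090909090907),
    ('DELHI-DLS', 83.333333333333343),
    ('DELHI-DLW', 92.307692307692307),
]

# Sort descending by percentage, keep at most ten entries.
top_ten = sorted(sc_percentage, key=itemgetter(1), reverse=True)[:10]
print(top_ten)
```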
If the list is fairly short then, as others have suggested, you can sort it and slice it. If the list is very large then you may be better off using `heapq.nlargest()`: ``` >>> import heapq >>> lis = [['Mumbai', 98.3], ['London', 23.23], ['Agra', 12.22]] >>> heapq.nlargest(2, lis, key=lambda x:x[1]) [['Mumbai', 98.3], ['London', 23.23]] ``` The difference is that `nlargest` makes only a single pass through the list; in fact, if you are reading from a file or other generated source, the data need not all be in memory at the same time. You might also be interested to look at the source for `nlargest()`, as it works in much the same way that you were trying to solve the problem: it keeps only the desired number of elements in a data structure known as a heap; each new value is pushed into the heap and then the smallest value is popped from the heap. *Edit to show comparative timing*: ``` >>> import random >>> records = [] >>> for i in range(100000): value = random.random() * 100 records.append(('city {:2.4f}'.format(value), value)) >>> import heapq >>> heapq.nlargest(10, records, key=lambda x:x[1]) [('city 99.9995', 99.99948904248298), ('city 99.9974', 99.99738898315216), ('city 99.9964', 99.99642759230214), ('city 99.9935', 99.99345173704319), ('city 99.9916', 99.99162694442714), ('city 99.9908', 99.99075084123544), ('city 99.9887', 99.98865134685201), ('city 99.9879', 99.98792632193258), ('city 99.9872', 99.98724339718686), ('city 99.9854', 99.98540548350132)] >>> timeit.timeit('sorted(records, key=lambda x:x[1])[:10]', setup='from __main__ import records', number=10) 1.388942152229788 >>> timeit.timeit('heapq.nlargest(10, records, key=lambda x:x[1])', setup='import heapq;from __main__ import records', number=10) 0.5476185073315492 ``` On my system getting the top 10 from 100 records is fastest by sorting and slicing, but with 1,000 or more records it is faster to use `nlargest()`.
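To illustrate the memory point with a sketch of my own (Python 3 syntax; `io.StringIO` stands in for a real open file): `nlargest` accepts any iterable, so rows can be parsed lazily from a stream without ever building the full list:

```python
# nlargest works on any iterable, so rows can be streamed in lazily
# (e.g. parsed line by line from a file) without materializing the
# whole dataset in memory.
import heapq
import io

# Stand-in for an open file; real data would come from disk.
raw = io.StringIO("Mumbai 98.30\nLondon 23.23\nAgra 12.22\nDelhi 55.10\n")

def rows(fobj):
    """Yield (city, value) pairs one at a time."""
    for line in fobj:
        city, val = line.split()
        yield city, float(val)

top_two = heapq.nlargest(2, rows(raw), key=lambda r: r[1])
print(top_two)
```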
Sort the top ten results
[ "python", "python-2.7", "tuples" ]