I have a table like: ``` vending_machine_id | timestamp | manual | 543 | 2011-06-08 | true | 543 | 2011-05-05 | false| 543 | 2010-12-12 | true | 542 | 2013-01-29 | true | 542 | 2012-01-29 | true | ``` All I want is to select the most recent date for each vending\_machine\_id, with a third field (`manual`) that holds `true/false` if there are both `true` and `false` readings for that vending\_machine\_id, `true` if there are only true manual readings, or `false` otherwise. This is what I have so far: ``` SELECT vmrp.vending_machine_id, max(timestamp), readingMethodSelect.readingMethod FROM vending_machine_reading_product as vmrp, (select distinct(manual) as readingMethod, vending_machine_id FROM vending_machine_reading_product WHERE vending_machine_id in (542, 543, 821, 824) group by vending_machine_id, readingMethod) as readingMethodSelect WHERE vmrp.vending_machine_id = readingMethodSelect.vending_machine_id GROUP BY vmrp.vending_machine_id, readingMethodSelect.readingMethod ORDER BY vmrp.vending_machine_id, max(timestamp) desc ``` And it prints: ``` 542;"2013-01-29 10:59:47";f 543;"2011-06-08 05:43:26";f 543;"2011-06-08 05:43:26";t 821;"2013-02-12 00:56:56";f 824;"2013-02-11 05:52:55";f ``` As you can see, what I still don't know how to do is to get a single row for vending\_machine\_id `543`, with `f/t` at the end (because this `id` has both true and false manual reading types). Is there any way I can accomplish this?
``` SELECT vending_machine_id, MAX(timestamp) "TimeStamp", CASE WHEN COUNT(DISTINCT manual) > 1 THEN 'True/False' ELSE MAX(manual) END AS "Manual" -- WHERE ..add condition here.. FROM TableName GROUP BY vending_machine_id ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!1/d31ac/4)
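If it helps to see the logic end to end, here is a small sketch of the `CASE WHEN COUNT(DISTINCT ...)` idea run against an in-memory SQLite database (the table and column names are made up to mirror the question, and SQLite has no boolean type, so `manual` is stored as 0/1):

```python
import sqlite3

# Stand-in for the real table; "ts" holds the timestamp column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (vending_machine_id INT, ts TEXT, manual INT)")
rows = [
    (543, "2011-06-08", 1),
    (543, "2011-05-05", 0),
    (543, "2010-12-12", 1),
    (542, "2013-01-29", 1),
    (542, "2012-01-29", 1),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

result = conn.execute("""
    SELECT vending_machine_id,
           MAX(ts),
           CASE WHEN COUNT(DISTINCT manual) > 1 THEN 'True/False'
                WHEN MAX(manual) = 1 THEN 'True'
                ELSE 'False'
           END AS manual_flag
    FROM readings
    GROUP BY vending_machine_id
    ORDER BY vending_machine_id
""").fetchall()
# 543 collapses to one row flagged 'True/False' because it has both kinds
# of readings; 542 only ever has manual = 1, so it is flagged 'True'.
```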
Is this what you want: ``` select vmrp.vending_machine_id, MAX(timestamp), bool_and(manual) from vending_machine_reading_product as vmrp group by vmrp.vending_machine_id ```
How to solve a complex group by sql query?
[ "", "sql", "postgresql", "" ]
I am trying to learn object-oriented programming in Python 3. I am making a variation of a notebook program that I have in a book, but instead of adding notes to a notebook I am trying to add days to a timesheet. In the original tutorial, this is in the main program: ``` def add_note(self): memo = input("Enter a memo: ") self.notebook.new_note(memo) print("Your note has been added") ``` and this is in the class module (notebook): ``` def new_note(self, memo, tags = ''): '''create a new note and add it to the list''' self.notes.append(Note(memo,tags='')) ``` My variation looks like this: main: ``` def add_work_day(self): date = input ("Enter date : ") hours = input ("Enter hours worked : ") rate = input ("Enter hourly rate : £") workday = Timesheet.day(date, hours, rate) ``` module: ``` class Timesheet: def __init__(self): self.timesheet = [] def day(self, date, hours, rate): self.timesheet.append(day(date, hours, rate)) ``` It is giving me this error: ``` File "C:\Python33\timesheet_menu.py", line 39, in add_work_day workday = Timesheet.day(date, hours, rate) TypeError: day() missing 1 required positional argument: 'rate' ``` It seems like the 'self' in 'def day(self, date, hours, rate)' is hogging one of my input arguments. Can someone tell me what I am missing here?
**.....Update.....** So now I have created an instance of Timesheet() in main: ``` def add_work_day(self): date = input ("Enter date : ") hours = input ("Enter hours worked : ") rate = input ("Enter hourly rate : £") workday = Timesheet() workday.add_day(date, hours, rate) ``` But I am getting a new error from my Timesheet() method 'day' ``` class Timesheet: def __init__(self): self.timesheet = [] def day(self, date, hours, rate): self.timesheet.append(day(date, hours, rate)) File "C:\Python33\timesheet_menu.py", line 40, in add_work_day workday.add_day(date, hours, rate) File "C:\Python33\timesheet.py", line 29, in add_day self.timesheet.append(day(date, hours, rate)) NameError: global name 'day' is not defined ``` I understand that the problem is the .append(day part but I can't figure out how to fix it. I know a variable isn't global unless specified, but my logic tells me that the method should be. So it must be that the .append(day is looking for a pre-existing variable called 'day'. I'm confused because this method worked in the example from the book.
The root of your problem is that you don't yet understand how Python classes and instances work. A class, like `Timesheet`, is a collection of methods (functions) and variables, which live in the class's namespace. An instance is a *specific* instance of the class (i.e., *this* timesheet, as opposed to all the other timesheets that exist). Each instance has its very own namespace, which is slightly special: when you look for a method or variable in an instance namespace, if the name is not found, the class namespace will be searched next. (And if the class inherits from other classes, the namespaces of its ancestors will be searched, in order, until either the name is found or there are no more namespaces left to search.) Now, methods (functions) defined in classes have a special behavior, which functions defined *outside* of classes don't have -- this is why a different term (methods) is used for functions defined in classes, to help remind you of this special behavior. The special behavior is this: if the function is being called on an *instance* of the class, then that instance will get passed as an "extra" first parameter to the function. (By convention, that first parameter is called `self`, but there's no reason you couldn't call it `fhqwhgads` if you wanted to. You shouldn't -- it would just make your code utterly confusing to read -- but you could if you wanted to.) Why that extra first parameter? Well, remember how I said that instances have their own namespace? If you want to look up variables on the instance (e.g., you want to look up the entries on *this* timesheet, not that other timesheet over there), then you need a reference to the instance. The `self` parameter provides that reference. Now, if you call the methods on the *class*, as opposed to on an *instance*, there's no need for that extra `self` parameter, because you clearly already have a reference to the class: that reference is the name `Timesheet`.
So when you do `Timesheet.day(...)`, there will be no "extra" first parameter added before your other parameters. That's because you're not referencing an instance, you're referencing a class. But if you call `Timesheet().day(...)`, then two things are happening. First, you're creating an *instance* of `Timesheet` (the `Timesheet()` formulation is how you create an instance), and then you're calling the `day()` method *on that instance*. So the "extra" first parameter will be passed to your `day()` method so that your code inside `day()` will be able to access that instance's variables. One other thing you'll need to understand: when variables belong on an instance, and when they belong on a class. There's a very simple question you can ask yourself to determine this: "does this apply to *every* timesheet, or only to *specific* timesheets?" Your `day()` method clearly needs to access values from *specific* timesheets (Joe worked different hours than Bob, at a different rate of pay), so you need to call it on *instances*, not on the class. So having a `self` parameter in your `day()` method makes sense, but you also need to call it on an instance, not on the class. So instead of `Timesheet.day(...)`, you should do something like: ``` my_timesheet = Timesheet() my_timesheet.day(...) # Now do something with the timesheet: calculate total pay, print it out, etc. my_timesheet.calculate_total_pay() # Made up example my_timesheet.print_to_screen() # Made up example ``` It would make no sense to do `Timesheet.calculate_total_pay()`, because the total pay depends on the values in *specific, individual* timesheets. So `calculate_total_pay()` should also be an instance method, and should therefore have a `self` parameter. Actually, in this case I'm not coming up with any methods that should be called as `Timesheet.some_method()`. (Methods called like that are called "static methods" in Python, BTW).
Every single example method I can come up with is an instance method (i.e., a method that should be called on an instance, because it would need to access data from *that specific timesheet*). A bit long-winded, but I hope this helps you understand classes and instances better.
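To make the above concrete, here is a minimal sketch of what the corrected class might look like, with the method named `add_day` to match the caller in the update (the `Day` class and the `total_pay` method are invented here just to give the list something concrete to store and do):

```python
class Day:
    """One worked day: a date plus hours and an hourly rate."""
    def __init__(self, date, hours, rate):
        self.date = date
        self.hours = float(hours)
        self.rate = float(rate)

class Timesheet:
    def __init__(self):
        self.timesheet = []

    def add_day(self, date, hours, rate):
        # self arrives automatically because we call this on an instance
        self.timesheet.append(Day(date, hours, rate))

    def total_pay(self):
        return sum(d.hours * d.rate for d in self.timesheet)

sheet = Timesheet()                    # create an instance first...
sheet.add_day("2013-05-01", 8, 10.50)  # ...then call methods on it
sheet.add_day("2013-05-02", 4, 10.50)
```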
Change `workday = Timesheet.day(date, hours, rate)` to `workday = Timesheet().day(date, hours, rate)`
'self' seems to be hogging one of my arguments
[ "", "python", "oop", "class", "self", "" ]
I have a large file with several lines as given below. I want to read in only those lines which have the \_INIT pattern in them, then strip off the \_INIT from the name and save only the OSD\_MODE\_15\_H part in a variable. Then I need to read the corresponding hex value, 8'h00 in this case, strip off the 8'h from it, replace it with 0x, and save it in a variable. I have been trying to strip off the \_INIT, the spaces, and the =, and the code is becoming really messy. ``` localparam OSD_MODE_15_H_ADDR = 16'h038d; localparam OSD_MODE_15_H_INIT = 8'h00 ``` Can you suggest a lean and clean method to do this? Thanks!
The following solution uses a regular expression (compiled to speed searching up) to match the relevant lines and extract the needed information. The expression uses named groups "id" and "hexValue" to identify the data we want to extract from the matching line. ``` import re expression = r"(?P<id>\w+?)_INIT\s*?=.*?'h(?P<hexValue>[0-9a-fA-F]*)" regex = re.compile(expression) def getIdAndValueFromInitLine(line): mm = regex.search(line) if mm is None: return None # Not the ..._INIT parameter or line was empty or other mismatch happened else: return (mm.groupdict()["id"], "0x" + mm.groupdict()["hexValue"]) ``` EDIT: If I understood the next task correctly, you need to find the hexvalues of those INIT and ADDR lines whose IDs match and make a dictionary of the INIT hexvalue to the ADDR hexvalue. (Note that `re.finditer` is used here rather than `re.findall`: `findall` returns plain tuples of the captured groups, so `groupdict()` would not be available.) ``` init_regex = re.compile(r"(?P<init_id>\w+?)_INIT\s*?=.*?'h(?P<initValue>[0-9a-fA-F]*)") init_dict = {} for x in init_regex.finditer(lines): init_dict[x.groupdict()["init_id"]] = "0x" + x.groupdict()["initValue"] addr_regex = re.compile(r"(?P<addr_id>\w+?)_ADDR\s*?=.*?'h(?P<addrValue>[0-9a-fA-F]*)") addr_dict = {} for y in addr_regex.finditer(lines): addr_dict[y.groupdict()["addr_id"]] = "0x" + y.groupdict()["addrValue"] init_to_addr_hexvalue_dict = {init_dict[x] : addr_dict[x] for x in init_dict.keys() if x in addr_dict} ``` Even if this is not what you actually need, having init and addr dictionaries might help to achieve your goal easier. If there are several \_INIT (or \_ADDR) lines with the same ID and different hexvalues then the above dict approach will not work in a straight forward way.
Try something like this; I'm not sure what all your requirements are, but this should get you close: ``` with open(someFile, 'r') as infile: for line in infile: if '_INIT' in line: apostropheIndex = line.find("'h") clean_hex = '0x' + line[apostropheIndex + 2:] ``` In the case of "16'h038d;", clean\_hex would be "0x038d;" (need to remove the ";" somehow) and in the case of "8'h00", clean\_hex would be "0x00" Edit: if you want to guard against characters like ";" you could do this and test if a character is alphanumeric: ``` clean_hex = '0x' + ''.join([s for s in line[apostropheIndex + 2:] if s.isalnum()]) ```
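As a quick sanity check, the regex idea from the first answer can be run directly against the two sample lines from the question (the pattern is the same one, written as a raw string):

```python
import re

# The two sample lines from the question, as one blob of text.
lines = """\
localparam OSD_MODE_15_H_ADDR = 16'h038d;
localparam OSD_MODE_15_H_INIT = 8'h00
"""

# Only the _INIT line should match; the id group stops before "_INIT"
# and the hexValue group picks up the digits after 'h.
regex = re.compile(r"(?P<id>\w+?)_INIT\s*?=.*?'h(?P<hexValue>[0-9a-fA-F]*)")
matches = [(m.group("id"), "0x" + m.group("hexValue"))
           for m in regex.finditer(lines)]
# matches is [("OSD_MODE_15_H", "0x00")] -- the _ADDR line is skipped.
```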
Splitting lines in a file into string and hex and do operations on the hex values
[ "", "python", "python-2.7", "" ]
I am working on a SQL Server database. I have 2 tables with a `1 to many` association in my db. The first is the **Parent** table and the second is the **Child** table. The **Parent** table has a column *ChildCount* which should update whenever a child entry for that parent is added or deleted. So I decided to write a stored procedure and a DML trigger which will execute on INSERT and DELETE operations on the **Child** table. I am totally new to databases. What I have tried so far: *First I am trying to create a procedure (which I will execute from the trigger)* ``` CREATE PROCEDURE [dbo].[ChildCount] @parentId int AS -- here first I have to extract the total child count for the given parentId and -- then in the next update statement I will update the count. UPDATE Parent SET ChildCount = -- above total child value WHERE Id = @parentId RETURN 0 ``` Here I don't understand how to extract the total child count, save it in a variable, and then use that variable in the update statement. And after guiding me on this CREATE PROCEDURE, please also tell me whether this is a correct, good, and efficient approach, or whether there is a better way of doing this?
Try like this ``` CREATE PROCEDURE [dbo].[ChildCount] @parentId int AS Begin Declare @i as int; Select @i=count(child) from childtable where parentid=@parentId UPDATE Parent SET ChildCount =@i WHERE Id = @parentId End ```
You can also consider using computed columns instead of triggers. Just create a UDF that will return the number of children for a given parent and create a computed column out of it. Here is how it might look like ``` CREATE FUNCTION dbo.GetChildCount(@ParentID int) RETURNS int BEGIN RETURN (SELECT COUNT(*) FROM Child WHERE ParentID = @ParentID) END ALTER TABLE Parent ADD ChildCount as dbo.GetChildCount(ParentID) ``` [Here](http://blogs.msdn.com/b/sqlcat/archive/2011/11/28/a-computed-column-defined-with-a-user-defined-function-might-impact-query-performance.aspx) is a link with more details.
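The count-then-update pattern from the first answer can be sketched outside of T-SQL as well; here is the same idea against an in-memory SQLite database (the table and column names follow the question, and the helper function plays the role of the stored procedure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Parent (Id INTEGER PRIMARY KEY, ChildCount INT DEFAULT 0);
    CREATE TABLE Child (Id INTEGER PRIMARY KEY, ParentId INT);
    INSERT INTO Parent (Id) VALUES (1), (2);
    INSERT INTO Child (Id, ParentId) VALUES (1, 1), (2, 1), (3, 2);
""")

def refresh_child_count(parent_id):
    # Same idea as the stored procedure: count the children in a
    # subquery, then write that count back to the parent row.
    conn.execute(
        "UPDATE Parent SET ChildCount = "
        "(SELECT COUNT(*) FROM Child WHERE ParentId = ?) WHERE Id = ?",
        (parent_id, parent_id),
    )

refresh_child_count(1)
refresh_child_count(2)
counts = dict(conn.execute("SELECT Id, ChildCount FROM Parent"))
# Parent 1 has two children, parent 2 has one.
```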
How to write a simple sql procedure?
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
I have a table with First Name, Last Name and Contact Number. If the user enters `kar`, then I want a full list of results containing `kar` in the First Name or Last Name.
This assumes that `kar` is a portion of a name, if it is the full name, then do `first_name = 'kar'` ``` SELECT first_name, last_name, number FROM your_phone_book WHERE first_name LIKE '%kar%' OR last_name LIKE '%kar%' ```
It's really rather simple: ``` SELECT * FROM directory WHERE firstname LIKE '%kar%' OR lastname LIKE '%kar%'; ```
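For anyone who wants to try the two-column `LIKE` search end to end, here is a throwaway SQLite version (the names and rows are invented for the example; note that passing the pattern as a bound parameter also guards against SQL injection from user input):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE directory (first_name TEXT, last_name TEXT, number TEXT)")
conn.executemany(
    "INSERT INTO directory VALUES (?, ?, ?)",
    [("Karan", "Shah", "111"), ("Anna", "Karenina", "222"), ("Bob", "Smith", "333")],
)

term = "kar"  # what the user typed
pattern = "%" + term + "%"
hits = conn.execute(
    "SELECT first_name, last_name FROM directory "
    "WHERE first_name LIKE ? OR last_name LIKE ?",
    (pattern, pattern),
).fetchall()
# Matches "Karan" on first_name and "Karenina" on last_name
# (LIKE is case-insensitive for ASCII in SQLite, as in most defaults).
```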
How do I search two columns?
[ "", "sql", "t-sql", "" ]
I am using Python 2 and the fairly simple method given in Wikipedia's article "Cubic function". This could also be a problem with the cube root function I have to define in order to create the function mentioned in the title. ``` # Cube root and cubic equation solver # # Copyright (c) 2013 user2330618 # # This Source Code Form is subject to the terms of the Mozilla Public # License, v. 2.0. If a copy of the MPL was not distributed with this # file, you can obtain one at http://www.mozilla.org/MPL/2.0/. from __future__ import division import cmath from cmath import log, sqrt def cbrt(x): """Computes the cube root of a number.""" if x.imag != 0: return cmath.exp(log(x) / 3) else: if x < 0: d = (-x) ** (1 / 3) return -d elif x >= 0: return x ** (1 / 3) def cubic(a, b, c, d): """Returns the real roots to cubic equations in expanded form.""" # Define the discriminants D = (18 * a * b * c * d) - (4 * (b ** 3) * d) + ((b ** 2) * (c ** 2)) - \ (4 * a * (c ** 3)) - (27 * (a ** 2) * d ** 2) D0 = (b ** 2) - (3 * a * c) i = 1j # Because I prefer i over j # Test for some special cases if D == 0 and D0 == 0: return -(b / (3 * a)) elif D == 0 and D0 != 0: return [((b * c) - (9 * a * d)) / (-2 * D0), ((b ** 3) - (4 * a * b * c) + (9 * (a ** 2) * d)) / (-a * D0)] else: D1 = (2 * (b ** 3)) - (9 * a * b * c) + (27 * (a ** 2) * d) # More special cases if D != 0 and D0 == 0 and D1 < 0: C = cbrt((D1 - sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2) else: C = cbrt((D1 + sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2) u_2 = (-1 + (i * sqrt(3))) / 2 u_3 = (-1 - (i * sqrt(3))) / 2 x_1 = (-(b + C + (D0 / C))) / (3 * a) x_2 = (-(b + (u_2 * C) + (D0 / (u_2 * C)))) / (3 * a) x_3 = (-(b + (u_3 * C) + (D0 / (u_3 * C)))) / (3 * a) if D > 0: return [x_1, x_2, x_3] else: return x_1 ``` I've found that this function is capable of solving some simple cubic equations: ``` print cubic(1, 3, 3, 1) -1.0 ``` And a while ago I had gotten it to a point where it could solve equations with two roots. 
I've just done a rewrite and now it's gone haywire. For example, these coefficients are the expanded form of (2x - 4)(x + 4)(x + 2) and it should return [4.0, -4.0, -2.0] or something similar: ``` print cubic(2, 8, -8, -32) [(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)] ``` Is this more a mathematical or a programming mistake I'm making? *Update:* Thank you, everyone, for your answers, but there are more problems with this function than I've iterated so far. For example, I often get an error relating to the cube root function: ``` print cubic(1, 2, 3, 4) # Correct solution: about -1.65 ... if x > 0: TypeError: no ordering relation is defined for complex numbers print cubic(1, -3, -3, -1) # Correct solution: about 3.8473 if x > 0: TypeError: no ordering relation is defined for complex numbers ```
[Wolfram Alpha confirms](http://www.wolframalpha.com/input/?i=2*x%5E3%20%2b%208*x%5E2%20-%208*x%20%20-%2032) that the roots to your last cubic are indeed ``` (-4, -2, 2) ``` and not as you say > ... it should return `[4.0, -4.0, -2.0]` Notwithstanding that (I presume) typo, your program gives ``` [(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)] ``` Which to accuracy of `10**(-15)` are the *exact same roots* as the correct solution. The **tiny** imaginary part is probably due, as others have said, to rounding. Note that you'll have to use exact arithmetic to always correctly cancel if you are using a solution like [Cardano's](http://en.wikipedia.org/wiki/Cubic_function). This is one of the reasons why programs like `MAPLE` or `Mathematica` exist; there is often a disconnect from the formula to the implementation. To get only the real portion of a number in pure python you call `.real`. Example: ``` a = 3.0+4.0j print a.real >> 3.0 ```
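Building on the `.real` suggestion, one common clean-up step (a sketch, not part of the original answer) is a small helper that drops the imaginary residue only when it is below a tolerance, so genuinely complex roots are left alone:

```python
def clean_roots(roots, tol=1e-9):
    """Replace nearly-real complex roots with plain floats."""
    cleaned = []
    for r in roots:
        r = complex(r)
        if abs(r.imag) < tol:
            cleaned.append(r.real)  # rounding noise: keep the real part
        else:
            cleaned.append(r)       # genuinely complex: keep as-is
    return cleaned

# The roots the question's cubic() produced for 2x^3 + 8x^2 - 8x - 32:
roots = [(-4 + 1.4802973661668753e-16j),
         (2 + 2.9605947323337506e-16j),
         (-2.0000000000000004 - 1.1842378929335002e-15j)]
cleaned = clean_roots(roots)
```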
[Hooked's answer](https://stackoverflow.com/a/16271613/857390) is the way to go if you want to do this numerically. You can also do it symbolically using [sympy](http://docs.sympy.org/0.7.2/modules/polys/basics.html#solving-equations): ``` >>> from sympy import roots >>> roots('2*x**3 + 8*x**2 - 8*x - 32') {2: 1, -4: 1, -2: 1} ``` This gives you the roots and their multiplicity.
What's wrong with this function to solve cubic equations?
[ "", "python", "math", "cubic", "" ]
I'm in need of a function to concatenate 2 numbers in SQL Server, e.g. ``` getuserid(3,333) = 3333 getuserid(8,5) = 8005 getuserid(2,11) = 2011 ``` It should return the result based on the length of the second parameter (like the 3rd example). I tried converting the numbers to strings and concatenating them with case statements, but I feel that this can be done mathematically in a more effective manner. Can anyone please suggest a way? By the by, the output returned should be a numeric type!
Try this one - ``` CREATE FUNCTION dbo.concat_num (@a INT, @b INT) RETURNS INT AS BEGIN RETURN CONVERT(INT, CONVERT(VARCHAR(25), @a) + REPLICATE('0', CASE WHEN LEN(@b) > 2 THEN LEN(@b) ELSE 3 END - LEN(@b)) + CONVERT(VARCHAR(25), @b)) END ``` Output: ``` ----------- 3333 ----------- 8005 ----------- 2011 ----------- 15555 ```
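The padding rule in the T-SQL function above can be expressed compactly in Python, which may help when checking the expected outputs (the function name mirrors the question; this is a sketch of the same logic, not a replacement for the SQL):

```python
def concat_num(a, b):
    # Zero-pad the second number to at least three digits, then glue
    # the two together and convert back to an integer.
    width = max(len(str(b)), 3)
    return int(str(a) + str(b).zfill(width))

# concat_num(3, 333) -> 3333, concat_num(8, 5) -> 8005,
# concat_num(2, 11) -> 2011, concat_num(1, 5555) -> 15555
```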
Looks like this will do it: ``` select num1 * 1000 + num2 ``` Why do you need a function for this simple math?
Concatenate 2 numbers in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
How should I define a function `where()` which can tell where it was executed, with no arguments passed in? All files are in ~/app/. a.py: ``` def where(): return 'the file name where the function was executed' ``` b.py: ``` from a import where if __name__ == '__main__': print where() # I want where() to return '~/app/b.py' like __file__ in b.py ``` c.py: ``` from a import where if __name__ == '__main__': print where() # I want where() to return '~/app/c.py' like __file__ in c.py ```
You need to look up the call stack by using [`inspect.stack()`](http://docs.python.org/2/library/inspect.html#inspect.stack): ``` from inspect import stack def where(): caller_frame = stack()[1] return caller_frame[0].f_globals.get('__file__', None) ``` or even: ``` def where(): caller_frame = stack()[1] return caller_frame[1] ```
You can use [`traceback.extract_stack`](http://docs.python.org/2/library/traceback.html): ``` import traceback def where(): return traceback.extract_stack()[-2][0] ```
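A close cousin of both answers, easier to demonstrate in a single file than the caller's `__file__`: the same stack inspection can pull out the *name* of the calling function (field 3 of a frame record is the function name). This is a sketch of the technique, not a drop-in for the question's `where()`:

```python
import inspect

def who_called_me():
    # stack()[0] is this frame, stack()[1] is the caller's frame record;
    # index 3 of the record is the caller's function name.
    return inspect.stack()[1][3]

def outer():
    return who_called_me()
# outer() returns the string "outer"; the answers' versions work the
# same way, just reading the filename field of the frame record instead.
```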
Determine where a function was executed?
[ "", "python", "" ]
I want to find all records from the database where `date` is "2013-04". I use: ``` SELECT date, pastabos FROM aktai WHERE MATCH(uzsakovas) AGAINST ('Statyba ir Architektūra, UAB' IN BOOLEAN MODE) AND MATCH(date) AGAINST ('2013-04*' IN BOOLEAN MODE) ``` but it does not give the correct answer, because I get records with dates such as '2013-01-29' and '2013-03-28'. My question is: how can I use `MATCH() AGAINST()` like the function `LIKE()` with `"2013-04%"`?
Since the data type of `date` is Date why are you using text search? Just search on the date: ``` SELECT date, pastabos FROM aktai WHERE MATCH(uzsakovas) AGAINST ('Statyba ir Architektūra, UAB' IN BOOLEAN MODE) AND year(date) = :year and month(date) = :month; ``` where :year and :month are integer parameters with values 2013 and 04 respectively. Alternatively ``` SELECT date, pastabos FROM aktai WHERE MATCH(uzsakovas) AGAINST ('Statyba ir Architektūra, UAB' IN BOOLEAN MODE) AND date between :date_from and :date_to; ``` where :date\_from and :date\_to are Date parameters with the values `2013-04-01` and `2013-04-30` respectively.
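The `BETWEEN` variant above is easy to verify on a tiny example; here is a sketch using SQLite with dates stored as ISO text (which compares correctly with `BETWEEN`), reusing the column names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aktai (date TEXT, pastabos TEXT)")
conn.executemany("INSERT INTO aktai VALUES (?, ?)", [
    ("2013-04-05", "a"),
    ("2013-01-29", "b"),
    ("2013-03-28", "c"),
    ("2013-04-30", "d"),
])

# Only the April 2013 rows should come back.
april = conn.execute(
    "SELECT pastabos FROM aktai WHERE date BETWEEN ? AND ? ORDER BY date",
    ("2013-04-01", "2013-04-30"),
).fetchall()
```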
'-' has a special meaning in full text search syntax (word should be absent). Enclose the phrase in double quotes to match only rows that contain the phrase literally. I.e.: ``` match(date) against ('"2013-04"' in boolean mode) ``` And, generally, better approach would be to handle dates as the date type, not strings.
MySQL MATCH() AGAINST()
[ "", "mysql", "sql", "match-against", "" ]
I'm making a small Python program to copy some files. My filenames are in a list "selectedList". The user has selected the source dir "self.DirFilename" and the destination dir "self.DirDest". I'm using cp instead of shutil because I've read that shutil is slow. Here's my code: ``` for i in selectedList: src_dir = self.DirFilename + "/" + str(i) + ".mov" dst_dir = self.DirDest r = os.system('cp -fr %s %s' % (src_dir, dst_dir)) if r != 0: print 'An error occurred!' ``` I would like the copy to search the source directory for the given filename and then recreate the folder structure in the destination as well as copy the file. Any suggestions would be helpful (like any massively obvious mistakes that I'm making) - it's my first Python program and I'm nearly there! Thanks Gavin
I think something like this could do the trick. Of course you may want to use something more advanced than os.system to call cp. ``` import os for r, d, f in os.walk(self.DirFilename): for file in f: f_name, f_ext = os.path.splitext(file) if ".mov" == f_ext: if f_name in selectedList: src_abs_path = os.path.join(r, file) src_relative_path = os.path.relpath(src_abs_path, self.DirFilename) dst_abs_path = os.path.join(self.DirDest, src_relative_path) dst_dir = os.path.dirname(dst_abs_path) if not os.path.exists(dst_dir): os.makedirs(dst_dir) ret = os.system('cp -fr %s %s' % (src_abs_path, dst_abs_path)) if ret != 0: print 'An error occurred!' ```
See <http://blogs.blumetech.com/blumetechs-tech-blog/2011/05/faster-python-file-copy.html> for a pure Python implementation of the recursive copy. You can use `os.walk` to find the file you need: ``` def find_files(...): for ... in os.walk(...): if ...: yield filename for name in find_files(...): copy(name, ...) ```
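Combining the two answers, here is a pure-Python sketch of the same walk-and-copy idea using `shutil` instead of shelling out to `cp` (`shutil.copy2` preserves timestamps, and the quoting/injection problems of `os.system` go away entirely; the function signature is invented for the example):

```python
import os
import shutil

def copy_selected(src_root, dst_root, selected):
    """Copy <name>.mov files named in `selected` from src_root to
    dst_root, recreating the directory structure underneath."""
    wanted = set(selected)
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for filename in filenames:
            name, ext = os.path.splitext(filename)
            if ext == ".mov" and name in wanted:
                rel = os.path.relpath(dirpath, src_root)
                dst_dir = os.path.join(dst_root, rel)
                if not os.path.isdir(dst_dir):
                    os.makedirs(dst_dir)
                shutil.copy2(os.path.join(dirpath, filename),
                             os.path.join(dst_dir, filename))
```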
Copy Files From List Python
[ "", "python", "wxpython", "cp", "shutil", "" ]
I've got a program that has a small file structure going on and is then run using ``` python do_work.py foo bar ``` I want my Rails users to press a button and have this happen for them, with the result either uploaded somewhere or just thrown to them as a download link or something of the sort - the output of `do_work.py` (say, it's `result.txt`). I also want to clarify that the script results in the creation on the filesystem of 3 separate files, which are not text files (which shouldn't matter and isn't really the problem here). What is the best way to go about it? Can rake run exec Python? More importantly, is this doable on heroku? I have Python installed on my system but the provided answer by sockmonk doesn't seem to work - it returns nil. Mind you, other commands like `ls` seem to work. Could it be a permissions problem? ``` def index value = %x( python --version ) render :text => value end ``` Incidentally, trying this in `irb`: ``` %x(python) ``` Brings up the Python terminal INSIDE of irb. It will not take params for whatever reason however.
Your `index` method does not work because `python --version` outputs its version to STDERR, not STDOUT. If you don't need to separate these streams, you may just redirect STDERR to STDOUT: ``` value = %x(python --version 2>&1) ``` This call is synchronous, so after running the script (`python do_work.py foo bar 2>&1`), you should be able to read the files produced by it. If the script is not able to create the files for some reason, you will now see the exception in the `value` variable because error messages are usually sent to STDERR. If you want to separate STDERR from STDOUT, use the [Open3](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open3/rdoc/Open3.html) module. Beware that the script takes some time to run, so the calls may overlap. I would use a queue here to prevent this. And don't forget to check the data the user enters. Never pass it directly to the script.
It partly depends on the format of the data. If it's not too long and can be rendered directly in the browser, you can just do something like this in a rails controller: ``` result = `python do_work.py foo bar` render :text => result ``` And assuming that result is plain ASCII text, the result will go straight to their browser. If the params to do\_work.py come from the user you MUST validate them first though, so you don't wind up creating a nasty vulnerability for yourself. Using the system() call would probably be safer in that case. If you want to send the results back as a file, look at ruby's Tempfile class for creating the file (in a way that won't stick around forever), and rails' send\_file and send\_data commands for some different options to send back the results that way.
How to integrate a standalone Python script into a Rails application?
[ "", "python", "ruby-on-rails", "ruby", "heroku", "rake", "" ]
Let's say I have ``` num = 123456 ``` How do I get the digit sum, which is 21? How do I use the sum() function? I tried sum(num) but it says 'int' object is not iterable.
One way: ``` In [1]: num=123456 In [2]: sum(map(int,str(num))) Out[2]: 21 In [3]: def digitsum(x): ...: return sum(map(int,str(x))) ...: In [4]: digitsum(num) Out[4]: 21 ```
You have to change it to a string first: ``` In [24]: num = 123456 In [25]: sum(int(x) for x in str(num)) Out[25]: 21 ``` Without converting to a string (note the floor division `//=`, so this also works under Python 3): ``` def solve(n): summ = 0 while n: summ += n % 10 n //= 10 return summ In [38]: solve(123456) Out[38]: 21 ```
How to get digit sum in python
[ "", "python", "" ]
Table car\_log ``` Speed LogDate 5 2013-04-30 10:10:09 ->row1 6 2013-04-30 10:12:15 ->row2 4 2013-04-30 10:13:44 ->row3 17 2013-04-30 10:15:32 ->row4 22 2013-04-30 10:18:19 ->row5 3 2013-04-30 10:22:33 ->row6 4 2013-04-30 10:24:14 ->row7 15 2013-04-30 10:26:59 ->row8 2 2013-04-30 10:29:19 ->row9 ``` I want to know for how long the car's speed was under 10. My idea is to take the LogDate difference between row1 and row4 (because at 10:13:44, between row3 and row4, the speed is 4) and add (sum) the LogDate difference between row6 and row8. I am not sure whether this is right. How can I compute it with MySQL queries? Thank you.
For every row, find the first row with a higher (later) LogDate. If the speed in this row is less than 10, count the date difference between this row's date and the next row's date, else put 0. (Note the ORDER BY inside the subquery: without it, LIMIT 1 would pick an arbitrary row rather than the next one.) A query that would give a list of the values counted this way should look like: ``` SELECT ( SELECT IF( c1.speed <10, unix_timestamp( c2.LogDate ) - unix_timestamp( c1.logdate ) , 0 ) FROM car_log c2 WHERE c2.LogDate > c1.LogDate ORDER BY c2.LogDate LIMIT 1 ) AS seconds_below_10 FROM car_log c1 ``` Now it's just a matter of summing it up: ``` SELECT sum( seconds_below_10) FROM ( SELECT ( SELECT IF( c1.speed <10, unix_timestamp( c2.LogDate ) - unix_timestamp( c1.logdate ) , 0 ) FROM car_log c2 WHERE c2.LogDate > c1.LogDate ORDER BY c2.LogDate LIMIT 1 ) AS seconds_below_10 FROM car_log c1 ) seconds_between_logs ``` Update after comment about adding CarId: When you have more than 1 car you need to add one more WHERE condition inside the dependent subquery (we want the next log for that exact car, not just any next log) and group the whole rowset by CarId, possibly adding said CarId to the select to show it too. ``` SELECT sbl.carId, sum( sbl.seconds_below_10 ) as `seconds_with_speed_less_than_10` FROM ( SELECT c1.carId, ( SELECT IF( c1.speed <10, unix_timestamp( c2.LogDate ) - unix_timestamp( c1.logdate ) , 0 ) FROM car_log c2 WHERE c2.LogDate > c1.LogDate AND c2.carId = c1.carId ORDER BY c2.LogDate LIMIT 1 ) AS seconds_below_10 FROM car_log c1 ) sbl GROUP BY sbl.carId ``` See an example at [Sqlfiddle](http://sqlfiddle.com/#!2/3f900/1).
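The same interval logic can be sanity-checked in plain Python against the sample rows from the question: each log row "owns" the stretch of time until the next row, and we sum the stretches whose starting speed is below 10.

```python
from datetime import datetime

# (speed, LogDate) rows copied from the question's table.
log = [
    (5,  "2013-04-30 10:10:09"),
    (6,  "2013-04-30 10:12:15"),
    (4,  "2013-04-30 10:13:44"),
    (17, "2013-04-30 10:15:32"),
    (22, "2013-04-30 10:18:19"),
    (3,  "2013-04-30 10:22:33"),
    (4,  "2013-04-30 10:24:14"),
    (15, "2013-04-30 10:26:59"),
    (2,  "2013-04-30 10:29:19"),
]

fmt = "%Y-%m-%d %H:%M:%S"
seconds_below_10 = 0
for (speed, start), (_next_speed, end) in zip(log, log[1:]):
    if speed < 10:
        delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
        seconds_below_10 += int(delta.total_seconds())
# rows 1-3 and 6-7 contribute: 126 + 89 + 108 + 101 + 165 = 589 seconds
```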
If the type of column 'LogDate' is a MySQL DATETIME type, you can use the timestampdiff() function in your select statement to get the difference between timestamps. The timestampdiff function is documented in the manual at: <http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_timestampdiff> You need to break the query down into subqueries, and then use the TIMESTAMPDIFF function. The function takes three arguments, the units you want the result in (ex. SECOND, MINUTE, DAY, etc), and then Value2, and last Value1. To get the maximum value for LogDate where speed is less than 10 use: ``` select MAX(LogDate) from <yourtable> where Speed<10 ``` To get the minimum value for LogDate where speed is less than 10 use: ``` select MIN(LogDate) from <yourtable> where Speed<10 ``` Now, combine these into a single query with the TIMESTAMPDIFF function: ``` select TIMESTAMPDIFF(SECOND, (select MAX(LogDate) from <yourtable> where Speed<10, (select MIN(LogDate) from <yourtable> where Speed<10))); ``` If LogDate is of a different type, there are other Date/Time Diff functions to handle math between any of these types. You will just need to change 'TIMESTAMPDIFF' to the correct function for your column type. Additional ref: <http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html>
Sum Multiple Row Date Difference Mysql
[ "", "mysql", "sql", "" ]
The following is my query: ``` select c.cust_lname, c.cust_fname, o.amount from CUSTOMER c, orders o where o.amount > (select AVG (o.amount) from orders o group by order_num having o.amount > AVG(o.amount)); ``` Why is this not working?
The way you are writing the query, it would look like: ``` select c.cust_lname, c.cust_fname, o.amount from CUSTOMER c join orders o on c.customerId = o.customerId where o.amount > (select AVG (o.amount) from orders o) ``` Note that you need to join together the two tables to get what you want.
You overengineered it. Take the having clause out of your subquery.
List Customers That Purchased Orders > The Average Paid Order
[ "", "sql", "sql-server-2008", "" ]
I am developing an AI to perform MDP. I am getting states (just integers in this case) and assigning them a value, and I am going to be doing this a lot. So I am looking for a data structure that can hold that information (no need for delete) and will have a very fast get/update function. Is there something faster than the regular dictionary? I am open to anything really - native Python or open source - I just need fast lookups.
Using a Python dictionary is the way to go.
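As a rough sketch of what that looks like for integer states (the state count and values here are arbitrary placeholders, not part of the question):

```python
# Integer states mapped to values: dict vs. index-into-a-preallocated-list.
n_states = 10_000  # arbitrary choice for illustration

values_dict = {}
for s in range(n_states):
    values_dict[s] = 0.0            # O(1) average insert/update per key

values_list = [0.0] * n_states      # preallocated; list indices are the states

values_dict[42] = 1.5               # O(1) average update
values_list[42] = 1.5               # O(1) update by index

assert values_dict[42] == values_list[42] == 1.5
```

For dense, purely integer keys the list variant can shave off the hashing step, but for anything sparse or growing, the dictionary is the right default.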
You're saying that all your keys are integers? In that case, it **might** be faster to use a list and just treat the list indices as the key values. However, you'd have to make sure that you never delete or add list items; just start with as many as you think you'll need, setting them all equal to `None`, as shown: ``` mylist = [None for i in xrange(totalitems)] ``` Then, when you need to "add" an item, just set the corresponding value. Note that this probably won't actually gain you much in terms of actual efficiency, and it might be more confusing than just using a dictionary. For 10,000 items, it turns out (on my machine, with my particular test case) that accessing each one and assigning it to a variable takes about 334.8 seconds with a list and 565 seconds with a dictionary.
Fastest Get Python Data Structures
[ "", "python", "performance", "data-structures", "artificial-intelligence", "" ]
I am struggling to understand what the `if (i >> j) % 2 == 1` check does in the following function, or in any function for that matter? ``` def powerSet(items): N = len(items) for i in xrange(2**N): combo = [] for j in xrange(N): if (i >> j) % 2 == 1: combo.append(items[j]) yield combo ```
It checks whether the `j`'th bit of the binary number `i`, counting from the end, is set. `i >> j` right-shift, so the final `j` bits are gone. `n % 2 == 1` is the familiar check for odd numbers, which in binary have their last bit set. **EDIT**: this is generating a power set as follows. The outer loop walks through all `2**N` subsets of `items`, each represented as a binary integer. The inner loop then constructs the actual subset by checking which of the `N` final bits of these integers are set, using the bits as indicators of subset membership. E.g., suppose that `N=5`. Then at some point, `i` will be `0b10011`. From that, the set `[items[0], items[1], items[4]]` can be constructed. First reverse the bits, because they're numbered right-to-left by `j`: ``` 1 1 0 0 1 items[0] items[1] (nothing) (nothing) items[4] ``` (Try printing `i` and `combo` inside the inner loop.)
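Running the function on a tiny input makes the bit trick visible. This is the question's function, only with Python 3's `range` in place of `xrange`:

```python
def powerSet(items):
    N = len(items)
    for i in range(2 ** N):          # each i from 0..2**N-1 encodes one subset in binary
        combo = []
        for j in range(N):
            if (i >> j) % 2 == 1:    # bit j of i is set -> include items[j]
                combo.append(items[j])
        yield combo

subsets = list(powerSet(['a', 'b']))
print(subsets)  # [[], ['a'], ['b'], ['a', 'b']]
```

With two items, `i` runs 0..3 (binary 00, 01, 10, 11), and the set bits of each `i` pick out exactly the four subsets.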
You can print the numbers in binary as the operation progresses to see how it works. Here's an example with i=1234 and j=4. 1234 in binary ``` >>> '{:b}'.format(1234) '10011010010' ``` shifting right 4 places causes the rightmost bits (0010) to fall away ``` >>> '{:b}'.format(1234>>4) '1001101' ``` the modulo operation divides by 2 and gives you the remainder ``` >>> '{:b}'.format((1234>>4)%2) '1' ``` its also common to do this with the & operation ``` >>> '{:b}'.format((1234>>4)&1) '1' ``` if you have a number where the 4th bit (from zero) is zero, you get a zero ``` >>> '{:b}'.format((1234+0b10000>>4)&1) '0' ```
struggling to understand bitwise operators in python
[ "", "python", "bit-manipulation", "" ]
I'm trying to compare two csv files (fileA and fileB), and remove any rows from fileA that are not found in fileB. I want to be able to do this without creating a third file. I thought I could do this using the csv writer module but now I'm second guessing myself. Currently, I'm using the following code to record my comparison data from file B: ``` removal_list = set() with open('fileB', 'rb') as file_b: reader1 = csv.reader(file_b) next(reader1) for row in reader1: removal_list.add((row[0], row[2])) ``` This is where I'm stuck and do not know how to delete the rows: ``` with open('fileA', 'ab') as file_a: with open('fileB', 'rb') as file_b: writer = csv.writer(file_a) reader2 = csv.reader(file_b) next(reader2) for row in reader2: if (row[0], row[2]) not in removal_list: # If row was not present in file B, Delete it from file A. #stuck here: writer.<HowDoIRemoveRow>(row) ```
This solution uses [`fileinput`](http://docs.python.org/2/library/fileinput.html) with `inplace=True`, which writes to a temporary file and then automatically renames it at the end to your file name. You can't *remove* rows from a file but you can rewrite it with only the ones you want. > if the keyword argument `inplace=1` is passed to `fileinput.input()` or to the `FileInput` constructor, the file is moved to a backup file and standard output is directed to the input file (if a file of the same name as the backup file already exists, it will be replaced silently). This makes it possible to write a filter that rewrites its input file in place. fileA ``` h1,h2,h3 a,b,c d,e,f g,h,i j,k,l ``` fileB ``` h1,h2,h3 a,b,c 1,2,3 g,h,i 4,5,6 ``` --- ``` import fileinput, sys, csv with open('fileB', 'rb') as file_b: r = csv.reader(file_b) next(r) #skip header seen = {(row[0], row[2]) for row in r} f = fileinput.input('fileA', inplace=True) # sys.stdout is redirected to the file print next(f), # write header as first line w = csv.writer(sys.stdout) for row in csv.reader(f): if (row[0], row[2]) in seen: # write it if it's in B w.writerow(row) ``` --- fileA ``` h1,h2,h3 a,b,c g,h,i ```
CSV is not a database format. It is read and written as a whole. You can't remove rows in the middle. So the only way to do this without creating a third file is to read in the file completely in memory and then write it out, without the offending rows. But in general it's better to use a third file.
How to Delete Rows CSV in python
[ "", "python", "csv", "python-2.7", "module", "delete-row", "" ]
I have a table, and I'd like to select rows with the highest value. For example: ``` ---------------- | user | index | ---------------- | 1 | 1 | | 2 | 1 | | 2 | 2 | | 3 | 4 | | 3 | 7 | | 4 | 1 | | 5 | 1 | ---------------- ``` Expected result: ``` ---------------- | user | index | ---------------- | 1 | 1 | | 2 | 2 | | 3 | 7 | | 4 | 1 | | 5 | 1 | ---------------- ``` How may I do so? I assume it can be done by some oracle function I am not aware of? Thanks in advance :-)
If you have more than one column to return, use an analytic function: ``` select user, index from (select u.*, row_number() over (partition by user order by index desc) as rnk from some_table u) where rnk = 1 ``` * `user` is a reserved word - you should use a different name for the column.
You can use `MAX()` function for that with grouping user column like this: ``` SELECT "user" ,MAX("index") AS "index" FROM Table1 GROUP BY "user" ORDER BY "user"; ``` Result: ``` | USER | INDEX | ---------------- | 1 | 1 | | 2 | 2 | | 3 | 7 | | 4 | 1 | | 5 | 1 | ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!4/9c1aa/11)
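Both answers are easy to try against the sample data with an in-memory SQLite database; this sketch runs the `GROUP BY` variant (note that the reserved words `user` and `index` must be quoted here too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t ("user" INTEGER, "index" INTEGER)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(1, 1), (2, 1), (2, 2), (3, 4), (3, 7), (4, 1), (5, 1)])

rows = conn.execute(
    'SELECT "user", MAX("index") FROM t GROUP BY "user" ORDER BY "user"'
).fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 7), (4, 1), (5, 1)]
```

The output matches the expected result table in the question.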
selecting data with highest field value in a field
[ "", "sql", "oracle", "oracle11g", "" ]
I am trying to learn Django from the 1st tutorial on the Django project website. I might be missing something obvious, but after following all the instructions, when I come to run the command ``` python manage.py runserver ``` I get the error posted at the end of this plea for help (I have posted only the first few lines of the repeated lines of the error message for brevity). Here are some of the solutions/suggestions I have found on the web that were NOT helpful to me. 1) `sys.setrecursionlimit(1500)`. This didn't work for me. 2) [Django RuntimeError: maximum recursion depth exceeded](https://stackoverflow.com/questions/15236556/django-runtimeerror-maximum-recursion-depth-exceeded) This also isn't an option because I am not using PyDev; I tried uninstalling and installing Django using pip, it didn't fix anything, and I am using Mountain Lion's native Python, which I am not going to uninstall, since that is not recommended. 3) I also tried: ``` python manage.py runserver --settings=mysite.settings ``` Same exact error as the command without the settings option. Any suggestions or recommendations would be much appreciated. I am using the Django Official Version
1.5.1 which I installed using pip and Python 2.7.2 ``` Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x10f7ee5d0>> Traceback (most recent call last): File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 92, in inner_run self.validate(display_num_errors=True) File "/Library/Python/2.7/site-packages/django/core/management/base.py", line 280, in validate num_errors = get_validation_errors(s, app) File "/Library/Python/2.7/site-packages/django/core/management/validation.py", line 35, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 166, in get_app_errors self._populate() File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 72, in _populate self.load_app(app_name, True) File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 96, in load_app models = import_module('.models', app_name) File "/Library/Python/2.7/site-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Python/2.7/site-packages/django/contrib/auth/models.py", line 370, in <module> class AbstractUser(AbstractBaseUser, PermissionsMixin): File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 213, in __new__ new_class.add_to_class(field.name, copy.deepcopy(field)) File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 265, in add_to_class value.contribute_to_class(cls, name) File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 257, in contribute_to_class cls._meta.add_field(self) File "/Library/Python/2.7/site-packages/django/db/models/options.py", line 179, in add_field self.local_fields.insert(bisect(self.local_fields, field), field) File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), RuntimeError: maximum recursion depth exceeded in cmp ``` UPDATE: So what I ended up doing was to do an overkill of installing virtualbox, installing free ubuntu on it and then moving on to finish the tutorial...oh well!
The problem is in *functools.py* file. This file is from Python. I have just installed a new version of python 2.7.5 and this file is wrong (I have another - older installation of python 2.7.5 and there the file functools.py is correct) To fix the problem replace this (about line 56 in python\Lib\fuctools.py): ``` convert = { '__lt__': [('__gt__', lambda self, other: other < self), ('__le__', lambda self, other: not other < self), ('__ge__', lambda self, other: not self < other)], '__le__': [('__ge__', lambda self, other: other <= self), ('__lt__', lambda self, other: not other <= self), ('__gt__', lambda self, other: not self <= other)], '__gt__': [('__lt__', lambda self, other: other > self), ('__ge__', lambda self, other: not other > self), ('__le__', lambda self, other: not self > other)], '__ge__': [('__le__', lambda self, other: other >= self), ('__gt__', lambda self, other: not other >= self), ('__lt__', lambda self, other: not self >= other)] } ``` to that: ``` convert = { '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], '__le__': [('__ge__', lambda self, other: not self <= other or self == other), ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } ``` Read also: <http://regebro.wordpress.com/2010/12/13/python-implementing-rich-comparison-the-correct-way/>
You have likely run into this bug: <http://bugs.python.org/issue10042> Exactly what happens is hard to tell without debugging, bit I'd guess one of the things that should be a field isn't in this line: ``` self.local_fields.insert(bisect(self.local_fields, field), field) ```
Django: python manage.py runserver gives RuntimeError: maximum recursion depth exceeded in cmp
[ "", "python", "django", "" ]
I'm trying to create a function that takes in 2 lists and returns the list that only has the differences of the two lists. Example: ``` a = [1,2,5,7,9] b = [1,2,4,8,9] ``` The result should print `[4,5,7,8]` The function so far: ``` def xor(list1, list2): list3=list1+list2 for i in range(0, len(list3)): x=list3[i] y=i while y>0 and x<list3[y-1]: list3[y]=list3[y-1] y=y-1 list3[y]=x last=list3[-1] for i in range(len(list3) -2, -1, -1): if last==list3[i]: del list3[i] else: last=list3[i] return list3 print xor([1,2,5,7,8],[1,2,4,8,9]) ``` The first for loop sorts it, second one removes the duplicates. Problem is the result is `[1,2,4,5,7,8,9]` not `[4,5,7,8]`, so it doesn't completely remove the duplicates? What can I add to do this. I can't use any special modules, .sort, set or anything, just loops basically.
You basically want to add an element to your new list if it is present in one and not present in another. Here is a compact loop which can do it. For each element in the two lists (concatenate them with `list1+list2`), we add element if it is not present in one of them: ``` [a for a in list1+list2 if (a not in list1) or (a not in list2)] ``` You can easily transform it into a more unPythonic code with explicit looping through elements as you have now, but honestly I don't see a point (not that it matters): ``` def xor(list1, list2): outputlist = [] list3 = list1 + list2 for i in range(0, len(list3)): if ((list3[i] not in list1) or (list3[i] not in list2)) and (list3[i] not in outputlist): outputlist[len(outputlist):] = [list3[i]] return outputlist ```
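A Python 3 sketch of the comprehension above, checked against the question's example:

```python
def xor(list1, list2):
    # keep each element (from either list) that is missing from one of the two lists
    return [a for a in list1 + list2 if (a not in list1) or (a not in list2)]

a = [1, 2, 5, 7, 9]
b = [1, 2, 4, 8, 9]
print(sorted(xor(a, b)))  # [4, 5, 7, 8]
```

Note the result is not sorted by the comprehension itself; elements come out in the order they appear in `list1 + list2`, so `sorted()` is applied for display.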
Using a set is better: ``` >>> a = [1,2,5,7,9] >>> b = [1,2,4,8,9] >>> set(a).symmetric_difference(b) {4, 5, 7, 8} ``` Thanks to @DSM, a shorter spelling is: ``` >>> set(a)^set(b) ``` These two statements are equivalent, but the latter is clearer. Update: sorry, I did not see the last requirement: cannot use set. As far as I can see, the solution provided by @sashkello is the best.
Comparing two lists and only printing the differences? (XORing two lists)
[ "", "python", "python-2.7", "" ]
I just started learning recursion and I have an assignment to write a program that tells the nesting depth of a list. Well, I browsed around and found working code to do this, but I'm still having trouble understanding how it works. Here's the code: ``` def depth(L) : nesting = [] for c in L: if type(c) == type(nesting) : nesting.append(depth(c)) if len(nesting) > 0: return 1 + max(nesting) return 1 ``` So naturally, I start to get confused at the line with the append that calls recursion. Does anyone have a simple way of explaining what's going on here? I'm not sure what is actually being appended, and going through it with test cases in my head isn't helping. Thanks! edit: sorry if the formatting is poor, I typed this from my phone
Let me show it to you the easy way, change the code like this: (### are the new lines I added to your code so you can watch what is happening there) ``` def depth(L) : nesting = [] for c in L: if type(c) == type(nesting) : print 'nesting before append', nesting ### nesting.append(depth(c)) print 'nesting after append', nesting ### if len(nesting) > 0: return 1 + max(nesting) return 1 ``` Now lets make a list with the depth of three: ``` l=[[1,2,3],[1,2,[4]],'asdfg'] ``` You can see our list has 3 element. one of them is a list, the other is a list which has another list in itself and the last one is a string. You can clearly see the depth of this list is 3 (i.e there are 2 lists nested together in the second element of the main list) Lets run this code: ``` >>> depth(l) nesting before append [] nesting after append [1] nesting before append [1] nesting before append [] nesting after append [1] nesting after append [1, 2] 3 ``` Piece of cake! this function appends 1 to the nesting. then if the element has also another list it appends 1 + maximum number in nesting which is the number of time function has been called itself. and if the element is a string, it skips it. At the end, it returns the maximum number in the nesting which is the maximum number of times recursion happened, which is the number of time there is a list inside list in the main list, aka depth. In our case recursion happened twice for the second element + 1=3 as we expected. If you still have problem getting it, try to add more `print` statements or other variables to the function and watch them carefully and eventually you'll get it.
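If it helps, the same recursion can be written more compactly with `isinstance`, the idiomatic type check. This is a sketch that behaves like the assignment's function on the examples above:

```python
def depth(L):
    # recurse into every element that is itself a list; strings are skipped
    nesting = [depth(c) for c in L if isinstance(c, list)]
    # a list with no nested lists has depth 1; otherwise 1 + the deepest child
    return 1 + max(nesting) if nesting else 1

print(depth([[1, 2, 3], [1, 2, [4]], 'asdfg']))  # 3
```

Reading it this way makes the structure of the original clear: `nesting` collects the depth of each child list, and the function adds one level for the list it was called on.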
So what this seems to be is a function that takes a list and calculates, as you put it, the nesting depth of it. nesting is a list, so what `if type(c) == type(nesting)` is saying is: if the item in list `L` is a list, run the function again and append it and when it runs the function again, it will do the same test until there are no more nested lists in list `L` and then return 1 + the max amount of nested lists because every list has a depth of 1. Please tell me if any of this is unclear
having trouble understanding this code
[ "", "python", "recursion", "nested", "depth", "" ]
So I am running into the problem where my report header is being displayed on every page, but I only want to display it on the first page. The solution that people on this forum have given is to just put the header in the main content. The problem with this is that my header shows the page number + total pages (i.e. Page 1 of 3 pages). I cannot move my header to the body because I will lose access the page numbers. For some reason, Report Builder will only allow you to have access to the page numbers via the header. Does anyone have any solution to this problem?
Write an expression to hide the textboxes that hold the header information. The expression would look like this: ``` =iif(Globals!PageNumber = 1, FALSE, TRUE) ``` To get to the expression property: right-click text box >> text box properties >> visibility >> select "Show or hide based on expression" >> insert expression above cheers
I had the same issue, where I only wanted the header to show on the first page. The solution I came up with was to stick all of my objects from the header into a rectangle, so it was now acting as a container. I then placed that container into the body. In the report properties, in the code section, I borrowed from this post [Access Page number in report body In SSRS](https://stackoverflow.com/questions/4758793/access-page-number-in-report-body-ssrs) to create functions, which would allow me to pull the page numbers into the body section. Then in my rectangle/container, I set the visibility property to =code.PageNumber>1. I hope this helps!
SSRS Report Builder - Only Show Header On First Page (With Page Numbers)
[ "", "sql", "reporting-services", "" ]
I have a file which looks like ``` 1::12::33::1555 1::412::1245::23444 ``` and so on. I need to get rid of the last argument, and replace the colons with commas. I have tried: ``` myfile = open('words.txt', 'r') content = myfile.read() content = re.sub(r'(.+)::(.+)::(.+)::(.+)', "\1,\2,\3", content) myfile = open('words.txt', 'w') myfile.write(content) # Close the file myfile.close() ``` but the back reference doesn't work, and I just end up with a file with commas.. What I'm hoping to achieve is: ``` 1,12,33 1,412,1245 ```
This would give you the string you need: ``` line = '1::412::1245::23444' line_list = line.split('::') new_line = ','.join(line_list[:-1]) print new_line >> 1,412,1245 ```
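Applied to the whole file contents at once, the same split/join idea looks like this (a Python 3 sketch; the input string stands in for the contents of words.txt):

```python
content = "1::12::33::1555\n1::412::1245::23444\n"

lines = []
for line in content.splitlines():
    fields = line.split('::')
    lines.append(','.join(fields[:-1]))   # drop the last field, re-join with commas

result = '\n'.join(lines)
print(result)
# 1,12,33
# 1,412,1245
```

To update the real file, read it, build `result` as above, and write it back out.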
Backreferences will only be interpolated with a raw string. ``` re.sub(r'(.+)::(.+)::(.+)::(.+)', r"\1,\2,\3", content) ``` You could also do this using purely strings/lists ``` "\n".join([",".join(y.split('::')[:-1]) for y in content.split("\n")]) ```
How to replace the colons in this text using Python?
[ "", "python", "regex", "" ]
Say I have a list of lists that has indexes `[[start, end], [start1, end1], [start2, end2]]`. Like for example : `[[0, 133], [78, 100], [25, 30]]`. How would I get check for overlap between among the lists and remove the list with the longer length each time? So: ``` >>> list = [[0, 133], [78, 100], [25, 30]] >>> foo(list) [[78, 100], [25, 30]] ``` This is what I tried to do so far: ``` def cleanup_list(list): i = 0 c = 0 x = list[:] end = len(x) while i < end-1: for n in range(x[i][0], x[i][1]): if n in range(x[i+1][0], x[i+1][1]): list.remove(max(x[i], x[i+1])) i +=1 return list ``` But in addition to being kind of convoluted it's not working properly: ``` >>>cleanup_list([[0,100],[9,10],[12,90]]) [[0, 100], [12, 90]] ``` Any help would be appreciated! EDIT: Other examples would be: ``` >>>a = [[0, 100], [4, 20], [30, 35], [30, 78]] >>>foo(a) [[4, 20], [30, 35]] >>>b = [[30, 70], [25, 40]] >>>foo(b) [[25, 40]] ``` I'm basically trying to remove all of the longest lists that overlap with another list. In this case I don't have to worry about the lists being of equal length. Thanks!!
To remove a minimal number of intervals from the list such that the intervals that are left do not overlap, `O(n*log n)` algorithm exists: ``` def maximize_nonoverlapping_count(intervals): # sort by the end-point L = sorted(intervals, key=lambda (start, end): (end, (end - start)), reverse=True) # O(n*logn) iv = build_interval_tree(intervals) # O(n*log n) result = [] while L: # until there are intervals left to consider # pop the interval with the smallest end-point, keep it in the result result.append(L.pop()) # O(1) # remove intervals that overlap with the popped interval overlapping_intervals = iv.pop(result[-1]) # O(log n + m) remove(overlapping_intervals, from_=L) return result ``` It should produce the following results: ``` f = maximize_nonoverlapping_count assert f([[0, 133], [78, 100], [25, 30]]) == [[25, 30], [78, 100]] assert f([[0,100],[9,10],[12,90]]) == [[9,10], [12, 90]] assert f([[0, 100], [4, 20], [30, 35], [30, 78]]) == [[4, 20], [30, 35]] assert f([[30, 70], [25, 40]]) == [[25, 40]] ``` It requires the data structure that can find in `O(log n + m)` time all intervals that overlap with the given interval e.g., [`IntervalTree`](http://en.wikipedia.org/wiki/Interval_tree). There are implementations that can be used from Python e.g., [`quicksect.py`](http://bitbucket.org/james_taylor/bx-python/raw/ebf9a4b352d3/lib/bx/intervals/operations/quicksect.py), see [Fast interval intersection methodologies](http://www.biostars.org/p/99/) for the example usage. 
--- Here's a `quicksect`-based `O(n**2)` implementation of the above algorithm: ``` from quicksect import IntervalNode class Interval(object): def __init__(self, start, end): self.start = start self.end = end self.removed = False def maximize_nonoverlapping_count(intervals): intervals = [Interval(start, end) for start, end in intervals] # sort by the end-point intervals.sort(key=lambda x: (x.end, (x.end - x.start))) # O(n*log n) tree = build_interval_tree(intervals) # O(n*log n) result = [] for smallest in intervals: # O(n) (without the loop body) # pop the interval with the smallest end-point, keep it in the result if smallest.removed: continue # skip removed nodes smallest.removed = True result.append([smallest.start, smallest.end]) # O(1) # remove (mark) intervals that overlap with the popped interval tree.intersect(smallest.start, smallest.end, # O(log n + m) lambda x: setattr(x.other, 'removed', True)) return result def build_interval_tree(intervals): root = IntervalNode(intervals[0].start, intervals[0].end, other=intervals[0]) return reduce(lambda tree, x: tree.insert(x.start, x.end, other=x), intervals[1:], root) ``` Note: the time complexity in the worst case is `O(n**2)` for this implementation because the intervals are only marked as removed e.g., imagine such input `intervals` that `len(result) == len(intervals) / 3` and there were `len(intervals) / 2` intervals that span the whole range then `tree.intersect()` would be called `n/3` times and each call would execute `x.other.removed = True` at least `n/2` times i.e., `n*n/6` operations in total: ``` n = 6 intervals = [[0, 100], [0, 100], [0, 100], [0, 10], [10, 20], [15, 40]]) result = [[0, 10], [10, 20]] ``` --- Here's a [`banyan`](https://pypi.python.org/pypi/Banyan)-based `O(n log n)` implementation: ``` from banyan import SortedSet, OverlappingIntervalsUpdator # pip install banyan def maximize_nonoverlapping_count(intervals): # sort by the end-point O(n log n) sorted_intervals = SortedSet(intervals, 
key=lambda (start, end): (end, (end - start))) # build "interval" tree O(n log n) tree = SortedSet(intervals, updator=OverlappingIntervalsUpdator) result = [] while sorted_intervals: # until there are intervals left to consider # pop the interval with the smallest end-point, keep it in the result result.append(sorted_intervals.pop()) # O(log n) # remove intervals that overlap with the popped interval overlapping_intervals = tree.overlap(result[-1]) # O(m log n) tree -= overlapping_intervals # O(m log n) sorted_intervals -= overlapping_intervals # O(m log n) return result ``` Note: this implementation considers `[0, 10]` and `[10, 20]` intervals to be overlapping: ``` f = maximize_nonoverlapping_count assert f([[0, 100], [0, 10], [11, 20], [15, 40]]) == [[0, 10] ,[11, 20]] assert f([[0, 100], [0, 10], [10, 20], [15, 40]]) == [[0, 10] ,[15, 40]] ``` `sorted_intervals` and `tree` can be merged: ``` from banyan import SortedSet, OverlappingIntervalsUpdator # pip install banyan def maximize_nonoverlapping_count(intervals): # build "interval" tree sorted by the end-point O(n log n) tree = SortedSet(intervals, key=lambda (start, end): (end, (end - start)), updator=OverlappingIntervalsUpdator) result = [] while tree: # until there are intervals left to consider # pop the interval with the smallest end-point, keep it in the result result.append(tree.pop()) # O(log n) # remove intervals that overlap with the popped interval overlapping_intervals = tree.overlap(result[-1]) # O(m log n) tree -= overlapping_intervals # O(m log n) return result ```
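For small inputs, a plain greedy version gives the same answers on the question's examples: sort intervals by length, then keep each one unless it overlaps something already kept. This is an O(n²) sketch, not a replacement for the tree-based approach above, and it returns the kept intervals in length order rather than the question's order:

```python
def keep_shortest_nonoverlapping(intervals):
    kept = []
    # shortest intervals first, so a long interval loses to any shorter one it overlaps
    for start, end in sorted(intervals, key=lambda iv: iv[1] - iv[0]):
        # (start, end) overlaps (s, e) when max(start, s) < min(end, e)
        if all(max(start, s) >= min(end, e) for s, e in kept):
            kept.append((start, end))
    return [[s, e] for s, e in kept]

print(keep_shortest_nonoverlapping([[0, 133], [78, 100], [25, 30]]))
# [[25, 30], [78, 100]]
```

Comparing results as sets of intervals, this matches all four expected outputs in the question.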
This may not be the fastest solution, but really verbose and clear - I think. ``` a = [[2,100], [4,10], [77,99], [38,39], [44,80], [69,70], [88, 90]] # build ranges first def expand(list): newList = [] for r in list: newList.append(range(r[0], r[1] + 1)) return newList def compare(list): toBeDeleted = [] for index1 in range(len(list)): for index2 in range(len(list)): if index1 == index2: # we dont want to compare ourselfs continue matches = [x for x in list[index1] if x in list[index2]] if len(matches) != 0: # do we have overlap? ## compare lengths and get rid of the longer one if len(list[index1]) > len(list[index2]): toBeDeleted.append(index1) break elif len(list[index1]) < len(list[index2]): toBeDeleted.append(index2) # distinct toBeDeleted = [ toBeDeleted[i] for i,x in enumerate(toBeDeleted) if x not in toBeDeleted[i+1:]] print len(list) # remove items for i in toBeDeleted[::-1]: del list[i] return list print(compare(expand(a))) ```
Python - Removing overlapping lists
[ "", "python", "algorithm", "list", "" ]
I have done extensive searching for a solution to my question, however I have come up short, primarily because there is no example for loop using the conditions that I require. **Question** Use a for loop to count how many times the letter 'a' appears in the string 'fanta'. Check your answer using Python's string method called count(). Note: I am aware the question asks how many times 'a' appears in a string, but I decided to use a list. **My attempt at the answer** ``` drink = ["f","a","n","t","a"] letter = "a" for letter in drink: print (letter) ``` Obviously there are two A's in fanta, however I used the built-in count function to verify ``` count = drink.count('a') print(count) ``` Indeed there are two A's in fanta :D By the following train of thought, the for loop will look for letter "a" within the list drink, printing out ``` a a ``` or something of the like; however, I just get ``` f a n t a ``` I am guessing because the for loop doesn't understand that I only want it to check for the string letter inside the list. **I would like to know** how to get the desired result, that is, the for loop displaying in some way, shape or form how many a's are in fanta. Thanks for your time.
``` >>> drink = ["f","a","n","t","a"] >>> count = 0 >>> for c in drink: if c == 'a': count += 1 >>> count 2 ```
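The accepted loop can be wrapped in a function and checked against str.count(), as the exercise asks. A Python 3 sketch; it works on the string directly, so no list is needed:

```python
def count_letter(text, letter):
    count = 0
    for c in text:        # a string is iterable, character by character
        if c == letter:
            count += 1
    return count

drink = "fanta"
print(count_letter(drink, "a"))             # 2
assert count_letter(drink, "a") == drink.count("a")
```

The final assert is the "check your answer with count()" step from the assignment.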
``` drink = ["f","a","n","t","a"] letter = "a" for each_char in drink: if each_char == letter: print each_char, ```
Count string values inside a string/list (FOR Loop)
[ "", "python", "python-2.7", "python-3.x", "" ]
I am trying to initialize the camera module in pygame and display video from a usb webcam. This is my code: ``` import pygame import pygame.camera from pygame.camera import * from pygame.locals import * pygame.init() pygame.camera.init() cam = pygame.camera.Camera("/dev/video0",(640,480)) cam.start() image = cam.get_image() ``` Yet i get this error: ``` Traceback (most recent call last): File "C:/Users/Freddie/Desktop/CAMERA/Test1.py", line 7, in <module> pygame.camera.init() File "C:\Python27\lib\site-packages\pygame\camera.py", line 67, in init _camera_vidcapture.init() File "C:\Python27\lib\site-packages\pygame\_camera_vidcapture.py", line 21, in init import vidcap as vc ImportError: No module named vidcap ``` PLS HELP!!! Im on Windows
I met the same problem. The error "ImportError: No module named vidcap" indicates that the Python interpreter didn't find the vidcap module on your machine, so you'd better follow these steps. 1. Download VideoCapture from <http://videocapture.sourceforge.net/> 2. Copy the corresponding version of the DLL (named "vidcap.pyd", in VideoCapture-0.9-5\VideoCapture-0.9-5\Python27\DLLs) to "your python path"\DLLs\ . 3. Restart your script. Done!
The camera module can only be used on linux
python pygame.camera.init() NO vidcapture
[ "", "python", "camera", "usb", "pygame", "webcam", "" ]
I am using T-SQL and I am trying to have a then statement return multiple values so I can search the 'Year' column for multiple years. If the year is greater than 2013, then I want to search the current year and the previous year. So if the year is 2016, I want to search for 2016 AND 2015. This code does not work, but this is what I am trying to accomplish. ``` SELECT * FROM [DB_NAME].[dbo].[TABLE_NAME] WHERE YR_CLMN in ( case when YEAR(GETDATE()) = 2013 then YEAR(GETDATE()) when YEAR(GETDATE()) > 2013 then (YEAR(GETDATE()), YEAR(GETDATE())-1) end ) ``` Thanks in advance!!!
I'm not 100% sure I understand the question, but I believe this will give you what you want: ``` SELECT * FROM [DB_NAME].[dbo].[TABLE_NAME] WHERE YR_CLMN >= (case when YEAR(GETDATE()) > 2013 then YEAR(GETDATE())-1 else YEAR(GETDATE()) end) AND YR_CLMN <= YEAR(GETDATE()) ``` Here's the [SQLFiddle](http://sqlfiddle.com/#!3/81451/1)
Try: ``` SELECT * FROM [DB_NAME].[dbo].[TABLE_NAME] WHERE YR_CLMN in (YEAR(GETDATE()), case when YEAR(GETDATE()) > 2013 then YEAR(GETDATE())-1 else YEAR(GETDATE()) end) ```
SQL - Case statement with multiple then
[ "", "sql", "sql-server-2008", "t-sql", "" ]
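A runnable sketch of the range-based rewrite from the answers above, using SQLite in place of SQL Server, a hypothetical `yearly` table, and a bound `current_year` value standing in for `YEAR(GETDATE())`:

```python
import sqlite3

# Hypothetical stand-in for [DB_NAME].[dbo].[TABLE_NAME].
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yearly (yr_clmn INTEGER, payload TEXT)")
conn.executemany("INSERT INTO yearly VALUES (?, ?)",
                 [(2013, "a"), (2014, "b"), (2015, "c"), (2016, "d")])

current_year = 2016  # stands in for YEAR(GETDATE())
rows = conn.execute(
    """SELECT payload FROM yearly
       WHERE yr_clmn >= CASE WHEN ? > 2013 THEN ? - 1 ELSE ? END
         AND yr_clmn <= ?
       ORDER BY yr_clmn""",
    (current_year, current_year, current_year, current_year),
).fetchall()
print([r[0] for r in rows])  # ['c', 'd'] -- the current year and the one before
```

With `current_year = 2013` the `CASE` collapses the range to a single year, which matches the behaviour the question asks for.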
I have the following query, but it runs slowly in my SQL editor. How can I rewrite it so that it runs faster? --- ``` SELECT year,main_code,name,father_code,main_code || '__' || year AS main_id, (SELECT COUNT(*) FROM GK_main WHERE father_code=sc.main_code AND year= (SELECT MAX(year)FROM SS_job)) childcount FROM GK_main sc WHERE year=(SELECT MAX(year)FROM SS_job) ```
The efficiency depends more on available indexes, rather than the way it is written. You could try this version (without inline subqueries): ``` SELECT sc.year, sc.main_code, sc.name, sc.father_code, sc.main_code || '__' || sc.year AS main_id, NVL(g.childcount, 0) AS childcount FROM GK_main sc LEFT JOIN ( SELECT father_code , COUNT(*) AS childcount FROM GK_main WHERE year = (SELECT MAX(year) FROM SS_job) GROUP BY father_code ) AS g ON g.father_code = sc.main_code WHERE sc.year = (SELECT MAX(year) FROM SS_job) ; ``` But what would benefit the efficiency would be indexes. * Is there an index on `SS_job (year)`? * Is there an index on `GK_main (year, father_code)` or on `GK_main (father_code, year)`? * Is there an index on `GK_main (year, main_code)` or on `GK_main (main_code)`?
Use a join instead of a subquery. ``` SELECT sc.year,sc.main_code,sc.name,sc.father_code,sc.main_code || '__' || sc.year AS main_id, COUNT(F.father_code) AS childcount FROM GK_main sc LEFT JOIN GK_main F ON F.father_code = sc.main_code WHERE year=(SELECT MAX(year)FROM SS_job) GROUP BY sc.year,sc.main_code,sc.name,sc.father_code ``` Not tested and written quickly, so it might contain a mistake, but this should at least save you from evaluating `SELECT MAX(year) FROM SS_job` twice. I would never do `COUNT(*)`, but always choose the column(s) I wish to count.
How to enhance the performance of nested query?
[ "", "sql", "performance", "t-sql", "informix", "" ]
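A toy demonstration of the pre-aggregated LEFT JOIN shape from the accepted answer, on made-up data in SQLite (with `GK_main` itself standing in for `SS_job` in the `MAX(year)` lookup), rather than the real Informix schema:

```python
import sqlite3

# Pre-aggregate child counts once, then LEFT JOIN them back,
# instead of running a correlated subquery per output row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE GK_main (year INTEGER, main_code TEXT, father_code TEXT);
INSERT INTO GK_main VALUES
  (2023, 'A', NULL), (2023, 'B', 'A'), (2023, 'C', 'A'), (2022, 'D', 'A');
""")

rows = conn.execute("""
SELECT sc.main_code, COALESCE(g.childcount, 0) AS childcount
FROM GK_main sc
LEFT JOIN (SELECT father_code, COUNT(*) AS childcount
           FROM GK_main
           WHERE year = (SELECT MAX(year) FROM GK_main)
           GROUP BY father_code) g
       ON g.father_code = sc.main_code
WHERE sc.year = (SELECT MAX(year) FROM GK_main)
ORDER BY sc.main_code
""").fetchall()
print(rows)  # [('A', 2), ('B', 0), ('C', 0)]
```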
I am trying to combine a for loop and a logical operation as below, and I am running into a syntax error. Any input on how to fix this? ``` File "test.py", line 36 for ((num in list) and (num not in handled_list)): ^ SyntaxError: invalid syntax ```
You could also do this using sets: ``` >>> a = [1, 2, 3, 4, 5] >>> b = [3, 5] >>> for num in set(a) ^ set(b): ... print num ... 1 2 4 ```
The `for` statement doesn't support that sort of syntax. The syntax is just `for item in iterable` --- you don't get to specify conditions. Specify your conditions inside the loop: ``` for num in list: if num in handled_list: continue # Do what you want with the elements in list but not in handled_list ``` Or precreate a list (or other iterable) that has just what you want to iterate over.
for loop (conditional) and logical s operation in python
[ "", "python", "" ]
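A minimal sketch of the accepted advice, with hypothetical stand-ins for the question's lists: the `for` statement only names the iterable, and the membership conditions go inside the loop body.

```python
nums = [1, 2, 3, 4, 5]     # hypothetical contents of `list`
handled_list = [2, 4]      # values already handled

result = []
for num in nums:               # no conditions allowed in the for statement itself
    if num in handled_list:
        continue               # skip values that were already handled
    result.append(num)

print(result)  # [1, 3, 5]
```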
I have a SQLAlchemy `Session` object and would like to know whether it is dirty or not. The exact question I would like to (metaphorically) ask the `Session` is: "If at this point I issue a `commit()` or a `rollback()`, is the effect on the database the same or not?". The rationale is this: I want to ask the user whether or not he wants to confirm the changes, but if there are no changes, I would like not to ask anything. Of course I could monitor all the operations that I perform on the `Session` myself and decide whether there were modifications, but because of the structure of my program this would require some quite involved changes. If SQLAlchemy already offers this capability, I'd be glad to take advantage of it. Thanks everybody.
you're looking for a net count of actual flushes that have proceeded for the whole span of the session's transaction; while there are some clues to whether or not this has happened (called the "snapshot"), this structure is just to help with rollbacks and isn't strong referencing. The most direct route to this would be to track "after\_flush" events, since this event only emits if flush were called and also that the flush found state to flush: ``` from sqlalchemy import event import weakref transactions_with_flushes = weakref.WeakSet() @event.listens_for(Session, "after_flush") def log_transaction(session, flush_context): for trans in session.transaction._iterate_parents(): transactions_with_flushes.add(trans) def session_has_pending_commit(session): return session.transaction in transactions_with_flushes ``` edit: here's an updated version that's a lot simpler: ``` from sqlalchemy import event @event.listens_for(Session, "after_flush") def log_transaction(session, flush_context): session.info['has_flushed'] = True def session_has_pending_commit(session): return session.info.get('has_flushed', False) ```
Here is my solution based on @zzzeek's answer and updated comment. I've unit tested it and it seems to play well with rollbacks (a session is clean after issuing a rollback): ``` from sqlalchemy import event from sqlalchemy.orm import Session @event.listens_for(Session, "after_flush") def log_flush(session, flush_context): session.info['flushed'] = True @event.listens_for(Session, "after_commit") @event.listens_for(Session, "after_rollback") def reset_flushed(session): if 'flushed' in session.info: del session.info['flushed'] def has_uncommitted_changes(session): return any(session.new) or any(session.deleted) \ or any([x for x in session.dirty if session.is_modified(x)]) \ or session.info.get('flushed', False) ```
How to check whether SQLAlchemy session is dirty or not
[ "", "python", "sqlalchemy", "" ]
I am trying to run grid.py on libsvm-3.17 using some dataset. I am using the command ``` python grid.py -log2c -5,12,1 -log2v -12,5,1 -v 5 -m 300 <dataset> ``` [Instructions](http://www.bcgsc.ca/downloads/genereg/remcbigdata/miR/TargetMiner/TargetMiner/libsvm-2.88/tools/README) over here. But the console says ``` RuntimeError: get no rate worker local quit. ``` and it dies. Any clues what is missing? The data set I am using is german credit dataset on UCI.
I had the same problem with libsvm 3.17. Somehow, this error pops out even when `grid.py` is run with no additional options. However, when grid.py is called through easy.py the execution of the script is not stopped and you are able to get the best parameters for whatever kernel you want to use. In easy.py, change ``` cmd = '{0} -svmtrain "{1}" -gnuplot "{2}" "{3}"'.format(grid_py, svmtrain_exe, gnuplot_exe, scaled_file) ``` to ``` cmd = '{0} -log2c -5,12,1 -log2g -12,5,1 -v 5 -m 300 -svmtrain "{1}" -gnuplot "{2}" "{3}"'.format(grid_py, svmtrain_exe, gnuplot_exe, scaled_file) ``` and run `easy.py` instead of `grid.py`. You will directly get the prediction model with the best parameters. Hope that helps.
In my case, I saw this error when the indices of my data were out of order. For example, a line in my data looked like: ``` -1 1:10 4:4 2:1 ``` when it needed to look like ``` -1 1:10 2:1 4:4 ``` Is there some way you could have gotten a corrupted dataset? There are quite a few ways this particular error message can occur.
grid.py get no rate on dataset
[ "", "python", "gnuplot", "libsvm", "" ]
I am experimenting with lists and was trying to get the following code segment to display: ``` ---------- ---hello-- ---------- ``` But to do this I need to get the 3 'listSmall's to be independent of one another. Is there a way to do this? ( current output is of course: ``` ---hello-- ---hello-- ---hello-- ``` ) ``` listSmall = ['-','-','-','-','-','-','-','-','-','-',] listBig = [listSmall, listSmall, listSmall] word = 'hello' wordPosX = 3 wordPosY = 2 for i in word: listBig[wordPosY][wordPosX] = i wordPosX = wordPosX + 1 i = 0 while i != 3: print ''.join(listBig[i]) i = i + 1 ```
This is because `list` is mutable. ``` listBig = [listSmall, listSmall, listSmall] ``` makes `listBig` point three times to the same mutable list, so when you change this mutable list through one of these references, you will see the change through all three. You should make three distinct lists: ``` listBig = [ ['-'] * 10 for _ in range(3)] ``` No need for `listSmall` at all. The whole code: ``` listBig = [ ['-'] * 10 for _ in range(3)] word = 'hello' wordPosX, wordPosY = 3, 1 listBig[wordPosY][3: (3+len(word))] = word for v in listBig: print(''.join(v)) ```
Use the `list` constructor to make a shallow copy of `listSmall` ``` listBig = [list(listSmall) for i in range(3)] ``` The other equivalent (less readable IMO) way is using an empty slice: ``` listBig = [listSmall[:] for i in range(3)] ```
Making a list independent of itself(?)
[ "", "python", "list", "" ]
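The difference between the two behaviours in the answers above can be shown directly: three references to one list versus three distinct lists.

```python
small = ['-'] * 10
aliased = [small, small, small]          # three references to ONE list
aliased[1][3] = 'x'
print(aliased[0][3])                     # 'x' -- the change shows up in every row

independent = [['-'] * 10 for _ in range(3)]  # three distinct lists
independent[1][3] = 'x'
print(independent[0][3])                 # '-' -- the other rows are untouched
```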
Here is what my database looks like: table: `conversations` ``` +----+--------+--------+ | id | user_1 | user_2 | +----+--------+--------+ | 1 | 1 | 2 | | 2 | 2 | 3 | | 3 | 1 | 3 | +----+--------+--------+ ``` table: `messages` ``` +----+--------------+------+ | id | conversation | text | +----+--------------+------+ | 1 | 1 | hej | | 2 | 1 | test | | 3 | 2 | doh | | 4 | 2 | hi | | 5 | 3 | :) | | 6 | 3 | :D | +----+--------------+------+ ``` Then when I run the following query: ``` SELECT * FROM `messages` INNER JOIN `conversations` ON `conversations`.`id` = `messages`.`conversation` GROUP BY `conversations`.`id` ORDER BY `messages`.`id` DESC ``` Then I get those out from `messages`: ``` +----+--------------+------+ | id | conversation | text | +----+--------------+------+ | 1 | 1 | hej | | 3 | 2 | doh | | 5 | 3 | :) | +----+--------------+------+ ``` But is it somehow possible to get the messages with the highest `messages.id` from each group, instead of the lowest? EDIT: Here is the output I want from `messages`: ``` +----+--------------+------+ | id | conversation | text | +----+--------------+------+ | 2 | 1 | test | | 4 | 2 | hi | | 6 | 3 | :D | +----+--------------+------+ ``` As those are the `messages` in the same `conversation` with the highest `id`.
``` SELECT * FROM conversations c JOIN messages m ON m.id = ( SELECT id FROM messages mi WHERE mi.conversation = c.id ORDER BY mi.conversation DESC, mi.id DESC LIMIT 1 ) ``` Create an index on `messages (conversation, id)` for this to work fast.
You simply need to use nested query like this: ``` SELECT * FROM Messages WHERE ID IN( SELECT Max(m.ID) FROM Messages m INNER JOIN conversations c ON c.id = m.conversation GROUP BY m.conversation ); ``` Output: ``` | ID | CONVERSATION | TEXT | ---------------------------- | 2 | 1 | test | | 4 | 2 | hi | | 6 | 3 | :D | ``` If you want data from both tables try this: ``` SELECT * FROM Messages m JOIN conversations c ON c.id = m.conversation WHERE m.ID IN ( SELECT MAX(ID) FROM Messages GROUP BY conversation ) GROUP BY m.conversation; ``` Output: ``` | ID | CONVERSATION | TEXT | USER_1 | USER_2 | ---------------------------------------------- | 2 | 1 | test | 1 | 2 | | 4 | 2 | hi | 2 | 3 | | 6 | 3 | :D | 1 | 3 | ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!2/49185/13)
How do I select last message for each conversation?
[ "", "mysql", "sql", "group-by", "greatest-n-per-group", "" ]
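The `WHERE id IN (SELECT MAX(id) ... GROUP BY ...)` pattern from the second answer, run on the question's sample data in SQLite (a stand-in for MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (id INTEGER, conversation INTEGER, text TEXT);
INSERT INTO messages VALUES
  (1, 1, 'hej'), (2, 1, 'test'), (3, 2, 'doh'),
  (4, 2, 'hi'),  (5, 3, ':)'),   (6, 3, ':D');
""")

rows = conn.execute("""
SELECT id, conversation, text FROM messages
WHERE id IN (SELECT MAX(id) FROM messages GROUP BY conversation)
ORDER BY conversation
""").fetchall()
print(rows)  # [(2, 1, 'test'), (4, 2, 'hi'), (6, 3, ':D')]
```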
The below query that I'm executing through SQL Server Management Studio is painfully slow. The input table `tbl_sb12_bhs` has about 40000 records and after an hour only 40 records are processed. What can be changed here to make this run a bit faster? ``` DECLARE @bsrange INT SET @bsrange = 0 WHILE @bsrange <= (SELECT max([p_a_l_out]) FROM [DB001].[FD\f7].[tbl_sb12_bhs]) BEGIN INSERT INTO [FD\f7].tbl_sb13_b_lin1 (aId, p_a_l_out, bs_id, bs_db, bs_tbl, bs_column, Int1, cd1, Hop1, Int2, cd2, Hop2, Int3, cd3, Hop3, Int4, cd4, Hop4, Int5, cd5, Hop5, Int6, cd6, Hop6, Int7, cd7, Hop7, Int8, cd8, Hop8, Int9, cd9, Hop9, Int10, cd10, Hop10, Int11, cd11, Hop11, Int12, cd12, Hop12, Int13, cd13, Hop13, Int14, cd14, Hop14, Int15, cd15, Hop15, Int16, cd16, Hop16) SELECT DISTINCT tbl_sb12_bhs.aId, tbl_sb12_bhs.p_a_l_out, tbl_sb12_bhs.bs_id, tbl_sb12_bhs.bs_db, tbl_sb12_bhs.bs_tbl, tbl_sb12_bhs.bs_column, tbl_rpt_val_pt_crl.pt_el_Int AS Int1, tbl_rpt_val_pt_crl.user_cd AS cd1, tbl_rpt_val_pt_crl.cfk_upel AS Hop1, tbl_rpt_val_pt_crl_1.pt_el_Int AS Int2, tbl_rpt_val_pt_crl_1.user_cd AS cd2, tbl_rpt_val_pt_crl_1.cfk_upel AS Hop2, tbl_rpt_val_pt_crl_2.pt_el_Int AS Int3, tbl_rpt_val_pt_crl_2.user_cd AS cd3, tbl_rpt_val_pt_crl_2.cfk_upel AS Hop3, tbl_rpt_val_pt_crl_3.pt_el_Int AS Int4, tbl_rpt_val_pt_crl_3.user_cd AS cd4, tbl_rpt_val_pt_crl_3.cfk_upel AS Hop4, tbl_rpt_val_pt_crl_4.pt_el_Int AS Int5, tbl_rpt_val_pt_crl_4.user_cd AS cd5, tbl_rpt_val_pt_crl_4.cfk_upel AS Hop5, tbl_rpt_val_pt_crl_5.pt_el_Int AS Int6, tbl_rpt_val_pt_crl_5.user_cd AS cd6, tbl_rpt_val_pt_crl_5.cfk_upel AS Hop6, tbl_rpt_val_pt_crl_6.pt_el_Int AS Int7, tbl_rpt_val_pt_crl_6.user_cd AS cd7, tbl_rpt_val_pt_crl_6.cfk_upel AS Hop7, tbl_rpt_val_pt_crl_7.pt_el_Int AS Int8, tbl_rpt_val_pt_crl_7.user_cd AS cd8, tbl_rpt_val_pt_crl_7.cfk_upel AS Hop8, tbl_rpt_val_pt_crl_8.pt_el_Int AS Int9, tbl_rpt_val_pt_crl_8.user_cd AS cd9, tbl_rpt_val_pt_crl_8.cfk_upel AS Hop9, tbl_rpt_val_pt_crl_9.pt_el_Int AS Int10, 
tbl_rpt_val_pt_crl_9.user_cd AS cd10, tbl_rpt_val_pt_crl_9.cfk_upel AS Hop10, tbl_rpt_val_pt_crl_10.pt_el_Int AS Int11, tbl_rpt_val_pt_crl_10.user_cd AS cd11, tbl_rpt_val_pt_crl_10.cfk_upel AS Hop11, tbl_rpt_val_pt_crl_11.pt_el_Int AS Int12, tbl_rpt_val_pt_crl_11.user_cd AS cd12, tbl_rpt_val_pt_crl_11.cfk_upel AS Hop12, tbl_rpt_val_pt_crl_12.pt_el_Int AS Int13, tbl_rpt_val_pt_crl_12.user_cd AS cd13, tbl_rpt_val_pt_crl_12.cfk_upel AS Hop13, tbl_rpt_val_pt_crl_13.pt_el_Int AS Int14, tbl_rpt_val_pt_crl_13.user_cd AS cd14, tbl_rpt_val_pt_crl_13.cfk_upel AS Hop14, tbl_rpt_val_pt_crl_14.pt_el_Int AS Int15, tbl_rpt_val_pt_crl_14.user_cd AS cd15, tbl_rpt_val_pt_crl_14.cfk_upel AS Hop15, tbl_rpt_val_pt_crl_15.pt_el_Int AS Int16, tbl_rpt_val_pt_crl_15.user_cd AS cd16, tbl_rpt_val_pt_crl_15.cfk_upel AS Hop16 FROM (SELECT DISTINCT pk_a AS aId, p_a_l_out, bs_id, bs_db, bs_tbl, bs_column, hop_pt_id_1, hop_pt_id_2, hop_pt_id_3, hop_pt_id_4, hop_pt_id_5, hop_pt_id_6, hop_pt_id_7, hop_pt_id_8, hop_pt_id_9, hop_pt_id_10, hop_pt_id_11, hop_pt_id_12, hop_pt_id_13, hop_pt_id_14, hop_pt_id_15, hop_pt_id_16 FROM [FD\f7].tbl_sb12_bhs WHERE [p_a_l_out] >= @bsrange AND [p_a_l_out] < ( @bsrange + 1 )) AS tbl_sb12_bhs LEFT JOIN tbl_rpt_val_pt_crl ON tbl_sb12_bhs.hop_pt_id_1 = tbl_rpt_val_pt_crl.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_1 ON tbl_sb12_bhs.hop_pt_id_2 = tbl_rpt_val_pt_crl_1.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_2 ON tbl_sb12_bhs.hop_pt_id_3 = tbl_rpt_val_pt_crl_2.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_3 ON tbl_sb12_bhs.hop_pt_id_4 = tbl_rpt_val_pt_crl_3.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_4 ON tbl_sb12_bhs.hop_pt_id_5 = tbl_rpt_val_pt_crl_4.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_5 ON tbl_sb12_bhs.hop_pt_id_6 = tbl_rpt_val_pt_crl_5.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_6 ON tbl_sb12_bhs.hop_pt_id_7 = tbl_rpt_val_pt_crl_6.sk_el_pt LEFT JOIN 
tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_7 ON tbl_sb12_bhs.hop_pt_id_8 = tbl_rpt_val_pt_crl_7.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_8 ON tbl_sb12_bhs.hop_pt_id_9 = tbl_rpt_val_pt_crl_8.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_9 ON tbl_sb12_bhs.hop_pt_id_10 = tbl_rpt_val_pt_crl_9.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_10 ON tbl_sb12_bhs.hop_pt_id_11 = tbl_rpt_val_pt_crl_10.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_11 ON tbl_sb12_bhs.hop_pt_id_12 = tbl_rpt_val_pt_crl_11.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_12 ON tbl_sb12_bhs.hop_pt_id_13 = tbl_rpt_val_pt_crl_12.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_13 ON tbl_sb12_bhs.hop_pt_id_14 = tbl_rpt_val_pt_crl_13.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_14 ON tbl_sb12_bhs.hop_pt_id_15 = tbl_rpt_val_pt_crl_14.sk_el_pt LEFT JOIN tbl_rpt_val_pt_crl AS tbl_rpt_val_pt_crl_15 ON tbl_sb12_bhs.hop_pt_id_16 = tbl_rpt_val_pt_crl_15.sk_el_pt SET @bsrange = @bsrange + 1 END ```
My best guess is that it's slow because you're doing a number of intensive operations all in one go. Without any sample data it's tough, but I can try to make a few suggestions. From what you said about it only processing 40 records after an hour, it's what's going on inside the loop that's slowing you down. SELECT DISTINCT isn't cheap because it has to compare all the data, and you're comparing quite a lot of columns as well. If you can, it might run quicker if you limit the number of columns to the bare minimum required for a distinct selection then self joining that to the original table. It should be simple enough to test in isolation to the rest of it to make sure you're getting the same results and whether or not it's quicker. Also the more joins you have, the worse the performance is in general... the price we pay for normalisation. Anyway, I would take a step back from it and try to break this down into its smallest units of work and then you can test each one individually until you find the culprit. In doing so, you might think of a much better way to do this. Again, without any sample data this is a difficult one for me to help with.
Well, if you have an index or indexes on the target then SQL will reindex every row. I'd disable any indexes on the target table and then re-enable them when the insert is complete. I'd batch the inserts into ranges of (say) 5k records so any blocking is reduced, or I'd create a temp file as a result of the select and bcp in the results, because you're doing that horrendous set of left joins each time prior to one record insert. SQL just can't optimise more than about 7 or 8 left or right joins. My guess is that there are little or no indexes on the tables being selected from, which means a table scan for each join, or around 17 table scans for each row inserted. Sorry, but this approach is wrong at every stage. Or you could get your boss to buy you a datacentre...
SQL Server query is running very slow
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm making a silly little game to learn Python and I'm having an issue creating a goblin creature with `__init__`. Here's the generic creature class constructor: ``` class Creature(object): def __init__(self,str,dex,wis,n): ``` and here is the call to it: ``` goblin = Creature(randint(1,2),randint(1,2),(randint(1,2),"Goblin")) ``` and here is the error I'm getting: ``` TypeError: __init__() takes exactly 5 arguments (4 given) ``` What is also weird is that I create the Player creature in much the same way and yet get no error: ``` player = Player(str,dex,wis,name) ``` `Player` is a child class of `Creature`.
You've given your third parameter as a tuple: ``` goblin = Creature(randint(1,2),randint(1,2),(randint(1,2),"Goblin")) ``` I'm presuming you wanted these separate, so just remove the parentheses: ``` goblin = Creature(randint(1,2),randint(1,2),randint(1,2),"Goblin") ``` The fifth parameter is `self`, which is passed automatically.
It has nothing to do with `self` you have four params and passing three: ``` randint(1,2), randint(1,2), (randint(1,2),"Goblin") ```
Python __init__(a,b,c,d) not recognizing "self" as argument
[ "", "python", "class", "init", "" ]
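The mistake can be reproduced in isolation: wrapping the last two arguments in parentheses passes one tuple, so `__init__` sees one argument too few (the counts in the error message include `self`). Literal stat values stand in for the `randint` calls here.

```python
class Creature(object):
    def __init__(self, strength, dex, wis, name):  # four params besides self
        self.name = name

try:
    Creature(1, 2, (1, "Goblin"))   # third argument is ONE tuple -> too few args
    raised = False
except TypeError:
    raised = True
print(raised)  # True

goblin = Creature(1, 2, 1, "Goblin")  # four separate arguments: works
print(goblin.name)  # Goblin
```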
This function produces the sum of the first n values, the sum of the second n values...etc. Here is the function: ``` def collect_sum(iterable,n): for e in range(1,len(iterable)+1): if e%n==0: yield sum(iterable[e-n:e]) for i in collect_sum(range(1,21),5): print(i,end=' ') ``` This is supposed to return 15, 40, 65. When I use a list comprehension, it returns 0, 30, 40.
``` # generator version def collect_sum(iterable,n): for e in range(1,len(iterable)+1): if e%n==0: yield sum(iterable[e-n:e]) # list comprehension version def collect_sum(iterable,n): return [sum(iterable[e-n:e]) for e in range(1,len(iterable)+1) if e%n==0] for i in collect_sum(range(1,21),5): print(i,end=' ') ```
``` def collect_sum(i,n): return (sum(g) for (_,g ) in groupby(i,key=lambda _,c=count():floor(next(c)/n))) for v in collect_sum(range(1,21),5): print(v) ``` Produces: ``` 15 40 65 90 >>> ```
How do I convert this into a comprehension? (Python)
[ "", "python", "list-comprehension", "python-2.x", "" ]
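Running the list-comprehension version from the first answer on the question's input (note that `range(1, 21)` holds four complete groups of five, so there are four sums, not three):

```python
def collect_sum(iterable, n):
    seq = list(iterable)  # materialise so slicing works on any iterable
    return [sum(seq[e - n:e]) for e in range(1, len(seq) + 1) if e % n == 0]

result = collect_sum(range(1, 21), 5)
print(result)  # [15, 40, 65, 90]
```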
Say I have a 1D array: ``` import numpy as np my_array = np.arange(0,10) my_array.shape (10, ) ``` In Pandas I would like to create a DataFrame with only one row and `10` columns using this array. For example: ``` import pandas as pd import random, string # Random list of characters to be used as columns cols = [random.choice(string.ascii_uppercase) for x in range(10)] ``` But when I try: ``` pd.DataFrame(my_array, columns = cols) ``` I get: ``` ValueError: Shape of passed values is (1,10), indices imply (10,10) ``` I presume this is because Pandas expects a 2D array, and I have a (flat) 1D array. Is there a way to inflate my 1D array into a 2D array or have Pandas use a 1D array in the creation of the dataframe? Note: I am using the latest stable version of Pandas (0.11.0)
Your value array has length 9 (values from 1 till 9), and your `cols` list has length 10. I don't understand your error message; based on your code, I get: ``` ValueError: Shape of passed values is (1, 9), indices imply (10, 9) ``` which makes sense. Try: ``` my_array = np.arange(10).reshape(1,10) cols = [random.choice(string.ascii_uppercase) for x in range(10)] pd.DataFrame(my_array, columns=cols) ``` which results in: ``` F H L N M X B R S N 0 0 1 2 3 4 5 6 7 8 9 ```
Either these should do it: ``` my_array2 = my_array[None] # same as myarray2 = my_array[numpy.newaxis] ``` or ``` my_array2 = my_array.reshape((1,10)) ```
Inflating a 1D array into a 2D array in numpy
[ "", "python", "numpy", "pandas", "" ]
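A minimal sketch of the two reshaping routes from the second answer, assuming NumPy is available; the resulting `(1, 10)` array is the single-row shape that `pd.DataFrame` expects:

```python
import numpy as np

arr = np.arange(10)            # shape (10,)
row_a = arr.reshape(1, 10)     # explicit reshape to one row
row_b = arr[np.newaxis]        # same effect: add a new leading axis

print(row_a.shape, row_b.shape)  # (1, 10) (1, 10)
```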
I have 2 models - Restaurant and Feature. They are connected via a has\_and\_belongs\_to\_many relationship. The gist of it is that you have restaurants with many features like delivery, pizza, sandwiches, salad bar, vegetarian option,… So now when the user wants to filter the restaurants and, let's say, he checks pizza and delivery, I want to display all the restaurants that have both features; pizza, delivery and maybe some more, but it HAS TO HAVE pizza AND delivery. If I do a simple `.where('features IN (?)', params[:features])` I (of course) get the restaurants that have either one - pizza, delivery, or both - which is not at all what I want. My SQL/Rails knowledge is kinda limited since I'm new to this, but I asked a friend and now I have this huge SQL that gets the job done: ``` Restaurant.find_by_sql(['SELECT restaurant_id FROM ( SELECT features_restaurants.*, ROW_NUMBER() OVER(PARTITION BY restaurants.id ORDER BY features.id) AS rn FROM restaurants JOIN features_restaurants ON restaurants.id = features_restaurants.restaurant_id JOIN features ON features_restaurants.feature_id = features.id WHERE features.id in (?) ) t WHERE rn = ?', params[:features], params[:features].count]) ``` So my question is: is there a better - more Rails even - way of doing this? How would you do it? Oh BTW I'm using Rails 4 on Heroku so it's a Postgres DB.
How much data is in your `features` table? Is it just a table of ids and names? If so, and you're willing to do a little denormalization, you can do this much more easily by encoding the features as a text array on `restaurant`. With this scheme your queries boil down to ``` select * from restaurants where restaurants.features @> ARRAY['pizza', 'delivery'] ``` If you want to maintain your features table because it contains useful data, you can store the array of feature ids on the restaurant and do a query like this: ``` select * from restaurants where restaurants.feature_ids @> ARRAY[5, 17] ``` If you don't know the ids up front, and want it all in one query, you should be able to do something along these lines: ``` select * from restaurants where restaurants.feature_ids @> ( select id from features where name in ('pizza', 'delivery') ) as matched_features ``` That last query might need some more consideration... Anyways, I've actually got a pretty detailed article written up about [Tagging in Postgres and ActiveRecord](http://monkeyandcrow.com/blog/tagging_with_active_record_and_postgres/) if you want some more details.
This is an example of a set-iwthin-sets query. I advocate solving these with `group by` and `having`, because this provides a general framework. Here is how this works in your case: ``` select fr.restaurant_id from features_restaurants fr join features f on fr.feature_id = f.feature_id group by fr.restaurant_id having sum(case when f.feature_name = 'pizza' then 1 else 0 end) > 0 and sum(case when f.feature_name = 'delivery' then 1 else 0 end) > 0 ``` Each condition in the `having` clause is counting for the presence of one of the features -- "pizza" and "delivery". If both features are present, then you get the restaurant\_id.
Filtering model with HABTM relationship
[ "", "sql", "ruby-on-rails", "ruby", "postgresql", "has-and-belongs-to-many", "" ]
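The set-within-sets pattern from the second answer, demonstrated on a hypothetical dataset in SQLite (where a comparison already evaluates to 0/1, so the `CASE WHEN ... THEN 1 ELSE 0 END` can be abbreviated to the bare comparison inside `SUM`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE features (id INTEGER, name TEXT);
CREATE TABLE features_restaurants (restaurant_id INTEGER, feature_id INTEGER);
INSERT INTO features VALUES (1, 'pizza'), (2, 'delivery'), (3, 'salad bar');
INSERT INTO features_restaurants VALUES
  (10, 1), (10, 2), (10, 3),   -- has pizza AND delivery (and more)
  (11, 1),                     -- pizza only
  (12, 2);                     -- delivery only
""")

rows = conn.execute("""
SELECT fr.restaurant_id
FROM features_restaurants fr
JOIN features f ON f.id = fr.feature_id
GROUP BY fr.restaurant_id
HAVING SUM(f.name = 'pizza') > 0
   AND SUM(f.name = 'delivery') > 0
""").fetchall()
print(rows)  # [(10,)] -- only restaurant 10 has both features
```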
I have a list that contains decimal numbers, however in this example I use ints: ``` my_list = [40, 60, 100, 240, ...] ``` I want to print each element of the list in reverse order, and afterwards I want to print a second line where every value is divided by 2, then a third line where the previous int is divided by 3, and so on... Output should be: ``` 240 120 60 36 120 60 30 18 #previous number divided by 2 40 20 10 6 #previous number divided by 3 ... ... ... ... #previous number divided by 4 ... ``` My solution is ugly: I can make a slice and reverse that list and make n for loops and append the result in a new list. But there must be a better way. How would you do that?
I'd write a generator to yield the lists in turn, which is more appropriate here: ``` def divider(lst,n): lst = [float(x) for x in lst[::-1]] for i in range(1,n+1): lst = [x/i for x in lst] yield lst ``` If we want to make it slightly more efficient, we could factor out the first iteration (division by 1) and yield it separately: ``` def divider(lst,n): lst = [float(x) for x in reversed(lst)] yield lst for i in range(2,n+1): lst = [x/i for x in lst] yield lst ``` Note that in this context there isn't a whole lot of difference between `lst[::-1]` and `reversed(lst)`. The former is typically a little faster, but the latter is a little more memory efficient. Choose according to your constraints. --- Demo: ``` >>> def divider(lst,n): ... lst = [float(x) for x in reversed(lst)] ... yield lst ... for i in range(2,n+1): ... lst = [x/i for x in lst] ... yield lst ... >>> for lst in divider([40, 60, 100, 240],3): ... print lst ... [240.0, 100.0, 60.0, 40.0] [120.0, 50.0, 30.0, 20.0] [40.0, 16.666666666666668, 10.0, 6.666666666666667] ```
To print the columnar the output you want, use [format strings](http://docs.python.org/2/library/string.html#format-specification-mini-language). You may have to tweak this to get the alignment and precision you want for your actual data: ``` def print_list(L): print ' '.join('{:>3d}'.format(i) for i in L) ``` Normally to do the division we could use a function with recursion, but we can also use a simple loop where each iteration produces the list that is worked on next: ``` my_list = [40, 60, 100, 240, 36, 60, 120, 240] maxdiv = 20 baselist = list(reversed(my_list)) for div in range(1, maxdiv+1): baselist = [i/div for i in baselist] print_list(baselist) ``` Output: ``` 240 120 60 36 240 100 60 40 120 60 30 18 120 50 30 20 40 20 10 6 40 16 10 6 10 5 2 1 10 4 2 1 2 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... ```
loop through a list and use previous elements
[ "", "python", "" ]
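The same idea as the accepted generator, unrolled into a plain loop where each pass divides the previous row:

```python
my_list = [40, 60, 100, 240]

row = [float(x) for x in reversed(my_list)]
lines = []
for div in range(1, 4):        # divide the PREVIOUS row by 1, 2, 3, ...
    row = [x / div for x in row]
    lines.append(row)

for line in lines:
    print(line)
# [240.0, 100.0, 60.0, 40.0]
# [120.0, 50.0, 30.0, 20.0]
# [40.0, 16.666666666666668, 10.0, 6.666666666666667]
```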
I have a file in which lines are separated using a delimiter, say `.`. I want to read this file line by line, where lines are delimited by `.` instead of newline. One way is: ``` f = open('file','r') for line in f.read().strip().split('.'): #....do some work f.close() ``` But this is not memory efficient if my file is too large. Instead of reading the whole file at once, I want to read it line by line. `open` supports a parameter 'newline', but this parameter only takes `None, '', '\n', '\r', and '\r\n'` as input, as mentioned [here](http://docs.python.org/2/library/io.html). **Is there any way to read a file line by line efficiently, but based on a pre-specified delimiter?**
You could use a generator: ``` def myreadlines(f, newline): buf = "" while True: while newline in buf: pos = buf.index(newline) yield buf[:pos] buf = buf[pos + len(newline):] chunk = f.read(4096) if not chunk: yield buf break buf += chunk with open('file') as f: for line in myreadlines(f, "."): print line ```
Here is a more efficient answer, using `FileIO` and `bytearray` that I used for parsing a PDF file - ``` import io import re # the end-of-line chars, separated by a `|` (logical OR) EOL_REGEX = b'\r\n|\r|\n' # the end-of-file char EOF = b'%%EOF' def readlines(fio): buf = bytearray(4096) while True: fio.readinto(buf) try: yield buf[: buf.index(EOF)] except ValueError: pass else: break for line in re.split(EOL_REGEX, buf): yield line with io.FileIO("test.pdf") as fio: for line in readlines(fio): ... ``` The above example also handles a custom EOF. If you don't want that, use this: ``` import io import os import re # the end-of-line chars, separated by a `|` (logical OR) EOL_REGEX = b'\r\n|\r|\n' def readlines(fio, size): buf = bytearray(4096) while True: if fio.tell() >= size: break fio.readinto(buf) for line in re.split(EOL_REGEX, buf): yield line size = os.path.getsize("test.pdf") with io.FileIO("test.pdf") as fio: for line in readlines(fio, size): ... ```
Reading a file with a specified delimiter for newline
[ "", "python", "file-io", "python-2.7", "io", "" ]
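The accepted generator, lightly restated and exercised on an in-memory stream; `io.StringIO` stands in for a real file object here, and a trailing record without a final delimiter is still yielded:

```python
import io

def read_records(f, delim='.'):
    """Yield chunks of f split on delim, reading 4096 chars at a time."""
    buf = ''
    while True:
        chunk = f.read(4096)
        if not chunk:
            if buf:
                yield buf        # trailing record with no final delimiter
            return
        buf += chunk
        while delim in buf:
            pos = buf.index(delim)
            yield buf[:pos]
            buf = buf[pos + len(delim):]

src = io.StringIO('first record.second record.third')
print(list(read_records(src)))  # ['first record', 'second record', 'third']
```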
I've just started studying Python, and I'm an absolute newbie. I'm starting to learn about functions, and I wrote this simple script: ``` def add(a,b): return a + b print "The first number you want to add?" a = raw_input("First no: ") print "What's the second number you want to add?" b = raw_input("Second no: ") result = add(a, b) print "The result is: %r." % result ``` The script runs OK, but the result won't be a sum. I.e., if I enter 5 for 'a' and 6 for 'b', the result will not be '11', but 56. As in: ``` The first number you want to add? First no: 5 What's the second number you want to add? Second no: 6 The result is: '56'. ``` Any help would be appreciated.
`raw_input` returns string, you need to convert it to int ``` def add(a,b): return a + b print "The first number you want to add?" a = int(raw_input("First no: ")) print "What's the second number you want to add?" b = int(raw_input("Second no: ")) result = add(a, b) print "The result is: %r." % result ``` Output: ``` The first number you want to add? First no: 5 What's the second number you want to add? Second no: 6 The result is: 11. ```
You need to convert the strings to ints to add them, otherwise `+` will just perform string concatenation since `raw_input` returns *raw* input (a string): ``` result = add(int(a), int(b)) ```
Python 'add' function issue: why won't this work?
[ "", "python", "function", "addition", "" ]
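The failure and the fix side by side, with literal strings standing in for what `raw_input` would return: `raw_input` hands back strings, so `+` concatenates until the values are converted.

```python
a = "5"   # raw_input always returns strings,
b = "6"   # even when the user types digits

print(a + b)                 # '56' -- + on strings concatenates
total = int(a) + int(b)      # convert first, then add
print(total)                 # 11
```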
Very quick and easy homework question. I have it running ok but I think there's a better way to do it. A more Pythonic way. Here's my code to recursively decrement each element of a list by 1. ``` l = range(30) def recurseDecrMap(l, x = []): if len(l) == 0: return [] else: x.append(l[0] -1) recurseDecrMap(l[1:], x) return x ``` So thanks for any input. I'm trying to learn to do better recursion. Having trouble getting the knack of it.
You can use only one argument, in my opinion it is simpler: ``` def recurseDecrMap(l): if not l: return [] else: return [l[0]-1] + recurseDecrMap(l[1:]) ``` But as @jamylak pointed out, the complexity of this algorithm is O(N^2), since `l[1:]` creates a new list with references to the rest of the items in the list. If you need efficiency, I'd recommend you using list comprehensions ([Haidro's answer](https://stackoverflow.com/questions/16257858/recursively-decrment-a-list-by-1/16257907#16257907)), but I suppose it is not a priority if you want it only for learning purposes.
Probably *less* pythonic, but there: ``` def recurseDecrMap(l): return [l[0]-1] + recurseDecrMap(l[1:]) if l else [] ```
Recursively decrement a list by 1
[ "", "python", "recursion", "" ]
I tried to research this command, but I can't seem to find a concrete explanation of it. What is "case when index"? What does it do? How does it differ from a plain case when? When is it used? Is this available in other RDBMSs or is it Teradata-specific? I would really appreciate it if you could provide some examples as well. Thank you so much. ``` sel CASE WHEN .00 = 0 THEN 0||'.' ELSE .00 END ,CASE WHEN INDEX (.00,'.') = 0 THEN 0||'.' ELSE .00 END ``` Result: ``` 0. .00 ```
Index() is a Teradata function. You can read about it [here](http://www.sqlines.com/teradata/functions/index). You can do the same sort of thing with other database engines, but not with that syntax.
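For instance, SQLite spells the same "position of a substring, 0 if absent" function `instr`, which you can check from Python's standard library — a small sketch mirroring the question's example (the values are passed as strings here for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# instr(haystack, needle) returns the 1-based position, or 0 if not found --
# the same contract the Teradata INDEX() call relies on.
row = conn.execute(
    "SELECT CASE WHEN instr('.00', '.') = 0 THEN '0' || '.' ELSE '.00' END"
).fetchone()
print(row[0])  # -> .00  ('.' is found at position 1, so the ELSE branch fires)
```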
`CASE` is supported by both Oracle and SQL Server. The functions to find string position are different. * In Oracle, use `INSTR` and put the string to search for second: ``` INSTR('abcd', 'b') ``` * In SQL Server, use `CHARINDEX` and put the string to search for first: ``` CHARINDEX('b', 'abcd') ``` The return value for each of the above will be `2`. In default installations, the Oracle function will be case sensitive and the SQL Server search won't, but it's best not to assume for either.
How does a "case when index" statement compare to a "case index" statement?
[ "", "sql", "sql-server", "oracle", "case", "teradata", "" ]
I'm trying to insert an integer, a string and a list as a single record into a database, but am hitting an error. Here is my code: ``` values=nprnd.randint(10, size=48) #NUMPY ARRAY valuelist= map(None, values); #Convert to list guid=''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(20)) #Generate guid for x in range(4): var_string = ', '.join('?' * 48) valuelist.insert(0,x) valuelist.insert(0,ent_guid) #50 columns in table, guid, x and 48 randomly generated values query_string = 'INSERT INTO schema1 VALUES (%s,%d,%s);' % (guid, x, var_string) cursor.execute(query_string, valuelist) ``` I keep getting an error saying: ``` Traceback (most recent call last): File "script.py", line 19, in <module> cursor.execute(query_string, valuelist) File "/Library/Python/2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.8-intel.egg/MySQLdb/cursors.py", line 184, in execute query = query % db.literal(args) ``` I know the cause of this error (even SO has a few questions on the same error), but none of the solutions I've tried have solved it for me. Any help would be appreciated.
I believe MySQL-python uses printf format codes ('%s' for example) instead of '?' for placeholders. Try setting `var_string` like this: ``` var_string = ', '.join(['%s'] * 48) ``` I also agree with [Endophage](https://stackoverflow.com/users/432193/endophage) that `valuelist` appears to be too long. I don't think you need to insert `al` and `ent_guid`. You also need to put quotes around the guid when you insert it: ``` query_string = 'INSERT INTO schema1 VALUES (\'%s\',%d,%s);' % (guid, x, var_string) ```
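To see the "one placeholder per value" logic in isolation, here is a sketch against the standard library's sqlite3 (which uses `?` placeholders; with MySQLdb you would use `'%s'` instead, but the counting is identical — the table layout below is invented for the demo):

```python
import sqlite3

values = ["guid-123", 0] + list(range(48))  # 50 values -> need 50 placeholders
placeholders = ", ".join(["?"] * len(values))

conn = sqlite3.connect(":memory:")
cols = ", ".join("c%d" % i for i in range(len(values)))
conn.execute("CREATE TABLE schema1 (%s)" % cols)
# The driver quotes/escapes each value itself -- no manual quoting of the guid,
# and no interpolating values into the SQL string.
conn.execute("INSERT INTO schema1 VALUES (%s)" % placeholders, values)
print(conn.execute("SELECT COUNT(*) FROM schema1").fetchone()[0])  # -> 1
```

Building the placeholder string from `len(values)` (rather than a hard-coded 48) keeps the counts in sync when you prepend extra values.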
After you generate your 48 '?'s you insert 2 more elements into valuelist, have you taken those into account? The following code will be more robust: ``` values=nprnd.randint(10, size=48) #NUMPY ARRAY valuelist= map(None, values); #Convert to list guid=''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(20)) #Generate guid for x in range(4): valuelist.insert(0,al) valuelist.insert(0,ent_guid) # moved this line and using len(valuelist) var_string = ', '.join('?' * len(valuelist)) #50 coloums in table, guid, x and 48 randomly generated values query_string = 'INSERT INTO schema1 VALUES (%s,%d,%s);' % (guid, x, var_string) cursor.execute(query_string, valuelist) ``` **Update:** From you comment below, it sounds like you're trying to double insert the guid and x values, therefore, change the query\_string assignment (with the other changes I also made above) to: ``` query_string = 'INSERT INTO schema1 VALUES (%s);' % (var_string) ``` This is safer than your current string interpolation as `cursor.execute` will ensure the values are appropriately escaped.
TypeError: not all arguments converted during string formatting when inserting with parameters
[ "", "python", "mysql", "database", "" ]
**Update: starting with version 0.20.0, pandas cut/qcut DOES handle date fields. See [What's New](https://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0200-enhancements-other) for more.** > pd.cut and pd.qcut now support datetime64 and timedelta64 dtypes (GH14714, GH14798) **Original question:** Pandas cut and qcut functions are great for 'bucketing' continuous data for use in pivot tables and so forth, but I can't see an easy way to get datetime axes in the mix. Frustrating since pandas is so great at all the time-related stuff! Here's a simple example: ``` def randomDates(size, start=134e7, end=137e7): return np.array(np.random.randint(start, end, size), dtype='datetime64[s]') df = pd.DataFrame({'ship' : randomDates(10), 'recd' : randomDates(10), 'qty' : np.random.randint(0,10,10), 'price' : 100*np.random.random(10)}) df price qty recd ship 0 14.723510 3 2012-11-30 19:32:27 2013-03-08 23:10:12 1 53.535143 2 2012-07-25 14:26:45 2012-10-01 11:06:39 2 85.278743 7 2012-12-07 22:24:20 2013-02-26 10:23:20 3 35.940935 8 2013-04-18 13:49:43 2013-03-29 21:19:26 4 54.218896 8 2013-01-03 09:00:15 2012-08-08 12:50:41 5 61.404931 9 2013-02-10 19:36:54 2013-02-23 13:14:42 6 28.917693 1 2012-12-13 02:56:40 2012-09-08 21:14:45 7 88.440408 8 2013-04-04 22:54:55 2012-07-31 18:11:35 8 77.329931 7 2012-11-23 00:49:26 2012-12-09 19:27:40 9 46.540859 5 2013-03-13 11:37:59 2013-03-17 20:09:09 ``` To bin by groups of price or quantity, I can use cut/qcut to bucket them: ``` df.groupby([pd.cut(df['qty'], bins=[0,1,5,10]), pd.qcut(df['price'],q=3)]).count() price qty recd ship qty price (0, 1] [14.724, 46.541] 1 1 1 1 (1, 5] [14.724, 46.541] 2 2 2 2 (46.541, 61.405] 1 1 1 1 (5, 10] [14.724, 46.541] 1 1 1 1 (46.541, 61.405] 2 2 2 2 (61.405, 88.44] 3 3 3 3 ``` But I can't see any easy way of doing the same thing with my 'recd' or 'ship' date fields. For example, generate a similar table of counts broken down by (say) monthly buckets of recd and ship. 
It seems like resample() has all of the machinery to bucket into periods, but I can't figure out how to apply it here. The buckets (or levels) in the 'date cut' would be equivalent to a pandas.PeriodIndex, and then I want to label each value of df['recd'] with the period it falls into? So the kind of output I'm looking for would be something like: ``` ship recv count 2011-01 2011-01 1 2011-02 3 ... ... 2011-02 2011-01 2 2011-02 6 ... ... ... ``` More generally, I'd like to be able to mix and match continuous or categorical variables in the output. Imagine df also contains a 'status' column with red/yellow/green values, then maybe I want to summarize counts by status, price bucket, ship and recd buckets, so: ``` ship recv price status count 2011-01 2011-01 [0-10) green 1 red 4 [10-20) yellow 2 ... ... ... 2011-02 [0-10) yellow 3 ... ... ... ... ``` As a bonus question, what's the simplest way to modify the groupby() result above to just contain a single output column called 'count'?
Here's a solution using pandas.PeriodIndex (caveat: PeriodIndex doesn't seem to support time rules with a multiple > 1, such as '4M'). I think the answer to your bonus question is `.size()`. ``` In [49]: df.groupby([pd.PeriodIndex(df.recd, freq='Q'), ....: pd.PeriodIndex(df.ship, freq='Q'), ....: pd.cut(df['qty'], bins=[0,5,10]), ....: pd.qcut(df['price'],q=2), ....: ]).size() Out[49]: qty price 2012Q2 2013Q1 (0, 5] [2, 5] 1 2012Q3 2013Q1 (5, 10] [2, 5] 1 2012Q4 2012Q3 (5, 10] [2, 5] 1 2013Q1 (0, 5] [2, 5] 1 (5, 10] [2, 5] 1 2013Q1 2012Q3 (0, 5] (5, 8] 1 2013Q1 (5, 10] (5, 8] 2 2013Q2 2012Q4 (0, 5] (5, 8] 1 2013Q2 (0, 5] [2, 5] 1 ```
Just need to set the index of the field you'd like to resample by, here's some examples ``` In [36]: df.set_index('recd').resample('1M',how='sum') Out[36]: price qty recd 2012-07-31 64.151194 9 2012-08-31 93.476665 7 2012-09-30 94.193027 7 2012-10-31 NaN NaN 2012-11-30 NaN NaN 2012-12-31 12.353405 6 2013-01-31 NaN NaN 2013-02-28 129.586697 7 2013-03-31 NaN NaN 2013-04-30 NaN NaN 2013-05-31 211.979583 13 In [37]: df.set_index('recd').resample('1M',how='count') Out[37]: 2012-07-31 price 1 qty 1 ship 1 2012-08-31 price 1 qty 1 ship 1 2012-09-30 price 2 qty 2 ship 2 2012-10-31 price 0 qty 0 ship 0 2012-11-30 price 0 qty 0 ship 0 2012-12-31 price 1 qty 1 ship 1 2013-01-31 price 0 qty 0 ship 0 2013-02-28 price 2 qty 2 ship 2 2013-03-31 price 0 qty 0 ship 0 2013-04-30 price 0 qty 0 ship 0 2013-05-31 price 3 qty 3 ship 3 dtype: int64 ```
What's the equivalent of cut/qcut for pandas date fields?
[ "", "python", "pandas", "" ]
I'm trying to make a function that will return `True` if the given (x,y) point is inside a convex polygon. I'm trying to make it without numpy or any similar imports, just pure python code. I've already found a sample solution, which seems OK at first sight, but it's not working correctly, and I can't figure out why. The code is as follows: ``` def point_in_poly(x,y,poly): n = len(poly) inside = False p1x,p1y = poly[0] for i in range(n+1): p2x,p2y = poly[i % n] if y > min(p1y,p2y): if y <= max(p1y,p2y): if x <= max(p1x,p2x): if p1y != p2y: xints = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x if p1x == p2x or x <= xints: inside = not inside p1x,p1y = p2x,p2y return inside ``` If I test it for (9,9), for the following polygon, it gives me `True`: ``` polygon = [(0,10),(10,10),(10,0),(0,0)] point_x = 9 point_y = 9 print point_in_poly(point_x,point_y,polygon) ``` But when I change the order of the points of the polygon, for the same point, it gives me `False`: ``` polygon = [(0,0), (0,10), (10,0), (10,10)] point_x = 9 point_y = 9 print point_in_poly(point_x,point_y,polygon) ``` Anybody knows the reason? Thanks!
The particular case you are having problems with is special: polygon = [(0,0), (0,10), (10,0), (10,10)] Changing the order of points in a polygon can have a significant impact on algorithms. If you draw your polygon on a graph you'll see you have a horizontal hourglass shape. The polygon border overlaps itself. In geospatial analysis this overlap is not allowed because visually and logically you now have two closed polygons with a common intersection point. By the way, most geospatial software doesn't deal well with triangles either. In this case the point at 9,9 will trick the ray casting algorithm used in your method above because it can easily cross the doubled-over polygon boundary twice. Please run the following code to see what is going on. (9,9) is on the line and this algorithm doesn't account for it. (5,8) is way outside: ``` import turtle as t polygon = [(0,0), (0,100), (100,0), (100,100)] t.goto(0,0) fp = None for p in polygon: t.goto(p) if not fp: fp=p t.goto(fp) t.up() t.goto(90,90) t.write("90,90") t.dot(10) t.goto(50,80) t.write("50,80") t.dot(10) t.done() ``` This code handles the (9,9) edge case: <http://geospatialpython.com/2011/08/point-in-polygon-2-on-line.html> ![The code above draws this image.](https://i.stack.imgur.com/xYNRu.png)
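If you want the on-edge case decided explicitly rather than left to ray-casting luck, one option is to test boundary membership before casting the ray. This is my own stdlib-only sketch (not the linked blog post's code), kept close to the question's function:

```python
def point_in_poly(x, y, poly, on_edge=True):
    """Ray casting with an explicit boundary check.

    Returns `on_edge` for points exactly on an edge, otherwise the
    usual even-odd crossing result.
    """
    n = len(poly)
    inside = False
    for i in range(n):
        (p1x, p1y), (p2x, p2y) = poly[i], poly[(i + 1) % n]
        # Boundary test: collinear with the edge and inside its bounding box.
        if ((p2x - p1x) * (y - p1y) == (p2y - p1y) * (x - p1x)
                and min(p1x, p2x) <= x <= max(p1x, p2x)
                and min(p1y, p2y) <= y <= max(p1y, p2y)):
            return on_edge
        if (p1y > y) != (p2y > y):
            xints = (y - p1y) * (p2x - p1x) / float(p2y - p1y) + p1x
            if x < xints:
                inside = not inside
    return inside

square = [(0, 10), (10, 10), (10, 0), (0, 0)]
hourglass = [(0, 0), (0, 10), (10, 0), (10, 10)]
print(point_in_poly(9, 9, square))     # -> True
print(point_in_poly(10, 9, square))    # -> True (on the right edge)
print(point_in_poly(11, 9, square))    # -> False
print(point_in_poly(9, 9, hourglass))  # -> True (lies on the closing y=x edge)
```

Note that with the hourglass ordering (9,9) sits exactly on the closing edge, which is why the even-odd rule alone gives inconsistent answers for it.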
the point `9,0` is not inside the polygon `[(0,10),(10,10),(10,0),(0,0)]` its on the edge. Points exactly on the edge can be considered in or out depending on the specifics of your algorithm.
Point in convex polygon
[ "", "python", "polygon", "point-in-polygon", "" ]
Suppose I have the following list ``` l = [ {'id':1, 's':1.0 }, {'id':3, 's': 0.6}, {'id':1, 's': 1.5} ] ``` I would like to remove elements with duplicate `'id'` value, based on their `'s'` value. In the example above, I would like to discard the first element since both the first and third elements have `'id'==1`, and since `l[0]['s'] < l[2]['s']` I would like `l[0]` to be discarded. Therefore the output I expect is (I do **not** care about the order of the elements in the output list) ``` [ {'id':1, 's':1.5}, {'id':3, 's':0.6} ] ```
I'd use a mapping to track ids and their scores: ``` from collections import defaultdict id_to_scores = defaultdict(list) for entry in l: id_to_scores[entry['id']].append(entry['s']) output = [{'id': k, 's': max(v)} for k, v in id_to_scores.iteritems()] ``` Use `.items()` instead if you are using Python 3. Result (ordering changed because a `dict` has no fixed ordering): ``` >>> [{'id': k, 's': max(v)} for k, v in id_to_scores.iteritems()] [{'s': 1.5, 'id': 1}, {'s': 0.6, 'id': 3}] ``` This rebuilds the dictionaries. If there are other keys involved, you need to store the whole dictionary for each `id`, not just the score: ``` per_id = defaultdict(list) for entry in l: per_id[entry['id']].append(entry) output = [max(v, key=lambda d: d['s']) for v in per_id.itervalues()] ```
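Another way to keep the whole dictionary (not just the score) is a single pass over a plain dict — my own sketch of the same idea, without rebuilding anything:

```python
l = [{'id': 1, 's': 1.0}, {'id': 3, 's': 0.6}, {'id': 1, 's': 1.5}]

best = {}
for entry in l:
    seen = best.get(entry['id'])
    # Keep the entry with the highest 's' for each id.
    if seen is None or entry['s'] > seen['s']:
        best[entry['id']] = entry

result = list(best.values())
print(result)  # the two winners; order is not guaranteed (as the question allows)
```

Because the original dict objects are stored, this also works unchanged if the entries carry extra keys besides `'id'` and `'s'`.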
Using `collections.defaultdict`: ``` In [58]: dic=defaultdict(dict) In [59]: for x in lis: idx=x['id'] if dic[idx].get('s',float('-inf')) < x ['s']: dic[idx]=x ....: In [60]: dic.values() Out[60]: [{'id': 1, 's': 1.5}, {'id': 3, 's': 0.6}] ``` Using simple `dict` : ``` In [71]: dic={} In [72]: for x in lis: idx=x['id'] if dic.get(idx, {'s': float('-inf')}) ['s'] < x['s']: dic[idx]=x ....: In [73]: dic.values() Out[73]: [{'id': 1, 's': 1.5}, {'id': 3, 's': 0.6}] ```
python: remove duplicate elements based on score
[ "", "python", "list", "" ]
What is a good way to sample integers in the range {0,...,n-1} according to (a discrete version of) the exponential distribution? `random.expovariate(lambd)` returns a real number from 0 to positive infinity. **Update.** Changed title to make it more accurate.
In general, it is possible to sample from a distribution by generating a uniform random number and then applying the inverse of the cumulative distribution function (CDF). So, to sample from the truncated distribution, you can generate a uniform random number, then take the inverse of the truncated CDF. The truncated CDF is just the full geometric CDF rescaled by its value at `n-1`: ``` import numpy as np import matplotlib.pyplot as plt p=.3 bins=np.arange(0,50,1) r=np.random.rand( 1000 ) gen=np.floor(np.log(r)/np.log(1-p)) plt.hist(gen,bins=bins,alpha=.8) N=5 gen_trunc=np.floor(np.log(1-r*(1-(1-p)**N))/np.log(1-p)) plt.hist(gen_trunc,bins=bins,alpha=.8) plt.show() ```
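The same inverse-CDF formula works one draw at a time without numpy — a stdlib-only sketch (the function and parameter names are mine):

```python
import math
import random

def truncated_geometric(p, n, rng=random):
    """Sample an int in {0, ..., n-1} with P(k) proportional to (1-p)**k * p."""
    u = rng.random()
    # Inverse of the truncated geometric CDF, as in the numpy version above.
    return int(math.floor(math.log(1 - u * (1 - (1 - p) ** n)) / math.log(1 - p)))

random.seed(0)
samples = [truncated_geometric(0.3, 5) for _ in range(1000)]
print(min(samples), max(samples))  # every draw lands in 0..4
```

Since `u` is in [0, 1), the argument of the outer `log` stays in `((1-p)**n, 1]`, which is what pins the result to the range 0..n-1.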
The discrete analogue of the exponential distribution is the [geometric distribution](http://mathworld.wolfram.com/GeometricDistribution.html). This is implemented in [NumPy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.geometric.html): ``` >>> import numpy as np >>> np.random.geometric(.01, 10) array([ 33, 45, 41, 171, 62, 119, 56, 47, 30, 197]) ```
Sample integers from truncated geometric distribution
[ "", "python", "math", "" ]
I want to make a function which searches for strings in the array and then replaces them with the corresponding element from the dictionary. So far I have tried this, but I am not able to figure out a few things: 1. How can I escape special characters? 2. How can I replace with the match found? I tried `\1` but it didn't work. ``` def myfunc(h): myarray = { "#":"\\#", "$":"\\$", "%":"\\%", "&":"\\&", "~":"\\~{}", "_":"\\_", "^":"\\^{}", "\\":"\\textbackslash{}", "{":"\\{", "}":"\\}" } pattern = "[#\$\%\&\~\_\^\\\\\{\}]" pattern_obj = re.compile(pattern, re.MULTILINE) new = re.sub(pattern_obj,myarray[\1],h) return new ```
You're looking for re.sub callbacks: ``` def myfunc(h): rules = { "#":r"\#", "$":r"\$", "%":r"\%", "&":r"\&", "~":r"\~{}", "_":r"\_", "^":r"\^{}", "\\":r"\textbackslash{}", "{":r"\{", "}":r"\}" } pattern = '[%s]' % re.escape(''.join(rules.keys())) new = re.sub(pattern, lambda m: rules[m.group()], h) return new ``` This way you avoid 1) loops, 2) replacing already processed content.
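To sanity-check the single-pass behaviour (the `{}` the callback inserts are never revisited), here is the same approach with a couple of concrete inputs — the function is renamed only to keep the sketch self-contained:

```python
import re

rules = {
    "#": r"\#", "$": r"\$", "%": r"\%", "&": r"\&",
    "~": r"\~{}", "_": r"\_", "^": r"\^{}",
    "\\": r"\textbackslash{}", "{": r"\{", "}": r"\}",
}
# re.escape lets us build the character class without hand-escaping anything.
pattern = "[%s]" % re.escape("".join(rules.keys()))

def tex_escape(h):
    # The callable replacement avoids backreference/backslash pitfalls entirely.
    return re.sub(pattern, lambda m: rules[m.group()], h)

print(tex_escape("50% of $10"))  # -> 50\% of \$10
print(tex_escape("a_b & c^d"))   # -> a\_b \& c\^{}d
```

A single backslash becomes `\textbackslash{}` in one step, and the braces it produces are not escaped again — exactly the property that makes the callback version safe.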
1. I'd suggest using raw string literals (`r""`) for better readability. 2. In your case you may want to just use the [`str.replace`](http://docs.python.org/3/library/stdtypes.html#str.replace) function instead of `re.sub`. ``` def myfunc(h): myarray = [ ("\\", r"\textbackslash"), ("{", r"\{"), ("}", r"\}"), ("#", r"\#"), ("$", r"\$"), ("%", r"\%"), ("&", r"\&"), ("~", r"\~{}"), ("_", r"\_"), ("^", r"\^{}")] for (val, replacement) in myarray: h = h.replace(val, replacement) h = h.replace(r"\textbackslash", r"\textbackslash{}") return h ``` The code is a modification of @tigger's answer.
How can i search and replace using python regex
[ "", "python", "regex", "" ]
I have an array of longitude-latitude points that defines the boundaries of an area. I would like to create a polygon based on these points and plot the polygon on a map and fill it. Currently, my polygon seems to consist of many patches that connect all the points, but the order of the points is not correct and when I try to fill the polygon I get a weird looking area (see attached). ![The black dots indicate the position of the boundary points](https://i.stack.imgur.com/ZIvoe.png) I sort my longitude-latitude points (mypolyXY array) according to the center of the polygon, but my guess is that this is not correct: ``` cent=(np.sum([p[0] for p in mypolyXY])/len(mypolyXY),np.sum([p[1] for p in mypolyXY])/len(mypolyXY)) # sort by polar angle mypolyXY.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0])) ``` I plot the point locations (black circles) and my polygons (colored patches) using ``` scatter([p[0] for p in mypolyXY],[p[1] for p in mypolyXY],2) p = Polygon(mypolyXY,facecolor=colors,edgecolor='none') ax.add_artist(p) ``` My question is: how can I close my polygon based on my array of longitude-latitude points? **UPDATE:** I tested some more on how to plot the polygon. I removed the sort routine and just used the data in the order they occur in the file. This seems to improve the result, but as @tcaswell mentioned, the polygon shape still undercuts itself (see new plot). I am hoping that there could be a path/polygon routine that could solve my problem and sort of merge all shapes or paths within the boundaries of the polygon. Suggestions are very welcome. ![enter image description here](https://i.stack.imgur.com/L8U4A.png) UPDATE 2: I now have a working version of my script that is based on suggestions by @Rutger Kassies and Roland Smith. I ended up reading the Shapefile using ogr, which worked relatively well.
It worked well for the standard lmes\_64.shp file but when I used more detailed LME files where each LME could consist of several polygons this script broke down. I would have to find a way to merge the various polygons for identical LME names to make that work. I attach my script I ended up with in case anyone would take a look at it. I very much appreciate comments for how to improve this script or to make it more generic. This script creates the polygons and extracts data within the polygon region that I read from a netcdf file. The grid of the input file is -180 to 180 and -90 to 90. ``` import numpy as np import math from pylab import * import matplotlib.patches as patches import string, os, sys import datetime, types from netCDF4 import Dataset import matplotlib.nxutils as nx from mpl_toolkits.basemap import Basemap import ogr import matplotlib.path as mpath import matplotlib.patches as patches def getLMEpolygon(coordinatefile,mymap,index,first): ds = ogr.Open(coordinatefile) lyr = ds.GetLayer(0) numberOfPolygons=lyr.GetFeatureCount() if first is False: ft = lyr.GetFeature(index) print "Found polygon:", ft.items()['LME_NAME'] geom = ft.GetGeometryRef() codes = [] all_x = [] all_y = [] all_XY= [] if (geom.GetGeometryType() == ogr.wkbPolygon): for i in range(geom.GetGeometryCount()): r = geom.GetGeometryRef(i) x = [r.GetX(j) for j in range(r.GetPointCount())] y = [r.GetY(j) for j in range(r.GetPointCount())] codes += [mpath.Path.MOVETO] + (len(x)-1)*[mpath.Path.LINETO] all_x += x all_y += y all_XY +=mymap(x,y) if len(all_XY)==0: all_XY=None mypoly=None else: mypoly=np.empty((len(all_XY[:][0]),2)) mypoly[:,0]=all_XY[:][0] mypoly[:,1]=all_XY[:][3] else: print "Will extract data for %s polygons"%(numberOfPolygons) mypoly=None first=False return mypoly, first, numberOfPolygons def openCMIP5file(CMIP5name,myvar,mymap): if os.path.exists(CMIP5name): myfile=Dataset(CMIP5name) print "Opened CMIP5 file: %s"%(CMIP5name) else: print "Could not find CMIP5 input file %s : 
abort"%(CMIP5name) sys.exit() mydata=np.squeeze(myfile.variables[myvar][-1,:,:]) - 273.15 lonCMIP5=np.squeeze(myfile.variables["lon"][:]) latCMIP5=np.squeeze(myfile.variables["lat"][:]) lons,lats=np.meshgrid(lonCMIP5,latCMIP5) lons=lons.flatten() lats=lats.flatten() mygrid=np.empty((len(lats),2)) mymapgrid=np.empty((len(lats),2)) for i in xrange(len(lats)): mygrid[i,0]=lons[i] mygrid[i,1]=lats[i] X,Y=mymap(lons[i],lats[i]) mymapgrid[i,0]=X mymapgrid[i,1]=Y return mydata, mygrid, mymapgrid def drawMap(NUM_COLORS): ax = plt.subplot(111) cm = plt.get_cmap('RdBu') ax.set_color_cycle([cm(1.*j/NUM_COLORS) for j in range(NUM_COLORS)]) mymap = Basemap(resolution='l',projection='robin',lon_0=0) mymap.drawcountries() mymap.drawcoastlines() mymap.fillcontinents(color='grey',lake_color='white') mymap.drawparallels(np.arange(-90.,120.,30.)) mymap.drawmeridians(np.arange(0.,360.,60.)) mymap.drawmapboundary(fill_color='white') return ax, mymap, cm """Edit the correct names below:""" LMEcoordinatefile='ShapefileBoundaries/lmes_64.shp' CMIP5file='tos_Omon_CCSM4_rcp85_r1i1p1_200601-210012_regrid.nc' mydebug=False doPoints=False first=True """initialize the map:""" mymap=None mypolyXY, first, numberOfPolygons = getLMEpolygon(LMEcoordinatefile, mymap, 0,first) NUM_COLORS=numberOfPolygons ax, mymap, cm = drawMap(NUM_COLORS) """Get the CMIP5 data together with the grid""" SST,mygrid, mymapgrid = openCMIP5file(CMIP5file,"tos",mymap) """For each LME of interest create a polygon of coordinates defining the boundaries""" for counter in xrange(numberOfPolygons-1): mypolyXY,first,numberOfPolygons = getLMEpolygon(LMEcoordinatefile, mymap,counter,first) if mypolyXY != None: """Find the indices inside the grid that are within the polygon""" insideBoolean = plt.mlab.inside_poly(np.c_[mymapgrid[:,0],mymapgrid[:,1]],np.c_[mypolyXY[:,0],mypolyXY[:,1]]) SST=SST.flatten() SST=np.ma.masked_where(SST>50,SST) mymapgrid=np.c_[mymapgrid[:,0],mymapgrid[:,1]] myaverageSST=np.mean(SST[insideBoolean]) 
mycolor=cm(myaverageSST/SST.max()) scaled_z = (myaverageSST - SST.min()) / SST.ptp() colors = plt.cm.coolwarm(scaled_z) scatter([p[0] for p in mypolyXY],[p[1] for p in mypolyXY],2) p = Polygon(mypolyXY,facecolor=colors,edgecolor='none') ax.add_artist(p) if doPoints is True: for point in xrange(len(insideBoolean)): pointX=mymapgrid[insideBoolean[point],0] pointY=mymapgrid[insideBoolean[point],1] ax.scatter(pointX,pointY,8,color=colors) ax.hold(True) if doPoints is True: colorbar() print "Extracted average values for %s LMEs"%(numberOfPolygons) plt.savefig('LMEs.png',dpi=300) plt.show() ``` Final image attached. Thanks for all help. ![enter image description here](https://i.stack.imgur.com/6QW6U.png) Cheers, Trond
I recommend using the original Shapefile, which is in a format appropriate for storing polygons. As an alternative to OGR you could use Shapely, or export the polygon to Wkt etc. ``` import ogr import matplotlib.path as mpath import matplotlib.patches as patches import matplotlib.pyplot as plt ds = ogr.Open('lmes_64.shp') lyr = ds.GetLayer(0) ft = lyr.GetFeature(38) geom = ft.GetGeometryRef() ds = None codes = [] all_x = [] all_y = [] if (geom.GetGeometryType() == ogr.wkbPolygon): for i in range(geom.GetGeometryCount()): r = geom.GetGeometryRef(i) x = [r.GetX(j) for j in range(r.GetPointCount())] y = [r.GetY(j) for j in range(r.GetPointCount())] codes += [mpath.Path.MOVETO] + (len(x)-1)*[mpath.Path.LINETO] all_x += x all_y += y if (geom.GetGeometryType() == ogr.wkbMultiPolygon): codes = [] for i in range(geom.GetGeometryCount()): # Read ring geometry and create path r = geom.GetGeometryRef(i) for part in r: x = [part.GetX(j) for j in range(part.GetPointCount())] y = [part.GetY(j) for j in range(part.GetPointCount())] # skip boundary between individual rings codes += [mpath.Path.MOVETO] + (len(x)-1)*[mpath.Path.LINETO] all_x += x all_y += y carib_path = mpath.Path(np.column_stack((all_x,all_y)), codes) carib_patch = patches.PathPatch(carib_path, facecolor='orange', lw=2) poly1 = patches.Polygon([[-80,20],[-75,20],[-75,15],[-80,15],[-80,20]], zorder=5, fc='none', lw=3) poly2 = patches.Polygon([[-65,25],[-60,25],[-60,20],[-65,20],[-65,25]], zorder=5, fc='none', lw=3) fig, ax = plt.subplots(1,1) for poly in [poly1, poly2]: if carib_path.intersects_path(poly.get_path()): poly.set_edgecolor('g') else: poly.set_edgecolor('r') ax.add_patch(poly) ax.add_patch(carib_patch) ax.autoscale_view() ``` ![enter image description here](https://i.stack.imgur.com/bJnHJ.png) Also checkout [Fiona](https://github.com/sgillies/Fiona) (wrapper for OGR) if you want really easy Shapefile handling.
Having an array of points is not enough. You need to know the *order* of the points. Normally the points of a polygon are given *sequentially*. So you draw a line from the first point to the second, then from the second to the third, et cetera. If your list is not in sequential order, you need extra information to be able to make a sequential list. A shapefile (see the [documentation](http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf)) contains a list of shapes, like a Null shape, Point, PolyLine, Polygon, with variants containing also the Z and M (measure) coordinates. So just dumping the points will not do. You have to divide them up into the different shapes and render the ones you are interested in. In this case probably a PolyLine or Polygon. See the link above for the data format for these shapes. Keep in mind that some parts of the file are big-endian, while others are little-endian. What a mess. I would suggest using the [struct](http://docs.python.org/2/library/struct.html) module to parse the binary `.shp` file, because again according to the documentation, the points of a single Polygon *are* in order, and they form a closed chain (the last point is the same as the first). Another thing you could try with your current list of coordinates is to start with a point, and then look for the same point further on in the list. Everything between those identical points should be one polygon. This is probably not foolproof, but see how far it gets you.
Create closed polygon from boundary points
[ "", "python", "numpy", "matplotlib", "" ]
I have the below query .... ``` SELECT NGPCostPosition.ProjectNo, NGPCostPosition.CostCat, NGPCostPosition.DocumentNumber, NGPCostPosition.TransactionDate, NGPCostPosition.UnitCost, NGPCostPosition.TotalCost, NGPCostPosition.CreditorEmployeeName, NGPCostPosition.SummaryCostCat, PurchaseNGP_PL.CalculatedCost, CASE WHEN DATEPART(MONTH, NGPCostPosition.TransactionDate) = DATEPART(MONTH, GETDATE()) AND DATEPART(YEAR, NGPCostPosition.TransactionDate) = DATEPART(YEAR, GETDATE()) THEN TotalCost ELSE 0 END AS CurrentMonthCost2 FROM NGPCostPosition INNER JOIN PurchaseNGP_PL ON NGPCostPosition.ProjectNo = PurchaseNGP_PL.PAPROJNUMBER AND NGPCostPosition.DocumentNumber = PurchaseNGP_PL.DocumentNumber AND NGPCostPosition.SummaryCostCat = PurchaseNGP_PL.SummaryCostCat WHERE NGPCostPosition.ProjectNo = @ProjectNumber AND CostCat ='P070' OR CostCat ='P080' AND NGPCostPosition.ProjectNo = @ProjectNumber AND NGPCostPosition.TotalCost = ABS(PurchaseNGP_PL.CalculatedCost) GROUP BY NGPCostPosition.ProjectNo, NGPCostPosition.CostCat, NGPCostPosition.DocumentNumber, NGPCostPosition.TransactionDate, NGPCostPosition.UnitCost, NGPCostPosition.TotalCost, NGPCostPosition.CreditorEmployeeName, NGPCostPosition.SummaryCostCat, PurchaseNGP_PL.CalculatedCost ``` That gives me the below results ... ![enter image description here](https://i.stack.imgur.com/Qp6iu.png) What I want to do is limit the column 'ProjectNo' to the first 5 numbers only. (eg. 12169) Could someone advise if this is possible and what the best way to do this is?
You can do: ``` SELECT LEFT(NGPCostPosition.ProjectNo, 5) TruncatedProjectNumber, .... ``` Then change your grouping to use `TruncatedProjectNumber`
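Note that `LEFT()` is SQL Server syntax; engines without it (e.g. SQLite, Oracle) spell the same thing `SUBSTR(col, 1, 5)`. A tiny check using Python's stdlib sqlite3 — the table and values are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE costs (ProjectNo TEXT)")
conn.executemany("INSERT INTO costs VALUES (?)",
                 [("12169.111.02.02",), ("12170.111.02.02",)])
# substr(col, 1, 5) keeps only the first five characters, like LEFT(col, 5).
rows = conn.execute(
    "SELECT DISTINCT substr(ProjectNo, 1, 5) FROM costs ORDER BY 1"
).fetchall()
print(rows)  # -> [('12169',), ('12170',)]
```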
Well, at the cost of space you can store the first five digits in a separate column. If you don't want to use the extra space you can try something like this: ``` SELECT CAST(LEFT(CAST(first_five AS VARCHAR(5)), 5) AS INT) ``` What the above does is convert your numeric into a `varchar`, take a substring of that `varchar`, then convert those `5` digits back into an int. It can be a costly operation depending on how often you execute it. That being said, it may be in your best interest to store this value in a separate column, so you avoid recomputing it on every invocation.
Partial Number SQL
[ "", "sql", "sql-server", "t-sql", "" ]
Schema: ``` CREATE TABLE #exclGeoKeys (xKEY INT); INSERT INTO #exclGeoKeys values (1), (2); CREATE TABLE #y (NAME CHAR(1),xKEY INT); INSERT INTO #y values ('A',1), ('C',2), ('D',NULL), ('E',3), ('F',4); ``` Can I shorten the following so it produces the same result and doesn't need the section `OR xKEY IS NULL`? ``` SELECT * FROM #y WHERE xKEY NOT IN ( SELECT * FROM #exclGeoKeys ) OR xKEY IS NULL; ```
Use option with NOT [EXISTS](http://msdn.microsoft.com/en-us/library/ms188336.aspx) operator ``` SELECT * FROM #y t WHERE NOT EXISTS ( SELECT 1 FROM #exclGeoKeys t2 WHERE t.xKEY = t2.xKEY ) ``` Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/53789/2) Option with EXISTS and EXCEPT operators ``` SELECT * FROM #y t WHERE EXISTS ( SELECT t.xKEY EXCEPT SELECT t2.xKEY FROM #exclGeoKeys t2 ) ``` Option with NOT EXISTS and INTERSECT operators ``` SELECT * FROM #y t WHERE NOT EXISTS ( SELECT t.xKEY INTERSECT SELECT t2.xKEY FROM #exclGeoKeys t2 ) ```
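The reason the original query needs the extra `OR xKEY IS NULL` while `NOT EXISTS` doesn't is three-valued logic: `NULL NOT IN (1, 2)` evaluates to NULL, so the row is filtered out. A stdlib reproduction of the schema above (SQLite here, but the semantics are the same in SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE exclGeoKeys (xKEY INT);
    INSERT INTO exclGeoKeys VALUES (1), (2);
    CREATE TABLE y (NAME CHAR(1), xKEY INT);
    INSERT INTO y VALUES ('A',1), ('C',2), ('D',NULL), ('E',3), ('F',4);
""")
not_in = conn.execute(
    "SELECT NAME FROM y WHERE xKEY NOT IN (SELECT xKEY FROM exclGeoKeys)"
    " ORDER BY NAME"
).fetchall()
not_exists = conn.execute(
    "SELECT NAME FROM y t WHERE NOT EXISTS"
    " (SELECT 1 FROM exclGeoKeys t2 WHERE t.xKEY = t2.xKEY) ORDER BY NAME"
).fetchall()
print(not_in)      # -> [('E',), ('F',)]   -- the NULL row 'D' silently vanishes
print(not_exists)  # -> [('D',), ('E',), ('F',)]
```

For the NULL row, the correlated predicate `t.xKEY = t2.xKEY` is never true, so `EXISTS` is false and `NOT EXISTS` keeps the row — no special-casing needed.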
Unreal value ``` SELECT * FROM #y WHERE coalesce(xKEY,-1) NOT IN ( SELECT * FROM #exclGeoKeys ) ```
Combining and shortening two where clause conditions
[ "", "sql", "sql-server", "" ]
I want to get a list of all the process names, CPU, memory usage and peak memory usage. I was hoping I could use ctypes, but I am happy to hear any other options. Thanks for your time.
You can use [`psutil`](https://github.com/giampaolo/psutil). For example, to obtain the list of process names: ``` process_names = [proc.name() for proc in psutil.process_iter()] ``` For info about the CPU use [`psutil.cpu_percent`](https://psutil.readthedocs.io/en/latest/#psutil.cpu_percent) or [`psutil.cpu_times`](https://psutil.readthedocs.io/en/latest/#psutil.cpu_times). For info about memory usage use [`psutil.virtual_memory`](https://psutil.readthedocs.io/en/latest/#psutil.virtual_memory). Note that psutil works with Linux, OS X, Windows, Solaris and FreeBSD and with python 2.4 through 3.3.
I like using `wmic` on Windows. You can run it from the command-line, so you can run it from Python. ``` from subprocess import Popen,PIPE proc = Popen('wmic cpu',stdout=PIPE, stderr=PIPE) print str(proc.communicate()) ``` With `wmic` you can get processes, cpu, and memory info easily. Just use `wmic cpu`, `wmic process`, and `wmic memphysical`. You can also filter out certain attributes by using `wmic <alias> get <attribute>`. And you can get a list of all commands with `wmic /?`. Hope that helps! You can check out the official documentation for WMIC here: <http://technet.microsoft.com/en-us/library/bb742610.aspx>
Python - get process names,CPU,Mem Usage and Peak Mem Usage in windows
[ "", "python", "windows", "memory", "process", "cpu", "" ]
I'm working on an embedded system on the BeagleBoard. The source code is in Python, but I import libraries from OpenCV to do image processing. I'm using the Logitech C910 webcam; it's an excellent camera, but it has autofocus. I would like to know if I can disable the autofocus from Python or from any program in Linux.
Use the `v4l2-ctl` program from your shell to control hardware settings on your webcam. To turn off autofocus just do: ``` v4l2-ctl -c focus_auto=0 ``` You can list all possible controls with: ``` v4l2-ctl -l ``` The commands default to your first Video4Linux device, i.e. `/dev/video0`. If you have more than one webcam plugged in, use the `-d` switch to select your target device. --- **Installing v4l-utils** The easiest way to install the utility is with your package manager, e.g. on Ubuntu or other Debian-based systems try: ``` apt-get install v4l-utils ``` or on Fedora, CentOS and other RPM-based distros use: ``` yum install v4l-utils ```
You can also do it in Linux with: ``` cap = cv2.VideoCapture(0) cap.set(cv2.CAP_PROP_AUTOFOCUS, 0) ``` For some people this doesn't work in Windows (see [Disable webcam's autofocus in Windows using opencv-python](https://stackoverflow.com/questions/48855506/disable-webcams-autofocus-in-windows-using-opencv-python)). In my system it does (ubuntu 14.04, V4L 2.0.2, opencv 3.4.3, logitech c922).
Disable webcam's Autofocus in Linux
[ "", "python", "linux", "opencv", "" ]
I have this current list comprehension: ``` ... cur = [[14, k, j] for j, k in rows[14], range(15)] ... ``` and it is giving me the following error: ``` ... cur = [[14, k, j] for j, k in rows[14], range(15)] ValueError: too many values to unpack ``` Any help appreciated as in how I would fix this. I just don't want to have to write out a full for loop or the entire list by hand. Thank you! :D **Extra info:** ``` rows = [{1: '75'}, {1: '95', 2: '64'}, {1: '17', 2: '47', 3: '82'}, {1: '18', 2: '35', 3: '87', 4: '10'}, {1: '20', 2: '04', 3: '82', 4: '47', 5: '65'}, {1: '19', 2: '01', 3: '23', 4: '75', 5: '03', 6: '34'}, {1: '88', 2: '02', 3: '77', 4: '73', 5: '07', 6: '63', 7: '67'}, {1: '99', 2: '65', 3: '04', 4: '28', 5: '06', 6: '16', 7: '70', 8: '92'}, {1: '41', 2: '41', 3: '26', 4: '56', 5: '83', 6: '40', 7: '80', 8: '70', 9: '33'}, {1: '41', 2: '48', 3: '72', 4: '33', 5: '47', 6: '32', 7: '37', 8: '16', 9: '94', 10: '29'}, {1: '53', 2: '71', 3: '44', 4: '65', 5: '25', 6: '43', 7: '91', 8: '52', 9: '97', 10: '51', 11: '14'}, {1: '70', 2: '11', 3: '33', 4: '28', 5: '77', 6: '73', 7: '17', 8: '78', 9: '39', 10: '68', 11: '17', 12: '57'}, {1: '91', 2: '71', 3: '52', 4: '38', 5: '17', 6: '14', 7: '91', 8: '43', 9: '58', 10: '50', 11: '27', 12: '29', 13: '48'}, {1: '63', 2: '66', 3: '04', 4: '68', 5: '89', 6: '53', 7: '67', 8: '30', 9: '73', 10: '16', 11: '69', 12: '87', 13: '40', 14: '31'}, {1: '04', 2: '62', 3: '98', 4: '27', 5: '23', 6: '09', 7: '70', 8: '98', 9: '73', 10: '93', 11: '38', 12: '53', 13: '60', 14: '04', 15: '23'}] ```
You need to [`zip`](http://docs.python.org/2/library/functions.html#zip) them to iterate like that: ``` cur = [[14, k, j] for j, k in zip(rows[14], range(15))] ```
To explain your code: ``` cur = [[14, k, j] for j, k in rows[14], range(15)] ``` is the same as: ``` cur = [[14, k, j] for j, k in (rows[14], range(15))] ``` Now, we see more clearly that you've created a `tuple` and are iterating over it. The first time through the loop, the tuple gives up `rows[14]`, which is a dictionary that has more than 2 items in it, so it can't be unpacked into `j` and `k`. As noted by `jamylak`, the key is to `zip` the two iterables together. ``` cur = [[14, k, j] for j,k in zip(rows[14],range(15))] ``` You can think of it like a zipper: ``` zip(a,b) = [ (a[0], b[0]), (a[1], b[1]), (a[2], b[2]), ... ] ``` Written out this way, you see how the structure sort of resembles a zipper (with `a` and `b` being the left and right pieces of the zipper). After zipping, you've matched an element on the left with an element on the right. Of course, the objects you pass to `zip` don't need to be indexable (all that matters is that you can iterate over them), and you can "zip" more than 2 iterables together ...
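For illustration, here is a runnable sketch (stdlib only, with a small stand-in dict in place of `rows[14]`; iterating a dict yields its keys):

```python
rows14 = {1: '63', 2: '66', 3: '04'}   # stand-in for rows[14]
pairs = list(zip(rows14, range(3)))    # the dict contributes its keys
cur = [[14, k, j] for j, k in pairs]   # same shape as the question's comprehension
```

Each pair is `(dict key, index)`, so the unpacking into `j, k` now works.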
Python - List comprehension with multiple arguments in the for
[ "", "python", "list", "list-comprehension", "" ]
This query which works with mySQL doesn't work with Postgresql: ``` select ... from ... where (id = ... and ( h > date_sub(now(), INTERVAL 30 MINUTE))) ``` The error is: ``` Query failed: ERREUR: erreur de syntaxe sur ou près de « 30 » ``` Any ideas ?
`DATE_SUB` is a MySQL function that does not exist in PostgreSQL. You can, for example, use: ``` NOW() - '30 MINUTES'::INTERVAL ``` ...or... ``` NOW() - INTERVAL '30' MINUTE ``` ...or... ``` NOW() - INTERVAL '30 MINUTES' ``` as a replacement. [An SQLfiddle with all 3 to test with](http://sqlfiddle.com/#!12/d41d8/894).
An interval literal needs single quotes: ``` INTERVAL '30' MINUTE ``` And you can use regular "arithmetics": ``` and (h > current_timestamp - interval '30' minute) ```
date_sub ok with mysql, ko with postgresql
[ "", "sql", "postgresql", "" ]
I'm going to apologize up front, this is my first question on stackoverflow... I am attempting to query a table of records where each row has a VehicleID, latitude, longitude, timestamp and various other fields. What I need is to only pull the most recent latitude and longitude for each VehicleID. edit: removed the term unique ID as apparently I was using it incorrectly.
If the Unique ID is truly unique, then you will always have the most recent latitude and longitude, because the ID will change with every single row. If the Unique ID is a Foreign Key (or an ID referencing a unique ID from a different table) you should do something like this: ``` SELECT latitude, longitude, unique_id FROM table INNER JOIN (SELECT unique_id, MAX(timestamp) AS timestamp FROM table GROUP BY unique_id)t2 ON table.timestamp = t2.timestamp AND table.unique_id = t2.unique_id; ```
You can use the `row_number()` function for this purpose: ``` select id, latitude, longitude, timestamp, . . . from (select t.*, row_number() over (partition by id order by timestamp desc) as seqnum from t ) t where seqnum = 1 ``` The `row_number()` function assigns a sequential value to each id (`partition by` clause), with the most recent time stamp getting the value of `1` (the `order by` clause). The outer `where` just chooses this one value. This is an example of a `window` function, which I encourage you to learn more about. One quibble with your question: you describe the id as unique. However, if there are multiple values at different times, then it is not unique.
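For intuition, the same greatest-n-per-group logic sketched in plain Python (hypothetical sample rows; the SQL above is what you would actually run — ISO date strings compare chronologically):

```python
# (vehicle_id, timestamp, latitude, longitude) -- hypothetical sample rows
rows = [
    ("v1", "2013-01-01", 10.0, 20.0),
    ("v1", "2013-02-01", 11.0, 21.0),
    ("v2", "2013-01-15", 30.0, 40.0),
]

latest = {}
for vid, ts, lat, lon in rows:
    # keep only the row with the greatest timestamp per vehicle
    if vid not in latest or ts > latest[vid][0]:
        latest[vid] = (ts, lat, lon)
```

This is exactly what `row_number() ... partition by id order by timestamp desc` plus `seqnum = 1` selects.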
Need To Pull Most Recent Record By Timestamp Per Unique ID
[ "", "sql", "sql-server", "sql-server-2005", "greatest-n-per-group", "" ]
I'm trying to match some variable names in an html document to populate a dictionary. I have the html ``` <div class="no_float"> <b>{node_A_test00:02d}</b>{{css}} <br /> Block mask: {block_mask_lower_node_A} to {block_mask_upper_node_A} <br /> </div> <div class="sw_sel_container"> Switch selections: <table class="sw_sel"> <tr> <td class="{sw_sel_node_A_03}">1</td> <td class="{sw_sel_node_A_03}">2</td> <td class="{sw_sel_node_A_03}">3</td> <td class="{sw_sel_node_A_04}">4</td> <td class="{sw_sel_node_A_05}">5</td> ``` I want to match code between { and ( } or : ). But if it starts with {{ I don't want to match it at all (I will be using this for inline css). So far I have the regex expression ``` (?<=\{)((?!{).*?)(?=\}|:) ``` but this is still matching text inside {{css}}.
I see that you've already found a solution that works, but I thought it might be worthwhile to explain what the problem with your original regex is. * `(?<=\{)` means that a `{` must precede whatever matches next. Fair enough. * `((?!{).*?)` will match anything that starts with a character other than `{`. Okay, so we're only matching things *inside* the braces. Good. But now consider what happens when you have two opening braces: `{{bar}}`. Consider the substring `bar`. What precedes the `b`? A `{`. Does `bar` start with `{`? Nope. So the regex will consider this a match. You have, of course, prevented the regex from matching `{bar}`, which is what it would do if you left the `(?!{)` out of your pattern, because `{bar}` starts with a `{`. But as soon as the regex engine determines that no valid match starts on the `{` character, it moves on to the next character--`b`--and sees that a match starts there. Now, just for kicks, here's the regex I'd use: `(?<!{){([^{}:]+)[}:](?!})` * `(?<!{)` : the match shouldn't be preceded by `{`. * `{` : the match starts with an open brace. * `([^{}:]+)` : **group** everything that isn't an open-brace, close-brace, or colon. This is the part of the match that we actually want. * `[}:]` : end the match with a close-brace or colon. * `(?!})` : the match shouldn't be followed by `}`.
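For illustration, a quick check of a lookbehind-based pattern of this shape (a sketch; sample strings modeled on the question):

```python
import re

# '{' not preceded by '{'; capture until '}' or ':';
# the closer must not be followed by another '}' (so {{...}} is skipped)
pat = re.compile(r'(?<!\{)\{([^{}:]+)[}:](?!\})')
names = pat.findall('{foo} {{bar}} {node_A_test00:02d}')
```

`{foo}` and the name before the `:` format spec are captured, while `{{bar}}` is left alone.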
You could do something like this: ``` re.findall(r''' (?<!\{) # No opening bracket before \{ # Opening bracket ([^}]+) # Stuff inside brackets \} # Closing bracket (?!\}) # No closing bracket after ''', '{foo} {{bar}} {foo}', flags=re.VERBOSE) ```
Python - Match string between { } characters, but not between {{ }}
[ "", "python", "regex", "string", "match", "" ]
I have a query that joins on one table. I totally understand the inner join on company\_no. And even if there was another join condition with an **`=`** sign, I would have understood. But what is the greater than or equal to right after the ON clause `AND ts.trans_date >= mtd.trans_date`? I am not able to comprehend what this is doing. ``` SELECT ts.trans_date, ts.company_no, ts.transactions, ts.mtd_transactions, SUM(mtd.transactions) AS tabulated_mtd_transactions FROM transactions_tbl ts INNER JOIN transactions_tbl mtd ON ts.company_no = mtd.company_no AND ts.trans_date >= mtd.trans_date WHERE ts.company_no = 1080 AND ts.trans_date >= '2010-08-01' AND mtd.trans_date >= '2010-08-01' GROUP BY ts.trans_date, ts.company_no, ts.transactions, ts.mtd_transactions ORDER BY ts.trans_date ```
This query tries to build a running sum of `transactions` for a certain company. Basically, it takes every record in `transactions_tbl` for that company starting with `2010-08-01`, joins it to all records earlier than that (and still later than `2010-08-01`) in the same table for the same company, and finds the sum of those records. Since PostgreSQL 8.4 and Oracle 8i, there is a more efficient method to do that: ``` SELECT *, SUM(transactions) OVER (PARTITION BY company_no ORDER BY trans_date) FROM transactions_tbl WHERE company_no = 1080 AND trans_date >= '2010-08-01' ORDER BY trans_date ```
The "inner join" in your query is equivalent to a cross product of the two tables and then a selection on the result with the where clause being the predicate in the ON section of the inner join.
Don't understand the 'AND' after the 'ON' clause in an ANSI SQL inner join syntax
[ "", "sql", "oracle", "postgresql", "" ]
``` def fvals_sqrt(x): """ Return f(x) and f'(x) for applying Newton to find a square root. """ f = x**2 - 4. fp = 2.*x return f, fp def solve(fvals_sqrt, x0, debug_solve = False): """ Solves the sqrt function, using newtons methon. """ iters = 0 f, fp = 0. while f > 10**-14 | -f < 10**-14: f, fp = fvals_sqrt(x0) x0 = x0 - (f/fp) iters = iters+1 print + " x = %22.15e in %i iterations " % (x0, iters) return x0, iters print "we're done" ``` I want this while loop to stop once f is less than 10^-14, but I'm not sure how to modify the condition to make the loop iterate correctly. Any help?
You want logical `or` instead of `|`: ``` while f > 10**-14 or -f < 10**-14: ``` Note that there is a difference in operator precedence between logical or (`or`) and bitwise or (`|`). Your previous statement was likely being parsed as: ``` while f > (10**-14 | -f) < 10**-14: ``` which seems like it would raise a `TypeError`... It might be easier to write: ``` while abs(f) > 10**-14: ```
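Putting the `abs()` form to work, here is a minimal complete sketch of the Newton loop (prints left out; same 10**-14 tolerance):

```python
def fvals_sqrt(x):
    """Return f(x) and f'(x) for f(x) = x**2 - 4."""
    return x**2 - 4.0, 2.0 * x

def solve(fvals, x0, tol=1e-14):
    f, fp = fvals(x0)
    iters = 0
    while abs(f) > tol:
        x0 = x0 - f / fp   # Newton step
        f, fp = fvals(x0)
        iters += 1
    return x0, iters

root, iters = solve(fvals_sqrt, 3.0)
```

Starting from 3.0 this converges to the root 2.0 in a handful of iterations.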
`|` operator is used for `bitwise-OR` so you should use the logical `or` here and secondly the `return` statement is also at wrong place and it'll end your loop only after one iteration. ``` def solve(fvals_sqrt, x0, debug_solve = False): iters = 0 f, fp = 0. while f > 10**-14 or -f < 10**-14: f, fp = fvals_sqrt(x0) x0 = x0 - (f/fp) iters = iters+1 print + " x = %22.15e in %i iterations " % (x0, iters) print "we're done" return x0, iters ```
while block doesn't loop as expected
[ "", "python", "loops", "" ]
I have a `table` and a `SP` (stored procedure) in SQL Server. I want to add `Permissions` on the table and SP so that no one can change the structure of the table or the logic of the SP. Is there any way to specify such permissions? For example, a trigger which blocks `drop` and `alter` commands, or any other way to do this. Thanks in advance.
It is highly recommended that you manage the permissions on the objects. However, if you have no control over the permissions, consider setting up a database DDL trigger to at least log the events. ``` create table AuditTable ( event_type varchar(max) not null , tsql_command varchar(max) not null , modified_by varchar(128) not null default (current_user) , modified_time datetime not null default (getdate()) ) go create trigger log_database_level_event on database for ddl_database_level_events as insert AuditTable ( event_type , tsql_command ) values ( eventdata().value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(max)') , eventdata().value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)') ) go create user tester without login go execute as user = 'tester' go create proc test_proc as select @@version go alter proc test_proc as select 1 go revert go select * from AuditTable go ```
You need to create and use a separate user that has only privileges that you explicitly allow it to (eg `GRANT SELECT` from table or `GRANT EXECUTE` on your stored procedure). Rather than looking at it as disallowing certain actions you should consider what actions *are* allowed (see [Principle of Least Privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)).
Avoid alter and drop command on a Table and SP in SQL Server
[ "", "sql", "sql-server-2008", "t-sql", "" ]
How can you take key/value pairs from a dictionary, and use as attributes in instances in a class? I'm sure it should go through a for loop, but I'm not exactly sure what syntax to use. Here is my code: ``` from random import randint import time drivers={} class car: def __init__(self, Driver, Sponsor, Odometer=0,Speed=randint(1,121)): self.Driver = Driver self.Sponsor = Sponsor self.Odometer = Odometer self.Speed = Speed def update(): Odometer=Odometer+Speed*0.17 Speed = randint(1,121) drivers['David']='Burger King' drivers['Travis']='Dr Pepper' drivers['Landon']='Precon Marine' drivers['Jamie']='Cessna' drivers['Juan']='Target' drivers['Ken']='Federated Auto Parts' drivers['Timmy']='OXYwater' drivers['Josh']='MDS Transport' drivers['Kurt']='Furniture Row' drivers['Casey']='Geico' drivers['Kasey']='Farmers Insurance' drivers['Jeff']='Axalta' drivers['Jimmie']='Lowes' drivers['Dale']='National Guard' drivers['Mike']='Plinker Tactical' drivers['Denny']='FedEx' drivers['Kyle']='Mars Brands' drivers['Matt']='Husky' drivers['Bobby']='Kingsford' drivers['Clint']='5-Hour Energy' ``` I realize that the drivers dictionary is long (and there might be an easier method to add all these drivers). I needed to add in 20 drivers and their sponsors in a list (or dictionary, in this case). Would it be easier to simply add them into the class, rather than using a for loop? And is there a way to add in the driver name and sponsor as attributes, while using the default values for the Odometer and Speed?
> I realize that the drivers dictionary is long (and there might be an easier method to add all these drivers). Yes, you can use a dict display: ``` drivers = { 'David': 'Burger King', 'Travis': 'Dr Pepper', # ... } ``` > is there a way to add in the driver name and sponsor as attributes, while using the default values for the Odometer and Speed? Yes. In fact, you've already done it. The way you've defined `car.__init__`, you just leave out the other two params, and they'll get the default values: ``` >>> david = car('David', 'Burger King') >>> david.Driver 'David' >>> david.Odometer 0 ``` So, now all you're missing is a way to create the 20 `car` instances out of your `drivers` dict. Assuming you want a `dict` that maps driver names to `car` instances, use a dict comprehension: ``` cars = {driver: car(driver, sponsor) for driver, sponsor in drivers.items()} ``` Or, if you just want a `list` of `car` instances, use a list comprehension instead: ``` cars = [car(driver, sponsor) for driver, sponsor in drivers.items()] ``` --- Another thing: The way you've defined `car.__init__`, you're using a single random number for every car's speed: ``` def __init__(self, Driver, Sponsor, Odometer=0,Speed=randint(1,121)): ``` When Python evaluates this function definition, it will call `randint(1,121)` and make the result the default value for every call to the function. What you want is probably something like this: ``` def __init__(self, Driver, Sponsor, Odometer=0, Speed=None): if Speed is None: Speed = randint(1,121) ``` --- Finally, your `update` method needs to take a `self` parameter, and it needs to use that to access or modify the object's attributes, just like your `__init__` method: ``` def update(self): self.Odometer = self.Odometer + self.Speed*0.17 self.Speed = randint(1,121) ``` --- From the comments, it sounds like the only thing you need to do with this is repeatedly loop over all cars, and then search for the winner at the end. 
For that, you don't have a need for a dict of cars, just a list. So: ``` cars = [car(driver, sponsor) for driver, sponsor in drivers.items()] ``` Now, here's what you do every minute: ``` for car in cars: car.update() ``` And then, at the end, the winner is: ``` winner = max(cars, key=operator.attrgetter('Odometer')) ``` The `max` function, like most sorting and searching functions in Python, takes an optional `key`, which is a function that tells it what to sort or search by. And `attrgetter(name)` is a function call that returns a function that gets the attribute named `name` for any object. See the [Sorting Mini-HOW TO](http://wiki.python.org/moin/HowTo/Sorting/) for details. For comparison, let's write that part explicitly: ``` winner = None for car in cars: if winner is None or car.Odometer > winner.Odometer: winner = car ```
You can use: ``` cars = [car(driver, sponsor) for driver, sponsor in drivers.items()] ``` This is called a list comprehension, and it creates a list of car objects, each using a driver and corresponding sponsor from the dictionary (with the default values of Odometer and Speed).
Assignment: Python 3.3: add keys and values from dictionary to class
[ "", "python", "class", "dictionary", "python-3.3", "" ]
I am trying to use a lambda or other python feature to return a dictionary if a condition is met, or None if not. ``` myDict = None myDict = lambda c: {} if not myDict else myDict ``` Clearly, this does not work, since the lambda is not executed, so what would be the most "pythonic" way to do it?
``` myDict = None get_myDict = lambda: myDict or {} get_myDict() ``` returns an empty dictionary.
It seems to me that you should just use the conditional expression: ``` myDict = {} if not myDict else myDict ``` --- `lambda` is a way of creating *functions*. So: ``` func = lambda x: x*x ``` is approximately the same as: ``` def func(x): return x*x ``` So, in your example, you create a *function* more or less equivalent to: ``` def myDict(c): return {} if not myDict else myDict ```
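A quick sanity check of the conditional expression (runnable as-is):

```python
myDict = None
myDict = {} if not myDict else myDict        # None -> fresh empty dict

existing = {'a': 1}
existing = {} if not existing else existing  # non-empty dict is kept
```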
Returning a dictionary based on a condition
[ "", "python", "lambda", "" ]
I recently moved a piece of code into production on an Oracle database where one of the more experienced developers who reviewed it mentioned I had way too many `exists` and `not exists` statements and that there should be a way to remove them, but it had been too long since he had to use it and didn't remember much on how it worked. Currently, I'm going back and making the piece of code more maintainable as it is a piece likely to be changed multiple times in future years as business logic/requirements change, and I wanted to go ahead and optimize it while making it more maintainable. I've tried looking it up, but all I can find is recommendations on replacing `not in` with `not exists` and to not return actual results. As such, I'm wondering what can be done to optimize out `exists`/`not exists` or if there is a way to write `exists`/`not exists` so that oracle will optimize it internally (likely to a better degree than I can). For example, how can the following be optimized? ``` UPDATE SCOTT.TABLE_N N SET N.VALUE_1 = 'Data!' WHERE N.VALUE_2 = 'Y' AND EXISTS ( SELECT 1 FROM SCOTT.TABLE_Q Q WHERE N.ID = Q.N_ID ) AND NOT EXISTS ( SELECT 1 FROM SCOTT.TABLE_W W WHERE N.ID = W.N_ID ) ```
Your statement seems perfectly fine to me. In any optimizing task, don't think patterns. Don't think like, "`(not) exists` is bad and slow, `(not) in` is super cool and fast". Think like, how much work does database do on each step and how can you measure it? A simple example: **-- NOT IN:** ``` 23:59:41 HR@sandbox> alter system flush buffer_cache; System altered. Elapsed: 00:00:00.03 23:59:43 HR@sandbox> set autotrace traceonly explain statistics 23:59:49 HR@sandbox> select country_id from countries where country_id not in (select country_id from locations); 11 rows selected. Elapsed: 00:00:00.02 Execution Plan ---------------------------------------------------------- Plan hash value: 1748518851 ------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 6 | 4 (0)| 00:00:01 | |* 1 | FILTER | | | | | | | 2 | NESTED LOOPS ANTI SNA| | 11 | 66 | 4 (75)| 00:00:01 | | 3 | INDEX FULL SCAN | COUNTRY_C_ID_PK | 25 | 75 | 1 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | LOC_COUNTRY_IX | 13 | 39 | 0 (0)| 00:00:01 | |* 5 | TABLE ACCESS FULL | LOCATIONS | 1 | 3 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter( NOT EXISTS (SELECT 0 FROM "LOCATIONS" "LOCATIONS" WHERE "COUNTRY_ID" IS NULL)) 4 - access("COUNTRY_ID"="COUNTRY_ID") 5 - filter("COUNTRY_ID" IS NULL) Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 11 consistent gets 8 physical reads 0 redo size 446 bytes sent via SQL*Net to client 363 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 11 rows processed ``` **-- NOT EXISTS** ``` 
23:59:57 HR@sandbox> alter system flush buffer_cache; System altered. Elapsed: 00:00:00.17 00:00:02 HR@sandbox> select country_id from countries c where not exists (select 1 from locations l where l.country_id = c.country_id ); 11 rows selected. Elapsed: 00:00:00.30 Execution Plan ---------------------------------------------------------- Plan hash value: 840074837 ------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 11 | 66 | 1 (0)| 00:00:01 | | 1 | NESTED LOOPS ANTI| | 11 | 66 | 1 (0)| 00:00:01 | | 2 | INDEX FULL SCAN | COUNTRY_C_ID_PK | 25 | 75 | 1 (0)| 00:00:01 | |* 3 | INDEX RANGE SCAN| LOC_COUNTRY_IX | 13 | 39 | 0 (0)| 00:00:01 | ------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("L"."COUNTRY_ID"="C"."COUNTRY_ID") Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 5 consistent gets 2 physical reads 0 redo size 446 bytes sent via SQL*Net to client 363 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 11 rows processed ``` NOT IN in this example reads twice as many database blocks and performs more complicated filtering - ask yourself, why would you choose it over NOT EXISTS?
There is no reason to avoid using EXISTS or NOT EXISTS when that is what you need. In the example you gave, that is probably exactly what you want to use. The typical dilemma is whether to use IN/NOT IN, or EXISTS/NOT EXISTS. They are evaluated quite differently, and one may be faster or slower depending on your specific circumstances. See [here](http://asktom.oracle.com/pls/asktom/f?p=100:11%3a%3a%3a%3a%3aP11_QUESTION_ID:953229842074) for more details than you probably want.
Optimizing Oracle query by removing 'exists' and 'not exists'
[ "", "sql", "oracle", "" ]
I need to output some strings (to stdout) and because windows console works in cp437, if the string contains any characters outside cp437, an exception is thrown. I got around this by ``` encoding=sys.stdout.encoding pathstr = path.encode(encoding,errors="replace").decode(encoding) print(pathstr) ``` where `path` is the `str` i want to output. I'm fine with characters replaced by "?" This doesn't seem good because it converts to a byte array and back to a str. Is there a better way to achieve this? I'm still new to python ( a week maybe ) and I'm using Win7 32 bit with cpython 3.3
> This doesn't seem good because it converts to a byte array and back to a str. If you want to write raw bytes to the stream, use `.buffer`: ``` pathbytes= path.encode(encoding, errors= 'replace') sys.stdout.buffer.write(pathbytes) ``` ...oh for the day that [issue 1602](http://bugs.python.org/issue1602) comes to something and we can avoid the Unicode horror of the Windows command prompt...
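To illustrate the bytes path without touching the real console, here is a sketch using `io.BytesIO` as a stand-in for `sys.stdout.buffer` (in cp437, `é` encodes to the single byte 0x82):

```python
import io

path = 'caf\u00e9'                            # 'café'
data = path.encode('cp437', errors='replace')
buf = io.BytesIO()                            # stand-in for sys.stdout.buffer
buf.write(data)
```

Characters with no cp437 mapping would come out as `b'?'` thanks to `errors='replace'`.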
> I'm fine with characters replaced by "?" You could set [PYTHONIOENCODING environment variable](http://docs.python.org/3.3/using/cmdline.html#envvar-PYTHONIOENCODING): ``` C:\> set PYTHONIOENCODING=cp437:replace ``` And print Unicode strings directly: ``` print(path) ``` In that case, if you are redirecting to a file; you could set PYTHONIOENCODING to utf-8 and get the correct complete output. You could also try `WriteConsoleW()`-based solutions from [the corresponding Python bug](http://bugs.python.org/issue1602) and see if they work on Python 3.3 e.g.: ``` import _win_console _win_console.install_unicode_console() print("cyrillic: цык.") ``` Where [`_win_console` is from `win_console.patch`](http://bugs.python.org/file23470/win_console.patch). You don't need to set the environment variable in this case and it should work with any codepage [(with an appropriate console font, it might even show characters outside the current codepage)](https://stackoverflow.com/a/1259468/4279). All solutions for the problem of printing Unicode inside the Windows console have drawbacks [(see the discussion and the reference links in the bug tracker for all the gory details)](http://bugs.python.org/issue1602).
Converting between charsets in python
[ "", "python", "unicode", "character-encoding", "python-3.x", "" ]
Let's say I have two lists: ``` x = [1,2,3,4] y = [1,4,7,8] ``` I want to append to x any values in y that are not already in x. I can do this easily with a loop: ``` for value in y: if value not in x: x.append(value) ``` But I am wondering if there is a more Pythonic way of doing this.
Something like this: ``` In [22]: x = [1,2,3,4] In [23]: y = [1,4,7,8] In [24]: x += [ item for item in y if item not in x] In [25]: x Out[25]: [1, 2, 3, 4, 7, 8] ``` `+=` acts as `list.extend`, so the above code is equivalent to : ``` In [26]: x = [1,2,3,4] In [27]: lis = [ item for item in y if item not in x] In [28]: x.extend(lis) In [29]: x Out[29]: [1, 2, 3, 4, 7, 8] ``` > Note that if the size of list `x` is huge and your list x/y contain > only immutable(hashable) items then you must use `sets` here, as they > will improve the time complexity to `O(N)`.
``` >>> x = [1,2,3,4] >>> y = [1,4,7,8] >>> set_x = set(x) # for fast O(1) amortized lookup >>> x.extend(el for el in y if el not in set_x) >>> x [1, 2, 3, 4, 7, 8] ```
Appending new values to a python list
[ "", "python", "list", "find", "" ]
Can anyone please tell me how to get the total sum of each tool no in Rental Table corresponding to that which is been available in Tool Master Table **Rental** ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ |tool_no| a1 | a2 | a3 | a4 | |-------|----|----|----|----| | 121 | 52 | 89 | 56 | 16 | |-------|----|----|----|----| | 142 | 45 | 94 | 89 | 12 | |-------|----|----|----|----| | 142 | 24 | 56 | 12 | 89 | |-------|----|----|----|----| | 156 | 96 | 89 | 14 | 16 | |-------|----|----|----|----| ``` **ToolMaster** ``` _ _ _ _ |tool_no| |-------| | 121 | |-------| | 142 | |-------| | 141 | |-------| | 150 | |-------| ``` my query is given below ``` select tool_no, SUM(a1+a2+a3+a4) AS rentalsum from Rental where tool_no IN(select tool_no from ToolMaster) ``` but i'm not getting the required result My expected output is like this ``` _ _ _ _ _ _ _ _ _ _ _ | tool_no | rentalsum | |---------|-----------| | 121 | 213 | |---------|-----------| | 142 | 421 | |---------|-----------| ```
The only thing that is missing in your current query is the `GROUP BY` clause. [See Here](http://www.sqlfiddle.com/#!2/15701/3) But I would rather do it with a `JOIN` than `IN`. ``` SELECT a.Tool_No, SUM(b.a1 + b.a2 + b.a3 + b.a4) totalSum FROM ToolMaster a INNER JOIN Rental b ON a.Tool_No = b.Tool_No GROUP BY a.Tool_No ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/15701/1) To further gain more knowledge about joins, kindly visit the link below: * [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html) OUTPUT ``` ╔═════════╦══════════╗ ║ TOOL_NO ║ TOTALSUM ║ ╠═════════╬══════════╣ ║ 121 ║ 213 ║ ║ 142 ║ 421 ║ ╚═════════╩══════════╝ ```
``` select tool_no, SUM(a1)+SUM(a2)+SUM(a3)+SUM(a4) AS rentalsum from Rental Where tool_no in (select tool_no from ToolMaster) Group By tool_no ``` Using Join ``` select t.tool_no, SUM(a1)+SUM(a2)+SUM(a3)+SUM(a4) AS rentalsum from Rental t Inner Join ToolMaster tm ON t.tool_no = tm.tool_no Group By t.tool_no ```
mysql sum of rows considering two tables
[ "", "mysql", "sql", "" ]
The course that I am currently doing uses brackets in its WHERE clauses like so: ``` SELECT bar FROM Foo WHERE (CurrentState = 'happy'); ``` Is this standard sql ? If not then why use them? Doesn't seem to be used in the Date & Darwen book I have. --- **EDIT** Just to clarify - I'm referring to 1992 sql standards
Yes. You can use parentheses to bind components of where clauses. This isn't necessary in your example, but if you had multiple `and` and `or` components, you might need parentheses to either ensure correct order of operations or simply to self-document the query. Example 1: ``` select * from foo where (class='A' and subclass='B') or (class='C' and subclass='D') ``` In example 1, the parens aren't strictly required because `and` binds more tightly than `or`, but if you had multiple `or` conditions tied by `and` you would need them to get correct results, as in example 2 below. Example 2: ``` select * from foo where (class='A' or class='B') and (subclass='C' or subclass='D') ``` I use them in either case, because I don't like having to parse the sql in my head the same way the query optimizer does -- I'd rather be explicit about it and more quickly understand what the intent is.
They are optional. They make sense for more complex WHERE statements. ``` ... WHERE (created > '2012-10' AND created < '2013-01') OR (modified > '2012-10' AND modified < '2013-01') ```
Are brackets in the WHERE clause standard sql
[ "", "sql", "standards", "" ]
In Sql Server, I can do something like this: ``` IF (SELECT COUNT(*) FROM TABLE WHERE COLUMN=VALUE) = 0 SELECT 1,2,3 ELSE SELECT A,B,C FROM TABLE WHERE COLUMN=VALUE ``` I use that a lot, but now I have to work with MySql and I can't really get that to work :) I guess the same applies to an upsert: ``` IF (SELECT COUNT(*) FROM TABLE WHERE COLUMN=VALUE) = 0 INSERT INTO TABLE(COLUMN, OTHERCOLUMN) VALUES(VALUE, OTHERVALUE) ELSE UPDATE TABLE SET OTHERCOLUMN=OTHERVALUE WHERE COLUMN=VALUE ``` How do you do 'if subquery else' in MySql?
You could use something like this: ``` SELECT * FROM (SELECT 1,2,3) s WHERE (SELECT COUNT(*) FROM TABLE WHERE COLUMN=VALUE) = 0 UNION ALL SELECT A,B,C FROM TABLE WHERE COLUMN=VALUE ```
Assuming that columns a, b, and c do not have `NULL` values, you can use this as well: ``` SELECT COALESCE(t.a, 1) AS a, COALESCE(t.b, 2) AS b, COALESCE(t.c, 3) AS c FROM (SELECT 1 AS dummy FROM dual) AS d LEFT JOIN TableX AS t ON t.columnX = Value ; ```
IF (SUBQUERY) = SOMETHING SELECT A ELSE SELECT B MySql
[ "", "mysql", "sql", "subquery", "" ]
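The accepted answer's `UNION ALL` trick can be sketched with SQLite via the stdlib `sqlite3` module (the table `t` and its column names are stand-ins for the question's `TABLE`/`COLUMN`; MySQL syntax differs only cosmetically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER, k TEXT)")

# First branch returns the constant row only when no match exists;
# second branch returns the real rows when they do.
query = """
SELECT * FROM (SELECT 1, 2, 3)
WHERE (SELECT COUNT(*) FROM t WHERE k = ?) = 0
UNION ALL
SELECT a, b, c FROM t WHERE k = ?
"""

# No matching row yet: the constant row is returned.
before = conn.execute(query, ("x", "x")).fetchall()

# After inserting a match, the real row is returned instead.
conn.execute("INSERT INTO t VALUES (7, 8, 9, 'x')")
after = conn.execute(query, ("x", "x")).fetchall()

print(before, after)   # [(1, 2, 3)] [(7, 8, 9)]
```

Exactly one of the two branches produces rows on any given run, which is what emulates the `IF … ELSE` without procedural SQL.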
I have the following code, where sortings\_list consists of 2 items, like

```
sortings_list = ['code', 'name']

for i in xrange(0, len(sortings_list)):
    if sortings_list[i] == '-%s' % field:
        sortings_list.pop(i)
```

This gives me a "list index out of range" error. Any ideas?
You are removing items from a list while iterating, if you remove the first item then the second item's index *changes*. Use a list comprehension instead: ``` sortings_list = [elem for elem in sortings_list if not elem == '-%s' % field] ```
You're calling pop() on the first item which removes it, and now the list only has one element. Then you try to iterate to the second item, which doesn't exist anymore.
List index out of range error while iterating over a list
[ "", "python", "list", "iteration", "" ]
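The list-comprehension fix from the accepted answer above can be run directly; the sample values for `field` and `sortings_list` are made up here so the snippet is self-contained:

```python
field = "code"
sortings_list = ["-code", "name"]

# Build a new list instead of popping items out of the list being indexed,
# so no index ever goes stale.
sortings_list = [elem for elem in sortings_list if elem != "-%s" % field]
print(sortings_list)   # ['name']
```

The original loop failed because `pop(i)` shrinks the list while `i` keeps counting toward the original length; filtering into a fresh list sidesteps that entirely.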
Today I think I might have an easy question. I have some code that asks the user to select a number from 1 to 10 which refers to a list. If the user makes an incorrect input, i.e. 55, I want the code to loop back and ask them to make another selection. So far I have the following code but I'm unsure how to make it loop. Thanks in advance

```
print 'Choose a Base Weather Station'
print 'Enter the corresponding station number'
selection = int(raw_input('Enter a number from: 1 to 10'))
if selection == 1:
    print 'You have selected Karratha Aero as your Base Station'
elif selection == 2:
    print 'You have selected Dampier Salt as your Base Station'
elif selection == 3:
    print 'You have selected Karratha Station as your Base Station'
elif selection == 4:
    print 'You have selected Roebourne Aero as your Base Station'
elif selection == 5:
    print 'You have selected Roebourne as your Base Station'
elif selection == 6:
    print 'You have selected Cossack as your Base Station'
elif selection == 7:
    print 'You have selected Warambie as your Base Station'
elif selection == 8:
    print 'You have selected Pyramid Station as your Base Station'
elif selection == 9:
    print 'You have selected Eramurra Pool as your Base Station'
elif selection == 10:
    print 'You have selected Sherlock as your Base Station'
else:
    print 'You have made an error. Please choose a number from 1 to 10'
```
First off, you should have a list of all possible base stations instead of manually constructing the ten strings to print, as in `basestations = ["", "Karratha Aero", "Dampier Salt", ...]` Then you can do this: `basestations[1]` to get the string at index 1 (the first index is 0), e.g. in general `basestations[selection]`. And now you only need one print statement for all ten possibilities. (Hint: You can concatenate two strings by doing `stringa + stringb`) Second, use a `while` loop. The condition of the while loop should be true if no valid selection was made, and false if a valid selection was made. Unlike `if`, the body of a `while` will go back and check the condition after it reaches the end, and if it's true again it will execute again.
One approach that you can take is to use a while-loop to ensure that the input is within a certain range. ``` selection = 0 first = True print 'Choose a Base Weather Station' print 'Enter the corresponding station number' while selection < 1 or selection > 10: if(first == True): first = False else: print 'You have made an error. Please choose a number from 1 to 10' selection = int(raw_input('Enter a number from: 1 to 10')) if selection == 1: print 'You have selected Karratha Aero as your Base Station' elif selection == 2: print 'You have selected Dampier Salt as your Base Station' elif selection == 3: print 'You have selected Karratha Station as your Base Station' elif selection == 4: print 'You have selected Roebourne Aero as your Base Station' elif selection == 5: print 'You have selected Roebourne as your Base Station' elif selection == 6: print 'You have selected Cossack as your Base Station' elif selection == 7: print 'You have selected Warambie as your Base Station' elif selection == 8: print 'You have selected Pyramid Station as your Base Station' elif selection == 9: print 'You have selected Eramurra Pool as your Base Station' elif selection == 10: print 'You have selected Sherlock as your Base Station' else: print 'Something went wrong' ```
How do I make my code loop if the user's input is incorrect?
[ "", "python", "loops", "if-statement", "" ]
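A minimal sketch of the accepted answer's two suggestions (a station list plus a `while` loop), written for Python 3. A canned iterator of answers stands in for `raw_input()` so the sketch runs non-interactively; in the real program you would read from the user instead:

```python
stations = ["Karratha Aero", "Dampier Salt", "Karratha Station",
            "Roebourne Aero", "Roebourne", "Cossack", "Warambie",
            "Pyramid Station", "Eramurra Pool", "Sherlock"]

# Simulated user input: two bad answers, then a valid one.
answers = iter(["55", "abc", "3"])

selection = None
while selection is None:
    text = next(answers)          # real code: raw_input(...) / input(...)
    try:
        number = int(text)
    except ValueError:
        number = 0                # non-numeric input counts as invalid
    if 1 <= number <= 10:
        selection = number
    # otherwise the while condition is still true, so we ask again

chosen = stations[selection - 1]
print("You have selected %s as your Base Station" % chosen)
```

The loop condition replaces the ten `elif` branches with a single list lookup, and invalid input simply falls through to another iteration.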
How do I enumerate two lists of equal length simultaneously? I am sure there must be a more pythonic way to do the following: ``` for index, value1 in enumerate(data1): print index, value1 + data2[index] ``` I want to use the index and data1[index] and data2[index] inside the for loop.
Use [`zip`](https://docs.python.org/2.7/library/functions.html#zip) for both Python2 and Python3: ``` for index, (value1, value2) in enumerate(zip(data1, data2)): print(index, value1 + value2) # for Python 2 use: `print index, value1 + value2` (no parentheses) ``` Note that `zip` runs only up to the shorter of the two lists (not a problem for equal length lists), but, in case of unequal length lists, if you want to traverse the whole list then use [`itertools.izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest).
``` for i, (x, y) in enumerate(zip(data1, data2)): ``` In Python 2.x, you might want to use `itertools.izip` instead of `zip`, esp. for very long lists.
Enumerate two python lists simultaneously?
[ "", "python", "list", "" ]
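The `enumerate(zip(...))` pattern from the answers above, run on small made-up lists so the indices and sums are easy to check:

```python
data1 = [10, 20, 30]
data2 = [1, 2, 3]

sums = []
for index, (value1, value2) in enumerate(zip(data1, data2)):
    # index counts positions; value1/value2 are the paired elements.
    sums.append((index, value1 + value2))

print(sums)   # [(0, 11), (1, 22), (2, 33)]
```

The tuple unpacking in the `for` target is what lets both list elements be named directly, with no `data2[index]` lookups.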
I need help in regex or Python to extract a substring from a set of strings. The strings consist of alphanumeric characters. I just want the substring that starts after the first space and ends before the last space, like the example given below. ``` Example 1: A:01 What is the date of the election ? BK:02 How long is the river Nile ? Results: What is the date of the election How long is the river Nile ``` While I am at it, is there an easy way to extract strings before or after a certain character? For example, I want to extract the date or day from a string like the ones given in Example 2. ``` Example 2: Date:30/4/2013 Day:Tuesday Results: 30/4/2013 Tuesday ``` I have actually read about regex but it's very alien to me. Thanks.
I recommend using `split` ``` >>> s="A:01 What is the date of the election ?" >>> " ".join(s.split()[1:-1]) 'What is the date of the election' >>> s="BK:02 How long is the river Nile ?" >>> " ".join(s.split()[1:-1]) 'How long is the river Nile' >>> s="Date:30/4/2013" >>> s.split(":")[1:][0] '30/4/2013' >>> s="Day:Tuesday" >>> s.split(":")[1:][0] 'Tuesday' ```
``` >>> s="A:01 What is the date of the election ?" >>> s.split(" ", 1)[1].rsplit(" ", 1)[0] 'What is the date of the election' >>> ```
Extracting sub-string after the first space in Python
[ "", "python", "regex", "" ]
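Both split-based recipes from the accepted answer, condensed into one runnable snippet (the sample strings are the ones from the question):

```python
s = "A:01 What is the date of the election ?"

# split() on whitespace, drop the first and last tokens, rejoin.
middle = " ".join(s.split()[1:-1])
print(middle)        # What is the date of the election

d = "Date:30/4/2013"

# Split on the first ':' only and keep everything after it.
after_colon = d.split(":", 1)[1]
print(after_colon)   # 30/4/2013
```

Passing `1` as the second argument to `split` guards against extra colons later in the value (e.g. a time like `Date:30/4/2013 10:00`), which the answer's `split(":")[1:][0]` form would also handle, just less directly.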
I have two tables named `Table 1` and `Table 2`. Both of these tables contain a column named `address`. `Table 1` contains about 1200 records while `Table 2` has another 1 million records in store. Now, what I'd like to do is to find the count of records in `Table 1` where a row with a matching address also exists in `Table 2`. I am new to SQL - could anybody please tell me how to get the aforementioned row count?
`Select Count( * ) from Table1 Where address in ( select address from Table2 )`
``` select count(*) from Table1 INNER JOIN Table2 on Table1.address = Table2.address ```
SQL query to get count of records in Table A and also in Table B
[ "", "sql", "postgresql", "" ]
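The difference between the two answers above is worth seeing on data: `IN` counts each `Table1` row at most once, while `INNER JOIN` multiplies a row by every duplicate match in `Table2`. A SQLite sketch via the stdlib `sqlite3` module, with tiny made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (address TEXT)")
conn.execute("CREATE TABLE t2 (address TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("a",), ("b",), ("c",)])
# note: "a" appears twice in t2
conn.executemany("INSERT INTO t2 VALUES (?)", [("a",), ("a",), ("c",), ("d",)])

in_count = conn.execute(
    "SELECT COUNT(*) FROM t1 WHERE address IN (SELECT address FROM t2)"
).fetchone()[0]

join_count = conn.execute(
    "SELECT COUNT(*) FROM t1 INNER JOIN t2 ON t1.address = t2.address"
).fetchone()[0]

print(in_count, join_count)   # 2 3
```

If `Table 2` can contain duplicate addresses and you want "how many `Table 1` rows have a match", the `IN` (or `EXISTS`) form gives the intended count.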
I'm using the northwind database to refresh my SQL skills by creating some more or less complex queries. Unfortunately I could not find a solution for my last use case: "Get the sum of the five greatest orders for every category in year 1997." The tables involved are: ``` Orders(OrderId, OrderDate) Order Details(OrderId, ProductId, Quantity, UnitPrice) Products(ProductId, CategoryId) Categories(CategoryId, CategoryName) ``` I have tried the following query ``` SELECT c.CategoryName, SUM( (SELECT TOP 5 od2.UnitPrice*od2.Quantity FROM [Order Details] od2, Products p2 WHERE od2.ProductID = p2.ProductID AND c.CategoryID = p2.CategoryID ORDER BY 1 DESC)) FROM [Order Details] od, Products p, Categories c, Orders o WHERE od.ProductID = p. ProductID AND p.CategoryID = c.CategoryID AND od.OrderID = o.OrderID AND YEAR(o.OrderDate) = 1997 GROUP BY c.CategoryName ``` Well... It turned out that subqueries are not allowed in aggregate functions. I've read other posts about this issue but could not find a solution for my specific use case. Hope you can help me out...
Subqueries are not generally allowed in aggregate functions. Instead, move the aggregate *inside* the subquery. In this case, you'll need an extra level of subquery because of the `top 5`: ``` SELECT c.CategoryName, (select sum(val) from (SELECT TOP 5 od2.UnitPrice*od2.Quantity as val FROM [Order Details] od2, Products p2 WHERE od2.ProductID = p2.ProductID AND c.CategoryID = p2.CategoryID ORDER BY 1 DESC ) t ) FROM [Order Details] od, Products p, Categories c, Orders o WHERE od.ProductID = p. ProductID AND p.CategoryID = c.CategoryID AND od.OrderID = o.OrderID AND YEAR(o.OrderDate) = 1997 GROUP BY c.CategoryName, c.CategoryId ```
Use [CTE](http://msdn.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) with [ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734.aspx) ranking function instead of excessive subquery. ``` ;WITH cte AS ( SELECT c.CategoryName, od2.UnitPrice, od2.Quantity, ROW_NUMBER() OVER(PARTITION BY c.CategoryName ORDER BY od2.UnitPrice * od2.Quantity DESC) AS rn FROM [Order Details] od JOIN Products p ON od.ProductID = p.ProductID JOIN Categories c ON p.CategoryID = c.CategoryID JOIN Orders o ON od.OrderID = o.OrderID WHERE o.OrderDate >= DATEADD(YEAR, DATEDIFF(YEAR, 0, '19970101'), 0) AND o.OrderDate < DATEADD(YEAR, DATEDIFF(YEAR, 0, '19970101')+1, 0) ) SELECT CategoryName, SUM(UnitPrice * Quantity) AS val FROM cte WHERE rn < 6 GROUP BY CategoryName ```
SQL - Subquery in Aggregate Function
[ "", "sql", "subquery", "sql-server-2012", "aggregate-functions", "northwind", "" ]
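The "sum of the N greatest order values per category" logic can also be checked outside SQL. This is not the stored-procedure solution from either answer, just a plain-Python cross-check using `heapq.nlargest`; the `(category, order_value)` pairs are made up stand-ins for the joined Northwind rows, and top-2 replaces top-5 to keep the data small:

```python
from heapq import nlargest

rows = [("Beverages", 100), ("Beverages", 50), ("Beverages", 80),
        ("Produce", 30), ("Produce", 10)]

# Group the order values by category.
totals = {}
for category, value in rows:
    totals.setdefault(category, []).append(value)

# Sum the top-2 values inside each group.
top_sums = {cat: sum(nlargest(2, values)) for cat, values in totals.items()}
print(top_sums)   # {'Beverages': 180, 'Produce': 40}
```

This mirrors what the correlated `TOP 5 ... ORDER BY ... DESC` subquery (or the `ROW_NUMBER() ... PARTITION BY` CTE) computes per category, so it is handy for validating the SQL on a sample.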
I have been reading the django book and the django documentation, but still can't figure it out. I have this `model.py`:

```
from django.db import models
from django.forms import ModelForm

class Zonas(models.Model):
    name = models.CharField(max_length=30)
    def __unicode__(self):
        return self.name

class ZonasForm(ModelForm):
    class Meta:
        model = Zonas
```

this `view.py`:

```
from django import forms
from testApp.models import Zonas
from django.shortcuts import render_to_response

def menuForm (request):
    z = list (Zonas.objects.all())
    numbers = forms.CharField(max_length=30, widget=forms.Select(choices=z))
    return render_to_response('hello.html', {'numbers':numbers})
```

this `html`:

```
<html>
<body>
<form action="" method="get">
    <div class="field">
        {{ form.numbers }}
    </div>
    <input type="submit" value="Submit">
</form>
</body>
</html>
```

And this `urls.py`:

```
from django.conf.urls import patterns, include, url
from testApp.views import menuForm

urlpatterns = patterns('',
    url(r'^hello/$', menuForm ),
)
```

All I get when I run the server is a page only with the submit button, and no form.number which is supposed to be a select menu. I tried this `views.py`:

```
def menuForm (request):
    z = list (Zonas.objects.all())
    numbers = forms.ChoiceField(choices=z)
    return render_to_response('hello.html', {'numbers':numbers})
```

But the result is the same... Any hints? Should I use a different return?
You are trying to access `{{ form.numbers }}` when you never pass a `form` variable to the template. You would need to access `numbers` directly with `{{ numbers }}`. Also you aren't quite using forms correctly. Check this out <https://docs.djangoproject.com/en/dev/topics/forms/> Create a menu form that contains a [ModelChoiceField](https://docs.djangoproject.com/en/dev/ref/forms/fields/#django.forms.ModelChoiceField) ### forms.py ``` class MenuForm(Form): zonas = forms.ModelChoiceField(queryset=Zonas.objects.all()) ``` Now [use that form in your view](https://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view) ### views.py ``` from myapp.forms import MenuForm def menuForm(request): if request.method == 'POST': # If the form has been submitted... form = MenuForm(request.POST) # A form bound to the POST data if form.is_valid(): # All validation rules pass # Process the data in form.cleaned_data # ... return HttpResponseRedirect('/success/') # Redirect after POST else: form = MenuForm() # An unbound form return render(request, 'hello.html', { 'form': form, }) ``` Now you can [use the form in your template](https://docs.djangoproject.com/en/dev/topics/forms/#displaying-a-form-using-a-template) ### hello.html ``` <html> <body> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Submit"> </form> </body> </html> ```
How about using `ModelChoiceField`? This code works for me. views.py: ``` from django import forms from models import Zonas from django.shortcuts import render_to_response class NumbersForm(forms.Form): numbers = forms.ModelChoiceField(queryset=Zonas.objects.all()) def menuForm(request): form = NumbersForm() return render_to_response('hello.html', {'form': form}) ``` hello.html: ``` <form action="" method="get"> <div class="field"> {{ form.as_p }} </div> <input type="submit" value="Submit"> </form> </body> </html> ```
newbie Django Choicefield/Charfield display on html page
[ "", "python", "html", "django", "forms", "return", "" ]
I'm struggling to figure out what's wrong with my code. I am trying to randomly select a key from several dictionaries, then add that key to the beginning of a list. For example: ``` import random list1 = ["list"] dict1 = {"key1" : "def1", "key2" : "def2", "key3" : "def3"} dict2 = {"key4" : "def4", "key5" : "def5", "key6" : "def6"} DICTIONARIES = [dict1, dict2] value = random.choice(DICTIONARIES) key = random.choice(list(value.keys())) list1[:0] = key print (list1) ``` What I want, is a print result of [key5, list]. What I get is ['k', 'e', 'y', '5', list] Any ideas? is there a better way to search multiple dictionaries for a random key that will produce the desired results? Thanks.
I suppose that `item` variable is the same as `list1`. If yes, try this: ``` list1[:0] = [key] ``` Or, alternatively you may use the `list.insert` function instead of slice assignment: ``` list1.insert(0, key) ``` Your version was working like the following: * Before assignment: `list1 = ['list']`, `key = 'key5'` * Left side of assignment refers to element before `'list'` in `list1` * Right side refers to value of `key`, which is `"key5"`. * `"key5"` is a sequence of `k`, `e`, `y` and `5`. * So, by `list1[:0] = key` we concatenate `"key5"` *sequence* and `list1`. * But in `list1[:0] = [key]` we concatenate `[key]` sequence (that has only one element that equals to "key5") and `list1`. And that's what we actually want.
Something like this: ``` import random list1 = ["list"] dict1 = {"key1" : "def1", "key2" : "def2", "key3" : "def3"} dict2 = {"key4" : "def4", "key5" : "def5", "key6" : "def6"} all_keys = list(dict1.keys() | dict2.keys()) #creates a list containing all keys key = random.choice(all_keys) list1.insert(0,key) print (list1) #prints ['key1', 'list'] ```
Adding a random key from a dictionary to a list in python 3.3
[ "", "python", "python-3.3", "" ]
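The core of the accepted answer above is how slice assignment treats its right-hand side as a *sequence*. A deterministic demo (no `random`, so the outcome is fixed):

```python
list1 = ["list"]
key = "key5"

# Assigning a string to a slice splices in its characters one by one...
chars = list(list1)
chars[:0] = key
print(chars)      # ['k', 'e', 'y', '5', 'list']

# ...while wrapping it in a one-element list prepends it whole...
whole = list(list1)
whole[:0] = [key]
print(whole)      # ['key5', 'list']

# ...and list.insert does the same thing more readably.
inserted = list(list1)
inserted.insert(0, key)
print(inserted)   # ['key5', 'list']
```

A string is itself a sequence of characters, which is exactly why the bare `list1[:0] = key` form scattered `'k'`, `'e'`, `'y'`, `'5'` into the list.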
I have 6 tables named A, B, C, D, E, F, and I have A's primary key. B has a ref to A, C has a ref to B, D has a ref to C, E has a ref to A, and F has a ref to B. Now I want to delete A's records and their related records in the other tables, and I have only A's primary key. What is the best way to delete all the records? Thanks in advance.
One of * Use a stored procedure to delete child tables, using WHERE EXISTS * Set foreign keys to CASCADE DELETE, so all child rows are deleted when you DELETE from A See [How do I use cascade delete with SQL Server?](https://stackoverflow.com/questions/6260688/sql-server-cascade-delete)
Delete the rows in the child tables by SELECTing their foreign key equal to A's primary key, then delete from A using the primary key.
How to delete data in child tables?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
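The `ON DELETE CASCADE` option from the accepted answer can be demonstrated with SQLite via the stdlib `sqlite3` module. The three-table chain `a → b → c` below is a made-up miniature of the question's A/B/C; note SQLite needs `PRAGMA foreign_keys = ON` per connection, whereas SQL Server enforces the constraint by default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite-specific switch
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE b (id INTEGER PRIMARY KEY,
                a_id INTEGER REFERENCES a(id) ON DELETE CASCADE)""")
conn.execute("""CREATE TABLE c (id INTEGER PRIMARY KEY,
                b_id INTEGER REFERENCES b(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO a VALUES (1)")
conn.execute("INSERT INTO b VALUES (10, 1)")
conn.execute("INSERT INTO c VALUES (100, 10)")

# Deleting the root row cascades down the whole chain: a -> b -> c.
conn.execute("DELETE FROM a WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM c").fetchone()[0]
print(remaining)   # 0
```

With the cascade in place, a single `DELETE FROM a WHERE id = ?` using only A's primary key removes every dependent row, which is what the question asks for.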
I'm trying to get the Earth distance and the right ascension (relative to my observer point in Earth) of a satellite not orbiting the Earth, but pyEphem isn't returning the same properties as other solar bodies. With Ganymede (the largest moon of Jupiter), for instance: ``` import math, ephem Observer = ephem.city('London') Observer.date = '2013-04-23' Observer.pressure, Observer.elevation = 0, 100 moonGanymede = ephem.Ganymede(Observer) print math.cos(moonGanymede.ra) # right ascension print moonGanymede.earth_distance * ephem.meters_per_au # distance ``` I get this error: ``` AttributeError: 'Ganymede' object has no attribute 'earth_distance' ``` ~~The `ra` attribute exists, but is it relative to my `Observer` or to Jupiter?~~ *Seems to be relative to the `Observer`, since if I change the location, the value changes too.* I've read [the documentation](http://rhodesmill.org/pyephem/quick.html#bodies) and I know that these properties are not defined for moons, but I have no idea how to compute those relative to the Earth given the additional defined properties of moon bodies: > On planetary moons, also sets: > > Position of moon relative to planet (measured in planet radii) > > ``` > x — offset +east or –west > y — offset +south or –north > z — offset +front or –behind > ``` Doing: ``` print moonGanymede.x, moonGanymede.y, moonGanymede.z ``` Outputs: ``` -14.8928060532 1.52614057064 -0.37974858284 ``` Since Jupiter has an average radius of 69173 kilometers, those values translate to: ``` moonGanymede.x = 1030200 kilometers (west) moonGanymede.y = 105570 kilometers (south) moonGanymede.z = 26268 kilometers (behind) ``` Given that I know the distance and right ascension of Jupiter relative to the `Observer`, ***how can I calculate the distance and right ascension of `moonGanymede` (also relative to the `Observer`)***? I'm using pyEphem 3.7.5.1 (with Python 2.7).
I'm still trying to figure it out (if anyone spots something, please do tell), but it *seems* that if I do: ``` sqrt((-14.8928060532)^2 + (1.52614057064)^2 + (-0.37974858284)^2) = 14.9756130481 ``` I'll always get a value that always falls within the min/max distance from orbit center (14.95 - 14.99). Since that's specified in orbit center radii, I'll need to multiply it by 69173 \* 1000 to get the SI unit: ``` 14.9756130481 * 69173 * 1000 = 1.0359080813762213 * 10^9 meters ``` Since pyEphem deals in distances with AU: ``` print (1.0359080813762213 * 10**9) / ephem.meters_per_au # 0.00692461785302 ``` At the same time, the Earth-Jupiter distance was `5.79160547256` AU. Now, to get the distance, I should either add or subtract depending on the sign of the `z` coordinate: ``` 5.79160547256 - 0.00692461785302 = 5.78468085470698 AU ``` Running the same code for today (now) returns `6.03799937821` which seems to very close to the value of `6.031` that [WolframAlpha](http://www.wolframalpha.com/input/?i=Ganymede%20distance) is returning at the present time, it doesn't match 100% but perhaps that could be accounted for by some different underlying ephemeris library or data source. Not sure...
Just some thoughts; you probably need to do it in two steps. 1. Get the location of the satellite relative to the parent planet 2. Get the location of the planet relative to the observer 3. Trigonometry calculation; get the location of the satellite relative to the observer. You already did 1, and can easily do 2. Convert all values to x,y,z, and then back to angular. Or I'm sure you / ephem can do this for you directly. HTH
pyEphem - Calculating Positions of non-Earthy Moons
[ "", "python", "math", "geometry", "astronomy", "pyephem", "" ]
I am trying to match item number in the following text: ``` User Number 1 will probably like movie ID: RecommendedItem[item:557, value:7.32173]the most! ``` Here is what I tried: ``` myfile = open('result.txt', 'r') for line in myfile: m = re.search('(?<=RecommendedItem\[item:).+,', line) print m ``` However, this is giving me an error: ``` <_sre.SRE_Match object at 0x106e79168> ``` I was wondering what might be wrong with the look behind assertion...(Lookbehind assertions don't seem to work on regexpal.com, even something as simple as (?<=a)b is highlighted in red there)
That isn't an error! The result of `re.search` is a MatchObject, which contains the matches inside it; these are accessed via `MatchObject.group(groupNumber, ...)` (there are other ways, such as `groups()` etc.) as explained here: [group](http://docs.python.org/2/library/re.html#re.MatchObject.group). You would want a regex as follows to do what you ask:

```
import re

t = "User Number 1 will probably like movie ID: RecommendedItem[item:557, value:7.32173]the most!"
itemNumber = re.search(r'(?<=RecommendedItem\[item:)(\d+)',t)
print(itemNumber.group(0))
```

Or alternatively, using a named group:

```
import re

t = "User Number 1 will probably like movie ID: RecommendedItem[item:557, value:7.32173]the most!"
itemNumber = re.search(r'(?<=RecommendedItem\[item:)(?P<itemNumber>\d+)',t)
print(itemNumber.groupdict()["itemNumber"]) #note using groupdict() not group()
```

Producing

```
>>> 557
```
To get the matching part of your regexp you should use

```
if m:
    print line[m.start():m.end()]
```

Or you may use the `re.findall` function that returns a list of matches:

```
m = re.findall('(?<=RecommendedItem\[item:).+,', line)
if m:
    print m[0]
```

You may also want to modify your regexp a bit not to have a comma at the end: `'(?<=RecommendedItem\[item:)\d+'`
Having difficulties writing the right Regular expression
[ "", "python", "regex", "" ]
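The key point from the answers above - that `<_sre.SRE_Match object at 0x...>` is a *match object*, not an error, and the text lives behind `.group()` - in a runnable Python 3 form using the question's own input line and lookbehind pattern:

```python
import re

line = ("User Number 1 will probably like movie ID: "
        "RecommendedItem[item:557, value:7.32173]the most!")

# Lookbehind keeps the prefix out of the match; \d+ avoids the trailing comma.
m = re.search(r'(?<=RecommendedItem\[item:)\d+', line)
if m:
    print(m.group(0))   # 557
```

Printing `m` directly shows the match object's repr; `m.group(0)` (or `m.group()`) returns the matched substring itself.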
I am creating random Toeplitz matrices to estimate the probability that they are invertible. My current code is ``` import random from scipy.linalg import toeplitz import numpy as np for n in xrange(1,25): rankzero = 0 for repeats in xrange(50000): column = [random.choice([0,1]) for x in xrange(n)] row = [column[0]]+[random.choice([0,1]) for x in xrange(n-1)] matrix = toeplitz(column, row) if (np.linalg.matrix_rank(matrix) < n): rankzero += 1 print n, (rankzero*1.0)/50000 ``` Can this be sped up? I would like to increase the value 50000 to get more accuracy but it is too slow to do so currently. Profiling using only `for n in xrange(10,14)` shows ``` 400000 9.482 0.000 9.482 0.000 {numpy.linalg.lapack_lite.dgesdd} 4400000 7.591 0.000 11.089 0.000 random.py:272(choice) 200000 6.836 0.000 10.903 0.000 index_tricks.py:144(__getitem__) 1 5.473 5.473 62.668 62.668 toeplitz.py:3(<module>) 800065 4.333 0.000 4.333 0.000 {numpy.core.multiarray.array} 200000 3.513 0.000 19.949 0.000 special_matrices.py:128(toeplitz) 200000 3.484 0.000 20.250 0.000 linalg.py:1194(svd) 6401273/6401237 2.421 0.000 2.421 0.000 {len} 200000 2.252 0.000 26.047 0.000 linalg.py:1417(matrix_rank) 4400000 1.863 0.000 1.863 0.000 {method 'random' of '_random.Random' objects} 2201015 1.240 0.000 1.240 0.000 {isinstance} [...] ```
One way is to save some work from repeated calling of the toeplitz() function by caching the indexes where the values are being put. The following code is ~ 30% faster than the original code. The rest of the performance is in the rank calculation... And I don't know whether there exists a faster rank calculation for toeplitz matrices with 0s and 1s. (update) The code is actually ~ 4 times faster if you replace matrix\_rank by scipy.linalg.det() == 0 (the determinant is faster than rank calculation for small matrices)

```
import random
from scipy.linalg import toeplitz, det
import numpy as np,numpy.random

class si: #cache of info for toeplitz matrix construction
    indx = None
    l = None

def xtoeplitz(c,r):
    vals = np.concatenate((r[-1:0:-1], c))
    if si.indx is None or si.l != len(c):
        a, b = np.ogrid[0:len(c), len(r) - 1:-1:-1]
        si.indx = a + b
        si.l = len(c)
    # `indx` is a 2D array of indices into the 1D array `vals`, arranged so
    # that `vals[indx]` is the Toeplitz matrix.
    return vals[si.indx]

def doit():
    for n in xrange(1,25):
        rankzero = 0
        si.indx=None
        for repeats in xrange(5000):
            column = np.random.randint(0,2,n)
            #column=[random.choice([0,1]) for x in xrange(n)] # original code
            row = np.r_[column[0], np.random.randint(0,2,n-1)]
            #row=[column[0]]+[random.choice([0,1]) for x in xrange(n-1)] # original code
            matrix = xtoeplitz(column, row)
            #matrix=toeplitz(column,row) # original code
            #if (np.linalg.matrix_rank(matrix) < n): # original code
            if np.abs(det(matrix))<1e-4: # should be faster for small matrices
                rankzero += 1
        print n, (rankzero*1.0)/5000
```
These two lines that build the lists of 0s and 1s:

```
column = [random.choice([0,1]) for x in xrange(n)]
row = [column[0]]+[random.choice([0,1]) for x in xrange(n-1)]
```

have a number of inefficiencies. They build, expand, and discard lots of lists unnecessarily, and they call random.choice() on a list to get what's really just one random bit. I sped them up by about 500% like this:

```
column = [0 for i in xrange(n)]
row = [0 for i in xrange(n)]

# NOTE: n must be less than 32 here, or remove int() and lose some speed
cbits = int(random.getrandbits(n))
rbits = int(random.getrandbits(n))
for i in xrange(n):
    column[i] = cbits & 1
    cbits >>= 1
    row[i] = rbits & 1
    rbits >>= 1
row[0] = column[0]
```
Speed up random matrix computation
[ "", "python", "performance", "math", "numpy", "scipy", "" ]
I want to know the version of a running SQL Server. Would you please help with this?
Connect to the instance of SQL Server, and then run the following query: ``` Select @@version ``` An example of the output of this query is as follows: ``` Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64) Mar 29 2009 10:11:52 Copyright (c) 1988-2008 Microsoft Corporation Express Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) ``` As shown here: <http://support.microsoft.com/kb/321185?wa=wsignin1.0>
For a thorough list of version information and other properties, try [xp\_msver](http://msdn.microsoft.com/en-gb/library/ms187372.aspx) For example: ``` EXEC master..xp_msver ``` Which gives output of the form: ``` 1 ProductName NULL Microsoft SQL Server 2 ProductVersion 589824 9.00.4053.00 3 Language 1033 English (United States) 4 Platform NULL NT AMD64 5 Comments NULL NT AMD64 6 CompanyName NULL Microsoft Corporation 7 FileDescription NULL SQL Server Windows NT - 64 Bit 8 FileVersion NULL 2005.090.4053.00 9 InternalName NULL SQLSERVR 10 LegalCopyright NULL © Microsoft Corp. All rights reserved. 11 LegalTrademarks NULL Microsoft® is a registered trademark of Microsoft Corporation. Windows(TM) is a trademark of Microsoft Corporation 12 OriginalFilename NULL SQLSERVR.EXE 13 PrivateBuild NULL NULL 14 SpecialBuild 265617408 NULL 15 WindowsVersion 248381957 5.2 (3790) 16 ProcessorCount 8 8 17 ProcessorActiveMask 8 ff 18 ProcessorType 8664 NULL 19 PhysicalMemory 32768 32768 (34359439360) 20 Product ID NULL NULL ``` There is an entire knowledge base article about retrieving [SQL Server version information](http://support.microsoft.com/default.aspx?scid=kb;en-us;321185) - in addition to the other answer, using `@@Version`, you can also use: ``` SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition') ``` The reason SERVERPROPERTY is sometimes preferred is that @@Version returns the `OS Service Pack` Level, *not* the`SQL Server Service Pack` level in older versions - see <http://beyondrelational.com/modules/2/blogs/69/posts/18272/sql-server-version-showing-incorrect-service-pack-information.aspx> and <http://www.sqlservercentral.com/Forums/Topic1085701-324-1.aspx#bm1127863> for examples.
How to check what is version of SQL Server?
[ "", "sql", "sql-server", "" ]
So here is the context: Developing an ASP.NET MVC 4 web app, I have in my database a table **ProductAllocations** which is composed of 2 foreign keys: one from my table **Products** and the other from the table **Persons**. I have another table, **Vehicles**, which contains a foreign key of the table **Products**. I want to select the allocations and their information grouped by product (a product can be allocated several times). Here is my stored procedure:

```
ALTER PROCEDURE GetAllVehicles
AS
BEGIN
    SET NOCOUNT ON
    SELECT p.FirstName, p.LastName, pa.EndDate, pr.PurchaseDate,
           pr.SerialNumber, pr.CatalogPrice, v.PlateNumber,
           v.FirstCirculationDate, V.FirstDrivingTax, v.UsualDrivingTax
    FROM bm_ProductAllocations AS pa
    INNER JOIN bm_Persons AS p ON pa.Id_Person = p.Id_Person
    INNER JOIN bm_Products AS pr ON pa.Id_Product = pr.Id_Product
    INNER JOIN bm_Vehicles AS v ON pr.Id_Product = v.Id_Product
    GROUP BY pa.Id_Product
END
```

However, the GROUP BY clause is generating an error: `Column 'bm_Persons.FirstName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.` I'm working with Visual Studio 2010. I'm new to SQL so I have no idea about what's going on.
For the fields in your SELECT list, you need to either use an aggregate function (`sum`, `max`, etc.) or include the columns in the GROUP BY clause; see the following links: [SQL Group By](http://www.w3schools.com/sql/sql_groupby.asp) & [summarizing values](http://www.thunderstone.com/site/texisman/summarizing_values.html)

```
SELECT p.FirstName
    ,p.LastName
    ,pa.EndDate
    ,pr.PurchaseDate
    ,pr.SerialNumber
    ,pr.CatalogPrice
    ,v.PlateNumber
    ,v.FirstCirculationDate
    ,v.FirstDrivingTax
    ,v.UsualDrivingTax
FROM bm_ProductAllocations AS pa
INNER JOIN bm_Persons AS p ON pa.Id_Person = p.Id_Person
INNER JOIN bm_Products AS pr ON pa.Id_Product = pr.Id_Product
INNER JOIN bm_Vehicles AS v ON pr.Id_Product = v.Id_Product
GROUP BY pa.Id_Product
    ,p.FirstName
    ,p.LastName
    ,pa.EndDate
    ,pr.PurchaseDate
    ,pr.SerialNumber
    ,pr.CatalogPrice
    ,v.PlateNumber
    ,v.FirstCirculationDate
    ,v.FirstDrivingTax
    ,v.UsualDrivingTax;
```
To use a GROUP BY clause you need to make sure all of the fields in your SELECT statement are in aggregate functions (e.g. SUM() or COUNT()), or they need to be in the GROUP BY clause.
Group By clause causing error
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
```
SELECT posts.id AS post_id, categories.id AS category_id, title, contents,
       posts.date_posted, categories.name, comments.id AS comment_id
FROM posts
LEFT JOIN (categories, comments)
ON categories.id = posts.cat_id
AND posts.id = comments.post_id
WHERE posts.id = 28
```

I want this SQL Query to select all posts, not just the ones which DO have a comment, but at the moment this query returns only those rows which DO have comments.
try this one:

```
SELECT p.id AS post_id, c.id AS category_id, title, contents,
       p.date_posted, c.name, co.id AS comment_id
FROM posts p, categories c, comments co
WHERE p.cat_id = c.id
and (co.post_id is null or co.post_id = p.id)
and p.id = 28
```
```
Select *
From Posts P
Left Join Comments C on P.post_id = C.post_id
Join Categories CG on CG.id = P.cat_id
```
SQL Query SELECT from three tables
[ "", "mysql", "sql", "" ]
I need to compare rows in the same table of a query. Here is an example of the table:

```
id checkin  checkout
1  01/15/13 01/31/13
1  01/31/13 05/20/13
2  01/15/13 05/20/13
3  01/15/13 01/19/13
3  01/19/13 05/20/13
4  01/15/13 02/22/13
5  01/15/13 03/01/13
```

I compare the checkout date to today's date; if it is before today's date then I want to return the result. However, ids like 1 and 3 have multiple records. If any of the records associated with the same id has a checkout date after today's date, then I don't want to return any of that id's records. I only want to return a record for each id where every one of its records has a checkout date before today's date.
For this purpose, analytic functions are the best approach: ``` select id, checkin, checkout from (select t.*, max(checkout) over (partition by id) as maxco from t ) t where maxco <= trunc(sysdate) ``` This assumes that the data is stored as date values and not as strings (otherwise, the max will return the wrong value).
``` select id, checking from Table where checkout < CURRENT_DATE --Postgresql date of today, Oracle should have an equivalent now and id not in (select id from Table where checkout >= CURRENT_DATE); ```
Compare rows in SQL query
[ "", "sql", "oracle", "" ]
I know how to retrieve the last day of the last month in MS-SQL-Server ``` SELECT DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,GETDATE()),0)) ``` This gives me the result ``` 2013-03-31 23:59:59.000 ``` Now I need to change this format so that I receive it like this ``` mm/dd/yy ``` Any suggestions? Thanks
If you are using SQL Server, try this ``` SELECT CONVERT(varchar, DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,GETDATE()),0)),101) ``` or ``` SELECT CONVERT(varchar, DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,GETDATE()),0)),1) ``` (style 101 gives `mm/dd/yyyy` with a four-digit year; style 1 gives the two-digit-year `mm/dd/yy` the question asks for).
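For reference, the same value — the last day of the previous month, formatted `mm/dd/yy` — is simple to compute outside SQL as well; a Python sketch (the sample "today" is invented):

```python
import datetime

def last_day_of_previous_month(today):
    # The last day of last month is the day before the 1st of this month.
    first_of_month = today.replace(day=1)
    return first_of_month - datetime.timedelta(days=1)

d = last_day_of_previous_month(datetime.date(2013, 4, 15))
print(d.strftime("%m/%d/%y"))  # 03/31/13
```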
Try this one - ``` DECLARE @Date DATETIME SELECT @Date = GETDATE() SELECT CONVERT(VARCHAR(10), DATEADD(MONTH, DATEDIFF(MONTH, 0, @Date) + 1, 0) - 1, 101) ``` Output: ``` 04/30/2013 ```
How to retrieve the last day of last month in a certain format?
[ "", "sql", "sql-server", "" ]
I have to make a table of basketball players and a query which finds the player with the most experience. I have tried ``` SELECT firstName, lastName, MAX(experience) FROM Player ``` but I'm assuming that's wrong. So basically I want to find the player with the highest experience (data type set as an INT). Thank you!! :D
``` SELECT firstName, lastName, experience FROM Player WHERE experience = (SELECT MAX(experience) FROM Player) ```
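The subquery pattern is easy to try out in SQLite; note that it returns every player tied for the maximum, not just one (the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Player (firstName TEXT, lastName TEXT, experience INTEGER);
INSERT INTO Player VALUES ('Ann', 'Lee', 12), ('Bob', 'Ray', 7), ('Cy', 'Ng', 12);
""")

# The subquery computes the maximum once; the outer query keeps
# every row that matches it -- including ties.
rows = conn.execute("""
    SELECT firstName, lastName, experience
    FROM Player
    WHERE experience = (SELECT MAX(experience) FROM Player)
    ORDER BY firstName
""").fetchall()
print(rows)  # [('Ann', 'Lee', 12), ('Cy', 'Ng', 12)]
```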
``` SELECT * FROM Player WHERE experience = (SELECT max(experience) FROM Player) ```
SELECT most experienced person
[ "", "sql", "oracle", "top-n", "" ]
I am drawing a histogram using matplotlib in Python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram? I am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis)? Thanks!
You can use `plot` or `vlines` to draw a vertical line, but to draw a vertical line from the bottom to the top of the y axis, `axvline` is the probably the simplest function to use. Here's an example: ``` In [80]: import numpy as np In [81]: import matplotlib.pyplot as plt In [82]: np.random.seed(6789) In [83]: x = np.random.gamma(4, 0.5, 1000) In [84]: result = plt.hist(x, bins=20, color='c', edgecolor='k', alpha=0.65) In [85]: plt.axvline(x.mean(), color='k', linestyle='dashed', linewidth=1) Out[85]: <matplotlib.lines.Line2D at 0x119758828> ``` Result: [![plot](https://i.stack.imgur.com/eJbSA.png)](https://i.stack.imgur.com/eJbSA.png)
This is old topic and minor addition, but one thing I have often liked is to also plot mean value beside the line: ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(6789) x = np.random.gamma(4, 0.5, 1000) result = plt.hist(x, bins=20, color='c', edgecolor='k', alpha=0.65) plt.axvline(x.mean(), color='k', linestyle='dashed', linewidth=1) min_ylim, max_ylim = plt.ylim() plt.text(x.mean()*1.1, max_ylim*0.9, 'Mean: {:.2f}'.format(x.mean())) ``` Which produces following result: [![Average line with text](https://i.stack.imgur.com/tbIQX.png)](https://i.stack.imgur.com/tbIQX.png)
Drawing average line in histogram
[ "", "python", "matplotlib", "axis", "" ]
I am trying to append dependent values to a list as below and running into a KeyError... I have the expected output below. Can anyone provide input on what is wrong here? ``` ''' Current output without "+=" in internal_dep[num] +=dep_gerrit :- {'313137': '12345', '99999': '', '34567': '99999', '12345': '34567'} EXPECTED OUTPUT:- {'313137': ['12345', '34567': '99999']} ''' def findinternaldep(num): if num == '313137': output ='12345' if num == '12345': output = '34567' if num == '34567': output = '99999' if num == '99999': output = '' return output master_gerrit = ['313137'] internal_dep={} for num in master_gerrit: while True: print "NUM" print num (dep_gerrit)=findinternaldep(num) internal_dep[num] +=dep_gerrit num = dep_gerrit if dep_gerrit == '': break print internal_dep ``` ERROR:- ``` Traceback (most recent call last): File "test_internal.py", line 34, in <module> internal_dep[num] +=dep_gerrit KeyError: '313137' ```
You're trying to append to the list before its key exists in the dictionary. The easiest way to address this is to make your `internal_dep` dictionary a `collections.defaultdict`. Then, if the key doesn't already exist, a default value (which you can specify) will be created automatically. ``` import collections internal_dep = collections.defaultdict(list) ``` Also note that `+=` is equivalent to `list.extend` rather than `list.append`, so you should use one of the following: ``` internal_dep[num] += [dep_gerrit] # or internal_dep[num].append(dep_gerrit) ``` What you have won't give you an error, but it won't do what you want, either!
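A minimal side-by-side sketch of the failure and the `defaultdict` fix (the keys here are made up):

```python
import collections

plain = {}
try:
    plain["313137"] += ["12345"]   # key doesn't exist yet -> KeyError
except KeyError as e:
    print("KeyError:", e)

dep = collections.defaultdict(list)
dep["313137"].append("12345")      # missing key is created as an empty list
dep["313137"].append("34567")
print(dict(dep))  # {'313137': ['12345', '34567']}
```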
Following is the version that worked for me ``` list = ['313137','253036'] internal_dep={} for num in list: master_gerrit = num while True: print "NUM" print num (dep_gerrit)=findinternaldep(num) print "DEP_GERRIT" print dep_gerrit #print master_gerrit print dep_gerrit if internal_dep.has_key(master_gerrit): internal_dep[master_gerrit].append(dep_gerrit) else: internal_dep[num]=[dep_gerrit,] if dep_gerrit == '': break num = dep_gerrit print internal_dep ``` OUTPUT:- {'313137': ['12345', '34567', '99999', ''], '253036': ['']}
Appending values to list of dictionaries
[ "", "python", "" ]
Let's say I have a list of 8 objects, numbered 1-8. The objects are put into three boxes: 3 in one box, 3 in another box, 2 in the last box. By mathematics, there are 8C3\*5C3=560 ways to do this. I want to loop through these 560 items. Is there any way in Python to do so? The result should look like this: ``` list=['12','345','678'], ['12','346','578'], ..., etc. ``` Note that `['12','345','678']` and `['12','354','876']` are considered the same for this purpose. I want to make a for-loop over this list. Here is the solution I get, but it seems ugly. ``` import itertools for c1,c2 in itertools.combinations(range(8),2): l2=list(range(8)) l2.pop(c2) l2.pop(c1) for c3,c4,c5 in itertools.combinations(l2,3): l3=l2[:] l3.remove(c5) l3.remove(c4) l3.remove(c3) c6,c7,c8=l3 print(c1,c2,c3,c4,c5,c6,c7,c8) ```
``` def F(seq, parts, indexes=None, res=[], cur=0): if indexes is None: # indexes to use for combinations indexes = range(len(seq)) if cur >= len(parts): # base case yield [[seq[i] for i in g] for g in res] return for x in combinations(indexes, r=parts[cur]): set_x = set(x) new_indexes = [i for i in indexes if i not in set_x] for comb in F(seq, parts, new_indexes, res=res + [x], cur=cur + 1): yield comb it = F('12345678', parts=(2,3,3)) for i in range(10): print [''.join(g) for g in next(it)] ``` --- ``` ['12', '345', '678'] ['12', '346', '578'] ['12', '347', '568'] ['12', '348', '567'] ['12', '356', '478'] ['12', '357', '468'] ['12', '358', '467'] ['12', '367', '458'] ['12', '368', '457'] ['12', '378', '456'] ``` --- Another example: ``` for c in F('1234', parts=(2,2)): print [''.join(g) for g in c] ``` --- ``` ['12', '34'] ['13', '24'] ['14', '23'] ['23', '14'] ['24', '13'] ['34', '12'] ```
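As a sanity check on the count: choosing 2 of 8 and then 3 of the remaining 6 leaves the last group fixed, giving 8C2 * 6C3 = 28 * 20 = 560 groupings — the same 560. A small recursive sketch using nested `itertools.combinations` (group sizes are filled in the order given):

```python
from itertools import combinations

def grouped(seq, sizes):
    """Yield one grouping per way of filling the given group sizes in order."""
    if not sizes:
        yield []
        return
    items = set(seq)
    for group in combinations(sorted(items), sizes[0]):
        for rest in grouped(items - set(group), sizes[1:]):
            yield [group] + rest

count = sum(1 for _ in grouped(range(8), (2, 3, 3)))
print(count)  # 560
```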
You could just permute all your 8 values (like shown on previous answers). for that use this [previous answer](https://stackoverflow.com/questions/104420/how-to-generate-all-permutations-of-a-list-in-python) (also on the following code). Then assign each combination as a tuple, so they can be hashed and unique, for that you'll have to order them, so they can also be compare uniquely. ``` def all_perms(elements): if len(elements) <=1: yield elements else: for perm in all_perms(elements[1:]): for i in range(len(elements)): #nb elements[0:1] works in both string and list contexts yield perm[:i] + elements[0:1] + perm[i:] v = [1,2,3,4,5,6,7,8] a = {} for i in all_perms(v): k = (tuple(sorted([i[0],i[1]])) , tuple(sorted([i[2],i[3],i[4]])) , tuple(sorted([i[5],i[6],i[7]]))) if k not in a: a[k] = [str(i[0])+str(i[1]), str(i[2])+str(i[3])+str(i[4]), str(i[5])+str(i[6]) + str(i[7])] x = 0 for i in a.values(): print x, i x+=1 ``` For your example on 8 values, this gives 560 combinations.
Python looping combinations of 8 objects into 3 groups, 3-3-2
[ "", "python", "list", "combinations", "" ]
In Matlab, you can assign a value to multiple slices of the same list: ``` >> a = 1:10 a = 1 2 3 4 5 6 7 8 9 10 >> a([1:3,7:9]) = 10 a = 10 10 10 4 5 6 10 10 10 10 ``` How can you do this in Python with a numpy array? ``` >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> a[1:3,7:9] = 10 IndexError: too many indices ```
``` a = np.arange(10) a[[range(3)+range(6,9)]] = 10 #or a[[0,1,2,6,7,8]] = 10 print a ``` That should work, I think ... I don't know that it's quite what you want, though. (Note: `range(3)+range(6,9)` and `print a` are Python 2; in Python 3 use `list(range(3))+list(range(6,9))` and `print(a)`.)
You might also consider using `np.r_`: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html> ``` ii = np.r_[0:3,7:10] a[ii] = 10 In [11]: a Out[11]: array([ 10, 10, 10, 3, 4, 5, 6, 10, 10, 10]) ```
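If numpy isn't available (or just to see the index arithmetic), the concatenated index list that `np.r_[0:3,7:10]` produces — `[0, 1, 2, 7, 8, 9]` — can be built with plain `range`s and applied in a loop:

```python
a = list(range(10))

# Indexes covered by the two slices 0:3 and 7:10
# (what np.r_ concatenates into one index array).
idx = list(range(0, 3)) + list(range(7, 10))
for i in idx:
    a[i] = 10

print(a)  # [10, 10, 10, 3, 4, 5, 6, 10, 10, 10]
```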
Assign value to multiple slices in numpy
[ "", "python", "arrays", "matlab", "numpy", "slice", "" ]
Suppose I have a table employee with id, user\_name, salary. How can I select the record with the 2nd highest salary in Oracle? I googled it, find this solution, is the following right?: ``` select sal from (select rownum n,a.* from ( select distinct sal from emp order by sal desc) a) where n = 2; ```
RANK and DENSE\_RANK have already been suggested - depending on your requirements, you might also consider ROW\_NUMBER(): ``` select * from ( select e.*, row_number() over (order by sal desc) rn from emp e ) where rn = 2; ``` The difference between RANK(), DENSE\_RANK() and ROW\_NUMBER() boils down to: * ROW\_NUMBER() always generates a unique ranking; if the ORDER BY clause cannot distinguish between two rows, it will still give them different rankings (randomly) * RANK() and DENSE\_RANK() will give the same ranking to rows that cannot be distinguished by the ORDER BY clause * DENSE\_RANK() will always generate a contiguous sequence of ranks (1,2,3,...), whereas RANK() will leave gaps after two or more rows with the same rank (think "Olympic Games": if two athletes win the gold medal, there is no second place, only third) So, if you only want one employee (even if there are several with the 2nd highest salary), I'd recommend ROW\_NUMBER().
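The `ROW_NUMBER()` pattern can be tried in SQLite as well (window functions need SQLite 3.25+, which ships with modern Python builds); a sketch with a tied 2nd-highest salary, using invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, sal INTEGER);
INSERT INTO emp VALUES ('A', 5000), ('B', 4000), ('C', 4000), ('D', 3000);
""")

# ROW_NUMBER() breaks the tie at 4000 arbitrarily, so exactly one
# row comes back even though two employees share the 2nd-highest salary.
rows = conn.execute("""
    SELECT ename, sal FROM (
        SELECT e.*, ROW_NUMBER() OVER (ORDER BY sal DESC) AS rn FROM emp e
    ) WHERE rn = 2
""").fetchall()
print(len(rows), rows[0][1])  # 1 4000
```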
If you're using Oracle 8+, you can use the `RANK()` or `DENSE_RANK()` functions like so ``` SELECT * FROM ( SELECT some_column, rank() over (order by your_sort_column desc) as row_rank from your_table ) t WHERE row_rank = 2; ```
How can I select the record with the 2nd highest salary in database Oracle?
[ "", "sql", "oracle", "" ]
I've previously used Python tools like `virtualenv` and `virtualenvwrapper` in Python projects, but now I'm working on a project in Go so I'd like a general tool to switch environment variables when I do something like ``` workon myproject .... deactivate myproject ``` I especially like the workflow of `virtualenv-wrapper` with pre and post activation scripts in which I can `preactivate.sh` ``` export MYVAR=xxx ``` and postactivate ``` unset MYVAR ``` The tools I've mentioned seem to be centered around Python and pip, but since my project is in Go, I don't know if it's kosher for me to use Python tools that happen to provide environment variable management features. Is there anything more general-purpose you would suggest that is not hard to set up? This question is not necessarily Go-specific.
Yup, you can use gvm: <https://github.com/moovweb/gvm> If you've ever used rvm for ruby, it's similar to that.
You can try [envirius (universal virtual environments manager)](https://github.com/ekalinin/envirius "envirius (universal virtual environments manager)"), which allows you to compile any version of Go and create any number of environments based on it. `$GOPATH`/`$GOROOT` depend on each particular environment. Moreover, it allows you to create environments with mixed languages. Here is an example of compiling a Go-based application with envirius: ``` $ nv mk go-hugo-test --go=1.2.1 Creating environment: go-hugo-test ... * installing go==1.2.1 ... * done (in 8 secs.) $ nv ls Available environment(s): go-hugo-test (go==1.2.1) $ nv on go-hugo-test (go-hugo-test) $ go get github.com/spf13/hugo (go-hugo-test) $ cd $GOPATH/src/github.com/spf13/hugo (go-hugo-test) $ go build -o hugo main.go (go-hugo-test) $ cd - (go-hugo-test) $ hugo version Hugo Static Site Generator v0.11-dev ```
Manage Environment Variables
[ "", "python", "go", "environment-variables", "virtualenv", "" ]
I see a piece of Python code ``` /*Constructor*/ self.matrix={} /*my function*/ if(self.matrix.get((packet.src,packet.dst))==None): ``` Does a Python array get initialized to `None`? What does `None` represent? Is the above comparison correct? I am a newbie in Python and trying to relate to C++ concepts.
`self.matrix` isn't an array, it is a `dict`. This is comparable to a `std::map` in C++. From your usage, it is like a `std::map<std::pair<srcType, dstType>, valueType>`. (Note that `dict`s can hold variant types both in the key and the value -- I'm only assuming that it'll always use a `tuple` of 2 elements as the key.) And no, `self.matrix` isn't initialized to `None`. `dict.get()` *returns* `None` if it can't find a match. As an alternative, `[]` throws a `KeyError` exception if it can't find a match.
`matrix` is a dictionary not a list. This is best explained by an example: ``` >>> dic = {} >>> dic['a'] = 1 >>> dic['a'] 1 >>> dic.get('a') 1 >>> dic.get('b') == None True >>> dic['b'] Traceback (most recent call last): File "<pyshell#23>", line 1, in <module> dic['b'] KeyError: 'b' ```
array initialized to "None" in Python
[ "", "python", "python-2.7", "" ]
I've got a simple SQL query to give me information about all of the users who have commented on an image like this... ``` SELECT user.id, user.userName, user.displayName, user.currentDefault, user.discipline FROM user INNER JOIN imageComment ON user.id = imageComment.commentAuthorId WHERE imageComment.imageAlbumJunctionId = 37 GROUP BY user.id LIMIT 2 ``` I have the `LIMIT 2` on there because I know there are only two comments for this image. And I have the `GROUP BY user.id` on there because I only want to show information about a user once, even if they comment multiple times. So let's say that "Mike" commented on a photo twice. My question is, does this query... 1. Only search for 2 comments because of the `LIMIT`, and then perform the `GROUP BY user.id` 2. Perform the `GROUP BY user.id` and then search the full table for a second unique user I am hoping that this query does #1, because if it does #2, that would cause it to search the entire table looking for a second user when "Mike" was actually the one that did both comments. And I did try an `EXPLAIN`, but I don't really understand it because it gives the same output whether there is a `GROUP BY` or `LIMIT`. Thank you for reading.
The query finds the first two users who have commented on an image, so it's #2. I'd suggest: ``` select ... from user where exists ( select * from imageComment where imageComment.commentAuthorId = user.id and imageComment.imageAlbumJunctionId = 37 ) ``` `WHERE EXISTS` is faster than an inner join here because it can stop after the first match. Good indexes should be set.
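A runnable sketch of the `EXISTS` approach in SQLite (schema and data invented to mirror the question — "Mike" commented twice but appears once, with no `GROUP BY` needed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (id INTEGER, userName TEXT);
CREATE TABLE imageComment (commentAuthorId INTEGER, imageAlbumJunctionId INTEGER);
INSERT INTO user VALUES (1, 'Mike'), (2, 'Ann'), (3, 'Sue');
-- Mike commented twice on image 37, Ann once, Sue never.
INSERT INTO imageComment VALUES (1, 37), (1, 37), (2, 37);
""")

# EXISTS yields each commenting user at most once.
rows = conn.execute("""
    SELECT id, userName FROM user
    WHERE EXISTS (
        SELECT * FROM imageComment
        WHERE imageComment.commentAuthorId = user.id
          AND imageComment.imageAlbumJunctionId = 37
    )
    ORDER BY id
    LIMIT 2
""").fetchall()
print(rows)  # [(1, 'Mike'), (2, 'Ann')]
```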
`LIMIT` is applied after the `GROUP BY user.id`. So in this case, #2 is happening. But the `WHERE` clause will filter the table first, so it's not searching the whole table. Your query will give you correct results, but I think this should be better: ``` SELECT DISTINCT user.id, user.userName, user.displayName, user.currentDefault, user.discipline FROM user INNER JOIN imageComment ON user.id = imageComment.commentAuthorId WHERE imageComment.imageAlbumJunctionId = 37 LIMIT 2 ```
Understanding GROUP BY with a LIMIT
[ "", "mysql", "sql", "" ]
I have a Client table that is linked to a Client Contact table. Naturally there may be multiple contacts for many clients. I have a Select statement using DISTINCT to show me which Clients have at least one email contact in the Client Contact table. ``` SELECT DISTINCT intpkautoclientid FROM tblclient c JOIN tblclientcontact cc WITH (nolock) ON cc.intfkclientid = c.intpkautoclientid WHERE NULLIF(cc.stremail, '') IS NOT NULL ORDER BY intpkautoclientid ``` Is there a simple way using the above select to return all clients not part of the ‘Clients with email addresses’ SET. I really want to know which clients I do not have any valid email addresses for.
``` SELECT * FROM tblclient WHERE intpkautoclientid NOT IN ( SELECT intfkclientid FROM tblclientcontact WHERE stremail > '' ) ```
Another readable way is using [`NOT EXISTS`](http://msdn.microsoft.com/en-us/library/ms188336.aspx): ``` SELECT intpkautoclientid FROM tblclient c WHERE NOT EXISTS ( SELECT 1 FROM tblclientcontact cc WHERE cc.intfkclientid = c.intpkautoclientid ) ```
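Both shapes can be tried in SQLite; the sketch below combines the email test from the first answer with `NOT EXISTS` (table contents are invented — client 1 has a real email, client 2 only blank/NULL emails, client 3 no contacts at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblclient (intpkautoclientid INTEGER);
CREATE TABLE tblclientcontact (intfkclientid INTEGER, stremail TEXT);
INSERT INTO tblclient VALUES (1), (2), (3);
INSERT INTO tblclientcontact VALUES (1, '[email protected]'), (2, ''), (2, NULL);
""")

# Clients with no contact row carrying a non-empty email.
rows = conn.execute("""
    SELECT intpkautoclientid FROM tblclient c
    WHERE NOT EXISTS (
        SELECT 1 FROM tblclientcontact cc
        WHERE cc.intfkclientid = c.intpkautoclientid
          AND cc.stremail > ''
    )
    ORDER BY intpkautoclientid
""").fetchall()
print(rows)  # [(2,), (3,)]
```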
How do I select clients with no valid email addresses?
[ "", "sql", "sql-server", "" ]
I have a Pandas Series where each element of the series is a one row Pandas DataFrame which I would like to append together into one big DataFrame. For example: ``` import pandas as pd mySeries = pd.Series( numpy.arange(start=1, stop=5, step=1) ) def myFun(val): return pd.DataFrame( { 'square' : [val**2], 'cube' : [val**3] } ) ## returns a Pandas Series where each element is a single row dataframe myResult = mySeries.apply(myFun) ``` so how do I take `myResult` and combine all the little dataframes into one big dataframe?
``` import pandas as pd import numpy as np mySeries = pd.Series(np.arange(start=1, stop=5, step=1)) def myFun(val): return pd.Series([val ** 2, val ** 3], index=['square', 'cube']) myResult = mySeries.apply(myFun) print(myResult) ``` yields ``` square cube 0 1 1 1 4 8 2 9 27 3 16 64 ```
[concat](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.tools.merge.concat.html#pandas.tools.merge.concat) them: ``` In [58]: pd.concat(myResult).reset_index(drop=True) Out[58]: cube square 0 1 1 1 8 4 2 27 9 3 64 16 ``` Since the original indexes are all 0, I also reset them.
Take a Pandas Series where each element is a DataFrame and combine them to one big DataFrame
[ "", "python", "pandas", "" ]
I am using [scikit-learn's Random Forest Regressor](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) to fit a random forest regressor on a dataset. Is it possible to interpret the output in a format where I can then implement the model fit without using scikit-learn or even Python? The solution would need to be implemented in a microcontroller or maybe even an [FPGA](https://en.wikipedia.org/wiki/Field-programmable_gate_array). I am doing analysis and learning in Python but want to implement on a uC or FPGA.
You can check out graphviz, which uses 'dot language' for storing models (which is quite human-readable if you'd want to build some custom interpreter, shouldn't be hard). There is an `export_graphviz` function in scikit-learn. You can load and process the model in C++ through boost library `read_graphviz` method or some of other custom interpreters available.
You could try extracting rules from the tree ensemble model and implementing the rules in hardware. You can use [TE2Rules](https://github.com/linkedin/TE2Rules) (Tree Ensembles to Rules) to extract human-understandable rules to explain a scikit tree ensemble (like GradientBoostingClassifier). It provides levers to control interpretability, fidelity and run time budget to extract useful explanations. Rules extracted by TE2Rules are guaranteed to closely approximate the tree ensemble, by considering the joint interactions of multiple trees in the ensemble. References: you can find the code here: <https://github.com/linkedin/TE2Rules> and the documentation here: <https://te2rules.readthedocs.io/en/latest/>. Disclosure: I'm one of the core developers of TE2Rules.
Random Forest interpretation in scikit-learn
[ "", "python", "machine-learning", "regression", "scikit-learn", "random-forest", "" ]
I'm trying to generate a random number between 0.1 and 1.0. We can't use `random.randint` because it returns integers. We have also tried `random.uniform(0.1,1.0)`, but it returns a value >= 0.1 and < 1.0; we can't use this because our range also includes 1.0. Does somebody have an idea for this problem?
How "accurate" do you want your random numbers? If you're happy with, say, 10 decimal digits, you can just round `random.uniform(0.1, 1.0)` to 10 digits. That way you will include both `0.1` and `1.0`: ``` round(random.uniform(0.1, 1.0), 10) ``` To be precise, `0.1` and `1.0` will have only half of the probability compared to any other number in between and, of course, you lose all random numbers that differ only after 10 digits.
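A quick sanity check of the rounding approach — the rounded value always stays inside the closed interval [0.1, 1.0] (seeded only to make the sketch reproducible):

```python
import random

random.seed(42)  # reproducible sketch
samples = [round(random.uniform(0.1, 1.0), 10) for _ in range(10000)]

# Every rounded sample lies in the closed interval [0.1, 1.0].
assert all(0.1 <= s <= 1.0 for s in samples)
print(min(samples), max(samples))
```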
You could do this: ``` >>> import numpy as np >>> a=.1 >>> b=np.nextafter(1,2) >>> print(b) 1.0000000000000002 >>> [a+(b-a)*random.random() for i in range(10)] ``` or, use [numpy's uniform](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html): ``` np.random.uniform(low=0.1, high=np.nextafter(1,2), size=1) ``` [nextafter](https://stackoverflow.com/a/6163157/298607) will produce the platform specific next representable floating pointing number towards a direction. Using numpy's random.uniform is advantageous because it is unambiguous that it does not include the upper bound. --- ***Edit*** It does appear that Mark Dickinson's comments is correct: [Numpy's documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html) is incorrect regarding the upper bound to random.uniform being inclusive or not. The Numpy documentation states `All values generated will be less than high.` This is easily disproved: ``` >>> low=1.0 >>> high=1.0+2**-49 >>> a=np.random.uniform(low=low, high=high, size=10000) >>> len(np.where(a==high)[0]) 640 ``` Nor is the result uniform over this limited range: ``` >>> for e in sorted(set(a)): ... print('{:.16e}: {}'.format(e,len(np.where(a==e)[0]))) ... 1.0000000000000000e+00: 652 1.0000000000000002e+00: 1215 1.0000000000000004e+00: 1249 1.0000000000000007e+00: 1288 1.0000000000000009e+00: 1245 1.0000000000000011e+00: 1241 1.0000000000000013e+00: 1228 1.0000000000000016e+00: 1242 1.0000000000000018e+00: 640 ``` However, combining J.F. 
Sebastian and Mark Dickinson's comments, I think this works: ``` import numpy as np import random def rand_range(low=0,high=1,size=1): a=np.nextafter(low,float('-inf')) b=np.nextafter(high,float('inf')) def r(): def rn(): return a+(b-a)*random.random() _rtr=rn() while _rtr > high: _rtr=rn() if _rtr<low: _rtr=low return _rtr return [r() for i in range(size)] ``` If run with the minimal spread of values in Mark's comment such that there are very few discrete floating point values: ``` l,h=1,1+2**-48 s=10000 rands=rand_range(l,h,s) se=sorted(set(rands)) if len(se)<25: for i,e in enumerate(se,1): c=rands.count(e) note='' if e==l: note='low value end point' if e==h: note='high value end point' print ('{:>2} {:.16e} {:,}, {:.4%} {}'.format(i, e, c, c/s,note)) ``` It produces the desired uniform distribution inclusive of end points: ``` 1 1.0000000000000000e+00 589, 5.8900% low value end point 2 1.0000000000000002e+00 544, 5.4400% 3 1.0000000000000004e+00 612, 6.1200% 4 1.0000000000000007e+00 569, 5.6900% 5 1.0000000000000009e+00 593, 5.9300% 6 1.0000000000000011e+00 580, 5.8000% 7 1.0000000000000013e+00 565, 5.6500% 8 1.0000000000000016e+00 584, 5.8400% 9 1.0000000000000018e+00 603, 6.0300% 10 1.0000000000000020e+00 589, 5.8900% 11 1.0000000000000022e+00 597, 5.9700% 12 1.0000000000000024e+00 591, 5.9100% 13 1.0000000000000027e+00 572, 5.7200% 14 1.0000000000000029e+00 619, 6.1900% 15 1.0000000000000031e+00 593, 5.9300% 16 1.0000000000000033e+00 592, 5.9200% 17 1.0000000000000036e+00 608, 6.0800% high value end point ``` On the values requested by the OP, it also produces a uniform distribution: ``` import matplotlib.pyplot as plt l,h=.1,1 s=10000 bin_count=20 rands=rand_range(l,h,s) count, bins, ignored = plt.hist(np.array(rands),bin_count) plt.plot(bins, np.ones_like(bins)*s/bin_count, linewidth=2, color='r') plt.show() ``` Output ![uniform ](https://i.stack.imgur.com/kl8ZJ.png)
Generate random number between 0.1 and 1.0. Python
[ "", "python", "random", "floating-point", "" ]
``` def fvals_sqrt(x): """ Return f(x) and f'(x) for applying Newton to find a square root. """ f = x**2 - 4. fp = 2.*x return f, fp def solve(fvals_sqrt, x0, debug_solve=True): """ Solves the sqrt function, using Newton's method. """ fvals_sqrt(x0) x0 = x0 + (f/fp) print x0 ``` When I try to call the function solve, Python returns: ``` NameError: global name 'f' is not defined ``` Obviously this is a scope issue, but how can I use f within my solve function?
You want this: ``` def solve(fvals_sqrt, x0, debug_solve=True): """ Solves the sqrt function, using newtons methon. """ f, fp = fvals_sqrt(x0) # Get the return values from fvals_sqrt x0 = x0 + (f/fp) print x0 ```
You're calling `fvals_sqrt()` but don't do anything with the return values, so they are discarded. Returning variables won't magically make them exist inside the calling function. Your call should be like so: ``` f, fp = fvals_sqrt(x0) ``` Of course, you don't need to use the same names for the variables as are used in the `return` statement of the function you're calling.
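Putting the unpacking fix together into a working Newton iteration (the tolerance and iteration cap are arbitrary choices; note the update subtracts `f/fp` — the question's `+` would move away from the root):

```python
def fvals_sqrt(x):
    """Return f(x) and f'(x) for f(x) = x**2 - 4."""
    f = x**2 - 4.0
    fp = 2.0 * x
    return f, fp

def solve(fvals, x0, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f, fp = fvals(x0)       # unpack BOTH return values
        if abs(f) < tol:
            break
        x0 = x0 - f / fp        # Newton step
    return x0

root = solve(fvals_sqrt, 3.0)
print(root)  # ~2.0
```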
Python Variable scope with a function as a parameter
[ "", "python", "scope", "" ]