Columns: Prompt, Chosen, Rejected, Title, Tags
Say I have a list:

```
A = [1,2,3,4,5,6,7,8,9,0]
```

and a second list:

```
B = [3,6,9]
```

What is the best way to sort list A so that anything that matches an item in list B will appear at the beginning, so that the result would be:

```
[3,6,9,1,2,4,5,7,8,0]
```
```
>>> A = [1,2,3,4,5,6,7,8,9,0]
>>> B = [3,6,9]
>>> sorted(A, key=lambda e: e not in B)
[3, 6, 9, 1, 2, 4, 5, 7, 8, 0]
```

How this works: `sorted` sorts an iterable based on the result of `key(element)` for each element (the default value for `key` is `None`, which results in it sorting based on the elements directly). In our case the lambda `lambda e: e not in B` will return `True` if `e` ***isn't*** in `B`, or `False` if `e` ***is*** in `B`. The elements with `False` get sorted to the front, and you end up with your result. As demonstrated by:

```
>>> sorted([True, False, False])
[False, False, True]
```
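A hedged variation on the same idea: if you also want the matched items to follow `B`'s own ordering (rather than their order in `A`), the key can return each element's position in `B`, with non-members pushed to the end. This is only a sketch; for large lists, turning `B` into a dict of positions would avoid the repeated `B.index` scans.

```python
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
B = [9, 3, 6]  # note: deliberately not in ascending order

# Members of B sort by their position in B; everything else keeps its
# original relative order at the end, because sorted() is stable.
result = sorted(A, key=lambda e: B.index(e) if e in B else len(B))
print(result)  # [9, 3, 6, 1, 2, 4, 5, 7, 8, 0]
```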
Many of these answers are using set logic explicitly. But Python has it built in. If, as you say, the order doesn't matter as long as the `B` parts come first, this will take care of the rest:

```
B = set(B)
list(B.intersection(A)) + list(set(A) - B)
```

This assumes that (as in your example) there are no duplicate values. If there are, use one of the list comprehension answers.
Python list sorting dependent on whether items are in another list
[ "", "python", "list", "sorting", "" ]
I am trying to change some lines in a text file without affecting the other lines. This is what's inside the text file called "text.txt":

```
this is a test1|number1
this is a test2|number2
this is a test3|number2
this is a test4|number3
this is a test5|number3
this is a test6|number4
this is a test7|number5
this is a test8|number5
this is a test9|number5
this is a test10|number5
```

My objective is to change line 4 and line 5 but keep the rest the same.

```
mylist1=[]
for lines in open('test','r'):
    a=lines.split('|')
    b=a[1].strip()
    if b== 'number3':
        mylist1.append('{}|{} \n'.format('this is replacement','number7'))
    else:
        mylist1.append('{}|{} \n'.format(a[0],a[1].strip()))
myfile=open('test','w')
myfile.writelines(mylist1)
```

Even though the code works, I am wondering if there is a better and more efficient way to do it. Is it possible to read the file just by line number?
There is not much you can improve. But you have to write all lines to **a new file**, either changed or unchanged. Minor improvements would be:

* using the `with` statement;
* avoiding storing lines in a list;
* writing `lines` without formatting in the `else` clause (if applicable).

Applying all of the above:

```
import shutil

with open('test') as old, open('newtest', 'w') as new:
    for line in old:
        if line.rsplit('|', 1)[-1].strip() == 'number3':
            new.write('this is replacement|number7\n')
        else:
            new.write(line)

shutil.move('newtest', 'test')
```
```
import fileinput

for lines in fileinput.input('test', inplace=True):
    # inplace=True redirects stdout to a temp file which will
    # be renamed to the original when we reach the end of the file. This
    # is more efficient because it doesn't load the whole file into memory.
    a = lines.split('|')
    b = a[1].strip()
    if b == 'number3':
        print '{}|{} '.format('this is replacement', 'number7')
    else:
        print '{}|{} '.format(a[0], a[1].strip())
```
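Since the snippet above uses Python 2 `print` statements, here is a hedged Python 3 sketch of the same `fileinput` approach; the sample file is created inline so the example is self-contained, and the file name `test` matches the question.

```python
import fileinput

# Create a small sample file in the question's "text|key" format.
with open('test', 'w') as f:
    f.write('this is a test4|number3\nthis is a test6|number4\n')

# inplace=True redirects print() output into the file being edited.
with fileinput.input('test', inplace=True) as lines:
    for line in lines:
        if line.rstrip('\n').rsplit('|', 1)[-1].strip() == 'number3':
            print('this is replacement|number7')
        else:
            print(line, end='')

print(open('test').read())
```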
Read/Write text file
[ "", "python", "io", "" ]
I have two tables, User and Favorite. I want a User row to be able to hold multiple foreign keys from Favorite.

User

```
ID | Name | favorite
1  | tom  | 1&2 ??? (tom likes cookie and donuts)
2  | max  | 2&3 ??? (max likes donuts and peanuts)
3  | john | 1&2&3 ??? (john likes cookie, donuts and peanuts)
```

Favorite

```
ID | name
1  | cookie
2  | donuts
3  | peanuts
```

What kind of structure is best for this purpose? I think this question is basic; I have made foreign key structures many times. I am using MySQL and want to know how to do this in Doctrine2 if possible.
You need to have a many-to-many relationship table:

```
User_Favorite
  UserId
  FavoriteId
```

Those two ids will make a composite primary key, and each will be a foreign key to the table it relates to.
You need to create a third many-to-many relationship table that will contain:

> User\_ID, Favourite\_ID

Essentially, you want a many-to-many relationship: a user has zero to many favorites, and a favorite has zero to many users. The correct way to do this is to have a third table consisting of foreign keys referencing the two tables' primary keys (user\_id, favourite\_id). You can then remove the favorite column from the user table. It is not recommended to use delimited lists in place of a many-to-many relationship.
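The join-table layout described above can be sketched quickly. The question is about MySQL/Doctrine2, but the schema idea is the same everywhere; the snippet below uses SQLite purely for illustration, with the sample data from the question.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE User (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Favorite (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE User_Favorite (
        user_id INTEGER REFERENCES User(id),
        favorite_id INTEGER REFERENCES Favorite(id),
        PRIMARY KEY (user_id, favorite_id)   -- composite primary key
    );
    INSERT INTO User VALUES (1,'tom'),(2,'max'),(3,'john');
    INSERT INTO Favorite VALUES (1,'cookie'),(2,'donuts'),(3,'peanuts');
    INSERT INTO User_Favorite VALUES (1,1),(1,2),(2,2),(2,3),(3,1),(3,2),(3,3);
""")

# "What does tom like?" becomes a two-join query.
rows = con.execute("""
    SELECT u.name, f.name
    FROM User u
    JOIN User_Favorite uf ON uf.user_id = u.id
    JOIN Favorite f ON f.id = uf.favorite_id
    WHERE u.name = 'tom'
    ORDER BY f.id
""").fetchall()
print(rows)  # [('tom', 'cookie'), ('tom', 'donuts')]
```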
How can I handle multiple foreign keys?
[ "", "mysql", "sql", "" ]
I have 2 tables.

TABLE **jt1**

```
name
---
A
B
C
```

and TABLE **jt2**

```
name
---
B
C
D
```

I need to get the names from both tables which are not common to both, i.e. the result must be:

```
result
------
A
D
```

This is my query, but maybe there is a better solution:

```
SELECT jt1.name AS name
FROM jt1
LEFT JOIN jt2 ON jt1.name = jt2.name
WHERE jt2.name IS NULL
UNION
SELECT jt2.name AS name
FROM jt2
LEFT JOIN jt1 ON jt2.name = jt1.name
WHERE jt1.name IS NULL
```
```
SELECT COALESCE(jt1.name, jt2.name) AS zname
FROM jt1
FULL JOIN jt2 ON jt1.name = jt2.name
WHERE jt2.name IS NULL
   OR jt1.name IS NULL;
```

BTW: the naive solution could probably be faster:

```
SELECT name FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.name = a.name)
UNION ALL
SELECT name FROM b
WHERE NOT EXISTS (SELECT 1 FROM a WHERE a.name = b.name);
```

BTW: I purposely use `UNION ALL` here, because I **know** that the two *legs* cannot have any overlap, so the removal of duplicates can be omitted.
A combination of `EXCEPT` and `UNION` will do the trick as well. I can't tell if it is more efficient than the other solutions, though:

```
(
  SELECT name FROM jt1
  EXCEPT
  SELECT name FROM jt2
)
UNION
(
  SELECT name FROM jt2
  EXCEPT
  SELECT name FROM jt1
)
ORDER BY name;
```

(The parentheses are not really necessary; I just added them to visualize the approach.)
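As a quick sanity check of the `EXCEPT`/`UNION` approach, here is the same idea run against an in-memory SQLite database with the question's data; the two legs are wrapped in subqueries, which keeps the grouping explicit and portable.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE jt1 (name TEXT);
    CREATE TABLE jt2 (name TEXT);
    INSERT INTO jt1 VALUES ('A'), ('B'), ('C');
    INSERT INTO jt2 VALUES ('B'), ('C'), ('D');
""")

# Symmetric difference: (jt1 - jt2) union (jt2 - jt1).
rows = con.execute("""
    SELECT name FROM (SELECT name FROM jt1 EXCEPT SELECT name FROM jt2)
    UNION
    SELECT name FROM (SELECT name FROM jt2 EXCEPT SELECT name FROM jt1)
    ORDER BY name
""").fetchall()
print(rows)  # [('A',), ('D',)]
```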
Get data from both tables that isn't common to both tables
[ "", "sql", "postgresql", "" ]
We accept accented characters such as `Ḿấxiḿứś` from an HTML form; when Hibernate saves them into the database, the string becomes `??xi???`. Then I did a SQL update, trying to write the accented words directly into the database, and the same thing happened. The desired behavior is to store the string in the database as-is. I am using Microsoft SQL Server 2008. I have tried to google it; someone said I need to change the database collation to `SQL_Latin1_General_CP1_CI_AI`, but I can't find this option in the dropdown.
```
ALTER TABLE table_name ALTER COLUMN column_name NVARCHAR(40) NULL
```

If you are using Hibernate, that's all you need. I have attempted to change the table collation but it didn't work; maybe you have to back up your data, create a fresh database with the new collation, then restore the data. In my database the collation setting is AS (accent sensitive); please make sure yours has the same setting.
Only one of these works: the nvarchar datatype **and** the N prefix for Unicode constants.

```
DECLARE
    @foo1 varchar(10) = 'Ḿấxiḿứś',
    @foo2 varchar(10) = N'Ḿấxiḿứś',
    @fooN1 nvarchar(10) = 'Ḿấxiḿứś',
    @fooN2 nvarchar(10) = N'Ḿấxiḿứś';

SELECT @foo1, @foo2, @fooN1, @fooN2;
```

You have to ensure that all datatypes are nvarchar end to end (columns, parameters, etc.). Collation is not the issue: collation governs sorting and comparison for nvarchar data.
Save Accented Characters in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Users of my site can submit a `post title` and `post content` via a form. The `post title` will be saved and converted to an SEO-friendly form (eg: "title 12 $" -> "title-12"); this will be the post's URL. My question is: if a user enters a title that is identical to a previously entered title, the URLs of those posts will be identical. So can the new title be modified automatically by appending a number to the end?

**eg:**

> "title-new" -> "title-new-1", or if "title-new-1" is present in the db, convert it to "title-new-2"

I'm sorry, I'm new to this; maybe it's very easy. Thanks for your help. I'm using PDO.
Thank you @Ratna & @Borniet for your answers. I'm posting explained code for any other user who wants it. If there is something that should be added or removed, or a better way, please let me know.

First I'm going to search for the "new unchanged title name" to see whether it's present in the db:

```
$newtitle = "tst-title";
$dbh = new PDO('mysql:host=localhost;dbname=dbname', 'username', 'password');
$stmt = $dbh->prepare("SELECT * FROM `tablename` WHERE `col.name` = ? LIMIT 1");
$stmt->execute(array($newtitle));
if ( $stmt->rowCount() < 1 ) {
    // enter sql command to insert data
} else {
    $i = 0;
    do {
        $i++;
        $stmt = $dbh->prepare("SELECT * FROM `tablename` WHERE `url` = ? LIMIT 1");
        $stmt->execute(array($newtitle.'-'.$i));
    } while ($stmt->rowCount() > 0);
    // enter sql command to insert data
}
```

---

That's it. The reason I'm dividing it into two branches is that I want to append '-' plus a number to the URL instead of just a number.
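The counting logic itself is independent of PDO; here is a small Python sketch of the same idea, with an in-memory set standing in for the database lookups (all names are made up for illustration).

```python
def unique_slug(slug, taken):
    """Return slug unchanged if free, else append -1, -2, ... until free."""
    if slug not in taken:
        return slug
    i = 1
    while '{}-{}'.format(slug, i) in taken:
        i += 1
    return '{}-{}'.format(slug, i)

taken = {'title-new', 'title-new-1'}
print(unique_slug('title-new', taken))  # title-new-2
print(unique_slug('title-12', taken))   # title-12
```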
When saving the post title, you can query the db to check whether it exists. If it exists, simply append a number and query again; if that also exists, increment the number by one, and so on.
How to manage duplicate SQL values that have to be unique?
[ "", "sql", "duplicates", "" ]
My response back from MongoDB, after querying an aggregation function on a document using Python, returns a valid response; I can print it but cannot return it.

Error:

```
TypeError: ObjectId('51948e86c25f4b1d1c0d303c') is not JSON serializable
```

Print:

```
{'result': [{'_id': ObjectId('51948e86c25f4b1d1c0d303c'), 'api_calls_with_key': 4, 'api_calls_per_day': 0.375, 'api_calls_total': 6, 'api_calls_without_key': 2}], 'ok': 1.0}
```

But when I try to return:

```
TypeError: ObjectId('51948e86c25f4b1d1c0d303c') is not JSON serializable
```

It is a RESTful call:

```
@appv1.route('/v1/analytics')
def get_api_analytics():
    # get handle to collections in MongoDB
    statistics = sldb.statistics
    objectid = ObjectId("51948e86c25f4b1d1c0d303c")
    analytics = statistics.aggregate([
        {'$match': {'owner': objectid}},
        {'$project': {'owner': "$owner",
                      'api_calls_with_key': {'$cond': [{'$eq': ["$apikey", None]}, 0, 1]},
                      'api_calls_without_key': {'$cond': [{'$ne': ["$apikey", None]}, 0, 1]}
                      }},
        {'$group': {'_id': "$owner",
                    'api_calls_with_key': {'$sum': "$api_calls_with_key"},
                    'api_calls_without_key': {'$sum': "$api_calls_without_key"}
                    }},
        {'$project': {'api_calls_with_key': "$api_calls_with_key",
                      'api_calls_without_key': "$api_calls_without_key",
                      'api_calls_total': {'$add': ["$api_calls_with_key", "$api_calls_without_key"]},
                      'api_calls_per_day': {'$divide': [{'$add': ["$api_calls_with_key", "$api_calls_without_key"]},
                                                        {'$dayOfMonth': datetime.now()}]},
                      }}
    ])

    print(analytics)
    return analytics
```

The db is well connected and the collection is there too, and I get back the valid expected result, but when I try to return it I get the JSON error. Any idea how to convert the response into JSON? Thanks.
You should define your own [`JSONEncoder`](http://docs.python.org/2/library/json.html#json.JSONEncoder) and use it:

```
import json
from bson import ObjectId

class JSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, ObjectId):
            return str(o)
        return json.JSONEncoder.default(self, o)

JSONEncoder().encode(analytics)
```

It's also possible to use it in the following way:

```
json.dumps(analytics, cls=JSONEncoder)
```
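If subclassing `JSONEncoder` feels heavy, `json.dumps` also accepts a `default=` callable that is invoked for any non-serializable value. Since `bson` may not be installed here, the sketch below uses a stand-in class in place of `ObjectId`, purely for illustration of the hook.

```python
import json

class FakeObjectId:
    """Stand-in for bson.ObjectId, used only to demonstrate the hook."""
    def __init__(self, s):
        self.s = s
    def __str__(self):
        return self.s

doc = {'_id': FakeObjectId('51948e86c25f4b1d1c0d303c'), 'ok': 1.0}
encoded = json.dumps(doc, default=str)  # str() is called for non-serializable values
print(encoded)  # {"_id": "51948e86c25f4b1d1c0d303c", "ok": 1.0}
```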
[Bson](https://api.mongodb.com/python/3.4.0/api/bson/index.html "Pymongo") in the PyMongo distribution provides [json\_util](https://api.mongodb.com/python/3.4.0/api/bson/json_util.html) - you can use that one instead to handle BSON types:

```
from bson import json_util

def parse_json(data):
    return json.loads(json_util.dumps(data))
```
TypeError: ObjectId('') is not JSON serializable
[ "", "python", "json", "mongodb", "flask", "" ]
So I'm doing homework for a class and I have been stuck on this problem for days; apparently I'm not as good at Google as I need to be. Here it is:

"Change the StoreReps table so that NULL values can't be entered in the first and last name columns."

My code (does not work):

```
Alter Table StoreRep
Modify lastname Not Null,
Modify firstname Not Null;
```

My code (does work, but I need to be able to change both columns at the same time):

```
Alter Table StoreRep
Modify lastname Not Null;
```
```
Alter Table StoreRep
Modify (lastname Not Null, firstname Not Null);
```
If you're using MySQL, you also need to specify the type:

```
alter table StoreRep
modify firstname varchar(50) not null,
modify lastname varchar(50) not null;
```

[See this demo](http://www.sqlfiddle.com/#!2/22d5b).
Simple SQL, Setting default value for multiple columns in a table
[ "", "sql", "null", "" ]
I have a string: `"y, i agree with u."` And I have an array dictionary `[(word_will_replace, [word_will_be_replaced])]`:

`[('yes', ['y', 'ya', 'ye']), ('you', ['u', 'yu'])]`

I want to replace ***'y' with 'yes'*** and ***'u' with 'you'*** according to the array dictionary. So the result I want is:

`"yes, i agree with you."`

I want to keep the punctuation there.
```
import re

s = "y, i agree with u. yu."
l = [('yes', ['y', 'ya', 'ye']), ('you', ['u', 'yu'])]
d = {k: "\\b(?:" + "|".join(v) + ")\\b" for k, v in l}
for k, r in d.items():
    s = re.sub(r, k, s)
print s
```

**Output**

```
yes, i agree with you. you.
```
Extending @gnibbler's answer from [Replacing substrings given a dictionary of strings-to-be-replaced as keys and replacements as values. Python](https://stackoverflow.com/questions/16516623/replacing-substrings-given-a-dictionary-of-strings-to-be-replaced-as-keys-and-re/16516892#16516892), with the tips from Raymond Hettinger in the comments implemented:

```
import re

text = "y, i agree with u."
replacements = [('yes', ['y', 'ya', 'ye']), ('you', ['u', 'yu'])]
d = {w: repl for repl, words in replacements for w in words}

def fn(match):
    return d[match.group()]

print re.sub('|'.join(r'\b{0}\b'.format(re.escape(k)) for k in d), fn, text)
```

---

```
>>> yes, i agree with you.
```
Replacing words with other words in a string in Python
[ "", "python", "regex", "string", "replace", "" ]
For example, I have 2 specific columns: id and threadId. In certain situations, I want threadId to equal id (if it's the original thread). I am unsure how to go about this, since id is auto-incremented.
You should be able to do that with triggers. Use the information in this post to help you out: [Can you access the auto increment value in MySQL within one statement?](https://stackoverflow.com/questions/469009/can-you-access-the-auto-increment-value-in-mysql-within-one-statement) Before insert, get the next ID from auto\_increment and set it in the new field. However, this poses a problem: you say it should only happen "in certain situations". This means the trigger is no good for you, because it will execute every time (unless you have some extra logic in the table that allows checking whether the new field should be set, with an IF statement in the trigger). In that case your only option is to insert the row, get its newly created ID, and if necessary update it, setting the other column to this value.
Go with this query:

```
INSERT INTO table(id, threadId) VALUES (NULL, (SELECT LAST_INSERT_ID()+1));
```

Shamseer PC
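The insert-then-update pattern mentioned in the first answer can be sketched concretely. The question is about MySQL, but the snippet below uses SQLite purely for illustration; in MySQL, `cur.lastrowid` would correspond to `LAST_INSERT_ID()`.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE posts (id INTEGER PRIMARY KEY AUTOINCREMENT, threadId INTEGER)')

# Insert first, then copy the generated id into threadId.
cur = con.execute('INSERT INTO posts (threadId) VALUES (NULL)')
con.execute('UPDATE posts SET threadId = id WHERE id = ?', (cur.lastrowid,))

row = con.execute('SELECT id, threadId FROM posts').fetchone()
print(row)  # (1, 1)
```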
In MySQL, how do you put the data from an auto-incremented field into another column?
[ "", "mysql", "sql", "" ]
So I have two tables that are unrelated but share some of the same data, and I'm trying to extract the rows that don't contain certain data. In some of the entries, the EmployerNo and Payer\_ID are the same. I'd like to find those entries where these two are not the same. What would be the best way to go about doing this?

**Table 1**

```
EmployerNo, EmployerName, Address, Phone
```

**Table 2**

```
Payer_ID, PayerName, Address, Phone
```

Thanks
```
SELECT *
FROM TABLE1 T
WHERE T.EmployerNo NOT IN (
    SELECT A.EmployerNo
    FROM TABLE1 A
    INNER JOIN TABLE2 B ON A.EmployerNo = B.Payer_ID)
```
The following query will select rows from both tables where EmployerNo != Payer\_ID:

```
SELECT table1.*, table2.*
FROM table1
INNER JOIN table2 ON table1.EmployerNo != table2.Payer_ID
```
Unlike Rows in Tables with No Relation
[ "", "sql", "" ]
So I am trying to run this piece of code:

```
reader = list(csv.reader(open('mynew.csv', 'rb'), delimiter='\t'))
print reader[1]
number = [float(s) for s in reader[1]]
```

Inside reader[1] I have the following values:

```
'5/1/2013 21:39:00.230', '46.09', '24.76', '0.70', '0.53', '27.92',
```

I am trying to store each of the values into an array, like so:

```
number[0] = 46.09
number[1] = 24.76
and so on.....
```

My question is: how would I skip the date and store only legitimate floats? Or store the contents, separated by commas, in an array? It throws an error when I try to run the code above:

```
ValueError: invalid literal for float(): 5/1/2013 21:39:00.230
```

Thanks!
Just skip values which cannot be converted to float:

```
number = []
for s in reader[1]:
    try:
        number.append(float(s))
    except ValueError:
        pass
```
If it's always the first value that's not a float, you can take it out by doing:

```
reader = list(csv.reader(open('mynew.csv', 'rb'), delimiter='\t'))
print reader[1]
number = [float(s) for s in reader[1][1:]]
```
Converting a string to floats error
[ "", "python", "arrays", "csv", "" ]
I have a part in my code, shown below, where a file name is supplied to the loop iteratively. I want to make sure that no two files with the same name get processed (to avoid duplicate processing), so I used a set, as below. However, this does not work as expected: I get an empty processed\_set and the logic is not executed as expected.

```
else:
    create_folder(filename)
    processed_set=set()
    if xyz ==1:
        if filename not in processed_set:
            createdata(name)
            processed_set.add(filename)
        else:
            avoid_double_process(name)
```
From what I can infer from the code sample, and guessing based on the function names, what you want to do is avoid running the code if `filename` has already been processed. You would do it this way:

```
processed_set = set()  # initialize set outside of loop

for filename in filenames:  # loop over some collection of filenames
    if filename not in processed_set:  # check for non-membership
        processed_set.add(filename)  # add to set since filename wasn't in the set
        create_folder(filename)  # repositioned based on implied semantics of the name
        createdata(filename)
```

Alternatively, if `createdata` and `create_folder` are both functions you don't want to run multiple times for the same filename, you could create a filtering decorator. If you actually care about the return value, you would want to use a memoizing decorator instead.

```
def run_once(f):
    f.processed = set()
    def wrapper(filename):
        if filename not in f.processed:
            f.processed.add(filename)
            f(filename)
    return wrapper
```

Then include `@run_once` on the line prior to the function definitions for the functions you only want to run once.
Why don't you build your set first, and then process the files in the set afterwards? A set will not add the same element if it is already present:

```
>>> myset = { element for element in ['abc', 'def', 'ghi', 'def'] }
>>> myset
set(['abc', 'ghi', 'def'])
```
Using set to avoid duplicate processing
[ "", "python", "" ]
I have two lists that I am concatenating using `listA.extend(listB)`. What I need to achieve when I extend listA is to concatenate the last element of listA with the first element of listB. An example of my lists is below.

End of listA: `... '1633437.0413', '5417978.6108', '1633433.2865', '54']`

Start of listB: `['79770.3904', '1633434.364', '5417983.127', '1633435.2672', ...`

Obviously when I extend I get the below (note the '54'):

```
'5417978.6108', '1633433.2865', '54', '79770.3904', '1633434.364', '5417983.127'
```

Below is what I want to achieve, where the last and first elements are concatenated:

```
[... '5417978.6108', '1633433.2865', '5479770.3904', '1633434.364', '5417983.127' ...]
```

Any ideas?
You can achieve that in two steps:

```
A[-1] += B[0]   # update the last element of A to tag on the contents of B[0]
A.extend(B[1:]) # extend A with B but exclude the first element
```

Example:

```
>>> A = ['1633437.0413', '5417978.6108', '1633433.2865', '54']
>>> B = ['79770.3904', '1633434.364', '5417983.127', '1633435.2672']
>>> A[-1] += B[0]
>>> A.extend(B[1:])
>>> A
['1633437.0413', '5417978.6108', '1633433.2865', '5479770.3904', '1633434.364', '5417983.127', '1633435.2672']
```
```
newlist = listA[:-1] + [listA[-1] + listB[0]] + listB[1:]
```

or, if you want to extend listA "in place":

```
listA[-1:] = [listA[-1] + listB[0]] + listB[1:]
```
Joining two lists whereby the last element of list "A" and first element of list "B" are concatenated
[ "", "python", "list", "" ]
I'm trying to log in to my university's server via Python, but I'm entirely unsure of how to go about generating the appropriate HTTP POSTs, creating the keys and certificates, and other parts of the process I may be unfamiliar with that are required to comply with the SAML spec. I can log in with my browser just fine, but I'd like to be able to log in and access other content within the server using Python. For reference, [here is the site](https://idp-prod.cc.ucf.edu/idp/Authn/UserPassword). I've tried logging in using mechanize (selecting the form, populating the fields, clicking the submit button control via mechanize.Browser.submit(), etc.) to no avail; the login site gets spat back each time. At this point, I'm open to implementing a solution in whichever language is most suitable. Basically, I want to programmatically log in to a SAML-authenticated server.
Basically, what you have to understand is the workflow behind a SAML authentication process. Unfortunately, there is no PDF out there which really provides good help in finding out what the browser does when accessing a SAML-protected website. Maybe you should take a look at something like this: <http://www.docstoc.com/docs/33849977/Workflow-to-Use-Shibboleth-Authentication-to-Sign> and obviously at this: <http://en.wikipedia.org/wiki/Security_Assertion_Markup_Language>. In particular, focus your attention on this scheme:

![enter image description here](https://i.stack.imgur.com/LcVqI.png)

What I did when I was trying to understand SAML's way of working, since the documentation was **so** poor, was to write down (yes! on paper) all the steps the browser performed from the first to the last. I used Opera, set to **not** allow automatic redirects (300, 301, 302 response codes, and so on) and with Javascript disabled. Then I wrote down all the cookies the server was sending me, what was doing what, and for what reason. Maybe it was way too much effort, but in this way I was able to write a library, in Java, which is suited for the job, and incredibly fast and efficient too. Maybe someday I will release it publicly...

What you should understand is that in a SAML login there are two actors playing: the IDP (identity provider) and the SP (service provider).

### A. FIRST STEP: the user agent requests the resource from the SP

I'm quite sure that you reached the link you reference in your question from another page, by clicking on something like "Access to the protected website". If you pay some more attention, you'll notice that the link you followed is **not** the one on which the authentication form is displayed. That's because clicking that link is a *step* of the SAML flow. The first step, actually. It allows the IDP to define who you are, and why you are trying to access its resource.
So, basically, what you'll need to do is make a request to the link you followed in order to reach the web form, and get the cookies it sets. What you won't see is the SAMLRequest string, encoded into the 302 redirect behind the link, sent to the IDP when making the connection. I think that's the reason why you can't mechanize the whole process: you simply connected to the form, with no identity identification done!

### B. SECOND STEP: filling the form, and submitting it

This one is easy. Please be careful! The cookies that are **now** set are not the same as the cookies above. You're now connecting to an utterly different website. That's the reason why SAML is used: *different website, same credentials*. So you may want to store these authentication cookies, provided by a successful login, in a different variable. The IDP is now going to send you back a response (after the SAMLRequest): the SAMLResponse. You have to detect it by getting the source code of the page the login ends on. In fact, this page is a big form containing the response, with some JS code which automatically submits it when the page loads. You have to get the source code of the page, parse it, strip away the surrounding HTML, and get the SAMLResponse (encrypted).

### C. THIRD STEP: sending the response back to the SP

Now you're ready to end the procedure. You have to send (via POST, since you're emulating a form) the SAMLResponse obtained in the previous step to the SP. In this way, it will provide the cookies needed to access the protected stuff you want.

Aaaaand, you're done!

Again, I think that the most precious thing you'll have to do is use Opera and analyze ALL the redirects SAML does. Then replicate them in your code. It's not that difficult; just keep in mind that the IDP is utterly different from the SP.
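Step B above ends with an auto-submitting HTML form that carries the encoded SAMLResponse. Pulling it out needs nothing more than an HTML parser; the snippet below is a minimal stdlib sketch, fed with made-up sample HTML (a real IdP response will have different field values and action URLs).

```python
from html.parser import HTMLParser

class SAMLFormParser(HTMLParser):
    """Collect name/value pairs from <input> tags in the IdP's form."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            a = dict(attrs)
            if 'name' in a:
                self.fields[a['name']] = a.get('value', '')

# Illustrative stand-in for the auto-submitting form returned after login.
html = ('<form method="post" action="https://sp.example.edu/Shibboleth.sso/SAML2/POST">'
        '<input type="hidden" name="RelayState" value="cookie:abc"/>'
        '<input type="hidden" name="SAMLResponse" value="PHNhbWxwOlJlc3BvbnNlPg=="/>'
        '</form>')

p = SAMLFormParser()
p.feed(html)
print(sorted(p.fields))  # ['RelayState', 'SAMLResponse']
```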
Selenium with the headless PhantomJS webkit will be your best bet for logging into Shibboleth, because it handles cookies and even Javascript for you.

### Installation:

```
$ pip install selenium
$ brew install phantomjs
```

---

```
from selenium import webdriver
from selenium.webdriver.support.ui import Select  # for <SELECT> HTML form

driver = webdriver.PhantomJS()
# On Windows, use: webdriver.PhantomJS('C:\phantomjs-1.9.7-windows\phantomjs.exe')

# Service selection
# Here I had to select my school among others
driver.get("http://ent.unr-runn.fr/uPortal/")
select = Select(driver.find_element_by_name('user_idp'))
select.select_by_visible_text('ENSICAEN')
driver.find_element_by_id('IdPList').submit()

# Login page (https://cas.ensicaen.fr/cas/login?service=https%3A%2F%2Fshibboleth.ensicaen.fr%2Fidp%2FAuthn%2FRemoteUser)
# Fill the login form and submit it
driver.find_element_by_id('username').send_keys("myusername")
driver.find_element_by_id('password').send_keys("mypassword")
driver.find_element_by_id('fm1').submit()

# Now connected to the home page
# Click on 3 links in order to reach the page I want to scrape
driver.find_element_by_id('tabLink_u1240l1s214').click()
driver.find_element_by_id('formMenu:linknotes1').click()
driver.find_element_by_id('_id137Pluto_108_u1240l1n228_50520_:tabledip:0:_id158Pluto_108_u1240l1n228_50520_').click()

# Select and print an interesting element by its ID
page = driver.find_element_by_id('_id111Pluto_108_u1240l1n228_50520_:tableel:tbody_element')
print page.text
```

---

### Note:

* during development, use Firefox to preview what you are doing: `driver = webdriver.Firefox()`
* this script is provided as-is, with the corresponding links, so you can compare each line of code with the actual source code of the pages (until login at least).
Logging into SAML/Shibboleth authenticated server using python
[ "", "python", "authentication", "saml", "saml-2.0", "shibboleth", "" ]
I built a web service using Tornado and it serves day and night. I used this command to start my service:

```
nohup python my_service.py &
```

The service log is written to `nohup.out`. However, the file gets bigger as time goes on. I want to know how I can manage it more conveniently; for example, with an automatic method that generates log files with proper names and sizes, such as:

```
service_log_1.txt
service_log_2.txt
service_log_3.txt
...
```

Thanks.
@jujaro's answer is quite helpful and I tried the `logging` module in my web service. However, there are still some restrictions to using logging in `Tornado`; see this other [question](https://stackoverflow.com/questions/16683735/how-to-logging-with-timed-rotate-file-handler-in-tornado/16697731#16697731) I asked. As a result, I used `crontab` in Linux to create a cron job that runs at midnight (use `crontab -e` in a Linux shell):

```
59 23 * * * source /home/zfz/cleanlog.sh
```

This cron job launches my script `cleanlog.sh` at 23:59 every day. The contents of `cleanlog.sh`:

```
fn=$(date +%F_service_log.out)
cat /home/zfz/nohup.out >> "/home/zfz/log/$fn"
echo '' > /home/zfz/nohup.out
```

This script creates a log file named with the current date, and `echo ''` clears `nohup.out` so it doesn't grow too large. Here are my log files split from nohup.out so far:

```
-rw-r--r-- 1 zfz zfz 54474342 May 22 23:59 2013-05-22_service_log.out
-rw-r--r-- 1 zfz zfz 23481121 May 23 23:59 2013-05-23_service_log.out
```
Yes, there is. Put a cron job in effect which truncates the file (with something like `cat /dev/null > nohup.out`). How often you have to run this job depends on how much output your process generates. But if you do not need the output of the job at all (maybe it's garbage anyway; only you can answer that) you can prevent writing to the file `nohup.out` in the first place. Right now you start the process like this:

```
nohup command &
```

Replace this with:

```
nohup command 2>/dev/null 1>/dev/null &
```

and the file nohup.out won't even get created.

The reason why the output of the process is directed to a file is this: normally all processes (that is, commands you enter from the command line; there are exceptions, but they don't matter here) are attached to a terminal. By default (this is how Unix handles it) this is something which can display text and is connected to the host via a serial line. If you enter a command and switch off the terminal you entered it from, the process gets terminated too, because it lost its terminal. Because in serial communication the technicians traditionally employed the words from telephone communication (where it came from), the termination of a communication was not called an "interruption" or "termination" but a "hangup". So programs got terminated on "hangups", and the program to prevent this is "nohup", the "no-termination-upon-hangup" program. But as it may well be that such an orphaned process has no terminal to write to, nohup uses the file nohup.out as a "screen replacement", redirecting output there which would normally go to the screen. If a command has no output whatsoever, though, nohup.out won't even get created.
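If the service's own output can be routed through the stdlib `logging` module instead of relying on `nohup.out`, rotation can be handled in-process; below is a minimal sketch (file names are illustrative).

```python
import logging
import logging.handlers

# Rotate at midnight, keep 7 days of backups; rotated files get a
# date suffix appended to the base name.
handler = logging.handlers.TimedRotatingFileHandler(
    'service_log.txt', when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('my_service')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('service started')
```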
How to manage nohup.out file in Tornado?
[ "", "python", "web-services", "shell", "tornado", "nohup", "" ]
I have the following SELECT, whose goal is to select all customers who have had no sales since day X, also bringing back the date of the last sale and the number of that sale:

```
select s.customerId, s.saleId, max(s.date)
from sales s
group by s.customerId, s.saleId
having max(s.date) <= '05-16-2013'
```

This brings me the following:

```
19 | 300 | 26/09/2005
19 | 356 | 29/09/2005
27 | 842 | 10/05/2012
```

In other words, the first 2 lines are from the same customer (id 19). I wish to get only one record for each customer, which would be the record with the max date; in this case, the second record from this list. By that logic, I should take s.saleId out of the "group by" clause, but if I do, of course, I get the error:

> Invalid expression in the select list (not contained in either an aggregate function or the GROUP BY clause)

I'm using Firebird 1.5. How can I do this?
GROUP BY summarizes data by aggregating a group of rows, returning *one* row per group. You're using the aggregate function `max()`, which will return the *maximum* value from one column for a group of rows.

Let's look at some data. I renamed the column you called "date".

```
create table sales (
  customerId integer not null,
  saleId integer not null,
  saledate date not null
);

insert into sales values
  (1, 10, '2013-05-13'),
  (1, 11, '2013-05-14'),
  (1, 12, '2013-05-14'),
  (1, 13, '2013-05-17'),
  (2, 20, '2013-05-11'),
  (2, 21, '2013-05-16'),
  (2, 31, '2013-05-17'),
  (2, 32, '2013-03-01'),
  (3, 33, '2013-05-14'),
  (3, 35, '2013-05-14');
```

You said

> In another words, the first 2 lines are from the same customer (id 19), i wish to get only one record for each client, which would be the record with the max date, in the case, the second record from this list.

```
select s.customerId, max(s.saledate)
from sales s
where s.saledate <= '2013-05-16'
group by s.customerId
order by customerId;

customerId  max
--
1           2013-05-14
2           2013-05-16
3           2013-05-14
```

What does that table mean? It means that the latest date on or before May 16 on which customer "1" bought something was May 14; the latest date on or before May 16 on which customer "2" bought something was May 16. If you use this derived table in joins, it will return predictable results *with consistent meaning*.

Now let's look at a slightly different query. MySQL permits this syntax, and returns the result set below.

```
select s.customerId, s.saleId, max(s.saledate) max_sale
from sales s
where s.saledate <= '2013-05-16'
group by s.customerId
order by customerId;

customerId  saleId  max_sale
--
1           10      2013-05-14
2           20      2013-05-16
3           33      2013-05-14
```

The sale with ID "10" didn't happen on May 14; it happened on May 13. This query has produced a falsehood. Joining this derived table with the table of sales transactions will compound the error. That's why Firebird correctly raises an error.
The solution is to drop saleId from the SELECT clause. Now, having said all that, you can find the customers who have had no sales since May 16 like this. ``` select distinct customerId from sales where customerID not in (select customerId from sales where saledate >= '2013-05-16') ``` And you can get the right customerId and the "right" saleId like this. (I say "right" saleId, because there could be more than one on the day in question. I just chose the max.) ``` select sales.customerId, sales.saledate, max(saleId) from sales inner join (select customerId, max(saledate) max_date from sales where saledate < '2013-05-16' group by customerId) max_dates on sales.customerId = max_dates.customerId and sales.saledate = max_dates.max_date inner join (select distinct customerId from sales where customerID not in (select customerId from sales where saledate >= '2013-05-16')) no_sales on sales.customerId = no_sales.customerId group by sales.customerId, sales.saledate ``` Personally, I find common table expressions make it easier for me to read SQL statements like that without getting lost in the SELECTs. ``` with no_sales as ( select distinct customerId from sales where customerID not in (select customerId from sales where saledate >= '2013-05-16') ), max_dates as ( select customerId, max(saledate) max_date from sales where saledate < '2013-05-16' group by customerId ) select sales.customerId, sales.saledate, max(saleId) from sales inner join max_dates on sales.customerId = max_dates.customerId and sales.saledate = max_dates.max_date inner join no_sales on sales.customerId = no_sales.customerId group by sales.customerId, sales.saledate ```
Then you can use the following query. **EDIT:** changes made after a comment by likeitlikeit, so that only one row per customerId is returned even when a customer has multiple saleIds meeting the condition:

```
select x.customerID, max(x.saleID), max(x.x_date)
from (
    select s.customerId, s.saleId, max(s.date) x_date
    from sales s
    group by s.customerId, s.saleId
    having max(s.date) <= '05-16-2013'
       and max(s.date) = (select max(s1.date)
                          from sales s1
                          where s1.customerId = s.customerId)) x
group by x.customerID
```
How to do a group by without having to pass all the columns from the select?
[ "sql", "firebird" ]
I am looking for a Pythonic method to generate all pairwise-unique pairings (where a pairing is a system consisting of pairs, and pairwise-unique indicates that `(a,b) ≠ (b,a)`) for a set containing an even number `n` of items. I like the code from [here](https://stackoverflow.com/questions/14169122/generating-all-unique-pair-permutations):

```
for perm in itertools.permutations(range(n)):
    print zip(perm[::2], perm[1::2])
```

except that it generates all order-unique, pairwise-unique pairings, or `(n/2)!` times more pairings than I want (redundancy), which, although I can filter them out, really bogs down my program at large `n`. That is, for `n = 4`, I am looking for the following output (12 unique pairings):

```
[(0, 1), (2, 3)]
[(0, 1), (3, 2)]
[(1, 0), (2, 3)]
[(1, 0), (3, 2)]
[(1, 2), (0, 3)]
[(1, 2), (3, 0)]
[(1, 3), (0, 2)]
[(2, 0), (1, 3)]
[(2, 0), (3, 1)]
[(3, 1), (0, 2)]
[(0, 3), (2, 1)]
[(3, 0), (2, 1)]
```

Note that `(a,b) ≠ (b,a)`. Is this possible? I am also okay with a function that generates the 3 non–pairwise-unique pairings for `n = 4` where `(a,b) = (b,a)`, as it is straightforward to permute what I need from there. **My main goal is to avoid the superfluous permutations on the order of the pairs in the pairing.** Thanks in advance for your help and suggestions—I really appreciate it.
I think this gives you the fundamental pairings that you need: 1 when `N=2`; 3 when `N=4`; 15 when `N=6`; 105 when `N=8`, etc.

```
import sys

def pairings(remainder, partial = None):
    partial = partial or []
    if len(remainder) == 0:
        yield partial
    else:
        for i in xrange(1, len(remainder)):
            pair = [[remainder[0], remainder[i]]]
            r1 = remainder[1:i]
            r2 = remainder[i+1:]
            for p in pairings(r1 + r2, partial + pair):
                yield p

def main():
    n = int(sys.argv[1])
    items = list(range(n))
    for p in pairings(items):
        print p

main()
```
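As a quick sanity check of the counts claimed above (1, 3, 15, 105, ...), here is a Python 3 port of the same recursion (`xrange` becomes `range`); the counts follow the pattern (N-1) * (N-3) * ... * 1:

```python
def pairings(remainder, partial=None):
    # Same recursion as above, ported to Python 3.
    partial = partial or []
    if len(remainder) == 0:
        yield partial
    else:
        for i in range(1, len(remainder)):
            pair = [[remainder[0], remainder[i]]]
            rest = remainder[1:i] + remainder[i + 1:]
            for p in pairings(rest, partial + pair):
                yield p

for n in (2, 4, 6, 8):
    print(n, sum(1 for _ in pairings(list(range(n)))))
# 2 1
# 4 3
# 6 15
# 8 105
```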
In the linked question ["Generating all unique pair permutations"](https://stackoverflow.com/questions/14169122/generating-all-unique-pair-permutations), an algorithm is given to generate a round-robin schedule for any given *n*. That is, each possible set of matchups/pairings for *n* teams. So for n = 4 (assuming exclusive), that would be:

```
[0, 3], [1, 2]
[0, 2], [3, 1]
[0, 1], [2, 3]
```

Now that we've got each of these partitions, we just need to find their permutations in order to get the full list of pairings. i.e. [0, 3], [1, 2] is a member of a group of four: **[0, 3], [1, 2]** (itself) and **[3, 0], [1, 2]** and **[0, 3], [2, 1]** and **[3, 0], [2, 1]**.

To get all the members of a group from one member, you take the permutations where each pair can be either flipped or not flipped (if they were, for example, n-tuples instead of pairs, then there would be n! options for each one). So because you have two pairs with two options each, each partition yields 2 ^ 2 pairings. So you have 12 altogether.

Code to do this, where `round_robin(n)` returns a list of lists of pairs. So `round_robin(4)` --> [[[0, 3], [1, 2]], [[0, 2], [3, 1]], [[0, 1], [2, 3]]].

```
def pairs(n):
    for elem in round_robin(n):
        for first in [elem[0], elem[0][::-1]]:
            for second in [elem[1], elem[1][::-1]]:
                print (first, second)
```

This method generates fewer than you want and then builds up, instead of generating more than you want and getting rid of a bunch, so it should be more efficient. (`[::-1]` is voodoo for reversing a list immutably.)
And here's the round-robin algorithm from the other posting (written by Theodros Zelleke) ``` from collections import deque def round_robin_even(d, n): for i in range(n - 1): yield [[d[j], d[-j-1]] for j in range(n/2)] d[0], d[-1] = d[-1], d[0] d.rotate() def round_robin_odd(d, n): for i in range(n): yield [[d[j], d[-j-1]] for j in range(n/2)] d.rotate() def round_robin(n): d = deque(range(n)) if n % 2 == 0: return list(round_robin_even(d, n)) else: return list(round_robin_odd(d, n)) ```
Python: Generating All Pairwise-Unique Pairings
[ "python", "python-itertools" ]
I'm trying to find a way to print a string in hexadecimal. For example, I have this string which I then convert to its hexadecimal value. ``` my_string = "deadbeef" my_hex = my_string.decode('hex') ``` How can I print `my_hex` as `0xde 0xad 0xbe 0xef`? To make my question clear... Let's say I have some data like `0x01, 0x02, 0x03, 0x04` stored in a variable. Now I need to print it in hexadecimal so that I can read it. I guess I am looking for a Python equivalent of `printf("%02x", my_hex)`. I know there is `print '{0:x}'.format()`, but that won't work with `my_hex` and it also won't pad with zeroes.
You mean you have a string of *bytes* in `my_hex` which you want to print out as hex numbers, right? E.g., let's take your example:

```
>>> my_string = "deadbeef"
>>> my_hex = my_string.decode('hex')  # python 2 only
>>> print my_hex
Þ ­ ¾ ï
```

This construction only works on Python 2; but you could write the same string as a literal, in either Python 2 or Python 3, like this:

```
my_hex = "\xde\xad\xbe\xef"
```

So, to the answer. Here's one way to print the bytes as hex integers:

```
>>> print " ".join(hex(ord(n)) for n in my_hex)
0xde 0xad 0xbe 0xef
```

The comprehension breaks the string into bytes, `ord()` converts each byte to the corresponding integer, and `hex()` formats each integer in the form `0x##`. Then we add spaces in between.

Bonus: If you use this method with unicode strings (or Python 3 strings), the comprehension will give you unicode characters (not bytes), and you'll get the appropriate hex values even if they're larger than two digits.

### Addendum: Byte strings

In Python 3 it is more likely you'll want to do this with a byte string; in that case, the comprehension already returns ints, so you have to leave out the `ord()` part and simply call `hex()` on them:

```
>>> my_hex = b'\xde\xad\xbe\xef'
>>> print(" ".join(hex(n) for n in my_hex))
0xde 0xad 0xbe 0xef
```
Another answer, using the later print/format style (note that `res` must exist before you can assign to its elements):

```
res = [12, 23]
print("my num is 0x{0:02x}{1:02x}".format(res[0], res[1]))
```
Print a variable in hexadecimal in Python
[ "python", "string", "hex" ]
My database schema is as follows: table X has the following 3 columns: docid (document id), terms (terms in the document), and count (the number of occurrences of the term for a specific docid).

```
docid terms count
```

How do I write a query to find documents which contain both the words 'hello' and 'hi' in the terms column?
```
Select DocId 
FROM TableName
where Term IN ('hello','hi')
Group by DocId
Having Count(*)=2;
```

The `DISTINCT` keyword in the `HAVING` clause is much preferred if `Term` is not unique for every `DocId`:

```
Select DocId 
FROM TableName
where Term IN ('hello','hi')
Group by DocId
Having Count(DISTINCT Term)=2;
```
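If you want to see the difference the `DISTINCT` makes without a real server, the same queries run under SQLite via Python's `sqlite3` module. The rows below are invented for illustration; doc 3 deliberately contains 'hello' twice, so `Count(*)=2` would wrongly match it while `Count(DISTINCT Term)=2` does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE X (docid INTEGER, term TEXT)")
conn.executemany("INSERT INTO X VALUES (?, ?)",
                 [(1, "hello"), (1, "hi"),      # doc 1: both words
                  (2, "hello"),                 # doc 2: only one word
                  (3, "hello"), (3, "hello")])  # doc 3: 'hello' twice

distinct_query = """
    SELECT docid FROM X
    WHERE term IN ('hello', 'hi')
    GROUP BY docid
    HAVING COUNT(DISTINCT term) = 2
"""
print(sorted(r[0] for r in conn.execute(distinct_query)))  # [1]

naive_query = distinct_query.replace("COUNT(DISTINCT term)", "COUNT(*)")
print(sorted(r[0] for r in conn.execute(naive_query)))     # [1, 3]
```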
Try it like this:

```
Select DocId, Count(term)
from TableName
where Term='hello' or Term='hi'
Group by DocId
Having Count(Distinct term)=2;
```
How to write this query to show result for one column based on specific values of another column?
[ "sql" ]
What is the most elegant way to go through a sorted list by its first index?

Input:

```
Meni22 xxxx xxxx
Meni32_2 xxxx xxxx
Meni32_2 xxxx xxxx
Meni45_1 xxxx xxxx
Meni45_1 xxxx xxxx
Meni45 xxxx xxxx
```

Is it to go through it line by line:

```
list1 = []
list2 = []
for line in input:
    if line[0] not in list1:
        list1.append(line)
    else:
        list2.append(line)
```

The example above obviously won't work. It adds the first match of line[0] and continues. I would rather have it go through the list, add to list1 the lines that it finds only once, and the rest to list2.

After the script:

```
List1:
Meni22 xxxx xxxx
Meni45 xxxx xxxx

List2:
Meni45_1 xxxx xxxx
Meni45_1 xxxx xxxx
Meni32_2 xxxx xxxx
Meni32_2 xxxx xxxx
```
You can use `collections.Counter`: ``` from collections import Counter lis1 = [] lis2 = [] with open("abc") as f: c = Counter(line.split()[0] for line in f) for key,val in c.items(): if val == 1: lis1.append(key) else: lis2.extend([key]*val) print lis1 print lis2 ``` **output:** ``` ['Meni45', 'Meni22'] ['Meni32_2', 'Meni32_2', 'Meni45_1', 'Meni45_1'] ``` **Edit:** ``` from collections import defaultdict lis1 = [] lis2 = [] with open("abc") as f: dic = defaultdict(list) for line in f: spl =line.split() dic[spl[0]].append(spl[1:]) for key,val in dic.items(): if len(val) == 1: lis1.append(key) else: lis2.append(key) print lis1 print lis2 print dic["Meni32_2"] #access columns related to any key from the the dict ``` **output:** ``` ['Meni45', 'Meni22'] ['Meni32_2', 'Meni45_1'] [['xxxx', 'xxxx'], ['xxxx', 'xxxx']] ```
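The same `Counter` logic can be checked without a file by inlining the sample lines from the question as a string (the results are sorted in the checks because dict iteration order shouldn't be relied upon here):

```python
from collections import Counter

text = """Meni22 xxxx xxxx
Meni32_2 xxxx xxxx
Meni32_2 xxxx xxxx
Meni45_1 xxxx xxxx
Meni45_1 xxxx xxxx
Meni45 xxxx xxxx"""

counts = Counter(line.split()[0] for line in text.splitlines())
singles = sorted(key for key, val in counts.items() if val == 1)
repeats = sorted(key for key, val in counts.items() for _ in range(val) if val > 1)
print(singles)  # ['Meni22', 'Meni45']
print(repeats)  # ['Meni32_2', 'Meni32_2', 'Meni45_1', 'Meni45_1']
```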
Since the file is sorted, you can use `groupby` ``` from itertools import groupby list1, list2 = res = [], [] with open('file1.txt', 'rb') as fin: for k,g in groupby(fin, key=lambda x:x.partition(' ')[0]): g = list(g) res[len(g) > 1] += g ``` Or if you prefer this longer version ``` from itertools import groupby list1, list2 = [], [] with open('file1.txt', 'rb') as fin: for k,g in groupby(fin, key=lambda x:x.partition(' ')[0]): g = list(g) if len(g) > 1: list2 += g else: list1 += g ```
compare file line by line python
[ "python", "list", "compare" ]
I generated a private and a public key using OpenSSL with the following commands: ``` openssl genrsa -out private_key.pem 512 openssl rsa -in private_key.pem -pubout -out public_key.pem ``` I then tried to load them with a python script using Python-RSA: ``` import os import rsa with open('private_key.pem') as privatefile: keydata = privatefile.read() privkey = rsa.PrivateKey.load_pkcs1(keydata,'PEM') with open('public_key.pem') as publicfile: pkeydata = publicfile.read() pubkey = rsa.PublicKey.load_pkcs1(pkeydata) random_text = os.urandom(8) #Generate signature signature = rsa.sign(random_text, privkey, 'MD5') print signature #Verify token try: rsa.verify(random_text, signature, pubkey) except: print "Verification failed" ``` My python script fails when it tries to load the public key: ``` ValueError: No PEM start marker "-----BEGIN RSA PUBLIC KEY-----" found ```
Python-RSA uses the PEM RSAPublicKey format, and the PEM RSAPublicKey format uses these header and footer lines (see the [openssl NOTES](http://www.openssl.org/docs/apps/rsa.html#NOTES)):

```
-----BEGIN RSA PUBLIC KEY-----
-----END RSA PUBLIC KEY-----
```

Output the public part of a private key in RSAPublicKey format (from the openssl EXAMPLES section):

```
openssl rsa -in key.pem -RSAPublicKey_out -out pubkey.pem
```
On Python 3, you also need to open the key file in binary mode, e.g.:

```
with open('private_key.pem', 'rb') as privatefile:
```
How to load a public RSA key into Python-RSA from a file?
[ "python", "openssl", "rsa", "pem" ]
I'm trying to use `genfromtxt` with Python3 to read a simple *csv* file containing strings and numbers. For example, something like (hereinafter "test.csv"):

```
1,a
2,b
3,c
```

With Python2, the following works well:

```
import numpy
data=numpy.genfromtxt("test.csv", delimiter=",", dtype=None)
# Now data is something like [(1, 'a') (2, 'b') (3, 'c')]
```

In Python3 the same code returns `[(1, b'a') (2, b'b') (3, b'c')]`. This is somehow [expected](http://docs.python.org/3.0/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit) due to the different way Python3 reads the files. Therefore I use a converter to decode the strings:

```
decodef = lambda x: x.decode("utf-8")
data=numpy.genfromtxt("test.csv", delimiter=",", dtype="f8,S8", converters={1: decodef})
```

This works with Python2, but not with Python3 (same `[(1, b'a') (2, b'b') (3, b'c')]` output). However, if in Python3 I use the code above to read only one column:

```
data=numpy.genfromtxt("test.csv", delimiter=",", usecols=(1,), dtype="S8", converters={1: decodef})
```

the output strings are `['a' 'b' 'c']`, already decoded as expected.

I've also tried to provide the file as the output of an `open` with the `'rb'` mode, as suggested at [this link](http://www.gossamer-threads.com/lists/python/python/978888), but there are no improvements.

Why does the converter work when only one column is read, and not when two columns are read? Could you please suggest the correct way to use `genfromtxt` in Python3? Am I doing something wrong? Thank you in advance!
The answer to my problem is using the `dtype` for unicode strings (`U2`, for example). Thanks to E.Kehler's answer, I found the solution. If I use `str` in place of `S8` in the `dtype` definition, then the output for the 2nd column is empty:

```
numpy.genfromtxt("test.csv", delimiter=",", dtype='f8,str')
```

the output is:

```
array([(1.0, ''), (2.0, ''), (3.0, '')], 
      dtype=[('f0', '<f16'), ('f1', '<U0')])
```

This suggested to me that the correct `dtype` to solve my problem is a unicode string:

```
numpy.genfromtxt("test.csv", delimiter=",", dtype='f8,U2')
```

that gives the expected output:

```
array([(1.0, 'a'), (2.0, 'b'), (3.0, 'c')], 
      dtype=[('f0', '<f16'), ('f1', '<U2')])
```

Useful information can also be found at [the numpy datatype doc page](http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html).
In Python 3, writing `dtype="S8"` (or any variation of `"S#"`) in NumPy's genfromtxt yields a byte string. To avoid this and get just an old-fashioned string, write `dtype=str` instead.
numpy genfromtxt issues in Python3
[ "python", "numpy", "python-3.x", "genfromtxt" ]
I have to convert a numpy array of floats to a string (to store in a SQL DB) and then also convert the same string back into a numpy float array.

This is how I'm going to a string ([based on this article](http://www.skymind.com/~ocrow/python_string/)):

```
VIstring = ''.join(['%.5f,' % num for num in VI])
VIstring= VIstring[:-1] #Get rid of the last comma
```

So firstly, this does work; is it a good way to go? Is there a better way to get rid of that last comma? Or can I get the `join` method to insert the commas for me?

Then secondly, and more importantly, is there a clever way to get from the string back to a float array?

Here is an example of the array and the string:

```
VI
array([ 17.95024446,  17.51670904,  17.08894626,  16.66695611,  16.25073861,
        15.84029374,  15.4356215 ,  15.0367219 ,  14.64359494,  14.25624062,
        13.87465893,  13.49884988,  13.12881346,  12.76454968,  12.40605854,
        12.00293814,  11.96379322,  11.96272486,  11.96142533,  11.96010489,
        11.95881595,  12.26924591,  12.67548634,  13.08158864,  13.4877041 ,
        13.87701221,  14.40238245,  14.94943786,  15.49364166,  16.03681428,
        16.5498035 ,  16.78362298,  16.90331119,  17.02299387,  17.12193689,
        17.09448654,  17.00066063,  16.9300633 ,  16.97229868,  17.2169709 ,
        17.75368411])
VIstring
'17.95024,17.51671,17.08895,16.66696,16.25074,15.84029,15.43562,15.03672,14.64359,14.25624,13.87466,13.49885,13.12881,12.76455,12.40606,12.00294,11.96379,11.96272,11.96143,11.96010,11.95882,12.26925,12.67549,13.08159,13.48770,13.87701,14.40238,14.94944,15.49364,16.03681,16.54980,16.78362,16.90331,17.02299,17.12194,17.09449,17.00066,16.93006,16.97230,17.21697,17.75368'
```

Oh yes, and the loss of precision from the `%.5f` is totally fine: these values are interpolated, and the original points only have 4 decimal place precision, so I don't need to beat that. So when recovering the numpy array, I'm happy to only get 5 decimal place precision (obviously, I suppose).
First you should use `join` this way to avoid the last comma issue: ``` VIstring = ','.join(['%.5f' % num for num in VI]) ``` Then to read it back, use `numpy.fromstring`: ``` np.fromstring(VIstring, sep=',') ```
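If you just want to sanity-check the string format itself (or don't have numpy at hand), the same round trip can be sketched in plain Python; this is only an illustration of the format, not a replacement for `numpy.fromstring`:

```python
values = [17.95024446, 17.51670904, 17.08894626]

# Serialize: '%.5f' formatting, joined with commas (no trailing comma).
s = ','.join('%.5f' % v for v in values)
print(s)  # 17.95024,17.51671,17.08895

# Deserialize: split on the same delimiter and parse each field as a float.
restored = [float(x) for x in s.split(',')]
print(restored)  # [17.95024, 17.51671, 17.08895]
```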
``` >>> import numpy as np >>> from cStringIO import StringIO >>> VI = np.array([ 17.95024446, 17.51670904, 17.08894626, 16.66695611, 16.25073861, 15.84029374, 15.4356215 , 15.0367219 , 14.64359494, 14.25624062, 13.87465893, 13.49884988, 13.12881346, 12.76454968, 12.40605854, 12.00293814, 11.96379322, 11.96272486, 11.96142533, 11.96010489, 11.95881595, 12.26924591, 12.67548634, 13.08158864, 13.4877041 , 13.87701221, 14.40238245, 14.94943786, 15.49364166, 16.03681428, 16.5498035 , 16.78362298, 16.90331119, 17.02299387, 17.12193689, 17.09448654, 17.00066063, 16.9300633 , 16.97229868, 17.2169709 , 17.75368411]) >>> s = StringIO() >>> np.savetxt(s, VI, fmt='%.5f', newline=",") >>> s.getvalue() '17.95024,17.51671,17.08895,16.66696,16.25074,15.84029,15.43562,15.03672,14.64359,14.25624,13.87466,13.49885,13.12881,12.76455,12.40606,12.00294,11.96379,11.96272,11.96143,11.96010,11.95882,12.26925,12.67549,13.08159,13.48770,13.87701,14.40238,14.94944,15.49364,16.03681,16.54980,16.78362,16.90331,17.02299,17.12194,17.09449,17.00066,16.93006,16.97230,17.21697,17.75368,' >>> np.fromstring(s.getvalue(), sep=',') array([ 17.95024, 17.51671, 17.08895, 16.66696, 16.25074, 15.84029, 15.43562, 15.03672, 14.64359, 14.25624, 13.87466, 13.49885, 13.12881, 12.76455, 12.40606, 12.00294, 11.96379, 11.96272, 11.96143, 11.9601 , 11.95882, 12.26925, 12.67549, 13.08159, 13.4877 , 13.87701, 14.40238, 14.94944, 15.49364, 16.03681, 16.5498 , 16.78362, 16.90331, 17.02299, 17.12194, 17.09449, 17.00066, 16.93006, 16.9723 , 17.21697, 17.75368]) ```
Convert a numpy array to a CSV string and a CSV string back to a numpy array
[ "python", "csv", "python-2.7", "numpy" ]
Is there a way to group boxplots in matplotlib?

Assume we have three groups "A", "B", and "C", and for each we want to create a boxplot for both "apples" and "oranges". If a grouping is not possible directly, we can create all six combinations and place them linearly side by side. What would be the simplest way to visualize the groupings?

I'm trying to avoid setting the tick labels to something like "A + apples", since my scenario involves much longer names than "A".
How about using colors to differentiate between "apples" and "oranges" and spacing to separate "A", "B" and "C"? Something like this:

```
from pylab import plot, show, savefig, xlim, figure, \
                hold, ylim, legend, boxplot, setp, axes

# function for setting the colors of the box plots pairs
def setBoxColors(bp):
    setp(bp['boxes'][0], color='blue')
    setp(bp['caps'][0], color='blue')
    setp(bp['caps'][1], color='blue')
    setp(bp['whiskers'][0], color='blue')
    setp(bp['whiskers'][1], color='blue')
    setp(bp['fliers'][0], color='blue')
    setp(bp['fliers'][1], color='blue')
    setp(bp['medians'][0], color='blue')

    setp(bp['boxes'][1], color='red')
    setp(bp['caps'][2], color='red')
    setp(bp['caps'][3], color='red')
    setp(bp['whiskers'][2], color='red')
    setp(bp['whiskers'][3], color='red')
    setp(bp['fliers'][2], color='red')
    setp(bp['fliers'][3], color='red')
    setp(bp['medians'][1], color='red')

# Some fake data to plot
A= [[1, 2, 5,],  [7, 2]]
B = [[5, 7, 2, 2, 5], [7, 2, 5]]
C = [[3,2,5,7], [6, 7, 3]]

fig = figure()
ax = axes()
hold(True)

# first boxplot pair
bp = boxplot(A, positions = [1, 2], widths = 0.6)
setBoxColors(bp)

# second boxplot pair
bp = boxplot(B, positions = [4, 5], widths = 0.6)
setBoxColors(bp)

# third boxplot pair
bp = boxplot(C, positions = [7, 8], widths = 0.6)
setBoxColors(bp)

# set axes limits and labels
xlim(0,9)
ylim(0,9)
ax.set_xticklabels(['A', 'B', 'C'])
ax.set_xticks([1.5, 4.5, 7.5])

# draw temporary red and blue lines and use them to create a legend
hB, = plot([1,1],'b-')
hR, = plot([1,1],'r-')
legend((hB, hR),('Apples', 'Oranges'))
hB.set_visible(False)
hR.set_visible(False)

savefig('boxcompare.png')
show()
```

![grouped box plot](https://i.stack.imgur.com/ckPn9.png)
Here is my version. It stores data based on categories. ``` import matplotlib.pyplot as plt import numpy as np data_a = [[1,2,5], [5,7,2,2,5], [7,2,5]] data_b = [[6,4,2], [1,2,5,3,2], [2,3,5,1]] ticks = ['A', 'B', 'C'] def set_box_color(bp, color): plt.setp(bp['boxes'], color=color) plt.setp(bp['whiskers'], color=color) plt.setp(bp['caps'], color=color) plt.setp(bp['medians'], color=color) plt.figure() bpl = plt.boxplot(data_a, positions=np.array(xrange(len(data_a)))*2.0-0.4, sym='', widths=0.6) bpr = plt.boxplot(data_b, positions=np.array(xrange(len(data_b)))*2.0+0.4, sym='', widths=0.6) set_box_color(bpl, '#D7191C') # colors are from http://colorbrewer2.org/ set_box_color(bpr, '#2C7BB6') # draw temporary red and blue lines and use them to create a legend plt.plot([], c='#D7191C', label='Apples') plt.plot([], c='#2C7BB6', label='Oranges') plt.legend() plt.xticks(xrange(0, len(ticks) * 2, 2), ticks) plt.xlim(-2, len(ticks)*2) plt.ylim(0, 8) plt.tight_layout() plt.savefig('boxcompare.png') ``` I am short of reputation so I cannot post an image to here. You can run it and see the result. Basically it's very similar to what Molly did. Note that, depending on the version of python you are using, you may need to replace `xrange` with `range` [![Result of this code](https://i.stack.imgur.com/HgPFB.png)](https://i.stack.imgur.com/HgPFB.png)
How to create grouped boxplots
[ "python", "matplotlib", "boxplot" ]
I have the following select:

```
SELECT * 
FROM Table 
WHERE Column= datepart(month,'2013-05-07 10:18:00') 
AND Column= datepart(year,'2013-05-07 10:18:00')
```

I want it to show me items where month = 05 and year = 2013. Actually it isn't showing anything (but there are results to be shown). Thanks.
I would use the MONTH and YEAR functions ``` SELECT * FROM Table WHERE MONTH(yourColumn) = 5 and YEAR(yourColumn) = 2013 ``` <http://msdn.microsoft.com/en-us/library/ms186313.aspx> <http://msdn.microsoft.com/en-us/library/ms187813.aspx>
``` SELECT * FROM Table WHERE datepart(month, datecolumn) = 5 AND datepart(year, datecolumn) = 2013 ```
Select Where Month= X and Year= Y
[ "sql", "t-sql", "datepart" ]
This is essentially 2 questions I think. 1. How do I make all internal functions use the same raw\_input? 2. How do I chain functions with that input? I've been trying to make something that will make a circle shape with whatever character the user inputs. (This is a "for fun" activity that I thought up to help me get used to functions. I've only been self teaching Python for 2 weeks now) So far my code is: ``` def circle(symbol): def lines1(aa): print(symbol * 20) aa() print(symbol * 20) return(lines1) def lines2(bb): print(symbol * 7 + ' ' * 6 + symbol * 7) bb() print(symbol * 7 + ' ' * 6 + symbol * 7) return(lines2) def lines3(cc): print(symbol * 4 + ' ' * 12 + symbol * 4) cc() print(symbol * 4 + ' ' * 12 + symbol * 4) return(lines3) def lines4(dd): print(symbol * 2 + ' ' * 16 + symbol * 2) dd() print(symbol * 2 + ' ' * 16 + symbol * 2) return(lines4) def lines5(): print(symbol + ' ' * 18 + symbol) print(symbol + ' ' * 18 + symbol) return(lines5) lines1(lines2(lines3(lines4(lines5())))) circle(raw_input()) ``` Example: If the user inputs a #, it is supposed to output: ``` #################### ####### ####### #### #### ## ## # # # # ## ## #### #### ####### ####### #################### ``` The problem is it doesn't output anything.
Your `circle` function returns early:

```
def circle(symbol):
    def lines1(aa):
        print(symbol * 20)
        aa()
        print(symbol * 20)
    return(lines1)
```

The rest of your function is *not* executed.

Next, you use functions that want to call other functions, but you never pass in the arguments. `aa()` is not given any reference to the `lines2()` function. Instead, you call `lines5()`, which returns `None`, then pass that to `lines4()`, which cannot call it (`None` is not callable).

You'll need inner wrappers to make this work the way you want to:

```
def circle(symbol):
    def lines1(inner):
        def wrapper():
            print(symbol * 20)
            inner()
            print(symbol * 20)
        return wrapper

    def lines2(inner):
        def wrapper():
            print(symbol * 7 + ' ' * 6 + symbol * 7)
            inner()
            print(symbol * 7 + ' ' * 6 + symbol * 7)
        return wrapper

    def lines3(inner):
        def wrapper():
            print(symbol * 4 + ' ' * 12 + symbol * 4)
            inner()
            print(symbol * 4 + ' ' * 12 + symbol * 4)
        return wrapper

    def lines4(inner):
        def wrapper():
            print(symbol * 2 + ' ' * 16 + symbol * 2)
            inner()
            print(symbol * 2 + ' ' * 16 + symbol * 2)
        return wrapper

    def lines5():
        print(symbol + ' ' * 18 + symbol)
        print(symbol + ' ' * 18 + symbol)

    lines1(lines2(lines3(lines4(lines5))))()
```

Now functions `lines1` through `lines4` each return a wrapper function to be passed into the next function, effectively making them decorators. We start with `lines5` (as a function reference, *not* by calling it), then call the result of the nested wrappers.

The definition of `lines5` could now also use `@decorator` syntax:

```
@lines1
@lines2
@lines3
@lines4
def lines5():
    print(symbol + ' ' * 18 + symbol)
    print(symbol + ' ' * 18 + symbol)

lines5()
```
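To see the framing behaviour of those wrappers in isolation, here is a stripped-down, runnable sketch of the same pattern (the names `frame` and `core` are illustrative, not from the question); each layer prints its text before and after whatever it wraps:

```python
def frame(text):
    # Decorator factory: the returned decorator wraps a function so that
    # `text` is printed before and after the inner call, like lines1..lines4.
    def decorator(inner):
        def wrapper():
            print(text)
            inner()
            print(text)
        return wrapper
    return decorator

@frame('####')
@frame('#  #')
def core():
    print('#    #')

core()
# ####
# #  #
# #    #
# #  #
# ####
```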
You're not using decorators. To make your code work as is:

```
class circle(object):

    def __init__(self, symbol):
        self.symbol = symbol

    def lines1(self):
        print(self.symbol * 20)
        print(self.symbol * 20)

    def lines2(self):
        print(self.symbol * 7 + ' ' * 6 + self.symbol * 7)
        print(self.symbol * 7 + ' ' * 6 + self.symbol * 7)

    def lines3(self):
        print(self.symbol * 4 + ' ' * 12 + self.symbol * 4)
        print(self.symbol * 4 + ' ' * 12 + self.symbol * 4)

    def lines4(self):
        print(self.symbol * 2 + ' ' * 16 + self.symbol * 2)
        print(self.symbol * 2 + ' ' * 16 + self.symbol * 2)

    def lines5(self):
        print(self.symbol + ' ' * 18 + self.symbol)
        print(self.symbol + ' ' * 18 + self.symbol)

    def print_circle(self):
        self.lines1()
        self.lines2()
        self.lines3()
        self.lines4()
        self.lines5()
        self.lines4()
        self.lines3()
        self.lines2()
        self.lines1()

x = circle(raw_input())
x.print_circle()
```

Check out this question on decorators; I found it to be very helpful in the past: [How to make a chain of function decorators?](https://stackoverflow.com/questions/739654/how-can-i-make-a-chain-of-function-decorators-in-python)
How can I pass input to a function, then have it use that input across 5 internal functions that are chaining each other?
[ "python", "function", "input", "arguments", "chaining" ]
I'm having some problems with the following file. Each line has the following content:

```
foobar 1234.569 7890.125 12356.789 -236.4569 236.9874 -569.9844
```

What I want to do in this file is reverse the sign of the last three numbers. The output should be:

```
foobar 1234.569 7890.125 12356.789 236.4569 -236.9874 569.9844
```

Or even better:

```
foobar,1234.569,7890.125,12356.789,236.4569,-236.9874,569.9844
```

What is the easiest pythonic way to accomplish this? At first I used the csv.reader, but I found out it's not tab separated but separated by a random number (3-5) of spaces. I've read about the CSV module and some examples / similar questions here, but my knowledge of Python isn't that good, and the CSV module seems pretty tough when you want to edit a value in a row.

I can import and edit this in Excel with no problem, but I want to use it in a Python script, since I have hundreds of these files. VBA in Excel is not an option. Would it be better to just regex each line? If so, can someone point me in a direction with an example?
You can use [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split) to split your white-space-separated lines into a row: ``` row = line.split() ``` then use `csv.writer()` to create your new file. `str.split()` with no arguments, or `None` as the first argument, splits on arbitrary-width whitespace and ignores leading and trailing whitespace on the line: ``` >>> 'foobar 1234.569 7890.125 12356.789 -236.4569 236.9874 -569.9844\n'.split() ['foobar', '1234.569', '7890.125', '12356.789', '-236.4569', '236.9874', '-569.9844'] ``` As a complete script: ``` import csv with open(inputfilename, 'r') as infile, open(outputcsv, 'wb') as outfile: writer = csv.writer(outfile) for line in infile: row = line.split() inverted_nums = [-float(val) for val in row[-3:]] writer.writerow(row[:-3] + inverted_nums) ```
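To see the transformation without touching the filesystem, the same loop can be exercised in memory with `io.StringIO`, using the sample row from the question:

```python
import csv
import io

infile = io.StringIO("foobar 1234.569   7890.125  12356.789 -236.4569  236.9874 -569.9844\n")
outfile = io.StringIO()

writer = csv.writer(outfile)
for line in infile:
    row = line.split()                        # splits on arbitrary-width whitespace
    inverted = [-float(v) for v in row[-3:]]  # flip the sign of the last three numbers
    writer.writerow(row[:-3] + inverted)

print(outfile.getvalue().strip())
# foobar,1234.569,7890.125,12356.789,236.4569,-236.9874,569.9844
```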
You could use [`genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html): ``` import numpy as np a=np.genfromtxt('foo.csv', dtype=None) with open('foo.csv','w') as f: for el in a[()]: f.write(str(el)+',') ```
Change values in CSV or text style file
[ "python", "csv" ]
I have the following table in oracle10g.

```
state gender avg_sal status
NC M 5200 Single
OH F 3800 Married
AR M 8800 Married
AR F 6200 Single
TN M 4200 Single
NC F 4500 Single
```

I am trying to form the following report based on some conditions. The report should look like the one below. I tried the query further down, but count(*) is not working as expected.

```
state gender no.of males no.of females avg_sal_men avg_sal_women
NC M 10 0 5200 0
OH F 0 5 0 3800
AR M 16 0 8800 0
AR F 0 12 0 6200
TN M 22 0 4200 0
NC F 0 8 0 4500
```

I tried the following query, but I am not able to count the no. of males and no. of females correctly:

```
select State, "NO_OF MALES", "$AVG_sal", "NO_OF_FEMALES", "$AVG_SAL_FEMALE" 
from(
select State,
to_char(SUM((CASE WHEN gender = 'M' THEN average_price ELSE 0 END)),'$999,999,999') as "$Avg_sal_men",
to_char(SUM((CASE WHEN gender = 'F' THEN average_price ELSE 0 END)), '$999,999,999') as "$Avg_sal_women",
(select count (*) from table where gender='M')"NO_OF MALES",
(select count (*) from table where gender='F')"NO_OF_FEMALES"
from table
group by State
order by state);
```
You can use `case` as an expression (which you already know...). And the subquery is unnecessary. ``` select State , sum(case gender when 'M' then 1 else 0 end) as "no.of males" , sum(case gender when 'F' then 1 else 0 end) as "no.of females" , to_char( SUM( ( CASE WHEN gender = 'M' THEN average_price ELSE 0 END ) ) , '$999,999,999' ) as "Avg_sal_men", to_char(SUM((CASE WHEN gender = 'F' THEN average_price ELSE 0 END)) ,'$999,999,999' ) as "Avg_sal_women" from table group by State; ```
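If you want to try the conditional-aggregation part quickly, it runs under SQLite via Python's `sqlite3` module (Oracle's `to_char` money formatting is left out, since SQLite doesn't have it; the rows are a small invented sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (state TEXT, gender TEXT, average_price REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("NC", "M", 5200), ("NC", "F", 4500),
    ("AR", "M", 8800), ("AR", "F", 6200),
])

query = """
    SELECT state,
           SUM(CASE gender WHEN 'M' THEN 1 ELSE 0 END) AS num_males,
           SUM(CASE gender WHEN 'F' THEN 1 ELSE 0 END) AS num_females,
           SUM(CASE WHEN gender = 'M' THEN average_price ELSE 0 END) AS sal_men,
           SUM(CASE WHEN gender = 'F' THEN average_price ELSE 0 END) AS sal_women
    FROM t
    GROUP BY state
    ORDER BY state
"""
for row in conn.execute(query):
    print(row)
# ('AR', 1, 1, 8800.0, 6200.0)
# ('NC', 1, 1, 5200.0, 4500.0)
```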
You are counting with this sub-query: `select count (*) from table where gender='M'`, which always counts the total number of males in your whole table, and you are doing the same when counting females. So you can try it like this:

```
select State, "NO_OF MALES", "$AVG_sal", "NO_OF_FEMALES", "$AVG_SAL_FEMALE" 
from(
select State,
to_char(SUM((CASE WHEN gender = 'M' THEN average_price ELSE 0 END)),'$999,999,999') as "$Avg_sal_men",
to_char(SUM((CASE WHEN gender = 'F' THEN average_price ELSE 0 END)), '$999,999,999') as "$Avg_sal_women",
Sum(Case when gender='M' then 1 else 0 end) "NO_OF MALES",
Sum(Case when gender='F' then 1 else 0 end) "NO_OF_FEMALES"
from table
group by State
order by state);
```
count(*) based on the gender condition
[ "", "sql", "oracle10g", "" ]
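The conditional-aggregation pattern from the accepted answer can be verified in portable SQL. Below is a minimal sketch using Python's built-in sqlite3 module with a hypothetical `emp` table (the table name and sample rows are invented, and Oracle's `to_char` currency formatting is omitted since SQLite lacks it):

```python
import sqlite3

# In-memory database with made-up sample rows (illustrative only,
# not taken from the original question's data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (state TEXT, gender TEXT, average_price REAL)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("NC", "M", 5200), ("NC", "F", 4500), ("OH", "F", 3800)],
)

# Conditional aggregation: each SUM(CASE ...) counts or totals only the
# rows matching one gender, so no correlated subquery is needed.
rows = conn.execute("""
    SELECT state,
           SUM(CASE gender WHEN 'M' THEN 1 ELSE 0 END) AS num_males,
           SUM(CASE gender WHEN 'F' THEN 1 ELSE 0 END) AS num_females,
           SUM(CASE WHEN gender = 'M' THEN average_price ELSE 0 END) AS sal_men,
           SUM(CASE WHEN gender = 'F' THEN average_price ELSE 0 END) AS sal_women
    FROM emp
    GROUP BY state
    ORDER BY state
""").fetchall()
print(rows)
```

Each per-gender count and salary total comes from a single pass over the table, which is why the subquery in the original attempt is unnecessary.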
I want to use LEAST in a MySQL query, but LEAST should not consider 0. How can I do that? This is my query ``` LEAST(price, price_double, price_triple, price_quad) AS minvalue ``` It is counting 0 right now, but I don't want it to count 0; I want the result without the zero values. For example ``` price 100 price_double 75 price_triple 50 price_quad 0 ``` the minvalue should be 50, not 0, but right now it is including 0. Please help me do this. I searched a lot but didn't have any success. Thanks a million in advance.
You can use `case` in the `least()` function to do this. Unfortunately, `least()` returns NULL if any arguments are NULL in the more recent versions of MySQL. So, this uses a dummy value that is big: ``` least(case when price > 0 then price else 999999 end, case when price_double > 0 then price_double else 999999 end, case when price_triple > 0 then price_triple else 999999 end, case when price_quad > 0 then price_quad else 999999 end ) ```
This is the same like Gordon Linoff's solution only a little shorter: ``` LEAST ( IF(price > 0, price, 999999), IF(price_double > 0, price_double, 999999), IF(price_triple > 0, price_triple, 999999), IF(price_quad > 0, price_quad, 999999) ) ```
How to implement least in mysql query without count 0?
[ "", "mysql", "sql", "" ]
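The sentinel trick in both answers (swap 0 for a large dummy value before taking the minimum) can be sanity-checked outside MySQL. A small Python sketch of the same idea, with a made-up helper name:

```python
SENTINEL = 999999  # the large dummy value used in the SQL answers

def least_nonzero(*values, sentinel=SENTINEL):
    """Smallest value, treating 0 (or None, i.e. NULL) as 'missing'."""
    return min(sentinel if v in (0, None) else v for v in values)

# Mirrors the question's example: price=100, price_double=75,
# price_triple=50, price_quad=0, so the minimum should be 50, not 0.
print(least_nonzero(100, 75, 50, 0))
```

The only caveat, as in the SQL version, is that the sentinel must be larger than any real value the columns can hold.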
I need to be able to store a `numpy` `array` in a `dict` for caching purposes. Hash speed is important. The `array` represents indices, so while the actual identity of the object is not important, the value is. Mutability is not a concern, as I'm only interested in the current value. What should I hash in order to store it in a `dict`? My current approach is to use `str(arr.data)`, which is faster than `md5` in my testing. --- I've incorporated some examples from the answers to get an idea of relative times: ``` In [121]: %timeit hash(str(y)) 10000 loops, best of 3: 68.7 us per loop In [122]: %timeit hash(y.tostring()) 1000000 loops, best of 3: 383 ns per loop In [123]: %timeit hash(str(y.data)) 1000000 loops, best of 3: 543 ns per loop In [124]: %timeit y.flags.writeable = False ; hash(y.data) 1000000 loops, best of 3: 1.15 us per loop In [125]: %timeit hash((b*y).sum()) 100000 loops, best of 3: 8.12 us per loop ``` It would appear that for this particular use case (small arrays of indices), `arr.tostring` offers the best performance. While hashing the read-only buffer is fast on its own, the overhead of setting the writeable flag actually makes it slower.
You can simply hash the underlying buffer, if you make it read-only: ``` >>> a = random.randint(10, 100, 100000) >>> a.flags.writeable = False >>> %timeit hash(a.data) 100 loops, best of 3: 2.01 ms per loop >>> %timeit hash(a.tostring()) 100 loops, best of 3: 2.28 ms per loop ``` For very large arrays, `hash(str(a))` is a lot faster, but then it only takes a small part of the array into account. ``` >>> %timeit hash(str(a)) 10000 loops, best of 3: 55.5 us per loop >>> str(a) '[63 30 33 ..., 96 25 60]' ```
You can try [`xxhash`](https://github.com/Cyan4973/xxHash) via its [Python binding](https://github.com/ifduyue/python-xxhash). For large arrays this is much faster than `hash(x.tostring())`. Example IPython session: ``` >>> import xxhash >>> import numpy >>> x = numpy.random.rand(1024 * 1024 * 16) >>> h = xxhash.xxh64() >>> %timeit hash(x.tostring()) 1 loops, best of 3: 208 ms per loop >>> %timeit h.update(x); h.intdigest(); h.reset() 100 loops, best of 3: 10.2 ms per loop ``` And by the way, on various blogs and answers posted to Stack Overflow, you'll see people using `sha1` or `md5` as hash functions. For performance reasons this is usually *not* acceptable, as those "secure" hash functions are rather slow. They're useful only if hash collision is one of the top concerns. Nevertheless, hash collisions happen all the time. And if all you need is implementing `__hash__` for data-array objects so that they can be used as keys in Python dictionaries or sets, I think it's better to concentrate on the speed of `__hash__` itself and let Python handle the hash collision[1]. [1] You may need to override `__eq__` too, to help Python manage hash collision. You would want `__eq__` to return a boolean, rather than an array of booleans as is done by `numpy`.
Most efficient property to hash for numpy array
[ "", "python", "numpy", "" ]
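The bytes-hashing idea behind `hash(arr.tostring())` works for any object that exports its raw buffer, so it can be illustrated without numpy installed. Here is a sketch using the stdlib `array` module as a stand-in for a small numpy index array (note that numpy has since deprecated `tostring()` in favour of the equivalent `tobytes()`):

```python
from array import array

a = array('l', [3, 6, 9, 12])  # stands in for a small numpy index array

# Hash the raw bytes of the buffer, mirroring hash(arr.tostring()):
# equal contents give equal keys, which is all a cache dict needs.
key = hash(a.tobytes())
cache = {key: "expensive result"}

b = array('l', [3, 6, 9, 12])  # same values, different object
print(cache[hash(b.tobytes())])
```

Because the key depends only on the bytes, a recomputed array with the same values hits the cache even though it is a different object, which matches the question's "identity is not important, the value is" requirement.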
I can connect to my local mysql database from python, and I can create, select from, and insert individual rows. My question is: can I directly instruct mysqldb to take an entire dataframe and insert it into an existing table, or do I need to iterate over the rows? In either case, what would the python script look like for a very simple table with ID and two data columns, and a matching dataframe?
### Update: There is now a [`to_sql`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) method, which is the preferred way to do this, rather than `write_frame`: ``` df.to_sql(con=con, name='table_name_for_df', if_exists='replace', flavor='mysql') ``` *Also note: the syntax may change in pandas 0.14...* You can set up the connection with [MySQLdb](http://mysql-python.sourceforge.net/MySQLdb.html): ``` from pandas.io import sql import MySQLdb con = MySQLdb.connect() # may need to add some other options to connect ``` Setting the `flavor` of `write_frame` to `'mysql'` means you can write to mysql: ``` sql.write_frame(df, con=con, name='table_name_for_df', if_exists='replace', flavor='mysql') ``` The argument `if_exists` tells pandas how to deal if the table already exists: > `if_exists: {'fail', 'replace', 'append'}`, default `'fail'` >      `fail`: If table exists, do nothing. >      `replace`: If table exists, drop it, recreate it, and insert data. >      `append`: If table exists, insert data. Create if does not exist. *Although the [`write_frame` docs](http://pandas.pydata.org/pandas-docs/dev/io.html#sql-queries) currently suggest it only works on sqlite, mysql appears to be supported and in fact there is quite a bit of [mysql testing in the codebase](https://github.com/pydata/pandas/blob/master/pandas/io/tests/test_sql.py#L223).*
Andy Hayden mentioned the correct function ([`to_sql`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html)). In this answer, I'll give a complete example, which I tested with Python 3.5 but should also work for Python 2.7 (and Python 3.x): First, let's create the dataframe: ``` # Create dataframe import pandas as pd import numpy as np np.random.seed(0) number_of_samples = 10 frame = pd.DataFrame({ 'feature1': np.random.random(number_of_samples), 'feature2': np.random.random(number_of_samples), 'class': np.random.binomial(2, 0.1, size=number_of_samples), },columns=['feature1','feature2','class']) print(frame) ``` Which gives: ``` feature1 feature2 class 0 0.548814 0.791725 1 1 0.715189 0.528895 0 2 0.602763 0.568045 0 3 0.544883 0.925597 0 4 0.423655 0.071036 0 5 0.645894 0.087129 0 6 0.437587 0.020218 0 7 0.891773 0.832620 1 8 0.963663 0.778157 0 9 0.383442 0.870012 0 ``` To import this dataframe into a MySQL table: ``` # Import dataframe into MySQL import sqlalchemy database_username = 'ENTER USERNAME' database_password = 'ENTER USERNAME PASSWORD' database_ip = 'ENTER DATABASE IP' database_name = 'ENTER DATABASE NAME' database_connection = sqlalchemy.create_engine('mysql+mysqlconnector://{0}:{1}@{2}/{3}'. format(database_username, database_password, database_ip, database_name)) frame.to_sql(con=database_connection, name='table_name_for_df', if_exists='replace') ``` One trick is that [MySQLdb](http://mysql-python.sourceforge.net/MySQLdb.html) doesn't work with Python 3.x. 
So instead we use `mysqlconnector`, which may be [installed](https://stackoverflow.com/a/45470868/395857) as follows: ``` pip install mysql-connector==2.1.4 # version avoids Protobuf error ``` Output: [![enter image description here](https://i.stack.imgur.com/jsD1Q.png)](https://i.stack.imgur.com/jsD1Q.png) Note that [`to_sql`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) creates the table as well as the columns if they do not already exist in the database.
How to insert pandas dataframe via mysqldb into database?
[ "", "python", "mysql", "pandas", "mysql-python", "" ]
I want to have query like this: ``` SELECT sum(data_parts.size) as size_sum, (size_sum/2.) as half_of_size_sum FROM data_parts WHERE data_parts.some_id='1'; ``` Of course it won't work, because there's no column named `size_sum`, so my question is: Is there a way to use `size_sum` as a parameter in next select item?
Other than using a subquery containing your current query (see Davide's answer), I don't think there is a way to do that. But you could always do: ``` SELECT sum(data_parts.size) as size_sum, (sum(data_parts.size)/2.) as half_of_size_sum FROM data_parts WHERE data_parts.id='1'; ``` Postgres is smart enough to only get that sum once. Only if this query will be greatly expanded with more calculations being done on `size_sum` would I recommend the subquery approach. Not because it works better, but because it will be easier to read. But if the current calculation is all you need, don't bother with a subquery. It would actually be less easy to read.
yes, a (somewhat) ugly way of making the query run there is... ``` SELECT SIZE_SUM, SIZE_SUM/2 AS HALF_OF_SIZE_SUM FROM ( SELECT sum(data_parts.size) as size_sum) FROM data_parts WHERE data_parts.id='1') X; ``` But I don't think there is a way on postgre to do operations directly based on the previous select fields
Operations on selected items
[ "", "sql", "postgresql", "select", "" ]
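The subquery approach mentioned in both answers (compute the alias once in an inner query, reuse it in the outer one) is portable SQL and can be demonstrated with Python's sqlite3 module. The sample rows below are invented; the column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_parts (some_id INTEGER, size INTEGER)")
conn.executemany("INSERT INTO data_parts VALUES (?, ?)",
                 [(1, 10), (1, 30), (2, 99)])

# The inner query computes size_sum once; the outer query can then
# reference that alias in further expressions.
row = conn.execute("""
    SELECT size_sum, size_sum / 2.0 AS half_of_size_sum
    FROM (SELECT SUM(size) AS size_sum
          FROM data_parts
          WHERE some_id = 1) AS x
""").fetchone()
print(row)
```

With the rows above, `size_sum` is 40 and `half_of_size_sum` is 20.0, and the aggregate is evaluated only once regardless of how many outer expressions use it.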
I have the following query being used: ``` INSERT INTO userlist (username, lastupdate, programruncount, ip) VALUES (:username, NOW(), 1, :ip) ON DUPLICATE KEY UPDATE lastupdate = NOW(), programruncount = programruncount + 1, ip = :ip; ``` However, I also want to make the `ON DUPLICATE KEY UPDATE` conditional, so it will do the following: * **IF** `lastupdate` was less than 20 minutes ago (`lastupdate > NOW() - INTERVAL 20 MINUTE`). * **True:** Update `lastupdate = NOW()`, add one to `programruncount` and then update `ip = :ip`. * **False:** All fields should be left the same. I am not really sure how I would do this but after looking around, I tried using an `IF` Statement in the `ON DUPLICATE KEY UPDATE` part. ``` INSERT INTO userlist (username, lastupdate, programruncount, ip) VALUES ("testuser", NOW(), "1", "127.0.0.1") ON DUPLICATE KEY UPDATE IF(lastupdate > NOW() - INTERVAL 20 MINUTE, VALUES(lastupdate, programruncount + 1), lastupdate, programruncount); ``` However I am getting the following error: `#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF(lastupdate > NOW() - INTERVAL 20 MINUTE, VALUES(lastupdate, programruncount +' at line 6`
you're using IF statement and VALUES() function incorrectly ``` INSERT INTO userlist (username, lastupdate, programruncount, ip) VALUES (:username, NOW(), 1, :ip) ON DUPLICATE KEY UPDATE lastupdate = IF(NOW() > lastupdate + INTERVAL 20 MINUTE, NOW(), lastupdate), programruncount = IF(NOW() > lastupdate + INTERVAL 20 MINUTE, programruncount + 1, programruncount), ip = IF(NOW() > lastupdate + INTERVAL 20 MINUTE, VALUES(ip), ip); ``` so IF checks for a condition and return one of two values provided as its parameters. See [MySQL's Flow Control Operators](http://dev.mysql.com/doc/refman/5.1/en/control-flow-functions.html#function_if).
i always do it this way: > INSERT INTO userlist (username, lastupdate, programruncount, ip) > VALUES (:username, NOW(), 1, :ip) > > ON DUPLICATE KEY UPDATE > > lastupdate = IF(lastupdate > NOW() - INTERVAL 20 MINUTE, NOW(), lastupdate), > > programruncount = IF(lastupdate > NOW() - INTERVAL 20 MINUTE, programruncount + 1, programruncount), > > ip = IF(lastupdate > NOW() - INTERVAL 20 MINUTE, **VALUE(ip), ip**); VALUE(ip) --> update with new value ip --> will not update the field
Conditional ON DUPLICATE KEY UPDATE (Update only if certain condition is true)
[ "", "mysql", "sql", "pdo", "" ]
I tried to debug my dynamic query via `dbms_output` but seems like the query string is too long for `dbms_output` buffer. I got : ``` ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06512: at "SYS.DBMS_OUTPUT", line 148 ORA-06512: at line 1 ``` Any idea how to increase the buffer size ?
You can Enable DBMS\_OUTPUT and set the buffer size. The buffer size can be between 1 and 1,000,000. ``` dbms_output.enable(buffer_size IN INTEGER DEFAULT 20000); exec dbms_output.enable(1000000); ``` Check **[this](http://psoug.org/reference/dbms_output.html)** **EDIT** As per the comment posted by Frank and Mat, you can also enable it with Null ``` exec dbms_output.enable(NULL); ``` **buffer\_size** : Upper limit, in bytes, the amount of buffered information. Setting buffer\_size to NULL specifies that there should be no limit. The maximum size is 1,000,000, and the minimum is 2,000 when the user specifies buffer\_size (NOT NULL).
When the buffer gets full, there are several options you can try: 1) Increase the size of the DBMS\_OUTPUT buffer to 1,000,000 2) Try filtering the data written to the buffer - possibly there is a loop that writes to DBMS\_OUTPUT and you do not need this data. 3) Call ENABLE at various checkpoints within your code. Each call will clear the buffer. DBMS\_OUTPUT.ENABLE(NULL) will default to 20000 for backwards compatibility [Oracle documentation on dbms\_output](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_output.htm#i999293) You can also create your own custom output procedure, something like the snippet below ``` create or replace procedure cust_output(input_string in varchar2) is out_string_in long default input_string; str_len number; loop_count number default 0; begin str_len := length(out_string_in); while loop_count < str_len loop dbms_output.put_line( substr( out_string_in, loop_count + 1, 255 ) ); loop_count := loop_count + 255; end loop; end; ``` Link - Ref: [Alternative to dbms\_output.putline](https://stackoverflow.com/questions/13667564/alternative-to-dbms-output-putline) @ By: Alexander
How to increase dbms_output buffer?
[ "", "sql", "oracle", "plsql", "oracle10g", "dbms-output", "" ]
I have the following tables and relationships: T1 -> id1, v1, v2 T2 -> id2, va, vb, vc T3 -> id1, id2, v T1 0-to-many T3 T2 0-to-many T3 I want to select *v1, v2, va, vb, vc, v* where *id1* & *id2* exist in *T3*. What SQL-query will give this result?
What you need is an `INNER JOIN` since you want only records on `T3` where `ID`s exists on `T1` or `T2`. ``` SELECT v1, v2, va, vb, vc, v FROM T3 INNER JOIN T1 ON T3.id1 = T1.id1 INNER JOIN T2 ON T3.ID2 = T2.id2 ``` To further gain more knowledge about joins, kindly visit the link below: * [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
Try this one - ``` SELECT t1.v1 , t1.v2 , t2.va , t2.vb , t2.vc FROM dbo.T3 t3 JOIN dbo.T2 t2 ON t2.id2 = t3.id2 JOIN dbo.T1 t1 ON t1.id1 = t3.id1 ```
SQL select query to select from 3 tables
[ "", "sql", "" ]
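The inner-join pattern from the accepted answer can be exercised end to end with Python's sqlite3 module; the rows below are made up to show that a T3 row whose ids have no match in T1/T2 is dropped by the joins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (id1 INTEGER, v1 TEXT, v2 TEXT);
    CREATE TABLE T2 (id2 INTEGER, va TEXT, vb TEXT, vc TEXT);
    CREATE TABLE T3 (id1 INTEGER, id2 INTEGER, v TEXT);
    INSERT INTO T1 VALUES (1, 'a', 'b');
    INSERT INTO T2 VALUES (7, 'x', 'y', 'z');
    INSERT INTO T3 VALUES (1, 7, 'link');
    INSERT INTO T3 VALUES (2, 9, 'orphan');  -- no match in T1/T2, filtered out
""")

# INNER JOIN keeps only T3 rows whose id1 exists in T1 AND id2 exists in T2.
rows = conn.execute("""
    SELECT T1.v1, T1.v2, T2.va, T2.vb, T2.vc, T3.v
    FROM T3
    JOIN T1 ON T3.id1 = T1.id1
    JOIN T2 ON T3.id2 = T2.id2
""").fetchall()
print(rows)
```

Only the `'link'` row survives, since the `'orphan'` row's ids match nothing in T1 or T2.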
I have some Python classes that, if simplified, look like: ``` class base: def __init__(self, v): self.value = v def doThings(self): print "Doing things." def doMoreThings(self): print "Doing more things." def combine(self, b): self.value += b.value class foo(base): def showValue(self): print "foo value is %d." % self.value class bar(base): def showValue(self): print "bar value is %d." % self.value ``` The `base` class contains methods (represented above by `doThings` and `doMoreThings`) which implement functionality common to both the `foo` and `bar` subclasses. The `foo` and `bar` subclasses differ essentially in how they interpret the `value` field. (Above, they only differ by what they show when printing it, but in my actual application, they do several other things which are more complicated.) `base` can be thought of as "abstract": users only ever work with `foo`s and `bar`s. `base` exists only as a home for code common to its subclasses. The method I want to ask about is `combine`, which lets you take two of these objects and make a third. Because `foo` and `bar` interpret `value` differently, it doesn't make sense to `combine` two subclasses of different types: you can `combine` two `foo`s to get a `foo` or two `bar`s to get a `bar` but not a `foo` and a `bar`. Even so, the procedure for `combine` is the *same* for all subclasses, so it makes sense to have it factored out and defined in one place. I would probably like to signal an error if a user tries to `combine` two incompatible objects, but I don't see a way to do this without introducing ugly typechecks. Is it good practice to do this? Or should I do the usual thing and not check, document the issue, and assume that the user won't try to use `combine` in a way that wasn't intended, even though such use would appear to "succeed" and return a garbage object instead of raising an error? Thank you for your help.
I see several approaches here: 1. Don't check anything and trust the user to do the right thing. Might be appropriate or dangerous, depending on the situation. 2. Check the type. You are right that it looks ugly, but is the easiest thing. 3. Don't have a `value`, but name them the way they are intended to have (`pressure`, `temperature`) and let them combine themselves. 4. Same as 3, but additionally have the subclasses have a [`property`](http://docs.python.org/2/library/functions.html#property) which maps accesses to `.value` to their respective "real" value variable. This way, you can keep the .`__init__()`, but the `.combine()` will have to be done by every subclass. 5. Same as 4, but don't use `property`, but a self-crafted [descriptor](http://docs.python.org/2/reference/datamodel.html#implementing-descriptors). In [this answer](https://stackoverflow.com/a/16173137/296974), I show how it could be done.
Alter the combine method ``` def combine(self, b): if (not(self.__class__.__name__ == b.__class__.__name__)): raise TypeError("%s cannot combine with %s" % ( self.__class__.__name__, b.__class__.__name__)) else: self.value += b.value ``` Alternatively, alter the class definition for base to ``` class base(object): ``` and then a simpler, nicer, happier combine method is possible (thanks glglgl) ``` def combine(self, b): if (not(type(self) == type(b))): raise TypeError("%s cannot combine with %s" % ( type(self), type(b) )) else: self.value += b.value ```
Python - typechecking OK when error wouldn't be thrown otherwise?
[ "", "python", "oop", "" ]
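Option 2 from the accepted answer (checking the type) can be written with `type()` on new-style classes rather than comparing class names as strings; a minimal sketch, with class names simplified from the question:

```python
class Base(object):
    def __init__(self, v):
        self.value = v

    def combine(self, other):
        # Reject combining siblings of different concrete types.
        if type(self) is not type(other):
            raise TypeError("cannot combine %s with %s"
                            % (type(self).__name__, type(other).__name__))
        self.value += other.value

class Foo(Base):
    pass

class Bar(Base):
    pass

f1, f2 = Foo(1), Foo(2)
f1.combine(f2)          # fine: both Foo
print(f1.value)

try:
    f1.combine(Bar(5))  # mixed types: rejected before any mutation
    raised = False
except TypeError:
    raised = True
print(raised)
```

The check lives once in the shared `combine`, so the subclasses stay free of type logic, and the error fires before `value` is touched, so a failed combine never leaves a half-updated object.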
I have the following dict: ``` abc = {"type":"insecure","id":"1","name":"peter"} ``` What I want to do is have a new dict based on the old dict in which there is no key "type" and the key "id" is changed to "identity". The new dict will look as follows: ``` xyz = {"identity":"1","name":"peter"} ``` The solution that I came up with was as follows: ``` abc = {"type":"insecure","id":"1","name":"peter"} xyz = {} black_list_values = set(("type","id")) for k in abc: if k not in black_list_values: xyz[k] = abc[k] xyz["identity"] = abc["id"] ``` I was wondering if it's the fastest and most efficient way to do that. Right now, "abc" has only three values. If "abc" is much bigger and has many values, is my solution still efficient and fast?
You can use a dict-comprehension: ``` abc = {"type":"insecure","id":"1","name":"peter"} black_list = {"type"} rename ={"id":"identity"} #use a mapping dictionary in case you want to rename multiple items dic = {rename.get(key,key) : val for key ,val in abc.items() if key not in black_list} print dic ``` **output:** ``` {'name': 'peter', 'identity': '1'} ```
You want to create a new dictionary anyway. You can iterate over keys/values in a dict comprehension, which is more compact, but functionally the same: ``` abc = {"type":"insecure","id":"1","name":"peter"} black_list_values = set(("type","id")) xyz = {k:v for k,v in abc.iteritems() if k not in black_list_values} xyz["identity"] = abc["id"] ```
create a new dict based on old dict
[ "", "python", "dictionary", "" ]
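To confirm that the comprehension really reproduces the question's loop, both versions can be run side by side (Python 3 spelling, using `items()` rather than `iteritems()`):

```python
abc = {"type": "insecure", "id": "1", "name": "peter"}

# Loop approach from the question: blacklist "type" and "id",
# then re-add "id" under the new name.
xyz_loop = {}
for k in abc:
    if k not in {"type", "id"}:
        xyz_loop[k] = abc[k]
xyz_loop["identity"] = abc["id"]

# Dict-comprehension approach from the accepted answer: drop "type",
# rename "id" to "identity" via a mapping dict, all in one pass.
black_list = {"type"}
rename = {"id": "identity"}
xyz = {rename.get(k, k): v for k, v in abc.items() if k not in black_list}

print(xyz == xyz_loop)
```

Both build the dict in one pass over `abc`, so they are O(n) either way; the comprehension mainly wins on compactness and on not needing the separate re-add step.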
I imported my Facebook data onto my computer in the form of a .json file. The data is in the format: ``` { "nodes": [ {"name": "Alan"}, {"name": "Bob"} ], "links": [ {"source": 0, "target": 1} ] } ``` Then, I use this function: ``` def parse_graph(filename): """ Returns networkx graph object of facebook social network in json format """ G = nx.Graph() json_data = open(filename) data = json.load(json_data) # The nodes represent the names of the respective people # See networkx documentation for information on add_* functions nodes = data["nodes"] G.add_nodes_from([n["name"] for n in nodes]) G.add_edges_from( [ (nodes[e["source"]]["name"], nodes[e["target"]]["name"]) for e in data["links"] ] ) json_data.close() return G ``` to enable this `.json` file to be used as a graph on NetworkX. If I find the degree of the nodes, the only method I know how to use is: ``` degree = nx.degree(p) ``` Where **p** is the graph of all my friends. Now, I want to plot the graph such that the size of the node is the same as the degree of that node. How do I do this? Using: ``` nx.draw(G, node_size=degree) ``` didn't work and I can't think of another method.
# Update for those using networkx 2.x The API has changed from v1.x to v2.x. `networkx.degree` no longer returns a `dict` but a `DegreeView` Object as per the [documentation](https://networkx.github.io/documentation/stable/reference/classes/generated/networkx.Graph.degree.html). There is a guide for migrating from 1.x to 2.x [here](https://networkx.github.io/documentation/stable/release/migration_guide_from_1.x_to_2.0.html). In this case it basically boils down to using `dict(g.degree)` instead of `d = nx.degree(g)`. The updated code looks like this: ``` import networkx as nx import matplotlib.pyplot as plt g = nx.Graph() g.add_edges_from([(1,2), (2,3), (2,4), (3,4)]) d = dict(g.degree) nx.draw(g, nodelist=d.keys(), node_size=[v * 100 for v in d.values()]) plt.show() ``` --- nx.degree(p) returns a dict while the [node\_size keywod argument](http://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html) needs a scalar or an array of sizes. You can use the dict nx.degree returns like this: ``` import networkx as nx import matplotlib.pyplot as plt g = nx.Graph() g.add_edges_from([(1,2), (2,3), (2,4), (3,4)]) d = nx.degree(g) nx.draw(g, nodelist=d.keys(), node_size=[v * 100 for v in d.values()]) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/ocktl.png)
@miles82 provided a great answer. However, if you've already added the nodes to your graph using something like `G.add_nodes_from(nodes)`, then I found that `d = nx.degree(G)` may not return the degrees in the same order as your nodes. Building off the previous answer, you can modify the solution slightly to ensure the degrees are in the correct order: ``` d = nx.degree(G) d = [(d[node]+1) * 20 for node in G.nodes()] ``` Note the `d[node]+1`, which will be sure that nodes of degree zero are added to the chart.
Node size dependent on the node degree on NetworkX
[ "", "python", "python-2.7", "social-networking", "networkx", "" ]
I have a list containing the numbers 25-1. I'm trying to print it out like a gameboard, where all the numbers match up: ![enter image description here](https://i.stack.imgur.com/PgA49.png) I found out how to add the lines to the list by doing [this](https://stackoverflow.com/questions/5445970/printing-list-in-python-properly): ``` _b = map(str, board) _board = ' | '.join(_b) ``` and I know how to print 5 numbers on each line.. but I'm having trouble getting all the numbers to line up. Is there a way to do this?
If you know how long the longest number is going to be, you can use any of these methods: With the string "5" and a desired width of 3 characters: * `str.rjust(3)` will give the string `' 5'` * `str.ljust(3)` will give the string `'5 '` * `str.center(3)` will give the string `' 5 '`. I tend to like `rjust` for numbers, as it lines up the places like you learn how to do long addition in elementary school, and that makes me happy ;) That leaves you with something like: ``` _b = map(lambda x: str(x).rjust(3), board) _board = ' | '.join(_b) ``` or alternately, with generator expressions: ``` _board = ' | '.join(str(x).rjust(3) for x in board) ```
``` board = range(1,26) #the gameboard for row in [board[i:i+5] for i in range(0,22,5)]: #go over chunks of five print('|'.join(["{:<2}".format(n) for n in row])+"|") #justify each number, join by | print("-"*15) #print the -'s ``` Produces ``` >>> 1 |2 |3 |4 |5 | --------------- 6 |7 |8 |9 |10| --------------- 11|12|13|14|15| --------------- 16|17|18|19|20| --------------- 21|22|23|24|25| --------------- ``` Or using the `grouper` recipe as @abarnert suggested: ``` for row in grouper(5, board): ```
Formatting list in python
[ "", "python", "list", "format", "" ]
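Both answers combine the same two ingredients: chunk the list into rows of five, then pad every number to a fixed width. A compact sketch that derives the pad width from the largest number instead of hard-coding it:

```python
board = list(range(1, 26))
width = len(str(max(board)))  # pad to the widest number so columns line up

lines = []
for i in range(0, len(board), 5):  # five numbers per row
    row = board[i:i + 5]
    lines.append(' | '.join(str(n).rjust(width) for n in row))
print('\n'.join(lines))
```

Computing `width` from the data means the same code keeps its columns aligned on a board of 1-100 or any other range, which is the part that goes wrong when the pad width is fixed at 2.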
I am writing a small module to help a transfer from M$-Access to SQLite (database needs to be portable), but I'm struggling in interpreting the error message that follows from this code (and of course to get it to work). ``` import pyodbc import win32com.client def ado(db, sqlstring='select * from table', user='admin', password=''): conn = win32com.client.Dispatch(r'ADODB.Connection') DSN = ('PROVIDER = Microsoft.Jet.OLEDB.4.0;DATA SOURCE = ' + db + ';') conn.Open(DSN) rs = win32com.client.Dispatch(r'ADODB.Recordset') rs.Open(strsql, conn, 1, 3) data = rs.GetRows() conn.Close() return data def odbc(db, sqlstring='select * from table', user= 'admin', password=''): """Create function for connecting to Access databases.""" odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=%s;UID=%s;PWD=%s' % (db, user, password) conn = pyodbc.connect(odbc_conn_str) cur = conn.cursor() cur.execute(strsql) data = list(cur) conn.close() return data if __name__ == '__main__': # Unit test db = r'C:\pyodbc_access2007_sample.accdb' sql="select * from Customer Orders" ## tables: 'Customer Orders', 'Physical Stoks','Prodplans' data1 = ado(db,sql) data2 = odbc(db,sql) ``` From the ado function I get the error: ``` Traceback (most recent call last): File "C:/pyodbc_access2007_example.py", line 27, in <module> data1 = ado(db,sql) File "C:/pyodbc_access2007_example.py", line 7, in ado conn.Open(DSN) File "<COMObject ADODB.Connection>", line 3, in Open File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 282, in _ApplyTypes_ result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args) com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft JET Database Engine', u"Unrecognized database format 'C:\\pyodbc_access2007_sample.accdb'.", None, 5003049, -2147467259), None) ``` and from the odbc function I get the error: ``` Traceback (most recent call last): File "C:/pyodbc_access2007_example.py", line 28, in <module> data2 = odbc(db,sql) File 
"C:/pyodbc_access2007_example.py", line 17, in odbc conn = pyodbc.connect(odbc_conn_str) Error: ('HY000', "[HY000] [Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt. (-1028) (SQLDriverConnect); [01000] [Microsoft][ODBC Microsoft Access Driver]General Warning Unable to open registry key 'Temporary (volatile) Jet DSN for process 0x18c0 Thread 0xe70 DBC 0x379fe4 Jet'. (1); [01000] [Microsoft][ODBC Microsoft Access Driver]General Warning Unable to open registry key 'Temporary (volatile) Jet DSN for process 0x18c0 Thread 0xe70 DBC 0x379fe4 Jet'. (1); [01000] [Microsoft][ODBC Microsoft Access Driver]General Warning Unable to open registry key 'Temporary (volatile) Jet DSN for process 0x18c0 Thread 0xe70 DBC 0x379fe4 Jet'. (1); [01000] [Microsoft][ODBC Microsoft Access Driver]General Warning Unable to open registry key 'Temporary (volatile) Jet DSN for process 0x18c0 Thread 0xe70 DBC 0x379fe4 Jet'. (1); [HY000] [Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt. (-1028)") ``` Any good idea's on how to read this?
Your connection string only recognizes mdb access files. There is a connection string that will do mdb and accdb files in pyodbc.
``` select * from Customer Orders ----------------------^ ``` Having spaces in table names like that, does that really work in Access? For MSSQL Server I'd quote it like in [Customer Orders]
Python ADO + ODBC function
[ "", "python", "" ]
I have this list of tuples: ``` dCF3=[((0.0, 0.0), (0.100000001490116, 0.0), (0.200000002980232, 0.0), (0.300000011920929, 0.0), (0.400000005960464, 0.0), (0.5, 0.0), (0.600000023841858, 0.0), (0.699999988079071, 0.0), (0.800000011920929, 0.0), (0.899999976158142, 0.0), (1.0, 0.0)), ((0.0, 0.0), (0.00249999994412065, -268.749877929688), (0.00499999988824129, -534.530700683594), (0.0087500000372529, -932.520874023438), (0.0143750002607703, -1527.93103027344), (0.0228125005960464, -2414.58032226563), (0.0328124985098839, -3408.89599609375), (0.0428125001490116, -4313.58447265625), (0.0528125017881393, -5153.6572265625), (0.0628124997019768, -6001.00244140625), (0.0728124976158142, -6861.203125), (0.0828125029802322, -7718.9912109375), (0.0928125008940697, -8568.873046875), (0.102812498807907, -9406.283203125), (0.112812496721745, -10222.2841796875), (0.122812502086163, -11016.26953125), (0.1328125, -11787.7470703125), (0.142812505364418, -12536.3466796875), (0.152812495827675, -13261.8193359375), (0.162812501192093, -13964.04296875), (0.172812506556511, -14643.01953125), (0.182812497019768, -15298.8681640625), (0.192812502384186, -15931.8173828125), (0.202812492847443, -16542.1953125), (0.212812498211861, -17130.41796875), (0.222812503576279, -17696.978515625), (0.232812494039536, -18242.431640625), (0.242812499403954, -18767.3828125), (0.252812504768372, -19272.4765625), (0.262812495231628, -19758.388671875), (0.272812485694885, -20225.806640625), (0.282812505960464, -20675.43359375), (0.292812496423721, -21107.970703125), (0.302812486886978, -21523.888671875), (0.312812507152557, -21923.1015625), (0.322812497615814, -22307.275390625), (0.332812488079071, -22677.072265625), (0.34281250834465, -23033.1328125), (0.352812498807907, -23376.078125), (0.362812489271164, -23706.50390625), (0.372812509536743, -24024.984375), (0.3828125, -24332.06640625), (0.392812490463257, -24628.27734375), (0.402812510728836, -24914.11328125), (0.412812501192093, -25190.052734375), 
(0.42281249165535, -25456.55078125), (0.432812511920929, -25714.037109375), (0.442812502384186, -25962.919921875), (0.452812492847443, -26203.58984375), (0.462812513113022, -26436.4140625), (0.472812503576279, -26661.74609375), (0.482812494039536, -26879.9140625), (0.492812514305115, -27091.232421875), (0.502812504768372, -27296.00390625), (0.512812495231628, -27494.9765625), (0.522812485694885, -27688.0859375), (0.532812476158142, -27875.443359375), (0.542812526226044, -28057.2890625), (0.552812516689301, -28233.853515625), (0.562812507152557, -28405.35546875), (0.572812497615814, -28571.99609375), (0.582812488079071, -28733.9765625), (0.592812478542328, -28891.48046875), (0.602812528610229, -29044.685546875), (0.612812519073486, -29193.7578125), (0.622812509536743, -29338.859375), (0.6328125, -29480.142578125), (0.642812490463257, -29617.75), (0.652812480926514, -29751.8203125), (0.662812471389771, -29882.486328125), (0.672812521457672, -30009.87109375), (0.682812511920929, -30134.09375), (0.692812502384186, -30255.271484375), (0.702812492847443, -30373.5078125), (0.712812483310699, -30488.91015625), (0.722812473773956, -30601.576171875), (0.732812523841858, -30711.599609375), (0.742812514305115, -30819.0703125), (0.752812504768372, -30924.076171875), (0.762812495231628, -31026.69921875), (0.772812485694885, -31127.01953125), (0.782812476158142, -31225.109375), (0.792812526226044, -31321.044921875), (0.802812516689301, -31414.892578125), (0.812812507152557, -31506.720703125), (0.822812497615814, -31596.591796875), (0.832812488079071, -31684.568359375), (0.842812478542328, -31770.70703125), (0.852812528610229, -31855.06640625), (0.862812519073486, -31937.69921875), (0.872812509536743, -32018.658203125), (0.8828125, -32097.9921875), (0.892812490463257, -32175.75), (0.902812480926514, -32251.9765625), (0.912812471389771, -32326.716796875), (0.922812521457672, -32400.013671875), (0.932812511920929, -32471.91015625), (0.942812502384186, -32542.44140625), 
(0.952812492847443, -32611.6484375), (0.962812483310699, -32679.568359375), (0.972812473773956, -32746.234375), (0.982812523841858, -32811.6796875), (0.992812514305115, -32875.9453125), (1.00281250476837, -32939.05078125), (1.01281249523163, -33001.03515625), (1.02281248569489, -33061.92578125), (1.03281247615814, -33121.75390625), (1.0428124666214, -33180.5390625), (1.05281245708466, -33238.31640625), (1.06281244754791, -33295.10546875), (1.07281255722046, -33350.9375), (1.08281254768372, -33405.83203125), (1.09281253814697, -33459.8125), (1.10281252861023, -33512.90234375), (1.11281251907349, -33565.12109375), (1.12281250953674, -33616.49609375), (1.1328125, -33667.0390625), (1.14281249046326, -33716.77734375), (1.15281248092651, -33765.7265625), (1.16281247138977, -33813.90625), (1.17281246185303, -33861.33203125), (1.18281245231628, -33908.0234375), (1.19281244277954, -33953.99609375), (1.20281255245209, -33999.26953125), (1.21281254291534, -34043.85546875), (1.2228125333786, -34087.76953125), (1.23281252384186, -34131.03125), (1.24281251430511, -34173.65234375), (1.25281250476837, -34215.64453125), (1.26281249523163, -34257.0234375), (1.27281248569489, -34297.8046875), (1.28281247615814, -34338.0), (1.2928124666214, -34377.6171875), (1.30281245708466, -34416.67578125), (1.31281244754791, -34455.18359375), (1.32281255722046, -34493.1484375), (1.33281254768372, -34530.58984375), (1.34281253814697, -34567.515625), (1.35281252861023, -34603.9296875), (1.36281251907349, -34639.8515625), (1.37281250953674, -34675.2890625), (1.3828125, -34710.25), (1.39281249046326, -34744.7421875), (1.40281248092651, -34778.78125), (1.41281247138977, -34812.3671875), (1.42281246185303, -34845.515625), (1.43281245231628, -34878.234375), (1.44281244277954, -34910.53125), (1.45281255245209, -34942.41015625), (1.46281254291534, -34973.88671875), (1.4728125333786, -35004.9609375), (1.48281252384186, -35035.64453125), (1.49281251430511, -35065.9453125), (1.50281250476837, -35095.8671875), 
(1.51281249523163, -35125.421875), (1.52281248569489, -35154.61328125), (1.53281247615814, -35183.4453125), (1.5428124666214, -35211.9296875), (1.55281245708466, -35240.0703125), (1.56281244754791, -35267.87109375), (1.57281255722046, -35295.34375), (1.58281254768372, -35322.4921875), (1.59281253814697, -35349.31640625), (1.60281252861023, -35375.828125), (1.61281251907349, -35402.03125), (1.62281250953674, -35427.9296875), (1.6328125, -35453.53515625), (1.64281249046326, -35478.84375), (1.65281248092651, -35503.86328125), (1.66281247138977, -35528.6015625), (1.67281246185303, -35553.05859375), (1.68281245231628, -35577.24609375), (1.69281244277954, -35601.16015625), (1.70281255245209, -35624.8125), (1.71281254291534, -35648.203125), (1.7228125333786, -35671.33984375), (1.73281252384186, -35694.22265625), (1.74281251430511, -35716.859375), (1.75281250476837, -35739.25), (1.76281249523163, -35761.40234375), (1.77281248569489, -35783.31640625), (1.78281247615814, -35805.0), (1.7928124666214, -35826.45703125), (1.80281245708466, -35847.6875), (1.81281244754791, -35868.6953125), (1.82281255722046, -35889.484375), (1.83281254768372, -35910.0625), (1.84281253814697, -35930.42578125), (1.85281252861023, -35950.58203125), (1.86281251907349, -35970.53125), (1.87281250953674, -35990.28125), (1.8828125, -36009.83203125), (1.89281249046326, -36029.1875), (1.90281248092651, -36048.34765625), (1.91281247138977, -36067.3203125), (1.92281246185303, -36086.10546875), (1.93281245231628, -36104.703125), (1.94281244277954, -36123.12109375), (1.95281255245209, -36141.359375), (1.96281254291534, -36159.421875), (1.9728125333786, -36177.3125), (1.98281252384186, -36195.02734375), (1.99281251430511, -36212.578125), (2.00281238555908, -36229.95703125), (2.01281261444092, -36247.17578125), (2.02281260490417, -36264.23046875), (2.03281259536743, -36281.125), (2.04281258583069, -36297.86328125), (2.05281257629395, -36314.4453125), (2.0628125667572, -36344.6875), (2.07281255722046, 
-36381.9609375), (2.08281254768372, -36418.8984375), (2.09281253814697, -36455.49609375), (2.10281252861023, -36491.76953125), (2.11281251907349, -36531.01953125), (2.12281250953674, -36590.515625), (2.1328125, -36649.4765625), (2.14281249046326, -36707.91796875), (2.15281248092651, -36765.83984375), (2.16281247138977, -36823.25), (2.17281246185303, -36880.15625), (2.18281245231628, -36936.56640625), (2.19281244277954, -36992.48828125), (2.2028124332428, -37047.921875), (2.21281242370605, -37102.87890625), (2.22281241416931, -37157.36328125), (2.23281240463257, -37211.3828125), (2.24281239509583, -37264.94140625), (2.25281238555908, -37318.04296875), (2.26281261444092, -37370.69921875), (2.27281260490417, -37422.9140625), (2.28281259536743, -37474.6875), (2.29281258583069, -37526.03125), (2.30281257629395, -37576.9453125), (2.3128125667572, -37627.44140625), (2.32281255722046, -37677.51953125), (2.33281254768372, -37727.1875), (2.34281253814697, -37776.44921875), (2.35281252861023, -37825.3125), (2.36281251907349, -37873.7734375), (2.37281250953674, -37921.84765625), (2.3828125, -37969.53515625), (2.39281249046326, -38016.83984375), (2.40281248092651, -38063.765625), (2.41281247138977, -38110.31640625), (2.42281246185303, -38156.50390625), (2.43281245231628, -38202.32421875), (2.44281244277954, -38247.78125), (2.4528124332428, -38292.88671875), (2.46281242370605, -38337.640625), (2.47281241416931, -38382.046875))] ``` I have two tuples in this list. First I'd like to get the maximum value of the first column (eg.: (Xi,Yi), max X) of the first tuple and add that value to the first column of the second tuple. Additionally, after this operation I'd like to join the two tuples in one. Any ideas? EDIT: The question has been solved. 
However, if instead of the first tuple I'd like to find the maximum of the first column of the first `m` tuples and then add it to the first column of the next `n` tuples until the end of the list, and then at the end join all the tuples into just one, how could I do it? My code works but can maybe be optimized:

```
indexLSNR=len(dCF3)-2
t1max=max(dCF3[indexLSNR])[0]
for i in range(0, len(dCF3)):
    if i<=indexLSNR:
        if i==0:
            new_line= dCF3[i]
        else:
            new_line= dCF3[i] + new_line
    if i>indexLSNR:
        new_tup2 = tuple((a + t1max, b) for a, b in dCF3[i])
        new_line= new_line + new_tup2
```
This will handle any number of tuples : ``` lis = [((0.0, 0.0), (0.100000001490116, 0.0), (0.200000002980232, 0.0), (0.300000011920929, 0.0), (0.400000005960464, 0.0), (0.5, 0.0), (0.600000023841858, 0.0), (0.699999988079071, 0.0), (0.800000011920929, 0.0), (0.899999976158142, 0.0), (1.0, 0.0)), ((0.0, 0.0), (0.00249999994412065, -268.749877929688), (0.00499999988824129, -534.530700683594), (0.0087500000372529, -932.520874023438), (0.0143750002607703, -1527.93103027344), (0.0228125005960464, -2414.58032226563), (0.0328124985098839, -3408.89599609375), (0.0428125001490116, -4313.58447265625), (0.0528125017881393, -5153.6572265625), (0.0628124997019768, -6001.00244140625), (0.0728124976158142, -6861.203125), (0.0828125029802322, -7718.9912109375), (0.0928125008940697, -8568.873046875), (0.102812498807907, -9406.283203125), (0.112812496721745, -10222.2841796875), (0.122812502086163, -11016.26953125), (0.1328125, -11787.7470703125), (0.142812505364418, -12536.3466796875), (0.152812495827675, -13261.8193359375), (0.162812501192093, -13964.04296875), (0.172812506556511, -14643.01953125), (0.182812497019768, -15298.8681640625), (0.192812502384186, -15931.8173828125), (0.202812492847443, -16542.1953125), (0.212812498211861, -17130.41796875), (0.222812503576279, -17696.978515625), (0.232812494039536, -18242.431640625), (0.242812499403954, -18767.3828125), (0.252812504768372, -19272.4765625), (0.262812495231628, -19758.388671875), (0.272812485694885, -20225.806640625), (0.282812505960464, -20675.43359375), (0.292812496423721, -21107.970703125), (0.302812486886978, -21523.888671875), (0.312812507152557, -21923.1015625), (0.322812497615814, -22307.275390625), (0.332812488079071, -22677.072265625), (0.34281250834465, -23033.1328125), (0.352812498807907, -23376.078125), (0.362812489271164, -23706.50390625), (0.372812509536743, -24024.984375), (0.3828125, -24332.06640625), (0.392812490463257, -24628.27734375), (0.402812510728836, -24914.11328125), (0.412812501192093, 
-25190.052734375), (0.42281249165535, -25456.55078125), (0.432812511920929, -25714.037109375), (0.442812502384186, -25962.919921875), (0.452812492847443, -26203.58984375), (0.462812513113022, -26436.4140625), (0.472812503576279, -26661.74609375), (0.482812494039536, -26879.9140625), (0.492812514305115, -27091.232421875), (0.502812504768372, -27296.00390625), (0.512812495231628, -27494.9765625), (0.522812485694885, -27688.0859375), (0.532812476158142, -27875.443359375), (0.542812526226044, -28057.2890625), (0.552812516689301, -28233.853515625), (0.562812507152557, -28405.35546875), (0.572812497615814, -28571.99609375), (0.582812488079071, -28733.9765625), (0.592812478542328, -28891.48046875), (0.602812528610229, -29044.685546875), (0.612812519073486, -29193.7578125), (0.622812509536743, -29338.859375), (0.6328125, -29480.142578125), (0.642812490463257, -29617.75), (0.652812480926514, -29751.8203125), (0.662812471389771, -29882.486328125), (0.672812521457672, -30009.87109375), (0.682812511920929, -30134.09375), (0.692812502384186, -30255.271484375), (0.702812492847443, -30373.5078125), (0.712812483310699, -30488.91015625), (0.722812473773956, -30601.576171875), (0.732812523841858, -30711.599609375), (0.742812514305115, -30819.0703125), (0.752812504768372, -30924.076171875), (0.762812495231628, -31026.69921875), (0.772812485694885, -31127.01953125), (0.782812476158142, -31225.109375), (0.792812526226044, -31321.044921875), (0.802812516689301, -31414.892578125), (0.812812507152557, -31506.720703125), (0.822812497615814, -31596.591796875), (0.832812488079071, -31684.568359375), (0.842812478542328, -31770.70703125), (0.852812528610229, -31855.06640625), (0.862812519073486, -31937.69921875), (0.872812509536743, -32018.658203125), (0.8828125, -32097.9921875), (0.892812490463257, -32175.75), (0.902812480926514, -32251.9765625), (0.912812471389771, -32326.716796875), (0.922812521457672, -32400.013671875), (0.932812511920929, -32471.91015625), (0.942812502384186, 
-32542.44140625), (0.952812492847443, -32611.6484375), (0.962812483310699, -32679.568359375), (0.972812473773956, -32746.234375), (0.982812523841858, -32811.6796875), (0.992812514305115, -32875.9453125), (1.00281250476837, -32939.05078125), (1.01281249523163, -33001.03515625), (1.02281248569489, -33061.92578125), (1.03281247615814, -33121.75390625), (1.0428124666214, -33180.5390625), (1.05281245708466, -33238.31640625), (1.06281244754791, -33295.10546875), (1.07281255722046, -33350.9375), (1.08281254768372, -33405.83203125), (1.09281253814697, -33459.8125), (1.10281252861023, -33512.90234375), (1.11281251907349, -33565.12109375), (1.12281250953674, -33616.49609375), (1.1328125, -33667.0390625), (1.14281249046326, -33716.77734375), (1.15281248092651, -33765.7265625), (1.16281247138977, -33813.90625), (1.17281246185303, -33861.33203125), (1.18281245231628, -33908.0234375), (1.19281244277954, -33953.99609375), (1.20281255245209, -33999.26953125), (1.21281254291534, -34043.85546875), (1.2228125333786, -34087.76953125), (1.23281252384186, -34131.03125), (1.24281251430511, -34173.65234375), (1.25281250476837, -34215.64453125), (1.26281249523163, -34257.0234375), (1.27281248569489, -34297.8046875), (1.28281247615814, -34338.0), (1.2928124666214, -34377.6171875), (1.30281245708466, -34416.67578125), (1.31281244754791, -34455.18359375), (1.32281255722046, -34493.1484375), (1.33281254768372, -34530.58984375), (1.34281253814697, -34567.515625), (1.35281252861023, -34603.9296875), (1.36281251907349, -34639.8515625), (1.37281250953674, -34675.2890625), (1.3828125, -34710.25), (1.39281249046326, -34744.7421875), (1.40281248092651, -34778.78125), (1.41281247138977, -34812.3671875), (1.42281246185303, -34845.515625), (1.43281245231628, -34878.234375), (1.44281244277954, -34910.53125), (1.45281255245209, -34942.41015625), (1.46281254291534, -34973.88671875), (1.4728125333786, -35004.9609375), (1.48281252384186, -35035.64453125), (1.49281251430511, -35065.9453125), 
(1.50281250476837, -35095.8671875), (1.51281249523163, -35125.421875), (1.52281248569489, -35154.61328125), (1.53281247615814, -35183.4453125), (1.5428124666214, -35211.9296875), (1.55281245708466, -35240.0703125), (1.56281244754791, -35267.87109375), (1.57281255722046, -35295.34375), (1.58281254768372, -35322.4921875), (1.59281253814697, -35349.31640625), (1.60281252861023, -35375.828125), (1.61281251907349, -35402.03125), (1.62281250953674, -35427.9296875), (1.6328125, -35453.53515625), (1.64281249046326, -35478.84375), (1.65281248092651, -35503.86328125), (1.66281247138977, -35528.6015625), (1.67281246185303, -35553.05859375), (1.68281245231628, -35577.24609375), (1.69281244277954, -35601.16015625), (1.70281255245209, -35624.8125), (1.71281254291534, -35648.203125), (1.7228125333786, -35671.33984375), (1.73281252384186, -35694.22265625), (1.74281251430511, -35716.859375), (1.75281250476837, -35739.25), (1.76281249523163, -35761.40234375), (1.77281248569489, -35783.31640625), (1.78281247615814, -35805.0), (1.7928124666214, -35826.45703125), (1.80281245708466, -35847.6875), (1.81281244754791, -35868.6953125), (1.82281255722046, -35889.484375), (1.83281254768372, -35910.0625), (1.84281253814697, -35930.42578125), (1.85281252861023, -35950.58203125), (1.86281251907349, -35970.53125), (1.87281250953674, -35990.28125), (1.8828125, -36009.83203125), (1.89281249046326, -36029.1875), (1.90281248092651, -36048.34765625), (1.91281247138977, -36067.3203125), (1.92281246185303, -36086.10546875), (1.93281245231628, -36104.703125), (1.94281244277954, -36123.12109375), (1.95281255245209, -36141.359375), (1.96281254291534, -36159.421875), (1.9728125333786, -36177.3125), (1.98281252384186, -36195.02734375), (1.99281251430511, -36212.578125), (2.00281238555908, -36229.95703125), (2.01281261444092, -36247.17578125), (2.02281260490417, -36264.23046875), (2.03281259536743, -36281.125), (2.04281258583069, -36297.86328125), (2.05281257629395, -36314.4453125), (2.0628125667572, 
-36344.6875), (2.07281255722046, -36381.9609375), (2.08281254768372, -36418.8984375), (2.09281253814697, -36455.49609375), (2.10281252861023, -36491.76953125), (2.11281251907349, -36531.01953125), (2.12281250953674, -36590.515625), (2.1328125, -36649.4765625), (2.14281249046326, -36707.91796875), (2.15281248092651, -36765.83984375), (2.16281247138977, -36823.25), (2.17281246185303, -36880.15625), (2.18281245231628, -36936.56640625), (2.19281244277954, -36992.48828125), (2.2028124332428, -37047.921875), (2.21281242370605, -37102.87890625), (2.22281241416931, -37157.36328125), (2.23281240463257, -37211.3828125), (2.24281239509583, -37264.94140625), (2.25281238555908, -37318.04296875), (2.26281261444092, -37370.69921875), (2.27281260490417, -37422.9140625), (2.28281259536743, -37474.6875), (2.29281258583069, -37526.03125), (2.30281257629395, -37576.9453125), (2.3128125667572, -37627.44140625), (2.32281255722046, -37677.51953125), (2.33281254768372, -37727.1875), (2.34281253814697, -37776.44921875), (2.35281252861023, -37825.3125), (2.36281251907349, -37873.7734375), (2.37281250953674, -37921.84765625), (2.3828125, -37969.53515625), (2.39281249046326, -38016.83984375), (2.40281248092651, -38063.765625), (2.41281247138977, -38110.31640625), (2.42281246185303, -38156.50390625), (2.43281245231628, -38202.32421875), (2.44281244277954, -38247.78125), (2.4528124332428, -38292.88671875), (2.46281242370605, -38337.640625), (2.47281241416931, -38382.046875))] lis = [map(list,x) for x in lis] #create list of lists as you can't modify a tuple maxx = max(y[0] for x in lis[:m] for y in x) #find the max in first m tuples for i in xrange(m,m+n+1): #update n tuples after m for j in xrange(len(lis[i])): lis[i][j][0] += maxx new_lis = lis[0] + lis[1] ```
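The same shift-and-concatenate idea can be checked on a tiny input. This is a minimal sketch with made-up two-point tuples standing in for the real `dCF3` data:

```python
from itertools import chain

# Made-up miniature version of dCF3: two tuples of (x, y) pairs.
data = [((0.0, 0.0), (1.0, 0.0)),
        ((0.0, 0.0), (0.5, -1.0))]

offset = max(x for x, _ in data[0])          # max of the first column -> 1.0
# Shift the first column of every remaining tuple by that offset.
shifted = [tuple((x + offset, y) for x, y in t) for t in data[1:]]
# Join everything into one flat tuple of pairs.
joined = tuple(chain(data[0], *shifted))
```

The same pattern extends to any number of trailing tuples, since `shifted` collects all of `data[1:]`.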
> *"First I'd like to get the maximum value of the first column of the first tuple "*

```
m = max(dCF3[0])[0]
```

or, if you prefer to be more explicit about considering only the first column:

```
from operator import itemgetter
m = max(dCF3[0], key=itemgetter(0))[0]
```

> *"...and add that value to the first column of the second tuple."*

tuples are immutable so you cannot update values within them. You can however create a new tuple with the updated values. The following creates a new tuple based on `dCF3[1]` with `m` added to the first column of all entries:

```
new_tup2 = tuple((a + m, b) for a, b in dCF3[1])
```

> *"Additionally, after this operation I'd like to join the two tuples in one."*

If you mean concatenating the two tuples into a single long tuple:

```
joined = dCF3[0] + new_tup2
```

---

**Update:**

To address your updated question, I gather from your example code that you want to:

1. Find the max value of the first column of the first tuple as `t1max`
2. Concatenate all the tuples into a single long tuple, but with the last `N-1` tuples (in your example, `N=2`) using updated values such that the first column is incremented by `t1max`.

Here's how I would approach it, with a view to maintaining efficiency by minimising the number of copies we make of the tuples:

```
n = 2      # your parameter
p = n - 1  # shorthand, just to make the following code cleaner

t1max = max(dCF3[0])[0]  # get max of first col of first tuple

# store a reference to all tuples except the last N-1
# Note that we're NOT making copies of the actual tuples. Only refs.
tup_list = list(dCF3[:-p])

# Append generators that will return the last N-1 tuples with updated values
# Again, we're not making any copies of the tuples. Only generators that will
# iterate through the tuples when consumed
for tup in dCF3[-p:]:
    tup_list.append(((a + t1max, b) for a, b in tup))

# Now we're ready to create a copy of the tuples as a single large tuple
from itertools import chain
new_tup = tuple(chain.from_iterable(tup_list))
```

p.s. If you need to support python <2.6 where [itertools.chain.from\_iterable](http://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable) is not available, you could use `chain(*tup_list)` instead.
Work with tuples
[ "", "python", "tuples", "" ]
In my SQL Server code, I have this select statement:

```
select distinct a.HireLastName, a.HireFirstName, a.HireID, a.Position_ID, a.BarNumber, a.Archived, a.DateArchived, b.Position_Name from NewHire a join Position b on a.Position_ID = b.Position_ID join WorkPeriod c on a.hireID = c.HireID where a.Archived = 0 and c.InquiryID is not null order by a.HireID DESC, a.HireLastName, a.HireFirstName
```

And I want to add a new column to it. However, this column is not a column from a table; it's just used to store a `float` from a calculation I make from existing columns. The number I get is calculated like this (`@acc` is the `a.HireID` from the above select statement):

```
CAST((select COUNT(*) from Hire_Response WHERE HireID = @acc AND (HireResponse = 0 OR HireResponse = 1)) as FLOAT) / CAST((select COUNT(*) from Hire_Response WHERE HireID = @acc) as FLOAT)
```

How can I do this? Thanks.
This should do it ``` select distinct a.HireLastName, a.HireFirstName, a.HireID, a.Position_ID, a.BarNumber, a.Archived, a.DateArchived, b.Position_Name, CAST((select COUNT(*) from Hire_Response WHERE HireID = a.HireID AND (HireResponse = 0 OR HireResponse = 1)) as FLOAT) / CAST((select case when COUNT(*) = 0 then 1 else COUNT(*) end from Hire_Response WHERE HireID = a.HireID) as FLOAT) as mySpecialColumn from NewHire a join Position b on a.Position_ID = b.Position_ID join WorkPeriod c on a.hireID = c.HireID where a.Archived = 0 and c.InquiryID is not null order by a.HireID DESC, a.HireLastName, a.HireFirstName ```
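As a quick sanity check of the ratio logic above (including the divide-by-zero guard), here is the same correlated-subquery shape run against an in-memory SQLite database from Python. This is a sketch with made-up `Hire_Response` rows; SQLite stands in for SQL Server, so only portable syntax is used:

```python
import sqlite3

# Hypothetical data: four responses for HireID 3, three of them 0 or 1.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Hire_Response (HireID INT, HireResponse INT);
    INSERT INTO Hire_Response VALUES (3, 0), (3, 1), (3, 2), (3, 1);
""")

ratio = con.execute("""
    SELECT CAST((SELECT COUNT(*) FROM Hire_Response
                 WHERE HireID = 3 AND (HireResponse = 0 OR HireResponse = 1)) AS FLOAT)
         / CAST((SELECT CASE WHEN COUNT(*) = 0 THEN 1 ELSE COUNT(*) END
                 FROM Hire_Response WHERE HireID = 3) AS FLOAT)
""").fetchone()[0]
# 3 of the 4 responses are 0 or 1, so the ratio is 0.75
```

The `CASE WHEN COUNT(*) = 0 THEN 1` guard is what keeps the query from dividing by zero when a hire has no responses at all.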
You just need to add the calculation to your select statement as I've put below; I also aliased the calculation with a column name for you:

```
select distinct a.HireLastName, a.HireFirstName, a.HireID, a.Position_ID, a.BarNumber, a.Archived, a.DateArchived, b.Position_Name, CAST((select COUNT(*) from Hire_Response WHERE HireID = a.HireID AND (HireResponse = 0 OR HireResponse = 1)) as FLOAT) / CAST((select COUNT(*) from Hire_Response WHERE HireID = a.HireID) as FLOAT) AS [Calculation] from NewHire a join Position b on a.Position_ID = b.Position_ID join WorkPeriod c on a.hireID = c.HireID where a.Archived = 0 and c.InquiryID is not null order by a.HireID DESC, a.HireLastName, a.HireFirstName
```
How to merge a select statement with a new dynamic values as columns?
[ "", "sql", "sql-server", "select", "" ]
I have written the following function in SQL Server 2005:

```
create function fnBillParticulars() return table as return (select * from billParticulars where Id='2425') go
```

**It's giving me the following errors:**

```
1.Msg 156, Level 15, State 1, Procedure fnBillParticulars, Line 2 Incorrect syntax near the keyword 'return'. 2.Msg 178, Level 15, State 1, Procedure fnBillParticulars, Line 4 A RETURN statement with a return value cannot be used in this context.
```

What can be the mistake? Please help me.
Please try: ``` create function fnBillParticulars() returns table as return (select * from billParticulars where Id='2425') go ```
you can alternatively create a `VIEW` on this,

```
CREATE VIEW fnBillParticulars
AS
select * from billParticulars where Id='2425'
```

or if you want a `table valued function`:

```
CREATE FUNCTION fnBillParticulars()
RETURNS @BillParticulars TABLE
(
    Id int,
    -- other columns here
)
AS
BEGIN
    INSERT INTO @BillParticulars (Id, ...) -- specify columns here
    SELECT * FROM billParticulars WHERE Id = '2425';
    RETURN;
END;
```
sql function not retrieving table and giving error
[ "", "sql", "sql-server-2005", "sql-function", "" ]
I need some assistance creating a query for the following result: ![enter image description here](https://i.stack.imgur.com/p2G7A.jpg) I currently have two tables: Accounts, which lists the users and how much they initially owe, and Payments, which holds all the records of the payments the users have made. I know how to merge tables with JOIN but I don't know how to do the math needed to get the owed amount and paid amounts. The columns in Accounts consist of: ID, Name, Account, Borrowed The columns in Payments consist of: ID, AcctID, PaymentAmt I need a query that will combine both of these tables and do the math to show how much the user has currently paid and how much the user still owes from the initial borrowed amount. ***Example Table data:*** ### ACCOUNTS TABLE ID = 3, Name = Joe, Account = Business, Borrowed = 100.00 ### PAYMENTS TABLE ID = 1, AcctID = 3, PaymentAmt = 10.00 ID = 2, AcctID = 3, PaymentAmt = 10.00 I am using MS SQL in C#.
You just need to join and then use SUM and GROUP BY. ``` SELECT a.Name , a.Account , a.Borrowed - COALESCE(SUM(p.PaymentAmt),0) as [Still Owes], COALESCE(SUM(p.PaymentAmt),0) as Paid, a.Borrowed FROM ACCOUNTS a LEFT JOIN PAYMENTS p ON a.ID = p.AcctID GROUP BY a.Name , a.Account , a.Borrowed ``` Note that I did a LEFT JOIN in the case no payments were made. This also requires the use of COALESCE to convert Null SUMs to 0 [DEMO](http://sqlfiddle.com/#!6/1a61d/2)
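To see the join/SUM/GROUP BY/COALESCE shape in action, here is a minimal sketch run against an in-memory SQLite database from Python, using the example data from the question. SQLite stands in for MS SQL here, so T-SQL-specific syntax is avoided:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Accounts (ID INT, Name TEXT, Account TEXT, Borrowed REAL);
    CREATE TABLE Payments (ID INT, AcctID INT, PaymentAmt REAL);
    INSERT INTO Accounts VALUES (3, 'Joe', 'Business', 100.0);
    INSERT INTO Payments VALUES (1, 3, 10.0), (2, 3, 10.0);
""")

row = con.execute("""
    SELECT a.Name,
           a.Borrowed - COALESCE(SUM(p.PaymentAmt), 0) AS StillOwes,
           COALESCE(SUM(p.PaymentAmt), 0)              AS Paid
    FROM Accounts a
    LEFT JOIN Payments p ON a.ID = p.AcctID
    GROUP BY a.ID, a.Name, a.Borrowed
""").fetchone()
# Joe borrowed 100.00 and made two 10.00 payments -> still owes 80.00, paid 20.00
```

The LEFT JOIN plus COALESCE is what keeps an account with no payments in the result with a Paid of 0 instead of NULL.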
This should do: ``` SELECT A.[Name], A.Account, A.Borrowed - ISNULL(P.Paid,0) [Still Owes], ISNULL(P.Paid,0) Paid, A.Borrowed FROM Accounts A LEFT JOIN ( SELECT AcctID, SUM(PaymentAmt) Paid FROM Payments GROUP BY AcctID) P ON A.ID = P.AcctID ```
Creating a SQL Query with two tables
[ "", "sql", "" ]
In the following code, ``` [{word: score_tweet(tweet) for word in tweet} for tweet in tweets] ``` I am getting a list of dicts: ``` [{u'soad': 0.0, u'&lt;3': 0.0}, {u'outros': 0.0, u'acredita': 0.0}] ``` I would like to obtain only one flat dict like: ``` {u'soad': 0.0, u'&lt;3': 0.0, u'outros': 0.0, u'acredita': 0.0} ``` How should I change my code? Note: I am using Python 2.7.
``` {word: score_tweet(tweet) for tweet in tweets for word in tweet} ```
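A minimal runnable sketch of the flattening, with a stub `score_tweet` that just returns 0.0 since the real scorer isn't shown in the question:

```python
def score_tweet(tweet):
    # Hypothetical stand-in for the real scoring function.
    return 0.0

tweets = [['soad', '<3'], ['outros', 'acredita']]

# The outer loop comes first, then the inner loop over its items.
flat = {word: score_tweet(tweet) for tweet in tweets for word in tweet}
```

All four words end up as keys of a single dict instead of two separate dicts.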
Move the `for` loop into the dict comprehension: ``` {word: score_tweet(tweet) for tweet in tweets for word in tweet} ``` Keep in mind that two `for` loops in one line are hard to read. I would do something like this instead: ``` scores = {} for tweet in tweets: tweet_score = score_tweet(tweet) for word in tweet: scores[word] = tweet_score ```
Nested dict comprehension
[ "", "python", "" ]
I have a column with date format `2006-09-08 14:39:41.000`. I want to make a view using this column but I need the date to display in ISO 8601: `yyyy-MM-ddThh:mm:ss.SSSZ`. How can I convert it?
**Try the following** (style 126 is the ISO 8601 format; the value must be a `datetime` for the style to apply, so cast a string literal first):

```
SELECT CONVERT(char(30), CAST('2006-09-08 14:39:41.000' AS datetime), 126)
```

Hope it helps.
Conversion code for ISO 8601 is 126, you can use something like this: ``` SELECT CONVERT(VARCHAR, DateColumn, 126) FROM Table ```
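For comparison, the same `yyyy-MM-ddTHH:mm:ss.SSS` shape can be produced application-side in Python. This is a sketch of what style 126 should look like, not a SQL Server feature:

```python
from datetime import datetime

d = datetime(2006, 9, 8, 14, 39, 41)
# %f gives microseconds; trim the last three digits to get milliseconds.
iso = d.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3]
```

This can be handy for double-checking what the database returns against what the client expects.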
How to convert date to ISO 8601 in SQL-Server?
[ "", "sql", "sql-server", "" ]
I'm new to OpenCV. What is the Python function which acts the same as *cv::clone()* in C++? I just try to get a rect by

```
rectImg = img[10:20, 10:20]
```

but when I draw a line on it, I find the line appears both on *img* and *rectImg*. So, how can I get this done?
[Abid Rahman K's answer](https://stackoverflow.com/questions/16533078/clone-an-image-in-cv2-python/16535453#16535453) is correct, but you say that you are using cv2 which inherently uses NumPy arrays. So, to make a complete different copy of say "myImage": ``` newImage = myImage.copy() ``` The above is enough. There isn't any need to import NumPy (`numpy`).
If you use `cv2`, the correct method is to use the `.copy()` method in NumPy. It will create a copy of the array you need. Otherwise it will produce only a view of that object. For example: ``` In [1]: import numpy as np In [2]: x = np.arange(10*10).reshape((10, 10)) In [4]: y = x[3:7, 3:7].copy() In [6]: y[2, 2] = 1000 In [8]: 1000 in x Out[8]: False # See, 1000 in y doesn't change values in x, the parent array. ```
Clone an image in cv2 Python
[ "", "python", "opencv", "" ]
I made a table with this query:

```
CREATE TABLE questions ( _id INTEGER PRIMARY KEY AUTOINCREMENT , question TEXT , correct_answer TEXT DEFAULT (null) , answer1 TEXT DEFAULT (null) , answer2 TEXT DEFAULT (null) , answer3 TEXT DEFAULT (null) , answer4 TEXT DEFAULT (null) , difficulty INTEGER DEFAULT (null) , nid INTEGER );
```

Now I want to omit the correct\_answer column, put the right answer in the answer1 column, and put the other 3 incorrect answers in answer2, answer3, answer4. The problem is that currently the right answer could be in any of the four columns, so I could not merge them. An example of the table looks like this: ![enter image description here](https://i.stack.imgur.com/sIdsK.png) You can see that in the first row the correct answer is in answer2, and for the second row the correct answer is in answer4. So how can I put only the right answer in the answer1 column and the other wrong answers in answer2, answer3, answer4 respectively? What will be the SQL query? Or any other idea?
You may try this:

```
SELECT question, correct_answer, CASE WHEN answer1 != correct_answer THEN answer1 ELSE answer2 END AS incorrect1, CASE WHEN (answer1 != correct_answer AND answer2 != correct_answer) THEN answer2 ELSE answer3 END AS incorrect2, CASE WHEN (answer1 != correct_answer AND answer2 != correct_answer AND answer3 != correct_answer) THEN answer3 ELSE answer4 END AS incorrect3 FROM questions
```
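A quick way to convince yourself that the `CASE WHEN` chain picks out the three wrong answers is to run it against a tiny in-memory SQLite table. This sketch uses one made-up row where the correct answer sits in answer2, mirroring the first row of the example image:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE questions (question TEXT, correct_answer TEXT,
                            answer1 TEXT, answer2 TEXT, answer3 TEXT, answer4 TEXT);
    INSERT INTO questions VALUES ('q1', 'B', 'A', 'B', 'C', 'D');
""")

row = con.execute("""
    SELECT correct_answer,
           CASE WHEN answer1 != correct_answer THEN answer1 ELSE answer2 END,
           CASE WHEN (answer1 != correct_answer AND answer2 != correct_answer)
                THEN answer2 ELSE answer3 END,
           CASE WHEN (answer1 != correct_answer AND answer2 != correct_answer
                      AND answer3 != correct_answer)
                THEN answer3 ELSE answer4 END
    FROM questions
""").fetchone()
# correct answer 'B' first, then the three incorrect ones 'A', 'C', 'D'
```

Each `CASE` skips past whichever column held the correct answer, so the three remaining columns come out in order.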
I would do something like this: ``` insert into question_new (_id, qestion, answer1, answer2, answer3, answer4) SELECT _id, question, correct_answer as answer1, if (correct_answer = answer2 , answer1 , answer2) as answer2, if (correct_answer = answer3 , answer1 , answer3) as answer3, if (correct_answer = answer4 , answer1 , answer4) as answer4 FROM questions ``` In simple: 1. answer1 will always be the correct\_answer column. 2. answer2/3/4 will be * the same column if different from the correct answer (same as default table) * the answer1 if is equal the correct answer (in answer1 there is the correct answer, so you need to set the wrong answer1 in new position) **update** Added same query using `case when` ``` insert into question_new (_id, qestion, answer1, answer2, answer3, answer4) SELECT _id, question, correct_answer as answer1, case when correct_answer = answer2 then answer1 else answer2 end as answer2, case when correct_answer = answer3 then answer1 else answer3 end as answer3, case when correct_answer = answer4 then answer1 else answer4 end as answer4 FROM questions ```
Table column merged query
[ "", "mysql", "sql", "database", "" ]
I'm using Python and OpenCV to get an image from the webcam, and I want to know how to draw a circle over my image: just a simple green circle with transparent fill. ![enter image description here](https://i.stack.imgur.com/tkgoD.gif) my code:

```
import cv2
import numpy
import sys

if __name__ == '__main__':
    #get current frame from webcam
    cam = cv2.VideoCapture(0)
    ret, img = cam.read()   # read() returns a (success, frame) pair

    #how draw a circle????

    cv2.imshow('WebCam', img)
    cv2.waitKey()
```

Thanks in advance.
```
cv2.circle(img, center, radius, color, thickness=1, lineType=8, shift=0) → None
Draws a circle.

Parameters:
img (CvArr) – Image where the circle is drawn
center (CvPoint) – Center of the circle
radius (int) – Radius of the circle
color (CvScalar) – Circle color
thickness (int) – Thickness of the circle outline if positive, otherwise this indicates that a filled circle is to be drawn
lineType (int) – Type of the circle boundary, see Line description
shift (int) – Number of fractional bits in the center coordinates and radius value
```

Use a positive `thickness` to draw only the border; pass `-1` to draw a filled circle.
Just an additional information: The parameter "center" of OpenCV's drawing function cv2.circle() takes a tuple of two integers. The first is the width location and the second is the height location. This ordering is different from the usual array indexing. The following example demonstrates the issue. ``` import numpy as np import cv2 height, width = 150, 200 img = np.zeros((height, width, 3), np.uint8) img[:, :] = [255, 255, 255] # Pixel position to draw at row, col = 20, 100 # Draw a square with position 20, 100 as the top left corner for i in range(row, 30): for j in range(col, 110): img[i, j] = [0, 0, 255] # Will the following draw a circle at (20, 100)? # Ans: No. It will draw at row index 100 and column index 20. cv2.circle(img,(col, row), 5, (0,255,0), -1) cv2.imwrite("square_circle_opencv.jpg", img) ``` [![python opencv cv2.circle center indexing issue](https://i.stack.imgur.com/GK6ln.jpg)](https://i.stack.imgur.com/GK6ln.jpg)
draw a circle over image opencv
[ "", "python", "opencv", "drawing", "" ]
I am using Flask to write a Blog, with that I am using Markdown's Python library to generate HTML for me, I am willing to have syntax highlighting, therefore I am using `markdown.markdown(string, extensions=['codehilite']` According to their [wiki](http://pythonhosted.org/Markdown/extensions/code_hilite.html), it should add a html class; ``` <div class="codehilite"><pre><code># Code goes here ...</code></pre></div> ``` But it doesn't seem to be working, following the tryouts from my interpreter; ``` In [9]: markdown.version Out[9]: '2.3.1' In [10]: text = """:::python ....: import os ....: print "This is a text!" ....: """ In [11]: html = markdown.markdown(text, extensions=['codehilite']) In [12]: html Out[12]: u'<p>:::python\nimport os\nprint "This is a text!"</p>' In [13]: # Even more funnier, when following the examples in the usage section "..['codehilite(linenums=True)'] In [14]: html = markdown.markdown(text, extensions=['codehilite(linenums=True)']) In [15]: html Out[15]: u'<p>:::python\nimport os\nprint "This is a text!"</p>' In [16]: # No line numbers, or any class.. ``` I am not sure what's the problem here, I have Pygments installed, I already upgraded Markdown's lib, but nothing. The expected results here would be that Markdown would add the html class *codehilite* so I will be able to get the syntax working. What seems to be the problem here?
I found another solution, [markdown2](https://github.com/trentm/python-markdown2) Here are a few examples (following what I was willing..) ``` In [1]: import markdown2 In [2]: markdown2.markdown("> This is a paragraph and I am **bold**") Out[2]: u'<blockquote>\n <p>This is a paragraph and I am <strong>bold</strong></p>\n</blockquote>\n' In [3]: code = """```python if True: print "hi" ```""" ...: In [4]: markdown2.markdown(code, extras=['fenced-code-blocks']) Out[4]: u'<div class="codehilite"><pre><code><span class="k">if</span> <span class="bp">True</span><span class="p">:</span>\n <span class="k">print</span> <span class="s">&quot;hi&quot;</span>\n</code></pre></div>\n' ```
I've established that codehilite, aside from being generally temperamental, breaks when there is a list immediately before it:

This markdown, and variations of it, just doesn't work:

```
* apples
* oranges

    #!python
    import os
```

But if I put something between the list and the code, then it does work:

```
* apples
* oranges

Put something between the code and the list

    #!python
    import os
```

But it's generally unpredictable. I tried a zillion combinations with very mixed success replicating what I read in the documentation. Not happy...

### Use `fenced_code` instead

Then I wandered into [the other Markdown extensions](https://pythonhosted.org/Markdown/extensions/index.html) and tried adding the fenced_code extension explicitly and retrying the fenced code examples. Works better. So proceeding with

```
pygmented_body = markdown.markdown(rendered_body, extensions=['codehilite', 'fenced_code'])
```

I'm having much greater success using `fenced_code` exclusively:

```
* Don't need to indent 4 spaces
* Don't need something between the list and the code

~~~~{.python hl_lines='3'}
import os

print('hello, world')
~~~~

And final comments here.
```
Python Markdown - Not adding codehilite
[ "", "python", "markdown", "pygments", "codehighlighter", "" ]
A variable `AA` is in `aaa.py`. I want to use this variable in my other Python file, `bbb.py`. How do I access this variable?
You're looking for [modules!](http://docs.python.org/2/tutorial/modules.html) In `aaa.py`: ``` AA = 'Foo' ``` In `bbb.py`: ``` import aaa print aaa.AA # Or print(aaa.AA) for Python 3 # Prints Foo ``` Or this works as well: ``` from aaa import AA print AA # Prints Foo ```
You can import it; this will execute the whole script though. ``` from aaa import AA ```
I want to refer to a variable in another python script
[ "", "python", "variables", "import", "" ]
I have a binary file (which I've created in C) and I would like to have a look inside it. Obviously, I won't be able to "see" anything useful as it's in binary. However, I do know that it contains a certain number of rows with numbers in double precision. I am looking for a script to just read some values and print them so I can verify if they are in the right range. In other words, it would be like doing `head` or `tail` on a text file in Linux. Is there a way of doing it? Right now I've got something in Python, but it does not do what I want:

```
CHUNKSIZE = 8192
file = open('eigenvalues.bin', 'rb')
data = list(file.read())
print data
```
Use the [`array` module](http://docs.python.org/2/library/array.html) to read homogeneous binary-representation numbers:

```
from array import array

data = array('d')
CHUNKSIZE = 8192
rowcount = CHUNKSIZE // data.itemsize  # number of doubles we find in CHUNKSIZE bytes

with open('eigenvalues.bin', 'rb') as eg:
    data.fromfile(eg, rowcount)
```

(Note the integer division `//`; on Python 3, `/` would produce a float, which `fromfile` does not accept.)

The `array.array` type otherwise behaves just like a list, only the type of values it can hold is restricted (in this case to `float`).

Depending on the input data, you may need to add a `data.byteswap()` call after reading to switch between little and big-endian. Use [`sys.byteorder`](http://docs.python.org/2/library/sys.html#sys.byteorder) to see what byteorder was used to read the data. If your data was written on a platform using little-endianness, swap if your platform uses the other form, and vice-versa:

```
import sys

if sys.byteorder == 'big':
    # data was written in little-endian form, so swap the bytes to match
    data.byteswap()
```
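As a complement to the `array`-based approach above, here is a hedged sketch using the standard `struct` module to peek at just the first few doubles of a file, the binary equivalent of `head`. The function name and the demo file contents are made up for illustration; native byte order is assumed:

```python
import os
import struct
import tempfile

def head_doubles(path, n=4):
    """Return up to the first n native-endian doubles stored in a binary file."""
    with open(path, 'rb') as f:
        raw = f.read(8 * n)          # each double is 8 bytes
    count = len(raw) // 8            # ignore any trailing partial value
    return struct.unpack('%dd' % count, raw[:8 * count])

# demo against a throwaway file with known contents
path = tempfile.mktemp()
with open(path, 'wb') as f:
    f.write(struct.pack('6d', 1.5, 2.5, 3.5, 4.5, 5.5, 6.5))
vals = head_doubles(path)
print(vals)  # -> (1.5, 2.5, 3.5, 4.5)
os.remove(path)
```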
You can see each byte of your file represented in unsigned decimal with this shell command:

```
od -t u1 eigenvalues.bin | less
```

Should you want to see a particular area and decode floating point numbers, you can use `dd` to extract them and `od`'s `-F` option to decode them, e.g.:

```
dd status=noxfer if=eigenvalues.bin bs=1 skip=800 count=16 | od -F
```

will show two double precision numbers stored at offsets 800 and 808 in the binary file.

Note that, going by the Linux tag on your question, I assume you are using the GNU versions of `dd` and `od`.
Interpreting binary files as ASCII
[ "", "python", "linux", "shell", "" ]
There is a table `T` with column `a`:

```
CREATE TABLE T (
  id_t integer not null,
  text varchar2(100),
  a integer
)
/

ALTER TABLE T ADD CONSTRAINT PK_T PRIMARY KEY (ID_T)
/
```

An index was created like this:

```
CREATE INDEX IDX_T$A ON T(a);
```

There is also a check constraint (constraint name mine; the original statement omitted it):

```
ALTER TABLE T ADD CONSTRAINT CHK_T_A CHECK (a is null or a = 1);
```

Most of the records in `T` have a null value of `a`, so a query using the index works really fast if the index is in a consistent state and its statistics are up to date. But the problem is that the values of `a` in some rows change really frequently (some rows get a null value, some get 1), so I need to rebuild the index, let's say every hour. However, the job doing this very often gets an exception when trying to rebuild the index:

```
ORA-00054: resource busy and acquire with NOWAIT specified
```

Can anybody help me cope with this issue?
An index rebuild is not needed in most cases. Of course newly created indexes are efficient, and their efficiency decreases over time. But this process stops after a while - it simply converges to some level. If you really need to optimize indexes, try the less invasive DDL command `ALTER INDEX ... SHRINK SPACE COMPACT`.

PS: I would also recommend using a smaller block size (4K or 8K) for your tablespace storage.
Have you tried adding "ONLINE" to that index rebuild statement? Edit: If online rebuild is not available then you might look at a fast refresh on commit materialised view to store the rowid's or primary keys of rows that have a 1 for column A. Start with a look at the documentation:- <http://docs.oracle.com/cd/B28359_01/server.111/b28326/repmview.htm> <http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_6002.htm#SQLRF01302> You'd create a materialised view log on the table, and then a materialised view. Think in particular about the resource requirements for this: changes to the master table require a change vector to be written to the materialised view log, which is effectively an additional insert for every change. Then the changes have to be propagated to another table (the materialised view storage table) with additional queries. It is by no means a low-impact option.
resource busy while rebuilding an index
[ "", "sql", "oracle", "oracle10g", "indexing", "locking", "" ]
I need to clean records from one database depending on whether they exist in another. It's fairly hard to explain, so here is an example:

```
Table Users
-----------
id
username
password

Table Articles
--------------
id
title
created_by
edited_by
```

`created_by` and `edited_by` contain the user ID. I have 3-4 tables with almost the same structure as the articles table, and I want to delete users from the users table who don't have any record in the articles-like tables. I mean users whose ID cannot be found in any of the articles-like tables in the `created_by` and `edited_by` columns. How can I do that?

I first tried to see if I could select all the data from all the tables joined to users, but the server cannot execute the query:

```
SELECT *
FROM `users`
JOIN `articles` ON `articles`.`created_by` = `users`.`id`
AND `articles`.`edited_by` = `users`.`id`
JOIN `articles_two` ON `articles_two`.`created_by` = `users`.`id`
AND `articles_two`.`edited_by` = `users`.`id`
JOIN `articles_three` ON `articles_three`.`created_by` = `users`.`id`
AND `articles_three`.`edited_by` = `users`.`id`
JOIN `articles_four` ON `articles_four`.`created_by` = `users`.`id`
AND `articles_four`.`edited_by` = `users`.`id`
JOIN `articles_five` ON `articles_five`.`created_by` = `users`.`id`
AND `articles_five`.`edited_by` = `users`.`id`
JOIN `articles_six` ON `articles_six`.`created_by` = `users`.`id`
AND `articles_six`.`edited_by` = `users`.`id`;
```
I think the cleanest way is `not in` in the `select` clause: ``` select * from users u where u.id not in (select created_by from articles where created_by is not null) and u.id not in (select edited_by from articles where edited_by is not null) and u.id not in (select created_by from articles_two where created_by is not null) and u.id not in (select edited_by from articles_two where edited_by is not null) and u.id not in (select created_by from articles_three where created_by is not null) and u.id not in (select edited_by from articles_three where edited_by is not null) and u.id not in (select created_by from articles_four where created_by is not null) and u.id not in (select edited_by from articles_four where edited_by is not null) ``` Performance should be helped by having indexes on the various `created_by` and `edited_by` columns.
This should work. It's not terribly elegant but I think it's easy to follow: ``` DELETE FROM Users WHERE ID NOT IN ( SELECT Created_By FROM Articles UNION SELECT Edited_By FROM Articles UNION SELECT Created_By FROM Articles_Two UNION SELECT Edited_By FROM Articles_Two ... UNION SELECT Created_By FROM Articles_Six UNION SELECT Edited_By FROM Articles_Six ) ``` As with any big "cleanup" query, (a) make a copy of the table first and (b) review carefully before typing `COMMIT`.
MySQL select from database where not in
[ "", "mysql", "sql", "join", "" ]
Eclipse platform, Python 3.3. I've created the code below to demonstrate a problem when using global variables and Python unittest. I'd like to know why the second unit test (a direct repeat of the first) results in a

```
NameError: global name '_fred' is not defined
```

Try commenting out the second test and it'll all pass ok. (Note: I've added a brief digest of what the real code is attempting to achieve after the example; hopefully it'll be less obtrusive there, as it's not really relevant to the issue.)

```
'''
Global Problem
'''
import unittest

_fred = None

def start():
    global _fred
    if _fred is None:
        _fred = 39
    _fred += 3

def stop():
    global _fred
    if _fred is not None:
        del _fred

class Test(unittest.TestCase):

    def setUp(self):
        start()

    def tearDown(self):
        stop()

    def test_running_first_time(self):
        assert(_fred == 42)

    def test_running_second_time(self):
        assert(_fred == 42)

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
```

In the real code, `_fred` is a variable referencing an instance of a class derived from Thread (see what I did there) and gets assigned in the start method: `_fred = MyThreadClass()`. There is a second global for a synchronized queue. The methods start and stop control processing queue items on the dedicated thread. 'stop' stops the processing while allowing items to be added. The API for Thread only permits a single call to start, so to restart processing I need a new instance of Thread. Hence the use of

```
if _fred is None:
```

and

```
del _fred
```

No prizes for guessing my primary language.
`del _fred` does not set `_fred` to `None` or anything like that. It removes the name `_fred`. Completely. For a global, it's as if it had never existed. For a local, it's as if it had never been assigned to. To set a variable to `None`, do the obvious thing: ``` _fred = None ```
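A minimal demonstration of the difference: `del` removes the name itself, while assignment keeps the name bound.

```python
x = 1
del x                     # the name x no longer exists at all
try:
    x
except NameError as e:
    removed = str(e)

y = 1
y = None                  # the name y still exists; it just refers to None

print(removed)    # -> name 'x' is not defined
print(y is None)  # -> True
```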
The problem is `del _fred`. Since you tell the interpreter `_fred` is global, `_fred` gets removed from the global dictionary, and is not set to `None`. When you tell a function that something is global, all it does is remember that, so when it performs operations on that variable name it does them globally. `global _fred` in `start` will not in any way affect the value of `_fred`, even if `_fred` is undefined. It is just a heads-up to the interpreter.
Global name is not defined for second unittest
[ "", "python", "eclipse", "python-unittest", "" ]
Let's say I have 2 Protobuf messages, A and B. Their overall structure is similar, but not identical, so we moved the shared stuff out into a separate message we called Common. This works beautifully. However, I'm now facing the following problem: a special case exists where I have to process a serialized message, but I don't know whether it's a message of type A or type B. I have a working solution in C++ (shown below), but I failed to find a way to do the same thing in Python.

**Example:**

```
// file: Common.proto
// contains some kind of shared struct that is used by all messages:
message Common
{
    ...
}

// file: A.proto
import "Common.proto";

message A
{
    required int32 FormatVersion = 1;
    optional bool SomeFlag [default = true] = 2;
    optional Common CommonSettings = 3;
    ... A-specific Fields ...
}

// file: B.proto
import "Common.proto";

message B
{
    required int32 FormatVersion = 1;
    optional bool SomeFlag [default = true] = 2;
    optional Common CommonSettings = 3;
    ... B-specific Fields ...
}
```

**Working Solution in C++**

In C++ I'm using the reflection API to get access to the CommonSettings field like this:

```
namespace gp = google::protobuf;
...
const Common& getCommonBlock(const gp::Message* paMessage)
{
    const gp::FieldDescriptor* paFieldDescriptor = paMessage->GetDescriptor()->FindFieldByNumber(3);
    const gp::Reflection* paReflection = paMessage->GetReflection();
    return dynamic_cast<const Common&>(paReflection->GetMessage(*paMessage, paFieldDescriptor));
}
```

The method '*getCommonBlock*' uses *FindFieldByNumber()* to get hold of the descriptor of the field I'm trying to get. Then it uses reflection to fetch the actual data. *getCommonBlock* can process messages of type A, B or any future type as long as the Common field remains located at index 3.

My question is: is there a way to do a similar thing in Python? I've been looking at the [Protobuf documentation](https://developers.google.com/protocol-buffers/docs/reference/python/google.protobuf-module), but couldn't figure out a way to do it.
I know this is an old thread, but I'll respond anyway for posterity: Firstly, as you know, it's not possible to determine the type of a protocol buffer message purely from its serialized form. The only information in the serialized form you have access to is the field numbers, and their serialized values. Secondly, the "right" way to do this would be to have a proto that contains both, like ``` message Parent { required int32 FormatVersion = 1; optional bool SomeFlag [default = true] = 2; optional Common CommonSettings = 3; oneof letters_of_alphabet { A a_specific = 4; B b_specific = 5; } } ``` This way, there's no ambiguity: you just parse the same proto (`Parent`) every time. --- Anyway, if it's too late to change that, what I recommend you do is define a new message with only the shared fields, like ``` message Shared { required int32 FormatVersion = 1; optional bool SomeFlag [default = true] = 2; optional Common CommonSettings = 3; } ``` You should then be able to pretend that the message (either `A` or `B`) is in fact a `Shared`, and parse it accordingly. The unknown fields will be irrelevant.
One of the advantages of Python over a statically-typed language like C++ is that you don't need to use any special reflection code to get an attribute of an object of unknown type: you just ask the object. The built-in function that does this is `getattr`, so you can do: ``` settings_value = getattr(obj, 'CommonSettings') ```
Accessing field of Protobuf message of unknown type in Python
[ "", "python", "reflection", "protocol-buffers", "" ]
I have a Python function called `plot_pdf(f)` that might throw an error. I use a list comprehension to apply it over a list of files:

```
[plot_pdf(f) for f in file_list]
```

I want to use a try-except block to skip any possible errors during the iteration and continue with the next file. So is the following code the correct way to do exception handling in a Python list comprehension?

```
try:
    [plot_pdf(f) for f in file_list] # using list comprehensions
except:
    print ("Exception: ", sys.exc_info()[0])
    continue
```

Will the above code terminate the current iteration and go on to the next one? If I can't use a list comprehension to catch errors during iteration, then I have to use the normal `for` loop:

```
for f in file_list:
    try:
        plot_pdf(f)
    except:
        print("Exception: ", sys.exc_info()[0])
        continue
```

I want to know if I can use try-except to do exception handling in a list comprehension.
```
try:
    [plot_pdf(f) for f in file_list] # using list comprehensions
except:
    print ("Exception: ", sys.exc_info()[0])
    continue
```

If `plot_pdf(f)` throws an error during execution of the comprehension, it is caught in the `except` clause, and the remaining items in the comprehension won't be evaluated.

It is not possible to handle exceptions in a list comprehension, for a list comprehension is an expression containing other expressions, nothing more (i.e. no statements, and only statements can catch/ignore/handle exceptions).

> Function calls are expressions, and the function bodies can include all
> the statements you want, so delegating the evaluation of the
> exception-prone sub-expression to a function, as you've noticed, is
> one feasible workaround (others, when feasible, are checks on values
> that might provoke exceptions, as also suggested in other answers).

[More here.](https://stackoverflow.com/questions/1528237/how-can-i-handle-exceptions-in-a-list-comprehension-in-python)
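A small self-contained demonstration of that behaviour; this uses a stand-in for `plot_pdf`, since the real function isn't shown:

```python
def plot_pdf(f):          # stand-in that fails on one input
    if f == 'bad.pdf':
        raise ValueError('cannot plot %s' % f)
    return f.upper()

results = []
try:
    results = [plot_pdf(f) for f in ['a.pdf', 'bad.pdf', 'c.pdf']]
except ValueError:
    pass

# The whole comprehension is abandoned: not even the items processed
# before the failure survive, and 'c.pdf' is never attempted.
print(results)  # -> []
```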
You're stuck with your `for` loop unless you handle the error inside `plot_pdf` or a wrapper. ``` def catch_plot_pdf(f): try: return plot_pdf(f) except: print("Exception: ", sys.exc_info()[0]) [catch_plot_pdf(f) for f in file_list] ```
Python exception handling in list comprehension
[ "", "python", "exception", "list-comprehension", "" ]
Python 3. I'm using Qt's file dialog widget to save PDFs downloaded from the internet. I've been reading the file using `open`, and attempting to write it using the file dialog widget. However, I've been running into a "TypeError: '_io.BufferedReader' does not support the buffer interface" error.

Example code:

```
with open('file_to_read.pdf', 'rb') as f1:
    with open('file_to_save.pdf', 'wb') as f2:
        f2.write(f1)
```

This logic works properly with text files when not using the 'b' designator, or when reading a file from the web, like with urllib or requests. These are of the 'bytes' type, which I think I need to be opening the file as. Instead, it's opening as a BufferedReader. I tried `bytes(f1)`, but got "TypeError: 'bytes' object cannot be interpreted as an integer." Any ideas?
If your intent is to simply make a copy of the file, you could use [shutil](http://docs.python.org/3.3/library/shutil.html#module-shutil) ``` >>> import shutil >>> shutil.copyfile('file_to_read.pdf','file_to_save.pdf') ``` Or if you need to access byte by byte, similar to your structure, this works: ``` >>> with open('/tmp/fin.pdf','rb') as f1: ... with open('/tmp/test.pdf','wb') as f2: ... while True: ... b=f1.read(1) ... if b: ... # process b if this is your intent ... n=f2.write(b) ... else: break ``` But byte by byte is potentially *really slow*. Or, if you want a buffer that will speed this up (without taking the risk of reading an unknown file size completely into memory): ``` >>> with open('/tmp/fin.pdf','rb') as f1: ... with open('/tmp/test.pdf','wb') as f2: ... while True: ... buf=f1.read(1024) ... if buf: ... for byte in buf: ... pass # process the bytes if this is what you want ... # make sure your changes are in buf ... n=f2.write(buf) ... else: ... break ``` With Python 2.7+ or 3.1+ you can also use this shortcut (rather than using two `with` blocks): ``` with open('/tmp/fin.pdf','rb') as f1,open('/tmp/test.pdf','wb') as f2: ... ```
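If you want chunked copying without writing the loop yourself, `shutil.copyfileobj` does the buffered read/write for you. A sketch using throwaway temp files so it runs anywhere:

```python
import os
import shutil
import tempfile

src = tempfile.mktemp()
dst = tempfile.mktemp()
with open(src, 'wb') as f:
    f.write(b'\x00\x01\x02' * 1000)

# copy in fixed-size chunks; the whole file is never held in memory at once
with open(src, 'rb') as f1, open(dst, 'wb') as f2:
    shutil.copyfileobj(f1, f2, length=64 * 1024)

with open(src, 'rb') as f1, open(dst, 'rb') as f2:
    identical = f1.read() == f2.read()
print(identical)  # -> True
os.remove(src)
os.remove(dst)
```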
Learned from the *Python Cookbook*:

```
from functools import partial

with open(fpath, 'rb') as f, open(target_fpath, 'wb') as target_f:
    for _bytes in iter(partial(f.read, 1024), b''):
        target_f.write(_bytes)
```

`partial(f.read, 1024)` returns a function that reads the binary file 1024 bytes at a time. `iter` ends when it meets the sentinel: the empty bytes object `b''`. Note that `read` on a file opened in binary mode returns `bytes`, so a plain `''` sentinel would never match on Python 3 and the loop would not terminate.
Python writing binary files, bytes
[ "", "python", "python-3.x", "io", "buffer", "bufferedreader", "" ]
I have the following table:

```
col1 | col2 | col3
-----+------+-------
1    | a    | 5
5    | d    | 3
3    | k    | 7
6    | o    | 2
2    | 0    | 8
```

If a user searches for "1", the program will look for "1" in `col1` and get the value "5" from `col3`; then it will continue to search for "5" in `col1` and get "3" from `col3`, and so on. So it will print out:

```
1    | a    | 5
5    | d    | 3
3    | k    | 7
```

If a user searches for "6", it will print out:

```
6    | o    | 2
2    | 0    | 8
```

How can I build a `SELECT` query to do that?
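For reference, the traversal being asked for can be sketched in Python using the sample data above: each row's `col3` becomes the next `col1` to look up, and the walk stops when no row matches.

```python
# sample data: col1 -> (col2, col3)
rows = {1: ('a', 5), 5: ('d', 3), 3: ('k', 7), 6: ('o', 2), 2: ('0', 8)}

def walk(start):
    out, key = [], start
    while key in rows:
        col2, col3 = rows[key]
        out.append((key, col2, col3))
        key = col3          # follow the chain
    return out

print(walk(1))  # -> [(1, 'a', 5), (5, 'd', 3), (3, 'k', 7)]
print(walk(6))  # -> [(6, 'o', 2), (2, '0', 8)]
```

(This sketch assumes the chain is acyclic, as in the sample data.)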
**Edit**

The solution mentioned by @leftclickben is also effective. We can also use a stored procedure for the same purpose.

```
CREATE PROCEDURE get_tree(IN id int)
 BEGIN
 DECLARE child_id int;
 DECLARE prev_id int;
 SET prev_id = id;
 SET child_id=0;
 SELECT col3 into child_id
 FROM table1 WHERE col1=id ;
 create TEMPORARY table IF NOT EXISTS temp_table as (select * from table1 where 1=0);
 truncate table temp_table;
 WHILE child_id <> 0 DO
   insert into temp_table select * from table1 WHERE col1=prev_id;
   SET prev_id = child_id;
   SET child_id=0;
   SELECT col3 into child_id
   FROM TABLE1 WHERE col1=prev_id;
 END WHILE;
 select * from temp_table;
 END //
```

We are using a temp table to store the output, and as temp tables are session-based there will not be any issue regarding the output data being incorrect.

**[`SQL FIDDLE Demo`](http://sqlfiddle.com/#!2/58c804/1)**

Try this query:

```
SELECT col1, col2, @pv := col3 as 'col3'
FROM table1
JOIN (SELECT @pv := 1) tmp
WHERE col1 = @pv
```

## `SQL FIDDLE Demo`:

```
| COL1 | COL2 | COL3 |
+------+------+------+
|    1 |    a |    5 |
|    5 |    d |    3 |
|    3 |    k |    7 |
```

> **Note**
> The `parent_id` value should be less than the `child_id` for this solution to work.
The accepted answer by @Meherzad only works if the data is in a particular order. It happens to work with the data from the OP question. In my case, I had to modify it to work with my data. **Note** This only works when every record's "id" (col1 in the question) has a value GREATER THAN that record's "parent id" (col3 in the question). This is often the case, because normally the parent will need to be created first. However if your application allows changes to the hierarchy, where an item may be re-parented somewhere else, then you cannot rely on this. This is my query in case it helps someone; note it does not work with the given question because the data does not follow the required structure described above. ``` select t.col1, t.col2, @pv := t.col3 col3 from (select * from table1 order by col1 desc) t join (select @pv := 1) tmp where t.col1 = @pv ``` The difference is that `table1` is being ordered by `col1` so that the parent will be after it (since the parent's `col1` value is lower than the child's).
How to do the Recursive SELECT query in MySQL?
[ "", "mysql", "sql", "query-optimization", "recursive-query", "" ]
In Python 2.7, I have the following string: ``` "((1, u'Central Plant 1', u'http://egauge.com/'), (2, u'Central Plant 2', u'http://egauge2.com/'))" ``` **How can I convert this string back to tuples?** I've tried to use `split` a few times but it's very messy and makes a list instead. Desired output: ``` ((1, 'Central Plant 1', 'http://egauge.com/'), (2, 'Central Plant 2', 'http://egauge2.com/')) ``` Thanks for the help in advance!
You should use the `literal_eval` method from the `ast` module, which you can read more about [**here**](http://docs.python.org/2/library/ast.html#ast.literal_eval).

```
>>> import ast
>>> s = "((1, u'Central Plant 1', u'http://egauge.com/'),(2, u'Central Plant 2', u'http://egauge2.com/'))"
>>> ast.literal_eval(s)
((1, u'Central Plant 1', u'http://egauge.com/'),
 (2, u'Central Plant 2', u'http://egauge2.com/'))
```
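To illustrate why `literal_eval` is the safe choice here: it parses literals only and refuses arbitrary expressions instead of executing them.

```python
import ast

s = "((1, u'Central Plant 1'), (2, u'Central Plant 2'))"
tuples = ast.literal_eval(s)   # u'' prefixes parse fine on Python 2 and 3.3+
print(tuples[1])  # -> (2, 'Central Plant 2')

# Anything that is not a plain literal is rejected rather than executed:
try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError:
    print('rejected')  # -> rejected
```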
[`ast.literal_eval`](http://docs.python.org/2/library/ast.html?highlight=literal#ast.literal_eval) should do the trick—***safely***. E.G. ``` >>> ast.literal_eval("((1, u'Central Plant 1', u'http://egauge.com/'), ... (2, u'Central Plant 2', u'http://egauge2.com/'))") ((1, u'Central Plant 1', u'http://egauge.com/'), (2, u'Central Plant 2', u'http://egauge2.com/')) ``` See [this answer](https://stackoverflow.com/questions/3513292/python-make-eval-safe#answer-3513475) for more info on why **not** to use `eval`.
How to convert tuple in string to tuple object?
[ "", "python", "tuples", "" ]
I want to simplify my code, especially to avoid square brackets; so my question is : ``` def someFunction(): someString = "Hi!" someNumber = 22 somePointer = "" someList = [] names_list = ['someString','someNumber','somePointer','someList'] val_list = [someString, someNumber, somePointer, someList] someDict = {} for i in range( len( names_list ) ) : someDict[names_list[i]] = val_list[i] someOtherFunction( someDict ) def someOtherFunction( dict ): for key, value in dict.iteritems() : print key, value print someString, someNumber someFunction() ``` Is it possible to make it like this?
I can't see why this would be useful if you already know the list of variables and they will be used in some function:

```
# generate names and values again at some other place
def foo():
    for d in someDict :
        d_name = d
```

You can just unpack them during the function call instead:

```
def foo(someString=None, someNumber=None, somePointer=None, someList=None):
    pass

foo(**someDict)
```
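A runnable sketch of that unpacking, with names taken from the question:

```python
someDict = {'someString': 'Hi!', 'someNumber': 22,
            'somePointer': '', 'someList': []}

def someOtherFunction(someString=None, someNumber=None,
                      somePointer=None, someList=None):
    # each dict key lands in the keyword argument of the same name
    return someString, someNumber

result = someOtherFunction(**someDict)
print(result)  # -> ('Hi!', 22)
```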
If I understand correctly, you want a dict associating variable names with values. I propose that you build a list with the names and a list with the values, then append items to your dict:

```
someString = "Hi!"
someNumber = 22
somePointer = 4
someList = []

names_list = ['someString','someNumber','somePointer','someList']
val_list = [someString, someNumber, somePointer, someList]
len_list = len(val_list)

someDict = {}
for i in range(len_list) :
    someDict[names_list[i]] = val_list[i]
```

Then, you can query your dict by doing, say,

```
>>> someDict['someString']
'Hi!'
```
Python how to generate dicts from names and reverse
[ "", "python", "dictionary", "code-generation", "generator", "" ]
I'm probably doing something very silly and basic, but I just can't get this bit of code to work. I have a text file that contains a list of other text files (log files) with their full paths. I want to open the first file, grab the list, then open each in turn (ultimately to search each one for errors) and then close them. The problem I am having is that I can't get the data from the newly opened secondary files to display.

Text file 1 (logs.txt):

```
//server-1/program/data/instances/devapp/log/audit.log
//server-2/program/data/instances/devapp/log/bizman.db.log
```

The code I am trying to run:

```
import os

logdir = '/cygdrive/c/bob/logs.txt'

load_log_file = open (logdir, 'r')
read_log_file = load_log_file.readlines ()

def txt_search (read_log_file) :
    for entry in read_log_file :
        view_entry = open (entry, 'a+wb')
        print view_entry

print txt_search (read_log_file)
```

The output looks like the following:

```
$ python log_4.py
<open file '//server-1/program/data/instances/devapp/log/audit.log
', mode 'a+wb' at 0xfff3c180>
<open file '//server-2/program/data/instances/devapp/log/bizman.db.log
', mode 'a+wb' at 0xfff3c1d8>
None
```

Any help would be greatly appreciated as I'm getting to the point of pulling my hair out!

Many thanks, Bob
You can do something like this: ``` logdir = r"/cygdrive/c/bob/logs.txt" with open(logdir) as fin: for line in fin: with open(line.strip()) as log: print log.readlines() ``` If you want to `print` the files as seen, so without the surrounding brackets and other list markup, you can use the following line: ``` print "".join(log.readlines()) ```
If you want to display the contents of the file then use view\_entry.read(). You're just referencing the object hence why you're getting that response. ``` C:\Users\brayden>python Python 2.7.4 (default, Apr 6 2013, 19:54:46) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> f = open('test.txt', 'r') >>> print f <open file 'test.txt', mode 'r' at 0x003A6CD8> >>> print f.read() line1 line2 asdf ``` <http://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-objects>
Opening text files from a list within another text file using python
[ "", "python", "text-files", "" ]
I'm having an error when executing the query. I don't know what seems to be the problem. I'm trying to query the DB to to find out the staffs from two or more departments. (ie. Staff\_ID = 1, works under the DEPT\_ID = 4, and 6). There are three tables involved: ``` 1. STAFF 2. DEPARTMENT 3. STAFF_DEPT (contains ID of STAFF and DEPT) ``` Here's what I've done, ``` SELECT sd.STAFF_ID || ' ' || s.STAFF_NAME AS "Staff Name", d.DEPT_NAME AS "Department" FROM STAFF_DEPT sd INNER JOIN STAFF s ON sd.STAFF_ID = s.STAFF_ID INNER JOIN DEPARTMENT d ON sd.DEPT_ID = d.DEPT_ID GROUP BY sd.STAFF_ID HAVING COUNT (sd.STAFF_ID) > 1 ```
Here is your original query:

```
SELECT sd.STAFF_ID || ' ' || s.STAFF_NAME AS "Staff Name", d.DEPT_NAME AS "Department"
FROM STAFF_DEPT sd
INNER JOIN STAFF s
ON sd.STAFF_ID = s.STAFF_ID
INNER JOIN DEPARTMENT d
ON sd.DEPT_ID = d.DEPT_ID
GROUP BY sd.STAFF_ID
HAVING COUNT (sd.STAFF_ID) > 1;
```

The problem with your query is that you are aggregating by `staff_id`, but you have the columns `staff_name` and `dept_name` in the `select` without aggregating them.

You are looking for staff in multiple departments. You can get one row per person with a list of departments using:

```
SELECT sd.STAFF_ID || ' ' || s.STAFF_NAME AS "Staff Name",
       listagg(d.DEPT_NAME, ',') within group (order by DEPT_NAME) AS "Department_List"
FROM STAFF_DEPT sd
INNER JOIN STAFF s
ON sd.STAFF_ID = s.STAFF_ID
INNER JOIN DEPARTMENT d
ON sd.DEPT_ID = d.DEPT_ID
GROUP BY sd.STAFF_ID, s.STAFF_NAME
HAVING COUNT (sd.STAFF_ID) > 1;
```

Notice: I've added `listagg()` in the `select` and `s.staff_name` in the `group by`.

If you want one person/department per row, then use a subquery with an analytic function:

```
select t.staff_id || ' ' || t.staff_name AS "Staff Name", t.dept_name
from (select sd.staff_id, s.staff_name, d.dept_name,
             count(*) over (partition by sd.staff_id, s.staff_name) as NumDepts
      FROM STAFF_DEPT sd
      INNER JOIN STAFF s
      ON sd.STAFF_ID = s.STAFF_ID
      INNER JOIN DEPARTMENT d
      ON sd.DEPT_ID = d.DEPT_ID
     ) t
where NumDepts > 1;
```
Try this one:

```
with temp as
 (select sd.staff_id
    from staff_dept sd
   group by staff_id
  having count(staff_id)>1)
select tp.staff_id||' '||s.Name AS "Staff Name", d.DNAME
  FROM temp tp, staff_dept sd, staff s, dept d
 where tp.staff_id=sd.staff_id
   and sd.staff_id=s.id
   and sd.dept_id=d.deptno;
```

I have stored the staff_ids having a count of more than 1 in a temporary view and used it in the final select query. As I previously mentioned to techdo, you cannot group by `sd.STAFF_ID || ' ' || s.STAFF_NAME, d.DEPT_NAME`, as that expression would always be unique and would always have a count of 1.
Query an employee from two or more departments
[ "", "sql", "database", "oracle", "" ]
``` try: r = requests.get(url, params={'s': thing}) except requests.ConnectionError, e: print(e) ``` Is this correct? Is there a better way to structure this? Will this cover all my bases?
Have a look at the Requests [exception docs](https://requests.readthedocs.io/en/latest/user/quickstart.html#errors-and-exceptions). In short:

> In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a **`ConnectionError`** exception.
>
> In the event of the rare invalid HTTP response, Requests will raise an **`HTTPError`** exception.
>
> If a request times out, a **`Timeout`** exception is raised.
>
> If a request exceeds the configured number of maximum redirections, a **`TooManyRedirects`** exception is raised.
>
> All exceptions that Requests explicitly raises inherit from **`requests.exceptions.RequestException`**.

To answer your question, what you show will *not* cover all of your bases. You'll only catch connection-related errors, not ones that time out.

What to do when you catch the exception is really up to the design of your script/program. Is it acceptable to exit? Can you go on and try again? If the error is catastrophic and you can't go on, then yes, you may abort your program by raising [SystemExit](https://docs.python.org/3/library/exceptions.html#SystemExit) (a nice way to both print an error and call `sys.exit`).

You can either catch the base-class exception, which will handle all cases:

```
try:
    r = requests.get(url, params={'s': thing})
except requests.exceptions.RequestException as e:  # This is the correct syntax
    raise SystemExit(e)
```

Or you can catch them separately and do different things.

```
try:
    r = requests.get(url, params={'s': thing})
except requests.exceptions.Timeout:
    # Maybe set up for a retry, or continue in a retry loop
except requests.exceptions.TooManyRedirects:
    # Tell the user their URL was bad and try a different one
except requests.exceptions.RequestException as e:
    # catastrophic error. bail.
    raise SystemExit(e)
```

---

As [Christian](https://stackoverflow.com/users/456550/christian-long) pointed out:

> If you want http errors (e.g. 401 Unauthorized) to raise exceptions, you can call [`Response.raise_for_status`](https://requests.readthedocs.io/en/latest/api/#requests.Response.raise_for_status). That will raise an `HTTPError`, if the response was an http error.

An example:

```
try:
    r = requests.get('http://www.google.com/nothere')
    r.raise_for_status()
except requests.exceptions.HTTPError as err:
    raise SystemExit(err)
```

Will print:

```
404 Client Error: Not Found for url: http://www.google.com/nothere
```
One additional suggestion to be explicit. It seems best to go from specific to general down the stack of errors to get the desired error to be caught, so the specific ones don't get masked by the general one. ``` url='http://www.google.com/blahblah' try: r = requests.get(url,timeout=3) r.raise_for_status() except requests.exceptions.HTTPError as errh: print ("Http Error:",errh) except requests.exceptions.ConnectionError as errc: print ("Error Connecting:",errc) except requests.exceptions.Timeout as errt: print ("Timeout Error:",errt) except requests.exceptions.RequestException as err: print ("OOps: Something Else",err) Http Error: 404 Client Error: Not Found for url: http://www.google.com/blahblah ``` vs ``` url='http://www.google.com/blahblah' try: r = requests.get(url,timeout=3) r.raise_for_status() except requests.exceptions.RequestException as err: print ("OOps: Something Else",err) except requests.exceptions.HTTPError as errh: print ("Http Error:",errh) except requests.exceptions.ConnectionError as errc: print ("Error Connecting:",errc) except requests.exceptions.Timeout as errt: print ("Timeout Error:",errt) OOps: Something Else 404 Client Error: Not Found for url: http://www.google.com/blahblah ```
Correct way to try/except using Python requests module?
[ "", "python", "exception", "python-requests", "request", "" ]
I have the following many-to-many relationship between employees and workgroups: ``` employees table ----------------- id empgroups table --------------- employee_id workgroup_id workorders table ---------------- workgroup_id ``` I'm trying to write SQL that will list all the workorders for an employee based on the workgroups that employee belongs to. This is my attempt: ``` SELECT wonum, workgroup_id FROM workorders INNER JOIN employees ON workorders.employee_id = employee_id INNER JOIN empgroups ON employees.employee.id = empgroups.employee_id WHERE employee_id = 2 ``` The error I get is: ``` ERROR: schema "employees" does not exist ``` Sorry - the employee has id not employee.id
Isn't this what you're looking for? ``` SELECT wonum, workgroup_id FROM workorders JOIN empgroups ON empgroups.workgroup_id = workorders.workgroup_id JOIN employees ON employees.employee_id = empgroups.employee_id WHERE employees.employee_id = 2 ```
``` SELECT w.wonum, w.workgroup_id FROM workorders w JOIN empgroups e USING (workgroup_id) WHERE e.employee_id = 2 ``` The table `employees` is not needed at all for this query. `USING` shortens the syntax in this case. As do table aliases.
SQL for a many to many relationship using inner joins
[ "", "sql", "postgresql", "" ]
I am currently using Selenium to run instances of Chrome to test web pages. Each time my script runs, a clean instance of Chrome starts up (clean of extensions, bookmarks, browsing history, etc). I was wondering if it's possible to run my script with Chrome extensions. I've tried searching for a Python example, but nothing came up when I googled this.
You should use Chrome WebDriver [options](https://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.chrome.options) to set a list of extensions to load. Here's an example: ``` import os from selenium import webdriver from selenium.webdriver.chrome.options import Options executable_path = "path_to_webdriver" os.environ["webdriver.chrome.driver"] = executable_path chrome_options = Options() chrome_options.add_extension('path_to_extension') driver = webdriver.Chrome(executable_path=executable_path, chrome_options=chrome_options) driver.get("http://stackoverflow.com") driver.quit() ``` Hope that helps.
The leading answer didn't work for me because I didn't realize you had to point the webdriver options toward a `.zip` file. I.e. `chrome_options.add_extension('path_to_extension_dir')` doesn't work. You need: `chrome_options.add_extension('path_to_extension_dir.zip')` After figuring that out and reading a [couple](https://stackoverflow.com/questions/18542535/selenium-chromedriver-cannot-start-google-chrome-with-extension-loaded) [posts](https://stackoverflow.com/questions/34480006/unable-to-run-chrome-selenium-unknown-error-cannot-process-extension-1) on how to create the zip file via the command line and load it into `selenium`, the only way it worked for me was to zip my extension files within the same python script. This actually turned out to be a nice way for automatically updating any changes you might have made to your extension: ``` import os, zipfile from selenium import webdriver # Configure filepaths chrome_exe = "path/to/chromedriver.exe" ext_dir = 'extension' ext_file = 'extension.zip' # Create zipped extension ## Read in your extension files file_names = os.listdir(ext_dir) file_dict = {} for fn in file_names: with open(os.path.join(ext_dir, fn), 'r') as infile: file_dict[fn] = infile.read() ## Save files to zipped archive with zipfile.ZipFile(ext_file, 'w') as zf: for fn, content in file_dict.iteritems(): zf.writestr(fn, content) # Add extension chrome_options = webdriver.ChromeOptions() chrome_options.add_extension(ext_file) # Start driver driver = webdriver.Chrome(executable_path=chrome_exe, chrome_options=chrome_options) driver.get("http://stackoverflow.com") driver.quit() ```
Using Extensions with Selenium (Python)
[ "", "python", "google-chrome", "selenium", "selenium-webdriver", "" ]
How can I make the following Column ``` ------------ MakeDistinct ------------ CAT CAT CAT DOG DOG PIN PIN PIN PIN ``` As Shown Below ``` ------------- ------- AfterDistinct Count ------------- ------- CAT 3 DOG 2 PIN 4 ```
Use `COUNT()` function by grouping `MakeDistinct` column using `GROUP BY` clause. ``` SELECT MakeDistinct AS AfterDistinct , COUNT(MakeDistinct) AS Count FROM MyTable GROUP BY MakeDistinct ``` Output: ``` ╔═══════════════╦═══════╗ ║ AFTERDISTINCT ║ COUNT ║ ╠═══════════════╬═══════╣ ║ CAT ║ 3 ║ ║ DOG ║ 2 ║ ║ PIN ║ 4 ║ ╚═══════════════╩═══════╝ ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!3/0e2f0/1)
Please try: ``` select MakeDistinct AfterDistinct, COUNT(*) [Count] from YourTable group by MakeDistinct ```
How to get number of duplicate Rows of DISTINCT column as another column?
[ "", "sql", "sql-server-2008", "" ]
Is it possible to select the next lower number from a table without using limit. Eg: If my table had `10, 3, 2 , 1` I'm trying to `select * from table where col > 10`. The result I'm expecting is `3`. I know I can use limit 1, but can it be done without that?
Try ``` SELECT MAX(no) no FROM table1 WHERE no < 10 ``` Output: ``` | NO | ------ | 3 | ``` **[SQLFiddle](http://sqlfiddle.com/#!2/f9478/1)**
Try this query ``` SELECT * FROM (SELECT @rid:=@rid+1 as rId, a.* FROM tbl a JOIN (SELECT @rid:=0) b ORDER BY id DESC)tmp WHERE rId=2; ``` ## **[SQL FIDDLE](http://sqlfiddle.com/#!2/556bc/2)**: ``` | RID | ID | TYPE | DETAILS | ------------------------------------ | 2 | 28 | Twitter | @sqlfiddle5 | ``` Another approach ``` select a.* from supportContacts a inner join (select max(id) as id from supportContacts where id in (select id from supportContacts where id not in (select max(id) from supportContacts)))b on a.id=b.id ``` ## **[SQL FIDDLE](http://sqlfiddle.com/#!2/556bc/8)**: ``` | ID | TYPE | DETAILS | ------------------------------ | 28 | Twitter | @sqlfiddle5 | ```
MySql select next lower number without using limit
[ "", "mysql", "sql", "" ]
I have a `user_to_room` table ``` roomID | userID ``` I now want to check if 2 users are in the same room ``` SELECT roomID FROM user_to_room WHERE userID = 1 AND 2 ``` How can I achieve this with a JOIN?
Do a self join: ``` SELECT DISTINCT u1.roomID AS roomID FROM user_to_room u1, user_to_room u2 WHERE u1.userID = 1 AND u2.userID = 2 AND u1.roomID = u2.roomID; ``` See [**demo**](http://www.sqlfiddle.com/#!2/f84de/5).
You can also use the following query: ``` CREATE TABLE user_to_room (roomID INT, userID INT); INSERT INTO user_to_room VALUES (101, 2); INSERT INTO user_to_room VALUES (102, 3); INSERT INTO user_to_room VALUES (102, 4); select distinct roomid from user_to_room a where exists (select top 1 1 from user_to_room where roomid = a.roomid and userid =3) and userid = 4 ``` check the [demo](http://www.sqlfiddle.com/#!3/46afa/3) here
Check if 2 users are in the same room with mySQL
[ "", "mysql", "sql", "" ]
After parsing some webpage with utf-8 coding, I realize that I obtain characters that I can't manipulate, though they are readable by means of print. ``` >> print data A Deuce >> data u'\uff21\u3000\uff24\uff45\uff55\uff43\uff45' ``` How can I get this into a decent coding using Python? I would like to obtain ``` >> my_variable 'A Deuce' ``` (I mean being able to cast that text in a variable as a "regular" string) I saw several solutions related to that topic but did not find a relevant answer (mainly based on encoding/decoding in other charsets)
With a little help from [this answer](https://stackoverflow.com/a/5863893/1126841): ``` >>> table = dict([(x + 0xFF00 - 0x20, unichr(x)) for x in xrange(0x21, 0x7F)] + [(0x3000, unichr(0x20))]) >>> data.translate(table) u'A Deuce' ``` The `translate` method takes a dictionary that maps one Unicode code point to another. In this case, it maps the full-width Latin alphabet (which is essentially part of the ASCII character set shifted up to the range 0xFF01-0xFF5E) to the "normal" ASCII character set. For example, 0xFF21 (full-width A) maps to 0x41 (ASCII A), 0xFF22 (full-width B) maps to 0x42 (ASCII B), etc.
This functionality is built into the [unicodedata](http://docs.python.org/3/library/unicodedata.html) module: ``` >>> import unicodedata >>> unicodedata.normalize('NFKC', u'\uff21\u3000\uff24\uff45\uff55\uff43\uff45') u'A Deuce' ``` NFKC normalization maps the full-width letters (and the ideographic space U+3000) to their ordinary ASCII equivalents.
Convert exotic charset to string with python
[ "", "python", "string", "character-encoding", "" ]
Hello i was wondering is there a way to round up to a multiple of 5 in SQL? For example this only rounds up to ten if i set @Order = 8 or 9 but when it's 7 or 6 it rounds down to 5, i need it to round up to 10 when it's 6 or 7. ``` declare @Order int set @Order = 7 select round(cast(@Order as float)/cast(5 as float),0)*5 ``` I need * `@Order = 1,2,3,4` to round up to 5 * `@Order = 6,7,8,9` to round up to 10 * `@Order = 11,12,13,14` to round up to 15
Use the CEILING function ``` SELECT CEILING(@Order / 5.0) * 5 ```
``` SELECT CEILING(@Order / 5.0) * 5 ```
sql rounding up to a multiple of 5?
[ "", "sql", "sql-server", "t-sql", "" ]
In python you can write an if statement as follows ``` var = True if var: print 'I\'m here' ``` is there any way to do the opposite without the ==, eg ``` var = False if !var: print 'learnt stuff' ```
Use `not` ``` var = False if not var: print 'learnt stuff' ```
Since Python also evaluates `None` as `False` during such a check, a more precise answer is: ``` var = False if var is False: print('learnt stuff') ``` This prevents potentially unwanted behaviour such as: ``` var = [] # or None if not var: print('learnt stuff') # printed, which may or may not be what you want ``` But if you want to catch all cases where `var` evaluates to `False`, then using the logical `not` keyword is the right thing to do.
if var == False
[ "", "python", "" ]
I am getting table rows and table data (with HTML tags) from SQL using 'FOR XML'. Is there a way I could assign CSS classes to the html tags in SQL? What I am currently getting: `<tr><td>Name</td><td>Value</td></tr>` SQL query: ``` SELECT (SELECT [Name] as [td] FOR XML PATH(''), type), (SELECT [Value] as [td] FOR XML PATH(''), type) FROM table FOR XML PATH('tr') ``` Desired output: `<tr class="test1"> <td class="test2">Name</td> <td class="test3">Value</td> </tr>`
I know I am answering my own question, thought it may help someone else. I'm adding class as an attribute to XML nodes which is giving me the desired output. ``` SELECT 'test1' AS [@class] , ( SELECT 'test2' as [@class] , (SELECT 'Name' FOR XML PATH('')) FOR XML PATH('td'), type) ,(SELECT 'test3' as [@class] , (SELECT 'Value' FOR XML PATH('')) FOR XML PATH('td'), type) FOR XML PATH('tr'), type ``` Output: `<tr class="test1"><td class="test2">Name</td><td class="test3">Value</td></tr>`
Using T-SQL, you can specify an attribute path in the SQL. [Relevant MSDN page](http://msdn.microsoft.com/en-us/library/bb522647.aspx) And you can specify field content to be data in the current row's element with the correct name (note that attribute-centric columns must come before the element content): ``` SELECT 'test1' as [@class], (SELECT 'test2' as [@class], [Name] as [*] FOR XML PATH('td'), type), (SELECT 'test3' as [@class], [Value] as [*] FOR XML PATH('td'), type) FROM table FOR XML PATH('tr') ``` If at all possible, though, you should have SQL Server produce XML data for you, and then translate it into the HTML you need by way of an XSL transformation outside of the server. You'll have cleaner queries, a little less load on your server, and far better separation of concerns if you do. --- **T-SQL:** ``` SELECT Name , Value FROM table FOR XML AUTO ``` Gets XML like ``` <table name="name" value="value" /> ``` **XSLT:** ``` <xsl:template match="table"> <tr class="test1"> <td class="test2"> <xsl:value-of select="@name" /> </td> <td class="test3"> <xsl:value-of select="@value" /> </td> </tr> </xsl:template> ``` Results in (X)HTML like ``` <tr class="test1"> <td class="test2">Name</td> <td class="test3">Value</td> </tr> ```
Assign CSS class to HTML tags generated with SQL 'FOR XML'
[ "", "css", "sql", "xml", "xml-parsing", "for-xml", "" ]
Sorry for the bad title, I couldn't think of a way to describe it. I have a SQL table that contains utility poles and their construction spec and the quantity. I need to create a query that will list each construction spec and the quantity. For example: ``` Table- Poles Type, Spec, Quanity Record1- Pole, A5-1, 4 Record2- Pole, C4-1, 2 Record3- Pole, A5-4, 3 Record4- Pole, C4-1, 3 ``` And I need a query that would return: ``` Table- Spec Totals Spec, Quantity A5-1, 4 C4-1, 5 A5-4, 3 ``` Thanks for any help.
`GROUP BY` allows you to use various [aggregate functions](http://dev.mysql.com/doc/refman/5.0/en///group-by-functions.html). In this case, `SUM()`. ``` SELECT Spec, SUM(Quantity) AS Quantity FROM Poles GROUP BY Spec ```
Sounds like a group by statement would work well in this scenario. ``` SELECT Spec, SUM(Quantity) FROM Poles GROUP BY Spec ```
How do I list total number of records with similar name in SQL?
[ "", "sql", "syntax", "" ]
I'm trying to fit the distribution of some experimental values with a custom probability density function. Obviously, the integral of the resulting function should always be equal to 1, but the results of simple scipy.optimize.curve\_fit(function, dataBincenters, dataCounts) never satisfy this condition. What is the best way to solve this problem?
You can define your own residuals function, including a penalization parameter, like detailed in the code below, where it is known beforehand that the integral along the interval must be `2.`. If you test without the penalization you will see that what your are getting is the conventional `curve_fit`: ![enter image description here](https://i.stack.imgur.com/tZ0gX.png) ``` import matplotlib.pyplot as plt import scipy from scipy.optimize import curve_fit, minimize, leastsq from scipy.integrate import quad from scipy import pi, sin x = scipy.linspace(0, pi, 100) y = scipy.sin(x) + (0. + scipy.rand(len(x))*0.4) def func1(x, a0, a1, a2, a3): return a0 + a1*x + a2*x**2 + a3*x**3 # here you include the penalization factor def residuals(p, x, y): integral = quad(func1, 0, pi, args=(p[0], p[1], p[2], p[3]))[0] penalization = abs(2.-integral)*10000 return y - func1(x, p[0], p[1], p[2], p[3]) - penalization popt1, pcov1 = curve_fit(func1, x, y) popt2, pcov2 = leastsq(func=residuals, x0=(1., 1., 1., 1.), args=(x, y)) y_fit1 = func1(x, *popt1) y_fit2 = func1(x, *popt2) plt.scatter(x, y, marker='.') plt.plot(x, y_fit1, color='g', label='curve_fit') plt.plot(x, y_fit2, color='y', label='constrained') plt.legend() plt.xlim(-0.1, 3.5) plt.ylim(0, 1.4) print('Exact integral:', quad(sin, 0, pi)[0]) print('Approx integral1:', quad(func1, 0, pi, args=(popt1[0], popt1[1], popt1[2], popt1[3]))[0]) print('Approx integral2:', quad(func1, 0, pi, args=(popt2[0], popt2[1], popt2[2], popt2[3]))[0]) plt.show() #Exact integral: 2.0 #Approx integral1: 2.60068579748 #Approx integral2: 2.00001911981 ``` Other related questions: * [SciPy LeastSq Goodness of Fit Estimator](https://stackoverflow.com/questions/7588371/scipy-leastsq-goodness-of-fit-estimator)
Here is an almost-identical snippet which makes only use of `curve_fit`. ``` import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt import scipy.integrate as integr x = np.linspace(0, np.pi, 100) y = np.sin(x) + (0. + np.random.rand(len(x))*0.4) def Func(x, a0, a1, a2, a3): return a0 + a1*x + a2*x**2 + a3*x**3 # modified function definition with Penalization def FuncPen(x, a0, a1, a2, a3): integral = integr.quad( Func, 0, np.pi, args=(a0,a1,a2,a3))[0] penalization = abs(2.-integral)*10000 return a0 + a1*x + a2*x**2 + a3*x**3 + penalization popt1, pcov1 = opt.curve_fit( Func, x, y ) popt2, pcov2 = opt.curve_fit( FuncPen, x, y ) y_fit1 = Func(x, *popt1) y_fit2 = Func(x, *popt2) plt.scatter(x,y, marker='.') plt.plot(x,y_fit2, color='y', label='constrained') plt.plot(x,y_fit1, color='g', label='curve_fit') plt.legend(); plt.xlim(-0.1,3.5); plt.ylim(0,1.4) print 'Exact integral:',integr.quad(np.sin ,0,np.pi)[0] print 'Approx integral1:',integr.quad(Func,0,np.pi,args=(popt1[0],popt1[1], popt1[2],popt1[3]))[0] print 'Approx integral2:',integr.quad(Func,0,np.pi,args=(popt2[0],popt2[1], popt2[2],popt2[3]))[0] plt.show() #Exact integral: 2.0 #Approx integral1: 2.66485028754 #Approx integral2: 2.00002116217 ``` [![enter image description here](https://i.stack.imgur.com/qtVfX.png)](https://i.stack.imgur.com/qtVfX.png)
How do I put a constraint on SciPy curve fit?
[ "", "python", "optimization", "scipy", "curve-fitting", "" ]
I understand that this question has been asked several times but after reading them, and making the suggested fixes, I'm still stumped. My project structure is as follows: ``` Project | src | root - has __init__.py | nested - has __init__.py | tests - has __init__.py | utilities - has __init__.py | services - has __init__.py ``` I've successfully run a unittest regression class from Eclipse without any issues. As soon as I attempted to run the same class from the command-line (as other users who will be running the suite do not have access to Eclipse) I receive the error: ``` ImportError: No module named 'root' ``` As you can see from above, the module root has an `__init__.py` All `__init__.py` modules are completely empty. And assistance would be gratefully received.
Try adding a `sys.path.append` to the list of your imports. ``` import sys sys.path.append("/Project/src/") import root import root.nested.tests ```
Just a note for anyone who arrives at this issue: using what Gus E showed in the accepted answer and some further experience, I've found the following to be very useful to ensure that I can run my programs from the command-line on my machine or on another colleague's should the need arise. ``` import sys import os sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) ``` When I execute the 'main' method, which is located in the 'nested' directory, it ensures that the 'src' directory is added to the PYTHONPATH at the time of execution, meaning all following imports will not throw an error. Obviously, you need to adjust the number of ".." arguments to the `os.path.join()` method depending on where in your project the main method is executed from.
Receiving Import Error: No Module named ***, but has __init__.py
[ "", "python", "python-3.x", "importerror", "" ]
I want to generate a list in python as follows - ``` [1, 1, 2, 4, 3, 9, 4, 16, 5, 25 .....] ``` You would have figured out, it is nothing but `n, n*n` I tried writing such a list comprehension in python as follows - ``` lst_gen = [i, i*i for i in range(1, 10)] ``` But doing this, gives a syntax error. What would be a good way to generate the above list via list comprehension?
Use [`itertools.chain.from_iterable`](http://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable): ``` >>> from itertools import chain >>> list(chain.from_iterable((i, i**2) for i in xrange(1, 6))) [1, 1, 2, 4, 3, 9, 4, 16, 5, 25] ``` Or you can also use a [generator](http://wiki.python.org/moin/Generators) function: ``` >>> def solve(n): ... for i in xrange(1,n+1): ... yield i ... yield i**2 >>> list(solve(5)) [1, 1, 2, 4, 3, 9, 4, 16, 5, 25] ```
The question is old, but just for the curious reader, I propose another possibility: As stated in the first post, you can easily make a pair (i, i\*\*2) from a list of numbers. Then you want to flatten these pairs. So just add the flattening operation in your comprehension. ``` [x for i in range(1, 10) for x in (i,i**2)] ```
python list comprehension to produce two values in one iteration
[ "", "python", "list", "list-comprehension", "" ]
The title pretty much says it, although it does not need to be specific to a cmd, just closing an application in general. I have seen ``` os.system(taskkill blah blah) ``` this does not actually close the window but rather just ends the cmd inside the window; I would like to actually close the window itself. EDIT: Could someone please give me a specific line of code that would close a cmd window. The name of the cmd window when moused over is ``` C:\Windows\system32\cmd.exe ```
Here is an approach that uses the [Python for Windows extensions (pywin32)](http://sourceforge.net/projects/pywin32/) to find the PIDs and [taskill](http://technet.microsoft.com/en-us/library/bb491009.aspx) to end the process (based on this [example](http://mail.python.org/pipermail/python-win32/2003-December/001482.html)). I went this way to give you access to some extra running information in case you didn't want to indiscriminately kill any cmd.exe: ``` import os from win32com.client import GetObject WMI = GetObject('winmgmts:') processes = WMI.InstancesOf('Win32_Process') for p in WMI.ExecQuery('select * from Win32_Process where Name="cmd.exe"'): print "Killing PID:", p.Properties_('ProcessId').Value os.system("taskkill /pid "+str(p.Properties_('ProcessId').Value)) ``` Now inside that for loop you could peek at some other information about each running process (or even look for child processes that depend on it (like running programs inside each cmd.exe). An example of how to read each process property might look like this: ``` from win32com.client import GetObject WMI = GetObject('winmgmts:') processes = WMI.InstancesOf('Win32_Process') for p in WMI.ExecQuery('select * from Win32_Process where Name="cmd.exe"'): print "--running cmd.exe---" for prop in [prop.Name for prop in p.Properties_]: print prop,"=",p.Properties_(prop).Value ```
something like this? ``` import sys sys.exit() ``` or easier ... ``` raise SystemExit ``` If that's not what you're looking for, tell me. Also, you can just save the file with a .pyw extension, which doesn't open the cmd window at all.
How to close a cmd window using python 2.7
[ "", "python", "python-2.7", "" ]
This is my Script ``` SELECT a.PLUCODE as [Code],a.ITEMNAME as [Item], sum(a.janQTY) AS [2012 QTY] ,sum(a.janAmt) AS [2012 AMT], sum(b.janQTY) AS [2013 QTY] ,sum(b.janAmt) AS [2013 AMT] FROM 2012_Table a LEFT OUTER JOIN 2013_Table b ON a.PLUCODE=b.PLUCODE WHERE a.CATEGORY='Category1' GROUP BY a.PLUCODE,a.ITEMNAME ORDER BY a.PLUCODE ``` OUTPUT is ``` Code Item 2012 QTY 2012 AMT 2013 QTY 2013 AMT 0312 ItemOne 67 837,500 21 262,500 ``` My output in [2012 QTY] and [2012 AMT] is correct, but [2013 QTY] and [2013 AMT] are wrong: it should be 1 for the qty and only 12,500 for the amount. Can somebody help me with the correct script for this?
You are getting a Cartesian result. 21 entries @ 12,500 = 262,500. You probably need to prequery each on its own merit, then left-join. I am going on an assumption that the 2013 table is the same structure as 2012. ``` select PQ1.PLUCode, PQ1.Item, PQ1.[2012 QTY], PQ1.[2012 AMT], PQ2.[2013 QTY], PQ2.[2013 AMT] from ( select a.PLUCode, a.ItemName as [Item], sum( a.JanQty ) as [2012 QTY], sum( a.JanAmt ) as [2012 AMT] from 2012_Table a where a.Category = 'Category1' group by a.PLUCode, a.ItemName order by a.PLUCode ) PQ1 LEFT JOIN ( select b.PLUCode, b.ItemName as [Item], sum( b.JanQty ) as [2013 QTY], sum( b.JanAmt ) as [2013 AMT] from 2013_Table b where b.Category = 'Category1' group by b.PLUCode, b.ItemName order by b.PLUCode ) PQ2 ON PQ1.PLUCode = PQ2.PLUCode ```
This should work better: ``` SELECT a.PLUCODE as [Code], a.ITEMNAME as [Item], sum(Case When Yr=2012 Then a.janQTY Else 0 End) AS [2012 QTY], sum(Case When Yr=2012 Then a.janAmt Else 0 End) AS [2012 AMT], sum(Case When Yr=2013 Then a.janQTY Else 0 End) AS [2013 QTY], sum(Case When Yr=2013 Then a.janAmt Else 0 End) AS [2013 AMT], FROM ( Select 2012 As Yr, * From 2012_Table UNION ALL Select 2013 As Yr, * From 2013_Table ) a WHERE a.CATEGORY='Category1' GROUP BY a.PLUCODE,a.ITEMNAME ORDER BY a.PLUCODE ```
Wrong output when using Left Outer Join
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a Python application written on the Pyramid framework. We *really* take advantage of logging in it (for debugging), using the standard library's `logging` module. As we profile stuff on production, it seems that there is quite a bit of overhead from our logging activity. All the string formatting and loops add up. I'd love to remove it, but we can't -- we do actually need to keep it in for testing and will sometimes need to debug on the production environment. I'm wondering if anyone has effective strategies for minimizing logging as needed, so this code could get "optimized away" during execution on our production environment and simply not run. For example, under mod\_perl, the compiler would "optimize away" statements run under False constants in pseudocode... (I haven't touched Perl in a long time!) ``` use constant DEBUG => False ; if ( DEBUG ) { log.debug("stuff here " + string ); } ``` or ``` use constant DEBUG => False ; DEBUG && log.debug("stuff here " + string ); ``` Under those scenarios, the call to log.debug and even the string interpolation would never happen. Can anyone recommend an effective method of mimicking that behavior under Python?
Use `__debug__`. This flag is set to `False` when Python is run with the `-O` command line flag, and Python will furthermore *optimize out debug code at compile time.* I [wrote a blog post on it](http://www.engyrus.com/2013/03/idtkap-4-debug-and-o.html) a couple months ago.
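To make that concrete, here is a minimal sketch of how a `__debug__` guard behaves (the function name and message are made up for illustration, not from the blog post):

```python
def diagnostics():
    if __debug__:
        # When Python runs with -O, __debug__ is False and this whole
        # block is removed at compile time, so the string formatting
        # below costs nothing in optimized/production runs.
        return "debug: %s" % ("expensive detail",)
    return None

# Under a normal (non -O) interpreter, __debug__ is True:
print(__debug__)        # True
print(diagnostics())    # debug: expensive detail
```

Running the same file with `python -O` flips `__debug__` to `False`, and because CPython folds the constant at compile time, the guarded branch is simply absent from the bytecode (you can check with `dis.dis(diagnostics)`).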
Don't use concatenation when you can use `log.debug('stuff here %s', string)`; the logging module postpones interpolation until actually formatting the string when logging. If the `DEBUG` log level has been disabled, no interpolation takes place. You can also test the logging level to avoid gathering expensive logging information unless needed: ``` if logger.isEnabledFor(logging.DEBUG): logger.debug('Message with %s, %s', expensive_func1(), expensive_func2()) ``` See the [optimization section of the Logging HOWTO](http://docs.python.org/2/howto/logging.html#optimization).
Optimizing Python logging code
[ "", "python", "logging", "pyramid", "" ]
I need a little bit of help here. I am a new python coder. I need a lot of help. So, I want to add the different variables in two dictionaries. An example is: ``` x = {'a':1, 'b':2} y = {'b':1, 'c':2} ``` I want to replace these values such that it looks like: ``` x = {'a':1, 'b':2, 'c':0} y = {'a':0, 'b':1, 'c':2} ```
``` >>> x = {'a':1, 'b':2} >>> y = {'b':1, 'c':2} >>> for k in y: ... x.setdefault(k, 0) ... 0 2 >>> for k in x: ... y.setdefault(k, 0) ... 0 2 1 >>> x {'a': 1, 'c': 0, 'b': 2} >>> y {'a': 0, 'c': 2, 'b': 1} ``` `dict.setdefault(k, 0)` inserts the key with a value of 0 only if it is missing, so values that are already present are left untouched.
``` >>> x = {'a':1, 'b':2} >>> y = {'b':1, 'c':2} >>> keys = x.viewkeys() | y.viewkeys() #returns union of keys in x and y >>> x = {k : x.get(k,0) for k in keys} >>> y = {k : y.get(k,0) for k in keys} >>> x {'a': 1, 'c': 0, 'b': 2} >>> y {'a': 0, 'c': 2, 'b': 1} ``` change the dict in-place: ``` >>> x = {'a':1, 'b':2} >>> y = {'b':1, 'c':2} >>> diff_x = y.viewkeys() - x.viewkeys() >>> diff_y = x.viewkeys() - y.viewkeys() #if you're using a mutable object as value instead of 0, #then you've to use a dict comprehension instead of dict.fromkeys. >>> x.update(dict.fromkeys(diff_x,0)) >>> y.update(dict.fromkeys(diff_y,0)) >>> x {'a': 1, 'c': 0, 'b': 2} >>> y {'a': 0, 'c': 2, 'b': 1} ```
How do I add two python dictionaries such that the not-equal things become zero?
[ "", "python", "python-3.x", "" ]
I have a sproc that has a couple inserts and updates. Only the amount of a specific update needs to be returned. I have tried returning or selecting @@ROWCOUNT but because of other selects in the sproc, it is buried and is difficult to access. Having NOCOUNT ON for the whole proc returns the counts for other inserts and updates as well, obviously. I'm wondering if I can do something like: ``` SET NOCOUNT ON -- some inserts, selects, updates SET NOCOUNT OFF -- The update I care about SET NOCOUNT ON ``` and receive just the count from that update? Thanks in advance.
Why yes, yes you can. That's exactly what SET NOCOUNT is for. (Did you try this in MS SQL server management studio, or your test environment?)
You can keep the result of the @@ROWCOUNT in variable it does not matter that other selects ``` -- some inserts, selects, updates -- The update I care about DECLARE @rowCount_updICareAbout INT SET @rowCount_updICareAbout = @@ROWCOUNT SELECT @rowCount_updICareAbout ```
Can a single statement be surrounded by SET NOCOUNT in SQL Server 2005?
[ "", "sql", "sql-server-2005", "nocount", "" ]
I have a table like this: ``` ManufacturerID ProductID Price Region ============================================== 100 1 12.00 A 100 2 20.00 A 100 3 25.00 A 100 4 30.00 B 101 1 15.00 A 101 2 20.00 A 101 4 30.00 B ``` I want to get a query result that compares 2 different manufacturers to look like this: ``` ProductID Price1 Price2 Region ========================================================================= 1 12.00 15.00 A 2 20.00 20.00 A 3 25.00 null A 4 30.00 30.00 B ``` I try to use left join on the same table: ``` SELECT ProductID, a.Price AS Price1, b.Price AS Price2, a.Region FROM table1 a LEFT JOIN table1 b ON a.ProductID = b.ProductID AND a.ManufacturerID = 100 WHERE b.ManufacturerID = 101 ``` but this doesn't give me the missing product (ID:4) from Manufacturer 101. What am I missing?
Since you can't know in advance which product will be missing, for example manufacturer A might be missing product 3 and manufacture B missing product 8, you need a `FULL OUTER` join, if you want to do this with a join (Gordon provided a different way to go). I assumed that the `(ManufacturerID ,ProductID, Region)` combination has a `UNIQUE` constraint: ``` SELECT COALESCE(a.ProductID, b.ProductID) AS ProductID, a.Price AS Price1, b.Price AS Price2, COALESCE(a.Region, b.Region) AS Region FROM ( SELECT ProductID, Price, Region FROM table1 WHERE ManufacturerID = 100 ) AS a FULL JOIN ( SELECT ProductID, Price, Region FROM table1 WHERE ManufacturerID = 101 ) AS b ON a.ProductID = b.ProductID AND a.Region = b.Region -- not sure if you need this line ; ``` Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!3/7df97/18)** (thnx @Thomas)
You have `a.ManufacturerID` and `b.ManufacturerID` the wrong way round in the `on` and `where` clauses - try: ``` SELECT ProductID, a.Price AS Price1, b.Price AS Price2, a.Region FROM table1 a LEFT JOIN table1 b ON a.ProductID = b.ProductID AND b.ManufacturerID = 101 WHERE a.ManufacturerID = 100 ```
SQL Self Left Join Challenge
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have the Android emulator/SDK installed on my computer, and I'm trying to run a simple Python script, but it fails on the 'import os' line (which should be standard!). Here's my script:

```
import os
print os.environ['PATH']
```

It works fine when I run it against the actual Python executable as

```
python test.py
```

but when I do

```
monkeyrunner.bat test.py
```

I get the error listed in the title of this question. I've uninstalled/reinstalled Python several times, with no luck. Tried adding a PYTHONPATH ENV variable, no luck. Set the Python install dir to be in the PATH ENV variable. I'm kind of at my wits' end. I think monkeyrunner runs on Jython, but I don't know if I can manage the Jython install that comes with the SDK.
The solution of this problem (and the problem itself explained) can be found at <http://dtmilano.blogspot.ca/2013/05/monkeyrunner-importerror-no-module.html>. Basically, you have to replace `jython-2.5.3.jar` by `jython-standalone-2.5.3.jar`, at least until **Android SDK Tools Rev. 22.0.1** is out.
The issue is that the latest versions of the Android SDK include jython-2.5.3, which does not have `os` and the other importable libraries built in. I found an old copy of the SDK and used the old JAR, which appears to have those libraries embedded in it.
Trying to run simple monkeyrunner python script, 'importerror no module named os'
[ "", "android", "python", "jython", "monkeyrunner", "" ]
Say I have an array/list of things I want to compare. In languages I am more familiar with, I would do something like

```
for (int i = 0; i < mylist.size(); i++)
    for (int j = i + 1; j < mylist.size(); j++)
        compare(mylist[i], mylist[j])
```

This ensures we only compare each pair once. For some context, I am doing collision detection on a bunch of objects contained in the list. For each collision detected, a small 'collision' object describing the collision is appended to a list, which another routine then loops through resolving each collision (depending on the nature of the two colliding objects). Obviously, I only want to report each collision once. Now, what is the pythonic way of doing this, since Python favors using iterators rather than looping over indices? I had the following (buggy) code:

```
for this in mylist:
    for that in mylist:
        compare(this, that)
```

But this clearly picks up each collision twice, which led to some strange behavior when trying to resolve them. So what is the pythonic solution here?
Of course this will generate each pair twice as each `for` loop will go through every item of the list. You could use some [itertools](http://docs.python.org/3/library/itertools.html) magic here to generate all possible combinations: ``` import itertools for a, b in itertools.combinations(mylist, 2): compare(a, b) ``` [`itertools.combinations`](http://docs.python.org/3/library/itertools.html#itertools.combinations) will pair each element with each other element in the iterable, but only once. --- You could still write this using index-based item access, equivalent to what you are used to, using nested `for` loops: ``` for i in range(len(mylist)): for j in range(i + 1, len(mylist)): compare(mylist[i], mylist[j]) ``` Of course this may not look as nice and pythonic but sometimes this is still the easiest and most comprehensible solution, so you should not shy away from solving problems like that.
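To make the accepted approach concrete, here is a small self-contained sketch (the list contents and the recording of pairs are illustrative, standing in for the collision check) showing that every unordered pair is visited exactly once:

```python
import itertools

mylist = ["a", "b", "c", "d"]
seen = []  # records each compared pair, standing in for collision detection

for this, that in itertools.combinations(mylist, 2):
    seen.append((this, that))  # each unordered pair appears exactly once

# 4 items -> C(4, 2) = 6 pairs, no repeats, no (x, x) self-pairs
print(seen)
```

Note that `combinations` yields each pair in only one order, so a symmetric `compare(this, that)` is never called twice for the same pair.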
Use `itertools.combinations(mylist, 2)` ``` mylist = range(5) for x,y in itertools.combinations(mylist, 2): print x,y 0 1 0 2 0 3 0 4 1 2 1 3 1 4 2 3 2 4 3 4 ```
How to compare each item in a list with the rest, only once?
[ "", "python", "" ]
I have SQL Server 2008 databases and I would like to know which tables were updated in the last week, i.e. tables which have new rows, updated existing rows, or deleted rows. Is there any way to do this for an existing database?
Try this one - ``` SELECT [db_name] = d.name , [table_name] = SCHEMA_NAME(o.[schema_id]) + '.' + o.name , s.last_user_update FROM sys.dm_db_index_usage_stats s JOIN sys.databases d ON s.database_id = d.database_id JOIN sys.objects o ON s.[object_id] = o.[object_id] WHERE o.[type] = 'U' AND s.last_user_update IS NOT NULL AND s.last_user_update BETWEEN DATEADD(wk, -1, GETDATE()) AND GETDATE() ```
Try [Change Data Capture](http://msdn.microsoft.com/en-us/library/bb500244%28v=sql.100%29.aspx). It's a good way to keep track of the changes on the DB. You have to enable the feature on one or more DBs, then on one or more tables (it's a table-level feature, so you will do it for every table you need).

---

**Enable CDC on the database.** Let's assume we want to enable CDC for the AdventureWorks database. We must run the following SP to be sure this feature will work:

```
USE AdventureWorks
GO
EXEC sys.sp_cdc_enable_db
GO
```

As a result, we'll find a new schema called *cdc* and several tables automatically added:

* **cdc.captured\_columns** – This table returns the list of captured columns.
* **cdc.change\_tables** – This table returns the list of all the tables which are enabled for capture.
* **cdc.ddl\_history** – This table contains the history of all the DDL changes since change data capture was enabled.
* **cdc.index\_columns** – This table contains the indexes associated with the change tables.
* **cdc.lsn\_time\_mapping** – This table maps LSN numbers to times.

---

**Enable CDC on the table.** After having enabled CDC on the desired DB(s), it's time to check whether there are tables with this feature on:

```
USE AdventureWorks
GO
SELECT [name], is_tracked_by_cdc
FROM sys.tables
GO
```

If not, we can enable change capture for the HumanResources.Shift table with the following procedure:

```
USE AdventureWorks
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'HumanResources',
@source_name = N'Shift',
@role_name = NULL
GO
```

Be sure your *SQL Agent* is up and running, because CDC will create a job (probably cdc.AdventureWorks\_capture) to catch the modifications. If all procedures are correctly executed, we'll find a new table called cdc.HumanResources\_Shift\_CT among the system tables, containing all the HumanResources.Shift changes.

**Note**: be careful with the @role\_name parameter; it controls access to the change data.
How to list tables where data was inserted, deleted or updated in the last week
[ "", "sql", "sql-server", "" ]
I'm a Python learner. If I have lines of text in a file that look like this

> "Y:\DATA\00001\SERVER\DATA.TXT" "V:\DATA2\00002\SERVER2\DATA2.TXT"

can I split the lines around the inverted commas? The only constant would be their position in the file relative to the data lines themselves. The data lines could range from 10 to 100+ characters (they'll be nested network folders). I cannot see anything other than those quote marks to split on, but my lack of Python knowledge is making this difficult. I've tried

```
optfile=line.split("")
```

and other variations, but keep getting `ValueError: empty separator`. I can see why it's saying that; I just don't know how to change it. Any help is, as always, very appreciated. Many thanks.
Finding all regular expression matches will do it:

```
input = r'"Y:\DATA\00001\SERVER\DATA.TXT" "V:\DATA2\00002\SERVER2\DATA2.TXT"'
re.findall('".+?"', input)  # or '"[^"]+"'
```

This will return the list of file names (still wrapped in quotes):

```
['"Y:\\DATA\\00001\\SERVER\\DATA.TXT"', '"V:\\DATA2\\00002\\SERVER2\\DATA2.TXT"']
```

To get the file names without quotes use:

```
[f[1:-1] for f in re.findall('".+?"', input)]
```

or use `re.finditer`:

```
[f.group(1) for f in re.finditer('"(.+?)"', input)]
```
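As a quick, runnable check of the capture-group pattern from the answer above (using the sample line from the question; `line` and `paths` are illustrative names):

```python
import re

line = r'"Y:\DATA\00001\SERVER\DATA.TXT" "V:\DATA2\00002\SERVER2\DATA2.TXT"'

# the capture group (...) strips the surrounding quotes in one step
paths = re.findall(r'"(.+?)"', line)
print(paths)
```

The lazy `.+?` stops at the first closing quote, so adjacent quoted paths are not merged into one match.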
You must escape the `"`: ``` input.split("\"") ``` results in ``` ['\n', 'Y:\\DATA\x0001\\SERVER\\DATA.TXT', ' ', 'V:\\DATA2\x0002\\SERVER2\\DATA2.TXT', '\n'] ``` To drop the resulting empty lines: ``` [line for line in [line.strip() for line in input.split("\"")] if line] ``` results in ``` ['Y:\\DATA\x0001\\SERVER\\DATA.TXT', 'V:\\DATA2\x0002\\SERVER2\\DATA2.TXT'] ```
Python split string on quotes
[ "", "python", "python-2.7", "" ]
I have some string X and I wish to remove semicolons, periods, commas, colons, etc., all in one go. Is there a way to do this that doesn't require a big chain of `.replace(somechar, "")` calls?
You can use the [`translate`](http://docs.python.org/2/library/stdtypes.html#str.translate) method with a first argument of `None`: ``` string2 = string1.translate(None, ";.,:") ``` Alternatively, you can use the [`filter` function](http://docs.python.org/2/library/functions.html#filter): ``` string2 = filter(lambda x: x not in ";,.:", string1) ``` Note that both of these options only work for non-Unicode strings and only in Python 2.
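The answer above targets Python 2. On Python 3, `str.translate` no longer accepts a `deletechars` argument; a sketch of the Python 3 equivalents (the sample string is made up) uses `str.maketrans` with a third argument listing the characters to delete:

```python
string1 = "a;b,c.d:e"

# Python 3: characters in maketrans' third argument are mapped to None, i.e. deleted
string2 = string1.translate(str.maketrans("", "", ";.,:"))
print(string2)  # abcde

# the filter() variant needs an explicit join on Python 3, since filter returns an iterator
string3 = "".join(filter(lambda ch: ch not in ";.,:", string1))
```

Both forms produce the same cleaned string.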
You can use `re.sub` to pattern match and replace. The following replaces `h` and `i` only with empty strings: ``` In [1]: s = 'byehibyehbyei' In [1]: re.sub('[hi]', '', s) Out[1]: 'byebyebye' ``` Don't forget to `import re`.
Removing many types of chars from a Python string
[ "", "python", "regex", "string", "" ]
I want to be able to define my settings for static/media files using python to get the paths so I don't need different settings on my dev machine and my server. So I have these settings; ``` import os from unipath import Path ### PATH CONFIGURATION # Absolute filesystem path to the top-level project folder SITE_ROOT = Path(__file__).ancestor(3) ### MEDIA CONFIGURATION MEDIA_ROOT = SITE_ROOT.child('media') MEDIA_URL = '/media/' ### END MEDIA CONFIGURATION ### STATIC CONFIGURATION STATIC_ROOT = SITE_ROOT.child('static') STATIC_URL = '/static/' # Additional locations of static files STATICFILES_DIRS = os.path.join(SITE_ROOT, 'static'), ``` My problem is that locally it won't load the static files and the terminal says that `STATICFILES_DIRS` should not contain the `STATICFILES_ROOT`. Is it possible to get Python to load the paths like this or am I wasting my time?
There's nothing wrong with your code per se, it's just that the point of the [`staticfiles`](https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/) app is to copy the files from the directories specified in `STATICFILES_DIRS` into the directory specified in `STATIC_ROOT`, so it doesn't make much sense to include the `STATIC_ROOT` directory in the `STATICFILES_DIRS` setting. Unless you're actually using the `staticfiles` app with `./manage.py collectstatic`, you may as well just leave the `STATICFILES_DIRS` setting empty, i.e. just change... ``` STATICFILES_DIRS = os.path.join(SITE_ROOT, 'static'), ``` ...to... ``` STATICFILES_DIRS = () ```
Do like this: ``` import os settings_dir = os.path.dirname(__file__) PROJECT_ROOT = os.path.abspath(os.path.dirname(settings_dir)) MEDIA_ROOT = os.path.join(PROJECT_ROOT, 'media/') STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static/') STATICFILES_DIRS = ( os.path.join(PROJECT_ROOT, 'static/'), ) ``` That should work. Hope it helps!
I can't define STATICFILES_DIRS using Python to figure out the path
[ "", "python", "django", "" ]
I wanted to use `urllib2` for Python 3, but I don't think it's available under that name. I use `urllib.request`; is there another way to use `urllib2`?
`urllib2` in Python 3 has been split into [several modules](http://docs.python.org/2/library/urllib2.html): > The `urllib2` module has been split across several modules in Python 3 > named `urllib.request` and `urllib.error`. The `2to3` tool will > automatically adapt imports when converting your sources to Python 3. [`urllib.request`](http://docs.python.org/3.0/library/urllib.request.html) is what you want to use for issuing HTTP requests. Alternatively, the open source [Requests](http://python-requests.org) library provides a simpler and cleaner API for making HTTP requests in both Python 2 and 3.
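A minimal sketch of the Python 3 homes of the common `urllib2` names (no request is actually sent here; `example.com` and the header value are placeholders):

```python
from urllib.request import Request, urlopen   # Python 3 home of urllib2's request classes
from urllib.error import URLError, HTTPError  # Python 3 home of urllib2's exceptions

# building a request with a custom header works the same way as in urllib2
req = Request("http://example.com/", headers={"User-Agent": "my-script/0.1"})
print(req.full_url)
print(req.get_header("User-agent"))  # urllib stores header names capitalized this way

# actually sending it would be: urlopen(req).read(), wrapped in try/except URLError
```

`2to3` performs exactly this import rewriting automatically when converting Python 2 sources.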
You should give [Requests](http://docs.python-requests.org/en/latest/) a try; it's available for Python 2.6—3.3. HTH.
Using urllib2 for Python3
[ "", "python", "web", "" ]
I have searched the site before asking the question but haven't come across anything related. I am sure this is a ridiculously basic error; I have only been studying Oracle SQL, coming from zero computer background, for around 4 months. I am planning to take the 1z0-051 at the end of this month, so I am going over all the chapters. In this clause I am trying to get the name, title, salary, department and city of employees who have a salary higher than the average salary of the lowest paid position (CLERK). I keep getting "missing keyword", though.

```
SELECT e.first_name,
 e.last_name,
 j.job_title,
 e.salary,
 d.department_name,
 l.city
FROM employees e
JOIN jobs j
WHERE salary > (SELECT AVG(salary)
 FROM employees
 WHERE job_id LIKE '%CLERK%'
 )
ON e.job_id = j.job_id
JOIN departments d
ON e.department_id = d.department_id
JOIN locations l
ON d.location_id = l.location_id
ORDER BY salary
```
You have a `JOIN`-`WHERE`-`ON` sequence, which is wrong. It should be something like this (assuming `WHERE` is **not** a part of your joining condition):

```
FROM employees e
JOIN jobs j ON e.job_id = j.job_id
....
....
WHERE e.salary > (SELECT AVG(salary)
                  FROM employees
                  WHERE job_id LIKE '%CLERK%')
ORDER BY ...
```
You can't have a `join` clause after a `where` clause.
Missing Keyword in JOIN syntax
[ "", "sql", "oracle", "ora-00905", "" ]
I know that I can surface a row in a query by using it in the `ORDER BY` like this : ``` SELECT IF(`category` IS NOT NULL,`category`,"Uncategorized") AS `category` FROM `table` ORDER BY `category`="Uncategorized" DESC ``` Which will make the first row always contain "Uncategorized", however I have multiple rows that contain it that I also want surfaced. Here are two sample sets of returned data: What I'm getting: ``` Uncategorized Science Health Uncategorized Wellness ``` What I want: ``` Uncategorized Uncategorized Health Science Wellness ``` I have tried a number of other things including a `CASE` and also using a conditional `IF`. What am I doing wrong?
The reason it is not working is that the `ORDER BY` clause is comparing against the column name `category`, not the alias given to the column.

```
SELECT IF(category IS NOT NULL,category,'Uncategorized') category
FROM `table`
ORDER BY IF(category IS NOT NULL,category,'Uncategorized')='Uncategorized' DESC
```

You can alternatively use `COALESCE` or `IFNULL` to make it shorter:

```
SELECT COALESCE(category, 'Uncategorized') category
FROM `table`
ORDER BY COALESCE(category, 'Uncategorized') = 'Uncategorized' DESC, category
```
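The same surfacing trick can be reproduced with Python's built-in `sqlite3` (a sketch with made-up rows; SQLite lacks MySQL's `IF`, so the boolean test is written directly in the `ORDER BY`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (category TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("Science",), (None,), ("Health",), (None,), ("Wellness",)])

# NULL categories surface first (shown as 'Uncategorized'), the rest sort alphabetically
rows = [r[0] for r in conn.execute(
    "SELECT COALESCE(category, 'Uncategorized') AS cat "
    "FROM t ORDER BY (category IS NULL) DESC, cat")]
print(rows)
```

`(category IS NULL)` evaluates to 1 for the NULL rows, so `DESC` pulls them to the top, exactly like the MySQL expression in the answer.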
You need to use the function in the `ORDER BY` clause. ``` SELECT IF(`category` IS NOT NULL,`category`,"Uncategorized") AS `category` FROM `table` ORDER BY IF(`category` IS NOT NULL,`category`,"Uncategorized")="Uncategorized" DESC ```
How to surface a value for more than one row in MySQL
[ "", "mysql", "sql", "" ]
I am using the CherryPy framework to run my Python code on a server, but the process stops working when the load increases. Every time this happens I have to manually go and start the Python code. Is there any way I can use Gunicorn with CherryPy so that Gunicorn can start the code automatically when it stops working? Any other solution will also work in this case. I just want to make sure that the Python program does not stop working.
I use a cron that checks the memory load every few minutes and resets CherryPy when the memory exceeds 500MB -- so that the web host doesn't complain to me with emails. Something on my server doesn't release memory when a function ends as it should, so this is a pragmatic workaround. This hack may be weird because I reset it using an HTTP request, but that's because I spent hours trying to figure out how to do this within Bash and gave up. It works.

# CRON PART

```
*/2 * * * * /usr/local/bin/python2.7 /home/{mypath}/cron_reset_cp.py > $HOME/cron.log 2>&1
```

# And code inside cron\_reset\_cp.py...

```
#cron for resetting cherrypy /cp/ when 500+ MB
import os
#assuming starts in /home/my_username/
os.chdir('/home/my_username/cp/')
import mem
C = mem.MemoryMonitor('my_username') #this function adds up all the memory
memory = int(float(C.usage()))
if memory > 500:#MB
    #### Tried: pid = os.getpid() #current process = cronjob --- THIS approach did not work for me.
    import urllib2
    cp = urllib2.urlopen('http://myserver.com/cp?reset={password}')
```

Then I added this function to reset CherryPy via cron OR after a GitHub update from any browser (assuming only I know the {password}). The reset URL would be `http://myserver.com/cp?reset={password}`

```
def index(self, **kw):
    if kw.get('reset') == '{password}':
        cherrypy.engine.restart()
        ip = cherrypy.request.headers["X-Forwarded-For"] #get_client_ip
        return 'CherryPy RESETTING for duty, sir! requested by '+str(ip)
```

The MemoryMonitor part is from here: [How to get current CPU and RAM usage in Python?](https://stackoverflow.com/questions/276052/how-to-get-current-cpu-and-ram-usage-in-python)
Python uses many error handling strategies to control flow. A simple [try/except](http://docs.python.org/2/tutorial/errors.html#handling-exceptions) statement could throw an exception if, say, your memory overflowed, a load increased, or any number of issues making your code stall (hard to see without the actual code). In the except clause, you could clear any memory you allocated and restart your processes again.
Restart Python.py when it stops working
[ "", "python", "cherrypy", "gunicorn", "" ]
I'm new to MySQL (3 hours under my belt), just finished reading PHP & MySQL for Dummies, 4th Edition, and I'm currently trying to create a database that contains information about shops, for practice. The database I'm trying to create contains information about a list of stores. Each store will contain some basic information about the store, the industry (e.g. clothes, food) the store is operating in, as well as their inventory. While I currently have a table containing the store's name, description (which can be a short write-up and/or a URL to the store's website), and a store ID (which serves as the primary key):

```
create table Merchant(
MerchantID SERIAL,
Industry ENUM("Retail","Dining","Entertainment"),
Name VARCHAR(1000) NOT NULL,
Description VARCHAR(1000),
PRIMARY KEY(MerchantID)
)
```

Each store will then have multiple categories for what they are selling, and each category will have multiple items. Would I be right in saying that what I am looking at is a One(Store) to Many(Categories) table linked to a One(Category) to Many(Items) table? The reason being that although the first table (Store to Categories) has a One to Many R/S, the second table (Category to Item) also has a One to Many, and NOT a Many to Many R/S, as I am only looking at a singular category which contains multiple items in it.

Table for Categories:

```
Create table Categories(
CategoryID SERIAL references MerchantID,
Category VARCHAR(50) NOT NULL,
PRIMARY KEY(CategoryID)
)
```

Table for Items:

```
Create table Items(
ItemID Serial references CategoryID,
Item VARCHAR(50) NOT NULL,
PRIMARY KEY(ItemID)
)
```

Is the code above correct? Or do I have to enter all the Primary Keys that are one level above?
E.g.:

```
Create table Item(
MerchantID SERIAL,
CategoryID SERIAL,
ItemID SERIAL,
Item VARCHAR(50),
PRIMARY KEY(MerchantID,CategoryID,ItemID)
)
```

Furthermore, is there any difference between having VARCHAR(50) and VARCHAR(1000), seeing as MySQL would automatically get rid of all the unused space? Also, is there any way to further optimise the database for performance etc.?
When you create a relationship between tables (One-to-Many) you should have a `Primary Key` (PK) to `Foreign Key` (FK) link; the PK is on the One side and the FK is on the Many side. In your case you've got the wrong concept of the FK as well as of the `Serial` data type. For example, you have the Merchant table (your One side) with a structure like this:

```
create table Merchant(
MerchantID SERIAL,
Industry ENUM("Retail","Dining","Entertainment"),
Name VARCHAR(1000) NOT NULL,
Description VARCHAR(1000),
PRIMARY KEY(MerchantID)
)
```

And then your related table Categories (the Many side):

```
Create table Categories(
CategoryID SERIAL references MerchantID,
Category VARCHAR(50) NOT NULL,
PRIMARY KEY(CategoryID)
```

First let's tackle the `Serial` data type. A `Serial` data type in MySQL is actually a BIGINT NOT NULL AUTO\_INCREMENT data type, which is good for a PK. So, in the Merchant table, the auto\_increment `MerchantID` is OK. Now, in your Categories table your primary key CategoryID is also `Serial`, which is auto\_increment, but it references `MerchantID`. So, for example, if store 'A' has two categories, then the first value of your PK in the `Merchant` table is 1. Now in your `Categories` table, since it is also auto\_increment, that means you have values 1 and 2, in which value 1 would match value 1 of Merchant, but value 2 would not match anything, because there is none (unless you add another store). In other words, the values of the PK (MerchantID) in the Merchant table do not coincide with the values of the FK CategoryID in the Categories table. So you don't use Serial but simply BIGINT for your related field, namely the MerchantID. So, it should be like this now.
```
create table Merchant(
MerchantID SERIAL,
Industry ENUM("Retail","Dining","Entertainment"),
Name VARCHAR(1000) NOT NULL,
Description VARCHAR(1000),
PRIMARY KEY(MerchantID)
)

Create table Categories(
MerchantID BIGINT,
Category VARCHAR(50) NOT NULL,
PRIMARY KEY(MerchantID, Category)
)
```

Now, since you want to create the relationship between Category (One side) and Items (Many side), you have to retain the CategoryID in your table, which changes the `Categories` table to:

```
Create table Categories(
CategoryID SERIAL,
MerchantID BIGINT,
Category VARCHAR(50) NOT NULL,
PRIMARY KEY(CategoryID),
FOREIGN KEY(MerchantID) references Merchant(MerchantID)
)
```

Now in your `Items` table with One-to-Many — that is, a Category could have many Items — your Items table will change into this:

```
Create table Items(
ItemID Serial,
CategoryID BIGINT,
Item VARCHAR(50) NOT NULL,
PRIMARY KEY(ItemID),
FOREIGN KEY(CategoryID) references Categories(CategoryID)
)
```

This is now basically my 2nd point. In this case the Items table, being the Many side, has the FK CategoryID referencing the Categories PK, which is also CategoryID. And I would agree with Dan Brauck that you shouldn't use Enum as your data type, because it is difficult to extend and not flexible. So, your Merchant table should have this structure now:

```
create table Merchant(
MerchantID SERIAL,
Industry VARCHAR(50) NOT NULL,
Name VARCHAR(1000) NOT NULL,
Description VARCHAR(1000),
PRIMARY KEY(MerchantID)
)
```

I hope, by the way, that you also understand the Normalization concept, which is very important in Database Design. See the links below:

```
http://www.tutorialspoint.com/sql/first-normal-form.htm
https://en.wikipedia.org/wiki/Database_normalization
```
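The PK-to-FK rule described above can be exercised end to end with Python's built-in `sqlite3` (a hypothetical minimal schema, not the MySQL one from the answer; SQLite needs foreign-key enforcement switched on explicitly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite

conn.execute("CREATE TABLE categories ("
             "  category_id INTEGER PRIMARY KEY,"
             "  name TEXT NOT NULL)")
conn.execute("CREATE TABLE items ("
             "  item_id INTEGER PRIMARY KEY,"
             "  category_id INTEGER NOT NULL REFERENCES categories(category_id),"
             "  name TEXT NOT NULL)")

conn.execute("INSERT INTO categories (name) VALUES ('Shirts')")          # gets id 1
conn.execute("INSERT INTO items (category_id, name) VALUES (1, 'Tee')")  # valid FK

try:
    # a Many-side row pointing at a non-existent One-side row is rejected
    conn.execute("INSERT INTO items (category_id, name) VALUES (99, 'Orphan')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)
```

This is the behavior the FK declarations in the answer buy you: the database itself refuses orphaned child rows.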
There are some problems with your plan. The first one is using an enum in your merchant table. Adding a new industry later on would be a lot easier with a separate table for industries. Your syntax for categories might not run. I don't use MySQL but what you have is not valid for other RDBMSs. When you use the "references" keyword, it should be while declaring a foreign key. So this: ``` Create table Categories( CategoryID SERIAL references MerchantID, Category VARCHAR(50) NOT NULL, PRIMARY KEY(CategoryID) ) ``` should probably resemble this: ``` Create table Categories( CategoryID SERIAL not null, MerchantID integer not null, Category VARCHAR(50) NOT NULL, PRIMARY KEY(CategoryID), foreign key merchantID references (merchant.merchantID) ) ``` None of your table have foreign keys. If the book you used didn't address those, I've heard good things about "Database Design for Mere Mortals". These observations should get you started.
One to Many Table linked to a One to Many Table
[ "", "mysql", "sql", "database", "phpmyadmin", "" ]
I'm developing a test case in Python with WebDriver to click through menu items on <http://www.ym.com>, and this particular one is meant to go through a menu item and click on a sub-menu item. When running the test, it looks like it tries to access the nested menu items, but never clicks on the final item. Here is my code to go through the menu:

```
food = driver.find_element_by_id("menu-item-1654")
hov = ActionChains(driver).move_to_element(food).move_to_element(driver.find_element_by_xpath("/html/body/div[4]/div/div/ul/li[2]/ul/li/a")).click()
hov.perform()
```

The problem here is that I am trying to click the "Recipes" submenu from the "Food" menu, but what happens is that the submenu "France" is being clicked under the menu "Travel", which is situated right next to "Recipes". I've tried using find\_element\_by\_id, find\_element\_by\_css\_locator, find\_element\_by\_link\_text, but it seems to always select the France submenu under Travel and not the Recipes submenu under Food. Any ideas?

**EDIT**

I am using this Python code to run the test now:

```
food = driver.find_element_by_xpath("//a[contains(@href, 'category/food/')]/..")
ActionChains(driver).move_to_element(food).perform()
WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath("//a[contains(@href, 'category/recipes-2/')]/..")).click()
```

which works perfectly fine in IE, but still accesses the wrong menu item in Firefox.
Try using a normal click instead of ActionChains' move-and-click. Changes to your code:

1. I assume the ids are dynamic; try to avoid using them.
2. Try to avoid using absolute XPath.
3. Use a normal click rather than ActionChains' move-and-click.
4. Use a 5-second WebDriverWait for the recipes link.

```
food = driver.find_element_by_xpath("//a[contains(@href, 'category/food/')]/..")
hov = ActionChains(driver).move_to_element(food).move_by_offset(5, 45).perform() # 45 is the Height of the 'FOOD' link plus 5
recipes = WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath("//a[contains(@href, 'category/recipes-2/')]/.."))
recipes.click()
```

A tested working C# version of the code:

```
driver.Navigate().GoToUrl("http://www.yumandyummer.com");
IWebElement food = driver.FindElement(By.XPath("//a[contains(@href, 'category/food/')]/.."));
new Actions(driver).MoveToElement(food).MoveByOffset(5, food.Size.Height + 5).Perform();
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
IWebElement recipes = wait.Until(ExpectedConditions.ElementExists(By.XPath("//a[contains(@href, 'category/recipes-2/')]/..")));
recipes.Click();
```
Possibly not that helpful, but in java I have used ``` new Select(driver.findElement(By.id("MyId"))).selectByVisibleText("MyText"); ```
Selecting incorrect submenu item after hover action
[ "", "python", "selenium", "webdriver", "selenium-webdriver", "" ]
I'm rendering some graphics in Python with matplotlib, and will include them in a LaTeX paper (using LaTeX's nice tabular alignment instead of fiddling with matplotlib's `ImageGrid`, etc.). **I would like to create and save a standalone colorbar with `savefig`, without needing to use `imshow`.** (The `vmin`, `vmax` parameters, as well as the `cmap`, could be provided explicitly.) The only way I could find is quite complicated and (from what I understand) draws a hard-coded rectangle onto the canvas: <http://matplotlib.org/examples/api/colorbar_only.html> **Is there an elegant way to create a standalone colorbar with matplotlib?**
You can create a dummy image and then hide its axes. Draw your colorbar in a customized Axes.

```
import pylab as pl
import numpy as np

a = np.array([[0,1]])

pl.figure(figsize=(9, 1.5))
img = pl.imshow(a, cmap="Blues")
pl.gca().set_visible(False)
cax = pl.axes([0.1, 0.2, 0.8, 0.6])
pl.colorbar(orientation="horizontal", cax=cax)
pl.savefig("colorbar.pdf")
```

the result:

![enter image description here](https://i.stack.imgur.com/7AZVD.png)
Using the same idea as in HYRY's answer, if you want a "standalone" colorbar in the sense that it is independent of the items on a figure (not directly connected with how they are colored), you can do something like the following: ``` from matplotlib import pyplot as plt import numpy as np # create dummy invisible image # (use the colormap you want to have on the colorbar) img = plt.imshow(np.array([[0,1]]), cmap="Oranges") img.set_visible(False) plt.colorbar(orientation="vertical") # add any other things you want to the figure. plt.plot(np.random.rand(30)) ```
Standalone colorbar
[ "", "python", "matplotlib", "colorbar", "" ]
This query gets several `AssignmentId`s:

```
SELECT AS2.AssignmentId
FROM dbo.AssignmentSummary AS AS2
WHERE AS2.SixweekPosition = 1
  AND AS2.TeacherId = 'mggarcia'
```

This query gets a value for only one assignment, through the variable `@assignmentId`:

```
SELECT S.StudentId,
       CASE WHEN OW.OverwrittenScore IS NOT NULL
            THEN OW.OverwrittenScore
            ELSE dbo.GetFinalScore(S.StudentId, @assignmentId)
       END AS FinalScore
FROM dbo.Students AS S
LEFT JOIN dbo.OverwrittenScores AS OW
       ON S.StudentId = OW.StudentID
      AND OW.AssignmentId = @assignmentId
WHERE S.ClassId IN ( SELECT C.ClassId
                     FROM Classes AS C
                     WHERE C.TeacherId = @teacherId )
```

As I pointed out, the last query works when you assign a value through the variable, and it returns a table. Now I want to get a table for the several `AssignmentId`s from the first query. What do I need? A join? I have no idea what to do now.
```
AND OW.AssignmentId IN ( SELECT AS2.AssignmentId
                         FROM dbo.AssignmentSummary AS AS2
                         WHERE AS2.SixweekPosition = 1
                           AND AS2.TeacherId = 'mggarcia' )
```

*The suggestion can be optimized if you tell me how the tables are related to each other.*
You can combine them using `in`: ``` SELECT S.StudentId, CASE WHEN OW.OverwrittenScore IS NOT NULL THEN OW.OverwrittenScore ELSE dbo.GetFinalScore(S.StudentId, @assignmentId) END AS FinalScore FROM dbo.Students AS S LEFT JOIN dbo.OverwrittenScores AS OW ON S.StudentId = OW.StudentID AND OW.AssignmentId in (SELECT AS2.AssignmentId FROM dbo.AssignmentSummary AS AS2 WHERE AS2.SixweekPosition = 1 AND AS2.TeacherId = 'mggarcia' ) WHERE S.ClassId IN ( SELECT C.ClassId FROM Classes AS C WHERE C.TeacherId = @teacherId ) ``` There may be ways to simplify this query. This does a direct conversion of substituting the first query into the second.
How to join these two queries?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
I have a set of lists that contain both strings and float numbers, such as:

```
import numpy as num

NAMES = num.array(['NAME_1', 'NAME_2', 'NAME_3'])
FLOATS = num.array([ 0.5 , 0.2 , 0.3 ])

DAT = num.column_stack((NAMES, FLOATS))
```

I want to stack these two lists together and write them to a text file in the form of columns; therefore, I want to use **numpy.savetxt** (if possible) to do this.

```
num.savetxt('test.txt', DAT, delimiter=" ")
```

When I do this, I get the following error:

```
>>> num.savetxt('test.txt', DAT, delimiter=" ")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_9597b1f_20120920-py2.7-macosx-10.8-x86_64.egg/numpy/lib/npyio.py", line 1047, in savetxt
    fh.write(asbytes(format % tuple(row) + newline))
TypeError: float argument required, not numpy.string_
```

The ideal output file would look like:

```
NAME_1 0.5
NAME_2 0.2
NAME_3 0.3
```

How can I write both strings and float numbers to a text file, possibly avoiding using csv (I want to make it readable for other people)? Is there another way of doing this instead of using **numpy.savetxt**?
You have to specify the format (`fmt`) of you data in `savetxt`, in this case as a string (`%s`): ``` num.savetxt('test.txt', DAT, delimiter=" ", fmt="%s") ``` The default format is a float, that is the reason it was expecting a float instead of a string and explains the error message.
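A quick way to see the fix in action is to write to an in-memory buffer instead of a file (a sketch; requires NumPy):

```python
import io

import numpy as np

names = np.array(["NAME_1", "NAME_2", "NAME_3"])
floats = np.array([0.5, 0.2, 0.3])
dat = np.column_stack((names, floats))  # column_stack casts everything to strings here

buf = io.StringIO()
np.savetxt(buf, dat, delimiter=" ", fmt="%s")  # fmt="%s" is the key change
print(buf.getvalue())
```

Since `column_stack` already converted the floats to strings, `%s` formats every cell without complaint, producing the two-column layout the question asks for.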
The currently accepted answer does not actually address the question, which asks how to save lists that contain both strings and float numbers. For completeness I provide a fully working example, which is based, with some modifications, on the link given in @joris comment. ``` import numpy as np names = np.array(['NAME_1', 'NAME_2', 'NAME_3']) floats = np.array([ 0.1234 , 0.5678 , 0.9123 ]) ab = np.zeros(names.size, dtype=[('var1', 'U6'), ('var2', float)]) ab['var1'] = names ab['var2'] = floats np.savetxt('test.txt', ab, fmt="%10s %10.3f") ``` **Update:** This example also works properly in Python 3 by using the `'U6'` Unicode string [dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html), when creating the `ab` [structured array](https://docs.scipy.org/doc/numpy/user/basics.rec.html), instead of the `'S6'` byte string. The latter dtype would work in Python 2.7, but would write strings like `b'NAME_1'` in Python 3.
How to use python numpy.savetxt to write strings and float number to an ASCII file?
[ "", "python", "list", "numpy", "output", "" ]
I have a JSON return string from MongoDB:

```
[{'api_calls_per_day': 0.29411764705882354, '_id': ObjectId('51948e5bc25f4b1d1c0d303a'), 'api_calls_total': 5, 'api_calls_with_key': 3, 'api_calls_without_key': 2}]
```

I want to take off the square brackets '[' and ']' from the above string, so the result needs to be like:

```
{'api_calls_per_day': 0.29411764705882354, '_id': ObjectId('51948e5bc25f4b1d1c0d303a'), 'api_calls_total': 5, 'api_calls_with_key': 3, 'api_calls_without_key': 2}
```

Any idea how to remove the square brackets from that string? Thanks
Are you getting a string? If you are, it's just:

```
myString = MongoDBreturned[1:-1]
```

This takes a substring from the `{` to the `}`. If it's not a string, and it's actually a list, then simply take the first (and only) element, which is the JSON object:

```
json = MongoDBreturned[0]
```
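In the PyMongo case what comes back is a Python list of dicts, not a string, so indexing is the clean route (a sketch with `ObjectId` replaced by a plain string so it runs anywhere; the variable names are illustrative):

```python
# stand-in for what a MongoDB query returns: a list holding one document
result = [{"_id": "51948e5bc25f4b1d1c0d303a",  # would be ObjectId(...) in PyMongo
           "api_calls_per_day": 0.29411764705882354,
           "api_calls_total": 5,
           "api_calls_with_key": 3,
           "api_calls_without_key": 2}]

doc = result[0]  # first (and only) element: the dict itself, brackets gone
print(doc["api_calls_total"])
```

Slicing the *repr* with `[1:-1]` only makes sense if you genuinely have text; when you have the list object, indexing avoids any string round-trip.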
It's Python, string is a list. How about this ``` a = "[string with brackets]" a[1:-1] # string without brackets (or without first and last char) ```
Take off '[' from Json string?
[ "", "python", "mongodb", "python-3.x", "json", "" ]