I have a table that keeps track of changes in customer profiles. [Here's a simplified version](http://sqlfiddle.com/#!4/e493a/1): ``` CREATE TABLE HISTORY ( CUSTOMER_ID NUMBER(9,0), DATE_CHANGED DATE, ACCOUNT_TYPE VARCHAR2(20), CONSTRAINT HISTORY_PK PRIMARY KEY (CUSTOMER_ID, DATE_CHANGED) ); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (200, TO_DATE('05/01/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Premium'); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (300, TO_DATE('17/02/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Free'); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (100, TO_DATE('05/03/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Free'); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (100, TO_DATE('12/03/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Standard'); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (200, TO_DATE('22/03/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Standard'); INSERT INTO HISTORY (CUSTOMER_ID, DATE_CHANGED, ACCOUNT_TYPE) VALUES (100, TO_DATE('29/03/2013 00:00:00','DD/MM/RRRR HH24:MI:SS'), 'Premium'); ``` That data is maintained by a third party. My ultimate goal is to obtain a sum of customers per account type and month for a given timespan but, for now, I'd like to start with something simpler: display the latest account type for each month/customer combination where there are changes recorded: ``` YEAR MONTH CUSTOMER_ID ACCOUNT_TYPE ==== ===== =========== ============ 2013 1 200 Premium 2013 2 300 Free 2013 3 100 Premium 2013 3 200 Standard ``` Here, customer 100 made three changes in March; we display "Premium" because it has the latest date within March.
The query to obtain *all* rows would be this: ``` SELECT EXTRACT(YEAR FROM DATE_CHANGED) AS YEAR, EXTRACT(MONTH FROM DATE_CHANGED) AS MONTH, CUSTOMER_ID, ACCOUNT_TYPE FROM HISTORY ORDER BY YEAR, MONTH, CUSTOMER_ID, DATE_CHANGED ``` Is it possible to filter out unwanted rows using aggregate functions? Does it make more sense to use analytic functions? (And, in either case, what would be the adequate function?) **Edit:** I've been asked for an example of unwanted rows. There are 3 rows for customer 100 in March: ``` '05/03/2013 00:00:00', 'Free' '12/03/2013 00:00:00', 'Standard' '29/03/2013 00:00:00', 'Premium' ``` Unwanted rows are `'Free'` and `'Standard'` because they aren't the latest in the month.
``` SELECT YEAR ,MONTH ,customer_id ,max(ACCOUNT_TYPE) keep(dense_rank FIRST ORDER BY date_changed DESC) LAST_ACC FROM ( SELECT EXTRACT(YEAR FROM DATE_CHANGED) AS YEAR, EXTRACT(MONTH FROM DATE_CHANGED) AS MONTH, CUSTOMER_ID, date_changed, account_type FROM HISTORY ) GROUP BY YEAR, MONTH, customer_id ORDER BY YEAR, MONTH, CUSTOMER_ID | YEAR | MONTH | CUSTOMER_ID | LAST_ACC | ----------------------------------------- | 2013 | 1 | 200 | Premium | | 2013 | 2 | 300 | Free | | 2013 | 3 | 100 | Premium | | 2013 | 3 | 200 | Standard | ``` <http://sqlfiddle.com/#!4/e493a/15>
``` SELECT DISTINCT CUSTOMER_ID, EXTRACT(YEAR FROM DATE_CHANGED) AS YEAR, EXTRACT(MONTH FROM DATE_CHANGED) AS MONTH, LAST_VALUE(ACCOUNT_TYPE) OVER(PARTITION BY CUSTOMER_ID,TO_CHAR(DATE_CHANGED,'YYYY-MM') ORDER BY DATE_CHANGED ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS ACCOUNT_TYPE FROM HISTORY CUSTOMER_ID YEAR MONTH ACCOUNT_TYPE 200 2013 1 Premium 300 2013 2 Free 100 2013 3 Premium 200 2013 3 Standard ``` <http://www.sqlfiddle.com/#!4/fab60/13>
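Both answers compute the same "latest row per (year, month, customer)" result. As a sanity check, here is a plain-Python restatement of that logic, using the dates from the sample INSERTs above:

```python
from datetime import date

# Sample rows from the question's INSERT statements
history = [
    (200, date(2013, 1, 5), "Premium"),
    (300, date(2013, 2, 17), "Free"),
    (100, date(2013, 3, 5), "Free"),
    (100, date(2013, 3, 12), "Standard"),
    (200, date(2013, 3, 22), "Standard"),
    (100, date(2013, 3, 29), "Premium"),
]

# Keep, per (year, month, customer), only the row with the latest date
latest = {}
for cust, d, acct in history:
    key = (d.year, d.month, cust)
    if key not in latest or d > latest[key][0]:
        latest[key] = (d, acct)

result = sorted((y, m, c, acct) for (y, m, c), (d, acct) in latest.items())
for row in result:
    print(row)
```

This reproduces the four-row result from the question, which is what both the `KEEP (DENSE_RANK FIRST ...)` and the `LAST_VALUE(...) OVER (...)` queries compute on the Oracle side.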
Row with latest value by customer and month
[ "sql", "oracle", "oracle10g", "greatest-n-per-group" ]
I have 2 tables in the database: 1. Table1 contains a list of Permanent Banks 2. Table2 contains a list of Temporary Banks I want to join all the Banks from both the tables into a `DataTable` AllBanks in memory... I just want the SQL statement to fill those banks in the AllBanks table in memory.
Use [union](http://en.wikipedia.org/wiki/Set_operations_%28SQL%29): ``` select * from PermanentBanks union select * from TemporaryBanks ```
The SQL operator to use is [UNION](http://www.w3schools.com/sql/sql_union.asp), but you don't specify the other fields, so I assume that you want only the field that contains the bank name ``` SELECT PermanentName AS Bank from Table1 UNION SELECT TemporaryName AS Bank from Table2 ``` you could also add a predefined value to differentiate between the two input tables ``` SELECT PermanentName AS Bank, 'P' as BankType from Table1 UNION SELECT TemporaryName AS Bank, 'T' as BankType from Table2 ``` It is important to remember that each SELECT statement within the UNION must have the same number of columns, expressed in the same order and with the same datatype.
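For a quick runnable check of the UNION behavior described above, here is a sketch using Python's `sqlite3` module (the table and column names are the assumed ones from the answer, the bank names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (PermanentName TEXT)")
cur.execute("CREATE TABLE Table2 (TemporaryName TEXT)")
cur.executemany("INSERT INTO Table1 VALUES (?)", [("Bank A",), ("Bank B",)])
cur.executemany("INSERT INTO Table2 VALUES (?)", [("Bank B",), ("Bank C",)])

# UNION removes duplicates across the two SELECTs (UNION ALL would keep them)
rows = cur.execute(
    "SELECT PermanentName AS Bank FROM Table1 "
    "UNION "
    "SELECT TemporaryName AS Bank FROM Table2"
).fetchall()
banks = sorted(r[0] for r in rows)
print(banks)
```

Note that "Bank B" appears once in the result even though it is in both tables, which is exactly the deduplicating behavior of plain UNION.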
Combine Data of 2 Tables into 1 DataTable in Memory in WinForms
[ "sql", "winforms", "datatable" ]
I want to pick out the words that start with 's' in a sentence using Python. Here is my code: ``` import re text = "I was searching my source to make a big desk yesterday." m = re.findall(r'[s]\w+', text) print m ``` But the result of the code is: ``` ['searching', 'source', 'sk', 'sterday'] ``` How should I write the regular expression? Or is there another method to pick out such words?
``` >>> import re >>> text = "I was searching my source to make a big desk yesterday." >>> re.findall(r'\bs\w+', text) ['searching', 'source'] ``` For lowercase and uppercase `s` use: `r'\b[sS]\w+'`
I know it is not a regex solution, but you can use `startswith` ``` >>> text="I was searching my source to make a big desk yesterday." >>> [ t for t in text.split() if t.startswith('s') ] ['searching', 'source'] ```
How to find a word that starts with a specific character
[ "python", "regex" ]
I'm currently creating a view and I need to display only the year from a normal DateTime. How can I return just the year as a value? I have already tried EXTRACT, but somehow it's not working. Thanks for your help and fast answer.
Use [`YEAR()`](http://msdn.microsoft.com/en-us/library/ms186313.aspx) function ``` SELECT YEAR(MyDateCol) FROM MyTable ``` ### [See this SQLFiddle](http://www.sqlfiddle.com/#!3/d41d8/3756)
You can use [DATEPART()](http://msdn.microsoft.com/en-us/library/ms174420.aspx) to extract the year from datetime value. ``` DATEPART(YEAR,GETDATE()); ``` Check out ***[SQLFIDDLE](http://sqlfiddle.com/#!3/d41d8/13343)***
How to select only the year from a DateTime in T-SQL
[ "sql", "sql-server", "t-sql" ]
This code works: ``` import tkinter root = tkinter.Tk() canvas = tkinter.Canvas(root) canvas.grid(row = 0, column = 0) photo = tkinter.PhotoImage(file = './test.gif') canvas.create_image(0, 0, image=photo) root.mainloop() ``` It shows me the image. Now, this code runs without errors but it doesn't show me the image, and I don't know why, because it's the same code, just in a class: ``` import tkinter class Test: def __init__(self, master): canvas = tkinter.Canvas(master) canvas.grid(row = 0, column = 0) photo = tkinter.PhotoImage(file = './test.gif') canvas.create_image(0, 0, image=photo) root = tkinter.Tk() test = Test(root) root.mainloop() ```
The variable `photo` is a local variable which gets garbage collected after the class is instantiated. The solution involves saving a reference to the photo, for example: ``` self.photo = tkinter.PhotoImage(...) ``` If you do a Google search on "tkinter image doesn't display", the first result is this: [Why do my Tkinter images not appear?](https://web.archive.org/web/20201111190625id_/http://effbot.org/pyfaq/why-do-my-tkinter-images-not-appear.htm)
``` from tkinter import * from PIL import ImageTk, Image root = Tk() def open_img(): global img path = r"C:\.....\\" img = ImageTk.PhotoImage(Image.open(path)) panel = Label(root, image=img) panel.pack(side="bottom", fill="both") but1 = Button(root, text="click to get the image", command=open_img) but1.pack() root.mainloop() ``` > Just add global to the img definition and it will work
Why does Tkinter image not show up if created in a function?
[ "python", "image", "tkinter", "tkinter-canvas" ]
I don't really know how to make a specific question for this problem, so I will try my best to explain the scenario. The table in question has the following columns. ``` **Table#1 Patients** -PatID -Name -Guarantor FK_PatID (Refer to Patients Table) ``` Okay, let's say I want to select a patient's name and his guarantor's name from the table in a single SQL query. ``` SELECT p.Name, p.Guarantor FROM Patients P ``` This statement will get me the patient's name and the guarantor's PatID, but how can I match that guarantor's ID to get its name in the same SQL statement?
You can join a table on itself ``` SELECT P.Name, P.Guarantor, P2.Name FROM Patients P INNER JOIN Patients P2 on P2.PatId = P.Guarantor ```
This is a classic example of a **[SELF JOIN](http://databases.about.com/od/sql/a/selfjoins.htm)** ``` SELECT P1.Name, P2.Name + ' is Guarantor of ' + P1.Name FROM Patients P1 INNER JOIN Patients P2 on P2.PatId = P1.Guarantor ``` A similar **[STACK OVERFLOW QUESTION](https://stackoverflow.com/questions/10862633/sql-self-referencing-query-join)**
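The self-join above can be verified with a small in-memory database; the sketch below uses Python's `sqlite3` with made-up patient rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Patients (PatID INTEGER, Name TEXT, Guarantor INTEGER)")
# Bob (PatID 2) is his own guarantor and also Alice's
cur.executemany("INSERT INTO Patients VALUES (?, ?, ?)",
                [(1, "Alice", 2), (2, "Bob", 2)])

# Join the table to itself: P2 is the row whose PatID matches P1's Guarantor
rows = cur.execute(
    "SELECT P1.Name, P2.Name "
    "FROM Patients P1 JOIN Patients P2 ON P2.PatID = P1.Guarantor "
    "ORDER BY P1.PatID"
).fetchall()
print(rows)
```

Each result row pairs a patient's name with the guarantor's name resolved through the second alias of the same table.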
SQL How to Select a returned value in the same statement
[ "sql", "select" ]
Got the following string: ``` hash=49836EC32432A9B830BECFD66A9B6F936327EAE8 ``` I need to match the `49836EC32432A9B830BECFD66A9B6F936327EAE8` so I do: ``` match = re.findall(".*hash=([A-F0-9]+).*",mydata) ``` all cool but when I want to print it ``` print "Hash: %s" % match ``` I get `Hash: ['C5E8C500BA925237E399C44EFD05BCD4AAF76292']` what am I doing wrong? I need to print `Hash: C5E8C500BA925237E399C44EFD05BCD4AAF76292`
``` if match: print "Hash: %s" % match[0] ```
`findall` gives you a list of all matches in the string. You are seeing exactly that - a list with the one match it found. Try `search` instead: <http://docs.python.org/2/library/re.html#re.search> which returns a MatchGroup that you can get the first group of: <http://docs.python.org/2/library/re.html#match-objects> Or you could do `findall` and use the first entry in the list to print (e.g. `match[0]`).
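Since this answer points at `re.search`, here is a short sketch of that route with the question's pattern (the match object carries the capture group directly, no list involved):

```python
import re

mydata = "hash=49836EC32432A9B830BECFD66A9B6F936327EAE8"

# re.search returns a single match object (or None), so check before using it
m = re.search(r"hash=([A-F0-9]+)", mydata)
if m:
    hash_value = m.group(1)
    print("Hash: %s" % hash_value)
```

`group(1)` is the text captured by the first parenthesized group, which is the bare hash the question wants to print.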
Python REGEX match string and print
[ "python", "regex", "python-2.7" ]
I have a query ``` select name,name_order from name_table where dept_id=XXX; ``` and the resultSet is ``` +------------+--------+ | name_order | name | +------------+--------+ | 0 | One | | 1 | Two | | 2 | Three | | 3 | four | | 6 | five | | 9 | six | +------------+--------+ ``` I have to update the name\_order for the dept\_id in such a way that the values start from 0 and are incremented (for that dept\_id only). Note: name\_order is not an index. The outcome should be like ``` +------------+--------+ | name_order | name | +------------+--------+ | 0 | One | | 1 | Two | | 2 | Three | | 3 | four | | 4 | five | | 5 | six | +------------+--------+ ``` I tried the analytic function ROW_NUMBER(), but it did not help ``` update name_table set name_order = ( ROW_NUMBER() OVER (PARTITION BY dept_id ORDER BY name_order)-1 ) where dept_id=XXX order by name_order ``` Thanks in advance -R
``` UPDATE NAME_TABLE A SET NAME_ORDER=( SELECT R FROM (SELECT NAME,ROW_NUMBER() OVER(ORDER BY NAME_ORDER) R FROM NAME_TABLE ) B WHERE A.NAME=B.NAME); ``` <http://www.sqlfiddle.com/#!4/6804a/1> ``` UPDATE NAME_TABLE A SET NAME_ORDER=( SELECT R FROM (SELECT NAME,DEPT_ID,ROW_NUMBER() OVER(PARTITION BY DEPT_ID ORDER BY NAME_ORDER)-1 R FROM NAME_TABLE ) B WHERE A.NAME=B.NAME AND A.DEPT_ID=B.DEPT_ID /*AND A.DEPT_ID=XXX*/ ); ``` Add the condition about dept\_id. Thanks Passerby.
You can do it with a `MERGE` command ``` MERGE INTO name_table dst USING (SELECT t.*, row_number() over (partition BY dept_id ORDER BY name_order) -1 n FROM name_table t) src ON (dst.dept_id = src.dept_id AND dst.name = src.name) WHEN MATCHED THEN UPDATE SET Dst.name_order = src.n; ``` [Here is a sqlfiddle demo](http://www.sqlfiddle.com/#!4/bb0fb/3) But why would you want a column with values you can have in a query?
Can we replace all values of column with Row numbers?
[ "sql", "oracle", "oracle10g" ]
I have 2 sql queries running separately, what I would like to do is merge the two so I can extract both sets of data within one query. The queries are using the ExpressionEngine query module and are below, I would like to extract total and total\_2 in one query: Query 1: ``` SELECT COUNT(exp_channel_data.entry_id) AS total FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id WHERE field_id_207 != '' AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" ``` Query 2: ``` SELECT COUNT(exp_channel_data.entry_id) AS total_2 FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id WHERE status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" ```
``` SELECT SUM(CASE WHEN field_id_207 != '' then 1 else 0 end) as total_1, COUNT(exp_channel_data.entry_id) AS total_2 FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" ```
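The trick in this answer is conditional aggregation: a single pass over the rows, with a CASE expression doing the filtering that the WHERE clause did in the first query. A small `sqlite3` sketch of the same pattern (column names borrowed from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE entries (field_id_207 TEXT, status TEXT)")
# field_id_207 is empty for one of the three open rows; one row is not open
cur.executemany("INSERT INTO entries VALUES (?, ?)",
                [("x", "open"), ("", "open"), ("y", "open"), ("z", "closed")])

total_1, total_2 = cur.execute(
    "SELECT SUM(CASE WHEN field_id_207 != '' THEN 1 ELSE 0 END), "
    "       COUNT(*) "
    "FROM entries WHERE status = 'open'"
).fetchone()
print(total_1, total_2)
```

`total_2` counts every open row, while `total_1` counts only the open rows with a non-empty `field_id_207`, which is exactly the pair of numbers the two original queries produced separately.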
Join the two queries with `union` ``` SELECT COUNT(exp_channel_data.entry_id) AS total FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id WHERE field_id_207 != '' AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" union SELECT COUNT(exp_channel_data.entry_id) AS total_2 FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" ``` you may need to add another field to distinguish which query you are looking at ``` SELECT 'q1' source, COUNT(exp_channel_data.entry_id) AS total FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id WHERE field_id_207 != '' AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" union SELECT 'q2' source, COUNT(exp_channel_data.entry_id) AS total_2 FROM exp_channel_data JOIN exp_channel_titles ON exp_channel_titles.entry_id = exp_channel_data.entry_id AND status = 'open' AND exp_channel_data.channel_id = '18' AND author_id = "CURRENT_USER" ```
Merge 2 sql queries
[ "sql", "expressionengine" ]
Please consider this XML: ``` <Employees> <Person> <ID>1000</ID> <Name>Nima</Name> <LName>Agha</LName> </Person> <Person> <ID>1001</ID> <Name>Ligha</Name> <LName>Ligha</LName> </Person> <Person> <ID>1002</ID> <Name>Jigha</Name> <LName>Jigha</LName> </Person> <Person> <ID>1003</ID> <Name>Aba</Name> <LName>Aba</LName> </Person> </Employees> ``` I want to write a procedure that gets a number and inserts an element into the nth Person element. For example, if 1 is passed to my procedure, I insert an element into the first Person element.
``` DECLARE @data XML = ' <Employees> <Person> <ID>1000</ID> <Name>Nima</Name> <LName>Agha</LName> </Person> <Person> <ID>1001</ID> <Name>Ligha</Name> <LName>Ligha</LName> </Person> <Person> <ID>1002</ID> <Name>Jigha</Name> <LName>Jigha</LName> </Person> <Person> <ID>1003</ID> <Name>Aba</Name> <LName>Aba</LName> </Person> </Employees> ' DECLARE @offset INT = 2 DECLARE @value VARCHAR(100) = 'newvalue' SET @data.modify('insert <NewAttribute>{sql:variable("@value")}</NewAttribute> as last into (/Employees/Person)[sql:variable("@offset")][1]') SELECT @data ```
``` DECLARE @AttributeValue NVARCHAr(100) = 'TestValue'; DECLARE @NodeNR INT = 3 DECLARE @XML XML = '<Employees> <Person> <ID>1000</ID> <Name>Nima</Name> <LName>Agha</LName> </Person> <Person> <ID>1001</ID> <Name>Ligha</Name> <LName>Ligha</LName> </Person> <Person> <ID>1002</ID> <Name>Jigha</Name> <LName>Jigha</LName> </Person> <Person> <ID>1003</ID> <Name>Aba</Name> <LName>Aba</LName> </Person> </Employees>'; SET @XML.modify('insert attribute Attribute {sql:variable("@AttributeValue")} into (/Employees/Person[position()=sql:variable("@NodeNr")])[1]') ```
Insert an element into the nth element of an XML in SQL Server
[ "sql", "sql-server", "xml", "xquery" ]
I was wondering if somebody had some information on how to load a CSV file using NumPy's loadtxt(). For some reason it claims that there is no such file or directory, when clearly there is. I've even copy/pasted the full path (with and without the leading / for root), but to no avail. ``` from numpy import * FH = loadtxt("/Users/groenera/Desktop/file.csv") ``` or ``` from numpy import * FH = loadtxt("Users/groenera/Desktop/file.csv") ``` The [documentation for loadtxt](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html) is very unhelpful about this.
This is probably not a `loadtxt` problem. Try simply ``` f = open("/Users/groenera/Desktop/file.csv") ``` to make sure it is `loadtxt`'s fault. Also, try using a Unicode string: ``` f = open(u"/Users/groenera/Desktop/file.csv") ```
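A minimal way to rule out path problems independently of NumPy is to create a known file and check it with the standard library (a generic sketch, not specific to the asker's file):

```python
import os
import tempfile

# Write a tiny CSV to a path we control, then verify the OS can see it
with tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False) as fh:
    fh.write("1.0 2.0\n3.0 4.0\n")
    path = fh.name

existed = os.path.exists(path)
print(path, existed)
os.remove(path)  # clean up the temporary file
```

If `os.path.exists` returns False for the real path, the problem is the path string itself (working directory, typos, or escaping) rather than `loadtxt`.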
You might have forgotten the double backslash, `"\\"`. Windows paths require this escaping. So instead of ``` FH = loadtxt("/Users/groenera/Desktop/file.csv") ``` do this: ``` FH = loadtxt("C:\\Users\\groenera\\Desktop\\file.csv") ```
File path name for NumPy's loadtxt()
[ "python", "file", "numpy" ]
So I am very new to Python and am in the process of learning the basics. I am trying to create a function that counts the number of vowels in a string and returns how many times each vowel occurs in the string. For example, if I gave it this input, this is what it would print out. ``` >>>countVowels('Le Tour de France') a, e, i, o, and u appear, respectively, 1,3,0,1,1 times. ``` I made this helper function to use, but then I'm not exactly sure how to use it. ``` def find_vowels(sentence): count = 0 vowels = "aeiuoAEIOU" for letter in sentence: if letter in vowels: count += 1 print count ``` And then I thought that maybe I could use formatting to get them in the right places, but I am not sure of the notation that would be used. For example, one of the lines for the function could be: ``` 'a, e, i, o, and u appear, respectively, {(count1)}, {(count2)}, {(count3)}, {(count4)}, {(count5)} times' ``` I am not sure how I would be able to fit the above in the function.
You'd need to use a dictionary to store the values, since if you directly add the counts you lose information about exactly which vowel you are counting. ``` def countVowels(s): s = s.lower() #so you don't have to worry about upper and lower cases vowels = 'aeiou' return {vowel:s.count(vowel) for vowel in vowels} #a bit inefficient, but easy to understand ``` An alternate method would be: ``` def countVowels(s): s = s.lower() vowels = {'a':0,'e':0,'i':0,'o':0,'u':0} for char in s: if char in vowels: vowels[char]+=1 return vowels ``` to print this, you would do this: ``` def printResults(result_dict): print "a, e, i, o, u, appear, respectively, {a},{e},{i},{o},{u} times".format(**result_dict) ```
An easier answer would be to use the Counter class. ``` def count_vowels(s): from collections import Counter #Creates a Counter c, holding the number of occurrences of each letter c = Counter(s.lower()) #Returns a dictionary holding the counts of each vowel return {vowel:c[vowel] for vowel in 'aeiou'} ```
Function that takes a string as input and counts the number of times the vowel occurs in the string
[ "python" ]
Good time. Say, there is a table that contains data about exam results: ``` EXAM_RESULTS | ID | EXAM_TYPE | PERSON_ID | EXAM_RESULT | ``` A student can try to pass a concrete exam multiple times, so it is possible for one student to have four rows in the above table: three for failed attempts and one for success. I need to select all students that have not passed a concrete type of exam yet. Please suggest how to do this or where to read about such a trick. **EDIT**: Sample records: ``` | 1 | SDA | 111 | FAIL | | 2 | SDA | 111 | FAIL | | 3 | SDA | 111 | PASSED | | 4 | SDA | 222 | FAIL | | 4 | SDA | 222 | FAIL | ``` According to the task, the query must select only person 222, because he has not passed the SDA exam (person 111 eventually passed).
``` SELECT Person_ID FROM Exam_Result WHERE Exam_type = 'type_here' GROUP BY Person_ID HAVING COUNT(CASE WHEN EXAM_RESULT = 'PASSED' THEN 1 END) = 0 ```
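The HAVING trick above can be confirmed by loading the question's sample rows into `sqlite3` (COUNT skips the NULLs that the CASE expression produces for non-PASSED rows, so `= 0` means "no PASSED rows at all"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE Exam_Result (Exam_type TEXT, Person_ID INTEGER, Exam_Result TEXT)"
)
# Sample records from the question's EDIT
cur.executemany("INSERT INTO Exam_Result VALUES (?, ?, ?)", [
    ("SDA", 111, "FAIL"), ("SDA", 111, "FAIL"), ("SDA", 111, "PASSED"),
    ("SDA", 222, "FAIL"), ("SDA", 222, "FAIL"),
])

rows = cur.execute(
    "SELECT Person_ID FROM Exam_Result WHERE Exam_type = 'SDA' "
    "GROUP BY Person_ID "
    "HAVING COUNT(CASE WHEN Exam_Result = 'PASSED' THEN 1 END) = 0"
).fetchall()
print(rows)
```

Only person 222 comes back, matching the expected result in the question.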
``` /* Students that not have passed the exam: either haven't taken it or failed all their's attempts */ /* All the students */ select distinct Person_id from Exam_Results except /* this keyword is server dependent! MINUS for Oracle*/ /* Students that have passed */ select Person_Id from Exam_Results where (Exam_Result = 'passed') and (Exam_Type = 'my exam type') ```
SQL: disjoint select to one table
[ "sql" ]
First off, I know that in general having large numbers of wide columns is a bad idea, but this is the format I'm constrained to. I have an application that imports CSV files into a staging table before manipulating them and inserting/updating values in the database. The staging table is created on the fly and has a variable number of NVARCHAR columns into which the file is imported, plus two INT columns used as row IDs. One particular file I have to import is about 450 columns wide. With the 24 byte pointer used in a large NVARCHAR column, this adds up to around 10k by my calculations, and I get the error `Cannot create a row of size 11166 which is greater than the allowable maximum row size of 8060.` Is there a way around this or are my only choices modifying the importer to split the import or removing columns from the file?
If you are using SQL Server 2005, 2008 or 2012, you should be able to use NVARCHAR(max) or NTEXT which would be larger than 8,000 characters. MAX will give you 2^31 - 1 characters: <http://msdn.microsoft.com/en-us/library/ms186939(v=sql.90).aspx>
You can use text/ntext, which uses a 16-byte pointer, whereas varchar/nvarchar uses a 24-byte pointer. **NVARCHAR(max) or NTEXT** can store more than **8 KB** of data, but a record size cannot be greater than **8 KB** up to **SQL Server 2012**. If the data does not fit in the **8 KB page size**, the data of the larger column is moved to another page and a **24-byte** pointer (if the data type is varchar/nvarchar) is stored as a reference in the main row; if the data type is text/ntext, a **16-byte** pointer is used.
Is there a way around the 8k row length limit in SQL Server?
[ "sql", "sql-server", "t-sql", "sql-server-2008-r2" ]
Hi guys, I'm starting to study NLTK following the official book from the NLTK team. I'm in chapter 5, "Tagging", and I can't resolve one of the exercises on page 186 of the PDF version: *Given the list of past participles specified by cfd2['VN'].keys(), try to collect a list of all the word-tag pairs that immediately precede items in that list.* I tried this way: ``` wsj = nltk.corpus.treebank.tagged_words(simplify_tags=True) [wsj[wsj.index((word,tag))-1:wsj.index((word,tag))+1] for (word,tag) in wsj if word in cfd2['VN'].keys()] ``` but it gives me this error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/nltk/corpus/reader/util.py", line 401, in iterate_from for tok in piece.iterate_from(max(0, start_tok-offset)): File "/usr/local/lib/python2.7/dist-packages/nltk/corpus/reader/util.py", line 295, in iterate_from self._stream.seek(filepos) AttributeError: 'NoneType' object has no attribute 'seek' ``` I think I'm doing something wrong in accessing the wsj structure, but I can't figure out what is wrong! Can you help me? Thanks in advance!
`wsj` is of type `nltk.corpus.reader.util.ConcatenatedCorpusView` that behaves like a list (this is why you can use functions like `index()`), but "behind the scenes" NLTK never reads the whole list into memory, it will only read those parts from a file object that it needs. It seems that if you iterate over a CorpusView object and use `index()` (which requires iterating again) at the same time, the file object will return `None`. This way it works, though it is less elegant than a list comprehension: ``` for i in range(len(wsj)): if wsj[i][0] in cfd2['VN'].keys(): print wsj[(i-1):(i+1)] ```
Looks like both the index call and the slicing cause an exception: ``` wsj = nltk.corpus.treebank.tagged_words(simplify_tags=True) cfd2 = nltk.ConditionalFreqDist((t,w) for w,t in wsj) wanted = cfd2['VN'].keys() # just getting the index -> exception before 60 items for w, t in wsj: if w in wanted: print wsj.index((w,t)) # just slicing -> sometimes finishes, sometimes throws exception for i, (w,t) in enumerate(wsj): if w in wanted: print wsj[i-1:i+1] ``` I'm guessing it's caused by accessing previous items in a stream that you are iterating over. It works fine if you iterate once over `wsj` to create a list of indices and use them in a second iteration to grab the slices: ``` results = [ wsj[j-1:j+1] for j in [ i for i, (w,t) in enumerate(wsj) if w in wanted ] ] ``` As a side note: calling `index` without a `start` argument will return the first match every time.
Python NLTK exercise: chapter 5
[ "python", "nltk", "tagging" ]
Let's say I have this: ``` 1A=6 2A=4.5 3A=6 1B=7 2B=6 3B=7.5 1C=6 2C=6.75 3C=9 ``` I want to find the combination of numbers that yields the highest sum. None of the numbers before the letters can be used more than once and none of the letters after the numbers can be used more than once. E.g. `1A+2B+3C, 1C+2B+3A` are valid. `1A+1B+2B,3A+2B+3B,1A+2A+3A` are invalid. In this case the highest sum is `1A+2B+3C=21`. How would I find this result and combination using python? Thanks in advance
Actually, Python has libraries that will do this for you. See here: [Munkres](https://pypi.python.org/pypi/munkres/1.0.5.4#downloads)
~~For each of the entries with the same numeric prefix, find the maximum among them. Then sum up all those maximum values, and you will get the maximum sum with the given restriction.~~ If the numbers and the letters must be unique, then it becomes a problem that can be solved with [Hungarian algorithm](http://en.wikipedia.org/wiki/Hungarian_algorithm). > The Hungarian method is a combinatorial optimization algorithm that solves the [assignment problem](http://en.wikipedia.org/wiki/Assignment_problem) in polynomial time `[...]` From the description of [assignment problem](http://en.wikipedia.org/wiki/Assignment_problem): > The assignment problem is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics. It consists of finding a maximum weight [matching](http://en.wikipedia.org/wiki/Matching_(graph_theory)) in a weighted bipartite graph. > > In its most general form, the problem is as follows: > > > There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent-task assignment. It is required to perform all tasks by assigning exactly one agent to each task and exactly one task to each agent in such a way that the total cost of the assignment is minimized.
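For a 3x3 instance like the question's, brute force over the 3! = 6 permutations is a perfectly fine stand-in for the Hungarian algorithm (which matters when the matrix is large). A sketch using the question's scores:

```python
from itertools import permutations

# Scores from the question
values = {"1A": 6, "2A": 4.5, "3A": 6,
          "1B": 7, "2B": 6, "3B": 7.5,
          "1C": 6, "2C": 6.75, "3C": 9}

numbers = "123"

# Each permutation of the letters is one valid assignment:
# number i is paired with letter p[i], so no number or letter repeats
def total(p):
    return sum(values[n + letter] for n, letter in zip(numbers, p))

best = max(permutations("ABC"), key=total)
print(best, total(best))
```

This finds `1A+2B+3C`, matching the expected highest sum of 21 from the question.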
Find the Highest Sum in a Combination of Numbers
[ "python", "algorithm", "combinations", "python-itertools" ]
I need a module or strategy for detecting that a piece of data is written in a programming language, not syntax highlighting where the user specifically chooses a syntax to highlight. My question has two levels, and I would greatly appreciate any help: 1. Is there any package in Python that receives a string (a piece of data) and returns whether it belongs to any programming language syntax? 2. I don't necessarily need to recognize which syntax, just whether the string is source code at all. Any clues are deeply appreciated.
Maybe you can use existing multi-language syntax highlighters. Many of them can detect the language a file is written in.
You could have a look at methods around [Bayesian filtering](http://en.wikipedia.org/wiki/Bayesian_spam_filtering).
Syntax recognizer in python
[ "python", "syntax", "pattern-recognition" ]
Here's the problem: Create a trigger that prevents any change to the taking relation that would drop the overall average grade in any particular class below 2.5. Note: This trigger is not intended to address the average GPA of any given student, but rather it should address the average grade for all grades assigned in a particular class. Here's the schema: ``` Student-schema =(studentnum, name, standing, gpa, major) Class-schema = (schedulenum, semester, department, classnum, days, time, place, enrollment) Instructor-schema = (name, department, office) Teaches-schema = (name, schedulenum, semester) Taking-schema = (studentnum, schedulenum, semester, grade) ``` I'm having a terrible time with these triggers, but here's my attempt to make this work: ``` CREATE OR REPLACE TRIGGER stopChange AFTER UPDATE OR INSERT OR DELETE ON taking REFERENCING OLD AS old NEW AS new FOR EACH ROW DECLARE grd_avg taking.grade%TYPE; BEGIN SELECT AVG(grade) INTO grd_avg FROM taking WHERE studentnum = :new.studentnum AND schedulenum = :new.schedulenum AND semester = :new.semester; IF grd_avg < 2.5 THEN UPDATE taking SET grade = :old.grade WHERE studentnum = :old.studentnum AND schedulenum = :old.schedulenum AND semester = :old.semester; END IF; END; / ``` I'm obviously doing something wrong because when I then go to update or delete a tuple, I get the error: ``` ERROR at line 1: ORA-04091: table TAKING is mutating, trigger/function may not see it ORA-06512: at "STOPCHANGE", line 6 ORA-04088: error during execution of trigger 'STOPCHANGE' ``` Any advice? I'm using Oracle.
I think you can fix this by rewriting this as a *before* trigger, rather than an *after* trigger. However, this might be a little complicated for inserts and deletes. The idea is: ``` CREATE OR REPLACE TRIGGER stopChange BEFORE UPDATE OR INSERT OR DELETE ON taking REFERENCING OLD AS old NEW AS new FOR EACH ROW DECLARE grd_avg taking.grade%TYPE; BEGIN SELECT (SUM(grade) - :old.grade + :new.grade) / COUNT(*) INTO grd_avg FROM taking WHERE studentnum = :new.studentnum AND schedulenum = :new.schedulenum AND semester = :new.semester; IF grd_avg < 2.5 THEN :new.grade := :old.grade; END IF; END; ```
First you need to read about triggers, the mutating table error, and compound triggers: <http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#LNPLS2005> Your trigger is AFTER UPDATE OR INSERT OR DELETE. That means the trigger fires whenever you run UPDATE, INSERT or DELETE statements on this table. But you are trying to update the same table again inside your trigger, which is completely wrong. This is why you are getting the error: you cannot modify the same table the trigger is firing on. The purpose of a trigger is to fire automatically when the table is updated, inserted into or deleted from, in your case. What you need is a procedure, not a trigger.
Table is mutating, trigger/function may not see it (stopping an average grade from dropping below 2.5)
[ "sql", "oracle", "triggers" ]
I am currently trying to use Python and its eyeD3 library to extract the lyrics from an mp3 file. The lyrics have been embedded into the mp3 file already (via: MusicBee). I am trying to use eyeD3 to return the lyrics. I can't figure out how to do it. I have searched online extensively and all I've found were tutorials showing how to SET the lyrics. I just want to read them from the file. Here's my current code: ``` track = eyed3.load(path) tag = track.tag artist = tag.artist lyrics = tag.lyrics ``` **artist** returns the artist's name correctly but **lyrics** returns the following: ``` <eyed3.id3.tag.LyricsAccessor object at 0x27402d0> ``` How can I just return the raw text lyrics embedded into an mp3? Is this possible? Thank you so much in advance.
It looks like that is an iterator. Try ``` tag.lyrics[0] ``` or ``` for lyric in tag.lyrics: print lyric ``` As a last resort, print the object's attributes and look for useful functions ``` print dir(tag.lyrics) ```
this works for me: ``` tag.lyrics[0].text ```
Retrieve lyrics from an mp3 file in Python using eyeD3
[ "", "python", "mp3", "mutagen", "eyed3", "" ]
I wanted to limit the characters retrieved from a MySQL database and I know that LEFT() is the function to use. This is my query ``` SELECT id, LEFT(heading, 80) AS heading, LEFT(article, 20) AS article FROM news ORDER BY stamp DESC LIMIT 5; ``` But actually what I want is to limit the total number of characters retrieved from the 'heading' and 'article' columns combined... In other words, I always want the total to be 100 chars from both columns with priority to 'heading'... So, Mr. SQL, show 'heading' while you can (within the 100 chars range) and omit from 'article' as you wish, then if needed omit from 'heading' to reach the 100 chars condition. i.e. 'heading' gives 80 chars and 'article' gives 120 chars; therefore, delete 100 chars from 'article'. 'heading' gives 110 chars and 'article' gives 500 chars; therefore, delete 500 chars from 'article' and 10 chars from 'heading'
``` SELECT id, LEFT(heading, 100) AS heading, IF(LENGTH(heading)>=100, '', LEFT(article, 100-LENGTH(heading))) AS article FROM news ORDER BY stamp DESC LIMIT 5; ``` Obviously if you don't care about having it in 2 columns like your original query was, then use the `CONCAT` approach as the others mentioned.
Try this: ``` SELECT id, LEFT(CONCAT(heading, article), 100) AS heading_article FROM news ORDER BY stamp DESC LIMIT 5; ``` This concats both columns and then takes the `LEFT` 100 characters of the combined string; if you want a space character in between, just do `CONCAT(heading, ' ', article)`
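Not SQL, but the priority-budget rule from the first answer can be sketched in Python to make the intended behavior concrete. The function and variable names here are made up for illustration:

```python
# Hypothetical sketch of the 100-character budget logic: heading gets
# priority, and article only fills whatever budget is left over.
def truncate_pair(heading, article, budget=100):
    kept_heading = heading[:budget]            # mirrors LEFT(heading, 100)
    remaining = budget - len(kept_heading)     # mirrors 100 - LENGTH(heading)
    kept_article = article[:remaining] if remaining > 0 else ""
    return kept_heading, kept_article
```

With an 80-char heading and 120-char article, the article is cut to 20 chars; with a 110-char heading, the heading is cut to 100 and the article is dropped entirely, matching the examples in the question.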
Apply SELECT LEFT() on the total characters from multiple columns combined
[ "", "mysql", "sql", "select", "" ]
I have a problem viewing the following `DataFrame`: ``` n = 100 foo = DataFrame(index=range(n)) foo['floats'] = np.random.randn(n) foo ``` The problem is that it does not print all rows per default in ipython notebook, but I have to slice to view the resulting rows. Even the following option does not change the output: ``` pd.set_option('display.max_rows', 500) ``` Does anyone know how to display the whole array?
Set `display.max_rows`: ``` pd.set_option('display.max_rows', 500) ``` For older versions of pandas (<=0.11.0) you need to change both `display.height` and `display.max_rows`. ``` pd.set_option('display.height', 500) pd.set_option('display.max_rows', 500) ``` See also [`pd.describe_option('display')`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.describe_option.html). You can set an option only *temporarily* for this one time like this: ``` from IPython.display import display with pd.option_context('display.max_rows', 100, 'display.max_columns', 10): display(df) #need display to show the dataframe when using with in jupyter #some pandas stuff ``` You can also reset an option back to its default value like this: `pd.reset_option('display.max_rows')` And reset all of them back: `pd.reset_option('all')`
``` pd.set_option('display.max_rows', 500) df ``` **Does not work** in Jupyter! Instead use: ``` pd.set_option('display.max_rows', 500) df.head(500) ```
Pandas: Setting no. of max rows
[ "", "python", "formatting", "pandas", "jupyter-notebook", "" ]
I need SQL that will give the position of a student ranked in order of marks scored in a specific examtyp, e.g. CAT1 only. The SQL below gives the position of the student but does not distinguish the examtyp; it ranks without considering the examtyp. ``` res_id admNo stream examtyp termId marks grade points year 1 2129 0 CAT1 1 525 C 62 2013 2 4093 0 CAT1 1 569 B+ 69 2013 3 2129 0 CAT2 1 550 B+ 67 2013 4 4093 0 CAT2 1 556 B+ 68 2013 6 2129 0 FINAL 1 559 B+ 68 2013 7 2129 0 AVERAGE 1 545 B 66 2013 7 4093 0 FINAL 1 581 B+ 70 2013 8 4093 0 AVERAGE 1 569 B+ 69 2013 $sql = "SELECT 1 + (SELECT count(*) FROM $table a WHERE a.total_marks > b.total_marks ) AS rank FROM $table b WHERE admNo=? AND examCategory=? AND termId=? AND year=? ORDER BY rank LIMIT 1"; $res = $this->db->query($sql, array($admNo, $examCategory, $term, $year)); ```
This should work for you: ``` SELECT res_ID, admNo, stream, examtyp, termId, grade, points, `year`, Position FROM ( SELECT @r:= CASE WHEN @e = examtyp THEN @r + CASE WHEN @p = points THEN 0 ELSE @i END ELSE 1 END Position, @i:= CASE WHEN @p = points THEN @i + 1 ELSE 1 END incr, @e:= Examtyp, @p:= points, res_ID, admNo, stream, examtyp, termId, grade, points, `year` FROM T, (SELECT @e:= '') e, (SELECT @r:= 0) r, (SELECT @p:= 0) p, (SELECT @i:= 0) i ORDER BY examtyp, points ) T WHERE T.admNo = 4093 AND T.Examtyp = 'CAT1' ``` It uses the same principle of using variables that has been suggested, however also partitions by `examtyp`, resetting the position to 0 for each new exam type, it also records the previous points to deal with ties, so if 3 people get the same mark they all get the same position. **[Example on SQL Fiddle](http://www.sqlfiddle.com/#!2/b8a73/3)** *Note in the bottom pane of the fiddle the results for `AVERAGE` are equal so both get position = 1*
Try this query: ``` SET @rank=0; select @rank := @rank+1 AS rank, result_id, marks_scored, admNo, Aggregate_points, year from table_name order by marks_scored DESC ```
Mysql Sql statement to Get Position of Student Ranked in order of marks scored
[ "", "mysql", "sql", "" ]
I have a problem I'm not sure how to solve elegantly. **Background Information** I have a table of widgets. Each widget is assigned an ID from a range of numbers, let's say between 1-999. The values of 1-999 is saved in my database as "lower\_range" and "upper\_range" in a table called "config". When a user requests to create a new widget using my web app, I need to be able to do the following: * generate a random number between 1 and 999 using lua's math.random function or maybe a random number generator in sqlite (So far, in my tests, lua's math.random always returns the same value...but that's a different issue) * do a select statement to see if there already is a widget with this number assigned... * if not, create the new widget. * otherwise repeat process until you get a number that is not currently in use. **Problem** The problem I see with the above logic is two-fold: 1. the algorithm can potentially take a long time because I have to keep searching until I find a unique value. 2. How do I prevent simultaneous requests for new widget numbers generating the same value? Any suggestions would be appreciated. Thanks
Generate your random numbers ahead of time and store them in a table; make sure the numbers are unique. Then when you need to get the next number, just check how many have already been assigned and get the next number from your table. So, instead of * Generate a number between 1-999 * Check if it's already assigned * Generate a new number, and so on. do this: * Generate array of 999 elements that have values 1-999 in some random order * Your `GetNextId` function becomes `return ids[currentMaxId+1]` To manage simultaneous requests, you need to have some resource that generates a proper sequence. The easiest is probably to use a key in your widget table as the index in the `ids` array. So, add a record to the `widgets` table first, get its key and then generate widget ID using `ids[key]`.
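The pre-generated-pool idea above can be sketched in Python. The names and the explicit seed are illustrative assumptions for the demo, not part of any particular database API:

```python
import random

# Build the whole 1-999 range once, shuffle it, then hand out ids by index.
def make_id_pool(lower=1, upper=999, seed=None):
    ids = list(range(lower, upper + 1))
    # Seeded only to make the sketch reproducible; in practice use the default RNG.
    random.Random(seed).shuffle(ids)
    return ids

def get_next_id(ids, current_max_key):
    # current_max_key plays the role of the widget table's own sequential key,
    # which serializes concurrent requests so no two callers get the same index.
    return ids[current_max_key]
```

Because the pool is a permutation of 1-999, every id is unique by construction and there is never any retry loop.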
Create a table to store the keys and the 'used' property. ``` CREATE TABLE KEYS ("id" INTEGER, "used" INTEGER) ; ``` Then use the following to find a new key ``` select id from KEYS where used = 0 order by RANDOM() limit 1 ```
randomly generating unique number between 1-999 for primary key in table
[ "", "sql", "sqlite", "lua", "" ]
``` INSERT INTO T1 (xField) SELECT 7 AS xField WHERE (SELECT COUNT(*) FROM T1) = 0 ``` Basically, if a million users all run this at the same time, could there ever end up being more than one row in T1? ``` Thread A: SELECT COUNT(*) FROM T1: 0 Thread B: SELECT COUNT(*) FROM T1: 0 Thread A: INSERT INTO T1... Thread B: INSERT INTO T1... ``` Or is that guaranteed to never happen, because it's all one statement? If that isn't safe, what about something like this? Table T2 (GoNorth and GoSouth must never both be 1): ``` ID GoNorth GoSouth 1 0 0 ``` Then this happens: ``` User A: UPDATE T2 SET GoNorth = 1 WHERE GoSouth = 0 User B: UPDATE T2 SET GoSouth = 1 WHERE GoNorth = 0 Thread A: Find rows where GoSouth = 0 Thread B: Find rows where GoNorth = 0 Thread A: Found a row where GoSouth = 0 Thread B: Found a row where GoNorth = 0 Thread A: Setting GoNorth = 1 for the located row Thread B: Setting GoSouth = 1 for the located row ``` And the result: ``` ID GoNorth GoSouth 1 1 1 ``` What are the rules for what can happen at the same time and what can't? My database engine is "Microsoft SQL Server 2008 R2 (SP2)".
No. It's written as a single statement, and therefore the rules of SQL provide for the entire statement to be atomic. However, there are some caveats here. First of all, this could mean creating quite a large number of locks, to the point where your table becomes effectively unavailable until the query is completed. In other words, in order to guarantee safety, you'd be throwing away the concurrency. The other caveat is that this only holds for the default isolation level. A weaker isolation level may allow the query to run without creating the appropriate locks. A really weak isolation level might allow it to ignore locks.
**Answer to the original question, where #temp tables were used:** If you are only working with # tables, i.e. temporary tables, then your scenario is not going to happen. # tables are connection specific, and I believe your million users won't be sharing the same connection at the same time. **Updated answer:** If you are using concrete tables then yes, there can be more than one row in T1 if multiple users are running insert statements. However, there are other things to consider here. For more details you should read [SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar](http://blog.sqlauthority.com/2012/11/15/sql-server-concurrency-basics-guest-post-by-vinod-kumar/)
SQL Basic Concurrency
[ "", "sql", "sql-server-2008", "concurrency", "" ]
I am cleaning out a database table without a primary key (I know, I know, what were they thinking?). I cannot add a primary key, because there is a duplicate in the column that would become the key. The duplicate value comes from one of two rows that are in all respects identical. I can't delete the row via a GUI (in this case MySQL Workbench, but I'm looking for a database agnostic approach) because it refuses to perform tasks on tables without primary keys (or at least a UQ NN column), and I cannot add a primary key, as explained above. How can I delete one of the twins?
One option to solve your problem is to create a new table with the same schema, and then do: ``` INSERT INTO new_table (SELECT DISTINCT * FROM old_table) ``` and then just rename the tables. You will of course need approximately the same amount of space as your table requires spare on your disk to do this! It's not efficient, but it's incredibly simple.
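As a rough illustration of the `SELECT DISTINCT` trick, here is the same sequence run against an in-memory SQLite database (table and column names are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_table (id INTEGER, name TEXT)")
# (2, 'b') is the pair of perfectly identical rows.
con.executemany("INSERT INTO old_table VALUES (?, ?)",
                [(1, "a"), (2, "b"), (2, "b"), (3, "c")])

# Copy only distinct rows into a new table with the same schema.
con.execute("CREATE TABLE new_table (id INTEGER, name TEXT)")
con.execute("INSERT INTO new_table SELECT DISTINCT * FROM old_table")

rows = con.execute("SELECT id, name FROM new_table ORDER BY id").fetchall()
```

After the copy, only one of the twins survives; the last step in practice would be renaming `new_table` over `old_table`.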
``` SET ROWCOUNT 1 DELETE FROM [table] WHERE .... SET ROWCOUNT 0 ``` This will only delete one of the two identical rows
How can I delete one of two perfectly identical rows?
[ "", "sql", "duplicates", "" ]
I already have a database named "mydb", where I have a table called "AERODROME". My models.py looks like this: ``` from django.db import models class Aerodrome(models.Model): Name = models.CharField(max_length=48) Latitude = models.DecimalField(decimal_places=4, max_digits=7) Longitude = models.DecimalField(decimal_places=4, max_digits=7) ``` And I have this method at views.py: ``` from django.shortcuts import render from helloworld.models import Aerodrome def aerodromes(request): return render(request, 'aerodromes.html', {'aerodromes': Aerodrome.objects.all()}) ``` At my templates folder, I have aerodromes.html, which is quite simple too: ``` <!doctype html> <html> <head> </head> <body> <table> {% for aerodrome in aerodromes %} <tr> <td>{{ aerodrome.Name }}</td> <td>{{ aerodrome.Longitude }}</td> <td>{{ aerodrome.Latitude }}</td> </tr> {% endfor %} </table> </body> </html> ``` When I test through my browser, I get an error, because it looks like it's accessing the table with a wrong name. My application is called "helloworld" since it's a test, and instead of accessing mydb.AERODROMES, it is accessing mydb.helloworld\_aerodrome (also note the case-sensitivity problem). Since I already had the database populated, I haven't run syncdb (I understood it wasn't necessary, but maybe this is the problem). So, the problem is that I don't know why it is adding "helloworld\_" to the table name, and also that I still don't know for sure where exactly I am fixing the table name (and from there comes the case-sensitivity problem, having "aerodrome" and not "AERODROMES"). Any help here?
Use the `Meta` class ([documentation here](https://docs.djangoproject.com/en/dev/ref/models/options/)) inside your `models.py` model definition: ``` class Aerodrome(models.Model): Name = models.CharField(max_length=48) Latitude = models.DecimalField(decimal_places=4, max_digits=7) Longitude = models.DecimalField(decimal_places=4, max_digits=7) class Meta: db_table = 'AERODROMES' ``` This will override the default naming scheme for model tables in the SQL database. --- You can also add the `managed` attribute to control whether or not `python manage.py syncdb` and `python manage.py flush` manage the table. ``` class Aerodrome(models.Model): # ... class Meta: db_table = 'AERODROMES' managed = False ``` With this you can `syncdb` without fear of wiping your data.
from the django docs: It is **strongly advised** that you use **lowercase** table names when you override the table name via db\_table, particularly if you are using the MySQL backend. See the MySQL notes for more details. <https://docs.djangoproject.com/en/1.11/ref/databases/#table-names>
Database table names with Django
[ "", "python", "mysql", "django", "" ]
What is the pythonic and efficient way of writing integers from `1` to `10**6` where digits are in a monotonically increasing order? For example: `(1,2,3,4,5,6,7,8,9,10,11,20,21,22,30,31,32,33,...)` This gets the job done but looks pretty ugly. ``` nums = [10**0*k6 for k6 in range(1,10)] + [10**1*k5 + 10**0*k6 for k5 in range(1,10) for k6 in range(k5+1)] + [10**2*k4 + 10**1*k5 + 10**0*k6 for k4 in range(1,10) for k5 in range(k4+1) for k6 in range(k5+1)] + [10**3*k3 + 10**2*k4 + 10**1*k5 + 10**0*k6 for k3 in range(1,10) for k4 in range(k3+1) for k5 in range(k4+1) for k6 in range(k5+1)] + [10**4*k2 + 10**3*k3 + 10**2*k4 + 10**1*k5 + 10**0*k6 for k2 in range(1,10) for k3 in range(k2+1) for k4 in range(k3+1) for k5 in range(k4+1) for k6 in range(k5+1)] + [10**5*k1 + 10**4*k2 + 10**3*k3 + 10**2*k4 + 10**1*k5 + 10**0*k6 for k1 in range(1,10) for k2 in range(k1+1) for k3 in range(k2+1) for k4 in range(k3+1) for k5 in range(k4+1) for k6 in range(k5+1)] ```
This returns 8001 numbers for `max_digits=6`: ``` def ascending(ndig, first_digit_max): for x in range(0, first_digit_max+1): if ndig == 1: yield [x] else: for y in ascending(ndig-1, x): yield [x] + y max_digits = 6 nums = sorted([int(''.join(map(str, num))) for ndig in range(1, max_digits+1) for num in ascending(ndig, 9) if any(num)]) ``` `ascending` yields lists of `ndig` digits, where the first digit is lower or equal to `first_digit_max`. It works recursively, so if it is called with `ndig=6`, it calls itself with `ndig=5`, etc. until it calls itself with `ndig=1` where it returns just individual digits. These are lists, so they have to be checked if any of these digits is different to zero (otherwise it would return 0, 00, 000, etc. as well) and converted into numbers.
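As a side check, the 8001 figure can be cross-checked combinatorially: each qualifying number corresponds to a non-empty multiset of up to six digits, arranged in non-increasing order (which is the ordering the question's pattern 10, 21, 20... actually uses). This snippet is an independent verification, not part of the answer above:

```python
from itertools import combinations_with_replacement

# Each multiset of d digits (not all zero) yields exactly one number whose
# digit string is non-increasing; lengths never collide because the leading
# digit is the maximum and therefore nonzero.
nums = set()
for ndig in range(1, 7):
    for combo in combinations_with_replacement(range(10), ndig):
        if any(combo):  # skip the all-zero multiset
            digits = sorted(combo, reverse=True)
            nums.add(int("".join(map(str, digits))))
count = len(nums)
```

The count agrees with the closed form sum over d of C(9+d, d) - 1 = 9 + 54 + 219 + 714 + 2001 + 5004 = 8001.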
``` def gen(size_digits): if size_digits == 0: return ( i for i in range(10) ) else: return ( new_dig*(10**size_digits) + old_digit for old_digit in gen(size_digits-1) for new_dig in range(10) if new_dig < int(str(old_digit)[0]) ) l = [ num for num in gen(6) ] ```
Pythonic way of writing a sequence of integers in monotonically increasing order
[ "", "python", "" ]
I'm having a problem with a slow query. Consider the table **tblVotes** - and it has two columns - **VoterGuid, CandidateGuid**. It holds votes cast by voters to any number of candidates. There are over 3 million rows in this table - with about 13,000 distinct voters casting votes to about 2.7 million distinct candidates. The total number of rows in the table is currently 6.5 million. What my query is trying to achieve is getting - in the quickest and most cache-efficient way possible (we are using SQL Express) - the top 1000 candidates based on the number of votes they have received. The code is: ``` SELECT CandidateGuid, COUNT(*) CountOfVotes FROM dbo.tblVotes GROUP BY CandidateGuid HAVING COUNT(*) > 1 ORDER BY CountOfVotes DESC ``` ... but this takes a scarily long time to run on SQL express when there is a very full table. Can anybody suggest a good way to speed this up and get it running in quick time? CandidateGuid is indexed individually - and there is a composite primary key on CandidateGuid+VoterGuid.
If you have only two columns in a table, a "normal" index on those two fields won't help you much, because it is in fact a copy of your entire table, only ordered. First check the execution plan to see if your index is being used at all. Then consider changing your index to a clustered index.
I don't know if SQL Server is able to use the composite index to speed this query, but if it is able to do so you would need to express the query as `SELECT CandidateGUID, COUNT(VoterGUID) FROM . . .` in order to get the optimization. This is "safe" because you know VoterGUID is never NULL, since it's part of a PRIMARY KEY. If your composite primary key is specified as (CandidateGUID, VoterGUID) you will not get any added benefit of a separate index on just CandidateGUID -- the existing index can be used to optimize any query that the singleton index would assist in.
Very slow SQL query
[ "", "sql", "sql-server", "performance", "sql-server-express", "" ]
What would be an efficient (time, easy) way of grouping a `2D` `NumPy` matrix rows by different column conditions (e.g. group by column 2 values) and running `f1()` and `f2()` on each of those groups? Thanks
``` from operator import itemgetter sorted(my_numpy_array,key=itemgetter(1)) ``` or maybe something like ``` from itertools import groupby from operator import itemgetter print groupby(my_numpy_array,key = itemgetter(1)) ```
If you have an array `arr` of shape `(rows, cols)`, you can get the vector of all values in column 2 as ``` col = arr[:, 2] ``` You can then construct a boolean array with your grouping condition, say group 1 is made up of those rows with have a value larger than 5 in column 2: ``` idx = col > 5 ``` You can apply this boolean array directly to your original array to select rows: ``` group_1 = arr[idx] group_2 = arr[~idx] ``` For example: ``` >>> arr = np.random.randint(10, size=(6,4)) >>> arr array([[0, 8, 7, 4], [5, 2, 6, 9], [9, 5, 7, 5], [6, 9, 1, 5], [8, 0, 5, 8], [8, 2, 0, 6]]) >>> idx = arr[:, 2] > 5 >>> arr[idx] array([[0, 8, 7, 4], [5, 2, 6, 9], [9, 5, 7, 5]]) >>> arr[~idx] array([[6, 9, 1, 5], [8, 0, 5, 8], [8, 2, 0, 6]]) ```
How to group rows in a Numpy 2D matrix based on column values?
[ "", "python", "numpy", "" ]
This question might be too noob, but I was still not able to figure out how to do it properly. I have a given array `[0,0,0,0,0,0,1,1,2,1,0,0,0,0,1,0,1,2,1,0,2,3]` (arbitrary elements from 0-5) and I want to have a counter for the occurrence of zeros in a row. ``` 1 times 6 zeros in a row 1 times 4 zeros in a row 2 times 1 zero in a row => (2,0,0,1,0,1) ``` So the dictionary consists of `n*0` values as the index and the counter as the value. The final array consists of 500+ million values that are unsorted like the one above.
This should get you what you want: ``` import numpy as np a = [0,0,0,0,0,0,1,1,2,1,0,0,0,0,1,0,1,2,1,0,2,3] # Find indexes of all zeroes index_zeroes = np.where(np.array(a) == 0)[0] # Find discontinuities in indexes, denoting separated groups of zeroes # Note: Adding True at the end because otherwise the last zero is ignored index_zeroes_disc = np.where(np.hstack((np.diff(index_zeroes) != 1, True)))[0] # Count the number of zeroes in each group # Note: Adding 0 at the start so first group of zeroes is counted count_zeroes = np.diff(np.hstack((0, index_zeroes_disc + 1))) # Count the number of groups with the same number of zeroes groups_of_n_zeroes = {} for count in count_zeroes: if groups_of_n_zeroes.has_key(count): groups_of_n_zeroes[count] += 1 else: groups_of_n_zeroes[count] = 1 ``` `groups_of_n_zeroes` holds: ``` {1: 2, 4: 1, 6: 1} ```
Similar to @fgb's, but with a more numpythonic handling of the counting of the occurrences: ``` items = np.array([0,0,0,0,0,0,1,1,2,1,0,0,0,0,1,0,1,2,1,0,2,3]) group_end_idx = np.concatenate(([-1], np.nonzero(np.diff(items == 0))[0], [len(items)-1])) group_len = np.diff(group_end_idx) zero_lens = group_len[::2] if items[0] == 0 else group_len[1::2] counts = np.bincount(zero_lens) >>> counts[1:] array([2, 0, 0, 1, 0, 1], dtype=int64) ```
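For comparison, a plain-Python version of the same run-length counting can be written with `itertools.groupby`. This is a sketch alongside the NumPy answers, not a replacement for them at the stated 500-million-element scale:

```python
from itertools import groupby

# groupby collapses consecutive equal elements into runs; we keep only the
# zero runs and tally how many runs of each length occur.
def zero_run_counts(items):
    counts = {}
    for value, run in groupby(items):
        if value == 0:
            length = sum(1 for _ in run)
            counts[length] = counts.get(length, 0) + 1
    return counts

a = [0,0,0,0,0,0,1,1,2,1,0,0,0,0,1,0,1,2,1,0,2,3]
```

For the sample array this yields one run of six zeros, one run of four, and two runs of a single zero, matching both answers above.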
Counting same elements in an array and create dictionary
[ "", "python", "numpy", "count", "" ]
OK, so I've got this line of code in a search stored procedure: ``` SET @where = 'job_code = ''' + REPLACE(@job_code, '''', '''''') + '''' ``` and there are basically two operations I'd like to streamline -**the first being surrounding the concatenated value in single quotes.** Obviously, in the above statement, I'm escaping a `'` by using two `''` and then ending the string with a `'` so I can concatenate the actual value. There's got to be a better way! **The second of the operations** would be the `REPLACE(@job_code, '''', '''''')` where I'm escaping any single quotes that might exist in the field. **Isn't there a much more elegant way of writing this line of code as a whole?** I thought it was the `ESCAPE` keyword but that's tied tightly to the `LIKE` statement, so no go there.
Not sure how you execute your sql query, if you use sp\_executesql, could be something like this ``` EXECUTE sp_executesql N'SELECT * FROM YouTable WHERE job_code = @job_code', N'@job_code varchar(100)', @job_code = @job_code; ```
The parameterized query answer is probably the real "right answer", but to answer your original question, what you want is [QUOTENAME()](http://msdn.microsoft.com/en-us/library/ms176114.aspx). More specifically, the single-quote version: ``` SET @where = 'job_code = ' + QUOTENAME(@job_code, '''') ``` Do note the length limit on this (input is a `sysname`, meaning 128 characters), though, as it is intended to quote the names of database objects and not as a general-purpose mechanism.
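To make the "parameterized query is the real right answer" point concrete outside of T-SQL, here is a minimal sketch using Python's `sqlite3` as a stand-in driver (the table and value are invented). The driver binds the value, so the embedded quote needs no escaping at all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (job_code TEXT)")
# The value contains a single quote, yet no REPLACE/doubling is needed.
con.execute("INSERT INTO jobs VALUES (?)", ("O'Reilly",))

job_code = "O'Reilly"
row = con.execute(
    "SELECT job_code FROM jobs WHERE job_code = ?", (job_code,)
).fetchone()
```

The same principle is what `sp_executesql` gives you on SQL Server: the value never passes through string concatenation, so quoting and injection concerns disappear.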
A more elegant way of escaping dynamic SQL?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I get the error "Column not found" any time I run the following code, even though the column exists in my table. I am using an Access database; any help would be appreciated. ``` public class Trial1 { public static void main (String[]args){ try{ Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); String url = "jdbc:odbc:SENSOR"; String user = ""; String pass = ""; Connection con = DriverManager.getConnection(url,user,pass); Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,ResultSet.CONCUR_UPDATABLE); // stmt indicates the database connection succeeded ResultSet rs = stmt.executeQuery("select MAX(LevelNum) from NList"); if (rs.next()){ int w = rs.getInt("LevelNum"); int x= 3; double i = Math.pow(2, (w-x))-1; System.out.printf("i is %f",i);} stmt.close(); con.close(); }catch(Exception e) { System.out.println("Error" + e); } } } ```
Assuming the error is when you get the result rather than when you execute the query, you probably need something like this instead ``` // ... ResultSet rs = stmt.executeQuery("select MAX(LevelNum) as maxLevel from NList"); if (rs.next()) { int w = rs.getInt("maxLevel"); // ... etc. } ```
Try this... ``` class Trial1 { public static void main (String[]args){ try{ Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); String url = "jdbc:odbc:SENSOR"; String user = ""; String pass = ""; Connection con = DriverManager.getConnection(url,user,pass); Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,ResultSet.CONCUR_UPDATABLE); ResultSet rs = stmt.executeQuery("select MAX(LevelNum) as LEVELNUM from NList"); if (rs.next()){ int w = rs.getInt("LEVELNUM"); int x= 3; double i = Math.pow(2, (w-x))-1; System.out.printf("i is %f",i);} stmt.close(); con.close(); }catch(Exception e) { System.out.println("Error" + e); } } ```
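The aliasing fix can be reproduced in miniature with Python's `sqlite3`, used here only as a convenient stand-in for the JDBC/Access setup: once the aggregate is aliased, the result can be read back by column name, which is exactly what `rs.getInt("LevelNum")` was failing to do against the unnamed `MAX(LevelNum)` column.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row  # allows reading columns by name, like getInt(name)
con.execute("CREATE TABLE NList (LevelNum INTEGER)")
con.executemany("INSERT INTO NList VALUES (?)", [(3,), (7,), (5,)])

# The alias gives the aggregate a stable column name to look up.
row = con.execute("SELECT MAX(LevelNum) AS maxLevel FROM NList").fetchone()
max_level = row["maxLevel"]
```

Without the `AS maxLevel` alias, the column name the driver reports for the aggregate is driver-dependent, which is the root of the "Column not found" error.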
JDBC selecting the Max value from an Access table
[ "", "sql", "jdbc", "ms-access-2007", "" ]
Here is my requirement. I already created a worker registration screen, shown below: ![enter image description here](https://i.stack.imgur.com/fGqkL.png) I need to change the alignment and place the EPF No field after the Employer No field (with no label). Here is my code in view.xml: ``` <field name="employer_no" style="width: 30%%" /> <field name="epf_no" style="width: 30%%" class="bpl_worker_epf_no" /> ``` I tried with a CSS file, but it did not affect the field; it just changed my field's label color. ``` .bpl_worker_epf_no { color: blue; top: 255px; left: 45px } ``` When I tried nolabel="1", it messed up my layout and some fields were arranged in an incorrect pattern. Please help me sort out this issue. Thanks
I know it's a bit to late, but you could have done something like: ``` <label for="field_B"/> <div> <field name="field_A" class="oe_inline"/> <field name="field_B" class="oe_inline"/> </div> ``` And you'll get the desire output.
AFAIK there's no such option by default. Have a look at this: <http://planet.domsense.com/en/2012/09/openerp-custom-fields-colors-and-styles/>. Never tried it on v7, though.
How to modify css in OpenERP 7?
[ "", "python", "css", "xml", "odoo", "" ]
I'm making a program that takes input and converts it to morse code in the form of computer beeps but I can't figure out how to make it so I can put more than one letter in the input without getting an error. Here is my code: ``` import winsound import time morseDict = { 'a': '.-', 'b': '-...', 'c': '-.-.', 'd': '-..', 'e': '.', 'f': '..-.', 'g': '--.', 'h': '....', 'i': '..', 'j': '.---', 'k': '-.-', 'l': '.-..', 'm': '--', 'n': '-.', 'o': '---', 'p': '.--.', 'q': '--.-', 'r': '.-.', 's': '...', 't': '-', 'u': '..-', 'v': '...-', 'w': '.--', 'x': '-..-', 'y': '-.--', 'z': '--..' } while True: inp = raw_input("Message: ") a = morseDict[inp] morseStr = a for c in morseStr: print c if c == '-': winsound.Beep(800, 500) elif c == '.': winsound.Beep(800, 100) else: time.sleep(0.4) time.sleep(0.2) ``` Right now it takes one letter at a time but I want it to take phrases.
just add an extra for loop and loop through the characters in your input to get the message! But don't forget to end your loop when necessary! In the following code I made it such that after the message has been decoded, it asks if you would like to send another, if you type "n" it will quit the loop! ``` going = True while going: inp = raw_input("Message: ") for i in inp: a = morseDict[i] morseStr = a for c in morseStr: print c if c == '-': winsound.Beep(800, 500) elif c == '.': winsound.Beep(800, 100) else: time.sleep(0.4) time.sleep(0.2) again = raw_input("would you like to send another message? (y)/(n) ") if again.lower() == "n": going = False ``` now you still have one problem...you have not accounted for spaces!! So you can still only send words! If I am correct, a space between words is a fixed timed silence in morse code, so what I would say you should do is add: ``` " ": 'x' ``` this way it will not return an error when trying to find the instance of the space and it will run in your `else` statement and add an extra .4 seconds before the next word!
Try changing your loop to something like this: ``` while True: inp = raw_input("Message: ") for char in inp: for x in morseDict[char]: print x if x == "-": winsound.Beep(800, 500) elif x == ".": winsound.Beep(800, 100) else: time.sleep(0.4) time.sleep(0.2) ``` That way you iterate first over the characters in the input, then you look up the character in `morseDict` and iterate over the value of `morseDict[char]`.
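The translation step on its own can be factored into a small, testable function. No `winsound`, and only a trimmed-down dictionary, so this is a sketch rather than the full program:

```python
# Trimmed table for the example; the question's morseDict has the full alphabet.
morse = {'s': '...', 'o': '---', 'e': '.', 't': '-'}

def to_morse(message, table):
    letters = []
    for ch in message.lower():
        if ch == ' ':
            letters.append('/')   # conventional word separator
        elif ch in table:
            letters.append(table[ch])
    return ' '.join(letters)
```

Separating translation from playback also makes the space-handling concern from the first answer easy to deal with: a space simply becomes a longer pause (or a `/` marker) instead of a dictionary lookup that raises `KeyError`.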
Allowing multiple characters in morse code converter
[ "", "python", "dictionary", "morse-code", "" ]
Say I have a table of books with columns author\_name, book\_name, and page\_count. How could I write SQL that would find me instance where an author has written multiple books and at least two books by the same author have different page counts? I've managed to retrieve a list of authors with multiple books by ``` SELECT author_name FROM books GROUP BY author_name HAVING COUNT(book_name) > 1 ``` which I believe does that, but how do I then check each book to compare their page counts?
You can try this: ``` SELECT author_name FROM books GROUP BY author_name HAVING COUNT(distinct page_count) > 1 ``` This doesn't look for multiple books, because if there are multiple page counts, then there are multiple books. For performance reasons, I usually use something like this: ``` SELECT author_name FROM books GROUP BY author_name HAVING min(page_count) <> max(page_count) ``` Usually, `count(distinct)` is more expensive than just doing a `min()` and `max()`. If you want to get a list of all the books, then join back to this list. Here is an example using `in` with a subquery: ``` select b.* from books b where b.author in (SELECT author_name FROM books GROUP BY author_name HAVING min(page_count) <> max(page_count) ) ```
This should do it: ``` SELECT author_name FROM ( SELECT author_name, page_count FROM books GROUP BY author_name, page_count ) GROUP BY author_name HAVING COUNT(*) > 1 ```
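Here is the first answer's `MIN()/MAX()` trick run against a tiny in-memory SQLite table (sample data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (author_name TEXT, book_name TEXT, page_count INTEGER)")
con.executemany("INSERT INTO books VALUES (?, ?, ?)", [
    ("Ann", "A1", 100), ("Ann", "A2", 250),  # two books, different page counts
    ("Bob", "B1", 300), ("Bob", "B2", 300),  # two books, same page count
    ("Cat", "C1", 120),                      # a single book
])

# An author has books with differing page counts exactly when the group's
# minimum and maximum page counts disagree.
authors = [r[0] for r in con.execute(
    "SELECT author_name FROM books GROUP BY author_name "
    "HAVING MIN(page_count) <> MAX(page_count)")]
```

Only "Ann" qualifies: "Bob" wrote multiple books but at identical page counts, and "Cat" wrote only one, so neither group has differing MIN and MAX.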
How can I use SQL to find records where say an author has multiple books with different page counts?
[ "", "sql", "" ]
Long story short, when I write the following: ``` sudo easy_install MySQL-python ``` I get the error > EnvironmentError: mysql\_config not found All right, so there are plenty of threads and the like on how to fix that, so I run this code: ``` export PATH=$PATH:/usr/local/mysql/bin ``` Then I rerun my sudo code: ``` sudo easy_install MySQL-python ``` Then I get the following error. > Setup script exited with error: command 'llvm-gcc-4.2' failed with exit status 1 Google/Stack Overflow that, and I am told to download a [GCC](http://en.wikipedia.org/wiki/GNU_Compiler_Collection) package which I did the other day, 200 MB's or there-abouts and still no fix. At this point I am lost, they say insanity is doing the same thing over and over while expecting a different result. Well, I've continually run the aforementioned code expecting a different result, so I'm not to far away from going insane. At this point in my Python career, I am new to this, but I am willing to try pretty much anything to get this up and running. If it helps I am officially running, Mac OS X 10.7.5, and I do have [MAMP](https://en.wikipedia.org/wiki/MAMP) installed (is that an issue?) Also, the other day when I was trying all of this for the first time I installed (reinstalled?) MySQL, so I'm really in a tough spot at this point. Is there a fix? I've racked my brain, searched Google, read Stack Overflow, and spent hours trying to figure this out to no avail.
Another option is to use [pymysql](https://github.com/PyMySQL/PyMySQL). It is a pure-Python client connection to MySQL, so you don't have to mess around with compiling; that's a good exercise, but it can be frustrating if you are just trying to get something done. pymysql follows the same API as MySQLdb, so it can essentially be used as a drop-in replacement. Also, it used to be that MySQLdb did not work with Python 3 while pymysql didn't have that problem, which also induced me to switch (this may have changed since). pymysql can be slower than MySQLdb, but you'll have to see if you notice that; it is also under a different license (MIT for pymysql, GPL for MySQLdb).
Here's what I would install, especially if you want to use [homebrew](http://brew.sh):

* XCode and the command line tools (as suggested by @7stud, @kjti)
* Install [homebrew](http://brew.sh/)
* `brew install mysql-connector-c`
* `pip install mysql-python`
Installing MySQL Python on Mac OS X
[ "python", "mysql", "macos" ]
I have a query (an excerpt from a stored procedure) that looks something like this:

```
SELECT S.name
INTO #TempA
FROM tbl_Student S
INNER JOIN tbl_StudentHSHistory TSHSH ON TSHSH.STUD_PK = S.STUD_PK
INNER JOIN tbl_CODETAILS C ON C.CODE_DETL_PK = S.GID
WHERE TSHSH.Begin_date < @BegDate
```

Here is the issue: the 2nd inner join and the corresponding where condition should only happen if a certain variable (`@UseArchive`) is true; I don't want it to happen if it is false. Also, certain rows in `TSHSH` might have no corresponding entries in `S`.

I tried splitting it into 2 separate queries based on `@UseArchive`, but Management Studio refuses to compile that because of the `INTO #TempA` statement, saying that there is already an object named `#TempA` in the database.

Can anyone tell me of a way to fix the query, or a way to split the queries with the `INTO #TempA` statement?
Looks like you're asking 2 questions here.

1 - How to fix the SELECT INTO issue: SELECT INTO only works if the target table does not exist. You need to use INSERT INTO...SELECT if the table already exists.

2 - Conditional JOIN: you'll need to do a LEFT JOIN if the corresponding row may not exist. Try this:

```
SELECT S.name
FROM tbl_Student S
INNER JOIN tbl_StudentHSHistory TSHSH
    ON TSHSH.STUD_PK = S.STUD_PK
LEFT JOIN tbl_CODETAILS C
    ON C.CODE_DETL_PK = S.GID
WHERE TSHSH.Begin_date < @BegDate
  AND CASE WHEN @UseArchive = 1 THEN C.CODE_DETL_PK ELSE 0 END
    = CASE WHEN @UseArchive = 1 THEN S.GID ELSE 0 END
```

Putting the CASE expression in the WHERE clause and not the JOIN clause will force it to act like an INNER JOIN when @UseArchive is set and a LEFT JOIN when it is not.
I'd replace it with a LEFT JOIN:

```
LEFT JOIN tbl_CODETAILS C ON @UseArchive = 1 AND C.CODE_DETL_PK = S.GID
```
SQL query join conditions
[ "sql", "sql-server", "t-sql" ]
I am writing a small app that is meant to wake at 09:00 every day and fetch data from some sources. However, when I review my logs and database entries, I am seeing that this executes at 09:00 and again at 10:00. The scraping process takes at most 15 minutes to complete; this has me totally stumped.

```
while 1:
    if time.strftime("%H") == "09" and time.strftime("%M") == "00":
        print "Starting at: " + time.strftime("%H") + ":" + time.strftime("%M")
        worker1.startThread()
        worker2.startThread()
    time.sleep(30)
```

In my logs I am essentially seeing:

```
Starting at: 09:00
<snip>
Starting at: 10:00
```
Just for clarity I'm going to wrap up some of the suggestions in this answer.

First, my guess is that the problem is actually the one that Kos describes, and I'm thinking it happens more often than you expect it does. Making two calls to `time.strftime` (actually four, but two of them are just for printing) means you're making two (four) calls under the hood to `time.localtime`, and since you're checking every thirty seconds, there's a good chance that if you finish very near an exact minute, you'll end up with values straddling the 10:00 hour reasonably often.

This is how I'd fix it:

```
def get_nap_length():
    '''Returns approximate number of seconds before 9:00am tomorrow.

    Probably implementing this would be easiest with the datetime module.'''

while True:
    t = time.localtime()
    if t[3:5] == (9, 0):  # Compare (hour, minute) numerically
        print time.strftime("Starting at: %H:%M", t)
        worker1.startThread()
        worker2.startThread()
        time.sleep(get_nap_length())
    else:
        time.sleep(59)  # No need to sleep less than this, even being paranoid.
```

I'll leave the implementation of `get_nap_length` to you if you feel like it. I'd have it return something like the number of seconds until tomorrow at 8:58 am, just for safety. Implementing this would cut down on the number of "useless" times you go through the loop, and therefore reduce your chances of misfiring somehow. Note that if you *don't* implement this, you also need to remove the `else` from the code I provided above, or you may find yourself starting `worker1` and `worker2` many many times before 9:01 comes around.

Finally, it's definitely worth looking at the system scheduler, because as people have said, it's nicer to just let the OS handle that stuff. Windows makes scheduled tasks reasonably easy with native functionality (Task Scheduler under Administrative Tools). I don't know about \*nix, but I'm sure it can't be that bad.
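For completeness, here is one possible sketch of the `get_nap_length` stub mentioned above, using the datetime module; the 8:58 am target and the optional `now` parameter (added purely to make the function testable) are my own choices, not part of the original answer:

```python
import datetime

def get_nap_length(now=None):
    """Approximate number of seconds until 8:58 am tomorrow.

    A little before 9:00, for safety. The ``now`` argument exists only
    so the function is easy to test; it defaults to the current time.
    """
    if now is None:
        now = datetime.datetime.now()
    target = (now + datetime.timedelta(days=1)).replace(
        hour=8, minute=58, second=0, microsecond=0)
    return (target - now).total_seconds()
```

Since the loop only calls this right after the 9:00 trigger fires, "tomorrow at 8:58" is always the correct next wake-up.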
How about this scenario:

```
while 1:                                  # 09:59:59.97
    if time.strftime("%H") == "09"        # 09:59:59.99
       and time.strftime("%M") == "00":   # 10:00:00.01
```

You'd have to get lucky for this to happen, but who knows :-)

---

BTW `time.sleep(30)` means that you are likely to enter the loop twice at 09:00. I can't see how that's related to the issue we're discussing, though.
Problems getting python to start threads at a given time
[ "python" ]
I understand that you can't remove an element from a list if you are currently iterating over that list. What I'm trying to do then is to copy elements from that list that I do NOT wish to remove to another list, and then replace the original list with the new list. Here's my relevant code:

```
while len(tokenList) > 0:  # loop through the tokenList list

    # reset the updated token list and the remove flag
    updatedTokenList = []
    removeFlag = False

    for token in tokenList:
        completionHash = aciServer.checkTaskForCompletion(token)

        # If the completion hash is not the empty hash, parse the information
        if completionHash != {}:
            # if we find that a task has completed, remove it from the list
            if completionHash['Status'] == 'FINISHED' and completionHash['Error'] == '':
                # The task completed successfully, remove the token from the list
                removeFlag = True
            elif completionHash['Status'] == 'RUNNING' and completionHash['Error'] == '':
                # The task must still be running
                print('Task ' + completionHash['Type'] + ' ' + token + ' has been running for ' + completionHash['Runtime'] + ' seconds.')
            elif completionHash['Status'] == 'queued':
                # The task is in the queue
                print('Task ' + completionHash['Type'] + ' ' + token + ' is queued in position ' + completionHash['QueuePosition'])
            elif completionHash['Status'] == 'not_found':
                # Did not find a task with this token, possible the task hasn't been added yet
                print(completionHash['Error'])
            # if the task is still running, no change to the token list will have occured
        else:
            # This is probably because the server got rid of the token after the task completed
            print('Completion hash is empty, something went wrong.')
            tokenListError.append(token)
            removeFlag = True

        if not removeFlag:
            print('appending token to updatedTokenList')
            updatedTokenList.append(token)

    print('length of tokenList after removal loop: ' + str(len(updatedTokenList)))

    # wait some time, proportional to the number of tasks left
    checkInterval = len(updatedTokenList) * checkIntervalMultiplier
    print('Waiting ' + str(checkInterval) + ' seconds before checking again...')
    print('Tokens remaining: ' + str(len(updatedTokenList)))

    # replace the original token list with the updated token list
    tokenList = updatedTokenList

    # wait a while based on how many tokens remain
    time.sleep(checkInterval)
```

So the point of all this is to update the tokenList with the new list. Every time through the loop, new tasks will have finished and they should NOT be added to the updatedTokenList; the remaining task tokens will be, and this replaces the original token list.

This does not work. On my first pass through, it does not add any tokens to the updatedTokenList even though no tasks have yet completed. I cannot figure out what I am doing wrong. Any suggestions?
This gets a lot easier if you move the logic into a function:

```
#This function should have a more descriptive name that follows your
#project's API.
def should_keep(token):
    """returns True if the token should be kept"""
    #do other stuff here. Possibly print stuff or whatever ...
    ...
```

Now, you can replace your list with a simple list comprehension:

```
tokenList = [ token for token in tokenList if should_keep(token) ]
```

Note that we haven't *actually* replaced the list. The old list could still presumably have references to it hanging around. If you want to replace the list in place though, it's no problem. We just use slice assignment:

```
tokenList[:] = [ token for token in tokenList if should_keep(token) ]
```
I understand you wish to delete the items from the list without keeping them, so what I think you can do is save the indices that correspond to the list items you want to delete.

For example, let's say I have a list with numbers from 1 to 5, but I only want this list to keep the odd numbers, so I want to delete the even ones. What I would do is set a loop with a counter to check every item on the list for a condition (in this case, whether `myList[ItemNumber] % 2 == 0`), and if it holds, append `ItemNumber` to another list. Then, when all the items that are going to be deleted have their indices in this new list, I would call another loop to run through the new list and delete from the original list the items whose indices are contained in it. Like so:

```
myList = [1, 2, 3, 4, 5]
count = 0
toBeDeleted = []
while count < len(myList):
    if myList[count] % 2 == 0:
        toBeDeleted.append(count)
    count += 1

cont2 = len(toBeDeleted)
while cont2 > 0:
    cont3 = toBeDeleted[cont2 - 1]
    myList.pop(cont3)
    cont2 -= 1
```

This worked just fine for this problem, so I believe and hope it will help you with yours.
Weird behavior removing elements from a list in a loop in Python
[ "python", "list" ]
I'm trying to use a Flask application behind an Amazon Load Balancer and the Flask threads keep timing out. It appears that the load balancer is sending a `Connection: keep-alive` header and this is causing the Flask process to never return (or take a long time). With gunicorn in front, the processes are killed and new ones started. We also tried using uWSGI, and simply exposing the Flask app directly (no wrapper). All result in the Flask process just not responding.

I see nothing in the Flask docs which would make it ignore this header. I'm at a loss as to what else I can do with Flask to fix the problem.

*Curl and direct connections to the machine work fine; only those via the load balancer are causing the problem. The load balancer itself doesn't appear to be doing anything wrong, and we use it successfully with several other stacks.*
The solution I have now is using gunicorn as a wrapper around the flask application. For the `worker_class` I am using `eventlet` with several workers. This combination seems stable and responsive. Gunicorn is also configured for HTTPS. *I assume it is a defect in Flask that causes the problem and this is an effective workaround.*
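For reference, a setup along these lines can be expressed in a gunicorn config file. The worker count and keep-alive value below are illustrative choices, not the answerer's exact settings:

```python
# gunicorn.conf.py -- illustrative values, not the exact production settings
bind = "0.0.0.0:8000"
worker_class = "eventlet"  # async workers cope better with lingering keep-alive connections
workers = 4
keepalive = 75             # longer than the ELB idle timeout (60 seconds by default)
# Start with: gunicorn -c gunicorn.conf.py myapp:app
```

Keeping the backend keep-alive timeout longer than the load balancer's idle timeout avoids the backend closing a connection the balancer still considers open.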
Did you remember to set `session.permanent = True` and `app.permanent_session_lifetime`?
flask application timeout with amazon load balancer
[ "python", "flask", "gunicorn", "werkzeug" ]
I am just learning Python and I need some advice regarding searching items in a list. I have a list that contains a list of lists. How can I search for an item in a list and return the value from the same list, like the example below?

```
Example:
myList = [['Red', 'Tomato'], ['Green', 'Pear'], ['Red', 'Strawberry'], ['Yellow', 'Lemon']]
word = 'Green'
return Pear
```

Is it possible to find the first instance or the n-th instance in the list as well?

```
first_instance = 'Red'
return Tomato

last_instance = 'Red'
return Strawberry
```
You can use `collections.defaultdict`: once the dictionary is created, it'll take only O(1) lookup to find any instance of "Red" or any other color.

```
>>> myList = [['Red', 'Tomato'], ['Green', 'Pear'], ['Red', 'Strawberry'], ['Yellow', 'Lemon']]
>>> from collections import defaultdict
>>> dic = defaultdict(list)
>>> for k,v in myList:
...     dic[k].append(v)
...
>>> dic['Red'][0]     #first instance
'Tomato'
>>> dic['Red'][-1]    #last instance
'Strawberry'
>>> dic["Yellow"][0]  #first instance of Yellow
'Lemon'
```

Define a simple function here to handle Index errors:

```
>>> def solve(n, color, dic):
...     try:
...         return dic[color][n]
...     except IndexError:
...         return "Error: {0} has only {1} instances".format(color, len(dic[color]))
...
>>> dic = defaultdict(list)
>>> for k,v in myList:
...     dic[k].append(v)
...
>>> solve(0, "Red", dic)
'Tomato'
>>> solve(-1, "Red", dic)
'Strawberry'
>>> solve(0, "Yellow", dic)
'Lemon'
>>> solve(1, "Yellow", dic)
'Error: Yellow has only 1 instances'
```
You can fetch all such elements: ``` instances = [x[1] for x in myList if x[0] == 'Red'] ``` And process `instances[0]`, `instances[-1]`, etc. To get the first one only, I'd go with @eumoro's approach using a generator expression.
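For instance, the first match can be fetched lazily with `next` over a generator expression, with a default for when the colour is absent (the data here mirrors the question's example):

```python
myList = [['Red', 'Tomato'], ['Green', 'Pear'],
          ['Red', 'Strawberry'], ['Yellow', 'Lemon']]

# Stops scanning at the first hit instead of building the whole list.
first = next((x[1] for x in myList if x[0] == 'Red'), None)
# first == 'Tomato'
```

The second argument to `next` avoids a `StopIteration` when no element matches.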
Finding a certain item in a list of lists in Python
[ "python", "list" ]
I have an HTML webpage. It has a search textbox. I want to allow the user to search within a dataset. The dataset is represented by a bunch of files on my server. I wrote a Python script which can perform that search. Unfortunately, I'm not familiar with how to unite the HTML page and a Python script.

The task is to put a Python script into the HTML file so that:

1. The Python code will be run on the server side
2. The Python code can somehow take the values from the HTML page as input
3. The Python code can somehow put the search results to the HTML webpage as output

Question 1: How can I do this?
Question 2: How should the Python code be stored on the website?
Question 3: How should it take HTML values as input?
Question 4: How can it output the results to the webpage?

Do I need to install/use any additional frameworks? Thanks!
There are too many things to get wrong if you try to implement that by yourself with only what the standard library provides. I would recommend using a web framework, like [flask](http://flask.pocoo.org/docs/quickstart/) or [django](https://docs.djangoproject.com/en/1.5/intro/). I linked to the quickstart sections of the comprehensive documentation of both.

Basically, you write code and URL specifications that are mapped to the code, e.g. an HTTP GET on `/search` is mapped to a method returning the HTML page. You can then use a form submit button to GET `/search?query=<param>`, with `<param>` being the user's input. Based on that input you search the dataset and return a new HTML page with results. Both frameworks have template languages that help you put the search results into HTML.

For testing purposes, web frameworks usually come with a simple webserver you can use. For production purposes, there are better solutions like [uwsgi](http://projects.unbit.it/uwsgi/) and [gunicorn](http://gunicorn.org/).

Also, you should consider putting the data into a database; parsing files for each query can be quite inefficient.

I'm sure you will have more questions on the way, but that's what Stack Overflow is for, and if you can ask more specific questions, it is easier to provide more focused answers.
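As a rough sketch of the Flask variant of that `/search` mapping (`run_search` is a placeholder standing in for the asker's existing search script, and a real app would render an HTML template instead of joining strings):

```python
from flask import Flask, request

app = Flask(__name__)

def run_search(query):
    # Placeholder for the existing Python search script over the dataset files.
    return ["result for " + query]

@app.route('/search')
def search():
    # The form's submit button issues GET /search?query=<user input>.
    query = request.args.get('query', '')
    results = run_search(query)
    # Real code would use render_template() with a Jinja2 template here.
    return '<br>'.join(results)
```

The app would then be started with `app.run()` during development, or behind uwsgi/gunicorn in production.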
I would look at the [cgi](http://docs.python.org/2/library/cgi.html) library in python.
Python search script in an HTML webpage
[ "python", "html", "search" ]
I have a table structure that looks like this:

```
ClassId
ClassDescription
```

My table's data looks like this:

```
ClassId ClassDesc
4       ---
4       ---
4       ---
4
4
2
2
2
```

Now what I am trying to do is to get the count of `classId` based on each individual `classId`; as an example, the above table would return a count of 5 for all of classId = 4 and a count of 3 for classId = 2.

So my SQL statement should count them individually and order by the count of ClassId descending:

```
4 counts to 5
2 counts to 3
```

The final output should look like this:

```
ClassId ClassDesc
4       ----
2       ----
```
Using Group By:

```
SELECT classID, ClassDescription
FROM (SELECT classID, ClassDescription, COUNT(1) cnt
      FROM mytable
      GROUP BY classID, ClassDescription) AS A
ORDER BY cnt DESC
```

OR

```
SELECT classID, ClassDescription
FROM <table>
GROUP BY classID, ClassDescription
ORDER BY COUNT(1) DESC
```
Try this:

```
SELECT ClassId, COUNT(ClassDesc) AS ClassDesc
FROM MyTable
GROUP BY ClassId
ORDER BY COUNT(ClassDesc) DESC;
```

Output:

```
╔═════════╦═══════════╗
║ CLASSID ║ CLASSDESC ║
╠═════════╬═══════════╣
║       4 ║         5 ║
║       2 ║         3 ║
╚═════════╩═══════════╝
```

### [See this SQLFiddle](http://sqlfiddle.com/#!3/21bc6/8)
Issue with Order By on Count of field
[ "sql", "sql-server-2008", "count", "sql-order-by" ]
Is it possible to have a colon in a variable name? I already tried to backslash it, but it doesn't work...

I'm using GeoDjango, so my identifiers in my models have to keep the same names as those in the database. The trouble is that, as I migrated data from OSM, I have some columns which are named with a colon (e.g. addr:housenumber).

Here's an example:

```
class myClass(models.Model):
    # ...
    addr:housename = models.TextField(blank=True)
```

When I'm trying to syncdb:

```
    addr:housename = models.TextField(blank=True)
        ^
SyntaxError: invalid syntax
```

Which seems normal. When I try to escape the colon, I get another error:

```
    addr\:housename = models.TextField(blank=True)
        ^
SyntaxError: unexpected character after line continuation character
```

I really don't understand why I get this error. Does anyone know how to put a colon in an identifier?
Use the [`db_column`](https://docs.djangoproject.com/en/1.5/ref/models/fields/#db-column) field parameter to specify your real column name. And yes, colons are not allowed in identifier names in Python.

```
class myClass(models.Model):
    addr_housename = models.TextField(blank=True, db_column="addr:housename")
```
The short answer is that the colon is simply not allowed in Python variable names. You can only have letters, digits and underscores. Actually, you should replace all the colons by underscores if possible.

If your variable is a class attribute (it does not seem to be the case here), you can use `setattr` and `getattr` to have class attribute names with whatever character you want in them, but that will be ugly, and you won't be able to use them without `setattr` and `getattr` anymore. Here is an example:

```
class A:
    pass

a = A()
setattr(a, 'addr:housename', models.TextField(blank=True))
some_var = getattr(a, 'addr:housename')
```
Colon in variable identifier
[ "python", "postgis", "geodjango" ]
I have the following code that writes the md5sums to a logfile:

```
for file in files_output:
    p = subprocess.Popen(['md5sum', file], stdout=logfile)
p.wait()
```

1. Will these be written in parallel? i.e. if md5sum takes a long time for one of the files, will another one be started before waiting for a previous one to complete?
2. If the answer to the above is yes, can I assume the order of the md5sums written to logfile may differ based upon how long md5sum takes for each file? (some files can be huge, some small)
All subprocesses are run in parallel. (To avoid this, one has to wait explicitly for their completion.) They can even write into the log file at the same time, thus garbling the output. To avoid this, you should let each process write into a different logfile and collect all outputs when all processes are finished.

```
import threading
import Queue

q = Queue.Queue()
result = {}  # used to store the results

for fileName in fileNames:
    q.put(fileName)

def worker():
    while True:
        fileName = q.get()
        if fileName is None:  # Sentinel?
            return
        subprocess_stuff_using(fileName)
        wait_for_finishing_subprocess()
        checksum = collect_md5_result_for(fileName)
        result[fileName] = checksum  # store it

threads = [ threading.Thread(target=worker) for _i in range(20) ]
for thread in threads:
    thread.start()
for thread in threads:
    q.put(None)  # one sentinel marker for each thread
for thread in threads:
    thread.join()
```

After this the results should be stored in `result`.
1. Yes, these md5sum processes will be started in parallel.
2. Yes, the order of the md5sum writes will be unpredictable. And generally it is considered a bad practice to share a single resource like a file between many processes this way.

Also, your way of putting `p.wait()` after the `for` loop will wait just for the last of the md5sum processes to finish, and the rest of them might still be running.

But you can modify this code slightly to still have the benefits of parallel processing and predictability of synchronized output, if you collect the md5sum output into temporary files and gather it back into one file once all processes are done.

```
import subprocess
import os

processes = []
for file in files_output:
    f = os.tmpfile()
    p = subprocess.Popen(['md5sum', file], stdout=f)
    processes.append((p, f))

for p, f in processes:
    p.wait()
    f.seek(0)
    logfile.write(f.read())
    f.close()
```
Python: running subprocess in parallel
[ "python", "subprocess" ]
For a table such as:

```
foo_table
id | str_col | bool_col
1    "1234"    0
2    "3215"    0
3    "8132"    1
4    NULL      1
5    ""        1
6    ""        0
```

I know how to query both of:

```
count(*) | bool_col
3          0
3          1
```

and

```
count(*) | isnull(str_col) or str_col = ""
3          0
3          1
```

but how could I get something like:

```
count(*) | bool_col | isnull(str_col) or str_col = ""
2          0          0
1          0          1
1          1          0
2          1          1
```

In the meantime, I'm just individually doing:

```
select count(*) from foo_table where bool_col and (isnull(str_col) or str_col = "");
select count(*) from foo_table where not bool_col and (isnull(str_col) or str_col = "");
select count(*) from foo_table where bool_col and not (isnull(str_col) or str_col = "");
select count(*) from foo_table where not bool_col and not (isnull(str_col) or str_col = "");
```
Try

```
SELECT COUNT(*),
       bool_col,
       CASE WHEN str_col IS NULL OR str_col = '' THEN 1 ELSE 0 END str_col
FROM foo_table
GROUP BY bool_col,
         CASE WHEN str_col IS NULL OR str_col = '' THEN 1 ELSE 0 END
```

Output (MySql):

```
| COUNT(*) | BOOL_COL | STR_COL |
---------------------------------
|        2 |        0 |       0 |
|        1 |        0 |       1 |
|        1 |        1 |       0 |
|        2 |        1 |       1 |
```

**[SQLFiddle](http://sqlfiddle.com/#!2/a0f67/14)** MySQL

**[SQLFiddle](http://sqlfiddle.com/#!3/7c359/3)** SQL Server
In Oracle there is a built-in function for that called `cube`:

```
select bool_col,
       case when str_col is null or str_col = '' then 1 else 0 end str_col,
       count(*)
from table1
group by cube (bool_col,
               case when str_col is null or str_col = '' then 1 else 0 end)
```

`cube` will give you all combinations. There is also `rollup`, which is a special case of `cube`.
How to count and group by combinations of multiple columns?
[ "sql", "combinations" ]
I have the following relations.

1. I want to find the pids of parts supplied by every supplier at less than 200. (If any supplier either does not supply the part or charges more than 200 for it, the part is not selected.)

The fiddle: <http://sqlfiddle.com/#!2/4b5d4>

```
Suppliers
sid  sname  address
1    Jason  221
2    David  2b
3    John   3c
4    Peter  4d
5    Jay    5e

Parts
pid  pname  color
10   Head   red
20   Body   blue
30   Hand   green
40   Leg    white
50   Arm    red
60   Foot   green

Catalog
sid  pid  cost
1    10   150
1    20   220
2    20   150
2    30   150
3    30   130
4    40   125
1    30   130
1    40   280
4    10   123
4    50   126
1    50   120
5    50   100
5    10   100
3    60   210
3    10   100
2    10   50
```

So I do the following query; I think it is equivalent to saying `there are no parts that the suppliers don't supply`:

```
SELECT C.pid
FROM Catalog C
WHERE C.cost < 200
  AND NOT EXISTS (SELECT S.sid
                  FROM Suppliers S
                  WHERE NOT EXISTS (SELECT P.pid
                                    FROM Parts P
                                    WHERE P.pid = C.pid
                                      AND S.sid = C.sid))
```

but it returns null; it should return 10.

2. I want to find, for every supplier that supplies a green part and a red part, the name and price of the most expensive part that she supplies. I can find the green and red parts but cannot find the most expensive one. This is the query that I have tried:

```
SELECT S.sname
FROM Suppliers AS S, Parts AS P1, Catalog AS C1, Parts AS P2, Catalog AS C2
WHERE S.sid = C1.sid
  AND C1.pid = P1.pid
  AND S.sid = C2.sid
  AND C2.pid = P2.pid
  AND (P1.color = 'red' AND P2.color = 'green')
```

How do I fix it? Thanks in advance.

PS Sorry, I have to go to class and will reply about 6 hours later.
PART 1:

```
Select pid
from catalog
where cost < 200
group by pid
having count(Sid) >= (Select Count(sid) from Suppliers)
```

[**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/4b5d4/81)

PART 2:

```
Select t.sname, Max(Catalog.cost)
from (SELECT S.sname, c2.cost, c2.sid
      FROM Suppliers AS S, Parts AS P1, Catalog AS C1, Parts AS P2, Catalog AS C2
      WHERE S.sid = C1.sid
        AND C1.pid = P1.pid
        AND S.sid = C2.sid
        AND C2.pid = P2.pid
        AND (P1.color = 'red' AND P2.color = 'green')
     ) t
inner join Catalog on t.sid = Catalog.sid
group by t.sid
```

[**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/4b5d4/88)
For part 1, I think this query is more readable and returns you your expected result:

```
SELECT a.pid
FROM (
    SELECT pid, MAX(cost) max_price, COUNT(1) amount
    FROM Catalog
    GROUP BY pid
) a
WHERE a.amount = (SELECT COUNT(1) FROM Suppliers)
  AND max_price < 200
```

Now, opposite to what you wrote, here is where you need the `EXISTS` to check for a supplier having either a red or a green part:

```
SELECT sid
FROM Suppliers
WHERE EXISTS (
    SELECT 1
    FROM Catalog
    LEFT JOIN Parts ON Catalog.pid = Parts.pid
    WHERE Catalog.sid = Suppliers.sid
      AND Parts.color = 'green'
)
AND EXISTS (
    SELECT 1
    FROM Catalog
    LEFT JOIN Parts ON Catalog.pid = Parts.pid
    WHERE Catalog.sid = Suppliers.sid
      AND Parts.color = 'red'
)
```

And finally, the full result should look like:

```
SELECT sid,
    (
        SELECT pname
        FROM Catalog
        LEFT JOIN Parts ON Catalog.pid = Parts.pid
        WHERE Catalog.sid = Suppliers.sid
          AND Catalog.cost = (SELECT MAX(c.cost) FROM Catalog c WHERE c.sid = Suppliers.sid)
        LIMIT 1
    ) name,
    (
        SELECT cost
        FROM Catalog
        LEFT JOIN Parts ON Catalog.pid = Parts.pid
        WHERE Catalog.sid = Suppliers.sid
          AND Catalog.cost = (SELECT MAX(c.cost) FROM Catalog c WHERE c.sid = Suppliers.sid)
        LIMIT 1
    ) cost
FROM Suppliers
WHERE EXISTS (
    SELECT 1
    FROM Catalog
    LEFT JOIN Parts ON Catalog.pid = Parts.pid
    WHERE Catalog.sid = Suppliers.sid
      AND Parts.color = 'green'
)
AND EXISTS (
    SELECT 1
    FROM Catalog
    LEFT JOIN Parts ON Catalog.pid = Parts.pid
    WHERE Catalog.sid = Suppliers.sid
      AND Parts.color = 'red'
)
```

[SQL fiddle here](http://sqlfiddle.com/#!2/4b5d4/65)
find every item and the most expensive with some condition
[ "mysql", "sql" ]
I have two tables like the following:

```
DailyData
Date        Id   CompanyName  CompanyPrice  CompanyId
21-12-2011  123  ABC corp     120           535
25-12-2011  352  Z Edge       101           444
25-12-2011  352  Z Edge       100           444

primary key is `date` and `Id`

ReportData
RId  Date        CompanyName  TodayPrice  CompanyId
1    25-12-2011  Z Edge       230         444

primary key is only `RId`
```

Now I have used the following `LEFT JOIN` on both of the above tables:

```
Select a.date, a.companyname, a.CompanyPrice, b.TodayPrice
from DailyData a
LEFT JOIN ReportData b ON a.Date = '25-12-2011'
```

But instead of two records it is giving more than two records (the same records multiple times). Why is that? Please help me correct my SQL query.

The expected output for the above data should be:

```
date        companyname  companyprice  todaysprice
25-12-2011  Z Edge       101           230
25-12-2011  Z Edge       100           230
```
Your current query is missing a JOIN on the actual columns; as a result you are getting a CROSS JOIN result of all the rows that meet the date condition. You will want to use:

```
Select a.date, a.companyname, a.CompanyPrice, b.TodayPrice
from DailyData a
LEFT JOIN ReportData b ON a.CompanyId = b.CompanyId
WHERE a.Date = '25-12-2011';
```

See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/6bc6b/3)
Your join condition [`ON a.Date = '25-12-2011'`] does not establish any condition on table b; therefore, every row in table b is joined to each row in table a with that specified date.

From looking at the two tables, it is not obvious whether they should be joined on date or on CompanyId.
In my sql script LEFT JOIN is giving output like CROSS JOIN?
[ "sql", "sql-server", "t-sql" ]
I want to check if the transaction took place between 23:30 of one day and 6:30 of the next day. I am using the following code:

```
DECLARE @StartTime TIME
DECLARE @EndTime TIME
DECLARE @TrxnDateTime TIME

select @TrxnDateTime = CONVERT(Varchar(15), Cast(dtTrxnDateTime as time))
from tbl1
where iTransactionId = 1

SET @EndTime = '6:00'
SET @StartTime = '23:30'
SET @TrxnDateTime = '3:30'

PRINT @StartTime
PRINT @EndTime
PRINT @TrxnDateTime

IF(@TrxnDateTime > @StartTime AND @TrxnDateTime < @EndTime)
BEGIN
    PRINT 'Working'
END
ELSE
BEGIN
    PRINT 'Rule Not Tripped'
END
```

How can I check if the transaction is between 23:30 of one day and 6:30 of the next day, as my code above is not working?
Just change `AND` to `OR` here:

```
IF(@TrxnDateTime > @StartTime OR @TrxnDateTime < @EndTime)
```

This is because it is not possible for both conditions to be true together. Looking at your requirement, it should return true when either of your conditions becomes true, so use `OR` instead of `AND`.
```
DECLARE @StartTime Datetime
DECLARE @EndTime Datetime
DECLARE @TrxnDateTime Datetime

select @TrxnDateTime = CONVERT(Varchar(15), Cast(dtTrxnDateTime as Datetime))
from tbl1
where iTransactionId = 1

-- I guess you are getting @TrxnDateTime from DB, so you dont need to set it
SET @EndTime = '2013-05-08 06:00:00.000'
SET @StartTime = '2013-05-07 23:30:00.000'

PRINT @StartTime
PRINT @EndTime
PRINT @TrxnDateTime

IF(@TrxnDateTime > @StartTime AND @TrxnDateTime < @EndTime)
BEGIN
    PRINT 'Working'
END
ELSE
BEGIN
    PRINT 'Rule Not Tripped'
END
```
SQL Procedure to check if the transaction took place between time 23:30 to 6:30
[ "sql", "sql-server" ]
Let's say I have a set of values of some category, for example sports activities. The values might be:

1. Running
2. Basketball
3. Football
4. Ice Hockey
5. Walking
6. Baseball
7. Basketball

Now let's assume that I have a large group of individuals. Each person engages in a set of sports activities for each season of the year. So person A might play basketball and run in winter, play baseball and run in spring and summer, and play football and run in the fall. Person B might walk and run in the spring, fall, and summer and play basketball and ice hockey in the winter, etc.

In other words, each person has 4 sets (or arrays) of sports activities (call them favorite sports activities), one for each season.

What I am trying to figure out is the best way to map these sets of values to individuals in a relational database efficiently. I assume I will have one table of people and one table of sports activities. But how do I represent overlapping sets of values from the sports activities table and map them to individual people in the people table?
You map the activity to a person. Your table would reference the activity and the person, and the primary key would be a composite of both of those.

```
person <----- personActivity -----> activity
```

If you want to add in seasons, the junction table references person, activity, and season:

```
person, activity, season -> personActivitySeason
```
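A minimal sketch of that junction table, run through sqlite here purely for illustration (the table and column names are placeholders, not a prescribed schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person   (person_id   INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE activity (activity_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE season   (season_id   INTEGER PRIMARY KEY, name TEXT);
    -- one row per (person, activity, season) combination
    CREATE TABLE person_activity_season (
        person_id   INTEGER REFERENCES person,
        activity_id INTEGER REFERENCES activity,
        season_id   INTEGER REFERENCES season,
        PRIMARY KEY (person_id, activity_id, season_id)
    );

    INSERT INTO person   VALUES (1, 'A');
    INSERT INTO activity VALUES (1, 'Running'), (2, 'Basketball');
    INSERT INTO season   VALUES (1, 'Winter');
    INSERT INTO person_activity_season VALUES (1, 1, 1), (1, 2, 1);
""")

# Person A's winter activities come back with a simple join.
rows = conn.execute("""
    SELECT a.name
    FROM person_activity_season pas
    JOIN activity a ON a.activity_id = pas.activity_id
    WHERE pas.person_id = 1 AND pas.season_id = 1
    ORDER BY a.name
""").fetchall()
print([r[0] for r in rows])  # ['Basketball', 'Running']
```

The composite primary key also prevents recording the same activity twice for the same person and season.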
You would also have a table of seasons (probably with just four values). Then you would have a table `PersonActivitySeasons` with columns like:

* PersonActivitySeasonId
* PersonId
* SeasonId
* ActivityId
* Date of Activity

This is the most normalized format. In practice, you might also make the seasons columns:

* PersonActivitySeasonId
* PersonId
* ActivityId
* IsWinter
* IsSpring
* IsSummer
* IsAutumn

I would tend to go with the first approach, because I could include effective and end dates for each record for each season, and more easily track people going in and out of activities. Also, it would allow for "non-conforming" seasons, if that were useful for you.
How to represent a set of values in a relational database
[ "sql", "database-design" ]
I need to create a few queries for a database with the following schema:

```
Patient(**pid**, pname, address, phone)
Ward(**wid**, wname)                            // wid = Ward id
Bed(**wid, bid**)                               // bid = Bed id
Appointment(**apid**, date, result, pid, cid)   // cid = Consultant id, pid = Patient id
Consultant(**cid**, cname, clinId, phone)       // clinId = Clinic ID
Allocation(**apid**, pid, wid, bid, date, ex_leave, act_leave)  // ex = expected, act = actual
```

The queries are:

1. Find how many unoccupied beds are in each ward.
2. Find the ward to which an allocation was made on every day during March 2013.
3. Return the consultant details of those who performed the most appointments that led to allocation in the Orthopedic ward.

I tried to create the first one using views, like this:

```
create view hospital.occupied_beds as
select A.wid, count(*) as o_beds
from hospital.allocation A, hospital.bed B
where A.wid = B.wid and A.bid = B.bid and A.act_leave is null
group by A.wid;

create view hospital.all_beds as
select C.wid, count(*) as all_beds
from hospital.bed C
group by C.wid;

select distinct A.wid, all_beds - o_beds as uo_beds
from hospital.occupied_beds A, hospital.all_beds B
```

but this way it doesn't return wards in which all the beds are unoccupied. Please help me :)
Here are three possible solutions for your questions. Keep in mind that I was not going for efficiency; there are probably ways to optimize these queries a bit. I just wanted to give you ideas and get you going in the right direction. For unoccupied beds per ward: ``` select w.wname, bc.total - IFNULL(ob.occupied,0) as unoccupied from Ward w, (select wid, count(bid) as total from Bed group by wid) bc left join (select wid, count(wid) as occupied from Allocation where act_leave is null group by wid) ob on bc.wid = ob.wid where w.wid = bc.wid ``` For wards with allocations for every day in March 2013: ``` select w.wid, w.wname, count(distinct(a.date)) as acount from Ward w, Allocation a where a.date >= '2013-03-01' and a.date <= '2013-03-31' and w.wid = a.wid group by w.wid having acount = 31 ``` The list of consultants with the most ortho appointments in descending order (most appointments on top): ``` select c.cid, c.cname, count(a.apid) as apptcount from Consultant c, Appointment p, Allocation a, Ward w where c.cid = p.cid and p.apid = a.apid and a.wid = w.wid and w.wname = 'Orthopedic' group by c.cid order by apptcount desc ```
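The crucial part of the first query is the left join, which keeps wards whose beds are all unoccupied. That behavior can be checked quickly on SQLite with toy data (`IFNULL` exists in SQLite as well as MySQL); the rows below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ward (wid INT, wname TEXT);
CREATE TABLE bed (wid INT, bid INT);
CREATE TABLE allocation (wid INT, bid INT, act_leave TEXT);
INSERT INTO ward VALUES (1, 'A'), (2, 'B');
INSERT INTO bed VALUES (1, 1), (1, 2), (2, 1), (2, 2);
-- one bed in ward A is occupied (no actual leave date yet);
-- ward B has no allocations at all
INSERT INTO allocation VALUES (1, 1, NULL);
""")
rows = conn.execute("""
    SELECT w.wname, bc.total - IFNULL(ob.occupied, 0) AS unoccupied
    FROM ward w
    JOIN (SELECT wid, COUNT(bid) AS total FROM bed GROUP BY wid) bc
      ON w.wid = bc.wid
    LEFT JOIN (SELECT wid, COUNT(*) AS occupied FROM allocation
               WHERE act_leave IS NULL GROUP BY wid) ob
      ON bc.wid = ob.wid
    ORDER BY w.wname
""").fetchall()
print(rows)  # [('A', 1), ('B', 2)]
```

Ward B, with no occupied beds, still appears with all its beds counted as unoccupied, which was exactly what the question's view-based attempt was missing.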
It seems a bit poorly normalized, in that Allocation can specify both a ward ID and a bed ID. Is bed ID nullable, e.g. the patient hasn't been allocated a bed yet? In any case, I think you need outer joins. I don't have a copy of MySQL available right now, but I believe you can do this: ``` create view hospital.unoccupied_beds as select B.wid,count(*) as o_beds from hospital.allocation A right join hospital.bed B on A.wid=B.wid and A.bid=B.bid and A.act_leave is null where A.apid is null group by B.wid; ```
A few SQL queries I can't figure out how to write
[ "", "mysql", "sql", "database", "schema", "" ]
The pandas `factorize` function assigns each unique value in a series to a sequential, 0-based index, and calculates which index each series entry belongs to. I'd like to accomplish the equivalent of `pandas.factorize` on multiple columns: ``` import pandas as pd df = pd.DataFrame({'x': [1, 1, 2, 2, 1, 1], 'y':[1, 2, 2, 2, 2, 1]}) pd.factorize(df)[0] # would like [0, 1, 2, 2, 1, 0] ``` That is, I want to determine each unique tuple of values in several columns of a data frame, assign a sequential index to each, and compute which index each row in the data frame belongs to. `Factorize` only works on single columns. Is there a multi-column equivalent function in pandas?
You need to create an ndarray of tuples first; `pandas.lib.fast_zip` can do this very fast in a Cython loop. ``` import pandas as pd df = pd.DataFrame({'x': [1, 1, 2, 2, 1, 1], 'y':[1, 2, 2, 2, 2, 1]}) print pd.factorize(pd.lib.fast_zip([df.x, df.y]))[0] ``` The output is: ``` [0 1 2 2 1 0] ```
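Note that `pandas.lib.fast_zip` is a private API (it has been removed from newer pandas releases), so here is a library-agnostic sketch of the same idea: assign each unique row tuple a sequential 0-based code in first-seen order.

```python
def factorize_rows(rows):
    # Map each unique row tuple to the next sequential code,
    # in order of first appearance.
    codes, uniques = [], {}
    for row in rows:
        key = tuple(row)
        if key not in uniques:
            uniques[key] = len(uniques)
        codes.append(uniques[key])
    return codes, list(uniques)

# The (x, y) pairs from the question's DataFrame.
data = list(zip([1, 1, 2, 2, 1, 1], [1, 2, 2, 2, 2, 1]))
codes, uniques = factorize_rows(data)
print(codes)    # [0, 1, 2, 2, 1, 0]
print(uniques)  # [(1, 1), (1, 2), (2, 2)]
```

The same pairing applies to a DataFrame by iterating `df.itertuples(index=False)`.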
I am not sure if this is an efficient solution. There might be better solutions for this. ``` arr=[] #this will hold the unique items of the dataframe for i in df.index: if list(df.iloc[i]) not in arr: arr.append(list(df.iloc[i])) ``` so printing the arr would give you ``` >>>print arr [[1,1],[1,2],[2,2]] ``` to hold the indices, i would declare an ind array ``` ind=[] for i in df.index: ind.append(arr.index(list(df.iloc[i]))) ``` printing ind would give ``` >>>print ind [0,1,2,2,1,0] ```
multi-column factorize in pandas
[ "", "python", "pandas", "enumeration", "data-cleaning", "" ]
My Question is as follows: I know a little bit about ML in Python (using NLTK), and it works ok so far. I can get predictions given certain features. But I want to know, is there a way, to display the best features to achieve a label? I mean the direct opposite of what I've been doing so far (put in all circumstances, and get a label for that) I try to make my question clear via an example: Let's say I have a database with Soccer games. The Labels are e.g. 'Win', 'Loss', 'Draw'. The Features are e.g. 'Windspeed', 'Rain or not', 'Daytime', 'Fouls committed' etc. Now I want to know: Under which circumstances will a Team achieve a Win, Loss or Draw? Basically I want to get back something like this: Best conditions for Win: Windspeed=0, No Rain, Afternoon, Fouls=0 etc Best conditions for Loss: ... Is there a way to achieve this?
My paint skills aren't the best! All I know is theory, so well you'll have to look for the code.. If you have only 1 case(The best for "x" situations) the diagram becomes something like (It won't be 2-D, but something like this): ![The 3 dots](https://i.stack.imgur.com/Bpwz8.png) Green (Win), Orange(Draw), Red(Lose) Now if you want to predict whether the team wins, loses or draws, you have (at least) 2 models to classify: 1. [Linear Regression](http://en.wikipedia.org/wiki/Linear_regression), the separator is the Perpendicular bisector of the line joining the 2 points: [![Linear Regression](https://i.stack.imgur.com/2qX20.png)](http://en.wikipedia.org/wiki/Linear_regression) 2. [K-nearest-neighbours](http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm): it is done just by calculating the distance from all the points, and classifying the point as the same as the closest.. ![Something!!!](https://i.stack.imgur.com/53wwL.png) So, for example, if you have a new data, and have to classify it, here's how: 1. We have a new point, with certain attributes.. ![New Point](https://i.stack.imgur.com/r1WNT.png) 2. We classify it by seeing/calculating which side of the line the point comes in (or seeing how far it is from our benchmark situations... ![Classified Point](https://i.stack.imgur.com/fcwiq.png) > Note: You will have to give some weightage to each factor, for more accuracy..
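Setting the pictures aside, the nearest-neighbour idea can be sketched in a few lines of plain Python. The feature vectors and labels below are invented toy data standing in for (windspeed, fouls) measurements:

```python
def nearest_label(point, examples):
    # 1-nearest-neighbour: return the label of the closest training
    # example by squared Euclidean distance.
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(examples, key=lambda e: dist2(point, e[0]))[1]

# Toy training data: (windspeed, fouls committed) -> match result
examples = [((0, 0), 'Win'), ((10, 2), 'Draw'), ((25, 6), 'Loss')]
print(nearest_label((2, 1), examples))   # Win
print(nearest_label((20, 5), examples))  # Loss
```

Reading off the "best conditions" for each label then amounts to inspecting which training points carry that label, which matches the benchmark-situations picture above.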
Not sure if you have to do this in python, but if not, I would suggest Weka. If you're unfamiliar with it, here's a link to a set of tutorials: <https://www.youtube.com/watch?v=gd5HwYYOz2U> Basically, you'd just need to write a program to extract your features and labels and then output a .arff file. Once you've generated a .arff file, you can feed this to Weka and run myriad different classifiers on it to figure out what model best fits your data. If necessary, you can then program this model to operate on your data. Weka has plenty of ways to analyze your results and to graphically display said results. It's truly amazing.
Machine Learning in Python - Get the best possible feature-combination for a label
[ "", "python", "machine-learning", "nltk", "" ]
I have a SQL query that is not returning any rows, even though there is matching data to be returned. The query code is as follows: ``` Cursor cursor = db.query(CrimeDB.NOME_TABELA, CrimeDB.COLUNAS, CrimeDB.ID_CIDADE + "=" + idCidade + "" + " AND " + CrimeDB.TIME + " >= datetime('" + dataInicioFormatada + "') AND " + CrimeDB.TIME + " <= datetime('" + dataFimFormatada + "')" + " AND " + CrimeDB.GRUPO_CRIME + "=" + idCategoria + "", null, null, null, null); ``` Reading the cursor: ``` if (cursor.moveToFirst()) { do { crime = new Crime(); crime.setLastUpadateToken(ultimoTokenValido .getUltimoTokenAtualizado()); listCrime.add(itemCrime); } while (cursor.moveToNext()); } ``` The generated query is: ``` SELECT grupo_crime_id_grupo_crime, id_crime, cities_id_cities, time FROM crime WHERE cities_id_cities=1650 AND time >= datetime('20-10-2012') AND time <= datetime('22-05-2014') AND grupo_crime_id_grupo_crime=1 ``` It returns no rows even though matching values exist in the database. Using the SQLite Editor I can see the records in the crime table: ``` id_crime | cities_id_cities | grupo_crime_id_grupo_crime | time 1 1650 1 28-03-2013 2 1650 1 06-04-2013 ```
Change the date format to ISO `yyyy-MM-dd` (SQLite stores dates as text, and its `datetime()` function only understands ISO-8601 strings, so `dd-MM-yyyy` values never compare correctly) and try using `between` instead of comparing dates with `>=` and `<=`.
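The format point can be demonstrated with Python's `sqlite3` (the same engine Android uses). SQLite compares date strings lexicographically, so `DD-MM-YYYY` text sorts by day first and breaks range queries, while ISO `YYYY-MM-DD` sorts chronologically. The rows below mirror the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crime (id INTEGER, time TEXT)")

# Dates stored in DD-MM-YYYY: the range query silently misses both rows.
conn.executemany("INSERT INTO crime VALUES (?, ?)",
                 [(1, '28-03-2013'), (2, '06-04-2013')])
bad = conn.execute("SELECT COUNT(*) FROM crime "
                   "WHERE time BETWEEN '20-10-2012' AND '22-05-2014'"
                   ).fetchone()[0]

# Same dates stored in ISO format: the range query finds both rows.
conn.execute("DELETE FROM crime")
conn.executemany("INSERT INTO crime VALUES (?, ?)",
                 [(1, '2013-03-28'), (2, '2013-04-06')])
good = conn.execute("SELECT COUNT(*) FROM crime "
                    "WHERE time BETWEEN '2012-10-20' AND '2014-05-22'"
                    ).fetchone()[0]
print(bad, good)  # 0 2
```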
use apostrophe (') while checking in where clause ``` Cursor cursor = db.query(CrimeDB.NOME_TABELA, CrimeDB.COLUNAS, CrimeDB.ID_CIDADE + "='" + idCidade + "'" + " AND " + CrimeDB.TIME + " >= datetime('" + dataInicioFormatada + "') AND " + CrimeDB.TIME + " <= datetime('" + dataFimFormatada + "')" + " AND " + CrimeDB.GRUPO_CRIME + "='" + idCategoria + "'", null, null, null, null); ```
Sqlite query returns no value Android
[ "", "android", "sql", "sqlite", "" ]
I've got the following two lists: ``` a = ['A-B', 'A-D', 'B-C', 'B-D', 'B-E', 'C-E', 'D-E', 'D-F', 'E-F', 'E-G', 'F-G'] b = ['7', '5', '8', '9', '7', '5', '15', '6', '8', '9', 11] ``` I want to convert these lists into one like this: ``` [ ("A", "B", 7), ("A", "D", 5), ("B", "C", 8), ("B", "D", 9), ("B", "E", 7), ("C", "E", 5), ("D", "E", 15), ("D", "F", 6), ("E", "F", 8), ("E", "G", 9), ("F", "G", 11)] ``` The problem is that the first list needs to be split and the merged into the second, with a tuple as the output. What's the pythonic way to do this?
``` >>> a = ['A-B', 'A-D', 'B-C', 'B-D', 'B-E', 'C-E', 'D-E', 'D-F', 'E-F', 'E-G', 'F-G'] >>> b = ['7', '5', '8', '9', '7', '5', '15', '6', '8', '9', 11] >>> [x.split('-') + [int(y)] for x, y in zip(a, b)] [['A', 'B', 7], ['A', 'D', 5], ['B', 'C', 8], ['B', 'D', 9], ['B', 'E', 7], ['C', 'E', 5], ['D', 'E', 15], ['D', 'F', 6], ['E', 'F', 8], ['E', 'G', 9], ['F', 'G', 11]] ``` If you really need a tuple just use the `tuple(...)` constructor eg. ``` tuple(x.split('-') + [int(y)]) ```
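Since the question's expected output is a list of tuples, and `b` mixes strings with a stray int `11`, a Python 3 sketch that normalises the weights and builds tuples directly:

```python
a = ['A-B', 'A-D', 'B-C', 'B-D', 'B-E', 'C-E', 'D-E', 'D-F', 'E-F', 'E-G', 'F-G']
b = ['7', '5', '8', '9', '7', '5', '15', '6', '8', '9', 11]  # note the int 11

# int(y) handles both '7' and 11; (*parts, weight) unpacks the split
# pair into a real 3-tuple (Python 3 syntax).
edges = [(*x.split('-'), int(y)) for x, y in zip(a, b)]
print(edges[:3])   # [('A', 'B', 7), ('A', 'D', 5), ('B', 'C', 8)]
print(edges[-1])   # ('F', 'G', 11)
```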
avoiding the split, ``` [(k[0][0],k[0][-1],k[1]) for k in zip(a,b)] ``` or even ``` [(x[0],x[-1],y) for x,y in zip(a,b)] ```
How to combine two lists in a non standard way in Python
[ "", "python", "list", "" ]
I am trying to create a database in a remote system. The relevant code is given below: ``` log = core.getLogger() engine = create_engine('sqlite:////name:pass@192.168.129.139/tmp/nwtopology.db',echo=False) Base = declarative_base() Session = sessionmaker(bind=engine) session = Session() class SourcetoPort(Base): """""" __tablename__ = 'source_to_port' id = Column(Integer, primary_key=True) port_no = Column(Integer) src_address = Column(String,index=True) #----------------------------------------- def __init__(self, src_address,port_no): """""" self.src_address = src_address self.port_no = port_no ``` Obviously I am trying to create the database in a remote system. I am getting the error: ``` OperationalError: (OperationalError) unable to open database file None None ``` My questions are as follows. 1) The user that is running the program on the local machine is not the same as the user that is trying to create the database on the remote machine. Is this an issue? 2) Are there any mechanisms to improve the latency of inserting an entry into the database and querying an entry by creating a local cached copy? Regards, Karthik.
Please see [my answer with detailed explanations](https://stackoverflow.com/a/16443697/1296661) to [your other related question](https://stackoverflow.com/questions/16428891/remote-database-creation-in-sql-alchemy-via-sqlite). Also here are some step-by-step guides on how to install and configure a remote database. I would say try MySQL first - IMHO it is simpler than PostgreSQL. * [How to install mysql on windows](http://www.sitepoint.com/how-to-install-mysql/) * [How to install mysql on linux](http://www.thegeekstuff.com/2008/07/howto-install-mysql-on-linux/) Also there are some video guides available on [youtube](http://www.youtube.com/results?search_query=install%20mysql%20on%20linux&oq=install%20mysql%20on%20linux&gs_l=youtube.3..0.6658.10483.0.10843.22.13.0.8.8.1.140.846.11j2.13.0...0.0...1ac.1.11.youtube.fv0GkcKB9Wk)
sqlite cannot be used over the network, with login credentials, in the same way you would connect to MySQL or PostgreSQL; the correct connection string to pass to `create_engine` is just ``` engine = create_engine('sqlite:////tmp/nwtopology.db') ```
sqlalchemy OperationalError: (OperationalError) unable to open database file None None
[ "", "python", "sqlite", "python-2.7", "sqlalchemy", "" ]
I would like to use distinct on the following table, but only on the 'PlayerID' column. This is what I have at the moment: ``` MATCHID PLAYERID TEAMID MATCHDATE STARTDATE ---------- ---------- ---------- --------- --------- 20 5 2 14-JAN-12 01-JUN-11 20 5 4 14-JAN-12 01-JUN-10 20 7 4 14-JAN-12 01-JUN-11 20 7 2 14-JAN-12 01-JUN-10 20 10 4 14-JAN-12 01-JUN-11 20 11 2 14-JAN-12 01-JUN-10 20 13 2 14-JAN-12 01-JUN-11 20 16 4 14-JAN-12 01-JUN-10 20 17 4 14-JAN-12 01-JUN-10 20 18 4 14-JAN-12 01-JUN-10 20 19 2 14-JAN-12 01-JUN-11 ``` And this is what I want, so that the highest 'StartDate' for each 'PlayerID' is shown and the next row ignored: ``` MATCHID PLAYERID TEAMID MATCHDATE STARTDATE ---------- ---------- ---------- --------- --------- 20 5 2 14-JAN-12 01-JUN-11 20 7 4 14-JAN-12 01-JUN-11 20 10 4 14-JAN-12 01-JUN-11 20 11 2 14-JAN-12 01-JUN-10 20 13 2 14-JAN-12 01-JUN-11 20 16 4 14-JAN-12 01-JUN-10 20 17 4 14-JAN-12 01-JUN-10 20 18 4 14-JAN-12 01-JUN-10 20 19 2 14-JAN-12 01-JUN-11 ``` Current SQL: ``` SELECT pi.MatchID, pi.PlayerID, t.TeamID, m.MatchDate, pf.StartDate FROM Plays_In pi, Match m, Plays_A pa, Team t, Plays_For pf, Made_Up_Of muo, Season s WHERE pi.MatchID = m.MatchID AND m.MatchID = pa.MatchID AND pa.TeamID = t.TeamID AND pf.PlayerID = pi.PlayerID AND pf.TeamID = t.TeamID AND muo.MatchID = pi.MatchID AND muo.SeasonID = s.SeasonID AND pi.MatchID = '&match_id' AND m.MatchDate >= pf.StartDate ORDER BY pi.MatchID ASC, pi.PlayerID ASC, pf.StartDate DESC; ``` It's an Oracle database. Thanks in advance.
A few points... * Unless you're using the joins to `Made_Up_Of` and `Season` to filter out rows, you don't need these tables. I've left them out here; you can add them back in if you need them. * Mark Tickner is correct that you should use the ANSI JOIN syntax. The nice thing about it (other than being standard) is that it puts the join logic right with the table being joined. Once you get used to it I think you'll find it preferable. * What you're really after is the maximum `pf.StartDate` for each `PlayerID`, which is a nice fit for the analytical `ROW_NUMBER()` function. The `PARTITION BY pi.PlayerID ORDER BY pf.StartDate DESC` will basically assign the value `1` to the row with each player's most recent sort date. The outer filters out all rows except those with the `1` ranking. * You can also assign rankings with the `RANK()` and `DENSE_RANK()` analytical functions, but if a player has a tie for the most recent date then all the tied dates will be ranked #1 and you'll get multiple rows for that player. In situations like this where you only want one row per player, use `ROW_NUMBER()` instead. Put it all together and you get this: ``` SELECT MatchID, PlayerID, TeamID, MatchDte, StartDate FROM ( SELECT pi.MatchID, pi.PlayerID, t.TeamID, m.MatchDate, pf.StartDate, ROW_NUMBER() OVER (PARTITION BY pi.PlayerID ORDER BY pf.StartDate DESC) AS StartDateRank FROM Plays_In pi INNER JOIN Match m ON pi.MatchID = m.MatchID INNER JOIN Plays_A pa ON m.MatchID = pa.MatchID INNER JOIN Team t ON pa.TeamID = t.TeamID INNER JOIN Plays_For pf ON pf.PlayerID = pi.PlayerID AND pf.TeamID = t.TeamID WHERE pi.MatchID = '&match_id' AND m.MatchDate >= pf.StartDate ) WHERE StartDateRank = 1 ORDER BY MatchID, PlayerID ``` One final point: based on the `WHERE pi.MatchID = '&match_id'` it looks like you may be using PHP as your front end and the `mysql` functions to do the query. If so, please look into `mysqli` or `PDO` instead, as they'll protect you from SQL Injection. 
The `mysql` functions (which are officially deprecated) will not. --- **Addendum**: More information about `ROW_NUMBER`, with many thanks to @AndriyM. With `ROW_NUMBER`, if a player has more than one row with the most recent date, only one of the rows will be assigned as `ROW_NUMBER = 1`, and that row will be picked more or less randomly. Here's an example, where a player's most recent date is 5/1/2013 and the player has three rows with this date: ``` pi.MatchID pi.PlayerID pf.StartDate ---------- ----------- ------------ 100 1000 05/01/2013 <-- could be ROW_NUMBER = 1 101 1000 04/29/2013 105 1000 05/01/2013 <-- could be ROW_NUMBER = 1 102 1000 05/01/2013 <-- could be ROW_NUMBER = 1 107 1000 04/18/2013 ``` Note that only **one** of the rows above will be assigned `ROW_NUMBER = 1`, and *it can be any of them*. Oracle will decide, not you. If this uncertainty is a problem, order by additional columns to get a clear winner. For this example, the highest `pi.MatchID` will be used to determine the "true" `ROW_NUMBER = 1`: ``` -- replace `ROW_NUMBER...` in the query above with this: ROW_NUMBER() OVER ( PARTITION BY pi.PlayerID ORDER BY pf.StartDate DESC, pi.MatchID DESC) AS StartDateRank ``` Now if there's a tie for the highest `pf.StartDate`, Oracle looks for the highest `pi.MatchID` *within the subset of rows with the highest `pf.StartDate`*. As it turns out, only one row satisfies this condition: ``` pi.MatchID pi.PlayerID pf.StartDate ---------- ----------- ------------ 100 1000 05/01/2013 101 1000 04/29/2013 105 1000 05/01/2013 <-- is ROW_NUMBER = 1: highest MatchID for -- most recent StartDate (5/1/2013) 102 1000 05/01/2013 107 1000 04/18/2013 <-- not considered: has the highest MatchID but isn't -- in the subset with the most recent StartDate ```
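The latest-row-per-group pattern from the answer can be exercised outside Oracle as well. Here SQLite stands in (window functions need SQLite 3.25 or newer, bundled with recent Python builds), and the `plays_for` rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plays_for "
             "(player_id INT, team_id INT, start_date TEXT)")
conn.executemany("INSERT INTO plays_for VALUES (?, ?, ?)", [
    (5, 4, '2010-06-01'), (5, 2, '2011-06-01'),
    (7, 2, '2010-06-01'), (7, 4, '2011-06-01'),
])
# ROW_NUMBER() ranks each player's rows by start_date descending;
# keeping rn = 1 yields exactly one (the most recent) row per player.
rows = conn.execute("""
    SELECT player_id, team_id FROM (
        SELECT player_id, team_id,
               ROW_NUMBER() OVER (PARTITION BY player_id
                                  ORDER BY start_date DESC) AS rn
        FROM plays_for
    ) WHERE rn = 1
    ORDER BY player_id
""").fetchall()
print(rows)  # [(5, 2), (7, 4)]
```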
You could use the rank() function. ``` SELECT * FROM ( SELECT pi.MatchID, pi.PlayerID, t.TeamID, m.MatchDate, pf.StartDate, rank() over (partition by pi.PlayerID order by m.MatchDate desc, rowid) as RNK FROM Plays_In pi, Match m, Plays_A pa, Team t, Plays_For pf, Made_Up_Of muo, Season s WHERE pi.MatchID = m.MatchID AND m.MatchID = pa.MatchID AND pa.TeamID = t.TeamID AND pf.PlayerID = pi.PlayerID AND pf.TeamID = t.TeamID AND muo.MatchID = pi.MatchID AND muo.SeasonID = s.SeasonID AND pi.MatchID = '&match_id' AND m.MatchDate >= pf.StartDate ) WHERE RNK = 1 ORDER BY MatchID ASC, PlayerID ASC, StartDate DESC; ```
Distinct on one column only in Oracle
[ "", "sql", "database", "oracle", "" ]
I want to retrieve the body (text only) of emails using the Python imap and email packages. As per this [SO thread](https://stackoverflow.com/questions/3449220/how-do-i-recieve-a-html-email-as-a-regular-text?rq=1), I'm using the following code: ``` mail = email.message_from_string(email_body) bodytext = mail.get_payload()[ 0 ].get_payload() ``` Though it works fine in some instances, sometimes I get a response similar to the following: ``` [<email.message.Message instance at 0x0206DCD8>, <email.message.Message instance at 0x0206D508>] ```
You are assuming that messages have a uniform structure, with one well-defined "main part". That is not the case; there can be messages with a single part which is not a text part (just an "attachment" of a binary file, and nothing else) or it can be a multipart with multiple textual parts (or, again, none at all) and even if there is only one, it need not be the first part. Furthermore, there are nested multiparts (one or more parts is another MIME message, recursively). In so many words, you must inspect the MIME structure, then decide which part(s) are relevant for your application. If you only receive messages from a fairly static, small set of clients, you may be able to cut some corners (at least until the next upgrade of Microsoft Plague hits) but in general, there simply isn't a hierarchy of any kind, just a collection of (not necessarily always directly related) equally important parts.
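A sketch of that inspection using only the standard library: build a small multipart message, then walk the MIME tree and keep just the `text/plain` parts instead of assuming `get_payload()[0]` is the body.

```python
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def text_parts(msg):
    # walk() recurses into nested multiparts, so this works for
    # arbitrarily structured messages, not just flat ones.
    return [part.get_payload() for part in msg.walk()
            if part.get_content_type() == 'text/plain']

# Build a message with one plain-text part and one HTML part.
outer = MIMEMultipart('mixed')
outer.attach(MIMEText('hello body', 'plain'))
outer.attach(MIMEText('<p>hello body</p>', 'html'))

msg = email.message_from_string(outer.as_string())
bodies = [p.strip() for p in text_parts(msg)]
print(bodies)  # ['hello body']
```

A real application might also check `Content-Disposition` to skip attachments, and pass `decode=True` to `get_payload()` to handle base64 or quoted-printable parts.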
The main problem in my case was that a replied or forwarded message showed up as a message instance in the bodytext. I solved my problem using the following code: ``` bodytext = mail.get_payload()[0].get_payload() if isinstance(bodytext, list): bodytext = ','.join(str(v) for v in bodytext) ```
Get body text of an email using python imap and email package
[ "", "python", "email", "python-2.7", "" ]
I am working on following queries: ``` Query 1: SELECT * From TabA INNER JOIN TabB on TabA.Id=TabB.Id Query 2: SELECT * From TabA WHERE Id in (SELECT Id FROM TabB) Query 3: SELECT TabA.* From TabA INNER JOIN TabB on TabA.Id=TabB.Id ``` I investigate these queries with SQL Server profiler and found some interesting facts. * Query 1 takes 2.312 Seconds * Query 2 takes 0.811 Seconds * Query 3 takes 0.944 Seconds **TabA** 48716 rows **TabB** 62719 rows Basically what I am asking is why Query 1 is taking long time, not Query 3. I already know that 'sub query' is slower than inner join but here Query 2 is fastest; why?
If I had to guess I would say it's because query 1 is pulling the data from both tables, while queries 2 and 3 (approximately the same time) are only pulling data for TabA. One way you could check this is by running the following: ``` SET STATISTICS TIME ON SET STATISTICS IO ON ``` When I ran ``` SELECT * FROM sys.objects ``` I saw the following results. ``` SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 104 ms. (242 row(s) affected) Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'sysschobjs'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'syssingleobjrefs'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'syspalnames'. Scan count 1, logical reads 2, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 866 ms. ``` You can take a look at the number of scans, logical reads and physical reads for each query. Physical reads of course take much longer and represent reading from the disk into the cache. If all of your reads are logical reads then your table is completely in cache. I would be willing to bet that if you look you will see a lot more logical reads on TabB for query 1 than for 2 and 3. **EDIT:** Just out of curiosity I did some tests and blogged the results [here](http://sqlstudies.com/2013/05/02/comparing-an-inner-join-vs-a-subquery/).
Query 1: This query is returning rows from all rows in both TabA and TabB so the covering index for both tables requires all rows from each table to be included. To see exactly what's going on you'd want to look at the query plan. Query 2 & Query 3: You're returning all rows from TabA and you only need the index for the Id column for TabB. I'm guessing the difference here has something to do with the table statistics, but (once again) we'd have to see the query plan to know exactly what's going on.
SQL inner join vs subquery
[ "", "sql", "sql-server", "sql-server-2008", "" ]
How do I print the memory address of a variable in Python 2.7? I know id() returns the 'id' of a variable or object, but this doesn't return the expected 0x3357e182 style I was expecting to see for a memory address. I want to do something like `print &x`, where x is a C++ int variable for example. How can I do this in Python?
`id` is the method you want to use: to convert it to hex: `hex(id(variable_here))` For instance: ``` x = 4 print hex(id(x)) ``` Gave me: ``` 0x9cf10c ``` Which is what you want, right? (Fun fact, binding two variables to the same `int` may result in the same memory address being used.) Try: ``` x = 4 y = 4 w = 9999 v = 9999 a = 12345678 b = 12345678 print hex(id(x)) print hex(id(y)) print hex(id(w)) print hex(id(v)) print hex(id(a)) print hex(id(b)) ``` This gave me identical pairs, even for the large integers.
According to [the manual](http://docs.python.org/2/library/functions.html#id), in CPython `id()` is the actual memory address of the variable. If you want it in hex format, call `hex()` on it. ``` x = 5 print hex(id(x)) ``` this will print the memory address of x.
print memory address of Python variable
[ "", "python", "memory-address", "" ]
This is what I have tried, but it did not work. ``` SELECT COUNT(*) AS month_count FROM `wp_postmeta` WHERE `meta_key`='from_dt' AND (`meta_value` BETWEEN '$st' AND '$end') AND (`meta_value` BETWEEN '$st1' AND '$end1'); ``` I'm trying to count the number of months which fall between the epoch times $st and $end of this year and between $st1 and $end1 of next year. Say I have 2 dates (Feb 2013 and Feb 2014) in `meta_value` fields; I want the query to return 2 (as there are 2 Februarys). How do I use 2 BETWEEN operators in one SQL statement?
Nothing technically wrong with your query, but depending on your desired results, you probably want to be using `OR` -- just make sure your parentheses are in the correct place: ``` SELECT COUNT(*) AS month_count FROM `wp_postmeta` WHERE `meta_key`='from_dt' AND ((`meta_value` BETWEEN '$st' AND '$end') OR (`meta_value` BETWEEN '$st1' AND '$end1')) ``` Your current query is requiring the meta\_value be between both sets of numbers which may be your problem.
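The pattern can be checked quickly against an in-memory SQLite table; the epoch values below are invented stand-ins for `$st`/`$end` and `$st1`/`$end1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wp_postmeta (meta_key TEXT, meta_value INTEGER)")
conn.executemany("INSERT INTO wp_postmeta VALUES ('from_dt', ?)",
                 [(100,), (250,), (400,), (550,)])

# OR of two BETWEEN ranges: a row matches if it falls in either range.
count = conn.execute("""
    SELECT COUNT(*) FROM wp_postmeta
    WHERE meta_key = 'from_dt'
      AND (meta_value BETWEEN 100 AND 200
           OR meta_value BETWEEN 500 AND 600)
""").fetchone()[0]
print(count)  # 2  (100 falls in the first range, 550 in the second)
```

With `AND` between the two `BETWEEN` clauses the count would be 0, since no value can sit inside both disjoint ranges at once, which is exactly why the original query returned nothing.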
do you mean this? ``` SELECT COUNT(*) AS month_count FROM `wp_postmeta` WHERE (`meta_key`='from_dt' AND `meta_value` BETWEEN '$st' AND '$end') OR (`meta_key`='from_dt' AND `meta_value` BETWEEN '$st1' AND '$end1') ```
using BETWEEN operator twice in one SQL statement
[ "", "mysql", "sql", "phpmyadmin", "" ]
I have a Python function that does a lot of major work on an XML file. When using this function, I want two options: either pass it the name of an XML file, or pass it a pre-parsed ElementTree instance. I'd like the function to be able to determine what it was given in its variable. Example: ``` def doLotsOfXmlStuff(xmlData): if (xmlData != # if xmlData is not ET instance): xmlData = ET.parse(xmlData) # do a bunch of stuff return stuff ``` The app calling this function may need to call it just once, or it may need to call it several times. Calling it several times and parsing the XML each time is hugely inefficient and unnecessary. Creating a whole class just to wrap this one function seems a bit overkill and would end up requiring some code refactoring. For example: ``` ourResults = doLotsOfXmlStuff(myObject) ``` would have to become: ``` xmlObject = XMLProcessingObjectThatHasOneFunction("data.xml") ourResult = xmlObject.doLotsOfXmlStuff() ``` And if I had to run this on lots of small files, a class would be created each time, which seems inefficient. Is there a simple way to simply detect the type of the variable coming in? I know a lot of Pythoners will say "you shouldn't have to check" but here's one good instance where you would. In other strong-typed languages I could do this with method overloading, but that's obviously not the Pythonic way of things...
The principle of "duck typing" is that you shouldn't care so much about the specific type of an object but rather you should check whether is supports the APIs in which you're interested. In other words if the object passed to your function through the xmlData argument contains some method or attribute which is indicative of an ElementTree that's been parsed then you just use those methods or attributes ... if it doesn't have the necessary attribute then you are free to then pass it through some parsing. So functions/methods/attributes of the result ET are you looking to use? You can use `hasattr()` to check for that. Alternatively you can wrap your call to any such functionality with a `try: ... except AttributeError:` block. Personally I think `if not hasattr(...):` is a bit cleaner. (If it doesn't have the attribut I want, then rebind the name to something which has been prepared, parsed, whatever as I need it). This approach has advantages over `isinstance()` because it allows users of your functionality to pass references to objects in their own classes which have extended ET through composition rather than inheritance. In other words if I wrap an ET like object in my own class, and expose the necessary functionality then I should be able to pass reference s to your function and have you just treat my object as if it were a "duck" even if it wasn't a descendant of a duck. If you need feathers, a bill, and webbed feet then just check for one of those and try to use the rest. I may be a black box containing a duck and I may have provided holes through which the feet, duck-bill, and feathers are accessible.
This is a fairly normal pattern (e.g. [Python function that accepts file object or path](https://stackoverflow.com/questions/6783472/python-function-that-accepts-file-object-or-path?rq=1)). Just use `isinstance`: ``` def doLotsOfXmlStuff(xmlData): if not isinstance(xmlData, ET): xmlData = ET.parse(xmlData) ... ``` If you need to do cleanup (e.g. closing files) then calling your function recursively is OK: ``` def doLotsOfXmlStuff(xmlData): if not isinstance(xmlData, ET): xmlData = ET.parse(xmlData) ret = doLotsOfXmlStuff(xmlData) ... # cleanup (or use a context manager) return ret ... ```
Accept different types in python function?
[ "", "python", "function", "object", "types", "" ]
I just want to know how to clear a `multiprocessing.Queue` like a `queue.Queue` in Python: ``` >>> import queue >>> queue.Queue().clear() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Queue' object has no attribute 'clear' >>> queue.Queue().queue.clear() ``` ``` >>> import multiprocessing >>> multiprocessing.Queue().clear() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Queue' object has no attribute 'clear' >>> multiprocessing.Queue().queue.clear() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Queue' object has no attribute 'queue' ```
There is no direct way of clearing a `multiprocessing.Queue`. I believe the closest you have is `close()`, but that simply states that no more data will be pushed to that queue, and will close it when all data has been flushed to the pipe.
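In practice the usual workaround is to drain the queue manually. A sketch using `get_nowait()` so an empty queue can never block; the short sleep is only there because the queue's feeder thread delivers items asynchronously in this single-process demo:

```python
import multiprocessing
import queue
import time

def drain(q):
    # There is no clear(); remove items until get_nowait() reports Empty.
    while True:
        try:
            q.get_nowait()
        except queue.Empty:
            return

q = multiprocessing.Queue()
for i in range(5):
    q.put(i)
time.sleep(0.2)  # give the feeder thread time to flush items into the pipe
drain(q)
print(q.empty())  # True
```

Catching `queue.Empty` avoids the race inherent in looping on `q.empty()`, which is only advisory for a `multiprocessing.Queue`.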
So, I took a look at the Queue class, and you may want to try this code: ``` while not some_queue.empty(): some_queue.get() # as docs say: Remove and return an item from the queue. ```
How to clear a multiprocessing.Queue?
[ "", "python", "multiprocessing", "queue", "" ]
Consider the following: ``` >>> import re >>> a = "first:second" >>> re.findall("[^:]*", a) ['first', '', 'second', ''] >>> re.sub("[^:]*", r"(\g<0>)", a) '(first):(second)' ``` `re.sub()`'s behavior makes more sense initially, but I can also understand `re.findall()`'s behavior. After all, you can match an empty string between `first` and `:` that consists only of non-colon characters (exactly zero of them), but why isn't `re.sub()` behaving the same way? Shouldn't the result of the last command be `(first)():(second)()`?
You use `*`, which allows empty matches: ``` 'first' -> matched ':' -> not in the character class but, as the pattern can be empty due to the *, an empty string is matched -->'' 'second' -> matched '$' -> can contain an empty string before, an empty string is matched -->'' ``` Quoting the [documentation for `re.findall()`](http://docs.python.org/2/library/re.html#re.findall): > Empty matches are included in the result unless they touch the beginning of another match. The reason you don't see empty matches in sub results is explained in the [documentation for `re.sub()`](http://docs.python.org/2/library/re.html#re.sub): > Empty matches for the pattern are replaced only when not adjacent to a previous match. Try this: ``` re.sub('(?:Choucroute garnie)*', '#', 'ornithorynque') ``` And now this: ``` print re.sub('(?:nithorynque)*', '#', 'ornithorynque') ``` There are no consecutive `#` characters in the second result.
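The `findall` side can be made concrete with `re.finditer`, which reports where each (possibly empty) match starts:

```python
import re

a = "first:second"
# Empty matches land at index 5 (just before the colon, which the
# pattern cannot consume) and at index 12 (end of string).
matches = [(m.start(), m.group()) for m in re.finditer("[^:]*", a)]
print(matches)  # [(0, 'first'), (5, ''), (6, 'second'), (12, '')]
```

Note that in Python 3.7 the `re.sub` rule changed: empty matches adjacent to a previous match are now replaced too, so on modern Python the substitution does yield `(first)():(second)()` and the original asymmetry disappears.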
The algorithms for handling empty matches are different, for some reason. In the case of `findall`, it works like (an optimized version of) this: for every possible start index 0 <= i <= len(a), if the string matches at i, then append the match; and avoid overlapping results by using this rule: if there is a match of length m at i, don't look for the next match before i+m. The reason your example returns `['first', '', 'second', '']` is that the empty matches are found immediately after `first` and `second`, but not after the colon --- because looking for a match starting from that position returns the full string `second`. In the case of `sub`, the difference is, as you noticed, that it explicitly ignores matches of length 0 that occurs immediately after another match. While I see why this might help avoid unexpected behavior of `sub`, I'm unsure why there is this difference (e.g. why wouldn't `findall` use the same rule).
Why does re.findall() find more matches than re.sub()?
[ "python", "regex" ]
In python 2.7.3, how can I start the loop from the second row? e.g. ``` first_row = csvreader.next(); for row in ???: #expect to begin the loop from second row blah...blah... ```
``` first_row = next(csvreader) # Compatible with Python 3.x (also 2.7) for row in csvreader: # begins with second row # ... ``` Testing it really works: ``` >>> import csv >>> csvreader = csv.reader(['first,second', '2,a', '3,b']) >>> header = next(csvreader) >>> for line in csvreader: print line ['2', 'a'] ['3', 'b'] ```
``` next(reader, None) # Don't raise an exception if no line exists ``` looks most readable IMO. The other alternative is ``` from itertools import islice for row in islice(reader, 1, None): ``` However, shouldn't you be using the header? Consider a `csv.DictReader`, which by default sets the fieldnames to the first line.
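To illustrate the `DictReader` suggestion from the last paragraph (a sketch, feeding it an in-memory list of lines in place of a real file): it consumes the header line itself and yields one mapping per remaining row, so the "skip the first row" problem disappears entirely:

```python
import csv

# DictReader treats the first line as the field names automatically,
# so iteration starts at the second row of the file
reader = csv.DictReader(['first,second', '2,a', '3,b'])
for row in reader:
    print(row['first'], row['second'])
# 2 a
# 3 b
```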
python csv reader, loop from the second row
[ "python" ]
I have a table where multiple account numbers are associated with different IDs (DR\_NAME). Each ID could have as few as 0 accounts and as many as 16. I believe UNPIVOT would work, but I'm on Oracle 10g, which does not support this. ``` DR_NAME ACCT1 ACCT2 ACCT3 ACCT4 ====================================== SMITH 1234 JONES 5678 2541 2547 MARK NULL WARD 8754 6547 ``` I want to display a new line for each name with only 1 account number per line ``` DR_NAME ACCT ============== SMITH 1234 JONES 5678 JONES 2541 JONES 2547 MARK NULL WARD 8754 WARD 6547 ```
Oracle 10g does not have an `UNPIVOT` function but you can use a `UNION ALL` query to unpivot the columns into rows: ``` select t1.DR_NAME, d.Acct from yourtable t1 left join ( select DR_NAME, ACCT1 as Acct from yourtable where acct1 is not null union all select DR_NAME, ACCT2 as Acct from yourtable where acct2 is not null union all select DR_NAME, ACCT3 as Acct from yourtable where acct3 is not null union all select DR_NAME, ACCT4 as Acct from yourtable where acct4 is not null ) d on t1.DR_NAME = d.DR_NAME; ``` See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!4/b8801/8). This query uses a `UNION ALL` to convert the columns into rows. I included a `where` clause to remove any `null` values, otherwise you will get multiple rows for each account where the acct value is null. Excluding the `null` values will drop the `dr_name = Mark` which you showed that you want in the final result. To include the rows that only have `null` values, I added the join to the table again.
If you're only interested in inserting these records then take a look at multitable insert -- a single scan of the data and multiple rows generated, so it's very efficient. Code examples here: <http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF01604> Note that you can reference the same table multiple times, using syntax along the lines of ... ``` insert all when acct1 is not null then into target_table (..) values (dr_name,acct1) when acct2 is not null then into target_table (..) values (dr_name,acct2) when acct3 is not null then into target_table (..) values (dr_name,acct3) when acct4 is not null then into target_table (..) values (dr_name,acct4) select dr_name, acct1, acct2, acct3, acct4 from my_table. ```
How do I return multiple column values as new rows in Oracle 10g?
[ "sql", "oracle10g" ]
I'm trying to use celery to schedule and run tasks on a fleet of servers. Each task is somewhat long running (few hours), and involves using subprocess to call a certain program with the given inputs. This program produces a lot of output both in stdout and stderr. Is there some way to show the output produced by the program to the client in near real time? Stream the output, so that the client can watch the output spewed by the task running on the server without logging into the server?
You did not specify many requirements and constraints. I'm going to assume you already have a redis instance somewhere. What you can do is read the output from the other process line by line and publish it through redis: Here's an example where you can `echo` data into a file `/tmp/foo` for testing: ``` import shlex import subprocess import redis redis_instance = redis.Redis() p = subprocess.Popen(shlex.split("tail -f /tmp/foo"), stdout=subprocess.PIPE) while True: line = p.stdout.readline() if line: redis_instance.publish('process log', line) else: break ``` In a separate process: ``` import redis redis_instance = redis.Redis() pubsub = redis_instance.pubsub() pubsub.subscribe('process log') while True: for message in pubsub.listen(): print message # or use websockets to communicate with a browser ``` If you want the process to end, you can e.g. send a "quit" after the celery task is done. You can use different channels (the string in `subscribe`) to separate the output from different processes. You can also store your log output in redis, if you want to, ``` redis_instance.rpush('process log', message) ``` and later retrieve it in full.
The one way I see how to do it is to write custom Logger which will be used for stderr and stdout (see the [docs](http://docs.celeryproject.org/en/latest/reference/celery.app.log.html): ``` from celery.app.log import Logger Logger.redirect_stdouts_to_logger(MyLogger()) ``` Your logger can save the data into the database, Memcached, Redis or whatever shared storage you'll use to get the data. I'm not sure about the structure of the [logger](http://docs.python.org/2/library/logging.html), but I guess something like this will work: ``` from logging import Logger class MyLogger(Logger): def log(lvl, msg): # Do something with the message ```
Stream results in celery
[ "python", "celery" ]
I have a long int and I don't want it to be truncated when I print it or convert it to a string. The following does not work: ``` import pandas as pd b = pd.Series({"playerid": 544911367940993}, dtype='float64') print("%s" % b['playerid']) print(str(b['playerid'])) ```
If you're just looking to print it out as in the OP, you can just use the `%d` format string ``` In [5]: print('%d' % b['playerid']) 544911367940993 ``` You can also use the format() function: ``` In [25]: x = '{:.0f}'.format(b['playerid']) In [26]: x Out[26]: '544911367940993' ```
Neither the printing nor the formatting using `"%s"` truncates your long int: ``` >>> "%s" % 12345678901234567898012345678901234567890 '12345678901234567898012345678901234567890' ``` So I guess that passing it into `pd.Series()` and/or getting it from that object by writing `b['playerid']` does the truncation.
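One observation neither answer spells out: no precision is actually lost here. 544911367940993 is below 2**53, so a 64-bit float stores it exactly; the apparent truncation is purely a display artifact of how older `str()`/`"%s"` formatting rounded floats to a few significant digits. Integer-style formatting therefore recovers every digit (a quick demonstration, independent of pandas):

```python
value = float(544911367940993)

# The value round-trips exactly, because it fits in the float's 53-bit mantissa
print(int(value) == 544911367940993)  # True: no precision was lost

# So integer formatting shows the full number
print('%d' % value)  # 544911367940993
```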
How can I NOT truncate a long int/float when printing in Python
[ "python", "printing", "long-integer" ]
I would like to implement a `map`-like function which preserves the type of input sequence. `map` does not preserve it: ``` map(str, (8, 9)) # input is a tuple => ['8', '9'] # output is a list ``` One way I came up with is this: ``` def map2(f, seq): return type(seq)( f(x) for x in seq ) map2(str, (1,2)) => ('1', '2') map2(str, [3,4]) => ['3', '4'] map2(str, deque([5,6])) => deque(['5', '6']) ``` However, this does not work if `seq` is an iterator/generator. `imap` works in this case. So my questions are: 1. Is there a better way to implement `map2`, which supports list, tuple, and many others? 2. Is there an elegant way to extend `map2` to also support generators (like `imap` does)? Clearly, I'd like to avoid: `try: return map2(...) except TypeError: return imap(...)` The reason I'm looking for something like that is that I'm writing a function-decorator which converts the return value, from type X to Y. If the original function returns a sequence (let's assume a sequence can only be a list, a tuple, or a generator), I assume it is a sequence of X's, and I want to convert it to the corresponding sequence of Y's (while preserving the type of the sequence). As you probably realize, I'm using python 2.7, but python 3 is also of interest.
First, `type(seq)( f(x) for x in seq )` is really just `type(seq)(imap(f, seq))`. Why not just use that? Second, what you're trying to do doesn't make sense in general. `map` takes any [*iterable*](http://docs.python.org/2.7/glossary.html#term-iterable), not just a [*sequence*](http://docs.python.org/2.7/glossary.html#term-sequence). The difference is, basically, that a sequence has a `len` and is randomly-accessible. There is no rule that an iterable of type X can be constructed from values of type Y by calling `type(X)(y_iter)`. In fact, while it's generally true for sequences, there are very few other examples for which it *is* true. If what you want is to handle a few special types specially, you can do that: ``` def map2(f, seq): it = imap(f, seq) if isinstance(seq, (tuple, list)): return type(seq)(it) else: return it ``` Or, if you want to assume that all sequences can be constructed this way (which is true for most built-in sequences, but consider, e.g. `xrange`—which wasn't designed as a sequence but does meet the protocol—and of course there are no guarantees beyond what's built in): ``` def map2(f, seq): it = imap(f, seq) try: len(seq) except: return it else: return type(seq)(it) ``` You *could* assume that any iterable type that can be constructed from an iterable is a sequence (as you suggested in your question)… but this is likely to lead to more false positives than benefits, so I wouldn't. Again, remember that `len` is part of the definition of being a sequence, while "constructible from an iterator" is not, and there are perfectly reasonable iterable types that will do something completely different when given an iterator. Whatever you do is going to be a hack, because the very intention is a hack, and goes against the explicit design wishes of the Python developers. The whole point of the iterator/iterable protocol is that you should care about the type of the iterable as rarely as possible. 
That's why Python 3.x has gone further and replaced the list-based functions like `map` and `filter` with iterator-based functions instead. --- So, how do we turn one of these transformations into a decorator? Well, first, let's skip the decorator bit and just write a higher-order function that takes an `imap`-like function and returns an equivalent function with this transformation applied to it: ``` def sequify(func): def wrapped(f, seq): it = func(f, seq) try: len(seq) except: return it else: return type(seq)(it) return wrapped ``` So: ``` >>> seqmap = sequify(itertools.imap) >>> seqmap(int, (1.2, 2.3)) (1, 2) >>> sequify(itertools.ifilter)(lambda x: x>0, (-2, -1, 0, 1, 2)) (1, 2) ``` Now, how do we turn that into a decorator? Well, a function that returns a function already *is* a decorator. You probably want to add in [`functools.wraps`](http://docs.python.org/2.7/library/functools.html#functools.wraps) (although you *may* want that even in the non-decorator case), but that's the only change. For example, I can write a generator that acts like imap, or a function that returns an iterator, and automatically transform either into a seqmap-like function: ``` @sequify def map_and_discard_none(func, it): for elem in imap(func, it): if elem is not None: yield elem ``` Now: ``` >>> map_and_discard_none(lambda x: x*2 if x else x, (1, 2, None)) (2, 4) ``` --- This, of course, only works for functions with `map`-like syntax—that is, they take a function and an iterable. (Well, it will accidentally work for functions that take various kinds of wrong types—e.g., you can call `sequify(itertools.count(10, 5))` and it will successfully detect that `5` isn't a sequence and therefore just pass the iterator back untouched.) 
To make it more general, you could do something like: ``` def sequify(func, type_arg=1): def wrapped(*args, **kwargs): it = func(*args, **kwargs) try: len(args[type_arg]) except: return it else: return type(args[type_arg])(it) return wrapped ``` And now, you can go crazy with `sequify(itertools.combinations, 0)` or whatever you prefer. In this case, to make it a useful decorator, you probably want to go a step further: ``` def sequify(type_arg=1): def wrapper(func): def wrapped(*args, **kwargs): it = func(*args, **kwargs) try: len(args[type_arg]) except: return it else: return type(args[type_arg])(it) return wrapped return wrapper ``` So you can do this: ``` @sequify(3) def my_silly_function(pred, defval, extrastuff, main_iterable, other_iterable): ```
Your formalism also doesn't work for `map(str,'12')` either. Ultimately, you don't know what arguments the type of the iterable will actually take in the constructor/initializer, so there's no way to do this in general. Also note that `imap` doesn't give you the same type as a generator: ``` >>> type(x for x in range(10)) <type 'generator'> >>> type(imap(str,range(10))) <type 'itertools.imap'> >>> isinstance((x for x in range(10)),type(imap(str,range(10)))) False ``` You might be thinking to yourself "surely with python's introspection, I could inspect the arguments to the initializer" -- And you'd be right! However, even if you know how many arguments go to the initializer, and what their names are, you still can't get any information on what you're actually supposed to pass to them. I suppose you could write some sort of machine learning algorithm to figure it out from the docstrings ... but I think that's well beyond the scope of this question (and it assumes the author was behaving nicely and creating good docstrings to begin with).
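For the Python 3 side of the question, one more option neither answer mentions: `functools.singledispatch` (3.4+) lets you register exactly the container types you care about instead of probing with `len()`. It dispatches on the *first* argument, so the sequence has to come first — note the flipped argument order relative to `map`. This is only a sketch; the `map2` name is from the question:

```python
from functools import singledispatch

@singledispatch
def map2(seq, f):
    # default: any other iterable -> a lazy iterator, like imap/Python 3 map
    return map(f, seq)

@map2.register(list)
def _(seq, f):
    return [f(x) for x in seq]

@map2.register(tuple)
def _(seq, f):
    return tuple(f(x) for x in seq)

print(map2([3, 4], str))   # ['3', '4']
print(map2((1, 2), str))   # ('1', '2')
```

Generators and other iterators fall through to the default implementation and stay lazy, which matches the `imap` behavior the question asks for.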
`map`-like function preserving sequence-type
[ "python", "list", "python-2.7", "generator" ]
Alright, I have a relation which stores two keys, a product Id and an attribute Id. I want to figure out which product is most similar to a given product. (Attributes are actually numbers, but that made the example more confusing, so they have been changed to letters to simplify the visual representation.) Prod\_att ``` Product | Attributes 1 | A 1 | B 1 | C 2 | A 2 | B 2 | D 3 | A 3 | E 4 | A ``` Initially this seems fairly simple: just select the attributes that a product has and then count the number of attributes per product that are shared. The result of this is then compared to the number of attributes a product has, and I can see how similar two products are. This works for products with a large number of attributes relative to their compared products, but issues arise when products have very few attributes. For example, product 3 will have a tie with almost every other product (as A is very common). ``` SELECT Product, count(Attributes) FROM Prod_att WHERE Attributes IN (SELECT Attributes FROM prod_att WHERE Product = 1) GROUP BY Product ; ``` Any suggestions on how to fix this or improvements to my current query? Thanks! **Edit:** Product 4 will return count() = 1 for all products. I would like to show that Product 3 is more similar, as it has fewer differing attributes.
Try this ``` SELECT a_product_id, COALESCE( b_product_id, 'no_matchs_found' ) AS closest_product_match FROM ( SELECT *, @row_num := IF(@prev_value=A_product_id,@row_num+1,1) AS row_num, @prev_value := a_product_id FROM (SELECT @prev_value := 0) r JOIN ( SELECT a.product_id as a_product_id, b.product_id as b_product_id, count( distinct b.Attributes ), count( distinct b2.Attributes ) as total_products FROM products a LEFT JOIN products b ON ( a.Attributes = b.Attributes AND a.product_id <> b.product_id ) LEFT JOIN products b2 ON ( b2.product_id = b.product_id ) /*WHERE */ /* a.product_id = 3 */ GROUP BY a.product_id, b.product_id ORDER BY 1, 3 desc, 4 ) t ) t2 WHERE row_num = 1 ``` The above `query` gets the `closest matches` for all the products, you can include the `product_id` in the innermost query, to get the results for a particular `product_id`, I have used `LEFT JOIN` so that even if a `product` has no matches, its displayed [**SQLFIDDLE**](http://www.sqlfiddle.com/#!2/e1ff6/30) Hope this helps
Try the ["Lower bound of Wilson score confidence interval for a Bernoulli parameter"](http://www.evanmiller.org/how-not-to-sort-by-average-rating.html). This explicitly deals with the problem of statistical confidence when you have small n. It looks like a lot of math, but actually this is about the minimum amount of math you need to do this sort of thing right. And the website explains it pretty well. This assumes it is possible to make the step from positive / negative scoring to your problem of matching / not matching attributes. Here's an example for positive and negative scoring and 95% CL: ``` SELECT widget_id, ((positive + 1.9208) / (positive + negative) - 1.96 * SQRT((positive * negative) / (positive + negative) + 0.9604) / (positive + negative)) / (1 + 3.8416 / (positive + negative)) AS ci_lower_bound FROM widgets WHERE positive + negative > 0 ORDER BY ci_lower_bound DESC; ```
SQL- Selecting the most similar product
[ "mysql", "sql" ]
Is there a built-in Numpy function to convert a complex number in polar form, a magnitude and an angle (degrees) to one in real and imaginary components? Clearly I could write my own but it seems like the type of thing for which there is an optimised version included in some module? More specifically, I have an array of magnitudes and an array of angles: ``` >>> a array([1, 1, 1, 1, 1]) >>> b array([120, 121, 120, 120, 121]) ``` And what I would like is: ``` >>> c [(-0.5+0.8660254038j),(-0.515038074+0.8571673007j),(-0.5+0.8660254038j),(-0.5+0.8660254038j),(-0.515038074+0.8571673007j)] ```
There isn't a function to do exactly what you want, but there is **[angle](http://docs.scipy.org/doc/numpy/reference/generated/numpy.angle.html#numpy.angle)**, which does the hardest part. So, for example, one could define two functions: ``` from numpy import exp, angle def P2R(radii, angles): return radii * exp(1j*angles) def R2P(x): return abs(x), angle(x) ``` These functions are using radians for input and output, and for degrees, one would need to do the conversion to radians in both functions. In the numpy [reference](http://docs.scipy.org/doc/numpy/reference/routines.math.html#handling-complex-numbers) there's a section on handling complex numbers, and this is where the function you're looking for would be listed (so since they're not there, I don't think they exist within numpy).
There's an error in the previous answer that uses `numpy.vectorize` - cmath.rect is not a module that can be imported. Numpy also provides the deg2rad function that provides a cleaner piece of code for the angle conversion. Another version of that code could be: ``` import numpy as np from cmath import rect nprect = np.vectorize(rect) c = nprect(a, np.deg2rad(b)) ``` The code uses numpy's vectorize function to return a numpy style version of the standard library's `cmath.rect` function that can be applied element wise across numpy arrays.
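To clear up the "`cmath.rect` is not a module" confusion: `rect` and `polar` are plain functions inside the standard library's `cmath` module (they are what the `np.vectorize` version wraps), so for scalar values you can skip numpy entirely. A sketch, converting degrees with `math.radians`:

```python
import cmath
import math

# polar -> rectangular: magnitude 1, angle 120 degrees
c = cmath.rect(1, math.radians(120))
print(c)  # approximately (-0.5+0.8660254038j)

# rectangular -> polar (the angle comes back in radians)
mag, ang = cmath.polar(c)
print(mag, math.degrees(ang))  # approximately 1.0 120.0
```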
Python Numpy - Complex Numbers - Is there a function for Polar to Rectangular conversion?
[ "python", "numpy", "complex-numbers" ]
How can I get a string of `0`s and `1`s, according to the bits of the IEEE 754 representation of a 32 bit float? For example, given an input `1.00`, the result should be `'00111111100000000000000000000000'`.
You can do that with the `struct` package: ``` import struct def binary(num): return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num)) ``` That packs it as a network byte-ordered float, and then converts each of the resulting bytes into an 8-bit binary representation and concatenates them out: ``` >>> binary(1) '00111111100000000000000000000000' ``` **Edit**: There was a request to expand the explanation. I'll expand this using intermediate variables to comment each step. ``` def binary(num): # Struct can provide us with the float packed into bytes. The '!' ensures that # it's in network byte order (big-endian) and the 'f' says that it should be # packed as a float. Alternatively, for double-precision, you could use 'd'. packed = struct.pack('!f', num) print 'Packed: %s' % repr(packed) # For each character in the returned string, we'll turn it into its corresponding # integer code point # # [62, 163, 215, 10] = [ord(c) for c in '>\xa3\xd7\n'] integers = [ord(c) for c in packed] print 'Integers: %s' % integers # For each integer, we'll convert it to its binary representation. binaries = [bin(i) for i in integers] print 'Binaries: %s' % binaries # Now strip off the '0b' from each of these stripped_binaries = [s.replace('0b', '') for s in binaries] print 'Stripped: %s' % stripped_binaries # Pad each byte's binary representation's with 0's to make sure it has all 8 bits: # # ['00111110', '10100011', '11010111', '00001010'] padded = [s.rjust(8, '0') for s in stripped_binaries] print 'Padded: %s' % padded # At this point, we have each of the bytes for the network byte ordered float # in an array as binary strings. 
Now we just concatenate them to get the total # representation of the float: return ''.join(padded) ``` And the result for a few examples: ``` >>> binary(1) Packed: '?\x80\x00\x00' Integers: [63, 128, 0, 0] Binaries: ['0b111111', '0b10000000', '0b0', '0b0'] Stripped: ['111111', '10000000', '0', '0'] Padded: ['00111111', '10000000', '00000000', '00000000'] '00111111100000000000000000000000' >>> binary(0.32) Packed: '>\xa3\xd7\n' Integers: [62, 163, 215, 10] Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010'] Stripped: ['111110', '10100011', '11010111', '1010'] Padded: ['00111110', '10100011', '11010111', '00001010'] '00111110101000111101011100001010' ```
Here's an ugly one ... ``` >>> import struct >>> bin(struct.unpack('!i',struct.pack('!f',1.0))[0]) '0b111111100000000000000000000000' ``` Basically, I just used the struct module to convert the float to an int ... --- Here's a slightly better one using `ctypes`: ``` >>> import ctypes >>> bin(ctypes.c_uint32.from_buffer(ctypes.c_float(1.0)).value) '0b111111100000000000000000000000' ``` Basically, I construct a `float` and use the same memory location, but I tag it as a `c_uint32`. The `c_uint32`'s value is a python integer which you can use the builtin `bin` function on. **Note**: by switching types we can do reverse operation as well ``` >>> ctypes.c_float.from_buffer(ctypes.c_uint32(int('0b111111100000000000000000000000', 2))).value 1.0 ``` also for double-precision 64-bit float we can use the same trick using `ctypes.c_double` & `ctypes.c_uint64` instead.
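On Python 3, `int.from_bytes` shortens the `struct` approach to a single expression (a Python 3-only sketch; the answers above target 2.x, and the `float_bits` name is mine):

```python
import struct

def float_bits(num):
    # pack as a big-endian float32, then view those 4 bytes as one integer
    # and format it as a zero-padded 32-bit binary string
    return format(int.from_bytes(struct.pack('!f', num), 'big'), '032b')

print(float_bits(1.0))   # 00111111100000000000000000000000
print(float_bits(0.32))  # 00111110101000111101011100001010
```

For 64-bit doubles, use `'!d'` and a width of `'064b'` instead.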
Binary representation of float in Python (bits not hex)
[ "python", "binary", "floating-point" ]
--- I have a large list of numbers, and I want to see if any of them are approximately equal. If 2 numbers are "approximately equal" (for my purposes), both of them fall within 10% of each other (see the following 2 examples.) Then I want to sort them into separate lists of approximately equal numbers. Example #1 Compare 5.0 and 5.5: 5.5 +/- 10% = 4.95 to 6.05 (and 5.0 is in this range) 5.0 +/- 10% = 4.50 to 5.50 (and 5.5 is in this range) Therefore, 5.0 and 5.5 are approximately equal. Example #2 Compare 5.0 and 5.6: 5.6 +/- 10% = 5.04 to 6.16 (and 5.0 is in this range) 5.0 +/- 10% = 4.50 to 5.50 (and 5.6 is in NOT this range) Therefore, 5.0 and 5.6 are NOT approximately equal. Summary of what I need to do: Input = {4.0, 4.1, 4.2, 4.0, 9.0, 9.4, 8.9, 4.3} Desired output = {4.0, 4.1, 4.2, 4.0, 4.3} and {9.0, 9.4, 8.9}
``` input_list = [4.0, 4.1, 4.2, 4.0, 9.0, 9.4, 8.9, 4.3] results = {input_list[0]: [input_list[0]]} # Start with first value for value in input_list[1:]: # loop through our entire list after first value hi = value * 1.1 low = value * 0.9 print("Value: {0}\tHi: {1}\tLow:{2}".format(value, hi, low)) found_similar = False # reset the flag once per value, before scanning for existing in results: # search through our result set if low < existing < hi: # if we find a match results[existing].append(value) # we add our value to the list for that set found_similar = True break if not found_similar: # if we looped through our entire results without a match results[value] = [value] # Create a new entry in our results dictionary for entry in results: print(results[entry]) ``` Will give: ``` results = { 9.0: [9.0, 9.4, 8.9], 4.0: [4.0, 4.1, 4.2, 4.0, 4.3] } ``` This code starts with the first value in your list, and finds all subsequent values that are within 10% of that one. So in your example, it starts with 4, and finds all similar values. Any value that isn't within 10% gets added to a new "set". So once it reaches 9.0, it sees that it's not a match, so it adds a new result set to the `results` dictionary, with a key of `9.0`. Now when it considers 9.4, it doesn't find a match in the 4.0 list, but it *does* find a match in the 9.0 list. So it adds this value to the second result set.
Here is a generator / set based method. ``` def set_gen(nums): for seed in sorted(nums): yield tuple([n for n in nums if seed <= n and n/seed <= 1.1]) def remove_subsets(sets): for s in sets.copy(): [sets.remove(s2) for s2 in sets.difference([s]) if set(s2).issubset(s)] >>> nums = [4.0, 4.1, 4.2, 4.0, 9.0, 9.4, 8.9, 4.3] >>> x = set(num for num in set_gen(nums)) >>> remove_subsets(x) >>> list(x) [(9.0, 9.4, 8.9), (4.0, 4.1, 4.2, 4.0, 4.3)] >>> nums = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0] >>> x = set(num for num in set_gen(nums)) >>> remove_subsets(x) >>> list(x) [(1.9, 1.8), (1.5, 1.4), (1.4, 1.3), (1.2, 1.1), (1.7, 1.6), (1.5, 1.6), (1.3, 1.2), (1.9, 2.0), (1.0, 1.1), (1.8, 1.7)] ```
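A third way to look at it (a sketch, and note it bends the definition slightly): sort first, then let a number join the current group only while it stays within 10% of that group's *smallest* member. That keeps every group's spread bounded and avoids the mutual-range check entirely; it assumes non-negative inputs:

```python
def group_close(nums, tol=1.1):
    """Group sorted numbers; each member stays within tol of its group's minimum."""
    groups = []
    for n in sorted(nums):
        # groups[-1][0] is the smallest member of the current group,
        # since we process values in ascending order
        if groups and n <= groups[-1][0] * tol:
            groups[-1].append(n)
        else:
            groups.append([n])
    return groups

print(group_close([4.0, 4.1, 4.2, 4.0, 9.0, 9.4, 8.9, 4.3]))
# [[4.0, 4.0, 4.1, 4.2, 4.3], [8.9, 9.0, 9.4]]
```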
In Python, how can I check if 2 numbers in a list are within a certain percentage of each other?
[ "python", "list", "range", "percentage", "approximate" ]
I have a list of tuples that looks like this: ``` lst = [(0, 0), (2, 3), (4, 3), (5, 1)] ``` What is the best way to accumulate the sum of the first and second tuple elements? Using the example above, I'm looking for the best way to produce this list: ``` new_lst = [(0, 0), (2, 3), (6, 6), (11, 7)] ``` I am looking for a solution in Python 2.6
I would argue the best solution is [`itertools.accumulate()`](http://docs.python.org/dev/library/itertools.html#itertools.accumulate) to accumulate the values, and using [`zip()`](http://docs.python.org/3.3/library/functions.html#zip) to split up your columns and merge them back. This means the generator just handles a single column, and makes the method entirely scalable. ``` >>> from itertools import accumulate >>> lst = [(0, 0), (2, 3), (4, 3), (5, 1)] >>> list(zip(*map(accumulate, zip(*lst)))) [(0, 0), (2, 3), (6, 6), (11, 7)] ``` We use `zip()` to take the columns, then apply `itertools.accumulate()` to each column, then use `zip()` to merge them back into the original format. This method will work for any iterable, not just sequences, and should be relatively efficient. Prior to 3.2, accumulate can be defined as: ``` def accumulate(iterator): total = 0 for item in iterator: total += item yield total ``` (The docs page gives a more generic implementation, but for this use case, we can use this simple implementation).
How about this generator: ``` def accumulate_tuples(iterable): accum_a = accum_b = 0 for a, b in iterable: accum_a += a accum_b += b yield accum_a, accum_b ``` If you need a list, just call `list(accumulate_tuples(your_list))`. Here's a version that works for arbitrary length tuples: ``` def accumulate_tuples(iterable): it = iter(iterable) accum = next(it) # initialize with the first value yield accum for val in it: # iterate over the rest of the values accum = tuple(a+b for a, b in zip(accum, val)) yield accum ```
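Since the question specifically targets Python 2.6 (no `itertools.accumulate` until 3.2), the whole thing can also be done with one plain loop and no generators at all — a minimal sketch that builds the list directly and runs unchanged on 2.6 and 3.x:

```python
def accumulate_pairs(pairs):
    # Keep a running total for each tuple position and record it at every step
    total_a = total_b = 0
    result = []
    for a, b in pairs:
        total_a += a
        total_b += b
        result.append((total_a, total_b))
    return result

print(accumulate_pairs([(0, 0), (2, 3), (4, 3), (5, 1)]))
# [(0, 0), (2, 3), (6, 6), (11, 7)]
```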
Accumulate items in a list of tuples
[ "python", "python-2.6" ]
Is there a way to memoize the output of a function to disk? I have a function ``` def getHtmlOfUrl(url): ... # expensive computation ``` and would like to do something like: ``` def getHtmlMemoized(url) = memoizeToFile(getHtmlOfUrl, "file.dat") ``` and then call getHtmlMemoized(url), to do the expensive computation only once for each url.
Python offers a very elegant way to do this - decorators. Basically, a decorator is a function that wraps another function to provide additional functionality without changing the function source code. Your decorator can be written like this: ``` import json def persist_to_file(file_name): def decorator(original_func): try: cache = json.load(open(file_name, 'r')) except (IOError, ValueError): cache = {} def new_func(param): if param not in cache: cache[param] = original_func(param) json.dump(cache, open(file_name, 'w')) return cache[param] return new_func return decorator ``` Once you've got that, 'decorate' the function using @-syntax and you're ready. ``` @persist_to_file('cache.dat') def html_of_url(url): your function code... ``` Note that this decorator is intentionally simplified and may not work for every situation, for example, when the source function accepts or returns data that cannot be json-serialized. More on decorators: [How to make a chain of function decorators?](https://stackoverflow.com/questions/739654/how-can-i-make-a-chain-of-function-decorators-in-python/1594484#1594484) And here's how to make the decorator save the cache just once, at exit time: ``` import json, atexit def persist_to_file(file_name): try: cache = json.load(open(file_name, 'r')) except (IOError, ValueError): cache = {} atexit.register(lambda: json.dump(cache, open(file_name, 'w'))) def decorator(func): def new_func(param): if param not in cache: cache[param] = func(param) return cache[param] return new_func return decorator ```
Check out [`joblib.Memory`](https://joblib.readthedocs.io/en/latest/generated/joblib.Memory.html). It's a library for doing exactly that. ``` from joblib import Memory memory = Memory("cachedir") @memory.cache def f(x): print('Running f(%s)' % x) return x ```
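If joblib isn't an option, the standard library's `shelve` module sits between the hand-rolled JSON decorator and a full dependency: it pickles values, so (unlike JSON) most Python objects work as return values. A sketch — the `disk_memoize` name and single-argument restriction are mine, and reopening the shelf on every call trades speed for crash safety:

```python
import functools
import shelve

def disk_memoize(file_name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(arg):
            # Open the shelf per call so results survive a crash;
            # keys must be strings, hence repr(arg)
            with shelve.open(file_name) as db:
                key = repr(arg)
                if key not in db:
                    db[key] = func(arg)
                return db[key]
        return wrapper
    return decorator
```

Usage would be `@disk_memoize('cache')` above `getHtmlOfUrl`; each distinct argument is then computed only once per cache file. (The `with shelve.open(...)` form needs Python 3.4+.)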
memoize to disk - Python - persistent memoization
[ "python", "memoization" ]
I'm having trouble running this SQL: ![enter image description here](https://i.stack.imgur.com/SPfLc.jpg) I think it's an `index problem`, but I don't know, because I didn't make this database and I'm just a simple programmer. The problem is that the table has 64260 records, so that query goes crazy when executing; I have to stop MySQL and start it again because the computer freezes. Thanks. **EDIT:** table schema ``` CREATE TABLE IF NOT EXISTS `value_magnitudes` ( `id` int(11) NOT NULL AUTO_INCREMENT, `value` float DEFAULT NULL, `magnitude_id` int(11) DEFAULT NULL, `sdi_belongs_id` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL, `reading_date` datetime DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1118402 ; ``` ### Query ``` select * from value_magnitudes where id in ( SELECT min(id) FROM value_magnitudes WHERE magnitude_id = 234 and date(reading_date) >= '2013-04-01' group by date(reading_date) ) ``` **EDIT2** ![enter image description here](https://i.stack.imgur.com/oBnd0.jpg)
First, add an index on `(magnitude_id, reading_date)`: ``` ALTER TABLE value_magnitudes ADD INDEX magnitude_id__reading_date__IX -- just a name for the index (magnitude_id, reading_date) ; ``` Then try this variation: ``` SELECT vm.* FROM value_magnitudes AS vm JOIN ( SELECT MIN(id) AS id FROM value_magnitudes WHERE magnitude_id = 234 AND reading_date >= '2013-04-01' -- changed so index is used GROUP BY DATE(reading_date) ) AS vi ON vi.id = vm.id ; ``` The `GROUP BY DATE(reading_date)` will still need to apply the function to all the selected (through the index) rows and that cannot be improved, unless you follow @jurgen's advice and split the column into `date` and `time` columns.
Since you want to get results for every day you need to extract the date from a datetime column with the function `date()`. That makes indexes useless. You can split up the `reading_date` column into `reading_date` and `reading_time`. Then you can run the query without a function and indexes will work. Additionally you can change the query into a `join` ``` select * from value_magnitudes v inner join ( SELECT min(id) as id FROM value_magnitudes WHERE magnitude_id = 234 and reading_date >= '2013-04-01' group by reading_date ) x on x.id = v.id ```
MySQL innoDB: Long time of query execution
[ "mysql", "sql", "query-optimization", "innodb" ]
I'm reading 'Gray Hat Python.' There's an example where we get the thread of the process and dump all the register values. I copied down the source from the book, and it won't work. Here's the part of the source that I think is the trouble.

```
def run(self):
    # Now we have to poll the debuggee for debugging events
    while self.debugger_active == True:
        self.get_debug_event()

def get_debug_event(self):
    debug_event = DEBUG_EVENT()
    continue_status = DBG_CONTINUE

    if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
        # We aren't going to build any event handlers
        # just yet. Let's just resume the process for now.
        # raw_input("Press a key to continue...")
        # self.debugger_active = False
        kernel32.ContinueDebugEvent(debug_event.dwProcessId,
                                    debug_event.dwThreadId,
                                    continue_status)
```

These two lines were used for previous examples and were commented out in this one.

```
# raw_input("Press a key to continue...")
# self.debugger_active = False
```

The problem is that when `self.debugger_active` is True, it runs through `WaitForDebugEvent` and `ContinueDebugEvent`, but it never opens a thread or anything. It just loops 39 times, and I have no idea why. Here is the full source.

```
from ctypes import *
from my_debugger_defines import *

kernel32 = windll.kernel32

class debugger():
    def __init__(self):
        self.h_process = None
        self.pid = None
        self.debugger_active = False

    def load(self, path_to_exe):
        # dwCreation flag determines how to create the process
        # set creation_flags = CREATE_NEW_CONSOLE if you want
        # to see the calculator GUI
        creation_flags = DEBUG_PROCESS

        # instantiate the structs
        startupinfo = STARTUPINFO()
        process_information = PROCESS_INFORMATION()

        # The following two options allow the started process
        # to be shown as a separate window. This also illustrates
        # how different settings in the STARTUPINFO struct can affect the debuggee
        startupinfo.dwFlags = 0x1
        startupinfo.wShowWindow = 0x0

        # We then initialize the cb variable in the STARTUPINFO struct
        # which is just the size of the struct itself
        startupinfo.cb = sizeof(startupinfo)

        if kernel32.CreateProcessA(path_to_exe,
                                   None,
                                   None,
                                   None,
                                   None,
                                   creation_flags,
                                   None,
                                   None,
                                   byref(startupinfo),
                                   byref(process_information)):
            print "[*] We have successfully launched the process!"
            print "[*] PID: %d" % process_information.dwProcessId

            # Obtain a valid handle to the newly created process
            # and store it for future access
            self.h_process = self.open_process(process_information.dwProcessId)
        else:
            print "[*] Error: 0x%08x." % kernel32.GetLastError()

    def open_process(self, pid):
        h_process = kernel32.OpenProcess(PROCESS_ALL_ACCESS, pid, False)
        return h_process

    def attach(self, pid):
        self.h_process = self.open_process(pid)

        # We attempt to attach to the process
        # if this fails we exit the call
        if kernel32.DebugActiveProcess(pid):
            self.debugger_active = True
            self.pid = int(pid)
            self.run()
        else:
            print "[*] Unable to attach to the process. Error: 0x%08x." % kernel32.GetLastError()

    def run(self):
        # Now we have to poll the debuggee for debugging events
        self.count = 1
        while self.debugger_active == True:
            self.get_debug_event()

    def get_debug_event(self):
        debug_event = DEBUG_EVENT()
        continue_status = DBG_CONTINUE

        if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
            # We aren't going to build any event handlers
            # just yet. Let's just resume the process for now.
            # raw_input("Press a key to continue...")
            # self.debugger_active = False
            kernel32.ContinueDebugEvent(debug_event.dwProcessId,
                                        debug_event.dwThreadId,
                                        continue_status)
            print "Just finished ContinueDebugEvent %d" % self.count
            self.count += 1

    def detach(self):
        if kernel32.DebugActiveProcessStop(self.pid):
            print "[*] Finished debugging. Exiting..."
            return True
        else:
            print "There was an error finishing debugging"
            return False

    def open_thread(self, thread_id):
        print "open_thread"
        h_thread = kernel32.OpenThread(THREAD_ALL_ACCESS, None, thread_id)

        if h_thread is not None:
            return h_thread
        else:
            print "[*] Could not obtain a valid thread handle."
            return False

    def enumerate_threads(self):
        print "enumerate_threads"
        thread_entry = THREADENTRY32()
        thread_list = []
        snapshot = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, self.pid)

        if snapshot is not None:
            # You have to set the size of the struct
            # or the call will fail
            thread_entry.dwSize = sizeof(thread_entry)
            success = kernel32.Thread32First(snapshot, byref(thread_entry))

            while success:
                if thread_entry.th32OwnerProcessID == self.pid:
                    thread_list.append(thread_entry.th32ThreadID)
                success = kernel32.Thread32Next(snapshot, byref(thread_entry))

            kernel32.CloseHandle(snapshot)
            return thread_list
        else:
            return False

    def get_thread_context(self, thread_id):
        print "get_thread_context"
        context = CONTEXT()
        context.ContextFlags = CONTEXT_FULL | CONTEXT_DEBUG_REGISTERS

        # Obtain a handle to the thread
        h_thread = self.open_thread(thread_id)
        if kernel32.GetThreadContext(h_thread, byref(context)):
            kernel32.CloseHandle(h_thread)
            return context
        else:
            return False
```

**ADDED** I debugged this a little bit and found out that when `get_thread_context` is called, it always returns False. Also, the debug loop never receives an `EXIT_THREAD_DEBUG_EVENT`; the program just terminates right after the `EXCEPTION_DEBUG_EVENT`. I'm not sure if these two are related, but just as an update. Thank you very much.

**PART SOLUTION** I found one huge error in the code. I don't know if the book has some kind of edited version or not. Anyway, one of my problems was that `get_thread_context` didn't work. The source should change to

```
def get_thread_context(self, h_thread):
    context = CONTEXT()
    context.ContextFlags = CONTEXT_FULL | CONTEXT_DEBUG_REGISTERS

    if kernel32.GetThreadContext(h_thread, byref(context)):
        kernel32.CloseHandle(h_thread)
        return context
    else:
        return False
```

For some reason, the source in the book passed the thread id and called `open_thread` again inside `get_thread_context`. You had already obtained the thread handle before, so you can pass that handle as the parameter of `get_thread_context`; there is no need to open it again.

===============

Still haven't found any solution for the other error, where the debug loop never finishes with an `EXIT_THREAD_DEBUG_EVENT`.
It's confirmed that the code for this book only works on a 32-bit platform. Also, there are a few bugs in the source, noted on the book's website, which will stop the programs from working. If you download the source from the site, these bugs have been removed. If you want to get the code to run on your machine and you run x64, you can download "Windows XP Mode", a virtual 32-bit Windows XP environment made available for free by Microsoft: <http://www.microsoft.com/en-us/download/details.aspx?id=3702>. Install your Python IDE there and the code should run.
There is a solution for running the debugger from a 64-bit Python instance on 64-bit Windows. But you should stick to debugging 32-bit applications or implement a 64-bit debugger; there is a difference between 64-bit and 32-bit registers, of course. I added some code to run it under a 64-bit system.

1. When you want to debug / run a 32-bit application on 64-bit Windows: Windows uses WOW64 for it, so you have to use some other functions, which are explained on MSDN. To test if the process is run as 32-bit in WOW64:

```
i = c_int()
kernel32.IsWow64Process(self.h_process, byref(i))
if i:
    print('[*] 32 bit process')
```

Example:

```
def wow64_get_thread_context(self, thread_id=None, h_thread=None):
    context = CONTEXT()
    context.ContextFlags = CONTEXT_FULL | CONTEXT_DEBUG_REGISTERS

    if h_thread is None:
        h_thread = self.open_thread(thread_id)

    if kernel32.Wow64SuspendThread(h_thread) != -1:
        if kernel32.Wow64GetThreadContext(h_thread, byref(context)) != 0:
            kernel32.ResumeThread(h_thread)
            kernel32.CloseHandle(h_thread)
            return context
        else:
            testWinError()
            return False
    else:
        testWinError()
        return False
```

For testing win errors use:

```
def testWinError():
    if kernel32.GetLastError() != 0:
        raise WinError()
```
Python WaitForDebugEvent & ContinueDebugEvent (Gray Hat Python)
[ "", "python", "debugging", "python-2.7", "kernel32", "" ]
I am trying to clean up the formatting of a CSV file in order to import it into a database, and I'm using the following to edit it:

```
f1 = open('visit_summary.csv', 'r')
f2 = open('clinics.csv', 'w')
for line in f1:
    f2.write(line.replace('Calendar: ', ''))
f1.close()
f2.close()
```

This works fine if there is only 1 edit to make; however, I have to repeat this code 19 times in order to make all the changes required, opening and closing each file several times and keeping multiple placeholder files for the intermediate steps between the first and last edit. Is there a simpler way to do this? I tried adding more `f2.write(line.replace`... lines; however, this creates a final file with duplicated lines, each of which has only 1 edit. I think I see my problem (I am writing each line multiple times, once per edit); however, I cannot seem to find a solution. I am very new to Python and am teaching myself, so any help, or direction to better resources, would be appreciated.
There's no reason you can't do lots of things to the line before you write it:

```
with open('visit_summary.csv', 'r') as f1, open('clinics.csv', 'w') as f2:
    for line in f1:
        line = line.replace('Calendar: ', '')
        line = line.replace('Something else', '')
        f2.write(line)
```

(I also replaced `open`, `close` with the `with` statement)
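With 19 edits, the pairs can also be kept in one list and applied in a single pass, so each line is still written exactly once. A sketch with made-up replacement strings, using `StringIO` stand-ins so the snippet is self-contained (the real code would use `open()` as above):

```python
import io

# Hypothetical (old, new) pairs -- the real script would list all 19 here.
replacements = [('Calendar: ', ''), ('Clinic - ', '')]

src = io.StringIO('Calendar: Clinic - Smith\nCalendar: Jones\n')
dst = io.StringIO()

for line in src:
    for old, new in replacements:   # apply every edit to this one line
        line = line.replace(old, new)
    dst.write(line)                 # write the fully edited line once

print(dst.getvalue())               # 'Smith\nJones\n'
```

Order can matter if one replacement's output overlaps another's input, so list the pairs in the order they should apply.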
```
f1 = open('visit_summary.csv', 'r')
f2 = open('clinics.csv', 'w')
for line in f1:
    f2.write(line.replace('Calendar: ', '').replace('String2', '').replace('String3', ''))
f1.close()
f2.close()
```

Will this work? Although I don't think it's very "pythonic". In this case, you have to be careful about the ordering!
Fixing a CSV using Python
[ "", "python", "csv", "replace", "" ]
I have created a list of names numbered 1-10. I want the user to be able to enter a number (1-10) to select a name. I have the following code but as yet can't get it to work. I'm new to Python. Thanks for the help.

```
def taskFour():
    1 == Karratha_Aero
    2 == Dampier_Salt
    3 == Karratha_Station
    4 == Roebourne_Aero
    5 == Roebourne
    6 == Cossack
    7 == Warambie
    8 == Pyramid_Station
    9 == Eramurra_Pool
    10 == Sherlock

    print''
    print 'Choose a Base Weather Station'
    print 'Enter the corresponding station number'

    selection = int(raw_input('Enter a number from: 1 to 10'))

    if selection == 1:
        selectionOne()
    elif selection == 2:
        selectionTwo()
    elif selection == 3:
        selectionThree()
```
You are following an anti-pattern. What are you going to do when there are one million different stations, or multiple data per station? You can't have `selectionOne()` all the way through to `selectionOneMillion()` done manually.

How about something like this:

```
stations = {'1': "Karratha_Aero",
            '2': "Karratha_Station",
            '10': "Sherlock"}

user_selection = raw_input("Choose number: ")
print stations.get(user_selection) or "No such station"
```

Input/Output:

```
1  => Karratha_Aero
10 => Sherlock
5  => No such station
```
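If each station needs its own handler (the `selectionOne()`-style functions in the question), the same dict lookup extends naturally to picking a handler; the station names and the `show` helper below are illustrative stand-ins:

```python
# A single handler plus a dict lookup replaces the whole if/elif chain.
def show(name):
    return 'Base station: ' + name

stations = {'1': 'Karratha_Aero', '2': 'Dampier_Salt', '10': 'Sherlock'}

choice = '10'                      # would come from raw_input() in the real script
name = stations.get(choice)
result = show(name) if name else 'No such station'
print(result)                      # Base station: Sherlock
```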
First, you need a real list. What you currently have (`1 == Name`) is neither a list nor valid syntax (unless you have variables named after each of the names). Change your list to this:

```
names = ['Karratha_Aero', 'Dampier_Salt', 'Karratha_Station', 'Roebourne_Aero',
         'Roebourne', 'Cossack', 'Warambie', 'Pyramid_Station', 'Eramurra_Pool',
         'Sherlock']
```

Then, change your bottom code to this:

```
try:
    selection = int(raw_input('Enter a number from: 1 to 10'))
except ValueError:
    print "Please enter a valid number. Abort."
    exit

selection = names[selection - 1]
```

`selection` will then be the name of the selection of the user.
Selecting an item from a Python list
[ "", "python", "list", "menu", "" ]
I am working with a PostgreSQL 8.3.3 database that stores military time as characters (don't ask ...) and I need to update a column which adds/subtracts time to a column that is a time itself. Some of the columns store only hours and minutes (HHMM or 1005) and some store hours, minutes, and seconds. So I'm left with parsing and extracting data. I've been able to drum up this query:

```
UPDATE my_schema.tgtplsel ts
SET etd = subquery.new_etd
FROM ((SELECT replace(((((interval '1 hours' * substring(starttime, 1, 2)::integer)
                       + (interval '1 minutes' * substring(starttime, 3, 2)::integer))
                      + ((interval '1 hours' * ((SELECT zuluoffset
                                                 FROM my_schema.airports
                                                 WHERE name = 'KABC')))
                       + (interval '1 minutes' * 0)
                       + (interval '1 seconds' * 0)))
                     + ((interval '1 hours' * substring(ts.taxidelay, 1, 2)::integer)
                      + (interval '1 minutes' * substring(ts.taxidelay, 3, 2)::integer))
                     + ((interval '1 hours' * substring(ts.etd, 1, 2)::integer)
                      + (interval '1 minutes' * substring(ts.etd, 3, 2)::integer))::time)::char(5),
                     ':', '') as new_etd,
              ts.exerciseid
       FROM my_schema.tgtplsel as ts
       INNER JOIN my_schema.exerparm as ep ON ts.exerciseid = ep.exerciseid
       WHERE ts.exerciseid = 11
         AND ts.taxidelay is not null
         AND length(ts.taxidelay) > 0)) as subquery
WHERE ts.exerciseid = subquery.exerciseid
  AND ts.exerciseid = 11
  AND ts.taxidelay is not null
  AND length(ts.taxidelay) > 0
```

The subquery itself returns 16 rows, calculated correctly, ex:

```
0245
1050
0920
0345
1210
etc.
```

But in the `UPDATE` statement, all rows that meet the criteria in the `WHERE` clause (16 in the example) are updated with the FIRST value of the subquery (`0245`). Why? And how can I get it to update each row with the correct value?

**To clarify:** I'm trying to update the `ETD` column of the `tgtplsel` table with a value that is calculated from a couple of other fields. The formula we are using to generate this new value is: `ETD = ((starttime + zuluoffset) + taxidelay + ETD)`. So if my ETD is 00:30, starttime is 23:00, zuluoffset is -5, and taxidelay is 0005, then the new ETD should be ((23:00 + (-5:00)) + 00:05 + 00:30) or 18:35. The reason for converting these to a `time` or `interval` type is because a starttime could be 02:00 or 2am, and if the zuluoffset is -5, then I need that to calculate properly to 21:00, not -03:00. Some of these columns are of type `numeric` while others are `character` - totally out of my hands, so I'm trying my best to work my way around it.

I was able to create an [SQL Fiddle](http://sqlfiddle.com/#!10/abe8c/2) with some test data. Although some of my ETD values were supposed to be `0000` and if I do a `SELECT` in the fiddle, they come back as `0`.
### Basic query

Complex calculations aside, you can largely simplify the `UPDATE`, no subquery needed:

```
UPDATE my_schema.tgtplsel ts
SET    etd = <complex calculation>
FROM   my_schema.exerparm ep
WHERE  ep.exerciseid = ts.exerciseid
AND    ts.exerciseid = 11
AND    ts.taxidelay is not null
AND    length(ts.taxidelay) > 0;
```

### Calculations

The `<complex calculation>` can be rewritten as:

```
to_char(to_timestamp(ep.starttime, 'HH24mi')::time
      + (SELECT interval '1h' * zuluoffset
         FROM my_schema.airports
         WHERE name = 'KABC')
      + to_timestamp(ts.taxidelay, 'HH24mi')::time::interval
      + to_timestamp(ts.etd, 'HH24mi')::time::interval
      , 'HH24mi')
```

Operating with `timestamp`, `time` and `interval` would be the proper way. You seem to know that. While stuck with the unfortunate setup I'll assist with the **dark side**.

[`to_timestamp()`](https://www.postgresql.org/docs/current/functions-formatting.html) can take `time` as input. It prepends the *year 1*, which is irrelevant here, since we cast to `time` right away (which in turn can be cast to `interval`). Use the function [`to_char()`](https://www.postgresql.org/docs/current/functions-formatting.html) to transform an `interval` back to `text`.
### Functions

To make your life on the dark side a little brighter, create some of these conversion functions, which work with or without seconds:

Convert "military time" to time:

```
CREATE FUNCTION f_mt2time(text)
  RETURNS time
  LANGUAGE sql IMMUTABLE AS
$func$
SELECT CASE WHEN length($1) = 4 THEN to_timestamp($1, 'HH24mi')::time
            ELSE to_timestamp($1, 'HH24miss')::time
       END
$func$;
```

Convert time to "military time":

```
CREATE FUNCTION f_time2mt(time)
  RETURNS text
  LANGUAGE sql IMMUTABLE AS
$func$
SELECT CASE WHEN extract(sec FROM $1) = 0 THEN to_char($1, 'HH24MI')
            ELSE to_char($1, 'HH24MISS')
       END
$func$;
```

*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=1d7b5da30ed44593cb2d1f467e7e8fc5)*
Old [sqlfiddle](http://sqlfiddle.com/#!12/eb950/1)

### Put it all together

Apply the aux. functions and extract the constant subquery for `zuluoffset` from the expression into the `FROM` list. Clearer and faster:

```
UPDATE my_schema.tgtplsel ts
SET    etd = f_time2mt( f_mt2time(ep.starttime)
                      + z.zulu
                      + f_mt2time(ts.taxidelay)::interval
                      + f_mt2time(ts.etd)::interval)
FROM   my_schema.exerparm ep
CROSS  JOIN (
   SELECT interval '1h' * zuluoffset AS zulu
   FROM   my_schema.airports
   WHERE  name = 'KABC'
   ) z
WHERE  ep.exerciseid = ts.exerciseid
AND    ts.exerciseid = 11
AND    ts.taxidelay is not null
AND    length(ts.taxidelay) > 0;
```

Much better now.
None of that string mangling should be necessary. Use the `extract` or `date_part` and `date_trunc` functions to get parts of timestamps. Use the `time` or `interval` data types as appropriate when you want to store partial timestamps like hour+day+second.

I haven't tried to write a replacement since you haven't posted the data/tables you're operating on or a clear explanation of what the whole query is supposed to do. I strongly suspect you'll have better results when you get rid of the string manipulation in favour of proper date/time processing functions, though.

* [Date/time data types](http://www.postgresql.org/docs/current/static/datatype-datetime.html)
* [Date/time functions and operators](http://www.postgresql.org/docs/current/static/functions-datetime.html)
Create timestamp dynamically
[ "", "sql", "postgresql", "sql-update", "" ]
I am trying to find a simple and fast way of counting the number of objects in a list that match a criteria. e.g.

```
class Person:
    def __init__(self, Name, Age, Gender):
        self.Name = Name
        self.Age = Age
        self.Gender = Gender

# List of People
PeopleList = [Person("Joan", 15, "F"),
              Person("Henry", 18, "M"),
              Person("Marg", 21, "F")]
```

Now what's the simplest function for counting the number of objects in this list that match an argument based on their attributes? E.g., returning 2 for Person.Gender == "F" or Person.Age < 20.
```
class Person:
    def __init__(self, Name, Age, Gender):
        self.Name = Name
        self.Age = Age
        self.Gender = Gender

>>> PeopleList = [Person("Joan", 15, "F"), Person("Henry", 18, "M"), Person("Marg", 21, "F")]
>>> sum(p.Gender == "F" for p in PeopleList)
2
>>> sum(p.Age < 20 for p in PeopleList)
2
```
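This counts correctly because `bool` is a subclass of `int` (`True == 1`), so summing the comparison results sums ones and zeros. An equivalent, slightly more explicit spelling uses a conditional generator expression:

```python
# Both forms count how many ages are under 20.
ages = [15, 18, 21]

implicit = sum(a < 20 for a in ages)        # True/False summed as 1/0
explicit = sum(1 for a in ages if a < 20)   # add 1 per matching item

print(implicit, explicit)                   # 2 2
```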
I know this is an old question but these days one stdlib way to do this would be

```
from collections import Counter

c = Counter(getattr(person, 'gender') for person in PeopleList)
# c now is a map of attribute values to counts -- eg: c['F']
```
Python Count Elements in a List of Objects with Matching Attributes
[ "", "python", "list", "object", "attributes", "count", "" ]
I have a question about splitting the date, month, year, and day name out of a datetime by using a trigger: when I insert a datetime, the next columns should be filled with the split-out date, month, year, and day name, plus a computed age in years (to know how old the person in my data is). Is that possible?

For example, if I insert

```
INSERT INTO MAN VALUES ('04/06/1982')
```

then it will be like this

```
DATETIME               DATE   MONTH   YEAR   DAYNAME   AGE
04/06/1982 00:00:00    04     06      1982   friday    27
```
Try this :-

```
Declare @myDate datetime
set @myDate='19820604'  --YYYYMMDD

Select @myDate as DateTime,
       datename(day,@myDate) as Date,
       month(@myDate) as Month,
       datename(year,@myDate) as Year,
       Datename(weekday,@myDate) as DayName,
       DATEDIFF ( year , @myDate , getdate() ) as Age
```

Result

```
╔══════════════════════════════╦══════╦═══════╦══════╦═════════╦═════╗
║ DateTime                     ║ DATE ║ MONTH ║ YEAR ║ DAYNAME ║ Age ║
╠══════════════════════════════╬══════╬═══════╬══════╬═════════╬═════╣
║ April, 06 1982 00:00:00+0000 ║ 4    ║ 6     ║ 1982 ║ Friday  ║ 31  ║
╚══════════════════════════════╩══════╩═══════╩══════╩═════════╩═════╝
```

SQL Fiddle [Demo](http://sqlfiddle.com/#!3/d41d8/13414)
The code has been slightly altered to give the age correctly.

```
Select myDate, myDateDate, myDateMonth, myDateYear, myDateDayName,
       Convert(varchar(50),Age) + ' Years and ' + Convert(varchar(50),nodays) + ' days' [Age]
from
(
  Select @myDate as myDate,
         datename(day,@myDate) as myDateDate,
         month(@myDate) as myDateMonth,
         datename(year,@myDate) as myDateYear,
         Datename(weekday,@myDate) as myDateDayName,
         DATEDIFF ( year , @myDate , getdate() ) Age ,
         DATEDIFF ( dd , @myDate , getdate() ) - 365 * DATEDIFF ( year , @myDate , getdate() ) as nodays
) As a
```

Thanks Arun
Splitting a datetime into date/year/month/dayname and computing age
[ "", "sql", "sql-server", "sql-server-2008", "triggers", "" ]
I am trying to design a database to be as scalable as possible, so I am testing the concept with a simple scenario. Let's say I have a `contact` table. A contact typically has an `address`, `phone number`, `mobile number`, and `email address`. A single table can hold this information. However, what if a contact has more than one of any of these details? My solution would be to use junction tables like so:

```
contact -* contact_address      *- address
contact -* contact_phone        *- phone
contact -* contact_mobile       *- phone
contact -* contact_emailaddress *- emailaddress
```

This would allow me to fetch all information about a contact. In addition, the `address`, `phone` and `emailaddress` tables become data banks serving purposes from analysis to simple reuse of data. However, I don't yet know what the performance implications of this are, and whether or not this is good practice, mainly because the `contact` table will only hold an `id` and nothing else. One note is that this uses Doctrine 2 with Symfony 2, so writing long queries won't be an issue; my main concerns are those above. This has made a jump from one table to eight, and contacts are only one area where I need to use a pattern like this.
This is perfectly fine to use, you should not worry at all. The only problem is if you allow Doctrine to lazy-load relations. Example: if you

```
$contact = $repository->find(1) ;
```

and later in templates you do something like

```
{% for email in contact.emails %}
    {{ email.address }}
{% endfor %}
```

this code will trigger another SELECT statement. Now if you display let's say 10 contacts per page, it means additional 10 queries will be executed. Do the math if you display other things like address or phone numbers.

To avoid this lazy-load, you need to JOIN them in your query. Like

```
// ContactRepository

public function findForSomePage()
{
    return $this->createQueryBuilder("o")
        ->leftJoin("o.addresses", "a")->addSelect("a")
        .... other joins ...
        ->getQuery()
        ->getResult()
}
```

Now everything you need for display is fetched within 1 query.
If a contact has more than one Address, you would not use a junction table, you would simply use a foreign key from address to contact. I expect this design would most likely be more performant, although there are many factors to consider. If an Address could be attached to more than one contact, and a contact could have more than one address, then you would use a junction table.
Table with only ID and many junction tables
[ "", "sql", "database", "symfony", "doctrine", "normalization", "" ]
For example I have a string like this: ``` '(1) item 1. \n(2) item 2' ``` I should end up with this: ``` '(x) item 1. \n(x) item 2' ``` how can I only match the text inside the parentheses, and replace them? Thanks!
```
In [3]: import re

In [4]: re.sub("\([^)]*","(x",'(1) item 1. \n(2) item 2')
Out[4]: '(x) item 1. \n(x) item 2'
```
Just escape the brackets:

```
In [1]: import re

In [2]: s = '(1) item 1. \n(2) item 2'

In [3]: re.sub(r'\(\d+\)', '(x)', s)
Out[3]: '(x) item 1. \n(x) item 2'
```

You need to escape them because they have special meaning in the regex context (create a numbered group).
How can I replace text inside the parentheses using re.sub()
[ "", "python", "regex", "" ]
I'm learning python + Django by reading 《beginning django e-commerce》, after I have installed django-db-log, when running $python manage.py runserver, there is a problem. ``` Unhandled exception in thread started by <function wrapper at 0x02C28DB0> Traceback (most recent call last): File "D:\Python27\lib\site-packages\django\utils\autoreload.py", line 93, in wrapper fn(*args, **kwargs) File "D:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 92, in inner_run self.validate(display_num_errors=True) File "D:\Python27\lib\site-packages\django\core\management\base.py", line 308, in validate num_errors = get_validation_errors(s, app) File "D:\Python27\lib\site-packages\django\core\management\validation.py", line 34, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "D:\Python27\lib\site-packages\django\db\models\loading.py", line 166, in get_app_errors self._populate() File "D:\Python27\lib\site-packages\django\db\models\loading.py", line 75, in _populate self.load_app(app_name) File "D:\Python27\lib\site-packages\django\db\models\loading.py", line 96, in load_app models = import_module('.models', app_name) File "D:\Python27\lib\site-packages\django\utils\importlib.py", line 35, in import_module __import__(name) File "build\bdist.win32\egg\djangodblog\models.py", line 9, in <module> File "build\bdist.win32\egg\djangodblog\manager.py", line 23, in <module> File "build\bdist.win32\egg\djangodblog\helpers.py", line 5, in <module> ImportError: No module named hashcompat ```
You are referring to an outdated manual. Module `hashcompat` was removed in Django 1.6, as you can read [here](https://docs.djangoproject.com/en/1.5/internals/deprecation/#id3). Either install an older Django version (1.5 or previous) or get a more recent textbook. You can also manually fix your code by using Python builtins as advised in the removal notice, but for a beginner can be sort of a hassle :)
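If you'd rather patch your copy of django-db-log than downgrade, the stdlib replacement for `hashcompat`'s constructors is `hashlib`; the old `sha_constructor` maps to `hashlib.sha1`, as the removal notice advises (shown here on a throwaway byte string):

```python
import hashlib

# hashcompat's sha_constructor was a thin alias for this stdlib call,
# so old call sites can usually be ported one-for-one.
digest = hashlib.sha1(b'some-session-key').hexdigest()
print(digest[:8])   # first few hex chars of the 40-char digest
```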
I solved it using `hashlib`. I just added:

```
try:
    from hashlib import sha1 as sha_constructor
except ImportError:
    from django.utils.hashcompat import sha_constructor
```
Python Django "ImportError: No module named hashcompat"
[ "", "python", "django", "importerror", "" ]
I am trying to assign a function to another function; the left-hand side of the assignment is available to me as a string. For example, the body of the method I am looking for is

```
def change_function_definition(function_object, name_of_function='module1.function1'):
    # Can I do eval(name_of_function) = function_object ? will that work?
    pass
```

Questions:

1. How do I achieve this? Obviously if I call the above method and then call `module1.function1`, I expect the new function to be picked up.
2. I am doing this in the context of unit testing, i.e., mock several functions, run the test and then basically "unmock" them. Are there any problems with the said approach?
I think it would be better to use a mocking library like [Mock](http://www.voidspace.org.uk/python/mock/). Using `patch` you can change the function's behaviour within the scope of a context manager or a function and have it change back to normal afterwards. For example:

```
from mock import patch

with patch('module1.function1') as function1:
    function1.side_effect = function_object
    # Do stuff
```

if `function1` is called inside the `with` block it will be replaced with `function_object`. Similarly, patching within a function:

```
@patch('module1.function1')
def my_test(function1):
    function1.side_effect = function_object
    # Do stuff
```
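The same pattern works with the standard library's `unittest.mock` (the descendant of the `mock` package, bundled since Python 3.3); `os.getcwd` is used below purely as a convenient stand-in for `module1.function1`:

```python
import os
from unittest import mock

# Inside the with-block the attribute is replaced; on exit it is restored.
with mock.patch('os.getcwd') as fake_cwd:
    fake_cwd.return_value = '/fake'
    inside = os.getcwd()        # '/fake'

outside = os.getcwd()           # the real working directory again
print(inside, outside != '/fake')
```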
My approach:

```
import importlib

def change_function_definition(function_object, name_of_function='module1.function1'):
    my_mod = importlib.import_module('module1')
    setattr(my_mod, 'function1', function_object)
```

*Now the longer rant:*

That approach will probably work, **if module1 is already imported in the local namespace**; for instance, you can do something like:

```
>>> a = eval('str')
>>> a
<type 'str'>
>>> a(123)
'123'
```

In the context of mocking for unit tests, there might be a better way of doing so. You can check here: <http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy#MockTestingTools> for some libraries that will allow you to have more control around mocking objects in your unit tests.

EDIT: You can do something like this, to dynamically import modules:

```
>>> import importlib
>>> my_mod = importlib.import_module('mymodule1')
```

Then, you can access the available functions inside the module, or get them via eval/getattr:

```
my_function = getattr(my_mod, 'somefunction')
```

Or, if you want to swap that function to something else:

```
my_mod.functionName = newFunction
```
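For the question's mock-then-unmock workflow, the missing piece is keeping a reference to the original so it can be put back afterwards; sketched below with a throwaway module object standing in for a real `importlib.import_module('module1')`:

```python
import types

# Stand-in module so the snippet is self-contained.
module1 = types.ModuleType('module1')
module1.function1 = lambda: 'real'

original = getattr(module1, 'function1')        # remember the real function
setattr(module1, 'function1', lambda: 'mock')   # "mock" it
mocked_result = module1.function1()             # 'mock'

setattr(module1, 'function1', original)         # "unmock" it again
restored_result = module1.function1()           # 'real'
print(mocked_result, restored_result)
```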
Python assign a function to another function
[ "", "python", "python-2.7", "" ]
I have a `DataFrame` with column named `date`. How can we convert/parse the 'date' column to a `DateTime` object? I loaded the date column from a Postgresql database using `sql.read_frame()`. An example of the `date` column is `2013-04-04`. What I am trying to do is to select all rows in a dataframe that has their date columns within a certain period, like after `2013-04-01` and before `2013-04-04`. My attempt below gives the error `'Series' object has no attribute 'read'` **Attempt** ``` import dateutil df['date'] = dateutil.parser.parse(df['date']) ``` **Error** ``` AttributeError Traceback (most recent call last) <ipython-input-636-9b19aa5f989c> in <module>() 15 16 # Parse 'Date' Column to Datetime ---> 17 df['date'] = dateutil.parser.parse(df['date']) 18 19 # SELECT RECENT SALES C:\Python27\lib\site-packages\dateutil\parser.pyc in parse(timestr, parserinfo, **kwargs) 695 return parser(parserinfo).parse(timestr, **kwargs) 696 else: --> 697 return DEFAULTPARSER.parse(timestr, **kwargs) 698 699 C:\Python27\lib\site-packages\dateutil\parser.pyc in parse(self, timestr, default, ignoretz, tzinfos, **kwargs) 299 default = datetime.datetime.now().replace(hour=0, minute=0, 300 second=0, microsecond=0) --> 301 res = self._parse(timestr, **kwargs) 302 if res is None: 303 raise ValueError, "unknown string format" C:\Python27\lib\site-packages\dateutil\parser.pyc in _parse(self, timestr, dayfirst, yearfirst, fuzzy) 347 yearfirst = info.yearfirst 348 res = self._result() --> 349 l = _timelex.split(timestr) 350 try: 351 C:\Python27\lib\site-packages\dateutil\parser.pyc in split(cls, s) 141 142 def split(cls, s): --> 143 return list(cls(s)) 144 split = classmethod(split) 145 C:\Python27\lib\site-packages\dateutil\parser.pyc in next(self) 135 136 def next(self): --> 137 token = self.get_token() 138 if token is None: 139 raise StopIteration C:\Python27\lib\site-packages\dateutil\parser.pyc in get_token(self) 66 nextchar = self.charstack.pop(0) 67 else: ---> 68 nextchar = 
self.instream.read(1) 69 while nextchar == '\x00': 70 nextchar = self.instream.read(1) AttributeError: 'Series' object has no attribute 'read' ``` --- `df['date'].apply(dateutil.parser.parse)` gives me the error `AttributeError: 'datetime.date' object has no attribute 'read'` `df['date'].truncate(after='2013/04/01')` gives the error `TypeError: can't compare datetime.datetime to long` `df['date'].dtype` returns `dtype('O')`. Is it already a `datetime` object?
pandas already reads that as a `datetime` object! So what you want is to select rows between two dates and you can do that by masking:

```
df_masked = df[(df.date > '2012-04-01') & (df.date < '2012-04-04')]
```

Because you said that you were getting an error from the string for some reason, try this:

```
df_masked = df[(df.date > datetime.date(2012,4,1)) & (df.date < datetime.date(2012,4,4))]
```
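The string mask works only because ISO `YYYY-MM-DD` strings sort in the same order as the dates they encode — zero-padded, largest unit first. A pure-Python illustration of the same comparison (pandas just applies it element-wise):

```python
# ISO-formatted date strings compare lexicographically in date order.
dates = ['2012-03-30', '2012-04-02', '2012-04-03', '2012-04-05']

masked = [d for d in dates if '2012-04-01' < d < '2012-04-04']
print(masked)   # ['2012-04-02', '2012-04-03']
```

A format like `04/02/2012` would not sort this way, which is one reason converting the column to real datetimes is the safer route.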
Pandas is aware of the object `datetime` but when you use some of the import functions it is taken as a string. So what you need to do is make sure the column is set as the datetime type, not as a string. Then you can make your query.

```
df['date'] = pd.to_datetime(df['date'])
df_masked = df[(df['date'] > datetime.date(2012,4,1)) & (df['date'] < datetime.date(2012,4,4))]
```
Parse a Pandas column to Datetime when importing table from SQL database and filtering rows by date
[ "python", "python-2.7", "numpy", "scipy", "pandas" ]
In SublimeText 2, which uses Python plugins, I am trying to enhance an existing plugin that I found. Basically it is a timer plugin that has `start`, `stop`, and `pause` functionality and prints the times to the status bar using this SublimeText API call:

`sublime.status_message(TIMER)`

What I would like to do is show something in the status bar to indicate that a timer is in fact started and running, something like this:

`sublime.status_message('Timer: on')`

The problem is this just briefly shows my status bar message for a few seconds before it is dismissed. So I am looking for information on how to print to the status bar and keep the message there long-term.
You can use `view.set_status(key, value)` to place a persisting message in the status bar. However, this is bound to a view, not the application. If you need the message independent of the view, you will have to do some work using activated and deactivated listeners. Alternatively, you can set the status on all views in the window by using `window.views()` to get an array of all the views in the window, and place the status message on all of the views. Then, when you are done, remove all of the status messages.
Please reference the code below. This is a class that clears the status after a specified time period expires; to use it, just call the `set_status()` function.

```
import time, threading

class StatusDisplay:
    def __init__(self):
        self.timeout = 3
        self.timer = None

    def set_status(self, view, msg, overwrite):
        self.cancel_timer()
        self.view = view
        if overwrite:
            self.view.set_status("mytag", msg + '\n')
        else:
            self.view.set_status("mytag", msg)
        self.start_timer()

    def cancel_timer(self):
        if self.timer != None:
            self.timer.cancel()

    def start_timer(self):
        self.timer = threading.Timer(self.timeout, self.clear)
        self.timer.start()

    def clear(self):
        print('clear message')
        self.view.erase_status("mytag")
```
Print to Sublimetext Status bar in Python?
[ "python", "sublimetext2" ]
I need to add 1 day to the current date and have the output in the format yyyymmdd. The code needs to be written in a stored procedure on the SQL server. Currently my code is as follows:

```
DECLARE @dat DATE
select @dat = dateadd(DD, 1, getdate())
SELECT @dat = LEFT(CONVERT(VARCHAR(8), @dat, 112), 10)
```

However, it seems like I'm doing something wrong, as my output in the SQL table is in the format yyyy-mm-dd. I need to get rid of the hyphens. Any suggestions? Thanks in advance.
```
select @dat = dateadd(DD, 1, getdate())

DECLARE @datCus varchar(8)
select @datCus = LEFT(CONVERT(VARCHAR(8), @dat, 112), 10)
```

The problem was that I assigned `@dat` to the INSERT statement's values. However, having a varchar variable handle the converted value solved the problem (in this case `@datCus`).
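Outside of SQL, the same computation (tomorrow's date rendered as `yyyymmdd` with no hyphens) can be sketched in Python for comparison; this is only an illustration, not part of the stored procedure:

```python
from datetime import date, timedelta

# Add one day to the current date and format it as yyyymmdd
tomorrow = date.today() + timedelta(days=1)
formatted = tomorrow.strftime("%Y%m%d")
print(formatted)  # an 8-character string of digits, no hyphens
```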
The issue is that you are assigning it back to a date object. You need to assign it to a varchar. I did the following in SQL Server 2005:

```
DECLARE @dat DATETIME
DECLARE @string varchar(8)

SET @dat = GetUtcDate()
select @dat = dateadd(DD, 1, getdate())
SELECT @string = CONVERT(VARCHAR(8), @dat, 112)
PRINT @string
```
How to add 1 day to current date and have result in format yyyymmdd in SQL Server?
[ "sql", "sql-server", "stored-procedures", "sql-server-2008-r2" ]
This is one of those silly questions and I don't really know how to formulate it, so I'll give an example. I have:

```
v = chr(0xae) + chr(0xae)
```

where 0xAEAE is, in decimal, the value 44718. My question is how to get the integer value of `v`. I know about `ord()`, but I can use it only on a single char, not on a string. Thank you.
I managed to do this using the `struct` module:

```
import struct

int_no = struct.unpack('>H', v)[0]
print int_no
```

which outputs the desired result:

```
44718
```
You can convert an arbitrary-length string of bytes to an `int` or `long` using one of these expressions:

```
i = reduce(lambda x, y: (x<<8)+ord(y), v, 0)
i = reduce(lambda x, y: (x<<8)+ord(y), reversed(v), 0)
```

The first treats `v` as big-endian data (first byte most significant), the second as little-endian.
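On Python 3, the same conversion is built in as `int.from_bytes` (shown here with a bytes literal, since Python 3 strings are no longer byte strings); a sketch alongside the `struct` approach from the accepted answer:

```python
import struct

v = b"\xae\xae"  # the two bytes 0xAE 0xAE

# Big-endian interpretation: 0xAE * 256 + 0xAE = 44718
n = int.from_bytes(v, byteorder="big")
print(n)  # 44718

# The struct-based unpack gives the same result
assert struct.unpack(">H", v)[0] == n
```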
Python: get int value from a char string
[ "python", "char", "chr", "ord" ]
![enter image description here](https://i.stack.imgur.com/8S1U7.png)

```
test_count = 0

while test_count <= 100:
    print test_count
    test_count += 1
```

Currently this counter prints on the next line, but I am looking for a way to overwrite it in place, starting at "0".
Use the `\r` carriage-return character with `sys.stdout.flush`:

```
import sys
import time  # for invoking time.sleep(n_seconds) inside the loop

counter = 0
while counter <= 100:
    time.sleep(1)
    counter += 1
    sys.stdout.write("\rTesting (%ss elapsed)" % counter)
    sys.stdout.flush()
```
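A way to keep this pattern tidy is to separate the string formatting from the terminal writing; the helper name below is illustrative, not from any library:

```python
import sys
import time

def format_progress(count):
    # '\r' returns the cursor to the start of the line, so the next
    # write overwrites the previous counter instead of printing below it
    return "\rTesting (%ss elapsed)" % count

for counter in range(3):
    sys.stdout.write(format_progress(counter))
    sys.stdout.flush()
    time.sleep(0.1)
```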
Use `\r` and add a `,` at the end of your print statement so it does not automatically write a newline, as in the code below. Also, see the [python-progressbar](http://code.google.com/p/python-progressbar/) library for some nice text implementations of progress bars.

```
import time  # Added to demonstrate the effect

test_count = 0
while test_count <= 100:
    print "\r%3d" % test_count,
    time.sleep(0.1)
    test_count += 1
```
print progress counter of python in DOS
[ "python", "python-2.7", "python-3.x" ]
I am trying to create a remote database using MySQL on an Ubuntu machine running 12.04. It has a root user with remote login enabled. I have started the server. The output of

```
sudo netstat -tap | grep mysql
```

shows

```
tcp        0      0 *:mysql       *:*       LISTEN      13135/mysqld
```

From the client machine, which also runs Ubuntu 12.04, I use a Python script to connect to the remote MySQL database using SQLAlchemy:

```
from pox.core import core
import pox.openflow.libopenflow_01 as of
import re
import datetime
import time
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql.expression import exists

log = core.getLogger()

engine = create_engine('mysql://root:pass@192.168.129.139/home/karthik.sharma/ms_thesis/nwtopology.db', echo=False)
Base = declarative_base()
Session = sessionmaker(bind=engine)
session = Session()

class SourcetoPort(Base):
    """"""
    __tablename__ = 'source_to_port'
    id = Column(Integer, primary_key=True)
    port_no = Column(Integer)
    src_address = Column(String, index=True)

    #-----------------------------------------
    def __init__(self, src_address, port_no):
        """"""
        self.src_address = src_address
        self.port_no = port_no
```

The `create_engine()` call is failing with the following error.
```
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 280, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 187, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
OperationalError: (OperationalError) (1045, "Access denied for user 'root'@'openflow.ems.com' (using password: YES)") None None
```

I cannot figure out why this is happening. Any help is greatly appreciated.
First, MySQL doesn't like your password for the root user. Your connection URI gives the root user's password as `pass`:

```
engine = create_engine('mysql://root:pass@192.168.129.139/home/karthik.sharma/ms_thesis/nwtopology.db', echo=False)
```

So what you need to do is configure the root user's password and grant access to the database server from the host where your application runs. Here is a [step-by-step guide for doing this](http://www.debuntu.org/how-to-create-a-mysql-database-and-set-privileges-to-a-user/).

There is a way to access the MySQL database from the command line as the root user: run `mysql -u root -p` on the same server where the MySQL server runs. By default, MySQL is configured to allow root login from localhost with an empty password.

Please try to configure the database access, and feel free to post further questions to Stack Overflow if any come up.

Also, the MySQL URI should [look](http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#transaction-isolation-level) a little different:

```
'mysql://root:pass@192.168.129.139/nwtopology'
```
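One way to avoid malformed URIs like the one in the question is to assemble the URL from its parts; all values below are placeholders for illustration:

```python
# Build a SQLAlchemy MySQL URL from its parts.  These values are
# placeholders -- substitute your real credentials and database name.
user = "root"
password = "pass"
host = "192.168.129.139"
database = "nwtopology"

url = "mysql://%s:%s@%s/%s" % (user, password, host, database)
print(url)
```

Note the path component is just the database name; a filesystem path does not belong in a MySQL URI.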
For others' reference: if you're facing this issue despite having properly configured access rights, the reason might be that your password contains special characters. Special characters have to be URL-encoded. In Python 3.x, this can be done using the `urllib` library:

```
import urllib.parse

db_password = "!my@pa$sword%"
db_password_quoted = urllib.parse.quote(db_password)
print(db_password_quoted)
>> %21my%40pa%24sword%25
```

Then pass `db_password_quoted` to the connection/engine instead of the raw password.
sqlalchemy OperationalError: (OperationalError) (1045, "Access denied for user
[ "python", "mysql", "sqlalchemy" ]
I have the following SQL query:

```
SELECT date(created_at), sum(duration) as total_duration
FROM "workouts"
WHERE "workouts"."user_id" = 5
AND "workouts"."category" = 'All'
GROUP BY date(created_at)
ORDER BY date(created_at) ASC
```

but I also want to query for the id of the workout, so I tried this:

```
SELECT id as id, date(created_at), sum(duration) as total_duration
FROM "workouts"
WHERE "workouts"."user_id" = 5
AND "workouts"."category" = 'All'
GROUP BY id, date(created_at)
ORDER BY date(created_at) ASC
```

However, this results in the GROUP BY date clause not working (i.e. not summing the duration for all workouts on a specific date). I think this is because you cannot have one ID for a date that has multiple records.

Is there any way to return the ID even where a specific record returned has multiple workouts associated with it? For example, if someone had done 3 workouts yesterday, where each lasted 40 minutes, the query would return 120 minutes (summing the durations for that date) but also return each ID for the workouts on that date?

Or should I not do this in the query itself and just do it in the application? Thanks for any help.
Although @flaschenpost and @Gordon Linoff's answers were very helpful, I ended up needing aspects from both answers. Here is how my query ended up:

```
SELECT array_agg(id) OVER (PARTITION BY date(created_at)) as ids,
date(created_at),
sum(load_volume) OVER (PARTITION BY date(created_at)) as total_load_volume
FROM "workouts"
WHERE "workouts"."user_id" = 5
AND "workouts"."category" = 'All'
GROUP BY date(created_at), id, load_volume
ORDER BY date(created_at) ASC;
```

To get each workout id, if there were multiple workouts on a given date, I needed to use `array_agg` as well as a window function. This is the output:

```
                      ids                       |    date    | total_load_volume
------------------------------------------------+------------+-------------------
 {30}                                           | 2013-04-20 |               400
 {29}                                           | 2013-04-23 |               400
 {31}                                           | 2013-04-24 |               400
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {33,34,35,36,37,38,41,42,43,44,45,46,47,48,49} | 2013-04-28 |              1732
 {50}                                           | 2013-04-30 |               400
 {51}                                           | 2013-05-07 |               400
(20 rows)
```
You should be able to use a subquery to get the result:

```
SELECT w1.id, w2.created_at,
       coalesce(w2.total_duration, 0) as total_duration
FROM "workouts" w1 INNER JOIN
     (SELECT date(created_at) as created_at, sum(duration) as total_duration
      FROM "workouts"
      WHERE "workouts"."user_id" = 5 AND "workouts"."category" = 'All'
      GROUP BY date(created_at)
     ) w2
     ON date(w1.created_at) = w2.created_at
ORDER BY w2.created_at;
```

(Note the join compares `date(w1.created_at)` to the subquery's date, since the subquery groups by day.) If you want to return all IDs, even those without a workout, then you could use a LEFT JOIN.
Sum(column) and group by date but get all ID's for that date?
[ "sql", "group-by", "aggregate-functions", "postgresql-9.2" ]
I have three tables: People, Items and Locations. People can have only 1 Item. Locations has no relation to either of the other 2 tables. I want to get one record joining all 3. I managed 2 so far (People + Items), but with the 3rd I keep getting MySQL errors; there is no JOIN ON condition for Locations. Any help?

```
SELECT * FROM ITEMS i
RIGHT JOIN PEOPLE p ON (p.ITEM_ID = i.ID)
where p.ID=3
RIGHT JOIN SELECT * FROM LOCATIONS lo where lo.ID=7
```
If there are no `join` keys in common, then you might want to do a `cross join`. This produces a Cartesian product, that is, every location for each row selected from `People`/`Items`:

```
SELECT *
FROM ITEMS i
RIGHT JOIN PEOPLE p ON (p.ITEM_ID = i.ID)
cross join location l
WHERE p.ID = 3
```

By the way, MySQL has a very flexible (and non-standard) `join` syntax. You can actually leave the `on` clause off of a `join` and it will behave the same as a `cross join`. That is a bad habit, of course. If you want a cross join, then use `cross join` explicitly.
Try something like this:

```
SELECT peo.id, it.id, loc.id
FROM People as peo
INNER JOIN Items as it on it.id = peo.id
INNER JOIN Locations as loc on loc.id = peo.id
WHERE peo.ID = 3
```

Edit: Your question was edited while I was typing this, so my example doesn't match like it used to. Use ITEM\_ID and ID as needed.

Although not recommended, you can also use

```
SELECT *
FROM People as peo
INNER JOIN Items as it on it.id = peo.id
INNER JOIN Locations as loc on loc.id = peo.id
WHERE peo.ID = 3
```
Three rows join mysql two success third no
[ "mysql", "sql", "join" ]
```
                        iluropoda_melanoleuca  bos_taurus  callithrix_jacchus  canis_familiaris
ailuropoda_melanoleuca                      0        84.6                97.4                44
bos_taurus                                  0           0                97.4              84.6
callithrix_jacchus                          0           0                   0              97.4
canis_familiaris                            0           0                   0                 0
```

This is a short version of the Python matrix I have. I have the information in the upper triangle. Is there an easy function to copy the upper triangle to the lower triangle of the matrix?
To do this in NumPy, without using a double loop, you can use [`tril_indices`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tril_indices.html#numpy.tril_indices). Note that, depending on your matrix size, this may be slower than [adding the transpose and subtracting the diagonal](https://stackoverflow.com/a/58806735/3585557), though perhaps this method is more readable.

```
>>> i_lower = np.tril_indices(n, -1)
>>> matrix[i_lower] = matrix.T[i_lower]  # make the matrix symmetric
```

Be careful that you do not try to mix `tril_indices` and [`triu_indices`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.triu_indices.html#numpy.triu_indices), as they both use row-major indexing, i.e., this does not work:

```
>>> i_upper = np.triu_indices(n, 1)
>>> i_lower = np.tril_indices(n, -1)
>>> matrix[i_lower] = matrix[i_upper]  # make the matrix symmetric
>>> np.allclose(matrix.T, matrix)
False
```
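Putting the `tril_indices` approach together as a self-contained sketch, on a small hypothetical distance matrix like the one in the question:

```python
import numpy as np

# Hypothetical upper-triangular matrix (lower triangle is all zeros)
matrix = np.array([[0.0, 84.6, 97.4],
                   [0.0,  0.0, 97.4],
                   [0.0,  0.0,  0.0]])

n = matrix.shape[0]
i_lower = np.tril_indices(n, -1)
matrix[i_lower] = matrix.T[i_lower]  # mirror the upper triangle down
print(matrix)
```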
## **The easiest AND FASTEST (no loop) way to do this for NumPy arrays is the following:**

The following is ~3x faster for 100x100 matrices compared to [the accepted answer](https://stackoverflow.com/a/42209263/3585557) and roughly the same speed for 10x10 matrices.

```
import numpy as np

X = np.array([[0., 2., 3.],
              [0., 0., 6.],
              [0., 0., 0.]])

X = X + X.T - np.diag(np.diag(X))
print(X)
#array([[0., 2., 3.],
#       [2., 0., 6.],
#       [3., 6., 0.]])
```

Note that the matrix must either be upper triangular to begin with, or it should be made upper triangular as follows.

```
rng = np.random.RandomState(123)
X = rng.randint(10, size=(3, 3))
print(X)
#array([[2, 2, 6],
#       [1, 3, 9],
#       [6, 1, 0]])

X = np.triu(X)
X = X + X.T - np.diag(np.diag(X))
print(X)
#array([[2, 2, 6],
#       [2, 3, 9],
#       [6, 9, 0]])
```

---
Copy upper triangle to lower triangle in a python matrix
[ "python", "matrix" ]
> A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
> Find the largest palindrome made from the product of two 3-digit numbers.
>
> <http://projecteuler.net/problem=4>

Below is my solution to this problem. It works, but I noticed another solution that used

```
if x*y < max_seen: continue
```

like this:

```
def biggest():
    big_x, big_y, max_seen = 0, 0, 0
    for x in xrange(999, 99, -1):
        for y in xrange(x, 99, -1):
            if x*y < max_seen: continue
            if is_palindrome(x*y):
                big_x, big_y, max_seen = x, y, x*y
```

What I don't get is how that line works. The first time through, `max_seen = 0` and the first `x*y` is `999*999`, which is greater than 0. So that condition isn't met and the next line is run. Makes sense.

Eventually, however, `max_seen` will be larger than `x*y`, so why does it `continue` here? It seems this line isn't even needed, because whether or not the condition is met the program will continue anyway. I suspect I'm not understanding how `continue` works in Python.

This is my approach:

```
def find_biggest():
    big_x, big_y, new_pal, max_seen = 0, 0, 0, 0
    for x in range(999, 100, -1):
        for y in range(x, 100, -1):
            if is_palindrome(x*y) == True:
                new_pal = x*y
                if new_pal > max_seen:
                    big_x, big_y, max_seen = x, y, new_pal
```

From an efficiency standpoint, the program should exit as soon as all new `x*y` are `< max_seen`, but `999*100` is less than `998*900` (meaning it couldn't stop yet, as it still needs to check `998*y`, `997*y`, etc.), so how would you code that?
I suspect the reason that the code checks for `x*y < max_seen` first is that it is an easier test than `is_palindrome`. If you expect many of your potential `x` and `y` values to be no good, it makes sense to do the easiest tests first so that you only need to run the complicated tests a few times.

That said, if `x*y < max_seen` is true, there won't be any successful tests for the current `x` value. An optimization could be to replace `continue` (which goes on to the next `y` value) with `break` (which ends the inner loop and so goes on to the next `x` value).

You could even do something similar for the outer loop, and test if `x * 999 < max_seen`. If so, you'll never find a better result and you can stop looping. Here's how that would look in code:

```
def biggest():
    big_x, big_y, max_seen = 0, 0, 0
    for x in xrange(999, 99, -1):
        if x*x < max_seen:
            break  # breaks out of outer loop, as no later x value can be better
        for y in xrange(x, 99, -1):
            if x*y < max_seen:
                break  # breaks out of inner loop, no later y value can be better
            if is_palindrome(x*y):
                big_x, big_y, max_seen = x, y, x*y
    return big_x, big_y, max_seen
```
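Both snippets assume an `is_palindrome` helper; a complete, runnable sketch of this break-based version (same approach, with a simple string-reversal palindrome test) might look like this:

```python
def is_palindrome(n):
    # A number is a palindrome if its decimal digits read the same reversed
    s = str(n)
    return s == s[::-1]

def biggest():
    big_x, big_y, max_seen = 0, 0, 0
    for x in range(999, 99, -1):
        if x * x < max_seen:
            break  # no remaining x can beat the best palindrome found
        for y in range(x, 99, -1):
            if x * y < max_seen:
                break  # y is decreasing, so all further products are smaller
            if is_palindrome(x * y):
                big_x, big_y, max_seen = x, y, x * y
    return big_x, big_y, max_seen

best = biggest()
print(best)  # (993, 913, 906609)
```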
The two approaches are almost the same, although the first approach avoids checking for a palindrome when the product is smaller than the biggest palindrome already encountered, and is therefore more efficient.

```
if x*y < max_seen:
    continue
if is_palindrome(x*y):
    ...
```

To answer your first question: in the first approach, `max_seen` grows only when a product is a palindrome, so it will **not** simply end up larger than every later `x*y`.
Euler #4 in Python
[ "python", "continue" ]
I have a pandas dataframe, `df`:

```
   c1   c2
0  10  100
1  11  110
2  12  120
```

How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. For example:

```
for row in df.rows:
    print(row['c1'], row['c2'])
```

---

I found a [similar question](https://stackoverflow.com/questions/7837722/what-is-the-most-efficient-way-to-loop-through-dataframes-with-pandas), which suggests using either of these:

* ```
  for date, row in df.T.iteritems():
  ```
* ```
  for row in df.iterrows():
  ```

But I do not understand what the `row` object is and how I can work with it.
[`DataFrame.iterrows`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html#pandas-dataframe-iterrows) is a generator which yields both the index and row (as a Series):

```
import pandas as pd

df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})
df = df.reset_index()  # make sure indexes pair with number of rows

for index, row in df.iterrows():
    print(row['c1'], row['c2'])
```

```
10 100
11 110
12 120
```

---

Obligatory disclaimer from the [documentation](https://pandas.pydata.org/docs/user_guide/basics.html#iteration)

> **Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed** and can be avoided with one of the following approaches:
>
> * Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, …
> * When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application.
> * If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach.

Other answers in this thread delve into greater depth on alternatives to the iter\* functions if you are interested to learn more.
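As a side note, `itertuples` is often a faster drop-in for the same kind of loop; a small sketch on the question's frame (rows arrive as namedtuples, so fields are accessed as attributes rather than dict-style keys):

```python
import pandas as pd

df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})

# Each row is a namedtuple; access columns as attributes
rows = [(row.c1, row.c2) for row in df.itertuples(index=False)]
print(rows)  # [(10, 100), (11, 110), (12, 120)]
```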
> ## How to iterate over rows in a DataFrame in Pandas

# Answer: DON'T\*!

Iteration in Pandas is an anti-pattern and is something you should only do when you have exhausted every other option. You should not use any function with "`iter`" in its name for more than a few thousand rows or you will have to get used to a **lot** of waiting.

Do you want to print a DataFrame? Use [**`DataFrame.to_string()`**](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_string.html).

Do you want to compute something? In that case, search for methods in this order (list modified from [here](https://stackoverflow.com/questions/24870953/does-iterrows-have-performance-issues)):

1. Vectorization
2. [Cython](https://en.wikipedia.org/wiki/Cython) routines
3. List Comprehensions (vanilla `for` loop)
4. [**`DataFrame.apply()`**](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html): i) Reductions that can be performed in Cython, ii) Iteration in Python space
5. [**`items()`**](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.items.html) [**`iteritems()`**](https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.DataFrame.iteritems.html) (deprecated since v1.5.0)
6. [**`DataFrame.itertuples()`**](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.itertuples.html)
7. [**`DataFrame.iterrows()`**](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html#pandas.DataFrame.iterrows)

`iterrows` and `itertuples` (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/nametuples for sequential processing, which is really the only thing these functions are useful for.
**Appeal to Authority**

[The documentation page](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#iteration) on iteration has a huge red warning box that says:

> Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed [...].

\* It's actually a little more complicated than "don't". `df.iterrows()` is the correct answer to this question, but "vectorize your ops" is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you're not sure whether you need an iterative solution, you probably don't. PS: To know more about my rationale for writing this answer, skip to the very bottom.

---

## Faster than Looping: [Vectorization](https://stackoverflow.com/questions/1422149/what-is-vectorization), [Cython](https://cython.org)

A good number of basic operations and computations are "vectorised" by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on [Essential Basic Functionality](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#essential-basic-functionality) to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom [Cython extensions](https://pandas.pydata.org/pandas-docs/stable/user_guide/enhancingperf.html#cython-writing-c-extensions-for-pandas).
---

## Next Best Thing: [List Comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)\*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform elementwise transformation on your code. There is a [good amount of evidence](https://stackoverflow.com/questions/54028199/for-loops-with-pandas-when-should-i-care) to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common Pandas tasks.

The formula is simple,

```
# Iterating over one column - `f` is some function that processes your data
result = [f(x) for x in df['col']]

# Iterating over two columns, use `zip`
result = [f(x, y) for x, y in zip(df['col1'], df['col2'])]

# Iterating over multiple columns - same data type
result = [f(row[0], ..., row[n]) for row in df[['col1', ...,'coln']].to_numpy()]

# Iterating over multiple columns - differing data type
result = [f(row[0], ..., row[n]) for row in zip(df['col1'], ..., df['coln'])]
```

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw Python code.

**Caveats**

List comprehensions assume that your data is easy to work with - what that means is your data types are consistent and you don't have NaNs, but this cannot always be guaranteed.

1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
2. When dealing with mixed data types you should iterate over `zip(df['A'], df['B'], ...)` instead of `df[['A', 'B']].to_numpy()` as the latter implicitly upcasts data to the most common type.
As an example, if A is numeric and B is string, `to_numpy()` will cast the entire array to string, which may not be what you want. Fortunately `zip`ping your columns together is the most straightforward workaround to this.

\* Your mileage may vary for the reasons outlined in the **Caveats** section above.

---

## An Obvious Example

Let's demonstrate the difference with a simple example of adding two pandas columns `A + B`. This is a vectorizable operation, so it will be easy to contrast the performance of the methods discussed above.

![](https://i.stack.imgur.com/y44RJ.png)

[Benchmarking code, for your reference](https://gist.github.com/Coldsp33d/948f96b384ca5bdf6e8ce203ac97c9a0). The line at the bottom measures a function written in numpandas, a style of Pandas that mixes heavily with NumPy to squeeze out maximum performance. Writing numpandas code should be avoided unless you know what you're doing. Stick to the API where you can (i.e., prefer `vec` over `vec_numpy`).

I should mention, however, that it isn't always this cut and dry. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.

---

## My Personal Opinion \*

Most of the analyses performed on the various alternatives to the iter family has been through the lens of performance. However, in most situations you will typically be working on a reasonably sized dataset (nothing beyond a few thousand or 100K rows) and performance will come second to simplicity/readability of the solution. Here is my personal preference when selecting a method to use for a problem.
For the novice:

> *Vectorization* (when possible)*; `apply()`; List Comprehensions; `itertuples()`/`iteritems()`; `iterrows()`; Cython*

For the more experienced:

> *Vectorization* (when possible)*; `apply()`; List Comprehensions; Cython; `itertuples()`/`iteritems()`; `iterrows()`*

Vectorization prevails as the most idiomatic method for any problem that can be vectorized. Always seek to vectorize! When in doubt, consult the docs, or look on Stack Overflow for an existing question on your particular task.

I do tend to go on about how bad `apply` is in a lot of my posts, but I do concede it is easier for a beginner to wrap their head around what it's doing. Additionally, there are quite a few use cases for `apply`, as explained in [this post of mine](https://stackoverflow.com/questions/54432583/when-should-i-not-want-to-use-pandas-apply-in-my-code).

Cython ranks lower down on the list because it takes more time and effort to pull off correctly. You will usually never need to write code with pandas that demands this level of performance that even a list comprehension cannot satisfy.

\* As with any personal opinion, please take with heaps of salt!

---

## Further Reading

* [10 Minutes to pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html), and [Essential Basic Functionality](https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html) - Useful links that introduce you to Pandas and its library of vectorized\*/cythonized functions.
* [Enhancing Performance](https://pandas.pydata.org/pandas-docs/stable/user_guide/enhancingperf.html) - A primer from the documentation on enhancing standard Pandas operations
* *[Are for-loops in pandas really bad? When should I care?](https://stackoverflow.com/questions/54028199/for-loops-with-pandas-when-should-i-care)* - a detailed write-up by me on list comprehensions and their suitability for various operations (mainly ones involving non-numeric data)
* *[When should I (not) want to use pandas apply() in my code?](https://stackoverflow.com/questions/54432583/when-should-i-ever-want-to-use-pandas-apply-in-my-code)* - `apply` is slow (but not as slow as the `iter*` family). There are, however, situations where one can (or should) consider `apply` as a serious alternative, especially in some `GroupBy` operations.

\* Pandas string methods are "vectorized" in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.

---

## Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "How can I iterate over my df to do X?", showing code that calls `iterrows()` inside a `for` loop. Here is why.

A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning if iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them.

I'm not trying to start a war of iteration vs. vectorization, but I want new users to be informed when developing solutions to their problems with this library.

And finally ...
a TLDR to summarize this post [![Meme based on "Invincible" (2021). Mark (labeled "New Pandas user"): "How do I iterate over this Pandas dataframe". Nolan (his father): "That's the neat part - you don't".](https://i.stack.imgur.com/FYfgi.jpg)](https://i.stack.imgur.com/FYfgi.jpg)
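To make the vectorize-first advice concrete, here is a minimal sketch contrasting an `iterrows()` loop with the equivalent vectorized column arithmetic. The toy frame and its columns `a` and `b` are hypothetical, purely for illustration:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Iterative approach: a Python-level loop over Series objects, slow on large frames
sums_loop = [row["a"] + row["b"] for _, row in df.iterrows()]

# Vectorized approach: one expression evaluated in optimized compiled code
sums_vec = (df["a"] + df["b"]).tolist()

assert sums_loop == sums_vec  # identical result, very different cost at scale
```

Both produce the same values; the difference only shows up in runtime, which grows linearly with Python-level overhead for the loop but stays nearly flat per element for the vectorized form.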
How can I iterate over rows in a Pandas DataFrame?
[ "", "python", "pandas", "dataframe", "loops", "" ]
I would like to assign a variable inside a SELECT. Is it possible to do it? SELECT EXAMPLE ``` DECLARE @NAME_SURNAME varchar(200); SELECT (CASE WHEN (PLP.NAME +' '+ PLP.SURNAME) != @NAME_SURNAME THEN (SET @NAME_SURNAME = (PLP.NAME +' '+ PLP.SURNAME)) END) AS 'LP.', ``` The problem is this part: ``` (SET @NAME_SURNAME = (PLP.NAME +' '+ PLP.SURNAME)) ```
Your SQL doesn't quite make sense. You seem to be trying to set a variable and return a value (as "LP.") at the same time. This is not allowed. Here is the note in the [documentation](http://msdn.microsoft.com/en-us/library/ms187330%28v=sql.105%29.aspx): > A SELECT statement that contains a variable assignment cannot be used > to also perform typical result set retrieval operations. The correct syntax for setting a variable using a `select` is this: ``` select @NAME_SURNAME = (CASE WHEN (PLP.NAME +' '+ PLP.SURNAME) != @NAME_SURNAME THEN (PLP.NAME +' '+ PLP.SURNAME) else @NAME_SURNAME end) ``` That is, the `set` keyword is not allowed within a `select` statement.
I don't think you need the word `SET` in this statement. ``` (@NAME_SURNAME = (PLP.NAME +' '+ PLP.SURNAME)) ```
equals Test into sql
[ "", "sql", "sql-server", "t-sql", "" ]
I have a string: ``` This is @lame ``` Here I want to extract lame. But here is the issue: the above string can also be ``` This is lame ``` Here I don't extract anything. And the string can be: ``` This is @lame but that is @not ``` Here I extract lame and not. So, the output I am expecting in each case is: ``` [lame] [] [lame,not] ``` How do I extract these in a robust way in Python?
Use [`re.findall()`](http://docs.python.org/2/library/re.html#re.findall) to find multiple patterns; in this case for anything that is preceded by `@`, consisting of word characters: ``` re.findall(r'(?<=@)\w+', inputtext) ``` The `(?<=..)` construct is a *positive lookbehind assertion*; it only matches if the current position is preceded by a `@` character. So the above pattern matches 1 or more word characters (the `\w` character class) *only* if those characters were preceded by an `@` symbol. Demo: ``` >>> import re >>> re.findall(r'(?<=@)\w+', 'This is @lame') ['lame'] >>> re.findall(r'(?<=@)\w+', 'This is lame') [] >>> re.findall(r'(?<=@)\w+', 'This is @lame but that is @not') ['lame', 'not'] ``` If you plan on reusing the pattern, do compile the expression first, then use the [`.findall()` method](http://docs.python.org/2/library/re.html#re.RegexObject.findall) on the compiled regular expression object: ``` at_words = re.compile(r'(?<=@)\w+') at_words.findall(inputtext) ``` This saves you a cache lookup every time you call `.findall()`.
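If the lookbehind feels opaque, a capture group achieves the same effect: when a pattern contains exactly one group, `re.findall()` returns only the captured portion, so the `@` is matched but excluded from the results. A small sketch:

```python
import re

def extract_mentions(text):
    # The '@' anchors the match; only the \w+ inside the group is returned
    return re.findall(r'@(\w+)', text)

print(extract_mentions('This is @lame'))                   # ['lame']
print(extract_mentions('This is lame'))                    # []
print(extract_mentions('This is @lame but that is @not'))  # ['lame', 'not']
```

The lookbehind and capture-group versions behave identically for this input; the capture group is often easier to read, while the lookbehind keeps the match itself limited to the word characters.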
This will give the output you requested: ``` import re regex = re.compile(r'(?<=@)\w+') print regex.findall('This is @lame') print regex.findall('This is lame') print regex.findall('This is @lame but that is @not') ```
extracting multiple instances regex python
[ "", "python", "" ]
How do you insert objects into a collection using MongoKit in Python? I understand you can specify the `__collection__` field in a model and that you can create models in the db like ``` user = db.Users() ``` Then save the object like ``` user.save() ``` However I can't seem to find any mention of an insert function. If I create a User object and now want to insert it into a specific collection, say "online\_users", what do I do?
After some guesswork, it appears I can simply call ``` db.online_users.insert(user) ```
You create a new Document called `OnlineUser` with the `__collection__` field set to `online_users`, and then you have to relate `User` and `OnlineUser` with either `ObjectId` or `DBRef`. MongoKit [supports both](http://namlook.github.io/mongokit/structure.html) through - * pymongo.objectid.ObjectId * bson.dbref.DBRef You can also use a `list` of any other field type as a field.
Mongokit add objects to collection
[ "", "python", "mongodb", "crud", "pymongo", "mongokit", "" ]