I'm writing a set of python functions that perform some sort of conformance checking on a source code project. I'd like to specify quite verbose names for these functions, e.g.: `check_5_theVersionOfAllVPropsMatchesTheVersionOfTheAutolinkHeader()` Could such excessively long names be a problem for python? Is there a maximum length for attribute names?
[2.3. Identifiers and keywords](https://docs.python.org/2/reference/lexical_analysis.html#identifiers) from The Python Language Reference: > Identifiers are unlimited in length. --- But you'll be violating [PEP-8](http://www.python.org/dev/peps/pep-0008/#maximum-line-length) most likely, which is not really cool: > Limit all lines to a maximum of 79 characters. Also you'll be violating [PEP-20](http://www.python.org/dev/peps/pep-0020/) (the Zen of Python): > Readability counts.
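A quick sketch to confirm the unlimited-length claim (the 1000-character name below is generated purely for illustration):

```python
# Identifiers have no length limit in Python; both of these work.
def check_5_theVersionOfAllVPropsMatchesTheVersionOfTheAutolinkHeader():
    return True

long_name = "x" * 1000            # a 1000-character identifier
namespace = {}
exec("def %s(): return 42" % long_name, namespace)
print(namespace[long_name]())     # prints 42
```

The interpreter accepts both happily; only PEP-8's 79-character line limit (and your readers) will object.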
They could be a problem for the programmer. Keep the function names reasonably short, and use docstrings to document them.
What is the maximum length for an attribute name in python?
[ "python", "python-2.7" ]
I frequently find myself creating homogeneous sets of objects, which I usually store in a dict or list of dictionaries. ``` a,b,c = MyClass(...), MyClass(...), MyClass(...) my_set = {'a':a,'b':b,'c':c} ``` On a set like this, common needs are * returning homogeneous sets from the object properties or methods + `{k:k.prop for k in my_set}` , or + `{k:k.func() for k in my_set}` * filtering based on properties or the output of methods of the objects + `{k:k.prop for k in my_set if k.prop is blah}` , or + `{k:k.func() for k in my_set if k.func() is blah}` * stringing the first two needs together At first, this seemed to suggest that I should be using a database. However, I would have to give up the freedom provided by using arbitrary objects as elements. I'd like to create a class for this type of structure (if it doesn't already exist) which fulfills these needs with something like... ``` a,b,c = MyClass(...), MyClass(...), MyClass(...) my_set = {'a':a,'b':b,'c':c} my_set.prop # fulfills first need my_set.func() # fulfills first need my_set[prop = blah] # fulfills second need my_set[prop= blah].prop.func() # third need ``` Does functionality like this exist? Or is the need for this object a result of bad design? ## Solution Following the suggestion of @mike-muller to override the `__getattr__` method I created a class here <https://gist.github.com/arsenovic/5723000>
You are looking for the special methods of classes. Have a look at `__getattr__` and `__getitem__`. You can write your own class whose instances can have `a`, `b` and `c` as attributes and do what you describe. Something like this might do part of what you want: ``` class DictCollection(object): def __init__(self, dict_): self.dict = dict_ def __getattr__(self, name): return {k: getattr(v, name) for k, v in self.dict.items()} def __getitem__(self, key): return {k: getattr(v, key) for k, v in self.dict.items()} ```
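A runnable sketch of that `__getattr__` idea (the class and attribute names here are illustrative, not from any library):

```python
class DictCollection(object):
    """Broadcasts attribute access to every object stored in a dict."""
    def __init__(self, mapping):
        self._mapping = mapping

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails, so
        # regular attributes like _mapping are unaffected.
        return {k: getattr(v, name) for k, v in self._mapping.items()}

class Point(object):
    def __init__(self, x):
        self.x = x

coll = DictCollection({'a': Point(1), 'b': Point(2)})
print(coll.x)  # {'a': 1, 'b': 2}
```

Filtering (`my_set[prop = blah]`) would need `__getitem__` plus some convention for expressing the predicate, since that exact syntax is not valid Python.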
I've run into cases where I have manipulated big lists of dictionaries or objects, much like you have. Often those objects are pulled out of a database and populated via an ORM, but I've done it with CSV files as well. I am not sure that my needs are frequent/common enough that I would look for a module to give me syntactic sugar for it.
Python class for homogeneous sets of objects
[ "python" ]
I need to join both deptno and sal at the same time when comm is not null. I came up with the following, but it only accepts either deptno or sal, not both at the same time. ``` select ename,deptno,sal,comm from emp where deptno,sal in(select sal,deptno from emp where comm is not null); ``` This is a self join. I am trying it on Oracle 10g. Thanks in advance.
Your purpose is not clear, although it seems you want to write a multi-column semi-join using `IN`. In that case you can simply add parentheses: ``` SELECT ename, deptno, sal, comm FROM emp WHERE (sal, deptno) IN (SELECT sal, deptno FROM emp WHERE comm IS NOT NULL); ```
You could use a real join: ``` SELECT e1.ename, e1.deptno, e1.sal, e1.comm FROM emp e1 JOIN (SELECT distinct sal, deptno FROM emp WHERE comm IS NOT NULL) e2 ON e1.sal = e2.sal AND e1.deptno = e2.deptno ```
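Both forms can be tried quickly from Python with an in-memory SQLite database (SQLite 3.15+ supports the row-value `IN`; the sample rows below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, deptno INT, sal INT, comm INT);
INSERT INTO emp VALUES
  ('ALLEN', 30, 1600, 300),
  ('SMITH', 20,  800, NULL),
  ('WARD',  30, 1250, 500),
  ('JONES', 30, 1600, NULL);  -- same (sal, deptno) as ALLEN, but no comm
""")
# Multi-column semi-join: keep rows whose (sal, deptno) pair appears
# among the rows where comm IS NOT NULL.
rows = conn.execute("""
  SELECT ename FROM emp
  WHERE (sal, deptno) IN (SELECT sal, deptno FROM emp WHERE comm IS NOT NULL)
""").fetchall()
print(sorted(r[0] for r in rows))  # ['ALLEN', 'JONES', 'WARD']
```

Note that JONES matches even though his own comm is NULL, because his (sal, deptno) pair is shared with ALLEN; that is the semi-join semantics both answers describe.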
Joining multiple columns with multiple columns
[ "sql", "oracle" ]
I have this format of string ``` 2013-06-05T11:01:02.955 LASTNAME=Jone FIRSTNAME=Jason PERSONNELID=salalm QID=231412 READER_NAME="CAZ.1 LOBBY LEFT TURNSTYLE OUT" ACCESS_TYPE="Access Granted" EVENT_TIME_UTC=1370480141.000 REGION=UTAH ``` some of them look like this ``` 2013-06-05T11:15:48.670 LASTNAME=Ga FIRSTNAME="Je " PERSONNELID=jega QID=Q10138202 READER_NAME="CAZ.1 ELEVATOR LOBBY DBL GLASS" ACCESS_TYPE="Access Granted" EVENT_TIME_UTC=1370481333.000 REGION=UTAH ``` I want to extract the values of PERSONNELID, REGION, ACCESS\_TYPE, EVENT\_TIME\_UTC. I was going to use split(" "), however the READER\_NAME and ACCESS\_TYPE values contain spaces. Can I convert to JSON and search by key? What is the way to extract those strings? Thank you in advance
One hack I've found useful in the past is to use [`shlex.split`](http://docs.python.org/2/library/shlex.html#shlex.split): ``` >>> import shlex >>> s = '2013-06-05T11:01:02.955 LASTNAME=Jone FIRSTNAME=Jason PERSONNELID=salalm QID=231412 READER_NAME="CAZ.1 LOBBY LEFT TURNSTYLE OUT" ACCESS_TYPE="Access Granted" EVENT_TIME_UTC=1370480141.000 REGION=UTAH' >>> split = shlex.split(s) >>> split ['2013-06-05T11:01:02.955', 'LASTNAME=Jone', 'FIRSTNAME=Jason', 'PERSONNELID=salalm', 'QID=231412', 'READER_NAME=CAZ.1 LOBBY LEFT TURNSTYLE OUT', 'ACCESS_TYPE=Access Granted', 'EVENT_TIME_UTC=1370480141.000', 'REGION=UTAH'] ``` And then we can turn this into a dictionary: ``` >>> parsed = dict(k.split("=", 1) for k in split if '=' in k) >>> parsed {'EVENT_TIME_UTC': '1370480141.000', 'FIRSTNAME': 'Jason', 'LASTNAME': 'Jone', 'REGION': 'UTAH', 'ACCESS_TYPE': 'Access Granted', 'PERSONNELID': 'salalm', 'QID': '231412', 'READER_NAME': 'CAZ.1 LOBBY LEFT TURNSTYLE OUT'} ``` As @abarnert points out, you can keep more of the information around if you want: ``` >>> dict(k.partition('=')[::2] for k in split) {'2013-06-05T11:01:02.955': '', 'EVENT_TIME_UTC': '1370480141.000', 'FIRSTNAME': 'Jason', 'LASTNAME': 'Jone', 'REGION': 'UTAH', 'ACCESS_TYPE': 'Access Granted', 'PERSONNELID': 'salalm', 'QID': '231412', 'READER_NAME': 'CAZ.1 LOBBY LEFT TURNSTYLE OUT'} ``` Et cetera. The key point, as he nicely put it, is that the syntax you've shown looks a lot like minimal shell syntax. OTOH, if there are violations of the pattern that you've shown elsewhere, you might want to fall back to writing a custom parser. The `shlex` approach is handy when it applies but isn't as robust as you might want.
Let's analyze the problem: You want to match one of the four identifiers, then an `=` sign, and then either a quoted string or a sequence of non-whitespace characters. That's a perfect job for a regular expression: ``` >>> s = '2013-06-05T11:01:02.955 LASTNAME=Jone FIRSTNAME=Jason PERSONNELID=salalm QID=231412 READER_NAME="CAZ.1 LOBBY LEFT TURNSTYLE OUT" ACCESS_TYPE="Access Granted" EVENT_TIME_UTC=1370480141.000 REGION=UTAH' >>> import re >>> regex = re.compile(r"""\b(PERSONNELID|REGION|ACCESS_TYPE|EVENT_TIME_UTC) ... = ... ("[^"]*"|\S+)""", re.VERBOSE) >>> result = regex.findall(s) >>> result [('PERSONNELID', 'salalm'), ('ACCESS_TYPE', '"Access Granted"'), ('EVENT_TIME_UTC', '1370480141.000'), ('REGION', 'UTAH')] >>> dict(result) {'EVENT_TIME_UTC': '1370480141.000', 'PERSONNELID': 'salalm', 'ACCESS_TYPE': '"Access Granted"', 'REGION': 'UTAH'} ``` **Explanation:** `\b` makes sure that the match starts at a [word boundary](http://www.regular-expressions.info/wordboundaries.html). `"[^"]*"` matches a quote, followed by any number of non-quote characters, and another quote. `\S+` matches one or more non-whitespace characters. By enclosing the "interesting" parts of the regex in parentheses, building [capturing groups](http://www.regular-expressions.info/brackets.html), you get a list of tuples for each part of the match separately.
How to extract values from an unformatted string
[ "python" ]
Select Users Who Were 17 During a Range of Dates (From and End Date) -- Thanks everyone! **Users** * id * birthdate **Example:** 1. User\_1 Birthdate **09/28/1996** 2. User\_2 Birthdate **08/25/1996** 3. User\_3 Birthdate **07/28/1995** 4. User\_4 Birthdate **05/25/1995** If Range of dates are **FROM 03/05/2013** To **End Date 6/05/2013** \*User\_1 and User\_2 Appear because they meet the criteria of being 17 during that time period\* **HOWEVER** If Range is **From 02/10/2012** and **To End Date 06/05/2013** *All the Users should appear since they all were 17 at some point during the range of dates* I have tried using Datepart() but I can't clearly think it through to arrive at the answer I want ``` Select u.id, u.birthdate From Users u where convert(varchar,DATEPART(MM,u.birthdate))<=DATEPART(MM,'03/05/2013') ``` With everyone's help I came to a conclusion ``` DATEADD(YY,17,birthdate) between @from and @end ```
These should do it; ``` SELECT * FROM users WHERE birthdate BETWEEN DATEADD(year, -18, '2013-03-05') -- lo date of range AND DATEADD(year, -17, '2013-06-05'); -- hi date of range SELECT * FROM users WHERE birthdate BETWEEN DATEADD(year, -18, '2012-02-10') -- lo date of range AND DATEADD(year, -17, '2013-06-05'); -- hi date of range ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/a5e8d/2). Note that User\_1 turns 17 on 09/28/2013 and User\_2 on 08/25/2013, so neither of them is (or should be) included in either range.
**My Preferred Method:** Use date datatypes and explicit comparisons between dates. I recommend storing birthdate as a date datatype in SQL Server 2008+, and also using [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format for datetime literals to avoid ambiguity. ``` select id, birthdate from Users where birthdate > dateadd(year, -18, '2013-03-05') -- Check lower bounds and birthdate <= dateadd(year, -17, '2013-06-05'); -- Check upper bounds ``` Note that I've moved the `dateadd` function to the constants for this revision. As keenly observed by others, this means less calculation (unless you only had 1 row?), and -- perhaps more importantly -- allows for usage of an index on birthdate. **The `BETWEEN` Method:** As shown in another answer, using `BETWEEN` can yield a similar result: ``` select id, birthdate from users where birthdate between dateadd(year, -18, '2013-03-05') and dateadd(year, -17, '2013-06-05') ``` However, `BETWEEN` is inclusive, meaning it will match on all of the range *including* the end points. In this case, **we would get a match on any user's 18th birthday**, which is most likely not the desired result (there is often an [important difference](http://en.wikipedia.org/wiki/Minor_(law)) between ages 17 and 18). I suppose you could use an additional `DATEADD` to subtract a day, but I like to be consistent in my usage of `BETWEEN` as [Aaron Bertrand suggests](https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries). **What Not To Do:** Do not use `DATEPART` or `DATEDIFF` for this type of comparison. They do not represent timespans. `DATEDIFF` shows the difference in terms of *boundaries crossed*. See how the following age of just one day would show someone as being a year old already, because the years are technically one apart: ``` select datediff(year, '2012-12-31', '2013-01-01'); -- Returns 1! ``` A calculation using `DATEPART` for years in this fashion would yield the same thing (similarly with months/12, etc., all the way to milliseconds). Thanks to all who noted the indexing possibility. Let's just not forget the sequence of ["Make it work, make it right, make it fast."](http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast)
SQL Select Users Whose Age is 17 Within a Date Range
[ "sql", "sql-server-2008" ]
I have a table containing records for patient admissions to a group of hospitals. I would like to be able to link each record to the most recent previous record for each patient, if there is a previous record or return a null field if there is no previous record. Further to this I would like to place some criteria of the linked records eg previous visit to the same hospital only, previous visit was less than 7 days before. The data looks something like this (with a whole lots of other fields) ``` Record PatientID hospital Admitdate DischargeDate 1. 1. A. 1/2/12. 3/2/12 2. 2. A. 1/2/12. 4/2/12 3. 1. B. 4/3/12. 4/3/12 ``` My thinking was a self join but I can't figure out how to join to the record where the difference between the admit date and the patient's previous discharge date is the minimum. Thanks!
You could use `row_number()` to assign increasing numbers to records for each patient. Then you can `left join` to the previous record: ``` ; with numbered_records as ( select row_number() over (partition by PatientID, Hospital order by Record desc) as rn , * from YourTable ) select * from numbered_records cur left join numbered_records prev on prev.PatientID = cur.PatientID and prev.Hospital = cur.Hospital and prev.DischargeDate >= dateadd(day, -7, cur.AdmitDate) and prev.rn = cur.rn + 1 ``` The `dateadd` condition keeps only previous visits discharged within 7 days of the current admission. To select only the latest row per patient, add: ``` where cur.rn = 1 ``` at the end of the query.
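The `row_number()` self-join can be exercised with SQLite from Python (SQLite 3.25+ for window functions); the three sample visits mirror the question's table, and the 7-day filter is omitted for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (record INT, patient INT, hospital TEXT,
                     admit TEXT, discharge TEXT);
INSERT INTO visits VALUES
  (1, 1, 'A', '2012-02-01', '2012-02-03'),
  (2, 2, 'A', '2012-02-01', '2012-02-04'),
  (3, 1, 'A', '2012-03-04', '2012-03-04');
""")
# Number each patient's visits (newest first), then left-join each visit
# to the next-numbered (i.e. previous) visit at the same hospital.
rows = conn.execute("""
WITH numbered AS (
  SELECT *, ROW_NUMBER() OVER
    (PARTITION BY patient, hospital ORDER BY record DESC) AS rn
  FROM visits
)
SELECT cur.record, prev.record
FROM numbered cur
LEFT JOIN numbered prev
  ON prev.patient = cur.patient
 AND prev.hospital = cur.hospital
 AND prev.rn = cur.rn + 1
ORDER BY cur.record
""").fetchall()
print(rows)  # [(1, None), (2, None), (3, 1)]
```

Only record 3 has a previous visit (record 1, same patient, same hospital); the others correctly get NULL.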
This will give you the first 2 records of the same patient. If you want the same hospital, then add another check on Hospital along with the `PatientID`; you can also add the date. ``` SELECT * FROM T1 t WHERE (2 >= (SELECT Count(*) FROM T1 tmp WHERE t.PatientID = tmp.PatientID AND t.Record <= tmp.Record)) ``` It will only bring one record if there is only one entry.
Link subsequent patient visits from same table in SQL
[ "sql", "sql-server-2008", "self-join" ]
I have a DQL query like: ``` $em->createQuery(" SELECT r FROM WeAdminBundle:FamilyRelation r WHERE r.col like :query ") ``` Now I want to change "col" depending on various parameters. How can I achieve this with DQL, since the normal setParameter doesn't work here?
In short: you can't, the way you want it. To do it you'd need something like `$dql->setColumn(array('variable_column' => 'some_column_name'))`, just as the [`bindColumn`](http://php.net/manual/en/pdostatement.bindcolumn.php) method from PDO, but there's no equivalent method ([bindColumn](https://github.com/doctrine/doctrine2/search?q=bindColumn&type=Code) or [setColumn](https://github.com/doctrine/doctrine2/search?q=setColumn&type=Code)) in Doctrine.
You can use setParameter with DQL, as [many examples are provided](http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#dql-select-examples) but for LIKE clauses, make sure the variable is wrapped in `%`. ``` $em->createQuery(" SELECT r FROM WeAdminBundle:FamilyRelation r WHERE r.col like :query ")->setParameters(array( 'query' => '%'.$foo.'%' )) ```
Doctrine, escape variable field name in DQL
[ "sql", "doctrine", "dql" ]
I am writing an app and I want users to be able to input python files for corner cases. In my head the best way I can think of doing this is to save their file to disk and save the location to a DB then dynamically import it using `__import__()` and then execute it. The first part of my question is: is this the best way to do this? Also, this brings up some pretty big security concerns. Is there a way to run their module under restriction? To not let it see the file system or anything? **Edit:** The execution of the python would be to retrieve data from a backend service that is outside the scope of "normal", So it would not be a full application. It could just be a definition of a custom protocol.
You can use a combination of the `compile` and `exec` builtins for executing in memory. The downside is that you don't get source lines in tracebacks. Not much hope about security though. Even if you put ridiculously restrictive filters in place, like restricting everything to a single expression and removing most built-in functions, [arbitrary code can be executed](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html). Prohibiting file system access, or access to other OS resources, is also hard from within Python, as the code would run in the same process. A child process with severe restrictions (`chroot` jail, `ulimit`, etc.) is a possibility, but that's not only a lot of work, it also rules out most means of interaction with the host application. If your application runs locally and with the same permissions as the user's account, it's probably best not to worry and give them more power, as they can't do anything/much they couldn't do anyway, and most restriction would make the task (whatever it is) harder/more annoying. If, on the other hand, the code would execute on a server or something, **forget about** running it in the same process.
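The in-memory `compile`/`exec` route looks like this (the module name and function are made up for illustration):

```python
# User-supplied source held as a string, never written to disk.
source = """
def greet(name):
    return "hello " + name
"""
code = compile(source, "<user-plugin>", "exec")
namespace = {}
exec(code, namespace)               # definitions land in `namespace`
print(namespace["greet"]("world"))  # hello world
```

Passing an explicit namespace dict keeps the user's definitions out of your own globals, but note it provides no security isolation whatsoever, for the reasons discussed above.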
You can do several things to limit powers (see my comment on the OP), but really, my two cents: just give each user their own VM (maybe AWS?). That way they can't screw up much, and you can always just restart it. A more complicated approach (but arguably cooler) would be to use lex/yacc (in Python it's PLY) and define your own language (which could be a limited subset of Python).
Dynamically reading in a python file and executing it safely
[ "python" ]
Here's what I have at this point: ``` def checkPlayerCardInput(answer): while True: for x in range(len(player1Cards)): if answer.lower() == player1Cards[x]: return player1[x] ``` What I don't know is how to repeat the loop if the user misspells a word or enters something incorrect entirely? In this scenario, `player1Cards` is equal to a list of 5 strings. I'm simply trying to match the `answer` (which is input previously by the user via `raw_input`) to one of those strings and `return` that answer (which, in this case, is the item from the `player1` list that contains the rest of the data for the "card" returned). If the answer is incorrect, I would like the loop to start with a fresh raw\_input answer from the user.
Does this do what you want? ``` def checkPlayerCardInput(answer): while True: for x in range(len(player1Cards)): if answer.lower() == player1Cards[x]: return player1[x] answer = raw_input('incorrect answer!') ```
To check that an item is contained in a list use the `in` operator. Like this: ``` if answer.lower() in player1Cards: #... ``` Use a `while` loop to keep asking for `raw_input` until you're given a string found in the list.
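The `in`-plus-loop idea can be sketched non-interactively by simulating the user's input with an iterator (in real use, `next(gen)` would be `raw_input()`):

```python
def first_valid(guesses, cards):
    # Keep consuming guesses until one (case-insensitively) matches a card.
    gen = iter(guesses)
    while True:
        answer = next(gen).lower()
        if answer in cards:
            return answer

player1_cards = ["ace", "king", "queen"]
print(first_valid(["jack", "King"], player1_cards))  # king
```

The first guess ("jack") fails the membership test, so the loop asks again; the second guess matches after lowercasing.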
How do I check raw_input against a list of strings?
[ "python", "python-2.7" ]
I have an access database containing records pulled in from csv files which are published daily. From time to time, an existing record is updated. My current database is of the form ``` ... ID ... Date on which the csv file was created 1 26/05/2013 1 27/05/2013 2 27/05/2013 3 26/05/2013 etc. ``` How do I get Access to delete the older record in VBA?
Create a `DMax` expression which returns the most recent `creation_date` for each `ID`. ``` DMax("creation_date", "YourTable", "ID=" & <current ID>) ``` Then use that expression in a `SELECT` query to identify the rows which you want discarded. ``` SELECT y.ID, y.creation_date, DMax("creation_date", "YourTable", "ID=" & y.ID) FROM YourTable AS y; ``` Once you have the `SELECT` query working correctly, use its working `DMax` expression in a `DELETE` query. ``` DELETE FROM YourTable WHERE creation_date < DMax("creation_date", "YourTable", "ID=" & [ID]); ```
If you want to delete all the older records with duplicate IDs at once, you can just execute this SQL. However, it will require that you have a primary or unique field. I added a field called **RecID** that's just an autonumber field. ``` strSQL = "" strSQL = strSQL & "DELETE " & vbCrLf strSQL = strSQL & "FROM t " & vbCrLf strSQL = strSQL & "WHERE t.recid NOT IN (SELECT Last(t.recid) AS LastOfRecID " & vbCrLf strSQL = strSQL & " FROM t " & vbCrLf strSQL = strSQL & " GROUP BY t.id)" ``` Then you just do a `Currentdb.Execute(strSQL)`
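Outside Access, the same keep-the-newest-row idea can be tried with SQLite (table and dates invented to match the question's shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (recid INTEGER PRIMARY KEY, id INT, created TEXT);
INSERT INTO t (id, created) VALUES
  (1, '2013-05-26'), (1, '2013-05-27'), (2, '2013-05-27'), (3, '2013-05-26');
""")
# Delete every row that is older than the newest row with the same id.
conn.execute("""
  DELETE FROM t
  WHERE created < (SELECT MAX(created) FROM t AS m WHERE m.id = t.id)
""")
rows = conn.execute("SELECT id, created FROM t ORDER BY id").fetchall()
print(rows)  # [(1, '2013-05-27'), (2, '2013-05-27'), (3, '2013-05-26')]
```

The correlated subquery plays the role of the `DMax` expression: it finds the most recent `created` value per `id`, and only older duplicates are removed.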
Delete records if an updated record exists
[ "sql", "ms-access", "vba" ]
Is there a built-in function or a very simple way of finding the index of n largest elements in a list or a numpy array? ``` K = [1,2,2,4,5,5,6,10] ``` How do I find the indices of the largest 5 elements? Duplicates should be counted more than once, and the output should be a list of the indices of those largest numbers
Maybe something like: ``` >>> K [4, 5, 1, 6, 2, 5, 2, 10] >>> sorted(range(len(K)), key=lambda x: K[x]) [2, 4, 6, 0, 1, 5, 3, 7] >>> sorted(range(len(K)), key=lambda x: K[x])[-5:] [0, 1, 5, 3, 7] ``` or using `numpy`, you can use `argsort`: ``` >>> np.argsort(K)[-5:] array([0, 1, 5, 3, 7]) ``` `argsort` is also a method: ``` >>> K = np.array(K) >>> K.argsort()[-5:] array([0, 1, 5, 3, 7]) >>> K[K.argsort()[-5:]] array([ 4, 5, 5, 6, 10]) ```
Consider the following code, ``` N=5 K = [1,10,2,4,5,5,6,2] #store list in tmp to retrieve index tmp=list(K) #sort list so that largest elements are on the far right K.sort() #To get the 5 largest elements print K[-N:] #To get the 5th largest element print K[-N] #get index of the 5th largest element print tmp.index(K[-N]) ``` If you wish to ignore duplicates, then use set() as follows, ``` N=5 K = [1,10,2,4,5,5,6,2] #store list in tmp to retrieve index tmp=list(K) #sorted(set(K)) drops duplicates and sorts the result #(a plain set is unordered, so it must be re-sorted) K=sorted(set(K)) #To get the 5 largest elements print K[-N:] #To get the 5th largest element print K[-N] #get index of the 5th largest element print tmp.index(K[-N]) ``` Hopefully one of them covers your question :)
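A further option not shown in the answers above: `heapq.nlargest` finds the n largest (value, index) pairs without fully sorting the list, which helps when the list is large and n is small:

```python
import heapq

K = [1, 2, 2, 4, 5, 5, 6, 10]
# Pair each value with its index, then take the 5 largest pairs;
# duplicates are kept, as the question requires.
idx = [i for v, i in heapq.nlargest(5, ((v, i) for i, v in enumerate(K)))]
print(sorted(idx))  # [3, 4, 5, 6, 7]
```

Here the five largest values 4, 5, 5, 6, 10 live at indices 3 through 7.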
How to find the index of n largest elements in a list or np.array, Python
[ "python", "arrays", "list" ]
I have a python method which accepts **a date input as a string**. How do I add a validation to make sure the date string being passed to the method is in the following format: ``` 'YYYY-MM-DD' ``` if it's not, the method should raise some sort of error
``` >>> import datetime >>> def validate(date_text): try: datetime.date.fromisoformat(date_text) except ValueError: raise ValueError("Incorrect data format, should be YYYY-MM-DD") >>> validate('2003-12-23') >>> validate('2003-12-32') Traceback (most recent call last): File "<pyshell#20>", line 1, in <module> validate('2003-12-32') File "<pyshell#18>", line 5, in validate raise ValueError("Incorrect data format, should be YYYY-MM-DD") ValueError: Incorrect data format, should be YYYY-MM-DD ``` Note that `datetime.date.fromisoformat()` obviously works only when date is in ISO format. If you need to check date in some other format, use `datetime.datetime.strptime()`.
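The `strptime` fallback mentioned at the end can be sketched as a boolean validator (the function name and default format are illustrative):

```python
from datetime import datetime

def is_valid_date(date_text, fmt="%Y-%m-%d"):
    # strptime raises ValueError for anything not matching the format,
    # including impossible dates like a 32nd day.
    try:
        datetime.strptime(date_text, fmt)
        return True
    except ValueError:
        return False

print(is_valid_date("2003-12-23"))  # True
print(is_valid_date("2003-12-32"))  # False
```

Passing a different `fmt` string lets the same helper check any other layout (`"%d/%m/%Y"`, etc.).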
The [Python `dateutil`](http://labix.org/python-dateutil) library is designed for this (and more). It will automatically convert this to a `datetime` object for you and raise a `ValueError` if it can't. As an example: ``` >>> from dateutil.parser import parse >>> parse("2003-09-25") datetime.datetime(2003, 9, 25, 0, 0) ``` This raises a `ValueError` if the date is not formatted correctly: ``` >>> parse("2003-09-251") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/jacinda/envs/dod-backend-dev/lib/python2.7/site-packages/dateutil/parser.py", line 720, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/Users/jacinda/envs/dod-backend-dev/lib/python2.7/site-packages/dateutil/parser.py", line 317, in parse ret = default.replace(**repl) ValueError: day is out of range for month ``` `dateutil` is also extremely useful if you start needing to parse other formats in the future, as it can handle most known formats intelligently and allows you to modify your specification: [`dateutil` parsing examples](http://labix.org/python-dateutil#head-a23e8ae0a661d77b89dfb3476f85b26f0b30349c). It also handles timezones if you need that. **Update based on comments**: `parse` also accepts the keyword argument `dayfirst` which controls whether the day or month is expected to come first if a date is ambiguous. This defaults to False. E.g. ``` >>> parse('11/12/2001') >>> datetime.datetime(2001, 11, 12, 0, 0) # Nov 12 >>> parse('11/12/2001', dayfirst=True) >>> datetime.datetime(2001, 12, 11, 0, 0) # Dec 11 ```
How do I validate a date string format in python?
[ "python", "date" ]
I have a situation like the code below. I want to find the index of the first instance of the object A. What is the fastest way I can do that? I know there are a lot of ways to go through the entire list and find it, but is there a way to stop the search once the first one is found? ``` class A(): def __init__(self): self.a = 0 def print(self): print(self.a) l = [0, 0, A(), 0, A(), 0] print(l.index(type(A))) # this does not work ```
``` class A(): def __init__(self): self.a = 0 def __eq__(self, other): # this overrides the equality check if isinstance(other, A): return self.a == other.a return NotImplemented # fall back to the default comparison for other types def print(self): print(self.a) l = [0, 0, A(), 0, A(), 0] print(l.index(A())) # now this works print(A() in l) a1 = A() a2 = A() a1 == a2 # True a1.a = 2 a1 == a2 # False a2.a = 2 a1 == a2 # True a2.a = 5 a1 < a2 # Error: we would need to overload __lt__ (or __cmp__ in Python 2) for this to work ```
You have to test each object; use a list comprehension and [`enumerate()`](http://docs.python.org/3/library/functions.html#enumerate) to get all matching indices: ``` [i for i, ob in enumerate(l) if isinstance(ob, A)] ``` or to get the *first* index, use `next()` and a generator expression: ``` next((i for i, ob in enumerate(l) if isinstance(ob, A)), None) ``` Demo: ``` >>> [i for i, ob in enumerate(l) if isinstance(ob, A)] [2, 4] >>> next((i for i, ob in enumerate(l) if isinstance(ob, A)), None) 2 ```
Finding instance of Object type in list
[ "python", "list", "search" ]
This must be really basic, but after some searching I haven't found an answer yet. I'm iterating through a list where some columns have values in them and some are empty. I'd like the code to skip the line if the column is empty. This is what I have: ``` for lines in luku: split = lines.split("\t") if "c-sarja" in split and "F" in split[2]: c_nainen = lines.split("\t") if int(c_nainen[8]) >= 50: old_lady = lines print c_nainen[0], ": OLD," " AGE:", c_nainen[8], "years" else: ?? ``` error: ``` ValueError: invalid literal for int() with base 10: '' ```
Calling `int()` on an empty string raises `ValueError`, as you've seen. Just use a `try/except` block: ``` for lines in luku: split = lines.split("\t") if "c-sarja" in split and "F" in split[2]: try: age = int(split[8]) except ValueError: continue # Skip to the next iteration if age >= 50: old_lady = lines print split[0], ": OLD," " AGE:", age, "years" ```
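The skip-on-empty pattern can be exercised with in-memory data (the names and filler fields below are made up; real rows come from the file):

```python
# Three tab-separated rows; the second has an empty age column (index 8).
rows = ["Anna\tc-sarja\tF\t-\t-\t-\t-\t-\t63",
        "Bea\tc-sarja\tF\t-\t-\t-\t-\t-\t",
        "Cia\tc-sarja\tF\t-\t-\t-\t-\t-\t49"]
old = []
for line in rows:
    split = line.split("\t")
    try:
        age = int(split[8])
    except ValueError:   # empty (or non-numeric) column: skip this line
        continue
    if age >= 50:
        old.append(split[0])
print(old)  # ['Anna']
```

Bea's row is skipped by the `continue`, and Cia is under 50, so only Anna qualifies.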
If you are in a for loop and want to skip the current item, just do `continue` and execution will jump to the next item in the for loop. (Alternatively, do nothing. Then execution will flow to the start of the next loop through anyway.)
how to skip an empty column with python
[ "python" ]
I have the MySQL query as below - ``` CREATE TABLE city { city_id INT AUTO_INCREMENT NOT NULL, city_name VARCHAR(50) NOT NULL, state_id int(50) NOT NULL, PRIMARY KEY (city_id), FOREIGN KEY(state_id) REFERENCES state(state_id) ON DELETE CASCADE ON UPDATE CASCADE } ``` What may be the possible error, as I am not able to create the table `'city'`? The structure of the table is very simple. The first table is named state ``` state_id | state_name ``` another table is named city ``` city_id city_name state_id ``` As can easily be seen, I want to keep a reference from the **table city** column `state_id` to the **table state** column `state_id`. I am still very much a beginner, but I think the code I have written should work. Why doesn't it? Any suggestion?
This is because you are using curly braces `{}` instead of round brackets `()` ``` CREATE TABLE city ( city_id INT AUTO_INCREMENT NOT NULL, city_name VARCHAR(50) NOT NULL, state_id int(50) NOT NULL, PRIMARY KEY (city_id), FOREIGN KEY(state_id) REFERENCES state(state_id) ON DELETE CASCADE ON UPDATE CASCADE ) ```
You are using the wrong braces... ``` CREATE TABLE city ( city_id INT AUTO_INCREMENT NOT NULL, city_name VARCHAR(50) NOT NULL, state_id int(50) NOT NULL, PRIMARY KEY (city_id), FOREIGN KEY(state_id) REFERENCES state(state_id) ON DELETE CASCADE ON UPDATE CASCADE ) ```
Error in MySQL query for references one column to another table
[ "mysql", "sql", "create-table" ]
I have a table "news" with 10 rows and cols (uid, id, registered\_users, ....) Now I have users that can log in to my website (every registered user has a user id). A user can subscribe to a news item on my website. In SQL that means: I need to select the table "news" and the row with the uid (from the news) and insert the user id (from the current user) into the column "registered\_users". ``` INSERT INTO news (registered_users) VALUES (user_id) ``` The INSERT statement has NO WHERE clause, so I need the UPDATE clause. ``` UPDATE news SET registered_users=user_id WHERE uid=post_news_uid ``` But if more than one user subscribes to the same news item, the old user id in "registered\_users" is lost.... Is there a way to keep the current values after an SQL UPDATE statement? I use PHP (MySQL). The goal is this: * table "news" row 5 (uid) column "registered\_users" (22,33,45) * --- 3 users have subscribed to the news with the uid 5 * table "news" row 7 (uid) column "registered\_users" (21,39) * --- 2 users have subscribed to the news with the uid 7
You should create an additional table to map users to the news items they have registered for, like: ``` create table user_news (user_id int, news_id int); ``` that looks like ``` ---------------- | News | Users| ---------------- | 5 | 22 | | 5 | 33 | | 5 | 45 | | 7 | 21 | | ... | ... | ---------------- ``` Then you can use multiple queries to first retrieve the news\_id and the user\_id, store them inside variables depending on what language you use, and then insert them into user\_news. The advantage is that finding all users of a news item is much faster, because you don't have to parse every single ID string "(22, 33, 45)"
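The mapping-table idea can be sketched end-to-end with SQLite (the IDs come from the question's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_news (user_id INT, news_id INT);
INSERT INTO user_news VALUES (22, 5), (33, 5), (45, 5), (21, 7), (39, 7);
""")
# All subscribers of news 5 -- no parsing of a "22,33,45" string needed.
subs = [r[0] for r in conn.execute(
    "SELECT user_id FROM user_news WHERE news_id = 5 ORDER BY user_id")]
print(subs)  # [22, 33, 45]
```

Adding a subscriber is a plain INSERT of one row, so nothing previously stored is ever overwritten.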
It sounds like you are asking to insert a new user, to change a row in `news` from: ``` 5 22,33 ``` and then user 45 signs up, and you get: ``` 5 22,33,45 ``` If I don't understand, let me know. The rest of this solution is an excoriation of this approach. This is a bad, bad, bad way to store data. Relational databases are designed around tables that have rows and columns. Lists should be represented as multiple rows in a table, and *not* as string concatenated values. This is all the worse when you have an integer id and the data structure has to convert the integer to a string. The right way is to introduce a table, say `NewsUsers`, such as: ``` create table NewsUsers ( NewsUserId int identity(1, 1) primary key, NewsId int not null, UserId int not null, CreatedAt datetime default getdate(), CreatedBy varchar(255) default suser_sname() ); ``` I showed this syntax using SQL Server. The column `NewsUserId` is an auto-incrementing primary key for this table. The column `NewsId` is the news item (`5` in your first example). The column `UserId` is the user id that signed up. The columns `CreatedAt` and `CreatedBy` are handy columns that I put in almost all my tables. With this structure, you would handle your problem by doing: ``` insert into NewsUsers (NewsId, UserId) values (5, <userid>); ```
SQL - keep values with UPDATE statement
[ "sql", "sql-update", "sql-insert" ]
I have a problem with trying to remove a row of data based on a LIKE match in SQL. I'm working with SQL Server 2005. I have a table (@Containers) which holds information like below:- ``` PMBID DEPT ------ --------- 248057 5,8 249799 5,6,8 249800 5 249941 1,5 249942 1,4,15,16 249943 1,4,5 249945 4 249946 4 249947 4 249966 8,16,23 249967 5,6,8,16 ``` Both columns are string. I need a query to be able to remove rows based on departments; for example, if the PMBID has department 1, 11, 23, or 31 in it, it needs to be removed from the table. So with this logic PMBID's... ``` 249941 249942 249943 249966 ``` would be removed. So how do I do this (well) in SQL, seeing as my column is a string? I've been writing queries such as... ``` DELETE FROM @Containers WHERE (Dept LIKE '%1%' OR Dept LIKE '%11%' OR Dept LIKE '%23%' OR Dept LIKE '%31%') ``` Obviously this statement doesn't work, as it catches departments like 15 and 16; so for example PMBID ``` 249967 ``` would be included when it **shouldn't** be, as it's not explicitly a 1, 11, 23, or 31 (or a combination of these). Is this possible with a string column? Can we use the delimiter somehow to improve the query? I'll stop now at the risk of over-complicating the query; please let me know if you need any further information.
The commas-in-the-list searching problem. Put commas around `DEPT` and around the values in the string: ``` DELETE FROM @Containers WHERE (','+Dept+',' LIKE '%,1,%' OR ','+Dept+',' LIKE '%,11,%' OR ','+Dept+',' LIKE '%,23,%' OR ','+Dept+',' LIKE '%,31,%') ```
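For a quick sanity check, here is the same comma-wrapping trick run against the sample data via Python's built-in `sqlite3` (note that SQLite concatenates strings with `||` rather than `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Containers (PMBID INTEGER, Dept TEXT)")
rows = [(248057, '5,8'), (249799, '5,6,8'), (249800, '5'), (249941, '1,5'),
        (249942, '1,4,15,16'), (249943, '1,4,5'), (249945, '4'), (249946, '4'),
        (249947, '4'), (249966, '8,16,23'), (249967, '5,6,8,16')]
conn.executemany("INSERT INTO Containers VALUES (?, ?)", rows)

# Wrapping both sides in commas makes every department a whole token,
# so '%,1,%' cannot match the 1 inside 15 or 16.
conn.execute("""
    DELETE FROM Containers
    WHERE ',' || Dept || ',' LIKE '%,1,%'
       OR ',' || Dept || ',' LIKE '%,11,%'
       OR ',' || Dept || ',' LIKE '%,23,%'
       OR ',' || Dept || ',' LIKE '%,31,%'
""")
remaining = [r[0] for r in
             conn.execute("SELECT PMBID FROM Containers ORDER BY PMBID")]
print(remaining)
```

Exactly the four PMBIDs from the question (249941, 249942, 249943, 249966) are deleted, and 249967 survives.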
It is bad practice to store data in a comma delimited string, but this will work for department 1 as an example: ``` DELETE FROM @Containers WHERE Dept LIKE '%,1' OR Dept LIKE '1,%' OR Dept LIKE '%,1,%' ``` You have to force commas into the criteria to exclude 1 from catching 11 and so on.
Removing a column of data (string) by using LIKE query, there must be a better way
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "" ]
I wrote the following Python program: ``` #! /usr/bin/python def checkIndex(key): if not isinstance(key, (int, long)): raise TypeError if key<0: raise IndexError class ArithmeticSequence: def __init__(self, start=0, step=1): self.start = start # Store the start value self.step = step # Store the step value self.changed = {} # No items have been modified def __getitem__(self, key): checkIndex(key) try: return self.changed[key] except KeyError: return self.start + key*self.step def __setitem__(self, key, value): checkIndex(key) self.changed[key] = value ``` The program is saved as my.py. When I do ``` chmod +x my.py python my.py ``` I am back at the bash shell. After this step I open a Python shell: ``` user@ubuntu:~/python/$ python Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> s=ArithmeticSequence(1,2) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'ArithmeticSequence' is not defined ``` How do I give input to my program and run it interactively, given that it was saved from vi?
The command that you want to run is: ``` python -i my.py ``` That will parse my.py and define the name `ArithmeticSequence`, and drop you into the Python shell where you can use your objects interactively: ``` >>> s=ArithmeticSequence(1,2) >>> ```
Put your file my.py in PYTHONPATH then ``` from my import ArithmeticSequence s=ArithmeticSequence(1,2) ```
How to use script in interactive shell
[ "", "python", "" ]
Say we have a string, `'123A......'`. I have a collection of other strings that also begin with `'123A......'` where after the A are other letters of the alphabet, for example, `'123AA.....'` and `'123AB.....'` and `'123ABA....'` and so on. I want to make a dictionary where the key would be `'123A......'` and the value would be a list of the matches in the collection. I think doing this with regular expressions would be way better than some hodgepodge programmatic way... So far I was thinking something like ``` baseMatch = re.compile('123A......') baseMatch.findall('123A[A-Z][A-Z][A-Z][A-Z][A-Z][A-Z]') ``` But this looks lame and I have a suspicion that there is a better way.
You should use a quantifier in your regex, like so: ``` baseMatch = re.compile('123A[A-Z]{6}') baseMatch.findall('123AABCDEFxyz123AAABCDExyz') ['123AABCDEF', '123AAABCDE'] ```
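To connect this back to the dictionary the question asks for, here is a runnable sketch; the `codes` list is made-up sample data, and `{5}` pins the suffix to exactly five uppercase letters (other bounds like `{2,6}` or `+` work the same way):

```python
import re

# Hypothetical collection of codes sharing the '123A' prefix.
codes = ['123AABCDE', '123ABCDEF', '123ABAXYZ', '999ZZZZZZ']

# A quantifier replaces the repeated [A-Z][A-Z]... blocks.
pattern = re.compile(r'123A[A-Z]{5}')

# Build the key -> list-of-matches mapping the question describes.
matches = {'123A': [c for c in codes if pattern.fullmatch(c)]}
print(matches)  # {'123A': ['123AABCDE', '123ABCDEF', '123ABAXYZ']}
```

`fullmatch` (Python 3.4+) ensures the whole string conforms, rather than just a prefix of it.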
If you know the prefixes (the keys), just do ``` dic = dict() st = '123AHELLO' val = st.split('123A')[1] dic['123A'] = val ``` your question however is not clear. if you want to extract subcollection of suffixes when prefix is `'123A'` then you can do ``` st_lst = ['123AHELLO','123AHALLO','123BHELLO','123AGREAT'] res = [st.split('123A')[1] for st in st_lst if '123A' in st] ``` gives you ``` >>> res ['HELLO', 'HALLO', 'GREAT'] ```
Python Re, need a list of matches
[ "", "python", "regex", "" ]
Given the following dataframe in pandas: ``` import numpy as np df = pandas.DataFrame({"a": np.random.random(100), "b": np.random.random(100), "id": np.arange(100)}) ``` where `id` is an id for each point consisting of an `a` and `b` value, how can I bin `a` and `b` into a specified set of bins (so that I can then take the median/average value of `a` and `b` in each bin)? `df` might have `NaN` values for `a` or `b` (or both) for any given row in `df`. Here's a better example using Joe Kington's solution with a more realistic `df`. The thing I'm unsure about is how to access the `df.b` elements for each `df.a` group below: ``` a = np.random.random(20) df = pandas.DataFrame({"a": a, "b": a + 10}) # bins for df.a bins = np.linspace(0, 1, 10) # bin df according to a groups = df.groupby(np.digitize(df.a,bins)) # Get the mean of a in each group print groups.mean() ## But how to get the mean of b for each group of a? # ... ```
There may be a more efficient way (I have a feeling `pandas.crosstab` would be useful here), but here's how I'd do it: ``` import numpy as np import pandas df = pandas.DataFrame({"a": np.random.random(100), "b": np.random.random(100), "id": np.arange(100)}) # Bin the data frame by "a" with 10 bins... bins = np.linspace(df.a.min(), df.a.max(), 10) groups = df.groupby(np.digitize(df.a, bins)) # Get the mean of each bin: print groups.mean() # Also could do "groups.aggregate(np.mean)" # Similarly, the median: print groups.median() # Apply some arbitrary function to aggregate binned data print groups.aggregate(lambda x: np.mean(x[x > 0.5])) ``` --- Edit: As the OP was asking specifically for just the means of `b` binned by the values in `a`, just do ``` groups.mean().b ``` Also if you wanted the index to look nicer (e.g. display intervals as the index), as they do in @bdiamante's example, use `pandas.cut` instead of `numpy.digitize`. (Kudos to bidamante. I didn't realize `pandas.cut` existed.) ``` import numpy as np import pandas df = pandas.DataFrame({"a": np.random.random(100), "b": np.random.random(100) + 10}) # Bin the data frame by "a" with 10 bins... bins = np.linspace(df.a.min(), df.a.max(), 10) groups = df.groupby(pandas.cut(df.a, bins)) # Get the mean of b, binned by the values in a print groups.mean().b ``` This results in: ``` a (0.00186, 0.111] 10.421839 (0.111, 0.22] 10.427540 (0.22, 0.33] 10.538932 (0.33, 0.439] 10.445085 (0.439, 0.548] 10.313612 (0.548, 0.658] 10.319387 (0.658, 0.767] 10.367444 (0.767, 0.876] 10.469655 (0.876, 0.986] 10.571008 Name: b ```
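For reference, the digitize-then-aggregate idea also works with only the standard library. A sketch on stand-in random data, using `bisect.bisect_right`, which for increasing bins assigns the same bin index as `np.digitize` with default arguments:

```python
import bisect
import random
import statistics

random.seed(0)
a = [random.random() for _ in range(20)]
b = [x + 10 for x in a]          # mirrors the df.b = df.a + 10 example

bins = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0

# Group the b-values by which bin their paired a-value falls into.
groups = {}
for ai, bi in zip(a, b):
    idx = bisect.bisect_right(bins, ai)  # same index np.digitize would give
    groups.setdefault(idx, []).append(bi)

# Mean of b within each bin of a.
means_of_b = {idx: statistics.mean(vals) for idx, vals in sorted(groups.items())}
print(means_of_b)
```

Since every `b` is its `a` plus 10, each bin's mean lands between 10 and 11, matching the pandas output above.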
Not 100% sure if this is what you're looking for, but here's what I think you're getting at: ``` In [144]: df = DataFrame({"a": np.random.random(100), "b": np.random.random(100), "id": np.arange(100)}) In [145]: bins = [0, .25, .5, .75, 1] In [146]: a_bins = df.a.groupby(cut(df.a,bins)) In [147]: b_bins = df.b.groupby(cut(df.b,bins)) In [148]: a_bins.agg([mean,median]) Out[148]: mean median a (0, 0.25] 0.124173 0.114613 (0.25, 0.5] 0.367703 0.358866 (0.5, 0.75] 0.624251 0.626730 (0.75, 1] 0.875395 0.869843 In [149]: b_bins.agg([mean,median]) Out[149]: mean median b (0, 0.25] 0.147936 0.166900 (0.25, 0.5] 0.394918 0.386729 (0.5, 0.75] 0.636111 0.655247 (0.75, 1] 0.851227 0.838805 ``` Of course, I don't know what bins you had in mind, so you'll have to swap mine out for your circumstance.
binning a dataframe in pandas in Python
[ "", "python", "numpy", "pandas", "" ]
I am currently trying to use python to parse the notes file for my kindle so that I can keep them more organized than the chronologically ordered list that the kindle automatically saves notes in. Unfortunately, I'm having trouble using regex to parse the file. Here's my code so far: ``` import re def parse_file(in_file): read_file = open(in_file, 'r') file_lines = read_file.readlines() read_file.close() raw_note = "".join(file_lines) # Regex parts title_regex = "(.+)" title_author_regex = "(.+) \((.+)\)" loc_norange_regex = "(.+) (Location|on Page) ([0-9]+)" loc_range_regex = "(.+) (Location|on Page) ([0-9]+)-([0-9]+)" date_regex = "([a-zA-Z]+), ([a-zA-Z]+) ([0-9]+), ([0-9]+)" # Date time_regex = "([0-9]+):([0-9]+) (AM|PM)" # Time content_regex = "(.*)" footer_regex = "=+" nl_re = "\r*\n" # No author regex_noauthor_str =\ title_regex + nl_re +\ "- Your " + loc_range_regex + " | Added on " +\ date_regex + ", " + time_regex + nl_re +\ content_regex + nl_re +\ footer_regex regex_noauthor = re.compile(regex_noauthor_str) print regex_noauthor.findall(raw_note) parse_file("testnotes") ``` Here is the contents of "testnotes": ``` Title - Your Highlight Location 3360-3362 | Added on Wednesday, March 21, 2012, 12:16 AM Note content goes here ========== ``` What I want: ``` [('Title', 'Highlight', 'Location', '3360', '3362', 'Wednesday', 'March', '21', '2012', '12', '16', 'AM', ``` But when I run the program, I get: ``` [('Title', 'Highlight', 'Location', '3360', '3362', '', '', '', '', '', '', '', '')] ``` I'm fairly new to regex, but I feel like this should be fairly straightforward.
When you say `" | Added on "`, you need to escape the `|`. Replace that string with `" \| Added on "`
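A runnable illustration of why the escape matters, using a line in the clippings format:

```python
import re

line = "- Your Highlight Location 3360-3362 | Added on Wednesday"

# Unescaped, '|' splits the pattern into two alternatives, so the part
# after it never has to follow the location range -- its group stays None.
bad = re.search(r"Location (\d+)-(\d+) | Added on (\w+)", line)
print(bad.groups())   # ('3360', '3362', None)

# Escaped, '\|' matches the literal pipe character in the text.
good = re.search(r"Location (\d+)-(\d+) \| Added on (\w+)", line)
print(good.groups())  # ('3360', '3362', 'Wednesday')
```

This is exactly why the date/time groups in the question's output came back empty.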
You need to escape the `|` in `"- Your " + loc_range_regex + " | Added on " +\` to: `"- Your " + loc_range_regex + " \| Added on " +\` `|` is the OR operator in a regex.
Using regex to parse kindle "My Clippings.txt" file
[ "", "python", "regex", "" ]
I'm new to python and writing a program to count lines. the file looks like this: ``` 0.86149806 1.8628227 -0.1380086 -1 0.99927421 -1.0007207 0.99927421 0.99926955 -1.0007258 ``` And my code attempt is the following: ``` counterPos = 0 counterNeg = 0 counterTot = 0 counterNeu = 0 with open('test.txt', 'r') as infile: for line in infile: counterTot += 1 for i in line: if i > 0.3: counterPos += 1 elif i < -0.3: counterNeg += 1 else: counterNeu += 1 ``` I'm trying to get it to count all lines lower than -0.3 to `counterNeg`, all lines above 0.3 as `counterPos`, and all lines that have a number between 0.29 and -0.29 to `counterNeu`. It doesn't seem to work though, I know im going wrong with `for i in line` but not sure how.
Your `line` is a string, but you want to parse it as a `float`. Just use `float(line)`. It's also better to strip all whitespace from the beginning and end of your line just in case. So: ``` for line in infile: i = float(line.strip()) # ... count ```
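Putting it together as a runnable sketch, with the question's sample values inlined instead of read from a file:

```python
def count_lines(lines, hi=0.3, lo=-0.3):
    """Classify numeric lines as positive, negative, or neutral."""
    pos = neg = neu = 0
    for line in lines:
        value = float(line.strip())  # parse the string, whitespace stripped
        if value > hi:
            pos += 1
        elif value < lo:
            neg += 1
        else:
            neu += 1
    return pos, neg, neu

sample = ["0.86149806\n", "1.8628227\n", "-0.1380086\n", "-1\n",
          "0.99927421\n", "-1.0007207\n", "0.99927421\n",
          "0.99926955\n", "-1.0007258\n"]
print(count_lines(sample))  # (5, 3, 1)
```

With a real file, pass the open file object straight in: `count_lines(open('test.txt'))`.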
You are using one extra loop. Also the data read from the files comes as str with "\n" as endline char. Use strip() to remove "\n" and then convert the data as float. End code should be like: ``` counterPos = 0 counterNeg = 0 counterTot = 0 counterNeu = 0 with open('temp.txt', 'r') as infile: counterTot += 1 for i in infile: if float(i.strip()) > 0.3: counterPos += 1 elif float(i.strip()) < -0.3: counterNeg += 1 else: counterNeu += 1 ```
Python count lines with numbers less than, or more than
[ "", "python", "" ]
Recently I've been practicing building a website with Flask, and now I've run into a problem. There is a function which handles registration. The code looks like this: ``` def register(): ... some judgment ... if true: sendmail() return redirect(url_for('onepage')) ``` My question is: performing sendmail() takes much time, so the users have to wait a moment, about 4-5 s, to get "onepage". This makes for a bad experience. I know that threading can let these two functions run independently of each other, but I've been programming for a very short time and have no experience with threading. Could anybody provide some ideas or code examples for this problem?
What you want is [`threading`](https://docs.python.org/3.8/library/threading.html) rather than the low-level [`thread`](https://docs.python.org/3.8/library/_thread.html) (which has been renamed to `_thread` in Python 3). For a case this simple, there will be no need of subclassing `threading.Thread`, so you could just replace `sendmail()` by: ``` threading.Thread(target=sendmail).start() ``` after: ``` import threading ```
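A minimal runnable sketch of the idea. The `sendmail` here is a hypothetical stand-in that just sleeps, and the `done` event exists only so this demo can observe that the background work finished; the view function itself returns immediately:

```python
import threading
import time

done = threading.Event()

def sendmail():
    # Stand-in for the real, slow mail call.
    time.sleep(0.1)
    done.set()

def register():
    # Fire the slow work off in the background and return right away.
    threading.Thread(target=sendmail).start()
    return "redirected to onepage"

result = register()   # returns without waiting the ~0.1 s
print(result)
done.wait(timeout=2)  # only the demo waits, to observe completion
print(done.is_set())  # True
```

The caller gets its response immediately while the mail is sent on the side; if `sendmail` can fail, consider a proper task queue so failures are not silently lost.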
There are several ways of implementing threading in Python. One very simple solution for you would be ``` import thread def register(): ... some judgment ... if true: thread.start_new_thread(sendmail,()) return redirect(url_for('onepage')) ``` This will start `sendmail()` asynchronously. However, if `sendmail` fails or returns something, you will need to use something else. There are plenty of tutorials about Threading in python, I found this quite nice <http://www.tutorialspoint.com/python/python_multithreading.htm>
threading programming in python
[ "", "python", "flask", "" ]
I've got a structure of the form: ``` >>> items [([[0, 1], [2, 20]], 'zz', ''), ([[1, 3], [5, 29], [50, 500]], 'a', 'b')] ``` The first item in each tuple is a list of ranges, and I want to make a generator that provides me the ranges in ascending order based on the starting index. Since the range-lists are already sorted by their starting index this operation is simple: it is just a sorted merge. I'm hoping to do it with good computational efficiency, so I'm thinking that one good way to implicitly track the state of my merge is to simply pop the front off of the list of the tuple which has the smallest starting index in its range list. I can use `min()` to obtain `[0, 1]` which is the first one I want, but how do I get the index of it? I have this: ``` [ min (items[i][0]) for i in range(len(items)) ] ``` which gives me the first item in each list, which I can then `min()` over somehow, but it fails once any of the lists becomes empty, and also it's not clear how to get the index to use `pop()` with without looking it back up in the list. To summarize: Want to build generator that returns for me: ``` ([0,1], 'zz', '') ([1,3], 'a', 'b') ([2,20], 'zz', '') ([5,29], 'a', 'b') ([50,500], 'a', 'b') ``` Or even more efficiently, I only need this data: ``` [0, 1, 0, 1, 1] ``` (the indices of the tuples i want to take the front item of)
This works: ``` by_index = ([sub_index, list_index] for list_index, list_item in enumerate(items) for sub_index in list_item[0]) [item[1] for item in sorted(by_index)] ``` Gives: ``` [0, 1, 0, 1, 1] ``` In detail. The generator: ``` by_index = ([sub_index, list_index] for list_index, list_item in enumerate(items) for sub_index in list_item[0]) list(by_index) [[[0, 1], 0], [[2, 20], 0], [[1, 3], 1], [[5, 29], 1], [[50, 500], 1]] ``` So the only thing needed is sorting and getting only the desired index: ``` [item[1] for item in sorted(by_index)] ```
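Since each tuple's range list is already sorted by starting index, the sorted merge the question describes also maps directly onto `heapq.merge` (the `key` parameter requires Python 3.5+); a sketch on the sample data:

```python
import heapq

items = [([[0, 1], [2, 20]], 'zz', ''),
         ([[1, 3], [5, 29], [50, 500]], 'a', 'b')]

# Tag each range with the index of the tuple it came from, then let
# heapq.merge lazily interleave the already-sorted streams.
streams = [[(rng, i) for rng in tup[0]] for i, tup in enumerate(items)]
merged = heapq.merge(*streams, key=lambda pair: pair[0][0])

order = [i for rng, i in merged]
print(order)  # [0, 1, 0, 1, 1]
```

Unlike sorting everything up front, the merge is O(n log k) for k lists and yields results incrementally, which suits the generator the question wants to build.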
``` from operator import itemgetter index, element = max(enumerate(items), key=itemgetter(1)) ``` Return the index of the biggest element in `items` and the element itself.
Finding the index of the value which is the min or max in Python
[ "", "python", "" ]
I make a 2d histogram of some `(x, y)` data and I get an image like this one: ![histogram-2d](https://i.stack.imgur.com/rD8dY.png) I want a way to get the `(x, y)` coordinates of the point(s) that store the maximum values in `H`. For example, in the case of the image above it would be two points with the aprox coordinates: `(1090, 1040)` and `(1110, 1090)`. This is my code: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm from os import getcwd from os.path import join, realpath, dirname # Path to dir where this code exists. mypath = realpath(join(getcwd(), dirname(__file__))) myfile = 'datafile.dat' x, y = np.loadtxt(join(mypath,myfile), usecols=(1, 2), unpack=True) fig = plt.figure() ax = fig.add_subplot(111) xmin, xmax = min(x), max(x) ymin, ymax = min(y), max(y) rang = [[xmin, xmax], [ymin, ymax]] binsxy = [int((xmax - xmin) / 20), int((ymax - ymin) / 20)] H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy) extent = [yedges[0], yedges[-1], xedges[0], xedges[-1]] cp = ax.imshow(H.transpose()[::-1], interpolation='nearest', extent=extent, cmap=cm.jet) fig.colorbar(cp) plt.show() ``` --- **Edit** I've tried the solutions posted by Marek and qarma attempting to obtain the coordinates of the bins rather than the index of them, like so: ``` # Marek's answer x_cent, y_cent = unravel_index(H.argmax(), H.shape) print('Marek') print(x_cent, y_cent) print(xedges[x_cent], yedges[y_cent]) # qarma's answer idx = list(H.flatten()).index(H.max()) x_cent2, y_cent2 = idx / H.shape[1], idx % H.shape[1] local_maxs = np.argwhere(H == H.max()) print('\nqarma') print(x_cent2, y_cent2) print(xedges[x_cent2], yedges[y_cent2]) print(xedges[local_maxs[0,0]], yedges[local_maxs[0,1]], xedges[local_maxs[1,0]], yedges[local_maxs[1,1]]) ``` which results in: ``` Marek (53, 50) (1072.7838144329899, 1005.0837113402063) qarma (53, 50) (1072.7838144329899, 1005.0837113402063) (1072.7838144329899, 1005.0837113402063, 1092.8257731958763, 
1065.3611340206187) ``` So the maximum coordinates are the same which is good! Now I have a small issue because if I zoom in on the 2d plot, I see that the coordinates are a little off-centered for both the global maximum and the local maximum: ![enter image description here](https://i.stack.imgur.com/e26tK.png) Why is this?
Here's how you can find first global maximum ``` idx = list(H.flatten()).index(H.max()) x, y = idx / H.shape[1], idx % H.shape[1] ``` Finding coordinate of all maxima *was* left as exercise to the reader... ``` numpy.argwhere(H == H.max()) ``` **Edit** Your code: ``` H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy) ``` Here `H` contains histogram values and `xedges, yedges` boundaries for histogram bins. Note that size of `edges` arrays is one larger than size of `H` in corresponding dimension. Thus: ``` for x, y in numpy.argwhere(H == H.max()): # center is between x and x+1 print numpy.average(xedges[x:x + 2]), numpy.average(yedges[y:y + 2]) ```
The library [`findpeaks`](https://erdogant.github.io/findpeaks/pages/html/Plots.html#two-dimensional-plots) can be of use. ``` pip install findpeaks ``` I do not see your data but let me try another similar example: ``` from findpeaks import findpeaks ``` [![raw input data](https://i.stack.imgur.com/LkCRQ.png)](https://i.stack.imgur.com/LkCRQ.png) ``` # initialize with default parameters. The "denoise" parameter can be of use in your case fp = findpeaks() # import 2D example dataset img = fp.import_example() # make the fit fp.fit(img) # Make plot fp.plot() ``` [![detected peaks](https://i.stack.imgur.com/QTOtF.png)](https://i.stack.imgur.com/QTOtF.png) The persistence can be of use to determine the impact of the peaks. You see that point 1, 2 and 3 show the strongest peaks, followed by the rest. ``` # Compute persistence fp.plot_persistence() ``` [![persitence plot](https://i.stack.imgur.com/8WCel.png)](https://i.stack.imgur.com/8WCel.png)
Find peak of 2d histogram
[ "", "python", "numpy", "matplotlib", "" ]
I've been struggling with the following. I have 3 tables: players, players\_clothes, and teams\_clothes. Table `players`: ``` id user team_id 1 tom 4 2 robo 5 3 bob 4 ``` So tom and bob are both on the same team. Table `players_clothes`: ``` id clothes_id p_id 1 13 1 2 35 3 3 45 3 ``` Bob has clothing articles 35 and 45; robo has none. Table `teams_clothes`: ``` id clothes_id team_id 1 35 4 2 45 4 3 55 4 4 65 5 ``` This shows which teams have rights to which articles of clothing. The problem: tom is wearing an article of clothing that does not belong to his team... Let's assume this is illegal. I'm having trouble figuring out how to capture all those who are wearing illegal clothes for a particular team. ``` SELECT pc.clothes_id FROM players AS p JOIN players_clothes AS pc ON p.id = pc.p_id AND p.team_id = 4 GROUP BY pc.clothes_id ``` (I group by players\_clothes.clothes\_id because, believe it or not, two players can be assigned the same piece of clothing.) I think this results in the following set: (13, 35, 45). Now I would like to check against the actual set of clothes that team 4 owns: ``` SELECT clothes_id FROM teams_clothes WHERE team_id = 4 ``` and this returns (35, 45, 55). How can I create a query so that it returns (13)? I've tried things like NOT EXISTS IN, but I think the GROUP BY players\_clothes.clothes\_id part gets in the way.
I suggest: ``` select A.* from A join B on B.a_id = A.id where A.team_id = $team_id and not exists ( select 1 from C where C.clothes_id = B.clothes_id and C.team_id = $team_id ) ``` Basically, we find all As who are on the team and, for each A, join to all clothing they wear, then only return the row IF we can't find any indication in table C that the clothing belongs to our team (this covers both "not present in C at all" and "present in C but for the wrong team").
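A runnable check of the `NOT EXISTS` idea against the question's actual tables and data, via Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id INTEGER, user TEXT, team_id INTEGER);
    CREATE TABLE players_clothes (id INTEGER, clothes_id INTEGER, p_id INTEGER);
    CREATE TABLE teams_clothes (id INTEGER, clothes_id INTEGER, team_id INTEGER);
    INSERT INTO players VALUES (1,'tom',4),(2,'robo',5),(3,'bob',4);
    INSERT INTO players_clothes VALUES (1,13,1),(2,35,3),(3,45,3);
    INSERT INTO teams_clothes VALUES (1,35,4),(2,45,4),(3,55,4),(4,65,5);
""")

# Clothes worn by team-4 players that team 4 has no rights to.
illegal = [row[0] for row in conn.execute("""
    SELECT pc.clothes_id
    FROM players p
    JOIN players_clothes pc ON pc.p_id = p.id
    WHERE p.team_id = 4
      AND NOT EXISTS (SELECT 1 FROM teams_clothes tc
                      WHERE tc.clothes_id = pc.clothes_id
                        AND tc.team_id = 4)
""")]
print(illegal)  # [13] -- tom's article 13 is not in team 4's wardrobe
```

No `GROUP BY` is needed for the filtering itself; add `DISTINCT` if two players can wear the same illegal article.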
This should do the trick: ``` SELECT b.a_id, b.clothes_id FROM b INNER JOIN a ON b.a_id = a.id LEFT OUTER JOIN c ON a.team_id = c.team_id WHERE c.clothes_id = NULL ``` The thought is to do an outer join on the combination of tables A/B against table C. And then only look for the cases where c.clothes\_id is NULL, which would represent those cases where there is no relational match on the outer join (i.e. the clothes item is not approved for that user's team).
MySQL query table filtering issue
[ "", "mysql", "sql", "" ]
From the following four records, I want to select the `OwnerId` of the second-latest record: ``` ItemId OwnerId Date 11477 20981 2013-05-13 11477 1 2013-05-21 11477 21086 2013-05-22 #this is the one I'm talking about 11477 3868 2013-05-24 ``` How do I go about it?
This needs `ItemID` to be specified, ``` SELECT * FROM TableName WHERE ItemID = '11477' ORDER BY DATE DESC LIMIT 1,1 ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/898adb/1) However, if you don't want to specify the `ItemID`, and you want to get all second latest record for every `ItemID`, you can use a correlated subquery to generate a sequence number for every `ItemID` based on lastest `DATE`, ``` SELECT ItemId, OwnerID, Date FROM ( SELECT A.ItemId, A.OwnerId, A.Date, ( SELECT COUNT(*) FROM tableName c WHERE c.ItemId = a.ItemId AND c.Date >= a.Date) AS RowNumber FROM TableName a ) x WHERE RowNumber = 2 ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/188cb/1)
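A quick runnable check via Python's built-in `sqlite3`; `LIMIT 1 OFFSET 1` is the explicit spelling of MySQL's `LIMIT 1,1` shorthand (both skip the latest row and take the next one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (ItemId INTEGER, OwnerId INTEGER, Date TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?, ?)", [
    (11477, 20981, '2013-05-13'),
    (11477, 1,     '2013-05-21'),
    (11477, 21086, '2013-05-22'),
    (11477, 3868,  '2013-05-24'),
])

# Sort newest first, skip the latest row, take the next one.
row = conn.execute("""
    SELECT OwnerId FROM records
    WHERE ItemId = 11477
    ORDER BY Date DESC
    LIMIT 1 OFFSET 1
""").fetchone()
print(row[0])  # 21086
```

ISO-formatted date strings sort correctly as text, which is why `ORDER BY Date DESC` works here.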
``` select ownerid from your_table order by date desc limit 1, 1 ```
Retrieve the second latest record with same id
[ "", "mysql", "sql", "" ]
I programmed a Trie as a class in Python. The search and insert functions are clear, but now I've tried to program the Python function `__str__` so that I can print it on the screen, and my function doesn't work! ``` class Trie(object): def __init__(self): self.children = {} self.val = None def __str__(self): s = '' if self.children == {}: return ' | ' for i in self.children: s = s + i + self.children[i].__str__() return s def insert(self, key, val): if not key: self.val = val return elif key[0] not in self.children: self.children[key[0]] = Trie() self.children[key[0]].insert(key[1:], val) ``` Now I create an object of Trie: ``` tr = Trie() tr.insert('hallo', 54) tr.insert('hello', 69) tr.insert('hellas', 99) ``` And when I now print the Trie, the problem occurs that the entries hello and hellas aren't complete: ``` print tr hallo | ellas | o ``` How can I solve that problem?
There are several issues you're running into. The first is that if you have several children at the same level, you'll only be prefixing one of them with the initial part of the string, and just showing the suffix of the others. Another issue is that you're only showing leaf nodes, even though you can have terminal values that are not at a leaf (consider what happens when you use both `"foo"` and `"foobar"` as keys into a Trie). Finally, you're not outputting the values at all. To solve the first issue, I suggest using a recursive generator that does the traversal of the Trie. Separating the traversal from `__str__` makes things easier since the generator can simply `yield` each value we come across, rather than needing to build up a string as we go. The `__str__` method can assemble the final result easily using `str.join`. For the second issue, you should yield the current node's key and value whenever `self.val` is not `None`, rather than only at leaf nodes. As long as you don't have any way to remove values, all leaf nodes will have a value, but we don't actually need any special casing to detect that. And for the final issue, I suggest using string formatting to make a `key:value` pair. (I suppose you can skip this if you really don't need the values.) Here's some code: ``` def traverse(self, prefix=""): if self.val is not None: yield "{}:{}".format(prefix, self.val) for letter, child in self.children.items(): yield from child.traverse(prefix + letter) def __str__(self): return " | ".join(self.traverse()) ``` If you're using a version of Python before 3.3, you'll need to replace the `yield from` statement with an explicit loop to yield the items from the recursive calls: ``` for item in child.traverse(prefix + letter): yield item ``` Example output: ``` >>> t = Trie() >>> t.insert("foo", 5) >>> t.insert("bar", 10) >>> t.insert("foobar", 100) >>> str(t) 'bar:10 | foo:5 | foobar:100' ```
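For completeness, here is the whole class assembled and runnable under Python 3; the `insert` is a slightly condensed version of the question's, using `setdefault` so the recursion always continues:

```python
class Trie:
    """Python 3 assembly of the question's Trie plus the generator traversal."""
    def __init__(self):
        self.children = {}
        self.val = None

    def insert(self, key, val):
        if not key:
            self.val = val
        else:
            # Create the child on demand, then recurse on the rest of the key.
            self.children.setdefault(key[0], Trie()).insert(key[1:], val)

    def traverse(self, prefix=""):
        if self.val is not None:
            yield "{}:{}".format(prefix, self.val)
        for letter, child in self.children.items():
            yield from child.traverse(prefix + letter)

    def __str__(self):
        return " | ".join(self.traverse())

t = Trie()
t.insert("foo", 5)
t.insert("bar", 10)
t.insert("foobar", 100)
print(t)  # all three key:value pairs; order follows dict insertion order
```

Every stored key appears exactly once, including the non-leaf `"foo"` that shares a path with `"foobar"`.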
Why not have **str** actually dump out the data in the format that it is stored: ``` def __str__(self): if self.children == {}: s = str(self.val) else: s = '{' comma = False for i in self.children: if comma: s = s + ',' else: comma = True s = s + "'" + i + "':" + self.children[i].__str__() s = s + '}' return s ``` Which results in: ``` {'h':{'a':{'l':{'l':{'o':54}}},'e':{'l':{'l':{'a':{'s':99},'o':69}}}}} ```
Implementation of a Trie in Python
[ "", "python", "data-structures", "" ]
I have an array below which consists of repeated strings. I want to find and replace those strings, but each time a match is made I want to change the value of the replacement string. Let me demonstrate. This sample array: ``` SampleArray = ['champ', 'king', 'king', 'mak', 'mak', 'mak'] ``` should get changed to: ``` SampleArray = ['champ', 'king1', 'king2', 'mak1', 'mak2', 'mak3'] ``` How can I make this possible? I've been going at it for 3 days now with no luck. Thanks in advance. My failed code: ``` import os, collections, re SampleArray = ['champ', 'king', 'king', 'mak', 'mak', 'mak'] dupes = [x for x, y in collections.Counter(SampleArray).items() if y > 1] length = len(dupes) count = 0 while count < length: j = 0 instances = SampleArray.count(dupes[count]) while j < instances: re.sub(dupes[count], dupes[count] + j, SampleArray, j) j += 1 count += 1 print SampleArray print ''; os.system('pause') ```
I'd use collections.Counter: ``` from collections import Counter numbers = { word: iter([""] if count == 1 else xrange(1, count + 1)) for word, count in Counter(sample).items() } result = [ word + str(next(numbers[word])) for word in sample ] ``` This doesn't require the list to be sorted or grouped in any way. This solution uses [iterators](http://docs.python.org/2/glossary.html#term-iterator) to generate sequential numbers: * first, we calculate how many times each word occurs in the list (`Counter(sample)`). * then we create a dictionary `numbers`, which, for each word, contains its "numbering" iterator `iter(...)`. If the word occurs only once `count==1`, this iterator will return ("yield") an empty string, otherwise it will yield sequential numbers in range from 1 to count `[""] if count == 1 else xrange(1, count + 1)`. * finally, we iterate over the list once again, and, for each word, pick the next value from its own numbering iterator `next(numbers[word])`. Since our iterators return numbers, we have to convert them to strings `str(...)`.
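The same approach, runnable under Python 3, where `range` replaces `xrange`:

```python
from collections import Counter

sample = ['champ', 'king', 'king', 'mak', 'mak', 'mak']

# One numbering iterator per word: a lone "" for unique words,
# 1..count for duplicated ones.
numbers = {
    word: iter([""] if count == 1 else range(1, count + 1))
    for word, count in Counter(sample).items()
}

# Each occurrence pulls the next number from its word's iterator.
result = [word + str(next(numbers[word])) for word in sample]
print(result)  # ['champ', 'king1', 'king2', 'mak1', 'mak2', 'mak3']
```

Because the numbering state lives in the iterators, the input list needs no sorting or grouping.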
`groupby` is a convenient way to group duplicates: ``` >>> from itertools import groupby >>> FinalArray = [] >>> for k, g in groupby(SampleArray): # g is an iterator, so get a list of it for further handling items = list(g) # If only one item, add it unchanged if len(items) == 1: FinalArray.append(k) # Else add index at the end else: FinalArray.extend([j + str(i) for i, j in enumerate(items, 1)]) >>> FinalArray ['champ', 'king1', 'king2', 'mak1', 'mak2', 'mak3'] ```
Find and replace duplicates in Array, but replace each nth instance with a different string
[ "", "python", "" ]
Here is my set up - ``` project/ __init__.py prog.py test/ __init__.py test_prog.py ``` I would like to be able to run my unit tests by calling a command-line option in *prog.py*. This way, when I deploy my project, I can deploy the ability to run the unit tests at any time. ``` python prog.py --unittest ``` What do I need in *prog.py*, or the rest of my project for this to work?
Perhaps [this](http://docs.python.org/2/library/unittest.html#load-tests-protocol) is what you're looking for. Implement a `load_tests` function in `test_prog.py` and use the following code in `prog.py` to load and run the tests: ``` import unittest import test.test_prog suite = unittest.TestLoader().loadTestsFromModule(test.test_prog) unittest.TextTestRunner().run(suite) ```
The Python `unittest` module contains its own [test discovery](https://docs.python.org/3/library/unittest.html#test-discovery) function, which you can run from the command line: ``` $ python -m unittest discover ``` To run this command from within your module, you can use the `subprocess` module: ``` #!/usr/bin/env python import sys import subprocess # ... # the rest of your module's code # ... if __name__ == '__main__': if '--unittest' in sys.argv: subprocess.call([sys.executable, '-m', 'unittest', 'discover']) ``` If your module has other command-line options you probably want to look into [`argparse`](http://docs.python.org/dev/library/argparse.html) for more advanced options.
Run unittest from a Python program via a command-line option
[ "", "python", "unit-testing", "" ]
What is a "pythonic" approach to checking to see if an element is in the dictionary before extracting it and using it in a comparison? For example: Currently, I do ``` if "key1" in list2 and list2["key1"]=="5": print "correct" ``` so that it will short-circuit on the first conditional if it fails. However, this leads to long conditional statements. Is there however a more "pythonic" approach? I'm guessing that wrapping the conditional in a try catch is an even worse idea.
This approach is perfectly pythonic. Another, slightly different would be ``` if list2.get("key1", <-1,0 or any sensible default, e.g. None>) == 5: print "correct" ``` With this approach you employ [`dict.get`](http://docs.python.org/2/library/stdtypes.html#dict.get) method which allows for safe extraction of value from dict (and provides a way to specify default value)
Nope - try/except is absolutely fine: ``` try: if list2['key1'] == '5': # do something except KeyError: # key wasn't found except ValueError: # most likely value wasn't comparable... ```
Key in Dictionary Before Extraction [Pythonic Approach]
[ "", "python", "" ]
Why does this piece of code throw a SyntaxError? ``` >>> def fun1(a="who is you", b="True", x, y): ... print a,b,x,y ... File "<stdin>", line 1 SyntaxError: non-default argument follows default argument ``` While the following piece of code runs without visible errors: ``` >>> def fun1(x, y, a="who is you", b="True"): ... print a,b,x,y ... ```
All required parameters must be placed before any default arguments, simply because they are mandatory whereas default arguments are not. Syntactically, it would be *impossible* for the interpreter to decide which values match which arguments if mixed modes were allowed. A `SyntaxError` is raised if the arguments are not given in the correct order. Let us take a look at keyword arguments, using your function: ``` def fun1(a="who is you", b="True", x, y): ... print a,b,x,y ``` Suppose it were allowed to declare a function as above. Then, with that declaration, we could make the following (regular) positional or keyword argument calls: ``` func1("ok a", "ok b", 1) # Is 1 assigned to x or ? func1(1) # Is 1 assigned to a or ? func1(1, 2) # ? ``` How would you suggest the values be assigned in these calls, and how would default arguments be combined with keyword arguments? ``` >>> def fun1(x, y, a="who is you", b="True"): ... print a,b,x,y ... ``` *Reference: O'Reilly - Core Python.* This function, by contrast, makes use of the default arguments in a syntactically correct way for all of the calls above. Keyword argument calls prove useful for providing out-of-order positional arguments and, coupled with default arguments, can also be used to "skip over" missing arguments as well.
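A runnable demonstration that the error occurs at compile time, before the function is ever called, and that the corrected ordering still lets callers skip the defaults:

```python
# Compiling the bad ordering raises SyntaxError immediately.
bad = "def fun1(a='who is you', b='True', x, y):\n    pass\n"
try:
    compile(bad, "<demo>", "exec")
    raised = False
except SyntaxError:
    raised = True
print(raised)  # True

# Required parameters first: the definition compiles, and the defaults
# remain genuinely optional.
def fun1(x, y, a="who is you", b="True"):
    return (x, y, a, b)

print(fun1(1, 2))             # (1, 2, 'who is you', 'True')
print(fun1(1, 2, b="False"))  # (1, 2, 'who is you', 'False')
```

The exact wording of the error message varies between Python versions, but the ordering rule itself does not.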
``` SyntaxError: non-default argument follows default argument ``` If you were to allow this, the default arguments would be rendered useless because you would never be able to use their default values, since the non-default arguments come **after**. In Python 3 however, you may do the following: ``` def fun1(a="who is you", b="True", *, x, y): pass ``` which makes `x` and `y` keyword only so you can do this: ``` fun1(x=2, y=2) ``` This works because there is no longer any ambiguity. Note you still can't do `fun1(2, 2)` (that would set the default arguments).
Why can't non-default arguments follow default arguments?
[ "", "python", "keyword-argument", "positional-argument", "" ]
I am used to the Python byte literal syntax representing bytes as hex values, such as `b'\x7a'` for the hex value `0x7a`. However, I have run into an issue that I don't understand. I am using the ssl library for the first time and creating random values with `ssl.RAND_bytes(...)`. It returns byte strings with characters that appear to alter the value (say, when doing an `int.from_bytes(...)`). Example strings I have received: ``` b'\x12\x1f)\x8b\xe0\xd7LD' b'\x808\x8a(\x02\xb3S\xc9\xabW_\n\xf3\xbb\x80o' b'\xde\xef\x81%FMB=Ps' b'9\x81\x90\xdc\xfa\x98\xbd\x1f^;' ``` What I don't understand are those non-hex-value characters, such as the `9` preceding the `\x81` or the `MB=Ps`. I would like to understand what these extra values represent and how I can interpret them when debugging my code. I know Python has no issue with these; it is me who is at a loss. Note: I am on Python 3.3 on Windows 7 64-bit.
Byte array notation converts certain values to their textual representation: ``` >>> b'\x12\x41\x42' b'\x12AB' ``` In this case the hex number 41 is the ASCII value for capital letter A.
If the byte can be represented with a printable ASCII character that character is used instead of a hex escape, here is some code for generating a table of the representation for all byte values (columns are decimal, hex, and byte representation): ``` >>> all_bytes = bytes(range(256)) >>> for i in range(256): ... print('{:<8}{:<#8x}{}'.format(i, i, all_bytes[i:i+1])) ... 0 0x0 b'\x00' 1 0x1 b'\x01' 2 0x2 b'\x02' 3 0x3 b'\x03' 4 0x4 b'\x04' 5 0x5 b'\x05' 6 0x6 b'\x06' 7 0x7 b'\x07' 8 0x8 b'\x08' 9 0x9 b'\t' 10 0xa b'\n' 11 0xb b'\x0b' 12 0xc b'\x0c' 13 0xd b'\r' 14 0xe b'\x0e' 15 0xf b'\x0f' 16 0x10 b'\x10' 17 0x11 b'\x11' 18 0x12 b'\x12' 19 0x13 b'\x13' 20 0x14 b'\x14' 21 0x15 b'\x15' 22 0x16 b'\x16' 23 0x17 b'\x17' 24 0x18 b'\x18' 25 0x19 b'\x19' 26 0x1a b'\x1a' 27 0x1b b'\x1b' 28 0x1c b'\x1c' 29 0x1d b'\x1d' 30 0x1e b'\x1e' 31 0x1f b'\x1f' 32 0x20 b' ' 33 0x21 b'!' 34 0x22 b'"' 35 0x23 b'#' 36 0x24 b'$' 37 0x25 b'%' 38 0x26 b'&' 39 0x27 b"'" 40 0x28 b'(' 41 0x29 b')' 42 0x2a b'*' 43 0x2b b'+' 44 0x2c b',' 45 0x2d b'-' 46 0x2e b'.' 47 0x2f b'/' 48 0x30 b'0' 49 0x31 b'1' 50 0x32 b'2' 51 0x33 b'3' 52 0x34 b'4' 53 0x35 b'5' 54 0x36 b'6' 55 0x37 b'7' 56 0x38 b'8' 57 0x39 b'9' 58 0x3a b':' 59 0x3b b';' 60 0x3c b'<' 61 0x3d b'=' 62 0x3e b'>' 63 0x3f b'?' 
64 0x40 b'@' 65 0x41 b'A' 66 0x42 b'B' 67 0x43 b'C' 68 0x44 b'D' 69 0x45 b'E' 70 0x46 b'F' 71 0x47 b'G' 72 0x48 b'H' 73 0x49 b'I' 74 0x4a b'J' 75 0x4b b'K' 76 0x4c b'L' 77 0x4d b'M' 78 0x4e b'N' 79 0x4f b'O' 80 0x50 b'P' 81 0x51 b'Q' 82 0x52 b'R' 83 0x53 b'S' 84 0x54 b'T' 85 0x55 b'U' 86 0x56 b'V' 87 0x57 b'W' 88 0x58 b'X' 89 0x59 b'Y' 90 0x5a b'Z' 91 0x5b b'[' 92 0x5c b'\\' 93 0x5d b']' 94 0x5e b'^' 95 0x5f b'_' 96 0x60 b'`' 97 0x61 b'a' 98 0x62 b'b' 99 0x63 b'c' 100 0x64 b'd' 101 0x65 b'e' 102 0x66 b'f' 103 0x67 b'g' 104 0x68 b'h' 105 0x69 b'i' 106 0x6a b'j' 107 0x6b b'k' 108 0x6c b'l' 109 0x6d b'm' 110 0x6e b'n' 111 0x6f b'o' 112 0x70 b'p' 113 0x71 b'q' 114 0x72 b'r' 115 0x73 b's' 116 0x74 b't' 117 0x75 b'u' 118 0x76 b'v' 119 0x77 b'w' 120 0x78 b'x' 121 0x79 b'y' 122 0x7a b'z' 123 0x7b b'{' 124 0x7c b'|' 125 0x7d b'}' 126 0x7e b'~' 127 0x7f b'\x7f' 128 0x80 b'\x80' 129 0x81 b'\x81' 130 0x82 b'\x82' 131 0x83 b'\x83' 132 0x84 b'\x84' 133 0x85 b'\x85' 134 0x86 b'\x86' 135 0x87 b'\x87' 136 0x88 b'\x88' 137 0x89 b'\x89' 138 0x8a b'\x8a' 139 0x8b b'\x8b' 140 0x8c b'\x8c' 141 0x8d b'\x8d' 142 0x8e b'\x8e' 143 0x8f b'\x8f' 144 0x90 b'\x90' 145 0x91 b'\x91' 146 0x92 b'\x92' 147 0x93 b'\x93' 148 0x94 b'\x94' 149 0x95 b'\x95' 150 0x96 b'\x96' 151 0x97 b'\x97' 152 0x98 b'\x98' 153 0x99 b'\x99' 154 0x9a b'\x9a' 155 0x9b b'\x9b' 156 0x9c b'\x9c' 157 0x9d b'\x9d' 158 0x9e b'\x9e' 159 0x9f b'\x9f' 160 0xa0 b'\xa0' 161 0xa1 b'\xa1' 162 0xa2 b'\xa2' 163 0xa3 b'\xa3' 164 0xa4 b'\xa4' 165 0xa5 b'\xa5' 166 0xa6 b'\xa6' 167 0xa7 b'\xa7' 168 0xa8 b'\xa8' 169 0xa9 b'\xa9' 170 0xaa b'\xaa' 171 0xab b'\xab' 172 0xac b'\xac' 173 0xad b'\xad' 174 0xae b'\xae' 175 0xaf b'\xaf' 176 0xb0 b'\xb0' 177 0xb1 b'\xb1' 178 0xb2 b'\xb2' 179 0xb3 b'\xb3' 180 0xb4 b'\xb4' 181 0xb5 b'\xb5' 182 0xb6 b'\xb6' 183 0xb7 b'\xb7' 184 0xb8 b'\xb8' 185 0xb9 b'\xb9' 186 0xba b'\xba' 187 0xbb b'\xbb' 188 0xbc b'\xbc' 189 0xbd b'\xbd' 190 0xbe b'\xbe' 191 0xbf b'\xbf' 192 0xc0 b'\xc0' 193 0xc1 b'\xc1' 194 0xc2 
b'\xc2' 195 0xc3 b'\xc3' 196 0xc4 b'\xc4' 197 0xc5 b'\xc5' 198 0xc6 b'\xc6' 199 0xc7 b'\xc7' 200 0xc8 b'\xc8' 201 0xc9 b'\xc9' 202 0xca b'\xca' 203 0xcb b'\xcb' 204 0xcc b'\xcc' 205 0xcd b'\xcd' 206 0xce b'\xce' 207 0xcf b'\xcf' 208 0xd0 b'\xd0' 209 0xd1 b'\xd1' 210 0xd2 b'\xd2' 211 0xd3 b'\xd3' 212 0xd4 b'\xd4' 213 0xd5 b'\xd5' 214 0xd6 b'\xd6' 215 0xd7 b'\xd7' 216 0xd8 b'\xd8' 217 0xd9 b'\xd9' 218 0xda b'\xda' 219 0xdb b'\xdb' 220 0xdc b'\xdc' 221 0xdd b'\xdd' 222 0xde b'\xde' 223 0xdf b'\xdf' 224 0xe0 b'\xe0' 225 0xe1 b'\xe1' 226 0xe2 b'\xe2' 227 0xe3 b'\xe3' 228 0xe4 b'\xe4' 229 0xe5 b'\xe5' 230 0xe6 b'\xe6' 231 0xe7 b'\xe7' 232 0xe8 b'\xe8' 233 0xe9 b'\xe9' 234 0xea b'\xea' 235 0xeb b'\xeb' 236 0xec b'\xec' 237 0xed b'\xed' 238 0xee b'\xee' 239 0xef b'\xef' 240 0xf0 b'\xf0' 241 0xf1 b'\xf1' 242 0xf2 b'\xf2' 243 0xf3 b'\xf3' 244 0xf4 b'\xf4' 245 0xf5 b'\xf5' 246 0xf6 b'\xf6' 247 0xf7 b'\xf7' 248 0xf8 b'\xf8' 249 0xf9 b'\xf9' 250 0xfa b'\xfa' 251 0xfb b'\xfb' 252 0xfc b'\xfc' 253 0xfd b'\xfd' 254 0xfe b'\xfe' 255 0xff b'\xff' ```
Python bytes literal has extra characters that aren't hex, but alter the value of the string
[ "", "python", "byte", "python-3.3", "" ]
I have an Excel workbook with a couple of sheets. Each sheet has two columns with `PersonID` and `LegacyID`. We are basically trying to update some records in the database based on `personid`. This is relatively easy to do in TSQL, and I might even be able to get it done pretty quickly in PowerShell, but since I have been trying to learn Python, I thought I would try this in Python. I used the `xlrd` module and I was able to print update statements. Below is my code: ``` import xlrd book = xlrd.open_workbook('D:\Scripts\UpdateID01.xls') sheet = book.sheet_by_index(0) myList = [] for i in range(sheet.nrows): myList.append(sheet.row_values(i)) outFile = open('D:\Scripts\update.txt', 'wb') for i in myList: outFile.write("\nUPDATE PERSON SET LegacyID = " + "'" + str(i[1]) + "'" + " WHERE personid = " + "'" + str(i[0]) + "'") ``` Two problems: when I read the output file, I see the `LegacyID` printed as a float. How do I get rid of the `.0` at the end of each ID? Second problem: Python doesn't print each update statement on a new line in the output text file. How do I format it? **Edit**: Please ignore the format issue. It did print on new lines when I opened the output file in Notepad++. The float issue still remains.
Can you turn the LegacyID into ints? ``` i[1] = int(i[1]) outFile.write("\nUPDATE PERSON SET LegacyID = " + "'" + str(i[1]) + "'" + " WHERE personid = " + "'" + str(i[0]) + "'") ```
try this.. ``` # use 'a' if you want to append in your text file outFile = open(r'D:\Scripts\update.txt', 'a') for i in myList: outFile.write("\nUPDATE PERSON SET LegacyID = '%s' WHERE personid = '%s'" %( int(i[1]), str(i[0]))) ```
Reading data from Excel sheets and building SQL statements, writing to output file in Python
[ "", "python", "excel", "" ]
I am trying to solve a problem of how to find the maximum count of consecutive years in a series of records. In the following example: ``` ID Year 1 1993 1 1994 1 1995 1 1995 1 2001 1 2002 2 1993 2 1995 2 1996 2 1996 2 1998 2 1999 2 2000 2 2001 2 2001 ``` My result set should look like ``` id count 1 3 2 4 ``` I have to write the code in oracle SQL.
This will produce your desired result: ``` select id, ayear, byear, yeardiff from ( select a.id, a.year ayear, b.year byear, (b.year - a.year)+1 yeardiff, dense_rank() over (partition by a.id order by (b.year - a.year) desc) rank from years a join years b on a.id = b.id and b.year > a.year where b.year - a.year = (select count(*)-1 from years a1 where a.id = a1.id and a1.year between a.year and b.year) ) where rank = 1 ``` **EDIT** updated to display start/end years of longest stretch. [SQLFiddle](http://www.sqlfiddle.com/#!4/7b8ca/54/0)
Try: ``` with cte as (select t.id, t.year, d.d, row_number() over (partition by t.id, d.d order by t.year) rn from (select -1 d from dual union all select 1 d from dual) d cross join my_table t where not exists (select null from my_table o where t.id = o.id and t.year = o.year-d.d) ) select s.id, max(e.year-s.year)+1 year_count from cte s join cte e on s.id = e.id and s.rn = e.rn and e.d=1 where s.d=-1 group by s.id ``` SQLFiddle [here](http://sqlfiddle.com/#!4/1b644/10).
Find the maximum consecutive years for each ID's in a table(Oracle SQL)
[ "", "sql", "oracle", "" ]
I am using a SQL statement to fetch records where name begins with some alphabet ``` SELECT * FROM Music WHERE Title LIKE 'A%' ORDER BY Title ``` Can anyone suggest SQL query which will fetch Title beginning with numbers and symbols?
You can use LIKE with character sets: ``` SELECT * FROM Music WHERE Title LIKE '[^A-Za-z]%' ORDER BY Title ``` Sample: ``` declare @music table(id int identity(1,1) not null primary key, title varchar(10)) insert @music(title) values ('test1'), ('9test'), ('0test'), ('#test') SELECT * FROM @Music WHERE Title LIKE '[^A-Za-z]%' ORDER BY Title ``` --- results --- ``` id title 4 #test 3 0test 2 9test ```
use `PATINDEX` ``` SELECT * FROM Music WHERE PATINDEX('[^a-zA-Z]%', Title) = 1 ORDER BY Title ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!3/b5df12/2) * [TSQL Doc: PATINDEX](http://msdn.microsoft.com/en-us/library/ms188395.aspx)
WHERE to check if NVARCHAR record begins with a number or symbol in SQL
[ "", "sql", "sql-server", "" ]
Say I have a list of dictionaries. They mostly have the same keys in each row, but a few don't match and have extra key/value pairs. Is there a fast way to get a set of all the keys in all the rows? Right now I'm using this loop: ``` def get_all_keys(dictlist): keys = set() for row in dictlist: keys = keys.union(row.keys()) return keys ``` It just seems terribly inefficient to do this on a list with hundreds of thousands of rows, but I'm not sure how to do it better. Thanks!
You could try: ``` def all_keys(dictlist): return set().union(*dictlist) ``` Avoids imports, and will make the most of the underlying implementation of `set`. Will also work with anything iterable.
A fun one which works on python3.x1 relies on `reduce` and the fact the `dict.keys()` now returns a set-like object: ``` >>> from functools import reduce >>> dicts = [{1:2},{3:4},{5:6}] >>> reduce(lambda x,y:x | y.keys(),dicts,{}) {1, 3, 5} ``` For what it's worth, ``` >>> reduce(lambda x,y:x | y.keys(),dicts,set()) {1, 3, 5} ``` works too, or, if you want to avoid a `lambda` (and the initializer), you could even do: ``` >>> reduce(operator.or_, (d.keys() for d in dicts)) ``` Very neat. This really shines most when you only have two elements. Then, instead of doing something like `set(a) | set(b)`, you can do `a.keys() | b.keys()` which seems a little nicer to me. --- 1It can be made to work on python2.7 as well. Use `dict.viewkeys` instead of `dict.keys`
Union of all keys from a list of dictionaries
[ "", "python", "list", "dictionary", "" ]
``` Create table #Tbl ( ID int not null, Keyword nvarchar(max) ) Insert into #Tbl Values ('0','Cryptography') Insert into #Tbl Values ('1','Cryptography') Insert into #Tbl Values ('4','Cryptography') Insert into #Tbl Values ('0','SQL') Insert into #Tbl Values ('0','SQL') Insert into #Tbl Values ('3','Cloud Computing') Insert into #Tbl Values ('6','Recursion') Insert into #Tbl Values ('8','Recursion') Insert into #Tbl Values ('0','Universe') Insert into #Tbl Values ('0','Universe') Insert into #Tbl Values ('7','Universe') ``` I need to get the titles which has more than one ID and at least one of the ID is zero. So the expected result will be: ``` Cryptography Universe ``` I tried below query but not able to add "at least one id is zero" condition ``` select Keyword,COUNT(distinct id) from #Tbl group by Keyword having COUNT(distinct id)>1 ``` How can I proceed here ? Thanks for your help.
Assuming your IDs start from 0, the below should work ``` select Keyword,COUNT(distinct id) from #Tbl group by Keyword having COUNT(distinct id)>1 and MIN(id) = 0 ```
There are many ways to do this, one example: ``` SELECT DISTINCT Keyword FROM #Tbl T WHERE EXISTS (SELECT 1 FROM #Tbl WHERE Keyword = T.Keyword AND ID = 0) AND EXISTS (SELECT 1 FROM #Tbl WHERE Keyword = T.Keyword AND ID != 0) ``` [**Here is a sqlfiddle**](http://sqlfiddle.com/#!3/6e099/3) with a demo.
Get records with more than one value and at least one of them is zero
[ "", "sql", "sql-server", "t-sql", "" ]
I am used to creating basic queries, so I am stuck on a rather complex one. ``` Order Number Order Line Package Part Number Size Cost Reviewed 0001 1 1 A1 S 22.5 Yes 0001 1 1 B2 M 33.1 Yes 0001 1 1 C3 L 11.2 Yes 0001 1 2 A1 XL 15.0 Yes 0001 1 3 A2 M 12.0 Yes 0001 2 1 D1 S 42.9 Yes 0002 1 1 B4 L 72.5 No 0002 1 2 A7 XXL 66.7 No 0002 2 1 C1 XL 11.8 Yes 0002 2 1 B1 S 22.3 Yes 0003 1 1 A1 L 55.2 Yes ``` I would like to select Order Number, Order Line, and Package. I have to search by Part Number, Size, Cost, and if it was Reviewed. This table has around 30,000 orders, so there are multiple results (which is what I want). I got the easy part down, and it works correctly. Example: ``` SELECT ORDER Number, ORDER Line, Package FROM TABLE WHERE (Part Number='A1' AND SIZE='S' AND Cost='22.5' AND Reviewed='Yes') GROUP BY ORDER Number, ORDER Line, Package HAVING count(ORDER Number)=1 Order Number Order Line Package 0001 1 1 ``` Here is my challenge. I have to exclude results that have an Order Line where Package <> 1. So the result for my example above will be excluded, because for Order Number 0001 - Order Line 1 contains a Package of 2 and 3. When this exclusion is applied, the only valid results in the table I provided should be... ``` Order Number Order Line Package 0001 2 1 0002 2 1 0003 1 1 ``` Do not worry about null values. Performance is a concern (obviously), as it is a large table. I have looked around, but I have not found any solid solutions for this yet. Your guidance will be appreciated.
This will exclude records with a matching order\_number and order\_line that have a package other than 1: ``` SELECT ORDER_Number, ORDER_Line, Package FROM aTABLE WHERE NOT EXISTS (SELECT 1 FROM atable AS b WHERE atable.order_number = b.order_number AND atable.order_line = b.order_line AND b.Package != 1) GROUP BY ORDER_Number, ORDER_Line, Package ```
This might be a little hackish and not promising good performance, but I've used something like this before. ``` SELECT ORDERNumber, ORDERLine FROM orders GROUP BY ORDERNumber, ORDERLine HAVING SUM(CASE WHEN package = 1 THEN 0 ELSE 1 END)= 0 ORDER BY ordernumber, orderline ``` **[sqlFiddle](http://sqlfiddle.com/#!6/3a350/21/0)** The case when part should be common across [MySql](http://dev.mysql.com/doc/refman/5.0/en/case.html), [SqlServer](http://msdn.microsoft.com/en-us/library/ms181765.aspx), and [Oracle](http://www.techonthenet.com/oracle/functions/case.php).
Exclude SQL query results - single table
[ "", "sql", "" ]
I have to find a way to solve this issue. In a table like the one below, I would like column "C" to increment its value on each row, starting from a constant and adding the value in column "B" to the previous value in the same column "C". Furthermore, this should be grouped by user. For example (starting point Phil: 350, starting point Mark: 100): ``` USER - POINT - INITIALPOINT Phil - 1000 - 1350 Phil - 150 - 1500 Phil - 200 - 1700 Mark - 300 - 400 Mark - 250 - 650 ``` How can I do that?
SQL Server 2008 doesn't support cumulative sums directly using window functions. You can use a correlated subquery for the same effect. So, using the same structure as GBN: ``` DECLARE @t TABLE (ID int IDENTITY(1,1), UserName varchar(100), Point int); INSERT @t (UserName, Point) VALUES ('Phil', 1000), ('Phil', 150), ('Phil', 200), ('Mark', 300), ('Mark', 250); DECLARE @n TABLE (UserName varchar(100), StartPoint int); INSERT @n (UserName, StartPoint) VALUES ('Phil', 350), ('Mark', 100); SELECT T.ID, T.UserName, T.Point, (N.StartPoint + (select SUM(Point) from @t t2 where t2.UserName = t.userName and t2.ID <= t.id) ) FROM @n N JOIN @t T ON N.UserName = T.UserName ORDER BY T.ID; ```
Using windowing. The table declaration is SQL Server but the rest is standard SQL if your RDBMS supports it (SQL Server 2012, PostgreSQL 9.1 etc) ``` DECLARE @t TABLE (ID int IDENTITY(1,1), UserName varchar(100), Point int); INSERT @t (UserName, Point) VALUES ('Phil', 1000), ('Phil', 150), ('Phil', 200), ('Mark', 300), ('Mark', 250); DECLARE @n TABLE (UserName varchar(100), StartPoint int); INSERT @n (UserName, StartPoint) VALUES ('Phil', 350), ('Mark', 100); SELECT T.ID, T.UserName, T.Point, N.StartPoint + SUM(Point) OVER(PARTITION BY T.UserName ORDER BY T.ID ROWS UNBOUNDED PRECEDING) FROM @n N JOIN @t T ON N.UserName = T.UserName ORDER BY T.ID; ``` To do this, you need an order to the table (I used ID) and a better way of doing a starting value (I used a separate table)
Increment value column by previous row in select sql statement
[ "", "sql", "sql-server", "" ]
I am struggling to join multiple tables in MySQL or SQL Server and keep the performance fast. I have this table: table songs ``` songID|songName --------------- 01|diamond 02|goodbye ``` table singersong ``` songID|singerID --------------- 01|15 02|22 ``` table singers ``` singerID|singerName|Sex ------------------------ 15| Rihanna | F 22| Air Supply | M ``` And I want my result like this: ``` songID|songName|singerName|Sex ------------------------------ 01|diamond|Rihanna|F 02|goodbye|Air Supply| M ``` My query is like this: ``` SELECT s.songID, s.songName, sr.singerName, sr.Sex FROM songs s, singersong ss, singer sr WHERE ss.songID = s.songID AND ss.singerID = sr.singerID ORDER BY s.songID ``` And it's performing very slowly. Is there any way to make this query simpler or more efficient? Thanks very much for the help. LL
Specify the conditions to join on. Your current query, depending upon the whims of the optimiser, may be producing a cartesian product between all tables and then filtering the result. Also make sure you have indexes and FKs setup properly. ``` SELECT s.songID, s.songName, sr.singerName, sr.Sex FROM songs s LEFT JOIN singersong ss ON s.songID = ss.songID LEFT JOIN singer sr ON ss.singerID = sr.singerID ORDER BY s.songID ``` Replace `LEFT JOIN` with `INNER JOIN` if you do not want null values returned when there are no matching entries in a related table.
Your syntax is not the modern style, but it should work fine. For performance, make sure you have indexes on all the columns used for the join: `songID` and `singerID`.
How to Do Efficient JOIN on multiple table in MySQL or SQL Server?
[ "", "mysql", "sql", "" ]
this is how my table looks like : ![](https://i.stack.imgur.com/9Jyrp.jpg) Now i want to retrieve Field1 for 'AAA' I tried something like : ``` select Field1 from table where CommaSeparatedList='AAA' ``` But it didn't work. So How can I achieve this ??
``` select field1 from table where ',' + replace(commaseparatedlist, ' ', '') + ',' like '%,AAA,%' ```
I suggest that you try: ``` SELECT Field1 FROM table WHERE CommaSeparatedList LIKE '%AAA%' ```
Comparing value with comma separated values in select statement
[ "", "sql", "" ]
I have a database where `article.cat_id` is linked to `category.id`. I removed a few rows from `category` and now I want to select or remove all rows from `article` that link to an `id` in `category` that doesn't exist. Unfortunately, I don't exactly know which `id`s have been removed. Is there an SQL way to check to return all rows from `article` for which there is no `category.id` equal to `article.cat_id`?
e.g. ``` SELECT * FROM Table_1 where Table_1.ID not in (SELECT Table_2.ID FROM Table_2) ``` The inner select will select all the IDs from Table_2, while the outer select returns only the rows whose ID does not exist in Table_2.
``` DELETE a.* FROM article a LEFT JOIN category c ON a.cat_id = c.id WHERE c.id IS NULL ``` Change `DELETE` to `SELECT` first to make sure it's what you want.
Return all rows from a table for which the cat_id cannot be found in another table
[ "", "mysql", "sql", "database", "" ]
This is surely a duplicate, but say I have a class as follows: ``` class MyObj(object): def __init__(self, *args, **kwargs): self._data = [2, 1, 3] self._more_data = [False, True, False] ``` How can I make it sortable, not against other MyObj objects (which I could do with `__lt__`), but internally? So if I call `sorted(my_obj_instance)` I get a version with data like: ``` self._data = [1,2,3] self._more_data [True, False, False] ``` That is, \_data is sorted numerically, and \_more\_data has been sorted correspondingly.
`sorted()` **never** modifies the original data. To make it work with a custom object you'd have to implement `__iter__` though (returning e.g. `iter(self._data)`) - but that would just give you a sorted version of that object and neither modify your original object nor sort both lists. What you want is a `sort()` method which you'd call as `yourobj.sort()`. This method would then call `.sort()` on the lists with an appropriate `key` argument for the way you want them sorted.
You can't make `sorted` return an instance of your class. `sorted` always returns a list. If you make it [an iterator](https://stackoverflow.com/questions/19151/build-a-basic-python-iterator) you can get `sorted` to return a sorted version of your class's data (as a list). But it's probably simpler to just give your class a method called `sort` that sorts its data.
How to make class data sortable
[ "", "python", "sorting", "" ]
I have been debugging a SQL stored procedure which has to take values (in my code ID and Numb) from table A based on the values (ID) present in the Table C, then square the Numb and store it in Table B, i.e. all the things ID, Numb and Square. I am not able to figure out the problem in the below code ``` DELIMITER $$ CREATE PROCEDURE matlab.squaring BEGIN DECLARE finish BOOLEAN DEFAULT 0; # <- set up initial conditions DECLARE square BIGINT(10); DECLARE ID INT(10); DECLARE Numb INT (10); DECLARE id_cur CURSOR FOR SELECT ID, Numb FROM A WHERE EXISTS ( SELECT ID FROM c); SET @square= @Numb * @Numb INSERT INTO B ( ID , Numb , square ) values ( ID , Numb, square); DECLARE CONTINUE HANDLER FOR NOT FOUND SET finish = TRUE; OPEN id_cur; the_loop : LOOP FETCH id_cur INTO ID; IF finish THEN CLOSE id_cur; LEAVE the_loop; END IF END LOOP the_loop; END$$ ``` When I run the stored procedure the error that pops up is "there seems to be some syntax error in your code, please refer to MYSql guide." Edit: one more question, please: how do I execute this stored procedure?
There are various minor errors; You need a parameter list, even if empty for the procedure; ``` CREATE PROCEDURE matlab.squaring() ``` The continue handler needs to be right below the other declarations; ``` DECLARE id_cur CURSOR FOR SELECT ID, Numb FROM A WHERE EXISTS ( SELECT ID FROM c); DECLARE CONTINUE HANDLER FOR NOT FOUND SET @finish = TRUE; ``` You forgot a semicolon; ``` SET @square= @Numb * @Numb; ``` You forgot @ on the variable usages; ``` ) values ( @ID , @Numb, @square); ``` You forgot a semicolon on END IF ``` END IF; ``` Just as an overview, here's the complete thing updated; ``` CREATE PROCEDURE matlab.squaring() BEGIN DECLARE finish BOOLEAN DEFAULT 0; # <- set up initial conditions DECLARE square BIGINT(10); DECLARE ID INT(10); DECLARE Numb INT (10); DECLARE id_cur CURSOR FOR SELECT ID, Numb FROM A WHERE EXISTS ( SELECT ID FROM c); DECLARE CONTINUE HANDLER FOR NOT FOUND SET @finish = TRUE; SET @square= @Numb * @Numb; INSERT INTO B ( ID , Numb , square ) values ( @ID , @Numb, @square); OPEN id_cur; the_loop : LOOP FETCH id_cur INTO ID; IF finish THEN CLOSE id_cur; LEAVE the_loop; END IF; END LOOP the_loop; END// ```
* You have missed () after PROCEDURE matlab... * And ; after END IF * Also, HANDLER declaration should be before any executable code and after CURSOR declaration * Semicolon after SET @square= @Numb \* @Numb is needed So, query should be like this: ``` DELIMITER $$ CREATE PROCEDURE matlab.squaring () BEGIN DECLARE finish BOOLEAN DEFAULT 0; # <- set up initial conditions DECLARE square BIGINT(10); DECLARE ID INT(10); DECLARE Numb INT (10); DECLARE id_cur CURSOR FOR SELECT ID, Numb FROM A WHERE EXISTS ( SELECT ID FROM c); DECLARE CONTINUE HANDLER FOR NOT FOUND SET finish = TRUE; SET @square= @Numb * @Numb; INSERT INTO B ( ID , Numb , square ) values ( ID , Numb, square); OPEN id_cur; the_loop : LOOP FETCH id_cur INTO ID; IF finish THEN CLOSE id_cur; LEAVE the_loop; END IF; END LOOP the_loop; END$$ ```
Stored procedure not working properly in MySql?
[ "", "mysql", "sql", "stored-procedures", "" ]
How can I change a string into the string form of a list of its character codes (as seen in the question title)? I can do something like the following, but I'm sure there is a simpler way ``` orig = "bla bla" final = "[" for i in orig: final = "%s %d," % (final, ord(i)) final = final[:-1] + "]" ```
You can also do: ``` >>> orig = "bla bla" >>> str(map(ord, orig)) '[98, 108, 97, 32, 98, 108, 97]' >>> ```
You can use [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) and [ord](http://docs.python.org/2/library/functions.html#ord). ``` >>> nums = str([ord(char) for char in "abc"]) >>> nums "[97, 98, 99]" ```
Turn "abc" to "[97, 98 99]"
[ "", "python", "" ]
I have 2 tables, Table1 and Table2, where 2 columns in both tables are the same. Update: the type of Table1.col1 is the same as Table2.col1, and Table1.col2 is the same as Table2.col2. I am trying to fetch the data where table1.col1 is not in table2.col1 and table1.col2 is not in table2.col2, and this is my query. ``` select * from Table1 where Table1.col1 not in (select Table2.col1 from Table2) and Table1.col2 not in (select Table2.col2 from Table2) ``` I would like to know if there is a better way, or is this correct?
This query should do the job, I ran a simple test based on your query and it doesn't produce the desired result ``` SELECT * FROM Table1 t1 LEFT JOIN Table2 t2 ON t1.col1 = t2.col1 AND t1.col2 = t2.col2 WHERE t2.col1 IS NULL AND t2.col2 IS NULL ``` Given this ``` CREATE TABLE Table1 ( colA VarChar(50), col1 Int, col2 Int ) CREATE TABLE Table2 ( colB VarChar(50), col1 Int, col2 Int ) INSERT Table1 VALUES ('A', 1, 1), ('B', 1, 2), ('C', 2, 1) INSERT Table2 VALUES ('X', 1, 1), ('Y', 2, 1), ('Z', 2, 2) ``` If I understood your question, we should get this **B | 1 | 2**
Use a LEFT JOIN: ``` SELECT Table1.* FROM Table1 LEFT JOIN Table2 ON Table1.col1 = Table2.col1 AND Table1.col2 = Table2.col2 WHERE Table2.col1 IS NULL ```
SQL query to fetch the data from two tables with "not in" condition
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "join", "" ]
I have a list of objects of the same type ``` lis = [<obj>, <obj>, <obj>] ``` that I wish to sort naturally by the object attribute `name`. I have tried ``` sortedlist = sorted(lis, key=lambda x: x.name) ``` However this sorts the list as ``` A1 A10 A2 ``` Not in the format that I wanted ``` A1 A2 A10 ``` I have tried modifying the code from [sorting alphanumeric strings](http://arcpy.wordpress.com/2012/05/11/sorting-alphanumeric-strings-in-python), but I can't get it working for a list of objects.
``` sorted(obj, key=lambda x: (x.name[0], int(x.name[1:]))) ```
This way uses groupby, and works for an arbitrary number of swaps between alpha and digits ``` from itertools import groupby def keyfunc(s): return [int(''.join(g)) if k else ''.join(g) for k, g in groupby(s, str.isdigit)] sorted(my_list, key=keyfunc) ``` Demo: ``` >>> my_list =['A1', 'A10', 'A2', 'B0', 'AA11', 'AB10'] >>> sorted(my_list, key=keyfunc) ['A1', 'A2', 'A10', 'AA11', 'AB10', 'B0'] >>> mylist =['foo1', 'foo10', 'foo2', 'foo2bar1', 'foo2bar10', 'foo2bar3'] >>> sorted(mylist, key=keyfunc) ['foo1', 'foo2', 'foo2bar1', 'foo2bar3', 'foo2bar10', 'foo10'] ```
Sort list of objects by attribute alphanumerically
[ "", "python", "sorting", "" ]
I've got django models displaying on a page using a checkbox input that has a label over it like so: ``` {% if recipes_list %} <table> {% for r in recipes_list %} <tr> <td> <section class="ac-container"> <div> <input id="ac-1" type="checkbox" /> <label for="ac-1">{{r.name}}</label> <article class="ac-small"> <ul> {% for i in r.ingredient_list%} <li>{{i.part}}, {{i.amount}}</li> {% endfor %} </ul> </article> </div> </section> </td> </tr> {% endfor %} </table> ``` When I click on the label of each entry in `recipes_list`, it obviously always opens the article of the first one. I've been looking around for solutions for the past couple days on how to give a unique id in the html for each model entry, but I can't find anything that works with this situation. I've tried forms, model forms, various javascript, and php. How can I do this?
You can write a filter which gets the model name ``` from django import template register = template.Library() @register.filter(name='class_name') def class_name(obj): return obj.__class__.__name__ ``` and in the template, wherever you want the id/classname: ``` <article id={{obj|class_name}}> {# further code #} </article> ``` # OR ``` class MyModel(models.Model): #fields def class_name(self): return "%s"%self.__class__.__name__ #returns the model instance name ``` If you want to return the instance name: ``` from django.template.defaultfilters import slugify class MyModel(models.Model): def class_name(self): return "%s"%(slugify(self.name)) #or whatever field has the specific instance name ``` and in the template: ``` {{obj.class_name}} ```
You can use the [forloop.counter](https://docs.djangoproject.com/en/dev/ref/templates/builtins/) to achieve this: ``` {% if recipes_list %} <table> {% for r in recipes_list %} <tr> <td> <section class="ac-container"> <div> <input id="ac-{{forloop.counter}}" type="checkbox" /> <label for="ac-{{forloop.counter}}">{{r.name}}</label> <article id="article-{{forloop.counter}}" class="ac-small"> <ul> {% for i in r.ingredient_list%} <li>{{i.part}}, {{i.amount}}</li> {% endfor %} </ul> </article> </div> </section> </td> </tr> {% endfor %} </table> ``` Hope this helps!
Unique HTML element IDs for each model with Django
[ "", "python", "html", "django", "django-forms", "django-templates", "" ]
Why does ``` select 1 as a, 2 as b union all select 20 as b, 10 as a ``` returns ``` a b 1 2 20 10 ``` instead of ``` a b 1 2 10 20 ``` ? Is there a way to make `union` match column names?
I'm not sure if this solves your problem, but you can use subqueries within the `union` to put the columns in the "right" order: ``` (select a, b from (select 1 as a, 2 as b) t) union all (select a, b from (select 20 as b, 10 as a) t) ``` I realize the question is tagged MySQL, which doesn't support `full outer join`. If it did, you could do the `union all` as: ``` select coalesce(t1.a, t2.a) as a, coalesce(t1.b, t2.b) as b from (select 1 as a, 2 as b) t1 full outer join (select 20 as b, 10 as a) t2 on 0 = 1; ``` You *can* do this in MySQL. This assumes that none of your values are ever NULL: ``` select coalesce(t1.a, t2.a) as a, coalesce(t1.b, t2.b) as b from (select 1 as a, 2 as b union all select NULL, NULL) t1 join (select 20 as b, 10 as a union all select NULL, NULL) t2 on (t1.a is null or t2.a is null) and coalesce(t1.a, t2.a) is not null ```
> Is there a way to make union match column names? Nope, selecting the columns in order is required with `UNION`.
UNION ignores column names?
[ "", "mysql", "sql", "union", "" ]
If I have a list like: ``` l = [1,2,3,4,5] ``` how can I end up with ``` min = 1 max = 5 ``` WITHOUT using `min(l)` and `max(l)`?
The fastest approach I can think of would be to sort the original list and then pick the first and last elements. This avoids looping multiple times, but it does destroy the original structure of your list. This can be solved by simply copying the list and sorting only the copied list. I was curious if this was slower than just using max() and min() with this quick example script: ``` import time l = [1,2,4,5,3] print "Run 1" t1 = time.time() print "Min =", min(l) print "Max =", max(l) print "time =", time.time() - t1 print "" print "l =", l print "" l = [1,2,4,5,3] l1 = list(l) print "Run 2" t1 = time.time() l1.sort() print "Min =", l1[0] print "Max =", l1[-1] print "time =", time.time() - t1 print "" print "l =", l print "l1 =", l1 print "" l = [1,2,4,5,3] print "Run 3" minimum = float('inf') maximum = float('-inf') for item in l: if item < minimum: minimum = item if item > maximum: maximum = item print "Min =", minimum print "Max =", maximum print "time =", time.time() - t1 print "" print "l =", l ``` Surprisingly, the second approach is faster by about 10ms on my computer. Not sure how effective this would be with very large list, but this approach is faster for at least the example list you provided. I added @Martijn Pieters's simple loop algorithm to my timing script. (As timing would be the only important parameter worth exploring in this question.) My results are: ``` Run 1: 0.0199999809265s Run 2: 0.00999999046326s Run 3: 0.0299999713898s ``` --- Edit: Inclusion of timeit module for timing. 
``` import timeit from random import shuffle l = range(10000) shuffle(l) def Run_1(): #print "Min =", min(l) #print "Max =", max(l) return min(l), max(l) def Run_2(): l1 = list(l) l1.sort() #print "Min =", l1[0] #print "Max =", l1[-1] return l1[0], l1[-1] def Run_3(): minimum = float('inf') maximum = float('-inf') for item in l: if item < minimum: minimum = item if item > maximum: maximum = item #print "Min =", minimum #print "Max =", maximum return minimum, maximum if __name__ == '__main__': num_runs = 10000 print "Run 1" run1 = timeit.Timer(Run_1) time_run1 = run1.repeat(3, num_runs) print "" print "Run 2" run2 = timeit.Timer(Run_2) time_run2 = run2.repeat(3,num_runs) print "" print "Run 3" run3 = timeit.Timer(Run_3) time_run3 = run3.repeat(3,num_runs) print "" print "Run 1" for each_time in time_run1: print "time =", each_time print "" print "Run 2" for each_time in time_run2: print "time =", each_time print "" print "Run 3" for each_time in time_run3: print "time =", each_time print "" ``` My results are: ``` Run 1 time = 3.42100585452 time = 3.39309908229 time = 3.47903182233 Run 2 time = 26.5261287922 time = 26.2023346397 time = 26.7324208568 Run 3 time = 3.29800945144 time = 3.25067545773 time = 3.29783778232 ``` sort algorithm is very slow for large arrays.
If you are trying to avoid using two loops, hoping a single loop will be faster, you need to reconsider. Calling two O(N) functions still gives you a O(N) algorithm, all you do is double the constant per-iteration cost. A single Python loop with comparisons can't do better than O(N) either (unless your data is already sorted), and interpreting bytecode for each iteration has a sizeable constant cost too. Which approach has the higher constant cost can only be determined by timing your runs. To do this in a single loop, iterate over the list and test each item against the minimum and maximum found so far. `float('inf')` and `float('-inf')` (infinity and negative infinity) are good starting points to simplify the logic: ``` minimum = float('inf') maximum = float('-inf') for item in l: if item < minimum: minimum = item if item > maximum: maximum = item ``` Alternatively, start with the first element and only loop over the rest. Turn the list into an iterable first, store the first element as the result-to-date, and then loop over the rest: ``` iterl = iter(l) minimum = maximum = next(iterl) for item in iterl: if item < minimum: minimum = item if item > maximum: maximum = item ``` Don't use sorting. Python's Tim Sort implementation is a O(N log N) algorithm, which can be expected to be slower than a straight-up O(N) approach. Timing comparisons with a larger, random list: ``` >>> from random import shuffle >>> l = list(range(1000)) >>> shuffle(l) >>> from timeit import timeit >>> def straight_min_max(l): ... return min(l), max(l) ... >>> def sorted_min_max(l): ... s = sorted(l) ... return s[0], s[-1] ... >>> def looping(l): ... l = iter(l) ... min = max = next(l) ... for i in l: ... if i < min: min = i ... if i > max: max = i ... return min, max ... 
>>> timeit('f(l)', 'from __main__ import straight_min_max as f, l', number=10000) 0.5266690254211426 >>> timeit('f(l)', 'from __main__ import sorted_min_max as f, l', number=10000) 2.162343978881836 >>> timeit('f(l)', 'from __main__ import looping as f, l', number=10000) 1.1799919605255127 ``` So even for lists of 1000 elements, the `min()` and `max()` functions are fastest. Sorting is slowest here. The sorting version can be faster if you allow for *in-place* sorting, but then you'd need to generate a new random list for each timed run as well. Moving to a million items (and only 10 tests per timed run), we see: ``` >>> l = list(range(1000000)) >>> shuffle(l) >>> timeit('f(l)', 'from __main__ import straight_min_max as f, l', number=10) 1.6176080703735352 >>> timeit('f(l)', 'from __main__ import sorted_min_max as f, l', number=10) 6.310506105422974 >>> timeit('f(l)', 'from __main__ import looping as f, l', number=10) 1.7502741813659668 ``` Last but not least, using a million items and `l.sort()` instead of `sorted()`: ``` >>> def sort_min_max(l): ... l.sort() ... return l[0], l[-1] ... >>> timeit('f(l[:])', 'from __main__ import straight_min_max as f, l', number=10) 1.8858389854431152 >>> timeit('f(l[:])', 'from __main__ import sort_min_max as f, l', number=10) 8.408858060836792 >>> timeit('f(l[:])', 'from __main__ import looping as f, l', number=10) 2.003532886505127 ``` Note the `l[:]`; we give each test run a copy of the list. Conclusion: even for large lists, you are better off using the `min()` and `max()` functions anyway, it is hard to beat the low per-iteration cost of a good C loop. But if you have to forgo those functions, the straight loop is the next better option.
How can I get the minimum and the maximum element of a list in python
[ "", "python", "max", "min", "" ]
I have a Python list that looks like this: ``` list = [u'a', u'b', u'c'] ``` Now I want to encode it in UTF-8. Therefore I thought I should use: ``` list = list[0].encode("utf-8") ``` But print list gives only ``` a ``` meaning the first element of the list. Not even a list anymore. What am I doing wrong?
``` >>> items = [u'a', u'b', u'c'] >>> [x.encode('utf-8') for x in items] ['a', 'b', 'c'] ```
`list[0]` is the first element, not a list. You are reassigning your `list` var to a new value, the UTF-8 encoding of the first element. Also, don't name your variables `list`, as it masks the built-in `list()` function.
Encode Python list to UTF-8
[ "", "python", "list", "encode", "" ]
I'm creating a system for keeping track of which type of software, and which version of the software is licensed for each company. Some of the software has version numbers like 1.1.4 or similar with more than one decimal point. I have tried saving it as a varchar for now, since I couldn't find a numerical data type that would take more than one decimal point. And, storing it as a varchar is fine, but I need to be able to search it for >= 1.1.2 or <= 2.5.1 and when I do that the system crashes. Searching for >= 1.2 will yield the result 2.5.1 for instance, but I would like to be able to search that last number too for accuracy. Is there some datatype I'm missing (I'm quite new to this) or is there some other easy way of solving it?
Assuming that the revision and patch values (in `version.revision.patch`) will have a maximum of two digits, you could store them in an `INT UNSIGNED` (or smaller) column as follows: ``` version * 10000 + revision * 100 + patch ``` With this approach, your search becomes `>= 10102 or <= 20501`.
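The same arithmetic is easy to mirror in application code. Here is a sketch in Python (the function name is my own) showing that the encoded integers sort in version order:

```python
def encode_version(version):
    """Map 'version.revision.patch' to version*10000 + revision*100 + patch."""
    major, minor, patch = (int(part) for part in version.split('.'))
    return major * 10000 + minor * 100 + patch

# numeric order now matches version order, so >=, <= and BETWEEN work in SQL
ordered = sorted(['2.5.1', '1.1.4', '1.1.2'], key=encode_version)
```

With this mapping, the search `>= 1.1.2` becomes `encoded_column >= 10102`, exactly as stated above.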
Not natively, no, as multiple decimal points aren't a common data type. Your best bet would be saving it as `varchar` like you have done, and then in your app code, splitting the number and doing the comparison by hand.
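What "doing the comparison by hand" looks like in practice: split each version into a tuple of integers, which then compares correctly element by element. A sketch in Python (assuming the app code is Python; the helper name is my own):

```python
def version_key(version):
    # '1.1.4' -> (1, 1, 4); integer tuples compare lexicographically
    return tuple(int(part) for part in version.split('.'))

# plain string comparison gets this wrong ('1.10' < '1.2' as strings)
newer = version_key('1.10') > version_key('1.2')
```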
Using numbers with multiple decimal points (ex. 1.4.4) in MySql, is it possible?
[ "", "mysql", "sql", "decimal", "" ]
I am using Mac OS X 10.8; previously I used MacPorts, but I switched to brew. ``` Snows-MacBook-Pro:~ Mac$ brew search matplotlib samueljohn/python/matplotlib Snows-MacBook-Pro:~ Mac$ pip search matplotlib matplotlib - Python plotting package ``` So my question is simple: should I use brew or pip for installing matplotlib? Is there any difference, and if so, what is it? My goal is to have pandas, IPython notebook and SimpleCV up and running.
**I recommend using a package manager** (brew, indeed, or MacPorts). Here are a few reasons why: * If you use your package manager (MacPorts, brew,…) to later install additional programs that depend on **Matplotlib**, the package manager **will install** it **regardless**. * **If you install a Python package via pip**, and pip installs it in your package manager tree (MacPorts, brew,…), **the package manager might complain**. For example, MacPorts does not want to erase pip-installed packages, as a precaution, so compilation stops when MacPort detects that someone walked on its turf. The best way of installing Python packages is to first check if they are provided by your package manager, and then only install them with pip if they are not. * **Compilation with pip sometimes fails** where a package manager (MacPorts,…) has no problem: package managers are simply more powerful and general tools (they play nicely with required compiled libraries, for instance). * **I would not recommend using a separate distribution of Matplotlib**, for the same kind of reasons: any program from brew that depends on Matplotlib will install it anyway. Furthermore, if you instead want to install such a program without your package manager, it is generally hard to make it work with a specific distribution of Matplotlib (they might need libraries to be installed of top of it, etc.). In summary, I would recommend to use **one system** for everything (brew, since this is what you chose), and `pip` for things that this system does not provide (just make sure that the pip you use corresponds to where you want things to go: your package manager's Python library, with the right version, or the OS-provided Python,…). Multiplying tools and installs is too messy, in my experience, as various distributions/package managers/etc. are usually not meant to play well with each other.
Since you need to compile many of these packages, this is not the simplest task on a Mac. I would recommend using a distribution like [Anaconda](http://continuum.io/downloads.html). It is free, comes with all the things you need and has a simple installer. It will save you a lot of hassle because all components work together.
Should I use brew or pip for installing matplotlib?
[ "", "python", "macos", "matplotlib", "homebrew", "" ]
I have the following string ``` "1206292WS_R0_ws.shp" ``` I am trying to re.sub everything except what is between the second "\_" and ".shp" Output would be "ws" in this case. I have managed to remove the .shp but for the life of me cannot figure out how to get rid of everything before the "\_" ``` epass = "1206292WS_R0_ws.shp" regex = re.compile(r"(\.shp$)") x = re.sub(regex, "", epass) ``` Outputs ``` 1206292WS_R0_ws ``` Desired output: ``` ws ```
You don't really need a regex for this: ``` print epass.split("_")[-1].split(".")[0] >>> timeit.timeit("epass.split(\"_\")[-1].split(\".\")[0]",setup="from __main__ import epass") 0.57268652953933608 >>> timeit.timeit("regex.findall(epass)",setup="from __main__ import epass,regex") 0.59134766185007948 ``` Speed seems very similar for both, but a tiny bit faster with splits. Actually, **by far the fastest method** is ``` print epass.rsplit("_",1)[-1].split(".")[0] ``` which takes 3 seconds on a string 100k long (on my system) vs 35+ seconds for either of the other methods. If you actually mean the second \_ and not the last \_ then you could do ``` epass.split("_",2)[-1].split(".")[0] ``` although depending on where the second \_ is, a regex may be just as fast or faster.
The regular expression you describe is `^[^_]*_[^_]*_(.*)[.]shp$` ``` >>> import re >>> s="1206292WS_R0_ws.shp" >>> regex=re.compile(r"^[^_]*_[^_]*_(.*)[.]shp$") >>> x=re.sub(regex,r"\1",s) >>> print x ws ``` Note: this is the regular expression as you describe it, not necessarily the best way to solve the actual problem. > everything except what is between the second "\_" and ".shp" Regexplanation: ``` ^ # Start of the string [^_]* # Any string of characters not containing _ _ # Literal [^_]* # Any string of characters not containing _ ( # Start capture group .* # Anything ) # Close capture group [.]shp # Literal .shp $ # End of string ```
Python re finding string between underscore and ext
[ "", "python", "regex", "" ]
I would like to know how I can sort strings by the number inside them. As an example I have: ``` hello = " hola %d" % (number_from_database) bye = "adios %d" % (number_from_database_again) ``` I want to sort them by the number, even if it changes.
You can pass a key to sort: ``` sorted(l, key=lambda x: int(re.sub('\D', '', x))) ``` For example: ``` In [1]: import re In [2]: l = ['asdas2', 'asdas1', 'asds3ssd'] In [3]: sorted(l, key=lambda x: int(re.sub('\D', '', x))) Out[3]: ['asdas1', 'asdas2', 'asds3ssd'] ``` Where `re.sub('\D', '', x)` replaces [everything but the digits](https://stackoverflow.com/questions/1450897/python-removing-characters-except-digits-from-string).
Just a little complement to Andy's answer. If you want to sort a set which also contains strings without any number: ``` sorted(l, key=lambda x: int('0'+re.sub('\D', '', x))) ``` , which would put those strings without any number at the very beginning.
sort string by number inside
[ "", "python", "string", "int", "" ]
I have a Python object which includes some decimals. This is causing json.dumps() to break. I got the following solution from SO (e.g. [Python JSON serialize a Decimal object](https://stackoverflow.com/questions/1960516/python-json-serialize-a-decimal-object)) but the recommended solution still does not work. The Python website has the exact same answer. Any suggestions on how to make this work? Thanks. Below is my code. It looks like the dumps() doesn't even go into the specialized encoder. ``` clayton@mserver:~/python> cat test1.py import json, decimal class DecimalEncoder(json.JSONEncoder): def _iterencode(self, o, markers=None): print "here we go o is a == ", type(o) if isinstance(o, decimal.Decimal): print "woohoo! got a decimal" return (str(o) for o in [o]) return super(DecimalEncoder, self)._iterencode(o, markers) z = json.dumps( {'x': decimal.Decimal('5.5')}, cls=DecimalEncoder ) print z clayton@mserver:~/python> python test1.py Traceback (most recent call last): File "test1.py", line 11, in <module> z = json.dumps( {'x': decimal.Decimal('5.5')}, cls=DecimalEncoder ) File "/home/clayton/python/Python-2.7.3/lib/python2.7/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/home/clayton/python/Python-2.7.3/lib/python2.7/json/encoder.py", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File "/home/clayton/python/Python-2.7.3/lib/python2.7/json/encoder.py", line 264, in iterencode return _iterencode(o, 0) File "/home/clayton/python/Python-2.7.3/lib/python2.7/json/encoder.py", line 178, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: Decimal('5.5') is not JSON serializable clayton@mserver:~/python> ```
It is no longer recommended that you create a subclass; the `json.dump()` and `json.dumps()` functions take a `default` function: ``` def decimal_default(obj): if isinstance(obj, decimal.Decimal): return float(obj) raise TypeError json.dumps({'x': decimal.Decimal('5.5')}, default=decimal_default) ``` Demo: ``` >>> def decimal_default(obj): ... if isinstance(obj, decimal.Decimal): ... return float(obj) ... raise TypeError ... >>> json.dumps({'x': decimal.Decimal('5.5')}, default=decimal_default) '{"x": 5.5}' ``` The code you found only worked on Python 2.6 and overrides a private method that is no longer called in later versions.
I can't believe that no one here talked about using simplejson, which supports deserialization of Decimal out of the box. ``` import simplejson from decimal import Decimal simplejson.dumps({"salary": Decimal("5000000.00")}) '{"salary": 5000000.00}' simplejson.dumps({"salary": Decimal("1.1")+Decimal("2.2")-Decimal("3.3")}) '{"salary": 0.0}' ```
Python to JSON Serialization fails on Decimal
[ "", "python", "json", "decimal", "" ]
I have >100,000 URLs (different domains) in a list that I want to download and save in a database for further processing and tinkering. Would it be wise to use Scrapy instead of Python's multiprocessing / multithreading? If yes, how do I write a standalone script to do the same? Also, feel free to suggest other awesome approaches that come to mind.
Scrapy does not seem relevant here if you already know exactly which URLs to fetch (there is no crawling involved). The easiest way that comes to mind would be to use [`Requests`](http://docs.python-requests.org/en/latest/). However, querying each URL in sequence and blocking while waiting for answers wouldn't be efficient, so you could consider [`GRequests`](https://github.com/kennethreitz/grequests) to send batches of requests asynchronously.
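If adding GRequests is not an option, the same "many requests in flight at once" idea can be sketched with the standard library's thread pool. The `fetch` function below is a stand-in for a real `requests.get` call, so the example stays runnable offline:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for requests.get(url, timeout=10).text in real code
    return 'content of %s' % url

def download_all(urls, max_workers=20):
    # the pool keeps up to max_workers requests in flight at a time
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

pages = download_all(['http://a.example/', 'http://b.example/'])
```

Threads work fine for this despite the GIL, because the time is spent waiting on the network, not executing Python bytecode.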
Most site owners will try to block your crawler if you suddenly create a high load. So even if you have a fixed list of links, you still need to handle timeouts, HTTP response codes, proxies, etc., in Scrapy or [grab](http://docs.grablib.org/).
What is the best way to download <very large> number of pages from a list of urls?
[ "", "python", "multithreading", "multiprocessing", "scrapy", "web-crawler", "" ]
I want to use the LIKE operator to match possible values in a column. If the value begins with "CU" followed by a digit (e.g. "3") followed by anything else, I would like to return it. There only seems to be a wildcard for any single character using underscore, however I need to make sure it is a digit and not a-z. I have tried these to no avail: ``` select name from table1 where name like 'CU[0-9]%' select name from table1 where name like 'CU#%' ``` Preferably this could be case sensitive i.e. if cu or Cu or cU then this would not be a match.
You need to use regexp: ``` select name from table1 where name regexp binary '^CU[0-9]' ``` The documentation for `regexp` is [here](http://dev.mysql.com/doc/refman/5.5/en/regexp.html). EDIT: `binary` is required to ensure case-sensitive matching
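The pattern is easy to sanity-check outside the database as well. A quick sketch in Python (the sample names are made up), where matching is case-sensitive by default, mirroring the `binary` modifier:

```python
import re

pattern = re.compile(r'^CU[0-9]')  # 'CU' followed by a digit, then anything
names = ['CU3-widget', 'cu5-widget', 'CUx', 'Cu9', 'CU42']
matches = [name for name in names if pattern.match(name)]
# keeps 'CU3-widget' and 'CU42'; rejects lowercase and non-digit variants
```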
The [`like` operator](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html) only have the `%` and `_` wildcards in MySQL, but you can use a regular expression with the [`rlike` operator](http://dev.mysql.com/doc/refman/5.0/en/regexp.html): ``` select name from table1 where name rlike '^CU[0-9]' ```
wildcard for single digit mysql
[ "", "mysql", "sql", "" ]
I am currently in some trouble regarding Python and reading files. I have to open a file in a while loop and do some stuff with the values of the file. The results are written into a new file. This new file is then read in the next run of the while loop. But in this second run I get no values out of this file... Here is a code snippet that hopefully clarifies what I mean. ``` while convergence == 0: run += 1 prevrun = run-1 if os.path.isfile("./Output/temp/EmissionMat%d.txt" %prevrun) == True: matfile = open("./Output/temp/EmissionMat%d.txt" %prevrun, "r") EmissionMat = Aux_Functions.EmissionMat(matfile) matfile.close() else: matfile = open("./Input/EmissionMat.txt", "r") EmissionMat = Aux_Functions.EmissionMat(matfile) matfile.close() # now some valid operations, which produce a matrix emissionmat_file = open("./output/temp/EmissionMat%d.txt" %run, "w") emissionmat_file.flush() emissionmat_file.write(str(matrix)) emissionmat_file.close() ``` --- Solved it! ``` matfile.seek(0) ``` This resets the pointer to the beginning of the file and allows me to read the file in the next run correctly. ---
Why write to a file and then read it back? Moreover, you use flush, so you are doing potentially long I/O. I would do ``` with open(originalpath) as f: mat = f.read() while condition : run += 1 write_mat_run(mat, run) mat = func(mat) ``` `write_mat_run` may be done in another thread. You should check for I/O exceptions. BTW, this will probably solve your bug, or at least make it clear.
I can see nothing wrong with your code. The following concrete example worked on my Linux machine: ``` import os run = 0 while run < 10: run += 1 prevrun = run-1 if os.path.isfile("output%d.txt" %prevrun): matfile = open("output%d.txt" %prevrun, "r") data = matfile.readlines() matfile.close() else: matfile = open("input.txt", "r") data = matfile.readlines() matfile.close() data = [ s[:-1] + "!\n" for s in data ] emissionmat_file = open("output%d.txt" %run, "w") emissionmat_file.writelines(data) emissionmat_file.close() ``` It adds an exclamation mark to each line in the file `input.txt`.
Python read file in while loop
[ "", "python", "file-io", "while-loop", "" ]
So I have my Django app running and I just added South. I performed some migrations which worked fine locally, but I am seeing some database errors on my Heroku version. I'd like to view the current schema for my database both locally and on Heroku so I can compare and see exactly what is different. Is there an easy way to do this from the command line, or a better way to debug this?
From the command line you should be able to do `heroku pg:psql` to connect directly via PSQL to your database and from in there `\dt` will show you your tables and `\d <tablename>` will show you your table schema.
Locally, Django provides a [management command](https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell) that will launch you into your database's shell: `python manage.py dbshell`
How to View My Postgres DB Schema from Command Line
[ "", "python", "django", "postgresql", "heroku", "django-south", "" ]
I'm trying to get started with the Evernote SDK. I'm using Ubuntu 13.04, and I installed the SDK via: ``` pip install evernote ``` but when I want to test it using: ``` python -c 'from evernote.api.client import EvernoteClient' ``` I got this: ``` Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named api.client ``` What is the problem? EDIT: `pip install evernote` works fine, I guess; it gives me this: ``` Requirement already satisfied (use --upgrade to upgrade): evernote in /usr/local/lib/python2.7/dist-packages/evernote-1.24.0-py2.7.egg Requirement already satisfied (use --upgrade to upgrade): oauth2 in /usr/lib/python2.7/dist-packages (from evernote) Requirement already satisfied (use --upgrade to upgrade): httplib2 in /usr/lib/python2.7/dist-packages (from oauth2->evernote) Cleaning up... ``` here is the tutorial: <http://dev.evernote.com/start/guides/python.php>
This is pretty old already, but I bet more people will hit it, so I'll put the answer here. It seems to be a surprisingly common issue that doesn't have an answer anywhere. Note how the error complains about api.client but not evernote. `Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named api.client` Most likely, the problem is that OP has a script in his path called evernote.py, which I guess is a common name people use to name their first evernote script. Rename the script to something less obvious and that should do the trick.
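The shadowing mechanism is easy to reproduce with a throwaway module name (everything below is hypothetical; `mypkg` stands in for `evernote`):

```python
import os
import sys
import tempfile

# create a directory containing a script that shadows a package name
scratch = tempfile.mkdtemp()
with open(os.path.join(scratch, 'mypkg.py'), 'w') as f:
    f.write('value = "local script, not the installed package"\n')

# a script's own directory sits first on sys.path, so the local file
# wins the import over anything installed in site-packages
sys.path.insert(0, scratch)
import mypkg
```

Importing `mypkg.sub.something` would now fail with the same kind of ImportError, because the local single-file module has no submodules.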
Could you check the version of Evernote SDK for Python with: ``` pip freeze ``` If `import evernote` works but `from evernote.api.client import EvernoteClient` doesn't, you might happen to use 1.23.0 version or older since EvernoteClient class was introduced in 1.23.1. Also please check your site-packages directory to make sure any older version is not loaded.
unable to import evernote.api.client (evernte sdk)
[ "", "python", "sdk", "evernote", "" ]
I'm trying to calculate pi with arbitrary precision on Python using one of Ramanujan's formulas: <http://en.wikipedia.org/wiki/Approximations_of_%CF%80#20th_century>. It basically requires lots of factorials and high-precision floating numbers division. I'm using multiple threads to divide infinite series calculation by giving each thread all the members that have a certain modulus when divided by the number of threads. So if you have 3 threads, the sum should be divided like this Thread 1 ---> 1, 4, 7... members Thread 2 ---->2, 5, 8... Thread 3 ---->3, 6, 9... Here's my code so far: ``` from decimal import * from math import sqrt, ceil from time import clock from threading import * import argparse memoizedFactorials = [] memoizedFactorials.append( 1 ) memoizedFactorials.append( 1 ) class Accumulator: def __init__( self ): self._sum = Decimal( 0 ) def accumulate( self, decimal ): self._sum += decimal def sum( self ): return self._sum def factorial( k ): if k < 2: return 1 elif len(memoizedFactorials) <= k: product = memoizedFactorials[ - 1 ] #last element for i in range ( len(memoizedFactorials), k+1 ): product *= i; memoizedFactorials.append(product) return memoizedFactorials[ k ] class Worker(Thread): def __init__( self, startIndex, step, precision, accumulator ): Thread.__init__( self, name = ("Thread - {0}".format( startIndex ) ) ) self._startIndex = startIndex self._step = step self._precision = precision self._accumulator = accumulator def run( self ): sum = Decimal( 0 ) result = Decimal( 1 ) zero = Decimal( 0 ) delta = Decimal(1)/( Decimal(10)**self._precision + 1 ) #print "Delta - {0}".format( delta ) i = self._startIndex while( result - zero > delta ): numerator = Decimal(factorial(4 * i)*(1103 + 26390 * i)) denominator = Decimal((factorial(i)**4)*(396**(4*i))) result = numerator / denominator print "Thread - {2} --- Iteration - {0:3} --->{1:3}".format( i, result, self._startIndex ) sum += result i += self._step self._accumulator.accumulate( sum ) print 
def main( args ): numberOfDigits = args.numberOfDigits; getcontext().prec = numberOfDigits + 8 zero = Decimal(1) / Decimal( 10**( numberOfDigits + 1 ) ) start = clock() accumulator = Accumulator() threadsCount = args.numberOfThreads; threadPool = [] for i in range(0, threadsCount ): worker = Worker( i, threadsCount, numberOfDigits, accumulator ) worker.start() threadPool.append( worker ) for worker in threadPool: worker.join() sum = accumulator.sum(); rootOfTwo = Decimal(2).sqrt() result = Decimal( 9801 ) / ( Decimal( 2 ) * rootOfTwo * sum ) end = clock(); delta = end - start; print result; print ("Took it {0} second to finish".format( delta ) ) #testing the results #realPiFile = open("pi.txt"); #myPi = str(result) #realPi = realPiFile.read( len(myPi) - 1 ) #if ( myPi[:-1] != realPi ): # print "Answer not correct!" # print "My pi - {0}".format(myPi) # print "Real pi - {0}".format(realPi) if __name__ == '__main__': parser = argparse.ArgumentParser(description = 'Calculate Pi with arbitrary precision') parser.add_argument('-p', dest = 'numberOfDigits', default=20, type = int, help ='Number of digits in pi ') parser.add_argument('-t', '--tasks', dest = 'numberOfThreads', default=1, type = int, help ='Number of tasks( threads )') parser.add_argument('-o', dest = 'outputFileName', type = str, help ='Connect to VCS testing servers') parser.add_argument('-q', '--quet', dest = 'quetMode' , action='store_true', help ='Run in quet mode') args = parser.parse_args() print args main(args) a = raw_input("Press any key to continue...") ``` What concerns me is that I have very little or no acceleration when using multiple threads. For example, for 1000 digits of pi: 1 Thread --> 0.68 seconds 2 Threads --> 0.74 seconds 4 Threads --> 0.75 seconds 10 threads --> 0.96 seconds Do you have any ideas on how to decrease the time? I see in the task manager that when using four threads both of my cores run at 100%. However, the time seems to be the same.
PS: It's a homework assignment, so I can't use another formula. PPS: I'm using Python 2.7. Thanks :)
Threads do not speed things up because of the GIL (Global Interpret Lock). Use [`multiprocessing`](http://docs.python.org/2/library/multiprocessing.html) for this kind of task. Its usage is very similar to threads.
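A minimal sketch of the multiprocessing version of the question's work-splitting scheme (a toy series stands in for the Ramanujan terms; names are my own):

```python
from multiprocessing import Pool

def partial_sum(args):
    # each worker sums every `step`-th term of a toy series, mirroring how
    # the question hands out series members by index modulo the worker count
    start, step, n_terms = args
    return sum(1.0 / (i + 1) for i in range(start, n_terms, step))

def parallel_sum(n_terms, n_procs):
    # separate interpreter processes, so every core really runs Python code
    with Pool(n_procs) as pool:
        parts = pool.map(partial_sum,
                         [(i, n_procs, n_terms) for i in range(n_procs)])
    return sum(parts)

if __name__ == '__main__':
    print(parallel_sum(1000000, 4))
```

Note that per-process setup (like `getcontext().prec` for `Decimal`) may need to be repeated inside each worker, since worker processes don't share the parent's thread-local context.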
Python has a GIL([Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock)) that prevents more than one thread to execute python code at the same time, i.e. you cannot obtain a speed up on CPU-bound tasks using multiple threads. You must use multiple processes.
Calculation of Pi with multiple threads( no acceleration - what to do )
[ "", "python", "multithreading", "pi", "" ]
In Python, how do I split on either a space or a hyphen? Input: ``` You think we did this un-thinkingly? ``` Desired output: ``` ["You", "think", "we", "did", "this", "un", "thinkingly"] ``` I can get as far as ``` mystr.split(' ') ``` But I don't know how to split on hyphens as well as spaces [and the Python definition of split only seems to specify a string](http://docs.python.org/2/library/stdtypes.html). Do I need to use a regex?
If your pattern is simple enough for one (or maybe two) `replace`, use it: ``` mystr.replace('-', ' ').split(' ') ``` Otherwise, use RE as suggested by [@jamylak](https://stackoverflow.com/a/16926888/2289509).
``` >>> import re >>> text = "You think we did this un-thinkingly?" >>> re.split(r'\s|-', text) ['You', 'think', 'we', 'did', 'this', 'un', 'thinkingly?'] ``` As @larsmans noted, to split by multiple spaces/hyphens (emulating `.split()` with no arguments) used `[...]` for readability: ``` >>> re.split(r'[\s-]+', text) ['You', 'think', 'we', 'did', 'this', 'un', 'thinkingly?'] ``` Without regex (regex is the most straightforward option in this case): ``` >>> [y for x in text.split() for y in x.split('-')] ['You', 'think', 'we', 'did', 'this', 'un', 'thinkingly?'] ``` Actually the answer by [@Elazar](https://stackoverflow.com/a/16926952/1219006) without regex is quite straightforward as well (I would still vouch for regex though)
Split on either a space or a hyphen?
[ "", "python", "string", "formatting", "" ]
I am implementing a small command line tool in python that needs to ask the user a couple of questions. I use ``` raw_input('Are you male or female?') ``` all the time. Now I would like to be able to deal with dumb users (or those too lazy to read/remember the documentation), so I need to check whether the answer makes sense. ``` gender = '' while gender not in ['male', 'female']: gender = raw_input('Are you male or female?') ``` I am wondering whether there exists something like argparse that would automate this problem, something like ``` import inputparse gender = inputparse.get_input(prompt='Are you male or female?', type=str, possible_input=['male', 'female']) ``` and would take care of automatic checking etc.?
Necromancing again... Please have a look at [click](https://click.pocoo.org) if you need a simple helper library for questions. Its main focus is command line options, but it’s a good fit for your use case, I think. EDIT 3 years later: I’m using [prompt toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) nowadays.
This question is quite old, but I'm researching it today. The library [pyinputplus](https://pypi.org/project/PyInputPlus/) is recommended by Al Sweigart in *[Automate the Boring Stuff With Python](https://automatetheboringstuff.com/2e/chapter8/)*
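If pulling in a dependency feels like overkill, the validation loop from the question wraps up into a small reusable helper. A sketch (the injectable `input_fn` is my addition, purely so the loop can be exercised without a terminal; on Python 2 the default would be `raw_input`):

```python
def get_choice(prompt, choices, input_fn=input):
    # keep asking until the answer is one of the allowed choices
    answer = input_fn(prompt)
    while answer not in choices:
        answer = input_fn('Please answer one of %s: ' % ', '.join(choices))
    return answer

# simulate a user who answers wrongly once, then correctly
canned = iter(['M', 'male'])
gender = get_choice('Are you male or female? ', ['male', 'female'],
                    input_fn=lambda prompt: next(canned))
```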
python library for user input
[ "", "python", "user-input", "" ]
I am performing multiple iterations of the type: ``` masterSet=masterSet.union(setA) ``` As the set grows the length of time taken to perform these operations is growing (as one would expect, I guess). I expect that the time is taken up checking whether each element of setA is already in masterSet? My question is: if I KNOW that masterSet does not already contain any of the elements in setA, can I do this quicker? [UPDATE] Given that this question is still attracting views I thought I would clear up a few of the things from the comments and answers below: When iterating through, there were many iterations where I **knew** `setA` would be distinct from `masterSet` because of how it was constructed (without having to process any checks) but in a few iterations I needed the uniqueness check. I wondered if there was a way to 'tell' the `masterSet.union()` procedure not to bother with the uniqueness check this time around, as I know this one is distinct from `masterSet`: just add these elements quickly, trusting the programmer's assertion that they were definitely distinct. Perhaps through calling some different "`.unionWithDistinctSet()`" procedure or something. I think the responses have suggested that this isn't possible (and that really set operations should be quick enough anyway) but to use `masterSet.update(setA)` instead of union as it's slightly quicker still. I have accepted the clearest response along those lines, resolved the issue I was having at the time and got on with my life, but would still love to hear if my hypothesised `.unionWithDistinctSet()` could ever exist?
You can use `set.update` to update your master set in place. This saves allocating a new set all the time so it should be a little faster than `set.union`... ``` >>> s = set(range(3)) >>> s.update(range(4)) >>> s set([0, 1, 2, 3]) ``` --- Of course, if you're doing this in a loop: ``` masterSet = set() for setA in iterable: masterSet = masterSet.union(setA) ``` You might get a performance boost by doing something like: ``` masterSet = set().union(*iterable) ``` --- Ultimately, membership testing of a set is O(1) (in the average case), so testing if the element is already contained in the set isn't really a big performance hit.
As mgilson points out, you can use `update` to update a set in-place from another set. That actually works out slightly quicker: ``` def union(): i = set(range(10000)) j = set(range(5000, 15000)) return i.union(j) def update(): i = set(range(10000)) j = set(range(5000, 15000)) i.update(j) return i timeit.Timer(union).timeit(10000) # 10.351907968521118 timeit.Timer(update).timeit(10000) # 8.83384895324707 ```
Quick way to extend a set if we know elements are unique
[ "", "python", "set", "union", "" ]
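A small runnable sketch of the accepted advice above (function names and sample data are invented for illustration): building the master set with `update()` mutates it in place, while `union()` allocates a new set on every pass, yet both give the same result.

```python
def merge_with_union(chunks):
    # Allocates a brand-new set on every iteration.
    master = set()
    for chunk in chunks:
        master = master.union(chunk)
    return master

def merge_with_update(chunks):
    # Mutates the master set in place; no per-iteration reallocation.
    master = set()
    for chunk in chunks:
        master.update(chunk)
    return master

chunks = [range(0, 5), range(3, 8), range(6, 10)]
assert merge_with_union(chunks) == merge_with_update(chunks) == set(range(10))
print("both approaches agree")
```

Either way, membership testing stays O(1) on average, which is why the "skip the check" variant was never needed in practice.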
I have a table ItemCategory with a category listing, and these categories are assigned to items. It is like the following:

```
+---------+-------------+
| item_id | category_id |
+---------+-------------+
|       1 |           1 |
|       2 |           1 |
|       3 |           0 |
|       3 |           2 |
|       3 |           8 |
|       4 |           0 |
|       5 |           0 |
+---------+-------------+
```

Now I need to get the items which don't have any category value. In this case that is items 4 and 5, which only have category zero. But not item 3, as it is assigned at least one real category. I am actually joining these with other tables called Networks and Items, so I use a query something like this:

```
SELECT Network.networkname,Items.item_id,ItemCategory.catname
FROM Network
JOIN Items ON Items.networkid=network.networkid
JOIN ItemCategories ON ItemCategory.item_id=Item.item_id
```
Try this one: ``` SELECT * FROM Table1 WHERE item_id IN ( SELECT item_id FROM Table1 GROUP BY item_id HAVING MAX(category_id) = 0 ) ``` Result: ``` ╔═════════╦═════════════╗ ║ ITEM_ID ║ CATEGORY_ID ║ ╠═════════╬═════════════╣ ║ 4 ║ 0 ║ ║ 5 ║ 0 ║ ╚═════════╩═════════════╝ ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/07ca0/2) You can use `DISTINCT` keyword if you don't want duplicate rows in the result: ``` SELECT DISTINCT * FROM Table1 WHERE item_id IN ( SELECT item_id FROM Table1 GROUP BY item_id HAVING MAX(category_id) = 0 ); ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/bdbe4/3) for more details.
``` select * from a group by item having count(distinct category)=1 and category=0; ```
Select value which don't have atleast one association
[ "", "mysql", "sql", "" ]
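A quick way to check the accepted `HAVING MAX(category_id) = 0` idea on the question's own data, sketched here with Python's built-in `sqlite3` standing in for MySQL (the SQL involved is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ItemCategory (item_id INTEGER, category_id INTEGER);
    INSERT INTO ItemCategory VALUES (1,1),(2,1),(3,0),(3,2),(3,8),(4,0),(5,0);
""")
# An item qualifies only when its highest category_id is 0, i.e. it never
# received a real category.
rows = conn.execute("""
    SELECT item_id FROM ItemCategory
    GROUP BY item_id
    HAVING MAX(category_id) = 0
    ORDER BY item_id
""").fetchall()
print(rows)  # [(4,), (5,)]
```

Item 3 is excluded because its maximum category_id is 8, even though it also carries a 0 row.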
We have this Date/Time format stored in a varchar column for each row in our table: 2013-05-26 20:22:07.2894. How can we use this column in a T-SQL WHERE clause to retrieve the last hour's worth of rows? We have tried this and it works: WHERE Time_Stamp > '2013-05-26 18:00:00:0000', however we would like the T-SQL to work automatically rather than having the date/time hard coded.
Here is a sargable approach (meaning it will use the index): ``` where Time_Stamp > convert(varchar(255), getdate() - 1.0/24, 121) ```
To convert your `VARCHAR` to a `DATETIME`, just use; ``` CONVERT(DATETIME, SUBSTRING(myDate, 1, 23), 121); ``` ...where myDate is your column name. Once it's converted to a datetime, a comparison with `GETDATE()` is simple. [A very simple SQLfiddle](http://sqlfiddle.com/#!6/63fad/3).
Compare Date/Time in SQL Server T-SQL
[ "", "sql", "sql-server", "t-sql", "" ]
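The reason the varchar comparison in the question works at all is that the `YYYY-MM-DD hh:mm:ss` layout sorts lexicographically in chronological order, which is also what the answers' `CONVERT(..., 121)` trick relies on. A small Python sketch of the same idea (sample timestamps invented for illustration):

```python
from datetime import datetime, timedelta

rows = [
    "2013-05-26 18:30:00.0000",
    "2013-05-26 19:45:00.0000",
    "2013-05-26 20:10:00.0000",
]
now = datetime(2013, 5, 26, 20, 22, 7)
# Format the cutoff the same way the column is stored; plain string
# comparison then matches chronological comparison.
cutoff = (now - timedelta(hours=1)).strftime("%Y-%m-%d %H:%M:%S.%f")[:-2]
recent = [r for r in rows if r > cutoff]
print(recent)  # the 19:45 and 20:10 rows survive
```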
We are using jinja2 to create our html but, because of the many loops and other things we do in jinja to produce the html, the html 'looks' ugly....(note: this is just for aesthetics). Is there anything we can do to clean up the html? (Other than the obvious of cleaning up our jinja2 code, which would make our template somewhat unreadable to us staffers) Something like beautiful soup's prettify? (Yes, I realize this question is a pretty nit-picky question...the ocd in me says to clean it up). for instance: ``` <table> <tbody> <tr> <td> a column </td> <td> a value </td> </tr> </tbody> </table> ``` Pretty ugly, eeh?
Add '-' to the tags: ``` {%- if 'this'=='this' -%} {{ blah }} {%- endif -%} ```
It looks like someone out there created a library to do just what need. See [this library](https://github.com/cobrateam/django-htmlmin#using-the-html_minify-function) which I found attached to [this question](https://stackoverflow.com/questions/13587531/minify-html-output-from-flask-application-with-jinja2-templates) (whom you should upvote).
Is there a way to clean up the html that jinja2 produces?
[ "", "python", "jinja2", "" ]
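Besides the `{%- ... -%}` whitespace-control markers in the accepted answer, the leftover indentation can also be cleaned up after rendering. A crude stdlib-only sketch (a real project might reach for jinja2's `trim_blocks`/`lstrip_blocks` options or an HTML minifier instead):

```python
ugly = """<table>
        <tbody>

                <tr>
                <td>
                    a column
                </td>
                </tr>
        </tbody>
</table>"""

# Drop blank lines and strip per-line indentation; the markup itself is
# left untouched.
tidy = "\n".join(line.strip() for line in ugly.splitlines() if line.strip())
print(tidy)
```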
Is this now possible using Google Drive API or should I just send a multiple requests to accomplish this task? By the way I'm using Python 2.7
As far as I know, you'll have to send multiple requests to delete multiple files: <https://developers.google.com/drive/v2/reference/files/delete>. As you can see, the "delete" method receives a "fileId".
You can batch your multiple deletes into a single HTTP request.
How to do a multiple folder removal in Google Drive API?
[ "", "python", "google-drive-api", "" ]
This question is one of style. Since attributes are not private in Python, is it good and common practice to address them directly outside the class, using something like `my_obj.attr`? Or, maybe, is it still better to do it via member functions like `get_attr()`, as is the usual practice in C++?
Sounds like you want to use properties: ``` class Foo(object): def __init__(self, m): self._m = m @property def m(self): print 'accessed attribute' return self._m @m.setter def m(self, m): print 'setting attribute' self._m = m >>> f = Foo(m=5) >>> f.m = 6 setting attribute >>> f.m accessed attribute 6 ``` This is the most adaptive way to access variables, for simple use cases you do not need to do this, and in Python you may always change your design to use properties later at no cost.
Create an instance of the class first: ``` myinstance = MyClass() myinstance.attr ``` Otherwise you'd get an error like `AttributeError: class MyClass has no attribute 'attr'` when trying to do `MyClass.attr`
Style of addressing attributes in python
[ "", "python", "coding-style", "" ]
Is there a way to slice a 2d array in numpy into smaller 2d arrays? **Example** ``` [[1,2,3,4], -> [[1,2] [3,4] [5,6,7,8]] [5,6] [7,8]] ``` So I basically want to cut down a 2x4 array into 2 2x2 arrays. Looking for a generic solution to be used on images.
There was [another question](http://stackoverflow.com/a/13990648/190597) a couple of months ago which clued me in to the idea of using `reshape` and `swapaxes`. The `h//nrows` makes sense since this keeps the first block's rows together. It also makes sense that you'll need `nrows` and `ncols` to be part of the shape. `-1` tells reshape to fill in whatever number is necessary to make the reshape valid. Armed with the form of the solution, I just tried things until I found the formula that works. You should be able to break your array into "blocks" using some combination of `reshape` and `swapaxes`: ``` def blockshaped(arr, nrows, ncols): """ Return an array of shape (n, nrows, ncols) where n * nrows * ncols = arr.size If arr is a 2D array, the returned array should look like n subblocks with each subblock preserving the "physical" layout of arr. """ h, w = arr.shape assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}" assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}" return (arr.reshape(h//nrows, nrows, -1, ncols) .swapaxes(1,2) .reshape(-1, nrows, ncols)) ``` turns `c` ``` np.random.seed(365) c = np.arange(24).reshape((4, 6)) print(c) [out]: [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23]] ``` into ``` print(blockshaped(c, 2, 3)) [out]: [[[ 0 1 2] [ 6 7 8]] [[ 3 4 5] [ 9 10 11]] [[12 13 14] [18 19 20]] [[15 16 17] [21 22 23]]] ``` --- I've posted an [inverse function, `unblockshaped`, here](https://stackoverflow.com/a/16873755/190597), and an N-dimensional generalization [here](https://stackoverflow.com/a/42298440/190597). The generalization gives a little more insight into the reasoning behind this algorithm. --- Note that there is also [superbatfish's `blockwise_view`](https://stackoverflow.com/a/28207538/190597). It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension.
It seems to me that this is a task for [`numpy.split`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html) or some variant. e.g. ``` a = np.arange(30).reshape([5,6]) #a.shape = (5,6) a1 = np.split(a,3,axis=1) #'a1' is a list of 3 arrays of shape (5,2) a2 = np.split(a, [2,4]) #'a2' is a list of three arrays of shape (2,5), (2,5), (1,5) ``` If you have a NxN image you can create, e.g., a list of 2 NxN/2 subimages, and then divide them along the other axis. [`numpy.hsplit`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.hsplit.html#numpy.hsplit) and [`numpy.vsplit`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vsplit.html#numpy.vsplit) are also available.
Slice 2d array into smaller 2d arrays
[ "", "python", "numpy", "" ]
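For readers without numpy handy, the same blocking idea as the accepted answer can be sketched in pure Python on nested lists (the `blocks` function name is made up here):

```python
def blocks(grid, nrows, ncols):
    # Walk the grid in nrows x ncols tiles, left to right, top to bottom.
    h, w = len(grid), len(grid[0])
    assert h % nrows == 0 and w % ncols == 0, "grid must tile evenly"
    return [
        [row[c:c + ncols] for row in grid[r:r + nrows]]
        for r in range(0, h, nrows)
        for c in range(0, w, ncols)
    ]

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(blocks(grid, 2, 2))  # [[[1, 2], [5, 6]], [[3, 4], [7, 8]]]
```

The numpy `reshape`/`swapaxes` version does the same traversal without copying, which matters for images.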
I have a list that looks like this: ``` relationShipArray = [] relationShipArray.append([340859419124453377, 340853571828469762]) relationShipArray.append([340859419124453377, 340854579195432961]) relationShipArray.append([340770796777660416, 340824159120654336]) relationShipArray.append([340509588065513473, 340764841658703872]) relationShipArray.append([340478540048916480, 340671891540934656]) relationShipArray.append([340853571828469762, 340854579195432961]) relationShipArray.append([340842710057492480, 340825411573399553]) relationShipArray.append([340825411573399553, 340770796777660416]) relationShipArray.append([340825411573399553, 340824159120654336]) relationShipArray.append([340824159120654336, 340770796777660416]) relationShipArray.append([340804620295221249, 340825411573399553]) relationShipArray.append([340684236191313923, 340663388122279937]) relationShipArray.append([340663388122279937, 340684236191313923]) relationShipArray.append([340859507280318464, 340859419124453377]) relationShipArray.append([340859507280318464, 340853571828469762]) relationShipArray.append([340859507280318464, 340854579195432961]) relationShipArray.append([340854599697178624, 340845885439229952]) relationShipArray.append([340836561937641472, 340851694759972864]) relationShipArray.append([340854579195432961, 340853571828469762]) relationShipArray.append([340844519832580096, 340854599697178624]) relationShipArray.append([340814054610305024, 340748443670683648]) relationShipArray.append([340851694759972864, 340836561937641472]) relationShipArray.append([340748443670683648, 340814054610305024]) relationShipArray.append([340739498356912128, 340825992832638977]) ``` As you can see there are cases that are duplicated. e.g. ``` [340853571828469762, 340854579195432961] ``` is the same as (but inverted) ``` [340854579195432961, 340853571828469762] ``` What is the best way (with some efficiency but can live without it if need be) to remove the duplicates from this list? 
So in this case I would **keep** `[340853571828469762, 340854579195432961]`, but **remove** the `[340854579195432961, 340853571828469762]`.
Use an OrderedDict if you need to keep the order:

```
from collections import OrderedDict

>>> L = [[1, 2], [4, 5], [1,2], [2, 1]]
>>> [[x, y] for x, y in OrderedDict.fromkeys(frozenset(x) for x in L)]
[[1, 2], [4, 5]]
```

**EDIT 1**

If the order is not important you can get away with a set:

```
>>> [[x, y] for x, y in set(frozenset(x) for x in L)]
[[1, 2], [4, 5]]
```

**EDIT 2**

A more generic solution that works for lists of varying length, not only with two elements:

```
[list(entry) for entry in set(frozenset(x) for x in L)]

[list(entry) for entry in OrderedDict.fromkeys(frozenset(x) for x in L)]
```
One liner solution ``` relationShipArray = [] relationShipArray.append([340859419124453377, 340853571828469762]) relationShipArray.append([340859419124453377, 340854579195432961]) relationShipArray.append([340770796777660416, 340824159120654336]) relationShipArray.append([340509588065513473, 340764841658703872]) relationShipArray.append([340478540048916480, 340671891540934656]) relationShipArray.append([340853571828469762, 340854579195432961]) relationShipArray.append([340842710057492480, 340825411573399553]) relationShipArray.append([340825411573399553, 340770796777660416]) relationShipArray.append([340825411573399553, 340824159120654336]) relationShipArray.append([340824159120654336, 340770796777660416]) relationShipArray.append([340804620295221249, 340825411573399553]) relationShipArray.append([340684236191313923, 340663388122279937]) relationShipArray.append([340663388122279937, 340684236191313923]) relationShipArray.append([340859507280318464, 340859419124453377]) relationShipArray.append([340859507280318464, 340853571828469762]) relationShipArray.append([340859507280318464, 340854579195432961]) relationShipArray.append([340854599697178624, 340845885439229952]) relationShipArray.append([340836561937641472, 340851694759972864]) relationShipArray.append([340854579195432961, 340853571828469762]) relationShipArray.append([340844519832580096, 340854599697178624]) relationShipArray.append([340814054610305024, 340748443670683648]) relationShipArray.append([340851694759972864, 340836561937641472]) relationShipArray.append([340748443670683648, 340814054610305024]) relationShipArray.append([340739498356912128, 340825992832638977]) ``` make an array with all lists in `relationShipArray` and their reversed peer. use then `np.unique`. ``` import numpy as np Y = list(np.unique(np.array(relationShipArray + [X[::-1] for X in relationShipArray]))) ```
Python remove duplicate cases that are in inverted matrix
[ "", "python", "deduplication", "" ]
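The frozenset trick from the accepted answer, shown on a tiny made-up sample: each pair is treated as unordered, and only its first occurrence survives (pairs are sorted on output so the result is deterministic):

```python
from collections import OrderedDict

pairs = [[1, 2], [4, 5], [1, 2], [2, 1]]
# frozenset makes [1, 2] and [2, 1] hash identically; OrderedDict.fromkeys
# keeps the first occurrence of each key and preserves insertion order.
unique = [sorted(p) for p in OrderedDict.fromkeys(map(frozenset, pairs))]
print(unique)  # [[1, 2], [4, 5]]
```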
I have a plain text file that contains user login data: ``` dtrapani HCPD-EPD-3687 Mon 05/13/2013 9:47:01.72 dlibby HCPD-COS-4611 Mon 05/13/2013 9:49:34.55 lmurdoch HCPD-SDDEB-3736 Mon 05/13/2013 9:50:38.48 lpatrick HCPD-WIN7-015 Mon 05/13/2013 9:57:44.57 mlay HCPD-WAR-3744 Mon 05/13/2013 10:00:07.94 eyoung HCPD-NLCC-0645 Mon 05/13/2013 10:03:01.83 ``` I'm trying to print the data in left- and right-aligned columns: ``` dtrapani HCPD-EPD-3687 Mon 05/13/2013 9:47:01.72 dlibby HCPD-COS-4611 Mon 05/13/2013 9:49:34.55 lmurdoch HCPD-SDDEB-3736 Mon 05/13/2013 9:50:38.48 lpatrick HCPD-WIN7-015 Mon 05/13/2013 9:57:44.57 mlay HCPD-WAR-3744 Mon 05/13/2013 10:00:07.94 eyoung HCPD-NLCC-0645 Mon 05/13/2013 10:03:01.83 ``` How can I do this? This is the code I have so far: ``` with open(r'C:\path\to\logons.txt', 'r') as f: for line in f: data = line.strip() print(data) ```
I would go for the new(er) print formatter with this one (assuming your fields are consistent). The print/format statement is pretty easy to use and can be found [here](http://docs.python.org/2/library/string.html#formatstrings). Since your data can be seen as a list, you can do a single call to format and supplying the correct formatter data you'll get your output. This has a bit more fine grained control than ljust or rjust but has the downside that you need to know that your data coming in is consistent. ``` with open(r'C:\path\to\logons.txt', 'r') as f: for line in f: data = line.split() # Splits on whitespace print '{0[0]:<15}{0[1]:<15}{0[2]:<5}{0[3]:<15}{0[4]:>15}'.format(data) ```
`str.ljust(width, [fillchar=" "])` (<http://docs.python.org/2/library/stdtypes.html#str.ljust>) seems like what you're after. Left justify each field when printing to the maximum length + a little bit. For the last field to match your example, you'll want to right justify it instead using rjust.
Align columns in a text file
[ "", "python", "python-3.x", "string-interpolation", "" ]
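A compact sketch combining both answers above: split each log line, then pad every field to a fixed width (the widths here are chosen by eye for this sample, not taken from the original post):

```python
lines = [
    "dtrapani HCPD-EPD-3687 Mon 05/13/2013 9:47:01.72",
    "mlay HCPD-WAR-3744 Mon 05/13/2013 10:00:07.94",
]
# '<' left-aligns, '>' right-aligns; the numbers are column widths.
rows = ["{:<10}{:<17}{:<4}{:<11}{:>12}".format(*line.split()) for line in lines]
for row in rows:
    print(row)
```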
I have a quick question about VB.NET. I'm currently using VS 2012 Express, and Microsoft Access as my database. My question is, how do I put my queries in a module instead of putting the queries in the form itself? Let's say I have this function in frmStock :

frmStock :

```
Public Sub FillCategory(ByVal Key As String, ByVal Txt As String, ByVal N As TreeNode)
    Dim TD As TreeNode

    If N Is Nothing Then
        TD = tvCategory.Nodes.Add(Key, Txt)
    Else
        TD = N.Nodes.Add(Key, Txt)
    End If

    Dim cmd As New OleDbCommand("SELECT * FROM Category WHERE Category_Parent = ?", conn)
    cmd.Parameters.AddWithValue("Category_Parent", Key)
    Dim dr = cmd.ExecuteReader
    Do While dr.Read()
        FillCategory(dr("Category_ID"), dr("Category_Name"), TD)
    Loop
    dr.Close()
    cmd.Dispose()
End Sub
```

I want to move

```
Dim cmd As New OleDbCommand("SELECT * FROM Category WHERE Category_Parent = ?", conn)
```

into a module instead. I tried doing :

```
Module queryCollection
    Public getCategoryParent As New OleDbCommand( _
        "SELECT * FROM Category WHERE Category_Parent = ?", conn)
End Module
```

And modified my FillCategory function to :

```
Public Sub FillCategory(ByVal Key As String, ByVal Txt As String, ByVal N As TreeNode)
    Dim TD As TreeNode

    If N Is Nothing Then
        TD = tvCategory.Nodes.Add(Key, Txt)
    Else
        TD = N.Nodes.Add(Key, Txt)
    End If

    getCategoryParent.Parameters.AddWithValue("Category_Parent", Key)
    Dim dr = getCategoryParent.ExecuteReader
    Do While dr.Read()
        FillCategory(dr("Category_ID"), dr("Category_Name"), TD)
    Loop
    dr.Close()
    getCategoryParent.Dispose()
End Sub
```

UPDATE : Some screenshots of my problem :

My database : <http://puu.sh/39CMV.PNG>

BEFORE moving out my query into a module : <http://puu.sh/39D0M.PNG>

AFTER moving out my query into a module : puu.sh/39CVV.PNG

It seems like it only executes once(?)

Why is this, and what is the proper way of declaring my queries in a module? I actually have tons of queries, and I want to store them in one place for more organized coding.
In your modified `FillCategory` code you called the following but never saved the result: ``` getCategoryParent.Parameters.AddWithValue("Category_Parent", Key) ``` That created an `OleDbCommand` object but then it is immediately discarded. You then call: ``` Dim dr = getCategoryParent.ExecuteReader ``` Which creates a second `OleDbCommand` object but without a parameter value. Modify your code like: ``` Dim cmd = getCategoryParent.Parameters.AddWithValue("Category_Parent", Key) Dim dr = cmd.ExecuteReader ``` EDIT: The following will address both problems. ``` Dim cmd = getCategoryParent cmd.Parameters.AddWithValue("Category_Parent", Key) Dim dr = cmd.ExecuteReader ``` Previously the `cmd` variable was being `Infer`ed by VB to be a `OleDbParameter` object, which does not have the `ExecuteReader` method. It was inferred this way because of the way it was called with the `.Parameters.AddWithValue("Category_Parent", Key)` code done inline. With this change `cmd` is correctly set to a `OleDbCommand` variable, and now you can use the `AddParameter` method as many times as needed. EDIT 2: `getCategoryParent` is currently defined as property on the module. That means that the `getCategoryParent` method is executed once when the module is first instantiated, not once per loop. Change that method to be a Function and now the code will be executed once per loop and your TreeView will be loaded as you expect. ``` Public Function getCategoryParent() As OleDbCommand Return New OleDbCommand("SELECT * FROM Category WHERE Category_Parent = ?", conn) End Function ```
How about just put your SQL string on your module .. ``` Module queryCollection Public sCatParent As String = "SELECT * FROM Category WHERE Category_Parent = ?" End Module ``` And you call it like .. ``` getCategoryParent As New OleDbCommand(sCatParent,conn) getCategoryParent.Parameters.AddWithValue("Category_Parent", Key) Dim dr = getCategoryParent.ExecuteReader ```
Store all queries inside a module?
[ "", "sql", "database", "vb.net", "visual-studio-2012", "" ]
I am just trying to create a function that returns a select statement, but it gives the error: > A RETURN statement with a return value cannot be used in this context. This is my code: ``` CREATE FUNCTION [dbo].[Sample] (@SampleValue int) RETURNS TABLE AS BEGIN RETURN( SELECT * FROM View_sls ) ``` Please let me know the solution
Wrong syntax, that's all. You don't need `BEGIN` when you have an "inline table-valued function" See [CREATE FUNCTION](http://msdn.microsoft.com/en-us/library/ms186755.aspx) and example B ``` CREATE FUNCTION [dbo].[Sample] (@SampleValue int) RETURNS TABLE AS RETURN ( SELECT * FROM View_sls ); GO ```
Two things: * you need to **define the structure** of the table you want to return * you need to add data into that table *Then* you can call `RETURN;` to return that table's data to the caller. So you need something like this: ``` CREATE FUNCTION [dbo].[Sample] (@SampleValue int) RETURNS @returnTable TABLE (ContactID int PRIMARY KEY NOT NULL, FirstName nvarchar(50) NULL, LastName nvarchar(50) NULL, JobTitle nvarchar(50) NULL, ContactType nvarchar(50) NULL) AS BEGIN INSERT INTO @returnTable SELECT ContactID, FirstName, LastName, JobTitle, ContactType FROM dbo.View_sls RETURN; END ```
TSQL Error: A RETURN statement with a return value cannot be used in this context
[ "", "sql", "t-sql", "sql-server-2005", "sql-function", "" ]
I'm relatively new to SQL and I would like some help with this one. Basically, I'm trying to do some calculations in the query because functions were a bit slow. Could you guys improve on this?

```
-- Query to retrieve ADA from every school
Select Distinct DY, DATENAME(MM,DT) as 'Month', CONVERT(nvarchar,DT,101) as 'Date',
( Select ((@M_stu - (Select Count(SC) from dbo.ATT where sc=1 AND al!='' AND DY=T.DY))/@M_stu)*100 ) as 'Merced ADA',
( Select ((@A_stu - (Select Count(SC) from dbo.ATT where sc=2 AND al!='' AND DY=T.DY))/@A_stu)*100 ) as 'Atwater ADA',
( Select ((@L_stu - (Select Count(SC) from dbo.ATT where sc=3 AND al!='' AND DY=T.DY))/@L_stu)*100 ) as 'Livingston ADA',
( Select ((@B_stu - (Select Count(SC) from dbo.ATT where sc=4 AND al!='' AND DY=T.DY))/@B_stu)*100 ) as 'Buhach ADA',
( Select ((@Y_stu - (Select Count(SC) from dbo.ATT where sc=5 AND al!='' AND DY=T.DY))/@Y_stu)*100 ) as 'Yosemite ADA',
( Select ((@I_stu - (Select Count(SC) from dbo.ATT where sc=6 AND al!='' AND DY=T.DY))/@I_stu)*100 ) as 'Independence ADA',
( Select ((@G_stu - (Select Count(SC) from dbo.ATT where sc=10 AND al!='' AND DY=T.DY))/@G_stu)*100 ) as 'Golden Valley ADA',
( Select ((@S_stu - (Select Count(SC) from dbo.ATT where sc=92 AND al!='' AND DY=T.DY))/@S_stu)*100 ) as 'Sequoia ADA'
From dbo.ATT as T
Order by DY ASC
```
Try this ``` Select DY, DATENAME(MM,DT) as 'Month', CONVERT(nvarchar,DT,101) as 'Date', @M_stu - (Count(case when sc=1 AND al!='' AND DY=T.DY then 1 else null end)/@M_stu)*100 as 'Merced ADA', @A_stu - (Count(case when sc=2 AND al!='' AND DY=T.DY then 1 else null end)/@A_stu)*100 as 'Atwater ADA', ... From dbo.ATT as T GROUP BY DY, DT Order by DY ASC ```
Try this solution: ``` SELECT DISTINCT t.DY AS DY, DATENAME(MM,t.DT) AS [Month], CONVERT(NVARCHAR,t.DT,101) AS [Date], ((@M_stu - d.[1])/@M_stu)*100 AS [Merced ADA], ((@A_stu - d.[2])/@A_stu)*100 AS [Atwater ADA], ((@L_stu - d.[3])/@L_stu)*100 AS [Livingston ADA], ((@B_stu - d.[4])/@B_stu)*100 AS [Buhach ADA], ((@Y_stu - d.[5])/@Y_stu)*100 AS [Yosemite ADA], ((@I_stu - d.[6])/@I_stu)*100 AS [Independence ADA], ((@G_stu - d.[10])/@G_stu)*100 AS [Golden Valley ADA], ((@S_stu - d.[92])/@S_stu)*100 AS [Sequoia ADA] FROM dbo.ATT AS t OUTER APPLY ( SELECT c.* FROM ( SELECT a.sc FROM dbo.ATT AS a WHERE a.sc IN (1,2,3,4,5,6,10,92) AND a.al!='' AND a.DY=t.DY ) b PIVOT( COUNT(b.sc) FOR a.sc IN ([1],[2],[3],[4],[5],[6],[10],[92]) ) c ) d ORDER BY t.DY ASC; ``` You should check also if you have one of the following indices: `CREATE [UNIQUE] INDEX index_name ON dbo.ATT (DY, sc, al)` or `CREATE [UNIQUE] INDEX index_name ON dbo.ATT (DY, sc) INCLUDE (al)`.
How can this T-SQL query be optimized?
[ "", "sql", "sql-server", "t-sql", "" ]
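The heart of both answers above is conditional aggregation: one pass over the attendance table, with a `CASE` inside each `COUNT`, instead of one correlated subquery per school. A miniature, runnable version using the stdlib `sqlite3` module and invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ATT (DY INTEGER, sc INTEGER, al TEXT);
    INSERT INTO ATT VALUES (1,1,'A'),(1,1,''),(1,2,'A'),(1,2,'A'),(2,1,'A');
""")
# COUNT ignores NULLs, so a CASE with no ELSE counts only the matching rows.
rows = conn.execute("""
    SELECT DY,
           COUNT(CASE WHEN sc = 1 AND al != '' THEN 1 END) AS school1_absent,
           COUNT(CASE WHEN sc = 2 AND al != '' THEN 1 END) AS school2_absent
    FROM ATT
    GROUP BY DY
    ORDER BY DY
""").fetchall()
print(rows)  # [(1, 1, 2), (2, 1, 0)]
```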
With normal Django querysets, if I want to retrieve all the myObjects whose "a" attribute is 1, 2 or 3, I would do the following: ``` myObjects.objects.filter(a_in=[1,2,3]) ``` But I would like to do this using the Q objects. How would I write the equivalent query with Q objects?
It should look like this:

```
myObjects.objects.filter(Q(a = 1) | Q( a = 2) | Q( a = 3)) 
```

I don't know why you want to do that, but you can also do

```
myObjects.objects.filter(Q(a__in=[1,2,3]))
```
It works right away. ``` Q(a__in=[1, 2, 3]) ``` Probably your issue is that you were using a single underscore instead of two.
How to use Q objects to test list membership?
[ "", "python", "django", "django-q", "" ]
Why does this query:

```
SELECT SQL_CALC_FOUND_ROWS,a.*,(SELECT cy.iso_code FROM ps_address AS addr, ps_country AS cy WHERE addr.id_customer=a.id_customer AND addr.id_country=cy.id_country) iso_code FROM `ps_customer` a WHERE iso_code='IT' ORDER BY a.`id_customer` ASC LIMIT 0,50 
```

return the error `#1054 - Unknown column 'iso_code' in 'where clause'`?
From [the documentation on `SELECT`](http://dev.mysql.com/doc/refman/5.0/en/select.html): > It is not permissible to refer to a column alias in a `WHERE` clause, because the column value might not yet be determined when the `WHERE` clause is executed. See [Section C.5.5.4, “Problems with Column Aliases”](http://dev.mysql.com/doc/refman/5.0/en/problems-with-alias.html). You cannot use column aliases in your `WHERE` clause — you may only use columns that actually exist in one of the tables used in your query. You could [wrap the subquery so that its result is treated as a "table" in its own right](https://stackoverflow.com/q/8590421/560648); then you may apply criteria to that "table".
Looks like the iso\_code column comes from the ps\_address table which is only available in the context of the subquery you are making inside the select list of your main query (ps\_customer). Besides that you don't need this WHERE statement in the main query since you are already restricting the iso\_code values in the subquery. Remove it and it should be fine.
Unknown column error in WHERE clause
[ "", "sql", "" ]
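The workaround hinted at in the accepted answer, wrapping the aliased subquery in a derived table so the alias becomes a real column the outer `WHERE` can see, sketched with `sqlite3` and toy tables (table and column names invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id_customer INTEGER, name TEXT);
    CREATE TABLE country  (id_customer INTEGER, iso TEXT);
    INSERT INTO customer VALUES (1, 'a'), (2, 'b');
    INSERT INTO country  VALUES (1, 'IT'), (2, 'FR');
""")
# Inside the derived table the expression gets its alias; the outer query
# can then filter on it like any ordinary column.
rows = conn.execute("""
    SELECT * FROM (
        SELECT c.id_customer,
               (SELECT iso FROM country k
                WHERE k.id_customer = c.id_customer) AS iso_code
        FROM customer c
    ) sub
    WHERE iso_code = 'IT'
""").fetchall()
print(rows)  # [(1, 'IT')]
```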
My new site (to read books) currently uses static content to serve the pages: I have the HTML file saved in a folder and, to serve the page, I read it and pass the content to a jinja2 template, to show it all together. No database hits except to get the book ID to know the title. This is working fine and fast, but it is obvious that I must "upload" every new book with the "Deploy" option from the GAE SDK (from what I've read, there is no way to access the file system in GAE from outside, like with an FTP), which is not the optimal way. So I'm thinking of saving the HTML contents to the database, but: will this increase the database hits a lot? I'm using NDB, so in theory, every user that reads a book will get the cached result from the NDB cache once it has been read the first time. Is this right? Would it be better to move the HTML to the database? In terms of size, it will be over 8k per HTML page. The pages also have image files, so, to avoid the initial problem of uploading every new book, again I must save the images in the database too, right? An example page for a book will be [like this](http://www.collok.com/read?bookId=1&chapterId=1)
Google App Engine has a dedicated service for file uploads, called the Blobstore. You can write an administration page that lets you upload files and publish them. You will need to use the database only to store metadata about each object: book name, author, related image... [Here's some documentation](https://developers.google.com/appengine/docs/python/blobstore/overview)
If you want a "poor man's" implementation, create a folder named 'books' in the top level of your application folder. Inside of that, create one folder for each book, and inside each one of those, create one folder for each chapter. This will give you a tree structure for your library. Then, inside each book's folder, create an index.html file that acts as a cover page and table of contents. It should have links to each chapter. Inside each chapter's folder, create another index.html file that contains the HTML for that chapter. All the images for that chapter go alongside the index.html file inside that folder, and all the links are relative, i.e.: href="picture.jpeg" Change your app.yaml to serve up the 'books' folder as a Static Directory: ``` handlers: - url: /books static_dir: books ``` If you never have to change the contents of the books, you can publish each book just once and deploy it. The beauty of this is that no programming is required, beyond editing HTML and app.yaml.
GAE: use static HTML file vs database to serve contents
[ "", "python", "google-app-engine", "app-engine-ndb", "" ]
I always use Fabric to deploy my processes from my local PC to remote servers. If I have a Python script like this:

test.py:

```
import time

while True:
    print "Hello world."
    time.sleep(1)
```

Obviously, this script runs continuously. I deploy this script to the remote server and execute my fabric script like this:

```
...
sudo("python test.py")
```

Fabric will always wait for the return of test.py and won't exit. How can I stop the fabric script at once and ignore the return of test.py?
`sudo("python test.py 2>/dev/null >/dev/null &")` or redirect the output to some other file instead of /dev/null
Usually for this kind of asynchronous task processing [Celery](http://www.celeryproject.org/) is preferred . [This](http://nicholaskuechler.com/2012/06/19/auto-scale-rackspace-cloud-servers-with-fabric-and-celery/) explains in detail the use of Celery and Fabric together. ``` from fabric.api import hosts, env, execute,run from celery import task env.skip_bad_hosts = True env.warn_only = True @task() def my_celery_task(testhost): host_string = "%s@%s" % (testhost.SSH_user_name, testhost.IP) @hosts(host_string) def my_fab_task(): env.password = testhost.SSH_password run("ls") try: result = execute(my_fab_task) if isinstance(result.get(host_string, None), BaseException): raise result.get(host_string) except Exception as e: print "my_celery_task -- %s" % e.message ```
How to prevent fabric form waiting for the process to return
[ "", "python", "fabric", "" ]
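What the accepted `2>/dev/null >/dev/null &` incantation does is detach the process and silence its output so nothing holds the session open. The same effect, recreated locally with the stdlib as a sketch (not Fabric itself, and the demo child is killed immediately for tidiness):

```python
import subprocess
import sys

# Start a long-running child with its output discarded, without waiting on it.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("control returned immediately; child pid:", child.pid)
child.kill()   # tidy up the demo child
child.wait()
```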
If I have the following database schema

![enter image description here](https://i.stack.imgur.com/ma7mO.png)

and I wanted to write a query that lists all Bill_Items, but includes a flag that is true if any associated inventory items were marked with the CheckFlag bit, what would be the correct way to do it?

The way I thought of doing it is

```
select *, case when exists(select CheckFlag 
                           from inventoryitems 
                           inner join Linked_items on Item_code = ItemCode
                           where Bill_Code = BillCode and CheckFlag = 1
                          ) then 1 else 0 end as flagSet
from Bill_items
```

However, I am fairly certain I am not doing this the correct way. What is the way I should be doing a check like this?
Try this: ``` select Bill_items.billcode ,max(cast(CheckFlag as int)) as flagSet from Bill_items join Linked_items on Bill_items.billcode = Linked_items.Bill_Code join inventoryitems on Linked_items.Item_code = inventoryitems.ItemCode group by Bill_items.billcode ``` If you need to include Bill\_items without inventoryitems try this: ``` select Bill_items.billcode ,max(isnull(cast(CheckFlag as int),0)) as flagSet from Bill_items left join Linked_items on Bill_items.billcode = Linked_items.Bill_Code left join inventoryitems on Linked_items.Item_code = inventoryitems.ItemCode group by Bill_items.billcode ``` Here is a derived version joined onto bill items ``` select Bill_items.* ,isnull(_CheckFlags.flagSet,0) as flagSet from Bill_items left join ( select Linked_items.Bill_Code ,max(cast(CheckFlag as int)) as flagSet from Linked_items join inventoryitems on Linked_items.Item_code = inventoryitems.ItemCode group by Linked_items.Bill_Code ) _CheckFlags on Bill_items.billcode = _CheckFlags.Bill_Code ``` Try this for method without CAST: ``` select Bill_items.* ,isnull(_CheckFlags.flagSet,0) as flagSet from Bill_items left join ( select Linked_items.Bill_Code ,CheckFlag as flagSet from Linked_items join inventoryitems on Linked_items.Item_code = inventoryitems.ItemCode where CheckFlag = 1 group by Linked_items.Bill_Code ,CheckFlag ) _CheckFlags on Bill_items.billcode = _CheckFlags.Bill_Code ```
This should work (the inner sub-query shall save you from adding all the columns in the `group by` clause) and would also be faster ``` SELECT bi.*, temp.checkflag FROM bill_items bi LEFT JOIN ( SELECT max(ii.checkflag) as checkflag, li.bill_code FROM linked_items li JOIN inventory_items ii ON ( ii.item_code = ii.itemCode AND ii.checkflag = 1 ) GROUP BY li.bill_code ) as temp ON ( temp.bill_code = bi.billCode ) ```
What is the correct way to find if any item in a one to many relationship has a flag set?
[ "", "sql", "sql-server", "" ]
I have many lists in this format: ``` ['1', 'O1', '', '', '', '0.0000', '0.0000', '', ''] ['2', 'AP', '', '', '', '35.0000', '105.0000', '', ''] ['3', 'EU', '', '', '', '47.0000', '8.0000', '', ''] ``` I need to create a dictionary with key as the first element in the list and value as the entire list. None of the keys are repeating. What is the best way to do that?
Put all your lists in another list and do this:

```
my_dict = {}
for lst in lists:
    my_dict[lst[0]] = lst[:]
```

This basically takes the first element and uses it as a key in `my_dict`, and puts a copy of the list in as the value. (The loop variable is named `lst` rather than `list` so it doesn't shadow the built-in `list` type.)
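As a runnable sketch of this approach, using the sample rows from the question (the variable names are mine):

```python
lists = [
    ['1', 'O1', '', '', '', '0.0000', '0.0000', '', ''],
    ['2', 'AP', '', '', '', '35.0000', '105.0000', '', ''],
    ['3', 'EU', '', '', '', '47.0000', '8.0000', '', ''],
]

my_dict = {}
for lst in lists:
    my_dict[lst[0]] = lst[:]   # key: first element, value: a copy of the row
```

The `lst[:]` slice means each dict value is an independent copy, so mutating the original rows later won't change the dict.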
``` >>> lists = [['1', 'O1', '', '', '', '0.0000', '0.0000', '', ''], ['2', 'AP', '', '', '', '35.0000', '105.0000', '', ''], ['3', 'EU', '', '', '', '47.0000', '8.0000', '', '']] >>> {x[0]: x for x in lists} {'1': ['1', 'O1', '', '', '', '0.0000', '0.0000', '', ''], '3': ['3', 'EU', '', '', '', '47.0000', '8.0000', '', ''], '2': ['2', 'AP', '', '', '', '35.0000', '105.0000', '', '']} ```
Create dictionary from list python
[ "", "python", "list", "dictionary", "" ]
I have a dictionary in format {'string-value-keys':[list-values]}. For a particular key value, I would like to retrieve the value as a string. Here is the example: ``` >>> simpleDict= {'DDL_TABLE1':['create table bla', ' columns bla bla'], 'DML_TABLE1': ['insert into bla', ' values bla']} >>> simpleDict {'DDL_TABLE1': ['create table bla', ' columns bla bla'], 'DML_TABLE1': ['insert into bla', ' values bla']} >>> sqlQry= " ".join(value for key, value in simpleDict.items() if 'DDL' in key) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sequence item 0: expected string, list found >>> sqlQry= " ".join(value for key, value in simpleDict.items() if 'DDL' in key) ``` I am not able to understand why I am getting this error, when value is a list of string values.
Move the join inside: ``` (' '.join(value) for key, value in simpleDict.items() if 'DDL' in key) ``` Test: ``` >>> simpleDict= {'DDL_TABLE1':['create table bla', ' columns bla bla'], 'DML_TABLE1': ['insert into bla', ' values bla']} >>> list(' '.join(value) for key, value in simpleDict.items() if 'DDL' in key) ['create table bla columns bla bla'] ``` ***Edit based on comment*** This is easier to look at with a list comprehension vs a generator. Your original as an LC: ``` >>> [value for key, value in simpleDict.items() if 'DDL' in key] [['create table bla', ' columns bla bla']] ``` Which is creating a List of List of strings. If you try and use join on that: ``` >>> ' '.join([value for key, value in simpleDict.items() if 'DDL' in key]) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sequence item 0: expected string, list found ``` produces the error you saw. OK, depending on what your full data looks likes, you can do two joins: ``` >>> ' '.join([' '.join(value) for key, value in simpleDict.items() if 'DDL' in key]) ``` If you just want one big string, multiple joins isn't the end of the world. If you are only looking for one item, you can do this: ``` >>> [' '.join(value) for key, value in simpleDict.items() if 'DDL' in key][0] 'create table bla columns bla bla' ``` If you are dealing with multiple hits in the data/multiple uses, use a loop: ``` for s in [' '.join(value) for key, value in simpleDict.items() if 'DDL' in key]: # s multiple times ``` If the data is 'needle in a haystack' type, use a loop and break out once you find what you are looking for.
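A self-contained version of the fix, with the sample dict from the question, for anyone who wants to run it directly:

```python
simpleDict = {
    'DDL_TABLE1': ['create table bla', ' columns bla bla'],
    'DML_TABLE1': ['insert into bla', ' values bla'],
}

# join each matching value list into one string first,
# then join all matching per-key strings together
sqlQry = " ".join(
    " ".join(value) for key, value in simpleDict.items() if 'DDL' in key
)
# note: a double space appears because the second fragment
# already starts with a space in the sample data
```

Only the inner `join` sees lists; the outer `join` only ever sees strings, which is what removes the `TypeError`.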
`" ".join()` expects an iterable of strings and your generator yields lists of strings, so you are ending up with a call like `" ".join([['create table bla', ' columns bla bla']])`. Since it looks like you only expect a single key to match, you probably don't want a generator for this. I would suggest the following: ``` sqlQry = None for key, value in simpleDict.items(): if 'DDL' in key: sqlQry = " ".join(value) break ```
Joining value(list) from <key, value> in dictionary
[ "", "python", "python-2.7", "" ]
I'm wondering if I can select the value of a column if the column exists and just select null otherwise. In other words I'd like to "lift" the select statement to handle the case when the column doesn't exist.

```
SELECT uniqueId
     , columnTwo
     , /*WHEN columnThree exists THEN columnThree ELSE NULL END*/ AS columnThree
FROM (subQuery) s
```

Note, I'm in the middle of solidifying my data model and design. I hope to exclude this logic in the coming weeks, but I'd really like to move beyond this problem right now, because the data model fix is a more time-consuming endeavor than I'd like to tackle at the moment. Also note, I'd like to be able to do this in one query. So I'm not looking for an answer like

> check what columns are on your sub query first. Then modify your
> query to appropriately handle the columns on your sub query.
You cannot do this with a simple SQL statement. A SQL query will not compile unless all table and column references in the query exist. You can do this with dynamic SQL if the "subquery" is a table reference or a view. In dynamic SQL, you would do something like:

```
declare @sql nvarchar(max) = '
SELECT uniqueId, columnTwo, '+
       (case when exists (select *
                          from INFORMATION_SCHEMA.COLUMNS
                          where TABLE_NAME = @TableName and
                                COLUMN_NAME = 'ColumnThree'  -- and schema name too, if you like
                         )
             then 'ColumnThree'
             else 'NULL as ColumnThree'
        end) + '
FROM (select * from '+@SourceName+') s';

exec sp_executesql @sql;
```

For an actual subquery, you could approximate the same thing by checking to see if the subquery returned something with that column name. One method for this is to run the query: `select top 0 * into #temp from (<subquery>) s` and then check the columns in `#temp`.

EDIT: I don't usually update such old questions, but based on the comment below. If you have a unique identifier for each row in the "subquery", you can run the following:

```
select t.. . ., -- everything but columnthree
       (select column3 -- not qualified!
        from t t2
        where t2.pk = t.pk
       ) as column3
from t cross join
     (values (NULL)) v(columnthree);
```

The subquery will pick up `column3` from the outer query if it doesn't exist. However, this depends critically on having a unique identifier for each row. The question is explicitly about a subquery, and there is no reason to expect that the rows are easily uniquely identified.
As others already suggested, the sane approach is to have queries that meet your table design. There is a rather exotic approach to achieve what you want in (pure, not dynamic) SQL though. A similar problem was posted at DBA.SE: [How to select specific rows if a column exists or all rows if a column doesn't](https://dba.stackexchange.com/questions/44871/how-to-select-specific-rows-if-a-column-exists-or-all-rows-if-a-column-doesnt) but it was simpler, as only one row and one column was wanted as the result. Your problem is more complex, so the query is more convoluted, to say the least. Here is the insane approach:

```
; WITH s AS (subquery)    -- subquery
SELECT uniqueId
     , columnTwo
     , columnThree =
       (
        SELECT ( SELECT columnThree
                 FROM s AS s2
                 WHERE s2.uniqueId = s.uniqueId
               ) AS columnThree
        FROM (SELECT NULL AS columnThree) AS dummy
       )
FROM s ;
```

It also assumes that the `uniqueId` is unique in the result set of the subquery. Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!3/4cdba/68)**

---

And a simpler method, which has the additional advantage that it allows more than one column with a single subquery:

```
SELECT s.*
FROM (
       SELECT NULL AS columnTwo, NULL AS columnThree, NULL AS columnFour
     ) AS dummy
CROSS APPLY
     ( SELECT uniqueId, columnTwo, columnThree, columnFour
       FROM tableX
     ) AS s ;
```

The question has also been asked at DBA.SE and has been answered by [@Andriy M](https://dba.stackexchange.com/users/6965/andriy-m) (using `CROSS APPLY` too!) and [Michael Ericsson](https://dba.stackexchange.com/users/2103/mikael-eriksson) (using `XML`): [Why can't I use a CASE statement to see if a column exists and not SELECT from it?](https://dba.stackexchange.com/questions/66741/why-cant-i-use-a-case-statement-to-see-if-a-column-exists-and-not-select-from-i/66755#66755)
Select columnValue if the column exists otherwise null
[ "", "sql", "sql-server", "select", "exists", "" ]
I was just wondering if there is a simple way to do this. I have a particular structure that is parsed from a file and the output is a list of a dict of a list of a dict. Currently, I just have a bit of code that looks something like this: ``` for i in xrange(len(data)): for j, k in data[i].iteritems(): for l in xrange(len(data[i]['data'])): for m, n in data[i]['data'][l].iteritems(): dostuff() ``` I just wanted to know if there was a function that would traverse a structure and internally figure out whether each entry was a list or a dict and if it is a dict, traverse into that dict and so on. I've only been using Python for about a month or so, so I am by no means an expert or even an intermediate user of the language. Thanks in advance for the answers. EDIT: Even if it's possible to simplify my code at all, it would help.
You *never* need to iterate through `xrange(len(data))`. You iterate either through `data` (for a list) or `data.items()` (or `values()`) (for a dict). Your code should look like this: ``` for elem in data: for val in elem.itervalues(): for item in val['data']: ``` which is quite a bit shorter.
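A runnable sketch of that shape of loop, with a made-up structure standing in for the parsed file (list of dicts, each holding a list of dicts under a `'data'` key):

```python
# made-up structure for illustration
data = [
    {'data': [{'x': 1}, {'y': 2}]},
    {'data': [{'z': 3}]},
]

leaves = []
for elem in data:                        # outer list
    for item in elem['data']:            # inner list under the 'data' key
        for key, value in item.items():  # innermost dicts
            leaves.append(value)
```

Note that `items()` is the Python 3 spelling; the answer's `itervalues()` is Python 2 only.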
Will, if you're looking to descend an arbitrary structure of array/hash thingies then you can create a function to do that based on the `isinstance()` function.

```
def traverse_it(it):
    if (isinstance(it, list)):
        for item in it:
            traverse_it(item)
    elif (isinstance(it, dict)):
        for key in it.keys():
            traverse_it(it[key])
    else:
        do_something_with_real_value(it)
```

Note that the average object oriented guru will tell you not to do this, and instead create a class tree where one class is based on an array, another on a dict, and then have a single function to process each with the same function name (i.e., a virtual function), and to call that within each class function. I.e., if/else trees based on types are "bad"; functions that can be called on an object to deal with its contents in its own way are "good".
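Here is a runnable variant of the same recursive descent that collects the leaf values and returns them, instead of calling a `do_something_with_real_value` handler (the nested structure in the usage line is made up):

```python
def traverse_it(it, found=None):
    """Recursively descend nested lists/dicts and collect leaf values."""
    if found is None:
        found = []
    if isinstance(it, list):
        for item in it:
            traverse_it(item, found)
    elif isinstance(it, dict):
        for value in it.values():
            traverse_it(value, found)
    else:
        found.append(it)
    return found

leaves = traverse_it([{'a': [1, 2]}, {'b': {'c': 3}}])
```

The `found=None` default avoids the mutable-default-argument pitfall while still letting the recursion share one list.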
Python: How to traverse a List[Dict{List[Dict{}]}]
[ "", "python", "list", "loops", "dictionary", "traversal", "" ]
I have a problem using BeautifulSoup4... (I'm quite a Python/BeautifulSoup newbie, so forgive me if I'm dumb) Why does the following code:

```
from bs4 import BeautifulSoup

soup_ko = BeautifulSoup('<select><option>foo</option><option>bar & baz</option><option>qux</option></select>')
soup_ok = BeautifulSoup('<select><option>foo</option><option>bar and baz</option><option>qux</option></select>')

print soup_ko.find_all('option')
print soup_ok.find_all('option')
```

produce the following output:

```
[<option>foo</option>, <option>bar &amp; baz</option>]
[<option>foo</option>, <option>bar and baz</option>, <option>qux</option>]
```

I was expecting the same result, an array of my 3 options... but BeautifulSoup seems to dislike the ampersand in the text? How can I get rid of this and get a correct array without editing my HTML (or by transforming/converting it)? Thanks.

**Edit:** Seems like a 4.2.0 bug... I downloaded both the 4.2.0 and 4.2.1 versions (from <http://www.crummy.com/software/BeautifulSoup/bs4/download/4.2/beautifulsoup4-4.2.0.tar.gz> and <http://www.crummy.com/software/BeautifulSoup/bs4/download/4.2/beautifulsoup4-4.2.1.tar.gz>), unzipped them in my script folder, changed my code to:

```
import sys

sys.path.insert(0, "beautifulsoup4-" + sys.argv[1])

from bs4 import BeautifulSoup, __version__

print "Beautiful Soup %s" % __version__

soup_ko = BeautifulSoup('<select><option>foo</option><option>bar & baz</option><option>qux</option></select>')
print soup_ko.find_all('option')
```

and got these results:

```
15:24:38 pataluc ~ % python stack.py 4.2.0
Beautiful Soup 4.2.0
[<option>foo</option>, <option>bar &amp; baz</option>]
15:24:41 pataluc ~ % python stack.py 4.2.1
Beautiful Soup 4.2.1
[<option>foo</option>, <option>bar &amp; baz</option>, <option>qux</option>]
```

so I guess my question is closed. Thanks for your comments, which made me realize it was a version issue.
As I said in the edited first post, it was a bug in BeautifulSoup 4.2.0; I downloaded 4.2.1 and the bug is gone.
`&` is used in HTML to input so called *HTML entities*. E.g., `<` is a special symbol in HTML because it starts a tag, so you use `&lt;` instead. Thus, `&` itself is also a special symbol, and you should use `&amp;` for a literal ampersand. Your HTML was invalid and BeautifulSoup fixed it.
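If you control the text before it reaches a parser or template, the standard library can do the escaping for you. A small Python 3 sketch (the `html` module is stdlib; BeautifulSoup is not needed for this step):

```python
from html import escape, unescape

raw = 'bar & baz'
escaped = escape(raw)          # '&' becomes '&amp;' (also handles '<' and '>')
roundtrip = unescape(escaped)  # back to the original text
```

Escaping text nodes this way before embedding them in markup keeps the HTML valid, so a strict parser has nothing to "fix".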
BeautifulSoup4 : Ampersand in text
[ "", "python", "html", "python-2.7", "beautifulsoup", "" ]
I am trying to slice different parts of a line into a list of dictionaries using a list comprehension. The code below doesn't work, but it illustrates what I am trying to do. Any help would be much appreciated! Thanks ``` def getDataElements(self): return [x for x for line in self.data: {"Number": line[0:9], "FullName": line[9:27].rstrip(), "LastName": line[27:63].rstrip(), "Area": line[63:65].rstrip(), "City": line[65:90].rstrip(), "Status": line[91], "Status2": line[92], "Status3": line[93]] ```
There are instances where list comprehensions are good, but this is not one of them. Just use a loop and a generator: ``` for line in self.data: yield { "Number": line[0:9], "FullName": line[9:27].rstrip(), "LastName": line[27:63].rstrip(), "Area": line[63:65].rstrip(), "City": line[65:90].rstrip(), "Status": line[91], "Status2": line[92], "Status3": line[93] } ``` If you absolutely need to return a list, pass the output through `list()`: ``` output_list = list(self.getDataElements()) ``` If you're not comfortable with that, there's always the append-to-a-list way: ``` people = [] for line in self.data: people.append({ "Number": line[0:9], "FullName": line[9:27].rstrip(), "LastName": line[27:63].rstrip(), "Area": line[63:65].rstrip(), "City": line[65:90].rstrip(), "Status": line[91], "Status2": line[92], "Status3": line[93] }) return people ```
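A trimmed-down, runnable sketch of the same slicing-into-dicts idea; the column offsets here are made up and much shorter than the question's 94-character layout:

```python
def parse_line(line):
    # made-up fixed-width layout: cols 0-2 number, 3-8 name, col 9 status
    return {
        "Number": line[0:3],
        "Name": line[3:9].rstrip(),
        "Status": line[9],
    }

records = [parse_line(line) for line in ["001Alice A", "002Bob   B"]]
```

Pulling the slicing into a small function keeps the per-line logic in one place, whether you then build a list, a generator, or append in a loop.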
You were somewhat clear, but you have to put the dictionary at the beginning... if I fully understand what you want, the following should work:

```
return [{"Number": line[0:9],"FullName": line[9:27].rstrip(),"LastName": line[27:63].rstrip(),"Area": line[63:65].rstrip(),"City": line[65:90].rstrip(),"Status": line[91],"Status2": line[92],"Status3": line[93]} for line in self.data]
```

unless there is some extra level of nesting, because you say `x for x for line` yet you don't use `x`, so I ignored it. Let me know if that was incorrect, and if so please explain in a bit more detail!
parsing a line into a dictionary through a list comprehension
[ "", "python", "dictionary", "list-comprehension", "" ]
I have a while loop that returns the number of 'rushes' it takes to get up a hill. The size of the hill is the 'slope_height' and the height gained per rush is the 'rush_height_gain' minus the 'back_sliding'. The code below works for:

```
ans = num_rushes(15, 10, 5)
print(ans)
```

which prints 1 and

```
ans = num_rushes(100, 15,7)
print(ans)
```

which prints 2 and

```
ans = num_rushes(10, 10, 9)
print(ans)
```

which prints 12 but returns the wrong answer for

```
ans = num_rushes(100, 10, 0)
print(ans)
```

which should print 10, but instead prints 9 I'm not sure why this is, and any help would be greatly appreciated

```
def num_rushes(slope_height, rush_height_gain, back_sliding):
    current_height = 0
    rushes = 0
    while current_height < slope_height:
        if rush_height_gain == slope_height:
            rushes+=1
            return rushes
        elif current_height < slope_height:
            if current_height == slope_height:
                return rushes
            else:
                a = rush_height_gain - back_sliding
                current_height += a
                if current_height == slope_height:
                    return rushes
                elif current_height > slope_height:
                    return rushes
                else:
                    rushes+=1
    return (rushes)
```
If I'm understanding the question properly, I think what you're looking for is: ``` def num_rushes(slope_height, rush_height_gain, back_sliding): if rush_height_gain < slope_height and rush_height_gain - back_sliding < 1: raise Exception("this is not going to work very well") current_height = rushes = 0 while current_height < slope_height: rushes += 1 current_height += rush_height_gain if current_height >= slope_height: break current_height -= back_sliding return rushes ``` After each uphill "rush", you check to see if you've reached the top. If so, you're done, if not, you slide down a bit and go again! As @perreal noted in his link in a comment to the original post, if you slide down more than you slide up and don't get up all the way the first time, you're going to have a problem. You might want to throw an exception in those cases.
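A runnable restatement of the accepted fix, using `ValueError` instead of a bare `Exception` (the question's own `num_rushes(100, 10, 0)` case is the easiest one to check):

```python
def num_rushes(slope_height, rush_height_gain, back_sliding):
    # a climber that gains less than 1 unit net per rush never reaches the top
    if rush_height_gain < slope_height and rush_height_gain - back_sliding < 1:
        raise ValueError("this is not going to work very well")
    current_height = rushes = 0
    while current_height < slope_height:
        rushes += 1
        current_height += rush_height_gain
        if current_height >= slope_height:   # check BEFORE sliding back
            break
        current_height -= back_sliding
    return rushes
```

Checking the height before applying the back-slide is exactly what removes the off-by-one the question describes.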
I believe the problem is this statement: ``` if current_height == slope_height: return rushes ``` When `back_sliding` is `0`, then on the tenth iteration, `current_height` goes from `90` to `100`. Then the check returns true, and `9` is returned before it's incremented.
While loops (Python)
[ "", "python", "while-loop", "" ]
I'm using SQL Server 2008. I have a table with a column of type `VarChar`. It is currently filled with strings which actually represent numbers. Unfortunately, the column must remain `VarChar ('1', '2' ... , '1000' )`. I wish to query that field for the max number value but since this is a `VarChar` I get the Lexicographical max and not the natural order max. I thought I'll try and solve this by using the `COLLATE` clause for that query and change to a collation that provides numerical natural order like in this [link](http://msdn.microsoft.com/en-us/library/ms184391.aspx) 1. Is there a collation like this for SQL Server 2008 and if so, what should be the query? 2. If a collation like this exists should I use this method or should I use cast? Thanks.
There is no collation option that will perform a natural sort on numbers. You can however, cast the value as an integer if all the values are expected to be integers. Try something like ``` SELECT MAX(CAST (Value as int)) FROM MyTable ```
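As an aside, the pitfall being fixed here is easy to demonstrate outside SQL; a Python sketch of lexicographic vs. numeric ordering over the same strings (the `key=int` plays the role of the `CAST`):

```python
values = ['1', '2', '10', '1000', '999']

lexicographic_max = max(values)        # compares character by character
numeric_max = max(values, key=int)     # the moral equivalent of CAST(... as int)
```

Character-by-character comparison ranks `'999'` above `'1000'` because `'9' > '1'`, which is the same behavior a varchar column gives you under any collation.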
Just a note, a third option is to use a persisted computed column, which is the value cast to a numeric type. That way it is only cast once: on create or update.

```
-- Create table with computed column
CREATE TABLE [dbo].[CCtest]
(
    [myString] [varchar] NULL,
    [myInt] AS (cast(myString as int)) PERSISTED
)
GO
```

It's then OK to create an index on the computed column.
SQL Server - Is there a collation that provides natural order for numbers?
[ "", "sql", "sql-server-2008", "collation", "" ]
How do I copy all even lines from one file to a new file in Python? The even numbers are just an illustration: what I really want is a selective but still substantial set of lines copied from one file to another, and the even case should serve as a good example. I use this, but it is very inefficient (it takes around 5 minutes):

```
# foo.txt holds 200,000 lines with 300 values
list = [0, 2, 4, 6, 8, 10..... 199996, 199998]
newfile = open(savefile, "w")
with open("foo.txt", "r") as file:
    for i, line in enumerate(file):
        if i in list:
            newfile.write(line)
newfile.close()
```

I would also appreciate an explanation of why this is so slow: reading line by line goes quickly (around 15 seconds), and is also what the manual advises.

**EDIT:** My apologies; I am not looking for specific odd/even examples; it is merely for the effect of how to deal with around 100k out of 200k values in no easy order. Is there no general solution to the I/O problem here other than finding more efficient ways to deal with odd/even? Again, apologies for bringing it up.
What's taking all the time is searching `list`. In order to figure out whether `i` is in `list`, it has to scan through the entire list to be sure that it's not there. If you really only care about even numbers, you can simply use `if i % 2 == 0`, but if you have a specific group of line numbers you want, you should use a `set`, which has O(1) membership testing, e.g. ``` keep = {1, 5, 888, 20203} ``` and then ``` if i in keep: ```
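A runnable sketch of the `set`-based approach, with `io.StringIO` standing in for the real files:

```python
import io

keep = {0, 2, 4}   # O(1) membership tests, unlike a list

src = io.StringIO("a\nb\nc\nd\ne\n")
dst = io.StringIO()

for i, line in enumerate(src):
    if i in keep:
        dst.write(line)

result = dst.getvalue()
```

With 100k wanted indices out of 200k lines, the list version does up to 100k comparisons per line; the set version does a single hash lookup, which is where the minutes-to-seconds speedup comes from.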
I'm assuming that your `list` is predefined, and can contain any sequence of possible line indices, not necessarily every Nth line for example. The first probable bottleneck is that you're doing a O(n) list search (`i in list`) 200000 times. Converting the list to a dictionary should already help: ``` listd = dict.fromkeys(list) . . # this is O(1) instead of O(n) if i in listd: ``` Alternatively, if you know that `list` is sorted, or you can sort it, simply keep track of the next line index: ``` list = [0, 2, 4, 6, 8, 10..... 199996, 199998] nextidx = 0 newfile = open(savefile, "w") with open("foo.txt", "r") as file: for i, line in enumerate(file): if i == list[nextidx]: newfile.write(line) nextidx += 1 newfile.close() ```
Efficiently copying specific lines to another file (Python 3.3)
[ "", "python", "" ]
I am attempting to draw a circle at the location of my mouse whenever I hit the space bar. The goal is to draw this circle around a character that is following my mouse cursor, and to kill other objects it comes in contact with. Am I on the right track? Also, how the heck does drawing a circle work?

```
import math
import pygame, sys, time
from pygame.locals import *

pygame.init()

WINDOW_WIDTH = 1000
WINDOW_HEIGHT = 600

surface = pygame.display.set_mode((WINDOW_WIDTH, WINDOW_HEIGHT),0,32)
pygame.display.set_caption('follow mouse')

surface.fill((255,255,255))

class Hero():
    def Attack(self):
        for event in pygame.event.get():
            surface.fill ((255,255,255))
            amntTuple = pygame.mouse.get_pos()
            pygame.draw.circle(surface, BLUE, amntTuple, 20, 300)
            surface.blit (self.space, self.spaceRect)
            pygame.display.update()

var = Hero()

while True:
    for event in pygame.event.get():
        if event.type == KEYDOWN:
            if event.key == K_SPACE:
                var.Attack()
```
There are a few things you need to change to make it work. I've included a somewhat lengthy explanation behind all of these changes, so that you will be able to understand them and apply them correctly in other parts of your game. If you do not need the extra explanation, you can skip to working code at the bottom of this answer. I've placed comments everywhere I made a change. ## Event handling Pygame keeps a list of events that happen in the game, such as keypresses. When you call `pygame.event.get()`, a list of all events that have happened *since the last call* is returned. This means that in your `Attack()` function, none of your code is being run because it happens for each event in `pygame.event.get()`, but `pygame.event.get()` returns an empty list because you just called it in the while loop. Generally speaking, you should only call `pygame.event.get()` once per loop, exactly like you have it in the bottom of your code. You should delete the for loop in `Attack()` because it has no purpose there. ## Quitting A small inconvenience with pygame is that the close button doesn't work by default, which means you have to forcefully close your game. To fix this, you should check for QUIT events in your event loop, and close the game. Here's an example of how the code would look inserted into your game: ``` while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() if event.type == KEYDOWN: if event.key == K_SPACE: var.Attack() ``` ## Colors Sadly, Pygame has no built-in Colors, so `BLUE` in `pygame.draw.circle(surface, BLUE, amntTuple, 20, 300)` will not work. Instead, replace `BLUE` with `pygame.Color(0,0,255)` to create blue yourself, using the RGB color model. ## Drawing Drawing is done in two steps in Pygame. First, objects are drawn to a virtual surface, which can not be seen. Then, when you call `pygame.display.update()`, the virtual surface is drawn to the screen, which the user can see. 
This has technical advantages such as allowing the computer to draw smoothly without screen flickering, but it also means that as long as you don't call `pygame.display.update()`, nothing you draw will ever be visible. However, if you call it too many times, your game may flicker or act in unexpected ways. Because of this, you will generally want to call `pygame.display.update()` exactly once per gameloop (just like with `pygame.event.get()`). Generally, `pygame.display.update()` can be the very last line of code in your while loop, with everything else coming before it. There are different ways to draw to the screen in Pygame. One way is to use built-in functions like `pygame.draw.circle`. You were close in your use of it, but made two mistakes. First, BLUE doesn't exist, as I mentioned before. Second, the width parameter is the width of the line used to draw the circle, not the width of the circle itself. So you generally want it to be a small number. If you set the width to 0, the circle will be filled instead of drawn as an outline. Here's a fixed version of your draw.circle: `pygame.draw.circle(surface, pygame.Color(0,0,255), amntTuple, 20, 2)` Another way to draw to the screen is with blitting. Everything in Pygame must be drawn to a Surface, which is like a virtual canvas. When you load an image using `pygame.image.load()`, the image is loaded as a Surface. And once you have a surface, you use blitting to draw it to the screen. For example:

```
man = pygame.image.load("/images/man.png")
surface.blit (man, (0,0))
```

would be used to draw an image of a man to the screen at pixels 0,0. This will be useful later in your game development process. However, the blit you currently have right after drawing your circle does nothing and causes errors because `self.space` and `self.spaceRect` do not exist, so you should delete it for now.
## The Working Code ``` import math import pygame, sys, time from pygame.locals import * pygame.init() WINDOW_WIDTH = 1000 WINDOW_HEIGHT = 600 surface = pygame.display.set_mode((WINDOW_WIDTH, WINDOW_HEIGHT),0,32) pygame.display.set_caption('follow mouse') surface.fill((255,255,255)) class Hero(): def Attack(self): surface.fill ((255,255,255)) amntTuple = pygame.mouse.get_pos() #Use pygame.Color to make a color. #The last parameter is linewidth, and can be set to 0 for filled circles. pygame.draw.circle(surface, pygame.Color(0,0,255), amntTuple, 20, 2) #The blit was deleted because it did nothing and the broke code. var = Hero() while True: for event in pygame.event.get(): #Add a quit event so you can close your game normally. if event.type == QUIT: pygame.quit() sys.exit() if event.type == KEYDOWN: if event.key == K_SPACE: var.Attack() #Update once at the end of each gameloop. pygame.display.update() ```
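As for the "kill other objects it comes in contact with" goal from the question: that is just a point-in-circle distance test, which can be sketched without pygame at all (the function name and object layout here are made up for illustration):

```python
import math

def objects_hit(center, radius, objects):
    """Return the objects whose (x, y) position lies inside the circle."""
    cx, cy = center
    return [obj for obj in objects
            if math.hypot(obj[0] - cx, obj[1] - cy) <= radius]

# circle of radius 20 around the mouse position (100, 100)
hit = objects_hit((100, 100), 20, [(105, 100), (300, 300), (100, 119)])
```

In a real game you would run a check like this inside `Attack()`, with the mouse position as the center and the circle's radius as drawn; pygame's `Rect.collidepoint` or sprite collision helpers can replace it once you move to sprites.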
With the following line (the last one in your program): ``` var.Attack ``` You aren't calling the `attack` function. Replace it with ``` var.Attack() ``` and see if that works instead.
Drawing a circle at my mouse location when I hit the space bar (Pygame)
[ "", "python", "pygame", "draw", "" ]
I'm storing some queries in a table column so I can execute them later passing some parameters. But it has been really annoying to format the query into an UPDATE statement, because of the special characters. For example:

```
SELECT * FROM MOUNTAINS WHERE MON_NAME='PALMA' AND MON_DESC LIKE '%TRANVULCANIA%'
```

Then I need the string just for the update query:

```
UPDATE QUERIES SET QUE_SEL='SELECT * FROM MOUNTAINS WHERE MON_NAME='''+'PALMA'+''' AND MON_DESC LIKE '''+'%TRANVULCANIA%'+''' ' WHERE QUE_ID=1
```

As you can see, the opening **'** must be replaced by **'''+'**, but the matching closing **'** must be replaced by **'+'''**. This is the query I'm working on:

```
DECLARE @QUERY VARCHAR(MAX)
SELECT @QUERY='SELECT * FROM QUERIES WHERE QUE_NOMBRE='''+'PRUEBA 1'+''' '

SELECT t.r.value('.', 'varchar(255)') AS token
     , ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS id
FROM
(
    SELECT myxml = CAST('<t>' + REPLACE(@QUERY, '''', '</t><t>''</t><t>') + '</t>' AS XML)
) p
CROSS APPLY myxml.nodes('/t') t(r)
```

this is the result:

```
token                                              id
-------------------------------------------------- --------------------
SELECT * FROM QUERIES WHERE QUE_NOMBRE=            1
'                                                  2
PRUEBA 1                                           3
'                                                  4
                                                   5
```

Now I want a column that tells me when to open and when to close, and then I can apply the final replace.
Adapting the solution given by @rivarolle

```
DECLARE @QUERY VARCHAR(MAX)
DECLARE @FORMATTED varchar(max)
SELECT @QUERY='SELECT * FROM QUERIES WHERE QUE_NOMBRE='''+'PRUEBA 1'+''''

;WITH TOKENS AS(
  SELECT t.r.value('.', 'varchar(MAX)') AS token
       , ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS Id
  FROM
  (
     SELECT myxml = CAST('<t>' + REPLACE(@QUERY, '''', '</t><t>''</t><t>') + '</t>' AS XML)
  ) p
  CROSS APPLY myxml.nodes('/t') t(r)
)
, Tokens2 as (
  SELECT TOKENS.token as token
        ,quotes.row%2 as tipoapostrofe
  from Tokens
  left join (select row_number() over( order by Id asc) as row, a.*
             FROM (SELECT * from Tokens) a  where Token = '''') quotes
    on quotes.Id = Tokens.Id
)
SELECT @FORMATTED = STUFF((
  SELECT ' ' + REPLACE(token,'''',CASE tipoapostrofe WHEN 1 THEN '''''''+''' WHEN 0 THEN '''+''''''' ELSE '' END) AS [text()]
  FROM Tokens2
  FOR XML PATH('')
), 1, 1, '')

print @FORMATTED
```

This works; it just needs a function for cleaning XML special characters and another for putting them back, and the dynamic queries are printed ready for an update.
Assuming your token table is Tokens(Token, Id, Position): ``` update Tokens set position = quotes.row%2 from Tokens left join (select row_number() over( order by Id asc) as row, a.* FROM (SELECT * from Tokens) a where Token = '''') quotes on quotes.Id = Tokens.Id ``` The position column will have a value of 1 for starting quote and 0 for closing quote. NULL for the rest.
How to generate an update query of a dynamic query (automatically)?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "formatting", "" ]
I know the title seems crazy, but it is true. Here is my predicament. First, I am still a beginner at Python so please be considerate. I am trying to test if a variable exists. Now the variable comes from a file that is parsed in yaml. Whenever I try this code,

```
if not name['foo']:
    print "No name given."
    return False
```

Python does as you would expect and returns false. However, when I try to change it to this code,

```
try:
    name['foo']
except:
    print "ERROR: No name given."
    raise
```

the exception is never raised. I have searched and searched and could not find any questions or sites that could explain this to me. My only thought is that the parser is "tricking" the exception handler, but that doesn't really make sense to me. I have made sure that there were no white spaces in the name field of the document I am parsing. The field has the format:

```
*name: foo
*ver: bar
```

Like I have said, I made sure that foo was completely deleted along with any whitespace between lines. If anyone could help, it would be greatly appreciated.

EDIT: And I apologize for the negative logic in the if statement. The function has to go through a series of checks. The best way I could think of to make sure all the checks were executed was to return true at the very end and to return false if any individual check failed.
A few things: * That shouldn't throw an exception! You're doing `name['foo']` in two places and expecting different behavior. * The `name` doesn't behave like a dictionary, if it did you would NOT get a return of `False` in the first example, you'd get an exception. Try doing this: ``` name = {} name['foo'] ``` Then you'll get a `KeyError` exception * Don't ever have an `except:` block! Always catch the specific exception you're after, like `IndexError` or whatever
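A minimal runnable illustration of the genuinely-missing-key case, catching the specific exception rather than using a bare `except:`:

```python
name = {}          # empty dict: 'foo' is genuinely missing

try:
    name['foo']
except KeyError:   # catch the specific exception, never bare `except:`
    outcome = "missing key raised KeyError"
else:
    outcome = "no exception"
```

If indexing did not raise here, the key must exist, which is exactly the situation in the question.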
I don't understand why you think it would raise an exception. The key evidently exists, otherwise the first snippet would not work. The value for that key is obviously some false-y value like `False`, `None`, `[]` or `''`.
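A runnable sketch of exactly that situation: the key exists but holds a falsy value, so the `if not` branch fires while indexing raises nothing:

```python
name = {'foo': ''}               # the key exists, but its value is falsy

branch_fires = not name['foo']   # True: '' is falsy, so `if not` triggers

raised = False
try:
    name['foo']                  # key lookup succeeds, nothing to raise
except KeyError:
    raised = True
```

This is why the two snippets in the question behave differently: they are testing two different things (truthiness of the value vs. presence of the key).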
Python: variable "tricking" try-exception, but works for if statement
[ "", "python", "parsing", "exception", "if-statement", "" ]
I have a simple table:

```
CREATE TABLE aaa_has_bbb (
    aaa_id integer not null,
    bbb_id integer not null,
    rank integer not null,
    primary key(aaa_id, bbb_id),
    unique(aaa_id, rank)
)
```

I am trying to create a rule which will DELETE and INSERT because that will activate some relevant triggers.

```
CREATE OR REPLACE RULE pivot_key_updates AS
ON UPDATE TO aaa_has_bbb
WHERE OLD.aaa_id<>NEW.aaa_id OR OLD.bbb_id<>NEW.bbb_id
DO INSTEAD (
    --
    -- on update of keys in this pivot table, delete and insert instead
    --
    DELETE FROM aaa_has_bbb
    WHERE aaa_id = OLD.aaa_id
    AND bbb_id = OLD.bbb_id;

    INSERT INTO aaa_has_bbb (aaa_id, bbb_id, rank)
    VALUES (NEW.aaa_id, NEW.bbb_id, NEW.rank);
);
```

This never inserts, but successfully deletes. However, if I reverse the order like this:

```
CREATE OR REPLACE RULE pivot_key_updates AS
ON UPDATE TO aaa_has_bbb
WHERE OLD.aaa_id<>NEW.aaa_id OR OLD.bbb_id<>NEW.bbb_id
DO INSTEAD (
    --
    -- on update of keys in this pivot table, delete and insert instead
    --
    INSERT INTO aaa_has_bbb (aaa_id, bbb_id, rank)
    VALUES (NEW.aaa_id, NEW.bbb_id, NEW.rank+1);

    DELETE FROM aaa_has_bbb
    WHERE aaa_id = OLD.aaa_id
    AND bbb_id = OLD.bbb_id;
);
```

Switching the order works? Why? To make this work correctly, I have to use rank+1 to avoid the key collision, but I don't really want to do this. What am I missing?

EDIT: I realize I can make my life easier with triggers, and that's probably what I'll end up doing, but I'm very curious why my rule doesn't work as expected.
I tested and reproduced your problem. This quote from [the manual on `CREATE RULE`](http://www.postgresql.org/docs/current/interactive/sql-createrule.html) should shed some light on the mystery: > Within condition and command, the special table names `NEW` and `OLD` can > be used to refer to values in the referenced table. NEW is valid in ON > `INSERT` and ON `UPDATE` rules to refer to the new row being inserted or > updated. `OLD` is valid in `ON UPDATE` and `ON DELETE` rules to **refer to the > existing row being updated or deleted**. Bold emphasis mine. When you `DELETE` the row first, the following `INSERT` cannot find the referenced row any more and does nothing. I would consider using triggers instead. You might be able to adjust your existing triggers, so you do not need any additional triggers or rules *at all*.
The below fragment works for the simple case, but fails for the second (batch) update, probably caused by the update not being qualified by a where-clause. I was not able to get a correctly qualified range table entry into the resulting query plan; without a where-clause, the delete-target RTE remains unqualified in the final plan. (this *could* be a bug; I am not sure) ``` DROP SCHEMA tmp CASCADE; CREATE SCHEMA tmp ; SET search_path=tmp; CREATE TABLE aaa_has_bbb ( aaa_id integer not null , bbb_id integer not null , zrank integer not null , flipflag BOOLEAN NOT NULL DEFAULT True , primary key(aaa_id, bbb_id) , unique (aaa_id, zrank) ); -- I am trying to create a rule which will DELETE and INSERT because that will activate some relevant triggers. CREATE OR REPLACE RULE pivot_key_updates AS ON UPDATE TO aaa_has_bbb WHERE (OLD.aaa_id <> NEW.aaa_id OR OLD.bbb_id <> NEW.bbb_id) AND OLD.flipflag = NEW.flipflag DO INSTEAD ( -- -- First: copy existing records that fit the criteria -- The flipflag enables us to distinguish between original and cloned rows -- INSERT INTO aaa_has_bbb (aaa_id, bbb_id, zrank, flipflag) SELECT NEW.aaa_id, NEW.bbb_id, NEW.zrank, NOT src.flipflag FROM aaa_has_bbb src WHERE src.aaa_id = OLD.aaa_id AND src.bbb_id = OLD.bbb_id AND src.flipflag = OLD.flipflag ; -- Next: delete existing records that fit the criteria DELETE FROM aaa_has_bbb del WHERE del.aaa_id = OLD.aaa_id AND del.bbb_id = OLD.bbb_id AND del.flipflag = OLD.flipflag ; ); -- Trigger function to reveal actual operations CREATE FUNCTION dingdong() RETURNS TRIGGER AS $func$ BEGIN RAISE NOTICE 'Table= % operation= % Level= %' , TG_TABLE_NAME, TG_OP, TG_LEVEL; RETURN NEW; END $func$ LANGUAGE plpgsql; -- Trigger to reveal actual operations CREATE TRIGGER aaa_has_bbb_dingdong AFTER INSERT OR UPDATE OR DELETE ON aaa_has_bbb FOR EACH ROW EXECUTE PROCEDURE dingdong (); INSERT INTO aaa_has_bbb(aaa_id, bbb_id, zrank) VALUES (1,9, 1) , (2,8, 1) , (3,7, 1) , (4,6, 1) , (5,5, 1) ; -- This works -- EXPLAIN ANALYZE UPDATE aaa_has_bbb up1 SET zrank = 99 , aaa_id = up1.bbb_id , bbb_id = up1.aaa_id WHERE up1.aaa_id = 2; SELECT * FROM aaa_has_bbb; -- This does not work -- EXPLAIN ANALYZE UPDATE aaa_has_bbb up2 SET zrank = 100+up2.zrank , aaa_id = 100+ up2.aaa_id WHERE 1=1; SELECT * FROM aaa_has_bbb; ``` OUTPUT: ``` DROP SCHEMA CREATE SCHEMA SET NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "aaa_has_bbb_pkey" for table "aaa_has_bbb" NOTICE: CREATE TABLE / UNIQUE will create implicit index "aaa_has_bbb_aaa_id_zrank_key" for table "aaa_has_bbb" CREATE TABLE CREATE RULE CREATE FUNCTION CREATE TRIGGER NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW INSERT 0 5 NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW UPDATE 0 aaa_id | bbb_id | zrank | flipflag --------+--------+-------+---------- 1 | 9 | 1 | t 3 | 7 | 1 | t 4 | 6 | 1 | t 5 | 5 | 1 | t 8 | 2 | 99 | f (5 rows) NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= INSERT Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW NOTICE: Table= aaa_has_bbb operation= DELETE Level= ROW UPDATE 0 aaa_id | bbb_id | zrank | flipflag --------+--------+-------+---------- (0 rows) ``` Plan for the final update := {insert+delete}: ``` QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------ Insert on aaa_has_bbb (cost=0.00..61.19 rows=1 width=13) (actual time=0.082..0.082 rows=0 loops=1) -> Nested Loop (cost=0.00..61.19 rows=1 width=13) (actual time=0.011..0.032 rows=5 loops=1) Join Filter: (src.flipflag = up2.flipflag) -> Seq Scan on aaa_has_bbb up2 (cost=0.00..47.80 rows=9 width=13) (actual time=0.005..0.010 rows=5 loops=1) Filter: ((flipflag = flipflag) AND ((aaa_id <> (100 + aaa_id)) OR (bbb_id <> bbb_id))) -> Index Scan using aaa_has_bbb_pkey on aaa_has_bbb src (cost=0.00..1.47 rows=1 width=9) (actual time=0.002..0.002 rows=1 loops=5) Index Cond: ((aaa_id = up2.aaa_id) AND (bbb_id = up2.bbb_id)) Trigger aaa_has_bbb_dingdong: time=0.293 calls=5 Total runtime: 0.425 ms Delete on aaa_has_bbb del (cost=0.00..61.19 rows=1 width=12) (actual time=0.075..0.075 rows=0 loops=1) -> Nested Loop (cost=0.00..61.19 rows=1 width=12) (actual time=0.009..0.047 rows=10 loops=1) Join Filter: (del.flipflag = up2.flipflag) -> Seq Scan on aaa_has_bbb up2 (cost=0.00..47.80 rows=9 width=15) (actual time=0.004..0.011 rows=10 loops=1) Filter: ((flipflag = flipflag) AND ((aaa_id <> (100 + aaa_id)) OR (bbb_id <> bbb_id))) -> Index Scan using aaa_has_bbb_pkey on aaa_has_bbb del (cost=0.00..1.47 rows=1 width=15) (actual time=0.002..0.002 rows=1 loops=10) Index Cond: ((aaa_id = up2.aaa_id) AND (bbb_id = up2.bbb_id)) Trigger aaa_has_bbb_dingdong: time=0.494 calls=10 Total runtime: 0.625 ms Update on aaa_has_bbb up2 (cost=0.00..57.21 rows=1881 width=19) (actual time=0.003..0.003 rows=0 loops=1) -> Seq Scan on aaa_has_bbb up2 (cost=0.00..57.21 rows=1881 width=19) (actual time=0.002..0.002 rows=0 loops=1) Filter: ((((aaa_id <> (100 + aaa_id)) OR (bbb_id <> bbb_id)) AND (flipflag = flipflag)) IS NOT TRUE) Total runtime: 0.023 ms (24 rows) ```
postgres update rule won't insert
[ "", "sql", "postgresql", "triggers", "" ]
I have a .csv file on my F: drive on Windows 7 64-bit that I'd like to read into pandas and manipulate. None of the examples I see read from anything other than a simple file name (e.g. 'foo.csv'). When I try this I get error messages that aren't making the problem clear to me: ``` import pandas as pd trainFile = "F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv" trainData = pd.read_csv(trainFile) ``` The error message says: ``` IOError: Initializing from file failed ``` I'm missing something simple here. Can anyone see it? Update: I did get more information like this: ``` import csv if __name__ == '__main__': trainPath = 'F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv' trainData = [] with open(trainPath, 'r') as trainCsv: trainReader = csv.reader(trainCsv, delimiter=',', quotechar='"') for row in trainReader: trainData.append(row) print trainData ``` I got a permission error on read. When I checked the properties of the file, I saw that it was read-only. I was able to read 892 lines successfully after unchecking it. Now pandas is working as well. No need to move the file or amend the path. Thanks for looking.
I cannot promise that this will work, but it's worth a shot: ``` import pandas as pd import os trainFile = "F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv" pwd = os.getcwd() os.chdir(os.path.dirname(trainFile)) trainData = pd.read_csv(os.path.basename(trainFile)) os.chdir(pwd) ```
A better solution is to use raw string literals like r'pathname\filename' rather than 'pathname\filename', so that backslashes are not treated as escape characters. See [Lexical Analysis](https://docs.python.org/2/reference/lexical_analysis.html) for more details.
Read a .csv into pandas from F: drive on Windows 7
[ "", "python", "csv", "pandas", "" ]
Is there a fast way to serialize a DataFrame? I have a grid system which can run pandas analyses in parallel. In the end, I want to collect all the results (as a DataFrame) from each grid job and aggregate them into one giant DataFrame. How can I save a DataFrame in a binary format that can be loaded rapidly?
The easiest way is just to use [to\_pickle](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_pickle.html) (as a [pickle](https://docs.python.org/3/library/pickle.html)), see [pickling from the docs api page](https://pandas.pydata.org/pandas-docs/stable/reference/io.html#pickling): ``` df.to_pickle(file_name) ``` *Another option is to use [HDF5](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables) (built on PyTables). It is slightly more work to get started but much richer for querying.*
[DataFrame.to\_msgpack](https://pandas.pydata.org/pandas-docs/version/0.25.0/reference/api/pandas.DataFrame.to_msgpack.html) is experimental and not without some issues e.g. with Unicode, but it is **much faster** than pickling. It serialized a dataframe with 5 million rows that was taking 2-3 Gb of memory in about 2 seconds, and the resulting file was about 750 Mb. Loading is somewhat slower, but still way faster than unpickling.
Serialization of a pandas DataFrame
[ "", "python", "pandas", "" ]
Just for curiosity. I discovered lambdas a few days ago. I was just wondering if something like the following can be done (I tried it in the interpreter, but none of my attempts seemed to work): ``` p = lambda x: (lambda x: x%2)/2 ``` There's no explicit purpose. I just didn't find a satisfactory answer. I may have misunderstood lambdas.
You aren't actually *calling* the inner `lambda`: ``` p = lambda x: (lambda x: x%2)(x)/2 ``` Note in Python 2 this example will always return `0` since the remainder from dividing by `2` will be either `0` or `1` and integer-dividing that result by `2` will result in a truncated `0`.
You can use an inner lambda to return another function, based on the outer parameters: ``` mul = lambda x: (lambda y: y * x) times4 = mul(4) print times4(2) ```
Lambda inside lambda
[ "", "python", "lambda", "" ]
As we all know, there's list comprehension, like ``` [i for i in [1, 2, 3, 4]] ``` and there is dictionary comprehension, like ``` {i:j for i, j in {1: 'a', 2: 'b'}.items()} ``` but ``` (i for i in (1, 2, 3)) ``` will end up in a generator, not a `tuple` comprehension. Why is that? My guess is that a `tuple` is immutable, but this does not seem to be the answer.
You can use a generator expression: ``` tuple(i for i in (1, 2, 3)) ``` but parentheses were already taken for … generator expressions.
Raymond Hettinger (one of the Python core developers) had this to say about tuples in a [recent tweet](https://twitter.com/raymondh/status/324664257004322817): > #python tip: Generally, lists are for looping; tuples for structs. Lists are homogeneous; tuples heterogeneous. Lists for variable length. This (to me) supports the idea that if the items in a sequence are related enough to be generated by a, well, generator, then it should be a list. Although a tuple is iterable and seems like simply a immutable list, it's really the Python equivalent of a C struct: ``` struct { int a; char b; float c; } foo; struct foo x = { 3, 'g', 5.9 }; ``` becomes in Python ``` x = (3, 'g', 5.9) ```
Why is there no tuple comprehension in Python?
[ "", "python", "tuples", "list-comprehension", "dictionary-comprehension", "set-comprehension", "" ]
I want to change name and type of column from `BIT(1)` to `INT(1)`. I am using MySQL Workbench. I tried this: ``` ALTER TABLE table_name ADD new_column_name INT(1) update table_name set new_column_name = convert(INT(1), old_column_name) ALTER TABLE table_name DROP COLUMN old_column_name ``` But I am getting this error ``` You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INT(1), old_column_name)' ```
``` ALTER TABLE table_name ADD new_column INT(1); update table_name set new_column = case when old_column then 1 else 0 end; ALTER TABLE table_name DROP COLUMN old_column; ``` or simply use: ``` ALTER TABLE table_name MODIFY COLUMN old_column int; ```
``` ALTER TABLE table_name CHANGE old_column_name new_col_name varchar(20) not null; ``` I put varchar as an example; you can change it to any data type you like. You can also add constraints after that.
Change database column name and its type without losing data
[ "", "mysql", "sql", "database", "" ]