```
import pylab as pl

data = """AP 10
AA 20
AB 30
BB 40
BC 40
CC 30
CD 20
DD 10"""

grades = []
number = []
for line in data.split("/n"):
    x, y = line.split()
    grades.append(x)
    number.append(int(y))

fig = pl.figure()
ax = fig.add_subplot(1, 1, 1)
ax.bar(grades, number)
p.show()
```
This is my code; I wish to make a bar graph from the data. Initially, when I ran my code, I was getting an indentation error on line 17; after indenting the entire `for` block, I started getting this 'too many values to unpack' error on line 16. I am new to Python, and I don't know how to proceed now.
The problem is that your `for`-loop is splitting on the wrong token (`/n`) instead of `\n`. When you only want to split on newlines, there is actually a `splitlines()` method on strings that does just that. You should use this method, because it handles the different newline delimiters across platforms as well (Windows typically denotes newlines via `\r\n`, whereas *nix systems use `\n` and the old Mac OS used `\r`; check the [Python documentation](http://docs.python.org/2/glossary.html#term-universal-newlines) for more information). Your error occurs on the next line: because the string was not split into lines, your whole string is split on whitespace, which produces many more values than the two you try to assign to a tuple.
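A minimal runnable sketch of the point above, using a truncated copy of the question's data and leaving the plotting out:

```python
# A truncated copy of the data from the question.
data = "AP 10\nAA 20\nAB 30"

# Splitting on the two-character string "/n" finds no match, so the whole
# text comes back as a single element; that is what later breaks unpacking.
print(len(data.split("/n")))      # 1

# splitlines() splits on real line breaks and handles \n, \r\n and \r alike.
print(data.splitlines())          # ['AP 10', 'AA 20', 'AB 30']

grades, numbers = [], []
for line in data.splitlines():
    g, n = line.split()           # now exactly two fields per line
    grades.append(g)
    numbers.append(int(n))
print(grades, numbers)            # ['AP', 'AA', 'AB'] [10, 20, 30]
```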
You have a line that does *not* have two items on it:
```
x, y = line.split()
```
did not split into two elements, and that raises the error. Most likely this is because you are not splitting your `data` variable properly and still have the whole text as *one* long string; `/n` does not occur in your `data`. Use `.splitlines()` instead:
```
for line in data.splitlines():
    x, y = line.split()
```
A 'too many values to unpack' error in python
[ "python", "python-3.x", "syntax-error" ]
I have two lists: one contains the products and the other one contains their associated prices. The lists can contain an undefined number of products. An example of the lists would be something like:

* Products: ['Apple', 'Apple', 'Apple', 'Orange', 'Banana', 'Banana', 'Peach', 'Pineapple', 'Pineapple']
* Prices: ['1.00', '2.00', '1.50', '3.00', '0.50', '1.50', '2.00', '1.00', '1.00']

I want to be able to remove all the duplicates from the products list and keep only the cheapest price associated with the unique products in the price list. Note that some products might have the same price (in our example the Pineapple). The desired final lists would be something like:

* Products: ['Apple', 'Orange', 'Banana', 'Peach', 'Pineapple']
* Prices: ['1.00', '3.00', '0.50', '2.00', '1.00']

I would like to know the most effective way to do so in Python. Thank you
```
from collections import OrderedDict

products = ['Apple', 'Apple', 'Apple', 'Orange', 'Banana', 'Banana', 'Peach', 'Pineapple', 'Pineapple']
prices = ['1.00', '2.00', '1.50', '3.00', '0.50', '1.50', '2.00', '1.00', '1.00']

min_prices = OrderedDict()
for prod, price in zip(products, prices):
    min_prices[prod] = min(float(price), min_prices.get(prod, float('inf')))

>>> print min_prices.keys(), min_prices.values()
['Apple', 'Orange', 'Banana', 'Peach', 'Pineapple'] [1.0, 3.0, 0.5, 2.0, 1.0]
```
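On Python 3.7+, a plain `dict` preserves insertion order, so the same idea can be sketched without `OrderedDict`:

```python
products = ['Apple', 'Apple', 'Apple', 'Orange', 'Banana', 'Banana',
            'Peach', 'Pineapple', 'Pineapple']
prices = ['1.00', '2.00', '1.50', '3.00', '0.50', '1.50', '2.00', '1.00', '1.00']

min_prices = {}
for prod, price in zip(products, prices):
    p = float(price)
    # Keep the cheapest price seen so far for each product.
    if prod not in min_prices or p < min_prices[prod]:
        min_prices[prod] = p

print(list(min_prices))           # ['Apple', 'Orange', 'Banana', 'Peach', 'Pineapple']
print(list(min_prices.values()))  # [1.0, 3.0, 0.5, 2.0, 1.0]
```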
Probably the simplest way is to take advantage of dictionaries' enforcement of unique keys:
```
from operator import itemgetter

Products = ['Apple', 'Apple', 'Apple', 'Orange', 'Banana', 'Banana', 'Peach', 'Pineapple', 'Pineapple']
Prices = ['1.00', '2.00', '1.50', '3.00', '0.50', '1.50', '2.00', '1.00', '1.00']

final = dict(sorted(zip(Products, Prices), key=itemgetter(1), reverse=True))
```
Data processing in lists with duplicates in Python
[ "python", "list", "duplicates" ]
This is my simple code.
```
def reverseString(aStr):
    newStr = ''
    if len(aStr) == 0:
        return newStr
    else:
        newStr = newStr + aStr[len(aStr)-1]
        return reverseString(aStr[:len(aStr)-1])
```
For `'alina'` (if I insert `print newStr` before `return reverseString...`), the output is: `newStr='a'`, `newStr='n'`, `newStr='i'`, `newStr='l'`, `newStr='a'`, `newStr=''`. I don't get it. Why does it behave like this?
Something like this:
```
def reverseString(aStr, newStr=''):
    if len(aStr) == 0:
        return newStr
    else:
        newStr = newStr + aStr[-1]  # [-1] returns the last character of the string
        return reverseString(aStr[:-1], newStr)  # slice the string up to the second-to-last char

print reverseString("foobar")  # raboof
```
The problem with your code is that `newStr` gets re-assigned to an empty string (`''`) at each recursive call; you must pass the `newStr` value along in every recursive call.
```
def reverseString(aStr, newStr=''):  # define a default value for newStr
    if len(aStr) == 0:
        return newStr
    else:
        newStr = newStr + aStr[len(aStr)-1]  # better to use aStr[-1]
        return reverseString(aStr[:len(aStr)-1], newStr)  # pass the new value of newStr

print reverseString("foobar")  # no value is passed for newStr, so the default is used
```
The reason your function does not work is that you never pass `newStr` into the recursive call, so on every call `newStr` gets reset back to `''` and the accumulated result is thrown away. There's an easier way to do what you are doing. Use [slicing](http://docs.python.org/2/tutorial/introduction.html#strings):
```
def reverseString(s):
    return s[::-1]
```
Examples:
```
>>> reverseString('alina')
'anila'
>>> reverseString('racecar')
'racecar' # See what I did there ;)
```
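For completeness, here is a Python 3 sketch of the accumulator idea from the answers above, shown next to the slicing shortcut:

```python
def reverse_string(a_str, acc=''):
    # The partial result is threaded through each call instead of being reset.
    if not a_str:
        return acc
    return reverse_string(a_str[:-1], acc + a_str[-1])

print(reverse_string('alina'))   # anila
print('alina'[::-1])             # anila  (the slicing shortcut)
```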
Writing a string backwards
[ "python", "string", "recursion" ]
I want to define a class containing `read` and `write` methods, which can be called as follows:
```
instance.read
instance.write
instance.device.read
instance.device.write
```
To avoid nested classes, my idea was to override the `__getattr__` and `__setattr__` methods and to check whether the given name is `device`, redirecting the lookup to `self`. But I ran into a problem with infinite recursion. The example code is as follows:
```
class MyTest(object):
    def __init__(self, x):
        self.x = x

    def __setattr__(self, name, value):
        if name == "device":
            print "device test"
        else:
            setattr(self, name, value)

test = MyTest(1)
```
As `__init__` tries to create a new attribute `x`, it calls `__setattr__`, which calls `setattr`, which again calls `__setattr__`, and so on. How do I need to change this code so that, in this case, a new attribute `x` of `self` is created, holding the value `1`? Or is there any better way to handle calls like `instance.device.read` so they are 'mapped' to `instance.read`? As there are always questions about the why: I need to create abstractions of `xmlrpc` calls, for which very easy methods like `myxmlrpc.instance.device.read` and similar can be created. I need to 'mock' this up to mimic such multi-dot method calls.
You must call the parent class's `__setattr__` method:
```
class MyTest(object):
    def __init__(self, x):
        self.x = x

    def __setattr__(self, name, value):
        if name == "device":
            print "device test"
        else:
            super(MyTest, self).__setattr__(name, value)
            # in python3+ you can omit the arguments to super:
            # super().__setattr__(name, value)
```
Regarding best practice, since you plan to use this via `xml-rpc` I think this is probably better done inside the [`_dispatch`](http://docs.python.org/2/library/simplexmlrpcserver.html#SimpleXMLRPCServer.SimpleXMLRPCServer.register_instance) method. A quick and dirty way is to simply do:
```
class My(object):
    def __init__(self):
        self.device = self
```
Or you can modify `self.__dict__` from inside `__setattr__()`:
```
class SomeClass(object):
    def __setattr__(self, name, value):
        print(name, value)
        self.__dict__[name] = value

    def __init__(self, attr1, attr2):
        self.attr1 = attr1
        self.attr2 = attr2

sc = SomeClass(attr1=1, attr2=2)
sc.attr1 = 3
```
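Another way to get `instance.device.read` working without touching `__setattr__` at all is a read-only property that resolves back to the instance itself; this is a sketch, not part of the original answers:

```python
class MyTest(object):
    def __init__(self, x):
        self.x = x          # plain attribute assignment, nothing overridden

    def read(self):
        return 'read: %s' % self.x

    @property
    def device(self):
        # instance.device is just the instance, so instance.device.read
        # and instance.read are the same bound method.
        return self

t = MyTest(1)
print(t.device.read())            # read: 1
print(t.device.device.read())     # read: 1  (chains arbitrarily deep)
```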
How to use __setattr__ correctly, avoiding infinite recursion
[ "python", "python-2.7", "getattr", "setattr" ]
I have a query in which I want to select data from a column where the data is a date. The problem is that the data is a mix of text and dates. This bit of SQL only returns the longest text field: ``` SELECT MAX(field_value) ``` Where the date does occur, it is always in the format xx/xx/xxxx I'm trying to select the most recent date. I'm using MS SQL. Can anyone help?
Try this using [`ISDATE`](http://msdn.microsoft.com/en-us/library/ms187347.aspx) and [`CONVERT`](http://msdn.microsoft.com/en-us/library/ms187928.aspx):
```
SELECT MAX(CONVERT(DateTime, MaybeDate))
FROM (
    SELECT MaybeDate
    FROM MyTable
    WHERE ISDATE(MaybeDate) = 1
) T
```
You could also use `MAX(CAST(MaybeDate AS DateTime))`. I got in the (maybe bad?) habit of using `CONVERT` years ago and have stuck with it.
To do this without a conversion error:
```
select max(case when isdate(col) = 1 then cast(col as date) end)  -- or use convert()
from . . .
```
The SQL statement does *not* specify the order of operations. So, even including a `where` clause in a subquery will *not* guarantee that only dates get converted. In fact, the SQL Server optimizer is "smart" enough to do the conversion when the data is brought in and then do the filtering afterwards. The only operation that guarantees sequencing of operations is the `case` statement, and there are even exceptions to that.
Select data in date format
[ "sql", "sql-server", "sql-server-2008" ]
I have a scenario similar to the following, and all I want to do is find the max of three columns; the method I'm using to find the column `mx` seems very long-winded. What is a shorter, more elegant solution?
```
CREATE TABLE #Pig
(
    PigName CHAR(1),
    PigEarlyAge INT,
    PigMiddleAge INT,
    PigOldAge INT
)

INSERT INTO #Pig VALUES
    ('x', 5, 2, 3),
    ('y', 2, 9, 5),
    ('z', 1, 1, 8);

WITH Mx_cte AS
(
    SELECT PigName, Age = PigEarlyAge FROM #Pig
    UNION
    SELECT PigName, Age = PigMiddleAge FROM #Pig
    UNION
    SELECT PigName, Age = PigOldAge FROM #Pig
)
SELECT x.PigName, x.PigEarlyAge, x.PigMiddleAge, x.PigOldAge, y.mx
FROM #Pig x
INNER JOIN
(
    SELECT PigName, mx = Max(Age)
    FROM Mx_cte
    GROUP BY PigName
) y ON x.PigName = y.PigName
```
SQL Server has no equivalent of the `GREATEST` function in other RDBMSs that accepts a list of values and returns the largest. However you can simulate something similar by using a [table valued constructor](http://msdn.microsoft.com/en-us/library/dd776382.aspx) consisting of the desired columns, then applying `MAX` to that.
```
SELECT *,
       (SELECT MAX(Age)
        FROM (VALUES (PigEarlyAge),
                     (PigMiddleAge),
                     (PigOldAge)) V(Age)) AS mx
FROM #Pig
```
If you want the max of the three fields:
```
select (case when max(PigEarlyAge) >= max(PigMiddleAge) and max(PigEarlyAge) >= max(PigOldAge)
             then max(PigEarlyAge)
             when max(PigMiddleAge) >= max(PigOldAge)
             then max(PigMiddleAge)
             else max(PigOldAge)
        end)
from #Pig
```
If you are looking for the rows that have the respective maxima, then use `row_number()`:
```
select p.PigName, PigEarlyAge, PigMiddleAge, PigOldAge,
       (case when PigEarlyAge >= PigMiddleAge and PigEarlyAge >= PigOldAge
             then PigEarlyAge
             when PigMiddleAge >= PigOldAge
             then PigMiddleAge
             else PigOldAge
        end) as BigAge
from (select p.*,
             row_number() over (order by (case when PigEarlyAge >= PigMiddleAge and PigEarlyAge >= PigOldAge
                                               then PigEarlyAge
                                               when PigMiddleAge >= PigOldAge
                                               then PigMiddleAge
                                               else PigOldAge
                                          end) desc) as seqnum
      from #Pig p
     ) p
where p.seqnum = 1;
```
If you want duplicate values, then use `rank()` instead of `row_number()`.
Max of several columns
[ "sql", "sql-server-2012" ]
I am trying to join two tables. Unfortunately there is no single column to join them on. Only the combination of 2 columns (in each of the tables) creates a unique identifier that enables me to make an inner join. How do I do that?

edit: someone suggested making a join using AND. Unfortunately this does not seem to work. Here is an example of the tables:

Table 1
```
Order no | Operation no | ...
FWA1     | 10
FWA2     | 20
FWA3     | 10
```
Table 2
```
Order no | Operation no | Description
FWA1     | 10           | drilling
FWA2     | 20           | grinding
FWA3     | 10           | buffing
```
(Please notice that operation no 10 can have a different description in different orders.)
I think this will do the job
```
select t1.orderNo, t1.operationNo, t2.description
from Table1 t1
inner join Table2 t2
    on t1.orderNo = t2.orderNo
    and t1.operationNo = t2.OperationNo
```
Try the following
```
SELECT *
FROM t1, t2
WHERE t1.f1 + t1.f2 = t2.f3
```
On the other hand, if you have 2 columns on which you want to join then [DanFromGermany's answer](https://stackoverflow.com/a/17039661/7028) is appropriate.
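The two-column `AND` join can be verified end to end with Python's built-in sqlite3; the table and column names below are underscored stand-ins for the ones in the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table1 (order_no TEXT, operation_no INTEGER);
    CREATE TABLE table2 (order_no TEXT, operation_no INTEGER, description TEXT);
    INSERT INTO table1 VALUES ('FWA1', 10), ('FWA2', 20), ('FWA3', 10);
    INSERT INTO table2 VALUES ('FWA1', 10, 'drilling'),
                              ('FWA2', 20, 'grinding'),
                              ('FWA3', 10, 'buffing');
""")
rows = conn.execute("""
    SELECT t1.order_no, t1.operation_no, t2.description
    FROM table1 t1
    INNER JOIN table2 t2
        ON t1.order_no = t2.order_no
       AND t1.operation_no = t2.operation_no
    ORDER BY t1.order_no
""").fetchall()
print(rows)
# [('FWA1', 10, 'drilling'), ('FWA2', 20, 'grinding'), ('FWA3', 10, 'buffing')]
```

Note that FWA1 and FWA3 both have operation 10 yet each keeps its own description, which is exactly what the composite join key buys you.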
Join 2 tables using the combination of 2 columns
[ "sql", "join" ]
I have this query:
```
SELECT e.*, u.name AS event_creator_name
FROM `edu_events` e
LEFT JOIN `edu_users` u ON u.user_id = e.event_creator
INNER JOIN `edu_event_participants` ON participant_event = e.event_id && participant_user = 1
WHERE MONTH(e.event_date_start) = 6
  AND YEAR(e.event_date_start) = 2013
```
It works perfectly. However, I only want to do the INNER JOIN if the value e.event\_type equals 1. If not, it should ignore the INNER JOIN. I have tried for some time to figure it out, but the solutions seem difficult to implement for my purposes (as it is only for select/specific values). I'm thinking about something like:
```
SELECT e.*, u.name AS event_creator_name
FROM `edu_events` e
LEFT JOIN `edu_users` u ON u.user_id = e.event_creator
if (e.event_type == 1) {
    INNER JOIN `edu_event_participants` ON participant_event = e.event_id && participant_user = 1
}
WHERE MONTH(e.event_date_start) = 6
  AND YEAR(e.event_date_start) = 2013
```
If I understand correctly, you only want the results where there is an entry in edu\_event\_participants with the same event\_id and participant\_user = 1, but only if event\_type = 1, and you don't really want to get any information from the edu\_event\_participants table. If that is the case:
```
SELECT e.*, u.name AS event_creator_name
FROM `edu_events` e
LEFT JOIN `edu_users` u ON u.user_id = e.event_creator
WHERE
    -- as Simon at mso.net suggested
    e.event_date_start BETWEEN DATE('2013-06-01') AND DATE('2013-07-01')
    -- MONTH(e.event_date_start) = 6
    -- AND YEAR(e.event_date_start) = 2013
    AND (
        -- either the event is public
        e.event_type = 1
        or
        -- or the user is in the participants table
        exists (select 1
                from `edu_event_participants`
                where participant_event = e.event_id
                  AND participant_user = 1)
    )
```
I have edited the below following further feedback from @Matthias
```
-- This will get all events for a given user plus all globals
SELECT e.*, u.name AS event_creator_name
FROM `edu_users` u
-- in the events
INNER JOIN `edu_events` e
    ON (
        -- Get all the ones that the user is participant in
        e.event_creator = u.user_id
        -- Or where event_type is 1
        OR e.event_type = 1
    )
    AND e.event_date_start BETWEEN DATE('2013-06-01') AND DATE('2013-07-01')
-- Add in event participants even though it doesn't seem to be used?
INNER JOIN `edu_event_participants` AS eep
    ON eep.participant_event = e.event_id
    AND eep.participant_user = 1
-- Add the user ID into the WHERE
WHERE u.user_id = 1;
```
This just might not make too much sense, as it feels as though edu\_event\_participants holds too much information. event\_creator should really be stored against the event itself, with event\_participants containing just an event id, user id, and user type. If you are looking to get all users on an event, it may be better to do a separate query for that event to select all users based off an event\_id.

A note on your use of MONTH() and YEAR(): this will trigger a table scan, as MySQL will need to apply the MONTH() and YEAR() functions to all rows to determine which match that WHERE statement. If you instead calculate the upper and lower limits (i.e. `2013-06-01 00:00:00 <= e.event_date_start < 2013-07-01 00:00:00`) then MySQL can use a far more efficient range scan on an index (assuming one exists on e.event\_date\_start).
MySQL do INNER JOIN if specific value is 1
[ "mysql", "sql" ]
As per the subject, I am trying to replace a slow SQL IN statement with an INNER or LEFT JOIN. What I am trying to get rid of:
```
SELECT sum(VR.Weight)
FROM verticalresponses VR
WHERE RespondentID IN (
    SELECT RespondentID
    FROM verticalstackedresponses VSR
    WHERE VSR.Question = 'Brand Aware'
)
```
The above I tried replacing with
```
SELECT sum(VR.Weight)
FROM verticalresponses VR
LEFT/INNER JOIN verticalstackedresponses VSR
    ON VSR.RespondentID = VR.RespondentID
    AND VSR.Question = 'Brand Aware'
```
but unfortunately I'm getting different results. Can anyone see why, and if possible advise a solution that will do the job just quicker? Thanks a lot!
The subquery
```
SELECT RespondentID
FROM verticalstackedresponses VSR
WHERE VSR.Question = 'Brand Aware'
```
could be returning multiple rows for a given RespondentID; in that case you would get different results between the join and `IN` versions. Something along the lines of this may give the same results:
```
SELECT sum(VR.Weight)
FROM verticalresponses VR
JOIN (
    SELECT DISTINCT RespondentID
    FROM verticalstackedresponses
    WHERE Question = 'Brand Aware'
) VSR ON VSR.RespondentID = VR.RespondentID
```
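The duplicate-row effect is easy to reproduce with Python's sqlite3 module (table and column names below are simplified stand-ins):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE vr  (respondent_id INTEGER, weight REAL);
    CREATE TABLE vsr (respondent_id INTEGER, question TEXT);
    INSERT INTO vr  VALUES (1, 10.0), (2, 5.0);
    -- respondent 1 answered the question twice
    INSERT INTO vsr VALUES (1, 'Brand Aware'), (1, 'Brand Aware'), (2, 'Brand Aware');
""")

in_sum = conn.execute("""
    SELECT SUM(weight) FROM vr
    WHERE respondent_id IN
        (SELECT respondent_id FROM vsr WHERE question = 'Brand Aware')
""").fetchone()[0]

join_sum = conn.execute("""
    SELECT SUM(vr.weight) FROM vr
    JOIN vsr ON vsr.respondent_id = vr.respondent_id
            AND vsr.question = 'Brand Aware'
""").fetchone()[0]

print(in_sum, join_sum)   # 15.0 25.0 -- the plain JOIN double-counts respondent 1
```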
* A JOIN will multiply rows because it's an "Equi join" * IN (and EXISTS) will not multiply rows because these are "Semi joins" Either way, you need suitable indexes, probably * verticalresponses, (RespondentID) * verticalstackedresponses, (Question, RespondentID) See [Using 'IN' with a sub-query in SQL Statements](https://stackoverflow.com/questions/6966023/using-in-with-a-sub-query-in-sql-statements/6966259#6966259) for more
SQL - Replacing slow "IN" statement with a JOIN
[ "sql", "performance" ]
I'm accessing a Firebird database through Microsoft Query in Excel. I have a parameter field in Excel that contains a 4 digit number. One of my DB tables has a column (`TP.PHASE_CODE`) containing a 9 digit phase code, and I need to return any of those 9 digit codes that start with the 4 digit code specified as a parameter. For example, if my parameter field contains '8000', I need to find and return any phase code in the other table/column that is `LIKE '8000%'`. I am wondering how to accomplish this in SQL, since it doesn't seem like the '?' representing the parameter can be included in a LIKE statement. (If I write in the 4 digits, the query works fine, but it won't let me use a parameter there.) The problematic statement is this one: `TP.PHASE_CODE like '?%'`

Here is my full code:
```
SELECT C.COSTS_ID, C.AREA_ID, S.SUB_NUMBER, S.SUB_NAME, TP.PHASE_CODE, TP.PHASE_DESC,
       TI.ITEM_NUMBER, TI.ITEM_DESC, TI.ORDER_UNIT, C.UNIT_COST, TI.TLPE_ITEMS_ID
FROM TLPE_ITEMS TI
INNER JOIN TLPE_PHASES TP ON TI.TLPE_PHASES_ID = TP.TLPE_PHASES_ID
LEFT OUTER JOIN COSTS C ON C.TLPE_ITEMS_ID = TI.TLPE_ITEMS_ID
LEFT OUTER JOIN AREA A ON C.AREA_ID = A.AREA_ID
LEFT OUTER JOIN SUPPLIER S ON C.SUB_NUMBER = S.SUB_NUMBER
WHERE (C.AREA_ID = 1 OR C.AREA_ID = ?)
  and S.SUB_NUMBER = ?
  and TI.ITEM_NUMBER = ?
  and TP.PHASE_CODE like '?%'
ORDER BY TP.PHASE_CODE
```
Any ideas on alternate ways of accomplishing this query?
If you use `LIKE '?%'`, then the question mark is literal text, not a parameter placeholder. You can use `LIKE ? || '%'`, or alternatively, if your parameter itself never contains a LIKE pattern, `STARTING WITH ?`, which might be more efficient if the field you're querying is indexed.
You can do ``` and TP.PHASE_CODE like ? ``` but when you pass your parameter `8000` to the SQL, you have to add the `%` behind it, so in this case, you would pass `"8000%"` to the SQL.
Using the '?' Parameter in SQL LIKE Statement
[ "sql", "excel", "parameters", "firebird", "sql-like" ]
I want to join two tables even if there is no match in the second one.

table user:
```
uid | name
1     dude1
2     dude2
```
table account:
```
uid | accountid | name
1     1           account1
```
table I want:
```
uid | username | accountname
1     dude1      account1
2     dude2      NULL
```
the query I'm trying with:
```
SELECT user.uid as uid, user.name as username, account.name as accountname
FROM user
RIGHT JOIN account ON user.uid = account.uid
```
what I'm getting:
```
uid | username | accountname
1     dude1      account1
```
use `Left Join` instead
```
SELECT user.uid as uid, user.name as username, account.name as accountname
FROM user
LEFT JOIN account ON user.uid = account.uid
```
Try with a `LEFT JOIN` query
```
SELECT user.uid as uid, user.name as username, account.name as accountname
FROM user
LEFT JOIN account ON user.uid = account.uid
```
I'd like you to have a look at this visual representation of [**`JOIN`** queries](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins)
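The LEFT JOIN behaviour can be checked with Python's sqlite3 using the exact tables from the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE user    (uid INTEGER, name TEXT);
    CREATE TABLE account (uid INTEGER, accountid INTEGER, name TEXT);
    INSERT INTO user    VALUES (1, 'dude1'), (2, 'dude2');
    INSERT INTO account VALUES (1, 1, 'account1');
""")
rows = conn.execute("""
    SELECT user.uid, user.name, account.name
    FROM user
    LEFT JOIN account ON user.uid = account.uid
    ORDER BY user.uid
""").fetchall()
print(rows)   # [(1, 'dude1', 'account1'), (2, 'dude2', None)]
```

The unmatched row for uid 2 survives with a NULL account name, which is the output the question asks for.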
right join even if row on second table does not exist
[ "mysql", "sql" ]
I have been working with Unicode values in Python for some time now. Firstly, all the questions and answers here have helped a lot. Thanks :) Next, I am stuck in my project where I want to isolate the Unicode values for each language. For example, a certain function only accepts Hindi codes, which are the Unicode values 0900 to 097F. Now I want it to reject all other Unicode values. I have, as of now, done:
```
for i in range(len(l1)):
    for j in range(len(l1[i])):
        unn = '%04x' % ord(l1[i][j])
        unn1 = int(unn, 16)
        if unn1 not in range(2304, 2431):
            l1[i] = l1[i].replace(l1[i][j], '')
```
This code takes in the values from a list l1 and does what I want it to, but there is a problem: it processes one character and then terminates at line 3. On manually running it again, it runs, processes one or two more characters, and then terminates again. I can't even put it inside a loop. Please help.

---

Updated: I didn't want to put up another post, so I'm using this one. I got some help and modified the code; there is an index problem.
```
for i in range(len(dictt)):
    j = 0
    while(1):
        if j >= len(dictt[i]):
            break
        unn = '%04x' % ord(dictt[i][j])
        unn1 = int(unn, 16)
        j = j + 1
        if unn1 not in range(2304, 2431):
            dictt[i] = dictt[i].replace(dictt[i][j-1], '')
            j = 0
```
This code works perfectly fine for my previous query, I mean for a specific range, but if I change the range or the functionality then the same problem arises again at the same line. Why is that line giving an error?
The best solution is most likely using regex to filter out the unwanted characters. You basically need a regex that matches your Hindi characters, but as far as I know Hindi characters are bugged in the "re" module, so I recommend installing the "regex" module with:

> $ pip install regex

After that you can simply do a word-by-word check that all words are written in Hindi:
```
# kinda pseudo code, sorry
import regex

words = yourString.split(" ")
for word in words:
    if not regex.match(HINDI_WORD_REGEX, word):
        # whatever you want to do
        pass
```
You can also find some useful information related to your problem here:

[Python - pyparsing unicode characters](https://stackoverflow.com/questions/2339386/python-pyparsing-unicode-characters)

[Python unicode regular expression matching failing with some unicode characters -bug or mistake?](https://stackoverflow.com/questions/12746458/python-unicode-regular-expression-matching-failing-with-some-unicode-characters)

Hope this at least helps you to start. Good luck!
```
def filter_chars(text, code_range):
    # Keep only the characters whose code point falls inside code_range,
    # e.g. range(0x0900, 0x0980) for Devanagari.
    # (Renamed from `filter` so it doesn't shadow the built-in.)
    return ''.join([char for char in text if ord(char) in code_range])
```
How to filter out unicode characters in Python?
[ "python", "unicode" ]
Is there a way to write the following loop as a map?
```
alist = []
for name in range(4):
    for sname in range(15):
        final = Do_Class(name, sname)  # a class, not to be bothered with
        alist.append(final)
```
Instead, as `alist.append(map(.....multiple map within maybe?))`

UPDATE:
```
x = [Do_Class(name, sname) for name in xrange(15) for sname in xrange(4)]
alist = [i for i in x]
```
the above works with no error
```
alist = [i for i in Do_Class(name, sname) for name in xrange(15) for sname in xrange(4)]
```
throws back `UnboundLocalError: local variable 'sname' referenced before assignment`. This has got to be the lamest thing in Python
I would use `itertools.product`:
```
from itertools import product

alist = [Do_Class(x[0], x[1]) for x in product(range(4), range(15))]
```
if you absolutely need map:
```
alist = map(lambda x: Do_Class(x[0], x[1]), product(range(4), range(15)))
```
if you want a shorter version, but less readable:
```
alist = map(Do_Class, sorted(range(4)*15), range(15)*4)
```
## edit

need to sort the range(4)\*15 to obtain 0, 0, 0, ..., 1, ... rather than 0, 1, 2, 3, 0, ...

## edit 2

I stumbled upon `itertools.starmap`, which should give something like:
```
from itertools import starmap
from itertools import product

alist = starmap(Do_Class, product(range(4), range(15)))
```
thought that was a nice solution too.
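A runnable Python sketch of the `product` approach, with small ranges and a tuple standing in for `Do_Class` (which isn't shown in the question):

```python
from itertools import product

# product yields every (name, sname) pair, outer index varying slowest.
pairs = [(name, sname) for name, sname in product(range(2), range(3))]
print(pairs)
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]

# The equivalent nested comprehension; note that both loop variables are
# bound inside the comprehension itself, which is what the failing
# one-liner in the question got wrong.
same = [(name, sname) for name in range(2) for sname in range(3)]
assert pairs == same
```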
You do not want/need `map` for this: ``` alist = [Do_Class(name, sname) for sname in range(15) for name in range(4)] ``` Using `map` would only be appropriate if you could do something like `map(somefunc, somelist)`. If that's not the case you'd need a lambda which just adds unnecessary overhead compared to a list comprehension.
python: map() two iterations with variables in map iterator
[ "python", "python-2.7", "dictionary" ]
I am trying to find the unique differences between 5 different lists. I have seen multiple examples of how to find differences between two lists but have not been able to apply this to multiple lists. It has been easy to find the similarities between 5 lists. Example: ``` list(set(hr1) & set(hr2) & set(hr4) & set(hr8) & set(hr24)) ``` However, I want to figure out how to determine the unique features for each set. Does anyone know how to do this?
How's this? Say we have input lists `[1, 2, 3, 4]`, `[3, 4, 5, 6]`, and `[3, 4, 7, 8]`. We would want to pull out `[1, 2]` from the first list, `[5, 6]` from the second list, and `[7, 8]` from the third list.
```
from itertools import chain

A_1 = [1, 2, 3, 4]
A_2 = [3, 4, 5, 6]
A_3 = [3, 4, 7, 8]

# Collect the input lists for use with chain below
all_lists = [A_1, A_2, A_3]

for A in (A_1, A_2, A_3):
    # Combine all the lists into one
    super_list = list(chain(*all_lists))
    # Remove the items from the list under consideration
    for x in A:
        super_list.remove(x)
    # Get the unique items remaining in the combined list
    super_set = set(super_list)
    # Compute the unique items in this list and print them
    uniques = set(A) - super_set
    print(sorted(uniques))
```
Could this help? I'm assuming a list of lists to illustrate this example, but you can modify the data structure to cater to your needs.
```
from collections import Counter
from itertools import chain

list_of_lists = [
    [0, 1, 2, 3, 4, 5],
    [4, 5, 6, 7, 8, 8],
    [8, 9, 2, 1, 3]
]

counts = Counter(chain.from_iterable(map(set, list_of_lists)))
uniques_list = [[x for x in lst if counts[x] == 1] for lst in list_of_lists]
# uniques_list = [[0], [6, 7], [9]]
```
Edit (based on some useful comments):
```
counts = Counter(chain.from_iterable(list_of_lists))
unique_list = [k for k, c in counts.items() if c == 1]
```
Unique features between multiple lists
[ "python", "list", "set", "unique" ]
So, I have lists of words and I need to know how often each word appears on each list. Using ".count(word)" works, but it's too slow (each list has thousands of words and I have thousands of lists). I've been trying to speed things up with numpy. I generated a unique numerical code for each word, so I could use numpy.bincount (since it only works with integers, not strings). But I get "ValueError: array is too big". So now I'm trying to tweak the "bins" argument of the numpy.histogram function to make it return the frequency counts I need (somehow numpy.histogram seems to have no trouble with big arrays). But so far no good. Anyone out there happens to have done this before? Is it even possible? Is there some simpler solution that I'm failing to see?
Don't use numpy for this. Use [`collections.Counter`](http://docs.python.org/2/library/collections#counter-objects) instead. It's designed for this use case.
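A short sketch of `Counter` on word lists, showing the behaviour that makes it a drop-in replacement for repeated `.count(word)` calls:

```python
from collections import Counter

words = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the']
counts = Counter(words)           # one pass over the list, not one per word

print(counts['the'])              # 3
print(counts['dog'])              # 0 -- missing keys count as zero, no KeyError
print(counts.most_common(1))      # [('the', 3)]
```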
Why not reduce your integers to the minimum set using `numpy.unique`:
```
original_keys, lookup_vals = numpy.unique(big_int_string_array, return_inverse=True)
```
You can then just use `numpy.bincount` on `lookup_vals`, and if you need to get back the original unique integer, you can just use the values of `lookup_vals` as indices into `original_keys`. So, something like:
```
import binascii
import numpy

string_list = ['a', 'b', 'c', 'a', 'b', 'd', 'c']
int_list = [binascii.crc32(string)**2 for string in string_list]

original_keys, lookup_vals = numpy.unique(int_list, return_inverse=True)
bins = numpy.bincount(lookup_vals)
```
Also, it avoids the need to square your integers.
Can I trick numpy.histogram into behaving like numpy.bincount?
[ "python", "numpy", "histogram" ]
I want to merge strings (words) that are similar (one string is contained within the other).
```
word
wor
words
wormhole
hole
```
Would make:
```
words
wormhole
```
As `wor` overlaps with `word`, `words`, `wormhole`, `wor` is discarded; `word` overlaps with `words`, so `word` is discarded; `hole` overlaps with `wormhole`, so `hole` is discarded; but `words` and `wormhole` don't overlap, so they stay. How can I do this?

**Edit**

My solution is:
```
while read a
do
    grep $a FILE | awk 'length > m { m = length; a = $0 } END { print a }'
done < FILE | sort -u
```
But I don't know if it would cause trouble with large datasets.
It seems to me that by sorting the words longest-to-shortest, we can then step through the sorted list only once, matching only against kept words. I'm poor at algorithmic analysis, but this makes sense to me and I *think* the performance would be good. It also seems to work, assuming the order of the kept words doesn't matter:
```
words = ['word', 'wor', 'words', 'wormhole', 'hole']
keepers = []
words.sort_by(&:length).reverse.each do |word|
  keepers.push(word) if !keepers.any? { |keeper| keeper.include?(word) }
end
keepers # => ["wormhole", "words"]
```
If the order of the kept words does matter, it would be pretty easy to modify this to account for that. One option would simply be:
```
words & keepers # => ["words", "wormhole"]
```
In Ruby:
```
list = %w[word wor words wormhole]
list.uniq
    .tap { |a| a.reverse_each { |e| a.delete(e) if (a - [e]).any? { |x| x.include?(e) } } }
```
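Since the question is also tagged python, here is the longest-first idea from the first answer as a Python sketch:

```python
def merge_words(words):
    # Longest first: a word survives only if no already-kept word contains it.
    keepers = []
    for word in sorted(set(words), key=len, reverse=True):
        if not any(word in kept for kept in keepers):
            keepers.append(word)
    return keepers

print(sorted(merge_words(['word', 'wor', 'words', 'wormhole', 'hole'])))
# ['words', 'wormhole']
```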
Merge/Discard overlapping words
[ "python", "ruby", "perl", "bash" ]
I have a question related to Python code. I need to aggregate if the key is `kv1`; how can I do that?
```
input = 'num=123-456-7890&kv=1&kv2=12&kv3=0'
result = {}
for pair in input.split('&'):
    (key, value) = pair.split('=')
    if key in 'kv1':
        print value
        result[key] += int(value)
print result['kv1']
```
Thanks a lot!!
I'm assuming you meant `key == 'kv1'`, that the `kv` within `input` was meant to be `kv1`, and that `result` is an empty `dict` that doesn't need `result[key] += int(value)`, just `result[key] = int(value)`:
```
input = 'num=123-456-7890&kv1=1&kv2=12&kv3=0'
keys = {k: v for k, v in [i.split('=') for i in input.split('&')]}
print keys
# {'num': '123-456-7890', 'kv2': '12', 'kv1': '1', 'kv3': '0'}

result = {}
for key, value in keys.items():
    if key == 'kv1':
        # if you need to increase result['kv1']
        _value = result[key] + int(value) if key in result else int(value)
        result[key] = _value
        # if you need to set result['kv1']
        result[key] = int(value)
print result
# {'kv1': 1}
```
Assuming you have multiple lines with data like:
```
num=123-456-7890&kv1=2&kv2=12&kv3=0
num=123-456-7891&kv1=1&kv2=12&kv3=0
num=123-456-7892&kv1=4&kv2=12&kv3=0
```
Reading line-by-line from a file:
```
def get_key(data, key):
    keys = {k: v for k, v in [i.split('=') for i in data.split('&')]}
    for k, v in keys.items():
        if k == key:
            return int(v)
    return None

results = []
for line in [line.strip() for line in open('filename', 'r')]:
    value = get_key(line, 'kv1')
    if value:
        results.append({'kv1': value})
print results
# could be [{'kv1': 2}, {'kv1': 1}, {'kv1': 4}]
```
Or just one `string`:
```
with open('filename', 'r') as f:
    data = f.read()

keys = {k: v for k, v in [i.split('=') for i in data.split('&')]}
result = {}
for key, value in keys.items():
    if key == 'kv1':
        result[key] = int(value)
```
Console i/o:
```
c:\nathan\python\bnutils>python
Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> def get_key(data, key):
...     keys = {k: v for k, v in [i.split('=') for i in data.split('&')]}
...     for k, v in keys.items():
...         if k == key: return int(v)
...     return None
...
>>> results = []
>>> for line in [line.strip() for line in open('test.txt', 'r')]:
...     value = get_key(line, 'kv1')
...     if value:
...         results.append({'kv1': value})
...
>>> print results
[{'kv1': 2}, {'kv1': 1}, {'kv1': 4}]
>>>
```
test.txt:
```
num=123-456-7890&kv1=2&kv2=12&kv3=0
num=123-456-7891&kv1=1&kv2=12&kv3=0
num=123-456-7892&kv1=4&kv2=12&kv3=0
```
``` import urlparse urlparse.parse_qs(input) ``` results in: {'num': ['123-456-7890'], 'kv2': ['12'], 'kv': ['1'], 'kv3': ['0']} The keys are aggregated for you.
Aggregating key value pair in python
[ "", "python", "key-value", "aggregation", "" ]
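The parse-then-filter idea from the answers above can be written compactly on Python 3 with the standard-library query-string parser; `parse_qs` collects repeated keys into lists, so summing handles the aggregation case directly. This is a minimal sketch using the query string from the question (on Python 2 the same function lives in `urlparse.parse_qs`, as the second answer shows):

```python
from urllib.parse import parse_qs

query = 'num=123-456-7890&kv1=1&kv2=12&kv3=0'
params = parse_qs(query)
# parse_qs returns lists, so repeated keys aggregate naturally,
# e.g. 'kv1=1&kv1=2' -> {'kv1': ['1', '2']}
kv1_total = sum(int(v) for v in params.get('kv1', []))
print(kv1_total)  # 1
```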
Using Python2.7, if I try to compare the identity of two numbers, I don't get the same results for `int` and `long`. **int** ``` >>> a = 5 >>> b = 5 >>> a is b True ``` **long** ``` >>> a = 885763521873651287635187653182763581276358172635812763 >>> b = 885763521873651287635187653182763581276358172635812763 >>> a is b False ``` I have a few related questions: * Why is the behavior different between the two? * Am I correct to generalize this behavior to all `int`s and all `long`s? * Is this CPython specific?
### Why is the behavior different between the two? When you create a new value in CPython, you essentially create an entire new object containing that value. For small integer values (from -5 to 256), CPython decides to reuse old objects you have created earlier. This is called *interning* and it happens for performance reasons – it is considered cheaper to keep around old objects instead of creating new ones. ### Am I correct to generalize this behavior to all ints and all longs? No. As I said, it only happens for small ints. It also happens for short strings. (Where short means less than 7 characters.) **Do not rely on this.** It is only there for performance reasons, and your program shouldn't depend on the interning of values. ### Is this CPython specific? Not at all. Although the specifics may vary, many other platforms do interning. Notable in this aspect is the JVM, since in Java, the `==` operator means the same thing as the Python `is` operator. People learning Java are doing stuff like ``` String name1 = "John"; String name2 = "John"; if (name1 == name2) { System.out.println("They have the same name!"); } ``` which of course is a bad habit, because if the names were longer, they would be in different objects and the comparison would be false, just like in Python if you would use the `is` operator.
This isn't a difference between `int` and `long`. CPython interns small ints (from `-5` to `256`) ``` >>> a = 257 >>> b = 257 >>> a is b False ```
Why are CPython ints unique and long not?
[ "", "python", "identity", "python-2.x", "" ]
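The caching boundary described in the accepted answer can be demonstrated directly. One subtlety (an assumption worth flagging): literals compiled in the same code block may be folded into a single shared constant, so the large values below are built at runtime with `int(...)` to keep the demonstration honest. This behavior is CPython-specific:

```python
a, b = 256, 256
print(a is b)  # True in CPython: small ints (-5..256) are cached

# build the values at runtime so the compiler can't fold them
# into one shared constant
c = int('257')
d = int('257')
print(c is d)  # False in CPython: two distinct objects
```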
I have a MySQL table named 'events' that contains event data. The important columns are 'start' and 'end', which contain strings (YYYY-MM-DD) representing when the event starts and ends. I want to get the records for all the active events in a time period. Events: ``` ------------------------------ ID | START | END | ------------------------------ 1 | 2013-06-14 | 2013-06-14 | 2 | 2013-06-15 | 2013-08-21 | 3 | 2013-06-22 | 2013-06-25 | 4 | 2013-07-01 | 2013-07-10 | 5 | 2013-07-30 | 2013-07-31 | ------------------------------ ``` Request/search: ``` Example: All events between 2013-06-13 and 2013-07-22 : #1, #3, #4 SELECT id FROM events WHERE start BETWEEN '2013-06-13' AND '2013-07-22' : #1, #2, #3, #4 SELECT id FROM events WHERE end BETWEEN '2013-06-13' AND '2013-07-22' : #1, #3, #4 ====> intersect : #1, #3, #4 ``` ``` Example: All events between 2013-06-14 and 2013-06-14 : SELECT id FROM events WHERE start BETWEEN '2013-06-14' AND '2013-06-14' : #1 SELECT id FROM events WHERE end BETWEEN '2013-06-14' AND '2013-06-14' : #1 ====> intersect : #1 ``` I tried many queries but still failed to get the right SQL. Does anyone know how to do this? Any suggestions? Thanks!
If I understood correctly, you are trying to use a single query; I think you can just merge your date conditions together in the `WHERE` clause ``` SELECT id FROM events WHERE start BETWEEN '2013-06-13' AND '2013-07-22' AND end BETWEEN '2013-06-13' AND '2013-07-22' ``` or, even more simply, you can use both columns to set the search time filter ``` SELECT id FROM events WHERE start >= '2013-06-13' AND end <= '2013-07-22' ```
You need the events that start and end within the scope. But that's not all: you also want the events that start within the scope and the events that end within the scope. But then you're still not there because you also want the events that start before the scope and end after the scope. Simplified: 1. events with a start date in the scope 2. events with an end date in the scope 3. events with the scope startdate between the startdate and enddate Because point 2 results in records that also meet the query in point 3 we will only need points 1 and 3 So the SQL becomes: ``` SELECT * FROM events WHERE start BETWEEN '2014-09-01' AND '2014-10-13' OR '2014-09-01' BETWEEN start AND end ```
MySQL query to select events between start/end date
[ "", "mysql", "sql", "date", "date-range", "" ]
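As a runnable check of the containment condition discussed above (both `start` and `end` inside the window), here is the question's table reproduced in an in-memory SQLite database; the column is named `end_` because `END` is a reserved word in SQLite, and ISO dates compare correctly as strings:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE events (id INTEGER, start TEXT, end_ TEXT)')
conn.executemany('INSERT INTO events VALUES (?, ?, ?)', [
    (1, '2013-06-14', '2013-06-14'),
    (2, '2013-06-15', '2013-08-21'),
    (3, '2013-06-22', '2013-06-25'),
    (4, '2013-07-01', '2013-07-10'),
    (5, '2013-07-30', '2013-07-31'),
])

ids = [row[0] for row in conn.execute(
    "SELECT id FROM events WHERE start >= ? AND end_ <= ? ORDER BY id",
    ('2013-06-13', '2013-07-22'))]
print(ids)  # [1, 3, 4] (the expected output in the question)
```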
I am taking a query from a database, using two tables, and am getting the error described in the title of my question. In some cases, the field I need to query by is in table A, but in other cases it is in table B. I dynamically create columns to search for (which can either be in table A or table B), and the WHERE clause in my code is causing the error. Is there a dynamic way to fix this, such as "if the column is in table B then search using table B", or is the INNER JOIN supposed to fix this (which it currently isn't doing)? **Table A** fields: id **Table B** fields: id --- SQL code ``` SELECT * FROM A INNER JOIN B ON A.id = B.id WHERE <cfloop from="1" to="#listLen(selectList1)#" index="i"> #ListGetAt(selectList1, i)# LIKE UPPER(<cfqueryparam cfsqltype="cf_sql_varchar" value="%#ListGetAt(selectList2,i)#%" />) <!--- search column name = query parameter using the same index in both lists (selectList1) (selectList2) ---> <cfif i neq listLen(selectList1)>AND</cfif> <!---append an "AND" if we are on any but the very last element of the list (in that case we don't need an "AND"---> </cfloop> ``` [Question posed here too](https://stackoverflow.com/questions/5966331/ambiguous-column-name-sql-error-with-inner-join-why) I would like to be able to search any additional fields in both **table A** and **table B**, with the id column as the data that links the two.
``` Employee ------------------ Emp_ID Emp_Name Emp_DOB Emp_Hire_Date Emp_Supervisor_ID Sales_Data ------------------ Check_ID Tender_Amt Closed_DateTime Emp_ID ``` Every column you reference should be preceded by the table alias (but you already knew that.) For instance: ``` SELECT E.Emp_ID, SD.Check_ID, SD.Closed_DateTime FROM Employee E INNER JOIN Sales_Data SD ON E.Emp_ID = SD.Emp_ID ``` However, when you select all (\*) it tries to get all columns from both tables. Let's see what that would look like: ``` SELECT * FROM Employee E INNER JOIN Sales_Data SD ON E.Emp_ID = SD.Emp_ID ``` The compiler sees this as: ``` **Emp_ID**, Emp_Name, Emp_DOB, Emp_Hire_Date, Emp_Supervisor_ID, Check_ID, Tender_Amt, Closed_DateTime, **Emp_ID** ``` Since it tries to get all columns from both tables, **Emp\_ID** is duplicated, but SQL doesn't know which Emp\_ID comes from which table, so you get the "ambiguous column name" error on the inner join. So, you can't use (\*), because any column names that exist in both tables will be ambiguous. Odds are you don't want all columns anyway. In addition, if you are adding any columns to your SELECT line via your cfloop they must be preceded by the table alias as well. --Edit: I cleaned up the examples and changed "SELECT \* pulls all columns from the first table" to "SELECT \* pulls all columns from both tables". Shawn pointed out I was incorrect.
You have to write your where clause in such a way that you can say A.field\_from\_A or B.field\_from\_B. You can always pass A.field\_from\_A. Although, you don't really want to say `SELECT * FROM A INNER JOIN B ON A.id=B.id where B.id = '1'`. You would want to say `SELECT * FROM B INNER JOIN A ON B.id=A.id where B.id = '1'` You can get some really slow queries if you try to use a joined table in the where clause. There are times when it's unavoidable, but best practice is to always have your where clause only call from the main table.
How to fix an "ambiguous column name" error when using an inner join
[ "", "sql", "coldfusion", "" ]
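The error itself is easy to reproduce outside ColdFusion. As a runnable illustration (using SQLite as a stand-in for the database in the question), an unqualified column that exists in both joined tables raises the same complaint, which is why prefixing with a table alias fixes it:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE a (id INTEGER, name TEXT)')
conn.execute('CREATE TABLE b (id INTEGER, amount REAL)')

try:
    # unqualified: `id` exists in both tables
    conn.execute('SELECT id FROM a INNER JOIN b ON a.id = b.id')
except sqlite3.OperationalError as e:
    print(e)  # ambiguous column name: id

# qualified with the table alias: no ambiguity
conn.execute('SELECT a.id FROM a INNER JOIN b ON a.id = b.id')
```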
I'm new to Python, but basically I want to create sub-groups of elements from the list with a double loop: I am going to compare the first element with the next ones to figure out if I can create these sublists; otherwise I will break the inner loop, and I want to continue with the last element, but in the main loop. Example: `5,7,8,4,11`. Compare 5 with 7: is it smaller? Yes, so include it in the new list and, with the inner for, continue with the next, 8: is 5 smaller? Yes, so include it in the new list. But when comparing with 4, I break the loop, so I want to continue in m with this 4 to start with the next one, in this case with 11... ``` for m in xrange(len(path)): for i in xrange(m+1,len(path)): if (path[i] > path[m]): newlist.append(path[i]) else: break m=m+i ``` Thanks for suggestions or other ideas to achieve this! P.S. Some sample input: input: `[45,78,120,47,58,50,32,34]` output: `[45,78,120],[47,58],50,[32,34]` The reason why I want to make a double loop is to compare sub-groups of the full list; in other words, while 45 is smaller than the next one, just add it to the new list; if not, take the next one to compare, in this case 47, and start comparing it with 58.
No loop! Well at least, no *explicit* looping... ``` import itertools def process(lst): # Guard clause against empty lists if len(lst) < 1: return lst # use a dictionary here to work around closure limitations state = { 'prev': lst[0], 'n': 0 } def grouper(x): if x < state['prev']: state['n'] += 1 state['prev'] = x return state['n'] return [ list(g) for k, g in itertools.groupby(lst, grouper) ] ``` Usage (work both with Python 2 & Python 3): ``` >>> data = [45,78,120,47,58,50,32,34] >>> print (list(process(data))) [[45, 78, 120], [47, 58], [50], [32, 34]] ``` Joke apart, if you need to *group* items in a list [`itertools.groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby) deserves a little bit of attention. Not always the easiest/best answer -- but worth to make a try... --- **EDIT:** If you don't like *closures* -- and prefer using an *object* to hold the state, here is an alternative: ``` class process: def __call__(self, lst): if len(lst) < 1: return lst self.prev = lst[0] self.n = 0 return [ list(g) for k, g in itertools.groupby(lst, self._grouper) ] def _grouper(self, x): if x < self.prev: self.n += 1 self.prev = x return self.n data = [45,78,120,47,58,50,32,34] print (list(process()(data))) ``` --- **EDIT2:** Since I prefer closures ... but @torek don't like the *dictionary* syntax, here a third variation around the same solution: ``` import itertools def process(lst): # Guard clause against empty lists if len(lst) < 1: return lst # use an object here to work around closure limitations state = type('State', (object,), dict(prev=lst[0], n=0)) def grouper(x): if x < state.prev: state.n += 1 state.prev = x return state.n return [ list(g) for k, g in itertools.groupby(lst, grouper) ] data = [45,78,120,47,58,50,32,34] print (list(process(data))) ```
I used a double loop as well, but put the inner loop in a function: ``` #!/usr/bin/env python def process(lst): def prefix(lst): pre = [] while lst and (not pre or pre[-1] <= lst[0]): pre.append(lst[0]) lst = lst[1:] return pre, lst res=[] while lst: subres, lst = prefix(lst) res.append(subres) return res print process([45,78,120,47,58,50,32,34]) => [[45, 78, 120], [47, 58], [50], [32, 34]] ``` The prefix function basically splits a list into 2; the first part is composed of the first ascending numbers, the second is the rest that still needs to be processed (or the empty list, if we are done). The main function then simply assembles the first parts in a result lists, and hands the rest back to the inner function. I'm not sure about the single value 50; in your example it's not in a sublist, but in mine it is. If it is a requirement, then change ``` res.append(subres) ``` to ``` res.append(subres[0] if len(subres)==1 else subres) print process([45,78,120,47,58,50,32,34]) => [[45, 78, 120], [47, 58], 50, [32, 34]] ```
Find monotonic sequences in a list?
[ "", "python", "for-loop", "iteration", "" ]
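Besides the `groupby` and helper-function answers above, the same grouping can be done with a single plain loop that either extends the current run or starts a new one; a minimal sketch:

```python
def ascending_runs(lst):
    """Split lst into maximal non-decreasing runs."""
    runs = []
    for x in lst:
        if runs and runs[-1][-1] <= x:
            runs[-1].append(x)   # still ascending: extend the current run
        else:
            runs.append([x])     # descent: start a new run
    return runs

print(ascending_runs([45, 78, 120, 47, 58, 50, 32, 34]))
# [[45, 78, 120], [47, 58], [50], [32, 34]]
```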
I have a Python application which outputs an SQL file: ``` sql_string = "('" + name + "', " + age + "')," output_files['sql'].write(os.linesep + sql_string) output_files['sql'].flush() ``` This is not done in a `for` loop, it is written as data becomes available. Is there any way to 'backspace' over the last comma character when the application is done running, and to replace it with a semicolon? I'm sure that I could invent some workaround by outputting the comma before the newline, and using a global Bool to determine if any particular 'write' is the first write. However, I think that the application would be much cleaner if I could just 'backspace' over it. Of course, being Python maybe there is such an easier way! Note that having each `insert` value line in a list and then imploding the list is not a viable solution in this use case.
Use seek to move your cursor one byte (character) backwards, then write the new character: ``` f.seek(-1, os.SEEK_CUR) f.write(";") ``` This is the easiest change, maintaining your current code ("working code" beats "ideal code") but it would be better to avoid the situation.
How about adding the commas before adding the new line? ``` first_line = True ... sql_string = "('" + name + "', " + age + "')" if not first_line: output_files['sql'].write(",") first_line = False output_files['sql'].write(os.linesep + sql_string) output_files['sql'].flush() ... output_files['sql'].write(";") output_files['sql'].flush() ``` You did mention this in your question - I think this is a much clearer to a maintainer than seeking commas and overwriting them. EDIT: Since the above solution would require a global boolean in your code (which is not desirable) you could instead wrap the file writing behaviour into a helper class: ``` class SqlFileWriter: first_line = True def __init__(self, file_name): self.f = open(file_name) def write(self, sql_string): if not self.first_line: self.f.write(",") self.first_line = False self.f.write(os.linesep + sql_string) self.f.flush() def close(self): self.f.write(";") self.f.close() output_files['sql'] = SqlFileWriter("myfile.sql") output_files['sql'].write("('" + name + "', '" + age + "')") ``` This encapsulates all the SQL notation logic into a single class, keeping the code readable and at the same time simplifying the caller code.
"Backspace" over last character written to file
[ "", "python", "" ]
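One caveat to the seek-and-overwrite approach in the accepted answer: in Python 3, a text-mode file only allows end-relative seeks of zero, so the file must be reopened in binary mode for this trick. A self-contained sketch using a temporary file (the `('alice', '30')` row is a made-up example value):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, 'w') as f:
    f.write("('alice', '30'),")   # ends with the trailing comma

# Python 3 text mode forbids nonzero end-relative seeks, so use binary
with open(path, 'rb+') as f:
    f.seek(-1, os.SEEK_END)       # position on the last byte
    f.write(b';')                 # overwrite the comma

with open(path) as f:
    print(f.read())  # ('alice', '30');

os.remove(path)
```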
This question is related to [Occasionally Getting SqlException: Timeout expired](https://stackoverflow.com/questions/16917107/occasionally-getting-sqlexception-timeout-expired/16917533?noredirect=1#comment24632507_16917533). Actually, I am using `IF EXISTS... UPDATE .. ELSE .. INSERT` heavily in my app. But user Remus Rusanu says that you should not use this. Why should I not use this, and what danger does it involve? So, if I have ``` IF EXISTS (SELECT * FROM Table1 WHERE Column1='SomeValue') UPDATE Table1 SET (...) WHERE Column1='SomeValue' ELSE INSERT INTO Table1 VALUES (...) ``` How do I rewrite this statement to make it work?
Use [MERGE](http://technet.microsoft.com/en-us/library/bb510625.aspx) Your SQL fails because 2 concurrent, overlapping, and very close calls will both get "false" from the EXISTS before the INSERT happens. So they both try to INSERT, and of course one fails. This is explained more here: [Select / Insert version of an Upsert: is there a design pattern for high concurrency?](https://stackoverflow.com/questions/3593870/select-insert-version-of-an-upsert-is-there-a-design-pattern-for-high-concurr/3594328#3594328) This answer is old though, and applies to before MERGE was added
The problem with `IF EXISTS ... UPDATE ...` (and `IF NOT EXISTS ... INSERT ...`) is that under concurrency multiple threads (transactions) will execute the `IF EXISTS` part and *all reach the same conclusion* (e.g. it does not exist) and try to act accordingly. The result is that all threads attempt to INSERT, resulting in a key violation. Depending on the code this can result in constraint violation errors, deadlocks, timeouts or worse (lost updates). You need to ensure that the `IF EXISTS` check and the action are atomic. Before SQL Server 2008 the solution involved using a transaction and lock hints and was very, very error prone (easy to get wrong). On SQL Server 2008 and later you can use `MERGE`, which will ensure proper atomicity, as it is a single statement and the engine understands what you're trying to do.
Danger of using 'IF EXISTS... UPDATE .. ELSE .. INSERT' and what is the alternative?
[ "", "sql", "sql-server-2008", "" ]
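T-SQL `MERGE` is not portable, but the check-and-act-in-one-statement idea behind it can be illustrated with SQLite's `INSERT ... ON CONFLICT ... DO UPDATE` (requires SQLite 3.24+; used here only as a runnable stand-in for `MERGE`). Because the upsert is one statement, there is no window between the existence check and the write for a concurrent caller to race into:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (k TEXT PRIMARY KEY, v INTEGER)')

def upsert(k, v):
    # one atomic statement: no separate IF EXISTS check to race against
    conn.execute(
        'INSERT INTO t (k, v) VALUES (?, ?) '
        'ON CONFLICT(k) DO UPDATE SET v = excluded.v',
        (k, v))

upsert('row1', 1)
upsert('row1', 2)   # updates instead of raising a key violation
print(conn.execute('SELECT v FROM t WHERE k = ?', ('row1',)).fetchone()[0])  # 2
```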
By back-end validations I mean triggers, CHECK constraints, stored procedures (insert, update, delete), etc. **How practical or necessary** are they now, when most of these validations are handled strictly in the front end? **How much back-end validation is good for a program?** Should small things be left out of back-end validations? For example: let's say we have an age restriction on the people whose data can be entered. This can be done in the back end using triggers or a CHECK on the age column. It can also be done in the front end. So is it necessary to have a back-end validation when there is strict validation of the age in the front end?
This is a conceptual question. In general modern programs are built in 3 layers: 1. Presentation 2. Business Logic 3. Database As a rule, layer 1 **may** elect to validate all input in a modern application, providing the user with quick feedback on possible issues (for example a JS popup saying *"this is not a valid email address"*). Layer 2 ***always*** has to do full validation. It's the gateway to the backend and it can check complex relational constraints. It ensures no corrupt data can enter the database, in any way, validated against the application's constraints. Those constraints are oft more complex than what you can check in a database anyway (for example a bank account number here in the Netherlands has to be either 3 to 7 numbers, **or** 9 or 10 and match a [check digit test](http://en.wikipedia.org/wiki/Check_digit)). Layer 3 **can** do validation. If there's only one 'client' it's not a necessity per se, if there's more (especially if there are 'less trusted' users of the same database) it should definitely also be in the database. If the application is mission-critical, it's also recommended to do full validation in the database as well with triggers and constraints, just to have a double guard against bugs in the business logic. The database's job is to ensure its own *integrity*, not compliance to specific business rules. There's no clear-cut answers to this one, it depends on what your application does and how important it is. In a banking application - validate on all 3 levels. In an internet forum - check only where it is needed, and serve extra users with the performance benefits.
This might help: 1. Front end (interface) validation is for data entry help and contextual messages. This ensures that the user has a hassle-free data entry experience and minimizes the round trips required to validate *correctness*. 2. Application level validation is for business logic validation. The values are correct, but do they *make sense*? This is the kind of validation you do here, and the majority of your efforts should be in this area. 3. Databases don't do any validation. They provide methods to *constrain* data, and the scope of that should be to ensure [referential integrity](http://en.wikipedia.org/wiki/Referential_integrity). Referential integrity ensures that your queries (especially cross-table queries) work as expected. Just like no database server will stop you from entering `4000` in a numeric column, it also shouldn't be the place to check if age < 40, as this has no impact on the integrity of the data. However, ensuring that a row being deleted won't leave any orphans - this is referential integrity and should be enforced at the database level.
Is it practical to have back-end (database side) validation for everything?
[ "", "sql", "database", "" ]
I am using a `virtualenv`. I have `fabric` installed, with `pip`. But a `pip freeze` does not give any hint about that. The package is there, in my `virtualenv`, but pip is silent about it. Why could that be? Any way to debug this?
I just tried this myself: create a virtualenv in to the "env" directory: ``` $virtualenv2.7 --distribute env New python executable in env/bin/python Installing distribute....done. Installing pip................done. ``` next, activate the virtual environment: ``` $source env/bin/activate ``` the prompt changed. now install fabric: ``` (env)$pip install fabric Downloading/unpacking fabric Downloading Fabric-1.6.1.tar.gz (216Kb): 216Kb downloaded Running setup.py egg_info for package fabric ... Successfully installed fabric paramiko pycrypto Cleaning up... ``` And `pip freeze` shows the correct result: ``` (env)$pip freeze Fabric==1.6.1 distribute==0.6.27 paramiko==1.10.1 pycrypto==2.6 wsgiref==0.1.2 ``` Maybe you forgot to activate the virtual environment? On a \*nix console type `which pip` to find out.
You can try using the `--all` flag, like this: ``` pip freeze --all > requirements.txt ```
pip freeze does not show all installed packages
[ "", "python", "virtualenv", "pip", "fabric", "" ]
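The accepted answer's `which pip` check can also be done from inside Python, which makes it obvious when the interpreter and the `pip` on `PATH` come from different environments (the usual cause of a "silent" `pip freeze`); a small diagnostic sketch:

```python
import shutil
import sys

print(sys.executable)       # the interpreter you are actually running
print(shutil.which('pip'))  # the pip your shell would run (None if not on PATH)
# if these point into different environments, `pip freeze` is listing
# packages for an interpreter other than the one you are using
```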
I have a table with lakhs of rows. Now, suddenly I need to create a varchar column index. Also, I need to perform some operations using that column. But it's giving an innodb\_lock\_wait\_timeout exceeded error. I googled it and changed the value of innodb\_lock\_wait\_timeout to 500 in the my.ini file in my mysql folder. But it's still giving the same error. I need to be sure whether the value has actually been changed or not. How can I check the effective innodb\_lock\_wait\_timeout value?
I found the answer. I need to run a query: `show variables like 'innodb_lock_wait_timeout';`.
There can be a difference between your command and the server settings: For Example: ``` SHOW GLOBAL VARIABLES LIKE '%INNODB_LOCK_WAIT_TIMEOUT%'; -- Default 50 seconds SET @@SESSION.innodb_lock_wait_timeout = 30; -- innodb_lock_wait_timeout changed in your session -- These queries will produce identical results, as they are synonymous SHOW VARIABLES LIKE '%INNODB_LOCK_WAIT_TIMEOUT%'; -- but is now 30 seconds SHOW SESSION VARIABLES LIKE '%INNODB_LOCK_WAIT_TIMEOUT%'; -- and still is 30 seconds ``` Any listed variable in the [MySQL Documentation](http://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html) can be changed in your session, potentially producing a varied result! Anything with a Variable Scope of "Both Global & Session" like [sysvar\_innodb\_lock\_wait\_timeout](http://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout), can *potentially* contain a different value. Hope this helps!
effective innodb_lock_wait_timeout value check
[ "", "mysql", "sql", "innodb", "" ]
I'm working on a python temperature converter. It will convert Fahrenheit to Celsius and vice versa. I haven't added the if statements yet, or the Celsius to Fahrenheit function yet. But I'm having trouble with this Fahrenheit to Celsius function, ``` def F_C(x): x = raw_input("Please Enter A Value") x = int(x) x - 32 x * 0.55 answer = F_C(x) print x ``` for some reason, it only takes the number and splits it in half. If anyone can help me, I would really appreciate it. Thanks in Advance
You are not returning a value. Also you are not storing all the computed values back into the variable `x` ``` def F_C(x): x = raw_input("Please Enter A Value") x = int(x) x = x - 32 x = x * 0.55 return x ``` You can simplify it to: ``` def F_C(): x = raw_input("Please Enter A Value") return (int(x) - 32)*0.55 ```
Do this: ``` def F_C(): x = raw_input("Please Enter A Value") x = int(x) x = x - 32 x = x * 0.55 return x ``` because you're just assigning x and then reassigning x. Or do this: ``` def F_C(): x = raw_input("Please Enter A Value") x = int(x) x = (x - 32)*0.55 return x print F_C() ``` You don't need the x parameter, because x is retrieved within the function, so you don't need to pass it in as an argument, and you have to return something from the function. Also, you can use `input` instead of `raw_input`; then you don't have to convert it to an int, because `raw_input` gives back a string while `input` gives back an integer or float. Good luck with the rest!
Python Temperature Converter
[ "", "python", "parameter-passing", "converters", "" ]
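Putting the accepted answer's fixes together, both conversion directions fit in two small functions; note this sketch uses the exact 5/9 factor rather than the 0.55 approximation from the question, and takes the value as a parameter instead of prompting:

```python
def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32

print(fahrenheit_to_celsius(212))  # 100.0
print(celsius_to_fahrenheit(0))    # 32.0
```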
I am pretty new to Python so it might sound obvious, but I haven't found this anywhere else. Say I have an application (module) in the directory A/, then I start developing an application/module in another directory B/. So right now I have ``` source/ |_A/ |_B/ ``` From A I want to use functions or classes defined in B. I might eventually pull them out and put them in a "misc" or "util" module. In any case, what is the best way to add module B to the PYTHONPATH so A can see it, taking into account that I will also be making changes to B? So far I came up with something like: ``` def setup_paths(): import sys sys.path.append('../B') ``` when I want to develop something in A that uses B, but this just does not feel right.
Normally when you are developing a single application your directory structure will be similar to ``` src/ |-myapp/ |-pkg_a/ |-__init__.py |-foo.py |-pkg_b/ |-__init__.py |-bar.py |-myapp.py ``` This lets your whole project be reused as a package by others. In `myapp.py` you will typically have a short `main` function. You can import other modules of your application easily. For example, in `pkg_b/bar.py` you might have ``` import myapp.pkg_a.foo ``` I think it's the preferred way of organising your imports. You can do relative imports if you really want, they are described in [PEP-328](http://www.python.org/dev/peps/pep-0328/#rationale-for-relative-imports). ``` from ..pkg_a import foo ``` but personally I think they are a bit ugly and difficult to maintain (that's arguable, of course). Of course, if one of your modules needs a module from *another* application it's a completely different story, since this application is an external dependency and you'll have to handle it.
I would recommend using the `imp` module ``` import imp imp.load_source('module','../B/module.py') ``` Else use absolute path starting from root ``` def setup_paths(): import sys sys.path.append('/path/to/B') ```
Developing Python modules - adding them to the Path
[ "", "python", "" ]
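If the sibling-directory hack from the question is kept, it is safer to resolve B relative to the file doing the import rather than the current working directory (a relative `'../B'` breaks as soon as the script is launched from elsewhere). A sketch, where `source/A/main.py` is a hypothetical anchor file; in real code it would normally be `__file__`:

```python
import sys
from pathlib import Path

def add_sibling_to_path(anchor_file, sibling):
    """Put the directory next to anchor_file's parent onto sys.path."""
    target = Path(anchor_file).resolve().parent.parent / sibling
    sys.path.insert(0, str(target))
    return str(target)

# inside A/main.py this call would normally pass __file__
added = add_sibling_to_path('source/A/main.py', 'B')
print(added.endswith('B'))  # True
```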
I would like students to solve a quadratic program in an assignment without them having to install extra software like cvxopt etc. Is there a python implementation available that only depends on NumPy/SciPy?
I ran across a good solution and wanted to get it out there. There is a python implementation of LOQO in the ELEFANT machine learning toolkit out of NICTA (<http://elefant.forge.nicta.com.au> as of this posting). Have a look at optimization.intpointsolver. This was coded by Alex Smola, and I've used a C-version of the same code with great success.
I'm not very familiar with quadratic programming, but I think you can solve this sort of problem just using `scipy.optimize`'s constrained minimization algorithms. Here's an example: ``` import numpy as np from scipy import optimize from matplotlib import pyplot as plt from mpl_toolkits.mplot3d.axes3d import Axes3D # minimize # F = x[1]^2 + 4x[2]^2 -32x[2] + 64 # subject to: # x[1] + x[2] <= 7 # -x[1] + 2x[2] <= 4 # x[1] >= 0 # x[2] >= 0 # x[2] <= 4 # in matrix notation: # F = (1/2)*x.T*H*x + c*x + c0 # subject to: # Ax <= b # where: # H = [[2, 0], # [0, 8]] # c = [0, -32] # c0 = 64 # A = [[ 1, 1], # [-1, 2], # [-1, 0], # [0, -1], # [0, 1]] # b = [7,4,0,0,4] H = np.array([[2., 0.], [0., 8.]]) c = np.array([0, -32]) c0 = 64 A = np.array([[ 1., 1.], [-1., 2.], [-1., 0.], [0., -1.], [0., 1.]]) b = np.array([7., 4., 0., 0., 4.]) x0 = np.random.randn(2) def loss(x, sign=1.): return sign * (0.5 * np.dot(x.T, np.dot(H, x))+ np.dot(c, x) + c0) def jac(x, sign=1.): return sign * (np.dot(x.T, H) + c) cons = {'type':'ineq', 'fun':lambda x: b - np.dot(A,x), 'jac':lambda x: -A} opt = {'disp':False} def solve(): res_cons = optimize.minimize(loss, x0, jac=jac,constraints=cons, method='SLSQP', options=opt) res_uncons = optimize.minimize(loss, x0, jac=jac, method='SLSQP', options=opt) print '\nConstrained:' print res_cons print '\nUnconstrained:' print res_uncons x1, x2 = res_cons['x'] f = res_cons['fun'] x1_unc, x2_unc = res_uncons['x'] f_unc = res_uncons['fun'] # plotting xgrid = np.mgrid[-2:4:0.1, 1.5:5.5:0.1] xvec = xgrid.reshape(2, -1).T F = np.vstack([loss(xi) for xi in xvec]).reshape(xgrid.shape[1:]) ax = plt.axes(projection='3d') ax.hold(True) ax.plot_surface(xgrid[0], xgrid[1], F, rstride=1, cstride=1, cmap=plt.cm.jet, shade=True, alpha=0.9, linewidth=0) ax.plot3D([x1], [x2], [f], 'og', mec='w', label='Constrained minimum') ax.plot3D([x1_unc], [x2_unc], [f_unc], 'oy', mec='w', label='Unconstrained minimum') ax.legend(fancybox=True, numpoints=1) ax.set_xlabel('x1') 
ax.set_ylabel('x2') ax.set_zlabel('F') ``` Output: ``` Constrained: status: 0 success: True njev: 4 nfev: 4 fun: 7.9999999999997584 x: array([ 2., 3.]) message: 'Optimization terminated successfully.' jac: array([ 4., -8., 0.]) nit: 4 Unconstrained: status: 0 success: True njev: 3 nfev: 5 fun: 0.0 x: array([ -2.66453526e-15, 4.00000000e+00]) message: 'Optimization terminated successfully.' jac: array([ -5.32907052e-15, -3.55271368e-15, 0.00000000e+00]) nit: 3 ``` ![enter image description here](https://i.stack.imgur.com/ZEyuC.png)
Quadratic Program (QP) Solver that only depends on NumPy/SciPy?
[ "", "python", "numpy", "scipy", "mathematical-optimization", "" ]
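For the unconstrained part of the problem in the answer above, no solver is needed at all: minimizing 0.5 xᵀHx + cᵀx (with H positive definite) means solving Hx = -c, which NumPy does directly. A sketch using the same H and c as the answer:

```python
import numpy as np

# same quadratic as in the answer: F = 0.5 x^T H x + c^T x + 64
H = np.array([[2.0, 0.0],
              [0.0, 8.0]])
c = np.array([0.0, -32.0])

x_star = np.linalg.solve(H, -c)  # stationary point: H x = -c
print(x_star)  # [0. 4.] (the unconstrained minimum SLSQP found above)
```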
``` my_list = ['a', 'b', 'c', 'd', 'e'] for i in my_list: my_need = my_list (i) print (my_need) ``` What's wrong? I want to obtain every element of my_list using a for loop.
A for loop over a list or sequence in python automatically fetches one item at a time from the sequence: ``` my_list = ['a', 'b', 'c', 'd', 'e'] for my_need in my_list: print my_need ``` or may be you wanted to do something like this: ``` >>> my_list = ['a', 'b', 'c', 'd', 'e'] >>> my_need = [] >>> for item in my_list: ... my_need.append(item) ... >>> my_need ['a', 'b', 'c', 'd', 'e'] ```
I think what you are looking for is an array slice: ``` my_need = my_list[:] ``` This will copy each element from my\_list into my\_need, so then print (my\_need) should return ``` ['a', 'b', 'c', 'd', 'e'] ``` If you want to concatenate these into a string, you could also do this: ``` my_need = "" for i in my_list: my_need += i print(my_need) ``` This time, the print statement will print: ``` abcde ```
What is wrong when I want to get every elements of my list using for loop?
[ "", "python", "python-3.x", "" ]
hey I'm trying to change elements in my Python list, and I just can't get it to work. ``` content2 = [-0.112272999846, -0.0172778364044, 0, 0.0987861891257, 0.143225416783, 0.0616318333661, 0.99985834, 0.362754457762, 0.103690909138, 0.0767353098528, 0.0605534405723, 0.0, -0.105599793882, -0.0193182826135, 0.040838960163,] for i in range((content2)-1): if content2[i] == 0.0: content2[i] = None print content2 ``` It needs to produce: ``` content2 = [-0.112272999846, -0.0172778364044, None, 0.0987861891257, 0.143225416783, 0.0616318333661, 0.99985834, 0.362754457762, 0.103690909138, 0.0767353098528, 0.0605534405723, None, -0.105599793882, -0.0193182826135, 0.040838960163,] ``` I've tried various other methods too. Anyone got an idea?
You should avoid modifying by index in Python ``` >>> content2 = [-0.112272999846, -0.0172778364044, 0, 0.0987861891257, 0.143225416783, 0.0616318333661, 0.99985834, 0.362754457762, 0.103690909138, 0.0767353098528, 0.0605534405723, 0.0, -0.105599793882, -0.0193182826135, 0.040838960163] >>> [float(x) if x else None for x in content2] [-0.112272999846, -0.0172778364044, None, 0.0987861891257, 0.143225416783, 0.0616318333661, 0.99985834, 0.362754457762, 0.103690909138, 0.0767353098528, 0.0605534405723, None, -0.105599793882, -0.0193182826135, 0.040838960163] ``` To mutate `content2` to the result of this list comprehension, do the following: ``` content2[:] = [float(x) if x else None for x in content2] ``` Your code didn't work because: ``` range((content2)-1) ``` you are trying to subtract `1` from a `list`. Also the `range` endpoint is **exclusive** (it goes up to the endpoint `- 1`, which you are subtracting `1` from again) so what you meant was `range(len(content2))` This modification of your code works: ``` for i in range(len(content2)): if content2[i] == 0.0: content2[i] = None ``` It's nicer to use the implicit fact that `int`s in Python equal to `0` evaluate to false so this works equally fine as well: ``` for i in range(len(content2)): if not content2[i]: content2[i] = None ``` You can get used to doing that for lists and tuples as well instead of checking `if len(x) == 0` as recommended by [PEP-8](http://www.python.org/dev/peps/pep-0008/#id37) The list comprehension I suggested: ``` content2[:] = [float(x) if x else None for x in content2] ``` Is semantically equivalent to ``` res = [] for x in content2: if x: # x is not empty (0.0) res.append(float(x)) else: res.append(None) content2[:] = res # replaces items in content2 with those from res ```
You should use list comprehension here: ``` >>> content2[:] = [x if x!= 0.0 else None for x in content2] >>> import pprint >>> pprint.pprint(content2) [-0.112272999846, -0.0172778364044, None, 0.0987861891257, 0.143225416783, 0.0616318333661, 0.99985834, 0.362754457762, 0.103690909138, 0.0767353098528, 0.0605534405723, None, -0.105599793882, -0.0193182826135, 0.040838960163] ```
python changing integers/floats to None in list
[ "", "python", "list", "python-2.7", "" ]
I am working on a network traffic monitor project in Python. I am not that familiar with Python, so I am seeking help here. In short, I am checking both in and out traffic, and I wrote it this way: ``` for iter in ('in','out'): netdata = myhttp() print data ``` netdata is a list consisting of nested lists; its format is like this: ``` [ [t1,f1], [t2,f2], ...] ``` Here `t` represents the moment and `f` is the flow. However, I just want to keep the f values at each moment for both in and out, and I wonder how to write efficient code for this. After some searching, I think I need to create a list of traffic (2 elements), then use the zip function to iterate both lists at the same time, but I have difficulty writing a correct version. Since my netdata is a very long list, efficiency is also very important. If there is anything confusing, let me know and I will try to clarify. Thanks for the help
Apart from minor fixes on your code (the issues raised by @Zero Piraeus), your question was probably answered [here](https://stackoverflow.com/questions/6340351/python-iterating-through-list-of-list). Possible code to traverse a list of lists of degree N (a tree) is the following: ``` def traverse(item): try: for i in iter(item): for j in traverse(i): yield j except TypeError: yield item ``` Example: ``` l = [1, [2, 3], [4, 5, [[6, 7], 8], 9], 10] print [i for i in traverse(l)] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ``` The key to making it work is recursion, and the key to making it work efficiently is using a generator (the keyword `yield` gives the hint). The generator will iterate through your list of lists and return items to you one by one, without needing to copy data or create a whole new list (unless you consume the whole generator, assigning the result to a list, like in my example). Iterators and generators can be strange concepts to understand (the keyword `yield` mainly). Check out this [great answer](https://stackoverflow.com/questions/231767/the-python-yield-keyword-explained) to fully understand them
The code you've shown doesn't make a great deal of sense. Here's what it does: * Iterate through the sequence `'in', 'out'`, assigning each of those two strings in turn to the variable `iter` (masking the built-in function [`iter()`](http://docs.python.org/2/library/functions.html#iter) in the process) on its two passes through the loop. * Completely ignore the value of `iter` inside the loop. * Assign the result of `myhttp()` to the variable `netdata` on *each pass* through the loop. * Completely ignore the value of `netdata`, and instead attempt to print the undefined variable `data` on each pass through the loop. It's possible, given the nested list you describe, that you want something like this: ``` for t, f in myhttp(): print t print f # ... or whatever you want to do with those values. ```
iterate python nested lists efficiently
[ "", "python", "nested-lists", "" ]
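A hypothetical sketch of the pattern the question in the record above describes (keeping only the flow value `f` from each `[t, f]` pair and walking the "in" and "out" series in parallel with `zip`); the sample data and the `flows_only` helper are illustrative stand-ins for whatever `myhttp()` returns, not the asker's actual code:

```python
def flows_only(netdata):
    """Drop the timestamps and keep only the flow value from each pair."""
    return [f for _t, f in netdata]

in_data = [[0, 10], [1, 12], [2, 9]]   # sample [moment, flow] pairs ("in")
out_data = [[0, 4], [1, 5], [2, 6]]    # sample [moment, flow] pairs ("out")

in_flows = flows_only(in_data)
out_flows = flows_only(out_data)

# zip() walks both lists at the same time without building a combined copy.
pairs = list(zip(in_flows, out_flows))
```

Because `zip` is lazy in Python 3, this scales to very long lists without extra copies beyond the two flow lists themselves.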
I'm trying to figure out how to count the number of items in a nested list. I'm stuck on how to even begin this. For example, if I were to do NestLst([]) it would print 0, but if I do ``` NestLst([[2, [[9]], [1]], [[[[5]]], ['hat', 'bat'], [3.44], ['hat', ['bat']]]]) ``` it would return 9. Any help on how to begin this or how to do this would be great. Thanks!
Your question contains the keyword: *recursively*. Create a function that iterates over the list; if it finds a non-list item, it adds one to the count, and if it finds a list, it calls itself recursively. The issue with **your** code is that you are using the length instead of a recursive call. Here is pythonic pseudocode: ``` def count(list): answer = 0 for item in list: if item is not a list: answer += 1 else: answer += number of items in the sublist (recursion will be useful here) ```
``` import collections def NestLst(seq): if isinstance(seq, str) or not isinstance(seq, collections.Iterable): return 1 return sum(NestLst(x) for x in seq) >>> NestLst([[2, [[9]], [1]], [[[[5]]], ['hat', 'bat'], [3.44], ['hat', ['bat']]]]) 9 ```
How to count items in nest list?
[ "", "python", "" ]
In our running project, using ASP.NET 3.5 with C# and MS SQL, the database size has now reached 4 TB. It will increase gradually, so we are looking for a solution. Someone told me about Big Data (using Hive + Hadoop) and gave the following references: <https://cwiki.apache.org/confluence/display/Hive/GettingStarted> <http://www.codeproject.com/Articles/398563/Analyzing-some-Big-Data-Using-Csharp-Azure-And-Apa> I have read these 2 links, but unfortunately I do not understand what they mean. So can anyone explain how I can do this? Shall I migrate MS SQL to Hadoop? Please tell me the hardware and software requirements for changing to Hadoop. Is it possible to migrate our current database to Hadoop? I have read somewhere that it is mainly used for searching data and that it does not support multi-transaction workloads. If it is possible to migrate, then how much work would be needed on our existing project? At this size (several TB), we have 400-500 data insertions and transactions per second.
I'll share some of my experience. First of all I would analyze whether Hadoop fits my requirements or not. Hadoop is not a replacement for traditional RDBMSs. It is meant for a totally different purpose, which is `offline batch processing`, unlike RDBMSs, which are used for real-time queries. So, if you have real-time needs you had better think again. In such a case you would require something like [HBase](http://hbase.apache.org). The first link which you have shown here is about [Hive](http://hive.apache.org). Hive is a data warehouse which runs on top of an existing Hadoop cluster. Along with that it also provides an SQL-like interface, which lets you query in an easy and convenient manner in case you are coming from an SQL background. Hive stores the data as Hive tables on top of HDFS (the Hadoop file system). But again, Hive is not suitable for real-time stuff. If you want to perform real-time queries on your Hadoop data you might find [Impala](http://hive.apache.org) a good fit. The second link shows you how to use Apache Hadoop with C# and Microsoft Windows Azure. Azure is Microsoft's cloud platform offering. You can find more on this [here](https://www.hadooponazure.com/). This link shows you how to write a [MapReduce](http://hadoop.apache.org/docs/stable/mapred_tutorial.html) program. MapReduce is a computation framework that allows you to process huge amounts of data on a cluster of machines in a distributed and parallel manner. Originally Hadoop was developed to be used with Linux, but now you have the flexibility to use it with Windows as well, courtesy of solutions like Azure and [HDP](http://hortonworks.com/products/hortonworksdataplatform/). Hadoop does not require any special software (apart from basic things like ssh, an appropriate language translator, etc.) or sophisticated hardware. It was meant to run on commodity hardware. But you might want to keep in mind the specs prescribed by the vendor you are going to follow.
You could easily move your SQL data into your Hadoop cluster either by writing your own programs or by using higher-level tools like [Sqoop](http://sqoop.apache.org). Search is not the only area where Hadoop is used. This is a very good [link](http://www.slideshare.net/cloudera/20100806-cloudera-10-hadoopable-problems-webinar-4931616) which tells us about the common problems that can be solved using Hadoop. Hope this answers some of your questions. Let me know if you have any further queries. --- In response to your comment: Hadoop is basically 2 things: a distributed filesystem (HDFS) and a processing framework (MapReduce). Being a filesystem, it lacks random read/write capability. This is where a database, like HBase, comes into the picture. Also, Hadoop stores data as files and not as columns, which HBase does. If you want to query your data stored in HDFS through Hive, you map your HDFS files as tables in Hive and query them using HiveQL. Suppose you have a file called 'file.txt' in HDFS which has 2 fields, number and name; you would do something like this: ``` hive> CREATE TABLE demo (no INT, name STRING); hive> LOAD DATA INPATH '/file.txt' INTO TABLE demo; hive> Select * FROM demo; ```
As many others pointed out, a 4TB database is not a reason to move to to Hadoop + Hive. There has to be a pain point that you can't solve with Ms SQL to look for options. Just as a thought, have you considered moving to Azure so that your infrastructure can grow with you?
SQL to BigData Migration?
[ "", "sql", "c#-4.0", "hadoop", "hive", "" ]
What is the difference between creating an instance of an object with () and without parentheses? Assume this is my simple class: ``` class ilist(list): t1 = "Test1" t2 = "Test2" ``` And assigning to these two variables: ``` list1 = ilist() list2 = ilist ``` When printing the two instances ``` >>> list1 [] >>> list2 <class '__main__.ilist'> ``` I can successfully access their attributes ``` >>> list1.t1 'Test1' >>> list2.t1 'Test1' ``` And it shows an error for list2 when using the append method ``` >>> list1.append("Item1") >>> list2.append("Item1") Traceback (most recent call last): File "<pyshell#64>", line 1, in <module> list2.append("Item1") TypeError: descriptor 'append' requires a 'list' object but received a 'str' ``` What is the difference, not only in this simple example?
`list1 = ilist()` creates an object of the class. `list2 = ilist` makes a reference to the class itself. Classes are first-class objects in Python, which means they can be assigned just like any other object. Because the attributes are part of the class, they're also part of the objects that are created with that class.
When you are not calling the class (omitting the `()`), you are not making instances. You are creating another reference to the *class*. `()` calls a class to produce an instance. Your `list2` variable is merely a reference to your `ilist` class object. That class object has both `t1` and `t2` attributes, so you can reference those. But only `list1` is an actual instance, produced by calling the class. When looking up attributes on an instance, the class is consulted *as well*. So `list1` does not have the `t1` and `t2` attributes, but its class *does*. You may want to consult the [Python tutorial on classes](http://docs.python.org/3/tutorial/classes.html) again to see what the difference is between classes and instances.
Create object using parentheses python
[ "", "python", "python-3.x", "" ]
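A runnable sketch of the class-vs-instance distinction the answers above describe; the class body mirrors the one from the question (the variable names here are illustrative):

```python
class IList(list):
    t1 = "Test1"
    t2 = "Test2"

inst = IList()   # calling the class produces a new (empty) list instance
ref = IList      # no parentheses: just another name bound to the class itself

# Attribute lookup on the instance falls back to the class, so both work.
via_instance = inst.t1
via_class = ref.t1

inst.append("Item1")   # fine: inst is a real list; ref.append would not be
```

The key observation is that `ref is IList` is true: nothing new was created by the assignment without parentheses.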
I need to verify whether the version number of an application is bigger than 1.18.10. What should the regular expression look like in this case?
Don't use regular expressions for this. Use `split` and `tuple` comparison: ``` def is_recent(version): version_as_ints = (int(x) for x in version.split('.')) return tuple(version_as_ints) > (1, 18, 10) ``` And then check `is_recent("1.10.11")` or `is_recent("1.18.12")`
Seems like this battery has already been included in Python in [`distutils.version`](http://docs.python.org/2/distutils/apiref.html#module-distutils.version) : ``` from distutils.version import LooseVersion LooseVersion("1.18.11") > LooseVersion("1.18.10") #True LooseVersion("1.2.11") > LooseVersion("1.18.10") #False (note that "2">"18" is True) LooseVersion("1.18.10a") > LooseVersion("1.18.10") #True ``` This takes care of splitting the version, comparing the numeric parts as integers, and handling non-numeric parts (e.g. an alphabetic extension) separately and correctly. (*If you want the alternate behaviour* (*lexicographical comparison*), you can directly compare the tuples of strings that result from `version_num.split(".")`.) Note that there is also a `StrictVersion` variant that will throw an exception (`ValueError`) on alphabetic characters in the version string. See also [PEP386](http://www.python.org/dev/peps/pep-0386/) which is planning to deprecate both, replacing them with a `NormalizedVersion`.
Regular expression for version number bigger than 1.18.10
[ "", "python", "regex", "" ]
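A sketch of the tuple-comparison idea from the accepted answer above, alongside the lexicographic trap it avoids; note this assumes purely numeric dotted versions (suffixes like "10a" would need `LooseVersion` or similar):

```python
def parse_version(s):
    # Split on dots and convert each part to an int, so 9 < 18 numerically.
    return tuple(int(part) for part in s.split("."))

newer = parse_version("1.18.11") > parse_version("1.18.10")

# Plain string comparison compares character by character, so "9" > "1"
# makes "1.9.0" look newer than "1.18.10" -- the bug tuple comparison fixes.
trap = "1.9.0" > "1.18.10"
fixed = parse_version("1.9.0") > parse_version("1.18.10")
```

This is why the answers steer away from regular expressions: the comparison, not the matching, is the hard part.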
I have the following values in my columns and I need to order them on the basis of their name. The datatype of the column is VARCHAR2 (25 BYTE). My data is ``` Oracle 10g, Oracle 9i, Oracle 11g ``` When I use ORDER BY Name, I get the result in the order ``` Oracle 10g Oracle 11g Oracle 9i ``` What can I do to get the data in the order shown below? ``` Oracle 9i Oracle 10g Oracle 11g ```
``` /* order based on a number in string */ SELECT str FROM ( SELECT 'Oracle 10g' str FROM DUAL UNION ALL SELECT 'Oracle 11g' FROM DUAL UNION ALL SELECT 'Oracle 9i' FROM DUAL ) ORDER BY TO_NUMBER(REGEXP_REPLACE(str, '\D')) /* result */ Oracle 9i Oracle 10g Oracle 11g ```
Edited: works as expected. Here we have alphanumeric characters, so please use the substring function. > select column\_name from table\_name order by **SUBSTRING**(name,0,8) Use ascending or descending as per your requirement. Note: 8 is the length, because "Oracle" plus the space are the characters common to all the values.
ORDER BY with characters and numeric
[ "", "sql", "sql-order-by", "varchar", "" ]
I have a dataframe in pandas which I would like to write to a CSV file. I am doing this using: ``` df.to_csv('out.csv') ``` And getting the following error: ``` UnicodeEncodeError: 'ascii' codec can't encode character u'\u03b1' in position 20: ordinal not in range(128) ``` * Is there any way to get around this easily (i.e. I have unicode characters in my data frame)? * And is there a way to write to a tab delimited file instead of a CSV using e.g. a 'to-tab' method (that I don't think exists)?
To delimit by a tab you can use the `sep` argument of [`to_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html): ``` df.to_csv(file_name, sep='\t') ``` To use a specific encoding (e.g. 'utf-8') use the `encoding` argument: ``` df.to_csv(file_name, sep='\t', encoding='utf-8') ```
When you are storing a `DataFrame` object into a **csv file** using the `to_csv` method, you probably won't need to store the **preceding indices** of each **row** of the `DataFrame` object. You can **avoid** that by passing a `False` boolean value to the `index` parameter. Somewhat like: ``` df.to_csv(file_name, encoding='utf-8', index=False) ``` So if your DataFrame object is something like: ``` Color Number 0 red 22 1 blue 10 ``` The csv file will store: ``` Color,Number red,22 blue,10 ``` instead of (the case when the **default value** `True` was passed) ``` ,Color,Number 0,red,22 1,blue,10 ```
Writing a pandas DataFrame to CSV file
[ "", "python", "csv", "pandas", "dataframe", "" ]
I'm writing a Cython wrapper for a C function. I have a pxd file with the following signature: ``` double contr_hrr(int lena, double xa, double ya, double za, double *anorms) ``` When I try to call this from a pyx file ``` ... return contr_hrr(len(acoefs),a.origin[0],a.origin[1],a.origin[2],anorms2) ``` where **anorms2** is a Python list, I get the error message: ``` cython/ctwo.pyx:35:80: Cannot convert Python object to 'double *' ``` How do I pass a Python list to a C function as a double array?
1. cimport `array`: ``` from cpython cimport array ``` 2. Create an array object from your list. array class constructor will do all the heavy lifting allocating memory and iterating over your list (could be any iterable actually). ``` cdef array.array anorms2_arr = array.array('d', anorms2) ``` 3. Pass its data to your function: ``` return contr_hrr(.., anorms2_arr.data.as_doubles) ``` `array` is a [standard Python module](http://docs.python.org/2/library/array.html). Cython adds some special support on top, like buffer interface and direct access to the underlying memory block via `arr.data.as_xxx`. Unfortunately, this support is only documented [here](https://github.com/cython/cython/blob/master/Cython/Includes/cpython/array.pxd). You can also find some details about array usage in [this recent thread](https://groups.google.com/d/msg/cython-users/OOfmlj-9HWM/84kbk2pGBBsJ).
I think you cannot do otherwise but convert it yourself: ``` cimport cython from libc.stdlib cimport malloc, free ... cdef double *anorms cdef unsigned int i; anorms = <double *>malloc(len(anorms2)*cython.sizeof(double)) if anorms is NULL: raise MemoryError() for i in xrange(len(anorms2)): anorms[i] = anorms2[i] return contr_hrr(len(acoefs),a.origin[0],a.origin[1],a.origin[2],anorms) ``` If you had been in C++, this would have been different because ``` The following coercions are available: Python type => C++ type => Python type bytes std::string bytes iterable std::vector list iterable std::list list iterable std::set set iterable (len 2) std::pair tuple (len 2) ``` If you could switch to C++, you would have a direct translation from `List[float]` to `vector<double>` : ``` from libcpp.vector cimport vector def py_contr_hrr(vector[double] anorms2, ...): ... return contr_hrr(len(acoefs),a.origin[0],a.origin[1],a.origin[2],anorms2) ``` And calling directly from the Python side : ``` anorms2 = [12.0, 0.5, ...] py_contr_hrr(anorms2, ....) ``` Source : <http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library> But I don't know if it is an option that you can consider... It depends on your project's constraints, of course. EDIT : I didn't know about `Nikita`'s way of doing it (which is an elegant one, by the way), and I don't know either which way is best suited concerning performance on big arrays.
Cython: can't convert Python object to 'double *'
[ "", "python", "arrays", "list", "cython", "" ]
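A pure-Python sketch of what the accepted answer's approach above relies on: `array.array('d', ...)` packs any iterable of numbers into a contiguous C buffer of doubles, which is the memory block Cython exposes as `.data.as_doubles`. The sample coefficients are illustrative:

```python
from array import array

anorms2 = [12.0, 0.5, 3.25]          # sample coefficients (illustrative)
anorms_arr = array('d', anorms2)     # contiguous buffer of C doubles

round_trip = anorms_arr.tolist()     # back to a plain Python list
n_bytes = len(anorms_arr) * anorms_arr.itemsize  # total buffer size in bytes
```

Unlike the manual `malloc` loop in the other answer, the `array` object owns its memory and frees it automatically when garbage-collected.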
I'm trying to use the delete command in Access using the shown specifications, but Access keeps saying "Data type mismatch in criteria expression." Does anyone know what to do? ``` DELETE ORDERS.OrderDate, ORDERS.CustomerID FROM ORDERS WHERE (((ORDERS.OrderDate)='2008-01-24') AND ((ORDERS.CustomerID)="C0003")); ```
``` DELETE ORDERS.OrderDate, ORDERS.CustomerID, * FROM ORDERS WHERE (((ORDERS.OrderDate)=#1/24/2008#) AND ((ORDERS.CustomerID)=3)); ```
Try this ``` DELETE ORDERS.OrderDate, ORDERS.CustomerID FROM ORDERS WHERE ORDERS.OrderDate = #01/24/2008# AND ORDERS.CustomerID = "C0003" ```
DELETE in Access?
[ "", "sql", "ms-access", "sql-delete", "mismatch", "" ]
So I need to record a set of 4 integers whose values are different for every second of the day. i.e.: ``` #Here the values are initialized to the same value, however they will change as samples are taken data = [[.25 for numData in range(4)] for numSecs in range(86400)] ``` Now obviously a two dimensional array(gah its python, LIST) whose first index length is 86400 is quite impractical. Instead I want to create a text file with 86400 lines formatted as such: ``` numSec data0 data1 data2 data3 0 .25 .25 .25 .25 1 .25 .25 .25 .25 2 .25 .25 .25 .25 ... ``` And as samples are taken, I want to be able to edit this file, nay, I want to be able to edit the line of the file whose numSec = the second the sample was taken. For example, a sample taken at numSec = 2 (2 seconds after midnight) would cause my program to edit the file so that: ``` 0 .25 .25 .25 .25 1 .25 .25 .25 .25 2 .70 .10 .10 .10 ... ``` Seems easy enough, I have even read a bunch of posts which demonstrated how to rewrite a single in a text file. The problem is, they all require that you read in the whole file. I don't want my program to be reading 86,400 lines every second. Thus we arrive at my question: Can I read a single line in a text file, edit it, and write it back to the file, without reading the entire file every single time a change needs to be made? P.S. I should note I am running Ubuntu 12.04 (Precise) and this is for use in a ROS node P.P.S This program will be running for an arbitrary amount of days so each "second" of data could be read and rewritten many times. Also another reason I would like to use a file is if the system needs to be shut off, I would like to save the distributions for the next time it is run.
You may need to modify this a bit, and it assumes that all lines are of the same length. For this, I had to pad the first column to a fixed width. But if you don't want padding, you should be able to calculate the number of 1-, 2-, 3-, 4-, ... digit numbers before a particular row. ``` data = [[.25 for numData in range(4)] for numSecs in range(86400)] length_line=0 def write_line(f, sec, data): line="{:6d}".format(sec) + " " +" ".join( ["{:.2f}".format(dd) for dd in data])+"\n" f.write(line) return len(line) with open('output', 'w') as of: for i,d in enumerate(data): length_line=write_line(of, i, d) with open('output', 'rb+') as of: # modify the 2nd line: n = 2 of.seek(n*length_line) write_line(of, n, [0.10,0.10,0.10,0.10]) # modify the 10th line: n = 10 of.seek(n*length_line) write_line(of, n, [0.10,0.10,0.10,0.10]) ```
If the lines are of different lengths, then everything after the modified line will be in the wrong position and you have to rewrite all those lines. If the lines all have the same length, then you can `seek()` and `write()` the new data by calculating the line's offset in the file. See [Python File Objects](http://docs.python.org/2.4/lib/bltin-file-objects.html) for more info.
Editing a single line in a large text file
[ "", "python", "text-files", "" ]
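A Python 3 sketch of the fixed-width idea from the answers above: when every line occupies exactly `LINE_LEN` bytes, line n starts at byte `n * LINE_LEN` and can be overwritten in place, with no full-file read. The file name, line width, and sample values are illustrative choices, not from the question:

```python
import os
import tempfile

LINE_LEN = 28  # 6-digit second, four "x.xx" values, padding, newline

def fmt(sec, values):
    # Pad every line to exactly LINE_LEN bytes (ASCII only).
    body = "{:6d} ".format(sec) + " ".join("{:.2f}".format(v) for v in values)
    return body.ljust(LINE_LEN - 1) + "\n"

path = os.path.join(tempfile.mkdtemp(), "samples.txt")
with open(path, "w") as f:
    for sec in range(5):
        f.write(fmt(sec, [0.25] * 4))

# Overwrite only line 2, leaving the rest of the file untouched.
# Binary mode makes the absolute seek offset unambiguous.
with open(path, "rb+") as f:
    f.seek(2 * LINE_LEN)
    f.write(fmt(2, [0.70, 0.10, 0.10, 0.10]).encode("ascii"))

lines = open(path).read().splitlines()
```

The same seek arithmetic scales to the question's 86,400 lines, since the cost of one update is independent of file size.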
I need to join something like this: ``` ["Círculo", 23] ["Triángulo", 25, 19, "dos"] ``` I have seen this post -> [Joining a list that has Integer values with Python](https://stackoverflow.com/questions/3590165/joining-list-has-integer-values-with-python) But solutions like: ``` ', '.join(map(str, myList)) or ', '.join(str(x) for x in list_of_ints) ``` do not work for me, because the special character 'í' makes it fail: > UnicodeEncodeError: 'ascii' codec can't encode character ... in > position ...: ordinal not in range(128) So what is the pythonic way to solve it? I do not want to check types... Thx!
While checking encodings, I found a solution that works for me: ``` u', '.join([unicode(x.decode('utf-8')) if type(x) == type(str()) else unicode(x) for x in a]) ``` The trick is to use `decode('utf-8')` to turn each byte string into a valid unicode representation before joining. Hope this will help.
If you can specify the strings in your list as unicode, like: ``` [u"Círculo", 23] [u"Triángulo", 25, 19, u"dos"] ``` then this should work: ``` u', '.join(unicode(x) for x in list_of_ints) ``` Assuming you are running Python 2.
Python encoding: list to string having special characters and numbers in the list
[ "", "python", "unicode", "encoding", "" ]
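A short note on the record above: in Python 3 every `str` is already unicode, so the plain `map(str, ...)` join from the linked answers works directly on mixed lists like the question's, with no decode/encode step at all (the sample data mirrors the question):

```python
items = ["Círculo", 23]
joined = ", ".join(map(str, items))   # no UnicodeEncodeError in Python 3

items2 = ["Triángulo", 25, 19, "dos"]
joined2 = ", ".join(str(x) for x in items2)
```

The accepted answer's `unicode`/`decode` dance is only needed under Python 2, where `str` means bytes.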
I have a tuple that looks like this: ``` ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n') ``` And I want to split it so that each column is separate, so I can access it in an easier way. So, for example: ``` tuple[0][2] ``` would return `0` ``` tuple[1][3] ``` would return `299` The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
You could use a list comprehension: ``` rows = [row.split() for row in your_tuple] ``` As for `.rstrip()`, you don't need it. `.split()` (with no argument!) takes care of that for you: ``` >>> ' a b c \t\n\n '.split() ['a', 'b', 'c'] ```
Apply `str.split` to each item of the tuple: ``` >>> tup = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n') >>> t = map(str.split, tup) >>> t [['1', '2130', '0', '279', '90', '92', '193', '1'], ['1', '186', '0', '299', '14', '36', '44', '1']] >>> t[0][2] '0' >>> t[1][3] '299' ```
What are the equivalents of .split() and .rstrip() for a tuple?
[ "", "python", "" ]
When I use requests to access an URL **cookies are automatically sent back to the server** (in the following example the requested URL set some cookie values and then redirect to another URL that display the stored cookie) ``` >>> import requests >>> response = requests.get("http://httpbin.org/cookies/set?k1=v1&k2=v2") >>> response.content '{\n "cookies": {\n "k2": "v2",\n "k1": "v1"\n }\n}' ``` Is it possible to temporary disable cookie handling in the same way you set Chrome or Firefox to not accept cookies? For example if I access the aforementioned URL with Chrome with cookie handling disabled I get what I expected: ``` { "cookies": {} } ```
You can do this by defining a cookie policy to reject all cookies: ``` from http import cookiejar # Python 2: import cookielib as cookiejar class BlockAll(cookiejar.CookiePolicy): return_ok = set_ok = domain_return_ok = path_return_ok = lambda self, *args, **kwargs: False netscape = True rfc2965 = hide_cookie2 = False ``` (Note that `http.cookiejar`'s API requires you to define a bunch of attributes and methods, as shown.) Then, set the cookie policy on your Requests session: ``` import requests s = requests.Session() s.cookies.set_policy(BlockAll()) ``` It will now not store or send cookies: ``` s.get("https://httpbin.org/cookies/set?foo=bar") assert not s.cookies ``` As an aside, if you look at the code, the convenience methods in the `requests` package (as opposed to those on a `requests.Session` object) construct a new `Session` each time. Therefore, cookies aren't persisted between separate calls to `requests.get`. However, if the first page sets cookies and then issues an HTTP redirect, the target page will see the cookies. (This is what happens with the HTTPBin `/cookies/set` call, which redirects to `/cookies`.) So depending on what behavior you want for redirects, you might not need to do anything special. Compare: ``` >>> print(requests.get("https://httpbin.org/cookies/set?foo=bar").json()) {'cookies': {'foo': 'bar'}} >>> print(requests.get("https://httpbin.org/cookies").json()) {'cookies': {}} >>> s = requests.Session() >>> print(s.get("https://httpbin.org/cookies/set?foo=bar").json()) {'cookies': {'foo': 'bar'}} >>> print(s.get("https://httpbin.org/cookies").json()) {'cookies': {'foo': 'bar'}} >>> s = requests.Session() >>> s.cookies.set_policy(BlockAll()) >>> print(s.get("https://httpbin.org/cookies/set?foo=bar").json()) {'cookies': {}} >>> print(requests.get("https://httpbin.org/cookies").json()) {'cookies': {}} ```
A simpler alternative to creating a new class is to use `http.cookiejar.DefaultCookiePolicy` with an empty list of allowed domains: ``` from requests import Session from http.cookiejar import DefaultCookiePolicy s = Session() s.cookies.set_policy(DefaultCookiePolicy(allowed_domains=[])) ``` From the [documentation](https://docs.python.org/3/library/http.cookiejar.html#http.cookiejar.DefaultCookiePolicy): > *allowed\_domains*: if not `None`, this is a sequence of the only domains for which we accept and return cookies
How to disable cookie handling with the Python requests library?
[ "", "python", "cookies", "python-requests", "" ]
I have the following function that takes a dictionary, sorts it by key and returns the list of the dictionary's values. ``` def sort_dict_values(dic): keys = dic.keys() keys.sort() return map(dic.get, keys) dict1 = {"b":"1", "a":"2", "d":"", "c":"3"} sorted_list = sort_dict_values(dict1) ``` This function returns a list including items with an empty value, e.g. the resulting list will be: ``` ["2","1","3",""] ``` I want to discard items which don't have a value, e.g. discard "d" since it is empty. The resulting list shall look like: ``` ["2","1","3"] ```
``` >>> dict1 = {"b":"1", "a":"2", "d":"", "c":"3"} >>> [v for k, v in sorted(dict1.items()) if v] ['2', '1', '3'] ``` As @AlexChamberlain said in the comments, it would reduce the load on the O(N log N) sorting algorithm, by performing the O(N) filter first ``` >>> [v for k, v in sorted(x for x in dict1.items() if x[1])] ['2', '1', '3'] ```
You can use `filter` here: ``` from itertools import imap def sort_dict_values(dic): keys = dic.keys() keys.sort() return filter(None,imap(dic.get, keys)) dict1 = {"b":"1", "a":"2", "d":"", "c":"3"} print sort_dict_values(dict1) #['2', '1', '3'] ``` or as suggested by @Alex Chamberlain it would be better filter keys before sorting as that would reduce the number of items to be sorted: ``` def sort_dict_values(dic): keys = sorted(k for k,v in dic.iteritems() if v !='') return map(dic.get, keys) dict1 = {"b":"1", "a":"2", "d":"", "c":"3"} print sort_dict_values(dict1) #['2', '1', '3'] ```
Sort dictionary values and discard those which do not have a value
[ "", "python", "dictionary", "" ]
I'm trying to build a query that takes some input parameters. Few things to note... 1. Not all parameters are required to be passed to the query. 2. If a parameter is not going to be passed to the query, must it still be passed anyway but as `NULL`? 3. If a parameter is not passed, how would I build it to not include the parameter? So say you have... ``` IN name VARCHAR(30), IN age VARCHAR(2), IN address VARCHAR(50) ``` And you only want to search by name... ``` SELECT * FROM Table WHERE ( NAME LIKE CONCAT(name, '%') ); ``` This doesn't seem to work because the age wasn't passed in. Or what if the name was passed in and the address but not the age? How would I build that?
As a previous poster stated, pass NULL for the unused parameters. Then structure your query similar to this: ``` SELECT * FROM Table WHERE ( ((nameParameter IS NULL) OR (NameColumn LIKE '%' + NameParameter + '%')) AND ((ageParameter IS NULL) OR (AgeColumn = ageParameter)) AND ((addressParameter IS NULL) OR (AddressColumn = addressParameter)) ) ```
The issue is that optional parameters cannot be used in MySQL. The workaround is to pass a null value for the parameter and, inside the stored procedure, use an IF statement to handle the different situations. Take a look here: [Writing optional parameters within stored procedures in MySQL?](https://stackoverflow.com/questions/12652241/writing-optional-parameters-within-stored-procedures-in-mysql)
MySQL Search Query Using LIKE
[ "", "mysql", "sql", "search", "keyword", "" ]
I have a tuple with some names I want to match against one or more dictionaries. ``` t = ('A', 'B') d1 = {'A': 'foo', 'C': 'bar'} d2 = {'A': 'foo', 'B': 'foobar', 'C': 'bar'} def f(dict): """ Given t, a tuple of names, find which name exists in the input dictionary dict, and return the name found and its value. If all names in the input tuple are found, pick the first one in the tuple instead. """ keys = set(dict) matches = keys.intersection(t) if len(matches) == 2: name = t[0] else: name = matches.pop() value = dict[name] return name, value print f(d1) print f(d2) ``` The output is `(A, foo)` in both cases. This is not a lot of code, but it involves converting to a set and then doing an intersection. I have looked into functools and haven't found anything useful. Is there a more optimized way of doing this using the standard library or built-in functions that I am not aware of? Thanks.
``` for k in t: try: return k, dic[k] except KeyError: pass ``` If you (like me) don't like exceptions, and assuming `None` is not a legitimate value: ``` for k in t: res = dic.get(k) if res is not None: return k, res ```
``` def f(d): try: return next((x, d[x]) for x in t if x in d) except StopIteration: return () ```
Return the key name and value found in a dictionary given a tuple of names
[ "", "python", "" ]
I have a huge list like ``` ['a', '2'] ['a', '1'] ['b', '3'] ['c', '2'] ['b', '1'] ['a', '1']['b', '1'] ['c', '2']['b', '3'] ['b', '1'] ``` I want to walk through this and collect, for each distinct first item, the list of second items: ``` {a:[2,1,1] b:[3,1,1,3,1] c:[2,2]} ```
``` data = [['a','2'],['a','1'],['b','3'],['c','2'],['b','1'],['a','1'],['b','1'],['c','2'],['b','3'],['b','1']] result = {} for key, value in data: result.setdefault(key, []).append(value) ``` Outcome: ``` >>> result {'a': ['2', '1', '1'], 'c': ['2', '2'], 'b': ['3', '1', '1', '3', '1']} ``` I prefer [dict.setdefault()](http://docs.python.org/2/library/stdtypes.html#dict.setdefault) over defaultdict because you end up with a normal dictionary, where attempting to access a key that doesn't exist raises an exception instead of giving a value (in this case an empty list).
``` from collections import defaultdict dd = defaultdict(list) for k, v in your_list: dd[k].append(v) ``` Alternatively your data is already sorted (remove the sorted step - or is otherwise okay to sort...) ``` from itertools import groupby from operator import itemgetter print {k: [v[1] for v in g] for k, g in groupby(sorted(d), itemgetter(0))} ```
Python list to dict
[ "", "python", "list", "dictionary", "" ]
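As a quick check of the `setdefault` technique from the chosen answer, using the same sample data:

```python
def group_pairs(pairs):
    """Group (key, value) pairs into a plain dict of lists via dict.setdefault."""
    result = {}
    for key, value in pairs:
        result.setdefault(key, []).append(value)
    return result

data = [['a', '2'], ['a', '1'], ['b', '3'], ['c', '2'], ['b', '1'],
        ['a', '1'], ['b', '1'], ['c', '2'], ['b', '3'], ['b', '1']]
grouped = group_pairs(data)
```

The result is an ordinary `dict`, so a missing key raises `KeyError` rather than silently yielding an empty list, which is the trade-off the answer highlights against `defaultdict`.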
I have a project in which I have to plot graphs using jQuery AJAX requests, by reading a constantly updated .txt file with Python. What I really want to do is keep the content of the file until the next data is written over it. However, when I try to read this data file (since the Python script constantly overwrites it with new info), **sadly, I'm not able to see the content every time.** This is what I've done so far: ``` mydatatosend = str(distance0)+","+str(time0)+","+str(distance1).....) print mydatatosend text_file = open("sensor_file.inc","a") text_file.write(mydatatosend) text_file.close() ``` For the record, I've tried an FTP upload-like solution, but I have a problem with the FTP server, so it's not an option for now. Also, any info on how to pass the parameters to a PHP file would also do the trick. I really appreciate your assistance, thanks!
This turned out to be simpler than I think. Problem is solved when I change the "a" to "w+r" Here is the working final code: ``` mydatatosend = str(distance0)+","+str(time0)+","+str(distance1).....) print mydatatosend text_file = open("sensor_file.inc","w+r") text_file.write(mydatatosend) text_file.close() ```
Have you tried this? ``` new_text = str(distance0)+","+str(time0)+","+str(distance1).....) print new_text text_file = open("sensor_file.inc","rw") previous_text = text_file.read() total_text = previous_text + new_text text_file.write(total_text) text_file.close() ``` This reads the data previously on the file, appends the new data to this and rewrites the data back to the file. Hope this helps!
Writing information on an always readable text file with Python
[ "", "jquery", "python", "python-2.7", "fopen", "" ]
I have a need for functions with default arguments that have to be set at function runtime (such as empty lists, values derived from other arguments or data taken from the database) and I am currently using the following pattern to deal with this: ``` def foo(bar, baz=None): baz = baz if baz else blar() # Stuff ``` Where `blar()` gives the proper default value of `baz` which might change during execution. However, the `baz = baz if baz else ...` line strikes me as inelegant. Does anyone else have a better method of avoiding the one-time binding of default function arguments? Small, cross-platform libraries installable by pip are acceptable replacements.
No, that's pretty much it. Usually you test for `is None` so you can safely pass in falsey values like `0` or `""` etc. ``` def foo(bar, baz=None): baz = baz if baz is not None else blar() ``` The old fashioned way is the two liner. Some people may prefer this ``` def foo(bar, baz=None): if baz is None: baz = blar() ```
You can replace ``` baz = baz if baz else blar() ``` with ``` baz = baz or blar() ``` if you're still happy with just testing for falsy values instead of `None`.
Dynamic default arguments in python functions
[ "", "python", "" ]
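A runnable version of the `is None` pattern from the accepted answer; `blar()` here is a stand-in for whatever computes the runtime default:

```python
def blar():
    # hypothetical stand-in for the poster's runtime default
    return []

def foo(bar, baz=None):
    if baz is None:          # check identity, not truthiness
        baz = blar()
    return bar, baz
```

Testing `is None` rather than truthiness is what lets callers pass falsy-but-valid values such as `0` or `""` without triggering the default.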
I am looking for a general and simple way to synchronize the methods of a python class that does not use asynchronous calls by itself. Some possibilities come to my mind: First, the use of decorators on all methods of the class itself: <http://code.activestate.com/recipes/577105-synchronization-decorator-for-class-methods/>. But I do not wish to change the class, so secondly, the use of a wrapper or a subclass synchronizing access to all subclass/core methods. I thought maybe, there is a generic way to synchronize access to a python object so that you do not accidentally miss a method from a superclass in the hierarchy (especially if it is changed later on). So, thirdly, you can probably use a generic proxy similar to: <http://code.activestate.com/recipes/366254-generic-proxy-object-with-beforeafter-method-hooks/> and use reentrant locks for each access. I would prefer the third option. It just bugs me that I did not find a recipe for this. Is there something off with this solution or is there a better one? **EDIT2:** The last option looks like the following snippet and was tested with codetidy.com/5911/ . The test is not proof that it works, just a light indication. Since this is not my everyday coding, it would help if someone more experienced could check to see if there are any bugs. ``` #!/usr/bin/env python import types from pprint import pformat from threading import RLock class SynchronizeMethodWrapper: """ Wrapper object for a method to be called. """ def __init__( self, obj, func, name, rlock ): self.obj, self.func, self.name = obj, func, name self.rlock = rlock assert obj is not None assert func is not None assert name is not None def __call__( self, *args, **kwds ): """ This method gets called before a method is called to sync access to the core object. """ with self.rlock: rval = self.func(*args, **kwds) return rval class SynchronizeProxy(object): """ Proxy object that synchronizes access to a core object methods and attributes that don't start with _. """ def __init__( self, core ): self._obj = core self.rlock = RLock() def __getattribute__( self, name ): """ Return a proxy wrapper object if this is a method call. """ if name.startswith('_'): return object.__getattribute__(self, name) else: att = getattr(self._obj, name) if type(att) is types.MethodType: return SynchronizeMethodWrapper(self, att, name, object.__getattribute__(self, "rlock")) else: return att def __setitem__( self, key, value ): """ Delegate [] syntax. """ name = '__setitem__' with self.rlock: att = getattr(self._obj, name) pmeth = SynchronizeMethodWrapper(self, att, name, self.rlock) pmeth(key, value) ``` **EDIT3:** I went with the SynchronizeProxy, and it seems to work so far. Since this solution comes closest to what I need, I will select my answer as a solution
I went with the SynchronizeProxy, and it seems to work so far. Since this solution comes closest to what I need, I will select my answer as a solution. If I experience any problems, I will update this answer. ``` #!/usr/bin/env python import types from pprint import pformat from threading import RLock class SynchronizeMethodWrapper: """ Wrapper object for a method to be called. """ def __init__( self, obj, func, name, rlock ): self.obj, self.func, self.name = obj, func, name self.rlock = rlock assert obj is not None assert func is not None assert name is not None def __call__( self, *args, **kwds ): """ This method gets called before a method is called to sync access to the core object. """ with self.rlock: rval = self.func(*args, **kwds) return rval class SynchronizeProxy(object): """ Proxy object that synchronizes access to a core object methods and attributes that don't start with _. """ def __init__( self, core ): self._obj = core self.rlock = RLock() def __getattribute__( self, name ): """ Return a proxy wrapper object if this is a method call. """ if name.startswith('_'): return object.__getattribute__(self, name) else: att = getattr(self._obj, name) if type(att) is types.MethodType: return SynchronizeMethodWrapper(self, att, name, object.__getattribute__(self, "rlock")) else: return att def __setitem__( self, key, value ): """ Delegate [] syntax. """ name = '__setitem__' with self.rlock: att = getattr(self._obj, name) pmeth = SynchronizeMethodWrapper(self, att, name, self.rlock) pmeth(key, value) ```
If you really need to, you can use the black magic of python metaclasses to dynamically add a decorator to each method of the class at class creation time. The following is a quick example of how you might do this. It creates a generic synchronizer metaclass, which you then subclass to create synchronizers for each specific class. Finally you subclass the original class that you want to synchronize and apply the synchronizer metaclass to it. Note I'm using python 3 metaclass syntax. ``` from threading import RLock # # Generic synchronizer # class SynchroMeta(type): def __init__(cls, name, bases, dct): super(SynchroMeta, cls).__init__(name, bases, dct) dct['__lock__'] = RLock() def sync_decorator(f): def inner(*args, **kwargs): with dct['__lock__']: print("Synchronized call") return f(*args, **kwargs) return inner for b in bases: if b.__name__ == cls.sync_object_name: for name, obj in b.__dict__.items(): # Synchronize any callables, but avoid special functions if hasattr(obj, '__call__') and not name.startswith('__'): print("Decorating: ", name) setattr(b, name, sync_decorator(obj)) # # Class you want to synchronize # class MyClass: def __init__(self, v): self.value = v def print_value(self): print("MyClass.value: ", self.value) # # Specific synchronizer for "MyClass" type # class MyClassSynchro(SynchroMeta): sync_object_name = "MyClass" # # Wrapper that uses the specific synchronizer metaclass # class MyClassWrapper(MyClass, metaclass=MyClassSynchro): pass if __name__ == "__main__": w = MyClassWrapper('hello') w.print_value() ```
synchronize access to python object
[ "", "python", "synchronization", "" ]
``` out_file = open('result.txt', 'w') A = [1,2,3,4,5,6,7,8,9,10] B = [11,12,13,14,15] for a in A: for b in B: result = a + b print (result, file = out_file) out_file.close() ``` The above program writes one out file (result.txt) consisting of all the results (50 elements) together. I want to write ten out files each consisting of 5 elements and named as follows: 1.txt 2.txt ... 10.txt The 1.txt file will put the sum of 1+11, 1+12, 1+13, 1+14, and 1+15. The 2.txt file will put the sum of 2+11, 2+12, 2+13, 2+14, and 2+15. ..... The 10.txt file will put the sum of 10+11, 10+12, 10+13, 10+14, and 10+15. Any help, please. Very simple program is expected. Again, when I wanted to name the out file using elements of N, why I could not? ``` A = [1,2,3,4,5,6,7,8,9,10] B = [11,12,13,14,15] N = ['a','b','c','d','e','f','g','h','i','j'] for a in A: results = [] for b in B: result = a + b results.append(result) for n in N: with open('{}.txt'.format(n),'w') as f: for res in results: f.write(str(res)+'\n') ```
``` A = [1,2,3,4,5,6,7,8,9,10] B = [11,12,13,14,15] for a in A: results = [] # define new list for each item in A #loop over B and collect the result in a list(results) for b in B: result = a + b results.append(result) #append the result to results #print results # uncomment this line to see the content of results filename = '{}.txt'.format(a) #generate file name, based on value of `a` #always open file using `with` statement as it automatically closes the file for you with open( filename , 'w') as f: #now loop over results and write them to the file for res in results: #we can only write a string to a file, so convert numbers to string using `str()` f.write(str(res)+'\n') #'\n' adds a new line ``` **Update:** You can use `zip()` here. `zip` return items on the same index from the sequences passed to it. ``` A = [1,2,3,4,5,6,7,8,9,10] B = [11,12,13,14,15] N = ['a','b','c','d','e','f','g','h','i','j'] for a,name in zip(A,N): results = [] for b in B: result = a + b results.append(result) filename = '{}.txt'.format(name) with open( filename , 'w') as f: for res in results: f.write(str(res)+'\n') ``` Help in `zip`: ``` >>> print zip.__doc__ zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)] Return a list of tuples, where each tuple contains the i-th element from each of the argument sequences. The returned list is truncated in length to the length of the shortest argument sequence. ```
``` A = [1,2,3,4,5,6,7,8,9,10] B = [11,12,13,14,15] for a in A: with open(str(a) + '.txt', 'w') as fout: fout.write('\n'.join(str(a + b) for b in B) ```
writing multiple out files from for loop
[ "", "python", "python-3.x", "" ]
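The `zip`-based version of the chosen answer can be exercised end to end; this sketch (in Python 3 syntax) writes into a throwaway directory so it does not touch the working directory:

```python
import os
import tempfile

A = [1, 2, 3]
B = [11, 12, 13, 14, 15]
N = ['a', 'b', 'c']

outdir = tempfile.mkdtemp()              # disposable output location
for a, name in zip(A, N):                # zip pairs each value of A with its file name
    path = os.path.join(outdir, '{}.txt'.format(name))
    with open(path, 'w') as f:
        for b in B:
            f.write(str(a + b) + '\n')   # one sum per line

with open(os.path.join(outdir, 'a.txt')) as f:
    a_lines = f.read().splitlines()
```

Each output file holds the five sums for one element of `A`, named after the matching element of `N`.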
I am creating Log file for the code but I am getting the following error : ``` [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] import mainLCF [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/mainLCF.py", line 10, in [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p') [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 1528, in basicConfig [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] hdlr = FileHandler(filename, mode) [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__ [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] StreamHandler.__init__(self, self._open()) [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] stream = open(self.baseFilename, self.mode) [Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] IOError: [Errno 13] Permission denied: '/genetic.log' ``` I have checked the permissions in the particular folder where I want to make the log but still getting the error . My code is : (name is mainLCF.py) ``` import logging import sys logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p') logging.debug("starting of Genetic Algorithm") sys.path.append("/home/ai/Desktop/home/ubuntu/LCF/ws_code") import blackboard from pyevolve import * def eval_func(chromosome): some function here ``` My system's file structure is : ``` / home ai Desktop home ubuntu LCF ws_code GA-LCF blackboard.py main-LCF.py ``` I am calling mainLCF.py from another function lcf.py which is in ws\_code .
You need to change the Logfile path by using logging.handlers python module . In my case I did the following stuff : ``` import logging from logging.handlers import RotatingFileHandler import blackboard WEBAPP_CONSTANTS = { 'LOGFILE': '/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/ga.log', } def getWebAppConstants(constant): return WEBAPP_CONSTANTS.get(constant, False) LOGFILE = getWebAppConstants('LOGFILE') log_handler = RotatingFileHandler(LOGFILE, maxBytes=1048576, backupCount=5) log_handler.setFormatter(logging.Formatter( '%(asctime)s %(levelname)s: %(message)s ' '[in %(pathname)s:%(lineno)d]')) applogger = logging.getLogger("GA") applogger.setLevel(logging.DEBUG) applogger.addHandler(log_handler) applogger.debug("Starting of Genetic Algorithm") from pyevolve import * def eval_func(chromosome): some function here ``` and it worked. However I still don't know the reason why it was earlier trying to make genetic.log at root directory .
Looks like logging tried to open the logfile as `/genetic.log`. If you pass filename as a keyword argument to `logging.basicConfig` it creates a `FileHandler` which passes it to `os.path.abspath` which expands the filename to an absolute path based on your current working dir. So you're either in your root dir or your code changes your current working dir.
Why am I getting IOError: [Errno 13] Permission denied?
[ "", "python", "python-2.7", "permissions", "permission-denied", "ioerror", "" ]
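The gist of the accepted fix, building the log file name from an absolute path so the handler does not depend on the current working directory, can be sketched like this (logger name and format are illustrative, and a temp directory stands in for the real log location):

```python
import logging
import os
import tempfile

# an absolute path; the original error came from logging trying to open '/genetic.log'
log_path = os.path.join(tempfile.mkdtemp(), 'genetic.log')

logger = logging.getLogger('GA-sketch')
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

logger.debug('starting of Genetic Algorithm')
handler.flush()

with open(log_path) as f:
    logged = f.read()
```

Using a dedicated logger plus an explicit handler also avoids `logging.basicConfig`'s behavior of resolving a relative filename against whatever the process's current directory happens to be.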
What is the easiest way to create a delay (or parking) queue with Python, Pika and RabbitMQ? I have seen similar [questions](https://stackoverflow.com/questions/4444208/delayed-message-in-rabbitmq), but none for Python. I find this a useful idea when designing applications, as it allows us to throttle messages that need to be re-queued. There is always the possibility that you will receive more messages than you can handle; maybe the HTTP server is slow, or the database is under too much stress. I also found it very useful when something went wrong in scenarios where there is zero tolerance for losing messages: while re-queuing messages that could not be handled may solve that, it can also cause problems where the message is queued over and over again, potentially causing performance issues and log spam.
I found this extremely useful when developing my applications, as it gives you an alternative to simply re-queuing your messages. This can easily reduce the complexity of your code, and is one of many powerful hidden features in RabbitMQ. **Steps** First we need to set up two basic channels, one for the main queue, and one for the delay queue. In my example at the end, I include a couple of additional flags that are not required, but make the code more reliable, such as `confirm delivery`, `delivery_mode` and `durable`. You can find more information on these in the RabbitMQ [manual](http://www.rabbitmq.com/tutorials/amqp-concepts.html). After we have set up the channels we add a binding to the main channel that we can use to send messages from the delay channel to our main queue. ``` channel.queue_bind(exchange='amq.direct', queue='hello') ``` Next we need to configure our delay channel to forward messages to the main queue once they have expired. ``` delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={ 'x-message-ttl' : 5000, 'x-dead-letter-exchange' : 'amq.direct', 'x-dead-letter-routing-key' : 'hello' }) ``` * [x-message-ttl](https://www.rabbitmq.com/ttl.html) *(Message - Time To Live)* This is normally used to automatically remove old messages in the queue after a specific duration, but by adding two optional arguments we can change this behaviour, and instead have this parameter determine in milliseconds how long messages will stay in the delay queue. * [x-dead-letter-routing-key](http://www.rabbitmq.com/dlx.html) This variable allows us to transfer the message to a different queue once it has expired, instead of the default behaviour of removing it completely. * [x-dead-letter-exchange](http://www.rabbitmq.com/dlx.html) This variable determines which exchange is used to transfer the message from the hello\_delay queue to the hello queue. **Publishing to the delay queue** When we are done setting up all the basic Pika parameters, you simply send a message to the delay queue using basic publish. ``` delay_channel.basic_publish(exchange='', routing_key='hello_delay', body="test", properties=pika.BasicProperties(delivery_mode=2)) ``` Once you have executed the script you should see the following queues created in your RabbitMQ management module. ![enter image description here](https://i.stack.imgur.com/jWEDR.png) **Example.** ``` import pika connection = pika.BlockingConnection(pika.ConnectionParameters( 'localhost')) # Create normal 'Hello World' type channel. channel = connection.channel() channel.confirm_delivery() channel.queue_declare(queue='hello', durable=True) # We need to bind this channel to an exchange, that will be used to transfer # messages from our delay queue. channel.queue_bind(exchange='amq.direct', queue='hello') # Create our delay channel. delay_channel = connection.channel() delay_channel.confirm_delivery() # This is where we declare the delay, and routing for our delay channel. delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={ 'x-message-ttl' : 5000, # Delay until the message is transferred in milliseconds. 'x-dead-letter-exchange' : 'amq.direct', # Exchange used to transfer the message from A to B. 'x-dead-letter-routing-key' : 'hello' # Name of the queue we want the message transferred to. }) delay_channel.basic_publish(exchange='', routing_key='hello_delay', body="test", properties=pika.BasicProperties(delivery_mode=2)) print " [x] Sent" ```
You can use RabbitMQ official plugin: **x-delayed-message** . Firstly, download and copy the [ez file](http://www.rabbitmq.com/community-plugins/v3.5.x/rabbitmq_delayed_message_exchange-0.0.1-rmq3.5.x-9bf265e4.ez) into *Your\_rabbitmq\_root\_path/plugins* Secondly, enable the plugin (do not need to restart the server): ``` rabbitmq-plugins enable rabbitmq_delayed_message_exchange ``` Finally, publish your message with "x-delay" headers like: ``` headers.put("x-delay", 5000); ``` **Notice:** It does not ensure your message's safety, cause if your message expires just during your rabbitmq-server's downtime, unfortunately the message is lost. So **be careful** when you use this scheme. Enjoy it and more info in [rabbitmq-delayed-message-exchange](https://github.com/rabbitmq/rabbitmq-delayed-message-exchange)
How to create a delayed queue in RabbitMQ?
[ "", "python", "queue", "rabbitmq", "delay", "pika", "" ]
I'm pretty confused about how to convert this program to a list comprehension. Any tips? Here's the original code I wrote (also, I'm new to python, so you can tell me if this sucks, I don't mind) The code is supposed to use the built in chr() function to return a string that when printed, is an ascii chart, four entries across.(only using ascii values from 32 to 126 inclusive) ``` def GiveAsciiChart5(): count = 4 for value in range(32, 127): count += 1 print " |ASCII Value|: ", "|",(value),"|", " |Character|: ", "|",chr(value),"|", if count % 4 == 0: print "\n", #print " |ASCII Value|: ", "|",(value),"|", " |Character|: ", "|",chr(value),"|", def Main(): GiveAsciiChart5() if __name__ == '__main__': Main() ``` Here is my attempt at making a list comprehension...it obviously doesn't work. I'm pretty confused at where to put the count variable in? I'm trying to reduce it using list comprehensions and str.join()s. ``` def GiveAsciiChart6(): count = 3 a = [ (i, chr(i)) for i in range(32,127)] for count in a: if count % 4 == 0: print "\n" print " |ASCII Value|: ", "|",(i),"|", " |Character|: ", "|",chr(i),"|", def Main(): GiveAsciiChart6() if __name__ == '__main__': Main() ``` I know it has a ways to go and is pretty awful at present. I'm just pretty stuck and confused at how to format it in this way...any tips are appreciated, not expecting complete solutions as I would like to learn and understand what is going on. Thanks!
Here's an equivalent solution using a list comprehension and the [`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate) built-in function: ``` def GiveAsciiChart5(): vals = [(value, chr(value)) for value in range(32, 127)] for i, (v, c) in enumerate(vals): print " |ASCII Value|: ", "|", v, "|", " |Character|: ", "|", c, "|", if not i % 4: print "\n", ``` Notice that the comprehension is only useful for generating the values, the printing part is done afterwards. By using `enumerate()` and making good use of tuple unpacking I'm skipping the need for a counter.
So as to not give the answer entirely away, I annotated the code. ``` for count in a: #the type of count would be a tuple containing (i, chr(i)) if count % 4 == 0: #so then what should this statement be? print "\n" print " |ASCII Value|: ", "|",(i),"|", " |Character|: ", "|",chr(i),"|", # ^ remember, we are iterating through a list of tuples, the element # our iteration is on is called count (what's 'i' doing here?) ```
converting program to list comprehension
[ "", "python", "list-comprehension", "" ]
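The chosen answer's point is that the comprehension should only build data, with printing handled separately; the four-across grouping can be checked without printing at all:

```python
# the comprehension only builds the (value, character) pairs
table = [(value, chr(value)) for value in range(32, 127)]

# lay the chart out four entries across, as the original loop's counter did
rows = [table[i:i + 4] for i in range(0, len(table), 4)]
```

Slicing in steps of four replaces the manual counter entirely, and `enumerate` (as in the chosen answer) would serve the same role when iterating for output.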
I am trying to convert codons within my string into aminoacids. This is what I currently have: ``` def converteRNAmParaAminoacidos(rna): dicionario = {'uuu':'F', 'uuc':'F','uua':'L', 'uug':'L', 'cuu':'L', 'cuc':'L', 'cua':'L', 'cug':'L', 'auu':'I', 'auc':'I', 'aua':'I', 'aug':'M', 'guu':'V', 'guc':'V', 'gua':'V', 'gug':'V', 'ucu':'S', 'ucc':'S', 'uca':'S', 'ucg':'S', 'ccu':'P', 'ccc':'P', 'cca':'P', 'ccg':'P', 'acu':'T', 'acc':'T', 'aca':'T', 'acg':'T', 'gcu':'A', 'gcu':'A', 'gcc':'A', 'gca':'A', 'gcg':'A', 'uau':'T', 'uac':'T', 'uaa':'*', 'uag':'*', 'cau':'H', 'cac':'H', 'caa':'G', 'cag':'G', 'aau':'N', 'aac':'N', 'aaa':'K', 'aag':'K', 'gau':'D', 'gac':'D', 'gaa':'E', 'gag':'E', 'ugu':'C', 'ugc':'C', 'uga':'*', 'ugg':'W', 'cgu':'R', 'cgc':'R', 'cga':'R', 'cgg':'R', 'agu':'S', 'agc':'S', 'aga':'R', 'agg':'R', 'ggu':'G', 'ggc':'G', 'gga':'G', 'ggg':'G'} for i,j in dicionario.iteritems(): aminoacidos=rna.replace(i,j) return aminoacidos ``` However, it is not working, it only replaces the last codon.
In your loop, you don't modify `aminoacidos`. You just keep re-setting it each iteration. While replacement may *seem* like it will work, the order of the replacements will make a difference. Take `ucuuuc` as an example. If you first replace `ucu`, you'll end up with `Suuc`. However, if you find `cuu` first, you'll get `uLuc`, which will break subsequent replacements. You should instead iterate over the string in chunks of three and look up each string in your dictionary: ``` # I'd choose a better variable name dicionario = {'uuu':'F', 'uuc':'F','uua':'L', 'uug':'L', 'cuu':'L', 'cuc':'L', 'cua':'L', 'cug':'L', 'auu':'I', 'auc':'I', 'aua':'I', 'aug':'M', 'guu':'V', 'guc':'V', 'gua':'V', 'gug':'V', 'ucu':'S', 'ucc':'S', 'uca':'S', 'ucg':'S', 'ccu':'P', 'ccc':'P', 'cca':'P', 'ccg':'P', 'acu':'T', 'acc':'T', 'aca':'T', 'acg':'T', 'gcu':'A', 'gcu':'A', 'gcc':'A', 'gca':'A', 'gcg':'A', 'uau':'T', 'uac':'T', 'uaa':'*', 'uag':'*', 'cau':'H', 'cac':'H', 'caa':'G', 'cag':'G', 'aau':'N', 'aac':'N', 'aaa':'K', 'aag':'K', 'gau':'D', 'gac':'D', 'gaa':'E', 'gag':'E', 'ugu':'C', 'ugc':'C', 'uga':'*', 'ugg':'W', 'cgu':'R', 'cgc':'R', 'cga':'R', 'cgg':'R', 'agu':'S', 'agc':'S', 'aga':'R', 'agg':'R', 'ggu':'G', 'ggc':'G', 'gga':'G', 'ggg':'G'} def converteRNAmParaAminoacidos(rna): aminoacidos = '' for i in range(0, len(rna), 3): aminoacidos += dicionario[rna[i:i + 3]] return aminoacidos ``` Or more concisely: ``` def converteRNAmParaAminoacidos(rna): return ''.join([dicionario[rna[i:i + 3]] for i in range(0, len(rna), 3]) ```
In Python, the `replace` function returns a new string. What you're doing is taking the original string, doing a replacement and storing the result as `aminoacidos`. When you call `replace` again, you're not keeping the result from the previous iterations since you're calling `replace` on the original, un-altered string again. You need to re-use `aminoacidos` like so: ``` aminoacidos = rna for tripleto, cambio in dicionario.iteritems(): aminoacidos = aminoacidos.replace(tripleto, cambio) return aminoacidos ```
Replace string based on a dictionary
[ "", "python", "replace", "" ]
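The chunked lookup from the chosen answer avoids the overlapping-replacement problem entirely; here is a sketch with a deliberately tiny codon table (a subset, not the full dictionary from the question):

```python
# a small subset of the codon table, just enough for the sample strings below
dicionario = {'aug': 'M', 'uuu': 'F', 'ucu': 'S', 'cuu': 'L', 'uaa': '*'}

def traduz(rna):
    """Translate an RNA string three characters at a time."""
    return ''.join(dicionario[rna[i:i + 3]] for i in range(0, len(rna), 3))
```

`'ucuuuu'` translates cleanly to `'SF'` even though it also contains the substring `'cuu'`, which is exactly the case where chained `replace()` calls go wrong.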
Suppose a well-structured OOP Python application, where every call to a method is wrapped in a try block. Now suppose that I'm debugging this application and I want the exceptions to actually be thrown! It would be nigh impossible to replace every `try:` line with `if True:` and to comment out (with `"""`) the `except:` portions, just to debug. Is there any way to tell the Python interpreter that exceptions thrown by a specific portion of code should stop program execution and print the exception information to stdout? Python 2.7.3 or 3.2.3 on Kubuntu Linux.
> "Suppose a well-structured OOP Python application, where every call to a method is wrapped in a try block ... " this doesn't sound well-structured to me at all. One of the basic principles of exception handling is **ONLY HANDLE EXCEPTIONS THAT YOU KNOW HOW TO DEAL WITH**. This is the driving principle behind the common "don't use a bare except" statement that you'll see: ``` try: do_something() except: #BAD BAD BAD react_to_exception() ``` "Thrown by a specific portion of code" ... How specific a section of code are we talking about? If it's a single block, you can always re-raise: ``` try: do_something() except ValueError as e: raise e # or `raise` or `raise SomeOtherError() from e` in more modern pythons. ```
This sound like a job for a debugger. I'm only familiar with the debugger for [PyCharm](http://www.jetbrains.com/pycharm/), with which you can set an exception breakpoint for any exception.
Don't catch exceptions, even from within a try block
[ "", "python", "exception", "" ]
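The re-raising idiom from the first answer can be demonstrated without a debugger; the function names here are made up for illustration:

```python
def do_something():
    raise ValueError('boom')

def careful_call():
    try:
        do_something()
    except ValueError:
        # we only handle what we know how to deal with; here we don't, so propagate
        raise

try:
    careful_call()
except ValueError as exc:
    caught = str(exc)
```

A bare `raise` inside an `except` block re-raises the active exception with its original traceback, so the failure still surfaces at the top level during debugging.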
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all. I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
Set the `MKL_NUM_THREADS` environment variable to 1. As you might have guessed, this environment variable controls the behavior of the Math Kernel Library which is included as part of Enthought's `numpy` build. I just do this in my startup file, .bash\_profile, with `export MKL_NUM_THREADS=1`. You should also be able to do it from inside your script to have it be process specific.
Only hopefully this fixes all scenarios and system you may be on. 1. Use `numpy.__config__.show()` to see if you are using OpenBLAS or MKL From this point on there are a few ways you can do this. 2.1. The terminal route `export OPENBLAS_NUM_THREADS=1` or `export MKL_NUM_THREADS=1` 2.2 (This is my preferred way) In your python script `import os` and add the line `os.environ['OPENBLAS_NUM_THREADS'] = '1'` or `os.environ['MKL_NUM_THREADS'] = '1'`. *NOTE* when setting `os.environ[VAR]` the number of threads must be a string! Also, you may need to set this environment variable *before* importing numpy/scipy. There are probably other options besides openBLAS or MKL but step 1 will help you figure that out.
How do you stop numpy from multithreading?
[ "", "python", "multithreading", "numpy", "" ]
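A minimal sketch of the environment-variable approach; note the variables must be set before `numpy` is first imported for the BLAS library to pick them up:

```python
import os

# cover both common BLAS backends; the unused one is simply ignored
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OPENBLAS_NUM_THREADS'] = '1'

# only after this point would a real script import numpy
```

The values are strings, not integers, since `os.environ` only accepts string values.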
There is a "+=" operator for, for example, int. ``` a = 5 a += 1 b = a == 6 # b is True ``` Is there an "and=" operator for bool? ``` a = True a and= 5 > 6 # a is False a and= 5 > 4 # a is still False ``` I know this 'and=' operator would correspond to: ``` a = True a = a and 5 > 6 # a is False a = a and 5 > 4 # a is still False ``` But I do this operation very often, and I don't think it looks very neat. Thanks
Yes - you can use `&=`. ``` a = True a &= False # a is now False a &= True # a is still False ``` You can similarly use `|=` for "or=". It should be noted (as in the comments below) that this is actually a bitwise operation; it will have the expected behavior **only** if `a` starts out as a Boolean, and the operations are only carried out with Booleans.
nrpeterson showed you how to use `&=` with boolean. I show only what can happend if you mix boolean and integer ``` a = True a &= 0 # a is 0 if a == False : print "hello" # "hello" a = True a &= 1 # a is 1 if a == False : print "hello" # nothing a = True a &= 2 # a is 0 (again) if a == False : print "hello" # "hello" a = True a &= 3 # a is 1 if a == False : print "hello" # nothing ```
Is there a "and=" operator for boolean?
[ "", "python", "boolean", "operators", "" ]
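The caveat from both answers, that `&=` is a bitwise operation and only behaves like an "and=" when every operand is a real bool, is easy to verify:

```python
a = True
a &= False            # with two bools this is the hoped-for "and="
pure_bool = a

b = True
b &= (5 > 4)          # comparisons produce bools, so chaining them is safe
with_comparison = b

c = True
c &= 2                # mixing in an int: True & 2 == 0, an int, not False
mixed_int = c
```

The last case is why the second answer warns about mixing booleans and integers: the result is falsy, but it is the int `0`, not the bool `False`.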
I want to run an SQL query with special characters in it. However, I don't want to use SQL parameters. Is there a way to run the following query? ``` SqlString := 'Select * from Table1 where Name LIKE '`1234567890-=\]['';/.,<>?:"{}|+_)(*&^%$#@!~%' FSQLQuery.SQL.Clear; FSQLQuery.SQL.Add( SqlString ); FSQLQuery.Open; ``` Delphi considers this query parameterised due to the colon sign in '`1234567890-=]['';/.,<>?**:**"{}|+\_)(\*&^%$#@!~%' and throws (No value for parameter '{}|+\_)(\*&^%$#@!~%').
You should put something like this: ``` sqlString := 'SELECT * FROM Table1 WHERE Name LIKE ''`1234567890-=\]['''';/.,<>?:"{}|+_)(*&^%$#@!~%'' '; FSQLQuery.ParamCheck := False; //<<It MUST be prior than SetSql. FSQLQuery.SQL.Clear; FSQLQuery.SQL.Add( sqlString ); FSQLQuery.Open; ```
If `FSQLQuery` is `TADOQuery` then the easiest way to avoid this problem (colon char inside `SQL.Tex`) is to use `FSQLQuery.Connection.Execute(FSQLQuery.SQL.Text)` instead of `FSQLQuery.Open`. In this case you call ADO method directly without having Delphi parse your `Sql.Text`.
How do I avoid Delphi interpreting special characters as parameters in SQL?
[ "", "sql", "delphi", "" ]
How can I get (salary of Mr. Smith in the first row + salary of Mr. Wong in the second row) in my Firebird SQL, given a table like the following? I tried ``` Select sum(column a + column b) ``` but it didn't work; I get Null. ``` Name Salary Smith 4000 Wong 3000 John 3300 ... .... ```
Try to use the `coalesce` operator:

```
select sum(coalesce(columna, 0) + coalesce(columnb, 0))
```

because if any part is null, the result will be null.

If you're talking of rows instead of columns:

```
SELECT SUM(Salary)
FROM yourTable
WHERE Name IN ('Smith', 'Wong')
GROUP BY Name
```
Try below:

```
SELECT name,salary FROM table WHERE name 'smith'
UNION
SELECT name,salary FROM table WHERE name 'wong'
```
sum of two different rows(salary) in a table
[ "", "sql", "firebird", "" ]
The user needs to input a set of coordinates like so (0,0), (0,1), (1,1), (1,0)

The code I wrote for that looks like this:

```
def get_coords():
    #user_input = raw_input("Enter a list of points. For example (0,0) (0,1) (1,1) (1,0)\n")
    print "Enter a list of points. For example (0,0) (0,1) (1,1) (1,0)\n"
    uin = sys.stdin.readline().strip()
    try:
        #coords = map(int, uin.split(' '))
        coords = [tuple(map(int, point.replace('(', '').replace(')', '').split(','))) for point in uin.split(' ')]
        return coords
    except ValueError:
        print "Please enter the coordinates in the format mentioned"
        exit()
```

I'm sure there is a better, more elegant way to do this?
Replace the spaces with `','` and then apply `ast.literal_eval`:

```
>>> strs = '(0,0) (0,1) (1,1) (1,0)'
>>> from ast import literal_eval
>>> literal_eval(strs.replace(' ',','))
((0, 0), (0, 1), (1, 1), (1, 0))
```

Using regex, this would work on any amount of spaces:

```
>>> import re
>>> strs = '(0, 0) (0, 1) ( 1, 1) ( 1, 0)'
>>> literal_eval(re.sub('(\))(\s+)(\()','\g<1>,\g<3>',strs))
((0, 0), (0, 1), (1, 1), (1, 0))
```
```
>>> from ast import literal_eval
>>> uin = raw_input("coords: ").split()
coords: (0,0) (0,1) (1,1) (1,0)
>>> uin
['(0,0)', '(0,1)', '(1,1)', '(1,0)']
>>> coords = [literal_eval(coord) for coord in uin]
>>> coords
[(0, 0), (0, 1), (1, 1), (1, 0)]
```

In your file, you can just write this. Replace the prompt with whatever you like.

```
from ast import literal_eval

try:
    coords = [literal_eval(coord) for coord in raw_input("coords: ").split()]
except ValueError:
    print "Please enter the coordinates in the format mentioned"
    exit()
```

`literal_eval()` [raises an exception](https://stackoverflow.com/a/15197698/691859) if the code isn't safe. [See the docs.](http://docs.python.org/2/library/ast.html#ast.literal_eval) Regular `eval()` is bad because it can execute arbitrary code which your user inputs!
input a list of 2D coordinates in python from user
[ "", "python", "input", "coordinates", "" ]
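A minimal variant of the accepted `ast.literal_eval` approach from the record above, condensed into one helper (the name `parse_coords` is ours, not the original answer's, and a literal string stands in for user input):

```python
from ast import literal_eval

def parse_coords(text):
    """Parse whitespace-separated '(x,y)' pairs into a list of int tuples."""
    return [literal_eval(token) for token in text.split()]

coords = parse_coords("(0,0) (0,1) (1,1) (1,0)")
assert coords == [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Unlike `eval`, `literal_eval` only accepts Python literals, so malformed or malicious input raises an exception instead of executing.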
I am developing a program in which I want a window that will display the output normally written to the terminal (like a package manager does). For example, if I give the install command, the installation progress should be output to my window and not to the terminal. Is there a way to do this in Python Gtk? I am using Ubuntu 13.04.
If you are on Linux (as you state), something like this should work:

```
import gtk
import gobject
import pango
import os
from subprocess import Popen, PIPE
import fcntl

wnd = gtk.Window()
wnd.set_default_size(400, 400)
wnd.connect("destroy", gtk.main_quit)

textview = gtk.TextView()
fontdesc = pango.FontDescription("monospace")
textview.modify_font(fontdesc)
scroll = gtk.ScrolledWindow()
scroll.add(textview)
exp = gtk.Expander("Details")
exp.add(scroll)
wnd.add(exp)
wnd.show_all()

sub_proc = Popen("ping -c 10 localhost", stdout=PIPE, shell=True)
sub_outp = ""

def non_block_read(output):
    fd = output.fileno()
    fl = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
    try:
        return output.read().decode("utf-8")
    except:
        return ''

def update_terminal():
    textview.get_buffer().insert_at_cursor(non_block_read(sub_proc.stdout))
    return sub_proc.poll() is None

gobject.timeout_add(100, update_terminal)

gtk.main()
```

The nonblocking read idea is from [here](http://chase-seibert.github.io/blog/2012/11/16/python-subprocess-asynchronous-read-stdout.html).

Using a Label to display the text:

```
import gtk
import gobject
import os
from subprocess import Popen, PIPE
import fcntl

wnd = gtk.Window()
wnd.set_default_size(400, 400)
wnd.connect("destroy", gtk.main_quit)

label = gtk.Label()
label.set_alignment(0, 0)
wnd.add(label)
wnd.show_all()

sub_proc = Popen("ping -c 10 localhost", stdout=PIPE, shell=True)
sub_outp = ""

def non_block_read(output):
    ''' even in a thread, a normal read with block until the buffer is full '''
    fd = output.fileno()
    fl = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
    try:
        return output.read().decode("utf-8")
    except:
        return ''

def update_terminal():
    label.set_text(label.get_text() + non_block_read(sub_proc.stdout))
    return sub_proc.poll() is None

gobject.timeout_add(100, update_terminal)

gtk.main()
```
I've been trying to do this and really struggling. I couldn't get the first example working and moved quickly onto the 2nd. Using the 2nd example above (Using Label to display text), I found that it works fine with Python 2.7, but I'm trying to use Python3 and some things just don't work. I struggled for ages trying to convert to Python3 and had to change some of the imports and change gtk to Gtk, which got it mostly working, but the thing that really stumped me was due to Python3 using utf-8 codes. I finally got it working by modifying the `non_block_read` function, changing the returned text from utf-8 to string and coping with the return None case. I hope this helps. For completeness I attach my working code:

```
#!/usr/bin/env python3
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
from gi.repository import GObject
import os
from subprocess import Popen, PIPE
import fcntl

wnd = Gtk.Window()
wnd.set_default_size(400, 400)
wnd.connect("destroy", Gtk.main_quit)

label = Gtk.Label()
label.set_alignment(0, 0)
wnd.add(label)
wnd.show_all()

sub_proc = Popen("ping -c 10 localhost", stdout=PIPE, shell=True)
sub_outp = ""

def non_block_read(output):
    ''' even in a thread, a normal read with block until the buffer is full '''
    fd = output.fileno()
    fl = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
    op = output.read()
    if op == None:
        return ''
    return op.decode('utf-8')

def update_terminal():
    label.set_text(label.get_text() + non_block_read(sub_proc.stdout))
    return sub_proc.poll() is None

GObject.timeout_add(100, update_terminal)

Gtk.main()
```
Show terminal output in a gui window using python Gtk
[ "", "python", "gtk", "" ]
I have a list of `datetime.date`s and I need to check if each date is from the next consecutive month. I hope it's clear what I mean from the code:

```
import datetime
from unittest import TestCase


def is_consecutive(dates):
    # TODO
    return


class DatesTestCase(TestCase):

    def test_consecutive(self):
        self.assertTrue(is_consecutive([datetime.date(2010, 10, 3),
                                        datetime.date(2010, 11, 8),
                                        datetime.date(2010, 12, 1),
                                        datetime.date(2011, 01, 11)]))

    def test_not_consecutive(self):
        self.assertFalse(is_consecutive([datetime.date(2010, 7, 6),
                                         datetime.date(2010, 8, 24),
                                         datetime.date(2010, 3, 5),
                                         datetime.date(2010, 10, 25)]))
        self.assertFalse(is_consecutive([datetime.date(2010, 10, 6),
                                         datetime.date(2010, 11, 2),
                                         datetime.date(2010, 12, 9),
                                         datetime.date(2010, 01, 20)]))
```

How would you implement `is_consecutive`? Many thanks for any help (advice, hints, code or anything helpful)!
Loop through each item of the list except the last, and compare it to the next item. Two items are consecutive if the month of the second is exactly one greater than the month of the first, or if the month of the second is 1 and the year of the second is exactly one greater than the year of the first. Return `False` at the first failure, otherwise return `True` at the end.

EDIT: In the second case, obviously the month of the first must be 12, in addition to the month of the second being 1. Code updated.

EDIT 2: And in the first case, obviously the year should be the same. That's what you get for writing too quickly.

F'rinstance:

```
#!/usr/bin/python

from datetime import date

def is_consecutive(datelist):
    for idx, my_date in enumerate(datelist[:-1]):
        if ((datelist[idx + 1].month - my_date.month == 1 and
             datelist[idx + 1].year == my_date.year) or
            (datelist[idx + 1].month == 1 and my_date.month == 12 and
             datelist[idx + 1].year - my_date.year == 1)):
            continue
        else:
            return False
    return True

print is_consecutive([date(2010, 10, 3), date(2010, 11, 8),
                      date(2010, 12, 1), date(2011, 1, 11)])
print is_consecutive([date(2010, 7, 6), date(2010, 8, 24),
                      date(2010, 3, 5), date(2010, 10, 25)])
```

An alternative implementation, possibly easier to follow but basically doing the same thing:

```
def is_consecutive(datelist):
    for idx, my_date in enumerate(datelist[:-1]):
        month_diff = datelist[idx + 1].month - my_date.month
        year_diff = datelist[idx + 1].year - my_date.year
        if ((month_diff == 1 and year_diff == 0) or
            (month_diff == -11 and year_diff == 1)):
            continue
        else:
            return False
    return True
```
This works on your examples and should work generally:

```
def is_consecutive(data):
    dates=data[:]
    while len(dates)>1:
        d2=dates.pop().replace(day=1)
        d1=dates[-1].replace(day=1)
        d3=d1+datetime.timedelta(days=32)
        if d3.month!=d2.month or d3.year!=d2.year:
            return False
    return True
```
Determine consecutive dates
[ "", "python", "datetime", "python-2.x", "" ]
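For illustration, here is an alternative formulation of the month test from the record above - counting months arithmetically instead of comparing month/year pairs. This is a sketch in Python 3 (the question's code is Python 2), not the accepted answer's exact code:

```python
import datetime

def is_consecutive(dates):
    """True if each date falls in the month immediately after the previous one."""
    for d1, d2 in zip(dates, dates[1:]):
        # Number each month since year 0 and require a difference of exactly 1.
        if (d2.year * 12 + d2.month) - (d1.year * 12 + d1.month) != 1:
            return False
    return True

assert is_consecutive([datetime.date(2010, 10, 3), datetime.date(2010, 11, 8),
                       datetime.date(2010, 12, 1), datetime.date(2011, 1, 11)])
assert not is_consecutive([datetime.date(2010, 7, 6), datetime.date(2010, 8, 24),
                           datetime.date(2010, 3, 5), datetime.date(2010, 10, 25)])
```

The single month counter makes the December-to-January case fall out naturally, with no special-casing of year boundaries.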
I have a `stored procedure` inside which I am creating a temporary table. My `vb` code will execute this stored procedure. Is it possible to access the data in the temp table in `vb6` after sp execution?
Use a global scope temporary table (they start with `##` rather than `#`). These are shared between sessions. They go out of scope when the session that created them ends AND no other session is referring to them.

* [Temporary Tables in SQL Server](https://www.simple-talk.com/sql/t-sql-programming/temporary-tables-in-sql-server/)
* [Creating and Modifying Table Basics](http://msdn.microsoft.com/en-us/library/ms177399%28v=SQL.105%29.aspx)
* [SQL Server 2005 and temporary table scope](https://stackoverflow.com/questions/892351/sql-server-2005-and-temporary-table-scope)

Another option is a persistent temporary table (prefixed by `TempDB..`).
From [Books Online](http://msdn.microsoft.com/en-us/library/ms174979.aspx):

> A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
how to access the temporary table created inside stored procedure [vb6]
[ "", "sql", "stored-procedures", "vb6", "" ]
I apologize for being a newbie, but why does this loop run forever?

```
for i in range (1,3):
    while i < 5:
        print ('ha')
```
Your code is basically doing the following:

> For each value in 1, 2 and 3 do the following: while the current value is less than 5, print 'ha'

As you can read it, the value will start at 1 and, while 1 is less than 5, 'ha' will be printed. As the value is never updated, 1 will always be less than 5, so you will get infinite 'ha's.

You probably confused a `while` statement with an `if` statement. This makes more sense:

> For each value in 1, 2 and 3 do the following: if the current value is less than 5, print 'ha'

A tip for the starter: try always to read/write your algorithms in your natural language and then translate them into a programming language. Indeed, programming languages are hard because they are evolving toward a more human form, and they need you to think more mathematically and in a more structured way.
`for i in range(1, 3)` means it starts with the number `1`. You're saying, with the number 1:

```
while 1 < 5:
    print('ha')
```

Which will go on forever, because one is less than five, and it always will be. Thus the for-loop will never reach `2` and `3`, as it hasn't finished with `1` yet.
While inside for loop produces infinite loop?
[ "", "python", "for-loop", "while-loop", "infinite-loop", "" ]
I'm doing

```
item['desc'] = site.select('a/text()').extract()
```

but this will be printed like this:

```
[u'\n A mano libera\n ']
```

What must I do to trim and remove strange chars like `[u'\n`, the trailing space and `']`? I cannot trim (strip):

```
exceptions.AttributeError: 'list' object has no attribute 'strip'
```

and if converting to string and then stripping, the result was the string above, which I suppose to be in UTF-8.
The html page may very well contain these whitespace characters. What you retrieve is a **list** of unicode strings, which is why you can't simply call `strip` on it.

If you want to strip these whitespace characters from *each* string in this list, you can run the following:

```
>>> [s.strip() for s in [u'\n A mano libera\n ']]
[u'A mano libera']
```

If only the first element matters to you, then simply do:

```
>>> [u'\n A mano libera\n '][0].strip()
u'A mano libera'
```
There's a nice solution to this using [Item Loaders](http://doc.scrapy.org/en/0.16/topics/loaders.html). Item Loaders are objects that get data from responses, process the data and build Items for you.

Here's an example of an Item Loader that will strip the strings and return the first value that matches the XPath, if any:

```
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import MapCompose, TakeFirst

class MyItemLoader(XPathItemLoader):
    default_item_class = MyItem
    default_input_processor = MapCompose(lambda string: string.strip())
    default_output_processor = TakeFirst()
```

And you use it like this:

```
def parse(self, response):
    loader = MyItemLoader(response=response)
    loader.add_xpath('desc', 'a/text()')
    return loader.load_item()
```
Scrapy: Why extracted strings are in this format?
[ "", "python", "scrapy", "" ]
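The core of both answers above - strip each string in the extracted list rather than the list itself - can be checked without Scrapy at all; a plain-Python sketch of that step (the sample string follows the question):

```python
def strip_all(values):
    """Strip surrounding whitespace from every string in a list."""
    return [s.strip() for s in values]

extracted = [u'\n A mano libera\n ']
assert strip_all(extracted) == [u'A mano libera']

# If only the first matched value matters:
first = extracted[0].strip() if extracted else u''
assert first == u'A mano libera'
```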
Suppose I have a list of

> 1,1

and each entry can take either a + or - sign. So the possible combinations would be 2 to the power 2:

```
 1  1
 1 -1
-1  1
-1 -1
```

Similarly, I have a list of

> 1,1,1

and each entry can take either a + or - sign. So the possible combinations would be 2 to the power 3:

```
-1  1 -1
-1  1  1
 1  1  1
 1 -1  1
-1 -1 -1
 1  1 -1
 1 -1 -1
-1 -1  1
```

In python, how can I do that using itertools or any other methods? Any help please.
```
>>> import itertools
>>> lst = [1,1,1]
>>> for xs in itertools.product([1,-1], repeat=len(lst)):
...     print([a*b for a,b in zip(lst, xs)])
...
[1, 1, 1]
[1, 1, -1]
[1, -1, 1]
[1, -1, -1]
[-1, 1, 1]
[-1, 1, -1]
[-1, -1, 1]
[-1, -1, -1]
```
You can do:

```
from itertools import combinations

size = 3
ans = list(set(combinations([-1,1]*size, size)))
#[(1, 1, -1),
# (-1, 1, 1),
# (-1, -1, 1),
# (1, -1, -1),
# (1, -1, 1),
# (-1, 1, -1),
# (1, 1, 1),
# (-1, -1, -1)]
```

This approach also gives the same result with `permutations`.
all possible phase combination
[ "", "python", "" ]
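The accepted `itertools.product` idea from the record above, wrapped into a small helper for clarity (the name `phase_combinations` is our own, not from the answer):

```python
import itertools

def phase_combinations(values):
    """All sign assignments of `values`: one row per element of {+1,-1}^n."""
    return [[a * b for a, b in zip(values, signs)]
            for signs in itertools.product([1, -1], repeat=len(values))]

combos = phase_combinations([1, 1, 1])
assert len(combos) == 2 ** 3
assert [1, 1, 1] in combos and [-1, -1, -1] in combos
assert len({tuple(c) for c in combos}) == 8  # all 8 rows are distinct
```

Because `product` enumerates every sign tuple exactly once, this always yields 2**n rows - unlike `combinations([-1,1]*n, n)`, which needs a `set` pass to remove duplicates.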
I have a table ("lms_attendance") of users' check-in and out times that looks like this:

```
id  user  time        io (enum)
1   9     1370931202  out
2   9     1370931664  out
3   6     1370932128  out
4   12    1370932128  out
5   12    1370933037  in
```

I'm trying to create a view of this table that would output only the most recent record per user id, while giving me the "in" or "out" value, so something like:

```
id  user  time        io
2   9     1370931664  out
3   6     1370932128  out
5   12    1370933037  in
```

I'm pretty close so far, but I realized that views won't accept subqueries, which is making it a lot harder. The closest query I got was:

```
select `lms_attendance`.`id` AS `id`,
       `lms_attendance`.`user` AS `user`,
       max(`lms_attendance`.`time`) AS `time`,
       `lms_attendance`.`io` AS `io`
from `lms_attendance`
group by `lms_attendance`.`user`, `lms_attendance`.`io`
```

But what I get is:

```
id  user  time        io
3   6     1370932128  out
1   9     1370931664  out
5   12    1370933037  in
4   12    1370932128  out
```

Which is close, but not perfect. I know that the last group by shouldn't be there, but without it, it returns the most recent time, but not with its relative IO value. Any ideas? Thanks!
Query: **[SQLFIDDLEExample](http://sqlfiddle.com/#!2/e353d/1)**

```
SELECT t1.*
FROM lms_attendance t1
WHERE t1.time = (SELECT MAX(t2.time)
                 FROM lms_attendance t2
                 WHERE t2.user = t1.user)
```

Result:

```
| ID | USER |       TIME |  IO |
--------------------------------
|  2 |    9 | 1370931664 | out |
|  3 |    6 | 1370932128 | out |
|  5 |   12 | 1370933037 |  in |
```

Note that if a user has multiple records with the same "maximum" time, the query above will return more than one record. If you only want 1 record per user, use the query below:

**[SQLFIDDLEExample](http://sqlfiddle.com/#!2/dbb2d/7)**

```
SELECT t1.*
FROM lms_attendance t1
WHERE t1.id = (SELECT t2.id
               FROM lms_attendance t2
               WHERE t2.user = t1.user
               ORDER BY t2.id DESC
               LIMIT 1)
```
No need to try reinventing the wheel, as this is the common [greatest-n-per-group problem](https://stackoverflow.com/questions/tagged/greatest-n-per-group). A very nice [solution is presented there](https://stackoverflow.com/q/8748986/684229).

I prefer the most simplistic solution ([see SQLFiddle, updated Justin's](http://sqlfiddle.com/#!2/dbb2d/1)) without subqueries (thus easy to use in views):

```
SELECT t1.*
FROM lms_attendance AS t1
LEFT OUTER JOIN lms_attendance AS t2
  ON t1.user = t2.user
     AND (t1.time < t2.time
          OR (t1.time = t2.time AND t1.Id < t2.Id))
WHERE t2.user IS NULL
```

This also works in a case where there are two different records with the same greatest value within the same group - thanks to the trick with `(t1.time = t2.time AND t1.Id < t2.Id)`. All I am doing here is to assure that in case two records of the same user have the same time, only one is chosen. It doesn't actually matter if the criterion is `Id` or something else - basically any criterion that is guaranteed to be unique would do the job here.
Select row with most recent date per user
[ "", "mysql", "sql", "greatest-n-per-group", "" ]
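As a runnable cross-check of the accepted correlated-subquery pattern, here is a sketch using Python's built-in `sqlite3` in place of MySQL (an assumption made for testability - the question targets MySQL, but this subquery is standard SQL; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lms_attendance (id INTEGER, user INTEGER, time INTEGER, io TEXT)")
rows = [(1, 9, 1370931202, 'out'), (2, 9, 1370931664, 'out'),
        (3, 6, 1370932128, 'out'), (4, 12, 1370932128, 'out'),
        (5, 12, 1370933037, 'in')]
conn.executemany("INSERT INTO lms_attendance VALUES (?, ?, ?, ?)", rows)

# Latest row per user via the accepted answer's correlated subquery.
latest = conn.execute("""
    SELECT t1.id, t1.user, t1.time, t1.io
    FROM lms_attendance t1
    WHERE t1.time = (SELECT MAX(t2.time)
                     FROM lms_attendance t2
                     WHERE t2.user = t1.user)
    ORDER BY t1.id
""").fetchall()

assert latest == [(2, 9, 1370931664, 'out'),
                  (3, 6, 1370932128, 'out'),
                  (5, 12, 1370933037, 'in')]
```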
I'm following the tutorial "Think Python" and I'm supposed to install the package called swampy. I'm running python 2.7.3, although I also have python 3 installed. I extracted the package and placed it in site-packages:

```
C:\Python27\Lib\site-packages\swampy-2.1.1
C:\Python31\Lib\site-packages\swampy-2.1.1
```

But when I try to import a module from it within python:

```
import swampy.TurtleWorld
```

I just get no module named swampy.TurtleWorld. I'd really appreciate it if someone could help me out; here's a link to the lesson if that helps:

<http://www.greenteapress.com/thinkpython/html/thinkpython005.html>
> I extracted the package and placed it in site-packages:

No, that's the wrong way of "installing" a package. Python packages come with a `setup.py` script that should be used to install them. Simply do:

```
python setup.py install
```

And the module will be installed correctly in the site-packages of the python interpreter you are using. If you want to install it for a specific python version, use `python2`/`python3` instead of `python`.
If anyone else is having trouble with this on Windows, I just added my site-packages directory to my PATH variable and it worked like any normal module import.

```
C:\Python34\Lib\site-packages
```

Hope it helps.
Python: I can't import a module even though it's in site-packages
[ "", "python", "module", "installation", "package", "" ]
Is it possible to create a list of functions where you can access them individually? For example:

```
e=0

def a():
    global e
    e=1

def b():
    global e
    e=2

def c():
    global e
    e=3

l=[a(),b(),c()]

l[2]
print e
l[0]
print e
```

Output:

```
>>>3
>>>1
```
The problem is that you are calling your functions at the beginning and storing only their values in the list.

```
e=0

def a():
    global e
    e=1

def b():
    global e
    e=2

def c():
    global e
    e=3

l=[a,b,c]

l[2]()
print e
l[0]()
print e
```

Output:

```
>>> 3
1
```
`l=[a(),b(),c()]` is not a list of functions, but rather a collection of the values returned from calling those functions. In a list, the items are evaluated from left to right, so `e` is going to be 3 after this step.

As functions are objects in python, you can do:

```
>>> l = [a, b, c]  # creates new references to the function objects
                   # l[0] points to a, l[1] points to b...
>>> l[0]()
>>> e
1
>>> l[2]()
>>> e
3
>>> l[1]()
>>> e
2
```
Python List of Functions
[ "", "python", "list", "" ]
I need to make a function which will return, for a given folder, a dictionary which describes its content. The keys are the names of the subfolders and files; the value for a key that represents a file should be its size, and the value for a key that represents a folder is a dictionary that describes the content of that subfolder. The order is not important. Here is an example of such a dictionary:

```
{
    'delo' : {
        'navodila.docx' : 83273,
        'poročilo.pdf' : 37653347,
        'artikli.dat' : 253
    },
    'igre' : {},
    'seznam.txt' : 7632,
    'razno' : {
        'slika.jpg' : 4275,
        'prijatelji' : {
            'janez.jpg' : 8734765,
            'mojca.png' : 8736,
            'veronika.jpg' : 8376535,
            'miha.gif' : 73645
        },
        'avto.xlsx' : 76357
    },
    'ocene.xlsx' : 8304
}
```

This is what I have made so far:

```
import os

def izpis(map):
    slovar={}
    listFiles = os.listdir(map)
    for ts in listFiles:
        fullName = map +'\\' + ts
        if os.path.isfile(fullName):
            size=os.path.getsize(fullName)
            slovar[ts]=size
        else:
            slovar+=izpis(fullName)
    return (slovar)
```
```
import os

def f(path):
    if os.path.isdir(path):
        d = {}
        for name in os.listdir(path):
            d[name] = f(os.path.join(path, name))
    else:
        d = os.path.getsize(path)
    return d
```
```
def dumps(d, level=0, indent=4):
    if isinstance(d, dict):
        if not d:
            return '{}'
        return '{\n' + ',\n'.join(
            (' ' * (level+indent) + "'{}' : {}".format(name, dumps(d[name], level+indent, indent))
             for name in d),
        ) + '\n' + ' ' * level + '}'
    else:
        return str(d)

print dumps({
    'delo' : {
        'navodila.docx' : 83273,
        'porocilo.pdf' : 37653347,
        'artikli.dat' : 253
    },
    'igre' : {},
    'seznam.txt' : 7632,
    'razno' : {
        'slika.jpg' : 4275,
        'prijatelji' : {
            'janez.jpg' : 8734765,
            'mojca.png' : 8736,
            'veronika.jpg' : 8376535,
            'miha.gif' : 73645
        },
        'avto.xlsx' : 76357
    },
    'ocene.xlsx' : 8304
})
```
dictionary of folders and subfolders
[ "", "python", "dictionary", "" ]
I have a table called books with the following data:

```
id | description | section
1  | textbook 1  | 1
2  | textbook 2  | 1
3  | textbook 3  | 1
```

I use the following query to select id 2 and the next and previous rows in section 1:

```
SELECT id FROM books where id IN
(2,
(SELECT id FROM books WHERE id < 2 LIMIT 1),
(SELECT id FROM books WHERE id > 2 LIMIT 1)
)
AND section = 1
```

When I add the `section = 1`, the query returns only id 2, while it should return all 3 ids; but when I remove this part it selects all 3 ids.

My schema is actually more complex than that, but I simplified it to explain the situation. So, what am I doing wrong above?
```
(SELECT id FROM books WHERE id < 2 and section = 1 ORDER BY id DESC LIMIT 1)
UNION
(SELECT id FROM books WHERE id >= 2 and section = 1 ORDER BY id ASC LIMIT 2)
```

The problem with your query is that you're picking random ids less than 2 and greater than 2, without taking the section into account. If those IDs aren't in section 1, they won't be included in the result.

The WHERE clause in your outer query is not distributed into the subqueries. The subqueries are executed independently, returning IDs, which are put into the `IN` clause. Then the outer query filters by section.

[FIDDLE](http://www.sqlfiddle.com/#!2/7ea22/3)
```
SELECT id FROM books where id IN
(2,
(SELECT id FROM books WHERE id < 2 ORDER BY id DESC LIMIT 1),
(SELECT id FROM books WHERE id > 2 ORDER BY id ASC LIMIT 1)
)
AND section = 1
```

The `ORDER BY` should do the trick.
MYSQL returning incomplete results
[ "", "mysql", "sql", "database", "" ]
Given this table, I need to generate a year-to-date summary by month. The result should have one row for each month of the year, and a running total for the year up to that month. I am by no means a novice when it comes to SQL, yet I still have no idea how to achieve this in pure SQL. Does anyone know how to do this? Please note that it needs to be compatible with Microsoft Access SQL.

```
projectTitle  | completionDate | amount
---------------------------------------
'Project #1'  | 2013-01-12     | 1234
'Project #2'  | 2013-01-25     | 4567
'Project #3'  | 2013-02-08     | 8901
'Project #4'  | 2013-02-15     | 2345
'Project #5'  | 2013-02-20     | 6789
'Project #6'  | 2013-03-01     | 1234
'Project #7'  | 2013-04-12     | 5678
'Project #8'  | 2013-05-06     | 9012
'Project #9'  | 2013-05-20     | 3456
'Project #10' | 2013-06-18     | 7890
'Project #11' | 2013-08-10     | 1234
```

Example of the expected results:

```
month    | amount
-----------------
'Jan-13' | 5801   -- = Project #1 + Project #2
'Feb-13' | 23836  -- = Project #1 + Project #2 + Project #3 + Project #4 + Project #5
'Mar-13' | 25070  -- ...and so on
'Apr-13' | 30748
'May-13' | 43216
'Jun-13' | 51106
'Jul-13' | 51106
'Aug-13' | 52340
```

In my example output, you may notice I included July even though the example table did not have any data for July. Ideally, the output would still include July as you see in the example output; however, I can live without it if necessary.
I found the answer by using a sub-query, and using aliases for the table name.

```
SELECT FORMAT(t1.completionDate,"mmm-yy") AS [month],
(
    SELECT SUM(t2.amount)
    FROM projects AS t2
    WHERE MONTH(t2.completionDate) <= MONTH(t1.completionDate)
) AS [amount]
FROM projects AS t1;
```
I think trying to group it by month could help. Look [here](https://stackoverflow.com/questions/5107562/sql-group-dates-by-month) to see how to group by month. Then you should be able to sum the amount and give it out by month.

Hope this idea helps you.

Sarajog
Year-To-Date Summary by Month in Access SQL
[ "", "sql", "ms-access", "ms-access-2010", "" ]
I want to write a SQL query where, if the datetime has not been assigned, I substitute a string which can select all dates and times.

How it actually works:

```
SELECT * FROM `entry`
WHERE `user` like 'TOM' or `user` like 'PHP'
and `entry_datetime` between '2013-03-13 22:20:18' and '2013-03-13 22:20:26'
```

My doubt: if the php does not get the values of the date from and to, I don't want to change the query by deleting the AND condition, but I want to replace something between the date and time placeholders and still have the query run:

```
SELECT * FROM `entry`
WHERE `user` like 'TOM' or `user` like 'PHP'
and `entry_datetime` between ' ' and ' '
```

I tried the following but it's not working - can we write a query like this?

```
SELECT * FROM `entry`
WHERE `user` like 'TOM' or `user` like 'PHP'
and `entry_datetime` between '%%' and '%%'
```

I just tried the above because we use it in the LIKE command, so I tried that. Any advice will be much appreciated. I don't know if the way I am trying is right or wrong, but I can't change the query if the date value is not assigned.
You can use the MIN and MAX values for the datetime data type:

```
... AND `entry_datetime` BETWEEN '1000-01-01 00:00:00' AND '9999-12-31 23:59:59'
```

> The DATETIME type is used for values that contain both date and time parts. MySQL retrieves and displays DATETIME values in 'YYYY-MM-DD HH:MM:SS' format. The supported range is '1000-01-01 00:00:00' to '9999-12-31 23:59:59'.
You could use the `IFNULL` function:

```
IFNULL(mindate, '1000-01-01');
```

The problem with this is you would need to know your date range so you could add the minimum and maximum values correctly. This assumes you are passing in or creating a variable called mindate.
MySQL IN BETWEEN with no condition
[ "", "mysql", "sql", "database", "logic", "" ]
Say I have an array of lists

```
B = [[1,2,3],[1,2,3,4],[1,2]]
```

and I want to calculate the mean of the elements in the lists across the columns. How do I do this? i.e. how do I get a final array of mean values equal in length to the longest list:

```
[(1+1+1)/3, (2+2+2)/3, (3+3)/2, 4/1] = [1,2,3,4]
```

I tried:

```
final_array = np.array([mean(a) for a in zip(*(B))])
```

But this gives me an array only as long as my shortest list. Is this where masks come in handy? My apologies if an array of lists makes you cringe; I am still getting used to Python.
One more way, using `cmp` and `izip_longest`:

```
from itertools import izip_longest

[float(sum(col)) / sum(cmp(x,0) for x in col)
 for col in izip_longest(*B, fillvalue=0)]
```

This assumes your values are positive.
You can do it by using the DataFrame from pandas.

```
from pandas import DataFrame

B = [[1,2,3],[1,2,3,4],[1,2]]
df = DataFrame(B)
df.mean(axis=0)

"""
df
   0  1   2   3
0  1  2   3 NaN
1  1  2   3   4
2  1  2 NaN NaN

df.mean(axis=0)
0    1
1    2
2    3
3    4
"""
```
How do I calculate the mean of elements in a list across columns?
[ "", "python", "list", "numpy", "mean", "" ]
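A Python 3 standard-library variant of the idea in the record above: `zip_longest` with a `None` fill, so missing cells are skipped entirely and the approach also works for zero or negative values (unlike the `cmp`-based sketch, which assumes positive values). The helper name `column_means` is ours:

```python
from itertools import zip_longest

def column_means(rows):
    """Mean of each column across lists of unequal length; missing cells ignored."""
    means = []
    for col in zip_longest(*rows):          # pads short rows with None
        present = [x for x in col if x is not None]
        means.append(sum(present) / len(present))
    return means

B = [[1, 2, 3], [1, 2, 3, 4], [1, 2]]
assert column_means(B) == [1.0, 2.0, 3.0, 4.0]
```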
When I try to resize (thumbnail) an image using PIL, the exif data is lost. What do I have to do to preserve exif data in the thumbnail image? When I searched for the same, I got some links but none seem to be working.

```
from PIL import Image
import StringIO

file_path = '/home/me/img/a.JPG'
im = Image.open( file_path)
THUMB_SIZES = [(512, 512)]
for thumbnail_size in THUMB_SIZES:
    im.thumbnail( thumbnail_size, Image.ANTIALIAS)
    thumbnail_buf_string = StringIO.StringIO()
    im.save('512_' + "a", "JPEG")
```

The original image has exif data, but the image im (512_a.JPEG) doesn't.
```
import pyexiv2
from PIL import Image

file_path = '/home/../img/a.JPG'
metadata = pyexiv2.ImageMetadata(file_path)
metadata.read()
thumb = metadata.exif_thumbnail
thumb.set_from_file(file_path)
thumb.write_to_file('512_' + "a")
thumb.erase()
metadata.write()
```

Now when I open the image using (Patch Image Inspector), I can see the exif data.
I read through some of the source code and found a way to make sure that the exif data is saved with the thumbnail.

When you open a jpg file in PIL, the `Image` object has an `info` attribute which is a dictionary. One of the keys is called `exif` and its value is a byte string - the raw exif data from the image. You can pass this byte string to the save method and it should write the exif data to the new jpg file:

```
from PIL import Image

size = (512, 512)
im = Image.open('P4072956.jpg')
im.thumbnail(size, Image.ANTIALIAS)
exif = im.info['exif']
im.save('P4072956_thumb.jpg', exif=exif)
```

To get a human-readable version of the exif data you can do the following:

```
from PIL import Image
from PIL.ExifTags import TAGS

im = Image.open('P4072956.jpg')
for k, v in im.getexif().items():
    print(TAGS.get(k, k), v)
```
Preserve exif data of image with PIL when resize(create thumbnail)
[ "", "python", "python-imaging-library", "exif", "" ]
I want to log every method call in some classes. I could have done

```
class Class1(object):
    @log
    def method1(self, *args):
        ...
    @log
    def method2(self, *args):
        ...
```

But I have a lot of methods in every class, and I don't want to decorate every one separately.

Currently, I tried using a hack with metaclasses (overriding my logged class' `__getattribute__` so that if I try to get a method, it'll return a logging method instead):

```
class LoggedMeta(type):
    def __new__(cls, name, bases, attrs):
        def __getattribute__(self, name_):
            attr = super().__getattribute__(name_)
            if isinstance(attr, (types.MethodType, types.FunctionType)) and not name_.startswith("__"):
                return makeLogged(attr)  # This returns a method that first logs the method call, and then calls the original method.
            return attr
        attrs["__getattribute__"] = __getattribute__
        return type.__new__(cls, name, bases, attrs)

class Class1(object):
    __metaclass__ = LoggedMeta
    def method1(self, *args):
        ...
```

However, I'm on Python 2.X, and the super() syntax doesn't work. At the time I call super, I don't have the `__getattribute__`'s class (but I do have its class name), so I can't use the old super syntax `super(Class, Inst)`.

I tried earlier to use metaclasses, but overriding all the methods instead of `__getattribute__`; but I want to log static method calls also, and they gave me some trouble.

I searched for this type of question, but found no one who tried changing a class this way. Any ideas or help would be very appreciated.
EDIT: My solution was this (mostly taken from [this](https://stackoverflow.com/questions/6695854/writing-a-class-decorator-that-applies-a-decorator-to-all-methods?rq=1) thread):

```
import inspect, types

CLASS = 0
NORMAL = 1
STATIC = 2

class DecoratedMethod(object):
    def __init__(self, func, type_):
        self.func = func
        self.type = type_

    def __get__(self, obj, cls=None):
        def wrapper(*args, **kwargs):
            print "before"
            if self.type == CLASS:
                # classmethods (unlike normal methods) reach this stage as bound methods,
                # but args still contains the class as a first argument, so we omit it.
                ret = self.func(*(args[1:]), **kwargs)
            else:
                ret = self.func(*args, **kwargs)
            print "after"
            return ret
        for attr in "__module__", "__name__", "__doc__":
            setattr(wrapper, attr, getattr(self.func, attr))
        if self.type == CLASS:
            return types.MethodType(wrapper, cls, type)
        elif self.type == NORMAL:
            return types.MethodType(wrapper, obj, cls)
        else:
            return wrapper

def decorate_class(cls):
    for name, meth in inspect.getmembers(cls):
        if inspect.ismethod(meth):
            if inspect.isclass(meth.im_self):
                # meth is a classmethod
                setattr(cls, name, DecoratedMethod(meth, CLASS))
            else:
                # meth is a regular method
                setattr(cls, name, DecoratedMethod(meth, NORMAL))
        elif inspect.isfunction(meth):
            # meth is a staticmethod
            setattr(cls, name, DecoratedMethod(meth, STATIC))
    return cls

@decorate_class
class MyClass(object):
    def __init__(self):
        self.a = 10
        print "__init__"

    def foo(self):
        print self.a

    @staticmethod
    def baz():
        print "baz"

    @classmethod
    def bar(cls):
        print "bar"
```

Later I cleaned it up a bit, but that's the solution's essence. I need this difference between class, static and normal methods because I want to have:

```
inst = MyClass()
assert type(inst.baz) == types.FunctionType
assert type(inst.foo) == types.MethodType
assert type(inst.bar) == types.MethodType
```
If the goal is to make your code easier to debug by simply logging call and response, check out the Autologging module. A single annotation is all it takes =) <https://pythonhosted.org/Autologging/examples-traced.html> ``` pip install Autologging ``` . ``` # my_module.py from autologging import traced @traced class MyClass: def __init__(self): self._value = "ham" def my_method(self, arg, keyword=None): return "%s, %s, and %s" % (arg, self._value, keyword) ``` . ``` >>> import logging, sys >>> from autologging import TRACE >>> logging.basicConfig(level=TRACE, stream=sys.stdout, ... format="%(levelname)s:%(name)s:%(funcName)s:%(message)s") >>> from my_module import MyClass >>> my_obj = MyClass() TRACE:my_module.MyClass:__init__:CALL *() **{} TRACE:my_module.MyClass:__init__:RETURN None >>> my_obj.my_method("spam", keyword="eggs") TRACE:my_module.MyClass:my_method:CALL *('spam',) **{'keyword': 'eggs'} TRACE:my_module.MyClass:my_method:RETURN 'spam, ham, and eggs' 'spam, ham, and eggs' ```
Why don't you alter the class object? You can go through the attributes of a class with `dir(MyClass)` and replace the callable ones with a wrapped version... something like: ``` def logify(klass): for name in dir(klass): attr = getattr(klass, name) if not callable(attr): continue # skip non-callable attributes setattr(klass, name, log(attr)) ``` tinker around with something like this... should work...
Python: Logging all of a class' methods without decorating each one
[ "", "python", "class", "logging", "" ]
I wonder, is there a function in Python (let's call it `apply`) that does the following: ``` apply(f_1, 1) = f_1(1) apply(f_2, (1, 2)) = f_2(1, 2) ... apply(f_n, (1, 2,..., n)) = f_n(1, 2,..., n) # works with a tuple of proper length ``` It exists in e.g. *A+* and *Mathematica*, and it used to be really useful for me. Cheers!
Python has language-level features for this, known as "argument unpacking", or just "splat". ``` # With positional arguments args = (1, 2, 3) f_1(*args) # With keyword arguments kwargs = {'first': 1, 'second': 2} f_2(**kwargs) ```
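To make the unpacking behavior concrete, here is a small self-contained sketch (the function names `add` and `greet` are invented for illustration):

```python
def add(a, b, c):
    return a + b + c

def greet(first, second):
    return "{0} and {1}".format(first, second)

args = (1, 2, 3)
kwargs = {"first": "spam", "second": "eggs"}

# * unpacks a sequence into positional arguments
print(add(*args))        # 6

# ** unpacks a mapping into keyword arguments
print(greet(**kwargs))   # spam and eggs
```

The same syntax works with any expression that evaluates to a sequence or mapping, not just literals.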
You can use the `*` operator for the same effect: ``` f_1(*(1, 2)) = f_1(1, 2) ... ``` The expression following the `*` needn't be a tuple, it can be any expression that evaluates to a sequence. Python also has a built-in `apply` function that [does what you'd expect](http://docs.python.org/2/library/functions.html#apply), but it's been obsolete in favor of the `*` operator since Python 2.3. If you need `apply` for some reason and want to avoid the taint of deprecation, it is trivial to implement one: ``` def my_apply(f, args): return f(*args) ```
Is there such thing as "apply" in python?
[ "", "python", "wolfram-mathematica", "" ]
I installed [vim-flake8](https://github.com/nvie/vim-flake8) by git-cloning it into my Pathogen bundle folder as usual, but when I try to run the plugin by pressing `F7` or using `:call Flake8()` in a Python file I receive the following message: > Error detected while processing function Flake8: > > line 8: > > File flake8 not found. Please install it first. Does anyone have a clue about what is going on?
The error message is telling you that you didn't install the program [flake8](https://pypi.python.org/pypi/flake8). Install it. Assuming pip is installed ``` pip install flake8 ``` should work.
If installing flake8 via pip is not working try this: ``` apt-get install flake8 ``` Worked for me.
vim-flake8 is not working
[ "", "python", "vim", "pyflakes", "flake8", "" ]
I have an `if` block statement that checks whether a string I have assigned to the variable `movieTitle` contains the value of a key-value pair in a predefined dictionary. The code I have is:

```
import mechanize
from bs4 import BeautifulSoup

leaveOut = {
    'a':'cat',
    'b':'dog',
    'c':'werewolf',
    'd':'vampire',
    'e':'nightmare'
    }

br = mechanize.Browser()
r = br.open("http://<a_website_containing_a_list_of_movie_titles/")
html = r.read()

soup = BeautifulSoup(html)
table = soup.find_all('table')[0]

for row in table.find_all('tr'):
    # Find all table data
    for data in row.find_all('td'):
        code_handling_the_assignment_of_movie_title_to_var_movieTitle
        if any(movieTitle.find(leaveOut[c]) < 1 for c in 'abcde'):
            do_this_set_of_instructions
        else:
            pass
```

My thinking is that I can test the string `movieTitle` for any of the values of the predefined dictionary by using the `.find()` method, which, if the value is found, will return an integer index greater than (or at least equal to) 1. Therefore, if the result of the condition is <1 (usually -1 when absent) I can continue with the rest of the program, and otherwise not perform the rest of the program. However, when I use the Aptana debug feature I can see that my breakpoint on this `if` block is never engaged, as if Aptana is skipping right over it. Why is this? Edit: I have included more code for clarity. Having reviewed the suggestions I have used the code that @kqr suggested. However, my actual program still displays movieTitle despite it containing string values from the leaveOut dict. Why?
Could you confirm exactly what you're trying to achieve here? You're trying to execute a set of instructions if ANY of the values in the `leaveOut` dictionary is NOT present in the movieTitle? If so: ``` if [x for x in leaveOut.values() if x not in movieTitle]: ``` would be more concise. Also, if you're going to use the formulation above then the comparator has to be `0` rather than `1` otherwise matches at the first character will fire the set of instructions.
You can do as Captain Skyhawk suggests, or you can replace your entire `if` condition with: ``` if any(movieTitle.find(leaveOut[c]) < 1 for c in 'abcdefghijklm'): ``` As for your second question, are you sure you don't mean ``` if not any(movieTitle.find(leaveOut[c]) < 1 for c in 'abcdefghijklm'): ```
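As a runnable sketch of the membership test itself (the titles and leave-out words below are invented for illustration), note that Python's `in` operator is often clearer than comparing `str.find` results against an index:

```python
leave_out = {'a': 'cat', 'b': 'dog', 'c': 'werewolf'}

def should_keep(title, words=leave_out):
    # Keep the title only if none of the leave-out words occur in it.
    return not any(w in title for w in words.values())

print(should_keep("The Shawshank Redemption"))            # True
print(should_keep("An American Werewolf in London".lower()))  # False
```

A case-normalizing step (like the `.lower()` above) is usually needed, since `in` is a case-sensitive substring test.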
Python dictionary - value
[ "", "python", "dictionary", "" ]
I'm trying to write a condition based on whether a value in a column of a csv is a certain string. This is my code, where I will execute some stuff based on whether the content of the cell in the column 'type' is a 'Question': ``` f = open('/Users/samuelfinegold/Documents/harvard/edXresearch/snaCreationFiles/time_series/time_series.csv','rU') reader = csv.DictReader(f, delimiter=',') for line in reader: if line['type'] == 'Question': print "T" ``` CSV: The error I'm getting: `AttributeError: DictReader instance has no attribute '__getitem__'` ``` post_id thread_id author_id post_content types time votes_up votes_down posters 1 0 Jan NULL Question 3/1/12 10:45 5 1 Jan, Janet, Jack 2 0 Janet NULL Answer 3/1/12 11:00 2 1 Jan, Janet, Jack 3 0 Jack NULL Comment 3/2/12 8:00 0 0 Jan, Janet, Jack 4 1 Jason NULL Question 3/4/12 9:00 3 1 Jason, Jan, Janet 5 1 Jan NULL Answer 3/7/12 1:00 3 1 Jason, Jan, Janet 6 1 Janet NULL Answer 3/7/12 2:00 1 2 Jason, Jan, Janet ```
I put the data you provided in a comma-delimited CSV file, then ran your code on it and got a `KeyError` for `type`, so I changed `if line['type']` to `if line['types']` and it worked. My code:

```
import csv

f = open('test.csv','rU')
reader = csv.DictReader(f,delimiter=',')
for line in reader:
    print line
    if line['types'] == 'Question':
        print 'The above line has type question'
```

My output:

```
{'thread_id': '0', 'posters ': 'Jan', None: ['Janet', 'Jack'], 'post_id': '1', 'post_content': 'NULL', 'time': '3/1/12 10:45', 'votes_down': '1', 'votes_up': '5', 'author_id': 'Jan', 'types': 'Question'}
The above line has type question
{'thread_id': '0', 'posters ': 'Jan', None: ['Janet', 'Jack'], 'post_id': '2', 'post_content': 'NULL', 'time': '3/1/12 11:00', 'votes_down': '1', 'votes_up': '2', 'author_id': 'Janet', 'types': 'Answer'}
{'thread_id': '0', 'posters ': 'Jan', None: ['Janet', 'Jack'], 'post_id': '3', 'post_content': 'NULL', 'time': '3/2/12 8:00', 'votes_down': '0', 'votes_up': '0', 'author_id': 'Jack', 'types': 'Comment'}
{'thread_id': '1', 'posters ': 'Jason', None: ['Jan', 'Janet'], 'post_id': '4', 'post_content': 'NULL', 'time': '3/4/12 9:00', 'votes_down': '1', 'votes_up': '3', 'author_id': 'Jason', 'types': 'Question'}
The above line has type question
{'thread_id': '1', 'posters ': 'Jason', None: ['Jan', 'Janet'], 'post_id': '5', 'post_content': 'NULL', 'time': '3/7/12 1:00', 'votes_down': '1', 'votes_up': '3', 'author_id': 'Jan', 'types': 'Answer'}
{'thread_id': '1', 'posters ': 'Jason', None: ['Jan', 'Janet'], 'post_id': '6', 'post_content': 'NULL', 'time': '3/7/12 2:00', 'votes_down': '2', 'votes_up': '1', 'author_id': 'Janet', 'types': 'Answer'}
```

The reason you have a key called `None` is that in the posters column the data is already comma-delimited, therefore only the first value in the column is assigned the key 'posters'. I'm still not sure why you are getting an `attribute error`, but with a simple change to your code it worked for me.
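A fully self-contained way to experiment with this (Python 3 here, with `io.StringIO` standing in for the file; the sample data is reduced to two of the question's columns for brevity):

```python
import csv
import io

data = """post_id,types
1,Question
2,Answer
3,Comment
"""

questions = []
reader = csv.DictReader(io.StringIO(data))
for row in reader:
    # DictReader yields one dict per row, keyed by the header names
    if row['types'] == 'Question':
        questions.append(row['post_id'])

print(questions)  # ['1']
```

If the real file is tab-separated rather than comma-separated, passing `delimiter='\t'` to `DictReader` would be the corresponding change.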
Python has a module in the standard library to handle CSV files: <https://www.google.com/search?q=python+csv> 1st hit: <http://docs.python.org/library/csv.html>
How to check value of column in spreadsheet in Python
[ "", "python", "csv", "indexing", "" ]
I came across [this example](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_multioutput.html#example-ensemble-plot-forest-multioutput-py) which involves completion of faces for the test data set. Here, a value of `32` for `max_features` is passed to the `ExtraTreesRegressor()` function. I learnt that decision trees are constructed which select random features from the input data set. For the example from the above link, images are used as the train and test data sets. [This wiki page](http://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29) describes various types of image features. Now I am not able to understand which features `sklearn.ensemble.ExtraTreeRegressor` looks for or extracts from the image data set provided as input to construct the random forest. Also, how is it determined that a value of `32` is optimal for `max_features`? Please help me with this.
Random forests do not do feature extraction. They use the features in the dataset given to them, which in this example are just pixel intensities from the Olivetti faces dataset. The `max_features` parameter to an [`ExtraTreesRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html) determines "the number of features to consider when looking for the best split" (inside the [decision tree learning algorithm](http://scikit-learn.org/dev/modules/tree.html#tree-algorithms-id3-c4-5-c5-0-and-cart) employed by the forest). The value 32 was probably determined empirically.
The features used here are the raw pixel values. As the images in the dataset are aligned and quite similar, that seems to be enough for the task.
Features considered by ExtraTreeRegressor of Scikit Learn to construct Random Forest
[ "", "python", "machine-learning", "scikit-learn", "random-forest", "" ]
I have a file that contains values separated by tabs ("\t"). I am trying to create a list and store all the values from the file in it, but I run into a problem. Here is my code:

```
line = "abc def ghi"
values = line.split("\t")
```

It works fine as long as there is only one tab between each value. But if there is more than one tab, it copies the tabs into the values as well. In my case, mostly the extra tab will be after the last value in the file.
You can use `regex` here: ``` >>> import re >>> strs = "foo\tbar\t\tspam" >>> re.split(r'\t+', strs) ['foo', 'bar', 'spam'] ``` **update:** You can use `str.rstrip` to get rid of trailing `'\t'` and then apply regex. ``` >>> yas = "yas\t\tbs\tcda\t\t" >>> re.split(r'\t+', yas.rstrip('\t')) ['yas', 'bs', 'cda'] ```
Split on tab, but then remove all blank matches. ``` text = "hi\tthere\t\t\tmy main man" print([splits for splits in text.split("\t") if splits]) ``` Outputs: ``` ['hi', 'there', 'my main man'] ```
splitting a string based on tab in the file
[ "", "python", "string", "split", "" ]
I am using Oracle 11g. I am trying to realize a scenario of concurrent loading into a table with an index rebuild. I have a few flows which are trying to realize this scenario:

1. load data from source,
2. transform the data,
3. turn off the index on the DWH table,
4. load data into the DWH,
5. rebuild the index on the DWH table.

I turn off and rebuild indexes for better performance; there are situations where one flow is rebuilding the index while the other tries to turn it off. What I need to do is to place some lock between points 2 and 3, which would be released after point 5. Oracle's built-in `LOCK TABLE` mechanism is not sufficient, as the lock is released at the end of the transaction, and any `ALTER` statement, being DDL, commits implicitly and therefore releases the lock. The question is how to solve the problem?
Problem solved. What I wanted to do can be easily achieved using the DBMS_LOCK package:

```
1. DBMS_LOCK.REQUEST(...)
2. ALTER INDEX ... UNUSABLE
3. Load data
4. ALTER INDEX ... REBUILD
5. DBMS_LOCK.RELEASE(...)
```
You can use DBMS\_SCHEDULER: 1. run jobs that load and transform the data 2. turn off indexes 3. run jobs that load data into DWH 4. rebuild indexes ![enter image description here](https://i.stack.imgur.com/vQ5nP.gif) Using Chains: * <http://docs.oracle.com/cd/B28359_01/server.111/b28310/scheduse009.htm#CHDCFBHG>
Locking table for more transactions
[ "", "sql", "oracle", "data-warehouse", "locks", "" ]
I can't seem to find out how to do this after much searching. Dict:

```
mdr = {'main': {'sm': {'testm': {'test1': 'test', 'test2': 'test'}}, 'lm': {}, 'l': {}}}
```

I would like it to look like this:

```
-Main
--SM
---Testm
----Test1:Test
----Test2:Test
--LM
--L
```

I **won't** know the exact number of sub-dictionaries it will process, so I need to make a loop that will go through the entire dict. I only know how to display the whole dictionary if I know how far the dictionary goes. I would like it to display all possible keys if possible.
I think recursion is a better bet than a loop. This is pretty close. ``` def print_dict(d, current_depth=1): for k, v in d.items(): if isinstance(v, dict): print '-' * current_depth + str(k) print_dict(v, current_depth + 1) else: print '-' * current_depth + str(k) + ':' + str(v) ``` Output (after fixing the syntax of your dictionary): ``` >>> print_dict(mdr) -main --lm --l --sm ---testm ----test1:test ----test2:test ``` The case is off from your desired output, but you should be able to handle that. If you want to preserve the original order, then you need to use an `OrderedDict` instead of a `dict`. `dict` is a hash table, so it could do anything with the order. (Luckily, the `isinstance(v, dict)` test still works on `OrderedDict`.)
``` def tree(data, indent=0): if isinstance(data, basestring): print '{i}{d}'.format(i='-'*indent, d=data) return for key, val in data.iteritems(): if isinstance(val, dict): print '{i}{k}'.format(i='-'*indent, k=key) tree(val, indent + 1) else: print '{i}{k}:{v}'.format(i='-'*indent, k=key, v=val) tree(mdr) ``` Output: ``` main -sm --lm --testm ---test1:test ---test2:test --l ``` Input from the console (in response to the comment on this answer): ``` >>> mdr = {'main': {'sm': {'testm': {'test1': 'test', 'test2': 'test'}, 'lm': {}, 'l': {}}}} >>> def tree(data, indent=0): if isinstance(data, basestring): print '{i}{d}'.format(i='-'*indent, d=data) return for key, val in data.iteritems(): if isinstance(val, dict): print '{i}{k}'.format(i='-'*indent, k=key) tree(val, indent + 1) else: print '{i}{k}:{v}'.format(i='-'*indent, k=key, v=val) >>> tree(mdr) main -sm --lm --testm ---test1:test ---test2:test --l >>> ```
Python Looping Entire Dictionary
[ "", "python", "dictionary", "" ]
I'm developing a new project. I need help with this MySQL query; I think it is very slow.

```
SELECT post_id, user_id, content, title, TIME, (
    SELECT username
    FROM users
    WHERE id = posts.user_id
) AS username
FROM posts
WHERE user_id =1
OR user_id IN (
    SELECT DISTINCT user_to
    FROM friends
    WHERE user_from =1
    OR user_to =1
)
ORDER BY posts.post_id DESC
LIMIT 0 , 10
```

Thanks for any help.
Try: ``` SELECT p.post_id, p.user_id, p.content, p.title, p.TIME, u.username FROM (select distinct us.* from friends f join users us on f.user_to = us.id WHERE f.user_from =1 OR f.user_to =1) u JOIN posts p on u.id = p.user_id ORDER BY p.post_id DESC LIMIT 0 , 10 ```
You can use an `INNER JOIN` query:

```
SELECT a.post_id, a.user_id, a.content, a.title, a.TIME, b.username
FROM posts a
INNER JOIN users b ON a.user_id = b.id
WHERE a.user_id =1
OR a.user_id IN (
    SELECT DISTINCT user_to
    FROM friends
    WHERE user_from =1
    OR user_to =1
)
ORDER BY a.post_id DESC
LIMIT 0 , 10
```

This should be faster than your original query.
How to optimize this my mysql query?
[ "", "mysql", "sql", "" ]
I have a problem with attaching the database with information about users. I'm trying to move the asp.net mvc4 project to the IIS. MS SQL Server 2008 Express. ``` <add name="DefaultConnection" connectionString="Data Source=.;Initial Catalog=aspnet-comvc4-20130423110032;Integrated Security=true;AttachDBFilename=|DataDirectory|\aspnet-comvc4-20130423110032.mdf;" providerName="System.Data.SqlClient" /> ``` Error: Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: CREATE DATABASE permission denied in database 'master'. Cannot attach the file 'C:\co\App\_Data\aspnet-comvc4-20130423110032.mdf' as database 'aspnet-comvc4-20130423110032'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. ``` Stack Trace: [SqlException (0x80131904): CREATE DATABASE permission denied in database 'master'. Cannot attach the file 'C:\co\App_Data\aspnet-comvc4-20130423110032.mdf' as database 'aspnet-comvc4-20130423110032'.] 
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +5295167 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +242 System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +1682 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +69 System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +30 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover) +317 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout) +889 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance) +307 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions) +434 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, 
DbConnectionOptions userOptions) +225 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions) +37 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnectionOptions userOptions) +558 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnectionOptions userOptions) +67 System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection) +1052 System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection) +78 System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection) +167 System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions) +143 System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry) +83 System.Data.SqlClient.SqlConnection.Open() +96 WebMatrix.Data.Database.EnsureConnectionOpen() +46 WebMatrix.Data.<QueryInternal>d__0.MoveNext() +67 System.Linq.Enumerable.FirstOrDefault(IEnumerable`1 source) +164 WebMatrix.Data.Database.QuerySingle(String commandText, Object[] args) +98 WebMatrix.WebData.DatabaseWrapper.QuerySingle(String commandText, Object[] parameters) +14 WebMatrix.WebData.SimpleMembershipProvider.CheckTableExists(IDatabase db, String tableName) +54 WebMatrix.WebData.SimpleMembershipProvider.CreateTablesIfNeeded() +50 WebMatrix.WebData.WebSecurity.InitializeMembershipProvider(SimpleMembershipProvider simpleMembership, DatabaseConnectionInfo 
connect, String userTableName, String userIdColumn, String userNameColumn, Boolean createTables) +70 WebMatrix.WebData.WebSecurity.InitializeProviders(DatabaseConnectionInfo connect, String userTableName, String userIdColumn, String userNameColumn, Boolean autoCreateTables) +51 WebMatrix.WebData.WebSecurity.InitializeDatabaseConnection(String connectionStringName, String userTableName, String userIdColumn, String userNameColumn, Boolean autoCreateTables) +52 comvc4.Filters.SimpleMembershipInitializer..ctor() +197 [InvalidOperationException: The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588] comvc4.Filters.SimpleMembershipInitializer..ctor() +254 [TargetInvocationException: Exception has been thrown by the target of an invocation.] System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark) +113 System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark) +232 System.Activator.CreateInstance(Type type, Boolean nonPublic) +83 System.Activator.CreateInstance(Type type) +6 System.Threading.LazyHelpers`1.ActivatorFactorySelector() +68 System.Threading.LazyInitializer.EnsureInitializedCore(T& target, Boolean& initialized, Object& syncLock, Func`1 valueFactory) +115 System.Threading.LazyInitializer.EnsureInitialized(T& target, Boolean& initialized, Object& syncLock) +106 comvc4.Filters.InitializeSimpleMembershipAttribute.OnActionExecuting(ActionExecutingContext filterContext) +24 System.Web.Mvc.Async.AsyncControllerActionInvoker.InvokeActionMethodFilterAsynchronously(IActionFilter filter, ActionExecutingContext preContext, Func`1 nextInChain) +69 
System.Web.Mvc.Async.<>c__DisplayClass3b.<BeginInvokeActionMethodWithFilters>b__35() +22 System.Web.Mvc.Async.AsyncControllerActionInvoker.InvokeActionMethodFilterAsynchronously(IActionFilter filter, ActionExecutingContext preContext, Func`1 nextInChain) +492 System.Web.Mvc.Async.<>c__DisplayClass3b.<BeginInvokeActionMethodWithFilters>b__35() +22 System.Web.Mvc.Async.<>c__DisplayClass37.<BeginInvokeActionMethodWithFilters>b__31(AsyncCallback asyncCallback, Object asyncState) +190 System.Web.Mvc.Async.WrappedAsyncResult`1.Begin(AsyncCallback callback, Object state, Int32 timeout) +129 System.Web.Mvc.Async.AsyncControllerActionInvoker.BeginInvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters, AsyncCallback callback, Object state) +182 System.Web.Mvc.Async.<>c__DisplayClass25.<BeginInvokeAction>b__1e(AsyncCallback asyncCallback, Object asyncState) +445 System.Web.Mvc.Async.WrappedAsyncResult`1.Begin(AsyncCallback callback, Object state, Int32 timeout) +129 System.Web.Mvc.Async.AsyncControllerActionInvoker.BeginInvokeAction(ControllerContext controllerContext, String actionName, AsyncCallback callback, Object state) +287 System.Web.Mvc.<>c__DisplayClass1d.<BeginExecuteCore>b__17(AsyncCallback asyncCallback, Object asyncState) +30 System.Web.Mvc.Async.WrappedAsyncResult`1.Begin(AsyncCallback callback, Object state, Int32 timeout) +129 System.Web.Mvc.Controller.BeginExecuteCore(AsyncCallback callback, Object state) +338 System.Web.Mvc.Async.WrappedAsyncResult`1.Begin(AsyncCallback callback, Object state, Int32 timeout) +129 System.Web.Mvc.Controller.BeginExecute(RequestContext requestContext, AsyncCallback callback, Object state) +282 System.Web.Mvc.Controller.System.Web.Mvc.Async.IAsyncController.BeginExecute(RequestContext requestContext, AsyncCallback callback, Object state) +15 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__2(AsyncCallback asyncCallback, Object 
asyncState) +71 System.Web.Mvc.Async.WrappedAsyncResult`1.Begin(AsyncCallback callback, Object state, Int32 timeout) +129 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +236 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +48 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +16 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +301 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 ``` I already have one successful connection to another database in the same project: ``` <add name="codatabaseEntities" connectionString="metadata=res://*/Models.coModel.csdl|res://*/Models.coModel.ssdl|res://*/Models.coModel.msl; provider=System.Data.SqlClient; provider connection string=&quot;Data Source=x.x.x.x,xxxx; Network Library=DBMSSOCN; Integrated Security=false;Initial Catalog=codatabase; User Id=sa2; Password=123456;&quot;" providerName="System.Data.EntityClient"/> ``` But I need both. The first is the default database in asp.net project. The second is mine. I've tried so many ways, nothing helps. Could anyone help me, please.
I've tried harder. This one worked for me: ``` <add name="DefaultConnection" connectionString="Data Source=.;Initial Catalog=aspnet-comvc4-20130423110032.mdf;Integrated Security=false;AttachDBFilename=|DataDirectory|\aspnet-comvc4-20130423110032.mdf;User Id=sa2; Password=123456;" providerName="System.Data.SqlClient"/> ``` And I triple checked access permissions. Thanks, everyone!
I notice that with my connection string there is this `User Instance=True`, so you could try adding that to the connection string if you were looking to continue to use integrated security. It would look something like:

```
<add name="DefaultConnection" connectionString="Data Source=.;Initial Catalog=aspnet-comvc4-20130423110032.mdf;AttachDBFilename=|DataDirectory|\aspnet-comvc4-20130423110032.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"/>
```

I realize this is an old question but it may still be helpful to someone.
CREATE DATABASE permission denied in database 'master'. Cannot attach the file 'C:\co\App_Data\xxx.mdf' as database 'xxx'
[ "", "asp.net", "sql", "connection-string", "" ]
I have two tables in Oracle, job and reference. I want to insert a new record into both tables, with a key generated from a sequence. Something like: ``` insert into ( select j.jobid, j.fileid, j.jobname, r.reffileid from job j inner join reference r on j.jobid=r.jobid) values (jobidsequence.nextval, 4660, 'name', 4391); ``` This, of course, leads to: ``` ORA-01776: cannot modify more than one base table through a join view ``` Is there a way of doing this without using PL/SQL? I'd dearly love to do it using just SQL.
You can use a side-effect of the [`insert all` multi-table insert](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9014.htm#i2095116) syntax for this: ``` insert all into job (jobid, fileid, jobname) values (jobidsequence.nextval, fileid, jobname) into reference (jobid, reffileid) values (jobidsequence.nextval, reffileid) select 4660 as fileid, 'name' as jobname, 4391 as reffileid from dual; 2 rows inserted. select * from job; JOBID FILEID JOBNAME ---------- ---------- ---------- 42 4660 name select * from reference; JOBID REFFILEID ---------- ---------- 42 4391 ``` [SQL Fiddle](http://www.sqlfiddle.com/#!4/3cb8e/1). From the restrictions: > You cannot specify a sequence in any part of a multitable insert > statement. A multitable insert is considered a single SQL statement. > Therefore, the first reference to NEXTVAL generates the next number, > and all subsequent references in the statement return the same number. Clearly I am using a sequence in the `values` clause, so the first sentence doesn't seem quite accurate; but you can't use it in the `select` part. (I'm not 100% sure if it can be used in the `values` in all versions, but the documentation is a little misleading in any case, and contradicts itself). So I'm taking advantage of the fact that because it is a single statement, the two references to `nextval` get the same number, as the third sentence says, so the same sequence value is used in both tables.
You can use `jobidsequence.currval` to get the current value of the sequence (within one session transaction, i.e. until `COMMIT` when you called `nextval` before, and until you call `nextval` again). See: [Oracle Admin Manual: Managing Sequences](http://docs.oracle.com/cd/B28359_01/server.111/b28310/views002.htm)
Inserting into two oracle tables with a sequence
[ "", "sql", "oracle", "" ]
How would I remove the space after the equals sign? I searched all over Google and could not find anything on how to do it. Any help would be greatly appreciated.

```
Code

customer = input('Customer Name:')
mpid = input('MPID=<XXXX>:')
print ('description' ,customer,'<MPID=',mpid+'>')

Output

Customer Name:testcustomer
MPID=<XXXX>:1234
description testcustomer <MPID= 1234>
```
Here are some ways of combining strings... ``` name = "Joel" print('hello ' + name) print('hello {0}'.format(name)) ``` So you could use any of these in your case... ``` print('description', customer, '<MPID={0}>'.format(mpid)) print('description {0} <MPID={1}>'.format(customer, mpid)) ```
`print(a, b, c)` puts `a` into the output stream, then a space, then `b`, then a space, then `c`. To avoid the space, create a new string and print it. You could: concatenate strings: `a + b + c` better: join strings: `''.join([a, b, c])` (note that `str.join` takes a single iterable) even better: format a string: `'description {0} <MPID={1}>'.format(customer, mpid)`
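Another option worth noting (a sketch, Python 3 only): `print` accepts a `sep` keyword, so the separating space can be removed without building the string first. The sample values here mirror the question's output:

```python
customer = "testcustomer"
mpid = "1234"

# sep="" removes the space print() normally inserts between arguments
print('description ', customer, ' <MPID=', mpid, '>', sep='')
# description testcustomer <MPID=1234>

# equivalent result with str.format
line = 'description {0} <MPID={1}>'.format(customer, mpid)
print(line)
```

Both produce `description testcustomer <MPID=1234>` with no stray space before the ID.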
Python Newbie Combine Text
[ "", "python", "" ]
I have a Python list whose items are objects of a class with various attributes such as birthday_score, anniversary_score, baby_score... I want to sort the list on the basis of one of these attributes, say anniversary_score. How do I do it?
``` your_list.sort(key = lambda x : x.anniversary_score) ``` or if the attribute name is a string then you can use : ``` import operator your_list.sort(key=operator.attrgetter('anniversary_score')) ```
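For illustration, a minimal self-contained run (the `Item` class and the scores below are made up):

```python
import operator

class Item:
    def __init__(self, anniversary_score):
        self.anniversary_score = anniversary_score

items = [Item(3), Item(1), Item(2)]
# Sort in place on the named attribute
items.sort(key=operator.attrgetter('anniversary_score'))
print([i.anniversary_score for i in items])  # [1, 2, 3]
```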
`attrgetter` is handy if you don't know the name of the attribute in advance (eg. maybe it is from a file or a function parameter) ``` from operator import attrgetter sorted(my_list, key=attrgetter('anniversary_score')) ```
sorting a list with objects of a class as its items
[ "", "python", "list", "class", "sorting", "" ]
I've followed the [official tutorial of Scrapy](http://doc.scrapy.org/en/latest/intro/tutorial.html), it's wonderful! I'd like to remove all of the DEBUG messages from the console output. Is there a way? ``` 2013-06-08 14:51:48+0000 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6029 2013-06-08 14:51:48+0000 [scrapy] DEBUG: Web service listening on 0.0.0.0:6086 ``` The docs mention setting a LOG\_LEVEL, but in which file? Replying, please refer to this directory structure. It's mine. Plus, I have a 'test.py' in the spyder folder. ``` tutorial/ scrapy.cfg tutorial/ __init__.py items.py pipelines.py settings.py spiders/ __init__.py ... ``` Where, in which file, and how must I set the LOG\_LEVEL?
You need to add the following to your `settings.py` document: ``` LOG_LEVEL = 'INFO' ``` See [the documentation](http://doc.scrapy.org/en/latest/topics/settings.html#log-level).
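Concretely, given the directory tree in the question, the line goes in the inner `tutorial/tutorial/settings.py` (the same file as `items.py` and `pipelines.py`):

```python
# tutorial/tutorial/settings.py
LOG_LEVEL = 'INFO'  # hides DEBUG messages; 'WARNING' or 'ERROR' would be quieter still
```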
You can also disable it completely with `LOG_ENABLED=False`. And you can pass settings when invoking the command: `scrapy crawl myspider -s LOG_ENABLED=False`
Scrapy: how to disable or change log?
[ "", "python", "scrapy", "" ]
`the_map = { 1:'a', 0:'b' }` Now, to generate 8 patterns of `a` and `b`, we create the 8 three-bit patterns: ``` >>> range(8) [0, 1, 2, 3, 4, 5, 6, 7] # 000,001,010,...,111 ``` How to map the bits to the characters 'a' and 'b', to receive output like: ``` ['aaa','aab','aba'.......'bbb'] ``` I am looking for an efficient one-liner. My approaches using translate or format seem a bit inefficient to me: ``` >>> import string >>> [bin(x)[2:].zfill(3).translate(string.maketrans('01','ab')) for x in xrange(8)] ['aaa', 'aab', 'aba', 'abb', 'baa', 'bab', 'bba', 'bbb'] ```
I think you are looking for [`product`](http://docs.python.org/2/library/itertools.html#itertools.product): ``` >>> from itertools import product >>> [''.join(i) for i in product('ABC', repeat=3)] ['AAA', 'AAB', 'AAC', 'ABA', 'ABB', 'ABC', 'ACA', 'ACB', 'ACC', 'BAA', 'BAB', 'BAC', 'BBA', 'BBB', 'BBC', 'BCA', 'BCB', 'BCC', 'CAA', 'CAB', 'CAC', 'CBA', 'CBB', 'CBC', 'CCA', 'CCB', 'CCC'] ```
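Applied to the 'a'/'b' case from the question, the same one-liner reproduces the desired list:

```python
from itertools import product

# Every 3-character combination of 'a' and 'b', in order
patterns = [''.join(p) for p in product('ab', repeat=3)]
print(patterns)
# ['aaa', 'aab', 'aba', 'abb', 'baa', 'bab', 'bba', 'bbb']
```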
Not a huge improvement besides readability, but I think new style formatting could be a better fit here: ``` >>> '{:0<3b}'.format(1).translate(string.maketrans('01', 'ab')) 'baa' ```
A good pythonic way to map bits to characters in python?
[ "", "python", "" ]
I have a file that an application updates every few seconds, and I want to extract a single number field in that file, and record it into a list for use later. So, I'd like to make an infinite loop where the script reads a source file, and any time it notices a change in a particular figure, it writes that figure to an output file. I'm not sure why I can't get Python to notice that the source file is changing: ``` #!/usr/bin/python import re from time import gmtime, strftime, sleep def write_data(new_datapoint): output_path = '/media/USBHDD/PythonStudy/torrent_data_collection/data_one.csv' outfile = open(output_path, 'a') outfile.write(new_datapoint) outfile.close() forever = 0 previous_data = "0" while forever < 1: input_path = '/var/lib/transmission-daemon/info/stats.json' infile = open(input_path, "r") infile.seek(0) contents = infile.read() uploaded_bytes = re.search('"uploaded-bytes":\s(\d+)', contents) if uploaded_bytes: current_time = strftime("%Y-%m-%d %X", gmtime()) current_data = uploaded_bytes.group(1) if current_data != previous_data: write_data(","+ current_time + "$" + uploaded_bytes.group(1)) previous_data = uploaded_bytes.group(1) infile.close() sleep(5) else: print "couldn't write" + strftime("%Y-%m-%d %X", gmtime()) infile.close() sleep(60) ``` As is now, the (messy) script writes once correctly, and then I can see that although my source file (stats.json) file is changing, my script never picks up on any changes. It keeps on running, but my output file doesn't grow. I thought that an `open()` and a `close()` would do the trick, and then tried throwing in a `.seek(0)`. What file method am I missing to ensure that python re-opens and re-reads my source file, (stats.json)?
Thanks for all the answers. Unfortunately, my error was in the shell, not in the Python script. The cause of the problem turned out to be the way I was putting the script in the background. I was pressing `Ctrl+Z`, which I thought would put the task in the background. But it does not: `Ctrl+Z` only suspends the task and returns you to the shell; a subsequent `bg` command is necessary for the script to run in an infinite loop in the background.
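For what it's worth, the open/read/close pattern in the script does see on-disk changes once the process is actually running. A minimal self-contained check (the file name and contents here are made up):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'stats.json')

with open(path, 'w') as f:
    f.write('"uploaded-bytes": 100')

with open(path) as f:          # first read
    first = f.read()

with open(path, 'w') as f:     # simulate another process updating the file
    f.write('"uploaded-bytes": 200')

with open(path) as f:          # a fresh open() sees the new contents
    second = f.read()

print(first != second)  # True
```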
Unless you are implementing some synchronization mechanism or could guarantee somehow atomic read and write, I think you are calling for race condition and subtle bugs here. Imagine the "reader" accessing the file whereas the "writer" hasn't completed its write cycle. There is a risk of reading incomplete/inconsistent data. In "modern" systems, you could also hit the cache -- and not seeing file modifications "live" as they appends.
Python: Reread contents of a file
[ "", "python", "file", "io", "refresh", "" ]
I was trying a simple regex code to match the following: ``` import re line = 'blah black blacksheep blah' if re.match(r'(\bblack\b)', line): print 'found it!' ``` What am I doing wrong? I can't find "black" by itself.
From [the docs](http://docs.python.org/2/library/re.html#re.match): > ### `re.match(pattern, string, flags=0)` > > If zero or more characters at the beginning of *string* match the regular expression *pattern*, return a corresponding `MatchObject` instance. You probably want to use `re.search` or `re.findall` instead.
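A quick demonstration of the difference:

```python
import re

line = 'blah black blacksheep blah'
print(re.match(r'\bblack\b', line))             # None: match only tries position 0
print(re.search(r'\bblack\b', line).group(0))   # black
print(re.findall(r'\bblack\b', line))           # ['black'] -- 'blacksheep' is excluded by \b
```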
You should use `re.search` or `re.findall` here: ``` >>> strs = 'blah black blacksheep blah' >>> re.search(r'\bblack\b', strs).group(0) 'black' >>> re.findall(r'\bblack\b', strs) ['black'] ```
How to use regex for words
[ "", "python", "regex", "" ]
I'm writing my own container, which needs to give access to a dictionary inside by attribute calls. The typical use of the container would be like this: ``` dict_container = DictContainer() dict_container['foo'] = bar ... print dict_container.foo ``` I know that it might be stupid to write something like this, but that's the functionality I need to provide. I was thinking about implementing this in a following way: ``` def __getattribute__(self, item): try: return object.__getattribute__(item) except AttributeError: try: return self.dict[item] except KeyError: print "The object doesn't have such attribute" ``` I'm not sure whether nested try/except blocks are a good practice, so another way would be to use `hasattr()` and `has_key()`: ``` def __getattribute__(self, item): if hasattr(self, item): return object.__getattribute__(item) else: if self.dict.has_key(item): return self.dict[item] else: raise AttributeError("some customised error") ``` Or to use one of them and one try catch block like this: ``` def __getattribute__(self, item): if hasattr(self, item): return object.__getattribute__(item) else: try: return self.dict[item] except KeyError: raise AttributeError("some customised error") ``` Which option is most Pythonic and elegant?
Your first example is perfectly fine. Even the official Python documentation recommends this style known as [EAFP](https://docs.python.org/3/glossary.html#term-EAFP). Personally, I prefer to avoid nesting when it's not necessary: ``` def __getattribute__(self, item): try: return object.__getattribute__(self, item) except AttributeError: pass # Fallback to dict try: return self.dict[item] except KeyError: raise AttributeError("The object doesn't have such attribute") from None ``` PS. `has_key()` has been deprecated for a long time in Python 2. Use `item in self.dict` instead.
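A related simplification worth knowing (the names below are illustrative, not from the question): if you implement `__getattr__` instead of `__getattribute__`, Python only calls it after normal attribute lookup has already failed, so the outer try/except disappears entirely:

```python
class DictContainer:
    def __init__(self):
        self._data = {}

    def __setitem__(self, key, value):
        self._data[key] = value

    def __getattr__(self, item):
        # Called only when normal attribute lookup has already failed
        try:
            return self._data[item]
        except KeyError:
            raise AttributeError("The object doesn't have such attribute")

dc = DictContainer()
dc['foo'] = 'bar'
print(dc.foo)  # bar
```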
While in Java it's indeed a bad practice to use exceptions for flow control (mainly because exceptions force the JVM to gather resources ([more here](https://stackoverflow.com/questions/6092992/why-is-it-easier-to-ask-forgiveness-than-permission-in-python-but-not-in-java))), in Python you have two important principles: [duck typing](http://docs.python.org/2/glossary.html#term-duck-typing "If it looks like a duck and quacks like a duck, it must be a duck.") and [EAFP](http://docs.python.org/2/glossary.html#term-eafp "Easier to ask for forgiveness than permission"). This basically means that you are encouraged to try using an object the way you think it would work, and handle when things are not like that. In summary, the only problem would be your code getting too much indented. If you feel like it, try to simplify some of the nestings, like [lqc](https://stackoverflow.com/users/324389/lqc) suggested in [the suggested answer above](https://stackoverflow.com/a/17015303/4575793).
Are nested try/except blocks in Python a good programming practice?
[ "", "python", "" ]
I am using the csv package, and every time I write a new CSV file and open it with Excel, I find an empty row in between every two rows. ``` import csv filereader = csv.reader(open("tests.csv", "r"), delimiter=",") filewriter = csv.writer(open("test_step1.csv", "w"), delimiter=",") #Delete header for row in filereader: if row[0].isdigit(): filewriter.writerow(row) ``` ![enter image description here](https://i.stack.imgur.com/0f5zj.png)
On Python 2, you need to open the file in binary (`wb`) mode. Try: ``` import csv filereader = csv.reader(open("tests.csv", "r"), delimiter=",") filewriter = csv.writer(open("test_step1.csv", "wb"), delimiter=",") #Delete header for row in filereader: if row[0].isdigit(): filewriter.writerow(row) ``` The csv.writer writes **\r\n** into the file directly. If you don't open the file in [binary mode](http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files), it will write **\r\r\n** because on Windows text mode will translate each **\n** into **\r\n**. edit: On Python 3 the file must instead be opened in text mode with `newline=""` passed to **open()** (not to `csv.writer`, which does not accept that argument), i.e. `open("test_step1.csv", "w", newline="")`, as per [this answer](https://stackoverflow.com/questions/7200606/python3-writing-csv-files)
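A self-contained Python 3 sketch of the fix (the temp-file path is just for illustration):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'out.csv')

# Python 3: pass newline='' to open() so the csv module controls line endings itself
with open(path, 'w', newline='') as f:
    writer = csv.writer(f, delimiter=',')
    writer.writerow(['1', 'a'])
    writer.writerow(['2', 'b'])

with open(path, newline='') as f:
    rows = list(csv.reader(f))
print(rows)  # [['1', 'a'], ['2', 'b']] -- no blank rows in between
```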
I am using `Python 3.6`, and it seems that `csv.writer(open("test_step1.csv", "wb"), delimiter=",", newline="")` is not correct. Rather, *`newline=""`* should be added in the `open()` clause.
Empty space in between rows after using writer in python
[ "", "python", "csv", "python-3.3", "writer", "" ]
I have the following SQL (SQL Server) and it works for the most part. The problem is I am really creating a square and not a true circle. My goal is to pass in a city and state which has a lat and long, then find all cities within a 100 mile radius of that lat long. The latitude and longitude are stored in the DB so all my values are there. I just need a more precise way of doing it. Here is my code thus far: ``` ALTER PROCEDURE [dbo].[sp_StoresByZipArea] (@zip nvarchar(5), @Radius float) AS DECLARE @LatRange float DECLARE @LongRange float DECLARE @LowLatitude float DECLARE @HighLatitude float DECLARE @LowLongitude float DECLARE @HighLongitude float DECLARE @istartlat float DECLARE @istartlong float SELECT @iStartlat=Latitude, @iStartLong=Longitude from zipcodes where zipcode=@ZIP SELECT @LatRange = @Radius / ((6076 / 5280) * 60) SELECT @LongRange = @Radius / (((cos((@iStartLat * 3.141592653589 / 180)) * 6076.) / 5280.) * 60) SELECT @LowLatitude = @istartlat - @LatRange SELECT @HighLatitude = @istartlat + @LatRange SELECT @LowLongitude = @istartlong - @LongRange SELECT @HighLongitude = @istartlong + @LongRange /** Now you can create a SQL statement which limits the recordset of cities in this manner: **/ SELECT * FROM ZipCodes WHERE (Latitude <= @HighLatitude) AND (Latitude >= @LowLatitude) AND (Longitude >= @LowLongitude) AND (Longitude <= @HighLongitude) ```
I've used the great circle distance to do this in the past. The implementation below tells you the distance between two different points, which could be used to do what you are talking about: ``` create function dbo.GreatCircleDistance ( @Latitude1 float, @Longitude1 float, @Latitude2 float, @Longitude2 float ) returns float as /* FUNCTION: dbo.GreatCircleDistance Computes the Great Circle distance in kilometers between two points on the Earth using the Haversine formula distance calculation. Input Parameters: @Longitude1 - Longitude in degrees of point 1 @Latitude1 - Latitude in degrees of point 1 @Longitude2 - Longitude in degrees of point 2 @Latitude2 - Latitude in degrees of point 2 */ begin declare @radius float declare @lon1 float declare @lon2 float declare @lat1 float declare @lat2 float declare @a float declare @distance float -- Sets average radius of Earth in Kilometers set @radius = 6371.0E -- Convert degrees to radians set @lon1 = radians( @Longitude1 ) set @lon2 = radians( @Longitude2 ) set @lat1 = radians( @Latitude1 ) set @lat2 = radians( @Latitude2 ) set @a = sqrt(square(sin((@lat2-@lat1)/2.0E)) + (cos(@lat1) * cos(@lat2) * square(sin((@lon2-@lon1)/2.0E))) ) set @distance = @radius * ( 2.0E *asin(case when 1.0E < @a then 1.0E else @a end )) return @distance end ```
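If it helps to sanity-check the formula outside the database, here is the same haversine computation in Python (the city coordinates below are approximate and only for illustration):

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance on a sphere of mean Earth radius 6371 km,
    # mirroring the SQL function above term by term
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sqrt(sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * asin(min(1.0, a))

# New York -> Philadelphia is roughly 130 km
print(great_circle_km(40.7128, -74.0060, 39.9526, -75.1652))
```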
Not sure if this helps, but I think there is an error here: ``` SELECT @LatRange = @Radius / ((6076 / 5280) * 60) ``` The (6076 / 5280) part will always evaluate to 1.
How to Return Resultset with a Given Radius
[ "", "sql", "sql-server", "" ]
I have to make a function to transform example 1 into example 2. I have already made a function for example 1. How do I make a function for example 2, with a nested readout, with the sizes in brackets, and with newlines? example 1. ``` { 'delo' : { 'navodila.docx' : 83273, 'poročilo.pdf' : 37653347, }, 'igre' : {}, 'seznam.txt' : 7632, 'razno' : { 'slika.jpg' : 4275, 'prijatelji' : { 'janez.jpg' : 8734765, }, }, } ``` example 2. ``` delo navodila.docx (83273) poročilo.pdf (37653347) artikli.dat (253) igre seznam.txt (7632) razno slika.jpg (4275) prijatelji janez.jpg (8734765) ``` The function I have already made for the first example (except without newlines): ``` import os def f(folder): result = {} for name in os.listdir(folder): fullName = os.path.join(folder, name) if os.path.isfile(fullName): result[name] = os.path.getsize(fullName) else: result[name] = f(fullName) return result ```
This is what I really wanted: an easier way, just reusing the function I made before to fill the dictionary. ``` import os def f(folder, n=0): dictionary = beforeF(folder) # the function I made before, which builds the dict for key in dictionary.keys(): fullName = os.path.join(folder, key) if os.path.isdir(fullName): print(n*'\t' + key + ':') f(fullName, n+1) else: print(n*'\t' + key + ' (' + str(dictionary[key]) + ')') ```
``` def dump(d, level=0, indent=4): for name in d: if isinstance(d[name], dict): print('{0}{1}'.format(' ' * level, name)) dump(d[name], level+indent) else: print('{0}{1} ({2})'.format(' ' * level, name, d[name])) ```
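Applied to a nested dict like the one in the question, a variant of this idea that returns lines (the data below mirrors part of the question's example 1) produces the desired indented listing:

```python
def dump_lines(d, level=0, indent=4):
    # Walk the nested dict, indenting one level per nested folder
    lines = []
    for name, value in d.items():
        if isinstance(value, dict):
            lines.append(' ' * level + name)
            lines.extend(dump_lines(value, level + indent, indent))
        else:
            lines.append('{0}{1} ({2})'.format(' ' * level, name, value))
    return lines

tree = {'razno': {'slika.jpg': 4275, 'prijatelji': {'janez.jpg': 8734765}}}
print('\n'.join(dump_lines(tree)))
```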
How to dict folders?
[ "", "python", "dictionary", "nested", "" ]
How to convert text to date in SQL as listed below? > Y12-W01 to 20120102 > > Y12-W02 to 20120109 > > Y12-W03 to 20120116 and so on... I use MS Access. Thank you in advance. Regards.
You don't need a separate function. Try this SQL example: ``` SELECT FieldName, DateAdd("ww",CDbl(Mid([FieldName],6,2))-1,DateSerial(Mid([FieldName],2,2),1,1)) AS ConvDate FROM TableName ```
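As a cross-check outside Access, here is the same conversion in Python (this assumes a week numbering where week 1 starts on the year's first Monday, which matches the examples in the question):

```python
from datetime import datetime

def week_code_to_date(code):
    # 'Y12-W01' -> year 2012, week 1; '%W' combined with '%w'=1 yields that week's Monday
    year = 2000 + int(code[1:3])
    week = int(code.split('W')[1])
    return datetime.strptime('{0}-{1:02d}-1'.format(year, week), '%Y-%W-%w')

print(week_code_to_date('Y12-W01').strftime('%Y%m%d'))  # 20120102
print(week_code_to_date('Y12-W03').strftime('%Y%m%d'))  # 20120116
```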
Parse your Week and Year out of your field. I'll let you figure out that one. ;o) Once you have the Week and Year, pass them into this function: ``` Function GetWeekStart(weekNum As Integer, yr As Integer) As Date GetWeekStart = DateSerial(yr, 1, 1 + (weekNum * 7) - 6 - Weekday(DateValue("1/1/" & yr))) End Function ```
convert years and weeks to date in sql msaccess
[ "", "sql", "ms-access", "ms-access-2010", "" ]