Columns: Prompt (string, 10-31k chars), Chosen (string, 3-29.4k chars), Rejected (string, 3-51.1k chars), Title (string, 9-150 chars), Tags (list, 3-7 items)
How can I remove all NewLine characters from a variable in SQL Server? I use SQL Server 2008 R2. I need to remove all NewLine characters from a variable in a T-SQL command. For example: ``` Declare @A NVarChar(500) Set @A = ' 12345 25487 154814 ' Print @A ``` And it printed like this: ``` 12345 25487 154814 ``` But I want to get strings like this: > 12345 25487 154814 I wrote this query, but it does not work: ``` Set @A = Replace(@A,CHAR(13),' ') ```
You must use this query ``` Declare @A NVarChar(500); Set @A = N' 12345 25487 154814 '; Set @A = Replace(@A,CHAR(13)+CHAR(10),' '); Print @A; ```
If you want it to look exactly like your sample output, use [this hack](https://stackoverflow.com/questions/2455750/replace-duplicate-spaces-with-a-single-space-in-t-sql): ``` DECLARE @A nvarchar(500) SET @A = ' 12345 25487 154814 ' SET @A = replace( replace( replace( replace(@A, char(13)+char(10),' '), ' ','<>'), '><','') ,'<>',' ') PRINT @A ``` It will first replace your newlines, then replace your runs of consecutive spaces with a single space. Note that it would be wise to url-encode the input string to avoid nasty surprises.
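The `<>`/`><` trick in that hack is easier to see outside SQL. Here is a minimal Python sketch of the same three-step replace (the marker tokens are arbitrary; they just must not occur in the data):

```python
def collapse(s):
    # step 1: turn CR/LF pairs into spaces, like REPLACE(@A, CHAR(13)+CHAR(10), ' ')
    s = s.replace("\r\n", " ")
    # steps 2-4: mark every space, delete adjacent back-to-back markers,
    # then unmark; any run of N spaces collapses to a single space
    return s.replace(" ", "<>").replace("><", "").replace("<>", " ")

print(collapse("12345\r\n25487\r\n154814"))  # -> 12345 25487 154814
```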
How can I remove all NewLine characters from a variable in SQL Server?
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I am new to SQL Server and trying to create a query for the following scenario using SQL Server. I have a table named `GenericLabels` containing a list of all the labels (`varchar(30)`) that can be used. They are used in a table called `UserDeviceStatus`. I want the list of labels which are in `GenericLabels` but not used in `UserDeviceStatus` for a given `UserId`. I have created the following query ``` select label from GenericLabels where not exists (select customlabel from UserDeviceStatus where userid = 40) ``` This query returns an empty result. Here is the output with the individual queries. ``` select label from GenericLabels ``` returns ``` Aux1 Aux2 Aux3 Aux4 Aux5 Aux6 ``` and ``` select customlabel from userdevicestatus where userid = 40 ``` returns ``` Aux2 Aux3 ``` I want the following result ``` Aux1 Aux4 Aux5 Aux6 ```
Try this instead: ``` select label from GenericLabels where label not in (select customlabel from UserDeviceStatus where userid = 40) ```
You have to correlate the subquery on label and customlabel: ``` select label from GenericLabels where not exists ( select 1 from UserDeviceStatus where customlabel = label and userid = 40 ) ```
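Both answers can be checked quickly with Python's built-in sqlite3 (used here only for illustration; the question is about SQL Server, but the `NOT IN` logic is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE GenericLabels (label TEXT);
CREATE TABLE UserDeviceStatus (userid INT, customlabel TEXT);
INSERT INTO GenericLabels VALUES ('Aux1'),('Aux2'),('Aux3'),('Aux4'),('Aux5'),('Aux6');
INSERT INTO UserDeviceStatus VALUES (40,'Aux2'),(40,'Aux3');
""")
rows = con.execute("""
    SELECT label FROM GenericLabels
    WHERE label NOT IN (SELECT customlabel FROM UserDeviceStatus WHERE userid = 40)
""").fetchall()
print(sorted(r[0] for r in rows))  # -> ['Aux1', 'Aux4', 'Aux5', 'Aux6']
```

One caveat worth knowing: `NOT IN` returns no rows at all if the subquery ever yields a NULL `customlabel`; the correlated `NOT EXISTS` form does not have that problem.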
'select not in' query with subquery selecting varchar(30)
[ "", "sql", "sql-server", "" ]
I want to delete only the rows that the following statement returns, how can I do that? ``` select usersernum,username,useremail from core.appuser where usersernum in ( select ch.usersernum from core.appusersession ch where ch.usersernum not in ( select usersernum from ( select max(starttime) as maxstarttime , usersernum from core.appusersession vip where vip.starttime >= '2011-01-01 00:00:00' group by usersernum ) ) group by usersernum ) order by useremail ; ```
Sometimes it helps to take a step back and ask yourself: what is the real goal here, in "business terms" or everyday language rather than SQL? Your query looks like it has a bunch of unnecessary layers. It appears that what you really want to do is remove any inactive users, meaning those who have not started a session since the beginning of 2011. ``` DELETE FROM core.appuser WHERE usersernum NOT IN ( SELECT DISTINCT usersernum FROM core.appusersession WHERE starttime>='2011-01-01 00:00:00' ); ``` Isn't that much simpler?
You could use ``` DELETE FROM core.appuser WHERE usersernum IN ( select usersernum from core.appuser where usersernum in ( select ch.usersernum from core.appusersession ch where ch.usersernum not in ( select usersernum from ( select max(starttime) as maxstarttime, usersernum from core.appusersession vip where vip.starttime>='2011-01-01 00:00:00' group by usersernum)) group by usersernum) order by useremail); ```
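The simpler `NOT IN` form of the delete can be sanity-checked in-memory with sqlite3 (illustration only; the question is about DB2, and the `core.` schema prefix is dropped because SQLite has no schemas):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE appuser (usersernum INT, useremail TEXT);
CREATE TABLE appusersession (usersernum INT, starttime TEXT);
INSERT INTO appuser VALUES (1,'a@x.com'),(2,'b@x.com'),(3,'c@x.com');
-- user 1 is active in 2011, user 2 last active in 2010, user 3 never had a session
INSERT INTO appusersession VALUES (1,'2011-03-01 10:00:00'),(2,'2010-05-01 09:00:00');
""")
con.execute("""
    DELETE FROM appuser WHERE usersernum NOT IN (
        SELECT DISTINCT usersernum FROM appusersession
        WHERE starttime >= '2011-01-01 00:00:00')
""")
survivors = [r[0] for r in con.execute("SELECT usersernum FROM appuser")]
print(survivors)  # -> [1]
```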
How to delete rows from a table with multiple select in DB2
[ "", "sql", "database", "db2", "delete-row", "" ]
I would like to perform a SQL query across a very large table. However, I only need to look through the last 100,000 rows. Currently: ``` SELECT * FROM bigtable WHERE columna='foobar' ORDER BY id DESC LIMIT 10 ``` I would like to do the same but make it only look in the last X ids. **EDIT:** Selecting 100k rows first and then selecting it from that is not a solution. This selects the full previous 100k rows, even if more than 10 of the needed rows have been found. That makes it slower.
In a database you cannot do a query that checks the "last x rows". A relational database does not guarantee that the rows are physically stored in a specific order, and therefore SQL will not allow you to express that. If you can translate that into an actual constraint based on the data contained in the rows, then it would be possible to achieve. Taking your example, the worst operation the database has to do is sorting the full result set before returning the data. This is regardless of the `limit` clause, because only after running through all the rows and sorting them do you know which rows have the highest ids. However, if there is an index on `columna` and `id`, in that order, the database engine *should* use the index, which is sorted, to go through the rows much faster, resulting in a faster response time.
Try ``` select * from ( SELECT * FROM bigtable ORDER BY id DESC LIMIT 100000 ) x where columna = 'foobar' order by id desc limit 10 ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!2/84b58/2) And if your query is slow then add an index on `columna`.
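A quick way to convince yourself the derived-table approach returns the right rows, with sqlite3 standing in for MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bigtable (id INTEGER PRIMARY KEY, columna TEXT)")
# ids 1..1000; every third row matches 'foobar'
con.executemany("INSERT INTO bigtable (columna) VALUES (?)",
                [("foobar" if i % 3 == 0 else "other",) for i in range(1000)])

rows = con.execute("""
    SELECT id FROM (SELECT * FROM bigtable ORDER BY id DESC LIMIT 100) x
    WHERE columna = 'foobar' ORDER BY id DESC LIMIT 3
""").fetchall()
print([r[0] for r in rows])  # -> [1000, 997, 994]
```

Note this confirms correctness only; the asker's performance objection (the inner query still materializes the full 100k-row window) still stands.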
Optimizing a SQL query by only looking in the last X rows (Not simply LIMIT)
[ "", "mysql", "sql", "" ]
I have the following code ``` --first statement ALTER TABLE [nameOfMyTable] WITH CHECK ADD CONSTRAINT [nameOfMyConstraint] FOREIGN KEY([myFK]) REFERENCES [tableReference] ([myFK]) GO --second statement ALTER TABLE [nameOfMyTable] CHECK CONSTRAINT [nameOfMyConstraint] GO ``` First, I define a CHECK constraint on a table. What does the second statement mean?
The 2nd statement is redundant; the only time it would be needed is if the first statement had `WITH NOCHECK`. By default `WITH CHECK` is added if you don't explicitly state `CHECK` or `NOCHECK` in the `ADD CONSTRAINT` statement. > [sql server management studio generate this code by default – Mikhail](https://stackoverflow.com/questions/17995793/what-does-second-check-constraint-mean/17995925#comment26312519_17995793) Because the code is being auto-generated, it is just being constructed by a set of steps. Some of those steps will have some overlap, so the "*table definition*" step may enable or disable the constraint while it creates the table, but the "*setup constraints*" step may also enable or disable the constraint. **[Relevant documentation](http://msdn.microsoft.com/en-us/library/ms190273%28v=sql.105%29.aspx):** > **WITH CHECK | WITH NOCHECK** > > * Specifies whether the data in the table is or is not validated against a newly added or re-enabled FOREIGN KEY or CHECK constraint. > If not specified, WITH CHECK is assumed for new constraints, and WITH > NOCHECK is assumed for re-enabled constraints. > * If you do not want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH NOCHECK. We do not recommend doing > this, except in rare cases. The new constraint will be evaluated in > all later data updates. Any constraint violations that are suppressed > by WITH NOCHECK when the constraint is added may cause future updates > to fail if they update rows with data that does not comply with the > constraint. > * The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are ignored until they are re-enabled > by using ALTER TABLE WITH CHECK CHECK CONSTRAINT ALL. > > **{ CHECK | NOCHECK } CONSTRAINT** > > * Specifies that constraint\_name is enabled or disabled. This option can only be used with FOREIGN KEY and CHECK constraints.
When NOCHECK > is specified, the constraint is disabled and future inserts or updates > to the column are not validated against the constraint conditions. > DEFAULT, PRIMARY KEY, and UNIQUE constraints cannot be disabled.
From the docs: > Specifies that constraint\_name is enabled or disabled. This option can only be used with FOREIGN KEY and CHECK constraints. When NOCHECK is specified, the constraint is disabled and future inserts or updates to the column are not validated against the constraint conditions. DEFAULT, PRIMARY KEY, and UNIQUE constraints cannot be disabled.
What does second check constraint mean?
[ "", "sql", "" ]
I am attempting to build a view which selects from an active and an archive table, but only returns a record from the archive table when the record does not exist in the active table. It must also only return the archive record with the latest timestamp. Thank you. Example: **Active:** ``` *ID ItemID Price* 1 001 1.00 2 002 4.99 3 004 2.00 ``` **Archive** ``` *ID ItemID Price ArchiveDate* 1 001 0.99 1/1/2013 2 002 3.78 1/1/2013 3 003 5.00 1/5/2013 4 005 3.49 1/5/2013 5 003 6.99 1/10/2013 ``` Should return the following dataset ``` *ItemID Price* 001 1.00 002 4.99 003 6.99 <-- From Archive 004 2.00 005 3.49 <-- From Archive ```
So you need to get everything from ACTIVE [first part of the UNION below], then append the records from ARCHIVE [second part of the UNION]. For the Archive records, exclude those that don't exist in the ACTIVE table [the NOT EXISTS clause] and only get the record that has the latest date [the MAX(ArchiveDate) clause]. ``` SELECT ItemID ,Price FROM ACTIVE UNION SELECT ItemID ,Price FROM ARCHIVE AA WHERE AA.ArchiveDate = (SELECT MAX(ArchiveDate) FROM ARCHIVE AA1 WHERE AA1.ItemID = AA.ItemID) AND NOT EXISTS (SELECT 1 FROM ACTIVE A WHERE A.ItemID = AA.ItemID) ```
You can use the following: ``` SELECT ItemID, Price FROM ( -- All of the active records SELECT ItemID, Price FROM Active UNION ALL -- combined with the latest archive records for missing actives SELECT ItemID, Price FROM Archive AS A1 WHERE ArchiveDate=(SELECT MAX(ArchiveDate) FROM Archive AS A2 WHERE A2.ItemID=A1.ItemID) AND NOT EXISTS(SELECT 1 FROM Active AS AC WHERE AC.ItemID=A1.ItemID) ) AS FinalResult -- Ordered by ItemID ORDER BY ItemID ```
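The pattern can be verified end-to-end with sqlite3 and the question's sample data (dates rewritten as ISO strings so that `MAX()` compares them correctly; the question's `1/10/2013` format would not sort properly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Active  (ID INT, ItemID TEXT, Price REAL);
CREATE TABLE Archive (ID INT, ItemID TEXT, Price REAL, ArchiveDate TEXT);
INSERT INTO Active VALUES (1,'001',1.00),(2,'002',4.99),(3,'004',2.00);
INSERT INTO Archive VALUES
  (1,'001',0.99,'2013-01-01'),(2,'002',3.78,'2013-01-01'),
  (3,'003',5.00,'2013-01-05'),(4,'005',3.49,'2013-01-05'),
  (5,'003',6.99,'2013-01-10');
""")
rows = con.execute("""
    SELECT ItemID, Price FROM Active
    UNION ALL
    SELECT ItemID, Price FROM Archive AS A1
    WHERE ArchiveDate = (SELECT MAX(ArchiveDate) FROM Archive AS A2
                         WHERE A2.ItemID = A1.ItemID)
      AND NOT EXISTS (SELECT 1 FROM Active AS AC WHERE AC.ItemID = A1.ItemID)
    ORDER BY ItemID
""").fetchall()
print(rows)
# -> [('001', 1.0), ('002', 4.99), ('003', 6.99), ('004', 2.0), ('005', 3.49)]
```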
Displaying a sorted dataset from two tables
[ "", "sql", "sql-server", "t-sql", "" ]
I'm happy to accept that this might not be possible, let alone sensible, but is it possible to keep a persistent reference to an object I have created? For example, in a few of my views I have code that looks a bit like this (simplified for clarity): ``` api = Webclient() api.login(GPLAY_USER,GPLAY_PASS) url = api.get_stream_urls(track.stream_id)[0] client = mpd.MPDClient() client.connect("localhost", 6600) client.clear() client.add(url) client.play() client.disconnect() ``` It would be really neat if I could just keep one reference to `api` and `client` throughout my project, especially to avoid repeated api logins with `gmusicapi`. Can I declare them in `settings.py`? (I'm guessing this is a terrible idea), or by some other means keep a connection to them that's persistent? Ideally I would then have functions like `get_api()` which would check the existing object was still ok and return it or create a new one as required.
You can't have anything that's instantiated once per application, because you'll almost certainly have more than one server process, and objects aren't easily shared across processes. However, one per process is definitely possible, and worthwhile. To do that, you only need to instantiate it at module level in the relevant file (e.g. views.py). That means it will be automatically instantiated when Django first imports that file (in that process), and you can refer to it as a global variable in that file. It will persist as long as the process does, and when a new process is created, a new global var will be instantiated.
You can't persist the object reference, but you can store something either in the in-memory Django cache or in the memcached Django cache. Django Cache <https://docs.djangoproject.com/en/dev/topics/cache/> See also [Creating a Persistent Data Object In Django](https://stackoverflow.com/questions/2345257/creating-a-persistent-data-object-in-django/2348841)
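The per-process `get_api()` the asker describes can be sketched as a lazy module-level singleton. Everything here is a stand-in: `make_api` and `_still_ok` are hypothetical placeholders for the real `Webclient()` login and a health check:

```python
_api = None

def make_api():
    # hypothetical stand-in for Webclient() + api.login(GPLAY_USER, GPLAY_PASS)
    return object()

def _still_ok(api):
    # hypothetical health check; reconnect/re-login logic would go here
    return api is not None

def get_api():
    """Return one shared instance per process, recreating it if it went stale."""
    global _api
    if _api is None or not _still_ok(_api):
        _api = make_api()
    return _api

print(get_api() is get_api())  # -> True: same object within this process
```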
Django: Keep a persistent reference to an object?
[ "", "python", "django", "" ]
I have two files, 'seen.txt' and 'members.txt'. The first file, 'seen.txt', lists the people who have seen a post. The file is structured like the following: ``` Friend/Not Friend Name #1 Number of mutual friends Friend/Not Friend Name #2 Number of mutual friends ``` The second file, 'members.txt', lists the details of all the people in a group. The file is structured like the following: ``` Name #1 Some info about the person Some more info about the person Name #2 Some info about the person Some more info about the person ``` Now, I want to create a program to show the names of the people who are members but not in the seen list. For that, I create two dictionaries which store the names from each of the files. After that is done, I simply iterate over each member in the members\_list and see whether they are in the seen\_list or not. If they are not, I print out the name to the console. This is the code I have written: ``` seen = open('seen.txt').readlines() members = open('members.txt').readlines() i = 0 j = 0 seen_list = {} members_list = {} for lines in seen: if i == 1: seen_list[lines.strip()] = 1 i = 0 else: i += 1 for lines in members: if j == 0 or j == 3: # to get the first line and every third line to extract name members_list[lines.strip()] = 1 j = 6 else: j -= 1 for member in members_list: if member not in seen_list: print member ``` I believe my solution is very elaborate and it can be done in a much shorter and faster way. Can anyone tell me about some cool Python hacks possible on this program in order to make it more efficient and shorter?
I would probably use sets instead of dictionaries because it seems like you're throwing away a lot of the stored info in the files and just care about the names. I first restructured the way you import the names from the files. I used `izip_longest` to read the files 4 lines at a time (3 lines of text plus a blank line). ``` from itertools import izip_longest seen = set() with open('seen.txt', 'r') as seen_file: for lines in izip_longest(*[seen_file]*4): name = lines[1].strip() seen.add(name) members = set() with open('members.txt', 'r') as members_file: for lines in izip_longest(*[members_file]*4): name = lines[0].strip() members.add(name) ``` Then we just take the set difference. See [set operations here](http://docs.python.org/2/library/sets.html). ``` not_seen = members - seen for member in not_seen: print member ```
`members_list` is a dict, and so is `seen_list`. Apparently they carry only one value, 1, and each key has the same value. You can simplify that by using a `set`. ``` members_list = set() ``` and adding items to it: ``` members_list.add(lines.strip()) ``` you then have several useful functions on sets, including `difference`: ``` members_not_in_seen = members_list.difference(seen_list) ``` which can also be written ``` members_list - seen_list ``` But you could also simplify the reading part: you can use a slice and map rather than keeping an `i` or `j` counter yourself and calling strip on each item (your loops read member names at every fourth line starting from the first, and seen names at every second line starting from the second): ``` import string members_list = set(map(string.strip, members[::4])) seen_list = set(map(string.strip, seen[1::2])) ```
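The grouping idiom from the first answer (`zip_longest(*[f]*4)` reads four lines at a time because all four arguments share one iterator) plus the set difference can be combined in one runnable Python 3 sketch, with inline sample data standing in for the two files:

```python
from itertools import zip_longest  # izip_longest in Python 2

seen_lines   = ["Friend", "Alice", "3", "", "Not Friend", "Bob", "1", ""]
member_lines = ["Alice", "info", "more info", "", "Carol", "info", "more info", ""]

# every record is 4 lines; the name is line 2 in seen.txt, line 1 in members.txt
seen    = {rec[1].strip() for rec in zip_longest(*[iter(seen_lines)] * 4)}
members = {rec[0].strip() for rec in zip_longest(*[iter(member_lines)] * 4)}

print(sorted(members - seen))  # -> ['Carol']
```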
Finding out matches between two dictionaries in Python
[ "", "python", "dictionary", "" ]
I have two lists: ``` a = [1, 2, 3, 4] b = [5, 6, 7, 8] ``` I need to get something like this: ``` c = [1, 5, 2, 6, 3, 7, 4, 8] ``` I use this solution: ``` c = list(reduce(lambda x, y: x + y, zip(a, b))) ``` Is there a better way to do this?
Using *List Comprehension*: ``` >>> [x for tup in zip(a, b) for x in tup] [1, 5, 2, 6, 3, 7, 4, 8] ``` The above nested list comprehension is equivalent to following nested for loops (Just in case you get confused): ``` result = [] for tup in zip(a, b): for x in tup: result.append(x) ```
Using `chain`: ``` from itertools import chain, izip interweaved = list(chain.from_iterable(izip(a, b))) # [1, 5, 2, 6, 3, 7, 4, 8] ```
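On Python 3, where `izip` no longer exists and `zip` is already lazy, the same interleaving is:

```python
from itertools import chain

a = [1, 2, 3, 4]
b = [5, 6, 7, 8]

# chain.from_iterable flattens the (1,5), (2,6), ... pairs produced by zip
c = list(chain.from_iterable(zip(a, b)))
print(c)  # -> [1, 5, 2, 6, 3, 7, 4, 8]
```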
Interleave two lists in Python
[ "", "python", "list", "" ]
I came across the website rubular.com, and their example regex was: ``` (?<month>\d{1,2})\/(?<day>\d{1,2})\/(?<year>\d{4}) ``` where month, day, and year, encased in the < > brackets, each name a capture group. I'm wondering if there's a way to do that in Python since I couldn't find it in the documentation.
You do that using `(?P<group_name>...)`: ``` (?P<month>\d{1,2})\/(?P<day>\d{1,2})\/(?P<year>\d{4}) ``` See [documentation](http://docs.python.org/2/library/re.html#regular-expression-syntax)
To do so in Python, you would prefix the named group with the letter 'P' like so: ``` import re match = re.search('(?P<month>\d{1,2})\/(?P<day>\d{1,2})\/(?P<year>\d{4})', '01/02/2000') print match.group('day') print match.group('month') print match.group('year') ``` The [documentation page for Regex](http://docs.python.org/2/library/re.html#regular-expression-syntax) doesn't clearly highlight it, but you are looking for the section on ``` (?P<name>...) ```
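A self-contained version of the same idea, including `groupdict()`, which returns all named groups at once (note that in a raw Python regex the forward slashes need no escaping):

```python
import re

m = re.search(r'(?P<month>\d{1,2})/(?P<day>\d{1,2})/(?P<year>\d{4})', '01/02/2000')
print(m.group('month'), m.group('day'), m.group('year'))  # -> 01 02 2000
print(m.groupdict())  # -> {'month': '01', 'day': '02', 'year': '2000'}
```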
Naming regex groups in Python
[ "", "python", "regex", "" ]
How does Python distinguish class and instance level variables? Are they different? For example: ``` class abc: i = 10 def __init__(self, i): self.i = i a = abc(30) b = abc(40) print a.i print b.i print abc.i output -------- 30 40 10 ``` In the above example, when I access `a.i` (or `b.i`) and `abc.i`, are they referring to completely different variables?
First, note that you could not init the instance if there were only `self` in the `__init__`: ``` >>> class abc: ... i = 10 ... j = 11 ... def __init__(self, x): ... self.i = x ``` Then, when you access an attribute on the instance, it will check the instance variables first. Refer to the [paragraph here](http://docs.python.org/3/howto/descriptor.html#definition-and-introduction). As you guessed: ``` >>> a = abc(30) >>> a.i 30 >>> a.j 11 ``` Besides, a class variable is an object shared by all the instances, while instance variables are owned by the instance: ``` >>> class abc: ... i = [] ... def __init__(self, x): ... self.i = [x] ... abc.i.append(x) ... >>> a = abc(30) >>> b = abc(40) >>> a.i [30] >>> b.i [40] >>> abc.i [30, 40] ```
> in above example when I access a.i (or b.i) and abc.i are they > referring to completely different variables? Yes. abc.i is a [Class Object](http://docs.python.org/2/tutorial/classes.html#class-objects) reference. a.i and b.i are each [Instance Object](http://docs.python.org/2/tutorial/classes.html#instance-objects) references. They are all separate references.
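The lookup order (instance dict first, then the class) can be demonstrated directly; deleting the instance attribute makes the class attribute visible again:

```python
class Abc:
    i = 10            # class attribute, shared through the class

    def __init__(self, i):
        self.i = i    # instance attribute, shadows Abc.i on lookup

a, b = Abc(30), Abc(40)
print(a.i, b.i, Abc.i)  # -> 30 40 10

del a.i                 # remove the instance attribute...
print(a.i)              # -> 10, the class attribute shows through again
```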
python: class variables and instance variables
[ "", "python", "static-members", "instance-variables", "" ]
I've got the following tables which I'm querying: ``` categories -id -name -parent_id -last products2categories -id -product_id -category_id ``` What I'm trying to do is to get the record's `category name` and `category id` where the `product_id` in the table `products2categories` is 10 and `last` in the table `categories` is 1. This is the query that I've tried to execute: ``` SELECT `categories.id`,`categories.name` FROM `categories` JOIN `products2categories` ON `categories.id` = `products2categories.category_id` WHERE `categories.last` = 1 AND `products2categories.product_id`= 10 ``` I got the following error: ``` #1054 - Unknown column 'categories.last' in 'where clause' ``` To demonstrate what I'm trying to get: ``` categories -id 8 -name "my_name" -parent_id 0 -last 1 products2categories -id 1 -product_id 10 -category_id 1 ``` I want to get as a result the ID of `8` and the name "my\_name", but it has to be based on the table products2categories, because one product can have more than one category it's related to. Thanks in advance!
You are delimiting fields incorrectly. Delimit the table name and column name separately: ``` SELECT `categories`.`id`,`categories`.`name` FROM `categories` JOIN `products2categories` ON `categories`.`id` = `products2categories`.`category_id` WHERE `categories`.`last` = 1 AND `products2categories`.`product_id`= 10 ```
Replace ``` WHERE `categories.last` = 1 ``` with ``` WHERE `categories`.`last` = 1 ``` Backticks for escaping column and table names can't be used to escape the combination of table and column. They must be applied to each part separately.
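SQLite happens to accept MySQL-style backticks too, so the quoting rule is easy to demonstrate from Python (illustration only; the question is about MySQL, where the behavior is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE categories (id INT, name TEXT, last INT)")
con.execute("INSERT INTO categories VALUES (8, 'my_name', 1)")

try:
    # `categories.last` quotes ONE identifier literally named "categories.last"
    con.execute("SELECT `categories.last` FROM categories")
except sqlite3.OperationalError as e:
    print("rejected:", e)

# quoting table and column separately resolves correctly
fixed = con.execute("SELECT `categories`.`last` FROM categories").fetchall()
print(fixed)  # -> [(1,)]
```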
SQL query is producing an error which I can't solve
[ "", "mysql", "sql", "" ]
In the Python Guide's chapter on [project structure](http://docs.python-guide.org/en/latest/writing/structure.html), the term "top-level statement" is brought up a few times. I'm not sure exactly what this refers to. My guess is it's any variable declarations that happen outside of any functions or class methods that fire as soon as a module is loaded. Is this correct? Does it also include a module's `import` statements?
It's not just variable declarations (and there aren't any variable declarations anyway). It's pretty much anything that starts at indentation level 0. ``` import sys # top-level 3 + 4 # top-level x = 0 # top-level def f(): # top-level import os # not top-level! return 3 # not top-level if x: # top-level print 3 # not top-level else: print 4 # not top-level, but executes as part of an if statement # that is top-level class TopLevel(object): # top-level x = 3 # not top-level, but executes as part of the class statement def foo(self): # not top-level, but executes as part of the class statement print 5 # not top-level ```
Here's the first mention of "top-level statement": > Once modu.py is found, the Python interpreter will execute the module in an isolated scope. Any top-level statement in modu.py will be executed, including other imports if any. Function and class definitions are stored in the module’s dictionary. This makes it clear that what they really mean is "things that are interpreted at `import` time". While it's not terribly helpful directly, the [Python documentation itself](http://docs.python.org/2/reference/toplevel_components.html) also uses the phrase "top-level" (components, which then means "statements" in this context). Note that this module: ``` """a python module, spam.py""" def spam(): return "spam" class Spam(object): pass ``` has two statements in it, the `def` and the `class`. These are both *executed* at import time. These definitions are compound statements (see [`def`](http://docs.python.org/2/reference/compound_stmts.html#function-definitions) and [`class`](http://docs.python.org/2/reference/compound_stmts.html#class-definitions) descriptions). If there are decorators attached to a top-level `def`, that adds even more top-level things to run. (See also [user2357112's answer](https://stackoverflow.com/a/18138250/1256452): running a `class` statement invokes more internal workings.) Add an `import sys` at the top and you've added a third statement, which imports `sys`. However, if you add this: ``` def ham(eggs): import os return os.path.basename(eggs) ``` you have still only added one statement, the `def ham`, to the top-level stuff. It's when `ham` itself is executed (called) that the `import os` will be run.
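The difference between statements that run at import time and bodies that run later can be made visible with a shared list:

```python
executed = []

executed.append("top")              # top-level: runs as the module is executed

def f():
    executed.append("inside f")     # runs only when f() is actually called

class C:
    executed.append("class body")   # class bodies DO run at definition time

print(executed)  # -> ['top', 'class body']
f()
print(executed)  # -> ['top', 'class body', 'inside f']
```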
What is a top-level statement in Python?
[ "", "python", "" ]
How do I set up argparse as follows: ``` if -2 is on the command line, no other arguments are required if -2 is not on the command line, -3 and -4 arguments are required ``` For example, ``` -2 [good] -3 a -4 b [good] -3 a [not good, -4 required] -2 -5 c [good] -2 -3 a [good] ``` There are a number of similar questions here, but either they don't address this situation or I don't understand. Python 2.7 if that matters.
A subparser (as suggested in comments) might work. Another alternative (since `mutually_exclusive_group` can't quite do this) is just to code it manually, as it were: ``` import argparse def main(): parser = argparse.ArgumentParser() parser.add_argument('-2', dest='two', action='store_true') parser.add_argument('-3', dest='three') parser.add_argument('-4', dest='four') parser.add_argument('-5', dest='five') args = parser.parse_args() if not args.two: if args.three is None or args.four is None: parser.error('without -2, *both* -3 <a> *and* -4 <b> are required') print args return 0 ``` Adding a little driver to this: ``` import sys sys.exit(main()) ``` and run with your examples, it seems to do the right thing; here are two runs: ``` $ python mxgroup.py -2; echo $? Namespace(five=None, four=None, three=None, two=True) 0 $ python mxgroup.py -3 a; echo $? usage: mxgroup.py [-h] [-2] [-3 THREE] [-4 FOUR] [-5 FIVE] mxgroup.py: error: without -2, *both* -3 <a> *and* -4 <b> are required 2 $ ```
I think it is pretty hard to achieve that (including a nice help message) while only using the standard argparse functions. You can however easily test it yourself after parsing the arguments. You can describe the extra requirements in the epilogue or so. Note that it is unusual to use numbers as options, I had to use dest='two', since args.2 is not valid syntax. ``` #!/usr/bin/env python import argparse parser = argparse.ArgumentParser( description='bla bla', epilog='Note: arguments -3 and -4 are required when -2 is missing') parser.add_argument('-2', dest='two', action='store_true') parser.add_argument('-3', dest='three') parser.add_argument('-4', dest='four') parser.add_argument('-5', dest='five') args = parser.parse_args() if not args.two and (args.three is None or args.four is None): parser.error('arguments -3 and -4 are required when -2 is missing') print 'Good:', args ``` With these results: ``` [~]: ./test.py -h usage: test.py [-h] [-2] [-3 THREE] [-4 FOUR] [-5 FIVE] bla bla optional arguments: -h, --help show this help message and exit -2 -3 THREE -4 FOUR -5 FIVE Note: arguments -3 and -4 are required when -2 is missing [~]: ./test.py -2 Good: Namespace(five=None, four=None, three=None, two=True) [~]: ./test.py -3 a -4 b Good: Namespace(five=None, four='b', three='a', two=False) [~]: ./test.py -3 a usage: test.py [-h] [-2] [-3 THREE] [-4 FOUR] [-5 FIVE] test.py: error: arguments -3 and -4 are required when -2 is missing [~]: ./test.py -2 -5 c Good: Namespace(five='c', four=None, three=None, two=True) [~]: ./test.py -2 -3 a Good: Namespace(five=None, four=None, three='a', two=True) ```
Python argparse conditional requirements
[ "", "python", "python-2.7", "argparse", "" ]
I want to do hierarchical key-value storage in Python, which basically boils down to storing dictionaries to files. By that I mean any type of dictionary structure, that may contain other dictionaries, numpy arrays, serializable Python objects, and so forth. Not only that, I want it to store numpy arrays space-optimized and play nice between Python 2 and 3. Below are methods I know are out there. My question is what is missing from this list and is there an alternative that dodges all my deal-breakers? * Python's `pickle` module (deal-breaker: inflates the size of numpy arrays a lot) * Numpy's `save`/`savez`/`load` (deal-breaker: Incompatible format across Python 2/3) * [PyTables replacement for numpy.savez](https://github.com/PyTables/PyTables/issues/185) (deal-breaker: only handles numpy arrays) * Using PyTables manually (deal-breaker: I want this for constantly changing research code, so it's really convenient to be able to dump dictionaries to files by calling a single function) The PyTables replacement of `numpy.savez` is promising, since I like the idea of using hdf5 and it compresses the numpy arrays really efficiently, which is a big plus. However, it does not take any type of dictionary structure. Lately, what I've been doing is to use something similar to the PyTables replacement, but enhancing it to be able to store any type of entries. This actually works pretty well, but I find myself storing primitive data types in length-1 CArrays, which is a bit awkward (and ambiguous to actual length-1 arrays), even though I set `chunksize` to 1 so it doesn't take up that much space. Is there something like that already out there? Thanks!
After asking this two years ago, I started coding my own HDF5-based replacement for pickle/`np.save`. Ever since, it has matured into a stable package, so I thought I would finally answer and accept my own question, because it is by design exactly what I was looking for: * <https://github.com/uchicago-cs/deepdish>
I recently found myself with a similar problem, for which I wrote a couple of functions for saving the contents of dicts to a group in a PyTables file, and loading them back into dicts. They process nested dictionary and group structures recursively, and handle objects with types that are not natively supported by PyTables by pickling them and storing them as string arrays. It's not perfect, but at least things like numpy arrays will be stored efficiently. There's also a check included to avoid inadvertently loading enormous structures into memory when reading the group contents back into a dict. ``` import tables import cPickle def dict2group(f, parent, groupname, dictin, force=False, recursive=True): """ Take a dict, shove it into a PyTables HDF5 file as a group. Each item in the dict must have a type and shape compatible with PyTables Array. If 'force == True', any existing child group of the parent node with the same name as the new group will be overwritten. If 'recursive == True' (default), new groups will be created recursively for any items in the dict that are also dicts. """ try: g = f.create_group(parent, groupname) except tables.NodeError as ne: if force: pathstr = parent._v_pathname + '/' + groupname f.removeNode(pathstr, recursive=True) g = f.create_group(parent, groupname) else: raise ne for key, item in dictin.iteritems(): if isinstance(item, dict): if recursive: dict2group(f, g, key, item, recursive=True) else: if item is None: item = '_None' f.create_array(g, key, item) return g def group2dict(f, g, recursive=True, warn=True, warn_if_bigger_than_nbytes=100E6): """ Traverse a group, pull the contents of its children and return them as a Python dictionary, with the node names as the dictionary keys. If 'recursive == True' (default), we will recursively traverse child groups and put their children into sub-dictionaries, otherwise sub- groups will be skipped. 
Since this might potentially result in huge arrays being loaded into system memory, the 'warn' option will prompt the user to confirm before loading any individual array that is bigger than some threshold (default is 100MB) """ def memtest(child, threshold=warn_if_bigger_than_nbytes): mem = child.size_in_memory if mem > threshold: print '[!] "%s" is %iMB in size [!]' % (child._v_pathname, mem / 1E6) confirm = raw_input('Load it anyway? [y/N] >>') if confirm.lower() == 'y': return True else: print "Skipping item \"%s\"..." % g._v_pathname else: return True outdict = {} for child in g: try: if isinstance(child, tables.group.Group): if recursive: item = group2dict(f, child) else: continue else: if memtest(child): item = child.read() if isinstance(item, str): if item == '_None': item = None else: continue outdict.update({child._v_name: item}) except tables.NoSuchNodeError: warnings.warn('No such node: "%s", skipping...' % repr(child)) pass return outdict ``` --- It's also worth mentioning [`joblib.dump`](https://pythonhosted.org/joblib/generated/joblib.dump.html#joblib.dump) and [`joblib.load`](https://pythonhosted.org/joblib/generated/joblib.load.html#joblib.load), which tick all of your boxes apart from Python 2/3 cross-compatibility. Under the hood they use `np.save` for numpy arrays and `cPickle` for everything else.
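The recursive descent that both answers rely on is independent of PyTables. Stripped to its core, it is just flattening a nested dict into HDF5-style slash-separated paths:

```python
def flatten(d, prefix=""):
    """Map a nested dict to {'group/subgroup/leaf': value}, like HDF5 node paths."""
    out = {}
    for key, value in d.items():
        path = prefix + "/" + key if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))   # recurse into sub-dicts (sub-groups)
        else:
            out[path] = value                  # leaf: would become an array/node
    return out

print(flatten({"a": 1, "b": {"c": 2, "d": {"e": 3}}}))
# -> {'a': 1, 'b/c': 2, 'b/d/e': 3}
```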
Saving dictionaries to file (numpy and Python 2/3 friendly)
[ "", "python", "python-3.x", "numpy", "hdf5", "pytables", "" ]
According to the [Python documentation](http://docs.python.org/2/library/stdtypes.html#string-methods): > str.endswith(suffix[, start[, end]]) > > Return True if the string ends with the specified suffix, otherwise return False. suffix can also be a tuple of suffixes to look for. With optional start, test beginning at that position. With optional end, stop comparing at that position. > > Changed in version 2.5: Accept tuples as suffix. The following code should return `True`, but it returns `False` in Python 2.7.3: ``` "hello-".endswith(('.', ',', ':', ';', '-' '?', '!')) ``` It seems `str.endswith()` ignores anything beyond the fourth tuple element: ``` >>> "hello-".endswith(('.', ',', ':', '-', ';' '?', '!')) >>> True >>> "hello;".endswith(('.', ',', ':', '-', ';' '?', '!')) >>> False ``` Have I found a bug, or am I missing something?
> or am I missing something? You're missing a comma after the `';'` in your tuple: ``` >>> "hello;".endswith(('.', ',', ':', '-', ';' '?', '!')) # ^ # comma missing False ``` Due to this, `;` and `?` are concatenated. So, the string ending with `;?` will return `True` for this case: ``` >>> "hello;?".endswith(('.', ',', ':', '-', ';' '?', '!')) True ``` After adding a comma, it would work as expected: ``` >>> "hello;".endswith(('.', ',', ':', '-', ';', '?', '!')) True ```
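For completeness, a small self-contained sketch (plain Python, standard library only) showing both the compile-time literal concatenation and the effect of the missing comma:

```python
# Adjacent string literals are concatenated at compile time,
# so ';' '?' is the single two-character string ';?'.
broken = ('.', ',', ':', '-', ';' '?', '!')   # comma missing after ';'
fixed = ('.', ',', ':', '-', ';', '?', '!')   # comma restored

print(';?' in broken)             # True: the fused element
print("hello;".endswith(broken))  # False: no element is plain ';'
print("hello;".endswith(fixed))   # True
```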
It has already been pointed out that adjacent string literals are concatenated, but I wanted to add a little additional information and context. This is a feature that is shared with (and borrowed from) C. Additionally, this doesn't act like a concatenation operator such as `+`; the literals are treated exactly as if they were joined together in the source, without any additional overhead. For example:

```
>>> 'a' 'b' * 2
'abab'
```

Whether this is a useful feature or an annoying design is really a matter of opinion, but it does allow for breaking up string literals among multiple lines by encapsulating the literals within parentheses.

```
>>> print("I don't want to type this whole string"
          "literal all on one line.")
I don't want to type this whole stringliteral all on one line.
```

That type of usage (along with being used with #defines) is why it was useful in C in the first place and was subsequently brought along in Python.
Have I found a bug in Python's str.endswith()?
[ "", "python", "string", "" ]
I have a MapReduce job defined in **`main.py`**, which imports the `lib` module from **`lib.py`**. I use Hadoop Streaming to submit this job to the Hadoop cluster as follows: ``` hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -files lib.py,main.py -mapper "./main.py map" -reducer "./main.py reduce" -input input -output output ``` In my understanding, this should put both `main.py` and `lib.py` into the **distributed cache folder** on each computing machine and thus make module `lib` available to `main`. But it doesn't happen: from the log I see that files **are really copied** to the same directory, but `main` can't import `lib`, throwing **`ImportError`**. Why does this happen and how can I fix it? **UPD.** Adding the current directory to the path didn't work: ``` import sys sys.path.append(os.path.realpath(__file__)) import lib # ImportError ``` though, loading the module manually did the trick: ``` import imp lib = imp.load_source('lib', 'lib.py') ``` But that's not what I want. So why does the Python interpreter see other `.py` files in the same directory, but can't import them? Note that I have already tried adding an empty `__init__.py` file to the same directory without effect.
I posted the question to Hadoop user list and finally found the answer. It turns out that Hadoop doesn't really copy files to the location where the command runs, but instead creates **symlinks** for them. Python, in its turn, can't work with symlinks and thus doesn't recognize `lib.py` as Python module. Simple **workaround** here is to put both `main.py` and `lib.py` into the same directory, so that **symlink to the directory** is placed into MR job working directory, while both files are physically in the same directory. So I did the following: 1. Put `main.py` and `lib.py` into `app` directory. 2. In `main.py` I used `lib.py` directly, that is, import string is just import lib 3. Uploaded `app` directory with `-files` option. So, final command looks like this: ``` hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -files app -mapper "app/main.py map" -reducer "app/main.py reduce" -input input -output output ```
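If restructuring into a directory is ever not an option, the `imp.load_source` workaround from the question can also be written with `importlib`, which avoids the deprecated `imp` module on Python 3. This is a sketch; the `lib.py` written below is just a throwaway stand-in for the real module:

```python
import importlib.util
import os
import tempfile

def load_module_from_path(name, path):
    """Load a module straight from a file path (symlinks are irrelevant)."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo: write a throwaway lib.py and load it by path.
tmpdir = tempfile.mkdtemp()
libpath = os.path.join(tmpdir, 'lib.py')
with open(libpath, 'w') as f:
    f.write('def answer():\n    return 42\n')

lib = load_module_from_path('lib', libpath)
print(lib.answer())  # 42
```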
When Hadoop Streaming starts the Python scripts, your Python script's path is where the script file really is. However, Hadoop starts them at `./`, and your `lib.py` (it's a symlink) is at `./` too. So, try to add `sys.path.append("./")` before you import `lib.py`, like this:

```
import sys
sys.path.append('./')
import lib
```
How to import a custom module in a MapReduce job?
[ "", "python", "mapreduce", "hadoop-streaming", "" ]
I'm pretty new to Sybase and am writing a query to return results after one specified date and before another, in MM/DD/YYYY format. At the moment I'm doing:

```
SELECT *
From aTable
WHERE afterDate >= 08/07/2013
AND beforeDate <= 08/08/2013
```

I'm getting records back, but as I'm a Sybase newbie, I want to be sure Sybase is interpreting these dates correctly. Their online documentation is pretty bad for basic explanations of things like this!

Can anyone confirm whether what I have works, or does it need some formatting around the dates?
You'll need to convert the dates into DATETIME and tell sybase what the format is to be sure. According to [this documentation](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1540/doc/html/san1278453000116.html "Chapter 4: SQL Functions - CONVERT function [Data type conversion]") the code for MM/DD/YYYY is 101, so something like this: ``` SELECT * FROM aTable WHERE afterDate >= CONVERT(DATETIME,'08/07/2013',101) AND beforeDate <= CONVERT(DATETIME,'08/08/2013',101) ``` You can see the difference by running the following select statements: ``` SELECT CONVERT(DATETIME,'08/07/2013',101) --MM/DD/YYYY (2013-08-07 00:00:00.000) SELECT CONVERT(DATETIME,'08/07/2013',103) --DD/MM/YYYY (2013-07-08 00:00:00.000) ```
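The ambiguity those style codes resolve is easy to demonstrate outside Sybase as well. As a cross-check, here is a small Python sketch (standard library only) parsing the same string under both conventions:

```python
from datetime import datetime

raw = '08/07/2013'
as_mdy = datetime.strptime(raw, '%m/%d/%Y')   # like CONVERT style 101 (MM/DD/YYYY)
as_dmy = datetime.strptime(raw, '%d/%m/%Y')   # like CONVERT style 103 (DD/MM/YYYY)

print(as_mdy.date())  # 2013-08-07
print(as_dmy.date())  # 2013-07-08
```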
For any date-time field in Sybase, instead of going through the convert function, there is a more direct approach:

```
SELECT *
From aTable
WHERE afterDate >= '2013-08-07'
AND beforeDate <= '2013-08-08'
```

The date has to be in the form **'YYYY-MM-DD'**. If you want to add a time, it can be included along with the date; the date and the time have to be separated by a T. Any date-time field can be directly used with the format **'YYYY-MM-DDTHH:MM:SS'**.

Using the functions is too lengthy. No one needs a bazooka to shoot a squirrel! :)
Sybase date comparison - Correct format?
[ "", "sql", "database", "sybase", "mybatis", "" ]
I'm trying to make a function in Python that takes a list of integers as input and returns a greater list containing all positive and negative possibilities of those numbers. Pretend '+' is a positive number and '-' is a negative number The output should match up with: ``` foo([-4]) >>> [ [4], [-4] ] foo([+, +]) >>> [ [+,+], [+,-], [-,+], [-,-] ] foo([-, +]) >>> [ [+,+], [+,-], [-,+], [-,-] ] foo([-1, 3]) >>> [ [1,3], [1,-3], [-1,3], [-1,-3] ] foo( [+,-,+] ) >>> [ [-,-,-],[+,-,-],[-,+,-],[-,-,+],[+,+,-],[+,-,+],[-,+,+],[+,+,+] ] ```
For just numbers, you can use `itertools.product` to create all combos, after generating a list with both positive and negative numbers: ``` from itertools import product def foo(nums): return list(product(*((x, -x) for x in nums))) ``` Demo: ``` >>> foo([-4]) [(4,), (-4,)] >>> foo([-1, 3]) [(1, 3), (1, -3), (-1, 3), (-1, -3)] >>> foo([1, 3]) [(1, 3), (1, -3), (-1, 3), (-1, -3)] >>> foo([1, -3, 4]) [(1, 3, 4), (1, 3, -4), (1, -3, 4), (1, -3, -4), (-1, 3, 4), (-1, 3, -4), (-1, -3, 4), (-1, -3, -4)] ```
``` list(itertools.product(*([x, -x] for x in input))) ``` You want every possible way to pick either a number or its negative, for each number in the input. That's the [Cartesian product](http://en.wikipedia.org/wiki/Cartesian_product) of `{x[i], -x[i]}` for each `x[i]` in the input. [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product) can do that for you, and then `list` makes a list of all the output.
python positive and negative number list possibilities
[ "", "python", "list", "" ]
I have many databases in my SQL Server. I have to search for the names of databases containing a particular table, `Heartbitmaster`. I have many databases such as `Gotgold`, `DVD`, etc., and I just want a query that finds the names of the databases that contain the table `Heartbitmaster`.

I searched and tried this query:

```
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'base table' AND table_schema = 'Heartbitmaster'
```

but it didn't work. I searched further and came across:

```
SELECT name, database_id, create_date
FROM sys.databases
```

but I don't know how to write the WHERE condition to search for the table name.

Please help me.
I got it done through following query: ``` SELECT name FROM sys.databases WHERE CASE WHEN state_desc = 'ONLINE' THEN OBJECT_ID(QUOTENAME(name) + '.[dbo].[heartbit]', 'U') END IS NOT NULL ```
``` sp_MSforeachdb 'SELECT "?" AS DB, * FROM [?].sys.tables WHERE name like ''%tablename%''' ``` try this one
Display all the names of databases containing particular table
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I have followed the instructions outlined here: <http://forums.nuand.com/forums/viewtopic.php?f=9&t=2804> and installed GNU Radio from the GIT repo (scroll down to the section that says "Building GNURADIO from GIT"). I used the ./build-gnuradio.sh script to do this and it took a while, but it appeared to build successfully, as per the instructions. I am running on Ubuntu 12.04 LTS.

When I attempt to run `./uhd_fft` I get the following error message:

```
Traceback (most recent call last):
File "./uhd_fft", line 23, in <module>
from gnuradio import gr, gru
ImportError: No module named gnuradio
```

I have Googled this error message and most of the forums claim there is a problem with the PYTHONPATH. When I do

```
echo $PYTHONPATH
/usr/bin/python2.7
```

But when I check the python2.7 directory I do not see gnuradio. So I guess it makes sense that I'm getting an import error when it tries to import gnuradio. But the bigger question is why? I installed GNU Radio (per instructions from the nuand forum) using the ./build-gnuradio.sh script, so I should have it installed.

I would appreciate it if Python / GNU Radio experts from the community could weigh in.
There is a section later in the instructions with this boldface label: **Now for some voodoo to get the new program installed into system libraries and python paths and executable path.** Those instructions lead you through modifying your PYTHONPATH to pick up the gnuradio module, among other things. If you have followed those instructions, you will have to start a new shell to see any effect, or execute the .sh file by hand, since profile scripts only run when a new shell starts up or when they're run manually.
One of the reasons for this error is that the default Python is selected incorrectly. To see which Python is selected, type the following command in the Terminal:

```
sudo port select python
```

It will display all the Pythons available on your Mac, like:

```
python26
python27-apple(Active)
python27
```

For GNU Radio you need python27. If another Python is active, as in the example above, you can change it with the following command:

```
sudo port select python python27
```

Now run your Python code; it should work. If python27 is already active, you may consider changing the path manually, as suggested by others.
"ImportError no module named gnuradio" when trying to execute ./uhd_fft
[ "", "python", "gnuradio", "" ]
I'm trying to install tastypie for Django. I also have South installed. But when I migrate I get some weird type error. ``` ./manage.py migrate tastypie Running migrations for tastypie: - Migrating forwards to 0002_add_apikey_index. > tastypie:0001_initial TypeError: type() argument 1 must be string, not unicode ``` I looked into the migration 0002 and type isn't even being called!
It's a bug in the latest version (`0.10.0`). A bug report has been submitted. <https://github.com/toastdriven/django-tastypie/issues/1005>. You can fix it by installing a previous version: `pip install django-tastypie==0.9.16`
I had the same issue (in the 0.11.0 version) and I solved it removing 'tastypie' from INSTALLED\_APPS (settings.py).
Tastypie migration error
[ "", "python", "tastypie", "" ]
I am very new to SQL and I wanted some help in writing this SQL query. I have a movie table like this:

```
Movie_id  Actor_name  Director_name
1         a           AA
1         b           AA
2         b           BB
2         d           BB
```

Now I want to print out all the pairs of actors and directors who have not worked with each other. For example, in this case it will print (a, BB) and (d, AA).
You can do it like this: ``` SELECT a.Actor_name, d.Director_name FROM ( (SELECT DISTINCT Actor_name from movie) a CROSS JOIN (SELECT DISTINCT Director_name from movie) d ) WHERE NOT EXISTS ( SELECT * FROM movie m WHERE m.Actor_name=a.Actor_name AND m.Director_name=d.Director_name ) ``` The idea is to produce all possible pairs of actors and directors (the cartesian product join in the middle of the query), then filter its results to exclude pairs connected through a movie (the `NOT EXISTS` condition). Here is a [demo on sqlfiddle.](http://sqlfiddle.com/#!2/20b2a/3)
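Since SQLite supports the same `CROSS JOIN` / `NOT EXISTS` pattern, the idea can be sanity-checked from Python with an in-memory database. This is a sketch using the question's data; the MySQL query above is unchanged:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE movie (movie_id INT, actor_name TEXT, director_name TEXT);
    INSERT INTO movie VALUES (1,'a','AA'), (1,'b','AA'), (2,'b','BB'), (2,'d','BB');
""")

rows = conn.execute("""
    SELECT a.actor_name, d.director_name
    FROM (SELECT DISTINCT actor_name FROM movie) a
    CROSS JOIN (SELECT DISTINCT director_name FROM movie) d
    WHERE NOT EXISTS (
        SELECT 1 FROM movie m
        WHERE m.actor_name = a.actor_name
          AND m.director_name = d.director_name
    )
    ORDER BY a.actor_name
""").fetchall()

print(rows)  # [('a', 'BB'), ('d', 'AA')]
```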
I would do this by first developing all pairs of actors and directors with a cross join (i.e. [Cartesian product](http://en.wikipedia.org/wiki/Cartesian_product)), and then filter this list down using the table of observed relationships. ``` SELECT * FROM ( -- cross join SELECT a.actor_name, d.director_name FROM (SELECT DISTINCT actor_name FROM movies) a ,(SELECT DISTINCT director_name FROM movies) d ) pairs WHERE NOT EXISTS ( SELECT 1 FROM movies m WHERE pairs.actor_name = m.actor_name AND pairs.director_name = m.director_name ) ``` [SQLFiddle Demo](http://sqlfiddle.com/#!2/c671b/1/0) (schema basically scavenged from dasblinkenlight's fiddle since he beat me to it).
MySQL - how do I write this Query
[ "", "mysql", "sql", "" ]
Been experiencing this weirdness in my program. Here is a snippet of the part that is giving trouble: ``` #!/usr/bin python def test_func( newList, myList=[] ): for t in newList: for f in t: myList.append(f) return myList print test_func([[3, 4, 5], [6, 7, 8]]) print test_func([[9, 10, 11], [12, 13, 14]]) ``` The first time the function is called, it produces ``` [3, 4, 5, 6, 7, 8] ``` The second time ``` [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] ``` I don't know why it does this. Are python functions static in that they retain the values passed to them in subsequent calls or am I missing something in my code?
Don't use mutable objects as default argument values:

```
def test_func( newList, myList=None ):
    myList = [] if myList is None else myList
```
Although an answer has been accepted, it is interesting as well as important to understand why this happens, so as to avoid not-so-obvious pitfalls. An excerpt from the [Python documentation for compound statements](http://docs.python.org/2/reference/compound_stmts.html#function):

> Default parameter values are evaluated when the function definition is executed. This means that the expression is evaluated once, when the function is defined, and that the same “pre-computed” value is used for each call. This is especially important to understand when a default parameter is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default value is in effect modified. This is generally not what was intended. A way around this is to use None as the default, and explicitly test for it in the body of the function.

Please refer to the [StackOverflow discussion here](https://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument) for a discussion of mutable default arguments within the Python language. The discussion points to a very interesting and informative article, [Effbot - Python Default Values](https://web.archive.org/web/20200221224620id_/http://effbot.org/zone/default-values.htm), which explains why this behavior is observed and where it is actually desirable: for example, a function that performs very computing-intensive calculations may use a mutable dictionary as a default parameter to cache results keyed by the parameters of the calculation. In such a case, when the client requests a calculation, the function can look up the dictionary and return the value if already present, else perform the calculation.
Hopefully, this answer provides an insight into this "astonishing" behavior of Python and helps in designing functions that work correctly and are performant.
Python function with default list argument
[ "", "python", "default-value", "" ]
The model Price has an attribute data, so ``` Price.pluck(:data) ``` results in something like [3.99,4.55,5.44] And Price belongs to Vendor So I want to select the best price for each vendor, something like ``` Vendor.all.pluck(prices.order("data ASC").first.data) ``` How can I go about extracting the lowest price data element for each vendor in this scenario? Thank you in advance for the help.
For each vendor, use `minimum(:data)` on its associated prices: ``` Vendor.includes(:prices).map { |v| v.prices.minimum(:data) } ```
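Outside Rails, the per-vendor aggregation being performed here is just a group-wise minimum. As a language-neutral illustration, here is a plain-Python sketch over made-up `(vendor_id, price)` rows:

```python
from collections import defaultdict

# Made-up (vendor_id, price) rows, standing in for the prices table.
rows = [(1, 3.99), (1, 4.55), (2, 5.44), (2, 2.10)]

best = defaultdict(lambda: float('inf'))
for vendor_id, price in rows:
    best[vendor_id] = min(best[vendor_id], price)

print(dict(best))  # {1: 3.99, 2: 2.1}
```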
Here's another approach which puts more work on the database and less on Ruby. (I'm pretty sure the code in @meagar's answer will issue a SELECT for each vendor during the `map`.) Which way is better (faster) may depend on how big your tables are…

You could do something like this, which will issue one SQL statement:

```
Vendor.select('min(prices.data) as min_price').from('vendors').joins('INNER join prices on vendors.id = prices.vendor_id').group('vendors.id').map {|v| v.min_price}
```
Pluck last value of an association in rails
[ "", "sql", "ruby-on-rails", "ruby", "activerecord", "" ]
```
select Year(Creation_Date)
from Asset_Creation
where Creation_Date = @Creation_Date
```

I am executing this query, and I get the year as `2013` when supplied with today's date. I want the query to return only the `13` of 2013. How can I achieve that?
Try ``` SELECT RIGHT(YEAR(Creation_Date), 2) YY FROM Asset_Creation WHERE ... ``` Sample output: ``` | YY | ------ | 10 | | 11 | | 13 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/cfa46/1)** demo
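As a quick cross-check of the truncation itself (outside SQL Server), the same two-digit year can be produced in Python:

```python
from datetime import date

creation_date = date(2013, 8, 7)
two_digit = creation_date.strftime('%y')
print(two_digit)                     # 13
print(str(creation_date.year)[-2:])  # 13, same idea as RIGHT(YEAR(...), 2)
```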
For SQL Server 2012: ``` SELECT FORMAT(@Creation_Date, 'yy') ```
How to get year in format YY in SQL Server
[ "", "sql", "sql-server", "datetime", "" ]
I have a function that appends the substrings of a string to a list. When the input string is large, a `MemoryError` exception is thrown. Is there any length limit for a one-dimensional list? If yes, how can I extend it?
Besides making more memory, you have several other options. 1. Try to change your algorithm so it is less memory-intensive. For example, if you don't need all these substrings at once, you could create and use them successively using a generator function or a generator expression. 2. If you need them, try to just hold the "big string" in memory, and work with [`buffer()`](http://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange). 3. If the big string comes from a file, you could [`mmap()`](http://docs.python.org/2/library/mmap.html#mmap) this file (or a part of it) into your address space.
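For point 1, a generator version of a substring builder might look like the sketch below (the exact substrings depend on your function; here all contiguous substrings are assumed):

```python
def iter_substrings(s):
    """Yield all contiguous substrings one at a time instead of
    building them all up front in a list."""
    for start in range(len(s)):
        for end in range(start + 1, len(s) + 1):
            yield s[start:end]

# Only one substring exists in memory at a time.
for sub in iter_substrings("abc"):
    print(sub)  # a, ab, abc, b, bc, c
```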
Yes. Available memory. Make more memory available to the process, either by adding more swap, adding more RAM, or moving to an architecture with a larger memory limit.
Extend of size of a list in python
[ "", "python", "python-2.7", "" ]
I have about 50 CSV files with 60,000 rows in each, and a varying number of columns. I want to merge all the CSV files by column.

I've tried doing this in MATLAB by transposing each CSV file and re-saving it to disk, and then using the command line to concatenate them. This took my computer over a week and the final result needs to be transposed once again! I have to do this again, and I'm looking for a solution that won't take another week.

Any help would be appreciated.
> [...] transposing each csv file and re-saving to disk, and then using the command line to concatenate them [...] Sounds like Transpose-Cat-Transpose. Use [paste](http://en.wikipedia.org/wiki/Paste_%28Unix%29) for joining files horizontally. ``` paste -d ',' a.csv b.csv c.csv ... > result.csv ```
The Python [`csv`](http://docs.python.org/2/library/csv.html) module can be set up so that each record is a dictionary with the column names as keys. That way you should be able to read in all the files as dictionaries, and write them to an out-file that has all the columns. Python is easy to use, so this should be fairly trivial for a programmer of any language.

If your CSV files don't have column headings, this will be quite a lot of manual work, though, so then it's perhaps not the best solution.

Since these files are fairly big, it's best not to read all of them into memory at once. I'd recommend that you first open them only to collect all column names into a list, and use that list to create the output file. Then you can concatenate each input file to the output file without having to have all of the files in memory.
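A stripped-down version of that idea, without the column-name bookkeeping, is sketched below: it pastes files together row by row, streaming, and assumes every file has the same number of rows (the file names here are invented for the demo):

```python
import csv
import os
import tempfile
from contextlib import ExitStack

def paste_csv(paths, out_path):
    """Concatenate CSV files column-wise (like Unix `paste -d,`),
    streaming one row at a time so no file is held in memory."""
    with ExitStack() as stack:
        readers = [csv.reader(stack.enter_context(open(p, newline='')))
                   for p in paths]
        out = stack.enter_context(open(out_path, 'w', newline=''))
        writer = csv.writer(out)
        for rows in zip(*readers):
            merged = []
            for row in rows:
                merged.extend(row)
            writer.writerow(merged)

# Tiny demo with two throwaway files.
tmp = tempfile.mkdtemp()
a, b, out = (os.path.join(tmp, n) for n in ('a.csv', 'b.csv', 'out.csv'))
with open(a, 'w') as f:
    f.write('1,2\n3,4\n')
with open(b, 'w') as f:
    f.write('x\ny\n')
paste_csv([a, b], out)
with open(out, newline='') as f:
    merged_rows = list(csv.reader(f))
print(merged_rows)  # [['1', '2', 'x'], ['3', '4', 'y']]
```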
What's the fastest way to merge multiple csv files by column?
[ "", "python", "csv", "merge", "" ]
How do you get the total size of the files in a torrent? And is there any way to get the number of files in a torrent and the size of each one?
Using torrent\_parser: ``` import torrent_parser as tp torrent_metadata = tp.parse_torrent_file("file.torrent") totalsize = 0 for file in torrent_metadata['info']['files']: totalsize += file['length'] print(totalsize) ```
``` h = ses.add_torrent(params) s = h.status() while (not h.is_seed()): print s.total_wanted # prints total size wanted after meta data is obtained, before that 0 is printed. ```
How do you get the total size of a torrent in libtorrent?
[ "", "python", "libtorrent", "" ]
I have a table with an int-valued column, which has values between 0 and 43 (both included). I would like a query that returns the min value of the range [0,44) which is not in the table.

For example:

- if the table contains 3, 5, 14, the query should return 0
- if the table contains 0, 1, 14, the query should return 2
- if the table contains 0, 3, 14, the query should return 1

If the table contains all values, the query should return empty. How can I achieve that?
Since the value you want is either `0` or 1 greater than a value that exists in the table, you can just do; ``` SELECT MIN(value) FROM (SELECT 0 value UNION SELECT value+1 FROM MyTable) a WHERE value < 44 AND value NOT IN (SELECT value FROM MyTable) ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/ab2a6/1).
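The logic of that query (the answer is either 0 or one more than some existing value) is easy to mirror in plain Python, which also gives a handy cross-check against the question's examples:

```python
def min_missing(values, limit=44):
    """Smallest value in [0, limit) not present in `values`,
    or None if the range is fully covered."""
    present = set(values)
    # Candidates: 0, plus each existing value + 1 (mirrors the UNION ALL).
    candidates = {0} | {v + 1 for v in present}
    free = [c for c in candidates if c < limit and c not in present]
    return min(free) if free else None

print(min_missing([3, 5, 14]))   # 0
print(min_missing([0, 1, 14]))   # 2
print(min_missing([0, 3, 14]))   # 1
print(min_missing(range(44)))    # None
```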
One way would be to create another table that contains the integers in `[0,43]` and then left join that and look for NULLs, the NULLs will tell you what values are missing. Suppose you have: ``` create table numbers (n int not null); ``` and this table contains the integers from 0 to 43 (inclusive). If your table is `t` and has a column `n` which holds the numbers of interest, then: ``` select n.n from numbers n left join t on n.n = t.n where t.n is null order by n.n limit 1 ``` should give you the result you're after. This is a fairly common SQL technique when you're working with a sequence. The most common use is probably calendar tables.
select min value of range [0,44) not in a column
[ "", "mysql", "sql", "" ]
I am learning Python and am trying to use it to perform sentiment analysis. I am following an online tutorial from this link: <http://www.alex-hanna.com/tworkshops/lesson-6-basic-sentiment-analysis/>. I have taken a piece of code as a mapper class, an excerpt of which looks like this: ``` sentimentDict = { 'positive': {}, 'negative': {} } def loadSentiment(): with open('Sentiment/positive_words.txt', 'r') as f: for line in f: sentimentDict['positive'][line.strip()] = 1 with open('Sentiment/negative_words.txt', 'r') as f: for line in f: sentimentDict['negative'][line.strip()] = 1 ``` Here, I can see that a new dictionary is created with two keys, positive and negative, but no values. Following this, two text files are opened and each line is stripped and mapped to the dictionary. However, what is the = 1 part for? Why is this required (and if it isn't how could it be removed?)
The loop creates a nested dictionary, and sets all values to 1, presumably to then just use the keys as a way to weed out duplicate values. You could use sets instead and avoid the `= 1` value: ``` sentimentDict = {} def loadSentiment(): with open('Sentiment/positive_words.txt', 'r') as f: sentimentDict['positive'] = {line.strip() for line in f} with open('Sentiment/negative_words.txt', 'r') as f: sentimentDict['negative'] = {line.strip() for line in f} ``` Note that you don't even need to create the initial dictionaries; you can create the whole set with one statement, a set comprehension. If other code *does* rely on dictionaries with the values being set to `1` (perhaps to update counts at a later stage), it'd be more performant to use the `dict.fromkeys()` class method instead: ``` sentimentDict = {} def loadSentiment(): with open('Sentiment/positive_words.txt', 'r') as f: sentimentDict['positive'] = dict.fromkeys((line.strip() for line in f), 1) with open('Sentiment/negative_words.txt', 'r') as f: sentimentDict['negative'] = dict.fromkeys((line.strip() for line in f), 1) ``` Looking at your [source blog article](http://www.alex-hanna.com/tworkshops/lesson-6-basic-sentiment-analysis/) however shows that the dictionaries are only used to do membership testing against the keys, so using sets here is much better and transparent to the rest of the code to boot.
The point is that this is a *nested* dict. `sentimentDict` is a dictionary, and `sentimentDict['positive']` and `sentimentDict['negative']` are dictionaries as well. In the `loadSentiment()` function those inner dicts get populated with items. The words are the keys, the values are always `1`. So you get something like this: ``` {'negative': {'bad': 1, 'disgusting': 1, 'horrible': 1}, 'positive': {'amazing': 1, 'awesome': 1, 'cool': 1}} ``` My guess regarding the meaning of the value `1` is that these dictionaries are just initialized here, and later these counts may be increased to signify stronger or weaker sentiment.
What is this dictionary assignment doing?
[ "", "python", "mapreduce", "nltk", "sentiment-analysis", "" ]
Hi, I am struggling to get my head round indentation levels with this code:

```
def getSolarFlowtemperature():
    #Open the temperature sensor, read it and process the result
    tfile = open("/sys/bus/w1/devices/28-000003086819/w1_slave")
    text = tfile.read()
    tfile.close()
    temperature_data = text.split()[-1]
    temperature = float(temperature_data[2:])
    temperature = temperature / 1000
    #This while function checks for the error temperatures, and tries to read the sensor again to get a proper value. After 10 tries it stops
    count = 0
    while temperature == -0.062 or temperature == -0.125:
        time.sleep(2)
        count = count + 1
        print 'Temperature error on 28-000003086819, retrying'
        tfile = open("/sys/bus/w1/devices/28-000003086819/w1_slave")
        text = tfile.read()
        tfile.close()
        temperature_data = text.split()[-1]
        temperature = float(temperature_data[2:])
        temperature = temperature / 1000
    if count > 10:
    break
    else:
    return(temperature)
```

Can someone advise where the indentation is incorrect?

Steve
The if/else should be in the while and their code should be indented, **break does not make sense outside of a loop** ``` def getSolarFlowtemperature(): #Open the temperature sensor, read it and process the result tfile = open("/sys/bus/w1/devices/28-000003086819/w1_slave") text = tfile.read() tfile.close() temperature_data = text.split()[-1] temperature = float(temperature_data[2:]) temperature = temperature / 1000 #This while function checks for the error temperatures, and tries to read the sensor again to get a proper value. After 10 tries it stops count = 0 while temperature == -0.062 or temperature == -0.125: time.sleep(2) count = count + 1 print 'Temperature error on 28-000003086819, retrying' tfile = open("/sys/bus/w1/devices/28-000003086819/w1_slave") text = tfile.read() tfile.close() temperature_data = text.split()[-1] temperature = float(temperature_data[2:]) temperature = temperature / 1000 if count > 10: break else: return(temperature) ```
```
if count > 10:
break
else:
return(temperature)
```

The bodies of if/else conditions need to be indented as well. It should be:

```
if count > 10:
    break
else :
    return temperature
```

Some other notes: `return temperature` does not need the parentheses you have. Also, to open a file, read it and then close it, you can just do this:

```
with open("/sys/bus/w1/devices/28-000003086819/w1_slave", "r") as tfile :
    text = tfile.read()
```

This ensures the file handle is closed even in case of exceptions. Further, I passed a second argument `r` which specifies that the file should only be opened in read-mode.
Python indentation newbie
[ "", "python", "indentation", "" ]
Is there a way that I can get all the columns and tables referenced in all the stored procedures in all the databases in an instance? The output should be: ``` Database Procedure Table Column -------- --------- ----- ------ ```
This will get the list you're after, however it won't help you if you have such column references embedded in dynamic SQL (and may not find references that rely on deferred name resolution). SQL Server doesn't parse the text of the stored procedure to come up with the DMV output. Try now with `COLLATE` clauses to deal with cases where you have databases on the same server with different collations. ``` DECLARE @sql NVARCHAR(MAX) = N''; SELECT @sql += N'UNION ALL SELECT [database] = ''' + REPLACE(name, '''', '''''') + ''', [procedure] = QUOTENAME(s.name) + ''.'' + QUOTENAME(p.name) COLLATE Latin1_General_CI_AI, [table] = QUOTENAME(referenced_schema_name) + ''.'' + QUOTENAME(referenced_entity_name) COLLATE Latin1_General_CI_AI, [column] = QUOTENAME(referenced_minor_name) COLLATE Latin1_General_CI_AI FROM ' + QUOTENAME(name) + '.sys.schemas AS s INNER JOIN ' + QUOTENAME(name) + '.sys.procedures AS p ON s.[schema_id] = p.[schema_id] CROSS APPLY ' + QUOTENAME(name) + '.sys.dm_sql_referenced_entities' + '(QUOTENAME(s.name) + ''.'' + QUOTENAME(p.name), N''OBJECT'') AS d WHERE d.referenced_minor_id > 0' FROM sys.databases WHERE database_id > 4 AND [state] = 0; SET @sql = STUFF(@sql,1,11,''); EXEC sp_executesql @sql; ``` Also the `CROSS APPLY` syntax won't work if you have databases that are in 80 compatibility mode. Just make sure you don't execute the code in such a database and it should work fine (even if some of the target databases are in 80).
To list all SP Name have contain the specified column name: ``` SELECT OBJECT_NAME(M.object_id), M.* FROM sys.sql_modules M JOIN sys.procedures P ON M.object_id = P.object_id WHERE M.definition LIKE '%ColumnName%' ```
List all columns referenced in all procedures of all databases
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
Looking for a way to get items in between tabs:

```
\t_e\t1\t_
```

I need to get the 1, but in other cases it will be a longer string.

```
re.search("\t_e\t(.*)\t_", string).group(1)
```

9 times out of 10 that line returns the string/value I want, but not always. Just to be clear: after the underscore it could be anything from a letter to a whitespace.
Try making the regex lazy by adding a `?` character after `*`, like so: ``` re.search("\t_e\t(.*?)\t_", string).group(1) ^ ``` This makes the `.*` match as little as possible until the next `\t` (and thus prevents `.` to eat up a `\t`).
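The difference only shows up when more than one `\t_` occurs later in the string, which is presumably why the greedy version works "9 times out of 10". A short sketch:

```python
import re

s = '\t_e\tabc\t_\tdef\t_'

greedy = re.search('\t_e\t(.*)\t_', s).group(1)
lazy = re.search('\t_e\t(.*?)\t_', s).group(1)

print(repr(greedy))  # '.*' runs to the last '\t_': 'abc\t_\tdef'
print(repr(lazy))    # '.*?' stops at the first '\t_': 'abc'
```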
Try this:

```
re.findall(r"(?s)(?<=\t)(.*?)(?=\t)", "\t_e\t1\t_")
```

output:

```
['_e', '1']
```

Basically, it is using positive lookahead and lookbehind assertions. Also, the `(?s)` sets the "dotall" flag, allowing `.` to also match newline characters.
get string in between with python
[ "", "python", "regex", "tabs", "" ]
I have a regex like this `'^(a|ab|1|2)+$'` and want to get all the sequences it matches. For example, for `re.search(reg, 'ab1')` I want to get `('ab', '1')`.

I can get an equivalent result with the pattern `'^(a|ab|1|2)(a|ab|1|2)$'`, but I don't know how many blocks will be matched by `(pattern)+`.

Is this possible, and if yes, how?
Your original expression does match the way you want to, it just matches the entire string and doesn't capture individual groups for each separate match. Using a repetition operator ('+', '\*', '{m,n}'), the group gets overwritten each time, and only the final match is saved. This is alluded to in the [documentation](http://docs.python.org/2/library/re.html#re.MatchObject.group): > If a group matches multiple times, only the last match is accessible.
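A short demonstration of both behaviours, using only the standard `re` module:

```python
import re

# With repetition, the group is overwritten on each iteration,
# so only the last match survives.
m = re.match(r'^(a|ab|1|2)+$', 'ab1')
print(m.group(1))  # 1

# findall re-applies the group pattern and collects every match.
print(re.findall(r'(ab|a|1|2)', 'ab1'))  # ['ab', '1']
```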
try this:

```
import re

r = re.compile('(ab|a|1|2)')
for i in r.findall('ab1'):
    print i
```

The `ab` option has been moved to be first, so it will match `ab` in favor of just `a`. The findall method matches your regular expression more times and returns a list of matched groups. In this simple example you'll get back just a list of strings, each string for one match. If you had more groups, you'd get back a list of tuples, each containing strings for each group. This should work for your second example:

```
pattern = '(7325189|7325|9087|087|18)'
str = '7325189087'
res = re.compile(pattern).findall(str)
print(pattern, str, res, [i for i in res])
```

I'm removing the `^$` signs from the pattern because if findall has to find more than one substring, then it should search anywhere in str. Then I've removed `+` so it matches single occurrences of those options in pattern.
Python regexp: get all group's sequence
[ "", "python", "regex", "" ]
i have the following code in Python/Django:

```
movie_obj = Movies.objects.all()  # assume there are 2 movies
for movie in movie_obj:
    movie.genre  # genre = models.CharField(max_length=100, blank=True)
```

will print like:

```
Drama, Sci-Fi, Thriller    # genre of movie_1
Biography, Drama, Music    # genre of movie_2
```

I want to create a set of the movies' genres like:

```
genre_set = Set(['Drama', 'Sci-Fi', 'Thriller', 'Music', 'Biography'])
```

**The problem:** movie.genre is a **unicode object**. Since each movie's genre is a single unicode object, how can i get the individual genres out of movie_obj, so that i can make a list and thereby a set?
```
>>> a = unicode('bla, blee, bla, blue')
>>> a
u'bla, blee, bla, blue'
>>> a.split(', ')
[u'bla', u'blee', u'bla', u'blue']
>>> set(a.split(', '))
set([u'blee', u'blue', u'bla'])
```

For each genre, you can split it on the comma and then have a list to do what you want with; then you can convert it into a set, if you'd like.
I don't understand your issue. But this should work:

```
genre_groups = Movies.objects.values_list('genre', flat=True)
# output: ['Drama, Sci-Fi, Thriller', 'Biography, Drama, Music']

genres = [agroup.split(",") for agroup in genre_groups]
# output: ['Drama', 'Sci-Fi', 'Thriller', 'Biography', 'Drama', 'Music']

genres_set = set(genres)
# output: set('Drama', 'Sci-Fi', 'Thriller', 'Biography', 'Music')
```

**EDIT**: my bad, the 2nd step returns an array of arrays. I'll see how to solve it in an elegant way.

Alternative steps 2 and 3 (note `genres_set` must start as an empty `set()`, not a dict literal `{}`):

```
genres_set = set()
for agroup in genre_groups:
    genres_set |= set(agroup.split(","))
```
python: How to create set out of unicode objects
[ "", "python", "" ]
> The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

I am getting the above error while trying to execute the following query. Can anyone please have a look and tell me what I am doing wrong here?

```
SELECT *
FROM (
    SELECT Stockmain.VRNOA,
           item.description as item_description,
           party.name as party_name,
           stockmain.vrdate,
           stockdetail.qty,
           stockdetail.rate,
           stockdetail.amount,
           ROW_NUMBER() OVER (ORDER BY VRDATE) AS RowNum
    FROM StockMain
    INNER JOIN StockDetail ON StockMain.stid = StockDetail.stid
    INNER JOIN party ON party.party_id = stockmain.party_id
    INNER JOIN item ON item.item_id = stockdetail.item_id
    WHERE stockmain.etype='purchase'
    ORDER BY VRDATE DESC
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 1 and 5
```
You do not need to use `ORDER BY` in the inner query after the `WHERE` clause, because you have already used it in `ROW_NUMBER() OVER (ORDER BY VRDATE DESC)`:

```
SELECT *
FROM (
    SELECT Stockmain.VRNOA,
           item.description as item_description,
           party.name as party_name,
           stockmain.vrdate,
           stockdetail.qty,
           stockdetail.rate,
           stockdetail.amount,
           ROW_NUMBER() OVER (ORDER BY VRDATE DESC) AS RowNum  --< ORDER BY
    FROM StockMain
    INNER JOIN StockDetail ON StockMain.stid = StockDetail.stid
    INNER JOIN party ON party.party_id = stockmain.party_id
    INNER JOIN item ON item.item_id = stockdetail.item_id
    WHERE stockmain.etype='purchase'
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 1 and 5
```
```
ORDER BY column OFFSET 0 ROWS
```

Surprisingly makes it work; what a strange feature. A bigger example with a CTE as a way to temporarily "store" a long query and re-order it later:

```
;WITH cte AS (
    SELECT .....long select statement here....
)
SELECT * FROM (
    SELECT * FROM ( -- necessary to nest selects for union to work with where & order clauses
        SELECT * FROM cte WHERE cte.MainCol = 1
        ORDER BY cte.ColX asc
        OFFSET 0 ROWS
    ) first
    UNION ALL
    SELECT * FROM (
        SELECT * FROM cte WHERE cte.MainCol = 0
        ORDER BY cte.ColY desc
        OFFSET 0 ROWS
    ) last
) as unionized
ORDER BY unionized.MainCol desc               -- all rows ordered by this one
OFFSET @pPageSize * @pPageOffset ROWS         -- params from stored procedure for pagination, not relevant to example
FETCH FIRST @pPageSize ROWS ONLY              -- params from stored procedure for pagination, not relevant to example
```

So we get all results ordered by `MainCol`. But the results with `MainCol = 1` get ordered by `ColX`, and the results with `MainCol = 0` get ordered by `ColY`.
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions
[ "", "sql", "sql-server", "database", "sql-server-2012", "sql-query-store", "" ]
I want to return only one column from a table. This is my DAO:

```
@SuppressWarnings("unchecked")
public String getUri() {
    return sessionFactory.getCurrentSession()
            .createQuery("uri from Templates WHERE state=1").toString();
}
```

Uri is a column. Domain:

```
public String getUri() {
    return uri;
}

public void setUri(String uri) {
    this.uri = uri;
}
```

Console says:

```
Request processing failed; nested exception is java.lang.IllegalArgumentException: node to traverse cannot be null!
```

The second version:

```
@SuppressWarnings("unchecked")
public String getUri() {
    return (String) sessionFactory.getCurrentSession()
            .createQuery("select uri from TEMPLATES WHERE state=1")
            .uniqueResult();
}
```

Console:

```
Request processing failed; nested exception is org.hibernate.hql.ast.QuerySyntaxException: TEMPLATES is not mapped [select uri from TEMPLATES WHERE state=1]
```
Your SELECT clause is missing:

```
@SuppressWarnings("unchecked")
public String getUri() {
    Query q = sessionFactory.getCurrentSession()
            .createQuery("SELECT uri FROM Templates WHERE state=1");
    List l = q.list(); // guess you know there's only one result?
    return l.get(0).toString();
}
```

More info in: <https://forum.hibernate.org/viewtopic.php?p=2448422>
You need to use the class/field names, not the table/column names. Also, the query won't return a list of instances of your class for that table, but rather an array. Also, put a `select` in your query. I believe that exception means your hql is broken.
How to return only one column in Hibernate?
[ "", "sql", "spring", "hibernate", "" ]
When iterating through a dictionary, I want to skip an item if it has a particular key. I tried something like `mydict.next()`, but I got an error message: `'dict' object has no attribute 'next'`.

```
for key, value in mydict.iteritems():
    if key == 'skipthis':
        mydict.next()
    # for others do some complicated process
```

I am using Python 2.7 if that matters.
Use [`continue`](http://docs.python.org/2/reference/simple_stmts.html#the-continue-statement):

```
for key, value in mydict.iteritems():
    if key == 'skipthis':
        continue
```

Also see:

* [Are `break` and `continue` bad programming practices?](https://softwareengineering.stackexchange.com/questions/58237/are-break-and-continue-bad-programming-practices)
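A runnable sketch of that (the dict contents here are made up for illustration):

```python
mydict = {'skipthis': 0, 'a': 1, 'b': 2}

processed = []
for key, value in mydict.items():
    if key == 'skipthis':
        continue  # jump straight to the next key/value pair
    processed.append((key, value))
```

Only `('a', 1)` and `('b', 2)` end up in `processed`; the `'skipthis'` entry is skipped.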
I think you want to call `mydict.iteritems().next()`; however, you should just filter the list before iterating. To filter your list, you could use a generator expression:

```
r = ((k, v) for k, v in mydict.iteritems() if k != 'skipthis')
for k, v in r:
    pass  # do something complicated to the filtered items
```

Because this is a generator expression, it has the property of only traversing the original dict once, leading to a boost in performance over other alternatives which iterate the dictionary and optionally copy elements to a new one or delete existing elements from it. Generators can also be chained, which can be a powerful concept when iterating. More info on generator expressions: <http://www.python.org/dev/peps/pep-0289/>
Skipping to Next item in Dictionary
[ "", "python", "python-2.7", "dictionary", "" ]
I wanted to remove the 'a' tag (link) from all the images found. Hence, for performance, I made a list of all images in the html, looked for the wrapping a tag, and simply removed the link. I am using BeautifulSoup and am not sure what I am doing wrong: instead of removing the a tag it is removing the inside content.

**This is what I did**

```
from bs4 import BeautifulSoup

html = '''<div>
<a href="http://somelink"><img src="http://imgsrc.jpg" /></a>
<a href="http://somelink2"><img src="http://imgsrc2.jpg /></a>"
'''

soup = BeautifulSoup(html)
for img in soup.find_all('img'):
    print 'THIS IS THE BEGINING /////////////// '
    #print img.find_parent('a').unwrap()
    print img.parent.unwrap()
```

**This gives me the following output**

```
>>> print img.parent()
<a href="http://somelink"><img src="http://imgsrc.jpg" /></a>
<a href="http://somelink2"><img src="http://imgsrc2.jpg /></a>

>>> print img.parent.unwrap()
<a href="http://somelink"></a>
<a href="http://somelink2"></a>
```

I have tried `replaceWith` and `replaceWithChildren`, but they are not working when I use `object.parent` or `findParent`. I am not sure what I am doing wrong. It's been just a few weeks since I started Python.
The `unwrap()` function *returns the tag that has been removed*. The tree itself has been properly modified. Quoting from the [`unwrap()` documentation](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#unwrap):

> Like `replace_with()`, `unwrap()` returns the tag that was replaced.

In other words: it works correctly! Print the *new* parent of `img` instead of the return value of `unwrap()` to see that the `<a>` tags have indeed been removed:

```
>>> from bs4 import BeautifulSoup
>>> html = '''<div>
<a href="http://somelink"><img src="http://imgsrc.jpg" /></a>
<a href="http://somelink2"><img src="http://imgsrc2.jpg /></a>"
'''
>>> soup = BeautifulSoup(html)
>>> for img in soup.find_all('img'):
...     img.parent.unwrap()
...     print img.parent
...
<a href="http://somelink"></a>
<div>
<img src="http://imgsrc.jpg"/>
<a href="http://somelink2"><img src="http://imgsrc2.jpg /&gt;&lt;/a&gt;"/></a></div>
<a href="http://somelink2"></a>
<div>
<img src="http://imgsrc.jpg"/>
<img src="http://imgsrc2.jpg /&gt;&lt;/a&gt;"/></div>
```

Here python echoes the `img.parent.unwrap()` return value, followed by the output of the `print` statement showing the parent of the `<img>` tag is now the `<div>` tag. The first print shows the *other* `<img>` tag still wrapped; the second print shows them both as direct children of the `<div>` tag.
I'm not sure what output you are looking for. Is this it?

```
from bs4 import BeautifulSoup

html = '''<div>
<a href="http://somelink"><img src="http://imgsrc.jpg" /></a>
<a href="http://somelink2"><img src="http://imgsrc2.jpg" /></a>
'''

soup = BeautifulSoup(html)
for img in soup.find_all('img'):
    img.parent.unwrap()

print(soup)
```

yields

```
<html><body><div>
<img src="http://imgsrc.jpg"/>
<img src="http://imgsrc2.jpg"/></div></body></html>
```
Unwrap "a" tag from image, without losing content
[ "", "python", "html-parsing", "beautifulsoup", "" ]
So, I'm trying to be a good Python programmer and duck-type wherever I can, but I've got a bit of a problem where my input is either a `dict` or a `list` of `dict`s. I can't distinguish between them being iterable, because they both are. My next thought was simply to call `list(x)` and hope that returned my list intact and gave me my dict as the only item in a list; alas, it just gives me the list of the dict's keys. I'm now officially out of ideas (short of calling `isinstance` which is, as we all know, not very pythonic). I just want to end up with a list of dicts, even if my input is a single solitary dict.
Really, there is no obvious pythonic way to do this, because it's an unreasonable input format, and the obvious pythonic way to do it is to fix the input…

But if you can't do that, then yes, you need to write an adapter (as close to the input edge as possible). The best way to do that depends on the actual data. If it really is either a dict, or a list of dicts, and nothing else is possible (e.g., you're calling `json.loads` on the results from some badly-written service that returns an object or an array of objects), then there's nothing wrong with `isinstance`.

If you want to make it a bit more general, you can use the appropriate [ABCs](http://docs.python.org/3/library/collections.abc.html). For example:

```
if isinstance(dict_or_list, collections.abc.Mapping):
    return [dict_or_list]
else:
    return dict_or_list
```

But unless you have some good reason to need this generality, you're just hiding the hacky workaround, when you're better off keeping it as visible as possible. If it's, e.g., coming out of `json.loads` from some remote server, handling a `Mapping` that isn't a `dict` is not useful, right?

(If you're using some third-party client library that just returns you "something dict-like" or "something list-like containing dict-like things", then yes, use ABCs. Or, if that library doesn't even support the proper ABCs, you can write code that tries a specific method like `keys`. But if that's an issue, you'll know the specific details you're working around, and can code and document appropriately.)
Accessing a `dict` using a non-`int` key will get you either an item or a `KeyError`. It will get you a `TypeError` with a `list`. So you can use exception handling:

```
def list_dicts(dict_or_list):
    try:
        dict_or_list[None]
        return [dict_or_list]   # no error, we have a dict
    except TypeError:
        return dict_or_list     # wrong index type, we have a list
    except Exception:
        return [dict_or_list]   # probably KeyError but catch anything to be safe
```

This function will give you a `list` of `dicts` regardless of whether it got a `list` or a `dict`. (If it got a `dict`, it makes a list of one item out of it.) This should be fairly safe type-wise, too; other `dict`-like or `list`-like objects would probably be considered broken if they didn't have similar behavior.
What's the pythonic way to distinguish between a dict and a list of dicts?
[ "", "python", "list", "dictionary", "" ]
so the code is intended to increase the value of count every time it meets the requirement of the if statement, and in the end it gives the count of how many times one particular item has appeared in the list. here is the code:

```
x = ["fizz", "fizz", "fizz"]

def fizz_count(x):
    count = 0
    for fizz in x:
        return count+1
```

now it only gets me 1, for I am aware that the for loop stopped at the very first "fizz". how do i get it going? (i tried print and it returned me three 1s; now that is a start, cause if i am able to tell python to count how many 1s there are, i will get the frequency with which the string "fizz" has appeared :) thank you to anyone who takes a look at this
You are returning from the for loop on the very first iteration; that is why you get `count = 1`. You should increment the `count` in the loop, and return it at the end of the function.

---

However, your for loop is not doing what you want it to do. It will not search for `fizz` in the list. The loop variable `fizz` will take each element in order from the list, and not just the elements with value `fizz`. If you want to count the total number of `fizz`, you need to add an `if` condition to test the value. Or you can simply use a [*generator expression*](http://docs.python.org/2/howto/functional.html?highlight=sum#generator-expressions-and-list-comprehensions) with the [`sum()`](http://docs.python.org/2/library/functions.html#sum) function:

```
def fizz_count(x):
    return sum(1 for item in x if item == "fizz")
```

Or, even simpler, use the [`list.count(item)`](http://docs.python.org/2/library/functions.html#sum) method:

```
def fizz_count(x):
    return x.count("fizz")
```
The issue here is that the `return` statement will exit the function, so you will only loop once and always return `count + 1`, which is `0 + 1 = 1`. You can fix this by not returning `count + 1` inside the loop, but rather by returning the total at the end:

```
def fizz_count(x):
    count = 0
    for fizz in x:
        count = count+1
    return count
```

To get the number of times a particular item has appeared, you can use an `if` statement:

```
def fizz_count(x, item):
    count = 0
    for fizz in x:
        if fizz == item:
            count = count+1
    return count
```
for statement does not work as intended
[ "", "python", "for-loop", "" ]
I have a string that contains letters and punctuation. I'm trying to replace only the letters in this string with other letters. The function I have developed only works for strings that contain letters. If numbers are included it produces a logic error, and if punctuation is included then it produces a run-time error. Is there any way that I can get my function to ignore punctuation and leave it as is, while only operating on the letters?

```
#Create a string variable, ABjumbler generates an alphabet shifted by x units to the right
#ABshifter converts a string using one type to another
textObject = "g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj."
smalltext = 'abcde'
alphabet = list(['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'])

def ABjumbler(alphabet, x):
    freshset = []
    i=0
    j=0
    while i<(len(alphabet)-x):
        freshset.extend(alphabet[i+x])
        i+=1
    while j<x:
        freshset.extend(alphabet[j]) #extend [0]
        j+=1 #change j = to 1, extends by [1], then by [2], and then terminates when it reaches x
    alphabet = freshset
    return alphabet

newAlphabet = ABjumbler(alphabet, 2)

def ABshifter(text, shiftedalphabet):
    freshset = []
    for letters in text:
        position = text.index(letters)
        freshset.extend(shiftedalphabet[position])
    final = ''.join(freshset)
    return final

print ABshifter(smalltext, newAlphabet)
```
For one, there are some faster/simpler ways of doing the shifting you want. But to answer your question, you could simply add:

```
if not letter.isalpha():
    continue
```

`str.isalpha()` returns `True` if the string is composed only of alphabetic letters.
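Dropped into a character loop, that check looks roughly like this (the sample string is made up, and the loop is simplified to just collect the letters):

```python
text = "abc, 123 def!"

letters = []
for letter in text:
    if not letter.isalpha():
        continue  # skip digits, punctuation and spaces
    letters.append(letter)

result = ''.join(letters)  # 'abcdef'
```

The non-letter characters never reach the rest of the loop body, so there is no `ValueError` or logic error for them.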
Try this one:

```
textObject = "g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj."
smalltext = 'abcde'
alphabet = list(['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'])

def ABjumbler(alphabet, x):
    # for x greater than the alphabet length
    if x >= len(alphabet):
        x = x % len(alphabet)
    # return a dictionary like 'a':'c', 'b':'d' etc
    return dict(zip(alphabet, alphabet[x:] + alphabet[:x]))

def ABshifter(letter, alph):
    if letter.isalpha():
        return alph[letter]
    return letter

print "".join(map(lambda x: ABshifter(x, ABjumbler(alphabet, 2)), smalltext))
```
How can I get a loop to ignore non-letter elements in a list?
[ "", "python", "string", "list", "function", "" ]
Currently developing an RPG, I'm asking how I could protect the saved data so that the player/user can't read or modify it easily. I mean, yes, a person that is experienced with computers and programming could modify it, but I don't want the average user to be able to modify it as easily as one could modify a plaintext xml file. Is there a way I could do that with python?
It sounds like you need a cryptography library. This will help you encrypt or decrypt a file using a key. Good thing there is already one called PyCrypto. You can download it [here](http://www.voidspace.org.uk/python/modules.shtml#pycrypto). To use it, once you have downloaded it ([it is documented here](http://pythonhosted.org/pycrypto/Crypto.Cipher.AES-module.html#new)):

```
import string

from Crypto.Cipher import AES
from Crypto import Random
from Crypto.Random import random


def gen_key():
    # generates a 32 letter long (256 bit) key
    return ''.join(random.sample(string.ascii_letters, 32))

def write_data(data, rfile, key):
    # a fresh IV (and therefore a fresh cipher) for every write
    iv = Random.new().read(AES.block_size)
    cipher = AES.new(key, AES.MODE_CFB, iv)
    with open(rfile, 'wb') as f:
        f.write(iv + cipher.encrypt(data))

def read_data(rfile, key):
    with open(rfile, 'rb') as f:
        raw = f.read()
    # first 16 bytes are the IV
    iv, data = raw[:AES.block_size], raw[AES.block_size:]
    cipher = AES.new(key, AES.MODE_CFB, iv)
    return cipher.decrypt(data)

def encrypt_existing_file(infile, outfile, key):
    with open(infile, 'rb') as inf:
        write_data(inf.read(), outfile, key)

def decrypt_existing_file(infile, outfile, key):
    with open(outfile, 'wb') as outf:
        outf.write(read_data(infile, key))

if __name__ == '__main__':
    key = gen_key()
    write_data(b"You didn't see anything...", 'file.txt', key)
    # ...
    # outputs: You didn't see anything...
    print (read_data('file.txt', key))
```

It works by using AES as a symmetric key cipher. First I generate a random key out of 32 randomly chosen ascii letters. Then, for each file, I create an initialization vector (iv). This is necessary at the *start* of an encrypted file to initialize properly; it is stored as the first 16 bytes of the file and read back (to build a fresh cipher) before decrypting. Then the cipher itself, which takes a key, an operation mode, and an initialization vector. CFB mode (Cipher Feedback Mode) just means that AES will be used in a way where the next block depends somewhat on the previous block.
Udacity has several great videos on [Symmetric ciphers](http://www.youtube.com/watch?v=u5soSzzXy8Q), [AES](http://www.youtube.com/watch?v=_ujdCbh7dvA), and [CBC](http://www.youtube.com/watch?v=0D7OwYp6ZEc).
Just pickling (`pickle` or `cPickle`) a configuration object, with the pickle protocol set to max, is a quick and easy option.
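A minimal sketch of that idea (the save-data fields are made up; note that pickling only obfuscates the file, it does not encrypt it):

```python
import pickle

save = {'level': 3, 'hp': 42}  # hypothetical save data

# the highest protocol gives a compact binary (non-plaintext) format
blob = pickle.dumps(save, protocol=pickle.HIGHEST_PROTOCOL)

# in a real game you would write `blob` to the save file, then
# pickle.loads() it back when loading
loaded = pickle.loads(blob)
```

The round trip restores the original object exactly, while the file contents are no longer human-editable plaintext.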
Protecting a Save file from modification in a game?
[ "", "python", "pygame", "" ]
**Update:** Following @dtmilano's suggestion, I added

```
import sys
print(sys.path)
```

to the beginning of my MonkeyRunner script. This results in

```
['e:/path/android-sdk/tools/lib/monkeyrunner.jar:e:\\path\\bbct\\android\\functional-tests', 'E:\\path\\android-sdk\\tools\\lib\\Lib', '/E:/path/android-sdk/tools/lib/jython-standalone-2.5.3.jar/Lib', '__classpath__', '__pyclasspath__/']
```

At first glance I thought this included the current working directory. However, a closer inspection showed that the output is a list of strings where the first string is

```
'e:/path/android-sdk/tools/lib/monkeyrunner.jar:e:\\path\\bbct\\android\\functional-tests'
```

For some reason this contains two paths concatenated together. Is this a bug in MonkeyRunner and/or Jython?

**Original Question:** I have two .py files in the same directory: `screenshots.py` and `util.py`. I need to run screenshots.py with the `monkeyrunner` interpreter from the Android build tools. When I run `monkeyrunner screenshots.py`, I get the error `No module named util`. How do I configure my python and/or monkeyrunner to find the code in `util.py`?

**Edit:** I am using Git Bash on Win7. (Oops, I probably should have mentioned this earlier.)
For reference, this is the complete error message:

```
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] Script terminated due to an exception
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]Traceback (most recent call last):
  File "c:\Users\Dell\Documents\dev\src\java\bbct\android\functional-tests\screenshots.py", line 19, in <module>
    import util
ImportError: No module named util

130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.Py.ImportError(Py.java:264)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.imp.import_first(imp.java:657)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.imp.import_name(imp.java:741)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.imp.importName(imp.java:791)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.ImportFunction.__call__(__builtin__.java:1236)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.PyObject.__call__(PyObject.java:367)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.__builtin__.__import__(__builtin__.java:1207)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.__builtin__.__import__(__builtin__.java:1190)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.imp.importOne(imp.java:802)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.pycode._pyx0.f$0(c:\Users\Dell\Documents\dev\src\java\bbct\android\functional-tests\screenshots.py:51)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.pycode._pyx0.call_function(c:\Users\Dell\Documents\dev\src\java\bbct\android\functional-tests\screenshots.py)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.PyTableCode.call(PyTableCode.java:165)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.PyCode.call(PyCode.java:18)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.Py.runCode(Py.java:1197)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.core.__builtin__.execfile_flags(__builtin__.java:538)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:156)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at com.android.monkeyrunner.ScriptRunner.run(ScriptRunner.java:116)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at com.android.monkeyrunner.MonkeyRunnerStarter.run(MonkeyRunnerStarter.java:77)
130807 12:01:59.978:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions]    at com.android.monkeyrunner.MonkeyRunnerStarter.main(MonkeyRunnerStarter.java:189)
```
For your reference, the android SDK does appear to have a bug in it where the first item on sys.path is the monkeyRunner lib and the working directory mashed together. I added the following to fix it:

```
import sys
sys.path.append(sys.path[0].split(':', 1)[1])
import util
```
You need to add the module to the search path (your cwd is not in the same dir?): <http://docs.python.org/2/tutorial/modules.html>

```
import sys
sys.path.append('/path/to/your/module')
```
How does Python (or MonkeyRunner) locate imported modules?
[ "", "android", "python", "module", "monkeyrunner", "" ]
I used these Python scripts. Python 2.6 and 2.7:

```
for i in xrange(1000000):print i
```

Python 3.3:

```
for i in range(1000000):print(i)
```

Results:

python 2.7

```
real 1m15.973s
user 0m3.392s
sys 0m6.384s
```

python 2.6

```
real 1m15.860s
user 0m3.064s
sys 0m6.296s
```

With python 3.3 I tested the script many times and got these differing run times:

```
real 1m34.235s
user 0m10.668s
sys 0m8.988s

real 1m29.431s
user 0m10.304s
sys 0m9.512s

real 1m12.528s
user 0m10.568s
sys 0m9.004s

real 1m4.193s
user 0m9.688s
sys 0m8.812s

real 1m18.332s
user 0m9.728s
sys 0m9.132s
```

After that I tried **python 2.6** again and got this:

```
real 0m45.003s
user 0m3.056s
sys 0m5.956s
```

What is the best way to benchmark two Python scripts across 3.3 and 2.7 (or 2.6)?
There are several ways to benchmark Python programs. At least, I can come up with two serious ways. You can find an expanded version of what I say in these slides [here](https://docs.google.com/file/d/0B2opoqYF5uR8U2dHOHpmdS1LNlE/). You may also benefit from this [video](http://pyvideo.org/video/1770/python-profiling) from a talk about Python profiling from PyCon 2013 (from Amjith Ramanujam).

## cProfile module

The [cProfile](http://docs.python.org/2/library/profile.html) module gives you an insight into the time spent in every procedure of your program. It can be manipulated in a very efficient and precise manner. But the drawback of it is that you cannot trust the execution time it gives for each procedure, only its relative time spent compared to the others.

Using cProfile is simply done like this:

```
python -m cProfile ./mypythonprogram.py
```

If you know `gprof`, it will give you a similar output, but for a Python program.

## `timeit` module

The [`timeit`](http://docs.python.org/2/library/timeit.html) module is intended to really evaluate the time the program spends in total. In contrast to `cProfile`, there is no extra instrumentation of each procedure and, thus, no slowdown during the execution.

```
def foo ():
    for i in range (10):
        i = i * i

from timeit import Timer

max_iter = 10000
t = Timer ("foo()", "from __main__ import foo")
print ("foo(%i): %f seconds" %(max_iter, t.timeit(max_iter)))
```

And you call it like this from the command line:

```
$> python2.7 timeit_example.py
foo(10000): 0.012774 seconds
$> python3.2 timeit_example.py
foo(10000): 0.014030 seconds
```
Use the [`timeit` module](http://docs.python.org/2/library/timeit.html) to compare small Python snippets. It avoids the common pitfalls, making timing values comparable. However, you are mostly timing write speed to `sys.stdout` when timing `print i` (and in Python 3, encoding to the terminal codec).
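For instance, a minimal sketch of timing a small snippet with `timeit` (the snippet itself is arbitrary):

```python
import timeit

# run the statement 1000 times; timeit temporarily disables garbage
# collection during the timing and uses the most precise available clock,
# which is what makes the numbers comparable between runs
elapsed = timeit.timeit('for i in range(100): pass', number=1000)
```

`elapsed` is the total time for all 1000 runs, in seconds, as a float.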
python 3.3 ,2.7 and 2.6 - benchmark inconsistent results. how fix that?
[ "", "python", "" ]
I am trying to print this code, but I keep getting errors when I use the print command in Python 2.7. So I pretty much understand how the interpreter reads and executes this program. What would be the ideal location to put *"print s"* to see the results on the screen? Thank you very much.

```
n = raw_input('input an integer value to compute its factorial:\n')
n = int(n)

def f(n):
    if n==0:
        return 1
    else:
        m = f(n-1)
        s = n * m
        return s
```
From my experience, you probably want something closer to this:

```
def f(n):
    if n==0:
        return 1
    else:
        m = f(n-1)
        s = n * m
        return s

if __name__ == '__main__':
    n = raw_input('input an integer value to compute its factorial:\n')
    n = int(n)
    result = f(n)
    print result

    # or alternatively for the last two lines, if you don't want to save the result
    print f(n)
```

This is so that when you just execute this script from IDLE (or directly from the command line) it runs the `if __name__` block, but otherwise it just defines the function.
I'd move the definition of `f` to the top, above your call to `raw_input`. Then at the very end, you can call `f` with `n` and print the result:

```
print f(n)
```

If you wanted to, you could store the result in a variable, although there's no advantage here:

```
result = f(n)
print result
```
How to print out the results:
[ "", "python", "python-2.7", "" ]
When connecting to a mysql database in Django, I get the error.

1. I'm sure the mysql server is running.
2. /var/run/mysqld/mysqld.sock doesn't exist.
3. When I run `$ find / -name *.sock -type s`, I only get /tmp/mysql.sock and some other irrelevant output.
4. I added `socket = /tmp/mysql.sock` to /etc/my.cnf, then restarted mysql, exited the django shell, and connected to the mysql database. I still got the same error.

I searched a lot, but I still don't know what to do. Any help is great. Thanks in advance.

Well, I just tried some ways and it works. I did as follows:

1. Add `socket = /tmp/mysql.sock`. Restart the mysql server.
2. `ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock`

I met another problem today: I couldn't log in to mysql. I'm a newbie to mysql, so I guess the mysql server and client use the same socket to communicate. I added `socket = /var/mysqld/mysqld.sock` to the [mysqld] and [client] blocks in my.cnf and it works.
Use "127.0.0.1", instead of "localhost" ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'django', 'USER': 'root', 'PASSWORD': '', 'HOST': '127.0.0.1', 'PORT': '3306', } } ```
For me this worked: add an `OPTIONS` attribute with `read_default_file` and give it the path of the my.cnf file.

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbname',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': '3306',
        'OPTIONS': {
            'read_default_file': '/opt/lampp/etc/my.cnf',
        }
    }
}
```
OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
[ "", "python", "mysql", "django", "mysql.sock", "" ]
I have a list of lists that contains users and scores as follows: ``` [["user1", 100], ["user2", 234], ["user3", 131]...] ``` I want to produce a list that sorts the users by score in declining order: ``` [["user2", 234], ["user3", 131], ["user1", 100]...] ``` How might I go about doing this kind of sort?
``` >>> li = [["user1", 100], ["user2", 234], ["user3", 131]] >>> >>> import operator >>> >>> sorted(li, key=operator.itemgetter(1)) # Ascending order [['user1', 100], ['user3', 131], ['user2', 234]] >>> sorted(li, key=operator.itemgetter(1), reverse=True) # Reverse Sort [['user2', 234], ['user3', 131], ['user1', 100]] ```
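The `operator.itemgetter(1)` key is interchangeable with a `lambda` that indexes the same position; a quick sketch comparing the two on the question's data:

```python
import operator

li = [["user1", 100], ["user2", 234], ["user3", 131]]

# itemgetter(1) and the lambda both extract the score at index 1.
by_getter = sorted(li, key=operator.itemgetter(1), reverse=True)
by_lambda = sorted(li, key=lambda row: row[1], reverse=True)

print(by_getter)   # [['user2', 234], ['user3', 131], ['user1', 100]]
```

`itemgetter` is usually a touch faster and reads well for simple index lookups; the lambda form is handier when the key needs any extra computation.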
You can customize the sort behaviour by passing `key` and `reverse`. `sorted` will return a new list. If an in-place sort is wanted, use `list.sort`.

```
>>> a = [["user1", 100], ["user2", 234], ["user3", 131]]
>>> sorted(a, key=lambda x: x[1], reverse=True)
[['user2', 234], ['user3', 131], ['user1', 100]]
```
Sort a list of lists by an item in the sublist
[ "", "python", "list", "sorting", "" ]
I think this should be a simple question... but it's been holding me up for some time now :( I wish to create a list of numbers, centred (as it were) on zero, from an input that specifies the maximum and the increment. So, ``` max = 100 increment = 1 ``` would return ``` [-100,-99,-98,...,-1,0,1,...,99,100] ``` and ``` max = 35 increment = 0.2 ``` would return ``` [-35.0,-34.8,...,-0.2,0,0.2,...34.8,35.0] ``` If the increment doesn't divide neatly into the maximum, it needs to make a short last step (e.g. if counting to 1 in 0.3 increments, it would run `[-1.0,-0.6,-0.3,0.0,0.3,0.6,0.9,1.0]` `list(numpy.linspace())` seems to be the way to go but I seem to be having a complete mental block on how to make this work in the way described for anything but the simplest cases. Suggestions appreciated! edit: my own solution was ``` def mylist(stop,step): a = list(np.arange(0,-stop,-step))+[-stop] a.reverse() b = list(a) c = list(np.arange(0,stop,step))+[stop] d = b+c d.remove(0) e = list(d) return e ``` which is horribly clunky, even I can see. The best answer was: ``` def mirrored(maxval, inc): x = np.arange(inc, maxval, inc) if x[-1] != maxval: x = np.r_[x, maxval] return np.r_[-x[::-1], 0, x] ``` but I am going to have to google a little more to understand *why* that works (also not sure if I want to round... the input for the increment might be legitimately specified to more than one decimal place)
If you want it to be strictly mirrored around `0`, (i.e. always include 0 and the endpoints, and be perfectly symmetric about 0) you'll need a couple of steps. First off, be aware of @NPE's comment above. Floating point math is not the same as decimal math!! This may seem beside the point, but it *will* bite you in certain circumstances. There's more than one way to do this. Do you want to have all of the numbers be evenly spaced, or stick to the increment and only violate it at the endpoints?. This approach takes the latter of the two. ``` import numpy as np def mirrored(maxval, inc=1): x = np.arange(inc, maxval, inc) if x[-1] != maxval: x = np.r_[x, maxval] return np.r_[-x[::-1], 0, x] print mirrored(1, 0.3) ``` This yields: ``` [-1. -0.9 -0.6 -0.3 0. 0.3 0.6 0.9 1. ] ``` If you want all of the numbers to be evenly spaced (but not the exact increment you specify), just use linspace: ``` import numpy as np def mirrored2(maxval, inc=1): return np.linspace(-maxval, maxval, 2*maxval // inc) print mirrored2(1, 0.3) ``` This yields: ``` [-1. -0.6 -0.2 0.2 0.6 1. ] ```
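For readers without numpy, the endpoint-preserving variant can be sketched in pure Python (the function name and the small float tolerance are my own choices, not part of the answer above):

```python
def mirrored_list(maxval, inc):
    # Build the positive side first, forcing the endpoint in when the
    # increment does not divide maxval evenly.
    x = []
    v = inc
    while v < maxval - 1e-9:        # tolerance guards against float drift
        x.append(round(v, 10))
        v += inc
    if not x or x[-1] != maxval:
        x.append(maxval)
    # Mirror around zero: negated reversed positives, zero, positives.
    return [-v for v in reversed(x)] + [0] + x

print(mirrored_list(1, 0.3))   # [-1, -0.9, -0.6, -0.3, 0, 0.3, 0.6, 0.9, 1]
```

The `round(..., 10)` keeps accumulated float error (0.30000000000000004 and friends) out of the output while leaving legitimately precise increments untouched.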
It is quite easy using [`numpy.arange`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html):

```
>>> import numpy as np
>>> max_val = 100
>>> increment = 1
>>> a = np.arange(start=-max_val, stop=max_val+increment, step=increment)
>>> print a
array([-100,  -99,  -98,  -97,  -96,  -95,  -94,  -93,  -92,  -91,  -90,
        -89,  -88,  -87,  -86,  -85,  -84,  -83,  -82,  -81,  -80,  -79,
        -78,  -77,  -76,  -75,  -74,  -73,  -72,  -71,  -70,  -69,  -68,
        -67,  -66,  -65,  -64,  -63,  -62,  -61,  -60,  -59,  -58,  -57,
        -56,  -55,  -54,  -53,  -52,  -51,  -50,  -49,  -48,  -47,  -46,
        -45,  -44,  -43,  -42,  -41,  -40,  -39,  -38,  -37,  -36,  -35,
        -34,  -33,  -32,  -31,  -30,  -29,  -28,  -27,  -26,  -25,  -24,
        -23,  -22,  -21,  -20,  -19,  -18,  -17,  -16,  -15,  -14,  -13,
        -12,  -11,  -10,   -9,   -8,   -7,   -6,   -5,   -4,   -3,   -2,
         -1,    0,    1,    2,    3,    4,    5,    6,    7,    8,    9,
         10,   11,   12,   13,   14,   15,   16,   17,   18,   19,   20,
         21,   22,   23,   24,   25,   26,   27,   28,   29,   30,   31,
         32,   33,   34,   35,   36,   37,   38,   39,   40,   41,   42,
         43,   44,   45,   46,   47,   48,   49,   50,   51,   52,   53,
         54,   55,   56,   57,   58,   59,   60,   61,   62,   63,   64,
         65,   66,   67,   68,   69,   70,   71,   72,   73,   74,   75,
         76,   77,   78,   79,   80,   81,   82,   83,   84,   85,   86,
         87,   88,   89,   90,   91,   92,   93,   94,   95,   96,   97,
         98,   99,  100])
```

In cases where the increment does not evenly divide the maximum, such as `max_val = 1` and `increment = 0.3`, the output overshoots the bounds:

```
[-1.  -0.7 -0.4 -0.1  0.2  0.5  0.8  1.1]
```

One way to solve this problem could be:

```
>>> print a[np.where((a<=max_val) & (a>=-max_val))[0]]
[-1.  -0.7 -0.4 -0.1  0.2  0.5  0.8]
```

This ensures that the maximum and minimum values stay inside the limits.
Create list of numbers mirrored around zero (python)
[ "", "python", "list", "numpy", "" ]
I'm trying to figure out how to combine these two queries. ``` SELECT `o`.`Order_ID` FROM `Orders` `o` JOIN `CustomerDetails` `cd` ON `cd`.`Customer_ID` = `o`.`Customer_ID` WHERE `o`.`OrderPlaceServerTime` >= '2013-06-01 00:00:00' AND `o`.`OrderPlaceServerTime` <= '2013-06-31 23:59:59' AND `cd`.`SalesRep_ID` = 6 ``` This gives me a list of `Order_ID`s that I need to update with the `SalesRep_ID = 6` from the above Query. After I get the list of `Order_ID`s from the Query above I use... ``` UPDATE Orders SET SalesRep_ID = '6' WHERE (Order_ID = 541304 OR Order_ID = 541597 OR Order_ID = 542318) ``` Doing so updates the orders with the correct `SalesRep_ID`. Ultimately I'd like to combine these to make one query where I would just change the `SalesRep_ID`
A solution with proper `UPDATE` syntax with `JOIN` for MySql ``` UPDATE Orders o JOIN CustomerDetails d ON d.Customer_ID = o.Customer_ID SET o.SalesRep_ID = 6 WHERE o.OrderPlaceServerTime >= '2013-06-01 00:00:00' AND o.OrderPlaceServerTime <= '2013-06-31 23:59:59' AND d.SalesRep_ID = 6 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/cc24b/1)** demo
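MySQL's `UPDATE ... JOIN` syntax is not portable, but the same effect can be sandboxed with a correlated `EXISTS`, which works in SQLite too. The schema below is a stripped-down toy version of the question's tables, with invented order rows:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE Orders (Order_ID INT, Customer_ID INT, '
          'SalesRep_ID INT, OrderPlaceServerTime TEXT)')
c.execute('CREATE TABLE CustomerDetails (Customer_ID INT, SalesRep_ID INT)')
c.executemany('INSERT INTO Orders VALUES (?, ?, ?, ?)', [
    (541304, 1, None, '2013-06-05 10:00:00'),
    (541597, 2, None, '2013-06-10 10:00:00'),
    (900000, 3, None, '2013-06-12 10:00:00'),   # customer not handled by rep 6
])
c.executemany('INSERT INTO CustomerDetails VALUES (?, ?)', [(1, 6), (2, 6), (3, 9)])

# Portable equivalent of the UPDATE ... JOIN above: update an order only
# when a matching CustomerDetails row for rep 6 exists.
c.execute("""
    UPDATE Orders
    SET SalesRep_ID = 6
    WHERE OrderPlaceServerTime BETWEEN '2013-06-01 00:00:00'
                                   AND '2013-06-30 23:59:59'
      AND EXISTS (SELECT 1 FROM CustomerDetails d
                  WHERE d.Customer_ID = Orders.Customer_ID
                    AND d.SalesRep_ID = 6)
""")
updated = c.execute('SELECT Order_ID FROM Orders WHERE SalesRep_ID = 6 '
                    'ORDER BY Order_ID').fetchall()
print(updated)   # [(541304,), (541597,)]
```

Only the two orders belonging to rep 6's customers are touched; the third row is left alone.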
You can do it in a single query by just simply combining them: ``` UPDATE Orders SET SalesRep_ID = '6' WHERE Order_ID IN ( SELECT `o`.`Order_ID` FROM `Orders` `o` JOIN `CustomerDetails` `cd` ON `cd`.`Customer_ID` = `o`.`Customer_ID` WHERE `o`.`OrderPlaceServerTime` >= '2013-06-01 00:00:00' AND `o`.`OrderPlaceServerTime` <= '2013-06-31 23:59:59' AND `cd`.`SalesRep_ID` = 6 ); ```
Update multiple rows from results from a SELECT query to the same table
[ "", "mysql", "sql", "" ]
I'm new to Python (and programming in general) and I can't seem to find a solution to this by myself. I want to check whether the first letter of a string is equal to any letter stored in an array, something like this:

```
letter = ["a", "b", "c"]

word = raw_input('Enter a word:')
first = word[0]

if first == letter:
    print "Yep"
else:
    print "Nope"
```

But this doesn't work. Does anyone know how to make it work? Thanks in advance!
You need to use the `in` operator. Use `if first in letter:`. ``` >>> letter = ["a", "b", "c"] >>> word = raw_input('Enter a word:') Enter a word:ant >>> first = word[0] >>> first in letter True ``` And one False test, ``` >>> word = raw_input('Enter a word:') Enter a word:python >>> first = word[0] >>> first in letter False ```
The hint is in your question. Use [`any`](http://docs.python.org/2/library/functions.html#any). This uses a generator expression to check if it is True or False. ``` any(first == c for c in letter) ```
Checking if a value is equal to any value in an array
[ "", "python", "" ]
I have a text file containing simulation data (60 columns, 100k rows):

```
a b c
1 11 111
2 22 222
3 33 333
4 44 444
```

... where the first row contains the variable names, and beneath (in columns) is the corresponding data (float type). I need to use all these variables with their data in Python for further calculations. For example, when I insert:

```
print(b)
```

I need to receive the values from the second column. I know how to import data:

```
data=np.genfromtxt("1.txt", unpack=True, skiprows = 1)
```

Assign variables "manually":

```
a,b,c=np.genfromtxt("1.txt", unpack=True, skiprows = 1)
```

But I'm having trouble getting the variable names:

```
import csv

reader = csv.reader(open("1.txt", "rt"))
rows = []  # a named list, instead of shadowing the built-in `list`
for row in reader:
    rows.append(row)
variables = rows[0]
```

How can I change this code to get all variable names from the first row and assign them to the imported arrays?
Instead of trying to assign names, you might think about using an [associative array](http://en.wikipedia.org/wiki/Associative_array), which is known in Python as a [`dict`](http://docs.python.org/3/library/stdtypes.html#mapping-types-dict), to store your variables and their values. The code could then look something like this (borrowing liberally from the [`csv` docs](http://docs.python.org/3/library/csv.html)): ``` import csv with open('1.txt', 'rt') as f: reader = csv.reader(f, delimiter=' ', skipinitialspace=True) lineData = list() cols = next(reader) print(cols) for col in cols: # Create a list in lineData for each column of data. lineData.append(list()) for line in reader: for i in xrange(0, len(lineData)): # Copy the data from the line into the correct columns. lineData[i].append(line[i]) data = dict() for i in xrange(0, len(cols)): # Create each key in the dict with the data in its column. data[cols[i]] = lineData[i] print(data) ``` `data` then contains each of your variables, which can be accessed via `data['varname']`. So, for example, you could do `data['a']` to get the list `['1', '2', '3', '4']` given the input provided in your question. I think trying to create names based on data in your document might be a rather awkward way to do this, compared to the dict-based method shown above. If you really want to do that, though, you might look into [reflection](http://en.wikipedia.org/wiki/Reflection_%28computer_programming%29) in Python (a subject I don't really know anything about).
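The column-collecting loops in the answer above can be shortened with `zip`, which transposes rows into columns in one step. A sketch on in-memory rows (so no file is needed; the data mirrors the question's sample):

```python
rows = [
    ['a', 'b', 'c'],
    ['1', '11', '111'],
    ['2', '22', '222'],
    ['3', '33', '333'],
    ['4', '44', '444'],
]

header, *body = rows
# zip(*body) transposes the data rows into columns; zip(header, ...)
# then pairs each column with its name from the first row.
data = {name: list(col) for name, col in zip(header, zip(*body))}

print(data['b'])   # ['11', '22', '33', '44']
```

With real input you would build `rows` from `csv.reader` first; converting the strings to `float` is one more comprehension away.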
The answer is: **you don't want to do that**. Dictionaries are designed for exactly this purpose: the data structure you *actually* want is going to be something like: ``` data = { "a": [1, 2, 3, 4], "b": [11, 22, 33, 44], "c": [111, 222, 333, 444], } ``` ... which you can then easily access using e.g. `data["a"]`. It's **possible** to do what you want, but the usual way is a hack which relies on the fact that Python uses *(drumroll)* a `dict` internally to store variables - and since your code won't know the names of those variables, you'll be stuck using dictionary access to get at them as well ... so you might as well just use a dictionary in the first place. It's worth pointing out that this is **deliberately** made difficult in Python, because if your code doesn't know the names of your variables, they are by definition data rather than logic, and should be treated as such. In case you aren't convinced yet, here's a good article on this subject: **[Stupid Python Ideas: Why you don't want to dynamically create variables](http://stupidpythonideas.blogspot.com/2013/05/why-you-dont-want-to-dynamically-create.html)**
Importing data and variable names from a text file in Python
[ "", "python", "variables", "dynamic", "variable-assignment", "genfromtxt", "" ]
Using python and argparse, the user could input a file name with -d as the flag.

```
parser.add_argument("-d", "--dmp", default=None)
```

However, this failed when the path included spaces. E.g.

```
-d C:\SMTHNG\Name with spaces\MORE\file.csv
```

NOTE: the spaces would cause an error (flag only takes in 'C:\SMTHNG\Name' as input).

```
error: unrecognized arguments: with spaces\MORE\file.csv
```

Took me longer than it should have to find the solution to this problem... (did not find a Q&A for it so I'm making my own post)
Simple solution: argparse considers a space filled string as a single argument if it is encapsulated by quotation marks. This input worked and "solved" the problem: ``` -d "C:\SMTHNG\Name with spaces\MORE\file.csv" ``` NOTICE: argument has "" around it.
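The quoting happens in the shell, not in argparse: by the time `parse_args` runs, the quoted path is already a single element of `argv`. This is easy to verify by handing `parse_args` the list directly, which is what the shell would produce from the quoted command line:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--dmp', default=None)

# The shell turns  -d "C:\SMTHNG\Name with spaces\MORE\file.csv"
# into exactly this two-element list: one flag, one complete path.
opts = parser.parse_args(['-d', r'C:\SMTHNG\Name with spaces\MORE\file.csv'])

print(opts.dmp)   # C:\SMTHNG\Name with spaces\MORE\file.csv
```

Without the quotes, the shell splits on the spaces and argparse sees four separate tokens, which produces the "unrecognized arguments" error from the question.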
For those who can't parse arguments and still get "error: unrecognized arguments:" I found a workaround: ``` parser.add_argument('-d', '--dmp', nargs='+', ...) opts = parser.parse_args() ``` and then when you want to use it just do ``` ' '.join(opts.dmp) ```
Handle spaces in argparse input
[ "", "python", "command-line-arguments", "user-input", "argparse", "spaces", "" ]
I have a variable "myvar" that when I print out its `type(myvar)` the output is: ``` <class 'my.object.kind'> ``` If I have a list of 10 variables including strings and variables of that kind.. how can I construct an if statement to check whether an object in the list "mylist" is of `<type 'my.object.kind'>`?
Use [`isinstance`](http://docs.python.org/3/library/functions.html#isinstance); this will return `True` even if it is an instance of a subclass:

```
if isinstance(x, my.object.kind)
```

Or:

```
type(x) == my.object.kind #3.x
```

If you want to test all in the list:

```
if any(isinstance(x, my.object.kind) for x in alist)
```
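The difference between the two checks matters once subclasses are involved: `isinstance` accepts them, an exact `type` comparison does not. A sketch with toy classes (invented for illustration, since the question's `my.object.kind` isn't available here):

```python
class Animal:
    pass

class Dog(Animal):     # Dog is a subclass of Animal
    pass

d = Dog()

print(isinstance(d, Animal))   # True  - subclass instances count
print(type(d) == Animal)       # False - exact type only
print(type(d) == Dog)          # True

# Scanning a mixed list for any Animal, as in the question:
print(any(isinstance(x, Animal) for x in ["text", 42, d]))   # True
```

Prefer `isinstance` unless you specifically need to exclude subclasses.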
``` if any(map(lambda x: isinstance(x, my.object.kind), my_list_of_objects)): print "Found one!" ```
How to check if variable is a specific class in python?
[ "", "python", "" ]
I apologize for asking such a noobish question, but I'm having an issue writing a very basic program in Python to check whether or not a number is prime. Here's my code:

```
def is_prime(x):
    if x < 2:
        print ('Please enter a number >= 2.')
    else:
        if x == 2 or x == 3 or x == 5:
            return True
        if x == 4:
            return False
        for num in range (2, int(x/2)):
            if x % num == 0:
                return False
                break
            else:
                return True
```

But this returns True for all odd numbers, not just the prime ones. I'm not understanding why. If someone could point me in the correct direction, it'd be much appreciated! :)
Your code returns `True` or `False` based only on the very first divisor it tests (`num = 2`), because both `return` statements sit inside the loop body. Since no odd number is divisible by 2, it returns `True` for all odd numbers. You should only `return True` after the loop finishes without hitting a `return False`; see the code.

```
def is_prime(x):
    if x < 2:
        print ('Please enter a number >= 2.')
    else:
        if x == 2 or x == 3 or x == 5:
            return True
        if x == 4:
            return False
        for num in range (2, int(x/2)):
            if x % num == 0:
                return False
        return True

>>> is_prime(11)
True
>>> is_prime(9)
False
```

**P.S** - You don't need a `break` after a `return`. :)
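Trial division only needs to run up to the square root of `x`: any factor above the root pairs with one below it. A tightened sketch of the same idea (a variation on the accepted fix, not the answer's own code):

```python
def is_prime(x):
    if x < 2:
        return False
    if x % 2 == 0:
        return x == 2          # 2 is the only even prime
    divisor = 3
    while divisor * divisor <= x:   # stop at sqrt(x)
        if x % divisor == 0:
            return False
        divisor += 2           # even divisors were already ruled out
    return True

print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

For 100003 this loop runs roughly 160 iterations instead of 50000, which is the practical payoff of the square-root bound.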
Checking if a number is prime (efficiently) is a really hard task to accomplish, since the primes don't really follow any distinct patterns: <http://en.wikipedia.org/wiki/Primality_test> In your code, which is the most intuitive (but inefficient) algorithm, you make the mistake of returning `true` as soon as you see that the number is not divisible by 2. You should run the entire `for` loop before determining that your result is `true`. So what you should do is `return false` if for some `num` in your loop, you find that `x % num == 0`, but otherwise, once you break out of the loop, just return `true`. EDIT: It appears that other people have answered slightly quicker than I have, so yeah. Just do what they said to do.
Having issues with an is_prime(x) program in Python?
[ "", "python", "" ]
I've got (e.g.) the following MySQL row:

```
firstname | lastname | anotherCol1 | anotherCol2 | anotherCol3
--------------------------------------------------------------
John      | Smith    | Football    | Fast cars   | 18 Fools
```

and for instance the following strings:

```
oh, xxx, foo, 41
```

I am looking for a statement that returns `oh` and `foo`, because the row's cells contain these substrings (`oh`: J*oh*n, `foo`: *Foo*tball & *Foo*ls).
A possible solution ``` SELECT q.keyword FROM Table1 t CROSS JOIN ( SELECT 'oh' keyword UNION ALL SELECT 'xxx' UNION ALL SELECT 'foo' UNION ALL SELECT '41' ) q WHERE INSTR(CONCAT_WS(' ', firstname, lastname, anotherCol1, anotherCol2), q.keyword) > 0 ``` Output: ``` | KEYWORD | ----------- | oh | | foo | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/b0fefc/2)** demo
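The cross-join-with-`INSTR` idea can be tried out in SQLite, which also has `instr()` but uses `||` concatenation instead of `CONCAT_WS` (the lowercasing stands in for MySQL's case-insensitive default collation; table names are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE people (firstname TEXT, lastname TEXT, "
          "col1 TEXT, col2 TEXT, col3 TEXT)")
c.execute("INSERT INTO people VALUES "
          "('John', 'Smith', 'Football', 'Fast cars', '18 Fools')")
c.execute("CREATE TABLE keywords (kw TEXT)")
c.executemany("INSERT INTO keywords VALUES (?)",
              [('oh',), ('xxx',), ('foo',), ('41',)])

# Every keyword is paired with every row; instr() > 0 keeps the hits.
matches = c.execute("""
    SELECT k.kw
    FROM people p
    CROSS JOIN keywords k
    WHERE instr(lower(p.firstname || ' ' || p.lastname || ' ' ||
                      p.col1 || ' ' || p.col2 || ' ' || p.col3),
                k.kw) > 0
    ORDER BY k.rowid
""").fetchall()

print(matches)   # [('oh',), ('foo',)]
```

`'oh'` hits inside "John" and `'foo'` inside "Football" (and "Fools"), exactly the two matches the question asks for.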
If your table is `MyISAM`, you can make use of [Full-Text Search](http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html) Create an `index` on your table with the following command: ``` ALTER TABLE <TABLE_NAME> ADD FULLTEXT(firstname, lastname, anotherCol1, anotherCol2, anotherCol3 ); ``` Now, run your queries (Use `boolean` mode as you have to match on multiple keywords): ``` SELECT * FROM <TABLE_NAME> WHERE MATCH(firstname, lastname, anotherCol1, anotherCol2, anotherCol3) AGAINST ('+oh +xxx +foo +41' IN BOOLEAN MODE ); ``` Hope this helps.
MySQL: Given a row and multiple strings. Find the string(s) that are contained in the row
[ "", "mysql", "sql", "database", "select", "" ]
If I explain what I think I am doing, I hope someone can explain where I am going wrong. I have the following dictionary:

```
ls = [{
    'The Wolf Gift (13)': {
        'cover': 'V:\\Books\\Anne Rice\\The Wolf Gift (13)\\cover.jpg',
        'author': 'Anne Rice',
        'year': '1988'
    },
    'Mummy (14)': {
        'cover': 'V:\\Books\\Anne Rice\\Mummy (14)\\cover.jpg',
        'author': 'Anne Rice',
        'year': '1989'
    },
}]
```

First of all, is the above a multidimensional dictionary? I want to make sure I am talking about the right thing. Secondly, how do I loop through it to retrieve the information at the various levels? The dictionary is dynamically populated, so I do not know the keys beforehand. I have tried `for book in ls` and then `book['cover']` etc.. but it doesn't seem to work. I need the book name, and then the additional info for each book (cover etc...).

I am pretty new to Python. I come from PHP, where arrays are my bread and butter, but Python is killing me....

Thanks
It's a list containing a single dictionary. You can do something like: ``` >>> books = ls[0] >>> for book, details in books.iteritems(): print book,'-->', details['cover'] ... Mummy (14) --> V:\Books\Anne Rice\Mummy (14)\cover.jpg The Wolf Gift (13) --> V:\Books\Anne Rice\The Wolf Gift (13)\cover.jpg ```
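For dictionaries nested to arbitrary depth, the same idea generalizes to a recursive walk. A Python 3 sketch (`items()` replaces the Python 2 `iteritems()`; the `walk` helper and the toy data are mine):

```python
books = {
    'The Wolf Gift (13)': {'cover': r'V:\Books\Anne Rice\The Wolf Gift (13)\cover.jpg',
                           'author': 'Anne Rice', 'year': '1988'},
    'Mummy (14)':         {'cover': r'V:\Books\Anne Rice\Mummy (14)\cover.jpg',
                           'author': 'Anne Rice', 'year': '1989'},
}

def walk(d, depth=0):
    # Yields (depth, key, value) for every non-dict leaf, recursing
    # into any value that is itself a dict.
    for key, value in d.items():
        if isinstance(value, dict):
            yield from walk(value, depth + 1)
        else:
            yield depth, key, value

authors = sorted({v for depth, k, v in walk(books) if k == 'author'})
years = sorted(v for depth, k, v in walk(books) if k == 'year')
print(authors)   # ['Anne Rice']
print(years)     # ['1988', '1989']
```

The depth counter is handy for indented printing, and filtering on the key name pulls out one field across all books without hard-coding the nesting.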
Here is an example that could be used if `ls` contained more than one dictionary.

```
for dic in ls:
    for key in dic:
        print 'Book Name: %s' % (key)
        for value in dic[key]:
            print '\t%s: %s' % (value, dic[key][value])
```

This will produce the following output:

```
Book Name: Mummy (14)
    year: 1989
    cover: V:\Books\Anne Rice\Mummy (14)\cover.jpg
    author: Anne Rice
Book Name: The Wolf Gift (13)
    year: 1988
    cover: V:\Books\Anne Rice\The Wolf Gift (13)\cover.jpg
    author: Anne Rice
```

Or you could remove the final for loop and access the keys directly like so:

```
for dic in ls:
    for key in dic:
        print 'Book Name: %s' % (key)
        print 'Publish Year: %s' % dic[key]['year']
```

which will give the following output:

```
Book Name: Mummy (14)
Publish Year: 1989
Book Name: The Wolf Gift (13)
Publish Year: 1988
```
Python - Looping through a multidimensional dictionary
[ "", "python", "python-2.7", "" ]
Say I have two tables:

Table 1:

```
number
1
2
3
```

and Table 2:

```
letter
a
b
c
d
e
f
g
```

How can I produce

```
number  letter
a   1
a   2
a   3
b   1
b   2
b   3
c   1
c   2
c   3
```

all the way to g? I would just like to pair each value of one table with every value of the other.
While a query like:

```
SELECT number, letter 
FROM Table1, Table2
```

will work the same, and might even look simpler - [DON'T USE IT.](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins) It is an old-style syntax that was outdated more than 20 years ago with the introduction of the ANSI-92 standard.

Please use ANSI joins:

```
SELECT t1.number, t2.letter 
FROM Table1 t1 
CROSS JOIN Table2 t2
```

Also, get in the habit of always aliasing and prefixing your columns.
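A cross join pairs every row of one table with every row of the other, so the result always has `len(t1) * len(t2)` rows. That is quick to verify in an in-memory SQLite database (toy tables matching the question's data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE numbers (number INT)')
c.execute('CREATE TABLE letters (letter TEXT)')
c.executemany('INSERT INTO numbers VALUES (?)', [(1,), (2,), (3,)])
c.executemany('INSERT INTO letters VALUES (?)', [(ch,) for ch in 'abcdefg'])

# Every letter paired with every number: 7 x 3 = 21 rows.
pairs = c.execute("""
    SELECT l.letter, n.number
    FROM letters l
    CROSS JOIN numbers n
    ORDER BY l.letter, n.number
""").fetchall()

print(len(pairs))   # 21
print(pairs[:3])    # [('a', 1), ('a', 2), ('a', 3)]
```

The `ORDER BY` is what produces the grouped a-a-a, b-b-b ordering shown in the question; without it the pairing is the same but the row order is unspecified.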
Cross join: ``` select Table1.number, Table2.letter from Table1 CROSS JOIN Table2 ```
SQL Joining Values to A Set Of Values
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Coming from a file I have something like the following string:

```
var1 : data1
var2 : data2
dict1 {
    var3 : data3
    dict2 {
        var4 : data4
    }
    var5 : data5
}
dict3 {
    var6 : data6
    var7 : data7
}
```

and so on. (end of lines are \n, indents are \t each)

And I try to convert it into something like this:

```
Dictionary={"var1":"data1","var2":"data2",
"dict1" : {"var3":"data3",
           "dict2" : { "var4":"data4" },
           "var5":"data5"} ,
dict3:{"var6":"data6","var7":"data7"}
```

(indents are only to keep it somehow human readable)

To solve it, all I can think of is to split it into a list, then walk down the list until I find a "}" in the string, delete it (so I won't run into it later), then walk up until I find a string with "{", remove the whitespace before and the " {" after (right now I use `temp=re.split ('(\S+) \{',out[z])`; for this example the first temp[1] would be 'dict2'), add everything in between, and finally move on to the next "}". But that's not fast or elegant. I am definitely missing something.

The code is currently:

```
def procvar(strinG):
    x=y=z=temp1=temp2=0
    back = False
    out=re.split ('\n',strinG)  #left over from some other tries
    while z < len(out):
        print "z=",z," out[z]= ", out[z]
        if "{" in out[z]:
            if back == True:
                back = False
                xtemp=re.split ('(\S+) \{',out[z])
                out[z]=xtemp[1]
                ytemp=xtemp[1]
                temp2=z+1
                print "Temp: ",temp1," - ",out[temp1]
                out[z]={out[z]:[]}
                while temp2 <= temp1:
                    out[z][xtemp[1]].append(out[temp2]) # not finished here, for the time being I insert the strings as they are
                    del out[temp2]
                    temp1-=1
                print out[z]
        if "}" in out[z]:
            back = True
            del out[z]
            temp1 = z-1
        if back == True:
            z-=1
        else:
            z+=1
    return out
```
``` import re # key : value regexp KV_RE = re.compile(r'^\s*(?P<key>[^\s]+)\s+:\s+(?P<value>.+?)\s*$') # dict start regexp DS_RE = re.compile(r'^\s*(?P<key>[^\s]+)\s+{\s*$') # dict end regexp DE_RE = re.compile(r'^\s*}\s*$') def parse(s): current = {} stack = [] for line in s.strip().splitlines(): match = KV_RE.match(line) if match: gd = match.groupdict() current[gd['key']] = gd['value'] continue match = DS_RE.match(line) if match: stack.append(current) current = current.setdefault(match.groupdict()['key'], {}) continue match = DE_RE.match(line) if match: current = stack.pop() continue # Error occured print('Error: %s' % line) return {} return current ```
your format is close enough to the yaml one (easy\_install pyyaml): <http://pyyaml.org/wiki/PyYAML> ``` x = """var1 : data1 var2 : data2 dict1 { var3 : data3 dict2 { var4 : data4 } var5 : data5 } dict3 { var6 : data6 var7 : data7 }""" x2 = x.replace('{', ':').replace('}','') yaml.load(x2) {'dict1': {'dict2': {'var4': 'data4'}, 'var3': 'data3', 'var5': 'data5'}, 'dict3': {'var6': 'data6', 'var7': 'data7'}, 'var1': 'data1', 'var2': 'data2'} ```
Python: How do I parse a string into a recursive dictionary
[ "", "python", "python-2.7", "" ]
I have a list of dicts that looks like this: ``` [{TYPE, OBJECT_ID, ACTOR, EXTRA_FIELDS}, ...] ``` I'd like to go through and aggregate the duplicates of {TYPE, OBJECT\_ID} and make ACTOR a list so: start with: ``` [ {'type': 'LOVE', 'obj_id': 1242, 'actor': 'bob', {..}}, {'type': 'LOVE', 'obj_id': 1242, 'actor': 'dave', {..}}, {'type': 'FAV', 'obj_id': 1242, 'actor': 'sam', {..}}, {'type': 'LOVE', 'obj_id': 242, 'actor': 'bob', {..}}] ``` end up with: ``` [ {'type': 'LOVE', 'obj_id': 1242, 'actor': ['bob', 'dave'], {..}}, {'type': 'FAV', 'obj_id': 1242, 'actor': ['sam'], {...}}, {'type': 'LOVE', 'obj_id': 242, 'actor': ['bob'], {...}} ] ``` the EXTRA\_FIELDS don't have to be merged, they can just use the data from one of the items aggregated. How can I do this in python?
Here's how I'd do it: ``` def merge_dicts(list_of_dicts): lookup = {} results = [] for d in list_of_dicts: key = (d['type'], d['obj_id']) try: # it's easier to ask forgiveness than permission lookup[key]['actor'].append(d['actor']) except KeyError: val = {'type': d['type'], 'obj_id': d['obj_id'], 'actor': [d['actor']], # note, extra [] around value to make it a list 'extra_fields': d['extra_fields']} lookup[key] = val results.append(val) return results ``` The `lookup` dict maps from the a tuple of the key values to the dictionaries that have been included in the results list. Those output dictionaries will have their `actor` value mutated if other dictionaries with the same key are encountered later on. A rather more natural solution though would be to get rid of the list-of-dictionaries data structure and instead go for a single dictionary that maps from `type, obj_id` keys to `actors, extra_fields` values. Here's what that would look like: ``` def merge_dicts2(list_of_dicts): results = {} for d in list_of_dicts: key = (d['type'], d['obj_id']) try: results[key][0].append(d['actor']) except KeyError: results[key] = ([d['actor']], d['extra_fields']) return results ``` This has most of the data that your list of dicts had, only the order has been lost (and since you were merging items from the old list, some of that order was going to be lost regardless). If you're going to be iterating over the collection later, this way is much easier, since you can unpack tuples (even nested ones) right in the loop: ``` combined_dict = merge_dicts(list_of_dicts) for (type, obj_id), (actors, extra_fields) in combined_dict.items(): # do stuff with type, obj_id, actors, extra_fields ```
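`collections.defaultdict` makes the try/except bookkeeping in the answer above unnecessary. A sketch of the same merge, with the question's pseudo `{..}` extras replaced by a concrete invented field, and the extras taken from the first record seen for each key:

```python
from collections import defaultdict

records = [
    {'type': 'LOVE', 'obj_id': 1242, 'actor': 'bob',  'extra': 'x'},
    {'type': 'LOVE', 'obj_id': 1242, 'actor': 'dave', 'extra': 'x'},
    {'type': 'FAV',  'obj_id': 1242, 'actor': 'sam',  'extra': 'y'},
    {'type': 'LOVE', 'obj_id': 242,  'actor': 'bob',  'extra': 'z'},
]

grouped = defaultdict(list)   # (type, obj_id) -> list of actors
extras = {}                   # (type, obj_id) -> first record's extras
for r in records:
    key = (r['type'], r['obj_id'])
    grouped[key].append(r['actor'])
    extras.setdefault(key, r['extra'])   # keep only the first one seen

print(grouped[('LOVE', 1242)])   # ['bob', 'dave']
print(grouped[('FAV', 1242)])    # ['sam']
```

Rebuilding the original list-of-dicts shape from `grouped` and `extras` is then a single comprehension, if that output format is still needed.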
Assuming that `input` is the list of tuples (not sets), what about ``` TYPE= 0 OBJECT_ID= 1 ACTOR= 2 EXTRA_INFO= 3 keys= set( [ ( e[TYPE] , e[OBJECT_ID] ) for e in input ] ) output= { k: [ ( e[ACTOR] , e[EXTRA_INFO] ) for e in input if ( e[TYPE] , e[OBJECT_ID] ) == k ] for k in keys } ``` Or, if you like one-liners: ``` output= { k: [ ( e[2] , e[3] ) for e in input if ( e[0] , e[1] ) == k ] for k in [ ( e[0] , e[1] ) for e in input ] } ``` Assuming that `input` is a list of dictionaries, this becomes: ``` keys= set( [ ( e['type'] , e['obj_id'] ) for e in input ] ) output= { k: [ { 'actor':e['actor'] , 'extra_info':e['extra_info'] } for e in input if ( e['type'] , e['obj_id'] ) == k ] for k in keys } ``` Or, ``` output= { k: [ { 'actor':e['actor'] , 'extra_info':e['extra_info'] } for e in input if ( e['type'] , e['obj_id'] ) == k ] for k in [ ( e['type'] , e['obj_id'] ) for e in input ] } ``` Of course, you can also write manually what these comprehensions do, but I wouldn't recommend it unless the data volume is too big and you start experiencing performance issues that require low-level optimization.
Grouping data in a list of of dicts
[ "", "python", "" ]
I am trying to write a code that would analyze if a word is a palindrome. BTW a palindrome is a word that reads the same backward and forward; examples are "madam" or "noon". Here is a try:

```
x = raw_input("please enter a word:\n")
L = len(x)

# this part returns the first letter of the word 
def first(word):
    return word[0]

# this part returns the last letter of the word    
def last(word):
    return word[-1]

def middle(word):
    return word[1:-1]

def is_palindrome(word):
    if L <= 2:
        print 'enter a word with at least three letters'
    elif first(word) != last(word):
        print 'This word is not a palindrome'
    else:
        word = middle(word)
        is_palindrome(word)

is_palindrome(x)
```

But when executed, I get

```
IndexError: string index out of range 
...line 7, in first return word[0]
```

The first branch of "is_palindrome" works perfectly, i.e. when the word is not a palindrome, I get no errors. For example, "noopn" is executed with no errors, but the error is in the second branch.

I've been playing with this code many times but can't figure out the "iterative part". I have the answer but I don't want to look at it yet. I need to figure out two things:

1. a way to make the iteration in the function is_palindrome work correctly, and
2. a way to exit the program in the end.

Could you folks direct me to how to answer these questions without providing the solution yet?

Finally, where should I put the print statement: print 'This word is a palindrome'

Thank you
Personally, I would prefer separating the check and the output. So `is_palindrome()` should just return the answer and not be responsible for telling the user. That makes it more reusable.

```
def is_palindrome(word):
    # handle the base case
    if len(word) <= 1:
        return True
    elif first(word) != last(word):
        return False
    else:
        word = middle(word)
        return is_palindrome(word)
```

This enables you to do

```
x = raw_input("please enter a word:\n")
L = len(x)
if L <= 2:
    print 'enter a word with at least three letters'
elif is_palindrome(x):
    print 'This word is a palindrome'
else:
    print 'This word is not a palindrome'
```

This puts the validity check at the front of the execution, while in the recursion, you have only the checks which are valid all over the recursion. (I doubt if your check is necessary at all - are `y` and `oo` no palindromes? We could argue about the empty string, however...)

The next improvement steps could be to omit the functions `first()`, `last()` and `middle()` - they are trivial and only used once, so you could put their code where they are used.
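Once the check returns a boolean instead of printing, it is also easy to compare it against the non-recursive slice test `word == word[::-1]`. A self-contained sketch with the helper functions inlined, as the answer above suggests:

```python
def is_palindrome(word):
    if len(word) <= 1:          # base case: empty or single letter
        return True
    if word[0] != word[-1]:     # first and last letter disagree
        return False
    return is_palindrome(word[1:-1])   # recurse on the middle

# Cross-check the recursion against the slice one-liner.
for w in ['madam', 'noon', 'noopn', 'abcba', 'ab']:
    assert is_palindrome(w) == (w == w[::-1])

print(is_palindrome('madam'))   # True
print(is_palindrome('noopn'))   # False
```

Because the base case accepts length 0 and 1, the recursion bottoms out cleanly and never indexes into an empty string, which was the source of the original `IndexError`.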
To accomplish your goal, why not just use:

```
string[::-1] == string
```

And the reason for your error is that when there is only 1 letter left, `middle` will return an empty string, and then `''[0]` will cause the error.
Recursive Program: What am I doing wrong?
[ "", "python", "python-2.7", "recursion", "" ]
Hello everyone, I am new here, so please be kind. I have been using Scapy lately in order to send and receive IPv6 packets to selected IPv6-enabled servers. The gist of the code is here:

```
text = line[:-1]
#destination=getIPv6Addr(line[:-1])
destination="2607:f1c0:1000:60e0:7992:97f7:61b2:2814"
source="2001:630:d0:f105:5cfe:e988:421a:a7b7"
syn = IPv6(dst=destination,src=source) / TCP(sport=555,dport=80,flags="S")#flag S is syn packet
syn.show()
syn_ack = sr1(syn,timeout=11)
```

When I execute the code, however, this is what I get:

```
Begin emission:
Finished to send 1 packets.
.................WARNING: No route found for IPv6 destination :: (no default route?)
WARNING: No route found for IPv6 destination 2607:f1c0:1000:60e0:7992:97f7:61b2:2814 (no default route?)
............................................................................................................
Received 1322 packets, got 0 answers, remaining 1 packets
```

From what I understand, the packet has been sent; nevertheless, a response is never received (0 answers). This made me think of using Wireshark to sniff the packet. Upon using it with the filter:

```
ip6 net 2607:f1c0:1000:60e0:7992:97f7:61b2:2814
```

I realised the packet was never sent in the first place! Does anyone know why this is happening, or what is happening here?

Regards Martinos

EDIT: On closer inspection from the comments, I realized that the route list SHOULD not be empty. There are no routes to send the packet. I am not sure how to add them! Please help. Here is what I did: I tried to get all the interfaces from my device.
Here is what I got back:

```
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth0', 'mtu': 1500L}
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth1', 'mtu': 1500L}
{'flags': 33, 'link_addr': 00:50:56:a0:1e:df, 'type': 6, 'name': 'eth2', 'mtu': 1500L}
{'flags': 33, 'link_addr': 00:50:56:a0:1e:df, 'type': 6, 'name': 'eth3', 'mtu': 1500L}
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth4', 'mtu': 1500L}
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth5', 'mtu': 1500L}
{'flags': 32, 'link_addr': 00:50:56:a0:00:6c, 'type': 6, 'name': 'eth6', 'mtu': 0L}
{'flags': 32, 'link_addr': 00:50:56:a0:01:0e, 'type': 6, 'name': 'eth7', 'mtu': 0L}
{'flags': 32, 'link_addr': 00:50:56:a0:6e:61, 'type': 6, 'name': 'eth8', 'mtu': 0L}
{'addr': 152.78.61.39/24, 'mtu': 1500L, 'flags': 33, 'link_addr': 00:50:56:a0:1e:df, 'type': 6, 'name': 'eth9'}
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth10', 'mtu': 1500L}
{'flags': 33, 'link_addr': b0:90:20:52:41:53, 'type': 6, 'name': 'eth11', 'mtu': 1500L}
{'flags': 33, 'type': 23, 'name': 'ppp0', 'mtu': 1494L}
{'flags': 33, 'link_addr': 20:41:53:59:4e:ff, 'type': 23, 'name': 'ppp1', 'mtu': 3256L}
{'addr': 127.0.0.1/8, 'flags': 3, 'type': 24, 'name': 'lo0', 'mtu': 1500L}
{'flags': 33, 'type': 131, 'name': 'tun0', 'mtu': 4091L}
{'flags': 32, 'type': 131, 'name': 'tun1', 'mtu': 1480L}
{'flags': 33, 'type': 131, 'name': 'tun2', 'mtu': 1460L}
{'flags': 33, 'type': 131, 'name': 'tun3', 'mtu': 1464L}
{'flags': 32, 'type': 131, 'name': 'tun4', 'mtu': 1280L}
{'flags': 32, 'type': 131, 'name': 'tun5', 'mtu': 1280L}
{'flags': 33, 'type': 131, 'name': 'tun6', 'mtu': 1472L}
{'flags': 32, 'type': 131, 'name': 'tun7', 'mtu': 1280L}
```

As you can see, nowhere in there is there an IPv6 address. How am I supposed to send IPv6 packets if no IPv6 address is listed?
In addition, here is the ipconfig output from cmd: ``` Ethernet adapter Local Area Connection 4: Connection-specific DNS Suffix . : ecs.soton.ac.uk IPv6 Address. . . . . . . . . . . : 2001:630:d0:f105:5cfe:e988:421a:a7b7 Temporary IPv6 Address. . . . . . : 2001:630:d0:f105:e8bc:7c26:9a40:31b8 Link-local IPv6 Address . . . . . : fe80::5cfe:e988:421a:a7b7%17 IPv4 Address. . . . . . . . . . . : 152.78.61.39 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : fe80::210:18ff:fee5:5bfa%17 152.78.61.254 Tunnel adapter 6TO4 Adapter: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Teredo Tunneling Pseudo-Interface: Connection-specific DNS Suffix . : IPv6 Address. . . . . . . . . . . : 2001:0:5ef5:79fd:142e:398d:67b1:c2d8 Link-local IPv6 Address . . . . . : fe80::142e:398d:67b1:c2d8%13 Default Gateway . . . . . . . . . : Tunnel adapter isatap.ecs.soton.ac.uk: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : ecs.soton.ac.uk ``` This is run on a VM, but I don't think that matters. How can I configure Scapy so that it knows how to send IPv6 packets, and why do I see no IPv6 addresses in the interface list above?
The ethernet adapter indeed has both an IPv6 address and a default gateway. You could * Try to `ping -6 ipv6.google.com` to see if there is any IPv6 connectivity. I think there must be, since I can ping your IPv6 address just fine, unless it has been reassigned to somebody else now. * Inspect the routing tables to see there are no superfluous routes In particular, the Teredo interface might be causing trouble. Turn it off. (netsh interface teredo set state disabled) Teredo: <https://www.rfc-editor.org/rfc/rfc4380> <http://en.wikipedia.org/wiki/Teredo_tunneling> I do not know how to configure IPv6 routing and interfaces using scapy. I made this an answer, since there is not enough room in the comments. I do still suggest you ask on superuser. I can not sufficiently help you, since I am not familiar with either "JANET" or Windows networking configuration.
I observed that in "layers/inet6.py" there is function call to getmacbyip6, in which they are trying to get the interface info based on the scapy routing table. ``` iff,a,nh = conf.route6.route(ip6, dev=conf.iface6) ``` As conf.iface6 is set to "lo" by default it always return loopback interface. ``` iff,a,nh = conf.route6.route(ip6) ``` I changed the above statement as shown below then I was able to see the packets going out from the corresponding interface. # Example:- ``` [root@purple-perf-tester scapy]# ifconfig eth1 eth1: flags=323<UP,BROADCAST,RUNNING,PROMISC> mtu 1500 inet 11.0.0.5 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 2001::6 prefixlen 64 scopeid 0x0<global> inet6 fe80::f816:3eff:fe2b:cc67 prefixlen 64 scopeid 0x20<link> ether fa:16:3e:2b:cc:67 txqueuelen 1000 (Ethernet) RX packets 6107709 bytes 1239209940 (1.1 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 12093723 bytes 4161092991 (3.8 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@purple-perf-tester scapy]# ip -6 route show dev eth1 2001::7 via fe80::f816:3eff:fed1:43de metric 1024 2001::/64 proto kernel metric 256 fe80::/64 proto kernel metric 256 [root@purple-perf-tester scapy]# scapy INFO: Can't import python gnuplot wrapper . Won't be able to plot. INFO: Can't import PyX. Won't be able to use psdump() or pdfdump(). WARNING: No route found for IPv6 destination :: (no default route?) 
Welcome to Scapy (2.3.1) >>> >>> conf.route6 Destination Next Hop iface src candidates ::1/128 :: lo ::1 2001::7/128 fe80::f816:3eff:fed1:43de eth1 2001::6 2001::/64 :: eth1 2001::6 2002::/64 :: eth2 2002::6 fe80::/64 :: eth0 fe80::f816:3eff:fe7c:d9fe fe80::/64 :: eth1 fe80::f816:3eff:fe2b:cc67 fe80::/64 :: eth2 fe80::f816:3eff:fe1a:a62e ::1/128 :: lo ::1 2001::6/128 :: lo ::1 2002::6/128 :: lo ::1 fe80::f816:3eff:fe1a:a62e/128 :: lo ::1 fe80::f816:3eff:fe2b:cc67/128 :: lo ::1 fe80::f816:3eff:fe7c:d9fe/128 :: lo ::1 >>> iff,a,nh = conf.route6.route("2001::7", dev=conf.iface6) WARNING: No route found for IPv6 destination 2001::7 (no default route?) >>> print iff lo >>> iff,a,nh = conf.route6.route("2001::7") >>> print iff eth1 ```
Scapy fails to send ipv6 packets
[ "", "python", "http", "ipv6", "wireshark", "scapy", "" ]
I'm writing a Python program for running user-uploaded arbitrary (and thus, at the worst case, unsafe, erroneous and crashing) code on a Linux server. The security questions aside, my objective is to determine, if the code (that might be in any language, compiled or interpreted) writes the correct things to `stdout`, `stderr` and other files on given input fed into the program's `stdin`. After this, I need to display the results to the user. # The current solution Currently, my solution is to spawn the child process using `subprocess.Popen(...)` with file handles for the `stdout`, `stderr` and `stdin`. The file behind the `stdin` handle contains the inputs that the program reads during operation, and after the program has terminated, the `stdout` and `stderr` files are read and checked for correctness. ## The problem This approach works otherwise perfectly, but when I display the results, I can't combine the given inputs and outputs so that the inputs would appear in the same places as they would when running the program from a terminal. I.e. for a program like ``` print "Hello." name = raw_input("Type your name: ") print "Nice to meet you, %s!" % (name) ``` the contents of the file containing the program's `stdout` would, after running, be: ``` Hello. Type your name: Nice to meet you, Anonymous! ``` given that the contents the file containing the `stdin` were `Anonymous<LF>`. So, in short, for the given example code (and, equivalently, for *any* other code) I want to achieve a result like: ``` Hello. Type your name: Anonymous Nice to meet you, Anonymous! ``` Thus, the problem is to detect when the program is waiting for input. 
# Tried methods I've tried the following methods for solving the problem: ## [Popen.communicate(...)](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate) This allows the parent process to separately send data along a [pipe](http://docs.python.org/2/library/subprocess.html#subprocess.PIPE), but can only be called once, and is therefore not suitable for programs with multiple outputs and inputs - just as can be inferred from the documentation. ## Directly reading from [Popen.stdout](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.stdout) and [Popen.stderr](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.stderr) and writing to [Popen.stdin](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.stdin) The documentation warns against this, and the `Popen.stdout`s [`.read()`](http://docs.python.org/2/library/stdtypes.html#file.read) and [`.readline()`](http://docs.python.org/2/library/stdtypes.html#file.readline) calls seem to block infinitely when the programs starts to wait for input. ## Using [`select.select(...)`](http://docs.python.org/2/library/select.html#select.select) to see if the file handles are ready for I/O This doesn't seem to improve anything. Apparently the pipes are always ready for reading or writing, so `select.select(...)` doesn't help much here. ## Using a different thread for non-blocking reading As suggested in [this answer](https://stackoverflow.com/a/4896288/2096560), I have tried creating a separate [Thread()](http://docs.python.org/2/library/threading.html#threading.Thread) that stores results from reading from the `stdout` into a [Queue()](http://docs.python.org/2/library/queue.html#queue-objects). The output lines before a line demanding user input are displayed nicely, but the line on which the program starts to wait for user input (`"Type your name: "` in the example above) never gets read. 
## Using a [PTY](http://en.wikipedia.org/wiki/Pseudo_terminal) slave as the child process' file handles As directed [here](https://stackoverflow.com/a/1547764/2096560), I've tried [`pty.openpty()`](http://docs.python.org/2/library/pty.html#pty.openpty) to create a pseudo terminal with master and slave file descriptors. After that, I've given the slave file descriptor as an argument for the `subprocess.Popen(...)` call's `stdout`, `stderr` and `stdin` parameters. Reading through the master file descriptor opened with [`os.fdopen(...)`](http://docs.python.org/2/library/os.html#os.fdopen) yields the same result as using a different thread: the line demanding input doesn't get read. **Edit:** Using @Antti Haapala's example of `pty.fork()` for child process creation instead of `subprocess.Popen(...)` seems to allow me also read the output created by `raw_input(...)`. ## Using [pexpect](http://www.noah.org/wiki/pexpect) I've also tried the `read()`, `read_nonblocking()` and `readline()` methods (documented [here](http://pexpect.sourceforge.net/pexpect.html)) of a process spawned with pexpect, but the best result, which I got with `read_nonblocking()`, ~~is the same as before: the line with outputs before wanting the user to enter something doesn't get read.~~ is the same as with a PTY created with `pty.fork()`: the line demanding input *does* get read. **Edit:** By using `sys.stdout.write(...)` and `sys.stdout.flush()` instead of `print`ing in my *master* program, which creates the child, seemed to fix the prompt line not getting displayed - it actually got read in both cases, though. ## Others I've also tried [`select.poll(...)`](http://docs.python.org/2/library/select.html#select.poll), but it seemed that the pipe or PTY master file descriptors are always ready for writing. # Notes ## Other solutions * What also crossed my mind is to try feeding the input when some time has passed without new output having been generated. 
This, however, is risky, because there's no way to know if the program is just in the middle of doing a heavy calculation. * As @Antti Haapala mentioned in his answer, the `read()` system call wrapper from glibc could be replaced to communicate the inputs to the master program. However, this doesn't work with statically linked or assembly programs. (Although, now that I think of it, any such calls could be intercepted from the source code and replaced with the patched version of `read()` - could be painstaking to implement still.) * Modifying the Linux kernel code to communicate the `read()` syscalls to the program is probably insane... ## PTYs I think the PTY is the way to go, since it fakes a terminal and interactive programs are run on terminals everywhere. The question is, how?
Have you noticed that raw\_input writes the prompt string into stderr if stdout is terminal (isatty); if stdout is not a terminal, then the prompt too is written to stdout, but stdout will be in fully buffered mode. **With stdout on a tty** ``` write(1, "Hello.\n", 7) = 7 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 write(2, "Type your name: ", 16) = 16 fstat(0, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 3), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb114059000 read(0, "abc\n", 1024) = 4 write(1, "Nice to meet you, abc!\n", 23) = 23 ``` **With stdout not on a tty** ``` ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff8d9d3410) = -1 ENOTTY (Inappropriate ioctl for device) # oops, python noticed that stdout is NOTTY. fstat(0, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 3), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f29895f0000 read(0, "abc\n", 1024) = 4 rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f29891c4bd0}, {0x451f62, [], SA_RESTORER, 0x7f29891c4bd0}, 8) = 0 write(1, "Hello.\nType your name: Nice to m"..., 46) = 46 # squeeze all output at the same time into stdout... pfft. ``` Thus all writes are squeezed into stdout all at the same time; and what is worse, after the input is read. The real solution is thus to use the pty. However you are doing it wrong. For the pty to work, you must use the pty.fork() command, not subprocess. (This will be very tricky). 
I have some working code that goes like this: ``` import os import tty import pty program = "python" # command name in argv[0] argv = [ "python", "foo.py" ] pid, master_fd = pty.fork() # we are in the child process if pid == pty.CHILD: # execute the program os.execlp(program, *argv) # else we are still in the parent, and pty.fork returned the pid of # the child. Now you can read, write in master_fd, or use select: # rfds, wfds, xfds = select.select([master_fd], [], [], timeout) ``` Notice that depending on the terminal mode set by the child program there might be different kinds of linefeeds coming out, etc. Now about the "waiting for input" problem, that cannot be really helped as one can always write to a pseudoterminal; the characters will be put to wait in the buffer. Likewise, a pipe always allows one to write up to 4K or 32K or some other implementation defined amount, before blocking. One ugly way is to strace the program and notice whenever it enters the read system call, with fd = 0; the other would be to make a C module with a replacement "read()" system call and link it in before glibc for the dynamic linker (fails if the executable is statically linked or uses system calls directly with assembler...), and then would signal python whenever the read(0, ...) system call is executed. All in all, probably not worth the trouble exactly.
Pexpect worked for me. In your case, here's what you could do: ``` import pexpect # Start the other program child = pexpect.spawn("python your_program.py", encoding='utf-8', timeout=None) while True: #pexpect will "wait" until this line is displayed in the child program, #so if you know what the last line before the input request will be, #then let it "expect" that child.expect("Type your name: ") #This prints what came "before" the thing you expected print(child.before) #Now you can collect the input from your user user_input = input("Type your name: ") #And send it down child.sendline(user_input) #The loop doesn't stop here; we can wait for the next time the child program asks "Type your name: " and do this as many times as we want. ```
Detecting when a child process is waiting for input
[ "", "python", "linux", "subprocess", "pty", "unbuffered-output", "" ]
I have this query, which returns days of the week. The days of the week are 0-6(Mon-Fri). ``` select ID, DayOfWeek from MyTable ID DayOfWeek 1 0 2 0 3 4 ``` Is there a way to substitute an alias for each int DayOfWeek? The enum isn't stored in the database so I can't join. I want to be able to write a query and have it look like this: ``` ID DayOfWeek 1 Monday 2 Monday 3 Friday ```
You should consider storing the lookup in a new table... but just so you're aware of your options, you can also use the [`DATENAME(WEEKDAY)`](http://technet.microsoft.com/en-us/library/ms174395.aspx) function: ``` SELECT DATENAME(WEEKDAY, 0) ``` Returns: ``` Monday ``` [SQL Fiddle](http://sqlfiddle.com/#!3/6ca74/2)
If you can't store a table to join on, you can use a `CASE`: ``` SELECT ID, CASE DayOfWeek WHEN 0 THEN 'Monday' WHEN 1 THEN 'Tuesday' ... END AS DayOfWeek FROM MyTable ```
How to replace an int with text in a query
[ "", "sql", "sql-server", "" ]
There is this code: ``` class MyException(Exception): pass def gen(): for i in range(3): try: yield i except MyException: print("MyException!") a = gen() next(a) a.throw(MyException) ``` Running this code: ``` $ python3.3 main.py MyException! $ python3.3 main.py MyException! Exception TypeError: TypeError('catching classes that do not inherit from BaseException is not allowed',) in <generator object gen at 0xb712efa4> ignored $ python3.3 main.py MyException! $ python3.3 main.py MyException! $ python3.3 main.py MyException! Exception TypeError: TypeError('catching classes that do not inherit from BaseException is not allowed',) in <generator object gen at 0xb714afa4> ignored ``` The thing which I don't understand is why sometimes there is printed this `Exception TypeError` warning. Is there something wrong with custom exception?
You are seeing a `__del__` hook misbehaving somewhere. The `TypeError` is being thrown while *shutting down*, as the Python interpreter is exiting everything is deleted and any exceptions thrown in a `__del__` deconstructor hook are being ignored (but *are* printed). On exit, Python clears everything in the namespace by rebinding everything to `None`, but the order in which this happens is not set. The still running generator is closed (`a.close()` is called) when deleted, which triggers a `GeneratorExit` exception in the generator, which Python tests against your `except MyException:` line. If, however, `MyException` has *already* been cleared up and Python sees `except None:` the `TypeError` is thrown and you see that message printed. You can trigger the error without exiting Python by adding: ``` MyException = None del a ``` If you use `list(a)` and consume the rest of the generator, or explicitly close the generator with `a.close()` before Python exits and deletes `MyException`, the error message goes away. Another work-around would be to handle `GeneratorExit` first: ``` def gen(): for i in range(3): try: yield i except GeneratorExit: return except MyException: print("MyException!") ``` and Python will not evaluate the next `except` handler. The error cannot be reproduced with Python 3.2 or earlier, so it looks like [hash randomization](http://docs.python.org/3/whatsnew/3.3.html#builtin-functions-and-types) (introduced in Python 3.3) randomizes the order objects are cleared; this certainly explains why you see the error only on *some* of your runs, but not on earlier Python runs where the hash order is fixed. 
Note that the interaction of `.__del__()` hooks and other global objects in Python is documented with a big red warning in the [`.__del__()` documentation](http://docs.python.org/3/reference/datamodel.html#object.__del__): > **Warning**: Due to the precarious circumstances under which `__del__()` methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to `sys.stderr` instead. Also, when `__del__()` is invoked in response to a module being deleted (e.g., when execution of the program is done), other globals referenced by the `__del__()` method may already have been deleted or in the process of being torn down (e.g. the import machinery shutting down). For this reason, `__del__()` methods should do the absolute minimum needed to maintain external invariants. Starting with version 1.5, Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the `__del__()` method is called.
I was having this same error in Python 3.3 on Windows, with the difference that I was defining the exception in its own file. These were my code files: ``` $ cat FooError.py class FooError(Exception): pass $ cat application.py import FooError try: raise FooError('Foo not bar!') except FooError as e: print(e) ``` This was the exception that I was getting: > TypeError: catching classes that do not inherit from BaseException is not allowed. **Changing `import FooError` to `from FooError import *` resolved the issue. Here is the final code, for clarity's sake:** ``` $ cat FooError.py class FooError(Exception): pass $ cat application.py from FooError import * try: raise FooError('Foo not bar!') except FooError as e: print(e) ```
Exception TypeError warning sometimes shown, sometimes not when using throw method of generator
[ "", "python", "python-3.x", "exception", "generator", "" ]
I have an xml file I need to open and make some changes to, one of those changes is to remove the namespace and prefix and then save to another file. Here is the xml: ``` <?xml version='1.0' encoding='UTF-8'?> <package xmlns="http://apple.com/itunes/importer"> <provider>some data</provider> <language>en-GB</language> </package> ``` I can make the other changes I need, but can't find out how to remove the namespace and prefix. This is the reusklt xml I need: ``` <?xml version='1.0' encoding='UTF-8'?> <package> <provider>some data</provider> <language>en-GB</language> </package> ``` And here is my script which will open and parse the xml and save it: ``` metadata = '/Users/user1/Desktop/Python/metadata.xml' from lxml import etree parser = etree.XMLParser(remove_blank_text=True) open(metadata) tree = etree.parse(metadata, parser) root = tree.getroot() tree.write('/Users/user1/Desktop/Python/done.xml', pretty_print = True, xml_declaration = True, encoding = 'UTF-8') ``` So how would I add code in my script which will remove the namespace and prefix?
Replace tag as Uku Loskit suggests. In addition to that, use [lxml.objectify.deannotate](http://lxml.de/objectify.html#xml-schema-datatype-annotation). ``` from lxml import etree, objectify metadata = '/Users/user1/Desktop/Python/metadata.xml' parser = etree.XMLParser(remove_blank_text=True) tree = etree.parse(metadata, parser) root = tree.getroot() #### for elem in root.getiterator(): if not hasattr(elem.tag, 'find'): continue # guard for Comment tags i = elem.tag.find('}') if i >= 0: elem.tag = elem.tag[i+1:] objectify.deannotate(root, cleanup_namespaces=True) #### tree.write('/Users/user1/Desktop/Python/done.xml', pretty_print=True, xml_declaration=True, encoding='UTF-8') ``` Note: Some tags like `Comment` return a function when accessing `tag` attribute. added a guard for that.
We can get the desired output document in two steps: 1. Remove namespace URIs from element names 2. Remove unused namespace declarations from the XML tree **Example code** ``` from lxml import etree input_xml = """ <package xmlns="http://apple.com/itunes/importer"> <provider>some data</provider> <language>en-GB</language> <!-- some comment --> <?xml-some-processing-instruction ?> </package> """ root = etree.fromstring(input_xml) # Iterate through all XML elements for elem in root.getiterator(): # Skip comments and processing instructions, # because they do not have names if not ( isinstance(elem, etree._Comment) or isinstance(elem, etree._ProcessingInstruction) ): # Remove a namespace URI in the element's name elem.tag = etree.QName(elem).localname # Remove unused namespace declarations etree.cleanup_namespaces(root) print(etree.tostring(root).decode()) ``` **Output XML** ``` <package> <provider>some data</provider> <language>en-GB</language> <!-- some comment --> <?xml-some-processing-instruction ?> </package> ``` **Details explaining the code** As described in the [documentation](https://lxml.de/tutorial.html#namespaces), we use [`lxml.etree.QName.localname`](https://lxml.de/4.1/api/lxml.etree.QName-class.html#section-Properties) to get local names of elements, that is names without namespace URIs. Then we replace the fully qualified names of the elements by their local names. Some XML elements, such as comments and processing instructions do not have names. So, we have to skip these elements while replacing element names, otherwise a `ValueError` will be raised. Finally, we use [`lxml.etree.cleanup_namespaces()`](https://lxml.de/4.1/api/lxml.etree-module.html#cleanup_namespaces) to remove unused namespace declarations from the XML tree. **Note on namespaced XML attributes** If the XML input contains attributes with explicitly specified namespace prefixes, the example code will not remove those prefixes. 
To accomplish the deletion of namespace prefixes in attributes, add the following for-loop after the line `elem.tag = etree.QName(elem).localname`, as suggested [here](https://stackoverflow.com/a/71886208/6942134) ``` for attr_name in elem.attrib: local_attr_name = etree.QName(attr_name).localname if attr_name != local_attr_name: attr_value = elem.attrib[attr_name] del elem.attrib[attr_name] elem.attrib[local_attr_name] = attr_value ``` To learn more about namespaced XML attributes see [this answer.](https://stackoverflow.com/a/46865/6942134)
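As a side note, a similar effect can be sketched with only the standard library's `xml.etree.ElementTree` instead of lxml. This is my own illustration, not part of the answer above: it relies on namespaced tags having the `{uri}local` form, and on `ElementTree` regenerating namespace declarations from tag names at serialization time, so once every tag is renamed there is nothing left to declare (there is no `cleanup_namespaces()` helper to call):

```python
import xml.etree.ElementTree as ET

xml_input = """<?xml version='1.0' encoding='UTF-8'?>
<package xmlns="http://apple.com/itunes/importer">
    <provider>some data</provider>
    <language>en-GB</language>
</package>"""

root = ET.fromstring(xml_input)

for elem in root.iter():
    # Namespaced element tags look like '{http://...}provider';
    # strip everything up to and including the closing brace.
    if isinstance(elem.tag, str) and elem.tag.startswith('{'):
        elem.tag = elem.tag.split('}', 1)[1]

print(ET.tostring(root, encoding='unicode'))
```

After the loop, serializing the tree yields `<package>`, `<provider>` and `<language>` with no `xmlns` declaration.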
Remove namespace and prefix from xml in python using lxml
[ "", "python", "xml", "namespaces", "lxml", "" ]
So currently, my code looks like this (thanks to help in another post I made) ``` phrase = raw_input("Enter text to Cipher: ") shift = int(raw_input("Please enter shift: ")) result = ("Encrypted text is: ") for character in phrase: #Loops through phrase and shows ascii numbers, example: hello is: 104,101,108,108,111 x = ord(character) #adds a shift to each character so if shift is 1 'hello' becomes: ifmmp 105,102,109,109,112 result += chr(x + shift) print "\n",result,"\n" ``` The problem is, if I type in more than one word for example: hello world , with a shift of 1 the output is: `ifmmp!xpsme` The exclamation mark shows up for a space (varies depending on shift). I was thinking of doing an if statement to detect spaces: ``` phrase = raw_input("Enter text to Cipher: ") shift = int(raw_input("Please enter shift: ")) result = ("Encrypted text is: ") for character in phrase: #Loops through phrase and shows ascii numbers, example: hello is: 104,101,108,108,111 x = ord(character) if x == ord(' '): print "\nfound a space space" #adds 1 to each character so 'hello' becomes: ifmmp 105,102,109,109,112 result += chr(x + shift) print "\n",result,"\n" ``` But I don't know how to add the space into the result variable. Also, I saw in this thread: [Caesar's Cipher using python, could use a little help](https://stackoverflow.com/questions/1538935/caesars-cipher-using-python-could-use-a-little-help) That JeffB used a while loop for dealing with the ASCII table 32 being space and 127 being DEL. Why did he use 96? I don't understand. ``` while x < 32: x += 96 while x > 127: x -= 96 ``` Sorry this question is rather long. Many thanks in advance! Your help is invaluable to me.
You can just skip the space: ``` for character in phrase: x = ord(character) if character == ' ': result += ' ' else: result += chr(x + shift) ``` Your shift won't restrict the output to just ASCII. If you want to ensure that, you should use the modulo operator: ``` chr(32 + (x + shift) % (127 - 32)) ```
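Putting both ideas together — passing spaces through unchanged and wrapping instead of running past the end of the alphabet — a minimal sketch might look like this. The function name `caesar` and the choice to wrap only within the lowercase letters are my own assumptions, not part of the question:

```python
def caesar(phrase, shift):
    # Shift lowercase letters, wrapping 'z' back around to 'a';
    # spaces are copied through unchanged.
    result = ""
    for character in phrase:
        if character == ' ':
            result += ' '
        else:
            x = ord(character) - ord('a')
            result += chr(ord('a') + (x + shift) % 26)
    return result

print(caesar("hello world", 1))  # ifmmp xpsme
print(caesar("xyz", 3))          # abc
```

Because of the modulo, negative shifts decrypt: `caesar("ifmmp xpsme", -1)` gives back `"hello world"`.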
You can add a space like this: ``` if character.isspace(): result += ' ' ``` or split the string at whitespace: **Example:** ``` >>> "hello world".split() ['hello', 'world'] ``` **code:** ``` new_strs = [] result = ("Encrypted text is:") for word in phrase.split(): new_word = [] for character in word: x = ord(character) + shift new_word.append(chr(x if 97 <= x <= 122 else 96 + x % 122)) new_strs.append("".join(new_word)) print result, " ".join(new_strs) ``` Why `x if 97 <= x <= 122 else 96 + x % 122`? For `'z'` with shift = 1, `x + shift` will be 123, i.e. `'{'`. So, to get `'a'` instead of `'{'`, take the new ordinal value modulo 122 (`ord('z')`) and add 96 (`ord('a') - 1`) to it. **Output:** ``` $ python so.py Enter text to Cipher: hello xyz Please enter shift: 1 Encrypted text is: ifmmp yza ```
Caesar Cipher Python - Additional Features
[ "", "python", "ascii", "shift", "" ]
I have two objects: ``` >>> L[0].f.items() dict_items([('a', 1)]) >>> a3.f.items() dict_items([('a', 1), ('c', 3)]) ``` I want to test whether L[0].f.items() is a subset of a3.f.items(), so I did the following: ``` >>> L[0].f.items() in a3.f.items() False ``` But I expect L[0].f.items() to be a subset of a3.f.items(). Why does this return False? How can I test whether one dictionary's items are a subset of another dictionary's items?
You can make sets from the lists and see if one set is a subset of another: ``` >>> list1 = [('a', 1), ('c', 3)] >>> list2 = [('a', 1)] >>> set(list2).issubset(list1) True ``` Or, in your case: ``` set(L[0].f.items()).issubset(set(a3.f.items())) ```
`in` tests whether the left operand is an element of the right. Since dict item views are set-like, you want `<=`, which tests whether one is a subset of another: ``` >>> L[0].f.items() <= a3.f.items() True ``` If you want to do this with lists or other non-set-like iterables, you can build a set out of one and use `issuperset`: ``` >>> more = [1, 2, 3] >>> less = [1, 2] >>> set(more).issuperset(less) True ```
How to check if a dictionary is in another dictionary in python
[ "", "python", "python-3.x", "dictionary", "" ]
As an intellectual challenge, I'm trying to do what is described in the title. I am stuck, and wondering if anyone had any helping ideas..? ``` def main(): my_list = [1,2,3,4,5,6,3,4,63,3] sort(my_list) def sort(my_list): n=0 m=1 for number in my_list: if my_list[n] < my_list[m]: my_list[n] = my_list[n+1] n+=1 m+=1 print my_list if __name__ == '__main__': main() ```
There are many sorting algorithms, such as bubble sort, quicksort and so on; here is a reference: <http://www.sorting-algorithms.com/>. You can implement any of them according to its description. The following is bubble sort: ``` def main(): my_list = [1,2,3,4,5,6,3,4,63,3] sort(my_list) def sort(my_list): size = len(my_list) for i in range(size): for j in range(size-i-1): if my_list[j] > my_list[j+1]: # swap the two out-of-order neighbours my_list[j], my_list[j+1] = my_list[j+1], my_list[j] print my_list if __name__ == '__main__': main() ``` Hope this helps!
*Most* python implementations use the [`mergesort` algorithm](http://en.wikipedia.org/wiki/Merge_sort)1. A quick google search will turn up an implementation, but the algorithm is simple enough that it's worth trying to code it yourself. 1 (Actually, they use Tim sort which is a hybrid between mergesort and insertion sort ...)
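For reference, a from-scratch sketch of merge sort might look like the following. This is my own illustration of the textbook algorithm, not CPython's actual Timsort code:

```python
def merge_sort(items):
    # Recursively split the list in half, sort each half,
    # then merge the two sorted halves back together.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the halves is exhausted; append whatever remains of the other.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([1, 2, 3, 4, 5, 6, 3, 4, 63, 3]))  # [1, 2, 3, 3, 3, 4, 4, 5, 6, 63]
```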
Sorting A List From Least to Greatest Without list.sort( )
[ "", "python", "list", "sorting", "" ]
I am writing the output of a function to a file; however, if a number that is written has an e+01 at the end, one of the numbers on that row is written onto the next line. For example: ``` 0.12605695 1.64761066 1.55001798 0.01785089 15.80005801 0.12231149 1.65899943 1.56369755 0.01511221 10.45653689 0.1238494 1.67704048 1.59633667 0.00687468 0.96146016 1.10437450e-01 1.62918081e+00 1.56024230e+00 1.82792879e-03 -2.05519064e+00 1.04326863e-01 1.63545256e+00 1.58687599e+00 -4.50753949e-03 -6.40408013e+00 ``` There should be five numbers per line. Here is the part of my code where I write the numbers to a file: ``` kop=open('new.txt', 'w') results=PCA(kk) res=results.mu print results.mu kop.write(str(res)+'\n') kop.close() ``` How do I get them to stay on one line instead of wrapping onto the next? I need them all in the correct columns, because when I read the file with numpy.genfromtxt in a different piece of code, it requires them to be in the correct columns.
This was the answer — I just didn't realize `results.mu` was a NumPy array. Converting it to a plain Python list first keeps everything on one line: ``` kop=open('new.txt', 'w') results=PCA(kk) res=results.mu res=res.tolist() # equivalent to np.ndarray.tolist(res): convert the array to a plain list print results.mu kop.write(str(res)+'\n') kop.close() ```
Assuming `results.mu` is something that can be iterated like a normal list, this snippet should do what you need by utilizing the [`csv` module](http://docs.python.org/2/library/csv.html): ``` import csv with open('new.txt', 'w') as f: writer = csv.writer(f, delimiter="\t") writer.writerows(results.mu) ```
writing results to a file, some go on to the next line. python
[ "", "python", "" ]
I want to check if a boolean is true, then decide in the WHERE clause what condition to use. Say the boolean variable is @checkbool: ``` SELECT * FROM TableA A WHERE --if @checkbool is true, run this A.Id = 123 --if @checkbool is false, run this A.Id <> 123 ``` Is there a way to negate a condition? Like in C++ you can do if !(condition). If not, what is the best way to solve this problem? Thank you!
SQL's equivalent of `!` in C is `NOT`. However, in your case you want something else: you need to build a condition that decides between the two choices based on the value of `@checkbool`, like this: ``` SELECT * FROM TableA A WHERE ( (@checkbool) AND (A.Id = 123)) OR ((NOT @checkbool) AND (A.Id <> 123)) ```
Here is one solution: ``` IF @Checkbool = 1 SELECT * FROM Table A WHERE A.Id = 123 ELSE SELECT * FROM Table A WHERE A.Id <> 123 ``` Here is another using just the WHERE Clause: ``` SELECT * FROM Table A WHERE (@Checkbool = 1 AND A.Id = 123) OR (@Checkbool = 0 AND A.Id <> 123) ``` Everything you put in the where clause needs to be in the form of an expression. Thus, the solution in this case is to write the condition in the form of an expression. Hope this helps. :)
SQL: Can I negate a condition in a where clause?
[ "", "sql", "conditional-statements", "where-clause", "" ]
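The single-query form from the answers in the entry above can be checked end to end. This sketch uses sqlite3 (so it runs anywhere) rather than SQL Server, with invented rows; the `@checkbool` variable becomes a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (Id INTEGER)")
conn.executemany("INSERT INTO TableA VALUES (?)", [(123,), (456,), (789,)])

def fetch(checkbool):
    # (@checkbool AND Id = 123) OR (NOT @checkbool AND Id <> 123)
    sql = ("SELECT Id FROM TableA "
           "WHERE (? AND Id = 123) OR (NOT ? AND Id <> 123) ORDER BY Id")
    return [row[0] for row in conn.execute(sql, (checkbool, checkbool))]
```

Each branch of the OR is gated by the flag, so exactly one branch can ever match for a given value of the parameter.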
I was just wondering how I could find out what the last character of the user input was using Python. I need to know whether it was an S or not. Thanks in advance.....
You can use the built-in function [`str.endswith()`](http://docs.python.org/2/library/stdtypes.html#str.endswith): ``` if raw_input('Enter a word: ').endswith('s'): do_stuff() ``` Or, you can use [Python's Slice Notation](https://stackoverflow.com/questions/509211/the-python-slice-notation): ``` if raw_input('Enter a word: ')[-1:] == 's': # Or you can use [-1] do_stuff() ```
Use [`str.endswith`](http://docs.python.org/2/library/stdtypes.html#str.endswith): ``` >>> "fooS".endswith('S') True >>> "foob".endswith('S') False ``` help on `str.endswith`: ``` >>> print str.endswith.__doc__ S.endswith(suffix[, start[, end]]) -> bool Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try. ```
Python: Last character of user input
[ "", "python", "input", "" ]
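Both techniques from the answers above, side by side on an invented sample word. One detail worth noting: the slice form `[-1:]` is also safe for an empty string, where plain `[-1]` would raise `IndexError`.

```python
word = "apples"

by_method = word.endswith("s")   # the idiomatic built-in check
by_slice = word[-1:] == "s"      # slice notation; "" gives "" here, not an error
empty_case = "".endswith("s")    # simply False for empty input
```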
I have a table like: ``` create table myTab( id integer primary key, is_available boolean not null default true ); ``` I need to do a query that returns only the first encountered row that has `is_available` set to false.
something like ``` select * from myTab where not is_available order by id asc limit 1 ```
Alternatively, you could use `NOT EXISTS` to find the *first* tuple, in most cases this is the fastest solution, too: ``` SELECT * FROM myTab mt WHERE mt.is_available = False AND NOT EXISTS ( SELECT * FROM myTab nx WHERE nx.is_available = False AND nx.id < mt.id ); ```
Return only one row of a select that would return a set of rows
[ "", "sql", "postgresql", "sql-limit", "" ]
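A self-contained check of the accepted query from the entry above. sqlite3 stands in for PostgreSQL here (it stores the boolean as 0/1, but `WHERE NOT is_available` behaves the same way), and the rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# PostgreSQL's boolean column is modelled as 0/1 integers in sqlite
conn.execute(
    "CREATE TABLE myTab (id INTEGER PRIMARY KEY, "
    "is_available INTEGER NOT NULL DEFAULT 1)")
conn.executemany("INSERT INTO myTab VALUES (?, ?)",
                 [(1, 1), (2, 0), (3, 0), (4, 1)])

# first row (by id) where is_available is false
row = conn.execute(
    "SELECT id FROM myTab WHERE NOT is_available ORDER BY id ASC LIMIT 1"
).fetchone()
```

`ORDER BY id ASC LIMIT 1` is what makes "first" well defined; without the ORDER BY the database is free to return any matching row.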
``` if n == 1: return [(-1,), (1,)] if n == 2: return [(-1,0), (1,0), (0,-1), (0,1)] if n == 3: return [(-1,0,0), (1,0,0), (0,-1,0), (0,1,0), (0,0,-1), (0,0,1)] ``` Basically, return a list of `2n` tuples conforming to the above specification. The above code works fine for my purposes but I'd like to see a function that works for all n ∈ ℕ (just for edification). Including `tuple([0]*n)` in the answer is acceptable by me. I'm using this to generate the direction of faces for a measure polytope. For all directions, I can use `list(itertools.product(*[(0, -1, 1)]*n))`, but I can't come up with something quite so concise for only the face directions.
``` def faces(n): def iter_faces(): f = [0] * n for i in range(n): for x in (-1, 1): f[i] = x yield tuple(f) f[i] = 0 return list(iter_faces()) ``` --- ``` >>> faces(1) [(-1,), (1,)] >>> faces(2) [(-1, 0), (1, 0), (0, -1), (0, 1)] >>> faces(3) [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)] ```
``` [tuple(sign * (i == p) for i in range(n)) for p in range(n) for sign in (-1, 1)] ``` Plain `for`, no implicit `bool`→`int` equivalent: ``` for p in range(n): for sign in (-1, 1): yield tuple((sign if i == p else 0) for i in range(n)) ```
What is the pythonic way of generating this type of list? (Faces of an n-cube)
[ "", "python", "list", "geometry", "tuples", "" ]
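The accepted generator from the entry above, reproduced so its output can be verified against the expected property: an n-cube has exactly 2n faces, and each direction vector has a single nonzero entry of ±1.

```python
def faces(n):
    # the accepted answer's generator, wrapped so results can be checked
    def iter_faces():
        f = [0] * n
        for i in range(n):
            for x in (-1, 1):
                f[i] = x
                yield tuple(f)
            f[i] = 0          # reset the slot before moving to the next axis
    return list(iter_faces())
```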
I am facing problem with sorting according to AM / PM. Here is my table data ``` login_time 1:30 PM 2:00 AM 8:00 AM 9:30 PM 10:00 PM 11:10 AM ``` I want result as: ``` login_time 2:00 AM 8:00 AM 11:10 AM 1:30 PM 9:30 PM 10:00 PM ``` How can I do this?
You could use [STR\_TO\_DATE](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) function, and you could also extract only the time part with [TIME()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_time) if you wish: ``` SELECT login_time FROM tablename ORDER BY STR_TO_DATE(login_time, '%l:%i %p') ``` Please see fiddle [here](http://sqlfiddle.com/#!2/624d6/1).
If you already have your date as a `VARCHAR`, I suggest you to refactor that to a `TIME` column. This could be easily done online: ``` UPDATE tablename SET login_time = STR_TO_DATE(login_time, '%l:%i %p'); ALTER TABLE tablename CHANGE login_time login_time TIME; ``` I use the type `TIME` here since your data *apparently* only contains *time*. Maybe a `DATETIME` type would be most appropriate. See <http://dev.mysql.com/doc/refman/5.5/en/date-and-time-types.html> for the various options. Anyway, once the column has be refactored to the appropriate type, you could write: ``` SELECT login_time FROM tablename ORDER BY login_time ``` With the added benefit that *now* you can build an index on `login_time` to speedup your search/sort: ``` ALTER TABLE tablename ADD INDEX ( login_time ); ``` See <http://sqlfiddle.com/#!2/03991/1> (click on 'View Execution Plan' to see that the actual index has been used).
How to sort time ( In AM / PM ) in SQL?
[ "", "mysql", "sql", "" ]
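The same idea as the accepted `STR_TO_DATE` answer above, expressed in Python so it can be run directly: parse each string with a 12-hour format (`%I` plus the `%p` AM/PM marker, the counterpart of MySQL's `%l ... %p`) and sort on the parsed value. The list is the sample data from the question.

```python
from datetime import datetime

times = ["1:30 PM", "2:00 AM", "8:00 AM", "9:30 PM", "10:00 PM", "11:10 AM"]

# %I = 12-hour clock (strptime accepts it without zero padding),
# %p = AM/PM marker; sorting on the parsed datetime gives true time order
ordered = sorted(times, key=lambda t: datetime.strptime(t, "%I:%M %p"))
```

Sorting the raw strings instead would put "10:00 PM" before "2:00 AM", which is exactly the problem in the question.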
I know this is a really bad description but how can i get this to work: ``` class Test1(): def test_p1(): print("This is part 1 of Test1") def Test2(): return test_p1() ``` Thanks in advance!
Well, there are several options. The most basic are: ## Create instance first ``` class Test1(): def test_p1(self): print("This is part 1 of Test1") def Test2(): return Test1().test_p1() ``` However, you should use it when having new instance makes sense (depends on your API). ## Make it class method ``` class Test1(): @classmethod def test_p1(cls): print("This is part 1 of Test1") def Test2(): return Test1.test_p1() ``` ## Make it static method (discouraged) ``` class Test1(): @staticmethod def test_p1(): print("This is part 1 of Test1") def Test2(): return Test1.test_p1() ``` ## Alternative: use inheritance In some cases (maybe it is your case too, we do not know) it makes sense to actually utilize inheritance: create a class that will inherit from `Test1`. This way you can override parts of it and refer to parent methods. Example: ``` class Test1(): def test_p1(self): print("This is part 1 of Test1") class SomeOtherClass(Test1): def test2(self): return super(SomeOtherClass, self).test_p1() ``` and then use it like this: ``` my_instance = SomeOtherClass() result = my_instance.test2() ``` But again, it really depends on your API / library. ## Alternative 2: module-level function User @user2357112 correctly pointed out, that module-level function can be even better (simpler) idea: ``` def test_p1(): print("This is part 1 of Test1") def Test2(): return test_p1() ``` ## Side note: PEP8 To avoid confusion, as Python is really dynamic, you should give a "hint" to developers on what they are using, and in general follow coding style defined in PEP8: * module names are `all_lower_case`, * functions and methods are also `all_lower_case`, * classes are `CamelCase` (same applies to factory-like functions returning class instances), * constants are `ALL_UPPER_CASE`, * object properies are `all_lower_case`, (and many more - the above is only about non-confusing naming)
Tadeck gave a detailed answer while I was typing mine, but here is my initial solution to what I believe you are trying to accomplish. I'm adding my input simply because I'm new to Python and I think a beginner's perspective may be beneficial for OP. ``` class Test1(): def test_p1(self): print "This is part 1 of Test1" def Test2(): myTest = Test1() return myTest.test_p1() Test2() ``` In your original code you attempt to call the `test_p1` method without ever instantiating the `Test1` class. So I did that first, `myTest = Test1()`, and then called the `test_p1()` method using my newly created `myTest` object. Also, I added `self` to the argument list in the `test_p1` method. I don't exactly understand why but apparently the lack of `self` makes it an unbound method which causes some problems.
how to return a function in a different class in python 3.3
[ "", "python", "function", "" ]
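A runnable version of the two main options from the accepted answer above. `print` is swapped for `return` purely so the results can be checked; class and function names follow the question (`Test1Plain` is an added name for the instance-based variant).

```python
class Test1:
    @classmethod
    def test_p1(cls):
        # classmethod: callable on the class itself, no instance needed
        return "This is part 1 of Test1"

def Test2():
    return Test1.test_p1()


class Test1Plain:
    def test_p1(self):
        # plain method: create an instance first, then call it
        return "This is part 1 of Test1"

def Test2_via_instance():
    return Test1Plain().test_p1()
```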
I have the following query: ``` profiles = session.query(profile.name).filter(and_(profile.email == email, profile.password == password_hash)) ``` How do I check if there is a row and how do I just return the first (should only be one if there is a match)?
Use [`query.one()`](http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.one) to get one, and *exactly* one result. In all other cases it will raise an exception you can handle: ``` from sqlalchemy.orm.exc import NoResultFound from sqlalchemy.orm.exc import MultipleResultsFound try: user = session.query(User).one() except MultipleResultsFound, e: print e # Deal with it except NoResultFound, e: print e # Deal with that as well ``` There's also [`query.first()`](http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html#sqlalchemy.orm.query.Query.first), which will give you just the first result of possibly many, without raising those exceptions. But since you want to deal with the case of there being no result or more than you thought, [`query.one()`](http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html#sqlalchemy.orm.query.Query.one) is exactly what you should use.
You can use the [`first()`](http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html#sqlalchemy.orm.query.Query.first) function on the Query object. This will return the first result, or None if there are no results. ``` result = session.query(profile.name).filter(...).first() if not result: print 'No result found' ``` Alternatively you can use [`one()`](http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html#sqlalchemy.orm.query.Query.one), which will give you the only item, but raise exceptions for a query with zero or multiple results. ``` from sqlalchemy.orm.exc import NoResultFound, MultipleResultsFound try: result = session.query(profile.name).filter(...).one() print result except NoResultFound: print 'No result was found' except MultipleResultsFound: print 'Multiple results were found' ```
Getting first row from sqlalchemy
[ "", "python", "sqlalchemy", "flask", "flask-sqlalchemy", "" ]
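sqlalchemy is not assumed to be installed here, so this sketch only emulates the semantics the answers above describe: `first()` returns the first row or `None`, while `one()` demands exactly one row and raises otherwise. The helper names mirror the sqlalchemy API but operate on a plain list.

```python
class NoResultFound(Exception):
    """Mirrors sqlalchemy.orm.exc.NoResultFound."""

class MultipleResultsFound(Exception):
    """Mirrors sqlalchemy.orm.exc.MultipleResultsFound."""

def first(rows):
    # query.first(): first result, or None when there are no rows
    return rows[0] if rows else None

def one(rows):
    # query.one(): exactly one row, or an exception explaining why not
    if not rows:
        raise NoResultFound("No row was found")
    if len(rows) > 1:
        raise MultipleResultsFound("Multiple rows were found")
    return rows[0]
```

For the login check in the question, where zero or one match is expected and both cases matter, the `one()` semantics (catching both exceptions) are the better fit.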
I have a file containing duplicate timestamps, maximum two for each timestamp, actually they are not duplicate, it is just the second timestamp needs to add a millisecond timestamp. For example, I am having these in the file, ``` .... 2011/1/4 9:14:00 2011/1/4 9:15:00 2011/1/4 9:15:01 2011/1/4 9:15:01 2011/1/4 9:15:02 2011/1/4 9:15:02 2011/1/4 9:15:03 2011/1/4 9:15:03 2011/1/4 9:15:04 .... ``` I would like to change them into ``` 2011/1/4 9:14:00 2011/1/4 9:15:00 2011/1/4 9:15:01 2011/1/4 9:15:01.500 2011/1/4 9:15:02 2011/1/4 9:15:02.500 2011/1/4 9:15:03 2011/1/4 9:15:03.500 2011/1/4 9:15:04 .... ``` what is the most efficient way to perform such task?
Setup

```
In [69]: df = DataFrame(dict(time = x))

In [70]: df
Out[70]:
                 time
0 2013-01-01 09:01:00
1 2013-01-01 09:01:00
2 2013-01-01 09:01:01
3 2013-01-01 09:01:01
4 2013-01-01 09:01:02
5 2013-01-01 09:01:02
6 2013-01-01 09:01:03
7 2013-01-01 09:01:03
8 2013-01-01 09:01:04
9 2013-01-01 09:01:04
```

Find the locations where the difference in time from the previous row is 0 seconds

```
In [71]: mask = (df.time-df.time.shift()) == np.timedelta64(0,'s')

In [72]: mask
Out[72]:
0    False
1     True
2    False
3     True
4    False
5     True
6    False
7     True
8    False
9     True
Name: time, dtype: bool
```

Set those locations to use an offset of 5 milliseconds (in your question you used 500, but it could be anything). This requires numpy >= 1.7. (Note that this syntax will change in 0.13 to allow the more direct `df.loc[mask,'time'] += pd.offsets.Milli(5)`.)

```
In [73]: df.loc[mask,'time'] = df.time[mask].apply(lambda x: x+pd.offsets.Milli(5))

In [74]: df
Out[74]:
                         time
0         2013-01-01 09:01:00
1  2013-01-01 09:01:00.005000
2         2013-01-01 09:01:01
3  2013-01-01 09:01:01.005000
4         2013-01-01 09:01:02
5  2013-01-01 09:01:02.005000
6         2013-01-01 09:01:03
7  2013-01-01 09:01:03.005000
8         2013-01-01 09:01:04
9  2013-01-01 09:01:04.005000
```
So this algorithm should work very well... I'm just having a hell of a time with numpy's datetime datatypes. ``` In [154]: df Out[154]: 0 0 2011/1/4 9:14:00 1 2011/1/4 9:15:00 2 2011/1/4 9:15:01 3 2011/1/4 9:15:01 4 2011/1/4 9:15:02 5 2011/1/4 9:15:02 6 2011/1/4 9:15:03 7 2011/1/4 9:15:03 8 2011/1/4 9:15:04 In [155]: ((dt.diff() == 0) * .005) Out[155]: 0 0.000 1 0.000 2 0.000 3 0.005 4 0.000 5 0.005 6 0.000 7 0.005 8 0.000 Name: 0, dtype: float64 ``` And the idea is to add those two together. Of course, one is `datetime64` and the other is `float64`. For whatever reasons, `np.timedelta64` doesn't operate on arrays? Anyway if you can sort out the dtype issues that will work.
Python pandas change duplicate timestamp to unique
[ "", "python", "duplicates", "pandas", "" ]
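The core of the accepted pandas recipe above (flag rows whose difference from the previous timestamp is zero, then shift the flagged ones) works just as well in plain Python, shown here so it can run without pandas installed. The question's 500 ms offset is used and the timestamps are a shortened sample; like the question, it assumes at most two copies of each timestamp.

```python
from datetime import datetime, timedelta

stamps = [
    datetime(2011, 1, 4, 9, 15, 0),
    datetime(2011, 1, 4, 9, 15, 1), datetime(2011, 1, 4, 9, 15, 1),
    datetime(2011, 1, 4, 9, 15, 2), datetime(2011, 1, 4, 9, 15, 2),
]

deduped = []
for i, t in enumerate(stamps):
    # same test as the pandas mask: zero difference to the previous raw row
    if i > 0 and t == stamps[i - 1]:
        t += timedelta(milliseconds=500)
    deduped.append(t)
```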
I have table like ``` table_id item_id vendor_id category_id 1 1 33 4 2 1 33 4 3 1 33 2 4 2 33 4 5 2 33 2 6 3 33 4 7 3 33 4 8 1 34 4 9 1 34 4 10 3 35 4 ``` Here `table_id` is primary key and table having total 98000 entries including 61 duplicate entries which I found by executing query ``` SELECT * FROM my_table WHERE vendor_id = 33 AND category_id = 4 GROUP BY item_id having count(item_id)>1 ``` In above table `table_id` 1,2 and 6,7 duplicate. I need to delete 2 and 7 from my table( Total 61 Duplicate Entries). How can I delete duplicate entries from my table using query with where clause vendor\_id = 33 AND category\_id = 4 ? I don't want delete other duplicate entries such as `table_id` 8,9 I cannot index the table, since I need to kept some duplicate entries which required. I need to delete duplicate with certain criteria
### Please always take backup before running any deletion query. Try using `LEFT JOIN` like this: ``` DELETE my_table FROM my_table LEFT JOIN (SELECT MIN(table_id) AS IDs FROM my_table GROUP BY `item_id`, `vendor_id`, `category_id` )A ON my_table.table_id = A.IDs WHERE A.ids IS NULL; ``` Result after deletion: ``` | TABLE_ID | ITEM_ID | VENDOR_ID | CATEGORY_ID | ------------------------------------------------ | 1 | 1 | 33 | 4 | | 3 | 1 | 33 | 2 | | 4 | 2 | 33 | 4 | | 5 | 2 | 33 | 2 | | 6 | 3 | 33 | 4 | ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/4357f/1) --- ### Edit: (after OP's edit) If you want to add more conditions, you can add it in outer `WHERE` condition like this: ``` DELETE my_table FROM my_table LEFT JOIN (SELECT MIN(table_id) AS IDs FROM my_table GROUP BY `item_id`, `vendor_id`, `category_id` )A ON my_table.table_id = A.IDs WHERE A.ids IS NULL AND vendor_id = 33 --< Additional conditions here AND category_id = 4 --< Additional conditions here ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/7575a/1)
What about this: ``` DELETE FROM my_table WHERE table_id NOT IN (SELECT MIN(table_id) FROM my_table GROUP BY item_id, vendor_id, category_id) ```
Deleting duplicate entries with search criteria
[ "", "mysql", "sql", "duplicates", "" ]
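The `NOT IN (SELECT MIN(table_id) ...)` variant from the second answer above, combined with the question's `vendor_id`/`category_id` restriction, demonstrated on the question's exact rows. sqlite3 is used so the snippet runs anywhere; note that stock MySQL rejects a subquery that selects from the table being deleted from (error 1093), which is why the accepted answer wraps the subquery in a derived table via `LEFT JOIN`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE my_table (
    table_id INTEGER PRIMARY KEY,
    item_id INTEGER, vendor_id INTEGER, category_id INTEGER)""")
rows = [(1, 1, 33, 4), (2, 1, 33, 4), (3, 1, 33, 2), (4, 2, 33, 4),
        (5, 2, 33, 2), (6, 3, 33, 4), (7, 3, 33, 4), (8, 1, 34, 4),
        (9, 1, 34, 4), (10, 3, 35, 4)]
conn.executemany("INSERT INTO my_table VALUES (?,?,?,?)", rows)

# keep MIN(table_id) per (item, vendor, category) group, and only
# delete inside the vendor 33 / category 4 slice the question asks about
conn.execute("""
    DELETE FROM my_table
    WHERE vendor_id = 33 AND category_id = 4
      AND table_id NOT IN (
        SELECT MIN(table_id) FROM my_table
        GROUP BY item_id, vendor_id, category_id)""")

remaining = [r[0] for r in
             conn.execute("SELECT table_id FROM my_table ORDER BY table_id")]
```

Rows 2 and 7 are removed, while the vendor-34 duplicates (8, 9) survive, matching the question's requirement to keep duplicates outside the filter.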
I wanted to start by asking here about this. This was given to me as part of an exercise at codeacademy.com and confused me for the better part of an hour. Take a look at the following code block: ``` bool_one = 40 / 20 * 4 >= -4**2 ``` Now, I evaluated that as being "8 >= 16" which is False. However, the codeacademy.com terminal says it's True. When I started writing debug code lines, I found the issue was in how "-4\*\*2" gets evaluated. When I run it on the terminal at CodeAcademy as well as on my local linux system, "-4\*\*2" in Python comes out to "-16"... which is contrary to *everything* I have learned in all my math classes as well as every single calculator I have run it on. Whether I run it as "-4 \* -4" or "-4^2" or even, using the "x^y" key, "-4 [x^y] 2", it **still** comes out as "16". So, how is python coming out with "-16" for "-4\*\*2"? Can someone please clarify this for me? TIA.
If you have -4 without parentheses, the negative sign is considered a unary operator which is essentially "multiply by negative one." `(-4)**2` will be 16, because that is actually negative 4 squared, but `-4**2` uses the normal order of operations (exponentiation before multiplication) and treats it as `-(4**2)`. Hope that helps! Edit: to really understand operator precedence, take a look at this handy list in the docs: <http://docs.python.org/2/reference/expressions.html#operator-precedence> as you can see, `-` has less precedence than `**`
From the doc of [Power Operator](http://docs.python.org/2/reference/expressions.html#the-power-operator): > The power operator binds more tightly than unary operators on its > left; it binds less tightly than unary operators on its right. The > syntax is: > > ``` > power ::= primary ["**" u_expr] > ``` > > Thus, in an unparenthesized sequence of power and unary operators, **the operators are evaluated from right to left** (this does not constrain the evaluation order for the operands): -1\*\*2 results in -1. Emphasis mine. So, for getting the required result, you need to add parenthesis around `-4`. ``` >>> (-4) ** 2 16 ```
Need some clarification on the ** operator in Python
[ "", "python", "operators", "" ]
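The precedence rule from the answers above, stated as executable checks: `**` binds tighter than unary minus, so `-4**2` parses as `-(4**2)`.

```python
no_parens = -4 ** 2        # the power operator binds first: -(4 ** 2)
explicit = -(4 ** 2)       # what Python actually evaluates
with_parens = (-4) ** 2    # parenthesize to genuinely square negative four
```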
So I have a string which is converted into a dictionary called test and takes the form: ``` {"a":"1","b":"2","c":"3"} ``` Now while creating the dictionary, the availablilty of the `"c":"3"` key-pair, depends on a some condition; so there are cases when the dictionary simply reads as: ``` {"a":"1","b":"2"} ``` I have an `if` condition which checks for the existence of the `"c"` key and gives some output based on it. The condition is as follows: ``` if (test["c"]): print "Hello world!!" ``` This has no else condition. The problem arises when the 3rd key-value pair isn't in the dictionary. Running the code gives the following error: ``` File "test.py", line 14, in test if test["c"]: KeyError: 'c' ``` May I know what I can do to resolve this and what to correct in the code?
Use `in` to check if the key is in the dictionary: ``` if "c" in test: ... ```
Instead of `test['c']` try to use `test.get('c')`
If conditions for Python dictionaries
[ "", "python", "if-statement", "dictionary", "" ]
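The two lookups from the answers above, on the question's own dictionary: `in` tests membership without raising, and `.get()` returns `None` (or a chosen default) when the key is missing.

```python
test = {"a": "1", "b": "2"}        # the "c" key may or may not be present

has_c = "c" in test                # False here, and never raises KeyError
maybe_c = test.get("c")            # None when the key is missing
with_default = test.get("c", "0")  # or supply a fallback value

test["c"] = "3"                    # once the key exists...
has_c_now = "c" in test            # ...the same check flips to True
```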
I'm trying to join three tables in the following format. ``` select * from A inner join B (inner join (C on C.id=B.c_id)) on A.id=B.a_id; ``` I've never used a lookup table to perform a join (in this case B). Oracle throws an error saying ``` ORA-00905: missing keyword 00905. 00000 - "missing keyword" *Cause: *Action: Error at Line: 6 Column: 11 ``` Any insight on where I'm going wrong would be appreciated. Thanks!
Rather try something like ``` SELECT * FROM A INNER JOIN B ON A.id = B.a_id INNER JOIN C ON C.id = B.c_id ```
It should be written like this, which is clearer: ``` select * from A inner join B on A.id=b.a_id inner join C on C.id=B.c_id ``` You can see how this shows the intent of your statement (the join clause is very explicit about how you want to join the tables) Vs joining 2 tables by expressing the join condition in the where clause (as in `where a.id=b.id`)
Am I doing this join right? Three way table join
[ "", "sql", "oracle", "inner-join", "" ]
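The corrected join chain from the answers above, run against three tiny invented tables. sqlite3 stands in for Oracle here, but the syntax being confirmed is the same: each `INNER JOIN` gets its own `ON` clause, with B acting as the lookup table between A and C.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE C (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE B (a_id INTEGER, c_id INTEGER);   -- the lookup table
    INSERT INTO A VALUES (1, 'a1'), (2, 'a2');
    INSERT INTO C VALUES (10, 'c10');
    INSERT INTO B VALUES (1, 10);
""")

rows = conn.execute("""
    SELECT A.name, C.name
    FROM A
    INNER JOIN B ON A.id = B.a_id
    INNER JOIN C ON C.id = B.c_id
""").fetchall()
```

Only the A row that has a B link to a C row survives the two inner joins.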
I've got a small problem. I've written a module to parse a configuration file (wvdial's wvdial.conf) using regex. File contains strings like `"Init1 = AT"` and I've used a following regex: ``` match = re.match(r'(.*)=(.*)', line) ``` Everything worked until following line: ``` #Init3 = AT+CPIN="0000" ``` Which got parsed like: ``` '#Init3 = AT+CPIN':'0000' ``` It seems like the regex engine goes from right to left parsing the string. Is there any way to reverse the re.search direction?
You need to mark your first `*` quantifier as non-greedy by appending `?`: ``` match = re.match(r'(.*?)=(.*)', line) ``` Demo: ``` >>> line = '#Init3 = AT+CPIN="0000"' >>> re.match(r'(.*?)=(.*)', line).group() '#Init3 = AT+CPIN="0000"' ``` By making the quantifier non-greedy, the regular expression engine will match the *minimum* to satisfy the pattern, rather than the maximum.
No, Python's `re` doesn't parse from right to left, **but** `.*` is *greedy* by default: it will look for the longest match it can get. You can get your regex to work in a couple of ways; the easiest is: ``` re.match(r'([^=]+)=(.*)', line) ``` This will match anything but `=` as the first part of the match, then `=`, then anything after the first `=`. Demo: ``` >>> import re >>> line = '#Init3 = AT+CPIN="0000"' >>> re.match(r'([^=]+)=(.*)', line).groups() ('#Init3 ', ' AT+CPIN="0000"') >>> ```
Python re.search - searching from left to right
[ "", "python", "regex", "parsing", "" ]
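All three patterns from the thread above, run on the question's own input line: the original greedy version (which splits at the *last* `=`), the non-greedy fix from the accepted answer, and the `[^=]+` character-class alternative.

```python
import re

line = '#Init3 = AT+CPIN="0000"'

greedy = re.match(r'(.*)=(.*)', line).groups()        # splits at the last '='
lazy = re.match(r'(.*?)=(.*)', line).groups()         # splits at the first '='
charclass = re.match(r'([^=]+)=(.*)', line).groups()  # same split, no backtracking
```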
``` global $wpdb; $sql = "SELECT * FROM $wpdb->postmeta"; $sql .= " INNER JOIN $wpdb->posts ON $wpdb->postmeta.post_id = $wpdb->posts.ID"; $sql .= " WHERE $wpdb->posts.post_type = 'task'"; $sql .= " AND $wpdb->posts.post_parent IN ( SELECT `ID` FROM $wpdb->posts WHERE `post_type` = 'task_list' AND `post_parent` IN ( SELECT `ID` FROM $wpdb->posts WHERE `post_status` = 'publish' AND `post_type` = 'project' ) )"; $sql .= " AND $wpdb->postmeta.post_id IN ( SELECT `post_id` FROM $wpdb->postmeta WHERE $wpdb->postmeta.meta_key = '_completed' AND $wpdb->postmeta.meta_value = '0' )"; $sql .= " AND $wpdb->postmeta.post_id IN ( SELECT `post_id` FROM $wpdb->postmeta WHERE $wpdb->postmeta.meta_key = '_assigned' AND $wpdb->postmeta.meta_value = '%s' )"; $sql .= " AND $wpdb->postmeta.meta_key = '_due'"; $sql .= " GROUP BY $wpdb->posts.ID"; $sql .= " ORDER BY CAST( $wpdb->postmeta.meta_value AS DATE ) ASC, $wpdb->posts.ID ASC"; $tasks = $wpdb->get_results( sprintf( $sql, $user_id ) ); return $tasks; ``` The result of the query is like this: 1. Post A. 2. Post B. 3. Post X. Due: August 2 4. Post Y. Due: August 3 5. Post Z. Due: August 4 Why are the posts with no `_due` value always returned first? How can I make it so they are always returned last? If I do this: ``` $sql .= " ORDER BY $wpdb->posts.ID ASC, CAST( $wpdb->postmeta.meta_value AS DATE ) ASC"; ``` The posts with no `_due` value are randomly placed in between posts with `_due` value. The ideal result should be: 1. Post X. Due: August 2 2. Post Y. Due: August 3 3. Post Z. Due: August 4 4. Post A. 5. Post B. Thoughts?
``` "ORDER BY CASE WHEN CAST( $wpdb->postmeta.meta_value AS DATE ) = '' THEN 1 ELSE 0 END, CAST( $wpdb->postmeta.meta_value AS DATE ) ASC, $wpdb->posts.ID ASC"; ```
You can add the function isnull to your order by clause: ``` $sql .= " ORDER BY $wpdb->posts.ID ASC, ISNULL($wpdb->postmeta.meta_value),CAST( $wpdb->postmeta.meta_value AS DATE ) ASC"; ```
ORDER BY date ASC placing records with no dates before records with dates?
[ "", "mysql", "sql", "wordpress", "" ]
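Both answers above share one idea: an extra leading ORDER BY key that sends rows without a date to the end. The sketch below checks that idea in sqlite3 with invented rows, using `IS NULL` for the "no date" test; the WordPress answers test an empty string / use `ISNULL()`, but the sorting mechanics are identical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT, due TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [
    ("Post A", None), ("Post B", None),
    ("Post X", "2013-08-02"), ("Post Y", "2013-08-03"), ("Post Z", "2013-08-04"),
])

# The CASE key sorts dated rows (0) before undated rows (1);
# the real date ordering then applies within the dated group.
ordered = [r[0] for r in conn.execute(
    "SELECT title FROM posts "
    "ORDER BY CASE WHEN due IS NULL THEN 1 ELSE 0 END, due ASC, title ASC"
)]
```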
``` SELECT alert, (select created_at from alerts WHERE alert = @ALERT ORDER BY created_at desc LIMIT 1) AS latest FROM alerts GROUP BY alert; ``` I am having an issue with the above query where I would like to pass in each alert into the subquery so that I have a column called latest which displays the latest alert for each group of alerts. How should I do this?
If you are trying to get the latest created\_at date for each group of alerts, there is a simpler way. ``` SELECT alert, max (created_at) AS latest FROM alerts GROUP BY alert; ```
This is called a correlated subquery. To make it work, you need table aliases: ``` SELECT a1.alert, (select a2.created_at from alerts a2 WHERE a2.alert = a1.alert ORDER BY a2.created_at desc LIMIT 1 ) AS latest FROM alerts a1 GROUP BY a1.alert; ``` Table aliases are a good habit to get into, because they often make the SQL more readable. It is also a good idea to use table aliases with column references, so you easily know where the column is coming from. EDIT: If you really want the latest, you can get it by simply doing: ``` select alert, max(created_at) from alerts group by alert; ```
Passing query result into subquery
[ "", "mysql", "sql", "" ]
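The `MAX(created_at) ... GROUP BY` simplification that both answers above converge on, runnable in sqlite3 with invented alerts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (alert TEXT, created_at TEXT)")
conn.executemany("INSERT INTO alerts VALUES (?, ?)", [
    ("disk", "2013-01-01"), ("disk", "2013-03-01"), ("cpu", "2013-02-01"),
])

# one row per alert, carrying that alert's latest created_at
latest = conn.execute(
    "SELECT alert, MAX(created_at) AS latest FROM alerts "
    "GROUP BY alert ORDER BY alert"
).fetchall()
```

No correlated subquery is needed: the aggregate already picks the newest value within each group.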
So now, I have a python program which I would like to convert into an executable(preferably a single file). Right now the target systems are only RedHat(and CentOS) and Debian(and Ubuntu). First, I've tried the PyInstaller but after running it, it creates a **.spec** file and 2 folders called build and dist. I have no idea how to proceed from there. Second, I tried the freeze.py which ships with python. I understand the usage is as follows: ``` python /path/to/freeze.py /path/to/myfile.py ``` This throws an error `***Test Failed*** 2 failures` and `NameError: name 'testdata' is not defined` The full error is as follows: ``` ********************************************************************** File "/usr/lib/python2.6/site-packages/freeze.py", line 117, in __main__.freeze Failed example: testdata = json.loads( gzip.open("testdata.json.gz", "r").read().decode() ) Exception raised: Traceback (most recent call last): File "/usr/lib64/python2.6/doctest.py", line 1253, in __run compileflags, 1) in test.globs File "<doctest __main__.freeze[3]>", line 2, in <module> gzip.open("testdata.json.gz", "r").read().decode() File "/usr/lib64/python2.6/gzip.py", line 33, in open return GzipFile(filename, mode, compresslevel) File "/usr/lib64/python2.6/gzip.py", line 79, in __init__ fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') IOError: [Errno 2] No such file or directory: 'testdata.json.gz' ********************************************************************** File "/usr/lib/python2.6/site-packages/freeze.py", line 121, in __main__.freeze Failed example: freeze(testdata) == freeze_fast(testdata) Exception raised: Traceback (most recent call last): File "/usr/lib64/python2.6/doctest.py", line 1253, in __run compileflags, 1) in test.globs File "<doctest __main__.freeze[4]>", line 1, in <module> freeze(testdata) == freeze_fast(testdata) NameError: name 'testdata' is not defined ********************************************************************** 1 items had 
failures: 2 of 8 in __main__.freeze ***Test Failed*** 2 failures. ``` I'd like some help to using either of the 2 (or any other tool which will help me achieve the same result). Thanks.
If you want to make it executable, you have to `chmod +x /path/to/script.py`. This gives anybody permission to run the file. Then you can `python /path/to/script.py`. You still need to start the command with python, which is ugly. Add the line `#!/usr/bin/env python` as the first line of your script; this is called a shebang or a hashbang. Then (still remember to chmod it) you can `/path/to/script.py` and it will execute. If you are already in the directory of your script you can `./script.py`. (Still remember to chmod it and add a shebang.) If you still aren't satisfied, and you want to type in *just the name of your script*, move the script into one of the folders on your path (which you can find by typing `echo $PATH` in shell; typically this is /usr/, /bin/, /usr/local/bin, or something like that). If you move your script into one of those folders, then you can just `script.py`. If you do this, I recommend you drop the .py extension, so you can just type in `script`. This will make it look like other unix shell commands (ls, grep, cat), at least in its invocation.
You might wish to investigate [Nuitka](http://nuitka.net/pages/overview.html). It takes python source code and converts it in to C++ API calls. Then it compiles into an executable binary (ELF on Linux). It has been around for a few years now and supports a wide range of Python versions. You will probably also get a performance improvement if you use it. Recommended.
converting python program into executable
[ "", "python", "pyinstaller", "" ]
I am fairly new to python so I apologize if this is quite a novice question, but I am trying to extract text from parentheses that has a specific format from a raw text file. I have tried this with regular expressions, but please let me know if there is a better method. To show what I want to do by example: ``` s = "Testing (Stackoverflow, 2013). Testing (again) (Stackoverflow, 1999)" ``` From this string I want a result something like: ``` ['(Stackoverflow, 2013)', '(Stackoverflow, 1999)'] ``` The regular expression I have tried so far is ``` "(\(.+[,] [0-9]{4}\))" ``` in conjunction with re.findall(), however this only gives me the result: ``` ['(Stackoverflow, 2013). Testing (again) (Stackoverflow, 1999)'] ``` So, as you may have guessed, I am trying to extract the bibliographic references from a .txt file, but I don't want to extract anything that happens to be in parentheses that is not a bibliographic reference. Again, I apologize if this is novice, and if there is a question like this out there already. I have searched, but no luck as yet.
Assuming that you will have **no** nested brackets, you could use something like so: `(\([^()]+?, [0-9]{4}\))`. This will match any non bracket character which is within a set of parenthesis which is followed by a comma, a white space four digits and a closing parenthesis.
Using `[^()]` instead of `.`. This will make sure there is no nested (). ``` >>> re.findall("(\([^()]+[,] [0-9]{4}\))", s) ['(Stackoverflow, 2013)', '(Stackoverflow, 1999)'] ```
Extracting parenthesis with a specific format with Python
[ "", "python", "regex", "" ]
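The accepted fix from the entry above, verified on the question's sample string: replacing greedy `.+` with `[^()]+` stops a single match from spanning across `). Testing (`.

```python
import re

s = "Testing (Stackoverflow, 2013). Testing (again) (Stackoverflow, 1999)"

# [^()] forbids crossing a closing/opening parenthesis, so '(again)'
# (no comma-year part) and the prose between references are never swallowed
refs = re.findall(r'(\([^()]+, [0-9]{4}\))', s)
```

As the answers note, this assumes citations never contain nested parentheses.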
I have a million integers in sorted order and I would like to find the longest subsequence where the difference between consecutive pairs is equal. For example ``` 1, 4, 5, 7, 8, 12 ``` has a subsequence ``` 4, 8, 12 ``` My naive method is greedy and just checks how far you can extend a subsequence from each point. This takes `O(n²)` time per point it seems. Is there a faster way to solve this problem? **Update.** I will test the code given in the answers as soon as possible (thank you). However it is clear already that using n^2 memory will not work. So far there is no code that terminates with the input as `[random.randint(0,100000) for r in xrange(200000)]` . **Timings.** I tested with the following input data on my 32 bit system. ``` a= [random.randint(0,10000) for r in xrange(20000)] a.sort() ``` * The dynamic programming method of ZelluX uses 1.6G of RAM and takes 2 minutes and 14 seconds. With pypy it takes only 9 seconds! However it crashes with a memory error on large inputs. * The O(nd) time method of Armin took 9 seconds with pypy but only 20MB of RAM. Of course this would be much worse if the range were much larger. The low memory usage meant I could also test it with a= [random.randint(0,100000) for r in xrange(200000)] but it didn't finish in the few minutes I gave it with pypy. In order to be able to test the method of Kluev's I reran with ``` a= [random.randint(0,40000) for r in xrange(28000)] a = list(set(a)) a.sort() ``` to make a list of length roughly `20000`. All timings with pypy * ZelluX, 9 seconds * Kluev, 20 seconds * Armin, 52 seconds It seems that if the ZelluX method could be made linear space it would be the clear winner.
**Update:** First algorithm described here is obsoleted by [Armin Rigo's second answer](https://stackoverflow.com/a/18247391/1009831), which is much simpler and more efficient. But both these methods have one disadvantage. They need many hours to find the result for one million integers. So I tried two more variants (see second half of this answer) where the range of input integers is assumed to be limited. Such limitation allows much faster algorithms. Also I tried to optimize Armin Rigo's code. See my benchmarking results at the end. --- Here is an idea of algorithm using O(N) memory. Time complexity is O(N2 log N), but may be decreased to O(N2). Algorithm uses the following data structures: 1. `prev`: array of indexes pointing to previous element of (possibly incomplete) subsequence. 2. `hash`: hashmap with key = difference between consecutive pairs in subsequence and value = two other hashmaps. For these other hashmaps: key = starting/ending index of the subsequence, value = pair of (subsequence length, ending/starting index of the subsequence). 3. `pq`: priority queue for all possible "difference" values for subsequences stored in `prev` and `hash`. Algorithm: 1. Initialize `prev` with indexes `i-1`. Update `hash` and `pq` to register all (incomplete) subsequences found on this step and their "differences". 2. Get (and remove) smallest "difference" from `pq`. Get corresponding record from `hash` and scan one of second-level hash maps. At this time all subsequences with given "difference" are complete. If second-level hash map contains subsequence length better than found so far, update the best result. 3. In the array `prev`: for each element of any sequence found on step #2, decrement index and update `hash` and possibly `pq`. While updating `hash`, we could perform one of the following operations: add a new subsequence of length 1, or grow some existing subsequence by 1, or merge two existing subsequences. 4. Remove hash map record found on step #2. 5. 
Continue from step #2 while `pq` is not empty.

This algorithm updates O(N) elements of `prev` O(N) times each, and each of these updates may require adding a new "difference" to `pq`. All this means a time complexity of O(N^2 log N) if we use a simple heap implementation for `pq`. To decrease it to O(N^2) we might use more advanced priority queue implementations. Some of the possibilities are listed on this page: [Priority Queues](http://www.theturingmachine.com/algorithms/heaps.html).

See the corresponding Python code on [Ideone](http://ideone.com/h8oTYv). This code does not allow duplicate elements in the list. It is possible to fix this, but it would be a good optimization anyway to remove duplicates (and to find the longest subsequence beyond duplicates separately).

And [the same code after a little optimization](http://ideone.com/bW8meY). Here the search is terminated as soon as the subsequence length multiplied by the possible subsequence "difference" exceeds the source list range.

---

Armin Rigo's code is simple and pretty efficient. But in some cases it does some extra computations that may be avoided.
Search may be terminated as soon as subsequence length multiplied by possible subsequence "difference" exceeds source list range:

```
def findLESS(A):
  Aset = set(A)
  lmax = 2
  d = 1
  minStep = 0

  while (lmax - 1) * minStep <= A[-1] - A[0]:
    minStep = A[-1] - A[0] + 1
    for j, b in enumerate(A):
      if j+d < len(A):
        a = A[j+d]
        step = a - b
        minStep = min(minStep, step)
        if a + step in Aset and b - step not in Aset:
          c = a + step
          count = 3
          while c + step in Aset:
            c += step
            count += 1
          if count > lmax:
            lmax = count
    d += 1

  return lmax

print(findLESS([1, 4, 5, 7, 8, 12]))
```

---

If the range of integers in the source data (M) is small, a simple algorithm is possible with O(M^2) time and O(M) space:

```
def findLESS(src):
  r = [False for i in range(src[-1]+1)]
  for x in src:
    r[x] = True

  d = 1
  best = 1

  while best * d < len(r):
    for s in range(d):
      l = 0
      for i in range(s, len(r), d):
        if r[i]:
          l += 1
          best = max(best, l)
        else:
          l = 0
    d += 1

  return best

print(findLESS([1, 4, 5, 7, 8, 12]))
```

It is similar to the first method by Armin Rigo, but it doesn't use any dynamic data structures. I suppose source data has no duplicates. And (to keep the code simple) I also suppose that minimum input value is non-negative and close to zero.

---

Previous algorithm may be improved if instead of the array of booleans we use a bitset data structure and bitwise operations to process data in parallel. The code shown below implements bitset as a built-in Python integer. It has the same assumptions: no duplicates, minimum input value is non-negative and close to zero.
Time complexity is O(M^2 \* log L) where L is the length of the optimal subsequence, space complexity is O(M):

```
def findLESS(src):
  r = 0
  for x in src:
    r |= 1 << x

  d = 1
  best = 1

  while best * d < src[-1] + 1:
    c = best
    rr = r

    while c & (c-1):
      cc = c & -c
      rr &= rr >> (cc * d)
      c &= c-1

    while c != 1:
      c = c >> 1
      rr &= rr >> (c * d)

    rr &= rr >> d
    while rr:
      rr &= rr >> d
      best += 1

    d += 1

  return best
```

---

**Benchmarks:**

Input data (about 100000 integers) is generated this way:

```
random.seed(42)
s = sorted(list(set([random.randint(0,200000) for r in xrange(140000)])))
```

And for the fastest algorithms I also used the following data (about 1000000 integers):

```
s = sorted(list(set([random.randint(0,2000000) for r in xrange(1400000)])))
```

All results show time in seconds:

```
Size:                         100000   1000000
Second answer by Armin Rigo:     634         ?
By Armin Rigo, optimized:         64     >5000
O(M^2) algorithm:                 53      2940
O(M^2*L) algorithm:                7       711
```
We can have a solution `O(n*m)` in time with very little memory needs, by adapting yours. Here `n` is the number of items in the given input sequence of numbers, and `m` is the range, i.e. the highest number minus the lowest one.

Call A the sequence of all input numbers (and use a precomputed `set()` to answer in constant time the question "is this number in A?"). Call d the *step* of the subsequence we're looking for (the difference between two numbers of this subsequence). For every possible value of d, do the following linear scan over all input numbers: for every number n from A in increasing order, if the number was not already seen, look forward in A for the length of the sequence starting at n with a step d. Then mark all items in that sequence as already seen, so that we avoid searching again from them, for the same d. Because of this, the complexity is just `O(n)` for every value of d.

```
A = [1, 4, 5, 7, 8, 12]    # in sorted order
Aset = set(A)

for d in range(1, 12):
    already_seen = set()
    for a in A:
        if a not in already_seen:
            b = a
            count = 1
            while b + d in Aset:
                b += d
                count += 1
                already_seen.add(b)
            print "found %d items in %d .. %d" % (count, a, b)
            # collect here the largest 'count'
```

*Updates:*

* This solution might be good enough if you're only interested in values of d that are relatively small; for example, if getting the best result for `d <= 1000` would be good enough. Then the complexity goes down to `O(n*1000)`. This makes the algorithm approximative, but actually runnable for `n=1000000`. (Measured at 400-500 seconds with CPython, 80-90 seconds with PyPy, with a random subset of numbers between 0 and 10'000'000.)
* If you still want to search for the whole range, and if the common case is that long sequences exist, a notable improvement is to stop as soon as d is too large for an even longer sequence to be found.
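A sketch of that last improvement (my rough adaptation of the scan above, assuming a sorted list of distinct integers): once `(best - 1) * d` exceeds the total value range `A[-1] - A[0]`, no longer sequence with step `d` can fit, so the loop over `d` can stop early.

```python
def longest_with_early_stop(A):
    """Same linear scan per step d, but stop increasing d as soon as a
    longer sequence could no longer fit inside the value range."""
    if not A:
        return 0
    Aset = set(A)
    span = A[-1] - A[0]
    best = 1
    d = 1
    while d <= span and (best - 1) * d <= span:
        already_seen = set()
        for a in A:
            if a not in already_seen:
                b = a
                count = 1
                while b + d in Aset:
                    b += d
                    count += 1
                    already_seen.add(b)
                best = max(best, count)
        d += 1
    return best

print(longest_with_early_stop([1, 4, 5, 7, 8, 12]))  # 3
```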
Longest equally-spaced subsequence
[ "", "python", "algorithm", "" ]
I am stuck on this issue: I have two models, Location and Rate. Each location has its rate, possibly multiple rates. I want to get locations ordered by their rates, ascendingly. Obviously, `order_by` and `distinct()` don't work together:

```
locations = Location.objects.filter(**s_kwargs).order_by('locations_rate__rate').distinct('id')
```

Then I read the docs and came to `annotate()`, but I am not sure whether I have to use a function with annotate. If I do this:

```
locations = Location.objects.filter(**s_kwargs).annotate(rate=Count('locations_rate__rate')).order_by('rate')
```

it counts the rates and orders by the sum. I want to get locations with their rates ordered by the value of those rates. My model definitions are:

```
class Location(models.Model):
    name = models.TextField()
    adres = models.TextField()

class Rate(models.Model):
    location = models.ForeignKey(Location, related_name='locations_rate')
    rate = models.IntegerField(max_length=2)
    price_rate = models.IntegerField(max_length=2)  #<--- added now
    datum = models.DateTimeField(auto_now_add=True, blank=True)  #<--- added now
```
Well, the issue is not how to write the query in Django for the problem you described. It's that your problem is either incorrectly stated or not properly thought through. Let me explain with an example:

Suppose you have two `Location` objects, `l1` and `l2`. `l1` has two `Rate` objects related to it, `r1` and `r3`, such that `r1.rate = 1` and `r3.rate = 3`; and `l2` has one `Rate` object related to it, `r2`, such that `r2.rate = 2`.

Now what should be the order of your query's result: `l1` followed by `l2`, or `l2` followed by `l1`? One of `l1`'s rates is less than `l2`'s rate and the other one is greater than `l2`'s rate.
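The ambiguity is easy to see in plain Python (hypothetical data mirroring the example above, not your actual rows): sorting locations by their minimum rate and by their maximum rate gives opposite orders.

```python
# Rates per location, mirroring the example: l1 has {1, 3}, l2 has {2}.
rates = {"l1": [1, 3], "l2": [2]}

by_min = sorted(rates, key=lambda loc: min(rates[loc]))
by_max = sorted(rates, key=lambda loc: max(rates[loc]))

print(by_min)  # ['l1', 'l2']
print(by_max)  # ['l2', 'l1']
```

So before writing the queryset, you have to decide which aggregate (e.g. `Min` or `Max`) should define a location's position in the ordering.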
Possibly you want something like this:

```
locations = (Location.objects.filter(**s_kwargs)
             .values('locations_rate__rate')
             .annotate(Count('locations_rate__rate'))
             .order_by('locations_rate__rate'))
```

You need the `Count()` since you actually need a `GROUP BY` query, and `GROUP BY` only works with aggregate functions like `COUNT` or `SUM`.

Anyway I think your problem can be solved with normal `distinct()`:

```
locations = (Location.objects.filter(**s_kwargs)
             .order_by('locations_rate__rate')
             .distinct('locations_rate__rate'))
```

Why would you want to use `annotate()` instead? I haven't tested both but hope it helps.
django - annotate() instead of distinct()
[ "", "python", "django", "django-queryset", "" ]
I'm attempting to apply a bandpass filter with time-varying cutoff frequencies to a signal, using Python. The routine I am currently using partitions my signal into equal-length time segments, then for each segment I apply a filter with time-specific parameters, before merging the signal back together. The parameters are based on pre-existing estimates.

The problem I seem to be having is that there are "ripples" at the edge of each time-segment that appear after the filter has been applied. This causes discontinuities in my signal, which interfere with my post-filtering data analysis.

I was hoping someone could inform me whether there are any existing routines for implementing filters with time-varying parameters in Python? Alternatively, advice on how I might get around this problem would be much appreciated.

**EDIT** -- example of what I want to do is added below.

Let's say I have a signal x(t). I want to filter the first half with a bandpass filter with parameters (100, 200) Hz. The second half I want to filter with parameters (140, 240) Hz. I iterate over x(t), applying my filter to each half, then recombine the results. Some example code might look like:

```
outputArray = np.empty(len(x))
segmentSize = len(x) / 2
filtParams = [(100, 200), (140, 240)]

for i in range(2):
    tempData = x[i*segmentSize:(i+1)*segmentSize]
    tempFiltered = bandPassFilter(tempData, parameters=filtParams[i])
    outputArray[i*segmentSize:(i+1)*segmentSize] = tempFiltered
```

(To save space let's assume I have a function which performs bandpass filtering.)

As you can see, the data segments do not overlap and are simply "pasted" back together in the new array.

**EDIT 2** -- some sample code of my problem

@H.D.

First of all, thanks for your significant input thus far. The audiolazy package looks like a great tool. I thought it would be a bit more useful if I describe my goals in further detail.
As I have posted [elsewhere](https://stackoverflow.com/questions/17302240/filter-design-and-frequency-extraction-in-python), I am attempting to extract the [instantaneous frequency](http://www.scholarpedia.org/article/Hilbert-Huang_transform#Instantaneous_frequency_and_the_Hilbert_transform) (IF) of a signal, using the Hilbert transform. My data contains significant noise but I have a good estimate of the bandwidth where my IF signal lies. A problem I have come up against, however, is that the IF is often nonstationary. Using a "static" filter approach I am often therefore required to use a broad bandpass region, to ensure all frequencies are captured.

The following code demonstrates the effect of increasing the filter bandwidth on an IF signal. It includes a signal generating function, an implementation of a bandpass filter using the scipy.signal package, and a method to extract the IF of the resultant filtered signal.

```
from audiolazy import *
import scipy.signal as sig
import numpy as np
from pylab import *

def sineGenerator( ts, f, rate, noiseLevel=None ):
    """generate a sine tone with time, frequency, sample rate and noise specified"""
    fs = np.ones(len(ts)) * f
    y = np.sin(2*np.pi*fs*ts)
    if noiseLevel:
        y = y + np.random.randn(len(y))/float(noiseLevel)
    return y

def bandPassFilter( y, passFreqs, rate, order ):
    """STATIC bandpass filter using scipy.signal Butterworth filter"""
    nyquist = rate / 2.0
    Wn = np.array([passFreqs[0]/nyquist, passFreqs[1]/nyquist])
    z, p, k = sig.butter(order, Wn, btype='bandpass', output='zpk')
    b, a = sig.zpk2tf(z, p, k)
    return sig.lfilter(b, a, y)

if __name__ == '__main__':
    rate = 1e4
    ts = np.arange(0, 10, 1/rate)

    # CHANGING THE FILTER AFFECTS THE LEVEL OF NOISE
    ys = sineGenerator(ts, 600.0, 1e4, noiseLevel=1.0) # a 600Hz signal with noise

    filts = [[500, 700], [550, 650], [580, 620]]
    for f in filts:
        tempFilt = bandPassFilter( ys, f, rate, order=2 )
        tempFreq = instantaneousFrequency( tempFilt, rate )
        plot( ts[1:], tempFreq, alpha=.7,
              label=str(f).strip('[]') )

    ylim( 500, 750 )
    xlabel( 'time' )
    ylabel( 'instantaneous frequency (Hz)' )
    legend(frameon=False)
    title('changing filter passband and instantaneous frequency')
    savefig('changingPassBand.png')
```

![changing passband](https://i.stack.imgur.com/aSTU3.png)

There is a single frequency component in the signal (at 600Hz). The legend shows the filter parameters used in each case. Using a narrower "static" filter gives a "cleaner" output, but how narrow my filter can be is limited by what the frequencies are. For instance, consider the following signal with two frequency components (one at 600Hz, another at 650Hz).

![varying frequency](https://i.stack.imgur.com/82Iel.png)

In this example I have been forced to use a broader bandpass filter, which has resulted in extra noise creeping into the IF data.

**My idea is that by using a time-varying filter, I can "optimise" the filter for my signal at certain time increments.** So for the above signal I might want to filter around 580-620Hz for the first 5 seconds, then 630-670Hz for the next 5 seconds. Essentially I want to minimise noise in the final IF signal.

Based on the example code you sent I have written a function that uses audiolazy to implement a static Butterworth filter on a signal.
```
def audioLazyFilter( y, rate, Wp, Ws ):
    """implement a Butterworth filter using audiolazy"""
    s, Hz = sHz(rate)
    Wp = Wp * Hz   # Bandpass range in rad/sample
    Ws = Ws * Hz   # Bandstop range in rad/sample
    order, new_wp_divpi = sig.buttord(Wp/np.pi, Ws/np.pi, gpass=dB10(.6), gstop=dB10(.1))
    ssfilt = sig.butter(order, new_wp_divpi, btype='bandpass')
    filt_butter = ZFilter(ssfilt[0].tolist(), ssfilt[1].tolist())
    return list(filt_butter(y))
```

The IF data obtained using this filter in conjunction with the Hilbert transform routine compare well to those obtained using scipy.signal:

```
AL_filtered = audioLazyFilter( ys, rate, np.array([580, 620]), np.array([570, 630]) )
SP_filtered = bandPassFilter( ys, [580, 620], rate, order=2 )

plot(ts[1:], instantaneousFrequency( SP_filtered, 1e4 ), alpha=.75, label='scipy.signal Butterworth filter')
plot(ts[1:], instantaneousFrequency( AL_filtered, 1e4 ), 'r', alpha=.75, label='audiolazy Butterworth filter')
```

![compare audiolazy with scipy.signal](https://i.stack.imgur.com/JMRzm.png)

My question now is: can I combine the audiolazy Butterworth routine with the time-varying properties you described in your original posts?
[AudioLazy](https://github.com/danilobellini/audiolazy) works natively with time-varying filters:

```
from audiolazy import sHz, white_noise, line, resonator, AudioIO

rate = 44100
s, Hz = sHz(rate)

sig = white_noise() # Endless white noise Stream

dur = 8 * s # Some few seconds of audio
freq = line(dur, 200, 800) # A lazy iterable range
bw = line(dur, 100, 240)

filt = resonator(freq * Hz, bw * Hz) # A simple bandpass filter

with AudioIO(True) as player:
  player.play(filt(sig), rate=rate)
```

You can also use it for plotting (or analysis, in general), by using `list(filt(sig))` or `filt(sig).take(inf)`. There are a lot of other resources that might be useful as well, such as applying time-varying coefficients directly in a Z-transform filter equation.

**EDIT: More information about the AudioLazy components**

The following examples were done using IPython.

Resonator is a `StrategyDict` instance, which ties many implementations in one place.

```
In [1]: from audiolazy import *

In [2]: resonator
Out[2]:
{('freq_poles_exp',): <function audiolazy.lazy_filters.freq_poles_exp>,
 ('freq_z_exp',): <function audiolazy.lazy_filters.freq_z_exp>,
 ('poles_exp',): <function audiolazy.lazy_filters.poles_exp>,
 ('z_exp',): <function audiolazy.lazy_filters.z_exp>}

In [3]: resonator.default
Out[3]: <function audiolazy.lazy_filters.poles_exp>
```

So `resonator` internally calls the `resonator.poles_exp` function, from which you can get some help:

```
In [4]: resonator.poles_exp?
Type:       function
String Form:<function poles_exp at 0x2a55b18>
File:       /usr/lib/python2.7/site-packages/audiolazy/lazy_filters.py
Definition: resonator.poles_exp(freq, bandwidth)
Docstring:
Resonator filter with 2-poles (conjugated pair) and no zeros (constant
numerator), with exponential approximation for bandwidth calculation.

Parameters
----------
freq :
  Resonant frequency in rad/sample (max gain).
bandwidth :
  Bandwidth frequency range in rad/sample following the equation:
  ``R = exp(-bandwidth / 2)``
  where R is the pole amplitude (radius).

Returns
-------
A ZFilter object.
Gain is normalized to have peak with 0 dB (1.0 amplitude).
```

So a verbose filter assignment would be

```
filt = resonator.poles_exp(freq=freq * Hz, bandwidth=bw * Hz)
```

Where the `Hz` is just a number to change the unit from Hz to rad/sample, as used in most AudioLazy components.

Let's do so with `freq = pi/4` and `bw = pi/8` (`pi` is already in the `audiolazy` namespace):

```
In [5]: filt = resonator(freq=pi/4, bandwidth=pi/8)

In [6]: filt
Out[6]:
            0.233921
------------------------------------
1 - 1.14005 * z^-1 + 0.675232 * z^-2

In [7]: type(filt)
Out[7]: audiolazy.lazy_filters.ZFilter
```

You can try using this filter instead of the one given in the first example. Another way to do so would be using the `z` object from the package. First let's find the constants for that all-poles resonator:

```
In [8]: freq, bw = pi/4, pi/8

In [9]: R = e ** (-bw / 2)

In [10]: c = cos(freq) * 2 * R / (1 + R ** 2) # AudioLazy included the cosine

In [11]: gain = (1 - R ** 2) * sqrt(1 - c ** 2)
```

The denominator can be done directly by using the `z` in the equation:

```
In [12]: denominator = 1 - 2 * R * c * z ** -1 + R ** 2 * z ** -2

In [13]: gain / denominator
Out[14]:
            0.233921
------------------------------------
1 - 1.14005 * z^-1 + 0.675232 * z^-2

In [15]: type(_) # The "_" is the last returned value in IPython
Out[15]: audiolazy.lazy_filters.ZFilter
```

**EDIT 2: About the time varying coefficients**

The filter coefficients can also be a Stream instance (which can be cast from any iterable).
```
In [16]: coeff = Stream([1, -1, 1, -1, 1, -1, 1, -1, 1, -1]) # Cast from a list

In [17]: (1 - coeff * z ** -2)(impulse()).take(inf)
Out[17]: [1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The same, given a list input instead of the `impulse()` Stream:

```
In [18]: coeff = Stream((1, -1, 1, -1, 1, -1, 1, -1, 1, -1)) # Cast from a tuple

In [19]: (1 - coeff * z ** -2)([1, 0, 0, 0, 0, 0, 0]).take(inf)
Out[19]: [1.0, 0.0, -1, 0, 0, 0, 0]
```

A NumPy 1D array is also an iterable:

```
In [20]: from numpy import array

In [21]: array_data = array([1, -1, 1, -1, 1, -1, 1, -1, 1, -1])

In [22]: coeff = Stream(array_data) # Cast from an array

In [23]: (1 - coeff * z ** -2)([0, 1, 0, 0, 0, 0, 0]).take(inf)
Out[23]: [0.0, 1.0, 0, 1, 0, 0, 0]
```

This last example shows the time-variant behaviour.

**EDIT 3: Chunked-repeat sequences behaviour**

The line function has a behaviour similar to the `numpy.linspace`, which gets the range "length" instead of "step".

```
In [24]: import numpy

In [25]: numpy.linspace(10, 20, 5) # Start, stop (included), length
Out[25]: array([ 10. ,  12.5,  15. ,  17.5,  20. ])

In [26]: numpy.linspace(10, 20, 5, endpoint=False) # Makes stop not included
Out[26]: array([ 10.,  12.,  14.,  16.,  18.])

In [27]: line(5, 10, 20).take(inf) # Length, start, stop (range-like)
Out[27]: [10.0, 12.0, 14.0, 16.0, 18.0]

In [28]: line(5, 10, 20, finish=True).take(inf) # Include the "stop"
Out[28]: [10.0, 12.5, 15.0, 17.5, 20.0]
```

With that, the filter equation has a different behaviour sample-per-sample (1-sample "chunk").
Anyhow, you can use a repeater for larger chunk sizes:

```
In [29]: five_items = _ # List from the last Out[] value

In [30]: @tostream
   ....: def repeater(sig, n):
   ....:     for el in sig:
   ....:         for _ in xrange(n):
   ....:             yield el
   ....:

In [31]: repeater(five_items, 2).take(inf)
Out[31]: [10.0, 10.0, 12.5, 12.5, 15.0, 15.0, 17.5, 17.5, 20.0, 20.0]
```

And use it in the line from the first example, so that `freq` and `bw` become:

```
chunk_size = 100
freq = repeater(line(dur / chunk_size, 200, 800), chunk_size)
bw = repeater(line(dur / chunk_size, 100, 240), chunk_size)
```

**EDIT 4: Emulating time-varying filters/coefficients from LTI filters using time-varying gain/envelope**

Another way around would be using different "weights" for two different filtered versions of the signal, and making some "crossfade" math with the signal, something like:

```
signal = thub(sig, 2) # T-Hub is a T (tee) auto-copy
filt1(signal) * line(dur, 0, 1) + filt2(signal) * line(dur, 1, 0)
```

This would apply a linear envelope (from 0 to 1 and from 1 to 0) to two different filtered versions of the same signal. If `thub` looks confusing, try `sig1, sig2 = tee(sig, 2)` and apply `filt(sig1)` and `filt(sig2)` instead; these should do the same.

**EDIT 5: Time-variant Butterworth filter**

I spent the last hours trying to make that Butterworth filter behave as in your example, imposing `order = 2` and giving the half-power bandwidth (~3dB) directly. I've done four examples, the code is [in this Gist](https://gist.github.com/danilobellini/6419374), and I've updated AudioLazy to include a `gauss_noise` Gaussian-distributed noise stream. Please note that the code in the gist is not optimized at all; it was written only to work in this particular case, and the chirp example makes it really slow due to a "per sample" coefficient-finding behaviour.
The instantaneous frequency can be obtained from the [filtered] data in rad/sample with:

```
diff(unwrap(phase(hilbert(filtered_data))))
```

where `diff = 1 - z ** -1` or another approach to find derivatives in discrete time, `hilbert` is the function from `scipy.signal` that gives us the analytical signal (the Discrete Hilbert Transform is the imaginary part of its result) and the other two are helper functions from AudioLazy.

This is what happens when Butterworth changes its coefficients abruptly while keeping its memory, without noise:

![variable_butterworth_abrupt_pure_sinusoid.png](https://i.stack.imgur.com/G71yH.png)

An oscillatory behaviour is noticeable in this transition. You can use a moving median to "smooth" that in the lower frequency side while keeping the abrupt transition, but that won't work with the higher frequency. Well, that was what we would expect from a perfect sinusoid, but with noise (a LOT of noise, the Gaussian has a standard deviation equal to the sinusoid amplitude), it becomes:

![variable_butterworth_abrupt_noisy.png](https://i.stack.imgur.com/WkH5X.png)

I then tried to do the same with a chirp, precisely this:

![variable_butterworth_pure_sinusoid.png](https://i.stack.imgur.com/OisIp.png)

This shows a strange behaviour when filtering with the lower bandwidth, at the top frequency. And with the additive noise:

![variable_butterworth_noisy.png](https://i.stack.imgur.com/D2xCU.png)

The code in the gist also plays this last noisy chirp via `AudioIO().play`.

**EDIT 6: Time-variant resonator filter**

I've added to [the same Gist](https://gist.github.com/danilobellini/6419374) an example using resonators instead of Butterworth. They're in pure Python and aren't optimized, but they perform faster than calling `butter` for each sample during a chirp, and they are far easier to implement, as all the `resonator` strategies accept Stream instances as valid inputs.
Here are the plots for a cascade of two resonators (i.e., a 2nd order filter):

![reson_2_abrupt_pure_sinusoid.png](https://i.stack.imgur.com/AypzD.png)

![reson_2_abrupt_noisy.png](https://i.stack.imgur.com/WM411.png)

![reson_2_pure_sinusoid.png](https://i.stack.imgur.com/3XKAc.png)

![reson_2_noisy.png](https://i.stack.imgur.com/5gcrm.png)

And the same for a cascade of three resonators (i.e., a 3rd order filter):

![reson_3_abrupt_pure_sinusoid.png](https://i.stack.imgur.com/NK40J.png)

![reson_3_abrupt_noisy.png](https://i.stack.imgur.com/YqesR.png)

![reson_3_pure_sinusoid.png](https://i.stack.imgur.com/bQTD5.png)

![reson_3_noisy.png](https://i.stack.imgur.com/jytkp.png)

These resonators have gain equal to 1 (0 dB) at the center frequency, and that oscillation pattern from the "Abrupt pure sinusoid" plots in the transition happens even without any filtering at all.
If you're extracting portions of your signal using slices, then you're effectively windowing your data with a rectangular window, which "rings" at the edges due to the sudden discontinuities. One way to fix this is by using a window that rings less, like a Hanning window:

```
import numpy as np

signal = np.random.randn(222)
length = 50
window = np.hanning(length)

for i in range(0, len(signal)-length, length):
    do_filtering(signal[i:i+length] * window)
```

More about windows: <http://en.m.wikipedia.org/wiki/Window_function>
Applying time-variant filter in Python
[ "", "python", "filtering", "signal-processing", "" ]
Using SqlBulkCopy and getting this exception: > Received an invalid column length from the bcp client for colid 30. I've been banging my head against this one for hours. I know what row is having the issue, but I don't know which column "colid 30" is. There are 178 columns in the data table. All values seem to be correct and I don't see any that are longer than any column data types in my database. This database holds property listings and currently has over 3 million records, all of which are just fine. Is there a way to pinpoint what colid 30 is? Or is there a way to view the actual SQL that the bcp is submitting to the database?
I hope this helps solve someone else's issues as well.

The error was because one of the string/varchar fields in the datatable had a semicolon ";" in its value. Apparently you need to manually escape these before doing the insert! I looped through all rows/columns and did:

```
string.Replace(";", "CHAR(59)");
```

After that, everything inserted smoothly.
Check the size of the columns in the table into which you are doing the bulk insert. The varchar or other string columns might need to be extended, e.g. increase the size from 30 to 50:

```
ALTER TABLE [dbo].[TableName] ALTER COLUMN [ColumnName] Varchar(50)
```
SqlBulkCopy exception, find the colid
[ "", "sql", "sqlbulkcopy", "" ]
I have a list of list of lists like this:

```
matches = [[['rootrank', 'Root'], ['domain', 'Bacteria'], ['phylum', 'Firmicutes'],
            ['class', 'Clostridia'], ['order', 'Clostridiales'],
            ['family', 'Lachnospiraceae'], ['genus', 'Lachnospira']],
           [['rootrank', 'Root'], ['domain', 'Bacteria'], ['phylum', '"Proteobacteria"'],
            ['class', 'Gammaproteobacteria'], ['order', '"Vibrionales"'],
            ['family', 'Vibrionaceae'], ['genus', 'Catenococcus']],
           [['rootrank', 'Root'], ['domain', 'Archaea'], ['phylum', '"Euryarchaeota"'],
            ['class', '"Methanomicrobia"'], ['order', 'Methanomicrobiales'],
            ['family', 'Methanomicrobiaceae'], ['genus', 'Methanoplanus']]]
```

And I want to construct a phylogenetic tree from them. I wrote a node class like so (based partially on [this code](http://infohost.nmt.edu/~shipman/xnomo/tree.py)):

```
class Node(object):
    """Generic n-ary tree node object

    Children are additive; no provision for deleting them."""

    def __init__(self, parent, category=None, name=None):
        self.parent = parent
        self.category = category
        self.name = name
        self.childList = []
        if parent is None:
            self.birthOrder = 0
        else:
            self.birthOrder = len(parent.childList)
            parent.childList.append(self)

    def fullPath(self):
        """Returns a list of children from root to self"""
        result = []
        parent = self.parent
        kid = self
        while parent:
            result.insert(0, kid)
            parent, kid = parent.parent, parent
        return result

    def ID(self):
        return '{0}|{1}'.format(self.category, self.name)
```

And then I try to construct my tree like this:

```
node = None
for match in matches:
    for branch in match:
        category, name = branch
        node = Node(node, category, name)
    print [n.ID() for n in node.fullPath()]
```

This works for the first match, but when I start with the second match it is appended at the end of the tree instead of starting again at the top. How would I do that? I tried some variations on searching for the ID, but I can't get it to work.
The issue is that `node` is always the bottommost node in the tree, and you are always appending to that node. You need to store the root node. Since `['rootrank', 'Root']` appears at the beginning of each of the lists, I'd recommend pulling that out and using it as the root. So you can do something like:

```
rootnode = Node(None, 'rootrank', 'Root')
for match in matches:
    node = rootnode
    for branch in match:
        category, name = branch
        node = Node(node, category, name)
    print [n.ID() for n in node.fullPath()]
```

This will make the `matches` list more readable, and gives the expected output.
I would highly recommend using a phylogenetics library like [Dendropy](http://pythonhosted.org/DendroPy/). The standard way of writing phylogenetic trees is with the Newick format (parenthetical statements like ((A,B),C)). If you use Dendropy, reading that tree would be as simple as

```
>>> import dendropy
>>> tree1 = dendropy.Tree.get_from_string("((A,B),(C,D))", schema="newick")
```

or to read from a stream

```
>>> tree1 = dendropy.Tree(stream=open("mle.tre"), schema="newick")
```

The creator of the library maintains a nice [tutorial](http://pythonhosted.org/DendroPy/tutorial/index.html) too.
Constructing a phylogentic tree
[ "", "python", "tree", "bioinformatics", "phylogeny", "" ]
First off, do you see a problem with this code?

```
SELECT num, surname, firstname, ward
FROM doctor, ward
WHERE num != consultant;

       NUM SURNAME    FIRSTNAME  W
---------- ---------- ---------- -
       203 Black      Peter      A
       574 Bisi       Mavis      B
       461 Boyne      Steve      B
       530 Clark      Nicola     C
       405 Mizzi      Nicola     A
       501 Mount      Mavis      A
       203 Black      Peter      A

C NAME       CONSULTANT
- ---------- ----------
A Surgical          203
B Paediatric        574
C Medical           530
```

I expected the output to be something like:

```
461 Boyne      Steve      B
405 Mizzi      Nicola     A
501 Mount      Mavis      A
```

I was thinking the rows that aren't equal would be displayed; however, when I execute the command the result is this:

```
       NUM SURNAME    FIRSTNAME  W
---------- ---------- ---------- -
       574 Bisi       Mavis      B
       461 Boyne      Steve      B
       530 Clark      Nicola     C
       405 Mizzi      Nicola     A
       501 Mount      Mavis      A
       203 Black      Peter      A
       461 Boyne      Steve      B
       530 Clark      Nicola     C
       405 Mizzi      Nicola     A
       501 Mount      Mavis      A
       203 Black      Peter      A

       NUM SURNAME    FIRSTNAME  W
---------- ---------- ---------- -
       203 Black      Peter      A
       574 Bisi       Mavis      B
       461 Boyne      Steve      B
       405 Mizzi      Nicola     A
       501 Mount      Mavis      A
       203 Black      Peter      A
```

I am rather fresh to Oracle so it might be a noob mistake, but any assistance would be great.
**[Here is the SQLFiddle Demo](http://www.sqlfiddle.com/#!2/cef82/2)**

I think the best way to achieve your output is using `not exists`.

**Below is the query which you can try:**

```
SELECT *
FROM doctor
WHERE not exists (select 1 from ward where num = consultant)
```
You need a left outer join:

```
SELECT num, surname, firstname, ward
FROM doctor left outer join
     ward
     on num = consultant
WHERE consultant is null;
```

(Note that the check has to be on a column from `ward`, such as `consultant`; after a left outer join those columns are `NULL` for the unmatched rows.)

The problem with your query is that you are doing a Cartesian product (all combinations) between `doctor` and `ward`. Then you are choosing the rows where the two values are different.

An alternative way to write the above may be clearer:

```
select d.*
from doctor d
where d.num not in (select consultant from ward);
```
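To see concretely what the original query does, here is a small plain-Python mimic using just the `num` and `consultant` values from the question (an illustration only, not SQL):

```python
from itertools import product

doctor_nums = [203, 574, 461, 530, 405, 501]   # doctor.num
consultants = [203, 574, 530]                  # ward.consultant

# What `FROM doctor, ward WHERE num != consultant` does: every
# (doctor, ward) combination where the two values differ.
cartesian = [(n, c) for n, c in product(doctor_nums, consultants) if n != c]
print(len(cartesian))  # 15 rows: 6 * 3 combinations minus the 3 equal pairs

# What was actually wanted: doctors that match no consultant at all.
anti_join = [n for n in doctor_nums if n not in consultants]
print(anti_join)  # [461, 405, 501]
```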
Not Equals (!=) not operating as expected
[ "", "sql", "oracle", "" ]
I'm learning MySQL and PHP (running XAMPP and also using HeidiSQL) and have a live project for work in which I'm trying to use them instead of the gazillion spreadsheets where the information is currently located. I want to import 1,000+ rows into a table (tbl\_searches) where one of the columns is a string (contract\_no). Information not in the spreadsheet required by tbl\_searches includes search\_id (the PK, which is AUTO\_INCREMENT) and contract\_id. So the only field I am really missing is contract\_id.

I have a table (tbl\_contracts) that contains contract\_id and contract\_no. So I think I can have the import use the string contract\_no to reference that table to grab the contract\_id for the contract\_no, but I don't know how.

[EDIT] I forgot to mention I have successfully imported the info using HeidiSQL after I exported tbl\_contracts to Excel and then used the Excel VLOOKUP function, but that ended up yielding incorrect data somehow.
You can do it like this ``` LOAD DATA LOCAL INFILE '/path/to/your/file.csv' INTO TABLE table1 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' -- or '\r\n' if the file has been prepared on Windows (@field1, @contract_no, @field2, @field3,...) SET column1 = @field1, contract_id = (SELECT contract_id FROM tbl_contracts WHERE contract_no = @contract_no LIMIT 1), column2 = @field2, column3 = @field3 ... ```
try something like this: (I am assuming that you have data in tbl\_contracts)

```
<?php
$handle = fopen("data_for_table_searches.csv", "r");
while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) { // get CSV data from your file; args are handle, max line length, delimiter
    $contract_id = query("SELECT contract_id FROM tbl_contracts WHERE contract_number = " . $data[<row for contract number>]); // whatever is the equivalent in heidi SQL, to get contract id
    query("INSERT INTO tbl_searches values($contract_id, data[0], data[1], data[2],...)"); // whatever is the equivalent in heidi SQL, insert data, including contract id into tbl_searches
}
fclose($handle);
?>
```
Import CSV Pulling One Column Field from Existing Table
[ "", "mysql", "sql", "" ]
For a programming language documentation project written in reStructuredText and rendered to HTML with Sphinx, I want to group my functions into logical groups like: String (all string functions), Web (all web-related functions), List (anything to do with list handling), etc.. Now, since functions can be members of several groups I want to add tags in some way, just like you would to blog posts. It would be really neat if there were a Sphinx extension (or way of using Domains for example) to add the tags and then generate a page per tag referencing all those functions, an overview of all tags and a cross-reference at the bottom of each function page. Is this feasible and if so, how? Example: ``` substring ========= **substring (**\ *<string,number>* **text,** *number* **start,** *number* **end*)** Description ----------- Returns the substring of string ``text`` between integer positions ``start`` and position ``end``. The first character in the string is numbered 0. The last character returned by ``substring`` is the character before position ``end``. Optionally ``end`` can be left out, which means the returned string will end at the last position of ``text``. Example ------- Executing the following code: :: log(substring("Welcome to our site!", 0, 7)); log(substring("Welcome to our site!", 0)); will print: :: Welcome Welcome to our site! Tags ---- String ```
I've solved this with some custom preprocessing and a custom directive. My personal website is made with Sphinx, as is my weblog. And a weblog means tags. First the **custom Sphinx directive** "tags" that I use like this: ``` My blog entry header ==================== .. tags:: python, django Bla bla bla bla ``` The directive itself translates itself to a bunch of relative links of the form `../../tags/python.html`, which works because the blog entries are always in `yyyy/mm/dd/` directories. Second a **small preprocessing script** that I call from the Sphinx makefile. This script simply generates a `tags/TAGNAME.txt` file. Sphinx processes it as a regular Sphinx file, so you only have to generate some valid restructured text. For instance: ``` python ###### .. toctree:: :maxdepth: 1 2013-08-23 Praise for github pull requests <../2013/08/23/praise-for-pull-requests.txt> 2013-08-21 How to say ``[:]`` programmatically in Python <../2013/08/21/programmatical-all-range.txt> 2013-08-15 Handy tracebacks instead of uninformative segfaults <../2013/08/15/handy-tracebacks-with-faulthandler.txt> ``` So the core idea is to generate the tag files and re-use as much regular Sphinx behavior as possible. (I use the same approach for `index.txt`, `yyyy/index.txt`, `yyyy/mm/index.txt` and so on). In case you need some example code: <https://github.com/reinout/reinout.vanrees.org/blob/master/rvo/weblog.py>
You can make use of indexing feature of sphinx. ReST: ``` .. index:: BNF, grammar, syntax, notation Some rest goes here. ``` conf.py: ``` html_use_index = True ```
How to add blog style tags in reStructuredText with Sphinx
[ "", "python", "python-sphinx", "restructuredtext", "" ]
I have a Python module that uses the argparse library. How do I write tests for that section of the code base?
You should refactor your code and move the parsing to a function: ``` def parse_args(args): parser = argparse.ArgumentParser(...) parser.add_argument... # ...Create your parser as you like... return parser.parse_args(args) ``` Then in your `main` function you should just call it with: ``` parser = parse_args(sys.argv[1:]) ``` (where the first element of `sys.argv` that represents the script name is removed to not send it as an additional switch during CLI operation.) In your tests, you can then call the parser function with whatever list of arguments you want to test it with: ``` def test_parser(self): parser = parse_args(['-l', '-m']) self.assertTrue(parser.long) # ...and so on. ``` This way you'll never have to execute the code of your application just to test the parser. If you need to change and/or add options to your parser later in your application, then create a factory method: ``` def create_parser(): parser = argparse.ArgumentParser(...) parser.add_argument... # ...Create your parser as you like... return parser ``` You can later manipulate it if you want, and a test could look like: ``` class ParserTest(unittest.TestCase): def setUp(self): self.parser = create_parser() def test_something(self): parsed = self.parser.parse_args(['--something', 'test']) self.assertEqual(parsed.something, 'test') ```
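Putting the pieces above together, here is a minimal self-contained sketch showing how the extracted `parse_args` function can be exercised directly, with no real command line involved. The option names are made up for illustration; substitute your application's actual options.

```python
import argparse

def parse_args(args):
    # hypothetical parser standing in for the real application's options
    parser = argparse.ArgumentParser(description="demo")
    parser.add_argument("-l", "--long", action="store_true")
    parser.add_argument("--something", default=None)
    return parser.parse_args(args)

# in production you would call: parse_args(sys.argv[1:])
# in a test you pass any argument list you like:
parsed = parse_args(["-l", "--something", "test"])
assert parsed.long is True
assert parsed.something == "test"

# defaults are just as easy to check
assert parse_args([]).long is False
```

Because `parse_args` takes the argument list as a parameter, each test case is just a function call with a different list, and no process spawning or `sys.argv` patching is needed.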
"argparse portion" is a bit vague so this answer focuses on one part: the `parse_args` method. This is the method that interacts with your command line and gets all the passed values. Basically, you can mock what `parse_args` returns so that it doesn't need to actually get values from the command line. The `mock` [package](https://pypi.org/project/mock/) can be installed via pip for python versions 2.6-3.2. It's part of the standard library as `unittest.mock` from version 3.3 onwards. ``` import argparse try: from unittest import mock # python 3.3+ except ImportError: import mock # python 2.6-3.2 @mock.patch('argparse.ArgumentParser.parse_args', return_value=argparse.Namespace(kwarg1=value, kwarg2=value)) def test_command(mock_args): pass ``` You have to include all your command method's args in `Namespace` even if they're not passed. Give those args a value of `None`. (see the [docs](https://docs.python.org/dev/library/argparse.html#option-value-syntax)) This style is useful for quickly doing testing for cases where different values are passed for each method argument. If you opt to mock `Namespace` itself for total argparse non-reliance in your tests, make sure it behaves similarly to the actual `Namespace` class. Below is an example using the first snippet from the argparse library. 
``` # test_mock_argparse.py import argparse try: from unittest import mock # python 3.3+ except ImportError: import mock # python 2.6-3.2 def main(): parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+', help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max, help='sum the integers (default: find the max)') args = parser.parse_args() print(args) # NOTE: this is how you would check what the kwargs are if you're unsure return args.accumulate(args.integers) @mock.patch('argparse.ArgumentParser.parse_args', return_value=argparse.Namespace(accumulate=sum, integers=[1,2,3])) def test_command(mock_args): res = main() assert res == 6, "1 + 2 + 3 = 6" if __name__ == "__main__": print(main()) ```
How do you write tests for the argparse portion of a python module?
[ "", "python", "unit-testing", "argparse", "" ]
I'm trying to use regex to parse an `XML` file (in my case this seems the simplest way). For example a line might be: ``` line='<City_State>PLAINSBORO, NJ 08536-1906</City_State>' ``` To access the text for the tag City\_State, I'm using: ``` attr = re.match('>.*<', line) ``` but nothing is being returned. Can someone point out what I'm doing wrong?
You normally don't want to use `re.match`. [Quoting from the docs](http://docs.python.org/2/library/re.html#re.match): > If you want to locate a match anywhere in string, use [search()](http://docs.python.org/2/library/re.html#re.search) instead (see also [search() vs. match()](http://docs.python.org/2/library/re.html#search-vs-match)). Note: ``` >>> print re.match('>.*<', line) None >>> print re.search('>.*<', line) <_sre.SRE_Match object at 0x10f666238> >>> print re.search('>.*<', line).group(0) >PLAINSBORO, NJ 08536-1906< ``` --- Also, why parse XML with regex when you can use something like [`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/) :). ``` >>> from bs4 import BeautifulSoup as BS >>> line='<City_State>PLAINSBORO, NJ 08536-1906</City_State>' >>> soup = BS(line) >>> print soup.find('city_state').text PLAINSBORO, NJ 08536-1906 ```
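The difference is easy to verify in a few lines; using a capturing group also strips the stray angle brackets from the result:

```python
import re

line = '<City_State>PLAINSBORO, NJ 08536-1906</City_State>'

# match() anchors at position 0, where '<' (not '>') sits, so it finds nothing
assert re.match('>.*<', line) is None

# search() scans the whole string
m = re.search('>(.*)<', line)
print(m.group(0))  # >PLAINSBORO, NJ 08536-1906<
print(m.group(1))  # PLAINSBORO, NJ 08536-1906
```

Group 0 is the whole match including the delimiters; group 1 is just the text between the tags.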
Please, just use an XML parser like ElementTree ``` >>> from xml.etree import ElementTree as ET >>> line='<City_State>PLAINSBORO, NJ 08536-1906</City_State>' >>> ET.fromstring(line).text 'PLAINSBORO, NJ 08536-1906' ```
Parsing XML in Python with regex
[ "", "python", "xml", "regex", "" ]
I have a data/character\_data.py: ``` CHARACTER_A = { 1: {"level": 1, "name":"Ann", "skill_level" : 1}, 2: {"level": 2, "name":"Tom", "skill_level" : 1}} CHARACTER_B = { 1: {"level": 1, "name":"Kai", "skill_level" : 1}, 2: {"level": 2, "name":"Mel", "skill_level" : 1}} ``` In main.py, I can do this: ``` from data import character_data as character_data print character_data.CHARACTER_A[1]["name"] >>> output: Ann print character_data.CHARACTER_B[2]["name"] >>> output: Mel ``` How do I achieve this? ``` from data import character_data as character_data character_type = "CHARACTER_A" character_id = 1 print character_data.character_type[character_id]["name"] >>> correct output should be: Ann ``` I get AttributeError when try use character\_type as "CHARACTER\_A".
How about this ``` In [38]: from data import character_data as character_data In [39]: character_type = "CHARACTER_A" In [40]: character_id = 1 In [41]: getattr(character_data, character_type)[character_id]["name"] Out[41]: 'Ann' ```
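A self-contained way to try this, using `types.SimpleNamespace` as a stand-in for the imported `character_data` module (and Python 3 syntax):

```python
import types

# stand-in for the real data/character_data.py module
character_data = types.SimpleNamespace(
    CHARACTER_A={1: {"level": 1, "name": "Ann", "skill_level": 1},
                 2: {"level": 2, "name": "Tom", "skill_level": 1}},
    CHARACTER_B={1: {"level": 1, "name": "Kai", "skill_level": 1},
                 2: {"level": 2, "name": "Mel", "skill_level": 1}},
)

character_type = "CHARACTER_A"
character_id = 1
name = getattr(character_data, character_type)[character_id]["name"]
print(name)  # Ann

# getattr's third argument supplies a default instead of raising AttributeError
missing = getattr(character_data, "CHARACTER_C", None)
print(missing)  # None
```

The three-argument form of `getattr` is handy when the type string comes from user input and may not name an existing dict.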
You can use [locals()](http://docs.python.org/2/library/functions.html#locals): ``` >>> from data.character_data import CHARACTER_A, CHARACTER_B >>> character_id = 1 >>> character_type = "CHARACTER_A" >>> locals()[character_type][character_id]["name"] Ann ``` Though, think about merging `CHARACTER_A` and `CHARACTER_B` into one dict and access this dict instead of `locals()`. Also, see [Dive into Python: locals and globals](http://www.diveintopython.net/html_processing/locals_and_globals.html).
Get information from different dict by dict name
[ "", "python", "django", "" ]
So my problem is that I want a website with a single navbar.html file that I can reuse across the other pages on my site. I figured this was the best way to do it without JavaScript, which I am trying to avoid because of compatibility issues (it's a school website, so I am aiming for the best compatibility). I am writing this website for my school, so I don't know what is going to happen to it from now on, but I want to create an automated system that can scan the folder for HTML files, go into each one, find the `<nav id="mainNav">.....</nav>` element, and replace all the content between those two tags with content from one central navbar.html, so that someone can just change navbar.html, run this script, and it would auto-update the webpages. Now, I'm not asking anyone to write the code for me; I have a basic-to-intermediate understanding of Python, so if anyone could point me in the right direction, such as what the possible ways to go about this might be, that would help. Thanks for the help. Edit: I would like to only use standard libraries so that it is as easy as possible to move around; preferably it would be just one \*.py file. Edit: Please don't tell me to just go get a tool. I can't do that: the server does not have any server-side programming languages, so the best I can do is edit the local files and then upload static pages to the server. Edit: fixed it so that it was on topic. Hopefully.
I see a lot of people criticizing what you are trying to do, and I'm going to join them. This is a horrible way to implement a navbar, and a horrible procedure for 'updating' a web site. I would be very adamant with your school that this isn't a wise decision, and more time is going to be wasted in the end doing it in this archaic manner as opposed to an also-free alternative like PHP or Python. With that said, I will also help with your question, because it looks like (based on your comments and replies) implementing a templating engine or dynamic web content is beyond your control, and is thus not an option. An approach could be to have the navbar replacement script (let's call it script.py) at the top of the HTML directory, sitting beside the new navbar.html file. First, to get the files, find all files in the child directories using `os.walk`:

```
import os
for root, dirs, files in os.walk(os.path.join(os.getcwd(), "html_dir")):
    for file in files:
        pass  # do something with the files
    for dir in dirs:
        pass  # do something with the directories
```

`os.walk` already descends recursively, so this applies to all HTML files in all subfolders. Next, as far as the replacement goes, I'd recommend reading in the entirety of the navbar.html file with something simple like:

```
with open('file', 'r') as target_file:
    replacement_text = target_file.read()
```

To do the actual replacement, I'd recommend a Regular Expression. Regular expressions will vary depending on content, and may need to be tweaked depending on the exact data being worked with. They are also prone to breaking if this data changes.

```
import re
# re.DOTALL lets .*? span the line breaks inside the nav element
new_html = re.sub(r'<nav id="mainNav">.*?</nav>', replacement_text,
                  original_text, flags=re.DOTALL)
```

Note that `replacement_text` should itself contain the `<nav id="mainNav">...</nav>` tags (or you should wrap them around it), or the replaced pages will lose the element. You'll need to piece it all together, and make it work yourself, but I feel that's a solid approach to this problem. HOWEVER, I re-iterate: I shudder at thinking someone will eventually update their website using this method.
Also, back up your files every time you run this, as a failure could result in unintended replacement or deletion of text.
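Pieced together, a sketch of the whole script might look like this. The filenames and the helper name are illustrative; `re.DOTALL` is needed because the nav element spans several lines, and a function replacement keeps any backslashes in the navbar from being treated as regex backreferences.

```python
import os
import re

NAV_RE = re.compile(r'<nav id="mainNav">.*?</nav>', re.DOTALL)

def update_navbars(site_dir, navbar_file):
    """Replace every <nav id="mainNav"> element in the .html files
    under site_dir with the contents of navbar_file."""
    with open(navbar_file, "r") as f:
        nav_html = '<nav id="mainNav">' + f.read() + '</nav>'
    for root, dirs, files in os.walk(site_dir):
        for name in files:
            if not name.endswith(".html"):
                continue
            path = os.path.join(root, name)
            with open(path, "r") as f:
                original = f.read()
            # lambda replacement: the navbar text is inserted verbatim
            updated = NAV_RE.sub(lambda m: nav_html, original)
            if updated != original:
                with open(path, "w") as f:
                    f.write(updated)
```

Running `update_navbars("html_dir", "navbar.html")` after editing navbar.html would then refresh every page in place, which is the workflow described in the question.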
Use a templating engine when creating your web pages / website: an example is `Jinja` you can use a [base html template](http://jinja.pocoo.org/docs/templates/). You can do similar things in other engines as well and there are many many options: <http://wiki.python.org/moin/Templating> if you're determined not to use an engine create a base.html and you're ok with this not being very extensible / readable / etc. `stuff stuff $$$my_flag$$$ stuff stuff` and replace `$$$my_flag$$$` with a string replaceAll. Is that what you want?
Parse HTML files then edit them in python
[ "", "python", "html", "parsing", "replace", "navbar", "" ]