I have a list like this: ``` a = [['Rostock', 'Lubeck'], ['Rostock', 'Hamburg'], ['Rostock', 'Bremen'], ['Rostock', 'Hannover']] ``` I want to create a list with only the unique elements: ``` result = ['Rostock', 'Lubeck', 'Hamburg', 'Bremen', 'Hannover'] ``` How can I do that? Thanks
Assuming order matters ``` >>> a = [['Rostock', 'Lubeck'], ['Rostock', 'Hamburg'], ['Rostock', 'Bremen'], ['Rostock', 'Hannover']] >>> from collections import OrderedDict >>> list(OrderedDict.fromkeys(y for x in a for y in x)) ['Rostock', 'Lubeck', 'Hamburg', 'Bremen', 'Hannover'] ``` or equivalently: ``` >>> from itertools import chain >>> list(OrderedDict.fromkeys(chain.from_iterable(a))) ```
If order doesn't matter - use set: ``` a = [['Rostock', 'Lubeck'], ['Rostock', 'Hamburg'], ['Rostock', 'Bremen'], ['Rostock', 'Hannover']] print list(set(y for x in a for y in x)) # prints ['Rostock', 'Hannover', 'Bremen', 'Lubeck', 'Hamburg'] ```
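Since Python 3.7 guarantees insertion order for plain dicts, the `OrderedDict` approach in the first answer can be written with `dict.fromkeys` directly; a small sketch using the question's data:

```python
# Flatten the nested list, then deduplicate while keeping first-seen order.
# Plain dicts preserve insertion order on Python 3.7+.
a = [['Rostock', 'Lubeck'], ['Rostock', 'Hamburg'],
     ['Rostock', 'Bremen'], ['Rostock', 'Hannover']]

result = list(dict.fromkeys(city for pair in a for city in pair))
print(result)  # ['Rostock', 'Lubeck', 'Hamburg', 'Bremen', 'Hannover']
```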
How to create a list with only unique elements from this list?
[ "", "python", "list", "" ]
Tables: ``` Orders ( OrderId, int PK CustomerId, int FK to Customer, NULL allowed) Customers ( CustomerId, int PK CompanyId, int FK to Company, NULL not allowed) Companies ( CompanyId, int PK Name, nvarchar(50)) ``` I want to select all orders, no matter if they have a customer or not, and if they have a customer then also the customer's company name. If I use this query ... ``` SELECT Orders.OrderId, Customers.CustomerId, Companies.Name FROM Orders LEFT OUTER JOIN Customers ON Orders.CustomerId = Customers.CustomerId INNER JOIN Companies ON Customers.CompanyId = Companies.CompanyId ``` ... it only returns the orders that have a customer. If I replace `INNER JOIN` with `LEFT OUTER JOIN` ... ``` SELECT Orders.OrderId, Customers.CustomerId, Companies.Name FROM Orders LEFT OUTER JOIN Customers ON Orders.CustomerId = Customers.CustomerId LEFT OUTER JOIN Companies ON Customers.CompanyId = Companies.CompanyId ``` ... it works, but I don't understand why this is necessary, because the relationship between `Customers` and `Companies` is required: a customer *must* have a company. An alternative approach that also works seems to be: ``` SELECT Orders.OrderId, Customers.CustomerId, Companies.Name FROM Companies INNER JOIN Customers ON Companies.CompanyId = Customers.CompanyId RIGHT OUTER JOIN Orders ON Customers.CustomerId = Orders.CustomerId ``` This query has the number of inner and outer joins that I expect, but it is hard for me to read, because I think of the query as a query of *`orders`*, where an order is the root of the selection, not the company. The use of `RIGHT OUTER JOIN` is also rather unfamiliar to me. The last query is a small part of a query generated by the designer for SQL Server Reporting Services reports. I am trying to write the query manually, without the designer surface, because it is very overcrowded, I am having trouble maintaining the query after many changes, and more changes are expected in the future. 
So, I want to give the query a readable structure somehow. 1. Why doesn't query 1 work as I expected? 2. Is query 2 the correct solution, although (or because?) it uses two LEFT OUTER JOINs? 3. Is query 3 the correct solution? 4. Is there a better way to write the query? 5. Are there some general rules of thumb and practices for writing a query with a lot of outer and inner joins in a readable manner?
Semantically, joins are processed in the order they appear in the `from` clause. (They may not be actually executed in this order due to SQL optimizations, but the ordering is important for defining the result set.) So, when you do: ``` from orders left outer join customers inner join companies ``` (I'm leaving out the `on` clauses which are a distraction for this purpose.) The SQL is interpreted as: ``` from (orders left outer join customers) inner join companies ``` You are doing an `inner join`, so the values must appear on both sides. In your case, this undoes the effect of the `left outer join`. You want: ``` from orders left outer join (customers inner join companies) ``` Here are some solutions. My preferred solution is to use `left outer join` for all the joins. In fact, for readability and maintainability, almost every query I write is going to be only `left outer join` or `[inner] join` connecting the tables. Having to parse through the query to understand the semantics of the joins seems to be an unnecessary effort, if you can write the queries in a consistent form. Another solution is to use parentheses: ``` from orders left outer join (customers inner join companies) ``` Another solution is a subquery: ``` from orders left outer join (select . . . from customers inner join companies) cc ```
1. Query 1: Because you have an `INNER JOIN` on Customers, the `LEFT JOIN` is effectively an `INNER JOIN`. 2. Query 2 is correct because you want to see all Orders regardless of the data quality / condition. 3. I like to avoid `RIGHT JOIN`s in general as it is confusing to some developers and is therefore less readable. You can generally write your query in such a way to do the same thing with effective use of `LEFT JOIN`s. 4. Query 2 is my recommendation for something simple like this. 5. One general rule... Once you introduce an `OUTER JOIN` into your query, the `JOIN`s that follow should also be `OUTER JOIN`s. Otherwise, you *MAY* exclude rows you did not intend.
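Both answers can be checked with a small runnable sketch; here sqlite3 stands in for SQL Server (the left-to-right join associativity is the same), with hypothetical sample rows:

```python
import sqlite3

# Minimal reproduction of why the INNER JOIN in query 1 silently
# discards orders that have no customer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Companies (CompanyId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Customers (CustomerId INTEGER PRIMARY KEY, CompanyId INTEGER NOT NULL);
    CREATE TABLE Orders    (OrderId INTEGER PRIMARY KEY, CustomerId INTEGER);
    INSERT INTO Companies VALUES (1, 'Acme');
    INSERT INTO Customers VALUES (10, 1);
    INSERT INTO Orders VALUES (100, 10);   -- order with a customer
    INSERT INTO Orders VALUES (101, NULL); -- order without a customer
""")

q1 = """SELECT o.OrderId FROM Orders o
        LEFT OUTER JOIN Customers c ON o.CustomerId = c.CustomerId
        INNER JOIN Companies co ON c.CompanyId = co.CompanyId"""
q2 = """SELECT o.OrderId FROM Orders o
        LEFT OUTER JOIN Customers c ON o.CustomerId = c.CustomerId
        LEFT OUTER JOIN Companies co ON c.CompanyId = co.CompanyId"""

print(len(conn.execute(q1).fetchall()))  # 1 -- the customer-less order is gone
print(len(conn.execute(q2).fetchall()))  # 2 -- both orders survive
```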
Problems with INNER JOIN and LEFT/RIGHT OUTER JOIN
[ "", "sql", "sql-server", "t-sql", "join", "" ]
I'm trying to access a shared C library in Python with `ctypes` on Mac OS X 10.6.8 with Python 2.7.4. To do this, I need to `#include <Python.h>` in my C code. If I try to compile a C script that only has that one include statement in it, call it "sample.c", I get: ``` $ gcc -shared -o sample.so sample.c sample.c:1:20: error: Python.h: No such file or directory ``` Since I'm running Mac 10.6, I have Xcode 3.2.6, the latest version available on this iteration of OS X without paying to upgrade to 10.7 and getting Xcode 4. Is there a way to get the Python header file without upgrading my OS?
Python is a framework on Mac OS X, so you need to: ``` #include <Python/Python.h> ``` You also need to call `gcc` with the `-framework` argument so the linker can actually resolve the Python symbols: ``` gcc -shared -o sample.so sample.c -framework Python ```
I'm not sure about 10.6.8, but `Python.h` should be in ``` /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 ``` if you installed the official python.org binary. Try adding ``` -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 ``` to your `gcc` command and see if that works.
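As a supplementary note (not from either answer), the running interpreter can be asked for its own header directory, which avoids hard-coding a framework path; a sketch using the stdlib `sysconfig` module:

```python
# Ask the interpreter where its own Python.h lives, rather than guessing
# the framework layout for a particular OS X version.
import sysconfig

include_dir = sysconfig.get_paths()["include"]
print(include_dir)
# Then compile with: gcc -shared -o sample.so sample.c -I<include_dir>
```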
Python.h header file missing on Mac OS X 10.6
[ "", "python", "xcode", "macos", "osx-snow-leopard", "header-files", "" ]
I have a table in Microsoft SQL Server 2008 R2, which came from an outside source. The columns of the table are as follows: `ID, Year, DAY, HOUR & Value`, where `DAY` contains the day of the year (from 1 to 366) and `HOUR` represents the hour of the day (from 0 to 23). I wish to create a new datetime column and populate it with the datetime created from the data in the `Year, DAY & HOUR` columns. What SQL function should I use to create the `DateTime` from its parts? SQL Server 2012 has [`DATETIMEFROMPARTS`](http://msdn.microsoft.com/en-us/library/hh213233.aspx), but there is no equivalent function in [SQL Server 2008 R2](http://msdn.microsoft.com/en-us/library/ms186724%28v=sql.105%29.aspx)
``` declare @Year int = 2003 declare @Day int = 100 declare @Hour int = 13 select dateadd(hour, @Hour, dateadd(dayofyear, @Day - 1, dateadd(year, @Year - 1900, 0))) ```
You could use the following instead: ``` DECLARE @day int, @month int, @year int SELECT @day = 4, @month = 3, @year = 2011 SELECT dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1) ``` That will give you your date (though it is true that SQL 2012 gets it right, finally!) To use it when you have year, day of the year and hour of the day, use the following: ``` declare @year int, @dayofyear int, @hourofday int select @year = 2013, @dayofyear = 120, @hourofday = 12 select dateadd(hh, @hourofday, dateadd(yy, @year - 1900, dateadd(dd, @dayofyear - 1, 0))) ```
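The `DATEADD` arithmetic in both answers can be cross-checked in Python, since `datetime` does the same "start of year plus offsets" computation; a quick sketch:

```python
from datetime import datetime, timedelta

# Build a datetime from year, day-of-year and hour, mirroring the
# DATEADD chain: start at Jan 1 of the year, add (day - 1) days and the hour.
def from_parts(year, day_of_year, hour):
    return datetime(year, 1, 1) + timedelta(days=day_of_year - 1, hours=hour)

print(from_parts(2003, 100, 13))  # 2003-04-10 13:00:00
print(from_parts(2013, 120, 12))  # 2013-04-30 12:00:00
```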
Get a datetime from year, day of the year and hour in SQL Server 2008 R2
[ "", "sql", "datetime", "sql-server-2008-r2", "" ]
I've searched all over but can't seem to find how to calculate the average time between orders for each customer. I'm using Oracle SQL Developer. I've tried using the lag() function with no joy. The example data below shows a customer who bought on 3 separate occasions. The average time between all purchases is 7.5 days ((6+9)/2). ``` CustID OrderDate ------- --------- 1000000 14-AUG-12 1000000 23-AUG-12 1000000 29-AUG-12 ``` So how do I calculate that average for each customer? Any help would be much appreciated.
The key here is the [lag](http://msdn.microsoft.com/en-us/library/hh231256.aspx) analytic function (note that Oracle requires an `order by` inside the `over` clause, and the outer aggregate needs a `group by`): ``` select cust_id , avg(orderdate - lag_orderdate) as avg_time_between_orders from ( select cust_id , orderdate , lag(orderdate) over (partition by cust_id order by orderdate) as lag_orderdate from orders ) group by cust_id ```
If only the mean average is needed then I think you can get away with a simple query without window functions, by looking at the min and max dates and then dividing by the number of gaps between orders: ``` SELECT CustID, (MAX(OrderDate) - MIN(OrderDate)) / (COUNT(*) - 1) FROM Orders GROUP BY CustID HAVING COUNT(*) > 1 ```
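Both answers should agree for the question's sample data; a small Python sanity check that the mean of the LAG-style gaps equals the `(max - min) / (n - 1)` shortcut:

```python
from datetime import date

# The question's sample orders for customer 1000000.
orders = [date(2012, 8, 14), date(2012, 8, 23), date(2012, 8, 29)]

# LAG-style: average of the individual gaps between consecutive orders.
gaps = [(b - a).days for a, b in zip(orders, orders[1:])]  # [9, 6]
avg_of_gaps = sum(gaps) / len(gaps)

# Shortcut: total span divided by the number of gaps.
shortcut = (max(orders) - min(orders)).days / (len(orders) - 1)

print(avg_of_gaps, shortcut)  # 7.5 7.5
```

The two are equal because the inner terms of the gap sum telescope away, leaving just `max - min`.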
Calculating the average time between orders for each customer
[ "", "sql", "oracle", "" ]
I'm writing an application which needs to run a series of tasks in parallel and then a single task with the results of all the tasks run: ``` @celery.task def power(value, expo): return value ** expo @celery.task def amass(values): print str(values) ``` It's a very contrived and oversimplified example, but hopefully the point comes across well. Basically, I have *many* items which need to run through `power`, but I only want to run `amass` on the results from all of the tasks. All of this should happen asynchronously, and I don't need anything back from the `amass` method. Does anyone know how to set this up in celery so that everything is executed asynchronously and a single callback with a list of the results is called after all is said and done? I've set up this example to run with a `chord` as Alexander Afanasiev recommended: ``` from time import sleep import random tasks = [] for i in xrange(10): tasks.append(power.s(i, 2)) sleep(random.randint(10, 1000) / 1000.0) # sleep for 10-1000ms callback = amass.s() r = chord(tasks)(callback) ``` Unfortunately, in the above example, all tasks in `tasks` are started only when the `chord` method is called. Is there a way that each task can start separately and then I could add a callback to the group to run when everything has finished?
Here's a solution which worked for my purposes: **tasks.py**: ``` from time import sleep import random @celery.task def power(value, expo): sleep(random.randint(10, 1000) / 1000.0) # sleep for 10-1000ms return value ** expo @celery.task def amass(results, tasks): completed_tasks = [] for task in tasks: if task.ready(): completed_tasks.append(task) results.append(task.get()) # remove completed tasks tasks = list(set(tasks) - set(completed_tasks)) if len(tasks) > 0: # resend the task to execute at least 1 second from now # (delay() doesn't accept countdown, so use apply_async) amass.apply_async((results, tasks), countdown=1) else: # we're done print results ``` **Use Case:** ``` tasks = [] for i in xrange(10): tasks.append(power.delay(i, 2)) amass.delay([], tasks) ``` What this *should* do is start all of the tasks as soon as possible asynchronously. Once they've all been posted to the queue, the `amass` task will also be posted to the queue. The amass task will keep reposting itself until all of the other tasks have been completed.
Celery has [plenty of tools](http://docs.celeryproject.org/en/latest/userguide/canvas.html) for most workflows you can imagine. It seems you need to make use of [chord](http://docs.celeryproject.org/en/latest/userguide/canvas.html#the-primitives). Here's a quote from the docs: > A chord is just like a group but with a callback. A chord consists of > a header group and a body, where the body is a task that should > execute after all of the tasks in the header are complete.
Running a task after all tasks have been completed
[ "", "python", "rabbitmq", "celery", "" ]
I have the following SQL query: ``` DECLARE @XMLDOC XML SET @XMLDOC = '<Feed><Product><Name>Foo</Name></Product></Feed>' SELECT x.u.value('Name[1]', 'varchar(100)') as Name from @XMLDOC.nodes('/Feed/Product') x(u) ``` This returns: ``` Name ---- Foo ``` However, if my `<Feed>` node has an `xmlns` attribute, then this doesn't return any results: ``` DECLARE @XMLDOC XML SET @XMLDOC = '<Feed xmlns="bar"><Product><Name>Foo</Name></Product></Feed>' SELECT x.u.value('Name[1]', 'varchar(100)') as Name from @XMLDOC.nodes('/Feed/Product') x(u) ``` Returns: ``` Name ---- ``` This only happens if I have an `xmlns` attribute, anything else works fine. Why is this, and how can I modify my SQL query to return results regardless of the attributes?
If your XML document has XML namespaces, then you need to consider those in your queries! So if your XML looks like your sample, then you need: ``` -- define the default XML namespace to use ;WITH XMLNAMESPACES(DEFAULT 'bar') SELECT x.u.value('Name[1]', 'varchar(100)') as Name from @XMLDOC.nodes('/Feed/Product') x(u) ``` Or if you prefer to have explicit control over which XML namespace to use (e.g. if you have multiple), use XML namespace prefixes: ``` -- define the XML namespace ;WITH XMLNAMESPACES('bar' as b) SELECT x.u.value('b:Name[1]', 'varchar(100)') as Name from @XMLDOC.nodes('/b:Feed/b:Product') x(u) ```
As well as the `XMLNAMESPACES` solution, you can also use the hideously bulky `local-name` syntax... ``` DECLARE @XMLDOC XML SET @XMLDOC = '<Feed xmlns="bar"><Product><Name>Foo</Name></Product></Feed>' SELECT x.u.value('*[local-name() = "Name"][1]', 'varchar(100)') as Name from @XMLDOC.nodes('/*[local-name() = "Feed"]/*[local-name() = "Product"]') x(u) ```
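The same pitfall exists outside SQL Server; Python's `xml.etree.ElementTree`, for instance, also refuses to match un-prefixed names once a default `xmlns` is in play, which makes for a quick way to experiment with the question's document:

```python
import xml.etree.ElementTree as ET

# With a default xmlns, every element lives in that namespace, so plain
# names stop matching -- you must qualify them with {bar}.
root = ET.fromstring('<Feed xmlns="bar"><Product><Name>Foo</Name></Product></Feed>')

print(root.find('Product/Name'))                  # None -- namespace ignored
print(root.find('{bar}Product/{bar}Name').text)   # Foo
```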
Querying XML data types which have xmlns node attributes
[ "", "sql", "sql-server", "xml", "sql-server-2008", "xml-namespaces", "" ]
I have a string: ``` '[text], text, {text}, itextm' ``` I should end up with: ``` '[replaced], text, {replaced}, ireplacedm' ``` How can I do this with the Python `re` module? Thanks!
``` In [1]: s='[text], text, {text}, itextm' In [2]: import re In [3]: re.sub(r'([[{])[^\]}]*',r'\1replaced',s) Out[3]: '[replaced], text, {replaced}, itextm' ``` **EDIT** ah... I didn't notice the last part of your example, the `ireplacedm`! now this should work for your example: ``` In [8]: re.sub(r'([^ ,])text([^ ,])',r'\1replaced\2',s) Out[8]: '[replaced], text, {replaced}, ireplacedm' ```
This is not perfect since it will not work properly if you have a setup like `[term}more]`, but it at least works for your example. ``` re.sub(r'([[{]).*?([\]}])', r'\1replaced\2', s) ``` (using `s` for the input string, to avoid shadowing the builtin `str`)
Replace the content in between using python regex
[ "", "python", "regex", "" ]
I have a table of Colour names ``` Red Yellow Green Purple ``` I also have a Table of Coloured Tops ``` RedandGreen Red YellowandRedandGreen Violet ``` What I want to do is create a third table where if I select Red it will show me all the tops with Red in and the other thing I want it to do is if I Select Purple it needs to bring back Violet. Is this possible? Thanks W
In my opinion, it is not necessary to create a third table to show this information. If you link both tables using a correct foreign key, it will work. See: ``` create table color ( id_color int not null, ds_color varchar(50) null, primary key (id_color) ) create table color_tops ( id_color int not null, id_colortop int not null, ds_colortop varchar(50) null, primary key (id_colortop), foreign key (id_color) references color(id_color) ) select color.ds_color, color_tops.ds_colortop from color inner join color_tops on (color.id_color = color_tops.id_color) ```
Yes, it's possible, but it's going to be horribly impractical. Especially when you start adding more and more shades of e.g. purple, like mauve, lavender, Lilac, etc. It's not a good idea. What you will have to do is create a third table called colors\_to\_tops and when entering data, assign each top to every colour you want it returned for. Colors table ``` id|color -------- 1|Red 2|Yellow 3|Green 4|Purple ``` Colored Tops ``` id|TopName -------------- 1|RedandGreen 2|Red 3|YellowandRedandGreen 4|Violet ``` colors\_to\_tops ``` id|colorID|topID ---------------- 1| 1| 1 2| 1| 2 3| 1| 3 4| 2| 3 5| 3| 1 6| 3| 3 7| 4| 4 ``` SQL: ``` SELECT DISTINCT ct.* FROM ColoredTops AS ct LEFT JOIN Colors_To_Tops AS ctt ON ct.id = ctt.topID LEFT JOIN Colors AS c ON ctt.colorID = c.id WHERE c.color = @color ```
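The junction-table design from the second answer can be tried end to end with sqlite3 (a plain `JOIN` is used here instead of the answer's `LEFT JOIN`s, since only matching rows are wanted):

```python
import sqlite3

# Build the three tables from the answer and run the lookup query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Colors (id INTEGER PRIMARY KEY, color TEXT);
    CREATE TABLE ColoredTops (id INTEGER PRIMARY KEY, TopName TEXT);
    CREATE TABLE Colors_To_Tops (id INTEGER PRIMARY KEY, colorID INTEGER, topID INTEGER);
    INSERT INTO Colors VALUES (1,'Red'),(2,'Yellow'),(3,'Green'),(4,'Purple');
    INSERT INTO ColoredTops VALUES (1,'RedandGreen'),(2,'Red'),
                                   (3,'YellowandRedandGreen'),(4,'Violet');
    INSERT INTO Colors_To_Tops (colorID, topID) VALUES
        (1,1),(1,2),(1,3),(2,3),(3,1),(3,3),(4,4);
""")

query = """
    SELECT DISTINCT ct.TopName
    FROM ColoredTops ct
    JOIN Colors_To_Tops ctt ON ct.id = ctt.topID
    JOIN Colors c ON ctt.colorID = c.id
    WHERE c.color = ?
"""
print([r[0] for r in conn.execute(query, ('Purple',))])  # ['Violet']
print(sorted(r[0] for r in conn.execute(query, ('Red',))))
# ['Red', 'RedandGreen', 'YellowandRedandGreen']
```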
How to create a third table based on two other tables
[ "", "sql", "sql-server", "t-sql", "" ]
Why is it that in Python integers and floats are, without being evaluated in a boolean context, equivalent to True? Other data types have to be evaluated via an operator or bool().
That's not `True`: ``` >>> print("True" if 1 else "False") True >>> print("True" if 0 else "False") False >>> print("True" if 0.0 else "False") False >>> print("True" if 123.456 else "False") True >>> print("True" if "hello" else "False") True >>> print("True" if "" else "False") False >>> print("True" if [1,2,3] else "False") True >>> print("True" if [] else "False") False >>> print("True" if [[]] else "False") True ``` Only non-zero numbers (or non-empty sequences/container types) evaluate to `True`.
Here is a use case - ``` >>> bool(2) True >>> bool(-3.1) True >>> bool(0) False >>> bool(0.0) False >>> bool(None) False >>> bool('') False >>> bool('0') True >>> bool('False') True >>> bool([]) False >>> bool([0]) True ``` In Python, these are `False` - * The Boolean value `False` itself * Any numerical value equal to 0 (0, 0.0 but not 2 or -3.1) * The special value `None` * Any empty sequence or collection, including the empty string(`''`, but not `'0'` or `'hi'` or `'False'`) and the empty list (`[]`, but not `[1,2, 3]` or `[0]`) Rest would evaluate to `True`. [Read more](http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/boolean.html).
Python Boolean Values
[ "", "python", "boolean", "" ]
I'm trying to walk over the contents of a directory and determine whether each item is a file or a folder. I can do the solution suggested in [this link](https://stackoverflow.com/q/12295551): ``` >>> for i in ftp.listdir(): ... lstatout=str(ftp.lstat(i)).split()[0] ... if 'd' not in lstatout: print i, 'is a file' ... ``` This works, yes. As a sidenote, I'm just taking the first character instead of `.split()`, so `str(ftp.lstat(i))[0] != 'd'`, as this appears to be faster in my admittedly non-scientific testing. But this seems extremely hacky, so I tried to check out other methods. In the SFTP attributes for a few folders, I see: ``` <SFTPAttributes: [ size=4096 uid=1002 gid=1002 mode=040755 atime=1358785383 mtime=1359475285 ]> ``` while in the same for files, I see: ``` <SFTPAttributes: [ size=72 uid=1002 gid=1002 mode=0100644 atime=1367598914 mtime=1367598914 ]> ``` So it seems that the mode is `010[permissions]` for files, and `040[permissions]` for directories (consistent in the few directories I've seen, but can't say it's universal). This seems to be a much better way to get that attribute of the item! But, when I use `ftp.stat(i).st_mode`, I get a different value – `16877` for the aforementioned directory, and `33188` for the aforementioned file. What do these mean? Is there a way I can convert that integer to the file mode? (I tried Google, but can't find anything relevant.) Alternatively, is there any good way I can determine this without relying on the `str(lstat(i))` function returning a 'd' or not-'d'?
It looks like the high bits do indeed tell you whether it is a file or a directory: [S\_ISDIR](http://docs.python.org/2/library/stat.html#stat.S_IFDIR) [S\_ISREG](http://docs.python.org/2/library/stat.html#stat.S_IFREG) ``` >>> oct(16877) '040755' >>> oct(33188) '0100644' ``` And likewise: ``` >>> int('040755', 8) 16877 >>> int('0100644', 8) 33188 ```
Paramiko's `SFTPAttributes.st_mode` can be analyzed with the helpers in Python's `stat` module: ``` import stat for file in ftp_cli.listdir_attr(path): is_dir = stat.S_ISDIR(file.st_mode) is_link = stat.S_ISLNK(file.st_mode) is_file = stat.S_ISREG(file.st_mode) ```
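The decimal values from the question (16877 and 33188) decode cleanly with the same `stat` helpers, which shows why they looked unrelated to the octal modes:

```python
import stat

# 16877 and 33188 are just the octal modes 040755 and 0100644 in decimal;
# the high bits carry the file type, which S_ISDIR/S_ISREG test.
print(oct(16877), stat.S_ISDIR(16877), stat.S_ISREG(16877))  # 0o40755 True False
print(oct(33188), stat.S_ISDIR(33188), stat.S_ISREG(33188))  # 0o100644 False True
```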
Distinguish between a file or a directory in paramiko
[ "", "python", "file", "attributes", "mode", "paramiko", "" ]
I'm trying to get the most recent data for CPU utilization for an instance (actually, several instances, but just one to start with), however the following call doesn't return any data: ``` cw = boto.cloudwatch.connect_to_region(Region) cw.get_metric_statistics( 300, datetime.datetime.now() - datetime.timedelta(seconds=600), datetime.datetime.now(), 'CPUUtilization', 'AWS/EC2', 'Average', dimensions={'InstanceId':['i-11111111']} # for stats across multiple instances: # dimensions={'InstanceId':['i-11111111', 'i-22222222', 'i-33333333']} ) ``` Various posts on other sites indicate that checking the region is correct, checking that the `period` (first argument) is a multiple of 60, and (if you don't have detailed monitoring enabled) is greater than or equal to 300. I've checked all these things and I'm still not getting any data.
This is a daylight savings time / time zone issue! You need to use UTC time when receiving statistics from Cloudwatch: ``` cw = boto.cloudwatch.connect_to_region(Region) cw.get_metric_statistics( 300, datetime.datetime.utcnow() - datetime.timedelta(seconds=600), datetime.datetime.utcnow(), 'CPUUtilization', 'AWS/EC2', 'Average', dimensions={'InstanceId':['i-11111111']} ) ``` From some experimentation it also seems that specifying multiple `InstanceId` dimensions will result in data only for the last specified instance (at least if detailed monitoring is not enabled).
I was also seeing no data returned when setting units to "Megabytes", while setting units to "Bytes" returned data. Both are allowed in the [API reference](http://boto.readthedocs.org/en/latest/ref/cloudwatch.html#boto.ec2.cloudwatch.metric.Metric.query). ``` data = conn.get_metric_statistics(period=60,start_time=start,end_time=end,metric_name="NetworkOut",namespace="AWS/EC2",statistics="Average",unit="Megabytes",dimensions={'InstanceId':'XXXXXX'}) print "data length: %d"%len(data) # data length: 0 data = conn.get_metric_statistics(period=60,start_time=start,end_time=end,metric_name="NetworkOut",namespace="AWS/EC2",statistics="Average",unit="Bytes",dimensions={'InstanceId':'XXXXXX'}) print "data length: %d"%len(data) # data length: 59 ```
How do I get the most recent Cloudwatch metric data for an instance using Boto?
[ "", "python", "amazon-web-services", "boto", "amazon-cloudwatch", "" ]
Quite simply, which of the following methods is better in a `WHERE` clause to retrieve records where the `FIELD_NAME` is **NOT NULL** and **NOT Empty** ``` WHERE DATALENGTH(FIELD_NAME) > 0 ``` or ``` WHERE ISNULL(FIELD_NAME, '') <> '' ``` ### Update I have been informed that the first method gives spurious results for some types of fields... Agree?
Firstly, ``` select * from table where column <> '' ``` will give exactly the same results as ``` select * from table where isnull(column, '') <> '' ``` because records where the condition is `UNKNOWN` rather than `FALSE` will still be filtered out. I would generally go with the first option. `DATALENGTH` counts trailing spaces, which a comparison with `''` does not. It is up to you whether you *want* `' '` to compare unequal to `''`. If you do, you need `DATALENGTH`. If you don't, simply compare with `''`. Note that for `TEXT`/`NTEXT` types, comparisons are not supported, but `DATALENGTH` is.
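The first claim, that `column <> ''` already filters out NULLs because the comparison evaluates to UNKNOWN rather than TRUE, can be demonstrated with sqlite3 (note sqlite does not pad trailing spaces the way SQL Server does, so only the NULL behavior carries over):

```python
import sqlite3

# NULL <> '' evaluates to UNKNOWN, and WHERE keeps only TRUE rows,
# so both the NULL and the empty string are filtered out.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (field TEXT);
    INSERT INTO t VALUES ('hello'), (''), (NULL);
""")

rows = conn.execute("SELECT field FROM t WHERE field <> ''").fetchall()
print(rows)  # [('hello',)]
```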
`ISNULL` is the best approach instead of `DATALENGTH`.
DATALENGTH() or ISNULL() to retrieve fields that are not null and not empty
[ "", "sql", "sql-server", "" ]
I have the following tuple of dictionaries: ``` td=({'associatedFrame': None, 'cyclicModeNumber': None, 'description': 'Increment 10: Arc Length = 7.2813E-02', 'domain': 'ARC_LENGTH', 'fieldOutputs': 'Repository object', 'frameId': 10, 'frameValue': 0.0728124976158142, 'frequency': None, 'incrementNumber': 10, 'isImaginary': False, 'loadCase': None, 'mode': None}) ``` I'd like to get the value associated with the key `frameId`. I don't have much experience with tuples and dictionaries, and the questions I've found were about dictionaries of tuples, so I ask for your help. Thanks EDIT: Actually I had already tried your solution but I forgot to mention I'm using Python to get results from another program. This program has a specific organisation, so in order for it to work I had to write `td.frameId`.
If this is really what you have, it is just a dictionary. ``` td['frameId'] ``` will work. Parentheses don't make a tuple. The comma is what makes a tuple. ``` foo = (1) #integer foo = (1,) #1 element tuple foo = 1, #1 element tuple ``` Parentheses are only necessary in a few situations where the syntax would otherwise be ambiguous (like function calls). Clearly: ``` foo(1,2,3) ``` is different from: ``` foo(1,(2,3)) ``` --- If you actually have a `tuple`, then you need to index it to get the dictionary which you can index to get the item you want. ``` td[0]['frameId'] ```
That's not a tuple, it's just a quiet `dict` ... i.e. a dict in (useless) parentheses. `td['frameId']` to lookup the key. Note: `({'this': 'would be a dict in a tuple'},)`
Work with tuple of dictionaries
[ "", "python", "dictionary", "tuples", "" ]
I am caching some JSON data, and in storage it is represented as a JSON-encoded string. No work is performed on the JSON by the server before sending it to the client, other than collation of multiple cached objects, like this: ``` def get_cached_items(): item1 = cache.get(1) item2 = cache.get(2) return json.dumps(dict(item1=item1, item2=item2, msg="123")) ``` There may be other items included with the return value, in this case represented by `msg="123"`. The issue is that the cached items are double-escaped. It would behoove the library to allow a pass-through of the string without escaping it. I have looked at the documentation for [json.dumps `default` argument](http://docs.python.org/2/library/json.html#basic-usage), as it seems to be the place where one would address this, and searched on google/SO but found no useful results. It would be unfortunate, from a performance perspective, if I had to decode the JSON of each cached item to send it to the browser. It would be unfortunate from a complexity perspective to not be able to use `json.dumps`. My inclination is to write a class that stores the cached string and when the `default` handler encounters an instance of this class it uses the string without performing escaping. I have yet to figure out how to achieve this though, and I would be grateful for thoughts and assistance. **EDIT** For clarity, here is an example of the proposed `default` technique: ``` class RawJSON(object): def __init__(self, str): self.str = str class JSONEncoderWithRaw(json.JSONEncoder): def default(self, o): if isinstance(o, RawJSON): return o.str # but avoid call to `encode_basestring` (or ASCII equiv.) 
return super(JSONEncoderWithRaw, self).default(o) ``` Here is a degenerate example of the above: ``` >>> class M(): str = '' >>> m = M() >>> m.str = json.dumps(dict(x=123)) >>> json.dumps(dict(a=m), default=lambda (o): o.str) '{"a": "{\\"x\\": 123}"}' ``` The desired output would include the unescaped string `m.str`, being: ``` '{"a": {"x": 123}}' ``` It would be good if the json module did not encode/escape the return of the `default` parameter, or if same could be avoided. In the absence of a method via the `default` parameter, one may have to achieve the objective here by overloading the `encode` and `iterencode` method of `JSONEncoder`, which brings challenges in terms of complexity, interoperability, and performance.
A quick-n-dirty way is to patch `json.encoder.encode_basestring*()` functions: ``` import json class RawJson(unicode): pass # patch json.encoder module for name in ['encode_basestring', 'encode_basestring_ascii']: def encode(o, _encode=getattr(json.encoder, name)): return o if isinstance(o, RawJson) else _encode(o) setattr(json.encoder, name, encode) print(json.dumps([1, RawJson(u'["abc", 2]'), u'["def", 3]'])) # -> [1, ["abc", 2], "[\"def\", 3]"] ```
If you are caching JSON strings, you need to first *decode* them to python structures; there is no way for `json.dumps()` to distinguish between normal strings and strings that are really JSON-encoded structures: ``` return json.dumps({'item1': json.loads(item1), 'item2': json.loads(item2), 'msg': "123"}) ``` Unfortunately, there is no option to include already-converted JSON data in this; the `default` function is expected to return *Python* values. You extract data from whatever object that is passed in and return a value that *can* be converted to JSON, not a value that is already JSON itself. The only other approach I can see is to insert "template" values, then use string replacement techniques to manipulate the JSON output to replace the templates with your actual cached data: ``` json_data = json.dumps({'item1': '==item1==', 'item2': '==item2==', 'msg': "123"}) return json_data.replace('"==item1=="', item1).replace('"==item2=="', item2) ``` A third option is to cache `item1` and `item2` in non-serialized form, as a Python structure instead of a JSON string.
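The "template values" approach from the second answer is easy to verify end to end; a sketch with hypothetical cached strings:

```python
import json

# Pretend these JSON strings came straight from the cache.
item1 = json.dumps({"x": 123})
item2 = json.dumps([1, 2, 3])

# Serialize with placeholder strings, then splice the cached JSON in,
# replacing the quoted placeholders so no re-escaping happens.
payload = json.dumps({"item1": "==item1==", "item2": "==item2==", "msg": "123"})
payload = payload.replace('"==item1=="', item1).replace('"==item2=="', item2)

print(json.loads(payload))  # {'item1': {'x': 123}, 'item2': [1, 2, 3], 'msg': '123'}
```

One caveat: the placeholder text must never occur in real data, or the `replace` will corrupt the output.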
In Python, have json not escape a string
[ "", "python", "json", "" ]
Why does the function in this example terminate: ``` def func(iterable): while True: val = next(iterable) yield val ``` but if I take out the yield statement the function raises a StopIteration exception? **EDIT:** Sorry for misleading you guys. I know what generators are and how to use them. Of course when I said the function terminates I didn't mean eager evaluation of the function. I just meant that when I use the function to produce a generator: ``` gen = func(iterable) ``` in the case of *func* it works and returns a generator, but in the case of func2: ``` def func2(iterable): while True: val = next(iterable) ``` it raises StopIteration instead of returning *None* or looping forever. Let me be more specific. There is a function *[tee](http://docs.python.org/2/library/itertools.html#itertools.tee)* in **itertools** which is equivalent to: ``` def tee(iterable, n=2): it = iter(iterable) deques = [collections.deque() for i in range(n)] def gen(mydeque): while True: if not mydeque: # when the local deque is empty newval = next(it) # fetch a new value and for d in deques: # load it to all the deques d.append(newval) yield mydeque.popleft() return tuple(gen(d) for d in deques) ``` There is, in fact, some magic, because the nested function *gen* has an infinite loop without break statements. The *gen* function terminates due to a *StopIteration* exception when there are no items left in *it*. But it terminates correctly (without raising an exception), i.e. it just stops the loop. **So the question is**: where is the *StopIteration* handled?
**Note: This question (and the original part of my answer to it) are only really meaningful for Python versions prior to 3.7. The behavior that was asked about no longer happens in 3.7 and later, thanks to changes described in [PEP 479](https://www.python.org/dev/peps/pep-0479/). So this question and the original answer are only really useful as historical artifacts. After the PEP was accepted, I added an additional section at the bottom of the answer which is more relevant to modern versions of Python.** --- To answer your question about where the `StopIteration` gets caught in the `gen` generator created inside of `itertools.tee`: it doesn't. It is up to the consumer of the `tee` results to catch the exception as they iterate. First off, it's important to note that a generator function (which is any function with a `yield` statement in it, anywhere) is fundamentally different than a normal function. Instead of running the function's code when it is called, instead, you'll just get a `generator` object when you call the function. Only when you iterate over the generator will you run the code. A generator function will never finish iterating without raising `StopIteration` (unless it raises some other exception instead). `StopIteration` is the signal from the generator that it is done, and it is not optional. If you reach a `return` statement or the end of the generator function's code without raising anything, Python will raise `StopIteration` for you! This is different from regular functions, which return `None` if they reach the end without returning anything else. It ties in with the different ways that generators work, as I described above. 
Here's an example generator function that will make it easy to see how `StopIteration` gets raised: ``` def simple_generator(): yield "foo" yield "bar" # StopIteration will be raised here automatically ``` Here's what happens when you consume it: ``` >>> g = simple_generator() >>> next(g) 'foo' >>> next(g) 'bar' >>> next(g) Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> next(g) StopIteration ``` Calling `simple_generator` always returns a `generator` object immediately (without running any of the code in the function). Each call of `next` on the generator object runs the code until the next `yield` statement, and returns the yielded value. If there is no more to get, `StopIteration` is raised. Now, normally you don't see `StopIteration` exceptions. The reason for this is that you usually consume generators inside `for` loops. A `for` statement will automatically call `next` over and over until `StopIteration` gets raised. It will catch and suppress the `StopIteration` exception for you, so you don't need to mess around with `try`/`except` blocks to deal with it. A `for` loop like `for item in iterable: do_stuff(item)` is almost exactly equivalent to this `while` loop (the only difference being that a real `for` doesn't need a temporary variable to hold the iterator): ``` iterator = iter(iterable) try: while True: item = next(iterator) do_stuff(item) except StopIteration: pass finally: del iterator ``` The `gen` generator function you showed at the top is one exception. It uses the `StopIteration` exception produced by the iterator it is consuming as its own signal that it is done being iterated on. That is, rather than catching the `StopIteration` and then breaking out of the loop, it simply lets the exception go uncaught (presumably to be caught by some higher level code). Unrelated to the main question, there is one other thing I want to point out. In your code, you're calling `next` on a variable called `iterable`.
If you take that name as documentation for what type of object you will get, this is not necessarily safe. `next` is part of the `iterator` protocol, not the `iterable` (or `container`) protocol. It may work for some kinds of iterables (such as files and generators, as those types are their own iterators), but it will fail for other iterables, such as tuples and lists. The more correct approach is to call `iter` on your `iterable` value, then call `next` on the iterator you receive. (Or just use `for` loops, which call both `iter` and `next` for you at appropriate times!) --- **I just found my own answer in a Google search for a related question, and I feel I should update to point out that the answer above is not true in modern Python versions.** [PEP 479](https://www.python.org/dev/peps/pep-0479/) has made it an error to allow a `StopIteration` to bubble up uncaught from a generator function. If that happens, Python will turn it into a `RuntimeError` exception instead. This means that code like the examples in older versions of `itertools` that used a `StopIteration` to break out of a generator function needs to be modified. Usually you'll need to catch the exception with a `try`/`except` and then `return`. Because this was a backwards incompatible change, it was phased in gradually. In Python 3.5, all code worked as before by default, but you could get the new behavior with `from __future__ import generator_stop`. In Python 3.6, unmodified code would still work, but it would give a warning. In Python 3.7 and later, the new behavior applies all the time.
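To make the PEP 479 change concrete, here is a small sketch (not from the original answer; the function name `drain` is invented) of the try/except-and-return pattern that modern generator code needs when it consumes another iterator by hand:

```python
def drain(iterator):
    """Yield everything from an already-created iterator.

    On Python 3.7+, letting next() raise StopIteration inside a
    generator would become a RuntimeError (PEP 479), so we catch
    the exception and return instead.
    """
    while True:
        try:
            val = next(iterator)
        except StopIteration:
            return  # ends this generator cleanly
        yield val

print(list(drain(iter([1, 2, 3]))))  # [1, 2, 3]
```

The `return` here plays the role that the bare uncaught `StopIteration` played in the old `itertools.tee` recipe.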
When a function contains `yield`, calling it does not actually execute anything, it merely creates a generator object. Only iterating over this object will execute the code. So my guess is that you're merely calling the function, which means the function doesn't raise `StopIteration` because *it is never being executed.* Given your function, and an iterable: ``` def func(iterable): while True: val = next(iterable) yield val iterable = iter([1, 2, 3]) ``` This is the wrong way to call it: ``` func(iterable) ``` This is the right way: ``` for item in func(iterable): # do something with item ``` You could also store the generator in a variable and call `next()` on it (or iterate over it in some other way): ``` gen = func(iterable) print(next(gen)) # prints 1 print(next(gen)) # prints 2 print(next(gen)) # prints 3 print(next(gen)) # StopIteration ``` By the way, a better way to write your function is as follows: ``` def func(iterable): for item in iterable: yield item ``` Or in Python 3.3 and later: ``` def func(iterable): yield from iter(iterable) ``` Of course, real generators are rarely so trivial. :-)
How yield catches StopIteration exception?
[ "", "python", "yield", "stopiteration", "" ]
I'm trying to join multiple tables together using a `full outer join`, its getting close to the proper result but there are some duplicate rows due to the join clauses. I've got several tables with columns of id, date, value. I'm looking to get a table with one row for each id, date pair that has all the values from each of the tables. [Here is a SQLFiddle if you want to play with it.](http://sqlfiddle.com/#!12/7993e/1) Here's what I've got so far: ``` SELECT COALESCE(T1.ID, T2.ID, T3.ID, t4.id) AS ID, COALESCE(T1.event_dt, T2.event_dt, T3.event_dt, t4.event_dt) AS DATE, T1.AMT1, T2.AMT2, T3.AMT3, t4.AMT4 FROM T1 FULL OUTER JOIN T2 ON T2.id = T1.id AND T2.event_dt = T1.event_dt FULL OUTER JOIN T3 ON T3.id = T1.id AND T3.event_dt = T1.event_dt FULL OUTER JOIN T4 ON T4.id = T1.id AND T4.event_dt = T1.event_dt ORDER BY ID, DATE ``` This almost works, but I get some duplicate rows when for instance T4 has an ID,event\_dt pair that is not in T1 (as to be expected because thats what I'm joining on). For instance, I'll get something like: ``` 1 April, 06 2012 00:00:00+0000 (null) 2 (null) (null) 1 April, 06 2012 00:00:00+0000 (null) (null) (null) 4 1 April, 06 2012 00:00:00+0000 (null) (null) 3 (null) ``` When I'm looking to get: ``` 1 April, 06 2012 00:00:00+0000 (null) 2 3 4 ``` Is there a way to flatten/merge those rows together, or is there a better way to go about this altogether?
(assuming the OP wants a fully symmetric outer 4-join) ``` WITH four AS ( SELECT id, event_dt FROM t1 UNION SELECT id, event_dt FROM t2 UNION SELECT id, event_dt FROM t3 UNION SELECT id, event_dt FROM t4 ) SELECT f.id, f.event_dt , t1.amt1 , t2.amt2 , t3.amt3 , t4.amt4 FROM four f LEFT JOIN t1 ON t1.id = f.id AND t1.event_dt = f.event_dt LEFT JOIN t2 ON t2.id = f.id AND t2.event_dt = f.event_dt LEFT JOIN t3 ON t3.id = f.id AND t3.event_dt = f.event_dt LEFT JOIN t4 ON t4.id = f.id AND t4.event_dt = f.event_dt ORDER BY id, event_dt ; ``` Result: ``` id | event_dt | amt1 | amt2 | amt3 | amt4 ----+------------+------+------+------+------ 1 | 2012-04-01 | 1 | | | 1 | 2012-04-02 | 1 | | 3 | 1 | 2012-04-03 | 1 | | 3 | 1 | 2012-04-06 | | 2 | 3 | 4 1 | 2012-04-07 | | 2 | | 2 | 2012-04-01 | 40 | | | 2 | 2012-04-02 | | | 3 | 2 | 2012-04-03 | | | 3 | 2 | 2012-04-04 | 40 | | | (9 rows) ``` BTW: after the `UNION` four, `LEFT JOIN`s will do the same as `FULL JOIN`s here (union four already has all the possible {id, event\_dt} pairs)
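As a quick, self-contained way to check the UNION-of-keys idea, here is a sketch run through Python's sqlite3 with two tables instead of four (table contents invented, not the asker's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INT, event_dt TEXT, amt1 INT);
CREATE TABLE t2 (id INT, event_dt TEXT, amt2 INT);
INSERT INTO t1 VALUES (1, '2012-04-01', 10), (1, '2012-04-06', 11);
INSERT INTO t2 VALUES (1, '2012-04-06', 20), (2, '2012-04-02', 21);
""")
rows = con.execute("""
WITH keys AS (
    SELECT id, event_dt FROM t1
    UNION
    SELECT id, event_dt FROM t2
)
SELECT k.id, k.event_dt, t1.amt1, t2.amt2
FROM keys k
LEFT JOIN t1 ON t1.id = k.id AND t1.event_dt = k.event_dt
LEFT JOIN t2 ON t2.id = k.id AND t2.event_dt = k.event_dt
ORDER BY k.id, k.event_dt
""").fetchall()
for row in rows:
    print(row)
```

Because the `UNION` already deduplicates the {id, event_dt} pairs, each key appears exactly once and the `LEFT JOIN`s can never produce the duplicate rows the question complains about.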
You could always use an aggregate around the `amount` columns: ``` SELECT COALESCE(T1.ID, T2.ID, T3.ID, t4.id) AS ID, COALESCE(T1.event_dt, T2.event_dt, T3.event_dt, t4.event_dt) AS DATE, max(coalesce(T1.AMT1, 0)) AMT1, -- use coalesce to replace the null with zero max(coalesce(T2.AMT2, 0)) AMT2, max(coalesce(T3.AMT3, 0)) AMT3, max(coalesce(t4.AMT4, 0)) AMT4 FROM T1 FULL OUTER JOIN T2 ON T2.id = T1.id AND T2.event_dt = T1.event_dt FULL OUTER JOIN T3 ON T3.id = T1.id AND T3.event_dt = T1.event_dt FULL OUTER JOIN T4 ON T4.id = T1.id AND T4.event_dt = T1.event_dt group by COALESCE(T1.ID, T2.ID, T3.ID, t4.id), COALESCE(T1.event_dt, T2.event_dt, T3.event_dt, t4.event_dt) ORDER BY ID, DATE; ``` See [Demo](http://sqlfiddle.com/#!12/7993e/11)
SQL Full outer join or alternative solution
[ "", "sql", "postgresql", "join", "" ]
In Python 2.7.3, this is the current behavior: ``` >>> 8./9. 0.8888888888888888 >>> '%.1f' % (8./9.) '0.9' ``` Same appears to be true for `Decimal`s: ``` >>> from decimal import Decimal >>> Decimal(8) / Decimal(9) Decimal('0.8888888888888888888888888889') >>> '%.1f' % (Decimal(8) / Decimal(9)) '0.9' ``` I would have expected truncation; however, it appears to round. So what are my options for truncating to the tenths place? *FYI I ask because my current solution seems hacky (but maybe it's the best practice?) as it makes a string of the result, finds the period and simply takes the X digits after the period that I want.*
> So my options to truncating to the tenths place? The [Decimal.quantize()](http://docs.python.org/2.7/library/decimal.html#decimal.Decimal.quantize) method rounds a number to a fixed exponent and it provides control over the rounding mode: ``` >>> from decimal import Decimal, ROUND_FLOOR >>> Decimal('0.9876').quantize(Decimal('0.1'), rounding=ROUND_FLOOR) Decimal('0.9') ``` Don't use [*math.floor*](http://docs.python.org/2.7/library/math.html#math.floor) on *Decimal* values because it first coerces them to a binary float introducing representation error and lost precision: ``` >>> x = Decimal('1.999999999999999999998') >>> x.quantize(Decimal('0.1'), rounding=ROUND_FLOOR) Decimal('1.9') >>> math.floor(x * 10) / 10 2.0 ```
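A runnable sketch of the quantize approach (the helper name `truncate` is invented here); note that `ROUND_DOWN` truncates toward zero while `ROUND_FLOOR` goes toward negative infinity, which only differs for negative inputs:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_FLOOR

def truncate(value, places='0.1'):
    # str() first so the float's short repr, not its full binary
    # expansion, is what Decimal sees.
    return Decimal(str(value)).quantize(Decimal(places), rounding=ROUND_DOWN)

print(truncate(8 / 9))   # 0.8
print(truncate(-0.95))   # -0.9 (ROUND_DOWN goes toward zero)
print(Decimal('-0.95').quantize(Decimal('0.1'), rounding=ROUND_FLOOR))  # -1.0
```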
You are looking for the [`math.floor()` function](http://docs.python.org/2/library/math.html#math.floor) instead: ``` >>> import math >>> math.floor(8./9. * 10) / 10 0.8 ```
How to properly truncate a float/decimal to a specific place after the decimal in python?
[ "", "python", "truncate", "" ]
I have two tables of data which contain data something like the following: ``` | id | name | dob | | name | dob | |-------|------|----------| |------|----------| | 12345 | ABC | 20010301 | | ABC | 20010301 | - matching record | 45678 | DEF | 20010425 | | XYZ | 20010301 | - unmatched record ``` Is it possible to write a query which compares the two tables and then creates a match and unmatched table, leaving the original tables' structure/data alone? ``` Match Table Unmatched Table | id | rank | | id | rank | |-------|------| |-------|------| | 12345 | 1 | | 45678 | NULL | ``` I was experimenting with MERGE but I have to insert/update one source table and I have hit my ceiling in terms of T-SQL - I will also be dealing with datasets in excess of 30,000,000 rows - any advice/suggestions? The SQL (fields don't match but the principle is there) I have so far is as follows: ``` Create TABLE #Cohort ([ID] varchar(4),[match rank] int) INSERT INTO #Cohort ([ID],[match rank]) VALUES('aaaa',NULL) INSERT INTO #Cohort ([ID],[match rank]) VALUES('bbbb',NULL) INSERT INTO #Cohort ([ID],[match rank]) VALUES('cccc',NULL) INSERT INTO #Cohort ([ID],[match rank]) VALUES('dddd',NULL) Create TABLE #link ([ID] varchar(4),[match rank] int) INSERT INTO #link ([ID],[match rank]) VALUES(left(NEWID(),4),NULL) INSERT INTO #link ([ID],[match rank]) VALUES(left(NEWID(),4),NULL) INSERT INTO #link ([ID],[match rank]) VALUES('aaaa',NULL) INSERT INTO #link ([ID],[match rank]) VALUES(left(NEWID(),4),NULL) Create TABLE #Matches ([ID] varchar(4),[match rank] int) Create TABLE #Unmatched ([ID] varchar(4),[match rank] int) MERGE #Cohort tg USING (SELECT distinct c.[ID], 1 as [match rank] from #Cohort c INNER JOIN #link as h on c.[ID]=h.[ID]) sc ON (tg.[ID] = sc.[ID] ) WHEN NOT MATCHED BY TARGET THEN INSERT([ID],[match rank]) VALUES(sc.[ID],sc.[match rank]) WHEN NOT MATCHED BY SOURCE THEN DELETE OUTPUT Deleted.* INTO #Unmatched; ```
Using CTE, at the end you will have on `#Matched` the matched rows and on `#Unmatched` the unmatched rows. As it currently stands, your `MERGE` statement deletes rows from #cohort table leaving just the one with the `aaaa` value. ``` CREATE TABLE #Cohort ([ID] VARCHAR(4),[MATCH RANK] INT) INSERT INTO #Cohort ([ID],[MATCH RANK]) VALUES('aaaa',NULL) INSERT INTO #Cohort ([ID],[MATCH RANK]) VALUES('bbbb',NULL) INSERT INTO #Cohort ([ID],[MATCH RANK]) VALUES('cccc',NULL) INSERT INTO #Cohort ([ID],[MATCH RANK]) VALUES('dddd',NULL) CREATE TABLE #link ([ID] VARCHAR(4),[MATCH RANK] INT) INSERT INTO #link ([ID],[MATCH RANK]) VALUES(LEFT(NEWID(),4),NULL) INSERT INTO #link ([ID],[MATCH RANK]) VALUES(LEFT(NEWID(),4),NULL) INSERT INTO #link ([ID],[MATCH RANK]) VALUES('aaaa',NULL) INSERT INTO #link ([ID],[MATCH RANK]) VALUES(LEFT(NEWID(),4),NULL) CREATE TABLE #Matches ([ID] VARCHAR(4),[MATCH RANK] INT) CREATE TABLE #Unmatched ([ID] VARCHAR(4),[MATCH RANK] INT) ;WITH MatchedTbl AS ( SELECT DISTINCT c.[ID], c.[MATCH RANK] FROM #Cohort c INNER JOIN #link h ON c.[ID] = h.[ID] ) INSERT INTO #Matches SELECT c.[ID], c.[MATCH RANK] FROM MatchedTbl c ;WITH NonMatchedTbl AS ( SELECT DISTINCT l.[ID], l.[MATCH RANK] FROM #link l WHERE l.ID NOT IN (SELECT DISTINCT ID FROM #cohort) ) INSERT INTO #Unmatched SELECT [ID], [MATCH RANK] FROM NonMatchedTbl SELECT * FROM #Cohort SELECT * FROM #Link SELECT * FROM #Matches SELECT * FROM #Unmatched DROP TABLE #Cohort DROP TABLE #link DROP TABLE #Matches DROP TABLE #Unmatched ```
The standard way to look for matched / unmatched records is to perform a left join and look for NULLs in the left-joined table. ``` SELECT t1.id, COUNT(t2.name) AS rank INTO #MatchedTable FROM Table1 t1 LEFT JOIN Table2 t2 ON t2.name = t1.name WHERE t2.name IS NOT NULL GROUP BY t1.id ORDER BY t1.id ``` And: ``` SELECT t1.id, NULL AS rank INTO #UnmatchedTable FROM Table1 t1 LEFT JOIN Table2 t2 ON t2.name = t1.name WHERE t2.name IS NULL GROUP BY t1.id ORDER BY t1.id ``` I hope this helps.
TSQL link data and split into matched and unmatched tables
[ "", "sql", "sql-server", "t-sql", "" ]
I wanted to define a global variable in main, i.e., a variable that can be used by any function I call from the main function. Is that possible? What'd be a good way to do this? Thanks!
A variable created *inside* a method (e.g., main) is local by definition. However, you can create a global variable outside the method, and access and change its value from inside any other method. To change its value use the [`global`](http://docs.python.org/2/reference/simple_stmts.html#global) keyword.
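A minimal runnable sketch of that pattern (names invented for illustration): reading a global needs no declaration, but rebinding it inside a function requires the `global` keyword:

```python
counter = 0  # module-level, i.e. "global"

def increment():
    global counter  # needed because we rebind the name
    counter += 1

def show():
    return counter  # reading a global needs no declaration

increment()
increment()
print(show())  # 2
```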
What you want is not possible\*. You can just create a variable in the global namespace: ``` myglobal = "UGHWTF" def main(): global myglobal # prevents creation of a local variable called myglobal myglobal = "yu0 = fail it" anotherfunc() def anotherfunc(): print myglobal ``` DON'T DO THIS. The whole point of a function is that it takes parameters. Just add parameters to your functions. If you find that you need to modify a lot of functions, this is an indication that you should collect them up into a class. `*` To elaborate on why this isn't possible: variables in python are not declared - they are created when an assignment statement is executed. This means that the following code (derived from code posted by astronautlevel) will break: ``` def setcake(taste): global cake cake = taste def caketaste(): print cake #Output is whatever taste was caketaste() Traceback (most recent call last): File "prog.py", line 7, in <module> caketaste() File "prog.py", line 5, in caketaste print cake #Output is whatever taste was NameError: global name 'cake' is not defined ``` This happens because when `caketaste` is called, no assignment to `cake` has occurred. It will only occur after `setcake` has been called. You can see the error here: <http://ideone.com/HBRN4y>
global variable inside main function python
[ "", "python", "scope", "" ]
I have a SQL query which fails to execute: ``` select p.PersonID, CreatedDate, * from dbo.Person p join dbo.PotentiallyFraudulentPeople pfp on p.PersonID= pfp.PersonID order by CreatedDate ``` The `Person` table has a PK of `PersonID` (int). The `PotentiallyFraudulentPeople` view is a query of the `Person` table joined with some other tables to decide if we trust a person or not. The `PotentiallyFraudulentPeople` view only has one column: `PersonID`. When I try to execute this query, I get this error: > Msg 209, Level 16, State 1, Line 3 > Ambiguous column name 'CreatedDate'. I understand that this error is telling me that the `CreatedDate` column name is ambiguous and I need to preface it with my table's alias, 'p'. This query works: ``` select p.PersonID, CreatedDate, * from dbo.Person p join dbo.PotentiallyFraudulentPeople pfp on p.PersonID= pfp.PersonID order by p.CreatedDate ``` What I don't understand is why I need to use the 'p' alias in the `ORDER BY` statement and not in the `SELECT` column list. Also, I don't understand why I need to use the table alias at all since the `PotentiallyFraudulentPeople` view doesn't even have a `CreatedDate` column. Can anyone explain this odd behavior? I am using SQL Server 2008 and SSMS to execute the query. **UPDATE** Also, I tried removing the `CreatedDate` column from my `SELECT` column list and then the query no longer requires the 'p' alias in the `ORDER BY`. So this query works as well: ``` select p.PersonID, * from dbo.Person p join dbo.PotentiallyFraudulentPeople pfp on p.PersonID= pfp.PersonID order by CreatedDate ```
You are selecting the `CreatedDate` column twice. 1. Explicitly via `CreatedDate`. 2. Implicitly via `*`. It doesn't know which occurrence you want to sort on - and it obviously doesn't realize that both occurrences refer to the same column.
And the reason the name is ambiguous is that the ORDER BY clause allows the use of alias names, so p.CreatedDate and \*.CreatedDate both alias to CreatedDate. Since ORDER BY allows alias names, it can't resolve which of the two occurrences you mean.
Why is this SQL Query causing an "Ambiguous column name" error?
[ "", "sql", "sql-server", "" ]
I want to **delete** using `INNER JOIN` in **SQL Server 2008**. But I get this error: > Msg 156, Level 15, State 1, Line 15 > **Incorrect** syntax near the **keyword** 'INNER'. My code: ``` DELETE FROM WorkRecord2 INNER JOIN Employee ON EmployeeRun=EmployeeNo WHERE Company = '1' AND Date = '2013-05-06' ```
You need to specify what table you are deleting from. Here is a version with an alias: ``` DELETE w FROM WorkRecord2 w INNER JOIN Employee e ON EmployeeRun=EmployeeNo WHERE Company = '1' AND Date = '2013-05-06' ```
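The aliased `DELETE ... FROM ... JOIN` form is T-SQL-specific, so this sketch checks the same logic portably with Python's sqlite3, rewriting the join as a correlated `EXISTS` (an assumption-level translation with invented sample rows, not the original statement):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE WorkRecord2 (EmployeeRun INT, Company TEXT, Date TEXT);
CREATE TABLE Employee (EmployeeNo INT);
INSERT INTO WorkRecord2 VALUES (1, '1', '2013-05-06'), (2, '1', '2013-05-06');
INSERT INTO Employee VALUES (1);
""")
con.execute("""
DELETE FROM WorkRecord2
WHERE Company = '1' AND Date = '2013-05-06'
  AND EXISTS (SELECT 1 FROM Employee e
              WHERE e.EmployeeNo = WorkRecord2.EmployeeRun)
""")
remaining = con.execute("SELECT EmployeeRun FROM WorkRecord2").fetchall()
print(remaining)  # [(2,)] - only the row with no matching employee survives
```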
Just add the name of the table between `DELETE` and `FROM` from where you want to delete records, because we have to specify the table to delete. Also remove the `ORDER BY` clause because there is nothing to order while deleting records. So your final query should be like this: ``` DELETE WorkRecord2 FROM WorkRecord2 INNER JOIN Employee ON EmployeeRun=EmployeeNo WHERE Company = '1' AND Date = '2013-05-06'; ```
How can I delete using INNER JOIN with SQL Server?
[ "", "sql", "sql-server", "sql-server-2008", "inner-join", "sql-delete", "" ]
I want to sort the result of this query according to the contains of the query : here it is : `SELECT a from Frais a where a.libelle = 'FRET' or a.libelle = 'Douane' or a.libelle = 'Transitaire'` I would like to have the records that have `FRET` first and `Douane` after and so on ``` order by libelle ``` doesn't resolve the problem it sort them according to alphabetic order asc or desc
``` SELECT a from Frais a where a.libelle = 'FRET' or a.libelle = 'Douane' or a.libelle = 'Transitaire' order by case a.libelle when 'FRET' then 0 when 'Douane' then 1 when 'Transitaire' then 2 end ```
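A quick sanity check of CASE-based ordering, run through Python's sqlite3 (just a sketch; SQLite stands in for the asker's engine, since `CASE` inside `ORDER BY` is standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Frais (libelle TEXT)")
con.executemany("INSERT INTO Frais VALUES (?)",
                [("Transitaire",), ("FRET",), ("Douane",)])
rows = con.execute("""
    SELECT libelle FROM Frais
    WHERE libelle IN ('FRET', 'Douane', 'Transitaire')
    ORDER BY CASE libelle
        WHEN 'FRET' THEN 0
        WHEN 'Douane' THEN 1
        WHEN 'Transitaire' THEN 2
    END
""").fetchall()
order = [r[0] for r in rows]
print(order)  # ['FRET', 'Douane', 'Transitaire']
```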
One option is to use a `CASE` statement: ``` SELECT * FROM Frais a WHERE a.libelle = 'FRET' OR a.libelle = 'Douane' OR a.libelle = 'Transitaire' ORDER BY CASE WHEN a.libelle = 'FRET' THEN 1 WHEN a.libelle = 'Douane' THEN 2 WHEN a.libelle = 'Transitaire' THEN 3 END ```
sort result of sql query by order of field in where clause of query
[ "", "sql", "sorting", "sql-order-by", "" ]
I am attempting to create a package (mypackage) that contains a few classes, but would like the classes contained in multiple files. For example, I want class\_a.py to contain a class named ClassA, etc... Thus, I would like the following file structure: ``` .../mypackage __init__.py class_a.py class_b.py ... ``` However, I would like to load and use the package as follows: ``` load mypackage a = mypackage.ClassA() ``` What do I need to do (I assume in the \_\_init\_\_.py file) to make this possible? Currently, it operates using "mypackage.class\_a.ClassA()".
Make your `__init__.py` import all your `ClassA`, `ClassB`, etc from other files. Then you'll be able to `import mypackage` and use `mypackage.ClassA`, or `from mypackage import ClassA` and use it as unqualified `ClassA`. A bit of background. An `import foo` statement looks for `foo.py`, then for `foo/__init__.py`, and loads the names defined in that file into the current namespace. Put whatever you need to be "top-level" into `__init__.py`. Also, [take a look at `__all__` top-level variable](http://docs.python.org/2/tutorial/modules.html#importing-from-a-package) if you tend to `from mypackage import *`.
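A self-contained sketch that builds such a package on disk and imports it (the temp-dir paths and the class body are invented for the demo):

```python
import os
import sys
import tempfile

# Build the package layout the question describes, in a temp dir.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypackage")
os.makedirs(pkg)
with open(os.path.join(pkg, "class_a.py"), "w") as f:
    f.write("class ClassA:\n    name = 'A'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    # Re-export the class so users can write mypackage.ClassA
    f.write("from .class_a import ClassA\n")

sys.path.insert(0, root)
import mypackage

print(mypackage.ClassA().name)  # A
```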
As mentioned, in your `__init__.py` for a class, use the following: ``` from class_a import ClassA from class_b import ClassB ``` for the case of a file without a class, use the following: ``` from . import file_a from . import file_b ``` or if you only want to expose specific methods of a file: ``` from .file_a import method_a from .file_b import method_b ```
How to create a python package with multiple files without subpackages
[ "", "python", "python-module", "" ]
In my Python 2.7.2 IDLE interpreter: ``` >>> mylist = [1, 2, 3, 4, 5] >>> for item in mylist: mylist.remove(item) >>> mylist [2, 4] ``` Why?
It's because when you iterate over a list, [python keeps track of the **index** in the list](http://docs.python.org/2/reference/compound_stmts.html#the-for-statement). Consider the following code instead: ``` for i in range(len(mylist)): if i >= len(mylist): break item = mylist[i] mylist.remove(item) ``` If we track this (which is essentially what python is doing in your code), then we see that when we remove an item in the list, the number to the right shifts one position to the left to fill the void left when we removed the item. The right item is now at index `i` and so it will never actually get seen in the iteration because the next thing that happens is we increment `i` for the next iteration of the for loop. --- Now for something a little clever. If instead we iterate over the list backward, we'll clear out the list: ``` for item in reversed(mylist): mylist.remove(item) ``` The reason here is that we're taking an item off the end of the list at each iteration of the for loop. Since we're always taking items off the end, nothing needs to shift (assuming uniqueness in the list -- If the list isn't unique, the result is the same, but the argument gets a bit more complicated). Of course, If you're looking to remove all the items from a list, you can do that really easily: ``` del mylist[:] ``` or even with slice assignment: ``` mylist[:] = [] ``` (I mention the latter because it can be useful to replace segments of a list with other items which don't even need to be the same length).
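To see both the surprising behavior and a safe alternative side by side, a sketch distilled from the discussion above:

```python
mylist = [1, 2, 3, 4, 5]
for item in mylist:      # the list shrinks under the iterator's feet
    mylist.remove(item)
print(mylist)  # [2, 4]

a = [1, 2, 3, 4, 5]
for item in list(a):     # iterate over a copy, mutate the original
    a.remove(item)
print(a)  # []
```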
That's because you're modifying the list while iterating over it, iterate over a shallow copy instead: ``` >>> mylist = [1, 2, 3, 4, 5] >>> for item in mylist[:]: #use mylist[:] or list(mylist) mylist.remove(item) ... >>> mylist [] ```
Why is list.remove only removing every second item?
[ "", "python", "list", "enumeration", "" ]
I have a problem: how do I hide the comma when a field's value contains a comma? For example, if I have multiple `ADDRESS` columns. Any ideas? This is my SQL: ``` select mhn.id_mohon, mhn.penyerah_nama, UPPER(mhn.ADDRESS1), UPPER(mhn.ADDRESS2), UPPER(mhn.ADDRESS3), UPPER(mhn.ADDRESS4) from mohon mhn, kod_negeri kn where mhn.penyerah_kod_negeri = kn.kod(+) and mhn.id_mohon = :p_id_mohon ```
You can *remove* commas from strings using the replace function. Something like: ``` select mhn.id_mohon, mhn.penyerah_nama, replace(UPPER(mhn.ADDRESS1), ',', '') . . . ``` You can concatenate all the address fields into one value, if you only want them in one column: ``` select mhn.id_mohon, mhn.penyerah_nama, UPPER(mhn.ADDRESS1) || UPPER(mhn.ADDRESS2) || UPPER(mhn.ADDRESS3) || UPPER(mhn.ADDRESS4) as Address ```
Assuming you have the data `123 Some Road, SomeCity State, XXXXX` and want to return `123 Some Road SomeCity State XXXXX` you would: `SELECT REPLACE(column, ',', '')`
how to hide the comma data in Oracle
[ "", "sql", "oracle", "" ]
I am trying to use the [Strava API v3](http://strava.github.io/api/v3/) in Python, and I am afraid I am missing something. The docs say: > This base URL is used for all Strava API requests: > <https://api.strava.com> > > ``` > $ curl -i https://api.strava.com > ``` > > ``` > HTTP/1.1 200 OK Content-Type: application/json Status: 200 OK > X-RateLimit-Limit: 5000 X-RateLimit-Remaining: 4999 Content-Length: 2 > ``` > > Responses are in JSON format and gzipped. I am currently doing this: ``` import urllib print urllib.urlopen('https://api.strava.com').read() ``` And getting this: ``` Traceback (most recent call last): File "StravaAPIv3.py", line 3, in <module> print urllib.urlopen('https://api.strava.com').read() File "C:\Python27\lib\urllib.py", line 86, in urlopen return opener.open(url) File "C:\Python27\lib\urllib.py", line 207, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 436, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 954, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 814, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 776, in send self.connect() File "C:\Python27\lib\httplib.py", line 1157, in connect self.timeout, self.source_address) File "C:\Python27\lib\socket.py", line 553, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno 11004] getaddrinfo failed ``` I don't know where to start, since I don't know much about HTTP requests and HTTPS UPDATE: According to Merlin's suggestion to use the `requests` module, I am doing this: ``` import requests r = requests.get('https://api.strava.com/') print r.status_code print r.headers['content-type'] print r.encoding print r.text print r.json() ``` but keep getting an error: ``` requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.strava.com', port=443): Max retries exceeded with url: / (Caused by <class 'socket.gaierror'>: [Errno 11004] getaddrinfo failed) ```
You need to follow the instructions here first: <http://strava.github.io/api/v3/oauth/> Basically, the app you create has to authorize the user using it (in this case, you). I've written some example code below. I'm new to python so don't know how to automate log in, so you'll have to copy and paste the url to the browser and copy and paste the code in. ``` import requests import json #Replace #### with your client id (listed here: http://www.strava.com/settings/api) #Replace &&&& with your redirect uri listed on the same page. I used localhost #Go this url in your browser and log in. Click authorize https://www.strava.com/oauth/authorize?client_id=###&response_type=code&redirect_uri=&&&& #Copy and paste the code returned in the url code='qwertyuio123456789' #Replace @@@@ with the code on your api page values={'client_id':'###', 'client_secret': '@@@@', 'code':code} r = requests.post('https://www.strava.com/oauth/token', data=values) json_string = r.text.replace("'", "\"") values = json.loads(json_string) #now you have an access token r = requests.get('http://www.strava.com/api/v3/athletes/227615', params=values) ``` Have fun!
Try using requests! It's safer. <http://docs.python-requests.org/en/latest/>
How should I handle this HTTPS request in Python?
[ "", "python", "https", "urllib", "getaddrinfo", "" ]
There are at least two ways to reverse a list in Python, but the iterator approach is much faster (at least in Python 2.7.x). I want to understand what contributes to this speed difference. ``` >>> x = range(1000) >>> %timeit x[::-1] 100000 loops, best of 3: 2.99 us per loop >>> %timeit reversed(x) 10000000 loops, best of 3: 169 ns per loop ``` I suspect the speed difference is due to at least the following: 1. `reversed` is written in C 2. `reversed` is an iterator, so less memory overhead I tried to use the `dis` module to get a better view of these operations, but it wasn't too helpful. I had to put these operations in a function to disassemble them. ``` >> def reverselist(_list): ... return _list[::-1] ... >>> dis.dis(reverselist) 2 0 LOAD_FAST 0 (_list) 3 LOAD_CONST 0 (None) 6 LOAD_CONST 0 (None) 9 LOAD_CONST 1 (-1) 12 BUILD_SLICE 3 15 BINARY_SUBSCR 16 RETURN_VALUE >>> def reversed_iter(_list): ... return reversed(_list) ... >>> dis.dis(reversed_iter) 2 0 LOAD_GLOBAL 0 (reversed) 3 LOAD_FAST 0 (_list) 6 CALL_FUNCTION 1 9 RETURN_VALUE ``` What all exactly happens during a slicing operation, is there a lot of memory overhead? Maybe slicing is implemented in pure Python?
That's because `reversed` returns an [`iterator`](http://docs.python.org/2/glossary.html#term-iterator) while slicing returns a whole list. ``` >>> lis = range(10) >>> lis[::-1] [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] >>> reversed(lis) <listreverseiterator object at 0x909dd0c> ``` You have to use `list()` to convert that iterator into a whole list: ``` >>> lis = range(10**5) >>> %timeit lis[::-1] 100 loops, best of 3: 2.8 ms per loop >>> %timeit list(reversed(lis)) 100 loops, best of 3: 3.13 ms per loop ``` **Help** on `reversed`: ``` >>> reversed? Type: type String Form:<type 'reversed'> Namespace: Python builtin Docstring: reversed(sequence) -> reverse iterator over values of the sequence Return a reverse iterator ```
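One way to convince yourself that `reversed()` does no work up front (a small sketch): because the reversal happens lazily, a mutation made after creating the iterator is still visible when you finally consume it:

```python
x = list(range(5))
r = reversed(x)   # no copying happens here, just an iterator object
x[0] = 99         # mutate the list before consuming the iterator
result = list(r)  # the mutation shows up: reversal is lazy
print(result)  # [4, 3, 2, 1, 99]
```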
`reversed()` returns an iterator. It doesn't *actually* reverse anything until you loop over it. From the [documentation](http://docs.python.org/2/library/functions.html#reversed): > Return a reverse *[iterator](http://docs.python.org/2/glossary.html#term-iterator)*. You need to compare the time it takes to turn the result of `reversed()` into a list again: ``` %timeit list(reversed(x)) ``` Creating just the iterator (which is nothing but a reference to the original list and an item pointer that is initialized to the length of the list) doesn't take any time at all. Having to turn `reversed()` back into a list makes it a *lot* slower: ``` >>> import timeit >>> x = range(1000) >>> timeit.timeit('x[::-1]', 'from __main__ import x') 4.623600006103516 >>> timeit.timeit('list(reversed(x))', 'from __main__ import x') 16.647125005722046 ```
Why is reversing a list with slicing slower than reverse iterator
[ "", "python", "" ]
In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly. I want to be able to have Python read the string, then test if it is over 75 characters in length. If it is, then split the string up into multiple strings, then print one after the other on a new line. I also want it to be smart, not cutting off full words. i.e. `"The quick brown <newline> fox..."` instead of `"the quick bro<newline>wn fox..."`. I've tried modifying similar code that truncates the string after a set length, but just trashes the string instead of putting it in a new line. What are some methods I could use to accomplish this?
You could use [`textwrap`](https://docs.python.org/3/library/textwrap.html) module: ``` >>> import textwrap >>> strs = "In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly." >>> print(textwrap.fill(strs, 20)) In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly. ``` **help** on [`textwrap.fill`](https://docs.python.org/3/library/textwrap.html#textwrap.fill): ``` >>> textwrap.fill? Definition: textwrap.fill(text, width=70, **kwargs) Docstring: Fill a single paragraph of text, returning a new string. Reformat the single paragraph in 'text' to fit in lines of no more than 'width' columns, and return a new string containing the entire wrapped paragraph. As with wrap(), tabs are expanded and other whitespace characters converted to space. See TextWrapper class for available keyword args to customize wrapping behaviour. ``` Use `regex` if you don't want to merge a line into another line: ``` import re strs = """In my project, I have a bunch of strings that are. Read in from a file. Most of them, when printed in the command console, exceed 80. Characters in length and wrap around, looking ugly.""" print('\n'.join(line.strip() for line in re.findall(r'.{1,40}(?:\s+|$)', strs))) # Reading a single line at once: for x in strs.splitlines(): print '\n'.join(line.strip() for line in re.findall(r'.{1,40}(?:\s+|$)', x)) ``` **output:** ``` In my project, I have a bunch of strings that are. Read in from a file. Most of them, when printed in the command console, exceed 80. Characters in length and wrap around, looking ugly. ```
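A short check that `textwrap.fill` respects both the width limit and word boundaries (sample string and width chosen arbitrarily for the demo):

```python
import textwrap

s = ("I want to be able to have Python read the string, then test "
     "if it is over 75 characters in length.")
wrapped = textwrap.fill(s, width=40)
print(wrapped)

# No line exceeds the width, and no word was cut in half.
assert all(len(line) <= 40 for line in wrapped.splitlines())
assert " ".join(wrapped.split()) == s
```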
This is what the [textwrap](http://docs.python.org/3/library/textwrap.html) module is for. Try `textwrap.fill(some_string, width=75)`.
A good way to make long strings wrap to newline?
[ "", "python", "word-wrap", "" ]
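The accepted `textwrap.fill` approach above can be condensed into a tiny, self-contained sketch (the sample string below is invented for the demo):

```python
import textwrap

# Wrap a long string so no line exceeds the given width,
# breaking only at whitespace so whole words are preserved.
text = ("The quick brown fox jumps over the lazy dog " * 4).strip()
wrapped = textwrap.fill(text, width=40)
lines = wrapped.split("\n")
```

Because `fill` only breaks at whitespace, joining the wrapped lines back together recovers the original words in order.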
I have a column called TimeSpent of type Float. It contains values (i.e. 2.50, 1.25, 3.75, 5.60 ) I need to convert these hour values to appropriate hour values like (2:30, 1:15, 3:45 ,5:36) etc) How can this be done?
Try this `Query` ``` select time_decimal ,cast(cast(cast(time_decimal as int) as varchar)+ ':'+cast(cast((time_decimal - cast(time_decimal as int)) * 60 as int) as varchar)+ ':'+cast(cast(((time_decimal - cast(time_decimal as int)) * 60- cast((time_decimal - cast(time_decimal as int)) * 60 as int)) * 60 as int) as varchar) as time) as real_time from time1 ``` **[SQL FIDDLE](http://www.sqlfiddle.com/#!3/55621/2)**
Assuming you have already created a column with datatype time, you can update your table by concatenating the integer part of TimeSpent with the fractional part multiplied by 60. SQL Server 2012: ``` UPDATE yourTable SET newTimeColumn = CONCAT(CAST(TimeSpentFloat AS INT), ':', (TimeSpentFloat - CAST(TimeSpentFloat AS INT)) * 60); ``` SQL Server 2008 and lower: ``` SELECT CAST(CAST(2.75 AS INT) AS CHAR) + ':' + CAST((2.75 - CAST(2.75 AS INT)) * 60 AS CHAR); ``` See it working in an [sqlfiddle](http://sqlfiddle.com/#!3/d41d8/13538/0).
Convert Decimal values(hours) to Appropriate Hours in sql server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
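The arithmetic both answers rely on (whole hours, then fractional part times 60) can be sketched in Python with the question's own sample values; the helper name `hours_to_hhmm` is invented for the demo:

```python
def hours_to_hhmm(value):
    """Convert decimal hours (e.g. 2.50) to an 'H:MM' string (e.g. '2:30')."""
    whole = int(value)                     # whole hours
    minutes = round((value - whole) * 60)  # fractional part -> minutes
    return "%d:%02d" % (whole, minutes)

converted = [hours_to_hhmm(v) for v in (2.50, 1.25, 3.75, 5.60)]
```

This is only an illustration of the conversion, not SQL; in T-SQL the same two terms come from `CAST(x AS INT)` and `(x - CAST(x AS INT)) * 60`.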
I have a table with a list of dates where an employee became Active/Inactive and I want to count the weeks that an employee was Active within a certain date range. So the table (ps\_job) would have values like this: ``` EMPLID EFFDT HR_STATUS ------ ----- ------ 1000 01-Jul-11 A 1000 01-Sep-11 I 1000 01-Jan-12 A 1000 01-Mar-12 I 1000 01-Sep-12 A ``` The query would need to show me the number of weeks that this emplid was active from 01-Jul-11 to 31-Dec-12. The desired result set would be: ``` EMPLID WEEKS_ACTIVE ------ ------------ 1000 35 ``` I got the number 35 by adding the results from the SQLs below: ``` SELECT (NEXT_DAY('01-Sep-11','SUNDAY') - NEXT_DAY('01-Jul-11','SUNDAY'))/7 WEEKS_ACTIVE FROM DUAL; SELECT (NEXT_DAY('01-Mar-12','SUNDAY') - NEXT_DAY('01-Jan-12','SUNDAY'))/7 WEEKS_ACTIVE FROM DUAL; SELECT (NEXT_DAY('31-Dec-12','SUNDAY') - NEXT_DAY('01-Sep-12','SUNDAY'))/7 WEEKS_ACTIVE FROM DUAL; ``` The problem is I can't seem to figure out how to create a single query statement that will go through all the rows for every employee within a certain date range and just return each emplid and the number of weeks they were active. I would prefer to use basic SQL instead of PL/SQL so that I can transfer it to a PeopleSoft query that can be run by the user, but I am willing to run it for the user using Oracle SQL Developer if need be. Database: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Here I'm using `lead` in a subquery to get the next date and then summing the intervals in the outer query: ``` with q as ( select EMPLID, EFFDT, HR_STATUS , lead (EFFDT, 1) over (partition by EMPLID order by EFFDT) as NEXT_EFFDT from ps_job order by EMPLID, EFFDT ) select EMPLID , trunc(sum((trunc(coalesce(NEXT_EFFDT, current_timestamp)) - trunc(EFFDT)) / 7)) as WEEKS_ACTIVE from q where HR_STATUS = 'A' group by EMPLID; ``` The `coalesce` function will grab the system date in the event it cannot find a matching `I` record (employee is current). You could substitute the end of the year if that's your spec. Note that I'm not doing any rigorous testing to see that your entries are ordered A/I/A/I etc., so you might want to add checks of that nature if you know your data requires it. Feel free to play with this at [SQL Fiddle](http://www.sqlfiddle.com/#!4/1baa0/2).
If the customer just wants a rough estimate I'd start with the number of days for each stint, divided by 7 and rounded. The trick is to line up the Active date with its corresponding Inactive date, and the best way I can think to do this is to pick out the Active and Inactive dates separately, rank them by date, and join them back together by `EmplID` and rank. The `ROW_NUMBER()` analytical function is the best way to rank in this situation: ``` WITH EmpActive AS ( SELECT EmplID, EffDt, ROW_NUMBER() OVER (PARTITION BY EmplID ORDER BY EffDt NULLS LAST) DtRank FROM ps_job WHERE HR_Status = 'A' ), EmpInactive AS ( SELECT EmplID, EffDt, ROW_NUMBER() OVER (PARTITION BY EmplID ORDER BY EffDt NULLS LAST) DtRank FROM ps_job WHERE HR_Status = 'I' ) SELECT EmpActive.EmplID, EmpActive.EffDt AS ActiveDate, EmpInactive.EffDt AS InactiveDate, ROUND((NVL(EmpInactive.EffDt, TRUNC(SYSDATE)) - EmpActive.EffDt) / 7) AS WeeksActive FROM EmpActive LEFT JOIN EmpInactive ON EmpActive.EmplID = EmpInactive.EmplID AND EmpActive.DtRank = EmpInactive.DtRank ``` The third gig for `EmplID = 1000` has an active date but no inactive date, hence the `NULLS LAST` in the `ROW_NUMBER` ordering and the left join between the two subqueries. I've used the "days / 7" math here; you can substitute what you need when you hear back from the customer. Note that if there isn't a corresponding inactive date the query uses the current date. There's a SQLFiddle of this [here](http://www.sqlfiddle.com/#!4/b4a7b/2).
SQL to Find Number of Weeks an Employee Was Active Between Two Dates
[ "", "sql", "oracle", "oracle11g", "peoplesoft", "" ]
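The interval-pairing logic behind both answers (pair each 'A' date with the next status change, close an open stint at the range end, then sum the gaps) can be checked in plain Python. The rows are the sample data from the question; note this counts whole 7-day blocks rather than the Sunday-aligned weeks of `NEXT_DAY`, so the total here is 33 rather than the question's 35:

```python
from datetime import date

# (effective date, status) rows for one employee, already sorted by date
rows = [
    (date(2011, 7, 1), "A"), (date(2011, 9, 1), "I"),
    (date(2012, 1, 1), "A"), (date(2012, 3, 1), "I"),
    (date(2012, 9, 1), "A"),
]
range_end = date(2012, 12, 31)  # close any still-open 'A' stint here

weeks = 0
for (start, status), nxt in zip(rows, rows[1:] + [(range_end, None)]):
    if status == "A":
        # gap to the next status change, floored to whole weeks
        weeks += (nxt[0] - start).days // 7
```

This mirrors what `LEAD(EFFDT)` plus `COALESCE` does in the accepted SQL: each row sees the next effective date, and the final open interval falls back to the range end.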
I have a series of files and I want to extract a specific number from each of them. In each of the files I have this line: ``` name, registration num ``` and exactly two lines after that there is the registration number. I would like to extract this number from each file and put it as a value in a dictionary. Does anyone have any idea how this is possible? My current code, which does not actually work, is below: ``` matches=[] for root, dirnames, filenames in os.walk('D:/Dataset2'): for filename in fnmatch.filter(filenames, '*.txt'): matches.append([root, filename]) filenames_list={} for root,filename in matches: filename_key = (os.path.join(filename).strip()).split('.',1)[0] fullfilename = os.path.join(root, filename) f= open(fullfilename, 'r') for line in f: if "<name, registration num'" in line: key=filename_key line+=2 val=line ```
I usually use `next()` when I want to skip a single line, usually a header for a file. ``` with open(file_path) as f: next(f) # skip 1 line next(f) # skip another one. for line in f: pass # now you can keep reading as if there was no first or second line. ``` Note: In Python 2.6 or earlier you must use `f.next()`
One way would be to load the whole file into a list of lines, and then read the line(s) you want from it. Example A file called testfile contains the following: ``` A1 B2 C3 D4 E5 ``` A program test.py: ``` #!/usr/bin/env python file = open('testfile') lines = file.readlines()[2:] file.close() for line in lines: print(line.strip()) ``` Output: ``` $./test.py C3 D4 E5 ``` EDIT: I read the question again, and noticed you just want a single line. Then you could just remove the `:`, and use `f.readlines()[2]` to get the third line in a file * Or you could use `f.readline()` three times, and just ignore the first two * Or you could use a `for line in f` type loop, and just ignore the first two lines (have an incrementing counter)
How to skip 2 lines in a file with Python?
[ "", "python", "dictionary", "count", "line", "" ]
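The accepted `next()` idiom can be exercised end-to-end with `io.StringIO`, so the sketch needs no real file; the file content below is invented to match the question's "name, then the number two lines later" layout:

```python
import io

# Fake file: the wanted number sits two lines after the marker line.
data = io.StringIO("name, registration num\nfiller\n12345\nrest\n")

next(data)                   # skip the marker line
next(data)                   # skip the line in between
wanted = next(data).strip()  # this is the registration number
remaining = [line.strip() for line in data]
```

The same calls work on a real file object returned by `open()`, since both are line iterators.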
I'm trying to perform some geoprocessing. My task is to locate all shapefiles within a directory, and then find the full path name for that shapefile within the directory. I can get the name of the shapefile, but I don't know how to get the full path name for that shapefile. ``` shpfiles = [] for path, subdirs, files in os.walk(path): for x in files: if x.endswith(".shp") == True: shpfiles.append[x] ```
`os.walk` gives you the path to the directory as the first value in the loop, just use `os.path.join()` to create full filename: ``` shpfiles = [] for dirpath, subdirs, files in os.walk(path): for x in files: if x.endswith(".shp"): shpfiles.append(os.path.join(dirpath, x)) ``` I renamed `path` in the loop to `dirpath` to not conflict with the `path` variable you already were passing to `os.walk()`. Note that you do not need to test if the result of `.endswith() == True`; `if` already does that for you, the `== True` part is entirely redundant. You can use `.extend()` and a generator expression to make the above code a little more compact: ``` shpfiles = [] for dirpath, subdirs, files in os.walk(path): shpfiles.extend(os.path.join(dirpath, x) for x in files if x.endswith(".shp")) ``` or even as one list comprehension: ``` shpfiles = [os.path.join(d, x) for d, dirs, files in os.walk(path) for x in files if x.endswith(".shp")] ```
Why not `import glob`? ``` import glob print(glob.glob('F:\OTHERS\PHOTOS\Panama\\mai13*\\*.jpg')) ``` and I get all the jpegs I want, with absolute paths ``` >>> ['F:\\OTHERS\\PHOTOS\\Panama\\mai13\\03052013271.jpg', 'F:\\OTHERS\\PHOTOS\\Panama\\mai13\\05052013272.jpg', 'F:\\OTHERS\\PHOTOS\\Panama\\mai13\\05052013273.jpg'] ```
Need the path for particular files using os.walk()
[ "", "python", "shapefile", "os.walk", "" ]
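The accepted pattern (join `dirpath` onto each filename) can be smoke-tested against a throw-away directory tree; the layout and file names below are invented for the demo:

```python
import os
import tempfile

# Build a tiny tree: root/a.shp, root/b.txt, root/sub/c.shp
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for name in ("a.shp", "b.txt", os.path.join("sub", "c.shp")):
    open(os.path.join(root, name), "w").close()

shpfiles = []
for dirpath, subdirs, files in os.walk(root):
    # os.path.join turns each bare filename into a full path
    shpfiles.extend(os.path.join(dirpath, f) for f in files if f.endswith(".shp"))
```

Every collected entry is a real, openable path, which is exactly what the bare filenames from `os.walk` are not.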
I can't solve [this problem](http://www.codecademy.com/courses/python-beginner-2W5v7/1/5?curriculum_id=4f89dab3d788890003000096#) in Python. --- I wrote this code: ``` pyg = 'ay' original = raw_input('Enter a word:') word = original.lower(); first = word[0]; if len(original) > 0 and original.isalpha(): if first == 'a' or 'e' or 'i' or 'o' or 'u': new_word = word + pyg; else: new_word = word[1:] + first_2 + pyg; print new_word; else: print "Null"; ``` But I get the following error after entering *Basic* word: > Oops, try again! Your word started with a consonant, but "basicay" was printed instead of "asicbay".
Your error is in this line: ``` if first == 'a' or 'e' or 'i' or 'o' or 'u': ``` That essentially translates to this: ``` if (first == 'a') or 'e' or 'i' or 'o' or 'u': ``` and 'e', 'i', 'o', and 'u' translate to "True" in Python, so that statement is *always* true. You actually need to do the equality check for each letter: ``` if first == 'a' or first == 'e' or first == 'i' or first == 'o' or first == 'u': ``` But luckily, there's an easier way to do that in Python: ``` if first in "aeiou": ```
``` if first == 'a' or 'e' or 'i' or 'o' or 'u': ``` evaluates to ``` if (first == 'a') or 'e' or 'i' or 'o' or 'u': ``` which is always true, because even if `first != 'a'`, the following part `'e'` is true. Replace it with ``` if first in ('a', 'e', 'i', 'o', 'u'): ``` or, if `first` is always a single character: ``` if first in 'aeiou': ``` Also, Python doesn't need `;` at the end of lines.
I can't solve this error in Python
[ "", "python", "" ]
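The bug both answers point at can be seen in isolation: in the chained `or`, the string `'e'` is truthy, so the whole expression succeeds for any first letter, while the membership test behaves as intended:

```python
first = "b"  # a consonant, so the vowel check should fail

broken = (first == "a" or "e" or "i" or "o" or "u")  # evaluates to 'e', which is truthy
fixed = first in "aeiou"                             # proper membership test
```

`or` returns the first truthy operand, which is why `broken` is the string `'e'` rather than a boolean.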
Note: this is in Oracle not MySQL, so limit/top won't work. I want to return the name of the person that has stayed the longest in a hotel. The longest stay can be found by subtracting the `checkin` column from the `checkout` column. So far I have: ``` select fans.name from fans where fans.checkout-fans.checkin is not null order by fans.checkout-fans.checkin desc; ``` but this only orders the length of stay of each person from highest to lowest. I want it to only return the name (or names, if they are tied) of people who have stayed the longest. Also, as more than one person could have stayed for the highest length of time, simply adding `limit 1` to the end won't do. **Edit (for gbn)**, when adding a join to get checkin/checkout from the other table it won't work (no records returned) **edit 2** solved now, the below join should have been `players.team = teams.name` ``` select x.name from ( select players.name, dense_rank() over (order by teams.checkout-teams.checkin desc) as rnk from players join teams on players.name = teams.name where teams.checkout-teams.checkin is not null ) x where x.rnk = 1 ```
Should be this using [DENSE\_RANK to get ties](http://www.techonthenet.com/oracle/functions/dense_rank.php) ``` select x.name from ( select fans.name, dense_rank() over (order by fans.checkout-fans.checkin desc) as rnk from fans where fans.checkout-fans.checkin is not null ) x where x.rnk = 1; ``` SQL Server has [TOP..WITH TIES](https://stackoverflow.com/a/9392484/27535) for this, but this is a generic solution for any RDBMS that has DENSE\_RANK.
**For Oracle:** ``` select * from ( select fans.name from fans where fans.checkout-fans.checkin is not null order by fans.checkout-fans.checkin desc) where rownum=1 ```
Finding entry with maximum date range between two columns in SQL
[ "", "sql", "oracle", "" ]
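An alternative formulation that also keeps ties is to compare each stay against a scalar `MAX` subquery. A sketch using Python's built-in `sqlite3` module as a stand-in for Oracle (table name, columns, and rows invented, and integer day counts used instead of `NEXT_DAY` date arithmetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fans (name TEXT, checkin INTEGER, checkout INTEGER)")
conn.executemany("INSERT INTO fans VALUES (?, ?, ?)",
                 [("ann", 1, 9), ("bob", 2, 10), ("eve", 3, 6)])

# Every row whose stay equals the maximum stay -- ties are kept naturally.
rows = conn.execute("""
    SELECT name FROM fans
    WHERE checkout - checkin = (SELECT MAX(checkout - checkin) FROM fans)
    ORDER BY name
""").fetchall()
```

Here ann and bob both stayed 8 days, so both come back, which is exactly the behaviour `DENSE_RANK() ... = 1` gives in the accepted answer.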
I need your help. I have this code to query my machines which are on rental, in stock, and at my outlets, but this only works if I input an itemID, which means it will only query one item at a time. I need to query the number of machines on rental and at outlets, alongside the quantity on hand. ``` alter procedure GetItemsForQueries @itemID varchar(15) as begin select i.ItemName, m.MachineModel, i.SellingPrice, i.QuantityOnHand, (select COUNT(*) from ClientMachine where AcquisitionType = 'Rental' and ItemID = @itemID) as 'Quantity on Rentals', (select COUNT(*) from OutletMachine where ItemID = @itemID) as 'Quantity on Outlets' from Item i inner join Machine m on (m.ItemID = i.ItemID) where i.ItemID = @itemID end ```
Two subqueries should do it - something like this... In a nutshell, find all the machines, join against a subquery that counts rentals per item, then join again against another subquery that counts outlet machines per item... ``` select m.itemid, ifnull(ccount,0) as rental_count, ifnull(ocount,0) as outlet_count from Machine m left join (select itemid, count(*) as ccount from ClientMachine where AcquisitionType = 'Rental' group by ItemID) a1 on (a1.itemid=m.itemid) left join (select itemid, count(*) as ocount from OutletMachine group by ItemID) a2 on (a2.itemid=m.itemid) ```
Off the top of my head (could have some syntax errors but the logic is there) ``` select i.ItemId, i.ItemName, m.MachineModel, i.SellingPrice, i.QuantityOnHand, rental.rentalCount, outlet.outletCount from Item i left join (select ItemId, count(*) as 'rentalCount' from ClientMachine Where AcquisitionType = 'Rental' group by ItemId) rental on rental.ItemId = i.ItemId left join (select ItemId, count(*) as 'outletCount' from OutletMachine group by ItemId) outlet on outlet.ItemId = i.ItemId inner join Machine m on (m.ItemID = i.ItemID) ```
Counting items on tables
[ "", "sql", "" ]
Is it possible to run through a subset in a python list? I have the following problem, I have two lists, `list1` is very long and `list2` is quite short. Now, I want to check which elements of the `list2` are also in `list1`. My current version looks like this: ``` for item in list1: if item in list2: # do something ``` This takes a very long time. Is it possible to get a subset and then run through the list? I need to do this many times.
If the list elements are [hashable](http://docs.python.org/2.7/glossary.html#term-hashable), you can find the intersection using [*sets*](http://docs.python.org/2.7/library/stdtypes.html#set-types-set-frozenset): ``` >>> for x in set(list2).intersection(list1): print x ``` If they are not hashable, you can at least speed-up the search by [sorting](http://docs.python.org/2.7/library/functions.html#sorted) the shorter list and doing [bisected lookups](http://docs.python.org/2.7/library/bisect.html#module-bisect): ``` >>> from bisect import bisect_left >>> list2.sort() >>> n = len(list2) >>> for x in list1: i = bisect_left(list2, x) if i != n and list2[i] == x: print x ``` If your data elements are neither hashable nor sortable, then you won't be able to speed-up your original code: ``` >>> for x in list1: if x in list2: print x ``` The running time of the set-intersection approach is proportional to the sum of lengths of the two lists, `O(n1 + n2)`. The running time of the bisected-search approach is `O((n1 + n2) * log(n2))`. The running time of the original brute-force approach is `O(n1 * n2)`.
You can use `sets` here; they provide `O(1)` lookup compared to `O(N)` for lists. But sets require that the items be hashable (immutable). ``` s = set(list1) for item in list2: if item in s: #do something ```
Running across a subset of a list in python
[ "", "python", "" ]
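The speed-up the accepted answer describes comes from O(1) set membership versus O(n) list scans; logically the two give the same elements. A small sketch with invented lists:

```python
list1 = list(range(10000))   # long list
list2 = [3, 11, 99999, 42]   # short list

# O(n1 * n2): scan list1 for every element of list2
slow = [x for x in list2 if x in list1]

# O(n1 + n2): build a set once, then each lookup is O(1)
s = set(list1)
fast = [x for x in list2 if x in s]
```

Since only membership is tested, the results are identical; only the cost per `in` changes.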
I have a table in SQL Server where I have the scores for some competencies; I have one score for the standard and one for the actual score. For instance, S25 is the actual score and C25 is the standard for the score. I need to find the difference between the two so I can see who was above and below the standard, and I cannot figure out how to get the subtraction to work. The way I tried was ``` Select (S25) - (C25) AS 25_Score ``` which did not work.
If the column alias starts with a number, bracket it, and that should work. What error do you get? ``` select (S25)-(C25) AS [25_Score] from table_name ```
Your query should work if your columns are a numeric datatype. The only issue I see is you are starting the alias with a number. You will need to escape the number value with a square bracket: ``` Select (S25) - (C25) AS [25_Score] from yt; ``` See [Demo](http://www.sqlfiddle.com/#!3/a12ab/2)
subtracting in SQL Server
[ "", "sql", "sql-server", "" ]
Why can't I delete a row from a temporary table? ``` DECLARE @tbl2 TABLE ( Id int, ImieNazwisko varchar(200), Pesel varchar(200), Kod varchar(200) ) DELETE FROM @tbl2 tblT WHERE tblT SELECT * FROM @tbl2 ``` also this doesn't work: ``` DELETE FROM @tbl2 WHERE @tbl2.Id ```
You can delete from a temprorary table. It's just your syntax seems wrong. Try this: ``` --drop table #temptable create table #temptable (Id int) insert into #temptable select 1 union select 2 union select 3 delete from #temptable where Id =2 select Id from #temptable ```
``` DECLARE @tbl2 TABLE ( Id int, ImieNazwisko varchar(200), Pesel varchar(200), Kod varchar(200) ) DELETE FROM @tbl2 WHERE Id = 1 -- for a single row delete SELECT * FROM @tbl2 ``` -- if in your case you want to delete all data from the table, it should be just ``` delete from @tbl2 ```
Temporary table delete row
[ "", "sql", "t-sql", "" ]
My call to check\_angles returns the following instead of True: ``` <bound method Triangle.check_angles of <Triangle object at 0x7fb209a66b50>> ``` Here's the code: ``` class Triangle(object): number_of_sides = 3 def __init__(self, angle1, angle2, angle3): self.angle1 = angle1 self.angle2 = angle2 self.angle3 = angle3 def check_angles(): if angle1 + angle2 + angle3 == 180: return True else: return False my_triangle = Triangle(60, 60, 60) print my_triangle.number_of_sides print my_triangle.check_angles ```
You're missing the parentheses to the method call, first. Next, you have to provide `self` as a parameter to any method in a class. ``` def check_angles(self): ``` Also, you don't want to use `angle1`, `angle2`, or `angle3` - you need to prepend those with `self.` before you can use them in the proper scope. Finally, a style thing: You could just return `self.angle1 + self.angle2 + self.angle3 == 180`, since it's boolean.
You're missing `()` at the end of the method. The output is correct: `my_triangle.check_angles` returns the function itself, so the text you get is the description of that function. To actually print the result, just do `print my_triangle.check_angles()`. PS. Please watch out with floating point numbers. As soon as you use something other than integers, the sum may not be exactly `180`. It will be a number very close to it. If you need anything other than integers, then `abs(result-180) < 1e-6` (or some other small number to compare to) will be better.
Beginner python: what's wrong with my call to check_angles? (or the method itself, if it's that)
[ "", "python", "" ]
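Putting the three fixes from the answers together (take `self`, reach the angles through `self.`, and call the method with parentheses) gives a minimal working version:

```python
class Triangle(object):
    number_of_sides = 3

    def __init__(self, angle1, angle2, angle3):
        self.angle1 = angle1
        self.angle2 = angle2
        self.angle3 = angle3

    def check_angles(self):
        # attributes must be reached through self; the comparison
        # already yields a boolean, so no if/else is needed
        return self.angle1 + self.angle2 + self.angle3 == 180

ok = Triangle(60, 60, 60).check_angles()   # note the () to actually call it
bad = Triangle(60, 60, 90).check_angles()
```

Without the trailing `()`, the expression is the bound method object itself, which is exactly the `<bound method ...>` text the questioner saw.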
Assume I have a list of this type: ``` # 0 1 2 3 4 5 6 7 8 9 10 11 -- list index li=[-1, -1, 2, 2, -1, 1, 1, 1, 1, 1, -1, -1 ] ``` I want to find each index for which the value is the same for the `n` following indices. I can do it (laboriously) this way: ``` def sub_seq(li,n): ans={} for x in set(li): ans[x]=[i for i,e in enumerate(li[:-n+1]) if all(x==y for y in li[i:i+n])] ans={k:v for k,v in ans.items() if v} return ans li=[-1, -1, 2, 2, -1, 1, 1, 1, 1, 1, -1, -1] for i in (5,4,3,2): print i, sub_seq(li,i) ``` Prints: ``` 5 {1: [5]} 4 {1: [5, 6]} 3 {1: [5, 6, 7]} 2 {1: [5, 6, 7, 8], 2: [2], -1: [0, 10]} ``` Is there a better way to do this?
Analyzing data is typically easier if you first convert it to a convenient form. In this case, a [run-length-encoding](http://en.wikipedia.org/wiki/Run-length_encoding) would be a good starting point: ``` from itertools import groupby, accumulate from collections import defaultdict def sub_seq(li, n): d = defaultdict(list) rle = [(k, len(list(g))) for k, g in groupby(li)] endpoints = accumulate(size for k, size in rle) for end_index, (value, count) in zip(endpoints, rle): for index in range(end_index - count, end_index - n + 1): d[value].append(index) return dict(d) ```
As Raymond Hettinger points out in his answer, [`groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby) makes easier to check consecutive values. If you also enumerate the list, you can keep the corresponding indices and add them to the dictionary (I use [`defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict) to make the function as short as possible): ``` from itertools import groupby from operator import itemgetter from collections import defaultdict li = [-1, -1, 2, 2, -1, 1, 1, 1, 1, 1, -1, -1] def sub_seq(li, n): res = defaultdict(list) for k, g in groupby(enumerate(li), itemgetter(1)): l = list(map(itemgetter(0), g)) if n <= len(l): res[k] += l[0:len(l)-n+1] return res for i in (5,4,3,2): print i, sub_seq(li,i) ``` Which prints: ``` 5 defaultdict(<type 'list'>, {1: [5]}) 4 defaultdict(<type 'list'>, {1: [5, 6]}) 3 defaultdict(<type 'list'>, {1: [5, 6, 7]}) 2 defaultdict(<type 'list'>, {1: [5, 6, 7, 8], 2: [2], -1: [0, 10]}) ```
Sequence of elements in a list satisfying a condition
[ "", "python", "list", "" ]
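The run-length idea behind both answers can be checked on the question's own list: group consecutive equal values with `itertools.groupby`, then keep every start index whose window of n fits inside a run. The implementation below is a sketch reusing the question's `sub_seq` name:

```python
from itertools import groupby

li = [-1, -1, 2, 2, -1, 1, 1, 1, 1, 1, -1, -1]

def sub_seq(li, n):
    result = {}
    pos = 0
    for value, group in groupby(li):
        length = len(list(group))
        if length >= n:
            # every start index whose n-wide window fits inside this run
            result.setdefault(value, []).extend(range(pos, pos + length - n + 1))
        pos += length
    return result
```

A run of length L contributes L - n + 1 start positions, which reproduces the sample outputs in the question.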
I have a table with the following structure and data: ``` batsman | runs | year 1 | 800 | 2012 1 | 950 | 2011 1 | 1050 | 2010 2 | 550 | 2012 2 | 650 | 2011 2 | 400 | 2010 3 | 900 | 2012 ``` This data needs to be Selected through a sql query as: ``` batsman | 2012 | 2011 | 2010 1 | 800 | 950 | 1050 2 | 550 | 650 | 400 3 | 900 | - | - ``` I'm trying to do this through a stored proc. The assumption can be made that the number of columns (in terms of years) is fixed: 3. Also note, there are no arithmetic operations necessary - all the numbers I need are already there, they just need to be represented column-wise.
There are several ways that you can convert the rows of data into columns. In SQL Server you can use the PIVOT function: ``` select batsman, [2012], [2011], [2010] from ( select batsman, runs, year from yourtable ) d pivot ( sum(runs) for year in ([2012], [2011], [2010]) ) piv; ``` Or you can use an aggregate function with a `CASE` expression: ``` select batsman, sum(case when year = 2012 then runs else 0 end) [2012], sum(case when year = 2011 then runs else 0 end) [2011], sum(case when year = 2010 then runs else 0 end) [2010] from yourtable group by batsman; ``` The other version will work great if you have a known number of columns. But if you are going to have an unknown number of `year` values, then you will need to use dynamic SQL: ``` DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT distinct ',' + QUOTENAME(year) from yourtable FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = 'SELECT batsman,' + @cols + ' from ( select batsman, runs, year from yourtable ) x pivot ( sum(runs) for year in (' + @cols + ') ) p ' execute(@query) ```
Please try PIVOT: ``` declare @tbl as table(batsman int, runs int, yearr int) insert into @tbl values (1, 800, 2012), (1, 950, 2011), (1, 1050, 2010), (2, 550, 2012), (2, 650, 2011), (2, 400, 2010), (3, 900, 2012) select * From @tbl select * from ( select * from @tbl ) d pivot ( max(runs) for yearr in ([2012], [2011], [2010]) ) piv; ```
Represent multiple rows of data in one column as multiple columns in one row SQL server
[ "", "sql", "sql-server-2008", "stored-procedures", "pivot", "" ]
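The conditional-aggregation variant from the accepted answer works on engines without `PIVOT` too. A sketch using Python's `sqlite3` as a stand-in for SQL Server, with the question's rows (the `y2012`-style aliases are invented, since SQLite cannot use bare numeric column names the way `[2012]` does):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batting (batsman INTEGER, runs INTEGER, year INTEGER)")
conn.executemany("INSERT INTO batting VALUES (?, ?, ?)", [
    (1, 800, 2012), (1, 950, 2011), (1, 1050, 2010),
    (2, 550, 2012), (2, 650, 2011), (2, 400, 2010),
    (3, 900, 2012),
])

# One output column per year: SUM over a CASE picks out that year's runs.
rows = conn.execute("""
    SELECT batsman,
           SUM(CASE WHEN year = 2012 THEN runs ELSE 0 END) AS y2012,
           SUM(CASE WHEN year = 2011 THEN runs ELSE 0 END) AS y2011,
           SUM(CASE WHEN year = 2010 THEN runs ELSE 0 END) AS y2010
    FROM batting GROUP BY batsman ORDER BY batsman
""").fetchall()
```

Batsman 3 gets 0 for the missing years, matching the "-" cells in the desired output.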
I have a table which records email addresses for newsletter purposes. When someone subscribes it sets a subscribe ind to 'Y' and there is an incremental id. When someone subscribes/unsubscribes, I need to select all table entries for the email address, but then select based on the highest id. Once I have this info I can use the subscribe ind to see if they are subscribed or not. I have been trying to figure this one out but without success. I am trying something like: **mysql\_query** ``` SELECT * FROM newsletter WHERE email=$email and id=(select max(id) from newsletter) ``` Should I be doing something else?
``` SELECT col1,col2,max(id) FROM newsletter WHERE email=$email group by col1,col2 ``` **EDIT** ``` select * from newsletter WHERE email=$email and id in (select max(id) from newsletter group by email) ```
You need to group by email: ``` SELECT * FROM newsletter WHERE email=$email and id=(select max(id) from newsletter group by email) ```
SQL to select email and max id
[ "", "mysql", "sql", "max", "" ]
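The key point in the accepted fix is that the `MAX(id)` lookup must be tied to the same email. A correlated form of that idea can be verified with `sqlite3` (schema and the `subscribed` column name are invented to stand in for the subscribe indicator):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE newsletter (id INTEGER, email TEXT, subscribed TEXT)")
conn.executemany("INSERT INTO newsletter VALUES (?, ?, ?)", [
    (1, "a@x.com", "Y"),   # subscribed...
    (2, "a@x.com", "N"),   # ...then unsubscribed (this is the latest row)
    (3, "b@x.com", "Y"),
])

# The subquery is correlated on email, so each address is judged
# only by its own most recent (highest-id) row.
row = conn.execute("""
    SELECT subscribed FROM newsletter n
    WHERE email = ? AND id = (SELECT MAX(id) FROM newsletter WHERE email = n.email)
""", ("a@x.com",)).fetchone()
```

An uncorrelated `(SELECT MAX(id) FROM newsletter)`, as in the original attempt, would compare against the global maximum id and usually return nothing.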
I have an arbitrary stored procedure `usp_DoubleCheckLockInsert` that does an `INSERT` for multiple clients and I want to give the stored procedure exclusive access to writing to a table `SomeTable` when it is within the critical section `Begin lock` and `End lock`. ``` CREATE PROCEDURE usp_DoubleCheckLockInsert @Id INT ,@SomeValue INT AS BEGIN IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN BEGIN TRAN --Begin lock IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) ROLLBACK INSERT INTO SomeTable(Id, SomeValue) VALUES(@Id,@SomeValue); --End lock COMMIT END ``` I have seen how Isolation Level relates to updates, but is there a way to implement locking in the critical section, give the transaction the writing lock, or does TSQL not work this way? [Obtain Update Table Lock at start of Stored Procedure in SQL Server](https://stackoverflow.com/questions/2973447/obtain-update-table-lock-at-start-of-stored-procedure-in-sql-server)
A second approach which works for me is to combine the `INSERT` and the `SELECT` into a single operation. This index is needed only for efficiently querying `SomeTable`. Note that there is NOT a uniqueness constraint. However, if I were taking this approach, I would actually make the index unique. ``` CREATE INDEX [IX_SomeTable_Id_SomeValue_IsDelete] ON [dbo].[SomeTable] ( [Id] ASC, [SomeValue] ASC, [IsDelete] ASC ) ``` The stored proc, which combines the INSERT/SELECT operations: ``` CREATE PROCEDURE [dbo].[usp_DoubleCheckLockInsert] @Id INT ,@SomeValue INT ,@IsDelete bit AS BEGIN -- Don't allow dirty reads SET TRANSACTION ISOLATION LEVEL READ COMMITTED BEGIN TRAN -- insert only if data not existing INSERT INTO dbo.SomeTable(Id, SomeValue, IsDelete) SELECT @Id, @SomeValue, @IsDelete where not exists ( select * from dbo.SomeTable WITH (HOLDLOCK, UPDLOCK) where Id = @Id and SomeValue = @SomeValue and IsDelete = @IsDelete) COMMIT END ``` I did try this approach using multiple processes to insert data. (I admit though that I didn't exactly put a lot of stress on SQL Server). There were never any duplicates or failed inserts.
It seems all you are trying to do is to prevent duplicate rows from being inserted. You can do this by adding a unique index, with the option `IGNORE_DUP_KEY = ON`: ``` CREATE UNIQUE INDEX [IX_SomeTable_Id_SomeValue_IsDelete] ON [dbo].[SomeTable] ( [Id] ASC, [SomeValue] ASC, [IsDelete] ASC ) WITH (IGNORE_DUP_KEY = ON) ``` Any inserts with duplicate keys will be ignored by SQL Server. Running the following: ``` INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete]) VALUES(0,0,0) INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete]) VALUES(1,1,0) INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete]) VALUES(2,2,0) INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete]) VALUES(0,0,0) ``` Results in: ``` (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) Duplicate key was ignored. (0 row(s) affected) ``` I did not test the above using multiple processes (threads), but the results in that case should be the same - SQL Server should still ignore any duplicates, no matter which thread is attempting the insert. See also [Index Options](http://msdn.microsoft.com/en-us/library/ms186869.aspx) at MSDN.
TSQL implementing double check locking
[ "", "sql", "sql-server-2008", "stored-procedures", "concurrency", "locking", "" ]
In my database all tables are using a common table for Sequence(ID\_Table). TABLE\_ID has two fields (Common\_ID, Table\_Name). If I insert any record in the table, I have to first insert a record in Table\_ID(Auto-increment, Table\_name) then use that Auto-increment value in my Other Table. For example, I want to insert in Table\_Products which has fields ID(Common\_ID), Product\_Name, Product\_ID(Auto Increment) I want to do something like this: ``` INSERT INTO TABLE_ID (Table_NAME), Values (Table_Products) ``` Get the Inserted ID and use it in Table\_Products: ``` INSERT INTO Table_Products (ID, Product_Name, Product_ID(Auto Increment) VALUES (ID from TABLE_ID, SomeProduct, Increment) ```
Try this one - ``` DECLARE @ID BIGINT INSERT INTO dbo.TABLE_ID (Table_NAME) SELECT 'Table_Products' SELECT @ID = SCOPE_IDENTITY() INSERT INTO dbo.Table_Products (ID, Product_Name) SELECT @ID, 'SomeProduct' ```
Use SCOPE\_IDENTITY() after your insert statement to get the last inserted id. ``` DECLARE @Product_Id int INSERT INTO TABLE_ID (Table_NAME) VALUES ('Table_Products'); SELECT @Product_Id = SCOPE_IDENTITY(); INSERT INTO Table_Products (ID, Product_Name) VALUES (@Product_Id, 'SomeProduct') ```
MS SQL Server Last Inserted ID
[ "", "sql", "sql-server", "" ]
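The pattern in both answers is: insert the sequence row, read back its generated key, then use that key as the foreign key. The same flow looks like this with `sqlite3`, whose `lastrowid` plays the role of `SCOPE_IDENTITY()` (tables simplified and invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_id (common_id INTEGER PRIMARY KEY, table_name TEXT)")
conn.execute("CREATE TABLE products (id INTEGER, product_name TEXT)")

cur = conn.execute("INSERT INTO table_id (table_name) VALUES ('Table_Products')")
new_id = cur.lastrowid  # key generated by the INSERT above, scoped to this cursor
conn.execute("INSERT INTO products (id, product_name) VALUES (?, ?)",
             (new_id, "SomeProduct"))

stored = conn.execute("SELECT id, product_name FROM products").fetchone()
```

As with `SCOPE_IDENTITY()`, the key is read from the connection that performed the insert, so concurrent writers cannot hand you someone else's id.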
I am very new to Python (in the past I used Mathematica, Maple, or Matlab scripts). I am very impressed how NumPy can evaluate functions over arrays but having problems trying to implement it in several dimensions. My question is very simple (please don't laugh): is there a more elegant and efficient way to evaluate some function f (which is defined over R^2) *without* using loops? ``` import numpy M=numpy.zeros((10,10)) for i in range(0,10): for j in range(0,10): M[i,j]=f(i,j) return M ```
The goal when coding with numpy is to implement your computation on the whole array, as much as possible. So if your function is, for example, `f(x,y) = x**2 +2*y` and you want to apply it to all integer pairs `x,y` in `[0,10]x[0,10]`, do: ``` x,y = np.mgrid[0:10, 0:10] fxy = x**2 + 2*y ``` If you don't find a way to express your function in such a way, then: 1. Ask how to do it (and state explicitly the function definition) 2. use numpy.vectorize Same example using `vectorize`: ``` def f(x,y): return x**2 + 2*y x,y = np.mgrid[0:10, 0:10] fxy = np.vectorize(f)(x.ravel(),y.ravel()).reshape(x.shape) ``` Note that in practice I only use `vectorize` similarly to python `map` when the content of the arrays are not numbers. A typical example is to compute the length of all list in an array of lists: ``` # construct a sample list of lists list_of_lists = np.array([range(i) for i in range(1000)]) print np.vectorize(len)(list_of_lists) # [0,1 ... 998,999] ```
Yes, many numpy functions operate on N-dimensional arrays. Take this example: ``` >>> M = numpy.zeros((3,3)) >>> M[0][0] = 1 >>> M[2][2] = 1 >>> M array([[ 1., 0., 0.], [ 0., 0., 0.], [ 0., 0., 1.]]) >>> M > 0.5 array([[ True, False, False], [False, False, False], [False, False, True]], dtype=bool) >>> numpy.sum(M) 2.0 ``` Note the difference between numpy.sum, which operates on N-dimensional arrays, and sum, which only goes 1 level deep: ``` >>> sum(M) array([ 1., 0., 1.]) ``` So if you build your function f() out of operations that work on n-dimensional arrays, then f() itself will work on n-dimensional arrays.
Python: An elegant/efficient way to evaluate function over bi-dimensional indexes?
[ "python", "arrays", "performance", "loops" ]
I am trying to check the database if a specific combination exists.

Table: `conversations`

```
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
|  5 |
+----+
```

Table: `conversations_users`

```
+----+--------------+------+
| id | conversation | user |
+----+--------------+------+
|  1 |            1 |    1 |
|  2 |            1 |    2 |
|  3 |            2 |    1 |
|  4 |            2 |    2 |
|  5 |            2 |    3 |
|  6 |            2 |    4 |
|  7 |            3 |    2 |
|  8 |            3 |    3 |
|  9 |            4 |    2 |
| 10 |            4 |    4 |
+----+--------------+------+
```

Then I want to make a query that finds the conversation containing exactly these users:

```
Users: 1, 2, 3, 4 (only them, no one else)
```

If there is a conversation with only those users in it, I want to get the id of that conversation; otherwise the `result` should become `0`.

Anyone have any ideas how to do this trick?
This is an example of a "set-within-sets" query. For these, I like to use `group by` with the `having` clause:

```
select conversation
from conversations_users cu
group by conversation
having SUM(user = 1) > 0 and
       sum(user = 2) > 0 and
       sum(user = 3) > 0 and
       sum(user = 4) > 0 and
       sum(user not in (1, 2, 3, 4)) = 0
```

Each condition of the having clause corresponds to one of the five conditions specified in the question:

* user 1 is in the conversation
* user 2 is in the conversation
* user 3 is in the conversation
* user 4 is in the conversation
* no other users are in the conversation
The idea is to count the distinct users in a given conversation. If it matches the number of users you set in the `IN` clause, then you are sure that only the ones you searched for are present:

```
SELECT id
FROM conversations_users
WHERE user in (1, 2, 3, 4)
GROUP BY id
HAVING COUNT(DISTINCT user) = 4
```

Note that this doesn't output conversations where only 3 of the 4 users appear. If you need these conversations too, then:

```
SELECT id
FROM conversations_users
WHERE user in (1, 2, 3, 4)
GROUP BY id
HAVING COUNT(DISTINCT user) <= 4
```
SELECT WHERE ON an advanced way
[ "mysql", "sql", "join" ]
I want to find the maximum temperature in a set of data and print the output as "The hottest temperature was x in y", where x and y are the temperature and city respectively. I have code like this:

```
data = [['Sheffield', '41.2', '35.5', '41.1'],
        ['Lancaster', '31.3', '40.2', '37.9'],
        ['Southampton', '34.8', '33.9', '32'],
        ['Manchester', '41.9', '41.5', '44.2'],
        ['Bristol', '42.1', '37.1', '42.2']]

hot = []
for row in data:
    for item in row:
        if item == max(row[1:]):
            hot.append(item)
    if max(hot) in row:
        print "The hottest temperature was {0} in {1}.".format(max(hot), row[0])
```

The outputs that were produced:

```
The hottest temperature was 41.2 in Sheffield.
The hottest temperature was 44.2 in Manchester.
```

Now I am confused by the outputs. I want to print only one line of output, which is supposed to be "The hottest temperature was 44.2 in Manchester.", since 44.2 is the maximum temperature in the data. Why was "The hottest temperature was 41.2 in Sheffield." printed too? Where did I get it wrong?
You check if the maximum value of `hot` is in `row` for each row, rather than checking once after all the rows have been processed. Try this:

```
hot = []
for row in data:
    for item in row:
        if item == max(row[1:]):
            hot.append(item)
    if max(hot) in row:
        max_row = row

print "The hottest temperature was {0} in {1}.".format(max(hot), max_row[0])
```

---

As an aside, you are storing all your temperatures as strings, not floats. You could get an odd result if there is a much wider spread of temperatures (`'5' > '35.3'` is true, for example).
You're building up the list as you iterate, and the `max` is operating on the list as you have it *so far*. When you get to Sheffield, it's the hottest you've seen so far, so it prints. But it can't know that Manchester is even hotter, as it hasn't seen it yet. The quickest way to fix this would be to do two loops: one to build up the list, then a second to find the hottest. (And, 44.2 in Manchester? In your dreams.)
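Concretely, the two-loop fix described above could look like this (a sketch using the asker's sample data; the temperatures are converted to float here so the final comparison is numeric rather than string-based):

```python
# Pass 1: collect every (temperature, city) pair.
data = [['Sheffield', '41.2', '35.5', '41.1'],
        ['Lancaster', '31.3', '40.2', '37.9'],
        ['Southampton', '34.8', '33.9', '32'],
        ['Manchester', '41.9', '41.5', '44.2'],
        ['Bristol', '42.1', '37.1', '42.2']]

readings = []
for row in data:
    city = row[0]
    for temp in row[1:]:
        readings.append((float(temp), city))

# Pass 2: a single max() over the finished list.
hottest, city = max(readings)
print("The hottest temperature was {0} in {1}.".format(hottest, city))
```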
Confuse with output in python
[ "python" ]
I am using scrapy for crawling the information from a webpage. I have written the spider code which works fine. I also have some other python code files which refine the crawled data. To put it all together, I have a batch file in which I first use the "Scrapy Crawl" command, and after that I run my python file to refine the crawled information. The problem is that the batch script stops at the end of the "Scrapy Crawl" command, and does not proceed with executing the lines after that in my batch file. How can I solve this problem?

Contents of the batch file:

```
CD "spiders_folder"
scrapy crawl mySpider -o outputData.json -t json
python refineData.py
```
The batch file logic handles calls to other programs as subroutines and returns from there back to the originating batch file after their execution is finished. AFAICR the exception to the rule was if one batch file calls another batch file. In this case the execution is not handled as a subroutine but is given over to the other batch file. Unless you use `call` to call the second batch file. (I might be mistaken, this is all very long ago for me, so please don't hit me too hard if I'm wrong.)

If my guess is correct and this is the reason for your trouble, it is because `scrapy` is a batch file itself. Then just call it like this:

```
call scrapy crawl mySpider -o outputData.json -t json
```
I don't know "scrapy", but you can try:

```
start "" "scrapy" crawl mySpider -o outputData.json -t json
```
Continue the batch script commands when using scrapy crawl command in a batch file
[ "python", "batch-file", "scrapy" ]
In my classic asp app, I am calling a stored procedure and it is supposed to get me a record set through a select statement. If I try the stored procedure's select statement in SQL Management Studio, I get about 100 rows, but when I try calling it in classic asp code, I get a record set that has a record count of -1. Does anyone know why this could be happening? Thanks.

**EDIT:**

```
Dim Conn
SET rsGetHireID = Server.CreateObject("ADODB.RecordSet")
SET Conn = Server.CreateObject("ADODB.Command")
Conn.CommandText = "sp_selectNewHireSQL"
Conn.CommandType = adCmdStoredProc
Conn.ActiveConnection = ConnectionString
Set rsGetHireID = Conn.Execute
NumOfHireID = rsGetHireID.RecordCount
Response.Write (NumOfHireID)
```

**OLD working code:**

```
newHireSQL = "select * from NewHire where Archived = 0 order by HireID desc"
Set rsGetHireID = Server.CreateObject("ADODB.Recordset")
rsGetHireID.Open newHireSQL,ConnectionString,adOpenStatic
```

NEW EDIT: It seems the fix was to remove the BEGIN and END from the stored procedure, which was causing a loop.
Remove the BEGIN and END that the stored procedure template puts by default. Having those causes a loop.
I am pretty sure that your problem is the type of cursor you are opening on your recordset. Not all of them support the recordcount property.

From MSDN:

> Use the RecordCount property to find out how many records are in a Recordset object. **The property returns -1 when ADO cannot determine the number of records or if the provider or cursor type does not support RecordCount**. Reading the RecordCount property on a closed Recordset causes an error.

To specify the cursor type you will need to use the open method on the recordset object.

```
'use CursorType=3 (Static)
rsGetHireID.Open CommandObjectOrSQLQuery, Conn, CursorType
```
My recordset is having a record count of -1?
[ "sql", "asp-classic" ]
In python I'm seeing evidence that fp.readlines() is closing the file when I try to access the fp later in the program. Can you confirm this behavior? Do I need to re-open the file again later if I also want to read from it again?

[Is the file closed?](https://stackoverflow.com/questions/16405926/is-the-file-closed) is similar, but didn't answer all of my questions.

```
import sys

def lines(fp):
    print str(len(fp.readlines()))

def main():
    sent_file = open(sys.argv[1], "r")
    lines(sent_file)
    for line in sent_file:
        print line
```

this returns:

```
20
```
Once you have read a file, the file pointer has been moved to the end and no more lines will be 'found' beyond that point. Re-open the file or seek back to the start:

```
sent_file.seek(0)
```

Your file is *not* closed; a closed file raises an exception when you attempt to access it:

```
>>> fileobj = open('names.txt')
>>> fileobj.close()
>>> fileobj.read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file
```
It doesn't close the file, but it does read the lines in it so they cannot be read again without reopening the file or setting the file pointer back to the beginning with `fp.seek(0)`.

As evidence that it doesn't close the file, try changing the function to actually close the file:

```
def lines(fp):
    print str(len(fp.readlines()))
    fp.close()
```

You will get the error:

```
Traceback (most recent call last):
  File "test5.py", line 16, in <module>
    main()
  File "test5.py", line 12, in main
    for line in sent_file:
ValueError: I/O operation on closed file
```
Does fp.readlines() close a file?
[ "python", "file-io" ]
While using the following code:

```
url = None
print("For 'The Survey of Cornwall,' press 1")
print("For 'The Adventures of Sherlock Holmes,' press 2")
print("For 'Pride and Prejudice,' press 3")
n = input("Which do you choose?")
if n==1:
    url = 'http://www.gutenberg.org/cache/epub/9878/pg9878.txt' #cornwall
    print("cornwall")
elif n==2:
    url = 'http://www.gutenberg.org/cache/epub/1661/pg1661.txt' #holmes
    print("holmes")
elif n==3:
    url = 'http://www.gutenberg.org/cache/epub/1342/pg1342.txt' #pap
    print("PaP")
else:
    print("That was not one of the choices")
```

I'm only getting the "else" case returned, why might that be??
`input()` returns a string in py3x. So, you need to convert it to `int` first.

```
n = int(input("Which do you choose?"))
```

Demo:

```
>>> '1' == 1
False
>>> int('1') == 1
True
```
`input()` returns a string, but you are comparing it to integers. You can convert the result from input to an integer with the `int()` function.
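A minimal sketch of that fix (the literal `"1"` stands in for whatever `input()` returned at the prompt):

```python
# input() in Python 3 always returns a string, so convert before comparing.
n = "1"        # stands in for: n = input("Which do you choose?")
assert n != 1  # the string '1' never equals the integer 1,
               # which is why only the else branch ran
n = int(n)     # convert once...
assert n == 1  # ...and the integer comparisons now succeed
```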
Using if/elif/else for a "switch" in python
[ "python" ]
I have some AJAX that polls the server every 5 seconds:

```
var date = $('article').first().find('time').text();
console.log(date);
setInterval(function() {
    $.post('pollNewEntries', {'date': date}, newEntrySuccess)
}, 5000);
```

Unfortunately, I'm getting a 403 error every time the AJAX tries to poll the server, stating that I have made an invalid CSRF request. I've used AJAX with forms before and included the CSRF token within the forms, but I'm not sure how I would do it with a formless AJAX request like above.
The solution to this problem is described in the Django documentation: <https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#ajax>

Add this code to the top of your js:

```
$.ajaxSetup({
    beforeSend: function(xhr, settings) {
        function getCookie(name) {
            var cookieValue = null;
            if (document.cookie && document.cookie != '') {
                var cookies = document.cookie.split(';');
                for (var i = 0; i < cookies.length; i++) {
                    var cookie = jQuery.trim(cookies[i]);
                    // Does this cookie string begin with the name we want?
                    if (cookie.substring(0, name.length + 1) == (name + '=')) {
                        cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                        break;
                    }
                }
            }
            return cookieValue;
        }
        if (!(/^http:.*/.test(settings.url) || /^https:.*/.test(settings.url))) {
            // Only send the token to relative URLs i.e. locally.
            xhr.setRequestHeader("X-CSRFToken", getCookie('csrftoken'));
        }
    }
});
```
You need to pass the `csrf` token along with your post data:

```
var date = $('article').first().find('time').text();
console.log(date);
setInterval(function() {
    $.post('pollNewEntries', {'date': date, 'csrfmiddlewaretoken': '{{csrf_token}}'}, newEntrySuccess)
}, 5000);
```
CSRF With Ajax Polling
[ "python", "ajax", "django" ]
This is what I have, currently. Is there any nicer way to do this?

```
import struct

def int32_to_uint32(i):
    return struct.unpack_from("I", struct.pack("i", i))[0]
```
Not sure if it's "nicer" or not...

```
import ctypes

def int32_to_uint32(i):
    return ctypes.c_uint32(i).value
```
using numpy for example:

```
import numpy

result = numpy.uint32( numpy.int32(myval) )
```

or even on arrays,

```
arr = numpy.array(range(10))
result = numpy.uint32( numpy.int32(arr) )
```
How to convert signed 32-bit int to unsigned 32-bit int?
[ "python", "integer", "unsigned", "signed", "type-conversion" ]
I have a query like this:

```
select CUS_Id, CUS_Name, CUS_JoinDate
from CUSTOMER
where CUS_joinDate between '04-12-2013' and '06-12-2013'
```

How can I make the `where` clause compute the range from today's date (today minus 1 month and today plus 1 month)? So every time I run the report it will depend on the current date and will automatically subtract and add 30 days.
try,

```
WHERE CUS_joinDate BETWEEN DATEADD(mm,-1,GETDATE()) AND DATEADD(mm,1,GETDATE())
```

* [SQLFiddle Demo](http://www.sqlfiddle.com/#!3/d41d8/13578)
You can use CURDATE() and DATEADD() **[W3SCHOOLS SQL DATES](http://www.w3schools.com/sql/sql_dates.asp)**
SQL where Date from today minus and plus days
[ "sql", "sql-server-2008", "t-sql", "sql-server-2005" ]
I'm using Selenium 2 / WebDriver with the Python API, as follows:

```
from selenium.webdriver.support import expected_conditions as EC

# code that causes an ajax query to be run

WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "div.some_result")));
```

I want to wait for **either** a result to be returned (`div.some_result`) **or** a "Not found" string. Is that possible? Kind of:

```
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "div.some_result"))
    or EC.presence_of_element_located((By.CSS_SELECTOR, "div.no_result")));
```

I realise I could do this with a CSS selector (`div.no_result, div.some_result`), but is there a way to do it using the Selenium expected conditions method?
I did it like this:

```
class AnyEc:
    """ Use with WebDriverWait to combine expected_conditions
        in an OR.
    """
    def __init__(self, *args):
        self.ecs = args
    def __call__(self, driver):
        for fn in self.ecs:
            try:
                res = fn(driver)
                if res:
                    return True  # Or return res if you need the element found
            except:
                pass
```

Then call it like...

```
from selenium.webdriver.support import expected_conditions as EC
# ...
WebDriverWait(driver, 10).until(AnyEc(
    EC.presence_of_element_located((By.CSS_SELECTOR, "div.some_result")),
    EC.presence_of_element_located((By.CSS_SELECTOR, "div.no_result"))))
```

Obviously it would be trivial to also implement an `AllEc` class likewise.

Nb. the `try:` block is odd. I was confused because some ECs return true/false while others will throw `NoSuchElementException` for False. The Exceptions are caught by WebDriverWait so my AnyEc thing was producing odd results because the first one to throw an exception meant AnyEc didn't proceed to the next test.
Ancient question but,

Consider how `WebDriverWait` works, in an example independent from selenium:

```
def is_even(n):
    return n % 2 == 0

x = 10
WebDriverWait(x, 5).until(is_even)
```

This will wait up to 5 seconds for `is_even(x)` to return `True`. Now, `WebDriverWait(7, 5).until(is_even)` will take 5 seconds and then raise a TimeoutException.

Turns out, you can return any non Falsy value and capture it:

```
def return_if_even(n):
    if n % 2 == 0:
        return n
    else:
        return False

x = 10
y = WebDriverWait(x, 5).until(return_if_even)
print(y)
# >> 10
```

Now consider how the methods of `EC` work:

```
print(By.CSS_SELECTOR)
# first note this is only a string
# >> 'css selector'

cond = EC.presence_of_element_located(('css selector', 'div.some_result'))
# this is only a function(*ish), and you can call it right away:
cond(driver)
# if element is in page, returns the element, raise an exception otherwise
```

You probably would want to try something like:

```
def presence_of_any_element_located(parent, *selectors):
    ecs = []
    for selector in selectors:
        ecs.append(EC.presence_of_element_located(('css selector', selector)))
    # Execute the 'EC' functions against 'parent'
    ecs = [ec(parent) for ec in ecs]
    return any(ecs)
```

this WOULD work if `EC.presence_of_element_located` returned `False` when `selector` is not found in `parent`, but it raises an exception. An easy-to-understand workaround would be:

```
def element_in_parent(parent, selector):
    matches = parent.find_elements_by_css_selector(selector)
    if len(matches) == 0:
        return False
    else:
        return matches

def any_element_in_parent(parent, *selectors):
    for selector in selectors:
        matches = element_in_parent(parent, selector)
        # if there is a match, return right away
        if matches:
            return matches
    # If list was exhausted
    return False

# let's try
any_element_in_parent(driver, 'div.some_result', 'div.no_result')
# if found in driver, will return matches, else, return False

# For convenience, let's make a version which takes a tuple containing
# the arguments (either one works):
cond = lambda args: any_element_in_parent(*args)
cond((driver, 'div.some_result', 'div.no_result'))
# exactly same result as above

# At last, wait up until 5 seconds for it
WebDriverWait((driver, 'div.some_result', 'div.no_result'), 5).until(cond)
```

My goal was to explain; [artfulrobot](https://stackoverflow.com/a/16464305/2661403) already gave a snippet for general use of actual `EC` methods. Just note that

```
class A(object):
    def __init__(...):
        pass
    def __call__(...):
        pass
```

is just a more flexible way to define functions (actually, a 'function-like', but that's irrelevant in this context)
Selenium Expected Conditions - possible to use 'or'?
[ "python", "selenium" ]
I'm currently using `hash` on tuples of integers and strings (and nested tuples of integers and strings etc.) in order to compute the uniqueness of some objects. Setting aside the possibility of hash collisions, I wonder: is the `hash` function on those data types guaranteed to return the same result across different versions of Python?
No. Apart from long-standing differences between 32- and 64-bit versions of Python, the hashing algorithm [was changed in Python 3.3](http://docs.python.org/3/reference/datamodel.html#object.__hash__) to resolve a security issue:

> By default, the `__hash__()` values of str, bytes and datetime objects are “salted” with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
>
> This is intended to provide protection against a denial-of-service caused by carefully-chosen inputs that exploit the worst case performance of a dict insertion, O(n^2) complexity. See <http://www.ocert.org/advisories/ocert-2011-003.html> for details.
>
> Changing hash values affects the iteration order of dicts, sets and other mappings. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).

As a result, from 3.3 onwards `hash()` is not even guaranteed to return the same result across different invocations of the same Python version.
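One way to observe (and control) this is the `PYTHONHASHSEED` environment variable introduced alongside this change: pinning it makes separate interpreter processes agree on `hash()` again. A small sketch; `hash_in_new_process` is a helper name made up for this illustration:

```python
import os
import subprocess
import sys

def hash_in_new_process(seed=None):
    """Return hash('some string') as computed by a fresh interpreter."""
    env = dict(os.environ)
    if seed is None:
        env.pop("PYTHONHASHSEED", None)    # leave randomization in effect
    else:
        env["PYTHONHASHSEED"] = str(seed)  # pin the seed (0 disables randomization)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('some string'))"], env=env)
    return int(out)

# With the seed pinned, separate interpreter processes agree;
# with randomization on (Python >= 3.3), they usually do not.
assert hash_in_new_process(seed=0) == hash_in_new_process(seed=0)
```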
I'm not sure what you are looking for, but you can always use [`hashlib`](http://docs.python.org/2/library/hashlib.html) if you're looking for consistent hashing.

```
>>> import hashlib
>>> t = ("values", "other")
>>> hashlib.sha256(str(t).encode("utf-8")).hexdigest()
'bc3ed71325acf1386b40aa762b661bb63bb72e6df9457b838a2ea93c95cc8f0c'
```

OR:

```
>>> h = hashlib.sha256()
>>> for item in t:
...     h.update(item.encode("utf-8"))
...
>>> h.hexdigest()
'5e98df135627bc8d98250ca7e638aeb2ccf7981ce50ee16ce00d4f23efada068'
```
Does python's hash function remain identical across different versions?
[ "python", "hash", "compatibility", "backwards-compatibility" ]
I'm currently displaying a dataset using django-tables2. The docs make no mention of this in particular, so I'm guessing this'll probably take some table overriding - but I'm hopeful someone out there has already accomplished this.

How can I render page numbers using django-tables2 below my table? What I'd like to be able to display is a horizontal list of page numbers that the user can click.

Thanks in advance.
you need to create a custom page rendering template - you don't need to override any classes. To do that, start by copying the file

```
PYTHON\Lib\site-packages\django_tables2\templates\django_tables2\table.html
```

to the `templates` directory inside your django application and rename it to `mytable.html` or whatever else you like.

Now, you need to change the *pagination* block of that file. There are many ways to do what you like, but a simple way is to add the following lines inside the pagination block (you may remove or keep the other things that are there depending on your specific needs):

```
{% block pagination.allpages %}
    {% for p in table.paginator.page_range %}
        <a href="{% querystring table.prefixed_page_field=p %}">{{ p }}</a>
    {% endfor %}
{% endblock pagination.allpages %}
```

Finally, to use your template, just pass your custom template name to the render_table command:

```
{% load render_table from django_tables2 %}
...
{% render_table table "mytable.html" %}
```

This is very simple and will give you trouble if you have many pages (so you have to use some `if`s to check the number of pages through the `table.paginator.num_pages` variable). Also, you may highlight the current page and disable the link by using the `table.page.number` variable. The above are left as an exercise to the reader :)
Improving on @Serafeim's answer (or solving the exercise he left): Here is a pagination block which, using only Django template syntax, renders page numbers that:

* are enclosed in a `<ul>` HTML block, with CSS classes that "play well" with Bootstrap;
* if there are more than 8 pages, at most 3 pages below and above the current page are shown;
* first and last pages are always shown, with ellipsis between them and the start or end of the range (if needed).

```
{% with current_page=table.page.number page_count=table.paginator.num_pages rows_per_page=table.page.object_list|length total_rows=table.page.paginator.count %}
{% block pagination %}
<ul class="pagination">
    {% block pagination.allpages %}
    <li class="current">
        {% blocktrans %}Page {% endblocktrans %}
    </li>
    {% for page in table.paginator.page_range %}
        {% with range_start=current_page|add:"-3" range_end=current_page|add:"3" page_count_minus_5=page_count|add:"-5" page_count_minus_1=page_count|add:"-1" %}
        {% if page == current_page %}
            <li class="active">
                <span>{{ page }}</span>
            </li>
        {% elif page == 1 or page >= range_start and page <= range_end or page == page_count %}
            <li class="next">
                <a href="{% querystring table.prefixed_page_field=page %}">{{ page }}</a>
            </li>
        {% endif %}
        {% if page == 1 and current_page > 5 or page == page_count_minus_1 and current_page <= page_count_minus_5 %}
            <li class="current">...</li>
        {% endif %}
        {% endwith %}
    {% endfor %}
    {% endblock pagination.allpages %}
    {% block pagination.cardinality %}
    <li class="cardinality">
        {% if total_rows != rows_per_page %}{% blocktrans %} {{ rows_per_page }} of {{ total_rows }}{% endblocktrans %}
        {% else %} {{ total_rows }} {% endif %}
        {% if total_rows == 1 %} {{ table.data.verbose_name }}
        {% else %} {{ table.data.verbose_name_plural }} {% endif %}
    </li>
    {% endblock pagination.cardinality %}
</ul>
{% endblock pagination %}
{% endwith %}
```
Displaying Page Numbers with django-tables2
[ "python", "django", "django-tables2" ]
Is it possible to make a foreign key from a datetime datatype? I tried this but I got an error message:

```
Msg 1776, Level 16, State 0, Line 1
There are no primary or candidate keys in the referenced table 'penduduk' that match the referencing column list in the foreign key 'tgllahir'.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint. See previous errors.
```

I used this query.

PARENT TABLE:

```
create table penduduk (
    no int identity(1,1),
    noktp char(11) primary key,
    nama varchar(20),
    tgl_lahir datetime NOT NULL,
    namahari varchar(20),
    tgl int,
    bulan int,
    namabulan varchar(20),
    tahun int,
    umur int
)

CREATE TABLE tua(
    noktp CHAR(11) PRIMARY KEY,
    tgl_lahir datetime NOT NULL CONSTRAINT tgllahir FOREIGN KEY REFERENCES penduduk(tgl_lahir),
    FOREIGN KEY(noktp) REFERENCES penduduk(noktp),
)
```
Try this:

```
create table penduduk (
    no int identity(1,1),
    noktp char(11) primary key,
    nama varchar(20),
    tgl_lahir datetime NOT NULL unique,
    namahari varchar(20),
    tgl int,
    bulan int,
    namabulan varchar(20),
    tahun int,
    umur int
)

CREATE TABLE tua(
    noktp CHAR(11) PRIMARY KEY,
    tgl_lahir datetime NOT NULL CONSTRAINT tgllahir FOREIGN KEY REFERENCES penduduk(tgl_lahir),
    FOREIGN KEY(noktp) REFERENCES penduduk(noktp),
)
```
You can only use a column as a reference in a Foreign Key constraint if it's a suitable candidate key. From Books Online:

> A FOREIGN KEY constraint does not have to be linked only to a PRIMARY KEY constraint in another table; it can also be defined to reference the columns of a UNIQUE constraint in another table.

See [Foreign Key Constraints](http://msdn.microsoft.com/en-us/library/ms175464%28v=sql.105%29.aspx).

In your case `tgl_lahir` is neither Unique nor a Primary Key, so it cannot be used in your Foreign Key constraint. If you add a Unique constraint to `tgl_lahir` it should work; whether that's workable with your data is the real question.
pointing foreign key to datetime
[ "sql", "sql-server", "sql-server-2008" ]
I am trying to install virtualenv and/or virtualenvwrapper on a Mac OS X 10.8.3.

I have been fighting with python for the last two days. Finally I was able to install Python 2.7.4 using brew. Before that I had `virtualenv` installed using `easy_install`. Then I tried to uninstall it, trying to get my computer into the same situation as the one of my colleagues. Maybe I uninstalled it with success, maybe not. I don't know how to test it.

Now I am supposed to install virtualenv using:

```
pip install virtualenv
```

But it gives me:

```
Could not find an activated virtualenv (required).
```

`pip install virtualenvwrapper` gives exactly the same output.

Also the variable `PIP_RESPECT_VIRTUALENV` is null:

```
echo $PIP_RESPECT_VIRTUALENV
```

How can I solve this issue?
Open your `~/.bashrc` file and see if this line is there -

```
export PIP_REQUIRE_VIRTUALENV=true
```

It might be causing the trouble. If it's there, change it to `false` and run -

```
source ~/.bashrc
```

If not, run `export PIP_REQUIRE_VIRTUALENV=false` from the terminal.

Note: everything works the same if you have `.bash_profile` instead of `.bashrc` in your current user's root directory.
@Bibhas has it; +1 to look for `export PIP_REQUIRE_VIRTUALENV=true` in `~/.profile` or `~/.bashrc`. You can confirm the setting in your current shell with `env | grep PIP_REQUIRE_VIRTUALENV`.

This setting is a good safety check; more often than not, you'll want to be installing things into virtualenvs. However, sometimes you *do* want to be working with the global/system python. In those cases, take a look at [--isolated](https://pip.pypa.io/en/latest/reference/pip.html#cmdoption--isolated):

> Run pip in an isolated mode, ignoring environment variables and user configuration.

```
$ pip install --upgrade pip
Could not find an activated virtualenv (required).
$ pip install --upgrade pip --isolated
Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages
$ pip freeze --isolated
...
```
pip: Could not find an activated virtualenv (required)
[ "python", "osx-mountain-lion", "virtualenv", "pip" ]
```
def isUpper(x):
    if ord(x)>64 and ord(x)<91:
        return True
    else:
        return False

def isLower(x):
    if ord(x)>96 and ord(x)<123:
        return True
    else:
        return False
```

Having written the above functions, I now need to write the function letterSplit using only isUpper, isLower and the higher-order filter function. I came up with:

```
def letterSplit(strng):
    if strng=='':
        return 0
    else:
        for x in strng:
            for y in strng:
                filter(isUpper(x),strng)
                filter(isLower(y),strng)
                return (x,y)
```

The error that arises says bool obj not callable. Help me please ~ I don't quite have a handle on the filter function.
The problem is that `isUpper` is your function, and `isUpper(x)` is a boolean value (`isUpper` evaluated with the argument `x`). Thus, what you actually want seems to be this:

```
def letterSplit(strng):
    return (filter(isUpper,strng), filter(isLower,strng)) if strng else 0
```

more verbosely, this comes out to be:

```
def letterSplit(strng):
    if not strng:
        return 0
    uppers = filter(isUpper,strng)
    lowers = filter(isLower,strng)
    return uppers, lowers
```
If I understood your question then I think you're trying to filter out uppercase and lowercase letters from a string.

```
def letterSplit(strs):
    if not strs:
        return
    else:
        # filter(isUpper,strs) is equivalent to "".join([x for x in strs if isUpper(x)])
        upper = filter(isUpper, strs)  # filter's first argument is simply function name
        lower = filter(isLower, strs)
        return upper, lower
        # in py3x filter returns an iterator,
        # so you'll have to use "".join(upper), "".join(lower)

print letterSplit("SomeWeirdLookingStriNG")
```

**output:**

```
('SWLSNG', 'omeeirdookingtri')
```

**help** on `filter`:

```
>>> filter?
Type:       builtin_function_or_method
Docstring:
filter(function or None, sequence) -> list, tuple, or string

Return those items of sequence for which function(item) is true.  If
function is None, return the items that are true.  If sequence is a tuple
or string, return the same type, else return a list.
```
Correct filter function
[ "python", "function", "filter" ]
I want to run a set of queries to insert some data into an SQL table, but only if a record satisfying certain criteria is not already present.

The table has 4 fields: `id` (primary), `fund_id`, `date` and `price`.

I have 3 fields in the query: `fund_id`, `date` and `price`.

So my query would go something like this:

```
INSERT INTO funds (fund_id, date, price)
VALUES (23, '2013-02-12', 22.43)
WHERE NOT EXISTS (
    SELECT *
    FROM funds
    WHERE fund_id = 23
      AND date = '2013-02-12'
);
```

So I only want to insert the data if a record matching the `fund_id` and `date` does not already exist. If the above is correct, it strikes me as quite an inefficient way of achieving this, as an additional select statement must be run each time.

Is there a better way of achieving the above?

Edit: For clarification, neither `fund_id` nor `date` is a unique field; records sharing the same fund\_id or date will exist, but no record should have both the same fund\_id and date as another.
Although the answer I originally marked as chosen is correct and achieves what I asked, there is a better way of doing this (which others acknowledged but didn't go into). A composite unique index should be created on the table consisting of `fund_id` and `date`.

```
ALTER TABLE funds ADD UNIQUE KEY `fund_date` (`fund_id`, `date`);
```

Then when inserting a record, add the condition for when a conflict is encountered:

```
INSERT INTO funds (`fund_id`, `date`, `price`)
    VALUES (23, DATE('2013-02-12'), 22.5)
    ON DUPLICATE KEY UPDATE `price` = `price`; -- this keeps the price what it was (no change to the table)
```

or:

```
INSERT INTO funds (`fund_id`, `date`, `price`)
    VALUES (23, DATE('2013-02-12'), 22.5)
    ON DUPLICATE KEY UPDATE `price` = 22.5; -- this updates the price to the new value
```

This will provide much better performance than a sub-query, and the structure of the table is superior. It comes with the caveat that you can't have NULL values in your unique key columns, as they are still treated as values by MySQL.
This might be a simple solution to achieve this:

```
INSERT INTO funds (ID, date, price)
SELECT 23, DATE('2013-02-12'), 22.5
  FROM dual
 WHERE NOT EXISTS (SELECT 1
                     FROM funds
                    WHERE ID = 23
                      AND date = DATE('2013-02-12'));
```

**p.s.** alternatively (if `ID` is a primary key):

```
INSERT INTO funds (ID, date, price)
    VALUES (23, DATE('2013-02-12'), 22.5)
ON DUPLICATE KEY UPDATE ID = 23; -- or whatever you need
```

see [this Fiddle](http://sqlfiddle.com/#!9/03384/1).
SQL Insert into table only if record doesn't exist
[ "", "mysql", "sql", "sql-insert", "" ]
My code currently counts things into a heavily nested `dict`. I have items that need to be indexed by 3 values and then counted. So, before my loop, I initialize a nested `defaultdict` like so:

```
from collections import defaultdict

type_to_count_dic = defaultdict(
    lambda: defaultdict(
        lambda: defaultdict(int)
    )
)
```

Which allows me to count the items within a tight loop like so:

```
for a in ...:
    for b in ...:
        for c in ...:
            type_to_count_dic[a][b][c] += 1
```

Initializing all those `defaultdict`s feels a lot like making a type declaration in something like Java. Is there a more idiomatic/Pythonic way of doing something like this?
```
from collections import defaultdict

class _defaultdict(defaultdict):
    def __add__(self, other):
        # Lets `tree[k] += 1` work: the freshly auto-created subtree plus 1
        # simply evaluates to 1, which then replaces the subtree.
        return other

def CountTree():
    return _defaultdict(CountTree)
```

---

```
>>> t = CountTree()
>>> t['a']
defaultdict(<function CountTree at 0x9e5c3ac>, {})
>>> t['a']['b']['c'] += 1
>>> print t['a']['b']['c']
1
```
Since you are counting things, you should use a Counter for the inner-most dict:

```
import collections
defaultdict = collections.defaultdict
Counter = collections.Counter

x = defaultdict(lambda: defaultdict(Counter))
for a in A:
    for b in B:
        x[a][b].update(C)
```

Using a Counter will give you access to useful methods such as [most\_common](http://docs.python.org/2/library/collections.html#collections.Counter.most_common). Depending on what you intend to do with this dict, you may not need the deep nesting. Instead, you could use a tuple for the key. For example,

```
import collections
import itertools as IT

A = range(2)
B = 'XYZ'
C = 'abc'
x = collections.Counter(IT.product(A, B, C))
print(x)
```

yields

```
Counter({(0, 'X', 'c'): 1, (0, 'Z', 'a'): 1, (1, 'Z', 'a'): 1, (1, 'X', 'c'): 1, (1, 'Z', 'b'): 1, (0, 'X', 'b'): 1, (0, 'Y', 'a'): 1, (1, 'Y', 'a'): 1, (0, 'Z', 'c'): 1, (1, 'Z', 'c'): 1, (0, 'X', 'a'): 1, (0, 'Y', 'b'): 1, (1, 'X', 'a'): 1, (1, 'Y', 'b'): 1, (0, 'Z', 'b'): 1, (1, 'Y', 'c'): 1, (1, 'X', 'b'): 1, (0, 'Y', 'c'): 1})
```
More Pythonic way of counting things in a heavily nested defaultdict
[ "", "python", "defaultdict", "" ]
I have a table `rating` with slightly less than 300k rows and a SQL query:

```
SELECT rt1.product_id as id1, rt2.product_id as id2, sum(1), sum(rt1.rate-rt2.rate) as sum
FROM rating as rt1
JOIN rating as rt2 ON rt1.user_id = rt2.user_id AND rt1.product_id != rt2.product_id
group by rt1.product_id, rt2.product_id
LIMIT 1
```

The problem is: it's really slow. It takes 36 secs to execute with `limit 1`, while I need to execute it without a limit. As far as I can tell, the slowdown is caused by the `GROUP BY` part. It works fine when grouping by a single column, no matter whether it comes from rt1 or rt2. I have also tried indexes; I have already created indexes for user\_id, product\_id, rate and (user\_id, product\_id). `EXPLAIN` doesn't tell me much either.

```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE rt1 ALL PRIMARY,user_id,user_product NULL NULL NULL 289700 Using temporary; Using filesort
1 SIMPLE rt2 ref PRIMARY,user_id,user_product user_id 4 mgrshop.rt1.user_id 30 Using where
```

I need this to execute just once to generate some data, so the execution time doesn't have to be optimal, just reasonable. Any ideas?

Edit. Full table schema:

```
CREATE TABLE IF NOT EXISTS `rating` (
`user_id` int(11) NOT NULL,
`product_id` int(11) NOT NULL,
`rate` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`product_id`),
KEY `user_id` (`user_id`),
KEY `product_id` (`product_id`),
KEY `user_product` (`user_id`,`product_id`),
KEY `rate` (`rate`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```
First I did it via a temp table: I selected the rows without grouping and put them into a table made just for that, which gave me over 11 million rows. Then I grouped them from the temp table and put the result into the final table. Then I also tried to do this without creating any other table, and it worked for me as well.

```
SELECT id1, id2, sum(count), sum(sum)
FROM
  (SELECT rt1.product_id as id1,
          rt2.product_id as id2,
          1 as count,
          rt1.rate - rt2.rate as sum
   FROM rating as rt1
   JOIN rating as rt2 ON rt1.user_id = rt2.user_id
   AND rt1.product_id != rt2.product_id) as temptab
GROUP BY id1, id2
```

And I finally got about 19k rows. Execution time: 35.8669 s. Not bad for my case of one-time data generation.
Your problem is in the join, specifically `AND rt1.product_id != rt2.product_id`. Let's say a user has rated 100 products; for that user alone, this query will generate 9,900 rows (100 * 99) before it does the group by. For each of the 100 ratings, the table gets joined back to itself 99 times. What is the question you are trying to answer with this query? Depending on that, there may be some more efficient approaches. It's just hard to tell what you are trying to achieve here.
Slow SQL query when grouping by two columns with self join
[ "", "mysql", "sql", "performance", "join", "group-by", "" ]
I wrote a program in Python, importing OpenCV's libraries. Now I'm building the GUI in Tkinter. I'm trying to show the webcam feed in the GUI, but I haven't managed to. I put the code in a function because I'd like to see my webcam when a push button is pressed. My code is:

```
def webcam():
    img= cv.QueryFrame(cap)
    cam= PhotoImage(img)
    label1 = Label(root, image=cam)
    label1.image = cam
    label1.pack()
    label1.place(x=0, y=400)
```

Also, I don't know how to update it constantly without a `while` loop, because I have another push button to quit the program.
> Kieleth's code, updated for Python 3.7:

```
from PIL import Image, ImageTk
import cv2
from tkinter import *

width, height = 800, 600
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

root = Tk()
root.bind('<Escape>', lambda e: root.quit())
lmain = Label(root)
lmain.pack()

def show_frame():
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    img = Image.fromarray(cv2image)
    imgtk = ImageTk.PhotoImage(image=img)
    lmain.imgtk = imgtk
    lmain.configure(image=imgtk)
    lmain.after(10, show_frame)

show_frame()
root.mainloop()
```
A simple version of camera capture using OpenCv and Tkinter: ``` import Tkinter as tk import cv2 from PIL import Image, ImageTk width, height = 800, 600 cap = cv2.VideoCapture(0) cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height) root = tk.Tk() root.bind('<Escape>', lambda e: root.quit()) lmain = tk.Label(root) lmain.pack() def show_frame(): _, frame = cap.read() frame = cv2.flip(frame, 1) cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA) img = Image.fromarray(cv2image) imgtk = ImageTk.PhotoImage(image=img) lmain.imgtk = imgtk lmain.configure(image=imgtk) lmain.after(10, show_frame) show_frame() root.mainloop() ``` You will need to download and install PIL... **UPDATE:** ... and OpenCV for this to work. To Install PIL, run the following command in your Terminal/Command Prompt: `pip install Pillow` or `python -m pip install Pillow` To Install OpenCV, run the following command in your Terminal/Command Prompt: `pip install opencv-python` or `python -m pip install opencv-python`
Show webcam sequence TkInter
[ "", "python", "opencv", "tkinter", "" ]
I want to change case using a SQL query. E.g., if the text is: `My nAme is iShAn halaRNkar` (the text is jumbled, i.e. it may contain lower case or upper case anywhere in the sentence), then I want the output to be: `My Name Is Ishan Halarnkar`. I have not worked with SQL queries much. Kindly help.
MySQL has no built-in function that does this for you. You have to write a function which actually performs the check on each word in the sentence. Please check the solution below:

**MySql:**

```
DELIMITER //

CREATE FUNCTION CAP_FIRST (input VARCHAR(255))

RETURNS VARCHAR(255)

DETERMINISTIC

BEGIN
    DECLARE len INT;
    DECLARE i INT;

    SET len   = CHAR_LENGTH(input);
    SET input = LOWER(input);
    SET i = 0;

    WHILE (i < len) DO
        IF (MID(input,i,1) = ' ' OR i = 0) THEN
            IF (i < len) THEN
                SET input = CONCAT(
                    LEFT(input,i),
                    UPPER(MID(input,i + 1,1)),
                    RIGHT(input,len - i - 1)
                );
            END IF;
        END IF;
        SET i = i + 1;
    END WHILE;

    RETURN input;
END//

DELIMITER ;
```

**Example:**

```
SELECT CAP_FIRST('this is exACtly tHe same!')
```

**Output:**

```
This Is Exactly The Same!
```

**Copyrights:** <http://joezack.com/2008/10/20/mysql-capitalize-function/>

Hope this helps!
Here is another Microsoft SQL function: ``` CREATE FUNCTION PROPERCASE(@TEXT AS VARCHAR(MAX)) RETURNS VARCHAR(MAX) AS BEGIN DECLARE @RESET BIT; DECLARE @STR VARCHAR(MAX); DECLARE @I INT; DECLARE @C CHAR(1); SELECT @RESET = 1, @I=1, @STR = ''; WHILE (@I <= LEN(@TEXT)) SELECT @C= SUBSTRING(@TEXT,@I,1), @STR = @STR + CASE WHEN @RESET=1 THEN UPPER(@C) ELSE LOWER(@C) END, @RESET = CASE WHEN @C LIKE '[A-ZA-Z]' THEN 0 ELSE 1 END, @I = @I +1 RETURN @STR END ```
Change lower case to upper (title) case using sql query
[ "", "sql", "sql-server", "title-case", "" ]
Where do I find the documentation for classes like **object** or **dict**? I would like to know which methods and attributes they have. I found most things at <http://docs.python.org/2>, but I could not find the methods and attributes of class **object**.
For detailed documentation, visit the [online documentation](http://docs.python.org/2/library/stdtypes.html#mapping-types-dict). You can also run a **`pydoc`** server; it's an offline version of the documentation, though not as detailed:

```
$ python -m pydoc -p 5555
```

It starts a pydoc server at localhost and you can access the docs at that link. For quick lookups you can use `dir()`; it will return all the attributes of an object:

```
>>> dir(object)
['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
>>> dir(dict)
['__class__', '__cmp__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'clear', 'copy', 'fromkeys', 'get', 'has_key', 'items', 'iteritems', 'iterkeys', 'itervalues', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values', 'viewitems', 'viewkeys', 'viewvalues']
```

For some info on an attribute, use `help()`:

```
>>>help(dict.get)
get(...)
    D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
```

You can also use the module `pydoc`:

```
>>> import pydoc
>>> print pydoc.getdoc(dict)
dict() -> new empty dictionary
dict(mapping) -> new dictionary initialized from a mapping object's
    (key, value) pairs
dict(iterable) -> new dictionary initialized as if via:
    d = {}
    for k, v in iterable:
        d[k] = v
dict(**kwargs) -> new dictionary initialized with the name=value pairs
    in the keyword argument list. For example: dict(one=1, two=2)
>>> print pydoc.getdoc(dict.get)
D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
```
There really isn't any difference between the built-in classes and anything else. These are the steps to find out more info for any object. First try `help()`:

```
>>> help(dict)
Help on class dict in module __builtin__:

class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
| dict(iterable) -> new dictionary initialized as if via:
| d = {}
| for k, v in iterable:
| d[k] = v
| dict(**kwargs) -> new dictionary initialized with the name=value pairs
| in the keyword argument list. For example: dict(one=1, two=2)
|
| Methods defined here:
|
| __cmp__(...)
| x.__cmp__(y) <==> cmp(x,y)
...
```

You can also get a list of attributes and methods with `dir()`:

```
>>> dir(list)
['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
```

And you can find much of its data by looking at the `__dict__`:

```
>>> sys.__dict__
{'setrecursionlimit': <built-in function setrecursionlimit>, 'dont_write_bytecode': False, 'getrefcount': <built-in function getrefcount>, 'long_info': sys.long_info(bits_per_digit=15, sizeof_digit=2), 'path_importer_cache': {'': None, '/usr/lib/python2.7/encodings': None, '/usr/local/lib/python2.7/dist-packages/docutils-0.10-py2.7.egg': None, '/usr/lib/python2.7/plat-linux2': None, ...
```

Although many built-in types don't have one, or keep very little in it.
``` >>> 'foo'.__dict__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'str' object has no attribute '__dict__' ``` And for more explanation, visit the [documentation](http://docs.python.org/2/library/stdtypes.html). For even more details, read [the source code](http://hg.python.org/cpython/file/56ca8eb5207a/Objects).
Where do I find Python Classes?
[ "", "python", "" ]
My website uses Flask + Jinja2, with Flask-Babel for translation. The site has two languages (depending on the URL), and I want to add a link to switch between them. To do this correctly I need to get the name of the current locale, but I didn't find such a function in the docs. Does it exist at all?
Finally, I used this solution: add the `get_locale` function, which should be defined anyway, to the Jinja2 globals, and then call it in the template like any other function.
Other answers say that you must implement the babel's `get_locale()` function and that you should add it to Jinja2 globals, but they don't say how. So, what I did is: I implemented the `get_locale()` function as follows: ``` from flask import request, current_app @babel.localeselector def get_locale(): try: return request.accept_languages.best_match(current_app.config['LANGUAGES']) except RuntimeError: # Working outside of request context. E.g. a background task return current_app.config['BABEL_DEFAULT_LOCALE'] ``` Then, I added the following line at my Flask `app` definition: ``` app.jinja_env.globals['get_locale'] = get_locale ``` Now you can call `get_locale()` from the templates.
Get current locale in Jinja2
[ "", "python", "jinja2", "gettext", "python-babel", "" ]
I have two dictionaries: one contains the names and initial values of some registers, and the other contains the names and address values of those same registers. I need to compare the two dictionaries on register name [id], then take the initial value of a register [initValue] from dictionary 1 and the address value [addrValue] from dictionary 2, and put them in a new dictionary so that the key of this new dictionary becomes the address rather than the register name. Below is what I am doing, but I don't know how to merge the two dictionaries' keys.

```
regex = "(?P<id>\w+?)_INIT\s*?=.*?'h(?P<initValue>[0-9a-fA-F]*)"
for x in re.findall(regex, lines):
    init_list = (x.groupdict()["id"], "0x" + x.groupdict()["initValue"])

regex = "(?P<id>\w+?)_ADDR\s*?=.*?'h(?P<addrValue>[0-9a-fA-F]*)"
for y in re.findall(regex, lines):
    addr_list = (y.groupdict()["addr_id"], "0x" + y.groupdict()["addrValue"])

for key in set(init_list['init_id'].keys()).union(addr_list['id'].keys()):
    if init_list[key] == addr_list[key]
        expect_by_addr = dict (addr_list['addrValue'] # something here
```
Make the first two dictionaries: ``` dict1 = {} dict2 = {} for x in range(0, 10): dict1[x] = "initValue{}".format(x) y = x+3 dict2[y] = "address{}".format(y) ``` Combine them based on dict2 "addresses" ``` dict3 = {} #Note because you want the addresses to be the key of the new dictionary #we only need to look at the keys that exist in both dict1 and dict2. for key in dict2.keys(): if key in dict1: dict3[dict2[key]] = dict1[key] ``` Dictionaries: ``` dict1 {0: 'initValue0', 1: 'initValue1', 2: 'initValue2', 3: 'initValue3', 4: 'initValue4', 5: 'initValue5', 6: 'initValue6', 7: 'initValue7', 8: 'initValue8', 9: 'initValue9'} dict2 {3: 'address3', 4: 'address4', 5: 'address5', 6: 'address6', 7: 'address7', 8: 'address8', 9: 'address9', 10: 'address10', 11: 'address11', 12: 'address12'} dict3 (combined) {'address5': 'initValue5', 'address4': 'initValue4', 'address7': 'initValue7', 'address6': 'initValue6', 'address3': 'initValue3', 'address9': 'initValue9', 'address8': 'initValue8'} ```
Why don't you build a single dict in a single pass?

```
import re
from collections import defaultdict

r_line = re.compile(r"(?P<id>\w+?)_(?P<type>[A-Z]+)\s*?=.*?'h(?P<value>[0-9a-fA-F]*)")

register_ids = defaultdict(dict)

for match in r_line.finditer(lines):
    groups = match.groupdict()
    key = groups['type'].lower()
    # store as int. You can use `hex()` or format codes for display as hex
    value = int(groups['value'], 16)
    register_ids[groups['id']][key] = value
```

It's not a problem if you have multiple sets of lines: it will just update `register_ids` with new data:

```
for lines in lines_list:
    for match in r_line.finditer(lines):
        ...
```

This will produce a single dict `register_ids` that looks like this:

```
{'register1': {'addr': 1234, 'init': 5678, ...}, 'register2': {...}, ...}
```

You can then produce any other kind of derived dictionary you want very easily. For example, for your initial values keyed by addresses:

```
addresses = {}
for values in register_ids.itervalues():
    try:
        addresses[values['addr']] = values['init']
    except KeyError:
        pass
```
Compare two dictionaries with one key in common and form a new dictionary combining the remaining keys associated with this common key
[ "", "python", "python-2.7", "dictionary", "" ]
New to Python, so this is probably a stupid question, but I have not been able to figure this one out after a day of research and executing code. I'd like to take two lists of integers (results and settings) and compare them in the format: ``` (Setting# - 0.1) <= Result# <= (Setting# +0.1) ``` I need to do this for all #'s in the lists. For example, if `Result1=4.6` and `Setting1=4.3`, I want it to compare 4.2 <= 4.6 <= 4.4 (which would result in a failure, as it is too far outside my tolerance of `0.1`. Once it compares that, I would want it to continue through the list until finished, of course. This does not appear to work as I have it. Any ideas? ``` results = [Result1, Result2, Result3, Result4, Result5, Result6] settings = [Setting1, Setting2, Setting3, Setting4, Setting5, Setting6] for n in results and m in settings: if (m-.1) <= n <= (m+.1): #compare values with a + or - 0.1 second error tolerance print 'ok' else: print 'fail' print 'Done' ```
You need to use `zip` to iterate over `results` and `settings` in tandem: ``` for n, m in zip(results, settings): if m - 0.1 <= n <= m + 0.1: print 'ok' else: print 'fail' print 'Done' ```
You need to use [`zip()`](http://docs.python.org/2/library/functions.html#zip) to combine the two lists: ``` for n, m in zip(results, settings): if (m-.1) <= n <= (m+.1): print 'ok' else: print 'fail' ``` `zip()` creates a new list made by combining each nth element from each input sequence: ``` >>> a = range(5) >>> b = 'abcde' >>> zip(a, b) [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')] ``` You can use [`all()`](http://docs.python.org/2/library/functions.html#all) to short-circuit testing; `all()` returns `False` as soon as possible. We use [`itertools.izip()`](http://docs.python.org/2/library/itertools.html#itertools.izip) here instead to avoid creating a whole new list where perhaps only the first few pairs might be tested: ``` from itertools import izip if all((m-.1) <= n <= (m+.1) for n, m in izip(results, settings)): print 'All are ok' else: print 'At least one failed' ```
Python - Less than or Equal To Compare with two lists?
[ "", "python", "list", "compare", "" ]
I want to compare two strings such that the comparison should ignore differences in the special characters. That is, > Hai, this is a test Should match with > Hai ! this is a test "or" Hai this is a test Is there any way to do this without modifying the original strings?
This removes punctuation and whitespace before doing the comparison: ``` In [32]: import string In [33]: def compare(s1, s2): ...: remove = string.punctuation + string.whitespace ...: return s1.translate(None, remove) == s2.translate(None, remove) In [34]: compare('Hai, this is a test', 'Hai ! this is a test') Out[34]: True ```
``` >>> def cmp(a, b): ... return [c for c in a if c.isalpha()] == [c for c in b if c.isalpha()] ... >>> cmp('Hai, this is a test', 'Hai ! this is a test') True >>> cmp('Hai, this is a test', 'Hai this is a test') True >>> cmp('Hai, this is a test', 'other string') False ``` This creates two temporary lists, but doesn't modify the original strings in any way.
Python : How to compare strings and ignore white space and special characters
[ "", "python", "python-3.x", "string", "python-2.7", "replace", "" ]
I need to return a date to the date field when I change my division. Here is my Python code:

```
tea_worker_ids = self.pool.get('bpl.worker').search(cr, uid, [('bpl_division_id', '=', division_id), ('default_work', '=', 'tea')])
for record in self.pool.get('bpl.worker').browse(cr, uid, tea_worker_ids):
    tea_list_data.append({'worker_id': record.id, 'worker_emp_no': record.emp_no, 'is_selected':True,'date': (fields.date.context_today)})#
    tea_v['selected_tea_workers'] = tea_list_data
```

This is the error:

```
2013-05-13 06:22:28,810 15393 ERROR ABC werkzeug: Error on request:
Traceback (most recent call last):
  File "/usr/share/pyshared/werkzeug/serving.py", line 159, in run_wsgi
    execute(app)
  File "/usr/share/pyshared/werkzeug/serving.py", line 146, in execute
    application_iter = app(environ, start_response)
  File "/usr/share/pyshared/simplejson/__init__.py", line 286, in dumps
    return _default_encoder.encode(obj)
  File "/usr/share/pyshared/simplejson/encoder.py", line 226, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/share/pyshared/simplejson/encoder.py", line 296, in iterencode
    return _iterencode(o, 0)
  File "/usr/share/pyshared/simplejson/encoder.py", line 202, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <function context_today at 0xaaa7064> is not JSON serializable
```

Please help me sort out this issue.

**EDITED**

When I change it to `fields.date.context_today()`, the error becomes:

```
File "/home/bellvantage/Documents/openerp-7.0/openerp-7/openerp/addons/bpl/bpl.py", line 1059, in on_change_division
workers_list.append({'worker_id': record.id,'start_date': fields.date.context_today()})
TypeError: context_today() takes at least 3 arguments (0 given)
```
It worked with:

```
workers_list.append({'worker_id': record.id, 'start_date': time.strftime('%Y-%m-%d'), 'selected':True})
```
Looks like `context_today` is a method, not a value. You need to call it: ``` tea_list_data.append({'worker_id': record.id, 'worker_emp_no': record.emp_no, 'is_selected':True,'date': (fields.date.context_today())}) ```
TypeError: <function context_today at 0xaaa7064> is not JSON serializable
[ "", "python", "xml", "odoo", "" ]
I have two tables which I am returning a reference code from. I need to compare both these lists to find references that exists only in tableA or tableB, NOT in both. As an example, if table A and table B had the following data ``` TABLE A References 01 02 04 TABLE B References 01 22 ``` I would expect to return a table with the following ``` TABLE C References 02 04 22 ``` The SQL for this has me stumped. Any suggestions?
Try this:

```
SELECT COL1
FROM
(
SELECT Col1 FROM TABLE_A
UNION ALL
SELECT COL1 FROM TABLE_B
) X
GROUP BY COL1
HAVING COUNT(*) =1
```

Note that this query will also eliminate any value that appears twice within the same table.
``` select COALESCE(a.Value,b.Value) FROM a full outer join b on a.Value = b.Value WHERE a.Value is null or b.Value is null ``` Should do the trick. The `FULL OUTER JOIN` attempts to match up all rows from the two tables. The `WHERE` clause then removes those rows where a match was found, leaving (as a result) only those rows where a row only existed in `a` or `b`. The `COALESCE()` is then used to give the result as a single column, as per your expected output. If you preferred, you could remove the `COALESCE` and have `SELECT a.Value,b.Value` which will have `NULL`s but will make it immediately obvious *which* table contains the value.
Check two tables and find values not in both using SQL
[ "", "sql", "database", "join", "" ]
I know there is an easy way to copy all the source of a URL, but that's not my task. I need to save exactly the text (just as a browser user would copy it) to a \*.txt file. Is it unavoidable to parse the HTML source for this, or is there a better way?
I think it is impossible without parsing at all. You could use HTMLParser (<http://docs.python.org/2/library/htmlparser.html>) and just keep the data between tags, but you will most likely get many elements you don't want. And to get exactly what [Ctrl-C] would copy, it is very hard to avoid parsing altogether, because of things like `style="display: hidden;"` which hide text; handling that correctly would again mean fully parsing the HTML, JavaScript and CSS of both the document and its resource files.
Parsing is required. I don't know if there's a library method. A simple regex:

```
from re import sub

text = sub(r"<[^>]+>", " ", html)
```

this requires many improvements, but it's a starting point.
How to copy all the text from url (like [Ctrl+A][Ctrl+C] with webbrowser) in python?
[ "", "python", "browser", "" ]
My SQL query is:

```
SELECT DISTINCT
SUBSTRING(DATENAME(MONTH, PostDate), 1, 3) + '-' + CAST(YEAR(PostDate) AS VARCHAR(4)) AS PostArchive,
Posts = COUNT(*)
FROM Post
WHERE Verified=1
GROUP BY
SUBSTRING(DATENAME(MONTH, PostDate), 1, 3) + '-' + CAST(YEAR(PostDate) AS VARCHAR(4)),
YEAR(PostDate), MONTH(PostDate)
ORDER BY PostArchive
```

It gives a result like this:

```
PostArchive Posts
------------------------
Mar-2009 1
Mar-2010 1
May-2005 1
May-2011 1
May-2012 1
May-2013 1
```

But I want a result ordered by date (year), like this:

```
PostArchive Posts
------------------------
May-2005 1
Mar-2009 1
Mar-2010 1
May-2011 1
May-2012 1
May-2013 1
```

I searched and found [this](https://stackoverflow.com/questions/4676139/order-by-descending-date-month-day-and-year) link, but was unable to solve my problem. I tried:

```
ORDER BY CONVERT(DateTime, PostArchive,101) DESC
```

But it gives me an error: `Invalid column name 'PostArchive'.` Is there any way to do this, or am I going about it the wrong way? Thanks.
The reason for the error is that `PostArchive` is the name you've given to the column on the SELECT line, which is effectively the output of the query. The ORDER BY clause cannot use that alias inside an expression such as CONVERT; for expressions it has to work with the query's input columns, which in this case means `PostDate`.
1. I assume that you didn't really mean that you want to order by year only, but rather by year/month. The ordering issue you have is because you are ordering the value as a character string and not as a date.
2. You don't need DISTINCT, since you already GROUP BY.
3. The main problem is that you have already converted to VARCHAR; hence, the months are unsortable.

```
-- Create a CTE (inline view)
WITH T AS (
  SELECT YEAR(PostDate) PostYear
       , MONTH(PostDate) PostMM
       , SUBSTRING(DATENAME(MONTH, PostDate),1,3) PostMonth
       , COUNT(*) Posts
    FROM Post
   WHERE Verified = 1
   GROUP BY YEAR(PostDate)
       , MONTH(PostDate)
       , DATENAME(MONTH, PostDate)
)
-- Build your date string
SELECT PostMonth + '-' + CAST(PostYear AS VARCHAR(4)) AS PostArchive
     , Posts
  FROM T
-- Sort it by the components separately
 ORDER BY PostYear
       -- Don't use the character, otherwise, Aug will come before Mar
     , PostMM
```
Order by year in post date
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm coming from a background in Google's App Inventor. I'm taking an online class. Task: Make a triangle out of asterisks with a nested while loop. The triangle has a base of 19 asterisks and a height of 10 asterisks. Here's where I am. ``` Num = 1 while Num <= 19: print '*' Num = Num * '*' + 2 print Num ```
You can use the `center` method of string objects:

```
width = 19
for num in range(1, width + 1, 2):
    print(('*' * num).center(width))
```

which prints:

```
         *
        ***
       *****
      *******
     *********
    ***********
   *************
  ***************
 *****************
*******************
```
What you are doing with `Num = Num * '*' + 2` is the following:

* you create a string (Num copies of `'*'`); this part is what you want
* then you try to add two; you'll probably see an error like *cannot concatenate 'str' and 'int' objects*, because there is no way to add a string to an int (at least in Python). Instead, you probably want to add two only to *Num* itself.
Asterisk Triangle
[ "", "python", "" ]
I'm writing a program that searches YouTube, as part of a bigger program. When the user inputs the search phrase, it might contain spaces, but I don't want those spaces; I want "+" instead, so that the URL works. Here is my code:

```
import urllib2
x=raw_input("Search: ")
site1 = urllib2.urlopen('http://www.youtube.com/results?search_query=%s'%x)
y = site1.read()
print y
```

When I input anything that's more than one word it doesn't work, because there shouldn't be any spaces in a URL. So how can I turn the spaces between words into plus signs? Thank you!
Fortunately, `urllib` has `urllib.quote_plus` for that purpose. It will also help to escape everything else that needs escaping: ``` >>> urllib.quote_plus('hello how\'re you?') 'hello+how%27re+you%3F' ```
I don't quite see the problem: ``` >>> "This is a string".replace(" ", "+") 'This+is+a+string' ```
Taking Spaces out of a string and Replacing them with "+"
[ "", "python", "youtube", "urllib2", "" ]
I have several modules in one package (a kind of toolkit), which I use in my projects. The structure looks like this:

```
the_toolkit:
    __init__.py
    basic_io.py
    simple_math.py
    matrix_kit.py
    ...
```

Now when I use `IPython`, or the code completion in `VIM`, after importing a module from the package with

```
from the_toolkit import matrix_kit
```

I get a bunch of modules/functions which `matrix_kit.py` imports (`array`, `math`, `log`, `logging`, `format_exc`, etc.), along with the actual functions which `matrix_kit` defines. How can I filter out everything else, since I'm not interested in `math` or `array` when code-completing on the `matrix_kit` module? The `__init__.py` is empty, as suggested by many Pythonistas; is that the right way to do it?
I had the same problem, and solved it by adding a leading underscore to all my imports. It doesn't look brilliant, but it achieves what you're after.

```
from __future__ import division as _division

import numpy as _np
import pandas as _pd
```

Only the stuff that starts without an underscore is imported when you import the module using `from .. import *`, or offered when you use tab-completion in IPython or Spyder.
I've come to like and use this approach. Starting with your project structure, we will hide all the content of `matrix_kit` that you don't want to show. ``` the_toolkit: __init__.py basic_io.py simple_math.py matrix_kit.py ... ``` Move `matrix_kit.py` into a package with the same name, and place one underscore in the beginning of the module's name: ``` the_toolkit: __init__.py basic_io.py simple_math.py matrix_kit: __init__.py _matrix_kit.py ... ``` Now in `_matrix_kit.py` add a global variable `__all__ = [...]` listing the names of all functions/classes/variables you want to be visible. Then, add the following line to `matrix_kit/__init__.py`: ``` from ._matrix_kit import * ``` It will only import the stuff listed in the `__all__` attribute of the module. Then, while importing `matrix_kit` like you've done before, you will actually import the `__init__.py` and its content.
Hide external modules when importing a module (e.g. regarding code-completion)
[ "", "python", "code-completion", "python-import", "" ]
I would like to generate uniformly distributed random numbers between 0 and 0.5, but truncated to 2 decimal places. Without the truncation, I know this is done by ``` import numpy as np rs = np.random.RandomState(123456) set = rs.uniform(size=(50,1))*0.5 ``` Could anyone help me with suggestions on how to generate random numbers with only 2 decimal places? Thanks!
A `float` cannot be truncated (or rounded) to 2 decimal digits, because there are many values with 2 decimal digits that just cannot be represented exactly as an IEEE double. If you really want what you say you want, you need to use a type with exact precision, like [`Decimal`](http://docs.python.org/3/library/decimal.html). Of course there are downsides to doing that—the most obvious one for `numpy` users being that you will have to use `dtype=object`, with all of the compactness and performance implications. But it's the only way to actually do what you asked for. Most likely, what you actually *want* to do is either Joran Beasley's answer (leave them untruncated, and just round at print-out time) or something similar to Lauritz V. Thaulow's answer (get the closest approximation you can, then use explicit epsilon checks everywhere). Alternatively, you can do implicitly fixed-point arithmetic, as David Heffernan suggests in a comment: Generate random integers between 0 and 50, keep them as integers within `numpy`, and just format them as fixed point decimals and/or convert to `Decimal` when necessary (e.g., for printing results). This gives you all of the advantages of `Decimal` without the costs… although it does open an obvious window to create new bugs by forgetting to shift 2 places somewhere.
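A minimal sketch of the fixed-point idea suggested above: draw hundredths as integers and shift the decimal point two places. The function name and range are my own invention, and this samples the closed set {0.00, ..., 0.50} rather than the half-open interval of the original code.

```python
import random
from decimal import Decimal

def random_half_2dp(rng=random):
    """Uniform draw from {0.00, 0.01, ..., 0.50} as an exact Decimal."""
    # Draw hundredths as an integer, then shift the decimal point:
    # this is the implicit fixed-point arithmetic described above.
    return Decimal(rng.randint(0, 50)).scaleb(-2)

vals = [random_half_2dp() for _ in range(1000)]
print(min(vals), max(vals))
```

Every value produced this way carries exactly two decimal digits, with no binary rounding error to check against an epsilon.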
Floats are never truncated to 2 decimal places; their string representation, however, may be: ``` import numpy as np rs = np.random.RandomState(123456) set = rs.uniform(size=(50,1))*0.5 print ["%0.2f"%val for val in set] ```
generate random numbers truncated to 2 decimal places
[ "", "python", "random", "numpy", "" ]
This is basically what I am trying to achieve; ``` SELECT COUNT(DISTINCT No) FROM Table WHERE (Type = 1 AND COUNT(DISTINCT No) > Num) OR (Type = 2 AND COUNT(DISTINCT No) > Num) GROUP BY Week ``` What would be the best way to achieve this? Thanks
Any aggregate comparisons need to go into a HAVING clause after the Group By. Just remember that you may get different results depending on where you put your non-aggregate conditional, because you'll be aggregating over what you've included/excluded via the WHERE. ie ``` SELECT COUNT(DISTINCT No) FROM Table WHERE Type = 1 GROUP BY Week HAVING COUNT(DISTINCT No) > Num ``` may be different from ``` SELECT COUNT(DISTINCT No) FROM Table GROUP BY Week HAVING COUNT(DISTINCT No) > Num AND Type = 1 ```
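To make the WHERE-before-HAVING ordering concrete, here is a small runnable sqlite3 sketch; the table, column names, and data are invented for illustration, not taken from the question.

```python
import sqlite3

# In-memory toy table: WHERE filters rows first, HAVING then filters
# the groups produced by GROUP BY.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (week INTEGER, type INTEGER, item TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, 1, "a"), (1, 1, "b"), (1, 2, "c"),
    (2, 1, "a"),
])

rows = con.execute("""
    SELECT week, COUNT(DISTINCT item)
    FROM orders
    WHERE type = 1
    GROUP BY week
    HAVING COUNT(DISTINCT item) > 1
""").fetchall()
print(rows)
```

Week 1 keeps two distinct type-1 items and survives the HAVING filter; week 2 has only one and is dropped.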
``` SELECT COUNT(DISTINCT No) FROM Table WHERE Type = 1 OR Type = 2 GROUP BY Week HAVING COUNT(DISTINCT No) > Num ``` This is a possible answer.
COUNT within a WHERE clause or HAVING?
[ "", "sql", "sql-server-2008-r2", "" ]
I have a `Comment` model which has-many `attachments`. What I want to return is all of the comments which either have one or more attachment records, OR whose comment is longer than 250 characters. Is there any way I can do this without writing it entirely in pure SQL? I'm struggling to build up a WHERE clause in just the rails method. It's not quite as simple as I'd hoped :( Ideally I want this to be a scope, but whatever works is fine
You could try: ``` Comment.includes(:attachments).where('attachments.comment_id IS NOT NULL OR LEN(comments.content) > 250') ```
The WHERE clause should follow the pattern of the following pseudo-code ``` WHERE Length(Comment_field) > 250 OR EXISTS (Select COMMENT_ID from attachments) ```
Get all records with child records OR field length > 250
[ "", "sql", "ruby-on-rails", "ruby", "activerecord", "" ]
If I have these two lists: ``` la = [1, 2, 3] lb = [4, 5, 6] ``` I can iterate over them as follows: ``` for i in range(min(len(la), len(lb))): print la[i], lb[i] ``` Or more pythonically ``` for a, b in zip(la, lb): print a, b ``` --- What if I have two dictionaries? ``` da = {'a': 1, 'b': 2, 'c': 3} db = {'a': 4, 'b': 5, 'c': 6} ``` Again, I can iterate manually: ``` for key in set(da.keys()) & set(db.keys()): print key, da[key], db[key] ``` Is there some builtin method that allows me to iterate as follows? ``` for key, value_a, value_b in common_entries(da, db): print key, value_a, value_b ```
There is no built-in function or method that can do this. However, you could easily define your own. ``` def common_entries(*dcts): if not dcts: return for i in set(dcts[0]).intersection(*dcts[1:]): yield (i,) + tuple(d[i] for d in dcts) ``` This builds on the "manual method" you provide, but, like `zip`, can be used for any number of dictionaries. ``` >>> da = {'a': 1, 'b': 2, 'c': 3} >>> db = {'a': 4, 'b': 5, 'c': 6} >>> list(common_entries(da, db)) [('c', 3, 6), ('b', 2, 5), ('a', 1, 4)] ``` When only one dictionary is provided as an argument, it essentially returns `dct.items()`. ``` >>> list(common_entries(da)) [('c', 3), ('b', 2), ('a', 1)] ``` With no dictionaries, it returns an empty generator (just like `zip()`) ``` >>> list(common_entries()) [] ```
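On Python 3, dict key views already behave like sets, so the intersection in the helper above can be written without any explicit `set()` conversion; a quick sketch with the same sample data:

```python
da = {'a': 1, 'b': 2, 'c': 3}
db = {'a': 4, 'b': 5, 'c': 6, 'd': 9}

# dict.keys() views support set operations directly on Python 3,
# so the common keys fall out of a single & expression.
result = sorted((k, da[k], db[k]) for k in da.keys() & db.keys())
print(result)
```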
The object returned by `dict.keys()` (called a dictionary key view) [acts like a `set` object](https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects), so you can just take the [intersection](https://docs.python.org/3/library/stdtypes.html#frozenset.intersection) of the keys: ``` da = {'a': 1, 'b': 2, 'c': 3, 'e': 7} db = {'a': 4, 'b': 5, 'c': 6, 'd': 9} common_keys = da.keys() & db.keys() for k in common_keys: print(k, da[k], db[k]) ``` On Python 2 you'll need to convert the keys to `set`s yourself: ``` common_keys = set(da) & set(db) for k in common_keys: print k, da[k], db[k] ```
Python equivalent of zip for dictionaries
[ "", "python", "dictionary", "iterator", "" ]
Is there a way to SAVE the value of a variable (say an integer) in python? My problem involves calling (entering and exiting) the same python script (a python file, not a python function) multiple times; in the end it creates a txt file. I'd like to name the txt files depending on the number of times the python code was called: txt1.txt, ..., txt100.txt for example. EDIT: The question is not related to the SAVE attribute in Fortran. My mistake.
**NOTE:** I'm assuming that what you mean by: > calling (entering and exiting) multiple times the same python code is that you want to call the whole Python script multiple times, in which case you need to serialize your counter in some way external to the Python interpreter, to make it available next time. If you're just asking about calling the same function or method several times within one Python session, you can do that a variety of ways and I'd point you to [mgilson's answer](https://stackoverflow.com/a/16342602/2069350). There are plenty of ways to serialize things, but your implementation doesn't really have anything to do with the language. Do you want to store it in a database? Write the value to a file? Or is it enough to just retrieve an appropriate value from context? For instance, this code will get you a new file each time it is called, based on the contents of `output_dir`. It's obviously rough, but you get the idea: ``` import os def get_next_filename(output_dir): '''Gets the next numeric filename in a sequence. All files in the output directory must have the same name format, e.g. "txt1.txt". ''' n = 0 for f in os.listdir(output_dir): n = max(n, int(get_num_part(os.path.splitext(f)[0]))) return 'txt%s.txt' % (n + 1) def get_num_part(s): '''Get the numeric part of a string of the form "abc123". Quick and dirty implementation without using regex.''' for i in xrange(len(s)): if s[i:].isdigit(): return s[i:] return '' ``` Or of course you can just write a file called something like `runnum.cfg` somewhere next to the Python script, and write your current run number into it, then read that when the code launches.
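The "runnum.cfg" idea sketched at the end of the answer above can be made concrete like this; the file name and function name are arbitrary choices of mine, not code from the question.

```python
def next_run_number(path="runnum.cfg"):
    """Read the previous run count from disk, bump it, write it back."""
    try:
        with open(path) as fh:
            n = int(fh.read())
    except (IOError, ValueError):
        # Missing or corrupt counter file: start from zero.
        n = 0
    n += 1
    with open(path, "w") as fh:
        fh.write(str(n))
    return n

print("txt%d.txt" % next_run_number("runnum.cfg"))
```

Each invocation of the script gets the next number in the sequence, which is exactly what the txt1.txt ... txt100.txt naming scheme needs.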
Not really. The best you can do is to use a global variable: ``` counter = 0 def count(): global counter counter += 1 print counter ``` An alternative which bypasses the need for the global statement would be something like: ``` from itertools import count counter = count() def my_function(): print next(counter) ``` or even: ``` from itertools import count def my_function(_counter=count()): print next(_counter) ``` A final version takes advantage of the fact that functions are first class objects and can have attributes added to them whenever you want: ``` def my_function(): my_function.counter += 1 print my_function.counter my_function.counter = 0 #initialize. I suppose you could think of this as your `data counter /0/` statement. ``` --- However, it looks like you actually want to save the count within a file or something. That's not too hard either. You just need to pick a filename: ``` def count(): try: with open('count_data') as fin: i = int(fin.read()) except IOError: i = 0 i += 1 print i with open('count_data','w') as fout: fout.write(str(i)) ```
Save variable value in python
[ "", "python", "" ]
I have a list with some book names and their authors, but I want it to look organized, so I want it to take the book title, then pad it with spaces so it ends up with 100 characters, independent of the length of the book title. It would then add the author's name. So far I've tried this: ``` for i in range(0, len(bookList)): t = 100 - len(bookList[i]) numbofspaces = "" for j in range(0, t): numbofspaces += " " s.append(bookList[i] + numbofspaces + authorList[i]) ``` When I tried it in the python shell it worked pretty well, but when it takes the titles from the list, it doesn't work, why is that?
use string method : `str.rjust(100)` ``` >>> x = [ 'charles dickens','shakespeare','j k rowling'] >>> for name in x: ... print(name.rjust(50)) ... charles dickens shakespeare j k rowling ```
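For the asker's exact layout (title padded on the right, author appended) the mirror method `str.ljust` does the job; a small sketch with invented book data:

```python
books = ["A Tale of Two Cities", "Hamlet"]
authors = ["Charles Dickens", "William Shakespeare"]

# Pad every title with trailing spaces to a fixed width of 100,
# then append the author, so the author column lines up.
lines = [title.ljust(100) + author for title, author in zip(books, authors)]
for line in lines:
    print(line)
```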
While `str.ljust()`/`str.rjust()` are good, simple solutions if this is all you want to do, it's worth noting that if you are doing other formatting, you can do this as a part of string formatting: ``` >>> '{:<30}'.format('left aligned') 'left aligned ' >>> '{:>30}'.format('right aligned') ' right aligned' ``` From [the docs](http://docs.python.org/3/library/string.html#format-string-syntax).
How to fill up a string with spaces so it ends up with a specific number of characters? Python
[ "", "python", "string", "" ]
I have a text file with some names and emails and other stuff. I want to capture email addresses. I don't know whether this is a split or regex problem. Here are some sample lines: ``` [name]bill billy [email]bill.billy@hotmail.com [dob]01.01.81 [name]mark hilly [email]mark.hilly@hotmail.com [dob]02.11.80 [name]gill silly [email]gill.silly@hotmail.com [dob]03.12.79 ``` I want to be able to do a loop that prints all the email addresses. Thanks.
I'd use a regex: ``` import re data = '''[name]bill billy [email]bill.billy@hotmail.com [dob]01.01.81 [name]mark hilly [email]mark.hilly@hotmail.com [dob]02.11.80 [name]gill silly [email]gill.silly@hotmail.com [dob]03.12.79''' group_matcher = re.compile(r'\[(.*?)\]([^\[]+)') for line in data.split('\n'): o = dict(group_matcher.findall(line)) print o['email'] ``` * `\[` is literally `[`. * `(.*?)` is a non-greedy capturing group. It "expands" to capture the text. * `\]` is literally `]` * `(` is the beginning of a capturing group. * `[^\[]` matches anything but a `[`. * `+` repeats the last pattern any number of times. * `)` closes the capturing group.
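If only the e-mail field is wanted, the pattern can also be anchored on the `[email]` tag directly instead of parsing every field; a sketch on the sample lines:

```python
import re

data = """[name]bill billy [email]bill.billy@hotmail.com [dob]01.01.81
[name]mark hilly [email]mark.hilly@hotmail.com [dob]02.11.80
[name]gill silly [email]gill.silly@hotmail.com [dob]03.12.79"""

# Grab the run of non-space characters that follows each [email] tag.
emails = re.findall(r"\[email\](\S+)", data)
print(emails)
```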
``` for line in lines: print line.split("]")[2].split(" ")[0] ```
Python - trying to capture the middle of a line, regex or split
[ "", "python", "regex", "split", "" ]
I often need to filter pandas dataframe `df` by `df[df['col_name']=='string_value']`, and I want to speed up the row selection operation. Is there a quick way to do that? For example, ``` In [1]: df = mul_df(3000,2000,3).reset_index() In [2]: timeit df[df['STK_ID']=='A0003'] 1 loops, best of 3: 1.52 s per loop ``` Can 1.52s be shortened? **Note:** `mul_df()` is a function to create a multilevel dataframe: ``` >>> mul_df(4,2,3) COL000 COL001 COL002 STK_ID RPT_Date A0000 B000 0.6399 0.0062 1.0022 B001 -0.2881 -2.0604 1.2481 A0001 B000 0.7070 -0.9539 -0.5268 B001 0.8860 -0.5367 -2.4492 A0002 B000 -2.4738 0.9529 -0.9789 B001 0.1392 -1.0931 -0.2077 A0003 B000 -1.1377 0.5455 -0.2290 B001 1.0083 0.2746 -0.3934 ``` Below is the code of mul_df(): ``` import itertools import numpy as np import pandas as pd def mul_df(level1_rownum, level2_rownum, col_num, data_ty='float32'): ''' create multilevel dataframe, for example: mul_df(4,2,6)''' index_name = ['STK_ID','RPT_Date'] col_name = ['COL'+str(x).zfill(3) for x in range(col_num)] first_level_dt = [['A'+str(x).zfill(4)]*level2_rownum for x in range(level1_rownum)] first_level_dt = list(itertools.chain(*first_level_dt)) #flatten the list second_level_dt = ['B'+str(x).zfill(3) for x in range(level2_rownum)]*level1_rownum dt = pd.DataFrame(np.random.randn(level1_rownum*level2_rownum, col_num), columns=col_name, dtype = data_ty) dt[index_name[0]] = first_level_dt dt[index_name[1]] = second_level_dt rst = dt.set_index(index_name, drop=True, inplace=False) return rst ```
I have long wanted to add binary search indexes to DataFrame objects. You can take the DIY approach of sorting by the column and doing this yourself: ``` In [11]: df = df.sort('STK_ID') # skip this if you're sure it's sorted In [12]: df['STK_ID'].searchsorted('A0003', 'left') Out[12]: 6000 In [13]: df['STK_ID'].searchsorted('A0003', 'right') Out[13]: 8000 In [14]: timeit df[6000:8000] 10000 loops, best of 3: 134 µs per loop ``` This is fast because it always retrieves views and does not copy any data.
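The two `searchsorted` calls above are plain binary searches; the same trick on an ordinary sorted list uses the stdlib `bisect` module, which may make the slicing logic easier to see (the sample IDs are invented):

```python
from bisect import bisect_left, bisect_right

# searchsorted(..., 'left') / searchsorted(..., 'right') correspond to
# bisect_left / bisect_right on a sorted key column.
stk_ids = sorted(["A0001", "A0003", "A0002", "A0003", "A0001", "A0003"])
lo = bisect_left(stk_ids, "A0003")
hi = bisect_right(stk_ids, "A0003")
print(lo, hi, stk_ids[lo:hi])
```

The `[lo:hi]` slice is exactly the contiguous block of matching rows that the DataFrame slice retrieves.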
Somewhat surprisingly, working with the `.values` array instead of the `Series` is much faster for me: ``` >>> time df = mul_df(3000, 2000, 3).reset_index() CPU times: user 5.96 s, sys: 0.81 s, total: 6.78 s Wall time: 6.78 s >>> timeit df[df["STK_ID"] == "A0003"] 1 loops, best of 3: 841 ms per loop >>> timeit df[df["STK_ID"].values == "A0003"] 1 loops, best of 3: 210 ms per loop ```
How to speed up pandas row filtering by string matching?
[ "", "python", "filter", "pandas", "" ]
I'm working with Django 1.5.1 and I'm experiencing some "strange behaviour" with translations. I'm using `ugettext` and `ugettext_lazy` in the same Python file. If I organize the imports as: ``` from django.utils.translation import ugettext as trans from django.utils.translation import ugettext_lazy as _ ``` or ``` from django.utils.translation import ugettext as trans, ugettext_lazy as _ ``` The strings marked as `trans("string")` are skipped when running `makemessages` command. However, if I don't rename the `ugettext` it works well with both versions: ``` from django.utils.translation import ugettext from django.utils.translation import ugettext_lazy as _ ``` or ``` from django.utils.translation import ugettext, ugettext_lazy as _ ``` Now `trans("string")` works well. **So, does anybody know why this import renaming is causing the renamed function not to be called?** Is this an actual Python "limitation" I didn't know when renaming more than one function inside the same module? --- **UPDATE** After some testing, I've realized that even creating an empty python module inside an app with the following code it doesn't work: ``` from django.utils.translation import ugettext_lazy as translate a = translate("string") ``` However, if using `_` for the alias it works: ``` from django.utils.translation import ugettext_lazy as _ a = _("string") ``` My **conclusion** is: *You can only use the `_` alias for `ugettext` and `ugettext_lazy` (or any other related translation function) in Django or else it won't be recognized by `makemessages` command*. The technical explanation can be found in Robert Lujo's answer. Thanks!
Django command utility makemessages internally calls [xgettext](https://www.gnu.org/savannah-checkouts/gnu/gettext/manual/html_node/xgettext-Invocation.html) program like this: ``` cmd = ( 'xgettext -d %s -L Python %s %s --keyword=gettext_noop ' '--keyword=gettext_lazy --keyword=ngettext_lazy:1,2 ' '--keyword=ugettext_noop --keyword=ugettext_lazy ' '--keyword=ungettext_lazy:1,2 --keyword=pgettext:1c,2 ' '--keyword=npgettext:1c,2,3 --keyword=pgettext_lazy:1c,2 ' '--keyword=npgettext_lazy:1c,2,3 --from-code UTF-8 ' '--add-comments=Translators -o - "%s"' % (domain, wrap, location, work_file)) ``` (source can be found [here](https://github.com/django/django/blob/master/django/core/management/commands/makemessages.py#L98)). So, some keywords are predefined by the [xgettext](https://www.gnu.org/savannah-checkouts/gnu/gettext/manual/html_node/xgettext-Invocation.html) utility (check reference for --keyword): * for python - gettext, ugettext, dgettext, ngettext, ungettext, dngettext, \_ and some are added by django utility: * gettext\_lazy , ngettext\_lazy , ugettext\_noop , ugettext\_lazy , ungettext\_lazy , pgettext , npgettext , pgettext\_lazy , npgettext\_lazy Keyword **trans** is not in any of these keyword sets, so you should not use it for marking texts for translations.
Unexpected ugettext aliases can be handled by overriding the `makemessages` command, such as in: ``` from django.core.management.commands import makemessages class Command(makemessages.Command): """ Extends the makemessages command to look for additional aliases. """ xgettext_options = makemessages.Command.xgettext_options + ['--keyword=_lazy'] ``` See <https://docs.djangoproject.com/en/1.8/topics/i18n/translation/#customizing-the-makemessages-command>
ugettext and ugettext_lazy functions not recognized by makemessages in Python Django
[ "", "python", "django", "internationalization", "" ]
I'm getting this error: ``` Traceback (most recent call last): File "C:\Python27\botid.py", line 23, in <module> fiList = {msg:submission.ups + len(coList)} NameError: name 'coList' is not defined ``` for this: ``` wbcWords = ['wbc', 'advice', 'prc','server'] while True: subreddit = r.get_subreddit('MCPE') for submission in subreddit.get_hot(limit=30): op_text = submission.title.lower() has_wbc = any(string in op_text for string in wbcWords) # Test if it contains a WBC-related question if submission.id not in already_done and has_wbc: msg = '[WBC related thread](%s)' % submission.short_link comments = submission.comments for comment in comments: coList = [comment.author.name] fiList = {msg:submission.ups + len(coList)} print fiList ``` Seems fine to me. All the search results end up being spelling mistakes, but mine seems fine (I hope)
I think you should try: ``` coList = [] for comment in comments: coList.append(comment.author.name) ``` What you were trying with: ``` for comment in comments: coList = [comment.author.name] ``` For every comment this loop is resetting coList to a single item list of the current comment's author name, but I can see from your comment you have understood that. The other answers with list comprehensions are much nicer imo, personally I would also use: ``` coList = [comment.author.name for comment in comments] ``` It looks cleaner, it's one line, and you can clearly read the intent: a list of the authors of the comments.
I think the easiest solution would be a list comprehension: ``` coList = [comment.author.name for comment in comments] ``` This way, if comments is empty, you get an empty list, otherwise the author names. Additionally, given what you put in, it's better to call it `authors_list`.
I am getting a 'not defined' error even though everything seems to be fine
[ "", "python", "nameerror", "" ]
I want to send a DB mail which contains the results of two queries. How do I make that happen? ``` USE msdb EXEC sp_send_dbmail @profile_name = 'try01', @recipients = 'yyyyy@yyy.com', @subject = 'Table Values', @body = 'xxxxxxxxxxxxxx.', @execute_query_database = 'Reports', @query = 'SELECT * from Table' ```
OK, this was easy: if you separate the queries with a semicolon, the results of both are sent. Here it is: ``` USE msdb EXEC sp_send_dbmail @profile_name = 'try01', @recipients = 'yyyyy@yyy.com', @subject = 'Table Values', @body = 'xxxxxxxxxxxxxx.', @execute_query_database = 'Reports', @query = 'SELECT * from Table; select * from table2' ``` The only problem I have now is getting this into a decent-looking format.
[sp_send_dbmail on MSDN](http://msdn.microsoft.com/en-GB/library/ms190307.aspx) shows a single @query parameter. So you can send one query only, unless you can use UNION to join result sets. Or save the query results to disk and then set @file_attachments (I wouldn't, personally). Or use Reporting Services to query and send the email as a report.
Sql server Db Mail which can send mail for two or more queries
[ "", "sql", "sql-server", "dbmail", "" ]
So I was given a database and I'm trying to solve a trigger bug (it's a pretty small change), however I've looked at every function/trigger, so I did a: ``` select * from sys.triggers where name = 'NAME' ``` and it returned this: ``` NAME 1181247263 1 OBJECT_OR_COLUMN 2053582354 TR SQL_TRIGGER 2012-11-13 09:41:13.707 2013-03-19 14:08:22.583 0 0 0 0 ``` What exactly does this mean? Is there literally a folder/function called OBJECT_OR_COLUMN? Because I can't see it. I'm doing this in SQL Server Management Studio, btw.
This will tell you the associated table... ``` select t.name As TriggerName, ss.name As SchemaName, so2.name As TableName from sys.triggers t Join sysobjects so On t.object_id = so.id Join sysobjects so2 On so.parent_obj = so2.id Join sys.schemas ss On so2.uid = ss.schema_id Where t.name = 'NAME' ```
The first column is name, which shows that the name of your trigger is 'NAME'. If you need to find where it is, use the query from [Need to list all triggers in SQL Server database with table name and table's schema](https://stackoverflow.com/questions/4305691/need-to-list-all-triggers-in-sql-server-database-with-table-name-and-tables-sch)
Can't find trigger even though it exists?
[ "", "sql", "sql-server", "" ]
What is the most efficient way of finding a certain tuple based on e.g. the second element of that tuple in a list and move that tuple to the top of the list Something of the form: ``` LL=[('a','a'),('a','b'),('a','c'),('a','d')] LL.insert(0,LL.pop(LL.index( ... ))) ``` where I would like something in index() that would give me the position of the tuple that has 'c' as second element. Is there a classic python 1-line approach to do that?
``` >>> LL.insert(0,LL.pop([x for x, y in enumerate(LL) if y[1] == 'c'][0])) >>> LL [('a', 'c'), ('a', 'a'), ('a', 'b'), ('a', 'd')] >>> ```
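A variant of the same one-liner that stops at the first match instead of building the whole index list first, using `next` on a generator:

```python
LL = [('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]

# next() short-circuits at the first hit (and raises StopIteration
# if no tuple has 'c' as its second element).
idx = next(i for i, tup in enumerate(LL) if tup[1] == 'c')
LL.insert(0, LL.pop(idx))
print(LL)
```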
To find the position you can: ``` positions = [i for i, tup in enumerate(LL) if tup[1] == 'c'] ``` You can now take the index of the desired element, pop it, and push it to the beginning of the list: ``` pos = positions[0] LL.insert(0, LL.pop(pos)) ``` But you can also sort your list using the item in the tuple as key: ``` sorted(LL, key=lambda tup: tup[1] == 'c', reverse=True) ``` if you don't care about the order of the other elements
finding and moving a tuple in a list of tuples
[ "", "python", "list", "" ]
I want to be able to iterate over the matrix to apply a function to each row. How can I do this for a NumPy matrix?
You can use `numpy.apply_along_axis()`. Assuming that your array is 2D, you can use it like: ``` import numpy as np myarray = np.array([[11, 12, 13], [21, 22, 23], [31, 32, 33]]) def myfunction(x): return x[0] + x[1]**2 + x[2]**3 print(np.apply_along_axis(myfunction, axis=1, arr=myarray)) #[ 2352 12672 36992] ```
While you should certainly provide more information, if you are trying to go through each row, you can just iterate with a for loop: ``` import numpy m = numpy.ones((3,5),dtype='int') for row in m: print str(row) ```
Iterating over Numpy matrix rows to apply a function each?
[ "", "python", "matrix", "numpy", "" ]
say I have a (3,3,3) array like this. ``` array([[[1, 1, 1], [1, 1, 1], [0, 0, 0]], [[2, 2, 2], [2, 2, 2], [2, 2, 2]], [[3, 3, 3], [3, 3, 3], [1, 1, 1]]]) ``` How do I get the 9 values corresponding to euclidean distance between each vector of 3 values and the zeroth values? Such as doing a `numpy.linalg.norm([1,1,1] - [1,1,1])` 2 times, and then doing `norm([0,0,0] - [0,0,0])`, and then `norm([2,2,2] - [1,1,1])` 2 times, `norm([2,2,2] - [0,0,0])`, then `norm([3,3,3] - [1,1,1])` 2 times, and finally `norm([1,1,1] - [0,0,0])`. Any good ways to vectorize this? I want to store the distances in a (3,3,1) matrix. The result would be: ``` array([[[0. ], [0. ], [0. ]], [[1.73], [1.73], [3.46]] [[3.46], [3.46], [1.73]]]) ```
`keepdims` argument is added in numpy 1.7, you can use it to keep the sum axis: ``` np.sum((x - [1, 1, 1])**2, axis=-1, keepdims=True)**0.5 ``` the result is: ``` [[[ 0. ] [ 0. ] [ 0. ]] [[ 1.73205081] [ 1.73205081] [ 1.73205081]] [[ 3.46410162] [ 3.46410162] [ 0. ]]] ``` **Edit** ``` np.sum((x - x[0])**2, axis=-1, keepdims=True)**0.5 ``` the result is: ``` array([[[ 0. ], [ 0. ], [ 0. ]], [[ 1.73205081], [ 1.73205081], [ 3.46410162]], [[ 3.46410162], [ 3.46410162], [ 1.73205081]]]) ```
You might want to consider [`scipy.spatial.distance.cdist()`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html#scipy.spatial.distance.cdist), which efficiently computes distances between pairs of points in two collections of inputs (with a standard euclidean metric, among others). Here's example code: ``` import numpy as np import scipy.spatial.distance as dist i = np.array([[[1, 1, 1], [1, 1, 1], [0, 0, 0]], [[2, 2, 2], [2, 2, 2], [2, 2, 2]], [[3, 3, 3], [3, 3, 3], [1, 1, 1]]]) n,m,o = i.shape # compute euclidean distances of each vector to the origin # reshape input array to 2-D, as required by cdist # only keep diagonal, as cdist computes all pairwise distances # reshape result, adapting it to input array and required output d = dist.cdist(i.reshape(n*m,o),i[0]).reshape(n,m,o).diagonal(axis1=2).reshape(n,m,1) ``` `d` holds: ``` array([[[ 0. ], [ 0. ], [ 0. ]], [[ 1.73205081], [ 1.73205081], [ 3.46410162]], [[ 3.46410162], [ 3.46410162], [ 1.73205081]]]) ``` The big caveat of this approach is that we're calculating `n*m*o` distances, when we only need `n*m` (and that it involves an insane amount of reshaping).
How to get euclidean distance on a 3x3x3 array in numpy
[ "", "python", "matrix", "numpy", "scipy", "linear-algebra", "" ]
How do I generate all possible combinations of a string with spaces between the characters? ``` [in]: "foobar" [out]: ['foobar', 'f oobar', 'fo obar', 'f o obar', 'foo bar', 'f oo bar', 'fo o bar', 'f o o bar', 'foob ar', 'f oob ar', 'fo ob ar', 'f o ob ar', 'foo b ar', 'f oo b ar', 'fo o b ar', 'f o o b ar', 'fooba r', 'f ooba r', 'fo oba r', 'f o oba r', 'foo ba r', 'f oo ba r', 'fo o ba r', 'f o o ba r', 'foob a r', 'f oob a r', 'fo ob a r', 'f o ob a r', 'foo b a r', 'f oo b a r', 'fo o b a r', 'f o o b a r', 'foobar', 'f oobar', 'fo obar', 'f o obar', 'foo bar', 'f oo bar', 'fo o bar', 'f o o bar', 'foob ar', 'f oob ar', 'fo ob ar', 'f o ob ar', 'foo b ar', 'f oo b ar', 'fo o b ar', 'f o o b ar', 'fooba r', 'f ooba r', 'fo oba r', 'f o oba r', 'foo ba r', 'f oo ba r', 'fo o ba r', 'f o o ba r', 'foob a r', 'f oob a r', 'fo ob a r', 'f o ob a r', 'foo b a r', 'f oo b a r', 'fo o b a r', 'f o o b a r'] ```
``` import itertools as it def func(s): if not s: return [s] binary = it.product(['',' '], repeat=len(s)-1) zipped = (it.izip_longest(s , comb, fillvalue='') for comb in binary) return [''.join(it.chain.from_iterable(x)) for x in zipped] func('foobar') ``` output: ``` ['foobar', 'fooba r', 'foob ar', 'foob a r', 'foo bar', 'foo ba r', 'foo b ar', 'foo b a r', 'fo obar', 'fo oba r', 'fo ob ar', 'fo ob a r', 'fo o bar', 'fo o ba r', 'fo o b ar', 'fo o b a r', 'f oobar', 'f ooba r', 'f oob ar', 'f oob a r', 'f oo bar', 'f oo ba r', 'f oo b ar', 'f oo b a r', 'f o obar', 'f o oba r', 'f o ob ar', 'f o ob a r', 'f o o bar', 'f o o ba r', 'f o o b ar', 'f o o b a r'] ```
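The same gap-filling idea on Python 3 (`izip_longest` is Python 2 only) can be written with plain `zip` by padding the gap tuple with an empty string; a sketch with a helper name of my own choosing:

```python
from itertools import product

def spaced(s):
    """Every way of inserting single spaces between the characters of s."""
    if len(s) < 2:
        return [s]
    # One binary choice ('' or ' ') per gap between adjacent characters;
    # pad the gap tuple with '' so zip covers the last character too.
    return [''.join(ch + sep for ch, sep in zip(s, gaps + ('',)))
            for gaps in product(('', ' '), repeat=len(s) - 1)]

print(spaced('foo'))
```

A string of length n has n-1 gaps, so this produces 2^(n-1) combinations.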
Here's an implementation of my recursive idea above: ``` def string_spaces(s): ret = set([s]) # use a set rather than a list to prevent duplicates for i in range(1, len(s)): for fst in string_spaces(s[:i]): for snd in string_spaces(s[i:]): ret.add(fst + ' ' + snd) return ret ``` Example: ``` In [11]: string_spaces('foo') Out[11]: set(['foo', 'f o o', 'f oo', 'fo o']) ``` NB: Python has a recursion limit of 1000 stack frames, so this will crash for very long strings (longer than 1000 characters).
How do I generate all possible combinations of a string with spaces between the characters? Python
[ "", "python", "string", "whitespace", "combinations", "" ]
What is the correct way to pass an object with a custom exception? I'm pretty sure this code *used* to work, but now it is throwing an error. ``` class FailedPostException(Exception): pass def post_request(request): session = requests.Session() response = session.send(request.prepare(), timeout=5, verify=True) if response.status_code is not requests.codes.ok: raise FailedPostException(response) session.close() return response try: ... except FailedPostException as r: // type(r) - Requests.Response print r.text AttributeError: 'FailedPostException' object has no attribute 'text' ```
The raising and catching of the exception is correct, the issue here is that you expect the exception to have a `text` attribute that does not exist. When inheriting from a built-in exception type you can use the `args` attribute, which will be a tuple of the arguments to the exception, for example: ``` try: ... except FailedPostException as r: print r.args[0] ``` In this case you could use `str(r)` instead of `r.args[0]`. If there is only one argument to the exception then `str(r)` will be equivalent to `str(r.args[0])`, otherwise it will be equivalent to `str(r.args)`. If you want to add the `text` attribute to your `FailedPostException`, you can do the following: ``` class FailedPostException(Exception): def __init__(self, text, *args): super(FailedPostException, self).__init__(text, *args) self.text = text ``` Note that in Python 3.x you can just use `super().__init__(text, *args)`.
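Putting the pattern above together end-to-end, with a made-up stand-in object in place of `requests.Response` so no network call is involved:

```python
class FailedPostException(Exception):
    def __init__(self, text, *args):
        super().__init__(text, *args)
        self.text = text

class FakeResponse:
    """Stand-in for requests.Response, just to exercise the except block."""
    status_code = 500
    text = "server exploded"

def post_request(response):
    # Raise with the response body, mirroring the question's flow.
    if response.status_code != 200:
        raise FailedPostException(response.text)

try:
    post_request(FakeResponse())
except FailedPostException as r:
    print(r.text)
```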
You can keep a reference to the original `Response` object and expose its attributes like this: ``` class FailedPostException(Exception): def __init__(self, rsp): super(FailedPostException, self).__init__() self.response = rsp @property def text(self): return self.response.text @property def status_code(self): return self.response.status_code #other properties if interested.... ``` In case you need to introspect more of the `Response` object ``` r.response.url r.response.reason ... ```
Passing an object with an Exception?
[ "", "python", "exception", "" ]
I'm working on PDF with Python and I'm accessing the file's meta data by using `PDFMiner`. I extract the info using this: ``` from pdfminer.pdfparser import PDFParser, PDFDocument fp = open('diveintopython.pdf', 'rb') parser = PDFParser(fp) doc = PDFDocument() parser.set_document(doc) doc.set_parser(parser) doc.initialize() print doc.info[0]['CreationDate'] # And return this value "D:20130501200439+01'00'" ``` How can I convert `D:20130501200439+01'00'` into a readable format in Python?
Is "+01'00'" the timezone information? Not taking that into account, you can create a datetime object as follows... ``` >>>from time import mktime, strptime >>>from datetime import datetime ... >>>datestring = doc.info[0]['CreationDate'][2:-7] >>>ts = strptime(datestring, "%Y%m%d%H%M%S") >>>dt = datetime.fromtimestamp(mktime(ts)) datetime(2013, 5, 1, 20, 4, 39) ```
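The same parse without the `mktime` round-trip: `strptime` on the `datetime` class builds the object directly (the timezone suffix is still ignored, as in the answer above; the function name is mine):

```python
from datetime import datetime

def parse_pdf_date(raw):
    """Parse the date part of a PDF CreationDate string.

    Drops the trailing timezone part (e.g. +01'00') if present.
    """
    core = raw[2:16] if raw.startswith("D:") else raw[:14]
    return datetime.strptime(core, "%Y%m%d%H%M%S")

print(parse_pdf_date("D:20130501200439+01'00'"))
```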
I found the format documented [here](http://www.verypdf.com/pdfinfoeditor/pdf-date-format.htm). I needed to cope with the timezones too because I have 160k documents from all over to deal with. Here is my full solution: ``` import datetime import re from dateutil.tz import tzutc, tzoffset pdf_date_pattern = re.compile(''.join([ r"(D:)?", r"(?P<year>\d\d\d\d)", r"(?P<month>\d\d)", r"(?P<day>\d\d)", r"(?P<hour>\d\d)", r"(?P<minute>\d\d)", r"(?P<second>\d\d)", r"(?P<tz_offset>[+-zZ])?", r"(?P<tz_hour>\d\d)?", r"'?(?P<tz_minute>\d\d)?'?"])) def transform_date(date_str): """ Convert a pdf date such as "D:20120321183444+07'00'" into a usable datetime http://www.verypdf.com/pdfinfoeditor/pdf-date-format.htm (D:YYYYMMDDHHmmSSOHH'mm') :param date_str: pdf date string :return: datetime object """ global pdf_date_pattern match = re.match(pdf_date_pattern, date_str) if match: date_info = match.groupdict() for k, v in date_info.iteritems(): # transform values if v is None: pass elif k == 'tz_offset': date_info[k] = v.lower() # so we can treat Z as z else: date_info[k] = int(v) if date_info['tz_offset'] in ('z', None): # UTC date_info['tzinfo'] = tzutc() else: multiplier = 1 if date_info['tz_offset'] == '+' else -1 date_info['tzinfo'] = tzoffset(None, multiplier*(3600 * date_info['tz_hour'] + 60 * date_info['tz_minute'])) for k in ('tz_offset', 'tz_hour', 'tz_minute'): # no longer needed del date_info[k] return datetime.datetime(**date_info) ```
Convert CreationTime of PDF to a readable format in Python
[ "", "python", "date", "datetime", "pdf", "" ]
I've been supplied with a zipped file containing multiple individual streams of compressed XML. The compressed file is 833 MB. If I try to decompress it as a single object, I only get the first stream (about 19 kB). I've modified the following code supplied as an answer to an [older question](https://stackoverflow.com/questions/12147484/extract-zlib-compressed-data-from-binary-file-in-python) to decompress each stream and write it to a file: ``` import zlib outfile = open('output.xml', 'w') def zipstreams(filename): """Return all zip streams and their positions in file.""" with open(filename, 'rb') as fh: data = fh.read() i = 0 print "got it" while i < len(data): try: zo = zlib.decompressobj() dat = zo.decompress(data[i:]) outfile.write(dat) zo.flush() i += len(data[i:]) - len(zo.unused_data) except zlib.error: i += 1 outfile.close() zipstreams('payload') ``` This code runs and produces the desired result (all the XML data decompressed to a single file). The problem is that it takes several days to work! Even though there are tens of thousands of streams in the compressed file, it still seems like this should be a much faster process. Roughly 8 days to decompress 833 MB (estimated 3 GB raw) suggests that I'm doing something very wrong. Is there another way to do this more efficiently, or is the slow speed the result of a read-decompress-write-repeat bottleneck that I'm stuck with? Thanks for any pointers or suggestions you have!
It's hard to say very much without more specific knowledge of the file format you're actually dealing with, but it's clear that your algorithm's handling of substrings is quadratic-- not a good thing when you've got tens of thousands of them. So let's see what we know: You say that the vendor states that they are > using the standard zlib compression library. These are the same compression routines on which the gzip utilities are built. From this we can conclude that the component streams are in **raw zlib format,** and are *not* encapsulated in a gzip wrapper (or a PKZIP archive, or whatever). The authoritative documentation on the ZLIB format is here: <https://www.rfc-editor.org/rfc/rfc1950> So let's assume that your file is exactly as you describe: **A 32-byte header, followed by raw ZLIB streams concatenated together, without any other stuff in between.** (**Edit:** That's not the case, after all). Python's [zlib documentation](http://docs.python.org/2/library/zlib.html) provides a `Decompress` class that is actually pretty well suited to churning through your file. It includes an attribute `unused_data` whose [documentation](http://docs.python.org/2/library/zlib.html#zlib.Decompress.unused_data) states clearly that: > The only way to determine where a string of compressed data ends is by actually decompressing it. This means that when compressed data is contained part of a larger file, you can only find the end of it by reading data and feeding it followed by some non-empty string into a decompression object’s decompress() method until the unused\_data attribute is no longer the empty string. So, this is what you can do: Write a loop that reads through `data`, say, one block at a time (no need to even read the entire 800MB file into memory). Push each block to the `Decompress` object, and check the `unused_data` attribute. When it becomes non-empty, you've got a complete object. 
Write it to disk, create a new decompress object and initialize it with the `unused_data` from the last one. This just might work (untested, so check for correctness). **Edit:** Since you do have other data in your data stream, I've added a routine that aligns to the next ZLIB start. You'll need to find and fill in the two-byte sequence that identifies a ZLIB stream in *your* data. (Feel free to use your old code to discover it.) While there's no fixed ZLIB header in general, it should be the same for each stream since it consists of [protocol options and flags,](https://www.rfc-editor.org/rfc/rfc1950) which are presumably the same for the entire run. ``` import zlib # FILL IN: ZHEAD is two bytes with the actual ZLIB settings in the input ZHEAD = CMF+FLG def findstart(header, buf, source): """Find `header` in str `buf`, reading more from `source` if necessary""" while buf.find(header) == -1: more = source.read(2**12) if len(more) == 0: # EOF without finding the header return '' buf += more offset = buf.find(header) return buf[offset:] ``` You can then advance to the start of the next stream. I've added a `try`/`except` pair since the same byte sequence might occur outside a stream: ``` source = open(datafile, 'rb') skip_ = source.read(32) # Skip non-zlib header buf = '' while True: decomp = zlib.decompressobj() # Find the start of the next stream buf = findstart(ZHEAD, buf, source) try: stream = decomp.decompress(buf) except zlib.error: print "Spurious match(?) at output offset %d." % outfile.tell(), print "Skipping 2 bytes" buf = buf[2:] continue # Read until zlib decides it's seen a complete file while decomp.unused_data == '': block = source.read(2**12) if len(block) > 0: stream += decomp.decompress(block) else: break # We've reached EOF outfile.write(stream) buf = decomp.unused_data # Save for the next stream if len(block) == 0: break # EOF outfile.close() ``` PS 1. If I were you I'd write each XML stream into a separate file. PS 2. 
You can test whatever you do on the first MB of your file, till you get adequate performance.
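The `unused_data` mechanics are easy to verify on synthetic input before pointing the real loop at an 833 MB file. A minimal sketch (stdlib only, no file I/O) that splits two concatenated zlib streams:

```python
import zlib

def split_zlib_streams(blob):
    """Split concatenated raw zlib streams using Decompress.unused_data."""
    out = []
    while blob:
        d = zlib.decompressobj()
        out.append(d.decompress(blob))
        blob = d.unused_data  # whatever follows this stream, if anything
    return out

payload = zlib.compress(b'<a>first</a>') + zlib.compress(b'<b>second</b>')
print(split_zlib_streams(payload))
```

Each iteration consumes exactly one stream, and the leftover bytes seed the next pass, which is the same pattern the answer's full loop uses around `findstart`.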
Decompressing 833 MB should take about 30 seconds on a modern processor (e.g. a 2 GHz i7). So yes, you are doing something very wrong. Attempting to decompress at every byte offset to see if you get an error is part of the problem, though not all of it. There are better ways to find the compressed data. Ideally you should find or figure out the format. Alternatively, you can search for valid zlib headers using the [RFC 1950 specification](https://www.rfc-editor.org/rfc/rfc1950), though you may get false positives. More significant may be that you are reading the entire 833 MB into memory at once, and decompressing the 3 GB to memory, possibly in large pieces each time. How much memory does your machine have? You may be thrashing to virtual memory. If the code you show works, then the data is not zipped. zip is a specific file format, normally with the .zip extension, that encapsulates raw deflate data within a structure of local and central directory information intended to reconstruct a directory in a file system. You must have something rather different, since your code is looking for and apparently finding zlib streams. What is the format you have? Where did you get it? How is it documented? Can you provide a dump of, say, the first 100 bytes? The way this should be done is *not* to read the whole thing into memory and decompress entire streams at once, also into memory. Instead, make use of the [`zlib.decompressobj`](http://docs.python.org/3.3/library/zlib.html#zlib.decompressobj) interface which allows you to provide a piece at a time, and get the resulting available decompressed data. You can read the input file in much smaller pieces, find the decompressed data streams by using the documented format or looking for zlib (RFC 1950) headers, and then running those a chunk at a time through the decompression object, writing out the decompressed data where you want it. 
`decomp.unused_data` can be used to detect the end of the compressed stream (as in the example you found).
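A sketch of the chunk-at-a-time reading this answer recommends: decompress one zlib stream from a file-like object without ever holding the whole input in memory (`io.BytesIO` stands in for the real file here, and the chunk size is arbitrary):

```python
import io
import zlib

def decompress_chunked(source, chunk_size=4096):
    """Decompress one zlib stream from a file-like object, a chunk at a time."""
    d = zlib.decompressobj()
    out = []
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break  # ran out of input
        out.append(d.decompress(chunk))
        if d.unused_data:  # the end of this stream has been reached
            break
    out.append(d.flush())
    return b''.join(out)

data = zlib.compress(b'hello world' * 1000)
print(len(decompress_chunked(io.BytesIO(data), chunk_size=64)))
```

Memory use stays proportional to the chunk size plus the decompressed pieces you choose to buffer, rather than to the whole 833 MB input.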
Python and zlib: Terribly slow decompressing concatenated streams
[ "", "python", "stream", "zlib", "compression", "" ]
OK, first-off I admit MySQL Syntax has never been my strongest point. So, here I am. **Urls :** ``` ID Url Code =============================================== 1 http://www.google.com Abcd 2 http://www.freetemplates4u.com Efgh 3 ... ``` **Posts :** ``` ID Title Address =============================================== 1 Some Title 1 http://mize.it/Abcd 2 Some Title 2 http://mize.it/Efgh 3 ... ``` --- I want to create a query to fetch the following table ``` Title Url ======================================================= Some Title 1 http://www.google.com Some Title 2 http://www.freetemplates4u.com ``` --- In a few words : * Take the `Url`-`Code` pairs from `Urls` table * Search for `http://mize.it/`+`Code` in the `Posts` table (in the `Address` field) * Combine the final `Title` and `Url` in a result table. I know it has something to do with joins and concatenation, but I'm definitely lost. --- **SIDENOTE :** I don't care neither about my current database's structure, nor about performance issues. All I want is to transfer existing data, from the existing database (without having to alter it), to my new website's database (under a totally different format/structure).
You should change your DB design; this query will have poor performance since MySQL has to do a full table scan. Try adding a Code column to your Posts table that has the right value (populate it on insert/update) and add an index on Code (both tables). Then you should be able to do: ``` SELECT Posts.Title, Urls.Url FROM Posts INNER JOIN Urls ON Posts.Code = Urls.Code ``` **Update:** If the first part of the url is always the same, this will work ``` SELECT Posts.Title, Urls.Url FROM Posts INNER JOIN Urls ON Posts.Address = CONCAT('http://mize.it/', Urls.Code) ```
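The answers target MySQL, but the join-on-a-computed-key idea is easy to try out with Python's built-in sqlite3 (SQLite concatenates with `||` where MySQL uses `CONCAT`; table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Urls  (ID INTEGER PRIMARY KEY, Url TEXT, Code TEXT);
    CREATE TABLE Posts (ID INTEGER PRIMARY KEY, Title TEXT, Address TEXT);
    INSERT INTO Urls  VALUES (1, 'http://www.google.com', 'Abcd'),
                             (2, 'http://www.freetemplates4u.com', 'Efgh');
    INSERT INTO Posts VALUES (1, 'Some Title 1', 'http://mize.it/Abcd'),
                             (2, 'Some Title 2', 'http://mize.it/Efgh');
""")
rows = conn.execute("""
    SELECT Posts.Title, Urls.Url
    FROM Posts
    INNER JOIN Urls ON Posts.Address = 'http://mize.it/' || Urls.Code
    ORDER BY Posts.ID
""").fetchall()
print(rows)
```

For a one-off data migration like the asker describes, the computed-key join is fine; the indexing advice above only matters if the query has to run repeatedly.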
TRY ``` SELECT p.title,x.url FROM Posts p INNER JOIN ( SELECT url, CONCAT('http://mize.it/',code) AS xcode FROM Urls ) x ON (x.xcode = p.address) ``` [Working DEMO](http://sqlfiddle.com/#!2/54199/1)
Trouble with MySQL query
[ "", "mysql", "sql", "" ]
I am trying to do this * Get all rows in a table named blogs. * Copy them into a temporary table * Edit the language field of this temporary table's records * Insert them into the blogs table And I'm trying it like this: ``` CREATE TEMPORARY TABLE tmptable SELECT * FROM blogs WHERE lan = 2; UPDATE tmptable SET lan = 1; INSERT INTO blogs SELECT * FROM tmptable; dump database tmptable; ``` But of course I get a duplicate key error... How can I prevent it? -EDIT- I TRIED: ``` CREATE TEMPORARY TABLE tmptable SELECT * FROM blogs WHERE lan = 2; UPDATE tmptable SET lan = 1; ALTER TABLE tmptable DROP id; INSERT INTO blogs SELECT * FROM tmptable; dump database tmptable; ``` But then the `Column count doesn't match value count at row 1` -EDIT- I believe this will work (and it did, because I know how many records exist) ``` CREATE TEMPORARY TABLE tmptable SELECT * FROM blogs WHERE lan = 2; UPDATE tmptable SET lan = 1; UPDATE tmptable SET id = id + 1000; INSERT INTO blogs SELECT * FROM tmptable; ``` But how can I do it properly? (just set the next available autoincrement value for the primary key (id) (without PHP or alike)) -EDIT- maybe something like this??? ``` CREATE TEMPORARY TABLE tmptable SELECT * FROM blogs WHERE lan = 2; UPDATE tmptable SET lan = 1; UPDATE tmptable SET id = id + (SELECT id FROM blogs ORDER BY id DESC LIMIT 1); INSERT INTO blogs SELECT * FROM tmptable; ```
``` CREATE TEMPORARY TABLE tmptable SELECT * FROM blogs WHERE lan = 2; UPDATE tmptable SET lan = 1; alter table tmptable drop column id; INSERT INTO blogs SELECT NULL,tmptable.* FROM tmptable; ``` Assuming the column "id" is the first one.
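The answers are for MySQL, but the mechanism (drop the old id on the way back in and let the primary key auto-assign) can be demonstrated with stdlib sqlite3, where inserting NULL into an `INTEGER PRIMARY KEY` picks the next id; here the id column is kept and replaced by NULL in the SELECT rather than dropped with ALTER TABLE:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE blogs (id INTEGER PRIMARY KEY, lan INTEGER, title TEXT);
    INSERT INTO blogs VALUES (1, 2, 'first'), (2, 2, 'second'), (3, 1, 'other');

    CREATE TEMPORARY TABLE tmptable AS SELECT * FROM blogs WHERE lan = 2;
    UPDATE tmptable SET lan = 1;

    -- NULL in the id slot lets sqlite pick the next primary key value
    INSERT INTO blogs SELECT NULL, lan, title FROM tmptable ORDER BY id;
""")
print(conn.execute("SELECT id, lan, title FROM blogs ORDER BY id").fetchall())
```

No manual `id + 1000` guessing is needed; the copies come back with ids 4 and 5.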
No temporary table needed. ``` INSERT INTO blogs (lan, col1, col2, col3, ...) SELECT 1, col1, col2, col3, ... FROM blogs WHERE lan = 2 ``` Replace `col1, col2, col3, ...` with a list of all columns except `lan` and `id`.
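The temp-table-free variant is just as quick to check; another sqlite3 sketch, with made-up columns standing in for the question's schema:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE blogs (id INTEGER PRIMARY KEY, lan INTEGER, title TEXT);
    INSERT INTO blogs VALUES (1, 2, 'first'), (2, 2, 'second');

    -- copy the lan = 2 rows back in as lan = 1; new ids are assigned automatically
    INSERT INTO blogs (lan, title) SELECT 1, title FROM blogs WHERE lan = 2 ORDER BY id;
""")
print(db.execute("SELECT id, lan, title FROM blogs ORDER BY id").fetchall())
```

Selecting from the same table being inserted into is safe here because the new rows have `lan = 1` and can never match the `WHERE lan = 2` filter.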
Duplicate all rows in a table and prevent duplicate keys
[ "", "mysql", "sql", "clone", "temp-tables", "" ]
I know this works but does anybody know of a way of doing all these in **one** line of code: ``` EXEC @ResultInt = NameofAStoredProcedure IF @ResultInt <> 0 RETURN @ResultInt ``` LIKE: ``` -- I'm trying something like this but it does not work we want to do it all in -- one line to do a Find-Replace throughout our code base IF (EXEC @ResultInt = NameofAStoredProcedure) <> 0 RETURN ResultInt ```
Maybe I'm getting something wrong, but why not simply remove the line breaks? ``` EXEC @ResultInt = NameofAStoredProcedure IF @ResultInt <> 0 RETURN @ResultInt; ``` If all you want is to have it on one line for easier editing, that should work. You could even include the declaration of `@ResultInt`: ``` DECLARE @ResultInt int; EXEC @ResultInt = NameofAStoredProcedure IF @ResultInt <> 0 RETURN @ResultInt; ``` Tested with MS SQL 2005.
Instead of trying to crush everything down to one row to make mass find and replaces easier, I would recommend using [regular expressions](http://msdn.microsoft.com/en-us/library/2k3te2cs%28v=vs.80%29.aspx) to handle carriage returns and line feeds while maintaining code readability. For instance, let's say you wanted to change "IF @ResultInt <> 0" to "IF @ResultInt <> 1" and add row "SET @ResultInt = 1", then do the following: ***ENABLE REGULAR EXPRESSIONS IN FIND AND REPLACE*** In Find and Replace in both Visual Studio and SQL Server Management Studio, expand the Find Options -> Select Use -> Select Regular expressions. ***FIND WHAT STATEMENT*** ``` EXEC \@ResultInt = NameofAStoredProcedure\nIF \@ResultInt \<\> 0\nRETURN \@ResultInt ``` ***REPLACE WITH STATEMENT*** ``` EXEC \@ResultInt = NameofAStoredProcedure\nIF \@ResultInt \<\> 1\nSET \@ResultInt = 1\nRETURN \@ResultInt ``` ***RESULTS*** ``` EXEC @ResultInt = NameofAStoredProcedure IF @ResultInt <> 1 SET @ResultInt = 1 RETURN @ResultInt ```
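If the IDE's Find and Replace isn't enough, the same multi-line rewrite can be scripted. A hedged sketch with Python's `re` module, operating on one file's text; the pattern and replacement below are just the example from this answer:

```python
import re

code = (
    "EXEC @ResultInt = NameofAStoredProcedure\n"
    "IF @ResultInt <> 0\n"
    "RETURN @ResultInt"
)

# The pattern spans three lines; \n matches the line breaks in the source text.
pattern = re.compile(
    r"EXEC @ResultInt = NameofAStoredProcedure\n"
    r"IF @ResultInt <> 0\n"
    r"RETURN @ResultInt"
)
replacement = (
    "EXEC @ResultInt = NameofAStoredProcedure\n"
    "IF @ResultInt <> 1\n"
    "SET @ResultInt = 1\n"
    "RETURN @ResultInt"
)
new_code = pattern.sub(replacement, code)
print(new_code)
```

A real batch run would loop over the `.sql` files, read each one, apply `pattern.sub`, and write the result back.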
Is there a way to exec a SP, check the result with an if statement and return the value, all within just one line?
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]