This is an embarrassingly simple question. I'm trying to understand how to incorporate a simple Python function in the first Django app I'm building. Here is my views.py file...

```
from django.shortcuts import render
from noticeboard.models import Listings
from django.views import generic
from django.template import Context

class IndexView(generic.ListView):
    template_name = 'listings/index.html'
    context_object_name = 'latest_listings'

    def get_queryset(self):
        return Listings.objects.order_by('-pub_date')

class DetailView(generic.DetailView):
    model = Listings
    template_name = 'listings/listings_detail.html'
    context_object_name = 'listing'
```

In the Listings model, I have the following field:

```
listing = models.CharField(max_length=50)
```

I want to write a function that makes sure the listing variable is all upper case - I know how to write the function - I just don't know how to incorporate it into the views.py file!
There are a few possibilities:

1. Pass the variable through the `upper` template filter in the html template, like so:

```
{{ listing.listing|upper }}
```

... as jpic said. Here, the context object is "listing", and the attribute you want to upcase is `listing.listing`.

2. Create a method on the model that returns that attribute as all upper case:

```
class Listings(models.Model):
    # ...
    def uppercase_listing(self):
        return self.listing.upper()
```

and then use it in the template, like so:

```
{{ listing.uppercase_listing }}
```

Note that you can only call methods that take no arguments this way, as there is no way to pass arguments in an implied method call in a Django template.

3. Write a custom template tag or filter. For this simple use (making a variable uppercase) it would be overkill, as a built-in filter already exists for that (the one jpic pointed out). But if you wanted some kind of custom changes, then a custom tag or filter might be appropriate. See <https://docs.djangoproject.com/en/1.5/howto/custom-template-tags/>

So to sum up, you would call a function from your template by:

* attaching it to a model and referencing it that way
* writing a custom tag or filter. But in your case you don't have to, as an existing built-in filter already does what you want.

Generally speaking, django doesn't encourage any code going into the templates. It tries to limit what goes in there to fairly declarative statements. The goal is to put all the logic into the model, view, and other python code, and just reference the pre-calculated values from the templates.
Why not just use the [`upper` template filter](https://docs.djangoproject.com/en/dev/ref/templates/builtins/#upper) ? No need to write any pure python to display a view context variable in uppercase ...
How to write Python Code in Django
[ "", "python", "django", "" ]
Is it possible to download and name a file from website using Python 2.7.2 and save it on desktop? If yes, then how to do it?
Here are 3 ways to do it, using [urllib2](http://docs.python.org/2/library/urllib2.html), [requests](http://docs.python-requests.org/en/latest/) or [urllib](http://docs.python.org/2/library/urllib.html):

```
import urllib2

with open('filename', 'wb') as f:
    f.write(urllib2.urlopen(URL).read())
print "Download Complete!"
```

```
import requests

r = requests.get(URL)
with open("filename", "wb") as code:
    code.write(r.content)
print "Download Complete!"
```

```
import urllib

urllib.urlretrieve(URL, "filename")
print "Download Complete!"
```

where `filename` is the name you want to give the file and `URL` is the url of the file you want to download. This will save the file in the same directory as the python file you're using to download it; to save it on the desktop instead, pass an absolute path to the desktop as the filename.
You can use `urllib.urlretrieve`: ``` urllib.urlretrieve(url[, filename[, reporthook[, data]]]) ``` From [docs](http://docs.python.org/2/library/urllib.html#urllib.urlretrieve): > Copy a network object denoted by a URL to a local file, if necessary. If the > URL points to a local file, or a valid cached copy of the object > exists, the object is not copied.
Download, name and save a file using Python 2.7.2
[ "", "python", "" ]
I have an adjacency list maintained between 2 sets of entities:

```
dict1 = {'x1': ['y1', 'y2'], 'x2': ['y2', 'y3', 'y4'], ...}
dict2 = {'y1': ['x1'], 'y2': ['x1', 'x2'], 'y3': ['x2'], ...}
```

Given a new entry coming in, what is the recommended way to update the dicts? Assume the entry is `'x3': ['y2', 'y4']`. Please note that x3 may not necessarily be a new vertex every time.
First, I'd suggest `defaultdict` so that referencing a key that doesn't exist will initialize it to an empty list.

```
from collections import defaultdict

dict1 = defaultdict(list)
dict1['x1'] = ['y1', 'y2']
dict1['x2'] = ['y2', 'y3', 'y4']

dict2 = defaultdict(list)
dict2['y1'] = ['x1']
dict2['y2'] = ['x1', 'x2']
dict2['y3'] = ['x2']
```

Then when `'x3': ['y2', 'y4']` comes in:

```
dict1['x3'] = list(set(dict1['x3'] + ['y2', 'y4']))
for y in dict1['x3']:
    dict2[y] = list(set(dict2[y] + ['x3']))
```

using `set` to eliminate duplicate values. Obviously some of the above values would be a little more dynamic than hard coded values. Note: This isn't faster, in fact it's probably slower, but the defaultdict is a better way to avoid the KeyError, and you definitely don't want to introduce duplicate values into your adj list as this will hurt the performance, or maybe even correctness, of whatever algorithm you apply to this graph.
Why can't you do it in a simple way?

```
dict1['x3'] = ['y2', 'y4']
for yi in dict1['x3']:
    if yi in dict2:
        dict2[yi].append('x3')
    else:
        dict2[yi] = ['x3']
```
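Taking the `defaultdict` idea from the first answer and the duplicate check from the simple loop, a reusable helper might look like this (the function name `add_entry` and the pre-seeded `'y4'` entry are my own additions, not from the question):

```python
from collections import defaultdict

def add_entry(forward, backward, node, neighbours):
    """Insert node -> neighbours into both directions of the
    adjacency list, skipping values that are already present."""
    for n in neighbours:
        if n not in forward[node]:
            forward[node].append(n)
        if node not in backward[n]:
            backward[n].append(node)

dict1 = defaultdict(list, {'x1': ['y1', 'y2'], 'x2': ['y2', 'y3', 'y4']})
dict2 = defaultdict(list, {'y1': ['x1'], 'y2': ['x1', 'x2'],
                           'y3': ['x2'], 'y4': ['x2']})

add_entry(dict1, dict2, 'x3', ['y2', 'y4'])
print(dict1['x3'])  # ['y2', 'y4']
print(dict2['y2'])  # ['x1', 'x2', 'x3']
```

Unlike the `set(...)` approach, this keeps insertion order, and calling it again with the same entry is a no-op.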
inserting a new entry into adjacency list
[ "", "python", "" ]
I have the following list comprehension:

```
result = [(i,j,k) for i in S for j in S for k in S if sum([i,j,k]) == 0]
```

I want to add another condition in the if statement such that my result set does not contain (0,0,0). I tried the following:

```
result = [(i,j,k) for i in S for j in S for k in S if sum([i,j,k]) == 0 && (i,j,k) != (0,0,0)]
```

but I am getting a syntax error pointing to the `&&`. I tested my expression with only the first condition and it is correct.
You are looking for the [`and` boolean operator](http://docs.python.org/2/reference/expressions.html#boolean-operations) instead:

```
result = [(i,j,k) for i in S for j in S for k in S if sum([i,j,k]) == 0 and (i,j,k) != (0,0,0)]
```

`&&` is the boolean-AND syntax of [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators), [Java](http://docs.oracle.com/javase/tutorial/java/nutsandbolts/opsummary.html), [Perl](http://www.misc-perl-info.com/perl-operators.html), [PHP](http://php.net/manual/en/language.operators.logical.php), Ruby, Go, OCaml, Haskell, MATLAB, R, Lasso, ColdFusion, C, C#, and C++, but not Python.
Apart from that error, instead of triple-nested for-loops you can also use [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product) here to get the Cartesian product of `S * S * S`:

```
from itertools import product
result = [x for x in product(S, repeat=3) if sum(x) == 0 and x != (0,0,0)]
```

**Demo:**

```
>>> S = [1, -1, 0, 0]
>>> [x for x in product(S, repeat=3) if sum(x) == 0 and x != (0,0,0)]
[(1, -1, 0), (1, -1, 0), (1, 0, -1), (1, 0, -1), (-1, 1, 0), (-1, 1, 0), (-1, 0, 1), (-1, 0, 1), (0, 1, -1), (0, -1, 1), (0, 1, -1), (0, -1, 1)]
```
How to use two conditions in an if statement
[ "", "python", "if-statement", "" ]
I have a little problem with my SQL statement. I have a table with a product\_id and a flag\_id, and I want to get the product\_id values which match all of the flags specified. I know you have to inner join the table with itself to match more than one, but I don't know the exact SQL for it. Table for flags:

```
product_id | flag_id
1            1
1            51
1            23
2            1
2            51
3            1
```

I would like to get all products which have flag\_id 1, 51 and 23.
> get the product\_id which ***matches all*** the flags specified

This problem is called [Relational Division](https://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/). One way to solve it is:

* `GROUP BY product_id`.
* Use the `IN` predicate to specify which flags to match.
* Use the `HAVING` clause to ensure each product has all of those flags, like this:

```
SELECT product_id
FROM flags
WHERE flag_id IN (1, 51, 23)
GROUP BY product_id
HAVING COUNT(DISTINCT flag_id) = 3
```

The `HAVING` clause ensures that the selected `product_id` has all three flags; if it has only one or two of them, it is eliminated. See it in action here:

* [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!2/47ca3/2)

This will give you only:

```
| PRODUCT_ID |
--------------
| 1          |
```
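If you want to sanity-check the query without a MySQL server, the same statement runs unchanged on SQLite; a quick sketch using Python's built-in `sqlite3` module and the question's data:

```python
import sqlite3

# in-memory database seeded with the question's flags table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flags (product_id INTEGER, flag_id INTEGER)")
conn.executemany("INSERT INTO flags VALUES (?, ?)",
                 [(1, 1), (1, 51), (1, 23), (2, 1), (2, 51), (3, 1)])

rows = conn.execute("""
    SELECT product_id
    FROM flags
    WHERE flag_id IN (1, 51, 23)
    GROUP BY product_id
    HAVING COUNT(DISTINCT flag_id) = 3
""").fetchall()

print(rows)  # [(1,)] -- only product 1 carries all three flags
```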
try this: ``` SELECT * FROM your_table WHERE flag_id IN(1,2,..); ```
Selecting a value from multiple rows in MySQL
[ "", "mysql", "sql", "attributes", "" ]
I have a XML file with Russian text:

```
<p>все чашки имеют стандартный посадочный диаметр - 22,2 мм</p>
```

I use `xml.etree.ElementTree` to manipulate it in various ways (without ever touching the text content). Then, I use `ElementTree.tostring`:

```
info["table"] = ET.tostring(table, encoding="utf8")  # table is an Element
```

Then I do some other stuff with this string, and finally write it to a file:

```
f = open(newname, "w")
output = page_template.format(**info)
f.write(output)
f.close()
```

I wind up with this in my file:

```
<p>\xd0\xb2\xd1\x81\xd0\xb5 \xd1\x87\xd0\xb0\xd1\x88\xd0\xba\xd0\xb8 \xd0\xb8\xd0\xbc\xd0\xb5\xd1\x8e\xd1\x82 \xd1\x81\xd1\x82\xd0\xb0\xd0\xbd\xd0\xb4\xd0\xb0\xd1\x80\xd1\x82\xd0\xbd\xd1\x8b\xd0\xb9 \xd0\xbf\xd0\xbe\xd1\x81\xd0\xb0\xd0\xb4\xd0\xbe\xd1\x87\xd0\xbd\xd1\x8b\xd0\xb9 \xd0\xb4\xd0\xb8\xd0\xb0\xd0\xbc\xd0\xb5\xd1\x82\xd1\x80 - 22,2 \xd0\xbc\xd0\xbc</p>
```

How do I get it encoded properly?
You use

```
info["table"] = ET.tostring(table, encoding="utf8")
```

which returns `bytes`. Later you interpolate that into a format string, which is a `str` (unicode); doing that embeds the *representation* of the bytes object (the `\xd0` escapes you see) rather than its contents. etree can return a unicode object instead if you use:

```
info["table"] = ET.tostring(table, encoding="unicode")
```
The problem is that ElementTree.tostring returns a binary object and not an actual string. The answer to this is: ``` info["table"] = ET.tostring(table, encoding="utf8").decode("utf8") ```
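Both behaviours are easy to verify in Python 3 (a small sketch; the Cyrillic sample is shortened from the question's file):

```python
import xml.etree.ElementTree as ET

p = ET.fromstring('<p>диаметр - 22,2 мм</p>')

as_bytes = ET.tostring(p, encoding="utf8")    # bytes, prefixed with an XML declaration
as_text = ET.tostring(p, encoding="unicode")  # str, no declaration

print(type(as_bytes))  # <class 'bytes'>
print(as_text)         # <p>диаметр - 22,2 мм</p>
```

Either `encoding="unicode"` or a trailing `.decode("utf8")` gives you a real string that `str.format` and `f.write` handle without producing escape sequences.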
How do I get this to encode properly?
[ "", "python", "xml", "encoding", "utf-8", "python-3.x", "" ]
I will try to explain but, if it is not clear, please let me know. English is not my first language. I need some help with a query that would serve as a condition for whether to insert a set of records or not. First, I've got a db table EmployeeProviders. In one of the stored procedures, I recalculate credits according to some condition. Records come in sets of (in this case) three, but could be more or fewer. If after recalculation I get exactly the same credits with the same effective date as in EmployeeProviders, I do not need to insert these values. EmployeeProviders may contain a few sets of records for each employee, separated by effective date. The difficulty for me is to construct a query that will check records not one by one but in sets of three (in this case). If one of the records does not match, I need to insert all three. If all of them are the same, I do not insert any records.

```
declare @StartDate datetime, @employee_id int
select @StartDate = '2013-07-01', @employee_id = 3465
```

For example, here is the db table populated with values:

```
DECLARE @EmployeeProviders TABLE
(
    ident_id int IDENTITY,
    employee_id int,
    id int,
    plan_id int,
    credits decimal(18,5),
    effective_date datetime
)

INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 23, 0.00000, '2013-06-01')
INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 15, 0.00000, '2013-06-01')
INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 16, 60.00, '2013-06-01')
INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 23, 0.00000, '2013-07-01')
INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 15, 0.00000, '2013-07-01')
INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date) VALUES (3465, 18753, 16, 81.580, '2013-07-01')

SELECT * FROM @EmployeeProviders
WHERE plan_id = 18753 and datediff(dd, effective_date, @StartDate) = 0
```

Here is the temp table in the stored procedure. It gets updated during the calculation process:

```
DECLARE @Providers TABLE
(
    id int,
    plan_id int,
    credits decimal(18,5)
)

INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 23, 0.00000)
INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 15, 0.00000)
INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 16, 81.580)

SELECT * FROM @Providers
```

After all updates the amounts in this temp table are the same as in the db table EmployeeProviders, so I do not need to insert a new set of records. How can I write one query that can either be a condition like IF NOT EXISTS() or just do

```
INSERT EmployeeProviders (...)
SELECT ... FROM @Providers ... -- query that would return a set of 3 records if values are not the same as in EmployeeProviders
```

Another scenario: @Providers.credits = 65 for id = 16, so because the amount has changed compared to EmployeeProviders.credits, I will add a new set of 3 records to the EmployeeProviders table:

```
DECLARE @Providers TABLE
(
    id int,
    plan_id int,
    credits decimal(18,5)
)

INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 23, 0.00000)
INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 15, 0.00000)
INSERT INTO @Providers (plan_id, id, credits) VALUES (18753, 16, 65.00)

SELECT * FROM @Providers
```

Thank you in advance, Mak
I'll take a stab at it. Try this in the stored procedure:

```
IF (EXISTS(SELECT 3465, pr.plan_id, pr.id, epr.credits, effective_date
           FROM @Providers pr
           LEFT JOIN @EmployeeProviders epr
               ON pr.id = epr.id
               AND pr.plan_id = epr.plan_id
               AND pr.credits = epr.credits
               AND effective_date = '2013-07-01'
           WHERE epr.credits IS NULL))
    INSERT INTO @EmployeeProviders (employee_id, plan_id, id, credits, effective_date)
    SELECT 3465, plan_id, id, credits, '2013-07-01'
    FROM @Providers
```

See it in [action](http://sqlfiddle.com/#!3/b1b5f/15)
If I understand your question correctly, your problem can be solved by making your insert conditional on the outcome of an `EXCEPT`:

```
if exists(
    select id, plan_id, credits
    from @Providers
    except
    SELECT id, plan_id, credits
    FROM @EmployeeProviders
    WHERE plan_id = 18753
      and datediff(dd, effective_date, @StartDate) = 0
)
```
Construct a query as a condition to insert set of records
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Here is the problem I am looking at: take a list of dictionaries. Inside this, you can have the obvious cases of valid and invalid results. So you write a simple ternary operator inside a list comprehension to get out a default message on failure. However, it gives the default for each failure. In the case where you know all the lists are the same length, is it possible to reduce this down to a single failure message inside the same comprehension? The goal in this example code is to make `out` have the value of `'DEFAULT'` if the key is not in the dictionaries inside the list. To test, a simple print is done:

```
print(out)
>>> ['DEFAULT']
```

Here is my test data and a simple, successful result:

```
lis_dic = [{1:'One',2:'Two',3:'Three'},
           {1:'Ichi',2:'Ni',3:'San'},
           {1:'Eins',2:'Zvi',3:'Dri'}]

key = 1
out = [i[key] for i in lis_dic]
print(out)
>>> ['One', 'Ichi', 'Eins']
```

Error handling attempts:

```
key = 4

out = [i[key] for i in lis_dic if key in i]
print(out)
>>> []

out = [i.get(key, 'DEFAULT') for i in lis_dic]
print(out)
>>> ['DEFAULT', 'DEFAULT', 'DEFAULT']

out = [i[key] if key in i else 'DEFAULT' for i in lis_dic]
print(out)
>>> ['DEFAULT', 'DEFAULT', 'DEFAULT']
```

As you can see, all of those give either no result or three results, not a singular result. I also tried moving the location of the else, but I kept getting syntax errors. Oh, and this is not useful because it can change the order of valid results:

```
out = list(set([i[key] if key in i else 'DEFAULT' for i in lis_dic]))
```
The key is not in any of the three dictionaries, so the `'DEFAULT'` value is generated three times. If the first dictionary held the value and the other two did not, you would get a list with one value and two `'DEFAULT'`s. Anyway, try: ``` out = [i[key] for i in lis_dic if key in i] or ['DEFAULT'] ```
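A quick sketch of that short-circuit fallback with the question's data (wrapping it in a function is my addition):

```python
lis_dic = [{1: 'One', 2: 'Two', 3: 'Three'},
           {1: 'Ichi', 2: 'Ni', 3: 'San'},
           {1: 'Eins', 2: 'Zvi', 3: 'Dri'}]

def lookup(key):
    # an empty list is falsy, so `or` supplies the single fallback value
    return [d[key] for d in lis_dic if key in d] or ['DEFAULT']

print(lookup(1))  # ['One', 'Ichi', 'Eins']
print(lookup(4))  # ['DEFAULT']
```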
Unless you have a constraint on your data structure, the simplest solution would be to transform it once so it is trivial to retrieve the information you need every time. This is also likely more efficient, depending on what you are actually doing. The transformed data structure would look like this:

```
transformed_lis_dic = {
    1: ['One', 'Ichi', 'Eins'],
    2: ['Two', 'Ni', 'Zvi'],
    3: ['Three', 'San', 'Dri']
}

out = transformed_lis_dic.get(key, ['DEFAULT'])
```

You could do the transformation this way:

```
from collections import defaultdict

transformed_lis_dic = defaultdict(list)
for dic in lis_dic:
    for key, val in dic.iteritems():
        transformed_lis_dic[key].append(val)
```
Single failure result from ternary operator inside a list comprehension of a list of dictionaries in python
[ "", "python", "list", "dictionary", "list-comprehension", "conditional-operator", "" ]
I have a table with the following structure. ![Table Image](https://i.stack.imgur.com/bmzz6.jpg) From that table I need to find the price using the range given in the table.

```
ex: if Footage_Range1 = 100, the output is 0.00
    if Footage_Range1 = 101, the output is 2.66
    if Footage_Range1 = 498, the output is 2.66
```

How do I write the query to get the price?
If I understood your requirements correctly you can try this:

```
SELECT price
FROM my_table
WHERE Footage_Range1 <= YOUR_RANGE
ORDER BY Footage_Range1 DESC
LIMIT 1
```

Where `YOUR_RANGE` is the input: 100, 101, 498 etc. Basically this query returns the price for the closest `Footage_Range1` that is smaller than or equal to the input.
I have a sample for your requirement. Please take a look:

```
DECLARE @range INT = 498

DECLARE @Test TABLE (mfg_id INT, footage_range INT, price FLOAT)

INSERT INTO @Test (mfg_id, footage_range, price)
SELECT 2, 0, 0.00 UNION ALL
SELECT 2, 101, 2.66 UNION ALL
SELECT 2, 500, 2.34 UNION ALL
SELECT 2, 641, 2.21 UNION ALL
SELECT 2, 800, 2.11 UNION ALL
SELECT 2, 1250, 2.06

SELECT TOP 1 *
FROM @Test
WHERE footage_range <= @range
ORDER BY footage_range DESC
```
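Both answers implement the same lookup: take the largest `footage_range` that is still less than or equal to the input (MySQL's `ORDER BY ... DESC LIMIT 1` plays the role of SQL Server's `TOP 1`). A sqlite3 sketch of that pattern, with the table values adapted from the second answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (footage_range INTEGER, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [(0, 0.00), (101, 2.66), (500, 2.34),
                  (641, 2.21), (800, 2.11), (1250, 2.06)])

def price_for(footage):
    # largest threshold <= input wins
    row = conn.execute(
        "SELECT price FROM prices WHERE footage_range <= ? "
        "ORDER BY footage_range DESC LIMIT 1", (footage,)).fetchone()
    return row[0]

print(price_for(100), price_for(101), price_for(498))  # 0.0 2.66 2.66
```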
how to get the range of the price using range in sql?
[ "", "mysql", "sql", "oracle", "" ]
I have the following database structure:

```
create table Accounting ( Channel, Account )
create table ChannelMapper ( AccountingChannel, ShipmentsMarketPlace, ShipmentsChannel )
create table AccountMapper ( AccountingAccount, ShipmentsComponent )
create table Shipments ( MarketPlace, Component, ProductGroup, ShipmentChannel, Amount )
```

I have the following query running on these tables and I'm trying to optimize it to run as fast as possible:

```
select Accounting.Channel, Accounting.Account, Shipments.MarketPlace
from Accounting
join ChannelMapper on Accounting.Channel = ChannelMapper.AccountingChannel
join AccountMapper on Accounting.Account = AccountMapper.AccountingAccount
join Shipments on
(
    ChannelMapper.ShipmentsMarketPlace = Shipments.MarketPlace
    and ChannelMapper.AccountingChannel = Shipments.ShipmentChannel
    and AccountMapper.ShipmentsComponent = Shipments.Component
)
join (select Component, sum(Amount) from Shipments group by Component) as Totals
    on Shipments.Component = Totals.Component
```

How do I make this query run as fast as possible? Should I use indexes? If so, which columns of which tables should I index? Here is a picture of my query plan: ![enter image description here](https://i.stack.imgur.com/MZTxB.jpg) Thanks,
Indexes are essential to any database. Speaking in "layman" terms, indexes are... well, precisely that. You can think of an index as a second, hidden table that stores two things: the sorted data and a pointer to its position in the table. Some rules of thumb on creating indexes:

1. Create indexes on every field that is (or will be) used in joins.
2. Create indexes on every field on which you want to perform frequent `where` conditions.
3. Avoid creating indexes on everything. Create indexes on the relevant fields of every table, and use relations to retrieve the desired data.
4. Avoid creating indexes on `double` fields, unless it is absolutely necessary.
5. Avoid creating indexes on `varchar` fields, unless it is absolutely necessary.

I recommend you to read this: <http://dev.mysql.com/doc/refman/5.5/en/using-explain.html>
Your JOINS should be the first place to look. The two most obvious candidates for indexes are `AccountMapper.AccountingAccount` and `ChannelMapper.AccountingChannel`. You should consider indexing `Shipments.MarketPlace`,`Shipments.ShipmentChannel` and `Shipments.Component` as well. However, adding indexes increases the workload in maintaining them. While they might give you a performance boost on this query, you might find that updating the tables becomes unacceptably slow. In any case, the MySQL optimiser might decide that a full scan of the table is quicker than accessing it by index. Really the only way to do this is to set up the indexes that would appear to give you the best result and then benchmark the system to make sure you're getting the results you want here, whilst not compromising the performance elsewhere. Make good use of the [EXPLAIN](http://dev.mysql.com/doc/refman/5.5/en/using-explain.html) statement to find out what's going on, and remember that optimisations made by yourself or the optimiser on small tables may not be the same optimisations you'd need on larger ones.
How to speed up sql queries ? Indexes?
[ "", "mysql", "sql", "database", "optimization", "" ]
I'm running Mac OS X 10.8 and get strange behavior from time.clock(), which some online sources say I should prefer over time.time() for timing my code. For example:

```
import time

t0clock = time.clock()
t0time = time.time()
time.sleep(5)
t1clock = time.clock()
t1time = time.time()

print t1clock - t0clock
print t1time - t0time
```

```
0.00330099999999 <-- from time.clock(), clearly incorrect
5.00392889977 <-- from time.time(), correct
```

Why is this happening? Should I just use time.time() for reliable estimates?
From the docs on [`time.clock`](http://docs.python.org/2/library/time.html#time.clock):

> On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.

From the docs on [`time.time`](http://docs.python.org/2/library/time.html#time.time):

> Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.

`time.time()` measures wall-clock seconds, while `time.clock()` measures the amount of CPU time used by the current process. Since `time.sleep()` consumes almost no CPU time, `time.clock()` reports nearly zero across the sleep; that is exactly what you are seeing on OS X. On Windows this is different, as there `clock()` measures wall-clock seconds elapsed since the first call to it. [Here's a similar question](https://stackoverflow.com/questions/85451/python-time-clock-vs-time-time-accuracy)
Instead of using `time.time` or `time.clock`, use `timeit.default_timer`. This will return `time.clock` when `sys.platform == "win32"` and `time.time` for all other platforms. That way, your code will use the best choice of timer, independent of platform.

---

From timeit.py:

```
if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time
```
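A minimal check that `default_timer` behaves like wall-clock time on every platform (on modern Python 3 it is an alias for `time.perf_counter`):

```python
import time
import timeit

# default_timer picks the appropriate wall-clock timer for the platform
start = timeit.default_timer()
time.sleep(0.2)
elapsed = timeit.default_timer() - start

print(elapsed)  # roughly 0.2, unlike time.clock() on OS X
```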
Why are there differences in Python time.time() and time.clock() on Mac OS X?
[ "", "python", "macos", "time", "" ]
I want to implement Test First Development in a project that will be implemented only using stored procedures and functions in SQL Server. Is there a way to simplify the implementation of unit tests for the stored procedures and functions? If not, what is the best strategy for creating those unit tests?
It's certainly possible to do xUnit style SQL unit testing and TDD for database development - I've been doing it that way for the last 4 years. There are a number of popular T-SQL based test frameworks, such as tsqlunit. Red Gate also have a product in this area that I've briefly looked at. Then of course you have the option to write your tests in another language, such as C#, and use NUnit to invoke them, but that's entering the realm of integration rather than unit tests and are better for validating the interaction between your back-end and your SQL public interface. <http://sourceforge.net/apps/trac/tsqlunit/> <http://tsqlt.org/> Perhaps I can be so bold as to point you towards the manual for my own free (100% T-SQL) SQL Server unit testing framework - SS-Unit - as that provides some idea of *how* you can write unit tests, even if you don't intend on using it:- <http://www.chrisoldwood.com/sql.htm> <http://www.chrisoldwood.com/sql/ss-unit/manual/SS-Unit.html> I also gave a presentation to the ACCU a few years ago on how to unit test T-SQL code, and the slides for that are also available with some examples of how you can write unit tests either before or after. <http://www.chrisoldwood.com/articles.htm> Here is a blog post based around my database TDD talk at the ACCU conference a couple of years ago that collates a few relevant posts (all mine, sadly) around this way of developing a database API. <http://chrisoldwood.blogspot.co.uk/2012/05/my-accu-conference-session-database.html> (That seems like a fairly gratuitous amount of navel gazing. It's not meant to be, it's just that I have a number of links to bits and pieces that I think are relevant. I'll happily delete the answer if it violates the SO rules)
Unit testing in the database is actually a big topic, and there are a lot of different ways to do it. The simplest way is to write your own test like this:

```
BEGIN TRY
    <statement to test>
    THROW 50000, 'No error raised', 16;
END TRY
BEGIN CATCH
    IF ERROR_MESSAGE() NOT LIKE '%<constraint being violated>%'
        THROW 50000, '<Description of Operation> Failed', 16;
END CATCH
```

In this way you can implement different kinds of data tests: CHECK constraint tests, foreign key constraint tests, uniqueness tests and so on...
Unit tests for Stored Procedures in SQL Server
[ "", "sql", "sql-server", "unit-testing", "stored-procedures", "test-first", "" ]
I am trying to do something very similar to a question I have asked before but I can't seem to get it to work correctly. Here is my previous question: [How to get totals per day](https://stackoverflow.com/questions/16013175/how-to-get-totals-per-day). The table looks as follows:

```
Table Name: Totals

Date       |Program label |count
2013-04-09 |Salary Day    |4364
2013-04-09 |Monthly       |6231
2013-04-09 |Policy        |3523
2013-04-09 |Worst Record  |1423
2013-04-10 |Salary Day    |9872
2013-04-10 |Monthly       |6543
2013-04-10 |Policy        |5324
2013-04-10 |Worst Record  |5432
2013-04-10 |Salary Day    |1245
2013-04-10 |Monthly       |6345
2013-04-10 |Policy        |5431
2013-04-10 |Worst Record  |5232
```

My question is: using MSSQL 2008, is there a way for me to get the total counts per Program label per day for the current month? As you can see, sometimes it will run twice a day; I need to be able to account for this. The output should look as follows:

```
Date      |Salary Day |Monthly |Policy |Worst Record
2013-04-9 |23456      |63241   |23521  |23524
2013-04-10|45321      |72535   |12435  |83612
```
Try this:

```
select [Date],
       sum(case when [Program label] = 'Salary Day' then [count] else 0 end) [Salary Day],
       sum(case when [Program label] = 'Monthly' then [count] else 0 end) [Monthly],
       sum(case when [Program label] = 'Policy' then [count] else 0 end) [Policy],
       sum(case when [Program label] = 'Worst Record' then [count] else 0 end) [Worst Record]
from Totals
group by [Date];
```
Use the [`PIVOT`](http://msdn.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) table operator like this:

```
SELECT *
FROM Totals AS t
PIVOT
(
  SUM([count])
  FOR [Program label] IN ([Salary Day], [Monthly], [Policy], [Worst Record])
) AS p;
```

See it in action:

* [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/cb2cb/2)

This will give you:

```
| DATE       | SALARY DAY | MONTHLY | POLICY | WORST RECORD |
-------------------------------------------------------------
| 2013-04-09 |       4364 |    6231 |   3523 |         1423 |
| 2013-04-10 |      11117 |   12888 |  10755 |        10664 |
```
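The duplicated runs per day are what make `SUM` necessary, and you can verify both expected rows with SQLite and the conditional-aggregation form of the pivot (a sketch, since SQLite lacks the `PIVOT` operator; the current-month filter is left out for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Totals ("Date" TEXT, "Program label" TEXT, "count" INTEGER)')
rows = [('2013-04-09', 'Salary Day', 4364), ('2013-04-09', 'Monthly', 6231),
        ('2013-04-09', 'Policy', 3523), ('2013-04-09', 'Worst Record', 1423),
        ('2013-04-10', 'Salary Day', 9872), ('2013-04-10', 'Monthly', 6543),
        ('2013-04-10', 'Policy', 5324), ('2013-04-10', 'Worst Record', 5432),
        ('2013-04-10', 'Salary Day', 1245), ('2013-04-10', 'Monthly', 6345),
        ('2013-04-10', 'Policy', 5431), ('2013-04-10', 'Worst Record', 5232)]
conn.executemany('INSERT INTO Totals VALUES (?, ?, ?)', rows)

result = conn.execute('''
    SELECT "Date",
           SUM(CASE WHEN "Program label" = 'Salary Day'   THEN "count" ELSE 0 END),
           SUM(CASE WHEN "Program label" = 'Monthly'      THEN "count" ELSE 0 END),
           SUM(CASE WHEN "Program label" = 'Policy'       THEN "count" ELSE 0 END),
           SUM(CASE WHEN "Program label" = 'Worst Record' THEN "count" ELSE 0 END)
    FROM Totals
    GROUP BY "Date"
    ORDER BY "Date"
''').fetchall()

for row in result:
    print(row)
# ('2013-04-09', 4364, 6231, 3523, 1423)
# ('2013-04-10', 11117, 12888, 10755, 10664)
```

Note how the two 2013-04-10 runs are folded together (e.g. Salary Day: 9872 + 1245 = 11117).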
MSSQL Totals per day for a month
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "pivot", "" ]
What is the most elegant and concise way (without creating my own class with operator overloading) to perform tuple arithmetic in Python 2.7? Let's say I have two tuples:

```
a = (10, 10)
b = (4, 4)
```

My intended result is:

```
c = a - b = (6, 6)
```

I currently use:

```
c = (a[0] - b[0], a[1] - b[1])
```

I also tried:

```
c = tuple([(i - j) for i in a for j in b])
```

but the result was `(6, 6, 6, 6)`. I believe this is because the comprehension works as nested for loops, resulting in 4 iterations and 4 values in the result.
If you're looking for fast, you can use numpy: ``` >>> import numpy >>> numpy.subtract((10, 10), (4, 4)) array([6, 6]) ``` and if you want to keep it in a tuple: ``` >>> tuple(numpy.subtract((10, 10), (4, 4))) (6, 6) ```
One option would be:

```
>>> from operator import sub
>>> c = tuple(map(sub, a, b))
>>> c
(6, 6)
```

And [`itertools.imap`](http://docs.python.org/2/library/itertools.html#itertools.imap) can serve as a lazy replacement for `map`. Of course you can also use other functions from [`operator`](http://docs.python.org/2/library/operator.html) such as `add`, `mul`, `div`, etc. But I would seriously consider moving to another data structure, since I don't think this type of problem is a good fit for `tuple`s.
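One more option, mentioned in neither answer: a generator over `zip` needs no imports at all and stays a one-liner:

```python
a = (10, 10)
b = (4, 4)

# pair up elements positionally, subtract, and rebuild the tuple
c = tuple(x - y for x, y in zip(a, b))
print(c)  # (6, 6)
```

Unlike the nested-loop comprehension from the question, `zip` iterates the two tuples in lockstep, so you get exactly one result per position.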
Elegant way to perform tuple arithmetic
[ "", "python", "python-2.7", "numpy", "tuples", "" ]
I was trying to work on Pyladies website on my local folder. I cloned the repo, (<https://github.com/pyladies/pyladies>) ! and created the virtual environment. However when I do the pip install -r requirements, I am getting this error ``` Installing collected packages: gevent, greenlet Running setup.py install for gevent building 'gevent.core' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -I/opt/local/include -fPIC -I/usr/include/python2.7 -c gevent/core.c -o build/temp.linux-i686-2.7/gevent/core.o In file included from gevent/core.c:253:0: gevent/libevent.h:9:19: fatal error: event.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /home/akoppad/virt/pyladies/bin/python -c "import setuptools;__file__='/home/akoppad/virt/pyladies/build/gevent/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-4MSIGy-record/install-record.txt --install-headers /home/akoppad/virt/pyladies/include/site/python2.7: running install running build running build_py running build_ext building 'gevent.core' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -I/opt/local/include -fPIC -I/usr/include/python2.7 -c gevent/core.c -o build/temp.linux-i686-2.7/gevent/core.o In file included from gevent/core.c:253:0: gevent/libevent.h:9:19: fatal error: event.h: No such file or directory compilation terminated. 
error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /home/akoppad/virt/pyladies/bin/python -c "import setuptools;__file__='/home/akoppad/virt/pyladies/build/gevent/setup.py'; exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-4MSIGy-record/install-record.txt --install-headers /home/akoppad/virt/pyladies/include/site/python2.7 failed with error code 1 in /home/akoppad/virt/pyladies/build/gevent Storing complete log in /home/akoppad/.pip/pip.log. ``` I tried doing this, sudo port install libevent CFLAGS="-I /opt/local/include -L /opt/local/lib" pip install gevent It says port command not found. I am not sure how to proceed with this. Thanks!
I had the same problem and just as the other answer suggested I had to install "libevent". It's apparently not called "libevent-devel" anymore (apt-get couldn't find it) but doing: ``` $ apt-cache search libevent ``` listed a bunch of available packages. ``` $ apt-get install libevent-dev ``` worked for me.
I think you just forgot to install "libevent" in the environment. If you are on an OS X machine, please install brew from <http://mxcl.github.io/homebrew/> and use `brew install libevent` to install the dependency. If you are on an Ubuntu machine, you can use apt-get to install the corresponding library.
gevent/libevent.h:9:19: fatal error: event.h: No such file or directory
[ "", "python", "virtualenv", "" ]
I have two lists with values, for example: ``` List 1 = TK123,TK221,TK132 ``` AND ``` List 2 = TK123A,TK1124B,TK221L,TK132P ``` What I want to do is find all of the values that match between List 1 and List 2 and then output the ones that don't match. For my purposes, "TK123" and "TK123A" are considered to match. So, from the lists above, I would get only `TK1124B`. I don't especially care about speed as I plan to run this program once and be done with it.
This compares every item in the list to every item in the other list. This won't work if both have letters (e.g. TK132C and TK132P wouldn't match). If that is a problem, comment below. ``` list_1 = ['TK123','TK221','TK132'] list_2 = ['TK123A','TK1124B','TK221L','TK132P'] ans = [] for itm1 in list_1: for itm2 in list_2: if itm1 in itm2: break if itm2 in itm1: break else: ans.append(itm1) for itm2 in list_2: for itm1 in list_1: if itm1 in itm2: break if itm2 in itm1: break else: ans.append(itm2) print ans >>> ['TK1124B'] ```
``` >>> list1 = 'TK123','TK221','TK132' >>> list2 = 'TK123A','TK1124B','TK221L','TK132P' >>> def remove_trailing_letter(s): ... return s[:-1] if s[-1].isalpha() else s ... >>> diff = set(map(remove_trailing_letter, list2)).difference(list1) >>> diff set(['TK1124']) ``` And you can add the last letter back in, ``` >>> add_last_letter_back = {remove_trailing_letter(ele):ele for ele in list2} >>> diff = [add_last_letter_back[ele] for ele in diff] >>> diff ['TK1124B'] ```
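If the matching rule is "one code is a prefix of the other", the whole comparison can also be wrapped in a small reusable function (a sketch; the one-directional `startswith` check assumes the longer, suffixed codes live in the second list, as in the question):

```python
def unmatched(list_1, list_2):
    """Return items from either list with no prefix-match in the other."""
    only_in_1 = [x for x in list_1
                 if not any(y.startswith(x) for y in list_2)]
    only_in_2 = [y for y in list_2
                 if not any(y.startswith(x) for x in list_1)]
    return only_in_1 + only_in_2

result = unmatched(['TK123', 'TK221', 'TK132'],
                   ['TK123A', 'TK1124B', 'TK221L', 'TK132P'])
print(result)  # ['TK1124B']
```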
Comparing two lists in Python (almost the same)
[ "", "python", "sortedlist", "" ]
I currently have a table called People. Within this table there are thousands of rows of data which follow the below layout: ``` gkey | Name | Date | Person_Id 1 | Fred | 12/05/2012 | ABC123456 2 | John | 12/05/2012 | DEF123456 3 | Dave | 12/05/2012 | GHI123456 4 | Fred | 12/05/2012 | JKL123456 5 | Leno | 12/05/2012 | ABC123456 ``` If I execute the following: ``` SELECT [PERSON_ID], COUNT(*) TotalCount FROM [Database].[dbo].[People] GROUP BY [PERSON_ID] HAVING COUNT(*) > 1 ORDER BY COUNT(*) DESC ``` I get a return of: ``` Person_Id | TotalCount ABC123456 | 2 ``` Now I would like to remove just one row of the duplicate values so when I execute the above query I return no results. Is this possible?
``` WITH a as ( SELECT row_number() over (partition by [PERSON_ID] order by name) rn FROM [Database].[dbo].[People] ) DELETE FROM a WHERE rn = 2 ```
Try this ``` DELETE FROM [People] WHERE gkey IN ( SELECT MIN(gkey) FROM [People] GROUP BY [PERSON_ID] HAVING COUNT(*) > 1 ) ``` You can use either `MIN` or `MAX`.
Remove 1 instance of duplicate values T-SQL
[ "", "sql", "t-sql", "sql-server-2008-r2", "" ]
I'm having a hard time explaining this through writing, so please be patient. I'm making this project in which I have to choose a month and a year to know all the active employees during that month of the year, but **in my database I'm storing the dates when they started and when they finished in dd/mm/yyyy format.** So if I have an employee who worked for 4 months, e.g. from 01/01/2013 to 01/05/2013, I'll have him in four months. I'd need to make him appear in 4 tables (one for every active month) with the other employees that are active during those months. In this case those will be: January, February, March and April of 2013. The problem is I have no idea how to make a query here or php processing to achieve this. All I can think of is something like (I'd run this query for every month, passing the year and month as arguments) ``` pg_query= "SELECT employee_name FROM employees WHERE month_and_year between start_date AND finish_date" ``` But that can't be done, mainly because `month_and_year` must be a column, not a variable. Ideas anyone? **UPDATE** Yes, I'm very sorry that I forgot to say I was using DATE as the data type. **The easiest solution I found was to use EXTRACT** ``` select * from employees where extract (year FROM start_date)>='2013' AND extract (month FROM start_date)='06' AND extract (month FROM finish_date)<='07' ``` This gives me all records from June of 2013; you can of course substitute the literal values with any variable of your preference.
There is no need to create a range to make an overlap: ``` select to_char(d, 'YYYY-MM') as "Month", e.name from ( select generate_series( '2013-01-01'::date, '2013-05-01', '1 month' )::date ) s(d) inner join employee e on date_trunc('month', e.start_date)::date <= s.d and coalesce(e.finish_date, 'infinity') > s.d order by 1, 2 ``` [SQL Fiddle](http://www.sqlfiddle.com/#!12/96ebb/1) If you want the months with no active employees to show then change the `inner` for a `left join` --- Erwin, about your comment: *the second expression would have to be `coalesce(e.finish_date, 'infinity') >= s.d`* Notice the requirement: *So if I have an employee who worked for 4 months eg. from 01/01/2013 to 01/05/2013 I'll have him in four months* From that I understand that the last active day is indeed the previous day from finish. If I use your "fix" I will include employee `f` in month `05` from my example. He finished in `2013-05-01`: ``` ('f', '2013-04-17', '2013-05-01'), ``` [SQL Fiddle with your fix](http://www.sqlfiddle.com/#!12/96ebb/2)
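The month-by-month overlap logic can also be prototyped in plain Python before committing to SQL (the employee data below is made up; the half-open comparison mirrors the "last active day is the day before finish" reading of the question):

```python
from datetime import date

def month_starts(first, last):
    """Yield the first day of each month from first to last, inclusive."""
    y, m = first.year, first.month
    while (y, m) <= (last.year, last.month):
        yield date(y, m, 1)
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

# (name, start_date, finish_date) -- illustrative data only
employees = [("fred", date(2013, 1, 1), date(2013, 5, 1))]

active = {}
for d in month_starts(date(2013, 1, 1), date(2013, 5, 1)):
    nxt = date(d.year + 1, 1, 1) if d.month == 12 else date(d.year, d.month + 1, 1)
    active[d.strftime("%Y-%m")] = [
        name for name, start, finish in employees
        if start < nxt and finish > d  # half-open overlap per month
    ]
print(active)
```

With this data, fred shows up in January through April 2013 but not in May, matching the four-month requirement in the question.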
First, you can generate multiple date intervals easily with [**`generate_series()`**](http://www.postgresql.org/docs/current/interactive/functions-srf.html). To get lower and upper bound add an interval of 1 month to the start: ``` SELECT g::date AS d_lower , (g + interval '1 month')::date AS d_upper FROM generate_series('2013-01-01'::date, '2013-04-01', '1 month') g; ``` Produces: ``` d_lower | d_upper ------------+------------ 2013-01-01 | 2013-02-01 2013-02-01 | 2013-03-01 2013-03-01 | 2013-04-01 2013-04-01 | 2013-05-01 ``` The upper border of the time range is the first of the next month. This is *on purpose*, since we are going to use the standard SQL [**`OVERLAPS`** operator](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TABLE) further down. Quoting the manual at said location: > Each time period is considered to represent the half-open interval > start <= time < end [...] Next, you use a [`LEFT [OUTER] JOIN`](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-FROM) to connect employees to these date ranges: ``` SELECT to_char(m.d_lower, 'YYYY-MM') AS month_and_year, e.* FROM ( SELECT g::date AS d_lower , (g + interval '1 month')::date AS d_upper FROM generate_series('2013-01-01'::date, '2013-04-01', '1 month') g ) m LEFT JOIN employees e ON (m.d_lower, m.d_upper) OVERLAPS (e.start_date, COALESCE(e.finish_date, 'infinity')) ORDER BY 1; ``` * The `LEFT JOIN` includes date ranges even if no matching employees are found. * Use `COALESCE(e.finish_date, 'infinity'))` for employees without a `finish_date`. They are considered to be still employed. Or maybe use `current_date` in place of `infinity`. * Use `to_char()` to get a nicely formatted `month_and_year` value. * You can easily select any columns you need from `employees`. In my example I take all columns with e.\*. * The `1` in `ORDER BY 1` is a positional parameter to simplify the code. Orders by the first column `month_and_year`. 
* To make this fast, create a [multi-column index](http://www.postgresql.org/docs/current/interactive/indexes-multicolumn.html) on these expressions. Like ``` CREATE INDEX employees_start_finish_idx ON employees (start_date, COALESCE(finish_date, 'infinity') DESC); ``` Note the descending order on the second index column. * If you should have committed the folly of storing temporal data as string types ([`text` or `varchar`](http://www.postgresql.org/docs/current/interactive/datatype-character.html)) with the pattern `'DD/MM/YYYY'` instead of [`date` or `timestamp` or `timestamptz`](http://www.postgresql.org/docs/current/interactive/datatype-datetime.html), convert the string to date with [`to_date()`](http://www.postgresql.org/docs/current/interactive/functions-formatting.html). Example: ``` SELECT to_date('01/03/2013'::text, 'DD/MM/YYYY') ``` Change the last line of the query to: ``` ... OVERLAPS (to_date(e.start_date, 'DD/MM/YYYY') ,COALESCE(to_date(e.finish_date, 'DD/MM/YYYY'), 'infinity')) ``` You can even have a functional index like that. But *really*, you should use a `date` or `timestamp` column.
Choose active employes per month with dates formatted dd/mm/yyyy
[ "", "sql", "postgresql", "overlap", "date-range", "generate-series", "" ]
![order_status_data2 table](https://i.stack.imgur.com/ZVIPY.jpg) I am trying to retrieve unique values from the table above (order\_status\_data2). I would like to get the most recent order with the following fields: **id, order\_id and status\_id**. A high id field value signifies the most recent item, i.e. 4 - 56 - 4 8 - 52 - 6 7 - 6 - 2 9 - 8 - 2 etc. I have tried the following query but am not getting the desired result, especially the status\_id field: ``` select max(id) as id, order_id, status_id from order_status_data2 group by order_id ``` This is the result I am getting: ![query results](https://i.stack.imgur.com/GFiJl.jpg) How would I formulate the query to get the desired results?
Like so: ``` select d.* from order_status_data2 d join (select max(id) mxid from order_status_data2 group by order_id) s on d.id = s.mxid ```
``` SELECT o.id, o.order_id, o.status_id FROM order_status_data2 o JOIN (SELECT order_id, MAX(id) maxid FROM order_status_data2 GROUP BY order_id) m ON o.order_id = m.order_id AND o.id = m.maxid ``` [SQL Fiddle](http://sqlfiddle.com/#!3/f5ccf/1) In your query, you didn't put any constraints on `status_id`, so it picked it from an arbitrary row in the group. Selecting `max(id)` doesn't make it choose `status_id` from the row that happens to have that value, you need a join to select a specific row for all the non-aggregated columns.
select unique data from a table with similar id data field
[ "", "mysql", "sql", "" ]
I am trying to drop a table but getting the following message: > Msg 3726, Level 16, State 1, Line 3 > Could not drop object 'dbo.UserProfile' because it is referenced by a FOREIGN KEY constraint. > Msg 2714, Level 16, State 6, Line 2 > There is already an object named 'UserProfile' in the database. I looked around with SQL Server Management Studio but I am unable to find the constraint. How can I find out the foreign key constraints?
Here it is: ``` SELECT OBJECT_NAME(f.parent_object_id) TableName, COL_NAME(fc.parent_object_id,fc.parent_column_id) ColName FROM sys.foreign_keys AS f INNER JOIN sys.foreign_key_columns AS fc ON f.OBJECT_ID = fc.constraint_object_id INNER JOIN sys.tables t ON t.OBJECT_ID = fc.referenced_object_id WHERE OBJECT_NAME (f.referenced_object_id) = 'YourTableName' ``` This way, you'll get the referencing table and column name. Edited to use sys.tables instead of generic sys.objects as per comment suggestion. Thanks, marc\_s
Another way is to check the results of ``` sp_help 'TableName' ``` (or just highlight the quoted TableName and press ALT+F1) With time passing, I just decided to refine my answer. Below is a screenshot of the results that `sp_help` provides. I have used the AdventureWorksDW2012 DB for this example. There is a lot of good information there, and what we are looking for is at the very end - highlighted in green: [![enter image description here](https://i.stack.imgur.com/HxteO.png)](https://i.stack.imgur.com/HxteO.png)
How can I find out what FOREIGN KEY constraint references a table in SQL Server?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
So first of all, I need to extract the numbers from a range of 455,111,451 to 455,112,000. I could do this manually, since there are only 50 numbers I need, but that's not the point. I tried to: ``` for a in range(49999951,50000000): print +str(a) ``` What should I do?
Use [`sum`](http://docs.python.org/2/library/functions.html#sum) ``` >>> sum(range(49999951,50000000)) 2449998775L ``` --- It is a [builtin function](http://docs.python.org/2/library/functions.html), which means you don't need to import anything or do anything special to use it. You should always consult the documentation, or the tutorials before you come asking here, in case it already exists - also, StackOverflow has a search function that could have helped you find an answer to your problem as well. --- The [`sum`](http://docs.python.org/2/library/functions.html#sum) function in this case takes a list of integers, and incrementally adds them to each other, in a similar fashion as below: ``` >>> total = 0 >>> for i in range(49999951,50000000): total += i >>> total 2449998775L ``` Also - similar to [`reduce`](http://docs.python.org/2/library/functions.html#reduce): ``` >>> reduce(lambda x,y: x+y, range(49999951,50000000)) 2449998775L ```
`sum` is the obvious way, but if you had a massive range, computing the sum by incrementing each number could take a while, so you can do this mathematically instead (as shown in `sum_range`). Note that `range(start, end)` excludes `end`, so the formula must sum `start` through `end - 1`: ``` start = 49999951 end = 50000000 total = sum(range(start, end)) def sum_range(start, end): return (end - 1) * end / 2 - (start - 1) * start / 2 print total print sum_range(start, end) ``` Outputs: ``` 2449998775 2449998775 ```
How to sum a list of numbers in python
[ "", "python", "numbers", "sum", "" ]
I just finished some online tutorials and I was experimenting with the [Discogs Api](https://github.com/discogs/discogs_client) My code is as follows: ``` import discogs_client as discogs discogs.user_agent = '--' #i removed it artist_input = raw_input("Enter artist\'s name : ") artist = discogs.Artist(artist_input) if artist._response.status_code in (400, 401, 402, 403, 404): print "Sorry, this artist %s does not exist in Discogs database" % (artist_input) elif artist._response.status_code in (500, 501, 502, 503) : print "Server is a bit sleepy, give it some time, won\'t you ._." else: print "Artist : " + artist.name print "Description : " + str(artist.data["profile"]) print "Members :", members = artist.data["members"] for index, name in enumerate(members): if index == (len(members) - 1): print name + "." else: print name + ",", ``` The list format I want to work with is like this: ``` [<MasterRelease "264288">, <MasterRelease "10978">, <Release "4665127">... ``` **I want to isolate those with MasterRelease, so I can get their ids** I tried something like ``` for i in artist.releases: if i[1:7] in "Master": print i ``` or ``` for i in thereleases: if i[1:7] == "Master": print i ``` I'm certainly missing something, but it baffles me since I can do this ``` newlist = ["<abc> 43242"] print newlist[0][1] ``` and in this scenario ``` thereleases = artist.releases print thereleases[0][1] ``` I get ``` TypeError: 'MasterRelease' object does not support indexing ``` Feel free to point out anything about the code, since I have limited Python knowledge so far.
You are looking at the Python representation of objects, not strings. Each element in the list is an instance of the `MasterRelease` class. These objects have an `id` attribute, simply refer to the attribute directly: ``` for masterrelease in thereleases: print masterrelease.id ``` Other attributes are `.versions` for other releases, `.title` for the release title, etc.
The problem is that you are working with objects and assuming them to be strings. `artist.releases` is a list containing a number of objects. ``` newlist = ["<abc> 43242"] print newlist[0][1] ``` This works because newlist is a list which contains a single element, a string. so `newlist[0]` gives you a string `'<abc> 43242'`. Applying `newlist[0][1]` gets you the second element of that string, that is `'a'`. But this works only for objects that support *indexing*. And the objects that you are working with don't. What you are seeing in the list is a **string** representation of the object. I cannot really help you further than this because I have no idea about the `MasterRelease` and the `Release` instances - but there should be some attribute in them that gets you the required value.
Parsing from a list in python
[ "", "python", "list", "api", "indexing", "discogs-api", "" ]
This is a language design question. Why the designer didn't use ``` import A.B ``` instead of ``` from A import B ``` assuming A is a module that contains function B. Isn't it better to have a single style for import syntax? What was the design principle behind this? I think that the Java style import syntax feels more natural.
Python import statements primarily exist to load modules and packages. You have to import a module before you can use it. The second form of import is merely an additional feature, loading the module and then copying some parts of it into the local namespace. Java import statements exist to make shortcuts to names loaded in other modules. Java import statements don't load anything, but merely move things into the local namespace. In Java you don't need to import modules in order to use them. The import statement has nothing to do with whether or not a module is loaded. So the two languages take quite a different approach to importing. The import statements are basically just not doing the same thing. Python's imports are for loading and Java imports are for shortcuts. Java's approach would be somewhat problematic in Python. In Java it's pretty easy to sort out what's a class/module/package from the syntax. Python does not have that advantage. As a result, the compiler and the reader would have difficulty determining what is and isn't meant to be a reference to an external package. For that reason, Python's designer chose to make it explicit and force you to specify which external module you want to load.
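The "import = load, from-import = load plus a local binding" distinction can be demonstrated with the standard library's `importlib`:

```python
import importlib
import sys

# Importing loads the module object and registers it in sys.modules
math_mod = importlib.import_module("math")
assert "math" in sys.modules

# "from math import sqrt" is essentially that load plus one local binding:
sqrt = math_mod.sqrt
print(sqrt(9.0))  # 3.0
```

Either way the whole module is loaded; `from ... import` just saves you the attribute lookup on each use.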
Consistency. `import A.B` *never* adds `B` to the local namespace, even for cases in which it is valid; it simply makes `B` available via `A`, which functions already naturally are.
Why not import A.B in Python?
[ "", "python", "" ]
I have been asked this question in an interview and it seems like a common question, but I am unable to come up with a solution, so I need your help. Can you please tell me a method to retrieve the top 1 row from a table using plain SQL, without rownum, rowid, top or limit? Thanks
A sample of a query without top. ``` select * from Contests Where ContestId = (SELECT MAX(ContestId) FROM Contests) ```
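The MAX-subquery pattern is plain SQL and runs unchanged on most engines; here is a quick check against an in-memory SQLite database (table contents invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Contests (ContestId INTEGER, Title TEXT);
    INSERT INTO Contests VALUES (1, 'Spring'), (2, 'Summer'), (3, 'Autumn');
""")

# Select the single row whose id equals the table's maximum id
top = con.execute("""
    SELECT * FROM Contests
    WHERE ContestId = (SELECT MAX(ContestId) FROM Contests)
""").fetchall()
print(top)  # [(3, 'Autumn')]
```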
You can do this with ANSI standard SQL, using the `row_number()` function and a subquery: ``` select t.* from (select t.*, row_number() over (order by <however you define the "top" row>) as seqnum from t ) t where seqnum = 1; ``` This assumes that you have some method for ordering that specifies what the "top" row is. Without an `order by` clause, any row would do. And I used `*` in the outer query. Strictly speaking, you would only want to include the columns in the table, excluding `seqnum`.
Query to retrieve top row without using TOP except plain sql
[ "", "sql", "" ]
I would like to use something like this: ``` class Board(): ... def __getitem__(self, y, x): return self.board[y][x] ``` but unfortunately, when I call: ``` board[x][y] ``` I get: `TypeError: __getitem__() takes exactly 3 arguments (2 given)`
When you do `board[x][y]` you will cause *two* calls to `__getitem__` because you are doing two separate accesses: `[x]` is one and `[y]` is another. There's no way to handle this directly in `__getitem__`; you'd have to have `board[x]` return some kind of sub-object that you could use `[y]` on to get the individual item. What you probably want is to have `__getitem__` accept a tuple: ``` def __getitem__(self, tup): y, x = tup return self.board[y][x] ``` Then do: ``` board[x, y] ``` (Note that you have the order of x and y switched between `__getitem__` and `board[x][y]` --- is that intentional?)
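Putting the tuple-key approach together as a runnable sketch (the `__setitem__` is added so the example is self-contained):

```python
class Board:
    def __init__(self, rows, cols):
        # simple rows x cols grid of zeros
        self.board = [[0] * cols for _ in range(rows)]

    def __getitem__(self, tup):
        y, x = tup          # board[y, x] passes the tuple (y, x)
        return self.board[y][x]

    def __setitem__(self, tup, value):
        y, x = tup
        self.board[y][x] = value

board = Board(3, 3)
board[1, 2] = 7
print(board[1, 2])  # 7
```

`board[1, 2]` is just sugar for `board[(1, 2)]`, so `__getitem__` receives a single tuple argument and unpacks it.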
You might want to consider using this syntax: ``` board[(x, y)] ``` It's less pretty, but it allows you to have multidimensional arrays simply. Any number of dimensions in fact: ``` board[(1,6,34,2,6)] ``` By making board a defaultdict you can even have sparse boards: ``` >>> from collections import defaultdict >>> board = defaultdict(lambda: 0) >>> board[(1,6,8)] = 7 >>> board[(1,6,8)] 7 >>> board[(5,6,3)] 0 ``` If you want something more advanced than that you probably want [NumPy](http://www.numpy.org/).
Python - is there a way to implement __getitem__ for multidimension array?
[ "", "python", "arrays", "python-2.7", "numpy", "multidimensional-array", "" ]
I have the following: ``` IF OBJECT_ID(N'[dbo].[webpages_Roles_UserProfiles_Target]', 'xxxxx') IS NOT NULL DROP CONSTRAINT [dbo].[webpages_Roles_UserProfiles_Target] ``` I want to be able to check if there is a constraint existing before I drop it. I use the code above with a type of 'U' for tables. How could I modify the code above (change the xxxx) to make it check for the existence of the constraint ?
``` SELECT * FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS ``` or else try this ``` SELECT OBJECT_NAME(OBJECT_ID) AS NameofConstraint, SCHEMA_NAME(schema_id) AS SchemaName, OBJECT_NAME(parent_object_id) AS TableName, type_desc AS ConstraintType FROM sys.objects WHERE type_desc LIKE '%CONSTRAINT' ``` or ``` IF EXISTS(SELECT 1 FROM sys.foreign_keys WHERE parent_object_id = OBJECT_ID(N'dbo.TableName')) BEGIN ALTER TABLE TableName DROP CONSTRAINT CONSTRAINTNAME END ```
Try something like this ``` IF OBJECTPROPERTY(OBJECT_ID('constraint_name'), 'IsConstraint') = 1 ALTER TABLE table_name DROP CONSTRAINT constraint_name ```
How can I check if a SQL Server constraint exists?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Obviously new to Pandas. How can I simply count the number of records in a dataframe? I would have thought something as simple as this would do it, and I can't seem to even find the answer in searches... probably because it is too simple. ``` cnt = df.count print cnt ``` the above code actually just prints the whole df
Regarding your question about counting one field: I hope this helps... Say I have the following DataFrame ``` import numpy as np import pandas as pd df = pd.DataFrame(np.random.normal(0, 1, (5, 2)), columns=["A", "B"]) ``` You could count a single column by ``` df.A.count() #or df['A'].count() ``` both evaluate to 5. The cool thing (or one of many w.r.t. `pandas`) is that if you have `NA` values, count takes that into consideration. So if I did ``` df['A'][1::2] = np.NAN df.count() ``` the result would be ``` A 3 B 5 ```
To get the number of rows in a dataframe use: ``` df.shape[0] ``` (and `df.shape[1]` to get the number of columns). As an alternative you can use ``` len(df) ``` or ``` len(df.index) ``` (and `len(df.columns)` for the columns) `shape` is more versatile and more convenient than `len()`, especially for interactive work (just needs to be added at the end), but `len` is a bit faster (see also [this answer](https://stackoverflow.com/a/15943975/2314737)). **To avoid**: [`count()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) because it returns *the number of non-NA/null observations over requested axis* **`len(df.index)` is faster** ``` import pandas as pd import numpy as np df = pd.DataFrame(np.arange(24).reshape(8, 3),columns=['A', 'B', 'C']) df['A'][5]=np.nan df # Out: # A B C # 0 0 1 2 # 1 3 4 5 # 2 6 7 8 # 3 9 10 11 # 4 12 13 14 # 5 NaN 16 17 # 6 18 19 20 # 7 21 22 23 %timeit df.shape[0] # 100000 loops, best of 3: 4.22 µs per loop %timeit len(df) # 100000 loops, best of 3: 2.26 µs per loop %timeit len(df.index) # 1000000 loops, best of 3: 1.46 µs per loop ``` **`df.__len__` is just a call to `len(df.index)`** ``` import inspect print(inspect.getsource(pd.DataFrame.__len__)) # Out: # def __len__(self): # """Returns length of info axis, but here we use the index """ # return len(self.index) ``` **Why you should not use `count()`** ``` df.count() # Out: # A 7 # B 8 # C 8 ```
pandas python how to count the number of records or rows in a dataframe
[ "", "python", "pandas", "dataframe", "count", "" ]
I have a complex INNER JOIN SQL request I can't wrap my head around. I was hoping someone could help me. It involves 3 tables, so it has 2 inner joins. My database has the tables "users", "statement", "opinion". Users can author statements and opinions. Opinions have an "authorid" variable referencing the id of the user who they represent, and they have a "statementid" variable referencing the statement they refer to. I am trying to submit a request where, given 2 statements, I can return the list of users who have authored opinions about both statements. I'm thinking something like ``` $sid1=5; $sid2=6; $sql = " SELECT user.* FROM users INNER JOIN opinion ON opinion.authorid=user.uid WHERE opinion.statementid= [sid1? how can i use both] INNER JOIN statement ON statment.uid=opinion.statementid "; ``` But as you can see I am stuck. Do I need a UNION? Please let me know if you need further clarification. Thanks in advance. EDIT: I figured out how to do it: ``` SELECT DISTINCT users.uid FROM users JOIN opinion o, opinion o2 WHERE users.uid = o.authorid AND users.uid = o2.authorid AND o2.statementid = $sid2 AND o.statementid = $sid1 ```
``` SELECT DISTINCT users.uid FROM users JOIN opinion o, opinion o2 WHERE users.uid = o.authorid AND users.uid = o2.authorid AND o2.statementid = $sid2 AND o.statementid = $sid1 ```
``` SELECT DISTINCT u.* FROM opinion o JOIN users u ON u.uid = o.authorid AND o.statementid = $sid1 INTERSECT SELECT DISTINCT u.* FROM opinion o JOIN users u ON u.uid = o.authorid AND o.statementid = $sid2 ``` Don't have INTERSECT available in your SQL dialect? ``` SELECT A.* FROM ( SELECT DISTINCT u.* FROM opinion o JOIN users u ON u.uid = o.authorid AND o.statementid = $sid1 ) A JOIN ( SELECT DISTINCT u.* FROM opinion o JOIN users u ON u.uid = o.authorid AND o.statementid = $sid2 ) B on A.uid = B.uid ```
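The "authored opinions about both statements" requirement can be exercised on an in-memory SQLite database (schema and sample rows are invented to match the question's description; the `$sid` placeholders become bound parameters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (uid INTEGER, name TEXT);
    CREATE TABLE opinion (authorid INTEGER, statementid INTEGER);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob'), (3, 'cec');
    INSERT INTO opinion VALUES (1, 5), (1, 6), (2, 5), (3, 6);
""")

sid1, sid2 = 5, 6
# Join the opinion table twice: once per statement id
rows = con.execute("""
    SELECT DISTINCT u.uid, u.name
    FROM users u
    JOIN opinion o1 ON o1.authorid = u.uid AND o1.statementid = ?
    JOIN opinion o2 ON o2.authorid = u.uid AND o2.statementid = ?
""", (sid1, sid2)).fetchall()
print(rows)  # [(1, 'ann')] -- only ann wrote about both statements
```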
Complex INNER JOIN SQL request
[ "", "sql", "join", "" ]
I always get confused with date formats in Oracle SQL queries and spend a long time googling. Can someone explain to me the simplest way to handle different date formats in a database table? For instance, I have a date column ES\_DATE that holds data such as 27-APR-12 11.52.48.294030000 AM, of data type TIMESTAMP(6) WITH LOCAL TIME ZONE. ![enter image description here](https://i.stack.imgur.com/lND9V.jpg) I wrote a simple select query to fetch data for that particular day and it returns nothing. Can someone explain why? ``` select * from table where es_date=TO_DATE('27-APR-12','dd-MON-yy') ``` or ``` select * from table where es_date = '27-APR-12'; ```
`to_date()` returns a date at 00:00:00, so you need to "remove" the minutes from the date you are comparing to: ``` select * from table where trunc(es_date) = TO_DATE('27-APR-12','dd-MON-yy') ``` You probably want to create an index on `trunc(es_date)` if that is something you are doing on a regular basis. The literal `'27-APR-12'` can fail very easily if the default date format is changed to anything different. So make sure you always use `to_date()` with a proper format mask (or an ANSI literal: `date '2012-04-27'`) Although you did right in using `to_date()` and not relying on implicit data type conversion, your usage of to\_date() still has a subtle pitfall because of the format `'dd-MON-yy'`. With a different language setting this might easily fail e.g. `TO_DATE('27-MAY-12','dd-MON-yy')` when NLS\_LANG is set to German. Avoid anything in the format that might be different in a different language. Use a four-digit year and only numbers, e.g. `'dd-mm-yyyy'` or `'yyyy-mm-dd'`.
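The underlying pitfall is easy to reproduce in plain Python: a timestamp with a time-of-day component never equals plain midnight, but comparing truncated dates works (illustrative values only):

```python
from datetime import datetime

es_date = datetime(2012, 4, 27, 11, 52, 48)  # stored timestamp with time part
target = datetime(2012, 4, 27)               # what TO_DATE('27-APR-12') yields: midnight

print(es_date == target)                # False -- the time part differs
print(es_date.date() == target.date())  # True  -- compare after truncating to the day
```

This is exactly why the query needs `trunc(es_date)` on the column side rather than a plain equality.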
If you are using the same date format and want a SELECT query matching on a date in Oracle: ``` select count(id) from Table_name where TO_DATE(Column_date)='07-OCT-2015'; ``` TO\_DATE is provided by Oracle.
Oracle SQL query for Date format
[ "", "sql", "database", "oracle", "oracle11g", "" ]
I am running a query using a regular expression function on a field where a row may contain one or more matches but I cannot get Access to return any matches except either the first one of the collection or the last one (appears random to me). Sample Data: ``` tbl_1 (queried table) row_1 abc1234567890 some text row_2 abc1234567890 abc3459998887 some text row_3 abc9991234567 abc8883456789 abc7778888664 some text tbl_2 (currently returned results) row_1 abc1234567890 row_2 abc1234567890 row_3 abc7778888664 tbl_2 (ideal returned results) row_1 abc1234567890 row_2 abc1234567890 row_3 abc3459998887 row_4 abc9991234567 row_5 abc8883456789 row_6 abc7778888664 ``` Here is my Access VBA code: ``` Public Function OrderMatch(field As String) Dim regx As New RegExp Dim foundMatches As MatchCollection Dim foundMatch As match regx.IgnoreCase = True regx.Global = True regx.Multiline = True regx.Pattern = "\b[A-Za-z]{2,3}\d{10,12}\b" Set foundMatches = regx.Execute(field) If regx.Test(field) Then For Each foundMatch In foundMatches OrderMatch = foundMatch.Value Next End If End Function ``` My SQL code: ``` SELECT OrderMatch([tbl_1]![Field1]) AS Order INTO tbl_2 FROM tbl_1 WHERE OrderMatch([tbl_1]![Field1])<>False; ``` I'm not sure if I have my regex pattern wrong, my VBA code wrong, or my SQL code wrong.
Seems you intend to split out multiple text matches from a field in `tbl_1` and store each of those matches as a separate row in `tbl_2`. Doing that with an Access query is not easy. Consider a VBA procedure instead. Using your sample data in Access 2007, this procedure stores what you asked for in `tbl_2` (in a text field named `Order`). ``` Public Sub ParseAndStoreOrders() Dim rsSrc As DAO.Recordset Dim rsDst As DAO.Recordset Dim db As DAO.database Dim regx As Object ' RegExp Dim foundMatches As Object ' MatchCollection Dim foundMatch As Object ' Match Set regx = CreateObject("VBScript.RegExp") regx.IgnoreCase = True regx.Global = True regx.Multiline = True regx.pattern = "\b[a-z]{2,3}\d{10,12}\b" Set db = CurrentDb Set rsSrc = db.OpenRecordset("tbl_1", dbOpenSnapshot) Set rsDst = db.OpenRecordset("tbl_2", dbOpenTable, dbAppendOnly) With rsSrc Do While Not .EOF If regx.Test(!field1) Then Set foundMatches = regx.Execute(!field1) For Each foundMatch In foundMatches rsDst.AddNew rsDst!Order = foundMatch.value rsDst.Update Next End If .MoveNext Loop .Close End With Set rsSrc = Nothing rsDst.Close Set rsDst = Nothing Set db = Nothing Set foundMatch = Nothing Set foundMatches = Nothing Set regx = Nothing End Sub ``` Paste the code into a standard code module. Then position the cursor within the body of the procedure and press `F5` to run it.
This function is only returning one value because that's the way you have set it up with the logic. This will always return the *last* matching value. ``` For Each foundMatch In foundMatches OrderMatch = foundMatch.Value Next ``` Even though your function implicitly returns a `Variant` data type, it's not returning an array because you're not assigning values to an array. Assuming there are 2+ matches, the assignment statement `OrderMatch = foundMatch.Value` inside the loop will overwrite the first match with the second, the second with the third, etc. Assuming you want to return an array of matching values: ``` Dim matchVals() As Variant Dim m As Long ReDim matchVals(foundMatches.Count - 1) For Each foundMatch In foundMatches matchVals(m) = foundMatch.Value m = m + 1 Next OrderMatch = matchVals ```
Access 2010 Only returning first result Regular Expression result from MatchCollection
[ "", "sql", "regex", "vba", "ms-access", "" ]
In python, if I have a tuple of tuples, like so: ``` ((1, 'foo'), (2, 'bar'), (3, 'baz')) ``` what is the most efficient/clean/pythonic way to return the 0th element of a tuple containing a particular 1st element. I'm assuming it can be done as a simple one-liner. In other words, how do I return 2 using 'bar'? --- This is the clunky equivalent of what I'm looking for, in case it wasn't clear: ``` for tup in ((1, 'foo'), (2, 'bar'), (3, 'baz')): if tup[1] == 'bar': tup[0] ```
Use a list comprehension if you want all such values: ``` >>> lis = ((1, 'foo'), (2, 'bar'), (3, 'baz')) >>> [x[0] for x in lis if x[1]=='bar' ] [2] ``` If you want only one value: ``` >>> next((x[0] for x in lis if x[1]=='bar'), None) 2 ``` If you're doing this multiple times then convert that list of tuples into a dict: ``` >>> d = {v:k for k,v in ((1, 'foo'), (2, 'bar'), (3, 'baz'))} >>> d['bar'] 2 >>> d['foo'] 1 ```
``` >>> lis = ((1, 'foo'), (2, 'bar'), (3, 'baz')) >>> dict(map(reversed, lis))['bar'] 2 ```
Cleanest way to return a tuple containing a particular element?
[ "", "python", "tuples", "" ]
I have the following database table on a Postgres server: ``` id date Product Sales 1245 01/04/2013 Toys 1000 1245 01/04/2013 Toys 2000 1231 01/02/2013 Bicycle 50000 456461 01/01/2014 Bananas 4546 ``` I would like to create a query that gives the `SUM` of the `Sales` column and groups the results by month and year as follows: ``` Apr 2013 3000 Toys Feb 2013 50000 Bicycle Jan 2014 4546 Bananas ``` Is there a simple way to do that?
``` select to_char(date,'Mon') as mon, extract(year from date) as yyyy, sum("Sales") as "Sales" from yourtable group by 1,2 ``` At the request of Radu, I will explain that query: `to_char(date,'Mon') as mon,` : converts the "date" attribute into the defined format of the short form of month. `extract(year from date) as yyyy` : Postgresql's "extract" function is used to extract the YYYY year from the "date" attribute. `sum("Sales") as "Sales"` : The SUM() function adds up all the "Sales" values, and supplies a case-sensitive alias, with the case sensitivity maintained by using double-quotes. `group by 1,2` : The GROUP BY function must contain all columns from the SELECT list that are not part of the aggregate (aka, all columns not inside SUM/AVG/MIN/MAX etc functions). This tells the query that the SUM() should be applied for each unique combination of columns, which in this case are the month and year columns. The "1,2" part is a shorthand instead of using the column aliases, though it is probably best to use the full "to\_char(...)" and "extract(...)" expressions for readability.
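If you want to see the grouping behavior without a Postgres server at hand, here is a rough sqlite3 sketch of the same idea. Note this is an analogue, not the answer's query: sqlite has no `to_char()`/`extract()`, so `strftime('%Y-%m', ...)` stands in for collapsing each date into its year-month bucket, and the table/column names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (dt TEXT, product TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("2013-04-01", "Toys", 1000),
     ("2013-04-01", "Toys", 2000),
     ("2013-02-01", "Bicycle", 50000),
     ("2014-01-01", "Bananas", 4546)],
)

# strftime('%Y-%m', dt) plays the role of to_char/extract: every date in the
# same month collapses to the same key, so SUM() aggregates per month
rows = conn.execute(
    "SELECT strftime('%Y-%m', dt) AS ym, product, SUM(amount) "
    "FROM sales GROUP BY ym, product ORDER BY ym"
).fetchall()

print(rows)
```

The two April rows merge into a single 3000 total, matching the desired output in the question.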
I can't believe the accepted answer has so many upvotes -- it's a horrible method. Here's the correct way to do it, with [date\_trunc](http://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC): ``` SELECT date_trunc('month', txn_date) AS txn_month, sum(amount) as monthly_sum FROM yourtable GROUP BY txn_month ``` It's bad practice but you might be forgiven if you use ``` GROUP BY 1 ``` in a very simple query. You can also use ``` GROUP BY date_trunc('month', txn_date) ``` if you don't want to select the date.
Group query results by month and year in postgresql
[ "", "sql", "postgresql", "" ]
Afternoon all, hope you can help an SQL newbie with what's probably a simple request. I'll jump straight in with the question/problem. For table `Property_Information`, I'd like to retrieve either a complete record, or even specified fields if possible, where the below criteria are met. The table has a column `PLCODE`, which is not unique. The table also has a column `PCODE`, which is unique and of which there are multiple per `PLCODE` (if that makes sense). What I need to do is request the lowest `PCODE` record for each unique `PLCODE`. E.g. there are 6500 records in this table, and 255 unique `PLCODES`; therefore I'd expect a result set of the 255 individual `PLCODES`, each with the lowest `PCODE` record attached. As I'm here, and already feel like a burden to the community, perhaps someone might suggest a good resource for developing existing (but basic) SQL skills? Many thanks in advance P.S. The query will be performed in MSSQLSMS 2012 on a 2005 DB, if that's of any relevance
Something like this will give you all columns for your grouped rows. ``` WITH CTE AS ( SELECT PLCODE , MIN(PCODE) AS PCODE FROM Property_Information GROUP BY PLCODE ) SELECT p.* FROM CTE c LEFT JOIN Property_Information p ON c.PLCODE = p.PLCODE AND c.PCODE = p.PCODE ```
```
select PLCODE, min(PCODE) as PCODE
from Property_Information
group by PLCODE
```

You can Google any ANSI SQL site or find SQL tutorials to build on your skills.
SQL return distinct while sorting on another column
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I'm trying to execute a while loop only under a defined time like this, but the while loop continues its execution even when we are above the defined limit:

```
import datetime
import time

now = datetime.datetime.now()
minute = now.minute

while minute < 46 :
    print "test"
    time.sleep(5)
    minute = now.minute
```

How can I stop the loop once we cross the limit? Thanks
You're not updating the value of `minute` inside while loop properly. You should recalculate the value of `now` in loop and then assign the new `now.minute` to `minute`. ``` while minute < 46 : print "test" time.sleep(5) now = datetime.datetime.now() minute = now.minute ```
You need to determine the time anew in your loop. The `minute` variable is *static*, it does not update to reflect changing time. If you want to loop for a certain *amount* of time, start with `time.time()` instead and then calculate elapsed time: ``` import time start = time.time() while time.time() - start < 300: print 'test' time.sleep(5) ``` will print 'test' every 5 seconds for 5 minutes (300 seconds). You can do the same with `datetime` objects of course, but the `time.time()` call is a little simpler to work with. To loop *until* a certain time `datetime` can be used like: ``` import datetime while datetime.datetime.now().minute < 46: print 'test' time.sleep(5) ``` Again, note that the loop needs to call a method *each time* to determine what the current time is.
executing a while loop between defined time
[ "", "python", "" ]
I am trying to use array slicing to reverse part of a [NumPy](http://en.wikipedia.org/wiki/NumPy) array. If my array is, for example, ``` a = np.array([1,2,3,4,5,6]) ``` then I can get a slice b ``` b = a[::-1] ``` Which is a view on the original array. What I would like is a view that is partially reversed, for example ``` 1,4,3,2,5,6 ``` I have encountered performance problems with NumPy if you don't play along exactly with how it is designed, so I would like to avoid "fancy" indexing if it is possible.
If you don't like the off by one indices ``` >>> a = np.array([1,2,3,4,5,6]) >>> a[1:4] = a[1:4][::-1] >>> a array([1, 4, 3, 2, 5, 6]) ```
``` >>> a = np.array([1,2,3,4,5,6]) >>> a[1:4] = a[3:0:-1] >>> a array([1, 4, 3, 2, 5, 6]) ```
Reverse part of an array using NumPy
[ "", "python", "numpy", "" ]
If I used 'Having' as 'Where' (i.e. without a group by clause), can you please tell me which one is faster, 'Where' or 'Having (as where)'? Which one should we prefer? Scenario - I am using this condition to find locations within a certain distance using latitude and longitude.
If a condition refers to an `aggregate` function, put that condition in the `HAVING` clause. Otherwise, use the `WHERE` clause. You can use `HAVING` without `GROUP BY`, but it is recommended to use it with `GROUP BY`. The `SQL Standard` says that `WHERE` restricts the result set before the rows are grouped, while `HAVING` restricts the result set after all the rows have been brought together. So `WHERE` is faster.
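A small sqlite3 sketch can make the ordering concrete. The table and data here are invented for illustration, and unlike MySQL, sqlite insists on a `GROUP BY` before `HAVING`, so the comparison uses a grouped query; the point is that `WHERE` prunes rows before grouping while `HAVING` filters afterward, even when both end up with the same rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (city TEXT, dist REAL)")
conn.executemany("INSERT INTO places VALUES (?, ?)",
                 [("a", 1.0), ("b", 5.0), ("c", 12.0)])

# WHERE removes rows before any grouping or aggregation happens
where_rows = conn.execute(
    "SELECT city FROM places WHERE dist < 10 ORDER BY city").fetchall()

# HAVING is evaluated only after the groups have been formed
having_rows = conn.execute(
    "SELECT city FROM places GROUP BY city HAVING dist < 10 ORDER BY city"
).fetchall()

print(where_rows == having_rows)  # same rows; WHERE just filters earlier
```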
Here is the difference between `HAVING` and `WHERE`: [HAVING without GROUP BY](https://stackoverflow.com/questions/6924896/having-without-group-by) You cannot in general use one in place of the other, but all of that is covered in this link. And here is the topic about their speed: [Which SQL statement is faster? (HAVING vs. WHERE...)](https://stackoverflow.com/questions/328636/which-sql-statement-is-faster-having-vs-where)
Having OR Where, Which is Faster in performance?
[ "", "mysql", "sql", "" ]
I have a pandas DataFrame created through a MySQL call which returns the data as object type. The data is mostly numeric, with some 'na' values. How can I cast the types of the DataFrame so the numeric values are appropriately typed (floats) and the 'na' values are represented as numpy NaN values?
Use the replace method on dataframes:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({ 'k1': ['na'] * 3 + ['two'] * 4,
                    'k2': [1, 'na', 2, 'na', 3, 4, 4]})
print df

df = df.replace('na', np.nan)
print df
```

I think it's helpful to point out that df.replace('na', np.nan) by itself won't work. You must assign it back to the existing dataframe.
`df = df.convert_objects(convert_numeric=True)` will work in most cases. I should note that this copies the data. It would be preferable to get it to a numeric type on the initial read. If you post your code and a small example, someone might be able to help you with that.
Converting Pandas Dataframe types
[ "", "python", "numpy", "pandas", "" ]
I am trying to query some information from certain large data on connections among a set of clients and servers. Below are sample data from relevant columns in the table (connection\_stats): ``` +---------------------------------------------------------+ | timestamp | client_id | server_id | status | +---------------------------------------------------------+ | 2013-07-06 10:40:30 | 100 | 800 | SUCCESS | +---------------------------------------------------------+ | 2013-07-06 10:40:50 | 101 | 801 | FAILED | +---------------------------------------------------------+ | 2013-07-06 10:42:00 | 100 | 800 | ABORTED | +---------------------------------------------------------+ | 2013-07-06 10:43:30 | 100 | 801 | SUCCESS | +---------------------------------------------------------+ | 2013-07-06 10:56:00 | 100 | 800 | FAILED | +---------------------------------------------------------+ ``` From this table, I am trying to query all instances of the connection status "ABORTED" immediately followed (in the order of timestamp) by connection status "FAILED", **for each client\_id, server\_id pair**. I would like to get both the records - the one with status "ABORTED" and that with status "FAILED". There is one such case in the data sample above - for the pair 100, 800, there is a "FAILED" status immediately after "ABORTED". I am a novice in SQL and databases and I am completely lost on this one. Any pointers to how to approach this will be much appreciated. The database is mysql.
Admittedly not very elegant, but what I can come up with straight off the bat that works with MySQL that does not have CTEs or ranking functions, and without a guaranteed unique row id to work with. ``` SELECT aborted.* FROM Table1 aborted JOIN Table1 failed ON aborted.server_id = failed.server_id AND aborted.client_id = failed.client_id AND aborted.timestamp < failed.timestamp LEFT JOIN Table1 filler ON filler.server_id = aborted.server_id AND filler.client_id = aborted.client_id AND aborted.timestamp < filler.timestamp AND filler.timestamp < failed.timestamp WHERE filler.timestamp IS NULL AND aborted.status = 'ABORTED' AND failed.status = 'FAILED' UNION SELECT failed.* FROM Table1 aborted JOIN Table1 failed ON aborted.server_id = failed.server_id AND aborted.client_id = failed.client_id AND aborted.timestamp < failed.timestamp LEFT JOIN Table1 filler ON filler.server_id = aborted.server_id AND filler.client_id = aborted.client_id AND aborted.timestamp < filler.timestamp AND filler.timestamp < failed.timestamp WHERE filler.timestamp IS NULL AND aborted.status = 'ABORTED' AND failed.status = 'FAILED' ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/2fbf1/11). 
If you're happy with just one row with both records summarized, you can just select the fields you want from aborted/failed and skip the entire second half of the union (ie the query will be cut in half) Since I got comments on the `UNION`, here's the same thing using a `JOIN`, assuming the timestamp is unique per client/server combination (a unique row id would help here); ``` SELECT * FROM Table1 t JOIN ( SELECT aborted.server_id asid, aborted.client_id acid, aborted.timestamp ats, failed.server_id fsid, failed.client_id fcid, failed.timestamp fts FROM Table1 aborted JOIN Table1 failed ON aborted.server_id = failed.server_id AND aborted.client_id = failed.client_id AND aborted.timestamp < failed.timestamp LEFT JOIN Table1 filler ON filler.server_id = aborted.server_id AND filler.client_id = aborted.client_id AND aborted.timestamp < filler.timestamp AND filler.timestamp < failed.timestamp WHERE filler.timestamp IS NULL AND aborted.status = 'ABORTED' AND failed.status = 'FAILED' ) u WHERE t.server_id=asid AND t.client_id=acid AND t.timestamp=ats OR t.server_id=fsid AND t.client_id=fcid AND t.timestamp=fts ORDER BY timestamp ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/2fbf1/21).
I'm answring this question (albeit late) because I want to offer a more general approach. MySQL does not have a `lag()` or `lead()` function, but you can implement it using a subquery. The idea is to lookup the next timestamp for the client\_id/server\_id pair and then join back to the original data to get the full record. This allows you to pull as many records as you want from the "next" record. It also allows you to consider more complicated relationships (say, the "fail" has to be within 3 minutes): ``` select cs.*, csnext.timestamp as nextTimeStamp, csnext.status as nextStatus from (select cs.*, (select timestamp from connection_stats cs2 where cs2.client_id = cs.client_id and cs2.server_id = cs.server_id and cs2.timestamp > cs.timestamp order by cs2.timestamp limit 1 ) as Nexttimestamp from connection_stats cs ) cs join connection_stats csnext on csnext.client_id = cs.client_id and csnext.server_id = cs.server_id and csnext.timestamp = cs.nexttimestamp where cs.status = 'ABORTED' and csnext.status = 'FAILED' ``` The performance of such a query is greatly improved by having an index on `connection_stats(client_id, server_id, timestamp)`.
How to select records based on sequential relationship using SQL query
[ "", "mysql", "sql", "" ]
My MySQL database is like this:

```
userID -> primaryKey, required, unsigned integer, auto increment, unique
usermail -> unique, varchar, required
```

I'm creating new users. userID is auto increment. If a usermail that was inserted before is inserted again, an error occurs. My question is this:

```
Let's say that userID is between 1-9 (there are 9 users now).
Somebody inserted the same usermail as before and got an error.
After that, a new user inserted another usermail, but the userID became 11 instead of 10. Why?
What should I do to make it 10?
```
Is it necessary for you to increment the userID from the database properties itself? What I am saying is: before inserting any records, retrieve the last userID from the database, store it in an integer variable, increment it, and insert it back into the database along with the new record. Say your last user id was 9. Use this query - `Select max(userID) from "your table";`

```
int x = 0;
SqlDataReader dr = com.ExecuteReader();
if (dr.HasRows)
{
    while (dr.Read())
    {
        x = Convert.ToInt32(dr.GetString(0));
    }
}
x = x + 1;
```

Now since you have the new `x` value, easily insert it back into the database with the new record. That way the auto increment sequence is followed, and even though a failure occurs the insertion will not be done and `UserID` does not increase.
When the error is being caused it seems to be inserting a new row and incrementing the uniqueid. On error you will need to query the table for the latest ID and reset the auto increment value ``` ALTER TABLE `table_name` AUTO_INCREMENT =new_number_here ``` Found Here: <http://forums.mysql.com/read.php?20,16088,17748#msg-17748>
SQL userID auto increment
[ "", "mysql", "sql", "database", "unique", "userid", "" ]
How do I use the Python type in a list comprehension? Can I?

```
>>> ll
[1, 2, 5, 'foo', 'baz', 'wert']
>>> [x for x in ll ]
[1, 2, 5, 'foo', 'baz', 'wert']
>>> [x for x in ll if type(x) == 'int']
[]
>>> [x*20.0 for x in ll if type(x) == 'int']
[]
>>> type(ll[0])
<type 'int'>
```

Looking for: `[20, 40, 100, 'foo', 'baz', 'wert']`
Generally, in Python, best practice is to use "Duck Typing". One way to do that is to use exception handling, which in this case would need a helper function: ``` def safe_multiply(x, y): try: return x * y except TypeError: return x [safe_multiply(x, 20) for x in ll] ``` The alternative duck-typing answer is to see if the object has a method for multiply: ``` [x * 20 if hasattr(x, "__mul__") else x for x in ll] ``` But both of the above have a quirk: in Python it is legal to use `*` with a string, and the result repeats the string: ``` print("foo" * 3) # prints "foofoofoo" ``` So the best way to go is to use the answer by Ignacio Vazquez-Abrams. He didn't actually give you the code, so here it is: ``` [x * 20 if isinstance(x, int) else x for x in ll] ```
Try that: ``` In [6]: [x * 20.0 if type(x) is int else x for x in ll] Out[6]: [20.0, 40.0, 100.0, 'foo', 'baz', 'wert'] ``` Here you're checking type with `type(x) is int`, and if it so - multiply, else you just append x to your resulting list (`else x`), exactly as you wanted to.
How to use python type in List Comprehension
[ "", "python", "if-statement", "list-comprehension", "" ]
I'm looking for a tweak to MySQL ordering. I normally select records from a table and order them by Name (varchar) ASC, **but numbers always come first**. Here is an example of my question (*note: MySQL sorts records starting with 0-9 first*):

```
SELECT name FROM list ORDER BY name ASC

record returned:

1 star
2 star
9 slice
Ape
Age
Beg
Bell
Fish
Zoo
```

**What I want is for alphabetic values to come first, followed by numbers.**

Desired output

```
Ape
Age
Beg
Bell
Fish
Zoo
1 star
2 star
9 slice
```
Use the following `ORDER BY` clause: ``` ORDER BY IF(name RLIKE '^[a-z]', 1, 2), name ```
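The same bucketing trick can be tried out with sqlite3, with one swap worth naming: sqlite has neither `IF()` nor `RLIKE`, so a `CASE` expression with `GLOB` stands in for the MySQL version. Rows whose name starts with a letter sort into bucket 1, everything else into bucket 2, and the name itself breaks ties within each bucket.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE list (name TEXT)")
conn.executemany("INSERT INTO list VALUES (?)",
                 [("1 star",), ("2 star",), ("9 slice",),
                  ("Ape",), ("Age",), ("Beg",), ("Bell",),
                  ("Fish",), ("Zoo",)])

# CASE + GLOB emulates IF(name RLIKE '^[a-z]', 1, 2): names that start
# with a letter get sort key 1, everything else gets 2
rows = conn.execute(
    "SELECT name FROM list "
    "ORDER BY CASE WHEN lower(name) GLOB '[a-z]*' THEN 1 ELSE 2 END, name"
).fetchall()

print([r[0] for r in rows])
```

The alphabetic names come out first (in alphabetical order, so `Age` before `Ape`), followed by the numeric ones.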
Ref [this](https://stackoverflow.com/questions/5055049/how-to-order-mysql-varchar-results) ``` SELECT name FROM list ORDER BY name * 1 ASC ``` Edited ``` SELECT name FROM list ORDER BY name * 1, name ASC ```
ORDER BY alphabet first then follow by number
[ "", "mysql", "sql", "sorting", "sql-order-by", "" ]
last parts of the code:

```
lda = LdaModel(corpus=corpus,id2word=dictionary, num_topics=2)
print lda
```

bash output:

```
INFO : adding document #0 to Dictionary(0 unique tokens)
INFO : built Dictionary(18 unique tokens) from 5 documents (total 20 corpus positions)
INFO : using serial LDA version on this node
INFO : running online LDA training, 2 topics, 1 passes over the supplied corpus of 5 documents, updating model once every 5 documents
WARNING : too few updates, training might not converge; consider increasing the number of passes to improve accuracy
INFO : PROGRESS: iteration 0, at document #5/5
INFO : 2/5 documents converged within 50 iterations
INFO : topic #0: 0.079*cute + 0.076*broccoli + 0.070*adopted + 0.069*yesterday + 0.069*eat + 0.069*sister + 0.068*kitten + 0.068*kittens + 0.067*bananas + 0.067*chinchillas
INFO : topic #1: 0.082*broccoli + 0.079*cute + 0.071*piece + 0.070*munching + 0.069*spinach + 0.068*hamster + 0.068*ate + 0.067*banana + 0.066*breakfast + 0.066*smoothie
INFO : topic diff=0.470477, rho=1.000000
<gensim.models.ldamodel.LdaModel object at 0x10f1f4050>
```

So I'm wondering whether I'm able to save the resulting topics it generated to a readable format. I've tried the `.save()` method, but it always outputs something unreadable.
you just need to use `lda.show_topics(topics=-1)` or any number of topics you want to have (topics=10, topics=15, topics=1000....). I am usually doing just: ``` logfile = open('.../yourfile.txt', 'a') print>>logfile, lda.show_topics(topics=-1, topn=10) ``` All these parameters and others are available in gensim [documentation](http://radimrehurek.com/gensim/models/ldamodel.html).
Here is how to save a model for gensim LDA:

```
from gensim import corpora, models, similarities

# create corpus and dictionary
corpus = ...
dictionary = ...

# train model, this might take time
model = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=200, passes=5, alpha='auto')

# save model to disk (no need to use pickle module)
model.save('lda.model')
```

To print topics, here are a few ways:

```
# later on, load trained model from file
model = models.LdaModel.load('lda.model')

# print all topics
model.show_topics(topics=200, topn=20)

# print topic 109
model.print_topic(109, topn=20)

# another way
for i in range(model.num_topics):
    print model.print_topic(i)

# and another way, only prints top words
for t in range(model.num_topics):
    print 'topic {}: '.format(t) + ', '.join([v[1] for v in model.show_topic(t, 20)])
```
Gensim: How to save LDA model's produced topics to a readable format (csv,txt,etc)?
[ "", "python", "lda", "gensim", "" ]
I have seen questions like this asked many, many times, but none were helpful. I'm trying to submit data to a form on the web. I've tried requests and urllib, and neither has worked. For example, here is code that should search for the [python] tag on SO:

```
import urllib
import urllib2
url = 'http://stackoverflow.com/'
# Prepare the data
values = {'q' : '[python]'}
data = urllib.urlencode(values)
# Send HTTP POST request
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
# Print the result
print html
```

Yet when I run it, I get the HTML source of the home page. Here is an example using requests:

```
import requests

data= {
    'q': '[python]'
}
r = requests.get('http://stackoverflow.com', data=data)
print r.text
```

Same result! I don't understand why these methods aren't working. I've tried them on various sites with no success, so if anyone has successfully done this, please show me how!

Thanks so much!
If you want to pass `q` as a parameter in the URL using [`requests`](http://docs.python-requests.org/), use the `params` argument, not `data` (see [Passing Parameters In URLs](http://docs.python-requests.org/en/latest/user/quickstart.html#passing-parameters-in-urls)): ``` r = requests.get('http://stackoverflow.com', params=data) ``` This will request <https://stackoverflow.com/?q=%5Bpython%5D> , which isn't what you are looking for. You really want to *`POST`* to a *form*. Try this: ``` r = requests.post('https://stackoverflow.com/search', data=data) ``` This is essentially the same as *`GET`*-ting <https://stackoverflow.com/questions/tagged/python> , but I think you'll get the idea from this.
```
import urllib
import urllib2

url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
          'location' : 'Northampton',
          'language' : 'Python' }

data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
```

This makes a POST request with the data specified in the values. We need urllib to encode the url and then urllib2 to send the request.
Submitting to a web form using python
[ "", "python", "post", "request", "urllib2", "urllib", "" ]
I get the following result: ``` >>> x = '-15' >>> print x.isdigit() False ``` When I expect it to be `True`. There seems to be no built in function that returns `True` for a string of negative number. What is the recommend to detect it?
The recommended way would be to `try` it: ``` try: x = int(x) except ValueError: print "{} is not an integer".format(x) ``` If you also expect decimal numbers, use `float()` instead of `int()`.
There might be a more elegant Python way, but a general method is to check if the first character is `'-'`, and if so, call `isdigit` on the 2nd character onward.
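A minimal sketch of that check (the helper name is mine, not from the answer): strip one optional leading `'-'` and test the rest with `isdigit`.

```python
def is_integer_string(s):
    # Drop a single leading '-' if present, then test the remainder;
    # a bare "-" leaves an empty string, and "".isdigit() is False
    return s[1:].isdigit() if s.startswith("-") else s.isdigit()

print(is_integer_string("-15"))   # True
print(is_integer_string("15"))    # True
print(is_integer_string("-"))     # False
print(is_integer_string("1.5"))   # False
```

As the accepted answer notes, `try: int(x)` is usually the more robust route, since it also handles things like leading `+` and surrounding whitespace.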
How to determine whether string is a number when the number is negative
[ "", "python", "" ]
I have a dictionary of lists like below:

```
dict = {'key1':list1, 'key2':list2}
```

and

```
list1=['a', 'b', 'c']
list2=['d', 'e', 'f']
```

i.e.,

```
dict = {'key1': ['a', 'b', 'c'], 'key2': ['d', 'e', 'f']}
```

I would like to get the list object containing 'b' (that is, list1) using map rather than an explicit for loop. I googled a lot but can't find related information. Any help will be appreciated.
There's really no way to do this without an explicit or implicit loop. That is, you can do one of these:

```
key = next(lst for lst in d.values() if 'b' in lst)
```

But either way, that's really doing the same thing a for loop does. In particular, it's roughly the same as this:

```
for lst in d.values():
    if 'b' in lst:
        key = lst
        break
```

… which is probably exactly the loop you were going to write.

---

However, if you're going to do this multiple times, you can just do a for loop once to build an appropriate data structure, then you can just do a quick lookup each time. For example, let's build a map from each list member to the list it's in:

```
valmap = {val:lst for lst in d.values() for val in lst}
```

Now, every time you want to find out which list `'b'` is in, just do this:

```
valmap['b']
```

---

From your question, you may have been asking about using `map`. First, `map` is still just an implicit `for` loop. Second, `map` doesn't really give you any way to do what you want; you could use it to, e.g., create a new sequence with a `True` in place of the list containing `'b'` and a `False` everywhere else, but then you still have to search that sequence to find the `True`. What you want here is `filter`. You could write the genexpr solution above as:

```
key = next(filter(lambda lst: 'b' in lst, d.values()))
```

And that's pretty much exactly the same thing as the genexpr. (In CPython, it's probably a bit faster than the explicit loop and a bit slower than the genexpr, but that also probably doesn't matter at all.)
``` next(x for x in D.itervalues() if 'b' in x) ```
How to access a list containing specific item in a dictionary?
[ "", "python", "list", "dictionary", "" ]
Today I was positively surprised by the fact that while reading data from a data file (for example) pandas is able to recognize the types of the values:

```
df = pandas.read_csv('test.dat', delimiter=r"\s+", names=['col1','col2','col3'])
```

For example, it can be checked in this way:

```
for i, r in df.iterrows():
    print type(r['col1']), type(r['col2']), type(r['col3'])
```

In particular, integers, floats and strings were recognized correctly. However, I have a column that has dates in the following format: `2013-6-4`. These dates were recognized as strings (not as Python date objects). Is there a way to teach pandas to recognize dates?
You should add `parse_dates=True`, or `parse_dates=['column name']` when reading; that's usually enough to magically parse it. But there are always weird formats which need to be defined manually. In such a case you can also add a date parser function, which is the most flexible way possible.

Suppose you have a column 'datetime' with your string, then:

```
from datetime import datetime
dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')

df = pd.read_csv(infile, parse_dates=['datetime'], date_parser=dateparse)
```

This way you can even combine multiple columns into a single datetime column; this merges a 'date' and a 'time' column into a single 'datetime' column:

```
dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')

df = pd.read_csv(infile, parse_dates={'datetime': ['date', 'time']}, date_parser=dateparse)
```

You can find directives (i.e. the letters to be used for different formats) for `strptime` and `strftime` [in this page](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior).
Perhaps the pandas interface has changed since @Rutger answered, but in the version I'm using (0.15.2), the `date_parser` function receives a list of dates instead of a single value. In this case, his code should be updated like so: ``` from datetime import datetime import pandas as pd dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in dates] df = pd.read_csv('test.dat', parse_dates=['datetime'], date_parser=dateparse) ``` Since the original question asker said he wants dates and the dates are in `2013-6-4` format, the `dateparse` function should really be: ``` dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d').date() for d in dates] ```
Can pandas automatically read dates from a CSV file?
[ "", "python", "date", "types", "dataframe", "pandas", "" ]
I'm trying to write a code that automatically logs into two websites and goes to a certain page. I use [Splinter](http://splinter.cobrateam.info/). I only get the error with the "Mijn ING Zakelijk" website using [PhantomJS](http://www.phantomjs.org) as browser type. Until a few days ago the code ran perfectly fine 20 out of 20 times. But since today I'm getting an error. Sometimes the code runs fine. Other times it does not and gives me the "Click succeeded but Load Failed.." error. Here's the full traceback: ``` ## Attempting to login to Mijn ING Zakelijk, please wait. - Starting the browser.. - Visiting the url.. - Filling the username form with the defined username.. - Filling the password form with the defined password.. - Clicking the submit button.. Traceback (most recent call last): File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 98, in <module> mijning() File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 27, in mijning attemptLogin(url2, username2, password2, defined_title2, website_name2, browser_type2) File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 71, in attemptLogin browser.find_by_css('.submit').first.click() File "/Users/###/Library/Python/2.7/lib/python/site-packages/splinter/driver/webdriver/__init__.py", line 344, in click self._element.click() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 54, in click self._execute(Command.CLICK_ELEMENT) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 228, in _execute return self._parent.execute(command, params) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 165, in execute self.error_handler.check_response(response) File 
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 158, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: u'Error Message => \'Click succeeded but Load Failed. Status: \'fail\'\'\n caused by Request => {"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:56899","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"c2bbc8a0-e3d2-11e2-b7a8-f765797dc4e7\\", \\"id\\": \\":wdc:1372850513087\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/c2bbc8a0-e3d2-11e2-b7a8-f765797dc4e7/element/%3Awdc%3A1372850513087/click"}' ; Screenshot: available via screen Process finished with exit code 1 ``` Here's the full code: ``` ## *** Payment Notification and Mail Tool (FPNMT) ## from splinter import * from Tkinter import * def ###(): # Define values browser_type1 = 'phantomjs' # 'phantomjs' or 'firefox' url1 = 'http://###.nl/admin' username1 = '###' password1 = '###' defined_title1 = 'Bestellingen' website_name1 = '###.nl Admin' attemptLogin(url1, username1, password1, defined_title1, website_name1, browser_type1) def mijning(): # Define values browser_type2 = 'phantomjs' # 'phantomjs' or 'firefox' url2 = 'https://mijnzakelijk.ing.nl/internetbankieren/SesamLoginServlet' username2 = '###' password2 = '###' defined_title2 = 'Saldo informatie' website_name2 = 'Mijn ING Zakelijk' attemptLogin(url2, username2, password2, defined_title2, website_name2, browser_type2) # Functions # def attemptLogin(url, username, password, defined_title, 
website_name, browser_type): print '## Attempting to login to ' + website_name + ', please wait.' # Start the browser print '- Starting the browser..' browser = Browser(browser_type) # Visit in the url print '- Visiting the url..' browser.visit(url) if website_name == '###.nl Admin': # Find the username form and fill it with the defined username print '- Filling the username form with the defined username..' browser.fill('username', username) # Find the password form and fill it with the defined password print '- Filling the password form with the defined password..' browser.fill('password', password) # Find the submit button and click print '- Clicking the submit button..' browser.click_link_by_text('Inloggen') # Find, click and display page with order history print '- Visiting the defined web page..' current_token = browser.url[57:97] url_plus_token = 'http://www.###.nl/admin/index.php?route=sale/order' + current_token browser.visit(url_plus_token) else: website_name == 'Mijn ING Zakelijk' # Find the username form and fill it with the defined username print '- Filling the username form with the defined username..' browser.find_by_id('gebruikersnaam').first.find_by_tag('input').fill(username) # Find the password form and fill it with the defined password print '- Filling the password form with the defined password..' browser.find_by_id('wachtwoord').first.find_by_tag('input').fill(password) # Find the submit button and click print '- Clicking the submit button..' browser.find_by_css('.submit').first.click() # Display page with transaction history print '- Visiting the defined web page..' 
browser.visit('https://mijnzakelijk.ing.nl/mpz/solstartpaginarekeninginfo.do') # Get page title after successful login current_title = browser.title # Check the title of the page to confirm successful login checkLogin(defined_title, current_title, website_name, browser) def checkLogin(defined_title, current_title, website_name, browser): if current_title == defined_title: print '# Login to', website_name, 'successful.' print '- Quitting the browser..' browser.quit() else: print '# Login to', website_name, 'failed.' print '- Quitting the browser..' browser.quit() i = 1 while i < 10: print i #***() mijning() i = i+1 ``` Any ideas on what's causing this error and how do I solve it? Thanks.
It may be that there was already some active JavaScript or background AJAX on the page, which confused PhantomJS into thinking that the button click was unsuccessful. You could try inserting a wait, or try stopping the browser before clicking.
The current version of the ghostdriver source code fixes the issue (there is no longer any "Click succeeded but Load Failed" message - see [here](https://github.com/detro/ghostdriver/commit/d0615a547f5a036df3134ef946c33d972c384aac)). The thing is, that version is not yet released (as of 08/19/2013), so you need to get it and then build it yourself. That solved the problem for me (Windows 7, Python 2.7.5, Selenium 2.33). You can find the step-by-step [here](http://phantomjs.org/build.html). **UPDATE**: PhantomJS 1.9.2 just came out with Ghostdriver 1.0.4, which fixes the problem (check [here](https://github.com/detro/ghostdriver/blob/master/src/request_handlers/webelement_request_handler.js) - no more "Click succeeded but Load Failed" message). So just upgrade to [PhantomJS 1.9.2](http://phantomjs.org/download.html) and you should be fine. No need to build anything yourself anymore.
Automate browser actions - Clicking the submit button errors - "Click succeeded but Load Failed. .."
[ "", "python", "selenium", "webdriver", "urllib2", "phantomjs", "" ]
I have set of points in tuples, like this: ``` >>> s set([(209, 147), (220, 177), (222, 181), (225, 185), (288, 173), (211, 155), (222, 182)]) ``` What is the right way to do scatter plot of this set?
You can do: ``` x,y = zip(*s) plt.scatter(x, y) ``` Or even in an "one-liner": ``` plt.scatter(*zip(*s)) ``` [`zip()` can be used to pack and unpack arrays](https://stackoverflow.com/q/19339/832621) and when you call using `method(*list_or_tuple)`, each element in the list or tuple is passed as an argument.
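As a side note on the mechanics, the `zip(*s)` transpose is easy to verify without matplotlib at all: unpacking the set of pairs gives one tuple of x values and one of y values, and zipping them back recovers the original points (a quick stdlib-only illustration):

```python
s = {(209, 147), (220, 177), (222, 181), (225, 185)}

# zip(*s) transposes the pairs: one tuple of x values, one of y values
x, y = zip(*s)
assert len(x) == len(y) == len(s)

# zipping the two tuples back together recovers the original points
assert set(zip(x, y)) == s
```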
``` x = []; y=[] for point in s: x.append(point[0]) y.append(point[1]) plt.scatter(x,y) ```
Make scatter plot from set of points in tuples
[ "", "python", "matplotlib", "set", "" ]
I have this code: ``` for urls in new_keywords if urls not in old_keywords upload_keywords.append(urls) ``` And my error: ``` File "controller.py", line 56 for urls in new_keywords ^ SyntaxError: invalid syntax ``` I had this error before and the issue was a mix of spaces and tabs as indent. I have checked this and with my editor I can see only dots (spaces) but it doesn't seem to work? any ideas?
You're missing a colon, it should be like: ``` for urls in new_keywords: if urls not in old_keywords: upload_keywords.append(urls) ``` That's why you get the invalid syntax error
You missed a colon on each of the first 2 lines of your snippet. Change ``` for urls in new_keywords if urls not in old_keywords ``` To ``` for urls in new_keywords: if urls not in old_keywords: ``` You should always put colons after `for` statements (as well as `while`, `if` and some other)
SyntaxError: invalid syntax I have checked for mix of Tabs & Spaces
[ "", "python", "" ]
I am trying to find a clean way to handle different, mutually-exclusive function inputs. The idea is that I have a function which returns 4 values (these values are linked through mathematical equations), and when you input one of the 4 values, it returns all the values. Currently the function works like this: ``` #example relations are simply: b=1+a, c=0.5*a, d=sqrt(a) def relations(v, vtype="a"): if vtype=="a": a = v elif vtype=="b": a = v - 1 elif vtype=="c": a = 2 * v elif vtype=="d": a = v ** 2 b = 1 + a c = 0.5 * a d = a ** 0.5 return a,b,c,d ``` The user specifies what the input variable is by means of a string vtype, and it returns all the values. It is not possible for the user to input more than one different input value (would be redundant, because all the unknowns can be determined from one input value). Is there a more clean and pythonic way to do this? Specifying the input variable with a string feels really dirty at the moment. Thanks in advance!
A common approach to avoid many `if`-`elif`s is to build a dictionary of functions: ``` def relations(v, vtype='a'): functions = { 'a': lambda x: x, 'b': lambda x: x-1, 'c': lambda x: x * 2, 'd': lambda x: x**2 } a = functions[vtype](v) b = 1 + a c = 0.5 * a d = a ** 0.5 return a,b,c,d ``` *If this function is not a bottleneck* you can avoid using the `lambda`s and simply do: ``` values = {'a': v, 'b': v-1, 'c': v * 2, 'd': v**2} a = values[vtype] ``` If you don't like the idea of having `vtype` in the function signature you can use a single `**kwargs` argument: ``` def relations(**kwargs): if len(kwargs) != 1 or not set('abcd').intersection(kwargs): raise ValueError('Invalid parameters') vtype, v = kwargs.popitem() functions = { 'a': lambda x: x, 'b': lambda x: x-1, 'c': lambda x: x * 2, 'd': lambda x: x**2 } a = functions[vtype](v) b = 1 + a c = 0.5 * a d = a ** 0.5 return a,b,c,d ``` Then call it as: ``` relations(a=...) relations(b=...) ```
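One nice property of the dictionary-dispatch version is that it is easy to sanity-check: feeding any of the four returned values back in under its own key should reproduce the same tuple. A quick consistency check of the example relations (illustrative, not part of the original answer; written without the print statements so it runs under both Python 2 and 3):

```python
def relations(v, vtype='a'):
    # invert the given quantity back to `a`, then derive the rest
    functions = {
        'a': lambda x: x,
        'b': lambda x: x - 1,
        'c': lambda x: x * 2,
        'd': lambda x: x ** 2,
    }
    a = functions[vtype](v)
    return a, 1 + a, 0.5 * a, a ** 0.5

a, b, c, d = relations(4)            # start from a = 4
# any of the other three values round-trips to the same tuple
assert relations(b, 'b') == (a, b, c, d)
assert relations(c, 'c') == (a, b, c, d)
assert relations(d, 'd') == (a, b, c, d)
```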
You could use variable keyword arguments: ``` def relations(**kwargs): if 'a' in kwargs: a = kwargs['a'] elif 'b' in kwargs: a = kwargs['b'] - 1 elif 'c' in kwargs: a = kwargs['c'] * 2 elif 'd' in kwargs: a = kwargs['d'] ** 2 else: raise TypeError('missing an argument') b = 1 + a c = 0.5 * a d = a ** 0.5 return a, b, c, d ``` Then use with named parameters: ``` relations(a=2) relations(b=4) relations(c=9) relations(d=0) ```
What’s the most Pythonic way of handling different, mutually-exclusive function inputs?
[ "", "python", "input", "" ]
Okay so here's my dilemma: I'm working on a FAQ bot for a subreddit. I'm having trouble with boolean logic and could use a pair of more experienced eyes (this is my first adventure in Python). Right now the bot is basically spamming the test subreddit I created. Rather than share that specific information, here's an example that shows the exact problem I'm running into: ``` #Define Key Words ################################## #System Requirements Definitions sysReq1 = ('system requirements' and '?') sysReq2 = ('can' and ('handle' or 'play' or 'run') and ('netbook' or 'notebook' or 'pc' or 'mac' or 'macbook' or 'laptop' or 'comp' or 'computer') and '?') #System Requirements Response sysReqResponse = 'PROGRESS' ################################## testString1 = "i hate this" #intended result for 1: nothing testString2 = "is that real?" #intended result for 2: nothing testString3 = "What are the system requirements?" #intended result for 3: 'PROGRESS' print testString1 if (sysReq1 or sysReq2) in testString1.lower(): print sysReqResponse print testString2 if (sysReq1 or sysReq2) in testString2.lower(): print sysReqResponse print testString3 if (sysReq1 or sysReq2) in testString2.lower(): print sysReqResponse ``` When you run it, it displays: ``` i hate this is that real? PROGRESS What are the system requirements? PROGRESS ``` It keeps returning a value for testString2. I think it has something to do with '?' but really have no clue how or why or what to do about it. Help! EDIT: Here's what I want it to display: When you run it, it displays: ``` i hate this is that real? What are the system requirements? PROGRESS ``` So far, I found that if I change my `sysReq1 = ('system requirements' and '?')` to `sysReq1 = 'system requirements' '?'` and only check the `if` for `sysReq1`, it returns the right values. Problem is I really need those `or` statements to reduce clutter/redundancy. 
The reason sysReq2 is there is because some people ask a different question for the same answer : * Can it run on my netbook? * Can my PC handle it? * Can I play it on my mac? and this is excluding the variations of 'Will I be able to', but you get the idea. All of these should return the same value (in this case, "PROGRESS")
I think [`any`](http://docs.python.org/2/library/functions.html#any), [`all`](http://docs.python.org/2/library/functions.html#all), and [generators](http://wiki.python.org/moin/Generators) can help you out here. ## The code ``` #Define Key Words ################################## #System Requirements Definitions sysReq1 = ['system requirements', '?'] sysReq2 = [['can'] ,['handle', 'play', 'run'] ,['netbook', 'notebook', 'pc', 'mac', 'macbook', 'laptop', 'comp', 'computer'] ,['?'] ] def test(testString): lowerTestString = testString.lower() return all(i in lowerTestString for i in sysReq1) or all(any(j in lowerTestString for j in i) for i in sysReq2) #System Requirements Response sysReqResponse = 'PROGRESS' ################################## testString1 = "i hate this" #intended result for 1: nothing testString2 = "is that real?" #intended result for 2: nothing testString3 = "What are the system requirements?" #intended result for 3: 'PROGRESS' print testString1 if test(testString1): print sysReqResponse print testString2 if test(testString2): print sysReqResponse print testString3 if test(testString3): print sysReqResponse ``` The function isn't strictly necessary, but it does make the code more maintainable. If you need to change the check, it only changes in one place. ## What's going on here? First, we converted the two sets of strings into a list and a list of lists. The requirement then becomes, "The string must contain all elements of `sysReq1` or at least one element from every sublist of `sysReq2`." We accomplish checking this condition by combining generators with the `any` and `all` functions. After storing the `lower` to avoid calling this repeatedly, we create a generator (which amounts to an iterator) of booleans. Each boolean tells us whether a single element of `sysReq1` is contained in the lowered. Then we pass this iterator to the first `all`, which checks if the list contains all `True`s. 
If so, the `all` function returns `True` and the second check is short circuited. Otherwise, it returns `False` and Python moves on past the `or.` The check for `sysReq2` is more complicated. First, we create a generator of booleans for each sublist; this is inside the `any` call. This list contains a set of booleans as to whether each element of the sublist is in the lowered string. The `any` call returns `True` if any element in this list of booleans (based on the sublist) is `True`. (It short circuits, by the way, and since we're using a generator, the checks following a `True` aren't even run, unlike if we had used a list.) Then we create another generator; this one contains the results of each sublist test (all the `any` calls). Then `all` is called on this generator, which checks if the question contained an element from all sublists. I would note that users can enter nonsense questions if they're typing them in directly. For example, `'Play can netbook kuguekf ugifugfj ugufsgjf nugjfgjfgj?'` would pass this check. ## Summary Use generators to run your tests that return boolean values. Use `any` and `all` to combine iterables of boolean values. ## Edit Based on comments, here's an alternate solution that [`split`](http://docs.python.org/2/library/stdtypes.html#str.split)s the input string by white space and uses [`set`](http://docs.python.org/2/library/stdtypes.html#set)s instead of `list`s. I also added a test case from the sample questions you listed to ensure that the second part of the `or` is being hit. 
``` #Define Key Words ################################## #System Requirements Definitions sysReq1 = set(['system', 'requirements']) sysReq2 = [set(['can']) ,set(['handle', 'play', 'run']) ,set(['netbook', 'notebook', 'pc', 'mac', 'macbook', 'laptop', 'comp', 'computer']) ] def test(testString): if not testString.endswith('?'): return False lowerTestString = set(testString.rstrip('?').lower().split()) return lowerTestString.issuperset(sysReq1) or all(not lowerTestString.isdisjoint(i) for i in sysReq2) #System Requirements Response sysReqResponse = 'PROGRESS' ################################## testString1 = "i hate this" #intended result for 1: nothing testString2 = "is that real?" #intended result for 2: nothing testString3 = "What are the system requirements?" #intended result for 3: 'PROGRESS' testString4 = "Can my PC handle it?" #intended result for 4: 'PROGRESS' print testString1 if test(testString1): print sysReqResponse print testString2 if test(testString2): print sysReqResponse print testString3 if test(testString3): print sysReqResponse print testString4 if test(testString4): print sysReqResponse ``` I think this is fairly straightforward. Note that the `set` constructor takes an iterable, so I'm passing lists in just for that. The "not disjoint" part might be a bit confusing; it's just making sure the intersection isn't empty. I used that in hopes that the function is implemented so that it doesn't compute the entire intersection. The question mark was problematic since it wasn't separated by a space, so I just made sure the string ended in a question mark and [`rstrip`](http://docs.python.org/2/library/stdtypes.html#str.rstrip)ed it off. Actually, this implementation might be cleaner and more maintainable than the one with lots of generators.
Two problems. First, you can't use `and` and `or` to "store" some kind of comparison operator for use later. When you write `and` and `or` the result is evaluated immediately. The rules for that are described in [the documentation](http://docs.python.org/2/reference/expressions.html#boolean-operations). Second, you can't use `and` and `or` with `in` that way. `in` does not "distribute over" `and` and and `or`. Writing `('A' and 'B') in x` doesn't mean "A in x and B in x". It evaluates `('A' and 'B')` *first* (which in this case will give you `'B'`) and then checks whether that single result is in `x`. You can't achieve what you want with simple operators. There's no way to use just the `and` and `or` operators to store a complex query like that that you can apply later. You're going to have to convert your criteria into functions and call them with the "test strings" as arguments: ``` def sysReq1(x): return 'system requirements' in x and '?' in x >>> testString2 = "is that real?" ... testString3 = "What are the system requirements?" ... print testString2 ... if sysReq1(testString2.lower()): ... print "2 passed the test" ... print testString3 ... if sysReq1(testString3.lower()): ... print "3 passed the test" is that real? What are the system requirements? 3 passed the test ``` I'd suggest you work through [the Python tutorial](http://docs.python.org/tutorial/) to get a handle on the basics of Python.
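These evaluation rules are easy to verify interactively: `and` evaluates to its last operand when all operands are truthy, and `or` evaluates to its first truthy operand - which is exactly why the original `sysReq1` degenerated into a check for a bare question mark (a minimal demonstration of the rules, separate from the answers above):

```python
# `and` returns its last operand when every operand is truthy
assert ('system requirements' and '?') == '?'

# so the original check silently became "does the string contain '?'"
assert ('system requirements' and '?') in "is that real?"

# `or` returns its first truthy operand
assert ('handle' or 'play' or 'run') == 'handle'
```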
Python: Trying to create the equivalent of and(or(value,value,value))
[ "", "python", "boolean-logic", "" ]
Hoping someone can help me here. I'm very new to Python, and I'm trying to work out what I'm doing wrong. I've already searched and found out that Python variables can be linked so that changing one changes the other, and I have done numerous tests with the `id()` function to get to grips with this concept. But I seem to have found an exception I'm hoping someone can explain... firstly, the following works as expected to make an independent copy of a list. ``` >>> a = [0,0] >>> b = a[:] >>> print a is b False >>> b[0]=1 >>> print a [0,0] >>> print b [1,0] ``` But if I change this slightly so that `a` is a list within a list it changes... ``` >>> a = [[0,0],[0,0]] >>> b = a[:] >>> print a is b False >>> b[0][0]=1 >>> print a [[1, 0], [0, 0]] >>> print b [[1, 0], [0, 0]] ``` Now we see that any update of `b` will also apply to `a`, but yet the result of `print a is b` returns `False`?? I've checked this against `id()` as well, everything says they are independent of each other, but when I update one the same applies to the other?? Can anyone explain this?? Noting I'm running these off <http://labs.codecademy.com/#:workspace>, so my first thought is that it's just a bug on their site, but I don't know? EDIT: THANK YOU ALL for the great answers so far. That was fast! I know this has probably been asked before but it was a difficult thing to search for. Since all the answers are correct, I'll wait a day before marking. Whoever has the most +1's will get the mark :)
`b = a[:]` creates a [shallow copy](http://en.wikipedia.org/wiki/Object_copy#Shallow_copy) of `a`, so changing the mutable lists within `b` still effects *those same lists* in `a`. In other words, `a` and `b` do not point to the same list (which is why `a is not b`), but rather to two different lists which both *contain the same two lists*. You change one of these lists via `b[0][0] = 1` and that change shows up in `a`. You mentioned that you were playing around with `id()`, so take a look at this: ``` >>> a = [[0,0],[0,0]] >>> b = a[:] >>> id(a) 2917280 # <----+ >>> id(b) # |----- different! 2771584 # <----+ >>> id(a[0]), id(a[1]) (2917320, 2917360) # <----+ >>> id(b[0]), id(b[1]) # |----- same! (2917320, 2917360) # <----+ ```
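A compact way to see the distinction between *rebinding* an element of the shallow copy and *mutating* a shared inner list (an illustrative sketch, not from the original answer):

```python
a = [[0, 0], [0, 0]]
b = a[:]                 # shallow copy: new outer list, shared inner lists

b[0] = [9, 9]            # rebinds b[0] to a brand-new list; a is untouched
b[1][0] = 5              # mutates the shared inner list; visible through a

assert a == [[0, 0], [5, 0]]
assert b == [[9, 9], [5, 0]]
assert a[1] is b[1]      # still the same inner list object
assert a[0] is not b[0]  # this slot was rebound in b
```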
You need to make a *deepcopy* of your list. `a[:]` only makes a *shallow copy* - [see docs](http://docs.python.org/2/library/copy.html) You can use `copy.deepcopy` function: ``` >>> import copy >>> a = [[0,0],[0,0]] >>> b = copy.deepcopy(a) >>> b [[0, 0], [0, 0]] >>> b[0][0]=1 >>> a [[0, 0], [0, 0]] ```
Python: copying a list within a list
[ "", "python", "list", "" ]
We have a stored procedure in a MSSQL Server 2008 database that performs a number of reads and writes in response to user action on a website. I would like to know if there is any way of counting how many rows were edited/created during the procedure, preferably without having to alter the proc (although alterations are possible). Does anyone have any suggestions? I looked at `@@ROWCOUNT` but that covers reads and writes (and I'd prefer not o go throught the proc and manually add up ROWCOUNT after each UPDATE) EDIT: the proc is being called from C# using the System.Data.SqlClient classes
Without editing the proc, your best bet would be to start a SQL Server Profiler session, track [SP:StmtCompleted](http://msdn.microsoft.com/en-us/library/ms189570.aspx), filter ObjectName to your proc, and filter TextData to the statements you want to monitor. Be sure to include the RowCount column. Note that you can't get the row counts from the [extended events](http://msdn.microsoft.com/en-us/library/bb630282.aspx) sql\_statement\_completed event. There is no row count column; just duration, CPU, reads and writes. If you can edit the proc, you would have total control over the behavior. You could sum up the [@@ROWCOUNT](http://msdn.microsoft.com/en-us/library/ms187316.aspx) of every INSERT/UPDATE/DELETE operation. This count could be returned as an output parameter, written to the log, PRINTed, traced with [user-defined trace events](http://msdn.microsoft.com/en-us/library/ms175164.aspx), saved to a table, etc.
MS Sql Profiler is where you want to start for something like this.
Count number of rows edited by Stored Proc
[ "", "sql", "sql-server-2008", "stored-procedures", "" ]
Say I have a 3D numpy.array, e.g. with dimensions x y z, is there a way to iterate over slices along a particular axis? Something like: ``` for layer in data.slices(dim=2): # do something with layer ``` Edit: To clarify, the example is a dim=3 array, i.e. shape=(len_x, len_y, len_z). Elazar and equivalently kamjagin's solutions work, but aren't that general - you have to construct the `[:, :, i]` by hand, which means you need to know the dimensions, and the code isn't general enough to handle arrays of arbitrary dimensions. You can fill in missing dimensions by using something like `[..., :]`, but again you still have to construct this yourself. Sorry, should have been clearer, the example was a bit too simple!
Iterating over the first dimension is very easy, see below. To iterate over the others, roll that dimension to the front and do the same: ``` >>> data = np.arange(24).reshape(2, 3, 4) >>> for dim_0_slice in data: # the first dimension is easy ... print dim_0_slice ... [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]] >>> for dim_1_slice in np.rollaxis(data, 1): # for the others, roll it to the front ... print dim_1_slice ... [[ 0 1 2 3] [12 13 14 15]] [[ 4 5 6 7] [16 17 18 19]] [[ 8 9 10 11] [20 21 22 23]] >>> for dim_2_slice in np.rollaxis(data, 2): ... print dim_2_slice ... [[ 0 4 8] [12 16 20]] [[ 1 5 9] [13 17 21]] [[ 2 6 10] [14 18 22]] [[ 3 7 11] [15 19 23]] ``` --- **EDIT** Some timings, to compare different methods for largish arrays: ``` In [7]: a = np.arange(200*100*300).reshape(200, 100, 300) In [8]: %timeit for j in xrange(100): a[:, j] 10000 loops, best of 3: 60.2 us per loop In [9]: %timeit for j in xrange(100): a[:, j, :] 10000 loops, best of 3: 82.8 us per loop In [10]: %timeit for j in np.rollaxis(a, 1): j 10000 loops, best of 3: 28.2 us per loop In [11]: %timeit for j in np.swapaxes(a, 0, 1): j 10000 loops, best of 3: 26.7 us per loop ```
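The same idea can be wrapped into a small generic helper that works for any number of dimensions, by building an index tuple of `slice(None)` objects and yielding views along the requested axis (a sketch, not from the original answer - `np.rollaxis` as shown above is the more common idiom):

```python
import numpy as np

def slices(data, axis):
    """Yield successive views of `data` taken along `axis`."""
    index = [slice(None)] * data.ndim   # one full slice per dimension
    for i in range(data.shape[axis]):
        index[axis] = i                 # fix only the requested axis
        yield data[tuple(index)]

data = np.arange(24).reshape(2, 3, 4)
layers = list(slices(data, axis=2))
assert len(layers) == 4
assert layers[0].shape == (2, 3)
assert (layers[0] == data[:, :, 0]).all()
```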
This could probably be solved more elegantly than this, but one way of doing it if you know dim beforehand(e.g. 2) is: ``` for i in range(data.shape[dim]): layer = data[:,:,i] ``` or if dim=0 ``` for i in range(data.shape[dim]): layer = data[i,:,:] ``` etc.
iterate over slices of an ndarray
[ "", "python", "numpy", "slice", "" ]
I have a log file which has text that looks like this. ``` Jul 1 03:27:12 syslog: [m_java][ 1/Jul/2013 03:27:12.818][j:[SessionThread <]^Iat com/avc/abc/magr/service/find.something(abc/1235/locator/abc;Ljava/lang/String;)Labc/abc/abcd/abcd;(bytecode:7) ``` There are two time formats in the file. I need to sort this log file based on the date time format enclosed in []. This is the regex I am trying to use. But it does not return anything. ``` t_pat = re.compile(r".*\[\d+/\D+/.*\]") ``` I want to go over each line in file, be able to apply this pattern and sort the lines based on the date & time. Can someone help me on this? Thanks!
You are not matching the initial space; you also want to group the date for easy extraction, and limit the `\D` and `.*` patterns to non-greedy: ``` t_pat = re.compile(r".*\[\s?(\d+/\D+?/.*?)\]") ``` Demo: ``` >>> re.compile(r".*\[\s?(\d+/\D+?/.*?)\]").search(line).group(1) '1/Jul/2013 03:27:12.818' ``` You can narrow down the pattern some more; you only need to match 3 letters for the month for example: ``` t_pat = re.compile(r".*\[\s?(\d{1,2}/[A-Z][a-z]{2}/\d{4} \d{2}:\d{2}:[\d.]{2,})\]") ```
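Putting this together with the sorting step the question actually asks about: extract the bracketed timestamp, parse it with `datetime.strptime`, and use that as the sort key. A sketch with made-up sample lines; the format string `%d/%b/%Y %H:%M:%S.%f` is an assumption that matches the sample timestamp `1/Jul/2013 03:27:12.818`:

```python
import re
from datetime import datetime

t_pat = re.compile(r".*\[\s?(\d+/\D+?/.*?)\]")

def timestamp(line):
    # pull out the bracketed date/time and parse it
    m = t_pat.search(line)
    return datetime.strptime(m.group(1), "%d/%b/%Y %H:%M:%S.%f")

lines = [
    "x syslog: [m_java][ 2/Jul/2013 01:00:00.000] later entry",
    "y syslog: [m_java][ 1/Jul/2013 03:27:12.818] earlier entry",
]
lines.sort(key=timestamp)   # chronological order
assert lines[0].startswith("y")
```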
You have a space in there that needs to be added to the regular expression ``` text = "Jul 1 03:27:12 syslog: [m_java][ 1/Jul/2013 03:27:12.818][j:[SessionThread <]^Iat com/avc/abc/magr/service/find.something(abc/1235/locator/abc;Ljava/lang/String;)Labc/abc/abcd/abcd;(bytecode:7)" matches = re.findall(r"\[\s*(\d+/\D+/.*?)\]", text) print matches ['1/Jul/2013 03:27:12.818'] ``` Next parse the time using the following function <http://docs.python.org/2/library/time.html#time.strptime> Finally use this as a key into a dict, and the line as the value, and sort these entries based on the key.
python : reading a datetime from a log file using regex
[ "", "python", "regex", "" ]
I was building a bidimensional vector in Python, and since I wanted it to be all zero at first, and didn't want to use numpy, I tried this: ``` columns = 8 rows = 5 m = [[0]* (columns)] * (rows) m[3][2] = 1 print m ``` And I got an unexpected behaviour: ``` >> [[0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]] ``` It looks like building the bidimensional array like this makes each row a reference to a single row, so if writing to any of them, you are writing over all of them. Maybe this sounds evident to some of you, but I got a little shocked. Of course I can fix it using a different approach, but I am curious why this is happening. Can anyone explain? Why is this not happening if you build a simple array with `[0] * size_of_array`?
This is a common Python gotcha. You are not creating `rows` inner lists, you're creating `rows` references to the same list. Your code is equivalent to the following: ``` inner_list = [0] * columns m = [inner_list] * rows ``` I would recommend building the rows without using the `*` operator. (You don't run into the issue with columns, since `0` is an int and ints are immutable objects.) ``` matrix = [] for _ in range(rows): matrix.append([0] * columns) ```
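An equivalent, more compact idiom is a list comprehension, which evaluates `[0] * columns` once per row and therefore produces independent inner lists:

```python
columns = 8
rows = 5

# the comprehension builds a fresh inner list on every iteration
m = [[0] * columns for _ in range(rows)]
m[3][2] = 1

assert m[3][2] == 1
assert m[0][2] == 0                       # other rows are unaffected
assert sum(sum(row) for row in m) == 1    # exactly one cell was changed
```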
`[0] * size_of_array` creates a list which multiple references to `0`. If you put another value into this list, it won't be affected. As you noticed, `[[]] * num` creates a list which contains a reference to the same list over and over again. Of you change this list, the change is visible via all references. ``` >>> a = [0] * 10 >>> [id(i) for i in a] [31351584L, 31351584L, 31351584L, 31351584L, 31351584L, 31351584L, 31351584L, 31351584L, 31351584L, 31351584L] >>> >>> all(i is a[0] for i in a) True ``` vs. ``` >>> a = [[]] * 10 >>> a [[], [], [], [], [], [], [], [], [], []] >>> [id(i) for i in a] [44072200L, 44072200L, 44072200L, 44072200L, 44072200L, 44072200L, 44072200L, 44072200L, 44072200L, 44072200L] >>> all(i is a[0] for i in a) True ``` Same situation, but one thing is different: If you do `a[0].append(10)`, the effect is visible in all lists. But if you do `a.append([])`, you add a clean, new list which isn't related to the others: ``` >>> a = [[]] * 10 >>> a [[], [], [], [], [], [], [], [], [], []] >>> a.append([]) >>> a[0].append(8) >>> a [[8], [8], [8], [8], [8], [8], [8], [8], [8], [8], []] >>> a[-1].append(5) >>> a [[8], [8], [8], [8], [8], [8], [8], [8], [8], [8], [5]] ```
Building a bidimensional array in Python
[ "", "python", "arrays", "multidimensional-array", "" ]
Say I have loaded `Report WW26blueprint_with_chart_330.xlsm` in the `report` object with the following code (the `print.range` is used here to test that the workbook is loaded): ``` import xl report = xl.Workbook("Report WW26blueprint_with_chart_330.xlsm") print report print(report.range("A1:E50").get()) ``` From the [pyvot documentation](http://pythonhosted.org/Pyvot/tutorial.html#interop) I get that: > Pyvot is written on top of Excel’s COM API. You can always get the > underlying COM object and make COM calls yourself My Excel file has the following macros: ``` Macro1 Масго2 Macro7 ``` Can I run any of the above macros directly from the python script using python/pyvot?
This did it for me: ``` report.xlWorkbook.Application.Run('Macro1') ```
Didn't manage to do it with `pyvot` but it can be done using `pywin32` as follows: ``` import win32com.client xl=win32com.client.Dispatch("Excel.Application") xl.Workbooks.Open(Filename="C:\Users\florinescu_eduard\Downloads\\Report WW26blueprint_with_chart_330.xlsm",ReadOnly=1) xl.Application.Run("Macro1") xl.Workbooks(1).Close(SaveChanges=1) xl.Application.Quit() xl=0 ```
pyvot: can I run Excel VBA macros from python script?
[ "", "python", "excel", "python-2.7", "com", "vba", "" ]
I am trying to flip an image horizontally. From this: ![Original Pic](https://i.stack.imgur.com/xB4mo.jpg) To This: ![Flipped Pic](https://i.stack.imgur.com/excyt.jpg) But I keep getting it mirrored half way. Like This: ![Result I get](https://i.stack.imgur.com/yS8rN.jpg) I am trying to reverse the x-axis index and I don't understand why it is being divided. ``` def flip(picture): height = getHeight(picture) width = getWidth(picture) newPicture = makeEmptyPicture(width, height) x2 = width-1 for x in range(0, width): y2 = 0 for y in range(0, height): pxl = getPixel(picture, x, y) newPxl = getPixel(picture, x2, y2) color = getColor(pxl) setColor(newPxl, color) y2 = y2+1 x2 = x2-1 return picture ``` The rest of my code: ``` def d(): f = pickAFile() picture = makePicture(f) newPicture = copy(picture) writePictureTo(newPicture, r"D:\FOLDER\newPic4.jpg") explore(newPicture) ```
In your `flip()` function (like in any function), as mentioned by the other answers, you return `picture` which is the image passed as a parameter of the function but is defined in d()... It is a matter of `scope` of the variables, so I invite you to have a look again at the discussion we had [here](https://stackoverflow.com/a/17256599/1715716). Here, you had two choices (you made a melting between the two): * Modifying directly the `picture` given as parameter * Creating a `newPicture`, modifying it, and finally returning it **Details about the 2d option :** The important thing here is that the `picture` variable belongs to the `d()` function (d() is the scope of it). In the mean time the `newPicture` variable belongs to the `flip()` function (flip() is its scope). So the life time of the newPicture is flip() (i.e. it is destroyed as soon as you terminate the execution of the flip() function, while returning). And d() doesn't know anything about this newPicture, unless you return it to d(). So, in short (assuming we are talking about the second option): **1)** Create a function that takes a `picture` as parameter (flip()) **2)** Inside flip(), create a local variable `newPicture` and modify that one only, such that the original `picture` remains unchanged **3)** Return the newly updated `newPicture` to the parent `scope`. Here d() is calling flip(), so it is the parent scope. We have to create a 3d variable (which belongs to d() scope), to keep a hand on what was returned by flip(): ``` def flip(picture) # Create newPicture # Modify newPicture (using the information from the "picture" parameter) setColor(newPicture, ...) ... return newPicture def d(): file = PickAFile() original_pic = makePicture(file) finalNewPicture = flip(original_pic) # {1} show(finalNewPicture) ``` {1}: Here we assign the value returned by flip (i.e. `newPicture`) to the variable of higher scope `finalNewPicture` (handler)... I hope it helps you to understand the logic behind this. 
It is like ***Russian dolls***: newPicture is used inside flip(), which is used inside d(), ... --- **EDIT :** I also want to give an explanation about the 1st option... **1)** Create a function that takes a `picture` as parameter (flip()) **2)** Inside flip(), modify directly the higher scoped `picture` variable **3)** Do not return anything from flip() This would result in this : ``` def flip(picture) # Simply modify the variable "picture", given as a parameter setColor(picture, ...) ... # Do not return anything def d(): file = PickAFile() original_pic = makePicture(file) flip(original_pic) # {1} show(original_pic) ``` {1}: Here flip() made the changes directly on the input picture, so we can display the original modified picture directly (`original_pic`). No need for an intermediate handler variable. --- ***Code for option 1 :*** (as you already have it working for option 2) ``` def flip(picture): height = getHeight(picture) width = getWidth(picture) x2=width-1 for x in range(0, width/2): # Only process the half way y2=0 for y in range(0, height): # swap pix and pix2 pxl = getPixel(picture, x, y) pxl2 = getPixel(picture, x2, y2) color = getColor(pxl) color2 = getColor(pxl2) setColor(pxl2, color) setColor(pxl, color2) y2=y2+1 x2=x2-1 def d(): f = pickAFile() original_picture = makePicture(f) flip2(original_picture) show(original_picture) d() ``` ***Note :*** flip could have been widely simplified as follows : ``` def flip2(picture): height = getHeight(picture) width = getWidth(picture) for x in range(0, width/2): # Only process the half way for y in range(0, height): # swap pix and pix2 pxl = getPixel(picture, x, y) pxl2 = getPixel(picture, width-1-x, y) color = getColor(pxl) color2 = getColor(pxl2) setColor(pxl2, color) setColor(pxl, color2) ```
```
def flip(picture):
  height = getHeight(picture)
  width = getWidth(picture)
  newPicture = makeEmptyPicture(width, height)
  for x in range(width):
    for y in range(height):
      color = getColor(getPixel(picture, x, y))
      setColor(getPixel(newPicture, width-1-x, y), color)
  return newPicture
```

How about this? I just removed the `x2`/`y2` bookkeeping, which didn't seem necessary to me; otherwise it works the same way. I found no true bug in your code, just the returning of `picture` instead of `newPicture` at the end, though that alone should not have prevented the mirroring either.
Flipping Image Python
[ "python", "image", "jython", "image-rotation", "jes" ]
Why doesn't the first print statement give me '5 = 3'? This is from John Guttag's book. Can't get my head around it. It looks like this by the way: ``` 3 = 4 2 = 4 1 = abc 4 = 4 5 = 3 6 = <function g at 0x026B22F0> 1 = abc ``` Script: ``` def f(x): def g(): x = 'abc' print '1 =',x def h(): z = x print '2 =',z x = x+1 print '3 =',x h() g() print '4 =', x return g x = 3 z = f(x) print '5 =',x print '6 =',z z() ```
Defining a function does not execute it. So after `f` is defined, `x = 3` is executed. Then the function `f(x)` is called and the result is assigned to `z`. During the `f(x)` call, two more functions are defined (but not yet run), and then `x = x + 1` is executed. So the first print statement is `3 = 4`. That should be enough to help you figure out the rest.
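If it helps, the same mechanics can be checked with a smaller, self-contained sketch (the names here are invented for illustration, not taken from the book):

```python
# Minimal sketch: defining a function stores it; nothing inside runs yet.
def outer(x):
    def inner():
        x = 'abc'                      # a new local x, shadowing outer's x
        return '1 = ' + x
    x = x + 1                          # runs immediately when outer() is called
    trace = ['3 = ' + str(x)]
    trace.append(inner())              # inner's body runs only now
    trace.append('4 = ' + str(x))      # outer's x is untouched by inner
    return trace

x = 3
result = outer(x)
# The global x is never modified by outer(); its local x was a separate name.
```

Defining `inner` inside `outer` merely stores a function object; its body (and its own local `x`) only exists while `inner()` is actually running.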
All the changes you make to `x` are limited to the local scope of the function `f`. Unless you declare `x` global (`global x`), `x` will remain `3` in the global scope.
Scope confusion. Can't explain order of print statements in o/p
[ "python" ]
I have the following code structure in my Python script: ``` for fullName in lines[:-1]: do stuff ... ``` This processes every line except for the last one. The last line is a company name and all the rest are people's names. This works except in the case where there's no company name. In this case the loop doesn't get executed. There can only be one name if there's no company name. Is there a way to tell Python to execute this loop at least once?
You could do it like this, dynamically setting the end of your range based on the length of `lines`: ``` i = len(lines) i = -1 if i > 1 else i for fullName in lines[:i]: dostuff() ```
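As a sanity check, the same idea can be wrapped in a small helper (the function name is made up for the demo) so the edge cases are easy to verify:

```python
def names_to_process(lines):
    # Slice off the trailing company name only when there is more than one entry;
    # with a single entry (or none), process everything.
    i = len(lines)
    i = -1 if i > 1 else i
    return lines[:i]

with_company = names_to_process(['Alice', 'Bob', 'Acme Corp'])
single_name = names_to_process(['Alice'])
empty = names_to_process([])
```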
How about using Python's [conditional](http://www.python.org/dev/peps/pep-0308/): ``` >>> name=['Roger'] >>> names=['Pete','Bob','Acme Corp'] >>> >>> tgt=name >>> for n in tgt if len(tgt)==1 else tgt[:-1]: ... print(n) ... Roger >>> tgt=names >>> for n in tgt if len(tgt)==1 else tgt[:-1]: ... print(n) ... Pete Bob ```
Process at least one line in Python
[ "python" ]
``` Claim# Total ValuationDt 1 100 1/1/12 2 550 1/1/12 1 2000 3/1/12 2 100 4/1/12 1 2100 8/1/12 3 200 8/1/12 3 250 11/1/12 ``` Using MS Access, I need a query that returns only claims which have been valuated greater than $500 at some point in that claim's life time. In this example, the query should return ``` Claim# Total ValuationDt 1 100 1/1/12 2 550 1/1/12 1 2000 3/1/12 2 100 4/1/12 1 2100 8/1/12 ``` because claim# 1 was valuated greater than $500 on 3/1/12, claim# 2 was valuated greater than $500 on 1/1/12, and claim# 3 was never valuated greater than $500.
You can use `IN`: ``` SELECT * FROM Table1 WHERE Claim IN (SELECT Claim FROM Table1 WHERE Total > 500) ``` [Sql Fiddle Demo](http://sqlfiddle.com/#!3/70a7e/2/0)
This should work:

```
Select DISTINCT Claim FROM yourtable Where Total > 500
```

EDIT: In case my initial answer does not fulfill your requirements, you can use a sub-query (a query nested inside your query). The reason we have to do it like that is that if you use something like

```
Select * FROM yourtable Where Total > 500
```

then the result set would only contain the rows where the total of the claim was higher than 500; it would not include the other rows where it was less than or equal to 500. Therefore, as others have stated, you use a subquery like:

```
SELECT * FROM Table1 WHERE Claim IN (SELECT Claim FROM Table1 WHERE Total > 500)
```

Note that there is a query after the IN keyword, so we have nested queries (or a subquery, if you prefer).

Why does it work? Well, because:

```
SELECT Claim FROM Table1 WHERE Total > 500
```

will return every claim number for which the total was greater than 500 at some point. Therefore, this query will return 1 and 2. If you substitute that in the original query you get:

```
SELECT * FROM Table1 WHERE Claim IN (1, 2)
```

which will return every column of every row whose claim number is either 1 or 2.
MS Access query table without primary key
[ "sql", "ms-access" ]
I have a feeling that this is very easy but I can't quite figure out how to do it. Say I have a Numpy array ``` [1,2,3,4] ``` How do I convert this to ``` [[1],[2],[3],[4]] ``` In an easy way? Thanks
You can use `np.newaxis`:

```
>>> a = np.array([1,2,3,4])
>>> a
array([1, 2, 3, 4])
>>> a[:,np.newaxis]
array([[1],
       [2],
       [3],
       [4]])
```
You can use [numpy.reshape](http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html):

```
>>> import numpy as np
>>> a = np.array([1,2,3,4])
>>> np.reshape(a, (-1, 1))
array([[1],
       [2],
       [3],
       [4]])
```

If you want a normal Python list, then use a list comprehension:

```
>>> a = np.array([1,2,3,4])
>>> [[x] for x in a]
[[1], [2], [3], [4]]
```
Convert Python Numpy array to array of single arrays
[ "python", "arrays", "numpy" ]
Suppose there are 3 tables: BOOKS, TAGS and ASOC ``` +----------+ +----------+ +----------+ |BOOKS | |TAGS | |ASOC | +----------+ +----------+ +----------+ |book_id | |tag_id | |book_id | |book_name | |tag | |tag_id | |... | +----------+ +----------+ +----------+ ``` Hopefully the use/intent in this example is obvious.. I want to query books that match a certain set of tags. So I try something like: ``` SELECT B.book_name FROM BOOKS B , TAGS T , ASOC A WHERE B.book_id = A.book_id AND T.tag_id = A.tag_id AND (T.tag = 'Classic' OR T.tag = 'Fiction') ``` The undesirable result that I'm getting is that each book is being listed multiple times, once for each tag ASOC entry. I just want the unique list of books that match. How do I do this? Thanks in advance.
The simplest way is to change `SELECT B.book_name` to `SELECT DISTINCT B.book_name`; however, it might be better to write: ``` SELECT book_name FROM books WHERE book_id IN ( SELECT book_id FROM asoc WHERE tag_id IN ( SELECT tag_id FROM tags WHERE tag IN ('Classic', 'Fiction') ) ) ; ``` where the structure of the query makes clearer what you really want. (I suggest trying both on realistic data; it's possible that one will perform much better than the other.)
Use `DISTINCT` to get a unique list: ``` SELECT DISTINCT B.book_name FROM BOOKS B , TAGS T , ASOC A WHERE B.book_id = A.book_id AND T.tag_id = A.tag_id AND (T.tag = 'Classic' OR T.tag = 'Fiction') ``` As a further enhancement, that last line can be rewritten to be slightly more readable: ``` AND T.tag IN ('Classic', 'Fiction') ``` You can also make the JOINs more efficient: ``` SELECT DISTINCT B.book_name FROM BOOKS B INNER JOIN ASOC A on B.book_id = A.book_id INNER JOIN TAGS T on A.tag_id = T.tag_id WHERE T.tag IN ('Classic', 'Fiction') ```
SQL join query returning row for each matching ASOC
[ "sql", "join" ]
I am trying to install Python (epd_free-7.3-2-win-x86) on my PC (Windows 7, 32-bit) but am not able to do so. It shows an error which says a DLL is missing. I searched everywhere but could not find it. I have attached an image of the error message dialog. ![enter image description here](https://i.stack.imgur.com/f1O2F.gif) I also tried different Python installers but they show the same error. I even tried installation after updating Windows, but nothing worked. Please tell me what to do.
I was facing the same problem on Windows 8 (64-bit). Try doing this: Go to **C:/Users//AppData/Local**. Right click on the folder **"Temp"**. Go to **Properties**, then to the **Security** tab. Click on **"Everyone"** and **"Edit"** the permission to **"full control" -> allow**. Click Apply. Try running the installer again. You should be good to go.
Run Command Prompt as Administrator. See image: ![enter image description here](https://i.stack.imgur.com/SpB7e.png) Change directory to the location of the .msi file using the cd command and type the name of your file with its extension, i.e. .msi (see image below). You may give the complete path of the file instead of changing directories. ![enter image description here](https://i.stack.imgur.com/HWqTo.png)
Missing dll Error While installing Python 2.7
[ "python", "python-2.7", "enthought" ]
I'm trying to echo the output of a SELECT request in bash, in a structured form with column names. The issue is that I cannot do it properly with more than 2 fields, or if a value is larger than its column name. Example: my table looks like this: value1 value2. If I do the following in bash: echo "select value1, value2 from table" | mysql -uUSER -pPASS The result looks like this in bash:

```
value1 value2
a d
b e
c f
```

Now if I have 3 fields or a large value, the result looks like this:

```
value1 value2 value3
aaaaaaaaa ddddddddddd ggg
bbbb eeeeeeeee hhhh
ccccccc fffffffff iiii
```

Is there a simple way to get a structured result, i.e. with the column names correctly spaced? I know it is possible to do it with a sort to get the largest value and add the number of spaces needed, but that seems to be too much for a simple problem like this. Do you have an idea?

Thanks!
Use the mysql -e option to execute your query, and -t to print table output to stdout: > mysql -uUSER -t -e "select value1, value2, value3 from table" -pPASS Your output will look something like this: ``` +-----------+-------------+--------+ | value1 | value2 | value3 | +-----------+-------------+--------+ | aaaaaaaaa | ddddddddddd | ggg | | bbbb | eeeeeeeee | hhhh | | ccccccc | fffffffff | iiii | +-----------+-------------+--------+ ``` From the mysql manpage: ``` --execute=statement, -e statement Execute the statement and quit. The default output format is like that produced with --batch. See Section 4.2.3.1, “Using Options on the Command Line”, for some examples. With this option, mysql does not use the history file. ``` and ``` -table, -t Display output in table format. This is the default for interactive use, but can be used to produce table output in batch mode. ```
``` echo "select value1, value2 from table" | mysql -uUSER -pPASS | column -t ```
Simple way to print structured SQL SELECT in bash
[ "mysql", "sql", "bash", "echo", "shapes" ]
# Django

Following the official documentation, I am creating a Django app (the same poll app as in the documentation). While using a class-based view, I got an error. I do not understand much about class-based views yet; for instance, could someone explain what the difference is between a class-based view and a normal view?

Here's my code:

```
class DetailView(generic.DetailView):
    model = Poll
    template_name = 'polls/details.html'
    context_object_name = 'poll'

    def get_queryset(self):
        def detail(request, poll_id):
            try:
                poll = Poll.objects.get(pk=poll_id)
            except Poll.DoesNotExist:
                raise Http404
            return render(request, 'polls/details.html', {'poll': poll})
```

The error:

```
TypeError at /polls/2/results/
as_view() takes exactly 1 argument (3 given)
Request Method: GET
Request URL: <app-path>/polls/2/results/
Django Version: 1.5.1
Exception Type: TypeError
Exception Value: as_view() takes exactly 1 argument (3 given)
```

The url:

```
url(r'^(?P<pk>\d+)/$', views.DetailView.as_view, name='detail')
```
`as_view` should be called, not referenced. According to the [docs](https://docs.djangoproject.com/en/dev/topics/class-based-views/#simple-usage-in-your-urlconf), your url should look like:

```
url(r'^(?P<pk>\d+)/$', views.DetailView.as_view(), name='detail')
```

Note the usage of parentheses.

Also, you should rather call your class `PollDetailView` to avoid confusion for code readers. The `detail()` function you have defined inside `get_queryset()` will never be called at all, so you shouldn't define it. Finally, leave the `get_queryset()` method alone for the moment; try to get the basic view to work first.
When you define the URL for a CBV, make sure you do not pass a reference to the view: you need to change `as_view` to **`as_view()`**.

You can use a `DetailView` like this:

```
class PollDetail(DetailView):
    model = Poll

    def get_context_data(self, *args, **kwargs):
        context = super(PollDetail, self).get_context_data(*args, **kwargs)
        print(context)  # it will show you the context data in your terminal
        return context
```

For accessing the data in the template, use **{{ object }}**, and if you want to access other fields, use **{{ object.fieldname }}**.

In a CBV the default template name is derived from the class name, so you don't need to give it explicitly.

Note: don't give your class the same name as `DetailView`, or you will confuse yourself later.
Django class-based view
[ "python", "django", "django-class-based-views" ]
I have currently trimmed a section of my data set, but I need to filter it further. My goal is to isolate only those IDs for which a single row is returned. For example, with a data set as follows: ``` ID Client Name Client Manager 1 XYZ Corp. Ralph 2 Bob's Marketplace Marianne 2 Bob's Marketplace David 3 Davis Auto Repair Ralph 3 Davis Auto Repair Marianne 4 Everything Inc. David 5 Pet World Marianne ``` I need to return: ``` ID Client Name Client Manager 1 XYZ Corp. Ralph 4 Everything Inc. David 5 Pet World Marianne ```
``` SELECT ID, Client_Name, Client_Manager FROM table_name WHERE ID IN ( SELECT ID FROM table_name GROUP BY ID HAVING COUNT(*) = 1 ) ```
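As a quick sanity check of the pattern (run here against SQLite from Python, with shortened table and column names; the `IN` + `GROUP BY ... HAVING COUNT(*) = 1` construction is plain SQL that Access accepts as well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER, name TEXT, manager TEXT)")
conn.executemany(
    "INSERT INTO clients VALUES (?, ?, ?)",
    [(1, "XYZ Corp.", "Ralph"),
     (2, "Bob's Marketplace", "Marianne"),
     (2, "Bob's Marketplace", "David"),
     (3, "Davis Auto Repair", "Ralph"),
     (3, "Davis Auto Repair", "Marianne"),
     (4, "Everything Inc.", "David"),
     (5, "Pet World", "Marianne")],
)
rows = conn.execute(
    """SELECT id, name, manager FROM clients
       WHERE id IN (SELECT id FROM clients GROUP BY id HAVING COUNT(*) = 1)
       ORDER BY id"""
).fetchall()
# Only the IDs that appear exactly once survive the filter: 1, 4 and 5.
```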
``` SELECT ID, [Client Name], min([Client Manager]) as [Client Manager] FROM dbo.Table1 GROUP BY ID, [Client Name] HAVING COUNT(*) = 1 ```
How do I return IDs for which only a single record is returned in SQL?
[ "sql", "ms-access" ]
Using MS SQL: The scenario is that I already have a contact master table (tbl_Customer) for all contacts in the company. The staff suggested that they do not want to see all the contacts, and that they will only choose which contacts are applicable to them. The contact table is not going to be stored on the client like an Outlook .pst file, but on the server side, specifically in the database. The methods I can use are: 1.) Every time I add a staff member, I create a new independent table for that staff member, and he/she just adds the contacts needed from the master table via a program. 2.) I can change the contact master table to add, say, 50 fields named staff01, staff02 and so on... I will use these fields as markers that a given customer row is a contact of the staff member for that field. 3.) I create a completely new contact list with an additional field named "User". I mark the initial records by putting "ALL" in that field. When a staff member adds a contact, the table adds a record with his/her name in the "User" field. There will be an auto-numbered field as the primary index. Currently I'm tempted to use 3.) Are there any better methods for my problem?
Why don't you have a staff table with a staff id? Then every contact the staff member in question wants can be a row in a staffContacts table: column 1 is staffID and column 2 is contactID. You can then join on the contact table to retrieve the details you want. The contacts table then only needs to be updated in one place... You only need to add one table with two columns. You may wish to impose constraints so that each (staffID, contactID) pair is unique and references a valid staff member and a valid contact. Nick.
If I understand correctly you want to have a list of the contacts that is filtered specifically based on a particular users preferences? Personally I would create another table like this: ``` Create Table UserContactPreferences ( ID int identity(1,1), UserID int, ContactID int, ) ``` And then each individual user can pick what contacts they want visible and you can add an entry here for it. Then when you query the database you can do something like this, passing in the CurrentUser in the query: ``` SELECT c.* FROM tbl_Customer c JOIN UserContactPreferences up on c.contactID = up.contactID WHERE up.UserID = @CurrentUser ``` This will then only return Customers that the user has specifically said they want to see.
Common concepts for adding contacts for staff using a master table
[ "sql" ]
I have actually finished this exercise (almost) but am just stuck on a tiny problem with `__str__` inside an object. If I do this:

```
elif choice == "9":
    try:
        for i in sworm:
            print(i)
    except TypeError:
        None
```

Then it will only print out the details of the first object in my list (only 2 objects in there), e.g. `sworm = [crit, crit1]`. When I tried this:

```
elif choice == "9":
    try:
        print(sworm)
    except TypeError:
        None
```

Then I get:

```
[<__main__.Critter object at 0x02B54AD0>, <__main__.Critter object at 0x02B5B190>]
```

Here is the first half of my object:

```
class Critter(object):
    """A virtual pet"""
    def __init__(self, name, hunger = random.randint(1,50), boredom = random.randint(1,50)):
        self.name = name
        self.hunger = hunger
        self.boredom = boredom

    def __pass_time(self):
        self.hunger += 1
        self.boredom += 1

    def __str__(self):
        print ("Critter object\n")
        print (self.name)
        print (self.hunger)
        print (self.boredom)
```

Thanks in advance.
A Python list always shows the *contents* as representations, calling `repr()` on the objects. You can hook into that by specifying a `__repr__` method as well. Alternatively, don't print the list directly, but only the contents: ``` for elem in sworm: print(elem) ``` or join them as a long string: ``` print(', '.join(map(str, sworm))) ``` *Do* make sure you actually return a value from your `__str__` method though: ``` def __str__(self): return "Critter object\n{}\n{}\n{}".format(self.name, self.hunger, self.boredom) ``` because it is the *return value* that is printed by `print()`.
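A stripped-down sketch of both hooks side by side (not the asker's full `Critter` class):

```python
class Critter(object):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        # Used by print(obj), str(obj), and "%s" formatting.
        return "Critter: " + self.name

    def __repr__(self):
        # Used when the object sits inside a printed list.
        return "<Critter %r>" % self.name

sworm = [Critter("Rex"), Critter("Moe")]
as_list = repr(sworm)                 # list display calls __repr__ on elements
joined = ", ".join(map(str, sworm))   # explicit str() calls __str__ instead
```

The names `Rex`/`Moe` are invented for the demo; the point is that printing the list goes through `__repr__`, while `str()`/`print()` on an element goes through `__str__`.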
If you need the `__str__` method to work, then you should return a string from it - something like this ``` def __str__(self): return 'Critter object: %s %s %s' %(self.name, self.hunger, self.boredom) ``` Please read the documentation [here](http://docs.python.org/2/reference/datamodel.html#object.__str__)
Printing Python __str__ from an object
[ "python", "string" ]
I have a table:

```
Bonus Value
500 1400
1500 177
1800 453
1200 100
800 2500
200 50
780 740
```

I want to print whichever column sum is larger. I tried the following:

```
select case when sum(bonus)>sum(Value) then sum(bonus) end case when sum(Value)>sum(bonus) then sum(Value) end from emp
```

But I did not get the result. **Error:**

```
Msg 156, Level 15, State 1, Line 3 Incorrect syntax near the keyword 'case'.
```
Your syntax is incorrect, `CASE` keyword goes only once: ``` select case when sum(bonus)>sum(Value) then sum(bonus) else sum(Value) end as MaxSum from emp ```
Your `case` statement is wrong, try this one: ``` select case when sum(bonus)>sum(Value) then sum(bonus) else sum(Value) end from emp ```
max sum of two sums in sql
[ "sql", "sql-server-2005" ]
I want to fetch web pages under different domains, which means I have to use different spiders under the command "scrapy crawl myspider". However, I have to use different pipeline logic to put the data into the database, since the content of the web pages differs. But every spider has to go through all of the pipelines defined in settings.py. Is there a more elegant method to use separate pipelines for each spider?
`ITEM_PIPELINES` setting is defined globally for all spiders in the project during the engine start. It cannot be changed per spider on the fly. Here are some options to consider: * Change the code of pipelines. Skip/continue processing items returned by spiders in the `process_item` method of your pipeline, e.g.: ``` def process_item(self, item, spider): if spider.name not in ['spider1', 'spider2']: return item # process item ``` * Change the way you start crawling. Do it [from a script](http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script), based on spider name passed as a parameter, override your `ITEM_PIPELINES` setting before calling `crawler.configure()`. See also: * [Scrapy. How to change spider settings after start crawling?](https://stackoverflow.com/questions/10543997/scrapy-how-to-change-spider-settings-after-start-crawling) * [Can I use spider-specific settings?](https://groups.google.com/forum/#!msg/scrapy-users/Uzj519saPXQ/u_lOaIh6LcsJ) * [Using one Scrapy spider for several websites](https://stackoverflow.com/questions/2396529/using-one-scrapy-spider-for-several-websites) * [related answer](https://stackoverflow.com/a/6502863/771848) Hope that helps.
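The first option needs no Scrapy machinery at all; the skip logic in `process_item` is plain Python, as this sketch shows (the `FakeSpider` stand-in and the `allowed_spiders` attribute are inventions for the demo, not Scrapy API):

```python
class DatabasePipeline(object):
    # Spiders this pipeline applies to; anything else passes through untouched.
    allowed_spiders = {'spider1', 'spider2'}

    def process_item(self, item, spider):
        if spider.name not in self.allowed_spiders:
            return item                      # leave the item for other pipelines
        item['processed_by'] = 'DatabasePipeline'
        return item

class FakeSpider(object):
    """Minimal stand-in for a Scrapy spider: only the .name attribute is used."""
    def __init__(self, name):
        self.name = name

pipe = DatabasePipeline()
touched = pipe.process_item({}, FakeSpider('spider1'))   # gets processed
untouched = pipe.process_item({}, FakeSpider('other'))   # passes through as-is
```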
A slightly better version of the above is as follows. It is better because this way allows you to selectively turn on pipelines for different spiders more easily than the coding of 'not in ['spider1','spider2']' *in the pipeline* above. In your spider class, add: ``` #start_urls=... pipelines = ['pipeline1', 'pipeline2'] #allows you to selectively turn on pipelines within spiders #... ``` Then in each pipeline, you can use the `getattr` method as magic. Add: ``` class pipeline1(): def process_item(self, item, spider): if 'pipeline1' not in getattr(spider, 'pipelines'): return item #...keep going as normal ```
Is there any method to use a separate scrapy pipeline for each spider?
[ "python", "web-scraping", "scrapy" ]
here is my SQL statement and I am getting `Error 170 Incorrect syntax near ',' at line 4` ``` INSERT INTO SEO_Permalink_Test ( IDObjekt ,IDType ,IDLanguage ,StateSEOName ,StateSEOPermalink ,DatumErstellungSEOName ,DatumLetzteAenderungSEOName ,SEOName) VALUES ( 19988 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_ST_004' ), ( 19989 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_FS_003' ) , ( 19997 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CAU0171_WO_015' ) , ( 19998 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_ST_003' ) , ( 19999 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_FS_001' ) , ( 20001 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_ILI0758_AU_007' ) , ( 20002 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_ILI0758_PO_011' ) , ( 20003 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_ILI0758_RS_008' ) , ( 20004 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CIT0550_WO_002' ) , ( 20005 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CIT0550_WO_003' ) , ( 20006 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CIT0550_TR_001' ) , ( 20007 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CIT0550_RS_001' ) , ( 20008 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_CSL0152_AU_001' ) ```
I am guessing that it is SQL Server 2005 or below.

From: <http://msdn.microsoft.com/en-us/library/ms174335%28v=sql.100%29.aspx>

SQL Server 2008 introduces the Transact-SQL row constructor (also called a table value constructor) to specify **multiple rows in a single INSERT statement**. The row constructor consists of a single VALUES clause with multiple value lists enclosed in parentheses and separated by a comma. For more information, see Table Value Constructor (Transact-SQL).
You cannot perform a multi-row insert into SQL Server prior to 2008. You have two ways to do it:

```
INSERT INTO SEO_Permalink_Test ( IDObjekt ,IDType ,IDLanguage ,StateSEOName ,StateSEOPermalink ,DatumErstellungSEOName ,DatumLetzteAenderungSEOName ,SEOName)
SELECT 19988 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_ST_004'
UNION ALL
SELECT 19989 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_FS_003'
...
```

Or

```
INSERT INTO SEO_Permalink_Test ( IDObjekt ,IDType ,IDLanguage ,StateSEOName ,StateSEOPermalink ,DatumErstellungSEOName ,DatumLetzteAenderungSEOName ,SEOName)
VALUES ( 19988 , 72 , 1 , 0 , 0 , GETDATE() , GETDATE() , 'H_KDA1559_ST_004' )

INSERT INTO SEO_Permalink_Test ( IDObjekt ,IDType ,IDLanguage ,StateSEOName ,StateSEOPermalink ,DatumErstellungSEOName ,DatumLetzteAenderungSEOName ,SEOName)
VALUES ( 19989 ,
```

I would go with the 1st way as it's less verbose.
SQL syntax error 170 in insert statement
[ "sql", "t-sql" ]
I have a `Char(15)` field containing the data below:

```
94342KMR
947JCP
7048MYC
```

I need to break this down: I need to get the RIGHT 3 characters, and separately whatever is to the LEFT of them. My issue is that the code on the LEFT is not always the same length, as you can see. How can I accomplish this in SQL? Thank you
```
SELECT RIGHT(RTRIM(column), 3),
       LEFT(column, LEN(column) - 3)
FROM table
```

Use `RIGHT` with `RTRIM` (to avoid complications with the fixed-length column), and `LEFT` coupled with `LEN` (to grab only what you need, excluding the last 3 characters; note that `LEN` ignores trailing spaces, which is what you want here). If there's ever a situation where the length is <= 3, then you're probably going to need a `CASE` statement so the `LEFT` call isn't handed a negative length.
You can use `RTRIM` or cast your value to `VARCHAR`: ``` SELECT RIGHT(RTRIM(Field),3), LEFT(Field,LEN(Field)-3) ``` Or ``` SELECT RIGHT(CAST(Field AS VARCHAR(15)),3), LEFT(Field,LEN(Field)-3) ```
How can I use LEFT & RIGHT Functions in SQL to get last 3 characters?
[ "sql", "sql-server", "t-sql" ]
I have a table `test` with columns id, month, year. I need to get the id for the maximum year and, within that year, the maximum month, in one query. For example:

```
id month year
1 10 2012
2 9 2013
```

My result should be id 2: first check the year, and for the maximum year take the maximum month; based on these two I need to get the id. I tried a query like this in MySQL:

```
select id from book where MAX(month) and MAX(year);
```

It produces an error.
Just sort using month and year: [Demo](http://sqlfiddle.com/#!2/6cccf/7) ``` SELECT id FROM books ORDER BY year DESC, month DESC LIMIT 1 ```
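Checked against the sample data using SQLite from Python (the `ORDER BY ... LIMIT` syntax used here is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER, month INTEGER, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?, ?)",
                 [(1, 10, 2012), (2, 9, 2013)])
row = conn.execute(
    "SELECT id FROM books ORDER BY year DESC, month DESC LIMIT 1"
).fetchone()
# Year 2013 beats 2012 regardless of month, so id 2 wins.
```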
For this type of query, `order by` and `limit` are the best approach. You seem to want the latest date, so start with the year (in descending order) and then month (in descending order): ``` select * from book order by year desc, month desc limit 1 ``` In particular, you *don't* want the maximum month and maximum year. Based on your desired results, you want the most recent/latest month.
comparing with two maximum of columns get one row
[ "mysql", "sql" ]
I am currently reading through a book about MVVM and I need to use the Northwind database for the next section. Unfortunately, when I try to add Northwind to my project, I get the following error: > "An error occurred while connecting to the database. The database might be unavailable. An exception of type 'Microsoft.Data.ConnectionUI.SqlDatabaseFileUpgradeRequiredException' occurred. The error message is: 'This database file is not compatible with the current instance of SQL Server.'" I haven't installed an SQL Server, just Visual Studio 2012. The book instructs me to add the .MDF file to the project, set it as an Entity Data Model, and then to generate from database. It's on the next screen where I get the error. I use NORTHWND.MDF as the data connection and the entity connection string is as follows: > metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://\*/Model1.msl;provider=System.Data.SqlClient;provider connection string="data source=(LocalDB)\v11.0;attachdbfilename=|DataDirectory|\NORTHWND.MDF;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework" In the book, this reads: > metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://\*/Model1.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\NORTHWND.MDF;Integrated Security=True;UserInstance=True" I have tried using both data sources and neither work. Also of note, the sample code that goes along with the book doesn't work either. It compiles and launches, but in the server explorer, NORTHWND.MDF has an 'x' in the icon and upon launch none of the data is there. When I attempt to upgrade the database, I get an error saying: > "Database '...\NORTHWND.MDF' cannot be upgraded because its non-release version (539) is not supported by this version of SQL Server. You cannot open a database that is incompatible with this version of sqlservr.exe. You must re-create the database..." 
If anybody could give me any tips on how to get this working, I would certainly appreciate it.
One way to potentially fix it is to install [Sql Server 2012 Express](http://www.microsoft.com/en-us/download/details.aspx?id=29062) and use a database connection to SQL Server instead of the local database file. That might work. Just a suggestion. As an extra note, you might want to download SQL Server 2012 *Management Studio* (I believe it comes with the above download as a component that you have to select). With Management Studio installed, you can test the database connectivity separately, create tables, run queries etc. This will make troubleshooting a lot easier.
My simple way: go to Visual Studio 2012. In the Server Explorer window, open Data Connections. Right click on ASPNETDB.MDF and click on Modify Connection... A new window opens for you; just click OK. If other windows open for you, click Yes. Finish (be happy) :D
"This database file is not compatible with the current instance of SQL Server"
[ "sql", "visual-studio", "visual-studio-2012" ]
I'm trying to craft a regular expression that will match something like this: `[[uid::page name|page alias]]` for example: `[[nw::Home|Home page]]` The uid and page alias are both optional. I want to allow the delimiters `::` or `|` to appear only once, and only in the order shown. However, the character `:` should be allowed anywhere after the uid. Herein lies the problem. The following regex works pretty well, except that it matches strings where `::` appears twice, or in the wrong place: ``` regex = r'\[\[([\w]+::)?([^|\t\n\r\f\v]+)(\|[^|\t\n\r\f\v]+)?\]\]' re.match(regex, '[[Home]]') # matches, good re.match(regex, '[[Home|Home page]]') # matches, good re.match(regex, '[[nw::Home]]') # matches, good re.match(regex, '[[nw::Home|Home page]]') # matches, good re.match(regex, '[[nw|Home|Home page]]') # doesn't match, good re.match(regex, '[[nw|Home::Home page]]') # matches, bad re.match(regex, '[[nw::Home::Home page]]') # matches, bad ``` I have read all about negative lookahead and lookbehind expressions but I can't figure out how to apply them in this case. Any suggestions would be appreciated. **Edit:** I would also like to know how to prevent the delimiters from being included in the match results as shown here: `('nw::', 'Home', '|Home page')`
If I understand your needs correctly, you could use this:

```
\[\[(?:(?P<uid>\w+)::)?(?!.*::)(?P<page>[^|\t\n\r\f\v]+)(?:\|(?P<alias>[^|\t\n\r\f\v]+))?\]\]
```

See [here](http://www.regex101.com/r/mT4mY4) for a demo. The key addition is the negative lookahead `(?!.*::)` placed right after the `uid` capture: once the optional `uid::` prefix has been consumed, no further `::` may appear anywhere in the rest of the string. I have given names to the captured groups (Python's `re` spells them `(?P<name>...)`), and the delimiters themselves sit in non-capturing `(?:...)` groups, so they will not show up in the match results. If you don't want the names, here is the version without named capture groups:

```
\[\[(?:(\w+)::)?(?!.*::)([^|\t\n\r\f\v]+)(?:\|([^|\t\n\r\f\v]+))?\]\]
```
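One caveat when using Python's `re` module: named groups there are spelled `(?P<name>...)`, not `(?<name>...)` as in some other regex flavors. Here is the pattern in that syntax, checked against two of the question's test strings:

```python
import re

pattern = re.compile(
    r'\[\[(?:(?P<uid>\w+)::)?(?!.*::)'     # optional uid::, then forbid any later ::
    r'(?P<page>[^|\t\n\r\f\v]+)'           # page name
    r'(?:\|(?P<alias>[^|\t\n\r\f\v]+))?\]\]'  # optional |alias
)

ok = pattern.match('[[nw::Home|Home page]]')     # should match
bad = pattern.match('[[nw::Home::Home page]]')   # doubled :: should be rejected
```

Because the delimiters live in non-capturing groups, `ok.group('uid')` comes back as plain `nw` with no trailing `::`.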
So, what do you think of this one:

```
import re

regex = r'''
    \[\[                # opening [[
    ([\w ]+)            # first word (with possible spaces)
    (?:
        ::              # the two colons
        (               # second word (with possible spaces and single colons)
            [\w ]+          # word characters and spaces
            (?:
                :           # a colon
                [\w ]+      # word characters and spaces
            )*              # not required, but can repeat any number of times
        )
    )?                  # not required
    (?:
        \|              # a pipe
        ([\w ]+)        # third word (with possible spaces)
    )?
    \]\]                # closing ]]
'''

test_strings = (
    '[[Home]]',
    '[[Home|Home page]]',
    '[[nw::Home]]',
    '[[nw::Home|Home page]]',
    '[[nw|Home|Home page]]',
    '[[nw|Home::Home page]]',
    '[[nw::Home::Home page]]',
    '[[nw::Home:Home page]]',
    '[[nw::Home:Home page|Home page]]'
)

for test_string in test_strings:
    print re.findall(regex, test_string, re.X)
```

Outputs:

```
[('Home', '', '')]
[('Home', '', 'Home page')]
[('nw', 'Home', '')]
[('nw', 'Home', 'Home page')]
[]
[]
[]
[('nw', 'Home:Home page', '')]
[('nw', 'Home:Home page', 'Home page')]
```

It doesn't use lookaheads/lookbehinds. It does allow single colons in the string after the first `::` (as demonstrated by the last two test strings). The short version of the regex would be:

```
\[\[([\w ]+)(?:::([\w ]+(?::[\w ]+)*))?(?:\|([\w ]+))?\]\]
```
Python regex to match 2 distinct delimiters
[ "", "python", "regex", "regex-negation", "" ]
My question is probably quite simple, however I have been puzzling over numerous methods and cannot seem to find an efficient answer without using many for loops. I have a dictionary:

```
my_dict = {'full_name1' : 1, 'full_name2' : 2, 'full_name3' : 3}
```

I also have this dictionary:

```
another_dict = {'name1' : 'x', 'name2' : 'y', 'name3' : 'z'}
```

What I would like is to produce a third dictionary that looks like this:

```
third_dict = {1 : 'x', 2 : 'y', 3 : 'z'}
```

The values of `my_dict` are keys in `third_dict` for the corresponding values of `another_dict`. This would be easy for me, except that the key names for the first two dictionaries are not identical. I am assuming that the key names in `another_dict` will always be a part of those in `my_dict`, however not all of the keys in `my_dict` will have a matching key in `another_dict`. My current, erroneous, inefficient method:

```
third_dict={}
for key in my_dict:
    for sub_key in another_dict:
        if sub_key in key:
            for key in my_dict:
                third_dict[my_dict[key]] = another_dict[sub_key]
```

EDIT: As suggested, it would be interesting to see how exceptions are handled. For instance, what if another\_dict has an entry that doesn't match an entry in my\_dict or vice versa? Or what if another\_dict has surplus entries?
Use [dictionary comprehension](http://docs.python.org/2/tutorial/datastructures.html#dictionaries): ``` >>> my_dict = {'full_name1' : 1, 'full_name2' : 2, 'full_name3' : 3} >>> another_dict = {'name1' : 'x', 'name2' : 'y', 'name3' : 'z'} >>> {key:value for S, key in my_dict.iteritems() for s, value in another_dict.iteritems() if s in S} {1: 'x', 2: 'y', 3: 'z'} ```
This worked for me: ``` full_name1 = 'Will Smith' full_name2 = 'Matt Damon' full_name3 = 'Mark yMark' name1 = 'Will' name2 = 'Matt' name3 = 'Mark' my_dict = {full_name1 : 1, full_name2 : 2, full_name3 : 3} another_dict = {name1 : 'x', name2 : 'y', name3 : 'z'} result = {} for sub, val in another_dict.items(): # start with the substrings for string, key in my_dict.items(): if sub in string: result[key]=val print(result) ``` I used `dict.items()` to make the code a bit more readable. Combined with some more clear variable names, I think that makes the logic a bit easier to follow. docs: <http://docs.python.org/2/library/stdtypes.html#dict.items> Could be simplified I'm sure. Note that I assumed that your words like `name1` were actually strings. Edit: Fixed changed strings to variable names.
Check if dictionary key contains any of another dictionary's keys and print matching pairs
[ "", "python", "dictionary", "key", "contains", "key-value", "" ]
I have a PostgreSQL table with 2 indices. One of the indices covers the `website_id` and `tweet_id` columns and is a unique B-tree index. The 2nd index only covers the `website_id` column, and is a non-unique index. Is the 2nd index redundant if the first index exists? In other words, will there be no advantages to having the 2nd index?
Postgres multicolumn indexes can be used to search on the first columns only, so in practice the second index is redundant.

> A multicolumn B-tree index can be used with query conditions that involve any subset of the index's columns, but the index is most efficient when there are constraints on the leading (leftmost) columns. The exact rule is that equality constraints on leading columns, plus any inequality constraints on the first column that does not have an equality constraint, will be used to limit the portion of the index that is scanned.

[Postgres 9.2 documentation](http://www.postgresql.org/docs/9.2/static/indexes-multicolumn.html)

There is a remote case where the other index might be useful (see the link below for more detail): if you do most of your queries on the first column and have a very small cache available for the indexes. In this case the combined index might not fit the cache, but the smaller single-column one would.

<https://dba.stackexchange.com/questions/27481/is-a-composite-index-also-good-for-queries-on-the-first-field/27493#27493>
It depends. Assuming we are talking about default B-Tree indexes only. If other index types like [GIN](http://www.postgresql.org/docs/current/interactive/gin.html) or [GiST](http://www.postgresql.org/docs/current/interactive/gist.html) are involved, things are not as simple. In principal an index on `(a,b)` is good for searches on just `a` and another index on just `(a)` is not needed. ([But an additional index on just `(b)` generally makes sense!](https://dba.stackexchange.com/questions/6115/working-of-indexes-in-postgresql/7484#7484)) It *may* still be a good idea if the column `b` is big, so that an index on just `(a)` is substantially smaller. You would have to consider the size of the table, available RAM, typical queries, the involved data types, the size of the index, overhead per tuple and size of data, [data alignment and padding](https://stackoverflow.com/questions/2966524/calculating-and-saving-space-in-postgresql/7431468#7431468) ... or just run tests with your actual data and queries (but careful what you are testing really). For example if `a` and `b` are no bigger than 4 bytes (`integer`, `smallint`, `date`, ...) the index on `(a,b)` is exactly as big as the one on just `(a)` and there is no point whatsoever to keep the second. [A more detailed answer on dba.SE for this case *exactly*.](https://dba.stackexchange.com/questions/27481/is-a-composite-index-also-good-for-queries-on-the-first-field/27493#27493) The [manual for the *current* version of Postgres](http://www.postgresql.org/docs/current/interactive/indexes.html) is always a good source for more detailed information.
2 PostgreSQL indices on the same column of the same table - redundant?
[ "", "sql", "database", "postgresql", "indexing", "" ]
So there are two ways to take a list and add the members of a second list to the first. You can use list concatenation or you can iterate over it. You can:

```
for obj in list2:
    list1.append(obj)
```

or you can:

```
list1 = list1 + list2
```

or

```
list1 += list2
```

My question is: which is faster, and why? I tested this using two extremely large lists (upwards of 10000 objects) and it seemed the iterating method was a lot faster than the list concatenation (as in l1 = l1 + l2). Why is this? Can someone explain?
`append` adds each item one at a time, which is the cause of its slowness, as well as the repeated function calls to `append`. **However** in this case the `+=` operator is **not** syntactic sugar for the `+`. The `+=` operator does not actually create a new list then assign it back, it modifies the left hand operand in place. It's pretty apparent when using `timeit` to use both 10,000 times. ``` >>> timeit.timeit(stmt="l = l + j", setup="l=[1,2,3,4]; j = [5,6,7,8]", number=10000) 0.5794978141784668 >>> timeit.timeit(stmt="l += j", setup="l=[1,2,3,4]; j = [5,6,7,8]", number=10000) 0.0013298988342285156 ``` `+=` is much faster (about 500x) You also have the `extend` method for lists which can append any iterable (not just another list) with something like `l.extend(l2)` ``` >>> timeit.timeit(stmt="l.extend(j)", setup="l=[1,2,3,4]; j = [5,6,7,8]", number=10000) 0.0016009807586669922 >>> timeit.timeit(stmt="for e in j: l.append(e)", setup="l=[1,2,3,4]; j = [5,6,7,8]", number=10000) 0.00805807113647461 ``` Logically equivalent to appending, but much much faster as you can see. So to explain this: iterating is faster than `+` because `+` has to construct an entire new list `extend` is faster than iteration because it's a builtin list method and has been optimized. Logically equivalent to appending repeatedly, but implemented differently. `+=` is faster than `extend` because it can modify the list in place, knowing how much larger the list has to be and without repeated function calls. It assumes you're appending your list with another list/tuple
I ran the following code:

```
import time  # needed for time.monotonic()

l1 = list(range(0, 100000))
l2 = list(range(0, 100000))

def t1():
    starttime = time.monotonic()
    for item in l1:
        l2.append(item)
    print(time.monotonic() - starttime)

l1 = list(range(0, 100000))
l2 = list(range(0, 100000))

def t2():
    starttime = time.monotonic()
    global l1
    l1 += l2
    print(time.monotonic() - starttime)
```

and got this output, which says that adding lists (`+=`) is faster:

```
0.016047026962041855
0.0019438499584794044
```
Iterating vs List Concatenation
[ "", "python", "profiling", "performance", "" ]
I'm new to python. I have a dictionary with subsets as values, like this:

```
dict = {key1: [value1, value2, value 3], key2: [value4], key3: [value5, value6]}
```

and so on. I would like to store some of the sub-sets of values in a new set. For instance, I would like to merge values from key1 and key3 so as to have

```
set1 = [value1, value2, value 3, value5, value6]
```

I've tried to do something like this:

```
d1= set()
d1.add(dict['key1'])
d1.add(dict['key3'])
```

but it does not seem to work (I get the error TypeError: unhashable type: 'set'). Any help?
``` d1= set() map(d1.add, dict['key1']) map(d1.add, dict['key3']) ```
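One caveat: in Python 3, `map()` is lazy, so the calls above only take effect once the map object is consumed. A version that behaves the same in Python 2 and 3 is `set.update`, which accepts any iterable. A small sketch (the dictionary literal below just mirrors the one from the question):

```python
# Merge the value lists of selected keys into one set with set.update().
# The dictionary mirrors the example from the question.
d = {'key1': ['value1', 'value2', 'value3'],
     'key2': ['value4'],
     'key3': ['value5', 'value6']}

merged = set()
merged.update(d['key1'])  # update() takes any iterable, not just sets
merged.update(d['key3'])

print(sorted(merged))
# ['value1', 'value2', 'value3', 'value5', 'value6']
```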
You must coerce each list to a set before trying to join their values: ``` >>> set(dict["key1"]) | set(dict["key3"]) {'value 3', 'value1', 'value2', 'value5', 'value6'} ```
Add dictionary values to a set in python
[ "", "python", "dictionary", "set", "" ]
I need a way to, given a string of text in Python, separate its contents into a list, splitting by 3 parameters - outermost brackets versus outermost parentheses versus normal text, preserving the original syntax. For example, given a string

```
(([a] b) c ) [d] (e) f
```

The expected output would be this list:

```
['(([a] b) c )', '[d]', '(e)', ' f']
```

I tried several things with regex, such as

```
\[.+?\]|\(.+?\)|[\w+ ?]+
```

which gave me

```
>>> re.findall(r'\[.+?\]|\(.+?\)|[\w+ ?]+', '(([a] b) c ) [d] (e) f')
['(([a] b)', ' c ', ' ', '[d]', ' ', '(e)', ' f']
```

(item c in the wrong list)

I also tried the greedy version of it,

```
\[.+\]|\(.+\)|[\w+ ?]+
```

but it falls short when the string has separate operators of the same kind:

```
>>> re.findall(r'\[.+\]|\(.+\)|[\w+ ?]+', '(([a] b) c ) [d] (e) f')
['(([a] b) c ) [d] (e)', ' f']
```

Then I moved on from regex to use stacks instead:

```
>>> def parenthetic_contents(string):
        stack = []
        for i, c in enumerate(string):
            if c == '(' or c == '[':
                stack.append(i)
            elif (c == ')' or c == ']'):
                start = stack.pop()
                yield (len(stack), string[start + 0:i+1])
```

Which worked great, for brackets and parentheses, except I have no way of getting the flat text (or I do, but I don't know about it?):

```
>>> list(parenthetic_contents('(([a] b) c ) [d] (e) f'))
[(2, '[a]'), (1, '([a] b)'), (0, '(([a] b) c )'), (0, '[d]'), (0, '(e)')]
```

I am not familiar with pyparsing. It looked at first as if nestedExpr() would do the trick, but it takes only one delimiter (() or [], but not both), which wouldn't work for me. I am all out of ideas now. Any suggestions would be welcome.
Only very lightly tested (and the output includes white space). As with @Marius' answer (and the general rule about paren matching requiring a PDA), I use a stack. However, I have a little extra paranoia built in to mine.

```
def paren_matcher(string, opens, closes):
    """Yield (in order) the parts of a string that are contained in
    matching parentheses.  That is, upon encountering an "open
    parenthesis" character (one in <opens>), we require a corresponding
    "close parenthesis" character (the corresponding one from <closes>)
    to close it.

    If there are embedded <open>s they increment the count and also
    require corresponding <close>s.  If an <open> is closed by the
    wrong <close>, we raise a ValueError.
    """
    stack = []
    if len(opens) != len(closes):
        raise TypeError("opens and closes must have the same length")
    # could make sure that no closes[i] is present in opens, but
    # won't bother here...

    result = []
    for char in string:
        # If it's an open parenthesis, push corresponding closer onto stack.
        pos = opens.find(char)
        if pos >= 0:
            if result and not stack:  # yield accumulated pre-paren stuff
                yield ''.join(result)
                result = []
            result.append(char)
            stack.append(closes[pos])
            continue
        result.append(char)
        # If it's a close parenthesis, match it up.
        pos = closes.find(char)
        if pos >= 0:
            if not stack or stack[-1] != char:
                raise ValueError("unbalanced parentheses: %s" % ''.join(result))
            stack.pop()
            if not stack:  # final paren closed
                yield ''.join(result)
                result = []
    if stack:
        raise ValueError("unclosed parentheses: %s" % ''.join(result))
    if result:
        yield ''.join(result)

print list(paren_matcher('(([a] b) c ) [d] (e) f', '([', ')]'))
print list(paren_matcher('foo (bar (baz))', '(', ')'))
```
I managed to do this using a simple parser that keeps track of how deep you are in the stack using the `level` variable. ``` import string def get_string_items(s): in_object = False level = 0 current_item = '' for char in s: if char in string.ascii_letters: current_item += char continue if not in_object: if char == ' ': continue if char in ('(', '['): in_object = True level += 1 elif char in (')', ']'): level -= 1 current_item += char if level == 0: yield current_item current_item = '' in_object = False yield current_item ``` Output: ``` list(get_string_items(s)) Out[4]: ['(([a] b) c )', '[d]', '(e)', 'f'] list(get_string_items('(hi | hello) world')) Out[12]: ['(hi | hello)', 'world'] ```
separate string into contents in parentheses vs brackets vs flat text
[ "", "python", "regex", "stack", "pyparsing", "" ]
In [this question](https://stackoverflow.com/q/8905501/832621) it is explained how to access the `lower` and `upper` triangular parts of a given matrix, say:

```
m = np.matrix([[11, 12, 13],
               [21, 22, 23],
               [31, 32, 33]])
```

Here I need to transform the matrix into a 1D array, which can be done by doing:

```
indices = np.triu_indices_from(m)
a = np.asarray( m[indices] )[-1]
#array([11, 12, 13, 22, 23, 33])
```

After doing a lot of calculations with `a`, changing its values, it will be used to fill a symmetric 2D array:

```
new = np.zeros(m.shape)
for i,j in enumerate(zip(*indices)):
    new[j]=a[i]
    new[j[1],j[0]]=a[i]
```

Returning:

```
array([[ 11.,  12.,  13.],
       [ 12.,  22.,  23.],
       [ 13.,  23.,  33.]])
```

Is there a better way to accomplish this? More specifically, avoiding the Python loop to rebuild the 2D array?
**The fastest and smartest way to put back a vector into a 2D symmetric array is to do this:** --- **Case 1: No offset (k=0) i.e. upper triangle part includes the diagonal** ``` import numpy as np X = np.array([[1,2,3],[4,5,6],[7,8,9]]) #array([[1, 2, 3], # [4, 5, 6], # [7, 8, 9]]) #get the upper triangular part of this matrix v = X[np.triu_indices(X.shape[0], k = 0)] print(v) # [1 2 3 5 6 9] # put it back into a 2D symmetric array size_X = 3 X = np.zeros((size_X,size_X)) X[np.triu_indices(X.shape[0], k = 0)] = v X = X + X.T - np.diag(np.diag(X)) #array([[1., 2., 3.], # [2., 5., 6.], # [3., 6., 9.]]) ``` **The above will work fine even if instead of `numpy.array` you use `numpy.matrix`.** --- **Case 2: With offset (k=1) i.e. upper triangle part does NOT include the diagonal** ``` import numpy as np X = np.array([[1,2,3],[4,5,6],[7,8,9]]) #array([[1, 2, 3], # [4, 5, 6], # [7, 8, 9]]) #get the upper triangular part of this matrix v = X[np.triu_indices(X.shape[0], k = 1)] # offset print(v) # [2 3 6] # put it back into a 2D symmetric array size_X = 3 X = np.zeros((size_X,size_X)) X[np.triu_indices(X.shape[0], k = 1)] = v X = X + X.T #array([[0., 2., 3.], # [2., 0., 6.], # [3., 6., 0.]]) ```
Do you just want to form a symmetric array? You can skip the diagonal indices completely. ``` m=np.array(m) inds = np.triu_indices_from(m,k=1) m[(inds[1], inds[0])] = m[inds] m array([[11, 12, 13], [12, 22, 23], [13, 23, 33]]) ``` Creating a symmetric array from a: ``` new = np.zeros((3,3)) vals = np.array([11, 12, 13, 22, 23, 33]) inds = np.triu_indices_from(new) new[inds] = vals new[(inds[1], inds[0])] = vals new array([[ 11., 12., 13.], [ 12., 22., 23.], [ 13., 23., 33.]]) ```
transform the upper/lower triangular part of a symmetric matrix (2D array) into a 1D array and return it to the 2D format
[ "", "python", "arrays", "numpy", "matrix", "" ]
In bash, we can:

```
python - << EOF
import os
print 'hello'
EOF
```

to embed a python code snippet in a bash script. But in Windows batch, this doesn't work - although I can still use `python -c`, that requires me to collapse my code into one line, which is something I try to avoid. Is there a way to achieve this in a batch script? Thanks.
You could use a hybrid technique; this solution also works with a Python import.

```
1>2# : ^
'''
@echo off
echo normal
echo batch code
echo Switch to python
python "%~f0"
exit /b

rem ^
'''
print "This is Python code"
```

The batch code is in a multiline string `'''`, so this is invisible to Python. The batch parser doesn't see the python code, as it exits before. **The first line is the key.** It is valid for batch as well as for python! In python it's only a senseless comparison `1>2` without output, and the rest of the line is a comment by the `#`. For batch, `1>2#` is a redirection of stream `1` to the file `2#`. The *command* is a colon `:`; this indicates a label, and labeled lines are never printed. Then the last caret simply appends the next line to the label line, so batch doesn't see the `'''` line.
Even more efficient, plus it passes all command-line arguments to and returns the exit code from the script: ``` @SETLOCAL ENABLEDELAYEDEXPANSION & python -x "%~f0" %* & EXIT /B !ERRORLEVEL! # Your python code goes here... ``` Here's a break-down of what's happening: * `@` prevents the script line from being printed * `SETLOCAL ENABLEDELAYEDEXPANSION` allows !ERRORLEVEL! to be evaluated *after* the python script runs * `&` allows another command to be run on the same line (similar to UNIX's `;`) * `python` runs the python interpreter (Must be in %PATH%) * `-x` tells python to ignore the first line (Run python -h for details) * `"%~f0"` expands to the fully-qualified path of the currently executing batch script (Argument %0). It's quoted in case the path contains spaces * `%*` expands all arguments passed to the script, effectively passing them on to the python script * `EXIT /B` tells Windows Batch to exit from the current batch file only (Using just `EXIT` would cause the calling interpreter to exit) * `!ERRORLEVEL!` expands to the return code from the previous command after it is run. Used as an argument to `EXIT /B`, it causes the batch script to exit with the return code received from the python interpreter NOTE: You may have to change "python" to something else if your python binary is not in the PATH or is in a non-standard location. For example: ``` @"C:\Path\To\Python.exe" -x ... ```
How to embed python code in batch script
[ "", "python", "windows", "bash", "batch-file", "" ]
**Updated, look bottom!** I am stuck! I get an IndexError: list index out of range error.

```
def makeInverseIndex(strlist):
    numStrList = list(enumerate(strlist))
    n = 0
    m = 0
    dictionary = {}
    while (n < len(strList)-1):
        while (m < len(strlist)-1):
            if numStrList[n][1].split()[m] not in dictionary:
                dictionary[numStrList[n][1].split()[m]] = {numStrList[n][0]}
                m = m+1
            elif {numStrList[n][0]} not in dictionary[numStrList[n][1].split()[m]]:
                dictionary[numStrList[n][1].split()[m]]|{numStrList[n][0]}
                m = m+1
        n = n+1
    return dictionary
```

it gives me this error

```
>>> makeInverseIndex(s)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./inverse_index_lab.py", line 23, in makeInverseIndex
    if numStrList[n][1].split()[m] not in dictionary:
IndexError: list index out of range
```

I don't get it... what causes this? It happens even when I change the conditions of the while loop. I don't get what the problem is. I am pretty new at this, so explain it like you would if a piece of broccoli asked you this question.

**Edit:** Thanks guys, I forgot to mention examples of input, I want to input something like this:

```
L=['A B C', 'B C E', 'A E', 'C D A']
```

and get this as output:

```
D={'A':{0,2,3}, 'B':{0,1}, 'C':{0,3}, 'D':{3}, 'E':{1,2}}
```

so to create a dictionary that shows where in the list you might find an 'A', for example. It should work with a huge list. Does anyone have any tips? I want it to iterate and pick out each letter and then assign them a dictionary value.
**Edit number two:** Thanks to great help from you guys my code looks beautiful like this: ``` def makeInverseIndex(strList): numStrList = list(enumerate(strList)) n = 0 dictionary = {} while (n < len(strList)): for word in numStrList[n][1].split(): if word not in dictionary: dictionary[word] = {numStrList[n][0]} elif {numStrList[n][0]} not in dictionary[word]: dictionary[word]|={numStrList[n][0]} n = n+1 return dictionary ``` But I still manage to get this error when I try to run the module: ``` >>> makeInverseIndex(L) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "./inverse_index_lab.py", line 21, in makeInverseIndex for word in numStrList[n][1].split(): NameError: global name 'StrList' is not defined ``` I do not see where the error can come from.
Good to see some smart veggies programming. First, your question. Like @Vasiliy said, you have 3 indices. The `n` is alright, since you protect it with your `while` condition. The `1` is fine since `enumerate` always generates 2 things. That just leaves `m`. This is your problem. Let's say you have `N` elements in `strlist`. For each element `e` in `strlist`, you apply `split()` to it. The number of elements in `e.split()` is not always equal to `N`. The while condition for `m` guards against `N`, not against `len(e.split())`, hence the index out of range. To solve this, split the string first, and then loop through it. While you're at it, might as well get rid of `m` altogether, splitting the string only once, and gain some performance. Plus, you never reset your `m`, which just grows and grows. ``` while (n < len(strList)): for word in numStrList[n][1].split(): if word not in dictionary: dictionary[word] = {numStrList[n][0]} elif {numStrList[n][0]} not in dictionary[word]: dictionary[word]|={numStrList[n][0]} n = n+1 ``` Second, your `while` conditions are too restrictive. `n < len(strlist)` is fine.
I do not have enough reputation to leave a comment on your post so I'm posting an answer here: I have copy and pasted the latest code at the bottom (edit 2) and it runs as expected, so there are two potential issues I can see: 1) You might have forgotten to indent your function definition 2) You might have capitalized strList to StrList in your function definition and then declared StrList elsewhere. Hope this helps.
Python: IndexError: list index out of range Error
[ "", "python", "list", "python-3.x", "" ]
The Kivy Language automatically creates internal binds in properties. For example, if we assign the position of the parent to the position of the child, then the position of the child is going to be updated automatically:

```
Widget:
    Label:
        pos: self.parent.pos
```

In this case, if we move the parent `Widget`, then the child is also going to move. How do I unbind the property `pos` from the child? I know how to unbind [properties](http://kivy.org/docs/api-kivy.uix.widget.html#using-properties) that I bind myself, but how do I unbind them if I don't know **the name of the method it is bound to**. Here is a small example to show what I mean. The `Button` **Up** moves the `GridLayout` to the top and `Down` to the **Bottom**. The `Button` **Center** centers itself in the middle of the screen. My problem is that when I click **Up** or **Down** my centered button is not centered anymore.

```
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.lang import Builder

Builder.load_string("""
<Example>:
    GridLayout:
        id: _box
        cols: 3
        size_hint: .7, .3
        pos_hint: {'center_x': .5}
        x: 0
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _box.y = 0
                text: "Down"
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: self.center_y = root.height/2
                text: "Out of the Grid"
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _box.top = root.height
                text: "Up"
""")

class Example(FloatLayout):
    pass

class ExampleApp(App):
    def build(self):
        return Example()

if __name__ == "__main__":
    ExampleApp().run()
```

---

*Why do I want to do that in any case?* I am using an animation on the `GridLayout` that constantly updates the position. The normal position of the buttons should be inside the gridlayout, but once in a while one of the buttons flies over the screen and comes back to the same position.
The problem is that I cannot make them fly while my gridlayout is also moving, because the property is bound and as soon as the button tries to fly it goes back to the grid. That also means that the binding is sometimes desirable. What I want is to have control of the bind and unbind.
Comments don't seem to be working right now so I'll post this as an answer.

1. You already have a FloatLayout (your root widget). Use that instead of creating a new FloatLayout.
2. Before removing the widget from the grid:
   * store its size,
   * set size\_hint to None, None
   * set pos\_hint to position the widget in the center.
3. When adding the widget to the grid, do the reverse.

Here's your code with these fixes:

```
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.lang import Builder

Builder.load_string("""
<Example>:
    center_button: _center_button
    center_widget: _center_widget
    grid:_grid
    GridLayout:
        id: _grid
        cols: 3
        size_hint: .7, .3
        pos_hint: {'center_x': .5}
        x: 0
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _grid.y = 0
                text: "Down"
        Widget:
            id: _center_widget
            Button:
                id: _center_button
                pos: self.parent.pos
                size: self.parent.size
                on_press: root.centerize(*args)
                text: "Out of the Grid"
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _grid.top = root.height
                text: "Up"
""")

class Example(FloatLayout):

    def centerize(self, instance):
        if self.center_button.parent == self.center_widget:
            _size = self.center_button.size
            self.center_widget.remove_widget(self.center_button)
            self.center_button.size_hint = (None, None)
            self.add_widget(self.center_button)
            self.center_button.pos_hint = {'center_x': .5, 'center_y':.5}
        else:
            self.remove_widget(self.center_button)
            self.center_button.size_hint = (1, 1)
            self.center_widget.add_widget(self.center_button)
            self.center_button.size = self.center_widget.size
            self.center_button.pos = self.center_widget.pos

class ExampleApp(App):
    def build(self):
        return Example()

if __name__ == "__main__":
    ExampleApp().run()
```

Update 1: If for whatever reason you still need to unbind the properties bound by kvlang, you can do so using introspection to get a [list of observers](http://kivy.org/docs/api-kivy.event.html#kivy.event.EventDispatcher.get_property_observers) for the property.
So for your case it would be something like this:

```
observers = self.center_widget.get_property_observers('pos')
print('list of observers before unbinding: {}'.format(observers))
for observer in observers:
    self.center_widget.unbind(pos=observer)
print('list of observers after unbinding: {}'.format(self.center_widget.get_property_observers('pos')))
```

You would need to use the latest master for this. I should forewarn you to be extremely careful with this, though: you'd need to reset the bindings set in kv language, but then you lose the advantage of kv language... Only use this if you really understand what you are doing.
Following @qua-non's suggestion, I am temporarily moving the child to another parent. It actually unbinds it, or maybe, rebinds it to the new parent. This is a partial solution because, for whatever reason, it doesn't update the position automatically when I took it out of the GridLayout (i.e. when I press enter) and put it into the new parent. I need to press 'Up' (or 'Down') after the 'Out of the Box' button. However, it does go back immediately. When you click again on the 'Out of the box' button the 2nd time, it goes back to its original position. This part works perfectly. And it continues obeying its parent's instructions. In other words, it doesn't work immediately when I take it out of the grid, but it does when I put it back.

```
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.lang import Builder

Builder.load_string("""
<Example>:
    center_button: _center_button
    center_widget: _center_widget
    float: _float
    grid:_grid
    GridLayout:
        id: _grid
        cols: 3
        size_hint: .7, .3
        pos_hint: {'center_x': .5}
        x: 0
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _grid.y = 0
                text: "Down"
        Widget:
            id: _center_widget
            Button:
                id: _center_button
                pos: self.parent.pos
                size: self.parent.size
                on_press: root.centerize(*args)
                text: "Out of the Grid"
        Widget:
            Button:
                pos: self.parent.pos
                size: self.parent.size
                on_press: _grid.top = root.height
                text: "Up"
    FloatLayout:
        id: _float
        size_hint: None,None
""")

class Example(FloatLayout):

    def centerize(self, instance):
        if self.center_button.parent == self.center_widget:
            self.center_widget.remove_widget(self.center_button)
            self.float.add_widget(self.center_button)
            self.float.size = self.center_button.size
            self.float.x = self.center_button.x
            self.float.center_y = self.center_y
        else:
            self.float.remove_widget(self.center_button)
            self.center_widget.add_widget(self.center_button)
            self.center_button.size = self.center_widget.size
            self.center_button.pos = self.center_widget.pos

class ExampleApp(App):
    def build(self):
        return Example()

if __name__ == "__main__":
    ExampleApp().run()
```
How to unbind a property automatically bound in Kivy language?
[ "", "python", "kivy", "" ]
Perhaps as a remnant of my days with a strongly-typed language (Java), I often find myself writing functions and then forcing type checks. For example: ``` def orSearch(d, query): assert (type(d) == dict) assert (type(query) == list) ``` Should I keep doing this? what are the advantages to doing/not doing this?
Stop doing that. The point of using a "dynamic" language (that is strongly typed as to values\*, untyped as to variables, and late bound) is that your functions can be properly polymorphic, in that they will cope with any object which supports the interface your function relies on ("duck typing"). Python defines a number of common protocols (e.g. iterable) which different types of object may implement without being related to each other. Protocols are not *per se* a language feature (unlike a java interface). The practical upshot of this is that in general, as long as you understand the types in your language, and you comment appropriately (including with docstrings, so other people also understand the types in your programme), you can generally write less code, because you don't have to code around your type system. You won't end up writing the same code for different types, just with different type declarations (even if the classes are in disjoint hierarchies), and you won't have to figure out which casts are safe and which are not, if you want to try to write just the one piece of code. There are other languages that theoretically offer the same thing: type inferred languages. The most popular are C++ (using templates) and Haskell. In theory (and probably in practice), you can end up writing even less code, because types are resolved statically, so you won't have to write exception handlers to deal with being passed the wrong type. I find that they still require you to programme to the type system, rather than to the actual types in your programme (their type systems are theorem provers, and to be tractable, they don't analyse your whole programme). If that sounds great to you, consider using one of those languages instead of python (or ruby, smalltalk, or any variant of lisp). Instead of type testing, in python (or any similar dynamic language) you'll want to use exceptions to catch when an object does not support a particular method. 
In that case, either let it go up the stack, or catch it, and raise your exception about an improper type. This type of "better to ask forgiveness than permission" coding is idiomatic python, and greatly contributes to simpler code.

`*` In practice. Class changes are possible in Python and Smalltalk, but rare. It's also not the same as casting in a low level language.

---

Update: You can use mypy to statically check your Python outside of production. Annotating your code lets anyone who wants static consistency checks run them; everyone else can simply ignore the annotations.
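To make the EAFP point concrete, here is one possible sketch of the `orSearch` function from the question written in that style. The body is hypothetical (the question never shows what `orSearch` actually does); the point is only that improper arguments surface as exceptions instead of up-front type asserts:

```python
def or_search(d, query):
    """Return the keys of d whose value contains any term in query.

    Duck-typed: d can be any mapping that has .items(); query can be
    any iterable. Improper arguments raise a TypeError on their own.
    """
    try:
        items = d.items()  # AttributeError here if d is not mapping-like
    except AttributeError:
        raise TypeError("d must be a mapping (it needs .items())")
    terms = list(query)    # TypeError here if query is not iterable
    return [key for key, value in items if any(t in value for t in terms)]

print(or_search({'a': [1, 2], 'b': [3]}, [2]))
# ['a']
```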
In most cases it would interfere with duck typing and with inheritance.

* *Inheritance:* You certainly intended to write something with the effect of

```
assert isinstance(d, dict)
```

to make sure that your code also works correctly with subclasses of `dict`. This is similar to the usage in Java, I think. But Python has something that Java has not, namely

* *Duck typing:* most built-in functions do not require that an object belongs to a specific class, only that it has certain member functions that behave in the right way. The `for` loop, e.g., only requires that the loop variable is an *iterable*, which means that it has the member functions `__iter__()` and `next()`, and that they behave correctly.

Therefore, if you do not want to close the door to the full power of Python, do not check for specific types in your production code. (It might be useful for debugging, nevertheless.)
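To illustrate the duck-typing point with a hypothetical class: a `for` loop accepts any object with a working `__iter__()`, regardless of its class hierarchy.

```python
class Countdown(object):
    """Iterable purely by protocol; subclasses no built-in container."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        n = self.start
        while n > 0:
            yield n
            n -= 1

# The for loop never checks the class, only the protocol:
collected = []
for value in Countdown(3):
    collected.append(value)
```

An `isinstance(x, list)`-style check would reject `Countdown` even though iteration works perfectly.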
Should I force Python type checking?
[ "", "python", "types", "semantics", "" ]
```
>>> match = re.findall('a.*?a', 'a 1 a 2 a 3 a 4 a')
>>> match
['a 1 a', 'a 3 a']
```
How do I get it to print
```
['a 1 a', 'a 2 a', 'a 3 a', 'a 4 a']
```
Thank you!
I think using a positive lookahead assertion should do the trick:
```
>>> re.findall('(?=(a.*?a))', 'a 1 a 2 a 3 a 4 a')
['a 1 a', 'a 2 a', 'a 3 a', 'a 4 a']
```
[`re.findall`](http://docs.python.org/2/library/re.html#re.findall) returns all the groups in the regex, including those in look-aheads. This works because the look-ahead assertion doesn't consume any of the string.
You may use the alternative [`regex`](https://pypi.python.org/pypi/regex/2013-06-26) module, which allows overlapping matches:
```
>>> regex.findall('a.*?a', 'a 1 a 2 a 3 a 4 a', overlapped = True)
['a 1 a', 'a 2 a', 'a 3 a', 'a 4 a']
```
Python re.findall print all patterns
[ "", "python", "regex", "findall", "" ]
```
import subprocess

sample_file_directory = "..." # directory where file is
SCRIPT_DIR = "..." # directory where script is

p = subprocess.Popen([SCRIPT_DIR,sample_file_directory,'min','edgelen'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
min_side_len, err = p.communicate()
print len(err)
```
So I have this script that analyzes .ply files (3d file format) and gives me data about the file. I am trying to find out which files in my directory are corrupted and which aren't. So I am trying to use the subprocess library to run the script to find an arbitrary feature of the .ply files (in this case the minimum edge length). If err has anything in it, it means it couldn't retrieve the arbitrary feature and the file is corrupted. Here, I am only running it on one file. However, I keep getting an error.
```
p = subprocess.Popen([SCRIPT_DIR,sample_file_directory,'min','edgelen'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
```
Can anyone help me figure out why I am getting this error? Both my data directory and script directory are valid. Also, sorry if I left anything out that's important. First time posting.
"One other thing it could potentially be is that you're trying to run a 32-bit binary on a 64-bit system without the 32-bit libraries installed. I actually ran into this issue earlier today. A fix is documented here. I don't know anything about your system, so this may be completely irrelevant advice, but I figured it wouldn't hurt to mention it." (Dan Albert.) He came up with the solution! Thanks so much Dan!
The problem lies here:
```
p = subprocess.Popen([SCRIPT_DIR,sample_file_directory,'min','edgelen'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
```
Notice this part:
```
Popen([SCRIPT_DIR,sample_file_directory,'min','edgelen']
```
Now, see, you already know that `subprocess` takes a list. But the problem is that you are passing the directory as part of the list. The list is formatted like this:
```
[command, arg1, arg2...]
```
Now, when you run that command, you are doing this:
```
[SCRIPT_DIR -> command
 sample_file_directory -> arg1
 'min' -> arg2
 'edgelen' -> arg3]
```
See the problem? You are passing the script's directory as the command to run and the script's name as an argument. Import the `os` module and do this:
```
p = subprocess.Popen([os.path.join(SCRIPT_DIR,sample_file_directory),'min','edgelen'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
```
This does this:
```
[SCRIPT_DIR and sample_file_directory joined together -> command
 'min' -> arg1
 'edgelen' -> arg2]
```
`os.path.join` automatically adds the path separator.
subprocessing library in Python
[ "", "python", "subprocess", "" ]
It's blowing my mind a little bit that OrderedDict in Python is not a sequence type. It has a concept of order, but it's not a sequence. The [Python docs](http://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) say

> There are seven sequence types: strings, Unicode strings, lists,
> tuples, bytearrays, buffers, and xrange objects.
>
> For other containers see the built in dict and set classes, and the
> collections module. ...Most sequence types support the following
> operations....Sequence types also support comparisons.

Those operations correspond to `__contains__`, `__add__` for concatenation, `__getitem__` with integers (in `range(len(foo))`), `__len__`, `__min__`, `__slice__`, `index` and `count`. `__lt__` etc. implement comparisons. OrderedDicts implement some of these methods but not others, probably because the syntactic sugar for accessing items by key (as in dict) or order (as in index) is the same. I know if something implements `__iter__` I can loop through it. How can I definitely know if something has an order? I would have thought that is what is meant by "sequence": the nth item is always the nth item.
In a duck typing world, this is a difficult question. Both [sequences and mappings](http://docs.python.org/3/glossary.html#term-sequence) use `__getitem__()` to access items, using integer indexes and keys, respectively. Looking for the availability of the `__getitem__()` method does not tell them apart; you need to look at what the method actually does. For a dict it is not possible to know whether the integer argument to `__getitem__()` is an index or a key, so it always works mapping-style. Therefore, I think a dict is not a sequence at all, even though it supports iteration. The same applies to the set. Looking at the `collections.abc.Sequence` base class may be the best test. For custom types, just make sure they are derived from this base class.
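A quick illustration of that test, using the abstract base classes (Python 3 shown; in Python 2 the ABCs live directly in the `collections` module):

```python
import collections
from collections import abc

assert isinstance([1, 2], abc.Sequence)        # list is a sequence
assert isinstance((1, 2), abc.Sequence)        # so is a tuple
assert isinstance("ab", abc.Sequence)          # and a string
assert not isinstance({"a": 1}, abc.Sequence)  # dict is not a sequence...
assert isinstance({"a": 1}, abc.Mapping)       # ...it is a mapping
assert isinstance(collections.OrderedDict(), abc.Mapping)  # OrderedDict too
assert not isinstance({1, 2}, abc.Sequence)    # a set is neither
```

A custom ordered container advertises itself the same way, by inheriting from `abc.Sequence` (which also fills in `index`, `count`, iteration, and containment from `__getitem__` and `__len__`).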
[`issubclass(list, collections.abc.Sequence)`](http://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence) Note this only works for classes that subclass the ABC or are registered with it; the built-in types are registered already, but an arbitrary custom type is not.
How to know if a Python type has a concept of order? What is a sequence, really?
[ "", "python", "types", "sequences", "" ]
I have a Student table and I want to update the (sequence_No.) field like this:
```
ID   Name   age   sequence_No.
--   ----   ---   ------------
1    sara   20    1
2    sara   20    2
3    sara   20    3
4    john   24    1
5    john   24    2
6    Hama   23    1
```
Which query can do that in MySQL? Thank you :)
``` SELECT ID, Name, age, ( CASE Name WHEN @curType THEN @curRow := @curRow + 1 ELSE @curRow := 1 AND @curType := Name END ) + 1 AS sequence_No FROM student, (SELECT @curRow := 0, @curType := '') r ORDER BY ID,NAME; ``` **[SQL FIDDLE](http://www.sqlfiddle.com/#!2/3e6b6d/5)**
Hope this will help you:
```
SELECT *, count(*) as seq_number
FROM student a
JOIN student b ON a.name = b.name AND a.id >= b.id
GROUP BY a.id
```
[Sqlfiddle](http://sqlfiddle.com/#!9/7c08f/4)
Numbering duplicates record in mysql
[ "", "mysql", "sql", "" ]
I am calculating `TotalHours` an employee worked in the office based on `In_Time` and `Out_Time`, with the result in the form `hh.mm`, like `8.30`. So I wrote the SQL query below:
```
SELECT EMPLOYEEID,
       sum(DateDiff(mi,isnull(In_Time,0),isnull(Out_Time,0))/60)
       + sum(round(DateDiff(mi,isnull(In_Time,0),isnull(Out_Time,0))%60,2))/100.0
       + sum(round(DateDiff(ss,isnull(In_Time,0),isnull(Out_Time,0)),2))/10000.0 as TotalHours
from HR_EMPLOYEES
```
The above SQL Server query was running correctly initially, but now it is giving the following exception:
```
java.sql.SQLException: The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.
```
Could anybody please help me to get rid of this?
You may try this: ``` select empid, convert(varchar(5), sum(datediff(minute, [intime], isnull([outtime], dateadd(hh, 19, DATEADD(dd, DATEDIFF(dd, 0, [intime]), 0))))) / 60) + ':' + convert(varchar(5),sum(datediff(minute, [intime], isnull([outtime], dateadd(hh, 19, DATEADD(dd, DATEDIFF(dd, 0, [intime]), 0))))) % 60) as TotalHours from HR_EMPLOYEES group by empid ``` Some thoughts: 1. Can `intime` ever be null? If so how and why? I am assuming `intime` can never be null 2. I am assuming that if `outtime` is `null` then, the employee is still working, thus the use of `getdate()` But it may also be the case that there was a software bug that caused the `null`. 3. Another strategy to handle `null` in `outtime` could be to make it the midnight of the `intime` day. Then this begs the question, how the next day will be handled. I think there may be a lot of edge cases here. You will have to be careful. EDIT: Modified `outtime` to 7 PM of `intime` day if `outtime` is null as per OP's comment. Used [Best approach to remove time part of datetime in SQL Server](https://stackoverflow.com/questions/1177449/best-approach-to-remove-time-part-of-datetime-in-sql-server)
Is it not easier to just discard the null values (returning datediff in seconds from a datetime of zero is what's probably overflowing your query) and also use 'hh' for `datediff`? Or are you also looking for the number of minutes (do you NEED seconds?? If this is for timesheets etc, seconds don't really matter do they?) ``` SELECT EMPLOYEEID, CASE WHEN In_Time IS NOT NULL AND Out_Time IS NOT NULL THEN sum(DateDiff(hh, In_Time, Out_Time)) ELSE 0 END as TotalHours FROM HR_EMPLOYEES ``` Edit: ok for hours/minutes just use: ``` SELECT EMPLOYEEID, CASE WHEN In_Time IS NOT NULL AND Out_Time IS NOT NULL THEN sum(DateDiff(mi, In_Time, Out_Time)) / 60.0 ELSE 0 END as TotalHours FROM HR_EMPLOYEES ``` This gives you hours plus a fraction of minutes (so 90 minutes = 1.5 hours) Edit2: if you want minutes as actual minutes and not a fraction use this: ``` SELECT EMPLOYEEID, CASE WHEN In_Time IS NOT NULL AND Out_Time IS NOT NULL THEN sum(DateDiff(hh, In_Time, Out_Time) + -- Total hours DateDiff(mi, In_Time, Out_Time) % 60 * .01) -- Total minutes (divided by 100 so that you can add it to the hours) ELSE 0 END as TotalHours FROM HR_EMPLOYEES ```
SQL datediff function resulted in an overflow
[ "", "sql", "sql-server-2008", "sqlexception", "" ]
I'm having some difficulty implementing a modified version of binary search (it just needs to check if there is a 1 in the sub list, and then continue searching until it returns the index). Currently the code I've come up with is:
```
def binary_search(inList):
    low = 0
    high = len(inList) -1

    while low <= high:
        mid = (low+high)//2
        upper = inList[mid:high]
        lower = inList[low:mid-1]
        if any(lower):
            inList = lower
            high = mid-1
        elif any(upper):
            inList = upper
            low = mid
        else:
            return mid

    assert low < high
    return -1
```
It seems to work for a couple of iterations of the loop, but then it returns empty lists, and fails. I've tested the function with the following input:
```
l = [0 for x in range(256)]
l[123] = 1
```
I've also noticed that some of the bins are lost when the list is decimated. How would I go about creating a test suite that will catch these problems and let me extend this algorithm to other input sets (e.g. a 1 in both halves, two 1s next to each other, etc.)?
Man you're asking like three questions in one, but here goes nothing. To create a test suite, simply write a few good examples and assert that they work, something like:
```
from binary_search import binary_search

# Test a basic case
inlist = [0] * 256
inlist[123] = 1
assert binary_search(inlist) == 123

# Test a case with odd len
inlist = [0] * 99
inlist[20] = 1
assert binary_search(inlist) == 20

# Test the case with no 1s
inlist = [0] * 256
assert binary_search(inlist) == -1

# It's good to test corner cases just in case
inlist = [0] * 256
inlist[0] = 1
assert binary_search(inlist) == 0

inlist = [0] * 256
inlist[255] = 1
assert binary_search(inlist) == 255
```
You might want to consider using something like nose or the unittest module to help you organize your tests, but in any case the idea is to run the tests every time you change your code to make sure it's working. If you add a new feature to your code, for example allowing searching for multiple 1s in the list, you'll want to add tests for that behavior.

You might already know this, but just in case I wanted to mention that this is a pretty poor algorithm for finding 1s in a list. `any` is an O(N) operation, so every iteration of the loop scans the current window; the window halves each time, so the total work is roughly N + N/2 + N/4 + ... ≈ 2N. That's still O(N), the same as a plain scan, but with the extra overhead of slicing at every step; simply using `inlist.index(1)` (or a basic for loop) finds a 1 in N operations with none of that overhead.

However, to help you learn I went ahead and fixed your algorithm; here is a working version, which passes the above tests :)
```
def binary_search(inList):
    low = 0
    high = len(inList)
    while low < high:
        mid = (low + high) // 2
        if inList[mid]:
            return mid                      # the 1 is exactly at mid
        if any(inList[low:mid]):
            high = mid                      # a 1 is somewhere below mid
        elif any(inList[mid + 1:high]):
            low = mid + 1                   # a 1 is somewhere above mid
        else:
            return -1                       # no 1 left in the window
    return -1
```
Note the explicit `inList[mid]` check: when the 1 sits exactly on `mid`, neither half-open slice around it is guaranteed to contain it, so it has to be tested separately. The main problem with your version was that you were modifying low/high and modifying inlist at the same time. Because low/high are indices into inlist, when you modify inlist they no longer point to the right place.
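For comparison, the linear alternative mentioned above, `inlist.index(1)`, wrapped so it returns -1 like the binary search does (the wrapper name is just illustrative):

```python
def linear_search(inlist):
    """Return the index of the first 1 in inlist, or -1 if there is none."""
    try:
        return inlist.index(1)
    except ValueError:  # index() raises ValueError when 1 is absent
        return -1
```

This passes the same test cases in a single O(N) pass with no slicing.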
You can build a simple test suite using `unittest` that can test the result for different inputs; it should be quite simple for this example. This should get you started: try running this script (after modifying the import to import your binary search module); a google for `python unittest` should give you plenty of ideas on how you can extend this.
```
import unittest
from <your module> import binary_search

class TestBinarySearchForOne(unittest.TestCase):
    def test_small_range(self):
        self.assertEquals(1, binary_search(range(0, 2)))

    def test_not_found(self):
        self.assertEquals(-1, binary_search([0, 0, 0, 0]))

if __name__ == '__main__':
    unittest.main()
```
Unit testing binary search
[ "", "python", "" ]
I'm using django-storages and amazon s3 for my static files. Following the documentation, I put these settings in my settings.py ``` STATIC_URL = 'https://mybucket.s3.amazonaws.com/' ADMIN_MEDIA_PREFIX = 'https://mybucket.s3.amazonaws.com/admin/' INSTALLED_APPS += ( 'storages', ) DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' AWS_ACCESS_KEY_ID = 'mybucket_key_id' AWS_SECRET_ACCESS_KEY = 'mybucket_access_key' AWS_STORAGE_BUCKET_NAME = 'mybucket' STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage' ``` And the first time I ran collect static everything worked correctly and my static files were uploaded to my s3 bucket. However, after making changes to my static files and running `python manage.py collectstatic` this is outputted despite the fact that static files were modified ``` -----> Collecting static files 0 static files copied, 81 unmodified. ``` However, if I rename the changed static file, the changed static file is correctly copied to my s3 bucket. Why isn't django-storages uploading my changed static files? Is there a configuration problem or is the problem deeper?
collectstatic skips files if the "target" file is "younger" than the source file. It seems like the Amazon S3 storage returns a wrong date for your file. You could investigate the code and debug the server responses; maybe there is a problem with the timezone. Or you could just pass the `--clear` argument to collectstatic so that all files are deleted on S3 before collecting.
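The skip rule mentioned here boils down to a "copy only if the source is newer than the target" comparison. A simplified, hypothetical sketch of that idea (not Django's actual code) shows why a backend reporting a wrong, too-recent timestamp makes every changed file look unmodified:

```python
def should_copy(source_mtime, target_mtime):
    """Copy when the target is missing or strictly older than the source."""
    return target_mtime is None or source_mtime > target_mtime

# A remote timestamp far in the future (e.g. a timezone bug) wins every
# comparison, so locally modified files are silently skipped:
assert should_copy(1000, None)      # no target yet: copy
assert should_copy(2000, 1000)      # source newer: copy
assert not should_copy(2000, 9999)  # bogus future target mtime: skipped
```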
<https://github.com/antonagestam/collectfast>

From the readme:

> Custom management command that compares the MD5 sum and etag from S3 and if the two are the same skips file copy. This makes running collectstatic MUCH faster if you are using git as a source control system, which updates timestamps.
Django-storages not detecting changed static files
[ "", "python", "django", "amazon-web-services", "amazon-s3", "" ]
The user selects various words from a drop-down list and these values get added into a comma-delimited string. When passing the string to a stored procedure I want it to `select * from` a table where that word exists.

Table
```
id----word
1-----cat
2-----dog
3-----mouse
4-----dog
```
The string that is passed into the stored procedure is `cat, dog`, so it should return rows 1, 2 and 4. Is there a way of doing this in SQL Server?
You first need to create a `SplitCSV` function:
```
CREATE FUNCTION [dbo].[SplitCSV] (@CSVString VARCHAR(8000), @Delimiter CHAR(1))
RETURNS @temptable TABLE (items VARCHAR(8000))
AS
BEGIN
    DECLARE @pos INT;
    DECLARE @slice VARCHAR(8000);

    SELECT @pos = 1;

    IF LEN(@CSVString) < 1 OR @CSVString IS NULL
        RETURN;

    WHILE @pos != 0
    BEGIN
        SET @pos = CHARINDEX(@Delimiter, @CSVString);

        IF @pos != 0
            SET @slice = LEFT(@CSVString, @pos - 1);
        ELSE
            SET @slice = @CSVString;

        IF (LEN(@slice) > 0)
            INSERT INTO @temptable(Items) VALUES (@slice);

        SET @CSVString = RIGHT(@CSVString, LEN(@CSVString) - @pos);

        IF LEN(@CSVString) = 0
            BREAK;
    END

    RETURN
END
GO
```
Then you can use it like:
```
SELECT * FROM myTable
WHERE ID IN (
    SELECT items FROM [dbo].[SplitCSV]('1,2,3,4,5', ',')
)
```
Use `IN`:
```
SELECT *
FROM your_table
WHERE word IN ('cat', 'dog')
```
Select * from table where column = (any value from a comma delimited string)
[ "", "sql", "sql-server", "" ]
I have written a query to group the tax items; if the tax is null, I don't want to display the tax name. How can I do this? Please check my SQL Server statement:
```
SELECT 'CENTRAL EXCISE DUTY' AS 'TaxName', SUM(TaxAmount) AS 'Tax'
FROM PMT_InvoiceTaxAttribute
WHERE InvoiceID = 100
  AND TaxAttributeName NOT LIKE '%IBR%'
  AND TaxAttributeName NOT LIKE '%CST%'
  AND TaxAttributeName NOT LIKE '%VAT%'
UNION
SELECT 'CST' AS 'TaxName', SUM(TaxAmount) AS 'Tax'
FROM PMT_InvoiceTaxAttribute
WHERE InvoiceID = 100
  AND TaxAttributeName LIKE '%CST%'
UNION
SELECT 'VAT' AS 'TaxName', SUM(TaxAmount) AS 'Tax'
FROM PMT_InvoiceTaxAttribute
WHERE InvoiceID = 100
  AND TaxAttributeName LIKE '%VAT%'
```
This query gives the output as follows:
```
TaxName               Tax
-------------         -----------
CENTRAL EXCISE DUTY   15000
CST                   NULL
VAT                   NULL
```
Here, I don't want to display CST and VAT since they have null values.
Since you're using SQL Server, you can simplify it somewhat using a common table expression and just eliminate the rows you don't want using `HAVING` at the end; ``` WITH cte AS ( SELECT CASE WHEN TaxAttributeName LIKE '%CST%' THEN 'CST' WHEN TaxAttributeName LIKE '%VAT%' THEN 'VAT' WHEN TaxAttributeName LIKE '%IBR%' THEN NULL ELSE 'CENTRAL EXCISE DUTY' END TaxName, TaxAmount Tax FROM PMT_InvoiceTaxAttribute WHERE InvoiceID = 100 ) SELECT TaxName, SUM(Tax) Tax FROM cte GROUP BY TaxName HAVING TaxName IS NOT NULL AND SUM(Tax) IS NOT NULL ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/e7f3b/3). EDIT: Just for completeness, the minimally intrusive way to fix your existing query is to wrap it in another select that removes the rows you're not interested in; ``` SELECT * FROM ( ...your query here... ) a_dummy_name_is_required_here WHERE Tax IS NOT NULL; ``` Note the slight difference of the first query against the original query, if a row is set to `CSTVAT`, the original query will count it as both `CST` and `VAT`, while the first query with the CTE will count it as `CST` only. I suspect they're not overlapping, but I thought I'd mention it to be sure.
Try this: ``` SELECT * FROM ( SELECT 'CENTRAL EXCISE DUTY' AS 'TaxName', SUM(TaxAmount) AS 'Tax' FROM PMT_InvoiceTaxAttribute WHERE InvoiceID = 100 AND TaxAttributeName NOT LIKE '%IBR%' AND TaxAttributeName NOT LIKE '%CST%' AND TaxAttributeName NOT LIKE '%VAT%' UNION SELECT 'CST' AS 'TaxName', SUM(TaxAmount) AS 'Tax' FROM PMT_InvoiceTaxAttribute WHERE InvoiceID = 100 AND TaxAttributeName LIKE '%CST%' UNION SELECT 'VAT' AS 'TaxName', SUM(TaxAmount) AS 'Tax' FROM PMT_InvoiceTaxAttribute WHERE InvoiceID = 100 AND TaxAttributeName LIKE '%VAT%') as SomeName WHERE TAX is NOT NULL ```
Group by using LIKE Operator and ignore NULL value
[ "", "sql", "sql-server", "sql-server-2008-r2", "group-by", "sql-like", "" ]
I'm trying to analyze the contents of a string. If it has a punctuation mixed in the word I want to replace them with spaces. For example, If Johnny.Appleseed!is:a\*good&farmer is entered as an input then it should say there are 6 words, but my code only sees it as 0 words. I'm not sure how to remove an incorrect character. FYI: I'm using python 3, also I can't import any libraries ``` string = input("type something") stringss = string.split() for c in range(len(stringss)): for d in stringss[c]: if(stringss[c][d].isalnum != True): #something that removes stringss[c][d] total+=1 print("words: "+ str(total)) ```
# Simple loop based solution: ``` strs = "Johnny.Appleseed!is:a*good&farmer" lis = [] for c in strs: if c.isalnum() or c.isspace(): lis.append(c) else: lis.append(' ') new_strs = "".join(lis) print new_strs #print 'Johnny Appleseed is a good farmer' new_strs.split() #prints ['Johnny', 'Appleseed', 'is', 'a', 'good', 'farmer'] ``` # Better solution: Using `regex`: ``` >>> import re >>> from string import punctuation >>> strs = "Johnny.Appleseed!is:a*good&farmer" >>> r = re.compile(r'[{}]'.format(punctuation)) >>> new_strs = r.sub(' ',strs) >>> len(new_strs.split()) 6 #using `re.split`: >>> strs = "Johnny.Appleseed!is:a*good&farmer" >>> re.split(r'[^0-9A-Za-z]+',strs) ['Johnny', 'Appleseed', 'is', 'a', 'good', 'farmer'] ```
Here's a one-line solution that doesn't require importing any libraries. It replaces non-alphanumeric characters (like punctuation) with spaces, and then `split`s the string. Inspired by "[Python strings split with multiple separators](https://stackoverflow.com/questions/1059559/python-strings-split-with-multiple-separators)"
```
>>> s = 'Johnny.Appleseed!is:a*good&farmer'
>>> words = ''.join(c if c.isalnum() else ' ' for c in s).split()
>>> words
['Johnny', 'Appleseed', 'is', 'a', 'good', 'farmer']
>>> len(words)
6
```
Trying to count words in a string
[ "", "python", "string", "list", "function", "loops", "" ]
My code is:
```
Criteria criteriaA = getHibernateSession().createCriteria(entityA.class, "aaa");
Criteria criteriaB = criteriaA.createCriteria("entityB").setResultTransformer(CriteriaSpecification.DISTINCT_ROOT_ENTITY);
// It seems that the result transformer doesn't affect the code...
criteriaA.add( -- some restriction --)
if (orderById) {
    criteriaB.addOrder(Order.desc("id"));
}
if (!orderById && orderByDate) {
    criteriaB.addOrder(Order.desc("date"));
}
criteriaA.setProjection(Projections.distinct(Projections.property("entityB")));
```
entityA and entityB have a oneToMany relationship (one entityB may have 0..* entityA) but I can navigate only to A from B (project restrictions.. cannot change that). Now, the problem is that I need no duplicates in the results returned, and I need to sort my result. The strange fact is that ORA-01791 (which tells me that there's an incorrect order by) is raised only when sorting by date; sorting by id doesn't give me any problem, but both are properties of the entity I want to distinct! Why can I sort by id but not by date? Is there another way to do what I'm trying to do?

---- EDIT -----

As mabi suggested I looked at the generated query:
```
DEBUG [org.hibernate.SQL] select distinct this_.BBB_ID as y0_ from TB_ENTITY_A this_ inner join TB_ENTITY_B entityB_ on this_.BBB_ID=entityB_.BBB_ID where this_.ANOTHER_ID=? and lower(this_.AAA_VALUE) like ? and this_.AAA_ID in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) order by entityB_.BBB_DATE asc
```
so my distinct was applied only to the id of the foreign key.. How could I force Hibernate to select all the fields? If I do:
```
criteriaB.list();
```
instead of
```
criteriaA.list();
```
will I lose the filter from criteriaA? I was also thinking to try to set in the Projection all the fields from entityB, but I'm not sure Hibernate can convert the result to the entity.
As mabi suggested, I'm replying to my own question to post my solution:

---- SOLVED -----

I changed the way I was organizing the query: I've created a DetachedCriteria for the many-side entity and retrieve the other entity with a subquery:
```
DetachedCriteria criteriaA = DetachedCriteria.forClass(EntityA.class, "aaa");
criteriaA.add( -- some restriction --);
criteriaA.setProjection(Projections.property("entityB"));

Criteria criteriaB = getHibernateSession().createCriteria(EntityB.class, "bbb");
criteriaB.add(Property.forName("id").in(criteriaA));
```
so I don't need a distinct and can sort by any field I want from entity B.
<http://ora-01791.ora-code.com/> says:

> In this context, all ORDER BY items must be constants, SELECT list
> expressions, or expressions whose operands are constants or SELECT
> list expressions

I hazard a guess that you/hibernate don't select `date`. If you do, can you add the generated SQL to the question?
ORA-01791 on Hibernate Query
[ "", "sql", "hibernate", "" ]
My query:
```
SELECT sites.siteName, sites.siteIP, history.date
FROM sites INNER JOIN history ON sites.siteName = history.siteName
ORDER BY siteName, date
```
First part of the output:

![enter image description here](https://i.stack.imgur.com/D0Lpg.png)

How can I remove the duplicates in the `siteName` column? I want to leave only the updated one based on the `date` column. In the example output above, I need the rows 1, 3, 6, 10
This is where the window function `row_number()` comes in handy: ``` SELECT s.siteName, s.siteIP, h.date FROM sites s INNER JOIN (select h.*, row_number() over (partition by siteName order by date desc) as seqnum from history h ) h ON s.siteName = h.siteName and seqnum = 1 ORDER BY s.siteName, h.date ```
From your example it seems reasonable to assume that the `siteIP` column is determined by the `siteName` column (that is, each site has only one `siteIP`). If this is indeed the case, then there is a simple solution using `group by`: ``` select sites.siteName, sites.siteIP, max(history.date) from sites inner join history on sites.siteName=history.siteName group by sites.siteName, sites.siteIP order by sites.siteName; ``` However, if my assumption is not correct (that is, it is possible for a site to have multiple `siteIP`), then it is not clear from you question which `siteIP` you want the query to return in the second column. If just any `siteIP`, then the following query will do: ``` select sites.siteName, min(sites.siteIP), max(history.date) from sites inner join history on sites.siteName=history.siteName group by sites.siteName order by sites.siteName; ```
Eliminating duplicate values based on only one column of the table
[ "", "sql", "sql-server", "distinct", "inner-join", "duplicates", "" ]
My code below works great; however, if I select '1', the code will run the if statement and print "success" but it will also run the else statement and exit. How can I stop the else statement running if the user selects 1, 2 or 3? Thanks!
```
print "\nWelcome to the tool suite.\nWhich tool would you like to use?\n"
print "SPIES - press 1"
print "SPADE - press 2"
print "SPKF - press 3"
print "For information on tools - press i\n"

choice = raw_input("")

if choice == "1":
    execfile("lot.py")
    print ("success")
if choice == "2":
    #execfile("spade.py")
    print ("success")
if choice == "3":
    #execfile("spkf.py")
    print ("success")
if choice == "i":
    print "Google Maps is awesome.\n"
    print "SPADE\n"
    print "SPKF\n\n"
else:
    print "Error, exiting to terminal"
    exit(1)
```
You want the `elif` construct.
```
if choice == "1":
    ...
elif choice == "2":
    ...
else:
    # choice != "1" and choice != "2"
    ...
```
Otherwise, the different `if` statements are disconnected from each other. I've added a blank line for emphasis:
```
if choice == "1":
    ...

if choice == "2":
    ...
else:
    # choice != 2
    ...
```
You're looking for `elif`:
```
if choice == "1":
    execfile("lot.py")
    print ("success")
elif choice == "2":
    #execfile("spade.py")
    print ("success")
elif choice == "3":
    #execfile("spkf.py")
    print ("success")
elif choice == "i":
    print "Google Maps is awesome.\n"
    print "SPADE\n"
    print "SPKF\n\n"
else:
    print "Error, exiting to terminal"
    exit(1)
```
This makes the entire block above a single conditional construct.
If else statement - how to skip past else?
[ "", "python", "if-statement", "" ]
I'm trying to create a .txt file, save the information that the user gives, and also open the file in Python. I'm having trouble creating the file. Here is my code:
```
from Tkinter import *

raiz = Tk()
frame = Frame(raiz)

def cadastro():
    form = Toplevel(raiz)
    Label(form, text='Nome: ').grid(column=0, row=0, sticky=E)
    Label(form, text='Celular: ').grid(column=0, row=1, sticky=E)
    nome = StringVar()
    celular = StringVar()
    a=Entry(form, textvariable=nome, width=15)
    a.grid(column=1, row=0, sticky=W)
    Entry(form, textvariable=celular, width=15).grid(column=1, row=1, sticky=W)

    def onCancel():
        form.destroy()

    def onOk():
        ******.insert('','end',text=nome.get(), values=celular.get())
        onCancel()

    Button(form, text='OK', command=onOk).grid(column=0, row=2, sticky=E)
    Button(form, text='Cancel', command=onCancel).grid(column=1, row=2, sticky=W)

def listar():
    w = Button(raiz, text='Cadastrar',command=cadastro).grid()
    x = Button(raiz, text='Listar').grid()

raiz.mainloop()
```
The `******` is where I put the file name. Thank you very much in advance
Here's the code; I redesigned it to meet your requirements. Feedback would be much appreciated.
```
from Tkinter import *

raiz = Tk()
frame = Frame(raiz)
out = []

def cadastro():
    form = Toplevel(raiz)
    Label(form, text='Nome: ').grid(column=0, row=0, sticky=E)
    Label(form, text='Celular: ').grid(column=0, row=1, sticky=E)
    nome = StringVar()
    celular = StringVar()
    a=Entry(form, textvariable=nome, width=15)
    a.grid(column=1, row=0, sticky=W)
    Entry(form, textvariable=celular, width=15).grid(column=1, row=1, sticky=W)

    def onCancel():
        form.destroy()

    def onOk():
        with open('outt.txt','w') as txt:
            txt.write('Name : ' + str(nome.get()) + ' ' + 'Telephone No. : ' + str(celular.get()))
        onCancel()

    Button(form, text='OK', command=onOk).grid(column=0, row=2, sticky=E)
    Button(form, text='Cancel', command=onCancel).grid(column=1, row=2, sticky=W)

def listar():
    with open('outt.txt','r') as txt_read:
        print txt_read.read()

w = Button(raiz, text='Cadastrar',command=cadastro).grid()
x = Button(raiz, text='Listar', command=listar).grid()

raiz.mainloop()
```
After entering data, if you click on *listar* you can see the output on the screen *(though you can manually view the data which is saved in the .txt file)*. Here's a sample:

> Name : **K-DawG** Telephone No. : **911**

The key here is using the `with as` statement; for more info check out Codecademy's [course](http://www.codecademy.com/tracks/python) on Python.

Using a list and the *insert()* method was surely not the best option for this problem; if you instead use my method and write to a .csv file with delimiters, the program could be far more useful.
You can use the [open built-in](http://docs.python.org/2/library/functions.html#open) to get a file object with writing permissions, and then fill the content using the [write function](http://docs.python.org/2/library/stdtypes.html#file.write):
```
file = open('<FILENAME>.txt', 'w')
file.write('first line\n')
file.write('second line\n')
file.close()
```
Check out the linked docs for more info about the open arguments and other useful functions like [writelines](http://docs.python.org/2/library/stdtypes.html#file.writelines).
Python: how to create a file .txt and record information in it
[ "", "python", "" ]