I have 3 classes like this: ``` class Person(models.Model): name = models.CharField(max_length=200) class Device(models.Model): mobile = models.CharField(max_length=200) class Uses(models.Model): .... person_name = models.ForeignKey(Person) person_device = models.ForeignKey(Device) ``` My question is: how can I make a query to get the devices used by the person "name"? It needs to pass through the Uses class.
Life will be easier if you use ManyToManyField, i.e. add this field to the Person model: ``` devices = models.ManyToManyField('Device', through='Uses') ``` Then, to get the list, you just need to get the devices attribute of the model object: ``` Person.objects.get(name="TheNameYouWant").devices ``` If you do not want to modify the model, the list of devices used by a person could be retrieved as below: ``` Device.objects.filter(uses__person_name__name="TheNameYouWant") ``` If Django says it cannot resolve the keyword "uses" into a field, change it to "uses\_set"; I can't remember which one is which.
> How can I make a query to get the devices used by the person "name"? ``` devices = Uses.objects.filter(person_name__name='name').values_list('person_device') ```
A query with 3 tables in Django
[ "sql", "django" ]
I have the following data set. ``` create table t1 ( dept number, date1 date ); Table created. insert into t1 values (100, '01-jan-2013'); insert into t1 values (100, '02-jan-2013'); insert into t1 values (200, '03-jan-2013'); insert into t1 values (100, '04-jan-2013'); commit; ``` My goal is to create a rank column that resets each time the department changes. The closest column that I can use for the "partition by" clause is dept, but that won't give me the desired result. ``` SQL> select * from t1; DEPT DATE1 ---------- --------- 100 01-JAN-13 100 02-JAN-13 200 03-JAN-13 100 04-JAN-13 select dept, date1, rank () Over (partition by dept order by date1) rnk from t1 order by date1; DEPT DATE1 RNK ---------- --------- ---------- 100 01-JAN-13 1 100 02-JAN-13 2 200 03-JAN-13 1 100 04-JAN-13 3 ``` The desired output is as follows. The last rnk=1 is because the Jan-04 record is the first record after the change. ``` DEPT DATE1 RNK ---------- --------- ---------- 100 01-JAN-13 1 100 02-JAN-13 2 200 03-JAN-13 1 100 04-JAN-13 1 <<<---------- ``` Any pointers?
This is a little complicated. Instead of using `rank()` or the like, use `lag()` to see when something changes. Then do a cumulative sum of the flag. ``` select dept, date1, CASE WHEN StartFlag = 0 THEN 1 ELSE 1+StartFlag+NVL(lag(StartFlag) over (order by date1),0) END as rnk from (select t1.*, (case when dept = lag(dept) over (order by date1) then 1 else 0 end) as StartFlag from t1 ) t1 order by date1; ``` [Here](http://www.sqlfiddle.com/#!4/fc339/3) is the SQLFiddle. EDIT: This is Gordon editing my own answer. Oops. The original query was 90% of the way there. It identified the *groups* where the numbers should increase, but did not assign the numbers within the groups. I would do this with another level of `row_number()` as in: ``` select dept, date1, row_number() over (partition by dept, grp order by date1) as rnk from (select dept, date1, startflag, sum(StartFlag) over (partition by dept order by date1) as grp from (select t1.*, (case when dept = lag(dept) over (order by date1) then 0 else 1 end) as StartFlag from t1 ) t1 ) t1 order by date1; ``` So, the overall idea is the following. First use `lag()` to determine where a group begins (that is, where there is a department change from one date to the next). Then, assign a "group id" to these, by doing a cumulative sum. These are the records that are to be enumerated. The final step is to enumerate them using `row_number()`.
This could have been a case for `model` clause, but unfortunately it dramatically underperforms on significant amount of rows compared to Gordon's query. ``` select dept, date1, rank from t1 model dimension by ( row_number() over(order by date1) as rn ) measures( 1 as rank, dept, date1 ) rules ( rank[1] = 1, rank[rn > 1] = case dept[cv()] when dept[cv()-1] then rank[cv()-1] + 1 else 1 end ) ``` <http://www.sqlfiddle.com/#!4/fc339/132>
Oracle Analytic functions - resetting a windowing clause
[ "sql", "oracle", "analytic-functions" ]
I'm trying to customize the recaptcha inside my form, but I only get a JavaScript error. Can it be done, or do I have to modify the Flask-WTF code myself?
The example from bbenne10 almost does it except for one little detail: besides putting `theme: 'custom'` into your config you also must specify the `custom_theme_widget: 'recaptcha_widget'`. Or whatever will be the id of container where the actual image will be injected and of course the container must be present in your html. So the final config will look like this: ``` RECAPTCHA_PUBLIC_KEY = 'key' RECAPTCHA_PRIVATE_KEY = 'secret' RECAPTCHA_OPTIONS = dict( theme='custom', custom_theme_widget='recaptcha_widget' ) ``` That said there is [a hardcoded template](https://github.com/lepture/flask-wtf/blob/master/flask_wtf/recaptcha/widgets.py#L10) that you can override with undocumented `RECAPTCHA_TEMPLATE` option. Put whatever you like in there and it will be used as base for all themes recaptcha supports. One more option is to extend from `RecaptchaField` and make it use your custom `RecaptchaWidget` this way you can tell it to `flask.render_template()` with whatever template you like instead of hardcoding html into config.
You can now do this quite easily, actually. First, set (or extend) `RECAPTCHA_OPTIONS` to a dictionary containing `{'theme': 'custom'}` (and ensure you're doing `app.config.from_object(__name__)` or similar), and then ensure that your template contains all of the required DOM elements specified under the Custom Theming section found [here](https://developers.google.com/recaptcha/docs/customization). Example from a recent site: ``` # IN YOUR MAIN PYTHON FILE RECAPTCHA_OPTIONS = {'theme': 'custom'} RECAPTCHA_PRIVATE_KEY = 'private_key' RECAPTCHA_PUBLIC_KEY = 'public_key' app = Flask(__name__) app.config.from_object(__name__) # IN THE TEMPLATE (THIS USES TWITTER BOOTSTRAP'S FORM FORMATTING) <div class='control-group' id='recaptcha_wrapper'> <label class='control-label'>Enter the words above:</label> <div class='controls'> <input type='text' id='recaptcha_response_field'></input> <a id="recaptcha_reload" class="btn" href="javascript:Recaptcha.reload()"><i class="icon-refresh"></i></a> <a class="btn recaptcha_only_if_image" href="javascript:Recaptcha.switch_type('audio')"> <i title="Get an audio CAPTCHA" class="icon-headphones"></i></a> <a class="btn recaptcha_only_if_audio" href="javascript:Recaptcha.switch_type('image')"> <i title="Get an image CAPTCHA" class="icon-picture"></i></a> <a class="btn" href="javascript:Recaptcha.showhelp()"><i class="icon-question-sign"></i></a> {# This causes wtf-forms to drop in the js that populates the above elements #} {{ form.recaptcha() }} </div> </div> ```
How to use a recaptcha custom theme with Flask-WTF
[ "python", "flask", "recaptcha", "flask-wtforms" ]
When I type ``` import matplotlib.pyplot as plt ``` in my IPython notebook I get ``` ImportError: No module named matplotlib.pyplot ``` Shouldn't this module be installed by default? Do I need to install it separately? If so, how can I do it? I use a virtual environment on Linux to install stuff (including IPython).
In my Python setup (2.7.3, Linux), I installed `matplotlib` as well as `IPython` via `pip`. `matplotlib` is available from the normal Python interpreter as well as from IPython (as expected). The order of installation does not matter. And no, `IPython` should not have `matplotlib` installed by default, this does not make sense.
Matplotlib is not installed by default; there are instructions on the [IPython install page](http://ipython.org/install.html) to install it on Linux: > Optionally install additional tools for scientific computing: On > Ubuntu or other Debian-based distributions, type at the shell: > > ``` > sudo apt-get install python-matplotlib python-scipy python-pandas python-sympy python-nose > ``` > > On Fedora 18 and newer related distributions, use: > > ``` > sudo yum install python-matplotlib scipy python-pandas sympy > python-nose > ``` Though if you are using a Python virtualenv, you should be able to use `pip install matplotlib` when you are in the virtual environment.
How to get matplotlib working under IPython?
[ "python", "matplotlib", "ipython" ]
I am converting a float to an integer in the below code. But the resultant output is not correct for nickels. **Code:** ``` actual = 25 paid = 26.65 cents = (paid-actual)*100 quarters = int(cents/25) cents = cents %25 dimes = int(cents/10) cents = cents %10 nickels = int(cents/5) print quarters, dimes, nickels,cents print 5.0/5,int(5.0/5) ``` **Output:** ``` 6 1 0 5.0 1.0 1 ``` **Expected output** ``` 6 1 1 5.0 1.0 1 ``` If I explicitly do int(5.0/5) I get 1, but when the same computation is assigned to a variable in my code, I get 0. I am not sure why. Can someone explain?
Floating point numbers are not guaranteed to be spot on the number you expect; they can be just barely off. Say `5.0` is actually `4.999...`: since `int()` truncates (rounds down), you get your error. Many banks give up on the floating point issue entirely and just work with $1.00 = 100 cents. I would advise you do the same, like this: ``` actual = 25 paid = 26.65 cents = int(round(paid*100)) #Turns 26.65 into 2665 before you do any float math dollars = cents / 100 cents %= 100 quarters = cents / 25 cents %= 25 dimes = cents / 10 cents %= 10 nickels = cents / 5 print quarters, dimes, nickels,cents print 5.0/5,int(5.0/5) ``` **Note** that this outputs `2 1 1 5` because that's 2 quarters, 1 dime, and 1 nickel = $.65. Typically you want to round as LATE as possible to maintain precision, but when you are working with money, I think working entirely with ints makes the nightmare of floats go away faster. Also, since you are using 2.6, you will need to cast to `int()` because `round()` doesn't return an integer until 3.1.
[Floating point numbers cannot represent all real numbers](http://docs.python.org/2/tutorial/floatingpoint.html). Every time you do anything with floating-point numbers, you are approximating the exact result by the closest thing your floating-point representation can represent. When you write ``` 26.65 ``` Python actually uses ``` 26.64999999999999857891452847979962825775146484375 ``` When you do math with floating point numbers, the result is rounded to the nearest representable number. `print` truncates floating-point numbers to 12 decimal places, so the small inaccuracies aren't visible, but when you compute ``` int(cents/5) ``` `cents` is actually `4.999999999999858`, and `cents/5` is `0.9999999999999716`, which rounds down to `0`.
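(Editor's sketch, not part of the original answer.) The standard-library `decimal` module makes the hidden value visible, and rounding to integer cents once, up front, makes the later arithmetic exact:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value the literal 26.65 is stored as.
exact = Decimal(26.65)
print(exact)  # 26.64999999999999857891452847979962825775146484375

# Converting to integer cents before any further math avoids the truncation bug.
cents = int(round((26.65 - 25) * 100))
print(cents)  # 165
```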
int() conversion of float in python
[ "python", "python-2.6" ]
How would I take the contents of the file and change it into a dictionary? The contents would include menu items in this form: ``` 1 Chocolate ice cream 2 Green tea 8 Shrimp roll ``` And I want the key to be the number and the value to be the food item. So far, I got: ``` d = {} for line in menu.read().strip().split(): d[line[0]] = d[line[1:]] return d ``` But this comes out skewed...
Loop over the file object, line by line, then split the lines: ``` with open('inputfilename.txt') as menu: d = {} for line in menu: key, value = line.split(None, 1) d[key] = value.strip() return d ``` The `.split(None, 1)` call applies the default split algorithm (split on arbitrary-width whitespace), but limits it to just *one* split to return just 2 items. The advantage is that lines that *start* with whitespace (such as the `' 8 Shrimp roll'` line in your post) are handled correctly too as the leading whitespace is removed before splitting. This produces: ``` {'2': 'Green tea', '1': 'Chocolate ice cream', '8': 'Shrimp roll'} ```
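(Editor's sketch, not in the original answer.) The same `split(None, 1)` logic fits in a dict comprehension; the sample lines below are taken from the question:

```python
# Sample lines as they would come from iterating over the menu file.
lines = ['1 Chocolate ice cream\n', '2 Green tea\n', ' 8 Shrimp roll\n']

# split(None, 1) splits on any whitespace, at most once, ignoring leading spaces.
d = {key: value.strip()
     for key, value in (line.split(None, 1) for line in lines)}
print(d)  # {'1': 'Chocolate ice cream', '2': 'Green tea', '8': 'Shrimp roll'}
```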
You can try this: ``` d = {} for line in menu.read().strip().split(): d[line[0]] = line[1:] # you want just line[1:], rather than d[line[1:]] return d ``` The issue here is that `d[line[1:]]` gives you the value mapped by the key `line[1:]`.
Converting file to dict
[ "python", "string", "file", "dictionary" ]
I have a list: ``` ab = [1, 2, 'a', 'b', 'c'] ``` I did: ``` strab = str(ab) ``` So `strab` is now a string. I want to cast that string back into a list. How can I do that?
The easiest and safest way would be to use [`ast.literal_eval()`](http://docs.python.org/2/library/ast.html#ast.literal_eval): ``` import ast ab = [1, 2, 'a', 'b', 'c'] # a list strab = str(ab) # the string representation of a list strab => "[1, 2, 'a', 'b', 'c']" lst = ast.literal_eval(strab) # convert string representation back to list lst => [1, 2, 'a', 'b', 'c'] ab == lst # sanity check: are they equal? => True # of course they are! ``` Notice that calling [`eval()`](http://docs.python.org/2/library/functions.html#eval) also works, but [it's not safe](https://stackoverflow.com/questions/661084/security-of-pythons-eval-on-untrusted-strings) and you should not use it: ``` eval(strab) => [1, 2, 'a', 'b', 'c'] ```
Use the ast package: ``` import ast lst = ast.literal_eval(strab) ```
How to cast string back into a list
[ "python", "string", "list", "casting" ]
I want a good way to improve my SQL code; I have to use an inner join only when a condition is met. Currently I duplicate the code: ``` @SystemMerge bit if (@SystemMerge=1) BEGIN SELECT ....... FROM myTable INNER JOIN table ON table.param1=myTable.param1 INNER JOIN systemTable on systemTable.param2=myTable.param2 END ELSE BEGIN SELECT ....... FROM myTable INNER JOIN table ON table.param1=myTable.param1 END ``` and I would like to do it in a way like this: ``` @SystemMerge bit BEGIN SELECT ....... FROM myTable INNER JOIN table ON table.param1=myTable.param1 ***//the next 4 lines are not working, but this is pseudocode of what I want:*** if (@SystemMerge=1) begin INNER JOIN systemTable on systemTable.param2=myTable.param2 end ``` **edit:** the solution (thanks to @Damien\_The\_Unbeliever): ``` LEFT JOIN systemTable ON systemTable.param2=myTable.param2 WHERE ((@SystemMerge=1 AND systemTable.param2 is not null) OR (@SystemMerge=0 OR @SystemMerge is null)) ```
This should (approximately) do the same thing: ``` SELECT ....... FROM myTable INNER JOIN table ON table.param1=myTable.param1 LEFT JOIN systemTable on systemTable.param2=myTable.param2 and @SystemMerge = 1 WHERE (@SystemMerge = 0 OR systemTable.NonNullableColumn IS NOT NULL) ``` Of course, this also means that any other references to columns within `systemTable` must be written to expect such columns to be `NULL`.
How about dynamic SQL? ``` declare @sel varchar(max) set @sel = ' SELECT ....... FROM myTable INNER JOIN table ON table.param1=myTable.param1 ' if (@SystemMerge=1) begin set @sel = @sel+'INNER JOIN systemTable on systemTable.param2=myTable.param2' end exec(@sel) ```
SQL do inner join if condition met
[ "sql", "sql-server" ]
If I have variables `x` and `y`, such that: * `x` is always a string * `y` can either be a string or a list of strings How can I create a list `z == [x, <all elements of y>]`? For instance: ``` x = 'x' y = 'y' # create z assert z == ['x', 'y'] ``` ``` x = 'x' y = ['y', 'y2'] # create z assert z == ['x', 'y', 'y2'] ```
``` z = [x] + (y if isinstance(y, list) else [y]) ``` Generally I'd avoid having a `y` that could be either a string or a list, though: it seems unnecessary.
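(Editor's sketch.) Wrapped in a function, the one-liner covers both cases from the question; the function name is invented here:

```python
def combine(x, y):
    """Prepend string x to y, where y may be a string or a list of strings."""
    # isinstance dispatches on the two shapes y is allowed to take.
    return [x] + (y if isinstance(y, list) else [y])

assert combine('x', 'y') == ['x', 'y']
assert combine('x', ['y', 'y2']) == ['x', 'y', 'y2']
```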
``` z = [x] if isinstance(y, list): z.extend(y) else: z.append(y) ```
Create a list of 2 variables
[ "python", "string", "list" ]
Is there a canonical location to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during an IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
Thanks to the [two](http://docs.python.org/2/install/#how-installation-works) [additional](http://docs.python.org/2/install/#alternate-installation-the-home-scheme) links, I found not only the intended answer to my question, but also a solution that I like even more and that - ironically - was also explained in my first search result, but obfuscated by all the version-(in)dependent site-package lingo. ## Answer to original question: default folder I wanted to know if there was a canonical (as in "default") location for my self-written packages. And that exists: ``` >>> import site >>> site.USER_SITE 'C:\\Users\\ojdo\\AppData\\Roaming\\Python\\Python27\\site-packages' ``` And for a Linux and Python 3 example: ``` ojdo@ubuntu:~$ python3 >>> import site >>> site.USER_SITE '/home/ojdo/.local/lib/python3.6/site-packages' ``` The docs on [user scheme package installation](http://docs.python.org/2/install/#alternate-installation-the-user-scheme) state that folder `USER_SITE` - if it exists - will be automatically added to your Python's `sys.path` upon interpreter startup, no manual steps needed. --- ## Bonus: custom directory for own packages 1. Create a directory anywhere, e.g. `C:\Users\ojdo\Documents\Python\Libs`. 2. Add the file `sitecustomize.py` to the site-packages folder of the Python installation, i.e. in `C:\Python27\Lib\site-packages` (for all users) or `site.USER_SITE` (for a single user). 3. This file then is filled with the following code: ``` import site site.addsitedir(r'C:\Users\ojdo\Documents\Python\Libs') ``` 4. Voilà, the new directory now is automatically added to `sys.path` in every (I)Python session. How it works: Package [site](http://docs.python.org/2/library/site.html), that is automatically imported during every start of Python, also tries to import the package `sitecustomize` for custom package path modifications. In this case, this dummy package consists of a script that adds the personal package folder to the Python path.
Place the source of your package wherever you'd like, but at least give your package a minimal `setup.py` file, immediately outside the package: ``` import setuptools setuptools.setup(name='mypackage') ``` Then fake-install your package into your python install's `site-packages` by running: ``` python setup.py develop ``` This is a lot like running `python setup.py install`, except the `egg` just points to your source tree, so you don't have to `install` after every source code change. Finally, you should be able to import your package: ``` python -c "import mypackage as mp; print mp.awesomefunction()" ```
Where shall I put my self-written Python packages?
[ "python" ]
I've got these tables in my database: Tourist - this is the first table ``` Tourist_ID - primary key Extra_Charge_ID - foreign key name...etc... ``` **EXTRA\_CHARGES** ``` Extra_Charge_ID - primary key Excursion_ID - foreign key Extra_Charge_Description ``` **Tourist\_Extra\_Charges** ``` Tourist_Extra_charge_ID Extra_Charge_ID - foreign key Tourist_ID - foreign key ``` **Reservations** ``` Reservation_ID - primary key ..... ``` **Tourist\_Reservations** ``` Tourist_Reservation_ID Reservation_ID - foreign key Tourist_ID - foreign key ``` So here is my example: I've got a reservation with `Reservation_ID` - 27. This reservation has two tourists with `Tourist_ID` - 86 and `Tourist_ID` - 87. The tourist with id 86 has extra charges with `Extra_Charge_ID` - 7 and `Extra_Charge_ID` - 11. Is it possible to make an SQL query that returns the name and ID of each tourist and then all of their extra charges? The output may look like this: ``` Tourist_ID : 86 Name: John Extra_Charge_ID - 7 Extra_Charge_ID - 11 Tourist_ID: 87 Name: Erika Extra_Charge_ID - 10 ``` (Here is the query I made to get the Extra_Charge_Description of all of the tourists with Reservation_ID = 27, but I don't know how to change it to get the names above) ``` Select EXTRA_CHARGES.Extra_Charge_Description,TOURIST_EXTRA_CHARGES.Tourist_ID FROM EXTRA_CHARGES INNER JOIN TOURIST_EXTRA_CHARGES on EXTRA_CHARGES.Extra_Charge_ID = TOURIST_EXTRA_CHARGES.Extra_Charge_ID INNER JOIN TOURIST_RESERVATION on TOURIST_EXTRA_CHARGES.Tourist_ID = TOURIST_RESERVATION.Tourist_ID INNER JOIN RESERVATIONS on RESERVATIONS.Reservation_ID = TOURIST_RESERVATION.Reservation_ID where RESERVATIONS.Reservation_ID=27 ```
Your database schema is not completely clear to me, but it seems you can link tourists from the **Tourist** table to their extra charges in the **EXTRA\_CHARGES** table via the **Tourist\_Extra\_Charges** table like this: ``` SELECT T.Tourist_ID ,T.Tourist_Name ,EC.Extra_Charge_ID ,EC.Extra_Charge_Description FROM Tourist AS T INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID; ``` **EDIT** If you want to be able to filter on *Reservation\_ID*, you'll have to join the tables **Tourist\_Reservations** and **Reservations** as well, like this: ``` SELECT T.Tourist_ID ,T.Tourist_Name ,EC.Extra_Charge_ID ,EC.Extra_Charge_Description FROM Tourist AS T INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID INNER JOIN Tourist_Reservations AS TR ON T.Tourist_ID = TR.Tourist_ID INNER JOIN Reservations AS R ON TR.Reservation_ID = R.Reservation_ID WHERE R.Reservation_ID = 27; ``` As for your database schema: please note that the field *Extra\_Charge\_ID* is not necessary in your **Tourist** table: you already link tourists to extra charges via the **Tourist\_Extra\_Charges** table. It can be dangerous to the sanity of your data to make these kind of double connections.
You can have it in delimited list with `GROUP_CONCAT` like: ``` SELECT Tourist_ID, Name, GROUP_CONCAT(Extra_Charge_ID SEPARATOR ',') FROM Tourist GROUP BY Tourist_ID ```
SQL subquery with inner join
[ "sql" ]
This is a simple program I am working on to learn Python because I am a beginner. How would I add an exception if the user types something other than y, Y, n, N? I have searched everywhere but can't seem to find what exception to use. Thanks for everyone's help. EDIT: I adjusted my code. The only thing that is not working is if(welcomeString.strip().lower() != 'n' or 'y'): welcomeString = input('Not a valid choice\nWould you like to reverse a string?(y/n):'). It does not recognize when the user types in y or n. It works for other letters though. EDIT2: Everything is working as expected until the user types in invalid input a second time. The first time it will say "Not a valid choice", but the second time, the program will exit out. ``` import sys welcomeString = input('Welcome to String Reverser\nWould you like to reverse a string?(y/n)') if not welcomeString.strip().lower() in ['n','y']: welcomeString = input('Not a valid choice\nWould you like to reverse a string?(y/n):') if welcomeString.strip().lower() == 'n': print("Thanks for using String Reverser") while welcomeString.strip().lower() == 'y': myString = input("What string would you like to reverse?:") ReverseString = myString[::-1] print (("Your Reversed String is %s") % ReverseString) welcomeString = input("Would you like to reverse another string(y/n)") ```
In this case you do not really need an exception - an exception tends to be an *exceptional* case that prevents the program from continuing; if it can still continue, but under certain circumstances due to user input or conditions, it may use a Warning to signal that. Here you can easily check input using any number of methods and repeat till you get valid input :). If you are determined to use exceptions: you can look [here](http://docs.python.org/2/tutorial/errors.html) for more details about how to use exceptions and [here](http://docs.python.org/2/library/exceptions.html#bltin-exceptions) for how to subclass exceptions and make a user-defined exception. So you can do 3 things: 1) Have an assertion - this raises an AssertionError with your text as the error message: ``` a = 1 b = 2 assert a==b, "A does not Equal B" ``` An assert is typically for checking bounds, e.g. `assert index >= 0`, and tends not to belong in critical code, but you can use them for testing if it's your own personal code. For your case you can have an if-else chain like you currently do, or use set operations. So, as @Ketouem says above, you can have a list of all the letters and check membership, or a dict (if you had more letters, say 100, this would be slightly faster). The Python wiki gives general guidelines for good uses of an assert: > Places to consider putting assertions: > > * checking parameter types, classes, or values > * checking data structure invariants > * checking "can't happen" situations (duplicates in a list, contradictory state variables.) > * after calling a function, to make sure that its return is reasonable > > -- [Python Wiki](http://wiki.python.org/moin/UsingAssertionsEffectively) 2) You can use one of the built-in exceptions (look [here](http://docs.python.org/2/library/exceptions.html#bltin-exceptions)) and *raise* one of those; e.g.
``` if(condition_not_met): raise ValueError("You did not enter a correct option") ``` These typically have [specific uses](http://docs.python.org/2/library/exceptions.html#bltin-exceptions) in mind though. 3) You can do what Numpy and many libraries do and create your own exceptions (you can also use a library's such as Numpy's [LinAlgError](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.LinAlgError.html#numpy.linalg.LinAlgError), but unless you are manually catching and rethrowing usually these have domain specific uses. To create your own exception you should subclass Exception *not* BaseException e.g. ``` class MyInputError(Exception): def __init__(self, value): self.value = value def __str__(self): return repr(self.value) ``` and later call this ``` if(condition_not_met): raise MyInputError("Wrong input. Please try again") ``` lastly you can always use if then's and other control structures and then exit - but this isn't that common in Python and is more common in languages like C. In general trying out something and catching errors is one of the more key paradigms in Python: > **EAFP** Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes > and catches exceptions if the assumption proves false. This clean and > fast style is characterized by the presence of many try and except > statements. The technique contrasts with the LBYL style common to many > other languages such as C. > > --[Python Glossary](http://docs.python.org/2/glossary.html)
Invalid user input should not normally result in an exception being thrown. Exceptions are for exceptional conditions. This isn't a hard-and-fast rule and you could argue that some standard exceptions don't follow it, but the way to deal with this kind of user input validation is regular flow control. Use an `if` statement to check if it's valid, and if it's not, ask them again or take some other sensible action.
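(Editor's sketch of that flow-control approach; the function name and prompts are invented, and the reader is injectable only so the loop can be exercised without a live prompt.)

```python
def ask_yes_no(prompt, read=input):
    """Keep asking until the answer is y/Y/n/N; return True for yes."""
    while True:
        answer = read(prompt).strip().lower()
        if answer in ('y', 'n'):
            return answer == 'y'
        # Ordinary control flow, not an exception: just ask again.
        prompt = 'Not a valid choice\nWould you like to reverse a string?(y/n): '

# Simulate a user who types an invalid answer first, then 'Y'.
replies = iter(['maybe', 'Y'])
result = ask_yes_no('Would you like to reverse a string?(y/n): ',
                    read=lambda _prompt: next(replies))
print(result)  # True
```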
Python adding an exception (edited)
[ "python" ]
Suppose we have many text files as follows: file1: ``` abc def ghi ``` file2: ``` ABC DEF GHI ``` file3: ``` adfafa ``` file4: ``` ewrtwe rewrt wer wrwe ``` How can we make one text file like below: result: ``` abc def ghi ABC DEF GHI adfafa ewrtwe rewrt wer wrwe ``` Related code may be: ``` import csv import glob files = glob.glob('*.txt') for file in files: with open('result.txt', 'w') as result: result.write(str(file)+'\n') ``` After this? Any help?
You can read the content of each file directly into the write method of the output file handle like this: ``` import glob read_files = glob.glob("*.txt") with open("result.txt", "wb") as outfile: for f in read_files: with open(f, "rb") as infile: outfile.write(infile.read()) ```
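(Editor's sketch, not in the original answer.) For large inputs, `shutil.copyfileobj` from the standard library streams each file in chunks instead of reading it fully into memory; the function name here is invented:

```python
import shutil

def concat_files(paths, out_path):
    """Stream each input file into out_path in chunks (memory-friendly)."""
    with open(out_path, 'wb') as outfile:
        for path in paths:
            with open(path, 'rb') as infile:
                shutil.copyfileobj(infile, outfile)  # copies in buffered chunks
```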
The `fileinput` module is designed perfectly for this use case. ``` import fileinput import glob file_list = glob.glob("*.txt") with open('result.txt', 'w') as file: input_lines = fileinput.input(file_list) file.writelines(input_lines) ```
combine multiple text files into one text file using python
[ "python" ]
I'm trying to write a basic algorithm for encrypting a file. It takes the ASCII value of each character in a string and moves it up or down an amount depending on how long the password is, then you can layer more passwords on top. ``` def encrypt(s): lenStr=s.__len__() #used later for working how far the int is moved s=list(s) #converts the string to a list for x in s: s[x]=ord(s[x]) #the same index of the list is = to the value of the string s[x]=chr(s[x])#is where it eventually gets changed back to a str ``` `s[x]=ord(s[x])` is the line throwing the error; I added int() around it but it didn't help, same error.
You're getting the `TypeError` exception because the value of `x` in the `s[x]=ord(s[x])` statement is one of the elements of the `s` list, so it's an individual character from the string argument passed to `encrypt()`. To fix that, just loop through all the possible indices of the `s` list, which happens to be the same as the length of the original string: ``` def encrypt(s): lenStr=len(s) s=list(s) # convert the string to a list for i in range(lenStr): s[i]=ord(s[i]) s[i]=chr(s[i]) ``` This will allow your code to run without getting that error. From your description of the encryption algorithm you're going to implement, one thing to watch out for is producing illegal 8-bit character values out of the range of 0-255. You can avoid that problem by simply applying the mod operator `%` to the intermediate results to keep the values in the proper range. Here's what I mean: ``` def encrypt(s): lenStr = len(s) s = list(s) # convert the string to a list for i in range(lenStr): s[i] = chr((ord(s[i]) + lenStr) % 256) return ''.join(s) # convert list back into a string ``` Likewise, you'll have to do the same thing when you decrypt a string: ``` def decrypt(s): lenStr = len(s) s = list(s) # convert the string to a list for i in range(lenStr): s[i] = chr((ord(s[i]) - lenStr) % 256) return ''.join(s) # convert list back into a string enc = encrypt('Gnomorian') print('encrypted:', enc) dec = decrypt(enc) print('decrypted:', dec) ``` Output: ``` encrypted: Pwxvx{rjw decrypted: Gnomorian ``` Also note that not all the characters whose `ord()` values are in the range of 0-255 are printable, so you may want to restrict the encryption transformation even more if that's a requirement (that the encrypted version be printable).
`x` is a character from the string, not an integer. Let me illustrate: ``` >>> s = list('abcd') >>> for x in s: ... print(x) ... a b c d >>> ``` You want x to be integer values from 0 to the length of the string, like this: ``` >>> for x in range(len(s)): ... print(x) ... 0 1 2 3 >>> ``` So, your function should probably look like this (untested): ``` def encrypt(s): lenStr=s.__len__() #used later for working how far the int is moved s=list(s) #converts the string to a list for x in range(len(s)): s[x]=ord(s[x]) #the same index of the list is = to the value of the string s[x]=chr(s[x])#is where it eventualy gets changed back to a str ```
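(Editor's sketch.) `enumerate` sidesteps the indexing mistake entirely, since it yields the index and the character together; the mod-256 wraparound follows the accepted answer, and the function name is invented:

```python
def shift(s, offset):
    """Shift every character of s by offset places (mod 256)."""
    chars = list(s)
    for i, ch in enumerate(chars):  # i is the index, ch the character
        chars[i] = chr((ord(ch) + offset) % 256)
    return ''.join(chars)

print(shift('Gnomorian', 9))             # Pwxvx{rjw
print(shift(shift('Gnomorian', 9), -9))  # Gnomorian
```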
python list indices must be integers not string
[ "python", "python-3.x" ]
If I have a list of strings, such as: ``` lst = ['aa bb', 'cc dd', 'cc aa'] ``` How can I get this into a list of unique words such as this: ``` ['aa', 'bb', 'cc', 'dd'] ``` using a comprehension? Here's as far as I've gotten, to no avail: ``` wordList = [x.split() for row in lst for x in row] ```
The simplest approach I think is probably this, although not the most efficient. ``` set(' '.join(lst).split()) ``` If you really want a list, then just wrap that in a call to `list()`
You want to loop over the split values: ``` wordList = [word for row in lst for word in row.split()] ``` then use a set to make the whole list unique: ``` wordList = list({word for row in lst for word in row.split()}) ``` or just use a set and be done with it: ``` wordList = {word for row in lst for word in row.split()} ``` Demo: ``` >>> lst = ['aa bb', 'cc dd', 'cc aa'] >>> list({word for row in lst for word in row.split()}) ['aa', 'cc', 'dd', 'bb'] >>> {word for row in lst for word in row.split()} set(['aa', 'cc', 'dd', 'bb']) ``` If order matters (the above code returns words in *arbitrary* order, the sorted order is a coincidence by virtue of the implementation details of CPython), use a separate set to track duplicate values: ``` seen = set() wordList = [word for row in lst for word in row.split() if word not in seen and not seen.add(word)] ``` To illustrate the difference, a better input sample: ``` >>> lst = ['the quick brown fox', 'brown speckled hen', 'the hen and the fox'] >>> seen = set() >>> [word for row in lst for word in row.split() if word not in seen and not seen.add(word)] ['the', 'quick', 'brown', 'fox', 'speckled', 'hen', 'and'] >>> {word for row in lst for word in row.split()} set(['and', 'brown', 'fox', 'speckled', 'quick', 'the', 'hen']) ```
How do I create a list of words from a list of sentences?
[ "", "python", "list-comprehension", "" ]
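On Python 3.7+, plain dicts preserve insertion order, so `dict.fromkeys` gives an order-preserving dedupe without the explicit `seen` set used above — a compact variation, not taken from either answer:

```python
lst = ['aa bb', 'cc dd', 'cc aa']

# dict keys are unique and (since Python 3.7) keep first-seen order
word_list = list(dict.fromkeys(word for row in lst for word in row.split()))
```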
I have a log table that has something like this: tablelog ``` date | time | event | program | ordendate 20130722 070000 executing program1 20130722 20130722 070040 end ok program1 20130722 20130722 070100 executing program1 20130722 20130722 070140 end ok program1 20130722 ``` I have a query ``` select a.date || a.time as datetimeStart, b.date || b.time as datetimeStop, a.program, a.ordendate from tablelog a, tablelog b where a.date || a.time < b.date || b.time and a.event = "executing" and b.event = "end ok" ``` This returns 3 executions but I only have 2. How can I fix this query? Thank you!
As far as I understand it, you want to list sequential start/stops by program(?) This uses `LEAD` to do the work in a `CTE`, then just filters and orders using an outer query; ``` WITH cte AS ( SELECT CASE WHEN "event"='executing' THEN "date" || "time" END "datetimeStart", LEAD(CASE WHEN "event"='end ok' THEN "date" || "time" END) OVER(PARTITION BY "program" ORDER BY "date","time") "datetimeStop", "program", "ordendate" FROM tablelog ) SELECT * FROM cte WHERE "datetimeStart" IS NOT NULL AND "datetimeStop" IS NOT NULL ORDER BY "datetimeStart" ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!4/8e6d7/11).
The query that you are trying to do is best done using the analytic functions `lag()` or `lead()`: ``` select dateTimeStart, dateTimeStop, program, orderdate from (select tl.date || tl.time as datetimeStart, lead(tl.date || tl.time) over (partition by program order by date, time) as dateTimeStop, tl.* from tablelog tl ) tl where tl.event = 'Executing'; ```
SQL QUERY Oracle log analysis
[ "", "sql", "oracle", "plsql", "" ]
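Outside the database, the same pairing of each `executing` row with its matching `end ok` row can be sketched in a few lines of plain Python (hypothetical rows mirroring the question's log):

```python
rows = [
    ('20130722', '070000', 'executing', 'program1'),
    ('20130722', '070040', 'end ok',    'program1'),
    ('20130722', '070100', 'executing', 'program1'),
    ('20130722', '070140', 'end ok',    'program1'),
]

executions = []
pending = {}  # program -> start timestamp of the currently open "executing" event
for date, time, event, program in rows:
    stamp = date + time
    if event == 'executing':
        pending[program] = stamp
    elif event == 'end ok' and program in pending:
        executions.append((pending.pop(program), stamp, program))
```

This yields exactly one (start, stop) pair per execution, which is what the self-join in the question fails to do.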
I am trying to parse an HTML file ([demo.html](http://pastebin.com/P54pNvLY)) to make all relative links absolute. Here is how I try to do this in a Python script: ``` from bs4 import BeautifulSoup f = open('demo.html', 'r') html_text = f.read() f.close() soup = BeautifulSoup(html_text) for a in soup.findAll('a'): for x in a.attrs: if x == 'href': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp for a in soup.findAll('link'): for x in a.attrs: if x == 'href': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp for a in soup.findAll('script'): for x in a.attrs: if x == 'src': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp f = open("demo_result.html", "w") f.write(soup.prettify().encode("utf-8")) ``` However, the output file [demo\_result.html](http://pastebin.com/KRkxAJ4M) contains many unexpected changes. For example, ``` <script type="text/javascript" src="/scripts/ddtabmenu.js" /> /*********************************************** * DD Tab Menu script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com) * + Drop Down/ Overlapping Content- * This notice MUST stay intact for legal use * Visit Dynamic Drive at http://www.dynamicdrive.com/ for full source code ***********************************************/ </script> ``` changes to ``` <script src="http://www.esplanade.com.sg/scripts/ddtabmenu.js" type="text/javascript"> </script> </head> <body> <p> /*********************************************** * DD Tab Menu script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com) * + Drop Down/ Overlapping Content- * This notice MUST stay intact for legal use * Visit Dynamic Drive at http://www.dynamicdrive.com/ for full source code ***********************************************/ ``` Could someone please tell me where I am going wrong? Thanks and warmest regards.
It seems Beautiful Soup 4 is causing the problem. Just downgrade Beautiful Soup to version 3 and your problem will be solved: ``` import BeautifulSoup #This is version 3 not version 4 f = open('demo.html', 'r') html_text = f.read() f.close() soup = BeautifulSoup.BeautifulSoup(html_text) print soup.contents for a in soup.findAll('a'): for x in a.attrs: if x == 'href': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp for a in soup.findAll('link'): for x in a.attrs: if x == 'href': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp for a in soup.findAll('script'): for x in a.attrs: if x == 'src': temp = a[x] a[x] = "http://www.esplanade.com.sg" + temp f = open("demo_result.html", "w") f.write(soup.prettify().encode("utf-8")) ```
Your HTML code is a little messy. You have a self-closed `script` tag and you are closing it again ``` <script type="text/javascript" src="/scripts/ddtabmenu.js" /></script> ``` It breaks the DOM. Just remove the `/` from the end of `<script type="text/javascript" src="/scripts/ddtabmenu.js" />`
Parse HTML to edit links
[ "", "python", "html", "hyperlink", "beautifulsoup", "web-crawler", "" ]
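Whichever Beautiful Soup version is used, the link rewriting itself is safer with the standard library's `urllib.parse.urljoin` than with plain string concatenation: it resolves relative paths correctly and leaves already-absolute URLs untouched. A small illustration:

```python
from urllib.parse import urljoin

base = 'http://www.esplanade.com.sg'
resolved = urljoin(base, '/scripts/ddtabmenu.js')     # relative href made absolute
untouched = urljoin(base, 'http://www.dynamicdrive.com/')  # absolute href kept as-is
```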
So, I've got a very basic deployment on an EC2 instance that largely works, except for a couple of large issues. Right now I'm just ssh'ing into the box and running ``` python -m SimpleHTTPServer 80 ``` and I have the box on a security group that allows http requests in on Port 80. This seems to work, but if I leave it alone for a while (1-2 hours usually) my elastic ip will start returning 404s. I really need this server to stay up for demos to third parties. Any ideas on how to make sure it stays up? Additionally it goes down when I close the terminal that's ssh'd into my box, which is extremely non-ideal as I would like this demo to stay up even if my computer is off. That's a less urgent matter, but any advice on that would also be appreciated.
`SimpleHTTPServer` just serves static pages on port 80, mainly for use during development. For production usage (if you want to use EC2) I recommend you read up on `Apache` or `nginx`. Basically you want a web server that runs on Linux. If you think your site will remain static files (HTML, CSS, JS) I recommend you host them on Amazon S3 instead. S3 is cheaper and way more reliable. Take a look at this answer for instructions: [Static hosting on Amazon S3 - DNS Configuration](https://stackoverflow.com/questions/8312162/static-hosting-on-amazon-s3-dns-configuration/8357318#8357318) Enjoy!
Use `screen`! Here's a quick tutorial: <http://www.nixtutor.com/linux/introduction-to-gnu-screen/> Essentially just ssh in, open a new window via screen, start the server via `python -m SimpleHTTPServer 80`, then detach from the window. Additionally, you should be able to close your terminal and it should stay up.
AWS deployment goes down frequently
[ "", "python", "deployment", "amazon-web-services", "amazon-ec2", "" ]
Consider this sample python code. It reads from stdin and writes to a file. ``` import sys arg1 = sys.argv[1] f = open(arg1,'w') f.write('<html><head><title></title></head><body>') for line in sys.stdin: f.write("<p>") f.write(line) f.write("</p>") f.write("</body></html>") f.close() ``` Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`.
Just do ``` >>> import sys >>> f = sys.stdout >>> f.write('abc') abc ``` Now you just need to do `f = sys.stdout` instead of `f = open(fileName)`. (And remove `f.close()`) **Also**, Please consider using the following syntax for files. ``` with open(fileName, 'r') as f: # Do Something ``` The file automatically gets closed for you this way.
Names in Python are just bindings. Therefore: ``` f = sys.stdout ``` Just binds the name `f` to *the object* that's *also bound* to `sys.stdout`... Note that since they're both the same object, any changes you make to `f` or `sys.stdout` at this point will affect *both*... So don't do `f.close()` as you normally wouldn't want to do `sys.stdout.close()`...
Is it possible to have an alias for sys.stdout in python?
[ "", "python", "" ]
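The same binding trick works with any file-like object, which is handy for testing: bind the name to an `io.StringIO` and the writes are captured in memory (an illustration, not from the answers above):

```python
import io
import sys

def render(f):
    # works unchanged whether f is an open file, sys.stdout, or a StringIO
    f.write('<p>')
    f.write('hello')
    f.write('</p>')

buf = io.StringIO()
render(buf)          # captured in memory
render(sys.stdout)   # written to the terminal
```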
I would like to process a file line by line. However I need to sort it first, which I normally do by piping: ``` sort --key=1,2 data | ./script.py ``` What's the best way to call sort from within Python? Searching online I see the `subprocess` or the `sh` module might be possibilities? I don't want to read the file into memory and sort in Python as the data is very big.
It's easy. Use `subprocess.Popen` to run `sort` and read its stdout to get your data. ``` import subprocess myfile = 'data' sort = subprocess.Popen(['sort', '--key=1,2', myfile], stdout=subprocess.PIPE) for line in sort.stdout: your_code_here sort.wait() assert sort.returncode == 0, 'sort failed' ```
~~I believe sort will read all data in memory, so I'm not sure you will win anything~~ but you can use `shell=True` in [`subprocess`](http://docs.python.org/2/library/subprocess.html) and use a pipeline ``` >>> subprocess.check_output("ls", shell = True) '1\na\na.cpp\nA.java\na.php\nerase_no_module.cpp\nerase_no_module.cpp~\nWeatherSTADFork.cpp\n' >>> subprocess.check_output("ls | grep j", shell = True) 'A.java\n' ``` > **Warning** > Invoking the system shell with shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
Best way to pipe output of Linux sort
[ "", "python", "" ]
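A self-contained variant of the `Popen` approach, feeding `sort` through stdin instead of a file (assumes a Unix `sort` on the PATH):

```python
import subprocess

data = b'b 2\na 1\nc 3\n'
proc = subprocess.Popen(['sort'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(data)  # send the input, read all sorted output
lines = out.decode().splitlines()
```

For very large inputs, prefer the file-reading form from the accepted answer so the output is streamed line by line rather than buffered in one `bytes` object.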
user table ``` ID | name 1 | ada 2 | bob 3 | tom ``` group Table ``` ID | name 1 | group A 2 | group B 3 | group C ``` user\_group Table ``` user_id | group_id 1 | 1 2 | 1 1 | 2 2 | 2 3 | 2 1 | 3 3 | 3 ``` Given a group of user ids: [1, 2, 3], how do I query the group that all users in the above list belong to? (in this case: Group B)
To get all groups that contain exactly the specified users (i.e. all specified users and no other users) ``` DECLARE @numUsers int = 3 SELECT ug.group_id --The Max doesn't really do anything here because all --groups with the same group id have the same name. The --max is just used so we can select the group name eventhough --we aren't aggregating across group names , MAX(g.name) AS name FROM user_group ug --Filter to only groups with three users JOIN (SELECT group_id FROM user_group GROUP BY group_id HAVING COUNT(*) = @numUsers) ug2 ON ug.group_id = ug2.group_id JOIN [group] g ON ug.group_id = g.ID WHERE user_id IN (1, 2, 3) GROUP BY ug.group_id --The distinct is only necessary if user_group --isn't keyed by group_id, user_id HAVING COUNT(DISTINCT user_id) = @numUsers ``` To get groups that contain all specified users: ``` DECLARE @numUsers int = 3 SELECT ug.group_id --The Max doesn't really do anything here because all --groups with the same group id have the same name. The --max is just used so we can select the group name eventhough --we aren't aggregating across group names , MAX(g.name) AS name FROM user_group ug JOIN [group] g ON ug.group_id = g.ID WHERE user_id IN (1, 2, 3) GROUP BY ug.group_id --The distinct is only necessary if user_group --isn't keyed by group_id, user_id HAVING COUNT(DISTINCT user_id) = 3 ``` SQL Fiddle: <http://sqlfiddle.com/#!6/0e968/3>
Try This: ``` Select t2.name FROM (Select group_id From user_group Group by group_id Having Count(user_id) = (Select Count(*) FROM User_Table)) AS T1 INNER JOIN Group_Table AS T2 ON T1.group_id = T2.ID ``` See Fiddle: <http://sqlfiddle.com/#!2/fa7250/4>
(SQL) Match users belong to which group given user_id[]
[ "", "sql", "" ]
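The "groups that contain all specified users" query can be checked end-to-end with the standard library's `sqlite3`, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE user_group (user_id INTEGER, group_id INTEGER);
    INSERT INTO user_group VALUES (1,1),(2,1),(1,2),(2,2),(3,2),(1,3),(3,3);
""")
users = (1, 2, 3)
rows = conn.execute("""
    SELECT group_id
    FROM user_group
    WHERE user_id IN (?, ?, ?)
    GROUP BY group_id
    HAVING COUNT(DISTINCT user_id) = ?
""", users + (len(users),)).fetchall()
```

Only group 2 ("group B") contains all three users, matching the expected answer.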
I am defining a class that represents a vector: ``` ''' An entity with size and direction ''' UNINITIALIZED = -1 class myVector(): def __init__(self,direction = UNINITIALIZED,size = UNINITIALIZED): self.direction = direction self.size = size ``` For using the class I imagine 2 scenarios: Either I know the vector's characteristics on initiation and then initiate it with these values: ``` v = myVector(4,2) ``` Or I do not know these on initiation and then I'm happy it will get the default values. However, with the above implementation a third scenario is realized - initiating a vector using only the first argument: ``` v = myVector(4) ``` In this case only the second parameter (*size*) would be assigned the default value and the resulting object would not make much sense. As I see it, **the desired behavior** in this case is either use both parameters or none. One way to implement this would be to raise an exception if this is the case. ``` def __init__(self,direction = UNINITIALIZED,size = UNINITIALIZED): if (direction != UNINITIALIZED) and (size == UNINITIALIZED): raise Exception('Use both parameters or none') self.direction = direction self.size = size ``` What do you think would be the pythonic way of doing this elegantly?
Size and direction sound like a tuple to me: ``` class myVector(): def __init__(self, sd=(UNINITIALIZED, UNINITIALIZED)): try: self.size, self.direction = sd except (ValueError, TypeError) as e: raise ValueError('two values of size and direction must be specified') ``` It is then called with a tuple of both size and direction if the defaults aren't wanted. If you didn't want to change the semantics to require passing a tuple, an alternative if you're not using other arguments is to change sd to `*args` and do the same thing - that seems less explicit to me though and means you can't use optional args for anything else.
You can also define the class like this: ``` class myVector(): def __init__(self,*direction_and_size): if not len(direction_and_size): direction_and_size = [UNINITIALIZED, UNINITIALIZED] assert len(direction_and_size) == 2, "Please provide both parameters" self.direction, self.size = direction_and_size >>> v = myVector() >>> v = myVector(4,2) >>> v = myVector(4) AssertionError: Please provide both parameters ```
python function use multiple defaults or no defaults at all
[ "", "python", "function", "default", "" ]
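A sentinel-based variant of the same both-or-none check, using `None` as the default and raising the more specific `ValueError` (a sketch, not the asker's exact class):

```python
class MyVector:
    def __init__(self, direction=None, size=None):
        # either both parameters are given, or neither is
        if (direction is None) != (size is None):
            raise ValueError('use both parameters or none')
        self.direction = direction
        self.size = size

v = MyVector(4, 2)
empty = MyVector()
try:
    MyVector(4)          # only one argument: rejected
    partial_ok = True
except ValueError:
    partial_ok = False
```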
I've been searching for hours now but can't find anything that actually works. I've got this in multiple records: ``` <p style="text-align: justify;"> SOME TEXT </p> <p style="text-align: justify;"> MORE TEXT </p> ``` I want to change it to this: ``` <p style="text-align: justify;"> SOME TEXT MORE TEXT </p> ``` I want to keep the line break but delete the first end tag and the second start tag. I tried this: ``` UPDATE my_table SET my_collumn = REPLACE(my_collumn,'</p> <p style="text-align: justify;">','') ``` But it doesn't detect it because of the line break between them. How can I solve this? Many thanks
If possible, you'd probably be better off doing this sort of replacement in your language of choice, which will offer stronger string handling capabilities than MySQL. That said, MySQL recognizes several C-style [character escapes](http://dev.mysql.com/doc/refman/5.0/en/string-literals.html) in strings, including both `\r` and `\n`; a CRLF in a MySQL string is therefore just `'\r\n'`.
I suggest you not use MySQL for string operations. This is not what a database is made for. Use PHP, Perl, ASP, whatever you are coding with. Problems you might run into: Instead of the common line break `\r\n` between the tags, you might have to parse different cases: ``` <blankspace>\r\n \n\n \n \r\n<blankspace>\r\n ... ``` Someday, you also might want to change ``` <p style="text-align: justify;"> ``` to ``` <p class="textClass"> ``` Then you'd need to change the SQL again. If you really want to do it, have a look at UDFs like [Regex Replace in MySQL](https://github.com/mysqludf/lib_mysqludf_preg#readme)
Detect line breaks in MySQL?
[ "", "html", "mysql", "sql", "replace", "line-breaks", "" ]
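In Python terms the point about `'\r\n'` is the same: it is a two-character CRLF sequence, so a replacement pattern that spans the line break simply includes it literally (hypothetical fragment mirroring the question's records):

```python
html = ('<p style="text-align: justify;"> SOME TEXT </p>\r\n'
        '<p style="text-align: justify;"> MORE TEXT </p>')

# drop the close/open tag pair but keep a line break between the texts
merged = html.replace('</p>\r\n<p style="text-align: justify;">', '\r\n')
```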
I tried to display one month of data, but I want to display data between two dates, e.g. between 01.06.2013 and 01.07.2013. How do I change this query to display data between two dates? Thanks in advance. ``` select b.ID ID, bt.type BULTEN_TYPE, b.ID TOPIC, b.Bul_Effect EFFECTT, b.Bul_Comment COMMENTS, concat(date_format(b.Bul_date,'%d.%m.%Y'), ' ', b.Bul_hour, ':', b.Bul_min) as BEGIN, concat(date_format(b.bitdate,'%d.%m.%Y'), ' ', b.bithour , ':', b.bitmin) as FINISH from bulten b, bulten_type bt, statu s WHERE b.Bul_Type = bt.ID and b.Status = s.ID and Bul_date >= date(now() - interval 1 month) order by ID desc; ```
This `WHERE` clause will get you exactly what you used as an example: ``` ... and Bul_date BETWEEN '1/6/2013' AND '1/7/2013' ... ``` Now, a more dynamic way of getting at what I *think* you want would be: ``` ... and Bul_date BETWEEN GETDATE() AND DATEADD(DAY, 1, GETDATE()) ... ``` that would get you everything between now, and a day from now. Now, the problem with the last example is that `GETDATE()` has a time on it so if you wanted to strip that (i.e. to start from midnight) you could do this: ``` ... and Bul_date BETWEEN DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) AND DATEADD(DAY, 1, DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))) ```
``` select b.ID ID, bt.type BULTEN_TYPE, b.ID TOPIC, b.Bul_Effect EFFECTT, b.Bul_Comment COMMENTS, concat(date_format(b.Bul_date,'%d.%m.%Y'), ' ', b.Bul_hour, ':', b.Bul_min) as BEGIN, concat(date_format(b.bitdate,'%d.%m.%Y'), ' ', b.bithour , ':', b.bitmin) as FINISH from bulten b, bulten_type bt, statu s WHERE b.Bul_Type = bt.ID and b.Status = s.ID and Month(Bul_date)=Month(GetDate()) and Year(Bul_date)=Year(GetDate()) order by ID desc; ```
How to list data between two dates in sql?
[ "", "sql", "sql-server", "" ]
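With ISO-formatted (YYYY-MM-DD) date strings, the range filter is a plain `BETWEEN`; here is a quick standard-SQL sketch via `sqlite3` (hypothetical table; the `GETDATE()` helpers above are SQL Server specific):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE bulten (id INTEGER, bul_date TEXT);
    INSERT INTO bulten VALUES
        (1, '2013-05-30'), (2, '2013-06-15'), (3, '2013-07-01'), (4, '2013-07-02');
""")
rows = conn.execute("""
    SELECT id FROM bulten
    WHERE bul_date BETWEEN '2013-06-01' AND '2013-07-01'
    ORDER BY id
""").fetchall()
```

ISO strings compare lexicographically in the same order as chronologically, which is what makes the plain string `BETWEEN` safe here.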
I want to poll a folder continuously for any new files, let's say every hour, and whenever it finds a new file, copy it to a specific location. I found code to [find latest file](https://stackoverflow.com/questions/6714491/python-script-to-test-for-most-recently-modified-file-inconsistent-results) and to [copy to another location](http://www.daniweb.com/software-development/python/threads/176391/how-to-move-files-to-another-directory-in-python). How do I merge these two to get the above desired result? This may also be helpful: [How to get the most recent file](https://stackoverflow.com/questions/9788119/how-to-get-the-most-recent-file)
For polling, the simplest solution is `time.sleep(n)`, which sleeps for `n` seconds. Your code would look something like this, then: ``` from time import sleep import sys try: while True: # code to find the latest file # code to copy it to another location sleep(3600) except KeyboardInterrupt: print("Quitting the program.") except: print("Unexpected error:", sys.exc_info()[0]) raise ``` (Because this loop can run forever, you should definitely wrap it in a `try`/`except` block to catch keyboard interrupts and other errors.) Cron jobs are a perfectly good option if you're only going to be on \*nix platforms, of course, but this provides platform independence.
The periodic nature of it suggests that you can use a cron job for it. You can set a cron job to run your Python script every hour. It's then the script that handles copying of the file. That is, if you're on a Unix machine: ``` crontab -e # this will open your crontab file, then add: 0 * * * * /path/to/your/script.py ``` The above will run at 0 minutes past every hour.
python script to poll a folder continuously after a certain time interval and copy the new files to another location
[ "", "python", "file", "copy", "polling", "" ]
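One polling pass can be isolated in a small function — list the source directory, copy anything not seen before with `shutil.copy2`, and let `sleep()` or cron drive the cadence. A sketch with illustrative names, exercised on throwaway directories:

```python
import os
import shutil
import tempfile

def copy_new_files(src_dir, dst_dir, seen):
    """Copy files from src_dir not yet in `seen`; return the newly copied names."""
    new_files = []
    for name in sorted(os.listdir(src_dir)):
        path = os.path.join(src_dir, name)
        if os.path.isfile(path) and name not in seen:
            shutil.copy2(path, os.path.join(dst_dir, name))
            seen.add(name)
            new_files.append(name)
    return new_files

# one polling pass over a temporary directory pair
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, 'report.txt'), 'w') as f:
    f.write('data')
seen = set()
first_pass = copy_new_files(src, dst, seen)
second_pass = copy_new_files(src, dst, seen)  # nothing new this time
```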
In a Django application, I have some checks in a form which will return error = "something". The thing is that `error` will not be defined unless there is an error. ``` mycharacters = Character.objects.filter(username_id=request.user.id) if(mycharacters.count() >= 5): error = True if not error: #save to DB ``` The problem is that if there is no error, the error variable will not exist. I have also thought about a possibility in order to avoid this error, which would be: ``` error = None #checks here if error == None: #save to DB ``` But I am not sure whether this would be the best approach. Is there any way to do `if error var does not exist:` in Python?
You can do the following: ``` error = mycharacters.count() >= 5 if not error: ... ``` **UPDATE** ``` error = mycharacters.count() >= 5 if error: to_json = {"incorrect":"Excedeed maximum"} else: # Save to DB ```
Either use an `else` branch: ``` if mycharacters.count() >= 5: error = True else: error = False ``` or initialize the variable to the least likely value, and reassign: ``` error = False if mycharacters.count() >= 5: error = True ``` The latter is often easier to read, but both approaches work (and so does the single-expression variant seen in other answers, even if that's harder to expand if you need to do more things). (note that if you need to do more than just a simple assignment to establish the "false" value, the `if/else` construct usually wins, but in this case the cost of assigning twice is marginal.) Checking if a variable exists can be done, but is verbose and bad programming style. Missing variables should be considered programming errors, not hidden program state.
variable 'error' referenced before assignment
[ "", "python", "django", "" ]
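For completeness, Python can literally test whether a name exists by catching `NameError`, though initialising the flag up front (as both answers suggest) is the idiomatic route:

```python
def has_error(count, limit=5):
    # idiomatic: the condition itself is the flag, so no name is ever unbound
    return count >= limit

# the literal "does this variable exist?" check, rarely the right tool
try:
    error  # deliberately never assigned
    defined = True
except NameError:
    defined = False
```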
Clutter is not doing the full animation. This is my current code: ``` from gi.repository import Clutter, Gtk import sys def onClick(actor, event): actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [280]) # clutter does not seem to be running this line actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [20]) def main(): Clutter.init(sys.argv) # Colors red = Clutter.Color().new(255, 0, 0, 255) black = Clutter.Color().new(0, 0, 0, 255) # Create Stage stage = Clutter.Stage() stage.set_title("Basic Usage") stage.set_size(400, 200) stage.set_color(black) # Rectangle Actor actor = Clutter.Rectangle() actor.set_size(100, 50) actor.set_position(150, 100) actor.set_color(red) actor.set_reactive(True) actor.connect("button-press-event", onClick) # Add Actor to the Stage stage.add_actor(actor) stage.connect("destroy", lambda w: Clutter.main_quit()) stage.show_all() Clutter.main() if __name__ == '__main__': main() ``` Behold this illustration of my problem: ![enter image description here](https://i.stack.imgur.com/rXsQw.gif) For those of you who don't like gifs, here is my problem described in words: I want the actor to move from the middle to the right, then all the way to the left. Instead it just moves from the middle straight to the left. What is causing this, and how can I fix it?
Like the documentation for ClutterActor.animate() says: > Calling this function on an actor that is already being animated will > cause the current animation to change with the new final values, the > new easing mode and the new duration > <https://developer.gnome.org/clutter/stable/clutter-Implicit-Animations.html#clutter-actor-animate> which means that the following code: ``` actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [280]) actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [20]) ``` is exactly equivalent to: ``` actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [20]) ``` which is what you're seeing. If you want to chain up two animations you have to connect to the `completed` signal of `ClutterAnimation`, using the `connect_after` function, so that Clutter can create a new animation: ``` def moveLeft (animation, actor): actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [20]) actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [280]).connect_after('completed', moveLeft) ``` I'd like to point out that `animatev()` and `ClutterAnimation` are deprecated; they can be replaced by using an explicit `Clutter.KeyframeTransition` or an implicit transition, for instance: ``` from gi.repository import Clutter Clutter.init(None) stage = Clutter.Stage() stage.connect('destroy', lambda x: Clutter.main_quit()) actor = Clutter.Actor() actor.set_background_color(Clutter.Color.get_static(Clutter.StaticColor.RED)) actor.set_reactive(True) actor.set_size(32, 32) stage.add_child(actor) actor.set_position(82, 82) def moveLeft(actor): actor.set_x(20) def moveRight(actor): actor.set_easing_duration(1000) actor.set_easing_mode(Clutter.AnimationMode.LINEAR) actor.set_x(280) actor.connect('transition-stopped::x', lambda a, n, t: moveLeft(actor)) actor.connect('button-press-event', lambda a, e: moveRight(actor)) stage.show() Clutter.main() ``` It can be arbitrarily more complex than this; you also need to remember to disconnect the `transition-stopped::x` signal handler, and restore the easing state to avoid creating implicit animations every time you change the actor's state, but I'll leave that as an exercise to the reader.
Try following code: ``` def onClick(actor, event): animation1 = actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [280]) animation1.connect_after( 'completed', lambda animation: actor.animatev(Clutter.AnimationMode.LINEAR, 1000, ["x"], [20]) ) ```
Clutter messing up animations
[ "", "python", "linux", "animation", "clutter", "" ]
I'm running queries in SQL Server 2008. I have a `sales` table and a `payments` table. Sometimes a sale has multiple methods of payment (part giftcard + part cash, or part credit + part cash, etc.), so what I want to do is list the sales and the payments for each sale in a table. If I do a `LEFT JOIN ON sales.SaleID = payments.SaleID` I get duplicate sales rows when there is more than one matching payment row. So what I have been doing is getting all the sales and a count of how many matching payment rows there are with `(SELECT COUNT(*) FROM payments WHERE payments.SaleID = sales.SaleID) AS NumOfPayments`. Then in my PHP script I check the number of payments and if it is `> 1` I then run another query to get the payment details. The output I am trying to get would look something like this ``` ----------------------------------------------------- | SaleID | SaleDate | Amount | Payments | ----------------------------------------------------- | 123 | 2013-07-23 | $ 19.99 | Cash: $ 19.99 | | 124 | 2013-07-23 | $ 7.53 | Cash: $ 7.53 | | 125 | 2013-07-23 | $174.30 | Credit: $124.30 | | | | | GiftCard: $ 50.00 | | 126 | 2013-07-23 | $ 79.99 | Cash: $ 79.99 | | 127 | 2013-07-23 | $100.00 | Credit: $ 90.00 | | | | | Cash: $ 10.00 | ----------------------------------------------------- ``` where sales 125 and 127 have multiple payments listed but the sale information only appears once and is not duplicated for each payment. The `sales` and `payments` tables look like this: ``` Sales Payments --------------------------------- -------------------------------------------- | SaleID | SaleDate | Amount | | PaymentID | SaleID | PmtMethod | PmtAmt | --------------------------------- -------------------------------------------- | 123 | 2013-07-23 | $ 19.99 | | 158 | 123 | 4 | $ 19.99 | | 124 | 2013-07-23 | $ 7.53 | | 159 | 124 | 4 | $ 7.53 | | 125 | 2013-07-23 | $174.30 | | 160 | 125 | 2 | $124.30 | | 126 | 2013-07-23 | $ 79.99 | | 161 | 125 | 3 | $ 50.00 | | 127 | 2013-07-23 | $100.00 | | 162 | 126 | 4 | $ 79.99 | --------------------------------- | 163 | 127 | 2 | $ 90.00 | | 164 | 127 | 4 | $ 10.00 | -------------------------------------------- ``` I feel like if I can do it with just SQL it will be faster. Is there a way to accomplish this with pure SQL instead of having to use server side code to run conditional queries?
I wouldn't mix data retrieval and data display, which is what I think you are asking about. Do you have some sort of column to indicate which payment should be displayed first? I'm thinking something like: ``` SELECT columnlist, rn = ROW_NUMBER() OVER (PARTITION BY sales.salesID ORDER BY payment.paymentID) FROM sales JOIN payments ON sales.salesID=payments.salesID ``` Then, in your GUI, just display the values for the first 3 columns where RN = 1, and blank out the values where RN > 1.
It is probably easier to do this in the interface. The basic query you want is: ``` select s.saleID, s.SaleDate, s.Amount, p.PaymentType, p.PaymentAmount, ROW_NUMBER() over (partition by p.SaleId order by p.PaymentAmount desc) as seqnum from sales s join payments p on p.saleID = s.saleId order by 1, 2 ``` However, you are trying to blank-out fields. To do this, you need to convert all the fields to strings and then check if which are on the first line for each `SaleId`: ``` select (case when seqnum > 1 then '' else CAST(SaleId as varchar(255)) end) as SaleId, (case when seqnum > 1 then '' else CONVERT(varchar(10), SaleDate, 121) end) as SaleDate, (case when seqnum > 1 then '' else '$'+STR(amount, 6, 2) end) as Amount, PaymentType, PaymentAmount from (select s.saleID, s.SaleDate, s.Amount, p.PaymentType, p.PaymentAmount, ROW_NUMBER() over (partition by p.SaleId order by p.PaymentAmount desc) as seqnum from sales s join payments p on p.saleID = s.saleId ) sp order by SaleId, SaleDate; ``` This is not the type of operation that SQL is designed for. SQL works with tables, where all columns have the same meaning. Here, you are introducing a different meaning for the column, depending on its position in the sale. Yes, SQL can do it. No, it is not easy.
SQL query one to many relationship join without duplicates
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "join", "" ]
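The "blank it in the interface" suggestion can be sketched with `sqlite3` plus a tiny display loop: fetch the join ordered by sale, then emit the sale columns only on the first payment row of each sale (a hypothetical subset of the question's data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE sales (id INTEGER, amount REAL);
    CREATE TABLE payments (sale_id INTEGER, method TEXT, amt REAL);
    INSERT INTO sales VALUES (125, 174.30), (127, 100.00);
    INSERT INTO payments VALUES
        (125, 'Credit', 124.30), (125, 'GiftCard', 50.00),
        (127, 'Credit', 90.00), (127, 'Cash', 10.00);
""")
rows = conn.execute("""
    SELECT s.id, s.amount, p.method, p.amt
    FROM sales s JOIN payments p ON p.sale_id = s.id
    ORDER BY s.id, p.amt DESC
""").fetchall()

display = []
last_sale = None
for sale_id, amount, method, amt in rows:
    first = sale_id != last_sale
    # sale columns are shown only on the sale's first payment row
    display.append((sale_id if first else '', amount if first else '', method, amt))
    last_sale = sale_id
```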
I have a search function on my site which uses GET. I have been trying to code something that would take words out of the GET post and then search for them using SQL. This is what I have been able to do: ``` $id = $_GET["search"]; $searchTerms = explode(' ', $id); $searchTermBits = array(); foreach ($searchTerms as $term) { $term = trim($term); if (!empty($term)) { $searchTermBits[] = "Name LIKE '%$term%'"; } } $lol = mysql_query("SELECT * FROM database WHERE .implode(' AND ', $searchTermBits).") ``` I don't know what I'm doing wrong. I get the following error: ``` You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(' AND ', Array).' at line 1 ```
``` $lol = mysql_query("SELECT * FROM database WHERE .implode(' AND ', $searchTermBits).") ``` should be ``` $lol = mysql_query("SELECT * FROM database WHERE ". implode(' AND ', $searchTermBits). "") ```
`implode` is just a part of the string in your case, you need to take it out from quotes: ``` mysql_query("SELECT * FROM database WHERE ".implode(' AND ', $searchTermBits)) ```
Multiple keywords in SELECT LIKE
[ "", "mysql", "sql", "select", "implode", "" ]
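Beyond the concatenation fix, interpolating `$term` directly into the SQL also leaves the query open to SQL injection; the join-the-conditions idea works just as well with placeholders. The same pattern sketched in Python with `sqlite3` (illustrative data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE products (name TEXT);
    INSERT INTO products VALUES ('red wool hat'), ('blue wool scarf'), ('red scarf');
""")

search = 'red  hat'
terms = [t for t in search.split(' ') if t.strip()]
# one placeholder per term; the values are passed separately, never interpolated
where = ' AND '.join(['name LIKE ?'] * len(terms))
params = ['%{}%'.format(t) for t in terms]
rows = conn.execute('SELECT name FROM products WHERE ' + where, params).fetchall()
```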
The idea of this loop was to iterate through a list. If a certain property of an object was not a key of the OrderedDict, it would add it. It is a dictionary of lists of objects. ``` for object in someList: if object.DATE not in myOrderedDict: myOrderedDict[object.DATE]=[] myOrderedDict[object.DATE].append(object) ``` While it does seem to build the OrderedDict mostly correctly, it ends up out of order when it's printed. Instead of having something like `(01/13) (02/13) (03/13)` it goes more like `(02/13) (03/13) (01/13)`. Why does this happen and how can it be fixed?
It would appear that you expect the items in an `OrderedDict` to be ordered by key, which is not the case. An OrderedDict is ordered by the order items are added to it. From the [PEP 372](http://www.python.org/dev/peps/pep-0372/) FAQ: > **Does OrderedDict support alternate sort orders such as alphabetical?** > > No. Those wanting different sort orders really need to be using another technique. The OrderedDict is all about recording insertion order. If any other order is of interest, then another structure (like an in-memory dbm) is likely a better fit.
The pure-Python [sortedcontainers module](http://www.grantjenks.com/docs/sortedcontainers/index.html) has a [SortedDict](http://www.grantjenks.com/docs/sortedcontainers/sorteddict.html) type that can help you. It maintains the dict keys automatically in sorted order and is well documented and tested. You use it just as you would a dict: ``` >>> from sortedcontainers import SortedDict >>> mySortedDict = SortedDict() >>> for object in someList: >>> if object.DATE not in mySortedDict: >>> mySortedDict[object.DATE]=[] >>> mySortedDict[object.DATE].append(object) >>> list(mySortedDict.keys()) ['(01/13)', '(02/13)', '(03/13)'] ``` The sorted containers module is very fast and has a [performance comparison](http://www.grantjenks.com/docs/sortedcontainers/performance.html) page with benchmarks against alternative implementations.
OrderedDict not staying in order
[ "", "python", "python-2.7", "ordereddictionary", "" ]
I noticed Pandas now has [support for Sparse Matrices and Arrays](http://pandas.pydata.org/pandas-docs/dev/sparse.html). Currently, I create `DataFrame()`s like this: ``` return DataFrame(matrix.toarray(), columns=features, index=observations) ``` Is there a way to create a `SparseDataFrame()` with a `scipy.sparse.csc_matrix()` or `csr_matrix()`? Converting to dense format kills RAM badly. Thanks!
A direct conversion is not supported ATM. Contributions are welcome! Try this; it should be OK on memory, as the SparseSeries is much like a csc_matrix (for 1 column) and pretty space-efficient ``` In [36]: row = np.array([0,2,2,0,1,2]) In [37]: col = np.array([0,0,1,2,2,2]) In [38]: data = np.array([1,2,3,4,5,6],dtype='float64') In [39]: m = csc_matrix( (data,(row,col)), shape=(3,3) ) In [40]: m Out[40]: <3x3 sparse matrix of type '<type 'numpy.float64'>' with 6 stored elements in Compressed Sparse Column format> In [46]: pd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) for i in np.arange(m.shape[0]) ]) Out[46]: 0 1 2 0 1 0 4 1 0 0 5 2 2 3 6 In [47]: df = pd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) for i in np.arange(m.shape[0]) ]) In [48]: type(df) Out[48]: pandas.sparse.frame.SparseDataFrame ```
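The `(data, (row, col))` triplet format used above is what keeps memory low: each stored element costs one (row, col, value) entry instead of a cell in a dense grid. A stdlib-only sketch of that expansion (not the scipy/pandas API, just the idea) reproduces the small matrix from the session above:

```python
def coo_to_dense(data, row, col, shape):
    """Expand (data, (row, col)) triplets into a dense list of lists."""
    n_rows, n_cols = shape
    dense = [[0.0] * n_cols for _ in range(n_rows)]
    for v, r, c in zip(data, row, col):
        dense[r][c] = v
    return dense

row = [0, 2, 2, 0, 1, 2]
col = [0, 0, 1, 2, 2, 2]
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
dense = coo_to_dense(data, row, col, (3, 3))
```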
As of pandas v 0.20.0 you can use the `SparseDataFrame` constructor. An example from [the pandas docs](https://pandas.pydata.org/pandas-docs/stable/sparse.html#interaction-with-scipy-sparse): ``` import numpy as np import pandas as pd from scipy.sparse import csr_matrix arr = np.random.random(size=(1000, 5)) arr[arr < .9] = 0 sp_arr = csr_matrix(arr) sdf = pd.SparseDataFrame(sp_arr) ```
Populate a Pandas SparseDataFrame from a SciPy Sparse Matrix
[ "", "python", "numpy", "scipy", "pandas", "sparse-matrix", "" ]
I'm wondering about the difference between query() and rawQuery() in SQLite when making more complex SQL queries. For example, I want to use the SQL keyword DISTINCT, so I don't get any duplicates returned from the database. I understand how to use the rawQuery() method: that way you can put an actual SQL query statement in the method, so I can make a standard SQL statement, and it would be easy to add the DISTINCT keyword to any SQL statement when using rawQuery(). However, when using the query() method as shown here in this code, I can't just use regular SQL statements. In this case, how would I make a query with the DISTINCT keyword as part of the query, or something with the same functionality? ``` // get info from country table public String[] getCountries(int numberOfRows) { String[] columns = new String[]{COUNTRY_NAME}; String[] countries = new String[numberOfRows]; int counter = 0; Cursor cursor = sqLiteDatabase.query(COUNTRY_TABLE, columns, null, null, null, null, null); if (cursor != null){ while(cursor.moveToNext()){ countries[counter++] = cursor.getString(cursor.getColumnIndex(COUNTRY_NAME)); } } return countries; } ```
Instead of the... ``` public Cursor query(String table, String[] columns, String selection, String[] selectionArgs, String groupBy, String having, String orderBy) ``` ...method you're using, just use the... ``` public Cursor query (boolean distinct, String table, String[] columns, String selection, String[] selectionArgs, String groupBy, String having, String orderBy, String limit) ``` ...overload and set `distinct` to `true`. The Android docs seem a bit hard to direct link, but the doc page describing both is [here](http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html).
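Both overloads just build a SQL string, and `distinct=true` simply adds the DISTINCT keyword to the generated SELECT. The effect is easy to see with Python's `sqlite3` module (the table and data here are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE country (country_name TEXT)')
conn.executemany('INSERT INTO country VALUES (?)',
                 [('France',), ('Spain',), ('France',)])

# without DISTINCT: duplicates come back
plain = [r[0] for r in conn.execute(
    'SELECT country_name FROM country ORDER BY rowid')]
# with DISTINCT: each value appears once
distinct = [r[0] for r in conn.execute(
    'SELECT DISTINCT country_name FROM country')]
```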
You can use this: ``` Cursor cursor = db.query(true, YOUR_TABLE_NAME, new String[] { COLUMN1 ,COLUMN2, COLUMN_NAME_3 }, null, null, COLUMN2, null, null, null); ``` Here the first parameter is used to set the DISTINCT behaviour, i.e. if set to true it will return distinct column values, and the sixth parameter denotes the column name you want to `GROUP BY`.
adding DISTINCT keyword to query() with SQLite in Android
[ "", "android", "sql", "database", "sqlite", "" ]
I have a view that needs to join on a concatenated column. For example: ``` dbo.View1 INNER JOIN dbo.table2 ON dbo.View1.combinedcode = dbo.table2.code ``` Inside 'View1' there is a column which is composed like so: ``` dbo.tableA.details + dbo.tableB.code AS combinedcode ``` Performing a join on this column is extremely slow. However, the view 'View1' by itself runs extremely quickly. The poor performance comes with the join, and there aren't even many rows in any of the tables or views. Does anyone know why this might be? Thanks for any insight!
Since there's no index on `combinedcode`, the `JOIN` will most likely result in a full "table scan" of the view to calculate the code for every row. If you want to speed things up, try making the view into an [indexed view](http://msdn.microsoft.com/en-us/library/aa933148%28SQL.80%29.aspx) with an index on `combinedcode` to help the join. Another alternative, depending on your SQL server version, is to (as Parado answers) create a temporary table for the join, although it's usually less performant, at least for single shot queries.
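The indexed-view suggestion boils down to this: index the concatenated expression so the join does not have to recompute it for every row. SQLite (used here only because it is scriptable from the stdlib; SQL Server's indexed views have different syntax) supports indexes on expressions, which sketches the idea. All table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE a (details TEXT, code TEXT)')
conn.execute('CREATE TABLE b (code TEXT, payload TEXT)')
conn.execute("INSERT INTO a VALUES ('ord', '42')")
conn.execute("INSERT INTO b VALUES ('ord42', 'matched')")

# index the concatenated expression so the join can seek instead of scan
conn.execute('CREATE INDEX idx_a_combined ON a (details || code)')

rows = conn.execute(
    'SELECT b.payload FROM a JOIN b ON a.details || a.code = b.code'
).fetchall()
```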
Try this way: ``` select * into #TemTap from View1 /*where conditions on view1*/ ``` after that You could create `index` on `#TemTap.combinedcode` and than ``` dbo.#TemTap as View1 INNER JOIN dbo.table2 ON dbo.View1.combinedcode = dbo.table2.code ``` It often works for me.
JOIN on concatenated column performance
[ "", "sql", "sql-server", "t-sql", "" ]
I have a large file with each line of the form `a b c` I would like to remove all such lines where there does not exist another line either like `b d e` or `d a e` with `abs(c - e) < 10`. `a`, `b`, `c`, `d`, `e` are all integers. For example if the input is: ``` 0 1 10 1 2 20 2 3 25 0 1 15 1 4 40 ``` then the output should be ``` 1 2 20 2 3 25 0 1 15 ``` Is it possible to do this in anything like linear time? One idea is to create two dictionaries of sorted lists. One for the third column values associated with first column values. The other for the third column values associated with second column values. Then when you see a b c, look up c in the sorted list you get using key a in the second dictionary and then c in the sorted list you get using key b in the first dictionary.
I don't know if this can be done in linear time. It is straightforward to do it in O(n·log n) time if there are n triplets in the input. Here is a sketch of a method, in a not-necessarily-preferred form of implementation: 1. Make an array of markers M, initially all clear. 2. Create an array and make a copy of the input, sorted first on the middle element and then by the third element whenever middle elements are equal. (Time is O(n·log n) so far.) 3. For each distinct middle value, make a BST (binary search tree) with key = third element. (Time is O(n·log n) again.) 4. Make a hash table keyed by middle values, with data pointing at appropriate BST's. That is, given a middle value y and third element z, in time O(1) we can get to the BST for triplets whose middle value is y; and from that, in time O(log n) can find the triplet with third-element value closest to z. 5. For each triplet t = (x,y,z) in turn, if marker is not yet set use the hash table to find the BST, if any, corresponding to x. In that BST, find the triplet u with third element closest to z. If difference is less than 10, set the markers for t and u. (Time is O(n·log n) again.) 6. Repeat steps 2–5 but with BST's based on first element values rather than middle value, and lookups in step 5 based on y rather than x. (Although the matching-relations are symmetric, so that we can set two markers at each cycle in step 5, some qualifying triplets may end up not marked; ie, they are in tolerance but more distant than the nearest-match that is found. It would be possible to mark all of the qualifying triplets in step 5, but that would increase worst-case time from O(n·log n) to O(n²·log n).) 7. For each marker that is set, output the corresponding triplet. Overall time: O(n·log n). The same time can be achieved without building BST's but instead using binary searches within subranges of the sorted arrays. 
*Edit:* In python, one can build structures usable with *[bisect](http://docs.python.org/2/library/bisect.html)* as illustrated below in excerpts from an ipython interpreter session. (There may be more efficient ways of doing these steps.) Each data item in dictionary `h` is an array suitable for searching with `bisect`. ``` In [1]: from itertools import groupby In [2]: a=[(0,1,10), (1,2,20), (2,3,25), (0,1,15), (1,4,40), (1,4,33), (3,3,17), (2,1,19)] In [3]: b=sorted((e[1],e[2],i) for i,e in enumerate(a)); print b [(1, 10, 0), (1, 15, 3), (1, 19, 7), (2, 20, 1), (3, 17, 6), (3, 25, 2), (4, 33, 5), (4, 40, 4)] In [4]: h={k:list(g) for k,g in groupby(b,lambda x: x[0])}; h Out[4]: {1: [(1, 10, 0), (1, 15, 3), (1, 19, 7)], 2: [(2, 20, 1)], 3: [(3, 17, 6), (3, 25, 2)], 4: [(4, 33, 5), (4, 40, 4)]} ```
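Putting the sketch above together (dictionaries of sorted third-column values plus `bisect`) gives an O(n·log n) filter that reproduces the example from the question, ignoring the self-match edge case where a line's first and second fields are equal:

```python
from bisect import bisect_left
from collections import defaultdict

def filter_triplets(triplets, tol=10):
    by_first = defaultdict(list)   # a -> c values of lines starting with a
    by_second = defaultdict(list)  # b -> c values of lines with middle element b
    for a, b, c in triplets:
        by_first[a].append(c)
        by_second[b].append(c)
    for lst in by_first.values():
        lst.sort()
    for lst in by_second.values():
        lst.sort()

    def has_near(lst, c):
        # lst is sorted, so the closest value sits next to the insertion point
        i = bisect_left(lst, c)
        return any(0 <= j < len(lst) and abs(lst[j] - c) < tol
                   for j in (i - 1, i))

    # keep (a, b, c) if some (b, d, e) or (d, a, e) exists with |c - e| < tol
    return [t for t in triplets
            if has_near(by_first.get(t[1], []), t[2])
            or has_near(by_second.get(t[0], []), t[2])]

kept = filter_triplets([(0, 1, 10), (1, 2, 20), (2, 3, 25), (0, 1, 15), (1, 4, 40)])
```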
Like others have said, linear time may not be possible. Here is an easy O(n^2) implementation. If you sort the lists inside the dictionaries, you should be able to improve the runtime. ``` lines = """0 1 10 1 2 20 2 3 25 0 1 15 1 4 40""" Adata = {} Bdata = {} for line in lines.split('\n'): a,b,c = line.split(' ')[:3] vals = map(int,[a,b,c]) if b in Adata: Adata[b].append(vals) else: Adata[b] = [vals] if a in Bdata: Bdata[a].append(vals) else: Bdata[a] = [vals] def case1(a,b,c): if a in Adata: for val in Adata[a]: if abs(int(c)-val[2]) < 10: return True return False def case2(a,b,c): if b in Bdata: for val in Bdata[b]: if abs(int(c)-val[2]) < 10: return True return False out = [] for line in lines.split('\n'): a,b,c = line.split(' ')[:3] if case1(a,b,c) or case2(a,b,c): out.append(line) for line in out: print line ```
How to remove lines from a large file
[ "", "python", "algorithm", "" ]
I have created an AFTER INSERT trigger. Now, if an error occurs while executing the trigger, it should not affect the insert operation on the triggered table. In one word: if any error occurs in the trigger, it should be ignored. I have used ``` BEGIN TRY END TRY BEGIN CATCH END CATCH ``` but it gives the following error message and rolls back the insert operation on the triggered table > An error was raised during trigger execution. The batch has been > aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed that if they fail, they rollback the command that fired it. So whenever trigger is executing there is an active transaction, whatever there was an explicit BEGIN TRANSACTION or not on the outside. And also BEGIN/TRY inside trigger will not work. Your best practice would be not to write any code in trigger that could possibly fail - unless it is desired to also fail the firing statement. In this situation, to suppress this behavior, there are some workarounds. **Option A (the ugly way):** Since transaction is active at the beginning of trigger, you can just `COMMIT` it and continue with your trigger commands: ``` CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT AS BEGIN COMMIT; ... do whatever trigger does END; ``` Note that if there is an error in trigger code this will still produce the error message, but data in `Test1` table are safely inserted. **Option B (also ugly):** You can move your code from trigger to stored procedure. Then call that stored procedure from Wrapper SP that implements `BEGIN/TRY` and at the end - call Wrapper SP from trigger. This might be a bit tricky to move data from `INSERTED` table around if needed in the logic (which is in SP now) - probably using some temp tables. **[SQLFiddle DEMO](http://sqlfiddle.com/#!6/cb5ee/1)**
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR check will work around the fundamental issue. If you want to use the tightly coupling of a trigger then you must buy into the lower availability induced by the coupling. If you want to preserve the availability (ie. have the INSERT succeed) then you must give up coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction that starts *after* your INSERT committed. A SQL Agent job that polls the table for newly inserted rows, an Service Broker launched procedure or even an application layer step are all going to fit the bill.
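The coupling described above, where a failing AFTER INSERT trigger takes the insert down with it, can be observed directly. SQL Server is not drivable from the Python stdlib, but SQLite shows the same default behaviour (a sketch of the principle, not of T-SQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE t (x INTEGER);
    CREATE TRIGGER tg AFTER INSERT ON t
    BEGIN
        SELECT RAISE(ABORT, 'trigger failed');
    END;
''')

try:
    conn.execute('INSERT INTO t VALUES (1)')
    insert_failed = False
except sqlite3.Error:
    insert_failed = True

# the trigger error rolled the statement back, so nothing was inserted
rows_after = conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
```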
How to ignore errors in a trigger and perform the respective operation in MS SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "triggers", "" ]
I am trying to make a list containing all possible variations of 1 and 0. For example, if I have just two digits I want a list like this: ``` [[0,0], [0,1], [1,0], [1,1]] ``` But if I decide to have 3 digits I want to have a list like this: ``` [[0,0,0], [0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1]] ``` Someone told me to use itertools, but I cannot get it to work the way I want. ``` >>> list(itertools.permutations((range(2)))) [(0, 1), (1, 0)] >>> [list(itertools.product((range(2))))] [[(0,), (1,)]] ``` Is there a way to do this? And question number two: how would I find documentation on modules like this? I am just flailing blindly here.
[`itertools.product()`](http://docs.python.org/2/library/itertools.html#itertools.product) can take a second argument: the length. It defaults to one, as you have seen. Simply, you can add `repeat=n` to your function call: ``` >>> list(itertools.product(range(2), repeat=3)) [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] ``` To find the docs, you can either use [`help(itertools)`](http://docs.python.org/2/library/functions.html#help) or just do a quick google (or whatever your search engine is) search "itertools python".
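So the two-digit case from the question is just `repeat=2`, and wrapping each tuple in `list()` gives exactly the requested output:

```python
from itertools import product

# every binary string of length n, as lists instead of tuples
two_digits = [list(p) for p in product(range(2), repeat=2)]
three_digits = [list(p) for p in product(range(2), repeat=3)]
```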
itertools.product(.., repeat=n) ``` >>> import itertools >>> list(itertools.product((0,1), repeat=3)) [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] ``` [Python Module Index](http://docs.python.org/2/py-modindex.html) contains links for standard library modules documentation.
Python: How do I use itertools?
[ "", "python", "list", "python-itertools", "" ]
``` #def func(param): # if param < 0: # return "test" # i = 0 # while i < param: # yield i # i += 1 def func(param): if param < 0: return "test" def gen(n): i = 0 while i < param: yield i i += 1 return gen(param) print(func(-1)) print(func(3)) g = func(3) for i in range(0, 3): print(next(g)) ``` Is there a reason that the Python interpreter can not convert the commented code to the actual code implicitly? This seems like this should be allowed, but I am wondering what repercussions there are that made them choose to disallow this.
The reason is simply, if the `def` contains a `yield` statement, it creates a generator: <http://www.python.org/dev/peps/pep-0255/> > The yield statement may only be used inside functions. A function that > contains a yield statement is called a generator function. A generator > function is an ordinary function object in all respects, but has the > new CO\_GENERATOR flag set in the code object's co\_flags member. That is how the interpreter distinguishes between a regular function, and a generator function. It's simple to implement, easy to reason about ("if it contains a `yield`, it's a generator") The "conditional generator" behaviour you describe would be much more complex to implement, and in some cases not desirable (maybe the conditional should happen inside the first iteration of the generator, or maybe it should run as soon as you call `func(...)`) Your other code either returns a generator, or a string. If that's the interface you want, it seems like a perfectly good solution (but it's hard to make practical suggestions without a real example)
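A consequence of the "contains a yield, so it's a generator" rule is that no body code runs until the first `next()` call, not even the `return "test"` guard from the question; under Python 3 that `return` then surfaces as `StopIteration`:

```python
def func(param):
    if param < 0:
        return "test"  # never a plain return value: this def is a generator function
    i = 0
    while i < param:
        yield i
        i += 1

g = func(-1)                      # no body code has run yet
is_generator = hasattr(g, '__next__')
try:
    next(g)                       # body runs now, hits `return`, iteration stops
    stopped = False
except StopIteration:
    stopped = True
```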
In Python 2.x, you cannot return a value in a generator: ``` >>> def func(): ... return 3 ... yield 3 ... File "<stdin>", line 3 SyntaxError: 'return' with argument inside generator >>> ``` In Python 3.x, using `return` in a generator means raising `StopIteration(<something>)`: ``` >>> def func(): ... return 3 ... yield 3 ... >>> func().__next__() Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: 3 >>> ``` I cannot think of any reason for the interpreter to decide which part is a generator. It is hard, and I think it is the responsibility of programmers. And I even doubt whether returning a value in a generator is a good implementation.
Why doesn't the Python interpreter implicitly create the generator?
[ "", "python", "generator", "yield", "" ]
I currently have the following table: ``` ID | Name | EventTime | State 1001 | User 1 | 2013/07/22 00:00:05 | 15 1002 | User 2 | 2013/07/23 00:10:00 | 100 1003 | User 3 | 2013/07/23 06:15:31 | 35 1001 | User 1 | 2013/07/23 07:13:00 | 21 1001 | User 1 | 2013/07/23 08:15:00 | 25 1003 | User 3 | 2013/07/23 10:00:00 | 22 1002 | User 2 | 2013/07/23 09:18:21 | 50 ``` What I need is the `state` for each distinct `userid` from the last `eventtime` similar to below: ``` ID | Name | EventTime | State 1001 | User 1 | 2013/07/23 08:15:00 | 25 1003 | User 3 | 2013/07/23 10:00:00 | 22 1002 | User 2 | 2013/07/23 09:18:21 | 50 ``` I need something similar to the following but I can't quite get what I need. ``` SELECT ID, Name, max(EventTime), State FROM MyTable GROUP BY ID ```
In databases that support analytic functions, you could use `row_number()`: ``` select * from ( select row_number() over (partition by ID order by EventTime desc) as rn , * from YourTable ) as SubQueryAlias where rn = 1 ```
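With the sample data from the question, the same latest-row-per-ID result can be checked with SQLite. Where window functions are not available, a portable correlated subquery (essentially the other answer's approach) gives the same rows as the `row_number()` form:

```python
import sqlite3

rows = [
    (1001, 'User 1', '2013-07-22 00:00:05', 15),
    (1002, 'User 2', '2013-07-23 00:10:00', 100),
    (1003, 'User 3', '2013-07-23 06:15:31', 35),
    (1001, 'User 1', '2013-07-23 07:13:00', 21),
    (1001, 'User 1', '2013-07-23 08:15:00', 25),
    (1003, 'User 3', '2013-07-23 10:00:00', 22),
    (1002, 'User 2', '2013-07-23 09:18:21', 50),
]
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE MyTable (ID INT, Name TEXT, EventTime TEXT, State INT)')
conn.executemany('INSERT INTO MyTable VALUES (?,?,?,?)', rows)

# ISO-formatted timestamps compare correctly as text
latest = conn.execute('''
    SELECT ID, Name, EventTime, State FROM MyTable mt
    WHERE EventTime = (SELECT MAX(EventTime) FROM MyTable sq WHERE sq.ID = mt.ID)
    ORDER BY ID
''').fetchall()
```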
``` SELECT ID, Name, EventTime, State FROM MyTable mt WHERE EventTime = (SELECT MAX(EventTime) FROM MyTable sq WHERE mt.ID = sq.ID) ```
Select distinct rows whilst grouping by max value
[ "", "sql", "sql-server-2008", "greatest-n-per-group", "" ]
I have a stored procedure to find customers, and it's working fine. But if `Customer_City_Name` is null in the table, then I am not able to retrieve the row; the SP fails. How can I get a result even if `Customer_City_Name` or `Customer_Country_Code` `IS NULL`? ``` EXEC findCustomer null,'%',null,null, ``` SP code: ``` CREATE PROCEDURE findCustomer @customerNumber NVARCHAR(100), @customerNamePattern NVARCHAR(35), @customerCityNamePattern NVARCHAR(35), @customerCountryCode NVARCHAR(5) AS BEGIN DECLARE @SQL NVARCHAR(4000) SET @SQL = ' SELECT c.Customer_Number, c.Customer_Name, c.Postal_Address_Identifier, c.Customer_Street_Or_Road_Name, c.Customer_City_Name, c.Customer_Territory_Code, c.Customer_Postal_Code, c.Customer_Country_Code, c.Telephone_Number, c.Mobile_Telephone_Number, c.Fax_Number, c.Email_Address FROM Customer c WHERE c.Customer_Number LIKE ' + CASE WHEN @customerNumber IS NOT NULL THEN '''' + @customerNumber + '''' ELSE 'c.Customer_Number' END + ' AND c.Customer_Name LIKE ' + CASE WHEN @customerNamePattern IS NOT NULL THEN '''' + @customerNamePattern + '''' ELSE 'c.Customer_Name' END + ' AND c.Customer_City_Name LIKE ' + CASE WHEN @customerCityNamePattern IS NOT NULL THEN '''' +@customerCityNamePattern + '''' ELSE 'c.Customer_City_Name' END + ' AND c.Customer_Country_Code LIKE ' + CASE WHEN @customerCountryCode IS NOT NULL THEN '''' +@customerCountryCode + '''' ELSE 'c.Customer_Country_Code' END EXEC sp_executesql @SQL ```
I have fixed this issue by wrapping the nullable columns in `ISNULL`: ``` AND ISNULL(c.Customer_City_Name,'''') LIKE ' + CASE WHEN @customerCityNamePattern IS NOT NULL THEN '''' + @customerCityNamePattern + '''' ELSE 'ISNULL(c.Customer_City_Name,'''')' END + ' AND ISNULL(c.Customer_Country_Code,'''') LIKE ' + CASE WHEN @customerCountryCode IS NOT NULL THEN '''' + @customerCountryCode + '''' ELSE 'ISNULL(c.Customer_Country_Code,'''')' END ``` 1. Thanks to asafrob: I have accepted your approach. 2. Thanks to devio: I got the idea from [In 'LIKE' operator needs to select 'NULL'(DB) values](https://stackoverflow.com/questions/2324368/in-like-operator-needs-to-select-nulldb-values). 3. Thanks to jpmc26: I like your answer. Finally, thanks to everyone who answered my question.
@user2218371 you should reconsider alternatives instead of dynamic SQL. anyway, this is an alternative code. ``` CREATE PROCEDURE findCustomer @customerNumber NVARCHAR(100), @customerNamePattern NVARCHAR(35), @customerCityNamePattern NVARCHAR(35), @customerCountryCode NVARCHAR(5) AS BEGIN DECLARE @SQL NVARCHAR(4000) SET @SQL = ' SELECT c.Customer_Number, c.Customer_Name, c.Postal_Address_Identifier, c.Customer_Street_Or_Road_Name, c.Customer_City_Name, c.Customer_Territory_Code, c.Customer_Postal_Code, c.Customer_Country_Code, c.Telephone_Number, c.Mobile_Telephone_Number, c.Fax_Number, c.Email_Address FROM Customer c WHERE ' + CASE WHEN @customerNumber IS NOT NULL THEN ' c.Customer_Number LIKE ''' + @customerNumber + '''' + CASE WHEN @customerNamePattern IS NOT NULL THEN ' AND ' ELSE '' END ELSE '' END + CASE WHEN @customerNamePattern IS NOT NULL THEN ' c.Customer_Name LIKE ''' + @customerNamePattern + '''' + CASE WHEN @customerCityNamePattern IS NOT NULL THEN ' AND ' ELSE '' END ELSE '' END + CASE WHEN @customerCityNamePattern IS NOT NULL THEN ' c.Customer_City_Name LIKE ''' + @customerCityNamePattern + '''' + CASE WHEN +@customerCountryCode IS NOT NULL THEN ' AND ' ELSE '' END ELSE '' END + CASE WHEN @customerCountryCode IS NOT NULL THEN ' c.Customer_Country_Code LIKE ''' + @customerCountryCode + '''' ELSE '' END EXEC sp_executesql @SQL ```
Not able to retrieve rows if the table column has a null value
[ "", "sql", "sql-server-2008", "" ]
I am getting various data types from a config file and adding them to a dictionary. but I am having a problem with lists. I want to take a line with text: `alist = [1,2,3,4,5,6,7]` and convert into a list of integers. But I am getting ``` ['1', ',', '2', ',', '3', ',', '4', ',', '5', ',', '6', ',', '7']. ``` How can I fix this? Here is config.txt: ``` firstname="Joe" lastname="Bloggs" employeeId=715 type="ios" push-token="12345" time-stamp="Mon, 22 Jul 2013 18:45:58 GMT" api-version="1" phone="1010" level=7 mylist=[1,2,3,4,5,6,7] ``` Here is my code to parse: ``` mapper = {} def massage_type(s): if s.startswith('"'): return s[1:-1] elif s.startswith('['): return list(s[1:-1]) #in this case get 'mylist': ['1', ',', '2', ',', '3', ',', '4', ',', '5', ',', '6', ',', '7'] elif s.startswith('{'): return "object" #todo else: return int(s) doc = open('config.txt') for line in doc: line = line.strip() tokens = line.split('=') if len(tokens) == 2: formatted = massage_type(tokens[1]) mapper[tokens[0]] = formatted #check integer list mapper["properlist"] = [1,2,3,4,5,6,7] #this one works print mapper ``` Here is my printed output: ``` {'time-stamp': 'Mon, 22 Jul 2013 18:45:58 GMT', 'mylist': ['1', ',', '2', ',', '3', ',', '4', ',', '5', ',', '6', ',', '7'], 'employeeId': 715, 'firstname': 'Joe', 'level': 7, 'properlist': [1, 2, 3, 4, 5, 6, 7], 'lastname': 'Bloggs', 'phone': '1010', 'push-token': '12345', 'api-version': '1', 'type': 'ios'} ``` Update. Thanks for the feedback. I realised that I could also get heterogeneous list so changed list part to: ``` elif s.startswith('['): #check element type elements = s[1:-1].split(',') tmplist = [] #assemble temp list for elem in elements: if elem.startswith('"'): tmplist.append(elem[1:-1]) else: tmplist.append(int(elem)) return tmplist ``` It only handles strings and integers but is good enough for what I need right now.
You need to change the `return` statement to: ``` return [int(elem) for elem in s[1:-1].split(',')] # Or map(int, s[1:-1].split(',')) ```
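Converting each element with `int()` and delegating to `ast.literal_eval` give the same result on the config line from the question:

```python
import ast

line = 'mylist=[1,2,3,4,5,6,7]'
key, value = line.split('=', 1)

via_split = [int(elem) for elem in value[1:-1].split(',')]
via_ast = ast.literal_eval(value)  # parses the whole list literal safely
```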
Maybe try `ast.literal_eval`. Here is an example: ``` import ast str1 = '[1,2,3,4,5]' ast.literal_eval(str1) ``` The output will be a list like this: ``` [1,2,3,4,5] ``` It won't include the commas as separate elements.
How to convert text format list into a python list
[ "", "python", "parsing", "" ]
I am trying to `UPDATE` all records of a table with the results of a function which uses the other rows of the table as arguments. However, the result of this operation for all records is repeated from the first record. Can anyone explain why this might be the case? ``` def fun(a,b,c,d): return a + b + c + d cur = conn.cursor() cur.execute("SELECT field1, field2, field3, field4 FROM TABLE1") for row in cur: cur.execute("UPDATE TABLE1 SET field5 = ?", (fun(row[0],row[1],row[2],row[3]),)) ``` The completed table looks like this: ``` field1, field2, field3, field4, field5 4, 3, 2, 1, 10 7, 3, 1, 0, 10 8, 5, 2, 0, 10 ``` When it should look like this: ``` field1, field2, field3, field4, field5 4, 3, 2, 1, 10 7, 3, 1, 0, 11 8, 5, 2, 0, 15 ```
First, you select all of the rows: ``` SELECT field1, field2, field3, field4 FROM TABLE1 ``` This yields this result set in `cur`: ``` 4, 3, 2, 1 7, 3, 1, 0 8, 5, 2, 0 ``` You begin to iterate through the rows. You start with the first row: ``` 4, 3, 2, 1 ``` You add all the columns together, correctly yielding the result `10`. Then you execute this SQL statement: ``` UPDATE TABLE1 SET field5 = 10 ``` Whoa, there! There's no `where` clause! You've just changed `field5` of *every single row*! That's problem #1: **you need to add a `where` clause**. Most tables have a primary key, so if you have a primary key, you'll probably want to add a clause like `where id = ?`. If you don't have a primary key, probably the best you can do is include all the other columns, e.g. ``` update table1 set field5 = ? where field1 = ? and field2 = ? and field3 = ? and field4 = ? ``` Make sure that you provide a value for each `?` in the `execute` call. --- So you've finished executing your `update` statement on the `cur` cursor. You go to iterate again… and there's no more rows. Why? Because that `update` statement changed the result set of the cursor, discarding the remaining rows of the `select`. **You need to run your updates on a different cursor or fetch all the rows before you move on to updating.**
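Both fixes combined (fetch the result set before updating, and add a WHERE clause, keyed here by `rowid` since the example table has no primary key) produce the expected `field5` values from the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (f1 INT, f2 INT, f3 INT, f4 INT, f5 INT)')
conn.executemany('INSERT INTO t (f1,f2,f3,f4) VALUES (?,?,?,?)',
                 [(4, 3, 2, 1), (7, 3, 1, 0), (8, 5, 2, 0)])

cur = conn.cursor()
# fetch everything first, so the UPDATEs cannot disturb the SELECT
rows = cur.execute('SELECT rowid, f1, f2, f3, f4 FROM t').fetchall()
for rid, a, b, c, d in rows:
    cur.execute('UPDATE t SET f5 = ? WHERE rowid = ?', (a + b + c + d, rid))

result = [r[0] for r in conn.execute('SELECT f5 FROM t ORDER BY rowid')]
```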
I know this is an old and solved issue, but I can't help it: that is the worst way to update a table with values from its own columns. All you need to do is one single UPDATE statement executed in the database: ``` UPDATE table SET field1 = function(params1), field2=function(params2), field3=function(params3), field4=function(params4) WHERE <condition> ``` The WHERE clause is not needed if you want to do this for all the rows in your table. The only thing you need to do is to define a user function within your database, which is pretty similar to what you would do in Python. This way the update will be about 1000 times faster, with no exaggeration.
Updating SQLite table with Python function using table columns as arguments
[ "", "python", "sqlite", "" ]
In the data I am working with the index is compound - i.e. it has both item name and a timestamp, e.g. `name@domain.com|2013-05-07 05:52:51 +0200`. I want to do hierarchical indexing, so that the same e-mails are grouped together, so I need to convert a DataFrame Index into a MultiIndex (e.g. for the entry above - `(name@domain.com, 2013-05-07 05:52:51 +0200)`). What is the most convenient method to do so?
Once we have a DataFrame ``` import pandas as pd df = pd.read_csv("input.csv", index_col=0) # or from another source ``` and a function mapping each index to a tuple (below, it is for the example from this question) ``` def process_index(k): return tuple(k.split("|")) ``` we can create a hierarchical index in the following way: ``` df.index = pd.MultiIndex.from_tuples([process_index(k) for k,v in df.iterrows()]) ``` An alternative approach is to create two columns then set them as the index (the original index will be dropped): ``` df['e-mail'] = [x.split("|")[0] for x in df.index] df['date'] = [x.split("|")[1] for x in df.index] df = df.set_index(['e-mail', 'date']) ``` or even shorter ``` df['e-mail'], df['date'] = zip(*map(process_index, df.index)) df = df.set_index(['e-mail', 'date']) ```
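The key-splitting step shared by both variants can be checked without pandas (pandas is of course still needed to build the actual `MultiIndex`). The first sample key below is the compound key from the question; the second is made up to show the shared first level:

```python
def process_index(k):
    return tuple(k.split("|"))

keys = ['name@domain.com|2013-05-07 05:52:51 +0200',
        'name@domain.com|2013-05-08 09:00:00 +0200']
tuples = [process_index(k) for k in keys]
```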
In `pandas>=0.16.0`, we can use the `.str` accessor on indices. This makes the following possible: ``` df.index = pd.MultiIndex.from_tuples(df.index.str.split('|').tolist()) ``` (Note: I tried the more intuitive: `pd.MultiIndex.from_arrays(df.index.str.split('|'))` but for some reason that gives me errors.)
Converting Index into MultiIndex (hierarchical index) in Pandas
[ "", "python", "pandas", "" ]
I searched a lot, but didn't find a proper solution to my problem. What do I want to do? I have 2 tables in MySQL: - Country - Currency (I join them together via CountryCurrency --> due to many to many relationship) See this for a working example: <http://sqlfiddle.com/#!2/317d3/8/0> I want to link both tables together using a join, but I want to show just one row per country (some countries have multiple currencies, so that was the first problem). I found the group\_concat function: ``` SELECT country.Name, country.ISOCode_2, group_concat(currency.name) AS currency FROM country INNER JOIN countryCurrency ON country.country_id = countryCurrency.country_id INNER JOIN currency ON currency.currency_id = countryCurrency.currency_id GROUP BY country.name ``` This has the following result: ``` NAME ISOCODE_2 CURRENCY Afghanistan AF Afghani Åland Islands AX Euro Albania AL Lek Algeria DZ Algerian Dinar American Samoa AS US Dollar,Kwanza,East Caribbean Dollar ``` But what I want now is to split the currencies in different columns (currency 1, currency 2, ...). I already tried functions like MAKE\_SET() but this doesn't work.
You can do this with `substring_index()`. The following query uses yours as a subquery and then applies this logic: ``` select Name, ISOCode_2, substring_index(currencies, ',', 1) as Currency1, (case when numc >= 2 then substring_index(substring_index(currencies, ',', 2), ',', -1) end) as Currency2, (case when numc >= 3 then substring_index(substring_index(currencies, ',', 3), ',', -1) end) as Currency3, (case when numc >= 4 then substring_index(substring_index(currencies, ',', 4), ',', -1) end) as Currency4, (case when numc >= 5 then substring_index(substring_index(currencies, ',', 5), ',', -1) end) as Currency5, (case when numc >= 6 then substring_index(substring_index(currencies, ',', 6), ',', -1) end) as Currency6, (case when numc >= 7 then substring_index(substring_index(currencies, ',', 7), ',', -1) end) as Currency7, (case when numc >= 8 then substring_index(substring_index(currencies, ',', 8), ',', -1) end) as Currency8 from (SELECT country.Name, country.ISOCode_2, group_concat(currency.name) AS currencies, count(*) as numc FROM country INNER JOIN countryCurrency ON country.country_id = countryCurrency.country_id INNER JOIN currency ON currency.currency_id = countryCurrency.currency_id GROUP BY country.name ) t ``` The expression `substring_index(currencies, ',' 2)` takes the list in currencies up to the second one. For American Somoa, that would be `'US Dollar,Kwanza'`. The next call with `-1` as the argument takes the last element of the list, which would be `'Kwanza'`, which is the second element of `currencies`. Also note that SQL queries return a well-defined set of columns. A query cannot have a variable number of columns (unless you are using dynamic SQL through a `prepare` statement).
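The nested `substring_index` calls are easier to follow against a small reference implementation of MySQL's semantics (a positive count keeps everything before the count-th delimiter; a negative count keeps everything after it, counting from the right):

```python
def substring_index(s, delim, count):
    """Rough Python equivalent of MySQL's SUBSTRING_INDEX for nonzero counts."""
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

currencies = 'US Dollar,Kwanza,East Caribbean Dollar'
first_two = substring_index(currencies, ',', 2)
# the nested call from the answer: take the first two, then the last of those
second = substring_index(substring_index(currencies, ',', 2), ',', -1)
```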
Use this query to work out the number of currency columns you'll need: ``` SELECT MAX(c) FROM ((SELECT count(currency.name) AS c FROM country INNER JOIN countryCurrency ON country.country_id = countryCurrency.country_id INNER JOIN currency ON currency.currency_id = countryCurrency.currency_id GROUP BY country.name) as t) ``` Then dynamically create and execute a [prepared statement](https://stackoverflow.com/a/23178848/495157) to generate the result, applying Gordon Linoff's solution from this thread to the column count obtained above.
SQL GROUP_CONCAT split in different columns
[ "mysql", "sql", "group-concat" ]
I have these possible values in a column ``` 1 65 5 excellent 54 -1 - . ``` If I use isnumeric with the last example I get 1, but when I try to convert it to a number I get an error. I want to use a try-catch in a function but I can't, so how can I deal with this?
By the way, an even worse example is `'-.'`, which `isnumeric()` considers to be valid. My advice is to look for at least one digit in the value as well. Yucky, but: ``` isnumeric(val) and val like '%[0-9]%' ``` Note that `isnumeric()` also considers something in exponential notation to be valid. So `'8e4'` will test as positive. This may not be an issue for you, because it will convert to a valid value. Such matches have caused a problem for me in the past, so I tend to use something like: ``` val not like '%[^0-9.]%' and val not like '%.%.%' and val like '%[0-9]%' ``` That is, it only has decimal points and digits. And, it doesn't have two decimal points. But, it only works for positive values.
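The three `LIKE` conditions above can be mirrored with a hypothetical Python predicate for illustration (this is regex logic standing in for the T-SQL patterns, not SQL Server's `isnumeric()`):

```python
import re

def looks_like_plain_number(val):
    """Mirror of the three checks above (positive values only):
    no characters besides digits and dots, at most one dot,
    and at least one digit."""
    return (re.fullmatch(r"[0-9.]*", val) is not None   # only digits and dots
            and val.count(".") <= 1                      # not two decimal points
            and re.search(r"[0-9]", val) is not None)    # at least one digit

samples = ["65", "5.4", "excellent", "-1", "-.", ".", "8e4"]
checks = {v: looks_like_plain_number(v) for v in samples}
```

Note that, exactly as discussed above, `'-.'`, `'.'` and `'8e4'` are all rejected here, while a bare `'-1'` is too because this version only accepts positive values.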
I think you are looking for something like this: > select case isnumeric('a') when 1 then convert(int,'a') else null end
Catch exception with isnumeric in sql server
[ "sql", "sql-server" ]
I have the following code: ``` CASE WHEN (1+sum(x)/sum(y)) >=0 and (1+sum(x)/sum(y)) < 0.5 Then 1 ELSE 0 ``` x and y are columns. My problem is that `sum(y)` can be 0, and I get the error that dividing by 0 is not possible. How can I catch this error in the else clause, so that if `sum(y) = 0` the `ELSE 0` branch is used?
Maybe one more condition ? ``` CASE WHEN sum(y)=0 then 0 WHEN (1+sum(x)/sum(y)) >=0 and (1+sum(x)/sum(y)) < 0.5 Then 1 ELSE 0 END ```
Try to use expression `NullIf(sum(y), 0)` in place of `sum(y)`: ``` select CASE WHEN (1+sum(x)/NullIf(sum(y), 0)) >=0 and (1+sum(x)/NullIf(sum(y), 0)) < 0.5 Then 1 ELSE 0 END ``` [SQLFiddle](http://sqlfiddle.com/#!3/7a1ec/3) sample
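Both fixes (the extra `WHEN sum(y) = 0` guard and `NULLIF`) can be sketched from Python; SQLite is only a stand-in here for SQL Server, and the table and values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (grp TEXT, x REAL, y REAL);
    INSERT INTO t VALUES ('a', 1, 0), ('b', -3, 4);
""")
# Guard clause first: the WHEN SUM(y) = 0 branch avoids the division entirely.
rows = con.execute("""
    SELECT grp,
           CASE WHEN SUM(y) = 0 THEN 0
                WHEN 1 + SUM(x) / SUM(y) >= 0
                 AND 1 + SUM(x) / SUM(y) < 0.5 THEN 1
                ELSE 0 END
    FROM t GROUP BY grp ORDER BY grp
""").fetchall()
# NULLIF alternative: dividing by NULL yields NULL instead of an error.
nullif_result = con.execute(
    "SELECT 1 + SUM(x) / NULLIF(SUM(y), 0) FROM t WHERE grp = 'a'"
).fetchone()[0]
```

Group `'a'` has `SUM(y) = 0`, so the guard returns 0 and the `NULLIF` form returns NULL; group `'b'` evaluates to `1 + (-3)/4 = 0.25`, which falls in `[0, 0.5)`.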
0 division in a 'when else' environment
[ "sql", "sql-server", "t-sql" ]
I am thinking a transaction would help me in the following example but I could be wrong. What I am attempting to avoid is making 3 different calls to the database if I don't have to in the following scenario. I want to do the following: 1. select name, address from employee 2. select state from states 3. select error from errorTable In this case I have to call several different selects to get data. What is the best way to approach this scenario to return all the data I want to read while minimizing round trips to the database? Note: No tables have a relationship between them.
If there is no relation between the tables you could do a kind of ``` Select [Your selected columns] From (select * from table1 where [conditon for table1]) t1 Inner join (select * from table2 where [condition for table2]) t2 on 1=1 Inner join (select * from table3 where [condition for table3]) t3 on 1=1 ``` This makes one trip to the database.
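A runnable sketch of this pattern, using Python and SQLite purely for illustration (the table contents are invented); note that `ON 1=1` produces a cross product, so it only stays small when each subquery returns a handful of rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (name TEXT, address TEXT);
    CREATE TABLE states (state TEXT);
    CREATE TABLE errorTable (error TEXT);
    INSERT INTO employee VALUES ('Ann', '1 Main St');
    INSERT INTO states VALUES ('NY');
    INSERT INTO errorTable VALUES ('none');
""")
# One round trip: three unrelated selects stitched together on 1=1.
row = con.execute("""
    SELECT t1.name, t1.address, t2.state, t3.error
    FROM (SELECT name, address FROM employee) t1
    INNER JOIN (SELECT state FROM states) t2 ON 1=1
    INNER JOIN (SELECT error FROM errorTable) t3 ON 1=1
""").fetchone()
```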
I think you can use `reader.NextResult()` to iterate through result sets. An example borrowed from [codeproject](http://www.codeproject.com/Articles/306722/Executing-multiple-SQL-statements-as-one-against-S#pre5): ``` connection.Open(); dataReader = command.ExecuteReader(); // a multi-select query while (loopResult) { stringBuilder = new System.Text.StringBuilder(); while (dataReader.Read()) { stringBuilder.AppendLine(dataReader.GetInt32(0).ToString()); } System.Windows.MessageBox.Show(stringBuilder.ToString(), "Data from the result set"); loopResult = dataReader.NextResult(); } ``` But, I'm not sure what the rationale is for grouping queries for unrelated data. It'll likely make your project harder to understand and maintain.
Best Approach To Handle Multiple Select Statements In A Single Trip To The Database
[ ".net", "sql", "sql-server", "t-sql" ]
I have two variables as follows. ``` a = 2 b = 3 ``` I want to construct a DataFrame from this: ``` df2 = pd.DataFrame({'A':a, 'B':b}) ``` This generates an error: ``` ValueError: If using all scalar values, you must pass an index ``` I tried this also: ``` df2 = (pd.DataFrame({'a':a, 'b':b})).reset_index() ``` This gives the same error message. How do I do what I want?
The error message says that if you're passing scalar values, you have to pass an index. So you can either not use scalar values for the columns -- e.g. use a list: ``` >>> df = pd.DataFrame({'A': [a], 'B': [b]}) >>> df A B 0 2 3 ``` or use scalar values and pass an index: ``` >>> df = pd.DataFrame({'A': a, 'B': b}, index=[0]) >>> df A B 0 2 3 ```
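Both variants from the answer, side by side and runnable as-is:

```python
import pandas as pd

a, b = 2, 3
df_list = pd.DataFrame({'A': [a], 'B': [b]})           # wrap the scalars in lists
df_index = pd.DataFrame({'A': a, 'B': b}, index=[0])   # or pass an explicit index
```

Either way you get a one-row frame with columns `A` and `B`.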
You may try wrapping your dictionary into a list: ``` my_dict = {'A':1,'B':2} pd.DataFrame([my_dict]) ``` ``` A B 0 1 2 ```
Constructing pandas DataFrame from values in variables gives "ValueError: If using all scalar values, you must pass an index"
[ "python", "pandas", "dataframe", "valueerror", "scalar" ]
Does including an extra column in group by change the number of rows in the results? I was doing a select query on a table A(col1,col2....col9) and I first included ``` select col1,col2,col3 from A where col1 = (condition) group by col1, col2, col3 ``` which yielded a certain number of results. Now I changed the query to this ``` select col1,col2,col3, col8,col9 from A where col1=(condition) group by col1,col2,col3, col8,col9 ``` and I got a different number of rows in the results. What could be the possible explanation?
If the combination of col1, col2 and col3 is not unique, you can have more than one row with the same combination of those three. If that happens, *and* those duplicates have different values for col8 and/or col9, then grouping by those extra columns will result in more rows. Note that you can use `select distinct` to get the same results. `group by` is especially used if you want to aggregate over other columns, for instance, calculate a sum or a count, like so: ``` select col1, col2, col3, sum(col8) as total8 from A group by col1, col2, col3 ``` The query above will give you each unique combination of col1, col2 and col3 plus the sum over all col8's for each combination.
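The same effect can be demonstrated with pandas on invented data: grouping acts like `distinct` over the grouped columns, so adding a column that varies within a group adds rows:

```python
import pandas as pd

df = pd.DataFrame({
    'col1': [1, 1, 2],
    'col2': ['a', 'a', 'b'],
    'col8': ['x', 'y', 'x'],   # varies within the (1, 'a') group
})
narrow = df.groupby(['col1', 'col2']).size()           # two groups
wide = df.groupby(['col1', 'col2', 'col8']).size()     # splitting on col8 adds one
```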
By grouping on those columns you are, in essence, making the results distinct on the grouped columns. So if there were rows that had columns 1, 2, 3, 18, and 19 in common, they would be folded together.
How does group by statement in SQL affect the results ?
[ "sql", "performance", "sql-server-2008" ]
The only way I could do this was: ``` var1 = list[0] var2 = list[1] var3 = list[2] var4 = list[3] ... ``` Is there a simpler way?
``` var1, var2, var3, var4 = l ``` This assumes `l` is exactly 4 elements long. Most of the time, you want that, but if you don't, ``` var1, var2, var3, var4 = l[:4] ``` will ignore extra elements.
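Both forms, runnable (the starred variant requires Python 3):

```python
l = [10, 20, 30, 40]
var1, var2, var3, var4 = l        # exact-length unpacking
first, *middle, last = l          # starred form absorbs any extra elements
```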
``` var1, var2, var3, var4 = alist ``` The number of names must match the length of the list, or there will be an error. If you don't need all the variables, you can use a for loop.
What's the simplest way to assign list members to different variables?
[ "python" ]
I have a strong background in numeric computation using FORTRAN and parallelization with OpenMP, which I found easy enough to use on many problems. I switched to PYTHON since it is much more fun (at least for me) to develop with, but parallelization for numeric tasks seems much more tedious than with OpenMP. I'm often interested in loading large (tens of GB) data sets into main memory and manipulating them in parallel while keeping only a single copy of the data in main memory (shared data). I started to use the PYTHON module MULTIPROCESSING for this and came up with this generic example: ``` #test cases #python parallel_python_example.py 1000 1000 #python parallel_python_example.py 10000 50 import sys import numpy as np import time import multiprocessing import operator n_dim = int(sys.argv[1]) n_vec = int(sys.argv[2]) #class which contains large dataset and computationally heavy routine class compute: def __init__(self,n_dim,n_vec): self.large_matrix=np.random.rand(n_dim,n_dim)#define large random matrix self.many_vectors=np.random.rand(n_vec,n_dim)#define many random vectors which are organized in a matrix def dot(self,a,b):#dont use numpy to run on single core only!! return sum(p*q for p,q in zip(a,b)) def __call__(self,ii):# use __call__ as computation such that it can be handled by multiprocessing (pickle) vector = self.dot(self.large_matrix,self.many_vectors[ii,:])#compute product of one of the vectors and the matrix return self.dot(vector,vector)# return "length" of the result vector #initialize data comp = compute(n_dim,n_vec) #single core tt=time.time() result = [comp(ii) for ii in range(n_vec)] time_single = time.time()-tt print "Time:",time_single #multi core for prc in [1,2,4,10]:#the 20 case is there to check that the large_matrix is only once in the main memory tt=time.time() pool = multiprocessing.Pool(processes=prc) result = pool.map(comp,range(n_vec)) pool.terminate() time_multi = time.time()-tt print "Time using %2i processes. Time: %10.5f, Speedup:%10.5f" % (prc,time_multi,time_single/time_multi) ``` I ran two test cases on my machine (64bit Linux using Fedora 18) with the following results: ``` andre@lot:python>python parallel_python_example.py 10000 50 Time: 10.3667809963 Time using 1 processes. Time: 15.75869, Speedup: 0.65785 Time using 2 processes. Time: 11.62338, Speedup: 0.89189 Time using 4 processes. Time: 15.13109, Speedup: 0.68513 Time using 10 processes. Time: 31.31193, Speedup: 0.33108 andre@lot:python>python parallel_python_example.py 1000 1000 Time: 4.9363951683 Time using 1 processes. Time: 5.14456, Speedup: 0.95954 Time using 2 processes. Time: 2.81755, Speedup: 1.75201 Time using 4 processes. Time: 1.64475, Speedup: 3.00131 Time using 10 processes. Time: 1.60147, Speedup: 3.08242 ``` My question is, am I misusing the MULTIPROCESSING module here? Or is this the way it goes with PYTHON (i.e. don't parallelize within python but rely totally on numpy's optimizations)?
While there is no general answer to your question (in the title), I think it is valid to say that `multiprocessing` alone is not the key for great number-crunching performance in Python. In principle, however, Python (+ 3rd party modules) is awesome for number crunching. Find the right tools and you will be amazed. Most of the time, I am pretty sure, you will get better performance writing (much!) less code than you have achieved before doing everything manually in Fortran. You just have to use the right tools and approaches. This is a broad topic. A few random things that might interest you: * You can compile numpy and scipy yourself using Intel MKL and OpenMP (or maybe a sys admin in your facility already did so). This way, many linear algebra operations will automatically use multiple threads and get the best out of your machine. This is simply awesome and probably underestimated so far. Get your hands on a properly compiled numpy and scipy! * `multiprocessing` should be understood as a useful tool for managing multiple more or less independent processes. Communication among these processes has to be explicitly programmed. Communication happens mainly through pipes. Processes talking a lot to each other spend most of their time talking and not number crunching. Hence, `multiprocessing` is best used in cases when the transmission time for input and output data is small compared to the computing time. There are also tricks: you can, for instance, make use of Linux' `fork()` behavior and share large amounts of memory (read-only!) among multiple `multiprocessing` processes without having to pass this data around through pipes. You might want to have a look at <https://stackoverflow.com/a/17786444/145400>. * Cython has already been mentioned; you can use it in special situations and replace performance-critical code parts in your Python program with compiled code. I did not comment on the details of your code, because (a) it is not very readable (please get used to [PEP8](http://www.python.org/dev/peps/pep-0008/) when writing Python code :-)) and (b) I think especially regarding number crunching it depends on the problem what the right solution is. You have already observed in your benchmark what I have outlined above: in the context of `multiprocessing`, it is especially important to have an eye on the communication overhead. Generally speaking, you should always try to find a way from within Python to control compiled code to do the heavy work for you. Numpy and SciPy provide great interfaces for that.
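To make that last point concrete, here is a sketch (mine, not from the answer) of handing a benchmark-style computation to compiled numpy code instead of parallelizing a pure-Python loop; the sizes are kept small so the reference loop finishes quickly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_vec = 200, 50
large_matrix = rng.random((n_dim, n_dim))
many_vectors = rng.random((n_vec, n_dim))

def slow_one(ii):
    """Pure-Python matrix-vector product followed by a squared length."""
    vec = [sum(p * q for p, q in zip(row, many_vectors[ii])) for row in large_matrix]
    return sum(v * v for v in vec)

reference = [slow_one(ii) for ii in range(3)]   # only a few rows; it is slow

# Vectorized: every product at once, with the work done by compiled BLAS code.
products = many_vectors @ large_matrix.T        # shape (n_vec, n_dim)
fast = (products ** 2).sum(axis=1)
```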
Number crunching with Python... You probably should learn about [Cython](http://cython.org/). It is an intermediate language between Python and C. It is tightly interfaced with numpy and has support for parallelization using OpenMP as a backend.
Is the multiprocessing module of python the right way to speed up large numeric calculations?
[ "python", "parallel-processing", "multiprocessing" ]
My study question is: Define a procedure, total\_enrollment, that takes as an input a list of elements, where each element is a list containing three elements: a university name, the total number of students enrolled, and the annual tuition fees. The procedure should return two numbers, not a string, giving the total number of students enrolled at all of the universities in the list, and the total tuition fees (which is the sum of the number of students enrolled times the tuition fees for each university). The code given is: ``` usa_univs = [ ['California Institute of Technology',2175,37704], ['Harvard',19627,39849], ['Massachusetts Institute of Technology',10566,40732], ['Princeton',7802,37000], ['Rice',5879,35551], ['Stanford',19535,40569], ['Yale',11701,40500] ] ``` My solution is: ``` def total_enrollment(a): total_students = 0 costsum = 0 for e in a: total_students = total_students + e[1] costsum = costsum + e[2] all_in_all = total_students * costsum return total_students return all_in_all ``` What I should see is: 77285, 3058581079. What actually comes out is: 77285, with no second number.
First of all, you can't return twice; change your code to this in order to return a tuple. Also, I fixed your math for calculating the total cost: you were multiplying total students by total cost, but you want to calculate each university separately. Students at CalTech are going to pay $37704, not the total cost of all universities. ``` def total_enrollment(a): total_students = 0 all_in_all = 0 for e in a: total_students = total_students + e[1] all_in_all += (e[1] * e[2]) return (total_students, all_in_all) ``` Then you could access the values like this ``` >>>result = total_enrollment(usa_univs) >>>print result[0] 77285 >>>print result[1] 3058581079 ```
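Running this corrected procedure on the data from the question reproduces the expected pair of numbers:

```python
usa_univs = [
    ['California Institute of Technology', 2175, 37704],
    ['Harvard', 19627, 39849],
    ['Massachusetts Institute of Technology', 10566, 40732],
    ['Princeton', 7802, 37000],
    ['Rice', 5879, 35551],
    ['Stanford', 19535, 40569],
    ['Yale', 11701, 40500],
]

def total_enrollment(universities):
    total_students = 0
    total_fees = 0
    for name, students, tuition in universities:
        total_students += students
        total_fees += students * tuition   # per-university product, summed
    return total_students, total_fees

result = total_enrollment(usa_univs)
```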
You can't return twice from a function. You can rather return both the values as `tuple`: ``` return total_students, all_in_all ``` And then unpack the return value in two variables. *E.g.:* ``` >>> def func(): ... return 1, 2 ... >>> v1, v2 = func() >>> v1 1 >>> v2 2 ```
Why does the code return wrong multiplication?
[ "python" ]
I'm running a Flask app using `mod_wsgi` and `apache2` on an Ubuntu server. I ran the Flask app on `localhost` successfully and then deployed it on the Ubuntu server. But when I try to update the database, it gives this error: ``` Failed to update model. (OperationalError) attempt to write a readonly database u'UPDATE mysongs SET songurl=? WHERE songid.id = ?' (u'www.site.com/I_wanna_dance', 1) ``` I looked at the database file permissions, which are: ``` -rwxr-xr-x 1 www-data www-data 10240 Jul 14 15:35 /var/www/mywebsite/appfolder/appdata.db ``` When I try changing the permissions to 777, 755, 644 etc. it shows another error: `unable to open database file` The database file works fine with 644 permissions on `localhost` but not on the Ubuntu server. I also checked the permissions of the directories `/var` `/var/www` `/var/www/mywebsite` `/var/www/mywebsite/appfolder` etc., and all have `www-data:www-data` as owner and group. I tried googling, but found no proper solution other than suggestions to change file/dir permissions, which I have tried myself. Why can't it read/access the database file? Please suggest.
Resolved the issue. It was due to a database file permission conflict.
This issue is related to file permissions management AND, mostly, to the user chosen in the Apache configuration file (`*.conf`) defined to hold the application processes. In a few words : the **write permissions** need to match this user. Most of the time, the sqlite database file has been created by a specific user (for example your current user) and the site application is running under child processes launched by the Apache default user **www-data** (if the parameter `user` wasn't specified inside the directive `WSGIDaemonProcess`). In this case, the database can be read but it will throw this error if you try to modify anything : > (OperationalError) attempt to write a readonly database... because *www-data* has no permission on the file (or on the parent folder) --- **First way : Apply permissions to the user www-data** You can set the *write* permissions on the database file and its parent folder. If the folder contains other files, you can add write permission on it and only change the ownership of the database file to the user *www-data*, for example : ``` sudo chmod o+w db_directory sudo chown www-data: db_directory/site_database.db ``` Or if the folder contains only the database file, you can try to change the folder owner directly : ``` sudo chown -R www-data: db_directory ``` Then check that *read*/*write* permissions are well set (with `ls -l site_database.db`) More help in [this post.](https://askubuntu.com/questions/6723/change-folder-permissions-and-ownership) --- **Other solution : Add a specific user to hold the application processes** This can be done by giving the [`user` and `group` parameters](http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html) in the directive `WSGIDaemonProcess` in the Apache configuration. It will make Apache launch the child processes under a specific user. For example : ``` ... WSGIDaemonProcess main user=myuser group=myuser threads=3 python-home=/path/to/the/virtualenv/ WSGIProcessGroup main WSGIApplicationGroup %{GLOBAL} ... ``` This user will manage all operations, including reads/writes to any files, so check that it has all the needed permissions on every related file. For security reasons, you may not want to use a widely-privileged user. Some comments can help in [this post](https://serverfault.com/questions/294101/wsgidaemonprocess-specifying-a-user). --- **Note** : be careful if you manage your own logging files with directives like `ErrorLog` in the Apache configuration; these files will follow the same permission logic. The same goes for any file that could be changed by the application.
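The error class itself is easy to reproduce from Python by opening a SQLite file read-only via a URI; this only demonstrates what the app is hitting (the real fix is the permission/user alignment described above, and the file name here is made up):

```python
import os
import sqlite3
import tempfile

# Create a database file normally, then reopen it in read-only mode.
path = os.path.join(tempfile.mkdtemp(), "app.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE mysongs (id INTEGER, songurl TEXT)")
rw.commit()
rw.close()

ro = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO mysongs VALUES (1, 'www.site.com/I_wanna_dance')")
    write_failed = False
except sqlite3.OperationalError:   # "attempt to write a readonly database"
    write_failed = True
```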
OperationalError: attempt to write a readonly database in ubuntu server
[ "python", "apache", "ubuntu", "flask", "mod-wsgi" ]
I have upgraded from django 1.2.7 to django 1.5.1. I am using python 2.6.6. When I try to run `python manage.py collectstatic` I get > Unknown command: 'collectstatic' From my settings.py: ``` STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', 'compressor.finders.CompressorFinder', ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.admin', 'django.contrib.admindocs', 'django.contrib.staticfiles', TEMPLATE_CONTEXT_PROCESSORS = ( "django.contrib.auth.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.request", "django.contrib.messages.context_processors.messages", "MYPROJECT.control.context_processors.debug", "django.core.context_processors.static", ) ``` If I run `python manage.py help` I get ``` Available subcommands: [django] cleanup compilemessages createcachetable dbshell diffsettings dumpdata flush inspectdb loaddata makemessages runfcgi runserver shell sql sqlall sqlclear sqlcustom sqlflush sqlindexes sqlinitialdata sqlsequencereset startapp startproject syncdb test testserver validate ``` If I run `python manage.py version`: > 1.5.1
I had a similar error message, but despite my suspicions, it had nothing to do with the Django update. If you have an error in settings (I had an empty SECRET\_KEY value), then "django" will be the only app that gets loaded. I found the root of the problem by running `python manage.py shell` and that quickly told me what's wrong with the settings.
For those coming here from google, I resolved it through adding ``` 'django.contrib.staticfiles', ``` to `INSTALLED_APPS`
Django admin.py Unknown command: 'collectstatic'
[ "python", "django" ]
I have a list of 3D points for which I calculate a plane with the numpy.linalg.lstsq method. Now I want to do an orthogonal projection of each point onto this plane, but I can't find my mistake: ``` from numpy.linalg import lstsq def VecProduct(vek1, vek2): return (vek1[0]*vek2[0] + vek1[1]*vek2[1] + vek1[2]*vek2[2]) def CalcPlane(x, y, z): # x, y and z are given in lists n = len(x) sum_x = sum_y = sum_z = sum_xx = sum_yy = sum_xy = sum_xz = sum_yz = 0 for i in range(n): sum_x += x[i] sum_y += y[i] sum_z += z[i] sum_xx += x[i]*x[i] sum_yy += y[i]*y[i] sum_xy += x[i]*y[i] sum_xz += x[i]*z[i] sum_yz += y[i]*z[i] M = ([sum_xx, sum_xy, sum_x], [sum_xy, sum_yy, sum_y], [sum_x, sum_y, n]) b = (sum_xz, sum_yz, sum_z) a,b,c = lstsq(M, b)[0] ''' z = a*x + b*y + c a*x = z - b*y - c x = -(b/a)*y + (1/a)*z - c/a ''' r0 = [-c/a, 0, 0] u = [-b/a, 1, 0] v = [1/a, 0, 1] xn = [] yn = [] zn = [] # orthogonalize u and v with Gram-Schmidt to get u and w uu = VecProduct(u, u) vu = VecProduct(v, u) fak0 = vu/uu erg0 = [val*fak0 for val in u] w = [v[0]-erg0[0], v[1]-erg0[1], v[2]-erg0[2]] ww = VecProduct(w, w) # P_new = ((x*u)/(u*u))*u + ((x*w)/(w*w))*w for i in range(len(x)): xu = VecProduct([x[i], y[i], z[i]], u) xw = VecProduct([x[i], y[i], z[i]], w) fak1 = xu/uu fak2 = xw/ww erg1 = [val*fak1 for val in u] erg2 = [val*fak2 for val in w] erg = [erg1[0]+erg2[0], erg1[1]+erg2[1], erg1[2]+erg2[2]] erg[0] += r0[0] xn.append(erg[0]) yn.append(erg[1]) zn.append(erg[2]) return (xn,yn,zn) ``` This returns a list of points which all lie in a plane, but when I display them, they are not at the positions they should be. I believe there is already a built-in method to solve this problem, but I couldn't find any =(
You are making very poor use of `np.linalg.lstsq`, since you are feeding it a precomputed 3x3 matrix, instead of letting it do the job. I would do it like this: ``` import numpy as np def calc_plane(x, y, z): a = np.column_stack((x, y, np.ones_like(x))) return np.linalg.lstsq(a, z)[0] >>> x = np.random.rand(1000) >>> y = np.random.rand(1000) >>> z = 4*x + 5*y + 7 + np.random.rand(1000)*.1 >>> calc_plane(x, y, z) array([ 3.99795126, 5.00233364, 7.05007326]) ``` It is actually more convenient to use a formula for your plane that doesn't depend on the coefficient of `z` not being zero, i.e. use `a*x + b*y + c*z = 1`. You can similarly compute `a`, `b` and `c` doing: ``` def calc_plane_bis(x, y, z): a = np.column_stack((x, y, z)) return np.linalg.lstsq(a, np.ones_like(x))[0] >>> calc_plane_bis(x, y, z) array([-0.56732299, -0.70949543, 0.14185393]) ``` To project points onto a plane, using my alternative equation, the vector `(a, b, c)` is perpendicular to the plane. It is easy to check that the point `(a, b, c) / (a**2+b**2+c**2)` is on the plane, so projection can be done by referencing all points to that point on the plane, projecting the points onto the normal vector, subtracting that projection from the points, then referencing them back to the origin. You could do that as follows: ``` def project_points(x, y, z, a, b, c): """ Projects the points with coordinates x, y, z onto the plane defined by a*x + b*y + c*z = 1 """ vector_norm = a*a + b*b + c*c normal_vector = np.array([a, b, c]) / np.sqrt(vector_norm) point_in_plane = np.array([a, b, c]) / vector_norm points = np.column_stack((x, y, z)) points_from_point_in_plane = points - point_in_plane proj_onto_normal_vector = np.dot(points_from_point_in_plane, normal_vector) proj_onto_plane = (points_from_point_in_plane - proj_onto_normal_vector[:, None]*normal_vector) return point_in_plane + proj_onto_plane ``` So now you can do something like: ``` >>> project_points(x, y, z, *calc_plane_bis(x, y, z)) array([[ 0.13138012, 0.76009389, 11.37555123], [ 0.71096929, 0.68711773, 13.32843506], [ 0.14889398, 0.74404116, 11.36534936], ..., [ 0.85975642, 0.4827624 , 12.90197969], [ 0.48364383, 0.2963717 , 10.46636903], [ 0.81596472, 0.45273681, 12.57679188]]) ```
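A quick sanity check of this construction (my own compact reimplementation of the same idea): after projecting random points, every projected point should satisfy the plane equation `a*x + b*y + c*z = 1`:

```python
import numpy as np

def project_points(x, y, z, a, b, c):
    """Project the points (x, y, z) onto the plane a*x + b*y + c*z = 1."""
    normal = np.array([a, b, c], dtype=float)
    point_in_plane = normal / normal.dot(normal)   # this point lies on the plane
    unit_normal = normal / np.linalg.norm(normal)
    shifted = np.column_stack((x, y, z)) - point_in_plane
    dist = shifted.dot(unit_normal)                # signed distance to the plane
    return point_in_plane + shifted - dist[:, None] * unit_normal

rng = np.random.default_rng(1)
x, y, z = rng.random((3, 100))
a, b, c = 0.5, -0.7, 0.2                           # arbitrary plane coefficients
proj = project_points(x, y, z, a, b, c)
residual = proj.dot([a, b, c]) - 1.0               # zero for points on the plane
```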
Doing everything in matrices is one option. If you add your points as row vectors to a matrix `X`, and `y` is a vector, then the parameters vector `beta` for the least squares solution are: ``` import numpy as np beta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y)) ``` but there's an easier way, if we want to do projections: QR decomposition gives us an orthonormal projection matrix, as `Q.T`, and `Q` is itself the matrix of orthonormal basis vectors. So, we can first form `QR`, then get `beta`, then use `Q.T` to project the points. QR: ``` Q, R = np.linalg.qr(X) ``` beta: ``` # use R to solve for beta # R is upper triangular, so can use triangular solver: beta = scipy.linalg.solve_triangular(R, Q.T.dot(y)) ``` So now we have `beta`, and we can project the points using `Q.T` very simply: ``` X_proj = Q.T.dot(X) ``` That's it! If you want more information and graphical piccies and stuff, I made a whole bunch of notes, whilst doing something similar, at: <https://github.com/hughperkins/selfstudy-IBP/blob/9dedfbb93f4320ac1bfef60db089ae0dba5e79f6/test_bases.ipynb> (Edit: note that if you want to add a bias term, so the best-fit doesn't have to pass through the origin, you can simply add an additional column, with all-1s, to `X`, which acts as the bias term/feature)
orthogonal projection with numpy
[ "python", "arrays", "numpy" ]
I have a class: ``` class DatabaseThing(): def __init__(self, dbName, user, password): self.connection = ibm_db_dbi.connect(dbName, user, password) ``` I want to test this class but with a test database. So in my test class I am doing something like this: ``` import sqlite3 as lite import unittest from DatabaseThing import * class DatabaseThingTestCase(unittest.TestCase): def setUp(self): self.connection = lite.connect(":memory:") self.cur = self.connection.cursor() self.cur.executescript ('''CREATE TABLE APPLE (VERSION INT, AMNT SMALLINT); INSERT INTO APPLE VALUES(16,0); INSERT INTO APPLE VALUES(17,5); INSERT INTO APPLE VALUES(18,1); INSERT INTO APPLE VALUES(19,15); INSERT INTO APPLE VALUES(20,20); INSERT INTO APPLE VALUES(21,25);''') ``` How would I go about using this connection rather than the connection from the class I want to test? Meaning, using the connection from `setUp(self)` instead of the connection from `DatabaseThing`. I cannot test the functions without instantiating the class. I want to mock the `__init__` method somehow in the Test Class, but I didn't find anything that seemed useful in the [documentation](http://www.voidspace.org.uk/python/mock/).
Instead of mocking, you could simply subclass the database class and test against that: ``` class TestingDatabaseThing(DatabaseThing): def __init__(self, connection): self.connection = connection ``` and instantiate **that** class instead of `DatabaseThing` for your tests. The methods are still the same, the behaviour will still be the same, but now all methods using `self.connection` use your test-supplied connection instead.
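A self-contained sketch of this idea (the `count_rows` method is invented for the demo; the real class would connect to DB2 via `ibm_db_dbi`):

```python
import sqlite3

class DatabaseThing:
    def __init__(self, db_name, user, password):
        # The real class would call ibm_db_dbi.connect(...) here.
        raise RuntimeError("no real database in this demo")

    def count_rows(self, table):
        cur = self.connection.cursor()
        cur.execute("SELECT COUNT(*) FROM " + table)
        return cur.fetchone()[0]

class TestingDatabaseThing(DatabaseThing):
    def __init__(self, connection):     # bypass the real __init__ entirely
        self.connection = connection

conn = sqlite3.connect(":memory:")
conn.executescript("""CREATE TABLE APPLE (VERSION INT, AMNT SMALLINT);
                      INSERT INTO APPLE VALUES (16, 0);
                      INSERT INTO APPLE VALUES (17, 5);""")
thing = TestingDatabaseThing(conn)
apples = thing.count_rows("APPLE")
```

The inherited method runs unchanged against the injected in-memory connection.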
If you want to return a mock when you initialize a class, mock out the `__new__` method, not `__init__`. `__new__` makes the new instance and `__init__` initializes it, but can only return None. If you mock `__new__`, it can return a mock you can assert on to simulate instance creation in the test. ``` @mock.patch('Car.__new__') def test_create_car(self, mock_Car): mock_inst = mock.MagicMock() mock_Car.return_value = mock_inst create_car() # Assert class was called mock_Car.assert_called_once() # __new__ is called with actual class as first arg mock_Car.assert_called_with(Car) # Assert instance method is called as expected mock_inst.set_miles.assert_called_with(0) ```
Mocking __init__() for unittesting
[ "python", "sqlite", "unit-testing", "mocking" ]
I'm making a query in MySQL but I have a problem. This is my column structure: ``` |country| -------- Boston -------- Chicago -------- washington ``` The thing is I may have a search item like: ``` North Washington Boston Easht South Chicago ``` So I'm trying to match it using the %like% operator like this: ``` select * from am_ciudad where Ciu_Nombre LIKE '%NORTH BOSTON'; select * from am_ciudad where Ciu_Nombre LIKE 'CHICAGO%'; select * from am_ciudad where Ciu_Nombre LIKE '%South Chicago%'; ``` The second one matches because it starts with the word "chicago", but when the query has a prefix it doesn't. Is there a way to search by matching at least one word in the query string?
**`IN` method** Use a comma-separated list of the words in your search query: ``` SELECT * FROM am_ciudad WHERE Ciu_Nombre IN('North', 'Washington', ...) ``` **`REGEXP` method** I can imagine the `REGEXP` will be slower, but I haven't benchmarked it. ``` SELECT * FROM am_ciudad WHERE Ciu_Nombre REGEXP 'North|Washington|...' ```
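The `IN` approach can be exercised end to end by splitting the search string into words and binding them as placeholders; SQLite is used here only as a convenient stand-in for MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE am_ciudad (Ciu_Nombre TEXT);
    INSERT INTO am_ciudad VALUES ('Boston'), ('Chicago'), ('Washington');
""")

search = "South Chicago"
words = search.split()                        # ['South', 'Chicago']
placeholders = ",".join("?" for _ in words)   # '?,?'
rows = con.execute(
    "SELECT Ciu_Nombre FROM am_ciudad WHERE Ciu_Nombre IN (%s)" % placeholders,
    words,
).fetchall()
```

Only the placeholder string is built dynamically; the words themselves are bound as parameters, so the query stays injection-safe.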
Your other searches won't match because those strings do not exist. If you want to match Boston in the phrase "I love Boston Red Sox" then you would need to use `...LIKE '%Boston%';`. The %s are wildcards, so using them before and after the word you are trying to match tells the query that you don't care what comes before and after. Your search string of `...LIKE '%NORTH BOSTON';` is telling the query that you are looking for `<anything>NORTH BOSTON`, which obviously you don't have. Hopefully that makes sense and helps out.
MySQL search match at least one word on query string
[ "mysql", "sql" ]
I have 1000 rows in my data-table `dt` and I want to insert all this data into my SQL table in one shot. I know how to do this with a for loop, but I wonder whether there is a better way to insert all the rows into the database in a single shot. My current code to insert this data is like this: ``` DataTable dt = result.Tables[0]; SqlHelper.ExecuteScalar(GlobalSettings.DbDSN, CommandType.Text, "INSERT INTO tbl_Projects (Project,Owner,Consultant ,Contractor,Value ,Level1,Level2 ,Status ,Country ,CreatedDate ,CreatedByID ,CreatedByName) VALUES (@Project,@Owner,@Consultant ,@Contractor,@Value ,@Level1,@Level2 ,@Status ,@Country ,@CreatedDate ,@CreatedByID ,@CreatedByName)", new SqlParameter("@Project",dt.Rows[0].ItemArray[0]), new SqlParameter("@Owner", dt.Rows[0].ItemArray[1]), new SqlParameter("@Consultant", dt.Rows[0].ItemArray[2]), new SqlParameter("@Contractor", dt.Rows[0].ItemArray[3]), new SqlParameter("@Value", dt.Rows[0].ItemArray[4]), new SqlParameter("@Level1", dt.Rows[0].ItemArray[5]), new SqlParameter("@Level2", dt.Rows[0].ItemArray[6]), new SqlParameter("@Status", dt.Rows[0].ItemArray[7]), new SqlParameter("@Country", dt.Rows[0].ItemArray[8]), new SqlParameter("@CreatedDate", System.DateTime.Now), new SqlParameter("@CreatedByID", ""), new SqlParameter("@CreatedByName", "")); ``` Can anyone give me a hand with this?
You can do this by leveraging the `SqlBulkCopy` class. In short, get a `SqlConnection` created and opened and then use this code to copy that in bulk from the `DataTable` to the server. ``` using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlConn)) { bulkCopy.DestinationTableName = "tbl_Projects"; bulkCopy.WriteToServer(dt); } ```
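For readers coming from Python, the analogous set-based move (one statement, many rows, instead of one round trip per row) looks like this with `executemany`; this is an analogue, not the `SqlBulkCopy` API, and the schema is simplified:

```python
import sqlite3

# Stand-in for the DataTable's 1000 rows.
rows = [("Tower", "ACME", "ConsultCo", "BuildIt", 1000000,
         "L1", "L2", "Open", "US") for _ in range(1000)]

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tbl_Projects
               (Project, Owner, Consultant, Contractor, Value,
                Level1, Level2, Status, Country)""")
con.executemany("INSERT INTO tbl_Projects VALUES (?,?,?,?,?,?,?,?,?)", rows)
inserted = con.execute("SELECT COUNT(*) FROM tbl_Projects").fetchone()[0]
```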
You could also use a table type parameter and pass an entire dataset from C# to the SQL Server. See this question: [INSERT using LIST into Stored Procedure](https://stackoverflow.com/questions/17487299/insert-using-list-into-stored-procedure/17487572#comment25416816_17487572)
Insert multiple rows in SQL as sqlparameter
[ "", "sql", "sql-server-2008", "" ]
I have two pandas data frames, which look like this:

```
import pandas as pd

df_one = pd.DataFrame( {
    'A': [1,1,2,3,4,4,4],
    'B1': [0.5,0.0,0.2,0.1,0.3,0.2,0.1],
    'B2': [0.2,0.3,0.1,0.5,0.3,0.1,0.2],
    'B3': [0.1,0.2,0.0,0.9,0.0,0.3,0.5]} );

df_two = pd.DataFrame( {
    'A': [1,2,3,4],
    'C1': [1.0,9.0,2.1,9.0],
    'C2': [2.0,3.0,0.7,1.1],
    'C3': [5.0,4.0,2.3,3.4]} );

df_one
   A   B1   B2   B3
0  1  0.5  0.2  0.1
1  1  0.0  0.3  0.2
2  2  0.2  0.1  0.0
3  3  0.1  0.5  0.9
4  4  0.3  0.3  0.0
5  4  0.2  0.1  0.3
6  4  0.1  0.2  0.5

df_two
   A   C1   C2   C3
0  1  1.0  2.0  5.0
1  2  9.0  3.0  4.0
2  3  2.1  0.7  2.3
3  4  9.0  1.1  3.4
```

What I would like to do is compute a scalar product where I would be multiplying rows of the first data frame by the rows of the second data frame, i.e., `\sum_i B_i * C_i`, but in such a way that a row in the first data frame is multiplied by a row in the second data frame only if the values of the `A` column match in both frames. I know how to do it looping and using if's, but I would like to do it in a more efficient numpy-like or pandas-like way. Any help much appreciated :)
Not sure if you want unique values for column A (If you do, use groupby on the result below) ``` pd.merge(df_one, df_two, on='A') A B1 B2 B3 C1 C2 C3 0 1 0.5 0.2 0.1 1.0 2.0 5.0 1 1 0.0 0.3 0.2 1.0 2.0 5.0 2 2 0.2 0.1 0.0 9.0 3.0 4.0 3 3 0.1 0.5 0.9 2.1 0.7 2.3 4 4 0.3 0.3 0.0 9.0 1.1 3.4 5 4 0.2 0.1 0.3 9.0 1.1 3.4 6 4 0.1 0.2 0.5 9.0 1.1 3.4 pd.merge(df_one, df_two, on='A').apply(lambda s: sum([s['B%d'%i] * s['C%d'%i] for i in range(1, 4)]) , axis=1) 0 1.40 1 1.60 2 2.10 3 2.63 4 3.03 5 2.93 6 2.82 ```
Another approach would be something similar to this: ``` import pandas as pd df_one = pd.DataFrame( { 'A': [1,1,2,3,4,4,4], 'B1': [0.5,0.0,0.2,0.1,0.3,0.2,0.1], 'B2': [0.2,0.3,0.1,0.5,0.3,0.1,0.2], 'B3': [0.1,0.2,0.0,0.9,0.0,0.3,0.5]} ); df_two = pd.DataFrame( { 'A': [1,2,3,4], 'C1': [1.0,9.0,2.1,9.0], 'C2': [2.0,3.0,0.7,1.1], 'C3': [5.0,4.0,2.3,3.4]} ); lookup = df_two.groupby(df_two.A) def multiply_rows(row): other = lookup.get_group(row['A']) # We want every column after "A" x = row.values[1:] # In this case, other is a 2D array with one row, similar to "row" above... y = other.values[0, 1:] return x.dot(y) # The "axis=1" makes each row to be passed in, rather than each column result = df_one.apply(multiply_rows, axis=1) print result ``` This results in: ``` 0 1.40 1 1.60 2 2.10 3 2.63 4 3.03 5 2.93 6 2.82 ```
numpy/pandas effective multiplication of arrays/dataframes
[ "", "python", "numpy", "pandas", "" ]
Say, for example, I want to know whether the pattern "\section" is in the text "abcd\sectiondefghi". Of course, I can do this: ``` import re motif = r"\\section" txt = r"abcd\sectiondefghi" pattern = re.compile(motif) print pattern.findall(txt) ``` That will give me what I want. However, each time I want to find a new pattern in a new text, I have to change the code which is painful. Therefore, I want to write something more flexible, like this (`test.py`): ``` import re import sys motif = sys.argv[1] txt = sys.argv[2] pattern = re.compile(motif) print pattern.findall(txt) ``` Then, I want to run it in terminal like this: ``` python test.py \\section abcd\sectiondefghi ``` However, that will not work (I hate to use `\\\\section`). So, is there any way of converting my user input (either from terminal or from a file) to python raw string? Or is there a better way of doing the regular expression pattern compilation from user input? Thank you very much.
Use [`re.escape()`](http://docs.python.org/2/library/re.html#re.escape) to make sure input text is treated as literal text in a regular expression: ``` pattern = re.compile(re.escape(motif)) ``` Demo: ``` >>> import re >>> motif = r"\section" >>> txt = r"abcd\sectiondefghi" >>> pattern = re.compile(re.escape(motif)) >>> txt = r"abcd\sectiondefghi" >>> print pattern.findall(txt) ['\\section'] ``` `re.escape()` escapes all non-alphanumerics; adding a backslash in front of each such a character: ``` >>> re.escape(motif) '\\\\section' >>> re.escape('\n [hello world!]') '\\\n\\ \\[hello\\ world\\!\\]' ```
One way to do this is using an argument parser, like [`optparse`](http://docs.python.org/2/library/optparse.html) or [`argparse`](http://docs.python.org/2/library/argparse.html#module-argparse). Your code would look something like this: ``` import re from optparse import OptionParser parser = OptionParser() parser.add_option("-s", "--string", dest="string", help="The string to parse") parser.add_option("-r", "--regexp", dest="regexp", help="The regular expression") parser.add_option("-a", "--action", dest="action", default='findall', help="The action to perform with the regexp") (options, args) = parser.parse_args() print getattr(re, options.action)(re.escape(options.regexp), options.string) ``` An example of me using it: ``` > code.py -s "this is a string" -r "this is a (\S+)" ['string'] ``` Using your example: ``` > code.py -s "abcd\sectiondefghi" -r "\section" ['\\section'] # remember, this is a python list containing a string, the extra \ is okay. ```
Convert command line arguments to regular expression
[ "", "python", "regex", "shell", "" ]
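The accepted `re.escape` approach above can be sketched end to end with only the standard library; `find_literal` is an invented helper name, not part of any API:

```python
import re

def find_literal(needle, haystack):
    """Find all occurrences of the literal string `needle`, treating
    regex metacharacters (backslashes, dots, ...) as plain text."""
    pattern = re.compile(re.escape(needle))
    return pattern.findall(haystack)

# The user-supplied pattern is used verbatim; no manual doubling of backslashes.
print(find_literal(r"\section", r"abcd\sectiondefghi"))  # ['\\section']
```

The same call works unchanged when `needle` arrives via `sys.argv`, since `re.escape` neutralises whatever the shell passed through.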
I'm looking for a Python 3.x library that allows interaction with other programs. For example, I already have some sort of command-line interface which I have developed in Python, and I want to be able to enter, say "1", and have another program open. From here, I wish to hit another input like "2" and have it manipulate the GUI that opens (for example, for it to "click" the Configurations dropdown bar and select an option, perhaps modify a few settings, apply, and then possibly also automatically enter some text). The reason I'm doing this is for test automation. I've already tried using pywinauto, but I've found it is not compatible with Python 3! :( Is there another possible approach to this? Thanks in advance!!! P.S. I may have forgotten to mention that I'm using Windows 7, but with Python32
You could look into [sikuli](http://doc.sikuli.org/). It lets you automate clicks and other actions based on region or matched graphic. Fairly smart. Is there a reason you're dead set on using py3?
Py3-compatible pywinauto released! New home page: <http://pywinauto.github.io/> P.S. I'm maintainer of pywinauto.
Python 3.x Interaction with other Program GUIs
[ "", "python", "user-interface", "pywinauto", "" ]
I'm remaking my battleship game, and I have a constant variable called SEA which holds an empty board. However, the variable is being modified, and I don't know why (or where). I suspect it's being passed by reference to player\_board and when player\_board is modified, so is SEA. How do I stop that from happening? Here is my code. You'll see on the bottom I print out SEA, and it's been modified. ``` from random import randint #Constants and globals OCEAN = "O" FIRE = "X" HIT = "*" SIZE = 10 SHIPS = [5, 4, 3, 3, 2] player_radar = [] player_board = [] player_ships = [] ai_radar = [] ai_board = [] ai_ships = [] #Classes class Ship(object): def set_board(self, b): self.ship_board = b def edit(self, row, col, x): self.ship_board[row][col] = x def __repre__(self): return self.ship_board #Set up variables last_ship = Ship() #Holds the last ship made in make_ship() SEA = [] # Blank Board for x in range(SIZE): SEA.append([OCEAN] * SIZE) #Functions def print_board(): for row in range(SIZE): print " ".join(player_radar[row]), "||" , " ".join(player_board[row]) def random_row(is_vertical, size): if is_vertical: return randint(0, SIZE - size) else: return randint(0, SIZE -1) def random_col(is_vertical, size): if is_vertical: return randint(0, SIZE - 1) else: return randint(size-1, SIZE -1) def exists(row, col, b): # true if ocean if row < 0 or row >= SIZE: return 0 elif col < 0 or col >= SIZE: return 0 if b[row][col] == OCEAN: return 1 else: return 0 def make_ship(size, board): #Find an unoccupied spot, then place ship on board #Also put ship in last_ship temp = [] temp = board is_vertical = randint(0, 1) # vertical ship if true occupied = True while(occupied): occupied = False ship_row = random_row(is_vertical, size) ship_col = random_col(is_vertical, size) if is_vertical: for p in range(size): if not exists(ship_row+p, ship_col, temp): occupied = True else: for p in range(size): if not exists(ship_row, ship_col-p, temp): occupied = True #Place ship on boards 
last_ship.set_board(SEA) if is_vertical: last_ship.edit(ship_row, ship_col, "^") last_ship.edit(ship_row+size-1, ship_col, "v") temp[ship_row][ship_col] = "^" temp[ship_row+size-1][ship_col] = "v" for p in range(size -2): last_ship.edit(ship_row+p+1, ship_col, "+") temp[ship_row+p+1][ship_col] = "+" else: last_ship.edit(ship_row, ship_col, ">") last_ship.edit(ship_row, ship_col-size+1, "<") temp[ship_row][ship_col] = ">" temp[ship_row][ship_col-size+1] = "<" for p in range(size -2): last_ship.edit(ship_row, ship_col-p-1, "+") temp[ship_row][ship_col-p-1] = "+" return temp # Make the boards player_radar = SEA player_board = SEA ai_radar = SEA ai_board = SEA print_board() for x in SHIPS: player_board = make_ship(x, player_board) #player_ships.append(last_ship) #ai_board = make_ship(x, ai_board) #ai_ships.append(last_ship) print "Let's play Battleship!" for row in range(SIZE): print " ".join(SEA[row]) ```
`SEA` and its members are lists, and lists in Python are mutable. When you say `player_radar = SEA`, etc., you're not making a copy of `SEA`; you're making a new reference to it. Any changes you then make to `player_radar` will be reflected in `SEA`. [`copy.deepcopy`](http://docs.python.org/2/library/copy.html) is often used to recursively copy nested mutable data structures. Personally, however, I prefer to just copy the number of layers I know I'll need. For instance, to make a copy of a list of lists and all its members, you can do this: ``` player_radar = [sublist[:] for sublist in SEA] ``` This is a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). Each sublist is copied using `[:]`, which makes a shallow copy of each one.
SEA is a list, so make a copies of it: ``` player_radar = SEA[:] player_board = SEA[:] ai_radar = SEA[:] ai_board = SEA[:] ``` or deeper copies of it, if you need to. EDIT: By "deeper copies", I mean that if your list contains, for instance, other lists, then just making a top level copy will create a new list, but its members will be references to the same members that your original list had, so to create a deep copy, you'd also need to make copies of those members. To illustrate: ``` >>> list1 = [[1,2,3]] >>> list2 = list1[:] # Make a shallow copy >>> print(list1) [[1,2,3]] >>> print(list2) [[1,2,3]] >>> list2[0][0] = 4 # Also changing list1's first member, here >>> print(list2) [[4,2,3]] >>> print(list1) [[4,2,3]] # So list1 has also changed. ```
Constant Variable is being changed
[ "", "python", "" ]
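The aliasing bug discussed in the question above can be reproduced and fixed in a few lines; the 3x3 board here is a cut-down stand-in for the game's 10x10 `SEA`:

```python
import copy

SEA = [["O"] * 3 for _ in range(3)]

# Aliasing: both names refer to the same nested lists.
alias = SEA
alias[0][0] = "X"
assert SEA[0][0] == "X"  # the "constant" changed too

# Independent board: deep-copy first (or copy each sublist).
SEA = [["O"] * 3 for _ in range(3)]
board = copy.deepcopy(SEA)       # or: [row[:] for row in SEA]
board[0][0] = "X"
assert SEA[0][0] == "O"          # SEA is untouched
print(SEA[0], board[0])
```

Note that `SEA[:]` alone is not enough here: it copies the outer list but keeps references to the same inner row lists, which is why the deeper copy matters for a board of lists.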
I want to import a .sql file into Oracle from the command prompt, so please tell me the command to import a .sql file in Oracle. In MySQL I am using it like this:

```
mysql -u root -p -h localhost softpoint < D:\Nisarg\NEult\softpoint.sql
```

but in Oracle I don't know how to use this utility, so please help me. Thanks
In Oracle, to run a script (*.sql) you use [SQL](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_five.htm#sthref434)*PLUS command line utility. ``` c:\> sqlplus user_name/user_password@connect_string @path\your_sql.sql ```
Try to use `sqlplus`: ``` sqlplus user/password@connect @/Nisarg/NEult/softpoint.sql ``` [More information.](http://www.techonthenet.com/oracle/questions/script.php)
Oracle Database import .sql file by command prompt
[ "", "sql", "oracle", "oracle11g", "oracle10g", "oracle-sqldeveloper", "" ]
I have a string like `'$200,000,000'` or `'Yan300,000,000'` I want to split the currency and number, and output a tuple `('$', '200000000')`, without `','` in the number string. Currently I'm using the following script, which is working: ``` def splitCurrency(cur_str): cuttingIdx = 0 for char in cur_str: try: int(char) break except ValueError: cuttingIdx = cuttingIdx + 1 return (cur_str[0:cuttingIdx].strip(), cur_str[cuttingIdx:len(cur_str)].replace(',','')) ``` I want to avoid using for-loop and try-except for performance and readability. Any suggestions?
``` >>> import re >>> string = 'YAN300,000,000' >>> match = re.search(r'([\D]+)([\d,]+)', string) >>> output = (match.group(1), match.group(2).replace(',','')) >>> output ('YAN', '300000000') ```
``` >>> filter(str.isdigit, s) '200000000' >>> filter(lambda x: not x.isdigit() and x != ',', s) '$' >>> >>> (filter(lambda x: not x.isdigit() and x != ',' ,s), filter(str.isdigit, s)) ('$', '200000000') >>> ```
How to efficiently split currency sign and number in one string without using a for loop or try statement
[ "", "python", "algorithm", "performance", "" ]
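Both answers to the currency question boil down to a single regex pass; a hedged sketch (the function name is invented, and it assumes the symbol always precedes the digits as in the examples):

```python
import re

def split_currency(cur_str):
    """Split '$200,000,000' into ('$', '200000000')."""
    match = re.match(r"(\D+)([\d,]+)", cur_str)
    symbol = match.group(1).strip()
    number = match.group(2).replace(",", "")
    return symbol, number

print(split_currency("$200,000,000"))    # ('$', '200000000')
print(split_currency("Yan300,000,000"))  # ('Yan', '300000000')
```

`\D+` greedily consumes the non-digit prefix, so no loop or try/except is needed to locate the cutting index.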
I'd like to know the simplest way to bind keys in Python. For example, the default Python console window appears and waits; then, in pseudocode ->

```
if key "Y" is pressed:
    print ("Yes")
if key "N" is pressed:
    print ("No")
```

I would like to achieve this **without** the use of any **modules** not included with Python, just pure Python. Any and all help is greatly appreciated.

Python 2.7 or 3.x, Windows 7

**Note:** `raw_input()` requires the user to hit Enter and is therefore not key binding
From <http://code.activestate.com/recipes/134892/> (although a bit simplified): ``` class _Getch: """Gets a single character from standard input. Does not echo to the screen.""" def __init__(self): self.impl = _GetchUnix() def __call__(self): return self.impl() class _GetchUnix: def __init__(self): import tty, sys def __call__(self): import sys, tty, termios fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(sys.stdin.fileno()) ch = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch getch = _Getch() ``` Then you can do: ``` >>> getch() 'Y' # Here I typed Y ``` This is great as it doesn't need any 3rd party modules.
Well, the way to do it with Tkinter, which is a module included in the Python install, is this:

```
from tkinter import *

window = Tk()
window.geometry("600x400")
window.title("Test")

def test(event):
    print("Hi")

window.bind("a", test)
window.mainloop()
```
Python key binding/capture
[ "", "python", "python-2.7", "python-3.x", "keyboard", "" ]
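Whatever low-level reader is used (the `getch` recipe above on Unix, or `msvcrt.getch` on Windows), the binding itself can be a plain dispatch dictionary. This is only a sketch: `make_dispatcher` is a made-up helper, and the loop below fakes the key source instead of reading the keyboard:

```python
def make_dispatcher(bindings, default=None):
    """Map single characters to zero-argument callables."""
    def dispatch(key):
        action = bindings.get(key.lower(), default)
        return action() if action else None
    return dispatch

dispatch = make_dispatcher({
    "y": lambda: "Yes",
    "n": lambda: "No",
})

# In the real program each key would come from getch() / msvcrt.getch():
for fake_key in ("Y", "n"):
    print(dispatch(fake_key))  # Yes, then No
```

Keeping the bindings in a dict means new keys are added as data, without touching the read loop.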
I am working through a few SQL queries myself. One question says:

> Find the largest country (by area) in each continent, show the continent, the name and the area:

```
SELECT continent, name, area FROM world x
  WHERE area >= ALL
    (SELECT area FROM world y
        WHERE y.continent=x.continent
          AND area>0)
```

I don't understand what is meant by `world x` and `world y`. Could anyone please explain that?
x and y are aliases. They allow you to identify which table is which in "WHERE y.continent=x.continent".
x and y are used as aliases (a short alternative name for reference purposes) of the table. This allows the use of the world table in two different scopes.
Regarding use of a query
[ "", "sql", "" ]
I've got a numpy array with shape 1001, 2663. Array contains values of 12 and 127, now I would like to count the number of a specific value, in this case 12. So I try using bincount, but that's doing strange. See what I get: ``` >>> x.shape (1001, 2663) >>> np.bincount(x) Traceback (most recent call last): File "<interactive input>", line 1, in <module> ValueError: object too deep for desired array >>> y = np.reshape(x, 2665663) >>> y.shape (2665663,) >>> np.bincount(y) array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 529750, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2135913]) >>> np.nonzero(np.bincount(y)) (array([ 12, 127]),) ``` The value 529750 is probably the frequency of the values 12 and 2135913 is probably the frequency of value 127, but it won't tell me this. Also the shape of the matrix is strange. If I try sum with where also wont give me right value: ``` >>> np.sum(np.where(x==12)) 907804649 ``` I'm out of options: dear prestigious uses of SO, how to get the frequency of a specific value in a numpy matrix? **EDIT** Smaller example. But still get results that I don't really understand. Why the zero? ``` >>> m = np.array([[1,1,2],[2,1,1],[2,1,2]]) >>> np.bincount(m) Traceback (most recent call last): File "<interactive input>", line 1, in <module> ValueError: object too deep for desired array >>> n = np.reshape(m, 9) >>> n array([1, 1, 2, 2, 1, 1, 2, 1, 2]) >>> np.bincount(n) array([0, 5, 4]) ``` I think I get it. The zero in [0,5,4] means there are no 0 values in matrix. So in the my real situation, the 529750 is the 12th value in the matrix, matrix value 0-11 are all '0', than get's lots of 0 values (values 13-126) and then value 127 gives frequency of 2135913. 
But how do I get the frequency of a specific number in a numpy array as a single value?
You want the number of occurrences of a single value `number` in your `data` array? Try

```
np.bincount(data)[number]
```
`bincount` returns an array where the frequency of `x` is `bincount[x]`. It requires a flat input, so you can use `np.bincount(array.ravel())` to handle cases when `array` might not be flat.

If your array only has a few unique values, i.e. 2 and 127, it might be worth reducing the array using `unique` before calling `bincount`, i.e.:

```
import numpy as np

def frequency(array):
    values, array = np.unique(array, return_inverse=True)
    return values, np.bincount(array.ravel())

array = np.array([[2, 2, 2],
                  [127, 127, 127],
                  [2, 2, 2]])
frequency(array)
# array([  2, 127]), array([6, 3])
```

Lastly you can do

```
np.sum(array == 12)
```

Notice the difference between `array == 12` and `np.where(array == 12)`:

```
array = np.array([12, 0, 0, 12])
array == 12
# array([ True, False, False,  True], dtype=bool)
np.where(array == 12)
# (array([0, 3]),)
```

Clearly summing over the second is not going to give you what you want.
How to get the frequency of a specific value in a numpy array
[ "", "python", "numpy", "" ]
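For the counting question above, the pure-Python equivalent of `np.sum(array == value)` works on the flattened data and needs no NumPy at all; a stdlib sketch using the small example matrix from the question:

```python
from collections import Counter
from itertools import chain

matrix = [[1, 1, 2],
          [2, 1, 1],
          [2, 1, 2]]

flat = list(chain.from_iterable(matrix))
counts = Counter(flat)             # like np.bincount, but sparse
print(counts[1], counts[2])        # 5 4

# Frequency of one specific value, np.sum(matrix == v) style:
print(sum(x == 2 for x in flat))   # 4
```

`Counter` avoids bincount's quirk of allocating a slot for every integer up to the maximum value (0..126 in the original 12/127 data).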
I have a table like so ``` ID Node ParentID 1 A 0 2 B 1 3 C 1 4 D 2 5 E 2 6 F 3 7 G 3 8 H 3 9 I 4 10 J 4 11 K 10 12 L 11 ``` I need a query to generate a 'level' field that shows how many levels deep a particular node is. Example below ``` ID Node ParentID Level 1 A 0 1 2 B 1 2 3 C 1 2 4 D 2 3 5 E 2 3 6 F 3 4 7 G 3 4 8 H 3 4 9 I 4 5 10 J 4 5 11 K 10 6 12 L 11 7 ```
``` Select Id, Node, ParentID, Dense_Rank() Over(Order by ParentID) as Level from Table_Name ``` [**SQL Fiddle Demo**](http://sqlfiddle.com/#!3/88877c/1)
Something like: ``` ;with tree (ID, ParentID, Level) as ( select ID, ParentID, 1 from TableName where ParentID = 0 union all select t.ID, t.ParentID, 1 + tree.Level from Tree join TableName t on t.ParentID = Tree.ID ) select ID, Level from Tree ```
Need a query to insert 'level' into an adjacency list
[ "", "sql", "sql-server", "" ]
I'm maintaining someone else's SQL at the moment, and I came across this in a Stored Procedure: ``` SELECT Location.ID, Location.Location, COUNT(Person.ID) As CountAdultMales FROM Transactions INNER JOIN Location ON Transactions.FKLocationID = Location.ID INNER JOIN Person ON Transactions.FKPersonID = Person.ID AND DATEDIFF(YEAR, Person.DateOfBirth, GETDATE()) >= 18 AND Person.Gender = 1 WHERE ((Transactions.Deleted = 0) AND (Person.Deleted = 0) AND (Location.Deleted = 0)) ``` Is there any difference between the above and this (which is how I would write it) ``` SELECT Location.ID, Location.Location, COUNT(Person.ID) As CountAdultMales FROM Transactions INNER JOIN Location ON Transactions.FKLocationID = Location.ID INNER JOIN Person ON Transactions.FKPersonID = Person.ID WHERE ((Transactions.Deleted = 0) AND (Person.Deleted = 0) AND (Location.Deleted = 0) AND (DATEDIFF(YEAR, Person.DateOfBirth, GETDATE()) >= 18) AND (Person.Gender = 1)) ``` Personally, I find putting the conditions in the WHERE clause most readable, but I wondered if there were performance or other reasons to "conditionalise" (if there is such a word) the JOIN Thanks
Performance is the same in your examples; however, you can tune it this way:

```
SELECT Location.ID, Location.Location, COUNT(Person.ID) As CountAdultMales
FROM Transactions
INNER JOIN Location ON Transactions.FKLocationID = Location.ID
INNER JOIN Person ON Transactions.FKPersonID = Person.ID
WHERE Transactions.Deleted = 0
AND Person.Deleted = 0
AND Location.Deleted = 0
AND Person.DateOfBirth < dateadd(year, datediff(year, 0, getdate())-17, 0)
AND Person.Gender = 1
```

This way you are not running a calculation on every row to extract the year; instead you are simply comparing the date of birth with a static value, which is much faster. This query selects rows where people turn 18 (or are older) before the current year runs out.
With an `inner join` this won't really make much of a difference, as SQL has a query optimiser which will do its best to execute the query in the most efficient way (though it is not perfect). If this were an `outer join` it could make a difference, so it's something to be aware of.
Is there any performance or functional difference between these two SQL statements?
[ "", "sql", "sql-server", "" ]
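The inner-join equivalence discussed above is easy to check empirically. A small sketch via the standard library's `sqlite3` (table and column names are invented stand-ins for the question's schema) shows both forms returning identical rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER, gender INTEGER, deleted INTEGER);
    CREATE TABLE tx (person_id INTEGER, deleted INTEGER);
    INSERT INTO person VALUES (1, 1, 0), (2, 0, 0), (3, 1, 1);
    INSERT INTO tx VALUES (1, 0), (2, 0), (3, 0);
""")

in_join = conn.execute("""
    SELECT p.id FROM tx t
    JOIN person p ON t.person_id = p.id AND p.gender = 1
    WHERE t.deleted = 0 AND p.deleted = 0
""").fetchall()

in_where = conn.execute("""
    SELECT p.id FROM tx t
    JOIN person p ON t.person_id = p.id
    WHERE t.deleted = 0 AND p.deleted = 0 AND p.gender = 1
""").fetchall()

print(in_join, in_where)  # identical for an INNER join
```

With an outer join the two placements would diverge, because an ON condition filters before NULL-padding while a WHERE condition filters after it.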
I have a csv cell value, `row[13]` that contains the following text: `POINT (174.29635 -41.60557)` I need to strip out the text and brackets, and convert the two numbers to float, each assigned to a var:- ``` geo_pair = row[13].replace("POINT (", "") geo_pair = geo_pair.replace(")", "") self.longitude, self.latitude = geo_pair.split(" ") self.longitude, self.latitude = float(self.longitude), float(self.latitude) ``` I'm pretty sure there is a cleaner way of doing this, and I wondered what someone who knows what they are doing would do!
Since the format is fixed and consists of prefix, data, and suffix, I would use slicing to remove prefix and suffix: `map(float, s[7:-1].split())`. This is clear and simple at the same time: ``` >>> s = "POINT (174.29635 -41.60557)" >>> longitude, latitude = map(float, s[7:-1].split()) ``` This also works upon sign changes or when the number of decimal places changes. And by the way, as long as you are not parsing tons of input, it does not really matter which way to chose. It's mostly a matter of taste and most likely not performance-critical in your case. So don't spend too much time with this :-).
Use the regex? ``` >>> map(float, re.search(r'\((.*)\)', s).group(1).split()) [174.29635, -41.60557] >>> ```
most Pythonic way of achieveing these 4 lines?
[ "", "python", "text", "" ]
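Combining the two answers above, a small parser for the WKT-style cell can be written either with slicing or with a regex; this sketch uses the regex form (`parse_point` is an invented name, and it assumes exactly two numbers inside the parentheses):

```python
import re

def parse_point(cell):
    """Parse a 'POINT (lon lat)' cell into two floats."""
    inside = re.search(r"\(([^)]*)\)", cell).group(1)
    lon, lat = map(float, inside.split())
    return lon, lat

lon, lat = parse_point("POINT (174.29635 -41.60557)")
print(lon, lat)  # 174.29635 -41.60557
```

Unlike the fixed `s[7:-1]` slice, the regex keeps working if the prefix length or whitespace changes.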
I have two tables - 'Users' and 'Supervision' For this example, my users table is very simple:- ``` Users ===== ID (PK) UserName ``` Some users manage other users, so I've built a second table 'Supervision' to manage this:- ``` Supervision =========== UserID SuperID - this is the ID of the staff member that the user supervises. ``` This table is used to join the Users table to itself to identify a particular users supervisor. It might be that a user has more than one supervisor, so this table works perfectly to this end. Here's my sample data in 'Users':- ``` userID userName 1 Bob 2 Margaret 3 Amy 4 Emma 5 Carol 6 Albert 7 Robert 8 Richard 9 Harry 10 Arthur ``` And my data in 'Supervision':- ``` userID superID 1 2 1 3 2 4 2 5 3 4 3 5 6 1 6 7 7 8 7 9 9 10 ``` If I want to see who directly reports to Bob, writing an SQL query is straightforward, and tells me that Margaret and Amy are his direct reports. What I want to do however is to write a query that shows everybody who comes under Bob, so it would need to look at Bobs direct reports, and then their direct reports, and so on - it would give Margaret, Amy, Emma and Carol as the result in this case. I'm assuming this requires some kind of recursion but I'm completely stuck..
You should use [recursive CTE](http://msdn.microsoft.com/en-us/library/ms186243%28v=sql.105%29.aspx): ``` WITH RCTE AS ( SELECT * FROM dbo.Supervision WHERE UserID = 1 UNION ALL SELECT s.* FROM dbo.Supervision s INNER JOIN RCTE r ON s.userID = r.superID ) SELECT DISTINCT u.userID, u.userName FROM RCTE r LEFT JOIN dbo.Users u ON r.superID = u.userID ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!6/c100f/1)**
Sounds to me like you need a Recursive CTE. This article serves as a primer, and includes a fairly similar example to the one you have: <http://blog.sqlauthority.com/2012/04/24/sql-server-introduction-to-hierarchical-query-using-a-recursive-cte-a-primer/> Hope it helps.
SQL Query to display heirarchical data
[ "", "sql", "t-sql", "recursive-query", "" ]
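The accepted recursive CTE above translates almost verbatim to other engines. Here it is run against SQLite via the standard library (SQLite 3.8.3+ supports `WITH RECURSIVE`; table names are lower-cased, and only the subtree relevant to Bob is loaded from the question's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE supervision (user_id INTEGER, super_id INTEGER);
    INSERT INTO users VALUES (1,'Bob'),(2,'Margaret'),(3,'Amy'),(4,'Emma'),(5,'Carol');
    INSERT INTO supervision VALUES (1,2),(1,3),(2,4),(2,5),(3,4),(3,5);
""")

rows = conn.execute("""
    WITH RECURSIVE reports(super_id) AS (
        SELECT super_id FROM supervision WHERE user_id = 1   -- Bob's direct reports
        UNION                                                -- UNION also de-duplicates
        SELECT s.super_id FROM supervision s
        JOIN reports r ON s.user_id = r.super_id             -- reports of reports, etc.
    )
    SELECT DISTINCT u.name FROM reports r
    JOIN users u ON u.id = r.super_id
    ORDER BY u.name
""").fetchall()

print([name for (name,) in rows])  # ['Amy', 'Carol', 'Emma', 'Margaret']
```

Using `UNION` instead of `UNION ALL` in the recursive part also guards against infinite loops if the supervision data ever contained a cycle.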
`class` `Foo`: ``` class Foo(): def __init__(self): del self.foo def foo(self): pass ``` when instantiated `raise`s `AttributeError: Foo instance has no attribute 'foo'`. Deriving `Foo` from `object` (`class Foo(object)`) `raise`s `AttributeError: 'Foo' object attribute 'foo' is read-only`. Using `delattr(self, "foo")` doesn't help either. Doing the following doesn't delete the attribute from `Foo`'s instance either: ``` class Foo(): def __init__(self): self.foo = None del self.foo def foo(self): pass >>> Foo().foo <bound method Foo.foo of <__main__.Foo instance at 0x00000000042EA248>> ``` **Objective:** I basically want to totally remove/or make inaccessible certain attributes from the instance of a given `class` provide some conditions are met. Any suggestions?
```
class Foo(object):
    def __init__(self):
        self.foo = None  # This masks the foo method on the instance `self`

    def foo(self):
        pass

foo = Foo()
print(foo.foo)    # None
print(foo.foo())  # TypeError: 'NoneType' object is not callable
```

However, are you sure you want this design? Perhaps instead define subclasses of `Foo`, and select the right class *before* instantiation...

```
class Base(object):
    def common_method(self):
        pass

class Foo(Base):
    def foo(self):
        pass

class AltFoo(Base):
    def alt(self):
        pass

def make_foo():
    if somecondition:
        return Foo()
    else:
        return AltFoo()
```
The problem is that what you're trying to delete is a method, and methods aren't stored on the instance, they're stored on the class. So your instance `Foo()` doesn't really have an attribute `foo`. When you do `Foo().foo`, it finds the `foo` on `Foo`. So there's nothing to delete on `self`, because the attribute isn't on `self`. If the class defines a method (or any class attribute) `foo`, there's no way to delete it from an individual instance. The best you could do would be to use some `__getattribute__` hackery to make it raise an exception. However, this would introduce a performance hit to every attribute access on every `Foo` instance. Why do you feel the need to do this?
How to delete class attributes defined outside the __init__ from within __init__?
[ "", "python", "attributes", "" ]
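The `__getattribute__` route mentioned in the rejected answer above can be sketched like this — hiding named class attributes per instance when a condition is met. All names here (`restricted`, `_blocked`) are illustrative, not from the question:

```python
class Foo(object):
    def __init__(self, restricted=False):
        # Per-instance set of attribute names to hide.
        self._blocked = {"foo"} if restricted else set()

    def __getattribute__(self, name):
        # object.__getattribute__ avoids infinite recursion here.
        if name != "_blocked" and name in object.__getattribute__(self, "_blocked"):
            raise AttributeError(name)
        return object.__getattribute__(self, name)

    def foo(self):
        return "visible"

print(Foo().foo())  # 'visible'
try:
    Foo(restricted=True).foo
except AttributeError as exc:
    print("blocked:", exc)
```

As the answer warns, this adds overhead to every attribute lookup on every instance, which is one more reason to prefer the subclass-selection design.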
Let's say I have a string of the following form: ``` "000000111100011100001011000000001111" ``` and I want to create a list containing the lengths of the 1-streaks: ``` [4, 3, 1, 2, 4] ``` Is there a nice one-liner for this?
If you don't mind the `from itertools import groupby`... ``` >>> from itertools import groupby >>> [len(list(g)) for k, g in groupby(s) if k == '1'] [4, 3, 1, 2, 4] ```
Can be done with regex, though not quite as elegant as the itertools solutions ``` answer = [len(item) for item in filter(None, re.split(r"[^1]+", test_string))] ``` Or, more elegant: ``` answer = [len(item) for item in re.findall(r"1+", test_string)] ``` and more elegant still (credits to Jon): ``` answer = map(len, re.findall("1+", test_string)) ```
Replacing a string with counts of streaks
[ "", "python", "string", "list", "" ]
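The accepted `groupby` one-liner above generalises to a small reusable function (the name is invented), still standard library only:

```python
from itertools import groupby

def streak_lengths(bits, target="1"):
    """Lengths of consecutive runs of `target` in a string."""
    return [sum(1 for _ in group) for key, group in groupby(bits) if key == target]

print(streak_lengths("000000111100011100001011000000001111"))  # [4, 3, 1, 2, 4]
```

`groupby` yields one `(key, group)` pair per run of equal characters, so filtering on `key == target` keeps only the 1-streaks; `sum(1 for _ in group)` counts a run without materialising it as a list.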
I am working in Access and utilizing VBA to build queries in which I have variable table names, so I can't use the Access query wizard. I have the following 2 tables:

```
tblKabelInfo
```

And a table with a name that varies depending on something else in my program:

```
tblName1 as String
```

The tables look like this:

```
tblKabelInfo:
+-------------+------+-----------+-----------+
| Kabelnummer | data | more data | even more |
+-------------+------+-----------+-----------+
| 1           | x    | x         | x         |
+-------------+------+-----------+-----------+
| 2           | x    | x         | x         |
+-------------+------+-----------+-----------+
| 3           | x    | x         | x         |
+-------------+------+-----------+-----------+
| 4           | x    | x         | x         |
+-------------+------+-----------+-----------+

tblName1:
+---------------------------------+-----+---------+
| Filename                        | bla | databla |
+---------------------------------+-----+---------+
| \850\850nm_Lessenaar 1_0001.SOR | x   | x       |
+---------------------------------+-----+---------+
| \850\850nm_Lessenaar 1_0002.SOR | x   | x       |
+---------------------------------+-----+---------+
| \850\850nm_Lessenaar 1_0003.SOR | x   | x       |
+---------------------------------+-----+---------+
| \850\850nm_Lessenaar 1_0004.SOR | x   | x       |
+---------------------------------+-----+---------+
```

I know that both tables are of the same size (so if the table "tblName1" goes up to 0234.SOR, I know that Kabelnummer from "tblKabelInfo" also goes up to 234).

I would like to make a query that makes a new table that looks something like this:

```
NewTable:
+---------------------------------+-------------+-----+---------+-----------+-----------+
| Filename                        | KabelNummer | bla | databla | More Data | Even more |
+---------------------------------+-------------+-----+---------+-----------+-----------+
| \850\850nm_Lessenaar 1_0001.SOR | 1           | x   | x       | x         | x         |
+---------------------------------+-------------+-----+---------+-----------+-----------+
| \850\850nm_Lessenaar 1_0002.SOR | 2           | x   | x       | x         | x         |
+---------------------------------+-------------+-----+---------+-----------+-----------+
| \850\850nm_Lessenaar 1_0003.SOR | 3           | x   | x       | x         | x         |
+---------------------------------+-------------+-----+---------+-----------+-----------+
| \850\850nm_Lessenaar 1_0004.SOR | 4           | x   | x       | x         | x         |
+---------------------------------+-------------+-----+---------+-----------+-----------+
```

I would like to have the two tables in one table; the common factor is that the number at the end of "Filename" should be the same as "KabelNummer".
It seems the basic challenge here is to identify the digits in `tblName1.Filename` which can then be used to join with `tblKabelInfo.Kabelnummer`. If those digits are always the first 4 of the last 8 characters of the string, you can use `Right` and `Left`, which are compatible with Access SQL, to get them easily. Here is a session from the Immediate window. ``` Filename = "\850\850nm_Lessenaar 1_0001.SOR" ? Right(Filename, 8) 0001.SOR ? Left(Right(Filename, 8), 4) 0001 ``` If you need to convert those characters to a numeric value, you can use the `Val` function. ``` ? Val(Left(Right(Filename, 8), 4)) 1 ``` However, if the `Filename` values are more variable, not always ending with a period and 3 more characters, the task will be more challenging. ``` Filename = "\850\850nm_Lessenaar 1_0001.ABCDEF" ? InstrRev(Filename, "_") 23 ? InstrRev(Filename, ".") 28 ? Mid(Filename, InstrRev(Filename, "_") + 1, _ (InstrRev(Filename, ".") - InstrRev(Filename, "_")) - 1) 0001 ? Val(Mid(Filename, InstrRev(Filename, "_") + 1, _ (InstrRev(Filename, ".") - InstrRev(Filename, "_")) - 1)) 1 ``` Once you work out the appropriate mix of functions to get what you need, you can use them in your Access query. Here is a query using both of those approaches. It runs without error in Access 2007 with your sample data in *tblName1*. ``` SELECT t.Filename, Val(Left(Right(Filename, 8), 4)) AS Kabelnummer1, Val( Mid( Filename, InstrRev(Filename, "_") + 1, (InstrRev(Filename, ".") - InstrRev(Filename, "_")) - 1 ) ) AS Kabelnummer2 FROM tblName1 AS t; ```
Have a try with this:

```
INSERT INTO tableNew
SELECT B.Filename, A.Kabelnummer, B.bla, B.databla, A.data, A.[more data], A.[even more]
FROM tblKabelInfo A
INNER JOIN tblName1 B
ON A.Kabelnummer=CAST(RIGHT(SUBSTRING(B.Filename,1,LEN(SUBSTRING(B.Filename, 0, PATINDEX('%.%', B.Filename)) + '.') - 1),4) AS INT)
```

Updated to handle up to the 4 digits in your filename, `0001` to `9999`.

**Your Query (EDIT)**

```
INSERT INTO tableNew
SELECT B.[Filename], A.[Vezelnummer], B.[tblVerlies1_Verlies], B.[tblVerlies2_Verlies],
       A.[KabelNaam], A.[Van], A.[Naar], A.[VezelLengte], A.[TypeKabel],
       A.[TypeConnector], A.[AantalConnectoren], A.[AantalLassen]
FROM tblKabelInfo A
INNER JOIN tbl_GL_850 B
ON A.Vezelnummer=CAST(RIGHT(SUBSTRING(B.[Filename],1,LEN(SUBSTRING(B.[Filename], 0, PATINDEX('%.%',B.[Filename])) + '.') - 1),4) AS INT)
```
Creating a table with data from 2 other tables
[ "", "sql", "ms-access", "vba", "" ]
I am using SQL Server 2008 and I need help in writing a query that compares two consecutive records.

```
select recordDate from SiteHistory
where siteId = 556145 and isRecent = 0 and isRunning = 1
order by recordDate DESC
```

This gives me around 2000 rows that look like this:

```
recordDate
-----------------------
2013-05-08 20:04:23.357
2013-05-08 19:45:26.417
2013-05-08 19:30:24.810
2013-05-08 19:17:22.843
2013-05-08 19:00:16.017
2013-05-08 18:44:14.230
.....
.....
```

Now I need to compare the date of each row with the next row and count how many times the difference between two consecutive dates is greater than 15 minutes. This is what I could come up with so far:

```
;with temp as(
select row_number()over(order by recordDate DESC)as 'row', recordDate
from SiteHistory
where siteId = 556145 and isRecent =0 and isRunning=1
)
select COUNT(*) as Count from temp t1
INNER JOIN temp t2 ON t2.row = t1.row+1
where DATEDIFF(mm,t1.recordDate,t2.recordDate)>15
```

However, this doesn't give me the desired result. Please let me know how I can correct this to suit my requirements.
The logic of your query is correct; the only problem is that it computes the date difference in months. Change it to minutes with

```
datediff(minute, t1.RecordDate, t2.RecordDate) > 15
```

Query:

```
;with temp as(
select row_number()over(order by recordDate DESC)as 'row', recordDate
from SiteHistory
where siteId = 556145 and isRecent = 0 and isRunning = 1
)
select COUNT(*) as Count from temp t1
INNER JOIN temp t2 ON t2.row = t1.row+1
where DATEDIFF(minute, t1.recordDate, t2.recordDate) > 15
```
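The gap-counting logic can be sanity-checked outside SQL with the standard library. Note one subtlety: T-SQL's `DATEDIFF(minute, …)` counts minute-boundary crossings, whereas the sketch below compares true elapsed time, so the two can differ by one at exact boundaries. The timestamps are the sample rows from the question (milliseconds dropped for brevity):

```python
from datetime import datetime, timedelta

# Timestamps from the question, newest first
record_dates = [
    datetime(2013, 5, 8, 20, 4, 23),
    datetime(2013, 5, 8, 19, 45, 26),
    datetime(2013, 5, 8, 19, 30, 24),
    datetime(2013, 5, 8, 19, 17, 22),
    datetime(2013, 5, 8, 19, 0, 16),
    datetime(2013, 5, 8, 18, 44, 14),
]

# Pair each row with the next one (the self-join on row+1) and count big gaps
gaps = [a - b for a, b in zip(record_dates, record_dates[1:])]
count = sum(gap > timedelta(minutes=15) for gap in gaps)
print(count)  # prints 4
```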
"mm" gives you the date difference in *months* ``` where DATEDIFF(mm,t1.recordDate,t2.recordDate)>15 ``` Replace "mm" with "minute" ``` where DATEDIFF(minute,t1.recordDate,t2.recordDate)>15 ```
Trying to compare two consecutive records
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have written the following script to concatenate all the files in the directory into one single file. Can this be optimized, in terms of 1. idiomatic python 2. time Here is the snippet: ``` import time, glob outfilename = 'all_' + str((int(time.time()))) + ".txt" filenames = glob.glob('*.txt') with open(outfilename, 'wb') as outfile: for fname in filenames: with open(fname, 'r') as readfile: infile = readfile.read() for line in infile: outfile.write(line) outfile.write("\n\n") ```
Use [`shutil.copyfileobj`](http://docs.python.org/2/library/shutil.html#shutil.copyfileobj) to copy data: ``` import shutil with open(outfilename, 'wb') as outfile: for filename in glob.glob('*.txt'): if filename == outfilename: # don't want to copy the output into the output continue with open(filename, 'rb') as readfile: shutil.copyfileobj(readfile, outfile) ``` `shutil` reads from the `readfile` object in chunks, writing them to the `outfile` fileobject directly. Do not use `readline()` or a iteration buffer, since you do not need the overhead of finding line endings. Use the same mode for both reading and writing; this is especially important when using Python 3; I've used binary mode for both here.
Using Python 2.7, I did some "benchmark" testing of

```
outfile.write(infile.read())
```

vs

```
shutil.copyfileobj(readfile, outfile)
```

I iterated over 20 .txt files ranging in size from 63 MB to 313 MB, with a combined file size of ~2.6 GB. In both methods, normal read mode performed better than binary read mode, and shutil.copyfileobj was generally faster than outfile.write. When comparing the worst combination (outfile.write, binary mode) with the best combination (shutil.copyfileobj, normal read mode), the difference was quite significant:

```
outfile.write, binary mode: 43 seconds, on average.

shutil.copyfileobj, normal mode: 27 seconds, on average.
```

The outfile had a final size of 2620 MB in normal read mode vs 2578 MB in binary read mode.
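For reference, here is a fully self-contained version of the accepted `copyfileobj` approach, using a throwaway temporary directory so it can be run anywhere (the filenames and contents are invented for the demo, not from the original question):

```python
import glob
import os
import shutil
import tempfile

# Build a scratch directory with two small input files
workdir = tempfile.mkdtemp()
for name, text in [("a.txt", "alpha\n"), ("b.txt", "beta\n")]:
    with open(os.path.join(workdir, name), "w") as f:
        f.write(text)

outfilename = os.path.join(workdir, "all.txt")
with open(outfilename, "wb") as outfile:
    for filename in sorted(glob.glob(os.path.join(workdir, "*.txt"))):
        if filename == outfilename:  # don't copy the output into itself
            continue
        with open(filename, "rb") as readfile:
            shutil.copyfileobj(readfile, outfile)
```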
python script to concatenate all the files in the directory into one file
[ "", "python", "file", "copy", "" ]
I have a list with 2,500 items. I want to print the first 50 items in a line and the next 50 items in the next line. So there will be a total of 50 lines with 50 items in each line.

```
myList = ['item1', item2,..., 'item2500']

line1 = item1, item2, ..., item50
line2 = item51, item52,...., item100
.
.
line 50 = item2451, item2452,...., item 2500
```

I tried some while loops but they didn't quite work out. Is there a built-in function or an easier way to do this? Thank you.
If you want to just print the list with next 50 items on the next line: ``` for i in range(0, len(myList), 50): print myList[i:i+50] ``` If you want to make a list of lists with 50-len chunks: ``` [myList[i:i+50] for i in range(0, len(myList), 50)] ```
Same thing really, but it looks nicer, and the chunks function is reusable as a generator, I think.

```
def chunks_of_n(l,n):
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def show_my_list_in_chunks(l):
    for chunk in chunks_of_n(l,50):
        print ', '.join(chunk)
```
Print item from list in chunks of 50 in Python
[ "", "python", "list", "" ]
I had to write a script that generates some fixture file with increasing fake MAC addresses. To do that, I decided to have some fun and try to make it as compact as I could. I ended up with: ``` def mac_address(i): return ':'.join(['%02x'] * 6) % tuple([(i >> (8 * j)) & 0xFF for j in reversed(range(6))]) ``` Which actually works pretty well. Obviously, writing this that way is the best way to get slapped by the future person that must work on it, but I did it for the fun (and wrote a more readable version in comment). But now I'm curious, can you think of any more compact way of writing that ? (That is without removing the spaces).
What about ``` ':'.join('%02x' % (i>>(8*j) & 0xFF) for j in reversed(range(6))) ``` That is more compact *and* easier to understand.
``` def mac_address(i): return ':'.join(a+b for a, b in zip(*[iter('{:012x}'.format(i))]*2)) ``` The first step is to get a hex string zero filled so that it is exactly 12 digits, which is what `'{:012x}'.format(i)` does. Then we break that string up in two-character chunks using the method of grouping items from the [`zip()`](http://docs.python.org/2/library/functions.html#zip) documentation, and join the result on `':'`.
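Both formulations produce identical strings; here is a quick cross-check (the function names `mac_v1`/`mac_v2` are just labels for the two answers above):

```python
def mac_v1(i):
    # generator-expression version
    return ':'.join('%02x' % (i >> (8 * j) & 0xFF) for j in reversed(range(6)))

def mac_v2(i):
    # format-then-pair version
    return ':'.join(a + b for a, b in zip(*[iter('{:012x}'.format(i))] * 2))

for i in (0, 1, 0xAABBCCDDEEFF):
    assert mac_v1(i) == mac_v2(i)

print(mac_v1(1))  # 00:00:00:00:00:01
```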
Is there any more compact way of writing that statement?
[ "", "python", "maintainability", "" ]
I have the following table in my sql server 2008 database: ``` MsgID TrackerId MsgContent 1 123 red //MsgContent corresponding to colour 2 123 E2120 //MsgContent corresponding to model 5 123 sam //MsgContent corresponding to password 1 111 orange //MsgContent corresponding to colour 2 111 M3420 //MsgContent corresponding to model 5 111 pam //MsgContent corresponding to password ``` I want a single query whose result is as follows: ``` TrackerId Colour Model Password 123 red E2120 sam 111 orange M3420 pam ``` So, how should I go about solving this problem? Thanks in advance.
Here's a version using `PIVOT`. My only problem with this is the unnecessary aggregate function. I don't know your table definition, but if you have only the columns MsgID, TrackerID, MsgContent, then the CTE that selects the grouping, spreading, and aggregation columns to pivot is superfluous. If you do have more columns, then keep the CTE, otherwise you will get null values in your results. ``` SELECT TrackerID, [1] [Colour], [2] [Model], [5] [Password] FROM ( SELECT MsgID, -- spreading column TrackerID, -- grouping column MsgContent -- aggregation column FROM Trackers ) p PIVOT ( MAX(MsgContent) FOR MsgID IN( [1], [2], [5] ) ) AS pvt ``` ## [SQLFiddle](http://sqlfiddle.com/#!3/59b0a/5/0) You can also use a select for each type of value. ``` SELECT DISTINCT TrackerID, (SELECT MsgContent FROM trackers t2 WHERE t2.MsgID = 1 AND t2.TrackerID = t1.TrackerID) [Colour], (SELECT MsgContent FROM trackers t2 WHERE t2.MsgID = 2 AND t2.TrackerID = t1.TrackerID) [Model], (SELECT MsgContent FROM trackers t2 WHERE t2.MsgID = 5 AND t2.TrackerID = t1.TrackerID) [Password] FROM Trackers t1 ``` ## [SQLFiddle](http://sqlfiddle.com/#!3/59b0a/1/0)
You can do this by joining the table to itself based on `TrackerID` and filter by `MsgID`. Example: ``` SELECT Colour.TrackerId, Colour.MsgContent AS Colour, Model.MsgContent AS Model, Password.MsgContent AS Password FROM MyTable Colour JOIN MyTable Model ON Colour.TrackerId = Model.TrackerId AND Model.MsgID = 2 JOIN MyTable Password ON Colour.TrackerId = Password.TrackerId AND Password.MsgID = 5 WHERE Colour.MsgID = 1 ```
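A third, highly portable technique not shown in either answer is conditional aggregation (`MAX(CASE WHEN …)`). Here it is sketched against an in-memory SQLite copy of the sample data; the same query text works in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Trackers (MsgID INTEGER, TrackerID INTEGER, MsgContent TEXT);
    INSERT INTO Trackers VALUES
        (1, 123, 'red'),    (2, 123, 'E2120'), (5, 123, 'sam'),
        (1, 111, 'orange'), (2, 111, 'M3420'), (5, 111, 'pam');
""")

# One output column per MsgID; MAX ignores the NULLs produced by non-matching rows
rows = conn.execute("""
    SELECT TrackerID,
           MAX(CASE WHEN MsgID = 1 THEN MsgContent END) AS Colour,
           MAX(CASE WHEN MsgID = 2 THEN MsgContent END) AS Model,
           MAX(CASE WHEN MsgID = 5 THEN MsgContent END) AS Password
    FROM Trackers
    GROUP BY TrackerID
    ORDER BY TrackerID
""").fetchall()
print(rows)
```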
How to fetch rows by applying multiple filtres to a single column in sql server 2008?
[ "", "sql", "sql-server", "" ]
```
ID    GROUP        NAME
1     1,2,         Supreeth
2     1,2,5,       Aishu
3     3,           Arvi
4     4,5,         Gani
5     4,3,2,       Jyo
6     3,2,1,       Upi
7     2,3,4,1,5,   Savi
```

I have a table like this, and I'm trying to get the count per group:

```
Num   DECODE   Count
1     A        4
2     B        5
3     C        4
4     D        3
5     E        3
```

I want to decode the values of GROUP to the DECODE values: 1 to A, 2 to B, 3 to C, 4 to D, 5 to E. I'm trying this, but I get errors:

```
select count(*) from s_c where age like '%1%' and '%2%' and '%3%';
```
This solution might not be good or flexible, as it depends on hard-coded values, but it achieves the desired output as described in the question:

```
SELECT 1 AS [Num], 'A' AS Decode, COUNT(*) AS [Count] FROM my_table WHERE [Group] LIKE '%1%'
UNION
SELECT 2 AS [Num], 'B' AS Decode, COUNT(*) AS [Count] FROM my_table WHERE [Group] LIKE '%2%'
UNION
SELECT 3 AS [Num], 'C' AS Decode, COUNT(*) AS [Count] FROM my_table WHERE [Group] LIKE '%3%'
UNION
SELECT 4 AS [Num], 'D' AS Decode, COUNT(*) AS [Count] FROM my_table WHERE [Group] LIKE '%4%'
UNION
SELECT 5 AS [Num], 'E' AS Decode, COUNT(*) AS [Count] FROM my_table WHERE [Group] LIKE '%5%'
```
One simple-minded (not necessarily efficient) approach is to obtain the separate counts for each of the group values. Note that the column name `Group` is using a reserved word, so you have to use an appropriate delimited identifier notation. When you use the portable (SQL standard) double quotes, you have to get the case of the identifier correct; I've assumed lower case — YMMV.

```
SELECT 1 AS GroupID, 'A' AS Decode, COUNT(*) AS GroupCount
  FROM AnonymousTable
 WHERE "group" LIKE '%1%'
 GROUP BY GroupID, Decode
UNION
SELECT 2 AS GroupID, 'B' AS Decode, COUNT(*) AS GroupCount
  FROM AnonymousTable
 WHERE "group" LIKE '%2%'
 GROUP BY GroupID, Decode
UNION
SELECT 3 AS GroupID, 'C' AS Decode, COUNT(*) AS GroupCount
  FROM AnonymousTable
 WHERE "group" LIKE '%3%'
 GROUP BY GroupID, Decode
UNION
SELECT 4 AS GroupID, 'D' AS Decode, COUNT(*) AS GroupCount
  FROM AnonymousTable
 WHERE "group" LIKE '%4%'
 GROUP BY GroupID, Decode
UNION
SELECT 5 AS GroupID, 'E' AS Decode, COUNT(*) AS GroupCount
  FROM AnonymousTable
 WHERE "group" LIKE '%5%'
 GROUP BY GroupID, Decode
```

But this doesn't scale well; add another 5 groups and it is extremely unpleasant; add 500 and it is unmanageable. You would do better to store the data in a properly normalized table which could then be analyzed using simpler SQL.

---

### An alternative schema design and query

```
Users            UserGroups        Groups
ID  Name         UserID  GroupID   ID  Decode
1   Supreeth     1       1         1   A
2   Aishu        1       2         2   B
3   Arvi         2       1         3   C
4   Gani         2       2         4   D
5   Jyo          2       5         5   E
6   Savi         3       3
                 4       4
                 4       5
                 ...
```

Here is the simplified query, which will probably perform a lot better than the original, and which will scale to any number of groups (up into the millions if you want them):

```
SELECT u.GroupID, g.Decode, COUNT(*) AS Count
  FROM UserGroups AS u
  JOIN Groups AS g ON u.GroupID = g.ID
 GROUP BY u.GroupID, g.Decode
```

Normalization makes life easier — that's one reason for doing it!
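For comparison, the same tally in plain Python. Splitting on the commas gives exact matches, which also sidesteps a substring pitfall: `LIKE '%1%'` would wrongly match a hypothetical group `11`.

```python
from collections import Counter

# The GROUP column values from the question
rows = ["1,2,", "1,2,5,", "3,", "4,5,", "4,3,2,", "3,2,1,", "2,3,4,1,5,"]
decode = {"1": "A", "2": "B", "3": "C", "4": "D", "5": "E"}

# Split each row on commas, dropping the empty trailing token
counts = Counter(g for row in rows for g in row.split(",") if g)
for num in sorted(counts):
    print(num, decode[num], counts[num])
```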
how to get the count of GROUP
[ "", "sql", "" ]
Columns A and B of table T3 are the same as A and B from T1. Basically what I need to do is select all the values that aren't in T3. If there is a row with that A,B pair in T3, I don't want to show it.

```
SELECT T1.A, T1.B, T1.C
FROM T1, T2
WHERE T1.X=T2.X
AND NOT EXISTS
(
SELECT T3.A, T3.B
FROM T3
)
```

Any help? Thanks
``` SELECT T1.A, T1.B, T1.C FROM T1 INNER JOIN T2 ON T1.X=T2.X WHERE NOT EXISTS ( SELECT 1 FROM T3 WHERE T3.A = T1.A AND T3.B = T1.B ) ```
``` select T1.A,T1.B,T1.C from T1 inner join T2 on T1.X=T2.X left join T3 on T1.A=T3.A and T1.B=T3.B where T3.A is null ```
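Both answers return the same rows. Here is a small in-memory SQLite check of that equivalence (the sample data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (A INT, B INT, C TEXT, X INT);
    CREATE TABLE T2 (X INT);
    CREATE TABLE T3 (A INT, B INT);
    INSERT INTO T1 VALUES (1, 1, 'keep', 10), (2, 2, 'drop', 10), (3, 3, 'keep', 10);
    INSERT INTO T2 VALUES (10);
    INSERT INTO T3 VALUES (2, 2);
""")

not_exists = conn.execute("""
    SELECT T1.A, T1.B, T1.C FROM T1 JOIN T2 ON T1.X = T2.X
    WHERE NOT EXISTS (SELECT 1 FROM T3 WHERE T3.A = T1.A AND T3.B = T1.B)
    ORDER BY T1.A
""").fetchall()

left_join = conn.execute("""
    SELECT T1.A, T1.B, T1.C FROM T1 JOIN T2 ON T1.X = T2.X
    LEFT JOIN T3 ON T1.A = T3.A AND T1.B = T3.B
    WHERE T3.A IS NULL ORDER BY T1.A
""").fetchall()

print(not_exists == left_join)  # True
```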
Select rows where some values are not in other table
[ "", "sql", "oracle", "" ]
In Python I usually loop through ranges simply by

```
for i in range(100):
    #do something
```

but now I want to skip a few steps in the loop. More specifically, I want something like `continue(10)` so that it would skip the rest of the loop body and increase the counter by 10. If I were using a for loop in C I'd just add 10 to `i`, but in Python that doesn't really work.
You cannot alter the target list (`i` in this case) of a `for` loop. Use a `while` loop instead:

```
i = 0
while i < 10:
    i += 1
    if i == 2:
        i += 3
```

Alternatively, use an iterable and increment that:

```
from itertools import islice

numbers = iter(range(10))
for i in numbers:
    if i == 2:
        next(islice(numbers, 3, 3), None)  # consume 3
```

By assigning the result of `iter()` to a local variable, we can advance the loop sequence inside the loop using standard iteration tools (`next()`, or here, a shortened version of the `itertools` consume recipe). `for` normally calls `iter()` for us when looping over a iterator.
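A quick way to convince yourself the `islice` trick works is to record which iterations actually run:

```python
from itertools import islice

numbers = iter(range(10))
visited = []
for i in numbers:
    visited.append(i)
    if i == 2:
        next(islice(numbers, 3, 3), None)  # skip 3, 4, 5

print(visited)  # [0, 1, 2, 6, 7, 8, 9]
```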
The best way is to assign the iterator a name - it is common have an iterable as opposed to an iterator (the difference being an iterable - for example a list - starts from the beginning each time you iterate over it). In this case, just use [the `iter()` built-in function](http://docs.python.org/3.3/library/functions.html#iter): ``` numbers = iter(range(100)) ``` Then you can advance it inside the loop using the name. The best way to do this is with [the `itertools` `consume()` recipe](http://docs.python.org/3/library/itertools.html#itertools-recipes) - as it is fast (it uses `itertools` functions to ensure the iteration happens in low-level code, making the process of consuming the values very fast, and avoids using up memory by storing the consumed values): ``` from itertools import islice import collections def consume(iterator, n): "Advance the iterator n-steps ahead. If n is none, consume entirely." # Use functions that consume iterators at C speed. if n is None: # feed the entire iterator into a zero-length deque collections.deque(iterator, maxlen=0) else: # advance to the empty slice starting at position n next(islice(iterator, n, n), None) ``` By doing this, you can do something like: ``` numbers = iter(range(100)) for i in numbers: ... if some_check(i): consume(numbers, 3) # Skip 3 ahead. ```
How do I skip a few iterations in a for loop
[ "", "python", "loops", "" ]
I have data in the following format.

```
match_id  team_id  won_ind
----------------------------
37        Team1    N
67        Team1    Y
98        Team1    N
109       Team1    N
158       Team1    Y
162       Team1    Y
177       Team1    Y
188       Team1    Y
198       Team1    N
207       Team1    Y
217       Team1    Y
10        Team2    N
13        Team2    N
24        Team2    N
39        Team2    Y
40        Team2    Y
51        Team2    Y
64        Team2    N
79        Team2    N
86        Team2    N
91        Team2    Y
101       Team2    N
```

Here the `match_id`s are in chronological order: 37 is the first and 217 is the last match played by team1. `won_ind` indicates whether the team won the match or not. So, from the above data, team1 has lost its first match, then won a match, then lost 2 matches, then won 4 consecutive matches and so on. Now I'm interested in finding the longest winning streak for each team.

```
Team_id  longest_streak
------------------------
Team1    4
Team2    3
```

I know how to find this in PL/SQL, but I was wondering if this can be calculated in pure SQL. I tried using LEAD, LAG and several other functions, but I'm not getting anywhere. I have created a sample fiddle [here](http://sqlfiddle.com/#!4/31f95/1).
```
with original_data as (
select 37 match_id, 'Team1' team_id, 'N' won_id from dual union all
select 67 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 98 match_id, 'Team1' team_id, 'N' won_id from dual union all
select 109 match_id, 'Team1' team_id, 'N' won_id from dual union all
select 158 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 162 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 177 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 188 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 198 match_id, 'Team1' team_id, 'N' won_id from dual union all
select 207 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 217 match_id, 'Team1' team_id, 'Y' won_id from dual union all
select 10 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 13 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 24 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 39 match_id, 'Team2' team_id, 'Y' won_id from dual union all
select 40 match_id, 'Team2' team_id, 'Y' won_id from dual union all
select 51 match_id, 'Team2' team_id, 'Y' won_id from dual union all
select 64 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 79 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 86 match_id, 'Team2' team_id, 'N' won_id from dual union all
select 91 match_id, 'Team2' team_id, 'Y' won_id from dual union all
select 101 match_id, 'Team2' team_id, 'N' won_id from dual
),
----------------------------------------------------------------------
new_streaks as (
--
-- Identifying new streaks.
-- ------------------------ -- select match_id, team_id, won_id, -- -- A new streak is identfied if -- case when -- -- a) won_id = 'Y' and -- won_id = 'Y' and -- -- b) the previous won_id = 'N': -- lag(won_id) over (partition by team_id order by match_id) = 'N' -- -- then 1 -- -- All other cases: no new streak: else 0 -- end new_streak from original_data ), ------------------------------- streak_no as ( -- -- Assigning a unique number to each streak. -- ----------------------------------------- -- select -- match_id, team_id, -- -- In order to be able to count the number of records -- of a streak, we first need to assign a unique number -- to each streak: -- sum(new_streak) over (partition by team_id order by match_id) streak_no -- from new_streaks where -- We're only interested in «winning streaks»: won_id = 'Y' ), ----------------------------------------------- -- -- Counting the elements per streak -- -------------------------------- -- records_per_streak as ( select count(*) counter, team_id, streak_no from streak_no group by team_id, streak_no ) ------------------------------------------------ -- -- Finally: we can find the «longest streak» -- per team: -- select max(counter) longest_streak, team_id from records_per_streak group by team_id ; ```
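The same computation can be done in Python with `itertools.groupby`, which makes a handy cross-check on the SQL above (the W/L strings below transcribe the sample data in chronological order):

```python
from itertools import groupby

results = {
    "Team1": "NYNNYYYYNYY",
    "Team2": "NNNYYYNNNYN",
}

def longest_streak(won_flags):
    # groupby collapses consecutive equal flags into runs; keep the 'Y' runs
    runs = [sum(1 for _ in grp) for flag, grp in groupby(won_flags) if flag == "Y"]
    return max(runs, default=0)

for team, flags in sorted(results.items()):
    print(team, longest_streak(flags))  # Team1 4 / Team2 3
```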
This should work, Fiddle here: <http://sqlfiddle.com/#!4/31f95/27> ``` SELECT team_id, MAX(seq_length) AS longest_sequence FROM (SELECT team_id, COUNT(*) AS seq_length FROM (SELECT team_id, won_ind,match_id, SUM(new_group) OVER(ORDER BY match_id) AS group_no FROM (SELECT team_id, won_ind, match_id, DECODE(LAG(won_ind) OVER(ORDER BY match_id), won_ind, 0, 1) AS new_group FROM matches ORDER BY team_id)) WHERE won_ind = 'Y' GROUP BY team_id, group_no) GROUP BY team_id ORDER BY 2 DESC, 1; ```
Finding the longest streak of wins
[ "", "sql", "oracle", "oracle11gr2", "" ]
Usually when one has to search in Oracle on a single WHERE condition where the exact value is not known, we use:

```
Select * from Table where column like '%Val%'
```

If I have to check for multiple conditions, we use `IN`:

```
Select * from Table where column in ('Value1','ABC2')
```

How do we combine the two? I.e., how do we search for a bunch of values in the DB when the exact values are not known?

The code below doesn't give the desired result, because `IN` treats each value as a literal string rather than a pattern:

```
Select * from Table where column in ('%Val%','%AB%')
```
``` Select * from Table where column like '%Val%' or column like '%AB%'; ```
```
Select * from Table where column like '%Val%' or column like '%AB%'.....
```

I know it's a little tedious to write; you can create a vertical list of your values and replace the \n and \r characters with '% and %' respectively.
using % is IN condition
[ "", "sql", "oracle", "plsql", "sql-in", "" ]
``` import pandas as pd date_stngs = ('2008-12-20','2008-12-21','2008-12-22','2008-12-23') a = pd.Series(range(4),index = (range(4))) for idx, date in enumerate(date_stngs): a[idx]= pd.to_datetime(date) ``` This code bit produces error: > TypeError:" 'int' object is not iterable" Can anyone tell me how to get this series of date time strings into a DataFrame as `DateTime` objects?
``` >>> import pandas as pd >>> date_stngs = ('2008-12-20','2008-12-21','2008-12-22','2008-12-23') >>> a = pd.Series([pd.to_datetime(date) for date in date_stngs]) >>> a 0 2008-12-20 00:00:00 1 2008-12-21 00:00:00 2 2008-12-22 00:00:00 3 2008-12-23 00:00:00 ``` **UPDATE** Use pandas.to\_datetime(pd.Series(..)). It's concise and much faster than above code. ``` >>> pd.to_datetime(pd.Series(date_stngs)) 0 2008-12-20 00:00:00 1 2008-12-21 00:00:00 2 2008-12-22 00:00:00 3 2008-12-23 00:00:00 ```
``` In [46]: pd.to_datetime(pd.Series(date_stngs)) Out[46]: 0 2008-12-20 00:00:00 1 2008-12-21 00:00:00 2 2008-12-22 00:00:00 3 2008-12-23 00:00:00 dtype: datetime64[ns] ``` ## Update: benchmark ``` In [43]: dates = [(dt.datetime(1960, 1, 1)+dt.timedelta(days=i)).date().isoformat() for i in range(20000)] In [44]: timeit pd.Series([pd.to_datetime(date) for date in dates]) 1 loops, best of 3: 1.71 s per loop In [45]: timeit pd.to_datetime(pd.Series(dates)) 100 loops, best of 3: 5.71 ms per loop ```
In Pandas how do I convert a string of date strings to datetime objects and put them in a DataFrame?
[ "", "python", "datetime", "pandas", "" ]
Is there a way for `TextInput`s to receive a bounded string value (i.e., string of maximum length, x)? I tried investigating how to mixin `AliasProperty` in order to mimic `BoundedNumericProperty`, but can't find any Property class methods.
By the time `on_text` is called, the text has already changed in the TextInput. You want to override [insert_text](http://kivy.org/docs/api-kivy.uix.textinput.html#kivy.uix.textinput.TextInput.insert_text) to catch the text before it is inserted into the TextInput, and thus before the [text](http://kivy.org/docs/api-kivy.uix.textinput.html#kivy.uix.textinput.TextInput.text) property is updated, so you can restrict what goes into the TextInput.

Please don't bind/request the keyboard yourself: the TextInput does that for you, and your handler would stop working after the TextInput is focused (the TextInput requests the keyboard, and in a single-keyboard environment your handler will stop working).

Here is sample code overriding insert_text to restrict text entry to only numeric input:

```
class NumericInput(TextInput):

    def insert_text(self, substring, from_undo=False):
        if not from_undo:
            try:
                int(substring)
            except ValueError:
                return
        super(NumericInput, self).insert_text(substring, from_undo)
```

So, for restricting the text to a certain length, you could do the following:

```
class CustomInput(TextInput):

    max_chars = NumericProperty(10)

    def insert_text(self, substring, from_undo=False):
        if not from_undo and (len(self.text)+len(substring) > self.max_chars):
            return
        super(CustomInput, self).insert_text(substring, from_undo)
```
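Kivy aside, the length guard itself is plain Python and can be unit-tested in isolation. The function name below is illustrative, not part of Kivy; it mirrors the accept/reject decision made inside `insert_text`:

```python
def should_accept(current_text, substring, max_chars, from_undo=False):
    """Mirror of the guard in the insert_text override: undo is always
    allowed, otherwise reject an insertion that would exceed max_chars."""
    if from_undo:
        return True
    return len(current_text) + len(substring) <= max_chars

print(should_accept("123456789", "0", 10))   # True
print(should_accept("1234567890", "x", 10))  # False
```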
I think the event `on_text` is triggered each time you modify the text, so you can override the method:

```
def on_text(self, instance, value):
    print('The widget', instance, 'have:', value)
    # validate here!!!
    # you might also want to call the parent.
    #super(ClassName, self).on_text(instance, value)
```

Or bind it:

```
def my_callback(instance, value):
    print('The widget', instance, 'have:', value)
    #validate here

textinput = TextInput()
textinput.bind(text=my_callback)
```

**Be careful with infinite recursion.** If you modify the text variable inside `on_text` or `my_callback`, you might trigger the event again. I honestly don't remember whether it does, but if so you need a flag (such as `validating`) set before modifying the variable.

You can also still use `on_focus`, so you check when the `TextInput` loses focus:

```
def on_focus(instance, value):
    if value:
        print('User focused', instance)
    else:
        print('User defocused', instance)

textinput = TextInput()
textinput.bind(focus=on_focus)
```

Finally, you can also [bind the keyboard](http://kivy.org/docs/api-kivy.core.window.html?highlight=keyboard#kivy.core.window.WindowBase.request_keyboard) so you are guaranteed access before the `TextInput`. I honestly don't know the order of execution, but if you use `on_text` you might be deleting the letter after it has appeared on the screen, which might be undesirable.

I think implementing your own `BoundedStringProperty` would be quite some work to achieve what you want. Here is the code of [`BoundedNumericProperty`](https://github.com/kivy/kivy/blob/master/kivy/properties.pyx#L743)

Also, you shouldn't be trying to use an `AliasProperty`, since you already have `StringProperty`, which triggers the `on_text` event mentioned before.
Kivy: Is there a "BoundedString" property available for TextInputs?
[ "", "python", "kivy", "" ]
I have a query inside a stored procedure that sums some values inside a table: ``` SELECT SUM(columnA) FROM my_table WHERE columnB = 1 INTO res; ``` After this select I subtract `res` value with an integer retrieved by another query and return the result. If `WHERE` clause is verified, all works fine. But if it's not, all my function returns is an empty column (maybe because I try to subtract a integer with an empty value). How can I make my query return zero if the `WHERE` clause is not satisfied?
You could: ``` SELECT COALESCE(SUM(columnA), 0) FROM my_table WHERE columnB = 1 INTO res; ``` This happens to work, because your query has an aggregate function and consequently *always* returns a row, even if nothing is found in the underlying table. Plain queries without aggregate would return ***no row*** in such a case. `COALESCE` would never be called and couldn't save you. While dealing with a single column we can wrap the whole query instead: ``` SELECT COALESCE( (SELECT columnA FROM my_table WHERE ID = 1), 0) INTO res; ``` Works for your original query as well: ``` SELECT COALESCE( (SELECT SUM(columnA) FROM my_table WHERE columnB = 1), 0) INTO res; ``` More about [`COALESCE()` in the manual](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL). More about [aggregate functions in the manual](https://www.postgresql.org/docs/current/functions-aggregate.html). More **alternatives** in this later post: * [How to return a value from a function if no value is found](https://stackoverflow.com/questions/29293018/how-to-return-a-value-from-a-function-if-no-value-is-found/29294021#29294021)
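The underlying behavior — an aggregate over zero rows still returns one row, whose SUM is NULL — is easy to reproduce with the standard library's SQLite (shown purely for illustration; the `COALESCE` fix is the same in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (columnA INTEGER, columnB INTEGER);
    INSERT INTO my_table VALUES (5, 1), (7, 1), (9, 2);
""")

# WHERE matches nothing: SUM still returns a row, but its value is NULL
raw = conn.execute(
    "SELECT SUM(columnA) FROM my_table WHERE columnB = 99"
).fetchone()[0]
fixed = conn.execute(
    "SELECT COALESCE(SUM(columnA), 0) FROM my_table WHERE columnB = 99"
).fetchone()[0]
matched = conn.execute(
    "SELECT COALESCE(SUM(columnA), 0) FROM my_table WHERE columnB = 1"
).fetchone()[0]

print(raw, fixed, matched)  # None 0 12
```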
I'm not familiar with postgresql, but in SQL Server or Oracle, using a subquery would work like below (in Oracle, the `SELECT 0` would be `SELECT 0 FROM DUAL`) ``` SELECT SUM(sub.value) FROM ( SELECT SUM(columnA) as value FROM my_table WHERE columnB = 1 UNION SELECT 0 as value ) sub ``` Maybe this would work for postgresql too?
Return zero if no record is found
[ "", "sql", "postgresql", "null", "plpgsql", "coalesce", "" ]
I am trying to insert a question into a database table whose field is nvarchar(max) (SQL Server 2008 R2).

Code is as follows:

```
Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSave.Click
    gc.ExecuteCommand("insert into QuestionMaster values('" & txtQuestion.Text & "','" & ViewState("ClientID") & "','" & ViewState("KioskID") & "')")
    Response.Write("<script type='text/javascript' language='javascript'>alert('Question Added Sucessfully!!!')</script>")
    BindGrid()
End Sub
```

When I insert a string such as:

```
what's your name?
```

then it gives me the error:

> Incorrect syntax near 's'. Unclosed quotation mark after the character string ')'.

If I simply supply the string as:

```
What is your name?
```

then it does not give me an error. Please help me.
You could **escape the single quote by replacing each single quote (') with two single quotes ('')** in txtQuestion.Text.

```
Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSave.Click
    gc.ExecuteCommand("insert into QuestionMaster values('" & txtQuestion.Text.Replace("'", "''") & "','" & ViewState("ClientID") & "','" & ViewState("KioskID") & "')")
    Response.Write("<script type='text/javascript' language='javascript'>alert('Question Added Sucessfully!!!')</script>")
    BindGrid()
End Sub
```
You should be using parameterised queries if possible since simple string insertion directly into a query will, as you have seen, possibly corrupt the query. In other words, if the text box contains `Paddy O'Rourke`, your query becomes: ``` open close what the ? | | | insert into QuestionMaster values('Paddy O'Rourke') ... ``` and you can see the fact that the embedded `'` is corrupting the query. It will also, as you have yet to realise, allow people to perform SQL injection attacks on your database since you're not sanitising the input. If, for some reason, your shop disallows parameterised queries (as it appears from one of your comments), find another place to work. No, just kidding, but in the presence of such a bone-headed policy, you'll need to sanitise the input yourself. But that's fraught with danger, I would first try to change such a policy, laying out in no uncertain terms the risks involved.
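To make the parameterised-query point concrete, here is the pattern with the standard library's SQLite driver. In ADO.NET you would use `SqlCommand` parameters instead; this Python sketch is just an illustration of the principle that the driver, not string concatenation, handles the embedded quote:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE QuestionMaster (Question TEXT)")

# The embedded apostrophe needs no escaping with a placeholder
question = "What's your name, Paddy O'Rourke?"
conn.execute("INSERT INTO QuestionMaster VALUES (?)", (question,))

stored = conn.execute("SELECT Question FROM QuestionMaster").fetchone()[0]
print(stored == question)  # True
```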
string with 's not getting inserted with insert statement
[ "", "asp.net", ".net", "sql", "vb.net", "" ]
I am using the following code for creating the comma-delimited list. I wanted the sequence of the list in a particular order:

```
USE AdventureWorks
GO
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + Name
FROM Production.Product
SELECT @listStr
GO
```

When I tried it like this:

```
USE AdventureWorks
GO
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + Name
FROM Production.Product
ORDER BY sortOrder
SELECT @listStr
GO
```

it shows the error `Incorrect syntax near 'ORDER'`.
Use [STUFF](http://msdn.microsoft.com/en-us/library/ms188043.aspx)() Which gives you the same comma seperated result ``` USE AdventureWorks GO DECLARE @listStr VARCHAR(MAX) SELECT @listStr = STUFF((SELECT ',' + Name FROM Production.Product ORDER BY sortOrder FOR XML PATH('')), 1, 1, '') SELECT @listStr GO ```
I don't believe you can put an order by when getting a scalar value. And you should not need it since you are expecting only one value.
ORDER BY in Comma Delimited List Using SELECT Clause
[ "", "sql", "sql-server", "sql-order-by", "csv", "" ]
So I'm parsing a really big log file with some embedded JSON. I'll see lines like this: `foo="{my_object:foo, bar:baz}" a=b c=d` The problem is that since the internal JSON can have spaces, but outside of the JSON, spaces act as tuple delimiters (except where they have unquoted strings. Huzzah for whatever idiot thought that was a good idea), I'm not sure how to figure out where the end of the JSON string is without reimplementing large portions of a JSON parser. Is there a JSON parser for Python where I can give it `'{"my_object":"foo", "bar":"baz"} asdfasdf'`, and it can return `({'my_object' : 'foo', 'bar':'baz'}, 'asdfasdf')`, or am I going to have to reimplement the JSON parser by hand?
Found a really cool answer. Use json.JSONDecoder's scan\_once function ``` In [30]: import json In [31]: d = json.JSONDecoder() In [32]: my_string = 'key="{"foo":"bar"}"more_gibberish' In [33]: d.scan_once(my_string, 5) Out[33]: ({u'foo': u'bar'}, 18) In [37]: my_string[18:] Out[37]: '"more_gibberish' ``` Just be careful ``` In [38]: d.scan_once(my_string, 6) Out[38]: (u'foo', 11) ```
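Worth noting: `scan_once` is an implementation detail, while the documented public equivalent is `JSONDecoder.raw_decode`, which likewise returns the parsed object plus the index where parsing stopped (the start index must point at the JSON value itself, not at whitespace):

```python
import json

d = json.JSONDecoder()
s = '{"my_object":"foo", "bar":"baz"} asdfasdf'

obj, end = d.raw_decode(s)  # starts at index 0 by default
print(obj)      # {'my_object': 'foo', 'bar': 'baz'}
print(s[end:])  # ' asdfasdf'
```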
Match everything around it. ``` >>> re.search('^foo="(.*)" a=.+ c=.+$', 'foo="{my_object:foo, bar:baz}" a=b c=d').group(1) '{my_object:foo, bar:baz}' ```
In python, is there a way to extract a embedded json string?
[ "", "python", "json", "" ]
My function (named `ConvexHull(points)`) accepts its argument as a two-dimensional array:

```
hull = ConvexHull(points)
```

### Session

```
In [1]: points.ndim
Out[1]: 2

In [2]: points.shape
Out[2]: (10, 2)

In [3]: points
Out[3]:
array([[ 0. ,  0. ],
       [ 1. ,  0.8],
       [ 0.9,  0.8],
       [ 0.9,  0.7],
       [ 0.9,  0.6],
       [ 0.8,  0.5],
       [ 0.8,  0.5],
       [ 0.7,  0.5],
       [ 0.1,  0. ],
       [ 0. ,  0. ]])
```

*points* is a NumPy array with *ndim* 2. I have two different NumPy arrays (tp and fp) like below:

```
In [4]: fp.ndim
Out[4]: 1

In [5]: fp.shape
Out[5]: (10,)

In [6]: fp
Out[6]: array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.4,  0.5,  0.6,  0.9,  1. ])
```

How can I create a two-dimensional NumPy array effectively (like *points* mentioned above) with *tp* and *fp*?
If you wish to combine two 10 element one-dimensional arrays into a two-dimensional array, `np.vstack((tp, fp)).T` will do it. `np.vstack((tp, fp))` will return an array of shape (2, 10), and the `T` attribute returns the transposed array with shape (10, 2) (i.e., with the two one-dimensional arrays forming columns rather than rows). ``` >>> tp = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> tp.ndim 1 >>> tp.shape (10,) >>> fp = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) >>> fp.ndim 1 >>> fp.shape (10,) >>> combined = np.vstack((tp, fp)).T >>> combined array([[ 0, 10], [ 1, 11], [ 2, 12], [ 3, 13], [ 4, 14], [ 5, 15], [ 6, 16], [ 7, 17], [ 8, 18], [ 9, 19]]) >>> combined.ndim 2 >>> combined.shape (10, 2) ```
You can use NumPy's [column\_stack](https://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html): ``` np.column_stack((tp, fp)) ```
Create a two-dimensional array with two one-dimensional arrays
[ "", "python", "arrays", "numpy", "" ]
I am trying to translate a piece of code I wrote from Python to Lua. I am using this code inside of compositing package Blackmagic Fusion. Any help would be greatly appreciated! Python script (working): ``` try: comp.ActiveTool() # checks if a tool is selected except: print("nothing selected") comp.AddTool("PolylineMask", -32768, -32768) # adds a tool if nothing's selected ``` Lua script (still not working and erroring): ``` if pcall (comp:ActiveTool()) then print "Node Selected" else comp:AddTool("PolylineMask", -32768, -32768) end ```
Lua's exception handling works a bit differently than in other languages. Instead of wrapping code in try/catch statements, you instead run a function in a 'protected environment' with `pcall`. The general syntax for pcall is: ``` local ok, err = pcall(myfunc, arg1, arg2, ...) if not ok then print("The error message was "..err) else print("The call went OK!") end ``` Where `myfunc` is the function you want to call and `arg1` and so on are the arguments. Note that you aren't actually calling the function, you are just passing it so that `pcall` can call it for you. **BUT** keep in mind that `tbl:method(arg1, arg2)` in Lua is syntax sugar for `tbl.method(tbl, arg1, arg2)`. However, since you aren't calling the function yourself, you can't use that syntax. You need to pass in the table to `pcall` as the first argument, like so: ``` pcall(tbl.method, tbl, arg1, arg2, ...) ``` Thus, in your case it would be: ``` local ok, err = pcall(comp.ActiveTool, comp) ```
You aren't using pcall correctly. you need to pass it the function you actually want called, and it'll call it in a protected mode where it can trap errors. pcall returns 2 values, a bool indicating if the call succeeded or not, and an error code if the call did not succeed. your lua code should look more something like this: ``` local ok, err = pcall(comp.ActiveTool, comp) if not ok then print(err, 'nothing selected') comp.AddTool(...) else -- the call succeeded print 'Node Selected' end ``` and in the case that you want to call functions using pcall that take params, you can simply pass them as additional values to pcall, and it'll pass those on to the method you gave it when it calls it. ``` local ok, err = pcall(comp.AddTool, 'PolylineMask', -32768, -32768) ``` as an example. the above line roughly translates to: ``` try { comp.AddTool('PolylineMask', -32768, -32768); return true } catch (err) { return false, err } ```
How to translate a Python script to Lua script?
[ "", "python", "error-handling", "lua", "try-catch", "blackmagic-fusion", "" ]
Currently, I have created a script that makes graphs from data in .csv files. However, I can only run the code if it is present in the folder with the csv files. How can I make the script so that it doesn't have to be in the same directory as the .csv files? Also, I would like that same script to read every csv file in that other directory. Why is the code below wrong?

```
Here=os.path.dirname(os.path.abspath(__file__))
directory = "path of directory"
listing = os.listdir(directory)
for files in listing:
    if files.endswith('.csv'):
        full_name = os.path.join(Here,files)
        df=pd.read_csv(full_name)
```
just set `directory="/path/to/fldr/with/csv"` and `full_name = os.path.join(directory,files)`
Yes, it is wrong; you'll need to accept an argument that tells your script where to find the CSV files:

```
import argparse

def import_function(sourcedir):
    for filename in os.listdir(sourcedir):
        csvfilename = os.path.join(sourcedir, filename)
        with open(csvfilename) as csvfile:
            reader = csv.reader(csvfile)
            for row in reader:
                # etc.

if __name__ == '__main__':
    parser = argparse.ArgumentParser('CSV importer')
    parser.add_argument('sourcedir', default='.')
    options = parser.parse_args()
    import_function(options.sourcedir)
```

Now your script accepts one command-line argument, a path to the directory to list your files in:

```
python yourscript.py /path/to/directory
```

The default is to look in the current directory still.
A Script that can run in any directory
[ "", "python", "" ]
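Tying the two answers together, here is a hedged sketch using only the standard library: `glob` does the `.csv` filtering that the question's `endswith` check performs, the matches are joined against the target directory (not the script's own location), and `csv.reader` stands in for pandas so the snippet is self-contained. The function name is illustrative:

```python
import csv
import glob
import os

def read_all_csv_rows(directory):
    """Read every .csv file in *directory*, no matter where the script lives."""
    rows = []
    # os.path.join keeps the paths anchored to *directory*, which fixes the
    # bug in the question (joining the script's folder with another
    # directory's filenames).
    for path in sorted(glob.glob(os.path.join(directory, '*.csv'))):
        with open(path, newline='') as fh:
            rows.extend(csv.reader(fh))
    return rows
```

With pandas installed, the loop body could call `pd.read_csv(path)` instead, and `directory` would typically come from a command-line argument as the second answer suggests.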
I'm trying to find the minimum array indices along one dimension of a very large 2D numpy array. I'm finding that this is very slow (already tried speeding it up with bottleneck, which was only a minimal improvement). However, taking the straight minimum appears to be an order of magnitude faster: ``` import numpy as np import time randvals = np.random.rand(3000,160000) start = time.time() minval = randvals.min(axis=0) print "Took {0:.2f} seconds to compute min".format(time.time()-start) start = time.time() minindex = np.argmin(randvals,axis=0) print "Took {0:.2f} seconds to compute argmin".format(time.time()-start) ``` On my machine this outputs: ``` Took 0.83 seconds to compute min Took 9.58 seconds to compute argmin ``` Is there any reason why argmin is so much slower? Is there any way to speed it up to comparable to min?
``` In [1]: import numpy as np In [2]: a = np.random.rand(3000, 16000) In [3]: %timeit a.min(axis=0) 1 loops, best of 3: 421 ms per loop In [4]: %timeit a.argmin(axis=0) 1 loops, best of 3: 1.95 s per loop In [5]: %timeit a.min(axis=1) 1 loops, best of 3: 302 ms per loop In [6]: %timeit a.argmin(axis=1) 1 loops, best of 3: 303 ms per loop In [7]: %timeit a.T.argmin(axis=1) 1 loops, best of 3: 1.78 s per loop In [8]: %timeit np.asfortranarray(a).argmin(axis=0) 1 loops, best of 3: 1.97 s per loop In [9]: b = np.asfortranarray(a) In [10]: %timeit b.argmin(axis=0) 1 loops, best of 3: 329 ms per loop ``` Maybe `min` is smart enough to do its job sequentially over the array (hence with cache locality), and `argmin` is jumping around the array (causing a lot of cache misses)? Anyway, if you're willing to keep `randvals` as a Fortran-ordered array from the start, it'll be faster, though copying into Fortran-ordered doesn't help.
I just took a look at [the source code](https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/calculation.c#L153), and while I don't fully understand why things are being done the way they are, this is what happens: 1. `np.min` is basically a call to `np.minimum.reduce`. 2. `np.argmin` first moves the axis you want to operate on to the end of the shape tuple, then makes it a contiguous array, which of course triggers a copy of the full array unless the axis was the last one to begin with. Since a copy is being made, you can get creative and try to instantiate cheaper arrays: ``` a = np.random.rand(1000, 2000) def fast_argmin_axis_0(a): matches = np.nonzero((a == np.min(a, axis=0)).ravel())[0] rows, cols = np.unravel_index(matches, a.shape) argmin_array = np.empty(a.shape[1], dtype=np.intp) argmin_array[cols] = rows return argmin_array In [8]: np.argmin(a, axis=0) Out[8]: array([230, 532, 815, ..., 670, 702, 989], dtype=int64) In [9]: fast_argmin_axis_0(a) Out[9]: array([230, 532, 815, ..., 670, 702, 989], dtype=int64) In [10]: %timeit np.argmin(a, axis=0) 10 loops, best of 3: 27.3 ms per loop In [11]: %timeit fast_argmin_axis_0(a) 100 loops, best of 3: 15 ms per loop ``` I wouldn't go as far as calling the current implementation a bug, since there may be good reasons for numpy doing what it does the way it does it, but that this kind of trickery can speed up what should be a highly optimized function, strongly suggests that things could be done better.
Is there a way to make numpy.argmin() as fast as min()?
[ "", "python", "arrays", "numpy", "min", "" ]
I've just started to use Django and I haven't found a lot of info on how to display an `imageField`, so I made this:

models.py:

```
class Car(models.Model):
    name = models.CharField(max_length=255)
    price = models.DecimalField(max_digits=5, decimal_places=2)
    photo = models.ImageField(upload_to='site_media')
```

views.py:

```
def image(request):
    carx = Car()
    variables = RequestContext(request,{
        'carx':carx
    })
    return render_to_response('image.html',variables)
```

image.html:

```
{% extends "base.html" %}

{% block content %}
    <img src=carx />
{% endblock %}
```

I already saved an image from the terminal and I know it's there. Also, if I do this in image.html:

```
{% block content %}
    {{ carx }}
{% endblock %}
```

the output is: Car object

Can anyone tell me where my error is?
An `ImageField` contains a `url` attribute, which you can use in your templates to render the proper HTML. ``` {% block content %} <img src="{{ carx.photo.url }}"> {% endblock %} ```
You can also make use of the Static URL in Settings.py. Make a directory for example "Uploads", in the Static directory of your app. Also change this in your model in `models.py`. Use the following code: ``` <img src="{% static carx.photo.url %}" /> ```
Django - Display ImageField
[ "", "python", "django", "imagefield", "" ]
OK, so I have a load of records in a table and they have many different dates. I want to return only those records whose date falls on the last day of whatever quarter it's in. I.e. I basically need the equivalent of a `lastDayOfQuarter(date)` function that calculates the date that is the last day in the quarter for the date passed to it. e.g. `lastDayOfQuarter(#16/05/2013#) = #30/06/2013#` My query might look like:

```
SELECT *
FROM mytable
WHERE mytable.rdate = lastDayOfQuarter(mytable.rdate);
```

This query will be run over PDO so no VBA allowed. Native MS Access only. I would also prefer to not use string manipulation as there is a difference between US and EU dates which might cause issues down the line.
I'm answering myself as, with the help of [HansUp](https://stackoverflow.com/users/77335/hansup) answering a [previous question of mine](https://stackoverflow.com/q/17830458/1606323) for finding month-end records, I found out quite an easy way to achieve this:

```
WHERE DateValue(m.rdate) = DateSerial(Year(m.rdate), Month(m.rdate) + 1, 0)
AND Month(m.rdate) IN(3,6,9,12)
```
the "last day of the quarter" could be different for different users. You may be best to build a table of "lastdays" based on your business rules, then use that table in your query.
MS Access SQL query for testing dates for last day of the quarter
[ "", "sql", "ms-access", "" ]
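The Access expression in the accepted answer relies on `DateSerial`'s day-zero trick (day 0 of the next month is the last day of this one). For reference, the same `lastDayOfQuarter` logic can be sketched in Python; the function name mirrors the one in the question and is not part of Access:

```python
import datetime

def last_day_of_quarter(d):
    """Return the last calendar day of the quarter containing date *d*."""
    # Last month of d's quarter: 3, 6, 9 or 12.
    end_month = 3 * ((d.month - 1) // 3 + 1)
    # First day of the following month, then step back one day --
    # the same idea as DateSerial(year, month + 1, 0) in Access.
    if end_month == 12:
        first_of_next = datetime.date(d.year + 1, 1, 1)
    else:
        first_of_next = datetime.date(d.year, end_month + 1, 1)
    return first_of_next - datetime.timedelta(days=1)

print(last_day_of_quarter(datetime.date(2013, 5, 16)))  # 2013-06-30
```

This reproduces the example from the question: 16/05/2013 maps to 30/06/2013.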
I'm new to SQL (using postgreSQL) and I've written a java program that selects from a large table and performs a few functions. The problem is that when I run the program I get a java OutOfMemoryError because the table is simply too big. I know that I can select from the beginning of the table using the LIMIT operator, but is there a way I can start the selection from a certain index where I left off with the LIMIT command? Thanks!
There is an OFFSET option in Postgres, as in:

```
select *
from mytable
offset 50
limit 50
```
For MySQL you can use the following approaches: 1. SELECT \* FROM table LIMIT {offset}, row\_count 2. SELECT \* FROM table WHERE id > {max\_id\_from\_previous\_selection} LIMIT row\_count. Initially, max\_id\_from\_previous\_selection = 0.
Select from a SQL table starting with a certain index?
[ "", "sql", "" ]
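Both answers generalize: `OFFSET`/`LIMIT` is simple but the database still scans past the skipped rows, while the second answer's `WHERE id > ?` pattern (keyset pagination) resumes from an index seek. A sketch using the standard library's `sqlite3` as a stand-in for PostgreSQL (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)')
conn.executemany('INSERT INTO items VALUES (?, ?)',
                 [(i, 'row %d' % i) for i in range(1, 101)])

def next_page(conn, last_seen_id, page_size):
    # Keyset pagination: continue after the largest id already fetched,
    # so the database can seek instead of counting past skipped rows.
    return conn.execute(
        'SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?',
        (last_seen_id, page_size)).fetchall()

page = next_page(conn, 0, 50)            # rows 1..50
page = next_page(conn, page[-1][0], 50)  # rows 51..100
```

From Java, the same two-query pattern works over JDBC/PDO alike: fetch a page, remember the last id, and pass it back in for the next page.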
I have a stored procedure that has a table with one column, and I need to generate a NEWID() for each row in that column. Would I only be able to accomplish this with a loop?

```
+---+    +--------------------------------------+---+
| a |    | FD16A8B5-DBE6-46AB-A59A-6B6674E9A78D | a |
| b | => | 9E4A6EE6-1C95-4C7F-A666-F88B32D24B59 | b |
| c |    | 468C0B23-5A7E-404E-A9CB-F624BDA476DA | c |
+---+    +--------------------------------------+---+
```
You should be able to select from your table and include the `newid()` to generate the value for each row: ``` select newid(), col from yourtable; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d6aff/1)
You can create a column with the new guid ``` alter table yourtable add id varchar(40) not null default NEWID() ``` <http://sqlfiddle.com/#!3/b3c31/1>
generate guid for every row in a column
[ "", "sql", "sql-server", "t-sql", "" ]
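Outside of SQL Server, the same shape of result — a fresh GUID paired with each existing value — can be sketched in Python with the standard `uuid` module (purely illustrative; inside T-SQL, `NEWID()` per the accepted answer is the right tool):

```python
import uuid

rows = ['a', 'b', 'c']
# Pair every row with a freshly generated random GUID (version 4),
# upper-cased to match SQL Server's display convention.
with_ids = [(str(uuid.uuid4()).upper(), value) for value in rows]

for guid, value in with_ids:
    print(guid, value)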
This query is taking an average of 4 seconds. It will become a subquery in a stored procedure and I need it to run in under a second. Here is the query:

```
(select customercampaignname + ' $' + convert(varchar, cast(amount as numeric(36,2) ) ) As 'Check_Stub_Comment2'
from
(
select ROW_NUMBER() OVER (ORDER BY amount desc) as rownumber,
customercampaignname, amount
from
    (
    select * from
        (
        select distinct d.customercampaignname
            ,sum(d.mastercurrencyamount) As amount
        from bb02_donation d
        JOIN bb02_donationline dl on d.donationid = dl.donationid
        JOIN bb02_fundraiserrevenuestream frs on dl.fundraiserrevenuestreamid = frs.fundraiserrevenuestreamid
        and frs.fundraiserid = 1869
        where d.customercampaignname is not null
        and d.customercampaignname != ''
        group by d.CustomerCampaignName
        ) as x
    ) as sub
) as y
where rownumber = 1)
```
If you don't need to actually use the row number for anything, then I would just go with getting the TOP 1 row. The query could be simplified a lot. I like to start by first selecting from the table that will be filtered out the most by your predicates. Hopefully, you have a good index on frs.fundraiserid and all of the columns participating in the joins.

```
SELECT TOP 2 customercampaignname + ' $' + CONVERT(VARCHAR(255), CAST(SUM(d.mastercurrencyamount) AS NUMERIC(36,2) ) ) AS 'Check_Stub_Comment2',
    ROW_NUMBER() OVER(ORDER BY SUM(d.mastercurrencyamount) DESC) as rownumber
FROM bb02_fundraiserrevenuestream frs
JOIN bb02_donationline dl ON dl.fundraiserrevenuestreamid = frs.fundraiserrevenuestreamid
JOIN bb02_donation d ON d.donationid = dl.donationid
WHERE frs.fundraiserid = 1869
    AND d.customercampaignname IS NOT NULL
    AND d.customercampaignname != ''
GROUP BY d.CustomerCampaignName
ORDER BY SUM(d.mastercurrencyamount) DESC
```

Since you need to be able to select either the 1st or 2nd row, then wrap it up in a CTE or subquery.

```
WITH topcampaign AS
(
    SELECT TOP 2 customercampaignname + ' $' + CONVERT(varchar(255), CAST(SUM(d.mastercurrencyamount) AS NUMERIC(36,2) ) ) AS 'Check_Stub_Comment2',
        ROW_NUMBER() OVER(ORDER BY SUM(d.mastercurrencyamount) DESC) as rownumber
    FROM bb02_fundraiserrevenuestream frs
    JOIN bb02_donationline dl ON dl.fundraiserrevenuestreamid = frs.fundraiserrevenuestreamid
    JOIN bb02_donation d ON d.donationid = dl.donationid
    WHERE frs.fundraiserid = 1869
        AND d.customercampaignname IS NOT NULL
        AND d.customercampaignname != ''
    GROUP BY d.CustomerCampaignName
    ORDER BY SUM(d.mastercurrencyamount) DESC
)
SELECT * from topcampaign
WHERE rownumber = 1
```

As another possible optimization, I took the CONVERT out of the CTE and put it in the final select. Not sure if that helps much:

```
WITH topcampaign AS
(
    SELECT TOP 2 customercampaignname, SUM(d.mastercurrencyamount) AS amount,
        ROW_NUMBER() OVER(ORDER BY SUM(d.mastercurrencyamount) DESC) as rownumber
    FROM bb02_fundraiserrevenuestream frs
    JOIN bb02_donationline dl ON dl.fundraiserrevenuestreamid = frs.fundraiserrevenuestreamid
    JOIN bb02_donation d ON d.donationid = dl.donationid
    WHERE frs.fundraiserid = 1869
        AND d.customercampaignname IS NOT NULL
        AND d.customercampaignname != ''
    GROUP BY d.CustomerCampaignName
    ORDER BY SUM(d.mastercurrencyamount) DESC
)
SELECT rownumber,
    customercampaignname + ' $' + CONVERT(varchar(255), CAST(amount AS NUMERIC(36,2) ) ) AS 'Check_Stub_Comment2'
FROM topcampaign
WHERE rownumber = 1
```
I don't know if this will run faster, but by my reckoning, this can be simplified to: ``` Select top 1 customercampaignname + ' $' + convert(varchar(255), cast(sum(d.mastercurrencyamount) as numeric(36,2) ) ) As amount from bb02_donation d JOIN bb02_donationline dl on d.donationid = dl.donationid JOIN bb02_fundraiserrevenuestream frs on dl.fundraiserrevenuestreamid = frs.fundraiserrevenuestreamid and frs.fundraiserid = 1869 where d.customercampaignname is not null and d.customercampaignname != '' group by d.CustomerCampaignName order by sum(d.mastercurrencyamount) ```
SQL Server - how can this query be optimized?
[ "", "sql", "sql-server", "subquery", "query-optimization", "" ]
*(In 2013)* I don't know why Python is so weird about this; you can't find it by searching Google very easily, but it should be quite simple. How can I detect 'SPACE', or actually any key? How can I do this:

```
print('You pressed %s' % key)
```

This should be included in the Python core, so please do not link to modules that are not part of core Python.
You could make a little Tkinter app: ``` import Tkinter as tk def onKeyPress(event): text.insert('end', 'You pressed %s\n' % (event.char, )) root = tk.Tk() root.geometry('300x200') text = tk.Text(root, background='black', foreground='white', font=('Comic Sans MS', 12)) text.pack() root.bind('<KeyPress>', onKeyPress) root.mainloop() ```
Use Tkinter; there are a ton of tutorials online for this. Basically, you can create events. Here is a [link](http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm) to a great site! This makes it easy to capture clicks. Also, if you are trying to make a game, Tkinter also has a GUI. Although I wouldn't recommend Python for games at all, it could be a fun experiment. Good luck!
Detect key input in Python
[ "", "python", "input", "tkinter", "keyboard", "" ]
Say I have a variable in python called `my_boolean = False` I want an end result to be: `my_bool = "0"` The only way i can think of doing this is. `my_bool = str(int(my_boolean))`, a double type-casting. Is there a better way of doing this? Disadvantages? Advantages? What happens internally?
You could try ``` my_bool = '01'[my_boolean] ``` --- There seems to be a time difference between your approach and what is above: ``` >>> from timeit import timeit >>> timeit("'01'[b]", "b = False") 0.10460775769296968 >>> timeit("str(int(b))", "b = False") 0.8879351199904466 ``` Is it something to lose sleep over? Definitely not. I'm sure there are people who would call your current approach more Pythonic and prefer that over this. In other words, no, there is nothing wrong with what you're doing.
Never assume the performance of an operation. Profile and benchmark:

```
In [7]: value = False

In [8]: %%timeit bool_dict = {False: '0', True: '1'}
   ...: my_boolean = bool_dict[value]
   ...:
10000000 loops, best of 3: 47.7 ns per loop

In [9]: %timeit my_boolean = str(int(value))
1000000 loops, best of 3: 420 ns per loop

In [10]: %timeit my_boolean = '0' if value is False else '1'
10000000 loops, best of 3: 50 ns per loop

In [11]: %timeit my_boolean = '01'[value]
10000000 loops, best of 3: 52.1 ns per loop
```

As you can see, `str(int(value))` is *much* slower than the rest because function calls have high overhead. Note how the branching operation is mostly equal to the dictionary look-up [try it a few times and you'll see the two versions exchange timings], but *it's more readable, so use it*. I personally find the conditional expression version easier to read than the original `str(int(value))`, even though *there isn't anything inherently wrong with using two conversions*, and in other situations this may be the easier solution. ~~The version `'01'[value]` is the fastest, but I believe you should prefer readability over performance, especially if you did not prove that this conversion is the bottleneck.~~

Note that my benchmarks used an identifier rather than the explicit constant `False`; with the constant, I discovered that this:

```
'01'[False]
```

Is optimized by the interpreter to the expression `"0"`:

```
In [14]: import dis

In [16]: def test():
    ...:     a = '01'[False]

In [17]: dis.dis(test)
  2           0 LOAD_CONST               3 ('0')
              3 STORE_FAST               0 (a)
              6 LOAD_CONST               0 (None)
              9 RETURN_VALUE
```

Hence the benchmark I did before wasn't correct.
Is multiple type-casting in python bad?
[ "", "python", "casting", "" ]
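For what it's worth, the conversions compared in this thread all agree on their output for both boolean values; a quick sanity check:

```python
my_boolean = False

# Three equivalent ways to turn a bool into "0"/"1".
via_double_cast = str(int(my_boolean))
via_conditional = '1' if my_boolean else '0'
via_indexing = '01'[my_boolean]   # bool is an int subclass: False == 0

print(via_double_cast, via_conditional, via_indexing)  # 0 0 0
```

The indexing trick works precisely because `bool` subclasses `int`, so `False` and `True` index positions 0 and 1 of the string.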