I have a couple of strings (each string is a set of words) which have special characters in them. I know that using the `strip()` function we can remove all occurrences of only one specific character from a string. Now, I would like to remove a whole set of special characters (including `!@#%&*()[]{}/?<>`) from them. What is the best way to get these unwanted characters removed from the strings?

```
in-str = "@John, It's a fantastic #week-end%, How *about* () you"
out-str = "John, It's a fantastic week-end, How about you"
```
```
s = "@John, It's a fantastic #week-end%, How about () you"
for c in "!@#%&*()[]{}/?<>":
    s = s.replace(c, "")  # str.replace; the string.replace() function is deprecated
print s
```

prints "John, It's a fantastic week-end, How about you"
The `strip` function removes only leading and trailing characters. For your purpose I would use a python `set` to store your characters, iterate over your input string and create a new string from the characters not present in the `set`. According to another Stack Overflow [article](https://stackoverflow.com/questions/4435169/good-way-to-append-to-a-string) this should be efficient. At the end, just remove double spaces with the clever `" ".join(output_string.split())` construction.

```
char_set = set("!@#%&*()[]{}/?<>")
input_string = "@John, It's a fantastic #week-end%, How about () you"
output_string = ""
for i in range(0, len(input_string)):
    if input_string[i] not in char_set:
        output_string += input_string[i]
output_string = " ".join(output_string.split())
print output_string
```
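A more compact alternative than either answer above is `str.translate` with a deletion table, which removes every unwanted character in a single pass (a Python 3 sketch, not the original poster's code):

```python
# Build a translation table that deletes every unwanted character,
# then collapse any double spaces left behind.
unwanted = "!@#%&*()[]{}/?<>"
table = str.maketrans("", "", unwanted)

in_str = "@John, It's a fantastic #week-end%, How *about* () you"
out_str = " ".join(in_str.translate(table).split())
print(out_str)  # John, It's a fantastic week-end, How about you
```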
Remove extra characters in the string in Python
[ "", "python", "string", "strip", "" ]
I spent the last few days trying to find a way to remove the tiny margins from axes in a 3D plot. I tried `ax.margins(0)` and `ax.autoscale_view('tight')` and other approaches, but these small margins are still there. In particular, I don't like that the bar histograms are elevated, i.e., their bottom is not at the zero level -- see the example image. ![unwanted margins on all axes](https://i.stack.imgur.com/zrITp.png) In gnuplot, I would use "set xyplane at 0". In matplotlib, since there are margins on every axis on both sides, it would be great to be able to control each of them. **Edit:** HYRY's solution below works well, but the 'X' axis gets a grid line drawn over it at Y=0: ![strange axis](https://i.stack.imgur.com/885hw.png)
There is no property or method that can modify these margins. You need to patch the source code. Here is an example:

```
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

### patch start ###
from mpl_toolkits.mplot3d.axis3d import Axis
if not hasattr(Axis, "_get_coord_info_old"):
    def _get_coord_info_new(self, renderer):
        mins, maxs, centers, deltas, tc, highs = self._get_coord_info_old(renderer)
        mins += deltas / 4
        maxs -= deltas / 4
        return mins, maxs, centers, deltas, tc, highs
    Axis._get_coord_info_old = Axis._get_coord_info
    Axis._get_coord_info = _get_coord_info_new
### patch end ###

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
    xs = np.arange(20)
    ys = np.random.rand(20)
    # You can provide either a single color or an array. To demonstrate this,
    # the first bar of each set will be colored cyan.
    cs = [c] * len(xs)
    cs[0] = 'c'
    ax.bar(xs, ys, zs=z, zdir='y', color=cs, alpha=0.8)

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')

plt.show()
```

The result is:

![enter image description here](https://i.stack.imgur.com/YqQgO.png)

**Edit** To change the color of the grid lines:

```
for axis in (ax.xaxis, ax.yaxis, ax.zaxis):
    axis._axinfo['grid']['color'] = 0.7, 1.0, 0.7, 1.0
```

**Edit2** Set X & Y lim:

```
ax.set_ylim3d(-1, 31)
ax.set_xlim3d(-1, 21)
```
I had to tweak the accepted solution slightly, because in my case the x and y axes (but not the z) had an additional margin, which, by printing `mins, maxs, deltas`, turned out to be `deltas * 6.0/11`. Here is the updated patch that worked well in my case.

```
### patch start ###
from mpl_toolkits.mplot3d.axis3d import Axis

def _get_coord_info_new(self, renderer):
    mins, maxs, cs, deltas, tc, highs = self._get_coord_info_old(renderer)
    correction = deltas * [1.0/4 + 6.0/11, 1.0/4 + 6.0/11, 1.0/4]
    mins += correction
    maxs -= correction
    return mins, maxs, cs, deltas, tc, highs

if not hasattr(Axis, "_get_coord_info_old"):
    Axis._get_coord_info_old = Axis._get_coord_info
Axis._get_coord_info = _get_coord_info_new
### patch end ###
```

(I've also changed the patching logic around a bit, so that editing the function and reloading its module now works as expected in Jupyter.)
Removing axes margins in 3D plot
[ "", "python", "matplotlib", "axis", "margins", "matplotlib-3d", "" ]
My question today is whether I am going down the right path for Euler 145 and whether my approach is sorta kinda efficient. I have most of it down; just one of my defs is giving me trouble with `int(str(numb)[:i]) % 2 == 0` for an even check. My code is below; the `if` line in `allEvenDigits` is the issue spot.

```
def reversed(reg):  # to flip the number around
    fliped = str(reg)[::-1];
    return(int(fliped));  # Return it as a int.

def allEvenDigits(numb):  # This is the issue one
    hasEvenNumb = False;
    for i in range(0, len(str(numb))):
        if int(str(numb)[:i])%2 == 0:  # if int of the string numb's char at i is even
            hasEvenNumb = True;  ## return that it is true
            break;  # why go on if we found a even.
    return(hasEvenNumb);

for i in range(1, 1000):  # its 1000 to save a few minutes
    revNumb = reversed(i);
    total = revNumb+i;
    if(allEvenDigits(total)):
        print(i, "+" , revNumb, "=",Total);
```
You can use the built-in function `all()` and use a set to keep track of numbers that have been solved already; for example, if you've solved `36` then there's no reason to solve `63`:

```
seen = set()

def allEvenDigits(numb):
    return all(int(n) % 2 == 0 for n in str(numb))

for i in range(1, 1000):  # its 1000 to save a few minutes
    revNumb = reversed(i)
    total = revNumb + i
    if i not in seen and revNumb not in seen:
        if allEvenDigits(total):
            print(i, "+", revNumb, "=", total)
        seen.add(i)
        seen.add(revNumb)
```

**output:**

```
(1, '+', 1, '=', 2)
(2, '+', 2, '=', 4)
(3, '+', 3, '=', 6)
(4, '+', 4, '=', 8)
(11, '+', 11, '=', 22)
(13, '+', 31, '=', 44)
(15, '+', 51, '=', 66)
(17, '+', 71, '=', 88)
(22, '+', 22, '=', 44)
(24, '+', 42, '=', 66)
(26, '+', 62, '=', 88)
(33, '+', 33, '=', 66)
(35, '+', 53, '=', 88)
(44, '+', 44, '=', 88)
...
```

**help** on `all`:

```
>>> all?
Type:       builtin_function_or_method
String Form:<built-in function all>
Namespace:  Python builtin
Docstring:
all(iterable) -> bool

Return True if bool(x) is True for all values x in the iterable.
If the iterable is empty, return True.
```
You're starting with an empty string when your range is `range(0, len(str(numb)))`. You could solve it with:

```
def allEvenDigits(numb):  # This is the issue one
    hasEvenNumb = False;
    for i in range(1, len(str(numb))):
        if int(str(numb)[:i])%2 == 0:  # if int of the string numb's char at i is even
            hasEvenNumb = True;  ## return that it is true
            break;  # why go on if we found a even.
    return(hasEvenNumb);

>>> allEvenDigits(52)
False
```

It seems, however, that the easier thing to do would be to check each digit on its own:

```
def allEvenDigits(numb):
    hasEvenNumb = True
    for char in str(numb):
        if int(char) % 2 == 0:
            hasEvenNumb = False
            break
    return hasEvenNumb

allEvenDigits(52)
```

Makes it a little more straightforward, and checks only the individual digit rather than a substring.
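For context, the condition Euler 145 actually asks for is that every digit of n + reverse(n) is *odd* (and that reverse(n) has no leading zero). A compact, self-contained sketch of that check, using the `all()` idea from the other answer (Python 3; `is_reversible` is my name, not from the original code):

```python
def is_reversible(n):
    # reverse(n) would have a leading zero if n ends in 0 -- not allowed
    if n % 10 == 0:
        return False
    total = n + int(str(n)[::-1])
    # reversible <=> every digit of the sum is odd
    return all(int(d) % 2 == 1 for d in str(total))

count = sum(1 for n in range(1, 1000) if is_reversible(n))
print(count)  # the problem statement says there are 120 below one thousand
```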
Python: Project Euler 145
[ "", "python", "" ]
I would like to read a serial port with python and print the result with readline, but I'm getting empty lines between all result lines. Can you tell me how I can delete all the empty lines? My code:

```
#!/usr/bin/python
import serial
import time
import sys

ser = serial.Serial("/dev/ttyACM0", 9600)

print ser.readline()
print ser.readline()
print ser.readline()
print ser.readline()
print ser.readline()
```

I tried a few answers from similar questions but still got empty lines. My txt file looks like:

```
T: 59.30% 22.20 8% 11% N

T: 59.80% 22.20 8% 11% N

T: 59.80% 22.20 8% 11% N

T: 59.80% 22.20 8% 11% N
```

but I need a file like:

```
T: 59.30% 22.20 8% 11% N
T: 59.80% 22.20 8% 11% N
T: 59.80% 22.20 8% 11% N
T: 59.80% 22.20 8% 11% N
```

EDIT: Found a way to fix the problem:

```
line = ser.readline().decode('utf-8')[:-2]
```

It works fine now. The `[:-2]` deletes the trailing `\r\n`, and that's all.
Either:

```
import serial

with serial.Serial("/dev/ttyACM0", 9600) as ser:
    for line in ser:
        print line.rstrip('\n')
```

or

```
import serial

with serial.Serial("/dev/ttyACM0", 9600) as ser:
    for line in ser:
        print line,
```
```
while True:
    line = ser.readline()
    if not line.strip():
        continue
    do_something_with(line)
```
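Serial port aside, the filtering itself can be checked on plain strings (a sketch with made-up sample lines, not output from a real device): keep only lines that have real content, and strip the line endings.

```python
# Simulated raw lines as a serial device might deliver them,
# including the blank lines caused by trailing \r\n.
raw = [
    "T: 59.30% 22.20 8% 11% N\r\n",
    "\r\n",
    "T: 59.80% 22.20 8% 11% N\r\n",
    "\n",
]

# Drop whitespace-only lines, strip the line endings from the rest.
cleaned = [line.rstrip("\r\n") for line in raw if line.strip()]
print(cleaned)
```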
Python readline empty lines
[ "", "python", "serial-port", "readline", "" ]
I'd like to do a query for every GroupID (which always come in pairs) in which both entries have a value of 1 for HasData.

```
|GroupID | HasData |
|--------|---------|
|   1    |    1    |
|   1    |    1    |
|   2    |    0    |
|   2    |    1    |
|   3    |    0    |
|   3    |    0    |
|   4    |    1    |
|   4    |    1    |
```

So the result would be:

```
1
4
```

Here's what I'm trying, but I can't seem to get it right. Whenever I do a `GROUP BY` on the GroupID, I only have access to that column in the select list:

```
SELECT GroupID
FROM Table
GROUP BY GroupID, HasData
HAVING SUM(HasData) = 2
```

But I get the following error message because HasData is actually a bit:

```
Operand data type bit is invalid for sum operator.
```

Can I do a count of two where both records are true?
Just exclude those GroupIDs that have a record where HasData = 0:

```
select distinct a.groupID
from table1 a
where not exists (select *
                  from table1 b
                  where b.HasData = 0
                    and b.groupID = a.groupID)
```
You can use the `having` clause to check that all values are 1:

```
select GroupId
from table
group by GroupId
having sum(cast(HasData as int)) = 2
```

That is, simply remove the `HasData` column from the `group by` columns and then check on it.
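The `GROUP BY` / `HAVING` approach is easy to sanity-check without a SQL Server instance, e.g. against an in-memory SQLite database (a sketch: the table name `t` and the integer column type are stand-ins, since SQLite has no `bit` type and `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (GroupID INTEGER, HasData INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [(1, 1), (1, 1), (2, 0), (2, 1), (3, 0), (3, 0), (4, 1), (4, 1)],
)

# Group on GroupID only; the HAVING clause keeps pairs whose sum is 2.
rows = conn.execute(
    "SELECT GroupID FROM t GROUP BY GroupID HAVING SUM(HasData) = 2"
).fetchall()
print(rows)  # the groups where both entries have HasData = 1
```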
Finding records sets with GROUP BY and SUM
[ "", "sql", "group-by", "aggregate-functions", "" ]
I am trying to change a column to `not null` on a 3.5 GB table (SQL Server Express). All rows in the table contain values. I uncheck the `allow null` box and click save. I get: > Unable to modify table. > Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. How can I overcome this?
It might not work directly. You need to do it this way.

First make all the NULL values in your table non-null:

```
UPDATE tblname SET colname = 0 WHERE colname IS NULL
```

Then alter your table:

```
ALTER TABLE tblname ALTER COLUMN colname INTEGER NOT NULL
```

Hope this solves your problem.
You can also increase or override the timeout.

1. In SQL Server Management Studio, click Tools -> Options
2. Expand "Designers" and select "Table and Database Designers" on the left (see pic)
3. From here you have the option to override the timeout or increase it:
   * Increase your "Transaction time-out after:" (see pic) OR
   * Uncheck "Override connection string time-out value for table designer updates:"

![enter image description here](https://i.stack.imgur.com/UsBqb.jpg)

The default timeout is 30 seconds, as you can see. These options are documented on the [MS Support page here](https://support.microsoft.com/en-us/kb/915849).
SQL Server : change column to not null in a very large table
[ "", "sql", "sql-server", "sql-server-2012", "sql-server-express", "" ]
I'm trying to see how many times a player has lost a match at any of his favourite stadiums. I've tried the following, but it is not returning the correct values:

```
select players.name,
       count(case when players.team <> matches.winner
                   and favstadiums.stadium = matches.stadium
             then 1 else null end) as LOSSES
from players
join favstadiums on favstadiums.player = players.name
join matches on favstadiums.stadium = matches.stadium
group by players.name;
```

I've also tried left/right joins, but it makes no difference in the output. Here is the relational diagram of the database for reference:

![enter image description here](https://i.stack.imgur.com/6Ijsz.png)

Any ideas?
Your `join` condition doesn't have the player playing in the stadium. You need to add the condition that the player's team played in the favorite stadium:

```
select players.name,
       SUM(case when players.team <> matches.winner then 1 else 0 end) as Losses
from players
join favstadiums on favstadiums.player = players.name
join matches on favstadiums.stadium = matches.stadium
              and players.team in (matches.home, matches.away)
group by players.name;
```
Try the following:

```
SELECT P.name, COUNT(DISTINCT M.ID) AS Losses
FROM Player P
INNER JOIN favStadiums FS ON P.name = FS.player
INNER JOIN Match M ON (P.team = M.home OR P.team = M.away)
WHERE FS.stadium = M.stadium
  AND M.winner <> P.team
```
Count() with multiple conditions in SQL
[ "", "sql", "" ]
I have an application that has a couple of commands. When you type a certain command, you have to type in additional info about something/someone. That info has to be strictly an integer or a string, depending on the situation. However, whatever you type into Python using raw\_input() is a string, no matter what. So, more specifically: how can I, briefly and without try...except, see whether a variable is made of digits or of other characters?
In my opinion you have two options:

* Just try to convert it to an `int`, but catch the exception:

  ```
  try:
      value = int(value)
  except ValueError:
      pass  # it was a string, not an int.
  ```

  This is the Ask Forgiveness approach.

* Explicitly test if there are only digits in the string:

  ```
  value.isdigit()
  ```

  [`str.isdigit()`](http://docs.python.org/2/library/stdtypes.html#str.isdigit) returns `True` only if all characters in the string are digits (`0`-`9`). The `unicode` / Python 3 `str` type equivalent is [`unicode.isdecimal()`](https://docs.python.org/2/library/stdtypes.html#unicode.isdecimal) / [`str.isdecimal()`](https://docs.python.org/3/library/stdtypes.html#str.isdecimal); only Unicode decimals can be converted to integers, as not all digits have an actual integer value ([U+00B2 SUPERSCRIPT 2](http://codepoints.net/U+00B2) is a digit, but not a decimal, for example).

  This is often called the Ask Permission approach, or Look Before You Leap.

The latter will not detect all valid `int()` values, as whitespace and `+` and `-` are also allowed in `int()` values. The first form will happily accept `' +10 '` as a number, the latter won't. If you expect that the user *normally* will input an integer, use the first form. It is easier (and faster) to ask for forgiveness rather than for permission in that case.
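The two approaches can be compared side by side (a Python 3 sketch; `to_int_or_none` is a made-up helper name illustrating the ask-forgiveness style):

```python
def to_int_or_none(value):
    # Ask-forgiveness: attempt the conversion, catch the failure.
    try:
        return int(value)
    except ValueError:
        return None

# Look-before-you-leap: str.isdigit() accepts only bare digit runs,
# so it rejects the signed/whitespace forms that int() parses fine.
print(to_int_or_none(" +10 "))   # int() tolerates sign and whitespace
print(" +10 ".isdigit())
print("42".isdigit())
```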
If you want to check what it is:

```
>>> isinstance(1, str)
False
>>> isinstance('stuff', str)
True
>>> isinstance(1, int)
True
>>> isinstance('stuff', int)
False
```

If you want to get ints from raw\_input:

```
>>> x = raw_input('enter thing:')
enter thing: 3
>>> try:
...     x = int(x)
... except:
...     pass
>>> isinstance(x, int)
True
```
How to check if a variable is an integer or a string?
[ "", "python", "variables", "python-2.7", "" ]
How do I make something have a small delay in python? I want to display something 3 seconds afterwards and then let the user input something. Here is what I have:

```
print "Think of a number between 1 and 100"
print "Then I shall guess the number"
```

I want a delay here

```
print "I guess", computerguess
raw_input ("Is it lower or higher?")
```
This should work.

```
import time

print "Think of a number between 1 and 100"
print "Then I shall guess the number"
time.sleep(3)
print "I guess", computerguess
raw_input ("Is it lower or higher?")
```
Try this:

```
import time

print "Think of a number between 1 and 100"
print "Then I shall guess the number"
time.sleep(3)
print "I guess", computerguess
raw_input ("Is it lower or higher?")
```

The number `3` indicates the number of seconds to pause. Read [here](http://docs.python.org/2/library/time.html).
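A minimal check that `time.sleep` really pauses for at least the requested interval (using a short delay so it runs quickly; the question's code would use `time.sleep(3)`):

```python
import time

start = time.monotonic()       # monotonic clock is immune to system-time jumps
time.sleep(0.1)
elapsed = time.monotonic() - start
print("slept for about %.1f seconds" % elapsed)
```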
Time delay in python
[ "", "python", "" ]
Is there a way to tell the `tox` test automation tool to use the PyPI mirrors while installing all packages (explicit testing dependencies in `tox.ini` and dependencies from `setup.py`)? For example, `pip install` has a very useful `--use-mirrors` option that adds mirrors to the list of package servers.
Pip can also be configured using [environment variables](https://pip.pypa.io/en/latest/user_guide/#environment-variables), which `tox` lets you [set in the configuration](http://testrun.org/tox/latest//config.html#confval-setenv=MULTI-LINE-LIST):

```
setenv =
    PIP_USE_MIRRORS=...
```

Note that `--use-mirrors` has been deprecated; instead, you can set the `PIP_INDEX_URL` or `PIP_EXTRA_INDEX_URL` environment variables, representing the [`--index-url`](https://pip.pypa.io/en/latest/reference/pip_install/#cmdoption-0) and [`--extra-index-url`](https://pip.pypa.io/en/latest/reference/pip_install/#cmdoption-extra-index-url) command-line options. For example:

```
setenv =
    PIP_EXTRA_INDEX_URL=http://example.org/index
```

would add `http://example.org/index` as an alternative index server, used if the main index doesn't have a package.
Since `indexserver` is [deprecated](https://tox.readthedocs.io/en/latest/config.html#confval-indexserver) and will be removed, and `--use-mirrors` is [deprecated](https://github.com/learning-unlimited/ESP-Website/issues/1758) as well, you can use install\_command (in your environment section):

```
[testenv:my_env]
install_command = pip install --index-url=https://my.index-mirror.com --trusted-host=my.index-mirror.com {opts} {packages}
```
How to tell tox to use PyPI mirrors for installing packages?
[ "", "python", "testing", "pypi", "tox", "" ]
I've been trying to get django-allauth working for a couple of days now and I finally found out what was going on. Instead of loading the `base.html` template that installs with django-allauth, the app loads the `base.html` file that I use for the rest of my website. How do I tell django-allauth to use the `base.html` template in the `virtualenv/lib/python2.7/site-packages/django-allauth` directory instead of my `project/template` directory?
Unless called directly, your `base.html` is an extension of the templates that you define. For example, if you render a template called `Page.html` - at the top you will have `{% extends "base.html" %}`. When defined as above, `base.html` is located in the path that you defined in your `settings.py` under `TEMPLATE_DIRS = ()` - which, from your description, is defined as `project/template`. Your best bet is to copy the django-allauth `base.html` file to the defined `TEMPLATE_DIRS` location, rename it to `allauthbase.html`, then extend your templates to include it instead of your default base via `{% extends "allauthbase.html" %}`. Alternatively you could add a subfolder to your template location like `project/template/allauth`, place the allauth `base.html` there, and then use `{% extends "allauth/base.html" %}`.
I had the opposite problem: I was trying to use my own `base.html` file, but my Django project was grabbing the `django-allauth` version of `base.html`. It turns out that the order in which you define `INSTALLED_APPS` in `settings.py` affects how templates are rendered. In order to have **my** `base.html` render instead of the one defined in `django-allauth`, I needed to define `INSTALLED_APPS` as the following:

```
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # custom
    'common',
    'users',
    'app',

    # allauth
    'django.contrib.sites',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
]

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'static'),
]
```
Django-allauth loads wrong base.html template
[ "", "python", "django", "django-allauth", "" ]
I'm using Oracle 10g and I have a question for you. Is it possible to "insert" a subquery into a `LIKE()` operator?

Example: `SELECT * FROM users u WHERE u.user_name LIKE ( subquery here );`

Here is what I tried:

```
SELECT *
FROM dictionary
WHERE TABLE_NAME LIKE (Select d.TABLE_NAME
                       from dictionary d
                       where d.COMMENTS LIKE '%table%')
WHERE ROWNUM < 100;
```

It tells me that my query doesn't work -> `ORA-00933: la commande SQL ne se termine pas correctement` (the SQL command does not end correctly), and the final `WHERE` is rejected. I know this is a silly query, but it's just a question I'm looking for an answer to =)
Yeah, why not?

```
SELECT *
FROM users u
WHERE u.user_name LIKE (select '%arthur%' from dual);
```

[Example at SQL Fiddle.](http://sqlfiddle.com/#!4/00016/2/0)
I am guessing you want to do this because you want to compare multiple values at the same time. Using a subquery (as in your example) won't solve that problem. Here is another approach:

```
select *
from users u
where exists (<subquery here> where u.user_name like <whatever>)
```

Or using an explicit join:

```
select distinct u.*
from users u join
     (subquery here) s
     on u.user_name like s.<whatever>
```
Sql, Subquery into a LIKE() operator
[ "", "sql", "oracle10g", "" ]
What exactly is the difference between numpy `vstack` and `column_stack`. Reading through the documentation, it looks as if `column_stack` is an implementation of `vstack` for 1D arrays. Is it a more efficient implementation? Otherwise, I cannot find a reason for just having `vstack`.
I think the following code illustrates the difference nicely:

```
>>> np.vstack(([1,2,3],[4,5,6]))
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.column_stack(([1,2,3],[4,5,6]))
array([[1, 4],
       [2, 5],
       [3, 6]])
>>> np.hstack(([1,2,3],[4,5,6]))
array([1, 2, 3, 4, 5, 6])
```

I've included `hstack` for comparison as well. Notice how `column_stack` stacks along the second dimension whereas `vstack` stacks along the first dimension. The equivalent to `column_stack` is the following `hstack` command:

```
>>> np.hstack(([[1],[2],[3]],[[4],[5],[6]]))
array([[1, 4],
       [2, 5],
       [3, 6]])
```

I hope we can agree that `column_stack` is more convenient.
`hstack` stacks horizontally, `vstack` stacks vertically:

![](https://i.stack.imgur.com/hSM5G.png)

The problem with `hstack` is that when you append a column you need to convert it from a 1d-array to a 2d-column first, because a 1d array is normally interpreted as a vector-row in a 2d context in numpy:

```
a = np.ones((2, 2))    # 2d, shape = (2, 2)
b = np.array([0, 0])   # 1d, shape = (2,)

hstack((a, b)) -> dimensions mismatch error
```

So either `hstack((a, b[:, None]))` or `column_stack((a, b))`:

![](https://i.stack.imgur.com/1QBr4.png)

where `None` serves as a shortcut for `np.newaxis`.

If you're stacking two vectors, you've got three options:

![](https://i.stack.imgur.com/dSZuK.png)

As for the (undocumented) `row_stack`, it is just a synonym of `vstack`, as a 1d array is ready to serve as a matrix row without extra work.

The case of 3D and above proved to be too huge to fit in the answer, so I've included it in the article called [Numpy Illustrated](https://medium.com/better-programming/numpy-illustrated-the-visual-guide-to-numpy-3b1d4976de1d?source=friends_link&sk=57b908a77aa44075a49293fa1631dd9b).
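The shape difference between the three functions is easy to verify directly (a quick check, assuming NumPy is installed):

```python
import numpy as np

a, b = np.array([1, 2, 3]), np.array([4, 5, 6])

v = np.vstack((a, b))        # inputs become rows    -> shape (2, 3)
c = np.column_stack((a, b))  # inputs become columns -> shape (3, 2)
h = np.hstack((a, b))        # plain concatenation   -> shape (6,)

print(v.shape, c.shape, h.shape)
```

Note that for 1-D inputs `column_stack((a, b))` is simply the transpose of `vstack((a, b))`.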
numpy vstack vs. column_stack
[ "", "python", "numpy", "" ]
I have a DataFrame like this:

```
                       OPEN     HIGH      LOW    CLOSE         VOL
2012-01-01 19:00:00  449000   449000   449000   449000  1336303000
2012-01-01 20:00:00     NaN      NaN      NaN      NaN         NaN
2012-01-01 21:00:00     NaN      NaN      NaN      NaN         NaN
2012-01-01 22:00:00     NaN      NaN      NaN      NaN         NaN
2012-01-01 23:00:00     NaN      NaN      NaN      NaN         NaN
...
                         OPEN      HIGH       LOW     CLOSE          VOL
2013-04-24 14:00:00  11700000  12000000  11600000  12000000  20647095439
2013-04-24 15:00:00  12000000  12399000  11979000  12399000  23997107870
2013-04-24 16:00:00  12399000  12400000  11865000  12100000   9379191474
2013-04-24 17:00:00  12300000  12397995  11850000  11850000   4281521826
2013-04-24 18:00:00  11850000  11850000  10903000  11800000  15546034128
```

I need to fill `NaN` according to this rule:

When OPEN, HIGH, LOW, CLOSE are NaN,

* set VOL to 0
* set OPEN, HIGH, LOW, CLOSE to the previous candle's CLOSE value

else keep NaN.
Here's how to do it via masking.

Simulate a frame with some holes (A is your 'close' field):

```
In [20]: df = DataFrame(randn(10,3),index=date_range('20130101',periods=10,freq='min'),
                        columns=list('ABC'))

In [21]: df.iloc[1:3,:] = np.nan

In [22]: df.iloc[5:8,1:3] = np.nan

In [23]: df
Out[23]:
                            A         B         C
2013-01-01 00:00:00 -0.486149  0.156894 -0.272362
2013-01-01 00:01:00       NaN       NaN       NaN
2013-01-01 00:02:00       NaN       NaN       NaN
2013-01-01 00:03:00  1.788240 -0.593195  0.059606
2013-01-01 00:04:00  1.097781  0.835491 -0.855468
2013-01-01 00:05:00  0.753991       NaN       NaN
2013-01-01 00:06:00 -0.456790       NaN       NaN
2013-01-01 00:07:00 -0.479704       NaN       NaN
2013-01-01 00:08:00  1.332830  1.276571 -0.480007
2013-01-01 00:09:00 -0.759806 -0.815984  2.699401
```

The rows that are all NaN:

```
In [24]: mask_0 = pd.isnull(df).all(axis=1)

In [25]: mask_0
Out[25]:
2013-01-01 00:00:00    False
2013-01-01 00:01:00     True
2013-01-01 00:02:00     True
2013-01-01 00:03:00    False
2013-01-01 00:04:00    False
2013-01-01 00:05:00    False
2013-01-01 00:06:00    False
2013-01-01 00:07:00    False
2013-01-01 00:08:00    False
2013-01-01 00:09:00    False
Freq: T, dtype: bool
```

The rows where we want to propagate A:

```
In [26]: mask_fill = pd.isnull(df['B']) & pd.isnull(df['C'])

In [27]: mask_fill
Out[27]:
2013-01-01 00:00:00    False
2013-01-01 00:01:00     True
2013-01-01 00:02:00     True
2013-01-01 00:03:00    False
2013-01-01 00:04:00    False
2013-01-01 00:05:00     True
2013-01-01 00:06:00     True
2013-01-01 00:07:00     True
2013-01-01 00:08:00    False
2013-01-01 00:09:00    False
Freq: T, dtype: bool
```

Propagate first:

```
In [28]: df.loc[mask_fill,'C'] = df['A']

In [29]: df.loc[mask_fill,'B'] = df['A']
```

Fill the 0's:

```
In [30]: df.loc[mask_0] = 0
```

Done:

```
In [31]: df
Out[31]:
                            A         B         C
2013-01-01 00:00:00 -0.486149  0.156894 -0.272362
2013-01-01 00:01:00  0.000000  0.000000  0.000000
2013-01-01 00:02:00  0.000000  0.000000  0.000000
2013-01-01 00:03:00  1.788240 -0.593195  0.059606
2013-01-01 00:04:00  1.097781  0.835491 -0.855468
2013-01-01 00:05:00  0.753991  0.753991  0.753991
2013-01-01 00:06:00 -0.456790 -0.456790 -0.456790
2013-01-01 00:07:00 -0.479704 -0.479704 -0.479704
2013-01-01 00:08:00  1.332830  1.276571 -0.480007
2013-01-01 00:09:00 -0.759806 -0.815984  2.699401
```
Since neither of the other two answers work, here's a complete answer. I'm testing two methods here. The first is based on working4coin's comment on hd1's answer and the second is a slower, pure python implementation. It seems obvious that the python implementation should be slower, but I decided to time the two methods to make sure and to quantify the results.

```
def nans_to_prev_close_method1(data_frame):
    # volume should always be 0 (if there were no trades in this interval)
    data_frame['volume'] = data_frame['volume'].fillna(0.0)
    # ie pull the last close into this close
    data_frame['close'] = data_frame['close'].fillna(method='pad')
    # now copy the close that was pulled down from the last timestep
    # into this row, across into o/h/l
    data_frame['open'] = data_frame['open'].fillna(data_frame['close'])
    data_frame['low'] = data_frame['low'].fillna(data_frame['close'])
    data_frame['high'] = data_frame['high'].fillna(data_frame['close'])
```

Method 1 does most of the heavy lifting in c (in the pandas code), and so should be quite fast. The slow, python approach (method 2) is shown below:

```
def nans_to_prev_close_method2(data_frame):
    prev_row = None
    for index, row in data_frame.iterrows():
        if np.isnan(row['open']):  # row.isnull().any():
            pclose = prev_row['close']  # assumes first row has no nulls!!
            row['open'] = pclose
            row['high'] = pclose
            row['low'] = pclose
            row['close'] = pclose
            row['volume'] = 0.0
        prev_row = row
```

Testing the timing on both of them:

```
df = trades_to_ohlcv(PATH_TO_RAW_TRADES_CSV, '1s')  # splits raw trades into secondly candles
df2 = df.copy()
wrapped1 = wrapper(nans_to_prev_close_method1, df)
wrapped2 = wrapper(nans_to_prev_close_method2, df2)

print("method 1: %.2f sec" % timeit.timeit(wrapped1, number=1))
print("method 2: %.2f sec" % timeit.timeit(wrapped2, number=1))
```

The results were:

```
method 1: 0.46 sec
method 2: 151.82 sec
```

Clearly method 1 is far faster (approx 330 times faster).
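Method 1's fill logic can be reproduced on a tiny made-up OHLCV frame (a sketch with invented values; `ffill()` is the modern spelling of `fillna(method='pad')`, which newer pandas versions deprecate):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "open":   [1.0, np.nan, 2.0],
    "high":   [1.5, np.nan, 2.5],
    "low":    [0.5, np.nan, 1.5],
    "close":  [1.2, np.nan, 2.2],
    "volume": [10.0, np.nan, 20.0],
})

df["volume"] = df["volume"].fillna(0.0)   # no trades in the gap -> zero volume
df["close"] = df["close"].ffill()         # pull the last close forward
for col in ("open", "high", "low"):       # copy it across into o/h/l
    df[col] = df[col].fillna(df["close"])

print(df.loc[1].tolist())  # [1.2, 1.2, 1.2, 1.2, 0.0]
```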
Fill NaN in candlestick OHLCV data
[ "", "python", "pandas", "" ]
I have 3 MySQL tables:

**[block\_value]**

* id\_block\_value
* file\_id

**[metadata]**

* id\_metadata
* metadata\_name

**[metadata\_value]**

* meta\_id
* value
* blockvalue\_id

In these tables, there are pairs: `metadata_name` = `value`, and lists of pairs are put in blocks (`id_block_value`).

**(A)** If I want height = 1080:

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "height" and value = "1080");

+---------+
| file_id |
+---------+
|      21 |
|      22 |
  (...)
|    6962 |
  (...)
|    8146 |
|    8147 |
+---------+
794 rows in set (0.06 sec)
```

**(B)** If I want file extension = mpeg:

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "file extension" and value = "mpeg");

+---------+
| file_id |
+---------+
|    6889 |
|    6898 |
|    6962 |
+---------+
3 rows in set (0.06 sec)
```

*BUT*, if I want:

* A and B
* A or B
* A and not B

then I don't know what is best. For `A or B`, I tried `A union B`, which seems to do the trick.

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "height" and value = "1080")
UNION
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "file extension" and value = "mpeg");

+---------+
| file_id |
+---------+
|      21 |
|      22 |
|      34 |
  (...)
|    6889 |
|    6898 |
+---------+
796 rows in set (0.13 sec)
```

For `A and B`, since there is no `intersect` in MySQL, I tried `A and file_id in (B)`, but look at the performance (over 4 minutes)...

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "height" and value = "1080")
  and file_id in (
    SELECT DISTINCT file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "file extension" and value = "mpeg"));

+---------+
| file_id |
+---------+
|    6962 |
+---------+
1 row in set (4 min 36.22 sec)
```

I tried `B and file_id in (A)` too, which is a lot better, but I will never know which one to put first.

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "file extension" and value = "mpeg")
  and file_id in (
    SELECT DISTINCT file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "height" and value = "1080"));

+---------+
| file_id |
+---------+
|    6962 |
+---------+
1 row in set (0.75 sec)
```

So... what do I do now? Is there any better way to do boolean operations? Any tip? Did I miss something?

**EDIT**: what the data looks like:

This database contains a row in the `FILE` table for each audio/video file inserted:

* 10, /path/to/file.ts
* 11, /path/to/file2.mpeg

There is a row in the `METADATA` table for each potential piece of information:

* 301, height
* 302, file extension

Then, a row in the `BLOCK` table defines a container:

* 101, Video
* 102, Audio
* 104, General

A file can have several blocks of metadata; the `BLOCK_VALUE` table contains instances of blocks:

* 402, 101, 10 // Video 1
* 403, 101, 10 // Video 2
* 404, 101, 10 // Video 3
* 405, 102, 10 // Audio
* 406, 104, 10 // General

In this example, file 10 has 5 blocks: 3 Video (101) + 1 Audio (102) + 1 General (104).

Values are stored in `METADATA_VALUE`:

* 302, 406, "ts" // file extension, General
* 301, 402, "1080" // height, Video 1
* 301, 403, "720" // height, Video 2
* 301, 404, "352" // height, Video 3
I'm opening a new post only to keep the "correct" solution tidy. Ok, sorry, it seems that I was making the wrong assumption: I never thought about two blocks being defined exactly the same way. So, since I'm a copycat, and I like my getting-the-AND-from-OR solution (:P), I got to these two solutions.

ORing: I like Chris's solution better...

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "height" and value = "1080")
   OR (metadata_name = "file extension" and value = "mpeg")
```

ANDing: I'll use your ORing version (the one with the UNION ALL):

```
SELECT FILE_ID FROM (
    SELECT DISTINCT 1, file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "height" and value = "1080")
    UNION ALL
    SELECT DISTINCT 2, file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "file extension" and value = "mpeg")
) IHATEAND
GROUP BY FILE_ID
HAVING COUNT(1)>1
```

Which gives:

```
+---------+
| FILE_ID |
+---------+
|    6962 |
+---------+
1 row in set (0.24 sec)
```

It should be a little slower than the ORing, judging by the performance you pasted and mine (I am 3 times as slow, time to upgrade -.-), but still significantly faster than the previous queries ;)

Anyway, how does the ANDing work? Put pretty simply, it just runs the two separate queries and names the records according to the branch they come from, then counts the different file ids coming from them.

UPDATE: another way of doing it without having to "name" the branches:

```
SELECT FILE_ID FROM (
    SELECT file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "height" and value = "1080")
    GROUP BY FILE_ID
    UNION ALL
    SELECT file_id
    FROM metadata_value MV
    INNER JOIN metadata M ON MV.meta_id = M.id_metadata
    INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
    WHERE (metadata_name = "file extension" and value = "mpeg")
    GROUP BY FILE_ID
) IHATEAND
GROUP BY FILE_ID
HAVING COUNT(1)>1
```

Here the results are the same (and performance as well), and I'm exploiting the fact that while UNION sorts the results and removes the duplicates, UNION ALL does not... which is perfect since I don't want them removed (and in general UNION ALL is also faster than UNION :) ). This way I can forget about naming.
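The counting trick behind the ANDing version can be reproduced end to end with an in-memory SQLite database. Note this sketch collapses the question's three-table join into a single denormalized table for brevity, and the sample rows are made up apart from file 6962:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE file_meta (file_id INTEGER, metadata_name TEXT, value TEXT);
-- file 6962 matches both conditions, the others only one
INSERT INTO file_meta VALUES
  (6962, 'height', '1080'), (6962, 'file extension', 'mpeg'),
  (7000, 'height', '1080'),
  (7001, 'file extension', 'mpeg');
""")

# AND = run both branches, then keep file_ids that appear in more than one
and_query = """
SELECT file_id FROM (
    SELECT DISTINCT 1 AS branch, file_id FROM file_meta
    WHERE metadata_name = 'height' AND value = '1080'
    UNION ALL
    SELECT DISTINCT 2, file_id FROM file_meta
    WHERE metadata_name = 'file extension' AND value = 'mpeg'
) AS both_branches
GROUP BY file_id
HAVING COUNT(1) > 1
"""
rows = [r[0] for r in conn.execute(and_query)]
print(rows)
```

The `DISTINCT` inside each branch is what keeps a file that matches the same condition several times (e.g. several Video blocks) from being counted twice.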
For "OR" why not try it without the UNION... am I missing something?

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE (metadata_name = "height" and value = "1080")
   OR (metadata_name = "file extension" and value = "mpeg")
```

For "AND", use an inner join on the metadata table twice to ensure to get only file\_id's that meet both conditions...

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
   AND (M.metadata_name = "height" and MV.value = "1080")
INNER JOIN metadata M2 ON MV.meta_id = M2.id_metadata
   AND (M2.metadata_name = "file extension" and MV.value = "mpeg")
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
```

"A" and not "B": use a left join rather than an inner join on the "B" condition. Add a WHERE clause specifying that you expect no results for "B":

```
SELECT DISTINCT file_id
FROM metadata_value MV
INNER JOIN metadata M ON MV.meta_id = M.id_metadata
   AND (M.metadata_name = "height" and MV.value = "1080")
LEFT JOIN metadata M2 ON MV.meta_id = M2.id_metadata
   AND (M2.metadata_name = "file extension" and MV.value = "mpeg")
INNER JOIN block_value BV ON MV.blockvalue_id = BV.id_block_value
WHERE M2.id_metadata is NULL
```
Boolean operations on mysql results
[ "", "mysql", "sql", "merge", "intersect", "booleanquery", "" ]
I have two dictionaries, dict1 and dict2. dict2 is always of the same length, but dict1 varies in length. Both dictionaries are as follows:

```
dict2 = {"name":"martin","sex":"male","age":"97","address":"blablabla"}
dict1 = {"name":"falak", "sex":"female"}
```

I want to create a third dictionary that is based on both dict1 and dict2. dict3 will have all values of dict2, but every key that also exists in dict1 will take dict1's value. Here is the resulting dict3:

```
dict3 = {"name":"falak","sex":"female","age":"97","address":"blablabla"}
```

I can do it with multiple if statements, but I would like a smarter way. Could someone please guide me?
Have you tried:

```
dict3 = dict(dict2, **dict1)
```

Or:

```
dict3 = dict2.copy()
dict3.update(dict1)
```
```
import copy

dict3 = copy.copy(dict2)
dict3.update(dict1)
```
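All of the snippets above give dict1's entries precedence over dict2's, which is exactly the overriding behavior the question asks for. A quick check with the question's data:

```python
dict2 = {"name": "martin", "sex": "male", "age": "97", "address": "blablabla"}
dict1 = {"name": "falak", "sex": "female"}

# dict(dict2, **dict1): build from dict2, then override with dict1's keys
merged_a = dict(dict2, **dict1)

# copy-then-update: same result, and works even when dict1 has non-string keys
merged_b = dict2.copy()
merged_b.update(dict1)

expected = {"name": "falak", "sex": "female", "age": "97", "address": "blablabla"}
print(merged_a == merged_b == expected)
```

Neither approach modifies dict1 or dict2 in place, so both inputs survive the merge unchanged.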
check two dictionaries that have similar keys but different values
[ "", "python", "dictionary", "" ]
I have a transdate column of varchar2 type which has the following entries:

```
01/02/2012
01/03/2012
```

etc. I converted it into date format in another column using the to\_date function. This is the format I got:

```
01-JAN-2012
03-APR-2012
```

When I try to extract the week number, I'm getting all null values:

```
select to_char(to_date(TRANSDATE), 'w') as weekno from tablename;
```

```
null
null
```

How do I get the week number from a date in the above format?
After converting your `varchar2` date to a true `date` datatype, convert back to `varchar2` with the desired mask:

```
to_char(to_date('01/02/2012','MM/DD/YYYY'),'WW')
```

If you want the week number in a `number` datatype, you can wrap the statement in `to_number()`:

```
to_number(to_char(to_date('01/02/2012','MM/DD/YYYY'),'WW'))
```

However, you have [several week number options](https://www.techonthenet.com/oracle/functions/to_date.php) to consider:

> | Parameter | Explanation |
> | --- | --- |
> | `WW` | Week of year (1-53) where week 1 starts on the first day of the year and continues to the seventh day of the year. |
> | `W` | Week of month (1-5) where week 1 starts on the first day of the month and ends on the seventh. |
> | `IW` | Week of year (1-52 or 1-53) based on the ISO standard. |

(See also [Oracle 19 documentation on datetime format elements](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Format-Models.html#GUID-EAB212CF-C525-4ED8-9D3F-C76D08EEBC7A).)
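The `WW` vs `IW` distinction in the table above can be mimicked in Python to see why the two masks disagree near year boundaries (the dates are picked for illustration, not taken from the question):

```python
from datetime import date

def ww_week(d: date) -> int:
    # Oracle-style WW: week 1 is days 1-7 of the year, week 2 is days 8-14, ...
    return (d.timetuple().tm_yday - 1) // 7 + 1

def iw_week(d: date) -> int:
    # ISO week, like Oracle's IW mask: weeks start on Monday, week 1 contains Jan 4
    return d.isocalendar()[1]

d1 = date(2012, 1, 1)   # a Sunday: WW says week 1, ISO assigns it to 2011's last week
d2 = date(2012, 1, 2)   # a Monday: both masks agree on week 1
print(ww_week(d1), iw_week(d1))
print(ww_week(d2), iw_week(d2))
```

So if downstream reporting expects ISO semantics, `IW` is the mask to use; `WW` simply chops the year into 7-day slices starting at January 1.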
Try replacing 'w' with 'iw'. For example:

```
SELECT to_char(to_date(TRANSDATE, 'dd-mm-yyyy'), 'iw') as weeknumber from YOUR_TABLE;
```
How to extract week number in sql
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
My table 1 is:

## T1

```
col1  col2
C1    john
C2    alex
C3    piers
C4    sara
```

and table 2 is:

## T2

```
col1  col2
R1    C1,C2,C4
R2    C3,C4
R3    C1,C4
```

How can I get this result?

## query result

```
col1  col2
R1    john,alex,sara
R2    piers,sara
R3    john,sara
```

Please help me.
Ideally, your best solution would be to normalize Table2 so you are not storing a comma-separated list. Once you have this data normalized, then you can easily query the data. The new table structure could be similar to this:

```
CREATE TABLE T1
(
  [col1] varchar(2),
  [col2] varchar(5),
  constraint pk1_t1 primary key (col1)
);

INSERT INTO T1 ([col1], [col2])
VALUES
  ('C1', 'john'),
  ('C2', 'alex'),
  ('C3', 'piers'),
  ('C4', 'sara');

CREATE TABLE T2
(
  [col1] varchar(2),
  [col2] varchar(2),
  constraint pk1_t2 primary key (col1, col2),
  constraint fk1_col2 foreign key (col2) references t1 (col1)
);

INSERT INTO T2 ([col1], [col2])
VALUES
  ('R1', 'C1'),
  ('R1', 'C2'),
  ('R1', 'C4'),
  ('R2', 'C3'),
  ('R2', 'C4'),
  ('R3', 'C1'),
  ('R3', 'C4');
```

Normalizing the tables would make it much easier for you to query the data by joining the tables:

```
select t2.col1, t1.col2
from t2
inner join t1
  on t2.col2 = t1.col1
```

See [Demo](http://sqlfiddle.com/#!3/be97f/8)

Then if you wanted to display the data as a comma-separated list, you could use `FOR XML PATH` and `STUFF`:

```
select distinct t2.col1,
  STUFF(
         (SELECT distinct ', ' + t1.col2
          FROM t1
          inner join t2 t
            on t1.col1 = t.col2
          where t2.col1 = t.col1
          FOR XML PATH ('')), 1, 1, '') col2
from t2;
```

See [Demo](http://sqlfiddle.com/#!3/be97f/9).

If you are not able to normalize the data, then there are several things that you can do. First, you could create a split function that will convert the data stored in the list into rows that can be joined on. The split function would be similar to this:

```
CREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (items varchar(MAX))
as
begin
    declare @idx int
    declare @slice varchar(8000)

    select @idx = 1
    if len(@String)<1 or @String is null return

    while @idx!= 0
    begin
        set @idx = charindex(@Delimiter,@String)
        if @idx!=0
            set @slice = left(@String,@idx - 1)
        else
            set @slice = @String

        if(len(@slice)>0)
            insert into @temptable(Items) values(@slice)

        set @String = right(@String,len(@String) - @idx)
        if len(@String) = 0 break
    end
    return
end;
```

When you use the split function, you can either leave the data in the multiple rows or you can concatenate the values back into a comma-separated list:

```
;with cte as
(
  select c.col1, t1.col2
  from t1
  inner join
  (
    select t2.col1, i.items col2
    from t2
    cross apply dbo.split(t2.col2, ',') i
  ) c
    on t1.col1 = c.col2
)
select distinct c.col1,
  STUFF(
         (SELECT distinct ', ' + c1.col2
          FROM cte c1
          where c.col1 = c1.col1
          FOR XML PATH ('')), 1, 1, '') col2
from cte c
```

See [Demo](http://sqlfiddle.com/#!3/e1fc4/6).

A final way that you could get the result is by applying `FOR XML PATH` directly:

```
select col1,
  (
    select ', '+t1.col2
    from t1
    where ','+t2.col2+',' like '%,'+cast(t1.col1 as varchar(10))+',%'
    for xml path(''), type
  ).value('substring(text()[1], 3)', 'varchar(max)') as col2
from t2;
```

See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/e1fc4/7)
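The normalize-then-join idea carries over outside SQL Server too. As a sketch, here is the same join run against SQLite, whose `group_concat` plays the role of the `STUFF`/`FOR XML PATH` trick above (sample data from the question; note `group_concat` does not guarantee element order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (col1 TEXT PRIMARY KEY, col2 TEXT);
INSERT INTO T1 VALUES ('C1','john'),('C2','alex'),('C3','piers'),('C4','sara');

-- normalized: one row per (row-id, code) pair instead of a CSV column
CREATE TABLE T2 (col1 TEXT, col2 TEXT);
INSERT INTO T2 VALUES ('R1','C1'),('R1','C2'),('R1','C4'),
                      ('R2','C3'),('R2','C4'),
                      ('R3','C1'),('R3','C4');
""")

rows = conn.execute("""
    SELECT T2.col1, group_concat(T1.col2, ',')
    FROM T2
    JOIN T1 ON T1.col1 = T2.col2
    GROUP BY T2.col1
    ORDER BY T2.col1
""").fetchall()
print(rows)
```

Once the many-to-many relationship is stored one pair per row, the "rebuild the CSV" step becomes a one-line aggregate instead of string surgery.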
Here's a way of splitting the data without a function, then using the standard `XML PATH` method for getting the CSV list:

```
with CTE as
(
    select T2.col1
         , T1.col2
    from T2
    inner join T1
        on charindex(',' + T1.col1 + ',', ',' + T2.col2 + ',') > 0
)
select T2.col1
     , col2 = stuff(
           (
               select ',' + CTE.col2
               from CTE
               where T2.col1 = CTE.col1
               for xml path('')
           )
           , 1
           , 1
           , ''
       )
from T2
```

[SQL Fiddle with demo](http://sqlfiddle.com/#!3/45e81/7).

As has been mentioned elsewhere in this question, it is hard to query this sort of denormalised data in any sort of efficient manner, so your first priority should be to investigate updating the table structure, but this will at least allow you to get the results you require.
join comma delimited data column
[ "", "sql", "sql-server", "t-sql", "" ]
[EDIT 00]: I've edited the post several times, and now even the title; please read below.

I just learned about the format string method, and its use with dictionaries, like the ones provided by `vars()`, `locals()` and `globals()`, for example:

```
name = 'Ismael'
print 'My name is {name}.'.format(**vars())
```

But I want to do:

```
name = 'Ismael'
print 'My name is {name}.' # Similar to ruby
```

So I came up with this:

```
def mprint(string='', dictionary=globals()):
    print string.format(**dictionary)
```

You can interact with the code here: <http://labs.codecademy.com/BA0B/3#:workspace>

Finally, what I would love to do is to have the function in another file, named `my_print.py`, so I could do:

```
from my_print import mprint

name= 'Ismael'
mprint('Hello! My name is {name}.')
```

But as it is right now, there is a problem with the scopes: how could I get the main module namespace as a dictionary from inside the imported mprint function (not the one from `my_print.py`)?

I hope I made myself understood; if not, try importing the function from another module. (The traceback is in the link.)

It's accessing the `globals()` dict from `my_print.py`, but of course the variable name is not defined in that scope. Any ideas of how to accomplish this?

The function works if it's defined in the same module, but notice how I must use `globals()`, because otherwise I would only get a dictionary with the values within the `mprint()` scope.

I have tried using nonlocal and dot notation to access the main module variables, but I still can't figure it out.

---

[EDIT 01]: I think I've figured out a solution:

In my\_print.py:

```
def mprint(string='',dictionary=None):
    if dictionary is None:
        import sys
        caller = sys._getframe(1)
        dictionary = caller.f_locals
    print string.format(**dictionary)
```

In test.py:

```
from my_print import mprint

name = 'Ismael'
country = 'Mexico'
languages = ['English', 'Spanish']

mprint("Hello! My name is {name}, I'm from {country}\n"
       "and I can speak {languages[1]} and {languages[0]}.")
```

It prints:

```
Hello! My name is Ismael, I'm from Mexico
and I can speak Spanish and English.
```

What do you think guys? That was a difficult one for me! I like it, much more readable for me.

---

[EDIT 02]: I've made a module with an `interpolate` function, an `Interpolate` class and an attempt at an `interpolate` class method analogous to the function. It has a small test suite and it's documented! I'm stuck with the method implementation, I don't get it. Here's the code: <http://pastebin.com/N2WubRSB>

What do you think guys?

---

[EDIT 03]: Ok, I have settled with just the `interpolate()` function for now.

In `string_interpolation.py`:

```
import sys

def get_scope(scope):
    scope = scope.lower()
    caller = sys._getframe(2)
    options = ['l', 'local', 'g', 'global']
    if scope not in options[:2]:
        if scope in options[2:]:
            return caller.f_globals
        else:
            raise ValueError('invalid mode: {0}'.format(scope))
    return caller.f_locals

def interpolate(format_string=str(),sequence=None,scope='local',returns=False):
    if type(sequence) is str:
        scope = sequence
        sequence = get_scope(scope)
    else:
        if not sequence:
            sequence = get_scope(scope)
    format = 'format_string.format(**sequence)'
    if returns is False:
        print eval(format)
    elif returns is True:
        return eval(format)
```

Thanks again guys! Any opinions?

---

[EDIT 04]: This is my last version; it has a test, docstrings and describes some limitations I've found: <http://pastebin.com/ssqbbs57>

You can quickly test the code here: <http://labs.codecademy.com/BBMF#:workspace>

And clone from the git repo here: <https://github.com/Ismael-VC/python_string_interpolation.git>
Language Design Is Not Just Solving Puzzles: `;)` <http://www.artima.com/forums/flat.jsp?forum=106&thread=147358>

**Edit:** [PEP-0498](https://www.python.org/dev/peps/pep-0498/) solves this issue!

The `Template` class from the `string` module also does what I need (but is more similar to the string `format` method). In the end it also has the readability I seek, it has the recommended explicitness, it's in the Standard Library, and it can also be easily customized and extended.

<http://docs.python.org/2/library/string.html?highlight=template#string.Template>

```
from string import Template

name = 'Renata'
place = 'hospital'
job = 'Dr.'
how = 'glad'

header = '\nTo Ms. {name}:'

letter = Template("""
Hello Ms. $name.

I'm glad to inform, you've been accepted in our $place,
and $job Red will ${how}ly receive you tomorrow morning.
""")

print header.format(**vars())
print letter.substitute(vars())
```

The funny thing is that now I'm getting more fond of using `{}` instead of `$`, and I still like the `string_interpolation` module I came up with, because it's less typing than either one in the long run. LOL!

Run the code here: <http://labs.codecademy.com/BE3n/3#:workspace>
Modules don't share namespaces in python, so `globals()` for `my_print` is always going to be the `globals()` of the my\_print.py file, i.e. the location where the function was actually defined.

```
def mprint(string='', dic = None):
    dictionary = dic if dic is not None else globals()
    print string.format(**dictionary)
```

You should pass the current module's globals() explicitly to make it work. And don't use mutable objects as default values in python functions, it can result in [unexpected results](https://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument). Use `None` as the default value instead.

A simple example for understanding scopes in modules:

file : my\_print.py

```
x = 10
def func():
    global x
    x += 1
    print x
```

file : main.py

```
from my_print import *

x = 50
func()  #prints 11 because for func() global scope is still
        #the global scope of my_print file
print x #prints 50
```
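The caller-frame trick from the question's Edit 01 still works in current Python. Below is a condensed, runnable sketch in Python 3 syntax, with the function renamed `interp` to avoid clashing with the snippets above; note that `sys._getframe` is a CPython implementation detail, not guaranteed by the language:

```python
import sys

def interp(format_string):
    # Resolve names in the *caller's* scope: globals first, locals override
    caller = sys._getframe(1)
    namespace = dict(caller.f_globals)
    namespace.update(caller.f_locals)
    return format_string.format(**namespace)

def greet():
    name = 'Ismael'
    country = 'Mexico'
    return interp("Hello! My name is {name}, I'm from {country}.")

print(greet())
```

On Python 3.6+ an f-string (`f"Hello! My name is {name}"`) gives the same effect with no frame inspection at all, which is what PEP 498 standardized.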
Python string interpolation implementation
[ "", "python", "string-interpolation", "" ]
# Intro

I have the following SQLite table with 198,305 geocoded portuguese postal codes:

```
CREATE TABLE "pt_postal" (
    "code" text NOT NULL,
    "geo_latitude" real(9,6) NULL,
    "geo_longitude" real(9,6) NULL
);

CREATE UNIQUE INDEX "pt_postal_code" ON "pt_postal" ("code");
CREATE INDEX "coordinates" ON "pt_postal" ("geo_latitude", "geo_longitude");
```

I also have the following user-defined function in PHP that returns the distance between two coordinates:

```
$db->sqliteCreateFunction('geo', function () {
    if (count($data = func_get_args()) < 4) {
        $data = explode(',', implode(',', $data));
    }

    if (count($data = array_map('deg2rad', array_filter($data, 'is_numeric'))) == 4) {
        return round(6378.14 * acos(sin($data[0]) * sin($data[2]) + cos($data[0]) * cos($data[2]) * cos($data[1] - $data[3])), 3);
    }

    return null;
});
```

Only **874** records have a distance from `38.73311, -9.138707` smaller or equal to 1 km.

---

# The Problem

The UDF is working flawlessly in SQL queries, but for some reason I cannot use its return value in `WHERE` clauses - for instance, if I execute the query:

```
SELECT "code", geo(38.73311, -9.138707, "geo_latitude", "geo_longitude") AS "distance"
FROM "pt_postal"
WHERE 1 = 1
  AND "geo_latitude" BETWEEN 38.7241268076 AND 38.7420931924
  AND "geo_longitude" BETWEEN -9.15022289523 AND -9.12719110477
  AND "distance" <= 1
ORDER BY "distance" ASC
LIMIT 2048;
```

It returns 1035 records ***ordered by `distance`*** in ~0.05 seconds, *however* the last record has a "distance" of `1.353` km (which is bigger than the 1 km I defined as the maximum in the last `WHERE`).

If I drop the following clauses:

```
AND "geo_latitude" BETWEEN 38.7241268076 AND 38.7420931924
AND "geo_longitude" BETWEEN -9.15022289523 AND -9.12719110477
```

Now the query takes nearly 6 seconds and returns 2048 records (my `LIMIT`) ordered by `distance`. It's supposed to take this long, but it should only return the **874 records that have `"distance" <= 1`**.

The `EXPLAIN QUERY PLAN` for the original query returns:

```
SEARCH TABLE pt_postal USING INDEX coordinates (geo_latitude>? AND geo_latitude<?) #(~7500 rows)
USE TEMP B-TREE FOR ORDER BY
```

And without the coordinate boundaries:

```
SCAN TABLE pt_postal #(~500000 rows)
USE TEMP B-TREE FOR ORDER BY
```

---

# What I Would Like to Do

I think I know why this is happening. SQLite is doing:

1. use index `coordinates` to filter out the records outside of the boundaries in the `WHERE` clauses
2. filter those records by the `"distance" <= 1` `WHERE` clause, ***but `distance` is still `NULL => 0`***!
3. populate "code" and "distance" (by calling the UDF for the first time)
4. order by the "distance" (which is populated by now)
5. limit the records

What I would like SQLite to do:

1. use index `coordinates` to filter out the records outside of the boundaries in the `WHERE` clauses
2. for those records, populate `code` and `distance` by calling the UDF
3. filter the records by the `"distance" <= 1` `WHERE` clause
4. order by the "distance" (without calling the UDF again)
5. limit the records

**Can anyone explain how I can make SQLite behave (if it's even possible) the way I want it to?**

---

# Postscript

Just out of curiosity, I tried to benchmark how much slower calling the UDF twice would be:

```
SELECT "code", geo(38.73311, -9.138707, "geo_latitude", "geo_longitude") AS "distance"
FROM "pt_postal"
WHERE 1 = 1
  AND "geo_latitude" BETWEEN 38.7241268076 AND 38.7420931924
  AND "geo_longitude" BETWEEN -9.15022289523 AND -9.12719110477
  AND geo(38.73311, -9.138707, "geo_latitude", "geo_longitude") <= 1
ORDER BY "distance" ASC
LIMIT 2048;
```

To my surprise, it still runs in the same ~0.06 seconds - and it still (wrongly!) returns the 1035 records. Seems like the second `geo()` call is not even being evaluated... But [it should](http://www.sqlite.org/lang_expr.html), right?
Basically, I was using `sprintf()` to see what kind of bounding coordinates were being computed, and since I couldn't run the query anywhere other than PHP (because of the UDF) I was generating another query with prepared statements. The problem was, I wasn't generating the last bound parameter (the kilometers in the `distance <= ?` clause) and I was fooled by my `sprintf()` version. Guess I shouldn't try to code when I'm sleepy.

I'm truly sorry for your wasted time, and thank you all!

---

Just for the sake of completeness, the following returns (correctly!) 873 records, in ~0.04 seconds:

```
SELECT "code", geo(38.73311, -9.138707, "geo_latitude", "geo_longitude") AS "distance"
FROM "pt_postal"
WHERE 1 = 1
  AND "geo_latitude" BETWEEN 38.7241268076 AND 38.7420931924
  AND "geo_longitude" BETWEEN -9.15022289523 AND -9.12719110477
  AND "distance" <= 1
ORDER BY "distance" ASC
LIMIT 2048;
```
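The same register-a-UDF-and-filter pattern can be reproduced with Python's built-in `sqlite3` module. This is a two-row toy table, not the original 198k-row one; the distance function is the question's spherical law of cosines, with the `acos` argument clamped to guard against floating-point drift, and the filter goes through a wrapping `SELECT` so the computed alias is referenced in a way every SQLite version accepts:

```python
import math
import sqlite3

def geo(lat1, lon1, lat2, lon2):
    # Spherical law of cosines, mirroring the question's PHP UDF
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    x = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return round(6378.14 * math.acos(min(1.0, max(-1.0, x))), 3)

conn = sqlite3.connect(":memory:")
conn.create_function("geo", 4, geo)
conn.executescript("""
CREATE TABLE pt_postal (code TEXT, geo_latitude REAL, geo_longitude REAL);
INSERT INTO pt_postal VALUES ('1000-001', 38.7340, -9.1390),  -- ~100 m away
                             ('4700-001', 41.5503, -8.4201);  -- far away
""")

rows = conn.execute("""
    SELECT * FROM (
        SELECT code,
               geo(38.73311, -9.138707, geo_latitude, geo_longitude) AS distance
        FROM pt_postal
    ) WHERE distance <= 1
    ORDER BY distance
""").fetchall()
print(rows)
```

The postal codes and coordinates here are invented for illustration; only the reference point and the formula come from the question.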
This query (*provided by [@OMGPonies](https://stackoverflow.com/a/2099140/89771)*):

```
SELECT * FROM (
    SELECT "code", geo(38.73311, -9.138707, "geo_latitude", "geo_longitude") AS "distance"
    FROM "pt_postal"
    WHERE 1 = 1
      AND "geo_latitude" BETWEEN 38.7241268076 AND 38.7420931924
      AND "geo_longitude" BETWEEN -9.15022289523 AND -9.12719110477
)
WHERE "distance" <= 1
ORDER BY "distance" ASC
LIMIT 2048;
```

Correctly returns the 873 records, ordered by `distance`, in ~0.07 seconds.

However, I'm still wondering why SQLite doesn't evaluate `geo()` in the `WHERE` clause, [like MySQL](http://www.scribd.com/doc/2569355/Geo-Distance-Search-with-MySQL#page=15)...
SQLite - WHERE Clause & UDFs
[ "", "sql", "sqlite", "user-defined-functions", "" ]
I am fitting data points using a logistic model. As I sometimes have data with a ydata error, I first used curve\_fit and its sigma argument to include my individual standard deviations in the fit.

Now I have switched to leastsq, because I also needed some goodness-of-fit estimation that curve\_fit could not provide. Everything works well, but now I miss the possibility to weight the least squares the way "sigma" does in curve\_fit.

Does someone have a code example showing how I could weight the least squares in leastsq as well?

Thanks, Woodpicker
I just found that it is possible to combine the best of both worlds, and to have the full leastsq() output also from curve\_fit(), using the option full\_output:

```
popt, pcov, infodict, errmsg, ier = curve_fit(func, xdata, ydata, sigma = SD, full_output = True)
```

This gives me infodict that I can use to calculate all my Goodness of Fit stuff, and lets me use curve\_fit's sigma option at the same time...
Assuming your data are in arrays `x`, `y` with `yerr`, and the model is `f(p, x)`, just define the error function to be minimized as `(y-f(p,x))/yerr`.
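Both answers come down to the same weighting rule: divide each residual by its standard deviation before squaring. A dependency-free sketch of the quantity `leastsq` would minimize under that scheme (the straight-line data below is invented for illustration):

```python
def weighted_residuals(params, xs, ys, sigmas):
    # Residuals scaled by 1/sigma -- this is the function you would hand to leastsq
    a, b = params
    return [(y - (a * x + b)) / s for x, y, s in zip(xs, ys, sigmas)]

def chi_square(params, xs, ys, sigmas):
    # leastsq minimizes the sum of squared (weighted) residuals
    return sum(r * r for r in weighted_residuals(params, xs, ys, sigmas))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 1.9, 3.2]      # roughly y = x
sigma_tight = [0.1] * 4        # confident measurements
sigma_loose = [1.0] * 4        # noisy measurements

# The same misfit costs 100x more when the error bars are 10x smaller,
# so tightly measured points pull the fit harder
print(chi_square((1.0, 0.0), xs, ys, sigma_tight))
print(chi_square((1.0, 0.0), xs, ys, sigma_loose))
```

With scipy, the only change is passing `weighted_residuals` (plus the data tuple as `args`) to `scipy.optimize.leastsq`, which then adjusts `params` to minimize exactly this chi-square.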
Python / Scipy - implementing optimize.curve_fit 's sigma into optimize.leastsq
[ "", "python", "scipy", "curve-fitting", "least-squares", "" ]
So far I made my user object and my login function, but I don't understand the user\_loader part at all. I am very confused, but here is my code; please point me in the right direction.

```
@app.route('/login', methods=['GET','POST'])
def login():
    form = Login()
    if form.validate():
        user=request.form['name']
        passw=request.form['password']
        c = g.db.execute("SELECT username from users where username = (?)", [user])
        userexists = c.fetchone()
        if userexists:
            c = g.db.execute("SELECT password from users where password = (?)", [passw])
            passwcorrect = c.fetchone()
            if passwcorrect:
                #session['logged_in']=True
                #login_user(user)
                flash("logged in")
                return redirect(url_for('home'))
            else:
                return 'incorrect pw'
        else:
            return 'fail'
    return render_template('login.html', form=form)

@app.route('/logout')
def logout():
    logout_user()
    return redirect(url_for('home'))
```

my user:

```
class User():
    def __init__(self,name,email,password, active = True):
        self.name = name
        self.email = email
        self.password = password
        self.active = active

    def is_authenticated(self):
        return True #return true if user is authenticated, provided credentials

    def is_active(self):
        return True #return true if user is active and authenticated

    def is_anonymous(self):
        return False #return true if anonymous; an actual user returns false

    def get_id(self):
        return unicode(self.id) #return unicode id for user, used to load user from user_loader callback

    def __repr__(self):
        return '<User %r>' % (self.email)

    def add(self):
        c = g.db.execute('INSERT INTO users(username,email,password)VALUES(?,?,?)',[self.name,self.email,self.password])
        g.db.commit()
```

my database:

```
import sqlite3
import sys
import datetime

conn = sqlite3.connect('data.db') #create db

with conn:
    cur = conn.cursor()
    cur.execute('PRAGMA foreign_keys = ON')
    cur.execute("DROP TABLE IF EXISTS posts")
    cur.execute("DROP TABLE IF EXISTS users")
    cur.execute("CREATE TABLE users(id integer PRIMARY KEY, username TEXT, password TEXT, email TEXT)")
    cur.execute("CREATE TABLE posts(id integer PRIMARY KEY, body TEXT, user_id int, FOREIGN KEY(user_id) REFERENCES users(id))")
```

I also set up the LoginManager in my \_\_init\_\_. I am not sure what to do next, but I know I have to somehow set up this:

```
@login_manager.user_loader
def load_user(id):
    return User.query.get(id)
```

How do I adjust this portion of the code to work with my database?

EDIT: please let me know if this looks correct or can be improved :)

```
@login_manager.user_loader
def load_user(id):
    c = g.db.execute("SELECT id from users where username = (?)", [id])
    userid = c.fetchone()
    return userid

@app.route('/login', methods=['GET','POST'])
def login():
    form = Login()
    if form.validate():
        g.user=request.form['name']
        g.passw=request.form['password']
        c = g.db.execute("SELECT username from users where username = (?)", [g.user])
        userexists = c.fetchone()
        if userexists:
            c = g.db.execute("SELECT password from users where password = (?)", [g.passw])
            passwcorrect = c.fetchone()
            if passwcorrect:
                user = User(g.user, 'email', g.passw)
                login_user(user)
                flash("logged in")
                return redirect(url_for('home'))
            else:
                return 'incorrect pw'
        else:
            return 'fail'
    return render_template('login.html', form=form)

@app.route('/logout')
def logout():
    logout_user()
    return redirect(url_for('home'))

import sqlite3
from flask import g

class User():
    def __init__(self,name,email,password, active = True):
        self.name = name
        self.email = email
        self.password = password
        self.active = active

    def is_authenticated(self):
        return True #return true if user is authenticated, provided credentials

    def is_active(self):
        return True #return true if user is active and authenticated

    def is_anonymous(self):
        return False #return true if anonymous; an actual user returns false

    def get_id(self):
        c = g.db.execute('SELECT id from users where username = (?)', [g.user])
        id = c.fetchone()
        return unicode(id) #return unicode id for user, used to load user from user_loader callback

    def __repr__(self):
        return '<User %r>' % (self.email)

    def add(self):
        c = g.db.execute('INSERT INTO users(username,email,password)VALUES(?,?,?)',[self.name,self.email,self.password])
        g.db.commit()
```
The user\_loader callback function is how you tell Flask-Login "how" to look up the user from the user id it keeps in the session.

Since you are using sqlite3, you need to implement the user\_loader function to query your sqlite database and fetch/return the userid/username that you have stored. Something like:

```
@login_manager.user_loader
def load_user(id):
    c = g.db.execute("SELECT username from users where username = (?)", [id])
    userrow = c.fetchone()
    userid = userrow[0] # or whatever the index position is
    return userid
```

When you call login\_user(user), it calls the load\_user function to figure out the user id. This is how the process flow works:

1. You verify that the user has entered the correct username and password by checking against the database.
2. If the username/password matches, then you need to retrieve the user "object" from the user id. Your user object could be `userobj = User(userid, email, ...)`. Just instantiate it.
3. Log in the user by calling login\_user(userobj).
4. Redirect wherever, flash, etc.
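The lookup half of that flow can be exercised without running Flask at all. In this sketch an in-memory sqlite3 connection stands in for `g.db`, and `load_user` returns a trimmed-down `User` object; the row data is invented, and only the table layout follows the question:

```python
import sqlite3

class User:
    """Minimal stand-in for the question's User class."""
    def __init__(self, user_id, name):
        self.id = user_id
        self.name = name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, "
             "username TEXT, password TEXT, email TEXT)")
conn.execute("INSERT INTO users (username, password, email) VALUES (?, ?, ?)",
             ("falak", "secret", "falak@example.com"))

def load_user(user_id):
    # What a user_loader callback boils down to: id in, user object (or None) out
    row = conn.execute("SELECT id, username FROM users WHERE id = ?",
                       (user_id,)).fetchone()
    return User(*row) if row else None

user = load_user(1)
print(user.name if user else None)
```

The important contract is the `None` return for unknown ids: Flask-Login uses that to treat a stale session cookie as an anonymous visitor instead of raising.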
Are you using SQLAlchemy by any chance? Here is an example of my model.py for a project I had a while back that used Sqlite3 & Flask log-ins.

```
USER_COLS = ["id", "email", "password", "age"]
```

Did you create an engine?

```
engine = create_engine("sqlite:///ratings.db", echo=True)
session = scoped_session(sessionmaker(bind=engine,
                                      autocommit=False,
                                      autoflush=False))

Base = declarative_base()
Base.query = session.query_property()
Base.metadata.create_all(engine)
```

Here is an example of a User class:

```
class User(Base):
    __tablename__ = "Users"

    id = Column(Integer, primary_key = True)
    email = Column(String(64), nullable=True)
    password = Column(String(64), nullable=True)
    age = Column(Integer, nullable=True)

    def __init__(self, email = None, password = None, age=None):
        self.email = email
        self.password = password
        self.age = age
```

Hope that helps give you a little bit of a clue.
flask-login not sure how to make it work using sqlite3
[ "", "python", "flask", "flask-login", "" ]
There are two lists. One is `code_list`, the other is `points`:

```
code_list = ['ab','ca','gc','ab','we','ca']
points = [30, 20, 40, 20, 10, -10]
```

These two lists connect to each other like this: 'ab' = 30, 'ca' = 20, 'gc' = 40, 'ab' = 20, 'we' = 10, 'ca' = -10.

From these two lists, wherever the same element appears more than once, I want to get the sum for each element. Finally, I'll get the element which has the biggest point. I hope to get a simple result like below:

```
'ab' has the biggest point: 50
```

Could you give me your help?
You can use a [`collections.Counter()`](http://docs.python.org/2/library/collections.html#collections.Counter) instance:

```
>>> from collections import Counter
>>> code_list= ['ab','ca','gc','ab','we','ca']
>>> points = [30, 20, 40, 20, 10, -10]
>>> c = Counter()
>>> for key, val in zip(code_list, points):
...     c[key] += val
...
>>> c.most_common(1)
[('ab', 50)]
```

`zip()` pairs up your two input lists. It's that last call that makes the `Counter()` useful here: the `.most_common()` call uses `max()` internally for just one item, but for an argument greater than 1 `heapq.nlargest()` is used, and with no argument or asking for `len(c)`, `sorted()` is used.
There's another way, using [`collections.defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict):

```
>>> di = collections.defaultdict(int)
>>> for k,v in zip(code_list, points):
...     di[k] += v
...
>>> max(di, key=lambda x:di[x])
'ab'
```

If you don't want to use `defaultdict` for some reason, just do this:

```
>>> di = {}
>>> for k,v in zip(code_list, points):
...     if k not in di:
...         di[k] = 0   # or as suggested by Martijn Pieters:
...                     # di[k] = di.get(k, 0) + v
...     di[k] += v
```
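Both answers tally the pairs the same way and agree on the question's data; a quick combined check (`key=di.get` is just a slightly shorter spelling of the lambda above):

```python
from collections import Counter, defaultdict

code_list = ['ab', 'ca', 'gc', 'ab', 'we', 'ca']
points = [30, 20, 40, 20, 10, -10]

# Counter-based tally
c = Counter()
for key, val in zip(code_list, points):
    c[key] += val

# defaultdict-based tally
di = defaultdict(int)
for key, val in zip(code_list, points):
    di[key] += val

best = max(di, key=di.get)
print("%r has the biggest point: %d" % (best, di[best]))
```

If several codes tied for the top sum, `max()` (and `most_common(1)`) would each report only one of them, so ties need extra handling if they matter.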
Python: searching maximum data
[ "", "python", "list", "max", "" ]
I have a table of documents, and a table of tags. The documents are tagged with various values. I am attempting to create a search of these tags, and for the most part it is working. However, I am getting extra results returned when it matches any tag. I only want results where it matches all tags.

I have created this to illustrate the problem: <http://sqlfiddle.com/#!3/8b98e/11>

**Tables and Data:**

```
CREATE TABLE Documents
(
  DocId INT,
  DocText VARCHAR(500)
);

CREATE TABLE Tags
(
  TagId INT,
  TagName VARCHAR(50)
);

CREATE TABLE DocumentTags
(
  DocTagId INT,
  DocId INT,
  TagId INT,
  Value VARCHAR(50)
);

INSERT INTO Documents VALUES (1, 'Document 1 Text');
INSERT INTO Documents VALUES (2, 'Document 2 Text');

INSERT INTO Tags VALUES (1, 'Tag Name 1');
INSERT INTO Tags VALUES (2, 'Tag Name 2');

INSERT INTO DocumentTags VALUES (1, 1, 1, 'Value 1');
INSERT INTO DocumentTags VALUES (1, 1, 2, 'Value 2');
INSERT INTO DocumentTags VALUES (1, 2, 1, 'Value 1');
```

**Code:**

```
-- Set up the parameters
DECLARE @TagXml VARCHAR(max)
SET @TagXml = '<tags>
  <tag>
    <description>Tag Name 1</description>
    <value>Value 1</value>
  </tag>
  <tag>
    <description>Tag Name 2</description>
    <value>Value 2</value>
  </tag>
</tags>'

-- Create a table to store the parsed xml in
DECLARE @XmlTagData TABLE
(
  id varchar(20)
  ,[description] varchar(100)
  ,value varchar(250)
)

-- Populate our XML table
DECLARE @iTag int
EXEC sp_xml_preparedocument @iTag OUTPUT, @TagXml

-- Execute a SELECT statement that uses the OPENXML rowset provider
-- to produce a table from our xml structure and insert it into our temp table
INSERT INTO @XmlTagData (id, [description], value)
SELECT id, [description], value
FROM OPENXML (@iTag, '/tags/tag',1)
WITH (id varchar(20), [description] varchar(100) 'description', value varchar(250) 'value')

EXECUTE sp_xml_removedocument @iTag

-- Update the XML table Id's to match existing Tag Id's
UPDATE @XmlTagData
SET X.Id = T.TagId
FROM @XmlTagData X
INNER JOIN Tags T ON X.[description] = T.TagName

-- Check it looks right
--SELECT *
--FROM @XmlTagData

-- This is where things do not quite work. I get both doc 1 & 2 back,
-- but what I want is just document 1.
-- i.e. documents that have both tags with matching values
SELECT DISTINCT D.*
FROM Documents D
INNER JOIN DocumentTags T ON T.DocId = D.DocId
INNER JOIN @XmlTagData X ON X.id = T.TagId AND X.value = T.Value
```

(Note I am not a DBA, so there may be better ways of doing things. Hopefully I am on the right track, but I am open to other suggestions if my implementation can be improved.)

**Can anyone offer any suggestions on how to get only results that have all tags?**

Many thanks.
Use option with [[NOT] EXISTS](http://msdn.microsoft.com/en-us/library/ms188336%28v=sql.90%29.aspx) and [EXCEPT](http://msdn.microsoft.com/ru-ru/library/ms188055%28v=sql.105%29.aspx) operators in the last query ``` SELECT * FROM Documents D WHERE NOT EXISTS ( SELECT X.ID , X.Value FROM @XmlTagData X EXCEPT SELECT T.TagId, T.VALUE FROM DocumentTags T WHERE T.DocId = D.DocId ) ``` Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/8b98e/49) OR ``` SELECT * FROM Documents D WHERE EXISTS ( SELECT X.ID , X.Value FROM @XmlTagData X EXCEPT SELECT T.TagId, T.VALUE FROM DocumentTags T WHERE T.DocId != D.DocId ) ``` Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/8b98e/50) OR Also you can use a simple solution with XQuery methods: [nodes()](http://msdn.microsoft.com/ru-ru/library/ms188282.aspx), [value()](http://msdn.microsoft.com/ru-ru/library/ms178030.aspx)) and CTE/Subquery. ``` -- Set up the parameters DECLARE @TagXml XML SET @TagXml = '<tags> <tag> <description>Tag Name 1</description> <value>Value 1</value> </tag> <tag> <description>Tag Name 2</description> <value>Value 2</value> </tag> </tags>' ;WITH cte AS ( SELECT TagValue.value('(./value)[1]', 'nvarchar(100)') AS value, TagValue.value('(./description)[1]', 'nvarchar(100)') AS [description] FROM @TagXml.nodes('/tags/tag') AS T(TagValue) ) SELECT * FROM Documents D WHERE NOT EXISTS ( SELECT T.TagId, c.value FROM cte c JOIN Tags T WITH(FORCESEEK) ON c.[description] = T.TagName EXCEPT SELECT T.TagId, T.VALUE FROM DocumentTags T WITH(FORCESEEK) WHERE T.DocId = D.DocId ) ``` Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/8b98e/52) OR ``` -- Set up the parameters DECLARE @TagXml XML SET @TagXml = '<tags> <tag> <description>Tag Name 1</description> <value>Value 1</value> </tag> <tag> <description>Tag Name 2</description> <value>Value 2</value> </tag> </tags>' SELECT * FROM Documents D WHERE NOT EXISTS ( SELECT T2.TagId, TagValue.value('(./value)[1]', 'nvarchar(100)') AS value FROM @TagXml.nodes('/tags/tag') AS T(TagValue) 
JOIN Tags T2 WITH(FORCESEEK) ON TagValue.value('(./description)[1]', 'nvarchar(100)') = T2.TagName
                 EXCEPT
                 SELECT T.TagId, T.VALUE
                 FROM DocumentTags T WITH(FORCESEEK)
                 WHERE T.DocId = D.DocId
                 )
```

Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/8b98e/53)

To improve performance (forcing index seek operations on the Tags and DocumentTags tables; the FORCESEEK hint was added to the queries above), create supporting indexes:

```
CREATE INDEX x ON Documents(DocId) INCLUDE(DocText)
CREATE INDEX x ON Tags(TagName) INCLUDE(TagId)
CREATE INDEX x ON DocumentTags(DocId) INCLUDE(TagID, VALUE)
```
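The NOT EXISTS / EXCEPT pattern above is classic relational division ("no wanted tag is missing from this document"). As a quick way to sanity-check that logic outside SQL Server, here is a minimal sketch using Python's built-in sqlite3; a plain `WantedTags` table stands in for the `@XmlTagData` table variable, and the sample rows are simplified from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Documents (DocId INT, DocText TEXT);
CREATE TABLE DocumentTags (DocTagId INT, DocId INT, TagId INT, Value TEXT);
CREATE TABLE WantedTags (TagId INT, Value TEXT);   -- stands in for @XmlTagData
INSERT INTO Documents VALUES (1, 'Document 1 Text'), (2, 'Document 2 Text');
INSERT INTO DocumentTags VALUES (1, 1, 1, 'Value 1'),
                                (2, 1, 2, 'Value 2'),
                                (3, 2, 1, 'Value 1');
INSERT INTO WantedTags VALUES (1, 'Value 1'), (2, 'Value 2');
""")

# A document matches when no wanted (TagId, Value) pair is missing from it.
rows = conn.execute("""
SELECT D.DocId
FROM Documents D
WHERE NOT EXISTS (
    SELECT TagId, Value FROM WantedTags
    EXCEPT
    SELECT TagId, Value FROM DocumentTags T WHERE T.DocId = D.DocId
)
""").fetchall()
matching = [r[0] for r in rows]   # only document 1 carries both tags
```

The inner query keeps the same shape as the T-SQL version, so it transfers back with only the table-variable name changed.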
I am not really sure of the syntax for `SQL Server`, but I guess something like this should work

```
SELECT d.docId
FROM Documents D
    INNER JOIN DocumentTags T
        ON T.DocId = D.DocId
    INNER JOIN @XmlTagData X
        ON X.id = T.TagId
        AND X.value = T.Value
group by d.docId
having count(*) = 2 --[total of tags to be searched]
```
Selecting from Table A where it joins to all data in Table B
[ "", "sql", "sql-server-2008", "" ]
```
ACTOR (id, fname, lname, gender)
MOVIE (id, name, year, rank)
CASTS (pid, mid, role)

WHERE pid references ACTOR id
      mid references MOVIE id
```

List the movies that x has been in without y (x and y are actors). I am finding it difficult to construct an SQL query with NOT IN. This is my attempt; I'm unable to finish it off because the second actor is not present:

```
SELECT m.name
FROM MOVIE m
WHERE m.id NOT IN (SELECT c.mid
                   FROM CASTS c, ACTOR a
                   WHERE c.pid = a.id
                   AND a.name = "adam..")
```
Using **`NOT EXISTS`**: ``` SELECT m.name -- Show the names FROM movie m -- of all movies WHERE EXISTS -- that there was ( SELECT * -- a role FROM casts c -- casted to JOIN actor a -- actor with ON c.pid = a.id WHERE c.mid = m.id AND a.name = 'Actor X' -- name X ) AND NOT EXISTS -- and there was not ( SELECT * -- any role FROM casts c -- casted JOIN actor a -- to actor with ON c.pid = a.id WHERE c.mid = m.id AND a.name = 'Actor Y' -- name Y ) ; ``` You can also use **`NOT IN`**. Note that this may give you unexpected results if there are rows with `NULL` in the `movie.id` or `casts.mid` column: ``` SELECT m.name -- Show the names FROM movie m -- of all movies WHERE m.id IN -- but keep only the movies that ( SELECT c.mid -- movies that FROM casts c -- had a role casted to JOIN actor a -- actor with ON c.pid = a.id WHERE a.name = 'Actor X' -- name X ) AND m.id NOT IN -- and not the movies ( SELECT c.mid -- that FROM casts c -- had a role casted JOIN actor a -- to actor with ON c.pid = a.id WHERE a.name = 'Actor Y' -- name Y ) ; ```
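A hedged, runnable sketch of the same EXISTS / NOT EXISTS shape, using Python's sqlite3 with invented sample rows (a single `name` column is used for brevity, as in the answer; the real schema splits it into `fname`/`lname`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actor (id INT, name TEXT);
CREATE TABLE movie (id INT, name TEXT);
CREATE TABLE casts (pid INT, mid INT, role TEXT);
INSERT INTO actor VALUES (1, 'Actor X'), (2, 'Actor Y');
INSERT INTO movie VALUES (10, 'Together'), (11, 'Solo');
INSERT INTO casts VALUES (1, 10, 'lead'),   -- X and Y share movie 10
                         (2, 10, 'lead'),
                         (1, 11, 'lead');   -- X alone in movie 11
""")

# Movies that X was in, without Y
rows = conn.execute("""
SELECT m.name FROM movie m
WHERE EXISTS (SELECT 1 FROM casts c JOIN actor a ON c.pid = a.id
              WHERE c.mid = m.id AND a.name = 'Actor X')
  AND NOT EXISTS (SELECT 1 FROM casts c JOIN actor a ON c.pid = a.id
                  WHERE c.mid = m.id AND a.name = 'Actor Y')
""").fetchall()
movies = [r[0] for r in rows]
```

Movie 10 is excluded because Y also appears in it; only the X-only movie survives.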
You can also use the often-overlooked `MINUS`: ``` SELECT Movie.id, Movie.name FROM Actor INNER JOIN Casts ON Actor.id = Casts.pid INNER JOIN Movie ON Casts.mid = Movie.id WHERE Actor.id = 1 MINUS SELECT Movie.id, Movie.name FROM Actor INNER JOIN Casts ON Actor.id = Casts.pid INNER JOIN Movie ON Casts.mid = Movie.id WHERE Actor.id = 2 ``` The `WHERE Actor.id` in the queries above can be substituted with some other way to uniquely identify the actor, for example by their name.
NOT IN with Subquery SQL Construct
[ "", "sql", "oracle", "" ]
I would like to list all form errors together using {{ form.errors }} in the template. This produces a list of form fields and nested lists of the errors for each field. However, the literal name of the field is used. The generated html with an error in a particular field might look like this. ``` <ul class="errorlist"> <li> target_date_mdcy <ul class="errorlist"> <li>This field is required.</li> </ul> </li> </ul> ``` I would like use the errorlist feature, as it's nice and easy. However, I want to use the label ("Target Date", say) rather than the field name. Actually, I can't think of a case in which you would want the field name displaying for the user of a webpage. Is there way to use the rendered error list with the field label?
I don't see a simple way to do this. The errors attribute of the form actually returns an `ErrorDict`, a class defined in `django.forms.utils` - it's a subclass of `dict` that knows to produce that ul rendering of itself as its unicode representation. But the keys are actually the field names, and that's important to maintain for other behavior. So it provides no easy access to the field labels. You could define a custom template tag that accepts the form to produce the rendering you prefer, since in Python code it's easy to get the field label given the form and the field name. Or you could construct an error list by label in the view, add it to your context, and use that instead. *edit* Alternately again, you can iterate over the fields and check their individual errors, remembering to display `non_field_errors` as well. Something like: ``` <ul class="errorlist"> {% if form.non_field_errors %} <li>{{ form.non_field_errors }}</li> {% endif %} {% for field in form %} {% if field.errors %} <li> {{ field.label }} <ul class="errorlist"> {% for error in field.errors %} <li>{{ error }}</li> {% endfor %} </ul> </li> {% endif %} {% endfor %} </ul> ``` You might want to wrap non\_field\_errors in a list as well, depending.
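Stripped of the Django machinery, the "error list by label" the answer suggests building in the view amounts to re-keying the error dict. A dependency-free sketch (the field names and labels here are invented for illustration; in a real view the label map would come from `form.fields[name].label`):

```python
# Hypothetical form metadata: field name -> human-readable label,
# mirroring what Django exposes as field.label.
field_labels = {"target_date_mdcy": "Target date", "email": "Email address"}

# Shape of form.errors: field name -> list of messages
form_errors = {"target_date_mdcy": ["This field is required."]}

# Re-key by label, falling back to the raw name (e.g. for '__all__')
errors_by_label = {field_labels.get(name, name): msgs
                   for name, msgs in form_errors.items()}
```

The resulting dict can be placed in the template context and rendered with an ordinary nested loop.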
I know this has already been answered but, I ran across the same scenario and found there is a simple way to use the label: ``` {% if form.errors %} <ul class="user-msg error"> {% for field in form %} {% for error in field.errors %} <li> {% if field != '__all__' %} <strong>{{ field.label }}:</strong> {% endif %} {{ error }} </li> {% endfor %} {% endfor %} </ul> {% endif %} ```
Render form errors with the label rather than field name
[ "", "python", "django", "django-forms", "" ]
I am trying to use a list comprehension that compares string objects, but one of the strings is utf-8, the byproduct of json.loads. Scenario: ``` us = u'MyString' # is the utf-8 string ``` Part one of my question, is why does this return False? : ``` us.encode('utf-8') == "MyString" ## False ``` Part two - how can I compare within a list comprehension? ``` myComp = [utfString for utfString in jsonLoadsObj if utfString.encode('utf-8') == "MyString"] #wrapped to read on S.O. ``` EDIT: I'm using Google App Engine, which uses Python 2.7 Here's a more complete example of the problem: ``` #json coming from remote server: #response object looks like: {"number1":"first", "number2":"second"} data = json.loads(response) k = data.keys() I need something like: myList = [item for item in k if item=="number1"] #### I thought this would work: myList = [item for item in k if item.encode('utf-8')=="number1"] ```
You must be looping over the wrong data set; just loop directly over the JSON-loaded dictionary, there is no need to call `.keys()` first: ``` data = json.loads(response) myList = [item for item in data if item == "number1"] ``` You may want to use `u"number1"` to avoid implicit conversions between Unicode and byte strings: ``` data = json.loads(response) myList = [item for item in data if item == u"number1"] ``` Both versions **work fine**: ``` >>> import json >>> data = json.loads('{"number1":"first", "number2":"second"}') >>> [item for item in data if item == "number1"] [u'number1'] >>> [item for item in data if item == u"number1"] [u'number1'] ``` Note that in your first example, `us` is **not** a UTF-8 string; it is unicode data, the `json` library has already decoded it for you. A UTF-8 string on the other hand, is a sequence *encoded bytes*. You may want to read up on Unicode and Python to understand the difference: * [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://joelonsoftware.com/articles/Unicode.html) by Joel Spolsky * The [Python Unicode HOWTO](http://docs.python.org/3/howto/unicode.html) * [Pragmatic Unicode](http://nedbatchelder.com/text/unipain.html) by Ned Batchelder On Python 2, your expectation that your test returns `True` would be correct, you are doing something else wrong: ``` >>> us = u'MyString' >>> us u'MyString' >>> type(us) <type 'unicode'> >>> us.encode('utf8') == 'MyString' True >>> type(us.encode('utf8')) <type 'str'> ``` There is **no** need to encode the strings to UTF-8 to make comparisons; use unicode literals instead: ``` myComp = [elem for elem in json_data if elem == u"MyString"] ```
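For what it's worth, the comparison can be exercised end-to-end; this snippet runs unchanged on Python 3.3+ as well, where the `u''` prefix is legal again, and uses the JSON string from the question:

```python
import json

data = json.loads('{"number1": "first", "number2": "second"}')

# json.loads already hands back text (unicode) keys, so a plain
# text-to-text comparison is all that is needed -- no encoding step.
my_list = [item for item in data if item == u"number1"]
```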
You are trying to compare a string of bytes (`'MyString'`) with a string of Unicode code points (`u'MyString'`). This is an "apples and oranges" comparison. Unfortunately, Python 2 pretends in some cases that this comparison is valid, instead of always returning `False`: ``` >>> u'MyString' == 'MyString' # in my opinion should be False True ``` It's up to you as the designer/developer to decide what the correct comparison should be. Here is one possible way: ``` a = u'MyString' b = 'MyString' a.encode('UTF-8') == b # True ``` I recommend the above instead of `a == b.decode('UTF-8')` because all `u''` style strings can be encoded into bytes with UTF-8, except possibly in some bizarre cases, but not all byte-strings can be decoded to Unicode that way. But if you choose to do a UTF-8 encode of the Unicode strings before comparing, that will fail for something like this on a Windows system: `u'Em dashes\u2014are cool'.encode('UTF-8') == 'Em dashes\x97are cool'`. But if you `.encode('Windows-1252')` instead it would succeed. That's why it's an apples and oranges comparison.
How can I compare a unicode type to a string in python?
[ "", "python", "unicode", "python-2.7", "list-comprehension", "" ]
I am new to Python and I want to extract the R, G and B frames separately from an image. For instance, the variable which stores my image is `img`. What I want to know is how to do the equivalent of

`Rimg = img(:,:,1)`

`Gimg = img(:,:,2)`

`Bimg = img(:,:,3)`

Of course, this is MATLAB pseudocode, and `Rimg`, `Gimg` and `Bimg` are just variables.
Numpy style : ``` Bimg = img[:,:,0] Gimg = img[:,:,1] Rimg = img[:,:,2] ``` OpenCV style : ``` B,G,R = cv2.split(img) ```
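To make the indexing concrete without depending on numpy or OpenCV, here is a tiny nested-list stand-in for a BGR image; `channel(img, 0)` plays the role of `img[:,:,0]` (pixel values invented):

```python
# A 2x2 "image": rows of (B, G, R) pixel tuples
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]

def channel(image, c):
    """Pure-Python analogue of image[:, :, c]."""
    return [[pixel[c] for pixel in row] for row in image]

b, g, r = (channel(img, c) for c in range(3))
```

Note the channel order: OpenCV stores pixels as BGR, so index 0 is blue, not red.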
The other answers are correct, but this is a common enough operation that a one-liner can be idiomatic: ``` # Let `im` be a numpy array r,g,b = im.transpose((2,0,1)) ``` The `transpose` operator changes the axes around so that the first axis become the channel axis. Then you can use standard Python multiple assignment to assign to `r`,`g`, and `b`.
Extracting individual frames from an image
[ "", "python", "opencv", "python-2.7", "numpy", "" ]
I've been looking for answers but haven't found anything that I could apply to my table, or understand. I have a table called `Vote` with 2 fields, `idVotant` and `idVote` (`idVotant` is the guy who cast the vote, and `idVote` is the guy he voted for). If I use this:

```
SELECT count(idVote) FROM Vote WHERE idVote=6
```

I get the number of votes that guy n°6 received. If I use this:

```
SELECT idVote,count(idVote) AS votes FROM Vote GROUP BY idVote ORDER BY votes DESC
```

I get the list of all the guys and the number of votes they have.

Now, what I want to do is get the position of each guy, and of a specific one: guy n°6 is first because he got the most votes, guy n°2 is second, and so on. I also want to be able to ask for the position of one particular guy, e.g. which position is guy n°3?
Try this: ``` SELECT @rownum:=@rownum+1 AS position, u.idVote, u.votes FROM ( SELECT idVote, count(idVote) AS votes FROM Vote GROUP BY idVote ORDER BY votes DESC ) u, (SELECT @rownum:=0) r ``` See the [demo here](http://sqlfiddle.com/#!2/e7435/1). I have basically wrapped your SQL query inside the `rownum` query To find a particular person, use this: ``` SELECT * FROM ( SELECT @rownum:=@rownum+1 AS position, u.idVote, u.votes FROM ( SELECT idVote, count(idVote) AS votes FROM Vote GROUP BY idVote ORDER BY votes DESC ) u, (SELECT @rownum:=0) r ) ranked_vote WHERE idVote=6 ```
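The `@rownum` counter in the accepted answer simply numbers the rows of the sorted tally. The same bookkeeping, done client-side in Python with invented vote data:

```python
from collections import Counter

# (idVotant, idVote) pairs, as in the Vote table
votes = [(1, 6), (2, 6), (3, 2), (4, 6), (5, 2), (7, 3)]

tally = Counter(voted_for for _, voted_for in votes)

# most_common() orders by count descending; enumerate assigns 1-based positions
ranking = {cand: pos for pos, (cand, _) in
           enumerate(tally.most_common(), start=1)}
```

Looking up one candidate's position is then a plain dict access, the analogue of the `WHERE idVote=6` filter on the ranked subquery.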
**Try Following:** ``` SELECT idVote,count(idVote) AS votes,ROW_NUMBER() OVER (ORDER BY count(idVote) desc) AS Rank FROM Vote GROUP BY idVote ORDER BY count(idVote) DESC ``` [SQLFIDDLE](http://sqlfiddle.com/#!3/e7435/7) Hope its helpful.
SQL request with count() and position
[ "", "mysql", "sql", "phpmyadmin", "" ]
I have a large table with records created every second and want to select only those records that were created at the top of each hour for the last 2 months. So we would get 24 selected records for every day over the last 60 days The table structure is Dateandtime, Value1, Value2, etc Many Thanks
Try: ``` select * from mytable where datepart(mi, dateandtime)=0 and datepart(ss, dateandtime)=0 and datediff(d, dateandtime, getdate()) <=60 ```
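The `datepart(mi, ...) = 0 and datepart(ss, ...) = 0` test translates directly to attribute checks on Python datetimes; a small client-side sketch with made-up per-second timestamps:

```python
from datetime import datetime

rows = [datetime(2013, 5, 11, 9, 0, 0),    # top of the hour
        datetime(2013, 5, 11, 9, 0, 1),
        datetime(2013, 5, 11, 10, 0, 0),   # top of the hour
        datetime(2013, 5, 11, 10, 30, 0)]

top_of_hour = [t for t in rows if t.minute == 0 and t.second == 0]
```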
You could `group by` on the date part (`cast(col1 as date)`) and the hour part (`datepart(hh, col1)`. Then pick the minimum date for each hour, and filter on that: ``` select * from YourTable yt join ( select min(dateandtime) as dt from YourTable where datediff(day, dateandtime, getdate()) <= 60 group by cast(dateandtime as date) , datepart(hh, dateandtime) ) filter on filter.dt = yt.dateandtime ``` Alternatively, you can group on a date format that only includes the date and the hour. For example, `convert(varchar(13), getdate(), 120)` returns `2013-05-11 18`. ``` ... group by convert(varchar(13), getdate(), 120) ) filter ... ```
SQL Server - Select all top of the hour records
[ "", "sql", "sql-server", "" ]
When I run this code ``` import pandas as pd import numpy as np def add_prop(group): births = group.births.astype(float) group['prop'] = births/births.sum() return group pieces = [] columns = ['name', 'sex', 'births'] for year in range(1880, 2012): path = 'yob%d.txt' % year frame = pd.read_csv(path, names = columns) frame['year'] = year pieces.append(frame) names = pd.concat(pieces, ignore_index = True) total_births = names.pivot_table('births', rows = 'year', cols = 'sex', aggfunc = sum) total_births.plot(title = 'Total Births by sex and year') ``` I get no plot. This is from Wes McKinney's book on using Python for data analysis. Can anyone point me in the right direction?
Put ``` import matplotlib.pyplot as plt ``` at the top, and ``` plt.show() ``` at the end.
In the IPython notebook you could also use `%matplotlib inline` at the top of the notebook to automatically display the created plots in the output cells.
Matplotlib plot is a no-show
[ "", "python", "matplotlib", "pandas", "" ]
The following function is supposed to return information relating to a particular meeting, stored in a meeting table: ``` CREATE TABLE "MEETING" ( "MEETING_ID" NUMBER(10,0) NOT NULL ENABLE, "TIME" TIMESTAMP (4) NOT NULL ENABLE, "LOCATION" VARCHAR2(40), "MAP_HREF" VARCHAR2(140), "FK_INTEREST_ID" CHAR(4) NOT NULL ENABLE, "ADDITIONAL_INFO" CLOB, "PASSED" NUMBER(1,0), "TITLE" VARCHAR2(20), CONSTRAINT "MEETING_PK" PRIMARY KEY ("MEETING_ID") ENABLE ) ; ``` **The code compiles just fine, and runs fine as well.** However, if the meeting exists, only null is returned. If the meeting doesn't exist the exception prints 'UNKNOWN APPOINTMENT' correctly. ``` CREATE OR REPLACE FUNCTION GetMeeting (meetingnumber MEETING.MEETING_ID%TYPE) RETURN VARCHAR IS CURSOR current_meeting(meetingnumber MEETING.MEETING_ID%TYPE) IS SELECT TITLE FROM MEETING WHERE MEETING_ID = meetingnumber; r_meeting current_meeting%ROWTYPE; BEGIN OPEN current_meeting(meetingnumber); FETCH current_meeting INTO r_meeting; IF current_meeting%NOTFOUND THEN r_meeting.TITLE := 'UNKNOWN APPOINTMENT'; END IF; CLOSE current_meeting; RETURN r_meeting.TITLE; END; SELECT GetMeeting (27) appointment FROM MEETING; ```
Seems this is an exercise in using cursors? It's much more complicated than it needs to be. Try something like (*untested*):

```
create or replace function get_meeting(i_meetingnumber MEETING.MEETING_ID%TYPE)
RETURN VARCHAR2
IS
  l_title MEETING.TITLE%TYPE;
BEGIN
  select title
  into l_title
  FROM MEETING
  WHERE MEETING_ID = i_meetingnumber;

  return l_title;

EXCEPTION
  when no_data_found then
    return 'UNKNOWN APPOINTMENT';
  when others then
    raise;
END;
```

It is also a bit unnecessary to put such a small piece of logic in a function; I would simply select it as needed (via a join in a larger SQL statement, or individually in a larger PL/SQL procedure).

Also, I notice that your original function returns VARCHAR where TITLE is VARCHAR2. Not sure offhand if the conversion is done implicitly by Oracle, but it is something worth mentioning.
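The shape of the answer above, one scalar select plus a "no data found" fallback, can be mimicked outside Oracle. A sketch using Python's sqlite3 with an invented meeting row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MEETING (MEETING_ID INTEGER PRIMARY KEY, TITLE TEXT);
INSERT INTO MEETING VALUES (27, 'Budget review');
""")

def get_meeting(meeting_id):
    # fetchone() returning None plays the role of Oracle's NO_DATA_FOUND
    row = conn.execute("SELECT TITLE FROM MEETING WHERE MEETING_ID = ?",
                       (meeting_id,)).fetchone()
    return row[0] if row else "UNKNOWN APPOINTMENT"
```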
``` SELECT NVL(TITLE, 'UNKNOWN APPOINTMENT') FROM MEETING WHERE MEETING_ID = meetingnumber; ``` Is much cleaner.
Oracle: function only returning null
[ "", "sql", "oracle", "function", "plsql", "" ]
I am within the Python environment in my Terminal on my Mac (OS Lion). The python script contains the following lines, ``` def main(): file1 = open(sys.argv[1]) file2 = open(sys.argv[2]) file3 = open(sys.argv[3]) ``` I assume that that I need to run the script by doing: ``` script.py file1.txt file2.txt file3.txt ``` But I keep getting the error message below: ``` >>> process.py output1.txt output2.txt output3.txt File "<stdin>", line 1 process.py output1.txt output2.txt output3.txt ^ SyntaxError: invalid syntax ``` All the files and the script are in the current working directory (I checked it by `import os`, and run `print os.getcwd()`. Can someone point me to the right direction? Thanks!
You cannot run a Python script like that from within the Python interactive interpreter itself (the `>>>` prompt). You need to start the script from your shell's command line:

```
python script.py file1.txt file2.txt file3.txt
```
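As a hedged sketch of how the script's entry point can consume those command-line names once it is started from the shell (the usage message and helper function are illustrative, not from the original script):

```python
import sys

def parse_args(argv):
    # argv[0] is the script name; the three file names follow it,
    # exactly as typed: python script.py file1.txt file2.txt file3.txt
    if len(argv) < 4:
        raise SystemExit("usage: script.py FILE1 FILE2 FILE3")
    return argv[1], argv[2], argv[3]

f1, f2, f3 = parse_args(["script.py", "file1.txt", "file2.txt", "file3.txt"])
```

In the real script you would call `parse_args(sys.argv)` and then `open()` each returned name.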
Use the Command Prompt or a Bash shell, and run: `python scriptName.py <space-separated arguments>`
Unable to read in files in Python 2.7.4 with open(sys.argv[])
[ "", "python", "python-2.7", "" ]
The question is: "Find the properties (pids) that have at least ALL the meter types that property id 7 has." There's a table called `property__PROPERTYMETER` with both the pids and the metertypes in it. This is what I thought would be correct, but it doesn't return only the pid that has both of the values; it also returns one that has just one value, as well as the original property. Any ideas? This double negation stuff is crazy to understand.

```
SELECT DISTINCT pid
FROM property__PROPERTYMETER X
WHERE NOT EXISTS
   (SELECT *
    FROM property__PROPERTYMETER Y
    WHERE pid = 7
    AND NOT EXISTS
       (SELECT *
        FROM property__PROPERTYMETER
        WHERE metertype = X.metertype
        AND pid = Y.pid ))
```
You need a doubly nested query

Better to say "Find the properties (pids) that are NOT missing ANY OF the meter types THAT property id 7 has"

or "Find the properties (pids) such that there does not exist a metertype belonging to pid 7 that does not also belong to this pid."

```
Select DISTINCT pid      -- outer query: "Show all pids ... "
From property__PROPERTYMETER X
Where not Exists         -- mid-query: "Where there is no pid:7 metertype ..."
    (Select * From property__PROPERTYMETER y
     Where metertype = X.metertype
        And pid = 7
        And Not exists
           (Select * from property__PROPERTYMETER  -- inner Qry: "..not also in this pid."
            Where metertype = y.metertype
               And pid = x.pid))
```
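The double negation collapses to a single positive statement over sets: pid X qualifies when pid 7's metertypes are a subset of X's. That reading can be checked in a few lines of Python (meter data invented):

```python
# metertypes per pid -- invented sample data
meters = {7: {"gas", "water", "electric"},
          1: {"gas", "water", "electric", "solar"},   # superset of pid 7
          2: {"gas", "water"}}                        # missing 'electric'

# "there is no pid-7 metertype that this pid lacks"  ==  subset test
matches = sorted(pid for pid, types in meters.items() if meters[7] <= types)
```

Note that pid 7 trivially matches itself, just as in the SQL version.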
The innermost query should verify that the outermost property has all metertypes of the middle property. In your example, it verifies that the middle property has none of the metertypes of the outer property. So you've got the innermost query inside out. Change: ``` WHERE metertype = X.metertype AND pid = X.pid )) ``` To: ``` WHERE metertype = Y.metertype AND pid = X.pid )) ```
SQL double negation, I just cant understand it.
[ "", "sql", "" ]
I am using subprocess to call another program and save its return values to a variable. This process is repeated in a loop, and after a few thousands times the program crashed with the following error: ``` Traceback (most recent call last): File "./extract_pcgls.py", line 96, in <module> SelfE.append( CalSelfEnergy(i) ) File "./extract_pcgls.py", line 59, in CalSelfEnergy p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True) File "/usr/lib/python3.2/subprocess.py", line 745, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.2/subprocess.py", line 1166, in _execute_child errpipe_read, errpipe_write = _create_pipe() OSError: [Errno 24] Too many open files ``` Code: ``` cmd = "enerCHARMM.pl -parram=x,xtop=topology_modified.rtf,xpar=lipid27_modified.par,nobuildall -out vdwaals {0}".format(cmtup[1]) p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True) out, err = p.communicate() ```
I guess the problem was due to the fact that I was processing an open file with subprocess:

```
cmd = "enerCHARMM.pl -par param=x,xtop=topology_modified.rtf,xpar=lipid27_modified.par,nobuildall -out vdwaals {0}".format(cmtup[1])
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
```

Here the `cmd` variable contains the name of a file that had just been created but not closed. Then `subprocess.Popen` calls a system command on that file. After doing this many times, the program crashed with that error message. So the lesson I learned from this is

> Close the file you have created, then process it
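The lesson generalizes: let a `with` block do the closing, so the descriptor is released the moment the block exits and before any further processing. A tiny self-contained demonstration:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)                  # mkstemp hands back an open descriptor; release it

with open(path, "w") as f:    # the context manager guarantees f.close()
    f.write("data to hand to a subprocess later")

closed_after_with = f.closed  # True: safe to pass `path` to subprocess now
os.remove(path)
```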
In Mac OSX (El Capitan) See current configuration: ``` #ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 256 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 709 virtual memory (kbytes, -v) unlimited ``` Set **open files** value to 10K : ``` #ulimit -Sn 10000 ``` Verify results: ``` #ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 10000 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 709 virtual memory (kbytes, -v) unlimited ```
Python Subprocess: Too Many Open Files
[ "", "python", "subprocess", "" ]
I am using Python have a list of 2 columns with lots of white space in between, e.g.: ``` TEXT 123.34645 TEXT 13.35372 TEXT 0.55532 TEXT 11.60538 ``` I want my python code to grab the numbers on the right. At first I was doing this, by going line by line and grabbing line[15:24] or whatever - but then I realised that some of the numbers were different lengths, and so this wasn't going to work. What's the correct way to do this please?
Just use [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split): ``` >>> 'TEXT 123.34645\n'.split() ['TEXT', '123.34645'] ``` The default form of `.split()` splits on arbitrary-width whitespace, ignoring leading and trailing whitespace. In the above example, the `\n` at the end of the line is ignored. If your text contains whitespace too, use [`str.rsplit()`](http://docs.python.org/2/library/stdtypes.html#str.rsplit) with a limit: ``` >>> 'TEXT WITH WHITESPACE 123.34645'.rsplit(None, 1) ['TEXT WITH WHITESPACE', '123.34645'] ``` `None` tells `.rsplit()` (or `.split()`) to split on variable-width whitespace, and `1` is the maximum number of splits to make.
`.split()` is the answer. So when you read your lines in from a text file, apply the `split()` method to each line. ``` In [1]: f = open('test.txt') # test.txt contains the text indicated in your question In [2]: for line in f: ...: result = line.split() ...: print result ...: ['TEXT', '123.34645'] ['TEXT', '13.35372'] ['TEXT', '0.55532'] ['TEXT', '11.60538'] ```
choose numbers of different lengths
[ "", "python", "slice", "" ]
Let's say I have a dict whose key is stored in the variable `keys`:

```
>>> keys
'taste'
```

After a few lines, the output is:

```
>>> {'taste': ('sweet', 'sour', 'juicy', 'melon-like')}
```

This is the code snippet:

```
from collections import defaultdict
agent=defaultdict(str)
key_list=[]
key_list=[(keys,tuple(key_list))]
agent=dict(key_list)
#agent[keys]+=key_list
```

What I want to know is: suppose I have `agent = {'taste': ('sweet', 'sour', 'juicy', 'melon-like')}` and I want to add a list to it,

```
key_list=['yuck!','tasty','smoothie']
```

using something like `agent.setdefault('taste',[]).append(key_list)`, and have output like:

```
{'taste': ('sweet', 'sour', 'juicy', 'melon-like','yuck!','tasty','smoothie')}
```

instead of

```
{'taste': ('sweet', 'sour', 'juicy', 'melon-like',['yuck!','tasty','smoothie'])}
```

Is there a way to do that? In short:

1. I want to add a list to an existing list, which is the value of a key in a dictionary (without iterating to find that particular key)
2. Check whether the list being fed in contains an element that is already present in the list that is the value of a particular key, say 'taste' here (it could be a string, as here)
Check this out: ``` >>> tst = {'taste': ('sweet', 'sour', 'juicy', 'melon-like')} >>> tst.get('taste', ()) #default to () if does not exist. ('sweet', 'sour', 'juicy', 'melon-like') >>> key_list=['yuck!','tasty','smoothie'] >>> tst['taste'] = tst.get('taste') + tuple(key_list) >>> tst {'taste': ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie')} ``` To retrieve, ``` >>> tst = {'taste': ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie')} >>> taste = tst.get('taste') >>> taste ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie') >>> 'sour' in taste True >>> 'sour1' in taste False ```
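The second half of the question, skipping values that are already present, folds into the same concatenation with a guard in the generator (sample data from the question, plus one deliberate duplicate):

```python
agent = {"taste": ("sweet", "sour", "juicy", "melon-like")}
new_items = ["yuck!", "tasty", "smoothie", "sour"]   # "sour" already exists

current = agent.get("taste", ())
# Append only the items not already present, keeping the value a tuple
agent["taste"] = current + tuple(x for x in new_items if x not in current)
```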
Ok so you have three questions here, let's go over them: 1. You can `extend` a list to append elements from another list: `[1,2,3].extend([4,5]) # [1,2,3,4,5]` 2. Since you have tuples, which are immutable, you can simply add a tuple to existing one: `(1,2,3) + (4,5) # (1, 2, 3, 4, 5)` 3. If you do not want *duplicates*, you want to use `set`, and you can union them: `{1,2}.union({2,3}) # set([1,2,3])` see how 2 is not duplicated here. But beware, sets do not keep their order. In the end, if you want to remove duplicates and don't care about order, you can combine 2 and 3: `set(old_value).union(set(new_value))` Otherwise, if you need to preserve order, see this question: [Combining two lists and removing duplicates, without removing duplicates in original list](https://stackoverflow.com/questions/1319338/combining-two-lists-and-removing-duplicates-without-removing-duplicates-in-orig)
Python: Append a list to an existing list assigned to a key in a dictionary?
[ "", "python", "list", "dictionary", "" ]
I want to build an ASP.NET website with Entity Framework 5 but I do not have permission to install SQL Server on my system, is it possible? If yes, how? If not, any alternative, like files etc...
If you can't install anything, then I'm afraid you're stuck with embedded databases like SQL Server Compact Edition or Sqlite. (and some NOSQL databases as well, I believe some of them don't require a server) Just don't forget about one thing - embedded databases usually aren't enough for web environment. For instance: <https://stackoverflow.com/questions/11591002/how-can-i-use-sqlite-in-a-c-sharp-project> <http://www.microsoft.com/en-us/sqlserver/editions/2012-editions/compact.aspx>
You could offload the database platform entirely to a remote Azure database; there's lots of info on this online, but [this page](http://msdn.microsoft.com/en-us/data/gg190738.aspx) might be a good starting point.
Can I have database without installing SQL Server?
[ "", "asp.net", "sql", "visual-studio-2012", "" ]
I have two tables `registered` and `attended`, each with two columns: `AttendantId` and `SessionId`. I would like to query the count of `AttendantId` from these two tables individually for a particular session id. Example:

### registered

```
AttendantId SessionId
ID1         SN1
ID2         SN2
ID3         SN1
ID4         SN3
```

### attended

```
AttendantId SessionId
ID1         SN1
ID4         SN3
```

And I want to obtain the following output:

```
Count(Registered) Count(Attended) Session ID
2                 1               SN1
1                 0               SN2
1                 1               SN3
```
Try: ``` select count(distinct registered), count(distinct attended), SessionId from (select AttendantId registered, null attended, SessionId from registered union all select null registered, AttendantId attended, SessionId from Attended) sq group by SessionId ```
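The union-all trick runs essentially verbatim on other engines too; here it is checked against the question's sample rows using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE registered (AttendantId TEXT, SessionId TEXT);
CREATE TABLE attended   (AttendantId TEXT, SessionId TEXT);
INSERT INTO registered VALUES ('ID1','SN1'),('ID2','SN2'),('ID3','SN1'),('ID4','SN3');
INSERT INTO attended   VALUES ('ID1','SN1'),('ID4','SN3');
""")

# NULL placeholders keep each source's ids in its own column,
# so COUNT(DISTINCT ...) tallies them independently per session.
rows = conn.execute("""
SELECT COUNT(DISTINCT registered), COUNT(DISTINCT attended), SessionId
FROM (SELECT AttendantId AS registered, NULL AS attended, SessionId FROM registered
      UNION ALL
      SELECT NULL, AttendantId, SessionId FROM attended)
GROUP BY SessionId
ORDER BY SessionId
""").fetchall()
```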
You could use a `FULL OUTER JOIN`: ``` select coalesce(a.sessionid, r.sessionid) sessionid, count(r.AttendantId) countRegistered, count(a.AttendantId) countAttended from registered r full outer join attended a on r.sessionid = a.sessionid and r.AttendantId = a.AttendantId group by coalesce(a.sessionid, r.sessionid); ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/1ef3a/9)
Combining SQL join with count
[ "", "sql", "sql-server", "" ]
I am trying to open a xlsx file and just print the contents of it. I keep running into this error: ``` import xlrd book = xlrd.open_workbook("file.xlsx") print "The number of worksheets is", book.nsheets print "Worksheet name(s):", book.sheet_names() print sh = book.sheet_by_index(0) print sh.name, sh.nrows, sh.ncols print print "Cell D30 is", sh.cell_value(rowx=29, colx=3) print for rx in range(5): print sh.row(rx) print ``` It prints out this error ``` raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found '\xff\xfeT\x00i\x00m\x00' ``` Thanks
The error message relates to the BOF (Beginning of File) record of an XLS file. However, the example shows that you are trying to read an XLSX file. There are 2 possible reasons for this: 1. Your version of xlrd is old and doesn't support reading xlsx files. 2. The XLSX file is encrypted and thus stored in the OLE Compound Document format, rather than a zip format, making it appear to xlrd as an older format XLS file. Double check that you are in fact using a recent version of xlrd. Opening a new XLSX file with data in just one cell should verify that. However, I would guess the you are encountering the second condition and that the file is encrypted since you state above that you are already using xlrd version 0.9.2. XLSX files are encrypted if you explicitly apply a workbook password but also if you password protect some of the worksheet elements. As such it is possible to have an encrypted XLSX file even if you don't need a password to open it. **Update**: See @BStew's, third, more probable, answer, that the file is open by Excel.
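The two container formats the answer distinguishes can be told apart from the first few bytes of the file: a plain .xlsx is a ZIP archive (magic bytes `PK`), while an OLE compound document (legacy .xls, or an encrypted .xlsx) starts with `D0 CF 11 E0`. A small sniffing helper; the sample byte strings below are invented:

```python
def container_kind(first_bytes):
    """Classify a spreadsheet file by its leading magic bytes."""
    if first_bytes.startswith(b"PK"):
        return "zip"   # plain .xlsx (Office Open XML package)
    if first_bytes.startswith(b"\xd0\xcf\x11\xe0"):
        return "ole"   # .xls, or an encrypted .xlsx
    return "unknown"

kind = container_kind(b"PK\x03\x04" + b"\x00" * 16)
```

In practice you would read the first bytes with `open(path, "rb").read(8)` before deciding how to handle the file.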
If you use `read_excel()` to read a `.csv` you will get the error > XLRDError: Unsupported format, or corrupt file: Expected BOF record; To read `.csv` one needs to use `read_csv()`, like this ``` df1= pd.read_csv("filename.csv") ```
Error: Unsupported format, or corrupt file: Expected BOF record
[ "", "python", "excel", "csv", "xlrd", "" ]
I have the following sql code:

```
select upper(regexp_substr(street1, '\S+$'))
```

but I don't know where I should put the distinct keyword to get unique values in the table, because it prints out many duplicate values.

---

Edit - from comments below.

Complete Query (error):

```
select distinct UPPER(REGEXP_SUBSTR(STREET1, '\S+$'))
from HELENS_DATA
order by REGEXP_SUBSTR(STREET1, '\S+$') asc
```

Error Message:

> ORA-01791: not a SELECTed expression
> 01791. 00000 - "not a SELECTed expression"
> \*Cause: \*Action: Error at Line: 3 Column: 24

Complete Query (works):

```
select distinct UPPER(REGEXP_SUBSTR(STREET1, '\S+$'))
from HELENS_DATA
order by UPPER(REGEXP_SUBSTR(STREET1, '\S+$')) desc;
```

Please NOTE that the initial query tried to change the query values to upper case by using the UPPER() function; however, I mistakenly omitted the function in the ORDER BY clause, and it was not the fault of 'DISTINCT'.
It's interesting that `DISTINCT` fails when you have the `UPPER` function in place. You can try to get around it using a subquery:

```
SELECT DISTINCT Ending
FROM (select upper(regexp_substr(street1, '\S+$')) Ending) A
```

The subquery really shouldn't be necessary though; this should work too, as chue x recommended.

```
SELECT DISTINCT upper(regexp_substr(street1, '\S+$'))
```
`distinct` always comes right after `select`: ``` select distinct upper... ```
where do I put the distinct in my code?
[ "", "sql", "function", "distinct", "" ]
Sometimes when I open a file for reading or writing in Python ``` f = open('workfile', 'r') ``` or ``` f = open('workfile', 'w') ``` I read/write the file, and then at the end I forget to do `f.close()`. Is there a way to automatically close after all the reading/writing is done, or after the code finishes processing?
```
with open('file.txt','r') as f:
    #file is opened and accessible via f
    pass
#file will be closed before here
```
You could always use the **with...as** statement

```
with open('workfile') as f:
    """Do something with file"""
```

or you could also use a **try...finally block**

```
f = open('workfile', 'r')
try:
    """Do something with file"""
finally:
    f.close()
```

Although since you say that you forget to add f.close(), I guess the with...as statement will be the best for you, and given its simplicity, it's hard to see the reason for not using it!
Python read/write file without closing
[ "", "python", "file", "" ]
Excuse me if this is confusing, as I am not very familiar with postgresql. I have a postgres database with a table full of "sites". Each site reports about once an hour, and when it reports, it makes an entry in this table, like so:

```
site | tstamp
-----+--------------------
6000 | 2013-05-09 11:53:04
6444 | 2013-05-09 12:58:00
6444 | 2013-05-09 13:01:08
6000 | 2013-05-09 13:01:32
6000 | 2013-05-09 14:05:06
6444 | 2013-05-09 14:06:25
6444 | 2013-05-09 14:59:58
6000 | 2013-05-09 19:00:07
```

As you can see, the time stamps are almost never on-the-nose, and sometimes there will be 2 or more within only a few minutes/seconds of each other. Furthermore, some sites won't report for hours at a time (on occasion). I want to only select one entry per site, per hour (as close to each hour as I can get). How can I go about doing this in an efficient way? I also will need to extend this to other time frames (like one entry per site per day -- as close to midnight as possible). Thank you for any and all suggestions.
You could use [DISTINCT ON](http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT):

```
select distinct on (date_trunc('hour', tstamp)) site, tstamp
from t
order by date_trunc('hour', tstamp), tstamp
```

Be careful with the ORDER BY if you care about which entry you get. Alternatively, you could use the [`row_number` window function](http://www.postgresql.org/docs/current/static/functions-window.html#FUNCTIONS-WINDOW-TABLE) to mark the rows of interest and then peel off the first result in each group from a derived table:

```
select site, tstamp
from (
  select site, tstamp,
         row_number() over (partition by date_trunc('hour', tstamp)
                            order by tstamp) as r
  from t
) as dt
where r = 1
```

Again, you'd adjust the ORDER BY to select the specific row of interest for each date.
You are looking for the closest value per hour. Some are before the hour and some are after. That makes this a hardish problem.

First, we need to identify the range of values that work for a particular hour. For this, I'll consider anything from 15 minutes before the hour to 45 minutes after as being for that hour. So, the period of consideration for 2:00 goes from 1:45 to 2:45 (arbitrary, but seems reasonable for your data). We can do this by shifting the time stamps by 15 minutes.

Second, we need to get the closest value to the hour. So, we prefer 1:57 to 2:05. We can do this by ordering on the distance to the hour, `least(minute, 60 - minute)` -- for 1:57 that is `least(57, 3) = 3` and for 2:05 it is `least(5, 55) = 5`, so 1:57 wins.

We can put these rules into a SQL statement, using `row_number()`:

```
select site, tstamp, usedTimestamp
from (select site, tstamp,
             date_trunc('hour', tstamp + time '00:15') as usedTimestamp,
             row_number() over (partition by site,
                                             to_char(tstamp + time '00:15', 'YYYY-MM-DD-HH24')
                                order by least(extract(minute from tstamp),
                                               60 - extract(minute from tstamp))
                               ) as seqnum
      from t
     ) as dt
where seqnum = 1;
```
How can I select one row of data per hour, from a table of time stamps?
[ "", "sql", "database", "postgresql", "timestamp", "" ]
I'm trying to write a unittest that will check if the correct error message is returned in case the database connection hits exception. I've tried to use `connection.creation.destroy_test_db(':memory:')` but it didn't work as I expected. I suppose I should either remove the tables or somehow cut the db connection. Is any of those possible?
Since December 2021 there is the library [Django Mockingbird](https://pypi.org/project/djangomockingbird/). With this you can mock the object that would be retrieved from the db.

```
from djangomockingbird import mock_model

@mock_model('myapp.myfile.MyModel')
def test_my_test():
    some_test_query = MyModel.objects.filter(bar='bar').filter(foo='foo').first()
    #some more code
    #assertions here
```
I found my answer in the presentation [Testing and Django by Carl Meyer](https://pycon-2012-notes.readthedocs.org/en/latest/testing_and_django.html). Here is how I did it:

```
from django.db import DatabaseError
from django.test import TestCase
from django.test.client import Client
import mock


class NoDBTest(TestCase):

    cursor_wrapper = mock.Mock()
    cursor_wrapper.side_effect = DatabaseError

    @mock.patch("django.db.backends.util.CursorWrapper", cursor_wrapper)
    def test_no_database_connection(self):
        response = self.client.post('/signup/', form_data)
        self.assertEqual(message, 'An error occured with the DB')
```
django unittest without database connection
[ "", "python", "django", "unit-testing", "testing", "" ]
My application involves dealing with data (contained in a CSV) which is of the following form:

```
Epoch (number of seconds since Jan 1, 1970), Value
1368431149,20.3
1368431150,21.4
..
```

Currently I read the CSV using numpy's loadtxt method (I could easily use read\_csv from Pandas). Currently for my series I am converting the timestamps field as follows:

```
timestamp_date = [datetime.datetime.fromtimestamp(timestamp_column[i])
                  for i in range(len(timestamp_column))]
```

I follow this by setting timestamp\_date as the Datetime index for my DataFrame. I tried searching in several places to see if there is a quicker (inbuilt) way of using these Unix epoch timestamps, but could not find any. A lot of applications make use of such timestamp terminology.

1. Is there an inbuilt method for handling such timestamp formats?
2. If not, what is the recommended way of handling these formats?
Convert them to `datetime64[s]`:

```
np.array([1368431149, 1368431150]).astype('datetime64[s]')
# array([2013-05-13 07:45:49, 2013-05-13 07:45:50], dtype=datetime64[s])
```
You can also use pandas **to\_datetime**: ``` df['datetime'] = pd.to_datetime(df["timestamp"], unit='s') ``` This method requires Pandas 0.18 or later.
Pandas: Using Unix epoch timestamp as Datetime index
[ "", "python", "numpy", "pandas", "time-series", "" ]
As I understand it, there is a war going on between purists of the natural key and purists of the surrogate key. In posts like [this](https://stackoverflow.com/questions/3747730/relational-database-design-question-surrogate-key-or-natural-key) (there are more) people say 'natural key is bad for you, always use surrogate'... However, either I am stupid or blind, but I can not see a reason to always have a surrogate key! Say you have 3 tables in a configuration like this:

![Link table](https://i.stack.imgur.com/8f5Yx.png)

Why would I need a surrogate key for it?? I mean it makes perfect sense not to have it. Also, can someone please explain why primary keys should never change according to surrogate key purists? I mean, if I have say `color_id VARCHAR(30)` and a key is `black`, and I no longer need black because I am changing it to `charcoal`, why is it a bad idea to change the `black` key to `charcoal` and all referencing columns too?

EDIT: Just noticed that I don't even need to change it! Just create a new one, change the referencing columns (same as I would have to do with a surrogate key) and leave the old one in peace.... With the surrogate key mantra I then need to create an additional entry with, say, `id=232` and `name=black`. How does that benefit me really? I have a spare key in the table which I don't need any more. Also I need to join to get a colour name, while otherwise I can stay in one table and be merry?

Please explain like to a 5 year old, and please keep in mind that I am not trying to say 'surrogate key is bad', I am trying to understand why someone would say things like 'always use surrogate key!'.
Surrogate keys are useful where there is an suboptimal natural key: no more, no less. A suboptimal natural key would be a GUID or varchar or otherwise wide/non-ordered. However, the decision to use a surrogate is an *implementation* decision after the conceptual and logical modelling process, based on knowledge of how the chosen RDBMS works. However, this best practice of "have a surrogate key" is now "always have a surrogate key" and it introduced immediately. Object Relation Mappers also often add surrogate keys to all tables whether needed or not which doesn't help. For a link (many-many) table, you don't need one: [SQL: Do you need an auto-incremental primary key for Many-Many tables?](https://stackoverflow.com/questions/790334/sql-do-you-need-an-auto-incremental-primary-key-for-many-many-tables). For a table with 2 int columns, the overhead is an extra 50% of data for a surrogate column (assuming ints and ignoring row metadata)
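The link-table point above can be made concrete with a small sketch (sqlite3 here purely for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# A many-to-many link table keyed by its natural composite key --
# no surrogate id column, and no extra unique index needed.
conn.execute('''CREATE TABLE user_groups (
                    user_id  INTEGER NOT NULL,
                    group_id INTEGER NOT NULL,
                    PRIMARY KEY (user_id, group_id))''')
conn.execute('INSERT INTO user_groups VALUES (1, 1)')

# The composite primary key itself rejects duplicate pairs; with a
# surrogate key this uniqueness would need a separate constraint.
try:
    conn.execute('INSERT INTO user_groups VALUES (1, 1)')
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

With a surrogate `id` column, the same table would carry an extra integer per row plus a second (unique) index just to recover the guarantee the composite key gives for free.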
Well, I am more on the natural keys side myself :) But surrogate keys can have their advantages, even if you, like me, want to go "natural" all the way :)

For example, I have a table that, due to various constraints, has to be defined as being dependent on others. Something like

```
Table Fee (
    foreign_key1,
    foreign_key2,
    foreign_key3,
    value)
```

the record is defined/identified by the three foreign keys, but at the same time, at most 2 of them can be null. So you cannot create them as a primary key (you'll just put a unique constraint on the 3 columns). In order to have a primary key on that table, the only way is to use a surrogate :)

Now... why not change a primary key... This can be considered pretty philosophical... I see it in this way, hope it will make sense... A primary key, in itself, is not only a combination of unique+not null, it is more about "the real essence of the record", what defines the record at its core. In that sense, it is not something you could change easily, could you? Consider yourself as an example. You have a nick, but it does not define what you really are. You could change it, but the essence of being yourself would not change. Now, if you maintain the nickname, but change your essence... would it still be the same person? Nope, it would make more sense to consider it a "new" person... And for records it's the same... So that's why you usually do not change the primary key and define a new record from scratch
Surrogate key 'preference' explanation
[ "", "sql", "database-design", "surrogate-key", "natural-key", "" ]
I have a table OrderDetails with the following schema:

```
----------------------------------------------------------------
| OrderId | CopyCost | FullPrice | Price | PriceType |
----------------------------------------------------------------
|   16    |    50    |    100    |  50   | CopyCost  |
----------------------------------------------------------------
|   16    |    50    |    100    | 100   | FullPrice |
----------------------------------------------------------------
|   16    |    50    |    100    |  50   | CopyCost  |
----------------------------------------------------------------
|   16    |    50    |    100    |  50   | CopyCost  |
----------------------------------------------------------------
```

I need a query that will summarize the above table into a new table with the following schema:

```
----------------------------------------------------------------
| OrderId | ItemCount | TotalCopyCost | TotalFullPrice |
----------------------------------------------------------------
|   16    |     4     |      150      |      100       |
----------------------------------------------------------------
```

Currently I am using a Group By on the Order.Id to get the item count, but I do not know how to conditionally sum the CopyCost and FullPrice values. Any help would be much appreciated.

Regards
Freddie
Try

```
SELECT OrderId,
  COUNT(*) ItemCount,
  SUM(CASE WHEN PriceType = 'CopyCost' THEN Price ELSE 0 END) TotalCopyCost,
  SUM(CASE WHEN PriceType = 'FullPrice' THEN Price ELSE 0 END) TotalFullPrice
FROM OrderDetails
GROUP BY OrderId
```

**[SQLFiddle](http://sqlfiddle.com/#!3/e906c/1)**
Try this query

```
select orderId, count(*) as cnt,
  sum(if(pricetype='CopyCost', CopyCost, 0)) as totalCopyCost,
  sum(if(pricetype='FullPrice', FullPrice, 0)) as totalFullPrice
from tbl
group by orderId
```

## **[SQL FIDDLE](http://sqlfiddle.com/#!2/6ce07/1)**:

```
| ORDERID | CNT | TOTALCOPYCOST | TOTALFULLPRICE |
--------------------------------------------------
|      16 |   4 |           150 |            100 |
```
Conditional sum in Group By query MSSQL
[ "", "sql", "" ]
I'm trying to parse the text in the ebooks at gutenberg.org to extract info about the books, for example, the title. Every book on there has a line like this:

```
*** START OF THIS PROJECT GUTENBERG EBOOK THE ADVENTURES OF SHERLOCK HOLMES ***
```

I'd like to use something like this:

```
book_name = ()
index = 0
for line in finalLines:
    index += 1
    if "*** START OF THIS PROJECT GUTENBERG EBOOK "%%%"***" in line:
        print(index, line)
        book_name = %%%
```

but I'm obviously not doing it right. Can someone show me how it's done??
Regex is the way to go:

```
import re

title_regex = re.compile(r'\*{3} START OF THIS PROJECT GUTENBERG EBOOK (.*?) \*{3}')

for index, line in enumerate(finalLines):
    match = title_regex.match(line)
    if match:
        book_name = match.group(1)
        print(index, book_name)
```

You can also parse it line-by-line:

```
import urllib.request

url = 'http://www.gutenberg.org/cache/epub/1342/pg1342.txt'
book = urllib.request.urlopen(url)
lines = book.readlines()
book.close()

reached_start = False
metadata = {}
for index, line in enumerate(lines):
    if line.startswith('***'):
        if not reached_start:
            reached_start = True
        else:
            break
    if not reached_start and ':' in line:
        key, _, value = line.partition(':')
        metadata[key.lower()] = value
```
The simplest solution:

```
sp = line.split()
if sp[:7]+sp[-1:] == '*** START OF THIS PROJECT GUTENBERG EBOOK ***'.split():
    bookname = ' '.join(sp[7:-1])
```

A better solution will use a regular expression, as suggested. If you are working with bytes, you should use `b'*** START OF THIS PROJECT GUTENBERG EBOOK ***'`, or use `bytes.decode(s)` for any byte string.

Your snippet (with the `urlopen()` part) might look like this:

```
import urllib.request

url = 'http://gutenberg.org/cache/epub/1342/pg1342.txt'
with urllib.request.urlopen(url) as book:
    finalLines = book.readlines()

booktitle_pattern = '*** START OF THIS PROJECT GUTENBERG EBOOK ***'.split()
bookname = None
for index, line in enumerate(finalLines):
    sp = [bytes.decode(word) for word in line.split()]
    if sp[:7]+sp[-1:] == booktitle_pattern:
        bookname = ' '.join(sp[7:-1])
```
Python: How to use %%% when parsing text
[ "", "python", "" ]
I am aware of a few ways to terminate a python script, but here I am looking for a good and robust code design (sort of a recommended way to do this). Most of my code is written in functions that are called from the main function. I'd like to know what would be the most recommendable way to stop running a python script/program from a given function (called from main) and give an error message to the user.

Example of my current design (please comment on better practices if you have some ideas):

```
import sys

def run_function(x):
    if x == 4:
        print 'error, x cannot be 4'
        sys.exit(0)
    else:
        print 'good x'
        return 0

def main():
    x = 4
    run_function(x)
    return 0
```
Simply print the message, then use `sys.exit()` to end the program. The `argparse` module uses a utility function (adjusted to be more or less stand-alone):

```
def exit(status=0, message=None):
    if message:
        _print_message(message, sys.stderr)
    sys.exit(status)
```

where `_print_message()` writes the message to the designated file object, here `sys.stderr`; basically just using `print`.
In your example, I would suggest that you raise the error from the functions and only call the exit in `__main__`. The message will be passed using raise. Example:

```
import sys

def run_function(x):
    if x == 4:
        raise ValueError('error, x cannot be 4')
    else:
        print 'good x'
        return 0

def main():
    x = 4
    run_function(x)
    return 0

if __name__ == "__main__":
    try:
        main()
    except ValueError as e:
        sys.exit(e)
```

This way, your function indicates that it received a wrong value, and it is the caller that decides to call `sys.exit` based on the error.

A little more detail on [sys.exit](http://docs.python.org/2/library/sys.html#sys.exit):

> The optional argument arg can be an integer giving the exit status (defaulting to zero), or another type of object. If it is an integer, zero is considered "successful termination" and any nonzero value is considered "abnormal termination" by shells and the like. Most systems require it to be in the range 0-127, and produce undefined results otherwise. Some systems have a convention for assigning specific meanings to specific exit codes, but these are generally underdeveloped; Unix programs generally use 2 for command line syntax errors and 1 for all other kind of errors. If another type of object is passed, None is equivalent to passing zero, and any other object is printed to stderr and results in an exit code of 1. **In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs.**
code design and error handling: exit program with an error in python
[ "", "python", "error-handling", "exit", "" ]
I have an array which looks like this, for example:

```
array([[ 1,  1,  2,  0,  4],
       [ 5,  6,  7,  8,  9],
       [10,  0,  0, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22,  0, 24],
       [25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34],
       [35, 36, 37, 38, 39],
       [40, 41, 42, 43, 44],
       [45, 46, 47, 48, 49]])
```

I have another two arrays which are like:

```
array([[   0,    0,    0,    0],
       [   0,    0,    0,    0],
       [   0, 2891,    0,    0],
       [   0,    0,    0,    0],
       [   0,    0,    0, 2891]])
```

and

```
array([[  0,   0,   0, 643],
       [  0,   0,   0,   0],
       [  0,   0, 643,   0],
       [  0,   0,   0,   0],
       [  0,   0,   0,   0]])
```

What I want is to pick the value 2891 from the 2nd array into the first array in the corresponding position, and also 643 from the third array into the first array in the corresponding position, so that the final array should look like this:

```
array([[   1,    1,    2,  643,    4],
       [   5,    6,    7,    8,    9],
       [  10, 2891,  643,   13,   14],
       [  15,   16,   17,   18,   19],
       [  20,   21,   22, 2891,   24],
       [  25,   26,   27,   28,   29],
       [  30,   31,   32,   33,   34],
       [  35,   36,   37,   38,   39],
       [  40,   41,   42,   43,   44],
       [  45,   46,   47,   48,   49]])
```

So far I have tried this command:

```
np.place(a, a<1, np.amax(b))
```

where `a` refers to the first array and `b` refers to the 2nd array. What it does is just replace all the 0 values with the 2891 value. Can someone help?
You can find the indices where `y` and `z` are nonzero using the nonzero method:

```
In [9]: y.nonzero()
Out[9]: (array([2, 4]), array([1, 3]))

In [10]: z.nonzero()
Out[10]: (array([0, 2]), array([3, 2]))
```

You can select the associated values through [fancy indexing](http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.indexing.rst/#arrays-indexing):

```
In [11]: y[y.nonzero()]
Out[11]: array([2891, 2891])
```

and you can assign these values to locations in `x` with

```
In [13]: x[y.nonzero()] = y[y.nonzero()]
```

---

```
import numpy as np

x = np.array([[ 1,  1,  2,  0,  4],
              [ 5,  6,  7,  8,  9],
              [10,  0,  0, 13, 14],
              [15, 16, 17, 18, 19],
              [20, 21, 22,  0, 24],
              [25, 26, 27, 28, 29],
              [30, 31, 32, 33, 34],
              [35, 36, 37, 38, 39],
              [40, 41, 42, 43, 44],
              [45, 46, 47, 48, 49]])

y = np.array([[   0,    0,    0,    0],
              [   0,    0,    0,    0],
              [   0, 2891,    0,    0],
              [   0,    0,    0,    0],
              [   0,    0,    0, 2891]])

z = np.array([[  0,   0,   0, 643],
              [  0,   0,   0,   0],
              [  0,   0, 643,   0],
              [  0,   0,   0,   0],
              [  0,   0,   0,   0]])

x[y.nonzero()] = y[y.nonzero()]
x[z.nonzero()] = z[z.nonzero()]
print(x)
```

yields

```
[[   1    1    2  643    4]
 [   5    6    7    8    9]
 [  10 2891  643   13   14]
 [  15   16   17   18   19]
 [  20   21   22 2891   24]
 [  25   26   27   28   29]
 [  30   31   32   33   34]
 [  35   36   37   38   39]
 [  40   41   42   43   44]
 [  45   46   47   48   49]]
```
Do you mean select the max values from the second array and third array? If so, try the following:

Init data:

```
In [48]: arr = array([[ 1,  1,  2,  0,  4],
   ....:              [ 5,  6,  7,  8,  9],
   ....:              [10,  0,  0, 13, 14],
   ....:              [15, 16, 17, 18, 19],
   ....:              [20, 21, 22,  0, 24],
   ....:              [25, 26, 27, 28, 29],
   ....:              [30, 31, 32, 33, 34],
   ....:              [35, 36, 37, 38, 39],
   ....:              [40, 41, 42, 43, 44],
   ....:              [45, 46, 47, 48, 49]])

In [49]: arr1 = array([[   0,    0,    0,    0],
   ....:               [   0,    0,    0,    0],
   ....:               [   0, 2891,    0,    0],
   ....:               [   0,    0,    0,    0],
   ....:               [   0,    0,    0, 2891]])

In [50]: arr2 = array([[  0,   0,   0, 643],
   ....:               [  0,   0,   0,   0],
   ....:               [  0,   0, 643,   0],
   ....:               [  0,   0,   0,   0],
   ....:               [  0,   0,   0,   0]])
```

Select and replace:

```
In [51]: arr[arr1==arr1.max()] = arr1.max()

In [52]: arr[arr2==arr2.max()] = arr2.max()

In [53]: arr
Out[53]:
array([[   1,    1,    2,  643,    4],
       [   5,    6,    7,    8,    9],
       [  10, 2891,  643,   13,   14],
       [  15,   16,   17,   18,   19],
       [  20,   21,   22, 2891,   24],
       [  25,   26,   27,   28,   29],
       [  30,   31,   32,   33,   34],
       [  35,   36,   37,   38,   39],
       [  40,   41,   42,   43,   44],
       [  45,   46,   47,   48,   49]])
```
Python numpy array replacing
[ "", "python", "arrays", "numpy", "" ]
I've been coding the python "apscheduler" package ([Advanced Python Scheduler](http://pypi.python.org/pypi/APScheduler/)) into my app; so far it's going well, and I'm able to do almost everything that I had envisioned doing with it. Only one kink left to iron out...

The function my events are calling will only accept around 3 calls a second before failing, as it is triggering very slow hardware I/O :(

I've tried limiting the max number of threads in the threadpool from 20 to just 1 to try and slow down execution, but since I'm not really putting a big load on apscheduler my events are still firing pretty much concurrently (well... very, very close together at least).

Is there a way to 'stagger' different events that fire within the same second?
My solution for future reference: I added a basic bool lock in the function being called and a wait which seems to do the trick nicely - since it's not the calling of the function itself that raises the error, but rather a deadlock situation with what the function carries out :D
I have recently found this question because I, like yourself, was trying to stagger scheduled jobs slightly to compensate for slow hardware. Including an argument like this in the scheduler `add_job` call staggers the start time for each job by 200ms (while incrementing `idx` for each job): ``` next_run_time=datetime.datetime.now() + datetime.timedelta(seconds=idx * 0.2) ```
Advanced Python Scheduler (apscheduler) Stagger events that fire within the same second?
[ "", "python", "events", "concurrency", "apscheduler", "" ]
How to make `SQLAlchemy` in `Tornado` be `async`? I found an example for MongoDB on [async mongo example](http://emptysquare.net/blog/refactoring-tornado-code-with-gen-engine/) but I couldn't find anything like `motor` for `SQLAlchemy`. Does anyone know how to make `SQLAlchemy` queries execute with `tornado.gen`? (I am using `MySQL` below `SQLAlchemy`; at the moment my handlers read from the database and return the result, and I would like to make this async.)
ORMs are poorly suited for explicit asynchronous programming, that is, where the programmer must produce explicit callbacks anytime something that uses network access occurs. A primary reason for this is that ORMs make extensive use of the [lazy loading](http://www.martinfowler.com/eaaCatalog/lazyLoad.html) pattern, which is more or less incompatible with explicit async. Code that looks like this:

```
user = Session.query(User).first()
print user.addresses
```

will actually emit two separate queries - one when you say `first()` to load a row, and the next when you say `user.addresses`, in the case that the `.addresses` collection isn't already present, or has been expired. Essentially, nearly every line of code that deals with ORM constructs might block on IO, so you'd be in extensive callback spaghetti within seconds - and to make matters worse, the vast majority of those code lines won't *actually* block on IO, so all the overhead of connecting callbacks together for what would otherwise be simple attribute access operations will make your program vastly less efficient too.

A major issue with explicit asynchronous models is that they add tremendous Python function call overhead to complex systems - not just on the user-facing side like you get with lazy loading, but on the internal side as well regarding how the system provides abstraction around the Python database API (DBAPI). For SQLAlchemy to even have basic async support would impose a severe performance penalty on the vast majority of programs that don't use async patterns, and even those async programs that are not highly concurrent. Consider SQLAlchemy, or any other ORM or abstraction layer, might have code like the following:

```
def execute(connection, statement):
    cursor = connection.cursor()
    cursor.execute(statement)
    results = cursor.fetchall()
    cursor.close()
    return results
```

The above code performs what seems to be a simple operation, executing a SQL statement on a connection.
But using a fully async DBAPI like psycopg2's async extension, the above code blocks on IO at least three times. So to write the above code in explicit async style, even when there's no async engine in use and the callbacks aren't actually blocking, means the above outer function call becomes at least three function calls, instead of one, not including the overhead imposed by the explicit asynchronous system or the DBAPI calls themselves. So a simple application is automatically given a penalty of 3x the function call overhead surrounding a simple abstraction around statement execution. And in Python, [function call overhead is everything](https://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677). For these reasons, I continue to be less than excited about the hype surrounding explicit async systems, at least to the degree that some folks seem to want to go all async for everything, like delivering web pages (see node.js). I'd recommend using implicit async systems instead, most notably [gevent](http://www.gevent.org/), where you get all the non-blocking IO benefits of an asynchronous model and none of the structural verbosity/downsides of explicit callbacks. I continue to try to understand use cases for these two approaches, so I'm puzzled by the appeal of the explicit async approach as a solution to all problems, i.e. as you see with node.js - we're using scripting languages in the first place to cut down on verbosity and code complexity, and explicit async for simple things like delivering web pages seems to do nothing but add boilerplate that can just as well be automated by gevent or similar, if blocking IO is even such a problem in a case like that (plenty of high volume websites do fine with a synchronous IO model). 
Gevent-based systems are production proven and their popularity is growing, so if you like the code automation that ORMs provide, you might also want to embrace the async-IO-scheduling automation that a system like gevent provides. **Update**: Nick Coghlan pointed out his [great article on the subject of explicit vs. implicit async](http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/async_programming.html) which is also a must read here. And I've also been updated to the fact that [pep-3156 now welcomes interoperability with gevent](http://hg.python.org/peps/diff/26a98d94bb4c/pep-3156.txt#l1.63), reversing its previously stated disinterest in gevent, largely thanks to Nick's article. So in the future I would recommend a hybrid of Tornado using gevent for the database logic, once the system of integrating these approaches is available.
I had this same issue in the past and I couldn't find a reliable async MySQL library. However there is a cool solution using [**Asyncio**](https://docs.python.org/3/library/asyncio.html) + **Postgres**. You just need to use the [**aiopg**](http://aiopg.readthedocs.org/en/stable/index.html) library, which comes with SQLAlchemy support out of the box:

```
import asyncio
from aiopg.sa import create_engine
import sqlalchemy as sa

metadata = sa.MetaData()

tbl = sa.Table('tbl', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('val', sa.String(255)))

async def create_table(engine):
    async with engine.acquire() as conn:
        await conn.execute('DROP TABLE IF EXISTS tbl')
        await conn.execute('''CREATE TABLE tbl (
                                  id serial PRIMARY KEY,
                                  val varchar(255))''')

async def go():
    async with create_engine(user='aiopg',
                             database='aiopg',
                             host='127.0.0.1',
                             password='passwd') as engine:
        async with engine.acquire() as conn:
            await conn.execute(tbl.insert().values(val='abc'))
            async for row in conn.execute(tbl.select()):
                print(row.id, row.val)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())
```

Updated as mentioned by @cglacet
How to make SQLAlchemy in Tornado to be async?
[ "", "python", "python-2.7", "sqlalchemy", "tornado", "" ]
Consider the following strings:

```
server
server_secure
server_APAC_secure
server_APAC
server_US
server_US_secure
server_EU_secure
server_ISRAEL
```

The template is straightforward:

1. The string `server`
2. An optional region string (e.g. `US`, `APAC`) prefixed with an underscore. The region string can be any sequence of English letters, except for the word `secure`.
3. An optional `secure`, prefixed with an underscore

I would like to get the list of regions that appear in a set of strings. It is quite simple to do using string manipulation, but I'm quite sure it can be achieved using regex.

**How do I extract the optional region substring from each string?**
`server_((?!secure)[a-zA-Z]+)`

The first capture will have the required name.

Demo: <http://regexr.com?34rlv>
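A quick usage sketch of that pattern (the sample strings are the ones from the question):

```python
import re

# Negative lookahead: the letters after "server_" must not be the word "secure"
pattern = re.compile(r'server_((?!secure)[a-zA-Z]+)')

samples = [
    'server', 'server_secure', 'server_APAC_secure', 'server_APAC',
    'server_US', 'server_US_secure', 'server_EU_secure', 'server_ISRAEL',
]

regions = []
for s in samples:
    m = pattern.search(s)
    if m:
        regions.append(m.group(1))
# regions -> ['APAC', 'APAC', 'US', 'US', 'EU', 'ISRAEL']
```

`server` and `server_secure` produce no match, so only real region names are collected.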
Simply match anything that is either followed by a word boundary, or by `_secure`:

```
region = re.compile(r'server_([A-Z]+)(?:_secure|\b)')
```

This specifically only matches uppercase characters; you can expand the character class (`[...]`) as needed to match more. Demo:

```
>>> import re
>>> region = re.compile(r'server_([A-Z]+)(?:_secure|\b)')
>>> example = '''\
... server
... server_secure
... server_APAC_secure
... server_APAC
... server_US
... server_US_secure
... server_EU_secure
... server_ISRAEL
... '''.splitlines()
>>> for ex in example:
...     match = region.search(ex)
...     if match is not None:
...         print match.group(1)
...
APAC
APAC
US
US
EU
ISRAEL
```
Text after an underscore except for a specific word
[ "", "python", "regex", "regex-negation", "" ]
I have this table and data in my database: **tblPhotos** ``` ID | Title | Description ------------------------------ 1 | NULL | Some data - Title 123 - Subtitle - Photographer: John Doe 2 | NULL | Some data - Photographer: Jane Doe 3 | NULL | Some data - Title 345 - Photographer: John Doe Jr ``` The data in the `Description` column comes in two formats: ``` {GARBAGE DATA} - {TITLE DATA} - Photographer: ..., or {GARBAGE DATA} - Photographer: ... ``` Basically, I have thousands of rows, where the `Title` column is empty. I need to somehow pull it out of the `Description` column. The `Title` exists between the first dash and the last dash of the `Description` column. Here's what the data should look like: **tblPhotos** ``` ID | Title | Description ------------------------------------------------------------------------------------- 1 | Title 123 - Subtitle | Some data - Title 123 - Subtitle - Photographer: John Doe 2 | NULL | Some data - Photographer: Jane Doe 3 | Title 345 | Some data - Title 345 - Photographer: John Doe Jr ``` How can I make a script that will parse a column, and update another column with that parsed data?
The location of the first dash is easy: `CHARINDEX('-', Description)`. The location of the *last* dash is a bit harder because you have to reverse the string to get it: `LEN(Description) - CHARINDEX('-', REVERSE(Description))`. To get the stuff in between you need to find the length by subtracting the two positions: `SUBSTRING(Description, CHARINDEX('-', Description) + 1, LEN(Description) - CHARINDEX('-', REVERSE(Description)) - CHARINDEX('-', Description))` Putting it all together you get: ``` UPDATE tblPhotos SET Title = RTRIM(LTRIM( SUBSTRING(Description, CHARINDEX('-', Description) + 1, LEN(Description) - CHARINDEX('-', REVERSE(Description)) - CHARINDEX('-', Description)))) WHERE Title IS NULL ```
Try something like this: ``` update tblPhotos set title = substring(description, charindex('-', description)+1, (charindex('-', description, charindex('-', description) + 1) - charindex('-', description) )-1 ) where title is null and description like '%-%-%' ``` You might need to trim spaces from the title.
Update column by parsing another column with T-SQL
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I'm trying to make a plot: ``` from matplotlib import * import sys from pylab import * f = figure ( figsize =(7,7) ) ``` But I get this error when I try to execute it: ``` File "mratio.py", line 24, in <module> f = figure( figsize=(7,7) ) TypeError: 'module' object is not callable ``` I have run a similar script before, and I think I've imported all the relevant modules.
The `figure` is a module provided by `matplotlib`. You can read more about it in the [Matplotlib documentation](http://matplotlib.org/api/figure_api.html) I think what you want is `matplotlib.figure.Figure` (the class, rather than the module) It's [documented here](http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure) ``` from matplotlib import * import sys from pylab import * f = figure.Figure( figsize =(7,7) ) ``` or ``` from matplotlib import figure f = figure.Figure( figsize =(7,7) ) ``` or ``` from matplotlib.figure import Figure f = Figure( figsize =(7,7) ) ``` or to get `pylab` to work without conflicting with `matplotlib`: ``` from matplotlib import * import sys import pylab as pl f = pl.figure( figsize =(7,7) ) ```
You need to do: ``` matplotlib.figure.Figure ``` Here, `matplotlib.figure` is a module, and `Figure` is a class defined in that module. Reference [here](http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure). So you would have to call it like this: ``` f = figure.Figure(figsize=(7,7)) ```
Python "'module' object is not callable"
[ "", "python", "module", "matplotlib", "" ]
I have 3 columns let say `A`, `B`, and `C`. I need to count the `NULL` values in each column. For example: ``` A | B | C ------------- 1 |NULL| 1 1 | 1 | NULL NULL| 1 | 1 NULL|NULL| 1 ``` Should output: ``` A | B | C --------------- 2 | 2 | 1 ``` I've tried count, sum, sub-queries but nothing has worked for me yet. Any input would be appreciated!
``` SELECT COUNT(*)-COUNT(A) As A, COUNT(*)-COUNT(B) As B, COUNT(*)-COUNT(C) As C FROM YourTable; ```
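This works because `COUNT(column)` skips NULLs while `COUNT(*)` counts every row, so the difference is the NULL count. A quick sanity check (shown here with Python's built-in `sqlite3` purely for illustration; the query itself is standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (A INTEGER, B INTEGER, C INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, None, 1), (1, 1, None), (None, 1, 1), (None, None, 1)],
)

# COUNT(col) ignores NULLs, COUNT(*) does not -- the difference is the NULL count
row = conn.execute(
    "SELECT COUNT(*) - COUNT(A), COUNT(*) - COUNT(B), COUNT(*) - COUNT(C) FROM t"
).fetchone()
print(row)  # (2, 2, 1)
```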
For SQL SERVER you can use the following: ``` SET NOCOUNT ON DECLARE @Schema NVARCHAR(100) = '<Your Schema>' DECLARE @Table NVARCHAR(100) = '<Your Table>' DECLARE @sql NVARCHAR(MAX) ='' IF OBJECT_ID ('tempdb..#Nulls') IS NOT NULL DROP TABLE #Nulls CREATE TABLE #Nulls (TableName sysname, ColumnName sysname , ColumnPosition int ,NullCount int , NonNullCount int) SELECT @sql += 'SELECT '''+TABLE_NAME+''' AS TableName , '''+COLUMN_NAME+''' AS ColumnName, '''+CONVERT(VARCHAR(5),ORDINAL_POSITION)+''' AS ColumnPosition, SUM(CASE WHEN '+COLUMN_NAME+' IS NULL THEN 1 ELSE 0 END) CountNulls , COUNT(' +COLUMN_NAME+') CountnonNulls FROM '+QUOTENAME(TABLE_SCHEMA)+'.'+QUOTENAME(TABLE_NAME)+';'+ CHAR(10) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = @Schema AND TABLE_NAME = @Table INSERT INTO #Nulls EXEC sp_executesql @sql SELECT * FROM #Nulls DROP TABLE #Nulls ``` You will receive a result set with the count of NULL values and non-NULL values in each column of your table.
Count NULL Values from multiple columns with SQL
[ "", "sql", "" ]
I'm trying to convert some code from MATLAB to Python. Is there a Python equivalent to MATLAB's dataset array? <http://www.mathworks.com/help/stats/dataset-arrays.html>
If you want to perform numerical operations on the data set, `numpy` would be the way to go. You can specify arbitrary record types by combining basic numpy `dtypes`, and access the records by their field names, similar to Python's built-in dictionary access. ``` import numpy # give the string field an explicit size; a bare numpy.str_ would # create a zero-length string field myDtype = numpy.dtype([('name', numpy.str_, 16), ('age', numpy.int32), ('score', numpy.float64)]) myData = numpy.empty(10, dtype=myDtype) # Create empty data sets print myData['age'] # prints all ages ``` You can even save and re-load these data using the `tofile` and `fromfile` functions in numpy and continue using the named fields: ``` with open('myfile.txt', 'wb') as f: numpy.ndarray.tofile(myData, f) with open('myfile.txt', 'rb') as f: loadedData = numpy.fromfile(f, dtype=myDtype) print loadedData['age'] ```
You should look into [pandas](http://pandas.pydata.org/) library, which is modeled after R's data frame. Not to mention this is way better than MATLAB's dataset
Python equivalent of MATLAB's dataset array
[ "", "python", "matlab", "dataset", "machine-learning", "" ]
How is GUID internally stored and compared by SQL (particularly MS SQL server 2008)? Is it a number or string? Also, is there a big performance hit when using GUID as primary key? Besides the problem with clustering mentioned here: [What are the best practices for using a GUID as a primary key, specifically regarding performance?](https://stackoverflow.com/questions/11938044/guid-as-primary-key-best-practices) I think it should be 128bit number (as described [here](http://blogs.msdn.com/b/oldnewthing/archive/2008/06/27/8659071.aspx)), but I cannot find mode details on how is it implemented in SQL server.
16 bytes, exactly as the [GUID structure](http://msdn.microsoft.com/en-us/library/windows/desktop/aa373931%28v=vs.85%29.aspx): ``` typedef struct _GUID { DWORD Data1; WORD Data2; WORD Data3; BYTE Data4[8]; } GUID; ```
Performance wise, normal `GUID` is slower than `INT` in SQL Server If you plan to use `GUID`, use `uniqueidentifier` instead of `varchar` as data type. Microsoft did not mention how they implement it, there is some speed optimization when you use `uniqueidentifier` as the data type. To use `GUID` as primary key without sacrificing speed of integer, make the `GUID` value *sequential*. Define `uniqueidentifier` data type as PK, set the default to `NEWSEQUENTIALID()`. See [NEWSEQUENTIALID (Transact-SQL)](https://learn.microsoft.com/en-us/sql/t-sql/functions/newsequentialid-transact-sql) for further details. As to how sequential `GUID` values help performance, see [The Cost of GUIDs as Primary Keys](http://www.informit.com/articles/article.aspx?p=25862).
What data type is GUID in SQL server?
[ "", "sql", "sql-server", "primary-key", "guid", "" ]
Conventionally `1e3` means `10**3`. ``` >>> 1e3 1000.0 >>> 10**3 1000 ``` A similar case is `exp(3)` compared to `e**3`. ``` >>> exp(3) 20.085536923187668 >>> e**3 20.085536923187664 ``` However, now notice what happens when the exponent is a `float` value: ``` >>> exp(3.1) 22.197951281441636 >>> e**3.1 22.197951281441632 ``` which is fine. Now for the first example: ``` >>> 1e3.1 File "<stdin>", line 1 1e3.1 ^ SyntaxError: invalid syntax >>> 10**3.1 1258.9254117941675 ``` which shows Python does not like `1e3.1`; Fortran rejects it too. Assuming this is standard behaviour, why is it like that?
The notation with the `e` is a numeric literal, part of the lexical syntax of many programming languages, based on standard form/scientific notation. The purpose of this notation is to allow you to specify very large/small numbers by shifting the point position. It's not intended to allow you to encode multiplication by some arbitrary power of 10 into numeric literals. Therefore, that point and the following digits aren't even recognised as part of the numeric literal token. If you want arbitrary powers, as you've found, there are math functions and operators that do the job. Unlike a numeric literal, you even get to determine the parameter values at run-time.
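To see the distinction concretely (a small sketch; the names are illustrative) — the literal form is fixed at parse time, while the operator form accepts any run-time exponent:

```python
# 1e3 is a single float literal, resolved before the program runs
assert 1e3 == 10 ** 3 == 1000.0

# an arbitrary (possibly fractional) exponent must use run-time math
exponent = 3.1
value = 10 ** exponent
print(value)  # 1258.9254117941675

# the literal form with a fractional exponent does not even parse:
# "1e3.1" tokenizes as the number 1e3 followed by the number .1
try:
    compile("1e3.1", "<example>", "eval")
except SyntaxError:
    print("SyntaxError, as expected")
```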
From the [docs](http://docs.python.org/2/library/decimal.html#decimal-objects): ``` sign ::= '+' | '-' digit ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' indicator ::= 'e' | 'E' digits ::= digit [digit]... decimal-part ::= digits '.' [digits] | ['.'] digits exponent-part ::= indicator [sign] digits #no dots allowed here ```
Why does exponential notation with decimal values fail?
[ "", "python", "floating-point", "ieee-754", "exponent", "" ]
I have a sorted list of "box definitions" that I would like to consolidate. The list looks something like: ``` big_list = [\ # ... # ... [3, 4, 5, 4, 5, 6, 65],\ [3, 4, 5, 4, 5, 6, 60],\ [3, 4, 5, 4, 5, 6, 55],\ [3, 4, 5, 4, 5, 6, 52],\ [3, 4, 5, 4, 5, 6, 23],\ [3, 4, 5, 4, 5, 6, 17],\ [3, 4, 5, 4, 5, 6, 0],\ [5, 8, 9, 6, 9, 10, 90],\ [5, 8, 9, 6, 9, 10, 84],\ [5, 8, 9, 6, 9, 10, 32],\ [5, 8, 9, 6, 9, 10, 0],\ # ... # ... [750, 800, 900, 751, 801, 901, 97],\ [750, 800, 900, 751, 801, 901, 24],\ [750, 800, 900, 751, 801, 901, 17],\ [750, 800, 900, 751, 801, 901, 16],\ [750, 800, 900, 751, 801, 901, 0]\ # ... # ... ] ``` Where the box "format" is: [x1, y1, z1, x2, y2, z2, attribute], and we can assume dx=1, dy=1, dz=1 Also, we can assume the list has already been sorted by something like: ``` big_list=sorted(big_list, key=lambda n:n[6], reverse=True) big_list=sorted(big_list, key=lambda n:n[2]) big_list=sorted(big_list, key=lambda n:n[1]) big_list=sorted(big_list, key=lambda n:n[0]) ``` The list may be several millions of items long, and I would like to reduce the list so that any discrete "box" only gets the highest "attribute"...so something in this case like: ``` reduced_big_list = [\ [3, 4, 5, 4, 5, 6, 65],\ [5, 8, 9, 6, 9, 10, 90],\ [750, 800, 900, 751, 801, 901, 97]\ ] ``` The method I am currently using on this list is something like: ``` i = 0 while i < len(big_list)-1: if big_list[i][0]==big_list[i+1][0]\ and big_list[i][1]==big_list[i+1][1]\ and big_list[i][2]==big_list[i+1][2] \ and big_list[i][6] >= big_list[i+1][6]: del big_list[i+1] else: i=i+1 ``` The problem is that when the list is "long" (10 million+ "boxes"), the process is very, very slow. Is there a clever way to parallelize this list "decimation" process or perhaps quicken this process?
The slowness comes from the call to `del`, which shifts every item in the *entire* tail of the list by one position, so each deletion costs linear time. In your case, simply don't use `del`. Instead, build a new list: start from an empty list and `append` the items that you want to keep.
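A minimal sketch of that approach, using a shortened version of the question's sample data and assuming the list is pre-sorted as described (grouped by box, highest attribute first):

```python
big_list = [
    [3, 4, 5, 4, 5, 6, 65], [3, 4, 5, 4, 5, 6, 60], [3, 4, 5, 4, 5, 6, 0],
    [5, 8, 9, 6, 9, 10, 90], [5, 8, 9, 6, 9, 10, 0],
    [750, 800, 900, 751, 801, 901, 97], [750, 800, 900, 751, 801, 901, 0],
]

reduced = []
prev_key = None
for box in big_list:        # appending is O(1); del from the middle is O(n)
    key = tuple(box[:3])    # the first corner identifies the box
    if key != prev_key:     # sorted input: the first entry has the max attribute
        reduced.append(box)
        prev_key = key

print(reduced)
# [[3, 4, 5, 4, 5, 6, 65], [5, 8, 9, 6, 9, 10, 90], [750, 800, 900, 751, 801, 901, 97]]
```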
The reason it is slow is that each `del` takes a linear amount of time, making the total process O(n^2). If instead of deleting lines from the original list, you append the lines you want to keep into a new list, it should be much faster. But there are other, possibly more Pythonic, ways to perform the same thing. For example, using `itertools.groupby` (assuming the list is sorted as you specified): ``` from itertools import groupby new_list = [next(group) for val,group in groupby(big_list, key=lambda x: x[:3])] ``` This will group the list items by the first 3 elements, and return a list of the first item in each group.
Parallelize a list consolidation in Python
[ "", "python", "list", "parallel-processing", "" ]
For each connection in an array called ALLconn, I would like to compare it to my sql table. If exist, then add to my listview. Here is my code below, but it does not seem to work: ``` Dim LoginFilter As Object Dim SelCurrAllSessions As SqlClient.SqlCommand = New SqlClient.SqlCommand("Select * from CurrAllSessions", LFcnn) SelCurrAllSessions.CommandType = CommandType.Text LoginFilter = SelCurrAllSessions.ExecuteReader For Each conn In AllConn While LoginFilter.Read() If conn.UserName.ToString() = LoginFilter.Item(0) Then ListBox1.Items.Add(LoginFilter.Item(0)) End If End While Next ```
Well you need to change the order of the loops ``` While LoginFilter.Read() For Each conn In AllConn If conn.UserName.ToString() = LoginFilter.Item(0).ToString Then ListBox1.Items.Add(LoginFilter.Item(0).ToString) Exit For End If Next End While ``` This is necessary because in your original code, the inner While runs to the end of the data loaded from the database; then, when you try to check the next conn, the reader cannot be repositioned at the start of the data loaded by the database.
It's the other way around, use `Contains` to check if the string is contained in the collection, then you can add it to the `ListBox`: ``` Using LoginFilter = SelCurrAllSessions.ExecuteReader() While LoginFilter.Read() Dim connection = LoginFilter.GetString(0) If AllConn.Contains(connection) Then ListBox1.Items.Add(connection) End If End While End Using ```
Compare array with SQl table
[ "", "sql", "vb.net", "" ]
I saw this statement in the official [Python documentation](http://docs.python.org/2/library/stdtypes.html) : ``` str.upper().isupper() might be False ``` Can someone please explain ?
If the string is a number or is made of characters without an uppercase variant (special characters etc.) For example: ``` >>> '42'.upper().isupper() False >>> '-'.upper().isupper() False ``` And as expected: ``` >>> '42a'.upper().isupper() True ``` Be careful, since there is some strange behaviour for many unicode characters (see the answer from thg435: <https://stackoverflow.com/a/16495101/531222>)
More context to their statement: > str.upper().isupper() might be False if s contains uncased characters > or if the Unicode category of the resulting character(s) is not “Lu” > (Letter, uppercase), but e.g. “Lt” (Letter, titlecase). An example of an uncased character: ``` >>> '1'.upper().isupper() False ```
str.upper().isupper() might be False
[ "", "python", "" ]
I have two tables: ``` users attributes id|name id|name|user_id ------- --------------- 1 |foo 1 |bla | 1 2 |bar 1 |blub| 1 1 |bla | 2 ``` How do I create a query gives users with both the "bla" AND "blub" attributes? In this case it should only return the user "foo". I know that the data is not normalized.
``` SELECT u.*, a.id, b.id, a.name, b.name FROM users u JOIN attributes a ON a.user_id = u.id AND a.name = 'bla' JOIN attributes b ON b.user_id = u.id AND b.name = 'blub' ```
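A quick way to verify the double-join approach (demonstrated here with Python's built-in `sqlite3` and the question's sample data; the join pattern itself is portable across engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE attributes (id INTEGER, name TEXT, user_id INTEGER);
    INSERT INTO users VALUES (1, 'foo'), (2, 'bar');
    INSERT INTO attributes VALUES
        (1, 'bla', 1), (1, 'blub', 1), (1, 'bla', 2);
""")

# join the attributes table twice -- once per required attribute name
rows = conn.execute("""
    SELECT u.name
    FROM users u
    JOIN attributes a ON a.user_id = u.id AND a.name = 'bla'
    JOIN attributes b ON b.user_id = u.id AND b.name = 'blub'
""").fetchall()
print(rows)  # [('foo',)] -- only 'foo' has both attributes
```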
Assuming an attribute association to a user is unique... if you need 3 conditions to be true add the conditions to the in and adjust count up 1. ``` SELECT u.name FROM users u INNER JOIN attributes a on A.user_Id = u.id WHERE a.name in ('bla','blub') GROUP by u.name HAVING count(*)=2 ``` and if you don't have an unique association, or you need to join to another table you could always do... ``` SELECT u.name FROM users u INNER JOIN attributes a on A.user_Id = u.id WHERE a.name in ('bla','blub') GROUP by u.name HAVING count(distinct A.name)=2 ``` for a slight performance hit. but this allows you to join and get back additional fields which others have indicated was a detriment to this method. This allows for scaling of the solution instead of incurring the cost of joining each time to different tables. In addition, if you needed thirty-something values to associate, you may run into restrictions on the number of allowed joins.
How do I match against multiple conditions on a table join?
[ "", "sql", "" ]
I currently have a list of connections stored in a list where each connection is a directed link that connects two points and no point ever links to more than one point or is linked to by more than one point. For example: ``` connections = [ (3, 7), (6, 5), (4, 6), (5, 3), (7, 8), (1, 2), (2, 1) ] ``` Should produce: ``` ordered = [ [ 4, 6, 5, 3, 7, 8 ], [ 1, 2, 1 ] ] ``` I have attempt to do this using an algorithm that takes an input point and a list of connections and recursively calls itself to find the next point and add it to the growing ordered list. However, my algorithm breaks down when I don't start with the correct point (though this should just be a matter of repeating the same algorithm in reverse), but also when there are multiple unconnected strands What would be the best way of writing an efficient algorithm to order these connections?
## Algorithm for a Solution You're looking for a [topological sort](http://en.wikipedia.org/wiki/Topological_sorting) algorithm: ``` from collections import defaultdict def topological_sort(dependency_pairs): 'Sort values subject to dependency constraints' num_heads = defaultdict(int) # num arrows pointing in tails = defaultdict(list) # list of arrows going out for h, t in dependency_pairs: num_heads[t] += 1 tails[h].append(t) ordered = [h for h in tails if h not in num_heads] for h in ordered: for t in tails[h]: num_heads[t] -= 1 if not num_heads[t]: ordered.append(t) cyclic = [n for n, heads in num_heads.iteritems() if heads] return ordered, cyclic if __name__ == '__main__': connections = [(3, 7), (6, 5), (4, 6), (5, 3), (7, 8), (1, 2), (2, 1)] print topological_sort(connections) ``` Here is the output for your sample data: ``` ([4, 6, 5, 3, 7, 8], [1, 2]) ``` The runtime is linearly proportional to the number of edges (dependency pairs). ## HOW IT WORKS The algorithm is organized around a lookup table called num\_heads that keeps a count of the number of predecessors (incoming arrows). Consider an example with the following connections: `a->h b->g c->f c->h d->i e->d f->b f->g h->d h->e i->b`, the counts are: ``` node number of incoming edges ---- ------------------------ a 0 b 2 c 0 d 2 e 1 f 1 g 2 h 2 i 1 ``` The algorithm works by "visiting" nodes with no predecessors. For example, nodes `a` and `c` have no incoming edges, so they are visited first. Visiting means that the nodes are output and removed from the graph. When a node is visited, we loop over its successors and decrement their incoming count by one. For example, in visiting node `a`, we go to its successor `h` to decrement its incoming count by one (so that `h 2` becomes `h 1`). Likewise, when visiting node `c`, we loop over its successors `f` and `h`, decrementing their counts by one (so that `f 1` becomes `f 0` and `h 1` becomes `h 0`). 
The nodes `f` and `h` no longer have incoming edges, so we repeat the process of outputting them and removing them from the graph until all the nodes have been visited. In the example, the visitation order (the topological sort) is: ``` a c f h e d i b g ``` If num\_heads ever arrives at a state where there are no nodes without incoming edges, then it means there is a cycle that cannot be topologically sorted and the algorithm exits to show the requested results.
Something like this: ``` from collections import defaultdict lis = [ (3, 7), (6, 5), (4, 6), (5, 3), (7, 8), (1, 2), (2, 1) ] dic = defaultdict(list) for k,v in lis: if v not in dic: dic[k].append(v) else: dic[k].extend([v]+dic[v]) del dic[v] for k,v in dic.items(): for x in v: if x in dic and x!=k: dic[k].extend(dic[x]) del dic[x] print dic print [[k]+v for k,v in dic.items()] ``` **output:** ``` defaultdict(<type 'list'>, {2: [1, 2], 4: [6, 5, 3, 7, 8]}) [[2, 1, 2], [4, 6, 5, 3, 7, 8]] ```
How can I order a list of connections
[ "", "python", "algorithm", "graph-theory", "graph-traversal", "" ]
I'm trying to parse through a csv file and extract the data from only specific columns. Example csv: ``` ID | Name | Address | City | State | Zip | Phone | OPEID | IPEDS | 10 | C... | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 | ``` I'm trying to capture only specific columns, say `ID`, `Name`, `Zip` and `Phone`. Code I've looked at has led me to believe I can call the specific column by its corresponding number, so ie: `Name` would correspond to `2` and iterating through each row using `row[2]` would produce all the items in column 2. Only it doesn't. Here's what I've done so far: ``` import sys, argparse, csv from settings import * # command arguments parser = argparse.ArgumentParser(description='csv to postgres',\ fromfile_prefix_chars="@" ) parser.add_argument('file', help='csv file to import', action='store') args = parser.parse_args() csv_file = args.file # open csv file with open(csv_file, 'rb') as csvfile: # get number of columns for line in csvfile.readlines(): array = line.split(',') first_item = array[0] num_columns = len(array) csvfile.seek(0) reader = csv.reader(csvfile, delimiter=' ') included_cols = [1, 2, 6, 7] for row in reader: content = list(row[i] for i in included_cols) print content ``` and I'm expecting that this will print out only the specific columns I want for each row except it doesn't, I get the last column only.
The only way you would be getting the last column from this code is if you don't include your print statement **in** your `for` loop. This is most likely the end of your code: ``` for row in reader: content = list(row[i] for i in included_cols) print content ``` You want it to be this: ``` for row in reader: content = list(row[i] for i in included_cols) print content ``` Note the indentation: in the first version `print content` sits outside the loop, so it runs once after the loop finishes and only shows the last row; in the second it is inside the loop and runs for every row. Now that we have covered your mistake, I would like to take this time to introduce you to the [pandas](http://pandas.pydata.org/) module. Pandas is spectacular for dealing with csv files, and the following code would be all you need to read a csv and save an entire column into a variable: ``` import pandas as pd df = pd.read_csv(csv_file) saved_column = df.column_name #you can also use df['column_name'] ``` so if you wanted to save all of the info in your column `Names` into a variable, this is all you need to do: ``` names = df.Names ``` It's a great module and I suggest you look into it. If for some reason your print statement was in your `for` loop and it was still only printing out the last column (which shouldn't happen), let me know, as my assumption may have been wrong. Your posted code has a lot of indentation errors so it was hard to know what was supposed to be where. Hope this was helpful!
``` import csv from collections import defaultdict columns = defaultdict(list) # each value in each column is appended to a list with open('file.txt') as f: reader = csv.DictReader(f) # read rows into a dictionary format for row in reader: # read a row as {column1: value1, column2: value2,...} for (k,v) in row.items(): # go over each column name and value columns[k].append(v) # append the value into the appropriate list # based on column name k print(columns['name']) print(columns['phone']) print(columns['street']) ``` With a file like ``` name,phone,street Bob,0893,32 Silly James,000,400 McHilly Smithers,4442,23 Looped St. ``` Will output ``` >>> ['Bob', 'James', 'Smithers'] ['0893', '000', '4442'] ['32 Silly', '400 McHilly', '23 Looped St.'] ``` Or alternatively if you want numerical indexing for the columns: ``` with open('file.txt') as f: reader = csv.reader(f) next(reader) for row in reader: for (i,v) in enumerate(row): columns[i].append(v) print(columns[0]) >>> ['Bob', 'James', 'Smithers'] ``` To change the deliminator add `delimiter=" "` to the appropriate instantiation, i.e `reader = csv.reader(f,delimiter=" ")`
Read specific columns from a csv file with csv module?
[ "", "python", "csv", "" ]
I have the following data array, with 2 million entries: ``` [20965 1239 296 231 -1 -1 20976 1239 299 314 147 337 255 348 -1 -1 20978 1239 136 103 241 154 27 293 -1 -1 20984 1239 39 161 180 184 -1 -1 20990 1239 291 31 405 50 569 357 -1 -1 20997 1239 502 25 176 215 360 281 -1 -1 21004 1239 -1 -1 21010 1239 286 104 248 252 -1 -1 21017 1239 162 38 331 240 368 363 321 412 -1 -1 21024 1239 428 323 -1 -1 21030 1239 -1 -1 21037 1239 325 28 353 102 477 189 366 251 143 452 ... ect ``` This array contains x,y coordinates of photons on a CCD chip, I want to go through the array and add up all these photon events in a matrix with dimensions equal to the CCD chip. The formatting is as follows: `number number x0 y0 x1 y1 -1 -1`. The two `number` entries I don't care too much about, the x0 y0 ect. is what I want to get out. The `-1` entries is a delimiter indicating a new frame, after these there is always the 2 'number' entries again. I have made this code, which does work: ``` i = 2 pixels = np.int32(data_height)*np.int32(data_width) data = np.zeros(pixels).reshape(data_height, data_width) while i < len(rdata): x = rdata[i] y = rdata[i+1] if x != -1 and y != -1: data[y,x] = data[y,x] + 1 i = i + 2 elif x == -1 and y == -1: i = i + 4 else: print "something is wrong" print i print x print y ``` `rdata` is my orignal array. `data` is the resulting matrix which starts out with only zeroes. The while loop starts at the first `x` coord, at index 2 and then if it finds two consecutive `-1` entries it will skip four entries. The script works fine, but it does take 7 seconds to run. How can I speed up this script? I am a beginner with python, and from *the hardest way to learn python* I know that while loops should be avoided, but rewriting to a for loop is even slower! 
``` for i in range(2, len(rdata), 2): x = rdata[i] y = rdata[i+1] if x != -1 and y != -1: px = rdata[i-2] py = rdata[i-1] if px != -1 and py != -1: data[y,x] = data[y,x] + 1 ``` Maybe someone can think of a faster method, something along the lines of `np.argwhere(rdata == -1)` and use this output to extract the locations of the `x` and `y` coordinates? --- Update: Thanks for all answers! I used askewchan's method to conserve frame information, however, as my data file is 300000 frames long I get a memory error when I try to generate a numpy array with dimensions (300000, 640, 480). I could get around this by making a generator object: ``` def bindata(splits, h, w, data): f0=0 for i,f in enumerate(splits): flat_rdata = np.ravel_multi_index(tuple(data[f0:f].T)[::-1], (h, w)) dataslice = np.zeros((w,h), dtype='h') dataslice = np.bincount(flat_rdata, minlength=pixels).reshape(h, w) f0 = f yield dataslice ``` I then make a tif from the array using a modified version of Gohlke's [tifffile.py](http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html) to generate a tiff file from the data. It works fine, but I need to figure out a way to compress the data as the tiff file is >4gb (at this point the script crashes). I have very sparse arrays, 640\*480 all zeros with some dozen ones per frame, the original data file is 4MB so some compression should be possible.
Sounds like all you want is to do some boolean indexing magic to get rid of the invalid frame stuff, and then of course add the pixels up. ``` rdata = rdata.reshape(-1, 2) mask = (rdata != -1).all(1) # remove every x, y pair that is after a pair with a -1. mask[1:][mask[:-1] == False] = False # remove first x, y pair mask[0] = False rdata = rdata[mask] # Now need to use bincount, [::-1], since you use data[y,x]: flat_rdata = np.ravel_multi_index(tuple(rdata.T)[::-1], (data_height, data_width)) res = np.bincount(flat_rdata, minlength=data_height * data_width) res = res.reshape(data_height, data_width) ```
Use this to remove the `-1`s and `number`s: ``` rdata = np.array("20965 1239 296 231 -1 -1 20976 1239 299 314 147 337 255 348 -1 -1 20978 1239 136 103 241 154 27 293 -1 -1 20984 1239 39 161 180 184 -1 -1 20990 1239 291 31 405 50 569 357 -1 -1 20997 1239 502 25 176 215 360 281 -1 -1 21004 1239 -1 -1 21010 1239 286 104 248 252 -1 -1 21017 1239 162 38 331 240 368 363 321 412 -1 -1 21024 1239 428 323 -1 -1 21030 1239 -1 -1 21037 1239 325 28 353 102 477 189 366 251 143 452".split(), dtype=int) rdata = rdata.reshape(-1,2) splits = np.where(np.all(rdata==-1, axis=1))[0] nonxy = np.hstack((splits,splits+1)) data = np.delete(rdata, nonxy, axis=0)[1:] ``` Now, using part of @seberg's method to convert the x-y lists into arrays, you can make a 3D array where each 'layer' is a frame: ``` nf = splits.size + 1 # number of frames splits -= 1 + 2*np.arange(nf-1) # account for missing `-1`s and `number`s datastack = np.zeros((nf,h,w)) f0 = 0 # f0 = start of the frame for i,f in enumerate(splits): # f = end of the frame flat_data = np.ravel_multi_index(tuple(data[f0:f].T)[::-1], (h, w)) datastack[i] = np.bincount(flat_rdata, minlength=h*w).reshape(h, w) f0 = f ``` Now, `datastack[i]` is a 2D array showing the `i`th frame of your data.
Speed up while loop matching pattern in array
[ "", "python", "numpy", "" ]
Am trying to find the distance between a point [x1, y1] and the nearest point on the circle centered at [c1, c2] with radius 2? I devised the following equation for this ``` sqrt((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ``` But the answer it returns an answer which seems to be wrong... What am I doing wrong?
Your formula calculates the distance to the center of the circle. Subtract the radius, and take the absolute value. ``` radius = 2 abs(sqrt((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) - radius) ``` **Note:** Yes, this will work for points on or inside the circle.
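As a small function (the names are illustrative; `math.hypot` performs the square-root step):

```python
import math

def distance_to_circle(p, c, radius=2):
    """Distance from point p to the nearest point on the circle centred at c."""
    return abs(math.hypot(p[0] - c[0], p[1] - c[1]) - radius)

print(distance_to_circle((5, 0), (0, 0)))  # 3.0 -- point outside the circle
print(distance_to_circle((1, 0), (0, 0)))  # 1.0 -- point inside the circle
print(distance_to_circle((2, 0), (0, 0)))  # 0.0 -- point on the circle
```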
That will give you the distance between the point and the *centre* of the circle, not the nearest point on the circumference See the [answer here](https://stackoverflow.com/q/300871/2065121) for an algorithm
Logic for calculating distance between points
[ "", "python", "" ]
I am writing a stored procedure, where I have a column called scale which stores the result from the radio button selected as 1/ 2/ 3/ 4 for each type of skill name. Now, I want to see total number of people under each scale- 1 and 2 and 3 and 4 for a particular skillname1, skillname 2,..., skillname20. Here is my table: tblSkill: ``` ID | SkillName ``` and another table as: tblskillMetrics: ``` ID | SkillID | EmployeeID | Scale ``` And here is the query am trying to write: ``` Create Procedure spGetSkillMetricsCount As Begin SELECT tblSkill.Name as skillname, (select COUNT(EmployeeID) from tblSkillMetrics where tblSkillMetrics.Scale=1) AS NotAplicable, (select COUNT(EmployeeID) from tblSkillMetrics where tblSkillMetrics.Scale=2 ) AS Beginner, (select COUNT(EmployeeID) from tblSkillMetrics where tblSkillMetrics.Scale=3 ) AS Proficient, (select COUNT(EmployeeID) from tblSkillMetrics where tblSkillMetrics.Scale=4 ) AS Expert FROM tblSkill INNER JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID GROUP BY tblSkillMetrics.Scale, tblSkill.Name ORDER BY skillname DESC END ``` By using this stored procedure, I am able to get the desired format in which I want the result but in the output for each : Not Applicable, Beginner, Proficient or Expert is same and it is sum of all the entries made in the table. Please can someone suggest where am I going wrong.
Logically, you are grouping by two criteria, scale and skill name. However, if I understand it correctly, every row is supposed to represent a single skill name. Therefore, you should group by `tblSkill.Name` only. To get different counts for different scales in separate columns, you can use *conditional aggregation*, i.e. aggregation on an expression that (usually) involves a `CASE` construct. Here's how you could go about it: ``` SELECT tblSkill.Name AS skillname, COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotAplicable, COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner, COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient, COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert FROM tblSkill INNER JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID GROUP BY tblSkill.Name ORDER BY skillname DESC ; ``` Note that there's a special syntax for this kind of queries. It employs the [`PIVOT`](http://msdn.microsoft.com/en-us/library/ms177410.aspx "Using PIVOT and UNPIVOT") keyword, as what you get is essentially a grouped result set pivoted on one of the grouping criteria, scale in this case. This is how the same could be achieved with `PIVOT`: ``` SELECT skillname, [1] AS NotAplicable, [2] AS Beginner, [3] AS Proficient, [4] AS Expert FROM ( SELECT tblSkill.Name AS skillname, tblSkillMetrics.Scale, EmployeeID FROM tblSkill INNER JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID ) s PIVOT ( COUNT(EmployeeID) FOR Scale IN ([1], [2], [3], [4]) ) p ; ``` Basically, `PIVOT` implies grouping. All columns but one in the source dataset are grouping criteria, namely every one of them that is not used as an argument of an aggregate function in the PIVOT clause is a grouping criterion. One of them is also assigned to be the one the results are pivoted on. (Again, in this case it is scale.) 
Because grouping is implicit, a derived table is used to avoid grouping by more criteria than necessary. Values of `Scale` become names of new columns that the PIVOT clause produces. (That is why they are delimited with square brackets when listed in PIVOT: they are not IDs in that context but identifiers [delimited as required by Transact-SQL syntax](http://msdn.microsoft.com/en-us/library/ms176027.aspx "Delimited Identifiers (Database Engine)").)
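If you'd like to sanity-check the conditional-aggregation version without a SQL Server instance at hand, the same `CASE`-based counting runs on SQLite too. A minimal sketch from Python (the sample rows are made up; table and column names follow the question):

```python
import sqlite3

# In-memory database with the two tables from the question (made-up sample data)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblSkill (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE tblSkillMetrics (
        ID INTEGER PRIMARY KEY, SkillID INTEGER, EmployeeID INTEGER, Scale INTEGER);
    INSERT INTO tblSkill VALUES (1, 'SQL'), (2, 'Python');
    INSERT INTO tblSkillMetrics (SkillID, EmployeeID, Scale) VALUES
        (1, 101, 1), (1, 102, 2), (1, 103, 2), (2, 101, 4);
""")

# Conditional aggregation: one row per skill, one count column per scale
rows = conn.execute("""
    SELECT tblSkill.Name,
           COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotApplicable,
           COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner,
           COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient,
           COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert
    FROM tblSkill
    JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
    GROUP BY tblSkill.Name
    ORDER BY tblSkill.Name
""").fetchall()
```

The `PIVOT` variant is SQL Server-specific, but the `CASE` form above is portable across engines.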
A case construct instead of all those subqueries might work. ``` select tblSkill.name skillname , case when tblSkillMetrics = 1 then 'Not Applicable' etc else 'Expert' end level , count(employeeid) records from tblSkill join tblSkillMetrics on tblSkillMetrics.SkillID = tblSkill.ID group by tblSkill.name , case when tblSkillMetrics = 1 then 'Not Applicable' etc else 'Expert' end level order by skillname desc ```
Stored procedure for getting sum of entries in table for each ID
[ "", "sql", "stored-procedures", "join", "sql-server-2008-r2", "" ]
Hello everyone, and thanks in advance! I've searched all around Google and read almost every result I got, and I still can't figure it out, so please at least point me in some direction! I read about Pmw, but I want to see if there is any way to do it with plain tkinter first.

I'm writing a simple enough program for DnD dice rolls, and I have an OptionMenu containing some of the dice someone needs to play. I also have an input field for entering a die that is not included in my default options. My problem is that even though the new option is added successfully, the options are not sorted.

I solved it at some point by destroying the OptionMenu when the new option was added, sorting my list, and then rebuilding the OptionMenu from scratch, but I was using the place geometry manager at that time and I had to rewrite the program later because I had some resolution problems. I'm using the pack manager now, and destroying/rebuilding is not an option unless I want to "re"pack all my widgets or make exclusive labels for them! 
Here is a working sample of my code: ``` from tkinter import * class DropdownExample(Frame): def __init__(self, master = None): Frame.__init__(self, master) self.pack(fill = 'both', expand = True) # Add Option Button self.addOptBtn = Button(self, text = "Add Option", command = self.add_option) # Option Input Field self.newOpt = IntVar() self.newOpt.set("Type a number") self.optIn = Entry(self) self.optIn['textvariable'] = self.newOpt # Dropdown Menu self.myOptions = [0, 1, 2] self.selOpt = IntVar() self.selOpt.set("Options") self.optMenu = OptionMenu(self, self.selOpt, *self.myOptions) # Positioning self.addOptBtn.pack(side = 'left', padx = 5) self.optIn.pack(side = 'left', padx = 5) self.optMenu.pack(side = 'left', padx = 5) def add_option(self): self.numToAdd = "" self.counter = 0 try: self.numToAdd = int(self.optIn.get()) # Integer validation while self.counter < len(self.myOptions): # Comparison loop & error handling if self.numToAdd == self.myOptions[self.counter]: print("Already exists!") break; elif self.numToAdd < 0: print("No less than 0!") break; elif self.counter < len(self.myOptions)-1: self.counter += 1 else: # Dropdown menu option addition self.myOptions.append(self.numToAdd) self.myOptions.sort() self.optMenu['menu'].add_command(label = self.numToAdd) self.selOpt.set(self.numToAdd) print("Added succesfully!") self.counter += 2 except ValueError: print("Type ONLY numbers!") def runme(): app = DropdownExample() app.master.title("Dropdown Menu Example") app.master.resizable(0, 0) app.mainloop() runme() ``` I am using Python 3.3 on Windows 7
There is a set of [`insert_something()`](http://effbot.org/tkinterbook/menu.htm#Tkinter.Menu.insert-method) methods in `Menu`. You must keep your list sorted with each insert ([bisect](http://docs.python.org/3.3/library/bisect.html) module). ``` from tkinter import * import bisect ... else: # Dropdown menu option addition index = bisect.bisect(self.myOptions, self.numToAdd) self.myOptions.insert(index, self.numToAdd) self.optMenu['menu'].insert_command(index, label=self.numToAdd) self.selOpt.set(self.numToAdd) print("Added succesfully!", self.myOptions) self.counter += 2 ```
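If it helps, the `bisect` part can be checked on its own, without any Tk widgets: `bisect.bisect` gives the index at which the new value keeps the list sorted, and that same index is the menu position to pass to `insert_command`:

```python
import bisect

# A sorted list of menu options, as in the example
options = [0, 1, 2, 5, 8]

# Where does 4 belong? bisect returns the insertion point...
index = bisect.bisect(options, 4)

# ...and inserting there keeps the list sorted
options.insert(index, 4)
```

(`bisect.insort(options, 4)` would do both steps in one call, but here the index itself is needed for `insert_command`.)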
Replace the line: ``` self.optMenu['menu'].add_command(label = self.numToAdd) ``` with: ``` for dit in self.myOptions: self.optMenu['menu'].delete(0) for dat in self.myOptions: self.optMenu['menu'].add_command(label = dat) ``` The gotcha is that "add\_command" takes the item to add to the menu, while "delete" takes the index of the item.
Python tkinter: Sorting the contents of an OptionMenu widget
[ "", "python", "sorting", "drop-down-menu", "tkinter", "optionmenu", "" ]
I'm trying to follow the Intro to Data Sci Coursera class, but I have run into a problem while trying to parse the JSON response from Twitter. I am trying to retrieve the text from JSON that is in the following format.

```
{u'delete': {u'status': {u'user_id_str': u'702327198', u'user_id': 702327198, u'id': 332772178690981889L, u'id_str': u'332772178690981889'}}},
{u'delete': {u'status': {u'user_id_str': u'864736118', u'user_id': 864736118, u'id': 332770710667792384L, u'id_str': u'332770710667792384'}}},
{u'contributors': None, u'truncated': False, **u'text'**: u'RT @afgansyah_reza: Lagi ngantri. Ada ibu2 &amp; temennya. "Ih dia mukanya mirip banget sama Afgan.", trus ngedeketin gw, "Tuh kan.. Mirip bang\u2026', u'in_reply_to_status_id': None, u'id': 332772350640668672L, u'favorite_count': 0, ....... ]
```

And here is the code I am using for it:

```
def hw():
    data = []
    count=0
    with open('output.txt') as f:
        for line in f:
            encoded_string = line.strip().encode('utf-8')
            data.append(json.loads(encoded_string))
    print data# generates the input to next block
    for listval in data:#individual block
        if "text" in listval:
            print listval["text"]
        else:
            continue
```

However, I get the following output and error when I run it:

```
RT @afgansyah_reza: Lagi ngantri. Ada ibu2 &amp; temennya. "Ih dia mukanya mirip banget sama Afgan.", trus ngedeketin gw, "Tuh kan.. 
Mirip bang…
RT @Dimaz_CSIX: Kolor pakek pita #laguharlemshake
Traceback (most recent call last):
  File "F:\ProgrammingPoint\workspace-new\PyTest\tweet_sentiment.py", line 41, in <module>
    main()
  File "F:\ProgrammingPoint\workspace-new\PyTest\tweet_sentiment.py", line 36, in main
    hw()
  File "F:\ProgrammingPoint\workspace-new\PyTest\tweet_sentiment.py", line 23, in hw
    print listval["text"]
  File "C:\Python27\lib\encodings\cp1252.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 13-63: character maps to <undefined>
```

I am a newcomer to Python and any help would be appreciated.
All of your conversion, etc. is correct. The problem is just trying to `print` it to stdout. (Usually, you run into problems with accented, east-Asian, etc. characters; here it seems to be with the `…` ellipsis character, but it's the same problem.) If you're running this in a terminal window (DOS prompt, etc.), you can only print characters that the terminal's character set can handle. So, for example, on a Windows box configured for 'cp1252' (like yours), you can't print non-Latin-1/non-Latin-15/non-ANSI characters. (In earlier versions of Python, there's an additional problem that Python may not properly guess the best encoding for your terminal, and stick you with, say, `'ascii'` even though it can handle `utf-8`, or at least `cp1252`. You can find out what encoding Python has guessed for stdout with `sys.stdout.encoding`. If that's wrong, you can fix it explicitly.) But if your terminal isn't UTF-8 (and it isn't), you need to tell it what to do with characters it can't represent. You can `encode` strings with an explicit `errors` parameter whenever you `print` them, like this: ``` print u.encode(sys.stdout.encoding, 'replace') ``` … or various other things. But if you want to fix this all in one place, you want to change the default error handler for printing. Unfortunately, in Python 2.7, while `sys.stdout` does have an `errors` attribute, it's read-only. One way around this is to replace it with a wrapper around the original `sys.stdout` (or around its underlying file handle, or something else equivalent). For example: ``` >>> u = 'RT @afgansyah_reza: Lagi ngantri. Ada ibu2 &amp; temennya. "Ih dia mukanya mirip banget sama Afgan.", trus ngedeketin gw, "Tuh kan.. Mirip bang\xe2\x80\xa6'.decode('utf8') >>> print u UnicodeEncodeError: 'charmap' codec can't encode characters in position 13-63: character maps to <undefined> >>> sys.stdout = codecs.getwriter(sys.stdout.encoding)(sys.stdout, errors='replace') >>> print u RT @afgansyah_reza: Lagi ngantri. 
Ada ibu2 &amp; temennya. "Ih dia mukanya mirip banget sama Afgan.", trus ngedeketin gw, "Tuh kan.. Mirip bang? ``` For more information, read the [2.x Unicode HOWTO](http://docs.python.org/2/howto/unicode.html), and the documentation on [`print`](http://docs.python.org/2/reference/simple_stmts.html#the-print-statement). They expect you to know that a `unicode` object is a string, so it does not get converted by `print`, but instead passed to `write` as-is. So, the trick is to put some kind of wrapper in `sys.stdout.write` that will write `str` objects as-is, but encode `unicode` objects differently. Either `codecs` or `io` can do this for you, but `codecs` is more backward-compatible (and `io` is more forward-compatible, but that doesn't make much difference here, because 3.x handles Unicode very differently).
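As an aside (and outside the scope of this Python 2 question), the `errors='replace'` idea is easy to demonstrate in Python 3, where `str` is already Unicode; a purely illustrative sketch:

```python
# A string ending with an ellipsis character, like the tweets in the question
s = 'Mirip bang\u2026'

# ASCII cannot represent U+2026, so strict encoding raises UnicodeEncodeError...
try:
    s.encode('ascii')
    strict_failed = False
except UnicodeEncodeError:
    strict_failed = True

# ...while errors='replace' substitutes '?' for the unencodable character
encoded = s.encode('ascii', errors='replace')
```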
If you are using PyDev Eclipse Plugin try going to Windows->Preferences->General->Workspace and choose at the left lower corner at TEXT FILE ENCODING -> Choose Other = UTF-8 It might work.
Newcomer error in parsing tweet json UnicodeEncodeError: 'charmap' codec can't encode characters in position 13-63: character maps to <undefined>
[ "", "python", "list", "" ]
I'm trying to implement this code on my localhost: ``` def form_a(): form = SQLFORM(db.registration, deletable=True, request.args(0), upload=URL(r=request, f='download')) if form.accepts(request.vars, session): if not form.record: response.flash = "Your input data has been submitted." else: if form.vars.delete_this_record: session.flash = "User record successfully deleted." else: session.flash = "User record successfully updated." redirect(URL(r=request, f='form_a’)) records = db().select(db.registration.ALL) return dict(form=form, records=records) ``` But I get a ***non-keyword arg after keyword arg*** error at this line: ``` form = SQLFORM(db.registration, deletable=True, request.args(0), upload=URL(r=request, f='download')) ``` And a ***EOL while scanning literal error*** at this line: ``` redirect(URL(r=request, f='form_a’)) ``` I'm using Python 3 and Web2Py 2.4.6, thanks.
All positional arguments must come before keyword arguments, so here `request.args(0)` is causing the error because the keyword argument `deletable=True` was passed before it.

> form = SQLFORM(db.registration, deletable=True, request.args(0),
> upload=URL(r=request, f='download'))

From the [docs](http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments):

> In a function call, keyword arguments must follow positional
> arguments. All the keyword arguments passed must match one of the
> arguments accepted by the function and their order is not important.

And in `redirect(URL(r=request, f='form_a’))` you're using different types of opening and closing quotes. It must be either `f='form_a'` or `f="form_a"`.
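A tiny, self-contained illustration of the rule (the function here is a hypothetical stand-in, not web2py's actual `SQLFORM`): positional arguments must come first, but keyword arguments may then appear in any order:

```python
def make_form(table, record=None, deletable=False, upload=None):
    """Stand-in for SQLFORM's signature -- purely illustrative."""
    return (table, record, deletable, upload)

# Positional arguments first, then keywords in any order -- both calls are equivalent
a = make_form('registration', 0, deletable=True, upload='download')
b = make_form('registration', 0, upload='download', deletable=True)
```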
You have a non-keyword argument: ``` form = SQLFORM(db.registration, deletable=True, request.args(0), upload=URL(r=request, f='download')) ^^^^^^^^^^^^^^^ ``` After a keyword argument: ``` form = SQLFORM(db.registration, deletable=True, request.args(0), upload=URL(r=request, f='download')) ^^^^^^^^^^^^^^ ``` You either have to make `deletable` a non-keyword argument or make `request.args(0)` a keyword argument. As for the second error, this quote right here isn't actually a closing quote: ``` redirect(URL(r=request, f='form_a’)) ^ ``` Notice how it's curly. Replace it with a regular single quote.
Web2Py Syntax Error Non-Keyword Arg After Keyword Arg
[ "", "python", "web2py", "" ]
So I'm trying to pass a variable operation (user defined) into a function and am having trouble trying to find a good way of doing it. All I can think of to do is hard code all the options into the function like the following: ``` def DoThings(Conditions): import re import pandas as pd d = {'time' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd']), 'legnth' : pd.Series([4., 5., 6., 7.], index=['a', 'b', 'c', 'd'])} df = pd.DataFrame(d) print df for Condition in Conditions: # Split the condition into two parts SplitCondition = re.split('<=|>=|!=|<|>|=',Condition) # If the right side of the conditional statement is a number convert it to a float if SplitCondition[1].isdigit(): SplitCondition[1] = float(SplitCondition[1]) # Perform the condition specified if "<=" in Condition: df = df[df[SplitCondition[0]]<=SplitCondition[1]] print "one" elif ">=" in Condition: df = df[df[SplitCondition[0]]>=SplitCondition[1]] print "two" elif "!=" in Condition: df = df[df[SplitCondition[0]]!=SplitCondition[1]] print "three" elif "<" in Condition: df = df[df[SplitCondition[0]]<=SplitCondition[1]] print "four" elif ">" in Condition: df = df[df[SplitCondition[0]]>=SplitCondition[1]] print "five" elif "=" in Condition: df = df[df[SplitCondition[0]]==SplitCondition[1]] print "six" return df # Specify the conditions Conditions = ["time>2","legnth<=6"] df = DoThings(Conditions) # Call the function print df ``` Which results in this: ``` legnth time a 4 1 b 5 2 c 6 3 d 7 4 five one legnth time c 6 3 ``` This is all well and good and everything, but I'm wondering if there is a better or more efficient way of passing conditions into functions without writing all the if statements possible out. Any ideas? 
**SOLUTION:** ``` def DoThings(Conditions): import re import pandas as pd d = {'time' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd']), 'legnth' : pd.Series([4., 5., 6., 7.], index=['a', 'b', 'c', 'd'])} df = pd.DataFrame(d) print df for Condition in Conditions: # Split the condition into two parts SplitCondition = re.split('<=|>=|!=|<|>|=',Condition) # If the right side of the conditional statement is a number convert it to a float if SplitCondition[1].isdigit(): SplitCondition[1] = float(SplitCondition[1]) import operator ops = {'<=': operator.le, '>=': operator.ge, '!=': operator.ne, '<': operator.lt, '>': operator.gt, '=': operator.eq} cond = re.findall(r'<=|>=|!=|<|>|=', Condition) df = df[ops[cond[0]](df[SplitCondition[0]],SplitCondition[1])] return df # Specify the conditions Conditions = ["time>2","legnth<=6"] df = DoThings(Conditions) # Call the function print df ``` Output: ``` legnth time a 4 1 b 5 2 c 6 3 d 7 4 legnth time c 6 3 ```
You can access the built-in operators via the `operator` module, and then build a table mapping your operator names to the built-in ones, like in this cut-down example: ``` import operator ops = {'<=': operator.le, '>=': operator.ge} In [3]: ops['>='](2, 1) Out[3]: True ```
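Filling out the table with all six operators from the question and applying it to plain numbers (no pandas needed) might look like this sketch:

```python
import operator

# Map the condition symbols to the built-in comparison functions
ops = {'<=': operator.le, '>=': operator.ge, '!=': operator.ne,
       '<': operator.lt, '>': operator.gt, '=': operator.eq}

def apply_condition(values, op_symbol, threshold):
    """Keep only the values for which `value <op> threshold` holds."""
    op = ops[op_symbol]
    return [v for v in values if op(v, threshold)]

kept = apply_condition([1, 2, 3, 4], '>', 2)
```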
You can use [masking](http://pandas.pydata.org/pandas-docs/dev/indexing.html#where-and-masking) to do this kind of operation (you will find it a *lot* faster): ``` In [21]: df[(df.legnth <= 6) & (df.time > 2)] Out[21]: legnth time c 6 3 In [22]: df[(df.legnth <= 6) & (df.time >= 2)] Out[22]: legnth time b 5 2 c 6 3 ``` *Note: there's a bug in your implementation, since b should not be included in your query.* You can also do or (using `|`) operations, which work as you would expect: ``` In [23]: df[(df.legnth == 4) | (df.time == 4)] Out[23]: legnth time a 4 1 d 7 4 ```
Python perform operation in string
[ "", "python", "pandas", "conditional-statements", "" ]
I've got a script to read data from an Arduino and save it as a TXT file. I also have a script to separate the data, add date/time, and then save it in a new TXT file. I've also got a few cron scripts like:

* Read temperature every 1 minute and save as a TXT file (Script 1)
* Read temperature every 10 minutes and save as a TXT file (Script 2)
* Read temperature every 60 minutes and save as a TXT file (Script 3)

I would like to make a few charts:

* Temperature in last hour
* Temperature in last 3 hours
* Temperature in last 12 hours
* Temperature in last 24 hours
* Temperature in last 7 days

So I need a new Python script to:

* Get the last 60 lines from Script 1 - save as a new TXT file, to use for the chart "Temperature in last hour"
* Get the last 18 lines from Script 2 - save as a new TXT file, to use for the chart "Temperature in last 3 hours"
* Get the last 72 lines from Script 2 - save as a new TXT file, to use for the chart "Temperature in last 12 hours"
* Get the last 144 lines from Script 2 - save as a new TXT file, to use for the chart "Temperature in last 24 hours"
* Get the last 168 lines from Script 3 - save as a new TXT file, to use for the chart "Temperature in last 7 days"

Can you help me write a simple script, for example to get the last 60 lines from FILE.txt and save them as FILE2.txt? I will edit it for the other scripts I need.

EDIT: I probably know how to keep my chart files sized as I want (60 lines), but now I would like to ask: is it possible to make a script that will delete the FIRST 30 lines? I have only about 1 GB of space, so I need to clean the TXT files ;) If I get the temperature every minute, after two weeks it will fill my hard drive ;) So I think a cron action to delete the first X lines from a txt file can help me a lot. Do you know any script? If I get it, I will finally finish my master project and of course I will show you the results :)
Here is your small program; just run it every minute from cron

```
#!/usr/bin/env python
from random import randrange
import datetime

now = datetime.datetime.now()

#generate random temperature
temperature = randrange(50, 100)

#database structure
structure = "{'last_hour': [], 'last_3_hours': [], 'last_12_hours': [], " \
            "'last_24_hours': [], 'last_7_days': []}"

#file for the database
database = 'temperature_database.txt' #use absolute path when running from cron

#check database file exists, if not write the database structure
try:
    with open(database):
        pass
except IOError:
    with open(database, 'w') as handler:
        handler.write(structure)

#read the contents of the database
with open(database, 'r') as handler:
    db_contents = eval(handler.read())

#first save every minute
db_contents['last_hour'].append(temperature)
if len(db_contents['last_hour']) > 60:
    db_contents['last_hour'] = db_contents['last_hour'][-60:] #get the last 60 elements

if now.minute % 10 == 0: #every ten minutes
    db_contents['last_3_hours'].append(temperature)
    if len(db_contents['last_3_hours']) > 18:
        db_contents['last_3_hours'] = db_contents['last_3_hours'][-18:]
    db_contents['last_12_hours'].append(temperature)
    if len(db_contents['last_12_hours']) > 72:
        db_contents['last_12_hours'] = db_contents['last_12_hours'][-72:]
    db_contents['last_24_hours'].append(temperature)
    if len(db_contents['last_24_hours']) > 144:
        db_contents['last_24_hours'] = db_contents['last_24_hours'][-144:]

if now.minute == 1: #every hour
    db_contents['last_7_days'].append(temperature)
    if len(db_contents['last_7_days']) > 168:
        db_contents['last_7_days'] = db_contents['last_7_days'][-168:]

#save the contents to the database
with open(database, 'w') as handler:
    handler.write(str(db_contents))
```

(Note: the ten-minute check is `now.minute % 10 == 0`; testing `now.minute in [10, 0]` would only fire twice an hour.)

After four minutes the file contains

```
{'last_hour': [62, 99, 83, 71], 'last_12_hours': [], 'last_24_hours': [], 'last_3_hours': [], 'last_7_days': []}
```
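Regarding the later edit about disk space: a simple (though not memory-frugal, since it reads the whole file) way to drop the first N lines of a text file, sketched here with a throwaway demo file:

```python
import os
import tempfile

def drop_first_lines(path, n):
    """Rewrite `path` keeping everything except its first `n` lines."""
    with open(path) as handle:
        lines = handle.readlines()
    with open(path, 'w') as handle:
        handle.writelines(lines[n:])

# Quick demo on a temporary file with five lines
fd, demo_path = tempfile.mkstemp()
os.close(fd)
with open(demo_path, 'w') as handle:
    handle.write(''.join('line %d\n' % i for i in range(5)))

drop_first_lines(demo_path, 3)

with open(demo_path) as handle:
    remaining = handle.read()
os.remove(demo_path)
```

Run from cron with an absolute path, this keeps the files from growing without bound.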
You can use the `tail` recipe from [`collections.deque`](http://docs.python.org/2/library/collections.html#deque-recipes) here: ``` from collections import deque def tail(filename, n=10): with open(filename) as f: return deque(f, n) lines = tail("script",18) ```
Python use last x lines
[ "", "python", "" ]
I'm trying to get test unit coverage with Sonar. To do so, I have followed these steps : 1. Generating report with `python manage.py jenkins --coverage-html-report=report_coverage` 2. Setting properties in `/sonar/sonar-3.5.1/conf/sonar.properties`: `sonar.dynamicAnalysis=reuseReports` `sonar.cobertura.reportPath=/var/lib/jenkins/workspace/origami/DEV/SRC/origami/reports/coverage.xml` When I launch the tests, the reports are generated in the right place. However, no unit tests are detected by Sonar. Am I missing a step or is everything just wrong?
I think the problem is that there seems to be no link between Sonar and Jenkins. It would be easier to do this with plugins. After installing the plugins you'd just have to add a build step in the Jenkins administration.

In order to see the coverage report in Sonar you should use the "Jenkins Sonar Plugin". However, it will force you to create a Maven project (and a pom.xml), and as you're using Django (which already does what Maven does), it may not be what you want.

I think what you want is to see the code coverage somewhere, and maybe you should integrate it into Jenkins instead of Sonar. To do so you can use two plugins, the "Jenkins Cobertura Plugin" and the "HTML Publisher Plugin".

The Jenkins Cobertura Plugin will let you see the code coverage from your coverage.xml graphically. You can see the percentage covered by package, file, class, line, and conditional. You'll just have to add the link to your coverage.xml in your project's Jenkins configuration. More detail [here](http://bhfsteve.blogspot.fr/2012/04/automated-python-unit-testing-code_27.html).

The HTML Publisher Plugin will probably be useful for seeing the detailed code coverage, by publishing the coverage report as HTML.
On Jenkins I found that coverage.xml has paths that are relative to the directory in which manage.py jenkins is run. In my case I need to run unit tests on a different machine than Jenkins. To allow Sonar to use the generated coverage.xml, it was necessary for me to run the tests from a folder in the same spot relative to the project as the workspace directory on Jenkins. ``` Say I have the following on Jenkins /local/jenkins/tmp/workspace/my_build + my_project + app1 + app2 Say on test machine I have the following /local/test + my_project + app1 + app2 ``` I run unit tests from /local/test on the test machine. Then coverage.xml has the correct relative paths, which look like my\_project/app1/source1.py or my\_project/app2/source2.py
How to get tests coverage using Django, Jenkins and Sonar?
[ "", "python", "django", "jenkins", "code-coverage", "sonarqube", "" ]
I have a list which looks like below.

```
list = [1, 2, 3, 4, 5, 6, 7, 8, 9 .....]
```

and I want to split it into three lists with the values below.

```
first_list = [1, 4, 7, ...]
second_list = [2, 5, 8,....]
third_list = [3, 6, 9, ...]
```

I do not want to split it into three equal-sized chunks; I want the lists split as above. Any help is useful. Thanks
Use the slice notation by changing the start value and setting a step value:

```
l[start:end:step]
```

---

```
In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9]

In [2]: [l[start::3] for start in range(3)]
Out[2]: [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

---

To assign the lists to variables:

```
first_list, second_list, third_list = [l[i::3] for i in range(3)]
```
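Pinning the behaviour down with assertions, using the same list as the question (the start offset picks which sub-list you get; the step of 3 skips over the members belonging to the other two):

```python
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# One slice per start offset, all with step 3
first_list, second_list, third_list = [l[i::3] for i in range(3)]
```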
``` my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9] print (my_list[0::3]) print (my_list[1::3]) print (my_list[2::3]) --output:-- [1, 4, 7] [2, 5, 8] [3, 6, 9] ``` Also, never use list for a variable name. Until you know what you are doing, put 'my' in front of all your variable names.
split a list into three lists with a stepsize and a starting point
[ "", "python", "list", "" ]
I am trying to get a value from a list using for each:

```
for Therepot, member in enumerate(pots[0]):
    TherePotValue = Therepot
```

pots[0] holds something like [4, 6, 2, 1, 8, 9]

Edit
---

To return the value I should point my variable TherePotValue to member, and not to TherePot, which is the index.

Running a test:

TherePot = 0 , Member = 4

TherePot = 1 , Member = 6

TherePot = 2 , Member = 2

TherePot = 3 , Member = 1

TherePot = 4 , Member = 8

TherePot = 5 , Member = 9
I think these examples will help you do what you want to do: ``` lst = pots[0] # solution using a for loop for i, member in enumerate(lst): # i is the position in the list # member is the data item from the list assert lst[i] == member # cannot ever fail if member == the_one_we_want: break # exit loop, variables i and member are set else: # the_one_we_want was never found i = -1 # signal that we never found it # solution using .index() method function on a list try: i = lst.index(the_one_we_want) except ValueError: # the_one_we_want was not found in lst i = -1 # signal that we never found it ``` EDIT: The comments made me realize that the `else` in the `for` loop could be confusing. In Python, a `for` loop can have its own `else` case. Raymond Hettinger has commented that he wishes the keyword had been something like `when_no_break` because the only time you use this `else` is with a `break` keyword! If the `for` loop exits early, with the `break`, the `else` code does not run. But if the `for` loop runs all the way to the end and no `break` ever happens, then at the end the `else` code runs. Nick Coghlan calls this a "completion clause" to distinguish it from the "conditional else" from an `if` statement. <https://ncoghlan_devs-python-notes.readthedocs.org/en/latest/python_concepts/break_else.html> It's sort of unfortunate that the `else` comes right after an `if` statement, because that might be confusing. That `else` has nothing to do with that `if`; it goes with the `for` loop, which is why it indents the way it does. (I do like that in Python you are forced to line things up when they go together.)
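Putting the loop-with-`else` pattern into a small function makes it easy to check in isolation (the list values are the ones from the question):

```python
def find_index(lst, target):
    """Return the position of `target` in `lst`, or -1 when absent."""
    for i, member in enumerate(lst):
        if member == target:
            break  # leaves i set to the match position; skips the else
    else:
        i = -1  # the completion clause: only runs when the loop never hit break
    return i

found = find_index([4, 6, 2, 1, 8, 9], 2)
missing = find_index([4, 6, 2, 1, 8, 9], 7)
```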
It's very important that `pots[0]` actually has the value you think it does. Consider the following code: ``` >>> pots = [[4, 6, 2, 1, 8, 9]] >>> TherePotValue = 0 >>> for Therepot, member in enumerate(pots[0]): TherePotValue = Therepot print "(",Therepot,")", member ``` This produces: ``` ( 0 ) 4 ( 1 ) 6 ( 2 ) 2 ( 3 ) 1 ( 4 ) 8 ( 5 ) 9 >>> print TherePotValue 5 >>> ``` If you are seeing `0` I can only assume that `pots[0]` has only one element.
Get Value from list using foreach
[ "", "python", "foreach", "" ]
I see that there are a few questions regarding fabric and passwords. I know that if I pass -I to fabric then the password I enter is passed to the environmental variable "password." The problem is that I'm prompted for a password when running an ssh command on my remote server to another remote server. **But**, I don't want to be prompted for a password entry. I am prompted no matter what I try to do. So here's a little snippet of the code: ``` elif "test" in run('hostname -d'): print(blue("Gathering Knife info")) run("ssh mychefserver knife node show `hostname`.test.dmz") ``` It works just fine when I enter my password. The thing is, I don't want to have to enter my password. Maybe this is because another ssh connection is initiated on the remote host and fabric can't do anything about that. I could have the script disconnect from the remote host, run the ssh command locally, then reconnect to the remote host to finish the script... but that seems silly. Suggestions? Getpass info: ``` Python 2.6.6 (r266:84292, Sep 11 2012, 08:34:23) [GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from getpass import getpass >>> getpass('test: ') test: 'This is a test' ```
``` from subprocess import Popen, PIPE from getpass import getpass x = Popen('ssh root@host', stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True) print x.stdout.readline() _pass = getpass('Enter your superduper password:') x.stdin.write(_pass) print x.stdout.readline() ``` Once connected, you can still input things as if you were on the other machine via `x.stdin.write(...)` so yea, that should work? DEBUG (just start a cmd promt, navigate to your python directory and write Python): ``` C:\Users>python Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from getpass import getpass >>> getpass('test: ') test: 'This is a test' >>> ```
Here's how I did it. In short: If you set [`fabric.api.env.password`](http://fabric.readthedocs.org/en/1.8/usage/env.html#password), fabric will use it to connect to the servers listed in [`env.hosts`](http://fabric.readthedocs.org/en/1.8/usage/env.html#hosts): ``` from getpass import getpass from fabric.api import env # Setting env.hosts env.hosts = ['someuser@someserver.com'] def deploy(): """A deployment script""" # Setting env.password with `getpass` # See here for more: http://fabric.readthedocs.org/en/1.8/usage/env.html#password env.password = getpass('Enter the password for %s: ' % env.hosts[0]) # The rest of the script follows... ```
Python Fabric and password prompts
[ "", "python", "ssh", "fabric", "" ]
Hello guys, I have a small predicament that has me a bit stumped. I have a table like the following. (This is a sample of my real table; I use it to explain since the original table has sensitive data.)

```
CREATE TABLE TEST01(
TUID VARCHAR2(50),
FUND VARCHAR2(50),
ORG VARCHAR2(50));

Insert into TEST01 (TUID,FUND,ORG) values ('9102416AB','1XXXXX','6XXXXX');
Insert into TEST01 (TUID,FUND,ORG) values ('9102416CC','100000','67130');
Insert into TEST01 (TUID,FUND,ORG) values ('955542224','1500XX','67150');
Insert into TEST01 (TUID,FUND,ORG) values ('915522211','1000XX','67XXX');
Insert into TEST01 (TUID,FUND,ORG) values ('566653456','xxxxxx','xxxxx');
Insert into TEST01 (TUID,FUND,ORG) values ('9148859fff','1XXXXXX','X6XXX');
```

table data after insert

```
"TUID" "FUND" "ORG"
"9102416AB" "1XXXXX" "6XXXXX"
"9102416CC" "100000" "67130"
"955542224" "1500XX" "67150"
"915522211" "1000XX" "67XXX"
"566653456" "xxxxxx" "xxxxx"
"9148859fff" "1XXXXXX" "X6XXX"
```

The "X"s are wildcard elements *(I inherited this and I cannot change the table format)*. I would like to make a query like the following:

```
select tuid from test01 where fund= '100000' and org= '67130'
```

However, what I would really like to do is retrieve any records that have those segments in them, including 'X's.

```
in other words the expected output here would be
"TUID" "FUND" "ORG"
"9102416AB" "1XXXXX" "6XXXXX"
"9102416CC" "100000" "67130"
"915522211" "1000XX" "67XXX"
"566653456" "xxxxxx" "xxxxx"
```

I have started to write a massive SQL statement that would have like 12 LIKE statements in it, since I would have to compare the org and fund every possible way. This is where I'm headed, but I'm wondering if there is a better way. 
```
select * from test01
where fund = '100000' and org = '67130'
or fund like '1%' and org like '6%'
or fund like '1%' and org like '67%'
or fund like '1%' and org like '671%'
or fund like '1%' and org like '6713%'
or fund like '1%' and org like '67130'
or fund like '10%' and org like '6%'...etc
/*seems like there should be a better way..*/
```

Can anyone give me a hand coming up with this SQL statement? By the way, notice that

```
"9148859fff" "1XXXXXX" "X6XXX"
```

is excluded from the expected result, since the second digit in org is "6" and I'm looking for anything that looks like "67130".
You can use the [`REPLACE()`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions134.htm) function to replace `x` with wildcard character `_` and then use `LIKE`: ``` SELECT * FROM test01 WHERE '100000' LIKE REPLACE(REPLACE(fund, 'x', '_'), 'X', '_') AND '67130' LIKE REPLACE(REPLACE(org, 'x', '_'), 'X', '_') ; ``` Tested at **[SQL-Fiddle](http://sqlfiddle.com/#!4/f4cf5/2)**
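The REPLACE-then-LIKE trick isn't Oracle-specific; as a quick cross-check, here is the same query reproduced on SQLite from Python, with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test01 (tuid TEXT, fund TEXT, org TEXT);
    INSERT INTO test01 VALUES
        ('9102416AB',  '1XXXXX',  '6XXXXX'),
        ('9102416CC',  '100000',  '67130'),
        ('955542224',  '1500XX',  '67150'),
        ('915522211',  '1000XX',  '67XXX'),
        ('566653456',  'xxxxxx',  'xxxxx'),
        ('9148859fff', '1XXXXXX', 'X6XXX');
""")

# Turn each stored pattern's x/X into the single-character wildcard '_'
tuids = [row[0] for row in conn.execute("""
    SELECT tuid FROM test01
    WHERE '100000' LIKE REPLACE(REPLACE(fund, 'x', '_'), 'X', '_')
      AND '67130'  LIKE REPLACE(REPLACE(org,  'x', '_'), 'X', '_')
""")]
```

One thing worth noticing: `9102416AB` drops out here, because its `org` value `6XXXXX` is six characters while `'67130'` is five, and `_` matches exactly one character. The question's expected output includes that row, so if a trailing run of X's should also absorb a length difference, an extra rule (e.g. translating trailing wildcards to `%`) would be needed.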
Assuming that you want to match both "`X`" and "`x`": ``` SELECT * FROM Test01 t1 WHERE REGEXP_LIKE(t1.fund, '^[Xx1][Xx0][Xx0][Xx0][Xx0][X0x]$') AND REGEXP_LIKE(t1.org, '^[Xx6][Xx7][Xx1][Xx3][Xx0]$') ; ``` My regular expressions control the total number of characters in each field: **6** in `fund` and **5** in `org`. Here's [**SQL Fiddle**](http://sqlfiddle.com/#!4/5647a/9).
oracle sql to get all possible combinations in a table
[ "sql", "oracle", "oracle11g", "oracle10g" ]
I want to split a string using `-`, `+=`, `==`, `=`, `+`, and white-space as delimiters. I want to keep the delimiter unless it is white-space. I've tried to achieve this with the following code: ``` def tokenize(s): import re pattern = re.compile("(\-|\+\=|\=\=|\=|\+)|\s+") return pattern.split(s) print(tokenize("hello-+==== =+ there")) ``` I expected the output to be ``` ['hello', '-', '+=', '==', '=', '=', '+', 'there'] ``` However I got ``` ['hello', '-', '', '+=', '', '==', '', '=', '', None, '', '=', '', '+', '', None, 'there'] ``` Which is almost what I wanted, except that there are quite a few extraneous `None`s and empty strings. Why is it behaving this way, and how might I change it to get what I want?
[re.split](http://docs.python.org/2/library/re.html#re.split) by default returns an array of the bits of strings that are in between the matches: (As @Laurence Gonsalves notes, this is its main use.) ``` ['hello', '', '', '', '', '', '', '', 'there'] ``` Note the empty strings in between `-` and `+=`, `+=` and `==`, etc. As the docs explain, because you're using a capture group (i.e., because you're using `(\-|\+\=|\=\=|\=|\+)` instead of `(?:\-|\+\=|\=\=|\=|\+)`, the bits that the capture group matches are interspersed: ``` ['hello', '-', '', '+=', '', '==', '', '=', '', None, '', '=', '', '+', '', None, 'there'] ``` `None` corresponds to where the `\s+` half of your pattern was matched; in those cases, the capture group captured nothing. From looking at the docs for re.split, I don't see an easy way to have it discard empty strings in between matches, although a simple list comprehension (or [filter](http://docs.python.org/2/library/functions.html#filter), if you prefer) can easily discard `None`s and empty strings: ``` def tokenize(s): import re pattern = re.compile("(\-|\+\=|\=\=|\=|\+)|\s+") return [ x for x in pattern.split(s) if x ] ``` **One last note**: For what you've described so far, this will work fine, but depending on the direction your project goes, you may want to switch to a proper parsing library. [The Python wiki](http://wiki.python.org/moin/LanguageParsing) has a good overview of some of the options here.
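To make the capture-group distinction concrete, here is the same split with a non-capturing group `(?:...)`: the delimiters disappear entirely, and only the empty strings between adjacent delimiters need filtering out.

```python
import re

s = "hello-+==== =+ there"

# Non-capturing group: no delimiter text is interspersed in the result,
# so after dropping empty strings only the real tokens remain.
parts = [t for t in re.split(r"(?:-|\+=|==|=|\+)|\s+", s) if t]
print(parts)  # ['hello', 'there']
```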
*Why is it behaving this way?* According to the documentation for re.split: > If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. This is literally correct: if capturing parentheses are used, then the text of all groups are returned, whether or not they matched anything; the ones which didn't match anything return `None`. As always with `split`, two consecutive delimiters are considered to separate empty strings, so you get empty strings interspersed. *how might I change it to get what I want?* The simplest solution is to filter the output: ``` filter(None, pattern.split(s)) ```
Python regex -- extraneous matchings
[ "python", "regex" ]
I have a function `foo()` that returns a tuple. Let's say `('a', 1)` I have an iterable `a` that I want to iterate over and pass every item to that function. At the end I need two lists - `r1` and `r2` where `r1` consists of all the first items in the tuples returned from the function. The second list `r2` - the result of all the second items in these tuples. ``` r1 = ['a', 'b', 'c'] r2 = [1, 2, 3] ``` I don't like this approach too much: ``` result = [foo(i) for i in a] r1 = [i[0] for i in result] r2 = [i[1] for i in result] ```
You can use the [`zip`](http://docs.python.org/2/library/functions.html#zip) function for this. ``` >>> result = [('a', 1), ('b', 2), ('c', 3)] >>> r1, r2 = zip(*result) >>> r1 ('a', 'b', 'c') >>> r2 (1, 2, 3) ``` `zip(*result)` *unpacks* each list in `result` and passes them as separate arguments into the `zip` function. It essentially [*transposes*](http://en.wikipedia.org/wiki/Transpose) the list. It produces a list of two tuples, which then are assigned to `r1` and `r2`.
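Since `zip` returns tuples, a small follow-up if you specifically need lists (as in the question's `r1 = ['a', 'b', 'c']`): wrap the transposed rows in `list`.

```python
result = [('a', 1), ('b', 2), ('c', 3)]

# zip(*result) transposes; map(list, ...) converts each tuple to a list.
r1, r2 = map(list, zip(*result))
print(r1)  # ['a', 'b', 'c']
print(r2)  # [1, 2, 3]
```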
Just use zip()! ``` result = [foo(i) for i in a] r1, r2 = zip(*result) ```
Assign two variables from list of tuples in one iteration
[ "python" ]
My problem will look common to you, but I didn't find the answer on the website. I want to subtract two dates in Python, so I use this: ``` delta = 2012-04-07 18:54:40 - 2012-04-07 18:54:39 ``` But I get an error: `TypeError: unsupported operand type(s) for -: 'str' and 'str'` I understand it, but I don't know how to do it the right way. Do you have an idea? Thanks!
You are subtracting `strings`, not `datetime.datetime` objects. Try the `strptime` method in order to convert from strings to `datetime.datetime`. ``` >>> delta = datetime.datetime.strptime('2012-04-07 18:54:40', '%Y-%m-%d %H:%M:%S') \ - datetime.datetime.strptime('2012-04-07 18:54:39', '%Y-%m-%d %H:%M:%S') >>> delta datetime.timedelta(0, 1) ```
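A Python 3-compatible sketch of the same idea; once you have a `timedelta`, `total_seconds()` gives you the difference as a plain number:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
delta = (datetime.strptime("2012-04-07 18:54:40", fmt)
         - datetime.strptime("2012-04-07 18:54:39", fmt))

print(delta.total_seconds())  # 1.0
```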
Make them datetimes! ``` >>> import datetime >>> datetime.datetime.now() datetime.datetime(2013, 5, 13, 10, 25, 6, 20914) >>> datetime.datetime(2013, 5, 13, 10, 25, 6, 20914) datetime.datetime(2013, 5, 13, 10, 25, 6, 20914) >>> datetime.datetime(2013, 5, 13, 10, 25, 6, 20914) - datetime.datetime(2013, 5, 13, 10, 14, 6, 20914) datetime.timedelta(0, 660) >>> ```
Type error comparing two datetimes
[ "python", "datetime", "python-2.7" ]
I have data which is a matrix of integer values which indicate a banded distribution curve. I'm optimizing for SELECT performance over INSERT performance. There are at most 100 bands. I'll primarily be querying this data by summing or averaging bands across a period of time. My question is: can I achieve *better performance* by flattening this data across a table with one column for each band, or by using a single column representing the band value? Flattened data ``` UserId ActivityId DateValue Band1 Band2 Band3....Band100 10001 10002 1/1/2013 1 5 100 200 ``` OR Normalized ``` UserId ActivityId DateValue Band BandValue 10001 10002 1/1/2013 1 1 10001 10002 1/1/2013 2 5 10001 10002 1/1/2013 3 100 ``` Sample query ``` SELECT AVG(Band1), AVG(Band2), AVG(Band3)...AVG(Band100) FROM ActivityBands WHERE DateValue > '1/1/2012' AND DateValue < '1/1/2013' GROUP BY UserId ```
Store the data in the normalized format. If you are not getting acceptable performance from this scheme, instead of denormalizing, first consider what indexes you have on the table. You're likely missing an index that would make this perform similar to the denormalized table. Next, try writing a query to retrieve data from the normalized table so that the result set looks like the denormalized table, and use that query to create an [indexed view](https://stackoverflow.com/questions/3986366/how-to-create-materialized-views-in-sql-server). This will give you select performance identical to the denormalized table, but retain the nice data organization benefits of the proper normalization.
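As a sketch of the "query the normalized table so the result set looks denormalized" idea, here is a conditional-aggregation pivot using Python's built-in sqlite3 (table and column names follow the question; in SQL Server this same SELECT shape is what you would wrap in an indexed view):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ActivityBands
                (UserId INT, DateValue TEXT, Band INT, BandValue INT)""")
conn.executemany("INSERT INTO ActivityBands VALUES (?,?,?,?)", [
    (10001, '2013-01-01', 1, 1),
    (10001, '2013-01-01', 2, 5),
    (10001, '2013-01-02', 1, 3),
    (10001, '2013-01-02', 2, 7),
])

# Pivot the normalized rows into one column per band; AVG ignores the
# NULLs produced by the non-matching CASE branches.
row = conn.execute("""
    SELECT UserId,
           AVG(CASE WHEN Band = 1 THEN BandValue END) AS Band1,
           AVG(CASE WHEN Band = 2 THEN BandValue END) AS Band2
    FROM ActivityBands
    GROUP BY UserId
""").fetchone()

print(row)  # (10001, 2.0, 6.0)
```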
Denormalization optimizes exactly one means of accessing the data, at the expense of (almost all) others. If you have only one access method that is performance critical, denormalization is likely to help, though proper index selection is of greater benefit. However, if you have multiple performance-critical access paths to the data, you are better off seeking other optimizations. Creating an appropriate clustered index, putting your non-clustered indices on SSDs, and increasing memory on your server are all techniques that will improve performance for *all* accesses, rather than trading off between various accesses.
Does denormalizing rows to columns enhance performance in SQL Server?
[ "sql", "sql-server", "denormalization", "database-normalization" ]
Not sure how to phrase this question, but I want an aggregate query applied to multiple rows. Hopefully an example should make this easier. Assuming I have the following data: ``` player | year | games ------------------------- ausmubr01 | 2006 | 139 ausmubr01 | 2007 | 117 bondsba01 | 2006 | 130 bondsba01 | 2007 | 126 stairma01 | 2006 | 26 stairma01 | 2006 | 77 stairma01 | 2006 | 14 stairma01 | 2007 | 125 ``` And for each player in each year, I want to calculate their "career year", i.e. the number of years they've been playing: ``` player | year | games | cyear -------------------------------- ausmubr01 | 2006 | 139 | 1 ausmubr01 | 2007 | 117 | 2 bondsba01 | 2006 | 130 | 1 bondsba01 | 2007 | 126 | 2 stairma01 | 2006 | 26 | 1 stairma01 | 2006 | 77 | 2 stairma01 | 2006 | 14 | 3 stairma01 | 2007 | 125 | 4 ``` It would be natural to express this transformation as `SELECT player, year, games, year - min(year) + 1 as cyear FROM baseball GROUP by player` but because of the rules for aggregate queries the expression is only evaluated once for each group: ``` player | year | games | cyear -------------------------------- ausmubr01 | 2006 | 139 | 1 bondsba01 | 2006 | 130 | 1 stairma01 | 2006 | 26 | 1 ``` How can I overcome this problem in general (i.e. not just for this case but whenever I want to perform an arithmetic operation combining an existing column and a single per-group number computed with an aggregate function)?
You can use [`ROW_NUMBER`](http://msdn.microsoft.com/en-us/library/ms186734%28v=sql.110%29.aspx) for the career-year: ``` SELECT player, year, games, cyear = ROW_NUMBER () OVER (PARTITION BY player ORDER BY year), gamesPerMax = 1.0 * games / MAX(games) OVER (PARTITION BY player) FROM dbo.TableName ``` [**Demo**](http://sqlfiddle.com/#!6/b63fa/6/0) Have a look at the powerful [`OVER` clause](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.105%29.aspx).
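If you just want to verify the numbering logic outside the database, the same per-player `ROW_NUMBER` can be sketched in plain Python with `itertools.groupby` (this assumes the rows are already sorted by player and year, as the sample data is):

```python
from itertools import groupby

rows = [
    ('ausmubr01', 2006, 139), ('ausmubr01', 2007, 117),
    ('bondsba01', 2006, 130), ('bondsba01', 2007, 126),
    ('stairma01', 2006, 26),  ('stairma01', 2006, 77),
    ('stairma01', 2006, 14),  ('stairma01', 2007, 125),
]

result = []
for player, group in groupby(rows, key=lambda r: r[0]):
    # enumerate(..., start=1) plays the role of
    # ROW_NUMBER() OVER (PARTITION BY player ORDER BY year).
    for cyear, (p, year, games) in enumerate(group, start=1):
        result.append((p, year, games, cyear))

print(result[:2])  # [('ausmubr01', 2006, 139, 1), ('ausmubr01', 2007, 117, 2)]
```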
One straightforward method is to compute each player's starting year as an aggregate query, and join the result with the original data. These kinds of "sequence based" queries are usually tricky to express in a set-based language :( ``` WITH tmp as ( select player, min(year) as minyear from table group by player ) select t.*, t.year - tmp.minyear + 1 as cyear from table as t, tmp where t.player = tmp.player; ```
Combined aggregated and non-aggregate query in SQL
[ "sql", "aggregate-functions" ]
Trying to use pybonjour but not sure if it is what I need. <https://code.google.com/p/pybonjour/> I want to be able to discover iOS devices that appear on my network automatically, will be running a script later on based on this, but first I want to just discover a iOS devices as soon as it appear/disappears on my wifi network. So the question, how do I do this? running on a windows machine with python27 and the pybonjour package installed, the two examples work from the pybonjour page, but what command do I run to discover iOS devices using the scripts included on my network? or will this only discovery services running on my pc that i run this script on! If I am going in the wrong direction please let me know, I can't seem to find the documentation on this package! ``` python browse_and_resolve.py xxxxxx ``` Thx Matt. Update... This article and the browser was helpful, <http://marknelson.us/2011/10/25/dns-service-discovery-on-windows/> in finding the services I needed to search for. example; (this discovered my apple tv's, not at home atm so can't check what the iphone is called! I assume iphone! ``` python browse_and_resolve.py _appletv._tcp ``` Also if you have the windows utility dns-sd.exe this will search for all the services available on the network. I used this to find what I was looking for. ``` dns-sd -B _services._dns-sd._udp ``` Update... "Bonjour is used in two ways: - publishing a service - detecting (browsing for) available services". For what I want to do, I don't think it will work as the ipad/iPhone won't advertise a service unless I'm running a app that advertise one (or jailbreak my iPhone/ipad and then ssh will be open). Any more ideas?
What you're trying to do (a) probably can't be done, and (b) probably wouldn't be much use if it could. The point of Bonjour is to discover *services*, not *devices*. Of course each service is provided by some device, so indirectly you can discover devices with it… but only by discovering a service that they're advertising. As far as I know, iOS devices (except Apple TVs) don't advertise any services, except while you're running an app that uses Bonjour to find the same app on other machines. (Except for jailbroken devices, which often advertise SSH, AFP, etc.) There are a few ways to, indirectly, get a list of all services being advertised by anyone on the network. The simplest is probably to use [Bonjour Browser for Windows](http://hobbyistsoftware.com/bonjourbrowser). (I've never actually used it, but the original Mac tool and the Java port, both of which I *have* used, both suggest this Windows port for Windows users.) Fire it up and you'll get a list of services, and you can click on each one to get the details. So, you can verify that your iPhone and iPad aren't advertising any services, which will show that there is no way to detect them via Bonjour. Meanwhile, even if you *did* find a device, what are you planning to do? Presumably you want to communicate with the device in some way, right? Whatever service you're trying to communicate with… just browse for that service—and then, if appropriate, filter down to iOS devices. That's got to be easier than browsing for iOS devices and then filtering down to those that have the service you want. --- As for whether there's *any* way to detect iOS devices… Well, there are at least two possibilities. I don't know if either of them will work, but… First, even if the iOS device isn't advertising anything for you, I assume it's browsing for services *you* can advertise. How else does it find that there's an Apple TV to AirTunes to, an iTunes on the LAN to sync with, etc.?
So, use Bonjour Browser to get a list of all services your iTunes-running desktop, Apple TV, etc. are advertising. Then turn off all the services on your desktop, use PyBonjour to advertise whichever services seem plausibly relevant (and, if need be, use netcat to put trivial listeners on the ports you advertise). Then turn on your iPhone, and see if it connects to any of them. You may want to leave it running for a while, or switch WiFi off and back on. (I'm guessing that, despite Apple's recommendations, it doesn't browse continuously for most services, but just checks every once in a while and/or every time its network status changes. After all, Apple's recommendations are for foreground interactive apps, not background services.) Unfortunately, even if you can find a service that all iOS devices will connect to, you may not be able to distinguish iOS devices from others just by getting connections there. For example, I'm pretty sure any Mac or Windows box running iTunes will hit up your fake AirTunes service, and any Mac will hit your AirPrint, and so on. So, how do you distinguish that from an iPhone hitting it? You may need to actually serve enough of the protocol to get information out of them. Which will be particularly difficult for Apple's undocumented protocols. But hopefully you'll get lucky, and there will be something that all iOS devices, and nothing else, will want to talk to. iTunes Sync seems like the obvious possibility. Alternatively, there are a few things they *have* to broadcast, or they just wouldn't work. You can't get on a WiFi network without broadcasts. And most home WiFi networks use DHCP, which means they have to broadcast DHCP discover (and request), as well. There may be some kind of heuristic signature you can detect in these messages. If nothing else, enabling DDNS should cause the device to send its hostname, and you can guess based on that (e.g., unless you change the defaults, `hostname.lower().endswith('iphone')`). 
The easiest way is probably to set up your desktop as the main access point for your home network. I believe it's as simple as turning on Internet Connection Sharing somewhere in the control panel. (Setting up as a DHCP relay agent is much less overhead than being a full router, but I have no idea how you'd even get started doing that on Windows.) Then you can capture the DHCP broadcasts (or, failing that, the 802.11 broadcasts) as they come in. [Wireshark](http://www.wireshark.org) will capture and parse the messages for you easily, so you can watch and see if it looks like this is worth pursuing farther. (See [RFC 2131](http://www.ietf.org/rfc/rfc2131.txt) for details on the format that aren't obvious from Wireshark's cryptic one-liner descriptions.) You can take this even farther and watch the internet connections every host makes once they're connected to the internet. Any device that's periodically checking the App Store, the iOS upgrade server, etc.… Well, unless one of the jailbreak devteam guys lives in your house, that's probably an iPhone, right? The downside is that some of these checks may be *very* periodic, and detecting an iPhone 6 hours after it connects to your network isn't very exciting.
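The default-hostname guess at the end can be sketched as a tiny helper. This is a crude heuristic and the hostnames below are hypothetical defaults (anyone who renamed their device will slip through, and a PC named "my-iphone" would be a false positive):

```python
def looks_like_ios(hostname):
    """Guess whether a DHCP/DDNS hostname belongs to an iOS device."""
    h = hostname.lower()
    if h.endswith(".local"):
        h = h[:-len(".local")]
    # Default iOS hostnames typically end with the model name.
    return h.endswith(("iphone", "ipad", "ipod"))

print(looks_like_ios("Johns-iPhone.local"))  # True
print(looks_like_ios("Janes-iPad"))          # True
print(looks_like_ios("office-printer"))      # False
```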
Use [python-nmap](https://pypi.python.org/pypi/python-nmap/0.1.1) rather than Bonjour. Or you could use [pyzeroconf](https://github.com/paulsm/pyzeroconf) (Bonjour is an implementation of zeroconf) but it is a little outdated (but should still work). python-nmap is probably easiest, let's suppose you wanted to find all connected devices that have 'iPhone' or 'iPad' in their hostname (just a *simplistic* concept): ``` import nmap ... def notify_me(ip, hostname): print("I found an iOS device! IP Address: %s, Hostname: %s" % (ip, hostname)) iOS_device_list = ['iPhone', 'iPad'] iOS_devices_on_net = {} nm = nmap.PortScanner() # scan ip range for i in range(2, 50, 1): ip = "192.168.1." + str(i) # specify ports to scan nm.scan(ip, '62078') # Matt mentioned that it picks up iphone-sync on this port hostname = nm[ip].hostname() for device in iOS_device_list: if device.lower() in hostname.lower(): iOS_devices_on_net.update({ip:hostname}) notify_me(ip, hostname) # show all iOS devices in ip range print iOS_devices_on_net ``` The limitation of this approach is that it relies on the individual having not changed their hostname which originally includes their name and device name. **It also assumes that there is a port listening on the iOS device that will return a hostname** (this may not be the case). You can use `osscan` which is preferred by running it as a command using python-nmap library. This is obviously a much better approach. My concept above is just a simple example of how it can be used. Using nmap from the command line (I believe python-nmap has `nm.commandline()` method) is simplest: ``` nmap -O -v ip ``` Also try adding `--osscan-guess; --fuzzy` for best results. Example: ``` nmap -O -v --osscan-guess ip ``` Then just search the output for iOS device keywords (see [this example](http://nmap.org/book/osdetect-usage.html)). It's human-readable. 
Note that you'll need to be running all of this as an administrator for it to work properly (Windows: `runas`, other: `sudo`).
Trying to discover iOS devices on my network using python script
[ "python", "ios", "bonjour" ]
I have a matrix in Python defined like this: ``` matrix = [['A']*4 for i in range(4)] ``` How do I print it in the following format: ``` 0 1 2 3 0 A A A A 1 A A A A 2 A A A A 3 A A A A ```
This function matches your exact output. ``` >>> def printMatrix(testMatrix): print ' ', for i in range(len(testMatrix[1])): # Make it work with non square matrices. print i, print for i, element in enumerate(testMatrix): print i, ' '.join(element) >>> matrix = [['A']*4 for i in range(4)] >>> printMatrix(matrix) 0 1 2 3 0 A A A A 1 A A A A 2 A A A A 3 A A A A >>> matrix = [['A']*6 for i in range(4)] >>> printMatrix(matrix) 0 1 2 3 4 5 0 A A A A A A 1 A A A A A A 2 A A A A A A 3 A A A A A A ``` To check for single-length elements and put an `&` in place of elements with length > 1, you could put a check in the list comprehension; the code would change as follows. ``` >>> def printMatrix2(testMatrix): print ' ', for i in range(len(testMatrix[1])): print i, print for i, element in enumerate(testMatrix): print i, ' '.join([elem if len(elem) == 1 else '&' for elem in element]) >>> matrix = [['A']*6 for i in range(4)] >>> matrix[1][1] = 'AB' >>> printMatrix(matrix) 0 1 2 3 4 5 0 A A A A A A 1 A AB A A A A 2 A A A A A A 3 A A A A A A >>> printMatrix2(matrix) 0 1 2 3 4 5 0 A A A A A A 1 A & A A A A 2 A A A A A A 3 A A A A A A ```
``` >>> for i, row in enumerate(matrix): ... print i, ' '.join(row) ... 0 A A A A 1 A A A A 2 A A A A 3 A A A A ``` I guess you'll find out how to print out the first line :)
print matrix with indicies python
[ "python", "printing", "matrix" ]
There are a lot of questions about recursive SELECT queries in MySQL, but most answers say that "there is NO solution for a recursive SELECT query in MySQL". Actually there is a certain solution and I want to understand it clearly, so this question follows on from a previous question that can be found at ([how-to-do-the-recursive-select-query-in-mysql](https://stackoverflow.com/questions/16513418/how-to-do-the-recursive-select-query-in-mysql)) Suppose you have this table: ``` col1 - col2 - col3 1 - a - 5 5 - d - 3 3 - k - 7 6 - o - 2 2 - 0 - 8 ``` and you want to find all the links that connect to value "1" in col1, i.e. you want to print out: ``` 1 - a - 5 5 - d - 3 3 - k - 7 ``` Then you can use this simple query: ``` select col1, col2, @pv:=col3 as 'col3' from table1 join (select @pv:=1)tmp where col1=@pv ``` OK, good. However, if your table has 2 records containing "1" in col1 and 2 records containing "3" in col1, e.g.: ``` col1 - col2 - col3 1 - a - 5 1 - m - 9 5 - d - 3 3 - k - 7 6 - o - 2 3 - v - 10 2 - 0 - 8 ``` then, when users search for "1" in col1, it should show all the links connecting to the two "1" records, i.e.
it should show this expected result: ``` col1 - col2 - col3 1 - a - 5 1 - m - 9 5 - d - 3 3 - k - 7 3 - v - 10 ``` So, my question is **how do we modify the above query so that it will show all the links as in the above expected result?** **EDIT:** @Gordon, but if we omit `select distinct col1, col2 from` then this query means something; can you work on this (since the child IDs increase, we can order table1): ``` select col1, col2, @pv:=(case when find_in_set(col3, @pv) then @pv else concat(@pv, ',', col3) end) as 'col3' from (select * from table1 order by col1) tb1 join (select @pv:='1') tmp on find_in_set(col1, @pv) > 0 ``` In this case, we don't worry about the order. For example, if this is the data: ``` col1 - col2 - col3 4 - a - 5 1 - d - 2 1 - k - 4 2 - o - 3 6 - k - 8 8 - o - 9 ``` the output will be: ``` col1 - col2 - col3 1 - d - 1,2 1 - k - 1,2,4 2 - o - 1,2,4,3 ``` So we get this result `1,2,4,3`, right? And we just select all records whose col1 is in `1,2,4,3`. Then we can get the final expected result. If that is the case, can you think of any special case that rules out the solution I just mentioned?
I keep wondering if something like this would work: ``` select distinct col1, col2 from (select col1, col2, @pv:=(case when find_in_set(col3, @pv) then @pv else concat(@pv, ',', col3) end) as 'col3' from table1 join (select @pv:='1') tmp on find_in_set(col1, @pv) > 0 ) t ``` Something like this should work for small data sets. However, the idea of putting all the ids in a string is limited to the capacity of a string.
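The string-capacity concern can be sidestepped entirely by doing the recursion client-side: repeatedly SELECT the next frontier of ids until nothing new turns up. Here is a sketch with Python's built-in sqlite3 on the question's data (against MySQL you would issue the same IN-list query once per round):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 INT, col2 TEXT, col3 INT)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?)", [
    (1, 'a', 5), (1, 'm', 9), (5, 'd', 3), (3, 'k', 7),
    (6, 'o', 2), (3, 'v', 10), (2, '0', 8),
])

links, frontier, seen = [], {1}, set()
while frontier:
    seen |= frontier
    marks = ",".join("?" * len(frontier))
    batch = conn.execute(
        f"SELECT col1, col2, col3 FROM table1 WHERE col1 IN ({marks}) ORDER BY rowid",
        sorted(frontier)).fetchall()
    links.extend(batch)
    # Follow col3 of every matched row, skipping ids already expanded,
    # so cyclic data cannot loop forever.
    frontier = {row[2] for row in batch} - seen

print(links)
```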
Given my limited depth of hierarchy levels, I used the following: parents: ``` select * from mytable cross join ( select A.id Aid,B.id Bid, C.id Cid, D.id Did, E.id Eid, F.id Fid,G.id Gid, H.id Hid from mytable A left join mytable B on B.id=A.parent left join mytable C on C.id=B.parent left join mytable D on D.id=C.parent left join mytable E on E.id=D.parent left join mytable F on F.id=E.parent left join mytable G on G.id=F.parent left join mytable H on H.id=G.parent where A.id=9 ) X where id in (Aid,Bid,Cid,Did,Eid,Fid,Gid,Hid); ``` children: ``` select * from mytable where id in ( select distinct id from mytable cross join ( select A.id Aid,B.id Bid, C.id Cid, D.id Did, E.id Eid, F.id Fid,G.id Gid, H.id Hid FROM mytable A left join mytable B on B.parent=A.id left join mytable C on C.parent=B.id left join mytable D on D.parent=C.id left join mytable E on E.parent=D.id left join mytable F on F.parent=E.id left join mytable G on G.parent=F.id left join mytable H on H.parent=G.id Where A.id=1 ) X where id in (Aid,Bid,Cid,Did,Eid,Fid,Gid,Hid) ); ```
@ Symbol - a solution for Recursive SELECT query in Mysql?
[ "mysql", "sql", "recursive-query", "sql-optimization" ]
In Python, a `date` object can be converted to the proleptic Gregorian ordinal this way: ``` d=datetime.date(year=2010, month=3, day=1) d.toordinal() ``` but what is the reverse operation?
The opposite is [date.fromordinal](http://docs.python.org/2/library/datetime.html#datetime.date.fromordinal) > **classmethod date.fromordinal(ordinal)** > >     Return the date corresponding to the proleptic Gregorian ordinal, > where January 1 of year 1 has ordinal 1. ValueError is raised unless > 1 <= ordinal <= date.max.toordinal(). For any date d, > date.fromordinal(d.toordinal()) == d.
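A quick round-trip check of the pair, using the date from the question:

```python
import datetime

d = datetime.date(year=2010, month=3, day=1)
n = d.toordinal()

# fromordinal is the exact inverse of toordinal.
assert datetime.date.fromordinal(n) == d
print(n, datetime.date.fromordinal(n))
```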
It's `date.fromordinal()`, as Jon wrote in the comments, or `datetime.fromordinal()`. You can read more about it in the [date documentation](http://docs.python.org/2/library/datetime.html#datetime.date.fromordinal) and for [datetime](http://docs.python.org/2/library/datetime.html#datetime.datetime.fromordinal) From the docs: > `classmethod` `date.fromordinal(ordinal)` > > Return the date corresponding > to the proleptic Gregorian ordinal, where January 1 of year 1 has > ordinal 1. `ValueError` is raised unless `1 <= ordinal <= > date.max.toordinal()`. > > For any date `d`, `date.fromordinal(d.toordinal()) > == d.`
What is the inverse of date.toordinal() in python?
[ "python", "date" ]
Example strings: 1. I am a numeric string 75698 2. I am a alphanumeric string A14-B32-C7D So far my regex works: `(\S+)$` I want to add a way (probably a lookahead) to check whether the result generated by the above regex contains at least one digit (0-9). This is not working: `(\S+(?=\S*\d\S*))$` How should I do it?
A lookahead is not necessary for this; it is simply: ``` (\S*\d+\S*) ``` Here is a test case: <http://regexr.com?34s7v>
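Since the question anchors at the end of the string, a quick check of that pattern with `$` appended, run against both example strings and one with no digits:

```python
import re

pattern = re.compile(r"(\S*\d+\S*)$")

m1 = pattern.search("I am a numeric string 75698")
m2 = pattern.search("I am a alphanumeric string A14-B32-C7D")
m3 = pattern.search("no digits here")

print(m1.group(1))  # 75698
print(m2.group(1))  # A14-B32-C7D
print(m3)           # None -- the last token must contain a digit
```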
Permute it and use the `\D` class instead of `\S`: ``` ((?=\D*\d)\S+)$ ``` Explanation: `\D = [^\d]`; in other words, it matches everything that is not a digit. You can be more explicit (better performance for your examples) with: ``` ((?=[a-zA-Z-]*\d)[a-zA-Z\d-]+)$ ``` and if you have only uppercase letters, you know what to do. (The smaller the class, the better the regex performs.)
regex to strict check numbers in string
[ "python", "regex" ]
In the python docs page for [`any`](http://docs.python.org/2/library/functions.html#any), the equivalent code for the `any()` function is given as: ``` def any(iterable): for element in iterable: if element: return True return False ``` How does this function know what element I wanna test if call it in this form? ``` any(x > 0 for x in list) ``` From the function definition, all I can see is that I'm passing an iterable object. How does the `for` loop know I am looking for something `> 0`?
If you use `any(lst)` you see that `lst` is the iterable, which is a list of some items. If it contained `[0, False, '', 0.0, [], {}, None]` (which all have boolean values of `False`) then `any(lst)` would be `False`. If `lst` also contained any of the following `[-1, True, "X", 0.00001]` (all of which evaluate to `True`) then `any(lst)` would be `True`. In the code you posted, `x > 0 for x in lst`, this is a different kind of iterable, called a **generator expression**. Before generator expressions were added to Python, you would have created a **list comprehension**, which looks very similar, but with surrounding `[]`'s: `[x > 0 for x in lst]`. From the `lst` containing `[-1, -2, 10, -4, 20]`, you would get this *comprehended list*: `[False, False, True, False, True]`. This internal value would then get passed to the `any` function, which would return `True`, since there is at least one `True` value. But with **generator expressions**, Python no longer has to create that internal list of `True(s)` and `False(s)`, the values will be generated as the `any` function iterates through the values generated one at a time by the generator expression. *And*, since `any` short-circuits, it will stop iterating as soon as it sees the first `True` value. This would be especially handy if you created `lst` using something like `lst = range(-1,int(1e9))` (or `xrange` if you are using *Python2.x*). Even though this expression will generate over a billion entries, `any` only has to go as far as the third entry when it gets to `1`, which evaluates `True` for `x>0`, and so `any` can return `True`. If you had created a **list comprehension**, Python would first have had to create the billion-element list in memory, and then pass that to `any`. But by using a **generator expression**, you can have Python's builtin functions like `any` and `all` break out early, as soon as a `True` or `False` value is seen.
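You can see the short-circuiting directly by passing `any` a generator whose elements record when they are evaluated; it stops at the first truthy result:

```python
evaluated = []

def is_positive(x):
    evaluated.append(x)   # record that this element was actually checked
    return x > 0

result = any(is_positive(x) for x in [-1, -2, 1, -3, 99])

print(result)     # True
print(evaluated)  # [-1, -2, 1] -- neither -3 nor 99 was ever checked
```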
``` >>> names = ['King', 'Queen', 'Joker'] >>> any(n in 'King and john' for n in names) True >>> all(n in 'King and Queen' for n in names) False ``` It just reduces several lines of code to one. You don't have to write lengthy code like: ``` for n in names: if n in 'King and john': print True else: print False ```
How does this input work with the Python 'any' function?
[ "python", "generator-expression" ]
I'm relatively new to Python but I'm trying to understand something which seems basic. Create a vector: ``` x = np.linspace(0,2,3) Out[38]: array([ 0., 1., 2.]) ``` Now why isn't x[:,0] a valid index? ``` IndexError: invalid index ``` It must be x[0]. I have a function I am calling which calculates: ``` np.sqrt(x[:,0]**2 + x[:,1]**2 + x[:,2]**2) ``` Why can't what I have just be true regardless of the input? In many other languages, it is independent of there being other rows in the array. Perhaps I misunderstand something fundamental - sorry if so. I'd like to avoid putting: ``` if len(x) == 1: norm = np.sqrt(x[0]**2 + x[1]**2 + x[2]**2) else: norm = np.sqrt(x[:,0]**2 + x[:,1]**2 + x[:,2]**2) ``` everywhere. Surely there is a way around this... thanks. Edit: An example of it working in another language is Matlab: ``` >> b = [1,2,3] b = 1 2 3 >> b(:,1) ans = 1 >> b(1) ans = 1 ```
Perhaps you are looking for this: ``` np.sqrt(x[...,0]**2 + x[...,1]**2 + x[...,2]**2) ``` There can be any number of dimensions in place of the ellipsis `...` See also [What does the Python Ellipsis object do?](https://stackoverflow.com/q/772124/222914), and [the docs of NumPy basic slicing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing)
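A quick sketch showing the same ellipsis expression working unchanged on a 1-D vector and on a 2-D stack of vectors:

```python
import numpy as np

def norm(x):
    # ... stands in for "however many leading dimensions there are"
    return np.sqrt(x[..., 0]**2 + x[..., 1]**2 + x[..., 2]**2)

v = np.array([3.0, 4.0, 0.0])                     # one vector
m = np.array([[3.0, 4.0, 0.0], [1.0, 2.0, 2.0]])  # two vectors

print(norm(v))  # 5.0
print(norm(m))  # [5. 3.]
```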
It looks like the ellipsis as described by @JanneKarila has answered your question, but I'd like to point out how you might make your code a bit more "numpythonic". It appears you want to handle an n-dimensional array with the shape (d\_1, d\_2, ..., d\_{n-1}, 3), and compute the magnitudes of this collection of three-dimensional vectors, resulting in an (n-1)-dimensional array with shape (d\_1, d\_2, ..., d\_{n-1}). One simple way to do that is to square all the elements, then sum along the last axis, and then take the square root. If `x` is the array, that calculation can be written `np.sqrt(np.sum(x**2, axis=-1))`. The following shows a few examples. x is 1-D, with shape (3,): ``` In [31]: x = np.array([1.0, 2.0, 3.0]) In [32]: np.sqrt(np.sum(x**2, axis=-1)) Out[32]: 3.7416573867739413 ``` x is 2-D, with shape (2, 3): ``` In [33]: x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) In [34]: x Out[34]: array([[ 1., 2., 3.], [ 4., 5., 6.]]) In [35]: np.sqrt(np.sum(x**2, axis=-1)) Out[35]: array([ 3.74165739, 8.77496439]) ``` x is 3-D, with shape (2, 2, 3): ``` In [36]: x = np.arange(1.0, 13.0).reshape(2,2,3) In [37]: x Out[37]: array([[[ 1., 2., 3.], [ 4., 5., 6.]], [[ 7., 8., 9.], [ 10., 11., 12.]]]) In [38]: np.sqrt(np.sum(x**2, axis=-1)) Out[38]: array([[ 3.74165739, 8.77496439], [ 13.92838828, 19.10497317]]) ```
why can't x[:,0] = x[0] for a single row vector?
[ "", "python", "vector", "numpy", "" ]
Assuming:

* I am using REPEATABLE_READ or SERIALIZABLE transaction isolation (locks get retained every time I access a row)
* We are talking about multiple threads accessing multiple tables simultaneously.

I have the following questions:

1. **Is it possible for an `INSERT` operation to cause a deadlock?** If so, please provide a detailed scenario demonstrating how a deadlock may occur (e.g. Thread 1 does this, Thread 2 does that, ..., deadlock).
2. For bonus points: answer the same question for all other operations (e.g. SELECT, UPDATE, DELETE).

**UPDATE**:

3. For super bonus points: how can I avoid a deadlock in the following scenario?

Given tables:

* permissions`[id BIGINT PRIMARY KEY]`
* companies`[id BIGINT PRIMARY KEY, name VARCHAR(30), permission_id BIGINT NOT NULL, FOREIGN KEY (permission_id) REFERENCES permissions(id)]`

I create a new Company as follows:

* INSERT INTO permissions; -- Inserts permissions.id = 100
* INSERT INTO companies (name, permission_id) VALUES ('Nintendo', 100); -- Inserts companies.id = 200

I delete a Company as follows:

* SELECT permission_id FROM companies WHERE id = 200; -- returns permission_id = 100
* DELETE FROM companies WHERE id = 200;
* DELETE FROM permissions WHERE id = 100;

In the above example, the INSERT locking order is [permissions, companies] whereas the DELETE locking order is [companies, permissions]. Is there a way to fix this example for `REPEATABLE_READ` or `SERIALIZABLE` isolation?
Generally all modifications can cause a deadlock and selects will not (get to that later). So 1. No you cannot ignore these. 2. You can somewhat ignore select depending on your database and settings but the others will give you deadlocks. You don't even need multiple tables. The best way to create a deadlock is to do the same thing in a different order. SQL Server examples: ``` create table A ( PK int primary key ) ``` Session 1: ``` begin transaction insert into A values(1) ``` Session 2: ``` begin transaction insert into A values(7) ``` Session 1: ``` delete from A where PK=7 ``` Session 2: ``` delete from A where PK=1 ``` You will get a deadlock. So that proved inserts & deletes can deadlock. Updates are similar: Session 1: ``` begin transaction insert into A values(1) insert into A values(2) commit begin transaction update A set PK=7 where PK=1 ``` Session 2: ``` begin transaction update A set pk=9 where pk=2 update A set pk=8 where pk=1 ``` Session 1: ``` update A set pk=9 where pk=2 ``` Deadlock! SELECT should never deadlock but on some databases it will because the locks it uses interfere with consistent reads. That's just crappy database engine design though. SQL Server will not lock on a SELECT if you use SNAPSHOT ISOLATION. Oracle & I think Postgres will never lock on SELECT (unless you have FOR UPDATE which is clearly reserving for an update anyway). So basically I think you have a few incorrect assumptions. I think I've proved: 1. Updates can cause deadlocks 2. Deletes can cause deadlocks 3. Inserts can cause deadlocks 4. You do not need more than one table 5. You **do** need more than one session You'll just have to take my word on SELECT ;) but it will depend on your DB and settings.
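The standard fix for the ordering deadlocks shown above is to make every session acquire its resources in one agreed global order. A minimal Python sketch of that rule, using `threading.Lock` objects to stand in for the row locks (the names are illustrative, not any database API):

```python
import threading

lock_pk1 = threading.Lock()  # stands in for the lock on row PK=1
lock_pk7 = threading.Lock()  # stands in for the lock on row PK=7

def session(name, results):
    # Every session takes the locks in the same order: pk1, then pk7.
    # Taking them in opposite orders is exactly how the sessions above deadlock.
    with lock_pk1:
        with lock_pk7:
            results.append(name)

results = []
threads = [threading.Thread(target=session, args=(n, results))
           for n in ("session 1", "session 2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both sessions finish; with reversed acquisition orders they could block each other forever.
```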
In addition to LoztInSpace's answer, `inserts` may cause deadlocks even without `deletes` or `updates` presence. All you need is a unique index and a reversed operations order. Example in Oracle : ``` create table t1 (id number); create unique index t1_pk on t1 (id); --thread 1 : insert into t1 values(1); --thread 2 insert into t1 values(2); --thread 1 : insert into t1 values(2); --thread 2 insert into t1 values(1); -- deadlock ! ```
Can an INSERT operation result in a deadlock?
[ "", "sql", "insert", "deadlock", "" ]
I've read the masked array documentation several times now, searched everywhere and feel thoroughly stupid. I can't figure out for the life of me how to apply a mask from one array to another. Example:

```
import numpy as np
y = np.array([2,1,5,2])         # y axis
x = np.array([1,2,3,4])         # x axis
m = np.ma.masked_where(y>2, y)  # filter out values larger than 2
print m
[2 1 -- 2]
print np.ma.compressed(m)
[2 1 2]
```

So this works fine.... but to plot this y axis, I need a matching x axis. How do I apply the mask from the y array to the x array? Something like this would make sense, but produces rubbish:

```
new_x = x[m.mask].copy()
new_x
array([5])
```

So, how on earth is that done (note the new x array needs to be a new array).

**Edit:** Well, it seems one way to do this works like this:

```
>>> import numpy as np
>>> x = np.array([1,2,3,4])
>>> y = np.array([2,1,5,2])
>>> m = np.ma.masked_where(y>2, y)
>>> new_x = np.ma.masked_array(x, m.mask)
>>> print np.ma.compressed(new_x)
[1 2 4]
```

But that's incredibly messy! I'm trying to find a solution as elegant as IDL...
Why not simply

```
import numpy as np
y = np.array([2,1,5,2])          # y axis
x = np.array([1,2,3,4])          # x axis
m = np.ma.masked_where(y>2, y)   # filter out values larger than 2
print list(m)
print np.ma.compressed(m)

# mask x the same way
m_ = np.ma.masked_where(y>2, x)  # mask x wherever y is larger than 2

# print here the list
print list(m_)
print np.ma.compressed(m_)
```

The code is for Python 2.x.

Also, as proposed by joris, `new_x = x[~m.mask].copy()` does the job, giving an array

```
>>> new_x
array([1, 2, 4])
```
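For comparison, the same paired filtering can be done without masked arrays at all — a plain-Python sketch that keeps each x wherever the matching y passes the filter:

```python
x = [1, 2, 3, 4]
y = [2, 1, 5, 2]

# keep each x value whose paired y value is not masked out (y <= 2)
new_x = [xi for xi, yi in zip(x, y) if yi <= 2]
new_y = [yi for yi in y if yi <= 2]
```

This gives the same kept values as the compressed masked arrays above.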
I had a similar issue, but involving loads more masking commands and more arrays to apply them. My solution is that I do all the masking on one array and then use the finally masked array as the condition in the `mask_where` command. For example: ``` y = np.array([2,1,5,2]) # y axis x = np.array([1,2,3,4]) # x axis m = np.ma.masked_where(y>5, y) # filter out values larger than 5 new_x = np.ma.masked_where(np.ma.getmask(m), x) # applies the mask of m on x ``` The nice thing is you can now apply this mask to many more arrays without going through the masking process for each of them.
how to apply a mask from one array to another array?
[ "", "python", "numpy", "" ]
Similar questions have already been asked on SO, but they have more specific constraints and their answers don't apply to my question. Generally speaking, what is the most pythonic way to determine if an arbitrary numpy array is a subset of another array? More specifically, I have a roughly 20000x3 array and I need to know the indices of the 1x3 elements that are entirely contained within a set. More generally, is there a more pythonic way of writing the following: ``` master = [12, 155, 179, 234, 670, 981, 1054, 1209, 1526, 1667, 1853] # some indices of interest triangles = np.random.randint(2000, size=(20000, 3)) # some data for i, x in enumerate(triangles): if x[0] in master and x[1] in master and x[2] in master: print i ``` For my use case, I can safely assume that len(master) << 20000. (Consequently, it is also safe to assume that master is sorted because this is cheap).
You can do this easily by iterating over the array in a list comprehension. A toy example is as follows:

```
import numpy as np
x = np.arange(30).reshape(10,3)
searchKey = [4,5,8]
x[[0,3,7],:] = searchKey
x
```

gives

```
array([[ 4,  5,  8],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 4,  5,  8],
       [12, 13, 14],
       [15, 16, 17],
       [18, 19, 20],
       [ 4,  5,  8],
       [24, 25, 26],
       [27, 28, 29]])
```

Now iterate over the rows:

```
ismember = [row==searchKey for row in x.tolist()]
```

The result is

```
[True, False, False, True, False, False, False, True, False, False]
```

You can modify it to check for a subset, as in your question:

```
searchKey = [2,4,10,5,8,9] # Add more elements for testing
setSearchKey = set(searchKey)
ismember = [setSearchKey.issuperset(row) for row in x.tolist()]
```

If you need the indices, then use

```
np.where(ismember)[0]
```

It gives

```
array([0, 3, 7])
```
Here are two approaches you could try: 1, Use sets. Sets are implemented much like python dictionaries and have have constant time lookups. That would look much like the code you already have, just create a set from master: ``` master = [12,155,179,234,670,981,1054,1209,1526,1667,1853] master_set = set(master) triangles = np.random.randint(2000,size=(20000,3)) #some data for i, x in enumerate(triangles): if master_set.issuperset(x): print i ``` 2, Use search sorted. This is nice because it doesn't require you to use hashable types and uses numpy builtins. `searchsorted` is log(N) in the size of master and O(N) in the size of triangels so it should also be pretty fast, maybe faster depending on the size of your arrays and such. ``` master = [12,155,179,234,670,981,1054,1209,1526,1667,1853] master = np.asarray(master) triangles = np.random.randint(2000,size=(20000,3)) #some data idx = master.searchsorted(triangles) idx.clip(max=len(master) - 1, out=idx) print np.where(np.all(triangles == master[idx], axis=1)) ``` This second case assumes master is sorted, as `searchsorted` implies.
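The `searchsorted` idea can be mimicked in pure Python with the stdlib `bisect` module — a sketch that, like the answer above, assumes `master` is sorted (the triangle data here is a small stand-in for the real 20000x3 array):

```python
from bisect import bisect_left

master = [12, 155, 179, 234, 670, 981, 1054, 1209, 1526, 1667, 1853]
triangles = [(12, 155, 179), (1, 2, 3), (670, 1853, 981)]  # small stand-in data

def in_master(value):
    """Binary search: True if value is an element of the sorted master list."""
    i = bisect_left(master, value)
    return i < len(master) and master[i] == value

# indices of the triangles entirely contained in master
hits = [i for i, tri in enumerate(triangles) if all(in_master(v) for v in tri)]
```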
check if numpy array is subset of another array
[ "", "python", "numpy", "set", "" ]
I have a table that has an integer column from which I am trying to get a few counts from. Basically I need four separate counts from the same column. The first value I need returned is the count of how many records have an integer value stored in this column between two values such as 213 and 9999, including the min and max values. The other three count values I need returned are just the count of records between different values of this column. I've tried doing queries like... ``` SELECT (SELECT Count(ID) FROM view1 WHERE ((MyIntColumn BETWEEN 213 AND 9999));) AS Value1, (SELECT Count(ID) FROM FROM view1 WHERE ((MyIntColumn BETWEEN 500 AND 600));) AS Value2 FROM view1; ``` So there are for example, ten records with this column value between 213 and 9999. The result returned from this query gives me 10, but it gives me the same value of 10, 618 times which is the number of total records in the table. How would it be possible for me to only have it return one record of 10 instead?
Use the Iif() function instead of CASE WHEN ``` select Condition1: iif( ), condition2: iif( ), etc ``` P.S. : What I used to do when working with Access was have the iif() resolve to 1 or 0 and then do a SUM() to get the counts. Roundabout but it worked better with aggregation since it avoided nulls.
```
SELECT COUNT(CASE WHEN MyIntColumn >= 213 AND MyIntColumn <= 9999
                  THEN MyIntColumn
                  ELSE NULL
             END) AS FirstValue
     , ??? AS SecondValue
     , ??? AS ThirdValue
     , ??? AS FourthValue
FROM Table
```

This doesn't need nesting or a CTE or anything. Just define your condition via CASE within COUNT's argument. I don't really understand what you want in the second, third and fourth columns, but it sounds very similar to the first one.
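The conditional-count pattern from both answers can be tried out with the stdlib `sqlite3` module (SQLite has no `IIf()`, so the CASE form is used; the table contents below are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE view1 (ID INTEGER PRIMARY KEY, MyIntColumn INTEGER)")
con.executemany("INSERT INTO view1 (MyIntColumn) VALUES (?)",
                [(100,), (250,), (550,), (580,), (7000,)])

# COUNT skips NULLs, so a CASE with no ELSE counts only the matching rows
row = con.execute("""
    SELECT COUNT(CASE WHEN MyIntColumn BETWEEN 213 AND 9999 THEN 1 END) AS Value1,
           COUNT(CASE WHEN MyIntColumn BETWEEN 500 AND 600  THEN 1 END) AS Value2
    FROM view1
""").fetchone()
```

One row comes back with both counts, instead of the count repeated once per record.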
How to get three count values from same column using SQL in Access?
[ "", "sql", "ms-access", "count", "subquery", "" ]
I have the following CSV file:

![enter image description here](https://i.stack.imgur.com/gPjID.png)

How do I import the numbers only into an array in python, one row at a time? No date, no string. My code:

```
import csv

def test():
    out = open("example.csv","rb")
    data = csv.reader(out)
    data = [row for row in data]
    out.close()
    print data
```

Let me be more clear. I don't want a huge 2D array. I want to import just the 2nd row, then manipulate the data, and then get the 3rd row. I would need a for loop for this, but I am not sure how csv fully works.
try this:

```
with open('the_CSV_file.csv','r') as f:
    box = f.readlines()

result_box = []
for line in box[1:]:
    items = line.strip().split(';')  # strip the trailing newline; adjust the separator character as needed
    result_box.append(items[1:])

print result_box
```
```
% <csv   # just a silly CSV I got from http://secrets.d8u.us/csv
Secret,Timestamp
Forza la fiera!,1368230474
American healthcare SUXXXXX,1368232342
I am not sure if I wanna take the girl out again,1368240406
I bred a race of intelligent penguin assassins to murder dick cheney. ,1368245584
"I guess it is my mother's time of the month, as it were",1368380424
i've seen walls breath,1368390258

In [33]: %paste
import csv
array = []  # collect the wanted fields here
with open('csv', 'rb') as csvfile:
    csv_reader = csv.reader(csvfile, dialect='excel') # excel may be the default, but doesn't hurt to be explicit
    csv_reader.next()                                 # skip the header row
    for row in csv_reader:
        array.append(row[1:])
## -- End pasted text --

In [34]: array
Out[34]:
[['1368230474'],
 ['1368232342'],
 ['1368240406'],
 ['1368245584'],
 ['1368380424'],
 ['1368390258']]
```
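A Python 3 sketch of the same row-at-a-time idea with the stdlib `csv` module — skip the header with `next()`, then convert the numeric fields as each row arrives. The column layout below is invented, since the original file is only shown as an image:

```python
import csv
import io

# stand-in for the real file; substitute open("example.csv", newline="") in practice
data = io.StringIO("Date,Open,Close\n2013-05-01,10.5,11.0\n2013-05-02,11.0,10.8\n")

reader = csv.reader(data)
next(reader)                                # skip the header row

rows = []
for row in reader:
    numbers = [float(v) for v in row[1:]]   # drop the date column, keep the numbers
    rows.append(numbers)                    # ...or process each row right here
```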
Excel CSV help Python
[ "", "python", "arrays", "excel", "csv", "multidimensional-array", "" ]
I was just wondering if there was an especially pythonic way of adding two tuples elementwise? So far (a and b are tuples), I have ``` map(sum, zip(a, b)) ``` My expected output would be: ``` (a[0] + b[0], a[1] + b[1], ...) ``` And a possible weighing would be to give a 0.5 weight and b 0.5 weight, or so on. (I'm trying to take a weighted average). Which works fine, but say I wanted to add a weighting, I'm not quite sure how I would do that. Thanks
Zip them, then sum each tuple. ``` [sum(x) for x in zip(a,b)] ``` **EDIT :** Here's a better, albeit more complex version that allows for weighting. ``` from itertools import starmap, islice, izip a = [1, 2, 3] b = [3, 4, 5] w = [0.5, 1.5] # weights => a*0.5 + b*1.5 products = [m for m in starmap(lambda i,j:i*j, [y for x in zip(a,b) for y in zip(x,w)])] sums = [sum(x) for x in izip(*[islice(products, i, None, 2) for i in range(2)])] print sums # should be [5.0, 7.0, 9.0] ```
If you do not mind the dependency, you can use numpy for elementwise operations on arrays ``` >>> import numpy as np >>> a = np.array([1, 2, 3]) >>> b = np.array([3, 4, 5]) >>> a + b array([4, 6, 8]) ```
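The weighted average the question asks about drops straight into either form — a pure-Python sketch with the 0.5/0.5 weights mentioned in the question:

```python
a = (1, 2, 3)
b = (3, 4, 5)
wa, wb = 0.5, 0.5   # weights for a and b

# elementwise weighted sum, returned as a tuple again
weighted = tuple(wa * ai + wb * bi for ai, bi in zip(a, b))
```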
Adding two tuples elementwise
[ "", "python", "tuples", "" ]
I'm trying to access a datetime column to find out whether the date is within a week from today, or overdue. Then write a new column's value to say `Incoming`, `Overdue` or `Fine`. ``` SELECT CASE next_action_date WHEN (BETWEEN GETDATE()+7 AND GETDATE()) THEN 'Incoming' WHEN (< GETDATE()) THEN 'Overdue' ELSE 'Fine' END AS condition FROM tableName ``` This is what I've got so far, but as you can probably see by looking, it doesn't work at all: > Msg 156, Level 15, State 1, Line 3 > Incorrect syntax near the keyword 'BETWEEN'.
Please try ``` select CASE when next_action_date between GETDATE() and GETDATE()+7 then 'Incoming' when next_action_date < GETDATE() THEN 'Overdue' else 'fine' end as Condition from( select GETDATE()+6 next_action_date )x ```
There are [two syntaxes](http://msdn.microsoft.com/en-us/library/ms173318.aspx) of the `CASE` expression - the so-called *simple* one that compares a single value against a list of other values, and a *searched* one with generic boolean conditions. You picked the simple case, but it does not have enough flexibility for what you need; you should switch to the searched syntax, like this: ``` SELECT CASE WHEN next_action_date BETWEEN GETDATE() AND GETDATE()+7 THEN 'Incoming' WHEN next_action_date < GETDATE() THEN 'Overdue' ELSE 'Fine' END AS condition FROM tableName ```
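The searched CASE can be exercised with the stdlib `sqlite3` module, using SQLite's `date()` function in place of SQL Server's `GETDATE()` (the rows below are invented for the demo):

```python
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableName (next_action_date TEXT)")

today = date.today()
con.executemany("INSERT INTO tableName VALUES (?)", [
    ((today + timedelta(days=3)).isoformat(),),   # within a week  -> Incoming
    ((today - timedelta(days=2)).isoformat(),),   # already passed -> Overdue
    ((today + timedelta(days=30)).isoformat(),),  # far future     -> Fine
])

labels = [r[0] for r in con.execute("""
    SELECT CASE
             WHEN next_action_date BETWEEN date('now') AND date('now', '+7 day')
               THEN 'Incoming'
             WHEN next_action_date < date('now') THEN 'Overdue'
             ELSE 'Fine'
           END
    FROM tableName
""")]
```

ISO date strings compare correctly as text, which is what makes BETWEEN work here.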
CASE expressions on datetime columns
[ "", "sql", "sql-server", "t-sql", "" ]
I am new to the *Django framework*, and I was trying to generate a link to download files from *Amazon S3*. I got this error when I tried to load the template page:

```
Reverse for 'myapp.views.handles3downloads' with arguments '(u'README.md',)' and keyword arguments '{}' not found.
```

**urls.py**

```
urlpatterns = patterns('',
    url(r'^handles3downloads/(\d+)/$', handles3downloads),
)
```

**views.py**

```
def handles3downloads(request, fname):
    bucket_name = 'bucketname'
    key = s.get_bucket(bucket_name).get_key(fname)
    dfilename = key.get_contents_to_filename(fname)
    wrapper = HttpResponse(file(dfilename))
    response = HttpResponse(wrapper, content_type='text/plain')
    response['Content-Length'] = os.path.getsize(dfilename)
    return response
```

**template file**

```
<a href="{% url 'myapp.views.handles3downloads' sfile.linkUrl %}">{{sfile.linkUrl}}</a>
```

I looked at some of the solutions with similar errors, but they didn't help me. Can anyone help me out, please? Thanks in advance.
urls.py

```
url(r'^handles3downloads/', handles3downloads),
```

views.py

```
import os
import mimetypes

from wsgiref.util import FileWrapper  # stdlib wrapper; older Django versions also shipped their own
from django.http import HttpResponse
from django.utils.encoding import smart_str

def handles3downloads(request):
    fname = request.GET['filename']
    bucket_name = 'bucketname'
    key = s.get_bucket(bucket_name).get_key(fname)  # s is an existing boto S3 connection
    key.get_contents_to_filename('/tmp/'+key.name)
    wrapper = FileWrapper(open('/tmp/'+fname, 'rb'))
    content_type = mimetypes.guess_type('/tmp/'+fname)[0]
    response = HttpResponse(wrapper, content_type=content_type)
    response['Content-Length'] = os.path.getsize('/tmp/'+fname)
    response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(fname)
    return response
```

templates

```
<a href="/handles3downloads/?filename=file1.jpg" rel="external">Download</a>
```
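The response-header half of that view can be checked without Django at all, since `mimetypes` and the `Content-Disposition` format are plain Python/HTTP (the filename is just the one from the example link, not a real file):

```python
import mimetypes

fname = "file1.jpg"   # hypothetical filename, as in the example link above

# fall back to a generic binary type when the extension is unknown
content_type = mimetypes.guess_type(fname)[0] or "application/octet-stream"
headers = {
    "Content-Type": content_type,
    "Content-Disposition": "attachment; filename=%s" % fname,
}
```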
Your regular expression in the `urls.py` file seems to be wrong: the `(\d+)` group only matches digits, but you are passing a filename string such as `README.md` to the view, so the URL never resolves. Try using this instead:

```
url(r'^handles3downloads/([\w.-]+)/$', handles3downloads),
```

`[\w.-]+` also matches the dots and hyphens that typically appear in filenames.
Downloading files from amazon s3 using django
[ "", "python", "django", "amazon-web-services", "amazon-s3", "" ]
I have created a 2 dimension array like:

```
rows = 3
columns = 2
mylist = [[0 for x in range(columns)] for x in range(rows)]

for i in range(rows):
    for j in range(columns):
        mylist[i][j] = '%s,%s'%(i,j)

print mylist
```

Printing this list gives an output:

```
[
 ['0,0', '0,1'],
 ['1,0', '1,1'],
 ['2,0', '2,1']
]
```

where each list item is a string of the format 'row,column'.

Now given this list, I want to iterate through it in the order:

```
'0,0'
'1,0'
'2,0'
'0,1'
'1,1'
'2,1'
```

that is, iterate through the 1st column, then the 2nd column, and so on. How do I do it with a loop?

This question pertains to a pure Python list, while the question it was marked as a duplicate of pertains to NumPy arrays. They are clearly different.
Use `zip` and `itertools.chain`. Something like:

```
>>> from itertools import chain
>>> l = chain.from_iterable(zip(*mylist))
>>> l
<itertools.chain object at 0x104612610>
>>> list(l)
['0,0', '1,0', '2,0', '0,1', '1,1', '2,1']
```
same way you did the fill in, but reverse the indexes: ``` >>> for j in range(columns): ... for i in range(rows): ... print mylist[i][j], ... 0,0 1,0 2,0 0,1 1,1 2,1 >>> ```
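The reversed-index loop is equivalent to transposing with `zip(*mylist)` and reading the transposed rows off in order, which yields the column-major sequence directly — a small sketch:

```python
rows, columns = 3, 2
mylist = [['%s,%s' % (i, j) for j in range(columns)] for i in range(rows)]

# zip(*mylist) yields the columns of mylist as tuples; flatten them in order
column_major = [item for col in zip(*mylist) for item in col]
```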
Iterating over a 2 dimensional python list
[ "", "python", "loops", "multidimensional-array", "" ]
I have a python script that can receive either zero or three command line arguments. (Either it runs on default behavior or needs all three values specified.) What's the ideal syntax for something like: ``` if a and (not b or not c) or b and (not a or not c) or c and (not b or not a): ``` ?
If you mean a minimal form, go with this: ``` if (not a or not b or not c) and (a or b or c): ``` Which translates the title of your question. UPDATE: as correctly said by Volatility and Supr, you can apply De Morgan's law and obtain equivalent: ``` if (a or b or c) and not (a and b and c): ``` My advice is to use whichever form is more significant to you and to other programmers. The first means *"there is something false, but also something true"*, the second *"There is something true, but not everything"*. If I were to optimize or do this in hardware, I would choose the second, here just choose the most readable (also taking in consideration the conditions you will be testing and their names). I picked the first.
How about: ``` conditions = [a, b, c] if any(conditions) and not all(conditions): ... ``` Other variant: ``` if 1 <= sum(map(bool, conditions)) <= 2: ... ```
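Both answers agree on all eight input combinations, which is quick to confirm by brute force with `itertools.product`:

```python
from itertools import product

def some_but_not_all(a, b, c):
    conditions = [a, b, c]
    return any(conditions) and not all(conditions)

# map every (a, b, c) combination to the expression's result
truth_table = {combo: some_but_not_all(*combo)
               for combo in product([False, True], repeat=3)}
```

Only the all-False and all-True rows come out False, so six of the eight combinations satisfy the condition.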
Python syntax for "if a or b or c but not all of them"
[ "", "python", "if-statement", "" ]