``` print '%d:%02d' % divmod(10,20) ``` results in what I want: ``` 0:10 ``` However ``` print '%s %d:%02d' % ('hi', divmod(10,20)) ``` results in: ``` Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> print '%s %d:%02d' % ('hi', divmod(10,20)) TypeError: %d format: a number is required, not tuple ``` How do I fix the second print statement so that it works? I thought there was a simpler solution than ``` m = divmod(10,20) print m[0], m[1] ``` or using python 3 or format(). I feel I'm missing something obvious
You are *nesting* tuples; concatenate instead: ``` print '%s %d:%02d' % (('hi',) + divmod(10,20)) ``` Now you create a tuple of 3 elements and the string formatting works. Demo: ``` >>> print '%s %d:%02d' % (('hi',) + divmod(10,20)) hi 0:10 ``` and to illustrate the difference: ``` >>> ('hi', divmod(10,20)) ('hi', (0, 10)) >>> (('hi',) + divmod(10,20)) ('hi', 0, 10) ``` Alternatively, use `str.format()`: ``` >>> print '{0} {1[0]:d}:{1[1]:02d}'.format('hi', divmod(10, 20)) hi 0:10 ``` Here we interpolate the first argument (`{0}`), then the first element of the second argument (`{1[0]}`, formatting the value as an integer), then the second element of the second argument (`{1[1]}`, formatting the value as an integer with 2 digits and leading zeros).
``` print '%s %d:%02d' % ('hi',divmod(10,20)[0], divmod(10,20)[1]) ^ ^ ^ 1 2 3 ``` Parentheses with commas build tuples; concatenating two tuples with `+` also yields a tuple. Either way, you need a flat 3-tuple for the 3 format specifiers, as shown.
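For readers who do end up on Python 3 (which the asker wanted to avoid), an f-string or iterable unpacking inside the tuple display gives the same result; this is a sketch, not part of the original answers:

```python
m = divmod(10, 20)

# f-string with a format spec for the minutes field
print(f'hi {m[0]}:{m[1]:02d}')                 # hi 0:10

# or splice the divmod result into the argument tuple (Python 3.5+)
print('%s %d:%02d' % ('hi', *divmod(10, 20)))  # hi 0:10
```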
python tuple print issue
[ "python", "tuples", "string-formatting" ]
What I need is: ``` pro [-a xxx | [-b yyy -c zzz]] ``` I tried this, but it does not work. Could someone help me out? ``` group= parser.add_argument_group('Model 2') group_ex = group.add_mutually_exclusive_group() group_ex.add_argument("-a", type=str, action = "store", default = "", help="test") group_ex_2 = group_ex.add_argument_group("option 2") group_ex_2.add_argument("-b", type=str, action = "store", default = "", help="test") group_ex_2.add_argument("-c", type=str, action = "store", default = "", help="test") ``` Thanks!
`add_mutually_exclusive_group` doesn't make an entire group mutually exclusive. It makes options within the group mutually exclusive. What you're looking for is [subcommands](http://docs.python.org/dev/library/argparse.html#sub-commands). Instead of prog [ -a xxxx | [-b yyy -c zzz]], you'd have: ``` prog command 1 -a: ... command 2 -b: ... -c: ... ``` To invoke with the first set of arguments: ``` prog command_1 -a xxxx ``` To invoke with the second set of arguments: ``` prog command_2 -b yyyy -c zzzz ``` You can also set the sub command arguments as positional. ``` prog command_1 xxxx ``` Kind of like git or svn: ``` git commit -am git merge develop ``` ## Working Example ``` # create the top-level parser parser = argparse.ArgumentParser(prog='PROG') parser.add_argument('--foo', action='store_true', help='help for foo arg.') subparsers = parser.add_subparsers(help='help for subcommand', dest="subcommand") # create the parser for the "command_1" command parser_a = subparsers.add_parser('command_1', help='command_1 help') parser_a.add_argument('a', type=str, help='help for bar, positional') # create the parser for the "command_2" command parser_b = subparsers.add_parser('command_2', help='help for command_2') parser_b.add_argument('-b', type=str, help='help for b') parser_b.add_argument('-c', type=str, action='store', default='', help='test') ``` ## Test it ``` >>> parser.print_help() usage: PROG [-h] [--foo] {command_1,command_2} ... positional arguments: {command_1,command_2} help for subcommand command_1 command_1 help command_2 help for command_2 optional arguments: -h, --help show this help message and exit --foo help for foo arg. >>> >>> parser.parse_args(['command_1', 'working']) Namespace(subcommand='command_1', a='working', foo=False) >>> parser.parse_args(['command_1', 'wellness', '-b x']) usage: PROG [-h] [--foo] {command_1,command_2} ... PROG: error: unrecognized arguments: -b x ``` Good luck.
While [Jonathan's answer](https://stackoverflow.com/a/17909525/492620) is perfectly fine for complex options, there is a very simple solution which will work for the simple cases, e.g. 1 option excludes 2 other options like in ``` command [- a xxx | [ -b yyy | -c zzz ]] ``` or even as in the original question: ``` pro [-a xxx | [-b yyy -c zzz]] ``` Here is how I would do it: ``` parser = argparse.ArgumentParser() # group 1 parser.add_argument("-q", "--query", help="query") parser.add_argument("-f", "--fields", help="field names") # group 2 parser.add_argument("-a", "--aggregation", help="aggregation") ``` Here I am using options given to a command-line wrapper for querying a mongodb. The `collection` instance can either call the method `aggregate` or the method `find` with two optional arguments `query` and `fields`, hence you see why the first two arguments are compatible and the last one isn't. So now I run `parser.parse_args()` and check its content: ``` args = parser.parse_args() if args.aggregation and (args.query or args.fields): print "-a and -q|-f are mutually exclusive ..." sys.exit(2) ``` Of course, this little hack only works for simple cases and it would become a nightmare to check all the possible options if you have many mutually exclusive options and groups. In that case you should break your options into command groups like Jonathan suggested.
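The post-parse check above can be routed through `parser.error()`, which prints the usage line and exits with status 2 just like argparse's built-in validation. A minimal, self-contained sketch in Python 3 syntax — the option names are the asker's, the `parse_args` wrapper is made up for illustration:

```python
import argparse

def parse_args(argv):
    parser = argparse.ArgumentParser(prog='pro')
    parser.add_argument('-a', help='excludes -b/-c')
    parser.add_argument('-b')
    parser.add_argument('-c')
    args = parser.parse_args(argv)
    # hand-rolled exclusivity: -a cannot be combined with -b or -c
    if args.a and (args.b or args.c):
        parser.error('-a and -b/-c are mutually exclusive')
    return args

print(parse_args(['-a', 'xxx']))   # Namespace(a='xxx', b=None, c=None)
```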
Python argparse mutual exclusive group
[ "python", "argparse" ]
I am passing args like ``` python file.py arg1 arg2 arg3 ``` I want to know if there is a way to treat these args as objects, not strings, for the following code: ``` one = sys.argv[1] two = sys.argv[2] three = sys.argv[3] from one import two a = two.three() ``` since one, two, and three would be strings. How can they be parsed as objects? UPDATE: ``` pkg = import_module(two, package=one) ``` gives ``` __import__(name) ImportError: No module named <whatever_module_name> ```
You can use [importlib for Python 2](http://docs.python.org/2/library/importlib.html) or [importlib for Python 3](http://docs.python.org/3/library/importlib.html#importlib.import_module). ``` import importlib path = importlib.import_module('os.path') print(path.join('a','b')) ```
You can use the built-in `__import__` or the `imp` [module](http://docs.python.org/2/library/imp.html) to import objects programmatically: ``` # Import 'name' relative to 'path' module = imp.load_module(name, *imp.find_module(name, [path])) ``` Whenever I've wanted to programmatically import modules, this is how I do it. Note that `imp.find_module` returns a 3-tuple, so the call above can also be written out explicitly: ``` fil, path, desc = imp.find_module(name, [path]) module = imp.load_module(name, fil, path, desc) ```
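Once both the module name and the attribute name arrive as strings (as in the asker's `sys.argv` case), the `importlib` route from the other answer combines naturally with `getattr`. A small sketch — the helper name `load_attr` is made up for illustration:

```python
import importlib

def load_attr(module_name, attr_name):
    """Import module_name and return one of its attributes, both given as strings."""
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)

join = load_attr('os.path', 'join')
print(join('a', 'b'))   # 'a/b' on POSIX, 'a\\b' on Windows
```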
python: better workaround for treating argument in bash/cmd line as an object
[ "python", "python-2.7" ]
Just wondering if it is possible to use an optional argument in the same function as a variable number of arguments. I've looked around and I feel as if I just have the vocabulary wrong or something. Example: ``` def pprint(x, sub = False, *Headers): pass ``` Can I still call it using the multiple headers without always having to put `True` or `False` in for sub? I feel like it's a no because `Headers` wouldn't know where it begins. I'd like to explicitly state `sub = True`; otherwise it defaults to `False`.
In Python 3, use: ``` def pprint(x, *headers, sub=False): pass ``` putting the keyword arguments *after* the positionals. This syntax will not work in Python 2. Demo: ``` >>> def pprint(x, *headers, sub=False): ... print(x, headers, sub) ... >>> pprint('foo', 'bar', 'baz', sub=True) foo ('bar', 'baz') True >>> pprint('foo', 'bar', 'baz') foo ('bar', 'baz') False ``` You *must* specify a different value for `sub` using a keyword argument when calling the `pprint()` function defined here.
I want to say yes because lots of matplotlib (for example) methods have something similar to this... For example, `matplotlib.pyplot.xcorr(x, y, normed=True, detrend=<function detrend_none at 0x2523ed8>, usevlines=True, maxlags=10, hold=None, **kwargs)` When I'm using this I can specify any of the keyword arguments by saying `maxlags=20` for example. You **do** have to specify all the non-keyworded arguments (so `x` in your case) before the keyword arguments.
Feasibility of using both optional and multiple arguments
[ "python", "python-3.3" ]
I have a function that solves a quadratic equation: ``` class QuadEq(object): def __init__(self, a, b, c): self.a = a self.b = b self.c = c def solve_quad_eq(self): D = self.b**2-4*self.a*self.c if D >= 0: x1 = (-self.b-math.sqrt(D))/2*self.a x2 = (-self.b+math.sqrt(D))/2*self.a answer = [x1, x2] return answer else: return 0 ``` And then in the same class I have a function: ``` def show_result(self): print answer ``` that should print the answer of the quadratic equation when needed. How can I give this function the ***answer*** *list* from the function above it to print?
The short answer has already been posted. Use an [instance variable](http://docs.python.org/2/tutorial/classes.html#instance-objects) (`self.answer`): ``` class QuadEq(object): def __init__(self, a, b, c): self.a = a self.b = b self.c = c def solve_quad_eq(self): self.answer = ("this", "that") return self.answer def show_result(self): print self.answer eq = QuadEq(1,2,3) eq.solve_quad_eq() eq.show_result() ``` Loosely speaking, an instance variable (*data member*) is just a variable whose lifespan is the same as that of its "owner" (in the example, the object referenced by `eq`). --- And now, for the long -- and slightly more pedantic -- answer: when designing a *class*, you have to think about its [responsibilities](http://en.wikipedia.org/wiki/Single_responsibility_principle) and its [state](https://en.wikipedia.org/wiki/Object_%28computer_science%29#Properties_of_an_object). Simply put, what is the purpose of your class? Is it just a *container* for various more-or-less related functions? In that case, the above answer is perfectly acceptable. But usually, you have to be a little bit more rigorous -- at the very least to improve the understandability/maintainability of your code. Here you have a `QuadEq` class. By its name, I understand an instance of this class models *one* equation. Since the roots of such an equation are *properties* of that equation, I think it is acceptable for `solve_quad_eq` to be a method of that class. With a slight change: I would use the more generic name `solve`. Why? Because that makes provision for future classes for different equations providing the same *semantics*. In addition, both returning the result *and* storing it in an instance variable might be confusing. You should make a choice here. Not to mention the fact that your function sometimes returns the roots, and other times the number of roots (`0`). Now, *printing*. I am more skeptical about this one.
Displaying itself is not a "native" property of an equation. And if you go that way, you'll soon have to deal in your equation class(es) with problems totally unrelated to "equations": how to write to a file? Binary or text? Which encoding should I use? How to deal with I/O errors? And so on... So, if I were you, I would push toward [separation of concerns](http://en.wikipedia.org/wiki/Separation_of_concerns), providing just an "accessor" method to return the roots -- and display them from the outside. Since this seems to be important, I keep here the separation between that accessor and the `solve` method (potentially computationally intensive for some kinds of equations), using the instance variable `self.answer` merely as a cache ([memoization](http://en.wikipedia.org/wiki/Memoization)). Here is a full example: ``` class Eq(object): def __init__(self): self.answer = None # this should be called "roots", no? def roots(self): if self.answer is None: self.solve() return self.answer class QuadEq(Eq): def __init__(self, a, b, c): Eq.__init__(self) self.a = a self.b = b self.c = c def solve(self): self.answer = ("this", "that") return 2 eq = QuadEq(1,2,3) print(eq.roots()) ``` Please note how easy it is now to add another kind of equation to solve in the program ... ``` class OtherEq(Eq): def __init__(self, a, b, c): Eq.__init__(self) self.a = a self.b = b self.c = c def solve(self): self.answer = ( "it", ) return 1 ``` ... and more importantly, the code to use that new kind of equation is almost the same as the previous one: ``` eq = OtherEq(1,2,3) print(eq.roots()) ```
Make the answer a member of the class, and reference it by `self.answer`. (Note the added `import math`, and the parentheses around `2*self.a` -- the formula in the question divided by 2 and then *multiplied* by `a`.) ``` import math class QuadEq(object): def __init__(self, a, b, c): self.a = a self.b = b self.c = c self.answer = [] def solve_quad_eq(self): D = self.b**2-4*self.a*self.c if D >= 0: x1 = (-self.b-math.sqrt(D))/(2*self.a) x2 = (-self.b+math.sqrt(D))/(2*self.a) self.answer = [x1, x2] return self.answer else: return 0 def show_result(self): print self.answer ```
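As an aside: the formula in the question divides by `2` and then multiplies by `self.a`, so it only gives correct roots when `a == 1`; the quadratic formula needs `/(2*self.a)`. A self-contained sketch in Python 3 syntax with that fix and the instance-variable pattern applied:

```python
import math

class QuadEq(object):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        self.answer = []

    def solve_quad_eq(self):
        D = self.b ** 2 - 4 * self.a * self.c
        if D >= 0:
            # parenthesize the denominator: (-b +/- sqrt(D)) / (2a)
            x1 = (-self.b - math.sqrt(D)) / (2 * self.a)
            x2 = (-self.b + math.sqrt(D)) / (2 * self.a)
            self.answer = [x1, x2]
            return self.answer
        return 0

    def show_result(self):
        print(self.answer)

eq = QuadEq(1, -3, 2)   # x**2 - 3x + 2 = 0, roots 1 and 2
eq.solve_quad_eq()
eq.show_result()        # [1.0, 2.0]
```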
How to give a function a value from another function in Python
[ "python" ]
I made the original battleship and now I'm looking to upgrade my AI from random guessing to guessing statistically probably locations. I'm having trouble finding algorithms online, so my question is what kinds of algorithms already exist for this application? And how would I implement one? Ships: 5, 4, 3, 3, 2 Field: 10X10 Board: ``` OCEAN = "O" FIRE = "X" HIT = "*" SIZE = 10 SEA = [] # Blank Board for x in range(SIZE): SEA.append([OCEAN] * SIZE) ``` If you'd like to see the rest of the code, I posted it here: (<https://github.com/Dbz/Battleship/blob/master/BattleShip.py>); I didn't want to clutter the question with a lot of irrelevant code.
The ultimate naive solution would be to go through every possible placement of ships (legal given what information is known) and count the number of times each square is occupied. Obviously, on a relatively empty board this will not work as there are too many permutations, but a good start might be: for each square on the board, go through all ships and count in how many different ways each fits on that square, i.e. for each square of the ship's length check if it fits horizontally and vertically. An improvement might be to also check, for each possible ship placement, whether the rest of the ships can be placed legally while covering all known 'hits' (places known to contain a ship). To improve performance, if only one ship can be placed in a given spot, you no longer need to test it on other spots. Also, when there are many 'hits', it might be quicker to first cover all known 'hits' and for each possible cover go through the rest. Edit: you might want to look into DFS. Edit: Elaboration on OP's (@Dbz) suggestion in the comments: hold a set of dismissed placements ('dismissed') of ships (each can be represented as a string, say `"4V5x3"` for the placement of the length-4 ship in 5x3, 5x4, 5x5, 5x6). After a guess, add all the placements the guess dismisses; also, for each square hold a set of placements that intersect with it ('placements[x,y]'). Then the probability would be: `34-|intersection(placements[x,y], dismissed)|/(3400-|dismissed|)` To add to the dismissed list: 1. if the guess at (X,Y) is a miss, add `placements[x,y]` 2. if the guess at (X,Y) is a hit: * add neighboring placements (assuming that ships cannot be placed adjacently), i.e. add: + `<(2,3a,3b,4,5)>H<X+1>x<Y>`, `<(2,3a,3b,4,5)>V<X>x<Y+1>` + `<(2,3a,3b,4,5)>H<X-(2,3,3,4,5)>x<Y>`, `<(2,3a,3b,4,5)>V<X>x<Y-(2,3,3,4,5)>` + `2H<X+-1>x<Y+(-2 to 1)>`, `3aH<X+-1>x<Y+(-3 to 1)>` ... + `2V<X+(-2 to 1)>x<Y+-1>`, `3aV<X+(-3 to 1)>x<Y+-1>` ... * if `|intersection(placements[x,y], dismissed)|==33`, i.e.
only one placement is possible, add the ship (see later) 3. check if any of the previous hits has only one possible placement left; if so, add the ship 4. check to see if any of the ships has only one possible placement; if so, add the ship adding a ship: * add all other placements of that ship to dismissed * for each (x,y) of the ship's placement, add `placements[x,y]` without the actual placement * for each (x,y) of the ship's placement, mark it as a hit guess (if not already known) and run stage 2 * for each (x,y) neighboring the ship's placement, mark it as a miss guess (if not already known) and run stage 1 * run stages 3 and 4. I might have overcomplicated this, and there might be some redundant actions, but you get the point.
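The first step of this answer -- counting, for every square, how many legal placements cover it on an empty board -- can be sketched in a few lines (the names are made up; adjacency rules and known hits/misses are left out). Note how the center square comes out to 34, the numerator constant used in the probability formula above:

```python
def placement_heatmap(size, ships):
    """Count, per square, how many legal ship placements cover it (empty board)."""
    heat = [[0] * size for _ in range(size)]
    for length in ships:
        for fixed in range(size):                  # fixed row (horizontal) / column (vertical)
            for start in range(size - length + 1):
                for k in range(length):
                    heat[fixed][start + k] += 1    # horizontal placement covers this cell
                    heat[start + k][fixed] += 1    # vertical placement covers this cell
    return heat

heat = placement_heatmap(10, [5, 4, 3, 3, 2])
print(heat[5][5], heat[0][0])   # 34 10 -- center squares are far more likely than corners
```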
Nice question, and I like your idea for statistical approach. I think I would have tried a [machine learning](http://en.wikipedia.org/wiki/Supervised_learning) approach for this problem as follows: First model your problem as a [classification problem](http://en.wikipedia.org/wiki/Statistical_classification). The classification problem is: Given a square `(x,y)` - you want to tell the likelihood of having a ship in this square. Let this likelihood be `p`. Next, you need to develop some 'features'. You can take the surrounding of `(x,y)` [as you might have partial knowledge on it] as your features. For example, the features of the middle of the following mini-board (+ indicates the square you want to determine if there is a ship or not in): ``` OO* O+* ?O? ``` can be something like: ``` f1 = (0,0) = false f2 = (0,1) = false f3 = (0,2) = true f4 = (1,0) = false **note skipping (1,1) f5 = (1,2) = true f6 = (2,0) = unknown f7 = (2,1) = false f8 = (2,2) = unknown ``` I'd implement features relative to the point of origin (in this case - `(1,1)`) and not as absolute location on board (so the square up to `(3,3)` will also be f2). Now, create a training set. The training set is a 'labeled' set of features - based on some real boards. You can create it manually (create a lot of boards), automatically by a random generator of placements, or by some other data you can gather. Feed the training set to a learning algorithm. The algorithm should be able to handle 'unknowns' and be able to give probability of "true" and not only a boolean answer. I think a variation of [Naive Bayes](https://en.wikipedia.org/wiki/Naive_Bayes_classifier) can fit well here. After you have got a classifier - exploit it with your AI. When it's your turn, choose to fire upon a square which has the maximal value of `p`. At first, the shots will be kinda random - but with more shots you fire, you will have more information on the board, and the AI will exploit it for better predictions. 
--- Note that I gave features based on a square of size 1. You can of course choose any `k` and find features on this bigger square - it will give you more features, but each might be less informative. There is no rule of thumb which will be better - and it should be tested.
How to generate statistically probably locations for ships in battleship
[ "python", "algorithm", "math", "statistics" ]
I am implementing an algorithm for Texture Synthesis as outlined [here](http://graphics.cs.cmu.edu/people/efros/research/NPS/alg.html). For this I need to calculate the Sum of Squared Differences, a metric to estimate the error between the `template` and different positions across the `image`. I have a slow working implementation in place as follows: ``` total_weight = valid_mask.sum() for i in xrange(input_image.shape[0]): for j in xrange(input_image.shape[1]): sample = image[i:i + window, j:j + window] dist = (template - sample) ** 2 ssd[i, j] = (dist * valid_mask).sum() / total_weight ``` Here, `total_weight` is just for normalisation. Some pixels have unknown intensities, so I use `valid_mask` for masking them. This nested loop lies inside of 2 loops, so that's 4 nested loops which is obviously a performance killer! Is there a way I can make it faster in NumPy or Python, a replacement for this nested loop? Is vectorization possible? I'll need to work on the `(3, 3)` part of the `image` with the `(3, 3)` `template`. I am subsequently going to implement this in Cython, so the faster I can get it to work using just NumPy, the better. You can find the complete code [here](https://github.com/chintak/scikit-image/blob/texture_syn/skimage/filter/texture/tex.py). Lines 62 - 67 are quoted here. Thanks, Chintak
This is basically an improvement over Warren Weckesser's answer. The way to go is clearly with a multidimensional windowed view of the original array, but you want to keep that view from triggering a copy. If you expand your `sum((a-b)**2)`, you can turn it into `sum(a**2) + sum(b**2) - 2*sum(a*b)`, and this multiply-then-reduce-with-a-sum operations you can perform with linear algebra operators, with a substantial improvement in both performance and memory use: ``` def sumsqdiff3(input_image, template): window_size = template.shape y = as_strided(input_image, shape=(input_image.shape[0] - window_size[0] + 1, input_image.shape[1] - window_size[1] + 1,) + window_size, strides=input_image.strides * 2) ssd = np.einsum('ijkl,kl->ij', y, template) ssd *= - 2 ssd += np.einsum('ijkl, ijkl->ij', y, y) ssd += np.einsum('ij, ij', template, template) return ssd In [288]: img = np.random.rand(500, 500) In [289]: template = np.random.rand(3, 3) In [290]: %timeit a = sumsqdiff2(img, template) # Warren's function 10 loops, best of 3: 59.4 ms per loop In [291]: %timeit b = sumsqdiff3(img, template) 100 loops, best of 3: 18.2 ms per loop In [292]: np.allclose(a, b) Out[292]: True ``` I have left the `valid_mask` parameter out on purpose, because I don't fully understand how you would use it. In principle, just zeroing the corresponding values in `template` and/or `input_image` should do the same trick.
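The algebraic identity this answer relies on, `sum((a-b)**2) == sum(a**2) + sum(b**2) - 2*sum(a*b)`, is easy to sanity-check numerically (the cross term done with `einsum`, as in `sumsqdiff3`):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((3, 3))
b = rng.random((3, 3))

direct = ((a - b) ** 2).sum()
# expanded form; 'ij,ij->' contracts both axes, i.e. sum of elementwise products
expanded = (a * a).sum() + (b * b).sum() - 2 * np.einsum('ij,ij->', a, b)
print(np.allclose(direct, expanded))   # True
```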
You can do some amazing things with the `as_strided` function combined with numpy's broadcasting. Here are two versions of your function: ``` import numpy as np from numpy.lib.stride_tricks import as_strided def sumsqdiff(input_image, template, valid_mask=None): if valid_mask is None: valid_mask = np.ones_like(template) total_weight = valid_mask.sum() window_size = template.shape ssd = np.empty((input_image.shape[0] - window_size[0] + 1, input_image.shape[1] - window_size[1] + 1)) for i in xrange(ssd.shape[0]): for j in xrange(ssd.shape[1]): sample = input_image[i:i + window_size[0], j:j + window_size[1]] dist = (template - sample) ** 2 ssd[i, j] = (dist * valid_mask).sum() return ssd def sumsqdiff2(input_image, template, valid_mask=None): if valid_mask is None: valid_mask = np.ones_like(template) total_weight = valid_mask.sum() window_size = template.shape # Create a 4-D array y, such that y[i,j,:,:] is the 2-D window # input_image[i:i+window_size[0], j:j+window_size[1]] y = as_strided(input_image, shape=(input_image.shape[0] - window_size[0] + 1, input_image.shape[1] - window_size[1] + 1,) + window_size, strides=input_image.strides * 2) # Compute the sum of squared differences using broadcasting. ssd = ((y - template) ** 2 * valid_mask).sum(axis=-1).sum(axis=-1) return ssd ``` Here's an ipython session to compare them. 
The template that I'll use for the demo: ``` In [72]: template Out[72]: array([[-1, 1, -1], [ 1, 2, 1], [-1, 1, -1]]) ``` A small input so we can inspect the result: ``` In [73]: x Out[73]: array([[ 0., 1., 2., 3., 4., 5., 6.], [ 7., 8., 9., 10., 11., 12., 13.], [ 14., 15., 16., 17., 18., 19., 20.], [ 21., 22., 23., 24., 25., 26., 27.], [ 28., 29., 30., 31., 32., 33., 34.]]) ``` Apply the two functions to `x` and check that we get the same result: ``` In [74]: sumsqdiff(x, template) Out[74]: array([[ 856., 1005., 1172., 1357., 1560.], [ 2277., 2552., 2845., 3156., 3485.], [ 4580., 4981., 5400., 5837., 6292.]]) In [75]: sumsqdiff2(x, template) Out[75]: array([[ 856., 1005., 1172., 1357., 1560.], [ 2277., 2552., 2845., 3156., 3485.], [ 4580., 4981., 5400., 5837., 6292.]]) ``` Now make a much bigger input "image": ``` In [76]: z = np.random.randn(500, 500) ``` and check the performance: ``` In [77]: %timeit sumsqdiff(z, template) 1 loops, best of 3: 3.55 s per loop In [78]: %timeit sumsqdiff2(z, template) 10 loops, best of 3: 33 ms per loop ``` Not too shabby. :) Two drawbacks: * The calculation in `sumsqdiff2` will generate a temporary array that, for a 3x3 template, will be 9 times the size of `input_image`. (In general it will be `template.size` times the size of `input_image`.) * These "stride tricks" will not help you when you Cythonize the code. When converting to Cython, you often end up putting back in the loops you got rid of when vectorizing with numpy.
Faster way to calculate sum of squared difference between an image (M, N) and a template (3, 3) for template matching?
[ "python", "image-processing", "numpy", "scipy", "vectorization" ]
So, let's say I have 3 different calls called `something`, `something1` and `something2`, and right now I'm calling them like ``` try: something something1 something2 except KeyError as e: print e ``` Note that in the above code, if something fails, something1 and something2 will not get executed, and so on. The wanted outcome is ``` try: something except KeyError as e: print e try: something1 except KeyError as e: print e try: something2 except KeyError as e: print e ``` How can I achieve the above code without so many try except blocks? **EDIT:** So, the answer I chose as correct worked. But some of the others worked as well. I chose it because it was the simplest and I modified it a little. Here is my solution based on the answer. ``` runs = [something, something1, something2] for func in runs: try: func() except KeyError as e: print e ```
You could try this, assuming you wrap things in functions: ``` for func in (something, something1, something2): try: func() except KeyError as e: print e ```
Here's a little context manager I've used for similar situations: ``` from contextlib import contextmanager @contextmanager def ignoring(*exceptions): try: yield except exceptions or Exception as e: print e with ignoring(KeyError): something() # you can also put it on the same line if it's just one statement with ignoring(KeyError): something1() with ignoring(KeyError): something2() ``` A Python 3 version could let you parameterize what to do when an exception occurs (the keyword-only arguments are needed here): ``` from contextlib import contextmanager @contextmanager def ignoring(*exceptions, action=print): try: yield except exceptions or Exception as e: callable(action) and action(e) ``` Then you could pass in some function other than `print` (such as a logger, assumed to be a function named `log`) or if you don't want anything, pass in `None` (since it checks to see if the action is callable): ``` with ignoring(KeyError, action=log): something() ```
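Worth noting that since Python 3.4 the standard library ships [`contextlib.suppress`](https://docs.python.org/3/library/contextlib.html#contextlib.suppress), which does the same job when you just want to swallow the exception (unlike the manager above, it does not print it):

```python
from contextlib import suppress

d = {}
with suppress(KeyError):
    value = d['missing']      # raises KeyError; suppress() swallows it

print('execution continues')  # execution continues
```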
a cleaner way to approach try except in python
[ "python", "exception", "try-catch" ]
I'm trying to make a game, and I want to save the position of the player before he moves (oldPosition), so that I can use it later. The problem is that by the time World.UpdateContents() runs, oldPosition is updated along with self.position. How can I keep oldPosition from changing when self.position does? ``` def Move(self, direction): oldPosition = self.position print oldPosition if direction == "Up": if self.position[0] - 1 in World.worldMap[0]: self.position[0] -= 1 if direction == "Down": if self.position[0] + 1 in World.worldMap[0]: self.position[0] += 1 if direction == "Left": if self.position[1] - 1 in World.worldMap[1]: self.position[1] -= 1 if direction == "Right": if self.position[1] + 1 in World.worldMap[1]: self.position[1] += 1 print oldPosition World.UpdateContents(world, oldPosition, self.position, self.icon) ```
The attribute `position` is a list, when you assign it to `oldPosition` you're creating a reference to that list, but any modification you make on `position` will also be reflected on `oldPosition` - they're pointing to the same object. To solve this problem, make a copy of `position` at the beginning, like this: ``` oldPosition = self.position[:] ``` Now `oldPosition` is a new, different list and it won't be affected by the changes made to `position`.
Oscar Lopez is right, and his solution will work. The problem you're facing is due to the fact that lists in python are mutable. What does this mean? Well, say we have some non-mutable variable, such as an integer. We can do, say, ``` x = 3 y = x x = x + 1 print "x =", x, "y =", y ``` which, as expected, returns: ``` x = 4 y = 3 ``` This happens because the line "y = x" creates a *copy* of the integer x inside of the variable y. We can do whatever we want now to x and y doesn't care, because it is not tied to the variable x after its declaration. If we do similar operations to lists, we get different results. ``` x = [1,2,3] y = x x.append(4) print "x =", x, "y =", y ``` returning ``` x = [1,2,3,4] y = [1,2,3,4] ``` So what just happened? Because lists are mutable, when we let y equal x here, instead of creating a copy of x in y, python makes y point to the same data that x points to. So, when we change x, y appears to have magically changed along with it, which makes sense, as the variable y is referring to the same thing that x does. The connection between x and y with mutable types like lists and dictionaries lasts beyond the line "y = x". To counteract this, y = x[:] works, which is essentially the same as doing ``` y = [] # y is a brand new list! for elem in x: y.append(elem) ```
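The aliasing both answers describe is easy to see with `is` and a slice copy. One caveat worth adding: a slice only copies the outer list, so for nested lists (like a 2-D position grid) you would need `copy.deepcopy`:

```python
import copy

position = [3, 4]
alias = position           # second name for the SAME list object
snapshot = position[:]     # shallow copy: a brand new list

position[0] += 1
print(alias, snapshot)                           # [4, 4] [3, 4]
print(alias is position, snapshot is position)   # True False

# nested lists need a deep copy; a slice would still share the inner lists
grid = [[0, 0], [0, 0]]
deep = copy.deepcopy(grid)
grid[0][0] = 9
print(deep[0][0])          # 0
```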
Trying to store a class variable for later use
[ "python", "python-2.7" ]
I have a table of projects and a table of comments. A project can have many comments. I want to get a list of all projects where the comment postedOn is > 30 days OR projects with no comment. What is the best way to accomplish this? I’ve had many unsuccessful attempts; this is my latest go at it. ``` SELECT p.id, p.officialStatus, c.posted FROM projects p LEFT JOIN ( SELECT max(posted) as posted, projectid FROM comments WHERE DATEDIFF(day, posted, GETDATE()) > 30 OR comment IS NULL group by projectid ) c ON p.id = c.projectid WHERE (p.officialStatus NOT IN ('Blue', 'Canceled')) ``` Please use these table/column names in your answer: * projects: id, officialStatus * comments: id, projectID, postedOn
``` SELECT projects.id FROM projects LEFT JOIN (SELECT comments.projectID FROM comments GROUP BY comments.projectID HAVING DATEDIFF(Now(), MAX(comments.postedOn)) < 30) AS C ON projects.id = C.projectID WHERE C.projectID IS NULL; ``` <http://sqlfiddle.com/#!2/ec919/14>
``` SELECT PROJ.id, PROJ.officialStatus FROM Projects PROJ LEFT JOIN ( SELECT projectID, MAX(postedOn) AS max_posted FROM Comments GROUP BY projectID ) COMMENTS ON PROJ.id = COMMENTS.projectID WHERE PROJ.officialStatus NOT IN ('Blue', 'Canceled') AND (COMMENTS.max_posted IS NULL OR COMMENTS.max_posted < DATEADD(day, -30, GETDATE())) ``` I think your main problem was filtering inside the joined subquery, which defeated the outer join... In this tweak, projects with no comments will have a NULL max\_posted date, and the `IS NULL OR` test is parenthesized so the status filter always applies. Note the comparison direction: we want the *latest* comment to be older than 30 days, hence `<` against `DATEADD(day, -30, GETDATE())`.
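The shape shared by both answers -- aggregate the child table, LEFT JOIN it, then accept either NULL (no comments) or a stale `max_posted` -- can be exercised end to end with SQLite from Python. Date functions differ by engine (SQL Server 2000 uses `GETDATE()`/`DATEADD`; SQLite uses `date('now', ...)`), so this is only an illustration with made-up data:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE projects (id INTEGER, officialStatus TEXT);
    CREATE TABLE comments (id INTEGER, projectID INTEGER, postedOn TEXT);
    INSERT INTO projects VALUES (1, 'Green'), (2, 'Green'), (3, 'Green');
    -- project 1 has a recent comment, project 2 an old one, project 3 none
    INSERT INTO comments VALUES (1, 1, date('now', '-5 days'));
    INSERT INTO comments VALUES (2, 2, date('now', '-60 days'));
""")
rows = cur.execute("""
    SELECT p.id
    FROM projects p
    LEFT JOIN (SELECT projectID, MAX(postedOn) AS max_posted
               FROM comments GROUP BY projectID) c
           ON p.id = c.projectID
    WHERE c.max_posted IS NULL
       OR c.max_posted <= date('now', '-30 days')
    ORDER BY p.id
""").fetchall()
print([r[0] for r in rows])   # [2, 3]
```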
SQL left join: selecting the last records
[ "sql", "sql-server-2000" ]
Is there a way to programmatically clean output on command prompt? I want to be able to print each letter of a word than erase them one at a time in the same line.
Something like this? ``` from sys import stdout from time import sleep def show(word): for char in word: stdout.write(char) stdout.flush() sleep(1) for char in word: stdout.write('\b \b') stdout.flush() sleep(1) show('hello') ```
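A related trick, if the whole line should be replaced at once rather than erased character by character, is the carriage return `'\r'`: go back to column 0, write the new text, and pad with spaces to cover the remains of the old one. A sketch (the helper name is made up), writing into a `StringIO` so the byte stream is inspectable:

```python
import io

def rewrite_line(out, old, new):
    """Return to column 0 and overwrite `old` with `new`, padding with spaces."""
    pad = max(len(old) - len(new), 0)
    out.write('\r' + new + ' ' * pad)
    out.flush()

buf = io.StringIO()           # use sys.stdout for a real terminal
buf.write('hello')
rewrite_line(buf, 'hello', 'hi')
print(repr(buf.getvalue()))   # 'hello\rhi   '
```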
``` os.system('clear') ``` Works on linux. You'll have to add: ``` import os ``` And for windows: ``` os.system('CLS') ```
Is there a way to clean output on command prompt programmatically?
[ "python" ]
What is the difference between the two SQL statements below? 1. ``` SELECT pid, config, constants FROM sys_template WHERE config LIKE '%code = LIST%' ``` 2. ``` SELECT pid, config, constants FROM sys_template WHERE config LIKE 'code = LIST' ```
1. Your first query returns every row whose config column *contains* **code = LIST** anywhere in the value 2. Your second query returns only rows whose config column is *exactly* **code = LIST** LIKE supports wildcards; it uses the % (any sequence of characters) and \_ (any single character) characters as wildcards. To know more, see [LIKE](http://dev.mysql.com/doc/refman/5.0/en/pattern-matching.html)
`%` is a *wildcard character* that matches any string. The latter matches only exactly `code = LIST` (basically working as `=` here), whereas the former matches `foocode = LISTbar` (for instance) as well. To match a single arbitrary character, you can use `_` as a wildcard.
sql: what is the difference between LIKE %...% and LIKE?
[ "", "mysql", "sql", "" ]
How do I start a process (another Python script, for example) from a Python script so the "child" process is completely detached from the "parent", so the parent can a) continue on its merry way without waiting for the child to finish and b) can be terminated without terminating the child process?

**Parent:**

```
import os

print "Parent started"
os.system("./child.py")
print "Parent finished"
```

**Child:**

```
import time

print "Child started"
time.sleep(10)
print "Child finished"
```

Running `parent.py` prints:

```
Parent started
Child started
Child finished
Parent finished
```

What I want it to print:

```
Parent started
Child started
Parent finished
(seconds later)
Child finished
```
Answering my own question: I ended up simply using `os.system` with `&` at the end of the command as [suggested by @kevinsa](https://stackoverflow.com/questions/17937249/fire-and-forget-a-process-from-a-python-script#comment26211780_17937249). This allows the parent process to be terminated without the child being terminated. Here's some code:

**child.py**

```
#!/usr/bin/python
import time

print "Child started"
time.sleep(10)
print "Child finished"
```

**parent.py**, using subprocess.Popen:

```
#!/usr/bin/python
import subprocess
import time

print "Parent started"
subprocess.Popen("./child.py")
print "(child started, sleeping)"
time.sleep(5)
print "Parent finished"
```

**Output:**

```
$ ./parent.py
Parent started
(child started, sleeping)
Child started
^CTraceback (most recent call last):        Traceback (most recent call last):
  File "./child.py", line 5, in <module>      File "./parent.py", line 13, in <module>
    time.sleep(10)                              time.sleep(5)
KeyboardInterrupt                           KeyboardInterrupt
```

* note how the child never finishes if the parent is interrupted with Ctrl-C

**parent.py**, using os.system and &

```
#!/usr/bin/python
import os
import time

print "Parent started"
os.system("./child.py &")
print "(child started, sleeping)"
time.sleep(5)
print "Parent finished"
```

**Output:**

```
$ ./parent.py
Parent started
(child started, sleeping)
Child started
^CTraceback (most recent call last):
  File "./parent.py", line 12, in <module>
    time.sleep(5)
KeyboardInterrupt
$ Child finished
```

Note how the child lives beyond the Ctrl-C.
Since you mentioned `os.system`, I think it's worth mentioning that you should have used `os.spawn*` with mode `P_NOWAIT` to achieve the "forget" part. But the `subprocess` module provides replacements for `os.system`, `os.spawn*`, etc., so you should use that instead, like so:

```
import subprocess
p = subprocess.Popen("./child.py")
print "pid = ", p.pid
```

See [Replacing os.spawn with subprocess.Popen](https://docs.python.org/2/library/subprocess.html#replacing-the-os-spawn-family)

As I explained in the comments, both processes `parent.py` and `child.py` are still in the same process group, and therefore the terminal will forward signals (like `Ctrl-C`) to all processes in the foreground process group, so both will get killed when you `Ctrl-C`. If you don't want that, you can force `child.py` into a new process group with the following:

```
#!/usr/bin/env python
import subprocess
import time
import os

p = subprocess.Popen("./child.py", preexec_fn=os.setsid)
print "pid = ", p.pid
time.sleep(30)  # Ctrl-C at this point will not kill child.py
print "parent exit"
```
"Fire and forget" a process from a Python script
[ "", "python", "linux", "subprocess", "" ]
What I mean is: how is the syntax defined, i.e. how can I make my own constructs like these? I realise that in a lot of languages, things like this will be built into the compiler/spec, and so they're dealt with by the compiler (at least that's how I understand it to work). But with Python, everything I've come across so far has been accessible to the programmer, and so you more or less have the freedom to do whatever you want. How would I go about writing my own version of `for` or `while`? Is it even possible? I don't have any actual application for this, so the answer to any WHY?! questions is just "because why not?" or "curiosity".
Well, you have a couple of options for creating your own syntax:

1. Write a higher-order function, like `map` or `reduce`.
2. Modify python at the C level. This is, as you might expect, *relatively* easy as compared with fiddling with many other languages. See this article for an example: <http://eli.thegreenplace.net/2010/06/30/python-internals-adding-a-new-statement-to-python/>
3. Fake it using the debug facilities, or the encodings facility. See this code: <http://entrian.com/goto/download.html> and <http://timhatch.com/projects/pybraces/>
4. Use a preprocessor. Here's one project that tries to make this easy: <http://www.fiber-space.de/langscape/doc/index.html>
5. Use of the python facilities built in to achieve a similar effect (decorators, metaclasses, and the like).

Obviously, none of this is quite what you're looking for, but python, unlike smalltalk or lisp, isn't (necessarily) programmed in itself, nor does it guarantee to expose its own underlying execution and parsing mechanisms at runtime.
No, you can't, not from within Python. You can't add new syntax to the language. (You'd have to modify the source code of Python itself to make your own custom version of Python.) Note that the [iterator protocol](http://docs.python.org/2/library/stdtypes.html#iterator-types) allows you to define objects that can be used with `for` in a custom way, which covers a lot of the possible use cases of writing your own iteration syntax.
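To illustrate the iterator-protocol point above, here is a minimal sketch (the class name `CountDown` is made up for illustration): any object with a suitable `__iter__` can be driven by `for`, which is as close as Python gets to customizing iteration without changing the language itself.

```python
class CountDown:
    """A custom iterable; `for` calls __iter__ and then next() behind the scenes."""

    def __init__(self, start):
        self.start = start

    def __iter__(self):
        # Using a generator here gives us __next__ for free
        n = self.start
        while n > 0:
            yield n
            n -= 1

result = [x for x in CountDown(3)]
print(result)  # [3, 2, 1]
```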
How do the for / while / print *things* work in python?
[ "", "python", "loops", "for-loop", "syntax", "while-loop", "" ]
I have a very large .txt file with hundreds of thousands of email addresses scattered throughout. They all take the format:

```
...<name@domain.com>...
```

What is the best way to have Python cycle through the entire .txt file looking for all instances of a certain @domain string, and then grab the entirety of the address within the <...>'s, and add it to a list? The trouble I have is with the variable length of different addresses.
This [code](https://developers.google.com/edu/python/regular-expressions) extracts the email addresses in a string. Use it while reading line by line:

```
>>> import re
>>> line = "should we use regex more often? let me know at jdsk@bob.com.lol"
>>> match = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', line)
>>> match.group(0)
'jdsk@bob.com.lol'
```

If you have several email addresses use `findall`:

```
>>> line = "should we use regex more often? let me know at jdsk@bob.com.lol or popop@coco.com"
>>> match = re.findall(r'[\w.+-]+@[\w-]+\.[\w.-]+', line)
>>> match
['jdsk@bob.com.lol', 'popop@coco.com']
```

---

The regex above probably finds the most common non-fake email address. If you want to be completely aligned with [RFC 5322](http://www.ietf.org/rfc/rfc5322.txt) you should check which email addresses follow the specification. Check [this](https://stackoverflow.com/questions/201323/using-a-regular-expression-to-validate-an-email-address) out to avoid any bugs in finding email addresses correctly.

---

**Edit:** as suggested in a comment by [@kostek](https://stackoverflow.com/users/1535436/kostek): In the string `Contact us at support@example.com.` my regex returns support@example.com. (with a dot at the end). To avoid this, use `[\w\.,]+@[\w\.,]+\.\w+`

**Edit II:** another wonderful improvement was mentioned in the comments: `[\w\.-]+@[\w\.-]+\.\w+`, which will capture example@do-main.com as well.

**Edit III:** Added further improvements as discussed in the comments: "In addition to allowing + in the beginning of the address, this also ensures that there is at least one period in the domain. It allows multiple segments of domain like abc.co.uk as well, and does NOT match bad@ss :). Finally, you don't actually need to escape periods within a character class, so it doesn't do that."
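Since the question's addresses are wrapped in `<...>` and restricted to one domain, a capturing group can be anchored on the brackets. A small sketch — the sample text and the `example.com` domain are placeholders, not from the original file:

```python
import re

text = "To: <alice@example.com>, cc <bob@other.org>; later <carol@example.com> wrote"

# Capture only the address inside <...> for the domain of interest
addresses = re.findall(r'<([\w.+-]+@example\.com)>', text)
print(addresses)  # ['alice@example.com', 'carol@example.com']
```

The variable-length problem the question mentions goes away because `[\w.+-]+` matches however many characters sit between `<` and `@`.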
---

**Update 2023**

Seems [stackabuse](https://stackabuse.com/python-validate-email-address-with-regular-expressions-regex/) has compiled a post based on the popular SO [answer](https://stackoverflow.com/a/201378/1031417) mentioned above.

```
import re

regex = re.compile(r"([-!#-'*+/-9=?A-Z^-~]+(\.[-!#-'*+/-9=?A-Z^-~]+)*|\"([]!#-[^-~ \t]|(\\[\t -~]))+\")@([-!#-'*+/-9=?A-Z^-~]+(\.[-!#-'*+/-9=?A-Z^-~]+)*|\[[\t -Z^-~]*])")

def isValid(email):
    if re.fullmatch(regex, email):
        print("Valid email")
    else:
        print("Invalid email")

isValid("name.surname@gmail.com")
isValid("anonymous123@yahoo.co.uk")
isValid("anonymous123@...uk")
isValid("...@domain.us")
```

---

**Update 2024** (with GPT-4 hints and improvements):

```
import re

# Compiling the regex pattern for email validation
regex = re.compile(
    r"(?i)"                                       # Case-insensitive matching
    r"(?:[A-Z0-9!#$%&'*+/=?^_`{|}~-]+"            # Unquoted local part
    r"(?:\.[A-Z0-9!#$%&'*+/=?^_`{|}~-]+)*"        # Dot-separated atoms in local part
    r"|\"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]"  # Quoted strings
    r"|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*\")"       # Escaped characters in local part
    r"@"                                          # Separator
    r"[A-Z0-9](?:[A-Z0-9-]*[A-Z0-9])?"            # Domain name
    r"\.(?:[A-Z0-9](?:[A-Z0-9-]*[A-Z0-9])?)+"     # Top-level domain and subdomains
)

def isValid(email):
    """Check if the given email address is valid."""
    return "Valid email" if re.fullmatch(regex, email) else "Invalid email"

# Example Usage
print(isValid("name.surname@gmail.com"))
print(isValid("anonymous123@yahoo.co.uk"))
print(isValid("anonymous123@...uk"))
print(isValid("...@domain.us"))
```
You can also use the following to find all the email addresses in a text and print them in an array or each email on a separate line.

```
import re

line = "why people don't know what regex are? let me know asdfal2@als.com, Users1@gmail.de " \
       "Dariush@dasd-asasdsa.com.lo,Dariush.lastName@someDomain.com"
match = re.findall(r'[\w\.-]+@[\w\.-]+', line)
for i in match:
    print(i)
```

If you want to add it to a list just print the "match"

```
# this will print the list
print(match)
```
Extract email sub-strings from large document
[ "", "python", "string", "" ]
Now I have read the other stackoverflow Game of Life questions and also Googled voraciously. I know what to do for my Python implementation of the Game of Life. I want to keep track of the active cells in the grid. The problem is I'm stuck at how I should code it. Here's what I thought up, but I was kind of at my wit's end beyond that:

* Maintain an ActiveCell list consisting of cell co-ordinate tuples which are active, dead or alive.
* When computing the next generation, just iterate over the ActiveCell list, compute each cell's state and check whether the state changes or not.
* If the state changes, add all of the present cell's neighbours to the list.
* If not, remove that cell from the list.

Now the problem is (" . " --> other cell):

```
B C D
. A .
. . .
```

If A satisfies 3) then it adds B, C, D; then if B also returns true for 3), it will add A, C again (duplication). I considered using OrderedSet or something to take care of the order and avoid duplication. But I still hit these issues. I just need a direction.
You have two lists, I'll name them currentState, and newChanges. Here will be the workflow:

1. Iterate over currentState, figuring out which are newly born cells, and which ones are going to die. **Do NOT add these changes to your currentState**. If there is a cell to be born or a death, add it to the newChanges list. When you are finished with this step, currentState should look exactly the same as it did at the beginning.
2. Once you have finished all calculations in step 1 for *every* cell, then iterate over newChanges. For each pair in newChanges, change it in currentState from dead to alive or vice versa.

Example:

* currentState has {0,0} {0,1} {0,2}. (Three dots in a line)
* newChanges is calculated to be {0,0} {-1,1} {1,1} {0,2} (The two end dots die, and the spots above and below the middle are born)
* currentState receives the changes, and becomes {-1,1} {0,1} {1,1}, and newChanges is cleared.
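The two-list workflow above can be sketched with sets — a sketch, not authoritative code; `current_state` holds live-cell coordinates and `new_changes` collects every birth and death before anything is applied:

```python
def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(current_state):
    """One generation: collect births/deaths in new_changes first,
    then apply them all at once (current_state is untouched in step 1)."""
    new_changes = []
    # deaths: live cells without 2 or 3 live neighbours
    for cell in current_state:
        if len(neighbors(cell) & current_state) not in (2, 3):
            new_changes.append(cell)
    # births: dead cells with exactly 3 live neighbours
    candidates = set().union(*(neighbors(c) for c in current_state)) - current_state
    for cell in candidates:
        if len(neighbors(cell) & current_state) == 3:
            new_changes.append(cell)
    # step 2: toggle every recorded change in one go
    return current_state.symmetric_difference(new_changes)

blinker = {(0, 0), (0, 1), (0, 2)}  # the "three dots in a line" example
next_gen = step(blinker)
print(sorted(next_gen))  # [(-1, 1), (0, 1), (1, 1)]
```

This reproduces the worked example: the two end dots die and the spots beside the middle are born.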
don't know if it will help you, but here's a quick sketch of Game of Life, with an activecells dictionary:

```
from itertools import product

def show(board):
    for row in board:
        print " ".join(row)

def init(N):
    board = []
    for x in range(N):
        board.append([])
        for y in range(N):
            board[x].append(".")
    return board

def create_plane(board):
    board[2][0] = "x"
    board[2][1] = "x"
    board[2][2] = "x"
    board[1][2] = "x"
    board[0][1] = "x"

def neighbors(i, j, N):
    g1 = {x for x in product([1, 0, -1], repeat=2) if x != (0, 0)}
    g2 = {(i + di, j + dj) for di, dj in g1}
    return [(x, y) for x, y in g2 if x >= 0 and x < N and y >= 0 and y < N]

def live(board):
    N = len(board)
    acells = {}
    for i in range(N):
        for j in range(N):
            if board[i][j] == "x":
                for (x, y) in neighbors(i, j, N):
                    if (x, y) not in acells:
                        acells[(x, y)] = board[x][y]
    while True:
        print "-" * 2 * N, len(acells), "cells to check"
        show(board)
        raw_input("Press any key...")
        for c in acells.keys():
            a = len([x for x in neighbors(c[0], c[1], N) if board[x[0]][x[1]] == "x"])
            cur = board[c[0]][c[1]]
            if a == 0:
                del acells[c]  # if no live cells around, remove from active
            elif cur == "x" and a not in (2, 3):
                acells[c] = "."  # if alive and not 2 or 3 neighbors - dead
            elif cur == "." and a == 3:
                acells[c] = "x"  # if dead and 3 neighbors - alive
                for x in neighbors(c[0], c[1], N):  # add all neighbors of new born
                    if x not in acells:
                        acells[x] = board[x[0]][x[1]]
        for c in acells:
            board[c[0]][c[1]] = acells[c]

N = 7
board = init(N)
create_plane(board)
live(board)
```
Game Of Life : How to keep track of active cells
[ "", "python", "" ]
Is there a field or a function that would return all ASCII characters in python's standard library?
You can use the [`string`](http://docs.python.org/2/library/string.html#string-constants) module:

```
import string
print string.printable
```

which gives:

```
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
```
You can make one.

```
ASCII = ''.join(chr(x) for x in range(128))
```

If you need to check for membership, there are other ways to do it:

```
if c in ASCII:
    # c is an ASCII character

if c <= '\x7f':
    # c is an ASCII character
```

If you want to check that an entire string is ASCII:

```
def is_ascii(s):
    """Returns True if a string is ASCII, False otherwise."""
    try:
        s.encode('ASCII')
        return True
    except UnicodeEncodeError:
        return False
```
Is there a list of all ASCII characters in python's standard library?
[ "", "python", "" ]
After `E0_copy = list(E0)`, I guess `E0_copy` is a deep copy of `E0` since `id(E0)` is not equal to `id(E0_copy)`. Then I modify `E0_copy` in the loop, but why is `E0` not the same after?

```
E0 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for k in range(3):
    E0_copy = list(E0)
    E0_copy[k][k] = 0
    #print(E0_copy)
print E0  # -> [[0, 2, 3], [4, 0, 6], [7, 8, 0]]
```
`E0_copy` is not a deep copy. You don't make a deep copy using `list()`. (Both `list(...)` and `testList[:]` are shallow copies, as well as `testList.copy()`.)

You use [`copy.deepcopy(...)`](http://docs.python.org/library/copy.html#copy.deepcopy) for deep copying a list.

> `copy.deepcopy(x[, memo])`
>
> Return a deep copy of *x*.

See the following snippet -

```
>>> a = [[1, 2, 3], [4, 5, 6]]
>>> b = list(a)
>>> a
[[1, 2, 3], [4, 5, 6]]
>>> b
[[1, 2, 3], [4, 5, 6]]
>>> a[0][1] = 10
>>> a
[[1, 10, 3], [4, 5, 6]]
>>> b   # b changes too -> Not a deepcopy.
[[1, 10, 3], [4, 5, 6]]
```

Now see the `deepcopy` operation

```
>>> import copy
>>> b = copy.deepcopy(a)
>>> a
[[1, 10, 3], [4, 5, 6]]
>>> b
[[1, 10, 3], [4, 5, 6]]
>>> a[0][1] = 9
>>> a
[[1, 9, 3], [4, 5, 6]]
>>> b    # b doesn't change -> Deep Copy
[[1, 10, 3], [4, 5, 6]]
```

To explain, `list(...)` does not recursively make copies of the inner objects. It only makes a copy of the outermost list, while still referencing the same inner lists, hence, when you mutate the inner lists, the change is reflected in both the original list and the shallow copy. You can see that shallow copying references the inner lists by checking that `id(a[0]) == id(b[0])` where `b = list(a)`.
In Python, there is a module called `copy` with two useful functions:

```
import copy
copy.copy()
copy.deepcopy()
```

`copy()` is a shallow copy function. If the given argument is a compound data structure, for instance a **list**, then Python will create another object of the same type (in this case, a **new list**) but for everything inside the old list, only their reference is copied. Think of it like:

```
newList = [elem for elem in oldlist]
```

Intuitively, we could assume that `deepcopy()` would follow the same paradigm, and the only difference is that for each **elem we will recursively call deepcopy** (just like [mbguy's answer](https://stackoverflow.com/a/17874465/4518341)), ***but this is wrong!***

`deepcopy()` actually preserves the graphical structure of the original compound data:

```
a = [1,2]
b = [a,a]  # there's only 1 object a
c = deepcopy(b)

# check the result
c[0] is a     # False, a new object a_1 is created
c[0] is c[1]  # True, c is [a_1, a_1] not [a_1, a_2]
```

This is the tricky part: during the process of `deepcopy()`, a hashtable (dictionary in Python) is used to map each old object ref onto each new object ref, which prevents unnecessary duplicates and thus preserves the structure of the copied compound data.

[Official docs](https://docs.python.org/2/library/copy.html)
How to deep copy a list?
[ "", "python", "list", "copy", "deep-copy", "" ]
I am getting an error in my Django project, and it looks like it's coming from my `views.py` file:

```
from django.template.loader import get_template
from django.template import Context
from django.http import HttpResponse
import datetime

def get_date_time(request):
    now = datetime.datetime.now()
    return render(request, 'date_time.html', {'current_date': now})
```

Error: `global name 'render' is not defined`

What can I do to solve this?
You need to import [`render`](https://docs.djangoproject.com/en/dev/topics/http/shortcuts/#render) from [`django.shortcuts`](https://docs.djangoproject.com/en/dev/topics/http/shortcuts/#module-django.shortcuts) as it is not a built-in function:

```
from django.shortcuts import render
```
If you are following the Django tutorial and have this error but already have the import, it could be because the web server needs to be reloaded. The changes in code won't be reflected until `runserver` is ran again.
Django Python: global name 'render' is not defined
[ "", "python", "django", "render", "" ]
I want to get the data for '20130501'. Following is the query where I put '20130501' for the start date, and '20130502' for the end date.

```
SELECT A.NAME, A.GENDER
FROM TABLE A
WHERE A.DTM >= TO_DATE('20130501','YYYYMMDD')
  AND A.DTM <= TO_DATE('20130502','YYYYMMDD')
```

The query above gives the data for '20130501' fine. But now I want to put in '20130501' for both the start and end date input data. Below is the query:

```
SELECT A.NAME, A.GENDER
FROM TABLE A
WHERE A.DTM >= TO_DATE('20130501','YYYYMMDD')
  AND A.DTM <= TO_DATE('20130501','YYYYMMDD')
```

When I run the query, I get no data. I think it's because the DB reads both '20130501' values as the same time. But I want it to be '20130501 00:00' to '20130501 24:00'. So can anyone fix the query?
Just add a day to your second date, and use `<` for the comparison;

```
SELECT A.NAME, A.GENDER
FROM A
WHERE A.DTM >= TO_DATE('20130501','YYYYMMDD')
  AND A.DTM < TO_DATE('20130501','YYYYMMDD')+1
```

[An SQLfiddle to test with](http://sqlfiddle.com/#!4/b3d95/11).
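The same half-open pattern (`>=` start, `<` start + 1 day) can be tried out with SQLite's date functions via Python's stdlib `sqlite3`; a sketch with made-up rows (Oracle's `TO_DATE(...)+1` arithmetic works analogously):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, dtm TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("early",    "2013-05-01 00:00:00"),
     ("late",     "2013-05-01 23:59:59"),
     ("next_day", "2013-05-02 00:00:00")],
)

# >= start of the day AND < start of the next day catches the whole day,
# including 23:59:59, while excluding midnight of the 2nd
hits = [r[0] for r in conn.execute(
    "SELECT name FROM events "
    "WHERE dtm >= date('2013-05-01') AND dtm < date('2013-05-01', '+1 day') "
    "ORDER BY dtm")]
print(hits)  # ['early', 'late']
```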
Instead of using `>` and `<` for a date range you should use `BETWEEN`, since it is range inclusive:

```
SELECT A.NAME, A.GENDER
FROM TABLE A
WHERE A.DTM BETWEEN TO_DATE('20130501','YYYYMMDD') AND TO_DATE('20130502','YYYYMMDD');
```

<http://www.techonthenet.com/sql/between.php>

Since the solution with `TO_DATE` does not support milliseconds you can convert the `TO_DATE` into `TO_TIMESTAMP`, which supports milliseconds:

```
SELECT A.NAME, A.GENDER
FROM TABLE A
WHERE A.DTM BETWEEN TO_DATE('20130501'||' 00:00','YYYYMMDD HH24:MI')
                AND TO_TIMESTAMP('20130501'||' 23:59:59.999','YYYYMMDD HH24:MI:SS.FF');
```

but if you need greater precision, you should work with the date difference:

```
SELECT A.NAME, A.GENDER
FROM TABLE A
WHERE (A.DTM - TRUNC(TO_DATE('20130501','YYYYMMDD'))) BETWEEN 0 AND 1;
```

0 = A.DTM is equal to 20130501 00:00
1 = A.DTM is equal to 20130502
Selecting data between two dates
[ "", "sql", "oracle", "date", "" ]
I want to produce a list of lists that represents all possible combinations of the numbers 0 and 1. The lists have length n. The output should look like this.

For n=1:

```
[ [0], [1] ]
```

For n=2:

```
[ [0,0], [0, 1], [1,0], [1, 1] ]
```

For n=3:

```
[ [0,0,0], [0, 0, 1], [0, 1, 1]... [1, 1, 1] ]
```

I looked at itertools.combinations but this produces tuples, not lists. [0,1] and [1,0] are distinct combinations, whereas there is only one tuple (0,1) (order doesn't matter). Any hints or suggestions? I have tried some recursive techniques, but I haven't found the solution.
You're looking for [`itertools.product(...)`](http://docs.python.org/2/library/itertools.html#itertools.product).

```
>>> from itertools import product
>>> list(product([1, 0], repeat=2))
[(1, 1), (1, 0), (0, 1), (0, 0)]
```

If you want to convert the inner elements to `list` type, use a list comprehension

```
>>> [list(elem) for elem in product([1, 0], repeat=2)]
[[1, 1], [1, 0], [0, 1], [0, 0]]
```

Or by using `map()`

```
>>> map(list, product([1, 0], repeat=2))
[[1, 1], [1, 0], [0, 1], [0, 0]]
```
Use [`itertools.product`](http://docs.python.org/3/library/itertools.html#itertools.product), assigning the `repeat` to n.

```
from itertools import product
list(product([0,1], repeat=n))
```

Demo:

```
>>> list(product([0,1], repeat=2))
[(0, 0), (0, 1), (1, 0), (1, 1)]
>>> list(product([0,1], repeat=3))
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
```
How to generate list combinations?
[ "", "python", "combinations", "" ]
I have the following code:

```
Matrix = [[""] for x in range(2)]
for x in range(2):
    Matrix[x] = input().title().split(" ")
print(Matrix)
```

When I run it with this input:

```
hEllo wOrld
SecoNd teST
```

I have this output:

```
[['Hello', 'World'], ['Second', 'Test']]
```

I don’t understand why the case of the letters is modified. I’d like to have this output:

```
[['hEllo', 'wOrld'], ['SecoNd', 'teST']]
```

So why is the case modified, and how can I avoid it? Thanks.
The [string function title](http://docs.python.org/2/library/stdtypes.html#string-methods) is the one modifying the case. It is meant to do so :)

Just remove it and you should be fine:

```
Matrix = [[""] for x in range(2)]
for x in range(2):
    Matrix[x] = input().split(" ")
print(Matrix)
```
The `title()` function made these changes. According to the Python docs:

> str.title()
>
> Return a titlecased version of the string where words start with an uppercase character and the remaining characters are lowercase.
the case of string is modified when added in a matrix
[ "", "python", "string", "" ]
Well, the title is not very appropriate, please read on (I couldn't get a better one). Note: Using Python 2.7, but an algorithm will help too.

I'm making a side scroller game, in which I am generating the obstacles on the fly. The trouble I am having is figuring out how to generate the obstacles. o_O I have some kind of logic, but I'm having trouble figuring out the whole of it. So here's my problem from an implementation perspective:

I have a `Surface`, in which I have put some `Element`s, which are all rectangles. Think of it like:

```
0 0 0 0 0 0 0
0 0 0 0 1 1 0
0 0 0 0 1 1 0
0 0 0 0 1 1 0
0 0 0 0 0 0 0
0 1 1 0 0 1 1
0 0 0 0 0 1 1
```

As in the above structure, how can I determine if an a×b rectangle can be added without overlapping another rectangle (of 1s), and at which positions? Also, it must maintain a distance of x elements (even diagonally) from all the other objects — that means the entire rectangle is (x+3, x+4). Something like: if `x=1, a=3, b=4`, there's only one possible arrangement (2s represent the new object):

```
2 2 2 0 0 0 0
2 2 2 0 1 1 0
2 2 2 0 1 1 0
2 2 2 0 1 1 0
0 0 0 0 0 0 0
0 1 1 0 0 1 1
0 0 0 0 0 1 1
```

Basically, I need to find all the points from which a rectangle of sides `a` and `b` can have its, say, top-left corner. How can this be achieved?

Note: Open to better ideas for generating the obstacles on the fly!

PS: I've asked this here and on Programmers as I think it falls on topic on both sites.
The following should work fairly well:

```
def find_valid_locations(grid, z, a, b):
    check = [(0, 0, 0, 0)]
    w = z + b
    h = z + a
    while check:
        x, y, ox, oy = check.pop()
        if x + w >= len(grid) or y + h >= len(grid[0]):
            continue
        for i, row in enumerate(grid[x+ox:x+w+1], x+ox):
            for j, val in enumerate(row[y+oy:y+h+1], y+oy):
                if val:
                    break
            else:
                continue
            check.append((x, j+1, 0, 0))
            if y == 0:
                check.extend((ii, j+1, 0, 0) for ii in range(x+1, i+1))
            check.append((i+1, y, 0, 0))
            break
        else:
            yield (x, y)
            check.append((x, y+1, 0, h-1))
            if y == 0:
                check.append((x+1, y, w-1, 0))
            continue
```

The brute force method here would be to check all positions in every potential rectangle location and only return locations where the rectangle didn't encounter a non-zero position. This is essentially what we do here, with the following optimizations:

* If we have found a valid location (x, y), we can check locations (x+1, y) and (x, y+1) easily, by only checking the new positions added to the rectangle by shifting it down or to the right.
* If we encounter an obstacle at position (i, j) while checking location (x, y), we can skip checking any other location that includes (i, j) by starting our next checks at (i+1, y) and (x, j+1).

Note that I renamed the parameter `x` to `z` so that I could use `x` as a row index in the code.
You can store the surface in a matrix M, then iterate over the matrix to find a place for the top-left corner of the new rectangle R:

```
for all rows of matrix M
    for all columns of matrix M
        variable empty = 0
        for all numbers from 1 to a
            for all numbers from 1 to b
                empty = empty + M(row + a, col + b)
        if empty == 0
            insert R(row,col)  //insert R with top-left corner at M(row,col)
            break;
```
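For reference, the brute-force idea can be made runnable. This sketch (function and variable names are mine, not from either answer) treats the new rectangle as `w` columns × `h` rows, enforces a clearance of `gap` cells around it (diagonals included, clipped at the borders), and is checked against the question's 7×7 example:

```python
def valid_top_left_corners(grid, w, h, gap):
    """Return all (row, col) where an h-rows x w-cols rectangle fits,
    keeping `gap` empty cells (even diagonally) around it."""
    rows, cols = len(grid), len(grid[0])
    result = []
    for r in range(rows - h + 1):
        for c in range(cols - w + 1):
            # clearance region around the candidate, clipped at the grid borders
            cells = (grid[i][j]
                     for i in range(max(0, r - gap), min(rows, r + h + gap))
                     for j in range(max(0, c - gap), min(cols, c + w + gap)))
            if not any(cells):
                result.append((r, c))
    return result

grid = [[0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 1, 0],
        [0, 0, 0, 0, 1, 1, 0],
        [0, 0, 0, 0, 1, 1, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 0, 0, 1, 1]]

# x=1, a=3, b=4 from the question maps to gap=1, width 3, height 4 here
print(valid_top_left_corners(grid, w=3, h=4, gap=1))  # [(0, 0)]
```

This is O(rows·cols·w·h) per query, so it is only a baseline; the generator answer above trades this simplicity for skipping work.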
Finding an appropriate rectangle in a 2-d array
[ "", "python", "algorithm", "logic", "" ]
Code example:

```
In [171]: A = np.array([1.1, 1.1, 3.3, 3.3, 5.5, 6.6])

In [172]: B = np.array([111, 222, 222, 333, 333, 777])

In [173]: C = randint(10, 99, 6)

In [174]: df = pd.DataFrame(zip(A, B, C), columns=['A', 'B', 'C'])

In [175]: df.set_index(['A', 'B'], inplace=True)

In [176]: df
Out[176]:
          C
A   B
1.1 111  20
    222  31
3.3 222  24
    333  65
5.5 333  22
6.6 777  74
```

Now, I want to retrieve A values:

**Q1**: in range [3.3, 6.6] - expected return value: [3.3, 5.5, 6.6] or [3.3, 3.3, 5.5, 6.6] in case last inclusive, and [3.3, 5.5] or [3.3, 3.3, 5.5] if not.

**Q2**: in range [2.0, 4.0] - expected return value: [3.3] or [3.3, 3.3]

Same for any other *MultiIndex* dimension, for example B values:

**Q3**: in range [111, 500] with repetitions, as number of data rows in range - expected return value: [111, 222, 222, 333, 333]

More formal: Let us assume T is a table with columns A, B and C. The table includes *n* rows. Table cells are numbers, for example A double, B and C integers. Let's create a *DataFrame* of table T, let us name it DF. Let's set columns A and B as indexes of DF (without duplication, i.e. no separate columns A and B as indexes, and separate as data), i.e. A and B in this case form a *MultiIndex*.

Questions:

1. How to write a query on the index, for example, to query the index A (or B), say in the labels interval [120.0, 540.0]? Labels 120.0 and 540.0 exist. I must clarify that I am interested only in the list of indices as a response to the query!
2. How to do the same, but in case the labels 120.0 and 540.0 do not exist, but there are labels with values lower than 120, higher than 120 and less than 540, or higher than 540?
3. In case the answer for Q1 and Q2 was unique index values, now the same, but with repetitions, as the number of data rows in the index range.
I know the answers to the above questions in the case of columns which are not indexes, but in the indexes case, after a long research in the web and experimentation with the functionality of *pandas*, I did not succeed. The only method (without additional programming) I see now is to have a duplicate of A and B as data columns in addition to index.
To query the *df* by the *MultiIndex* values, for example where *(A > 1.7) and (B < 666)*:

```
In [536]: result_df = df.loc[(df.index.get_level_values('A') > 1.7) & (df.index.get_level_values('B') < 666)]

In [537]: result_df
Out[537]:
          C
A   B
3.3 222  43
    333  59
5.5 333  56
```

Hence, to get for example the *'A'* index values, if still required:

```
In [538]: result_df.index.get_level_values('A')
Out[538]: Index([3.3, 3.3, 5.5], dtype=object)
```

The problem is, that in large data frames the performance of *by index* selection is 10% worse than that of sorted regular row selection. And in repetitive work, looping, the delay accumulates. See example:

```
In [558]: df = store.select(STORE_EXTENT_BURSTS_DF_KEY)

In [559]: len(df)
Out[559]: 12857

In [560]: df.sort(inplace=True)

In [561]: df_without_index = df.reset_index()

In [562]: %timeit df.loc[(df.index.get_level_values('END_TIME') > 358200) & (df.index.get_level_values('START_TIME') < 361680)]
1000 loops, best of 3: 562 µs per loop

In [563]: %timeit df_without_index[(df_without_index.END_TIME > 358200) & (df_without_index.START_TIME < 361680)]
1000 loops, best of 3: 507 µs per loop
```
**For better readability**, we can simply use [the `query()` Method](http://pandas.pydata.org/pandas-docs/stable/indexing.html#the-query-method), to avoid the lengthy `df.index.get_level_values()` and `reset_index`/`set_index` to and fro.

Here is the target `DataFrame`:

```
In [12]: df
Out[12]:
          C
A   B
1.1 111  68
    222  40
3.3 222  20
    333  11
5.5 333  80
6.6 777  51
```

---

Answer for **Q1** (`A` in range `[3.3, 6.6]`):

```
In [13]: df.query('3.3 <= A <= 6.6')  # for closed interval
Out[13]:
          C
A   B
3.3 222  20
    333  11
5.5 333  80
6.6 777  51

In [14]: df.query('3.3 < A < 6.6')  # for open interval
Out[14]:
          C
A   B
5.5 333  80
```

and of course one can play around with `<, <=, >, >=` for any kind of inclusion.

---

Similarly, answer for **Q2** (`A` in range `[2.0, 4.0]`):

```
In [15]: df.query('2.0 <= A <= 4.0')
Out[15]:
          C
A   B
3.3 222  20
    333  11
```

---

Answer for **Q3** (`B` in range `[111, 500]`):

```
In [16]: df.query('111 <= B <= 500')
Out[16]:
          C
A   B
1.1 111  68
    222  40
3.3 222  20
    333  11
5.5 333  80
```

---

And moreover, you can **COMBINE** the query for col `A` and `B` very naturally!

```
In [17]: df.query('0 < A < 4 and 150 < B < 400')
Out[17]:
          C
A   B
1.1 222  40
3.3 222  20
    333  11
```
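As an alternative to `query()`, a *sorted* `MultiIndex` also supports direct label slicing with `.loc` on the first level. A sketch with the same shape of data (the `C` values are arbitrary; this assumes a reasonably recent pandas):

```python
import pandas as pd

df = pd.DataFrame(
    {"C": [68, 40, 20, 11, 80, 51]},
    index=pd.MultiIndex.from_tuples(
        [(1.1, 111), (1.1, 222), (3.3, 222), (3.3, 333), (5.5, 333), (6.6, 777)],
        names=["A", "B"],
    ),
).sort_index()  # label slicing needs a lexsorted MultiIndex

subset = df.loc[3.3:6.6]  # closed interval on level 'A'; 6.6 is included
a_values = list(subset.index.get_level_values("A"))
print(a_values)  # [3.3, 3.3, 5.5, 6.6]
```

Note the slice is closed on both ends, matching the "last inclusive" variant of Q1.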
How to query MultiIndex index columns values in pandas
[ "", "python", "pandas", "indexing", "slice", "multi-index", "" ]
I am installing certificates on a remote server and want to check whether they exist before I overwrite them. The server only allows non-root access via ssh public key. I can `sudo -s` to root once in a shell. Root is required because /etc/ssl is not readable by anyone else. This is being developed in `python fabric`, so any command that can be run in a shell command via `sudo` would work. I don't mind typing in passwords at prompts in this case. **TL;DR:** I need an `sh` command that can tell my python program whether a remote file (or directory) exists when run as `if fabric.sudo(sh_command) == True:` (or something similar). Thank you!
```
from fabric.contrib.files import exists

def foo():
    if exists('/path/to/remote/file', use_sudo=True):
        pass  # command
```
Maybe not the simplest way, but out of my head, I would suggest:

```
ssh user@server 'bash -c "if [ -e /path/to/remote/file ] ; then true ; fi"'
```
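Either way, the shell-level building block is the exit status of `test -e` (the same check `[ -e ... ]` performs). A hedged sketch using `subprocess` — for the real remote case you would prepend something like `('ssh', 'user@server', 'sudo')`, which is shown as a parameter but not actually run here:

```python
import subprocess
import tempfile

def file_exists(path, command_prefix=()):
    """True if `test -e <path>` exits 0; command_prefix can route the
    check through ssh/sudo, e.g. ('ssh', 'user@server', 'sudo')."""
    cmd = list(command_prefix) + ["test", "-e", path]
    return subprocess.run(cmd).returncode == 0

with tempfile.NamedTemporaryFile() as f:
    present = file_exists(f.name)  # True while the file exists
missing = file_exists(f.name)      # False after the tempfile is deleted
print(present, missing)  # True False
```

In fabric, `sudo('test -e /etc/ssl/...')` with `warn_only` set gives you the same exit-status check without the `exists()` helper.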
Test if File/Dir exists over SSH/Sudo in Python/Bash
[ "", "python", "bash", "ssh", "fabric", "" ]
I have a column name called "PersonNameID" which contains two values ``` ABCD-GHJI ABHK-67891 HJKK-HJJJMH-8990 ``` I have to extract only the first part of the "PersonNameID" which contains number after "-".Ideally my output should be ``` ABCD-GHJI ABHK HJKK-HJJJMH ``` but when I use following code : ``` SELECT TOP 100 CONVERT(NVARCHAR(100), SUBSTRING(PersonNameID, 1, CASE WHEN CHARINDEX('-', PersonNameID) > 0 THEN LEN(PersonNameID) - LEN(REVERSE(SUBSTRING(REVERSE(PersonNameID), 1, CHARINDEX('-', REVERSE(PersonNameID))))) ELSE LEN(PersonNameID) END ) ) AS New_PersonNameID FROM Person ``` I get the output as ``` ABCD ABHK HJKK ``` Any modifications to the above code to get the desired output?
I have found the solution to the above situation... ``` DECLARE @Person TABLE( Person_NAME_Original VARCHAR(500), Person_NAME_Modified VARCHAR(500) ) INSERT INTO @Person (Person_NAME_Original,Person_NAME_Modified) SELECT top 1000 PersonNameID, CASE WHEN CHARINDEX('-',PersonNameID) = 0 THEN PersonNameID WHEN iSNUMERIC( rIGHT(PersonNameID ,CHARINDEX('-',REVERSE(PersonNameID))-1))=1 THEN LEFT(PersonNameID, len(PersonNameID) - CHARINDEX('-',REVERSE(PersonNameID) )-1) ELSE PersonNameID END AS New_PersonNameID from Person select * from @Person ```
Use pattern matching to find the numeric ones and then work out where the numeric ``` SELECT LEFT(PersonNameID, CASE WHEN PersonNameID LIKE '%[0-9]%' AND CHARINDEX('-', PersonNameID) > 0 THEN CHARINDEX('-', PersonNameID)-1 ELSE LEN(PersonNameID) END) AS NewPersonId FROM Person ```
Substring in sql server on numbers
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2", "" ]
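The trimming rule in this exchange (drop the text after the last `-` only when it is numeric) can be checked with a quick sketch. This is not the T-SQL answer itself; it is a hypothetical Python equivalent using the sample values from the question, handy as an oracle when tweaking the SQL version:

```python
def strip_numeric_suffix(name):
    # Drop the part after the LAST '-' only when that part is purely numeric.
    head, sep, tail = name.rpartition("-")
    if sep and tail.isdigit():
        return head
    return name

rows = ["ABCD-GHJI", "ABHK-67891", "HJKK-HJJJMH-8990"]
result = [strip_numeric_suffix(r) for r in rows]
print(result)  # ['ABCD-GHJI', 'ABHK', 'HJKK-HJJJMH']
```

The printed list matches the desired output stated in the question.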
I am currently reading "Python Programming for the Absolute Beginning 3rd Edition". One of the challenges is: > Write a Character Creator program for a role-playing game. The > player should be given a pool of 30 points to spend on four > attributes: Strength, Health, Wisdom, and Dexterity. The > player should be able to spend points from the pool on any > attribute and should also be able to take points from an > attribute and put them back into the pool. At first I wrote it with the variables ``` pool = 30 strength = 0 health = 0 wisdom = 0 dexterity = 0 ``` This section is regarding lists and dictionaries. So my question is: Is it better to use the variables in this manner or is it possible to use dictionaries? If so, is it more efficient? ie: ``` attributes = { "strength" : 0, "health" : 0, "wisdom" : 0, "dexterity" : 0 } ```
To make it short: I would go for the dictionary. To make it long: This could be a nice example to delve directly into object oriented programming. ``` #! /usr/bin/python3 class Character: class AbilityScoreOutOfBoundsException (Exception): pass def __init__ (self, name): self.name = name self.stats = {k: 1 for k in ['STR', 'DEX', 'WIS', 'INT'] } @property def strength (self): return self.stats ['STR'] @property def dexterity (self): return self.stats ['DEX'] @property def wisdom (self): return self.stats ['WIS'] @property def intelligence (self): return self.stats ['INT'] @strength.setter def strength (self, amount): self.setStat ('STR', amount) @wisdom.setter def wisdom (self, amount): self.setStat ('WIS', amount) @dexterity.setter def dexterity (self, amount): self.setStat ('DEX', amount) @intelligence.setter def intelligence (self, amount): self.setStat ('INT', amount) def setStat (self, which, amount): if amount < 1: raise Character.AbilityScoreOutOfBoundsException ('Beware hero! Thou wert about to smite thyself.') if self.total + amount - self.stats [which] > 30: raise Character.AbilityScoreOutOfBoundsException ('Beware hero! Thou shalt not grow too mighty.') self.stats [which] = amount @property def total (self): return sum (self.stats.values () ) def __repr__ (self): return '{}\n{}'.format (self.name, '\n'.join ('{}{:>4}'.format (which, self.stats [which] ) for which in ['STR', 'DEX', 'WIS', 'INT'] ) ) a = Character ('Daggeroth') a.strength += 9 a.dexterity += 9 a.wisdom += 5 a.intelligence += 3 print (a) ```
Dictionaries in Python are implement with hash tables, so efficiency isn't an issue here. Using dictionary of attributes is better because it is more flexible. For example, if you wanted multiple characters, then you can simply have a list of attribute dictionaries. Using variables in this case isn't maintainable (you would need something like `player1Health, player2Health, ...`).
Dictionary vs Variable
[ "", "python", "dictionary", "python-3.x", "" ]
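Between the full OOP answer and the bare dict, a middle ground is the dictionary plus one helper that enforces the 30-point pool. A minimal sketch (the function name and the budget check are illustrative, not from the book):

```python
POOL = 30

def spend(attributes, which, amount):
    # Move `amount` points from the pool into `which`; a negative
    # amount refunds points back into the pool.
    spent = sum(attributes.values())
    if spent + amount > POOL or attributes[which] + amount < 0:
        raise ValueError("not enough points in the pool or the attribute")
    attributes[which] += amount

attrs = {"strength": 0, "health": 0, "wisdom": 0, "dexterity": 0}
spend(attrs, "strength", 10)
spend(attrs, "health", 5)
spend(attrs, "strength", -3)  # take 3 points back out of strength
print(attrs["strength"], attrs["health"], POOL - sum(attrs.values()))  # 7 5 18
```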
I have some list like this ``` [ ['2000-01-01', 1.0], ['2000-02-01', 2.0] ] [ ['2000-01-01', 2.0], ['2000-02-01', 1.0], ['2000-03-01', 3.0] ] [ ['2000-01-01', 3.1], ['2000-02-01', 2.0], ['2000-03-01', 1.4] ] ``` how can i merge the datatime and sum the value get a list like this ``` [ ['2000-01-01', 6.1], ['2000-02-01', 5.0], ['2000-03-01', 4.4] ] ``` the element's type in list is [datetime, double]
Use [`collections.defaultdict()`](http://docs.python.org/2/library/collections.html#collections.defaultdict) here. See the snippet. ``` >>> a = [ ['2000-01-01', 1.0], ['2000-02-01', 2.0] ] >>> b = [ ['2000-01-01', 2.0], ['2000-02-01', 1.0], ['2000-03-01', 3.0] ] >>> c = [ ['2000-01-01', 3.0], ['2000-02-01', 2.0], ['2000-03-01', 1.0] ] >>> from collections import defaultdict >>> m = defaultdict(int) >>> d = a + b + c >>> d [['2000-01-01', 1.0], ['2000-02-01', 2.0], ['2000-01-01', 2.0], ['2000-02-01', 1.0], ['2000-03-01', 3.0], ['2000-01-01', 3.0], ['2000-02-01', 2.0], ['2000-03-01', 1.0]] >>> for date, count in d: m[date] += count >>> m.items() [('2000-02-01', 5.0), ('2000-03-01', 4.0), ('2000-01-01', 6.0)] ```
For all that kind of Excel-like programming, I advise to leverage Pandas as it is made for this. However if you want to stay with the standard lib, a Counter is good enough.
python sum and merge element of list
[ "", "python", "list", "datetime", "sum", "" ]
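A self-contained version of the accepted `defaultdict` approach, using the exact numbers from the question and sorting the keys so the merged list comes out in date order:

```python
from collections import defaultdict

a = [['2000-01-01', 1.0], ['2000-02-01', 2.0]]
b = [['2000-01-01', 2.0], ['2000-02-01', 1.0], ['2000-03-01', 3.0]]
c = [['2000-01-01', 3.1], ['2000-02-01', 2.0], ['2000-03-01', 1.4]]

totals = defaultdict(float)
for date, value in a + b + c:
    totals[date] += value

# ISO date strings sort correctly as plain strings, so sorted() gives date order.
merged = [[date, round(total, 2)] for date, total in sorted(totals.items())]
print(merged)
# [['2000-01-01', 6.1], ['2000-02-01', 5.0], ['2000-03-01', 4.4]]
```

The `round` only tidies float accumulation; with real `datetime` keys the same code works unchanged.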
I need to perform a SQL request to extract some data. I'm not sure if it's possible, and if so, I don't know how to do it. I believe an example is better to demonstrate what I'd like to do. Lets assume a very simple table: ``` -------------------------- | ID | domain | -------------------------- | 1 | example.com | -------------------------- | 2 | stackoverflow.com | -------------------------- ``` I would like to retrieve the entry whose `domain` **ends** a specified string. If user input were `www.example.com`, what request could I perform so the entry whose `domain` is `example.com` would be retrieved? The string `www.example.com` is ended by the string `example.com`, that means I can't use `% LIKE` SQL construct, because I'm looking for a substring of the predicate. Here is a potential dirty workaround to make it clearer: ``` user_input = "www.stackoverflow.com" for domain in get_all_domains_from_db(): if user_input.endswith(domain): print "It's this one!" ``` Ps: Let me know if something isn't clear.
this will resolve your problem. ``` select * from table_name where :userParam like CONCAT('%', domain) ``` :userparam will be 'www.example.com'
you could split the user input along the `.`s, and try shorter and shorter sequences. some pythonlike pseudocode: ``` user_input = "www.stackoverflow.com" split_domain = user_input.split('.') query = 'SELECT * FROM table WHERE domain = ' + '.'.join(split_domain) while db_returns_no_rows(query) and split_domain: del split_domain[0] query = 'SELECT * FROM table WHERE domain = ' + '.'.join(split_domain) ```
SQL: Finding DB entry that ends a given string
[ "", "mysql", "sql", "" ]
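The accepted trick (compare the user input against the pattern built from the column) can be demonstrated end-to-end with Python's built-in `sqlite3`. One caveat: the answer above uses MySQL's `CONCAT`, while SQLite spells concatenation `||`; the table here is made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE domains (id INTEGER, domain TEXT)")
conn.executemany("INSERT INTO domains VALUES (?, ?)",
                 [(1, "example.com"), (2, "stackoverflow.com")])

user_input = "www.example.com"
# The column value becomes the pattern's suffix: input LIKE '%' || domain
rows = conn.execute(
    "SELECT id, domain FROM domains WHERE ? LIKE '%' || domain",
    (user_input,),
).fetchall()
print(rows)  # [(1, 'example.com')]
```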
I want to subtract two columns, and then I want to compute the total of the subtracted values. I am using MS Access 2010. ``` Value1 Value2 100 50 100 80 ``` I have applied- `select value1-value2 as Result from tbl` it gives the result ``` Result 50 20 ``` But I want ``` Result 70 ``` Any idea?
Use the following: ``` select sum(value1-value2) as Result from tbl ```
Use this statement: ``` SELECT SUM(Value1- value2) as Result FROM tbl ```
SQL Query to Subtract and Sum in same time
[ "", "sql", "ms-access-2007", "" ]
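The accepted `SUM(value1 - value2)` pattern is standard SQL, so it can be verified with Python's built-in `sqlite3` even though the question is about MS Access (table and data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (Value1 REAL, Value2 REAL)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)", [(100, 50), (100, 80)])

# Subtract per row, then sum the differences in one aggregate.
(result,) = conn.execute("SELECT SUM(Value1 - Value2) AS Result FROM tbl").fetchone()
print(result)  # 70.0, i.e. (100-50) + (100-80)
```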
I'm trying to access the corner values of a numpy ndarray. I'm absolutely stumped as for methodology. Any help would be greatly appreciated. For example, from the below array I'd like a return value of array([1,0,0,5]) or array([[1,0],[0,5]]). ``` array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 5.], [ 0., 0., 5., 5.]]) ```
To add variety to the answers, you can get a view (not a copy) of the corner items doing: ``` corners = a[::a.shape[0]-1, ::a.shape[1]-1] ``` Or, for a generic n-dimensional array: ``` corners = a[tuple(slice(None, None, j-1) for j in a.shape)] ``` Doing this, you can modify the original array by modifying the view: ``` >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> corners = a[tuple(slice(None, None, j-1) for j in a.shape)] >>> corners array([[0, 2], [6, 8]]) >>> corners += 1 >>> a array([[1, 1, 3], [3, 4, 5], [7, 7, 9]]) ``` **EDIT** Ah, you want a flat list of corner values... That cannot in general be achieved with a view, so @IanH's answer is what you are looking for.
How about ``` A[[0,0,-1,-1],[0,-1,0,-1]] ``` where `A` is the array.
Get corner values in Python numpy ndarray
[ "", "python", "numpy", "" ]
In previous version of Nose testing framework, there were several ways to specify only a subset of all tests: ``` nosetests test.module nosetests another.test:TestCase.test_method nosetests a.test:TestCase nosetests /path/to/test/file.py:test_function ``` <http://nose.readthedocs.org/en/latest/usage.html#selecting-tests> However, I can't find any information about similar test selection in Nose2. There's a [mention](https://nose2.readthedocs.org/en/latest/differences.html#test-discovery-and-loading) in docs about different test discovery, but that doesn't seem to be related. Is there a way to select a specific test or test case in nose2 or (more generally) in unittest2?
I have some tests in dev/tests, for example: ``` dev/tests/test_file.py ``` I am able to run this with: ``` nose2 -s dev tests.test_file ``` Additionally, I'm able to run a specific test method in a test case as follows: ``` nose2 -s dev tests.test_file.TestCase.test_method ``` Does that accomplish what you want?
Works without `-s`, if you have your test in `tests/path/path2/mytest.py` You can do `nose2 tests.path.path2.mytest`
How to run specific test in Nose2
[ "", "python", "nosetests", "unittest2", "nose2", "" ]
I am pretty new to python and currenty I am trying to use pylint for checking code quality. I am getting a problem. My pylint doesn't point to virtualenv python interpreter. Here is the output that I get when I run pylint --version ``` $ pylint --version pylint 0.21.1, astng 0.20.1, common 0.50.3 Python 2.6.6 (r266:84292, Jul 10 2013, 22:48:45) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] ``` In virtualenv I have python 2.7 installed. Will appretiate you help if someone can point me to how to solve that.
A cheap trick is to run (the global) pylint using the virtualenv python. You can do this using `python $(which pylint)` instead of just `pylint`. On zsh, you can also do `python =pylint`.
I am fairly sure that you need to install pylint under your virtual environment and then run that instance of it. ## Update - Make life easier: I would suggest that anybody working a lot in virtual environments create a batch file, (in a known location or on the path), or bash script with something like the following called something like `getlint.bat`: ``` pip install pylint ``` Invoking this **after** activating the virtual environment will install pylint into that virtual environment. If you are likely to be offline or have a poor internet connection you can, once *when you have a good internet connection*, (possibly once for each of python 2 & 3): ``` mkdir C:\Some\Directory\You\Will\Leave\Alone pip download --dest=C:\Some\Directory\You\Will\Leave\Alone pylint ``` Which will download pylint and its dependencies to `C:\Some\Directory\You\Will\Leave\Alone` and you can modify `getlint.bat` to read: ``` pip install pylint --find-links=C:\Some\Directory\You\Will\Leave\Alone ``` It will then use the pre-downloaded versions.
pylint doesn't point to virtualenv python
[ "", "python", "virtualenv", "pylint", "" ]
Say I define a string in Python like the following: ``` my_string = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" ``` I would like to parse that string in Python in a way that allows me to index the different structures of the language. For example, the output could be a dictionary `parsing_result` that allows me to index the different elements in a structred manner. For example, the following: ``` parsing_result['names'] ``` would hold a `list` of strings: `['name1', 'name2']` whereas `parsing_result['options']` would hold a dictionary so that: * `parsing_result['something']['options']['opt2']` holds the string `"text"` * `parsing_result['something_else']['options']['opt1']` holds the string `"58"` My first question is: How do I approach this problem in Python? Are there any libraries that simplify this task? For a working example, I am not necessarily interested in a solution that parses the exact syntax I defined above (although that would be fantastic), but anything close to it would be great. ### Update 1. It looks like the general right solution is using a parser and a lexer such as [ply](http://www.dabeaz.com/ply/) (thank you @Joran), but the documentation is a bit intimidating. Is there an easier way of getting this done when the syntax is **lightweight**? 2. I found [this thread](https://stackoverflow.com/questions/1648537/how-to-split-a-string-by-commas-positioned-outside-of-parenthesis) where the following regular expression is provided to partition a string around outer commas: ``` r = re.compile(r'(?:[^,(]|\([^)]*\))+') r.findall(s) ``` But this is assuming that the grouping character are `()` (and not `{}`). I am trying to adapt this, but it doesn't look easy.
I highly recommend **[pyparsing](http://pyparsing.wikispaces.com/)**: > The pyparsing module is an alternative approach to creating and > executing simple grammars, vs. the traditional lex/yacc approach, or > the use of regular expressions. > > The Python representation of the grammar is quite > **readable**, owing to the **self-explanatory class names**, and the use of > '+', '|' and '^' operator definitions. The parsed results returned from parseString() **can be accessed as a nested list, a dictionary, or an object with named attributes**. **Sample code** (Hello world from the pyparsing docs): ``` from pyparsing import Word, alphas greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here hello = "Hello, World!" print (hello, "->", greet.parseString( hello )) ``` **Output:** ``` Hello, World! -> ['Hello', ',', 'World', '!'] ``` ***Edit:*** Here's a solution to your sample language: ``` from pyparsing import * import json identifier = Word(alphas + nums + "_") expression = identifier("lhs") + Suppress("=") + identifier("rhs") struct_vals = delimitedList(Group(expression | identifier)) structure = Group(identifier + nestedExpr(opener="{", closer="}", content=struct_vals("vals"))) grammar = delimitedList(structure) my_string = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" parse_result = grammar.parseString(my_string) result_list = parse_result.asList() def list_to_dict(l): d = {} for struct in l: d[struct[0]] = {} for ident in struct[1]: if len(ident) == 2: d[struct[0]][ident[0]] = ident[1] elif len(ident) == 1: d[struct[0]][ident[0]] = None return d print json.dumps(list_to_dict(result_list), indent=2) ``` **Output:** (pretty printed as JSON) ``` { "something_else": { "opt1": "58", "name3": null }, "something": { "opt1": "2", "opt2": "text", "name2": null, "name1": null } } ``` Use the [pyparsing API](http://pythonhosted.org/pyparsing/) as your guide to exploring the functionality of pyparsing and understanding 
the nuances of my solution. I've found that the quickest way to master this library is trying it out on some simple languages you think up yourself.
Here is a test of regular expression modified to react on `{}` instead of `()`: ``` import re s = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" r = re.compile(r'(?:[^,{]|{[^}]*})+') print r.findall(s) ``` You'll get a list of separate 'named blocks' as a result: ``` `['something{name1, name2, opt1=2, opt2=text}', ' something_else{name3, opt1=58}']` ``` I've made better code that can parse your simple example, you should for example catch exceptions to detect a syntax error, and restrict more valid block names, parameter names: ``` import re s = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" r = re.compile(r'(?:[^,{]|{[^}]*})+') rblock = re.compile(r'\s*(\w+)\s*{(.*)}\s*') rparam = re.compile(r'\s*([^=\s]+)\s*(=\s*([^,]+))?') blocks = r.findall(s) for block in blocks: resb = rblock.match(block) blockname = resb.group(1) blockargs = resb.group(2) print "block name=", blockname print "args:" for arg in re.split(",", blockargs): resp = rparam.match(arg) paramname = resp.group(1) paramval = resp.group(3) if paramval == None: print "param name =\"{0}\" no value".format(paramname) else: print "param name =\"{0}\" value=\"{1}\"".format(paramname, str(paramval)) ```
Parsing a lightweight language in Python
[ "", "python", "parsing", "" ]
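If installing pyparsing is not an option, the same mini-language can be handled with only the standard library. This is a hypothetical sketch (the function name and output shape are my own, chosen to match the `parsing_result['...']['options']['...']` indexing from the question), and it assumes blocks never nest:

```python
import re

def parse(text):
    """Parse 'name{a, b, k=v, ...}, ...' into {name: {'names': [...], 'options': {...}}}."""
    result = {}
    for block in re.finditer(r'(\w+)\s*\{([^}]*)\}', text):
        name, body = block.group(1), block.group(2)
        names, options = [], {}
        for item in (part.strip() for part in body.split(',') if part.strip()):
            if '=' in item:
                key, _, value = item.partition('=')
                options[key.strip()] = value.strip()
            else:
                names.append(item)
        result[name] = {'names': names, 'options': options}
    return result

parsed = parse("something{name1, name2, opt1=2, opt2=text}, "
               "something_else{name3, opt1=58}")
print(parsed['something']['names'])                 # ['name1', 'name2']
print(parsed['something']['options']['opt2'])       # text
print(parsed['something_else']['options']['opt1'])  # 58
```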
At the moment I'm doing my query like this: `results = Points.where(latitude: (lat_low..lat_high))` `result = results.where(longitude: (long_low..long_high)).first()` It works but I can't help but think there should be a better way to pull out that one record.
Try this: ``` Points.where({ latitude: (lat_low..lat_high), longitude: (long_low..long_high) }).first ```
How about ... ``` result = Points.where(latitude: (lat_low..lat_high)). where(longitude: (long_low..long_high)). first ```
Combine two BETWEEN Rails queries
[ "", "sql", "ruby-on-rails", "" ]
![enter image description here](https://i.stack.imgur.com/EzRMN.jpg) This is easy to do in Excel; however, I would like to have this kind of calculation done via SQL. I could use `GROUP BY` and `OVER()` to calculate the SUM and % for a single year, but I have failed to present the data for 3 years at once. Any help will be appreciated.
Since you are using SQL Server, if you are using SQL Server 2005+ then you can use the PIVOT function to get the result. This solution implements both an unpivot and a pivot process to get the result. The starting point for this result is to calculate the total percent and total by type: ``` select type, year, total, round(total / sum(total) over(partition by year)*100.0, 1) t_per, sum(total) over(partition by type) t_type, round(sum(total) over(partition by type)*100.0/sum(total) over(), 1) tot_per from tablea ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/51656/36). This will give a result with multiple columns that you want to pivot so you can unpivot the data into multiple rows using CROSS APPLY: ``` select type, col = cast(year as varchar(4))+'_'+col, value, t_type from ( select type, year, total, round(total / sum(total) over(partition by year)*100.0, 1) t_per, sum(total) over(partition by type) t_type, round(sum(total) over(partition by type)*100.0/sum(total) over(), 1) tot_per from tablea ) d cross apply ( select 'total', total union all select 't_per', t_per ) c (col, value); ``` See [Demo](http://sqlfiddle.com/#!3/51656/37). Finally you can apply the PIVOT function to the values in `col`: ``` select type, [2010_total], [2010_t_per], [2011_total], [2011_t_per], [2012_total], [2012_t_per], t_type, tot_per from ( select type, col = cast(year as varchar(4))+'_'+col, value, t_type, tot_per from ( select type, year, total, round(total / sum(total) over(partition by year)*100.0, 1) t_per, sum(total) over(partition by type) t_type, round(sum(total) over(partition by type)*100.0/sum(total) over(), 1) tot_per from tablea ) d cross apply ( select 'total', total union all select 't_per', t_per ) c (col, value) ) s pivot ( max(value) for col in ([2010_total], [2010_t_per], [2011_total], [2011_t_per], [2012_total], [2012_t_per]) ) piv ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/51656/38). 
This could be refactored to use a CTE instead of the subqueries and this could also be converted to use dynamic SQL if the year will be unknown. If you have an unknown number of values, then the dynamic SQL code will be: ``` DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT ',' + QUOTENAME(cast(year as varchar(4))+'_'+col) from tablea cross apply ( select 'total', 1 union all select 't_per', 2 ) c (col, so) group by year, col, so order by year, so FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = 'SELECT type,' + @cols + ', t_type, tot_per from ( select type, col = cast(year as varchar(4))+''_''+col, value, t_type, tot_per from ( select type, year, total, round(total / sum(total) over(partition by year)*100.0, 1) t_per, sum(total) over(partition by type) t_type, round(sum(total) over(partition by type)*100.0/sum(total) over(), 1) tot_per from tablea ) d cross apply ( select ''total'', total union all select ''t_per'', t_per ) c (col, value) ) x pivot ( max(value) for col in (' + @cols + ') ) p ' execute sp_executesql @query; ``` See [Demo](http://sqlfiddle.com/#!3/51656/53). Both the static version and the dynamic version give the result: ``` | TYPE | 2010_TOTAL | 2010_T_PER | 2011_TOTAL | 2011_T_PER | 2012_TOTAL | 2012_T_PER | T_TYPE | TOT_PER | --------------------------------------------------------------------------------------------------------- | A | 1 | 16.7 | 1 | 16.7 | 1 | 16.7 | 3 | 16.7 | | B | 2 | 33.3 | 2 | 33.3 | 2 | 33.3 | 6 | 33.3 | | C | 3 | 50 | 3 | 50 | 3 | 50 | 9 | 50 | ```
`SUMIF` can be replicated in SQL with `SUM(case statement)`: ``` SELECT Type ,SUM(CASE WHEN Year = '2010' THEN Total ELSE 0 END)'2010 Total' ,SUM(CASE WHEN Year = '2010' THEN Total ELSE 0 END)*1.0/SUM(SUM(CASE WHEN Year = '2010' THEN Total ELSE 0 END)) OVER () '2010 Percent of Total' ,SUM(CASE WHEN Year = '2011' THEN Total ELSE 0 END)'2011 Total' ,SUM(CASE WHEN Year = '2011' THEN Total ELSE 0 END)*1.0/SUM(SUM(CASE WHEN Year = '2011' THEN Total ELSE 0 END)) OVER () '2011 Percent of Total' ,SUM(CASE WHEN Year = '2012' THEN Total ELSE 0 END)'2012 Total' ,SUM(CASE WHEN Year = '2012' THEN Total ELSE 0 END)*1.0/SUM(SUM(CASE WHEN Year = '2012' THEN Total ELSE 0 END)) OVER () '2012 Percent of Total' ,SUM(Total) 'Total' ,SUM(Total)*1.0/SUM(SUM(Total)) OVER () 'Percent of Total' FROM Table GROUP BY Type ```
SUMIFS (Sum if with multiple conditions) with SQL Server
[ "", "sql", "sql-server", "" ]
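The `SUM(CASE ...)` technique from the second answer is portable, so here is a reduced demo with Python's built-in `sqlite3`. The `OVER ()` percent-of-total parts are left out since window-function support varies by engine version; the data is the 1/2/3 sample from the accepted answer's result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Type TEXT, Year INTEGER, Total REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("A", 2010, 1), ("B", 2010, 2), ("C", 2010, 3),
    ("A", 2011, 1), ("B", 2011, 2), ("C", 2011, 3),
])

# SUMIF(year = X) becomes SUM(CASE WHEN Year = X THEN Total ELSE 0 END)
rows = conn.execute("""
    SELECT Type,
           SUM(CASE WHEN Year = 2010 THEN Total ELSE 0 END) AS Total2010,
           SUM(CASE WHEN Year = 2011 THEN Total ELSE 0 END) AS Total2011,
           SUM(Total) AS Total
    FROM t
    GROUP BY Type
    ORDER BY Type
""").fetchall()
for row in rows:
    print(row)
# ('A', 1.0, 1.0, 2.0)
# ('B', 2.0, 2.0, 4.0)
# ('C', 3.0, 3.0, 6.0)
```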
I'm trying to print out all elements in a list, in Python, after I've appended one list to another. The problem is that it only prints out every element when I use PRINT instead of RETURN. If I use print it prints out the whole list in a column with "None" at the end of the list, but return will print out just the first item. Why? This is the code: ``` def union(a,b): a.append(b) for item in a: return item a=[1,2,3,4] b=[4,5,6] print union(a,b) ``` It returns: > 1 If I use ``` def union(a,b): a.append(b) for item in a: print item a=[1,2,3,4] b=[4,5,6] print union(a,b) ``` instead, I get: > 1 > > 2 > > 3 > > 4 > > [4, 5, 6] > > None (and not even in a single line). Please note that I've found more results with this issue ([like this one](https://stackoverflow.com/questions/15465412/why-isnt-this-code-returning-these-lists-does-this-have-to-do-with-using-retu)), but they are not quite the same, and they are quite complicated for me, I'm just beginning to learn to program, thanks!
When you use a `return` statement, the function *ends*. You are returning just the first value; the loop does not continue, nor can you return elements one after another this way. `print` just writes that value to your terminal and does not end the function. The loop continues. Build a list, then return *that*: ``` def union(a,b): a.append(b) result = [] for item in a: result.append(item) return result ``` or just return a concatenation: ``` def union(a, b): return a + b ```
`return` means the end of a function. It will only return the first element of the list. For your `print` version, `a.append(b)` makes `a = [1,2,3,4,[4,5,6]]`, so you will see those elements before `None`. And the function returns nothing, so `print union(a, b)` will print a None. I think you may want: ``` def union(a, b): a.extend(b) return a ``` Or ``` def union(a, b): return a + b ```
Python: Why "return" won't print out all list elements in a simple for loop and "print" will do it?
[ "", "python", "list", "return", "" ]
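Putting the two answers together, the smallest fix is to return the whole (concatenated) list once and let the caller do the printing. A Python 3 sketch:

```python
def union(a, b):
    # a + b concatenates; a.append(b) would nest b inside a as one element
    return a + b

a = [1, 2, 3, 4]
b = [4, 5, 6]
result = union(a, b)
for item in result:
    print(item)  # one element per line: 1 2 3 4 4 5 6
```

The `for` loop with `return item` in the question stops at the first iteration because `return` ends the function immediately; returning the assembled list sidesteps that.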
I have been googling for almost an hour and am just stuck on a script, stupidadder.py, that adds 2 to the command arg. e.g. python stupidadder.py 4 prints 6 python stupidadder.py 12 prints 14 I have googled so far: ``` import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('x', metavar='x', type=int, nargs='+', help='input number') ... args = parser.parse_args() print args x = args['x'] # fails here, not sure what to put print x + 2 ``` I can't find a straightforward answer to this anywhere. The documentation is so confusing. :( Can someone help? Please and thank you. :)
I'm not entirely sure what your goal is. But if that's literally all you have to do, you don't have to get very complicated: ``` import sys print int(sys.argv[1]) + 2 ``` Here is the same but with some nicer error checking: ``` import sys if len(sys.argv) < 2: print "Usage: %s <integer>" % sys.argv[0] sys.exit(1) try: x = int(sys.argv[1]) except ValueError: print "Usage: %s <integer>" % sys.argv[0] sys.exit(1) print x + 2 ``` Sample usage: ``` C:\Users\user>python blah.py Usage: blah.py <integer> C:\Users\user>python blah.py ffx Usage: blah.py <integer> C:\Users\user>python blah.py 17 19 ```
Assuming that you are learning how to use the argparse module, you are very close. The parameter is an attribute of the returned args object and is referenced as `x = args.x`. ``` import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('x', metavar='x', type=int, nargs='+', help='input number') ... args = parser.parse_args() print args #x = args['x'] # fails here, not sure what to put x = args.x print x + 2 ```
Python Command Args
[ "", "python", "parsing", "arguments", "args", "" ]
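One detail worth calling out in the question's argparse attempt: `nargs='+'` makes `args.x` a *list* of ints, so even `args.x + 2` would fail. Dropping `nargs` yields a single int. A sketch, passing an explicit list to `parse_args` so it is testable without a real command line:

```python
import argparse

parser = argparse.ArgumentParser(description='Add 2 to an integer.')
parser.add_argument('x', type=int, help='input number')  # no nargs: a single int

args = parser.parse_args(['4'])  # stands in for: python stupidadder.py 4
print(args.x + 2)  # 6
```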
``` import os for dirpaths, dirnames, filenames in os.walk(mydir): print dirpaths ``` gives me *all* (sub)directories in `mydir`. How do I get only the directories at the very bottom of the tree?
This will print out only those directories that have no subdirectories within them ``` for dirpath, dirnames, filenames in os.walk(mydir): if not dirnames: print dirpath, "has 0 subdirectories and", len(filenames), "files" ```
Like this? ``` for dirpaths, dirnames, filenames in os.walk(mydir): if not dirnames: print dirpaths ```
How to list only down-most directories in Python?
[ "", "python", "" ]
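The "empty `dirnames`" test from both answers, wrapped in a self-contained Python 3 demo that builds a throwaway tree first (directory names are arbitrary):

```python
import os
import tempfile

# Build root/a/b and root/c, so the bottom-most directories are a/b and c.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
os.makedirs(os.path.join(root, "c"))

leaves = [dirpath
          for dirpath, dirnames, filenames in os.walk(root)
          if not dirnames]  # no subdirectories -> bottom of the tree

rel = sorted(os.path.relpath(p, root) for p in leaves)
print(rel)  # ['a/b', 'c'] on POSIX (the separator differs on Windows)
```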
``` print 'a' in 'ab' ``` prints `True`, while ``` print 'a' in 'ab' == True ``` prints `False`. Any guess why?
[Operator chaining](http://docs.python.org/2/reference/expressions.html#not-in) at work. ``` 'a' in 'ab' == True ``` is equivalent to ``` 'a' in 'ab' and 'ab' == True ``` Take a look: ``` >>> 'a' in 'ab' == True False >>> ('a' in 'ab') == True True >>> 'a' in ('ab' == True) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: argument of type 'bool' is not iterable >>> 'a' in 'ab' and 'ab' == True False ``` From the docs linked above: > Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent > to x < y and y <= z, except that y is evaluated only once (but in both > cases z is not evaluated at all when x < y is found to be false). > > Formally, if a, b, c, ..., y, z are expressions and op1, op2, ..., opN > are comparison operators, then a op1 b op2 c ... y opN z is equivalent > to a op1 b and b op2 c and ... y opN z, except that each expression is > evaluated at most once. The real advantage of operator chaining is that each expression is evaluated once, at most. So with `a < b < c`, `b` is only evaluated once and then compared first to `a` and secondly (if necessary) to `c`. As a more concrete example, let's consider the expression `0 < x < 5`. Semantically, we mean to say that *x is in the open interval (0, 5)*. Python captures this by evaluating the logically equivalent expression `0 < x and x < 5`. Hope that clarifies the purpose of operator chaining somewhat.
``` >>> 'a' in 'ab' == True False >>> ('a' in 'ab') == True True ``` --- Let's take a look at what the first variant *actually* means: ``` >>> import dis >>> def f(): ... 'a' in 'ab' == True ... >>> dis.dis(f) 2 0 LOAD_CONST 1 ('a') 3 LOAD_CONST 2 ('ab') 6 DUP_TOP 7 ROT_THREE 8 COMPARE_OP 6 (in) 11 JUMP_IF_FALSE_OR_POP 23 14 LOAD_GLOBAL 0 (True) 17 COMPARE_OP 2 (==) 20 JUMP_FORWARD 2 (to 25) >> 23 ROT_TWO 24 POP_TOP >> 25 POP_TOP 26 LOAD_CONST 0 (None) 29 RETURN_VALUE ``` If you follow the bytecode, you will see that this equates to `'a' in 'ab' and 'ab' == True`. * After the first two `LOAD_CONST`s, the stack consists of: + `'ab'` + `'a'` * After `DUP_TOP`, the stack consists of: + `'ab'` + `'ab'` + `'a'` * After `ROT_THREE`, the stack consists of: + `'ab'` + `'a'` + `'ab'` * At this point, `in` (`COMPARE_OP`) operates on the top two elements, as in `'a' in 'ab'`. The result of this (`True`) is stored on the stack, while `'ab'` and `'a'` are popped off. Now the stack consists of: + `True` + `'ab'` * `JUMP_IF_FALSE_OR_POP` is the short-circuiting of `and`: if the value on top of the stack at this point is `False` we know that the entire expression will be `False`, so we skip over the second upcoming comparison. In this case, however, the top value is `True`, so we pop this value and continue on to the next comparison. Now the stack consists of: + `'ab'` * `LOAD_GLOBAL` loads `True`, and `COMPARE_OP` (`==`) compares this with the remaining stack item, `'ab'`, as in `'ab' == True`. The result of this is stored on the stack, and this is the result of the entire statement. (you can find a full list of bytecode instructions [here](http://docs.python.org/release/1.5/lib/node57.html)) Putting it all together, we have `'a' in 'ab' and 'ab' == True`. Now, `'ab' == True` is `False`: ``` >>> 'ab' == True False ``` meaning the entire expression will be `False`.
Python statement giving unexpected answer
[ "", "python", "boolean", "" ]
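The chaining rule from both answers, condensed into a few assertions (Python 3 syntax):

```python
# 'a' in 'ab' == True chains as: ('a' in 'ab') and ('ab' == True)
chained = ('a' in 'ab' == True)
grouped = (('a' in 'ab') == True)
expanded = ('a' in 'ab' and 'ab' == True)

print(chained, grouped, expanded)  # False True False
assert chained == expanded  # the chain and its expansion agree

# The useful side of chaining: range checks evaluate the middle term once.
x = 3
assert 0 < x < 5  # same as (0 < x) and (x < 5)
```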
I'm trying to create a view using a double join between tables. I'm working on some travel software, managing holiday bookings. The different items a person pays for can be in different currencies. I've a table of bookings, and a table of currencies. There are many different items a person can pay for, all stored in different tables. I've created a view showing the total owed per payment Item type. e.g. owed for Transfers: ``` BookingID CurrencyID TotalTransfersPrice 1 1 340.00 2 1 120.00 2 2 100.00 ``` e.g. owed for Extras: ``` BookingID CurrencyID TotalExtrasPrice 1 1 200.00 1 2 440.00 2 1 310.00 ``` All is good so far. What I'd like to do is to create a master view that brings this all together: ``` BookingID CurrencyID TotalExtrasPrice TotalTransfersPrice 1 1 200.00 340.00 1 2 440.00 NULL 2 1 310.00 120.00 2 2 NULL 100.00 ``` I can't figure out how to make the above. I've been experimenting with double joins, as I'm guessing I need to do joins both for the BookingID and the CurrencyID? Any ideas? Thanks! Phil.
For `SQL Server` This query allows each {`BookingId`, `CurrencyId`} pair to have more than one row in the `Transfer` and `Extras` tables. Since you stated > I've created a view showing the total owed per payment Item type. I'm accumulating them by BookingID and CurrencyID ``` SELECT ISNULL(transfers.BookingId, extras.BookingId) AS BookingId, ISNULL(transfers.CurrencyId, extras.CurrencyId) AS CurrencyId, SUM(TotalExtrasPrice) AS TotalExtrasPrice, SUM(transfers.TotalTransfersPrice) AS TotalTransfersPrice FROM transfers FULL OUTER JOIN extras ON transfers.BookingId = extras.BookingId AND transfers.CurrencyId = extras.CurrencyId GROUP BY ISNULL(transfers.BookingId, extras.BookingId), ISNULL(transfers.CurrencyId, extras.CurrencyId) ```
You should try to use the `full outer join` in joining the two tables: **Transfers** & **Extras**. Assuming you are using MySQL platform, the sql query can be: ``` SELECT t.BookingId,t.CurrencyId,e.TotalExtrasPrice,t.TotalTransfersPrice FROM transfers as t FULL OUTER JOIN extras as e ON t.BookingId = e.BookingId AND t.CurrencyId = e.CurrencyId; ```
SQL view using double joins between tables
[ "", "sql", "" ]
How do I check in Python if an item of a list is repeated in another list? I suppose that I should use a FOR loop to go and check item by item, but I'm stuck on something like this (which I know is not correct): ``` def check(a, b): for item in b: for item1 in a: if b[item] in a[item1]: b.remove(b[item1]) ``` I want to remove repeated elements in the second list in comparison with the first list. **Edit: I do assume that list a has items that are repeated in list b. Those items can be of any type.** Desired output: a=[a,b,c] b=[c,d,e] I want to append both lists and print: a b c d e
Assuming that `a` and `b` do not contain duplicate items that need to be preserved *and* the items are all hashable, you can use Python's built-in [`set`](http://docs.python.org/2/library/stdtypes.html#set): ``` c = list(set(b) - set(a)) # c is now a list of everything in b that is not in a ``` This would work for: ``` a, b = range(7), range(5, 11) ``` but it would *not* work for: ``` a = [1, 2, 1, 1, 3, 4, 2] b = [1, 3, 4] # After the set operations c would be [2] # rather than the possibly desired [2, 2] ``` In the case when duplicates are desired you can do the following: ``` set_b = set(b) c = [x for x in a if x not in set_b] ``` Using a `set` for `b` will make the lookups `O(1)` rather than `O(N)` (which will not matter for small lists).
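Both variants, runnable with the answer's own example values (the membership filter uses the precomputed `set_b`):

```python
a = [1, 2, 1, 1, 3, 4, 2]
b = [1, 3, 4]

# Set difference: order and duplicate counts are lost.
dedup = list(set(a) - set(b))

# Membership filter: duplicates in `a` survive; the precomputed
# set makes each lookup O(1).
set_b = set(b)
kept = [x for x in a if x not in set_b]

print(dedup, kept)
```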
You can use Python's [set](http://docs.python.org/2/library/stdtypes.html#set) operations without the need for loops: ``` >>> a = [1,2] >>> b = [2] >>> set(a) - set(b) set([1]) >>> ```
Compare with each other, items inside two lists, in a for loop?
[ "", "python", "list", "for-loop", "" ]
Let's say I have a list of lists of strings (stringList): ``` [['its', 'all', 'ball', 'bearings', 'these', 'days'], ['its', 'all', 'in', 'a', 'days', 'work']] ``` and I also have a set of strings (stringSet) that are the unique words from stringList: ``` {'its', 'all', 'ball', 'bearings', 'these', 'days', 'in', 'a', 'work'} ``` Using a comprehension, if possible, how can I get a dictionary that maps each word in stringSet to a set of the indexes of stringList that contain that word? In the above example, the return value would be: ``` {'its': {0,1}, 'all':{0,1}, 'ball':{0}, 'bearings':{0}, 'these':{0}, 'days':{0,1}, 'in':{1}, 'a':{1}, 'work':{1}} ``` My hangup is how to accumulate the indexes into the dictionary. I'm sure it's relatively simple to those further along than I am. Thanks in advance...
This seems to work: ``` str_list = [ ['its', 'all', 'ball', 'bearings', 'these', 'days'], ['its', 'all', 'in', 'a', 'days', 'work'] ] str_set = set(word for sublist in str_list for word in sublist) str_dict = {word: set(lindex for lindex, sublist in enumerate(str_list) if word in sublist) for word in str_set} print (str_dict) ```
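A non-comprehension alternative that visits each word only once — a sketch using `collections.defaultdict` with the same sample data:

```python
from collections import defaultdict

str_list = [['its', 'all', 'ball', 'bearings', 'these', 'days'],
            ['its', 'all', 'in', 'a', 'days', 'work']]

index = defaultdict(set)
for i, sublist in enumerate(str_list):
    for word in sublist:
        index[word].add(i)        # accumulate indexes per word

print(dict(index))
```

This scales better than the comprehension when the lists are large, since it avoids re-scanning every sublist per word.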
``` >>> alist = [['its', 'all', 'ball', 'bearings', 'these', 'days'], ... ['its', 'all', 'in', 'a', 'days', 'work']] >>> aset = {'its', 'all', 'ball', 'bearings', 'these', 'days', 'in', 'a', 'work'} >>> {x: {alist.index(y) for y in alist if x in y} for x in aset} {'a': set([1]), 'all': set([0, 1]), 'ball': set([0]), 'these': set([0]), 'bearings': set([0]), 'work': set([1]), 'days': set([0, 1]), 'in': set([1]), 'its': set([0, 1])} ``` Also you can use `enumerate` and use list to be value will make the result clearer: ``` >>> {x: [i for i, y in enumerate(alist) if x in y] for x in aset} {'a': [1], 'all': [0, 1], 'ball': [0], 'these': [0], 'bearings': [0], 'work': [1], 'days': [0, 1], 'in': [1], 'its': [0, 1]} ```
How do I add values to a set in a comprehension?
[ "", "python", "list-comprehension", "dictionary-comprehension", "set-comprehension", "" ]
How can I do something like this: ``` index=[[test1,test2,test3],[test4,test5,test6],[test7,test8,test9]] if test5 is in index: print True ```
Using [any](http://docs.python.org/2/library/functions.html#any) + [generator expression](http://docs.python.org/2/howto/functional.html?highlight=generator%20expression#generator-expressions-and-list-comprehensions): ``` if any(test5 in subindex for subindex in index): print True ```
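A self-contained check of the `any()` + generator pattern — numbers stand in for the `test1`…`test9` placeholders:

```python
index = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # numbers in place of test1..test9

def contains(needle, list_of_lists):
    # any() short-circuits at the first sublist holding the needle.
    return any(needle in sub for sub in list_of_lists)

print(contains(5, index), contains(10, index))   # True False
```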
Loop over your list of lists, and check for existence in each inner list: ``` for list in list_of_lists: if needle in list : print 'Found' ```
How to search for an item in a list of lists?
[ "", "python", "list", "" ]
I have a dataframe column, `data['time taken']`: ``` 02:08:00 02:05:00 02:55:00 03:42:00 01:12:00 01:46:00 03:22:00 03:36:00 ``` How do I get the output in the form of minutes like below? ``` 128 125 175 222 72 106 202 216 ```
You could try to convert it to `DatetimeIndex` ``` In [58]: time = pd.DatetimeIndex(df['time taken']) In [59]: time.hour * 60 + time.minute Out[59]: array([128, 125, 175, 222, 72, 106, 202, 216], dtype=int32) ```
Assuming this is a string column you can use the [`str.split`](http://pandas.pydata.org/pandas-docs/dev/basics.html#vectorized-string-methods) method: ``` In [11]: df['time taken'].str.split(':') Out[11]: 0 [02, 08, 00] 1 [02, 05, 00] 2 [02, 55, 00] 3 [03, 42, 00] 4 [01, 12, 00] 5 [01, 46, 00] 6 [03, 22, 00] 7 [03, 36, 00] Name: time taken, dtype: object ``` And then use [`apply`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html): ``` In [12]: df['time taken'].str.split(':').apply(lambda x: int(x[0]) * 60 + int(x[1])) Out[12]: 0 128 1 125 2 175 3 222 4 72 5 106 6 202 7 216 Name: time taken, dtype: int64 ```
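For completeness, the same arithmetic without pandas — a plain-Python sketch of the split-and-multiply idea (seconds are discarded, matching the desired output):

```python
def to_minutes(hms):
    """Convert an 'HH:MM:SS' string to whole minutes (seconds dropped)."""
    h, m, _s = hms.split(':')
    return int(h) * 60 + int(m)

times = ['02:08:00', '02:05:00', '02:55:00', '03:42:00',
         '01:12:00', '01:46:00', '03:22:00', '03:36:00']
print([to_minutes(t) for t in times])
```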
Convert hh:mm:ss to minutes using python pandas
[ "", "python", "pandas", "" ]
Let's imagine I have a table named "users". The code for creation: ``` CREATE TABLE IF NOT EXISTS users (id_user INTEGER PRIMARY KEY AUTOINCREMENT, username VARCHAR(32) COLLATE NOCASE, passwd_hash VARCHAR(255) NOT NULL DEFAULT '', passwd_salt VARCHAR(255) NOT NULL DEFAULT '', email_addr VARCHAR(255) NOT NULL DEFAULT ''); CREATE INDEX IF NOT EXISTS idx_id_user ON users (id_user ASC); CREATE INDEX IF NOT EXISTS idx_username ON users (username ASC); ``` When a player joins the server, it checks if the player's username is registered: ``` SELECT id_user FROM users WHERE username = '%s' LIMIT 1 ``` If the username is registered, the player will be asked to login. On login attempt I have this: ``` SELECT passwd_hash, passwd_salt FROM users WHERE id_user = %d ``` Then it obviously checks if both passwords match. So my question is, should `passwd_hash` and `passwd_salt` be indexed?
I would create a three-column index: (id\_user, passwd\_hash, passwd\_salt). This can be used as a *covering index* for greater efficiency. It seems that this is only a minor improvement in SQLite, but the concept is used to greater benefit in other RDBMS implementations that can cache indexes in RAM. <http://www.sqlite.org/queryplanner.html> says: > **1.7 Covering Indices** > The "price of California oranges" query was made more efficient through the use of a two-column index. But SQLite can do even better with a three-column index that also includes the "price" column: > > This new index contains all the columns of the original FruitsForSale table that are used by the query - both the search terms and the output. We call this a "covering index". Because all of the information needed is in the covering index, SQLite never needs to consult the original table in order to find the price. > > Hence, by adding extra "output" columns onto the end of an index, one can avoid having to reference the original table and thereby cut the number of binary searches for a query in half. This is a constant-factor improvement in performance (roughly a doubling of the speed). But on the other hand, it is also just a refinement; A two-fold performance increase is not nearly as dramatic as the one-million-fold increase seen when the table was first indexed. And for most queries, the difference between 1 microsecond and 2 microseconds is unlikely to be noticed. You may be interested in reading my presentation, [How to Design Indexes, Really](http://www.slideshare.net/billkarwin/how-to-design-indexes-really). I did that presentation for MySQL users, but the concepts are relevant for SQLite and most other RDBMS's too.
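SQLite can confirm the covering-index effect itself via `EXPLAIN QUERY PLAN`. A sketch using Python's built-in `sqlite3` — note this demo drops the `INTEGER PRIMARY KEY` from the question's schema so the optimizer cannot take the rowid shortcut instead of the index (exact plan wording may differ between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(':memory:')
# id_user is a plain column here (no PRIMARY KEY), so the planner
# must consider the three-column index rather than a rowid lookup.
con.execute("CREATE TABLE users (id_user INTEGER, "
            "passwd_hash TEXT, passwd_salt TEXT)")
con.execute("CREATE INDEX idx_login ON users "
            "(id_user, passwd_hash, passwd_salt)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT passwd_hash, passwd_salt FROM users WHERE id_user = 1"
).fetchall()
print(plan)   # detail column should mention a COVERING INDEX
```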
When you execute the query: ``` SELECT passwd_hash, passwd_salt FROM users WHERE id_user = %d; ``` The SQL engine will use the index to find the right record. It then goes into the table itself to retrieve the data needed for the `select` clause. If you instead build the index as: ``` CREATE INDEX IF NOT EXISTS idx_id_user ON users (id_user ASC, passwd_hash, passwd_salt); ``` Then the SQL engine can satisfy the query just by using the index. This could provide a performance boost. The gain would be quite minor. This is a general principle, but there are exceptions. Some databases support the notion of a clustered index on a data column. In such an index, the data in the table has to be ordered by the key, and the table itself operates as the index. However, this is not a SQLite index option.
Which columns should be indexed?
[ "", "sql", "database", "sqlite", "indexing", "" ]
I wonder how I can build a SQL query, as portable as possible, to search all columns of a table for a specific phrase, like: **Table** ``` ID | Name | text | Date | Author | Status 1 | Augusto Weiand | Test text | 2010-01-01 | Deividi | 1 ``` **Query** ``` SELECT * FROM table WHERE columns LIKE '%augusto%2010%text%' ``` I did not put enough detail, excuse me; I would like to make a dynamic SQL query where I do not need to specify the columns with 'AND' or 'OR', as is possible in Postgres: ``` Select * From table Where table::text ~~ '%augusto%2010%text%' ```
Here is how you would concatenate the values in dynamic SQL: ``` set @Pattern = '%augusto%'; select @q := concat('select * from Table1 ', 'where concat(', group_concat(column_name), ', "") like "', @Pattern, '"' ) from information_schema.columns c where table_name = 'Table1'; prepare st from @q; execute st; deallocate prepare st; ``` Of course, dynamic SQL is not particularly portable. The idea would work in most databases. The code would look different. Tested and working [here](http://sqlfiddle.com/#!2/c9cf3/30). And finally, you can do this with variable substitution (which is the better approach): ``` select @q := concat('select * from Table1 ', 'where concat(', group_concat(column_name), ', "") like ?' ) from information_schema.columns c where table_name = 'Table1'; set @p = '%augusto%'; prepare st from @q; execute st using @p; deallocate prepare st; ``` Also tested (;-).
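The concatenate-and-search idea ports to other engines too; here is a runnable sketch against SQLite (which uses `||` for concatenation, and whose `LIKE` is case-insensitive for ASCII by default) with made-up rows — not MySQL syntax, just the same principle:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (name TEXT, body TEXT, yr TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [('Augusto Weiand', 'Test text', '2010-01-01'),
                 ('Deividi', 'Other text', '2011-05-02')])

# One LIKE per pattern fragment, each run against the concatenated row text.
row_text = "(name || ' ' || body || ' ' || yr)"
rows = con.execute(
    "SELECT name FROM t "
    "WHERE {0} LIKE '%augusto%' AND {0} LIKE '%2010%'".format(row_text)
).fetchall()
print(rows)   # [('Augusto Weiand',)]
```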
It's doable, although I ***strongly*** suggest you look into full-text search for efficiency; To avoid looking for all patterns in all fields one by one, you can just concat and search in that; ``` SELECT * FROM (SELECT id,CONCAT(name,'|',text,'|',date,'|',author,'|',status) txt FROM Table1) a WHERE txt LIKE '%augusto%' AND txt LIKE '%2010%' AND txt LIKE '%text%'; ``` Note that *no indexing will help you here*, since you're searching in a calculated column. On the other hand, since you're searching with a leading wildcard `%searchterm`, you won't get much help from indexes even if searching field by field :) [An SQLfiddle to test with](http://sqlfiddle.com/#!2/f8929/5).
Mysql query search a string in all columns of a table
[ "", "mysql", "sql", "" ]
This seems to be a gotcha for me; I couldn't figure this out: ``` >>> from collections import Counter >>> tree = [Counter()]*3 >>> tree [Counter(), Counter(), Counter()] >>> tree[0][1]+=1 >>> tree [Counter({1: 1}), Counter({1: 1}), Counter({1: 1})] ``` Why does updating one Counter update everything?
Using `[x] * 3`, the list references same item(`x`) three times. ``` >>> from collections import Counter >>> tree = [Counter()] * 3 >>> tree[0] is tree[1] True >>> tree[0] is tree[2] True >>> another_counter = Counter() >>> tree[0] is another_counter False >>> for counter in tree: print id(counter) ... 40383192 40383192 40383192 ``` Use list comprehension as Waleed Khan commented. ``` >>> tree = [Counter() for _ in range(3)] >>> tree[0] is tree[1] False >>> tree[0] is tree[2] False >>> for counter in tree: print id(counter) ... 40383800 40384104 40384408 ```
`[Counter()]*3` produces a list containing the *same* `Counter` instance 3 times. You can use ``` [Counter() for _ in xrange(3)] ``` to create a list of 3 independent `Counter`s. ``` >>> from collections import Counter >>> tree = [Counter() for _ in xrange(3)] >>> tree[0][1] += 1 >>> tree [Counter({1: 1}), Counter(), Counter()] ``` In general, you should be cautious when multiplying lists whose elements are mutable.
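Both points reduce to an identity check you can run directly:

```python
from collections import Counter

shared = [Counter()] * 3                     # one object, three references
shared[0][1] += 1
print(shared)                                # every "copy" appears changed

independent = [Counter() for _ in range(3)]  # three distinct objects
independent[0][1] += 1
print(independent)                           # only the first changed
```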
Python list simultaneous update
[ "", "python", "python-2.7", "" ]
I have a set of 2D vectors presented in a `n*2` matrix form. I wish to get the 1st principal component, i.e. the vector that indicates the direction with the largest variance. I have found [a rather detailed documentation](https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml) on this from Rice University. Based on this, I have imported the data and done the following: ``` import numpy as np dataMatrix = np.array(aListOfLists) # Convert a list-of-lists into a numpy array. aListOfLists is the data points in a regular list-of-lists type matrix. myPCA = PCA(dataMatrix) # make a new PCA object from a numpy array object ``` **Then how may I get the 3D vector that is the 1st Principal Component?**
PCA gives only 2d vecs from 2d data. Look at the picture in Wikipedia [PCA](http://en.wikipedia.org/wiki/Principal_component_analysis): starting with a point cloud (dataMatrix) like that, and using `matplotlib.mlab.PCA`, `myPCA.Wt[0]` is the first PC, the long one in the picture.
Similar question here: [Principal component analysis in Python](https://stackoverflow.com/questions/1730600/principal-component-analysis-in-python) Or you could have a look at scikit-learn: <https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html>
How to get the 1st Principal Component by PCA using Python?
[ "", "python", "pca", "" ]
Good day everyone. I would like to ask a question regarding my SQL statement. I am using SQL Server 2008 and have a table called Workflow Transaction. In this table, I have 12 records. The picture below is the content of the table. ![enter image description here](https://i.stack.imgur.com/P5qEf.png) I have this SQL statement: ``` UPDATE Workflow_Txn SET Status = 1 WHERE [RequestId] IN ( SELECT [RequestId] FROM Workflow_Txn WHERE [OrderNumber] < (SELECT MAX(OrderNumber) FROM Workflow_Txn WHERE RequestId = 3) AND RequestId = 3 ) ``` My objective is to update the rows of a request ID whose OrderNumber is less than the maximum, which will be the output from the SELECT statement inside the WHERE clause. Now I expect that only those records shall be updated (in the code, it's RequestId # 3). What actually happened was that instead of only four records being updated, five (5) were! Is there a problem with my existing SQL statement?
Your problem is that you're doing an update of ALL records with RequestId = 3. Take into account that your subquery result is `3` so you end up updating all related records. Your query is equivalent to do ``` UPDATE Workflow_Txn SET Status = 1 WHERE RequestId = 3 ``` Not sure if you have any reason to make your query more complex than it needs to be. Seems to me that something simpler would do the trick ``` UPDATE Workflow_Txn SET Status = 1 WHERE [OrderNumber] < (SELECT MAX(OrderNumber) FROM Workflow_Txn WHERE RequestId = 3) AND RequestId = 3 ```
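Before touching SQL, the intended behaviour — set `Status` on every row of the request except the one holding the maximum `OrderNumber` — can be sanity-checked in plain Python (rows are invented to mirror the table's shape):

```python
# Invented rows mirroring the table's shape.
rows = [
    {'RequestId': 3, 'OrderNumber': 1, 'Status': 0},
    {'RequestId': 3, 'OrderNumber': 2, 'Status': 0},
    {'RequestId': 3, 'OrderNumber': 5, 'Status': 0},  # the max: must stay 0
    {'RequestId': 7, 'OrderNumber': 1, 'Status': 0},  # other request: untouched
]

request_id = 3
max_order = max(r['OrderNumber'] for r in rows
                if r['RequestId'] == request_id)
for r in rows:
    if r['RequestId'] == request_id and r['OrderNumber'] < max_order:
        r['Status'] = 1

print([r['Status'] for r in rows])   # [1, 1, 0, 0]
```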
The problem with your query is that the subquery goes into great detail to find the records with order number less than the maximum. And then it chooses everything with the same request -- which would include the maximum order number. I prefer to fix this using a CTE as follows: ``` with toupdate as ( select t.*, MAX(OrderNumber) OVER () as MaxON from Workflow_txn t where RequestId = 3 ) UPDATE toupdate SET Status = 1 where OrderNumber < MaxON; ``` I like this structure, because I can run the CTE separately to see what records are likely to be updated. To fix your query, you would change the request to using `OrderNumber` and repeat the `RequestId = 3`: ``` UPDATE Workflow_Txn SET Status = 1 WHERE [RequestId] = 3 and OrderNumber in ( SELECT [OrderNumber] FROM Workflow_Txn WHERE [OrderNumber] < (SELECT MAX(OrderNumber) FROM Workflow_Txn WHERE RequestId = 3) AND RequestId = 3 ) ```
UPDATE using subqueries - Updates more than the needed records
[ "", "sql", "sql-server", "" ]
I've read similar questions to this on Stack Overflow but they have not helped. Here is my code: ``` import pygame from pygame.locals import * pygame.init() screen = pygame.display.set_mode((640, 480)) pygame.display.set_caption('Hello World') pygame.mouse.set_visible(1) done = False clock = pygame.time.Clock() while not done: clock.tick(60) keyState = pygame.key.get_pressed() if keyState[pygame.K_ESCAPE]: print('\nGame Shuting Down!') done = True ``` Pressing `escape` does not exit the game or print a message. Is this a bug? If I print the value for keyState[pygame.K\_ESCAPE], it is always zero.
The problem is that you don't process pygame's event queue. You should simply call `pygame.event.pump()` at the end of your loop and then your code works fine: ``` ... while not done: clock.tick(60) keyState = pygame.key.get_pressed() if keyState[pygame.K_ESCAPE]: print('\nGame Shuting Down!') done = True pygame.event.pump() # process event queue ``` From the [docs](http://www.pygame.org/docs/ref/event.html#pygame.event.pump) (emphasis mine): > **pygame.event.pump()** > > *internally process pygame event handlers* > > `pump() -> None` > > ***For each frame of your game, you will need to make some sort of call to the event queue. This ensures your program can internally interact with the rest of the operating system.*** If you are not using other event functions in your game, you should call pygame.event.pump() to allow pygame to handle internal actions. > > This function is not necessary if your program is consistently processing events on the queue through the other pygame.event functions. > > There are important things that must be dealt with internally in the event queue. The main window may need to be repainted or respond to the system. ***If you fail to make a call to the event queue for too long, the system may decide your program has locked up***. Note that you don't have to do this if you just call `pygame.event.get()` anywhere in your main-loop; if you do not, you should probably call `pygame.event.clear()` so the event queue will not fill up.
May I suggest using an event que instead? It's probably a better idea: ``` while True: #game loop for event in pygame.event.get(): #loop through all the current events, such as key presses. if event.type == QUIT: die() elif event.type == KEYDOWN: if event.key == K_ESCAPE: #it's better to have these as multiple statments in case you want to track more than one type of key press in the future. pauseGame() ```
pygame.key.get_pressed() is not working
[ "", "python", "pygame", "" ]
I've done some searches, but I'm actually not sure of the way to word what I want to take place, so I started a question. I'm sure it's been covered before, so my apologies. The code below doesn't work, but hopefully it illustrates what I'm trying to do. ``` sieve[i*2::i] *= ((i-1) / i): ``` I want to take a list and go through each item in the list that is a multiple of "i" and change its value by multiplying by the same amount. So for example if I had a list ``` [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ``` and I want to start at 2 and change every 2nd item in the list, by itself \* (2 - 1 ) / 2. So after it would look like ``` [1, 2, 3, 2, 5, 3, 7, 4, 9, 5] ``` How do I do that Pythonically? Thank you very much! EDIT to add: sorry, I see where my poor wording has caused some confusion (I've changed it in the above). I don't want to change every multiple of 2, I want to change every second item in the list, even if it's not a multiple of 2. So I can't use x % 2 == 0. Sorry!
[NumPy](http://www.numpy.org/) would actually let you do that! ``` >>> import numpy as np >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> i = 2 >>> a[i*2::2] *= (i-1.0)/i >>> a array([0, 1, 2, 3, 2, 5, 3, 7, 4, 9]) ``` If you can't use NumPy or prefer not to, a loop would probably be clearest: ``` >>> a = range(10) >>> for j in range(i*2, len(a), i): # Not *= since we want ints ... a[j] = a[j] * (i - 1) / i ```
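The loop variant, written out end to end with integer division so the result stays integral (mirrors the NumPy example's `range(10)` data):

```python
a = list(range(10))   # [0, 1, ..., 9], as in the NumPy example
i = 2

# Touch every i-th element starting at index i*2; integer division
# keeps the results whole numbers.
for j in range(i * 2, len(a), i):
    a[j] = a[j] * (i - 1) // i

print(a)   # [0, 1, 2, 3, 2, 5, 3, 7, 4, 9]
```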
You can do: ``` >>> def sieve(L, i): ... temp = L[:i] ... for x, y in zip(L[i::2], L[i+1::2]): ... temp.append(x) ... temp.append(y/2) ... return temp ... >>> sieve([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 2) [1, 2, 3, 2, 5, 3, 7, 4, 9, 5] ``` Note that `itself * (2 - 1 ) / 2` is equivalent to `itself * 1 / 2` which is equivalent to `itself / 2`.
changing values in a list while looping through it?
[ "", "python", "loops", "" ]
I need a MySQL query like this: SELECT \* FROM table1 WHERE the column value is 1, 2, or 3 and unique under a foreign key (under one foreign key, a column value of 1 should have only one entry). Find the count of duplicated values under the foreign key.
``` SELECT colValue, count(colValue) as cnt FROM table1 where colValue in (1,2,3) group by colValue having cnt>1 ```
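As I read the (terse) question, the goal is counting values that occur more than once; the pattern above can be exercised against SQLite from Python with invented sample rows:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE table1 (colValue INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?)",
                [(1,), (1,), (2,), (3,), (3,), (3,), (4,)])

# GROUP BY collapses equal values; HAVING keeps only the duplicated ones.
rows = con.execute(
    "SELECT colValue, COUNT(*) AS cnt FROM table1 "
    "WHERE colValue IN (1, 2, 3) "
    "GROUP BY colValue HAVING COUNT(*) > 1 "
    "ORDER BY colValue"
).fetchall()
print(rows)   # [(1, 2), (3, 3)]
```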
Try this: ``` SELECT <Table1>.id, COUNT(<Table1>.Id) FROM <Table1> INNER JOIN <Table2> ON <Table1>.Id = <Table2>.Id GROUP BY id HAVING COUNT(<Table1>.Id) > 1 ```
to find the duplication of values
[ "", "mysql", "sql", "" ]
``` Join = input('Would you like to join me?') if Join == 'yes' or 'Yes': print("Great," + myName + '!') else: print ("Sorry for asking...") ``` So this is my code. It's longer; just including the problem. I'm asking a yes or no question and when in the console it runs smoothly until you get to it. Whatever you type you get the 'yes' output. Could someone please help? I've used elif statements as well but no luck.
``` if Join == 'yes' or 'Yes': ``` This is always true. Python reads it as: ``` if (Join == 'yes') or 'Yes': ``` The second half of the `or`, being a non-empty string, is always true, so the whole expression is always true because anything `or` true is true. You can fix this by explicitly comparing `Join` to both values: ``` if Join == 'yes' or Join == 'Yes': ``` But in this particular case I would suggest the best way to write it is this: ``` if Join.lower() == 'yes': ``` This way the case of what the user enters does not matter, it is converted to lowercase and tested against a lowercase value. If you intend to use the variable `Join` elsewhere it may be convenient to lowercase it when it is input instead: ``` Join = input('Would you like to join me?').lower() if Join == 'yes': # etc. ``` You could also write it so that the user can enter anything that begins with `y` or indeed, just `y`: ``` Join = input('Would you like to join me?').lower() if Join.startswith('y'): # etc. ```
I answered [this question](https://stackoverflow.com/a/17902498/2502012) yesterday You can use [`.lower()`](http://docs.python.org/2/library/stdtypes.html#str.lower) ``` Join = input('Would you like to join me?') if Join.lower() == 'yes': print("Great," + myName + '!') else: print ("Sorry for asking...") ```
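Either answer boils down to normalizing the input before comparing; a small helper sketch (the function name is mine, not from the question):

```python
def said_yes(reply):
    # Normalize case and surrounding whitespace once, then compare.
    return reply.strip().lower() in ('y', 'yes')

print(said_yes('Yes'), said_yes(' YES '), said_yes('no'))   # True True False
```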
Yes or No output Python
[ "", "python", "variables", "" ]
How do I prepend an integer to the beginning of a list? ``` [1, 2, 3] ⟶ [42, 1, 2, 3] ```
``` >>> x = 42 >>> xs = [1, 2, 3] >>> xs.insert(0, x) >>> xs [42, 1, 2, 3] ``` How it works: `list.insert(index, value)` Insert an item at a given position. The first argument is the index of the element before which to insert, so `xs.insert(0, x)` inserts at the front of the list, and `xs.insert(len(xs), x)` is equivalent to `xs.append(x)`. Negative values are treated as being relative to the end of the list.
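A side note on cost: `list.insert(0, x)` shifts every element (O(n) per prepend); when prepending repeatedly, `collections.deque.appendleft` is O(1). A small sketch of both:

```python
from collections import deque

xs = [1, 2, 3]
xs.insert(0, 42)        # in place; shifts every element, O(n) per prepend
print(xs)               # [42, 1, 2, 3]

d = deque([1, 2, 3])
d.appendleft(42)        # O(1) per prepend
print(list(d))          # [42, 1, 2, 3]
```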
``` >>> x = 42 >>> xs = [1, 2, 3] >>> [x] + xs [42, 1, 2, 3] ``` Note: don't use `list` as a variable name.
Append integer to beginning of list in Python
[ "", "python", "list", "prepend", "" ]
In a "religious" discussion about formatting of Microsoft T-SQL code, I questioned whether or not the GOTO statement was still available in T-SQL syntax. In 13 years of using T-SQL I have never had occasion to use it, and indeed didn't know if it existed. After a brief search of the documentation and to my consternation it does indeed exist! My question is this: Is there **at least one case** where GOTO statements would yield a solution that **performs** better than one in which other higher order programming constructs are used? * Encryption algorithm implementation My question is **NOT**: * How do I use small bits of functionality without creating a whole function to do so? * How do I replicate error handling functionality without copy/paste?
I almost never use `GOTO` and can easily live without it. The one case where I would consider using it is when I have complicated code that does lots of error checking. I might want to take some action when an error returns, and `GOTO` allows me to define a single block of code for the error checking. You can solve this problem in multiple ways, but `GOTO` is a reasonable option that guarantees that errors are processed consistently and the code is not cluttered with a lot of `if @Error = 0 . . .` statements.
I saw goto a few times in large scripts where people used it to improve readability. Sometimes it is more readable, but mostly it turns into spaghetti code. I see just one situation where goto can maybe perform better: inside nested while loops. You can use goto once instead of multiple [breaks](http://msdn.microsoft.com/de-de/library/ms181271.aspx), each of which only exits the innermost loop. Nevertheless, in my opinion break is not much better than goto; neither is good programming style. ``` while ... while ... while... break break break while ... while ... while... goto endloops endloops: ```
In what case would the use of a T-SQL GOTO statement be acceptable?
[ "", "sql", "t-sql", "coding-style", "goto", "" ]
I'm trying to make a VarDump class, where I can do: ``` vd.attribute = value ``` and ``` print vd.attribute ``` So, this is the code: ``` class VarDump: def __init__(self): self.dump={} def __setattr__(self,item,var): self.dump[item]=var def __getattr__(self,item): if not item in self.dump: return "" return self.dump[item] vd = VarDump() vd.foo="bar" print vd.foo ``` But I'm getting this error: ``` File "classAsVarDump.py", line 9, in __getattr__ if not item in self.dump: return "" RuntimeError: maximum recursion depth exceeded ```
`self.dump={}` calls `__setattr__` to set `dump`, which calls `__getattr__` to get the `dump` to put `dump` into, which calls `__getattr__` to get the `dump`, etc. You could resolve this with `object.__setattr__`, but the easier way is to just do this: ``` class Namespace(object): def __getattr__(self, attr): return '' ``` `__getattr__` is only called when the attribute isn't found by normal means, so this doesn't need to handle looking up attributes that have actually been set.
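The simplified `Namespace` above can be exercised as-is — `__getattr__` only fires when normal lookup fails, so there is no recursion:

```python
class Namespace(object):
    def __getattr__(self, attr):
        # Called only when normal lookup fails, so no recursion risk.
        return ''

vd = Namespace()
vd.foo = 'bar'              # plain attribute storage; no __setattr__ needed
print(vd.foo)               # bar
print(repr(vd.missing))     # '' (falls through to __getattr__)
```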
In Python 3.3, use the new [`types.SimpleNamespace()` type](http://docs.python.org/3/library/types.html#types.SimpleNamespace) instead: ``` >>> from types import SimpleNamespace >>> vd = SimpleNamespace() >>> vd.foo = 'bar' >>> print(vd.foo) bar ``` The documentation gives you a backwards-compatible version on a platter: ``` class SimpleNamespace: def __init__(self, **kwargs): self.__dict__.update(kwargs) def __repr__(self): keys = sorted(self.__dict__) items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys) return "{}({})".format(type(self).__name__, ", ".join(items)) ``` The only difference between your version and this one is that this one doesn't use `__setattr__` and `__getattr__` (not needed at all), this version takes initial values in the initializer, and it includes a nice representation: ``` >>> vd namespace(foo='bar') ```
Python: Class as var dump error
[ "", "python", "variables", "" ]
The file my function reads is of the format: ``` Column1 Column2 Column3 Column4 Value2 Value1 Value4 Value3 Value7 Value6 Value5 Value8 Value9 Value10 Value11 Value12 ``` I read the file, filter and sort it and am left with a list of lists of the format: `lines = [[Column1, Column2, Column3, Column4], [Value1, Value2, Value3, Value4], [Value5, Value6, Value7, Value8]]` Now when I try to write to a file (in the same format as my original file), I am not able to properly format my output. This is my code: ``` for line in lines: outfile.write(line[0].strip()+'\t') outfile.write(line[1].strip()+'\t') outfile.write(line[2].strip()+'\t') outfile.write(line[3].strip()+'\t') ``` If I change it to `outfile.write(line[0].strip()+'\t'+'\n')` or add `outfile.write('\n')`, my file is just one big column with all values in it.
The following code worked for me; the problem I see is that you're not adding a newline after each row has been written. ``` >>> with open('fileName', 'w') as outFile: for line in lines: for elem in line: outFile.write(elem.strip()+'\t') outFile.write('\n') ``` This gave me an output like the following ``` Column1 Column2 Column3 Column4 Value1 Value2 Value3 Value4 Value5 Value6 Value7 Value8 ``` for ``` >>> lines = [['Column1', 'Column2', 'Column3', 'Column4'], ['Value1', 'Value2', 'Value3', 'Value4'], ['Value5', 'Value6', 'Value7', 'Value8']] ``` **EDIT** - To get the output you're looking for, use ``` >>> with open('fileName', 'w') as outFile: for line in lines: outFile.write("\t".join([elem.strip() for elem in line])) outFile.write('\n') ```
You want the `string.join(list)` function, which concatenates the elements of `list`, separating them by `string`. So you'll want `"\t".join(x.strip() for x in line)`. But this alone doesn't give you the newlines at the end of each string. You could then add the newline afterwards as follows: `"\t".join(x.strip() for x in line) + "\n"` And if you want to do all the lines together, you can make it `"\n".join('\t'.join(x.strip() for x in line) for line in lines)` Also, if you use the `print >>outfile, string` syntax, you can get the newlines after each line automatically: ``` for line in lines: print >>outfile, '\t'.join(x.strip() for x in line) ```
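To verify the joined output without touching the filesystem, an in-memory file works — a Python 3 sketch using `io.StringIO` in place of the real output file:

```python
import io

lines = [['Column1', 'Column2', 'Column3', 'Column4'],
         ['Value1', 'Value2', 'Value3', 'Value4'],
         ['Value5', 'Value6', 'Value7', 'Value8']]

out = io.StringIO()         # stands in for the real output file
for line in lines:
    out.write('\t'.join(cell.strip() for cell in line) + '\n')

print(out.getvalue())
```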
Adding a Newline Character at the End of a Line in a Column
[ "", "python", "file-io", "" ]
My problem is when updating a dictionary to a list. Input: ``` >>> res=[] >>> my_initial_dict={'aa':1,'bb':1} >>> for i in range(4): ... my_initial_dict.update({'aa':i+4}) ... print my_initial_dict ... res.append(my_initial_dict) ``` Output I got: ``` {'aa': 4, 'bb': 1} {'aa': 5, 'bb': 1} {'aa': 6, 'bb': 1} {'aa': 7, 'bb': 1} >>> res [{'aa': 7, 'bb': 1}, {'aa': 7, 'bb': 1}, {'aa': 7, 'bb': 1}, {'aa': 7, 'bb': 1}] ``` When I print `my_initial_dict` within the loop, I'm getting correct values. But when I print the resulting list, I'm getting a list of dictionaries with the same dict repeated 4 times. Can someone explain what is happening here?
All list items reference the same dictionary object. Make a copy: ``` res = [] my_initial_dict = {'aa':1,'bb':1} for i in range(4): d = my_initial_dict.copy() d['aa'] = i + 4 print d res.append(d) print res ```
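The copy-per-iteration fix, runnable end to end (Python 3 print; values match the question's expected output):

```python
res = []
my_initial_dict = {'aa': 1, 'bb': 1}
for i in range(4):
    d = my_initial_dict.copy()   # fresh dict each pass: 4 distinct objects
    d['aa'] = i + 4
    res.append(d)

print(res)
```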
All items in your res list are referencing the **same object**, as said before. You: Append this **[mutable](http://en.wikipedia.org/wiki/Mutable)** object to a list, change it, and append it again, now show me what that looks like. Computer: Add object to list, add same object to list, here is your list, it has the same two objects in it. Here is ***another way*** to say that; First you **build the list** with the objects, and then your **display the list** of objects, which in this case is a list of the **same object**, so it displays the ***latest state*** of that object in each position of the list, not the state at which you added it, unless you make a copy of it. ## Other ways to do it, just for fun; Nested [Comprehension](http://docs.python.org/2/tutorial/datastructures.html) or map with lamda, with copy() and update() ``` my_inital_dict={'aa':1,'bb':1} res = [x[1] for x in [(my_inital_dict.update({'aa':i+4}), my_inital_dict.copy()) for i in range(4)]] ``` -or- ``` res = map(lambda i: my_inital_dict.update({'aa':i+4}) or my_inital_dict.copy(), range(4)) ```
Python dictionary update to a list gives wrong output
[ "", "python", "python-2.7", "" ]
When inside a jinja template: 1. How is the string, provided to {%extends xxx\_string%} and {% include xxx\_string%}, resolved? 2. Is this relative to actual filesystem, or a generated namespace (such as when using the Flask.url\_for function)? Ultimately, I would like to use relative imports when inside my templates (I don't want to have to update filesystem locations INSIDE each and every template, with respect to the Blueprint). I would like to be able to : 1. Store the actual Blueprint package and its nested static/template resources under an arbitrary filesystem path. ('/bob/projects/2013\_07/marketanalysis') 2. Within the python Blueprint package, define a separate 'slugname' to reference the blueprint instance and all of its resource. Register this slugname on the application for global references. (without global name collisions or race-conditions) 3. Have generic view functions that provide 'cookie-cutter' layouts, depending on how the blueprint is being used (headlines, cover, intro, fullstory, citations) 4. Internally, within the filesystem of the blueprint package, use relative pathnames when resolving extends()/include() inside templates (akin to `url_for` shortcuts when referencing relative blueprint views). The idea is that when the blueprint package is bundled with all of its resources, it has no idea where it will be deployed, and may be relocated several times under different *slug-names*. The python *interface* should be the same for every "bundle", but the html content, css, javascript, and images/downloads will be unique for each bundle. --- I have sharpened the question quite a bit. I think this is as far as it should go on this thread.
It seems the most appropriate solution for my "resource bundling" should be handled with Jinja loaders (see [Jinja Docs on Loaders](http://jinja.pocoo.org/docs/api/#loaders "Jinja Loaders")). Right away, `jinja2.PackageLoader`, `jinja2.PrefixLoader`, and `jinja2.DictLoader` seem like they could be fun. The accepted answer for this [Similar Thread](https://stackoverflow.com/questions/13598363/how-to-dynamically-select-template-directory-to-be-used-in-flask) gives an idea of how Loaders are handled in Flask. For the most part, we can stay out of the default application-level `DispatchingJinjaLoader`. By default, I believe a Blueprint will end up with its `self.jinja_loader` set to ... ``` jinja2.FileSystemLoader(os.path.join(self.root_path, self.template_folder)) ``` This helps us to understand how simple the default resolution algorithm is, and how easily we can extend Blueprint-specific functions. A clever combination of subclassed/custom Loaders will let us create smarter Loaders, and allow us to sneak in a few magics that help us cheat. The real power will come from overriding `CustomBaseLoader.list_templates()` and a quick little ProxyLoader hooked into the application's `DispatchingJinjaLoader` that will have priority over normal lookups.
Using folders instead of prefixes makes it a bit more clean in my opinion. Example application structure: ``` yourapplication |- bp143 |- templates |- bp143 |- index.jinja |- quiz.jinja |- context.jinja |- templates |- base.jinja |- index.jinja |- bp143 |- context.jinja ``` With the above structure you can refer to templates as follows: ``` base.jinja --> comes from the application package index.jinja --> comes from the application package bp143/index.jinja --> comes from the blueprint bp143/context.jinja --> comes from the application package (the app overrides the template of the same name in the blueprint) ```
Flask - Jinja environment magic in a Blueprint
[ "", "python", "flask", "" ]
I need to count only those order numbers where every product's status is **Active**. Existing records: ``` OrderNo ProductID Status 1 1 Active 1 2 Active 1 3 Active 2 1 Inactive 2 2 Inactive 3 3 Active 4 1 Inactive 4 3 Active ``` Output: ``` Completed 2 ``` Note: OrderNo 1 and 3 are the only orders whose products are all in Active status.
``` SELECT COUNT(*) FROM ( SELECT OrderNo FROM products GROUP BY OrderNo HAVING MIN(Status) = 'Active' AND MAX(Status) = 'Active' ) AS dt ```
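The `HAVING MIN(Status) = 'Active' AND MAX(Status) = 'Active'` trick is easy to sanity-check in-memory with Python's `sqlite3` — an illustration only, since the question targets SQL Server, but this particular query is portable:

```python
import sqlite3

# The question's sample rows: (OrderNo, ProductID, Status).
rows = [(1, 1, 'Active'), (1, 2, 'Active'), (1, 3, 'Active'),
        (2, 1, 'Inactive'), (2, 2, 'Inactive'),
        (3, 3, 'Active'),
        (4, 1, 'Inactive'), (4, 3, 'Active')]

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE products (OrderNo INT, ProductID INT, Status TEXT)')
con.executemany('INSERT INTO products VALUES (?, ?, ?)', rows)

# An order counts as "completed" only if every one of its rows is Active,
# i.e. both the MIN and MAX of Status collapse to 'Active'.
completed = con.execute(
    "SELECT COUNT(*) FROM ("
    " SELECT OrderNo FROM products"
    " GROUP BY OrderNo"
    " HAVING MIN(Status) = 'Active' AND MAX(Status) = 'Active'"
    ") AS dt").fetchone()[0]
```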
To get the total orders with active numbers ``` SELECT COUNT(*) FROM (SELECT * FROM products WHERE OrderNo NOT IN (SELECT OrderNo FROM products WHERE Status = 'Inactive' GROUP BY OrderNo)) as tmp_b; ``` To get only the order numbers ``` SELECT OrderNo FROM (SELECT * FROM products WHERE OrderNo NOT IN (SELECT OrderNo FROM products WHERE Status = 'Inactive' GROUP BY OrderNo)) as tmp_b; ```
SQL: Count Records if All Columns Are Equal
[ "", "sql", "sql-server", "count", "" ]
I created a file using the `write` command, and for some reason when I try to open it with my text editor after, it comes up with this message: "The document python\_file.rtf could not be opened" Here's exactly what I did: ``` infile=open("python_file.rtf","w") infile.write("insert string here") infile.close() ``` Then when I try to open the file (I can find it in documents and everything) it gives me that error message. Can anyone tell me why? I am very new to programming.
You are trying to save this to a [Rich Text Format](http://en.wikipedia.org/wiki/Rich_Text_Format) file, but `"insert string here"` is only a plain character sequence. Try saving it as `python_file.txt` and opening it with a notepad application; then you should see the text. If you want to save RTF files you should check how these files are internally formatted. For your example this would be (note the raw string, so `\rtf1` is not read as a `\r` escape): ``` infile=open("python_file.rtf","w") infile.write(r"{\rtf1 insert string here }") infile.close() ```
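A runnable sketch of the raw-string point: without the `r` prefix, `"\rtf1"` would begin with a carriage-return character instead of the literal backslash that RTF needs (the path here uses a temp directory for illustration):

```python
import os
import tempfile

# The r'' prefix keeps the backslash literal; a plain "{\rtf1 ...}"
# would start with a carriage return ('\r') followed by 'tf1'.
content = r"{\rtf1 insert string here }"

path = os.path.join(tempfile.mkdtemp(), "python_file.rtf")
with open(path, "w") as outfile:
    outfile.write(content)

with open(path) as infile:
    written = infile.read()
```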
That's because RTF isn't just a plain text format. Save it as `python_file.txt`, for example, or create a file that is compatible with the RTF format, for example: ``` >>> infile=open("python_file.rtf","w") >>> infile.write("{\r test \par }") >>> infile.close() ```
Python file cannot be opened
[ "", "python", "file", "" ]
Background: I'm working with a device vendor-supplied API module which stores device login data (device hostname, session ID, etc) as a global variable; I'm trying to figure out if it's possible to have multiple instances of the module to represent logins to multiple devices. So far I've attempted a couple strategies with test code, none of which have worked: Test module code: statictest.py ``` count = 0 class Test(): @classmethod def testcount(cls): global count count += 1 return count ``` First attempt: import module multiple times and instantiate: ``` >>> import statictest as s1 >>> import statictest as s2 >>> s1.Test.testcount() 1 >>> s1.Test.testcount() 2 >>> s2.Test.testcount() 3 ``` Second try: import module inside class, instantiate class: ``` #!/usr/bin/env python2.7 class TestMod(): s = __import__('statictest') def test(self): ts = self.s.Test() return ts.testcount() t = TestMod() u = TestMod() print t.test() print u.test() ``` That one didn't work either: ``` [~/]$ ./moduletest.py 1 2 ``` This seems like it *should* be obvious but is there any way to encapsulate a module such that multiple instances are available?
The following seems to work. It uses your `statictest.py` module and a combination of a few of the ideas in other answers to create a context manager which will allow easy switching and use of any of the various instances of the module: ``` from contextlib import contextmanager import importlib import random import sys MODULE_NAME = 'statictest' NUM_INSTANCES = 4 instances = [] # initialize module instances for _ in xrange(NUM_INSTANCES): if MODULE_NAME in sys.modules: del sys.modules[MODULE_NAME] module = importlib.import_module(MODULE_NAME) for _ in xrange(random.randrange(10)): # call testcount a random # of times module.Test.testcount() instances.append(sys.modules[MODULE_NAME]) @contextmanager def statictest_inst(n): save = sys.modules[MODULE_NAME] sys.modules[MODULE_NAME] = instances[n] yield instances[n] sys.modules[MODULE_NAME] = save def get_counts(): counts = [] for i in xrange(NUM_INSTANCES): with statictest_inst(i) as inst: counts.append(inst.count) return counts print 'initial counts', get_counts() choice = random.randrange(NUM_INSTANCES) print 'calling instance[{}].testcount()'.format(choice) with statictest_inst(choice) as inst: # use context manager inst.Test.testcount() print 'counts after updating one of them', get_counts() ``` Sample output: ``` initial counts [2, 4, 4, 1] calling instance[2].testcount() counts after updating one of them [2, 4, 5, 1] ```
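The core move in the snippet above — deleting the `sys.modules` entry so the next import builds a fresh module object — can be isolated into a minimal, self-contained sketch. It writes a throwaway module (the name `statictest_demo` is made up for the demo) to a temp directory:

```python
import importlib
import os
import sys
import tempfile

# Write a tiny module with module-level state, like the vendor API.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'statictest_demo.py'), 'w') as f:
    f.write('count = 0\n'
            'def bump():\n'
            '    global count\n'
            '    count += 1\n'
            '    return count\n')
sys.path.insert(0, moddir)
importlib.invalidate_caches()

inst_a = importlib.import_module('statictest_demo')
del sys.modules['statictest_demo']       # forget it -> next import is fresh
inst_b = importlib.import_module('statictest_demo')

inst_a.bump()
inst_a.bump()
inst_b.bump()
```

Each instance keeps its own `count`, which is exactly the isolation the question asks for.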
I think it's not possible because Python modules pretty much behave like Singletons (in fact this is a valid method to create a Singleton in Python). [Refer to this SO thread](https://stackoverflow.com/questions/10936709/why-does-a-python-module-act-like-a-singleton) or [this one](https://stackoverflow.com/questions/6760685/creating-a-singleton-in-python) for example. This design is intended to prevent multiple imports of the same module because it can be somewhat expensive. The `logging` module is an excellent example of this. You set up your logger once and all your code which is being run by the same interpreter and imports `logging` will write to the same logfile.
Multiple instances of a global module variable?
[ "", "python", "" ]
My code needs to have an inner class and I want to create an instance of this inner class without creating an instance of the outer class. **How do I do that in Python?** In Java we can declare the inner class as static, but I don't know how to make an inner class static in Python. I know that for methods we can use the `@staticmethod` decorator. ``` class Outer: def __init__(self): print 'Instance of outer class is created' class Inner: def __init__(self): print 'Instance of Inner class is created' ```
The class Inner is defined during the definition of the class Outer and it exists in its class namespace afterwards. So just `Outer.Inner()`.
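Spelled out as a runnable sketch (return values instead of prints so the behaviour is checkable):

```python
class Outer:
    class Inner:
        def whoami(self):
            return "inner"

# No Outer instance is ever created:
inner = Outer.Inner()
result = inner.whoami()
```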
You don't need to do anything special. Just refer to it directly: ``` instance = Outer.Inner() ```
Static inner class in python
[ "", "python", "oop", "static", "inner-classes", "static-classes", "" ]
I know this sounds crazy and probably should not be done this way, but I need something like this - I have records from `SELECT [Type], [Total Sales] From Before` and I want to add an extra row at the end of the table to show the SUM (After). Can this be done? ![enter image description here](https://i.stack.imgur.com/G9chM.jpg)
If you are on SQL Server 2008 or later version, you can use the [`ROLLUP()`](http://msdn.microsoft.com/en-us/library/bb522495.aspx "Using GROUP BY with ROLLUP, CUBE, and GROUPING SETS") GROUP BY function: ``` SELECT Type = ISNULL(Type, 'Total'), TotalSales = SUM(TotalSales) FROM atable GROUP BY ROLLUP(Type) ; ``` This assumes that the `Type` column cannot have NULLs and so the NULL in this query would indicate the rollup row, the one with the grand total. However, if the `Type` column can have NULLs of its own, the more proper type of accounting for the total row would be like in @Declan\_K's answer, i.e. using the [`GROUPING()`](http://msdn.microsoft.com/en-us/library/ms178544.aspx "GROUPING (Transact-SQL)") function: ``` SELECT Type = CASE GROUPING(Type) WHEN 1 THEN 'Total' ELSE Type END, TotalSales = SUM(TotalSales) FROM atable GROUP BY ROLLUP(Type) ; ```
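On engines without `ROLLUP`, a plain `UNION ALL` produces the same shape of result; here is a quick in-memory check with `sqlite3` (the sales figures are made up, since the question's values are only in a screenshot):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE Before (Type TEXT, TotalSales INT)')
con.executemany('INSERT INTO Before VALUES (?, ?)',
                [('A', 100), ('B', 50), ('C', 25)])

# Per-type subtotals plus one grand-total row, no ROLLUP required.
rows = con.execute(
    "SELECT Type, SUM(TotalSales) FROM Before GROUP BY Type "
    "UNION ALL "
    "SELECT 'Total', SUM(TotalSales) FROM Before").fetchall()
```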
This is the more powerful grouping / rollup syntax you'll want to use in SQL Server 2008+. Always useful to specify the version you're using so we don't have to guess. ``` SELECT [Type] = COALESCE([Type], 'Total'), [Total Sales] = SUM([Total Sales]) FROM dbo.Before GROUP BY GROUPING SETS(([Type]),()); ``` Craig Freedman wrote [a great blog post introducing `GROUPING SETS`](https://learn.microsoft.com/en-us/archive/blogs/craigfr/grouping-sets-in-sql-server-2008).
Add a summary row with totals
[ "", "sql", "sql-server", "rollup", "" ]
I am looking for some code in Python which could return the `k` largest numbers from an unsorted list of `n` numbers. My first thought was to sort the list, but that might turn out to be very wasteful. For example, let the list from which I want to find the `k` largest numbers be `list1` ``` > list1 = [0.5, 0.7, 0.3, 0.3, 0.3, 0.4, 0.5] ``` Here `n = 7` and if `k = 3`, that is if I want to find the 3 largest numbers from a list of 7 numbers, then the output should be `0.5, 0.7, 0.5` How can this be done?
Python has all batteries included - use the [`heapq`](http://docs.python.org/2/library/heapq.html) module :) ``` from heapq import nlargest data = [0.5, 0.7, 0.3, 0.3, 0.3, 0.4, 0.5] print nlargest(3, data) ``` It's also faster than sorting the whole list, because it uses a partial heapsort.
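`nlargest` has an `nsmallest` twin and, like `sorted`, accepts a `key` function — handy when the items are records rather than bare numbers:

```python
from heapq import nlargest, nsmallest

data = [0.5, 0.7, 0.3, 0.3, 0.3, 0.4, 0.5]
top3 = nlargest(3, data)      # returned in descending order
low2 = nsmallest(2, data)     # returned in ascending order

# key= works like in sorted(); e.g. pick records by one field:
sales = [('a', 10), ('b', 30), ('c', 20)]
best = nlargest(2, sales, key=lambda rec: rec[1])
```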
It can be done like this: ``` >>> list1 [0.5, 0.7, 0.3, 0.3, 0.3, 0.4, 0.5] >>> list2 = list1[:] #make a copy of list1 >>> k = 3 >>> result = [] >>> for i in range(k): result.append(max(list2)) #append largest element to list of results list2.remove(max(list2)) # remove largest element from old list >>> result [0.7, 0.5, 0.5] >>> ```
How to find k biggest numbers from a list of n numbers assuming n > k
[ "", "python", "list", "max", "" ]
I have a class ``` class Foo(): def some_method(): pass ``` And another class **in the same module**: ``` class Bar(): def some_other_method(): class_name = "Foo" # Can I access the class Foo above using the string "Foo"? ``` I want to be able to access the `Foo` class using the string "Foo". I can do this if I'm in another module by using: ``` from project import foo_module foo_class = getattr(foo_module, "Foo") ``` Can I do the same sort of thing in the same module? The guys in [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat) suggested I use a mapping dict that maps string class names to the classes, but I don't want to do that if there's an easier way.
``` import sys getattr(sys.modules[__name__], "Foo") # or globals()['Foo'] ```
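The `globals()` spelling needs no import at all; a minimal runnable sketch:

```python
class Foo:
    marker = 'found'

def class_by_name(name):
    # globals() is this module's namespace; getattr on
    # sys.modules[__name__] resolves the same attribute.
    return globals()[name]

cls = class_by_name('Foo')
obj = cls()
```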
[You can do it with the help of the `sys` module:](http://ideone.com/XXlIzj) ``` import sys def str2Class(str): return getattr(sys.modules[__name__], str) ```
How to get class object from class name string in the same module?
[ "", "python", "" ]
I am wondering how to make Python generate random numbers other than a particular number. For instance, I want it to generate any number from 1 to 5 except 3, so the output would be `1, 2, 4, 5` and `3` will not be counted in the list. What can I do to achieve that? An example would be like this: *There are **five** computerized players (Player 0 to 4) in a game. Player 1 randomly selects **one** other player (except itself) and Player 2 to 4 will do the same thing.* So the output will be something like: ``` Player 1, who do you want to play with? Player 1 chooses Player 2 ```
Use `random.choice` on a list, but first remove that particular number from the list: ``` >>> import random >>> n = 3 >>> end = 5 >>> r = list(range(1,n)) + list(range(n+1, end)) >>> r [1, 2, 4] >>> random.choice(r) 2 >>> random.choice(r) 4 ``` Or define a function: ``` def func(n, end, start = 1): return list(range(start, n)) + list(range(n+1, end)) ... >>> r = func(3, 5) >>> r [1, 2, 4] >>> random.choice(r) 2 ``` **Update:** This returns all numbers other than a particular number from the list: ``` >>> r = range(5) for player in r: others = list(range(0, player)) + list(range(player+1, 5)) print player,'-->', others ... 0 --> [1, 2, 3, 4] 1 --> [0, 2, 3, 4] 2 --> [0, 1, 3, 4] 3 --> [0, 1, 2, 4] 4 --> [0, 1, 2, 3] ```
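Whichever variant you use, the invariant is easy to verify: over many draws the excluded value never appears. A compact sketch of the build-a-pool-then-choose approach:

```python
import random

def choice_except(start, end, excluded):
    """Random int in [start, end], never equal to `excluded`."""
    pool = [n for n in range(start, end + 1) if n != excluded]
    return random.choice(pool)

draws = [choice_except(1, 5, 3) for _ in range(1000)]
```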
While the other answers are correct, the use of intermediate lists is inefficient. --- **Alternate Solution**: Another way you could do this is by choosing randomly from a range of numbers that is n-1 in size, then adding +1 to any results that are greater than or equal to (`>=`) the number you want to skip. The following function `random_choice_except` implements the same API as `np.random.choice`, and so adjusting `size` allows efficient generation of multiple random values: ``` import numpy as np def random_choice_except(a: int, excluding: int, size=None, replace=True): # generate random values in the range [0, a-1) choices = np.random.choice(a-1, size, replace=replace) # shift values to avoid the excluded number return choices + (choices >= excluding) random_choice_except(3, 1) # >>> 0 <or> 2 random_choice_except(3, 1, size=10) # >>> eg. [0 2 0 0 0 2 2 0 0 2] ``` --- *The behaviour of `np.random.choice` changes depending on whether an integer, list or array is passed as an argument. To prevent unwanted behavior we may want to add the following assertion at the top of the function: `assert isinstance(a, int)`*
Choosing random integers except for a particular number for python?
[ "", "python", "random", "" ]
forms.py ``` class PhoneForm(forms.ModelForm): number1 = forms.IntegerField(required=False,error_messages={'invalid':'Enter a valid phone number'}) number2 = forms.IntegerField(required=False,error_messages={'invalid':'Enter a valid phone number'}) number3 = forms.IntegerField(required=False,error_messages={'invalid':'Enter a valid phone number'}) class Meta: model = PhoneInfo fields = ['name1','number1','name2','number2','name3','number3','emergency','emergency_number'] ``` models.py ``` class PhoneInfo(models.Model): user = models.ForeignKey(User, null=True) name1 = models.CharField('Name', max_length=100, null=True, blank=True) number1 = models.CharField('Number',max_length=20, null=True, blank=True) name2 = models.CharField('Name', max_length=100, null=True, blank=True) number2 = models.CharField('Number', max_length=20, null=True, blank=True) name3 = models.CharField('Name', max_length=100, null=True, blank=True) number3 = models.CharField('Number',max_length=20, null=True, blank=True) emergency = models.CharField('Emergency', max_length=100, null=True, blank=True) emergency_number = models.CharField('Emergency Number',max_length=20, null=True, blank=True) ``` The fields number1, number2 and number3 are used to save 11-digit phone numbers. My problem: if the first digit I enter is '0', the '0' disappears from the form field after saving. If I add a "0" after the first digit, it is saved and displayed. For example, with 08854215452 the leading zero disappears after saving. I want to keep that zero when saving a phone number with its STD code. ``` Traceback: File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 101. request.path_info) File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve 252. sub_match = pattern.resolve(new_path) File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve 252. sub_match = pattern.resolve(new_path) File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve 158.
return ResolverMatch(self.callback, args, kwargs, self.name) File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py" in _get_callback 164. self._callback = get_callable(self._callback_str) File "/usr/lib/python2.7/site-packages/django/utils/functional.py" in wrapper 124. result = func(*args) File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py" in get_callable 91. lookup_view = getattr(import_module(mod_name), func_name) File "/usr/lib/python2.7/site-packages/django/utils/importlib.py" in import_module 35. __import__(name) File "/root/Projects/ir_uploaded_copy/ir/setting/views.py" in <module> 22. from setting.forms import * File "/root/Projects/ir_uploaded_copy/ir/setting/forms.py" in <module> 151. class PhoneForm(forms.ModelForm): File "/root/Projects/ir_uploaded_copy/ir/setting/forms.py" in PhoneForm 162. emergency_number = forms.RegexField(regex=r'^\+?(\d{3}-?\d+{7})$', error_messages = {'invalid':'Enter a valid phone number'}) File "/usr/lib/python2.7/site-packages/django/forms/fields.py" in __init__ 437. regex = re.compile(regex) File "/usr/lib/python2.7/re.py" in compile 190. return _compile(pattern, flags) File "/usr/lib/python2.7/re.py" in _compile 245. raise error, v # invalid expression Exception Type: error at /setting/call/ Exception Value: multiple repeat ``` Thanks
Never save phone numbers as `IntegerFields`. The behaviour you're seeing is normal for integers in any programming language. ``` >> myInt = 00001 >> print myInt >> 1 ``` What should you store it as? Either a) write your own field, b) use a `CharField`, or c) use a `RegexField`. When it comes to phone numbers I usually prefer a regex field ***if*** you're in a set region of the world; Sweden for example would have the regex below: ``` phone_number = forms.RegexField(regex=r'^\+?(\d{3}-?\d{7})$', error_messages = {'invalid_phonenumber': _("Not a valid Swedish phone number.")}) ```
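Note that the pattern shown in the question's traceback, `\d+{7}`, is invalid — a quantifier applied to an already-quantified token, which is exactly the "multiple repeat" error; with `\d{7}` it compiles and behaves as intended. A quick check with plain `re`, outside Django:

```python
import re

# \d+{7} raises "multiple repeat"; \d{7} is the intended form:
# an optional '+', three digits, an optional dash, then seven digits.
pattern = re.compile(r'^\+?(\d{3}-?\d{7})$')

ok = [bool(pattern.match(s))
      for s in ('088-5215452', '0885215452', '+088-5215452')]
bad = [bool(pattern.match(s))
      for s in ('08854215452', '088-521', 'abc')]
```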
NEVER treat phone numbers as integers! Always use CharFields and maybe do a little bit of validation if necessary. Be sure that you do not only allow digits, but also "-", " " etc. Your ModelForm might look like this: ``` class PhoneForm(forms.ModelForm): class Meta: model = PhoneInfo fields = ['name1','number1','name2','number2','name3','number3','emergency','emergency_number'] def clean_number1(self): n = self.cleaned_data.get('number1') for allowednondigit in '- ./': n = n.replace(allowednondigit, '') for char in n: if char not in '0123456789': raise forms.ValidationError("Please only use digits, spaces, dots, slash and dash characters") return n ``` That way you store only digits (if that is what you REALLY want) in your database while still letting your user write a phone number as he is used to.
Integer field prefix zero not displaying
[ "", "python", "django", "django-models", "django-forms", "django-templates", "" ]
How do I convert my Python app to a `.exe`? I made a program with `tkinter` and was wondering how to make it possible for others to use. I use Python 3.3. I searched for a bit but could not find anything.
cx\_Freeze does this but creates a folder with lots of dependencies. [py2exe](http://pypi.python.org/pypi/py2exe/) now does this and, with the --bundle-files 0 option, creates just one EXE, which is probably the best solution to your question. UPDATE: After encountering third-party modules that py2exe had trouble "finding", I've moved to pyinstaller as kotlet schabowy suggests below. Both have ample documentation and include .exes you can run with command line parameters, but I have yet to compile a script that pyinstaller isn't able to handle without debugging or head-scratching. Here's a simple convenience function I use to build an .exe with my defaults from the interpreter (of course a batch or similar would be fine too): ``` import subprocess,os def exe(pyfile,dest="",creator=r"C:\Python34\Scripts\pyinstaller.exe",ico=r"C:\my icons\favicon.ico",noconsole=False): insert="" if dest: insert+='--distpath "{}" '.format(dest) else: insert+='--distpath "{}" '.format(os.path.split(pyfile)[0]) if ico: insert+=' --icon="{}" '.format(ico) if noconsole: insert+=' --noconsole ' runstring='"{creator}" "{pyfile}" {insert} -F'.format(**locals()) subprocess.check_output(runstring) ```
I have found [PyInstaller](http://www.pyinstaller.org/) to work the best. You have many options; for example, you can pack everything into a one-file exe. I love to use it together with [Cython](http://cython.org/) for speed.
How do I compile my Python 3 app to an .exe?
[ "", "python", "tkinter", "compilation", "exe", "python-3.3", "" ]
How can I create a loop that prompts for a list of items, with the prompt changing each time? For example, "Input your first item", then "Input your second item", etc. (or 1st, 2nd). I need to add all of the items to an array: ``` items = [] for i in range(5): item = input("Input your first thing: ") items.append(item) print (items) ```
Slightly altering your code: ``` names = {1: "first", 2: "second", 3: "third", 4: "fourth", 5: "fifth"} items = [] for i in range(5): item = input("Input your {} thing: ".format(names[i+1])) items.append(item) print(items) ``` Or a more general version: ``` def getordinal(n): if str(n)[-2:] in ("11","12","13"): return "{}th".format(n) elif str(n)[-1] == "1": return "{}st".format(n) elif str(n)[-1] == "2": return "{}nd".format(n) elif str(n)[-1] == "3": return "{}rd".format(n) else: return "{}th".format(n) ``` Or a more compact definition: ``` def getord(n): s=str(n) return s+("th" if s[-2:] in ("11","12","13") else ((["st","nd","rd"]+ ["th" for i in range(7)]) [int(s[-1])-1])) ```
Use a list of prompts: ``` prompts = ('first', 'second', 'third', 'fourth', 'fifth') items = [] for prompt in prompts: item = input("Input your {} thing: ".format(prompt)) items.append(item) ```
Change prompt to user with each loop pass
[ "", "python", "loops", "" ]
Suppose I have a table: ``` HH SLOT RN -------------- 1 1 null 1 2 null 1 3 null -------------- 2 1 null 2 2 null 2 3 null ``` I want to set RN to be a random number between 1 and 10. It's ok for the number to repeat across the entire table, but it's *bad* to repeat the number *within* any given HH. E.g.,: ``` HH SLOT RN_GOOD RN_BAD -------------------------- 1 1 9 3 1 2 4 8 1 3 7 3 <--!!! -------------------------- 2 1 2 1 2 2 4 6 2 3 9 4 ``` This is on Netezza if it makes any difference. This one's being a real headscratcher for me. Thanks in advance!
Well, I couldn't get a slick solution, so I did a hack: 1. Created a new integer field called `rand_inst`. 2. Assign a random number to each empty slot. 3. Update `rand_inst` to be the instance number of that random number within this household. E.g., if I get two 3's, then the second 3 will have `rand_inst` set to 2. 4. Update the table to assign a different random number anywhere that `rand_inst>1`. 5. Repeat assignment and update until we converge on a solution. Here's what it looks like. Too lazy to anonymise it, so the names are a little different from my original post: ``` /* Iterative hack to fill 6 slots with a random number between 1 and 13. A random number *must not* repeat within a household_id. */ update c3_lalfinal a set a.rand_inst = b.rnum from ( select household_id ,slot_nbr ,row_number() over (partition by household_id,rnd order by null) as rnum from c3_lalfinal ) b where a.household_id = b.household_id and a.slot_nbr = b.slot_nbr ; update c3_lalfinal set rnd = CAST(0.5 + random() * (13-1+1) as INT) where rand_inst>1 ; /* Repeat until this query returns 0: */ select count(*) from ( select household_id from c3_lalfinal group by 1 having count(distinct(rnd)) <> 6 ) x ; ```
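For contrast, outside the database the constraint is a one-liner per household: `random.sample` draws without replacement, so values can repeat across households but never within one (a sketch, not Netezza SQL):

```python
import random

def assign_slots(households, slots_per_hh, lo=1, hi=10):
    """One value per slot; repeats allowed across households, not within."""
    values = range(lo, hi + 1)
    return {hh: random.sample(values, slots_per_hh) for hh in households}

table = assign_slots(households=[1, 2], slots_per_hh=3)
```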
To get a random number between 1 and the number of rows in the hh, you can use: ``` select hh, slot, row_number() over (partition by hh order by random()) as rn from t; ``` The larger range of values is a bit more challenging. The following calculates a table (called `randoms`) with numbers and a random position in the same range. It then uses `slot` to index into the position and pull the random number from the `randoms` table: ``` with nums as ( select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 ), randoms as ( select n, row_number() over (order by random()) as pos from nums ) select t.hh, t.slot, hnum.n from (select hh, randoms.n, randoms.pos from (select distinct hh from t ) t cross join randoms ) hnum join t on t.hh = hnum.hh and t.slot = hnum.pos; ``` [Here](http://www.sqlfiddle.com/#!12/d41d8/1238) is a SQLFiddle that demonstrates this in Postgres, which I assume is close enough to Netezza to have matching syntax.
SQL random number that doesn't repeat within a group
[ "", "sql", "algorithm", "random", "netezza", "" ]
If I define a little Python program as ``` class a(): def _func(self): return "asdf" # Not sure what to replace __init__ with so that a.func will return asdf def __init__(self, *args, **kwargs): setattr(self, 'func', classmethod(self._func)) if __name__ == "__main__": a.func ``` I receive the traceback error ``` Traceback (most recent call last): File "setattr_static.py", line 9, in <module> a.func AttributeError: class a has no attribute 'func' ``` What I am trying to figure out is: how can I dynamically set a class method on a class without instantiating an object? --- ## Edit: The answer for this problem is ``` class a(): pass def func(cls, some_other_argument): return some_other_argument setattr(a, 'func', classmethod(func)) if __name__ == "__main__": print(a.func) print(a.func("asdf")) ``` which returns the following output ``` <bound method type.func of <class '__main__.a'>> asdf ```
You can dynamically add a classmethod to a class by simple assignment to the class object or by setattr on the class object. Here I'm using the python convention that classes start with capital letters to reduce confusion: ``` # define a class object (your class may be more complicated than this...) class A(object): pass # a class method takes the class object as its first variable def func(cls): print 'I am a class method' # you can just add it to the class if you already know the name you want to use A.func = classmethod(func) # or you can auto-generate the name and set it this way the_name = 'other_func' setattr(A, the_name, classmethod(func)) ```
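The same pattern with return values instead of prints, so the dynamically attached classmethods can be checked:

```python
class A(object):
    pass

def describe(cls, label):
    # once wrapped in classmethod(), the first argument is the class
    return '{}:{}'.format(cls.__name__, label)

A.describe = classmethod(describe)
setattr(A, 'describe2', classmethod(describe))   # same, with a runtime name

no_instance_needed = A.describe('x')             # called on the class itself
```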
There are a couple of problems here: * `__init__` is only run when you create an instance, e.g. `obj = a()`. This means that when you do `a.func`, the `setattr()` call hasn't happened * You cannot access the attributes of a class directly from within methods of that class, so instead of using just `_func` inside of `__init__` you would need to use `self._func` or `self.__class__._func` * `self` will be an instance of `a`, if you set an attribute on the instance it will only be available for that instance, not for the class. So even after calling `setattr(self, 'func', self._func)`, `a.func` will raise an AttributeError * Using `staticmethod` the way you are will not do anything, `staticmethod` will return a resulting function, it does not modify the argument. So instead you would want something like `setattr(self, 'func', staticmethod(self._func))` (but taking into account the above comments, this still won't work) So now the question is, what are you actually trying to do? If you really want to add an attribute to a class when initializing an instance, you could do something like the following: ``` class a(): def _func(self): return "asdf" def __init__(self, *args, **kwargs): setattr(self.__class__, 'func', staticmethod(self._func)) if __name__ == '__main__': obj = a() a.func a.func() ``` However, this is still kind of weird. Now you can access `a.func` and call it without any problems, but the `self` argument to `a.func` will always be the most recently created instance of `a`. I can't really think of any sane way to turn an instance method like `_func()` into a static method or class method of the class. Since you are trying to dynamically add a function to the class, perhaps something like the following is closer to what you are actually trying to do? ``` class a(): pass def _func(): return "asdf" a.func = staticmethod(_func) # or setattr(a, 'func', staticmethod(_func)) if __name__ == '__main__': a.func a.func() ```
How can I dynamically create class methods for a class in python
[ "", "python", "class", "metaprogramming", "static-methods", "setattr", "" ]
Here is an example ``` In > int('1.5') Out > 1 In > int('10.5') Out > 10 ``` But I want to keep the values intact. How do I do it?
Integers are only numbers that have no decimals. ``` -4,-3,-2,-1,0,1,2,3,4,...,65535 etc... ``` Floating point numbers or Decimal numbers are allowed to represent fractions and more precise numbers ``` 10.5, 4.9999999 ``` If you want to take a string and get a numerical type for non-whole numbers, use [`float()`](http://docs.python.org/2/library/functions.html#float) ``` float('10.5') ``` Here is a very simple elementary school explanation of [integers](http://www.mathsisfun.com/whole-numbers.html) Here is the python documentation of [numerical types](http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex)
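One caveat worth adding: in CPython, `int('1.5')` actually raises `ValueError` rather than truncating — a string with a fractional part has to go through `float()` first. A minimal check:

```python
kept = float('10.5')              # fraction preserved: 10.5

# int() on a *string* only accepts whole-number strings;
# to truncate a fractional string, convert through float:
truncated = int(float('10.5'))    # 10

try:
    int('10.5')
    direct_int_worked = True
except ValueError:
    direct_int_worked = False
```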
``` foo = 10.5 foo2 = int(foo) print foo, foo2 10.5 10 ```
How to convert strings to numbers in python while preserving fractional parts
[ "", "python", "floating-point", "" ]
Suppose I create a histogram using scipy/numpy, so I have two arrays: one for the bin counts, and one for the bin edges. If I use the histogram to represent a probability distribution function, how can I efficiently generate random numbers from that distribution?
It's probably what `np.random.choice` does in @Ophion's answer, but you can construct a normalized cumulative density function, then choose based on a uniform random number: ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt data = np.random.normal(size=1000) hist, bins = np.histogram(data, bins=50) bin_midpoints = bins[:-1] + np.diff(bins)/2 cdf = np.cumsum(hist) cdf = cdf / cdf[-1] values = np.random.rand(10000) value_bins = np.searchsorted(cdf, values) random_from_cdf = bin_midpoints[value_bins] plt.subplot(121) plt.hist(data, 50) plt.subplot(122) plt.hist(random_from_cdf, 50) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/M1984.png) --- A 2D case can be done as follows: ``` data = np.column_stack((np.random.normal(scale=10, size=1000), np.random.normal(scale=20, size=1000))) x, y = data.T hist, x_bins, y_bins = np.histogram2d(x, y, bins=(50, 50)) x_bin_midpoints = x_bins[:-1] + np.diff(x_bins)/2 y_bin_midpoints = y_bins[:-1] + np.diff(y_bins)/2 cdf = np.cumsum(hist.ravel()) cdf = cdf / cdf[-1] values = np.random.rand(10000) value_bins = np.searchsorted(cdf, values) x_idx, y_idx = np.unravel_index(value_bins, (len(x_bin_midpoints), len(y_bin_midpoints))) random_from_cdf = np.column_stack((x_bin_midpoints[x_idx], y_bin_midpoints[y_idx])) new_x, new_y = random_from_cdf.T plt.subplot(121, aspect='equal') plt.hist2d(x, y, bins=(50, 50)) plt.subplot(122, aspect='equal') plt.hist2d(new_x, new_y, bins=(50, 50)) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/HzNhM.png)
@Jaime solution is great, but you should consider using the kde (kernel density estimation) of the histogram. A great explanation why it's problematic to do statistics over histogram, and why you should use kde instead can be found [here](http://nbviewer.ipython.org/url/mglerner.com/HistogramsVsKDE.ipynb) I edited @Jaime's code to show how to use kde from scipy. It looks almost the same, but captures better the histogram generator. ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde def run(): data = np.random.normal(size=1000) hist, bins = np.histogram(data, bins=50) x_grid = np.linspace(min(data), max(data), 1000) kdepdf = kde(data, x_grid, bandwidth=0.1) random_from_kde = generate_rand_from_pdf(kdepdf, x_grid) bin_midpoints = bins[:-1] + np.diff(bins) / 2 random_from_cdf = generate_rand_from_pdf(hist, bin_midpoints) plt.subplot(121) plt.hist(data, 50, normed=True, alpha=0.5, label='hist') plt.plot(x_grid, kdepdf, color='r', alpha=0.5, lw=3, label='kde') plt.legend() plt.subplot(122) plt.hist(random_from_cdf, 50, alpha=0.5, label='from hist') plt.hist(random_from_kde, 50, alpha=0.5, label='from kde') plt.legend() plt.show() def kde(x, x_grid, bandwidth=0.2, **kwargs): """Kernel Density Estimation with Scipy""" kde = gaussian_kde(x, bw_method=bandwidth / x.std(ddof=1), **kwargs) return kde.evaluate(x_grid) def generate_rand_from_pdf(pdf, x_grid): cdf = np.cumsum(pdf) cdf = cdf / cdf[-1] values = np.random.rand(1000) value_bins = np.searchsorted(cdf, values) random_from_cdf = x_grid[value_bins] return random_from_cdf ``` ![enter image description here](https://i.stack.imgur.com/PRj2G.jpg)
Random Number from Histogram
[ "", "python", "numpy", "scipy", "montecarlo", "" ]
For example I have 2 arrays ``` a = array([[0, 1, 2, 3], [4, 5, 6, 7]]) b = array([[0, 1, 2, 3], [4, 5, 6, 7]]) ``` How can I `zip` `a` and `b` so I get ``` c = array([[(0,0), (1,1), (2,2), (3,3)], [(4,4), (5,5), (6,6), (7,7)]]) ``` ?
You can use [dstack](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html): ``` >>> np.dstack((a,b)) array([[[0, 0], [1, 1], [2, 2], [3, 3]], [[4, 4], [5, 5], [6, 6], [7, 7]]]) ``` If you must have tuples: ``` >>> np.array(zip(a.ravel(),b.ravel()), dtype=('i4,i4')).reshape(a.shape) array([[(0, 0), (1, 1), (2, 2), (3, 3)], [(4, 4), (5, 5), (6, 6), (7, 7)]], dtype=[('f0', '<i4'), ('f1', '<i4')]) ``` --- For Python 3+ you need to expand the `zip` iterator object. Please note that this is horribly inefficient: ``` >>> np.array(list(zip(a.ravel(),b.ravel())), dtype=('i4,i4')).reshape(a.shape) array([[(0, 0), (1, 1), (2, 2), (3, 3)], [(4, 4), (5, 5), (6, 6), (7, 7)]], dtype=[('f0', '<i4'), ('f1', '<i4')]) ```
``` np.array([zip(x,y) for x,y in zip(a,b)]) ```
in Numpy, how to zip two 2-D arrays?
[ "", "python", "arrays", "numpy", "zip", "" ]
For a documentation project I am writing I need to include a table with date format strings. Now almost everything works fine, but at the end I have this slight problem where I want to print a literal ' quote and two literal quotes (separately and between other quotes). Sphinx changes these to up/down quotes, which usually looks really neat, but in this particular case makes the text unreadable. The best I could come up with was: ``` ====== =========== ====== ===================== ``'`` Escape for | " ``'`` hour ``'`` h" -> "hour 9" for text Delimiter ``''`` Single | "ss ``''`` SSS" -> "45 ``'`` 876" quote Literal ====== =========== ====== ===================== ``` This produces all the right quotes, but it inserts extra spaces before and after, which I would like to see removed, since the example is not syntactically correct that way. So one could also rephrase my question as: How to remove extra spaces before and after literal quotes when using backticks. I have tried standard ways of escaping. Backslashes have no effect, since ' is not a reStructuredText special character. If I remove the spaces the backticks `` won't work anymore. Sample output with extra spaces: ![enter image description here](https://i.stack.imgur.com/CF9It.png)
As mentioned by other people in the comments, this process where Sphinx changes plain `"` into curled `“` and so on is called SmartQuotes. I'm not sure if you specifically wanted the literals at all in the first place, or if they were only a compromise to avoid the SmartQuotes, but there are (at least) two ways to stop the SmartQuotes without needing to use literals: **1. Disable SmartQuotes for the whole project:** If you don't want SmartQuotes, either add: ``` smartquotes = False ``` to your `conf.py file` Or add a `docutils.conf` file at the same level as `conf.py` with this contents: ``` [parsers] smart_quotes: no ``` (solution from [this GitHub issue](https://github.com/sphinx-doc/sphinx/issues/3848#issuecomment-305918901); see the [Sphinx documentation](http://www.sphinx-doc.org/en/master/usage/configuration.html#confval-smartquotes) for how these two settings interact - TL;DR: switching off in `docutils.conf` will override even if they're turned on in `conf.py`) **2. Escape the individual quotes you don't want to be 'smart':** You can use double-backslashes `\\` to escape the quote marks you want to be straight, eg `\\'` and `\\"`. So in your example, this: ``` "\\'hour\\'h" -> "hour 9" "ss\\'\\'SSS" -> "45\\'876" ``` would give output of: ``` “'hour'h” -> “hour 9” “ss''SSS” -> “45'876” ``` with the outer double quotes left as 'smart', but I think that is the behaviour you wanted. (see the [official docutils documentation](http://docutils.sourceforge.net/docs/user/smartquotes.html#escaping) for details on escaping)
I just found the answer buried in the documentation: <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#inline-markup> It turns out you can escape space characters by using a backslash "\" before them. So "\ " will be rendered as "". This is useful for instances where you need to use whitespace for the formatting, but don't want that whitespace to show up, as in my question above. So the solution would be: ``` ====== =========== ====== ===================== ``'`` Escape for | "\ ``'``\ hour\ ``'``\ h" -> "hour 9" for text Delimiter ``''`` Single | "ss\ ``''``\ SSS" -> "45\ ``'``\ 876" quote Literal ====== =========== ====== ===================== ``` Ugly to read, but effective. Another example with inline formatting of function calls: ``` **dateformat (**\ *<string,number,void>* **sourceDate,** *string* **sourceDateFormat,** *string* **returnDateFormat)** ``` It turns out this is the only way to get the formatting to be correct (variable types italic and the rest bold, without having a space between the opening parenthesis and the variable type).
How to escape single quotes in reStructuredText when converting to HTML using Sphinx
[ "", "python", "python-sphinx", "restructuredtext", "docutils", "" ]
In igraph or networkx, what is the fastest way to find all simple paths of length 4 in a sparse directed graph? One way is to make a graph of a simple path of length 4 and use the subgraph isomorphism vf2 function. Is this the best/fastest way? I don't have a source node, I would like all simple paths of length 4 that exist in the whole graph. In my data there are likely to be very few such paths and I would like to be able to iterate over them efficiently.
Using a function like this: ``` def simple_paths(start, length, visited=[]): if length==0: yield(visited + [start]) else: for child in children(start): if child not in visited: for path in simple_paths(child, length-1, visited + [start]): yield(path) ``` You can list all simple paths of length 4 by calling ``` for start in nodes(): for path in simple_paths(start, 4): print path ``` The above assumes that `nodes()` returns an iterable of all nodes in the graph, and that `children(x)` returns an iterable of the children of node `x`. ![enter image description here](https://i.stack.imgur.com/Ex4Pb.png) Applying the `simple_paths()` function to the above graph correctly yields: ``` ['5', '9', '3', '1', '0'] ['6', '5', '9', '3', '1'] ['6', '5', '3', '1', '0'] ['9', '5', '3', '1', '0'] ``` This demonstrates that the function: * respects directed edges (e.g. it does not choose `['6', '5', '1', '3', '9']`) * only chooses simple paths (e.g. it does not choose `['6', '5', '3', '1', '5']`)
At the beginning, let's solve a simpler problem - calculate the number of paths of length 4. 1) Let A be an adjacency matrix of the given graph. `A[i][j] = 1` if there exists an edge between vertices I and J and 0 otherwise. `A^N` gives the number of paths of length `N`. 2) matrix *squaring* looks like ``` init(RES,0); for(row=1;N) for(col=1;N) for(common=1;N) RES[row][col] + = a[row][common]*a[common][col]; ``` The physical meaning of this construction is the following: For each degree `deg` of the given matrix `A`, `A[i][j]` stores the number of paths with length = `deg` from `i to j`. At the first stage the adjacency matrix is just storing the number of paths of length=1. When you multiply `A^N by A` you are trying to extend paths of length `N to N+1`. `a[row][common]*a[common][col]` can be interpreted as "there are `a[row][common]` ways of len=1 from `row` to `common` and `a[common][col]` ways of len=1 from `common` to `col`. According to the combinatorics multiplication principle the number of ways with len=2 from row to col through common is `a[row][common]*a[common][col]`". Now an important modification. You want to list all paths, not just count them! So, A[i][j] is not an integer number but a `vector` or `ArrayList` of integers. Replace `RES[row][col] + = a[row][common]*a[common][col]` with `RES[row][col].push_back(cartesian_product(a[row][common],a[common][col]))` Complexity of just counting paths is `matrix multiplication*degree` = N^3\*degree. Applying binary exponentiation you can get N^3\*log(degree). In our case degree=4, log(4) = 2, 2~4 - doesn't matter. But now you can't just multiply 2 numbers, you have to do a cartesian product of vectors - paths of len N - so the complexity increases accordingly. If you have some questions you are welcome.
Sub graph isomorphism
[ "", "python", "algorithm", "networkx", "igraph", "" ]
I wanted to create a regex which could match the following pattern: ``` 5,000 2.5 25 ``` This is the regex I have thus far: ``` re.compile('([\d,]+)') ``` How can I adjust for the `.`?
Easiest method would just be this: ``` re.compile('([\d,.]+)') ``` But this will allow inputs like `...`. This might be acceptable, since your original pattern allows `,,,`. However, if you want to allow only a single decimal point you can do this: ``` re.compile('([\d,]+\.?\d*)') ``` (note the escaped `\.` - an unescaped `.` would match any character). Note that this won't allow inputs like `.5`; you'd need to use `0.5` instead.
I think the perfect regex would be ``` re.compile(r'\d{1,2}[,.]\d{1,3}') ``` This way you match one or two digits followed by a comma or a full stop, and then one to three digits. You don't need the parentheses if you are not going to use the contents of the match later. Omitting them speeds up the process.
Regex to pick up comma and period Python
[ "", "python", "regex", "" ]
I am wanting to send the user an email whenever a new primary key is generated in Django. My code in **Models.py:** ``` class PurchaseOrder(models.Model): product = models.CharField(max_length=256) vendor = models.CharField(max_length=256) dollar_amount = models.FloatField() item_number = models.AutoField(primary_key=True) ``` **I have it coded to where I can send an email every time my page is refreshed** ``` email = EmailMessage('Purchase Order System', 'your message here', to=['youremail@gmail.com']) email.send() ``` **But how can I make it so that every time a new Primary key is generated, send an email?**
Use [django signals](https://docs.djangoproject.com/en/dev/topics/signals/). ``` from django.db.models import signals def send_email_on_new_order(instance, created, raw, **kwargs): # Fixtures or updates are not interesting. if not created or raw: return # `instance` is your PurchaseOrder instance. email = EmailMessage('Purchase Order System', 'your message here', to=['youremail@gmail.com']) email.send() signals.post_save.connect(send_email_on_new_order, sender=PurchaseOrder, dispatch_uid='send_email_on_new_order') ``` You have to paste this code somewhere it will be executed during loading django, e.g. `models.py` Example `models.py`: ``` from django.core.mail import EmailMessage from django.db import models from django.db.models import signals class PurchaseOrder(models.Model): product = models.CharField(max_length=256) vendor = models.CharField(max_length=256) dollar_amount = models.FloatField() item_number = models.AutoField(primary_key=True) def send_email_on_new_order(instance, created, raw, **kwargs): # Fixtures or updates are not interesting. if not created or raw: return # `instance` is your PurchaseOrder instance. email = EmailMessage('Purchase Order System', 'your message here', to=['youremail@gmail.com']) email.send() signals.post_save.connect(send_email_on_new_order, sender=PurchaseOrder, dispatch_uid='send_email_on_new_order') ```
Listen to the `post_save` signal. ``` from django.db.models import signals from django.dispatch import receiver @receiver(signals.post_save, sender=PurchaseOrder) def email_handler(sender, **kwargs): if kwargs['created']: email = EmailMessage('Purchase Order System', 'your message here', to=['youremail@gmail.com']) email.send() ``` Or define a custom `save` method in your model. ``` class PurchaseOrder(models.Model): def save(self, *args, **kwargs): # Check if this is a new key if not self.pk: email = EmailMessage('Purchase Order System', 'your message here', to=['youremail@gmail.com']) email.send() super(PurchaseOrder, self).save(*args, **kwargs) ```
How can I send an email based on a new primary key being generated through Django?
[ "", "python", "django", "email", "key", "" ]
Simple table: ``` create table [dbo].[payraisehistory] ( [id][int] not null, [payrate][money] not null, [empid][int] not null ) ``` // Query to return last payrate raise ``` select prh.empid, max(prh.dategrant) from payraisehistory prh group by prh.empid empid (No column name) 2 2013-07-30 00:00:00.000 3 2013-07-30 00:00:00.000 ``` How can I get the pay back as well....when I add the what the payrate was I get back all rows instead of the latest pay rate (per max(dategrant). I want to do this without using nested queries. What am I missing with the group by? ``` select prh.empid, prh.payrate, max(prh.dategrant) from payraisehistory prh group by prh.empid, payrate empid payrate (No column name) 2 20.00 2013-04-30 00:00:00.000 2 30.00 2013-05-30 00:00:00.000 2 40.00 2013-06-30 00:00:00.000 2 50.00 2013-07-30 00:00:00.000 3 100.00 2013-04-30 00:00:00.000 3 120.00 2013-07-30 00:00:00.000 ```
This is a case where you want to use `row_number()`: ``` select prh.empid, prh.payrate, prh.dategrant from (select prh.*, row_number() over (partition by prh.empid order by prh.dategrant desc) as seqnum from payraisehistory prh ) prh where seqnum = 1; ``` Your query is: ``` select prh.empid, prh.payrate, max(prh.dategrant) from payraisehistory prh group by prh.empid, payrate; ``` This is aggregating by `empid` *and* `payrate`. These seem to uniquely define the rows in your table, so you are getting all of them back. I suppose a payrate could be repeated, but your data does not have such an instance.
```
SELECT *
FROM
(
select prh.empid, prh.payrate,
CASE WHEN prh.dategrant = max(prh.dategrant) over(partition by prh.empid ORDER BY prh.empid) THEN 1 ELSE 0 END AS st
from payraisehistory prh
) AS T
WHERE t.st = 1
```
How to get salary back from with max(datetime) aggreate?
[ "", "sql", "sql-server-2008", "" ]
If I have two dictionaries: ``` [{u'id': 217110, u'label': u'Business A', u'score': 0}, {u'id': 217111, u'label': u'Business B', u'score': 0}, {u'id': 217112, u'label': u'Business Yada Yada', u'score': 0'}] ``` and ``` [{u'City': u'Damariscotta', u'EntityType': u'Maine Public', u'EntityTypeCode': u'1', u'Status1': u'Open', u'ID': u'0188', u'Name1': u'A Company 000', u'UpdateTimeUnix': 1363756455}, {u'City': u'Santa Barbra', u'EntityType': u'California Public', u'EntityTypeCode': u'1', u'Status1': u'Delayed', u'ID': u'1001', u'Name1': u'Business A', u'UpdateTimeUnix': 1363758764}, {u'City': u'Boise', u'EntityType': u'Idaho Public', u'EntityTypeCode': u'1', u'Status1': u'Closed', u'ID': u'1012', u'Name1': u'Business Yada Yada', u'UpdateTimeUnix': 1363759375}, {u'City': u'Elkhart', u'EntityType': u'Indiana Public', u'EntityTypeCode': u'1', u'Status1': u'Open', u'ID': u'1016', u'Name1': u'Business B', u'UpdateTimeUnix': 1363826341}] ``` In short, what is an efficient way to get this?: ``` [{u'id': 217110, u'label': u'Business A', u'score': 0, u'Status1': u'Delayed'}, {u'id': 217111, u'label': u'Business B', u'score': 0, u'Status1': u'Open'}, {u'id': 217112, u'label': u'Business Yada Yada', u'score': 0', u'Status1': u'Open'}] ```
Actually not a very interesting puzzle, supposing that these two lists are joined by `u'label'` and `u'Name1'` respectively. Now answering your question I'd say that an efficient way would be to create an index of elements in the second list by `u'Name1'` ``` name_index = {item[u'Name1']: i for i, item in enumerate(list2)} ``` Then you could easily update items in the first list with ``` for item in list1: item[u'Status1'] = list2[name_index[item[u'label']]][u'Status1'] ``` NB: the criterion of efficiency here is the speed of getting the status from the second list
First, this sounds like a homework problem. We all encourage askers to post what they have written so that we can comment on it. Yet you have not shown us what you have done yet. Second, the way the data is stored is not recommended. If the keys are from a small fixed set, you're better off using a namedtuple in a dictionary with a unique key for each object. Especially for the second list, in which you have to iterate to find a name. In that case the name value will be the key. See: <http://docs.python.org/2/library/collections.html> Third, please be clear in your question. We take the time to help you, so please take your time to help us too. Fourth, to do what you want: ``` for user in list1: for record in list2: if not record['Name1'] == user['label']: continue user['Status1'] = record['Status1'] ``` This is, I believe, the simplest and most efficient way to complete the task. All operations are done in place without generating a new object. A fancier search method is impossible in this case, since your object doesn't have a key (it's a dictionary...). Linear search is the best we can do.
Dictionaries: how to compare values and merge if values are the same?
[ "", "python", "" ]
I'm trying to create a histogram of a data column and plot it logarithmically (`y-axis`) and I'm not sure why the following code does not work: ``` import numpy as np import matplotlib.pyplot as plt data = np.loadtxt('foo.bar') fig = plt.figure() ax = fig.add_subplot(111) plt.hist(data, bins=(23.0, 23.5,24.0,24.5,25.0,25.5,26.0,26.5,27.0,27.5,28.0)) ax.set_xlim(23.5, 28) ax.set_ylim(0, 30) ax.grid(True) plt.yscale('log') plt.show() ``` I've also tried instead of `plt.yscale('log')` adding `Log=true` in the `plt.hist` line and also I tried `ax.set_yscale('log')`, but nothing seems to work. I either get an empty plot, either the `y-axis` is indeed logarithmic (with the code as shown above), but there is no data plotted (no bins).
try ``` plt.yscale('log', nonposy='clip') ``` <http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.yscale> The issue is with the bottom of bars being at y=0 and the default is to mask out in-valid points (`log(0)` -> undefined) when doing the log transformation (there was discussion of changing this, but I don't remember which way it went) so when it tries to draw the rectangles for you bar plot, the bottom edge is masked out -> no rectangles.
The hist constructor accepts the `log` parameter. You can do this: ``` plt.hist(data, bins=bins, log=True) ```
Logarithmic y-axis bins in python
[ "", "python", "matplotlib", "histogram", "logarithm", "" ]
I created a line that appends an object to a list in the following manner ``` >>> foo = list() >>> def sum(a, b): ... c = a+b; return c ... >>> bar_list = [9,8,7,6,5,4,3,2,1,0] >>> [foo.append(sum(i,x)) for i, x in enumerate(bar_list)] [None, None, None, None, None, None, None, None, None, None] >>> foo [9, 9, 9, 9, 9, 9, 9, 9, 9, 9] >>> ``` The line ``` [foo.append(sum(i,x)) for i, x in enumerate(bar_list)] ``` would give a pylint W1060 Expression is assigned to nothing, but since I am already using the foo list to append the values I don't need to assing the List Comprehension line to something. **My questions is more of a matter of programming correctness** Should I drop list comprehension and just use a simple for expression? ``` >>> for i, x in enumerate(bar_list): ... foo.append(sum(i,x)) ``` or is there a correct way to use both list comprehension an assign to nothing? **Answer** Thank you @user2387370, @kindall and @Martijn Pieters. For the rest of the comments I use append because I'm not using a list(), I'm not using i+x because this is just a simplified example. I left it as the following: ``` histogramsCtr = hist_impl.HistogramsContainer() for index, tupl in enumerate(local_ranges_per_histogram_list): histogramsCtr.append(doSubHistogramData(index, tupl)) return histogramsCtr ```
Yes, this is bad style. A list comprehension is to build a list. You're building a list full of `None`s and then throwing it away. Your actual desired result is a side effect of this effort. Why not define `foo` using the list comprehension in the first place? ``` foo = [sum(i,x) for i, x in enumerate(bar_list)] ``` If it is not to be a list but some other container class, as you mentioned in a comment on another answer, write that class to accept an iterable in its constructor (or, if it's not your code, subclass it to do so), then pass it a generator expression: ``` foo = MyContainer(sum(i, x) for i, x in enumerate(bar_list)) ``` If `foo` already has some value and you wish to append new items: ``` foo.extend(sum(i,x) for i, x in enumerate(bar_list)) ``` If you *really* want to use `append()` and don't want to use a `for` loop for some reason then you *can* use this construction; the generator expression will at least avoid wasting memory and CPU cycles on a list you don't want: ``` any(foo.append(sum(i, x)) for i, x in enumerate(bar_list)) ``` But this is a good deal less clear than a regular `for` loop, and there's still some extra work being done: `any` is testing the return value of `foo.append()` on each iteration. You can write a function to consume the iterator and eliminate that check; the fastest way uses a zero-length `collections.deque`: ``` from collections import deque do = deque([], maxlen=0).extend do(foo.append(sum(i, x)) for i, x in enumerate(bar_list)) ``` This is actually fairly readable, but I believe it's not actually any faster than `any()` and requires an extra import. However, either `do()` or `any()` is a little faster than a `for` loop, if that is a concern.
I think it's generally frowned upon to use list comprehensions just for side-effects, so I would say a for loop is better in this case. But in any case, couldn't you just do `foo = [sum(i,x) for i, x in enumerate(bar_list)]`?
When to drop list Comprehension and the Pythonic way?
[ "", "python", "list-comprehension", "pylint", "code-complete", "" ]
So, basically, I need a program that opens a .dat file, checks each line to see if it meets certain prerequisites, and if they do, copy them into a new csv file. The prerequisites are that it must 1) contain "$W" or "$S" and 2) have the last value at the end of the line of the DAT say one of a long list of acceptable terms. (I can simply make up a list of terms and hardcode them into a list) For example, if the CSV was a list of purchase information and the last item was what was purchased, I only want to include fruit. In this case, the last item is an ID Tag, and I only want to accept a handful of ID Tags, but there is a list of about 5 acceptable tags. The Tags have very variable length, however, but they are always the last item in the list (and always the 4th item on the list) Let me give a better example, again with the fruit. My original .DAT might be: ``` DGH$G$H $2.53 London_Port Gyro DGH.$WFFT$Q5632 $33.54 55n39 Barkdust UYKJ$S.52UE $23.57 22#3 Apple WSIAJSM_33$4.FJ4 $223.4 Ha25%ek Banana ``` Only the line: "UYKJ$S $23.57 22#3 Apple" would be copied because only it has both 1) $W or $S (in this case a $S) and 2) The last item is a fruit. Once the .csv file is made, I am going to need to go back through it and replace all the spaces with commas, but that's not nearly as problematic for me as figuring out how to scan each line for requirements and only copy the ones that are wanted. I am making a few programs all very similar to this one, that open .dat files, check each line to see if they meet requirements, and then decide to copy them to the new file or not. But sadly, I have no idea what I am doing. They are all similar enough that once I figure out how to make one, the rest will be easy, though. EDIT: The .DAT files are a few thousand lines long, if that matters at all. 
EDIT2: Here are some of my current code snippets. Right now, my current version is this: ``` def main(): #NewFile_Loc = C:\Users\J18509\Documents OldFile_Loc=raw_input("Input File for MCLG:") OldFile = open(OldFile_Loc,"r") OldText = OldFile.read() # for i in range(0, len(OldText)): # if (OldText[i] != " "): # print OldText[i] i = split_line(OldText) if u'$S' in i: # $S is in the line print i main() ``` But it's very choppy still. I'm just learning python. Brief update: the server I am working on is down, and might be for the next few hours, but I have my new code, which has syntax errors in it, but here it is anyways. I'll update again once I get it working. Thanks a bunch everyone! ``` import os NewFilePath = "A:\test.txt" Acceptable_Values = ('Apple','Banana') #Main def main(): if os.path.isfile(NewFilePath): os.remove(NewFilePath) NewFile = open (NewFilePath, 'w') NewFile.write('Header 1,','Name Header,','Header 3,','Header 4) OldFile_Loc=raw_input("Input File for Program:") OldFile = open(OldFile_Loc,"r") for line in OldFile: LineParts = line.split() if (LineParts[0].find($W)) or (LineParts[0].find($S)): if LineParts[3] in Acceptable_Values: print(LineParts[1], ' is accepted') #This Line is acceptable! NewFile.write(LineParts[1],',',LineParts[0],',',LineParts[2],',',LineParts[3]) OldFile.close() NewFile.close() main() ```
```
import os

NewFilePath = "A:\\test.txt"
Acceptable_Values = ('Apple', 'Banana')

#Main
def main():
    if os.path.isfile(NewFilePath):
        os.remove(NewFilePath)
    NewFile = open(NewFilePath, 'w')
    NewFile.write('Header 1,Name Header,Header 3,Header 4\n')
    OldFile_Loc = raw_input("Input File for Program:")
    OldFile = open(OldFile_Loc, "r")
    for line in OldFile:
        LineParts = line.split()
        if ('$W' in LineParts[0]) or ('$S' in LineParts[0]):
            if LineParts[3] in Acceptable_Values:
                print(LineParts[1] + ' is accepted')  #This Line is acceptable!
                NewFile.write(LineParts[1] + ',' + LineParts[0] + ',' + LineParts[2] + ',' + LineParts[3] + '\n')
    OldFile.close()
    NewFile.close()

main()
``` This worked great, and has all the capabilities I needed. The other answers are good, but none of them do 100% of what I needed like this one does.
There are two parts you need to implement: First, read a file line by line and write lines meeting a specific criteria. This is done by ``` with open('file.dat') as f: for line in f: stripped = line.strip() # remove '\n' from the end of the line if test_line(stripped): print stripped # Write to stdout ``` The criteria you want to check for are implemented in the function `test_line`. To check for the occurrence of "$W" or "$S", you can simply use the `in`-Operator like ``` if not '$W' in line and not '$S' in line: return False else: return True ``` To check, if the last item in the line is contained in a fixed list, first split the line using `split()`, then take the last item using the index notation `[-1]` (negative indices count from the end of a sequence) and then use the `in` operator again against your fixed list. This looks like ``` items = line.split() # items is an array of strings last_item = items[-1] # take the last element of the array if last_item in ['Apple', 'Banana']: return True else: return False ``` Now, you combine these two parts into the `test_line` function like ``` def test_line(line): if not '$W' in line and not '$S' in line: return False items = line.split() # items is an array of strings last_item = items[-1] # take the last element of the array if last_item in ['Apple', 'Banana']: return True else: return False ``` Note that the program writes the result to stdout, which you can easily redirect. If you want to write the output to a file, have a look at [Correct way to write line to file in Python](https://stackoverflow.com/questions/6159900/correct-way-to-write-line-to-file-in-python)
Python: Copying lines that meet requirements
[ "", "python", "" ]
I am using MySQL my database created is like this: ``` Create table Author ( Id int , Name nvarchar(max) ); Create table Image ( Id int ,ImagePath nvarchar(max) ); Create table Blog ( Id int ,Name nvarchar(max) ,AuthorId int ); Create table BlogImages ( Id int ,BlogId int ,ImageId int ); ``` now when I am trying this below part I am getting errors ``` ALTER TABLE Blog ADD FOREIGN KEY (AuthorId) REFERENCES Author(Id) ALTER TABLE BlogImages ADD FOREIGN KEY (BlogId) REFERENCES Blog(Id) ALTER TABLE BlogImages ADD FOREIGN KEY (ImageId) REFERENCES Image(Id) ``` Please guide I need to add foreign keys to my tables
You have to define `primary keys`: ```
Create table Author
(
Id int ,
Name nvarchar(max),
PRIMARY KEY (id)
);

Create table Image
(
Id int ,
ImagePath nvarchar(max),
PRIMARY KEY (id)
);

Create table Blog
(
Id int ,
Name nvarchar(max) ,
AuthorId int,
PRIMARY KEY (id)
);

Create table BlogImages
(
Id int ,
BlogId int ,
ImageId int,
PRIMARY KEY (id)
);
``` or this way: ```
ALTER TABLE Author ADD PRIMARY KEY (Id);
ALTER TABLE Image ADD PRIMARY KEY (Id);
ALTER TABLE Blog ADD PRIMARY KEY (Id);
ALTER TABLE BlogImages ADD PRIMARY KEY (Id);
``` [**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/70610/2/0) You can also add the primary key as a named constraint, which is the way I prefer: ```
ALTER TABLE Author ADD CONSTRAINT pk_AuthorID PRIMARY KEY (ID);
ALTER TABLE Image ADD CONSTRAINT pk_ImageID PRIMARY KEY (ID);
ALTER TABLE Blog ADD CONSTRAINT pk_BlogID PRIMARY KEY (ID);
ALTER TABLE BlogImages ADD CONSTRAINT pk_BlogImagesID PRIMARY KEY (ID);
```
You can add query to create table ``` CREATE TABLE author ( Id int , Name varchar(255) ); ``` When you want alter Author table then you can write the below query ``` ALTER TABLE author ADD test varchar(255) ``` When you want to add primary key to id you can put below query ``` ALTER TABLE author ADD Primary key author(Id) ```
foreign keys not adding instead giving errors
[ "", "mysql", "sql", "" ]
I have an SQL Trigger FOR INSERT, UPDATE I created which basically does the following: Gets a LineID (PrimaryID for the table) and RegionID From the Inserted table and stores this in INT variables. It then does a check on joining tables to find what the RegionID should be and if the RegionID is not equal what it should be from the Inserted table, then it should update that record. ``` CREATE TRIGGER [dbo].[TestTrigger] ON [dbo].[PurchaseOrderLine] FOR INSERT, UPDATE AS -- Find RegionID and PurchaseOrderLineID DECLARE @RegionID AS INT DECLARE @PurchaseOrderLineID AS INT SELECT @RegionID = RegionID, @PurchaseOrderLineID = PurchaseOrderLineID FROM Inserted -- Find PurchaserRegionID (if any) for the Inserted Line DECLARE @PurchaserRegionID AS INT SELECT @PurchaserRegionID = PurchaserRegionID FROM (... ) UpdateRegionTable WHERE UpdateRegionTable.PurchaseOrderLineID = @PurchaseOrderLineID -- Check to see if the PurchaserRegionID has a value IF @PurchaserRegionID IS NOT NULL BEGIN -- If PurchaserRegionID has a value, compare it with the current RegionID of the Inserted PurchaseOrderLine, and if not equal then update it IF @PurchaserRegionID <> @RegionID BEGIN UPDATE PurchaseOrderLine SET RegionID = @PurchaserRegionID WHERE PurchaseOrderLineID = @PurchaseOrderLineID END END ``` The problem I have is that it is not updating the record and I'm guessing, it is because the record hasn't been inserted yet into the PurchaseOrderLine table and I'm doing an update on that. But can you update the row which will be inserted from the Inserted table?
This has been resolved. I resolved the problem by adding the trigger to another table as the IF @PurchaserRegionID IS NOT NULL was always false.
The major problem with your trigger is that it's written in assumption that you always get only one row in `INSERTED` virtual table. SQL Server triggers are statement-triggers not row-triggers. You have to take that fact into consideration. Now if I understand correctly the logic behind this trigger then you need just one update statement in it ``` CREATE TRIGGER TestTrigger ON PurchaseOrderLine FOR INSERT, UPDATE AS UPDATE l SET RegionID = u.PurchaserRegionID FROM PurchaseOrderLine l JOIN INSERTED i ON l.PurchaseOrderLineID = i.PurchaseOrderLineID JOIN ( SELECT PurchaseOrderLineID, PurchaserRegionID FROM UpdateRegionTable -- !!! change this for your proper subquery ) u ON l.PurchaseOrderLineID = u.PurchaseOrderLineID ``` *For this example I've created a fake table `UpdateRegionTable`. You have to change it to the proper query that returns `PurchaseOrderLineID, PurchaserRegionID` (in your code you replaced it with `...`). Make sure that it returns all necessary rows, not **one**.* Here is **[SQLFiddle](http://sqlfiddle.com/#!3/8891a/1)** demo
SQL Insert, Update Trigger - Can you update the inserted table?
[ "", "sql", "sql-server", "triggers", "insert", "" ]
I wrote this to calculate the minimum number of bills and coins needed to make change. Can this be done using a loop? ``` def user_change(balance): twen = int(balance/20) balance=balance%20 ten = int(balance/10) balance=balance%10 five = int(balance/5) balance = balance%5 ones = int(balance/1) balance = balance%1 quart = int( balance/0.25) balance = balance%0.25 dime = int(balance/0.10) balance = balance%0.10 nickel = int(balance/0.05) balance = balance%0.05 pennies = int(balance/0.05) print twen print ten print five print ones print quart print dime print nickel print pennies user_change(34.36) ```
EDIT: updated answer per roippi@ suggestion: ``` from collections import OrderedDict _currency_values = [ ('twenties',20), ('tens',10), ('fives',5), ('ones',1), ('quarters',0.25), ('dimes',0.10), ('nickels',0.05), ('pennies',0.01), ] currency_values = OrderedDict(_currency_values) def user_change(balance): user_change_results = [] for currency in currency_values.keys(): #print balance if balance == 0: break currency_amount = currency_values[currency] currency_change_amount = balance//currency_amount user_change_results.append((currency,currency_change_amount)) balance-=(currency_change_amount*currency_amount) return user_change_results if __name__ == '__main__': print user_change(34.36) ``` --- ORIGINAL RESPONSE: Here's my approach. Similar to roippi@ but with descriptors for each currency amount: ``` currency_values = { 'twenties' : 20, 'tens' : 10, 'fives' : 5, 'ones' : 1, 'quarters' : 0.25, 'dimes' : 0.10, 'nickels' : 0.05, 'pennies' : 0.01, } currency_order = ['twenties','tens','fives','ones','quarters','dimes','nickels','pennies'] def user_change(balance): user_change_results = [] for currency in currency_order: #print balance if balance == 0: break currency_amount = currency_values[currency] currency_change_amount = balance//currency_amount user_change_results.append((currency,currency_change_amount)) balance-=(currency_change_amount*currency_amount) return user_change_results if __name__ == '__main__': print user_change(34.36) ```
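One caveat with both versions above: repeated `//` and `-=` on floats like `0.10` can misplace a penny to floating-point error. A variant of the same loop that first converts the balance to integer cents sidesteps that; the denomination names and values mirror the question:

```python
DENOMINATIONS = [
    ('twenties', 2000), ('tens', 1000), ('fives', 500), ('ones', 100),
    ('quarters', 25), ('dimes', 10), ('nickels', 5), ('pennies', 1),
]

def user_change(balance):
    cents = int(round(balance * 100))  # work in whole cents from here on
    result = []
    for name, value in DENOMINATIONS:
        count, cents = divmod(cents, value)  # how many of this bill/coin fit
        result.append((name, count))
    return result

change = dict(user_change(34.36))
```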
This is a good time (okay it's *always* a good time) to make things easier on yourself and first think about data structures. You have a list of currency (keys) that, for each key, you want to find one unique amount for (value). k:v pairings mean a `dict`, so fill one up in lieu of just printing the value; you can always print later... ``` def make_change(bal): currency = [20,10,5,1,.25,.1,.05,.01] change = {} for unit in currency: change[unit] = int(bal // unit) bal %= unit return change ``` (whenever you get to use the `%=` operator you should feel cool)
Can this be done by using a loop? Or maybe by using lists?
[ "", "python", "loops", "" ]
I have two numpy boolean arrays (`a` and `b`). I need to find how many of their elements are equal. Currently, I do `len(a) - (a ^ b).sum()`, but the xor operation creates an entirely new numpy array, as I understand. How do I efficiently implement this desired behavior without creating the unnecessary temporary array? I've tried using numexpr, but I can't quite get it to work right. It doesn't support the notion that True is 1 and False is 0, so I have to use `ne.evaluate("sum(where(a==b, 1, 0))")`, which takes about twice as long. **Edit:** I forgot to mention that one of these arrays is actually a view into another array of a different size, and both arrays should be considered immutable. Both arrays are 2-dimensional and tend to be somewhere around 25x40 in size. Yes, this is the bottleneck of my program and is worth optimizing.
On my machine this is faster: ``` (a == b).sum() ``` If you don't want to use any extra storage, than I would suggest using numba. I'm not too familiar with it, but this seems to work well. I ran into some trouble getting Cython to take a boolean NumPy array. ``` from numba import autojit def pysumeq(a, b): tot = 0 for i in xrange(a.shape[0]): for j in xrange(a.shape[1]): if a[i,j] == b[i,j]: tot += 1 return tot # make numba version nbsumeq = autojit(pysumeq) A = (rand(10,10)<.5) B = (rand(10,10)<.5) # do a simple dry run to get it to compile # for this specific use case nbsumeq(A, B) ``` If you don't have numba, I would suggest using the answer by @user2357112 Edit: Just got a Cython version working, here's the `.pyx` file. I'd go with this. ``` from numpy cimport ndarray as ar cimport numpy as np cimport cython @cython.boundscheck(False) @cython.wraparound(False) def cysumeq(ar[np.uint8_t,ndim=2,cast=True] a, ar[np.uint8_t,ndim=2,cast=True] b): cdef int i, j, h=a.shape[0], w=a.shape[1], tot=0 for i in xrange(h): for j in xrange(w): if a[i,j] == b[i,j]: tot += 1 return tot ```
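For intuition, and as a plain-Python reference to check the NumPy/Numba/Cython variants against, the quantity being computed is just the number of positions where the two arrays agree; a list-based sketch:

```python
def count_equal(a, b):
    # a and b are equal-shaped 2-D lists (here of booleans)
    return sum(x == y
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

a = [[True, False, True], [False, False, True]]
b = [[True, True,  True], [False, True,  True]]
```

This is far slower than `(a == b).sum()` on real arrays, but useful for verifying results on small inputs.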
To start with you can skip then A\*B step: ``` >>> a array([ True, False, True, False, True], dtype=bool) >>> b array([False, True, True, False, True], dtype=bool) >>> np.sum(~(a^b)) 3 ``` If you do not mind destroying array a or b, I am not sure you will get faster then this: ``` >>> a^=b #In place xor operator >>> np.sum(~a) 3 ```
Numpy sum of operator results without allocating an unnecessary array
[ "", "python", "arrays", "optimization", "numpy", "numexpr", "" ]
I currently have a django site that is using cookie sessions. I've noticed that the session is randomly being logged out. When debugging this I found it was due to the logic within my code looking at the age of the session. However, I then noticed that during the periods when there were no issues the correct time showed in the Apache timestamp. But then the timestamp went back 5 hours, which then caused my Django program to believe the session had expired and log it out. Below shows you an example, ``` [Wed Jul 31 16:12:45 2013] [error] DEBUG ok elapsed time 7 [Wed Jul 31 16:12:45 2013] [error] DEBUG ok elapsed time 1 [Wed Jul 31 10:12:46 2013] [error] DEBUG : request.user.is_authenticated() [Wed Jul 31 10:12:46 2013] [error] DEBUG ok elapsed time 64809 [Wed Jul 31 10:12:46 2013] [error] DEBUG since begin . elapsedTime.seconds 64809 [Wed Jul 31 10:12:46 2013] [error] DEBUG request.session\\['deginRequest'\\] [Wed Jul 31 10:12:46 2013] [error] DEBUG ok elapsed time 64801 [Wed Jul 31 10:12:46 2013] [error] DEBUG since last req . elapsedTime.seconds 64801 [Wed Jul 31 10:12:46 2013] [error] DEBUG request.session\\['lastRequest'\\] [Wed Jul 31 10:12:47 2013] [error] DEBUG : shouldLogout ``` The issue also happens randomly. Any ideas? Also here is the middleware I'm using (which generates these logs), ``` class timeOutMiddleware(object): def process_request(self, request): shouldLogout = False if request.user.is_authenticated(): print "DEBUG :request.user.is_authenticated()" if 'beginSession' in request.session: elapsedTime = datetime.datetime.now() - \ request.session['beginSession'] print "DEBUG ok elapsed time",elapsedTime.seconds if elapsedTime.seconds > 12*3600: print "DEBUG since begin . elapsedTime.seconds",elapsedTime.seconds del request.session['beginSession'] print "DEBUG request.session\[\'deginRequest\'\]" shouldLogout = True else: request.session['beginSession'] = datetime.datetime.now() if 'lastRequest' in request.session: elapsedTime = datetime.datetime.now() - \ request.session['lastRequest'] print "DEBUG ok elapsed time",elapsedTime.seconds if elapsedTime.seconds > 2*3600: print "DEBUG since last req . elapsedTime.seconds",elapsedTime.seconds del request.session['lastRequest'] shouldLogout = True request.session['lastRequest'] = datetime.datetime.now() username = request.user if ####.objects.get(username=username).token: print "DEBUG : ####.objects.get(username=username).token" try: token = ####.objects.get(username=username).token url = 'https://############/%s' % (token) response = requests.get(url) answer = json.loads(response.text) #print "DEBUG answer",answer if not answer["valid"]: shouldLogout = True print "DEBUG",datetime.now(),"not answer[\"valid\"]" except: shouldLogout = True print "DEBUG except" else: shouldLogout = True print "DEBUG else" if shouldLogout: print "DEBUG : ",datetime.datetime.now(),"shouldLogout" logout(request) else: if 'beginSession' in request.session: del request.session['beginSession'] if 'lastRequest' in request.session: del request.session['lastRequest'] return None ```
The Apache logs indicate a very discrete time step of 5 hours, so this is unlikely to be an underlying system time adjustment by a time keeping daemon such as ntp. I'd guess that it's almost certain to be related to a timezone setting. If you're using mod\_wsgi to serve your app in non-daemon mode, there could be shared state between processes from the environment. In particular, you should take note of the information in the following link: <https://code.google.com/p/modwsgi/wiki/ApplicationIssues#Timezone_and_Locale_Settings> As suggested in other answers, it's a good idea to always store time in UTC and only convert to a specific time zone at presentation. Consider running your application in wsgi daemon mode. From the [mod\_wsgi docs](https://code.google.com/p/modwsgi/) > Although embedded mode can technically perform better, daemon mode > would generally be the safest choice to use. Daemon mode ensures that you have an isolated process handling your application with no possibility for environment contamination from other wsgi processes, e.g. TZ environment variable changes.
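The "store UTC, convert at presentation" advice can be sketched with the standard library alone; a stray `TZ` flip between UTC and a zone 5 hours behind it produces exactly the kind of jump seen in the question's logs (the fixed -5 offset here is for illustration only, not a claim about the poster's actual zone):

```python
from datetime import datetime, timezone, timedelta

# store timestamps in UTC...
stamp = datetime(2013, 7, 31, 16, 12, 45, tzinfo=timezone.utc)

# ...and convert only when presenting to a user in a given zone
behind_five = timezone(timedelta(hours=-5))
local = stamp.astimezone(behind_five)

shift = stamp.hour - local.hour  # the wall-clock moves, the instant does not
```

Note that `stamp == local` still holds: aware datetimes compare as instants, so session-age arithmetic done on aware UTC values is immune to display-zone changes.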
This seems an OS/HW misconfiguration. I suspect that some times your network fails and OS (Operating Sistem) need to get data from hardware clock but ntp. You should ensure than **BIOS clock UTC configuration match with `/etc/default/rcS` parameters**: ``` # assume that the BIOS clock is set to UTC time (recommended) UTC=no ``` Also, ensure your timezone configuration is right: ``` root@egg-v3:/etc# cat /etc/timezone Europe/Barcelona ``` Some easy tests: * start machine without ntp. If system data is not right, try it changing `UTC` parameter. * Play with `hwclock` command: Sample: ``` root@egg-v3:/etc# hwclock Fri 02 Aug 2013 04:30:54 PM CEST -0.609670 seconds root@egg-v3:/etc# date Fri Aug 2 14:30:55 CEST 2013 ```
Apache Time Issue
[ "", "python", "django", "apache", "" ]
I have two variables, 1 varchar named `cust_ref`, and 1 int named `associated_ids`. What I'm trying to accomplish is the following: You provide `cust_ref` with a value, which will usually result in one or more rows being returned from the `Customer` table. I am concerned with gathering all `customer_id` records for that `cust_ref` and storing them in the `associated_ids` variable separated by commas. This is the SQL I have so far, which obviously is only loading one of the `customer_id` records into the variable. Based on this example, I would like `select @associated_ids` to return the following `75458,77397,94955` ``` declare @cust_ref varchar(20) = 'ABGR55532' declare @associated_ids int select distinct @associated_ids = customer_id from dbo.Customer where cust_ref = @cust_ref select @associated_ids select * from dbo.Customer where cust_ref = @cust_ref ``` Here are the results from the above; as you can see there are actually 3 `associated_ids` that I need stored in the variable in this example, but my command is capturing the largest. I want all 3 separated by commas. ![enter image description here](https://i.stack.imgur.com/lVZ0k.png)
``` declare @cust_ref varchar(20) = 'ABGR55532' --from your code DECLARE @result varchar(100) set @result = (SELECT distinct (customer_id + ' ') FROM dbo.Customer where cust_ref = @cust_ref --from your code ORDER BY (customer_id + ' ') FOR XML PATH ('')) SELECT REPLACE(RTRIM(@result),' ',',') ```
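For readers more at home in Python than in the `FOR XML PATH('')` idiom: the T-SQL above is doing what a plain string join does, concatenating the distinct ids with a separator (ids hard-coded from the question's screenshot):

```python
associated = [75458, 77397, 94955]

# distinct, ordered, comma-separated -- the whole FOR XML PATH trick in one line
associated_ids = ','.join(str(i) for i in sorted(set(associated)))
```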
You could try something like this ... obviously, some adjustment will be needed: ``` create table x (id varchar(50),num int) insert into x (id,num) values ('75458','20') insert into x (id,num) values ('77397','20') insert into x (id,num) values ('94955','20') ``` and then, ``` create function GetList (@num as varchar(10)) returns varchar(100) as begin declare @List varchar(100) select @List = COALESCE(@List + ', ', '') + id from x where num = @num return @List end ``` Then, use something like this to get the values: ``` select distinct num,dbo.GetList(num) from x ```
SQL - Storing multiple records in a variable separated by commas
[ "", "sql", "sql-server-2008", "t-sql", "variables", "" ]
I'm working on a Pivot problem. I came up with some code, but have been unsuccessful in working out the coding. Please could someone give me some guidance on what I'm doing wrong here? I have table 1, which is created in the code below: ``` create table T1 ( [name] varchar(30) ,[size] int ,[DT] date) insert into T1 values ( 'x1', 14, '01/03/2013' ); insert into T1 values ( 'x1', 134, '01/04/2013' ); insert into T1 values ( 'x1', 199, '01/05/2013' ); insert into T1 values ( 'x1', 284, '01/06/2013' ); insert into T1 values ( 'x2', 212, '01/03/2013' ); insert into T1 values ( 'x2', 369, '01/04/2013' ); insert into T1 values ( 'x2', 439, '01/05/2013' ); insert into T1 values ( 'x2', 555, '01/06/2013' ); ``` I need to Pivot the table into this format: ``` 01/03/2013 01/04/2013 01/05/2013 01/06/2013 X1 14 134 199 284 X2 212 369 439 555 ``` Here is the code I've been working on, but have been unsuccessful in working it into the above output. Any ideas or pointers for me? Thanks in advance... ``` declare @DateList as varchar(max) declare @dynamic_PQ as varchar(max) select @DateList = stuff( ( select DISTINCT ', '+ Quotename(CONVERT(VARCHAR(10),DT,110)) from ( select dt from t1 ) t for xml path ('') ),1,1,'') select @DateList set @dynamic_PQ = 'select [GuestID], ' + @DateList + ' from ( Select [name], size, STUFF((SELECT distinct '', '' + convert(a2.size as varch(10)) from t1 a2 where src.name = a2.name and src.dt = a2.dt FOR XML PATH(''''), TYPE ).value(''.'', ''NVARCHAR(MAX)'') ,1,1,'''') answer from ( select name, dt , size from t1 ) src ) as S PIVOT ( MAX([size]) for Question IN (' + @DateList + ') ) as P Exec(@dynamic_PQ) ``` PS: If there is something special you have to do to get anyone to answer/respond to your post, please feel free to share it in your response.
You have unclosed quotation marks, extra unneeded brackets, and a wrong column name; also, the whole STUFF FOR XML part inside the dynamic SQL is not needed for the output described. ``` declare @DateList as varchar(max) declare @dynamic_PQ as varchar(max) select @DateList = stuff( ( select DISTINCT ', '+ Quotename(CONVERT(VARCHAR(10),DT,110)) from ( select dt from t1 ) t for xml path ('') ),1,1,'') --select @DateList set @dynamic_PQ ='SELECT * from ( select name, dt , size from t1 ) as S PIVOT ( MAX([size]) for DT IN ('+ @DateList +') ) as P' EXEC (@dynamic_PQ) ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/3638d/21)**
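The shape of the transformation, independent of the dynamic-SQL details, is easy to see in plain Python: group the rows by name, and turn the dates into keys (data copied from the question's inserts):

```python
rows = [
    ('x1', 14, '01/03/2013'), ('x1', 134, '01/04/2013'),
    ('x1', 199, '01/05/2013'), ('x1', 284, '01/06/2013'),
    ('x2', 212, '01/03/2013'), ('x2', 369, '01/04/2013'),
    ('x2', 439, '01/05/2013'), ('x2', 555, '01/06/2013'),
]

pivot = {}
for name, size, dt in rows:
    # one dict per name; the dates become the "columns"
    pivot.setdefault(name, {})[dt] = size
```

The dynamic `@DateList` in the T-SQL exists only because SQL needs the column list spelled out up front, which a dict does not.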
Your query has multiple errors. This version works: ``` select [name], [01-03-2013], [01-04-2013], [01-05-2013], [01-06-2013] from ( Select [name], size, STUFF((SELECT distinct ', ' + cast(a2.size as varchar(10)) from t1 a2 where src.name = a2.name and src.dt = a2.dt FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'' ) as question from ( select name, dt , size from t1 ) src ) S PIVOT ( MAX([size]) for Question IN ([01-03-2013], [01-04-2013], [01-05-2013], [01-06-2013]) )as pvt; ``` You should debug this by printing out the SQL and running it. Here are a few of the problems: * You use `question` in the pivot but define `answer` in the code. * The expression `convert(a2.size as varch(10))` is non-sensical SQL * The `pivot` statement requires an alias * The column `GuestId` is not defined
Pivot Table Code, Need some ideas
[ "", "sql", "sql-server", "sql-server-2008-r2", "pivot", "" ]
I have the following code in my template: ``` data: [{% for deet in deets %} {{ deet.value*100|round(1) }}{% if not loop.last %},{% endif %} {% endfor %}] ``` I am expecting data rounded to 1 decimal place. However, when I view the page or source, this is the output I'm getting: ``` data: [ 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818, 44.2765833818 ] ``` This is not rounded to 1 decimal place. It runs without a template error or anything, but produces incorrect output. My understanding [from the documentation](http://jinja.pocoo.org/docs/templates/#round), and even [a related stack overflow question](https://stackoverflow.com/questions/7675653/rounding-in-jinja2-brackets), is that my format should work. What am I missing or doing wrong?
Didn't realize the filter operator had precedence over multiplication! Following up on bernie's comment, I switched ``` {{ deet.value*100|round(1) }} ``` to ``` {{ 100*deet.value|round(1) }} ``` which solved the problem. I agree the processing should happen in the code elsewhere, and that would be better practice.
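A caveat worth double-checking in plain Python (Jinja's `round` behaves like Python's here): because the filter binds tighter than `*`, rounding before multiplying is not the same computation as multiplying before rounding, so the fully explicit, order-proof form is `{{ (deet.value*100)|round(1) }}`. The sample value is taken from the question's output:

```python
value = 0.442765833818  # e.g. deet.value from the question

multiply_then_round = round(value * 100, 1)   # like (deet.value*100)|round(1)
round_then_multiply = round(value, 1) * 100   # rounds first, then scales
```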
You can put parens around the value that you want to round. (This works for division as well, contrary to what @sobri wrote.) ``` {{ (deet.value/100)|round }} ``` NOTE: `round` returns a `float` so if you really want the `int` you have to pass the value through that filter as well. ``` {{ (deet.value/100)|round|int }} ```
Jinja2 round filter not rounding
[ "", "python", "flask", "jinja2", "template-engine", "" ]
I have the sample values below in a column: ``` Abc-123-xyz Def-456-uvw Ghi-879-rst-123 Jkl-abc ``` The expected output is the third element when split by '-'; in case there is no third element, the last element should be retrieved. See the expected output below: ``` Xyz Uvw Rst Abc ``` Thanks in advance for the help.
``` SELECT initcap(nvl(regexp_substr(word, '[^-]+', 1,3),regexp_substr(word, '[^-]+', 1,2))) FROM your_table; ```
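The same pick-the-third-or-else-last rule, written in Python for comparison and testing (sample data from the question; `capitalize()` stands in for `initcap` on single words):

```python
def third_or_last(value):
    parts = value.split('-')
    # third piece if it exists, otherwise the last one
    piece = parts[2] if len(parts) >= 3 else parts[-1]
    return piece.capitalize()

samples = ['Abc-123-xyz', 'Def-456-uvw', 'Ghi-879-rst-123', 'Jkl-abc']
picked = [third_or_last(s) for s in samples]
```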
Another approach: ``` SQL> with t1(col) as( 2 select 'Abc-123-xyz' from dual union all 3 select 'Def-456-uvw' from dual union all 4 select 'Ghi-879-rst-123' from dual union all 5 select 'Jkl-Abc' from dual 6 ) 7 select regexp_substr( col 8 , '[^-]+' 9 , 1 10 , case 11 when regexp_count(col, '[^-]+') >= 3 12 then 3 13 else regexp_count(col, '[^-]+') 14 end 15 ) as res 16 from t1 17 ; ``` Result: ``` RES --------------- xyz uvw rst Abc ```
Oracle SQL : Regexp_substr
[ "", "sql", "regex", "oracle", "" ]
I have a set of records and I want to sort these records on the basis of the number of items in a group. ![The below image is a snapshot of the table](https://i.stack.imgur.com/LlRMi.png) I want to arrange the records in such a way that products with the maximum number of items are at the top, i.e. the required order is: Product\_ID 3 (with 6 items), then Product\_ID 1 (with 5 items), and the last one would be Product\_ID 2 (with 3 items). The following query returns the count of the items with the same Product\_ID; however, I want Item\_Name, Item\_Description and Item\_Number to be arranged as well. ``` Select Product_ID, Count(*) from Product group by Product_ID order by Count(*) DESC ``` I have tried another query as follows, but I know I am wrong somewhere, as it is not giving the desired results and I can't think of a possible solution: ``` Select Product_ID, Item_Name, Item_Description, Item_Number from Product group by Product_ID,item_name,item_description,item_number order by COUNT(product_ID) ``` Thanks in advance for your help!!
``` Select Product_ID, Item_Name, Item_Description, Item_Number from Product order by COUNT(1) over (partition by Product_ID) desc ```
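The window-function idea above (each row ordered by the size of its own group) is straightforward to mirror in Python: count items per Product_ID first, then sort the rows by that count, descending. The rows here are a minimal stand-in for the table in the question's screenshot:

```python
from collections import Counter

rows = [  # (product_id, item_name)
    (1, 'a'), (1, 'b'), (1, 'c'), (1, 'd'), (1, 'e'),
    (2, 'f'), (2, 'g'), (2, 'h'),
    (3, 'i'), (3, 'j'), (3, 'k'), (3, 'l'), (3, 'm'), (3, 'n'),
]

counts = Counter(pid for pid, _ in rows)          # group sizes
ordered = sorted(rows, key=lambda r: counts[r[0]], reverse=True)
order_of_ids = [pid for pid, _ in ordered]
```

`sorted` is stable, so rows within a group keep their original order, just as a secondary `ORDER BY` column would arrange them in SQL.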
I assume you want to group by the ID only but you want to list all other fields, you don't need to group by at all if you just want to order by: ``` SELECT product_id, item_name, item_description, item_number FROM product p1 ORDER BY (SELECT Count(product_id) FROM product p2 WHERE p1.product_id = p2.product_id) DESC ```
Sorting Records on the Basis of Number of Items in a Group- SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I wanted to declare an `if` statement inside a `lambda` function: Suppose: ``` cells = ['Cat', 'Dog', 'Snake', 'Lion', ...] result = filter(lambda element: if 'Cat' in element, cells) ``` Is it possible to filter 'Cat' into `result` like this?
If you want to filter out all the strings that have `'cat'` in them, then just use ``` >>> cells = ['Cat', 'Dog', 'Snake', 'Lion'] >>> filter(lambda x: not 'cat' in x.lower(), cells) ['Dog', 'Snake', 'Lion'] ``` If you want to keep those that have `'cat'` in them, just remove the `not`. ``` >>> filter(lambda x: 'cat' in x.lower(), cells) ['Cat'] ``` You could use a list comprehension here too. ``` >>> [elem for elem in cells if 'cat' in elem.lower()] ['Cat'] ```
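One caveat for readers on Python 3 (the answers above are Python 2): there `filter` returns a lazy iterator, not a list, so wrap it in `list()` before comparing or printing; the list-comprehension form sidesteps the issue entirely:

```python
cells = ['Cat', 'Dog', 'Snake', 'Lion']

# list() is needed on Python 3, harmless on Python 2
kept = list(filter(lambda x: 'cat' in x.lower(), cells))

# comprehension form: already a list on both versions
dropped = [x for x in cells if 'cat' not in x.lower()]
```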
You don't need `if`, here. Your `lambda` will return a boolean value and [`filter()`](http://docs.python.org/2/library/functions.html#filter) will only return those elements for which the `lambda` returns `True`. It looks like you are trying to do either: ``` >>> filter(lambda cell: 'Cat' in cell , cells) ['Cat'] ``` Or... ``` >>> filter(lambda cell: 'Cat' not in cell, cells) ['Dog', 'Snake', 'Lion', '...'] ``` ...I cannot tell which. Note that `filter(function, iterable)` is equivalent to `[item for item in iterable if function(item)]` and it's more usual (Pythonic) to use the list comprehension for this pattern: ``` >>> [cell for cell in cells if 'Cat' in cell] ['Cat'] >>> [cell for cell in cells if 'Cat' not in cell] ['Dog', 'Snake', 'Lion', '...'] ``` See [List filtering: list comprehension vs. lambda + filter](https://stackoverflow.com/q/3013449/78845) for more information on that.
If statement in lambdas Python
[ "", "python", "" ]
Python 3.4 introduces a new module [`enum`](http://docs.python.org/3.4/library/enum.html), which adds an [enumerated type](http://en.wikipedia.org/wiki/Enumerated_type) to the language. The documentation for `enum.Enum` provides [an example](http://docs.python.org/3.4/library/enum.html#planet) to demonstrate how it can be extended: ``` >>> class Planet(Enum): ... MERCURY = (3.303e+23, 2.4397e6) ... VENUS = (4.869e+24, 6.0518e6) ... EARTH = (5.976e+24, 6.37814e6) ... MARS = (6.421e+23, 3.3972e6) ... JUPITER = (1.9e+27, 7.1492e7) ... SATURN = (5.688e+26, 6.0268e7) ... URANUS = (8.686e+25, 2.5559e7) ... NEPTUNE = (1.024e+26, 2.4746e7) ... def __init__(self, mass, radius): ... self.mass = mass # in kilograms ... self.radius = radius # in meters ... @property ... def surface_gravity(self): ... # universal gravitational constant (m3 kg-1 s-2) ... G = 6.67300E-11 ... return G * self.mass / (self.radius * self.radius) ... >>> Planet.EARTH.value (5.976e+24, 6378140.0) >>> Planet.EARTH.surface_gravity 9.802652743337129 ``` This example also demonstrates a problem with `Enum`: in the `surface_gravity()` property method, a constant `G` is defined which would normally be defined at class level - but attempting to do so inside an `Enum` would simply add it as one of the members of the enum, so instead it's been defined inside the method. If the class wanted to use this constant in other methods, it'd have to be defined there as well, which obviously isn't ideal. Is there any way to define a class constant inside an `Enum`, or some workaround to achieve the same effect?
This is advanced behavior which will not be needed in 90+% of the enumerations created. According to the docs: > The rules for what is allowed are as follows: `_sunder_` names (starting and ending with a single underscore) are reserved by enum and cannot be used; all other attributes defined within an enumeration will become members of this enumeration, with the exception of `__dunder__` names and `descriptors` (methods are also descriptors). So if you want a class constant you have several choices: * create it in `__init__` * add it after the class has been created * use a mixin * create your own `descriptor` Creating the constant in `__init__` and adding it after the class has been created both suffer from not having all the class info gathered in one place. Mixins can certainly be used when appropriate ([see dnozay's answer for a good example](https://stackoverflow.com/a/18075283/1733117)), but that case can also be simplified by having a base `Enum` class with the actual constants built in. 
First, the constant that will be used in the examples below: ``` class Constant: # use Constant(object) if in Python 2 def __init__(self, value): self.value = value def __get__(self, *args): return self.value def __repr__(self): return '%s(%r)' % (self.__class__.__name__, self.value) ``` And the single-use Enum example: ``` from enum import Enum class Planet(Enum): MERCURY = (3.303e+23, 2.4397e6) VENUS = (4.869e+24, 6.0518e6) EARTH = (5.976e+24, 6.37814e6) MARS = (6.421e+23, 3.3972e6) JUPITER = (1.9e+27, 7.1492e7) SATURN = (5.688e+26, 6.0268e7) URANUS = (8.686e+25, 2.5559e7) NEPTUNE = (1.024e+26, 2.4746e7) # universal gravitational constant G = Constant(6.67300E-11) def __init__(self, mass, radius): self.mass = mass # in kilograms self.radius = radius # in meters @property def surface_gravity(self): return self.G * self.mass / (self.radius * self.radius) print(Planet.__dict__['G']) # Constant(6.673e-11) print(Planet.G) # 6.673e-11 print(Planet.NEPTUNE.G) # 6.673e-11 print(Planet.SATURN.surface_gravity) # 10.44978014597121 ``` And, finally, the multi-use Enum example: ``` from enum import Enum class AstronomicalObject(Enum): # universal gravitational constant G = Constant(6.67300E-11) def __init__(self, mass, radius): self.mass = mass self.radius = radius @property def surface_gravity(self): return self.G * self.mass / (self.radius * self.radius) class Planet(AstronomicalObject): MERCURY = (3.303e+23, 2.4397e6) VENUS = (4.869e+24, 6.0518e6) EARTH = (5.976e+24, 6.37814e6) MARS = (6.421e+23, 3.3972e6) JUPITER = (1.9e+27, 7.1492e7) SATURN = (5.688e+26, 6.0268e7) URANUS = (8.686e+25, 2.5559e7) NEPTUNE = (1.024e+26, 2.4746e7) class Asteroid(AstronomicalObject): CERES = (9.4e+20, 4.75e+5) PALLAS = (2.068e+20, 2.72e+5) JUNOS = (2.82e+19, 2.29e+5) VESTA = (2.632e+20, 2.62e+5) Planet.MERCURY.surface_gravity # 3.7030267229659395 Asteroid.CERES.surface_gravity # 0.27801085872576176 ``` --- **Note**: The `Constant` `G` really isn't. 
One could rebind `G` to something else: ``` Planet.G = 1 ``` If you really need it to be constant (aka not rebindable), then use the [new aenum library](https://pypi.python.org/pypi/aenum) [1] which will block attempts to reassign `constant`s as well as `Enum` members. --- 1 Disclosure: I am the author of the [Python stdlib `Enum`](https://docs.python.org/3/library/enum.html), the [`enum34` backport](https://pypi.python.org/pypi/enum34), and the [Advanced Enumeration (`aenum`)](https://pypi.python.org/pypi/aenum) library.
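A minimal, runnable distillation of the descriptor trick above (a toy `Circle` enum instead of the planets): because `Constant` defines `__get__`, it is a descriptor, and descriptors are exempt from becoming enum members:

```python
from enum import Enum

class Constant:
    # descriptor: defining __get__ keeps Enum from turning this into a member
    def __init__(self, value):
        self.value = value
    def __get__(self, *args):
        return self.value

class Circle(Enum):
    UNIT = 1.0
    DOUBLE = 2.0
    PI = Constant(3.14159)  # class "constant", not a member

    def area(self):
        return self.PI * self.value ** 2
```

Iterating over `Circle` yields only `UNIT` and `DOUBLE`, while both `Circle.PI` and `member.PI` resolve through the descriptor to the float.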
The most elegant solution (IMHO) is to use mixins / base class to provide the correct behavior. * base class to provide the behavior that's needed for all implementation that's common to e.g. `Satellite` and `Planet`. * [mixins are interesting](https://stackoverflow.com/questions/533631/what-is-a-mixin-and-why-are-they-useful) if you decide to provide optional behavior (e.g. `Satellite` and `Planet` may have to provide a different behavior) Here is an example, where you first define your behavior: ``` # # business as usual, define your class, methods, constants... # class AstronomicalObject: # universal gravitational constant G = 6.67300E-11 def __init__(self, mass, radius): self.mass = mass # in kilograms self.radius = radius # in meters class PlanetModel(AstronomicalObject): @property def surface_gravity(self): return self.G * self.mass / (self.radius * self.radius) class SatelliteModel(AstronomicalObject): FUEL_PRICE_PER_KG = 20000 @property def fuel_cost(self): return self.FUEL_PRICE_PER_KG * self.mass def falling_rate(self, destination): return complicated_formula(self.G, self.mass, destination) ``` Then create your `Enum` with the correct base classes / mixins. ``` # # then create your Enum with the correct model. # class Planet(PlanetModel, Enum): MERCURY = (3.303e+23, 2.4397e6) VENUS = (4.869e+24, 6.0518e6) EARTH = (5.976e+24, 6.37814e6) MARS = (6.421e+23, 3.3972e6) JUPITER = (1.9e+27, 7.1492e7) SATURN = (5.688e+26, 6.0268e7) URANUS = (8.686e+25, 2.5559e7) NEPTUNE = (1.024e+26, 2.4746e7) class Satellite(SatelliteModel, Enum): GPS1 = (12.0, 1.7) GPS2 = (22.0, 1.5) ```
Is it possible to define a class constant inside an Enum?
[ "", "python", "python-3.x", "enums", "constants", "class-constants", "" ]
Heya, I am new to this SQL thing. I am trying to make a blog, and the plan goes like this: a single blog can have only one author, while an author can have more than one blog, and a single blog can have multiple images. Can anyone help me understand how the relations for the SQL mapping should go?
``` Create table Author ( Id int , Name nvarchar(max) ); Create table Image ( Id int ,ImagePath nvarchar(max) ); Create table Blog ( Id int ,Name nvarchar(max) ,AuthorId int ); Create table BlogImages ( Id int ,BlogId int ,ImageId int ); ALTER TABLE Blog ADD FOREIGN KEY (AuthorId) REFERENCES Author(Id); ALTER TABLE BlogImages ADD FOREIGN KEY (BlogId) REFERENCES Blog(Id); ALTER TABLE BlogImages ADD FOREIGN KEY (ImageId) REFERENCES Image(Id); ``` In the relation above we have a table `BlogImages` holding `BlogId` and `ImageId`, which means a single `ImageId` can be linked to multiple `BlogId`s, so multiple blogs can use the same image, which satisfies your requirement.
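The junction-table idea can be exercised end-to-end with SQLite from Python's standard library (a sketch: column types are simplified to SQLite's, since `nvarchar(max)` is SQL Server syntax, and the sample names and paths are made up):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript('''
    CREATE TABLE Author (Id INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Blog (Id INTEGER PRIMARY KEY, Name TEXT,
                       AuthorId INTEGER REFERENCES Author(Id));
    CREATE TABLE Image (Id INTEGER PRIMARY KEY, ImagePath TEXT);
    CREATE TABLE BlogImages (BlogId INTEGER REFERENCES Blog(Id),
                             ImageId INTEGER REFERENCES Image(Id));
''')
db.execute("INSERT INTO Author VALUES (1, 'alice')")
db.executemany("INSERT INTO Blog VALUES (?, ?, 1)",
               [(1, 'post one'), (2, 'post two')])
db.execute("INSERT INTO Image VALUES (1, '/img/shared.png')")
# the same image attached to two different blogs: the many-to-many link
db.executemany("INSERT INTO BlogImages VALUES (?, 1)", [(1,), (2,)])

blogs_using_image = [name for (name,) in db.execute(
    "SELECT b.Name FROM Blog b JOIN BlogImages bi ON bi.BlogId = b.Id "
    "WHERE bi.ImageId = 1 ORDER BY b.Id")]
```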
> Author ----> Blogs -------> Images

So let me explain the plan a little bit. There will be an authors table holding all the author details, with `author_id` as the PK. The blog table will hold the blog details; `blog_id` is the PK, and it has a foreign key `author_id` referencing the authors table. The image table will hold the image details; `image_id` is the PK, and it has a foreign key `blog_id` referencing the blog table. This behaviour is known as a [one to many relation](https://www.google.com/search?q=one%20to%20many%20relation%20in%20sql&oq=one%20to%20many%20relation%20in%20sql&aqs=chrome.0.69i57j5j0l2j69i62l2.7767j0&sourceid=chrome&ie=UTF-8#sclient=psy-ab&q=one%20to%20many%20relation&oq=one%20to%20many%20relation&gs_l=serp.3...5160.5160.0.5330.1.1.0.0.0.0.0.0..0.0....0.0..1c.1.20.psy-ab.ka8K-yFx9ek&pbx=1&bav=on.2,or.r_qf.&bvm=bv.49967636,d.bmk,pv.xjs.s.en_US.jOYpRJj4zMA.O&fp=bed81e6b5f71ca88&biw=1280&bih=923).
SQL relations between blog and author
[ "", "mysql", "sql", "sqlite", "" ]
I am working on a program to censor out words in Python.

user input = my email id is harry@hogwarts.com

output = my email id is xxxxx@hogwarts.com

This is what I have so far:

```
def main():
    message = []
    userInput = str(input("Enter the sentence: "))
    splitInput = str(list(userInput))
    print(splitInput)
    for item in splitInput:
        indeces = splitInput.index('@')
        while((indeces-1).isalpha()):
            item = 'x'
        message.append(item)
    print(' '.join(message))
```

This is the error I get:

```
  File "C:\Users\Manmohit\Desktop\purifier.py", line 8, in main
    while((indeces-1).isalpha()):
AttributeError: 'int' object has no attribute 'isalpha'
```

I have tried looking online for different methods. I want something similar to the `isalpha` method. Should I write my own alpha method to check, or can I use something built in? Help is appreciated. Thank you.

UPDATES: on changing the loop `while((indeces-1).isalpha()):` to `while((str(indeces-1)).isalpha()):` I don't get the error, but I don't get any output either.
You could use this function to mask the email:

```
>>> def encodeemail(email):
        e = email.split("@")
        return "@".join(["x" * len(e[0]), e[1]])

>>> encodeemail("harry@hogwarts.com")
'xxxxx@hogwarts.com'
```

or even, to handle an address embedded in a sentence:

```
>>> def encodeemail(email):
        d = email.split(" ")
        for i, f in enumerate(d):
            e = f.split("@")
            if len(e) > 1:
                d[i] = "@".join(["x" * len(e[0]), e[1]])
        return " ".join(d)

>>> encodeemail("this is harry@hogwarts.com")
'this is xxxxx@hogwarts.com'
```

without `enumerate`:

```
>>> def encodeemail(email):
        d = email.split(" ")
        for i in range(len(d)):
            e = d[i].split("@")
            if len(e) > 1:
                d[i] = "@".join(["x" * len(e[0]), e[1]])
        return " ".join(d)
```
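The word-splitting version can be checked end to end like this (function renamed here to avoid confusion; it assumes each address contains exactly one `@`):

```python
def censor_email(text):
    words = text.split(" ")
    for i, word in enumerate(words):
        parts = word.split("@")
        if len(parts) > 1:
            # replace the local part with the same number of x's
            words[i] = "x" * len(parts[0]) + "@" + parts[1]
    return " ".join(words)

print(censor_email("my email id is harry@hogwarts.com"))
# my email id is xxxxx@hogwarts.com
```

Words without an `@` pass through untouched, so the rest of the sentence is preserved.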
You can use the `re` module if the string contains more than just the email:

```
>>> s
'my email id is harry@hogwards.com'
>>> re.sub('(?<=\s)\w+(?=@)', lambda y: 'x'*len(y.group()), s)
'my email id is xxxxx@hogwards.com'
```
Using Lists and String methods to censor out words in Python
[ "", "python", "" ]
I am working on achieving a dynamic Highcharts graph (basic column) and I need help writing the SQL in MySQL. I need results for the last 12 months from the current month (irrespective of whether there is any data for a month; it can be 0, but records for all 12 months should be fetched), showing how many members (4 types of users) have registered on the site in each particular month. There are 4 types of users:

1. Agents
2. Individuals
3. Builders
4. Real Estate Companies

For the months column it should retrieve the last 12 months from the current month: Aug, Sept, Oct, Nov, Dec, Jan, Feb, Mar, Apr, May, Jun, Jul.

I have tried the following query:

```
SELECT CASE WHEN u.`userTypeID`=1 THEN COUNT(`userTypeID`) ELSE 0 END AS agent,
       CASE WHEN u.`userTypeID`=2 THEN COUNT(`userTypeID`) ELSE 0 END AS individuals,
       CASE WHEN u.`userTypeID`=3 THEN COUNT(`userTypeID`) ELSE 0 END AS builders,
       CASE WHEN u.`userTypeID`=4 THEN COUNT(`userTypeID`) ELSE 0 END AS real_estate_companies,
       u.`userRegistredDate` AS 'timestamp'
FROM `dp_users` AS u
LEFT JOIN `dp_user_types` AS ut ON u.`userTypeID` = ut.`type_id`
WHERE u.`userRegistredDate` < Now( )
  AND u.`userRegistredDate` > DATE_ADD( Now( ) , INTERVAL -12 MONTH )
GROUP BY DATE_FORMAT( u.`userRegistredDate`, '%b' )
```

Output (incorrect):

```
| AGENT | INDIVIDUALS | BUILDERS | REAL_ESTATE_COMPANIES |                   TIMESTAMP |
----------------------------------------------------------------------------------------
|     0 |           0 |        9 |                     0 | July, 01 2013 17:14:35+0000 |
|     3 |           0 |        0 |                     0 | May, 15 2013 14:14:26+0000  |
```

Output (required: correct):

```
| AGENT | INDIVIDUALS | BUILDERS | REAL_ESTATE_COMPANIES |                   TIMESTAMP |
----------------------------------------------------------------------------------------
|     3 |           2 |        2 |                     2 | July, 01 2013 17:14:35+0000 |
|     1 |           2 |        0 |                     0 | May, 15 2013 14:14:26+0000  |
```

Another way I tried was with a sub-query; please find both example links below:

<http://sqlfiddle.com/#!2/ed101/53>

<http://sqlfiddle.com/#!2/ed101/54>

Hoping to find a favorable solution, thanks.
Try this:

```
SELECT month(u.`userRegistredDate`),
       sum(CASE WHEN u.`userTypeID`=1 THEN 1 ELSE 0 END) AS agent,
       sum(CASE WHEN u.`userTypeID`=2 THEN 1 ELSE 0 END) AS individuals,
       sum(CASE WHEN u.`userTypeID`=3 THEN 1 ELSE 0 END) AS builders,
       sum(CASE WHEN u.`userTypeID`=4 THEN 1 ELSE 0 END) AS real_estate_companies,
       u.`userRegistredDate` AS 'timestamp',
       m.month
FROM (
    SELECT 'January'   AS MONTH UNION
    SELECT 'February'  AS MONTH UNION
    SELECT 'March'     AS MONTH UNION
    SELECT 'April'     AS MONTH UNION
    SELECT 'May'       AS MONTH UNION
    SELECT 'June'      AS MONTH UNION
    SELECT 'July'      AS MONTH UNION
    SELECT 'August'    AS MONTH UNION
    SELECT 'September' AS MONTH UNION
    SELECT 'October'   AS MONTH UNION
    SELECT 'November'  AS MONTH UNION
    SELECT 'December'  AS MONTH
) AS m
LEFT JOIN `dp_users` AS u
       ON m.month = MONTHNAME(u.`userRegistredDate`)
      AND u.`userRegistredDate` < Now()
      AND u.`userRegistredDate` > DATE_ADD(Now(), INTERVAL -12 MONTH)
LEFT JOIN `dp_user_types` AS ut
       ON u.`userTypeID` = ut.`type_id`
GROUP BY m.month
ORDER BY FIELD(m.month, 'July','August','September','October','November','December',
                        'January','February','March','April','May','June')
```

Please check out this [link](http://sqlfiddle.com/#!2/ed101/99).
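If you would rather build the 12-month axis labels on the application side (e.g. for Highcharts' `categories`) instead of hard-coding the `ORDER BY FIELD(...)` list, here is a standard-library Python sketch (the function name and the fixed demo date are illustrative):

```python
import calendar
from datetime import date

def last_12_months(today=None):
    """Month abbreviations for the 12 months ending with the given month."""
    today = today or date.today()
    month = today.month
    labels = []
    for _ in range(12):
        labels.append(calendar.month_abbr[month])
        month = 12 if month == 1 else month - 1  # roll back over the year boundary
    return list(reversed(labels))

# With the question's July 2013 data the axis runs Aug ... Jul:
print(last_12_months(date(2013, 7, 15)))
# ['Aug', 'Sep', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul']
```

You can then map each month's counts from the SQL result onto this fixed axis, filling 0 for missing months.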
This is what you want:

```
SELECT sum(CASE WHEN u.`userTypeID`=1 THEN 1 ELSE 0 END) AS agent,
       sum(CASE WHEN u.`userTypeID`=2 THEN 1 ELSE 0 END) AS individuals,
       sum(CASE WHEN u.`userTypeID`=3 THEN 1 ELSE 0 END) AS builders,
       sum(CASE WHEN u.`userTypeID`=4 THEN 1 ELSE 0 END) AS real_estate_companies,
       u.`userRegistredDate` AS 'timestamp'
FROM `dp_users` AS u
LEFT JOIN `dp_user_types` AS ut ON u.`userTypeID` = ut.`type_id`
WHERE u.`userRegistredDate` < Now( )
  AND u.`userRegistredDate` > DATE_ADD( Now( ) , INTERVAL -12 MONTH )
GROUP BY DATE_FORMAT( u.`userRegistredDate`, '%b' )
```

<http://sqlfiddle.com/#!2/ed101/69>
SQL query for displaying records on Highcharts graph
[ "", "mysql", "sql", "" ]