I'm using `mysql 5.5`. Here is a left join query with a `literal` list: ``` select tbl1.*, details.* from ('a', 'b', 'c'... 300+ elements) as 'tbl1' left join details on details.id=tbl1.id ``` But it doesn't work: `You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''a','b')' at line 1` How can I turn this `list` into a table?
Use **UNION** ``` select tbl1.*, details.* from (select 'a' as id union select 'b' as id union select 'c' as id union ...300) as tbl1 left join details on details.id=tbl1.id ``` See this **[Fiddle](http://www.sqlfiddle.com/#!2/d41d8/11270)** Instead of using a **subquery**, you can first create a table `tbl1`: ``` create table tbl1 ( id varchar(1) ); insert into tbl1 select 'a' as id union select 'b' as id union select 'c' as id; -- ...300 ``` Now you can use **table tbl1** for the `join`: ``` select tbl1.*, details.* from tbl1 left join details on details.id=tbl1.id ```
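With 300+ elements you would probably generate the `UNION` subquery programmatically rather than typing it out. A minimal Python sketch (the `ids` list and table/column names are placeholders taken from the question, and it assumes the values contain no quotes — use the driver's parameter binding otherwise):

```python
ids = ['a', 'b', 'c']  # ... 300+ elements in practice

# build "select 'a' as id union select 'b' as id union ..."
subquery = " union ".join("select '%s' as id" % i for i in ids)
query = ("select tbl1.*, details.* from (%s) as tbl1 "
         "left join details on details.id = tbl1.id" % subquery)
print(query)
```

The same string could then be passed to your cursor's `execute()`.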
Are you missing a `select` in your sub-query? ``` select tbl1.*, details.* from (select 'a', 'b', 'c'... 300+ elements) as 'tbl1' left join details on details.id=tbl1.id ``` But I don't understand what your left join is supposed to do. [EDIT] Create a temp table for your join? ``` CREATE TEMPORARY TABLE TempTable ( tempcol varchar(100) ) ENGINE=heap; insert into TempTable values ('a'), ('b'), ('c'); -- 300+ elements select TempTable.*, details.* from TempTable left join details on details.id = TempTable.tempcol; ```
How to query from a literal list: select * from ('a', 'b', 'c')
[ "", "mysql", "sql", "" ]
Given code like this: ``` if f(a) == 1: # do sth A elif f(a) == 2: # do sth B elif f(a) == 3: # do sth C else: # do sth D ``` Does the expression `f(a)` get executed/evaluated only once? Or does it get executed/evaluated for every case of `if` or `elif`? In addition, do compound statements make any difference: e.g. ``` if f(a) == 1: # do sth A elif f(a) == 2 and a > 0: # do sth B elif f(a) == 3 and a < 0: # do sth C else: # do sth D ``` In this case, how does `f(a)` get evaluated? Once for all or individually for each case?
Test it out: ``` In [7]: %paste def f(a): print(a) a = 4 if f(a) == 1: pass elif f(a) == 2: pass elif f(a) == 3: pass else: pass ## -- End pasted text -- 4 4 4 ``` `f(a)` will keep on getting evaluated because there's no guarantee that `f(a)` will return the same result each time, so Python won't assume that it will. You can cache it with `result = f(a)` and use `result` in place of `f(a)`. If any of those `if` blocks happens to be `True`, the ones after it will not be tested.
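The caching pattern mentioned at the end looks like this in practice (the call counter is added purely for illustration):

```python
calls = []

def f(a):
    calls.append(a)   # record each call so we can count them
    return a % 3

a = 7
result = f(a)         # evaluate once, then compare the cached value
if result == 1:
    outcome = "A"
elif result == 2:
    outcome = "B"
elif result == 3:
    outcome = "C"
else:
    outcome = "D"

print(len(calls))     # 1 -- f ran exactly once
```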
It's important to understand that an `if` statement with several `elif` clauses can be expressed as a nested set of `if` and `else` blocks. That is, `elif` is just "syntactic sugar" for `else: if` (but it saves a level of indentation, so it's very handy!). The following block is equivalent to the first `if` and `elif` of your code: ``` if f(a) == 1: pass else: if f(a) == 2: pass ``` With this in mind, the number of times the expression gets evaluated depends on what its result is, since you'll only go into the `else` parts if the previous tests failed. If the first call to `f(a)` returns `1`, then none of the other calls will be made. However, if all the tests fail (except perhaps the last one), they'll all need to be evaluated. Similarly, within a logical expression using `and` and `or`, Python will short-circuit and not evaluate any parts of the expression that are not needed to determine the final value. ``` if f(a) == 1: # f(a) always gets evaluated here pass elif f(a) == 2: # f(a) gets called a second time here if the "if" failed pass elif a<1 and f(a): # f(a) runs again only if both previous tests failed and a<1 pass ``` So in summary, you can't tell how many times the function `f(a)` will run unless you know its results ahead of time. If `f(a)` has side effects (that is, if calling it repeatedly changes state), you probably don't want to structure your code that way!
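The short-circuiting of `and`/`or` can be checked directly — a small sketch (function and variable names are illustrative only):

```python
calls = []

def f(a):
    calls.append(a)
    return a

# "and" stops at the first falsy operand: 0 > 1 is False,
# so f(1) is never evaluated
if 0 > 1 and f(1):
    pass

# "or" stops at the first truthy operand: 1 == 1 is True,
# so f(2) is never evaluated either
if 1 == 1 or f(2):
    pass

print(calls)  # [] -- neither call happened
```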
expression evaluation in if elif statements
[ "", "python", "" ]
I would like to do something like this: ``` class A: def hello(): print "Hello" # I do not want to explicitly setup a: a = A() # a = A() -> I want this to happen automatically when I access a # My first try is this: def a(): return A() # Also, I do not want to call a as a function a(): it must be an object # And it must stay alive and initialized a.hello() # a is created, as object of class A a.hello() # I do not want a second instantiation ``` How can I implement this? `properties`? `cached-properties`? They are only for classes: a is a module-level object.
``` def lazyinit(cls): class p(object): def __init__(self, *args, **kws): self._init = lambda: cls(*args, **kws) self._obj = None def __getattr__(self, k): if self._obj is None: # "is None", so a falsy instance isn't re-created self._obj = self._init() return getattr(self._obj, k) return p ``` Example: ``` @lazyinit class A(object): def __init__(self, a, b): print("initializing...") self.x = a + b + 2 def foo(self): return self.x x = A(39, 1) print x print x.foo() print x.foo() ```
Maybe something like this: ``` class A(object): def hello(self): print "Hello" class LazyA(object): def __init__(self): self.instance = None def __getattr__(self, k): if self.instance is None: self.instance = A() return getattr(self.instance, k) a = LazyA() ```
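A quick check of the wrapper above, confirming that `A` is only instantiated on first attribute access and then reused (the `created` list is added just for the demonstration, and `hello` returns rather than prints so the check is self-contained):

```python
created = []

class A(object):
    def __init__(self):
        created.append(self)  # track instantiations

    def hello(self):
        return "Hello"

class LazyA(object):
    def __init__(self):
        self.instance = None

    def __getattr__(self, k):
        if self.instance is None:
            self.instance = A()
        return getattr(self.instance, k)

a = LazyA()
print(len(created))  # 0 -- nothing created yet
a.hello()
a.hello()
print(len(created))  # 1 -- one instance, created lazily and reused
```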
Automatically create (and keep) an object when accessed
[ "", "python", "" ]
From the [doc](http://docs.python.org/2/library/functions.html#open), > Modes 'r+', 'w+' and 'a+' open the file for updating (note that 'w+' truncates the file). Append 'b' to the mode to open the file in binary mode, on systems that differentiate between binary and text files; on systems that don’t have this distinction, adding the 'b' has no effect. and [here](http://www.tutorialspoint.com/python/python_files_io.htm) > w+ : Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing. But how do I read a file opened with `w+`?
Let's say you're opening the file with a `with` statement like you should be. Then you'd do something like this to read from your file: ``` with open('somefile.txt', 'w+') as f: # Note that f has now been truncated to 0 bytes, so you'll only # be able to read data that you write after this point f.write('somedata\n') f.seek(0) # Important: return to the top of the file before reading, otherwise you'll just read an empty string data = f.read() # Returns 'somedata\n' ``` Note the `f.seek(0)` -- if you forget this, the `f.read()` call will try to read from the end of the file, and will return an empty string.
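If you need to read what is already in the file as well as write, `r+` opens for reading and writing without truncating — a minimal sketch using a temporary file (the filenames and contents are arbitrary):

```python
import os
import tempfile

# set up a file that already has some content
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('existing\n')

# 'r+' opens for reading AND writing, without truncating
with open(path, 'r+') as f:
    data = f.read()          # existing content is still there
    f.seek(0, os.SEEK_END)   # explicit seek between reading and writing
    f.write('appended\n')

with open(path) as f:
    final = f.read()
os.remove(path)

print(repr(final))  # 'existing\nappended\n'
```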
## ***Here is a list of the different modes of opening a file:*** * # r > Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode. * # rb > Opens a file for reading only in binary format. The file pointer is placed at the beginning of the file. This is the default mode. * # r+ > Opens a file for both reading and writing. The file pointer will be at the beginning of the file. * # rb+ > Opens a file for both reading and writing in binary format. The file pointer will be at the beginning of the file. * # w > Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing. * # wb > Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing. * # w+ > Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing. * # wb+ > Opens a file for both writing and reading in binary format. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing. * # a > Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing. * # ab > Opens a file for appending in binary format. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing. * # a+ > Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing. * # ab+ > Opens a file for both appending and reading in binary format. 
The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.
Confused by python file mode "w+"
[ "", "python", "file", "io", "" ]
How can I get a list of the values in a dict in Python? In Java, getting the values of a Map as a List is as easy as doing `list = map.values();`. I'm wondering if there is a similarly simple way in Python to get a list of values from a dict.
[`dict.values`](https://docs.python.org/library/stdtypes.html#dict.values) returns a [*view*](https://docs.python.org/3/library/stdtypes.html#dict-views) of the dictionary's values, so you have to wrap it in [`list`](https://docs.python.org/3/library/stdtypes.html#list): ``` list(d.values()) ```
You can use the [\* operator](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists) to unpack `dict_values`: ``` >>> d = {1: "a", 2: "b"} >>> [*d.values()] ['a', 'b'] ``` or the `list` constructor: ``` >>> d = {1: "a", 2: "b"} >>> list(d.values()) ['a', 'b'] ```
How can I get list of values from dict?
[ "", "python", "list", "dictionary", "" ]
**Background:** The data I'm using is being extracted from a `netCDF4` object, which creates a numpy masked array at initialization, but does not appear to support the numpy `reshape()` method, making it only possible to reshape after all the data has been copied, which is way too slow. **Question:** How can I sub-sample a 1-D array that is basically a flattened 2-D array, without reshaping it? ``` import numpy as np a1 = np.array([[1,2,3,4], [11,22,33,44], [111,222,333,444], [1111,2222,3333,4444], [11111,22222,33333,44444]]) a2 = np.ravel(a1) rows, cols = a1.shape row1 = 1 row2 = 3 col1 = 1 col2 = 3 ``` I would like to use a fast slicing method that doesn't require reshaping the 1-D array to a 2-D array. **Desired Output:** ``` np.ravel(a1[row1:row2, col1:col2]) >> array([ 22, 33, 222, 333]) ``` I got as far as getting the start and ending positions, but this just selects ALL data between these points (i.e. extra columns). ``` idx_start = (row1 * cols) + col1 idx_end = (row2 * cols) + col2 ``` **Update:** I just tried [Jaime's brilliant answer](https://stackoverflow.com/a/16240395/943773), but it appears that `netCDF4` won't allow 2-D indices. ``` z = dataset.variables["z"][idx] File "netCDF4.pyx", line 2613, in netCDF4.Variable.__getitem__ (netCDF4.c:29583) File "/usr/local/lib/python2.7/dist-packages/netCDF4_utils.py", line 141, in _StartCountStride raise IndexError("Index cannot be multidimensional.") IndexError: Index cannot be multidimensional. ```
I came up with this, and though it doesn't copy ALL of the data, it is still copying data that I don't want into memory. This can probably be improved and I hope there is a better solution out there. ``` zi = 0 # Create zero array with the appropriate length for the data subset z = np.zeros((col2 - col1) * (row2 - row1)) # Process number of rows for which data is being extracted for i in range(row2 - row1): # Pull row, then desired elements of that row into buffer tmp = ((dataset.variables["z"][(i*cols):((i*cols)+cols)])[col1:col2]) # Add each item in buffer sequentially to data array for j in tmp: z[zi] = j # Keep a count of what index position the next data point goes to zi += 1 ```
You can get what you want with a combination of [`np.ogrid`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ogrid.html) and [`np.ravel_multi_index`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel_multi_index.html): ``` >>> a1 array([ 1, 2, 3, 4, 11, 22, 33, 44, 111, 222, 333, 444, 1111, 2222, 3333, 4444, 11111, 22222, 33333, 44444]) >>> idx = np.ravel_multi_index((np.ogrid[1:3,1:3]), (5, 4)) >>> a1[idx] array([[ 22, 33], [222, 333]]) ``` You could of course ravel this array to get a 1D return if that's what you are after. Notice also that this is a copy of your original data, not a view. --- **EDIT** You can keep the same general approach, replacing `np.ogrid` with `np.mgrid` and reshaping it to get a flat return: ``` >>> idx = np.ravel_multi_index((np.mgrid[1:3,1:3].reshape(2, -1)), (5, 4)) >>> a1[idx] array([ 22, 33, 222, 333]) ```
Subsample 1-D array using 2-D indices in numpy
[ "", "python", "numpy", "indexing", "slice", "" ]
I get this error: > TypeError: object of type 'Cursor' has no len() when I try to execute: ``` reply = db['test'].find({"date":{"$gt":date_query}} ,{"date":1,"route_id":1,"loc":1,"_id":0}) length = len(reply) ```
The [pymongo cursor](http://api.mongodb.org/python/current/api/pymongo/cursor.html) has a method `count()` which will return what you're looking for: ``` reply = db['test'].find( {"date":{"$gt":date_query}}, {"date":1,"route_id":1,"loc":1,"_id":0} ) length = reply.count() ```
Yes, `count()` will do the work for you. ``` length = reply.count() ``` or ``` length = reply.count(with_limit_and_skip=False) ``` I struggled for a while because `length = count(reply)` also did not work. Since I'm not allowed to comment yet, I thought I'd leave this as an answer. Hope it saves somebody some time.
TypeError: object of type 'Cursor' has no len()
[ "", "python", "mongodb", "pymongo", "database-cursor", "" ]
I was surprised at the following behavior: ``` >>> a=['a','b','c','d','e','f','g'] >>> en=enumerate(a) >>> list(en) [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e'), (5, 'f'), (6, 'g')] >>> list(en) [] >>> # en changed? let's try that again! >>> a=['a','b','c','d','e','f','g'] >>> en=enumerate(a) >>> tuple(en) ((0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e'), (5, 'f'), (6, 'g')) >>> tuple(en) () >>> # en changes again? ``` I was presuming that `list()` would just construct a list from the data in en, and would not change it in any way. The same with `tuple()`. But they both affect en in a "profound" way. Am I misunderstanding something about `list()` and `tuple()`, or am I misunderstanding `enumerate()`? Thanks.
There's nothing special about `list` and `tuple`, either. This will happen however you consume `en`, including via your own hand-written `for` loop: ``` >>> a=['a','b','c','d','e','f','g'] >>> en=enumerate(a) >>> en <enumerate object at 0x21172d0> >>> for index, elem in en: ... print index, elem ... 0 a 1 b 2 c 3 d 4 e 5 f 6 g >>> for index, elem in en: ... print index, elem ... >>> ``` Note that `en` is an "enumerate object", not a list or tuple, and that printing it didn't attempt to print its contents. This is because `enumerate` doesn't actually produce a data structure containing all the same data as `a` plus the index values. Instead it returns a small object that remembers internally which container it was iterating over, where it was up to, and what count it had reached. That's all you need to produce the "next" value, and so it can be iterated over even though it's not a container as such. The reason for this is that people almost never store the result of `enumerate`, it's usually called to immediately iterate over it with a `for` loop. For that purpose, it would be wasteful of time and memory to go to the effort of building a copy of all the data and hold all the indexes in memory at once. Producing them as you go is enough. If you do need to store the resulting data from `enumerate` to use more than once (or to use somewhere else than where you generated it), then you will need that copy. The easiest way to get that is actually to do something like `en = list(enumerate(a))`. Then you can use `en` exactly as you were expecting.
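The workaround mentioned in the last paragraph can be sketched like this — materialising the enumerate result into a list lets you iterate it as many times as you like:

```python
a = ['a', 'b', 'c']
en = list(enumerate(a))   # copy the (index, value) pairs into a real list

first = [pair for pair in en]
second = [pair for pair in en]  # the list is not consumed by iteration

print(first == second)  # True
print(en)               # [(0, 'a'), (1, 'b'), (2, 'c')]
```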
I think you are misunderstanding `enumerate()`. ``` >>> en=enumerate(a) ``` This is a generator/iterator, not a data structure. Once you run through `en` once, by creating the list, the generator is exhausted and there's nothing left. Trying `list(en)` again is trying to iterate through an iterator that has reached its end, so there's nothing there.
Why do list and tuple change the output of enumerate?
[ "", "python", "list", "tuples", "enumerate", "" ]
I saw that it is possible to connect to remote SQL servers by using their IP inside Management Studio. Now I want to allow the database on my computer to be accessible remotely. How do I find the IP of my own SQL server so that I can use that IP to log in remotely?
Try this one - ``` SELECT client_net_address = CASE WHEN client_net_address = '<local machine>' THEN '127.0.0.1' ELSE client_net_address END , local_net_address = ISNULL(local_net_address, '127.0.0.1') , server_name = @@SERVERNAME , machine_name = SERVERPROPERTY('MachineName') FROM sys.dm_exec_connections WHERE session_id = @@SPID; ```
You can ping your computer name. Do not ping `localhost`, as that will give you 127.0.0.1. Run `ping my-pc` and it will resolve your IP address through your DNS server.
SQL Server - How to get IP of your own server?
[ "", "sql", "sql-server", "" ]
Using Python 2.6, the following error occurs when I run the script below: ``` Traceback (most recent call last): File "g.py", line 7, in <module> results = Geocoder.geocode(row[0]) File "/usr/lib/python2.6/site-packages/pygeocoder.py", line 261, in geocode return GeocoderResult(Geocoder.getdata(params=params)) File "/usr/lib/python2.6/site-packages/pygeocoder.py", line 223, in getdata raise GeocoderError(j['status'], url) pygeocoder.GeocoderError: Error ZERO_RESULTS Query: http://maps.google.com/maps/api/geocode/json?region=&sensor=false&bounds=&language=&address=%22++A+FAKE+ADDRESS ``` The Python 2.6 script: ``` import csv, string from pygeocoder import Geocoder with open('file.csv') as goingGeo: theSpreadsheet = csv.reader(goingGeo, quotechar=None) for row in theSpreadsheet: results = Geocoder.geocode(row[0]) (lat, long) = results[0].coordinates with open('geo_file.csv', 'a') as f: f.write(row[0] + ",") f.write(row[1] + ",") f.write(row[2] + ",") f.write(row[3] + ",") f.write(row[4] + ",") f.write(row[5] + ",") f.write(row[6] + ",") f.write(row[7] + ",") try: f.write(str(lat)) except GeocoderError: pass f.write(",") try: f.write(str(long)) except GeocoderError: pass f.write('\n') ``` I just want the script to continue even with the error. Thank you!
You have try/except blocks around a `write` call that can't possibly throw a GeoCoderError, but you don't have a try/except around the call to `geocoder()` that can (and apparently does) throw that error. You probably want something like: ``` try: results = Geocoder.geocode(row[0]) (lat, long) = results[0].coordinates except GeocoderError: (lat, long) = (0.0, 0.0) ```
Use a try-except-finally statement like this: ``` try: f.write(str(lat)) except GeocoderError: pass finally: do_something_else_regardless_of_above ```
Prevent error from halting a Python script
[ "", "python", "" ]
I'm trying to get the New Relic Python agent to work with my supervisor and gunicorn setup, but I can't get it to work. Here is my current supervisor setup that works: ``` [program:gunicorn] directory = /home/<USER>/.virtualenvs/<DOMAIN>/myproject/ command=/home/<USER>/.virtualenvs/<DOMAIN>/bin/gunicorn my_project.wsgi:application ``` I tried to do this: ``` [program:gunicorn] directory = /home/<USER>/.virtualenvs/<DOMAIN>/myproject/ #Working command #command=/home/<USER>/.virtualenvs/<DOMAIN>/bin/gunicorn myproject.wsgi:application command=/home/<USER>/.virtualenvs/<DOMAIN>/bin/newrelic-admin run-program /home/<USER>/.virtualenvs/<DOMAIN>/bin/gunicorn myproject.wsgi:application environment=NEW_RELIC_CONFIG_FILE=/home/<USER>/.virtualenvs/<DOMAIN>/myproject/newrelic.ini user = <USER> autostart = true autorestart = true stderr_events_enabled = true redirect_stderr = true stdout_logfile = /home/<USER>/logs/gunicorn.log stderr_logfile = /home/<USER>/logs/gunicorn_err.log ``` but then I get this error: ``` Traceback (most recent call last): File "/home/user/.virtualenvs/domain.com/lib/python2.7/site.py", line 688, in <module> main() File "/home/user/.virtualenvs/domain.com/lib/python2.7/site.py", line 679, in main execsitecustomize() File "/home/user/.virtualenvs/domain.com/lib/python2.7/site.py", line 547, in execsitecustomize import sitecustomize File "/home/user/.virtualenvs/domain.com/local/lib/python2.7/site-packages/newrelic-1.10.2.38-py2.7-linux-x86_64.egg/newrelic/bootstrap/sitecustomize.py", line 74, in <module> newrelic.agent.initialize(config_file, environment) File "/home/user/.virtualenvs/domain.com/local/lib/python2.7/site-packages/newrelic-1.10.2.38-py2.7-linux-x86_64.egg/newrelic/config.py", line 1456, in initialize log_file, log_level) File "/home/user/.virtualenvs/domain.com/local/lib/python2.7/site-packages/newrelic-1.10.2.38-py2.7-linux-x86_64.egg/newrelic/config.py", line 383, in _load_configuration 'Unable to open configuration file %s.' 
% config_file) newrelic.api.exceptions.ConfigurationError: Unable to open configuration file /. ``` The newrelic.ini file is on that path, so what am I doing wrong? # Edit: Path to newrelic.ini file is: ``` /home/<USER>/.virtualenvs/<DOMAIN>/myproject/newrelic.ini ```
Environment needs quotes to work. Here is a working setup: ``` [program:gunicorn] directory = /home/<USER>/.virtualenvs/<DOMAIN>/<PROJECT>/ command=/home/<USER>/.virtualenvs/<DOMAIN>/bin/newrelic-admin run-program /home/<USER>/.virtualenvs/<DOMAIN>/bin/gunicorn <PROJECT>.wsgi:application environment=NEW_RELIC_CONFIG_FILE="/home/<USER>/.virtualenvs/<DOMAIN>/<PROJECT>/newrelic.ini" user = <USER> autostart = true autorestart = true stderr_events_enabled = true redirect_stderr = true stdout_logfile = /home/<USER>/logs/gunicorn.log stderr_logfile = /home/<USER>/logs/gunicorn_err.log ```
You are not using newrelic-admin as is the preferred method when using gunicorn. Use: ``` [program:gunicorn] directory = /home/user/.virtualenvs/domain.com/my_project/ command=/home/user/.virtualenvs/domain.com/bin/newrelic-admin run-program /home/user/.virtualenvs/domain.com/bin/gunicorn my_project.wsgi:application environment=NEW_RELIC_CONFIG_FILE=/home/user/.virtualenvs/domain.com/bin/newrelic.ini ``` There is no need to change anything in your wsgi.py file. Why you have the newrelic.ini file in the bin directory I do not know. You would normally stick it with your project, but then your projects is also under the virtualenv, which is also a bit odd. For passing environment variables from supervisord see: * <http://supervisord.org/configuration.html> * <http://supervisord.org/subprocess.html#subprocess-environment> For details on the newrelic-admin command and how to use it with gunicorn see: * <https://newrelic.com/docs/python/python-agent-admin-script#run-program> * <https://newrelic.com/docs/python/python-agent-and-gunicorn>
Can't get Newrelic with gunicorn supervisor django 1.6 to work
[ "", "python", "django", "newrelic", "supervisord", "" ]
I'm working on a hangman program that also has user accounts objects. The player can log in, create a new account, or view account details, all of which work fine before playing the game. After the game has completed, the user's wins and losses are updated. Before exiting the program, if I try to view the account (the viewAcc function) I get the error: ``` 'NoneType' object has no attribute 'get_username'. ``` When I run the the program again, I can log in to the account, but when I view the account info the wins and losses haven't been updated. Any help would be appreciated, I have to turn this in for class in about 8 hours. Heres the class code: ``` class Account: def __init__(self, username, password, name, email, win, loss): self.__username = username self.__password = password self.__name = name self.__email = email self.__win = int(win) self.__loss = int(loss) def set_username (self, username): self.__username = username def set_password (self, password): self.__password = password def set_name (self, name): self.__name = name def set_email (self, email): self.__email = email def set_win (self, win): self.__win = win def set_loss (self, loss): self.__loss = loss def get_username (self): return self.__username def get_password (self): return self.__password def get_name (self): return self.__name def get_email (self): return self.__email def get_win (self): return self.__win def get_loss (self): return self.__loss ``` And here's my program's code: ``` import random import os import Account import pickle import sys #List of images for different for different stages of being hanged STAGES = [ ''' ___________ |/ | | | | | | | | | | _____|______ ''' , ''' ___________ |/ | | | | (o_o) | | | | | | _____|______ ''' , ''' ___________ |/ | | | | (o_o) | | | | | | | | _____|______ ''' , ''' ___________ |/ | | | | (o_o) | |/ | | | | | | _____|______ ''' , ''' ___________ |/ | | | | (o_o) | \|/ | | | | | | _____|______ ''' , ''' ___________ |/ | | | | (o_o) | \|/ | | | / 
| | | _____|______ ''' , ''' YOU DEAD!!! ___________ |/ | | | | (X_X) | \|/ | | | / \ | | | _____|______ ''' ] #used to validate user input ALPHABET = ['abcdefghijklmnopqrstuvwxyz'] #Declares lists of different sized words fourWords = ['ties', 'shoe', 'wall', 'dime', 'pens', 'lips', 'toys', 'from', 'your', 'will', 'have', 'long', 'clam', 'crow', 'duck', 'dove', 'fish', 'gull', 'fowl', 'frog', 'hare', 'hair', 'hawk', 'deer', 'bull', 'bird', 'bear', 'bass', 'foal', 'moth', 'back', 'baby'] fiveWords = ['jazzy', 'faker', 'alien', 'aline', 'allot', 'alias', 'alert', 'intro', 'inlet', 'erase', 'error', 'onion', 'least', 'liner', 'linen', 'lions', 'loose', 'loner', 'lists', 'nasal', 'lunar', 'louse', 'oasis', 'nurse', 'notes', 'noose', 'otter', 'reset', 'rerun', 'ratio', 'resin', 'reuse', 'retro', 'rinse', 'roast', 'roots', 'saint', 'salad', 'ruins'] sixwords = ['baboon', 'python',] def main(): #Gets menu choice from user choice = menu() #Initializes dictionary of user accounts from file accDct = loadAcc() #initializes user's account user = Account.Account("", "", "", "", 0, 0) while choice != 0: if choice == 1: user = play(user) if choice == 2: createAcc(accDct) if choice == 3: user = logIn(accDct) if choice == 4: viewAcc(user) choice = menu() saveAcc(accDct) #Plays the game def play(user): os.system("cls") #Clears screen hangman = 0 #Used as index for stage view done = False #Used to signal when game is finished guessed = [''] #Holds letters already guessed #Gets the game word lenght from the user difficulty = int(input("Chose Difficulty/Word Length:\n"\ "1. Easy: Four Letter Word\n"\ "2. Medium: Five Letter Word\n"\ "3. 
Hard: Six Letter Word\n"\ "Choice: ")) #Validates input while difficulty < 1 or difficulty > 3: difficulty = int(input("Invalid menu choice.\n"\ "Reenter Choice(1-3): ")) #Gets a random word from a different list depending on difficulty if difficulty == 1: word = random.choice(fourWords) if difficulty == 2: word = random.choice(fiveWords) if difficulty == 3: word = random.choice(sixWords) viewWord = list('_'*len(word)) letters = list(word) while done == False: os.system("cls") print(STAGES[hangman]) for i in range(len(word)): sys.stdout.write(viewWord[i]) sys.stdout.write(" ") print() print("Guessed Letters: ") for i in range(len(guessed)): sys.stdout.write(guessed[i]) print() guess = str(input("Enter guess: ")) guess = guess.lower() while guess in guessed: guess = str(input("Already guessed that letter.\n"\ "Enter another guess: ")) while len(guess) != 1: guess = str(input("Guess must be ONE letter.\n"\ "Enter another guess: ")) while guess not in ALPHABET[0]: guess = str(input("Guess must be a letter.\n"\ "Enter another guess: ")) if guess not in letters: hangman+=1 for i in range(len(word)): if guess in letters[i]: viewWord[i] = guess guessed += guess if '_' not in viewWord: print ("Congratulations! 
You correctly guessed",word) done = True win = user.get_win() win += 1 username = user.get_username() password = user.get_password() name = user.get_name() email = user.get_email() loss = user.get_loss() user = Account.Account(username, password, name, email, win, loss) if hangman == 6: os.system("cls") print() print(STAGES[hangman]) print("You couldn't guess the word",word.upper(),"before being hanged.") print("Sorry, you lose.") done = True loss = user.get_loss() loss += 1 username = user.get_username() password = user.get_password() name = user.get_name() email = user.get_email() win = user.get_win() user = Account.Account(username, password, name, email, win, loss) #Loads user accounts from file def loadAcc(): try: iFile = open('userAccounts.txt', 'rb') accDct = pickle.load(iFile) iFile.close except IOError: accDct = {} return accDct #Displays the menu def menu(): os.system('cls') print("Welcome to Karl-Heinz's Hangman") choice = int(input("1. Play Hangman\n"\ "2. Create Account\n"\ "3. Log In\n"\ "4. View Account Details\n"\ "0. 
Quit Program\n"\ "Choice: ")) while choice < 0 or choice > 4: choice = int(input("Invalid Menu Choice.\n"\ "Reenter Choice: ")) return choice #Logs user in to existing account def logIn(accDct): os.system('cls') user = Account.Account("","","","",0,0) username = str(input("Enter Username(case sensitive): ")) if username not in accDct: print("Account does not exist") os.system("pause") return user temp = Account.Account("","","","",0,0) temp = accDct[username] password = str(input("Enter Password(case sensitive): ")) if password != temp.get_password(): print("Incorrect password.") os.system("pause") return user user = accDct[username] return user #Creates a new account and a new account file if one doesn't exist def createAcc(accDct): os.system('cls') print("Enter account info:") username = str(input("UserName: ")) if username in accDct: print("Account already exists.") os.system("pause") return password = str(input("Password: ")) name = str(input("Name: ")) email = str(input("Email: ")) wins = 0 loss = 0 tempuser = Account.Account(username, password, name, email, wins, loss) accDct[username] = tempuser print("Account created.") os.system("pause") def viewAcc(user): os.system('cls') print("Account Details: ") print("Username: ",user.get_username()) print("Name: ",user.get_name()) print("Email: ",user.get_email()) print("Wins: ",user.get_win()) print("Losses: ",user.get_loss()) os.system("pause") #Saves accounts dictionary to file def saveAcc(accDct): oFile = open("userAccounts.txt", "wb") pickle.dump(accDct, oFile) oFile.close() main() ``` Any help is greatly appreciated.
Your play() function doesn't have a `return` statement, which means it's returning `None` as its return value. That's how you're getting `None` into your `user` variable in main(). Add a `return` statement to your play() function and you should be fine.
You are creating *new* `Account` instances in `play` but never return these. The `user` variable in `main` then is assigned `None` because `play` never returns anything. Return `user` from `play` when done: ``` def play(user): # .... return user ```
Python: 'NoneType' object has no attribute 'get_username'
[ "", "python", "class", "python-3.x", "attributeerror", "nonetype", "" ]
Using SQL Server 2008, I've created a database where every table has a datetime column called "CreatedDt". What I'd like to do is create a trigger for each table so that when a value is inserted, the CreatedDt column is populated with the current date and time. If you'll pardon my pseudocode, what I'm after is the T-SQL equivalent of: ``` foreach (Table in MyDatabase) { create trigger CreatedDtTrigger { on insert createddt = datetime.now; } } ``` If anyone would care to help out, I'd greatly appreciate it. Thanks!
As @EricZ says, the best thing to do is bind a default for the column. Here's how you'd add it to every table using a cursor and dynamic SQL: ``` declare @table sysname, @cmd nvarchar(max) declare c cursor for select name from sys.tables where is_ms_shipped = 0 order by name open c; fetch next from c into @table while @@fetch_status = 0 begin set @cmd = 'ALTER TABLE ' + @table + ' ADD CONSTRAINT DF_' + @table + '_CreateDt DEFAULT GETDATE() FOR CreateDt' exec sp_executesql @cmd fetch next from c into @table end close c; deallocate c ```
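The statements above are T-SQL, but the underlying idea of a default-valued timestamp column can be tried out locally as a sketch using SQLite through Python's stdlib `sqlite3` module (table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders ("
    "  Id INTEGER PRIMARY KEY,"
    "  CreatedDt TEXT DEFAULT CURRENT_TIMESTAMP"  # filled in automatically
    ")"
)
conn.execute("INSERT INTO Orders (Id) VALUES (1)")  # CreatedDt not supplied
created = conn.execute("SELECT CreatedDt FROM Orders WHERE Id = 1").fetchone()[0]
```

Because the default is bound to the column, no trigger is needed for the common case of stamping the insert time.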
No need to use `Cursors`. Just copy the **result** of the query below and execute it. ``` select distinct 'ALTER TABLE '+ t.name + ' ADD CONSTRAINT DF_'+t.name+'_crdt DEFAULT getdate() FOR '+ c.name from sys.tables t inner join sys.columns c on t.object_id=c.object_id where c.name like '%your column name%' ```
SQL Server 2008: create trigger across all tables in db
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "triggers", "" ]
I have a query string that runs inside a loop, and the query is executed for every item in the list. The list contains strings, and I use Python's string format technique to fill the query with the respective string from the list as the iteration progresses. I have made the query unicode along with the string from the list. Here is my unicode query: ``` query = ur'''SELECT something FROM some_table WHERE some_name LIKE "{this_name}%"''' ``` Before executing, I encode the query string to `utf-8` ``` try: formatted_query = query.format(this_name=list_name) #encode the query encoded_q = formatted_query.encode('utf-8') # execute the query self.dbCursor.execute(encoded_q) row = self.dbCursor.fetchone() except Exception, e: traceback.print_exc() ``` But the problem is that sometimes I run into strings from the list that contain a single quote, for example `foo's`. I have already encoded with utf-8 and I thought that by doing so I wouldn't have to worry about situations like this. But I am getting an SQL error because MySQL is not escaping the single quote. My next attempt was to replace the single quote: ``` format_string = u"foo's".replace(u"'",u"\'") ``` But this didn't work either. I also saw that the answer to [this question](https://stackoverflow.com/questions/2561178/python-equivalent-of-mysql-real-escape-string-for-getting-strings-safely-into-m) uses the MySQLdb library's built-in functionality, which I am not aware of, so I am seeking the help of the Stack Overflow community to solve this problem. I changed the code to reflect the solution suggested in the answers, but the result is the same. Here is the change: ``` args = [u"{this_name}%".format(this_name=format_name)] self.dbCursor.execute(query.encode('utf-8'), args) ``` *# the error gets thrown at this line:* **Error:** ``` UnicodeEncodeError: 'latin-1' codec can't encode character u'\u014d' in position 4: ordinal not in range(256) ``` This is the string the error is complaining about, and I have checked the type of that string: it is a unicode string.
``` this_name= Sentōkisei type= <type 'unicode'> ```
If you call `dbCursor.execute` with two arguments, your DB adapter will quote the arguments for you. See the [DB-API specification](http://www.python.org/dev/peps/pep-0249/#id15) for details: ``` query = u'''SELECT something FROM some_table WHERE some_name LIKE %s''' args = [u"{this_name}%".format(this_name=list_name)] self.dbCursor.execute(query, args) ``` The `%s` in `query` is a [parameter marker](http://www.python.org/dev/peps/pep-0249/#paramstyle). It will be replaced by a quoted parameter given in `args`. The correct parameter marker to use depends on your DB adapter. For example, [MySQLdb](http://mysql-python.sourceforge.net/MySQLdb.html) uses `%s`, while [oursql](http://pythonhosted.org/oursql/) and [sqlite3](http://docs.python.org/2/library/sqlite3.html) use `?`. Using parametrized SQL is the recommended way. You really should never have to quote the arguments yourself. --- Regarding the error, you post that ``` this_name= Sentōkisei type= <type 'unicode'> ``` I am going to assume this means `format_name` is unicode. Therefore, ``` args = [u"{this_name}%".format(this_name=format_name)] ``` will make `args` a list containing one unicode. Now we reach the line which is raising an error: ``` self.dbCursor.execute(query.encode('utf-8'), args) ``` `query` is already `unicode`. If you encode that unicode, then it becomes a `str`. So `query.encode('utf-8')` is a `str`, but `args` is a list of `unicode`. I'm not sure why you wanted to encode `query`, but your DB adapter should be able to take two unicode arguments. So try ``` self.dbCursor.execute(query, args) ``` Now, upon re-reading your comments, it appears you've tried this and it also raises the same error: ``` UnicodeEncodeError: 'latin-1' codec can't encode character u'\u014d' in position 75: ordinal not in range(256) ``` I'm not sure why the DB adapter is trying to encode the unicode with `latin-1` when you want `utf-8` instead. 
The best solution would be to track down where this choice of `latin-1` is coming from. A hacky workaround would be to try encoding the strings yourself: ``` query = u'''SELECT something FROM some_table WHERE some_name LIKE %s'''.encode('utf-8') args = [u"{this_name}%".format(this_name=list_name).encode('utf-8')] self.dbCursor.execute(query, args) ``` But let me stress I really don't think this is the best way, nor should this be necessary.
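To see parametrized quoting in action without a MySQL server, here is a sketch using the stdlib `sqlite3` module (whose parameter marker is `?` rather than `%s`); the driver handles the embedded single quote, so no manual escaping is needed. The data here is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_name TEXT, something TEXT)")
conn.execute("INSERT INTO some_table VALUES ('foo''s bar', 'hit')")

name = "foo's"  # contains a single quote; no manual escaping needed
rows = conn.execute(
    "SELECT something FROM some_table WHERE some_name LIKE ?",
    [name + "%"],  # the adapter quotes the argument safely
).fetchall()
```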
I have added an answer to a similar question here, you can take a look at it too! link :<https://stackoverflow.com/a/61042304/8939258>
Python: Escape Single Quote from MySQL Query
[ "", "python", "unicode", "utf-8", "mysql-python", "" ]
I have a query: ``` INSERT INTO I#journal ( Type_, Mndnr, Obj, Status, Reason ) VALUES ( 'PO', '0177', '000222', 'NEW', '1' ) ``` This one works OK. But instead of '1' I want to insert multiple values into one field, like '1','2','3'. Usually you would do it like this: ``` INSERT INTO I#journal ( Type_, Mndnr, Obj, Status, Reason ) VALUES ( 'PO', '0177', '000222e', 'NEW', '1,2,3' ) ``` But how do I do it if the values are put there as `'1','2','3'`? ``` INSERT INTO I#journal ( Type_, Mndnr, Obj, Status, Reason ) VALUES ( 'PO', '0177', '000222e', 'NEW', '1','2','3' ) ``` So, we **can't** change `'1','2','3'` (due to some automation), but we can add anything before and after this string. As a result, the information in the `Reason` field should be something like `1,2,3`. How can I do that?
Try: ``` INSERT INTO I#journal ( Type_, Mndnr, Obj, Status, Reason ) VALUES ( 'PO', '0177', '000222e', 'NEW', replace(q'['1','2','3']', q'[',']', '') ) ```
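At the string level, the `replace` above performs exactly this transformation; a tiny Python sketch of the same idea:

```python
raw = "'1','2','3'"           # the literal we cannot change
reason = raw.replace("'", "")  # drop every single quote
```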
Insert the value `replace(q'$'1','2','3'$', '''', '')`. A single quote (`'`) is the escape character, i.e.: ``` INSERT INTO I#journal ( Type_, Mndnr, Obj, Status, Reason ) VALUES ( 'PO', '0177', '000222', 'NEW', replace(q'$'1','2','3'$', '''', '') ); ```
how to insert multiple values in one field?
[ "", "sql", "oracle", "oracle10g", "" ]
Hey guys, I need your help. I have tried a lot and now I am tired of finding no way out. I have two tables and did an inner join; I want to remove the rows whose foreign id is not present in table 2. Below I have mentioned my structure. **Table 1** ``` Column A(Foreign) Column B record A Some thing record B Some thing record c Some thing ``` **Table 2** ``` Column A(Foreign) Column B record A Some thing record B Some thing ``` *Now I actually want to remove record C, which is not in table 2. Is there any way out?*
``` DELETE FROM `Table 1` t1 WHERE NOT EXISTS ( SELECT 1 FROM `Table 2` t2 WHERE t2.`Column A(Foreign)` = t1.`Column A(Foreign)` ) ``` Terrible table and column names, by the way Demo here - <http://sqlfiddle.com/#!2/4c2d8/1>
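The same `NOT EXISTS` delete can be tried out locally with the stdlib `sqlite3` module (hypothetical data mirroring the question's tables, with sane column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (col_a TEXT, col_b TEXT);
CREATE TABLE t2 (col_a TEXT, col_b TEXT);
INSERT INTO t1 VALUES ('record A','x'), ('record B','x'), ('record C','x');
INSERT INTO t2 VALUES ('record A','y'), ('record B','y');
""")
# Delete rows in t1 whose key has no match in t2.
conn.execute("""
DELETE FROM t1
WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.col_a = t1.col_a)
""")
remaining = [r[0] for r in conn.execute("SELECT col_a FROM t1 ORDER BY col_a")]
```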
This works: ``` DELETE VM.* FROM `Table1` AS VM LEFT JOIN `Table2` AS VL ON VL.`Column A(Foreign)` = VM.`Column A(Foreign)` WHERE VL.`Column A(Foreign)` IS NULL ```
how to Delete the rows those are not present in Table 1
[ "", "mysql", "sql", "" ]
What's the best way to handle zero denominators when dividing pandas DataFrame columns by each other in Python? for example: ``` df = pandas.DataFrame({"a": [1, 2, 0, 1, 5], "b": [0, 10, 20, 30, 50]}) df.a / df.b # yields error ``` I'd like the ratios where the denominator is zero to be registered as NA (`numpy.nan`). How can this be done efficiently in pandas? Casting to `float64` does not work at level of columns: ``` In [29]: df Out[29]: a b 0 1 0 1 2 10 2 0 20 3 1 30 4 5 50 In [30]: df["a"].astype("float64") / df["b"].astype("float64") ... FloatingPointError: divide by zero encountered in divide ``` How can I do it just for particular columns and not entire df?
You need to work in floats, otherwise you will have integer division, prob not what you want ``` In [12]: df = pandas.DataFrame({"a": [1, 2, 0, 1, 5], "b": [0, 10, 20, 30, 50]}).astype('float64') In [13]: df Out[13]: a b 0 1 0 1 2 10 2 0 20 3 1 30 4 5 50 In [14]: df.dtypes Out[14]: a float64 b float64 dtype: object ``` Here's one way ``` In [15]: x = df.a/df.b In [16]: x Out[16]: 0 inf 1 0.200000 2 0.000000 3 0.033333 4 0.100000 dtype: float64 In [17]: x[np.isinf(x)] = np.nan In [18]: x Out[18]: 0 NaN 1 0.200000 2 0.000000 3 0.033333 4 0.100000 dtype: float64 ``` Here's another way ``` In [20]: df.a/df.b.replace({ 0 : np.nan }) Out[20]: 0 NaN 1 0.200000 2 0.000000 3 0.033333 4 0.100000 dtype: float64 ```
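Stripped of pandas, the core idea is simply: divide in floats and map a zero denominator to a missing value. A library-free sketch of the same thing in plain Python, with NaN standing in for NA:

```python
import math

a = [1, 2, 0, 1, 5]
b = [0, 10, 20, 30, 50]

# Zero denominators become NaN instead of raising or producing inf.
ratios = [x / y if y != 0 else math.nan for x, y in zip(a, b)]
```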
Just for completeness, I would like to add the following way of division that uses [DataFrame.apply](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html) like: ``` df.loc[:, 'c'] = df.apply(div('a', 'b'), axis=1) ``` In full: ``` In [1]: df = pd.DataFrame({"a": [1, 2, 0, 1, 5, 0], "b": [0, 10, 20, 30, 50, 0]}).astype('float64') def div(numerator, denominator): return lambda row: 0.0 if row[denominator] == 0 else float(row[numerator]/row[denominator]) df.loc[:, 'c'] = df.apply(div('a', 'b'), axis=1) Out[1]: a b c 0 1.0 0.0 0.000000 1 2.0 10.0 0.200000 2 0.0 20.0 0.000000 3 1.0 30.0 0.033333 4 5.0 50.0 0.100000 5 0.0 0.0 0.000000 ``` --- This solution is slower than the one proposed by [Jeff](https://stackoverflow.com/a/16244367/550155): ``` df.loc[:, 'c'] = df.apply(div('a', 'b'), axis=1) # 1.27 ms ± 113 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) df.loc[:, 'c'] = df.a/df.b.replace({ 0 : np.inf }) # 651 µs ± 44.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ```
handling zeros in pandas DataFrames column divisions in Python
[ "", "python", "numpy", "pandas", "dataframe", "" ]
Using Python, I have a list of dictionary objects that contain parent/child relationships between each other which I would like to build into a tree. For example: ``` {'UI': 'T071', 'NAME': 'Entity', 'PARENT': None, 'CHILDREN': 'Conceptual Entity'} {'UI': 'T077', 'NAME': 'Conceptual Entity', 'PARENT': 'Entitity', 'CHILDREN': 'Organism Attribute, Finding, Idea or Concept'} {'UI': 'T032', 'NAME': 'Organism Attribute', 'PARENT': 'Conceptual Entity', 'CHILDREN': 'Clinical Attribute'} etc. ``` There are a total of 4 root nodes in the dataset (with 'PARENT' set as None), which make 4 separate trees. So, I was planning to make a list of trees. The data is not necessarily in any kind of ordering (so the nodes higher up the hierarchy are not necessarily higher in the list). Also, the id's (UI) are in no particular order (T071 is not necessarily higher in the tree than T072). Their names are unique, and the dataset uses their names instead of the id's (UI) to show the relationships. I have this simple class: ``` class node(): def __init__(self, value): self.value = value self.children = [] def add_child(self, obj): self.children.append(obj) ``` I'm a bit stumped on how to approach this. Suggestions much appreciated.
I think the best is to do two passes. First, create a dictionary that links each name to its node. Then you can add your items efficiently. My code: ``` nodes = dict((e["NAME"], node(e)) for e in l) for e in l: if e["PARENT"] is not None: nodes[e["PARENT"]].add_child(nodes[e["NAME"]]) ``` If you want the roots, you can use the if above, or you can filter the nodes. ``` roots = [n for n in nodes.values() if n.value["PARENT"] is None] ```
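Putting the two-pass approach together as a runnable sketch (using a node class like the one in the question, and a trimmed-down version of the sample data):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.children = []

    def add_child(self, obj):
        self.children.append(obj)

records = [
    {'UI': 'T071', 'NAME': 'Entity', 'PARENT': None},
    {'UI': 'T077', 'NAME': 'Conceptual Entity', 'PARENT': 'Entity'},
    {'UI': 'T032', 'NAME': 'Organism Attribute', 'PARENT': 'Conceptual Entity'},
]

# Pass 1: map each unique name to its node.
nodes = {e['NAME']: Node(e) for e in records}
# Pass 2: wire children to parents (order of records does not matter).
for e in records:
    if e['PARENT'] is not None:
        nodes[e['PARENT']].add_child(nodes[e['NAME']])

roots = [n for n in nodes.values() if n.value['PARENT'] is None]
```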
I once represented a \*ix process tree with just one dict, and a list of child process pid's for each parent pid. So you get: ``` dict_[1] = [2, 3, 4] dict_[2] = [5, 100] dict_[3] = [6, 200] dict_[4] = [7, 300] dict_[6] = [400] ``` It seemed to work quite well. It's optional whether you want the leaf nodes to exist with empty lists, or to just not appear in the tree. I've shown them above not appearing in the tree at the dict level. I believe this is only appropriate if a pid (node) can only appear in one place in the tree. EG, 100 can't be a child of 2 *and* 4.
Build tree out of list of parent/children in Python
[ "", "python", "algorithm", "tree", "" ]
``` SELECT Id, Name, Lastname FROM customers AS c, Places AS p WHERE c.customer_ID = p.customer_ID ``` My problem is that I want to prevent the query result from showing rows that exist in another table (stages)
You can do a LEFT JOIN and check for null. ``` SELECT Id, Name, Lastname FROM customers AS c LEFT JOIN Places AS p ON c.customer_ID = p.customer_ID WHERE p.customer_ID IS NULL ```
add ``` and not exists (subquery to select your exclusions) ``` to your query
Select From a Table Where the column value does not exist in another table
[ "", "mysql", "sql", "" ]
I am new to TKinter and can't seem to find any examples on how to view a document in a window. What I am trying to accomplish is that when selecting a PDF or TIF it will open the file and show the first page in a window using TKinter. Is this possible?
A long time has passed since this question was posted but, for those still looking for a solution, here's one I've found : <https://github.com/rk700/PyMuPDF/wiki/Demo:-GUI-script-to-display-a-PDF-using-wxPython-or-Tkinter> Basically, it uses PyMuPDF, a python binding for MuPDF. MuPDF is a lightweight document viewer capable of displaying a few file formats, such as pdf and epub. I quote the code used for TKinter: This demo can easily be adopted to Tkinter. You need the imports ``` from Tkinter import Tk, Canvas, Frame, BOTH, NW from PIL import Image, ImageTk ``` and do the following to display each PDF page image: ``` #----------------------------------------------------------------- # MuPDF code #----------------------------------------------------------------- pix = doc.getPagePixmap(pno - 1) # create pixmap for a page #----------------------------------------------------------------- # Tkinter code #----------------------------------------------------------------- self.img = Image.frombytes("RGBA", [pix.width, pix.height], str(pix.samples)) self.photo = ImageTk.PhotoImage(self.img) canvas = Canvas(self, width=self.img.size[0]+20, height=self.img.size[1]+20) canvas.create_image(10, 10, anchor=NW, image=self.photo) canvas.pack(fill=BOTH, expand=1) ```
Nothing is impossible my friend! Try using snippets from this: <http://nedbatchelder.com/blog/200712/extracting_jpgs_from_pdfs.html> to convert the pdf to an image. I would then display the image on a label using PIL and Tkinter. And I think Tif files should display on a label without problems IIRC.
View a document TKinter
[ "", "python", "tkinter", "" ]
I need a good, quick method for finding the 10 smallest real values from a numpy array that could have arbitrarily many `nan` and/or `inf` values. I need to identify the indices of these smallest real values, not the values themselves. I have found the `argmin` and `nanargmin` functions from numpy. They aren't really getting the job done because I also want to specify more than 1 value, like I want the smallest 100 values, for example. Also they both return `-inf` values as being the smallest value when it is present in the array. `heapq.nsmallest` kind of works, but it also returns `nan` and `-inf` values as smallest values. Also it doesn't give me the indices that I am looking for. Any help here would be greatly appreciated.
The only values that should be throwing this out are the negative infinite ones. So try: ``` import numpy as np a = np.random.rand(20) a[4] = -np.inf k = 10 a[np.isneginf(a)] = np.inf result = a[np.argsort(a)[:k]] ```
It seems to me like you could just take the first `n` finite values from your sorted array, instead of trying to modify the original array, which could be dangerous. ``` n = 10 b = np.sort(a) smalls = b[np.isfinite(b)][:n] ```
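Since the question mentions `heapq.nsmallest`, note that filtering out non-finite values first also works, and pairing each value with its index returns the indices directly, which is what the question actually asks for. A pure-stdlib sketch with made-up data:

```python
import heapq
import math

a = [3.0, float('nan'), float('-inf'), 1.0, 2.0, float('inf'), 0.5]
k = 3

# Keep only finite (value, index) pairs, then take the k smallest by value.
smallest = heapq.nsmallest(
    k, ((v, i) for i, v in enumerate(a) if math.isfinite(v))
)
indices = [i for _, i in smallest]
```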
Get smallest N values from numpy array ignoring inf and nan
[ "", "python", "arrays", "math", "numpy", "" ]
I'm trying to have a request with a case sensitive result. For example in my database I have ``` ABCdef abcDEF abcdef ``` The request is ``` SELECT * FROM table WHERE col = 'abcdef' ``` but I have my 3 rows as result and I just want abcdef I try to find a solution with ``` SELECT * FROM table WHERE col COLLATE Latin1_General_CS_AS = 'abcdef' COLLATE Latin1_General_CS_AS ``` but I have this error: > Unknown collation: 'Latin1\_General\_CS\_AS'{"success":false,"error":"#1273 - Unknown collation: 'Latin1\_General\_CS\_AS'"} Thanks
Thanks for your help, I found the solution: it was not latin1 but utf8 ``` COLLATE utf8_bin ```
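The MySQL-specific collations above can't be tried outside MySQL, but the case-sensitivity effect itself can be sketched with the stdlib `sqlite3` module, whose default `=` comparison is binary (case-sensitive), with `COLLATE NOCASE` shown for contrast:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("ABCdef",), ("abcDEF",), ("abcdef",)])

# Case-sensitive comparison: only the exact-case row matches.
cs = conn.execute("SELECT col FROM t WHERE col = 'abcdef'").fetchall()
# Case-insensitive comparison: all three rows match.
ci = conn.execute("SELECT col FROM t WHERE col = 'abcdef' COLLATE NOCASE").fetchall()
```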
`Latin1_General_CS_AS` is a SQL Server collation. For MySQL, try `latin1_general_cs`: ``` WHERE col = 'abcdef' COLLATE latin1_general_cs ```
Case-sensitive SQL differentiate between upper and lower case
[ "", "sql", "collation", "" ]
I want to know which expression I can use with `like`, which will **always** match. That is, it doesn't actually filter anything out. So for example, suppose if the following SQL returns 1000 records ... ``` select * from rts.Address ``` ... then this should also return 1000 records ... ``` select * from rts.Address where StreetName like 'likeExpressionHere' ``` I tried using this: ``` select * from rts.Address where StreetName like '%%' ``` .. but this still did some filtering (i.e. some records were missing) .. Also, none of the `StreetName` values in the `rts.Address` table are null (i.e. they all have some varchar value in them)
Your problem is not the `like`, which should work. It is most likely that the `StreetName` column takes on NULL values. You can't fix this with like, although you could do something like: ``` where coalesce(StreetName, '<NULL>') like '%' ```
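The NULL behaviour is easy to demonstrate with the stdlib `sqlite3` module: `LIKE '%'` matches every non-NULL value but silently drops NULL rows, which is why the `coalesce` (or `IS NULL`) workaround is needed. Sample data invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Address (StreetName TEXT)")
conn.executemany("INSERT INTO Address VALUES (?)",
                 [("Main St",), ("High St",), (None,)])

all_rows = conn.execute("SELECT COUNT(*) FROM Address").fetchone()[0]
# NULL LIKE '%' evaluates to NULL, so the NULL row is filtered out.
like_rows = conn.execute(
    "SELECT COUNT(*) FROM Address WHERE StreetName LIKE '%'"
).fetchone()[0]
# coalesce makes the predicate true for every row.
fixed_rows = conn.execute(
    "SELECT COUNT(*) FROM Address WHERE coalesce(StreetName, '<NULL>') LIKE '%'"
).fetchone()[0]
```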
It will depend on your data, but `where Streetname like '%'` should work. If for example the column is nullable, you will need to add `where StreetName like '%' OR StreetName IS NULL`
SQL 'like' expression to match all varchar values (i.e. no actual filtering done)?
[ "", "sql", "sql-server", "sql-server-2008", "select", "sql-like", "" ]
I have a list like this ``` mylist = [student_number , student_age , student_marks , subject_name , subject_marks, subject_date , ass_name , ass_number] ``` I want something like ``` list_student = [student_number , student_age , student_marks] list_subject = [subject_name , subject_marks, subject_date] list_ass = [ass_name , ass_number] ``` so that it matches the text before the underscore and uses that as the key to a dictionary. I want to convert that to a dictionary so that I can access it like ``` for a in AllList['student']: Student stuff ``` EDIT: The list elements can be in any order
I'd do it like this: ``` # Build dictionary with prefix:list pairs targets = { 'student_': [], 'subject_': [], 'ass_': [] } for i in mylist: # Try to find matching prefix for prefix in targets: if i.startswith(prefix): targets[prefix].append(i) break ``` This will easily allow you to add many more prefixes and doesn't care about the order. Assuming (that the values are strings): ``` mylist = ['student_number', 'student_age', 'student_marks', 'subject_name', 'subject_marks', 'subject_date' , 'ass_name' , 'ass_number' ] ``` You'll get result like this: ``` >>> for i in targets: print( i, targets[i]) ... ass_ ['ass_name', 'ass_number'] student_ ['student_number', 'student_age', 'student_marks'] subject_ ['subject_name', 'subject_marks', 'subject_date'] ```
As an alternative you can use [collections.defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict): ``` from collections import defaultdict mylist = ['student_number', 'student_age', 'student_marks', 'subject_name', 'subject_marks', 'subject_date' , 'ass_name' , 'ass_number' ] d = defaultdict(list) for v in mylist: k = v.split('_')[0] d[k].append(v) ``` It will give you: ``` >>>print d defaultdict(<type 'list'>, {'ass': ['ass_name', 'ass_number'], 'student': ['student_number', 'student_age', 'student_marks'], 'subject': ['subject_name', 'subject_marks', 'subject_date']}) ```
How can i convert the list into three lists based on criteria and put in dictionary
[ "", "python", "list", "" ]
``` Available formats: 37 : mp4 [1080x1920] 46 : webm [1080x1920] 22 : mp4 [720x1280] 45 : webm [720x1280] 35 : flv [480x854] 44 : webm [480x854] 34 : flv [360x640] 18 : mp4 [360x640] 43 : webm [360x640] 5 : flv [240x400] 17 : mp4 [144x176] ``` That's the output of `youtube-dl -F url`. I'm writing a script and I need to check if the video has the format 18. How can I extract that first column on a list? Then it is easy to check.
Something like this, considering the data is stored in a text file: ``` In [15]: with open("abc") as f: ....: for line in f: ....: spl=line.split() ....: if '18' in spl: ....: print line ....: break ....: 18 : mp4 [360x640] ``` or if the data is stored in a string: ``` In [16]: strs="""Available formats: ....: 37 : mp4 [1080x1920] ....: 46 : webm [1080x1920] ....: 22 : mp4 [720x1280] ....: 45 : webm [720x1280] ....: 35 : flv [480x854] ....: 44 : webm [480x854] ....: 34 : flv [360x640] ....: 18 : mp4 [360x640] ....: 43 : webm [360x640] ....: 5 : flv [240x400] ....: 17 : mp4 [144x176]""" ....: In [17]: for line in strs.splitlines(): ....: spl=line.split() ....: if '18' in spl: ....: print line ....: break ....: 18 : mp4 [360x640] ```
If this is simple list, do as follows: 1. Read one line at a time as a string 2. Split the string on colon : 3. Trim the 1st item 4. Parse the item as a number
How can I create a list with the first column?
[ "", "python", "" ]
In SQL Server 2012 ... I have two column in a table, TransactionDate(date,null) and TransactionTime(time, null) I need to concatenate them together to insert into another table's column which is a Datetime datatype. Is this possible? Thanks in advance.
Try this one - ``` DECLARE @temp TABLE ( TransactionDate DATE , TransactionTime TIME ) INSERT INTO @temp (TransactionDate, TransactionTime) VALUES ('2013-04-27', '08:37:01.217'), ('2013-04-27', '12:39:14.613') --INSERT INTO ... (DatetimeColumn) SELECT CAST(TransactionDate AS VARCHAR(10)) + ' ' + CAST(TransactionTime AS VARCHAR(12)) FROM @temp ```
Why not ``` SELECT TransactionDate+ CAST(TransactionTime AS datetime) FROM table ```
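For completeness, if the values have already been fetched into application code, Python's stdlib has a direct equivalent of this date-plus-time concatenation, `datetime.combine` (values here are illustrative):

```python
from datetime import date, time, datetime

transaction_date = date(2013, 4, 27)
transaction_time = time(8, 37, 1)

# Merge a date and a time into a single datetime.
combined = datetime.combine(transaction_date, transaction_time)
```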
SQL Server 2012, Concat a Date with a Time column to insert into a DateTime column in another table
[ "", "sql", "sql-server", "database-design", "" ]
Which join do you use to select data from parent and child table, where the parent table may contain no child data?
I think your question is more properly stated as this: > How to find items in the master table which don't have any items in the child table? That is a very common question in SQL, and there is a known solution... this works in T-SQL (you need to always say what you're using) ``` select m.master_data, c.child_data from master_table m left outer join child_table c on m.ID = c.ID where c.child_data IS NULL ``` When there is nothing in the child table, the OUTER join produces nulls in the child table columns, so you just show the rows with nulls in the child table and that's it. You don't show any columns from the child table in your result.
There are a few ways of finding properties that don't have a lease. 1. Use a sub-query in the where clause ``` select * from propertys where propertys.propertyID not in (select propertyID from lease); ``` 2. Alternatively you can use a left join with a null check in the where clause ``` select * from propertys left join lease on lease.propertyID = propertys.propertysID where lease.leaseID is null; ```
Simple/ish command
[ "", "sql", "command", "view", "" ]
Python newb... I have a list of dicts that I am trying to organize into the same month & year: ``` [{'date':'2008-04-23','value':'1'}, {'date':'2008-04-01','value':'8'}, {'date':'2008-04-05','value':'3'}, {'date':'2009-04-19','value':'5'}, {'date':'2009-04-21','value':'8'}, {'date':'2010-09-09','value':'3'}, {'date':'2010-09-10','value':'4'}, ] ``` What I'm trying to get is a list of dicts like this: ``` [{'date':2008-04-01,'value':'12'}, {'date':2009-04-01,'value':'13'}, {'date':2010-09-01,'value':'7'}, ] ``` Here's my code, which is just printing an empty list: ``` from datetime import datetime myList = [{'date':'2008-04-23','value':'1'}, {'date':'2008-04-01','value':'8'}, {'date':'2008-04-05','value':'3'}, {'date':'2009-04-19','value':'5'}, {'date':'2009-04-21','value':'8'},{'date':'2010-09-09','value':'3'}, {'date':'2010-09-10','value':'4'}, ] newList = [] newDict = {} for cnt in range(len(myList)): for k,v in myList[cnt].iteritems(): if k == 'date': d = datetime.strptime(v,'%Y-%m-%d').date() for elem in newList: if elem['date'] != d: newList.append({'date':d,'value':myList[cnt]['value']}) else: newList[cnt]['value'] += myList[cnt]['value'] print newList ```
First, I would sort the data1: ``` >>> lst = [{'date':'2008-04-23','value':'1'}, ... {'date':'2008-04-01','value':'8'}, ... {'date':'2008-04-05','value':'3'}, ... {'date':'2009-04-19','value':'5'}, ... {'date':'2009-04-21','value':'8'}, ... {'date':'2010-09-09','value':'3'}, ... {'date':'2010-09-10','value':'4'}, ... ] >>> lst.sort(key=lambda x:x['date'][:7]) >>> lst [{'date': '2008-04-23', 'value': '1'}, {'date': '2008-04-01', 'value': '8'}, {'date': '2008-04-05', 'value': '3'}, {'date': '2009-04-19', 'value': '5'}, {'date': '2009-04-21', 'value': '8'}, {'date': '2010-09-09', 'value': '3'}, {'date': '2010-09-10', 'value': '4'}] ``` Then, I would use `itertools.groupby` to do the grouping: ``` >>> from itertools import groupby >>> for k,v in groupby(lst,key=lambda x:x['date'][:7]): ... print k, list(v) ... 2008-04 [{'date': '2008-04-23', 'value': '1'}, {'date': '2008-04-01', 'value': '8'}, {'date': '2008-04-05', 'value': '3'}] 2009-04 [{'date': '2009-04-19', 'value': '5'}, {'date': '2009-04-21', 'value': '8'}] 2010-09 [{'date': '2010-09-09', 'value': '3'}, {'date': '2010-09-10', 'value': '4'}] >>> ``` Now, to get the output you wanted: ``` >>> for k,v in groupby(lst,key=lambda x:x['date'][:7]): ... print {'date':k+'-01','value':sum(int(d['value']) for d in v)} ... {'date': '2008-04-01', 'value': 12} {'date': '2009-04-01', 'value': 13} {'date': '2010-09-01', 'value': 7} ``` 1Your data actually already appears to be sorted in this regard, so you might be able to skip this step.
Use [itertools.groupby](http://docs.python.org/2/library/itertools.html#itertools.groupby): ``` data = [{'date':'2008-04-23','value':'1'}, {'date':'2008-04-01','value':'8'}, {'date':'2008-04-05','value':'3'}, {'date':'2009-04-19','value':'5'}, {'date':'2009-04-21','value':'8'}, {'date':'2010-09-09','value':'3'}, {'date':'2010-09-10','value':'4'}, ] import itertools key = lambda datum: datum['date'].rsplit('-', 1)[0] data.sort(key=key) result = [{ 'date': key + '-01', 'value': sum(int(item['value']) for item in group) } for key, group in itertools.groupby(data, key=key)] print result # [{'date': '2008-04-01', 'value': 12}, # {'date': '2009-04-01', 'value': 13}, # {'date': '2010-09-01', 'value': 7}] ```
How do I group this list of dicts by the same month?
[ "", "python", "" ]
Okay this is not a programming question so feel free to downvote (Yay ! For the first time, I am encouraging such behavior - make the most of it). But, I feel that it is important to know this. I have searched google and not seen one page which gives real world examples of why a particular join is needed. I read this nice tutorial - <http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html> My main issue is that if you join two tables and extract only part of it (example - 3, 4 th pic in link), then why are you joining in the first place ? I need real example which show me the need to join two tables and then extract only one part of it. Hurrah ! Let the down voting begin !
Often the JOIN is being used merely to determine the rows in one table that you're interested in. For instance, "tell me the names of all the people who live in green houses". If you have a table of people and a table of houses, you'll join them to determine who lives in each house, and use a WHERE clause that selects the green houses. But you'll only return names from the People table.
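The green-houses example above can be made concrete with a quick `sqlite3` sketch (invented data): the join to `Houses` is there purely to filter, and only columns from `People` are returned.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE People (id INTEGER, name TEXT, house_id INTEGER);
CREATE TABLE Houses (id INTEGER, colour TEXT);
INSERT INTO People VALUES (1,'Ann',10), (2,'Bob',11), (3,'Cat',10);
INSERT INTO Houses VALUES (10,'green'), (11,'red');
""")
# Join to Houses only to apply the colour filter; select nothing from it.
names = [r[0] for r in conn.execute("""
    SELECT p.name
    FROM People p
    JOIN Houses h ON h.id = p.house_id
    WHERE h.colour = 'green'
    ORDER BY p.name
""")]
```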
You might join two tables together, even if you only want the columns from one of them, so that you can filter on the columns in the second table. Example: Teachers table and students table. You only want a list of teachers names if they have at least 20 kids in their class. You join the two tables to find out how many students a teacher has, but you only select the name from the teachers table - very simple example.
Need to understand use cases for JOINS - why join when you are interested in only part of the join?
[ "", "sql", "" ]
``` from datetime import datetime class sms_store: store = [] read = [] def add_new_arrival(self,number,time,text): sms_store.read.append(len(sms_store.store)) sms_store.store.append(("From: {}, Recieved: {}, Msg: {}".format(number,time,text))) def delete(self,i): try: del sms_store.store[i] except IndexError: print("Index is out of range. Cannot delete") def message_count(self): return print("Amt of messages in inbox: {}".format(len(sms_store.store))) def viewall(self): print(sms_store.store) def get_unread_indexes(self): #### ###################################I need help for this method. def get_message(self,i) print(sms_store.store[i]) ### tests #### time = datetime.now().strftime('%H:%M:%S') my_inbox = sms_store() #instantiate an object 'store' for class my_inbox.add_new_arrival("12345",time,"Hello how are you?") #instance of store object my_inbox.add_new_arrival("1111111",time,"BYE BYE BYE") my_inbox.viewall() my_inbox.msgcount() ``` Thanks for viewing this. This is what I need to do: ``` my_inbox.add_new_arrival() ``` When adding a new message, its has\_been\_viewed status is set False. ``` my_inbox.get_unread_indexes() ``` Returns list of indexes of all not-yet-viewed SMS messages ``` my_inbox.get_message(i)** ``` Return (from\_number, time\_arrived, text\_of\_sms) for message[i] Also change its state to "has been viewed". If there is no message at position i, return None Please help me on those above methods!? Thank you so much!
Why don't you add another list to your class called unread. Change `add_new_arrival` to add the message to unread. Then under the `get_message` method move the specified message from unread to read. Lastly your `get_unread` method just lists the indexes of the unread list.
Hi I tweaked your code a bit, I think I have done this before in the "How to think like a computer Scientist Book", Hope it works for you. ``` from datetime import datetime ``` and ``` class SMS_store: ``` then ``` def __init__(self): self.store = [] def __str__(self): return ("{0}".format(self)) def add_new_arrival(self, number, time, text ): self.store.append(("Read: False", "From: "+number, "Recieved: "+time, "Msg: "+text)) def message_count(self): return (len(self.store)) def get_unread_indexes(self): result = [] for (i, v) in enumerate(self.store): if v[0] == "Read: False": result.append(i) return (result) def get_message(self, i): msg = self.store[i] msg = ("Read: True",) + msg[1:] self.store[i] = (msg) return (self.store[i][1:]) def delete(self, i): del self.store[i] def clear(self): self.store = [] ```
Python SMS store program using class and methods - has_been_viewed status
[ "", "python", "class", "methods", "sms", "" ]
I have a SQL query as shown below:- ``` select * from dbo.NGPTimesheetsPosition where ProjectNO = '12169-01-c' AND CreditorEmployeeID <> 'E0000' AND DocType = 'Time Sheet' ``` This returns the below data: ![SSMS output example](https://i.stack.imgur.com/z1e4B.png) What I want to do is be able to show only data up to and including a user defined date e.g. 01/02/2013. User defined date could be anything up to the current month. All help or advice much appreciated.
You'll need to [add a parameter to your report](http://msdn.microsoft.com/en-us/library/aa337401%28v=sql.105%29.aspx), and use this parameter in your dataset query along these lines:

```
SELECT *
FROM dbo.NGPTimesheetsPosition 
WHERE ProjectNO = '12169-01-c' 
AND CreditorEmployeeID <> 'E0000' 
AND DocType = 'Time Sheet'
AND TransactionDate <= @MyDateParameter -- ADDED!
```

You're not entirely clear in your question about possible values for the parameter, when you state:

> User defined date could be anything up to the current month.

I think [these folks at SqlServerCentral are right](http://www.sqlservercentral.com/Forums/Topic498576-147-1.aspx): you can't set limits for a date parameter. This leaves you with two basic options:

* Let it be. If needed you can add an `AND @MyDateParameter <= CURRENT_TIMESTAMP` clause to your query.
* Populate a dataset with available dates, and use those as [available values](http://msdn.microsoft.com/en-us/library/aa337400%28v=sql.105%29.aspx) for your parameter
``` select * from dbo.NGPTimesheetsPosition where ProjectNO = '12169-01-c' AND CreditorEmployeeID <> 'E0000' AND DocType = 'Time Sheet' and TransactionDate <= '2013-02-01' ``` ?
Filter data based on date provided by user
[ "", "sql", "sql-server", "t-sql", "reporting-services", "" ]
I have in my celery configuration ``` BROKER_URL = 'redis://127.0.0.1:6379' CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379' ``` Yet whenever I run the celeryd, I get this error ``` consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 2.00 seconds... ``` Why is it not connecting to the redis broker I set it up with, which is running btw?
Import your celery app and add your broker like this:

```
celery = Celery('task', broker='redis://127.0.0.1:6379') celery.config_from_object(celeryconfig) ```

This code belongs in `celery.py`.
If you followed [First Steps with Celery](http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#using-celery-with-django) tutorial, specifically: ``` app.config_from_object('django.conf:settings', namespace='CELERY') ``` then you need to prefix your settings with `CELERY`, so change your `BROKER_URL` to: ``` CELERY_BROKER_URL = 'redis://127.0.0.1:6379' ```
Celery tries to connect to the wrong broker
[ "", "python", "redis", "celery", "celeryd", "" ]
There's a database table that has an update trigger. Whenever a column is updated, one of its columns is automatically calculated. I have tweaked the trigger and I'd like to run it on all of the rows again. In SQL Server Management Studio, if I choose "Edit Top 200 Rows" on my table and edit one of the rows, the update trigger works. But when I write a query like: ``` UPDATE MyTable SET SomeIrrelevantColumn = 0 ``` the trigger doesn't work, the column that is supposed to be calculated by the trigger stays the same. How can I run the trigger manually on all the rows? **Edit:** Here's the trigger: ``` USE [MY_DATABASE] GO /****** Object: Trigger [dbo].[MY_TABLE_AUER] Script Date: 04/24/2013 00:05:23 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dbo].[MY_TABLE_AUER] ON [dbo].[MY_TABLE] AFTER UPDATE AS DECLARE @UPD_COLUMN_A int, @INS_COLUMN_A int, @UPD_COLUMN_B int, @UPD_COLUMN_C varchar(255), @UPD_COLUMN_D varchar(255), @UPD_COLUMN_E int, @UPD_COLUMN_F datetime BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; SELECT @UPD_COLUMN_A = _Column_A, @UPD_COLUMN_B =_Column_B, @UPD_COLUMN_C = COLUMN_C, @UPD_COLUMN_D = [Column_D] FROM DELETED; IF @UPD_COLUMN_D not like '%SomeString%' BEGIN SELECT @INS_COLUMN_A = _Column_A FROM INSERTED; IF @UPD_COLUMN_A != @INS_COLUMN_A BEGIN SELECT @UPD_COLUMN_E = MAX([_Column_B]), @UPD_COLUMN_F = MAX([_Column_G]) FROM MY_TABLE WHERE COLUMN_C = @UPD_COLUMN_C AND [Column_D] LIKE '%SomeString%'; UPDATE MY_TABLE SET COLUMN_E = @UPD_COLUMN_E, COLUMN_F = @UPD_COLUMN_F WHERE [_Column_B] = @UPD_COLUMN_B; UPDATE MY_TABLE SET COLUMN_H = @UPD_COLUMN_B WHERE [_Column_B] = @UPD_COLUMN_E; END END END ```
I have found out that one way to do it is something like the following: ``` UPDATE A SET A.SOME_COLUMN = D.ANOTHER_COLUMN FROM MY_TABLE AS A JOIN inserted AS B ON A.ID = B.ID -- Point 1. JOIN deleted AS C ON A.ID = C.ID JOIN (SELECT MAX(ID) AS OTHER_ID, GROUP_ID AS OTHER_GROUP_ID -- Point 2. FROM MY_TABLE AS E WHERE E.SOME_STRING_COLUMN LIKE '%SomeString%' GROUP BY E.GROUP_ID) AS D ON A.GROUP_ID = D.OTHER_GROUP_ID WHERE C.SOME_BOOL_COLUMN != B.SOME_BOOL_COLUMN -- Point 3. AND C.SOME_STRING_COLUMN NOT LIKE '%SomeString%' ``` 1. After the basic `UPDATE` statement, I go ahead and join the table to 'inserted' and 'deleted' special tables (more info about those at <http://www.mssqltips.com/sqlservertip/2342/understanding-sql-server-inserted-and-deleted-tables-for-dml-triggers/>). This way, I'll only go through the rows that have been updated, and I won't mess with the other ones. You can only join with one of them, but I needed to see a difference in one column between the values before and after the update operation. So that's why I used them both. 'deleted' has the row in the state before the update operation, and 'inserted' has the row in the state after the update operation. 2. After that, I need to find a row in the whole table that is calculated by a value in the row that is currently being updated. In my case, there was a parent of the current row and I wanted to find that row. Those two rows shared the same `GROUP_ID`, and I made a string test to make sure that the new row I get is qualified as a parent, you can define any kind of filter there. Basically, you write another query to find the row based on your updated row, and then you make another `JOIN` to that returned table. 3. And last, I used a `WHERE` clause to make sure that I only update the rows that has changed state by looking at the `SOME_BOOL_COLUMN` column. You can put any kind of criterion here. As you can see I check the difference between the state of the column before and after the update. 
Take whatever is written here with a grain of salt, though, as it's coming from someone who has virtually no experience with SQL.
Your trigger makes a very common but false assumption that it will execute once per row. It doesn't, it executes once per action--so when you update the entire table, I bet if you look closer, you'll see that one row was updated. \* with thanks to Aaron Bertrand's comment for this intro paragraph. You'll need to look into how to perform an update based on a JOIN (with the `inserted` or `deleted` meta-tables). For example: ``` CREATE TRIGGER TR_Sample_U ON dbo.Sample FOR UPDATE -- AFTER is default so not needed AS IF EXISTS ( --check for disallowed modifications SELECT * FROM Inserted I INNER JOIN Deleted D ON I.SampleID = D.SampleID WHERE I.Something <> D.Something AND I.UpdateDate = D.UpdateDate ) ROLLBACK TRAN; ``` These resources may also help you: * [SQL Update Query Using Joins](https://stackoverflow.com/questions/982919/sql-update-query-using-joins) * [A trigger to work on multiple row inserts](https://stackoverflow.com/questions/2178889/sql-server-a-trigger-to-work-on-multiple-row-inserts) * [An example of a multi-row handling trigger](https://stackoverflow.com/questions/14967384/computed-column-expression/14970664#14970664) * [Another multi-row handling trigger example](https://stackoverflow.com/questions/15905286/sql-server-inserted-table/15908326#15908326)
SQL Server update trigger is not working
[ "", "sql", "sql-server", "database", "triggers", "sql-update", "" ]
I am learning joins and have the following tables.

Student

```
| ID | NAME |
-------------
| 1  | A    |
| 2  | B    |
| 3  | C    |
| 4  | D    |
```

Pass

```
| ID | MARKS |
--------------
| 2  | 80    |
| 3  | 75    |
```

Fail

```
| ID | MARKS |
--------------
| 1  | 25    |
| 4  | 20    |
```

The output I want is this:

```
| NAME | MARKS |
----------------
| B    | 80    |
| C    | 75    |
| A    | 25    |
| D    | 20    |
```

I wrote a query like this:

```
select s.id,s.name,p.marks from student s 
left join pass p on s.id=p.id 
left join (select f.marks,f.id from fail f ) as nn on s.id=nn.id 
order by marks desc;
```

The output I got is this:

```
| id | name | Marks|
--------------------
| 1  | B    | 80   |
| 2  | C    | 75   |
| 3  | A    | Null |
| 4  | D    | Null |
```

Can't figure out why Null is coming. Any pointers?
You can use `CASE` statement for that: ``` SELECT Name, CASE WHEN P.Marks IS NULL THEN f.Marks ELSE P.Marks END AS Marks FROM Student s LEFT JOIN Pass p ON s.ID = p.ID LEFT JOIN Fail f ON s.ID = f.ID ORDER BY Marks DESC; ``` Or you can also use `IF` statement: ``` SELECT Name, IF(P.Marks IS NULL, F.Marks, P.Marks) AS Marks FROM Student s LEFT JOIN Pass p ON s.ID = p.ID LEFT JOIN Fail f ON s.ID = f.ID ORDER BY Marks DESC; ``` Output ``` | NAME | MARKS | ---------------- | B | 80 | | C | 75 | | A | 25 | | D | 20 | ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!2/86a51/12) To learn more about JOINs see: [A Visual Explanation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
Try the below Query, use **[COALESCE](http://forums.hscripts.com/viewtopic.php?f=12&t=2589)** ``` select s.id,s.name,COALESCE(p.marks , nn.marks) as marks from student s left join pass p on s.id=p.id left join fail nn on s.id=nn.id order by marks desc; ``` **[SQL Fiddle](http://sqlfiddle.com/#!2/86a51/15)**
Mysql join on 3 tables output
[ "", "mysql", "sql", "join", "" ]
I am new to Python, and while doing some testing I found this.

```
>item="testing315testing08634567890"
>item.index("086")
17
>item[17:10]
''
>item[17:]
'08634567890'
```

I don't understand why it's not working, while with other strings it works. Note that if I do this...

```
> item[4:10]
'ing315'
```

Are the numbers in the middle causing troubles? Is this a bug? Did I miss something?

I am using Python 3.3
The end index of a list slice is a position in the list (possibly greater than the start index); it is *not* the size of the slice! So you want something like this:

```
item[17:len(item)]
```

Notice that this is exactly equivalent to the previous snippet:

```
item[17:]
```

The start and end indexes of a slice such as `alist[a:b]` should be interpreted like this: the slice starts at the item in the `a` position in `alist` and ends one element before the `b` position in `alist`. Both `a` and `b` are *indexes* in `alist`.
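A few concrete cases make the rule easier to see (indexes are positions, and an end index at or before the start yields an empty slice):

```python
item = "testing315testing08634567890"

assert item[17:] == "08634567890"        # from position 17 to the end
assert item[17:len(item)] == item[17:]   # explicit end index, same thing
assert item[17:10] == ""                 # end before start -> empty slice
assert item[4:10] == "ing315"            # positions 4 through 9
assert item[17:17 + 10] == "0863456789"  # start + desired length as the end
```

So to take "the 10 characters starting at position 17", the end index has to be `17 + 10`, not `10`.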
The second number in your index must be larger than the first number. For example: ``` item[17:17] item[17:10] item[17:17-1] ``` will all return, ``` '' ``` If you want to get the 10 characters after the matched index you could do something like this: ``` >item="testing315testing08634567890" >item[item.index("086") : item.index("086")+10] '0863456789' ```
Python brackets substring not working, why?
[ "", "python", "python-3.x", "substring", "square-bracket", "" ]
I am trying to list students who were enrolled in at least one course in Fall quarter or at least one course in the Spring quarter, but not both. I have tried to go at this from different angles but so far I haven't succeeded with any of them. The code that I feel completes this solution would be the following. Any help is appreciated! ``` SELECT enrolled.StudentID, student.LastName, student.FirstName , enrolled.courseID, enrolled.Quarter FROM enrolled INNER JOIN student ON enrolled.studentID = student.SID GROUP BY enrolled.StudentID, student.LastName, student.FirstName , enrolled.courseID, enrolled.Quarter HAVING (count(distinct enrolled.Quarter) = 1) ```
Stouny's comment was correct; removing `quarter` from the `group by` and using an aggregator in the `select` will work:

```
SELECT enrolled.StudentID, student.LastName, student.FirstName,
    enrolled.courseID, max(enrolled.Quarter) as quarter
FROM enrolled INNER JOIN
    student
    ON enrolled.studentID = student.SID
GROUP BY enrolled.StudentID, student.LastName, student.FirstName,
    enrolled.courseID
HAVING count(*) = 1
```

Another approach that won't scale well if you have lots of quarters to check, but works ok for 2:

```
SELECT enrolled.StudentID, student.LastName, student.FirstName,
    coalesce(spring.courseID, fall.courseID), coalesce(spring.Quarter, fall.quarter)
FROM students LEFT OUTER JOIN
     enrolled as spring
     ON spring.studentId = students.sid and spring.quarter = 'spring' LEFT OUTER JOIN
     enrolled as fall
     ON fall.studentId = students.sid and fall.quarter = 'fall'
WHERE (spring.courseID IS NOT NULL and fall.courseID IS NULL) OR
      (spring.courseID IS NULL and fall.courseID IS NOT NULL)
```
You could use something along the following lines: ``` SELECT enrolled.StudentID, student.LastName, student.FirstName, enrolled.courseID, enrolled.Quarter FROM enrolled where StudentID in ( SELECT StudentID FROM enrolled where quarter in ( 'fall', 'spring' ) group by StudentID having count(*) = 1 ) ```
Where, either or not both clause SQL
[ "", "mysql", "sql", "count", "having", "having-clause", "" ]
I want to make a list and call it a name which I only know after I run the program: For example: ``` #making shelfs group_number = 1 group_name = 'group' + str(group_number) print group_name ``` group\_name will be: group1 Now I want to make an empty list called group1. How to do such a thing?
Usually you just put this into a dictionary: ``` d = {group_name:[]} ``` Now you have access to your list via the dictionary. e.g.: ``` d['group1'].append('Hello World!') ``` --- The alternative is to modify the result of the `globals()` function (which is a dictionary). This is definitely bad practice and **should be avoided**, but I include it here as it's always nice to know more about the tool you're working with: ``` globals()[group_name] = [] group1.append("Hello World!") ```
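When group names keep arriving at runtime, `collections.defaultdict` spares you the explicit empty-list creation step; a small sketch (the group names are illustrative):

```python
from collections import defaultdict

groups = defaultdict(list)   # any unknown key starts as an empty list

group_number = 1
group_name = 'group' + str(group_number)

groups[group_name].append('Hello World!')  # no need to create the list first
groups['group2'].append('more data')

print(groups['group1'])  # ['Hello World!']
```

A plain dict with `d.setdefault(group_name, []).append(...)` achieves the same thing if you prefer not to import anything.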
You are wanting to create a pseudo-namespace of variables starting with "group". Why not use a dict instead? ``` #making shelfs groups = {} group_number = 1 name = str(group_number) groups[name] = [] # or whatever print groups[name] ``` This is subtly different to @mgilson's answer because I am trying to encourage you to create new namespaces for each collection of related objects.
Make a list with a name that is only known after the program runs
[ "", "python", "list", "python-2.7", "" ]
Having some issues with a list after using the itertools permutations function. ``` from itertools import permutations def longestWord(letters): combinations = list(permutations(letters)) for s in combinations: ''.join(s) print(combinations) longestWord("aah") ``` The output looks like this: ``` [('a', 'a', 'h'), ('a', 'h', 'a'), ('a', 'a', 'h'), ('a', 'h', 'a'), ('h', 'a', 'a'), ('h', 'a', 'a')] ``` I would like this to be a simple list, but it seems to be coming out as a list of tuples(?). Can anyone help me format this so it comes out as the following: ``` ['aah', 'aha', 'aah', 'aha', 'haa', 'haa'] ```
``` from itertools import permutations def longestWord(letters): return [''.join(i) for i in permutations(letters)] print(longestWord("aah")) ``` Result: ``` ['aah', 'aha', 'aah', 'aha', 'haa', 'haa'] ``` A few suggestions: 1. Don't print inside the function, return instead and print the returned value. 2. Your naming of variable `combination` is not good, as combination is different from permutation 3. Your join wasn't doing anything, join doesn't change value inline, it returns the string 4. The function name does not represent what it does. longest word?
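One caveat worth noting: with repeated letters, `permutations` yields duplicate orderings, as the `'aah'` output shows. If only distinct strings are wanted, wrapping the result in a `set` removes them; the helper name below is made up for the demo:

```python
from itertools import permutations

def distinct_arrangements(letters):
    # set() drops the duplicates that repeated letters produce
    return sorted(set(''.join(p) for p in permutations(letters)))

print(distinct_arrangements("aah"))  # ['aah', 'aha', 'haa']
```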
Permutations returns an iterator yielding tuples so you need to join them. A map is a nice way to do it instead of your for-loop. ``` from itertools import permutations def longestWord(letters): combinations = list(map("".join, permutations(letters))) print(combinations) longestWord("aah") ``` The way you were doing it, you were joining the letters in each tuple into a single string but you weren't altering the combinations list.
Converting the output of itertools.permutations from list of tuples to list of strings
[ "", "python", "string", "list", "tuples", "permutation", "" ]
How to find the third or nth maximum salary from a salary `table(EmpID, EmpName, EmpSalary)` in an optimized way?
Use `ROW_NUMBER`(if you want a single) or `DENSE_RANK`(for all related rows): ``` WITH CTE AS ( SELECT EmpID, EmpName, EmpSalary, RN = ROW_NUMBER() OVER (ORDER BY EmpSalary DESC) FROM dbo.Salary ) SELECT EmpID, EmpName, EmpSalary FROM CTE WHERE RN = @NthRow ```
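For intuition about what `DENSE_RANK` would return (ties share a rank, so the nth value means the nth *distinct* salary), the same selection logic can be mimicked outside SQL; this helper is purely illustrative, not part of the query above:

```python
def nth_highest_salary(salaries, n):
    """Return the nth highest distinct salary, or None if there are fewer than n."""
    distinct = sorted(set(salaries), reverse=True)  # like DENSE_RANK: ties collapse
    return distinct[n - 1] if n <= len(distinct) else None

print(nth_highest_salary([900, 800, 800, 700, 600], 3))  # 700 (800 counts once)
```

With `ROW_NUMBER` instead, the duplicate 800 would occupy ranks 2 and 3, so "3rd highest" would be 800; that is exactly the difference between the two window functions.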
Row Number : ``` SELECT Salary,EmpName FROM ( SELECT Salary,EmpName,ROW_NUMBER() OVER(ORDER BY Salary) As RowNum FROM EMPLOYEE ) As A WHERE A.RowNum IN (2,3) ``` Sub Query : ``` SELECT * FROM Employee Emp1 WHERE (N-1) = ( SELECT COUNT(DISTINCT(Emp2.Salary)) FROM Employee Emp2 WHERE Emp2.Salary > Emp1.Salary ) ``` Top Keyword : ``` SELECT TOP 1 salary FROM ( SELECT DISTINCT TOP n salary FROM employee ORDER BY salary DESC ) a ORDER BY salary ```
How to find third or nᵗʰ maximum salary from salary table?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm trying to get a SainSmart USB relay board based on the FT245RL chip working and having a terrible time. I was under the impression that I could control the relays from the command line with something like: ``` echo -e -n "\xFF\x1\x1" > /dev/ttyUSB1 ``` While the device is mounted automatically and I think I've got the baud rate and permissions set up, nothing happens on my Debian squeeze or CentOS 5 machines. SainSmart's support is worthless. I decided to try on windows, so I installed the drivers and wrote a small program in python: ``` import serial ser = serial.Serial(2) #COM3 ser.write(chr(255) + chr(0) + chr(1)) ser.close ``` Still nothing. Perhaps it's a hardware problem so I install a provided windows program. It sees the device and works when I click on the relay buttons. Discouraged, I exit their program, look for bugs in mine (can't find any) but try it anyways, and it works! I write a much bigger program to do all sorts of cool things and cool things happen until I unplug the device. When I plug it back in, nothing works. I've got to run and exit the relay control program before my code will do anything. I suspect that I've got to do something with d2xx drivers like FT\_SetBitMode(). Is there any way to just use VCP mode? PyUSB won't install on my computer and isn't available for Linux. All I want is a simple way to control a relay on Linux using python.
I had the same problem, I think you were right about FT\_SetBitMode(). Eventually I got it to work by using `ftd2xx` (which you can get by pip). You will also need to install the D2XX [drivers](http://www.ftdichip.com/Drivers/D2XX.htm). ``` import ftd2xx if ftd2xx.listDevices() < 1: print "No relays found. Exiting..." exit() else: print "Initializing relays..." relays = ftd2xx.open(0) relays.setBitMode(255,1) # I think this uses FT_SetBitMode() relays.write(b'\01\01') # relay one on relays.write(b'\01\01') # relay two on relays.write(b'\00\00') # all relays off relays.close() ```
I would first suggest you to try out `hyperterminal` first. From your code snippet it seems that you are missing the baudrate (assuming the rest are going to be the default values). And I don't really know if it matters but I always explicitly set the port as `Serial('COM3')`, one less possible point of failure this way :)
How to use FDTI chip in VCP mode?
[ "", "python", "linux", "usb", "ftdi", "" ]
I need to use the command `lsmod` to check if a mod is loaded, but I don't know how to read from it after running it. I'm using `subprocess.Popen()` to run it. Any pointer in the right direction would be much appreciated. :D
Suppose you were looking for `ath` in `lsmod`, then command will be: `lsmod | grep ath` Using `subprocess`: ``` In [60]: c=subprocess.Popen("lsmod",stdout=subprocess.PIPE) In [61]: gr=subprocess.Popen(["grep" ,"ath"],stdin=c.stdout,stdout=subprocess.PIPE) In [62]: print gr.communicate()[0] ath5k 135206 0 ath 19188 1 ath5k mac80211 461261 1 ath5k cfg80211 175574 3 ath5k,ath,mac80211 ```
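The same two-process pipe pattern, shown here with portable commands so it can be tried anywhere; on a real system you would substitute `lsmod` for the `printf` stand-in (this assumes a Unix-like system with `printf` and `grep` on the PATH):

```python
import subprocess

# stand-in for `lsmod`: prints three fake module names, one per line
p1 = subprocess.Popen(["printf", "ath5k\nmac80211\ncfg80211\n"],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "ath"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()          # allow p1 to receive SIGPIPE if p2 exits early
out = p2.communicate()[0].decode()
print(out)  # ath5k
```

Closing `p1.stdout` in the parent is the standard idiom from the subprocess docs: it ensures the first process gets a broken-pipe signal if the second one exits before reading everything.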
Use `subprocess.Popen(stdout=subprocess.PIPE)`, then call `subprocess.communicate()` to read the output. Basic usage: ``` process = subprocess.Popen(['lsmod'], stdout=subprocess.PIPE) # Can also capture stderr result_str = process.communicate()[0] # Or [1] for stderr ``` See [the Python documentation](http://docs.python.org/2.7/library/subprocess.html#subprocess.Popen.communicate) for more details.
Read feedback from commands?
[ "", "python", "linux", "bash", "" ]
Problem: I'd like to retrieve the name of a checkbox from a SQL Server database table and see whether it's checked in an "if" statement.

Interface: my form consists of a listbox, a button and a checkbox.

SQL Server table:

```
ID   Name   cbName
1    Rest   cbRest
```

I'd like to write:

```
sb = dt.rows(0)(cbName)

If sb.Checked() = True Then
    ListBox1.Items.Add(dt.Rows(0)(1).ToString())
Else
    MsgBox("Nothing checked")
End If
```

The expected output should be Rest in the listbox. Of course the next step is to loop through hundreds of checkboxes but for now I'd just like to clarify how to make this work.

Right now I get the following error:

> Unable to cast object of type 'System.String' to type 'System.Windows.Forms.CheckBox'

I'm using Visual Basic Express 2008 with SQL Server 2008 Express, 64 bit Windows 7 Pro

Thanks in advance
Assuming you have added your check box directly to the page you can find controls by name using `Me.Controls.Item(controlID)`, so in your case it would be ``` Dim sb as CheckBox = CType(Me.Controls.Item(dt.rows(0)("cbName")), CheckBox) ```
You will have to loop through all your controls and find the control with id/name [cbRest]. As soon as you get hand on the control (which is a checkbox) you can use ``` If sb.Checked() = True Then ListBox1.Items.Add(dt.Rows(0)(1).ToString()) Else MsgBox("Nothing checked") End If ``` where sb will be the control you found.
Get checkbox name from SQL Server Version 1
[ "", "sql", "vb.net", "types", "sql-server-2008-express", "" ]
I want to do the following on my existing SQLite database on Android, which is built roughly like this:

columns: id --- rule --- path --- someotherdata

A rule e.g. contains a part of a filename (either just some trivial stuff like "mypicture" or also a filetype like "jpg"). Now what I want to do is write a query which gets me all rules that contain a part of an input string. I have tried the following example:

String value = "somepicturefilename.jpg"

my statement:

```
"SELECT DISTINCT * FROM " + TABLE_RULES + " WHERE instr('"+ value + "',rule)!=0 ORDER BY " + KEY_ID+ " DESC;"
```

The statement is in Java, so the "value" should get inserted into the statement. However, this does not work. I am not too familiar with SQL or SQLite; does anyone have a tip? ;) thanks.

edit: I've also tried charindex, which didn't work either.

edit: so a more detailed example. The following database is given:

```
id --- rule ---
1      "jpg"
2      "ponies"
3      "gif"
4      "pdf"
```

Now the user enters a filename. Let's say "poniesAsUnicorns.jpg". So "poniesAsUnicorns.jpg" is my input string and the query should match both id#1 and id#2, because "poniesAsUnicorns.jpg" contains both "jpg" and "ponies".

I hope that clarifies what I want.

edit: here is what I tried too:

```
String statement = "SELECT DISTINCT * FROM " + TABLE_RULES + " WHERE charindex(rule,'" + value + "') > 0 ORDER BY " + KEY_ID + " DESC;";
```

but it throws a "no such operation" exception.
AFAIK `instr()` is not available, so you can use: ``` select * from table where replace("poniesAsUnicorns.jpg", rule, "") != "poniesAsUnicorns.jpg"; ``` or for case insensitive matches: ``` select * from table where replace(upper("poniesAsUnicorns.jpg"), upper(rule), "") != upper("poniesAsUnicorns.jpg"); ```
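The `replace()` trick can be checked quickly with Python's built-in `sqlite3` module; the table name and rows below are made up to match the question's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (id INTEGER, rule TEXT)")
conn.executemany("INSERT INTO rules VALUES (?, ?)",
                 [(1, "jpg"), (2, "ponies"), (3, "gif"), (4, "pdf")])

filename = "poniesAsUnicorns.jpg"
# a rule matches when removing it from the filename changes the filename
rows = conn.execute(
    "SELECT id, rule FROM rules"
    " WHERE replace(?, rule, '') != ? ORDER BY id",
    (filename, filename),
).fetchall()
print(rows)  # [(1, 'jpg'), (2, 'ponies')]
```

Binding the filename as a `?` parameter also sidesteps the string concatenation shown in the question, which is open to SQL injection.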
``` "SELECT DISTINCT * FROM " + TABLE_RULES + " WHERE '" + value + "' like '%' || rule || '%' ORDER BY " + KEY_ID + " DESC;" ```
instr() function SQLITE for Android?
[ "", "android", "sql", "sqlite", "function", "" ]
Hi all, I want a web-based GUI testing tool. I found that dogtail is written in Python, but I did not find any good tutorials or examples to move further. Please guide me on whether dogtail is a good fit, or whether there is something better than it in Python. If so, please share docs and examples.

My requirement: a DVR continuously showing live video in a 4 x 4 tile layout; the GUI is web based (Mozilla). I should be able to swap video, check the log, and compare the actual result with the expected one.
[Selenium](https://pypi.python.org/pypi/selenium) is designed exactly for this; it allows you to control the browser in Python and check that things are as expected (e.g. check if a specific element exists, submit a form etc.)

There are [some more examples in the documentation](http://selenium-python.readthedocs.org/en/latest/getting-started.html)

[Project Sikuli](http://www.sikuli.org/) is a similar tool, but is more general than just web-browsers
Selenium provides a python interface rather than just record your mouse movements, see <http://selenium-python.readthedocs.org/en/latest/api.html> If you need to check your video frames your can record them locally and OCR the frames looking for some expected text or timecode.
Automated web based GUI testing tool
[ "", "python", "testing", "user-interface", "automation", "" ]
**High level:** I have checklists and checklists have checklist items. I want to get the count of checklists that have been completed. Specifically, checklists that have checklist items, all of which are completed.

**Tables:**

```
 Table "checklists"
 |    Column    |          Type          |
 +--------------+------------------------+
 | id           | integer                |
 | name         | character varying(255) |

 Table "checklist_items"
 |    Column    |          Type          |
 +--------------+------------------------+
 | id           | integer                |
 | completed    | boolean                |
 | name         | character varying(255) |
 | checklist_id | integer                |
```

**Question:** What query will give me the completed checklists count? Specifically, being careful to exclude checklists that have both complete and incomplete items, as well as checklists that have no checklist items at all.

**Tried so far:**

```
SELECT DISTINCT COUNT(DISTINCT "checklists"."id") 
FROM "checklists" 
INNER JOIN "checklist_items" ON "checklist_items"."checklist_id" = "checklists"."id" 
WHERE "checklist_items"."completed" = 't'
```

The problem with this query is that it does not exclude partially completed checklists.
Faster, yet:

```
SELECT count(DISTINCT i.checklist_id)
FROM checklist_items i
LEFT JOIN checklist_items i1 ON i1.checklist_id = i.checklist_id
                            AND i1.completed IS NOT TRUE
WHERE i.completed
AND i1.checklist_id IS NULL;
```

This only collects checklists where a completed item exists. And excludes those where another checklist_item exists that is not completed (`FALSE` or `NULL`).
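The anti-join shape is easy to sanity-check with Python's stdlib `sqlite3` (SQLite has no real boolean type, so `completed` is stored as 0/1 here and the `IS NOT TRUE` test becomes `completed = 0`; the data is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checklist_items"
             " (id INTEGER, completed INTEGER, checklist_id INTEGER)")
conn.executemany("INSERT INTO checklist_items VALUES (?, ?, ?)", [
    (1, 1, 10),  # checklist 10: every item completed -> counted
    (2, 1, 10),
    (3, 1, 20),  # checklist 20: one complete, one incomplete -> excluded
    (4, 0, 20),
    (5, 0, 30),  # checklist 30: nothing completed -> excluded
])

(count,) = conn.execute("""
    SELECT count(DISTINCT i.checklist_id)
    FROM checklist_items i
    LEFT JOIN checklist_items i1 ON i1.checklist_id = i.checklist_id
                                AND i1.completed = 0
    WHERE i.completed = 1 AND i1.checklist_id IS NULL
""").fetchone()
print(count)  # 1, only checklist 10 qualifies
```

Checklists with no items never appear in `checklist_items` at all, so they are excluded automatically, exactly as the question requires.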
The following does this using a subquery: ``` select COUNT(*) from (select cl.id, SUM(case when cli.completed <> 't' or cli.completed is null then 1 else 0 end) as NumIncomplete from checklists cl join checklist_items cli ON cli.checklist_id = cl.id group by cl.id ) t where NumIncomplete = 0 ``` You can run the subquery to see how many incomplete items there are for each check list.
Select unique count of rows referenced
[ "", "sql", "postgresql", "select", "" ]
Apologies for the really trivial introductory level python question. Currently working through Google Python tutorials and hit something which may trip me up if I don't nail it down - using and'd values as a compound condition for execution of a while loop. Reading through it appears as if the while loop operates whilst the length of both lists are positive. So once the length of both lists == 0, then the while loop hits a 0 and terminates. I'm unsure of how to parse this mentally - whether the condition is that once both lengths == 0 then the and statement and's 0 and 0, giving a negative condition and terminates. Reading it through I parse it as while '5' and '6' (if for instance 5 and 6 are the len of lists). I've not come across use of a while loop in this way so far (only been going for a day or so). # Code bit I don't get (abstract lines) ``` while len(list1) and len(list2): ``` # Code in context ``` def linear_merge(list1, list2): result = [] while len(list1) and len(list2): if list1[0] < list2[0]: result.append(list1.pop(0)) else: result.append(list2.pop(0)) result.extend(list1) result.extend(list2) return result ``` Thanks kindly.
``` while len(list1) and len(list2): ``` Will continue to loop while both list1 and list2 are not empty; if either list is empty, the loop will terminate. (In a boolean context, any value except `False`, `None`, `0`, `""`, or `[]` will evaluate as true.)
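A short demonstration of the condition at work; the loop ends as soon as either list empties, and the leftovers are appended afterwards:

```python
list1, list2 = [1, 3, 5], [2, 4]
merged = []
while len(list1) and len(list2):   # truthy only while both are non-empty
    if list1[0] < list2[0]:
        merged.append(list1.pop(0))
    else:
        merged.append(list2.pop(0))
merged.extend(list1)               # at most one of these still has items
merged.extend(list2)
print(merged)  # [1, 2, 3, 4, 5]
```

In idiomatic Python the `len()` calls are unnecessary: `while list1 and list2:` behaves identically, because empty lists are themselves falsy.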
Quoting [Built In Types Page](http://docs.python.org/2/library/stdtypes.html) on Official Python documentation: `x and y` give the result according to: `if x is false, then x, else y` Further on this page it is mentioned that: > This is a short-circuit operator, so it only evaluates the second argument if the first one is True So in your question, it first evaluates len(list1). If it is positive, the first condition is True and next it evaluates the second condition. If that is also True (i.e. len(list2)>=1), it enters into the loop. While fundamentally it is an **AND** operation, it differs in the sense that we don't need to evaluate the second condition, if the first one is False. This can be very helpful in certain cases, when the second condition may involve time consuming calculations.
While loop conditions in: compound conditional expressions AND'd [python]
[ "", "python", "list", "while-loop", "" ]
I am trying to calculate and generate plots using multiprocessing. On Linux the code below runs correctly, however on the Mac (ML) it doesn't, giving the error below: ``` import multiprocessing import matplotlib.pyplot as plt import numpy as np import rpy2.robjects as robjects def main(): pool = multiprocessing.Pool() num_figs = 2 # generate some random numbers input = zip(np.random.randint(10,1000,num_figs), range(num_figs)) pool.map(plot, input) def plot(args): num, i = args fig = plt.figure() data = np.random.randn(num).cumsum() plt.plot(data) main() ``` The Rpy2 is rpy2==2.3.1 and R is 2.13.2 (I could not install R 3.0 and rpy2 latest version on any mac without getting segmentation fault). The error is: ``` The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec(). Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug. The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec(). ``` I have tried everything to understand what the problem is with no luck. My configuration is: ``` Danials-MacBook-Pro:~ danialt$ brew --config HOMEBREW_VERSION: 0.9.4 ORIGIN: https://github.com/mxcl/homebrew HEAD: 705b5e133d8334cae66710fac1c14ed8f8713d6b HOMEBREW_PREFIX: /usr/local HOMEBREW_CELLAR: /usr/local/Cellar CPU: dual-core 64-bit penryn OS X: 10.8.3-x86_64 Xcode: 4.6.2 CLT: 4.6.0.0.1.1365549073 GCC-4.2: build 5666 LLVM-GCC: build 2336 Clang: 4.2 build 425 X11: 2.7.4 => /opt/X11 System Ruby: 1.8.7-358 Perl: /usr/bin/perl Python: /usr/local/bin/python => /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/bin/python2.7 Ruby: /usr/bin/ruby => /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby ``` Any ideas?
This error occurs on Mac OS X when you perform a GUI operation outside the main thread, which is exactly what you are doing by shifting your plot function to the multiprocessing.Pool (I imagine that it will not work on Windows either for the same reason - since Windows has the same requirement). The only way that I can imagine it working is using the pool to generate the data, then have your main thread wait in a loop for the data that's returned (a queue is the way I usually handle it...). Here is an example (recognizing that this may not do what you want - plot all the figures "simultaneously"? - plt.show() blocks so only one is drawn at a time and I note that you do not have it in your sample code - but without I don't see anything on my screen - however, if I take it out - there is no blocking and no error because all GUI functions are happening in the main thread): ``` import multiprocessing import matplotlib.pyplot as plt import numpy as np import rpy2.robjects as robjects data_queue = multiprocessing.Queue() def main(): pool = multiprocessing.Pool() num_figs = 10 # generate some random numbers input = zip(np.random.randint(10,10000,num_figs), range(num_figs)) pool.map(worker, input) figs_complete = 0 while figs_complete < num_figs: data = data_queue.get() plt.figure() plt.plot(data) plt.show() figs_complete += 1 def worker(args): num, i = args data = np.random.randn(num).cumsum() data_queue.put(data) print('done ',i) main() ``` Hope this helps.
I had a similar issue with my worker, which was loading some data, generating a plot, and saving it to a file. Note that this is slightly different from the OP's case, which seems to be oriented around interactive plotting. Still, I think it's relevant. A simplified version of my code: ``` def worker(id): data = load_data(id) plot_data_to_file(data) # Generates a plot and saves it to a file. def plot_something_parallel(ids): pool = multiprocessing.Pool() pool.map(worker, ids) plot_something_parallel(ids=[1,2,3]) ``` This caused the same error others mention: ``` The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec(). Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug. ``` Following @bbbruce's train of thought, I solved my problem by switching the matplotlib backend from `TKAgg` to the default. Specifically, I commented out the following line in my `matplotlibrc` file: ``` #backend : TkAgg ```
Python rpy2 and matplotlib conflict when using multiprocessing
[ "", "python", "matplotlib", "multiprocessing", "rpy2", "" ]
I have an array that stores an arbitrary set of items for a defined batch. Each item is defined in a separate table with its own properties and values. Example * batch 1 contains items 1,3,11,16,17 * batch 2 contains items 5,11,15. What is the proper way of storing arrays like that in MySQL? Currently, I'm storing the items as comma-delimited strings in an items column. Whenever I need to retrieve the info, I query that column, split the string into an array, loop through that array, build a new query from the split values, and then run that query joining in the split values. Surely there must be a better way of doing this? Example of the table layout: ``` Batches table: ============== id|items -------- 1 |1,3,11,16,17 2 |5,11,15 Items table: ============ id|description |serial -------------------------- 1 |some description|00046552 2 |etc |00046653 3... ```
I would store the data as a many-to-many relationship: ``` Batches table: ============== id | Batch description ---+------------------ 1 | Some description 2 | Some other desc... Items table: ============ id|description |serial --+----------------+------ 1 |some description|00046552 2 |etc |00046653 3... Batch items: ============ batch_id | item_id ---------+-------- 1 | 1 1 | 3 1 | 11 1 | 16 1 | 17 2 | 5 2 | 11 2 | 15 ``` If the order of items in a batch is important, you can add another column to `Batch items` table to indicate an order within a batch; or simply use item id as the ordering column. This way you can you can enforce referential integrity using foreign keys and make it so much easier to manipulate individual items within a batch.
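To see the many-to-many schema above in action end to end, here is a runnable sketch using Python's built-in `sqlite3` module as a stand-in for MySQL (the table and column names follow the answer; the `JOIN` has the same shape in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE batches (id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE items (id INTEGER PRIMARY KEY, description TEXT, serial TEXT);
    CREATE TABLE batch_items (
        batch_id INTEGER REFERENCES batches(id),
        item_id  INTEGER REFERENCES items(id),
        PRIMARY KEY (batch_id, item_id)
    );
""")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [(1, "some description", "00046552"),
                  (3, "etc", "00046653"),
                  (11, "another item", "00046700")])
conn.execute("INSERT INTO batches VALUES (1, 'Some description')")
conn.executemany("INSERT INTO batch_items VALUES (?, ?)",
                 [(1, 1), (1, 3), (1, 11)])

# Retrieve every item in batch 1 with a plain join -- no string splitting needed.
rows = conn.execute("""
    SELECT i.id, i.serial
    FROM batch_items bi
    JOIN items i ON i.id = bi.item_id
    WHERE bi.batch_id = ?
    ORDER BY i.id
""", (1,)).fetchall()
print(rows)  # [(1, '00046552'), (3, '00046653'), (11, '00046700')]
```

Fetching a batch is now a single indexed join instead of a query-per-split-value loop.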
It's a bad design to store comma-separated values in a column. Normalize the database properly. Convert the table into a 2-table design if the relationship of `Items` to `Batches` is `One-to-many`. Example: **Items** * ItemID (PK) * Description * Serial **Batches** * BatchID * ItemID (FK) But if you have a `Many-to-many` relationship, consider adding a new table that links the two. Items * ItemID (PK) * Description * Serial Batches * BatchID (PK) * otherColumns... Batch\_Item * BatchID (FK) -- unique pair with ItemID * ItemID (FK)
Best way of storing delimited values in MySQL
[ "", "mysql", "sql", "" ]
I have a list of lists (`nLedgers`) such as: ``` [['173', '0.', '0.', '0.'], ['183', '1000.', '0.', '0.'], ['184', '0.', '1000.', '0.'], ['194', '1000.', '1000.', '0.'], ['195', '0.', '0.', '1000.'], ['205', '1000.', '0.', '1000.'], ['206', '0.', '1000.', '1000.'], ['216', '1000.', '1000.', '1000.'], ['217', '0.', '0.', '2000.'], ['227', '1000.', '0.', '2000.'], ['228', '0.', '1000.', '2000.'], ['238', '1000.', '1000.', '2000.'], ['239', '0.', '0.', '3000.'], ['249', '1000.', '0.', '3000.'], ['250', '0.', '1000.', '3000.'], ['260', '1000.', '1000.', '3000.'], ['261', '0.', '0.', '4000.'], ['271', '1000.', '0.', '4000.'], ['272', '0.', '1000.', '4000.'], ['282', '1000.', '1000.', '4000.'], ['283', '0.', '0.', '0.'], ['293', '0.', '1000.', '0.'], ['294', '1000.', '0.', '0.'], ['304', '1000.', '1000.', '0.'], ['305', '0.', '0.', '1000.'], ['315', '0.', '1000.', '1000.'], ['316', '1000.', '0.', '1000.'], ['326', '1000.', '1000.', '1000.'], ['327', '0.', '0.', '2000.'], ['337', '0.', '1000.', '2000.'], ['338', '1000.', '0.', '2000.'], ['348', '1000.', '1000.', '2000.'], ['349', '0.', '0.', '3000.'], ['359', '0.', '1000.', '3000.'], ['360', '1000.', '0.', '3000.'], ['370', '1000.', '1000.', '3000.'], ['371', '0.', '0.', '4000.'], ['381', '0.', '1000.', '4000.'], ['382', '1000.', '0.', '4000.'], ['392', '1000.', '1000.', '4000.'], ['436', '-1000.', '0.', '0.'], ['446', '0.', '0.', '0.'], ['447', '-1000.', '1000.', '0.'], ['457', '0.', '1000.', '0.'], ['458', '-1000.', '1000.', '1000.'], ['468', '0.', '1000.', '1000.'], ['469', '-1000.', '1000.', '2000.'], ['479', '0.', '1000.', '2000.'], ['480', '-1000.', '1000.', '3000.'], ['490', '0.', '1000.', '3000.'], ['491', '-1000.', '0.', '0.'], ['501', '-1000.', '1000.', '0.'], ['502', '-1000.', '1000.', '4000.'], ['512', '0.', '1000.', '4000.'], ['513', '-1000.', '0.', '1000.'], ['523', '0.', '0.', '1000.'], ['524', '-1000.', '0.', '2000.'], ['534', '0.', '0.', '2000.'], ['535', '-1000.', '0.', '3000.'], ['545', '0.', '0.', 
'3000.'], ['546', '-1000.', '0.', '4000.'], ['556', '0.', '0.', '4000.'], ['557', '-1000.', '0.', '1000.'], ['567', '-1000.', '1000.', '1000.'], ['568', '-1000.', '0.', '3000.'], ['578', '-1000.', '1000.', '3000.'], ['579', '-1000.', '0.', '2000.'], ['589', '-1000.', '1000.', '2000.'], ['590', '-1000.', '0.', '4000.'], ['600', '-1000.', '1000.', '4000.'], ['687', '0.', '2000.', '0.'], ['697', '1000.', '2000.', '0.'], ['698', '0.', '2000.', '1000.'], ['708', '1000.', '2000.', '1000.'], ['709', '0.', '2000.', '2000.'], ['719', '1000.', '2000.', '2000.'], ['720', '0.', '2000.', '3000.'], ['730', '1000.', '2000.', '3000.'], ['731', '0.', '2000.', '4000.'], ['741', '1000.', '2000.', '4000.'], ['742', '0.', '1000.', '0.'], ['752', '0.', '2000.', '0.'], ['753', '1000.', '1000.', '1000.'], ['763', '1000.', '2000.', '1000.'], ['764', '1000.', '1000.', '3000.'], ['774', '1000.', '2000.', '3000.'], ['775', '1000.', '1000.', '0.'], ['785', '1000.', '2000.', '0.'], ['786', '1000.', '1000.', '2000.'], ['796', '1000.', '2000.', '2000.'], ['797', '1000.', '1000.', '4000.'], ['807', '1000.', '2000.', '4000.'], ['808', '-1000.', '1000.', '0.'], ['818', '-1000.', '2000.', '0.'], ['819', '0.', '1000.', '1000.'], ['829', '0.', '2000.', '1000.'], ['830', '0.', '1000.', '2000.'], ['840', '0.', '2000.', '2000.'], ['841', '0.', '1000.', '3000.'], ['851', '0.', '2000.', '3000.'], ['852', '0.', '1000.', '4000.'], ['862', '0.', '2000.', '4000.'], ['863', '-1000.', '2000.', '0.'], ['873', '0.', '2000.', '0.'], ['874', '-1000.', '2000.', '1000.'], ['884', '0.', '2000.', '1000.'], ['885', '-1000.', '2000.', '2000.'], ['895', '0.', '2000.', '2000.'], ['896', '-1000.', '2000.', '3000.'], ['906', '0.', '2000.', '3000.'], ['907', '-1000.', '2000.', '4000.'], ['917', '0.', '2000.', '4000.'], ['918', '-1000.', '1000.', '1000.'], ['928', '-1000.', '2000.', '1000.'], ['929', '-1000.', '1000.', '3000.'], ['939', '-1000.', '2000.', '3000.'], ['940', '-1000.', '1000.', '2000.'], ['950', '-1000.', '2000.', 
'2000.'], ['951', '-1000.', '1000.', '4000.'], ['961', '-1000.', '2000.', '4000.']] ``` I'd like to transform the first column to integer without the dot sign, and the rest to floats. I used this code to transform all columns to floats: ``` nLedgers=[[float(j) for j in i] for i in nLedgers] ``` and this to try my goal: ``` nLedgers=[[[if index==0 int(j), if index>0 float(j)] for index,j in i] for i in nLedgers] ``` I've got a syntax error in my code. I think I'm pretty close but I need your help.
Your attempt is very close. So, while it may not necessarily be the most concise or pythonic way to do this, it will presumably be easy for you to understand. You've written this: ``` nLedgers=[[[if index==0 int(j), if index>0 float(j)] for index,j in i] for i in nLedgers] ``` --- Your first problem is a `SyntaxError` because that's not how you write [`if` expressions](http://docs.python.org/2/reference/expressions.html#conditional-expressions). An expression that gives you `int(j)` if `index==0`, but otherwise gives you `float(j)`, looks like this: ``` int(j) if index==0 else float(j) ``` Almost a direct translation from English. --- Your next problem is that you're trying to do `for index, j in i`, which won't work, because `i` is just a sequence of `j` values, not a sequence of pairs of indexes and `j` values. But the built-in [`enumerate`](http://docs.python.org/2/library/functions.html#enumerate) function is meant for exactly this purpose: it converts a sequence of values into a sequence of index-value pairs. So, just write `for index, j in enumerate(i)`. --- So, if this code translates everything to float: ``` nLedgers=[[float(j) for j in i] for i in nLedgers] ``` We can first add in the `enumerate` to get all the indices: ``` nLedgers=[[float(j) for index, j in enumerate(i)] for i in nLedgers] ``` And then add in the conditional expression to use those indices: ``` nLedgers=[[(int(j) if index==0 else float(j)) for index, j in enumerate(i)] for i in nLedgers] ```
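Running the final comprehension on a small sample (two rows taken from the question's data) shows the mixed types in the result:

```python
nLedgers = [['173', '0.', '0.', '0.'], ['183', '1000.', '0.', '0.']]

# int() the first column, float() the rest, keyed off enumerate()'s index.
nLedgers = [[int(j) if index == 0 else float(j) for index, j in enumerate(i)]
            for i in nLedgers]
print(nLedgers)  # [[173, 0.0, 0.0, 0.0], [183, 1000.0, 0.0, 0.0]]
```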
something like this: ``` In [69]: nLedgers=[['173', '0.', '0.', '0.'], ['183', '1000.', '0.', '0.'], ['184', '0.', '1000.', '0.']] In [70]: [ [int(x[0])] + map(float,x[1:]) for x in nLedgers] Out[70]: [[173, 0.0, 0.0, 0.0], [183, 1000.0, 0.0, 0.0], [184, 0.0, 1000.0, 0.0]] ``` or: ``` In [71]: [ [int(x[0])] + [float(y) for y in x[1:]] for x in nLedgers] Out[71]: [[173, 0.0, 0.0, 0.0], [183, 1000.0, 0.0, 0.0], [184, 0.0, 1000.0, 0.0]] ``` use `list(map(float,x[1:]))` in case you're using py3x.
Transform first column to integers, rest to floats
[ "", "python", "" ]
I am opening a `.txt` file and have to use a list inside of it for a function I am writing. This is one of the lists given in the text file: ``` '[24, 72, 95, 100, 59, 80, 87]\n' ``` Using `.strip()` it gets rid of the `\n`, so it becomes: ``` '[24, 72, 95, 100, 59, 80, 87]' ``` I think using split would be useless, because using `split(' ')` would yield: ``` ['[24, 72, 95, 100, 59, 80, 87]'] ``` Which I think just deepens the complications. What is an effective way to turn this 'string list' into a real list I could use a for loop with? I've been trying for a few hours already, and can't figure it out.
You can use `ast.literal_eval`: ``` In [8]: strs='[24, 72, 95, 100, 59, 80, 87]\n' In [9]: from ast import literal_eval In [10]: literal_eval(strs) Out[10]: [24, 72, 95, 100, 59, 80, 87] ``` help on `ast.literal_eval`: ``` In [11]: literal_eval? Type: function String Form:<function literal_eval at 0xb6eaf6bc> File: /usr/lib/python2.7/ast.py Definition: literal_eval(node_or_string) Docstring: Safely evaluate an expression node or a string containing a Python expression. The string or node provided may only consist of the following Python literal structures: strings, numbers, tuples, lists, dicts, booleans, and None. ```
An answer has already been accepted, but I just wanted to add this for the record. The json module is really good for this kind of thing: ``` import json s = '[24, 72, 95, 100, 59, 80, 87]\n' lst = json.loads(s) ```
How to turn a 'string list' into a real list?
[ "", "python", "python-2.7", "" ]
How can I list only file names in a directory, without directory info in the result? I tried ``` for file in glob.glob(dir+filetype): print file ``` which gives me the result `/path_name/1.log,/path_name/2.log,....` but what I need is the file name only: `1.log`, `2.log`, etc. I do not need the directory info in the result. Is there a simple way to get rid of the path info? I don't want to do some substring manipulation on the result. Thank you!
[`os.path.basename`](http://docs.python.org/3/library/os.path.html#os.path.basename): > Return the base name of pathname `path`. This is the second element of the pair returned by passing `path` to the function `split()`. Note that the result of this function is different from the Unix `basename` program; where `basename` for '/foo/bar/' returns 'bar', the `basename()` function returns an empty string (''). So: ``` >>> os.path.basename('/path_name/1.log,/path_name/2.log') '2.log' ```
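Combined with the question's `glob` loop, the fix is just to wrap each result in `os.path.basename` (shown here on literal path strings so the snippet does not depend on any real directory; in the real code you would feed it the `glob.glob(...)` output):

```python
import os.path

paths = ['/path_name/1.log', '/path_name/2.log']  # stand-in for glob.glob(dir+filetype)
names = [os.path.basename(p) for p in paths]
print(names)  # ['1.log', '2.log']
```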
``` import os # Do not use 'dir' as a variable name, as it's a built-in function directory = "path" filetype = "*.log" # ['foo.log', 'bar.log'] [f for f in os.listdir(directory) if f.endswith(filetype[1:])] ```
how can I get filenames without directory name
[ "", "python", "" ]
I am having this error: ``` #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''tablename'( 'id'MEDIUMINT NOT NULL AUTO_INCREMENT, 'content'TEXT NOT NULL, 'd' at line 1 ``` From this statement: ``` CREATE TABLE 'tablename'( 'id'MEDIUMINT NOT NULL AUTO_INCREMENT, 'content'TEXT NOT NULL, 'date_added' DATETIME NOT NULL, 'user' VARCHAR (16) NOT NULL, PRIMARY KEY ('id'), UNIQUE ('id') ); ENGINE=MyISAM; ``` Why?
You need the backtick instead of the single quote (`'`). The backtick is this character: ``` ` ``` Better yet - don't bother with either: ``` CREATE TABLE tablename ( id MEDIUMINT ... ``` **Important**: also see the comments below from tadman; they round out this answer nicely by explaining the backticks and pointing out another syntax issue.
You are using incorrect notation. In your create table statement, you are using the single quote ( '). You can't use that here for table names and column names. Alternatives would be the tick mark (`). Or just completely remove all notation altogether. Here is your code, fully functional: ``` CREATE TABLE tablename ( `id` MEDIUMINT NOT NULL, `content`TEXT NOT NULL, `date_added` DATETIME NOT NULL, `user` VARCHAR (16) NOT NULL, PRIMARY KEY (`id`), UNIQUE (`id`) ); ```
What am I missing? MySQL syntax error
[ "", "mysql", "sql", "database", "syntax", "" ]
Why does my custom Exception class below not serialize/unserialize correctly using the pickle module? ``` import pickle class MyException(Exception): def __init__(self, arg1, arg2): self.arg1 = arg1 self.arg2 = arg2 super(MyException, self).__init__(arg1) e = MyException("foo", "bar") str = pickle.dumps(e) obj = pickle.loads(str) ``` This code throws the following error: ``` Traceback (most recent call last): File "test.py", line 13, in <module> obj = pickle.loads(str) File "/usr/lib/python2.7/pickle.py", line 1382, in loads return Unpickler(file).load() File "/usr/lib/python2.7/pickle.py", line 858, in load dispatch[key](self) File "/usr/lib/python2.7/pickle.py", line 1133, in load_reduce value = func(*args) TypeError: __init__() takes exactly 3 arguments (2 given) ``` I'm sure this problem stems from a lack of knowledge on my part of how to make a class pickle-friendly. Interestingly, this problem doesn't occur when my class doesn't extend `Exception`.
Make `arg2` optional: ``` class MyException(Exception): def __init__(self, arg1, arg2=None): self.arg1 = arg1 self.arg2 = arg2 super(MyException, self).__init__(arg1) ``` The base `Exception` class defines a [`.__reduce__()` method](http://docs.python.org/2/library/pickle.html#object.__reduce__) to make the extension (C-based) type picklable and that method only expects *one* argument (which is `.args`); see the [`BaseException_reduce()` function in the C source](http://hg.python.org/cpython/file/2.7/Objects/exceptions.c#l182). The easiest work-around is making extra arguments optional. The `__reduce__` method *also* includes any additional object attributes beyond `.args` and `.message` and your instances are recreated properly: ``` >>> e = MyException('foo', 'bar') >>> e.__reduce__() (<class '__main__.MyException'>, ('foo',), {'arg1': 'foo', 'arg2': 'bar'}) >>> pickle.loads(pickle.dumps(e)) MyException('foo',) >>> e2 = pickle.loads(pickle.dumps(e)) >>> e2.arg1 'foo' >>> e2.arg2 'bar' ```
The current answers break down if you're using both arguments to construct an error message to pass to the parent Exception class. I believe the best way is to simply override the `__reduce__` method in your exception. The `__reduce__` method should return a two item tuple. The first item in the tuple is your class. The second item is a tuple containing the arguments to pass to your class's `__init__` method. ``` import pickle class MyException(Exception): def __init__(self, arg1, arg2): super().__init__(f'arg1: {arg1}, arg2: {arg2}') self.arg1 = arg1 self.arg2 = arg2 def __reduce__(self): return (MyException, (self.arg1, self.arg2)) # Create an exception instance and print info about it original = MyException('foo', 'bar') print(repr(original)) print(original.arg1) print(original.arg2) # Pickle and unpickle the exception, info printed should match above reconstituted = pickle.loads(pickle.dumps(original)) print(repr(reconstituted)) print(reconstituted.arg1) print(reconstituted.arg2) ``` More info about `__reduce__` [here](https://docs.python.org/2/library/pickle.html#object.__reduce__).
How to make a custom exception class with multiple init args pickleable
[ "", "python", "exception", "serialization", "" ]
Hi, I am now trying to join 2 tables, with only 2 rows from the second table joined to the first table. For example, I have the following 2 tables: ``` **Table A** Column1 | Column2 | Column3 A | B | 30 A | C | 50 A | D | 25 **Table B** Column4 | Column5 B | 35 B | 90 B | 65 B | 80 B | 85 B | 40 C | 100 C | 60 C | 70 C | 65 ``` Here is an example of my normal query: ``` select * from ( select * from A where Column1 = 'A' and (Column2 = 'B' or Column2 = 'C') order by Column2, Column3 ) A inner join ( select * from B where (Column4 = 'B' or Column4 = 'C') order by Column5 ) B on (A.Column2 = B.Column4 and ((B.Column5 - A.Column3) > 30)) ``` The result should look like: ``` **Result:** Column1 | Column2 | Column3 | Column4 | Column5 A | B | 30 | B | 65 A | B | 30 | B | 80 A | B | 30 | B | 85 A | B | 30 | B | 90 A | C | 50 | C | 100 ``` However, the result I want joins only 2 rows from the second table. The expected result should be: ``` **Expected Result:** Column1 | Column2 | Column3 | Column4 | Column5 A | B | 30 | B | 65 A | B | 30 | B | 80 A | C | 50 | C | 100 ``` Does anyone have an idea of how to create such a SQL statement? Thank you.
You could use row\_number() to limit the number of rows. The example assumes that `(Column1, Column2, Column3)` is unique. If table `A` has a primary key, use that instead. Ordering by `Column5` and keeping `rn <= 2` returns the two smallest qualifying rows per group, which matches the expected result in the question. ``` select * from ( select Column1 , Column2 , Column3 , Column4 , Column5 , row_number() over (partition by Column1, Column2, Column3 order by Column5) as rn from A join B on A.Column2 = B.Column4 where Column1 = 'A' and Column2 in ('B', 'C') and Column5 - Column3 > 30 ) SubQueryAlias where rn <= 2 ``` [See example at SQL Fiddle.](http://sqlfiddle.com/#!4/b865e/14/0)
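To see what the `row_number() ... partition by` trick is doing, here is the same "top 2 per group" logic sketched in plain Python (the tuples mirror the joined rows from the question after the `Column5 - Column3 > 30` filter; this only illustrates the ranking idea, not a replacement for doing it in SQL):

```python
from itertools import groupby

# (Column2, Column3, Column5) rows left after the join and the > 30 filter.
joined = [('B', 30, 65), ('B', 30, 80), ('B', 30, 85), ('B', 30, 90),
          ('C', 50, 100)]

top2 = []
for key, group in groupby(joined, key=lambda r: r[0]):  # "partition by" Column2
    ranked = sorted(group, key=lambda r: r[2])          # "order by" Column5
    top2.extend(ranked[:2])                             # keep row_number() <= 2

print(top2)  # [('B', 30, 65), ('B', 30, 80), ('C', 50, 100)]
```

Note that `groupby` requires its input to already be sorted by the grouping key, which the sample data is.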
A good start would be to write simpler SQL without the inline views: ``` select * from A inner join B on (A.Column2 = B.Column4) where A.Column1 = 'A' and A.Column2 in ('B','C') and (B.Column5 - A.Column3) > 30 ```
SQL: Inner join with top 2 rows with second table from on condition
[ "", "sql", "join", "conditional-statements", "" ]
How is it possible to arrange documents into a space (say multiple grids), so that the position in which they are placed contains information about how similar they are to other documents? I looked into K-means clustering, but it is a bit computationally intensive if the data is large. I'm looking for something like hashing the contents of the documents, so that they can fit in a large space, documents that are similar would have similar hashes, and the distance between them would be small. In this case, it would be easy to find documents similar to a given document, without doing much extra work. The result could be something similar to the picture below. In this case music documents are near film documents but far from documents related to computers. The box can be considered as the whole world of documents. ![enter image description here](https://i.stack.imgur.com/fQ174.jpg) Any help would be greatly appreciated. Thanks jvc007
One way to introduce a distance or similarity measure between documents is: * first encode your documents as vectors, e.g. using TF-IDF (see <https://en.wikipedia.org/wiki/Tf%E2%80%93idf>) * the scalar product between two vectors related to two documents gives you a measure of the similarity of the documents. The larger this value is, the higher the similarity. Using MDS (<http://en.wikipedia.org/wiki/Multidimensional_scaling>) on these similarities should help to visualize the documents in a two-dimensional plot.
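A minimal, dependency-free sketch of the TF-IDF / cosine-similarity step described above (the three tiny "documents" and their tokens are invented for illustration; in practice you would use a library such as scikit-learn's `TfidfVectorizer` and then feed the pairwise similarities to an MDS implementation):

```python
import math
from collections import Counter

# Three tiny made-up "documents" (token lists) purely for illustration.
docs = {
    "music1": "guitar drums concert album".split(),
    "music2": "guitar album band concert".split(),
    "comp1":  "cpu compiler kernel album".split(),
}

n_docs = len(docs)
df = Counter(t for words in docs.values() for t in set(words))  # document frequency

def tfidf(words):
    """tf-idf weight per term: tf(t) * log(N / df(t))."""
    tf = Counter(words)
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf}

def cosine(u, v):
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu, nv = norm(u), norm(v)
    return dot / (nu * nv) if nu and nv else 0.0

vecs = {name: tfidf(words) for name, words in docs.items()}
sim_music = cosine(vecs["music1"], vecs["music2"])
sim_cross = cosine(vecs["music1"], vecs["comp1"])
print(sim_music, sim_cross)  # shared vocabulary scores higher than disjoint vocabulary
```

Terms that occur in every document (here "album") get an IDF of zero and so contribute nothing to the similarity, which is exactly the behaviour you want for filler words.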
The problem of mapping high-dimensional data to a low-dimensional space while preserving similarity can be solved using a [Self-organizing map](https://en.wikipedia.org/wiki/Self-organizing_map) (SOM or Kohonen network). I have already seen some applications of it to documents. I don't know about any Python implementation (there might be one), but there is a good one for Matlab (the SOM toolbox).
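To get a feel for what a SOM does, here is a deliberately tiny, dependency-free sketch: a 1-D map of four nodes trained on two made-up 2-D clusters, with fixed deterministic initialisation (real uses would reach for the Matlab SOM toolbox mentioned above or a Python package, and documents would be high-dimensional feature vectors rather than 2-D points):

```python
import math

# Two made-up clusters of 2-D points; real documents would be
# high-dimensional vectors (e.g. TF-IDF), but the mechanics are identical.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]

# A 1-D map of 4 nodes, deterministically initialised along the diagonal.
weights = [[0.2, 0.2], [0.4, 0.4], [0.6, 0.6], [0.8, 0.8]]

def bmu(x):
    """Index of the best-matching unit (closest map node) for input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)             # decaying learning rate
    radius = 1.5 * (1 - epoch / 50) + 0.1   # shrinking neighbourhood radius
    for x in data:
        best = bmu(x)
        for i, w in enumerate(weights):
            h = math.exp(-((i - best) ** 2) / (2 * radius ** 2))  # neighbourhood kernel
            for d in range(len(w)):
                w[d] += lr * h * (x[d] - w[d])

print(bmu((0.0, 0.0)), bmu((1.0, 1.0)))  # the two clusters land on well-separated map nodes
```

After training, similar inputs map to the same or neighbouring nodes, which is exactly the "position encodes similarity" property the question asks for.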
Arranging documents in a grid in accordance with the content similarity
[ "", "python", "hash", "machine-learning", "data-mining", "document-classification", "" ]
I have an array: ``` X = [[5*, 0, 0, 0, 0, 0, 0, 0], [9*, 6, 0, 0, 0, 0, 0, 0], [4, 6*, 8, 0, 0, 0, 0, 0], [0, 7*, 1, 5, 0, 0, 0, 0], [9, 3, 3*, 4, 4, 0, 0, 0], [4, 5, 5*, 6, 7, 5, 0, 0], [4, 5, 6, 8*, 7, 7, 8, 0], [4, 7, 8, 9*, 7, 3, 9, 6]] ``` I want to select and append all the values that are marked by \*. The approach is basically to select the 0th element from the 0th and 1st rows, the 1st element from the 2nd and 3rd rows, and so on. The resulting set should be: ``` Result = ((X[0][0], (X[1][0]), (X[2][1], X[3][1]), (X[4][2], X[5][2]), (X[6][3], X[7][3])) ``` Which can be written as: ``` Result = ((X[n+0][n], (X[n+1][n]), (X[n+2][n+1], X[n+3][n+1]), (X[n+4][n+2], X[n+5][n+2]), (X[n+6][n+3], X[n+7][n+3])) Where n = 0 ``` How do I do that? I have applied this but it's not working: ``` Result = [] for a in X: Result.append([[[ a[i][j] ] for i in range(0,8)] for j in range(0,8)]) ``` But no results. Any guesses?
This will work if `X` has an even number of lists in it: ``` >>> [(X[2*i][i], X[2*i+1][i]) for i in range(len(X)//2)] [(5, 9), (6, 7), (3, 5), (8, 9)] ``` If you don't mind flattened lists, then this will work for `X` of any length: ``` >>> [lst[idx//2] for idx, lst in enumerate(X)] [5, 9, 6, 7, 3, 5, 8, 9] ```
Because of the numpy tag I thought I would add this: ``` import numpy as np X = np.array([[5 , 0, 0, 0, 0, 0, 0, 0], [9 , 6, 0, 0, 0, 0, 0, 0], [4, 6 , 8, 0, 0, 0, 0, 0], [0, 7 , 1, 5, 0, 0, 0, 0], [9, 3, 3 , 4, 4, 0, 0, 0], [4, 5, 5 , 6, 7, 5, 0, 0], [4, 5, 6, 8 , 7, 7, 8, 0], [4, 7, 8, 9 , 7, 3, 9, 6]]) i = np.array([0, 1, 2, 3, 4, 5, 6, 7]) j = np.array([0, 0, 1, 1, 2, 2, 3, 3]) result = X[i, j] print result # [5 9 6 7 3 5 8 9] ``` To generate i and j in the general case you can do something like: ``` n = 8 i = np.arange(n) j = np.arange(n) // 2 ```
Select specific elements from an array
[ "", "python", "numpy", "" ]
If I run the code: ``` connection = manager.connect("I2Cx") ``` The program crashes and reports a KeyError because I2Cx doesn't exist (it should be I2C). But if I do: ``` try: connection = manager.connect("I2Cx") except Exception, e: print e ``` It doesn't print anything for e. I would like to be able to print the exception that was thrown. If I try the same thing with a divide by zero operation it is caught and reported properly in both cases. What am I missing here?
If it's raising a KeyError with no message, then it won't print anything. If you do... ``` try: connection = manager.connect("I2Cx") except Exception as e: print repr(e) ``` ...you'll at least get the exception class name. A better alternative is to use multiple `except` blocks, and only 'catch' the exceptions you intend to handle... ``` try: connection = manager.connect("I2Cx") except KeyError as e: print 'I got a KeyError - reason "%s"' % str(e) except IndexError as e: print 'I got an IndexError - reason "%s"' % str(e) ``` There are valid reasons to catch all exceptions, but you should almost always re-raise them if you do... ``` try: connection = manager.connect("I2Cx") except KeyError as e: print 'I got a KeyError - reason "%s"' % str(e) except: print 'I got another exception, but I should re-raise' raise ``` ...because you probably don't want to handle `KeyboardInterrupt` if the user presses CTRL-C, nor `SystemExit` if the `try`-block calls `sys.exit()`.
I am using Python 3.6 and using a comma between Exception and e does not work. I need to use the following syntax (just for anyone wondering) ``` try: connection = manager.connect("I2Cx") except KeyError as e: print(e.message) ```
Catch KeyError in Python
[ "", "python", "try-catch", "except", "" ]
I am looking to know how I can edit my code in order to encrypt users' passwords. At the moment the user fills in an HTML form which submits to `customerRegister.php`, which goes through a series of validations before submitting the query. customerRegister.php ``` registerUser($_POST['firstName'], $_POST['lastName'], $_POST['username'], $_POST['houseNo'], $_POST['StreetName'], $_POST['town'], $_POST['postCode'], $_POST['emailAddress'], $_POST['phoneNumber'], $_POST['password'], $_POST['conPassword'],$_POST['carRegistration'],$_POST['carMake'],$_POST['carModel'],$_POST['carYear'],$_POST['carEngineSize'],$_POST['carFuel']); function registerUser($firstName, $lastName, $username, $houseNo, $streetName, $town, $postCode, $emailAddress, $phoneNumber, $password, $conPassword, $registration, $carMake, $carModel, $carYear, $carEngineSize, $carFuel) { $registerQuery = new UserLoginQueries(); /******************************** SERIES OF VALIDATIONS ********************************/ $registerQuery->insertUser($firstName, $lastName, $username, $houseNo, $streetName, $town, $postCode, $emailAddress, $phoneNumber, $password); ``` Those details are then passed to userLoginQueries.php where the query is executed. 
userLoginQueries.php ``` public function insertUser($custFirstName, $custLastName, $username, $houseNo, $streetName, $town, $postCode, $email, $number, $pass) { $sth = $this->conn->prepare("INSERT INTO `customer`(`CustFirstName`, `CustLastName`, `HouseNo`, `StreetName`, `Town`, `PostCode`, `EmailAddress`, `PhoneNumber`, `Username`, `Password`) VALUES (?,?,?,?,?,?,?,?,?,?)"); $sth->bindValue (1, $custFirstName); $sth->bindValue (2, $custLastName); $sth->bindValue (3, $houseNo); $sth->bindValue (4, $streetName); $sth->bindValue (5, $town); $sth->bindValue (6, $postCode); $sth->bindValue (7, $email); $sth->bindValue (8, $number); $sth->bindValue (9, $username); $sth->bindValue (10, $pass); $sth->execute(); } ``` When the user enters their login information the following query is ran: ``` public function queryLogin($username, $password) { $sth = $this->conn->prepare("SELECT * FROM customer WHERE Username = ? AND Password = ? AND UserType = 'Customer'"); $sth->bindValue (1, $username); $sth->bindValue (2, $password); $sth->execute(); $count = $sth->rowCount(); return $count; } ``` How can I modify my code so that the users password is encrypted?
Please take a look at this answer: [How do you use bcrypt for hashing passwords in PHP?](https://stackoverflow.com/questions/4795385/how-do-you-use-bcrypt-for-hashing-passwords-in-php) , you would just have to call the `$hash = $bcrypt->hash('password');` when registering the user and then save the hashed version of the password into the database. You can then verify the user by using `$isGood = $bcrypt->verify('password', $hash);`
You should really store your password with a hashing algorithm <http://en.wikipedia.org/wiki/Hash_function> (a one-way algorithm) using salt; see <http://en.wikipedia.org/wiki/Salt_(cryptography)> One of the fundamental reasons for this is that if your database were to be compromised, the attacker would not know what the actual password is. A simple hashing algorithm is SHA-1. You can see how to use it here <http://php.net/manual/en/function.sha1.php> **Hashing using salt** The following may help: ``` $sth->bindValue (2, sha1($password . "random private stuff here as salt")); ``` Next time you want to verify a password, just compare the entered password and your salt with the actual value and salt. This will let you know whether the password is correct without ever having to save the user's password in plaintext. **Why use salt?** Well, although typically one-way functions cannot be reversed, they can be brute-forced. This may mean that even though you're using a hashing algorithm, some passwords may be vulnerable.
MySQL - how to encrypt password
[ "", "mysql", "sql", "authentication", "encryption", "" ]
I have a table with a column of image type, the table has some rows but all the rows haven't had any image yet, they are all null. To test the CASE WHEN NULL, I've tried this and it gave a strange result: ``` SELECT CASE myImageColumn WHEN NULL THEN 0 ELSE 1 END FROM myTable ``` All the returned rows were in a column of 1's (I thought 0's). What is wrong here? Your help would be highly appreciated! Thank you!
You can't compare with `NULL` like that; you should try: ``` SELECT CASE WHEN myImageColumn IS NULL THEN 0 ELSE 1 END FROM myTable ```
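The difference is easy to demonstrate with Python's built-in `sqlite3` module, reproducing the column of 1's the question observed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myImageColumn BLOB)")
conn.execute("INSERT INTO myTable VALUES (NULL)")

# The question's form: myImageColumn = NULL evaluates to NULL, never true, so ELSE wins.
wrong = conn.execute(
    "SELECT CASE myImageColumn WHEN NULL THEN 0 ELSE 1 END FROM myTable").fetchone()[0]

# IS NULL is the correct way to test for NULL.
right = conn.execute(
    "SELECT CASE WHEN myImageColumn IS NULL THEN 0 ELSE 1 END FROM myTable").fetchone()[0]

print(wrong, right)  # 1 0
```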
Use a different form of `CASE` instead: ``` SELECT CASE WHEN myImageColumn IS NULL THEN 0 ELSE 1 END FROM myTable ``` Two useful links: * <http://www.sqlite.org/nulls.html> * <http://www.sqlite.org/lang_expr.html>
CASE WHEN NULL gives wrong result in SQLite?
[ "", "sql", "database", "sqlite", "null", "" ]
I am trying to write a function that will look at a list of strings and determine if the next string in the list is a sub-string of the preceding string. So if I had a list of `['Ryan', 'Rya', 'Ry', 'Testing', 'Test']` I will get back `['Ryan', 'Rya', 'Ry', 'Test']`. I'm not really sure where to even start here.
You can accomplish this with a list comprehension ``` def find_results(seq): #I'm sure you can name this function better return [seq[0]] + [current for previous, current in zip(seq, seq[1:]) if current in previous] ``` `seq[1:]` is your whole list except the first element `zip(a, b)` generates pairs of elements for each iterable you pass it. In this case, the preceeding string, and the current string. The `in` operator will test if one string is inside of another. `"test" in "testing"` is true The comprehension says, for each pair of strings (current and previous), construct a list of all the current strings if the current string is a substring of the previous string
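Running the function above on the question's sample list (redefined here so the snippet is self-contained) gives exactly the expected output:

```python
def find_results(seq):
    # Keep the first item, then keep each item that is a substring of its predecessor.
    return [seq[0]] + [current for previous, current in zip(seq, seq[1:])
                       if current in previous]

words = ['Ryan', 'Rya', 'Ry', 'Testing', 'Test']
print(find_results(words))  # ['Ryan', 'Rya', 'Ry', 'Test']
```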
You could do something like this: ``` def f(lst): yield lst[0] for i in range(1, len(lst)): prev_string = lst[i - 1] curr_string = lst[i] if curr_string in prev_string: yield curr_string ``` `f` will be a generator, so to turn it into a list, you pass it to `list`: ``` In [36]: f(['Ryan', 'Rya', 'Ry', 'Testing', 'Test']) Out[36]: <generator object f at 0x02F75F08> In [37]: list(f(['Ryan', 'Rya', 'Ry', 'Testing', 'Test'])) Out[37]: ['Ryan', 'Rya', 'Ry', 'Test'] ```
How do you find out if the string is a component of the preceding string
[ "", "python", "string", "list", "function", "" ]
So I have four lines of code ``` seq= 'ATGGAAGTTGGATGAAAGTGGAGGTAAAGAGAAGACGTTTGA' OR_0 = re.findall(r'ATG(?:...){9,}?(?:TAA|TAG|TGA)',seq) ``` Let me explain what I am attempting to do first . . . I'm sorry if this is confusing but I am going to try my best to explain it. So I'm looking for sequences that START with *'ATG'* followed by units of 3 of any word char [e.g. 'GGG','GTT','TTA',etc] until it encounters either a *'TAA','TAG' or 'TGA'* I also want them to be at least 30 characters long. . . hence the {9,}? This works to some degree but if you notice in *seq* that there is *ATG* GAA GTT GGA *TGA* AAG TGG AGG *TAA* AGA GAA GAC GTT *TGA* So in this case, it should be finding 'ATGGAAGTTGGATGA' if it starts with the first 'ATG' and goes until the next *'TAA','TAG' or 'TGA'* HOWEVER when you run the OR\_0 line of code, it spits back out the entire seq string. I don't know how to make it only consider the first 'TAA','TAG' or 'TGA' following the first 'ATG'. If an 'ATG' is followed by another 'ATG' when read in units of 3 then that is alright; it should NOT start over, but if it encounters a 'TAA','TAG' or 'TGA' when read in units of 3 it should stop. My question: **why is re.findall finding the longest sequence of 'ATG'xxx-xxx-['TAA','TAG' or 'TGA'] instead of the first occurrence of 'TAA','TAG' or 'TGA' after an ATG separated by word characters in units of 3?** Once again, I apologize if this is confusing but it's messing with multiple data sets that I have based on this initial line of text and I'm trying to find out why.
If you want your regex to stop matching at the first `TAA|TAG|TGA`, but still only succeed if there are at least nine three-letter chunks, the following may help:

```
>>> import re
>>> regexp = r'ATG(?:(?!TAA|TAG|TGA)...){9,}?(?:TAA|TAG|TGA)'
>>> re.findall(regexp, 'ATGAAAAAAAAAAAAAAAAAAAAAAAAAAATAG')
['ATGAAAAAAAAAAAAAAAAAAAAAAAAAAATAG']
>>> re.findall(regexp, 'ATGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATAG')
['ATGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATAG']
>>> re.findall(regexp, 'ATGAAATAGAAAAAAAAAAAAAAAAAAAAATAG')
[]
```

This uses a negative lookahead `(?!TAA|TAG|TGA)` to ensure that a three-character chunk is *not* a `TAA|TAG|TGA` before it matches the three-character chunk. Note though that a `TAA|TAG|TGA` that does *not* fall on a three-character boundary will still successfully match:

```
>>> re.findall(regexp, 'ATGAAAATAGAAAAAAAAAAAAAAAAAAAATAG')
['ATGAAAATAGAAAAAAAAAAAAAAAAAAAATAG']
```
If the length is not a requirement, then it's pretty easy:

```
>>> import re
>>> seq = 'ATGGAAGTTGGATGAAAGTGGAGGTAAAGAGAAGACGTTTGA'
>>> regex = re.compile(r'ATG(?:...)*?(?:TAA|TAG|TGA)')
>>> regex.findall(seq)
['ATGGAAGTTGGATGA']
```

Anyway, I believe, according to your explanation, that your previous regex is actually doing what you want: searching for matches **of at least 30 characters** that start in `ATG` and end in `TGA`. In your question you first state that you need matches of at least 30 characters, and hence you put the `{9,}?`, but after that you expect to match *any* match. You cannot have both — choose one. If length is important, then keep the regex you already have, and the result you are getting *is correct*.
Why is re.findall not being specific in finding triplet items in string. Python
[ "python", "regex", "string", "findall" ]
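A quick way to sanity-check the lookahead pattern from the answers above is to run it against the question's sample sequence and inspect the match spans; the sketch below drops the `{9,}` length requirement so the stop-at-first-stop-codon behaviour is visible on the short example.

```python
import re

seq = 'ATGGAAGTTGGATGAAAGTGGAGGTAAAGAGAAGACGTTTGA'

# Tempered repetition: each three-character codon must not itself
# be a stop codon, so matching ends at the first in-frame stop.
orf_re = re.compile(r'ATG(?:(?!TAA|TAG|TGA)...)*?(?:TAA|TAG|TGA)')

for m in orf_re.finditer(seq):
    print(m.start(), m.group())
```

On the sample sequence this reports a single match, `ATGGAAGTTGGATGA`, ending at the first in-frame `TGA` rather than running on to the end of the string.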
I am trying to write a Python 3 recursive function which will tell me if an integer is in a nested list. I am not sure how I can make my code return `True` if it finds it in the list, and `False` if it doesn't find it in the list. When I print the result of my for loop, I get a bunch of

```
false
false
false
false
true
false
false
false
```

etc. But, it returns False because the last call was false, even though I want it to return true. How can I fix this? Here is my code:

```
def nestedListContains(NL, target):
    if( isinstance(NL, int) ):
        return NL
    for i in range(0, len(NL)):
        return ( nestedListContains(NL[i], target) == target )
    return False
```

And here is how I'm calling it:

```
print(nestedListContains([[3], [4,5,7], [[[8]]]], 8))
```

EDIT: This seems to be working for me, but it seems rather ghetto:

```
def nestedListContains(NL, target):
    if( isinstance(NL, int) ):
        if( NL == target ):
            return 1
        return 0
    x = 0
    for n in NL:
        x += nestedListContains(n, target) == 1
    return x != 0
```
You `return` the result regardless of whether it's `True` or not. You could do something like this:

```
def nestedListContains(NL, target):
    if isinstance(NL, int):
        return NL == target
    for n in NL:
        result = nestedListContains(n, target)
        if result:
            return result
    return False
```
My attempt:

```
def contains(lst, target):
    if isinstance(lst, int):
        return lst == target
    return any(contains(x, target) for x in lst)
```
Python recursion, how come my function doesn't work?
[ "python", "python-3.x" ]
I have a view `myview` which retrieves information in the following manner:

```
ID | ATTRNAME | ATTRVAL
1  | attr1    | a
2  | attr2    | b
3  | attr3    | c
4  | attr3    | d
.. | ..       | ..
.. | ..       | ..
```

I need the output as follows:

```
ATTRNAME | ATTRVAL
attr1    | a
attr2    | b
attr3    | Multiple Values
```

Where, if there is a single value for an attribute (ATTRNAME), it should be displayed, and if there are multiple values for an attribute (e.g., attr3), the text "Multiple Values" should be displayed.

I have tried:

```
SELECT DISTINCT attrname,
       CASE cnt WHEN 1 THEN TO_CHAR(attrval) ELSE 'Multiple Values' END AS attrval_rev
FROM myview
JOIN (SELECT attrname, COUNT(attrval) AS cnt FROM myview GROUP BY attrname)
USING (attrname)
```

This works, but it is not the right kind of solution, as it is taking over 8 minutes to execute for about 11,000 rows. Please help!
There is a much simpler way:

```
SELECT ATTRNAME,
       CASE WHEN COUNT(*) = 1 THEN MIN(attrval) ELSE 'Multiple Values' END
FROM myview
GROUP BY ATTRNAME
```

Let me know if you are still having performance issues.
Try this:

```
select attrname,
       (case when min(attrval) = max(attrval) then min(attrval)
             when min(attrval) is null then null
             else 'Multiple Values'
        end)
from myview
group by attrname
```

Note that this ignores `NULL` as a valid value. You can also do the first test as `when count(distinct attrval) = 1`, but `count(distinct)` is a bit more expensive than `min()` and `max()`.
Show string constant for a group having multiple values
[ "sql", "oracle", "group-by", "aggregate" ]
I am wondering what the comma does in this code:

```
line, =
```

The following example shows the behavior:

```
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)
print("First:")
print(line)
print("Second:")
line = ax.plot([], [], lw=2)
print(line)
```

Result:

```
First:
Line2D(_line0)
Second:
[<matplotlib.lines.Line2D object at 0xb137c4c>]
```

It really becomes confusing when I try to use the line variable, and I don't know whether to use the trailing comma or not:

```
def init():
    line.set_data([], [])
    return line,
```

Or is there a better way to do this which avoids the comma?
It's just unpacking a 1-tuple. e.g.:

```
line, = ('foo',)  # same as line = 'foo' in this example
```

compare with

```
line1, line2 = ('foo', 'bar')  # unpack a 2-tuple
```

etc.
It's helpful for unpacking tuples. See the different results here:

```
a = (1,)
print(a)   # prints (1,)

a, = (1,)
print(a)   # prints 1
```
Mystical comma in a Python expression
[ "python" ]
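A few more unpacking forms that make the trailing comma less surprising; this is a generic Python 3 sketch, not matplotlib-specific (the starred form requires Python 3):

```python
line, = [2]                 # unpack a 1-element sequence into one name
(a,) = ['x']                # parenthesised spelling of the same target
first, *rest = [1, 2, 3]    # starred unpacking collects the remainder

print(line, a, first, rest)
```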
I have a string `s` containing:

```
Hello {full_name} this is my special address named {address1}_{address2}.
```

I am attempting to match all instances of strings that are contained within the curly brackets. Attempting:

```
matches = re.findall(r'{.*}', s)
```

gives me

```
['{full_name}', '{address1}_{address2}']
```

but what I am actually trying to retrieve is

```
['{full_name}', '{address1}', '{address2}']
```

How can I do that?
```
>>> import re
>>> text = 'Hello {full_name} this is my special address named {address1}_{address2}.'
>>> re.findall(r'{[^{}]*}', text)
['{full_name}', '{address1}', '{address2}']
```
Try a non-greedy match:

```
matches = re.findall(r'{.*?}', s)
```
python regex: match to the first "}"
[ "python", "regex" ]
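The two patterns above behave the same on the question's string but differ on input with nested or stray braces; a small comparison (the test string is illustrative) shows why the negated character class is usually the safer choice:

```python
import re

s = 'start {outer {inner} tail}'

lazy = re.findall(r'{.*?}', s)        # stops at the first '}', even across an inner '{'
negated = re.findall(r'{[^{}]*}', s)  # only matches brace pairs containing no braces

print(lazy)     # ['{outer {inner}']
print(negated)  # ['{inner}']
```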
I have a piece of code which is not in a function, say

```
x = 5
y = 10
if x > 5:
    print("stopping")
```

What can I put after the print statement to stop the code from running further? `sys.exit()` works, but raises an error that I don't want in the program. I want it to quietly stop the code as if it had reached the end of the main loop. Thanks.
As JBernardo pointed out, **`sys.exit()` raises an exception**. This exception is [SystemExit](http://docs.python.org/2/library/exceptions.html#exceptions.SystemExit). When it is not handled by the user code, the interpreter exits cleanly (a debugger debugging the program can catch it and keep control of the program, thanks to this mechanism, for instance)—as opposed to `os._exit()`, which is an unconditional abortion of the program.

This exception is **not caught by `except Exception:`**, because `SystemExit` does not inherit from `Exception`. However, it *is caught* by a naked `except:` clause. So, if your program sees an exception, you may want to catch fewer exceptions by using `except Exception:` instead of `except:`. That said, catching all exceptions is discouraged, because this might hide real problems, so avoid it if you can, by making the `except` clause (if any) more specific.

My understanding of why **this `SystemExit` exception mechanism is useful** is that the user code goes through any `finally` clause *after* a `sys.exit()` found in an `except` clause: files can be closed cleanly, etc.; then the interpreter catches any `SystemExit` that was not caught by the user and exits for good (a debugger would instead catch it so as to keep the interpreter running and obtain information about the program that exited).
You can do what you're looking for by doing this:

```
import os
os._exit(1)
```
Stop Python code without an error
[ "python", "python-3.3" ]
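The exception-hierarchy point in the accepted answer is easy to verify without actually exiting, since `sys.exit()` just raises `SystemExit`:

```python
import sys

caught_by_exception = False
try:
    try:
        sys.exit(0)
    except Exception:      # does NOT catch SystemExit
        caught_by_exception = True
except SystemExit:         # a bare `except:` would catch it too
    pass

# SystemExit inherits from BaseException, not Exception
print(caught_by_exception, issubclass(SystemExit, Exception))
```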
In this case, "mostly" means fewer than 5 elements are non-zero in a column. The matrix is a 2d ndarray.

Sample Data:

```
a = np.array([[1,1,2,1,1],
              [1,1,0,1,0],
              [1,1,0,1,0],
              [1,1,0,3,0],
              [1,1,0,3,0],
              [1,1,1,5,3],
              [1,1,0,1,0],
              [1,1,0,1,0],
              [1,1,4,3,0],
              [1,1,0,4,0],
              [1,1,0,5,0],
              [1,1,0,0,0]])
```

Output:

```
a = np.array([[1,1,1],
              [1,1,1],
              [1,1,1],
              [1,1,3],
              [1,1,3],
              [1,1,5],
              [1,1,1],
              [1,1,1],
              [1,1,3],
              [1,1,4],
              [1,1,5],
              [1,1,0]])
```
How about:

```
>>> a[:, (a != 0).sum(axis=0) >= 5]
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 3],
       [1, 1, 3],
       [1, 1, 5],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 3],
       [1, 1, 4],
       [1, 1, 5],
       [1, 1, 0]])
```

or

```
>>> a[:, np.apply_along_axis(np.count_nonzero, 0, a) >= 5]
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 3],
       [1, 1, 3],
       [1, 1, 5],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 3],
       [1, 1, 4],
       [1, 1, 5],
       [1, 1, 0]])
```

In the past I've found `np.count_nonzero` to be much faster than the `sum` trick, but here -- probably because of the need to use `np.apply_along_axis` -- that version is instead much slower, at least for this `a`. Some other tests showed the same even for larger matrices, but YMMV.
Ok, I've figured it out:

```
np.delete(a, np.nonzero((a==0).sum(axis=0) > 5), axis=1)
```
Delete columns of a matrix that are mostly zero
[ "python", "numpy" ]
I asked a question about summing nodes' values: [sum some xml nodes values in sql server 2008](https://stackoverflow.com/questions/16144494/sum-some-xml-nodes-values-in-sql-server-2008?)

Please consider this code:

```
Declare @xml xml
set @xml='<Parent ID="p">
<Child ID="1">1000000000</Child >
<Child ID="2">234650</Child >
<Child ID="3">0</Child >
</Parent >'
Select @xml.value('sum(/Parent[@ID="p"]/Child)','bigint') as Sum
```

If you execute this, it returns this error:

> Msg 8114, Level 16, State 5, Line 8
> Error converting data type nvarchar to bigint.

The problem is that the sum evaluates to this value: `1.00023465E9`. If I change the above query this way, it works:

```
Declare @xml xml
set @xml='<Parent ID="p">
<Child ID="1">1000000000</Child >
<Child ID="2">234650</Child >
<Child ID="3">0</Child >
</Parent >'
Select @xml.value('sum(/Parent[@ID="p"]/Child)','float') as Sum
```

Why does SQL Server do this?
SQL Server has a problem converting a value in scientific notation from a string to an integer, which is what happens when you run your XPath query; however, it can do this for `float`. You could write your query like this:

```
select @xml.value('sum(/Parent[@ID = "p"]/Child) cast as xs:long?', 'bigint')
```
Try this one -

```
DECLARE @xml XML

SELECT @xml='<Parent ID="p">
<Child ID="1">1000000000</Child >
<Child ID="2">234650</Child >
<Child ID="3">0</Child >
</Parent >'

SELECT @xml.value('sum(for $r in /Parent[@ID="p"]/Child return xs:int($r))', 'bigint')
```

**UPDATE:**

```
DECLARE @xml XML

SELECT @xml='<Parent ID="p">
<Child ID="1">100000000000000</Child >
<Child ID="2">234650</Child >
<Child ID="3">0</Child >
</Parent >'

SELECT @xml.value('sum(for $r in /Parent[@ID="p"]/Child return xs:decimal($r))', 'bigint')
```

**UPDATE 2:**

```
DECLARE @xml XML

SELECT @xml='<Parent ID="p">
<Child ID="1">100000000000000.6</Child >
<Child ID="2">234650</Child >
<Child ID="3">0</Child >
</Parent >'

SELECT @xml.value('sum(for $r in /Parent[@ID="p"]/Child return xs:decimal($r))', 'decimal(18,2)')
```
Strange behavior of SQL Server when summing nodes' values in XML
[ "sql", "sql-server", "xml", "sql-server-2008" ]
I've written the following function that takes a tab-delimited file (as a string) and turns it into a dictionary with an integer as the key and a list of two floats as the value:

```
def parseResults(self, results):
    """ Build a dictionary of the SKU (as key), current UK price and current Euro price """
    lines = results.split('\n')
    individual_results = []
    for i in range(1, len(lines)-1):
        individual_results.append(lines[i].split('\t'))
    results_dictionary = {}
    for i in range(len(individual_results)):
        results_dictionary[int(individual_results[i][0])] = [float(individual_results[i][1]), float(individual_results[i][2])]
    return results_dictionary
```

I've been reading about using list comprehension and also dictionary comprehension, but I don't really know what the best way to build this is. I think I can simplify the first list build using:

```
individual_results = [results.split('\t') for results in lines[1:]]
```

but I don't then know the best way to create the dictionary. I've got the feeling this might be possible in a neat way without even creating the intermediate list.

Thanks, Matt
Like this:

```
import csv
import StringIO

results = "sku\tdelivered-price-gbp\tdelivered-price-euro\tid\n32850238\t15.53\t35.38\t258505\n"
data = list(csv.DictReader(StringIO.StringIO(results), delimiter='\t'))
print(data)
```

Output:

```
[{'sku': '32850238', 'delivered-price-euro': '35.38', 'delivered-price-gbp': '15.53', 'id': '258505'}]
```

Of course, if you can read from an actual file, you can skip the StringIO part. To build the type of dictionary you want, you would do this:

```
data = {}
for entry in csv.DictReader(StringIO.StringIO(results), delimiter='\t'):
    data[entry['sku']] = [entry['delivered-price-gbp'], entry['delivered-price-euro']]
```

Or even as a dictionary comprehension:

```
import csv
import StringIO

results = "sku\tdelivered-price-gbp\tdelivered-price-euro\tid\n32850238\t15.53\t35.38\t258505\n10395850\t35.21\t46.32\t3240582\n"
data = {entry['sku']: [entry['delivered-price-gbp'], entry['delivered-price-euro']] for entry in csv.DictReader(StringIO.StringIO(results), delimiter='\t')}
print(data)
```

But that's now getting highly difficult to read. The output would in those two last cases be:

```
{'32850238': ['15.53', '35.38'], '10395850': ['35.21', '46.32']}
```
Use the CSV module from the standard library; it has a reader that reads straight into a dictionary: [csv.DictReader](http://docs.python.org/2/library/csv.html#csv.DictReader).
Building a dictionary from a tab delimited file in a pythonic way
[ "python", "list", "dictionary", "python-2.7" ]
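On Python 3 the same `csv.DictReader` idea works with `io.StringIO` (the `StringIO` module in the answer above is Python 2), and the int/float conversions the question asked for can go straight into a dict comprehension; the short column names here are illustrative stand-ins for the question's headers:

```python
import csv
import io

results = ("sku\tgbp\teuro\n"
           "32850238\t15.53\t35.38\n"
           "10395850\t35.21\t46.32\n")

reader = csv.DictReader(io.StringIO(results), delimiter='\t')
data = {int(row['sku']): [float(row['gbp']), float(row['euro'])]
        for row in reader}

print(data)  # {32850238: [15.53, 35.38], 10395850: [35.21, 46.32]}
```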
I have example data:

```
SMMP022011304196
SMMP022011304199
SMMP022011304197
SMMP022011304193
SMMP022011304195
SMMP022011304198
SMMP022011304192
SMMP0220113041910
SMMP022011304191
SMMP022011304194
```

If I use `SELECT myfield from mytable order by myfield DESC`, I get a result like this:

```
SMMP022011304199
SMMP022011304198
SMMP022011304197
SMMP022011304196
SMMP022011304195
SMMP022011304194
SMMP022011304193
SMMP022011304192
SMMP0220113041910
SMMP022011304191
```

Please help me make the result like this:

```
SMMP0220113041910
SMMP022011304199
SMMP022011304198
SMMP022011304197
SMMP022011304196
SMMP022011304195
SMMP022011304194
SMMP022011304193
SMMP022011304192
SMMP022011304191
```
Just sort by the LENGTH of the field first, and then by the value itself for values with equal length:

```
SELECT myfield from mytable order by LENGTH(myfield) DESC, myfield DESC
```

See the [SQLFiddle demo](http://sqlfiddle.com/#!2/cffe5/3).
This is working fine. The code below works for SQL Server 2008 R2:

```
SELECT myfield
FROM mytable
ORDER BY LEN(myfield) DESC, myfield DESC
```
select query char numeric id to order by desc
[ "mysql", "sql", "sql-order-by", "natural-sort" ]
I have a holiday table, and based on it I need to return the last working date.

```
Table_holiday
Id  Date        Text
1   2013-03-29  Good Friday
2   2013-05-01  Maharashtra day
3   2013-05-02  Holiday
```

When I execute my `date_recursive` function, it should check the table and return the `last_business` date.

For example, if I execute it with `@date datetime = '2013-03-29'`, it should return the last working date `'2013-03-28'`, as that is the last working date. Any help is appreciated; I'm a newbie in SQL.
Please try:

```
DECLARE @Table_holiday as TABLE(id int, [date] datetime, [Text] nvarchar(50))
insert into @Table_holiday values(1, '2013-03-29', 'Good Friday')
insert into @Table_holiday values(2, '2013-05-01', 'Maharashtra day')
insert into @Table_holiday values(3, '2013-05-02', 'Holiday')

declare @date datetime
set @date ='2013-03-29'

;with T(dt) as (
    select @date
    union all
    select T1.[date]
    from T
    inner join @Table_holiday T1 on T1.[date]=T.dt-1
)
select min(dt)-1 from T
```
Assuming ANY date **not** in your holiday table is a working date, you can search backwards until you find a non-holiday.

```
;with cte(adate) as (
    select @date
    from table_holiday
    where @date = Date
    union all
    select h.Date
    from cte
    join table_holiday h on dateadd(d,-1,cte.adate) = h.Date
)
select isnull((select dateadd(d,-1,min(adate)) from cte), @date);
```
Need to write recursive date function in sql?
[ "sql", "sql-server", "t-sql", "recursive-query" ]
I'm trying to do a simple VB6 to C translator to help me port an open source game to the C language. I want to be able to get "NpcList[NpcIndex]" from "With Npclist[NpcIndex]" using regex, and to replace it everywhere it has to be replaced. ("With" is used as a macro in VB6 that adds Npclist[NpcIndex] wherever it is needed until it finds "End With".)

Example:

```
With Npclist[NpcIndex]
    .goTo(245)   <-- it should be replaced with Npclist[NpcIndex].goTo(245)
End With
```

Is it possible to use regex to do the job? I've tried using a function to perform another regex replace between the "With" and the "End With", but I can't know the text the "With" is replacing (Npclist[NpcIndex]).

Thanks in advance
I personally wouldn't trust any single-regex solution to get it right on the first time, nor feel like debugging it. Instead, I would parse the code line-by-line and cache any `With` expression, using it to replace any `.` directly preceded by whitespace or by any type of bracket (add use-cases as needed):

`(?<=[\s[({])\.` - positive lookbehind for any character from the set + escaped literal dot

`(?:(?<=[\s[({])|^)\.` - use this non-capturing alternatives list if a to-be-replaced `.` can occur at the beginning of a line

```
import re

def convert_vb_to_c(vb_code_lines):
    c_code = []
    current_with = ""
    for line in vb_code_lines:
        if re.search(r'^\s*With', line) is not None:
            current_with = line[5:] + "."
            continue
        elif re.search(r'^\s*End With', line) is not None:
            current_with = "{error_outside_with_replacement}"
            continue
        line = re.sub(r'(?<=[\s[({])\.', current_with, line)
        c_code.append(line)
    return "\n".join(c_code)

example = """
With Npclist[NpcIndex]
    .goTo(245)
End With
With hatla
    .matla.tatla[.matla.other] = .matla.other2
    dont.mind.me(.do.mind.me)
    .next()
End With
"""

# use file_object.readlines() in real life
print(convert_vb_to_c(example.split("\n")))
```
This may do what you need in Python 2.7. I'm assuming you want to strip out the `With` and `End With`, right? You don't need those in C.

```
>>> import re
>>> search_text = """
... With Np1clist[Npc1Index]
...     .comeFrom(543)
... End With
...
... With Npc2list[Npc2Index]
...     .goTo(245)
... End With"""
>>>
>>> def f(m):
...     return '{0}{1}({2})'.format(m.group(1), m.group(2), m.group(3))
...
>>> regex = r'With\s+([^\s]*)\s*(\.[^(]+)\(([^)]+)\)[^\n]*\nEnd With'
>>> print re.sub(regex, f, search_text)

Np1clist[Npc1Index].comeFrom(543)

Npc2list[Npc2Index].goTo(245)
```
Python complex regex replace
[ "python", "regex", "replace" ]
Is there a way to set a variable to a string, and then use that variable to call a function from a class with the same name as that variable? For example:

```
class Foo(object):
    def printer(self):
        print "Hello."

Bar = Foo()
var = 'Bar'
```

When I then try to call var.printer(), I get the error:

```
AttributeError: 'str' object has no attribute 'printer'
```

Which makes sense, but is there any way to call a class function in that manner, by using a variable with the same name as the class?

The reason I want to do this is so that I can have a set of individual variables, be able to adjust those variables, and then combine them into a string and call a function from a class with the same name as the string. For example:

```
from random import randint

x = 0
y = 0

class c00(object):
    def printname(self):
        print "c00"

class c01(object):
    def printname(self):
        print "c01"

choose_class = randint(0, 2)
if choose_class == 0:
    xstr = str(x)
    ystr = str(y)
    string = 'c' + xstr + ystr
    string.printname()
else:
    y += 1
    xstr = str(x)
    ystr = str(y)
    string = 'c' + xstr + ystr
    string.printname()
```

Is there any way to make this work?
You can get a reference to the "Bar" object from its name as a string via the mapping returned by `locals` (or `globals`, depending on where `Bar` is defined):

```
locals()['Bar'].printer()
```

As a bit of unsolicited advice: usually if you're doing this, it's best to put `'Bar'` in a dictionary explicitly in the first place rather than needing to rely on `locals`/`globals`/`vars`.
Instead of putting the objects into individual variables, put them into a dict:

```
chooser = {'c00': c00(), 'c01': c01()}
chooser['c' + xstr + ystr].printname()
```
Setting a variable to the name of a class and calling a function from that class
[ "python" ]
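A dictionary-dispatch version of the random-class example from the question, avoiding `locals()` entirely; the class names are adapted to Python 3 here, and the mapping is built explicitly, as the accepted answer recommends:

```python
import random

class C00:
    def printname(self):
        return "c00"

class C01:
    def printname(self):
        return "c01"

# name -> instance mapping, built explicitly instead of via locals()
instances = {'c00': C00(), 'c01': C01()}

name = 'c0{}'.format(random.randint(0, 1))
result = instances[name].printname()
print(name, result)
```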
I seem to do this a lot (whether or not I should be is perhaps another topic) in my Python code:

```
the_list = get_list_generator()
# So `the_list` is a generator object right now

# Iterate the generator, pulling the list into memory
the_list = list(the_list)
```

When doing arithmetic assignments, we have shorthands like such:

```
the_number += 1
```

So, is there some way to accomplish the same shorthand when using a function for assignment? I don't know if there is a built-in that does this, or if I need to define a custom operator (I have never done that), or some other way that ultimately leads to *cleaner code* (I promise I will only use it for a generic type cast).

```
# Maybe using a custom operator?
the_list @= list()
# Same as above: `the_list` was a generator, but is a list after this line
```

**Edit:** I failed to mention originally: this happens to me most often in interactive mode (thus why I wish to cut down required typing). I will try to index an iterator, `gen_obj[3]`, get an error, and then have to cast it. As suggested, this is probably the best, but ultimately not quite what I am looking for:

```
the_list = list(get_list_generator())
```
There isn't a syntactic shortcut for converting an iterator into a list. So just running `list(it)` is the usual practice.

If your need is only to inspect the result, use the *take()* recipe from the itertools module:

```
def take(n, iterable):
    "Return first n items of the iterable as a list"
    return list(islice(iterable, n))
```

That recipe works especially well when the underlying iterator is lengthy, infinite, or expensive to compute.
**No.**

Augmented assignment works only by combining **operators** with assignment. `list(...)` is a **function call** and not an operator.

You can find a list of possible augmented assignments [here](http://docs.python.org/2/reference/simple_stmts.html#grammar-token-augmented_assignment_stmt).

If you want to avoid doing two assignments, simply call `list` immediately.
Python assignment shorthand for *casting*
[ "python" ]
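The `take()` recipe quoted in the accepted answer is runnable once `islice` is imported; note that successive calls continue consuming the same generator:

```python
from itertools import islice

def take(n, iterable):
    "Return first n items of the iterable as a list"
    return list(islice(iterable, n))

gen = (i * i for i in range(10))
print(take(3, gen))  # [0, 1, 4]
print(take(2, gen))  # the same generator continues: [9, 16]
```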
I'm trying to retrieve environment variables of a remote Linux server in Python. Alternatively, if I could use the variable name in the command I'm executing, that'd be great as well. The calls I'm making should work, as far as I can tell, but I'm getting back garbage. I have set up public-key authentication, so no password is required.

Effort 1:

```
devdir = subprocess.check_output(["ssh", connectstring, "echo $DEVDIR"])
```

Effort 2:

```
ret = subprocess.check_output(["ssh", connectstring, "$DEVDIR/{0}".format(testpath)])
```

`connectstring` is user@ip and works fine. `$DEVDIR` is the remote variable I want to use and contains a path. `testpath` is the path to the script I'm trying to execute, rooted at `$DEVDIR`.

Effort 1 returns "\n", Effort 2 fails to resolve $DEVDIR remotely.

---

Effort 3:

```
import paramiko
...
ssh = paramiko.SSHClient()
ssh.connect(ip, user)  # succeeds
stdin, stdout, stderr = ssh.exec_command("echo $DEVDIR")
```

result: stdout.readlines() = "\n"
If the environment variable is set in `.bashrc`, you can force the remote command to run under a login shell with the `-l` option. The following...

```
devdir = subprocess.check_output(["ssh", connectstring, "sh -l -c 'echo $DEVDIR'"])
```

...works for me.
Executing a command via SSH does not produce a login session, so some variables might not be set. You can check this by replacing the variable with another one such as `$HOSTNAME`, `$HOST` or `$SSH_CONNECTION`, or by executing the `env` command.

A solution might be to put the variable assignment into a file which is executed in a non-login session as well. `.bashrc` should be fine; however, maybe there is a flaw somewhere inside it.
How can I retrieve environment variables from remote system in Python?
[ "python", "ssh", "environment-variables" ]
I have a large amount of data in a collection in mongodb which I need to analyze. How do I import that data to pandas? I am new to pandas and numpy.

EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype.

Sample Data:

```
{
    "_cls" : "SensorReport",
    "_id" : ObjectId("515a963b78f6a035d9fa531b"),
    "_types" : [ "SensorReport" ],
    "Readings" : [
        { "a" : 0.958069536790466, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:26:35.297Z"), "b" : 6.296118156595, "_cls" : "Reading" },
        { "a" : 0.95574014778624, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:09.963Z"), "b" : 6.29651468650064, "_cls" : "Reading" },
        { "a" : 0.953648289182713, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:37.545Z"), "b" : 7.29679823731148, "_cls" : "Reading" },
        { "a" : 0.955931884300997, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:28:21.369Z"), "b" : 6.29642922525632, "_cls" : "Reading" },
        { "a" : 0.95821381, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:20.801Z"), "b" : 7.28956613, "_cls" : "Reading" },
        { "a" : 4.95821335, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:36.931Z"), "b" : 6.28956574, "_cls" : "Reading" },
        { "a" : 9.95821341, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:42:09.971Z"), "b" : 0.28956488, "_cls" : "Reading" },
        { "a" : 1.95667927, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:43:55.463Z"), "b" : 0.29115237, "_cls" : "Reading" }
    ],
    "latestReportTime" : ISODate("2013-04-02T08:43:55.463Z"),
    "sensorName" : "56847890-0",
    "reportCount" : 8
}
```
`pymongo` might give you a hand; the following is some code I'm using:

```
import pandas as pd
from pymongo import MongoClient


def _connect_mongo(host, port, username, password, db):
    """ A util for making a connection to mongo """
    if username and password:
        mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db)
        conn = MongoClient(mongo_uri)
    else:
        conn = MongoClient(host, port)
    return conn[db]


def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True):
    """ Read from Mongo and Store into DataFrame """

    # Connect to MongoDB
    db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)

    # Make a query to the specific DB and Collection
    cursor = db[collection].find(query)

    # Expand the cursor and construct the DataFrame
    df = pd.DataFrame(list(cursor))

    # Delete the _id
    if no_id:
        del df['_id']

    return df
```
You can load your MongoDB data into a pandas DataFrame using this code. It works for me; hopefully for you too.

```
import pymongo
import pandas as pd
from pymongo import MongoClient

client = MongoClient()
db = client.database_name
collection = db.collection_name
data = pd.DataFrame(list(collection.find()))
```
How to import data from mongodb to pandas?
[ "python", "mongodb", "pandas", "pymongo" ]
If I have a list of numbers `[4,2,5,1,3]`, I want to sort it first by some function `f`, and then, for numbers with the same value of `f`, I want it to be sorted by the magnitude of the number. This code does not seem to be working:

```
list5 = sorted(list5)
list5 = sorted(list5, key = lambda vertex: degree(vertex))
```

Secondary sorting first: list5 is sorted based on magnitude. Primary sorting next: list5 is sorted based on some function of the numbers.
Sort it by a (firstkey, secondkey) tuple:

```
sorted(list5, key=lambda vertex: (degree(vertex), vertex))
```
From the Python 3 docs on [sorting](https://docs.python.org/3/howto/sorting.html#sortinghowto):

```
from operator import itemgetter, attrgetter

student_objects = [
    Student('john', 'A', 15),
    Student('jane', 'B', 12),
    Student('dave', 'B', 10),
]

student_tuples = [
    ('john', 'A', 15),
    ('jane', 'B', 12),
    ('dave', 'B', 10),
]

# The operator module functions allow multiple levels of sorting.
# For example, to sort by grade then by age:
sorted(student_tuples, key=itemgetter(1,2))
sorted(student_objects, key=attrgetter('grade', 'age'))
```
How do I perform secondary sorting in python?
[ "python" ]
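When the two sort directions differ, a numeric key can simply be negated inside the tuple; `degree` below is a stand-in for the question's undefined `f`:

```python
def degree(v):
    # stand-in for the question's f/degree function
    return v % 3

nums = [4, 2, 5, 1, 3]

# primary: degree ascending, secondary: magnitude ascending
by_both = sorted(nums, key=lambda v: (degree(v), v))

# primary: degree descending, secondary: magnitude still ascending
mixed = sorted(nums, key=lambda v: (-degree(v), v))

print(by_both)  # [3, 1, 4, 2, 5]
print(mixed)    # [2, 5, 1, 4, 3]
```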
I'm learning about urllib2 and Beautiful Soup, and on first tests am getting errors like:

```
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2026' in position 10: ordinal not in range(128)
```

There seem to be lots of posts about this type of error, and I have tried the solutions that I can understand, but there seem to be catch-22's with them, e.g.:

I want to print `post.text` (where text is a Beautiful Soup method that just returns the text). `str(post.text)` and `post.text` produce the unicode errors (on things like right apostrophes `'` and `...`).

So I add `post = unicode(post)` above `str(post.text)`, then I get:

```
AttributeError: 'unicode' object has no attribute 'text'
```

I also tried `(post.text).encode()` and `(post.text).renderContents()`. The latter produces the error:

```
AttributeError: 'unicode' object has no attribute 'renderContents'
```

and then I tried `str(post.text).renderContents()` and got the error:

```
AttributeError: 'str' object has no attribute 'renderContents'
```

It would be great if I could just define somewhere at the top of the document 'make this content interpretable' and still have access to the required `text` function.

---

**Update:** after suggestions:

If I add `post = post.decode("utf-8")` above `str(post.text)` I get:

```
TypeError: unsupported operand type(s) for -: 'str' and 'int'
```

If I add `post = post.decode()` above `str(post.text)` I get:

```
AttributeError: 'unicode' object has no attribute 'text'
```

If I add `post = post.encode("utf-8")` above `(post.text)` I get:

```
AttributeError: 'str' object has no attribute 'text'
```

I tried `print post.text.encode('utf-8')` and got:

```
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 39: ordinal not in range(128)
```

And for the sake of trying things that might work, I installed lxml for Windows from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml) and implemented it with:

```
parsed_content = BeautifulSoup(original_content, "lxml")
```

according to <http://www.crummy.com/software/BeautifulSoup/bs4/doc/#output-formatters>. These steps didn't seem to make a difference.

I'm using Python 2.7.4 and Beautiful Soup 4.

---

**Solution:** After getting a deeper understanding of unicode, utf-8 and Beautiful Soup types, it turned out to have something to do with my printing methodology. I removed all my `str` calls and concatenations, e.g. `str(something) + post.text + str(something_else)`, so that it was `something, post.text, something_else`, and it seems to be printing well, except that I have less control of the formatting at this stage (e.g. spaces inserted at `,`).
In Python 2, `unicode` objects can only be printed if they can be converted to ASCII. If it can't be encoded in ASCII, you'll get that error. You probably want to explicitly encode it and then print the resulting `str`:

```
print post.text.encode('utf-8')
```
``` html = urllib.request.urlopen(THE_URL).read() soup = BeautifulSoup(html) print("'" + str(soup.encode("ascii")) + "'") ``` worked for me ;-)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2026'
[ "", "python", "python-2.7", "unicode", "beautifulsoup", "urllib2", "" ]
How can I find out if a matrix is [positive definite](http://en.wikipedia.org/wiki/Positive-definite_matrix)? My matrix is a NumPy matrix. I was expecting to find any related method in the NumPy library, but I didn't have any success.
You can also check if all the [eigenvalues](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors) of the matrix are positive. If so, the matrix is positive definite:

```
import numpy as np

def is_pos_def(x):
    return np.all(np.linalg.eigvals(x) > 0)
```
You could try computing Cholesky decomposition ([`numpy.linalg.cholesky`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.cholesky.html)). This will raise `LinAlgError` if the matrix is not positive definite.
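The Cholesky criterion can also be sketched without NumPy, which makes the mechanics visible: the factorization succeeds exactly when every pivot stays positive. This is an illustrative pure-Python version (the helper name and the tolerance are my own, not from the answer):

```python
import math

def is_pos_def(a, eps=1e-12):
    """Return True if the symmetric matrix `a` (a list of lists) is
    positive definite, by attempting a Cholesky factorization.
    Pure-Python sketch of what numpy.linalg.cholesky effectively checks."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= eps:          # non-positive pivot -> not positive definite
                    return False
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return True

print(is_pos_def([[2.0, -1.0], [-1.0, 2.0]]))  # True
print(is_pos_def([[1.0, 2.0], [2.0, 1.0]]))    # False (eigenvalues 3 and -1)
```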
Find out if a matrix is positive definite with NumPy
[ "", "python", "matrix", "numpy", "scipy", "" ]
What's the recommended way to flatten nested lists since the [deprecation of the compiler package](http://docs.python.org/2/library/compiler.html#module-compiler)? ``` >>> from compiler.ast import flatten >>> flatten(["junk",["nested stuff"],[],[[]]]) ['junk', 'nested stuff'] ``` I know that there are a few stack overflow answers for list flattening, but I'm hoping for the pythonic, standard package, "one, and preferably only one, obvious way" to do this.
Your stated function takes a nested list and flattens that into a new list. To flatten an arbitrarily nested list into a new list, this works on Python 3 as you expect:

```
import collections

def flatten(x):
    result = []
    for el in x:
        if isinstance(el, collections.Iterable) and not isinstance(el, str):
            result.extend(flatten(el))
        else:
            result.append(el)
    return result

print(flatten(["junk", ["nested stuff"], [], [[]]]))
```

Prints:

```
['junk', 'nested stuff']
```

If you want a generator that does the same thing:

```
def flat_gen(x):
    def iselement(e):
        return not (isinstance(e, collections.Iterable) and not isinstance(e, str))
    for el in x:
        if iselement(el):
            yield el
        else:
            for sub in flat_gen(el):
                yield sub

print(list(flat_gen(["junk", ["nested stuff"], [], [[[], ['deep']]]])))
# ['junk', 'nested stuff', 'deep']
```

For Python 3.3 and later, use [yield from](https://docs.python.org/3/whatsnew/3.3.html#pep-380) instead of the loop:

```
def flat_gen(x):
    def iselement(e):
        return not (isinstance(e, collections.Iterable) and not isinstance(e, str))
    for el in x:
        if iselement(el):
            yield el
        else:
            yield from flat_gen(el)
```
[`itertools.chain`](http://docs.python.org/3/library/itertools.html#itertools.chain) is the best solution for flattening any nested iterable one level - it's highly efficient compared to any pure-Python solution. That said, it will work on *all* iterables, so some checking is required if you want to avoid it flattening out strings, for example. Likewise, it won't magically flatten out to an arbitrary depth. Generally, though, such a generic solution isn't required - instead it's best to keep your data structured so that it doesn't require flattening in that way.

Edit: I would argue that if one had to do arbitrary flattening, this is the best way:

```
import collections

def flatten(iterable):
    for el in iterable:
        if isinstance(el, collections.Iterable) and not isinstance(el, str):
            yield from flatten(el)
        else:
            yield el
```

Remember to use `basestring` in 2.x over `str`, and `for subel in flatten(el): yield subel` instead of `yield from flatten(el)` pre-3.3.

As noted in the comments, I would argue this is the nuclear option, and is likely to cause more problems than it solves. Instead, the best idea is to make your output more regular (output that contains one item still gives it as a one-item tuple, for example), and do regular flattening by one level where it is introduced, rather than all at the end. This will produce more logical, readable, and easier-to-work-with code. Naturally, there are cases where you *need* to do this kind of flattening (if the data is coming from somewhere you can't mess with, so you have no option but to take it in the poorly-structured format), in which case this kind of solution might be needed, but in general, it's probably a bad idea.
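For very deep nesting, recursion depth can become a limit; the same arbitrary-depth flattening can be done iteratively with an explicit stack. This is a sketch, not part of either answer (it treats only lists and tuples as nestable, which sidesteps the string question):

```python
def flatten_iter(nested):
    """Flatten arbitrarily nested lists/tuples without recursion,
    leaving strings and other scalars intact."""
    result = []
    stack = [iter(nested)]
    while stack:
        try:
            el = next(stack[-1])
        except StopIteration:
            stack.pop()          # this level is exhausted, go back up
            continue
        if isinstance(el, (list, tuple)):
            stack.append(iter(el))   # descend into the nested container
        else:
            result.append(el)
    return result

print(flatten_iter(["junk", ["nested stuff"], [], [[[], ["deep"]]]]))
# ['junk', 'nested stuff', 'deep']
```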
Python 3 replacement for deprecated compiler.ast flatten function
[ "", "python", "python-3.x", "flatten", "" ]
I have a function that responds differently depending on the way I set up the array that it is taking as an input. For the non-working ways, the function still runs, but just not correctly working way: ``` import numpy as np array1 = ["something1", "a,b,c,9", "more", "b,c,4"] array2 = ["something2", "4,3", "more", "1,a"] array3 = ["something3", "z", "more", "9,1"] array4 = ["something4", "1", "more", "z"] real_array = np.array((array1,array2,array3,array4)) ``` not working way: ``` import numpy as np array = [["something1", "a,b,c,9", "more", "b,c,4"],["something2", "4,3", "more", "1,a"],["something3", "z", "more", "9,1"],["something4", "1", "more", "z"]] real_array = np.array((array)) ``` similar not working way: ``` import numpy as np import csv array = [] reading = csv.reader(open('file.csv', 'rb')) for row in reading: array.append(row) real_array = np.array((array)) ``` clearly the not working way would be easier for dealing with data because I can append the rows to `array` and the other way must be done manually. --- **Both arrays are identical...so why is my function responding to them differently?** --- My function randomly chooses a row from a file and then checks to see if something in the second column matches something in the last column of the **previous choice**. 
Here it is: ``` def make_sequence(size,array): count = 0 without_column = array[1::] np.random.shuffle(without_column) sequence = [without_column[0]] result = [without_column[0][0]] length = 0 while length < size: np.random.shuffle(without_column) start = without_column[0][1].split(',') end = sequence[count][3].split(',') i = 0 while i < len(start): if start[i] in end: sequence.append(without_column[0]) result.append(without_column[0][0]) count += 1 i = len(start) else: pass i += 1 length = len(result) return result ``` --- EDIT 2: What should happen --- if I do this code: ``` make_sequence(10,real_array) ``` I want it to return an array that is different every time and is made up of the 1st column only, but will only place items next to each other if the starting position of the 2nd item is one of the ending positions of the 1st. Here is an example: If the first item chosen is array3, the next item can only be array1 or array4, none of the others. This is because column 4 for array 3 is 9,1 meaning that the only arrays that can come after that must have a 9 or a 1 in their 2nd column. so if the randomly chosen first item is array3, then if I run this code: ``` make_sequence(2,real_array) ``` could only have the following output possibilities: ["something3","something4"] ["something3","something1"]
I have found a solution! The function works perfectly if I just don't load the file into a numpy array at all... here is what worked:

```
import numpy as np
import csv

array = []
reading = csv.reader(open('file.csv', 'rb'))
for row in reading:
    array.append(row)

make_sequence(10, array)
```

If I do not convert the array to a numpy array, my function works perfectly. **I still do not know why this is; if anybody knows, please tell me.**
Yes, your arrays seem identical to me. In addition to Rob's test, you can also see that ``` real_array1.shape == real_array2.shape real_array1.dtype == real_array2.dtype ``` Both return `True` However, your function uses a random line, of course it will return different results each time. I'm not sure, without a more careful reading, what your code is supposed to do, but your problem might be that your `without_column` array is actually the array without the first *row*. ``` In [15]: without_column = real_array1[1::] In [16]: real_array1 Out[16]: array([['something1', 'a,b,c,9', 'more', 'b,c,4'], ['something2', '4,3', 'more', '1,a'], ['something3', 'z', 'more', '9,1'], ['something4', '1', 'more', 'z']], dtype='|S10') In [17]: without_column Out[17]: array([['something2', '4,3', 'more', '1,a'], ['something3', 'z', 'more', '9,1'], ['something4', '1', 'more', 'z']], dtype='|S10') ``` Perhaps what you want is: ``` In [19]: without_column = real_array1[:,1:] In [20]: without_column Out[20]: array([['a,b,c,9', 'more', 'b,c,4'], ['1', 'more', 'z'], ['z', 'more', '9,1'], ['4,3', 'more', '1,a']], dtype='|S10') ``` Is this intended? --- When I try to run your code, choosing `size` > 1 makes the while loop run forever.
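The row-versus-column slicing distinction the answer points out can be reproduced with plain nested lists, no NumPy required (sample data borrowed from the question):

```python
rows = [
    ["something1", "a,b,c,9", "more", "b,c,4"],
    ["something2", "4,3", "more", "1,a"],
]

# rows[1:] drops the first ROW, which is what array[1::] does
print(rows[1:])

# Dropping the first COLUMN needs a slice per row,
# which is what array[:, 1:] expresses in NumPy
print([r[1:] for r in rows])
```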
arrays are not functioning depending on their format
[ "", "python", "arrays", "numpy", "" ]
I have a database of records that each have a title. I want to be able to search through this database with a search string that will be separated into a list or an array. So for example if I search with "Book Dog", it will search for all titles that have "Book" or "Dog" in the title. I'm using entity framework and I guess the simplest way to write down what I want to do is ``` string[] words; var posts = (from p in ctx.posts where p.title.contains(words) select p).ToList(); ``` I've tried using a StringExtension I found online, but I would get the following error "LINQ to Entities does not recognize the method 'Boolean ContainsAny(System.String, System.String[])' method, and this method cannot be translated into a store expression." And the extension is ``` public static bool ContainsAny(this string str, params string[] values) { if (!string.IsNullOrEmpty(str) || values.Length > 0) { foreach (string value in values) { if (str.Contains(value)) return true; } } return false; } ```
Are you looking for this? ``` var posts = (from p in ctx.posts where words.Any(w => p.title.Contains(w)) select p).ToList(); ```
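The shape of the accepted predicate (keep a post if any search word occurs in its title) is easy to check outside Entity Framework; here is the same filter over an in-memory list in Python, with made-up titles:

```python
words = ["Book", "Dog"]
posts = [
    {"title": "My Dog Skip"},
    {"title": "Cooking for Two"},
    {"title": "The Jungle Book"},
]

# keep a post if ANY of the search words appears in its title,
# mirroring words.Any(w => p.title.Contains(w)) in the LINQ query
matches = [p for p in posts if any(w in p["title"] for w in words)]
print([p["title"] for p in matches])   # ['My Dog Skip', 'The Jungle Book']
```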
Is this what you need: ``` ctx.posts.Where(post => words.Any(word => post.Title.Contains(word))) ```
Entity Framework search records with a list of strings?
[ "", "asp.net", "sql", "entity-framework", "asp.net-mvc-4", "" ]
What is the best way to generate a unique key for the contents of a dictionary. My intention is to store each dictionary in a document store along with a unique id or hash so that I don't have to load the whole dictionary from the store to check if it exists already or not. *Dictionaries with the same keys and values should generate the same id or hash.* I have the following code: ``` import hashlib a={'name':'Danish', 'age':107} b={'age':107, 'name':'Danish'} print str(a) print hashlib.sha1(str(a)).hexdigest() print hashlib.sha1(str(b)).hexdigest() ``` The last two print statements generate the same string. Is this is a good implementation? or are there any pitfalls with this approach? Is there a better way to do this? **Update** Combining suggestions from the answers below, the following might be a good implementation ``` import hashlib a={'name':'Danish', 'age':107} b={'age':107, 'name':'Danish'} def get_id_for_dict(dict): unique_str = ''.join(["'%s':'%s';"%(key, val) for (key, val) in sorted(dict.items())]) return hashlib.sha1(unique_str).hexdigest() print get_id_for_dict(a) print get_id_for_dict(b) ```
A possible option would be using a serialized representation of the dictionary that preserves order. I am not sure whether the default dict-to-string mechanism imposes any kind of order, but it wouldn't surprise me if it were interpreter-dependent. So, I'd basically build something akin to `urlencode` that sorts the keys beforehand. Not that I believe that your method would fail, but I'd rather play with predictable things and avoid undocumented and/or unpredictable behavior. It's true that despite being "unordered", dictionaries end up having an order that may even be consistent, but the point is that you shouldn't take that for granted.
I prefer serializing the dict as JSON and hashing that: ``` import hashlib import json a={'name':'Danish', 'age':107} b={'age':107, 'name':'Danish'} # Python 2 print hashlib.sha1(json.dumps(a, sort_keys=True)).hexdigest() print hashlib.sha1(json.dumps(b, sort_keys=True)).hexdigest() # Python 3 print(hashlib.sha1(json.dumps(a, sort_keys=True).encode()).hexdigest()) print(hashlib.sha1(json.dumps(b, sort_keys=True).encode()).hexdigest()) ``` Returns: ``` 71083588011445f0e65e11c80524640668d3797d 71083588011445f0e65e11c80524640668d3797d ```
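One property worth noting: `sort_keys` canonicalizes nested dictionaries too, so the JSON approach is order-insensitive at every level. A small sketch (the helper name and sample data are mine):

```python
import hashlib
import json

def dict_hash(d):
    # sort_keys makes the serialization canonical, so key order
    # in the source dict (at any nesting depth) does not affect the digest
    return hashlib.sha1(json.dumps(d, sort_keys=True).encode()).hexdigest()

a = {"name": "Danish", "age": 107, "pets": {"dog": 1, "cat": 2}}
b = {"pets": {"cat": 2, "dog": 1}, "age": 107, "name": "Danish"}
c = {"name": "Danish", "age": 108}

assert dict_hash(a) == dict_hash(b)   # same contents, different order
assert dict_hash(a) != dict_hash(c)   # different contents
```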
How To Create a Unique Key For A Dictionary In Python
[ "", "python", "hash", "dictionary", "" ]
Getting this error when trying to run a Python script. I have downloaded a requests-1.2.0 folder but I have no idea what to do with it. I've tried running the setup.py file contained in the download but it just opens a command terminal for a second and closes. I am running Python from my Windows desktop, not on a server or anything like that. No idea what I'm doing here!
From the root directory of the requests folder you downloaded, run:

```
$ python setup.py install
```

Then it will be installed system-wide, and your scripts may use `import requests`.
You need to install it for the same version of Python that you're using to run your script. The `setup.py` tells Python how to do that, so you can open up a command line to that directory (do you know how to do that?) and type `python setup.py install` to install it. But there's a much easier way -- Python has an excellent package manager, called [pip](https://stackoverflow.com/questions/2436731/does-python-have-a-package-module-management-system/13445719#13445719). You can use that to install any other Python packages that you want by typing `pip install requests` at the command line. It will connect to the internet, work out what to download, then download and install it for you.
ImportError: No module named 'requests'
[ "", "python", "python-3.x", "python-requests", "" ]
I was curious how I would check if 2 numbers inside a list are identical. For instance,

```
myList = [1, 7, 9, 3, 1, 2, 8]
```

**In this case, "1" is repeated in `myList`.** How would I make a program that checks whether two numbers inside a list are the same (repeated)? Please use loops so I can understand, because I haven't learned complex functions yet.
If you want to use loops, you'll have to use a list or a set of numbers which you've already seen. Then while looping you'll check, with the `in` operator if the number is already seen. ``` seen = [] for number in myList: if number in seen: print "Number repeated!" else: seen.append(number) ``` `set` does not allow duplicates in it, thus it's a good fit for this sort of an algorithm. As mentioned in the comments, the time complexity for checking if an element is in a set is constant for the average case (O(1)), so this is more efficient if you have a lot of numbers. ``` seen = set() for number in myList: if number in seen: print "Number repeated!" seen.add(number) # won't duplicate ``` I'd say that the most pythonic way is to use `collections.Counter`, but the other answers cover this already. To use a built-in, you could generate a set of the numbers which appear more than once using a [generator expression](http://docs.python.org/2/reference/expressions.html#generator-expressions) and `set`. ``` In [39]: seen = set() In [40]: print list(set(x for x in myList if x in seen or seen.add(x))) [1] ``` Here the expression will loop over all values in `myList` and add them to a `set` called `seen` if they have already been seen. Eventually, it will convert the resulting set into a list and print the contents.
Using [`collections.Counter`](http://docs.python.org/3.3/library/collections.html#collections.Counter):

```
from collections import Counter

myList = [1, 7, 9, 3, 1, 2, 8]
counter = Counter(myList)
print(counter) # prints 'Counter({1: 2, 2: 1, 3: 1, 7: 1, 8: 1, 9: 1})'
```

With `collections.Counter`, you know how many times each element of your source list `myList` is repeated. After that you can use a simple [list comprehension](http://docs.python.org/3.3/tutorial/datastructures.html#list-comprehensions) to find which elements are repeated:

```
result = [i for i, j in counter.items() if j > 1]
print(result) # prints '[1]'
```
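Once sets are on the table, the bare yes/no question (does the list contain any repeated number?) can also be answered in one line, by comparing lengths. A small sketch building on the answers above:

```python
def has_duplicates(values):
    # a set drops repeats, so a length mismatch means something repeated
    return len(set(values)) != len(values)

my_list = [1, 7, 9, 3, 1, 2, 8]
print(has_duplicates(my_list))      # True
print(has_duplicates([1, 2, 3]))    # False
```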
How to check if two numbers in a list are the same
[ "", "python", "" ]
Please consider the below algorithm:

```
for (j1 = n upto 0)
  for (j2 = n - j1 upto 0)
    for (j3 = n - j1 - j2 upto 0)
      ...
        for (jmax = n - j1 - j2 - ... - j_(max-1))
        {
          count++;
          product.append(j1 * j2 ... jmax); // just an example
        }
```

As you can see, some relevant points about the algorithm snippet above:

1. I have listed an algorithm with a variable number of for loops.
2. The result that I calculate at each innermost loop is appended to a list. This list will grow to dimension 'count'.

Is this problem a suitable candidate for recursion? If yes, I am really not sure how to break the problem up. I am trying to code this up in Python, and I do not expect any code from you guys. Just some pointers or examples in the right direction. Thank you. Here is an initial try for a sample case <http://pastebin.com/PiLNTWED>
Your algorithm is finding all the `m`-tuples (`m` being the `max` subscript of `j` from your pseudocode) of non-negative integers that add up to `n` or less. In Python, the most natural way of expressing that would be with a recursive generator:

```
def gen_tuples(m, n):
    if m == 0:
        yield ()
    else:
        for x in range(n, -1, -1):
            for sub_result in gen_tuples(m-1, n-x):
                yield (x,) + sub_result
```

Example output:

```
>>> for x, y, z in gen_tuples(3, 3):
...     print(x, y, z)
3 0 0
2 1 0
2 0 1
2 0 0
1 2 0
1 1 1
1 1 0
1 0 2
1 0 1
1 0 0
0 3 0
0 2 1
0 2 0
0 1 2
0 1 1
0 1 0
0 0 3
0 0 2
0 0 1
0 0 0
```
You could also consider using permutations, combinations or product from the *itertools* module. If you want all the possible combinations of i, j, k, ... (i.e. nested for loops) you can use: ``` for p in product(range(n), repeat=depth): j1, j2, j3, ... = p # the same as nested for loops # do stuff here ``` But beware, the number of iterations in the loop grows exponentially!
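The two answers can cross-check each other: `itertools.product` enumerates the full `(n+1)^m` grid and a filter keeps only the tuples whose sum stays within `n`. This is far less efficient than the recursive generator, but handy for testing (the function name is mine):

```python
from itertools import product

def gen_tuples_bruteforce(m, n):
    """All m-tuples of non-negative ints with sum <= n, by filtering
    the full (n+1)**m grid produced by itertools.product."""
    return [p for p in product(range(n, -1, -1), repeat=m) if sum(p) <= n]

triples = gen_tuples_bruteforce(3, 3)
print(len(triples))   # 20 tuples, i.e. C(3+3, 3)
print(triples[0])     # (3, 0, 0)
```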
Formulation of a recursive solution (variable for loops)
[ "", "python", "recursion", "for-loop", "" ]
I `import datetime` in my **Django views** to save the time in the **database**, using `now = datetime.datetime.now()`. When I am **saving** its value in the database it **returns** something like

> 2013-04-28 22:54:30.223113

How can I remove the

> 223113

part from 2013-04-28 22:54:30.223113? Please suggest how I can do this.
Set `microsecond=0` using `datetime.replace()`:

```
>>> now = datetime.datetime.now().replace(microsecond=0)
>>> print now
2013-04-29 12:47:28
```
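On a fixed timestamp the two approaches from the answers agree; `replace(microsecond=0)` keeps a `datetime` object, while `strftime` produces a string directly (a sketch using the timestamp from the question):

```python
import datetime

dt = datetime.datetime(2013, 4, 28, 22, 54, 30, 223113)

# keep a datetime object, just with the microseconds zeroed
trimmed = dt.replace(microsecond=0)
print(trimmed)                            # 2013-04-28 22:54:30

# or format straight to a string
print(dt.strftime("%Y-%m-%d %H:%M:%S"))   # 2013-04-28 22:54:30
```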
You should ideally be using datetime field in your database. But if there is a constraint that you go to store date as a string use this to format it : ``` >>> datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") '2013-04-29 12:17:55' ```
Change time format in Django Views
[ "", "python", "django", "date", "format", "" ]
I'm trying to display rows in a ListView on Android by using an SQL statement and the jTDS driver, through an AsyncTask. **Articles.java:** ``` package com.example.projectmanager; import java.sql.Connection; import java.sql.Date; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.util.ArrayList; import java.util.List; import android.os.AsyncTask; import android.util.Log; public class Articles extends AsyncTask <List<Articles>, Void, List> { int article_id; String title; String body ; Date date; String username; List<Articles> posts = new ArrayList<Articles>(); protected List<Articles> doInBackground(List... params) { Connection conn = null; try { String driver = "net.sourceforge.jtds.jdbc.Driver"; Class.forName(driver).newInstance(); String connString = "jdbc:jtds:sqlserver://10.0.2.2/master_db;"; String sqlusername = "admin"; String sqlpassword = "root"; conn = DriverManager.getConnection(connString, sqlusername, sqlpassword); Log.w("Connection","open"); String articleQuery = "SELECT TOP 5 E.article_id,E.article_title,E.article_description,E.article_date,u.username FROM articles AS E INNER JOIN user_articles as A ON A.article_id = E.article_id INNER JOIN users as u ON A.user_id = u.user_id WHERE E.article_status = 1;"; PreparedStatement stmt = conn.prepareStatement(articleQuery); ResultSet rs; rs = stmt.executeQuery(); while (rs.next()) { Articles article = new Articles(); article.article_id = rs.getInt("article_id"); article.username = rs.getString("username"); article.date = rs.getDate("article_date"); article.title = rs.getString("article_title"); article.body = rs.getString("article_description"); posts.add(article); } conn.close(); } catch (Exception e) { Log.w(e.getMessage(), e); } return posts; } protected void onPostExecute(List posts) { // Result is here now, may be 6 different List type. 
this.posts = posts; } } ``` **HomeActivity.java:** ``` package com.example.projectmanager; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.concurrent.ExecutionException; import android.os.Bundle; import android.app.Activity; import android.content.Context; import android.view.Menu; import android.widget.AdapterView; import android.widget.ArrayAdapter; import android.widget.ListView; public class HomeActivity extends Activity { @SuppressWarnings("unchecked") @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_home); Articles a = new Articles(); ListView lv = (ListView) findViewById(R.id.lvPosts); List postsList = null; try { postsList = new Articles().execute().get(); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } ArrayAdapter<String> arrayAdapter = new ArrayAdapter<String>(this,android.R.layout.simple_list_item_1, postsList); lv.setAdapter(arrayAdapter); } } ``` Whenever I try to run the code I get this output: ![enter image description here](https://i.stack.imgur.com/JikwQ.png) LogCat: <http://pastebin.com/cjm1BNeZ> Can anyone tell me what is going on ? **UPDATE:** Populating the arraylist ``` String articleQuery = "SELECT TOP 5 E.article_id,E.article_title,E.article_description,E.article_date,u.username FROM articles AS E INNER JOIN user_articles as A ON A.article_id = E.article_id INNER JOIN users as u ON A.user_id = u.user_id WHERE E.article_status = 1;"; PreparedStatement stmt = conn.prepareStatement(articleQuery); ResultSet rs; rs = stmt.executeQuery(); while (rs.next()) { Articles article = new Articles(); article.article_id = rs.getInt("article_id"); article.username = rs.getString("username"); article.date = rs.getDate("article_date"); article.title = rs.getString("article_title"); article.body = rs.getString("article_description"); posts.add(article); } ```
Add this to your Articles class:

```
@Override
public String toString() {
    return "id: " + article_id + "\n" + "title: " + title + " body: " + body;
}
```

Or any other information you want your user to see. This is needed because the Android framework calls `toString()` on each object automatically to render it in the list, which is why you need to override this method.

UPDATE:

```
Articles article = new Articles();
article.article_id = rs.getInt("article_id");
article.username = rs.getString("username");
article.date = rs.getDate("article_date");
article.title = rs.getString("article_title");
article.body = rs.getString("article_description");
```

This is the code where you populate your Articles object. If you want to show other properties of this object in your item list, simply add the fields to the `toString()` method, e.g.:

```
@Override
public String toString() {
    return "id: " + article_id + "\n" + "title: " + title + " body: " + body
            + " date: " + date + " username: " + username;
}
```
I think you are using ArrayAdapter wrongly. The type you put in (String) is not the type you should put. You need to create a new customized array adapter that accepts the type Articles, something like:

```
public class ListAdapter extends ArrayAdapter<Articles> {

    private List<Articles> items;

    public ListAdapter(Context context, int resource, List<Articles> items) {
        super(context, resource, items);
        this.items = items;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View v = convertView;
        if (v == null) {
            LayoutInflater vi = LayoutInflater.from(getContext());
            v = vi.inflate(R.layout.itemlistrow, null);
        }
        Articles p = items.get(position);
        if (p != null) {
            TextView tt = (TextView) v.findViewById(R.id.id);
            TextView tt1 = (TextView) v.findViewById(R.id.categoryId);
            TextView tt3 = (TextView) v.findViewById(R.id.description);
            if (tt != null) {
                tt.setText(p.username);
            }
            if (tt1 != null) {
                tt1.setText(String.valueOf(p.date));
            }
            // and so on for the other fields...
        }
        return v;
    }
}
```

You should connect your views with your Articles object, view to attribute; that's what an adapter is for (and the string adapter is for string lists only). After this you should do:

```
ListAdapter arrayAdapter = new ListAdapter(this, android.R.layout.simple_list_item_1, postsList);
lv.setAdapter(arrayAdapter);
```

hope it helps.
ListView on Android - Populating with an ArrayList
[ "", "android", "sql", "arraylist", "android-listview", "android-asynctask", "" ]
How can one reference a calculated value from the previous row in a SQL query? In my case each row is an event that somehow manipulates the same value from the previous row. The raw data looks like this:

```
Eventno  Eventtype  Totalcharge
3        ACQ        32
2        OUT        NULL
1        OUT        NULL
```

Let's say each Eventtype=OUT should halve the previous row's totalcharge in a column called Remaincharge:

```
Eventno  Eventtype  Totalcharge  Remaincharge
3        ACQ        32           32
2        OUT        NULL         16
1        OUT        NULL         8
```

I've already tried the LAG analytic function but that does not allow me to get a calculated value from the previous row. Tried something like this: `LAG(remaincharge, 1, totalcharge) OVER (PARTITION BY ...) as remaincharge` But this didn't work because remaincharge could not be found. Any ideas how to achieve this? I would need an analytic function that gives me the cumulative sum, but computed by a function with access to the previous value. Thank you in advance!

**Update problem description**

I'm afraid my example problem was too general; here is a better problem description: what remains of totalcharge is decided by the ratio of outqty/(previous remainqty).

```
Eventno  Eventtype  Totalcharge  Remainqty  Outqty
4        ACQ        32           100        0
3        OTHER      NULL         100        0
2        OUT        NULL         60         40
1        OUT        NULL         0          60
```

```
Eventno  Eventtype  Totalcharge  Remainqty  Outqty  Remaincharge
4        ACQ        32           100        0       32
3        OTHER      NULL         100        0       32 - (0/100 * 32) = 32
2        OUT        NULL         60         40      32 - (40/100 * 32) = 12.8
1        OUT        NULL         0          60      12.8 - (60/60 * 12.8) = 0
```
A variation on Ben's answer to use a windowing clause, which seems to take care of your updated requirements: ``` select eventno, eventtype, totalcharge, remainingqty, outqty, initial_charge - case when running_outqty = 0 then 0 else (running_outqty / 100) * initial_charge end as remainingcharge from ( select eventno, eventtype, totalcharge, remainingqty, outqty, first_value(totalcharge) over (partition by null order by eventno desc) as initial_charge, sum(outqty) over (partition by null order by eventno desc rows between unbounded preceding and current row) as running_outqty from t42 ); ``` Except it gives `19.2` instead of `12.8` for the third row, but that's what your formula suggests it should be: ``` EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE ---------- ----- ----------- ------------ ---------- --------------- 4 ACQ 32 100 0 32 3 OTHER 100 0 32 2 OUT 60 40 19.2 1 OUT 0 60 0 ``` If I add another split so it goes from 60 to zero in two steps, with another non-OUT record in the mix too: ``` EVENTNO EVENT TOTALCHARGE REMAININGQTY OUTQTY REMAININGCHARGE ---------- ----- ----------- ------------ ---------- --------------- 6 ACQ 32 100 0 32 5 OTHER 100 0 32 4 OUT 60 40 19.2 3 OUT 30 30 9.6 2 OTHER 30 0 9.6 1 OUT 0 30 0 ``` There's an assumption that the remaining quantity is consistent and you can effectively track a running total of what has gone before, but from the data you've shown that looks plausible. The inner query calculates that running total for each row, and the outer query does the calculation; that could be condensed but is hopefully clearer like this...
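The windowed arithmetic can be sanity-checked outside the database. This pure-Python sketch replays the same running `SUM(outqty)` logic over the sample rows (like the SQL above, it assumes the initial quantity is 100) and reproduces the 19.2 the query returns for event 2:

```python
# rows as (eventno, eventtype, totalcharge, remainqty, outqty),
# processed in descending eventno order, as the window clause does
rows = [
    (4, "ACQ",   32,   100, 0),
    (3, "OTHER", None, 100, 0),
    (2, "OUT",   None, 60,  40),
    (1, "OUT",   None, 0,   60),
]

initial_charge = rows[0][2]   # plays the role of FIRST_VALUE(totalcharge)
running_out = 0
remaincharge = []
for _, _, _, _, outqty in rows:
    running_out += outqty     # running SUM(outqty) up to the current row
    remaincharge.append(initial_charge - running_out / 100 * initial_charge)

print(remaincharge)           # [32.0, 32.0, 19.2, 0.0]
```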
In your case you could work out the *first* value using the [FIRST\_VALUE()](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions066.htm#SQLRF00642) analytic function and the power of 2 that you have to divide by with [RANK()](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions141.htm#SQLRF00690) in a sub-query and then use that. It's very specific to your example but should give you the general idea: ``` select eventno, eventtype, totalcharge , case when eventtype <> 'OUT' then firstcharge else firstcharge / power(2, "rank" - 1) end as remaincharge from ( select a.* , first_value(totalcharge) over ( partition by 1 order by eventno desc ) as firstcharge , rank() over ( partition by 1 order by eventno desc ) as "rank" from the_table a ) ``` Here's a [SQL Fiddle](http://www.sqlfiddle.com/#!4/c7eda/18) to demonstrate. I haven't partitioned by anything because you've got nothing in your raw data to partition by...
Referencing the value of the previous calculcated value in Oracle
[ "", "sql", "oracle", "analytic-functions", "" ]
I'm having a problem with my database homework. I need to write a subquery that will display the ISBN and title of book that have a category that starts with the letter ‘S’. tables: BOOK(ISBN, Category, Title, Description, Edition, PublisherID) and CATEGORY(CatID, CatDescription) keys: BOOK(ISBN\_PK, Category\_FK) and CATERGORY(CatID\_PK) ---> Category=CatID For now I have some code but it returns an error. Because I'm doing this HW in advance and we still didn't learn subqueries I found most of the solutions to my problems online. Please help. My code: ``` SELECT ISBN, title FROM book WHERE category LIKE (SELECT catdescription FROM category WHERE catdescription LIKE 's%') ```
You can join both tables rather than using a subquery:

```
SELECT a.*
FROM book a
INNER JOIN category b
    ON a.Category = b.CatID
WHERE b.CatDescription LIKE 'S%'
```

To gain more knowledge about joins, kindly visit the link below:

* [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)

---

Consider using a `FULLTEXT` search rather than `LIKE` if you are using the `MyISAM` engine.
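The join can be tried end-to-end with Python's bundled in-memory SQLite; the table contents below are made up to mirror the homework schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (CatID TEXT PRIMARY KEY, CatDescription TEXT);
    CREATE TABLE book (ISBN TEXT PRIMARY KEY, Category TEXT, Title TEXT);
    INSERT INTO category VALUES ('SF', 'Science Fiction'), ('HI', 'History');
    INSERT INTO book VALUES ('111', 'SF', 'Dune'), ('222', 'HI', 'SPQR');
""")

# the same join-plus-LIKE filter as the accepted answer
rows = conn.execute("""
    SELECT b.ISBN, b.Title
    FROM book b
    INNER JOIN category c ON b.Category = c.CatID
    WHERE c.CatDescription LIKE 'S%'
""").fetchall()

print(rows)   # only the book whose category description starts with 'S'
```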
change LIKE to IN ``` SELECT ISBN, title FROM book WHERE title IN (SELECT catdescription FROM category WHERE catdescription LIKE 's%') ``` EDIT: ``` SELECT ISBN, title FROM book WHERE ISBN IN (SELECT catdescription FROM category WHERE catdescription LIKE 's%') OR title IN (SELECT catdescription FROM category WHERE catdescription LIKE 's%') ```
error 1242 subquery returns more than 1 row
[ "", "mysql", "sql", "" ]
I am saving a key to a bucket with: ``` key = bucket.new_key(fileName) key.set_contents_from_string(base64.b64decode(data)) key.set_metadata('Content-Type', 'image/jpeg') key.set_acl('public-read') ``` After the save is successful, how can I access the URL of the newly created file?
If the key is publicly readable (as shown above) you can use [`Key.generate_url`](http://boto.readthedocs.org/en/latest/ref/s3.html#boto.s3.key.Key.generate_url):

```
url = key.generate_url(expires_in=0, query_auth=False)
```

If the key is private and you want to generate an expiring URL to share the content with someone who does not have direct access, you could do:

```
url = key.generate_url(expires_in=300)
```

where `expires_in` is the number of seconds before the URL expires. These will produce HTTPS URLs. If you prefer an HTTP URL, use this:

```
url = key.generate_url(expires_in=0, query_auth=False, force_http=True)
```
For Boto3, you need to do it the following way... ``` import boto3 s3 = boto3.client('s3') url = '{}/{}/{}'.format(s3.meta.endpoint_url, bucket, key) ```
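Either way, a non-signed public URL is just string assembly. Here is a dependency-free sketch; the endpoint, bucket, and key below are invented for illustration, and real buckets may need a region-specific endpoint:

```python
from urllib.parse import quote

# Hypothetical values; adjust for your bucket's region/endpoint.
endpoint = "https://s3.amazonaws.com"   # classic path-style endpoint
bucket = "my-bucket"
key = "photos/cat.jpg"

# quote() percent-encodes unsafe characters but leaves '/' intact by default.
url = "{}/{}/{}".format(endpoint, bucket, quote(key))
print(url)
```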
Using Amazon s3 boto library, how can I get the URL of a saved key?
[ "python", "amazon-s3", "boto" ]
I am using Python 2.5 (and need to stay with that) and have already downloaded xlrd 0.8.0 and xlwt 0.7.2, and they both seem to be working OK. I will be needing to read from and write to Excel spreadsheets, and so believe I will need to add xlutils as well. The problem is, I can't install it so far. I have pip and tried the simple: ``` pip install xlutils ``` That ran and downloaded xlutils, but got hung up with: ``` Downloading/unpacking xlutils Downloading xlutils-1.6.0.tar.gz (54Kb): 54Kb downloaded Running setup.py egg_info for package xlutils Downloading/unpacking xlrd>=0.7.2 (from xlutils) Downloading xlrd-0.9.2.tar.gz (167Kb): 167Kb downloaded Running setup.py egg_info for package xlrd Traceback (most recent call last): File "<string>", line 14, in <module> File "C:\Python25\Lib\site-packages\xlutils-1.6.0\build\xlrd\setup.py", li ne 8, in <module> raise Exception("This version of xlrd requires Python 2.6 or above. " Exception: This version of xlrd requires Python 2.6 or above. For older versions of Python, you can use the 0.8 series. ... [snipping some] ---------------------------------------- Command python setup.py egg_info failed with error code 1 in C:\Python25\Lib\sit e-packages\xlutils-1.6.0\build\xlrd ``` So then I figured it was trying to download a newer xlrd (which I *can't use* with Python 2.5) and since I already have xlrd installed, it breaks on that. I then tried to just download xlutils from <https://pypi.python.org/pypi/xlutils>, and then unzipped it with 7zip, put the xlutils folder under Python25>Lib>site-packages, cd'd there, and did: ``` python setup.py install ``` but that gives me this error in the cmd window: ``` C:\Python25\Lib\site-packages\xlutils-1.6.0>python setup.py install Traceback (most recent call last): File "setup.py", line 5, in <module> from setuptools import setup ImportError: No module named setuptools ``` **So how can I install this?**
First of all, you do not *need* `xlutils` just to read and write Excel files. You can read them with `xlrd` and write them with `xlwt` and provide your own "glue" in the Python code that you write yourself. That said, `xlutils` does provide features that make some things more convenient than writing them for yourself (that is the point of its existence). So the second part of my answer is: You do not need to "install" `xlutils` per se. You can just unpack it and put the `xlutils` directory into `site-packages` and be off and running. This is true for pretty much every pure-Python package, as far as I know. (Some other packages are partially written in C (or sometimes other languages) and these often require specific installation steps.) So why do pure-Python packages provide a `setup.py` script? Usually to run tests or to build `.pyc` files, both of which are optional.
xlutils 1.4.1 is compatible with python 2.5. So this should work: ``` pip install xlutils==1.4.1 ```
Python - How can I install xlutils?
[ "python", "excel" ]
I'm trying to run a Django application using Nginx + uWSGI with no success. After hours of googling and debugging I made the simplest possible uwsgi configuration that must work: ``` $ uwsgi --http 127.0.0.1:8000 --wsgi-file test.py ``` Where test.py is ``` def application(env, start_response): start_response('200 OK', [('Content-Type','text/html')]) return "Hello World" ``` The problem is: it doesn't. A wget call on the same machine hangs: ``` $ wget http://127.0.0.1:8000 --2013-04-28 12:43:36-- http://127.0.0.1:8000/ Connecting to 127.0.0.1:8000... connected. HTTP request sent, awaiting response... ``` uWSGI output is silent (except for initial information): ``` *** Starting uWSGI 1.9.8 (32bit) on [Sun Apr 28 12:43:56 2013] *** compiled with version: 4.4.5 on 28 April 2013 06:22:28 os: Linux-2.6.27-ovz-4 #1 SMP Mon Apr 27 00:26:17 MSD 2009 ... ``` The connection is in fact established, because killing uWSGI aborts wget. Probably uWSGI isn't detailed enough about the errors that occur, or I must have missed something. Any tip on where to look further is appreciated. ### Update: More system details: Debian 6.0.7, Python 2.6.6. 
A full uWSGI log on start: ``` $ uwsgi --http 127.0.0.1:8000 --wsgi-file test.py *** Starting uWSGI 1.9.8 (32bit) on [Mon Apr 29 04:50:03 2013] *** compiled with version: 4.4.5 on 28 April 2013 06:22:28 os: Linux-2.6.27-ovz-4 #1 SMP Mon Apr 27 00:26:17 MSD 2009 nodename: max.local machine: i686 clock source: unix detected number of CPU cores: 4 current working directory: /home/user/dir detected binary path: /home/user/dir/env/ENV/bin/uwsgi *** WARNING: you are running uWSGI without its master process manager *** your memory page size is 4096 bytes detected max file descriptor number: 1024 lock engine: pthread robust mutexes uWSGI http bound on 127.0.0.1:8000 fd 4 spawned uWSGI http 1 (pid: 19523) uwsgi socket 0 bound to TCP address 127.0.0.1:57919 (port auto-assigned) fd 3 Python version: 2.6.6 (r266:84292, Dec 27 2010, 00:18:12) [GCC 4.4.5] *** Python threads support is disabled. You can enable it with --enable-threads *** Python main interpreter initialized at 0x80f6240 your server socket listen backlog is limited to 100 connections your mercy for graceful operations on workers is 60 seconds mapped 63944 bytes (62 KB) for 1 cores *** Operational MODE: single process *** WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x80f6240 pid: 19522 (default app) *** uWSGI is running in multiple interpreter mode *** spawned uWSGI worker 1 (and the only) (pid: 19522, cores: 1) ``` And nothing else is ever printed.
For those who may encounter this problem too, here are the final results of my investigation: the issue is definitely environment-related, and most probably Linux-kernel-specific. The strace utility showed that uWSGI couldn't receive a single byte; the problem was at the kernel level. I think the key line is ``` os: Linux-2.6.27-ovz-4 ``` The Linux is running in a virtual environment, and 2.6.27 is not the default kernel version for Debian 6.0.7. With 2.6.32-5 everything worked perfectly. I don't know if it is a bug in the old kernel, a uWSGI compatibility issue, or both. But updating the kernel helped.
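When debugging a hang like this, it helps to have a known-good baseline: a minimal standard-library HTTP server plus a timeout-bounded probe. If even this sketch hangs on the same machine, the problem is below the application layer, exactly as strace suggested here:

```python
import http.server
import threading
import urllib.request

class Ping(http.server.BaseHTTPRequestHandler):
    """Tiny known-good handler that always answers 200."""
    def do_GET(self):
        body = b"Hello World"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the output quiet

# Bind to an ephemeral port and serve from a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), Ping)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A bounded probe: a timeout here (instead of a hang) makes failures visible.
resp = urllib.request.urlopen("http://127.0.0.1:%d/" % port, timeout=5)
data = resp.read()
server.shutdown()
print(resp.status, data)
```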
I had the same problem with the exact same symptoms, after having installed uwsgi with `pip`. I solved the problem by reinstalling uwsgi from the tarball, i.e. according to the [docs](http://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html) with ``` wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz tar zxvf uwsgi-latest.tar.gz cd <dir> make ``` This resulted in a uwsgi binary that, when used to run the docs example you mention, printed a log that differed from the log of the pip-based uwsgi only in the Python version used; the executable version was the same `(uWSGI 2.0.13.1, 64bit)`. The tarball-based version used Python 2.7.6, while the pip-based version used Python 3.4.3. The version installed as the default, i.e. the version the `/usr/bin/python` symbolic link points to, was Python 2.7.6 on my system. It turns out that this wasn't at all a coincidence: temporarily changing the `/usr/bin/python` link to Python 3.4.3 (and changing the return object in `test.py` for Python 3) made the pip-based executable work. The bottom line is that you should **check that the Python version in the uwsgi log coincides with the default version of your system**. I'm not suggesting that installing from the tarball is better than installing from pip here; my guess is that it was coincidental that one source had the correct Python version while the other didn't. So one way to solve this problem is to **try another way of installing uwsgi**.
uWSGI server does not respond
[ "python", "django", "linux", "debian", "uwsgi" ]
What is the most efficient/fastest way to update a table that requires foreign keys (to other tables) in the criteria? I normally do it this way: ``` UPDATE table1 WHERE table1_id in (SELECT table1_id FROM table2 WHERE whatever) ``` But I'm trying to figure out if there is a more efficient way that avoids the subquery. The reason I want to know is because I JUST yesterday learned that it's possible to delete without a subquery like this: ``` DELETE t1 FROM table1 t1 JOIN table2 t2 ON t1.table1_id = t2.table1_id WHERE whatever ``` But I can't figure out how to apply the same JOIN technique to an UPDATE statement
Try this one - **MS-SQL:** ``` UPDATE t SET column_id = t2.column_id FROM table1 t JOIN table2 t2 ON t.table1_id = t2.table1_id WHERE whatever ``` **Oracle:** ``` UPDATE ( SELECT table1.value as OLD, table2.CODE as NEW FROM table1 JOIN table2 ON table1.value = table2.DESC WHERE anything ) t SET t.OLD = t.NEW ```
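Both dialect-specific forms should behave exactly like the portable subquery version the question started from, so that baseline is worth sanity-checking. A quick sketch with Python's built-in `sqlite3` (table names follow the question; columns and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (table1_id INTEGER, col TEXT);
CREATE TABLE table2 (table1_id INTEGER, flag INTEGER);
INSERT INTO table1 VALUES (1, 'old'), (2, 'old'), (3, 'old');
INSERT INTO table2 VALUES (1, 1), (3, 0);
""")

# Portable baseline: update only rows whose id matches a qualifying table2 row.
con.execute("""
    UPDATE table1
    SET col = 'new'
    WHERE table1_id IN (SELECT table1_id FROM table2 WHERE flag = 1)
""")
rows = con.execute("SELECT table1_id, col FROM table1 ORDER BY table1_id").fetchall()
print(rows)
```

Only the row matched through table2 gets updated; a join-based rewrite in any dialect should produce the same final rows.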
``` UPDATE ( SELECT t1.* FROM table1 t1 JOIN table2 t2 ON t1.table1_id = t2.table1_id WHERE whatever ) t SET t.col = ... ```
most efficient way to UPDATE a table using criteria from other tables?
[ "sql", "sql-server", "oracle" ]
I have a method like ``` @staticmethod def add_transaction(name, date, amount, debit, user, category_id): pass ``` What is the best way to test if any of them is `None`? ``` if not (name or date or amount or debit or user or category_id): raise ValueError ```
If this is something you plan on using a lot, you might want to consider a decorator: ``` import functools def None_is_dumb_and_does_not_deserve_to_enter_this_function(func): @functools.wraps(func) def new_func(*args,**kwargs): if None in args: raise ValueError("None's aren't welcome here") return func(*args,**kwargs) return new_func @None_is_dumb_and_does_not_deserve_to_enter_this_function def foo(a,b,c): """Docstring""" print a,b,c foo(1,2,3) print foo.__doc__ foo(None,'bar','baz') ``` This still fails if you call `foo(1,2,c=3)`. We can fix that using the [`decorator`](https://pypi.python.org/pypi/decorator) module: ``` import decorator @decorator.decorator def no_none(f,*args,**kwargs): if None in args: raise ValueError("None's aren't welcome here") return f(*args,**kwargs) @no_none def foo(a,b,c): """Docstring""" print a,b,c foo(1,2,3) print foo.__doc__ try: foo(None,'bar','baz') except ValueError as e: print ('Good, raised ValueError') try: foo("bar","baz",c=None) except ValueError as e: print ('Good, raised ValueError') ```
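On Python 3 there is a variant that also catches `None` passed as a keyword argument without any third-party module, by binding the call through `inspect.signature`. A sketch (the function name and sample values below are made up for illustration):

```python
import functools
import inspect

def reject_none(func):
    """Decorator that raises ValueError if any bound argument is None."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # bind() maps positional and keyword arguments onto parameter names,
        # so None is caught no matter how it was passed.
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if value is None:
                raise ValueError("argument %r is None" % name)
        return func(*args, **kwargs)
    return wrapper

@reject_none
def add_transaction(name, date, amount):
    return (name, date, amount)

print(add_transaction("rent", "2013-05-01", 100))

try:
    add_transaction("rent", date=None, amount=100)
except ValueError as e:
    print("raised:", e)
```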
``` if any(arg is None for arg in (name, date, amount, debit, user, category_id)): raise ValueError("I hate None") ``` You need to test `arg is None`, rather than just using `not`. With `not`, you'll end up raising an exception if any of the arguments are `False`, `0`, `[]`, etc, which is not what you want. @DSM's suggestion of `if None in (name, date...` works as well - it's up to you which you prefer. Side note: your function takes a lot of arguments. I wonder if you couldn't refactor it in some way - perhaps you could make a `Transaction` class that encapsulates this data, and change this method signature to `add_transaction(transaction)`.
How do you check if any of the arguments passed to function is None?
[ "python" ]
I'm very new to python and I have a directory structure like this: ``` root --child -----config.py example.py ``` In `example.py` I just tried: ``` import child ``` but that doesn't seem to be working. Where am I making the mistake?
Do you have a `__init__.py` file in `root/child/` directory? After creating this file you should be able to do this: ``` import child.config ``` or ``` from child import config ``` You can also import multiple modules from child directory like this: ``` from child import first, second, third ``` Read about modules and packages [here](http://docs.python.org/3/tutorial/modules.html#packages).
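The layout can be reproduced in a few lines to see it work: this writes a throwaway `child` package (with an `__init__.py`) into a temporary directory, puts that directory on `sys.path`, and imports `config` from it. The `SETTING = 42` content is invented purely for illustration:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "child")
os.mkdir(pkg)

# An empty __init__.py is what turns the directory into a package.
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "config.py"), "w") as f:
    f.write("SETTING = 42\n")

sys.path.insert(0, root)
importlib.invalidate_caches()  # make sure the new path entry is rescanned

from child import config
print(config.SETTING)
```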
If you want to import `config.py` by importing `child`, you need to define `child` as a package. To do so you need to create an `__init__.py` file in your `child` directory. See the docs about [packages](http://docs.python.org/2/tutorial/modules.html#packages)
How do I import other python code from a directory?
[ "python" ]
I'm developing a web app in Python where one of the use-cases is for a user to: * Upload a large file over HTTP POST and * Simultaneously download and display a response, which is a processed version of the file of a similar size. The client is developed by us in C++ but I would like to use HTTP. The server doesn't need the whole file to begin to generate its response, it can start processing the data once the first 250KBs or so has arrived. The latency between the upload start and the first pieces of the response should be as low as possible (for example within 100ms of what you might reach with raw sockets for example) Presumably it would be ideal to use chunked transfer encoding rather than multiple small HTTP requests? The length of the total request/response can't be known ahead of time but I suppose it could be split into multiple requests/responses of known size, is there a web server that would happily stream (rather than buffer + deliver) those chunks as they're being uploaded? I've heard twisted is good with chunked transfer encoding but I'd prefer to use a more conventional web framework if possible, especially for the rest of my application (which, outside of this use-case doesn't need anything fancy like this).
[WSGI](http://www.python.org/dev/peps/pep-3333/) supports this, I believe. Here we'll echo whatever they send us: ``` def application(environ, start_response): content_type = environ.get('CONTENT_TYPE', 'text/plain') headers = [('Content-Type', content_type)] if 'CONTENT_LENGTH' in environ: headers.append(('Content-Length', environ['CONTENT_LENGTH'])) start_response('200 OK', headers) input = environ.get('wsgi.input') if input is None: yield '' return while True: datum = input.read(4096) # or so if not datum: return yield datum ``` Web servers may elect to use each `yield` as a `Transfer-Encoding: chunked` chunk, though they are not required to.
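The generator can be exercised without any web server by handing it a fake WSGI environ whose `wsgi.input` is an in-memory stream. The version below is a Python 3 adaptation of the echo app above (WSGI on Python 3 requires yielding bytes), stripped of the header bookkeeping to keep the sketch short:

```python
import io

# Python 3 version of the echo app: stream the request body back out in chunks.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'application/octet-stream')])
    stream = environ.get('wsgi.input')
    if stream is None:
        yield b''
        return
    while True:
        datum = stream.read(4096)  # or so
        if not datum:
            return
        yield datum

# Drive it by hand with an in-memory request body.
captured = {}
def start_response(status, headers):
    captured['status'] = status

environ = {'wsgi.input': io.BytesIO(b'x' * 10000)}
body = b''.join(application(environ, start_response))
print(captured['status'], len(body))
```

Each `yield` here corresponds to one potential chunk on the wire; joining them reassembles the echoed body.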
Have a look at: <https://github.com/jakobadam/plupload-backends> which has a Python WSGI implementation for [plupload](http://plupload.com/). It works (IIRC) by combining multiple large requests into one file, which may or may not use chunked transfer encoding.
Python web framework capable of chunked transfer encoding?
[ "python", "http", "chunked-encoding" ]
Suppose I have the table below ( TestTable ): > **ID , SystemID , UserID** ( all columns are of type int ) I want to write a stored procedure that accepts a string parameter whose value looks like **((5 and 6) or 7)** and returns all users produced by the queries below: ``` Select * From TestTable Where SystemID = 5 Intersect Select * From TestTable Where SystemID = 6 ``` and the above result is ***union***ed with ``` Select * From TestTable Where SystemID = 7 ``` The SP must accept any combination like **(((4 or 5) and 6) or 8)** , **(((5 or 9) or 8) and 10)** .. etc ***How can I implement that?*** **Update:** my issue isn't how to split the string, but how to build dynamic SQL that implements its logical meaning
``` DECLARE @param NVARCHAR(MAX) = N'4 or 5 and 6 or 8 and 10'; DECLARE @sql NVARCHAR(MAX) = N'', @q NVARCHAR(MAX) = N'SELECT UserID FROM dbo.TestTable WHERE SystemID = '; SELECT @sql = @q + REPLACE(REPLACE(@param, ' or ', ' UNION ALL ' + @q), ' and ', ' INTERSECT ' + @q); PRINT @sql; -- EXEC sp_executesql @sql; ``` Results: ``` SELECT UserID FROM dbo.TestTable WHERE SystemID = 4 UNION ALL SELECT UserID FROM dbo.TestTable WHERE SystemID = 5 INTERSECT SELECT UserID FROM dbo.TestTable WHERE SystemID = 6 UNION ALL SELECT UserID FROM dbo.TestTable WHERE SystemID = 8 INTERSECT SELECT UserID FROM dbo.TestTable WHERE SystemID = 10 ``` Now, whether this query yields the results you're actually after, I have no idea, but I believe it meets the requirements as stated.
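Since the transformation is pure string substitution, it can be prototyped and verified outside of T-SQL first. A Python mirror of the nested `REPLACE` logic above, using the same parameter and query text:

```python
param = "4 or 5 and 6 or 8 and 10"
q = "SELECT UserID FROM dbo.TestTable WHERE SystemID = "

# Mirror of: @q + REPLACE(REPLACE(@param, ' or ', ' UNION ALL ' + @q),
#                                          ' and ', ' INTERSECT ' + @q)
sql = q + param.replace(" or ", " UNION ALL " + q).replace(" and ", " INTERSECT " + q)
print(sql)
```

The output matches the query printed by the T-SQL version, which makes it easy to check precedence behaviour on new inputs before touching the stored procedure.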
Try this... I have slightly changed Aaron Bertrand's query. ``` DECLARE @param NVARCHAR(MAX) = N'(((4 or 5) and 6) or 8)'; DECLARE @sql NVARCHAR(MAX) = N'', @q NVARCHAR(MAX) = N'SELECT * FROM dbo.TestTable WHERE SystemID = ', @paranth NVARCHAR(100) = substring(@param,0,PATINDEX('%[0-9]%',@param)); set @param =substring(@param,PATINDEX('%[0-9]%',@param),len(@param)-PATINDEX('%[0-9]%',@param)) SELECT @sql = @q + REPLACE(REPLACE(@param, ' or ', ' UNION ALL ' + @q), ' and ', ' INTERSECT ' + @q); set @sql=@paranth+@sql if (isnull(@paranth,'')<>'') set @sql=@sql+')' PRINT @sql; ```
How can I implement Stored Procedure that accept dynamic search criteria?
[ "sql", "sql-server-2008", "stored-procedures" ]
Hi, I have a table which asks a person for a yes/no answer; however, some have said both yes and no, i.e.: ``` person--------------status 1-------------------yes 2-------------------yes 3-------------------yes 3-------------------no 4-------------------no 5-------------------yes 5-------------------no ``` where persons 3 and 5 have two rows, one for 'yes' and one for 'no'. What I want is to find people with both answers and delete the row which says 'no', so I end up with: ``` person--------------status 1-------------------yes 2-------------------yes 3-------------------yes 4-------------------no 5-------------------yes ``` My logic is failing me and I can only get: ``` delete from table where status = 'no' and where person in (select person from table where status = 'yes') ``` but of course this deletes both types. Anyone have any suggestions?
The expression `and where` is not SQL. Try this: ``` delete from table where status = 'no' and person in (select person from table where status = 'yes') ``` The logic looks correct to me.
It does **not** delete both, once you fix the syntax error of having an extra `where`. It says delete where status = 'no' **`AND`** another condition is true. Both must be true to delete the record. If the record has 'yes', then clearly the conditions cannot both be true for that record since one of them is false. So let's try this out in DB2: ``` create table survey (person smallint ,answer varchar(5) ); insert into survey values (1, 'yes'), (2, 'yes'), (3, 'yes'), (3, 'no'), (4, 'no'), (5, 'yes'), (5, 'no'); ``` This gives me ``` person answer ------ ------ 1 yes 2 yes 3 yes 3 no 4 no 5 yes 5 no ``` So, now the test ``` delete from survey where answer = 'no' and person in (select person from survey where answer = 'yes' ); ``` My table now has ``` person answer ------ ------ 1 yes 2 yes 3 yes 4 no 5 yes ``` It clearly did **not** delete both types, and has the desired result. I am using DB2 for i 7.1, and perhaps others are using DB2 LUW. But I find it very difficult to imagine that the fundamental logic of how basic SQL works (such as `AND`) could work drastically differently across different IBM platforms.
use sql to delete a person from a list with a conflicting field
[ "sql", "db2" ]
Without using library functions, how can you print any number in words in Python? There are some answers that use library functions, but I want the core code. ``` Like: 12345 = "twelve thousand three hundred and forty five" 97835200 = "Nine crore seventy eight lakh thirty five thousand two hundred" 230100 = "Two lakh thirty thousand one hundred" ```
Code for this: --- ``` >>>def handel_upto_99(number): predef={0:"zero",1:"one",2:"two",3:"three",4:"four",5:"five",6:"six",7:"seven",8:"eight",9:"nine",10:"ten",11:"eleven",12:"twelve",13:"thirteen",14:"fourteen",15:"fifteen",16:"sixteen",17:"seventeen",18:"eighteen",19:"nineteen",20:"twenty",30:"thirty",40:"forty",50:"fifty",60:"sixty",70:"seventy",80:"eighty",90:"ninety",100:"hundred",100000:"lakh",10000000:"crore",1000000:"million",1000000000:"billion"} if number in predef.keys(): return predef[number] else: return predef[(number/10)*10]+' '+predef[number%10] ``` --- ``` >>>def return_bigdigit(number,devideby): predef={0:"zero",1:"one",2:"two",3:"three",4:"four",5:"five",6:"six",7:"seven",8:"eight",9:"nine",10:"ten",11:"eleven",12:"twelve",13:"thirteen",14:"fourteen",15:"fifteen",16:"sixteen",17:"seventeen",18:"eighteen",19:"nineteen",20:"twenty",30:"thirty",40:"forty",50:"fifty",60:"sixty",70:"seventy",80:"eighty",90:"ninety",100:"hundred",1000:"thousand",100000:"lakh",10000000:"crore",1000000:"million",1000000000:"billion"} if devideby in predef.keys(): return predef[number/devideby]+" "+predef[devideby] else: devideby/=10 return handel_upto_99(number/devideby)+" "+predef[devideby] ``` --- ``` >>>def mainfunction(number): dev={100:"hundred",1000:"thousand",100000:"lakh",10000000:"crore",1000000000:"billion"} if number is 0: return "Zero" if number<100: result=handel_upto_99(number) else: result="" while number>=100: devideby=1 length=len(str(number)) for i in range(length-1): devideby*=10 if number%devideby==0: if devideby in dev: return handel_upto_99(number/devideby)+" "+ dev[devideby] else: return handel_upto_99(number/(devideby/10))+" "+ dev[devideby/10] res=return_bigdigit(number,devideby) result=result+' '+res if devideby not in dev: number=number-((devideby/10)*(number/(devideby/10))) number=number-devideby*(number/devideby) if number <100: result = result + ' '+ handel_upto_99(number) return result ``` --- Copy the three function one by one and paste 
into your Python shell. After that, run them like this: ANSWER: ``` >>>mainfunction(12345) ' twelve thousand three hundred forty five' >>>mainfunction(0) 'Zero' >>>mainfunction(100) 'one hundred' >>>mainfunction(40230534) ' four crore two lakh thirty thousand five hundred thirty four' ```
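The lookup-table idea in the answer above can be expressed much more compactly. Here is a self-contained sketch covering 0 to 99; the lakh/crore tiers in the full answer follow the same divide-and-look-up pattern applied to larger place values:

```python
UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def words_upto_99(n):
    """Spell out an integer in [0, 99]."""
    if n < 20:
        return UNITS[n]          # 0-19 are irregular, so table lookup
    tens, unit = divmod(n, 10)   # e.g. 45 -> (4, 5)
    return TENS[tens] + (" " + UNITS[unit] if unit else "")

print(words_upto_99(45))
print(words_upto_99(90))
print(words_upto_99(13))
```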
You can use the third-party library `num2word`, available in Python: ``` num2word.to_card(1e25) 'ten septillion, one billion, seventy-three million, seven hundred and forty-one ...' ``` This will avoid your long code and you can use it directly.
How can you print any number in words in Python?
[ "python", "python-2.7" ]