At the moment I am taking an online course for Python, only about 1/3 of the way through, and I decided to try and make something with what I've learnt so far. Running into an error now though. I am creating a text-based adventure sort of game in a house. Every room is a separate function. EX: ``` def hallway(): hallway_direction = raw_input('blahblah') if hallway_direction == 'n': living_room() ``` Although I have one room where you need a torch to enter. I used a dictionary for holding any values for rooms; here is what I have. ``` global rooms rooms = {} rooms['first_room'] = {'note' : False} rooms['old_door'] = {'boots' : False} rooms['first_again'] = {'torch' : False} rooms['first_again'] = {'seen' : False} ``` In another room it sets torch to true, but the problem I'm having is that if you don't have the torch I need it to take you back to the hall. ``` def fancy_door(): raw_input('You open the door, the inside is pitch black. You need a source of light before you can enter.') if rooms['first_again']['torch']: raw_input('You light the torch and step inside, the room is bare, only a table with a ring in the center.') choice5_r = raw_input('Do you take the ring? Y/N ("back" to leave)') choice5_r = choice5_r.lower() if choice5_r == 'y': raw_input('Some text here') darkness() elif choice5_r == 'n': raw_input('You leave the ring as it is.') fancy_door() elif choice5_r == 'back': hall() else: raw_input('Not a valid option') fancy_door() else: hall() ``` When I run this however I get this error: ``` Traceback (most recent call last): File "<stdin>", line 247, in <module> File "<stdin>", line 23, in first_room File "<stdin>", line 57, in hall File "<stdin>", line 136, in fancy_door KeyError: 'torch' ``` On line 247 it calls first\_room() which works up until this point. 23 calls hall() which works until this point. 57 calls the fancy\_door(), which should be working; it looks the same as the other door functions and they work fine.
line 136 is the line above "if rooms['first\_again']['torch']:" If the problem isn't here I can post the entirety of the code on here or pastebin; I didn't only because it is 230 lines long. If someone could help me out I'd be very grateful. Also, please excuse the bad code, I know it probably doesn't follow proper conventions, but like I said, I'm new to Python, and programming in general. This is the first thing I've ever written. Thanks in advance!
In your definition of the global variables, you define rooms['first\_again'] twice. Each time you assign a value to an element of a dict: ``` rooms['first_again'] = #something ``` you overwrite what was there previously. It's saying ``` KeyError: 'torch' ``` because that object no longer has an element called torch. Try changing it to: ``` rooms['first_again'] = {'torch' : False, 'seen' : False} ``` Or if you need to add values to that element later, you can do: ``` rooms['first_again'] = {'torch' : False} rooms['first_again']['seen'] = False ```
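A minimal sketch of the overwrite behaviour described here, using a toy dict with the same keys as in the question:

```python
rooms = {}
rooms['first_again'] = {'torch': False}
rooms['first_again'] = {'seen': False}   # replaces the whole previous dict

# The 'torch' key is gone, which is exactly what raises the KeyError:
print('torch' in rooms['first_again'])   # False

# One assignment with both flags keeps both keys:
rooms['first_again'] = {'torch': False, 'seen': False}
print(sorted(rooms['first_again']))      # ['seen', 'torch']
```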
You've assigned the `rooms['first_again']` twice. ``` rooms['first_again'] = {'torch' : False} rooms['first_again'] = {'seen' : False} ``` Maybe it should be: ``` rooms['first_again'] = {} rooms['first_again']['torch'] = False rooms['first_again']['seen'] = False ```
Python, can't find function
[ "", "python", "function", "dictionary", "traceback", "" ]
I am using a python script to automate a process involving batch files. These are batch files that are used for other applications and I am not allowed to edit them. At the end of the batch file, it prompts the following: "Press any key to continue ..." How do I use python to recognize when this prompt appears, and how do I respond to it? I want to be able to close the file so I can run the next batch file. Currently I have found the following solution, but it's terrible and makes me feel dirty inside: ``` #Run the batch file with parameter DIABFile subprocess.Popen([path + '\\' + batchFile, path + '\\' + DIABFile]) #Sit here like an idiot until I'm confident the batch file is finished time.sleep(4) #Press any key virtual_keystrokes.press('enter') ``` Any ideas? ### Attempt #1 ``` p = subprocess.Popen([path + '\\' + batchFile, path + '\\' + DIABFile], bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE) while p.poll() is None: line = p.stdout.readline() print(line) if line.startswith('Press any key to continue'): p.communicate('\r\n') ``` Resulted in the following output and error: ``` b'\r\n' Traceback (most recent call last): File "C:\workspace\Perform_QAC_Check\Perform_QAC_Check.py", line 341, in <module> main() File "C:\workspace\Perform_QAC_Check\Perform_QAC_Check.py", line 321, in main run_setup_builderenv(sandboxPath, DIABFile) File "C:\workspace\Perform_QAC_Check\Perform_QAC_Check.py", line 126, in run_setup_builderenv if line.startswith('Press any key to continue'): TypeError: startswith first arg must be bytes or a tuple of bytes, not str The process tried to write to a nonexistent pipe. ``` The part that seemed weirdest to me was that the startswith first arg must be bytes or a tuple of bytes, not str. I looked up the documentation and it definitely should be a string? 
[tutorial of startswith](http://www.tutorialspoint.com/python/string_startswith.htm) So I looked online and found [this](http://adventuresofacomputerscientist.blogspot.com/2011/12/using-nltk-with-python-3-2_16.html) little bit. > The error message seems to be a bug in Python, as it is exactly the other way around. But still, no problems here, add after line #75 in indian.py ``` try: line = line.decode() except AttributeError: pass ``` And so I did. ### Attempt #2 ``` p = subprocess.Popen([path + '\\' + batchFile, path + '\\' + DIABFile], bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE) while p.poll() is None: line = p.stdout.readline() print(line) try: line = line.decode() if line.startswith('Press any key to continue'): p.communicate('\r\n') except AttributeError: pass ``` Resulted in the following output: ``` b'\r\n' b'Build Environment is created.\r\n' b'\r\n' b'Please Refer to the directory: C:/directory\r\n' b'\r\n' ``` And then it hangs there... That is the last output before the "Please press any key to continue" should show up, but it never does. ### Notes I have since taken the second script and asked it to find "Please Refer", which it does. Unfortunately, then the script hangs again at the line: ``` p.communicate('\r\n') ``` Ending the program, again, prints the error: ``` The process tried to write to a nonexistent pipe. ``` Which I believe is related to [this](http://www.gossamer-threads.com/lists/python/python/133156) bug. I can't imagine what I'm trying to do is THAT out of the ordinary. Since this is seemingly a little more complicated than expected I would like to say I am using XP and Python version 3.3.
Something like the following should work (note that on Python 3, `p.stdout.readline()` returns bytes, so compare against a bytes literal and write bytes): ``` p = subprocess.Popen([path + '\\' + batchFile, path + '\\' + DIABFile], bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE) while p.poll() is None: line = p.stdout.readline() if line.startswith(b'Press any key to continue'): p.communicate(b'\r\n') ```
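The bytes-vs-str mismatch from the traceback can be reproduced without a batch file. In this sketch a child Python process stands in for the batch file (an assumption; any console program would do), and the prompt is matched with a bytes literal:

```python
import subprocess
import sys

# A child Python process stands in for the batch file and prints the prompt.
p = subprocess.Popen(
    [sys.executable, '-c', "print('Press any key to continue . . .')"],
    stdout=subprocess.PIPE)

line = p.stdout.readline()   # on Python 3 this is bytes, not str
assert isinstance(line, bytes)

# Compare against a bytes literal (or decode the line first):
if line.startswith(b'Press any key'):
    print('prompt detected')
p.wait()
```

Passing `universal_newlines=True` (text mode) to `Popen` is the other way around the problem: the pipe then yields `str` and the original `startswith('Press...')` comparison works unchanged.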
You could parse the output of the subprocess and match on the "Press any key to continue" phrase to continue on. See this thread: [read subprocess stdout line by line](https://stackoverflow.com/questions/2804543/read-subprocess-stdout-line-by-line) especially what he posted as Update2. It might look like this (using a bytes sentinel and bytes literal, since on Python 3 the pipe yields bytes): ``` import subprocess proc = subprocess.Popen([path + '\\' + batchFile, path + '\\' + DIABFile], stdout=subprocess.PIPE) for line in iter(proc.stdout.readline, b''): if line.startswith(b'Press any key to continue'): break ```
Python/Batch: Use Python to press any key to continue
[ "", "python", "batch-file", "python-3.x", "automation", "" ]
I have a table called `Company` with a single column `CompanyNames`. I have 17 company names in this column as: ``` Bahria Town Projacs/TPMS Habib Rafique PCA Wilsons MLDB All Orient Extreme Engineering Prime Engineering Method Rousing co Atlas Pakistan Bemsol EDL Deep Well Emaad In Situ ``` Now I want to insert an empty row at the top of these company names. Can you guys please help me write an SQL query which will solve my problem? Thanks.
``` insert into Company (CompanyName) values ('') ```
Create another table with columns `CompanyName` and a row number. ``` update Company1 set id=null WHERE id % 2 = 1; update Company2 set id=null WHERE id % 2 != 1; ``` This uses even/odd values to set NULL. Maybe this will help, if you get what I am saying.
INSERT empty field in a table using sql query
[ "", "sql", "" ]
Following [this](http://zguide.zeromq.org/py%3ataskwork) example in the ØMQ docs, I'm trying to create a simple receiver. The example uses an infinite loop. Everything works just fine. However, on MS Windows, when I hit CTRL+C to raise KeyboardInterrupt, the loop does not break. It seems that the `recv()` method somehow ignores the exception. However, I'd love to exit the process by hitting CTRL+C instead of killing it. Is that possible?
A `zmq.Poller` object seems to help: ``` def poll_socket(socket, timetick = 100): poller = zmq.Poller() poller.register(socket, zmq.POLLIN) # wait up to 100msec try: while True: obj = dict(poller.poll(timetick)) if socket in obj and obj[socket] == zmq.POLLIN: yield socket.recv() except KeyboardInterrupt: pass # Escape while loop if there's a keyboard interrupt. ``` Then you can do things like: ``` for message in poll_socket(socket): handle_message(message) ``` and the for-loop will automatically terminate on Ctrl-C. It looks like the translation from Ctrl-C to a Python KeyboardInterrupt only happens when the interpreter is active and Python has not yielded control to low-level C code; the pyzmq `recv()` call apparently blocks while in low-level C code, so Python never gets a chance to issue the KeyboardInterrupt. But if you use `zmq.Poller` then it will stop at a timeout and give the interpreter a chance to issue the KeyboardInterrupt after the timeout is complete.
In response to @Cyclone's request, I suggest the following as a possible solution: ``` import signal signal.signal(signal.SIGINT, signal.SIG_DFL) # any pyzmq-related code, such as `reply = socket.recv()` ```
Stop pyzmq receiver by KeyboardInterrupt
[ "", "python", "loops", "break", "termination", "pyzmq", "" ]
I am doing a speed test with three functions, readFile, prepDict and test. Test is simply prepDict(readFile). I am then running these many times with the timeit module. When I increase the number of loops by a factor of 10, function prepDict takes ~100 times longer, however function test which uses function prepDict only increases by 10. Here are the functions and tests. ``` def readFile(filepath): tempDict = {} file = open(filepath,'rb') for line in file: split = line.split('\t') tempDict[split[1]] = split[2] return tempDict def prepDict(tempDict): for key in tempDict.keys(): tempDict[key+'a'] = tempDict[key].upper() del tempDict[key] return tempDict def test(): prepDict(readFile('two.txt')) if __name__=='__main__': from timeit import Timer t = Timer(lambda: readFile('two.txt')) print 'readFile(10000): ' + str(t.timeit(number=10000)) tempDict = readFile('two.txt') t = Timer(lambda: prepDict(tempDict)) print 'prepDict (10000): ' + str(t.timeit(number=10000)) t = Timer(lambda: test()) print 'prepDict(readFile) (10000): ' + str(t.timeit(number=10000)) t = Timer(lambda: readFile('two.txt')) print 'readFile(100000): ' + str(t.timeit(number=100000)) tempDict = readFile('two.txt') t = Timer(lambda: prepDict(tempDict)) print 'prepDict (100000): ' + str(t.timeit(number=100000)) t = Timer(lambda: test()) print 'prepDict(readFile) (100000): ' + str(t.timeit(number=100000)) ``` The results I get are as follows: ``` readFile(10000): 0.61602914474 prepDict (10000): 0.200615847469 prepDict(readFile) (10000): 0.609288647286 readFile(100000): 5.91858320729 prepDict (100000): 18.8842101717 prepDict(readFile) (100000): 6.45040039665 ``` And I get similar results if I run it many times. Why does prepDict increase by a factor of ~100, while prepDict(readFile) only increases by a factor of 10, even though it is using the prepDict function? two.txt is a tab-delimited file with these data points: ``` Item Title Hello2 Item Desc Testing1232 Item Release 2011-02-03 ```
The problem here is that your `prepDict` function expands the inputs. Each time you call it in sequence, it has more data to deal with. And that data grows linearly, so the 10000th run takes about 10000x as long as the first.\* When you call `test`, it's creating a new dict each time, so the time is constant. You can see this pretty easily by changing the `prepDict` tests to run on a new copy of the dict each time: ``` t = Timer(lambda: prepDict(tempDict.copy())) ``` --- By the way, your `prepDict` is not actually growing exponentially\*\* with `number`, just quadratically. In general, when something is growing super-linearly, and you want to estimate the algorithmic cost, you really need to get more than two data points. --- \* That's not *quite* true—it only starts to grow linearly once the time taken for the string and hashing operations (which grow linearly) starts to swamp the time taken for every other operation (which are all constant). \*\* You didn't mention anything about exponential growth here, but in [your previous question](https://stackoverflow.com/questions/17180277/why-do-speed-tests-of-functions-not-add-when-nested-python/17180641#17180641) you did, so you may have made the same unwarranted assumption in your real problem.
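The key-growth effect is easy to see in miniature. This sketch uses a hypothetical three-key dict standing in for the file contents (and wraps `keys()` in `list()` so it also runs on Python 3):

```python
def prepDict(tempDict):
    # Same transformation as in the question: each call renames every key
    # by appending 'a', so every key gets one character longer per call.
    for key in list(tempDict.keys()):
        tempDict[key + 'a'] = tempDict[key].upper()
        del tempDict[key]
    return tempDict

d = {'Title': 'Hello2', 'Desc': 'Testing1232', 'Release': '2011-02-03'}
for _ in range(1000):
    prepDict(d)

# After 1000 calls every key has grown by 1000 characters, so later calls
# hash and copy ever-longer strings -- total work is quadratic in the
# number of calls.
print(sorted(len(k) for k in d))   # [1004, 1005, 1007]
```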
Your calls to `prepDict` are not happening in an isolated environment. Each call to `prepDict` modifies `tempDict` -- the keys get a little longer each time. So after 10\*\*5 calls to `prepDict` the keys in `prepDict` are rather large strings. You can see this (copiously) if you put a print statement in `prepDict`: ``` def prepDict(tempDict): for key in tempDict.keys(): tempDict[key+'a'] = tempDict[key].upper() del tempDict[key] print(tempDict) return tempDict ``` The way to fix this is to make sure each call to `prepDict` -- or more generally, the statement you are timing -- does not affect the next call (or statement) you are timing. abarnert has already shown the solution: `prepDict(tempDict.copy())`. By the way, you could use a `for-loop` to cut down on the code duplication: ``` import timeit import collections if __name__=='__main__': Ns = [10**4, 10**5] timing = collections.defaultdict(list) for N in Ns: timing['readFile'].append(timeit.timeit( "readFile('two.txt')", "from __main__ import readFile", number = N)) timing['prepDict'].append(timeit.timeit( "prepDict(tempDict.copy())", "from __main__ import readFile, prepDict; tempDict = readFile('two.txt')", number = N)) timing['test'].append(timeit.timeit( "test()", "from __main__ import test", number = N)) print('{k:10}: {N[0]:7} {N[1]:7} {r}'.format(k='key', N=Ns, r='ratio')) for key, t in timing.iteritems(): print('{k:10}: {t[0]:0.5f} {t[1]:0.5f} {r:>5.2f}'.format(k=key, t=t, r=t[1]/t[0])) ``` yields timings such as ``` key : 10000 100000 ratio test : 0.11320 1.12601 9.95 prepDict : 0.01604 0.16167 10.08 readFile : 0.08977 0.91053 10.14 ```
Speed Test causing weird behavior. Multiplying time spent by 100 in one instance, only 10 in another
[ "", "python", "performance", "unit-testing", "" ]
I'm running a large query in a python script against my postgres database using psycopg2 (I upgraded to version 2.5). After the query is finished, I close the cursor and connection, and even run gc, but the process still consumes a ton of memory (7.3gb to be exact). Am I missing a cleanup step? ``` import psycopg2 conn = psycopg2.connect("dbname='dbname' user='user' host='host'") cursor = conn.cursor() cursor.execute("""large query""") rows = cursor.fetchall() del rows cursor.close() conn.close() import gc gc.collect() ```
I ran into a similar problem and after a couple of hours of blood, sweat and tears, found the answer simply requires the addition of one parameter. Instead of ``` cursor = conn.cursor() ``` write ``` cursor = conn.cursor(name="my_cursor_name") ``` or simpler yet ``` cursor = conn.cursor("my_cursor_name") ``` The details are found at <http://initd.org/psycopg/docs/usage.html#server-side-cursors> I found the instructions a little confusing in that I thought I'd need to rewrite my SQL to include "DECLARE my\_cursor\_name ...." and then a "FETCH count 2000 FROM my\_cursor\_name", but it turns out psycopg does that all for you under the hood if you simply overwrite the "name=None" default parameter when creating a cursor. The suggestion above of using fetchone or fetchmany doesn't resolve the problem since, if you leave the name parameter unset, psycopg will by default attempt to load the entire query into RAM. The only other thing you may need to do (besides declaring a name parameter) is change the cursor.itersize attribute from the default 2000 to say 1000 if you still have too little memory.
Please see the *next answer* by @joeblog for the better solution. --- First, you shouldn't need all that RAM in the first place. What you should be doing here is fetching *chunks* of the result set. Don't do a `fetchall()`. Instead, use the much more efficient `cursor.fetchmany` method. See [the psycopg2 documentation](http://initd.org/psycopg/docs/cursor.html). Now, the explanation for why it isn't freed, and why that isn't a memory leak in the formally correct use of that term. Most processes don't release memory back to the OS when it's freed, they just make it available for re-use elsewhere in the program. Memory may only be released to the OS if the program can compact the remaining objects scattered through memory. This is only possible if indirect handle references are used, since otherwise moving an object would invalidate existing pointers to the object. Indirect references are rather inefficient, especially on modern CPUs where chasing pointers around does horrible things to performance. What usually ends up happening unless extra caution is exercised by the program is that each large chunk of memory allocated with `brk()` ends up with a few small pieces still in use. The OS can't tell whether the program considers this memory still in use or not, so it can't just claim it back. Since the program doesn't tend to access the memory the OS will usually swap it out over time, freeing physical memory for other uses. This is one of the reasons you should have swap space. It's possible to write programs that hand memory back to the OS, but I'm not sure that you can do it with Python.
See also: * [python - memory not being given back to kernel](https://stackoverflow.com/q/11957539/398670) * [Why doesn't memory get released to system after large queries (or series of queries) in django?](https://stackoverflow.com/questions/5494178/why-doesnt-memory-get-released-to-system-after-large-queries-or-series-of-quer) * [Releasing memory in Python](https://stackoverflow.com/questions/15455048/releasing-memory-in-python) So: this isn't actually a memory *leak*. If you do something else that uses lots of memory, the process shouldn't grow much if at all, it'll re-use the previously freed memory from the last big allocation.
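The chunked-fetch idea can be sketched with the stdlib `sqlite3` module, since the DB-API surface is the same — with psycopg2, remember that a *named* (server-side) cursor is what actually stops the driver from loading the whole result set up front:

```python
import sqlite3

# In-memory stand-in for the large Postgres table.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE big (n INTEGER)')
conn.executemany('INSERT INTO big VALUES (?)', [(i,) for i in range(10000)])

cursor = conn.cursor()
cursor.execute('SELECT n FROM big')

total = 0
while True:
    rows = cursor.fetchmany(1000)   # at most 1000 rows in memory at a time
    if not rows:
        break
    total += sum(n for (n,) in rows)

print(total)   # 49995000 == sum(range(10000))
cursor.close()
conn.close()
```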
psycopg2 leaking memory after large query
[ "", "python", "postgresql", "psycopg2", "" ]
How do I search and replace text in a file using Python 3? Here is my code: ``` import os import sys import fileinput print("Text to search for:") textToSearch = input("> ") print("Text to replace it with:") textToReplace = input("> ") print("File to perform Search-Replace on:") fileToSearch = input("> ") tempFile = open(fileToSearch, 'r+') for line in fileinput.input(fileToSearch): if textToSearch in line: print('Match Found') else: print('Match Not Found!!') tempFile.write(line.replace(textToSearch, textToReplace)) tempFile.close() input('\n\n Press Enter to exit...') ``` Input file: ``` hi this is abcd hi this is abcd This is dummy text file. This is how search and replace works abcd ``` When I search and replace 'ram' by 'abcd' in the above input file, it works like a charm. But when I do it vice versa, i.e., replacing 'abcd' by 'ram', some junk characters are left at the end. Replacing 'abcd' by 'ram': ``` hi this is ram hi this is ram This is dummy text file. This is how search and replace works rambcd ```
[`fileinput`](https://docs.python.org/3/library/fileinput.html) already supports inplace editing. It redirects `stdout` to the file in this case: ``` #!/usr/bin/env python3 import fileinput with fileinput.FileInput(filename, inplace=True, backup='.bak') as file: for line in file: print(line.replace(text_to_search, replacement_text), end='') ```
As [pointed out by michaelb958](https://stackoverflow.com/questions/17140886/how-to-search-and-replace-text-in-a-file#comment24808323_17141040), you cannot replace in place with data of a different length because this will put the rest of the sections out of place. I disagree with the other posters suggesting you read from one file and write to another. Instead, I would read the file into memory, fix the data up, and then write it out to the same file in a separate step. ``` # Read in the file with open('file.txt', 'r') as file: filedata = file.read() # Replace the target string filedata = filedata.replace('abcd', 'ram') # Write the file out again with open('file.txt', 'w') as file: file.write(filedata) ``` This works well unless you've got a massive file which is too big to load into memory in one go, or you are concerned about potential data loss if the process is interrupted during the second step, in which you write data to the file.
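The symptom and the fix can be verified end to end with a throwaway file (a sketch using `tempfile`; the sample text mirrors the question):

```python
import os
import tempfile

# Create a throwaway file containing sample text with the longer token.
fd, path = tempfile.mkstemp(suffix='.txt')
with os.fdopen(fd, 'w') as f:
    f.write('hi this is abcd\nsearch and replace works abcd\n')

# Read everything, replace, then rewrite the file in a separate step --
# the whole file is rewritten, so shrinking the text leaves no leftovers.
with open(path, 'r') as f:
    data = f.read()
with open(path, 'w') as f:
    f.write(data.replace('abcd', 'ram'))

with open(path, 'r') as f:
    print(f.read())   # no trailing 'bcd' junk, unlike in-place r+ writes
os.remove(path)
```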
How to search and replace text in a file
[ "", "python", "python-3.x", "string", "file", "replace", "" ]
``` if myval == 0: nyval=1 if myval == 1: nyval=0 ``` Is there a better way to do a toggle in Python, like `nyval = not myval`?
Use the [`not` boolean operator](http://docs.python.org/2/reference/expressions.html#boolean-operations): ``` nyval = not myval ``` `not` returns a *boolean* value (`True` or `False`): ``` >>> not 1 False >>> not 0 True ``` If you must have an integer, cast it back: ``` nyval = int(not myval) ``` However, the python `bool` type is a subclass of `int`, so this may not be needed: ``` >>> int(not 0) 1 >>> int(not 1) 0 >>> not 0 == 1 True >>> not 1 == 0 True ```
In python, [`not`](http://docs.python.org/3/reference/expressions.html#boolean-operations) is a boolean operator which gets the opposite of a value: ``` >>> myval = 0 >>> nyvalue = not myval >>> nyvalue True >>> myval = 1 >>> nyvalue = not myval >>> nyvalue False ``` And `True == 1` and `False == 0` (if you need to convert it to an integer, you can use `int()`)
How to "negate" value: if true return false, if false return true?
[ "", "python", "boolean", "negate", "" ]
I want to use a condition in a `for` loop in a succinct, clean way. Can I rewrite this ``` >>> for m in (m for m in range(10) if m < 5): ... print m ``` to avoid the doubled `for m in ...`, and still have a nice one-line `for`? ``` >>> for m in range(10) if m < 5: print m File "<stdin>", line 1 for m in range(10) if m < 5: print m ^ ``` This is what I *don't* want: ``` >>> for m in range(10): ... if m < 5: ... print m ``` I use python2.7 **Edit** `range(10)` is just a toy example. A real use case would include, for instance ``` for m in items if any(cond(m) for cond in conditions) ``` or ``` for m in items if cond1(m) and cond2(m) or cond3(m) ```
If we assume that you actually meant to do something more complex that would actually require the `if` statement, what you should consider doing is defining a generator to produce the relevant sequence. e.g. instead of: ``` for m in candidates: if isvalid(m): ... do something ... ``` Write: ``` def valid_candidates(candidates): for m in candidates: if isvalid(m): yield m for m in valid_candidates(candidates): ... do something ... ``` That way the complex condition is replaced by a descriptive name and the for loop is as simple as it can possibly be. Also, you can test the loop logic separately from the processing logic, which you cannot if you mix the two together.
> This is what I don't want: ``` >>> for m in range(10): ... if m < 5: ... print m ``` That's fine, you could also do something like this: ``` >>> from functools import partial >>> from operator import gt >>> for m in filter(partial(gt, 5), range(10)): print m 0 1 2 3 4 ```
Conditional generator in `for` loop
[ "", "python", "python-2.7", "" ]
What's the difference between: ``` Maand['P_Sanyo_Gesloten'] Out[119]: Time 2012-08-01 00:00:11 0 2012-08-01 00:05:10 0 2012-08-01 00:10:11 0 2012-08-01 00:20:10 0 2012-08-01 00:25:10 0 2012-08-01 00:30:09 0 2012-08-01 00:40:10 0 2012-08-01 00:50:09 0 2012-08-01 01:05:10 0 2012-08-01 01:10:10 0 2012-08-01 01:15:10 0 2012-08-01 01:25:10 0 2012-08-01 01:30:10 0 2012-08-01 01:35:09 0 2012-08-01 01:40:10 0 ... 2012-08-30 22:35:09 0 2012-08-30 22:45:10 0 2012-08-30 22:50:09 0 2012-08-30 22:55:10 0 2012-08-30 23:00:09 0 2012-08-30 23:05:10 0 2012-08-30 23:10:09 0 2012-08-30 23:15:10 0 2012-08-30 23:20:09 0 2012-08-30 23:25:10 0 2012-08-30 23:35:09 0 2012-08-30 23:40:10 0 2012-08-30 23:45:09 0 2012-08-30 23:50:10 0 2012-08-30 23:55:11 0 Name: P_Sanyo_Gesloten, Length: 7413, dtype: int64 ``` And ``` Maand[[1]] Out[120]: <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 7413 entries, 2012-08-01 00:00:11 to 2012-08-30 23:55:11 Data columns (total 1 columns): P_Sanyo_Gesloten 7413 non-null values dtypes: int64(1) ``` How can I get a column by its index number, and not by an index string?
One is a column (aka Series), while the other is a DataFrame: ``` In [1]: df = pd.DataFrame([[1,2], [3,4]], columns=['a', 'b']) In [2]: df Out[2]: a b 0 1 2 1 3 4 ``` The column 'b' (aka Series): ``` In [3]: df['b'] Out[3]: 0 2 1 4 Name: b, dtype: int64 ``` The sub-DataFrame with the column at position 1: ``` In [4]: df[[1]] Out[4]: b 0 2 1 4 ``` Note: it's preferable (and less ambiguous) to specify whether you're talking about the column name e.g. ['b'] or the integer location, since sometimes you can have columns named as integers: ``` In [5]: df.iloc[:, [1]] Out[5]: b 0 2 1 4 In [6]: df.loc[:, ['b']] Out[6]: b 0 2 1 4 In [7]: df.loc[:, 'b'] Out[7]: 0 2 1 4 Name: b, dtype: int64 ```
Another way is to select a column with the `columns` array: ``` In [5]: df = pd.DataFrame([[1,2], [3,4]], columns=['a', 'b']) In [6]: df Out[6]: a b 0 1 2 1 3 4 In [7]: df[df.columns[0]] Out[7]: 0 1 1 3 Name: a, dtype: int64 ```
Get column by number in Pandas
[ "", "python", "pandas", "" ]
I'm using pandas to do an `outer` merge on a set of about ~1000-2000 CSV files. Each CSV file has an identifier column `id` which is shared between all the CSV files, but each file has a unique set of columns of 3-5 columns. There are roughly 20,000 unique `id` rows in each file. All I want to do is merge these together, bringing all the new columns together and using the `id` column as the merge index. I do it using a simple `merge` call: ``` merged_df = first_df # first csv file dataframe for next_filename in filenames: # load up the next df # ... merged_df = merged_df.merge(next_df, on=["id"], how="outer") ``` The problem is that with nearly 2000 CSV files, I get a `MemoryError` in the `merge` operation thrown by pandas. I'm not sure if this is a limitation due to a problem in the merge operation? The final dataframe would have 20,000 rows and roughly (2000 x 3) = 6000 columns. This is large, but not large enough to consume all the memory on the computer I am using which has over 20 GB of RAM. Is this size too much for pandas manipulation? Should I be using something like sqlite instead? Is there something I can change in the `merge` operation to make it work on this scale? thanks.
I think you'll get better performance using a [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tools.merge.concat.html) (which acts like an outer join): ``` dfs = (pd.read_csv(filename).set_index('id') for filename in filenames) merged_df = pd.concat(dfs, axis=1) ``` *This means you are doing only one merge operation rather than one for each file.*
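A toy illustration of how a single `concat` along the columns behaves like one big outer join (two small frames standing in for the CSV files):

```python
import pandas as pd

# Two toy frames standing in for two of the CSV files, indexed by id.
df1 = pd.DataFrame({'id': [1, 2], 'a': [10, 20]}).set_index('id')
df2 = pd.DataFrame({'id': [2, 3], 'b': [30, 40]}).set_index('id')

# One concat along the columns instead of a chain of pairwise merges:
# the result index is the union of the ids, with NaN where a file had
# no row for that id -- the same shape an outer merge would produce.
merged = pd.concat([df1, df2], axis=1)
print(merged)
```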
I met the same error in 32-bit Python when using read\_csv with a 1GB file. Try the 64-bit version and hopefully it will solve the MemoryError problem.
MemoryError on large merges with pandas in Python
[ "", "python", "numpy", "pandas", "dataframe", "" ]
i've struggled again and again on this, but can't get it to work. been on these forums for hours now... Data set: ``` Row Date AccountID Column 1 1 21/02/2013 0:30 A0M8FA1003YP . 2 21/02/2013 0:30 A0M8FA1003YP . 3 21/02/2013 0:30 A0M8FA1003YP . 4 24/09/2007 12:00 A0M8FA1003YP . 5 24/09/2007 12:00 A0M8FA1003YP . 6 24/09/2007 12:00 A0M8FA1003YP . 7 12/02/2009 12:00 A023123332YP . 8 24/09/2003 12:00 A023123332YP . 9 24/09/2003 12:00 A023123332YP . 10 24/09/2003 12:00 A023123332YP . ``` I want to return the max value of the date column, but not just return a single row, but any rows that match that max value. I.e. In the above set I want to return rows 1, 2, 3 and 7 (all columns for the rows as well). ``` Row Date AccountID Column 1 1 21/02/2013 0:30 A0M8FA1003YP . 2 21/02/2013 0:30 A0M8FA1003YP . 3 21/02/2013 0:30 A0M8FA1003YP . 7 12/02/2009 12:00 A023123332YP . ``` I've got thousands of rows, and the number of matching rows to return for each ACCOUNTID will vary, some 1, some 2, some 10. Please help me!!! **UPDATE** Have also tried this ``` Select max(ASS_SCH_DATE) over (partition by AccountID), AccountID, ASS_SCH_DATE, ACCOUNTID from #Temp3 order by #Temp3.ACCOUNTID ``` Results still showing extra rows. ``` (No column name) ASS_SCH_DATE ACCOUNTID 2013-02-21 00:30:00.000 2013-02-21 00:30:00.000 A0M8FA1003YP 2013-02-21 00:30:00.000 2013-02-21 00:30:00.000 A0M8FA1003YP 2013-02-21 00:30:00.000 2013-02-21 00:30:00.000 A0M8FA1003YP 2013-02-21 00:30:00.000 2007-09-24 12:00:00.000 A0M8FA1003YP 2013-02-21 00:30:00.000 2007-09-24 12:00:00.000 A0M8FA1003YP ```
Query: **[SQLFIDDLEExample](http://sqlfiddle.com/#!3/2a9e2/1)** ``` SELECT t1.* FROM Table1 t1 WHERE t1.Date = (SELECT MAX(t2.Date) FROM Table1 t2 WHERE t2.AccountID = t1.AccountID) ``` Result: ``` | ROW | DATE | ACCOUNTID | -------------------------------------------------------- | 1 | February, 21 2013 00:30:00+0000 | A0M8FA1003YP | | 2 | February, 21 2013 00:30:00+0000 | A0M8FA1003YP | | 3 | February, 21 2013 00:30:00+0000 | A0M8FA1003YP | | 7 | February, 12 2009 12:00:00+0000 | A023123332YP | ```
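The correlated subquery can be checked against a cut-down version of the sample data using the stdlib `sqlite3` module (this sketch stores the dates as ISO strings so `MAX` compares them chronologically — an assumption of the sketch, not of the original schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Table1 (RowNo INTEGER, Date TEXT, AccountID TEXT)')
conn.executemany('INSERT INTO Table1 VALUES (?, ?, ?)', [
    (1, '2013-02-21 00:30', 'A0M8FA1003YP'),
    (2, '2013-02-21 00:30', 'A0M8FA1003YP'),
    (3, '2013-02-21 00:30', 'A0M8FA1003YP'),
    (4, '2007-09-24 12:00', 'A0M8FA1003YP'),
    (7, '2009-02-12 12:00', 'A023123332YP'),
    (8, '2003-09-24 12:00', 'A023123332YP'),
])

# Keep every row whose date equals the max date for its own AccountID.
rows = conn.execute('''
    SELECT t1.* FROM Table1 t1
    WHERE t1.Date = (SELECT MAX(t2.Date) FROM Table1 t2
                     WHERE t2.AccountID = t1.AccountID)
    ORDER BY t1.RowNo''').fetchall()

print([r[0] for r in rows])   # [1, 2, 3, 7]
conn.close()
```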
``` select * from table where date in (select max(date) from table) ```
SQL select multiple max rows where ID is same
[ "", "sql", "" ]
I am trying string repetition in Python. ``` #!/bin/python str = 'Hello There' print str[:5]*2 ``` **Output** > HelloHello **Required Output** > Hello Hello Can anyone please point me in the right direction? Python version: 2.6.4
``` string = 'Hello There' print ' '.join([string[:5]] * 2) ```
In case you just want to repeat any string: ``` "Hello world " * 2 ```
How to repeat a string with spaces?
[ "", "python", "string", "" ]
The code is as follows: ``` #coding=utf-8 import re str = "The output is\n" str += "1) python\n" str += "A dynamic language\n" str += "easy to learn\n" str += "2) C++\n" str += "difficult to learn\n" str += "3244) PHP\n" str += "eay to learn\n" pattern = r'^[1-9]+\) .*' print re.findall(pattern,str,re.M) ``` The output is ``` ['1) python', '2) C++', '3244) PHP'] ``` However, I want to split it like this: ``` ['1) python\nA dynamic language\neasy to learn\n', '2) C++\ndifficult to learn\n', '3244) PHP\neasy to learn\n'] ``` That is, ignore the first lines that do not start with "number)", and when a number is encountered, the following lines up to the next line starting with "number)" are considered to be the same group. How should I rewrite the pattern?
``` >>> import re >>> strs = 'The output is\n1) python\nA dynamic language\neasy to learn\n2) C++\ndifficult to learn\n3244) PHP\neay to learn\n' >>> re.findall(r'\d+\)\s[^\d]+',strs) ['1) python\nA dynamic language\neasy to learn\n', '2) C++\ndifficult to learn\n', '3244) PHP\neay to learn\n'] ```
You can use this pattern, which allows digits as long as they are not followed by a closing parenthesis: ``` re.findall(r'\d+\)\s(?:\D+|\d+(?!\d*\)))*',str) ```
How to write a regex in python to match this?
[ "", "python", "regex", "" ]
In one of my testing scripts in Python I use this pattern several times: ``` sys.path.insert(0, "somedir") mod = __import__(mymod) sys.path.pop(0) ``` Is there a more concise way to temporarily modify the search path?
You could use a simple [context manager](https://docs.python.org/3/reference/datamodel.html#context-managers): ``` import sys class add_path(): def __init__(self, path): self.path = path def __enter__(self): sys.path.insert(0, self.path) def __exit__(self, exc_type, exc_value, traceback): try: sys.path.remove(self.path) except ValueError: pass ``` Then to import a module you can do: ``` with add_path('/path/to/dir'): mod = __import__('mymodule') ``` On exit from the body of the `with` statement `sys.path` will be restored to the original state. If you only use the module within that block you might also want to delete its reference from [`sys.modules`](https://docs.python.org/3.6/library/sys.html#sys.modules): ``` del sys.modules['mymodule'] ```
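The same behaviour can be had with `contextlib.contextmanager` instead of a hand-written class — a minimal sketch (the directory name here is made up):

```python
import sys
from contextlib import contextmanager

@contextmanager
def add_path(path):
    """Temporarily prepend `path` to sys.path for the with-block."""
    sys.path.insert(0, path)
    try:
        yield
    finally:
        try:
            sys.path.remove(path)
        except ValueError:
            pass  # something else already removed it

before = list(sys.path)
with add_path('/tmp/some_hypothetical_dir'):
    assert sys.path[0] == '/tmp/some_hypothetical_dir'
assert sys.path == before  # restored on exit
```

The `try/finally` around `yield` ensures the path is removed even if an import inside the block raises.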
Appending a value to `sys.path` only modifies it temporarily, i.e. for that session only. Permanent modifications are done by changing `PYTHONPATH` and the default installation directory. So, if by temporary you meant for the current session only, then your approach is okay, but you can remove the `pop` part if `somedir` is not hiding any important modules that are expected to be found in `PYTHONPATH`, the current directory, or the default installation directory. <http://docs.python.org/2/tutorial/modules.html#the-module-search-path>
How to temporarily modify sys.path in Python?
[ "", "python", "python-import", "" ]
I'm struggling to do this. I have created a new database in the terminal called "somedb" using ``` CREATE DATABASE somedb ``` On my desktop I have the SQL dump downloaded from phpMyadmin: somedb.sql I have tried: ``` somedb < /Users/myname/Desktop/somedb.sql ``` Result: ERROR 1064 (42000): You have an error in your SQL syntax ``` mysql -u myname -p -h localhost somedb </Users/myname/Desktop/somedb.sql ``` Result: ERROR 1064 (42000): You have an error in your SQL syntax; I'm new to SQL (The purpose of importing this db is for a text book exercise) I have granted myself all privileges and there is no password. Any idea what I'm doing wrong? Here is the top of the SQL dump file: ``` -- phpMyAdmin SQL Dump -- version 4.0.2 -- http://www.phpmyadmin.net -- -- Host: localhost -- Generation Time: Jun 18, 2013 at 02:22 PM -- Server version: 5.5.31-30.3 -- PHP Version: 5.2.17 SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO"; SET time_zone = "+00:00"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; -- -- Database: `somedb` -- CREATE DATABASE IF NOT EXISTS `somedb` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci; USE `somedb`; -- -------------------------------------------------------- -- -- Table structure for table `actions` -- CREATE TABLE IF NOT EXISTS `actions` ( `action_id` int(11) NOT NULL AUTO_INCREMENT, `action` varchar(75) NOT NULL, `qualifiers` text NOT NULL, `response` varchar(75) NOT NULL, `response_vars` text NOT NULL, `active` tinyint(4) NOT NULL, PRIMARY KEY (`action_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Stores user defined actions triggered by certain events' AUTO_INCREMENT=3 ; -- -------------------------------------------------------- -- ```
I found an SO post [here](https://stackoverflow.com/questions/4546778/how-can-i-import-a-database-with-mysql-from-terminal). I used "source" like so: ``` SOURCE /Users/myname/Desktop/somedb.sql; ``` That worked. Great, but the internet seemed to want me to use the method like so: ``` mysql -u username -p password databasename < filename.sql ``` I may post another question on when to use that second method, but in the meantime I just used "source" from a SQL dump file.
Using MAMP Pro, I created an "uploads" dir in MAMP and put my SQL file in there, called "file.sql". Running the command below in the terminal worked for me. Make sure to replace the bracketed placeholders with your own user info, with no space after "-u" or "-p": ``` /Applications/MAMP/Library/bin/mysql -u<username> -p<root> <db_name> < /Applications/MAMP/uploads/file.sql ```
Importing a SQL db into mysql using the terminal
[ "", "mysql", "sql", "import", "terminal", "" ]
I am reading an [article](http://www.peterbe.com/plog/uniqifiers-benchmark) about removing duplicate elements from a list in Python. There is a function defined as: ``` def f8(seq): # Dave Kirby # Order preserving seen = set() return [x for x in seq if x not in seen and not seen.add(x)] ``` However, I don't really understand the syntax of **[x for x in seq if x not in seen and not seen.add(x)]**. What is this syntax? How do I read it? Thank you.
Firstly list comprehensions are usually easy to read, here is a simple example: ``` [x for x in seq if x != 2] ``` translates to: ``` result = [] for x in seq: if x != 2: result.append(x) ``` The reason why you can't read this code is because it is not readable and hacky code as I stated [in this question](https://stackoverflow.com/a/17016257/1219006): ``` def f8(seq): seen = set() return [x for x in seq if x not in seen and not seen.add(x)] ``` translates to: ``` def f8(seq): seen = set() result = [] for x in seq: if x not in seen and not seen.add(x): # not seen.add(...) always True result.append(x) ``` and relies on the fact that `set.add` is an in-place method that always returns `None` so `not None` evaluates to `True`. ``` >>> s = set() >>> y = s.add(1) # methods usually return None >>> print s, y set([1]) None ``` The reason why the code has been written this way is to sneakily take advantage of Python's list comprehension speed optimizations. Python methods will usually return `None` if they modify the data structure (`pop` is one of the exceptions) I also noted that the current accepted way of doing this (`2.7+`) which is more readable and doesn't utilize a *hack* is as follows: ``` >>> from collections import OrderedDict >>> items = [1, 2, 0, 1, 3, 2] >>> list(OrderedDict.fromkeys(items)) [1, 2, 0, 3] ``` Dictionary keys must be unique, therefore the duplicates are filtered out.
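For comparison, the unrolled loop from the answer above as a standalone function (the name `dedupe` is mine):

```python
def dedupe(seq):
    # Order-preserving de-duplication, without the `not seen.add(x)` hack.
    seen = set()
    result = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

print(dedupe([1, 2, 0, 1, 3, 2]))  # -> [1, 2, 0, 3]
```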
It is called a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions), they provide a syntactically more compact and more efficient way of writing a normal for-loop based solution. ``` def f8(seq): # Dave Kirby # Order preserving seen = set() return [x for x in seq if x not in seen and not seen.add(x)] ``` The above **list comprehension** is roughly equivalent to: ``` def f8(seq): seen = set() lis =[] for x in seq: if x not in seen: lis.append(x) seen.add(x) return lis ```
python list looping technique syntax
[ "", "python", "list-comprehension", "" ]
I am creating a program that needs to time how long it runs and then display that time. It is basically a timer that runs in the background, and I can call upon it to display how long the code has been running. How do I do this?
You record the start time, then later on calculate the difference between that start time and the current time. Due to platform differences, for precision you want to use the [`timeit.default_timer` callable](http://docs.python.org/2/library/timeit.html#timeit.default_timer): ``` from timeit import default_timer start = default_timer() # do stuff duration = default_timer() - start ``` This gives you a wall-clock time duration in seconds as a floating point value. Demo: ``` >>> from timeit import default_timer >>> start = default_timer() >>> # Martijn reads another post somewhere ... >>> print default_timer() - start 19.1996181011 ```
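Putting the accepted answer together into a tiny runnable sketch (the summed range stands in for real work):

```python
from timeit import default_timer

start = default_timer()
total = sum(range(100000))          # placeholder for the real workload
duration = default_timer() - start  # elapsed wall-clock seconds (float)
print('took %.6f seconds' % duration)
```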
This is easy in Python: ``` import time start_time = time.time() # do something elapsed = time.time() - start_time ``` The resultant `elapsed` will be in seconds.
How to create a timer on python
[ "", "python", "" ]
I read that Private class methods can't be called from outside their class. If I have a `Car` class with a private method `__reset_odometer()` can I say: ``` import Car Car turboCar = Car(); turboCar.__reset_odometer(); ``` `__reset_odometer()` is defined inside the Car class. `turboCar` is an instance of the `Car` class. So why does calling the method `turboCar.__reset_odometer()` result in an access error? I guess 'outside the class' is a term that I am not understanding, b/c a `turboCar` to me is not outside the class `Car`.
Calling `turboCar.__reset_odometer()` will raise an exception since, even though the method is being called on a `Car` object, it is still *outside the class definition*. Think of it this way: you aren't inside the class, writing definitions of methods when you instantiate `turboCar = Car()`. So you can still refer to `__reset_odometer` inside the class like so, ``` class Car(object): def __init__(self): self.__odometer = 88800 self.__reset_odometer() # <-- call double underscore function def __reset_odometer(self): self.__odometer = 0 def read_odometer(self): return self.__odometer ``` And using `turboCar` works fine and the odometer has been reset, ``` >>> turboCar = Car() >>> turboCar.read_odometer() 0 ``` Of course, with Python there are no real private variables like in `C++` and the like, ``` >>> turboCar._Car__odometer = 9999 >>> turboCar.read_odometer() 9999 >>> turboCar._Car__reset_odometer() >>> turboCar.read_odometer() 0 ```
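The behaviour described above, condensed into a runnable sketch (Python 3; the odometer numbers are taken from the answer):

```python
class Car(object):
    def __init__(self):
        self.__odometer = 88800
        self.__reset_odometer()  # fine: we are inside the class body

    def __reset_odometer(self):
        self.__odometer = 0

    def read_odometer(self):
        return self.__odometer

turbo = Car()
assert turbo.read_odometer() == 0          # reset ran in __init__
turbo._Car__odometer = 9999                # the mangled name is reachable
assert turbo.read_odometer() == 9999
try:
    turbo.__reset_odometer()               # outside the class: fails
except AttributeError as exc:
    print('AttributeError:', exc)
```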
``` class Car(object): def __reset_odometer(self): pass def reset(self): self.__reset_odometer() # This is valid. turbocar = Car() turbocar.reset() turbocar.__reset_odometer() # This is invalid. __reset_odometer is only accessible from Car methods. ```
What does it mean for a private method to be only accessible from its own class in python?
[ "", "python", "static", "private", "" ]
I have a table as follows: ``` CallID | CompanyID | OutcomeID ---------------------------------- 1234 | 3344 | 36 1235 | 3344 | 36 1236 | 3344 | 36 1237 | 3344 | 37 1238 | 3344 | 39 1239 | 6677 | 37 1240 | 6677 | 37 ``` I would like to create a SQL script that counts the number of Sales outcomes and the number of all the other attempts (anything <> 36), something like: ``` CompanyID | SalesCount | NonSalesCount ------------------------------------------ 3344 | 3 | 1 6677 | 0 | 2 ``` Is there a way to do a COUNT() that contains a condition like COUNT(CallID WHERE OutcomeID = 36)?
You can use a CASE expression with your aggregate to get a total based on the `outcomeId` value: ``` select companyId, sum(case when outcomeid = 36 then 1 else 0 end) SalesCount, sum(case when outcomeid <> 36 then 1 else 0 end) NonSalesCount from yourtable group by companyId; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d41d8/15821)
Something like this: ``` SELECT companyId, COUNT(CASE WHEN outcomeid = 36 THEN 1 END) SalesCount, COUNT(CASE WHEN outcomeid <> 36 THEN 1 END) NonSalesCount FROM yourtable GROUP BY companyId ``` should work -- `COUNT()` counts only not null values.
SQL: Count() based on column value
[ "", "sql", "count", "" ]
I searched on SO, but I couldn't find the right answer. ``` Student ------------------ rollno int PK name varchar(20) class varchar(20) ``` The other table is ``` Marks ----------------- rollno FK sub1 int sub2 int sub3 int sub4 int sub5 int ``` `sub1`, `sub2`, etc. contain the marks of subjects. Now I want a query which will display information for students who have >35 marks in more than 2 subjects.
``` select s.rollno, s.name, s.class from student s join ( select rollno, case when sub1 > 35 then 1 else 0 end + case when sub2 > 35 then 1 else 0 end + case when sub3 > 35 then 1 else 0 end + case when sub4 > 35 then 1 else 0 end + case when sub5 > 35 then 1 else 0 end as [Count] from marks ) m on m.rollno = s.rollno where m.[Count] > 2 ```
You can use `CASE WHEN` to count the fields that have more than 35. ``` SELECT s.rollno, s.name, s.class FROM Student s join Marks m on (s.rollno = m.rollno) where (CASE WHEN sub1>35 THEN 1 ELSE 0 END + CASE WHEN sub2>35 THEN 1 ELSE 0 END + CASE WHEN sub3>35 THEN 1 ELSE 0 END + CASE WHEN sub4>35 THEN 1 ELSE 0 END + CASE WHEN sub5>35 THEN 1 ELSE 0 END) > 2; ``` You can check it here: [SQL Fiddle](http://sqlfiddle.com/#!2/cfd65/1/0)
SQL query to display data from multiple tables
[ "", "sql", "" ]
Consider: ``` >>> a = {'foo': {'bar': 3}} >>> b = {'foo': {'bar': 3}} >>> a == b True ``` According to the python doc, [you can indeed use](http://docs.python.org/2/library/stdtypes.html#dict) the `==` operator on dictionaries. What is actually happening here? Is Python recursively checking each element of the dictionaries to ensure equality? Is it making sure the keys are identically matched, and the values are also identically matched? Is there documentation that specifies exactly what `==` on a dictionary means? Or whether I have to implement my own version of checking for equality? (If the `==` operator works, why aren't dicts hashable? That is, why can't I create a set() of dicts, or use a dict as a dictionary key?)
Python is recursively checking each element of the dictionaries to ensure equality. See the [C `dict_equal()` implementation](http://hg.python.org/cpython/file/6f535c725b27/Objects/dictobject.c#l1839), which checks each and every key and value (provided the dictionaries are the same length); if dictionary `b` has the same key, then a `PyObject_RichCompareBool` tests if the values match too; this is essentially a recursive call. Dictionaries are not hashable because their [`__hash__` attribute is set to `None`](http://docs.python.org/2/reference/datamodel.html#object.__hash__), and most of all they are *mutable*, which is disallowed when used as a dictionary key. If you were to use a dictionary as a key, and through an existing reference then change the key, then that key would no longer slot to the same position in the hash table. Using another, equal dictionary (be it equal to the unchanged dictionary or the changed dictionary) to try and retrieve the value would now no longer work because the wrong slot would be picked, or the key would no longer be equal.
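Both points — recursive value equality and unhashability — are easy to verify with a small sketch:

```python
a = {'foo': {'bar': 3}}
b = {'foo': {'bar': 3}}
assert a == b and a is not b   # equal by value, not by identity
b['foo']['bar'] = 4
assert a != b                  # mutating a nested value breaks equality
try:
    {a: 'value'}               # dicts are mutable, hence unhashable
except TypeError as exc:
    print('TypeError:', exc)
```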
From [docs](http://docs.python.org/2/reference/expressions.html#not-in): > Mappings (dictionaries) compare equal if and only if their sorted > (key, value) lists compare equal .[[5]](http://docs.python.org/2/reference/expressions.html#id24) Outcomes other than equality are > resolved consistently, but are not otherwise defined. [[6]](http://docs.python.org/2/reference/expressions.html#id25) Footnote [[5]](http://docs.python.org/2/reference/expressions.html#id24): > The implementation computes this efficiently, without constructing > lists or sorting. Footnote [[6]](http://docs.python.org/2/reference/expressions.html#id25): > Earlier versions of Python used lexicographic comparison of the sorted > (key, value) lists, but this was very expensive for the common case of > comparing for equality. An even earlier version of Python compared > dictionaries by identity only, but this caused surprises because > people expected to be able to test a dictionary for emptiness by > comparing it to {}.
What does the == operator actually do on a Python dictionary?
[ "", "python", "dictionary", "" ]
I know how to create a trigger that checks that a group of columns has one and only one NON NULL value for one table, but I would like to reuse the code because I will have some other tables with the same requirements. Any recommendations? I was thinking of maybe a trigger that passes the names of the columns to be checked and the table name to a stored procedure, and the function does the rest, but I'm not sure how to implement it. EDIT: I tried ``` DROP tAble if exists t; create table t( a integer, b integer, c integer, CONSTRAINT enforce_only1FK CHECK ((a <> NULL)::integer +(b <> NULL)::integer+(c <>NULL)::integer = 1) ); INSERT into t VALUES (4,NULL,6); ``` It should not allow the insert but it does... what am I doing wrong? EDIT 2: interesting... it works if I write ``` DROP tAble if exists t; create table t( a integer, b integer, c integer, CONSTRAINT enforce_only1FK CHECK ((a NOT NULL)::integer +(b NOT NULL)::integer+(c NOT NULL)::integer = 1) ); INSERT into t VALUES (4,NULL,6); ```
> a trigger that checks if a group of columns has one and only one NON > NULL for one table This would be a case for a table-level check constraint rather than a trigger. Example with the constraint on the first 3 columns: ``` CREATE TABLE tablename ( a int, b int, c int, d text, CHECK ((a is not null and b is null and c is null) OR (a is null and b is not null and c is null) OR (a is null and b is null and c is not null)) ); ``` or in more elaborate form with a function: ``` CREATE FUNCTION count_notnull(variadic arr int[]) returns int as $$ select sum(case when $1[i] is null then 0 else 1 end)::int from generate_subscripts($1,1) a(i); $$ language sql immutable; CREATE TABLE tablename ( a int, b int, c int, d text, CHECK (count_notnull(a,b,c)=1) ); ``` This second form looks better when many columns are involved in the constraint but it requires them to be all of the same type.
That is not a case for a trigger. Just a check constraint: ``` create table t ( a integer, b text, c boolean check (( (a is not null)::integer + (b is not null)::integer + (c is not null)::integer ) = 1) ); ``` Instead of checking every possible combination, just cast the booleans to integer and sum the results. ``` insert into t (a, b, c) values (1, 'a', true); ERROR: new row for relation "t" violates check constraint "t_check" DETAIL: Failing row contains (1, a, t). insert into t (a, b, c) values (null, 'b', false); ERROR: new row for relation "t" violates check constraint "t_check" DETAIL: Failing row contains (null, b, f). insert into t (a, b, c) values (2, null, null); INSERT 0 1 insert into t (a, b, c) values (null, null, null); ERROR: new row for relation "t" violates check constraint "t_check" DETAIL: Failing row contains (null, null, null). ```
Create trigger that ensures there is one and only one NON-NULL in a set of column and reuse it for other tables
[ "", "sql", "postgresql", "plpgsql", "" ]
I've decided not to waste my summer and start learning python. I figured I'd start learning looping techniques so I wanted to start with a basic list of numbers, aka, write a for loop that will generate the numbers 1 - 10. This is what I have: ``` def generateNumber(num): i=0 for i in range(num): return i return i ``` and the code doesn't work. I want to get an output in a list like this: ``` >>> generateNumber(10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ```
Trying to be consistent with what you first tried, you could do something like this: ``` def generateNumber(num): mylist = [] for i in range(num+1): mylist.append(i) return mylist x = generateNumber(10) ``` But you could instead just say: ``` x = range(10+1) # a list in Python 2, a lazy range object in Python 3 ``` or ``` x = list(range(10+1)) # if you always want a real list ``` In general though, you should keep this list based on inputting the number 10, so it is [0...9] and not [0...10].
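A compact version of the first function above, with the off-by-one handled explicitly (the function name is the asker's, snake-cased):

```python
def generate_number(num):
    # range(num + 1) stops at num inclusive, matching the asker's output.
    return list(range(num + 1))

print(generate_number(10))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```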
It might help to implement this with the ability to specify a range: ``` def generateNumber(low, high): '''returns a list with integers between low and high inclusive example: generateNumber(2,10) --> [2,3,4,5,6,7,8,9,10] ''' return range(low, high+1) ``` This can also be done with the built-in range function: ``` range(10) --> [0,1,2,3,4,5,6,7,8,9] #note the "off by one" range(11) --> [0,1,2,3,4,5,6,7,8,9,10] range(2,11) --> [2,3,4,5,6,7,8,9,10] ``` More about range: <http://docs.python.org/2/library/functions.html#range>
Python Sequence of Numbers
[ "", "python", "list", "sequence", "" ]
So I have two dictionaries: ``` dic1 = {1.0: 9.0, 3.0: 33.0, 5.0: 13.0} dic2 = {1.0: 3.4, 3.0: 88.9, 4.0: 73.0, 5.0: 9.0} ``` The keys in each are the ID number, the values in dic1 are ratings and the values in dic2 are costs. What I want is a way to associate each ID number with both its cost and rating and, in the process, delete any keys in dic2 that do not have a match in dic1. ``` dic3 = {1.0: (9.0, 3.4), 3.0: (33.0, 88.9), 5.0: (13.0, 9.0)} ``` I then would like to be able to create an array with columns ID, cost, rating. I haven't been able to think of a way to do this. Any suggestions? Thanks!
Try this, which works even on old versions of Python (older than 2.7): ``` dic3 = {} for k in set(dic1) & set(dic2): dic3[k] = (dic1[k], dic2[k]) ``` Or this, using a dictionary comprehension together with dictionary views (both added in Python 2.7): ``` dic3 = { k : (dic1[k], dic2[k]) for k in dic1.viewkeys() & dic2.viewkeys() } ``` In Python 3.0 and up the keys are already treated as sets, yielding the nicest solution: ``` dic3 = { k : (dic1[k], dic2[k]) for k in dic1.keys() & dic2.keys() } ``` Anyway, the trick to determine the keys common to both dictionaries is to intersect their sets, and it works as expected: ``` dic3 => {1.0: (9.0, 3.4), 3.0: (33.0, 88.9), 5.0: (13.0, 9.0)} ```
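End to end with the asker's data (Python 3), including the flattening into (ID, cost, rating) rows mentioned at the end of the question:

```python
dic1 = {1.0: 9.0, 3.0: 33.0, 5.0: 13.0}            # ID -> rating
dic2 = {1.0: 3.4, 3.0: 88.9, 4.0: 73.0, 5.0: 9.0}  # ID -> cost

# Keep only IDs present in both dicts; pair up (rating, cost).
dic3 = {k: (dic1[k], dic2[k]) for k in dic1.keys() & dic2.keys()}

# Flatten into (ID, cost, rating) rows for the array the asker wanted.
rows = [(k, cost, rating) for k, (rating, cost) in sorted(dic3.items())]
print(rows)
```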
``` dic3 = {k : (v1, dic2[k]) for k, v1 in dic1.items() if k in dic2} ```
Match keys from two dictionaries, and make new dictionary with matched key and two associated values
[ "", "python", "python-2.7", "dictionary", "" ]
I am writing a very simple function that, given a string as input, parses the string into sentences, reverses the words of each sentence, and returns the reversed sentences but in the given sentence order. I am struggling with the `string.split()` and `string.join()` built-in methods and am getting a TypeError when I run the following python program. ``` import string def reverseSentences(str): senList = str.split('.') for i, item in enumerate(senList[:]): senList[i] = string.join(item.split(' ').reverse(), ' ') return string.join(senList, '.') + '.' ``` When I try to print a call to this function it gives me a generic TypeError. Thanks for enduring my n00bishness.
Try ``` >> str = 'a1 b2' >> ' '.join(x[::-1] for x in str.split()) '1a 2b' ``` In function form: ``` def reverseSentences(str): return ' '.join(x[::-1] for x in str.split()) ``` Notice that `x[::-1]` is way faster than `reversed()` ([Reverse a string in Python](https://stackoverflow.com/questions/931092/reverse-a-string-in-python)).
`item.split(' ').reverse()` does not return the list; rather, it reverses in place and returns `None`. So your join is the same as: ``` string.join(None, ' ') ``` Clearly a problem. You should use the `reversed()` function instead: ``` string.join(reversed(item.split(' ')), ' ') ``` And rather than the `string.join` function, use the `str.join` method available in newer Python versions: ``` ' '.join(reversed(item.split(' '))) ```
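Putting the `reversed()` fix into the asker's function, under the assumption that the goal is to reverse the *word order* within each sentence (as the original `.reverse()` call suggests):

```python
def reverse_sentences(text):
    # Split on '.', reverse the words of each sentence, rejoin in order.
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    reversed_sentences = (' '.join(reversed(s.split())) for s in sentences)
    return '. '.join(reversed_sentences) + '.'

print(reverse_sentences('hello world. foo bar baz.'))  # -> world hello. baz bar foo.
```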
TypeError when using string.join() in Python
[ "", "python", "string", "" ]
I identify Internet **traffic flows** by their **5-tuple** (src IP, dst port, sport, dport, transport protocol number) and I would like turn this 5-tuple into a much more compact **alphanumeric ID** for internal use in my script. What choices do I have in Python? I read that the built-in function `hash` is only **consistent OS-wise**, so I would prefer something else. I will only ever have to deal with **no more than a few hundreds** different 5-tuples.
Just choose your own hash function: ``` import hashlib hash = hashlib.md5() t = (1, 2, 3, 4, 5) # whatever t_as_string = str(t) hash.update(t_as_string) print hash.hexdigest() ``` You can use any of the functions in [hashlib](http://docs.python.org/2/library/hashlib.html). And since this isn't a security issue, it doesn't really matter which one... **BUT:** wanna bet, comparing tuples will be faster / more efficient?
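A Python 3 variant of the same idea (the 12-character truncation is my assumption — plenty of headroom for a few hundred flows, though not a guarantee against collisions):

```python
import hashlib

def flow_id(five_tuple):
    # Hash the tuple's repr; md5 is stable across OSes and Python runs,
    # unlike the built-in hash().
    digest = hashlib.md5(repr(five_tuple).encode('utf-8')).hexdigest()
    return digest[:12]

fid = flow_id(('10.0.0.1', '10.0.0.2', 1234, 80, 6))
print(fid)
```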
The following Python hash function, by Ewen Cheslack-Postava, remains consistent across several OSes and CPUs: <https://pypi.python.org/pypi/pyhashxx/>
identify 5-tuple flows by hash value in Python
[ "", "python", "hash", "" ]
I'll start by posting my query... ``` SELECT [copyright status], sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources GROUP BY [copyright status] UNION SELECT [lw status], sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources WHERE [lw status] = 'In Reserve' GROUP BY [lw status] UNION SELECT [lw status], sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources WHERE [lw status] = 'Published' GROUP BY [lw status]; ``` (Hope that is easy to read) It's working as I intend it to, however I would like to add one more function to the query. After the first `SELECT` query, I would like to add in an additional query which totals each of the three sums (Key Info, Approaches, Research). The syntax I tried adding in was as follows: ``` <Previous Query> UNION SELECT, sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources UNION <Next Query> ``` However, when I try and run it, I get an error which reads "The number of columns in the two selected tables of queries of a union query do not match." I'm not sure if I'm being too ambitious with this. Also, is there a more efficient way of formatting the initial query? If it makes any difference, the values in layer, copyright status and lw status are stored in separate tables and drawn into the resources table via a combo box in the table design mode. I'm using Access 2003. If any more info is required, please let me know. Thanks.
The number of columns has to be equal for all parts of the `union`. You could just add a `null` first column for your summary row: ``` SELECT null, sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources ```
You removed `lw status` and didn't select anything: ``` UNION SELECT 'SUM', sum(IIF(layer='key info',1,0)) AS [Key Info], sum(IIF(layer='approaches',1,0)) AS [Approaches], sum(IIF(layer='research',1,0)) AS [Research] FROM resources UNION <Next Query> ```
SQL - Sum within UNION query
[ "", "sql", "ms-access", "ms-access-2003", "" ]
I have a project with multiple package dependencies, the main requirements being listed in `requirements.txt`. When I call `pip freeze` it prints the currently installed packages as plain list. I would prefer to also get their dependency relationships, something like this: ``` Flask==0.9 Jinja2==2.7 Werkzeug==0.8.3 Jinja2==2.7 Werkzeug==0.8.3 Flask-Admin==1.0.6 Flask==0.9 Jinja2==2.7 Werkzeug==0.8.3 ``` The goal is to detect the dependencies of each specific package: ``` Werkzeug==0.8.3 Flask==0.9 Flask-Admin==1.0.6 ``` And insert these into my current `requirements.txt`. For example, for this input: ``` Flask==0.9 Flask-Admin==1.0.6 Werkzeug==0.8.3 ``` I would like to get: ``` Flask==0.9 Jinja2==2.7 Flask-Admin==1.0.6 Werkzeug==0.8.3 ``` Is there any way show the dependencies of installed pip packages?
You should take a look at [`pipdeptree`](https://pypi.python.org/pypi/pipdeptree): ``` $ pip install pipdeptree $ pipdeptree -fl Warning!!! Cyclic dependencies found: ------------------------------------------------------------------------ xlwt==0.7.5 ruamel.ext.rtf==0.1.1 xlrd==0.9.3 openpyxl==2.0.4 - jdcal==1.0 pymongo==2.7.1 reportlab==3.1.8 - Pillow==2.5.1 - pip - setuptools ``` It doesn't directly generate a `requirements.txt` file as you indicated. However, the source (255 lines of Python code) should be relatively easy to modify to your needs; alternatively you can (as @MERose pointed out, per the pipdeptree 0.3 README) use: ``` pipdeptree --freeze --warn silence | grep -P '^[\w0-9\-=.]+' > requirements.txt ``` Version 0.5 of `pipdeptree` also allows JSON output with the `--json` option, which is more easily machine-parsable, at the expense of being less readable.
# Warning: py2 only / abandonware [`yolk`](https://pypi.python.org/pypi/yolk/0.4.3) can display dependencies for packages, provided that they * were installed via `setuptools` * came with metadata that includes dependency information ``` $ yolk -d Theano Theano 0.6.0rc3 scipy>=0.7.2 numpy>=1.5.0 ```
Is there any way to show the dependency trees for pip packages?
[ "", "python", "pip", "requirements.txt", "" ]
I am trying to follow [this tutorial](http://django-tinymce.readthedocs.org/en/latest/installation.html#testing) on getting tinymce working with django and zinnia. It's not working, so I am attempting to do "Testing" but get this error when I run `django-admin.py syncdb`. How do I fix this? ``` $django-admin.py syncdb Traceback (most recent call last): File "/usr/local/bin/django-admin.py", line 5, in <module> pkg_resources.run_script('Django==1.5.1', 'django-admin.py') File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 505, in run_script self.require(requires)[0].run_script(script_name, ns) File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1245, in run_script execfile(script_filename, namespace, namespace) File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/EGG-INFO/scripts/django-admin.py", line 5, in <module> management.execute_from_command_line() File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/core/management/__init__.py", line 453, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/core/management/__init__.py", line 263, in fetch_command app_name = get_commands()[subcommand] File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/core/management/__init__.py", line 109, in get_commands apps = settings.INSTALLED_APPS File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/conf/__init__.py", line 53, in __getattr__ self._setup(name) File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/conf/__init__.py", line 48, in _setup self._wrapped = Settings(settings_module) File "/usr/local/lib/python2.7/dist-packages/Django-1.5.1-py2.7.egg/django/conf/__init__.py", line 134, in __init__ raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e)) ImportError: Could not import settings 'testtinymce.staticfiles_settings' (Is it on sys.path?): No module named staticfiles_settings ``` Thank you.
I found out the django-tinymce documentation is outdated, i.e. partially wrong. What I discovered is that different versions of the tinymce and django-tinymce packages are not compatible. I solved it by adding some variables to my project/settings.py and altering the tinymce directory and file names. django-tinymce's urls.py had some hardcoded paths in it which assumed the directories were named "tiny\_mce" when in reality they were named "tinymce", hence I had to rename them; alternatively you can change the hardcoded paths in django-tinymce's urls.py. ``` # project settings.py STATIC_ROOT = os.path.join(BASE_DIR, "static") STATIC_JS_DIR = os.path.join(STATIC_ROOT, "js") TINYMCE_JS_ROOT = os.path.join(STATIC_JS_DIR, "tiny_mce") TINYMCE_JS_URL = os.path.join(TINYMCE_JS_ROOT, "tiny_mce.js") ```
A simple shutdown of the terminal, then restarting the app again fixed it for me (without needing to configure anything extra). I followed the instructions [here](http://django-tinymce.readthedocs.org/en/latest/installation.html): 1. pip install django-tinymce 2. Add `tinymce` to the INSTALLED\_APPS of 'settings.py' 3. Add `(r'^tinymce/', include('tinymce.urls')),` to the urlpatterns in urls.py 4. Do a `python manage.py syncdb` (not sure if this is needed) 5. In terminal: `$ export DJANGO_SETTINGS_MODULE='testtinymce.staticfiles_settings'` 6. Do another `python manage.py syncdb` just in case and then a `python manage.py runserver` 7. I then received the error when I tried to open up the browser to: `http://localhost:8000/admin/myapphere` 8. I restarted the terminal, did a 'collect static' just in case, then did `python manage.py runserver` and it worked (I was able to see the new fields)
Django-Tinymce Import Error
[ "", "python", "django", "tinymce", "django-tinymce", "" ]
I have a `pandas.DataFrame` that I wish to export to a CSV file. However, pandas seems to write some of the values as `float` instead of `int` types. I couldn't find how to change this behavior. Building a data frame: ``` df = pandas.DataFrame(columns=['a','b','c','d'], index=['x','y','z'], dtype=int) x = pandas.Series([10,10,10], index=['a','b','d'], dtype=int) y = pandas.Series([1,5,2,3], index=['a','b','c','d'], dtype=int) z = pandas.Series([1,2,3,4], index=['a','b','c','d'], dtype=int) df.loc['x']=x; df.loc['y']=y; df.loc['z']=z ``` View it: ``` >>> df a b c d x 10 10 NaN 10 y 1 5 2 3 z 1 2 3 4 ``` Export it: ``` >>> df.to_csv('test.csv', sep='\t', na_rep='0', dtype=int) >>> for l in open('test.csv'): print l.strip('\n') a b c d x 10.0 10.0 0 10.0 y 1 5 2 3 z 1 2 3 4 ``` Why do the tens have a dot zero? Sure, I could just stick this function into my pipeline to reconvert the whole CSV file, but it seems unnecessary: ``` def lines_as_integer(path): handle = open(path) yield handle.next() for line in handle: line = line.split() label = line[0] values = map(float, line[1:]) values = map(int, values) yield label + '\t' + '\t'.join(map(str,values)) + '\n' handle = open(path_table_int, 'w') handle.writelines(lines_as_integer(path_table_float)) handle.close() ```
The answer I was looking for was a slight variation of what @Jeff proposed in his answer. The credit goes to him. This is what solved my problem in the end for reference: ``` import pandas df = pandas.DataFrame(data, columns=['a','b','c','d'], index=['x','y','z']) df = df.fillna(0) df = df.astype(int) df.to_csv('test.csv', sep='\t') ```
This is a ["gotcha" in pandas (Support for integer NA)](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na), where integer columns with NaNs are converted to floats. > This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues to be “numeric”. One possibility is to use `dtype=object` arrays instead.
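A minimal sketch of the gotcha and the usual fillna/astype workaround described above (column names are illustrative; assumes pandas is installed):

```python
import pandas as pd

# A missing value in an otherwise-integer column silently promotes
# the whole column to float64 -- this is the documented gotcha.
df = pd.DataFrame({'a': [10, 1, 1], 'c': [None, 2, 3]}, index=['x', 'y', 'z'])
assert df['c'].dtype == float

# Fill the gaps first, then cast back to int before writing.
clean = df.fillna(0).astype(int)
csv_text = clean.to_csv(sep='\t')
print(csv_text)
```

The written output contains `10` and `0` rather than `10.0` and `0.0`, because the cast happens before `to_csv` ever sees the frame.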
Python pandas: output dataframe to csv with integers
[ "", "python", "csv", "dataframe", "pandas", "" ]
I want to get a value from a string before I replace the string with another value in Python using regex, but I don't know how to do it. For example: Original string: > This is 123 and that is ABC. > > This is 456 and that is OPQ. I want to extract 123 and 456 and then replace '*This is 123*' and '*This is 456*' with '*That was XYZ*'. The result is a list of pairs such as: > 123 : That was XYZ and that is ABC. > > 456 : That was XYZ and that is OPQ. The above is a very simple example; the extracted and replaced strings might be more complicated in my case. Is it possible to use regex to do this in Python? My original idea was to use re.findall to find all the numbers, and then use sub to replace the strings. But I am not sure whether it is possible to pair the replaced string with the number. Thank you for answering.
Something like this? ``` >>> strs = "This is 123 and that is ABC." >>> match = re.search(r'.*?(\d+)',strs) >>> rep = match.group(0) >>> num = match.group(1) >>> "{}: {}".format(num, re.sub(rep,'That was XYZ',strs)) '123: That was XYZ and that is ABC.' >>> strs = 'This is 456 and that is OPQ.' >>> match = re.search(r'.*?(\d+)',strs) >>> rep = match.group(0) >>> num = match.group(1) >>> "{}: {}".format(num, re.sub(rep,'That was XYZ',strs)) '456: That was XYZ and that is OPQ.' ```
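The search-then-sub steps above can be wrapped into one small helper so each line yields its number/replacement pair (a sketch assuming the number always follows "This is"):

```python
import re

def label_and_replace(text):
    """Return 'NUM: ...' with the 'This is NUM' prefix replaced."""
    match = re.search(r'This is (\d+)', text)
    if match is None:
        return text  # no number found, leave the line untouched
    num = match.group(1)
    # Replace the whole matched prefix, not just the digits.
    replaced = text.replace(match.group(0), 'That was XYZ')
    return '{}: {}'.format(num, replaced)

print(label_and_replace('This is 123 and that is ABC.'))
# -> 123: That was XYZ and that is ABC.
```

Mapping this helper over each input line produces the paired list the question asks for.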
``` string = "This is 123 and that is ABC." match = re.search("\d+", string).group() string = match+":"+string.replace(match, "XYZ") ``` Considering the match happens for sure, else you can put an if condition around match
Get value before replace in Python Regex
[ "", "python", "regex", "" ]
I have 3 tables: A, B, and C, as shown. ``` Table A | Table B | Table C ---------------------- | ------------------------- |-------------------------------- StudentId StudentName | SubjectId SubjectName | StudentId SubjectId Marks 1 Jack | 101 History | 1 101 33 2 Peter | 102 Science | 2 102 75 3 Samantha | 103 Literature | 3 101 55 ----------------------- | ------------------------- | ------------------------------- ``` I need a query to generate a result against each subject as follows: ``` ------------------------------------ StudentName SubjectName Marks ------------------------------------ Jack History 33 Jack Science 0 Jack Literature 0 Peter History 0 Peter Science 75 Peter Literature 0 Samantha History 55 Samantha Science 0 Samantha Literature 0 ------------------------------------ ``` I used the following queries, which did not produce the desired result. ``` 1. select a.StudentName, b.SubjectName, c.Marks from a, b, c where a.StudentId = c.StudentId and c.SubjectId = b.SubjectId 2. select a.StudentName, b.SubjectName, c.Marks from a, b, c where a.StudentId = c.StudentId and c.SubjectId = b.SubjectId(+) 3. select a.StudentName, b.SubjectName, c.Marks from a, b, c where a.StudentId = c.StudentId and (+)c.SubjectId = b.SubjectId ``` My queries miss out the subjects whose marks are not in Table C, while I need all three subjects from Table B to be repeated for every student: the marks where entered, and "0" where the subject has no entry against a particular student in Table C. Thanks in advance.
This [Link](http://sqlfiddle.com/#!3/e6a42/10) will give you what exactly you want. Below is the select Query. ``` select D.StudentName,D.subjectName,Isnull(C.marks,0) as Marks from TableC C Right Join (select * from TableA A,TableB B ) D on C.studentID = D.studentID and C.subjectID = D.subjectID ```
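The same pattern (cross join students with subjects, then outer-join the marks) can be checked quickly against an in-memory SQLite database; a LEFT JOIN with COALESCE is the portable equivalent of the right-join/Isnull version above. Table and sample data follow the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE A (StudentId INTEGER, StudentName TEXT);
CREATE TABLE B (SubjectId INTEGER, SubjectName TEXT);
CREATE TABLE C (StudentId INTEGER, SubjectId INTEGER, Marks INTEGER);
INSERT INTO A VALUES (1,'Jack'),(2,'Peter'),(3,'Samantha');
INSERT INTO B VALUES (101,'History'),(102,'Science'),(103,'Literature');
INSERT INTO C VALUES (1,101,33),(2,102,75),(3,101,55);
""")

# Every student paired with every subject; marks filled in where they exist.
rows = conn.execute("""
    SELECT a.StudentName, b.SubjectName, COALESCE(c.Marks, 0) AS Marks
    FROM A a
    CROSS JOIN B b
    LEFT JOIN C c ON c.StudentId = a.StudentId AND c.SubjectId = b.SubjectId
    ORDER BY a.StudentId, b.SubjectId
""").fetchall()
for row in rows:
    print(row)
```

This yields the full 3 students x 3 subjects grid, with 0 wherever Table C has no entry.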
Try: ``` SELECT a.StudentName, b.SubjectName, c.Marks FROM A LEFT JOIN C ON A.StudentID = C.StudentID LEFT JOIN B ON B.SubjectID = C.SubjectID ```
SQL to get all values from look up and corresponding values from master table
[ "", "sql", "join", "" ]
I have this string: ``` 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 ``` I want to be able to remove the 'eth0' part from this string, but it may not always be eth0! Its positioning after the "2: " part is consistent though. Any ideas? edit: The overall idea would be to isolate the interface name 'lo', 'eth0', etc.. Here's a longer example: ``` 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 brd 127.255.255.255 scope host lo 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:e0:c8:10:00:00 brd ff:ff:ff:ff:ff:ff inet 192.168.100.1/24 brd 192.168.100.255 scope global eth0 inet 192.168.200.16/32 scope global eth0 inet 192.168.200.17/32 scope global eth0 inet 192.168.200.18/32 scope global eth0 3: gre0: <NOARP> mtu 1476 qdisc noop state DOWN link/gre 0.0.0.0 brd 0.0.0.0 4: 3g-wan1: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 3 link/ppp ```
Edit 2: New answer for the updated question. Use `re.findall(r'\d+: (.*?):', string)` ``` import re string = """ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 brd 127.255.255.255 scope host lo 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:e0:c8:10:00:00 brd ff:ff:ff:ff:ff:ff inet 192.168.100.1/24 brd 192.168.100.255 scope global eth0 inet 192.168.200.16/32 scope global eth0 inet 192.168.200.17/32 scope global eth0 inet 192.168.200.18/32 scope global eth0 3: gre0: <NOARP> mtu 1476 qdisc noop state DOWN link/gre 0.0.0.0 brd 0.0.0.0 4: 3g-wan1: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 3 link/ppp """ print re.findall(r'\d+: (.*?):', string) ``` Output: ``` ['lo', 'eth0', 'gre0', '3g-wan1'] ```
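A slightly stricter variant anchors the pattern at the start of each line with re.MULTILINE, so the indented link/inet continuation lines can never match (same sample text, abbreviated):

```python
import re

text = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.100.1/24 brd 192.168.100.255 scope global eth0
3: gre0: <NOARP> mtu 1476 qdisc noop state DOWN
4: 3g-wan1: <POINTOPOINT,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 3
"""

# ^ with re.MULTILINE matches at the start of every line, so only the
# numbered header lines are considered; indented lines are skipped.
interfaces = re.findall(r'^\d+:\s+(\S+):', text, re.MULTILINE)
print(interfaces)
```

Note the indented `scope global eth0` line is ignored even though it also contains an interface name.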
It is not clear if the 2 is a fixed part. You can try variations around that: ``` import re re.sub("(\d*:)[^:]*:(.*)", "\\1\\2", "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000") ``` Result: ``` '2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000' ``` **EDIT:** It seems we all understood your expectations the wrong way. @Dogbert 's usage of `findall` would have my preference. However, I'll mention an adapted version of the `re.sub` proposal for closure. You could also do that by adapting the `re.sub` function call shown above: ``` >>> re.sub("\d*: *([^:]*):.*", "\\1", "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000") 'eth0' ```
I want to be able to isolate a non-specific string from a list using python
[ "", "python", "regex", "string", "" ]
I have a table with approximately 2.5 million rows that I am thinking about moving into a much larger table, 35 million rows, with a boolean flag set on the original 2.5 million. If I wanted to run lots of queries against the 2.5 million records in the new larger table, would adding an index be useful / not cause a full table scan on every query? I know that traditionally indexes aren't helpful in booleans, but since only 7% of the records will be true, I thought it might not require a table scan on every query.
Perhaps look at using a [partial index](http://www.postgresql.org/docs/8.0/static/indexes-partial.html). **From docs** > A partial index is an index built over a subset of a table; the subset > is defined by a conditional expression (called the predicate of the > partial index). The index contains entries for only those table rows > that satisfy the predicate. > > A major motivation for partial indexes is to avoid indexing common > values. Since a query searching for a common value (one that accounts > for more than a few percent of all the table rows) will not use the > index anyway, there is no point in keeping those rows in the index at > all. This reduces the size of the index, which will speed up queries > that do use the index. It will also speed up many table update > operations because the index does not need to be updated in all cases. > Example 11-1 shows a possible application of this idea.
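The partial-index idea can be sketched with Python's sqlite3, since SQLite (3.8.0+) also supports partial indexes; the table and flag names here are illustrative, not from the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, is_special INTEGER)")
# Roughly 7% of rows flagged, mirroring the question's proportions.
conn.executemany("INSERT INTO big (is_special) VALUES (?)",
                 [(1 if i % 14 == 0 else 0,) for i in range(1000)])

# The index covers only the flagged rows, so it stays small and
# cheap to maintain; queries on the flag can use it.
conn.execute("CREATE INDEX idx_special ON big (id) WHERE is_special = 1")

n_special = conn.execute(
    "SELECT COUNT(*) FROM big WHERE is_special = 1").fetchone()[0]
print(n_special)
```

In PostgreSQL the CREATE INDEX ... WHERE syntax is the same; `EXPLAIN` can then confirm whether the planner picks the partial index for flag-restricted queries.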
I would be looking at partitioning, if you have a substantial proportion of the table that you want to access efficiently.
Indexing and performance Implications of moving small table into big table
[ "", "sql", "postgresql", "indexing", "" ]
I have a database with over 1000 tables (e.g. Customers). I have a query requiring details of a known list of customers (e.g. by CustomerID: 1, 79, 14, 100, 123). The IN() function is what I would like to use for the query. I know that to find customers that match the list, I would write: ``` SELECT * FROM Customers WHERE CustomerID IN (1,79,14,100,123) ``` To find those that are not in the list, I would write ``` SELECT * FROM Customers WHERE CustomerID NOT IN (1,79,14,100,123) ``` **Question** **How do I find the values in my list that were NOT matched, i.e. that found no match in the table?** Suppose the Customers table only has (1,79,100). Then 14 and 123 will not be matched. How do I find those values that do not find a match? I was simplifying in my example; my list of items has over 300 IDs, so a `WHERE` condition with a long list of `OR`s would be cumbersome. I have thought of combining my list with a `LEFT JOIN` and identifying the NULL paired values, which would be 14 and 123. Is there a more elegant approach?
You can use a derived table or temporary table for example to hold the list of `CustomerId` then find the non matching ones with `EXCEPT`. The below uses a [table value constructor](http://technet.microsoft.com/en-us/library/dd776382.aspx) as a derived table (compatible with SQL Server 2008+) ``` SELECT CustomerId FROM (VALUES(1), (79), (14), (100), (123)) V(CustomerId) EXCEPT SELECT CustomerId FROM Customers ```
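The same EXCEPT-against-a-value-list idea is portable; a quick check with Python's sqlite3 (SQLite names the columns of a bare VALUES list column1, column2, ...):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE Customers (CustomerID INTEGER)")
# Table only has 1, 79 and 100, as in the question's example.
conn.executemany("INSERT INTO Customers VALUES (?)", [(1,), (79,), (100,)])

# IDs from the hardcoded list that find no match in the table.
missing = conn.execute("""
    SELECT column1 FROM (VALUES (1), (79), (14), (100), (123))
    EXCEPT
    SELECT CustomerID FROM Customers
    ORDER BY 1
""").fetchall()
print(missing)
```

The result is the two unmatched IDs, 14 and 123, which is exactly what the question asks for.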
In case anyone stumbles upon this question and is wondering how to do this in PostgreSQL ``` VALUES (1),(79),(14),(100),(123) EXCEPT ALL SELECT "CustomerId" from "Customers"; ```
SELECT those not found in IN() list
[ "", "sql", "sql-server", "t-sql", "" ]
Suppose I had a list where each list element was made up of three parts, like: ``` [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [0, 0, 3], [0, 1, 2], [0, 2, 1], [0, 3, 0], [1, 0, 2], [1, 1, 1], [1, 2, 0], [2, 0, 1], [2, 1, 0], [3, 0, 0], [0, 0, 4], [0, 1, 3], [0, 2, 2], [0, 3, 1], [0, 4, 0], [1, 0, 3], [1, 1, 2], [1, 2, 1], [1, 3, 0], [2, 0, 2], [2, 1, 1], [2, 2, 0], [3, 0, 1], [3, 1, 0], [4, 0, 0]] ``` Would there be a way to check what is inside each list element? For example, say I wanted to create a new list, based on the above, containing the index position of every element that contains two zeroes, and also a list of every element containing one zero; how would I do this? I am aware of how to check if a single value is in a list element, but not how to check whether it occurs twice.
You could use the `.count` method. ``` two_zeros = [x for x in lst if x.count(0) == 2] one_zero = [x for x in lst if x.count(0) == 1] ``` If you wanted to be really clever, you could do the whole thing in a single loop with a collections.defaultdict: ``` d = collections.defaultdict(list) for sublist in lst: d[sublist.count(0)].append(sublist) ``` Now you have a mapping of number of zeros to sublists which contain that number of zeros. Of course, if you actually want a list of the indices, you could use `enumerate`.
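Putting the two ideas above together, enumerate plus count groups the index positions by zero-count in one pass (shown on a truncated version of the question's list):

```python
import collections

lst = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 2], [0, 1, 1],
       [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [1, 1, 1]]

# Map: number of zeros in a sublist -> index positions of such sublists.
indices_by_zeros = collections.defaultdict(list)
for index, sublist in enumerate(lst):
    indices_by_zeros[sublist.count(0)].append(index)

print(indices_by_zeros[2])  # positions of sublists with exactly two zeros
print(indices_by_zeros[1])  # positions of sublists with exactly one zero
```

Asking for `indices_by_zeros[k]` then answers "which elements contain k zeroes" for any k without re-scanning the list.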
Use `count()`. From `help`: ``` count(...) L.count(value) -> integer -- return number of occurrences of value ``` Example Code (Part of your list) - ``` >>> startList = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [0, 0, 3], [0, 1, 2], [0, 2, 1]] >>> for element in startList: element.count(0) 2 2 2 2 1 2 1 1 2 2 1 1 ``` How to create your lists? Use the above idea with list comprehension. ``` >>> twoZero = [index for index, elem in enumerate(startList) if elem.count(0) == 2] >>> twoZero [0, 1, 2, 3, 5, 8, 9] >>> oneZero = [index for index, elem in enumerate(startList) if elem.count(0) == 1] >>> oneZero [4, 6, 7, 10, 11] ``` This is for a part of your list.
check two values in list element python
[ "", "python", "" ]
I'm a little new to the site and have been teaching myself Microsoft SQL over the past week to cover for work, and I was hoping you guys could help. I have to pull all active patients for a type of group, and the only way to tell if patients are active is if there is no discharge date for the patient. My code is the following. ``` select p.clinic_id, p.program_id, p.protocol_id, p.patient_id, p.date_discharged from patient_assignment p where p.program_id = 'MH' and p.protocol_id = 'grp' and p.date_discharged IS NULL ``` (I have also tried date\_discharge = 'Null'.) It works without this line, but it is not pulling in any information, even though when I view the field I see there are null values. I'm sure this is something simple; I am just very green with this. Thanks in advance.
try `p.date_discharged IS NULL`: you've aliased your table name and used the alias everywhere except there, as well as using the wrong column name. The `invalid column name` error was a good hint.
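The IS NULL requirement is worth seeing in action: in SQL, `NULL = NULL` evaluates to NULL (unknown), so an equality test never matches a missing value. A quick SQLite demonstration, with column names following the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE patient_assignment (patient_id INTEGER, date_discharged TEXT)")
conn.executemany("INSERT INTO patient_assignment VALUES (?, ?)",
                 [(1, '2013-01-15'), (2, None), (3, None)])

# Comparing with = NULL matches nothing; IS NULL is required.
wrong = conn.execute(
    "SELECT patient_id FROM patient_assignment "
    "WHERE date_discharged = NULL").fetchall()
active = conn.execute(
    "SELECT patient_id FROM patient_assignment "
    "WHERE date_discharged IS NULL").fetchall()
print(wrong)
print(active)
```

The `= NULL` query returns an empty set even though two rows clearly have no discharge date; only `IS NULL` finds them.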
Try this : ``` select p.clinic_id, p.program_id, p.protocol_id, p.patient_id, p.date_discharged from patient_assignment p where p.program_id = 'MH' and p.protocol_id = 'grp' and date_discharged IS NULL ``` You missed date\_discharged with date\_discharge.
Pulling Null date (Microsoft SQL)
[ "", "sql", "sql-server", "null", "" ]
I am getting an error when running a python program: ``` Traceback (most recent call last): File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 110, in <module> File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 27, in __init__ File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\class\inventory.py", line 17, in __init__ builtins.NameError: global name 'xrange' is not defined ``` The game is from [here](https://github.com/linkey11/Necromonster). What causes this error?
You are trying to run a Python 2 codebase with Python 3. [`xrange()`](https://docs.python.org/2/library/functions.html#xrange) was renamed to [`range()`](https://docs.python.org/3/library/functions.html#func-range) in Python 3. Run the game with Python 2 instead. Don't try to port it unless you know what you are doing, most likely there will be more problems beyond `xrange()` vs. `range()`. For the record, what you are seeing is not a syntax error but a runtime exception instead. --- If you do know what your are doing and are actively making a Python 2 codebase compatible with Python 3, you can bridge the code by adding the global name to your module as an alias for `range`. (Take into account that you *may* have to update any existing `range()` use in the Python 2 codebase with `list(range(...))` to ensure you still get a list object in Python 3): ``` try: # Python 2 xrange except NameError: # Python 3, xrange is now named range xrange = range # Python 2 code that uses xrange(...) unchanged, and any # range(...) replaced with list(range(...)) ``` or replace all uses of `xrange(...)` with `range(...)` in the codebase and then use a different shim to make the Python 3 syntax compatible with Python 2: ``` try: # Python 2 forward compatibility range = xrange except NameError: pass # Python 2 code transformed from range(...) -> list(range(...)) and # xrange(...) -> range(...). ``` The latter is preferable for codebases that want to aim to be Python 3 compatible *only* in the long run, it is easier to then just use Python 3 syntax whenever possible.
Add `xrange = range` to your code :) It works for me.
NameError: global name 'xrange' is not defined in Python 3
[ "", "python", "python-3.x", "range", "runtimeexception", "xrange", "" ]
So here is code I have written to find palindromes within a word (to check if there are palindromes within a word, including the word itself). Condition: spaces in between characters are counted and not ignored. Example: 'A but tuba' is a palindrome, but technically, due to the spaces involved, now it isn't. So that's the criteria. Based on the above, the following code should usually work. You can try it on your own with different tests to check whether this code gives any errors. ``` def pal(text): """ param text: given string or test return: returns index of longest palindrome and a list of detected palindromes stored in temp """ lst = {} index = (0, 0) length = len(text) if length <= 1: return index word = text.lower() # Trying to make the whole string lower case temp = str() for x, y in enumerate(word): # Try to enumerate over the word t = x for i in xrange(x): if i != t+1: string = word[i:t+1] if string == string[::-1]: temp = text[i:t+1] index = (i, t+1) lst[temp] = index tat = lst.keys() longest = max(tat, key=len) #print longest return lst[longest], temp ``` And here is a defunct version of it. What I mean is, I have tried to start out from the middle and detect palindromes by iterating outward, checking the characters at each higher and lower index for equality; if they are equal, then I check if it's a palindrome like a regular palindrome check.
here's what I have done ``` def pal(t): text = t.lower() lst = {} ptr = '' index = (0, 0) #mid = len(text)/2 #print mid dec = 0 inc = 0 for mid, c in enumerate(text): dec = mid - 1 inc = mid + 1 while dec != 0 and inc != text.index(text[-1]): print 'dec {}, inc {},'.format(dec, inc) print 'text[dec:inc+1] {}'.format(text[dec:inc+1]) if dec<0: dec = 0 if inc > text.index(text[-1]): inc = text.index(text[-1]) while text[dec] != text[inc]: flo = findlet(text[inc], text[:dec]) fhi = findlet(text[dec], text[inc:]) if len(flo) != 0 and len(fhi) != 0 and text[flo[-1]] == text[fhi[0]]: dec = flo[-1] inc = fhi[0] print ' break if' break elif len(flo) != 0 and text[flo[-1]] == text[inc]: dec = flo[-1] print ' break 1st elif' break elif len(fhi) != 0 and text[fhi[0]] == text[inc]: inc = fhi[0] print ' break 2nd elif' break else: dec -= 1 inc += 1 print ' break else' break s = text[dec:inc+1] print ' s {} '.format(s) if s == s[::-1]: index = (dec, inc+1) lst[s] = index if dec > 0: dec -= 1 if inc < text.index(text[-1]): inc += 1 if len(lst) != 0: val = lst.keys() longest = max(val, key = len) return lst[longest], longest, val else: return index ``` findlet() fun: ``` def findlet(alpha, string): f = [i for i,j in enumerate(string) if j == alpha] return f ``` Sometimes it works: ``` pal('madem') dec -1, inc 1, text[dec:inc+1] s m dec 1, inc 3, text[dec:inc+1] ade break 1st elif s m dec 2, inc 4, text[dec:inc+1] dem break 1st elif s m dec 3, inc 5, text[dec:inc+1] em break 1st elif s m Out[6]: ((0, 1), 'm', ['m']) pal('Avid diva.') dec -1, inc 1, text[dec:inc+1] break 2nd if s avid div dec 1, inc 3, text[dec:inc+1] vid break else s avid dec 2, inc 4, text[dec:inc+1] id break else s vid d dec 3, inc 5, text[dec:inc+1] d d s d d dec 2, inc 6, text[dec:inc+1] id di s id di dec 1, inc 7, text[dec:inc+1] vid div s vid div dec 4, inc 6, text[dec:inc+1] di break 1st elif s id di dec 1, inc 7, text[dec:inc+1] vid div s vid div dec 5, inc 7, text[dec:inc+1] div break 1st elif s vid div 
dec 6, inc 8, text[dec:inc+1] iva break 1st elif s avid diva dec 8, inc 10, text[dec:inc+1] a. break else s va. dec 6, inc 10, text[dec:inc+1] iva. break else s diva. dec 4, inc 10, text[dec:inc+1] diva. break else s d diva. dec 2, inc 10, text[dec:inc+1] id diva. break else s vid diva. Out[9]: ((0, 9), 'avid diva', ['avid diva', 'd d', 'id di', 'vid div']) ``` And based on the Criteria/Condition i have put: ``` pal('A car, a man, a maraca.') dec -1, inc 1, text[dec:inc+1] break else s dec -1, inc 3, text[dec:inc+1] s a ca dec 1, inc 3, text[dec:inc+1] ca break if s a ca dec 2, inc 4, text[dec:inc+1] car break else s car, dec 3, inc 5, text[dec:inc+1] ar, break else s car, dec 1, inc 7, text[dec:inc+1] car, a break 1st elif s a car, a dec 4, inc 6, text[dec:inc+1] r, break 1st elif s car, dec 5, inc 7, text[dec:inc+1] , a break 1st elif s ar, a dec 2, inc 8, text[dec:inc+1] car, a break 1st elif s car, a dec 6, inc 8, text[dec:inc+1] a s a dec 5, inc 9, text[dec:inc+1] , a m break else s r, a ma dec 3, inc 11, text[dec:inc+1] ar, a man break else s car, a man, dec 1, inc 13, text[dec:inc+1] car, a man, s car, a man, dec 7, inc 9, text[dec:inc+1] a m break else s a ma dec 5, inc 11, text[dec:inc+1] , a man break else s r, a man, dec 3, inc 13, text[dec:inc+1] ar, a man, break if s dec 8, inc 10, text[dec:inc+1] ma break if s dec 6, inc 4, text[dec:inc+1] break 1st elif s r dec 3, inc 5, text[dec:inc+1] ar, break else s car, dec 1, inc 7, text[dec:inc+1] car, a break 1st elif s a car, a dec 9, inc 11, text[dec:inc+1] man break else s man, dec 7, inc 13, text[dec:inc+1] a man, break if s dec 5, inc 2, text[dec:inc+1] break 1st elif s c dec 1, inc 3, text[dec:inc+1] ca break if s a ca dec 10, inc 12, text[dec:inc+1] an, break 1st elif s , a man, dec 4, inc 13, text[dec:inc+1] r, a man, break 1st elif s car, a man, dec 11, inc 13, text[dec:inc+1] n, break 1st elif s man, dec 7, inc 14, text[dec:inc+1] a man, a s a man, a dec 6, inc 15, text[dec:inc+1] a man, a s a man, 
a dec 5, inc 16, text[dec:inc+1] , a man, a m break else s r, a man, a ma dec 3, inc 18, text[dec:inc+1] ar, a man, a mar break else s car, a man, a mara dec 1, inc 20, text[dec:inc+1] car, a man, a marac break else s a car, a man, a maraca dec 12, inc 14, text[dec:inc+1] , a break 1st elif s an, a dec 9, inc 15, text[dec:inc+1] man, a break if s dec 7, inc 2, text[dec:inc+1] break 1st elif s c dec 1, inc 3, text[dec:inc+1] ca break if s a ca dec 13, inc 15, text[dec:inc+1] a s a dec 12, inc 16, text[dec:inc+1] , a m break 1st elif s man, a m dec 8, inc 17, text[dec:inc+1] man, a ma break 1st elif s a man, a ma dec 6, inc 18, text[dec:inc+1] a man, a mar break 1st elif s r, a man, a mar dec 3, inc 19, text[dec:inc+1] ar, a man, a mara s ar, a man, a mara dec 2, inc 20, text[dec:inc+1] car, a man, a marac s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca break 1st elif s a car, a man, a maraca dec 14, inc 16, text[dec:inc+1] a m break 1st elif s man, a m dec 8, inc 17, text[dec:inc+1] man, a ma break 1st elif s a man, a ma dec 6, inc 18, text[dec:inc+1] a man, a mar break 1st elif s r, a man, a mar dec 3, inc 19, text[dec:inc+1] ar, a man, a mara s ar, a man, a mara dec 2, inc 20, text[dec:inc+1] car, a man, a marac s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca break 1st elif s a car, a man, a maraca dec 15, inc 17, text[dec:inc+1] ma break 1st elif s a ma dec 13, inc 18, text[dec:inc+1] a mar break 1st elif s r, a man, a mar dec 3, inc 19, text[dec:inc+1] ar, a man, a mara s ar, a man, a mara dec 2, inc 20, text[dec:inc+1] car, a man, a marac s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca break 1st elif s a car, a man, a maraca dec 16, inc 18, text[dec:inc+1] mar break 1st elif s r, a man, a mar dec 3, inc 19, text[dec:inc+1] ar, a man, a mara s ar, a man, a mara dec 2, inc 20, text[dec:inc+1] car, a man, a marac s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca 
break 1st elif s a car, a man, a maraca dec 17, inc 19, text[dec:inc+1] ara s ara dec 16, inc 20, text[dec:inc+1] marac break 1st elif s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca break 1st elif s a car, a man, a maraca dec 18, inc 20, text[dec:inc+1] rac break 1st elif s car, a man, a marac dec 1, inc 21, text[dec:inc+1] car, a man, a maraca break 1st elif s a car, a man, a maraca dec 19, inc 21, text[dec:inc+1] aca s aca dec 21, inc 23, text[dec:inc+1] a. break else s ca. dec 19, inc 23, text[dec:inc+1] aca. break else s raca. dec 17, inc 23, text[dec:inc+1] araca. break else s maraca. dec 15, inc 23, text[dec:inc+1] maraca. break else s a maraca. dec 13, inc 23, text[dec:inc+1] a maraca. break else s , a maraca. dec 11, inc 23, text[dec:inc+1] n, a maraca. break else s an, a maraca. dec 9, inc 23, text[dec:inc+1] man, a maraca. break else s man, a maraca. dec 7, inc 23, text[dec:inc+1] a man, a maraca. break else s a man, a maraca. dec 5, inc 23, text[dec:inc+1] , a man, a maraca. break else s r, a man, a maraca. dec 3, inc 23, text[dec:inc+1] ar, a man, a maraca. break else s car, a man, a maraca. dec 1, inc 23, text[dec:inc+1] car, a man, a maraca. break else s a car, a man, a maraca. Out[8]: ((13, 16), ' a ', ['', ' a ', 'c', ' ', 'aca', 'ara', 'r']) ``` Sometimes, it doesn't work at all: ``` pal('madam') dec -1, inc 1, text[dec:inc+1] s m dec 1, inc 3, text[dec:inc+1] ada break 1st elif s m dec 2, inc 4, text[dec:inc+1] dam break 1st elif s m dec 3, inc 5, text[dec:inc+1] am break 1st elif s m Out[5]: ((0, 1), 'm', ['m']) ``` Now considering madam is a very nice palindrome it should work and there are many cases which i haven't tested myself to find out what other legitimate palindromes it doesn't detect. Q1: Why is it sometimes not detecting? Q2: I would like to optimize my second code for that matter. Any inputs? Q3: What better approach is there for a much much more efficient code than my First code which iterates many a times?
Your solution seems a bit complicated to me. Just look at all of the possible substrings and check them individually: ``` def palindromes(text): text = text.lower() results = [] for i in range(len(text)): for j in range(0, i): chunk = text[j:i + 1] if chunk == chunk[::-1]: results.append(chunk) return text.index(max(results, key=len)), results ``` `text.index()` will only find the first occurrence of the longest palindrome, so if you want the last, replace it with `text.rindex()`.
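If the all-substrings scan above gets slow on longer inputs (checking every substring is O(n^3) overall), the usual next step is the expand-around-center approach, which finds the longest palindrome in O(n^2) time and O(1) extra space. A sketch, case-insensitive like the code above:

```python
def longest_palindrome(text):
    """Longest palindromic substring via expand-around-center."""
    text = text.lower()
    best = text[:1]
    for center in range(len(text)):
        # Odd-length palindromes (single-char center) and
        # even-length palindromes (two-char center).
        for lo, hi in ((center, center), (center, center + 1)):
            while lo >= 0 and hi < len(text) and text[lo] == text[hi]:
                lo -= 1
                hi += 1
            candidate = text[lo + 1:hi]
            if len(candidate) > len(best):
                best = candidate
    return best

print(longest_palindrome('madam'))      # -> madam
print(longest_palindrome('avid diva'))  # -> avid diva
```

Spaces are treated as ordinary characters here, matching the question's criteria that whitespace is counted and not ignored.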
If you like the recursive solution, I have written a recursive version. It is also intuitive. ``` def palindrome(s): if len(s) <= 1: return s elif s[0] != s[-1]: beginning_palindrome = palindrome(s[:-1]) ending_palindrome = palindrome(s[1:]) if len(beginning_palindrome) >= len(ending_palindrome): return beginning_palindrome else: return ending_palindrome else: middle_palindrome = palindrome(s[1:-1]) if len(middle_palindrome) == len(s[1:-1]): return s[0] + middle_palindrome + s[-1] else: return middle_palindrome ```
Python: search longest palindromes within a word and palindromes within a word/string
[ "", "python", "palindrome", "" ]
I have a query that returns a few columns of information. I want to hardcode a list, and have each row returned with each value from my list. So, currently my `SELECT` is returning, for example, ``` ID Name Value 1 Mike 404 2 John 404 ``` And let's say, for example, I wish to add a column to my `SELECT` so that a manager's name is also returned with each row. So I have a set of managers, which I want to write myself into the SELECT statement (i.e. these are not returned from any external source; I want to hardcode them into my `SELECT`): {'Steve', 'Bill'}. What I now want returned is: ``` ID Name Value Manager 1 Mike 404 Steve 2 John 404 Steve 1 Mike 404 Bill 2 John 404 Bill ``` Is it possible to do this? If so, how? :) Thanks a lot.
One way is using `UNION ALL` and [`CROSS APPLY`](https://stackoverflow.com/questions/1139160/when-should-i-use-cross-apply-over-inner-join) to join them with your rows: ``` SELECT p.ID, p.Name ,p.Value, x.Col AS Manager FROM dbo.Persons p CROSS APPLY (SELECT Col FROM (SELECT 'Steve' UNION ALL SELECT 'Bill')AS T(Col))X ``` If the managers are a comma seperated list you need a split function in SQL-Server 2005, for example: ``` CREATE FUNCTION [dbo].[Split] ( @ItemList NVARCHAR(MAX), @delimiter CHAR(1) ) RETURNS @IDTable TABLE (Item VARCHAR(50)) AS BEGIN DECLARE @tempItemList NVARCHAR(MAX) SET @tempItemList = @ItemList DECLARE @i INT DECLARE @Item NVARCHAR(4000) SET @tempItemList = REPLACE (@tempItemList, ' ', '') SET @i = CHARINDEX(@delimiter, @tempItemList) WHILE (LEN(@tempItemList) > 0) BEGIN IF @i = 0 SET @Item = @tempItemList ELSE SET @Item = LEFT(@tempItemList, @i - 1) INSERT INTO @IDTable(Item) VALUES(@Item) IF @i = 0 SET @tempItemList = '' ELSE SET @tempItemList = RIGHT(@tempItemList, LEN(@tempItemList) - @i) SET @i = CHARINDEX(@delimiter, @tempItemList) END RETURN END ``` Now this works: ``` SELECT p.ID, p.Name ,p.Value, s.Item AS Manager FROM dbo.Persons p CROSS APPLY dbo.Split('Steve,Bill', ',')s ``` Result (SQL-Fiddle is down): ``` ID Name Value Manager 1 Mike 404 Steve 2 John 404 Steve 1 Mike 404 Bill 2 John 404 Bill ```
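The value-constructor cross join is easy to verify outside SQL Server as well; SQLite accepts a very similar form, exposing the constructor's column as column1 (sample data follows the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE Persons (ID INTEGER, Name TEXT, Value INTEGER)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?)",
                 [(1, 'Mike', 404), (2, 'John', 404)])

# Cross join the table against a hardcoded list of managers.
rows = conn.execute("""
    SELECT p.ID, p.Name, p.Value, m.column1 AS Manager
    FROM Persons p
    CROSS JOIN (VALUES ('Steve'), ('Bill')) m
    ORDER BY m.column1 DESC, p.ID
""").fetchall()
for row in rows:
    print(row)
```

Every person row is repeated once per hardcoded manager, giving the 2 x 2 = 4 rows the question describes.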
This query will give you exactly the result you want: ``` select ID,Name,Value, Manager from Test cross join (select 'Bill' Manager union select 'Steve' Manager) t1 ```
Creating a superset in SQL Server
[ "", "sql", "sql-server", "database", "sql-server-2005", "select", "" ]
I have 10 QLineEdit widgets designed in `qt-designer`, and I define the signals & slots in my code. My problem is with repeating the same signal & slot connection 10 times in my code, something like this: ``` self.ui.descCode_lineEdit_1.cursorPositionChanged.connect(lambda: self.Desc(1)) self.ui.Desc_lineEdit_1.editingFinished.connect(lambda: self.DescRate(1)) self.ui.descCode_lineEdit_2.cursorPositionChanged.connect(lambda: self.Desc(2)) self.ui.Desc_lineEdit_2.editingFinished.connect(lambda: self.DescRate(2)) ... self.ui.descCode_lineEdit_10.cursorPositionChanged.connect(lambda: self.Desc(10)) self.ui.Desc_lineEdit_10.editingFinished.connect(lambda: self.DescRate(10)) ``` I rewrote these with `getattr`: ``` for i in range(1, 11): getattr(self.ui, 'descCode_lineEdit_{}'.format(i)).cursorPositionChanged.connect(lambda: self.Desc(i)) getattr(self.ui, 'Desc_lineEdit_{}'.format(i)).editingFinished.connect(lambda: self.DescRate(i)) ``` But when I run the code, only the 10th widget's signal & slot works; the other fields do nothing. Any idea how to fix this part of the code and avoid repeating the same signals? Thanks.
the problem here lies within the lambda, and the i. Later on only the last value of i will be saved. Hence try it like this: **Minimal *not working* example**: ``` >>> def test(a): ... print a ... x = [] ... for i in range(2): ... x.append(lambda: test(i)) ... x 14: [<function <lambda> at 0x027FBA30>, <function <lambda> at 0x027FBAF0>] >>> x[0]() # is supposed to print 0 not 1 1 >>> x[1]() 1 ``` **Minimal working example**: ``` >>> def test(a): ... print a ... x = [] ... for i in range(2): ... x.append((lambda x: (lambda:test(x)))(i) ) ... x 16: [<function <lambda> at 0x027FBBF0>, <function <lambda> at 0x027FBBB0>] >>> x[0]() 0 >>> x[1]() 1 ``` **What is happening?** In the second example, I have my outter `lambda` function return a function. The outter `lambda` function will be evaluated with the input `i` returning a function that has this `i` built into it. Hence the value of `i` is individually saved and things work as intended. **Solution** Hence your solution could look something like this: ``` for i in range(1, 11): ...connect((lambda x:(lambda: self.Desc(x)))(i)) ...connect((lambda x:(lambda: self.DescRate(x)))(i)) ``` Cheers!
It's probably easier to use a data structure. ``` fields = [QLineEdit() for x in range(10)] for i in range(10): fields[i].cursorPositionChanged.connect(lambda: self.Desc(i+1)) fields[i].editingFinished.connect(lambda:self.DescRate(i+1)) ```
repeated signal and slot for same widgets in pyqt4
[ "", "python", "python-3.x", "pyqt4", "" ]
I have a MSSQL database named "Database". Now when I try to rename it using the query shown below, ``` USE master; GO ALTER DATABASE Database Modify Name = Database01 GO ``` it gives me this error message: > Msg 102, Level 15, State 1, Line 1 Incorrect syntax near 'Database'. But this query works fine for other databases. What am I doing wrong?
If you "quote" the table name it should work. The default quote characters are square brackets [], so: ``` USE master; GO ALTER DATABASE [Database] Modify Name = Database01 GO ```
Instead of the longer `ALTER DATABASE` statement you can use the built-in system stored procedure: `sp_renamedb 'oldDBName', 'NewDBName'`. Note that `sp_renamedb` is deprecated; Microsoft recommends `ALTER DATABASE ... MODIFY NAME` instead.
How to rename a MSSQL Database that has name "Database"?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
I'm trying to run multiple DDLs (around 90) on an SQL Server. The DDLs don't contain any changes to tables, only view, stored procedures, and functions. The DDLs might have inter-dependencies between them, one STP that calls another, for example. I don't want to start organizing the files in the correct order, because it would take too long, and I want the entire operation to fail if any one of the scripts has an error. How can I achieve this? My idea so far, is to start a transaction, tell the SQL to ignore errors (which I don't know how to do) run all the scripts once, tell the SQL to start throwing errors again, run all the scripts again, and then commit if everything succeeds. 1. Is this a good idea? 2. How do I `CREATE \ ALTER` a stored procedure or view even though it has errors? --- To clarify and address some concerns... This is not intended for production. I just don't want to leave the DB I'm testing on broken. What I would like to achieve is this: run a big group of scripts on the server, without taking the time to order them. But if any of the scripts has an error in it, I want to rollback the entire operation. I don't care about isolation, I only want the operation to happen as a single transaction.
We actually do something like this to deploy our database scripts to production. We do this in an application that connects to our databases. To add to the complication, we also have 600 databases that should have the same schema, but don't really. Here's our approach: 1. Merge all our scripts into one big file. Injecting go's in between every single file. This makes it look like there's one very long script. We do a simple ordering based on what the coders requested. 2. Split everything into "go blocks". Since go isn't legal sql, we split them up into multiple blocks that get executed one at a time. 3. Open a database connection. 4. Start a transaction. 5. for each go block: 1. Make sure the transaction is still active. (This is VERY important. I'll explain why in a bit.) 2. Run the code, recording the errors. 6. If there were any errors, rollback. Otherwise, commit. In our multi database set up, we do this whole thing twice. Run through every database once, "testing" the code to make sure there are no errors on any database, and then go back and run them again "for real". Now on to why you need to make sure the transaction is still active. There are some commands that will rollback your transaction on error! Imagine our surprise the first time we found this out... Everything before the error was rolled back, but everything after was committed. If there is an error, however, nothing in that same block gets committed, so it's all good. Below is our core of our execution code. We use a wrapper around SqlClient, but it should look very similar to SqlClient. ``` Dim T = New DBTransaction(client) For Each block In scriptBlocks If Not T.RestartIfNecessary Then exceptionCount += 1 Log("Could not (re)start the transaction for {0}. 
Not executing the rest of the script.", scriptName) Exit For End If Debug.Assert(T.IsInTransaction) Try client.Text = block client.ExecNonQuery() Catch ex As Exception exceptionCount += 1 Log(ex.Message + " on {0} executing: '{1}'", client.Connection.Database, block.Replace(vbNewLine, "")) End Try Next If exceptionCount > 0 Then Log("There were {0} exceptions while executing {1}.", exceptionCount, scriptName) If testing OrElse exceptionCount > 0 Then Try T.Rollback() Log("Rolled back all changes for {0} on {1}.", scriptName, client.Connection.Database) Catch ex As Exception Log("Could not roll back {0} on {1}: {2}", scriptName, client.Connection.Database, ex.Message) If Debugger.IsAttached Then Debugger.Break() End If End Try Else T.Commit() Log("Successfully committed all changes for {0} on {1}.", scriptName, client.Connection.Database) End If Return exceptionCount Class DBTransaction Private _tName As String Public ReadOnly Property name() As String Get Return _tName End Get End Property Private _client As OB.Core2.DB.Client Public Sub New(client As OB.Core2.DB.Client, Optional name As String = Nothing) If name Is Nothing Then name = "T" & Guid.NewGuid.ToString.Replace("-", "").Substring(0, 30) End If _tName = name _client = client End Sub Public Function Begin() As Boolean Return RestartIfNecessary() End Function Public Function RestartIfNecessary() As Boolean Try _client.Text = "IF NOT EXISTS (Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "') BEGIN BEGIN TRANSACTION " & name & " END" _client.ExecNonQuery() Return IsInTransaction() Catch ex As Exception Return False End Try End Function Public Function IsInTransaction() As Boolean _client.Text = "Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "'" Dim scalar As String = _client.ExecScalar Return scalar <> "" End Function Public Sub Rollback() _client.Text = "ROLLBACK TRANSACTION " & name _client.ExecNonQuery() End Sub Public Sub Commit() 
_client.Text = "COMMIT TRANSACTION " & name _client.ExecNonQuery() End Sub End Class ```
Organize the files in the correct order, test the procedure on a test environment, have a validation and acceptance test, then run it in production. While running DDL in a transaction may seem possible, in practice it is not. There are many DDL statements that don't mix well with transactions. You must take the application offline, take a database backup (or create a snapshot) before the schema changes, run the tested and verified upgrade procedure (your scripts), validate the result with acceptance tests, and then bring the application back online. If something fails, revert to the backup created initially (with all the implications vis-à-vis any downstream log consumer like replication, log shipping or mirroring). This is the correct way, and as far as I'm concerned the only way. I know you'll find plenty of advice on how to do this the wrong way.
SQL Server Multiple DDLs Ignoring Order And in a Single Transaction
[ "", "sql", "sql-server", "t-sql", "ddl", "" ]
I have a table, Table A, and in Table A I have Field A. There are values in Field A like the following: ``` Street A Street B ,Street C Street D etc ``` I would like to know if there is any SQL that will remove the first character from Field A where that character is a ",". I have no idea where to start. I can select all the rows which have a "," in Field A, but I don't know how to remove it.
``` UPDATE YourTable SET YourCol = SUBSTRING(YourCol, 2, 0+0x7fffffff) WHERE YourCol LIKE ',%' ``` The third argument to `SUBSTRING` just needs to be at least the remaining length; `0+0x7fffffff` is simply the maximum 32-bit integer, so the whole tail of the string is kept.
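The same idea in SQLite (shown through Python's `sqlite3`, since SQL Server isn't available here) — SQLite's `SUBSTR` makes the length argument optional, so no max-int trick is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("Street A",), (",Street C",), ("Street D",)])
# Drop the leading comma wherever one exists.
conn.execute("UPDATE t SET a = SUBSTR(a, 2) WHERE a LIKE ',%'")
rows = sorted(r[0] for r in conn.execute("SELECT a FROM t"))
assert rows == ["Street A", "Street C", "Street D"]
```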
If you'd rather not care about the length, STUFF is the right candidate : ``` UPDATE YourTable SET YourCol = STUFF(YourCol, 1, 1, '') WHERE YourCol LIKE ',%' ```
How to remove the 1st character from a column in SQL Server
[ "", "sql", "sql-server-2008", "" ]
I need help in inserting a character inside a string, e.g.: `031613 05:39 AM` The output should be: `03/16/13 05:39 AM`
You can use `STUFF` ``` DECLARE @String NVARCHAR(20) = '031613 05:39 AM' SELECT STUFF(STUFF(@String,3,0,'/'),6,0,'/') ``` [Fiddle](http://sqlfiddle.com/#!3/d41d8/15786)
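SQLite has no `STUFF`, but the same transformation can be checked with plain string slicing or with `SUBSTR` concatenation (a sketch using Python's `sqlite3`; `?1` reuses the same bound parameter):

```python
import sqlite3

s = "031613 05:39 AM"
# Pure-Python equivalent of the two nested STUFF calls:
assert s[:2] + "/" + s[2:4] + "/" + s[4:] == "03/16/13 05:39 AM"

# SQLite spelling with SUBSTR concatenation.
conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT SUBSTR(?1, 1, 2) || '/' || SUBSTR(?1, 3, 2) || '/' || SUBSTR(?1, 5)",
    (s,)).fetchone()
assert row[0] == "03/16/13 05:39 AM"
```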
How about using [SUBSTRING](http://msdn.microsoft.com/en-us/library/ms187748.aspx)? ``` DECLARE @String VARCHAR(50) = '031613 05:39 AM' SELECT @String, SUBSTRING(@String,1,2) + '/' + SUBSTRING(@String,3,2) + '/' + SUBSTRING(@String,5,2) + SUBSTRING(@String,7,LEN(@String)-6) ``` ## [SQLFiddle DEMO](http://sqlfiddle.com/#!3/d41d8/15784)
How to insert character in between a string in SQL Server 2008?
[ "", "sql", "sql-server", "string", "" ]
I am trying to figure out some ways to accomplish this script. I import an excel sheet and then I need to populate 5 different tables based on this excel sheet. However for this example I just need help with the initial loop then I think I can work through the rest. ``` select distinct Department from IPACS_New_MasterList where Department is not null ``` This provides me a list of 7 different departments. ``` Dep1, Dep2, Dep3, Dep4, Dep5, Dep6, Dep7 ``` For each of these departments I need to perform some code. Step #1: Insert the department into `table_one` I then need to keep the `SCOPE_IDENTITY()` for the rest of the code. Step #2 perform the second loop (inserting all functions in that department into `table2`. I'm not sure how to really do a foreach row in this select statement loop, or if I need to do something completely different. I've looked at several answers but can't seem to find exactly what I'm looking for. Sample Data: Source Table ``` Dep1, func1, process1, procedure1 dep1, func1, process1, procedure2 dep1, func1, process2, procedure3 dep1, func1, process2, procedure4 dep1, func1, process2, procedure5 dep1, func2, process3, procedure6 dep2, func3, process4, procedure7 ``` My Tables: My first table is a list of every department from the above query. With a key on the `departmentID`. Each department can have many functions. My second table is a list of all functions with a key on `functionID` and a foreign key on `departmentID`. Each function must have 1 department and can have many processes My third table is a list of all processes with a key on `processID` and a foreign key on `functionID`. Each process must have 1 function and can have many procedures.
There are two approaches you can use without a loop. 1) If you have candidate keys in your source (department name) just join your source table back to the table you inserted e.g. ``` INSERT INTO Department (Name) SELECT DISTINCT Dep1 FROM SOURCE; INSERT INTO Functions ( Name, DepartmentID) SELECT DISTINCT s.Func1, d.DepartmentID FROM source s INNER JOIN Department d on s.dep1 = d.name; INSERT INTO processes ( name, FunctionID, [Procedure] ) SELECT s.process1, f.FunctionID, s.procedure1 FROM source s INNER JOIN Department d on s.dep1 = d.name INNER JOIN Functions f on d.DepartmentID = f.departmentID and s.func1 = f.name; ``` [SQL Fiddle](http://sqlfiddle.com/#!6/147f2/6) 2) If you don't have candidate keys in your source then you can use the [output clause](http://msdn.microsoft.com/en-us/library/ms177564.aspx). For example, if a department weren't guaranteed to be unique, this would correctly find only the newly added rows: ``` DECLARE @Department TABLE ( DepartmentID INT ) DECLARE @Functions TABLE ( FunctionID INT ) INSERT INTO Department (Name) OUTPUT INSERTED.DepartmentID INTO @Department SELECT DISTINCT Dep1 FROM SOURCE INSERT INTO Functions ( Name, DepartmentID) OUTPUT INSERTED.FunctionID INTO @Functions SELECT DISTINCT s.Func1, d.DepartmentID FROM source s INNER JOIN Department d on s.dep1 = d.name INNER JOIN @Department d2 ON d.departmentID = d2.departmentID; INSERT INTO processes ( name, FunctionID, [Procedure] ) SELECT s.process1, f.FunctionID, s.procedure1 FROM source s INNER JOIN Department d on s.dep1 = d.name INNER JOIN Functions f on d.DepartmentID = f.departmentID and s.func1 = f.name INNER JOIN @Functions f2 ON f.FunctionID = f2.FunctionID SELECT * FROM Department; SELECT * FROM Functions; SELECT * FROM processes; ``` [SQL Fiddle](http://sqlfiddle.com/#!6/147f2/15)
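Approach 1 (re-joining the source on its candidate key to pick up generated ids) translated to SQLite through Python's `sqlite3`, with simplified two-level tables invented just for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source (dep TEXT, func TEXT);
INSERT INTO source VALUES ('dep1','func1'),('dep1','func1'),
                          ('dep1','func2'),('dep2','func3');
CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE functions  (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);

INSERT INTO department (name) SELECT DISTINCT dep FROM source;

-- Re-join the source to pick up the generated department ids.
INSERT INTO functions (name, dept_id)
  SELECT DISTINCT s.func, d.id
  FROM source s JOIN department d ON s.dep = d.name;
""")
assert conn.execute("SELECT count(*) FROM department").fetchone() == (2,)
assert conn.execute("SELECT count(*) FROM functions").fetchone() == (3,)
```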
Assuming you have tables set up with an IDENTITY field set for the Primary Key, you can populate each successive table's foreign key by joining to the previous table and the source table, something like: ``` INSERT INTO Table1 SELECT DISTINCT Department FROM SourceTable GO INSERT INTO Table2 SELECT DISTINCT b.Deptartment_ID, a.Function FROM SourceTable a JOIN Table1 b ON a.Department = b.Department GO INSERT INTO Table3 SELECT DISTINCT b.Function_ID, a.Process FROM SourceTable a JOIN Table2 b ON a.Function = b.Function GO INSERT INTO Table4 SELECT DISTINCT b.Process_ID, a.Procedure FROM SourceTable a JOIN Table3 b ON a.Process = b.Process GO ```
SQL Loop/Crawler
[ "", "sql", "sql-server", "t-sql", "" ]
So I came across a problem/question yesterday. I am building a chat (with AJAX) and use two tables: `TABLE users -> 'name', 'username', 'password', 'time'` `TABLE messages -> 'sendFrom', 'sendTo', 'message', 'time'` So an example message now would be `'foo' | 'bar' | 'Hey, how are you?' | 130611134427611` I was told the correct way to do this is, instead, to use an `ID` column, and use that as a Primary Key instead of the username (which, anyway, makes sense). OK, so now this looks like `TABLE users -> 'ID', 'name', 'username', 'password', 'time'` `TABLE messages -> 'sendFrom', 'sendTo', 'message', 'time'` So an example message now would be `'22' | '7' | 'Hey, how are you?' | 130611134427611` I've managed to `JOIN` both tables to return the rows as on the first example message, but since I am detecting user keypresses too, I need to scan the table twice, so: ``` SELECT * FROM (SELECT * FROM (SELECT * FROM messages WHERE sendTo = '$username' AND time > (SELECT time FROM users WHERE username = '$username' LIMIT 1) AND message <> '$keypressCode' ORDER BY time DESC LIMIT 30) ORDER BY time ASC) UNION SELECT * FROM (SELECT * FROM messages WHERE message = '$keypressCode' AND time > (SELECT time FROM users WHERE username = '$username' LIMIT 1) AND sendTo = '$username' LIMIT 1); ``` But now, of course, I don't just select from messages; instead, I use a long query like ``` SELECT * FROM ( SELECT u1.ID as sendTo, u2.ID as sendFrom, messages.message, ..... ..... ..... ..... ..... ) as messages; ``` that MUST BE INSERTED just in the place of `messages` (I haven't tried this yet, but I think is like that. See, the thing is I DuckDuckGo'ed and Googled and found nothing, so I came here) --- My first question is: Is there a way to use `ALIAS` for the table `messages` so I don't have to scan it TWICE? So, instead, I just save the above query using `ALIAS` as a table called `messages` and select data from it twice, once in each part of `UNION`. 
In addition, the answer to the first question would also be an answer for: Is there a way to use `ALIAS` to save the `time` selected from the table? (since, again, I am searching for it TWICE). --- In practice, what I am doing may not be inefficient (since there will be at most 20 users), but what if? Also, I am a mathematician, and like it or not, I like to worry a lot about efficiency! Thank you so much in advance, I hope I made myself clear.
I am answering my own question, since I consider what people might be looking for is a [VIEW](http://www.sqlite.org/lang_createview.html "CREATE VIEW (SQLite)"). First, define that query like this: ``` CREATE VIEW MyViewTable AS SELECT ... FROM ... ... ...; ``` Now you can use that view (which is a separate, named query) in any context where an ordinary table can be used: in a FROM clause, in a JOIN clause, as a subquery etc.: ``` SELECT ... FROM MyViewTable WHERE ... ... UNION SELECT ... FROM MyViewTable WHERE ... ``` but with a few restrictions: * Views are read-only: you **cannot** `INSERT`, `UPDATE` or `DELETE` through a view (without `INSTEAD OF` triggers). * The SELECT statement defining the view **cannot** refer to prepared statement parameters. * The definition of a non-temporary view **cannot** refer to a TEMPORARY table (a view created with `CREATE TEMP VIEW`, however, can). (There are more; these are, in my opinion, the most common pitfalls. See the [SQLite reference](https://www.sqlite.org/lang_createview.html) for more information.)
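A minimal runnable check of the view-plus-UNION pattern, using Python's built-in `sqlite3` (the table and column names here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (sendTo TEXT, message TEXT);
INSERT INTO messages VALUES ('foo', 'hi'), ('bar', 'yo');
-- Define the long query once...
CREATE VIEW my_view AS SELECT sendTo, message FROM messages;
""")
# ...then select from it twice, once in each part of the UNION.
rows = conn.execute("""
    SELECT message FROM my_view WHERE sendTo = 'foo'
    UNION
    SELECT message FROM my_view WHERE sendTo = 'bar'
""").fetchall()
assert sorted(r[0] for r in rows) == ["hi", "yo"]
```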
I am not sure but it does look as if you want a [view](http://www.sqlite.org/lang_createview.html "CREATE VIEW (SQLite)"). Define that query like this: ``` CREATE VIEW MyMessageView AS SELECT ... FROM ... ... ``` Now you can use that view in any context where an ordinary table can be used: in a FROM clause, in a JOIN clause, as a subquery etc.: ``` SELECT ... FROM MyMessageView WHERE ... ... UNION SELECT ... FROM MyMessageView WHERE ... ```
Use same alias for union query SQL
[ "", "sql", "sqlite", "" ]
I've got a simple class from which I create two objects. I now want to print the name of the object from within the class. So something like this: ``` class Example: def printSelf(self): print self object1 = Example() object2 = Example() object1.printSelf() object2.printSelf() ``` I need this to print: ``` object1 object2 ``` Unfortunately this just prints `<myModule.Example instance at 0xb67e77cc>` Does anybody know how I can do this?
`object1` is just an identifier (or variable) pointing to an instance object; objects don't have names. ``` >>> class A: ... def foo(self): ... print self ... >>> a = A() >>> b = a >>> c = b >>> a,b,c #all of them point to the same instance object (<__main__.A instance at 0xb61ee8ec>, <__main__.A instance at 0xb61ee8ec>, <__main__.A instance at 0xb61ee8ec>) ``` `a`, `b`, `c` are simply references that allow us to access the same object; when an object has **0** references it is automatically garbage collected. A quick hack is to pass the name when creating the instance: ``` >>> class A: ... def __init__(self, name): ... self.name = name ... >>> a = A('a') >>> a.name 'a' >>> foo = A('foo') >>> foo.name 'foo' >>> bar = foo # additional references to an object will still return the original name >>> bar.name 'foo' ```
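If the goal is just to control what `print` shows, combining the stored name with a `__repr__` override (a standard alternative, not shown in the answer above) also works:

```python
class Example:
    def __init__(self, name):
        self.name = name

    def __repr__(self):  # used by print and by the interactive prompt
        return self.name

object1 = Example("object1")
object2 = Example("object2")
assert repr(object1) == "object1"
assert str(object2) == "object2"  # str falls back to __repr__
```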
The object does not have a "name". A variable which refers to the object is not a "name" of the object. The object cannot know about any of the variables which refer to it, not least because variables are not a first-class subject of the language. If you wish to alter the way that object prints, override either `__repr__` or `__unicode__`. If this is for debugging purposes, use a debugger. That's what it's for.
How to get the object name from within the class?
[ "", "python", "class", "object", "" ]
I have to make a `UNION` statement like this, more or less: ``` select [table_name], name, address from Employees where [my_condition] UNION select [table_name], name, address from Employees_history where [my_condition] ``` The data retrieved will be in either Employees or Employees_history, but not in both tables. I need to know which table the data comes from.
``` SELECT 'Employees' AS [table_name], name, address FROM Employees WHERE [my_condition] UNION ALL SELECT 'Employees_history' AS [table_name], name, address FROM Employees_history WHERE [my_condition] ``` I use `UNION ALL` rather than `UNION` as there will be no duplicates across the two branches. So it can avoid some unnecessary work removing duplicates across the whole result set. If there might be duplicates within branch(es) add `DISTINCT` to the individual `SELECT`(s)
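The literal-column trick is portable SQL; here it is verified against SQLite through Python's `sqlite3` (sample rows invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employees         (name TEXT, address TEXT);
CREATE TABLE Employees_history (name TEXT, address TEXT);
INSERT INTO Employees         VALUES ('Ann', '1 Main St');
INSERT INTO Employees_history VALUES ('Bob', '2 Elm St');
""")
rows = conn.execute("""
    SELECT 'Employees' AS table_name, name, address FROM Employees
    UNION ALL
    SELECT 'Employees_history', name, address FROM Employees_history
""").fetchall()
assert ("Employees", "Ann", "1 Main St") in rows
assert ("Employees_history", "Bob", "2 Elm St") in rows
```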
You can append a new field as shown below: ``` select [table_name], name, address, 'Employees' from Employees where [my_condition] UNION select [table_name], name, address, 'History' from Employees_history where [my_condition] ``` You can also use an `alias` as Martin has shown in his answer.
How to get table name within a 'select' statement in SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I am very new to Python (and web scraping). Let me ask you a question. Many websites do not show a specific URL in Firefox or other browsers. For example, the Social Security Administration shows popular baby names with ranks (since 1880), but the URL does not change when I change the year from 1880 to 1881. It is constantly <http://www.ssa.gov/cgi-bin/popularnames.cgi> Because I don't know the specific URL, I could not download the webpage using urllib. The page source includes: `<input type="text" name="year" id="yob" size="4" value="1880">` So presumably, if I can control this "year" value (like "1881" or "1991"), I can deal with this problem. Am I right? I still don't know how to do it. Can anybody tell me the solution, please? If you know some websites that may help my study, please let me know. THANKS!
You can still use `urllib`. The button performs a POST to the current url. Using Firefox's [Firebug](https://getfirebug.com/) I took a look at the network traffic and found they're sending 3 parameters: `member`, `top`, and `year`. You can send the same arguments: ``` import urllib url = 'http://www.ssa.gov/cgi-bin/popularnames.cgi' post_params = { # member was blank, so I'm excluding it. 'top' : '25', 'year' : year } post_args = urllib.urlencode(post_params) ``` Now, just send the url-encoded arguments: ``` urllib.urlopen(url, post_args) ``` If you need to send headers as well, use `urllib2.Request` (plain `urllib.urlopen` does not accept headers): ``` import urllib2 headers = { 'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language' : 'en-US,en;q=0.5', 'Connection' : 'keep-alive', 'Host' : 'www.ssa.gov', 'Referer' : 'http://www.ssa.gov/cgi-bin/popularnames.cgi', 'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0' } # With POST data: urllib2.urlopen(urllib2.Request(url, post_args, headers)) ``` Execute the code in a loop: ``` for year in xrange(1880, 2014): # The above code... ```
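The answer above is Python 2; in Python 3 the same pieces live in `urllib.parse` and `urllib.request`. A sketch that builds the request without touching the network (the parameter names come from the answer's Firebug inspection):

```python
from urllib.parse import urlencode
from urllib.request import Request

url = "http://www.ssa.gov/cgi-bin/popularnames.cgi"
post_args = urlencode({"top": "25", "year": "1880"})
assert post_args == "top=25&year=1880"  # dicts keep insertion order in 3.7+

# Supplying data makes urlopen issue a POST.
req = Request(url, data=post_args.encode("ascii"))
assert req.get_method() == "POST"
```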
I recommend using [Scrapy](https://scrapy.readthedocs.org). It's a very powerful and easy-to-use tool for web-scraping. Why it is worth trying: 1. Speed/performance/efficiency > Scrapy is written with Twisted, a popular event-driven networking > framework for Python. Thus, it’s implemented using a non-blocking (aka > asynchronous) code for concurrency. 2. Database pipelining Scrapy has `Item Pipelines` feature: > After an item has been scraped by a spider, it is sent to the Item > Pipeline which process it through several components that are executed > sequentially. So, each page can be written to the database immediately after it has been downloaded. 3. Code organization Scrapy offers you a nice and clear project structure, there you have settings, spiders, items, pipelines etc separated logically. Even that makes your code clearer and easier to support and understand. 4. Time to code Scrapy does a lot of work for you behind the scenes. This makes you focus on the actual code and logic itself and not to think about the "metal" part: creating processes, threads etc. Yeah, you got it - I love it. In order to get started: * [official tutorial](https://scrapy.readthedocs.org/en/latest/intro/tutorial.html) * [newcoder.io tutorial](http://newcoder.io/scrape/) Hope that helps.
a (presumably basic) web scraping of http://www.ssa.gov/cgi-bin/popularnames.cgi in urllib
[ "", "python", "cgi", "web-scraping", "firebug", "urllib", "" ]
I wanted to match the numeric values of a string: ``` 1,000 metric tonnes per contract month Five cents ($0.05) per tonne Five cents ($0.05) per tonne 1,000 metric tonnes per contract month ``` My current approach: ``` size = re.findall(r'(\d+(,?\d*).*?)', my_string) ``` What I get with my approach: ``` print size [(u'1,000', u',000')] ``` As you can see, the number `1` was being cut out from the second element of the list, why is that? Also, could I get a hint as to how I can match the `$0.05` terms?
Something like this: ``` >>> import re >>> strs = """1,000 metric tonnes per contract month Five cents ($0.05) per tonne Five cents ($0.05) per tonne 1,000 metric tonnes per contract month""" >>> [m.group(0) for m in re.finditer(r'\$?\d+([,.]\d+)?', strs)] ['1,000', '$0.05', '$0.05', '1,000'] ``` Demo : <http://rubular.com/r/UomzIY3SD3>
`re.findall()` returns, for each match, a tuple of all the capturing groups (when the pattern has more than one group), and each set of plain parentheses creates one such group — that is why the `1` was "cut" from your second element: it is the inner group `(,?\d*)`, not the whole match. Write your regex with non-capturing groups instead: ``` size = re.findall(r'\d{1,3}(?:,\d{3})*(?:\.\d+)?', my_string) ``` **Explanation:** ``` \d{1,3} # One to three digits (?:,\d{3})* # Optional thousands groups (?:\.\d+)? # Optional decimal part ``` This assumes that all numbers have commas as thousands separators, i. e. no numbers like `1000000`. If you need to match those too, use ``` size = re.findall(r'\d+(?:,\d{3})*(?:\.\d+)?', my_string) ```
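The capturing-versus-non-capturing difference, and a `$`-amount match, can be checked directly (a small demo, not the question's exact input):

```python
import re

text = "1,000 metric tonnes per contract month"
# Two capturing groups -> findall returns a tuple per match, not the match.
assert re.findall(r'(\d+(,?\d*))', text) == [('1,000', ',000')]
# Non-capturing groups give the whole match back:
assert re.findall(r'\d{1,3}(?:,\d{3})*(?:\.\d+)?', text) == ['1,000']

money = "Five cents ($0.05) per tonne"
assert re.findall(r'\$\d+(?:\.\d+)?', money) == ['$0.05']
```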
Regex for matching string Python
[ "", "python", "regex", "" ]
I am developing an iPhone app which uses SQLite as its backend. I want to implement search functionality in my app, and I used a SQL query for that. I want the search to be case-sensitive. I used the following query for fetching data: ``` SELECT data FROM table where data LIKE 'A%' ORDER BY ID ASC ``` Since SQLite is case-insensitive, I get data starting with both 'a' and 'A'. The alternative way to meet my requirement is to filter the fetched content in my code before displaying it. I tried NSPredicate for that, but that also works case-insensitively. This is my code: ``` NSPredicate *pred = [NSPredicate predicateWithFormat:@"self BEGINSWITH[c] %@",[NSString stringWithFormat:@"%c",[filter characterAtIndex:i]]]; NSArray *arr = [arrayComponent filteredArrayUsingPredicate:pred]; ``` Is there any solution for this?
``` SELECT * FROM a WHERE name glob 'A*' ``` see sql fiddle <http://sqlfiddle.com/#!7/18215/1>
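The `LIKE` versus `GLOB` difference is easy to verify with Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (name TEXT)")
conn.executemany("INSERT INTO a VALUES (?)", [("Apple",), ("apple",)])
like = conn.execute("SELECT name FROM a WHERE name LIKE 'A%'").fetchall()
glob = conn.execute("SELECT name FROM a WHERE name GLOB 'A*'").fetchall()
assert len(like) == 2        # LIKE is case-insensitive for ASCII
assert glob == [("Apple",)]  # GLOB is case-sensitive
```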
I'm not sure if you could use a `PRAGMA` statement in your environment, but there is an option as follows: ``` PRAGMA case_sensitive_like = true; ``` > The PRAGMA statement is an SQL extension specific to SQLite and used > to modify the operation of the SQLite library. When > case\_sensitive\_like is enabled, case becomes significant. So, for > example, 'a' LIKE 'A' is false but 'a' LIKE 'a' is still true. The document is here: <http://www.sqlite.org/pragma.html#pragma_case_sensitive_like>
Case sensitive searching
[ "", "sql", "sqlite", "search", "wildcard", "nspredicate", "" ]
I've inherited some `<cough>`ugly`</cough>` code that contains a rather unwieldy SQL statement that hits an Informix database. This code computes various parts of the SQL, and does lots of string concatenation, but I had it spit out what it actually sends to Informix and tried using that in my SQL query tool. It is functional when it runs on the server, but when I try running it in FlySpeed SQL (which is an "approved" tool for use at the office), it complains about `outer` as an unknown keyword. Googling got me nowhere, so I guessed this was an Informix shortcut for `LEFT OUTER JOIN`, but replacing it with that did nothing to fix the problem, so I'm guessing it's just a bug in FlySpeed. The SQL is completely unformatted, which I'm guessing is part of what is tripping up FlySpeed. ``` SELECT A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM as STMT_NBR, F.ST_DT, A.BUSINESS_UNIT, H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR as ENTRY_TYPE_DESCR, G.DESCR as ENTRY_REASON_DESCR, SUM(A.BAL_AMT) as ORIG_BAL_AMT, SUM(H.BAL_AMT), C.NAME1, B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE as ORIG_ENTRY_TYPE, CASE WHEN A.ENTRY_TYPE = 'PY' OR A.ENTRY_TYPE = 'BGB' THEN 'AA' || substr(A.ENTRY_TYPE,1,3) ELSE substr(A.ENTRY_TYPE,1,3) END, A.ENTRY_REASON as ORIG_ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO FROM PS_STMT_CUST_DTL A, PS_CUST_ADDRESS B, PS_CUSTOMER C, PS_SET_CNTRL_REC E, PS_STMT_CUST F, PS_ENTRY_TYPE_TBL D, PS_ENTRY_REASN_TBL G, outer PS_ITEM H, outer PS_BI_TYPE K WHERE A.ADDRESS_SEQ_NUM = B.ADDRESS_SEQ_NUM AND B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_CUST_ADDRESS B_ED Where B.SETID = B_ED.SETID AND B.CUST_ID = B_ED.CUST_ID AND B.ADDRESS_SEQ_NUM = B_ED.ADDRESS_SEQ_NUM AND B_ED.EFFDT <= '06/19/2013') AND A.CRSPD_SETID = B.SETID AND A.CRSPD_CUST_ID = B.CUST_ID AND B.SETID = C.SETID AND B.CUST_ID = C.CUST_ID AND B.ADDRESS_SEQ_NUM = C.ADDRESS_SEQ_NUM AND G.ENTRY_TYPE = A.ENTRY_TYPE AND G.ENTRY_REASON 
= A.ENTRY_REASON AND G.SETID = E.SETID AND G.SETID = D.SETID AND G.ENTRY_TYPE = D.ENTRY_TYPE AND E.SETCNTRLVALUE = A.BUSINESS_UNIT AND E.RECNAME = 'ENTRY_TYPE_TBL' AND A.CRSPD_SETID = F.CRSPD_SETID AND A.CRSPD_CUST_ID = F.CRSPD_CUST_ID AND A.ST_ID_NUM = F.ST_ID_NUM AND A.BUSINESS_UNIT = H.BUSINESS_UNIT AND A.CUST_ID = H.CUST_ID AND A.ITEM = H.ITEM AND A.ITEM_LINE = H.ITEM_LINE AND substr(A.ENTRY_TYPE,1,3) = K.BILL_TYPE_ID AND K.SETID = 'SPN' AND K.EFFDT = (SELECT MAX(C_ED.EFFDT) FROM PS_BI_TYPE C_ED WHERE K.SETID = C_ED.SETID AND K.BILL_TYPE_ID = C_ED.BILL_TYPE_ID AND C_ED.EFFDT <= '06/19/2013') AND K.EFF_STATUS = 'A' AND A.CRSPD_CUST_ID = '000331' AND A.CRSPD_SETID = 'SPN' AND F.ST_DT = '' AND F.ST_ID_NUM = '' GROUP BY A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM, F.ST_DT, A.BUSINESS_UNIT ,H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR, G.DESCR, C.NAME1 ,B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE ,B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE ,25, A.ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO UNION ALL SELECT A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM as STMT_NBR, F.ST_DT, A.BUSINESS_UNIT, H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR as ENTRY_TYPE_DESCR, G.DESCR as ENTRY_REASON_DESCR, SUM(A.BAL_AMT) as ORIG_BAL_AMT, SUM(H.BAL_AMT), C.NAME1, B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE as ORIG_ENTRY_TYPE, CASE WHEN A.ENTRY_TYPE = 'PY' OR A.ENTRY_TYPE = 'BGB' THEN 'AA' || substr(A.ENTRY_TYPE,1,3) ELSE substr(A.ENTRY_TYPE,1,3) END, A.ENTRY_REASON as ORIG_ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO FROM PS_AR_STMT_CUSDT_H A, PS_CUST_ADDRESS B, PS_CUSTOMER C, PS_SET_CNTRL_REC E, PS_AR_STMT_CUST_H F, PS_ENTRY_TYPE_TBL D, PS_ENTRY_REASN_TBL G, outer PS_AR_ITEM_H H, outer PS_BI_TYPE K WHERE A.ADDRESS_SEQ_NUM = B.ADDRESS_SEQ_NUM AND B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_CUST_ADDRESS B_ED Where B.SETID = B_ED.SETID AND B.CUST_ID = B_ED.CUST_ID AND 
B.ADDRESS_SEQ_NUM = B_ED.ADDRESS_SEQ_NUM AND B_ED.EFFDT <= '06/19/2013') AND A.CRSPD_SETID = B.SETID AND A.CRSPD_CUST_ID = B.CUST_ID AND B.SETID = C.SETID AND B.CUST_ID = C.CUST_ID AND B.ADDRESS_SEQ_NUM = C.ADDRESS_SEQ_NUM AND G.ENTRY_TYPE = A.ENTRY_TYPE AND G.ENTRY_REASON = A.ENTRY_REASON AND G.SETID = E.SETID AND G.SETID = D.SETID AND G.ENTRY_TYPE = D.ENTRY_TYPE AND E.SETCNTRLVALUE = A.BUSINESS_UNIT AND E.RECNAME = 'ENTRY_TYPE_TBL' AND A.CRSPD_SETID = F.CRSPD_SETID AND A.CRSPD_CUST_ID = F.CRSPD_CUST_ID AND A.ST_ID_NUM = F.ST_ID_NUM AND A.BUSINESS_UNIT = H.BUSINESS_UNIT AND A.CUST_ID = H.CUST_ID AND A.ITEM = H.ITEM AND A.ITEM_LINE = H.ITEM_LINE AND substr(A.ENTRY_TYPE,1,3) = K.BILL_TYPE_ID AND K.SETID = 'SPN' AND K.EFFDT = (SELECT MAX(C_ED.EFFDT) FROM PS_BI_TYPE C_ED WHERE K.SETID = C_ED.SETID AND K.BILL_TYPE_ID = C_ED.BILL_TYPE_ID AND C_ED.EFFDT <= '06/19/2013') AND K.EFF_STATUS = 'A' AND A.CRSPD_CUST_ID = '000331' AND A.CRSPD_SETID = 'SPN' AND F.ST_DT = '' AND F.ST_ID_NUM = '' GROUP BY A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM, F.ST_DT, A.BUSINESS_UNIT ,H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR, G.DESCR, C.NAME1 ,B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE ,B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE ,25, A.ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO ORDER BY 2,3,4,25,8,11,24 ``` Am I correct that `OUTER` is just shorthand (maybe Informix-flavored shorthand) for `LEFT OUTER JOIN`? ***UPDATE***: I've found [here](https://stackoverflow.com/questions/11251751/which-join-syntax-is-better#comment14791950_11251836) that Informix *did* in fact use `OUTER`, but I've yet to find any explanation of just *how*. Obviously search results overwhelmingly favor things containing the normal syntax. Even [this very useful RDBMS comparison](http://troels.arvin.dk/db/rdbms/) contains almost nothing about Informix. 
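For what it's worth, Informix's old-style `FROM a, OUTER b WHERE a.x = b.x` is indeed that engine's pre-ANSI spelling of a left outer join, with the `OUTER`-marked table on the nullable side; tools that only speak ANSI SQL reject the keyword. The ANSI equivalent preserves unmatched rows from the dominant table, sketched here against SQLite via Python's `sqlite3` (Informix itself isn't available in this demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER);
CREATE TABLE b (id INTEGER, note TEXT);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (1, 'x');
""")
# ANSI spelling of the Informix "FROM a, OUTER b WHERE a.id = b.id"
rows = conn.execute("""
    SELECT a.id, b.note
    FROM a LEFT OUTER JOIN b ON a.id = b.id
    ORDER BY a.id
""").fetchall()
assert rows == [(1, "x"), (2, None)]  # row 2 survives with NULL note
```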
Here's the formatted SQL (sorry about the size!): ``` SELECT A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM AS STMT_NBR, F.ST_DT, A.BUSINESS_UNIT, H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR AS ENTRY_TYPE_DESCR, G.DESCR AS ENTRY_REASON_DESCR, SUM(A.BAL_AMT) AS ORIG_BAL_AMT, SUM(H.BAL_AMT), C.NAME1, B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE AS ORIG_ENTRY_TYPE, CASE WHEN A.ENTRY_TYPE = 'PY' OR A.ENTRY_TYPE = 'BGB' THEN 'AA' || substr(A.ENTRY_TYPE,1,3) ELSE substr(A.ENTRY_TYPE,1,3) END, A.ENTRY_REASON AS ORIG_ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO FROM PS_STMT_CUST_DTL A, PS_CUST_ADDRESS B, PS_CUSTOMER C, PS_SET_CNTRL_REC E, PS_STMT_CUST F, PS_ENTRY_TYPE_TBL D, PS_ENTRY_REASN_TBL G, OUTER PS_ITEM H, OUTER PS_BI_TYPE K WHERE A.ADDRESS_SEQ_NUM = B.ADDRESS_SEQ_NUM AND B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_CUST_ADDRESS B_ED WHERE B.SETID = B_ED.SETID AND B.CUST_ID = B_ED.CUST_ID AND B.ADDRESS_SEQ_NUM = B_ED.ADDRESS_SEQ_NUM AND B_ED.EFFDT <= '06/19/2013') AND A.CRSPD_SETID = B.SETID AND A.CRSPD_CUST_ID = B.CUST_ID AND B.SETID = C.SETID AND B.CUST_ID = C.CUST_ID AND B.ADDRESS_SEQ_NUM = C.ADDRESS_SEQ_NUM AND G.ENTRY_TYPE = A.ENTRY_TYPE AND G.ENTRY_REASON = A.ENTRY_REASON AND G.SETID = E.SETID AND G.SETID = D.SETID AND G.ENTRY_TYPE = D.ENTRY_TYPE AND E.SETCNTRLVALUE = A.BUSINESS_UNIT AND E.RECNAME = 'ENTRY_TYPE_TBL' AND A.CRSPD_SETID = F.CRSPD_SETID AND A.CRSPD_CUST_ID = F.CRSPD_CUST_ID AND A.ST_ID_NUM = F.ST_ID_NUM AND A.BUSINESS_UNIT = H.BUSINESS_UNIT AND A.CUST_ID = H.CUST_ID AND A.ITEM = H.ITEM AND A.ITEM_LINE = H.ITEM_LINE AND substr(A.ENTRY_TYPE,1,3) = K.BILL_TYPE_ID AND K.SETID = 'SPN' AND K.EFFDT = (SELECT MAX(C_ED.EFFDT) FROM PS_BI_TYPE C_ED WHERE K.SETID = C_ED.SETID AND K.BILL_TYPE_ID = C_ED.BILL_TYPE_ID AND C_ED.EFFDT <= '06/19/2013') AND K.EFF_STATUS = 'A' AND A.CRSPD_CUST_ID = '000331' AND A.CRSPD_SETID = 'SPN' AND F.ST_DT = '' AND F.ST_ID_NUM = '' GROUP BY 
A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM, F.ST_DT, A.BUSINESS_UNIT , H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR, G.DESCR, C.NAME1 , B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE , B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE , 25, A.ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO UNION ALL SELECT A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM AS STMT_NBR, F.ST_DT, A.BUSINESS_UNIT, H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR AS ENTRY_TYPE_DESCR, G.DESCR AS ENTRY_REASON_DESCR, SUM(A.BAL_AMT) AS ORIG_BAL_AMT, SUM(H.BAL_AMT), C.NAME1, B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE AS ORIG_ENTRY_TYPE, CASE WHEN A.ENTRY_TYPE = 'PY' OR A.ENTRY_TYPE = 'BGB' THEN 'AA' || substr(A.ENTRY_TYPE,1,3) ELSE substr(A.ENTRY_TYPE,1,3) END, A.ENTRY_REASON AS ORIG_ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO FROM PS_AR_STMT_CUSDT_H A, PS_CUST_ADDRESS B, PS_CUSTOMER C, PS_SET_CNTRL_REC E, PS_AR_STMT_CUST_H F, PS_ENTRY_TYPE_TBL D, PS_ENTRY_REASN_TBL G, OUTER PS_AR_ITEM_H H, OUTER PS_BI_TYPE K WHERE A.ADDRESS_SEQ_NUM = B.ADDRESS_SEQ_NUM AND B.EFFDT = (SELECT MAX(B_ED.EFFDT) FROM PS_CUST_ADDRESS B_ED WHERE B.SETID = B_ED.SETID AND B.CUST_ID = B_ED.CUST_ID AND B.ADDRESS_SEQ_NUM = B_ED.ADDRESS_SEQ_NUM AND B_ED.EFFDT <= '06/19/2013') AND A.CRSPD_SETID = B.SETID AND A.CRSPD_CUST_ID = B.CUST_ID AND B.SETID = C.SETID AND B.CUST_ID = C.CUST_ID AND B.ADDRESS_SEQ_NUM = C.ADDRESS_SEQ_NUM AND G.ENTRY_TYPE = A.ENTRY_TYPE AND G.ENTRY_REASON = A.ENTRY_REASON AND G.SETID = E.SETID AND G.SETID = D.SETID AND G.ENTRY_TYPE = D.ENTRY_TYPE AND E.SETCNTRLVALUE = A.BUSINESS_UNIT AND E.RECNAME = 'ENTRY_TYPE_TBL' AND A.CRSPD_SETID = F.CRSPD_SETID AND A.CRSPD_CUST_ID = F.CRSPD_CUST_ID AND A.ST_ID_NUM = F.ST_ID_NUM AND A.BUSINESS_UNIT = H.BUSINESS_UNIT AND A.CUST_ID = H.CUST_ID AND A.ITEM = H.ITEM AND A.ITEM_LINE = H.ITEM_LINE AND substr(A.ENTRY_TYPE,1,3) = K.BILL_TYPE_ID AND K.SETID = 'SPN' AND 
K.EFFDT = (SELECT MAX(C_ED.EFFDT) FROM PS_BI_TYPE C_ED WHERE K.SETID = C_ED.SETID AND K.BILL_TYPE_ID = C_ED.BILL_TYPE_ID AND C_ED.EFFDT <= '06/19/2013') AND K.EFF_STATUS = 'A' AND A.CRSPD_CUST_ID = '000331' AND A.CRSPD_SETID = 'SPN' AND F.ST_DT = '' AND F.ST_ID_NUM = '' GROUP BY A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM, F.ST_DT, A.BUSINESS_UNIT , H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR, G.DESCR, C.NAME1 , B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE , B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE , 25, A.ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO ORDER BY 2, 3, 4, 25, 8, 11, 24 ```
The Informix-style OUTER join is not simply a short cut for LEFT OUTER JOIN, but it is a moderate approximation to consider it as such. There are a number of details why it is not that simple. There's an online explanation for [Informix OUTER Joins](https://web.archive.org/web/20161118035919/http://savage.net.au/SQL/outer-joins.html), some of which are considerably more complex than the query in your example. Translating the first half of your UNION query into a more modern notation, you'd get: ``` SELECT A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM AS STMT_NBR, F.ST_DT, A.BUSINESS_UNIT, H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR AS ENTRY_TYPE_DESCR, G.DESCR AS ENTRY_REASON_DESCR, SUM(A.BAL_AMT) AS ORIG_BAL_AMT, SUM(H.BAL_AMT), C.NAME1, B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE AS ORIG_ENTRY_TYPE, CASE WHEN A.ENTRY_TYPE = 'PY' OR A.ENTRY_TYPE = 'BGB' THEN 'AA' || substr(A.ENTRY_TYPE,1,3) ELSE substr(A.ENTRY_TYPE,1,3) END, A.ENTRY_REASON AS ORIG_ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO FROM PS_STMT_CUST_DTL A JOIN PS_CUST_ADDRESS B ON A.ADDRESS_SEQ_NUM = B.ADDRESS_SEQ_NUM AND A.CRSPD_SETID = B.SETID AND A.CRSPD_CUST_ID = B.CUST_ID JOIN PS_CUSTOMER C ON B.SETID = C.SETID AND B.CUST_ID = C.CUST_ID AND B.ADDRESS_SEQ_NUM = C.ADDRESS_SEQ_NUM JOIN PS_SET_CNTRL_REC E ON E.SETCNTRLVALUE = A.BUSINESS_UNIT AND E.RECNAME = 'ENTRY_TYPE_TBL' JOIN PS_STMT_CUST F ON A.CRSPD_SETID = F.CRSPD_SETID AND A.CRSPD_CUST_ID = F.CRSPD_CUST_ID AND A.ST_ID_NUM = F.ST_ID_NUM JOIN PS_ENTRY_REASN_TBL G ON G.ENTRY_TYPE = A.ENTRY_TYPE AND G.ENTRY_REASON = A.ENTRY_REASON AND G.SETID = E.SETID JOIN PS_ENTRY_TYPE_TBL D ON G.SETID = D.SETID AND G.ENTRY_TYPE = D.ENTRY_TYPE LEFT OUTER JOIN PS_ITEM H ON A.BUSINESS_UNIT = H.BUSINESS_UNIT AND A.CUST_ID = H.CUST_ID AND A.ITEM = H.ITEM AND A.ITEM_LINE = H.ITEM_LINE LEFT OUTER JOIN PS_BI_TYPE K ON SUBSTR(A.ENTRY_TYPE,1,3) = K.BILL_TYPE_ID WHERE B.EFFDT = (SELECT
MAX(B_ED.EFFDT) FROM PS_CUST_ADDRESS B_ED WHERE B.SETID = B_ED.SETID AND B.CUST_ID = B_ED.CUST_ID AND B.ADDRESS_SEQ_NUM = B_ED.ADDRESS_SEQ_NUM AND B_ED.EFFDT <= '06/19/2013') AND K.SETID = 'SPN' AND K.EFFDT = (SELECT MAX(C_ED.EFFDT) FROM PS_BI_TYPE C_ED WHERE K.SETID = C_ED.SETID AND K.BILL_TYPE_ID = C_ED.BILL_TYPE_ID AND C_ED.EFFDT <= '06/19/2013') AND K.EFF_STATUS = 'A' AND A.CRSPD_CUST_ID = '000331' AND A.CRSPD_SETID = 'SPN' AND F.ST_DT = '' AND F.ST_ID_NUM = '' GROUP BY A.CRSPD_SETID, A.CRSPD_CUST_ID, F.ST_ID_NUM, F.ST_DT, A.BUSINESS_UNIT , H.ASOF_DT, H.DUE_DT, A.ITEM, H.CONTRACT_NUM, D.DESCR, G.DESCR, C.NAME1 , B.ADDRESS1, B.ADDRESS2, B.ADDRESS3, B.ADDRESS4, B.CITY, B.STATE, B.POSTAL, A.BAL_CURRENCY, A.CUST_ID, A.ENTRY_TYPE, 25, A.ENTRY_REASON, C.CUSTOMER_TYPE, K.AR_LVL, H.ORDER_NO ``` That's fairly complex because of the sheer number of tables and multi-column join conditions. However, the outer join structure is straight-forward — both outer joins are related directly to the primary table, PS\_STMT\_CUST\_DTL, which is given the alias A. I'm fairly sure that the SQL could be simplified. For example, when it was written, Informix probably didn't support 'sub-queries in the FROM clause'. But the two SUM values could probably be put into a sub-query, and that would simplify the GROUP BY clause (eliminate it from the main query).
For information on Informix outer syntax: <http://pic.dhe.ibm.com/infocenter/idshelp/v117/index.jsp?topic=%2Fcom.ibm.sqls.doc%2Fsqls.htm> IBM's funky documentation does not allow direct links, so search for OUTER in there. This is for Informix 11.7; I'm not sure what version you are using. Essentially, the outer tables are subservient tables in any join condition in the WHERE clause, and are effectively left outer joins. Also note that the order in which the tables are specified in the FROM clause is significant with the Informix syntax.
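To make the "effectively a left outer join" point concrete without an Informix server, here is a small sketch using sqlite3; the `customer`/`orders` tables are invented for illustration, and the `LEFT OUTER JOIN` shown is the modern spelling of what Informix's `FROM customer c, OUTER orders o WHERE c.id = o.cust_id` approximates: every row of the dominant table survives, with NULLs where the subservient table has no match.

```python
import sqlite3

# Hypothetical toy schema, standing in for the Informix tables
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER, name TEXT);
    CREATE TABLE orders   (cust_id INTEGER, total REAL);
    INSERT INTO customer VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders   VALUES (1, 9.99);
""")

rows = con.execute("""
    SELECT c.name, o.total
    FROM customer c
    LEFT OUTER JOIN orders o ON c.id = o.cust_id
    ORDER BY c.id
""").fetchall()

# bob has no orders, so his total comes back as NULL (None in Python)
print(rows)  # [('alice', 9.99), ('bob', None)]
```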
How was `outer` used in Informix-flavored SQL?
[ "", "sql", "informix", "" ]
Is there a method to find all functions that were defined in a python environment? For instance, if I had ``` def test(): pass ``` `some_command_here` would return `test`
You can use [inspect](http://docs.python.org/2/library/inspect.html) module: ``` import inspect import sys def test(): pass functions = [name for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isfunction)] print functions ``` prints: ``` ['test'] ```
You can use `globals()` to grab everything defined in the global scope of the file, and `inspect` to filter the objects you care about. ``` import inspect [ f for f in globals().values() if inspect.isfunction(f) ] ```
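Note that both approaches also report functions brought in with `from module import func`. A small refinement (a sketch; it assumes your functions live in the module you run it from) compares each object's `__module__` against the current module name to keep only locally defined functions:

```python
import inspect

def test():
    pass

def defined_here():
    # Functions defined in this module carry its __name__ as their
    # __module__, which lets us exclude anything pulled in by imports
    module_name = globals().get('__name__')
    return sorted(
        name for name, obj in globals().items()
        if inspect.isfunction(obj) and obj.__module__ == module_name
    )

print(defined_here())  # ['defined_here', 'test']
```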
Finding All Defined Functions in Python Environment
[ "", "python", "function", "" ]
When I try to use a link in my Django template from `/appname/index/` to get to `/appname/detail/###` I am instead getting to `/appname/index/detail/###` which is not what I'm trying to get so my app can't find it in the urlconf of course. First the urls.py line for the detail page ``` url(r'detail/(?P<jobID>\d+)/$', 'appname.views.detail') ``` Additionally, the root urlconf ``` urlpatterns = patterns('', url(r'^appname/', include('appname.urls')), url(r'^admin/', include(admin.site.urls)), ) ``` Next the template code trying to get there ``` {% for job in jobList %} <a href="detail/{{ job.id }}/">{{ job.name }}</a> ``` I'm not sure what else might be applicable information, just ask if you would like to see something else. I also tried : ``` <a href="{% url 'appname.views.detail' %}/{{ job.id }}">{{ job.name }}</a> ``` But that didn't work either. Thank you in advance for any help.
Add `/` at start in `href`: ``` <a href="/appname/detail/{{ job.id }}/">{{ job.name }}</a> ``` And for the `url` tag to work you need to do it like this: ``` <a href="{% url 'appname.views.detail' jobID=job.id %}">{{ job.name }}</a> ```
From my experience, as long as you have defined the URL of the page the `href` attribute should lead to in `urls.py`, include the absolute path in the format below. ``` Site name: yyyy.com URL of page to redirect to in urls.py: yyyy.com/signup/ Sample link: <a href="/signup/">Signup</a> ``` Note that a relative path inside the `href` attribute is effectively appended to the current URL, which is why the leading `/` matters. For more dynamic links, you can use the `{% url %}` tag, per the Django Template Language (DTL) guidelines.
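The root cause can be reproduced outside Django: browsers resolve a relative `href` against the current page's URL, exactly the way `urllib.parse.urljoin` does. A quick sketch using the paths from the question (job id 7 is an arbitrary example value):

```python
from urllib.parse import urljoin

current_page = '/appname/index/'

# Relative href: the browser appends it to the current URL,
# producing the unwanted /appname/index/detail/7/
assert urljoin(current_page, 'detail/7/') == '/appname/index/detail/7/'

# Absolute href (leading slash): resolved from the site root instead
assert urljoin(current_page, '/appname/detail/7/') == '/appname/detail/7/'
```

This is why either a hard-coded leading `/` or, better, the `{% url %}` tag fixes the problem: both produce root-relative paths.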
how to use the href attribute in django templates
[ "", "python", "django", "url", "href", "" ]
I have a query in `SQL` where I need to get a date in the format `dd/mmm/yyyy`, for example: `25/jun/2013`. How can I `convert` it for `SQL Server`?
I'm not sure there is an exact match for the format you want. But you can get close with [`convert()`](http://msdn.microsoft.com/en-us/library/ms187928.aspx) and style `106`. Then, replace the spaces: ``` SELECT replace(convert(NVARCHAR, getdate(), 106), ' ', '/') ```
There are already [multiple answers and formatting types](https://msdn.microsoft.com/en-us/library/ms187928.aspx) for SQL server 2008. But this method somewhat ambiguous and it would be difficult for you to remember the number with respect to Specific Date Format. That's why in next versions of SQL server there is better option. ## If you are using SQL Server 2012 or above versions, you should use [Format() function](https://msdn.microsoft.com/en-us/library/hh213505.aspx) ``` FORMAT ( value, format [, culture ] ) ``` With culture option, you can specify date as per your viewers. ``` DECLARE @d DATETIME = '10/01/2011'; SELECT FORMAT ( @d, 'd', 'en-US' ) AS 'US English Result' ,FORMAT ( @d, 'd', 'en-gb' ) AS 'Great Britain English Result' ,FORMAT ( @d, 'd', 'de-de' ) AS 'German Result' ,FORMAT ( @d, 'd', 'zh-cn' ) AS 'Simplified Chinese (PRC) Result'; SELECT FORMAT ( @d, 'D', 'en-US' ) AS 'US English Result' ,FORMAT ( @d, 'D', 'en-gb' ) AS 'Great Britain English Result' ,FORMAT ( @d, 'D', 'de-de' ) AS 'German Result' ,FORMAT ( @d, 'D', 'zh-cn' ) AS 'Chinese (Simplified PRC) Result'; US English Result Great Britain English Result German Result Simplified Chinese (PRC) Result ---------------- ----------------------------- ------------- ------------------------------------- 10/1/2011 01/10/2011 01.10.2011 2011/10/1 US English Result Great Britain English Result German Result Chinese (Simplified PRC) Result ---------------------------- ----------------------------- ----------------------------- --------------------------------------- Saturday, October 01, 2011 01 October 2011 Samstag, 1. Oktober 2011 2011年10月1日 ``` For OP's solution, we can use following format, which is already mentioned by @Martin Smith: ``` FORMAT(GETDATE(), 'dd/MMM/yyyy', 'en-us') ``` Some sample date formats: [![enter image description here](https://i.stack.imgur.com/07jfP.png)](https://i.stack.imgur.com/07jfP.png) If you want more date formats of SQL server, you should visit: 1. 
[Custom Date and Time Format](https://msdn.microsoft.com/en-us/library/8kb3ddd4(v=vs.110).aspx) 2. [Standard Date and Time Format](https://msdn.microsoft.com/en-us/library/az4se3k1(v=vs.110).aspx)
Convert Date format into DD/MMM/YYYY format in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "" ]
I have added some wrong `task`s to `celery` with a redis `broker`, but now I want to remove those incorrect `task`s and I can't find any way to do this. Is there some command or some API to do this?
I know two ways of doing so: 1) Delete the queue directly from the broker. In your case it's Redis. There are two commands that could help you: **llen** (to find the right queue) and **del** (to delete it). 2) Start the celery worker with the **--purge** or **--discard** option. Here is the help: ``` --purge, --discard Purges all waiting tasks before the daemon is started. **WARNING**: This is unrecoverable, and the tasks will be deleted from the messaging server. ```
The simplest way is to use `celery control revoke [id1 [id2 [... [idN]]]]` (do not forget to pass the `-A project.application` flag too), where id1 to idN are task IDs. However, it is not guaranteed to succeed every time you run it, for valid reasons... Celery also has an API for it. Here is an example of how to do it from a script: `res = app.control.revoke(task_id, terminate=True)` In the example above `app` is an instance of the Celery application. On some rare occasions the control command above will not work, in which case you have to instruct the Celery worker to kill the worker process: `res = app.control.revoke(task_id, terminate=True, signal='SIGKILL')`
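If you want to remove only the wrong tasks rather than purge everything, the usual pattern with a Redis broker is: `LRANGE` the `celery` list, filter out the bad messages, then `DEL` the list and `RPUSH` the survivors. The sketch below simulates that filter on plain strings, because the exact message layout depends on your Celery version and serializer; the `"headers"`/`"task"` shape and the `app.bad_task` name here are assumptions for illustration only.

```python
import json

# Simulated contents of the broker's 'celery' list (normally obtained
# with LRANGE celery 0 -1); real messages may be shaped differently.
raw_messages = [
    json.dumps({"headers": {"task": "app.good_task", "id": "a1"}}),
    json.dumps({"headers": {"task": "app.bad_task",  "id": "b2"}}),
    json.dumps({"headers": {"task": "app.good_task", "id": "c3"}}),
]

def drop_task(messages, task_name):
    """Return only the messages that do not belong to task_name."""
    return [
        m for m in messages
        if json.loads(m)["headers"]["task"] != task_name
    ]

kept = drop_task(raw_messages, "app.bad_task")
# With a real broker you would now DEL the list and RPUSH `kept` back
print(len(kept))  # 2
```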
how to remove task from celery with redis broker?
[ "", "python", "celery", "celery-task", "celeryd", "" ]
Using the `Twitter API`, I get my rate\_limiting information with the code I've written below: ``` def limit(): twitter = Twitter(auth=OAuth('....')) g = twitter.application.rate_limit_status() print g ``` This is great, but I get a big jumble like this: ``` {u'rate_limit_context': {u'access_token': u'....'}, u'resources': {u'account': {u'/account/verify_credentials': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/account/settings': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'blocks': {u'/blocks/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/blocks/ids': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'users': {u'/users/contributors': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/users/lookup': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/users/search': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/users/suggestions/:slug/members': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/users/suggestions/:slug': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/users/show/:id': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/users/suggestions': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/users/profile_banner': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/users/contributees': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'friends': {u'/friends/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/friends/ids': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'saved_searches': {u'/saved_searches/show/:id': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/saved_searches/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/saved_searches/destroy/:id': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'lists': {u'/lists/subscriptions': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/lists/subscribers/show': {u'reset': 1371673471, u'limit': 15, 
u'remaining': 15}, u'/lists/statuses': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/lists/subscribers': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/lists/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/lists/members/show': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/lists/show': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/lists/memberships': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/lists/members': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/lists/ownerships': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'search': {u'/search/tweets': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}}, u'application': {u'/application/rate_limit_status': {u'reset': 1371673401, u'limit': 180, u'remaining': 178}}, u'trends': {u'/trends/available': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/trends/closest': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/trends/place': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'followers': {u'/followers/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/followers/ids': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'favorites': {u'/favorites/list': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'friendships': {u'/friendships/outgoing': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/friendships/show': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/friendships/incoming': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/friendships/no_retweets/ids': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/friendships/lookup': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'geo': {u'/geo/similar_places': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/geo/id/:place_id': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/geo/reverse_geocode': {u'reset': 1371673471, u'limit': 15, 
u'remaining': 15}, u'/geo/search': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'direct_messages': {u'/direct_messages/show': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/direct_messages': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/direct_messages/sent': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/direct_messages/sent_and_received': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}, u'statuses': {u'/statuses/retweets_of_me': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/statuses/retweeters/ids': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/statuses/mentions_timeline': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/statuses/user_timeline': {u'reset': 1371672792, u'limit': 180, u'remaining': 177}, u'/statuses/oembed': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/statuses/show/:id': {u'reset': 1371673471, u'limit': 180, u'remaining': 180}, u'/statuses/home_timeline': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/statuses/retweets/:id': {u'reset': 1371672792, u'limit': 15, u'remaining': 0}}, u'help': {u'/help/tos': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/help/configuration': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/help/privacy': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}, u'/help/languages': {u'reset': 1371673471, u'limit': 15, u'remaining': 15}}}} ``` I only need 1 part from it though. I just need to know the remaining limit on this: ``` u'/statuses/retweets/:id': {u'reset': 1371672792, u'limit': 15, u'remaining': 0} ``` How do I get just that number?
Without knowing the details of the Twitter API, it looks like the `rate_limit_status()` method is returning a `dictionary` of `dictionaries`. So you should be able to get at the data you want by looking up the right keys: ``` >>> g = eval("{u'rate_limit_context': {u'access_token': u'....'}, u'resources': ... ") >>> g[u'resources'][u'statuses'][u'/statuses/retweets/:id'] {u'limit': 15, u'remaining': 0, u'reset': 1371672792} >>> g[u'resources'][u'statuses'][u'/statuses/retweets/:id'][u'remaining'] 0 ```
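If any level of that nested structure might be missing (Twitter has changed the resource map between API versions), chaining `dict.get` calls with empty-dict defaults returns `None` instead of raising `KeyError`. A sketch on a trimmed-down copy of the response:

```python
# Trimmed-down copy of the rate_limit_status() response
g = {
    'resources': {
        'statuses': {
            '/statuses/retweets/:id': {
                'reset': 1371672792, 'limit': 15, 'remaining': 0,
            },
        },
    },
}

remaining = (
    g.get('resources', {})
     .get('statuses', {})
     .get('/statuses/retweets/:id', {})
     .get('remaining')          # None instead of KeyError if absent
)
print(remaining)  # 0
```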
This problem has been frustrating me for the last 72 hours. First, we should clarify here - you are using the Tweepy library, correct? I had some code using the twitter package found at the top of the list in PyPI, but it doesn't appear that they made the upgrade from API 1.0 to API 1.1, nor does the #2 in popularity python-twitter appear to be ready, despite the older API sunsetting over a month ago. The entire rate limit scheme has changed from API 1.0 to API 1.1 - previously you had 350 general purpose tokens per hour and the system reset on the hour. The prior ability to get 150 API tokens worth of search based on source IP w/o authentication is gone. Now time increments are fifteen minutes and you get 15 posts and 180 gets. <https://dev.twitter.com/docs/rate-limiting/1.1> The new API provides granularity to the level of each API access point - instead of just a bucket of 350 tokens it looks like every conceivable post type action is limited by type to 15 per each quarter hour, and get type actions are limited to 180 per quarter hour. It looks like Twitter will have a much better handle on API usage accounting. For my purposes it appears that capacity is nearly double, from 350 calls/hour to (4x180 = 720) calls/hour. Thank you, Twitter. The access method the other responder described is ugly - we have to dig four levels deep into dictionaries for what is conceptually a dictionary of endpoint/count tuples? If that's the full structure, so be it, but there really needs to be a simple method of naming the desired endpoint and getting back an integer. That would be readable, this muddle of dictionary references is not.
need specific rate limiting number twitter api python
[ "", "python", "twitter", "tweets", "rate", "" ]
I am trying to write a query to get a count of the records from my database grouped by the day on which they happened (a date stored in the table). My query looks like this: ``` SELECT count(id), A.status, YEAR(A.dateRequested), MONTHNAME(A.dateRequested), WEEK(A.dateRequested), DAYNAME(A.dateRequested) FROM ASSESSMENT A where A.status = 'Pending'; ``` What it gives me is just one record: ``` 50 Pending 2013 April 16 Monday ``` Why is it not giving me all the records listed? For example, I have 50 records inserted on Monday, 20 on Tuesday, and so on. How can I get a list like that? And why does the function WEEK(A.dateRequested) return `16` when there are only 4 or 5 weeks in a month?
You're using `count(id)` without a `group by`. The way MySQL works is that this translates to an empty `group by`. You'll get the `count(id)` for the entire table, and the other fields will be taken from one arbitrary row. To list the number of Pending records by weekday: ``` SELECT DAYNAME(A.dateRequested) , count(id) FROM ASSESSMENT A where A.status = 'Pending' group by DAYNAME(A.dateRequested) ```
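The effect of the `GROUP BY` can be demonstrated without a MySQL server; the sketch below uses Python's bundled sqlite3 with invented data (sqlite has no `DAYNAME()`, so `strftime('%w')` — 0=Sunday .. 6=Saturday — stands in for it):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE assessment (id INTEGER, status TEXT, dateRequested TEXT)")
con.executemany(
    "INSERT INTO assessment VALUES (?, ?, ?)",
    [(1, 'Pending', '2013-04-15'),   # a Monday
     (2, 'Pending', '2013-04-15'),
     (3, 'Pending', '2013-04-16'),   # a Tuesday
     (4, 'Closed',  '2013-04-16')],  # filtered out by the WHERE clause
)

# GROUP BY collapses the rows into one count per weekday
rows = con.execute("""
    SELECT strftime('%w', dateRequested) AS wday, COUNT(id)
    FROM assessment
    WHERE status = 'Pending'
    GROUP BY wday
    ORDER BY wday
""").fetchall()
print(rows)  # [('1', 2), ('2', 1)]
```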
1. Week 16 is the week of the year, not the week of the month. 2. Your query is NOT querying for items that happened on a specific day. It is looking for items which have a Pending status. Change it to something like: WHERE A.dateRequested = '2013-06-20' 3. You are using the COUNT function, which will return only one row unless you GROUP BY something.
MySQL query to get a list of records depending on year/month/day
[ "", "mysql", "sql", "" ]
I've got a list of lists containing integers sorted at the moment by the sum of the contents: ``` [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [0, 0, 3], [0, 1, 2], [0, 2, 1], [0, 3, 0], [1, 0, 2], [1, 1, 1], [1, 2, 0], [2, 0, 1], [2, 1, 0], [3, 0, 0], [0, 0, 4], [0, 1, 3], [0, 2, 2], [0, 3, 1], [0, 4, 0], [1, 0, 3], [1, 1, 2], [1, 2, 1], [1, 3, 0], [2, 0, 2], [2, 1, 1], [2, 2, 0], [3, 0, 1], [3, 1, 0], [4, 0, 0]] ``` I would like to sort them in ascending order by the common structure of its contents i.e like ``` [[1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0], [0, 4, 0], [0, 0, 1], [0, 0, 2], [0,0,3], [0,0,4]... ] ``` I have seen the docs but I can't figure out how I can do this.
Is this what you're after... ``` >>> l = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [0, 0, 3], [0, 1, 2], [0, 2, 1], [0, 3, 0], [1, 0, 2], [1, 1, 1], [1, 2, 0], [2, 0, 1], [2, 1, 0], [3, 0, 0], [0, 0, 4], [0, 1, 3], [0, 2, 2], [0, 3, 1], [0, 4, 0], [1, 0, 3], [1, 1, 2], [1, 2, 1], [1, 3, 0], [2, 0, 2], [2, 1, 1], [2, 2, 0], [3, 0, 1], [3, 1, 0], [4, 0, 0]] >>> l.sort(key=lambda x: (-x.count(0), x[::-1])) >>> l [[1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0], [0, 4, 0], [0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [1, 1, 0], [2, 1, 0], [3, 1, 0], [1, 2, 0], [2, 2, 0], [1, 3, 0], [1, 0, 1], [2, 0, 1], [3, 0, 1], [0, 1, 1], [0, 2, 1], [0, 3, 1], [1, 0, 2], [2, 0, 2], [0, 1, 2], [0, 2, 2], [1, 0, 3], [0, 1, 3], [1, 1, 1], [2, 1, 1], [1, 2, 1], [1, 1, 2]] ```
Use your sorting criteria function as `key` in the sorting. ``` input_list = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0], [0, 0, 3], [0, 1, 2], [0, 2, 1], [0, 3, 0], [1, 0, 2], [1, 1, 1], [1, 2, 0], [2, 0, 1], [2, 1, 0], [3, 0, 0], [0, 0, 4], [0, 1, 3], [0, 2, 2], [0, 3, 1], [0, 4, 0], [1, 0, 3], [1, 1, 2], [1, 2, 1], [1, 3, 0], [2, 0, 2], [2, 1, 1], [2, 2, 0], [3, 0, 1], [3, 1, 0], [4, 0, 0]] sorted_list = sorted(input_list,key=my_sorting_func) ```
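As a concrete sketch of such a key function, here is one possible `my_sorting_func` (mirroring the accepted approach: most zeros first, then the reversed triple) applied to a small slice of the data:

```python
def my_sorting_func(triple):
    # Most zeros first, so the "pure axis" triples come before mixed
    # ones; then compare the reversed triple so the x-axis group leads.
    return (-triple.count(0), triple[::-1])

data = [[0, 0, 2], [1, 0, 0], [0, 1, 0], [2, 0, 0]]
print(sorted(data, key=my_sorting_func))
# [[1, 0, 0], [2, 0, 0], [0, 1, 0], [0, 0, 2]]
```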
Sort a list of lists containing integers
[ "", "python", "" ]
I believed that the `hash()` function works the same in all python interpreters. But it differs when I run it on my mobile using [python for android](https://code.google.com/p/python-for-android/). I get the same hash value when hashing strings and numbers, but when I hash built-in data types the hash value differs. **PC Python Interpreter (Python 2.7.3)** ``` >>> hash(int) 31585118 >>> hash("hello sl4a") 1532079858 >>> hash(101) 101 ``` **Mobile Python Interpreter (Python 2.6.2)** ``` >>> hash(int) -2146549248 >>> hash("hello sl4a") 1532079858 >>> hash(101) 101 ``` Can anyone tell me whether this is a bug or whether I have misunderstood something?
for old python (at least, my Python 2.7), it seems that ``` hash(<some type>) = id(<type>) / 16 ``` and for CPython `id()` is the address in memory - <http://docs.python.org/2/library/functions.html#id> ``` >>> id(int) / hash(int) 16 >>> id(int) % hash(int) 0 ``` so my guess is that the Android port has some strange convention for memory addresses? anyway, given the above, hashes for types (and other built-ins i guess) will differ across installs because functions are at different addresses. in contrast, hashes for values (what i think you mean by "non-internal objects") (before the random stuff was added) are calculated from their values and so likely repeatable. PS but there's at least one more CPython wrinkle: ``` >>> for i in range(-1000,1000): ... if hash(i) != i: print(i) ... -1 ``` the explanation for that one: CPython's C-level hash functions use -1 as an error indicator, so `hash(-1)` is silently remapped to -2.
`hash()` is randomised by default each time you start a new instance of recent versions (Python3.3+) to [prevent dictionary insertion DOS attacks](http://mail.python.org/pipermail/python-announce-list/2012-March/009394.html) Prior to that, `hash()` was different for 32bit and 64bit builds anyway. If you want something that *does* hash to the same thing every time, use one of the hashes in hashlib ``` >>> import hashlib >>> hashlib.algorithms ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') ```
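To make the difference concrete: a `hashlib` digest is a pure function of the input bytes, so it is identical on the PC and the Android port, across interpreter versions and runs — unlike the built-in `hash()`, which can depend on the build:

```python
import hashlib

# Hash the bytes of the string; the result is the same on every
# platform and every run, unlike the built-in hash()
digest = hashlib.sha256('hello sl4a'.encode('utf-8')).hexdigest()
print(len(digest))  # 64 hex characters for SHA-256

# Recomputing gives the identical digest
assert digest == hashlib.sha256(b'hello sl4a').hexdigest()
```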
Why doesn't Python hash function give the same values when run on Android implementation?
[ "", "python", "hash", "sl4a", "" ]
Let's say I have: ``` action = '{bond}, {james} {bond}'.format(bond='bond', james='james') ``` this wil output: ``` 'bond, james bond' ``` Next we have: ``` action = '{bond}, {james} {bond}'.format(bond='bond') ``` this will output: ``` KeyError: 'james' ``` Is there some workaround to prevent this error to happen, something like: * if keyrror: ignore, leave it alone (but do parse others) * compare format string with available named arguments, if missing then add
## If you are using Python 3.2+, you can use [str.format\_map()](http://docs.python.org/3/library/stdtypes.html#str.format_map). For `bond, bond`: ``` from collections import defaultdict '{bond}, {james} {bond}'.format_map(defaultdict(str, bond='bond')) ``` Result: ``` 'bond, bond' ``` For `bond, {james} bond`: ``` class SafeDict(dict): def __missing__(self, key): return '{' + key + '}' '{bond}, {james} {bond}'.format_map(SafeDict(bond='bond')) ``` Result: ``` 'bond, {james} bond' ``` ## In Python 2.6/2.7 For `bond, bond`: ``` from collections import defaultdict import string string.Formatter().vformat('{bond}, {james} {bond}', (), defaultdict(str, bond='bond')) ``` Result: ``` 'bond, bond' ``` For `bond, {james} bond`: ``` from collections import defaultdict import string class SafeDict(dict): def __missing__(self, key): return '{' + key + '}' string.Formatter().vformat('{bond}, {james} {bond}', (), SafeDict(bond='bond')) ``` Result: ``` 'bond, {james} bond' ```
You could use a [template string](http://docs.python.org/2/library/string.html#template-strings) with the `safe_substitute` method. ``` from string import Template tpl = Template('$bond, $james $bond') action = tpl.safe_substitute({'bond': 'bond'}) ```
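Another lightweight workaround, if you know up front which names are missing, is to re-emit the placeholder itself as the substitution value, leaving the string formattable a second time:

```python
template = '{bond}, {james} {bond}'

# Substitute what we have; pass the missing placeholder through verbatim
partial = template.format(bond='bond', james='{james}')
print(partial)                        # 'bond, {james} bond'

# The remaining name can be filled in later
print(partial.format(james='james'))  # 'bond, james bond'
```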
Format string unused named arguments
[ "", "python", "string", "string-formatting", "missing-data", "defaultdict", "" ]
I'm using Python 3.3. I'm getting an email from an IMAP server, then converting it to an instance of an email from the [standard email library](http://docs.python.org/2/library/email.message.html). I do this: ``` message.get("date") ``` Which gives me this, for example: ``` Wed, 23 Jan 2011 12:03:11 -0700 ``` I want to convert this to something I can put into `time.strftime()` so I can format it nicely. I want the result in local time, not UTC. There are so many functions, deprecated approaches, and edge cases that I'm not sure what the modern route to take is.
Do this: ``` import email, email.utils, datetime, time def dtFormat(s): dt = email.utils.parsedate_tz(s) dt = email.utils.mktime_tz(dt) dt = datetime.datetime.fromtimestamp(dt) dt = dt.timetuple() return dt ``` then this: ``` s = message.get("date") # e.g. "Wed, 23 Jan 2011 12:03:11 -0700" print(time.strftime("%Y-%m-%d-%H-%M-%S", dtFormat(s))) ``` gives this: ``` 2011-01-23-21-03-11 ```
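Since Python 3.3 the same pipeline collapses into one call, `email.utils.parsedate_to_datetime`, which returns a timezone-aware `datetime`; `astimezone()` with no argument then converts it to local time (the local output is not asserted below, since it depends on the machine's zone):

```python
from email.utils import parsedate_to_datetime

s = 'Wed, 23 Jan 2011 12:03:11 -0700'
dt = parsedate_to_datetime(s)    # aware datetime; available since Python 3.3

print(dt.isoformat())            # 2011-01-23T12:03:11-07:00
local = dt.astimezone()          # no argument -> the machine's local zone
print(local.strftime('%Y-%m-%d-%H-%M-%S'))
```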
Something like this? ``` >>> import time >>> s = "Wed, 23 Jan 2011 12:03:11 -0700" >>> newtime = time.strptime(s, '%a, %d %b %Y %H:%M:%S -0700') >>> print(time.strftime('Two years ago was %Y', newtime)) Two years ago was 2011 # Or whatever output you wish to receive. ```
Parse date/time from a string
[ "", "python", "python-3.x", "" ]
Is it possible to modify code below to have printout from 'stdout 'and 'stderr': * printed on the **terminal** (in real time), * and finally stored in **outs** and **errs** variables? The code: ``` #!/usr/bin/python3 # -*- coding: utf-8 -*- import subprocess def run_cmd(command, cwd=None): p = subprocess.Popen(command, cwd=cwd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) outs, errs = p.communicate() rc = p.returncode outs = outs.decode('utf-8') errs = errs.decode('utf-8') return (rc, (outs, errs)) ``` Thanks to @unutbu, special thanks for @j-f-sebastian, final function: ``` #!/usr/bin/python3 # -*- coding: utf-8 -*- import sys from queue import Queue from subprocess import PIPE, Popen from threading import Thread def read_output(pipe, funcs): for line in iter(pipe.readline, b''): for func in funcs: func(line.decode('utf-8')) pipe.close() def write_output(get): for line in iter(get, None): sys.stdout.write(line) def run_cmd(command, cwd=None, passthrough=True): outs, errs = None, None proc = Popen( command, cwd=cwd, shell=False, close_fds=True, stdout=PIPE, stderr=PIPE, bufsize=1 ) if passthrough: outs, errs = [], [] q = Queue() stdout_thread = Thread( target=read_output, args=(proc.stdout, [q.put, outs.append]) ) stderr_thread = Thread( target=read_output, args=(proc.stderr, [q.put, errs.append]) ) writer_thread = Thread( target=write_output, args=(q.get,) ) for t in (stdout_thread, stderr_thread, writer_thread): t.daemon = True t.start() proc.wait() for t in (stdout_thread, stderr_thread): t.join() q.put(None) outs = ' '.join(outs) errs = ' '.join(errs) else: outs, errs = proc.communicate() outs = '' if outs == None else outs.decode('utf-8') errs = '' if errs == None else errs.decode('utf-8') rc = proc.returncode return (rc, (outs, errs)) ```
You could spawn threads to read the stdout and stderr pipes, write to a common queue, and append to lists. Then use a third thread to print items from the queue. ``` import time import Queue import sys import threading import subprocess PIPE = subprocess.PIPE def read_output(pipe, funcs): for line in iter(pipe.readline, ''): for func in funcs: func(line) # time.sleep(1) pipe.close() def write_output(get): for line in iter(get, None): sys.stdout.write(line) process = subprocess.Popen( ['random_print.py'], stdout=PIPE, stderr=PIPE, close_fds=True, bufsize=1) q = Queue.Queue() out, err = [], [] tout = threading.Thread( target=read_output, args=(process.stdout, [q.put, out.append])) terr = threading.Thread( target=read_output, args=(process.stderr, [q.put, err.append])) twrite = threading.Thread(target=write_output, args=(q.get,)) for t in (tout, terr, twrite): t.daemon = True t.start() process.wait() for t in (tout, terr): t.join() q.put(None) print(out) print(err) ``` The reason for using the third thread -- instead of letting the first two threads both print directly to the terminal -- is to prevent both print statements from occurring concurrently, which can result in sometimes garbled text. --- The above calls `random_print.py`, which prints to stdout and stderr at random: ``` import sys import time import random for i in range(50): f = random.choice([sys.stdout,sys.stderr]) f.write(str(i)+'\n') f.flush() time.sleep(0.1) ``` --- This solution borrows code and ideas from [J. F. Sebastian, here](https://stackoverflow.com/a/4418891/190597). 
--- Here is an alternative solution for Unix-like systems, using `select.select`: ``` import collections import select import fcntl import os import time import Queue import sys import threading import subprocess PIPE = subprocess.PIPE def make_async(fd): # https://stackoverflow.com/a/7730201/190597 '''add the O_NONBLOCK flag to a file descriptor''' fcntl.fcntl( fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK) def read_async(fd): # https://stackoverflow.com/a/7730201/190597 '''read some data from a file descriptor, ignoring EAGAIN errors''' # time.sleep(1) try: return fd.read() except IOError, e: if e.errno != errno.EAGAIN: raise e else: return '' def write_output(fds, outmap): for fd in fds: line = read_async(fd) sys.stdout.write(line) outmap[fd.fileno()].append(line) process = subprocess.Popen( ['random_print.py'], stdout=PIPE, stderr=PIPE, close_fds=True) make_async(process.stdout) make_async(process.stderr) outmap = collections.defaultdict(list) while True: rlist, wlist, xlist = select.select([process.stdout, process.stderr], [], []) write_output(rlist, outmap) if process.poll() is not None: write_output([process.stdout, process.stderr], outmap) break fileno = {'stdout': process.stdout.fileno(), 'stderr': process.stderr.fileno()} print(outmap[fileno['stdout']]) print(outmap[fileno['stderr']]) ``` This solution uses code and ideas from [Adam Rosenfield's post, here](https://stackoverflow.com/a/7730201/190597).
To capture and display at the same time both stdout and stderr from a child process line by line in a single thread, you could use asynchronous I/O: ``` #!/usr/bin/env python3 import asyncio import os import sys from asyncio.subprocess import PIPE @asyncio.coroutine def read_stream_and_display(stream, display): """Read from stream line by line until EOF, display, and capture the lines. """ output = [] while True: line = yield from stream.readline() if not line: break output.append(line) display(line) # assume it doesn't block return b''.join(output) @asyncio.coroutine def read_and_display(*cmd): """Capture cmd's stdout, stderr while displaying them as they arrive (line by line). """ # start process process = yield from asyncio.create_subprocess_exec(*cmd, stdout=PIPE, stderr=PIPE) # read child's stdout/stderr concurrently (capture and display) try: stdout, stderr = yield from asyncio.gather( read_stream_and_display(process.stdout, sys.stdout.buffer.write), read_stream_and_display(process.stderr, sys.stderr.buffer.write)) except Exception: process.kill() raise finally: # wait for the process to exit rc = yield from process.wait() return rc, stdout, stderr # run the event loop if os.name == 'nt': loop = asyncio.ProactorEventLoop() # for subprocess' pipes on Windows asyncio.set_event_loop(loop) else: loop = asyncio.get_event_loop() rc, *output = loop.run_until_complete(read_and_display(*cmd)) loop.close() ```
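On Python 3.7+, the same coroutine-based idea reads more naturally with `async`/`await` and `asyncio.run`; here is a minimal sketch that uses `python -c` as a stand-in child process (the child command is made up for illustration):

```python
import asyncio
import sys
from asyncio.subprocess import PIPE

async def read_stream(stream, display):
    """Read a pipe line by line until EOF, displaying and capturing each line."""
    captured = []
    while True:
        line = await stream.readline()
        if not line:
            break
        captured.append(line)
        display(line)  # assume this does not block
    return b"".join(captured)

async def read_and_display(*cmd):
    proc = await asyncio.create_subprocess_exec(*cmd, stdout=PIPE, stderr=PIPE)
    # drain stdout and stderr concurrently so neither pipe can fill up
    stdout, stderr = await asyncio.gather(
        read_stream(proc.stdout, sys.stdout.buffer.write),
        read_stream(proc.stderr, sys.stderr.buffer.write),
    )
    rc = await proc.wait()
    return rc, stdout, stderr

child = "import sys; print('to stdout'); print('to stderr', file=sys.stderr)"
rc, out, err = asyncio.run(read_and_display(sys.executable, "-c", child))
```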
Subprocess.Popen: cloning stdout and stderr both to terminal and variables
[ "", "python", "python-3.x", "subprocess", "popen", "" ]
The following code tries to create an integer array filled with `n` times number 1. ``` import sys def foo(n): if n == 0: return [] else: return foo(n-1).append(1) if __name__ == '__main__': foo(5) ``` Executing this program yields in an error: ``` AttributeError: 'NoneType' object has no attribute 'append' ``` What am I doing wrong when creating the array?
The problem is in your `else`-clause. `append` does not return a new list, but rather adds an element to the list in-place, and then returns `None` (hence your error). Try this instead, ``` return foo(n-1) + [1] # creates a *new* list ```
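With that change, the recursion returns a fresh list at every level; a quick sanity check:

```python
def foo(n):
    """Return a list of n ones, built recursively."""
    if n == 0:
        return []
    return foo(n - 1) + [1]  # + creates a new list instead of returning None

result = foo(5)
```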
Just look at the following code to understand why you are getting the error, ``` >>> x = [].append(1) >>> x is None True ``` When you append to a list, the return value is `None`! So you must do something like this, ``` def foo(n): if n == 0: return [] else: return foo(n-1) + [1] ``` Using `+` operator is really like calling `extend` on a list for which the return value *is the new list*, unlike `append`. ``` >>> x = [1] + [1] >>> x [1, 1] ``` **NOTE:** Obviously for this simple example you should just use, ``` >>> [1] * 6 [1, 1, 1, 1, 1, 1] ``` Which is fine for immutable `int`s but if you are dealing with objects where you don't want references to the same one, ``` >>> [1 for _ in range(6)] [1, 1, 1, 1, 1, 1] ``` But I'm assuming you are writing this to practice recursive solutions and such.
How to create an integer array within a recursion?
[ "", "python", "arrays", "" ]
So I have no idea how to do this and I've been reading the documentation and searching around but I can't seem to find anything. \d gives me a nice list of all the tables and information on them but I want to be able to send a command to the postgresql database and pipe it into a text file that will contain just the table names.
Run psql with the following switch. It'll show the internal commands psql sends to the catalog to get this kind of information: > ``` > -E > --echo-hidden > ``` > > Echo the actual queries generated by \d and other backslash commands. > You can use this to study psql's internal operations. This is > equivalent to setting the variable ECHO\_HIDDEN from within psql. <http://www.postgresql.org/docs/current/static/app-psql.html> e.g.: ``` ~ $ psql -E SET psql (9.2.4) Type "help" for help. denis=# \dt ********* QUERY ********** SELECT n.nspname as "Schema", c.relname as "Name", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type", pg_catalog.pg_get_userbyid(c.relowner) as "Owner" FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind IN ('r','') AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' AND n.nspname !~ '^pg_toast' AND pg_catalog.pg_table_is_visible(c.oid) ORDER BY 1,2; ************************** List of relations Schema | Name | Type | Owner --------+------+-------+------- public | test | table | denis (1 row) ```
As an alternative to Denis's solution, you might want: ``` select table_name from information_schema.tables where table_schema = 'public'; ``` Without the `table_schema` filter this also lists the system catalog tables, which is rarely what you want.
Show only table names when viewing postgresql tables
[ "", "sql", "postgresql", "" ]
I got a task to replace "O"(capital O) by "0" in a text file by using python. But one condition is that I have to preserve the other words like Over, NATO etc. I have to replace only the words like 9OO to 900, 2OO6 to 2006 and so on. I tried a lot but yet not successful. My code is given below. Please help me any one. Thanks in advance ``` import re srcpatt = 'O' rplpatt = '0' cre = re.compile(srcpatt) with open('myfile.txt', 'r') as file: content = file.read() wordlist = re.findall(r'(\d+O|O\d+)',str(content)) print(wordlist) for word in wordlist: subcontent = cre.sub(rplpatt, word) newrep = re.compile(word) newcontent = newrep.sub(subcontent,content) with open('myfile.txt', 'w') as file: file.write(newcontent) print('"',srcpatt,'" is successfully replaced by "',rplpatt,'"') ```
`re.sub` can take in a replacement function, so we can pare this down pretty nicely: ``` import re with open('myfile.txt', 'r') as file: content = file.read() with open('myfile.txt', 'w') as file: file.write(re.sub(r'\d+[\dO]+|[\dO]+\d+', lambda m: m.group().replace('O', '0'), content)) ```
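Run against a small sample (the sample sentence is made up), the pattern leaves ordinary words like `NATO` and `Over` alone and only rewrites `O`s that sit next to digits:

```python
import re

text = "In 2OO6, NATO shipped 9OO units; Over half arrived."
# replacement function runs once per match and fixes only that span
fixed = re.sub(r'\d+[\dO]+|[\dO]+\d+',
               lambda m: m.group().replace('O', '0'),
               text)
```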
You can probably get away with matching just a leading digit followed by `O`. This won't handle `OO7`, but it will work nicely with `8080`, for example, which none of the answers here that match trailing digits will. If you want to handle a leading `O` as well, you need a lookahead match. ``` re.sub(r'(\d)(O+)', lambda m: m.group(1) + '0' * len(m.group(2)), content) ```
How to replace a character in some specific word in a text file using python
[ "", "python", "replace", "" ]
I have this somewhat complicated command line function in Python (lets call it `myFunction()`), and I am working to integrate it in a graphical interface (using PySide/Qt). The GUI is used to help select inputs, and display outputs. However, `myFunction` is designed to work as a stand-alone command line function, and it occasionnaly prints out the progress. My question is: how can I intercept these `print` calls and display them in the GUI? I know it would be possible to modify `myFunction()` to send `processEvents()` to the GUI, but I would then lose the ability to execute `myFunction()` in a terminal. Ideally, I would like something similar to Ubuntu's graphical software updater, which has a small embeded terminal-looking widget displaying what `apt-get` would display were it executed in a terminal.
You could redirect stdout and restore it afterwards. For example: ``` import StringIO import sys # somewhere to store output out = StringIO.StringIO() # set stdout to our StringIO instance sys.stdout = out # print something (nothing will print) print 'herp derp' # restore stdout so we can really print (__stdout__ stores the original stdout) sys.stdout = sys.__stdout__ # print the stored value from previous print print out.getvalue() ```
Here is a Python 3 pattern using contextmanager that both encapsulates [the monkey-patch technique](https://stackoverflow.com/a/17067799/39396) and also ensures that `sys.stdout` is restored in case of an exception. ``` from io import StringIO import sys from contextlib import contextmanager @contextmanager def capture_stdout(): """ context manager encapsulating a pattern for capturing stdout writes and restoring sys.stdout even upon exceptions Examples: >>> with capture_stdout() as get_value: >>> print("here is a print") >>> captured = get_value() >>> print('Gotcha: ' + captured) >>> with capture_stdout() as get_value: >>> print("here is a print") >>> raise Exception('oh no!') >>> print('Does printing still work?') """ # Redirect sys.stdout out = StringIO() sys.stdout = out # Yield a method clients can use to obtain the value try: yield out.getvalue finally: # Restore the normal stdout sys.stdout = sys.__stdout__ ```
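Since Python 3.4 the standard library ships this exact pattern as `contextlib.redirect_stdout`, which a GUI wrapper can use directly (the function below is a stand-in for the real command-line function):

```python
import io
from contextlib import redirect_stdout

def my_function():
    # stand-in for the real command-line function that prints progress
    print("progress: 50%")

buf = io.StringIO()
with redirect_stdout(buf):
    my_function()            # its print() calls land in buf
captured = buf.getvalue()    # hand this string to the GUI widget
```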
Intercept python's `print` statement and display in GUI
[ "", "python", "" ]
I'm attempting to remove all lines where my regex matches(regex is simply looking for any line that has yahoo in it). Each match is on it's own line, so there's no need for the multiline option. This is what I have so far... ``` import re inputfile = open('C:\\temp\\Scripts\\remove.txt','w',encoding="utf8") inputfile.write(re.sub("\[(.*?)yahoo(.*?)\n","",inputfile)) inputfile.close() ``` I'm receiving the following error: Traceback (most recent call last): line 170, in sub return \_compile(pattern, flags).sub(repl, string, count) TypeError: expected string or buffer
Use `fileinput` module if you want to modify the original file: ``` import re import fileinput for line in fileinput.input(r'C:\temp\Scripts\remove.txt', inplace = True): if not re.search(r'\byahoo\b', line): print(line, end="") ```
Here's Python 3 variant of [@Ashwini Chaudhary's answer](https://stackoverflow.com/a/17221420/4279), to remove all lines that contain a regex `pattern` from a give `filename`: ``` #!/usr/bin/env python3 """Usage: remove-pattern <pattern> <file>""" import fileinput import re import sys def main(): pattern, filename = sys.argv[1:] # get pattern, filename from command-line matched = re.compile(pattern).search with fileinput.FileInput(filename, inplace=1, backup='.bak') as file: for line in file: if not matched(line): # save lines that do not match print(line, end='') # this goes to filename due to inplace=1 main() ``` It assumes `locale.getpreferredencoding(False) == input_file_encoding` otherwise it might break on non-ascii characters. To make it work regardless what current locale is or for input files that have a different encoding: ``` #!/usr/bin/env python3 import os import re import sys from tempfile import NamedTemporaryFile def main(): encoding = 'utf-8' pattern, filename = sys.argv[1:] matched = re.compile(pattern).search with open(filename, encoding=encoding) as input_file: with NamedTemporaryFile(mode='w', encoding=encoding, dir=os.path.dirname(filename), delete=False) as outfile: for line in input_file: if not matched(line): print(line, end='', file=outfile) os.replace(outfile.name, input_file.name) main() ```
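The filtering logic itself is independent of the file handling; on a small in-memory sample (sample lines made up):

```python
import re

lines = [
    "keep this line\n",
    "http://www.yahoo.com/some/path\n",
    "keep this one too\n",
]
matched = re.compile(r"yahoo").search
# keep every line the regex does NOT match
kept = [line for line in lines if not matched(line)]
```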
Using Python to Remove All Lines Matching Regex
[ "", "python", "regex", "python-3.x", "" ]
I'm not a programmer, but would like to use Python for automating some administrative tasks. The first application after "Hello world" I tried to create is an interactive ssh client. I've read some documentation and articles and decided the easiest way would be to use the paramiko module, but unfortunately I'm facing a problem: my application asks you to enter some necessary information such as server ip, username and password. After this it establishes a connection with the defined server and provides you with a cli on your screen. To emulate the process of entering commands I use a while loop. Unfortunately my application works well only with the first command you enter. While trying to type the second command an error appears: ``` Traceback (most recent call last): File "C:\Python27\Tests\ssh_client.py", line 53, in <module> client.execute_command(command) File "C:\Python27\Tests\ssh_client.py", line 26, in execute_command stdin,stdout,stderr = self.connection.exec_command(command) File "C:\Python27\lib\site-packages\paramiko\client.py", line 343, in exec_command chan.exec_command(command) AttributeError: 'NoneType' object has no attribute 'exec_command' ``` Code of the program (Windows 7): ``` import paramiko SERVER = raw_input('Please enter an ip address of remote host: ') USER = raw_input('Please enter your username: ') PASSWORD = raw_input('Please enter your password: ') class MYSSHClient(): def __init__(self, server=SERVER, username=USER, password=PASSWORD): self.server = server self.username = username self.password = password self.connection = None self.result = '' self.is_error = False def do_connect(self): self.connection = paramiko.SSHClient() self.connection.set_missing_host_key_policy(paramiko.AutoAddPolicy()) self.connection.connect(self.server, username=self.username, password=self.password) def execute_command(self, command): if command: print command stdin,stdout,stderr = self.connection.exec_command(command) stdin.close() error = str(stderr.read()) if error: self.is_error = True self.result = error print 'error' else: self.is_error = False self.result = str(stdout.read()) print 'no error' print self.result else: print "no command was entered" def do_close(self): self.connection.close() if __name__ == '__main__': client = MYSSHClient() client.do_connect() while 1: command = raw_input('cli: ') if command == 'q': break client.execute_command(command) client.do_close() ``` I tried to delete the while loop and just call commands one by one right in the code, but I have the same problem (when typing the second command I see the same error). It looks like I don't fully understand how the paramiko module works. I tried to find information on the web but unfortunately didn't find any solution. I'd appreciate it if somebody could tell me what I'm doing wrong or give me a link to a similar issue where I can find a solution. Thanks in advance for any help.
Unfortunately I didn't find a way to resolve my issue using the paramiko module, but I found the Exscript module instead. The simple code is below: ``` from Exscript.util.interact import read_login from Exscript.protocols import SSH2 account = read_login() conn = SSH2() conn.connect('192.168.1.1') conn.login(account) while True: command = raw_input('cli: ') if command == 'q': break conn.execute(command) print conn.response conn.send('quit\r') conn.close() ```
Please use the `pxssh` module; it is very useful for this kind of application (check first whether it works on Windows). [Python: How can remote from my local pc to remoteA to remoteb to remote c using Paramiko](https://stackoverflow.com/questions/15818328/python-how-can-remote-from-my-local-pc-to-remotea-to-remoteb-to-remote-c-using/15831027#15831027) This example is very helpful for you: [Python - Pxssh - Getting an password refused error when trying to login to a remote server](https://stackoverflow.com/questions/15823011/python-pxssh-getting-an-password-refused-error-when-trying-to-login-to-a-rem/15831372#15831372) Also check your server settings on the remote host machine.
Python interactive ssh client using paramiko
[ "", "python", "ssh", "paramiko", "" ]
So I have a little script in python and for the help I want to print each method docstring. For example ``` ~$ myscript.py help update ``` would print `myClass.update.__doc__` to the screen. The code I was trying to run is this: ``` import sys class myClass: def update(self): """ update method help """ def help(self): method = sys.argv[2:3][0] if method == "update": print "Help: " + self.update.__doc__ myClass = myClass() myClass.help() ``` It works, but as my methods collection grows it will be a pain in the ass to make the help work as intend. Is there anyway to call something like `self.method.__doc__` dynamically? Thanks.
Instead of using this: ``` if method == 'update': help_string = self.update.__doc__ ``` you could use a more flexible solution: ``` help_string = getattr(self, method).__doc__ ``` Just make sure that you catch `AttributeError`s (one will be thrown when there is no method with the given name).
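A compact sketch of the `getattr` approach with the `AttributeError` case handled (the class and strings are illustrative):

```python
class MyClass(object):
    def update(self):
        """update method help"""

    def help(self, method):
        try:
            return "Help: " + getattr(self, method).__doc__
        except AttributeError:
            return "No help available for %r" % method

obj = MyClass()
ok = obj.help("update")
missing = obj.help("delete")
```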
This will do it: ``` method = sys.argv[2:3][0] # This is a bit odd; why not sys.argv[2]? print "Help: " + getattr(self, method).__doc__ ```
Dynamically access instance method
[ "", "python", "" ]
I have a 22GB .sql file (100+ tables) and i only need, let's say, 5 of them. I have tried all oracle tools, but none of them is capable of extracting only specific tables. Is there ANY way to extract only specific tables ?
I just stumbled over a very interesting script that creates a single .sql file for each table that exists in the huge main .sql: MYSQLDUMPSPLITTER.sh <http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script> In case the link is 404, please see the github gist [here](https://gist.github.com/panique/5896376).
If you created the file with `mysqldump`, I believe you can use text utilities to extract the CREATE TABLE and INSERT statements. Specifically, you can use [`sed` addresses](http://www.gnu.org/software/sed/manual/html_node/Addresses.html) to extract all the lines between two regular expressions. It won't have trouble with a 22 gig file. I dumped my sandbox database (a small database I use mainly for answering questions on SO) for testing. In the version of MySQL that I have installed here, this sed one-liner extracts the CREATE table statement and INSERT statements for the table "DEP\_FACULTY". ``` $ sed -n -e '/^CREATE TABLE `DEP_FACULTY`/,/UNLOCK TABLES/p' mysql.sql > output.file ``` This regular expression identifies the start of the CREATE TABLE statement. * /^CREATE TABLE `DEP_FACULTY`/ CREATE TABLE statements seem to always be immediately followed by INSERT statements. So we just need a regular expression that identifies the end of the INSERT statements. * /UNLOCK TABLES/ If your version of `mysqldump` produces the same output, you should be able to just replace the table name, change the name of the output file to something meaningful, and go drink a cup of coffee.
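The same range-extraction idea is easy to express in Python, reading the dump line by line so a 22 GB file never has to fit in memory (the miniature dump below is made up):

```python
import re

def extract_table(dump_lines, table):
    """Collect everything from CREATE TABLE `table` through UNLOCK TABLES."""
    start = re.compile(r"^CREATE TABLE `%s`" % re.escape(table))
    copying, out = False, []
    for line in dump_lines:
        if start.match(line):
            copying = True
        if copying:
            out.append(line)
            if line.startswith("UNLOCK TABLES"):
                break
    return out

dump = [
    "CREATE TABLE `a` (id INT);",
    "INSERT INTO `a` VALUES (1);",
    "UNLOCK TABLES;",
    "CREATE TABLE `b` (id INT);",
    "INSERT INTO `b` VALUES (2);",
    "UNLOCK TABLES;",
]
section = extract_table(dump, "b")
```

In practice `dump_lines` would be an open file object, which iterates lazily.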
How to extract/split SQL statements in a huge .sql file to get only certain tables?
[ "", "mysql", "sql", "" ]
I am trying to create a dictionary where the name comes from a variable. Here is the situation since maybe there is a better way: Im using an API to get attributes of "objects". (Name, Description, X, Y, Z) etc. I want to store this information in a way that keeps the data by "object". In order to get this info, the API iterates through all the "objects". So what my proposal was that if the object name is one of the ones i want to "capture", I want to create a dictionary with that name like so: ``` ObjectName = {'Description': VarDescrption, 'X': VarX.. etc} ``` (Where I say `"Varetc..."` that would be the value of that attribute passed by the API. Now since I know the list of names ahead of time, I CAN use a really long If tree but am looking for something easier to code to accomplish this. (and extensible without adding too much code) Here is code I have: ``` def py_cell_object(): #object counter - unrelated to question addtototal() #is this an object I want? if aw.aw_string (239)[:5] == "TDT3_": #If yes, make a dictionary with the object description as the name of the dictionary. vars()[aw.aw_string (239)]={'X': aw.aw_int (232), 'Y': aw.aw_int (233), 'Z': aw.aw_int (234), 'No': aw.aw_int (231)} #print back result to test for key in aw.aw_string (239): print 'key=%s, value=%s' % (key, aw.aw_string (239)[key]) ``` here are the first two lines of code to show what "aw" is ``` from ctypes import * aw = CDLL("aw") ``` to explain what the numbers in the API calls are: 231 AW\_OBJECT\_NUMBER, 232 AW\_OBJECT\_X, 233 AW\_OBJECT\_Y, 234 AW\_OBJECT\_Z, 239 AW\_OBJECT\_DESCRIPTION, 231-234 are integers and 239 is a string
I deduce that you are using the [Active Worlds SDK](http://wiki.activeworlds.com/index.php?title=SDK). It would save time to mention that in the first place in future questions. I guess your goal is to create a top-level dictionary, where each key is the object description. Each value is another dictionary, storing many of the attributes of that object. I took a quick look at the AW SDK documentation on the wiki and I don't see a way to ask the SDK for a list of attribute names, IDs, and types. So you will have to hard-code that information in your program somehow. Unless you need it elsewhere, it's simplest to just hard-code it where you create the dictionary, which is what you are already doing. To print it back out, just print the attribute dictionary's `repr`. I would probably format your method more like this: ``` def py_cell_object(): #object counter - unrelated to question addtototal() description = aw.aw_string(239) if description.startswith("TDT3_"): vars()[description] = { 'DESCRIPTION': description, 'X': aw.aw_int(232), 'Y': aw.aw_int(233), 'Z': aw.aw_int(234), 'NUMBER': aw.aw_int (231), ... etc for remaining attributes } print repr(vars()[description]) ``` Some would argue that you should make named constants for the numbers 232, 233, 234, etc., but I see little reason to do that unless you need them in multiple places, or unless it's easy to generate them automatically from the SDK (for example, by parsing a `.h` file).
If the variables are defined in the local scope, it's as simple as: ``` obj_names = {} while True: varname = read_name() if not varname: break obj_names[varname] = locals()[varname] ```
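Rather than injecting names via `vars()` or `locals()`, the usual approach is one plain dictionary keyed by the description string; every name and value here is illustrative:

```python
objects = {}

def record_object(description, x, y, z, number):
    # one attribute dict per captured object, keyed by its description
    objects[description] = {"X": x, "Y": y, "Z": z, "No": number}

record_object("TDT3_sign", 12, 0, 7, 101)
record_object("TDT3_door", 3, 4, 5, 102)
```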
How to create a dictionary based on variable value in Python
[ "", "python", "variables", "dictionary", "activeworlds", "" ]
I am trying to put the items, scraped by my spider, in a mysql db via a mysql pipeline. Everything is working but i see some odd behaviour. I see that the filling of the database is not in the same order as the website itself. There is like a random order. Probably of the dictionary like list of the items scraped i guess. My questions are: 1. how can i get the same order as the items of the website itself. 2. how can i reverse this order of question 1. So items on website: * A * B * C * D * E adding order in my sql: * E * D * C * B * A
Rows in a database have no particular order unless you impose one. So you should add a timestamp column to your table, keep it up to date (MySQL can fill a TIMESTAMP column automatically via DEFAULT CURRENT_TIMESTAMP), and use ORDER BY in your queries.
It's hard to say without the actual code, but in theory.. Scrapy is completely async, you cannot know the order of items that will be parsed and processed through the pipeline. But, you can control the behavior by "marking" each item with `priority` key. Add a field `priority` to your `Item` class, in the `parse_item` method of your spider set the `priority` based on the position on a web page, then in your pipeline you can either write this `priority` field to the database (in order to have an ability to sort later), or gather all items in a class-wide list, and in `close_spider` method sort the list and bulk insert it into the database. Hope that helps.
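The `close_spider` variant boils down to sorting the gathered items by that priority field before the bulk insert; a sketch with made-up items:

```python
scraped = [
    {"priority": 2, "name": "C"},
    {"priority": 0, "name": "A"},
    {"priority": 1, "name": "B"},
]
# question 1: restore the on-page order
page_order = sorted(scraped, key=lambda item: item["priority"])
# question 2: the same order, reversed
reversed_order = sorted(scraped, key=lambda item: item["priority"], reverse=True)
names = [item["name"] for item in page_order]
```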
Scrapy reversed item ordering for preparing in db
[ "", "python", "scrapy", "" ]
I have two tables in a MySQL database. The first one has a list of department names. ``` departments abbreviation | name -------------|------------- ACC | accounting BUS | business ... ``` The second table has a list of courses with names that contain the department's abbreviation. ``` courses section | name -------------|------------- ACC-101-01 | Intro to Accounting ACC-110-01 | More accounting BUS-200-02 | Business etc. ... ``` I'd like to write a query that will, for each row in the `departments` table, give me a count of how many rows in the `courses` table are like the abbreviation I have. Something such as this: ``` abbreviation | num -------------|-------------- ACC | 2 BUS | 1 ... ``` I can do this for one individual department with the query ``` SELECT COUNT(*) FROM courses WHERE section LIKE '%ACC%' (gives me 2) ``` Although I could loop through in PHP and do the above query many times, I'd rather do it in a single query. This is the pseudocode I'm thinking of... ``` SELECT department.abbreviation, num FROM for each row in departments SELECT COUNT(*) AS num FROM classes WHERE section LIKE CONCAT('%',departments.abbreviation,'%) ``` Any ideas?
``` SELECT d.abbreviation, COUNT(*) num FROM departments d INNER JOIN courses c ON c.section LIKE CONCAT(d.abbreviation, "%") GROUP BY d.abbreviation ``` **[Sql Fiddle](http://sqlfiddle.com/#!2/f9e2c/6)**
A quick solution, but not the best, could be: ``` SELECT abbreviation, (SELECT COUNT(*) FROM courses C WHERE D.abbreviation = SUBSTRING(C.section, 1, 3)) AS c FROM departments D; ``` Note that MySQL's SUBSTRING is 1-indexed, and the column to match against is `section` in the `courses` table.
How to count rows in one table based on another table in mysql
[ "", "mysql", "sql", "select", "count", "" ]
I am trying to implement this into python: <https://dev.twitter.com/docs/api/1.1/get/statuses/retweeters/ids> here is what I have so far: ``` def reqs(): t = Twitter(auth=OAuth('...')) tweets = t.statuses.user_timeline.snl() ``` How do I get the user id's of those who retweeted a single tweet from the user's timeline?
Use `retweets` instead. Boom, figured it out on my own: ``` t.statuses.retweets(....) ```
Please see my solution at this post: [difficulty using twitter api command implemention in python](https://stackoverflow.com/questions/17172169/difficulty-using-twitter-api-command-implemention-in-python) It does use the Twython API, however.
having difficulty using twitter api command implemention in python
[ "", "python", "api", "twitter", "encoding", "implementation", "" ]
I'm working a linux machine running CentOS. I don't have full sudo powers and theres multiple versions of python already installed on the machine and the whole thing is a bit of a mess, stuff like numpy doesn't work and I need to install modules which rely on that. I was wondering if its possible to just install python (and hopefully R) into my own home directory or something and then install the modules I need into that directory and run what I need from there? Thanks
Try using virtualenv. Of course this assumes that your system already has virtualenv installed. <https://pypi.python.org/pypi/virtualenv> Basic usage: ``` virtualenv venv ``` That creates a directory called 'venv' at your current folder. It puts in the appropriate `python` and `pip` binaries. To go into the virtual environment: ``` . venv/bin/activate ``` or equivalently: ``` source venv/bin/activate ``` That is the key step. From then on, any packages installed using `pip` will be local to the virtualenv folder that we created above. Remember to do the above step before installing your packages and running your programs. Running `python` will also use the `python` from the virtualenv. You should see something like: ``` (venv)[username@host]$ ``` on your shell. Install packages using pip like this: ``` pip install packagename ``` This will install packages for the virtualenv we created in the first step. `pip` makes use of a `requirements.txt` file for specifying packages required for your python programs. If you have a `requirements.txt` file, you can use: ``` pip install -r requirements.txt ``` to install the packages specified in that file. For running your python program: ``` python programName ``` or use the relevant command you need. Based on my limited experience so far, I have run `gunicorn` for web applications. This will use the `python` and associated libraries from our virtualenv. To deactivate the virtualenv, use: ``` deactivate ``` Hope that helps!
Make a directory where you install your own software: ``` mkdir -p ~/sw/src cd ~/sw/src ``` Download and untar Python source into this directory. Configure it to install into `~/sw`, then compile (make sure all the required headers are available, notably those for SSL if you want to open HTTPS urls with `urllib2`): ``` ./configure --prefix=$HOME/sw make make install ``` Finally, set your `PATH` to include `$HOME/sw/bin` in `.bashrc` (or a similar shell startup file). Now run `python`. R should work similarly.
How to install python for one user on centos?
[ "", "python", "centos", "" ]
In Pandas, how can I get a list of indices of a series/dataframe for the entries that satisfy some property? The following returns a `Series` object ``` my_dataframe.loc[:,'some_column'] == 'some_value' ``` that looks like this: ``` 519 True 509 False 826 False 503 False 511 False 512 False 500 False 507 False 516 True 504 False 521 False 510 False 351 False 522 False 526 False 517 False 501 False ``` but what I want is two lists, one with ``` [519, 516] ``` and one with the rest of indices. How can I do this in Pandas?
``` In [8]: df = DataFrame(randn(10,2),columns=list('AB')) In [9]: df Out[9]: A B 0 -1.046978 1.561624 1 -0.264645 0.717171 2 0.112354 -2.084449 3 -1.243482 -1.183749 4 1.055667 0.532444 5 -1.295805 2.168225 6 -1.239725 0.969934 7 -0.354017 1.434943 8 -0.867560 0.810315 9 0.097698 -0.033039 In [10]: df.loc[:,'B'] > 0 Out[10]: 0 True 1 True 2 False 3 False 4 True 5 True 6 True 7 True 8 True 9 False Name: B, dtype: bool In [14]: x = df.loc[:,'B'] > 0 Per Tom/Andy, much simpler In [33]: x[x].index Out[33]: Int64Index([0, 1, 4, 5, 6, 7, 8], dtype=int64) In [34]: x[~x].index Out[34]: Int64Index([2, 3, 9], dtype=int64) ```
A slight variant on @Jeff's example: ``` In [18]: df Out[18]: A B 0 0.319489 1.012319 1 0.494205 -0.918240 2 1.501922 -0.409661 3 -1.593702 0.705407 4 -0.735312 1.037567 5 -0.201132 -0.673124 6 1.237310 -0.877043 7 -0.946714 0.984164 8 -0.923548 0.415094 9 0.135281 -0.199951 In [14] list1 = df.index[df.loc[:, 'B'] > 0] In [15]: list1 Out[15]: Int64Index([0, 3, 4, 7, 8], dtype=int64) In [16]: list2 = df.index - list1 In [17]: list2 Out[17]: Int64Index([1, 2, 5, 6, 9], dtype=int64) ```
Get indices that satisfy some criteria
[ "", "python", "pandas", "" ]
I want to ensure that all resources are being cleaned up correctly. Is this a safe thing to do: ``` try: closing(open(okFilePath, "w")) except Exception, exception: logger.error(exception) raise ``` **EDIT:** In fact, thinking about it, do I even need the try/except, since I am re-raising the exception anyway? I can log at a higher level. If it errors on creating the file, one can assume there is nothing to close?

To be sure that the file is closed in any case, you can use the [with](http://docs.python.org/release/2.5/whatsnew/pep-343.html) statement. For example: ``` try: with open(path_to_file, "w+") as f: # Do whatever with f except: # log exception ```
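A small self-contained sketch of both points: the `with` block guarantees `close()` runs even if nothing else is done with the file, so no `closing()` wrapper is needed for file objects. The temporary path is made up for the demo.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ok.flag")

# Create (and truncate) the file; the with-block guarantees close() runs
# even if an exception were raised inside it.
with open(path, "w") as f:
    pass

print(os.path.exists(path), f.closed)
```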
You can use this to create a file and close it in one line: `with open(file_path, 'w') as document: pass`
python - creating an empty file and closing in one line
[ "", "python", "file-io", "" ]
I am trying to make a POST request, but I am getting this error: ``` Traceback (most recent call last): File "demo.py", line 7, in <module> r = requests.post(url, data=payload, headers=headers) File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 87, in post return request('post', url, data=data, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 266, in request prep = req.prepare() File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 215, in prepare p.prepare_body(self.data, self.files) File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 338, in prepare_body body = self._encode_params(data) File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 74, in _encode_params for k, vs in to_key_val_list(data): ValueError: too many values to unpack ``` This is my program: ``` import requests url = 'http://www.n-gal.com/index.php?route=openstock/openstock/optionStatus' payload = {'var:1945,product_id:1126'} headers = {'content-type': 'application/x-www-form-urlencoded'} r = requests.post(url, data=payload, headers=headers) ``` I have tried the same POST request through Advanced REST Client using the following data: `URL : http://www.n-gal.com/index.php?route=openstock/openstock/optionStatus` `payload : var=1945&product_id=1126` `Content-Type: application/x-www-form-urlencoded` and it works fine there. Can anyone help me, please?

Try this : ``` import requests url = 'http://www.n-gal.com/index.php?route=openstock/openstock/optionStatus' payload = 'var=1945&product_id=1126' headers = {'content-type': 'application/x-www-form-urlencoded'} r = requests.post(url, data=payload, headers=headers) print r.json() ```
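Without hitting the network you can see why the dictionary form also works: for form data, `requests` encodes a dict of fields into exactly the `var=1945&product_id=1126` body the asker sent from the REST client. A standard-library (Python 3) sketch of that encoding:

```python
from urllib.parse import urlencode

# What requests does under the hood for form data: a dict of fields is
# encoded into an application/x-www-form-urlencoded body.
payload = {'var': '1945', 'product_id': '1126'}
body = urlencode(payload)
print(body)
```

Passing either this pre-encoded string or the dict itself as `data=` produces the same request body.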
You have made `payload` a [set](http://docs.python.org/2/library/stdtypes.html#set-types-set-frozenset), not a dictionary: the braces contain a single quoted string rather than key/value pairs. Change: ``` payload = {'var:1945,product_id:1126'} ``` to: ``` payload = {'var': '1945', 'product_id': '1126'} ``` Because the original is a set, `requests` cannot encode it and the request fails.
Make POST request using Python
[ "", "python", "" ]
Is there any way I can change this SQL so the terms are defined only once? [SQLFiddle](http://www.sqlfiddle.com/#!2/b065c/1). ``` SELECT sum(score) score, title FROM ( SELECT score, title FROM ( SELECT 3 score, 'a railway employee' term UNION ALL SELECT 2 score, 'a railway' term UNION ALL SELECT 2 score, 'railway employee' term UNION ALL SELECT 1 score, 'a' term UNION ALL SELECT 1 score, 'railway' term UNION ALL SELECT 1 score, 'employee' term ) terms INNER JOIN tableName ON title LIKE concat('%', terms.term, '%') UNION ALL SELECT score*1.1 score, title FROM ( SELECT 3 score, 'a railway employee' term UNION ALL SELECT 2 score, 'a railway' term UNION ALL SELECT 2 score, 'railway employee' term UNION ALL SELECT 1 score, 'a' term UNION ALL SELECT 1 score, 'railway' term UNION ALL SELECT 1 score, 'employee' term ) terms INNER JOIN tableName ON summary LIKE concat('%', terms.term, '%') ) AS t GROUP BY title ORDER BY score DESC ```
Note: I do advise that you put the values into their own table. Just sticking them in the query text is probably not ideal. But the queries I present below will work equally well with a real table as with a hard-coded derived table. Here's one way: ``` SELECT sum(score * multiplier) score, title FROM ( SELECT 3 score, 'a railway employee' term UNION ALL SELECT 2, 'a railway' UNION ALL SELECT 2, 'railway employee' UNION ALL SELECT 1, 'a' UNION ALL SELECT 1, 'railway' UNION ALL SELECT 1, 'employee' ) terms CROSS JOIN ( SELECT 'title' which, 1 multiplier UNION ALL SELECT 'summary', 1.1 ) X INNER JOIN tableName ON CASE X.which WHEN 'title' THEN title WHEN 'summary' THEN summary END LIKE concat('%', terms.term, '%') GROUP BY title ORDER BY score DESC ; ``` ## [See a Live Demo at SQL Fiddle](http://www.sqlfiddle.com/#!2/b065c/6) And here's another way that is basically the same but shuffled around a little bit: ``` SELECT sum(terms.score * T.multiplier) score, title FROM ( SELECT 3 score, 'a railway employee' term UNION ALL SELECT 2, 'a railway' UNION ALL SELECT 2, 'railway employee' UNION ALL SELECT 1, 'a' UNION ALL SELECT 1, 'railway' UNION ALL SELECT 1, 'employee' ) terms INNER JOIN ( SELECT title, CASE X.which WHEN 'title' THEN title WHEN 'summary' THEN summary END comparison, X.multiplier FROM tableName CROSS JOIN ( SELECT 'title' which, 1 multiplier UNION ALL SELECT 'summary', 1.1 ) X ) T ON T.comparison LIKE concat('%', terms.term, '%') GROUP BY title ORDER BY score DESC ; ``` ## [See a Live Demo at SQL Fiddle](http://www.sqlfiddle.com/#!2/b065c/9) And finally, one more way: ``` SELECT * FROM ( SELECT sum( terms.score * ( CASE WHEN T.title LIKE concat('%', terms.term, '%') THEN 1 ELSE 0 END + CASE WHEN T.summary LIKE concat('%', terms.term, '%') THEN 1.1 ELSE 0 END ) ) score, title FROM tableName T CROSS JOIN ( SELECT 3 score, 'a railway employee' term UNION ALL SELECT 2, 'a railway' UNION ALL SELECT 2, 'railway employee' UNION ALL SELECT 1, 'a' UNION ALL SELECT 1, 'railway' UNION ALL SELECT 1, 'employee' ) terms GROUP BY title ORDER BY score DESC ) Z WHERE Z.score > 0 ; ``` ## [See a Live Demo at SQL Fiddle](http://www.sqlfiddle.com/#!2/b065c/11) Also, if MySQL has something like `CROSS APPLY` that will let the `CROSS JOIN` have an outer reference, then some of this becomes easier (e.g., the first query could lose the CASE statement completely).
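To sanity-check the scoring logic without a MySQL server, here is the first query adapted to SQLite via Python's `sqlite3`: the MySQL-specific `concat()` is replaced by SQLite's `||` operator, and the tables hold a cut-down, invented sample.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE terms (score REAL, term TEXT);
    INSERT INTO terms VALUES (2, 'railway employee'), (1, 'railway'), (1, 'employee');
    CREATE TABLE tableName (title TEXT, summary TEXT);
    INSERT INTO tableName VALUES ('railway employee wanted', 'we seek an employee');
""")

# One terms list, plus a two-row helper table choosing which column to
# search and its multiplier -- the same shape as the first query above.
rows = con.execute("""
    SELECT title, SUM(terms.score * x.multiplier) AS score
    FROM terms
    CROSS JOIN (SELECT 'title' AS which, 1.0 AS multiplier
                UNION ALL SELECT 'summary', 1.1) AS x
    JOIN tableName
      ON (CASE x.which WHEN 'title' THEN title ELSE summary END)
         LIKE '%' || terms.term || '%'
    GROUP BY title
""").fetchall()
print(rows)
```

The title matches all three terms (2 + 1 + 1) and the summary matches only `employee` (1 × 1.1), so the expected score is 5.1.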
If you don't want to write them out twice, why not just create a table that stores the terms and the scores and then you join on the table: ``` create table terms ( term varchar(50), score int ); insert into terms values ('a railway employee', 3), ('a railway', 2), ('railway employee', 2), ('a', 1), ('railway', 1), ('employee', 1); ``` Then the query will be: ``` SELECT sum(score) score, title FROM ( SELECT score,title FROM terms INNER JOIN tableName ON title LIKE concat('%', terms.term, '%') UNION ALL SELECT score*1.1 score, title FROM terms INNER JOIN tableName ON summary LIKE concat('%', terms.term, '%') ) AS t GROUP BY title ORDER BY score DESC; ``` See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!2/d4c69/1)
Is there any way I can change this SQL so the terms are defined only once?
[ "", "mysql", "sql", "" ]
Is there any method to replace values with `None` in pandas in Python? You can use `df.replace('pre', 'post')` to replace one value with another, but this doesn't work if you want to replace a value with `None`; if you try, you get a strange result. So here's an example: ``` df = DataFrame(['-',3,2,5,1,-5,-1,'-',9]) df.replace('-', 0) ``` which returns a successful result. But, ``` df.replace('-', None) ``` returns the following result: ``` 0 0 - // this isn't replaced 1 3 2 2 3 5 4 1 5 -5 6 -1 7 -1 // this is changed to `-1`... 8 9 ``` Why is such a strange result returned? Since I want to pour this data frame into a MySQL database, I can't put `NaN` values into any element in my data frame and instead want to put `None`. Surely, you can first change `'-'` to `NaN` and then convert `NaN` to `None`, but I want to know why the dataframe acts in such a terrible way. > Tested on pandas 0.12.0 dev on Python 2.7 and OS X 10.8. Python is a > pre-installed version on OS X and I installed pandas by using the SciPy > Superpack script, for your information.
Actually in later versions of pandas this will give a TypeError: ``` df.replace('-', None) TypeError: If "to_replace" and "value" are both None then regex must be a mapping ``` You can do it by passing either a list or a dictionary: ``` In [11]: df.replace('-', df.replace(['-'], [None]) # or .replace('-', {0: None}) Out[11]: 0 0 None 1 3 2 2 3 5 4 1 5 -5 6 -1 7 None 8 9 ``` But I recommend using NaNs rather than None: ``` In [12]: df.replace('-', np.nan) Out[12]: 0 0 NaN 1 3 2 2 3 5 4 1 5 -5 6 -1 7 NaN 8 9 ```
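For reference, both forms suggested above, run in a recent pandas (the behaviour described in the question is from pandas 0.12; the list form and the `NaN` form both null out the `'-'` entries here):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(['-', 3, 2, 5, 1, -5, -1, '-', 9])

# List form: every '-' becomes None.
cleaned = df.replace(['-'], [None])

# NaN form, usually friendlier for further pandas work:
as_nan = df.replace('-', np.nan)

print(cleaned[0].isnull().sum(), as_nan[0].isnull().sum())
```

Both columns end up with two null entries, corresponding to the two `'-'` values.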
I prefer the solution using [`replace`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html#pandas-dataframe-replace) with a `dict` because of its simplicity and elegance: ``` df.replace({'-': None}) ``` You can also have more replacements: ``` df.replace({'-': None, 'None': None}) ``` And even for larger replacements, it is always obvious and clear what is replaced by what - which is way harder for long lists, in my opinion.
Replace invalid values with None in Pandas DataFrame
[ "", "python", "pandas", "dataframe", "replace", "nan", "" ]
I have several records (bills) that are basically duplicates of each other, except for one field, which represents the language that bill is in. For example: ``` ID,BillID,Account,Name,Amount,Lang 1,0001,abcd,John Smith,10.99,ENG 2,0002,qwer,Jane Doe,9.99,ENG 3,0001,abcd,John Smith,10.99,SPA 4,0003,abcd,John Smith,4.99,CHI ``` All fields are strings, except ID, which is an autonumber. In my SQL select, I have ``` SELECT * FROM Bills WHERE Account='abcd' ``` and it returns 3 rows in total, but 2 rows for the same bill. I need to return unique bills for a specific account. So in the scenario above, I want to retrieve 2 bills: the one with billID 0003, and either the SPA or the ENG version of 0001, but not both. What would my query be? Thank you EDIT: I cannot rely on a specific language always being there. For example, I cannot say `SELECT * FROM Bills WHERE Account='abcd' AND Lang='ENG'` because sometimes a bill may be only in one language, which is not `ENG`, and sometimes may be in several languages in any combination.
Probably the easiest way would be to use `ROW_NUMBER` and `PARTITION BY` ``` SELECT * FROM ( SELECT b.*, ROW_NUMBER() OVER (PARTITION BY BillID ORDER BY Lang) as num FROM Bills b WHERE Account = 'abcd' ) tbl WHERE num = 1 ```
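The `ROW_NUMBER() OVER (PARTITION BY ...)` pattern above is standard SQL, so it can be checked without SQL Server. Here is a self-contained run on Python's bundled SQLite (window functions require SQLite 3.25+), with a trimmed-down version of the question's rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Bills (ID INTEGER, BillID TEXT, Account TEXT, Lang TEXT);
    INSERT INTO Bills VALUES
        (1, '0001', 'abcd', 'ENG'),
        (3, '0001', 'abcd', 'SPA'),
        (4, '0003', 'abcd', 'CHI');
""")

# Number the rows within each BillID, then keep only the first per bill.
rows = con.execute("""
    SELECT BillID, Lang FROM (
        SELECT b.*,
               ROW_NUMBER() OVER (PARTITION BY BillID ORDER BY Lang) AS num
        FROM Bills b
        WHERE Account = 'abcd'
    ) WHERE num = 1
    ORDER BY BillID
""").fetchall()
print(rows)
```

Bill 0001 keeps exactly one language (here ENG, because of the `ORDER BY Lang`), and 0003 comes through untouched.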
``` select ID,BillID,Account,Name,Amount,max(Lang) FROM Bills WHERE Account='abcd' group by BillID,Account,Name,Amount; ``` Given that you are not giving priority to any specific language when the same bill exists in multiple languages, the above query will work perfectly. EDIT: Removed "ID" from the group by. @Phil You are right..!!
Filter duplicate rows based on a field
[ "", "sql", "sql-server", "" ]
I am using python to create a gaussian filter of size 5x5. I saw this post [here](https://stackoverflow.com/questions/11209115/creating-gaussian-filter-of-required-length-in-python) where they talk about a similar thing but I didn't find the exact way to get equivalent python code to matlab function `fspecial('gaussian', f_wid, sigma)` Is there any other way to do it? I tried using the following code : ``` size = 2 sizey = None size = int(size) if not sizey: sizey = size else: sizey = int(sizey) x, y = scipy.mgrid[-size: size + 1, -sizey: sizey + 1] g = scipy.exp(- (x ** 2/float(size) + y ** 2 / float(sizey))) print g / np.sqrt(2 * np.pi) ``` The output obtained is ``` [[ 0.00730688 0.03274718 0.05399097 0.03274718 0.00730688] [ 0.03274718 0.14676266 0.24197072 0.14676266 0.03274718] [ 0.05399097 0.24197072 0.39894228 0.24197072 0.05399097] [ 0.03274718 0.14676266 0.24197072 0.14676266 0.03274718] [ 0.00730688 0.03274718 0.05399097 0.03274718 0.00730688]] ``` What I want is something like this: ``` 0.0029690 0.0133062 0.0219382 0.0133062 0.0029690 0.0133062 0.0596343 0.0983203 0.0596343 0.0133062 0.0219382 0.0983203 0.1621028 0.0983203 0.0219382 0.0133062 0.0596343 0.0983203 0.0596343 0.0133062 0.0029690 0.0133062 0.0219382 0.0133062 0.0029690 ```
In general terms if you really care about getting the the exact same result as MATLAB, the easiest way to achieve this is often by looking directly at the source of the MATLAB function. In this case, `edit fspecial`: ``` ... case 'gaussian' % Gaussian filter siz = (p2-1)/2; std = p3; [x,y] = meshgrid(-siz(2):siz(2),-siz(1):siz(1)); arg = -(x.*x + y.*y)/(2*std*std); h = exp(arg); h(h<eps*max(h(:))) = 0; sumh = sum(h(:)); if sumh ~= 0, h = h/sumh; end; ... ``` Pretty simple, eh? It's <10mins work to port this to Python: ``` import numpy as np def matlab_style_gauss2D(shape=(3,3),sigma=0.5): """ 2D gaussian mask - should give the same result as MATLAB's fspecial('gaussian',[shape],[sigma]) """ m,n = [(ss-1.)/2. for ss in shape] y,x = np.ogrid[-m:m+1,-n:n+1] h = np.exp( -(x*x + y*y) / (2.*sigma*sigma) ) h[ h < np.finfo(h.dtype).eps*h.max() ] = 0 sumh = h.sum() if sumh != 0: h /= sumh return h ``` This gives me the same answer as `fspecial` to within rounding error: ``` >> fspecial('gaussian',5,1) 0.002969 0.013306 0.021938 0.013306 0.002969 0.013306 0.059634 0.09832 0.059634 0.013306 0.021938 0.09832 0.1621 0.09832 0.021938 0.013306 0.059634 0.09832 0.059634 0.013306 0.002969 0.013306 0.021938 0.013306 0.002969 : matlab_style_gauss2D((5,5),1) array([[ 0.002969, 0.013306, 0.021938, 0.013306, 0.002969], [ 0.013306, 0.059634, 0.09832 , 0.059634, 0.013306], [ 0.021938, 0.09832 , 0.162103, 0.09832 , 0.021938], [ 0.013306, 0.059634, 0.09832 , 0.059634, 0.013306], [ 0.002969, 0.013306, 0.021938, 0.013306, 0.002969]]) ```
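As a quick double-check of the port, here is a compact, self-contained verification that the kernel is normalised, symmetric, and reproduces MATLAB's centre weight for `fspecial('gaussian',5,1)` (the function is repeated so the snippet runs on its own):

```python
import numpy as np

def matlab_style_gauss2d(shape=(3, 3), sigma=0.5):
    """Port of fspecial('gaussian', shape, sigma), as derived above."""
    m, n = [(ss - 1.) / 2. for ss in shape]
    y, x = np.ogrid[-m:m + 1, -n:n + 1]
    h = np.exp(-(x * x + y * y) / (2. * sigma * sigma))
    h[h < np.finfo(h.dtype).eps * h.max()] = 0  # zero out negligible tails
    s = h.sum()
    if s != 0:
        h /= s  # normalise so the weights sum to 1
    return h

h = matlab_style_gauss2d((5, 5), 1)
print(h[2, 2])  # centre weight, ~0.1621028 as in the MATLAB output
```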
I found similar solution for this problem: ``` def fspecial_gauss(size, sigma): """Function to mimic the 'fspecial' gaussian MATLAB function """ x, y = numpy.mgrid[-size//2 + 1:size//2 + 1, -size//2 + 1:size//2 + 1] g = numpy.exp(-((x**2 + y**2)/(2.0*sigma**2))) return g/g.sum() ```
How to obtain a gaussian filter in python
[ "", "python", "matlab", "numpy", "gaussian", "" ]
This is a slightly weird request but I am looking for a way to write a list to file and then read it back some other time. I have no way to remake the lists so that they are correctly formed/formatted as the example below shows. My lists have data like the following: ``` test data here this is one group :) test data here this is another group :) ```
If you don't need it to be human-readable/editable, the easiest solution is to just use `pickle`. To write: ``` with open(the_filename, 'wb') as f: pickle.dump(my_list, f) ``` To read: ``` with open(the_filename, 'rb') as f: my_list = pickle.load(f) ``` --- If you *do* need them to be human-readable, we need more information. If `my_list` is guaranteed to be a list of strings with no embedded newlines, just write them one per line: ``` with open(the_filename, 'w') as f: for s in my_list: f.write(s + '\n') with open(the_filename, 'r') as f: my_list = [line.rstrip('\n') for line in f] ``` --- If they're Unicode strings rather than byte strings, you'll want to `encode` them. (Or, worse, if they're byte strings, but not necessarily in the same encoding as your system default.) If they might have newlines, or non-printable characters, etc., you can use escaping or quoting. Python has a variety of different kinds of escaping built into the stdlib. Let's use `unicode-escape` here to solve both of the above problems at once: ``` with open(the_filename, 'w') as f: for s in my_list: f.write((s + u'\n').encode('unicode-escape')) with open(the_filename, 'r') as f: my_list = [line.decode('unicode-escape').rstrip(u'\n') for line in f] ``` --- You can also use the 3.x-style solution in 2.x, with either the [`codecs`](https://docs.python.org/2/library/codecs.html) module or the [`io`](https://docs.python.org/2/library/io.html) module:\* ``` import io with io.open(the_filename, 'w', encoding='unicode-escape') as f: f.writelines(line + u'\n' for line in my_list) with open(the_filename, 'r') as f: my_list = [line.rstrip(u'\n') for line in f] ``` \* TOOWTDI, so which is the one obvious way? It depends… For the short version: if you need to work with Python versions before 2.6, use `codecs`; if not, use `io`.
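A Python 3 flavoured round trip of the pickle approach, using the asker's sample strings and a temporary file (the path is invented for the demo):

```python
import os
import pickle
import tempfile

my_list = ["test data here", "this is one group :)",
           "test data here", "this is another group :)"]

path = os.path.join(tempfile.mkdtemp(), "groups.pkl")

with open(path, "wb") as f:          # binary mode is required for pickle
    pickle.dump(my_list, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == my_list)
```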
As long as your file has consistent formatting (i.e. line-breaks), this is easy with just basic file IO and string operations: ``` with open('my_file.txt', 'rU') as in_file: data = in_file.read().split('\n') ``` That will store your data file as a list of items, one per line. To then put it into a file, you would do the opposite: ``` with open('new_file.txt', 'w') as out_file: out_file.write('\n'.join(data)) # This will create a string with all of the items in data separated by new-line characters ``` Hopefully that fits what you're looking for.
Write and read a list from file
[ "", "python", "list", "python-2.7", "" ]
My tables are as follows: ``` name | id | name | | 1 | jon | | 2 | mary | skill | id | skill | level | | 1 | C++ | 3 | | 1 | Java | 2 | | 1 | HTML | 5 | | 1 | CSS | 4 | | 1 | JS | 5 | | 2 | PHP | 4 | | 2 | Ruby | 3 | | 2 | Perl | 1 | ``` So I want the output to be like this: ``` | name | skill_1 | lv_1 | skill_2 | lv_2 | skill_3 | lv_3 | skill_4 | lv_4 | skill_5 | lv_5 | | jon | C++ | 3 | Java | 2 | HTML | 5 | CSS | 4 | JS | 5 | | mary | PHP | 4 | Ruby | 3 | Perl | 1 | | | | | ``` What type of join or union statements would I be using? Each person has only 5 skills max. So what would the SQL for this look like? Is it even possible? I'm completely lost and have no idea where to start.
Since you have mentioned that a `Name` can have a maximum of `5 Skills`, this problem can be solved with a static query. ``` -- <<== PART 2 SELECT Name, MAX(CASE WHEN RowNumber = 1 THEN Skill END) Skill_1, MAX(CASE WHEN RowNumber = 2 THEN Skill END) Skill_2, MAX(CASE WHEN RowNumber = 3 THEN Skill END) Skill_3, MAX(CASE WHEN RowNumber = 4 THEN Skill END) Skill_4, MAX(CASE WHEN RowNumber = 5 THEN Skill END) Skill_5 FROM ( -- <<== PART 1 SELECT a.Name, b.Skill, ( SELECT COUNT(*) FROM Skill c WHERE c.id = b.id AND c.Skill <= b.Skill) AS RowNumber FROM Name a INNER JOIN Skill b ON a.id = b.id ) x GROUP BY Name ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/909b1/4) OUTPUT ``` ╔══════╦═════════╦═════════╦═════════╦═════════╦═════════╗ ║ NAME ║ SKILL_1 ║ SKILL_2 ║ SKILL_3 ║ SKILL_4 ║ SKILL_5 ║ ╠══════╬═════════╬═════════╬═════════╬═════════╬═════════╣ ║ jon ║ C++ ║ CSS ║ HTML ║ Java ║ JS ║ ║ mary ║ Perl ║ PHP ║ Ruby ║ (null) ║ (null) ║ ╚══════╩═════════╩═════════╩═════════╩═════════╩═════════╝ ``` **BRIEF EXPLANATION** Let's break it down. There are two parts to the query. The first part, **PART 1**, generates a sequence number on `Skill` for every `Name`. It just uses a correlated subquery to mimic the window function `ROW_NUMBER`, which `MySQL` does not support. * [Generating Sequential Number on Skill (SQLFiddle Demo)](http://www.sqlfiddle.com/#!2/909b1/5) The second part, **PART 2**, transposes the rows into columns based on the sequential number generated in **PART 1**. It uses `CASE` to test the value of the number and returns the `Skill` associated with that number. If the number does not match, it returns a `NULL` value. Next, it aggregates the columns for every group of `Name` using `MAX()`, so the `SKILL` will be returned instead of `NULL` if there is any. **UPDATE 1** ``` SELECT Name, MAX(CASE WHEN RowNumber = 1 THEN Skill END) Skill_1, MAX(CASE WHEN RowNumber = 1 THEN Level END) Level_1, MAX(CASE WHEN RowNumber = 2 THEN Skill END) Skill_2, MAX(CASE WHEN RowNumber = 2 THEN Level END) Level_2, MAX(CASE WHEN RowNumber = 3 THEN Skill END) Skill_3, MAX(CASE WHEN RowNumber = 3 THEN Level END) Level_3, MAX(CASE WHEN RowNumber = 4 THEN Skill END) Skill_4, MAX(CASE WHEN RowNumber = 4 THEN Level END) Level_4, MAX(CASE WHEN RowNumber = 5 THEN Skill END) Skill_5, MAX(CASE WHEN RowNumber = 5 THEN Level END) Level_5 FROM ( SELECT a.Name, b.Skill, ( SELECT COUNT(*) FROM Skill c WHERE c.id = b.id AND c.skill <= b.skill) AS RowNumber, b.Level FROM Name a INNER JOIN Skill b ON a.id = b.id ) x GROUP BY Name ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/7915b/8) OUTPUT ``` ╔══════╦═════════╦═════════╦═════════╦═════════╦═════════╦═════════╦═════════╦═════════╦═════════╦═════════╗ ║ NAME ║ SKILL_1 ║ LEVEL_1 ║ SKILL_2 ║ LEVEL_2 ║ SKILL_3 ║ LEVEL_3 ║ SKILL_4 ║ LEVEL_4 ║ SKILL_5 ║ LEVEL_5 ║ ╠══════╬═════════╬═════════╬═════════╬═════════╬═════════╬═════════╬═════════╬═════════╬═════════╬═════════╣ ║ jon ║ C++ ║ 3 ║ CSS ║ 4 ║ HTML ║ 5 ║ Java ║ 2 ║ JS ║ 5 ║ ║ mary ║ Perl ║ 1 ║ PHP ║ 4 ║ Ruby ║ 3 ║ (null) ║ (null) ║ (null) ║ (null) ║ ╚══════╩═════════╩═════════╩═════════╩═════════╩═════════╩═════════╩═════════╩═════════╩═════════╩═════════╝ ```
This could also be the possible solution! **Query** ``` select a.name, (select skill from skill where skill.id=a.id limit 0,1 ) as skill_1, (select skill from skill where skill.id=a.id limit 1,1) skill_2, (select skill from skill where skill.id=a.id limit 2,1 ) skill_3, (select skill from skill where skill.id=a.id limit 3,1) skill_4, (select skill from skill where skill.id=a.id limit 4,1) skill_5 from name a ``` **Output** ![enter image description here](https://i.stack.imgur.com/jU2An.png)
Is it possible for SQL to make a SELECT from a "list"? Example inside
[ "", "mysql", "sql", "pivot", "" ]
I have a python script that runs on aws machines, as well as on other machines. The functionality of the script depends on whether or not it is on AWS. Is there a way to programmatically discover whether or not it runs on AWS? (maybe using boto?)
If you want to do that strictly using boto, you could do: ``` import boto.utils md = boto.utils.get_instance_metadata(timeout=.1, num_retries=0) ``` The `timeout` specifies how long the HTTP client will wait for a response before timing out. The `num_retries` parameter controls how many times the client will retry the request before giving up and returning an empty dictionary.
You can easily use the AWS SDK and check for the instance ID. Besides that, you can check the AWS IP ranges - check out this link <https://forums.aws.amazon.com/ann.jspa?annID=1701>
Find out if the current machine is on AWS in Python
[ "", "python", "amazon-web-services", "boto", "" ]
I have a table like: ``` ID DATE 01 20.06.13 01 21.06.13 02 13.04.13 03 12.05.13 04 17.05.13 04 19.06.13 ``` I need a query that gives the total number of distinct IDs that have two or more entries in the DATE field. For example, in the table above only 01 and 04 have at least two entries, so the result of my query should be 2.
Since your requirement is the *total count* of unique duplicates, not to list the duplicate items themselves, here is the query to do that: ``` SELECT Count(*) FROM ( SELECT ID FROM dbo.YourTable GROUP BY ID HAVING Count(*) >= 2 ) X ; ``` You could also use some kind of `JOIN` (including a subquery, correlated or not) but that will be far less efficient than the aggregate above.
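The count-the-groups query above is plain SQL, so it can be checked locally with Python's `sqlite3` stand-in for SQL Server; the date column is renamed `D` here only because `DATE` is a reserved word in some engines, and the rows are the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE YourTable (ID TEXT, D TEXT);
    INSERT INTO YourTable VALUES
        ('01','20.06.13'), ('01','21.06.13'), ('02','13.04.13'),
        ('03','12.05.13'), ('04','17.05.13'), ('04','19.06.13');
""")

# Inner query: IDs with at least two rows; outer query: count them.
(total,) = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT ID FROM YourTable GROUP BY ID HAVING COUNT(*) >= 2
    ) X
""").fetchone()
print(total)
```

Only `01` and `04` qualify, so the result is 2, matching the question.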
try this. ``` SELECT DISTINCT ID, COUNT(*) FROM TABLE_NAME GROUP BY ID HAVING COUNT(*) > 1 ```
SQL: number of elements that appear at least twice
[ "", "sql", "sql-server", "t-sql", "" ]
I wish to read in an XML file, find all sentences that contain both the markup `<emotion>` and the markup `<LOCATION>`, then print those entire sentences to a unique line. Here is a sample of the code: ``` import re text = "Cello is a <emotion> wonderful </emotion> parakeet who lives in <LOCATION> Omaha </LOCATION>. He is the <emotion> best </emotion> singer <pronoun> I </pronoun> have ever heard." out = open('out.txt', 'w') for match in re.findall(r'(?:(?<=\.)\s+|^)((?=(?:(?!\.(?:\s|$)).)*?\bwonderful(?=\s|\.|$))(?=(?:(?!\.(?:\s|$)).)*?\bomaha(?=\s|\.|$)).*?\.(?=\s|$))', text, flags=re.I): line = ''.join(str(x) for x in match) out.write(line + '\n') out.close() ``` The regex here grabs all sentences with "wonderful" and "omaha" in them, and returns: `Cello is a <emotion> wonderful </emotion> parakeet who lives in <LOCATION> Omaha </LOCATION>.` Which is perfect, but I really want to print all sentences that contain both `<emotion>` and `<LOCATION>`. For some reason, though, when I replace "wonderful" in the regex above with "emotion," the regex fails to return any output. So the following code yields no result: ``` import re text = "Cello is a <emotion> wonderful </emotion> parakeet who lives in <LOCATION> Omaha </LOCATION>. He is the <emotion> best </emotion> singer I have ever heard." out = open('out.txt', 'w') for match in re.findall(r'(?:(?<=\.)\s+|^)((?=(?:(?!\.(?:\s|$)).)*?\bemotion(?=\s|\.|$))(?=(?:(?!\.(?:\s|$)).)*?\bomaha(?=\s|\.|$)).*?\.(?=\s|$))', text, flags=re.I): line = ''.join(str(x) for x in match) out.write(line + '\n') out.close() ``` My question is: How can I modify my regular expression in order to grab only those sentences that contain both `<emotion>` and `<LOCATION>`? I would be most grateful for any help others can offer on this question. (For what it's worth, I'm working on parsing my text in BeautifulSoup as well, but wanted to give regular expressions one last shot before throwing in the towel.)
Your problem appears to be that your regex is expecting a space (`\s`) to follow the matching word, as seen with: ``` emotion(?=\s|\.|$) ``` Since when it's part of a tag, it's followed by a `>`, rather than a space, no match is found since that lookahead fails. To fix it, you can just add the `>` after emotion, like: ``` for match in re.findall(r'(?:(?<=\.)\s+|^)((?=(?:(?!\.(?:\s|$)).)*?\bemotion>(?=\s|\.|$))(?=(?:(?!\.(?:\s|$)).)*?\bomaha(?=\s|\.|$)).*?\.(?=\s|$))', text, flags=re.I): line = ''.join(str(x) for x in match) ``` Upon testing, this seems to solve your problem. Make sure and treat "LOCATION" similarly: ``` for match in re.findall(r'(?:(?<=\.)\s+|^)((?=(?:(?!\.(?:\s|$)).)*?\bemotion>(?=\s|\.|$))(?=(?:(?!\.(?:\s|$)).)*?\bLOCATION>(?=\s|\.|$)).*?\.(?=\s|$))', text, flags=re.I): line = ''.join(str(x) for x in match) ```
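Putting the fix together into a runnable check, with the same sample text as the question (the pattern is split across adjacent string literals only for readability):

```python
import re

text = ("Cello is a <emotion> wonderful </emotion> parakeet who lives in "
        "<LOCATION> Omaha </LOCATION>. He is the <emotion> best </emotion> "
        "singer I have ever heard.")

# Same regex as above: require 'emotion>' and 'LOCATION>' (note the '>'
# after each tag name) somewhere before the sentence-ending period.
pattern = (r'(?:(?<=\.)\s+|^)'
           r'((?=(?:(?!\.(?:\s|$)).)*?\bemotion>(?=\s|\.|$))'
           r'(?=(?:(?!\.(?:\s|$)).)*?\bLOCATION>(?=\s|\.|$))'
           r'.*?\.(?=\s|$))')

matches = re.findall(pattern, text, flags=re.I)
print(matches)
```

Only the first sentence contains both tags, so it is the single match; the second sentence has `<emotion>` but no `<LOCATION>` and is skipped.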
If I have not misunderstood, what you are trying to do is remove `<emotion> </emotion> <LOCATION></LOCATION>`? Well, if that is what you want to do, you can do this: ``` import re text = "Cello is a <emotion> wonderful </emotion> parakeet who lives in <LOCATION> Omaha </LOCATION>. He is the <emotion> best </emotion> singer I have ever heard." out = open('out.txt', 'w') def remove_xml_tags(xml): content = re.compile(r'<.*?>') return content.sub('', xml) data = remove_xml_tags(text) out.write(data + '\n') out.close() ```
Python regex to print all sentences that contain two identified classes of markup
[ "", "python", "regex", "pattern-matching", "markup", "proximity", "" ]
I am fairly new to Python and come from a more Matlab point of view. I am trying to make a series of 2 x 5 panel contourf subplots. My approach so far has been to convert (to a certain degree) my Matlab code to Python and plot my subplots within a loop. The relevant part of the code looks like this: ``` fig=plt.figure(figsize=(15, 6),facecolor='w', edgecolor='k') for i in range(10): #this part is just arranging the data for contourf ind2 = py.find(zz==i+1) sfr_mass_mat = np.reshape(sfr_mass[ind2],(pixmax_x,pixmax_y)) sfr_mass_sub = sfr_mass[ind2] zi = griddata(massloclist, sfrloclist, sfr_mass_sub,xi,yi,interp='nn') temp = 250+i # this is to index the position of the subplot ax=plt.subplot(temp) ax.contourf(xi,yi,zi,5,cmap=plt.cm.Oranges) plt.subplots_adjust(hspace = .5,wspace=.001) #just annotating where each contour plot is being placed ax.set_title(str(temp)) ``` As a newbie to this forum, I don't seem to be allowed to attach the resulting image. However, going by my indexing in the code as 'temp', the resulting layout of the 2 x 5 panels is: ``` 251 - 252 - 253 - 254 - 255 256 - 257 - 258 - 259 - 250 ``` However, what I want is ``` 250 - 251 - 252 - 253 - 254 255 - 256 - 257 - 258 - 259 ``` That is, the first panel (250) appears in the last position, where I would think 259 should be. And 251 seems to be where I want 250 to be placed. They all seem to be in the correct sequence, just circularly shifted by one. I know this will be something very silly, but I appreciate any help you can give. Thank you in advance.
Using your code with some random data, this would work: ``` fig, axs = plt.subplots(2,5, figsize=(15, 6), facecolor='w', edgecolor='k') fig.subplots_adjust(hspace = .5, wspace=.001) axs = axs.ravel() for i in range(10): axs[i].contourf(np.random.rand(10,10),5,cmap=plt.cm.Oranges) axs[i].set_title(str(250+i)) ``` The layout is of course a bit messy, but that's because of your current settings (the figsize, wspace etc). ![enter image description here](https://i.stack.imgur.com/v2j2B.png)
Basically the same solution as provided by [Rutger Kassies](https://stackoverflow.com/a/17211410), but using a more pythonic syntax: ``` fig, axs = plt.subplots(2,5, figsize=(15, 6), facecolor='w', edgecolor='k') fig.subplots_adjust(hspace = .5, wspace=.001) data = np.arange(250, 260) for ax, d in zip(axs.ravel(), data): ax.contourf(np.random.rand(10,10), 5, cmap=plt.cm.Oranges) ax.set_title(str(d)) ```
Python: subplot within a loop: first panel appears in wrong position
[ "", "python", "matplotlib", "subplot", "" ]
I have a table that looks something like this. It updates every 5 minutes for each game_id: ``` game_id players date 12 420 2013-06-19 12:30:00 13 345 2013-06-19 12:30:00 14 600 2013-06-19 12:30:00 12 375 2013-06-19 12:25:00 13 475 2013-06-19 12:25:00 14 575 2013-06-19 12:25:00 12 500 2013-06-19 12:20:00 ... ``` I need a query that, for each game id, gets the current players (latest timestamp) and the max for the day, so the results would look like this ``` game_id max current 12 500 420 13 475 345 14 600 600 ``` I tried something like this but had no luck and can't figure it out :( select game_id, max(players) as max, players as current from players where date >= '2013-06-19' order by date desc group by game_id; Thanks for your help!
To get the last value, you need a trick of one sort or another. Instead of using a join, this version uses the `substring_index()`/`group_concat()` trick: ``` select game_id, max(players) as MaxPlayers, substring_index(group_concat(players order by date desc), ',', 1) + 0 as Latest from players group by game_id; ``` The nice thing about this approach is that it is guaranteed to work and does not require any additional joins. In particular, it does *not* use the MySQL extension that allows columns to be included in the `select` clause without their being in the `group by` clause. The results are indeterminate when there are multiple values, as expressly stated in the [documentation](http://dev.mysql.com/doc/refman/5.5/en/group-by-extensions.html): > You can use this feature to get better performance by avoiding > unnecessary column sorting and grouping. However, this is useful > primarily when all values in each nonaggregated column not named in > the GROUP BY are the same for each group. The server is free to choose > any value from each group, so unless they are the same, the values > chosen are indeterminate.
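As a cross-check of what the query should return, here is the same "max plus value-at-latest-timestamp" computation in plain Python over the question's sample rows:

```python
from collections import defaultdict

rows = [  # (game_id, players, date) -- the sample data from the question
    (12, 420, '2013-06-19 12:30:00'),
    (13, 345, '2013-06-19 12:30:00'),
    (14, 600, '2013-06-19 12:30:00'),
    (12, 375, '2013-06-19 12:25:00'),
    (13, 475, '2013-06-19 12:25:00'),
    (14, 575, '2013-06-19 12:25:00'),
    (12, 500, '2013-06-19 12:20:00'),
]

by_game = defaultdict(list)
for game_id, players, date in rows:
    by_game[game_id].append((date, players))

# Per game: the max player count, and the count at the latest timestamp.
result = {g: (max(p for _, p in v), max(v)[1]) for g, v in by_game.items()}
print(result)
```

This reproduces the expected table: game 12 → (500, 420), game 13 → (475, 345), game 14 → (600, 600).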
<http://www.sqlfiddle.com/#!2/e5157/5> ``` select game_id, max(players) as maximum, players as current from tab where date >= '2013-06-19' group by game_id ```
How do I query a mysql table to get the current value of a column and the max for the day
[ "mysql", "sql", "group-by", "sql-order-by" ]
I'm currently learning Python. How do I put this: ``` dates = list() for entry in some_list: entry_split = entry.split() if len(entry_split) >= 3: date = entry_split[1] if date not in dates: dates.append(date) ``` into a one-liner in Python?
Instead of a 1-liner, it's probably easier to understand as a 3-liner. ``` import collections table = (entry.split() for entry in some_list) raw_dates = (row[1] for row in table if len(row) >= 3) # Uniquify while keeping order. http://stackoverflow.com/a/17016257 dates = list(collections.OrderedDict.fromkeys(raw_dates)) ```
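Run against some made-up log lines (hypothetical data, not from the question), the three steps behave like this:

```python
import collections

some_list = [
    "alice 2013-06-19 login",
    "bob 2013-06-20 login",
    "carol 2013-06-19 logout",
    "truncated",             # fewer than 3 fields, skipped
]

table = (entry.split() for entry in some_list)
raw_dates = (row[1] for row in table if len(row) >= 3)
# OrderedDict.fromkeys deduplicates while preserving first-seen order.
dates = list(collections.OrderedDict.fromkeys(raw_dates))
print(dates)  # ['2013-06-19', '2013-06-20']
```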
If the order does not matter: ``` dates = set(split[1] for split in (x.split() for x in some_list) if len(split) >= 3) ``` Of course if you want a list instead of a set, just pass the result into `list()`. Although I would probably stick with what you have, as it is more readable.
Python List Iteration into One-Liner
[ "python", "list" ]
I need to figure out the simplest method of grabbing the length of a YouTube video programmatically given the URL of said video. Is the YouTube API the best method? It looks somewhat complicated and I've never used it before, so it's likely to take me a bit to get acclimated, but I really just want the quickest solution. I took a glance through the source of a video page in the hopes it might list it there, but apparently it does not (though it lists recommended video times in a very nice list that would be easy to parse). If it is the best method, does anyone have a snippet? Ideally I could get this done in Python, and I need it to ultimately be in the format of ``` 00:00:00.000 ``` but I'm completely open to any solutions anyone may have. I'd appreciate any insight.
All you have to do is read the `seconds` attribute in the `yt:duration` element from the XML returned by Youtube API 2.0. You only end up with seconds resolution (no milliseconds yet). Here's an example: ``` from datetime import timedelta from urllib2 import urlopen from xml.dom.minidom import parseString for vid in ('wJ4hPaNyHnY', 'dJ38nHlVE78', 'huXaL8qj2Vs'): url = 'https://gdata.youtube.com/feeds/api/videos/{0}?v=2'.format(vid) s = urlopen(url).read() d = parseString(s) e = d.getElementsByTagName('yt:duration')[0] a = e.attributes['seconds'] v = int(a.value) t = timedelta(seconds=v) print(t) ``` And the output is: ``` 0:00:59 0:02:24 0:04:49 ```
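The feed only reports whole seconds, so getting to the `00:00:00.000` format the question asks for is a separate formatting step. A small sketch, independent of the API call (the millisecond part will always be zero with seconds-resolution data):

```python
def format_duration(total_seconds):
    # Render a duration as HH:MM:SS.mmm, the format requested in the question.
    whole = int(total_seconds)
    millis = int(round((total_seconds - whole) * 1000))
    hours, rem = divmod(whole, 3600)
    minutes, seconds = divmod(rem, 60)
    return "%02d:%02d:%02d.%03d" % (hours, minutes, seconds, millis)

print(format_duration(289))  # 00:04:49.000
print(format_duration(59))   # 00:00:59.000
```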
(I'm not sure what "pre-download" refers to.) The simplest way to get the length of `VIDEO_ID` is to make an HTTP request for `http://gdata.youtube.com/feeds/api/videos/VIDEO_ID?v=2&alt=jsonc` and then look at the value of the `data`->`duration` element that's returned. It will be set to the video's duration in seconds.
Getting length of YouTube video (without downloading the video itself)
[ "python", "youtube-api", "youtube-dl" ]
Using the following result set: ``` | DATE | BUSINESS | COLLEAGUE | POSITION | HOURS | STANDARDHOURS | COUNTER | OVER16 | OVER32 | OVER48 | ----------------------------------------------------------------------------------------------------------------- | 2013-01-01 | a | bob jones | analyst | 168 | 168 | 1 | 0 | 0 | 0 | | 2013-01-01 | a | cindy jones | assistant | 184 | 168 | 1 | 1 | 0 | 0 | | 2013-01-01 | b | tim harris | programmer | 200 | 168 | 1 | 1 | 1 | 0 | | 2013-01-01 | b | tom white | manager | 216 | 168 | 1 | 1 | 1 | 1 | | 2013-02-01 | a | bob jones | analyst | 176 | 176 | 1 | 0 | 0 | 0 | | 2013-02-01 | a | cindy jones | assistant | 176 | 176 | 1 | 0 | 0 | 0 | | 2013-02-01 | b | tim harris | programmer | 200 | 176 | 1 | 1 | 0 | 0 | | 2013-02-01 | b | tom white | manager | 216 | 176 | 1 | 1 | 1 | 0 | ``` Using this query: ``` SELECT c.date, c.business, CASE WHEN Sum(c.over16) > 0 THEN ( Sum(c.over16) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver16, CASE WHEN Sum(c.over32) > 0 THEN ( Sum(c.over32) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver32, CASE WHEN Sum(c.over48) > 0 THEN ( Sum(c.over48) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver48 FROM (SELECT a.date, a.business, a.colleague, a.position, a.hours, b.standardhours, 1 AS counter, CASE WHEN a.hours >= b.standardhours + 16 THEN 1 ELSE 0 END AS over16, CASE WHEN a.hours >= b.standardhours + 32 THEN 1 ELSE 0 END AS over32, CASE WHEN a.hours >= b.standardhours + 48 THEN 1 ELSE 0 END AS over48 FROM colleaguetime a JOIN businesshours b ON b.date = a.date) c GROUP BY c.date, c.business ``` I get: ``` | DATE | BUSINESS | PERCOVER16 | PERCOVER32 | PERCOVER48 | ---------------------------------------------------------------- | 2013-01-01 | a | 0 | 0 | 0 | | 2013-01-01 | b | 100 | 100 | 0 | | 2013-02-01 | a | 0 | 0 | 0 | | 2013-02-01 | b | 100 | 0 | 0 | ``` The desired result is: ``` | DATE | BUSINESS | PERCOVER16 | PERCOVER32 | PERCOVER48 | ---------------------------------------------------------------- 
| 2013-01-01 | a | 50 | 0 | 0 | | 2013-01-01 | b | 100 | 100 | 50 | | 2013-02-01 | a | 0 | 0 | 0 | | 2013-02-01 | b | 100 | 50 | 0 | ``` [SQL Fiddle](http://sqlfiddle.com/#!3/2d721/1) Is there an easier way to do this using a CTE?
I believe the problem that you are having is the integer divide problem. You want to convert the values to a decimal or floating point format before doing the division. Here is one method: ``` SELECT c.date, c.business, CASE WHEN Sum(c.over16) > 0 THEN ( Sum(c.over16*1.0) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver16, CASE WHEN Sum(c.over32) > 0 THEN ( Sum(c.over32*1.0) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver32, CASE WHEN Sum(c.over48) > 0 THEN ( Sum(c.over48*1.0) / Sum(c.counter) ) * 100 ELSE 0 END AS percOver48 ``` EDIT: The simplest alternative is to change the definition of counter in the subquery: ``` 1.0 as counter, -- This has a decimal point so it can be used for division ``` This will define it as a non-integer, numeric data type. As a general rule for production code, I like to have these conversions where the division is happening, to prevent unexpected errors. Somewhere down the road, you or someone else could look at a line like `1.0 as counter` and think "That's stupid. `Counter` should be an integer." You or he or she then changes it, and stuff breaks. Or someone sees the `sum(c.counter)` and thinks "That's stupid. It could just do `count(*)` or `count(c.counter)`. On the other hand, for ad-hoc code, I'd probably just make the counter `1.0`.
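The integer-divide behavior is easy to see in isolation. Here it is reproduced via SQLite through Python's `sqlite3` module — the question is about SQL Server, but the same pitfall exists in SQLite: two integer operands truncate, while promoting one operand to a decimal keeps the fraction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Integer / integer truncates toward zero; 1.0 / integer stays fractional.
truncated, promoted = conn.execute("SELECT 1 / 2, 1.0 / 2").fetchone()
print(truncated, promoted)  # 0 0.5
```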
You can fix this by using the following in your subquery: ``` SELECT a.date, a.business, a.colleague, a.position, a.hours, b.standardHours, 1 AS counter, CASE WHEN a.hours >= b.standardHours + 16 THEN 1.0 ELSE 0.0 END AS over16, CASE WHEN a.hours >= b.standardHours + 32 THEN 1.0 ELSE 0.0 END AS over32, CASE WHEN a.hours >= b.standardHours + 48 THEN 1.0 ELSE 0.0 END AS over48 FROM colleagueTime a JOIN businessHours b ON b.date = a.date; ``` See [Demo](http://sqlfiddle.com/#!3/2d721/10) Instead of using `1` and `0`, change the values to `1.0` so they are decimals instead of integers. Or as Gordon pointed out, you can use 1.0 as the counter value: ``` SELECT a.date, a.business, a.colleague, a.position, a.hours, b.standardHours, 1.0 AS counter, CASE WHEN a.hours >= b.standardHours + 16 THEN 1 ELSE 0 END AS over16, CASE WHEN a.hours >= b.standardHours + 32 THEN 1 ELSE 0 END AS over32, CASE WHEN a.hours >= b.standardHours + 48 THEN 1 ELSE 0 END AS over48 FROM colleagueTime a JOIN businessHours b ON b.date = a.date; ```
calculate percentage of aggregated column
[ "sql", "sql-server", "sql-server-2008-r2" ]
I want to count the number of 2 or more consecutive week periods that have negative values within a range of weeks. Example: ``` Week | Value 201301 | 10 201302 | -5 <--| both weeks have negative values and are consecutive 201303 | -6 <--| Week | Value 201301 | 10 201302 | -5 201303 | 7 201304 | -2 <-- negative but not consecutive to the last negative value in 201302 Week | Value 201301 | 10 201302 | -5 201303 | -7 201304 | -2 <-- 1st group of negative and consecutive values 201305 | 0 201306 | -12 201307 | -8 <-- 2nd group of negative and consecutive values ``` Is there a better way of doing this other than using a cursor and a reset variable and checking through each row in order? Here is some of the SQL I have setup to try and test this: ``` IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestOne') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestOne IF OBJECT_ID('TempDB..#ConsecutiveNegativeWeekTestTwo') IS NOT NULL DROP TABLE #ConsecutiveNegativeWeekTestTwo CREATE TABLE #ConsecutiveNegativeWeekTestOne ( [Week] INT NOT NULL ,[Value] DECIMAL(18,6) NOT NULL ) -- I have a condition where I expect to see at least 2 consecutive weeks with negative values -- TRUE : Week 201328 & 201329 are both negative. 
INSERT INTO #ConsecutiveNegativeWeekTestOne VALUES (201327, 5) ,(201328,-11) ,(201329,-18) ,(201330, 25) ,(201331, 30) ,(201332, -36) ,(201333, 43) ,(201334, 50) ,(201335, 59) ,(201336, 0) ,(201337, 0) SELECT * FROM #ConsecutiveNegativeWeekTestOne WHERE Value < 0 ORDER BY [Week] ASC CREATE TABLE #ConsecutiveNegativeWeekTestTwo ( [Week] INT NOT NULL ,[Value] DECIMAL(18,6) NOT NULL ) -- FALSE: The negative weeks are not consecutive INSERT INTO #ConsecutiveNegativeWeekTestTwo VALUES (201327, 5) ,(201328,-11) ,(201329,20) ,(201330, -25) ,(201331, 30) ,(201332, -36) ,(201333, 43) ,(201334, 50) ,(201335, -15) ,(201336, 0) ,(201337, 0) SELECT * FROM #ConsecutiveNegativeWeekTestTwo WHERE Value < 0 ORDER BY [Week] ASC ``` My SQL fiddle is also here: <http://sqlfiddle.com/#!3/ef54f/2>
You could use a combination of `EXISTS` subqueries, assuming you only want to count groups (series of consecutive weeks, all negative): ``` -- Find the potential start weeks ;WITH starts as ( SELECT [Week] FROM #ConsecutiveNegativeWeekTestOne AS s WHERE s.[Value] < 0 AND NOT EXISTS ( SELECT 1 FROM #ConsecutiveNegativeWeekTestOne AS p WHERE p.[Week] = s.[Week] - 1 AND p.[Value] < 0 ) ) SELECT COUNT(*) FROM Starts AS s WHERE EXISTS ( SELECT 1 FROM #ConsecutiveNegativeWeekTestOne AS n WHERE n.[Week] = s.[Week] + 1 AND n.[Value] < 0 ) ``` If you have an index on `Week`, this query should even be moderately efficient.
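If the counting can happen outside SQL, the same grouping is a short Python sketch with `itertools.groupby` — assuming the rows are already ordered by week and the week numbers have no gaps (the data below is the first test table from the question):

```python
from itertools import groupby

weeks = [(201327, 5), (201328, -11), (201329, -18), (201330, 25),
         (201331, 30), (201332, -36), (201333, 43), (201334, 50),
         (201335, 59), (201336, 0), (201337, 0)]

# Group consecutive rows by sign, then count runs of 2+ negative weeks.
negative_runs = [list(g) for is_neg, g in
                 groupby(weeks, key=lambda w: w[1] < 0) if is_neg]
count = sum(1 for run in negative_runs if len(run) >= 2)
print(count)  # 1  (only 201328-201329 form a run of 2+ negatives)
```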
First, would you please share the formula for calculating week number, or provide a real date for each week, or some method to determine if there are 52 or 53 weeks in any particular year? Once you do that, I can make my queries properly skip missing data AND cross year boundaries. Now to queries: this can be done without a `JOIN`, which depending on the exact indexes present, may improve performance a huge amount over any solution that does use `JOINs`. Then again, it may not. This is also harder to understand so may not be worth it if other solutions perform well enough (especially when the right indexes are present). Simulate a `PREORDER BY` windowing function (respects gaps, ignores year boundaries): ``` WITH Calcs AS ( SELECT Grp = [Week] -- comment out to ignore gaps and gain year boundaries -- Row_Number() OVER (ORDER BY [Week]) -- swap with previous line - Row_Number() OVER (PARTITION BY (SELECT 1 WHERE Value < 0) ORDER BY [Week]), * FROM dbo.ConsecutiveNegativeWeekTestOne ) SELECT [Week] = Min([Week]) -- NumWeeks = Count(*) -- if you want the count FROM Calcs C WHERE Value < 0 GROUP BY C.Grp HAVING Count(*) >= 2 ; ``` ## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/5920c/1) (1st query) And another way, simulating `LAG` and `LEAD` with a `CROSS JOIN` and aggregates (respects gaps, ignores year boundaries): ``` WITH Groups AS ( SELECT Grp = T.[Week] + X.Num, * FROM dbo.ConsecutiveNegativeWeekTestOne T CROSS JOIN (VALUES (-1), (0), (1)) X (Num) ) SELECT [Week] = Min(C.[Week]) -- Value = Min(C.Value) FROM Groups G OUTER APPLY (SELECT G.* WHERE G.Num = 0) C WHERE G.Value < 0 GROUP BY G.Grp HAVING Min(G.[Week]) = Min(C.[Week]) AND Max(G.[Week]) > Min(C.[Week]) ; ``` ## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/5920c/1) (2nd query) And, my original second query, but simplified (ignores gaps, handles year boundaries): ``` WITH Groups AS ( SELECT Grp = (Row_Number() OVER (ORDER BY T.[Week]) + X.Num) / 3, * FROM 
dbo.ConsecutiveNegativeWeekTestOne T CROSS JOIN (VALUES (0), (2), (4)) X (Num) ) SELECT [Week] = Min(C.[Week]) -- Value = Min(C.Value) FROM Groups G OUTER APPLY (SELECT G.* WHERE G.Num = 2) C WHERE G.Value < 0 GROUP BY G.Grp HAVING Min(G.[Week]) = Min(C.[Week]) AND Max(G.[Week]) > Min(C.[Week]) ; ``` Note: The execution plan for these may be rated as more expensive than other queries, but there will be only 1 table access instead of 2 or 3, and while the CPU may be higher it is still respectably low. Note: I originally was not paying attention to only producing one row per group of negative values, and so I produced this query as only requiring 2 table accesses (respects gaps, ignores year boundaries): ``` SELECT T1.[Week] FROM dbo.ConsecutiveNegativeWeekTestOne T1 WHERE Value < 0 AND EXISTS ( SELECT * FROM dbo.ConsecutiveNegativeWeekTestOne T2 WHERE T2.Value < 0 AND T2.[Week] IN (T1.[Week] - 1, T1.[Week] + 1) ) ; ``` ## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/5920c/1) (3rd query) However, I have now modified it to perform as required, showing only each starting date (respects gaps, ignored year boundaries): ``` SELECT T1.[Week] FROM dbo.ConsecutiveNegativeWeekTestOne T1 WHERE Value < 0 AND EXISTS ( SELECT * FROM dbo.ConsecutiveNegativeWeekTestOne T2 WHERE T2.Value < 0 AND T1.[Week] - 1 <= T2.[Week] AND T1.[Week] + 1 >= T2.[Week] AND T1.[Week] <> T2.[Week] HAVING Min(T2.[Week]) > T1.[Week] ) ; ``` ## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/5920c/1) (3rd query) Last, just for fun, here is a SQL Server 2012 and up version using `LEAD` and `LAG`: ``` WITH Weeks AS ( SELECT PrevValue = Lag(Value, 1, 0) OVER (ORDER BY [Week]), SubsValue = Lead(Value, 1, 0) OVER (ORDER BY [Week]), PrevWeek = Lag(Week, 1, 0) OVER (ORDER BY [Week]), SubsWeek = Lead(Week, 1, 0) OVER (ORDER BY [Week]), * FROM dbo.ConsecutiveNegativeWeekTestOne ) SELECT @Week = [Week] FROM Weeks W WHERE ( [Week] - 1 > PrevWeek OR PrevValue >= 0 ) AND Value < 0 AND SubsValue < 
0 AND [Week] + 1 = SubsWeek ; ``` ## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/5920c/1) (4th query) I am not sure I am doing this the best way as I haven't used these much, but it works nonetheless. You should do some performance testing of the various queries presented to you, and pick the best one, considering that code should be, in order: 1. Correct 2. Clear 3. Concise 4. Fast Seeing that some of my solutions are anything but clear, other solutions that are fast enough and concise enough will probably win out in the competition of which one to use in your own production code. But... maybe not! And maybe someone will appreciate seeing these techniques, even if they can't be used as-is *this* time. So let's do some testing and see what the truth is about all this! Here is some test setup script. It will generate the same data on your own server as it did on mine: ``` IF Object_ID('dbo.ConsecutiveNegativeWeekTestOne', 'U') IS NOT NULL DROP TABLE dbo.ConsecutiveNegativeWeekTestOne; GO CREATE TABLE dbo.ConsecutiveNegativeWeekTestOne ( [Week] int NOT NULL CONSTRAINT PK_ConsecutiveNegativeWeekTestOne PRIMARY KEY CLUSTERED, [Value] decimal(18,6) NOT NULL ); SET NOCOUNT ON; DECLARE @f float = Rand(5.1415926535897932384626433832795028842), @Dt datetime = '17530101', @Week int; WHILE @Dt <= '20140106' BEGIN INSERT dbo.ConsecutiveNegativeWeekTestOne SELECT Format(@Dt, 'yyyy') + Right('0' + Convert(varchar(11), DateDiff(day, DateAdd(year, DateDiff(year, 0, @Dt), 0), @Dt) / 7 + 1), 2), Rand() * 151 - 76 ; SET @Dt = DateAdd(day, 7, @Dt); END; ``` This generates 13,620 weeks, from 175301 through 201401. I modified all the queries to select the `Week` values instead of the count, in the format `SELECT @Week = Expression ...` so that tests are not affected by returning rows to the client. I tested only the gap-respecting, non-year-boundary-handling versions. 
**Results** ``` Query Duration CPU Reads ------------------ -------- ----- ------ ErikE-Preorder 27 31 40 ErikE-CROSS 29 31 40 ErikE-Join-IN -------Awful--------- ErikE-Join-Revised 46 47 15069 ErikE-Lead-Lag 104 109 40 jods 12 16 120 Transact Charlie 12 16 120 ``` **Conclusions** 1. The reduced reads of the non-JOIN versions are not significant enough to warrant their increased complexity. 2. The table is so small that the performance *almost* doesn't matter. 261 years of weeks is insignificant, so a normal business operation won't see any performance problem even with a poor query. 3. I tested with an index on `Week` (which is more than reasonable), doing two separate `JOIN`s with a seek was far, far superior to any device to try to get the relevant related data in one swoop. Charlie and jods were spot on in their comments. 4. This data is not large enough to expose real differences between the queries in CPU and duration. The values above are representative, though at times the 31 ms were 16 ms and the 16 ms were 0 ms. Since the resolution is ~15 ms, this doesn't tell us much. 5. My tricky query techniques do perform better. They might be worth it in performance critical situations. But this is not one of those. 6. Lead and Lag may not always win. The presence of an index on the lookup value is probably what determines this. The ability to still pull prior/next values based on a certain order even when the order by value is not sequential may be one good use case for these functions.
SQL to check for 2 or more consecutive negative week values
[ "sql", "sql-server-2008" ]
I am creating a weather station using a Raspberry Pi. I have a MySQL database set up for the different sensors (temp, humidity, pressure, rain, etc.) and am now getting to processing the wind sensors. I have a Python program that watches the GPIO pins for the anemometer and counts the pulses to calculate the wind speed. It also reads from a wind vane, processed through an ADC, to get the direction. For the other sensors I only process them every few minutes and dump the data directly to the DB. Because I have to calculate a lot of things from the wind sensor data, I don't necessarily want to write to the DB every 5 seconds and then have to read back the past 5 minutes of data to calculate the current speed and direction. I would like to collect the data in memory, do the processing, then write the finalized data to the DB. The sensor reading is something like: datetime, speed, direction 2013-6-20 09:33:45, 4.5, W 2013-6-20 09:33:50, 4.0, SW 2013-6-20 09:33:55, 4.3, W The program is calculating data every 5 seconds from the wind sensors. I would like to write data to the DB every 5 minutes. Because the DB is on an SD card, I obviously don't want to write to the DB 60 times, then read it back to process it, then write it to the permanent archival DB every 5 minutes. Would I be better off using a list of lists? Or a dictionary of tuples keyed by datetime? {datetime.datetime(2013, 6, 20, 9, 33, 45, 631816): ('4.5', 'W')} {datetime.datetime(2013, 6, 20, 9, 33, 50, 394820): ('4.0', 'SW')} {datetime.datetime(2013, 6, 20, 9, 33, 55, 387294): ('4.3', 'W')} For the latter, what is the best way to update a dictionary? Should I just dump it to a DB and read it back? That seems like an excessive amount of read/writes a day for so little data.
There are multiple cache layers between a Python program and a database. In particular, the Linux disk block cache may keep your database in core depending on patterns of usage. Therefore, you should not assume that writing to a database and reading back is necessarily slower than some home-brew cache that you'd put in your application. And code that you write to prematurely optimize your DB is going to be infinitely more buggy than code you don't write. For the workload as you've specified it, MySQL strikes me as a little heavyweight relative to SQLite, but you may have unstated reasons to require it.
One option is to use [redis](http://redis.io/) to store your data. It's a key value store that's really good for storing data like what you're talking about. It operates in memory and writes data to disk for persistence, this is [configurable](http://redis.io/topics/persistence "redis persistence") so you could make it only write to disk every few hours or once a day. The [redis-py](https://github.com/andymccurdy/redis-py) library is very easy to use.
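Whichever store you pick, the buffering pattern itself is small. Here is a sketch using a plain list plus one batched `executemany` flush, with `sqlite3` standing in for MySQL (the table name and helper names are made up for illustration):

```python
import sqlite3
from datetime import datetime

readings = []  # in-memory buffer of (timestamp, speed, direction) tuples

def record(speed, direction):
    # Called every 5 seconds: just appends in memory, no disk I/O.
    readings.append((datetime.now().isoformat(), speed, direction))

def flush(conn):
    # Called every 5 minutes: one batched write instead of ~60 small ones.
    conn.executemany("INSERT INTO wind VALUES (?, ?, ?)", readings)
    conn.commit()
    readings.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wind (ts TEXT, speed REAL, direction TEXT)")
record(4.5, "W")
record(4.0, "SW")
flush(conn)
print(conn.execute("SELECT COUNT(*) FROM wind").fetchone()[0])  # 2
```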
Python "in memory DB style" data types
[ "python", "types", "raspberry-pi" ]
I have searched for this question here but couldn't find it; please redirect me if we already have it on the site. I'm looking for a way to create a CTE that uses another CTE as its source data, to limit it further. I have a CTE which creates a report for me, but I would like to narrow this report down with another input, using the existing CTE. I hope my question is clear.
You can chain 2 (or more) CTE's together. For example ``` with ObjectsWithA as ( select * from sys.objects where name like '%A%' ), ObjectsWithALessThan100 as ( select * from ObjectsWithA where object_id < 100 ) select * from ObjectsWithALessThan100; ``` Or the same example, with more "spelled out" names/aliases: ``` with ObjectsWithA (MyObjectId , MyObjectName) as ( select object_id as MyObjIdAlias , name as MyNameAlias from sys.objects where name like '%A%' ), ObjectsWithALessThan100 as ( select * from ObjectsWithA theOtherCte where theOtherCte.MyObjectId < 100 ) select lessThan100Alias.MyObjectId , lessThan100Alias.MyObjectName from ObjectsWithALessThan100 lessThan100Alias order by lessThan100Alias.MyObjectName; ```
A CTE can refer to previous CTEs: ``` with report as ( <your query here> ), reportLimited as ( select * from report where foo = @bar ) select * from reportLimited ``` The only rule is that the references have to be sequential. No forward references.
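The same chaining works in other engines too. For instance, run through SQLite from Python (a hypothetical `report` table with made-up values, filtering in the second CTE on the first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE report (id INTEGER, foo TEXT);
INSERT INTO report VALUES (1, 'bar'), (2, 'baz'), (3, 'bar');
""")

rows = conn.execute("""
WITH base AS (
    SELECT * FROM report
),
narrowed AS (
    SELECT * FROM base WHERE foo = 'bar'  -- second CTE refers to the first
)
SELECT id FROM narrowed ORDER BY id
""").fetchall()
print(rows)  # [(1,), (3,)]
```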
How to create CTE which uses another CTE as the data to further limit?
[ "sql", "common-table-expression" ]
I have a script that gets a string in the form of 3/4/2013. How can I convert that to a date that I can then use to determine the age of the date (in months)? I would also like to be able to have the month in decimal form (i.e. 2.8 months old). I'm not sure what to do at all as far as coding this. I've read about different libraries that can do things like this, but I'm not sure which one is best. EDIT: Assume a month has 30 days. This is what I have so far: ``` import time def ageinmonths(date_string): return time.today() - time.strptime("3/4/2013", "%b %d %y") ```
This is the answer from kzh but with the addition of the decimal that the poster wanted. ``` from datetime import datetime from dateutil.relativedelta import relativedelta date = '3/4/2013' dt = datetime.strptime(date,'%m/%d/%Y') r = relativedelta(datetime.now(), dt) months = r.years * 12 + r.months + r.days/30. print months >>>> 3.33333333333 ```
Something like this: ``` >>> from datetime import datetime >>> from dateutil.relativedelta import relativedelta >>> dt = datetime.strptime('3/4/2013','%m/%d/%Y') >>> r = relativedelta(datetime.now(), dt) >>> months = r.years * 12 + r.months + r.days/30 ```
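If pulling in the third-party `dateutil` package isn't an option, and the question's 30-days-per-month assumption is acceptable, a standard-library-only sketch works too (the fixed `now` argument is only there to make the output reproducible):

```python
from datetime import datetime

def age_in_months(date_str, now=None):
    # Parse m/d/Y and count elapsed days, treating a month as 30 days.
    born = datetime.strptime(date_str, "%m/%d/%Y")
    now = now or datetime.now()
    return (now - born).days / 30.0

# 3/4/2013 -> 6/20/2013 is 108 days, i.e. 3.6 "months" of 30 days each.
print(age_in_months("3/4/2013", now=datetime(2013, 6, 20)))  # 3.6
```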
Given a date in a specific format, how can I determine age in months?
[ "python", "datetime" ]
``` >>> if 5==5: print '5' else: print 'cat' File "<pyshell#2>", line 3 else: print 'cat' ^ IndentationError: unindent does not match any outer indentation level ``` No matter what the indent is, it still doesn't work. Even if I try to put the `print cat` statement on the next line it gives me syntax error. Why is this?
You want your code to look like this: ``` if 5 == 5: print '5' else: print 'cat' ``` Why? Because Python uses indentation like other languages use brackets: it determines the grouping of statements. [Dive Into Python](http://www.diveintopython.net/getting_to_know_python/indenting_code.html) has an excellent explanation of how this works: > Python uses carriage returns to separate statements and a colon and indentation to separate code blocks. C++ and Java use semicolons to separate statements and curly braces to separate code blocks. The [reference manual](http://docs.python.org/2/reference/lexical_analysis.html#indentation) also describes how this works at a lower level: > Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements. > > ... > > First, tabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight (this is intended to be the same rule as used by Unix). The total number of spaces preceding the first non-blank character then determines the line’s indentation. Stylistically, you want to use 4 spaces to indicate a level of indentation. This is defined in [PEP 8](http://www.python.org/dev/peps/pep-0008/#indentation). This will ensure everyone sees the same thing, no matter what IDE they're using. Note that you should **never mix tabs and spaces**, as it becomes very easy to unwittingly break code (consider not all editors show tabs as the same size).
Use proper indentation in Python, otherwise you'll get errors like this. Use 4 spaces per indentation level. ``` if 5 == 5: print '5' else: print 'cat' ``` From [docs](http://docs.python.org/2/reference/lexical_analysis.html#indentation): > Leading whitespace (spaces and tabs) at the beginning of a logical > line is used to compute the indentation level of the line, which in > turn is used to determine the grouping of statements.
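You can reproduce the exact error outside the REPL by compiling a mis-indented snippet — the `else` below sits at an indent level that matches neither the `if` (level 0) nor its body (level 2):

```python
bad = (
    "if 5 == 5:\n"
    "  print('5')\n"
    " else:\n"          # one space: matches no outer indentation level
    "  print('cat')\n"
)
try:
    compile(bad, "<example>", "exec")
except IndentationError as err:
    caught = type(err).__name__
print(caught)  # IndentationError
```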
python else and elif error
[ "python", "if-statement" ]