**Context**: I needed to randomly erase some precise elements of a few lists of numbers, extracting some random indexes and saving them in a set called *aleaindex* (done, it works properly, thanks to some SO users' help). Now, I'd like to replace the old lists *a*, *b*, etc. with the new, possibly shorter ones *newa*, *newb*, etc. Here is the function: ``` def myfunction(N, other_parameters, a, b, c): ... while (...): aleaindex.add(random.randint(..., ...)) ... new_a = [v for i, v in enumerate(a) if i not in aleaindex] while a: a.pop() a = new_a[:] ... ``` and so on for the other lists *b*, *c*, etc. **Problem**: the function seems to modify them correctly within the module (checked by printing), but when I print the modified lists *outside* the module, that is, in the "main" file, the lists are as if they had not been modified. Where am I wrong?
This line: ``` a=new_a[:] ``` overwrites the variable `a` with a new object. Outside the function or module, the old object is still pointed at by `a` (or whatever it was called there). Try: ``` new_a = [v for i, v in enumerate(a) if i not in aleaindex] while a: a.pop() a[:] = new_a[:] ``` ### Explanation To see this, just try the following. ``` >>> a = [1,2,3,4] >>> b = a >>> print b [1, 2, 3, 4] >>> a[:] = [2,3] >>> print b [2, 3] >>> a = [5] >>> print b [2, 3] ``` ## Example in function! If the variable is mutable (and a normal list is), this works: ``` >>> def f(a): ... a[0] = 2 >>> b = [3] >>> f(b) >>> print b [2] ``` Variables are not passed by value - you can edit a mutable value.
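To make the difference concrete, here is a small self-contained sketch (function and variable names are illustrative, not from the question) showing that in-place slice assignment is all that is needed; any outside alias of the list stays in sync:

```python
def drop_indexes(a, skip):
    # Rebuild the list IN PLACE: the list object itself is mutated,
    # not rebound, so every reference to it sees the change.
    a[:] = [v for i, v in enumerate(a) if i not in skip]

data = [10, 20, 30, 40]
alias = data            # same object, like the caller's list
drop_indexes(data, {1, 3})
print(alias)            # [10, 30] -- the alias sees the change
```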
I do not know what you are trying to do, but from your snippets you are clearly lost. Your code does not make much sense and there is more than one problem. Nonetheless, the problem you asked about - why is the list not fully changed? - seems to be related to this loop: ``` while a: a.pop() a = new_a[:] ``` Suppose we call your function this way: ``` list1 = [1, 2, 3, 4, 5, 6, 7] myfunction(N, other_parameters, list1, [], []) ``` What will happen is, when you execute the first line, you will get a variable called `list1` and it will point to a list: ![list1 points to a list](https://i.stack.imgur.com/8Laww.png) When you call the function `myfunction()`, the function, among other things, creates a variable called `a` which will point to the same list pointed to by `list1`: ![Now a points to the same list as list1](https://i.stack.imgur.com/WDGpX.png) So far, so good. Then we get to the loop below: ``` while a: a.pop() a = new_a[:] ``` In the first line of it (`a.pop()`), you take an item out of the list. Since both variables `a` and `list1` point to the same list, you would see the same result... ![Removing an item from the list](https://i.stack.imgur.com/Cq06A.png) ...if it were not for the next line of the loop (`a = new_a[:]`). In this line, you are making the variable `a` point to *another* list: ![a points to another list](https://i.stack.imgur.com/Us0qx.png) Now, every operation you execute on `a` will act on this list, which is in no way related to `list1`. For example, you can execute `a.pop()` at the next iteration: ![Popping from the other list](https://i.stack.imgur.com/fbI21.png) However, it makes no sense at all, because the line `a = new_a[:]` will replace the list pointed to by `a` *again* with *yet another different list*: ![Changing a again?!](https://i.stack.imgur.com/mCB9c.png) So, what is the solution? I don't know. As I have said, it is not possible (for me, at least) to make sense of your code.
You have to reflect a bit more on what you are trying to do and explain it to us, with a bit more context.
Why function-modified lists don't change when seen in the main file?
[ "python", "list", "function" ]
**Context:** I want to replace an object with another: ``` x = [] r = [2] replace(x, r) assert x == [2] assert x is r ``` In Smalltalk this would be called `become`. **Question:** How would I do that? Would I need to create a C extension, or is there already something like this? **I tried:** [This](https://gist.github.com/anonymous/74ae8e03a55a6a3ea533), but I need to handle so many cases. Just changing the object behind the pointer would be enough for me. **Reason:** I want to implement extension-like features, refinements and subjects. Sometimes I can not change the class of a builtin object (`os.__class__ = X`) to enable more flexibility. Since I can not do that, I thought about replacing the object. **Concrete Example of Usage:** A subjective view: ``` import pylung pylung.deutsch() import os # I want to preserve object identity with the original os os.ERREICHE_SETZEN os.durchlaufe(...) ``` A translation: ``` import os import pylung os = pylung.translate(os, 'deutsch') # German @os.walk def durchlaufe(ordner, oben_nach_unten, folge_links): """Ordner baum erzeuger """ ```
If you want to be able to do this... ``` import pylung pylung.deutsch() import os # I want to preserve object identity with the original os os.ERREICHE_SETZEN os.durchlaufe(...) ``` ...and retain compatibility with existing libraries (which will use the English names), it's probably easiest to add additional attributes to the `os` module, rather than replacing them. For example... ``` # pylung.py def deutsch(): import os os.durchlaufe = os.walk # etc... ``` ...if you also want to translate module names, say `collections` to `kollektionen`, you can do something like this... ``` # pylung.py import sys def deutsch(): import os os.durchlaufe = os.walk # etc... import collections sys.modules['kollektionen'] = sys.modules['collections'] ``` ...again, ensuring you keep the original module name in place for compatibility. You could also look at using a [custom import hook](http://www.python.org/dev/peps/pep-0302/) to do the module translations on-demand, so you need only replace the symbols that are actually used in the script.
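The `sys.modules` trick can be checked in isolation; this sketch registers `collections` under the hypothetical name `kollektionen` (the answer's made-up module name) and then imports it under that name:

```python
import sys
import collections

# Register the existing module object under a second name...
sys.modules['kollektionen'] = sys.modules['collections']

# ...after which a normal import finds it in the module cache,
# because the import system consults sys.modules first.
import kollektionen

print(kollektionen is collections)  # True: same module object
```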
A simple `=` does what you want: ``` x = [] r = [2] x = r assert x == [2] assert x == r assert x is r ```
Replace object with another in the entire process / become in Python
[ "python", "reference" ]
I am trying to select rows from a table with GROUP BY, ignoring in each group the newest row when sorted by date. The sorting should be done on a date field, so as to skip the newest entry and return the older ones for each group. The table looks like ``` +----+------------+------------+-----------+ | id | updated_on | group_name | list_name | +----+------------+------------+-----------+ | 1 | 2013-04-03 | g1 | l1 | | 2 | 2013-03-21 | g2 | l1 | | 3 | 2013-02-26 | g2 | l1 | | 4 | 2013-02-21 | g1 | l1 | | 5 | 2013-02-20 | g1 | l1 | | 6 | 2013-01-09 | g2 | l2 | | 7 | 2013-01-10 | g2 | l2 | | 8 | 2012-12-11 | g1 | l1 | +----+------------+------------+-----------+ ``` <http://www.sqlfiddle.com/#!2/cec99/1> So, basically, I just want to return the ids (3, 4, 5, 6, 8), as those are the oldest rows for each (group\_name, list\_name) pair: the latest entry in each group is ignored and the older ones are returned. I have not been able to write the SQL for this; I know ORDER BY does not combine easily with GROUP BY. Please help me figure out a solution. Thanks. Also, is there a way to do this without using subqueries?
Something like the following gets only the rows that are older than the newest date for their group: ``` select a.ID, a.updated_on, a.group_name, list_name from data a where a.updated_on < ( select max(updated_on) from data group by group_name having group_name = a.group_name ); ``` SQL Fiddle: <http://www.sqlfiddle.com/#!2/00d43/10> ## Update (based on your reqs) ``` select a.ID, a.updated_on, a.group_name, list_name from data a where a.updated_on < ( select max(updated_on) from data group by group_name, list_name having group_name = a.group_name and list_name = a.list_name ); ``` See: <http://www.sqlfiddle.com/#!2/cec99/3> ## Update (to use a simple subquery instead of a correlated one) Decided the correlated subquery was too slow based on: [Subqueries vs joins](https://stackoverflow.com/questions/141278/subqueries-vs-joins) So I changed to joining with an aliased derived table built from a nested query. ``` select a.ID, a.updated_on, a.group_name, a.list_name from data a, ( select group_name, list_name , max(updated_on) as MAX_DATE from data group by group_name, list_name ) as MAXDATE where a.list_name = MAXDATE.list_name AND a.group_name = MAXDATE.group_name AND a.updated_on < MAXDATE.MAX_DATE ; ``` SQL Fiddle: <http://www.sqlfiddle.com/#!2/5df64/8>
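The derived-table version is portable enough to check with Python's built-in `sqlite3`; the table and data below mirror the question, and the expected ids are (3, 4, 5, 6, 8):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE data (id INTEGER, updated_on TEXT, group_name TEXT, list_name TEXT);
INSERT INTO data VALUES
 (1,'2013-04-03','g1','l1'), (2,'2013-03-21','g2','l1'),
 (3,'2013-02-26','g2','l1'), (4,'2013-02-21','g1','l1'),
 (5,'2013-02-20','g1','l1'), (6,'2013-01-09','g2','l2'),
 (7,'2013-01-10','g2','l2'), (8,'2012-12-11','g1','l1');
""")
rows = conn.execute("""
SELECT a.id
FROM data a
JOIN (SELECT group_name, list_name, MAX(updated_on) AS max_date
      FROM data GROUP BY group_name, list_name) m
  ON a.group_name = m.group_name
 AND a.list_name  = m.list_name
 AND a.updated_on < m.max_date   -- drop the newest row of each group
ORDER BY a.id
""").fetchall()
print([r[0] for r in rows])  # [3, 4, 5, 6, 8]
```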
You could try using the following query (yes, it has a nested join, but maybe it helps). ``` SELECT ID FROM (select d1.ID FROM data d1 LEFT JOIN data d2 ON (d1.group_name = d2.group_name AND d1.list_name=d2.list_name AND d1.updated_on > d2.updated_on) WHERE d2.ID IS NULL) data_tmp; ``` **CORRECTION:** ``` SELECT DISTINCT(ID) FROM (select d1.* FROM data d1 LEFT JOIN data d2 ON (d1.group_name = d2.group_name AND d1.list_name=d2.list_name AND d1.updated_on < d2.updated_on) WHERE d2.ID IS NOT NULL) date_tmp; ```
Mysql to select rows group by with order by another column
[ "mysql", "sql" ]
I'm launching a program with `subprocess` in Python. In some cases the program may freeze; this is out of my control. The only thing I can do from the command line it is launched from is `Ctrl`+`Esc`, which kills the program quickly. Is there any way to emulate this with `subprocess`? I am using `subprocess.Popen(cmd, shell=True)` to launch the program.
``` p = subprocess.Popen("echo 'foo' && sleep 60 && echo 'bar'", shell=True) p.kill() ``` Check out the docs on the `subprocess` module for more info: <http://docs.python.org/2/library/subprocess.html>
Well, there are a couple of methods on the object returned by `subprocess.Popen()` which may be of use: [`Popen.terminate()`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate) and [`Popen.kill()`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.kill), which send a `SIGTERM` and `SIGKILL` respectively. For example... ``` import subprocess import time process = subprocess.Popen(cmd, shell=True) time.sleep(5) process.terminate() ``` ...would terminate the process after five seconds. Or you can use [`os.kill()`](http://docs.python.org/2/library/os.html#os.kill) to send other signals, like `SIGINT` to simulate CTRL-C, with... ``` import subprocess import time import os import signal process = subprocess.Popen(cmd, shell=True) time.sleep(5) os.kill(process.pid, signal.SIGINT) ```
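If the worry is a program that *may* freeze, a simple polling loop gives it a deadline before killing it. In this sketch the child command is just a stand-in for the real program (it would sleep for 60 seconds if left alone):

```python
import subprocess
import sys
import time

# Stand-in for a program that freezes.
process = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

deadline = time.time() + 1.0        # allow 1 second in this demo
while process.poll() is None:       # poll() returns None while running
    if time.time() > deadline:
        process.kill()              # SIGKILL; use terminate() for SIGTERM
        break
    time.sleep(0.1)

process.wait()
# On POSIX the return code is negative when the child dies from a signal.
print(process.returncode)
```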
Kill a running subprocess call
[ "python", "multithreading", "subprocess" ]
Earlier this week I posted a question on how to change specific words to numbers in a file, as part of my sentiment analysis work. Unfortunately that was not the right method for me; I had interpreted my data wrong. So I will re-ask the question using the right method. I have a specific word list that contains tokens; for example purposes I will use 4 words, even though it will really be 40. I need to turn tweets into a 0 1 1 0 type format, using a list to do so. My list is as follows (a text file with 1 word per line): * :) * :( * happy * sad My example tweets: * TWEET1: I find python cool, it makes me happy :) * TWEET2: today is a sad day :( The output should be: * TWEET1: 1 0 1 0 * TWEET2: 0 1 0 1 Basically, every digit corresponds to whether the token at that position in the list is found in the tweet. So in TWEET1, the first '1' corresponds to position one in the list (which is the smiley); the second digit, '0', corresponds to position two in the list (unhappy smiley), and because it is NOT found in the tweet, it becomes a '0'. The third digit, which is a '1', corresponds to the third place in the list (happy), and because it is found in the tweet, it becomes a '1'. I hope I'm explaining it well. I'm using Python to write a lot of my scripts/programs that manipulate the text in my files, so I'm looking for a Python program to do this for me. I'm quite new to Python, so I was hoping someone could help me write a script to do this. It took me a while to grasp the concept myself. Thanks :) MORE INFO: * since my word list will be about 40 words, the output for every tweet will be at least 40 digits. E.g. 0 1 1 0 1 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 **EDITED PART** The amazing answer given below does not suit the criteria. It replaces words with a digit very elegantly, but unfortunately it is not what I need.... further explanation (the way I came to understand it better).....
Consider this: TWEET1: "today is going to be a happy day :)" * Before it reads the line, the code is set to '0 0 0 0'. * It then checks the first '0', which means: check the first token in the list (smiley). Can it be found anywhere in the tweet? Answer: yes. Therefore the code becomes '1 0 0 0'. * Next we move to the second '0' (which corresponds to the unhappy face). Can we find the unhappy face anywhere in the tweet? Answer: no, therefore the 2nd digit stays '0'. Our code is now '1 0 0 0'. * Next we move to the 3rd digit, which corresponds to the word 'happy'. Can this word be found anywhere in the tweet? Answer: yes. Our code now becomes '1 0 1 0'. * Now we move to the last digit, which corresponds to the word/token 'sad'. Can this be found anywhere in the tweet? Answer: no, therefore the last digit remains '0'. * Our final code becomes '1 0 1 0'. I hope this explains it better :) NOTE: the code corresponds to the word list, not to the words in the tweet.
Here: ``` wordlist = [':)', ':(', 'happy', 'sad'] tweets = ['I find python cool, it makes me happy :)', 'today is a sad day :('] for tweet in tweets: print(' '.join(['1' if word in tweet else '0' for word in wordlist])) ``` Output: ``` 1 0 1 0 0 1 0 1 ```
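One caveat: `word in tweet` is a *substring* test, so a token like `sad` would also match inside `sadly`. If that matters, a hedged variant that matches whole whitespace-separated tokens instead (the second tweet is altered here just to show the difference):

```python
wordlist = [':)', ':(', 'happy', 'sad']
tweets = ['I find python cool, it makes me happy :)',
          'today is a sadly-needed day']

vectors = []
for tweet in tweets:
    tokens = tweet.split()  # exact-token match instead of substring match
    vectors.append(' '.join('1' if word in tokens else '0' for word in wordlist))

print(vectors)  # ['1 0 1 0', '0 0 0 0'] -- 'sadly-needed' no longer matches 'sad'
```

Note that punctuation stuck to a word (e.g. `happy,`) would also defeat the exact match, so you may want to strip punctuation from the tokens first.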
# Description If you have to do this with regular expressions I'd do this in two parts. **Part 1** would find and replace all the known words with a `1`. Read your known word file into an array, then join the array with the regex alternation symbol `|`. Then nest that string into the regex. `(?<=^|\s)(\b(?:happy|kittens|[:][)])\b\W?)(?=\s|$)` ![enter image description here](https://i.stack.imgur.com/mt8t1.png) **Part 2** goes back and replaces all the non-`1`'s with a `0`. `(?<=^|\s)\b(1[^\s]+|[^1]|[^\s]{2,})\b(?=\s|$)` ![enter image description here](https://i.stack.imgur.com/PbQVy.png) # Example I don't know Python, but here is a PHP example of how this would look. ``` <?php $sourcestring="I really like kittens, they make me happy."; echo preg_replace('/(?<=^|\s)(\b(?:happy|kittens|[:][)])\b\W?)(?=\s|$)/i',' 1 ',$sourcestring); ?> $sourcestring after replacement: I really like 1 they make me 1 <?php $sourcestring="I really like 1 they make me 1"; echo preg_replace('/(?<=^|\s)\b(1[^\s]+|[^1]|[^\s]{2,})\b(?=\s|$)/im',' 0 ',$sourcestring); ?> $sourcestring after replacement: 0 0 0 1 0 0 0 1 ``` # Summary 1. part 1 * `(?<=^|\s)` lookbehind to ensure the word has either white space or the start of the string before it * `(` start capture group 1 * `\b` consume word boundary * `(?:` start non capture group * `happy|kittens|[:][)]` match the words happy, kittens, or :) respectively * `)` close the non capture group * `\b` consume word boundary * `\W?` match an optional non-word character; this eats a trailing punctuation mark * `)` close capture group 1 * `(?=\s|$)` require the word to have a space or the end of the string after it 2.
part 2 * `(?<=^|\s)` lookbehind to ensure the word has either white space or the start of the string before it * `\b` consume word boundary * `(` start capture group 1 * `1[^\s]+` consume `1` followed by any number of non-whitespace characters; this prevents previously matched/replaced `1`'s from being picked up * `|` or * `[^1]` consume a single character providing it's not a `1` * `|` or * `[^\s]{2,}` 2 or more non-whitespace characters * `)` close capture group 1 * `\b` consume word boundary * `(?=\s|$)` require the word to have a space or the end of the string after it # Disclaimer This solution may fail if the input string already contained a `1` before the replacements were made. You may want to consider splitting on whitespace instead and then applying the logic against the returned array.
Changing lines of text into binary type pattern
[ "python", "regex" ]
I think the question is clear enough. I want to make a hidden Markov model in Python and draw a visualization of it, something like this picture: ![model](https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/HMMGraph.svg/708px-HMMGraph.svg.png) Is there any module to do that? I've googled it and found nothing.
Though I've never worked with Hidden Markov Models, when I need to visualize a graph (directed, with labels, colors, etc.), I use [Gephi](https://gephi.org/), a GUI graph browser/editor and generate the graphs programmatically as [GraphML](http://graphml.graphdrawing.org/) files, which is an XML-based format. Python has good XML-handling tools (in the [standard library](http://docs.python.org/2.7/library/xml.etree.elementtree.html) and [lxml](http://lxml.de/)). Gephi recognizes some of the `<data>` sub-elements as positions, colors, and labels for nodes and edges.
The `dot` language from [graphviz](http://www.graphviz.org) is the best I've found. The syntax is simple, simpler than XML.
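A minimal, dependency-free sketch of that approach: emit the DOT source by hand from a transition table (the state names and probabilities here are made up), then feed the file to the `dot` command yourself:

```python
# Transition probabilities for a toy 2-state HMM (made-up numbers).
transitions = {
    ('Rainy', 'Rainy'): 0.7, ('Rainy', 'Sunny'): 0.3,
    ('Sunny', 'Rainy'): 0.4, ('Sunny', 'Sunny'): 0.6,
}

lines = ['digraph hmm {', '    rankdir=LR;']
for (src, dst), p in sorted(transitions.items()):
    # One labelled edge per transition.
    lines.append('    %s -> %s [label="%.1f"];' % (src, dst, p))
lines.append('}')
dot_source = '\n'.join(lines)

print(dot_source)  # save as hmm.dot, then: dot -Tpng hmm.dot -o hmm.png
```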
How to visualize a hidden Markov model in Python?
[ "python", "graphics", "hidden-markov-models" ]
I have a table `Student` in SQL Server with these columns: ``` [ID], [Age], [Level] ``` I want a query that returns, for each age value that appears in `Student`, the level value that appears most often for that age. For example, if there are more `'a'`-level students aged 18 than `'b'` or `'c'`, it should return the pair `(18, a)`. I am new to SQL Server and I would like a simple answer with a nested query.
One more option uses the ROW\_NUMBER ranking function in the ORDER BY clause. WITH TIES is used when you want to also return any rows that tie with the last row of the limited result set. ``` SELECT TOP 1 WITH TIES age, level FROM dbo.Student GROUP BY age, level ORDER BY ROW_NUMBER() OVER(PARTITION BY age ORDER BY COUNT(*) DESC) ``` Or a second version of the query, using the count of each (age, level) pair and the maximum such count per age: ``` SELECT * FROM ( SELECT age, level, COUNT(*) AS cnt, MAX(COUNT(*)) OVER(PARTITION BY age) AS mCnt FROM dbo.Student GROUP BY age, level )x WHERE x.cnt = x.mCnt ``` Demo on [SQLFiddle](http://sqlfiddle.com/#!3/f847a/1)
You can do this using window functions: ``` select t.* from (select age, level, count(*) as cnt, row_number() over (partition by age order by count(*) desc) as seqnum from student s group by age, level ) t where seqnum = 1; ``` The inner query aggregates the data to count the number of students at each level for each age. The `row_number()` enumerates these within each age (the `partition by`), largest count first. The `where` clause then chooses the highest values. In the case of ties, this returns just one of the values. If you want all of them, use `rank()` instead of `row_number()`.
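For reference, the same "most frequent level per age" logic can be checked with Python's built-in `sqlite3` even without window functions, using a correlated subquery; the sample data below is made up:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE Student (ID INTEGER, Age INTEGER, Level TEXT);
INSERT INTO Student VALUES
 (1,18,'a'), (2,18,'a'), (3,18,'b'),
 (4,19,'c'), (5,19,'c'), (6,19,'b');
""")
rows = conn.execute("""
SELECT age, level FROM (
    SELECT Age AS age, Level AS level, COUNT(*) AS cnt
    FROM Student GROUP BY Age, Level
) t
WHERE cnt = (SELECT COUNT(*) FROM Student s      -- max count for this age
             WHERE s.Age = t.age
             GROUP BY s.Level
             ORDER BY COUNT(*) DESC LIMIT 1)
ORDER BY age
""").fetchall()
print(rows)  # [(18, 'a'), (19, 'c')] -- ties would produce several rows per age
```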
sql query finding most often level appear
[ "sql", "sql-server" ]
In my data.txt file, there are 2 types of lines. 1. Normal data: 16 numbers separated by spaces, with a '\n' appended at the end. 2. Incomplete data: in the process of writing the data into data.txt, the writing of the last line is always interrupted by the STOP command. Thus, it is always incomplete; e.g. it can have 10 numbers and no '\n'. **Two questions:** **a. How can I import the whole file EXCEPT the last incomplete line into Python?** I notice that ``` # Load the .txt file in myData = np.loadtxt('twenty_z_up.txt') ``` is quite "strict" in the sense that when the last incomplete line exists, the file cannot be imported. The imported .txt file has to be a nice matrix. **b.** Occasionally, I make timestamps on the first entry of a line for experiment purposes. Say I have my 1st timestamp at the start of line 2, and my 2nd stamp at the start of line 5. **How can I import only from line 2 to line 5 into Python?** =============================== Updates: Q.a is solved ================================ ``` myData = np.genfromtxt('fast_walking_pocket.txt', skip_footer=1) ``` will help discard the final incomplete row
To answer your 'b' question. Assume you have this file (called '/tmp/lines.txt'): ``` line 1 2013:10:15 line 3 line 4 2010:8:15 line 6 ``` You can use the [linecache](http://docs.python.org/2/library/linecache.html?highlight=linecache) module: ``` >>> import linecache >>> linecache.getline('/tmp/lines.txt', 2) '2013:10:15\n' ``` So you can parse this time directly: ``` >>> import datetime as dt >>>dt.datetime.strptime(linecache.getline('/tmp/lines.txt',2).strip(),'%Y:%m:%d') datetime.datetime(2013, 10, 15, 0, 0) ``` ***Edit*** Multiple lines: ``` >>> li=[] >>> for i in (2,5): ... li.append(linecache.getline('/tmp/lines.txt', i).strip()) ... >>> li ['2013:10:15', '2010:8:15'] ``` Or: ``` >>> lines={} >>> for i in (2,5): ... lines[i]=linecache.getline('/tmp/lines.txt', i).strip() ... >>> lines {2: '2013:10:15', 5: '2010:8:15'} ``` Or a range: ``` >>> lines={} >>> for i in range(2,6): ... lines[i]=linecache.getline('/tmp/lines.txt', i).strip() ... >>> lines {2: '2013:10:15', 3: 'line 3', 4: 'line 4', 5: '2010:8:15'} ```
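If you want a contiguous range in one pass rather than individual line numbers, `itertools.islice` over the open file is an alternative sketch (the file contents mirror the example above; the temporary file is just for the demo):

```python
import itertools
import tempfile

content = "line 1\n2013:10:15\nline 3\nline 4\n2010:8:15\nline 6\n"
with tempfile.NamedTemporaryFile('w+', suffix='.txt', delete=False) as f:
    f.write(content)
    path = f.name

with open(path) as f:
    # islice(f, start, stop) uses zero-based indexes,
    # so lines 2..5 of the file are indexes 1..4.
    lines = [line.strip() for line in itertools.islice(f, 1, 5)]

print(lines)  # ['2013:10:15', 'line 3', 'line 4', '2010:8:15']
```

Unlike `linecache`, this reads the file lazily and never loads more than the requested range, which matters for large data files.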
You can try [pandas](http://pandas.pydata.org/) which provides a useful function [read\_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html) to load the data more easily. Example data: ``` a b c d e f g h i j k l m n o p a b c d e f g h i j k l m n o p a b c d e f g h i j k l m n o p a b c d e f g h i j k l m n o p a b c d e f g h i j k l m n o p a b c d e f g h i j ``` For your Q1, you can load the data by: ``` In [27]: import pandas as pd In [28]: df = pd.read_csv('test.txt', sep=' ', header=None, skipfooter=1) ``` [DataFrame](http://pandas.pydata.org/pandas-docs/dev/dsintro.html#dataframe) is a useful structure which can help you to process data more easily. To get a numpy array, simply get the `values` attribute of the `DataFrame`. ``` In [33]: df.values Out[33]: array([['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p']], dtype=object) ``` For your Q2, you can get the second and the fifth line by ``` In [36]: df.ix[[1, 4]] Out[36]: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 1 a b c d e f g h i j k l m n o p 4 a b c d e f g h i j k l m n o p ```
numpy - Python - Selectively import parts of the .txt file
[ "python", "numpy", "analysis" ]
I'm using Python 2.6. Current output: ``` mylist = [('log:A', '1009.2'), ('log:B', '938.1'), ('log:C', '925.7'), ('log:C', '925.7')] ``` I'm trying to add the values to produce the following, ranked highest to lowest. The problem I'm having is adding everything with a `log:C` tag together, and not outputting it twice. ``` log:C = 1851.4 log:A = 1009.2 log:B = 938.1 ```
Using `collections.defaultdict`: ``` >>> strs = "log:A 22 log:B 44 log:C 74 log:D 24 log:B 10" >>> from collections import defaultdict >>> dic = defaultdict(int) >>> it = iter(strs.split()) >>> for k in it: ... dic[k] += int(next(it)) ... >>> for k,v in sorted(dic.items(), key = lambda x: x[1], reverse = True): ... print k,v ... log:C 74 log:B 54 log:D 24 log:A 22 ``` To get a sorted list of items based on values: ``` >>> sorted(dic.items(), key = lambda x: x[1], reverse = True) [('log:C', 74), ('log:B', 54), ('log:D', 24), ('log:A', 22)] ``` **Update:** Based on your new input ``` >>> mylist = [('log:A', '1009.2'), ('log:B', '938.1'), ('log:C', '925.7'), ('log:C', '925.7')] >>> dic = defaultdict(int) >>> for k,v in mylist: dic[k] += float(v) ... >>> sorted(dic.items(), key = lambda x: x[1], reverse = True) [('log:C', 1851.4), ('log:A', 1009.2), ('log:B', 938.1)] ```
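`collections.Counter` (available from Python 2.7) gives the same aggregation plus `most_common()` for the ranking; a compact equivalent using the same input:

```python
from collections import Counter

mylist = [('log:A', '1009.2'), ('log:B', '938.1'),
          ('log:C', '925.7'), ('log:C', '925.7')]

totals = Counter()
for key, value in mylist:
    totals[key] += float(value)     # Counter happily accumulates floats too

ranked = totals.most_common()       # already sorted highest to lowest
print(ranked)
```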
``` mystr = 'log:A 22 log:B 44 log:C 74 log:D 24 log:B 10' li=mystr.split() res={} for k,v in zip(li[::2],li[1::2]): res.setdefault(k,[]).append(int(v)) print res ``` Prints: ``` {'log:D': [24], 'log:A': [22], 'log:C': [74], 'log:B': [44, 10]} ``` Then just sum them: ``` for k in sorted(res): print k, sum(res[k]) ``` Prints: ``` log:A 22 log:B 54 log:C 74 log:D 24 ```
python matching list items
[ "python", "string", "parsing", "dictionary" ]
So I have this function: ``` def test(fourBits): binaryDigits = fourBits if binaryDigits[1] == 1: print 'test' ``` If I enter `test('1111')` it won't print `test`. I don't understand why not.
Use this: ``` if int(binaryDigits[1]) == 1: ``` or this: ``` if binaryDigits[1] == '1': ``` so that the types match, i.e., compare two characters or two numbers.
`binaryDigits[1]` is a string and `1` is an integer, so the comparison is always false.
Why won't my function print
[ "python", "printing" ]
I need to do something like this, but in Python instead of Bash: ``` i=1 while [ $i <= 10 ] ; do wget http://somewebsite.net/shared/fshared_$i.7z $i = $i + 1 done ``` In Python I tried the following: ``` import urllib, os i = 0 while i <= 3: os.system('wget http://somewebsite.net/shared/fshared_',i,'.7z') i = i + 1 ``` But it does not work; the variable is not concatenated correctly (or something similar). The Bash code does not work either; apparently in Bash you can't do something as simple as `i = i + 1`. Could anyone help me with this? **SOLVED!** :) Now I have the script in both Bash and Python; actually, with Python I have several variants. Thanks to all... thanks a lot ^-^ How do I mark the topic as solved? Thanks again.
Try ``` os.system('wget http://somewebsite.net/shared/fshared_%s.7z'%i) ``` use `%s` instead of `,`
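A pure-Python sketch without shelling out to `wget` at all. The URL is the question's placeholder, so the actual download call is kept in a function rather than executed here:

```python
import urllib.request  # Python 3; on Python 2 use urllib.urlretrieve

def build_urls(base='http://somewebsite.net/shared/fshared_%d.7z', n=10):
    # %d fills in the loop counter, just like %s in the os.system fix
    return [base % i for i in range(1, n + 1)]

def download_all(urls):
    for url in urls:
        filename = url.rsplit('/', 1)[-1]          # e.g. fshared_1.7z
        urllib.request.urlretrieve(url, filename)  # fetch each archive

urls = build_urls()
print(urls[0], urls[-1])
```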
You **can** increment in Bash. You have to do something like: ``` i=3 (( i++ )) echo $i ``` That last line should print 4. So your script would be: ``` i=1 while [ $i -le 10 ] ; do wget http://somewebsite.net/shared/fshared_$i.7z (( i++ )) done ``` Edit: fixed code to use `-le` instead of `<=`, since `<=` doesn't work in bash.
How to download with wget in Python using variables?
[ "python", "bash", "wget" ]
Is there a way in Python such that, given a Python class: ``` class Foo(object): apple = None orange = None def __init__(self, apple=None, orange=None): super(Foo, self).__init__() self.apple = apple self.orange = orange ``` after a `Foo` object is initialized, one of the two attributes (apple, orange) must always be assigned a value other than None, and at no time should both attributes be assigned something other than None? In other words: ``` | orange is None | orange is not None apple is None | NO | YES apple is not None | YES | NO ``` How would one do this in Python?
In the constructor, it's simple enough to `raise` a `ValueError` if they are both `None` or both set. The problem is later on in the code. Following the principle of least surprise, I think you should mark the variables private and use setter methods (not property setter, plain old methods). This clearly suggests you're doing extra logic when the value is set, and it gives you an obvious place to add extra logic later if needed. Using getter property methods would be fine, though. So something like this: ``` class Foo(object): def __init__(self, apple=None, orange=None): super(Foo, self).__init__() if apple is None and orange is None: raise ValueError('apple and orange cannot both be None') if apple is not None and orange is not None: raise ValueError('apple and orange cannot both be set') self._apple = apple self._orange = orange @property def apple(self): return self._apple @property def orange(self): return self._orange def setAppleClearOrange(self, value): if value is None: raise ValueError('Cannot set both to None') self._orange = None self._apple = value def setOrangeClearApple(self, value): if value is None: raise ValueError('Cannot set both to None') self._apple = None self._orange = value ``` Yes, it's a bit verbose, but it's *obvious* what your intentions are, which is actually more important.
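The two constructor checks can also be collapsed into a single equality test, since "both None" and "both set" are exactly the cases where the two `is None` tests agree; a condensed sketch (validation only, without the setter methods):

```python
class Foo(object):
    def __init__(self, apple=None, orange=None):
        # Both-None and both-set are the cases where the tests agree.
        if (apple is None) == (orange is None):
            raise ValueError('exactly one of apple or orange must be set')
        self.apple = apple
        self.orange = orange

ok = Foo(apple=1)

def rejected(**kwargs):
    try:
        Foo(**kwargs)
        return False
    except ValueError:
        return True

print(rejected(), rejected(apple=1, orange=2))  # True True
```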
Something like ``` if not ((self.apple is None) ^ (self.orange is None)): # some error ``` might do the trick... `^` is the XOR operator, which returns true if one, but not both, of the operands is true.
Force one of two python class attributes to be always be assigned a value?
[ "python" ]
I know, that ``` map(function, arguments) ``` is equivalent to ``` for argument in arguments: function(argument) ``` Is it possible to use map function to do the following? ``` for arg, kwargs in arguments: function(arg, **kwargs) ```
You can with a lambda: ``` map(lambda a: function(a[0], **a[1]), arguments) ``` or you could use a generator expression or list comprehension, depending on what you want: ``` (function(a, **k) for a, k in arguments) [function(a, **k) for a, k in arguments] ``` In Python 2, `map()` returns a list (so the list comprehension is the equivalent), in Python 3, `map()` is a generator (so the generator expression can replace it). There is no built-in or standard library method that does this directly; the use case is too specialised.
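A concrete sketch with a hypothetical `greet` function, showing the list-comprehension form on `(arg, kwargs)` pairs:

```python
def greet(name, greeting='hi'):
    return '%s, %s' % (greeting, name)

arguments = [('alice', {}), ('bob', {'greeting': 'hello'})]

# Equivalent to what map() with the lambda would give you in Python 2.
results = [greet(a, **k) for a, k in arguments]
print(results)  # ['hi, alice', 'hello, bob']
```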
For the case of positional arguments only, you can use `itertools.starmap(fun, args)`: > Return an iterator whose values are returned from the function evaluated with a argument tuple taken from the given sequence. Example: ``` from itertools import starmap def f(i, arg): print(arg * (i+1)) for _ in starmap(f, enumerate(["a", "b", "c"])): pass ``` prints: ``` a bb ccc ```
Python `map` and arguments unpacking
[ "python", "map-function" ]
I'm having a fairly difficult time using `mock` in Python: ``` def method_under_test(): r = requests.post("http://localhost/post") print r.ok # prints "<MagicMock name='post().ok' id='11111111'>" if r.ok: return StartResult() else: raise Exception() class MethodUnderTestTest(TestCase): def test_method_under_test(self): with patch('requests.post') as patched_post: patched_post.return_value.ok = True result = method_under_test() self.assertEqual(type(result), StartResult, "Failed to return a StartResult.") ``` The test actually returns the right value, but `r.ok` is a Mock object, not `True`. How do you mock attributes in Python's `mock` library?
You need to use [`return_value`](https://docs.python.org/3/library/unittest.mock.html?highlight=return_value#unittest.mock.Mock.return_value) and [`PropertyMock`](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.PropertyMock): ``` with patch('requests.post') as patched_post: type(patched_post.return_value).ok = PropertyMock(return_value=True) ``` This means: when calling `requests.post`, on the return value of that call, set a `PropertyMock` for the property `ok` to return the value `True`.
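The mechanism can be seen without `requests` installed at all; here is a self-contained sketch using the standard-library `unittest.mock` (the standalone `mock` package has the same API), with a bare `MagicMock` standing in for the patched function:

```python
from unittest.mock import MagicMock, PropertyMock

post = MagicMock()  # stands in for the patched requests.post

# Attach the PropertyMock on the *type* of the return value:
# properties live on the class, not on the instance.
type(post.return_value).ok = PropertyMock(return_value=True)

r = post('http://localhost/post')
print(r.ok)  # True -- a real bool, not a child MagicMock
```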
A compact and simple way to do it is to use `new_callable` `patch`'s attribute to force `patch` to use `PropertyMock` instead of `MagicMock` to create the mock object. The other arguments passed to `patch` will be used to create `PropertyMock` object. ``` with patch('requests.post.ok', new_callable=PropertyMock, return_value=True) as mock_post: """Your test""" ```
Mock attributes in Python mock?
[ "python", "unit-testing", "testing", "mocking", "python-mock" ]
I have 2 tables in my database: `BusLines` & `BusStops`. Each instance of `BusLines` can have many stops associated with it, in a particular order. To make the associations easier to manage (deleting or adding new stops in an existing line), the associative table has the following structure: `id_BusLine | id_BusStop | id_NextBusStop | isFirstStop` It seemed like a good idea to do it this way instead of giving each stop a number, which would have to be changed for every record whenever new stops are added to the beginning of the line (but if you have a better idea, I would sure like to hear it). So, to reiterate: how could I create a SELECT statement that, for each line, returns all the stops in the right order? Because I can't solve it with just a simple ORDER BY...
**Solutions for SQL Server 2008-2012, PostgreSQL 9.1.9, Oracle 11g** Actually, recursive CTE is a solution for almost all current RDBMS, including PostgreSQL (explanations and example shown below). However there is another better solution (optimized) for Oracle DBs: hierarchical queries. NOCYCLE instructs Oracle to return rows even if your data has a loop in it. CONNECT\_BY\_ROOT gives you access to the root element, even several layers down in the query. Using the HR schema: **The corresponding code for Oracle 11g:** ``` select b.id_bus_line, b.id_bus_stop from BusLine_BusStop b start with b.is_first_stop = 1 connect by nocycle prior b.id_next_bus_stop = b.id_bus_stop and prior b.id_bus_line = b.id_bus_line ``` [DEMO for Oracle 11g](http://sqlfiddle.com/#!4/535ae/26) (code of my own). Please note that the standard is recursive CTE in the SQL:1999 norm. As you can see, there are several differences between SQL Server and PostgreSQL. **The following solution is for SQL Server 2012:** ``` ;WITH route AS ( SELECT BusLineId, BusStopId, NextBusStopId FROM BusLine_BusStop WHERE IsFirstStop = 1 UNION ALL SELECT b.BusLineId, b.BusStopId, b.NextBusStopId FROM BusLine_BusStop b INNER JOIN route r ON r.BusLineId = b.BusLineId AND r.NextBusStopId = b.BusStopId WHERE IsFirstStop = 0 or IsFirstStop is null ) SELECT BusLineId, BusStopId FROM route ORDER BY BusLineId ``` [DEMO for SQL Server 2012](http://sqlfiddle.com/#!6/83734/2) (inspired by T I). **And this one is for PostgreSQL 9.1.9 (it is not optimal but should work):** The trick consists in the creation of a dedicated temporary sequence for the current session that you can reset. 
``` create temp sequence rownum; WITH final_route AS ( WITH RECURSIVE route AS ( SELECT BusLineId, BusStopId, NextBusStopId FROM BusLine_BusStop WHERE IsFirstStop = 1 UNION ALL SELECT b.BusLineId, b.BusStopId, b.NextBusStopId FROM BusLine_BusStop b INNER JOIN route r ON r.BusLineId = b.BusLineId AND r.NextBusStopId = b.BusStopId WHERE IsFirstStop = 0 or IsFirstStop is null ) SELECT BusLineId, BusStopId, nextval('rownum') as rownum FROM route ) SELECT BusLineId, BusStopId FROM final_route ORDER BY BusLineId, rownum; ``` [DEMO for PostgreSQL 9.1.9](http://sqlfiddle.com/#!1/15d28/18) of my own. **EDIT:** Sorry for the multiple edits. It is quite uncommon to link records through the child record instead of through the parent. You can avoid this poor representation by dropping your isFirstStop column and connecting your records using an id\_PreviousBusStop column (if possible). In that case, you have to set id\_PreviousBusStop to null for the first record. You may save space (for fixed-length data, the entire space is still reserved). Moreover, your queries will then become more efficient, using fewer characters.
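The shape of the recursive CTE is easy to sanity-check in isolation with SQLite (which also supports `WITH RECURSIVE`). Note one simplification in this sketch: carrying an explicit `pos` counter through the recursion replaces the temporary-sequence trick entirely, and the table/column names follow the answer's demo schema, not the asker's original columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BusLine_BusStop "
            "(BusLineId, BusStopId, NextBusStopId, IsFirstStop)")
# One line whose stop order is 10 -> 20 -> 30, inserted deliberately out of order
con.executemany("INSERT INTO BusLine_BusStop VALUES (?, ?, ?, ?)",
                [(1, 20, 30, 0), (1, 10, 20, 1), (1, 30, None, 0)])

ordered = [row[0] for row in con.execute("""
    WITH RECURSIVE route AS (
        SELECT BusLineId, BusStopId, NextBusStopId, 1 AS pos
        FROM BusLine_BusStop WHERE IsFirstStop = 1
        UNION ALL
        SELECT b.BusLineId, b.BusStopId, b.NextBusStopId, r.pos + 1
        FROM BusLine_BusStop b
        JOIN route r ON r.BusLineId = b.BusLineId
                    AND r.NextBusStopId = b.BusStopId
    )
    SELECT BusStopId FROM route ORDER BY BusLineId, pos
""")]
print(ordered)  # [10, 20, 30]
```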
If using sql server then I think a recursive cte like this works ``` ;WITH route AS ( SELECT BusLineId, BusStopId, NextBusStopId FROM BusLine_BusStop WHERE IsFirstStop = 1 UNION ALL SELECT b.BusLineId, b.BusStopId, b.NextBusStopId FROM BusLine_BusStop b INNER JOIN route r ON r.BusLineId = b.BusLineId AND r.NextBusStopId = b.BusStopId WHERE IsFirstStop = 0 ) SELECT BusLineId, BusStopId FROM route ORDER BY BusLineId ``` [demo](http://sqlfiddle.com/#!3/83734/6)
Order SQL request when each row contains id of the next one
[ "", "sql", "postgresql", "request", "" ]
I have written some code in Python that checks for an MD5 hash in a file and makes sure the hash matches that of the original. Here is what I have developed: ``` # Defines filename filename = "file.exe" # Gets MD5 from file def getmd5(filename): return m.hexdigest() md5 = dict() for fname in filename: md5[fname] = getmd5(fname) # If statement for alerting the user whether the checksum passed or failed if md5 == '>md5 will go here<': print("MD5 Checksum passed. You may now close this window") input ("press enter") else: print("MD5 Checksum failed. Incorrect MD5 in file 'filename'. Please download a new copy") input("press enter") exit ``` But whenever I run the code, I get the following error: ``` Traceback (most recent call last): File "C:\Users\Username\md5check.py", line 13, in <module> md5[fname] = getmd5(fname) File "C:\Users\Username\md5check.py, line 9, in getmd5 return m.hexdigest() NameError: global name 'm' is not defined ``` Is there anything I am missing in my code?
In regards to your error and what's missing in your code. `m` is a name which is not defined for `getmd5()` function. No offence, I know you are a beginner, but your code is all over the place. Let's look at your issues one by one :) First, you are not using `hashlib.md5.hexdigest()` method correctly. Please refer explanation on hashlib functions in [Python Doc Library](http://docs.python.org/3.3/library/hashlib.html). The correct way to return MD5 for provided **string** is to do something like this: ``` >>> import hashlib >>> hashlib.md5("example string").hexdigest() '2a53375ff139d9837e93a38a279d63e5' ``` However, you have a bigger problem here. You are calculating MD5 on a **file name string**, where in reality MD5 is calculated based on file **contents**. You will need to basically read file contents and pipe it though MD5. My next example is not very efficient, but something like this: ``` >>> import hashlib >>> hashlib.md5(open('filename.exe','rb').read()).hexdigest() 'd41d8cd98f00b204e9800998ecf8427e' ``` As you can clearly see second MD5 hash is totally different from the first one. The reason for that is that we are pushing contents of the file through, not just file name. A simple solution could be something like that: ``` # Import hashlib library (md5 method is part of it) import hashlib # File to check file_name = 'filename.exe' # Correct original md5 goes here original_md5 = '5d41402abc4b2a76b9719d911017c592' # Open,close, read file and calculate MD5 on its contents with open(file_name, 'rb') as file_to_check: # read contents of the file data = file_to_check.read() # pipe contents of the file through md5_returned = hashlib.md5(data).hexdigest() # Finally compare original MD5 with freshly calculated if original_md5 == md5_returned: print "MD5 verified." else: print "MD5 verification failed!." 
``` Please look at the post **[Python: Generating a MD5 checksum of a file](https://stackoverflow.com/questions/3431825/python-generating-a-md5-checksum-of-a-file)**. It explains in detail a couple of ways how it can be achieved efficiently. Best of luck.
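For completeness, here is a Python 3 version of the same idea, verified against a throwaway temp file (reading the whole file at once, which is fine for small files; see the linked post for chunked reading of large ones):

```python
import hashlib
import os
import tempfile

def md5_of_file(path):
    # Hash the file's *bytes*, not its name
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Create a small file with known contents so the digest is predictable
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")

checksum = md5_of_file(tmp.name)
print(checksum)  # 5d41402abc4b2a76b9719d911017c592
os.remove(tmp.name)
```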
In Python [3.8+](https://docs.python.org/whatsnew/3.8.html#assignment-expressions) you can do ``` import hashlib with open("your_filename.png", "rb") as f: file_hash = hashlib.md5() while chunk := f.read(8192): file_hash.update(chunk) print(file_hash.digest()) print(file_hash.hexdigest()) # to get a printable str instead of bytes ``` On Python 3.7 and below: ``` with open("your_filename.png", "rb") as f: file_hash = hashlib.md5() chunk = f.read(8192) while chunk: file_hash.update(chunk) chunk = f.read(8192) print(file_hash.hexdigest()) ``` This reads the file 8192 (or 2¹³) bytes at a time instead of all at once with `f.read()` to use less memory. --- Consider using [`hashlib.blake2b`](https://docs.python.org/library/hashlib.html#hashlib.blake2b) instead of `md5` (just replace `md5` with `blake2b` in the above snippets). It's cryptographically secure and [faster](https://blake2.net/) than MD5.
How do I calculate the MD5 checksum of a file in Python?
[ "", "python", "md5", "md5sum", "" ]
Reading "Learn Python the Hard Way", I am playing with Exercise 10 ``` tabby_cat = "\tI'm tabbed in." persian_cat = "I'm split\non a line." backslash_cat = "I'm \\ a \\ cat." fat_cat = """ I'll do a list: \t* Cat food \t* Fishies \t* Catnip\n\t* Grass """ print tabby_cat print persian_cat print backslash_cat print fat_cat ``` This produces the following output: ``` I'm tabbed in. I'm split on a line. I'm \ a \ cat. I'll do a list: * Cat food * Fishies * Catnip * Grass ``` My question is: **Why there is a new line between the third and the fourth printed string? and how can I remove it?**
``` fat_cat = """ <-- here? I'll do a list: ``` That is, ``` fat_cat = """I'll do a list: ... ``` removes it.
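To see that the blank line really is part of the string itself (Python 3 syntax in this sketch):

```python
fat_cat = """
I'll do a list:
"""
# The blank line is literally the first character of the string
print(repr(fat_cat[0]))  # '\n'

no_blank = """I'll do a list:
"""
print(repr(no_blank[0]))  # 'I'
```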
`print` always puts a newline at the end of its output (by default); the reason you've got another new line between the 3rd and 4th strings is that the string contains an explicit `\n`. ``` print fat_cat, ``` The trailing comma above suppresses the implicit newline for the final output.
Where do this new line come from?
[ "", "python", "python-2.7", "" ]
[Using Python 3.x] I'm trying to create a CSV file with two columns, one containing fake e-mail addresses, the second column should contain a certain country code as specified in the respective function. I would like the country codes to be - at least - uniformly distributed when assigned to the e-mail addresses. But it would be great if there was also a way that they were not uniformly distributed. For instance, one country code would be assigned to 30% of the e-mail addresses, another one 10%, etc. My biggest struggle is with creating a dictionary where the key is the e-mail address and the value the country code, and thus zipping two lists that have unequal length and not having empty values (None). On a side note, I thought creating a dictionary would be the best way to do so, but I'm very new to programming and Python, and hence if you have a better solution please do share!! This is my code: ``` from random import choice, randint from string import ascii_lowercase from itertools import zip_longest import csv def gen_name(length): """Generates a random name with the given amount of characters.""" return ''.join(choice(ascii_lowercase) for i in range(length)) def email_gen(): """Generates a fake email address.""" user = gen_name(randint(5, 10)) host = gen_name(randint(5, 15)) return user + "@" + host + ".com" def gen_plain_email_list(n): """Generates a list of n amount of random e-mail addresses""" emaillist = [] for i in range(n): emaillist.append(email_gen()) return emaillist def gen_email_dict(n): """Generates a dictionary where the key is an e-mail address and the value a random country code.""" email_list = [] cc = ['us', 'gb', 'de', 'fr', 'it', 'nl', 'es', 'ae', 'br', 'au'] # Creating a list of n amount of e-mail addresses for i in range(n): email_list.append(email_gen()) # Creates dictionary with an e-mail address from email_list and # a random country code from the cc list email_dict = dict(zip_longest(email_list, cc, fillvalue=choice(cc))) return 
email_dict def dict_to_csv(filename, n): with open(filename, 'w', newline='') as f: w = csv.writer(f) w.writerows(gen_email_dict(n).items()) dict_to_csv('test.csv', 1000) ``` Thanks in advance for the help!
If you have percentages per country code, simply expand the countries list until you have enough elements, then shuffle the list: ``` import random cc = [('us', .2), ('gb', .2), ('de', .1), ('fr', .05), ('it', .05)] distribution = n / sum(dist for c, dist in cc) countries = [] for c, dist in cc: countries.extend([c] * int(round(distribution * dist))) # rounding errors may mean we have too few, add until we have enough while len(countries) < n: countries.append(random.choice(cc)[0]) random.shuffle(countries) ``` Now you can zip these with your email addresses, with the countries distributed evenly according to their weights.
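On Python 3.6+, `random.choices` can replace the expand-and-shuffle step in one call. One caveat: it samples the weights *statistically*, so the per-code counts are only approximately proportional, unlike the exact expansion above. The weights here are made-up example values:

```python
import random

# Hypothetical weights -- any mapping of code -> probability works
cc_weights = {'us': 0.30, 'gb': 0.10, 'de': 0.20, 'fr': 0.20, 'it': 0.20}

n = 1000
countries = random.choices(list(cc_weights),
                           weights=list(cc_weights.values()),
                           k=n)
print(len(countries))  # 1000, ready to zip with n e-mail addresses
```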
You are trying to **abuse** the `zip` function. In your case using a genexp or dict-comprehension is straightforward: ``` def gen_email_dict(n): return {email_gen(): choice(cc) for _ in range(n)} #return dict((email_gen(), choice(cc)) for _ in range(n)) # python2 ``` The `zip` function should be used only with sequences of equal length, while `zip_longest` allows unequal lengths, but the fill value is a single fixed value, not a function that can produce arbitrary values! If you really want to use `zip`, a way for doing this is to have an infinite country code generator: ``` cc = ['us', 'gb', 'de', 'fr', 'it', 'nl', 'es', 'ae', 'br', 'au'] def _countries(): while True: yield choice(cc) countries = _countries() def gen_email_dict(n): # using zip_longest you'll get an infinite loop!!! return dict(zip((email_gen() for _ in range(n)), countries)) # using itertools.islice you won't get an infinite loop. # but there is no reason to complicate things. #return dict(zip_longest((email_gen() for _ in range(n)), it.islice(countries, n))) ```
Zip two lists of unequal length to form a dictionary. Values to be randomly picked from one of the lists
[ "", "python", "python-3.x", "" ]
I have a Pandas data frame and one of the columns, have dates in string format of `YYYY-MM-DD`. For e.g. : `'2013-10-28'` At the moment the `dtype` of the column is `object`. How do I convert the column values to Pandas date format?
Use [astype](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html) ``` In [31]: df Out[31]: a time 0 1 2013-01-01 1 2 2013-01-02 2 3 2013-01-03 In [32]: df['time'] = df['time'].astype('datetime64[ns]') In [33]: df Out[33]: a time 0 1 2013-01-01 00:00:00 1 2 2013-01-02 00:00:00 2 3 2013-01-03 00:00:00 ```
Essentially equivalent to @waitingkuo, but I would use `pd.to_datetime` here (it seems a little cleaner, and offers some additional functionality e.g. `dayfirst`): ``` In [11]: df Out[11]: a time 0 1 2013-01-01 1 2 2013-01-02 2 3 2013-01-03 In [12]: pd.to_datetime(df['time']) Out[12]: 0 2013-01-01 00:00:00 1 2013-01-02 00:00:00 2 2013-01-03 00:00:00 Name: time, dtype: datetime64[ns] In [13]: df['time'] = pd.to_datetime(df['time']) In [14]: df Out[14]: a time 0 1 2013-01-01 00:00:00 1 2 2013-01-02 00:00:00 2 3 2013-01-03 00:00:00 ``` --- **Handling `ValueError`s** If you run into a situation where doing ``` df['time'] = pd.to_datetime(df['time']) ``` Throws a ``` ValueError: Unknown string format ``` That means you have invalid (non-coercible) values. If you are okay with having them converted to `pd.NaT`, you can add an `errors='coerce'` argument to `to_datetime`: ``` df['time'] = pd.to_datetime(df['time'], errors='coerce') ```
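For intuition, each string in such a column parses like a plain stdlib datetime — `pd.to_datetime` effectively does this vectorized over the whole Series. This sketch is stdlib-only and not a pandas substitute:

```python
from datetime import datetime

# What the conversion amounts to for a single 'YYYY-MM-DD' string
d = datetime.strptime("2013-10-28", "%Y-%m-%d")
print(d.year, d.month, d.day)  # 2013 10 28
```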
How to convert Pandas Series of dates string into date objects?
[ "", "python", "pandas", "date", "time-series", "" ]
i am running a flask project and i am looking for a way to create a directory ABOVE the path from which the current App is running. For example: ``` dirA --> dirBinA --> peter.py griffin.sh dirCinA --> index.py <--------- this is the flask app that's running tom.css dick.html harry.js dirDinA --> <--------- this directory doesn't exist yet anotherDir --> turtle.py ``` i want to create a new directory `anotherDir` inside a new directory `dirDinA` from the flask app that's running in `dirCinA/index.py` If I try with `os.mkdir("../dirDinA/anotherDir/")`, then flask says `OSError: [Errno 2] No such file or directory: '../dirDinA/anotherDir'`
In order to create a new 2-level-depth directory, you need to create it in ***TWO*** steps. For instance, if `../dirDinA` doesn't yet exist, then the following command fails. ``` os.mkdir("../dirDinA/anotherDir") ``` It produces the `OSError: No such file or directory`, misleadingly, showing you the ***FULL*** path that you are trying to create, instead of highlighting on the ***ACTUAL*** part whose non-existence is producing the error. However, the following 2 step method goes well without any error ``` os.mkdir("../dirDinA") os.mkdir("../dirDinA/anotherDir") ``` Directory `../dirDinA` needs to exist before `anotherDir` can be created inside it Thanks goes to the [answer by @JimPivarski](https://stackoverflow.com/a/16868483/636762).
You can use [os.makedirs](https://docs.python.org/3/library/os.html?highlight=makedir#os.makedirs) to create multiple directory levels in a single call: ``` os.makedirs("../dirDinA/anotherDir") ```
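A quick self-contained check, using a throwaway temp directory in place of `dirA` so it runs anywhere:

```python
import os
import tempfile

base = tempfile.mkdtemp()  # stand-in for dirA
target = os.path.join(base, "dirDinA", "anotherDir")

os.makedirs(target)           # creates both missing levels in one call
print(os.path.isdir(target))  # True

# Python 3.2+: don't raise if the directory already exists
os.makedirs(target, exist_ok=True)
```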
how to os.mkdir() above current root path in python
[ "", "python", "path", "flask", "mkdir", "" ]
I have a number of classes and corresponding feature vectors, and when I run predict\_proba() I will get this: ``` classes = ['one','two','three','one','three'] feature = [[0,1,1,0],[0,1,0,1],[1,1,0,0],[0,0,0,0],[0,1,1,1]] from sklearn.naive_bayes import BernoulliNB clf = BernoulliNB() clf.fit(feature,classes) clf.predict_proba([0,1,1,0]) >> array([[ 0.48247836, 0.40709111, 0.11043053]]) ``` I would like to know which probability corresponds to which class. On this page it says that they are ordered in arithmetical order; I'm not 100% sure of what that means: <http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.predict_proba> Does it mean that I have to go through my training examples and assign the corresponding index to the first encounter of a class, or is there a command like `clf.getClasses() = ['one','two','three']`?
Just use the `.classes_` attribute of the classifier to recover the mapping. In your example that gives: ``` >>> clf.classes_ array(['one', 'three', 'two'], dtype='|S5') ``` And thanks for putting a minimalistic reproduction script in your question, it makes answering really easy by just copy and pasting in a IPython shell :)
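Pairing the two arrays up is then just a `zip`. This sketch uses the question's numbers as stand-in values instead of a fitted classifier, so it runs without scikit-learn:

```python
# Stand-ins: what clf.classes_ and one row of clf.predict_proba(...) return
classes_ = ['one', 'three', 'two']
proba_row = [0.48247836, 0.40709111, 0.11043053]

by_class = dict(zip(classes_, proba_row))
print(by_class['three'])                # 0.40709111
print(max(by_class, key=by_class.get))  # 'one' -- the most likely class
```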
``` import pandas as pd test = [[0,1,1,0],[1,1,1,0]] pd.DataFrame(clf.predict_proba(test), columns=clf.classes_) Out[2]: one three two 0 0.542815 0.361876 0.095309 1 0.306431 0.612863 0.080706 ```
How to find the corresponding class in clf.predict_proba()
[ "", "python", "machine-learning", "scikit-learn", "" ]
I have a table with the points `X` and `Y`. I need to find the `X` and `Y` point closest to the origin `(0, 0)`. I am trying this way: ``` SELECT * FROM `line` WHERE xi < yi and 0 < xi and 0 < yi and yi < xi ORDER BY yi and xi ASC Limit 100 ``` But I am not getting the desired values.
The distance to the origin is given by `sqrt(xi^2 + yi^2)`. Since the square root is strictly increasing, you can omit it for the purpose of ordering. That gives: ``` SELECT * FROM `line` ORDER BY xi*xi + yi*yi Limit 100 ```
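The same ORDER BY expression can be checked quickly with SQLite and a few made-up points (backticks dropped, since they're MySQL-specific quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE line (xi REAL, yi REAL)")
con.executemany("INSERT INTO line VALUES (?, ?)",
                [(3, 4), (1, 1), (0, 2), (5, 0)])

closest = con.execute(
    "SELECT xi, yi FROM line ORDER BY xi*xi + yi*yi LIMIT 2").fetchall()
print(closest)  # [(1.0, 1.0), (0.0, 2.0)] -- squared distances 2 and 4
```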
You need to calculate the distance `d = sqrt(x²+y²)` to get the nearest point from the origin ``` select x, y, sqrt(x*x + y*y) as distance from `line` order by distance asc limit 1 ```
Closest two values from certain value
[ "", "mysql", "sql", "" ]
After some searching and trawling through the IPython [documentation](http://ipython.org/ipython-doc/rel-0.12/config/ipython.html) and some [code](https://github.com/ipython/ipython), I can't seem to figure out whether it's possible to store the command history (*not* the output log) to a **text file** rather than an SQLite database. `ipython --help-all` seems to indicate that this option doesn't exist. This would be very nice for version controlling frequently used commands like in [.bash\_history](https://github.com/l0b0/tilde/blob/master/.bash_history). **Edit**: [Working solution](https://github.com/l0b0/tilde/blob/992f2295866c531e0f8602b0ad652d73293f6b2f/.config/ipython/profile_default/startup/history.py) based on @minrk's answer.
You can emulate bash's behavior by adding this in one of your startup scripts (e.g. `$(ipython locate profile)/startup/log_history.py`: ``` import atexit import os ip = get_ipython() LIMIT = 1000 # limit the size of the history def save_history(): """save the IPython history to a plaintext file""" histfile = os.path.join(ip.profile_dir.location, "history.txt") print("Saving plaintext history to %s" % histfile) lines = [] # get previous lines # this is only necessary because we truncate the history, # otherwise we chould just open with mode='a' if os.path.exists(histfile): with open(histfile, 'r') as f: lines = f.readlines() # add any new lines from this session lines.extend(record[2] + '\n' for record in ip.history_manager.get_range()) with open(histfile, 'w') as f: # limit to LIMIT entries f.writelines(lines[-LIMIT:]) # do the save at exit atexit.register(save_history) ``` Note that this emulates the bash/readline history behavior in that it will fail on an interpreter crash, etc. [in a gist](https://gist.github.com/minrk/5686821) ## update: alternative If what you actually want is to just have a few manual favorite commands available to readline (completion, ^R search, etc.) that you can version control, this startup file will allow you to maintain that file yourself, which will be purely in addition to the actual command history of IPython: ``` import os ip = get_ipython() favfile = "readline_favorites" def load_readline_favorites(): """load profile_dir/readline_favorites into the readline history""" path = os.path.join(ip.profile_dir.location, favfile) if not os.path.exists(path): return with open(path) as f: for line in f: ip.readline.add_history(line.rstrip('\n')) if ip.has_readline: load_readline_favorites() ``` Drop this in your `profile_default/startup/` dir, and edit `profile_default/readline_favorites`, or anywhere you prefer to keep that file, and it will show up in readline completions, etc. on every IPython session.
You can export all of your history in IPython to a text file with the [**%history**](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-history) magic like this: ``` %history -g -f filename ``` One way of getting what you want might be to do that export in a [git hook](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks). I usually put these "sync an external resource" actions in the post-checkout git hook.
How to log IPython history to text file?
[ "", "python", "configuration", "ipython", "" ]
I have a Django application with a Publication model and a Tag model. Each publication has one or more Tags associated with it. I want to query the database with a set of two Tags and have returned only publications that have BOTH of those tags. I cannot seem to find the syntax for this although I am certain it is readily available - I suppose I am not using the correct language to search. What I have tried already is: ``` pubs_for_tags = Publication.objects.filter(tags__title__istartswith=q, tags__title__istartswith=q2) ``` But this gives me an error "keyword argument repeated". I've also tried some variations of this, but nothing has worked so far. Can someone enlighten me on the correct syntax for this?
``` pubs_for_tags = Publication.objects.filter(tags__title__istartswith=q).filter( tags__title__istartswith=q2) ``` or ``` pubs_for_tags = Publication.objects.filter(Q(tags__title__istartswith=q), Q( tags__title__istartswith=q2)) ```
I know this is old, but I just ran into the same problem and realized it points at an (as far as I know) undocumented aspect of using Django filters across one-to-many or many-to-many relations. Two conditions made within the same filter apply to the *same* related object. Two conditions made in separate filters can match two *separate* related objects. Another way to think of this is that each complete filter only looks at a single related object at a time, removing a result if all of its related objects fail that filter. Given this, it is extremely rare that you would want two conditions in the same filter using the same keyword. Consider the following query: ``` pubs_for_tags = Publication.objects.filter( tags__title__istartswith=q, tags__title__iendswith=q2 ) ``` vs ``` pubs_for_tags = Publication.objects.filter( tags__title__istartswith=q, ).filter( tags__title__iendswith=q2 ) ``` The first query finds publications that each have a single tag that *both* starts with q and ends with q2. When the keyword is the same (note I used two different keywords in my example), you also get the "keyword argument repeated" error. The second query finds publications that each have a tag that starts with q and have a tag that ends with q2, but it can be two different tags for each publication. From your post, it sounds like this is very close to what you need (just change the "iendswith" to "istartswith"). The only part that could break is if q and q2 are the same or one is a substring of the other. In that instance, a publication could have a single tag that would satisfy both conditions. Note that all this means using Q objects (which nnmware and Gaurav gave as a possible solution) will not give you the result you want. Having two Q objects in a single filter forces behaviour the same as the first example, but gets around the "keyword argument repeated" error. ``` pubs_for_tags = Publication.objects.filter( Q(tags__title__istartswith=q) & Q(tags__title__istartswith=q2) ) ```
And Condition Django Query Using Same Keyword
[ "", "sql", "django", "django-filter", "" ]
I am attempting to write a quick piece of code that will check to see if a directory exists, if it does it will print that it does, if it does request permission to overwrite, and if not prompt to make the directory. Here is what I have so far: ``` def mk_prjDir(): print "Checking to see if", prjDir, "exists.." dirList = os.listdir(prjDir) for d in dirList: if os.path.isdir(d): print (prjDir, "exists!") else: print (prjDir, "does not exist!") ``` But every time I run it, if the directory isn't there it just dumps with an error. What am I doing wrong? Update with traceback: ``` Checking to see if /directory exists.. Traceback (most recent call last): File "alpha.py", line 73, in <module> mk_prjDir() File "alpha.py", line 50, in mk_prjDir dirList = os.listdir(prjDir) OSError: [Errno 2] No such file or directory: '/directory' ``` Update #2: ospath was a typo, but the error remains. I want to try and catch that exception. Should I be using a while == True: and a try/except? If so, what does that look like?
Firstly, `prjDir` must be a full path, e.g. "/home/my_direc/", otherwise `os.listdir` will try to find `prjDir` in the current working directory of the script. Secondly, `os.listdir` returns only file/directory names, not the full path, so to get the full path you can use `os.path.join(prjDir, d)`. ``` import os def mk_prjDir(): print "Checking to see if", prjDir, "exists.." dirList = os.listdir(prjDir) for d in dirList: if os.path.isdir(os.path.join(prjDir,d)): print (prjDir, "exists!") else: print (prjDir, "does not exist!") mk_prjDir() ``` Update: I think I misunderstood your question. If you only want to check whether a directory exists or not, then try this: ``` def mk_prjDir(): print "Checking to see if", prjDir, "exists.." if os.path.isdir(prjDir): print (prjDir, "exists!") else: print (prjDir, "does not exist!") ```
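Checking with `os.path.isdir` *before* calling `os.listdir` also answers the try/except question: no exception handling is needed, because the path is tested first. A runnable Python 3 sketch against a throwaway temp directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
prjDir = os.path.join(base, "project")  # does not exist yet

if os.path.isdir(prjDir):
    print(prjDir, "exists!")
else:
    print(prjDir, "does not exist!")
    os.mkdir(prjDir)  # create it instead of crashing in os.listdir

print(os.path.isdir(prjDir))  # True
```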
`os.listdir(dir)` returns you a list of files found in the directory `dir`. As the error says, this directory does not exists, so, obviously, `listdir` can't list its contents. Why are you doing this loop? Just use `os.path.isdir` to test if your directory exists.
Python: If statement with exception using os.path.isdir?
[ "", "python", "scripting", "os.path", "" ]
So I've got this to find out if a process is running or not: ``` os.system("ps aux | grep [my]process") ``` I use the square brackets so I don't get back the grep command too. Although when I try to do something like ``` 'yes' if os.system("ps aux | grep [my]process") else 'no' ``` I always get no, even if in fact Python prints the line with the info of the process. Pretty sure that there must be some sort of misunderstanding from my side... I assume that if the output of os.system is not zero, the expression resolves to true, so I should get 'yes'. But this does not happen at all, I get no, even if the process is there running, and the command correctly returns the info about the process. What am I doing wrong here? Thanks!
Use `subprocess.call()` or `subprocess.check_call()`
You have the logic the wrong way round: `grep` returns `0` when there are some matching lines. Using the `subprocess` module is a better idea anyway. You can get the output of `ps aux` and examine it in your program. Although parsing the output of ps is always going to be fairly fragile, e.g.: ``` import subprocess 'yes' if 'myprocess' in subprocess.check_output(['ps','aux']) else 'no' ```
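The membership test itself behaves as expected once you look at the real output. Here is the same pattern with a portable stand-in command in place of `ps` (which isn't available on every platform this might run on); note `check_output` returns bytes on Python 3, hence the `b'...'` needle:

```python
import subprocess
import sys

# Portable stand-in for `ps aux`: run a child process that prints known text
output = subprocess.check_output(
    [sys.executable, "-c", "print('myprocess 1234 running')"])

print('yes' if b'myprocess' in output else 'no')  # yes
```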
python: output of os.system is not considered for a true/false check?
[ "", "python", "" ]
I am getting 2 result sets from the queries below, and I want to combine the results into 1 single table, but I am not sure how to proceed from here. Can someone please kindly provide some guidance: ``` SELECT userid, vehicleId, count(vehicleId) As SearchCount FROM MemberSearches GROUP BY userid, vehicleId ORDER BY count(vehicleId) DESC SELECT f.UserId, v.AutoId AS VehicleId, count(v.AutoId) AS SearchCount FROM Favorites f LEFT JOIN [SellPost] sp ON (f.PostId = sp.AutoId) LEFT JOIN [Vehicle] v ON (sp.CarId = v.AutoId) GROUP BY f.UserId, v.AutoId ORDER BY COUNT(v.AutoId) DESC ``` **Result from the first select:** ``` UserId VehicleId SearchCount 2926FC8A78FB 7 3 2926FC8A78FB 2 2 2926FC8A78FB 6 1 ``` **Result from the second select:** ``` UserId VehicleId SearchCount 2926FC8A78FB 1 5 2926FC8A78FB 2 5 ``` **I need to achieve the final result as:** ``` UserId VehicleId SearchCount 2926FC8A78FB 1 5 2926FC8A78FB 2 7 2926FC8A78FB 6 1 2926FC8A78FB 7 3 ```
Not efficient, but just tested it and it's working... ``` SELECT userid, vehicleId, SUM(SearchCount) As SearchCount FROM (SELECT userid, vehicleId, count(vehicleId) As SearchCount FROM MemberSearches GROUP BY userid, vehicleId UNION ALL SELECT f.UserId, v.AutoId AS VehicleId, count(v.AutoId) AS SearchCount FROM Favorites f LEFT JOIN [SellPost] sp ON (f.PostId = sp.AutoId) LEFT JOIN [Vehicle] v ON (sp.CarId = v.AutoId) GROUP BY f.UserId, v.AutoId ) t GROUP BY userid, vehicleId ```
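The core idea — `UNION ALL` the two result sets and `SUM` per (user, vehicle) — can be checked in isolation with literal rows. This SQLite sketch uses made-up values mirroring the question's overlap on vehicle 2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
combined = con.execute("""
    SELECT userid, vehicleId, SUM(SearchCount) AS SearchCount
    FROM (
        SELECT 'u1' AS userid, 2 AS vehicleId, 2 AS SearchCount
        UNION ALL
        SELECT 'u1', 2, 5
        UNION ALL
        SELECT 'u1', 7, 3
    ) t
    GROUP BY userid, vehicleId
    ORDER BY vehicleId
""").fetchall()
print(combined)  # [('u1', 2, 7), ('u1', 7, 3)] -- the overlapping counts are summed
```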
``` SELECT userid, vehicleId, count(*) As SearchCount FROM ( SELECT userid, vehicleId FROM MemberSearches UNION ALL SELECT f.UserId, v.AutoId FROM Favorites f LEFT JOIN [SellPost] sp ON (f.PostId = sp.AutoId) LEFT JOIN [Vehicle] v ON (sp.CarId = v.AutoId) ) AS X GROUP BY userid, vehicleId ORDER BY 3 DESC ```
SQL Server + 'Merging results'
[ "", "sql", "sql-server", "" ]
I am doing a report whereby I can't construct the SQL programmatically. I have two values that are fed into the report. The values can be one of the following three options: 1. redeem 2. purchase 3. redeem AND purchase The query needs to have a WHERE clause. If "redeem" is fed in, it must have: ``` ... WHERE balance < 0 ``` If "purchase" is fed in, it must have: ``` ... WHERE balance >= 0 ``` If both are fed in, this condition can be left out completely, or it can be said: ``` ... WHERE balance >= 0 OR balance < 0 --> but this is redundant ``` Is there a way to apply this kind of logic in SQL? Is something like this possible in SQL: ``` SELECT * FROM account WHERE (if param1 = 'redeem' then 'balance <= 0) ... etc ``` ?
Yep. You're almost there. ``` WHERE (param='redeem' and balance <=0) or (param='purchase' and balance>=0) or (param='redeem AND purchase ') ```
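Here is that WHERE pattern bound to a real parameter, as a SQLite sketch with made-up rows. Note the parameter has to be supplied once per placeholder (named parameters would avoid the repetition), and this sketch uses `balance < 0` for redeem per the question's wording:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER, balance REAL)")
con.executemany("INSERT INTO account VALUES (?, ?)", [(1, -10), (2, 50)])

sql = """SELECT id FROM account
         WHERE (? = 'redeem' AND balance < 0)
            OR (? = 'purchase' AND balance >= 0)
            OR (? = 'redeem AND purchase')"""

param = 'redeem'
rows = con.execute(sql, (param, param, param)).fetchall()
print(rows)  # [(1,)] -- only the negative balance matches
```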
Use `CASE` statements in your `WHERE` clause. You can find examples [here](http://www.devx.com/tips/Tip/15633). I tried the following code in MySQL and it works: ``` SET @param = 'purchase'; SELECT * FROM TEST.ACCOUNT WHERE CASE WHEN @param = 'redeem' THEN BALANCE < 0 WHEN @param = 'purchase' THEN BALANCE >= 0 ELSE TRUE END; ```
Conditional where in SQL based on Value?
[ "", "mysql", "sql", "" ]
I've got functions, which sometimes return NaNs with `float('nan')` (I'm not using numpy). How do I write a test for it, since ``` assertEqual(nan_value, float('nan')) ``` is just like `float('nan') == float('nan')` always false. Is there maybe something like `assertIsNan`? I could not find anything about it…
I came up with ``` assertTrue(math.isnan(nan_value)) ```
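A quick illustration of why the equality-based assertion can never pass, and why `math.isnan` is the right tool:

```python
import math

nan_value = float('nan')
print(nan_value == float('nan'))  # False -- NaN never compares equal, even to itself
print(math.isnan(nan_value))      # True
```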
`math.isnan(x)` will raise a `TypeError` if `x` is neither a `float` nor a `Real`. It's better to use something like this : ``` import math class NumericAssertions: """ This class is following the UnitTest naming conventions. It is meant to be used along with unittest.TestCase like so : class MyTest(unittest.TestCase, NumericAssertions): ... It needs python >= 2.6 """ def assertIsNaN(self, value, msg=None): """ Fail if provided value is not NaN """ standardMsg = "%s is not NaN" % str(value) try: if not math.isnan(value): self.fail(self._formatMessage(msg, standardMsg)) except: self.fail(self._formatMessage(msg, standardMsg)) def assertIsNotNaN(self, value, msg=None): """ Fail if provided value is NaN """ standardMsg = "Provided value is NaN" try: if math.isnan(value): self.fail(self._formatMessage(msg, standardMsg)) except: pass ``` You can then use `self.assertIsNaN()` and `self.assertIsNotNaN()`.
How to check if value is nan in unittest?
[ "", "python", "unit-testing", "assert", "nan", "" ]
I'm using Oracle 11 and I need to display the "orders" from last month, where the date of each order has already been set. The table is: name: Store order ``` OrderNo OrderDate ------- ---------- ST1 03-MAY-12 ST2 03-APR-13 ST3 15-APR-13 ``` So ideally, what it should return is both ST2 and ST3, as they're both April 2013. However, now we're 3 days into June and there are no values entered for May 2013, so let's just ignore that. How would I write a query that gets the "last month" from the user-entered dates?
using @Grisha's answer ``` select * from orders where to_char(OrderDate,'YYYYMM') = to_char(add_months(sysdate,-1),'YYYYMM') ``` (comparing year and month together, so that April of one year is not mistaken for April of another)
This should be able to use indexes well since it does no per row calculations on the rows in orders; ``` WITH cte AS ( SELECT MAX(OrderDate) orderdate FROM orders ) SELECT * FROM orders o, cte WHERE o.OrderDate >= TRUNC(cte.orderdate,'MM') AND o.OrderDate < LAST_DAY(cte.orderdate)+1 ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!4/77743/11)
SQL pull values from "last month" when dates have already been declared
[ "", "sql", "oracle", "" ]
I want to write a function that takes a file as a string, and returns True if the file has duplicate words and False otherwise. So far I have: ``` def double(filename): infile = open(filename, 'r') res = False l = infile.read().split() infile.close() for line in l: #if line is in l twice res = True return res ``` If my file contains: "there is is a same word" I should get True. If my file contains: "there is not a same word" I should get False. How do I determine if there is a duplicate of a word in the string? P.S. The duplicate word does not have to come right after the other, i.e. in "there is a same word in the sentence over there" it should return True because "there" is also a duplicate.
Use a set to detect duplicates: ``` def double(filename): seen = set() with open(filename, 'r') as infile: for line in infile: for word in line.split(): if word in seen: return True seen.add(word) return False ``` You could shorten that to: ``` def double(filename): seen = set() with open(filename, 'r') as infile: return any(word in seen or seen.add(word) for line in infile for word in line.split()) ``` Both versions exit early; as soon as a duplicate word is found, the function returns `True`; it does have to read the whole file to determine there are no duplicates and return `False`.
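The same early-exit idea can be exercised on an in-memory string, which makes it easy to test without touching the filesystem (the helper name here is illustrative):

```python
def has_duplicate_word(text):
    """Return True as soon as any word appears twice in `text`."""
    seen = set()
    for word in text.split():
        if word in seen:
            return True
        seen.add(word)
    return False

print(has_duplicate_word("there is is a same word"))   # True
print(has_duplicate_word("there is not a same word"))  # False
```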
The [*str.split()*](http://docs.python.org/2.7/library/stdtypes.html#str.split) method doesn't work well for splitting words in natural English text because of apostrophes and punctuation. You usually need the power of [regular expressions](http://docs.python.org/2.7/library/re.html) for this: ``` >>> text = """I ain't gonna say ain't, because it isn't in the dictionary. But my dictionary has it anyways.""" >>> text.lower().split() ['i', "ain't", 'gonna', 'say', "ain't,", 'because', 'it', "isn't", 'in', 'the', 'dictionary.', 'but', 'my', 'dictionary', 'has', 'it', 'anyways.'] >>> re.findall(r"[a-z']+", text.lower()) ['i', "ain't", 'gonna', 'say', "ain't", 'because', 'it', "isn't", 'in', 'the', 'dictionary', 'but', 'my', 'dictionary', 'has', 'it', 'anyways'] ``` To find whether there are any duplicate words, you can use [set operations](http://docs.python.org/2.7/library/stdtypes.html#set-types-set-frozenset): ``` >>> len(words) != len(set(words)) True ``` To list out the duplicate words, use the multiset operations in [*collections.Counter*](http://docs.python.org/2.7/library/collections.html#collections.Counter): ``` >>> sorted(Counter(words) - Counter(set(words))) ["ain't", 'dictionary', 'it'] ```
How to determine if word in string is a double word?
[ "", "python", "python-3.x", "" ]
I am learning bash scripting and Python for my project, and I want to kill the terminal after it finishes executing. I run my process in one terminal; when I execute it, it opens a new terminal and does the requested operation, but after completion the new terminal does not close. My code is: ``` import os os.system("xfce4-terminal -e 'bash -c \"./testing.sh; exec bash\"'") ``` When I run this, it opens a new terminal and executes in it, and after execution the prompt is ``` ubuntu:-/home/new$ ``` and I'm not able to send any commands there. After completion I want to kill the terminal, because for the next execution I want to open a new terminal. I tried this and it works well, but I don't know how to add it to the Python script: ``` a=`tty` echo $a b=`tty | sed 's/\/dev//'` echo $b c=`ps aux | grep $a | grep 'bash$' | awk '{print $2}'` echo $c d=`echo $c | awk '{print $1}'` kill -9 $d ``` Looking for suggestions; please guide me to a solution. Thanks in advance.
A terminal closes when its last child process exits. This will do what I think you want: ``` os.system("xfce4-terminal -e 'bash -c \"echo hello ; /bin/bash ; exit 0\"'") ``` which I tested. **Added in response to comment** There is a shell script in Ubuntu `/usr/bin/usb-devices` which prints to stdout and then exits. If I run the Python ``` os.system("xfce4-terminal -e /usr/bin/usb-devices") ``` then a terminal appears, shows the output of the script and then the terminal exits. It really can't be much simpler than that.
Let's back up and look carefully at the one main line of code you posted: ``` os.system(""xfce4-terminal -e 'bash -c \"./testing.sh; exec bash\"'") ``` What does it do? * It uses Python to invoke a shell (`os.system`), which is usually not a great idea for a host of reasons. * Inside that shell, it launches xfce4-terminal. * Inside xfce4-terminal, it runs bash. Probably xfce4-terminal runs this bash inside another bash. * Inside this new bash, it runs testing.sh. * Finally, for reasons unclear to me, it runs another bash. What does the process hierarchy look like? ``` shell (whichever you use to launch this thing, likely bash) python shell (the system default, possibly bash) xfce4-terminal bash (replaced by another bash at the end, "exec bash") bash testing.sh ``` Do you see how crazy this looks? Think about what you're actually trying to accomplish. It could probably look more like this: ``` shell python testing.sh ``` How to do that? ``` import subprocess subprocess.check_call(['./testing.sh']) ``` If you really need to launch an xfce4-terminal: ``` subprocess.check_call(['xfce4-terminal', '-e', './testing.sh']) ```
How to kill a terminal after completion of process
[ "", "python", "bash", "shell", "ubuntu", "" ]
[**get()**](https://docs.python.org/3/library/queue.html#queue.Queue.get) removes and returns an item from **Queue** in Python. ``` import queue q = queue.Queue() # Here q.put("Apple") q.put("Orange") q.put("Banana") print(q.get()) print(q.get()) print(q.get()) ``` Output: ``` Apple Orange Banana ``` Now, I want to get the items in **Queue** without removing the items. Is it possible to do this?
> The `Queue` module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. As you can see, the `Queue` module was created specifically for use with threads, providing only *FIFO*, *LIFO* and *priority queues*, none of which provide this functionality. However by examining the [source code](http://hg.python.org/cpython/file/2.7/Lib/Queue.py) of the `Queue` module, you can see that it simply uses a [`collections.deque`](http://docs.python.org/2/library/collections.html#collections.deque) (double ended queue) which can easily accomplish your task. You may index the first item (`[0]`) and `.popleft()` in constant time.
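A short demonstration of peeking at the underlying deque without consuming items (this relies on the internal `queue` attribute, so treat it as implementation-dependent):

```python
import queue  # the module is named Queue in Python 2

q = queue.Queue()
for item in ("Apple", "Orange", "Banana"):
    q.put(item)

# q.queue is the underlying collections.deque; indexing does not consume items.
first = q.queue[0]
print(first)      # Apple
print(q.qsize())  # 3 -- nothing was removed
```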
`queue_object.queue` returns the underlying deque object of your queue, which you can then index or slice. It is, of course, the queue's live internal storage rather than a synchronized snapshot, so in a threaded program anything you read from it may be stale by the time you use it. There's a good rationale for why you wouldn't want to do this explained in detail in this thread [comp.lang.python - Queue peek?](https://groups.google.com/forum/?fromgroups#!topic/comp.lang.python/ujy2LIHUy3o). But if you're just trying to understand how Queue works, this is one simple way. ``` import Queue q = Queue.Queue() q.put('foo') q.put('bar') d = q.queue print(d) deque(['foo', 'bar']) print(d[0]) 'foo' ```
How to get the items in Queue without removing the items?
[ "", "python", "data-structures", "queue", "" ]
Two's complement is when you inverse bits then add a binary 1 digit. So for example... ``` 0011001 apply two's complement 1. inverse the bits, 1100110 2. add a binary digit, 1100110 + 1 = 1100111 ``` Another example to show overflow situation... ``` 1001100 apply two's complement 1. inverse the bits, 0110011 2. add a binary digit, 0110011 + 1 = 0110100 ``` What would be the best way to implement this in python. So far I have this code, but I would like it to be more efficient because I'm using this method too much. ``` def toTwosComplement(binarySequence): convertedSequence = [0] * len(binarySequence) carryBit = 1 # INVERT THE BITS for i in range(0, len(binarySequence)): if binarySequence[i] == '0': convertedSequence[i] = 1 else: convertedSequence[i] = 0 # ADD BINARY DIGIT 1 if convertedSequence[-1] == 0: #if last digit is 0, just add the 1 then there's no carry bit so return convertedSequence[-1] = 1 return ''.join(str(x) for x in convertedSequence) for bit in range(0, len(binarySequence)): if carryBit == 0: break index = len(binarySequence) - bit - 1 if convertedSequence[index] == 1: convertedSequence[index] = 0 carryBit = 1 else: convertedSequence[index] = 1 carryBit = 0 return ''.join(str(x) for x in convertedSequence) if __name__ == '__main__': print toTwosComplement('00110010101101001') ``` My question is, can I optimise this algorithm because at the moment it is running too slow for the amount of binary code I have to run it through.
``` x = int(a, 2) # a is the binary string num_bits = len(a) print x - (1 << num_bits) ``` I think this should solve the problem
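An alternative sketch (my own illustration, not part of the answer above) that performs the invert-and-add-one operation directly and returns the bit string at the same width, with overflow discarded by masking:

```python
def twos_complement(bits):
    """Two's complement of a binary string, returned at the same bit width."""
    width = len(bits)
    value = int(bits, 2)
    # Invert-and-add-one; the mask keeps only `width` bits, discarding overflow.
    negated = (~value + 1) & ((1 << width) - 1)
    return format(negated, '0{}b'.format(width))

print(twos_complement('0011001'))  # 1100111
print(twos_complement('1001100'))  # 0110100
```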
Try this (negating with `~x + 1` and masking to the original width so the result stays within 8 bits): ``` x = 0b11001100 complement = (~x + 1) & 0b11111111 print bin(complement) ```
Python - Most effective way to implement two's complement?
[ "", "python", "performance", "twos-complement", "" ]
I have a table (n2m) that contains authors' ids and their articles' ids. I'm counting the number of authors who have just one article in the table. I'm using the following query: ``` select count(*) from authors_articles where AutID in ( select AutID from authors_articles group by AutID having count(distinct articleID) = 1 ) ``` Now, I'm wondering if my query is correct and, in any case, whether I can improve this query to make it faster. Many thanks,
Your query can be simplified to: ``` select count(*) from ( select AutID from authors_articles group by AutID having count(distinct articleID) = 1 ) x ```
I think it can be even simpler: ``` SELECT count(*) num_articles FROM authors_articles GROUP BY AutID HAVING num_articles = 1 ```
Counting authors with just one article
[ "", "mysql", "sql", "" ]
I am trying to do the following, in a clear pythonic way: ``` def remove_prefix(str, prefix): return str.lstrip(prefix) print(remove_prefix('template.extensions', 'template.')) ``` This gives: ``` xtensions ``` Which is not what I was expecting (`extensions`). Obviously (stupid me), because I have used [lstrip](http://docs.python.org/2/library/string.html#string.lstrip) wrongly: lstrip will remove all characters which appear in the passed `chars` string, not considering that string as a real string, but as "a set of characters to remove from the beginning of the string". Is there a standard way to remove a substring from the beginning of a string?
For Python 3.9+: ``` text.removeprefix(prefix) ``` For older versions, the following provides the same behavior: ``` def remove_prefix(text, prefix): if text.startswith(prefix): return text[len(prefix):] return text ```
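A quick demonstration of the difference between `lstrip` and a true prefix removal (illustrative):

```python
def remove_prefix(text, prefix):
    if text.startswith(prefix):
        return text[len(prefix):]
    return text

# lstrip treats its argument as a *set of characters*, not a literal prefix:
print('template.extensions'.lstrip('template.'))          # xtensions
print(remove_prefix('template.extensions', 'template.'))  # extensions
print(remove_prefix('extensions', 'template.'))           # extensions (unchanged)
```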
Short and sweet: ``` def remove_prefix(text, prefix): return text[text.startswith(prefix) and len(prefix):] ```
Remove a prefix from a string
[ "", "python", "" ]
I'm fairly new to Python, and think this should be a fairly common problem, but can't find a solution. I've already looked at [this page](https://stackoverflow.com/questions/3459098/create-list-of-single-item-repeated-n-times-in-python) and found it helpful for one item, but I'm struggling to extend the example to multiple items without using a 'for' loop. I'm running this bit of code for 250 walkers through Emcee, so I'm looking for the fastest way possible. I have a list of numbers, `a = [x,y,z]` that I want to repeat `b = [1,2,3]` times (for example), so I end up with a list of lists: ``` [ [x], [y,y], [z,z,z] ] ``` The 'for' loop I have is: ``` c = [ ] for i in range (0,len(a)): c.append([a[i]]*b[i]) ``` Which does exactly what I want, but means my code is excruciatingly slow. I've also tried naively turning a and b into arrays and doing `[a]*b` in the hopes that it would multiply element by element, but no joy.
You can use `zip` and a list comprehension here: ``` >>> a = ['x','y','z'] >>> b = [1,2,3] >>> [[x]*y for x,y in zip(a,b)] [['x'], ['y', 'y'], ['z', 'z', 'z']] ``` or: ``` >>> [[x for _ in xrange(y)] for x,y in zip(a,b)] [['x'], ['y', 'y'], ['z', 'z', 'z']] ``` `zip` will create the whole list in memory first, to get an iterator use `itertools.izip` In case `a` contains mutable objects like lists or lists of lists, then you may have to use `copy.deepcopy` here because modifying one copy will change other copies as well.: ``` >>> from copy import deepcopy as dc >>> a = [[1 ,4],[2, 5],[3, 6, 9]] >>> f = [[dc(x) for _ in xrange(y)] for x,y in zip(a,b)] #now all objects are unique >>> [[id(z) for z in x] for x in f] [[172880236], [172880268, 172880364], [172880332, 172880492, 172880428]] ``` `timeit` comparisons(ignoring imports): ``` >>> a = ['x','y','z']*10**4 >>> b = [100,200,300]*10**4 >>> %timeit [[x]*y for x,y in zip(a,b)] 1 loops, best of 3: 104 ms per loop >>> %timeit [[x]*y for x,y in izip(a,b)] 1 loops, best of 3: 98.8 ms per loop >>> %timeit map(lambda v: [v[0]]*v[1], zip(a,b)) 1 loops, best of 3: 114 ms per loop >>> %timeit map(list, map(repeat, a, b)) 1 loops, best of 3: 192 ms per loop >>> %timeit map(list, imap(repeat, a, b)) 1 loops, best of 3: 211 ms per loop >>> %timeit map(mul, [[x] for x in a], b) 1 loops, best of 3: 107 ms per loop >>> %timeit [[x for _ in xrange(y)] for x,y in zip(a,b)] 1 loops, best of 3: 645 ms per loop >>> %timeit [[x for _ in xrange(y)] for x,y in izip(a,b)] 1 loops, best of 3: 680 ms per loop ```
The fastest way to do it is with [*map()*](http://docs.python.org/2.7/library/functions.html#map) and [*operator.mul()*](http://docs.python.org/2.7/library/operator.html#operator.mul): ``` >>> from operator import mul >>> map(mul, [['x'], ['y'], ['z']], [1, 2, 3]) [['x'], ['y', 'y'], ['z', 'z', 'z']] ```
Creating list of individual list items multiplied n times
[ "", "python", "arrays", "list", "loops", "emcee", "" ]
``` table : metrics ``` columns: ``` 1. name : Name 2. instance: A name can have several instances (Name: John, Instances: John at work, John at concert) 3. metric: IQ, KQ, EQ 4. metric_value: Any numeric ``` Objective of the query Find out the metrics whose `metric_value` is 0 for all instances for all names. Nature of data A name's metric '`M`' for instance '`X`' could be 10. But for the same name and the same metric instance '`Y`' could be `0`. In this case, '`M`' should NOT be returned. Edit: Sample data: ``` NAME INSTANCE METRIC VALUE John At work IQ 0 John At home EQ 10 John At a concert KQ 0 Jim At work IQ 0 Jim At home KQ 0 Tina At home IQ 100 Tina At work EQ 0 Tina At work KQ 0 ``` In this case, only KQ should be returned since it is always zero for all Names and their instances.
Are you looking for something like this? ``` SELECT metric FROM metrics GROUP BY metric HAVING SUM(metric_value) = 0 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/f0e3a5/1)** demo **UPDATE** If metric\_value can have negative values then use this one ``` SELECT metric FROM metrics GROUP BY metric HAVING SUM(ABS(metric_value)) = 0 ``` Here is updated **[SQLFiddle](http://sqlfiddle.com/#!2/0af51/4)** demo
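To sanity-check the query against the sample data, here is a quick reproduction with Python's built-in `sqlite3` (an illustration only; the original question is generic SQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE metrics (name TEXT, instance TEXT, metric TEXT, metric_value REAL)")
rows = [
    ('John', 'At work',      'IQ',   0), ('John', 'At home', 'EQ',  10),
    ('John', 'At a concert', 'KQ',   0), ('Jim',  'At work', 'IQ',   0),
    ('Jim',  'At home',      'KQ',   0), ('Tina', 'At home', 'IQ', 100),
    ('Tina', 'At work',      'EQ',   0), ('Tina', 'At work', 'KQ',   0),
]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", rows)

# SUM(ABS(...)) is zero only when every value for the metric is exactly 0.
query = "SELECT metric FROM metrics GROUP BY metric HAVING SUM(ABS(metric_value)) = 0"
print([r[0] for r in conn.execute(query)])  # ['KQ']
```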
Even though this looks suspiciously like homework.... see if this gives you what you're after: ``` SELECT DISTINCT M1.metric FROM metrics M1 WHERE NOT EXISTS ( SELECT * FROM metrics M2 WHERE M2.metric_value <> 0 AND M1.metric = M2.metric ) ```
SQL Help: Complex query
[ "", "sql", "" ]
I'm getting this error on a SQL user defined function: > An expression of non-boolean type specified in a context where a > condition is expected, near ')'. For this: ``` UPDATE LMI_Contact SET Phone = NULL WHERE dbo.LMI_IsSingleCharacterRepeated(Phone, '0') ``` where the function can be created using: ``` -- ***this will also find NULL and empty string values*** CREATE FUNCTION LMI_IsSingleCharacterRepeated (@string varchar(max), @char char(1)) RETURNS bit AS BEGIN DECLARE @index int DECLARE @len int DECLARE @currentChar char(1) SET @index = 1 SET @len= LEN(@string) WHILE @index <= @len BEGIN SET @currentChar = SUBSTRING(@string, @index, 1) IF @currentChar = @char SET @index= @index+ 1 ELSE RETURN 0 END RETURN 1 END; GO ``` This function is for checking if a string is any specified single character, repeated.
You must use comparison operators against functions even if the return type is `bit`. ``` UPDATE LMI_Contact SET Phone = NULL WHERE dbo.LMI_IsSingleCharacterRepeated(Phone, '0') = 1 ```
Try This ``` CREATE FUNCTION LMI_IsSingleCharacterRepeated (@str varchar(max), @char char(1)) RETURNS BIT AS BEGIN DECLARE @indx int DECLARE @len int DECLARE @currentChar char(1) SET @indx = 1 SET @len= LEN(@str) WHILE @indx <= @len BEGIN SET @currentChar = SUBSTRING(@str, @indx, 1) IF @currentChar = @char SET @indx= @indx+ 1 ELSE RETURN 0 END RETURN 1 END; GO ```
SQL User Defined Function Generates a Non-Boolean Type Error
[ "", "sql", "function", "boolean", "where-clause", "" ]
I have a character (eg. "a") and I need to check a string (eg. "aaaabcd") for the number of occurrences of "a" in a row (processing stops at "b" in this case and the returned value is 4). I have something like this: ``` def count_char(str_, ch_): count = 0 for c in str_: if c == ch_: count += 1 else: return count return count # reached when the whole string consists of ch_ ``` So I was thinking... Is there a better/more pythonic/simpler way to do this?
The `re.match` function will start looking at the beginning of the string: ``` import re m = re.match(re.escape(ch_) + '+', str_) return m.end() if m else 0 ``` If you want the longest run of the character anywhere in the string: ``` max([0] + [len(x) for x in re.findall(re.escape(ch_) + '+', str_)]) ```
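Put together as a complete function (a sketch of the approach above, using `re.escape` so regex-special characters are handled; the function name is my own):

```python
import re

def leading_run(str_, ch_):
    """Count how many times ch_ repeats at the very start of str_."""
    m = re.match(re.escape(ch_) + '+', str_)
    return m.end() if m else 0

print(leading_run('aaaabcd', 'a'))  # 4
print(leading_run('aaaabcd', 'b'))  # 0
```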
One option using [`itertools.takewhile`](http://docs.python.org/2/library/itertools.html#itertools.takewhile), ``` >>> from itertools import takewhile >>> str_ = 'aaaabcd' >>> ch_ = 'a' >>> sum(1 for _ in takewhile(lambda x: x == ch_, str_)) 4 ```
Number of the same characters in a row - python
[ "", "python", "string", "" ]
How can I return records where I did not have sales for a month, e.g. July 2004? The user will supply a date, say July 2004, and then I need to see the products for which there were no sales. ``` SELECT DP.ProductID ,DP.Name ,FS.OrderDate ,FS.OrderQty ,FS.OrderAmount FROM dbo.DimProduct AS DP LEFT JOIN dbo.FactSales as FS on FS.ProductID = DP.ProductID ```
``` SELECT DP.* FROM dbo.DimProduct AS DP LEFT JOIN dbo.FactSales as FS ON FS.ProductID = DP.ProductID AND DATENAME(month, FS.OrderDate) = 'July' AND YEAR(FS.OrderDate) = 2004 WHERE FS.ProductID IS NULL ```
Maybe this solves your problem: ``` SELECT DP.ProductID ,DP.Name ,FS.OrderDate ,FS.OrderQty ,FS.OrderAmount FROM dbo.DimProduct AS DP LEFT JOIN dbo.FactSales as FS on FS.ProductID = DP.ProductID WHERE FS.OrderAmount = 0 ```
Return products where there was no sales for month
[ "", "sql", "t-sql", "" ]
How can I do a search and replace on a string? To be more specific: I have a text file with ``` SAMPLE AB CD .. TYPES AB QP PO .. RUNS AB DE ZY ``` I want to replace `AB` with `XX`, only under lines `SAMPLE` and `RUNS`. I've already tried multiple ways of using `replace()`. I tried something like ``` if 'SAMPLE' in line: f1.write(line.replace('testsample', 'XX')) if 'RUNS' in line: f1.write(line.replace('testsample', 'XX')) ``` and that didn't work.
A file is an iterator over lines in Python: ``` for line in file: output.write(line) # save as is if 'SAMPLE' in line or 'RUNS' in line: line = next(file, "") # move to the next line output.write(line.replace('AB', 'XX')) # save replacing AB with XX ``` To support SAMPLE/RUNS lines that follows another SAMPLE/RUNS line e.g.: ``` SAMPLE SAMPLE AB ``` you could: ``` for line in file: output.write(line) # save as is while 'SAMPLE' in line or 'RUNS' in line: line = next(file, "") # move to the next line output.write(line.replace('AB', 'XX')) # save replacing AB with XX ```
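Here is a self-contained sketch of the same idea that works on any iterable of lines, which makes it easy to test without a real file (the helper name and defaults are my own):

```python
def replace_after_markers(lines, markers=('SAMPLE', 'RUNS'), old='AB', new='XX'):
    """Yield lines unchanged, except rewrite old -> new on a line that
    directly follows a marker line."""
    it = iter(lines)
    for line in it:
        yield line
        if any(marker in line for marker in markers):
            following = next(it, '')
            yield following.replace(old, new)

text = "SAMPLE\nAB CD\nTYPES\nAB QP PO\nRUNS\nAB DE ZY\n"
result = ''.join(replace_after_markers(text.splitlines(True)))
print(result)
```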
The easiest way will be to iterate your file line by line, and each time you see a `SAMPLE` or `RUNS` line to save a flag meaning “the previous line was the one I was looking for”. Any other line will reset this flag. Now on every iteration you check if the flag was set during the previous iteration, and if it was you do your `.replace` thing.
Search and replace string under specific line
[ "", "python", "replace", "find", "" ]
I have two MySQL tables that are related by an ID number. I need to select the count of the IDs in the first table that do NOT match any ID in the second table. This is what I tried: ``` SELECT COUNT(DISTINCT(ask_questions.id)) FROM ask_questions INNER JOIN ask_answers ON ask_questions.id != ask_answers.question_id; ``` I thought that using "!=" would return results that do not match, but the number returned is not correct. Is there a better solution?
use `LEFT JOIN` since you want to get `ID` that do not have atleast a match on the other table. ``` SELECT COUNT(DISTINCT(ask_questions.id)) FROM ask_questions LEFT JOIN ask_answers ON ask_questions.id = ask_answers.question_id WHERE ask_answers.question_id IS NULL ``` To further gain more knowledge about joins, kindly visit the link below: * [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html) An alternative with `JOIN` is by using `NOT EXISTS` ``` SELECT COUNT(DISTINCT(id)) FROM ask_questions WHERE NOT EXISTS ( SELECT null FROM ask_answers WHERE ask_questions.id = ask_answers.question_id ) ```
Use `NOT IN` ``` SELECT COUNT(ask_questions.id) FROM ask_questions WHERE ask_questions.id NOT IN(SELECT question_id FROM ask_answers) ```
How to select COUNT of one table column where there are no matches in another table column
[ "", "mysql", "sql", "" ]
My actual example is more involved so I boiled the concept down to a simple example: ``` l = [1,2,3,4,5,6,7,8] for number in l: calc = number*10 print calc ``` For each iteration of my loop, I end up with a variable (`calc`) I'd like to use to populate a new list. My actual process involves much more than multiplying the value by `10`, so I'd like to be able to set each value in the new list by this method. The new code might look like this: ``` l = [1,2,3,4,5,6,7,8] for number in l: calc = number*10 # set calc as x'th entry in a new list called l2 (x = iteration cycle) print l2 ``` Then it would print the new list: `[10,20,30,40,...]`
Here's how to do it without jumping straight into list comprehensions. It's not a great idea to use `l` as a variable name because it is identical to `1` in some fonts, so I changed it (althought `l1` isn't really much better :) ) ``` l1 = [1,2,3,4,5,6,7,8] l2 = [] for number in l1: calc = number*10 print calc l2.append(calc) ``` list comprehensions do provide a more compact way to write this pattern ``` l2 = [ number*10 for number in l1 ] ```
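For illustration, running both forms side by side shows they agree:

```python
l1 = [1, 2, 3, 4, 5, 6, 7, 8]

# Loop-and-append version
l2 = []
for number in l1:
    l2.append(number * 10)

# Equivalent list comprehension
l3 = [number * 10 for number in l1]

print(l2)        # [10, 20, 30, 40, 50, 60, 70, 80]
print(l2 == l3)  # True
```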
There are several options... ## List comprehensions Use [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions), if short enough: ``` new_list = [number * 10 for number in old_list] ``` ## `map()` You could also use `map()`, if the function exists before (or you will eg. use `lambda`): ``` def my_func(value): return value * 10 new_list = map(my_func, old_list) ``` Be aware, that in Python 3.x `map()` does not return a list (so you would need to do something like this: `new_list = list(map(my_func, old_list))`). ## Filling other list using simple `for ... in` loop Alternatively you could use simple loop - it is still valid and Pythonic: ``` new_list = [] for item in old_list: new_list.append(item * 10) ``` ## Generators Sometimes, if you have a lot of processing (as you said you have), you want to perform it lazily, when requested, or just the result may be too big, you may wish to use generators. Generators remember the way to generate next element and forget whatever happened before (I am simplifying), so you can iterate through them once (but you can also explicitly create eg. list that stores all the results). In your case, if this is only for printing, you could use this: ``` def process_list(old_list): for item in old_list: new_item = ... # lots of processing - item into new_item yield new_item ``` And then print it: ``` for new_item in process_list(old_list): print(new_item) ``` More on generators you can find in Python's wiki: <http://wiki.python.org/moin/Generators> ## Accessing "iteration number" But if your question is more about how to retrieve the number of iteration, take a look at `enumerate()`: ``` for index, item in enumerate(old_list): print('Item at index %r is %r' % (index, item)) ```
How to generate new list from variable in a loop?
[ "", "python", "" ]
I want to read a .xlsx file using the Pandas library of Python and port the data to a PostgreSQL table. All I could do up until now is: ``` import pandas as pd data = pd.ExcelFile("*File Name*") ``` Now I know that the step got executed successfully, but I want to know how I can parse the Excel file that has been read so that I can understand how the data in the Excel maps to the data in the variable data. I learnt that data is a DataFrame object, if I'm not wrong. So how do I parse this DataFrame object to extract each row?
I usually create a dictionary containing a `DataFrame` for every sheet: ``` xl_file = pd.ExcelFile(file_name) dfs = {sheet_name: xl_file.parse(sheet_name) for sheet_name in xl_file.sheet_names} ``` --- Update: In pandas version 0.21.0+ you will get this behavior more cleanly by passing [`sheet_name=None`](https://github.com/pandas-dev/pandas/issues/9930) to [`read_excel`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html): ``` dfs = pd.read_excel(file_name, sheet_name=None) ``` --- In 0.20 and prior, this was `sheetname` rather than `sheet_name` (this is now deprecated in favor of the above): ``` dfs = pd.read_excel(file_name, sheetname=None) ```
``` pd.read_excel(file_name) ``` sometimes this code gives an error for xlsx files as: `XLRDError:Excel xlsx file; not supported` instead , you can use `openpyxl` engine to read excel file. ``` df_samples = pd.read_excel(r'filename.xlsx', engine='openpyxl') ```
How to read a .xlsx file using the pandas Library in iPython?
[ "", "python", "pandas", "ipython", "jupyter-notebook", "dataframe", "" ]
I have a list of filenames as follows ``` files = [ '/dl/files/4j55eeer_wq3wxxpiqm.jpg', '/home/Desktop/hjsd03wnsbdr9rk3k', 'kd0dje7cmidj0xks03nd8nd8a3', ... ] ``` The problem is most of the files do not have an extension in the filename. What would be the best way to get the file extension of these files? I don't know if this is even possible, because Python would treat all files as `buffer` or `string` objects that do not have any `filetype` associated with them. Can this be done at all?
Once you use `magic` to get the MIME type, you can use [`mimetypes.guess_extension()`](http://docs.python.org/2/library/mimetypes.html#mimetypes.guess_extension) to get the extension for it.
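For illustration, the standard-library half of that pipeline looks like this (getting the MIME type in the first place would use the third-party python-magic package, e.g. its file-inspection API, which is assumed rather than shown here):

```python
import mimetypes

# MIME type -> file extension. The MIME type would normally come from
# inspecting the file's content with python-magic (not shown; third-party).
print(mimetypes.guess_extension('application/zip'))  # .zip
```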
It can be done if you have an oracle that determines file types from their content. Happily at least one such oracle is already implemented in Python: <https://github.com/ahupp/python-magic>
How can I get a file extension from a filetype?
[ "", "python", "file", "" ]
I'm working on the GUI for a simple quiz app using Tkinter in Python 2.7. Thus far, I have begun to set up my frame. I've put a scrollbar inside of a Text widget named `results_txtbx` to scroll up and down a list noting the player's performance on each question. I've been using `grid` since it's easier for me to manage. ``` from Tkinter import * class Q_and_A: def __init__(self, master): frame = Frame(master) Label(master).grid(row = 4) results_txtbx = Text(master) results_scrbr = Scrollbar(results_txtbx) results_scrbr.grid(sticky = NS + E) results_txtbx.config(width = 20, height = 4, wrap = NONE, yscrollcommand = results_scrbr.set) results_txtbx.grid(row = 3, column = 1, padx = 12, sticky = W) root = Tk() root.wm_title("Question and Answer") root.resizable(0, 0) app = Q_and_A(root) root.mainloop() ``` What happens is that when it runs, `results_txtbx` resizes to fit the scrollbar. Is there any way to make it keep its original size using `grid`?
You don't want to use a text widget as the master for a scrollbar. Like any other widget, if you pack or grid the scrollbar in the text widget, the text widget will shrink or expand to fit the scrollbar. That is the crux of your problem. Instead, create a separate frame (which you're already doing), and use that frame as the parent for both the text widget and the scrollbars. If you want the appearance that the scrollbars are inside, set the borderwidth of the text widget to zero, and then give the containing frame a small border. As a final usability hint, I recommend *not* making the window non-resizable. Your users probably know better what size of window they want than you do. Don't take that control away from your users. Here's (roughly) how I would implement your code: * I would use `import Tkinter as tk` rather than `from Tkinter import *` since global imports are generally a bad idea. * I would make `Q_and_A` a subclass of `tk.Frame` so that it can be treated as a widget. * I would make the whole window resizable. * I would separate widget creation from widget layout, so all my layout options are in one place. This makes it easier to write and maintain, IMO. * As mentioned above, I would put the text and scrollbar widgets inside a frame. Here's the final result: ``` import Tkinter as tk class Q_and_A(tk.Frame): def __init__(self, master): tk.Frame.__init__(self, master, borderwidth=1, relief="sunken") self.label = tk.Label(self) self.results_txtbx = tk.Text(self, width=20, height=4, wrap="none", borderwidth=0, highlightthickness=0) self.results_scrbr = tk.Scrollbar(self, orient="vertical", command=self.results_txtbx.yview) self.results_txtbx.configure(yscrollcommand=self.results_scrbr.set) self.label.grid(row=1, columnspan=2) self.results_scrbr.grid(row=0, column=1, sticky="ns") self.results_txtbx.grid(row=0, column=0, sticky="nsew") self.grid_rowconfigure(0, weight=1) self.grid_columnconfigure(0, weight=1) root = tk.Tk() root.wm_title("Question And Answer") app = Q_and_A(root) app.pack(side="top", fill="both", expand=True) root.mainloop() ```
Set `results_scrbr.grid(row = 3, column = 2)` next to `results_txtbx.grid(row = 3,column = 1, padx = 4)`, `sticky` is not needed because window is not resizable, and i lowered the padx so scrollbar is closer to text. Also to make the results\_txtbx vertically scrollable, add `results_scrbr.config(command=results_txtbx.yview)` Here is a working code... ``` from Tkinter import * class Q_and_A: def __init__(self, master): frame = Frame(master) Label(master).grid(row = 4) results_txtbx = Text(master) results_scrbr = Scrollbar(master) results_scrbr.grid(row = 3, column = 2) results_scrbr.config(command=results_txtbx.yview) results_txtbx.config(width = 20, height = 4, wrap = NONE, yscrollcommand = results_scrbr.set) results_txtbx.grid(row = 3, column = 1, padx = 4) root = Tk() root.wm_title("Question and Answer") root.resizable(0, 0) app = Q_and_A(root) root.mainloop() ```
Tkinter - Text widget shrinks when Scrollbar is added
[ "", "python", "python-2.7", "tkinter", "" ]
I need your help, I have a structure like this: ``` myList = [(1,2,3),(2,4,4),(1,5,6)] ``` It's a list of tuples. Now I need to get the first element of each tuple in the list and, e.g., replace each `1` with a `3`. The output should be: `myList = [(3,2,3),(2,4,4),(3,5,6)]` I know I can do it like: ``` for item in myList: if item[0] == 1: item[0] = 3 ``` But is there another way to do this? Without iterating over the whole list? Something like: `myList.getFirstItemOfEachTuple.replace(1,3)` EDIT: I could change the `myList` to `[[1,2,3,4,5,6]...]` if necessary.
``` >>> myList = [(1,2,3,4,5,6),(4,5,6,7,8)] >>> dic = {1:3} >>> [ (dic.get(x[0],x[0]),) + x[1:] for x in myList] [(3, 2, 3, 4, 5, 6), (4, 5, 6, 7, 8)] ``` If `myList` is a list of lists: ``` >>> myList = [[1,2,3,4,5,6],[4,5,6,7,8]] >>> [ [dic.get(x[0],x[0]) ] + x[1:] for x in myList] [[3, 2, 3, 4, 5, 6], [4, 5, 6, 7, 8]] ``` to modify the original list: ``` >>> myList[:] = [(dic.get(x[0],x[0]),) + x[1:] for x in myList] >>> myList [(3, 2, 3, 4, 5, 6), (4, 5, 6, 7, 8)] ```
> *But is there another way to do this? Without iterating over the whole list?* No. Not without iterating over the whole list. Since you wish to examine each tuple to see if the element you wish to change is a certain number, you *have* to iterate over the whole list *somehow*. So the only remaining consideration is *how* to do it. There exist good, time-tested and industry-standard guidelines that help decide how to write code: when writing code you should have **code readability** as a *first* priority. Code efficiency comes in as a distant second. There are exceptions to this rule, but they're not relevant here. Look at your original code. It assumes `item` is a `list`, so I will too: ``` for item in myList: if item[0] == 1: item[0] = 3 ``` Now compare with Ashwini's suggestion: ``` dic = {1: 3} myList[:] = [[dic.get(x[0], x[0])] + x[1:] for x in myList] ``` Now ask yourself: * Which one is easiest to read and understand? I think the answer is obvious. * Which one is more efficient? Let's look at the efficiency: * **Your original code:** For each item in `myList`, perform a single list lookup and then possibly a single list assignment, both extremely fast operations. * **Ashwini's code:** Rebuild the entire structure. For each item in `myList` Python needs to create *three* new lists (*five* if you want to change an item that's not the first). Python must allocate new memory for each list and garbage collect a lot of old lists, both rather slow operations. All for the sake of cramming it into a one-liner. *Please*, go with your original code. Here's why: * It's the *obvious* way to do it. * It's the *pythonic* way to do it. * It's the *most readable* way to do it. * It's the *most efficient* way to do it. Which means it's the *right* way to do it. If you want a one-liner, make it a function: ``` def conditional_sublist_assign(myList, index, old, new): """ For each `item` in `myList`, set `item[index] = new` if `item[index] == old`.
(`from` and `to` would read more naturally, but `from` is a Python keyword.) """ for item in myList: if item[index] == old: item[index] = new # Here's your one-liner: conditional_sublist_assign(myList, 0, 1, 3) ``` To lend some more weight to my arguments, here are some relevant lines from [The Zen of Python](http://www.python.org/dev/peps/pep-0020/): * Beautiful is better than ugly. * Simple is better than complex. * Readability counts. * There should be one-- and preferably only one --obvious way to do it. * If the implementation is hard to explain, it's a bad idea.
Possible to modify every first tuple element in a list of tuples?
[ "", "python", "list", "tuples", "" ]
Suppose I have a table with column which takes values from 1 to 10. I need to select columns with all values except for 9 and 10. Will there be a difference (performance-wise) when I use this query: ``` SELECT * FROM tbl WHERE col NOT IN (9, 10) ``` and this one? ``` SELECT * FROM tbl WHERE col IN (1, 2, 3, 4, 5, 6, 7, 8) ```
When it comes to performance you should always profile your code (i.e. run your queries a few thousand times and measure each loop's performance using some kind of `stopwatch`. [Sample](http://ideone.com/WbNnJT)). But here I highly recommend using the first query for easier future maintenance. The logic is that you need all records but 9 and 10. If you add the value 11 to your table and use the second query, the logic of your application will break, which will lead to a bug, of course. **Edit:** I remember this was tagged as php, which is why I provided the sample in php, but I might be mistaken. I guess it won't be hard to rewrite that sample in the language you're using.
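As a sketch of the stopwatch idea in Python (the query function here is a stand-in; substitute your real database call):

```python
import time

def run_query():
    # Stand-in for executing the real query against the database.
    return [col for col in range(1000) if col not in (9, 10)]

start = time.perf_counter()
for _ in range(1000):
    run_query()
elapsed = time.perf_counter() - start
print("1000 runs took %.4f s" % elapsed)
```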
Use "IN" as it will most likely make the DBMS use an index on the corresponding column. "NOT IN" could in theory also be translated into an index usage, but in a more complicated way which DBMS might not "spend overhead time" using.
SQL - IN vs. NOT IN
[ "", "sql", "" ]
I am trying this question from CodingBat > Given three ints, a b c, return True if one of b or c is "close" (differing by > at most 1), while the other is "far", differing from both other values by 2 or > more. Note: abs(num) computes the absolute value of a number. ``` close_far(1, 2, 10) → True close_far(1, 2, 3) → False close_far(4, 1, 3) → True ``` I know I can solve this with a series of if-else statements, but it would be really long. Are there any other approaches to this question?
Sorting all three values looks tempting, but it loses track of which value is `a`, and `a` plays a special role here: for example, `close_far(5, 1, 2)` must be `False` (neither `b` nor `c` is close to `a`), yet after sorting the deltas are 1 and 3. Checking the conditions relative to `a` keeps it short: ``` def close_far(a, b, c): delta_b, delta_c = abs(a - b), abs(a - c) return (delta_b <= 1 and delta_c >= 2 and abs(b - c) >= 2 or delta_c <= 1 and delta_b >= 2 and abs(b - c) >= 2) ```
``` def close_far(a, b, c): def close(x, y): return abs(x - y) <= 1 def far(x, y): return abs(x - y) >= 2 return (close(b, a) and far(c, a) and far(c, b) or close(c, a) and far(b, a) and far(b, c)) >>> close_far(1, 2, 10) True >>> close_far(1, 2, 3) False >>> close_far(4, 1, 3) True ```
Alternative approaches to comparing values
[ "", "python", "algorithm", "" ]
**I'm using SQL 2000. There is a string column 'Password' in a table 'Users' with around 3k rows. My requirement is to encrypt all the values of the 'Password' column. Also, I should be able to decrypt those encrypted password fields whenever needed.** I know that from SQL 2005 onward there is built-in functionality for these requirements, but I'm concerned with SQL 2000. Please suggest **if there is any way to achieve my requirement via VB code or SQL script**, NOT with any third-party tools. I have searched many places but with no success. Thanks.
You could use undocumented `PWDENCRYPT` and `PWDCOMPARE` functions available in SQL Server 2000 - ``` CREATE TABLE #USER ( LOGIN_ID varchar(20), UserPassword nvarchar(256) ) -- Encrypt & Insert Password -- Note: You will have to write UPDATE on existing records INSERT #USER VALUES ( 'my_loginid', PWDENCRYPT('MyPassword1')) DECLARE @InputPassword VARCHAR(100) DECLARE @IsValid INT SET @IsValid = 0 -- Test for Correct Password SET @InputPassword = 'MyPassword1' SET @IsValid = (SELECT PWDCOMPARE(@InputPassword, UserPassword, 0) FROM #USER WHERE LOGIN_ID = 'my_loginid') SELECT @IsValid AS 'Test1'; -- Test for Wrong Password SET @InputPassword = 'WrongPassword' SET @IsValid = (SELECT PWDCOMPARE(@InputPassword, UserPassword, 0) FROM #USER WHERE LOGIN_ID = 'my_loginid') SELECT @IsValid AS 'Test2' DROP TABLE #USER ``` Reference links - * [PWDENCRYPT](http://technet.microsoft.com/en-us/library/dd822791%28v=SQL.105%29.aspx) * [PWDCOMPARE](http://technet.microsoft.com/en-us/library/dd822792%28v=sql.105%29.aspx) * [SQL Server's Undocumented Password Encryption Functions](http://sqlmag.com/sql-server/sql-servers-undocumented-password-encryption-functions)
Passwords are usually stored with a one-way hash (for example SHA1), meaning they are irreversibly transformed and never need to be decrypted. When the user enters the password, your code would hash it and check if the hashed value matched the hashed value in the database. However, it sounds like you have a requirement to also be able to decrypt the password. For that there are several asymmetric algorithms (RSA, PGP, etc) where you would have a private and public key pair. The private key is kept secret, while the public key could be shared for others to be able to encrypt their own information before sending it to you. It sounds like that is overkill since only your VB6 code needs to encrypt the data and not any 3rd parties. Therefore, you could simply use a symmetric algorithm (like Blowfish or TripleDES) where you use the same passphrase (instead of a key pair) to encrypt and decrypt the data. That passphrase could be stored in a configuration file on the server. Make sure to keep it protected from unauthorized users. Have you seen this article? It uses TripleDES with a passphrase which sounds exactly like what you need. <http://msdn.microsoft.com/en-us/library/ms172831(v=vs.80).aspx>
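To illustrate the one-way-hash idea outside the database, here is a minimal Python sketch using the stdlib `hashlib` with a per-user salt; the VB/SQL side would look different, but the verify-by-rehashing principle is the same (for real systems, prefer a deliberately slow KDF such as PBKDF2 over a bare SHA-256):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A random per-user salt makes identical passwords hash differently.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(candidate, salt, stored_digest):
    # Re-hash the candidate and compare; the stored hash is never "decrypted".
    return hash_password(candidate, salt)[1] == stored_digest

salt, stored = hash_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(verify_password("wrong", salt, stored))   # False
```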
Encrypt a column in SQL 2000 via code or SQL script
[ "", "sql", "encryption", "sql-server-2000", "" ]
I have a program to generate a dynamic query string based on input. This query may select from any tables or joined tables in my DB, and the column names and number of columns are unknown. Now with this query string as the only input, I want to fetch all data from the result and output them line by line; is there any way to do this? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Thanks to Thinkjet for the reference. I have solved the problem; to help others, here is the piece of code I used: ``` DECLARE v_curid NUMBER; v_desctab DBMS_SQL.DESC_TAB; v_colcnt NUMBER; v_name_var VARCHAR2(10000); v_num_var NUMBER; v_date_var DATE; v_row_num NUMBER; p_sql_stmt VARCHAR2(1000); BEGIN v_curid := DBMS_SQL.OPEN_CURSOR; p_sql_stmt :='SELECT * FROM emp'; DBMS_SQL.PARSE(v_curid, p_sql_stmt, DBMS_SQL.NATIVE); DBMS_SQL.DESCRIBE_COLUMNS(v_curid, v_colcnt, v_desctab); -- Define columns: FOR i IN 1 .. v_colcnt LOOP IF v_desctab(i).col_type = 2 THEN DBMS_SQL.DEFINE_COLUMN(v_curid, i, v_num_var); ELSIF v_desctab(i).col_type = 12 THEN DBMS_SQL.DEFINE_COLUMN(v_curid, i, v_date_var); ELSE DBMS_SQL.DEFINE_COLUMN(v_curid, i, v_name_var, 50); END IF; END LOOP; v_row_num := dbms_sql.execute(v_curid); -- Fetch rows with DBMS_SQL package: WHILE DBMS_SQL.FETCH_ROWS(v_curid) > 0 LOOP FOR i IN 1 .. v_colcnt LOOP IF (v_desctab(i).col_type = 1) THEN DBMS_SQL.COLUMN_VALUE(v_curid, i, v_name_var); ELSIF (v_desctab(i).col_type = 2) THEN DBMS_SQL.COLUMN_VALUE(v_curid, i, v_num_var); ELSIF (v_desctab(i).col_type = 12) THEN DBMS_SQL.COLUMN_VALUE(v_curid, i, v_date_var); END IF; END LOOP; END LOOP; DBMS_SQL.CLOSE_CURSOR(v_curid); END; / ```
You can do that with [`DBMS_SQL`](http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm#BHCIBJBG) package. *Update* To get more detailed reference about DBMS\_SQL go [here](http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_sql.htm#ARPLS058).
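For comparison, the "describe the columns, then fetch generically" pattern that DBMS_SQL enables has a direct analogue in Python's DB-API via `cursor.description`; a small sqlite3 sketch (table contents made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal REAL, hiredate TEXT)")
conn.execute("INSERT INTO emp VALUES ('KING', 5000.0, '1981-11-17')")

def dump_query(conn, sql):
    # Works for any SELECT: column names and count are discovered at run time.
    cur = conn.execute(sql)
    col_names = [d[0] for d in cur.description]
    rows = [dict(zip(col_names, row)) for row in cur.fetchall()]
    return col_names, rows

cols, rows = dump_query(conn, "SELECT * FROM emp")
print(cols)   # ['ename', 'sal', 'hiredate']
print(rows)
```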
``` DECLARE RUN_S CLOB; IGNORE NUMBER; SOURCE_CURSOR NUMBER; PWFIELD_COUNT NUMBER DEFAULT 0; L_DESCTBL DBMS_SQL.DESC_TAB2; Z_NUMBER NUMBER; BEGIN RUN_S := ' SELECT 1 AS VAL1, 2 AS VAL2, CURSOR (SELECT 11 AS VAL11, 12 AS VAL12 FROM DUAL) AS CUR1, CURSOR (SELECT 11 AS VAL11, 12 AS VAL12 FROM DUAL) AS CUR2 FROM DUAL'; SOURCE_CURSOR := DBMS_SQL.OPEN_CURSOR; DBMS_SQL.PARSE(SOURCE_CURSOR, RUN_S, DBMS_SQL.NATIVE); DBMS_SQL.DESCRIBE_COLUMNS2(SOURCE_CURSOR, PWFIELD_COUNT, L_DESCTBL); -- get record structure FOR I IN 1 .. PWFIELD_COUNT LOOP DBMS_OUTPUT.PUT_LINE('Col ' || I || ' Type:' || L_DESCTBL(I).COL_TYPE); IF L_DESCTBL(I).COL_TYPE = 2 THEN DBMS_SQL.DEFINE_COLUMN(SOURCE_CURSOR, I, Z_NUMBER); END IF; NULL; END LOOP; IGNORE := DBMS_SQL.EXECUTE(SOURCE_CURSOR); LOOP IF DBMS_SQL.FETCH_ROWS(SOURCE_CURSOR) > 0 THEN FOR I IN 1 .. PWFIELD_COUNT LOOP IF L_DESCTBL(I).COL_TYPE IN (2) THEN DBMS_SQL.COLUMN_VALUE(SOURCE_CURSOR, I, Z_NUMBER); DBMS_OUTPUT.PUT_LINE('Col ' || I || ' Value:' || Z_NUMBER); END IF; END LOOP; ELSE EXIT; END IF; END LOOP; END; ```
Oracle : how to fetch data from dynamic query?
[ "", "sql", "oracle", "plsql", "" ]
I have a list of words, let's say it's ``` ['a', 'b', 'c', 'd'] ``` I have a document where I've already pre-processed a text file into a matrix, and it goes like this: ``` a,b,c,d 0,1,1,0 1,1,0,0 1,1,1,1 ``` Where 1 is the presence of the word in a sentence, and 0 is the absence of that word in a sentence. I would like to go through that matrix, line by line, and increment some sort of counter associated with the original word list up above, so I can know how many of each word was found in the sentences at the end. How can I make this? Do I have to create an associative array, or a 2d array? Is there a way to create a new variable within the array associated with each word that I can increment? Thanks!
All you have to do is `sum` each column, since it's just 0s and 1s! ``` import numpy as np # matrix: first row holds the words, remaining rows hold the 0/1 values array = np.array(matrix[1:], dtype=int) answer = np.apply_along_axis(sum, 0, array) my_dict = dict(zip(matrix[0], answer)) ``` Now you have a dictionary where the keys are the words and the values are the total number of appearances!
You can use [*collections.Counter*](http://docs.python.org/2.7/library/collections.html#collections.Counter) to tally the word counts: ``` >>> from collections import Counter >>> filedata = '''\ 0,1,1,0 1,1,0,0 1,1,1,1 ''' >>> counter = Counter() >>> for line in filedata.splitlines(): a, b, c, d = map(int, line.split(',')) counter['a'] += a counter['b'] += b counter['c'] += c counter['d'] += d >>> counter Counter({'b': 3, 'a': 2, 'c': 2, 'd': 1}) ```
I have a list of words. I want to add a counter variable associated with each word. How do I do this?
[ "", "python", "arrays", "list", "" ]
I am a MSSQL user now I am converting my database to MySQL, I am writing following query in MySQL. ``` select col1 as [My Column] from table1 ``` > Error : You have an error in sql syntax
You have a syntax error because the identifier delimiter (quote character) is different from `MSSQL`'s. You need to use a backtick instead of brackets, e.g., ``` select col1 as `My Column` from table1 ``` MySQL => backtick MSSQL => bracket
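Standard SQL spells quoted identifiers with double quotes, which sqlite3 (bundled with Python) also accepts; backticks in MySQL and brackets in MSSQL are vendor-specific spellings of the same idea. A quick check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 INTEGER)")
conn.execute("INSERT INTO table1 VALUES (42)")

cur = conn.execute('SELECT col1 AS "My Column" FROM table1')
alias = cur.description[0][0]  # the column name produced by the alias
value = cur.fetchone()[0]
print(alias, value)  # My Column 42
```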
Remove `[]` ``` select col1 as MyAliasName from table1 ``` Or ``` select col1 as `My Alias Name` from table1 ```
Alias in mysql select query
[ "", "mysql", "sql", "sql-server", "alias", "" ]
I have a bunch of product data in my DB. Each product has a certain ShopID which tells me to which shop it belongs. **I want all products with ShopID 1,4,7 appear last in the results.** How can I do that? Sample Code ``` SELECT * FROM Tablename WHERE Name LIKE %red% ORDER BY ShopID ASC ``` Best Regards, D.
Another alternative option would be to use a `CASE` statement in your `ORDER BY` clause: ``` SELECT * FROM Tablename WHERE Name LIKE '%red%' ORDER BY CASE WHEN ShopID IN (1,4,7) THEN 1 ELSE 0 END, ShopID ``` * [SQL Fiddle Demo](http://www.sqlfiddle.com/#!2/4147eb/1)
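The CASE-based ordering is standard SQL and portable; here is a quick check with Python's sqlite3 module on tiny made-up data (shops 2 and 3 should sort before 1 and 7):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tablename (Name TEXT, ShopID INTEGER)")
conn.executemany("INSERT INTO Tablename VALUES (?, ?)",
                 [("red hat", 1), ("red shoe", 2), ("red bag", 7), ("red cup", 3)])

rows = conn.execute("""
    SELECT Name, ShopID FROM Tablename
    WHERE Name LIKE '%red%'
    ORDER BY CASE WHEN ShopID IN (1, 4, 7) THEN 1 ELSE 0 END, ShopID
""").fetchall()
print(rows)  # shops 2 and 3 first, then 1 and 7
```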
if `ShopID` is not saved as comma separated value, you can use MySQL's `FIELD()` ``` ORDER BY FIELD(ShopID, 1,4,7) ASC, ShopID ASC ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/1fb5d/1) the reason why `1,4,7` appears on the last part of the result list is because `FIELD()` returns the index that was specified on the list. in this case, any number that is not in `1,4,7` has an index `0` making it first on the list. * [MySQL Documentation: **FIELD()**](http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_field)
MySQL: Show rows with certain content last
[ "", "mysql", "sql", "" ]
I have two arrays: ``` a=np.array((1,2,3,4,5)) b=np.array((2,3,4,5,6)) ``` What I want is to use the values of a and b for the limits of linspace e.g. ``` c=np.linspace(a,b,11) ``` I get an error when I use this code. The answer should be for the first element of the array: ``` c=np.linspace(a,b,11) print c c=[1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2] ```
You can do this: ``` c = np.array([np.linspace(i,j,5) for i,j in zip(a,b)]) #array([[ 1. , 1.25, 1.5 , 1.75, 2. ], # [ 2. , 2.25, 2.5 , 2.75, 3. ], # [ 3. , 3.25, 3.5 , 3.75, 4. ], # [ 4. , 4.25, 4.5 , 4.75, 5. ], # [ 5. , 5.25, 5.5 , 5.75, 6. ]]) ```
If you want to avoid explicit Python loops, you can do the following: ``` >>> a = np.array([1, 2, 3, 4, 5]).reshape(-1, 1) >>> b = np.array([2, 3, 4, 5, 6]).reshape(-1, 1) >>> c = np.linspace(0, 1, 11) >>> a + (b - a) * c array([[ 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. ], [ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. ], [ 3. , 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4. ], [ 4. , 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5. ], [ 5. , 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6. ]]) ```
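If NumPy is unavailable, the same `a + (b - a) * t` idea can be written in plain Python; `linspace_rows` is a made-up helper name for illustration:

```python
def linspace_rows(starts, stops, num):
    # One evenly spaced row of `num` points per (start, stop) pair.
    ts = [i / (num - 1.0) for i in range(num)]
    return [[lo + (hi - lo) * t for t in ts] for lo, hi in zip(starts, stops)]

rows = linspace_rows([1, 2], [2, 3], 11)
print(rows[0])  # 1.0, 1.1, ..., 2.0 (up to float rounding)
```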
Python linspace limits from two arrays
[ "", "python", "arrays", "numpy", "" ]
In my database project, I had to build a database regarding some data from the Olympic games. I have to build the following query as well : *Compute medal table for the specific Olympic Games supplied by the user. Medal table should contain country’s IOC code followed by the number of gold, silver, bronze and total medals. It should first be sorted by the number of gold, then silvers and finally bronzes.* Basically, I have a table `Medals`which contains the medals won by some participants at some Olympic game. The medals are stored in the following way : "Gold medal", "Silver medal", "Bronze medal" in the `color`field of my table `Medals`. I tried to use the following query : ``` SELECT q1.country, q1.name as "Game", q1.cntG, q2.cntS, q3.cntB FROM ( SELECT c.countryName as country, g.name as name, count(m.idMedal) as cntG FROM Game g INNER JOIN Participant p ON p.fkGame = g.idGame INNER JOIN Country c ON p.fkCountry = c.idCountry INNER JOIN Medals m ON m.fkMedalist = p.idParticipant WHERE g.name = "2012 Summer Olympics" AND m.color like '%Gold%' GROUP BY c.countryName ORDER BY c.countryName, cntG DESC ) as q1, ( SELECT c.countryName as country, g.name as name, count(m.idMedal) as cntS FROM Game g INNER JOIN Participant p ON p.fkGame = g.idGame INNER JOIN Country c ON p.fkCountry = c.idCountry INNER JOIN Medals m ON m.fkMedalist = p.idParticipant WHERE g.name = "2012 Summer Olympics" AND m.color like '%Silver%' GROUP BY c.countryName ORDER BY c.countryName, cntS DESC ) as q2, ( SELECT c.countryName as country, g.name as name, count(m.idMedal) as cntB FROM Game g INNER JOIN Participant p ON p.fkGame = g.idGame INNER JOIN Country c ON p.fkCountry = c.idCountry INNER JOIN Medals m ON m.fkMedalist = p.idParticipant WHERE g.name = "2012 Summer Olympics" AND m.color like '%Bronze%' GROUP BY c.countryName ORDER BY c.countryName, cntB DESC ) as q3 GROUP BY q1.country ORDER BY q1.cntG, q2.cntS, q3.cntB DESC ``` Well, it gives me a totally weird result. 
I know there is something wrong with this query but cannot figure out what it is! Hope you can help me :) Thanks. NOTE: I ignored the total number of medals (as asked in the assignment) for the moment. Once I've figured out how to build the first part, I'll try for the total.
The data are not the problem - the fact that you have an implicit Cartesian join between each of q1, q2 and q3 *is* a problem. Also note that the medal names live in `m.color`, not `m.idMedal`. Try: ``` SELECT c.countryName as country, count(case m.color when 'Gold medal' then 1 end) as cntG, count(case m.color when 'Silver medal' then 1 end) as cntS, count(case m.color when 'Bronze medal' then 1 end) as cntB FROM Game g INNER JOIN Participant p ON p.fkGame = g.idGame INNER JOIN Country c ON p.fkCountry = c.idCountry INNER JOIN Medals m ON m.fkMedalist = p.idParticipant WHERE g.name = "2012 Summer Olympics" GROUP BY c.countryName ORDER BY cntG DESC, cntS DESC, cntB DESC ```
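The conditional-count pattern (`COUNT` over a `CASE` that yields `NULL` for non-matching rows) is easy to verify with Python's sqlite3 module on a toy medals table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Medals (countryName TEXT, color TEXT)")
conn.executemany("INSERT INTO Medals VALUES (?, ?)", [
    ("USA", "Gold medal"), ("USA", "Gold medal"), ("USA", "Bronze medal"),
    ("GBR", "Silver medal"), ("GBR", "Gold medal"),
])

rows = conn.execute("""
    SELECT countryName,
           COUNT(CASE color WHEN 'Gold medal'   THEN 1 END) AS cntG,
           COUNT(CASE color WHEN 'Silver medal' THEN 1 END) AS cntS,
           COUNT(CASE color WHEN 'Bronze medal' THEN 1 END) AS cntB
    FROM Medals
    GROUP BY countryName
    ORDER BY cntG DESC, cntS DESC, cntB DESC
""").fetchall()
print(rows)  # [('USA', 2, 0, 1), ('GBR', 1, 1, 0)]
```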
Sorry but your query is a real mess. You can't simply join the 3 subqueries that way. However, you can have what you want in one query. I'll give you the pseudo code and leave you the details :) ``` SELECT [countryName], SUM(color like '%Gold%') as total_gold, SUM(color like '%Silver%') as total_silver, SUM(color like '%Bronze%') as total_bronze, COUNT(*) as total FROM Medals INNER JOIN Participant (...) INNER JOIN Country (...) INNER JOIN Game (...) WHERE (...) GROUP BY [countryName] ORDER BY total_gold DESC, total_silver DESC, total_bronze DESC; ```
SQL: count from same table/field, but different values
[ "", "sql", "subquery", "" ]
I have a dictionary where the value elements are lists: ``` d1={'A': [], 'C': ['SUV'], 'B': []} ``` I need to concatenate the values into a single list ,only if the list is non-empty. Expected output: ``` o=['SUV'] ``` Help is appreciated.
``` from itertools import chain d1={'A': [], 'C': ['SUV'], 'B': []} print list(chain.from_iterable(d1.itervalues())) ```
You can use `itertools.chain`, but the order can be arbitrary as dicts are unordered collection. So may have have to sort the dict based on keys or values to get the desired result. ``` >>> d1={'A': [], 'C': ['SUV'], 'B': []} >>> from itertools import chain >>> list(chain(*d1.values())) # or use d1.itervalues() as it returns an iterator(memory efficient) ['SUV'] ```
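If the arbitrary dict ordering matters, iterate the keys in sorted order, e.g.:

```python
from itertools import chain

d1 = {'A': [], 'C': ['SUV'], 'B': ['van']}
# Empty value lists contribute nothing; keys are visited as A, B, C.
merged = list(chain.from_iterable(d1[k] for k in sorted(d1)))
print(merged)  # ['van', 'SUV']
```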
Concatenate lists together
[ "", "python", "" ]
I'm running Python3.3 on Win64 and am having trouble following the installation instructions for Jinja2. I followed the suggestion here ([Jinja install for python](https://stackoverflow.com/questions/6726983/jinja-install-for-python)) but my installation of Python 3.3 doesn't have an easy\_install.exe as described. I downloaded the distribute-0.6.45.tar file and ran the distribute\_setup.py as described in the README file, but when I type easy\_install Jinja2 from the python shell, I get a SyntaxError. I've been spoiled by years of double clicking on setup.exe files to install software and am unfamiliar with terms such as "egg" and "pip" when reading through the Jinja2 installation instructions. If someone could shed some light it would be much appreciated.
To use easy\_install and pip, install the Windows setuptools: <https://pypi.python.org/pypi/setuptools> Python for Windows Extensions is also very helpful:
Please go to <http://www.lfd.uci.edu/~gohlke/pythonlibs/#setuptools> 1. Download and install the appropriate setuptools .exe for your computer. 2. Download and install the appropriate jinja2 .exe for your computer. 3. From your command prompt, run the install: C:\Python27\Lib\Jinja2(version)>setup.py install 4. A jinja2 folder should be created in C:\Python27\Lib\site-packages\jinja2 5. In your .yaml file add: ``` - name: jinja2 version: latest ``` Now your application should run.
Trouble installing Jinja2 on Windows
[ "", "python", "installation", "jinja2", "python-3.3", "" ]
I am looking for all the functions that have a parameter called *adjustable*. One of these functions is **matplotlib.pyplot.figure().add\_axes**. `help(matplotlib.pyplot.figure().add_axes)` describes that parameter, which can be passed in the *kwargs* dictionary. I tried pydoc.apropos: ``` from pydoc import apropos In [8]: apropos ('adjustable') No handlers could be found for logger "OpenGL.Tk" In [9]: ``` This is all that `apropos(key)` returned. One brute-force way to find what I am looking for is to grep the Python source code under /usr/share/, but I need to do it from the current Python environment (only what is currently loaded in the interpreter).
I just noticed you're using IPython, if so, there's an extension called [grasp](https://pypi.python.org/pypi/grasp/0.3.2) which implements its own version of apropos that may be useful here. The documentation even uses `matplotlib` in its example.
Here is a quickly written function that walks all current packages in the specified path, using `ast` to find matching parameters, and returning `(filename, funcname, line_no)` for each match. ``` import ast import pkgutil import os.path class FindParameter(ast.NodeVisitor): def __init__(self, parameter): self.parameter = parameter self.found = [] def visit_FunctionDef(self, node): for arg in node.args.args: if getattr(arg, 'id', None) == self.parameter: self.found.append(node) def apropos(parameter, path=None): paramFinder = FindParameter(parameter) for importer, modname, is_package in pkgutil.iter_modules(path=path): try: loader = importer.find_module(modname) loader.get_code() if loader.source: tree = ast.parse(loader.source, filename=loader.filename) paramFinder.visit(tree) except SyntaxError: pass # avoid any broken code in the current path return [ (loader.filename, found.name, found.lineno) for found in paramFinder.found ] all_selfs = apropos('self') ```
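A smaller self-contained variant of the same idea on Python 3, parsing a source string instead of walking installed packages (note that on Python 3 the argument name lives in `arg.arg`, whereas the Python 2 code above reads `arg.id`); `SOURCE` is made-up example code:

```python
import ast

SOURCE = """
def add_axes(rect, adjustable='box'):
    pass

def plot(x, y):
    pass
"""

def funcs_with_param(source, parameter):
    # Return (function name, line number) for every def taking `parameter`.
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if any(arg.arg == parameter for arg in node.args.args):
                found.append((node.name, node.lineno))
    return found

print(funcs_with_param(SOURCE, "adjustable"))  # [('add_axes', 2)]
```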
Python *apropos* command
[ "", "python", "matplotlib", "" ]
I'm looking for the equivalent to the vlookup function in excel. I have a script where I read in a csv file. I would like to be able to query an associated value from another column in the .csv. Script so far: ``` import matplotlib import matplotlib.mlab as mlab import glob for files in glob.glob("*.csv"): print files r = mlab.csv2rec(files) r.cols = r.dtype.names depVar = r[r.cols[0]] indVar = r[r.cols[1]] print indVar ``` This will read in from .csv files in the same folder the script is in. In the above example depVar is the first column in the .csv, and indVar is the second column. In my case, I know a value for indVar, and I want to return the associated value for depVar. I'd like to add a command like: ``` depVar = r[r.cols[0]] indVar = r[r.cols[1]] print indVar depVarAt5 = lookup value in depVar where indVar = 5 (I could sub in things for the 5 later) ``` In my case, all values in all fields are numbers and all of the values of indVar are unique. I want to be able to define a new variable (depVarAt5 in last example) equal to the associated value. Here's example .csv contents, name the file anything and place it in same folder as script. In this example, depVarAt5 should be set equal to 16.1309. ``` Temp,Depth 16.1309,5 16.1476,94.4007 16.2488,100.552 16.4232,106.573 16.4637,112.796 16.478,118.696 16.4961,124.925 16.5105,131.101 16.5462,137.325 16.7016,143.186 16.8575,149.101 16.9369,155.148 17.0462,161.187 ```
I think this solves your problem quite directly: ``` import numpy import glob for f in glob.glob("*.csv"): print f r = numpy.recfromcsv(f) print numpy.interp(5, r.depth, r.temp) ``` I'm pretty sure numpy is a prerequisite for matplotlib.
Not sure what that `r` object is, but since it has a member called `cols`, I'm going to assume it also has a member called `rows` which contains the row data. ``` >>> r.rows [[16.1309, 5], [16.1476, 94.4007], ...] ``` In that case, your pseudocode very nearly contains a valid generator expression/list comprehension. ``` depVarAt5 = lookup value in depVar where indVar = 5 (I could sub in things for the 5 later) ``` becomes ``` depVarAt5 = [row[0] for row in r.rows if row[1] == 5] ``` Or, more generally ``` depVarValue = [row[depVarColIndex] for row in r.rows if row[indVarColIndex] == searchValue] ``` so ``` def vlookup(rows, searchColumn, dataColumn, searchValue): return [row[dataColumn] for row in rows if row[searchColumn] == searchValue] ``` Throw a `[0]` on the end of that if you can guarantee there will be exactly one output per input. There's also a `csv` module in the Python standard libary which you might prefer to work with. =)
basic python vlookup equivalent
[ "", "python", "" ]
Using `.read()` to read a file, how would I split on two objects at once? I'm trying to split on commas, and `"\n"` simultaneously, but when I split on commas first, it turns my string into a list, in which I cannot split again. Here is the string I'm trying to split: `'States, Total Score, Critical Reading, Mathematics, Writing, Participation (%)\nWashington,1564,524,532,508,41.2000\nNewHampshire,1554,520,524,510,64.0000\nMassachusetts,1547,512,526,509,72.1000\nOregon,1546,523,524,499,37.1000\nVermont,1546,519,512,506,64.0000\nArizona,1544,519,525,500,22.4000\nConnecticut,1536,509,514,513,71.2000\nAlaska,1524,518,515,491,32.7000\nVirginia,1521,512,512,497,56.0000\nCalifornia,1517,501,516,500,37.5000\nNewJersey,1506,495,514,497,69.0000\nMaryland,1502,501,506,495,56.7000\nNorthCarolina,1485,497,511,477,45.5000\nRhodeIsland,1477,494,495,488,60.8000\nIndiana,1476,494,505,477,52.0000\nFlorida,1473,496,498,479,44.7000\nPennsylvania,1473,492,501,480,62.3000\nNevada,1470,496,501,473,25.9000\nDelaware,1469,493,495,481,59.2000\nTexas,1462,484,505,473,41.5000\nNewYork,1461,484,499,478,59.6000\nHawaii,1458,483,505,470,47.1000\nGeorgia,1453,488,490,475,46.5000\nSouthCarolina,1447,484,495,468,40.7000\nMaine,1389,468,467,454,87.1000\nIowa,1798,603,613,582,2.7000\nMinnesota,1781,594,607,580,6.0000\nWisconsin,1778,595,604,579,3.8000\nMissouri,1768,593,595,580,3.6000\nMichigan,1766,585,605,576,3.8000\nSouthDakota,1766,592,603,571,2.0000\nIllinois,1762,585,600,577,4.6700\nKansas,1752,590,595,567,4.7000\nNebraska,1746,585,593,568,3.9000\nNorthDakota,1733,580,594,559,3.4000\nKentucky,1713,575,575,563,5.0000\nTennessee,1712,576,571,565,6.4000\nColorado,1695,568,572,555,14.1000\nArkansas,1684,566,566,552,3.5000\nOklahoma,1684,569,568,547,3.8000\nWyoming,1683,570,567,546,3.6000\nUtah,1674,568,559,547,4.5000\nMississippi,1666,566,548,552,2.2000\nLouisiana,1652,555,550,547,4.0000\nAlabama,1650,556,550,544,5.4000\nNewMexico,1636,553,549,534,7.1000\nOhio,1609,538,548,522,17.2000\nIdaho,1601,5
43,541,517,14.6000\nMontana,1593,538,538,517,20.0000\nWest Virginia,1522,515,507,500,13.2000\n'`
You can use a list comprehension: ``` >>> strs = 'States, Total Score, Critical Reading, Mathematics, Writing, Participation (%)\nWashington,1564,524,532,508,41.2000\nNewHampshire,1554,520,524,510,64.0000\nMassachusetts,1547,512,526,509,72.1000\nOregon,1546,523,524,499,37.1000\nVermont,1546,519,512,506,64.0000\nArizona,1544,519,525,500,22.4000\nConnecticut,1536,509,514,513,71.2000\nAlaska,1524,518,515,491,32.7000\nVirginia,1521,512,512,497,56.0000\nCalifornia,1517,501,516,500,37.5000\nNewJersey,1506,495,514,497,69.0000\nMaryland,1502,501,506,495,56.7000\nNorthCarolina,1485,497,511,477,45.5000\nRhodeIsland,1477,494,495,488,60.8000\nIndiana,1476,494,505,477,52.0000\nFlorida,1473,496,498,479,44.7000\nPennsylvania,1473,492,501,480,62.3000\nNevada,1470,496,501,473,25.9000\nDelaware,1469,493,495,481,59.2000\nTexas,1462,484,505,473,41.5000\nNewYork,1461,484,499,478,59.6000\nHawaii,1458,483,505,470,47.1000\nGeorgia,1453,488,490,475,46.5000\nSouthCarolina,1447,484,495,468,40.7000\nMaine,1389,468,467,454,87.1000\nIowa,1798,603,613,582,2.7000\nMinnesota,1781,594,607,580,6.0000\nWisconsin,1778,595,604,579,3.8000\nMissouri,1768,593,595,580,3.6000\nMichigan,1766,585,605,576,3.8000\nSouthDakota,1766,592,603,571,2.0000\nIllinois,1762,585,600,577,4.6700\nKansas,1752,590,595,567,4.7000\nNebraska,1746,585,593,568,3.9000\nNorthDakota,1733,580,594,559,3.4000\nKentucky,1713,575,575,563,5.0000\nTennessee,1712,576,571,565,6.4000\nColorado,1695,568,572,555,14.1000\nArkansas,1684,566,566,552,3.5000\nOklahoma,1684,569,568,547,3.8000\nWyoming,1683,570,567,546,3.6000\nUtah,1674,568,559,547,4.5000\nMississippi,1666,566,548,552,2.2000\nLouisiana,1652,555,550,547,4.0000\nAlabama,1650,556,550,544,5.4000\nNewMexico,1636,553,549,534,7.1000\nOhio,1609,538,548,522,17.2000\nIdaho,1601,543,541,517,14.6000\nMontana,1593,538,538,517,20.0000\nWest Virginia,1522,515,507,500,13.2000\n' >>> [ y for x in strs.splitlines() for y in x.split(",")] ['States', ' Total Score', ' Critical Reading', ' Mathematics', ' 
Writing', ' Participation (%)', 'Washington', '1564', '524', '532', '508', '41.2000', 'NewHampshire', '1554', '520', '524', '510', '64.0000', 'Massachusetts', '1547', '512', '526', '509', '72.1000', 'Oregon', '1546', '523', '524', '499', '37.1000', 'Vermont', '1546', '519', '512', '506', '64.0000', 'Arizona', '1544', '519', '525', '500', '22.4000', 'Connecticut', '1536', '509', '514', '513', '71.2000', 'Alaska', '1524', '518', '515', '491', '32.7000', 'Virginia', '1521', '512', '512', '497', '56.0000', 'California', '1517', '501', '516', '500', '37.5000', 'NewJersey', '1506', '495', '514', '497', '69.0000', 'Maryland', '1502', '501', '506', '495', '56.7000', 'NorthCarolina', '1485', '497', '511', '477', '45.5000', 'RhodeIsland', '1477', '494', '495', '488', '60.8000', 'Indiana', '1476', '494', '505', '477', '52.0000', 'Florida', '1473', '496', '498', '479', '44.7000', 'Pennsylvania', '1473', '492', '501', '480', '62.3000', 'Nevada', '1470', '496', '501', '473', '25.9000', 'Delaware', '1469', '493', '495', '481', '59.2000', 'Texas', '1462', '484', '505', '473', '41.5000', 'NewYork', '1461', '484', '499', '478', '59.6000', 'Hawaii', '1458', '483', '505', '470', '47.1000', 'Georgia', '1453', '488', '490', '475', '46.5000', 'SouthCarolina', '1447', '484', '495', '468', '40.7000', 'Maine', '1389', '468', '467', '454', '87.1000', 'Iowa', '1798', '603', '613', '582', '2.7000', 'Minnesota', '1781', '594', '607', '580', '6.0000', 'Wisconsin', '1778', '595', '604', '579', '3.8000', 'Missouri', '1768', '593', '595', '580', '3.6000', 'Michigan', '1766', '585', '605', '576', '3.8000', 'SouthDakota', '1766', '592', '603', '571', '2.0000', 'Illinois', '1762', '585', '600', '577', '4.6700', 'Kansas', '1752', '590', '595', '567', '4.7000', 'Nebraska', '1746', '585', '593', '568', '3.9000', 'NorthDakota', '1733', '580', '594', '559', '3.4000', 'Kentucky', '1713', '575', '575', '563', '5.0000', 'Tennessee', '1712', '576', '571', '565', '6.4000', 'Colorado', '1695', '568', '572', 
'555', '14.1000', 'Arkansas', '1684', '566', '566', '552', '3.5000', 'Oklahoma', '1684', '569', '568', '547', '3.8000', 'Wyoming', '1683', '570', '567', '546', '3.6000', 'Utah', '1674', '568', '559', '547', '4.5000', 'Mississippi', '1666', '566', '548', '552', '2.2000', 'Louisiana', '1652', '555', '550', '547', '4.0000', 'Alabama', '1650', '556', '550', '544', '5.4000', 'NewMexico', '1636', '553', '549', '534', '7.1000', 'Ohio', '1609', '538', '548', '522', '17.2000', 'Idaho', '1601', '543', '541', '517', '14.6000', 'Montana', '1593', '538', '538', '517', '20.0000', 'West Virginia', '1522', '515', '507', '500', '13.2000'] ``` If you want a list of lists containing each line split at `,`: ``` >>> [x.split(",") for x in strs.splitlines()] [['States', ' Total Score', ' Critical Reading', ' Mathematics', ' Writing', ' Participation (%)'], ['Washington', '1564', '524', '532', '508', '41.2000'], ['NewHampshire', '1554', '520', '524', '510', '64.0000'], ['Massachusetts', '1547', '512', '526', '509', '72.1000'], ['Oregon', '1546', '523', '524', '499', '37.1000'], ['Vermont', '1546', '519', '512', '506', '64.0000'], ['Arizona', '1544', '519', '525', '500', '22.4000'], ['Connecticut', '1536', '509', '514', '513', '71.2000'], ['Alaska', '1524', '518', '515', '491', '32.7000'], ['Virginia', '1521', '512', '512', '497', '56.0000'], ['California', '1517', '501', '516', '500', '37.5000'], ['NewJersey', '1506', '495', '514', '497', '69.0000'], ['Maryland', '1502', '501', '506', '495', '56.7000'], ['NorthCarolina', '1485', '497', '511', '477', '45.5000'], ['RhodeIsland', '1477', '494', '495', '488', '60.8000'], ['Indiana', '1476', '494', '505', '477', '52.0000'], ['Florida', '1473', '496', '498', '479', '44.7000'], ['Pennsylvania', '1473', '492', '501', '480', '62.3000'], ['Nevada', '1470', '496', '501', '473', '25.9000'], ['Delaware', '1469', '493', '495', '481', '59.2000'], ['Texas', '1462', '484', '505', '473', '41.5000'], ['NewYork', '1461', '484', '499', '478', '59.6000'], 
['Hawaii', '1458', '483', '505', '470', '47.1000'], ['Georgia', '1453', '488', '490', '475', '46.5000'], ['SouthCarolina', '1447', '484', '495', '468', '40.7000'], ['Maine', '1389', '468', '467', '454', '87.1000'], ['Iowa', '1798', '603', '613', '582', '2.7000'], ['Minnesota', '1781', '594', '607', '580', '6.0000'], ['Wisconsin', '1778', '595', '604', '579', '3.8000'], ['Missouri', '1768', '593', '595', '580', '3.6000'], ['Michigan', '1766', '585', '605', '576', '3.8000'], ['SouthDakota', '1766', '592', '603', '571', '2.0000'], ['Illinois', '1762', '585', '600', '577', '4.6700'], ['Kansas', '1752', '590', '595', '567', '4.7000'], ['Nebraska', '1746', '585', '593', '568', '3.9000'], ['NorthDakota', '1733', '580', '594', '559', '3.4000'], ['Kentucky', '1713', '575', '575', '563', '5.0000'], ['Tennessee', '1712', '576', '571', '565', '6.4000'], ['Colorado', '1695', '568', '572', '555', '14.1000'], ['Arkansas', '1684', '566', '566', '552', '3.5000'], ['Oklahoma', '1684', '569', '568', '547', '3.8000'], ['Wyoming', '1683', '570', '567', '546', '3.6000'], ['Utah', '1674', '568', '559', '547', '4.5000'], ['Mississippi', '1666', '566', '548', '552', '2.2000'], ['Louisiana', '1652', '555', '550', '547', '4.0000'], ['Alabama', '1650', '556', '550', '544', '5.4000'], ['NewMexico', '1636', '553', '549', '534', '7.1000'], ['Ohio', '1609', '538', '548', '522', '17.2000'], ['Idaho', '1601', '543', '541', '517', '14.6000'], ['Montana', '1593', '538', '538', '517', '20.0000'], ['West Virginia', '1522', '515', '507', '500', '13.2000']] ``` Instead of generating the whole list at once you can use `itertools.chain` to get elements lazily (Or even better if you iterate over one line at once, prefer [@Martijn Pieters's solution](https://stackoverflow.com/a/16761978/846892) in that case): ``` >>> from itertools import chain >>> for elem in chain(*(x.split(",") for x in strs.splitlines())): ... print elem ... 
States Total Score Critical Reading Mathematics Writing Participation (%) Washington ... ```
Don't read the whole file in one go, read *per line*, then split: ``` with open(filepath) as f: for line in f: print line.strip().split(',') ``` You could also first split on newlines, then loop and split on commas: ``` lines = [line.split(',') for line in somestring.splitlines()] ``` But for comma-separated files, your best bet is to use the `csv` module: ``` import csv with open(filepath, 'rb') as f: reader = csv.reader(f, delimiter=',') for row in reader: print row ``` This gives you the rows as: ``` ['States', ' Total Score', ' Critical Reading', ' Mathematics', ' Writing', ' Participation (%)'] ['Washington', '1564', '524', '532', '508', '41.2000'] ['NewHampshire', '1554', '520', '524', '510', '64.0000'] ``` Since you have a first row with headers, you could use a `DictReader` as well and get dictionaries mapping headers to values: ``` with open(filepath, 'rb') as f: reader = csv.DictReader(f, delimiter=',') for row in reader: print row # address columns as: row['States'], row['Total Score'] ``` which outputs rows as: ``` {' Writing': '508', ' Total Score': '1564', ' Critical Reading': '524', 'States': 'Washington', ' Mathematics': '532', ' Participation (%)': '41.2000'} ```
How to split on two items in a string?
[ "", "python", "string", "list", "split", "" ]
I'm trying to get `python` to return, as close as possible, the **center** of the most obvious clustering in an image like the one below: ![image](https://i.stack.imgur.com/iE0C3.png) In my [previous question](https://stackoverflow.com/questions/16822334/find-peak-of-2d-histogram) I asked how to get the global maximum and the local maximums of a 2d array, and the answers given worked perfectly. The issue is that the center estimation I can get by averaging the global maximum obtained with different bin sizes is always slightly off than the one I would set *by eye*, because I'm only accounting for the biggest **bin** instead of a **group** of biggest bins (like one does by eye). I tried adapting the [answer to this question](https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array) to my problem, but it turns out my image is *too noisy* for that algorithm to work. Here's my code implementing that answer: ``` import numpy as np from scipy.ndimage.filters import maximum_filter from scipy.ndimage.morphology import generate_binary_structure, binary_erosion import matplotlib.pyplot as pp from os import getcwd from os.path import join, realpath, dirname # Save path to dir where this code exists. mypath = realpath(join(getcwd(), dirname(__file__))) myfile = 'data_file.dat' x, y = np.loadtxt(join(mypath,myfile), usecols=(1, 2), unpack=True) xmin, xmax = min(x), max(x) ymin, ymax = min(y), max(y) rang = [[xmin, xmax], [ymin, ymax]] paws = [] for d_b in range(25, 110, 25): # Number of bins in x,y given the bin width 'd_b' binsxy = [int((xmax - xmin) / d_b), int((ymax - ymin) / d_b)] H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy) paws.append(H) def detect_peaks(image): """ Takes an image and detect the peaks usingthe local maximum filter. Returns a boolean mask of the peaks (i.e. 
1 when the pixel's value is the neighborhood maximum, 0 otherwise) """ # define an 8-connected neighborhood neighborhood = generate_binary_structure(2,2) #apply the local maximum filter; all pixel of maximal value #in their neighborhood are set to 1 local_max = maximum_filter(image, footprint=neighborhood)==image #local_max is a mask that contains the peaks we are #looking for, but also the background. #In order to isolate the peaks we must remove the background from the mask. #we create the mask of the background background = (image==0) #a little technicality: we must erode the background in order to #successfully subtract it form local_max, otherwise a line will #appear along the background border (artifact of the local maximum filter) eroded_background = binary_erosion(background, structure=neighborhood, border_value=1) #we obtain the final mask, containing only peaks, #by removing the background from the local_max mask detected_peaks = local_max - eroded_background return detected_peaks #applying the detection and plotting results for i, paw in enumerate(paws): detected_peaks = detect_peaks(paw) pp.subplot(4,2,(2*i+1)) pp.imshow(paw) pp.subplot(4,2,(2*i+2) ) pp.imshow(detected_peaks) pp.show() ``` and here's the result of that (varying the bin size): ![enter image description here](https://i.stack.imgur.com/duZ7f.png) Clearly my background is too noisy for that algorithm to work, so the question is: **how can I make that algorithm less sensitive**? If an alternative solution exists then please let me know. --- ## EDIT Following Bi Rico advise I attempted smoothing my 2d array before passing it on to the local maximum finder, like so: ``` H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy) H1 = gaussian_filter(H, 2, mode='nearest') paws.append(H1) ``` These were the results with a `sigma` of 2, 4 and 8: ![enter image description here](https://i.stack.imgur.com/ZciZY.png) ## EDIT 2 A `mode ='constant'` seems to work much better than `nearest`. 
It converges to the right center with a `sigma=2` for the largest bin size: ![enter image description here](https://i.stack.imgur.com/AYc24.png) So, **how do I get the coordinates of the maximum that shows in the last image?**
I'm adding this answer because it's the solution I ended up using. It's a combination of Bi Rico's comment here (May 30 at 18:54) and the answer given in this question: [Find peak of 2d histogram](https://stackoverflow.com/questions/16822334/find-peak-of-2d-histogram). As it turns out, using the peak detection algorithm from this question [Peak detection in a 2D array](https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array) only complicates matters. After applying the Gaussian filter to the image all that needs to be done is to ask for the maximum bin (as Bi Rico pointed out) and then obtain the maximum in coordinates. So instead of using the *detect-peaks* function as I did above, I simply add the following code after the Gaussian 2D histogram is obtained: ``` # Get 2D histogram. H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy) # Get Gaussian filtered 2D histogram. H1 = gaussian_filter(H, 2, mode='nearest') # Get center of maximum in bin coordinates. x_cent_bin, y_cent_bin = np.unravel_index(H1.argmax(), H1.shape) # Get center in x,y coordinates. x_cent_coord, y_cent_coord = np.average(xedges[x_cent_bin:x_cent_bin + 2]), np.average(yedges[y_cent_bin:y_cent_bin + 2]) ```
Answering the last part of your question, always you have points in an image, you can find their coordinates by searching, in some order, the local maximums of the image. In case your data is not a point source, you can apply a mask to each peak in order to avoid the peak neighborhood from being a maximum while performing a future search. I propose the following code: ``` import matplotlib.image as mpimg import matplotlib.pyplot as plt import numpy as np import copy def get_std(image): return np.std(image) def get_max(image,sigma,alpha=20,size=10): i_out = [] j_out = [] image_temp = copy.deepcopy(image) while True: k = np.argmax(image_temp) j,i = np.unravel_index(k, image_temp.shape) if(image_temp[j,i] >= alpha*sigma): i_out.append(i) j_out.append(j) x = np.arange(i-size, i+size) y = np.arange(j-size, j+size) xv,yv = np.meshgrid(x,y) image_temp[yv.clip(0,image_temp.shape[0]-1), xv.clip(0,image_temp.shape[1]-1) ] = 0 print xv else: break return i_out,j_out #reading the image image = mpimg.imread('ggd4.jpg') #computing the standard deviation of the image sigma = get_std(image) #getting the peaks i,j = get_max(image[:,:,0],sigma, alpha=10, size=10) #let's see the results plt.imshow(image, origin='lower') plt.plot(i,j,'ro', markersize=10, alpha=0.5) plt.show() ``` The image ggd4 for the test can be downloaded from: <http://www.ipac.caltech.edu/2mass/gallery/spr99/ggd4.jpg> The first part is to get some information about the noise in the image. I did it by computing the standard deviation of the full image (actually is better to select an small rectangle without signal). This is telling us how much noise is present in the image. The idea to get the peaks is to ask for successive maximums, which are above of certain threshold (let's say, 3, 4, 5, 10, or 20 times the noise). This is what the function get\_max is actually doing. It performs the search of maximums until one of them is below the threshold imposed by the noise. 
In order to avoid finding the same maximum many times, it is necessary to remove the peaks from the image. In general, the shape of the mask used to do so depends strongly on the problem one wants to solve. For the case of stars, it would be good to remove the star using a Gaussian function, or something similar. For simplicity I have chosen a square mask, whose size (in pixels) is the variable "size". I think that from this example, anybody can improve the code by adding more general things. *EDIT:* The original image looks like: [![enter image description here](https://i.stack.imgur.com/AUUUI.jpg)](https://i.stack.imgur.com/AUUUI.jpg) While the image after identifying the luminous points looks like this: [![enter image description here](https://i.stack.imgur.com/HzaHW.png)](https://i.stack.imgur.com/HzaHW.png)
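The mask-and-repeat search above relies on numpy, but the idea itself is simple enough to sketch in plain Python. The following is illustrative only (function and variable names are mine): take the global maximum, record it if it clears the noise threshold, zero out a square neighborhood around it, and repeat until only background remains.

```python
def find_peaks(grid, threshold, size=1):
    """Greedy peak search on a 2D list: take the global max,
    mask its neighborhood, repeat until below threshold."""
    g = [row[:] for row in grid]  # work on a copy, leave the input intact
    rows, cols = len(g), len(g[0])
    peaks = []
    while True:
        # global maximum and its coordinates
        val, j, i = max((g[r][c], r, c)
                        for r in range(rows) for c in range(cols))
        if val < threshold:
            break
        peaks.append((j, i))
        # zero out a (2*size+1)-wide square so this peak is not found again
        for r in range(max(0, j - size), min(rows, j + size + 1)):
            for c in range(max(0, i - size), min(cols, i + size + 1)):
                g[r][c] = 0
    return peaks

print(find_peaks([[0, 0, 0], [0, 9, 0], [0, 0, 0]], threshold=5))  # [(1, 1)]
```

For real images the numpy version above will be far faster; this is only meant to make the masking logic explicit.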
Peak detection in a noisy 2d array
[ "", "python", "image-processing", "numpy", "matplotlib", "" ]
I noticed something very weird. I tried to look at where it was going wrong and realised that for some reason, the `nest[0]` list keeps changing midway through the loop. ``` >>> nest = [['prefix'],['line 1'],['line 2']] >>> for part in nest[1:]: ... list = nest[0] ... list += part ... print list ``` The output that I get is: ``` ['prefix', 'line 1'] ['prefix', 'line 1', 'line 2'] ``` Whereas, what I need is: ``` ['prefix', 'line 1'] ['prefix', 'line 2'] ``` Can somebody explain why this happens? I might be doing something very stupid.
EDIT: with explanation of pointers, as requested Your problem is that when you assign nest[0] to list, you are not creating a new list; you're just assigning a pointer. That pointer is directed at the list containing ['prefix']. On your first iteration, you add something to this list. On your second iteration, you don't make a NEW list, you just repoint to the old one. Then when you append again, you're appending to the old list! What you mean is: ``` nest = [['prefix'],['line 1'],['line 2']] for part in nest[1:]: list = [] + nest[0] list += part print list ``` There are several ways to think about this. Here's one. Say you had a deck of cards object: ``` myobj = Deck() ``` If I then say ``` myobj2 = myobj ``` I haven't created a new deck of cards; it would be like someone else looking at the deck I already have. We need to be able to do that to do a lot of programming (it's fundamental to object-oriented design)! I would need to say ``` myobj3 = Deck() ``` to construct a new deck of cards object. Consider: ``` myobj.shuffle() #we're shuffling one deck that two people are looking at ``` Both myobj and myobj2 will change. Calling myobj3.shuffle() leaves the other two untouched. What you've done is told someone to re-look at the same deck, where you meant to make a new one!
`list = nest[0]` means you assign a pointer to nest[0] to the variable name `list`. If you want your expected output, you need to create a new list to make sure it will not affect the original. ``` nest = [['prefix'],['line 1'],['line 2']] for part in nest[1:]: list = nest[0] + part print list ``` `nest[0] + part` will create a new list and assign it to `list`.
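A quick runnable check (Python 3 syntax, variable name `combined` is mine) showing that `nest[0] + part` builds a fresh list each time and leaves `nest[0]` untouched:

```python
nest = [['prefix'], ['line 1'], ['line 2']]
out = []
for part in nest[1:]:
    combined = nest[0] + part  # new list each iteration; nest[0] is never modified
    out.append(combined)

print(out)      # [['prefix', 'line 1'], ['prefix', 'line 2']]
print(nest[0])  # ['prefix'] -- still intact
```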
For loop through a nested-list
[ "", "python", "nested-lists", "" ]
I'm modifying a query I have that pulls news items from my database. These news items have tags, which, in the database, is stored in a single column as a string separated by commas. For example: ``` 'content,video,featured video,foo' ``` What I'm trying to do is grab all the items in the table but **not** the items that contain `'video'` in the tags string, **unless** the tag string also contains `'featured video'` What is the best way to do this? Here is my query: ``` SELECT * FROM posts WHERE status = 2 ORDER BY postDate ```
I'm offering a horrible thing, but if you want to stick to your table structure, you may try the following: ``` SELECT * FROM posts WHERE STATUS=2 AND (INSTR(tags,'featured video')>0 OR INSTR(tags,'video')=0) ``` Note the parentheses: without them, `AND` binds tighter than `OR`, so the `STATUS` filter would only apply to the first branch. At least use a `FULLTEXT` index on that field, so it won't be this painful to use.
I would use a query like this: ``` SELECT * FROM posts WHERE status = 2 AND (CONCAT(',', tags, ',') LIKE '%,featured video,%' OR CONCAT(',', tags, ',') NOT LIKE '%,video,%') ORDER BY postDate ```
SQL Query - Don't grab this unless this
[ "", "sql", "" ]
I have this php mysql statement ``` SELECT a.*, p.filename, m.`first name`, m.`last name`, m.`mobile number`, m.`status`, m.`email address` FROM map a join members m on a.members_id = m.id join pictures p on m.pictures_id = p.id WHERE a.active = 1 GROUP BY a.members_id order by a.`date added` DESC limit 1; ``` However it's not working. The map table has records, and many of them can have the same `members_id` value. I want to group them by the `members_id`, then order them by `date added`, so the most recent is on top of each group, then only get the top row (i.e. get most recent of each group). Does anyone know whats wrong here? Thanks
Try: ``` select * from (SELECT a.*, p.filename, m.`first name`, m.`last name`, m.`mobile number`, m.`status`, m.`email address` FROM map a join members m on a.members_id = m.id join pictures p on m.pictures_id = p.id WHERE a.active = 1 order by a.members_id, a.`date added` DESC) sq GROUP BY members_id; ``` Note that the fact that MySQL returns the first row when grouping is not documented and may change in future releases - so although this query should work with current versions of MySQL, it is not guaranteed to do so in future.
If you want one result per member, you have to select it in two steps - so with a subquery. The inner query gets the newest map row per member and the outer query gets all the data. Be careful with the indices, otherwise it will be very slow. I think it will be something like: ``` SELECT a.*, p.filename, m.`first name`, m.`last name`, m.`mobile number`, m.`status`, m.`email address` FROM map a inner join members m on a.members_id = m.id inner join pictures p on m.pictures_id = p.id inner join ( select members_id, max(`date added`) as maxdate from map group by members_id ) as sub_a on sub_a.members_id = a.members_id and sub_a.maxdate = a.`date added` WHERE a.active = 1 ``` That depends on a single maximal `date added` per member, otherwise you will need some more tricks.
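For anyone who wants to try the greatest-row-per-group pattern without a MySQL server, here is a rough sqlite3 sketch of the same join (table and column names simplified; MySQL syntax differs slightly, e.g. backtick quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE map (members_id INTEGER, date_added TEXT, active INTEGER)")
con.executemany("INSERT INTO map VALUES (?, ?, ?)", [
    (1, "2013-01-01", 1),
    (1, "2013-03-01", 1),  # newer row for member 1
    (2, "2013-02-01", 1),
])

# Join each member to its own newest row via a derived table.
rows = con.execute("""
    SELECT a.members_id, a.date_added
    FROM map a
    JOIN (SELECT members_id, MAX(date_added) AS maxdate
          FROM map GROUP BY members_id) b
      ON a.members_id = b.members_id AND a.date_added = b.maxdate
    WHERE a.active = 1
    ORDER BY a.members_id
""").fetchall()
print(rows)  # [(1, '2013-03-01'), (2, '2013-02-01')]
```

ISO-formatted date strings compare correctly as text, which is why `MAX(date_added)` works here.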
How to order group by and get first row?
[ "", "mysql", "sql", "select", "" ]
I have some code that is meant to convert CSV files into tab delimited files. My problem is that I cannot figure out how to write the correct values in the correct order. Here is my code: ``` for file in import_dir: data = csv.reader(open(file)) fields = data.next() new_file = export_dir+os.path.basename(file) tab_file = open(export_dir+os.path.basename(file), 'a+') for row in data: items = zip(fields, row) item = {} for (name, value) in items: item[name] = value.strip() tab_file.write(item['name']+'\t'+item['order_num']...) tab_file.write('\n'+item['amt_due']+'\t'+item['due_date']...) ``` Now, since both my `write` statements are in the `for row in data` loop, my headers are being written multiple times over. If I outdent the first `write` statement, I'll have an obvious formatting error. If I move the second `write` statement above the first and then outdent, my data will be out of order. What can I do to make sure that the first `write` statement gets written once as a header, and the second gets written for each line in the CSV file? How do I extract the first 'write' statement outside of the loop without breaking the dictionary? Thanks!
The `csv` module contains methods for writing as well as reading, making this pretty trivial: ``` import csv with open("test.csv") as file, open("test_tab.csv", "w") as out: reader = csv.reader(file) writer = csv.writer(out, dialect=csv.excel_tab) for row in reader: writer.writerow(row) ``` No need to do it all yourself. Note my use of [the `with` statement](http://www.youtube.com/watch?v=lRaKmobSXF4), which should always be used when working with files in Python. Edit: Naturally, if you want to select specific values, you can do that easily enough. You appear to be making your own dictionary to select the values - again, the `csv` module provides `DictReader` to do that for you: ``` import csv with open("test.csv") as file, open("test_tab.csv", "w") as out: reader = csv.DictReader(file) writer = csv.writer(out, dialect=csv.excel_tab) for row in reader: writer.writerow([row["name"], row["order_num"], ...]) ``` As kirelagin points out in the comments, `csv.writerows()` could also be used, here with a [generator expression](http://www.youtube.com/watch?v=pShL9DCSIUw): ``` writer.writerows([row["name"], row["order_num"], ...] for row in reader) ```
Extract the code that writes the headers *outside* the main loop, in such a way that it only gets written exactly once at the beginning. Also, consider using the [CSV module](http://docs.python.org/2/library/csv.html) for writing CSV files (not just for reading), don't reinvent the wheel!
Trouble with Python order of operations/loop
[ "", "python", "" ]
I'm building a MongoDB database and the problem is that I want to avoid duplicate entries. At the moment I'm doing this (inserting document only after checking if entry doesn't exist): ``` from pymongo import Connection import pandas as pd from time import strftime from collections import OrderedDict connection = Connection() db = connection.mydb collection = db.mycollection data = pd.read_csv("data/myfile.csv", parse_dates=[2,5]) for i in range(len(data)): if(collection.find({ "id": data.ix[0], \ "date1": data.ix[i, 2].strftime("%Y-%m-%d"), \ "date2": data.ix[i, 5].strftime("%Y-%m-%d"), \ "number": int(data.ix[i, 6]), \ "type": data.ix[i, 7]}).count() == 0): collection.insert(here goes what I'd like to insert) ``` Which does work fine, but this already has significant performance issues (with just ~100Mb of data), as doing `find()` every time seems to slow things down significantly. Is there a way to speed things up? Maybe I'm doing it fundamentally wrong? I need to avoid duplicates only on a certain set of fields, not all of them (i.e., there is also "number2", which can be different, but I still want to have it as duplicate if all other fields match).
You can build a [unique index](http://docs.mongodb.org/manual/core/indexes/#index-type-unique) on the fields you are searching (mongo shell syntax): ``` db.mycollection.ensureIndex({_id:1, date1:1, date2:1, number:1, type:1}, {unique: true}); ``` And capture the constraint violation exception (and ignore it if appropriate) when duplicates are inserted. Typically this should increase performance as duplicates checking is done by an index lookup.
Checking before inserting is not a good way to prevent it. To prevent duplication of keys, use a primary key. See [how to set a primary key in mongodb](https://stackoverflow.com/questions/3298963/how-to-set-a-primary-key-in-mongodb) Also, if that doesn't work for you, at least add a [mongo index](http://docs.mongodb.org/manual/core/indexes/) The best way to solve this (I think) is to generate a key from all the fields that are relevant and then do one of the two: 1. Check on that key, which, if it is an index, will be faster 2. Make this key your primary key, and the insert will fail
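To illustrate the key-building idea in plain Python (field names here are made up; in practice the check would be a unique index enforced by the server, not a client-side set): build a tuple from only the fields that define a duplicate, so rows differing only in the other fields still count as duplicates.

```python
rows = [
    {"id": 1, "date1": "2013-01-01", "number": 5, "number2": 10},
    {"id": 1, "date1": "2013-01-01", "number": 5, "number2": 99},  # dup on key fields
    {"id": 2, "date1": "2013-01-02", "number": 7, "number2": 10},
]

seen = set()
unique_rows = []
for row in rows:
    # only the fields that define a duplicate; "number2" is deliberately excluded
    key = (row["id"], row["date1"], row["number"])
    if key not in seen:
        seen.add(key)
        unique_rows.append(row)

print(len(unique_rows))  # 2 -- the second row was dropped as a duplicate
```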
Checking if record exists in MongoDB
[ "", "python", "performance", "mongodb", "pymongo", "" ]
I have a string (it could be an integer too) in Python and I want to write it to a file. It contains only ones and zeros, and I want that pattern of ones and zeros to be written to a file. I want to write the binary directly because I need to store a lot of data, but only certain values. I see no need to take up the space of using eight bits per value when I only need three. For instance, let's say I were to write the binary string `"01100010"` to a file. If I opened it in a text editor it would say `b` (01100010 is the ASCII code for b). Do not be confused though. I do not want to write ASCII codes; the example was just to indicate that I want to directly write bytes to the file. --- Clarification: My string looks something like this: ``` binary_string = "001011010110000010010" ``` It is not made of the binary codes for numbers or characters. It contains data relative only to my program.
Alright, after quite a bit more searching, I found an answer. I believe that the rest of you simply didn't understand (which was probably my fault, as I had to edit twice to make it clear). I found it [here](http://www.linuxquestions.org/questions/programming-9/writing-binary-data-under-python-718165/). The answer was to split up each piece of data, convert them into a binary integer then put them in a binary array. After that, you can use the array's `tofile()` method to write to a file. ``` from array import * bin_array = array('B') bin_array.append(int('011',2)) bin_array.append(int('010',2)) bin_array.append(int('110',2)) with file('binary.mydata', 'wb') as f: bin_array.tofile(f) ```
To write out a string you can use the file's `.write` method. To write an integer, you will need to use the `struct` module ``` import struct #... with open('file.dat', 'wb') as f: if isinstance(value, int): f.write(struct.pack('i', value)) # write an int elif isinstance(value, str): f.write(value) # write a string else: raise TypeError('Can only write str or int') ``` However, the representations of int and string are different, so you may wish to use the `bin` function instead to turn it into a string of 0s and 1s ``` >>> bin(7) '0b111' >>> bin(7)[2:] #cut off the 0b '111' ``` but maybe the best way to handle all these `int`s is to decide on a fixed width for the binary strings in the file and convert them like so: ``` >>> x = 7 >>> '{0:032b}'.format(x) #32 character wide binary number with '0' as filler '00000000000000000000000000000111' ```
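To make the fixed-width framing concrete, here is one way (illustrative, not the only one) to pack an arbitrary bit string into whole bytes and recover it. Note the original bit length has to be stored separately, since padding to a byte boundary loses it:

```python
bits = "001011010110000010010"  # 21 bits of program-specific data

# Pad on the right with zeros up to a multiple of 8 bits.
padded = bits.ljust((len(bits) + 7) // 8 * 8, "0")

# Pack each group of 8 bits into one byte.
data = bytearray(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

# Recover the bit string; slice back to the original length.
recovered = "".join(format(b, "08b") for b in data)[:len(bits)]
print(recovered == bits)  # True
```

`data` can then be written with `f.write(data)` on a file opened in `'wb'` mode, taking 3 bytes instead of the 21 bytes the raw character string would occupy.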
Write a binary integer or string to a file in python
[ "", "python", "file", "file-io", "binary", "" ]
``` True and "asdf" or "qwer" ``` result => "asdf" ``` False and "asdf" or "qwer" ``` result => "qwer" I can't understand how these things work. I thought that a boolean operator on a string would raise a type error, but it didn't. Is it something like a predefined statement, for instance "a if b else c"?
Python *and* and *or* operations stop when the answer is determined and return the value of the last object scanned. They do not return *True* or *False*. I love this feature and find myself using it all the time. Since non-empty strings count as True ``` True and "asdf" or absolutely_anything_here_or_following ``` stops calculating when it hits the *or* because the answer is now determined (one of the *or* values is true), and returns the last thing it checked ("asdf"). No further operands are even inspected. On the other hand, when ``` False and "asdf" or absolutely_anything_here ``` hits the *or*, it doesn't know the answer yet, so it continues to the next operand. As long as absolutely\_anything\_here is the last operand, the answer is determined and the last thing scanned is returned.
`and` and `or` work just like the familiar boolean operators - `and` is true if both of its operands are true, and `or` is true if at least one of its operands is true. They also short-circuit, just like `&&` and `||`. However, in Python, where anything can be interpreted as `True` or `False` in a boolean context, there is an additional fact - they return the operand at which evaluation could stop: the first falsy operand for `and`, the first truthy operand for `or`, or the last operand if no short circuit happens. (This is as opposed to constructing and returning a real boolean `True` or `False`.) This is okay to do because, if the result is evaluated in a boolean context, it evaluates to the boolean it would have been anyway. Thus (note that `""` evaluates to `False` in a boolean context): ``` >>> "" and "a" '' >>> "a" and "b" 'b' >>> "a" and "" '' >>> >>> "" or "" '' >>> "a" or "" 'a' >>> "" or "a" 'a' >>> "a" or "b" 'a' >>> "" or False False >>> "" or True True >>> False and "" False ```
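A few more concrete cases (runnable as a script) showing that the operators return operands rather than booleans, including the original `and ... or ...` idiom from the question:

```python
examples = [
    (True and "asdf" or "qwer"),   # "asdf": and yields "asdf", or stops there
    (False and "asdf" or "qwer"),  # "qwer": and yields False, or moves on
    ("" or "fallback"),            # empty string is falsy, so "fallback"
    (0 or None or "last"),         # keeps scanning until truthy, else last operand
]
print(examples)  # ['asdf', 'qwer', 'fallback', 'last']
```

This `cond and a or b` idiom predates the conditional expression; since Python 2.5 the safer spelling is `a if cond else b`, which does not break when `a` is falsy.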
How does boolean operator work on string in python
[ "", "python", "" ]
I have a table with two columns ID,DESCRIPTION the ID column in not unique. I would like a DELETE query that will make the table have unique values in the ID column. For example, if this is my current table **ID DESCRIPTION** ``` - 5 ABC - 5 DEF - 6 XDX - 6 KKK - 7 AAA ``` I would like to modify the table so it will become **ID DESCRIPTION** ``` - 5 ABC - 6 XDX - 7 AAA ``` (I want to leave only one row for each id, doesn't matter which row, can be the first row for example) I am using mysql
> *"I would like a query that will make the table have unique values in the ID column."* This will give you one record for each `ID`. ``` SELECT ID, MIN(Description) Description FROM tableName GROUP BY ID ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/397bb/1) --- Here's a delete statement to remove duplicate `ID`, leaving only unique `ID` on the table. **WARNING: this will delete records from your table.** ``` DELETE a FROM tableName a LEFT JOIN ( SELECT ID, MIN(Description) Description FROM tableName GROUP BY ID ) b ON a.ID = b.ID AND a.Description = b.Description WHERE b.ID IS NULL ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/b842d/2)
The most elegant way is to use mysql's multiple-table delete syntax and join the table to itself: ``` delete d from mytable t join mytable d on d.id = t.id and d.description > t.description ``` The key part of this query is the greater-than comparison of `description`, which not only stops same-row joins but also predictably selects the duplicate to delete.
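For anyone wanting to experiment without a MySQL server, here is a rough sqlite3 sketch of the same dedup goal. SQLite has no multiple-table DELETE, so this variant keys on the implicit `rowid` instead (table name and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, description TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(5, "ABC"), (5, "DEF"), (6, "XDX"), (6, "KKK"), (7, "AAA")])

# Keep one arbitrary row per id (the earliest-inserted one), delete the rest.
con.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY id)
""")

rows = con.execute("SELECT id, description FROM t ORDER BY id").fetchall()
print(rows)  # [(5, 'ABC'), (6, 'XDX'), (7, 'AAA')]
```

As with the MySQL version, which duplicate survives is arbitrary unless you add an ordering criterion to the subquery.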
SQL unique query
[ "", "mysql", "sql", "" ]
I'm using Scrapy to crawl some pages. I fetch the start\_urls from an excel sheet and I need to save the url in the item. ``` class abc_Spider(BaseSpider): name = 'abc' allowed_domains = ['abc.com'] wb = xlrd.open_workbook(path + '/somefile.xlsx') wb.sheet_names() sh = wb.sheet_by_name(u'Sheet1') first_column = sh.col_values(15) start_urls = first_column handle_httpstatus_list = [404] def parse(self, response): item = abcspiderItem() item['url'] = response.url ``` The problem is that the url gets redirected to some other url (and thus gives something else in the response url). How do I get the original url that I got from the excel?
You can find what you need in `response.request.meta['redirect_urls']`. Quote from [docs](http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#std:reqmeta-redirect_urls): > The urls which the request goes through (while being redirected) can > be found in the redirect\_urls Request.meta key. Hope that helps.
This gave me the original 'referer URL', i.e. which of my start\_urls led to the URL corresponding to this request object being scraped: ``` req = response.request req_headers = req.__dict__['headers'] referer_url = req_headers['Referer'].decode('utf-8') ```
how to get the original start_url in scrapy (before redirect)
[ "", "python", "redirect", "web-scraping", "scrapy", "" ]
Sorry for the beginner question, but how can I put these queries into one data set rather than multiple queries? They are all being taken out of the same table. Also, as you can see, there is an "Open\_Time" column, which is a DATE format. How can I write that to say AND open\_date is within the last 60 days? ``` SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='Crosby' AND Severity=4 AND Open_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='Crosby' AND Severity=5 AND Open_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='Crosby' AND Severity=4 AND Close_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='Crosby' AND Severity=4 AND Close_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='EUC' AND Severity=4 AND Open_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='EUC' AND Severity=5 AND Open_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='EUC' AND Severity=4 AND Close_Time<=60; SELECT COUNT(P_NUMBER) FROM PROBLEM_REPORT WHERE Assignment='EUC' AND Severity=4 AND Close_Time<=60; ```
Try this sql. ``` SELECT COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND DATEDIFF(CURDATE(),Open_Time)<=60 THEN P_NUMBER END) as p1, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND DATEDIFF(CURDATE(),Open_Time)<=60 THEN P_NUMBER END) as p2, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND DATEDIFF(CURDATE(),Close_Time)<=60 THEN P_NUMBER END) as p3, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND DATEDIFF(CURDATE(),Close_Time)<=60 THEN P_NUMBER END) as p4, COUNT(CASE WHEN Assignment = 'EUC' AND Severity = 4 AND DATEDIFF(CURDATE(),Open_Time)<=60 THEN P_NUMBER END) as p5, COUNT(CASE WHEN Assignment = 'EUC' AND Severity = 5 AND DATEDIFF(CURDATE(),Open_Time)<=60 THEN P_NUMBER END) as p6, COUNT(CASE WHEN Assignment = 'EUC' AND Severity = 4 AND DATEDIFF(CURDATE(),Close_Time)<=60 THEN P_NUMBER END) as p7, COUNT(CASE WHEN Assignment = 'EUC' AND Severity = 5 AND DATEDIFF(CURDATE(),Close_Time)<=60 THEN P_NUMBER END) as p8 FROM PROBLEM_REPORT WHERE Assignment IN('EUC','Crosby') AND Severity IN(4,5) ```
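A rough way to sanity-check the conditional-aggregation pattern without a MySQL server is SQLite via Python's standard library. Note that `DATEDIFF()`/`CURDATE()` are MySQL functions, so this sketch keeps the question's plain numeric `Open_Time` column and invents a few sample rows; only the first three of the answer's eight counts are shown:

```python
import sqlite3

# SQLite stand-in to check the COUNT(CASE ...) pattern; the sample rows
# are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE PROBLEM_REPORT
               (P_NUMBER INT, Assignment TEXT, Severity INT,
                Open_Time INT, Close_Time INT)""")
con.executemany("INSERT INTO PROBLEM_REPORT VALUES (?,?,?,?,?)",
                [(1, 'Crosby', 4, 10, 90),
                 (2, 'Crosby', 4, 55, 20),
                 (3, 'Crosby', 5, 30, 20),
                 (4, 'EUC',    4, 70, 40),
                 (5, 'EUC',    4, 50, 50)])
row = con.execute("""
    SELECT COUNT(CASE WHEN Assignment='Crosby' AND Severity=4
                       AND Open_Time<=60 THEN P_NUMBER END) AS p1,
           COUNT(CASE WHEN Assignment='Crosby' AND Severity=5
                       AND Open_Time<=60 THEN P_NUMBER END) AS p2,
           COUNT(CASE WHEN Assignment='EUC' AND Severity=4
                       AND Open_Time<=60 THEN P_NUMBER END) AS p3
    FROM PROBLEM_REPORT""").fetchone()
print(row)  # one count per condition, all from a single scan of the table
```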
One way of doing this: ``` SELECT SUM(CASE WHEN Assignment = 'Crosby' AND Severity=4 AND Open_Time<=60 THEN 1 ELSE 0 END) AS P_number1 ,SUM(CASE WHEN Assignment = 'Crosby' AND Severity=5 AND Open_Time<=60 THEN 1 ELSE 0 END) AS P_number2 ,SUM(CASE WHEN Assignment = 'Crosby' AND Severity=4 AND close_time<=60 THEN 1 ELSE 0 END) AS P_number3 ,SUM(CASE WHEN Assignment = 'Crosby' AND Severity=5 AND close_time<=60 THEN 1 ELSE 0 END) AS P_number4 ,SUM(CASE WHEN Assignment = 'EUC' AND Severity=4 AND Open_Time<=60 THEN 1 ELSE 0 END) AS P_number5 ,SUM(CASE WHEN Assignment = 'EUC' AND Severity=5 AND Open_Time<=60 THEN 1 ELSE 0 END) AS P_number6 ,SUM(CASE WHEN Assignment = 'EUC' AND Severity=4 AND close_time<=60 THEN 1 ELSE 0 END) AS P_number7 ,SUM(CASE WHEN Assignment = 'EUC' AND Severity=5 AND close_time<=60 THEN 1 ELSE 0 END) AS P_number8 FROM PROBLEM_REPORT ```
SQL combine separate queries
[ "", "sql", "" ]
I am working with [this tutorial](https://www.kaggle.com/c/titanic-gettingStarted/details/getting-started-with-python). On the example ``` import csv as csv import numpy as np csv_file_object = csv.reader(open('train.csv', 'rb')) header = csv_file_object.next() data = [] for row in csv_file_object: data.append(row) data = np.array(data) ``` I encountered the following error: > Traceback (most recent call last): > > File "C:/Users/Prashant/Desktop/data mining/demo.py", line 7, > > in module data.append(row) > > AttributeError: 'numpy.ndarray' object has no attribute 'append' I googled this and found [this question/answer](https://stackoverflow.com/questions/8409498/attributeerror-numpy-ndarray-object-has-no-attribute-append) on `append`, but I didn't get anything.
Check your indentation. If `data = np.array(data)` is in your for loop (ie indented the same amount as `data.append(row)`), you'll turn `data` into a Numpy array before you've finished appending items to a list. This will cause the error you see because lists have an `append()` method, while numpy arrays do not. Your for loop should look something like ``` data = [] # Make data a list for row in csv_file_object: #iterate through rows in the csv and append them to the list data.append(row) # Turn the list into an array. Notice this is NOT indented! If it is, the data # list will be overwritten! data = np.array(data) ``` Check [Dive Into Python](http://www.diveintopython.net/getting_to_know_python/indenting_code.html) for a more extensive explanation of how indentation works in Python.
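The same failure mode can be reproduced without NumPy: any conversion to an append-less type inside the loop triggers it. Here `tuple` stands in for `np.array` purely for illustration:

```python
rows = [["Kate", "female"], ["Joe", "male"]]

# Correct: convert only after the loop has finished.
data = []
for row in rows:
    data.append(row)
data = tuple(data)          # conversion happens exactly once

# Buggy: converting inside the loop replaces the list after the first
# iteration, so the second append blows up just like with np.array.
bad = []
try:
    for row in rows:
        bad.append(row)
        bad = tuple(bad)    # over-indented -- runs every iteration
except AttributeError as exc:
    err = str(exc)
    print(err)              # 'tuple' object has no attribute 'append'
```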
Have a look at the example at the [linked location](https://www.kaggle.com/c/titanic-gettingStarted/details/getting-started-with-python): ``` #The first thing to do is to import the relevant packages # that I will need for my script, #these include the Numpy (for maths and arrays) #and csv for reading and writing csv files #If i want to use something from this I need to call #csv.[function] or np.[function] first import csv as csv import numpy as np #Open up the csv file in to a Python object csv_file_object = csv.reader(open('../csv/train.csv', 'rb')) header = csv_file_object.next() #The next() command just skips the #first line which is a header data=[] #Create a variable called 'data' for row in csv_file_object: #Run through each row in the csv file data.append(row) #adding each row to the data variable data = np.array(data) #Then convert from a list to an array #Be aware that each item is currently #a string in this format ``` Python is [indentation-sensitive](http://www.python.org/dev/peps/pep-0008/#indentation). That is, the indentation level determines the body of the for loop, and according to the comment by thegrinner: > There is a HUGE difference in whether your data = np.array(data) line is in the loop or outside it. That being said, the following should demonstrate the difference: ``` >>> import numpy as np >>> data = [] >>> for i in range(5): ... data.append(i) ... >>> data = np.array(data) # re-assign data after the loop >>> data array([0, 1, 2, 3, 4]) ``` vs. ``` >>> data = [] >>> for i in range(5): ... data.append(i) ... data = np.array(data) # re-assign data within the loop ... Traceback (most recent call last): File "<stdin>", line 2, in <module> AttributeError: 'numpy.ndarray' object has no attribute 'append' ``` As a side note, I doubt that the tutorial you are apparently following is appropriate for absolute Python beginners.
I think this more basic (official) tutorial should be more appropriate for a quick first overview of the language: <http://docs.python.org/2/tutorial/>
Python code not running
[ "", "python", "python-2.7", "numpy", "" ]
I am trying to create a VIEW where I should have the following columns: **Clinic\_id | Result\_month\_id | AVF | AVC | AVG | Other | Total\_Days** Total\_Days should be calculated dynamically using (AVF+AVC+[AVG]+Other). The SQL Query is: ``` CREATE VIEW Rate AS SELECT clinic_id, result_month_id, sum(case v_id when 'ula' then [days] else 0 end) as AVF, sum(case v_id when 'ter' then [days] when 'theter' then [days] when 'p_theter' then [days] when 't_theter' then [days] else 0 end) as AVC, sum(case v_id when 's_graft' then [days] else 0 end) as [AVG], sum(case v_id when 'other' then [days] else 0 end) as [Other] FROM [Server].[DBName].[TableName] GROUP BY clinic_id, result_month_id ; ``` I have tried to add the final column by using ``` SELECT columns, .... (AVF+AVC+[AVG]+Other)as Total_Days FROM (SELECT the QUERY displayed above... )q ``` But the above did not work. Any idea how can I dynamically create the Total of the four columns that I am creating on the VIEW?
You can use a subquery for this: ``` CREATE VIEW Rate AS select t.*, AVC + [AVG] + Other as TotalDays from (SELECT clinic_id, result_month_id, sum(case v_id when 'ula' then [days] else 0 end) as AVF, sum(case v_id when 'ter' then [days] when 'theter' then [days] when 'p_theter' then [days] when 't_theter' then [days] else 0 end) as AVC, sum(case v_id when 's_graft' then [days] else 0 end) as [AVG], sum(case v_id when 'other' then [days] else 0 end) as [Other] FROM [Server].[DBName].[TableName] GROUP BY clinic_id, result_month_id ) t ```
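The same subquery-then-derive pattern can be checked with SQLite from Python's standard library. The table name and rows below are invented stand-ins for the question's `[TableName]`, and since `AVG` is a built-in function name, the alias is spelled `AVG_` here:

```python
import sqlite3

# Aggregate inside a subquery, then compute Total_Days from the aliased
# columns in the outer query -- the same shape as the accepted answer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (clinic_id INT, v_id TEXT, days INT)")
con.executemany("INSERT INTO visits VALUES (?,?,?)",
                [(1, 'ula', 5), (1, 's_graft', 3), (1, 'other', 2)])
row = con.execute("""
    SELECT t.*, AVF + AVG_ + Other AS Total_Days
    FROM (SELECT clinic_id,
                 SUM(CASE v_id WHEN 'ula'     THEN days ELSE 0 END) AS AVF,
                 SUM(CASE v_id WHEN 's_graft' THEN days ELSE 0 END) AS AVG_,
                 SUM(CASE v_id WHEN 'other'   THEN days ELSE 0 END) AS Other
          FROM visits GROUP BY clinic_id) t""").fetchone()
print(row)  # clinic_id, AVF, AVG_, Other, Total_Days
```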
Simplest way is going to be to use a CTE. ``` CREATE VIEW Rate AS WITH CalculatedValues AS ( SELECT clinic_id, result_month_id, sum(case v_id when 'ula' then [days] else 0 end) as AVF, sum(case v_id when 'ter' then [days] when 'theter' then [days] when 'p_theter' then [days] when 't_theter' then [days] else 0 end) as AVC, sum(case v_id when 's_graft' then [days] else 0 end) as [AVG], sum(case v_id when 'other' then [days] else 0 end) as [Other] FROM [Server].[DBName].[TableName] GROUP BY clinic_id, result_month_id ) SELECT *, (AVF+AVC+[AVG]+Other)as Total_Days FROM CalculatedValues; ```
Dynamic SUM() while using Sum(Case) on a VIEW in MSSQL
[ "", "sql", "sql-server", "dynamic", "" ]
Consider the following class: ``` class MyObject(object): __slots__ = ('_att1', '_att2') def __init__(self): self._att1 = None self._att2 = None @property def att1(self): """READ-ONLY property. """ return self._att1 @property def att2(self): """att2 property description. """ return self._att2 @att2.setter def att2(self, val): self._att2 = val ``` An advantage of using the property decorator is that we can add some documentation ``` a = MyObject() help(a) Help on MyObject in module __main__ object: class MyObject(__builtin__.object) | Methods defined here: | | __init__(self) | | ---------------------------------------------------------------------- | Data descriptors defined here: | | att1 | READ-ONLY property. | | att2 | att2 property description. ``` If the class is meant to be used by the final user then it is fine to access the attributes by using the properties. ``` a.att2 = "new value" ``` However, if I were to derive a class from `MyObject`, is it acceptable to use the "protected" variables in the derived class? That is, ``` class Derived(MyObject): __slots__ = () def __init__(self): self._att1 = 1 self._att2 = 0 @property def att2(self): """att2 adds 1 to itself in every call. """ self._att2 += 1 return self._att2 @att2.setter def att2(self, val): self._att2 = val ``` I'm aware that if `MyObject` was an object from a third party then the attributes with underscore are subject to change and thus my code would break if they ever do change. However, since I am using `MyObject` as a base class I would think that it is fine to use this with my own derived classes. I recently started using `pylint` and it made me aware of the fact that I am using "protected" variables in my derived classes. The reason I decided to post a question here is to know if this is something acceptable to do so that I can just suppress those warnings in `pylint`. If it isn't, then what is the standards to deal with this? Another point I would like to make is about the attribute access. 
Which would be faster: ``` a.att1 ``` or ``` a._att1 ``` I'm under the impression that when doing `a.att1`, Python first looks in the object's dictionary (or slots, as in the example I am working with). If it isn't located there, then it looks for functions (as is the case with [`__getattr__`](http://docs.python.org/2/reference/datamodel.html#object.__getattr__)). When doing a lot of computations in my class definition, I'd rather access something located in the dictionary or in the slots, as opposed to the attributes I defined with the decorators. Is this considered bad practice in the Python community? I'm only asking since the default configuration of `pylint` tells me otherwise, and I would like to keep working with good standards in mind. ## EDIT: Let us try not to get into a discussion of `__slots__` and premature optimization. If possible, assume that in my original post I did not use `__slots__` and that everything is in the object's dictionary.
Python doesn't have a formal distinction between private and protected class members. Even the distinction between private and public is pretty weak, since anything can be accessed if a user tries hard enough. If a private versus protected distinction would make sense for your application, you can simply document that `_foo` (or `foo`) is not part of the public user interface, but can be relied upon by subclasses to always have certain semantics. (What those semantics are is up to you.) Now, I don't know if it's really worth the documentation effort to do this for a simple read-only property (the function call overhead is likely not too bad, so subclasses can just use the property like everyone else). However, if you had a property that was doing a lot more work, like a database query or HTTP request, it might make some sense for the internals to be exposed a bit for subclasses to get at just the parts they need. If you do think that exposing the "protected" value is desirable, the only question is whether you should put the underscore in front of the variable name or not. The underscore has a few effects on things like `dir` and perhaps in documentation generating tools, but it doesn't change anything about how the code runs. So it's really an issue of code style, and it really doesn't matter much either way. You can either leave the underscore off and put big warning text in the documentation so that users know they shouldn't be messing with the internals, or you could use the underscore and silence the pylint warnings (and if necessary, force there to be some extra docs for subclass implementors).
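A minimal sketch of the convention being discussed; the class names and the `_value` attribute are invented for illustration:

```python
class Base:
    def __init__(self):
        self._value = 0          # leading underscore: "internal" by convention

    @property
    def value(self):
        """Public read-only view of _value."""
        return self._value

class Derived(Base):
    def bump(self):
        # Touching the parent's _value is legal; the underscore only
        # signals "not part of the public interface" -- nothing is enforced.
        self._value += 1
        return self.value

d = Derived()
print(d.bump(), d.bump())  # 1 2
```

Whether the subclass should rely on `_value` or only on the `value` property is exactly the documentation contract the answer describes.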
Attributes with a leading underscore are not “protected” but “private”: they are subject to change, even when your class is derived from the one that defines them. You should use the parent's property to access this data. Speaking of performance: of course, property access is a bit slower than plain attribute access, simply because it involves a function call, but you shouldn't worry about it. By the way, it has nothing to do with `__getattr__` and all that: properties are also looked up in the class dictionary like normal attributes; they just implement the [descriptor protocol](http://docs.python.org/2/reference/datamodel.html#implementing-descriptors).
Faster attribute access in python
[ "", "python", "pylint", "" ]
The SQL below contains some DDL and a simple query. The result I am getting is ``` a1|b1|c1 a1|b2|c3 a3|b3|c2 a3|b3|c3 a3|b3|c4 a3|b3|c5 a3|b5|c6 a3|b5|c7 ``` The result I want is ``` a1 |b1 |c1 a1 |b2 |c3 a3 |b3 |c2 null |null |c4 null |null |c5 a3 |b5 |c6 null |null |c7 ``` I tried using MAX, MIN, rownums and what not. I am at my wit's end. I am including only the base query I started with and not all the options I tried because they don't work at all. Any help is appreciated! ``` BEGIN TRANSACTION; drop table if exists table_A; drop table if exists table_B; drop table if exists table_C; /* Create a table called NAMES */ CREATE TABLE table_A(a_Id text PRIMARY KEY, val_a text); CREATE TABLE table_B(a_Id text, b_Id text, val_b text); CREATE TABLE table_C(b_Id text, c_Id text, val_c text); /* Create few records in this table */ INSERT INTO table_A VALUES('a1','va1'); INSERT INTO table_A VALUES('a2','va2'); INSERT INTO table_A VALUES('a3','va3'); INSERT INTO table_B VALUES('a1', 'b1','vb1'); INSERT INTO table_B VALUES('a1', 'b2','vb2'); INSERT INTO table_B VALUES('a3', 'b3','vb31'); INSERT INTO table_B VALUES('a2', 'b4','vb4'); INSERT INTO table_B VALUES('a3', 'b5','vb31'); INSERT INTO table_C VALUES('b1', 'c1','vc1'); INSERT INTO table_C VALUES('b3', 'c2','vc2'); INSERT INTO table_C VALUES('b3', 'c3','vc3'); INSERT INTO table_C VALUES('b2', 'c3','vc3'); INSERT INTO table_C VALUES('b3', 'c4','vc2'); INSERT INTO table_C VALUES('b3', 'c5','vc3'); INSERT INTO table_C VALUES('b5', 'c6','vc3'); INSERT INTO table_C VALUES('b5', 'c7','vc3'); COMMIT; select a.a_Id, b.b_Id, c.c_Id from table_A as a join table_B as b on a.a_Id = b.a_Id join table_C as c on b.b_Id = c.b_Id; ```
something like this should work (I have tested it on PostgreSql, should work on Oracle too) ``` SELECT case when row_number = 1 then a_id end as a_id, case when row_number = 1 then b_id end as b_id, c_id FROM ( SELECT a.a_Id, b.b_Id, c.c_Id, row_number() OVER (partition by a.a_id, b.b_id order by c.c_id) as row_number, --for a_id, b_id row_number() OVER (partition by c.c_id order by c.c_id) as row_number2 --to avoid c_id duplicates FROM table_A a join table_B b on a.a_Id = b.a_Id join table_C c on b.b_Id = c.b_Id ) innerquery WHERE row_number2 = 1 --this is to avoid c_id duplicates ``` [**SQLFIDDLE**](http://www.sqlfiddle.com/#!4/6049b/35)
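The `ROW_NUMBER()` part can be tried without installing a server, since SQLite has supported window functions since 3.25 (bundled with typical Python 3.7+ builds). This sketch uses a subset of the question's rows and omits the answer's second window (the `c_id` de-duplication), since the sample rows have no duplicate `c_id`:

```python
import sqlite3

# Requires SQLite >= 3.25 for window functions.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (a_id TEXT, b_id TEXT, c_id TEXT);
    INSERT INTO t VALUES ('a1','b1','c1'), ('a1','b2','c3'),
                         ('a3','b3','c2'), ('a3','b3','c4');
""")
rows = con.execute("""
    SELECT CASE WHEN rn = 1 THEN a_id END,
           CASE WHEN rn = 1 THEN b_id END,
           c_id
    FROM (SELECT a_id, b_id, c_id,
                 ROW_NUMBER() OVER (PARTITION BY a_id, b_id
                                    ORDER BY c_id) AS rn
          FROM t)
    ORDER BY c_id""").fetchall()
for r in rows:
    print(r)  # repeated (a_id, b_id) pairs come back as NULLs
```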
``` select t1.a_id, t1.b_id, table_c.c_id from table_c left join ( select a_Id, b_Id, c_Id from ( select a.a_Id as a_id, b.b_Id as b_id, c.c_Id as c_id, ROW_NUMBER() OVER (PARTITION BY a.a_ID, b.b_id ORDER BY C_ID) as aNum from table_A as a join table_B as b on a.a_Id = b.a_Id join table_C as c on b.b_Id = c.b_Id ) t2 where aNum = 1 ) t1 on table_c.c_id = t1.c_id order by table_c.c_id ``` fiddle: <http://sqlfiddle.com/#!3/6049b/1>
Restrict many - many results in SQL join
[ "", "sql", "" ]
I have three database tables: `routes`, `trips`, and `stoptimes` that contain transit information. They're related with foreign keys as follows: ``` routes -> ROUTE_ID -> trips -> TRIP_ID -> stoptimes ``` i.e. there are some routes, lots of trips per route, and even more stoptimes per trip. For each route in the table I'd like to select the trip that has the greatest number of stoptimes. Furthermore, each route has an enum (INT) `direction_id` too and I'd like to select the trip with the most stoptimes for each direction, for each route. This is all for some data pre-processing, the idea is that these selected trips will have a flag set on them so that they can be easily recalled in future. Is it possible to achieve this in SQL? --- **EDIT:** More info as requested. Here is a sample SELECT query / results table: ``` select t.route_id, t.direction_id, t.trip_id, NumStops, t.isPrototypical from trips t join (select st.trip_id, count(*) as NumStops from stoptimes st group by st.trip_id ) st on st.trip_id = t.trip_id; ``` Results: ![sample sql results table](https://i.stack.imgur.com/I6I3L.png) In the example above, I want a SQL statement that would select trips 2 and 10, since these have the (equal-)greatest NumStops in each direction. Even better if, rather than `SELECTING` the SQL statement could `UPDATE` the column `isPrototypical` to `TRUE` for those particular rows. Bear in mind: in the production DB there will be more than one `route_id` and an arbitrary number of `direction_id`s on each trip. The statement needs to do its magic for each direction, and per route. --- **Final Answer** A correct, well-performing solution was provided by Gordon Linoff, below, and I thought I would also post the modified version of his code that I used to solve the problem. 
Here's the SQL that selects and updates the trips with the most stops, per route, per direction, while only picking one trip in the event of a tie: ``` update trips t join ( select substring_index(group_concat(t.trip_id order by NumStops desc), ',', 1) as prototripid from trips t join (select st.trip_id, count(*) as NumStops from stoptimes st group by st.trip_id ) st on st.trip_id = t.trip_id group by t.route_id, t.direction_id ) t2 on t2.prototripid = t.trip_id set isPrototypical = 1 ; ``` I believe that this may be MySQL-specific.
You can do this with a trick in MySQL, involving group concatenation. Here is the query: ``` select t.route_id, substring_index(group_concat(t.trip_id order by NumStops desc), ',', 1), max(NumStops) as Length from trips t join (select st.trip_id, count(*) as NumStops from stoptimes st group by st.trip_id ) st on st.trip_id = t.trip_id group by t.route_id; ``` (You don't need the `routes` table unless you need the name of the route.) The subquery counts the number of stops on each trip. This is then aggregated by `route_id`. Normally, `group_concat()` would be used to put all the trips in a comma-delimited string. Here it does that, with the caveat that they are ordered by the number of stops, with the longest first. The function `substring_index()` then takes the first value. This converts the `trip_id` to a string, so you might want to convert it back to whatever data type it started out as. The following gets the best for each direction: ``` select t.route_id, t.direction_id, substring_index(group_concat(t.trip_id order by NumStops desc), ',', 1), max(NumStops) as Length from trips t join (select st.trip_id, count(*) as NumStops from stoptimes st group by st.trip_id ) st on st.trip_id = t.trip_id group by t.route_id, t.direction_id; ``` Because the direction is stored at the *trip* level, it doesn't interfere with the counting of stops on a trip (that is, it doesn't seem to be needed in the `st` subquery).
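Since `group_concat()`/`substring_index()` are MySQL-specific, the same "trip with the most stops per (route, direction)" selection can be sketched in plain Python to check the expected winners. The trip and route ids below mirror the question's sample table; stop counts are the `NumStops` column:

```python
from collections import Counter

# trip_id -> (route_id, direction_id), mirroring the question's sample rows
trips = {2: ('r1', 0), 1: ('r1', 0), 10: ('r1', 1), 9: ('r1', 1)}
stop_counts = Counter({1: 3, 2: 7, 9: 2, 10: 7})  # trip_id -> NumStops

best = {}  # (route_id, direction_id) -> trip with the most stops so far
for trip, key in trips.items():
    if key not in best or stop_counts[trip] > stop_counts[best[key]]:
        best[key] = trip
print(sorted(best.values()))  # the "prototypical" trip ids
```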
If you join all the tables together correctly, you'll get one row for each stop time, so a `COUNT(*)` will give you the total stops. As for the count by direction, I'll assume the direction values are `1, 2, 3, ...`. I can't tell which table `direction_id` is in so I've left it unaliased in the query: ``` SELECT routes.Route_ID COUNT(*) AS TotalStops, COUNT(CASE WHEN direction_id = 1 THEN 1 END) AS Direction1Stops, COUNT(CASE WHEN direction_id = 2 THEN 1 END) AS Direction2Stops, COUNT(CASE WHEN direction_id = 3 THEN 1 END) AS Direction3Stops, ... and the remaining direction_id values FROM routes INNER JOIN trips ON routes.Route_ID = trips.Route_ID INNER JOIN stoptimes on trips.Trip_ID = stoptimes.Trip_ID GROUP BY routes.Route_ID ```
SQL help - select table with the greatest number of related rows
[ "", "mysql", "sql", "database", "" ]
I am trying to learn Python, and for that purpose I made a simple addition program using Python 2.7.3 ``` print("Enter two Numbers\n") a = int(raw_input('A=')) b = int(raw_input('B=')) c=a+b print ('C= %s' %c) ``` I saved the file as *add.py*, and when I double-click and run it, the program runs and exits instantaneously without showing the answer. Then I tried the code from this question [Simple addition calculator in python](https://stackoverflow.com/questions/4665558/simple-addition-calculator-in-python): it accepts user input, but after entering both numbers Python exits without showing the answer. Any suggestions for the above code? Thanks in advance for the help
add an empty `raw_input()` at the end to pause until you press `Enter` ``` print("Enter two Numbers\n") a = int(raw_input('A=')) b = int(raw_input('B=')) c=a+b print ('C= %s' %c) raw_input() # waits for you to press enter ``` Alternatively run it from `IDLE`, command line, or whichever editor you use.
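For anyone on Python 3, `raw_input()` became `input()`; a sketch of the same program follows, with the arithmetic pulled into a function so it can be exercised without a terminal (the function wrapper is my addition, not part of the original program):

```python
def add(a, b):
    """Convert both inputs to int and add them."""
    return int(a) + int(b)

def main():
    print("Enter two Numbers\n")
    c = add(input('A='), input('B='))
    print('C= %s' % c)
    input()   # pause until Enter so a double-clicked window stays open

# main() is deliberately not invoked here, so the sketch can be imported
# and tested without blocking on input().
print(add('2', '3'))
```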
It's exiting because you're not telling the interpreter to pause at any moment after printing the results. The program itself works. I recommend running it directly in the terminal/command line window like so: ![screenshot of it working](https://i.stack.imgur.com/fFTGm.png) Alternatively, you could write: ``` import time print("Enter two Numbers\n") a = int(raw_input('A=')) b = int(raw_input('B=')) c=a+b print ('C= %s' %c) time.sleep(3.0) #pause for 3 seconds ``` Or you can just add another raw_input() at the end of your code so that it waits for input (at which point the user will type something and nothing will happen to their input data).
Simple addition program in python
[ "", "python", "math", "python-2.7", "" ]
``` IF object_id('tempdb..#A') IS NOT NULL DROP TABLE #A IF object_id('tempdb..#B') IS NOT NULL DROP TABLE #B CREATE TABLE #A (fname varchar(20), lname varchar(20)) CREATE TABLE #B (fname varchar(20), lname varchar(20)) INSERT INTO #A SELECT 'Kevin', 'XP' UNION ALL SELECT 'Tammy', 'Win7' UNION ALL SELECT 'Wes', 'XP' UNION ALL SELECT 'Susan', 'Win7' UNION ALL SELECT 'Kevin', 'Win7' SELECT * FROM #A INSERT INTO #B SELECT a.fname, a.lname FROM #A a WHERE a.fname NOT IN (SELECT fname from #B) SELECT * FROM #B DELETE FROM #B INSERT INTO #B SELECT a.fname, a.lname FROM #A a LEFT OUTER JOIN #B b ON a.fname = b.fname WHERE a.fname NOT IN (SELECT fname from #B) SELECT * FROM #B ``` Both of these examples copy all 5 records to the new table. I only want to see one unique fname so only one Kevin should show up. Why don't these work, or is there a better way to do it? It seems like such a simple thing.
This would create rows with unique fname and take Win7 if both Win7 and XP existed. ``` INSERT INTO #B SELECT a.fname, MIN(a.lname) FROM #A a GROUP BY a.fname ```
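The `GROUP BY` dedup can be checked with SQLite from Python's standard library, using the question's rows (temp-table syntax aside):

```python
import sqlite3

# GROUP BY fname collapses the duplicate 'Kevin'; MIN(lname) picks one
# lname deterministically ('Win7' sorts before 'XP').
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (fname TEXT, lname TEXT);
    CREATE TABLE B (fname TEXT, lname TEXT);
    INSERT INTO A VALUES ('Kevin','XP'), ('Tammy','Win7'),
                         ('Wes','XP'), ('Susan','Win7'), ('Kevin','Win7');
    INSERT INTO B SELECT fname, MIN(lname) FROM A GROUP BY fname;
""")
rows = con.execute("SELECT * FROM B ORDER BY fname").fetchall()
print(rows)  # one row per distinct fname
```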
Answering your question, why don't your queries work? ``` INSERT INTO #B SELECT a.fname, a.lname FROM #A a WHERE a.fname NOT IN (SELECT fname from #B) ``` This statement is evaluated in two separate steps. First, the SELECT part of the query is executed and returns a table. At that point #B is empty, hence every tuple in #A will be part of this result. Then, once this result is computed, it is inserted into #B, so #B ends up being a copy of #A. The DBMS does not insert one tuple and then re-evaluate the query for the next tuple of #A, as your question seems to imply. Insertions are always done AFTER the query has been completely evaluated. If your goal is to insert into #B the tuples of #A without duplicates, there are many ways to do that. One of them is: ``` INSERT INTO #B SELECT distinct * from #A; ```
Copy records from one table to another without duplicates
[ "", "sql", "sql-server-2008", "" ]
How can I decode a pem-encoded (base64) certificate with Python? For example this here from github.com: ``` -----BEGIN CERTIFICATE----- MIIHKjCCBhKgAwIBAgIQDnd2il0H8OV5WcoqnVCCtTANBgkqhkiG9w0BAQUFADBp MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMSgwJgYDVQQDEx9EaWdpQ2VydCBIaWdoIEFzc3VyYW5j ZSBFViBDQS0xMB4XDTExMDUyNzAwMDAwMFoXDTEzMDcyOTEyMDAwMFowgcoxHTAb BgNVBA8MFFByaXZhdGUgT3JnYW5pemF0aW9uMRMwEQYLKwYBBAGCNzwCAQMTAlVT MRswGQYLKwYBBAGCNzwCAQITCkNhbGlmb3JuaWExETAPBgNVBAUTCEMzMjY4MTAy MQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2Fu IEZyYW5jaXNjbzEVMBMGA1UEChMMR2l0SHViLCBJbmMuMRMwEQYDVQQDEwpnaXRo dWIuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA7dOJw11wcgnz M08acnTZtlqVULtoYZ/3+x8Z4doEMa8VfBp/+XOvHeVDK1YJAEVpSujEW9/Cd1JR GVvRK9k5ZTagMhkcQXP7MrI9n5jsglsLN2Q5LLcQg3LN8OokS/rZlC7DhRU5qTr2 iNr0J4mmlU+EojdOfCV4OsmDbQIXlXh9R6hVg+4TyBkaszzxX/47AuGF+xFmqwld n0xD8MckXilyKM7UdWhPJHIprjko/N+NT02Dc3QMbxGbp91i3v/i6xfm/wy/wC0x O9ZZovLdh0pIe20zERRNNJ8yOPbIGZ3xtj3FRu9RC4rGM+1IYcQdFxu9fLZn6TnP pVKACvTqzQIDAQABo4IDajCCA2YwHwYDVR0jBBgwFoAUTFjLJfBBT1L0KMiBQ5um qKDmkuUwHQYDVR0OBBYEFIfRjxlu5IdvU4x3kQdQ36O/VUcgMCUGA1UdEQQeMByC CmdpdGh1Yi5jb22CDnd3dy5naXRodWIuY29tMIGBBggrBgEFBQcBAQR1MHMwJAYI KwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBLBggrBgEFBQcwAoY/ aHR0cDovL3d3dy5kaWdpY2VydC5jb20vQ0FDZXJ0cy9EaWdpQ2VydEhpZ2hBc3N1 cmFuY2VFVkNBLTEuY3J0MAwGA1UdEwEB/wQCMAAwYQYDVR0fBFowWDAqoCigJoYk aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL2V2MjAwOWEuY3JsMCqgKKAmhiRodHRw Oi8vY3JsNC5kaWdpY2VydC5jb20vZXYyMDA5YS5jcmwwggHEBgNVHSAEggG7MIIB tzCCAbMGCWCGSAGG/WwCATCCAaQwOgYIKwYBBQUHAgEWLmh0dHA6Ly93d3cuZGln aWNlcnQuY29tL3NzbC1jcHMtcmVwb3NpdG9yeS5odG0wggFkBggrBgEFBQcCAjCC AVYeggFSAEEAbgB5ACAAdQBzAGUAIABvAGYAIAB0AGgAaQBzACAAQwBlAHIAdABp AGYAaQBjAGEAdABlACAAYwBvAG4AcwB0AGkAdAB1AHQAZQBzACAAYQBjAGMAZQBw AHQAYQBuAGMAZQAgAG8AZgAgAHQAaABlACAARABpAGcAaQBDAGUAcgB0ACAAQwBQ AC8AQwBQAFMAIABhAG4AZAAgAHQAaABlACAAUgBlAGwAeQBpAG4AZwAgAFAAYQBy 
AHQAeQAgAEEAZwByAGUAZQBtAGUAbgB0ACAAdwBoAGkAYwBoACAAbABpAG0AaQB0 ACAAbABpAGEAYgBpAGwAaQB0AHkAIABhAG4AZAAgAGEAcgBlACAAaQBuAGMAbwBy AHAAbwByAGEAdABlAGQAIABoAGUAcgBlAGkAbgAgAGIAeQAgAHIAZQBmAGUAcgBl AG4AYwBlAC4wHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMBEGCWCGSAGG +EIBAQQEAwIGwDAOBgNVHQ8BAf8EBAMCBaAwDQYJKoZIhvcNAQEFBQADggEBABRS cR+GnW01Poa7ZhqLhZi5AEzLQrVG/AbnRDnI6FLYERQjs3KW6RSUni8AKPfVBEVA AMb0V0JC3gmJlxENFFxrvQv3GKNfZwLzCThjv8ESnTC6jqVUdFlTZ6EbUFsm2v0T flkXv0nvlH5FpP06STLwav+JjalhqaqblkbIHOAYHOb7gvQKq1KmyuhUItnbKj1a InuA6gcF1PnH8FNZX7t3ft6TcEFOI8t4eXnELurXZioY99HFfOISeIKNHeyCngGi 5QK+eKG5WVjFTG9PpTG0SVtemB4uOPYZxDmiSvt5BbjyWeUmEnCtwOh1Ix8Y0Qvg n2Xkw9dJh1tybLEvrG8= -----END CERTIFICATE----- ``` According to [ssl-shopper](http://www.sslshopper.com/certificate-decoder.html) it should be something like this: ``` Common Name: github.com Subject Alternative Names: github.com, www.github.com Organization: GitHub, Inc. Locality: San Francisco State: California Country: US Valid From: May 26, 2011 Valid To: July 29, 2013 ``` How can I get this plaintext using python?
Python's standard library, even in the latest version, does not include anything that can decode X.509 certificates. However, the add-on [`cryptography`](https://cryptography.io/) package does support this. Quoting an [example from the documentation](https://cryptography.io/en/latest/x509/reference/#loading-certificates): ``` >>> from cryptography import x509 >>> from cryptography.hazmat.backends import default_backend >>> cert = x509.load_pem_x509_certificate(pem_data, default_backend()) >>> cert.serial_number 2 ``` Another add-on package that might be an option is [`pyopenssl`](https://launchpad.net/pyopenssl). This is a thin wrapper around the OpenSSL C API, which means it will be *possible* to do what you want, but expect to spend a couple days tearing your hair out at the documentation. If you can't install Python add-on packages, but you do have the `openssl` command-line utility, ``` import subprocess cert_txt = subprocess.check_output(["openssl", "x509", "-text", "-noout", "-in", certificate]) ``` should produce roughly the same stuff you got from your web utility in `cert_txt`. Incidentally, the reason doing a straight-up base64 decode gives you binary gobbledygook is that there are two layers of encoding here. [X.509 certificates](https://en.wikipedia.org/wiki/X.509#Certificates) are [ASN.1](https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One) data structures, serialized to [X.690 DER](https://en.wikipedia.org/wiki/X.690#DER_encoding) format and then, since DER is a binary format, base64-armored for ease of file transfer. (A lot of the standards in this area were written way back in the nineties when you couldn’t reliably ship anything but seven-bit ASCII around.)
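The "two layers of encoding" point can be illustrated with the standard library alone: strip the PEM armor, base64-decode, and check for the `0x30` byte that starts every DER-encoded ASN.1 SEQUENCE. The payload below is a tiny fabricated DER snippet, not a real certificate, so the sketch stays self-contained:

```python
import base64

# SEQUENCE { INTEGER 1 } -- a minimal, hand-made DER payload
der = bytes([0x30, 0x03, 0x02, 0x01, 0x01])
pem = ("-----BEGIN CERTIFICATE-----\n"
       + base64.encodebytes(der).decode()
       + "-----END CERTIFICATE-----\n")

# Strip the armor lines, then undo the outer base64 layer.
body = "".join(line for line in pem.splitlines()
               if not line.startswith("-----"))
decoded = base64.b64decode(body)
print(decoded == der, decoded[0] == 0x30)
```

Parsing the *inner* DER layer into subject names and validity dates is the part that needs `cryptography`, `pyopenssl`, or the `openssl` CLI as described above.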
You can use [`pyasn1`](https://github.com/etingof/pyasn1) and [`pyasn1-modules`](https://github.com/etingof/pyasn1-modules) packages to parse this kind of data. For instance: ``` from pyasn1_modules import pem, rfc2459 from pyasn1.codec.der import decoder substrate = pem.readPemFromFile(open('cert.pem')) cert = decoder.decode(substrate, asn1Spec=rfc2459.Certificate())[0] print(cert.prettyPrint()) ``` Read the docs for pyasn1 for the rest.
How can I decode a SSL certificate using python?
[ "", "python", "ssl", "cryptography", "certificate", "pem", "" ]
I have a list of lookup fields: ``` >>> l = ['A', 'C', 'Z', 'M'] ``` I need to test the equality of 2 dictionaries on this lookup list: ``` >>> d1 = {'A': 3,'F': 4,'Z': 1} >>> d2 = {'B': 0,'A': 3,'C': 7} ``` The equality test for any field 'x' in the list succeeds if any of the following conditions are satisfied: 1. 'x' is not present in either of the dicts 2. 'x' is present and d1[x]==d2[x] The equality function would return a match ONLY IF all fields in the list succeed based on the conditions above. So, for the above dicts: Z fails, C fails, A succeeds, M succeeds. However, the equality test for the dicts should report a failure. What would be the shortest way to achieve this?
This is probably the shortest and most elegant: ``` all( d1.get(x) == d2.get(x) for x in l ) ``` Note that if x is in neither dictionary, we are comparing None with None. EDIT: - Following comments, using a generator (not a list comprehension). - Ashwini's comment is correct: this will not work if one of the dictionaries has None as a value. EDIT2 (following comment): If None can appear as a value in the dictionaries, you need a different 'no value' marker. If -9999 wouldn't work, then just generate one: ``` sentinel = object() #a new object, guaranteed not in dictionary all( d1.get(x, sentinel) == d2.get(x, sentinel) for x in l ) ```
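Put together as a self-contained check, using the question's data and including the sentinel variant for dictionaries that may legitimately store None (the `fields_equal` wrapper name is my own):

```python
l = ['A', 'C', 'Z', 'M']
d1 = {'A': 3, 'F': 4, 'Z': 1}
d2 = {'B': 0, 'A': 3, 'C': 7}

sentinel = object()   # guaranteed not to be a value in either dict

def fields_equal(d1, d2, keys):
    """True iff every key is either absent from both dicts or mapped to
    equal values in both."""
    return all(d1.get(k, sentinel) == d2.get(k, sentinel) for k in keys)

print(fields_equal(d1, d2, l))                # C and Z disagree
print(fields_equal({'x': None}, {}, ['x']))   # None value is not "absent"
```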
Try this: ``` success = True for x in l: if not (((x not in d1) and (x not in d2)) or (d1.get(x) == d2.get(x))): success = False ```
Equality of dictionaries
[ "", "python", "" ]
I am trying to manipulate my SQL select result. I stored the value in the database in this form, `<p>asd</p>`, so it will be recognized as a paragraph in my application, but I want to use it in another function of my app, where I want to show it as 'asd'. Is there any way to do this in SQL? Any help will be appreciated. I am using phpMyAdmin.
If you are running [MySQL 5.1.5 or above](http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/mysql-5.1-xml.html), then you could use the [XPath](http://www.w3schools.com/xpath/) function [`ExtractValue()`](http://dev.mysql.com/doc/refman/5.1/en/xml-functions.html#function_extractvalue), like this... ``` SELECT ExtractValue(col1, '/p') FROM xml_test; ``` [**Click here to see it in action at SQL Fiddle**](http://sqlfiddle.com/#!2/0a4be/1)
Yes, you can use select replace, like so: ``` select replace(replace("<p>asd</p>",'<p>',''),'</p>','') ``` [See it in action: SQL fiddle](http://sqlfiddle.com/#!2/d41d8/13613/0)
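`replace()` exists in SQLite too, so the nested-replace approach can be checked without a MySQL server, straight from Python's standard library:

```python
import sqlite3

# Strip the opening and closing <p> tags with two nested replace() calls,
# exactly as in the answer's expression.
con = sqlite3.connect(":memory:")
(stripped,) = con.execute(
    "SELECT replace(replace('<p>asd</p>', '<p>', ''), '</p>', '')"
).fetchone()
print(stripped)
```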
How to manipulate the result of select query in SQL?
[ "", "mysql", "sql", "" ]
I am having a difficult time summing a column of distinct values based on a column of values, which also needs to be distinct. For example, I have: ``` Index PaymentID PaymentAmt PartPmtApplied 1 35 50 26 2 35 50 24 3 36 50 38 4 36 50 12 5 37 52 14 6 37 52 14 7 37 52 24 8 38 54 37 9 38 54 17 10 39 100 100 ``` The query I used is as follows: ``` Select sum(paymentamt) from tbl_A where paymentid = (select distinct paymentsid from tbl_A); ``` So, I thought I was doing my query correct, but when I do, it still sums up every value in the PaymentAmt column, i.e. I receive a total of 564 instead of 256. Anyone have a better way for me to find this value? Thanks in advance. Thank you for the quick responses. But, as someone asked, here is what I am looking for: ``` 35 50 36 50 37 52 38 54 39 100 ``` Sum total = 256. So, I should be returning a single value of 256.
If you need to sum distinct values you can simply sum everything from a subquery that holds only distinct values (or distinct pairs of values). So you probably need ``` SELECT SUM( PaymentAmt ) FROM ( SELECT PaymentAmt FROM tbl_A GROUP BY PaymentId, PaymentAmt ) distinct_payments; ``` This will give 306 as a result though, treating the payments for ids 35 and 36 as separate. If you really want to get 256, which is the sum of the distinct payment amounts with no link to ids, then just use `GROUP BY PaymentAmt` instead.
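Both readings can be verified with an in-memory SQLite table; the table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE tbl_A (PaymentID INTEGER, PaymentAmt INTEGER)')
rows = [(35, 50), (35, 50), (36, 50), (36, 50), (37, 52),
        (37, 52), (37, 52), (38, 54), (38, 54), (39, 100)]
conn.executemany('INSERT INTO tbl_A VALUES (?, ?)', rows)

# One amount per (PaymentID, PaymentAmt) pair: 50+50+52+54+100
per_payment = conn.execute(
    'SELECT SUM(PaymentAmt) FROM '
    '(SELECT PaymentAmt FROM tbl_A GROUP BY PaymentID, PaymentAmt)'
).fetchone()[0]

# One amount per distinct value: 50+52+54+100
per_amount = conn.execute(
    'SELECT SUM(PaymentAmt) FROM (SELECT DISTINCT PaymentAmt FROM tbl_A)'
).fetchone()[0]
```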
You can actually do it without any subquery. Just use `GROUP BY` in your query. ``` SELECT PaymentID, SUM(PaymentAmount) TotalPayment FROM tabl_a GROUP BY PaymentID ``` This will give you unique value of `PaymentID` with its calculated total amount of `PaymentAmount`. * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/99a65/3) OUTPUT ``` ╔═══════════╦══════════════╗ ║ PAYMENTID ║ TOTALPAYMENT ║ ╠═══════════╬══════════════╣ ║ 35 ║ 100 ║ ║ 36 ║ 100 ║ ║ 37 ║ 156 ║ ║ 38 ║ 108 ║ ║ 39 ║ 100 ║ ╚═══════════╩══════════════╝ ```
How to correctly sum a 2 column query in MySQL
[ "", "mysql", "sql", "" ]
I am trying to install GDAL in virtual environment based on the various [solutions](https://stackoverflow.com/questions/11336153/python-gdal-package-missing-header-file-when-installing-via-pip) out there. However the download itself already fails: ``` $ pip install --no-install GDAL ``` Here is the pip.log ``` ------------------------------------------------------------ /Users/test/venv/bin/pip run on Sun Jun 2 15:35:15 2013 Downloading/unpacking GDAL Running setup.py egg_info for package GDAL running egg_info writing pip-egg-info/GDAL.egg-info/PKG-INFO writing top-level names to pip-egg-info/GDAL.egg-info/top_level.txt writing dependency_links to pip-egg-info/GDAL.egg-info/dependency_links.txt warning: manifest_maker: standard file '-c' not found Traceback (most recent call last): File "<string>", line 16, in <module> File "/Users/test/venv/build/GDAL/setup.py", line 267, in <module> ext_modules = ext_modules ) File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "<string>", line 14, in replacement_run File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 259, in find_sources mm.run() File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 325, in run self.add_defaults() File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 361, in add_defaults sdist.add_defaults(self) File 
"/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/sdist.py", line 211, in add_defaults build_ext = self.get_finalized_command('build_ext') File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 312, in get_finalized_command cmd_obj.ensure_finalized() File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized self.finalize_options() File "/Users/test/venv/build/GDAL/setup.py", line 164, in finalize_options self.gdaldir = self.get_gdal_config('prefix') File "/Users/test/venv/build/GDAL/setup.py", line 144, in get_gdal_config return fetch_config(option) File "/Users/test/venv/build/GDAL/setup.py", line 97, in fetch_config raise gdal_config_error, e""") File "<string>", line 4, in <module> __main__.gdal_config_error: [Errno 2] No such file or directory Complete output from command python setup.py egg_info: running egg_info writing pip-egg-info/GDAL.egg-info/PKG-INFO writing top-level names to pip-egg-info/GDAL.egg-info/top_level.txt writing dependency_links to pip-egg-info/GDAL.egg-info/dependency_links.txt warning: manifest_maker: standard file '-c' not found Traceback (most recent call last): File "<string>", line 16, in <module> File "/Users/test/venv/build/GDAL/setup.py", line 267, in <module> ext_modules = ext_modules ) File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "<string>", line 14, in replacement_run File 
"/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 259, in find_sources mm.run() File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 325, in run self.add_defaults() File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/egg_info.py", line 361, in add_defaults sdist.add_defaults(self) File "/Users/test/venv/lib/python2.7/site-packages/distribute-0.6.34-py2.7.egg/setuptools/command/sdist.py", line 211, in add_defaults build_ext = self.get_finalized_command('build_ext') File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 312, in get_finalized_command cmd_obj.ensure_finalized() File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized self.finalize_options() File "/Users/test/venv/build/GDAL/setup.py", line 164, in finalize_options self.gdaldir = self.get_gdal_config('prefix') File "/Users/test/venv/build/GDAL/setup.py", line 144, in get_gdal_config return fetch_config(option) File "/Users/test/venv/build/GDAL/setup.py", line 97, in fetch_config raise gdal_config_error, e""") File "<string>", line 4, in <module> __main__.gdal_config_error: [Errno 2] No such file or directory ---------------------------------------- Command python setup.py egg_info failed with error code 1 in /Users/test/venv/build/GDAL Exception information: Traceback (most recent call last): File "/Users/test/venv/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/basecommand.py", line 139, in main status = self.run(options, args) File "/Users/test/venv/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/commands/install.py", line 266, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File 
"/Users/test/venv/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/req.py", line 1057, in prepare_files req_to_install.run_egg_info() File "/Users/test/venv/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/req.py", line 236, in run_egg_info command_desc='python setup.py egg_info') File "/Users/test/venv/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/util.py", line 662, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command python setup.py egg_info failed with error code 1 in /Users/test/venv/build/GDAL ``` What am I missing?
A workaround to that problem is to 1. Download [GDAL Complete](http://www.kyngchaos.com/files/software/frameworks/GDAL_Complete-1.10.dmg) from [this site](http://www.kyngchaos.com/software/frameworks) 2. Install GDAL Complete.pkg 3. Go to `/Library/Frameworks/GDAL.framework/Versions/1.10/` and copy all files in there 4. Go to your virtual environment's site-packages `/venv/lib/python2.7/site-packages` and paste the copied files in there
The workaround given by @user1738154 didn't work as expected for me, so I went ahead and installed GDAL using [HomeBrew](http://brew.sh), then installed it with pip into my virtual environment. ``` $ brew install gdal (venv)$ pip install GDAL ``` Everything compiled and ran like a charm.
Python GDAL: pip install --no-install GDAL fails
[ "", "python", "virtualenv", "gdal", "" ]
I want a function to refer to itself. e.g. to be recursive. So I do something like this: ``` def fib(n): return n if n <= 1 else fib(n-1)+fib(n-2) ``` This is fine most of the time, but `fib` does not, actually, refer to itself; it refers to the binding of `fib` in the enclosing block. So if for some reason `fib` is reassigned, it will break: ``` >>> foo = fib >>> fib = foo(10) >>> x = foo(8) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in fib TypeError: 'int' object is not callable ``` How can I prevent this from happening (from inside `fib`), if at all possible? As far as I know, the name `fib` does not exist before the function definition is fully executed; are there any workarounds? I don't have a real use case where it may actually happen; I am asking out of sheer curiosity.
Like [abamert](https://stackoverflow.com/questions/16821879/is-there-a-way-to-refer-to-the-current-function-in-python#comment24252510_16821975) said "..there is no way around the problem from inside ..". Here's my approach: ``` def fib(n): def fib(n): return n if n <= 1 else fib(n-1)+fib(n-2) return fib(n) ```
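A quick check that this survives the rebinding from the question: the recursive calls resolve to the inner, local `fib`, which reassigning the module-level name cannot touch.

```python
def fib(n):
    def fib(n):
        return n if n <= 1 else fib(n-1) + fib(n-2)
    return fib(n)

foo = fib
fib = foo(10)         # rebind the module-level name, as in the question
still_works = foo(8)  # the saved reference keeps recursing happily
```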
I'd make a decorator for this ``` from functools import wraps def selfcaller(func): @wraps(func) def wrapper(*args, **kwargs): return func(wrapper, *args, **kwargs) return wrapper ``` And use it like ``` @selfcaller def fib(self, n): return n if n <= 1 else self(n-1)+self(n-2) ``` --- This is actually a readable way to define a [Fixed Point Combinator](http://en.wikipedia.org/wiki/Fixed-point_combinator) (or Y Combinator): ``` fix = lambda g: (lambda f: g(lambda arg: f(f)(arg))) (lambda f: g(lambda arg: f(f)(arg))) ``` usage: ``` fib = fix(lambda self: lambda n: n if n <= 1 else self(n-1)+self(n-2)) ``` or: ``` @fix def fib(self): return lambda n: n if n <= 1 else self(n-1)+self(n-2) ``` The binding here happens in the formal parameter, so the problem does not arise.
Is there a way to refer to the current function in python?
[ "", "python", "" ]
I'm trying to build a list of domain names from an Enom API call. I get back a lot of information and need to locate the domain name related lines, and then join them together. The string that comes back from Enom looks somewhat like this: ``` SLD1=domain1 TLD1=com SLD2=domain2 TLD2=org TLDOverride=1 SLD3=domain3 TLD4=co.uk SLD5=domain4 TLD5=net TLDOverride=1 ``` I'd like to build a list from that which looks like this: ``` [domain1.com, domain2.org, domain3.co.uk, domain4.net] ``` To find the different domain name components I've tried the following (where "enom" is the string above) but have only been able to get the SLD and TLD matches. ``` re.findall("^.*(SLD|TLD).*$", enom, re.M) ```
This works for your example: ``` >>> sld_list = re.findall("^.*SLD[0-9]*?=(.*?)$", enom, re.M) >>> tld_list = re.findall("^.*TLD[0-9]*?=(.*?)$", enom, re.M) >>> map(lambda x: x[0] + '.' + x[1], zip(sld_list, tld_list)) ['domain1.com', 'domain2.org', 'domain3.co.uk', 'domain4.net'] ```
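A Python 3 sketch of the same approach; the patterns are tightened to require at least one digit after SLD/TLD, which is what keeps the `TLDOverride` lines out:

```python
import re

enom = """SLD1=domain1
TLD1=com
SLD2=domain2
TLD2=org
TLDOverride=1
SLD3=domain3
TLD4=co.uk
SLD5=domain4
TLD5=net
TLDOverride=1"""

# Anchored per line (re.M); [0-9]+ demands a digit, so "TLDOverride=1"
# cannot match the TLD pattern.
slds = re.findall(r"^SLD[0-9]+=(.*)$", enom, re.M)
tlds = re.findall(r"^TLD[0-9]+=(.*)$", enom, re.M)
domains = ['.'.join(pair) for pair in zip(slds, tlds)]
```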
**Edit:** Every time I see a question asking for regular expression solution I have this bizarre urge to try and solve it without regular expressions. Most of the times it's more efficient than the use of regex, I encourage the OP to test which of the solutions is most efficient. Here is the naive approach: ``` a = """SLD1=domain1 TLD1=com SLD2=domain2 TLD2=org TLDOverride=1 SLD3=domain3 TLD4=co.uk SLD5=domain4 TLD5=net TLDOverride=1""" b = a.split("\n") c = [x.split("=")[1] for x in b if x != 'TLDOverride=1'] for x in range(0,len(c),2): print ".".join(c[x:x+2]) >> domain1.com >> domain2.org >> domain3.co.uk >> domain4.net ```
Regular Expression in Python
[ "", "python", "regex", "" ]
after a computer fix my python projects dir (windows) changed (say from d: to f:). now all my virtualenvs are broken. after activating the env the project inside the virtualenv can't find the dependencies and the custom scripts (from the env\scripts folder)won't work tried running: ``` virtualenv --relocateble ENV_NAME (with the env name ..) ``` like in this [stackoverflow question](https://stackoverflow.com/questions/7153113/virtualenv-relocatable-does-it-really-work) and it outputted a lot of lines like: ``` Script agent\Scripts\deactivate.bat cannot be made relative ``` and my virtualenv is still broken. when I manually changed activate.bat `set VIRTUAL_ENV` to the new path. some scripts work again. but the relocate scripts still doesn't run and most of the scripts are still broken even running the python interpeter fails with: ``` Traceback (most recent call last): File "F:\Python27\learn\agent\agent\lib\site.py", line 677, in <module> main() File "F:\Python27\learn\agent\agent\lib\site.py", line 666, in main aliasmbcs() File "F:\Python27\learn\agent\agent\lib\site.py", line 506, in aliasmbcs import locale, codecs File "F:\Python27\learn\agent\agent\lib\locale.py", line 19, in <module> import functools ImportError: No module named functools ``` is there any way to fix this? HELP **Update:** I also changed manually the shebang python interpeter line in all scripts in ENV\Scripts. now all fail with the same python failure as above **Another Update:** to @udi the system python path is: ``` ['', 'C:\\dev\\Python27\\lib\\site-packages\\distribute-0.6.37-py2.7.egg', 'C:\\ dev\\Python27\\lib\\site-packages\\pip-1.3.1-py2.7.egg', 'C:\\dev\\Python27\\lib \\site-packages\\numpy-1.7.1-py2.7-win32.egg', 'C:\\dev\\Python27\\lib\\site-pac kages\\pandas-0.11.0-py2.7-win32.egg', 'C:\\dev\\Python27\\lib\\site-packages\\p ytz-2013b-py2.7.egg', 'C:\\dev\\Python27\\lib\\site-packages\\python_dateutil-2. 
1-py2.7.egg', 'C:\\dev\\Python27\\lib\\site-packages\\six-1.3.0-py2.7.egg', 'C:\ \dev\\Python27\\lib\\site-packages\\tornado-3.0.1-py2.7.egg', 'C:\\dev\\Python27 \\lib\\site-packages\\pyzmq-13.1.0-py2.7-win32.egg', 'C:\\dev\\Python27\\lib\\si te-packages\\pygments-1.6-py2.7.egg', 'C:\\Windows\\system32\\python27.zip', 'C: \\dev\\Python27\\DLLs', 'C:\\dev\\Python27\\lib', 'C:\\dev\\Python27\\lib\\plat- win', 'C:\\dev\\Python27\\lib\\lib-tk', 'C:\\dev\\Python27', 'C:\\dev\\Python27\ \lib\\site-packages', 'C:\\dev\\Python27\\lib\\site-packages\\setuptools-0.6c11- py2.7.egg-info'] ``` since I can't run python from the virtualenv, I can't print the python path from there
Seems like your system and local environments create a mix of libraries and binaries from different versions of python. Chances are you would need to delete `Lib`, `Scripts` and `Include` and start again with `virtualenv .`. You might be able to save the `site-packages` folder, but if you have `requirements.txt` files, you should probably reinstall packages instead (see also: [How do I install from a local cache with pip?](https://stackoverflow.com/questions/4806448/how-do-i-install-from-a-local-cache-with-pip) ). Anyway, I believe you can create a script that does all this in one step.
Correcting the Python directory path in `ENV_FOLDER\Lib\orig-prefix.txt` helped me.
changing virtualenv folder on windows
[ "", "python", "virtualenv", "virtualenv-commands", "" ]
For example I have a dataset as below: ``` id Date 1 2000/01/01 1 2001/01/01 1 2002/01/01 2 2003/01/01 ``` By datastep or sql, how could I get the record with id = 1 and latest Date 2002/01/01? Help is appreciated and thanks in advance.
Try this SQL: ``` select id, max(Date) from yourtable group by id; ```
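The same `GROUP BY`/`MAX` idea can be checked against an in-memory SQLite table (dates in `YYYY/MM/DD` form compare correctly as plain strings):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE yourtable (id INTEGER, Date TEXT)')
conn.executemany('INSERT INTO yourtable VALUES (?, ?)', [
    (1, '2000/01/01'), (1, '2001/01/01'),
    (1, '2002/01/01'), (2, '2003/01/01'),
])

# One row per id, holding the latest Date for that id
latest = dict(conn.execute('SELECT id, MAX(Date) FROM yourtable GROUP BY id'))
```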
If you want the entire record and the data is sorted as shown (BY id and ascending Date), you can use this data step: ``` data want; set have; by id; /* you don't need to specify date here */ if last.id; run; ``` This gives you the most recent record for each id.
SAS datastep/SQL select latest record from multiple records with same ID
[ "", "sql", "sas", "proc-sql", "datastep", "" ]
Can't we yield more than one value in the python generator functions? Example, ``` def gen(): for i in range(5): yield i, i+1 k1, k2 = gen() ``` --- ``` Traceback (most recent call last) ----> 1 k1, k2 = a() ValueError: too many values to unpack ``` This works as follows: ``` >>> b = a() >>> list(b) [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)] ``` Same results even when I do this: ``` def a(): for i in range(5): yield i yield i+1 ```
Because `gen()` returns a generator (a single item, so it can't be unpacked as two), it needs to be advanced *first* to get the values: ``` g = gen() a, b = next(g) ``` It works with `list` because that implicitly consumes the generator. A follow-up idea, wrapping the generator so that `k1` and `k2` become generators themselves: ``` g = gen(); def yield_g(): yield g.next(); k1,k2 = yield_g(); ``` in the hope that `list(k1)` would give `[0,1,2,3,4]` and `list(k2)` would give `[1,2,3,4,5]`, does not work either: `yield_g()` yields only a single tuple, so the two-name unpacking fails the same way. Instead, keep your existing generator and use `izip` (or `zip`): ``` from itertools import izip k1, k2 = izip(*gen()) ```
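In Python 3 the `izip` spelling goes away; the built-in `zip` performs the same transposition (note that this fully consumes the generator, so it only suits finite ones):

```python
def gen():
    for i in range(5):
        yield i, i + 1

# Transpose the stream of pairs into one tuple of firsts and one of seconds
k1, k2 = zip(*gen())
```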
Your function `gen` returns a generator and not values as you might expect judging from the example you gave. If you iterate over the generator the pairs of values will be yielded: ``` In [2]: def gen(): ...: for i in range(5): ...: yield i, i+1 ...: In [3]: for k1, k2 in gen(): ...: print(k1, k2) ...: 0 1 1 2 2 3 3 4 4 5 ```
Yield multiple values
[ "", "python", "generator", "" ]
I have just started learning Django. I was wondering if a Django app can have more than one views file? Let's say I have two separate classes. Should I keep them in one views file, or can I make two views files? Thanks in advance!
Views are just python modules, you can do whatever you want, for instance you can change their names to `whatever.py` as long as your `imports` are correct :) And as suggested: find more info here [Django: split views.py in several files](https://stackoverflow.com/questions/1921771/django-split-views-py-in-several-files) :)
Yes, you can. A modular way of splitting would be to create a package - `views/` ``` - views/ - first.py - second.py - __init__.py ``` and in your `__init.py__` add the following: ``` from .first import * from .second import * ``` This way, all your views would be available for `urls.py`.
Can a django app have more than one views.py?
[ "", "python", "django", "django-views", "" ]
Say I have a select box with some values such as 'RED', 'BLUE' and 'GREEN' and I want to use those strings as references to constants called the same thing, but containing tuples, as so: ``` RED = ('x234114','z8765667') ``` The code I am looking for would look something like this in my mind's eye :) ``` globals_lookup('RED') ``` This could be an obvious one, but it's not coming to me.
Once you have defined the variable with the tuple you can access it by using `locals()`: ``` >>> RED = ('x234114','z8765667') >>> locals()["RED"] ('x234114', 'z8765667') ``` But the right way to do it would be to use a dictionary: ``` >>> select_data = {"RED" :('x234114','z8765667'),"BLUE":('x934587','z943657')} >>> select_data["RED"] ('x234114','z8765667') ```
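If the lookup really must go by name at module level, `globals()` gives the `globals_lookup` from the question directly; the dictionary above remains the safer design. A sketch with illustrative values:

```python
RED = ('x234114', 'z8765667')
BLUE = ('x934587', 'z943657')   # illustrative value, as in the answer

def globals_lookup(name):
    # Look the constant up in the module's global namespace;
    # raises KeyError for names that do not exist.
    return globals()[name]

red = globals_lookup('RED')
```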
Using eval: ``` RED = ('x234114','z8765667') def globals_lookup(ident): return eval(ident) t=globals_lookup('RED') ``` But a better way would be using a hash table: ``` colors={'RED':('x234114','z8765667')} t=colors['RED'] ```
How to convert a string into a constant tuple in Python
[ "", "python", "string", "tuples", "" ]
I'm trying to calculate the GC content (in %) of a DNA sequence for a Rosalind question. I have the following code, but it returns 0, or only the number of G's alone or C's alone (no percentage). ``` x = raw_input("Sequence?:").upper() total = len(x) c = x.count("C") g = x.count("G") gc_total = g+c gc_content = gc_total/total print gc_content ``` I also tried this, just to get a count of G's and C's, and not the percentage, but it just returns a count of the entire string: ``` x = raw_input("Sequence?:").upper() def gc(n): count = 0 for i in n: if i == "C" or "G": count = count + 1 else: count = count return count gc(x) ``` EDIT: I fixed the typo in the print statement in the first example of code. That wasn't the problem, I just pasted the wrong snippet of code (there were many attempts...)
Shouldn't `print cg_content` read `print gc_content`? As for the other snippet of code, your loop says `if i == "C" or "G":`. This evaluates `"G"` as truthy every time, so the `if` branch always runs. Instead, it should read `if i == "C" or i == "G":`. Also, you don't need that `else` statement. Hope this helps. Let us know how it goes. Abdul Sattar
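Putting the pieces together, a corrected sketch of the whole computation; it also uses true division so the percentage is not truncated on Python 2:

```python
def gc_content(seq):
    seq = seq.upper()
    # Count G and C bases with a corrected membership test
    gc = sum(1 for base in seq if base in ('G', 'C'))
    # float() guards against Python 2 integer division
    return 100 * gc / float(len(seq))

pct = gc_content("ATGC")
```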
Your problem is that you are performing integer division, not floating-point division. Try ``` gc_content = gc_total / float(total) ```
Beginner Python script for calculating GC content in DNA sequence
[ "", "python", "function", "rosalind", "" ]
I'm on linux, and I have one python script that I want to call from another python script. I don't want to import it as a module (for a layer of security, and now for an academic exercise because I want to figure this out), I want to actually have one script call the other with os.system() or another function like that, and have the other script return a list of tuples to the first script. I know this might not be the optimal way to do it, but I'd like to try it out, and learn some stuff in the process.
Importing a module is different from executing it as a script. If you don't trust the child Python script; you shouldn't run any code from it. A regular way to use some code from another Python module: ``` import another_module result = another_module.some_function(args) ``` If you want to execute it instead of importing: ``` namespace = {'args': [1,2,3]} # define __name__, __file__ if necessary execfile('some_script.py', namespace) result = namespace['result'] ``` [`execfile()`](http://docs.python.org/2/library/functions.html#execfile) is used very rarely in Python. It might be useful in a debugger, a profiler, or to run `setup.py` in tools such as `pip`, `easy_install`. See also [`runpy` module](http://docs.python.org/2/library/runpy.html). If another script is executed in a different process; you could use many [IPC methods](http://en.wikipedia.org/wiki/Inter-process_communication). The simplest way is just pipe serialized (Python objects converted to a bytestring) input args into subprocess' stdin and read the result back from its stdout as [suggested by @kirelagin](https://stackoverflow.com/a/16877343/4279): ``` import json import sys from subprocess import Popen, PIPE marshal, unmarshal = json.dumps, json.loads p = Popen([sys.executable, 'some_script.py'], stdin=PIPE, stdout=PIPE) result = unmarshal(p.communicate(marshal(args))[0]) ``` where `some_script.py` could be: ``` #!/usr/bin/env python import json import sys args = json.load(sys.stdin) # read input data from stdin result = [x*x for x in args] # compute result json.dump(result, sys.stdout) # write result to stdout ```
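A self-contained round trip of the pipe approach; the child script is inlined via `-c` so the sketch runs as-is:

```python
import json
import sys
from subprocess import Popen, PIPE

# Inline stand-in for some_script.py: read JSON args from stdin,
# compute, write JSON result to stdout.
child_src = (
    "import json, sys\n"
    "args = json.load(sys.stdin)\n"
    "json.dump([x * x for x in args], sys.stdout)\n"
)

p = Popen([sys.executable, '-c', child_src],
          stdin=PIPE, stdout=PIPE, universal_newlines=True)
out, _ = p.communicate(json.dumps([1, 2, 3]))
result = json.loads(out)
```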
You can use [`subprocess`](http://docs.python.org/2/library/subprocess.html): ``` subprocess.call(["python", "myscript.py"]) ``` This will also return the process return value (such as 0 or 1).
Getting return information from another python script
[ "", "python", "linux", "" ]
I would like to do a huge find and replace using python. `tot11.txt` is a string (has 600000 items), I want to replace items here from the file `1.txt`. So for example `tot11.txt` has : `'alba'`, `'raim'`, and `1.txt` looks like this : `'alba':'barba', 'raim':'uva'`. and as a result i would get `'barba'`, `'uva'`, and so on... When I run the script I get the following error: ``` Traceback (most recent call last): File "sort2.py", line 12, in <module> txt = replace_all(my_text, dic) File "sort2.py", line 4, in replace_all for i, j in dic.iteritems(): AttributeError: 'str' object has no attribute 'iteritems' ``` Also the script works well if I dont use a text file, just writing the changeable items in a script. ``` import sys def replace_all(text, dic): for i, j in dic.iteritems(): text = text.replace(i, j) return text my_text= open('tot11.txt', 'r').read() reps = open('1.txt', 'r').read() txt = replace_all(my_text, reps) f = open('results.txt', 'w') sys.stdout = f print txt ```
`open('1.txt', 'r').read()` returns a string, not a dict. ``` >>> print file.read.__doc__ read([size]) -> read at most size bytes, returned as a string. ``` If `1.txt` contains: ``` 'alba':'barba', 'raim':'uva' ``` then you can use `ast.literal_eval` to get a dict: ``` >>> from ast import literal_eval >>> with open("1.txt") as f: dic = literal_eval('{' + f.read() +'}') print dic ... {'alba': 'barba', 'raim': 'uva'} ``` Instead of using `str.replace` you should use a regex, as `str.replace('alba','barba')` will also replace words like `'albaa'`, `'balba'`, etc: ``` import re def replace_all(text, dic): for i, j in dic.iteritems(): text = re.sub(r"'{}'".format(i), "'{}'".format(j), text) return text ```
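A quick demonstration of the brace-wrapping trick, using the string content directly instead of reading a file (the sample data mirrors the question):

```python
from ast import literal_eval

raw = "'alba':'barba', 'raim':'uva'"   # what 1.txt would contain
dic = literal_eval('{' + raw + '}')    # wrap in braces -> a real dict

# Replace only quoted occurrences, as in the question's data
text = "the 'alba' grows near the 'raim'"
for old, new in dic.items():
    text = text.replace("'%s'" % old, "'%s'" % new)
```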
The second argument of the `replace_all` function is a string, since it came from `reps = open('1.txt', 'r').read()`, so calling `iteritems()` on it fails because that method doesn't exist for string objects.
python: 'str' object has no attribute 'iteritems'
[ "", "python", "string", "" ]
I have created some code: ``` import numpy as np Length=(2.7)*10**-3 Nx=4 x = np.linspace(0, Length, Nx+1) # mesh points in space t1=110 t2=100 m=((t2-t1)/Length) T=5 N=5 t = np.linspace(0, T, N+1) Coeff=0.5 b=0.2 tamb = 20 u = np.zeros(Nx+1) u_1 = np.zeros(Nx+1) for i in range(0, Nx+1): u_1[i] = m*(x[i])+t1 #print u_1 r=[] for n in range(0, N+1): # Compute u at inner mesh points for i in range(0,1): u[i] = 2*Coeff*(u_1[i+1]+b*tamb)+(1-2*Coeff-2*b*Coeff)*u_1[i] for i in range(1,Nx): u[i] = Coeff*(u_1[i+1]+u_1[i-1])+(1-2*Coeff)*u_1[i] for i in range(Nx,Nx+1): u[i] = 2*Coeff*(u_1[i-1])+(1-2*Coeff)*u_1[i] # Switch variables before next step u_1, u = u, u_1 r.append(u.copy()) print r[5] ``` Output for code: ``` [ 78.1562 94.1595 96.82 102.6375 102.125 ] ``` Using the code I have created a function to apply to an array: ``` def function(data,time): import numpy as np Values=data[n] Length=(Values[2])*10**-3 Nx=4 x = np.linspace(0, Length, Nx+1) # mesh points in space t1=Values[0] t2=Values[1] m=((t2-t1)/Length) T=time[5] N=5 t = np.linspace(0, T, N+1) Coeff=0.5 b=0.2 tamb = 20 u = np.zeros(Nx+1) u_1 = np.zeros(Nx+1) for i in range(0, Nx+1): u_1[i] = m*(x[i])+t1 #print u_1 r=[] for n in range(0, N+1): # Compute u at inner mesh points for i in range(0,1): u[i] = 2*Coeff*(u_1[i+1]+b*tamb)+(1-2*Coeff-2*b*Coeff)*u_1[i] for i in range(1,Nx): u[i] = Coeff*(u_1[i+1]+u_1[i-1])+(1-2*Coeff)*u_1[i] for i in range(Nx,Nx+1): u[i] = 2*Coeff*(u_1[i-1])+(1-2*Coeff)*u_1[i] # Switch variables before next step u_1, u = u, u_1 r.append(u.copy()) return r import numpy as np #arrays data=np.array(((110,100,2.5),(112,105,2.6),(115,109,2.7))) time=np.array((0,1,2,3,4,5)) #apply function to array for n in range(len(data)): r = function(data,time) print r[5] ``` The 1st code works fine but when I apply the code using a function (2nd Code) if I tell I get the following error: ``` Traceback (most recent call last): File "C:/Users/a/Desktop/functiontrial3.py", line 39, in <module> r = 
function(data,time) File "C:/Users/a/Desktop/functiontrial3.py", line 3, in function Values=data[n] UnboundLocalError: local variable 'n' referenced before assignment ``` What do I have to do to get the following code to work?
Change your function signature: ``` def function(data,time,n): ``` and call it like this: ``` for n in xrange(len(data)): r = function(data,time,n) ```
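A minimal reproduction of the error and of the parameter-passing fix, stripped of the numerics:

```python
def broken(data):
    v = data[n]            # 'n' is assigned in the loop below, so it is
    for n in range(1):     # local to the whole function; this read
        pass               # therefore raises UnboundLocalError
    return v

def fixed(data, n):        # pass n in explicitly instead
    v = data[n]
    for n in range(1):
        pass
    return v

try:
    broken([10, 20])
    failed = False
except UnboundLocalError:
    failed = True

value = fixed([10, 20], 1)
```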
You're using the global `n` here ``` Values=data[n] ``` You're using `n` as a local variable here ``` for n in range(0, N+1): ``` Python won't let you use `n` as both a global and a local in the same function: because `n` is assigned inside the function, it is treated as local everywhere in the function, so the earlier read raises `UnboundLocalError`. Is it *supposed* to be the same `n`, or is it just a bad reuse of a variable name? There are several ways to fix this error, but it depends on your intent.
Python local variable referenced before assignment
[ "", "python", "function", "" ]
I'm using the python [`requests` module](https://requests.readthedocs.io/) to send a RESTful GET to a server, for which I get a response in JSON. The JSON response is basically just a list of lists. What's the best way to coerce the response to a native Python object so I can either iterate or print it out using `pprint`?
You can use [`json.loads`](http://docs.python.org/2/library/json.html#json.loads): ``` import json import requests response = requests.get(...) json_data = json.loads(response.text) ``` This parses the JSON string into the corresponding Python object (here, a list of lists), which you can then iterate over or pretty-print. Or you can use @Martijn's helpful suggestion, and the higher-voted answer, `response.json()`.
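The decoding step itself needs no network, so it can be demonstrated on a literal string; the payload shape mirrors the question's list of lists:

```python
import json
from pprint import pformat

payload = '[["a", 1], ["b", 2]]'   # stand-in for response.text
data = json.loads(payload)         # now a native Python list of lists
pretty = pformat(data)             # ready for pretty-printing
```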
Since you're using `requests`, you should use the response's [`json`](https://requests.readthedocs.io/en/latest/user/quickstart/#json-response-content) method. ``` import requests response = requests.get(...) data = response.json() ``` It [autodetects which decoder to use](https://github.com/requests/requests/blob/4ea09e49f7d518d365e7c6f7ff6ed9ca70d6ec2e/requests/models.py#L868-L892).
What's the best way to parse a JSON response from the requests library?
[ "", "python", "json", "rest", "python-requests", "" ]
Is there any clean way to suppress the unicode character prefix when printing an object using the pprint module? ``` >>> import pprint >>> pprint.pprint({u'foo': u'bar', u'baz': [u'apple', u'orange', u'pear', u'guava', u'banana'], u'hello': u'world'}) {u'baz': [u'apple', u'orange', u'pear', u'guava', u'banana'], u'foo': u'bar', u'hello': u'world'} ``` This looks pretty ugly. Is there any way to print the `__str__` value of each object, instead of the `__repr__`?
It could be done by overriding the format method of the PrettyPrinter object, and casting any unicode object to string: ``` import pprint def my_safe_repr(object, context, maxlevels, level): typ = pprint._type(object) if typ is unicode: object = str(object) return pprint._safe_repr(object, context, maxlevels, level) printer = pprint.PrettyPrinter() printer.format = my_safe_repr printer.pprint({u'foo': u'bar', u'baz': [u'apple', u'orange', u'pear', u'guava', u'banana'], u'hello': u'world'}) ``` which gives: ``` {'baz': ['apple', 'orange', 'pear', 'guava', 'banana'], 'foo': 'bar', 'hello': 'world'} ```
This may be too much, but one possible way is to implement a wrapper over the output stream: ``` import pprint,sys,re class writer : def write(self, text): text=re.sub(r'u\'([^\']*)\'', r'\1',text) sys.stdout.write(text) wrt=writer() d = { u'foo': u'bar', u'baz': [u'apple', u'orange', u'pear', u'guava', u'banana'], u'hello': u'world'} pp = pprint.PrettyPrinter(stream=wrt) pp.pprint(d) ``` *Output:* ``` {baz: [apple, orange, pear, guava, banana], foo: bar, hello: world} ``` It is also possible to keep the single quotes around strings by moving them inside the capture group, e.g. to get 'foo': 'bar': ``` text=re.sub(r'u(\'[^\']*\')', r'\1',text) ``` This gives: ``` {'baz': ['apple', 'orange', 'pear', 'guava', 'banana'], 'foo': 'bar', 'hello': 'world'} ```
Suppress unicode prefix on strings when using pprint
[ "", "python", "pprint", "" ]
I have this query: ``` SELECT `gift_donations`.*, `scholarships`.`name` AS scholarship_name FROM (`gift_donations`) LEFT OUTER JOIN `scholarships` scholarships ON `scholarships`.`id` = `gift_donations`.`scholarship_id` WHERE `gift_donations`.`contact_id` = '13' AND `gift_donations`.`in_memory` REGEXP '[a-zA-Z]+' OR in_honor REGEXP '[a-zA-Z]+' ORDER BY `gift_donations`.`id` desc ``` As you can see, here I am trying to get **only those records whose `contact_id` is `13`**, but the result set also contains other records whose `contact_id` isn't `13`. Why is this happening? Is it because of `REGEXP`, or am I not writing the query the way it should be written to bring back only those records whose `contact_id` is `13` (or any other number I want)?
`AND` takes precedence over `OR`. You should surround the two `REGEXP` conditions with parentheses, like so: ``` (`gift_donations`.`in_memory` REGEXP '[a-zA-Z]+' OR in_honor REGEXP '[a-zA-Z]+') ``` As it is, you have essentially written the following where clause ``` A and B or C ``` which due to operator precedence is equivalent to ``` (A and B) or (C) ``` and has to be changed to ``` (A) and (B or C) ``` You can look up all operator precedences in the [MySQL Reference Manual](http://dev.mysql.com/doc/refman/5.0/en/operator-precedence.html) *Note that it's always a good idea to be explicit by using parentheses*
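The precedence trap can be reproduced with plain booleans (a sketch in Python, where `and` also binds tighter than `or`, just as `AND` does in SQL):

```python
# Mirror of the WHERE clause: A = contact_id is 13, B = in_memory matches,
# C = in_honor matches.  `and` binds tighter than `or`, same as in SQL.
A, B, C = False, False, True      # wrong contact_id, but in_honor matches
leaks = A and B or C              # parsed as (A and B) or C -> row leaks in
correct = A and (B or C)          # the intended grouping -> row excluded
print(leaks, correct)  # True False
```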
Use parentheses to group the `OR` conditions. ``` WHERE `gift_donations`.`contact_id` = '13' AND ( `gift_donations`.`in_memory` REGEXP '[a-zA-Z]+' OR in_honor REGEXP '[a-zA-Z]+') ORDER BY `gift_donations`.`id` desc ```
SQL Query not filtering right data
[ "", "mysql", "sql", "database", "" ]
Is there a function in Google App Engine to test if a string is valid 'string key' prior to calling `memcache.get(key)` without using `db.get()` or `db.get_by_key_name()` first? In my case the key is being passed from the user's get request: `obj = memcache.get(self.request.get("obj"))` Somehow I'd like to know if that string is a valid key string without calling the db first, which would defeat the purpose of using memcache.
That is probably the most efficient (and practical) way to determine if the key string is valid. The code is obviously performing that test for you before it attempts to retrieve the entity from memcache/datastore. Even better, Google will update that code if necessary. ``` from google.appengine.api.datastore_errors import BadKeyError try: obj = memcache.get(self.request.get("obj")) except BadKeyError: # give a friendly error message here ``` Also, consider switching to ndb. Performing a get() on a key automatically uses two levels of cache, local and memcache. You don't need to write separate code for memcache.
A db module key sent to a client should pass through str(the\_key), which gives you a URL-safe encoded key. Your templating environment etc. will do this for you just by rendering the key into a template. On receiving the key back from a client, you should recreate the key with `key = db.Key(encoded=self.request.get("obj"))` At this point it could fail with something like `BadKeyError: Invalid string key "thebadkeystring".` If not, you have a valid key. `obj = memcache.get(self.request.get("obj"))` won't actually raise BadKeyError, because at that point you are just working with a string, and you just get `None` returned or a value. So at that point all you know is that you have a cache miss. However you need to use memcache.get(self.request.get("obj")) to get the object from memcache, as a `db.Key` instance is not a valid memcache key. So you will be constructing a key to validate the key string at this point. Of course if the memcache get fails, then you can use the just-created key to fetch the object with `db.get(key)`
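For a cheap sanity check before even constructing a `db.Key`, a stdlib-only sketch is possible (a hypothetical helper, not a GAE API — it assumes encoded datastore keys use the websafe-base64 alphabet, and it can only reject obvious garbage, not prove validity; only `db.Key(encoded=...)` can do that):

```python
import re

# Hypothetical pre-check: encoded keys are websafe-base64, so anything
# outside that alphabet cannot be a valid key string.  A match does NOT
# guarantee the key is real -- still wrap db.Key(...) in a try/except.
_KEY_RE = re.compile(r'^[A-Za-z0-9\-_=]+$')

def looks_like_key(s):
    return bool(s) and _KEY_RE.match(s) is not None

print(looks_like_key('ahRzfmV4YW1wbGUtYXBwLWlkcg0LEgRVc2VyGLkHDA'))  # True
print(looks_like_key('not a key!'))  # False
```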
Test if string is valid key prior to memcache.get()
[ "", "python", "google-app-engine", "python-memcached", "" ]