| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to add a new column that will be a foreign key. I have been able to add the column and the foreign key constraint using two separate `ALTER TABLE` commands:
```
ALTER TABLE one
ADD two_id integer;
ALTER TABLE one
ADD FOREIGN KEY (two_id) REFERENCES two(id);
```
Is there a way to do this with one ALTER TABLE command instead of two? I could not come up with anything that works. | As so often with SQL-related questions, it depends on the DBMS. Some DBMSs allow you to combine `ALTER TABLE` operations separated by commas. For example...
**[Informix](https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_0290.htm)** syntax:
```
ALTER TABLE one
ADD two_id INTEGER,
ADD CONSTRAINT FOREIGN KEY(two_id) REFERENCES two(id);
```
The syntax for [IBM DB2 LUW](https://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000888.html?cp=SSEPGG_10.5.0%2F2-12-7-31) is similar, repeating the keyword ADD but (if I read the diagram correctly) not requiring a comma to separate the added items.
**Microsoft [SQL Server](https://msdn.microsoft.com/en-us/library/ms190273.aspx)** syntax:
```
ALTER TABLE one
ADD two_id INTEGER,
FOREIGN KEY(two_id) REFERENCES two(id);
```
Some others do not allow you to combine `ALTER TABLE` operations like that. Standard SQL only allows a single operation in the `ALTER TABLE` statement, so in Standard SQL, it has to be done in two steps. | In MS-SQLServer:
```
ALTER TABLE one
ADD two_id integer CONSTRAINT fk FOREIGN KEY (two_id) REFERENCES two(id)
``` | Add new column with foreign key constraint in one command | [
"",
"sql",
""
] |
I've a very simple question about python and lists.
I need to cycle through a list and get sublists of a fixed length, spanning from the beginning to the end. To be more clear:
```
def get_sublists( length ):
# sublist routine
list = [ 1, 2, 3, 4, 5, 6, 7 ]
sublist_len = 3
print get_sublists( sublist_len )
```
this should return something like this:
```
[ 1, 2, 3 ]
[ 2, 3, 4 ]
[ 3, 4, 5 ]
[ 4, 5, 6 ]
[ 5, 6, 7 ]
```
Is there any simple and elegant approach to do this in python? | Use a loop and yield slices:
```
def get_sublists(lst, length):
    for i in range(len(lst) - length + 1):
        yield lst[i:i + length]
```
or, if you must return a list:
```
def get_sublists(lst, length):
    return [lst[i:i + length] for i in range(len(lst) - length + 1)]
``` | ```
[alist[i:i+3] for i in range(len(alist)-2)]
``` | Python get sublists | [
"",
"python",
""
] |
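Both answers above build length-`n` slices by stepping one index at a time; a generic sliding-window helper in the same spirit, taking the list as a parameter, can be sketched as:

```python
def windows(seq, n):
    # return every length-n contiguous sublist, sliding one step at a time
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

print(windows([1, 2, 3, 4, 5, 6, 7], 3))
```

Note that when `len(seq) < n` the `range` is empty, so the helper simply returns an empty list.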
I am trying to subset hierarchical data that has two row ids.
Say I have data in `hdf`
```
index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
['one', 'two', 'three']],
labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
[0, 1, 2, 0, 1, 1, 2, 0, 1, 2]])
hdf = DataFrame(np.random.randn(10, 3), index=index,
columns=['A', 'B', 'C'])
hdf
```
And I wish to subset so that I see `foo` and `qux`, returning only sub-row `two` and columns `A` and `C`.
I can do this in two steps as follows:
```
sub1 = hdf.ix[['foo','qux'], ['A', 'C']]
sub1.xs('two', level=1)
```
Is there a single-step way to do this?
thanks | ```
In [125]: hdf[hdf.index.get_level_values(0).isin(['foo', 'qux']) & (hdf.index.get_level_values(1) == 'two')][['A', 'C']]
Out[125]:
A C
foo two -0.113320 -1.215848
qux two 0.953584 0.134363
```
Much more complicated, but it would be better if you have many different values you want to choose in level one. | Doesn't look the nicest, but use tuples to get the rows you want and then squares brackets to select the columns.
```
In [36]: hdf.loc[[('foo', 'two'), ('qux', 'two')]][['A', 'C']]
Out[36]:
A C
foo two -0.356165 0.565022
qux two -0.701186 0.026532
```
`loc` could be swapped out for `ix` here. | subsetting hierarchical data in pandas | [
"",
"python",
"pandas",
""
] |
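In newer pandas, `pd.IndexSlice` gives a true single-step selection for the question above. A sketch with made-up data (the index must be sorted for slicers to work):

```python
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
     ('qux', 'two'), ('qux', 'three')])
hdf = pd.DataFrame(np.arange(15).reshape(5, 3),
                   index=index, columns=['A', 'B', 'C']).sort_index()

# one step: rows where level 0 is foo/qux and level 1 is 'two', columns A and C
sub = hdf.loc[pd.IndexSlice[['foo', 'qux'], 'two'], ['A', 'C']]
print(sub)
```

`pd.IndexSlice` replaces the deprecated `ix` used in the answers, and avoids the two-step `ix` + `xs` dance entirely.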
I have this script that does a word search in text. The search goes pretty well and the results work as expected. What I'm trying to achieve is to extract the `n` words closest to the match. For example:
> The world is a small place, we should try to take care of it.
Suppose I'm looking for `place` and I need to extract the 3 words on the right and the 3 words on the left. In this case they would be:
```
left -> [is, a, small]
right -> [we, should, try]
```
What is the best approach to do this?
Thanks! | ```
import re

def search(text, n):
    '''Searches for the text, and retrieves n words either side of it, which are returned separately'''
    word = r"\W*([\w]+)"
    groups = re.search(r'{}\W*{}{}'.format(word * n, 'place', word * n), text).groups()
    return groups[:n], groups[n:]
```
This allows you to specify how many words either side you want to capture. It works by constructing the regular expression dynamically. With
```
t = "The world is a small place, we should try to take care of it."
search(t,3)
(('is', 'a', 'small'), ('we', 'should', 'try'))
``` | While regex would work, I think it's overkill for this problem. You're better off with two list comprehensions:
```
sentence = 'The world is a small place, we should try to take care of it.'.split()
indices = (i for i,word in enumerate(sentence) if word=="place")
neighbors = []
for ind in indices:
neighbors.append(sentence[ind-3:ind]+sentence[ind+1:ind+4])
```
Note that if the word that you're looking for appears multiple times consecutively in the sentence, then this algorithm will include the consecutive occurrences as neighbors.
For example:
```
In [29]: neighbors = []

In [30]: sentence = 'The world is a small place place place, we should try to take care of it.'.split()

In [31]: sentence
Out[31]:
['The',
 'world',
 'is',
 'a',
 'small',
 'place',
 'place',
 'place,',
 'we',
 'should',
 'try',
 'to',
 'take',
 'care',
 'of',
 'it.']

In [32]: indices = [i for i,word in enumerate(sentence) if word == 'place']

In [33]: for ind in indices:
   ....:     neighbors.append(sentence[ind-3:ind]+sentence[ind+1:ind+4])

In [34]: neighbors
Out[34]:
[['is', 'a', 'small', 'place', 'place,', 'we'],
 ['a', 'small', 'place', 'place,', 'we', 'should']]
``` | Extract words surrounding a search word | [
"",
"python",
"regex",
"find",
"text-processing",
""
] |
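A regex-free variant of the list-comprehension idea above that also strips punctuation, so that `place,` still matches `place` (the punctuation set is an assumption):

```python
def context(text, target, n=3):
    # return (left, right) word lists around each occurrence of target
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.strip('.,;:!?') == target:
            hits.append((words[max(0, i - n):i], words[i + 1:i + 1 + n]))
    return hits

sentence = 'The world is a small place, we should try to take care of it.'
print(context(sentence, 'place'))
```

The `max(0, i - n)` guard keeps the left slice from wrapping around when the match sits near the start of the text.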
The `lda.show_topics` module from the following code only prints the distribution of the top 10 words for each topic, how do i print out the full distribution of all the words in the corpus?
```
from gensim import corpora, models
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
lda = models.ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2)
for i in lda.show_topics():
print i
``` | There is a parameter called `topn` in `show_topics()` where you can specify the number of top N words you want from the word distribution over each topic; see <http://radimrehurek.com/gensim/models/ldamodel.html>
So instead of the default `lda.show_topics()`, you can use `len(dictionary)` for the full word distributions for each topic:
```
for i in lda.show_topics(topn=len(dictionary)):
print i
``` | There are two parameters, `num_topics` and `num_words`, in `show_topics()`: for `num_topics` topics, it returns the `num_words` most significant words (10 words per topic, by default). See <http://radimrehurek.com/gensim/models/ldamodel.html#gensim.models.ldamodel.LdaModel.show_topics>
So you can use `len(lda.id2word)` for the full word distribution for each topic, and `lda.num_topics` for all the topics in your LDA model.
```
for i in lda.show_topics(formatted=False,num_topics=lda.num_topics,num_words=len(lda.id2word)):
print i
``` | How to print out the full distribution of words in an LDA topic in gensim? | [
"",
"python",
"lda",
"topic-modeling",
"gensim",
""
] |
In Pascal we have `var` parameters, and functions can change parameter values to new values:
```
procedure a(var S1, S2: string);
begin
S1:= S1+'test'+S1;
S2:= S1+'('+S2+')';
end;
```
Does Python have such a feature? Can I change the `string` parameter inside the method, or must I use `return` and assign the variable later? | Python can return *multiple* values (in the form of a tuple), obsoleting the need to pass values by reference.
In your simple sample case, even if you were able to apply the same technique, you could not achieve the same result as Python strings are not mutable.
As such, your simple example can be translated to Python as:
```
def a(s1, s2):
s1 = '{0}test{0}'.format(s1)
s2 = '{}({})'.format(s1, s2)
return s1, s2
foo, bar = a(foo, bar)
```
The alternative is to pass in mutable objects (dictionaries, lists, etc.) and alter their *contents*. | This is called "pass by reference", and no, Python doesn't do it (although if you pass in a mutable object by value and change it in the function, it's changed everywhere because it's the same object.) | Pascal "var parameter" in Python | [
"",
"python",
""
] |
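The mutable-object alternative mentioned in the accepted answer above can be sketched like this, wrapping the strings in a list so the function can mutate the caller's data (the single-list wrapper is just one possible convention):

```python
def a(strings):
    # mutate the list's contents; the caller observes the change,
    # mirroring the Pascal var-parameter behaviour
    strings[0] = strings[0] + 'test' + strings[0]
    strings[1] = strings[0] + '(' + strings[1] + ')'

pair = ['S1', 'S2']
a(pair)
print(pair)
```

Rebinding `strings` itself inside the function would not be visible to the caller; only in-place mutation of the shared object is.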
Ok, so I have read in a text file from a .txt file that came in the format as such:
```
home school 5
home office 10
home store 7
school store 8
office school 4
END OF FILE
```
I then turned it into a two dimensional list in python and it looks something like:
```
[['home', 'school', '5'], ['home','office','10'],['home','store','7'],
['school','store','8'], ['office','school','4']]
```
But the way that I would really like it is to be more in dictionary format such as:
```
{'home': {'school': 5, 'office': 10, 'store': 7},
'school': {'store': 8},
'office': {'school': 4}}
```
That format looks a lot better and is easier to read. The data that I have is lot more in detail but this is simple version. I have read my text file as follows:
```
myFileOpen = open(myInputFile, 'r')
myMap = myFileOpen.readlines()[:-1]
#Format the list, each line becomes a list in a greater list
myMap = [i.split('\n')[0] for i in myMap]
myMap = [i.split(' ') for i in myMap]
```
If anyone can help explain how to this I'd be very grateful! Thank you! | The code may look like this:
```
result = {}
for item in data:
result.setdefault(item[0], {}).update({item[1]: item[2]})
```
Proof with whole code: <http://ideone.com/8XUA41> | Skip the intermediate list and just do it all at once:
```
d = {}
with open(myInputFile, 'r') as handle:
for line in handle:
        if line.strip() == 'END OF FILE':
continue
key1, key2, value = line.split()
if key1 not in d:
d[key1] = {}
d[key1][key2] = int(value)
```
You could further condense that last part into:
```
d.setdefault(key1, {})[key2] = int(value)
``` | Turning a two dimensional list into a dictionary in python | [
"",
"python",
"list",
"dictionary",
"readfile",
""
] |
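For reference, a `collections.defaultdict` variant of the `setdefault` approaches above, using the sample rows from the question:

```python
from collections import defaultdict

rows = [['home', 'school', '5'], ['home', 'office', '10'],
        ['home', 'store', '7'], ['school', 'store', '8'],
        ['office', 'school', '4']]

d = defaultdict(dict)
for src, dst, value in rows:
    d[src][dst] = int(value)   # inner dict is created on first access

print(dict(d))
```

`defaultdict(dict)` removes the need for the explicit `if key1 not in d` check, at the cost of one import.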
I have one table holding events and dates:
```
NAME | DOB
-------------------
Adam | 6/26/1999
Barry | 7/18/2005
Daniel| 1/18/1984
```
I have another table defining date ranges as either start or end times, each with a descriptive code:
```
CODE | DATE
---------------------
YearStart| 6/28/2013
YearEnd | 8/14/2013
```
I am trying to write SQL that will find all Birthdates that fall between the start and end of the times described in the second table. **The YearStart will always be in June, and the YearEnd will always be in August.** My thought was to try:
```
SELECT
u.Name
CAST(MONTH(u.DOB) AS varchar) + '/' + CAST(DAY(u.DOB) AS varchar) as 'Birthdate',
u.DOB as 'Birthday'
FROM
Users u
WHERE
MONTH(DOB) = '7' OR
(MONTH(DOB) = '6' AND DAY(DOB) >= DAY(SELECT d.Date FROM Dates d WHERE d.Code='YearStart')) OR
(MONTH(DOB) = '8' AND DAY(DOB) <= DAY(SELECT d.Date FROM Dates d WHERE d.Code='YearEnd')))
ORDER BY
MONTH(DOB) ASC, DAY(DOB) ASC
```
But this doesn't pass, I'm guessing because there is no guarantee that the internal SELECT statement will return only one row, so cannot be parsed as a datetime. How do I actually accomplish this query? | This seems strange and I still feel like we're missing a relevant piece of the requirements, but look at the following. It seems from your description that the years are irrelevant and you want birthdays that fall between the given months/days.
```
SELECT
t1.Name, t1.DOB
FROM
t1
JOIN t2 AS startDate ON (startDate.Code = 'YearStart')
JOIN t2 AS endDate ON (endDate.Code = 'YearEnd')
WHERE
STUFF(CONVERT(varchar, t1.DOB, 112), 1, 4, '') BETWEEN
STUFF(CONVERT(varchar, startDate.[Date], 112), 1, 4, '')
AND
STUFF(CONVERT(varchar, endDate.[Date], 112), 1, 4, '')
``` | Try using a PIVOT to get the years on the same row, like this. This will return only 'Bob'
```
DECLARE @Names TABLE(
NAME VARCHAR(20),
DOB VARCHAR(10));
DECLARE @Dates TABLE(
CODE VARCHAR(20),
THEDATE VARCHAR(10));
INSERT @Names (NAME,DOB) VALUES ('Adam', '6/26/1999');
INSERT @Names (NAME,DOB) VALUES ('Daniel', '1/18/1984');
INSERT @Names (NAME,DOB) VALUES ('Bob', '7/1/2013');
INSERT @Dates (CODE,THEDATE) VALUES ('YearStart', '6/28/2013');
INSERT @Dates (CODE,THEDATE) VALUES ('YearEnd', '8/14/2013');
SELECT * FROM @Names;
SELECT * FROM @Dates;
SELECT n.*
FROM @Names AS n
INNER JOIN (
SELECT
1 AS YearTypeId
, [YearStart]
, [YearEnd]
FROM ( SELECT [CODE]
, THEDATE
FROM @Dates
) p PIVOT ( MIN(THEDATE)
FOR [CODE]
IN ([YearStart],[YearEnd])
) AS pvt) AS y
ON
n.DOB >= y.YearStart
AND n.DOB <= y.YearEnd
``` | How do I compare dates in one SQL table to a range defined in another table? | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
How do I parse JSON output to get the list under `data` only, and then append each of the strings in the list to a URL such as google.com/Confidential?
So my JSON output, which I will name `text`:
```
text = {"success":true,"code":200,"data":["Confidential","L1","Secret","Secret123","foobar","maret1","maret2","posted","rontest"],"errs":[],"debugs":[]}
```
What I am looking to do is get the list under `data` only. So far the script I have is giving me the entire JSON output.
```
json.loads(text)
print text
output = urllib.urlopen("http://google.com" % text)
print output.geturl()
print output.read()
``` | ```
jsonobj = json.loads(text)
print jsonobj['data']
```
Will print the list in the `data` section of your JSON.
If you want to open each as a link after `google.com`, you could try this:
```
def processlinks(item):
    output = urllib.urlopen('http://google.com/%s' % item)
    print output.geturl()
    print output.read()

map(processlinks, jsonobj['data'])
``` | ```
info = json.loads(text)
json_text = json.dumps(info["data"])
```
Using `json.dumps` converts the python data structure gotten from `json.loads` back to regular json text.
So, you could then use `json_text` wherever you were using `text` before and it should only have the selected key, in your case: `"data"`. | Parse a json file and add the strings to a URL | [
"",
"python",
""
] |
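A sketch of the whole round trip on Python 3 (`urllib.parse` replaces the Python 2 `urllib` calls used in the answers; the google.com scheme comes from the question and is just illustrative):

```python
import json
from urllib.parse import quote, urljoin

text = '{"success": true, "code": 200, "data": ["Confidential", "L1", "Secret"]}'
payload = json.loads(text)

# build one URL per string under the "data" key, percent-encoding each item
urls = [urljoin('http://google.com/', quote(item)) for item in payload['data']]
print(urls)
```

Only URL construction is shown here; actually fetching each URL would be a separate `urllib.request.urlopen` call.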
Is there any way to tell what data type a string holds? The question may seem illogical, but see the cases below:
```
varname = '444'
somefunc(varname) => int
varname = 'somestring'
somefunc(varname) => String
varname = '1.2323'
somefunc(varname) => float
```
My Case:
I get mixed data in a list, but the items are in string format.
```
myList = ['1', '2', '1.2', 'string']
```
I'm looking for a generic way to understand what their data is so that I can add the respective comparisons. Since they're already converted to string format, I can't really call the list (myList) mixed data... but still, is there a way? | ```
from ast import literal_eval
def str_to_type(s):
try:
k=literal_eval(s)
return type(k)
except:
return type(s)
l = ['444', '1.2', 'foo', '[1,2]', '[1']
for v in l:
print str_to_type(v)
```
*Output*
```
<type 'int'>
<type 'float'>
<type 'str'>
<type 'list'>
<type 'str'>
``` | You can use ast.literal\_eval() and type():
```
import ast
stringy_value = '333'
try:
the_type = type(ast.literal_eval(stringy_value))
except:
the_type = type('string')
``` | Get type of data stored in a string in python | [
"",
"python",
""
] |
I would like to find an occurrence of a word in Python and print the word after this word. The words are space separated.
example :
if there is an occurrence of the word "sample" "thisword" in a file . I want to get thisword. I want a regex as the thisword keeps on changing . | python strings have a built in method split that splits the string into a list of words delimited by white space characters ([doc](http://docs.python.org/2/library/stdtypes.html#str.split)), it has parameters for controlling the way it splits the word, you can then search the list for the word you want and return the next index
```
your_string = "This is a string"
list_of_words = your_string.split()
next_word = list_of_words[list_of_words.index(your_search_word) + 1]
``` | Sounds like you want a function.
```
>>> s = "This is a sentence"
>>> sl = s.split()
>>>
>>> def nextword(target, source):
... for i, w in enumerate(source):
... if w == target:
... return source[i+1]
...
>>> nextword('is', sl)
'a'
>>> nextword('a', sl)
'sentence'
>>>
```
Of course, you'll want to do some error checking (e.g., so you don't fall off the end) and maybe a while loop so you get all the instances of the target. But this should get you started. | How to find word next to a word in Python | [
"",
"python",
"python-2.7",
""
] |
I'm reading throug a large gene file of the format
```
>GeneID
ACTCTCTCTATATATATATAT\n
GCTCTGCTTCTAGAGAGAGTG\n
TCTATTTGTTTATATATCTTT\n
>GeneID
GCTCTGCTTCTAGAAATTCCC\n
ACTCTGTATATATTTTCAAAA\n
GCTCTGCTTCTAGAGAGAGTG\n
```
Each gene start with a > then a unique ID. After comes line of the nucleotides of that gene.
Unfortunately, this file was generated so that there are line breaks between each of the lines of sequence.
I need to read in each sequence as one contiguous string. So, I have been using the next method (as shown below).
```
for line in filer:
if line.startswith(">"):
# Find Sequences
seq_seg = next(filer)
seq = ""
# Concatenate lines until find next gene
while not (seq_seg.startswith(">")):
seq += seq_seg.strip() # Get rid of '\n'
seq_seg = next(filer)
```
I find that my script is only picking up half of the genes in the file: when `next` is called on the line that makes the while loop fail, the file pointer is already pointing at the next gene ID, so when the next iteration of the for loop executes, it moves past that line.
Is there a way to rewind the file pointer to the previous line, so my for loop catches that as a new gene?
I have seen similar questions, but none of them address the specific way that I am reading through the file with the
```
for line in file:
#do stuff
``` | I would use a generator instead of skipping through lines (something tells me this can be simplified greatly):
```
def parse_file(file):
id = ''
gene = ''
for line in file:
if line.startswith('>'):
if gene:
yield id, gene
            id = line[1:].strip()
gene = ''
else:
gene += line.strip()
else:
yield id, gene # Final gene
```
Now, you can safely iterate over the whole thing with just a few lines of code:
```
with open('file.txt', 'r') as handle:
for gene_id, nucleotides in parse_file(handle):
print gene_id, nucleotides
```
There's also [pyfasta](https://pypi.python.org/pypi/pyfasta/).
Or a more general function with `itertools`:
```
def grouper(line):
return line.startswith('>') and line[1:]
def itersplit(it, pred):
groups = (list(group) for key, group in itertools.groupby(it, pred))
yield from zip(groups, groups)
def parse(file):
for key, group in itersplit(file, grouper):
yield key[0], ''.join(group)
``` | Here's another method that uses `re` and `mmap`:
```
import mmap, re
with open(your_file) as fin:
mm = mmap.mmap(fin.fileno(), 0, access=mmap.ACCESS_READ)
for match in re.finditer('>([^\n]+)([^>]*)', mm, flags=re.DOTALL):
print match.group(1), match.group(2).replace('\n', '')
#GeneID1 ACTCTCTCTATATATATATATGCTCTGCTTCTAGAGAGAGTGTCTATTTGTTTATATATCTTT
#GeneID2 GCTCTGCTTCTAGAAATTCCCACTCTGTATATATTTTCAAAAGCTCTGCTTCTAGAGAGAGTG
```
Doing it this way enables the entire file to be treated as a string, while letting the OS provide the parts of the file on demand to complete the regex match. As it uses `finditer`, we're also not building up a result set in memory.
"",
"python",
"file-io",
""
] |
I wish to pass in some variables into python during run time
```
python add2values.py 123 124
```
then in the python script it will take those 2 values and add together.
OR
```
python add2values.py a=123 b=124
```
then in the python script it will take those 2 values and add together. | You can use `sys.argv`
test.py
```
#!/usr/bin/env python
import sys
total = int(sys.argv[1]) + int(sys.argv[2])
print('Argument List: %s' % str(sys.argv))
print('Total : %d' % total)
```
Run the following command:
```
$ python test.py 123 124
Argument List: ['test.py', '123', '124']
Total : 247
``` | There are a few ways to handle command-line arguments.
One is, as has been suggested, `sys.argv`: an array of strings from the arguments at command line. Use this if you want to perform arbitrary operations on different kinds of arguments. You can cast the first two arguments into integers and print their sum with the code below:
```
import sys
n1 = sys.argv[1]
n2 = sys.argv[2]
print (int(n1) + int(n2))
```
Of course, this does not check whether the user has input strings or lists or integers and gives the risk of a `TypeError`. However, for a range of command line arguments, this is probably your best bet - to manually take care of each case.
If your script/program has fixed arguments and you would like to have more flexibility (short options, long options, help texts) then it is worth checking out the [optparse](https://stackoverflow.com/questions/4960880/understanding-optionparser) and `argparse` (requires Python 2.7 or later) modules. Below are some snippets of code involving these two modules taken from actual questions on this site.
```
import argparse
parser = argparse.ArgumentParser(description='my_program')
parser.add_argument('-verbosity', help='Verbosity', required=True)
```
`optparse`, usable with earlier versions of Python, has similar syntax:
```
from optparse import OptionParser
parser = OptionParser()
...
parser.add_option("-m", "--month", type="int",
help="Numeric value of the month",
dest="mon")
```
And there is even `getopt` if you prefer C-like syntax... | Passing variables at runtime | [
"",
"python",
""
] |
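The `getopt` module mentioned at the end of the answer above parses options C-style. A minimal sketch (the `-a`/`-b` option names are hypothetical, on Python 3):

```python
import getopt

def add_two(argv):
    # parse "-a N -b N" style options and add the two values
    opts, _ = getopt.getopt(argv, 'a:b:')
    values = {flag: int(val) for flag, val in opts}
    return values['-a'] + values['-b']

print(add_two(['-a', '123', '-b', '124']))
```

In a real script you would pass `sys.argv[1:]` instead of a literal list.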
I've been using AS3 before, and liked its grammar, such as the keyword `as`.
For example, if I type `(cat as Animal)` and press `.` in an editor, the editor will be smart enough to offer me code hinting for the class `Animal` no matter what type `cat` actually is, and the code describes itself well.
Python is a beautiful language. How am I supposed to do the above in Python? | If you were to translate this literally, you'd end up with
```
cat if isinstance(cat, Animal) else None
```
However, that's not a common idiom.
> How am I supposed to do the above in Python?
I'd say you're not supposed to do this in Python in general, especially not for documentation purposes. | You are looking for a way to inspect the class of an instance.
`isinstance(instance, class)` is a good choice. It tells you whether the instance is of a class or is an instance of a subclass of the class.
Alternatively, you can use `instance.__class__` to see the exact class of the instance and `class.__bases__` to see the superclasses of that class.
For built-in types like generators or functions, you can use the `inspect` module.
Names in Python do not carry a type, so there is no need for a cast; determining the type of an instance instead is a good choice.
As for the hints feature, it is a feature of editors or IDEs, not Python. | What is the python way to declare an existing object as an instance of a class? | [
"",
"python",
""
] |
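To make the `isinstance` / `__class__` / `__bases__` points above concrete, a small sketch (the `Animal`/`Cat` classes are illustrative):

```python
class Animal:
    def speak(self):
        return '...'

class Cat(Animal):
    def speak(self):
        return 'meow'

cat = Cat()
print(isinstance(cat, Animal))   # instances of a subclass count
print(cat.__class__ is Cat)      # the exact class of the instance
print(Cat.__bases__)             # the superclasses of the class
```

Unlike an AS3 `as` cast, nothing is converted here; the checks only inspect what the object already is.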
I have a structured array called "data" with several scores for the same entry. For the sake of the question, I reduced "data" to the following 2 columns.
> ```
> queryid bitscore
> gene1 500
> gene1 480
> gene1 440
> gene2 900
> gene2 300
> ```
What I want to do is extract the highest values for entries sharing the same queryid, i.e. keep every entry whose bitscore is within 10% of the highest bitscore for that queryid and discard the rest.
For example, only the first 2 "gene1" entries should be conserved, as the third one has a bitscore more than 10% below 500. For gene2, only the first one should be conserved (this one is easy).
> ```
> queryid bitscore
> gene1 500
> gene1 480
> gene2 900
> ```
When I make a loop like this one :
```
for i in range(0, lastrow-1, 1):
if data[i]['queryid'] == data[i+1]['queryid']:
        if data[i+1]['bitscore'] < data[i]['bitscore']-(0.1*data[i]['bitscore']):
data[i+1]['queryid'] = 'DELETE'
data = data[data[:]['queryid'] != 'DELETE']
```
all "gene1" entries will be conserved as 440 is within the 10% of 480.
I could add the highest value to another column that could be kept as reference, but I wanted to check if any of you guys had a better idea about it... | It would probably be much faster to use logical indexing than `for` loops. How about something like this:
```
def high_bitscores(a,qid,thresh=0.9):
valid = a[a['queryid'] == qid]
return valid[valid['bitscore'] >= valid['bitscore'].max()*thresh]
```
**Edit:** If you want to return *all* elements in `data` which pass this criterion, you could loop over the unique `queryid` values in `data` and update a set of boolean indices specifying which elements pass the test:
```
def all_high_bitscores(a,thresh=0.9):
# set of indices for the elements in a that we're going to keep
keep = np.zeros(a.size,np.bool)
for qid in set(a['queryid']):
idx = a['queryid'] == qid
keep[idx] = a[idx]['bitscore'] >= a[idx]['bitscore'].max()*thresh
return a[keep]
``` | If you are able to use pandas, it becomes a one-line problem:
```
from pandas import DataFrame
import numpy as np
# Taken from Theodros
data = zip(('gene1',) * 3 + ('gene2',) * 2, [500, 480, 440, 900, 300])
dtype = [('queryid', 'S6'), ('bitscore', 'i4')]
struct_arr = np.array(data, dtype=dtype)
# Create pandas DataFrame from NumPy struct array
df = DataFrame.from_records(struct_arr)
# Filter the rows per group
df.groupby('queryid').apply(lambda x: x[x["bitscore"] >= x["bitscore"].max() * 0.9])
```
Produces:
```
queryid bitscore
queryid
gene1 0 gene1 500
1 gene1 480
gene2 3 gene2 900
``` | Extract highest values of numpy structured array | [
"",
"python",
"arrays",
"parsing",
"numpy",
""
] |
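A dependency-free sketch of the same "within 10% of the group maximum" rule, using `itertools.groupby` on the sample rows (this assumes the rows are already grouped by queryid, as in the question's data):

```python
from itertools import groupby

rows = [('gene1', 500), ('gene1', 480), ('gene1', 440),
        ('gene2', 900), ('gene2', 300)]

kept = []
for qid, group in groupby(rows, key=lambda r: r[0]):
    group = list(group)
    best = max(score for _, score in group)
    # keep entries whose score is at least 90% of the group's best
    kept.extend((q, s) for q, s in group if s >= best * 0.9)

print(kept)
```

For unsorted data you would sort by queryid first, or use the boolean-indexing approach from the accepted answer.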
I have the following model.
```
class Location(models.Model):
name = models.CharField(max_length = 128, blank = True)
address =models.CharField(max_length = 200, blank= True)
latitude = models.DecimalField(max_digits=6, decimal_places=3)
longitude = models.DecimalField(max_digits=6, decimal_places=3)
def __unicode__(self):
return self.name
```
If my current latitude & longitude is:
```
current_lat = 43.648
current_long = 79.404
```
I did some research and came across the [Haversine Equation](http://www.platoscave.net/blog/2009/oct/5/calculate-distance-latitude-longitude-python/) which calculates the distance between two location coordinates. Below is the equation I found:
```
import math
def distance(origin, destination):
lat1, lon1 = origin
lat2, lon2 = destination
radius = 6371 # km
dlat = math.radians(lat2-lat1)
dlon = math.radians(lon2-lon1)
a = math.sin(dlat/2) * math.sin(dlat/2) + math.cos(math.radians(lat1)) \
* math.cos(math.radians(lat2)) * math.sin(dlon/2) * math.sin(dlon/2)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1-a))
d = radius * c
return d
```
I would like to return all the Location objects that fall within a 10 km radius, how can I filter it in such a way that it will only return all the Location objects that fall within this 10 km radius?
```
LocationsNearMe = Location.objects.filter(#This is where I am stuck)
```
Is there anyway I can implement the Haversine equation into the filtering so that it only returns the location objects that fall within a 10 km radius?
I'm looking for a well detailed answer. Appreciate the help. | You can always improve the approach proposed by Brian by filtering the results from the previous step (which hopefully is a smaller subset) and checking, for each one, whether it is within the radius.
Your user is at the black point. The square approximation given by Brian returns the green but also the orange points. The divergence in distance can be significant: in the worst case the user would have to go sqrt(2) times further than expected (an extra ~40% of distance). So for all orange and green points it is worth checking whether their distance from the black point (e.g. the Euclidean one, if these are really short distances, e.g. navigation in a city) is not greater than the assumed radius.

UPDATE:
If you would like to use the Haversine distance or (better) the mentioned GeoDjango, have a look at this snippet comparing two Django views dealing with nearby search:
<https://gist.github.com/andilabs/4232b463e5ad2f19c155> | You can do range queries with `filter`.
```
LocationsNearMe = Location.objects.filter(latitude__gte=(the minimal lat from distance()),
                                          latitude__lte=(the maximal lat from distance()),
(repeat for longitude))
```
Unfortunately, this returns results in the form of a geometric square (instead of a circle) | How to filter a django model with latitude and longitude coordinates that fall within a certain radius | [
"",
"python",
"django",
"haversine",
""
] |
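A sketch combining both answers' ideas without GeoDjango: prefilter with a bounding box (the "square"), then refine with the Haversine formula from the question. The helper names are illustrative:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points, in kilometres
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def bounding_box(lat, lon, radius_km):
    # rough lat/lon bounds enclosing the radius -- the "square" prefilter
    dlat = math.degrees(radius_km / 6371)
    dlon = math.degrees(radius_km / (6371 * math.cos(math.radians(lat))))
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon

print(round(haversine_km(0, 0, 0, 1), 2))
```

In Django terms you might feed the box into `latitude__range`/`longitude__range` on the queryset, then apply `haversine_km` in Python to the handful of survivors.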
I have a range of points x and y stored in numpy arrays.
Those represent x(t) and y(t) where t=0...T-1
I am plotting a scatter plot using
```
import matplotlib.pyplot as plt
plt.scatter(x,y)
plt.show()
```
I would like to have a colormap representing the time (therefore coloring the points depending on the index in the numpy arrays)
What is the easiest way to do so? | Here is an example
```
import numpy as np
import matplotlib.pyplot as plt
x = np.random.rand(100)
y = np.random.rand(100)
t = np.arange(100)
plt.scatter(x, y, c=t)
plt.show()
```
Here you are setting the color based on the index, `t`, which is just an array of `[1, 2, ..., 100]`.

Perhaps an easier-to-understand example is the slightly simpler
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(100)
y = x
t = x
plt.scatter(x, y, c=t)
plt.show()
```

Note that the array you pass as `c` doesn't need to have any particular order or type, i.e. it doesn't need to be sorted or integers as in these examples. The plotting routine will scale the colormap such that the minimum/maximum values in `c` correspond to the bottom/top of the colormap.
## Colormaps
You can change the colormap by adding
```
import matplotlib.cm as cm
plt.scatter(x, y, c=t, cmap=cm.cmap_name)
```
Importing `matplotlib.cm` is optional as you can call colormaps as `cmap="cmap_name"` just as well. There is a [reference page](http://matplotlib.org/examples/color/colormaps_reference.html) of colormaps showing what each looks like. Also know that you can reverse a colormap by simply calling it as `cmap_name_r`. So either
```
plt.scatter(x, y, c=t, cmap=cm.cmap_name_r)
# or
plt.scatter(x, y, c=t, cmap="cmap_name_r")
```
will work. Examples are `"jet_r"` or `cm.plasma_r`. Here's an example with the new 1.5 colormap viridis:
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(100)
y = x
t = x
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x, y, c=t, cmap='viridis')
ax2.scatter(x, y, c=t, cmap='viridis_r')
plt.show()
```
[](https://i.stack.imgur.com/mIjeW.png)
## Colorbars
You can add a colorbar by using
```
plt.scatter(x, y, c=t, cmap='viridis')
plt.colorbar()
plt.show()
```
[](https://i.stack.imgur.com/nzkp5.png)
Note that if you are using figures and subplots explicitly (e.g. `fig, ax = plt.subplots()` or `ax = fig.add_subplot(111)`), adding a colorbar can be a bit more involved. Good examples can be found [here for a single subplot colorbar](http://matplotlib.org/1.3.1/examples/pylab_examples/colorbar_tick_labelling_demo.html) and [here for 2 subplots 1 colorbar](https://stackoverflow.com/a/13784887/1634191). | To add to wflynny's answer above, you can find the available colormaps [here](http://matplotlib.org/examples/color/colormaps_reference.html)
Example:
```
import matplotlib.cm as cm
plt.scatter(x, y, c=t, cmap=cm.jet)
```
or alternatively,
```
plt.scatter(x, y, c=t, cmap='jet')
``` | Scatter plot and Color mapping in Python | [
"",
"python",
"matplotlib",
""
] |
In [this post](https://superuser.com/questions/301431/how-to-batch-convert-csv-to-xls-xlsx) there is a Python example to convert from csv to xls.
However, my file has more than 65536 rows, so xls does not work. If I name the file xlsx it doesn't make a difference. Is there a Python package to convert to xlsx? | Here's an example using [xlsxwriter](https://xlsxwriter.readthedocs.io/):
```
import os
import glob
import csv
from xlsxwriter.workbook import Workbook
for csvfile in glob.glob(os.path.join('.', '*.csv')):
workbook = Workbook(csvfile[:-4] + '.xlsx')
worksheet = workbook.add_worksheet()
with open(csvfile, 'rt', encoding='utf8') as f:
reader = csv.reader(f)
for r, row in enumerate(reader):
for c, col in enumerate(row):
worksheet.write(r, c, col)
workbook.close()
```
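The row/column bookkeeping in that loop is just `csv.reader` plus `enumerate`; here is the same traversal on an in-memory file, showing the `(r, c, value)` triples that end up in `worksheet.write` (a stdlib-only sketch):
```
import csv
import io

data = io.StringIO("a,b\n1,2\n")
cells = [(r, c, val)
         for r, row in enumerate(csv.reader(data))
         for c, val in enumerate(row)]
print(cells)  # [(0, 0, 'a'), (0, 1, 'b'), (1, 0, '1'), (1, 1, '2')]
```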
FYI, there is also a package called [openpyxl](http://pythonhosted.org/openpyxl/), that can read/write Excel 2007 xlsx/xlsm files. | With my library `pyexcel`,
```
$ pip install pyexcel pyexcel-xlsx
```
you can do it in one command line:
```
from pyexcel.cookbook import merge_all_to_a_book
# import pyexcel.ext.xlsx # no longer required if you use pyexcel >= 0.2.2
import glob
merge_all_to_a_book(glob.glob("your_csv_directory/*.csv"), "output.xlsx")
```
Each csv will have its own sheet, and the sheet name will be the file name. | Python convert csv to xlsx | [
"",
"python",
"excel",
"file",
"csv",
"xlsx",
""
] |
A product name contains words delimited by spaces.
The first word is the size, the second is the brand, etc.
How can I extract those words from the string? E.g., how can I implement a query like:
```
select
id,
getwordnum( prodname,1 ) as size,
getwordnum( prodname,2 ) as brand
from products
where ({0} is null or getwordnum( prodname,1 )={0} ) and
({1} is null or getwordnum( prodname,2 )={1} )
create table product ( id char(20) primary key, prodname char(100) );
```
How can I create a getwordnum() function in Postgres, or should substring() or some other function be used directly in this query to improve speed? | You could try the **split\_part** function:
```
select
id,
split_part( prodname, ' ' , 1 ) as size,
split_part( prodname, ' ', 2 ) as brand
from products
where ({0} is null or split_part( prodname, ' ' , 1 )= {0} ) and
({1} is null or split_part( prodname, ' ', 2 )= {1} )
``` | What you're looking for is probably `split_part` which is available as a String function in PostgreSQL. See <http://www.postgresql.org/docs/9.1/static/functions-string.html>. | how to extract specific word from string in Postgres | [
"",
"sql",
"postgresql",
""
] |
Why is `random.shuffle` returning `None` in Python?
```
>>> x = ['foo','bar','black','sheep']
>>> from random import shuffle
>>> print shuffle(x)
None
```
How do I get the shuffled value instead of `None`? | [`random.shuffle()`](https://docs.python.org/3/library/random.html#random.shuffle) changes the `x` list **in place**.
Python API methods that alter a structure in-place generally return `None`, not the modified data structure.
```
>>> x = ['foo', 'bar', 'black', 'sheep']
>>> random.shuffle(x)
>>> x
['black', 'bar', 'sheep', 'foo']
```
---
If you wanted to create a **new** randomly-shuffled list based on an existing one, where the existing list is kept in order, you could use [`random.sample()`](https://docs.python.org/3/library/random.html#random.sample) with the full length of the input:
```
random.sample(x, len(x))
```
You could also use [`sorted()`](https://docs.python.org/3/library/functions.html#sorted) with [`random.random()`](https://docs.python.org/3/library/random.html#random.random) for a sorting key:
```
shuffled = sorted(x, key=lambda k: random.random())
```
but this invokes sorting (an O(N log N) operation), while sampling to the input length only takes O(N) operations (the same process as `random.shuffle()` is used, swapping out random values from a shrinking pool).
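That "shrinking pool" process can be sketched as a Fisher-Yates pass (illustrative only, not the library's actual source):
```
import random

def shuffled(seq):
    """Return a new shuffled list in O(n), leaving `seq` untouched."""
    pool = list(seq)
    for i in range(len(pool) - 1, 0, -1):
        j = random.randrange(i + 1)          # pick from the shrinking pool [0, i]
        pool[i], pool[j] = pool[j], pool[i]  # swap it into its final slot
    return pool
```
Like `random.sample(x, len(x))`, this returns a new list and leaves the input in its original order.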
Demo:
```
>>> import random
>>> x = ['foo', 'bar', 'black', 'sheep']
>>> random.sample(x, len(x))
['bar', 'sheep', 'black', 'foo']
>>> sorted(x, key=lambda k: random.random())
['sheep', 'foo', 'black', 'bar']
>>> x
['foo', 'bar', 'black', 'sheep']
``` | This method works too.
```
import random
shuffled = random.sample(original, len(original))
``` | Why does random.shuffle return None? | [
"",
"python",
"list",
"random",
"shuffle",
""
] |
I have a `.tex` document in which a graph is made by the Python module `matplotlib`. I want the graph to blend into the document as well as possible, so the characters used in the graph should look exactly like the same characters in the rest of the document.
My first try looks like this (the `matplotlibrc`-file):
```
text.usetex : True
text.latex.preamble: \usepackage{lmodern} #Used in .tex-document
font.size : 11.0 #Same as in .tex-document
backend: PDF
```
To compile the `.tex` file in which the PDF output of `matplotlib` is included, `pdflatex` is used.
Now, the output doesn't look bad, but it is somewhat different: the characters in the graph seem weaker in stroke width.
What is the best approach for this?
EDIT: Minimum example: LaTeX-Input:
```
\documentclass[11pt]{scrartcl}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{graphicx}
\begin{document}
\begin{figure}
\includegraphics{./graph}
\caption{Excitation-Energy}
\label{fig:graph}
\end{figure}
\end{document}
```
Python-Script:
```
import matplotlib.pyplot as plt
import numpy as np
plt.plot([1,2,3,4])
plt.xlabel("Excitation-Energy")
plt.ylabel("Intensität")
plt.savefig("graph.pdf")
```
PDF output:
 | The difference in the fonts can be caused by incorrect parameter setting out pictures with matplotlib or wrong its integration into the final document.
I think the problem is in *text.latex.preamble: \usepackage{lmodern}*. This option works very unreliably, and even the developers do not guarantee that it works, [as you can see here](http://matplotlib.org/users/customizing.html). In my case it did not work at all.
Minimal differences in the font are associated with the font family. To fix this you need *'font.family' : 'lmodern'* in **rc**.
Other options and more detailed settings can be found [here.](http://matplotlib.org/users/customizing.html)
To work around this problem, I used a slightly different, direct method: *plt.rcParams['text.latex.preamble']=[r"\usepackage{lmodern}"]*.
Oddly enough, it worked. Further information can be found at the link above.
---
To prevent these effects, I suggest taking a look at this code:
```
import matplotlib.pyplot as plt
#Direct input
plt.rcParams['text.latex.preamble']=[r"\usepackage{lmodern}"]
#Options
params = {'text.usetex' : True,
'font.size' : 11,
'font.family' : 'lmodern',
'text.latex.unicode': True,
}
plt.rcParams.update(params)
fig = plt.figure()
#You must select the correct size of the plot in advance
fig.set_size_inches(3.54,3.54)
plt.plot([1,2,3,4])
plt.xlabel("Excitation-Energy")
plt.ylabel("Intensität")
plt.savefig("graph.pdf",
#This is a simple recommendation for publication plots
dpi=1000,
# The plot will occupy the maximum available space
bbox_inches='tight',
)
```
---
And finally move on to the latex:
```
\documentclass[11pt]{scrartcl}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{graphicx}
\begin{document}
\begin{figure}
\begin{center}
\includegraphics{./graph}
\caption{Excitation-Energy}
\label{fig:graph}
\end{center}
\end{figure}
\end{document}
```
---
## Results

As can be seen from a comparison of the two fonts, there are no differences
(1 - MatPlotlib, 2 - pdfLaTeX)
 | Alternatively, you can use Matplotlib's [PGF backend](http://matplotlib.org/users/pgf.html). It exports your graph using LaTeX package PGF, then it will use the same fonts your document uses, as it is just a collection of LaTeX commands. You add then in the figure environment using input command, instead of includegraphics:
```
\begin{figure}
\centering
\input{your_figure.pgf}
\caption{Your caption}
\end{figure}
```
If you need to adjust the sizes, package adjustbox can help. | How to obtain the same font(-style, -size etc.) in matplotlib output as in latex output? | [
"",
"python",
"matplotlib",
"tex",
""
] |
Is it possible to run `SELECT Val from storedp_Value` within the query editor of SQL Server Management Studio?
**UPDATE**
I tried to create a temp table but it didn't seem to work hence why I asked here.
```
CREATE TABLE #Result
(
batchno_seq_no int
)
INSERT #Result EXEC storedp_UPDATEBATCH
SELECT * from #Result
DROP TABLE #Result
RETURN
```
Stored Procedure UpdateBatch
```
delete from batchno_seq;
insert into batchno_seq default values;
select @batchno_seq= batchno_seq_no from batchno_seq
RETURN @batchno_seq
```
What am I doing wrong and how do I call it from the query window?
**UPDATE #2**
Ok, I'd appreciate help on this one, direction or anything - this is what I'm trying to achieve.
```
select batchno_seq from (delete from batchno_seq;insert into batchno_seq default values;
select * from batchno_seq) BATCHNO
INTO TEMP_DW_EKSTICKER_CLASSIC
```
This is part of a larger select statement. Any help would be much appreciated. Essentially this SQL is broken as we've migrated for Oracle. | Well, no. To select from a stored procedure you can do the following:
```
declare @t table (
-- columns that are returned here
);
insert into @t(<column list here>)
exec('storedp_Value');
```
If you are using the results from a stored procedure in this way *and* you wrote the stored procedure, seriously consider changing the code to be a view or user defined function. In many cases, you can replace such code with a simpler, better suited construct. | This is not possible in SQL Server; you can insert the results into a temp table and then query that further:
```
CREATE TABLE #temp ( /* columns */ )
INSERT INTO #temp ( /* columns */ )
EXEC sp_MyStoredProc
SELECT * FROM #temp
WHERE 1=1
DROP TABLE #temp
```
Or you can use `OPENQUERY`, but this requires setting up a linked server. The SQL is:
```
SELECT * FROM OPENQUERY(ThisServer, 'EXEC Database.Schema.ProcedureName <params>')
``` | SELECT against stored procedure SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have this structure
```
table idc (numId,idInt, IdAffiliate)
table glob (idInt, IdAtt)
table gratt(IdAtt, dtRomp)
Table update(IdAffiliate, dateUpdate)
```
making this select statement will give me this :
```
SELECT
NumId,
dateUpdate,
DtRomp,
Idc.IdFiliale
FROM Idc inner join glob on glob .IdInt = Idc.IdInt
inner join Grat on Glob.IdAtt = Grat.IdAtt
inner join update on update.IdAffiliate = Idc.IdAffiliate
where NumId = 9976666
```
will give me this :
```
NumId DtUpdate DtRomp filiale
9976666 01/05/2005 11/07/2006 27
9976666 01/05/2005 03/07/2008 27
9976666 01/05/2005 24/06/2010 27
9976666 01/05/2006 11/07/2006 27
9976666 01/05/2006 03/07/2008 27
9976666 01/05/2006 24/06/2010 27
```
I'm trying to do this:
select the dtUpdate closest to DtRomp that is also earlier than it.
Kindest regards
I have been trying but with no solution yet. | It worked with this!!!!!
```
SELECT
NumId,
dateUpdate,
DtRomp,
Idc.IdFiliale
FROM Idc inner join glob on glob .IdInt = Idc.IdInt
inner join Grat on Glob.IdAtt = Grat.IdAtt
inner join update on update.IdAffiliate = Idc.IdAffiliate
where NumId = 9976666
and datediff(day,dateUpdate,DtRomp) = (
SELECT
min(datediff(day,dateUpdate,DtRomp))
FROM Idc a inner join glob b on a.IdInt = b.IdInt
inner join Grat c on b.IdAtt = c.IdAtt
inner join update d on d.IdAffiliate = Idc.IdAffiliate
where NumId = 9976666 and Idc.IdAffiliate = a.IdAffiliate and Grat.DtRompu = c.DtRompu and Grat.DtRompu>DtDebValidite
)
```
Regards | You can do this with `row_number()`:
```
select NumId, dateUpdate, DtRomp, Idc.IdFiliale
from (SELECT NumId, dateUpdate, DtRomp, Idc.IdFiliale,
row_number() over (partition by NumID, DTRomp order by DTRomp desc) as seqnum
FROM Idc inner join glob on glob .IdInt = Idc.IdInt
inner join Grat on Glob.IdAtt = Grat.IdAtt
inner join update on update.IdAffiliate = Idc.IdAffiliate
where NumId = 9976666 and dateUpdate < DTRomp
) t
where seqnum = 1;
``` | top 1 date foreach date | [
"",
"sql",
"foreach",
"sybase",
""
] |
I need to write an event calendar in Python which allows inserting events at any position AND works as a FIFO (popping elements from the left side).
Python's collections.deque works efficiently as a FIFO, but it doesn't allow inserting elements between existing elements.
On the other hand, a Python list allows inserting into the middle, but popping from the left is inefficient.
So, is there some compromise?
**UPD** Such a structure is probably closer to a linked list than a queue. Title changed. | You can have a look at [`blist`](https://pypi.python.org/pypi/blist/). Quoted from their website:
*The blist is a drop-in replacement for the Python list that provides better performance when modifying large lists.*
*...*
*Here are some of the use cases where the blist asymptotically outperforms the built-in list:*
```
Use Case blist list
--------------------------------------------------------------------------
Insertion into or removal from a list O(log n) O(n)
Taking slices of lists O(log n) O(n)
Making shallow copies of lists O(1) O(n)
Changing slices of lists O(log n + log k) O(n+k)
Multiplying a list to make a sparse list O(log k) O(kn)
Maintain a sorted lists with bisect.insort O(log**2 n) O(n)
```
Some performance numbers here --> <http://stutzbachenterprises.com/performance-blist> | It's a bit of a hack but you can also use the `SortedListWithKey` data type from the [SortedContainers](http://www.grantjenks.com/docs/sortedcontainers/) module. You simply want the key to return a constant so you can order elements any way you like. Try this:
```
from sortedcontainers import SortedListWithKey
class FastDeque(SortedListWithKey):
def __init__(self, iterable=None, **kwargs):
super(FastDeque, self).__init__(iterable, key=lambda val: 0, **kwargs)
items = FastDeque('abcde')
print items
# FastDeque(['a', 'b', 'c', 'd', 'e'], key=<function <lambda> at 0x1089bc8c0>, load=1000)
del items[0]
items.insert(0, 'f')
print list(items)
# ['f', 'b', 'c', 'd', 'e']
```
The `FastDeque` will efficiently support fast random access and deletion.
Other benefits of the SortedContainers module: pure-Python, fast-as-C implementations, 100% unit test coverage, hours of stress testing. | Effective queue/linked list in Python | [
"",
"python",
"queue",
"deque",
""
] |
I am trying to get a binary list containing all possibilities by providing the length of these possible lists. I found a solution, but it is not very handy to use in other functions.
Example: I want a list of lists, each one representing one binary option of four digits.
If the length is 4 then the result should be the following.
```
[[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 1], [0, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]]
```
What I have done is the following code:
```
>>> [[a, b, c, d] for a in [0,1] for b in [0,1] for c in [0,1] for d in [0,1]]
```
Now I am looking for a way to generate the big list just from the length of each member binary list, without manually typing [a, b, c, d]. So it should be possible to generate the list with a function, let's say L\_set(4), to get the list above, and by typing L\_set(3) to get the following:
```
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
```
and by typing L\_set(2) we get :
```
[[0, 0], [0, 1], [1, 0], [1, 1]]
```
and so on.
After spending a few hours I feel stuck at this point; I hope that some of you can help.
Thanks | Looks like a job for [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product):
```
>>> import itertools
>>> n = 4
>>> list(itertools.product((0,1), repeat=n))
[(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 1, 1), (0, 1, 0, 0), (0, 1, 0, 1), (0, 1, 1, 0), (0, 1, 1, 1), (1, 0, 0, 0), (1, 0, 0, 1), (1, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)]
``` | I think the `itertools` module in the standard library can help, in particular the `product` function.
<http://docs.python.org/2/library/itertools.html#itertools.product>
```
for x in itertools.product( [0, 1] , repeat=3 ):
print x
```
gives
```
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
```
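Since the question's examples use lists rather than tuples, the tuples can simply be mapped; a small `L_set` helper matching the question's wording would be (not part of the original answer):
```
import itertools

def L_set(n):
    # one list per combination, as in the question's examples
    return [list(t) for t in itertools.product((0, 1), repeat=n)]

print(L_set(2))  # [[0, 0], [0, 1], [1, 0], [1, 1]]
```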
the `repeat` parameter is the length of each combination in the output | binary list contains all possible options by knowing its length in python | [
"",
"python",
""
] |
I am having a bit of trouble with this SQL query, first some background
Table definition
```
create table [owner]
(
[patientid] nvarchar(10) NOT NULL,
[clientid] nvarchar(10) NOT NULL,
[percentage] float NULL,
[status] bit NOT NULL
)
alter table [owner] ADD CONSTRAINT PK_OWNER PRIMARY KEY CLUSTERED ([patientid],[clientid])
```
Example source data
```
| PATIENTID | CLIENTID | PERCENTAGE | STATUS |
----------------------------------------------
| Pet1 | Owner1 | 100 | 1 |
| Pet2 | Owner2 | 75 | 1 |
| Pet2 | Owner3 | 25 | 1 |
| Pet3 | Owner4 | 10 | 1 |
| Pet3 | Owner5 | 90 | 1 |
| Pet3 | Owner6 | 100 | 0 |
| Pet4 | Owner7 | 50 | 1 |
| Pet4 | Owner8 | 50 | 1 |
```
What I am looking for is the owner who has the highest percentage per pet, considering only rows with a status of `1`; in the event of a tie, it should go alphabetically by the owner's name.
So here is the output I would want to see
```
| PATIENTID | CLIENTID |
------------------------
| Pet1 | Owner1 |
| Pet2 | Owner2 |
| Pet3 | Owner5 |
| Pet4 | Owner7 |
```
The closest I got was
```
SELECT f1.[patientid]
,f1.[clientid]
FROM [OWNER] f1
inner join
(
select [patientid], max([percentage]) as [percentage]
from [owner]
where status = 1
group by [patientid]
) f2 on f1.[patientid] = f2.[patientid] and f1.[percentage] = f2.[percentage]
where status = 1
```
However that gives me two records for `Pet4`.
```
| PATIENTID | CLIENTID |
------------------------
| Pet1 | Owner1 |
| Pet2 | Owner2 |
| Pet3 | Owner5 |
| Pet4 | Owner7 |
| Pet4 | Owner8 |
```
What is the correct way to handle something like this so I only get one record and I apply that alphabetical ordering on the tie to find the one record?
Here is a [SQL Fiddle workspace](http://sqlfiddle.com/#!3/f8b5c) to try out any answers.
---
**EDIT:**
I figured out a way to do it, but to me it reeks of code smell; is there a more "proper" way of doing this?
```
select distinct f3.[patientid], (
SELECT top 1 f1.[clientid]
FROM [OWNER] f1
inner join
(
select [patientid], max([percentage]) as [percentage]
from [owner]
where status = 1
group by [patientid]
) f2 on f1.[patientid] = f2.[patientid] and f1.[percentage] = f2.[percentage]
where status = 1 and f1.[patientid] = f3.[patientid]
order by f1.[patientid], f1.[clientid]
)
from owner f3
``` | You should be able to use `row_number()` to get the result by applying a partition by the `patientid` and ordering it by the percentage and clientid:
```
select patientid, clientid
from
(
select patientid, clientid, percentage, status,
row_number() over(partition by patientid
order by percentage desc, clientid) rn
from owner
where status = 1
) d
where rn = 1;
```
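Outside the database, the same "first row per partition" logic is a sort followed by keeping the first hit per key; here is a Python sketch of what the window function does with the question's data (illustrative only):
```
rows = [
    ('Pet3', 'Owner4', 10, 1),
    ('Pet3', 'Owner5', 90, 1),
    ('Pet3', 'Owner6', 100, 0),
    ('Pet4', 'Owner7', 50, 1),
    ('Pet4', 'Owner8', 50, 1),
]

best = {}
# sort mirrors "partition by patientid order by percentage desc, clientid"
for patient, client, pct, status in sorted(
        (r for r in rows if r[3] == 1),        # where status = 1
        key=lambda r: (r[0], -r[2], r[1])):
    best.setdefault(patient, client)           # keep only the rn = 1 row
print(best)  # {'Pet3': 'Owner5', 'Pet4': 'Owner7'}
```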
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/f8b5c/7) | A correlated subquery will also work, though partition may scale better
```
declare @tmpOwner table (
PatientID varchar(50),
ClientID varchar(50),
Percentage int,
Status smallint
)
insert @tmpOwner (PatientID,ClientID,Percentage,Status)
SELECT 'Pet1','Owner1',100,1 UNION
SELECT 'Pet2','Owner2',75,1 UNION
SELECT 'Pet2','Owner3',25,1 UNION
SELECT 'Pet3','Owner4',10,1 UNION
SELECT 'Pet3','Owner5',90,1 UNION
SELECT 'Pet3','Owner6',100,0 UNION
SELECT 'Pet4','Owner7',50,1 UNION
SELECT 'Pet4','Owner8',50,1
select x.PatientID,
(SELECT top 1 ClientID
FROM @tmpOwner
where Percentage=max(x.Percentage)
and x.PatientID=PatientID
order by ClientID) Win_Owner
from @tmpOwner x
where x.Status=1
group by PatientID
``` | Complicated grouping query, finding a ID not part of the GROUP BY | [
"",
"sql",
"sql-server",
"sql-server-2005",
"group-by",
""
] |
I'm trying to convert the Date key in my table, which is numeric, into a datetime key. My current query is:
```
SELECT
DATEADD(HOUR,-4,CONVERT(DATETIME,LEFT([Date],8)+' '+
SUBSTRING([Date],10,2)+':'+
SUBSTRING([Date],12,2)+':'+
SUBSTRING([Date],14,2)+'.'+
SUBSTRING([Date],15,3))) [Date],
[Object] AS [Dataset],
SUBSTRING(Parms,1,6) AS [Media]
FROM (Select CONVERT(VARCHAR(18),[Date]) [Date],
[Object],
MsgId,
Parms
FROM JnlDataSection) A
Where MsgID = '325' AND
SUBSTRING(Parms,1,6) = 'V40449'
Order By Date DESC;
```
The Date Column shows this:
2013-06-22 13:36:44.403
I want to split this into two columns:
Date:
2013-06-22
Time (Remove Microseconds):
13:36:44
Can anyone modify my existing query to display the required output? That would be greatly appreciated. Please Note: I'm using SQL Server Management Studio 2008. | You may want to investigate the convert() function:
```
select convert(date, getdate()) as [Date], convert(varchar(8), convert(time, getdate())) as [Time]
```
gives
```
Date Time
---------- --------
2013-07-16 15:05:43
```
Wrapping these around your original SQL gives the admittedly very ugly:
```
SELECT convert(date,
DATEADD(HOUR,-4,CONVERT(DATETIME,LEFT([Date],8)+' '+
SUBSTRING([Date],10,2)+':'+
SUBSTRING([Date],12,2)+':'+
SUBSTRING([Date],14,2)+'.'+
SUBSTRING([Date],15,3)))) [Date],
convert(varchar(8), convert(time,
DATEADD(HOUR,-4,CONVERT(DATETIME,LEFT([Date],8)+' '+
SUBSTRING([Date],10,2)+':'+
SUBSTRING([Date],12,2)+':'+
SUBSTRING([Date],14,2)+'.'+
SUBSTRING([Date],15,3))))) [Time],
[Object] AS [Dataset],
SUBSTRING(Parms,1,6) AS [Media]
FROM (Select CONVERT(VARCHAR(18),[Date]) [Date],
[Object],
MsgId,
Parms
FROM JnlDataSection) A
Where MsgID = '325' AND
SUBSTRING(Parms,1,6) = 'V40449'
Order By Date DESC;
```
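The same split can be sanity-checked outside SQL Server, e.g. with Python's built-in SQLite (a different engine, but the same idea of deriving both columns from one value):
```
import sqlite3

con = sqlite3.connect(":memory:")
d, t = con.execute(
    "SELECT date(:ts), strftime('%H:%M:%S', :ts)",  # date part, time without ms
    {"ts": "2013-06-22 13:36:44.403"},
).fetchone()
print(d, t)  # 2013-06-22 13:36:44
```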
You may want to move part of this into a view, just to reduce complexity. | ```
SELECT CONVERT(DATE,[Date])
SELECT CONVERT(TIME(0),[Date])
``` | Splitting Date into 2 Columns (Date + Time) in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to use Codecademy to learn Python. The assignment is to "Write a function called fizz\_count that takes a list x as input and returns the count of the string 'fizz' in that list."
```
# Write your function below!
def fizz_count(input):
x = [input]
count = 0
if x =="fizz":
count = count + 1
return count
```
I think the code above the if loop is fine, since the error message ("Your function fails on fizz\_count([u'fizz', 0, 0]); it returns None when it should return 1.") only appears when I add that code.
I also tried to make a new variable (new\_count) and set that to count + 1, but that gives me the same error message.
I would appreciate your assistance very much | The problem is that you have no loop.
```
# Write your function below!
def fizz_count(input):
count = 0
for x in input: # you need to iterate through the input list
if x =="fizz":
count = count + 1
return count
```
There is a more concise way by using the `.count()` function:
```
def fizz_count(input):
return input.count("fizz")
``` | Get rid of `x = [input]`, that just creates another list containing the list `input`.
> *i think the code above the if loop is fine*
[`if`](http://docs.python.org/2/tutorial/controlflow.html#if-statements)s don't loop; you're probably looking for [`for`](http://docs.python.org/2/tutorial/controlflow.html#for-statements):
```
for x in input: # 'x' will get assigned to each element of 'input'
...
```
Within this loop, you would check if `x` is equal to `"fizz"` and increment the count accordingly (as you are doing with your `if`-statement currently).
Lastly, move your `return`-statement out of the loop / if-statement. You want that to get executed after the loop, since you always want to traverse the list *entirely* before returning.
As a side note, you shouldn't use the name `input`, as that's already assigned to a [built-in function](http://docs.python.org/2/library/functions.html#input).
Putting it all together:
```
def fizz_count(l):
count = 0 # set our initial count to 0
for x in l: # for each element x of the list l
if x == "fizz": # check if x equals "fizz"
count = count + 1 # if so, increment count
return count # return how many "fizz"s we counted
``` | how to change the value of a variable inside an if loop each time the if loop is triggered ? | [
"",
"python",
"loops",
""
] |
In MySQL I have `tableA` with columns `userid` and `valueA`, and `tableB` with `userid` and `valueB`.
Now I want all entries from `tableA` which don't have an entry in `tableB` with the same `userid`.
I tried several things but can't figure out what I do wrong.
```
SELECT * FROM `tableA`
left join `tableB` on `tableA`.`userid` = `tableB`.`userid`
```
This is a very good start actually. It gives me all entries from `tableA` + the corresponding values from `tableB`. If they don't exist they are displayed as `NULL` (in phpmyadmin).
```
SELECT * FROM `tableA`
left join `tableB` on `tableA`.`userid` = `tableB`.`userid`
where `tableB`.`valueB` = NULL
```
Too bad, empty result. Maybe this would have been too easy. (By the way: `tableA` has ~10k entries and `tableB` has ~7k entries with `userid` being unique in each. No way the result would be empty if it would do what I want it to do)
```
SELECT * FROM `tableA`
left join `tableB` on `tableA`.`userid` = `tableB`.`userid`
where `tableA`.`userid` != `tableB`.`userid`
```
This doesn't work either, and to be honest it also looks totally paradoxical. Anyway, I'm clueless now. Why didn't my 2nd query work, and what is a correct solution? | You are almost there. That second query is SO close! All it needs is one little tweak:
Instead of "`= NULL`" you need an "`IS NULL`" in the predicate.
```
SELECT * FROM `tableA`
left join `tableB` on `tableA`.`userid` = `tableB`.`userid`
where `tableB`.`valueB` IS NULL
^^
```
Note that the equality comparison operator `=` will return NULL (rather than TRUE or FALSE) when one side (or both sides) of the comparison are NULL. (In terms of relational databases and SQL, boolean logic has three values, rather than two: TRUE, FALSE and NULL.)
BTW... the pattern in your query, the outer join with the test for the NULL on the outer joined table) is commonly referred to as an "anti-join" pattern. The usual pattern is to test the same column (or columns) that were referred to in the JOIN condition, or a column that has a NOT NULL constraint, to avoid ambiguous results. (for example, what if 'ValueB' can have a NULL value, and we did match a row. Nothing wrong with that at all, it just depends on whether you want that row returned or not.)
If you are looking for rows in tableA that do NOT have a matching row in tableB, we'd generally do this:
```
SELECT * FROM `tableA`
left join `tableB` on `tableA`.`userid` = `tableB`.`userid`
where `tableB`.`userid` IS NULL
^^^^^^ ^^
```
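The anti-join is easy to verify with an in-memory SQLite database (made-up rows; SQLite's LEFT JOIN behaves the same way for this purpose):
```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tableA (userid INTEGER, valueA TEXT);
    CREATE TABLE tableB (userid INTEGER, valueB TEXT);
    INSERT INTO tableA VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO tableB VALUES (1, 'x'), (3, 'y');
""")
rows = con.execute("""
    SELECT tableA.userid FROM tableA
    LEFT JOIN tableB ON tableA.userid = tableB.userid
    WHERE tableB.userid IS NULL
""").fetchall()
print(rows)  # [(2,)] -- only the user with no row in tableB
```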
Note that the IS NULL test is on the **`userid`** column, which is guaranteed to be "not null" if a matching row was found. (If the column had been NULL, the row would not have satisfied the equality test in the JOIN predicate.) | Change `= NULL` for `IS NULL` on your code. You can also use `NOT EXISTS` instead:
```
SELECT *
FROM `tableA` A
WHERE NOT EXISTS (SELECT 1 FROM `tableB`
WHERE `userid` = A.`userid`)
``` | How to get entries from tableA which have no entry in tableB? (SQL) | [
"",
"mysql",
"sql",
""
] |
I want to calculate last Sunday and last Saturday on every Monday.
E.g. today is 08 July 2013 Monday
last Sunday: 30 June 2013 00:00:00
last Saturday: 6 July 2013 23:59:59.
Note the last Sunday is from 00:00:00 and last Saturday is until 23:59:59 | Given your question, where the query will be run only on Mondays and the objective is to obtain the dates as stated above, one way to solve it is:
```
SELECT TRUNC(SYSDATE) AS TODAYS_DATE,
TRUNC(SYSDATE)-8 AS PREVIOUS_SUNDAY,
TRUNC(SYSDATE) - (INTERVAL '1' DAY + INTERVAL '1' SECOND) AS PREVIOUS_SATURDAY
FROM DUAL
```
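The same arithmetic can be checked in Python against the dates from the question (a sketch; it assumes, as the question does, that the query runs on a Monday):
```
from datetime import datetime, timedelta

monday = datetime(2013, 7, 8)                    # TRUNC(SYSDATE) on the example Monday
prev_sunday = monday - timedelta(days=8)         # TRUNC(SYSDATE) - 8
prev_saturday = monday - timedelta(days=1, seconds=1)
print(prev_sunday)    # 2013-06-30 00:00:00
print(prev_saturday)  # 2013-07-06 23:59:59
```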
Share and enjoy. | For those looking to get the last weekend days (Saturday and Sunday) of a week where the first day is Monday, here's an alternative:
```
select today as todays_date,
next_day(today - 7, 'sat') as prev_saturday,
next_day(today - 7, 'sun') as prev_sunday
from dual
``` | oracle sql get last Sunday and last Saturday | [
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
I'm running a South migration `python manage.py syncdb; python manage.py migrate --all` which breaks when run on a fresh database. However, if you run it *twice*, it goes through fine! On the first try, I get
```
DoesNotExist: ContentType matching query does not exist. Lookup parameters were {'model': 'mymodel', 'app_label': 'myapp'}
```
After failure, I go into the database `select * from django_content_type` but sure enough it has
```
13 | my model | myapp | mymodel
```
Then I run the migration `python manage.py syncdb; python manage.py migrate --all` and it works!
So how did I manage to make a migration that only works the second time around? By the way **this is a data migration** which puts the proper groups into the admin app. The following method within the migration is breaking it:
```
@staticmethod
def create_admin_group(orm, model_name, group_name):
model_type = orm['contenttypes.ContentType'].objects.get(app_label='myapp', model=model_name.lower())
permissions = orm['auth.Permission'].objects.filter(content_type=model_type)
group = orm['auth.Group']()
group.name = group_name
group.save()
group.permissions = permissions
group.save()
```
(The migration files come from an existing working project which means a long time ago I had already run schemamigration --initial. I'm merely trying to replicate the database schema and initial data onto a new database.) | Turns out this is a bug in South.
<http://south.aeracode.org/ticket/1281> | Of course it's going to be like this: you have not made any initial schemamigrations. The right way would be like this:
1. Register your django apps with `south` first. So something like:
`python manage.py schemamigration --initial <app_name>`.
2. Then you run `manage.py syncdb`.
3. After this, you run `migrate` like so `python manage.py migrate <apps>`, please note that simply running `migrate` will just migrate all your registered apps. I tend to do this.
4. If you change models to change the schema, then you can simply use:
`manage.py schemamigration --auto`
The problem that you are alluding to is this. Once you run `syncdb`, you already get a table created; South had nothing to do with this. What you are hence doing is querying a database that has no migration control (iirc). | Django South Migration does not work the first time around | [
"",
"python",
"django",
"django-south",
""
] |
I'm trying to define a generator function `mycount()` that can be reset with the generator method `send(0)`, as in the example below. Everything works fine, except when I use `send(0)` on a new generator object that hasn't started yet. In this case it gives a `TypeError`. Is there any function that checks if the generator has started, or do I have to catch the `TypeError` and create a new generator object with `mycount(0)` in that case?
```
def mycount(value):
while True:
v = yield value
if v == None:
value = value + 1
else:
value = v
g = mycount(3)
print(next(g)) # prints 3
print(next(g)) # prints 4
print(g.send(0)) # prints 0
print(next(g)) # prints 1
print(next(g)) # prints 2
g2 = mycount(3)
g2.send(0)
# TypeError: can't send non-None value to a just-started generator
``` | To avoid sending a non-`None` value to a just-started generator, you need to call `next` or `send(None)` first. I agree with the others that David Beazley's [coroutine decorator](http://dabeaz.com/coroutines/coroutine.py) (in Python 3.x you need to call the `__next__()` method instead of `next()`) is a great option. Though that particular decorator is simple, I've also successfully used the [copipes](https://pypi.python.org/pypi/CoPipes) library, which is a nice implementation of many of the utilities from Beazley's presentations, including coroutine.
Regarding whether one can check if a generator is started - in Python 3, you can use [inspect.getgeneratorstate](http://docs.python.org/3.2/library/inspect.html#inspect.getgeneratorstate). This isn't available in Python 2, but the [CPython implementation](http://hg.python.org/cpython/file/ddff866d820d/Lib/inspect.py#l1273) is pure python and doesn't rely on anything new to Python 3, so you can check yourself in the same way:
```
if generator.gi_running:
return GEN_RUNNING
if generator.gi_frame is None:
return GEN_CLOSED
if generator.gi_frame.f_lasti == -1:
return GEN_CREATED
return GEN_SUSPENDED
```
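For instance, reusing the question's `mycount`, the whole check can be done on Python 3 with `inspect.getgeneratorstate` directly (a small sketch):

```python
import inspect

def mycount(value):
    while True:
        v = yield value
        value = value + 1 if v is None else v

g = mycount(3)
print(inspect.getgeneratorstate(g))  # GEN_CREATED -- not started yet
if inspect.getgeneratorstate(g) == inspect.GEN_CREATED:
    next(g)                          # prime it; yields 3
print(g.send(0))                     # 0 -- sending is safe now
print(inspect.getgeneratorstate(g))  # GEN_SUSPENDED
```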
Specifically, `g2` is started if `inspect.getgeneratorstate(g2) != inspect.GEN_CREATED`. | As your error implies, the `send` function must be called with `None` on a just-started generator
[(docs-link)](http://docs.python.org/3/reference/expressions.html?highlight=generator#generator.send).
You *could* catch the `TypeError` and roll from there:
```
#...
try:
g2.send(0)
except TypeError:
#Now you know it hasn't started, etc.
g2.send(None)
```
Either way it can't be used to 'reset' the generator, it just has to be remade.
Great overview of generator concepts and syntax [here](http://www.dabeaz.com/generators/Generators.pdf), covering chaining of generators and other advanced topics. | Is there a Python function that checks if a generator is started? | [
"",
"python",
"generator",
""
] |
Inspired by the information at <https://www.python.org/doc/essays/graphs/>, I have been implementing graph structures. I generally use the following:
```
graph = {'A': ['B', 'C'],
'B': ['C', 'D'],
'C': ['D'],
'D': ['C']}
edges = [('A','B',20),('A','C',40), ('B','C',10), ('B','D',15),('C','D',10),('D','C',10)]
```
These two are the closest I can get to object representation in C. But sometimes I have to store more information about the graph labels. For example, the graph nodes are states within a country, so I have to store the full name (California) and abbreviation (CA) along with each node.
What is the best way to do it? I know dictionary values can be class instances, but not dictionary keys. So I was just thinking of creating a separate dictionary with the graph node 'A' as the key and a class instance 'StateDetails' as the value, which contains the state and abbreviation. For example:
```
state_map = {'A': StateDetails('California','CA') .. }
```
I would appreciate if someone tells me a more efficient way to deal with this problem in particular and with graphs in general in python | You can use your class as a dict key if you implement `__eq__` and `__hash__`, for example:
```
class StateDetails(object):
def __init__(self, state, abbrev):
self.state = state
self.abbrev = abbrev
def __eq__(self, other):
return isinstance(other, self.__class__) and self.abbrev == other.abbrev
def __hash__(self):
return hash(self.abbrev)
def __repr__(self):
return '{}({!r}, {!r})'.format(self.__class__.__name__, self.state, self.abbrev)
CA = StateDetails('California', 'CA')
AZ = StateDetails('Arizona', 'AZ')
NV = StateDetails('Nevada', 'NV')
UT = StateDetails('Utah', 'UT')
graph = {CA: [AZ, NV],
AZ: [CA, NV, UT],
NV: [CA, AZ, UT],
UT: [AZ, NV]}
```
Result:
```
>>> pprint.pprint(graph)
{StateDetails('California', 'CA'): [StateDetails('Arizona', 'AZ'),
StateDetails('Nevada', 'NV')],
StateDetails('Arizona', 'AZ'): [StateDetails('California', 'CA'),
StateDetails('Nevada', 'NV'),
StateDetails('Utah', 'UT')],
StateDetails('Nevada', 'NV'): [StateDetails('California', 'CA'),
StateDetails('Arizona', 'AZ'),
StateDetails('Utah', 'UT')],
StateDetails('Utah', 'UT'): [StateDetails('Arizona', 'AZ'),
StateDetails('Nevada', 'NV')]}
``` | Just store the extra info outside your graph. E.g. keep a dict
```
full_name = {"CA": "California",
# 49 more entries
}
```
then use `"CA"` as the graph node.
This makes the graph algorithms much easier to implement because you don't have to work around the extra information that the nodes are dragging along; it makes them maintainable because the information you're storing might change, and it might also make them faster.
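A minimal sketch of that idea (the three states are just illustrative sample data):

```python
# Plain strings are the graph nodes; all extra label data lives in a side table.
full_name = {"CA": "California", "NV": "Nevada", "AZ": "Arizona"}

graph = {"CA": ["NV", "AZ"],
         "NV": ["CA", "AZ"],
         "AZ": ["CA", "NV"]}

# Graph code only ever touches the short keys...
for abbrev in graph["CA"]:
    # ...and looks up the details only when it needs to display them.
    print(full_name[abbrev])
```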
(In fact, for a real-world application I'd use integer indices only as the graph nodes and store all the extra information in a separate structure. That way, you can use NumPy and SciPy to do the heavy lifting.) | implementation of graphs in python that have a label | [
"",
"python",
"graph-theory",
""
] |
I have a URL, for example:
```
http://name.abc.wxyz:1234/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5
```
From this URL I want to extract only '`asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5`'. How could I do that?
I am still learning regular expressions and I am not able to solve the above. Any suggestions would be appreciated. | In this specific example splitting the string is enough:
```
url.split('/')[-1]
```
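If you'd rather stay in the standard library, an equivalent sketch with `urllib.parse` (Python 3) looks like this:

```python
from urllib.parse import urlsplit

url = 'http://name.abc.wxyz:1234/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5'
parts = urlsplit(url)
# last path segment plus the query string
result = parts.path.rsplit('/', 1)[-1] + '?' + parts.query
print(result)  # asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5
```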
If you have a more complex URL I would recommend the [yarl library](https://github.com/aio-libs/yarl) for parsing it:
```
>>> import yarl # pip install yarl
>>> url = yarl.URL('http://name.abc.wxyz:1234/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5')
>>> url.path_qs
'/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5'
```
You could also use the builtin `urllib.parse` library but I find that it gets in the way once you start doing complex things like:
```
>>> url.update_query(asd='foo').with_fragment('asd/foo/bar')
URL('http://name.abc.wxyz:1234/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5&asd=foo#asd/foo/bar')
``` | You can use [`urlparse`](http://docs.python.org/2/library/urlparse.html) assuming `asset.epx` is the same:
```
>>> import urlparse
>>> url = 'http://name.abc.wxyz:1234/Assts/asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5'
>>> res = urlparse.urlparse(url)
>>> print 'asset.epx?'+res.query
asset.epx?id=F3F94D94-7232-4FA2-98EF-07sdfssfdsa3B5
```
This is useful if you ever need other information from the URL (you can `print res` to check out the other info you can get ;))
If you're using Python 3 though, you'll have to do `from urllib.parse import urlparse`. | Extract a part of URL - python | [
"",
"python",
"regex",
"url",
"python-2.7",
""
] |
I have two dictionaries coming from JSON files that look something like this:
```
dict1 = {"data": [{"text": "text1", "id": "id1"}, {"text": "text2", "id": "id2"}]}
dict2 = {"data": [{"text": "text3", "id": "id3"}, {"text": "text4", "id": "id4"}]}
```
I want to create the following out of them:
```
dict = {"data": [{"text": "text1", "id": "id1"}, {"text": "text2", "id": "id2"}, {"text": "text3", "id": "id3"}, {"text": "text4", "id": "id4"}]}
```
I tried different methods like:
```
dict = dict1.update(dict2)
```
or
```
dict = dict1.append(dict2)
```
Both are wrong. I think the problems arise from the "data" part that I do need. I would be really grateful for help. Thanks. | ```
dict={"data": dict1["data"] +dict2["data"]}
``` | `update` and `append` don't make any assumptions about your structure, so they can't work. You'll have to construct a new dictionary:
```
dict3 = {'data': dict1['data'] + dict2['data']}
```
Or modify one of the existing ones:
```
dict1['data'].extend(dict2['data'])
``` | Append dictionaries in Python | [
"",
"python",
"dictionary",
"append",
""
] |
I have to construct a dictionary using variables `base=10` and `digits=set(range(10))` and have to write a comprehension that maps each integer from 0 through 999 to the list of three digits that represents that integer in base 10. That is, the value should be
```
{0: [0, 0, 0], 1: [0, 0, 1], 2: [0, 0, 2], 3: [0, 0, 3], ...,10: [0, 1, 0], 11: [0, 1, 1], 12: [0, 1, 2], ...,999: [9, 9, 9]}
```
I am stuck.
I tried something like
```
{q:[x,y,z] for q in list[range(1000)] for x in digits for y in digits for z in digits}
```
but the index **q** should be `x * base**2 + y * base**1 + z * base**0`
This is not the right way of thinking. Any ideas? | ```
alphabet = range(10)
base = 10
dict((x*base**2+y*base+z,(x,y,z)) for x in alphabet
for y in alphabet
for z in alphabet )
```
is what you want ... I think.
```
alphabet = range(2)
base = 2
dict((x*base**2+y*base+z,(x,y,z)) for x in alphabet
for y in alphabet
for z in alphabet )
```
generates
```
{0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (0, 1, 1), 4: (1, 0, 0), 5: (1, 0, 1), 6: (1, 1, 0), 7: (1, 1, 1)}
``` | I would use `itertools`. For example
```
dict( (i, tup) for i, tup in enumerate(itertools.product(range(10), repeat=3)) )
```
If you really require each value to be a list, you can add `(i, list(tup))` above.
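Combining this with the question's `base` variable, a sketch that builds exactly the requested mapping:

```python
from itertools import product

base = 10
digits = range(base)
# each key is the base-10 value of the three-digit tuple (x, y, z)
mapping = {x * base**2 + y * base + z: [x, y, z]
           for x, y, z in product(digits, repeat=3)}

print(mapping[0])    # [0, 0, 0]
print(mapping[12])   # [0, 1, 2]
print(mapping[999])  # [9, 9, 9]
```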
The `product` function computes the Cartesian product, equivalent to a nested for-loop, according to the [documentation](http://docs.python.org/2/library/itertools.html#itertools.product). For example
```
In [34]: list(itertools.product(range(3), repeat=3))
Out[34]:
[(0, 0, 0),
(0, 0, 1),
(0, 0, 2),
(0, 1, 0),
(0, 1, 1),
(0, 1, 2),
(0, 2, 0),
(0, 2, 1),
(0, 2, 2),
(1, 0, 0),
(1, 0, 1),
(1, 0, 2),
(1, 1, 0),
(1, 1, 1),
(1, 1, 2),
(1, 2, 0),
(1, 2, 1),
(1, 2, 2),
(2, 0, 0),
(2, 0, 1),
(2, 0, 2),
(2, 1, 0),
(2, 1, 1),
(2, 1, 2),
(2, 2, 0),
(2, 2, 1),
(2, 2, 2)]
```
`product(range(3), repeat=3)` is equivalent to `product(range(3), range(3), range(3))`. The `product` function accepts `*iterables`, so the above syntax is valid. | python construct a dictionary {0: [0, 0, 0], 1: [0, 0, 1], 2: [0, 0, 2], 3: [0, 0, 3], ...,999: [9, 9, 9]} | [
"",
"python",
"list",
"dictionary",
"integer",
"list-comprehension",
""
] |
I have a data set like so in a pandas dataframe:
```
score
timestamp
2013-06-29 00:52:28+00:00 -0.420070
2013-06-29 00:51:53+00:00 -0.445720
2013-06-28 16:40:43+00:00 0.508161
2013-06-28 15:10:30+00:00 0.921474
2013-06-28 15:10:17+00:00 0.876710
```
I need to get counts for the number of measurements that occur per day, so I am looking for something like this:
```
count
timestamp
2013-06-29 2
2013-06-28 3
```
I do not care about the sentiment column; I want the count of the occurrences per day. | If your `timestamp` index is a `DatetimeIndex`:
```
import io
import pandas as pd
content = '''\
timestamp score
2013-06-29 00:52:28+00:00 -0.420070
2013-06-29 00:51:53+00:00 -0.445720
2013-06-28 16:40:43+00:00 0.508161
2013-06-28 15:10:30+00:00 0.921474
2013-06-28 15:10:17+00:00 0.876710
'''
df = pd.read_table(io.BytesIO(content), sep='\s{2,}', parse_dates=[0], index_col=[0])
print(df)
```
so `df` looks like this:
```
score
timestamp
2013-06-29 00:52:28 -0.420070
2013-06-29 00:51:53 -0.445720
2013-06-28 16:40:43 0.508161
2013-06-28 15:10:30 0.921474
2013-06-28 15:10:17 0.876710
print(df.index)
# <class 'pandas.tseries.index.DatetimeIndex'>
```
You can use:
```
print(df.groupby(df.index.date).count())
```
which yields
```
score
2013-06-28 3
2013-06-29 2
```
---
Note the importance of the `parse_dates` parameter. Without it, the index would just be a `pandas.core.index.Index` object, in which case you could not use `df.index.date`.
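If the data was already loaded without `parse_dates`, you can usually still convert the index afterwards with `pd.to_datetime`; a small sketch with made-up rows:

```python
import pandas as pd

# made-up rows mirroring the question's data
df = pd.DataFrame({"score": [-0.42, -0.45, 0.51]},
                  index=["2013-06-29 00:52:28",
                         "2013-06-29 00:51:53",
                         "2013-06-28 16:40:43"])
df.index = pd.to_datetime(df.index)   # plain strings -> DatetimeIndex
counts = df.groupby(df.index.date).count()
print(counts)
```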
So the answer depends on the `type(df.index)`, which you have not shown... | Otherwise, using the [resample](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Series.resample.html) function.
```
In [419]: df
Out[419]:
timestamp
2013-06-29 00:52:28 -0.420070
2013-06-29 00:51:53 -0.445720
2013-06-28 16:40:43 0.508161
2013-06-28 15:10:30 0.921474
2013-06-28 15:10:17 0.876710
Name: score, dtype: float64
In [420]: df.resample('D', how={'score':'count'})
Out[420]:
2013-06-28 3
2013-06-29 2
dtype: int64
```
**UPDATE : with pandas 0.18+**
as @jbochi pointed out, resample with `how` is now deprecated. Use instead :
```
df.resample('D').apply({'score':'count'})
``` | summing the number of occurrences per day pandas | [
"",
"python",
"pandas",
"dataframe",
""
] |
What is the preferred way to write a Python docstring?
`"""` or `"`
In the book [Dive Into Python](http://www.diveintopython.net/getting_to_know_python/documenting_functions.html), the author provides the following example:
```
def buildConnectionString(params):
"""Build a connection string from a dictionary of parameters.
Returns string."""
```
In another [chapter](http://www.diveintopython.net/object_oriented_framework/index.html#fileinfo.divein), the author provides another example:
```
def stripnulls(data):
"strip whitespace and nulls"
return data.replace("\00", "").strip()
```
Both syntaxes work. The only difference to me is that `"""` allows us to write multi-line docs.
Are there any differences other than that? | From the [PEP8 Style Guide](http://www.python.org/dev/peps/pep-0008/):
* [PEP 257](http://www.python.org/dev/peps/pep-0257/#one-line-docstrings) describes good docstring conventions. Note that most
importantly, the """ that ends a multiline docstring should be on a
line by itself, e.g.:
```
"""Return a foobang
Optional plotz says to frobnicate the bizbaz first.
"""
```
* For one liner docstrings, it's okay to keep the closing """ on the
same line.
[PEP 257](http://www.python.org/dev/peps/pep-0257/#one-line-docstrings) recommends using triple quotes, even for one-line docstrings:
* Triple quotes are used even though the string fits on one line. This
makes it easy to later expand it.
Note that not even the Python standard library itself follows these recommendations consistently. For example,
* [abcoll.py](http://hg.python.org/cpython/file/db9fe49069ed/Lib/collections/abc.py#l234)
* [ftplib.py](http://hg.python.org/cpython/file/db9fe49069ed/Lib/ftplib.py#l75)
* [functools.py](http://hg.python.org/cpython/file/db9fe49069ed/Lib/functools.py#l28)
* [inspect.py](http://hg.python.org/cpython/file/db9fe49069ed/Lib/inspect.py#l360) | They're both strings, so there is no difference. The preferred style is triple double quotes ([PEP 257](http://www.python.org/dev/peps/pep-0257/)):
> For consistency, always use `"""triple double quotes"""` around docstrings.
>
> Use `r"""raw triple double quotes"""` if you use any backslashes in your docstrings. For Unicode docstrings, use `u"""Unicode triple-quoted strings"""`. | Triple-double quote v.s. Double quote | [
"",
"python",
"pep8",
"quote",
"docstring",
"pep",
""
] |
I have a string array filled with smaller string arrays that I've split into sets of three. It looks like this (except many more):
```
conv = ('http-get:*:audio/xxx', ':', 'YYY.ORG_XXXXXXXXXX;YYY.ORG_FLAGS=97570000000000000000000000000'), ('http-get:*:video/xxx', ':', 'YYY.ORG_PN=XXXXXXXXXXX;YYY.ORG_FLAGS=ED100000000000000000000000')
```
The only part of these arrays that I actually want is the third item in the list. How would I go about printing the third item only? My problem is that this is an array inside of an array. | Basically, loop through the `conv` tuple and store/print the item at index 2 (the third item) of each element. It can be done as a traditional `for` loop or using a list comprehension. Try this:
```
>>> [i[2] for i in conv]
['YYY.ORG;YYY.ORG_FLAGS=97570000000', 'YYY.ORG_PN=XXXXXXXXXXX;YYY.ORG_FLAGS=ED100000']
``` | Like for so many other things, the solution is a list comprehension.
```
array = [[],[] etc. #your array]
print " ".join([item[2] for item in array])
>>>YYY.ORG;YYY.ORG_FLAGS=97570000000 YYY.ORG_PN=XXXXXXXXXXX;YYY.ORG_FLAGS=ED100000
```
Basically the key part here is the line:
```
[item[2] for item in array]
```
which iterates through `array` and returns the third value (index 2, zero-indexed) of each item it finds.
For loop equivalent:
```
result = []
for item in array:
result.append(item[2])
``` | How can I print one part of a Python substring? | [
"",
"python",
"python-2.7",
""
] |
I have following two tables
Table Person
```
Id Name
1 A
2 B
3 C
4 D
5 E
```
Table RelationHierarchy
```
ParentId CHildId
2 1
3 2
4 3
```
This will form a tree like structure
```
D
|
C
|
B
|
A
```
ParentId and ChildId are foreign keys of Id column of Person Table
I need to write SQL that can fetch the top-level parent, i.e. the root, of each person.
The following CTE can do this for each person. I converted it to a function and ran it for each row of Person. I have about 3k rows in the Person table and it takes about 10 seconds. Can anyone suggest an approach that takes less time? The problem is that the function running the following CTE runs 3k times:
```
DECLARE @childID INT
SET @childID = 1 --child to search
;WITH RCTE AS
(
SELECT *, 1 AS Lvl FROM RelationHierarchy
WHERE ChildID = @childID
UNION ALL
SELECT rh.*, Lvl+1 AS Lvl FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rh.CHildId = rc.ParentId
)
SELECT TOP 1 id, Name
FROM RCTE r
inner JOIN dbo.Person p ON p.id = r.ParentId
ORDER BY lvl DESC
``` | I have also updated the answer in the [original question](https://stackoverflow.com/questions/17676944/finding-a-top-level-parent-in-sql/17677469), but never-mind, here is a copy also:
```
;WITH RCTE AS
(
SELECT ParentId, ChildId, 1 AS Lvl FROM RelationHierarchy
UNION ALL
SELECT rh.ParentId, rc.ChildId, Lvl+1 AS Lvl
FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rh.ChildId = rc.ParentId
)
,CTE_RN AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY r.ChildID ORDER BY r.Lvl DESC) RN
FROM RCTE r
)
SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM dbo.Person pc
LEFT JOIN CTE_RN r ON pc.id = r.CHildId AND RN =1
LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/7355f/5)**
Note that the slight difference is in the recursive part of the CTE: ChildId is now carried over each time from the anchor part. Another addition is the ROW_NUMBER() function (and a new CTE) to get the top level for each child at the end.
**EDIT - Version2**
After finding performance issues with the first query, here is an improved version. Going from top to bottom instead of the other way around eliminates the creation of extra rows in the CTE, and should be much faster with a high number of recursion levels:
```
;WITH RCTE AS
(
SELECT ParentId, CHildId, 1 AS Lvl FROM RelationHierarchy r1
WHERE NOT EXISTS (SELECT * FROM RelationHierarchy r2 WHERE r2.CHildId = r1.ParentId)
UNION ALL
SELECT rc.ParentId, rh.CHildId, Lvl+1 AS Lvl
FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rc.CHildId = rh.ParentId
)
SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM dbo.Person pc
LEFT JOIN RCTE r ON pc.id = r.CHildId
LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/7355f/11)** | You could try to use a loop. As you will get many levels of recursion with your approach:
```
declare @child int = 0
declare @parent int = 1 --child to search
while @child <> @parent
BEGIN
set @child = @parent
select @parent = Parentid from @parentchild where ChildID = @child
END
select @parent
``` | Finding Top level parent of each row of a table [SQL Server 2008] | [
"",
"sql",
"sql-server",
"database",
"hierarchical-data",
""
] |
I am trying to test my Django views. This view passes a QuerySet to the template:
```
def merchant_home(request, slug):
merchant = Merchant.objects.get(slug=slug)
product_list = merchant.products.all()
return render_to_response('merchant_home.html',
{'merchant': merchant,
'product_list': product_list},
context_instance=RequestContext(request))
```
and test:
```
def test(self):
"Merchant home view should send merchant and merchant products to the template"
merchant = Merchant.objects.create(name='test merchant')
product = Product.objects.create(name='test product', price=100.00)
merchant.products.add(product)
test_client = Client()
response = test_client.get('/' + merchant.slug)
# self.assertListEqual(response.context['product_list'], merchant.products.all())
self.assertQuerysetEqual(response.context['product_list'], merchant.products.all())
```
***EDIT***
I am using `self.assertQuerysetEqual` instead of `self.assertListEqual`. Unfortunately this still doesn't work, and the terminal displays this:
`['<Product: Product object>'] != [<Product: Product object>]`
---
`assertListEqual` raises: `'QuerySet' object has no attribute 'difference'` and
`assertEqual` does not work either, although `self.assertSetEqual(response.context['product_list'][0], merchant.products.all()[0])` does pass.
I assume this is because the QuerySets are different objects even though they contain the same model instances.
How do I test that two QuerySets contain the same data? Am I even testing this correctly? This is my 4th day learning Django so I would like to know best practices, if possible. Thanks. | Use [assertQuerysetEqual](https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.TransactionTestCase.assertQuerysetEqual), which is built to compare the two querysets for you. You will need to subclass Django's `django.test.TestCase` for it to be available in your tests. | By default `assertQuerysetEqual` uses `repr()` on the first argument. This is why you were having issues with the strings in the queryset comparison.
To work around this you can [override the `transform` argument](https://docs.djangoproject.com/en/2.0/topics/testing/tools/#django.test.TransactionTestCase.assertQuerysetEqual) with a `lambda` function that doesn't use `repr()`:
```
self.assertQuerysetEqual(queryset_1, queryset_2, transform=lambda x: x)
``` | How do I test Django QuerySets are equal? | [
"",
"python",
"django",
"django-queryset",
"django-testing",
""
] |
I'm new to SQL and for some reason, the arrow symbol ( -> ) that I am used to seeing in the command line, which means it is ready for input, is now displayed as ( '> ) and it does not accept commands. What does it mean and how do I get back to ( -> )?
Thanks | It means that it is treating any input which follows as part of a string literal, until it encounters a(n unescaped) string termination quote `'` character.
This will have happened because you previously began the string literal with such a string termination quote character. For example:
```
mysql> SELECT foo
-> FROM tbl
-> WHERE bar LIKE 'somestring
'> this is still part of somestring'
-> ;
``` | [find the attached image](https://i.stack.imgur.com/9z6Jm.png)
Use the `'/` command and press Enter; then it will go to the next line starting with `->`.
Then use `;` and press Enter.
It happens if there is an unbalanced `'` (single quote) in the query. | What does the ( ' > ) symbol mean in the command line in MySQL? | [
"",
"mysql",
"sql",
"database",
"command-line",
""
] |
Suppose I have two numbers, @n1 and @n2, and I want to get the bigger one in one simple expression like Max(@n1, @n2). How do I write the expression in T-SQL? | ```
DECLARE
@n1 INT = 2,
@n2 INT = 3
SELECT MAX(n) FROM (VALUES(@n1), (@n2)) t(n)
``` | ```
CASE WHEN @n1 > @n2 THEN @n1 ELSE @n2 END
``` | What's the simple expression to get bigger number from two with T-SQL? | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
# Code:
```
class C:
def __init__(self, **kwargs):
self.w = 'foo'
self.z = kwargs['z']
self.my_function(self.z)
def my_function(self, inp):
inp += '!!!'
input_args = {}
input_args['z'] = 'bar'
c = C(**input_args)
print c.z
```
# Expected Result
```
bar!!!
```
# Actual Result
```
bar
```
How do you call a class' method in `__init__`? | Modify `self.z`, not `inp`:
```
def my_function(self, inp):
self.z += '!!!'
```
Secondly, strings are immutable in Python, so modifying `inp` won't affect the original string object.
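A stripped-down sketch of why the original code printed `bar` unchanged (the `shout` name is made up for illustration):

```python
def shout(inp):
    inp += '!!!'   # builds a new string and rebinds the *local* name only

word = 'bar'
shout(word)
print(word)  # bar -- the caller's string is untouched
```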
See what happens when `self.z` is a mutable object:
```
class C:
def __init__(self, ):
self.z = []
self.my_function(self.z)
def my_function(self, inp):
inp += '!!!'
print inp
print self.z
C()
```
**output:**
```
['!', '!', '!']
['!', '!', '!']
``` | The problem is, you are not actually modifying the value of `self.z`
Try this instead
```
class C:
def __init__(self, **kwargs):
self.w = 'foo'
self.z = kwargs['z']
self.z = self.my_function(self.z)
def my_function(self, inp):
inp += '!!!'
return inp
input_args = {}
input_args['z'] = 'bar'
c = C(**input_args)
print c.z
``` | How do you call a class' method in __init__? | [
"",
"python",
"constructor",
""
] |
Instead of using:
```
var = re.compile('old word',re.IGNORECASE)
```
And then using:
```
var2 = var.sub(r'new word', line)
```
How would I implement `IGNORECASE` into:
```
var = re.sub(r'WoRd',r'Word',line)
``` | Use the `flags` kwarg:
```
re.sub(r'WoRd',r'Word',line, flags=re.IGNORECASE)
```
Note that it's only available in 2.7 or later.
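A quick check of the flag in action (the sample text is made up):

```python
import re

line = 'this WoRd and that wOrD'
# case-insensitive match without pre-compiling the pattern
result = re.sub(r'word', 'Word', line, flags=re.IGNORECASE)
print(result)  # this Word and that Word
```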
<http://docs.python.org/2/library/re.html#re.sub> | You can use the flags in re.sub() too, like this:
```
var = re.sub(r'WoRd',r'Word',line, flags=re.IGNORECASE)
``` | How to add IGNORECASE without Compiling a RE | [
"",
"python",
""
] |
I'm opening a file that might be something like this...
> It was the best of times,
>
> it was the worst of times.
Let's say this file's name is myFile1.txt.
I want the file to be split up into this:
```
[['It','was','the','best','of','times',','],
['it','was', 'the','worst','of','times','.']]
```
It should be a list of lists of strings...
This is my idea...
```
def Split():
inFile=open('myFile1.txt','r')
for line in inFile:
separate=list(line.split())
return(separate)
print(Split())
```
would something like this work? | You can get your desired list by declaring `separate = []` outside of the loop, and then appending the result of `line.split()` to the list. You don't need to use the `list` function, as `line.split()` already returns a list.
You could try this:
```
def Split():
separate = []
with open('myFile1.txt','r') as inFile:
for line in inFile:
separate.append(line.split())
return(separate)
``` | ```
def Split():
results=[]
inFile=open('myFile1.txt','r')
for line in inFile.readlines():
results.append(line.split())
return results
print(Split())
```
Then everything should work fine. :) | Python Text Files to list of strings | [
"",
"python",
"python-3.x",
""
] |
I am trying to build the V8 JavaScript engine.
When I try to invoke the command `python build/gyp_v8`, I get an error:
```
File build/gyp_v8, line 48 in < module >
import gyp
ImportError: No module named GYP
```
How can I tell Python where to search for the GYP module, and what is the correct path to the module in the GYP folder?
My version of Python is 2.6.2.2, as recommended in the build instructions. | Obviously, the module gyp.py is not in the module search path (sys.path). sys.path is a list in the sys module which contains all known module paths. You can add the directory containing the module gyp.py manually by either of these methods:
1. Set it via the PYTHONPATH environment variable (see <http://docs.python.org/3/using/cmdline.html?highlight=path#envvar-PYTHONPATH>)
2. Add the path manually within your Python script prior to importing gyp. For example, if the directory containing this module is /home/you/gyp:
```
import os, sys
sys.path.append('/home/you/gyp')
import gyp
#--------- That's it ------------
```
You can check if this path already exists using the debug lines
```
import sys
print(sys.path) # version python 3.2
```
or
```
print sys.path # version python 2.7
``` | Installing the module will fix it as well:
```
git clone https://chromium.googlesource.com/external/gyp
cd gyp
sudo ./setup.py install
```
enjoy it. | how to add path with module to python? | [
"",
"python",
"v8",
"gyp",
""
] |
I have my development environment setup on Win 7 like this:
**Django development structure**
```
Apache -server- C:\Program Files (x86)\Apache Software Foundation\Apache2.4
PostgreSQL -database- C:\Program Files\PostgreSQL\9.2
Django -framework- C:\Python27\Lib\site-packages\django
Python -code- C:\Python27
Project -root- C:\mysite
|----------apps
|----------HTML
|----------CSS
|----------JavaScript
|----------assets
```
I am attempting to keep this extremely simple to start out. There are 5 main directories each with a distinct purpose. All the code resides in the project folder.
**compared to WAMP structure:**
```
C:\WAMP
|----------C:\Apache
|----------C:\MySQL
|----------C:\PHP
|----------C:\www
```
I like how Apache, MySQL, and PHP all reside in a neat directory. I know to keep the root project OUTSIDE in another directory in Django for security reasons.
* Is it fine that Apache, PostgreSQL, and Python are installed all over the place in the Django environment?
* Did I miss a core Django component and/or directory?
* Will deploying and scaling be a problem?
I want this to be a guideline for beginning Django web programmers. | I can answer the question one by one:
* `Is it fine that Apache, PostgreSQL, and Python are installed all over the place in the Django environment?`
All over the place sounds weird but yes it is totally fine.
* `Did I miss a core Django component and/or directory?`
No, you didn't miss anything; the Django core is already in the `site-packages` folder, and your site code is in `mysite`, which can be located anywhere you want.
* `Will deploying and scaling be a problem?`
No, it won't be a problem with the current structure. You will deploy only your `mysite`; the others will be installed separately.
Something you should get familiar with when starting with Django development:
* Most likely when you deploy your project, it will be on a Linux server, so install and learn Linux maybe?
* [virtualenv](http://www.virtualenv.org/en/latest/): Soon you will have to install Django, then a bunch of external packages to support your project. `virtualenv` helps you isolate your working environment. Well, it's unofficially a must when you start with Python development.
* [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/) to make your life easier when working with `virtualenv`
* [git](http://git-scm.com/) and [github](https://github.com/) or [bitbucket](https://bitbucket.org): if you don't know `git` yet, you should now. | Apache is just a web server; it is used to serve files, but to make a website you do not necessarily need it. Django comes with its own development server. See:
```
python manage.py runserver
```
Apache is required when you are developing PHP websites because your computer does not know how to compile and interpret PHP on its own. But for Django you use the Python language, and you have already installed it if you are using Django.
Read <https://docs.djangoproject.com/en/1.5/intro/tutorial01/>
And when the time comes to set up your own server using Apache, look at:
<https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/>. | python django project and folder structure (differing from WAMP) | [
"",
"python",
"django",
"directory-structure",
"project-structure",
""
] |
While it is incredibly useful to be able to do set operations between the keys of a dictionary, I often wish that I could perform the set operations on the dictionaries themselves.
I found some recipes for taking [the difference of two dictionaries](http://code.activestate.com/recipes/576644-diff-two-dictionaries/) but I found those to be quite verbose and felt there must be more pythonic answers. | tl;dr Recipe: `{k:d1.get(k, k in d1 or d2[k]) for k in set(d1) | set(d2)}` and `|` can be replaced with any other set operator.
Based on @torek's comment, another recipe that might be easier to remember (while being fully general) is: `{k:d1.get(k,d2.get(k)) for k in set(d1) | set(d2)}`.
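A quick demonstration of the second recipe before the full derivation:

```python
d1 = {'one': 1, 'both': 3}
d2 = {'two': 2, 'both': 30}

# values from d1 win for keys present in both dictionaries
union = {k: d1.get(k, d2.get(k)) for k in set(d1) | set(d2)}
intersection = {k: d1.get(k, d2.get(k)) for k in set(d1) & set(d2)}

print(union)         # {'one': 1, 'two': 2, 'both': 3} (key order may vary)
print(intersection)  # {'both': 3}
```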
Full answer below:
My first answer didn't deal correctly with values that evaluated to False. Here's an improved version which deals with Falsey values:
```
>>> d1 = {'one':1, 'both':3, 'falsey_one':False, 'falsey_both':None}
>>> d2 = {'two':2, 'both':30, 'falsey_two':None, 'falsey_both':False}
>>>
>>> print "d1 - d2:", {k:d1[k] for k in d1 if k not in d2} # 0
d1 - d2: {'falsey_one': False, 'one': 1}
>>> print "d2 - d1:", {k:d2[k] for k in d2 if k not in d1} # 1
d2 - d1: {'falsey_two': None, 'two': 2}
>>> print "intersection:", {k:d1[k] for k in d1 if k in d2} # 2
intersection: {'both': 3, 'falsey_both': None}
>>> print "union:", {k:d1.get(k, k in d1 or d2[k]) for k in set(d1) | set(d2)} # 3
union: {'falsey_one': False, 'falsey_both': None, 'both': 3, 'two': 2, 'one': 1, 'falsey_two': None}
```
The version for `union` is the most general and can be turned into a function:
```
>>> def dict_ops(d1, d2, setop):
... """Apply set operation `setop` to dictionaries d1 and d2
...
... Note: In cases where values are present in both d1 and d2, the value from
... d1 will be used.
... """
... return {k:d1.get(k,k in d1 or d2[k]) for k in setop(set(d1), set(d2))}
...
>>> print "d1 - d2:", dict_ops(d1, d2, lambda x,y: x-y)
d1 - d2: {'falsey_one': False, 'one': 1}
>>> print "d2 - d1:", dict_ops(d1, d2, lambda x,y: y-x)
d2 - d1: {'falsey_two': None, 'two': 2}
>>> import operator as op
>>> print "intersection:", dict_ops(d1, d2, op.and_)
intersection: {'both': 3, 'falsey_both': None}
>>> print "union:", dict_ops(d1, d2, op.or_)
union: {'falsey_one': False, 'falsey_both': None, 'both': 3, 'two': 2, 'one': 1, 'falsey_two': None}
```
Where items are in both dictionaries, the value from `d1` will be used. Of course we can return the value from `d2` instead by changing the order of the function arguments.
```
>>> print "union:", dict_ops(d2, d1, op.or_)
union: {'both': 30, 'falsey_two': None, 'falsey_one': False, 'two': 2, 'one': 1, 'falsey_both': False}
``` | EDIT: The recipes here don't deal correctly with False values. I've submitted another improved answer.
Here are some recipes I've come up with:
```
>>> d1 = {'one':1, 'both':3}
>>> d2 = {'two':2, 'both':30}
>>>
>>> print "d1 only:", {k:d1.get(k) or d2[k] for k in set(d1) - set(d2)} # 0
d1 only: {'one': 1}
>>> print "d2 only:", {k:d1.get(k) or d2[k] for k in set(d2) - set(d1)} # 1
d2 only: {'two': 2}
>>> print "in both:", {k:d1.get(k) or d2[k] for k in set(d1) & set(d2)} # 2
in both: {'both': 3}
>>> print "in either:", {k:d1.get(k) or d2[k] for k in set(d1) | set(d2)} # 3
in either: {'both': 3, 'two': 2, 'one': 1}
```
While the expressions in #0 and #2 could be made simpler, I like the generality of this expression which allows me to copy and paste this recipe everywhere and simply change the set operation at the end to what I require.
Of course we can turn this into a function:
```
>>> def dict_ops(d1, d2, setop):
... return {k:d1.get(k) or d2[k] for k in setop(set(d1), set(d2))}
...
>>> print "d1 only:", dict_ops(d1, d2, lambda x,y: x-y)
d1 only: {'one': 1}
>>> print "d2 only:", dict_ops(d1, d2, lambda x,y: y-x)
d2 only: {'two': 2}
>>> import operator as op
>>> print "in both:", dict_ops(d1, d2, op.and_)
in both: {'both': 3}
>>> print "in either:", dict_ops(d1, d2, op.or_)
in either: {'both': 3, 'two': 2, 'one': 1}
>>> print "in either:", dict_ops(d2, d1, lambda x,y: x|y)
in either: {'both': 30, 'two': 2, 'one': 1}
``` | How can I perform set operations on Python dictionaries? | [
"",
"python",
"dictionary",
""
] |
So I've started a simple sort of roguelike game project in python, but I have problems with editing a duplicate list editing the original. The code is as follows:
```
charx = 1
chary = 1
level = [["#","#","#","#","#","#","#"],["#",".",".",".",".",".","#"],["#",".","#",".",".",".","#"],["#","#","#","#","#","#","#"]]
while True:
move = raw_input("u,d,l or r: ")
if move=="u":
if level[chary-1][charx]!="#":
chary-=1
elif move=="d":
if level[chary+1][charx]!="#":
chary+=1
elif move=="r":
if level[chary][charx+1]!="#":
charx+=1
elif move=="l":
if level[chary][charx-1]!="#":
charx-=1
display_level=level
display_level[chary][charx]="@"
for row in display_level:
print "".join(row)
for row2 in level:
print ",".join(row2)
```
As you can see, I have created a list called level, where each list inside the list is a row. The character's starting row and column position is stated.
Then the loop asks whether you want to move up, down left or right. It checks to see if there is a wall (the # character) in the way and reacts accordingly.
As you can see, it makes a duplicate of the level called display_level and then puts an "@" as the character onto display_level. The problem, however, is that changing display_level changes level as well for no apparent reason, even though they are separate, and each time the loop runs through it sets display_level = level.
display_level and level are then printed separately, and the output shows level being edited alongside display_level, which shouldn't happen.
So what I want to know is why changing a part of display_level would change level, and how I can fix it.
Much appreciated | You are *not* creating a duplicate of `level`. All you do is create another reference to the *same list*.
To create a copy of a list, you can just slice it from start to end:
```
display_level = level[:]
```
but for your case, that is *not enough*. You need to copy the *nested* lists. In this case, a list comprehension is easiest:
```
display_level = [l[:] for l in level]
```
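A quick sketch of the difference (the plain slice shares the inner row lists, while the comprehension copies them too):

```python
level = [['#', '.'], ['#', '#']]

shallow = level[:]                # new outer list, but the same inner row objects
shallow[0][0] = '@'
print(level[0][0])                # '@' -- the original changed too

level = [['#', '.'], ['#', '#']]  # reset
deep = [row[:] for row in level]  # copies each row as well
deep[0][0] = '@'
print(level[0][0])                # '#' -- the original is untouched
```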
A more comprehensive solution is to use the [`copy.deepcopy()` function](http://docs.python.org/2/library/copy.html#copy.deepcopy) to make *absolutely sure* that the top-level object and all contained objects are copies:
```
import copy
# ...
display_level = copy.deepcopy(level)
``` | Assigning a variable to another is referencing not duplicating. Use [`copy()`](http://docs.python.org/2/library/copy.html) instead.
```
>>> l = range(10) # assign original list
>>> l # display list (l)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a = l # assign a to l
>>> a # display list (a)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a.remove(4) # remove '4' item from a
>>> a # display list (a)
[0, 1, 2, 3, 5, 6, 7, 8, 9] # notice that '4' is gone
>>> l # display list (l)
[0, 1, 2, 3, 5, 6, 7, 8, 9] # notice that '4' is *also* gone
>>> import copy # import the copy module
>>> b = copy.deepcopy(l) # create a deepcopy of l called b
>>> b # display list (b)
[0, 1, 2, 3, 5, 6, 7, 8, 9] # notice, '4' is still gone.
>>> l # display list (l)
[0, 1, 2, 3, 5, 6, 7, 8, 9]
>>> b.remove(3) # remove '3' item from b
>>> b # display list (b)
[0, 1, 2, 5, 6, 7, 8, 9] # notice, '3' is gone.
>>> l # display list (l)
[0, 1, 2, 3, 5, 6, 7, 8, 9] # notice, '3' is still there.
```
**note:** there are some "simpler" ways, or "shortcuts" to copying lists detailed in [Martijn's Answer](https://stackoverflow.com/a/17704107/1561176) | Editing a duplicate list edits the original | [
"",
"python",
"list",
"python-2.7",
""
] |
I have created a Python script (I'm using 2.4) to automatically create a virtual host in httpd.conf. But when I run it, it gives the following error:
```
Traceback (most recent call last):
File "ApaPy2.py", line 2, in ?
from io import open
ImportError: No module named io
```
This is my script
```
import os
from io import open
project = raw_input(u'Enter the name of project ')
domain = raw_input (u'Enter the domain ')
docroot = raw_input(u'Enter root folder ')
virtualhost=u"""
<VirtualHost *:80>
ServerAdmin abhishek.verma@laitkor.com
DocumentRoot /""" +docroot+ u"""/""" +project+ u"""
ServerName """ +project+ u""".""" +domain+ u""".com
ErrorLog logs/""" +project+ u""".com-error_log
CustomLog logs/""" +project+ u""".com-access_log common
</VirtualHost>"""
f = open(u'/etc/httpd/conf/httpd.conf', u'a')
f.write(virtualhost)
f.close()
``` | The `io` module was introduced in Python 2.6, so it doesn't exist in 2.4. From the [documentation](http://docs.python.org/2/library/io.html):
> New in version 2.6.
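A minimal sketch of the same append using only the builtin `open()` (a throwaway temp path is used here so it can run anywhere; the real script would keep `/etc/httpd/conf/httpd.conf`):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'httpd.conf')
virtualhost = "\n<VirtualHost *:80>\n    ServerName demo.example.com\n</VirtualHost>"

f = open(path, 'a')  # the plain builtin open -- no `io` import needed on 2.4
f.write(virtualhost)
f.close()

print('ServerName demo.example.com' in open(path).read())  # True
```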
The [`open`](http://docs.python.org/2/library/functions.html#open) keyword should work fine for what you're doing here. | The `io` module doesn't exist in 2.4 (and you don't need it to use `open` in this case). I would also simplify your code to use string formatting using `%` instead:
```
project = raw_input(u'Enter the name of project ')
domain = raw_input (u'Enter the domain ')
docroot = raw_input(u'Enter root folder ')
virtualhost=u"""
<VirtualHost *:80>
ServerAdmin abhishek.verma@laitkor.com
DocumentRoot /%(docroot)s/%(project)s
ServerName %(project)s.%(domain)s.com
ErrorLog logs/%(project)s.com-error_log
CustomLog logs/%(project)s.com-access_log common
</VirtualHost>"""
f = open(u'/etc/httpd/conf/httpd.conf', u'a')
f.write(virtualhost % dict(project=project, docroot=docroot, domain=domain))
f.close()
``` | Creation of virtual host through python | [
"",
"python",
"python-2.4",
""
] |
I have the following **[schema](http://sqlfiddle.com/#!2/6e52f/3/0)**:
```
CREATE TABLE `filmati` (
`idfilmato` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idfilmato`)
) ;
CREATE TABLE `utenti` (
`idutente` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idutente`)
) ;
CREATE TABLE `utenti_has_filmati` (
`fkutente` int(11) NOT NULL,
`fkfilmato` int(11) NOT NULL,
PRIMARY KEY (`fkutente`,`fkfilmato`),
KEY `fk_Utenti_has_videos_videos1_idx` (`fkfilmato`),
KEY `fk_Utenti_has_videos_Utenti_idx` (`fkutente`),
CONSTRAINT `fk_Utenti_has_videos_Utenti` FOREIGN KEY (`fkutente`) REFERENCES `utenti` (`idutente`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `fk_Utenti_has_videos_videos1` FOREIGN KEY (`fkfilmato`) REFERENCES `filmati` (`idfilmato`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ;
```
And data:
```
Insert into filmati VALUES (1);
Insert into filmati VALUES (2);
Insert into filmati VALUES (3);
insert into utenti values(1);
insert into utenti values(2);
insert into utenti values(3);
insert into utenti_has_filmati values(1,2);
insert into utenti_has_filmati values(1,3);
insert into utenti_has_filmati values(2,3);
insert into utenti_has_filmati values(2,1);
insert into utenti_has_filmati values(3,1);
insert into utenti_has_filmati values(3,2);
```
I would like a query that can tell me the missing relationship
In this case:
```
1 1
2 2
3 3
```
Any advice? | (Cross) join the two tables, so you get the combination of all filmati and all utenti (= the Cartesian product).
Then you can `left join` the junction table `utenti_has_filmati` to join all existing relations.
Then use the where clause to only return those records that have no matching row in the junction table
```
select u.idutente, f.idfilmato
from
utenti u
cross join filmati f
left join utenti_has_filmati uf
on uf.fkutente = u.idutente and
uf.fkfilmato = f.idfilmato
where
uf.fkfilmato is null -- Check for either field. Doesn't matter.
``` | ```
SELECT
A.idfilmato, B.idutente
FROM
filmati A,utenti B
WHERE NOT EXISTS
(
SELECT 1
FROM
utenti_has_filmati AB
WHERE
        AB.fkutente = B.idutente AND
        AB.fkfilmato = A.idfilmato
)
``` | Retrieve Missing relationship in SQL | [
"",
"mysql",
"sql",
""
] |
Is there a way to append a list of lines to a file in a single line of Python code? I've been doing it like this:
```
lines = ['this is the foo bar line to append','this is the second line', 'whatever third line']
for l in lines:
print>>open(infile,'a'), l
``` | Two lines:
```
lines = [ ... ]
with open('sometextfile', 'a') as outfile:
outfile.write('\n'.join(lines) + '\n')
```
We add the `\n` at the end for a trailing newline.
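Without it, the next append would start on the same line as the last entry — a quick check:

```python
lines = ['a', 'b']
print(repr('\n'.join(lines)))         # 'a\nb'   -- no newline at the end
print(repr('\n'.join(lines) + '\n'))  # 'a\nb\n' -- safe to append to again later
```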
One line:
```
lines = [ ... ]
open('sometextfile', 'a').write('\n'.join(lines) + '\n')
```
I'd argue for going with the first though. | You can do that :
```
lines = ['this is the foo bar line to append','this is the second line', 'whatever third line']
with open('file.txt', 'w') as fd:
fd.write('\n'.join(lines))
``` | Append a list of strings to a file in a single line? - Python | [
"",
"python",
"string",
"file",
"append",
"output",
""
] |
I want one column in my MySQL table to always equal the difference of two others, but the only way I seem to be able to do it is by setting it to 0 the first time and then running an update command. Is there any way to just automate this behaviour so I never have to think about it? Appreciate the advice! | You probably want to investigate using [triggers](http://dev.mysql.com/doc/refman/5.0/en/triggers.html), which are set to run whenever a particular operation occurs on a database object, e.g.:
```
CREATE TRIGGER foobar BEFORE UPDATE ON mytable
    FOR EACH ROW SET NEW.foo = NEW.col2 - NEW.col1;
``` | You can also do this with a view:
```
create view vw_table as
select t.*, (col2 - col1) as diff
from t;
```
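The view approach is easy to sanity-check with SQLite from Python (a sketch — SQLite happens to accept the same view definition; the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (col1 INTEGER, col2 INTEGER)')
# the derived column lives only in the view definition
conn.execute('CREATE VIEW vw_table AS SELECT t.*, (col2 - col1) AS diff FROM t')
conn.execute('INSERT INTO t VALUES (3, 10)')
print(conn.execute('SELECT diff FROM vw_table').fetchone()[0])  # 7
```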
The view will then do the calculation when it is used, so the results are always up-to-date. You don't need to store the data in separate columns.
The downside is that you cannot create a column on the result (in MySQL). For that, you need a separate column, maintained as described by Gian. | How to automatically set a mysql column to equal an operation on another? | [
"",
"mysql",
"sql",
"database",
""
] |
I am having a bit of trouble filtering data with pandas NAs. I have a data frame looking like this:
```
Jan Feb Mar Apr May June
0 0.349143 0.249041 0.244352 NaN 0.425336 NaN
1 0.530616 0.816829 NaN 0.212282 0.099364 NaN
2 0.713001 0.073601 0.242077 0.553908 NaN NaN
3 0.245295 0.007016 0.444352 0.515705 0.497119 NaN
4 0.195662 0.007249 NaN 0.852287 NaN NaN
```
and I need to filter out the rows that have "holes". I think of the rows as time series, and by a hole I mean NAs in the middle of the series, but not at the end. I.e. in the data frame above, lines 0, 1 and 4 have holes, but 2 and 3 do not (having NAs only at the end of the row).
The only way I could think of so far is something like this:
```
for rowindex, row in df.iterrows():
# now step through each entry in the row
# and after encountering the first NA,
# check if all subsequent values are NA too.
```
But I was hoping that there might be a less convoluted and more efficient way to do it.
Thanks,
Anne | As you say, looping (iterrows) is a last resort. Try this, which uses `apply` with `axis=1` instead of iterating through rows.
```
In [19]: def holey(s):
starts_at = s.notnull().argmax()
next_null = s[starts_at:].isnull().argmax()
if next_null == 0:
return False
any_values_left = s[next_null:].notnull().any()
return any_values_left
....:
In [20]: df.apply(holey, axis=1)
Out[20]:
0 True
1 True
2 False
3 False
4 True
dtype: bool
```
And now you can filter like `df[~df.apply(holey, axis=1)]`.
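The "hole" test itself is easy to sanity-check in plain Python (a sketch, treating `None` as NaN):

```python
def has_hole(row):
    # a hole = a missing value that is followed later by a real value
    seen_missing = False
    for v in row:
        if v is None:
            seen_missing = True
        elif seen_missing:
            return True
    return False

print(has_hole([0.349, None, 0.425, None]))  # True  (like row 0 above)
print(has_hole([0.713, 0.553, None, None]))  # False (like row 2 above)
```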
A handy idiom here: use `argmax()` to find the first occurrence of `True` in a Series of boolean values. | Here is another way using NumPy. It is faster because it uses NumPy functions on the underlying array as a whole, rather than applying a Python function to each row individually:
```
import io
import pandas as pd
import numpy as np
content = '''\
Jan Feb Mar Apr May June
0.349143 0.249041 0.244352 NaN 0.425336 NaN
0.530616 0.816829 NaN 0.212282 0.099364 NaN
0.713001 0.073601 0.242077 0.553908 NaN NaN
0.245295 0.007016 0.444352 0.515705 0.497119 NaN
0.195662 0.007249 NaN 0.852287 NaN NaN'''
df = pd.read_table(io.BytesIO(content), sep='\s+')
def remove_rows_with_holes(df):
nans = np.isnan(df.values)
# print(nans)
# [[False False False True False True]
# [False False True False False True]
# [False False False False True True]
# [False False False False False True]
# [False False True False True True]]
# First index (per row) which is a NaN
nan_index = np.argmax(nans, axis=1)
# print(nan_index)
# [3 2 4 5 2]
# Last index (per row) which is not a NaN
h, w = nans.shape
not_nan_index = w - np.argmin(np.fliplr(nans), axis=1)
# print(not_nan_index)
# [5 5 4 5 4]
mask = nan_index >= not_nan_index
# print(mask)
# [False False True True False]
# print(df[mask])
# Jan Feb Mar Apr May June
# 2 0.713001 0.073601 0.242077 0.553908 NaN NaN
# 3 0.245295 0.007016 0.444352 0.515705 0.497119 NaN
return df[mask]
def holey(s):
starts_at = s.notnull().argmax()
next_null = s[starts_at:].isnull().argmax()
if next_null == 0:
return False
any_values_left = s[next_null:].notnull().any()
return any_values_left
def remove_using_holey(df):
mask = df.apply(holey, axis=1)
return df[~mask]
```
Here are the timeit results:
```
In [78]: %timeit remove_using_holey(df)
1000 loops, best of 3: 1.53 ms per loop
In [79]: %timeit remove_rows_with_holes(df)
10000 loops, best of 3: 85.6 us per loop
```
The difference becomes more dramatic as the number of rows in the DataFrame increases:
```
In [85]: df = pd.concat([df]*100)
In [86]: %timeit remove_using_holey(df)
1 loops, best of 3: 1.29 s per loop
In [87]: %timeit remove_rows_with_holes(df)
1000 loops, best of 3: 440 us per loop
In [88]: 1.29 * 10**6 / 440
Out[88]: 2931.818181818182
``` | Use Pandas' NaNs to filter out holes in time series | [
"",
"python",
"pandas",
""
] |
I am creating a web app using Flask.
I wonder if it is possible to save user session data like
```
session['ishappy'] = True
```
in a database, like it's done in Django with SessionMiddleware, where you have the option to choose between cookies and a database.
And if it is, what should I import into my Flask app? | I suggest you implement your own Session and SessionInterface by subclassing the Flask defaults. Basically, you need to define your own session class and a session interface class.
```
from werkzeug.datastructures import CallbackDict
from flask.sessions import SessionInterface, SessionMixin

class MyDatabaseSession(CallbackDict, SessionMixin):
def __init__(self, initial=None, sid=None):
CallbackDict.__init__(self, initial)
self.sid = sid
self.modified = False
```
The above class will now have a **session id (sid) that will be stored in the cookie**. All the data related to this session id will be stored in your mysql database. For that, you need to implement the following class and methods below:
```
class MyDatabaseSessionInterface(SessionInterface):
def __init__(self, db):
# this could be your mysql database or sqlalchemy db object
self.db = db
def open_session(self, app, request):
# query your cookie for the session id
sid = request.cookies.get(app.session_cookie_name)
if sid:
# Now you query the session data in your database
# finally you will return a MyDatabaseSession object
def save_session(self, app, session, response):
        # save the session data to the db if it exists
# return a response cookie with details
response.set_cookie(....)
```
Also, you can define a model for storing session data:
```
class SessionData(db.Model):
def __init__(self,sid,data):
self.sid = sid
self.data = data
# and so on...
```
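The core mechanic — only an opaque sid travels in the cookie, while the data lives server-side — can be sketched without Flask at all (here a plain dict stands in for the database table):

```python
import uuid

store = {}  # sid -> session data; stands in for the SessionData table

def open_session(sid):
    # an unknown or missing sid starts a fresh session
    if sid not in store:
        sid = uuid.uuid4().hex
        store[sid] = {}
    return sid, store[sid]

sid, session = open_session(None)
session['ishappy'] = True            # stored server-side, keyed by sid
sid2, session2 = open_session(sid)   # a later request sends the sid back
print(sid == sid2 and session2['ishappy'])  # True
```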
The following snippets should give you an idea:
<https://github.com/fengsp/flask-snippets/blob/master/sessions/redis_session.py>
<https://github.com/fengsp/flask-snippets/blob/master/sessions/sqlite_session.py>
<https://github.com/fengsp/flask-snippets/blob/master/sessions/mongodb_session.py> | You should check out [Flask-KVSession](http://pythonhosted.org/Flask-KVSession/), which is self-described as
> a drop-in replacement for Flaskβs signed
> cookie-based session management. Instead of storing data on the
> client, only a securely generated ID is stored on the client, while
> the actual session data resides on the server.
Which basically describes traditional server-side sessions. Note that it supports multiple database backends:
> Flask-KVSession uses the [simplekv](http://github.com/mbr/simplekv) package for storing session data on a variety of backends.
See the [Example Use](http://pythonhosted.org/Flask-KVSession/#example-use) for an example of what to import and how to configure it. | Flask - Save session data in database like using cookies | [
"",
"python",
"flask",
""
] |
I have a table with a *varchar* column with data like this:
```
"<tasa>
<parametros>
<parametro>
<nombre>ea</nombre>
<valor>35</valor>
</parametro>
</parametros>
<valorTasa>3.15</valorTasa>
</tasa>"
```
I need to be able to extract the value between the **valorTasa** tags, but I don't know how to use the function and can't access the Oracle documentation.
I'm trying something like
```
select regexp_substr(field, '<valorTasa>[0-9]{0-3}</valorTasa') from dual;
```
With no results.
Any help would be greatly appreciated | A simpler way would be to use the [extractvalue](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions061.htm#i1131042) function to extract the value of the node.
```
-- sample of data
SQL> with t1(col) as(
2 select '<tasa>
3 <parametros>
4 <parametro>
5 <nombre>ea</nombre>
6 <valor>35</valor>
7 </parametro>
8 </parametros>
9 <valorTasa>3.15</valorTasa>
10 </tasa>'
11 from dual
12 )
13 select extractvalue(xmltype(col), '/tasa/valorTasa') as res
14 from t1
15 /
RES
-------
3.15
``` | Actually `REGEXP_REPLACE` will work best for this. If you put a part of the search expression in parentheses you can refer to it in the third "replace-with" parameter - the first such expression is `\1`, the second is `\2`, and so on up to `\9` (you can't do more than 9).
For your requirement, try this:
```
SELECT REGEXP_REPLACE(myXMLCol, '^.*<valorTasa>(.*)</valorTasa>.*$', '\1') FROM myTable
^^^^ ^^
```
The part in the parentheses above - `(.*)` maps to `\1`. The Oracle [`REGEXP_REPLACE`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions130.htm) docs explain this better than I can :) | extracting text from a column using regexp_substr | [
"",
"sql",
"regex",
"oracle",
""
] |
My requirement is to print the output of the script to the console as well as to a log file. The following piece of code does it for me, except for a minor hiccup. I am calling a Perl script at the end of the file, whose output is getting displayed in the console, but not getting printed to the file.
```
import subprocess
import sys
class Tee(object):
def __init__(self, *files):
self.files = files
def write(self, obj):
for f in self.files:
f.write(obj)
f = open('MyFile.txt', 'w')
original = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "Logging Started"
# My code
print "A"
subprocess.call(['./MyScript'])
sys.stdout = original
print "Logging Stopped" # Only on stdout
f.close()
```
Can anyone please advise how that can be achieved? Or is it possible at all? | Use [`subprocess.check_output`](http://docs.python.org/2/library/subprocess.html#subprocess.check_output):
```
print subprocess.check_output(['./MyScript'])
```
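Combined with the `Tee` from the question, the captured output then reaches both streams (a sketch in Python 3 syntax; `sys.executable -c ...` stands in for the Perl script, and the log path is a throwaway temp file):

```python
import os
import subprocess
import sys
import tempfile

class Tee(object):
    def __init__(self, *files):
        self.files = files
    def write(self, obj):
        for f in self.files:
            f.write(obj)

path = os.path.join(tempfile.mkdtemp(), 'MyFile.txt')
log = open(path, 'w')
tee = Tee(sys.stdout, log)
out = subprocess.check_output([sys.executable, '-c', "print('from child')"])
tee.write(out.decode())  # reaches the console *and* the log file
log.close()
print(open(path).read().strip())  # from child
```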
In Python 2.6, either use the backport [`subprocess32`](http://code.google.com/p/python-subprocess32/), or copy the [2.7 source for `check_output`](http://hg.python.org/cpython/file/2.7/Lib/subprocess.py#l544). | The below code did the trick for me. Thanks everyone for helping.
```
#!/usr/bin/python
import os
import subprocess
import sys
class Tee(object):
def __init__(self, *files):
self.files = files
def write(self, obj):
for f in self.files:
f.write(obj)
f = open('MyFile.txt', 'w')
original = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "Logging Started"
# My code
print "A"
def check_output(*popenargs, **kwargs):
process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
error = subprocess.CalledProcessError(retcode, cmd)
error.output = output
raise error
return output
location = "%s/folder" % (os.environ["HOME"])
myoutput = check_output(['./MyFile'])
print myoutput
sys.stdout = original
print "Logging Stopped" # Only on stdout
f.close()
``` | printing results in stdout file as well as console | [
"",
"python",
"logging",
"printing",
"python-2.6",
""
] |
I already have an existing database in `MySQL` that has a lot of tables and a lot of data. I intend to create a `Flask` app and use SQLAlchemy along with it. I asked on IRC, looked around on Google, and tried the following ideas:
**First** I used [sqlacodegen](https://pypi.python.org/pypi/sqlacodegen) to generate the models from my `DB`. But then I was confused about it a little and looked some more. And I found [this](http://flask.pocoo.org/mailinglist/archive/2012/5/23/flask-sqlalchemy-with-pre-existing-database/#9974419419ee1960c88c98689f6aee9d).
This looked like an elegant solution.
So **Second**, I rewrote my `models.py` according to the solution there and now I am even more confused. I am looking for the best approach to build this flask app along with the already existing DB.
I looked into the Flask documentation but didn't really get any help for a project with an already existing db. There is a lot of good stuff for creating something from scratch, creating the db and all, but I am really confused.
Please Note that its my first day with `Flask`, but I have experience with `Django`, so the basic concepts are not a hurdle. I need some guidance in choosing the best approach for this usecase. A detailed explanation would be greatly appreciated. By detailed I definitely do not expect someone to write all the code and spoon feed me on this, but just enough to get me started, that is integrate this db seamlessly into `flask` via `sqlalchemy`. Note my DB is in `MySQL`. | I'd say your question has nothing to do with flask at all. For example, you don't have a problem with the templates, routes, views or logon decorators.
Where you are struggling is with SQLAlchemy.
So my suggestion is to ignore Flask for a while and get used to SQLAlchemy first. You need to get used to your existing database and how to access it from SQLAlchemy. Use some MySQL documentation tool to find your way around it. Then start with something like this (note that it has nothing to do with Flask at all ... yet):
```
#!/usr/bin/python
# -*- mode: python -*-
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite:///webmgmt.db', convert_unicode=True, echo=False)
Base = declarative_base()
Base.metadata.reflect(engine)
from sqlalchemy.orm import relationship, backref
class Users(Base):
__table__ = Base.metadata.tables['users']
if __name__ == '__main__':
from sqlalchemy.orm import scoped_session, sessionmaker, Query
db_session = scoped_session(sessionmaker(bind=engine))
for item in db_session.query(Users.id, Users.name):
print item
```
In the line "`engine =`" you need to provide your path to your MySQL database, so that SQLAlchemy finds it. In my case I used a pre-existing sqlite3 database.
In the line "`class Users(Base)`" you need to use one of existing tables in your MySQL database. I knew that my sqlite3 database had a table named "users".
After this point, SQLAlchemy knows how to connect to your MySQL database, and it knows about one of the tables. You now need to add all the other tables that you care about. Finally, you need to specify the relationships to SQLAlchemy. Here I mean things like one-to-one, one-to-many, many-to-many, parent-child and so on. The SQLAlchemy web site contains a rather lengthy section about this.
After the line "`if __name__ == '__main__'`" comes some test code. It is executed when I run my Python script directly instead of importing it. Here you see that I create a DB session and use it for a very simple query.
My suggestion is that you first read about the important parts of SQLAlchemy's documentation, for example the descriptive table definition, the relationship model and how to query. Once you know this, you can change the last part of my example into a controller (e.g. using Python's `yield` method) and write a view that uses that controller. | The key to connecting Holger's answer to a flask context is that `db.Model` is a `declarative_base` object like `Base`. Took me a while to notice this important sentence in flask-sqlalchemy's [documentation](https://flask-sqlalchemy.palletsprojects.com/en/2.x/quickstart/)
Below are the steps I used for my app:
1. initiate a `db` object in the usual flask-alchemy manner:`db = SQLAlchemy(app)`. Note you'll need to set `app.config['SQLALCHEMY_DATABASE_URI'] = 'connection_string'` before that.
2. bind the declarative base to an engine: `db.Model.metadata.reflect(db.engine)`
3. Then you can use existing tables easily (eg. I have a table called BUILDINGS):
```
class Buildings(db.Model):
__table__ = db.Model.metadata.tables['BUILDING']
def __repr__(self):
return self.DISTRICT
```
Now your `Buildings` class will follow the existing schema. You can try `dir(Buildings)` in a Python shell and see all the columns already listed. | How to build a flask application around an already existing database? | [
"",
"python",
"mysql",
"sqlalchemy",
"flask",
"flask-sqlalchemy",
""
] |
I have a dictionary, which contains a title and the data. Is there a way to split this data into two lists while keeping the order it was extracted from the dictionary? I have to process the key list and the value list separately, and then I use those lists to build a string.
It's important because I print them out separately, and the output has to match. It doesn't matter if the lists are in a different order than when they were entered; so long as their positions match across the lists, it's fine.
Here is a very simple example to represent the case:
```
mydict = {'Hello':1, 'World':2, 'Again':3}
keys = mydict.keys()
values = mydict.values()
print 'The list of keys are: %s' % stringify(keys)
print 'The corresponding values are: %s' % stringify(values)
# Output:
> The list of keys are: Hello, Again, World
> The corresponding values are: 1, 3, 2
```
I know I can build an ordered dictionary and then getting the key/value ordering will be guaranteed, but I would also like to handle this case as well (non-ordered dictionary). | Even though the order you see the pairs in is arbitrary, the output from `keys()` and `values()` *will* always align, assuming you don't modify the dictionary. From the [docs](http://docs.python.org/2/library/stdtypes.html#dict.items):
> If items(), keys(), values(), iteritems(), iterkeys(), and
> itervalues() are called with no intervening modifications to the
> dictionary, the lists will directly correspond. This allows the
> creation of (value, key) pairs using zip(): pairs = zip(d.values(),
> d.keys()). The same relationship holds for the iterkeys() and
> itervalues() methods: pairs = zip(d.itervalues(), d.iterkeys())
> provides the same value for pairs. Another way to create the same list
> is pairs = [(v, k) for (k, v) in d.iteritems()]. | ```
titles = myDict.keys()
allData = [myDict[t] for t in titles]
```
This way, `titles` might be in some unpredictable order, but each element in `allData` will be the data pertaining to the corresponding element in `titles` | A way to guarantee ordering of key/value list from unordered dictionary? | [
"",
"python",
"dictionary",
""
] |
For example:
700 means 700.00 - 700.99
400.1 means 400.10 - 400.19
Currently I have something like this:
```
Select codes
From code_table
where codes in ('700', '400.1')
```
which doesn't work because it's just looking for 700 and 400.1 so I tried:
```
Select codes
From code_table
where left (codes, 3) = '700'
```
which works, but the problem is that my initial set of numbers is a lot larger than just the two here. Is there a simpler way to do this without having to write a clause for each one? I think the BETWEEN clause could also work, but it would also require coding each one. | If I've understood your comments correctly, you need to build the regular expressions explicitly if you compare against the varchar() values, or use strings with the BETWEEN operator.
So, this . . .
```
select *
from code_table
where codes = '700'
or codes like '700.[1-9]'
or codes like '700.[1-9][0-9]';
```
or this.
```
select *
from code_table
where codes between '700' and '700.99'
```
The second one is simpler. Both queries exclude the value '700.999', which I *believe* you want to exclude.
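The string comparison behind that BETWEEN can be checked quickly with SQLite from Python (a sketch):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE code_table (codes TEXT)')
conn.executemany('INSERT INTO code_table VALUES (?)',
                 [('700',), ('700.15',), ('700.999',), ('701',), ('400.12',)])
rows = conn.execute(
    "SELECT codes FROM code_table WHERE codes BETWEEN '700' AND '700.99'"
).fetchall()
print(sorted(r[0] for r in rows))  # ['700', '700.15'] -- '700.999' and '701' fall outside
```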
To work with numbers instead of varchar() values, cast to decimal. (*Not* to float.)
```
select *
from code_table
where cast(codes as decimal(10, 3)) between 700 and 700.99;
```
In any case, your SQL has to know something about the structure of the codes. If I were in your position, I'd just pass the range explicitly. That way, the full responsibility is with the front-end code, so there shouldn't be any surprises. | You can try something like :
```
select *
from code_table
where codes like '700.__' or codes like '400.1_'
```
Or to be sure to take only numeric values :
```
select *
from code_table
where (codes like '700.__' or codes like '400.1_') and isnumeric(codes) = 1
``` | SQL - Returning a range of numbers 1 to 1.99 | [
"",
"sql",
"sql-server-2008",
""
] |
I have a variable `c`.
```
c = ' FR,DE,UK,IT '
```
I want to cut this variable into other variables: `c1`, `c2`, `c3`, `c4`
So that `c1 = 'FR'`, `c2 = 'DE'`, `c3 = 'UK'`, `c4 = 'IT'`.
How do I do that? | ```
c1, c2, c3, c4 = c.split(',')
```
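One detail worth noting: `c` in the question has leading and trailing spaces, so stripping first keeps the pieces clean:

```python
c = ' FR,DE,UK,IT '
c1, c2, c3, c4 = c.strip().split(',')
print(c1, c4)  # FR IT
```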
Note that if, as I suspect, you're hoping for an arbitrary number of dynamic variables to be created depending on the length of the list, you should *not* try to do that. Keep them in a list:
```
c_list = c.split(',')
``` | You can use the [`split`](http://docs.python.org/2/library/stdtypes.html) function of strings:
```
c1, c2, c3, c4 = c.split(',')
``` | I want to cut variable on another variables using Python | [
"",
"python",
"string",
""
] |
I am trying to define a function to compute the perimeter of a rectangle. Here is the code:
```
width = input()
height = input()
def rectanglePerimeter(width, height):
return ((width + height)*2)
print(rectanglePerimeter(width, height))
```
I think I haven't left any arguments opened or anything like that. | ```
width, height = map(int, input().split())
def rectanglePerimeter(width, height):
    return ((width + height)*2)
print(rectanglePerimeter(width, height))
```
Running it like this produces:
```
% echo "1 2" | test.py
6
```
---
I suspect IDLE is simply passing a single string to your script. The first `input()` is slurping the entire string. Notice what happens if you put some print statements in after the calls to `input()`:
```
width = input()
print(width)
height = input()
print(height)
```
Running `echo "1 2" | test.py` produces
```
1 2
Traceback (most recent call last):
File "/home/unutbu/pybin/test.py", line 5, in <module>
height = input()
EOFError: EOF when reading a line
```
Notice the first print statement prints the entire string `'1 2'`. The second call to `input()` raises the `EOFError` (end-of-file error).
So a simple pipe such as the one I used only allows you to pass one string. Thus you can only call `input()` once. You must then process this string, split it on whitespace, and convert the string fragments to ints yourself. That is what
```
width, height = map(int, input().split())
```
does.
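A slightly more defensive version of the same parsing step, sketched with explicit error handling (the helper name is hypothetical):

```python
def parse_two_ints(line):
    # split on whitespace, then convert each piece to int
    parts = line.split()
    if len(parts) != 2:
        raise ValueError("expected exactly two numbers, got %r" % line)
    width, height = (int(p) for p in parts)
    return width, height

print(parse_two_ints("1 2"))  # (1, 2)
```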
Note, there are other ways to pass input to your program. If you had run `test.py` in a terminal, then you could have typed `1` and `2` separately with no problem. Or, you could have written a program with [pexpect](http://www.noah.org/wiki/pexpect) to simulate a terminal, passing `1` and `2` programmatically. Or, you could use [argparse](http://docs.python.org/dev/library/argparse.html) to pass arguments on the command line, allowing you to call your program with
```
test.py 1 2
``` | Use a `try` / `except` block to get rid of the EOF error.
```
try:
    width = input()
    height = input()
    def rectanglePerimeter(width, height):
        return ((width + height)*2)
    print(rectanglePerimeter(width, height))
except EOFError as e:
    print(end="")
``` | EOFError: EOF when reading a line | [
"",
"python",
"python-3.x",
""
] |
```
/****** Script for SelectTopNRows command from SSMS ******/
DECLARE @user int;
DECLARE @aipNip varchar(20);
DECLARE @accessToBaseCompanies TABLE (baseCompanyId INT);
SET @user = 1;
INSERT INTO @accessToBaseCompanies (baseCompanyId)
((SELECT c.BaseCompanyFk FROM [dbo].Companies c WHERE c.CompaniesTeamFk IN (SELECT u.CompaniesTeamFk FROM [dbo].[CompaniesTeams_Users] u WHERE UserFk = @user))
UNION ALL
(SELECT c.BaseCompanyFk FROM [dbo].Beneficiaries c WHERE c.DepartmentFk IN (SELECT u.DepartmentFk FROM [dbo].[Departments_Users] u WHERE UserFk = @user)))
SET @aipNip = (SELECT TOP 1 fc.[PureNip]
FROM [dbo].[BaseCompanies] bc
INNER JOIN [dbo].[Companies] c ON bc.Id = c.BaseCompanyFk AND c.CompanyType = 1
INNER JOIN [dbo].[Payment_Partners] fc ON fc.id = bc.CompanyPartnerFk)
SELECT bc.[Id]
,bc.[Name] as 'BaseCompany'
,SUM(cd.[PaidAmountNavireo]) - SUM(cd.[GrossTotal])
,SUM(CASE WHEN (ps.Id = 1 OR ps.Id = 3)
FROM [dbo].[BaseCompanies] bc
INNER JOIN [dbo].[Payment_CostDocuments] cd ON bc.Id = cd.BaseCompanyFk
INNER JOIN [dbo].[PaymentStatuses] ps ON ps.Id = cd.PaymentStatusFk
INNER JOIN [dbo].[Payment_Partners] fc ON fc.Id = cd.PartnerFk
WHERE bc.[Id] IN @accessToBaseCompanies
GROUP BY
bc.[Id],
bc.[Name]
```
@accessToBaseCompanies is not empty, why when I try execute query I get error: Incorrect syntax near '@accessToBaseCompanies'. | The argument to `IN` is a value list or a subquery, not a table. Try:
```
WHERE bc.[Id] IN (select baseCompanyId from @accessToBaseCompanies)
``` | The problem is this line:
```
WHERE bc.[Id] IN @accessToBaseCompanies
```
It should be:
```
WHERE bc.[Id] IN (select baseCompanyId from @accessToBaseCompanies)
``` | TSQL table variable in WHERE statement causes error | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am fetching data from a MySQL table and it works fine, but it shows `NULL` for `Status` in two rows. I want to show `0` instead of `NULL`.
Here is my query:
```
SELECT ml.GroupID, ml.GroupTitle, ml.GroupDescription, ml.GroupCreatedDateTime, ml.GroupOwnerUserID, ml.ApprovalNeeded, ml.GroupStatus, ml.OrganizationCode, cat.Status
FROM GroupsMaster AS ml
LEFT JOIN UserGroupsMapping cat ON cat.GroupID = ml.GroupID
WHERE ml.OrganizationCode = ?
``` | use the [COALESCE](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce) operator
```
COALESCE(cat.Status, 0)
SELECT
ml.GroupID,
ml.GroupTitle,
ml.GroupDescription,
ml.GroupCreatedDateTime,
ml.GroupOwnerUserID,
ml.ApprovalNeeded,
ml.GroupStatus,
ml.OrganizationCode,
COALESCE(cat.Status, 0) as Status
FROM GroupsMaster AS ml
LEFT JOIN UserGroupsMapping cat ON cat.GroupID = ml.GroupID
WHERE ml.OrganizationCode = 'ibm'
``` | You can try [ifnull()](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull) function.
like the following:
select **ifnull**(cat.status,0) as status
FROM GroupsMaster AS ml
LEFT JOIN UserGroupsMapping cat ON cat.GroupID = ml.GroupID
WHERE ml.OrganizationCode = '$organizationCode'"); | How to show 0 instead of NULL while fetching data my sql table | [
"",
"mysql",
"sql",
"null",
""
] |
I am trying to learn Python with Codecademy, and the assignment is to "Write a for loop that populates square\_list with items that are the square (x \*\* 2) of each item in start\_list. Then sort square\_list!"
The hint says to have a for loop, an append function and a sort function, and I believe I am using those correctly.
```
start_list = [5, 3, 1, 2, 4]
square_list = []
for start_list in square_list:
    square_list.append(start_list**2)
    sort(square_list)
print square_list
```
I get this error message: "Oops, try again! squareList should be the same length as startList!"
I have also tried using square\_list.insert(0, start\_list\*\*2) but that also gives the same error.
I would appreciate your help very much since i am stuck | There are a couple of things wrong with your code.
```
for start_list in square_list:
    square_list.append(start_list**2)
    sort(square_list)
```
On the first line you have `for start_list in square_list:`. That's saying "I want to do something in a loop for every `start_list` in `square_list`. The problem is that these are two separate lists. You want to create a new variable to stand for every item in `start_list`. The first two lines should be something like:
```
for x in start_list:
    square_list.append(x ** 2)
```
This is looping through each *value* (`x`) in `start_list` and appending its square onto `square_list`.
The second problem is the line `sort(square_list)`. This should not be indented, as that makes it execute each time through the loop. Also, the syntax to sort a list is `square_list.sort()`. So those three lines should be:
```
for x in start_list:
    square_list.append(x ** 2)
square_list.sort()
``` | You need to iterate over each elements in `start_list`:
```
for item in start_list:                # for each element in `start_list`
    square_list.append(item ** 2)      # add square of `item` in `square_list`
square_list.sort()                     # finally sort the list
```
---
or you can use `sorted()` function over `generator` expression:
```
>>> sorted(item ** 2 for item in start_list)
``` | when appending a list with math using another list, how do i make the second list the same length as the first ? | [
"",
"python",
"list",
""
] |
I'm having difficulty reading a csv file into the pandas data frame. I am a total newcomer to pandas, and this is preventing me from progressing. I have read the documentation and searched for solutions, but I am unable to proceed. I have tried the following to no avail...
```
import pandas as pd
import numpy as np
pd.read_csv('C:\Users\rcreedon\Desktop\TEST.csv')
pd.read_csv("C:\Users\rcreedon\Desktop\TEST.csv")
```
and similar permutations with/without quotation marks.
It spits out a large composite error that ends in:
```
IOError: File C:\Users
creedon\Desktop\TEST.csv does not exist
```
It seems strange that in the error it misses of the "r" from "rcreedon". Is this what's causing the problem?
Just for the sake of it i also tried
```
pd.read_csv('C:\rcreedon\Desktop\TEST.csv')
```
And again the 'r' was missed when the error was returned.
Sorry to be such a block head, but I'm struggling here....
Any help appreciated. | "\r" usually is interpreted as a special character and means carriage return. Either add a 'r' prefix to your string literals which prevents this special sequence from being interpreted (e.g. `path = r"foo\rar"`), or, as already suggested, just use a normal slash as path delimiter. Python is intelligent enough for it to also work on Windows :-) | Just use a raw string:
```
pd.read_csv(r'C:\Users\rcreedon\Desktop\TEST.csv')
``` | 'CSV does not exist' - Pandas DataFrame | [
"",
"python",
"pandas",
""
] |
I have a GUI where I ask the user to enter some values. When the user submits the data I do some validation:
1. first I check if the user has entered a value for each input
2. then I check if each of the inputted values are integers
Trying not to repeat myself to much I came up with this, but the second part of the validation looks more like a hack. Is there a more pythonic way of rewriting this, short of spelling everything out like in the first part of the validation?
```
errors = []
# 1) check if values exist
if not self.startInput.GetValue():
    errors.append("Please provide a start")
if not self.stopInput.GetValue():
    errors.append("Please provide a stop")
if not self.valueInput.GetValue():
    errors.append("Please provide a value")
# 2) check if values are integers
try:
    self.start = int(self.startInput.GetValue())
    self.stop = int(self.stopInput.GetValue())
    self.value = int(self.valueInput.GetValue())
except ValueError as err:
    tb = traceback.format_exc()
    func = re.search('self\.(.*) =', tb).groups()[0]
    errors.append("Value for {0} needs to be an integer.".format(func))
if errors:
    raise RuntimeError('\n'.join(errors))
``` | Since you're checking for integers, and not floats, you can simply do:
```
if self.start.GetValue().strip().isdigit():
    pass
```
`isdigit()` returns `False` for both cases where the input is an empty string, and when the input contains non-digits.
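One caveat worth keeping in mind if negative values can ever be valid input: `str.isdigit()` only accepts unsigned integers. A quick check:

```python
assert "123".isdigit()
assert not "".isdigit()      # empty input fails, as noted above
assert not "12.5".isdigit()  # so does a decimal point
assert not "-3".isdigit()    # and so does a sign
```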
If you want to send specific errors for incorrect assignment:
```
startValue = self.start.GetValue().strip()
if not startValue:
    errors.append("Please provide a start.")
elif not startValue.isdigit():
    errors.append("Value of start must be an integer.")
``` | I think the `try: ... except` is perfectly Pythonic. I would have uses an helper function instead of searching through an error message `get_int_of_name(name, value, error)` which return an int and update error if needed:
```
def get_int_of_name(name, value, error):
    try:
        res = int(value)
    except ValueError:
        error.append("...")
        return 0
    else:
        return res
``` | More pythonic way of checking if user input exists and consists of integers | [
"",
"python",
"validation",
""
] |
I am new to Python and Pyramid. In a test application I am using to learn more about Pyramid, I want to query a database and create a dictionary based on the results of a sqlalchemy query object and finally send the dictionary to the chameleon template.
So far I have the following code (which works fine), but I wanted to know if there is a better way to create my dictionary.
```
...
index = 0
clients = {}
q = self.request.params['q']
for client in DBSession.query(Client).filter(Client.name.like('%%%s%%' % q)).all():
    clients[index] = { "id": client.id, "name": client.name }
    index += 1
output = { "clients": clients }
return output
```
While learning Python, I found a nice way to create a list in a for loop statement like the following:
```
myvar = [user.name for user in users]
```
So, the other question I had: is there a similar 'one line' way like the above to create a dictionary of a sqlalchemy query object?
Thanks in advance. | well, yes, we can tighten this up a bit.
First, this pattern:
```
index = 0
for item in seq:
    frobnicate(index, item)
    index += 1
```
is common enough that there's a builtin function that does it automatically, `enumerate()`, used like this:
```
for index, item in enumerate(seq):
    frobnicate(index, item)
```
*but*, I'm not sure you need it, Associating things with an integer index starting from zero is the functionality of a list, you don't really need a dict for that; unless you want to have holes, or need some of the other special features of dicts, just do:
```
stuff = []
stuff.extend(seq)
```
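(If you do end up wanting the integer-keyed dict, `enumerate` also pairs naturally with a dict comprehension; a quick sketch with plain values:)

```python
names = ['ann', 'bob']  # hypothetical stand-ins for query results
clients = {index: {"id": index, "name": name} for index, name in enumerate(names)}
print(clients)  # {0: {'id': 0, 'name': 'ann'}, 1: {'id': 1, 'name': 'bob'}}
```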
when you're only interested in a small subset of the attributes of a database entity, it's a good idea to tell sqlalchemy to emit a query that returns only that:
```
query = DBSession.query(Client.id, Client.name) \
        .filter(Client.name.contains(q))
```
In the above I've also shortened the `.name.like('%%%s%%' % q)` into `Client.name.contains(q)`, which SQLAlchemy expands into the same `LIKE` expression for you. (The bare Python `q in Client.name` doesn't work here: the `in` operator coerces its result to a boolean, so the `.contains()` method has to be used instead.)
Queries constructed in this way return a [special thing](http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.util.KeyedTuple) that looks like a tuple, and can be easily turned into a dict by calling `_asdict()` on it:
so to put it all together
```
output = [row._asdict()
          for row in DBSession.query(Client.id, Client.name)
                              .filter(Client.name.contains(q))]
```
or, if you really desperately need it to be a dict, you can use a dict comprehension:
```
output = {index: row._asdict()
          for index, row
          in enumerate(DBSession.query(Client.id, Client.name)
                                .filter(Client.name.contains(q)))}
``` | @TokenMacGuy gave a nice and detailed answer to your question. However, I have a feeling you've asked a wrong question :)
You don't need to convert SQLAlchemy objects to dictionaries before passing them to the template - that would be quite inconvenient. You can pass the result of a query as is and directly use SQLAlchemy mapped objects in your template
```
q = self.request.params['q']
clients = DBSession.query(Client).filter(Client.name.contains(q)).all()
return {'clients': clients}
``` | Create dictionary of a sqlalchemy query object in Pyramid | [
"",
"python",
"sqlalchemy",
"pyramid",
""
] |
All,
I have a personal script that I'm trying to write, and I've run into a small problem. The problem is that I want to be able to support a '-s' argument no matter where the argument is. I also want that argument to be allowed more than once. For example:
script.py firstargument secondargument -s thirdargument -s fourth fifth -s sixth
What I've tried isn't working. I've tried the following:
```
currentArg = 1
foldername = sys.argv[1:]
for folders in foldername:
    if "-s" in folders:
        newArg = currentArg + 1
        setType = str(sys.argv[newArg])
        function(setType)
```
What it's doing is taking the -s as an argument and still passing that to the function. What I'd like it to do above is see that the first '-s' is at the fourth position, add 1 to 4, and then setType is set to sys.argv[5]. I'd also like it to continue to loop through the arguments and find any '-s' and then use the next argument as the value. Any ideas?
The error that I get is:
```
WindowsError: [Error 3] The system cannot find the path specified: 'c:\\-s/*.*'
```
Notice that it sees -s as an argument and tries to pass that as a folder, but I want it to pass the NEXT argument as the folder...
Is there a better way of doing this? I'll need to take all "-s" into account... | See other solution:
```
args = ['firstargument', 'secondargument', '-s', 'thirdargument', '-s', 'fourth', 'fifth', '-s', 'sixth', 'seventh']
args1 = ['firstargument', 'secondargument', '-s', 'thirdargument', '-s', 'fourth', 'fifth', '-s', 'sixth', 'seventh', '-s']
marker = "-s"
def parser(arglist):
    scraplist, result = [], []
    arglist_range = range(len(arglist))
    # Remember candidates to exclude: the marker and the element before it.
    for arg in arglist_range:
        if arglist[arg] == marker:
            scraplist.append(arg - 1)
            scraplist.append(arg)
    # Exclude the marked elements
    for arg in arglist_range:
        if arg not in scraplist:
            result.append(arglist[arg])
    return result
print parser(args)
print parser(args1)
```
Run it:
```
$ python test.py
['firstargument', 'fourth', 'sixth', 'seventh']
['firstargument', 'fourth', 'sixth']
```
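For the behaviour the question actually asks for (treating the argument *after* each `-s` as its value), a minimal sketch:

```python
args = ['firstargument', 'secondargument', '-s', 'thirdargument',
        '-s', 'fourth', 'fifth', '-s', 'sixth']
# collect the element that follows each '-s' marker
values = [args[i + 1] for i, a in enumerate(args)
          if a == '-s' and i + 1 < len(args)]
print(values)  # ['thirdargument', 'fourth', 'sixth']
```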
Good practice: state the result you want for a given input. For example:
input = ['some', 'input', 'data']
output = ['some', 'correct', 'output'] | [argparse](http://docs.python.org/dev/library/argparse.html) is beautiful. Redesign your command line interface for it or write your own CLI-parser. CLI example:
```
args = ['firstargument', 'secondargument', '-s', 'thirdargument', '-s', 'fourth', 'fifth', '-s', 'sixth']
last_arg = None
container = []
marker = '-s'
for arg in args:
    if (arg != marker): last_arg = arg
    else: container.append(last_arg)
print container
```
Result of execute:
```
$python test.py
['secondargument', 'thirdargument', 'fifth']
``` | Python - Need to get argument on command line | [
"",
"python",
"loops",
"arguments",
""
] |
I have 5 lists, all of the same length, and I'd like to write them to 5 columns in a CSV. So far, I can only write one to a column with this code:
```
with open('test.csv', 'wb') as f:
    writer = csv.writer(f)
    for val in test_list:
        writer.writerow([val])
```
If I add another `for` loop, it just writes that list to the same column. Anyone know a good way to get five separate columns? | Change them to rows:
```
rows = zip(list1, list2, list3, list4, list5)
```
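A quick way to see the shape this produces: write the zipped rows to an in-memory buffer instead of a file (sketch):

```python
import csv
import io

list1 = [1, 2, 3]
list2 = ['a', 'b', 'c']
buf = io.StringIO()
csv.writer(buf).writerows(zip(list1, list2))  # each zipped tuple becomes one row
print(buf.getvalue().splitlines())  # ['1,a', '2,b', '3,c']
```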
Then just:
```
import csv

with open(newfilePath, "w", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow(row)
``` | The following code writes python lists into columns in csv
```
import csv
from itertools import zip_longest
list1 = ['a', 'b', 'c', 'd', 'e']
list2 = ['f', 'g', 'i', 'j']
d = [list1, list2]
export_data = zip_longest(*d, fillvalue = '')
with open('numbers.csv', 'w', encoding="ISO-8859-1", newline='') as myfile:
    wr = csv.writer(myfile)
    wr.writerow(("List1", "List2"))
    wr.writerows(export_data)
myfile.close()
```
The output looks like this
[](https://i.stack.imgur.com/Limy4.png) | Writing Python lists to columns in csv | [
"",
"python",
"list",
"csv",
""
] |
I have a query that looks like:
```
SELECT 'asdf', '123' ...
FROM table1
LEFT JOIN table2
on
(
condition1
)
LEFT JOIN table3
on
(
condition2
)
where
(
main_condition
)
```
Now the problem is, I need to conditionally include `table1` as well. I tried this:
```
..
..
FROM table1
on
(
new_condition
)
..
..
```
but it wouldn't work. Please help.
---
**EDIT** (New finding):
In this post (<http://blog.sqlauthority.com/2010/07/20/sql-server-select-from-dual-dual-equivalent/>), I found this piece of code:
```
SELECT 1 as i, f.bar, f.jar FROM dual LEFT JOIN foo AS f on f.bar = 1 WHERE dual.dummy = 'X'
UNION
SELECT 2 as i, f.bar, f.jar FROM dual LEFT JOIN foo AS f on f.bar = 2 WHERE dual.dummy = 'X'
```
I'm sure it's not directly related to what I'm trying to do, but is it possible to `JOIN` a table to `DUAL` like that? | Thanks for contributing to the discussion. I found the answer. It's really simple:
```
SELECT temp_table.* FROM
(SELECT 'asdf', '123' ... FROM DUAL) temp_table
LEFT JOIN table1
on
(
new_condition
)
LEFT JOIN table2
on
(
condition1
)
LEFT JOIN table3
on
(
condition2
)
where
(
main_condition
)
```
Interesting problem. Maybe I should favorite my own question this time :) | **Dummy table:**
Select a record from a dummy table first. `dual` is such a table, that is built in in MySQL for this exact purpose. I wrapped `dual` in a subselect, because MySQL apparently doesn't allow left joining against it.
```
SELECT 'asdf', '123' ...
FROM
(select 1 from dual) d
LEFT JOIN table1
on(
new_condition
)
LEFT JOIN table2
on
(
condition1
)
LEFT JOIN table3
on
(
condition2
)
```
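The dummy-row trick is easy to sanity-check in any engine. A sketch with Python's built-in `sqlite3` (where a bare `SELECT 1` subquery plays the role of `dual`, and `t1` is a hypothetical stand-in table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (x INTEGER)")  # left-joined table, deliberately empty
row = con.execute(
    "SELECT 'asdf', '123', t1.x "
    "FROM (SELECT 1) d LEFT JOIN t1 ON t1.x = 1"
).fetchone()
print(row)  # ('asdf', '123', None): the constants survive even with no t1 rows
```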
**Full (outer) join**
Another solution, though different, is using a `full join` or `full outer join`, which is like a `left join` and `right join` combined. Note, however, that MySQL itself does not support `FULL OUTER JOIN`, so it has to be emulated (for example with a `UNION` of a left join and a right join). You can achieve a very similar result:
```
select
*
from
table1
full outer join table2 on joincondition.
```
In the query above, all records from both tables are returned, even if no matching record in either table exists. | Mysql - Left Join all tables | [
"",
"mysql",
"sql",
"join",
""
] |
Lets say there is a dictionary
```
foo = {'b': 1, 'c':2, 'a':3 }
```
I want to iterate over this dictionary in the order of the appearance of items in the dictionary.
```
for k,v in foo.items():
    print k, v
```
prints
```
a 3
c 2
b 1
```
If we use sorted() function:
```
for k,v in sorted(foo.items()):
    print k, v
```
prints
```
a 3
b 1
c 2
```
But I need them in the order in which they appear in the dictionary, i.e.
```
b 1
c 2
a 3
```
How do i achieve this ? | Dictionaries have no order. If you want to do that, you need to find some method of sorting in your original list. Or, save the keys in a list in the order they are saved and then access the dictionary using those as keys.
From [The Python Docs](http://docs.python.org/2/tutorial/datastructures.html#dictionaries)
> It is best to think of a dictionary as an unordered set of key: value
> pairs, with the requirement that the keys are unique (within one
> dictionary).
Example -
```
>>> testList = ['a', 'c', 'b']
>>> testDict = {'a' : 1, 'c' : 2, 'b' : 3}
>>> for elem in testList:
    print elem, testDict[elem]
a 1
c 2
b 3
```
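Worth noting for readers on current Python: since Python 3.7, plain dicts are guaranteed to preserve insertion order, so the loop in the question already prints entries in the order they were inserted. A quick check:

```python
foo = {'b': 1, 'c': 2, 'a': 3}
assert list(foo.items()) == [('b', 1), ('c', 2), ('a', 3)]  # insertion order (3.7+)
```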
**Or better yet**, use an [`OrderedDict`](http://docs.python.org/2/library/collections.html#collections.OrderedDict) -
```
>>> from collections import OrderedDict
>>> testDict = OrderedDict([('a', 1), ('c', 2), ('b', 3)])
>>> for key, value in testDict.items():
    print key, value
a 1
c 2
b 3
``` | An [ordered dictionary](http://docs.python.org/2/library/collections.html#collections.OrderedDict) would have to be used to remember the order that they were stored in
```
>>>from collections import OrderedDict
>>>od = OrderedDict()
>>>od['b'] = 1
>>>od['c'] = 2
>>>od['a'] = 3
>>>print od
OrderedDict([('b',1), ('c',2), ('a',3)]
``` | Python Sorted by Index | [
"",
"python",
""
] |
I have a DataFrame named `df` as
```
Order Number Status
1 1668 Undelivered
2 19771 Undelivered
3 100032108 Undelivered
4 2229 Delivered
5 00056 Undelivered
```
I would like to convert the `Status` column to boolean (`True` when Status is Delivered and `False` when Status is Undelivered)
but if Status is neither 'Undelivered' nor 'Delivered' it should be considered as NaN (not a number) or something like that.
I would like to use a dict
```
d = {
'Delivered': True,
'Undelivered': False
}
```
so I could easily add other string which could be either considered as `True` or `False`. | You can just use `map`:
```
In [7]: df = pd.DataFrame({'Status':['Delivered', 'Delivered', 'Undelivered',
'SomethingElse']})
In [8]: df
Out[8]:
Status
0 Delivered
1 Delivered
2 Undelivered
3 SomethingElse
In [9]: d = {'Delivered': True, 'Undelivered': False}
In [10]: df['Status'].map(d)
Out[10]:
0 True
1 True
2 False
3 NaN
Name: Status, dtype: object
``` | An example of `replace` method to replace values only in the specified column `C2` and get result as `DataFrame` type.
```
import pandas as pd
df = pd.DataFrame({'C1':['X', 'Y', 'X', 'Y'], 'C2':['Y', 'Y', 'X', 'X']})
C1 C2
0 X Y
1 Y Y
2 X X
3 Y X
df.replace({'C2': {'X': True, 'Y': False}})
C1 C2
0 X False
1 Y False
2 X True
3 Y True
``` | Convert Pandas series containing string to boolean | [
"",
"python",
"pandas",
"boolean",
"type-conversion",
"series",
""
] |
I'm running an animation using matplotlib's `FuncAnimation` to display data (live) from a microprocessor. I'm using buttons to send commands to the processor and would like the color of the button to change after being clicked, but I can't find anything in the `matplotlib.widgets.button` documentation (yet) that achieves this.
```
class Command:
    def motor(self, event):
        SERIAL['Serial'].write(' ')
        plt.draw()
write = Command()
bmotor = Button(axmotor, 'Motor', color = '0.85', hovercolor = 'g')
bmotor.on_clicked(write.motor) #Change Button Color Here
``` | Just set `button.color`.
E.g.
```
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
import itertools
fig, ax = plt.subplots()
button = Button(ax, 'Click me!')
colors = itertools.cycle(['red', 'green', 'blue'])
def change_color(event):
    button.color = next(colors)
    # If you want the button's color to change as soon as it's clicked, you'll
    # need to set the hovercolor, as well, as the mouse is still over it
    button.hovercolor = button.color
    fig.canvas.draw()
button.on_clicked(change_color)
plt.show()
``` | In current matplotlib version (1.4.2) 'color' and 'hovercolor' are took into account only when mouse '\_motion' event has happened, so the button change color not when you press mouse button, but only when you move mouse afterwards.
Nevertheless, you can change button background manually:
```
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
import itertools
button = Button(plt.axes([0.45, 0.45, 0.2, 0.08]), 'Blink!')
def button_click(event):
    button.ax.set_axis_bgcolor('teal')
    button.ax.figure.canvas.draw()
    # Also you can add timeout to restore previous background:
    plt.pause(0.2)
    button.ax.set_axis_bgcolor(button.color)
    button.ax.figure.canvas.draw()
button.on_clicked(button_click)
plt.show()
``` | Change matplotlib Button color when pressed | [
"",
"python",
"user-interface",
"animation",
"button",
"matplotlib",
""
] |
How can I rename a column in Microsoft SQL Server Management Studio? I used this code, but it gives me a syntax error.
```
alter table sudents rename column old to new;
``` | Use `sp_Rename` function
```
sp_rename 'sudents.OldColumnName', 'NewColumnName', 'COLUMN'
-- note: don't wrap the new name in brackets, or they become part of the column name
``` | ```
EXEC sp_RENAME 'TableName.FieldName' , 'NewFieldName', 'COLUMN'
``` | Rename a column in microsoft sql server managment studio | [
"",
"sql",
"sql-server",
""
] |
I've given a task of exporting data from an Oracle view to a fixed length text file, however I've been given specification of how data should be exported to a text file. I.e.
```
quantity NUM (10)
price NUM (8,2)
participant_id CHAR (3)
brokerage NUM (10,2)
cds_fees NUM (8,2)
```
My confusion arises with the numeric types, e.g. when it says (8,2). If I'm to render it as text, does it effectively mean
```
10 characters (as to_char(<field name>, '9999999.99'))
```
or
```
8 characters (as to_char(<field name>, '99999.99'))
```
when exporting to fixed length text field in the text file?
I was looking at this [question](https://stackoverflow.com/questions/10661708/what-is-precision-and-scale-means-in-oracle-number-data-type) which gave an insight, but not entirely.
Appreciate if someone could enlighten me with some examples.
Thanks a lot. | According to the [Oracle docs on types](http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i16209)
> Optionally, you can also specify a precision (total number of digits)
> and scale (number of digits to the right of the decimal point):
>
> If a precision is not specified, the column stores values as given. If
> no scale is specified, the scale is zero.
So in your case, a `NUMBER(8,2)`, has got:
* 8 digits in total
* 2 of which are after the decimal point
This gives you a range of `-999999.99` to `999999.99` | I assume that you mean NUMBER data type by NUM.
When it says NUMBER(8,2), it means that there will be **8 digits**, and that the number should be **rounded to the nearest hundredth**. Which means that there will be 6 digits before, and 2 digits after the decimal point.
Refer to [oracle doc](http://docs.oracle.com/cd/A97630_01/appdev.920/a96624/03_types.htm):
> You use the NUMBER datatype to store fixed-point or floating-point
> numbers. Its magnitude range is 1E-130 .. 10E125. If the value of an
> expression falls outside this range, you get a numeric overflow or
> underflow error. You can specify precision, which is the total number
> of digits, and scale, which is the number of digits to the right of
> the decimal point. The syntax follows:
>
> NUMBER[(precision,scale)]
>
> To declare fixed-point numbers, for which you must specify scale, use
> the following form:
>
> NUMBER(precision,scale)
>
> To declare floating-point numbers, for which you cannot specify
> precision or scale because the decimal point can "float" to any
> position, use the following form:
>
> NUMBER
>
> To declare integers, which have no decimal point, use this form:
>
> NUMBER(precision) -- same as NUMBER(precision,0)
>
> You cannot use constants or variables to specify precision and scale;
> you must use integer literals. The maximum precision of a NUMBER value
> is 38 decimal digits. If you do not specify precision, it defaults to
> 38 or the maximum supported by your system, whichever is less.
>
> Scale, which can range from -84 to 127, determines where rounding
> occurs. For instance, a scale of 2 rounds to the nearest hundredth
> (3.456 becomes 3.46). A negative scale rounds to the left of the
> decimal point. For example, a scale of -3 rounds to the nearest
> thousand (3456 becomes 3000). A scale of 0 rounds to the nearest whole
> number. If you do not specify scale, it defaults to 0. | Number format in Oracle SQL | [
"",
"sql",
"oracle",
""
] |
**Description:**
I have a database of private messages users. Scheme:

For my task, only the columns `owner_user_id` and `viewer_user_id` are of interest.
The result of the query `SELECT owner_user_id, viewer_user_id FROM mail` is roughly the following:
[Result http://screencloud.net//img/screenshots/6dae938eab89faab8f69e683403b1eb2.png](http://screencloud.net//img/screenshots/6dae938eab89faab8f69e683403b1eb2.png)
**Problem**:
I need to get a list of **unique dialogues**.
Namely, from this list(\*1), I have to get something like this(\*2).
And this despite the fact that when you build a query, I do not know the identity of any one person.
\*1 [Some list http://screencloud.net//img/screenshots/1ad2ddeb5afa0d3b97985ab65adaef3f.png](http://screencloud.net//img/screenshots/1ad2ddeb5afa0d3b97985ab65adaef3f.png)
\*2 <http://screencloud.net//img/screenshots/1d75c3ee60e73f9874a836b738013c5a.png>
**Question**:
How do I do it?
*I hope I put the question correctly.* | Found a solution!
```
SELECT DISTINCT
LEAST(
`owner_user_id`,
`viewer_user_id`
) AS first_user,
GREATEST(
`owner_user_id`,
`viewer_user_id`
) AS second_user
FROM
`mail`
``` | You can use Jeffs answer (distinct) but need to "order" the tuples in order to unidirect what you call a dialogue.
e.g.
```
SELECT distinct
if(owner_user_id>viewer_user_id,owner_user_id,viewer_user_id),
if(viewer_user_id>owner_user_id,viewer_user_id,owner_user_id)
FROM mail
```
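The normalise-then-deduplicate idea behind `LEAST`/`GREATEST` plus `DISTINCT`, sketched in Python with hypothetical user-id pairs:

```python
pairs = [(1, 2), (2, 1), (3, 1), (1, 3), (2, 4)]
# order each pair as (min, max) so both directions of a dialogue collapse together
dialogues = {(min(a, b), max(a, b)) for a, b in pairs}
print(sorted(dialogues))  # [(1, 2), (1, 3), (2, 4)]
```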
Rgds
R | Build dialogs list from mail database | [
"",
"mysql",
"sql",
""
] |
I have a simple `Employee` model that includes `firstname`, `lastname` and `middlename` fields.
On the admin side and likely elsewhere, I would like to display that as:
```
lastname, firstname middlename
```
To me the logical place to do this is in the model by creating a calculated field as such:
```
from django.db import models
from django.contrib import admin
class Employee(models.Model):
    lastname = models.CharField("Last", max_length=64)
    firstname = models.CharField("First", max_length=64)
    middlename = models.CharField("Middle", max_length=64)
    clocknumber = models.CharField(max_length=16)
    name = ''.join(
        [lastname.value_to_string(),
         ',',
         firstname.value_to_string(),
         ' ',
         middlename.value_to_string()])

    class Meta:
        ordering = ['lastname', 'firstname', 'middlename']

class EmployeeAdmin(admin.ModelAdmin):
    list_display = ('clocknumber', 'name')
    fieldsets = [("Name", {"fields": (("lastname", "firstname", "middlename"), "clocknumber")}),
                 ]

admin.site.register(Employee, EmployeeAdmin)
```
Ultimately what I think I need is to get the value of the name fields as strings. The error I am getting is `value_to_string() takes exactly 2 arguments (1 given)`. Value to string wants `self, obj`. I am not sure what `obj` means.
There must be an easy way to do this, I am sure I am not the first to want to do this.
Edit: Below is my code modified to Daniel's answer. The error I get is:
> ```
> django.core.exceptions.ImproperlyConfigured:
> EmployeeAdmin.list_display[1], 'name' is not a callable or an
> attribute of 'EmployeeAdmin' of found in the model 'Employee'.
> ```
```
from django.db import models
from django.contrib import admin
class Employee(models.Model):
lastname = models.CharField("Last", max_length=64)
firstname = models.CharField("First", max_length=64)
middlename = models.CharField("Middle", max_length=64)
clocknumber = models.CharField(max_length=16)
@property
def name(self):
return ''.join(
[self.lastname,' ,', self.firstname, ' ', self.middlename])
class Meta:
ordering = ['lastname','firstname', 'middlename']
class EmployeeAdmin(admin.ModelAdmin):
list_display = ('clocknumber','name')
fieldsets = [("Name", {"fields":(("lastname", "firstname", "middlename"), "clocknumber")}),
]
admin.site.register(Employee, EmployeeAdmin)
``` | Ok... Daniel Roseman's answer seemed like it should have worked. As is always the case, you find what you're looking for *after you post the question*.
From the [Django 1.5 docs](https://docs.djangoproject.com/en/1.5/topics/db/models/#model-methods) I found this example that worked right out of the box. Thanks to all for your help.
Here is the code that worked:
```
from django.db import models
from django.contrib import admin
class Employee(models.Model):
lastname = models.CharField("Last", max_length=64)
firstname = models.CharField("First", max_length=64)
middlename = models.CharField("Middle", max_length=64)
clocknumber = models.CharField(max_length=16)
def _get_full_name(self):
"Returns the person's full name."
return '%s, %s %s' % (self.lastname, self.firstname, self.middlename)
full_name = property(_get_full_name)
class Meta:
ordering = ['lastname','firstname', 'middlename']
class EmployeeAdmin(admin.ModelAdmin):
list_display = ('clocknumber','full_name')
fieldsets = [("Name", {"fields":(("lastname", "firstname", "middlename"), "clocknumber")}),
]
admin.site.register(Employee, EmployeeAdmin)
``` | That's not something you do as a field. Even if that syntax worked, it would only give the value when the class was defined, not at the time you access it. You should do this as a method, and you can use the `@property` decorator to make it look like a normal attribute.
```
@property
def name(self):
return ''.join(
[self.lastname,' ,', self.firstname, ' ', self.middlename])
```
`self.lastname` etc appear as just their values, so no need to call any other method to convert them. | How to add a calculated field to a Django model | [
"",
"python",
"django-models",
"django-admin",
""
] |
I need to calculate how many days there are between today and an employee's hire date (but I do not care about the year). That means if an employee was hired on 07/1/2012, I want the result to be today, 07/15/2013, minus 07/1/2013, that is 15 days. I do not need the hire year 2012.
I played around with dateadd and datediff but just did not get the correct result.
```
SELECT
Co Employee,
LastName,
FirstName,
dateadd(dd, DATEDIFF(dd,CURRENT_TIMESTAMP,0),HireDate)
FROM dbo.PREH
``` | This is a bit brute force, perhaps, but it seems to work. The idea is to add a number of years to the hiredate, then to check that against the current date. When it is greater, use one fewer years:
```
select (case when DATEADD(year, datediff(year, hiredate, getdate()), hiredate) < GETDATE()
then DATEDIFF(dd, DATEADD(year, datediff(year, hiredate, getdate()), hiredate), getdate())
else DATEDIFF(dd, DATEADD(year, datediff(year, hiredate, getdate()) - 1, hiredate), getdate())
end)
from preh;
```
The problem is that `datediff()` with `year` returns the number of times that the year boundary is crossed, not the number of years between two dates as a span. So, there is one year between 2012-12-30 and 2013-01-01, and there is one year between 2012-01-01 and 2013-12-31. | You could use modulus division:
```
SELECT DATEDIFF(DAY,'20120101',GETDATE())%365
```
The downside is that you're treating leap year the same as every other year, which you could handle with case logic.
In your code:
```
SELECT
Co Employee,
LastName,
FirstName,
DATEDIFF(DAY,HireDate,GETDATE())%365 AS DaysThisYear
FROM dbo.PREH
``` | Get the correct days in vacation accumulation report | [
"",
"sql",
""
] |
So lately I've been asking a few questions about more professional and pythonic style in Python, and despite being given great answers to my questions, I feel like I need to ask a much broader question.
In general, when writing utility functions (for a library, etc.) that deal more with side effects (file writes, dictionary definitions, etc.) than return values, it's very useful to return a status code to tell the calling function that it passed or failed.
In Python, there seem to be three ways to flag this:
Using a return value of -1 or 0 (C like) and using statements such as
```
if my_function(args) < 0:
fail condition
pass condition
```
or using a return value of True/False
```
if not my_function(args):
fail condition
pass condition
```
or using exceptions, with a plain `return`/`return None` on success (and exiting on unknown errors)
```
try:
my_function(args)
except ExpectedOrKnownExceptionOrError:
fail condition
pass condition
```
Which of these is best? Most correct? Preferred? I understand all work, and there isn't much technical advantage of one over the other (except perhaps the overhead of exception handling). | Don't return something to indicate an error. Throw an exception. Definitely don't catch an exception and turn it into a return code. | When an exception is raised and not caught, [Python will exit with an error (non-zero) code for you](https://stackoverflow.com/a/5456201/691859). If you are defining your own exception types, you should [override `sys.excepthook()`](https://stackoverflow.com/a/16787722/691859) to provide you with the exit codes you desire, since they will by default use the exit code `1`.
If you want to specify your exit code for some weird reason, use `sys.exit()` and the [`errno`](https://stackoverflow.com/a/286444/691859) module to get the standard exit codes so that you will use the appropriate one. You can also use the [`traceback`](https://stackoverflow.com/a/6720174/691859) module to get the stack traceback (and it seems the correct error code as well, according to that answer). Personally, I don't prefer this approach either.
The approach I recommend is to *not* catch the exception; just let it happen. If your program can continue to operate after an exception occurs, you ought to catch it and handle it appropriately. However, if your program cannot continue, you should let the exception be. This is particularly useful for re-using your program in other Python modules, because you will be able to catch the exception if you so desire then. | Python Return Codes | [
"",
"python",
"styles",
""
] |
```
from cs1graphics import *
from math import sqrt
numLinks = 50
restingLength = 20.0
totalSeparation = 630.0
elasticityConstant = 0.005
gravityConstant = 0.110
epsilon = 0.001
def combine(A,B,C=(0,0)):
return (A[0] + B[0] + C[0], A[1] + B[1] + C[1])
def calcForce(A,B):
dX = (B[0] - A[0])
dY = (B[1] - A[1])
distance = sqrt(dX*dX+dY*dY)
if distance > restingLength:
stretch = distance - restingLength
forceFactor = stretch * elasticityConstant
else:
forceFactor = 0
return (forceFactor * dX, forceFactor * dY) #return a tuple
def drawChain(chainData, chainPath, theCanvas):
for k in range(len(chainData)):
chainPath.setPoint(Point(chainData[k][0], chainData[k][1]),k)
theCanvas.refresh() #refresh canvas
chain = [] #chain here
for k in range(numLinks + 1):
X = totalSeparation * k / numLinks
chain.append( (X,0.0) )
paper = Canvas(totalSeparation, totalSeparation)
paper.setAutoRefresh(False)
curve = Path()
for p in chain:
curve.addPoint(Point(p[0], p[1]))
paper.add(curve)
graphicsCounter = 100
somethingMoved = True
while somethingMoved:
somethingMoved = False
oldChain = list(chain) #oldChain here
for k in range(1, numLinks):
gravForce = (0, gravityConstant)
leftForce = calcForce(oldChain[k], oldChain[k-1])
rightForce = calcForce(oldChain[k], oldChain[k+1])
adjust = combine(gravForce, leftForce, rightForce)
if abs(adjust[0]) > epsilon or abs(adjust[1]) > epsilon:
somethingMoved = True
chain[k] = combine(oldChain[k], adjust)
graphicsCounter -= 1
if graphicsCounter == 0:
drawChain(chain, curve, paper)
graphicsCounter = 100
curve.setBorderWidth(2)
drawChain(chain, curve, paper)
```
I was told that `list([]) == []`. So why is this code doing
`oldChain = list(chain)` instead of `oldChain = chain`
Isn't it the same thing, so it does not matter either way? | `list(chain)` returns a shallow copy of `chain`, it is equivalent to `chain[:]`.
If you want a shallow copy of the list then use `list()`; it is also sometimes used to get all the values from an iterator.
Difference between `y = list(x)` and `y = x`:
---
**Shallow copy:**
```
>>> x = [1,2,3]
>>> y = x #this simply creates a new referece to the same list object
>>> y is x
True
>>> y.append(4) # appending to y, will affect x as well
>>> x,y
([1, 2, 3, 4], [1, 2, 3, 4]) #both are changed
#shallow copy
>>> x = [1,2,3]
>>> y = list(x) #y is a shallow copy of x
>>> x is y
False
>>> y.append(4) #appending to y won't affect x and vice-versa
>>> x,y
([1, 2, 3], [1, 2, 3, 4]) #x is still same
```
---
**Deepcopy:**
Note that if `x` contains mutable objects then just `list()` or `[:]` are not enough:
```
>>> x = [[1,2],[3,4]]
>>> y = list(x) #outer list is different
>>> x is y
False
```
But inner objects are still references to the objects in x:
```
>>> x[0] is y[0], x[1] is y[1]
(True, True)
>>> y[0].append('foo') #modify an inner list
>>> x,y #changes can be seen in both lists
([[1, 2, 'foo'], [3, 4]], [[1, 2, 'foo'], [3, 4]])
```
As the outer lists are different then modifying x will not affect y and vice-versa
```
>>> x.append('bar')
>>> x,y
([[1, 2, 'foo'], [3, 4], 'bar'], [[1, 2, 'foo'], [3, 4]])
```
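For completeness, a small script-style sketch showing that a deep copy duplicates the inner objects too, so the two lists become fully independent:

```python
import copy

x = [[1, 2], [3, 4]]
y = copy.deepcopy(x)   # inner lists are copied as well

y[0].append('foo')     # modifying y's inner list...
print(x[0])            # ...leaves x untouched: [1, 2]
print(y[0])            # [1, 2, 'foo']
```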
To handle this use `copy.deepcopy`. | It is true that `list([])` is functionally equivalent to `[]`, both creating a new empty list.
But `x = list(y)` is not the same as `x = y`. The former makes a shallow copy, and the latter creates a new reference to the existing list.
Note that `list([])` is inefficient -- it creates a new empty list (by doing `[]`), then copies it, resulting with another empty list (by doing `list(...)`), then deallocates the original, unreferenced, list. | Python list([]) and [] | [
"",
"python",
"arrays",
"list",
""
] |
I have a pandas dataframe in the following format:
```
df = pd.DataFrame([
[1.1, 1.1, 1.1, 2.6, 2.5, 3.4,2.6,2.6,3.4,3.4,2.6,1.1,1.1,3.3],
list('AAABBBBABCBDDD'),
[1.1, 1.7, 2.5, 2.6, 3.3, 3.8,4.0,4.2,4.3,4.5,4.6,4.7,4.7,4.8],
['x/y/z','x/y','x/y/z/n','x/u','x','x/u/v','x/y/z','x','x/u/v/b','-','x/y','x/y/z','x','x/u/v/w'],
['1','3','3','2','4','2','5','3','6','3','5','1','1','1']
]).T
df.columns = ['col1','col2','col3','col4','col5']
```
df:
```
col1 col2 col3 col4 col5
0 1.1 A 1.1 x/y/z 1
1 1.1 A 1.7 x/y 3
2 1.1 A 2.5 x/y/z/n 3
3 2.6 B 2.6 x/u 2
4 2.5 B 3.3 x 4
5 3.4 B 3.8 x/u/v 2
6 2.6 B 4 x/y/z 5
7 2.6 A 4.2 x 3
8 3.4 B 4.3 x/u/v/b 6
9 3.4 C 4.5 - 3
10 2.6 B 4.6 x/y 5
11 1.1 D 4.7 x/y/z 1
12 1.1 D 4.7 x 1
13 3.3 D 4.8 x/u/v/w 1
```
I want to get the count for each group, like the following. Expected output:
```
col5 col2 count
1 A 1
D 3
2 B 2
etc...
```
How do I get my expected output? And how do I find the largest count for each 'col2' value? | Following @Andy's answer, you can do the following to solve your second question:
```
In [56]: df.groupby(['col5','col2']).size().reset_index().groupby('col2')[[0]].max()
Out[56]:
0
col2
A 3
B 2
C 1
D 3
``` | You are looking for [`size`](http://pandas.pydata.org/pandas-docs/stable/groupby.html#aggregation):
```
In [11]: df.groupby(['col5', 'col2']).size()
Out[11]:
col5 col2
1 A 1
D 3
2 B 2
3 A 3
C 1
4 B 1
5 B 2
6 B 1
dtype: int64
```
---
To get the same answer as waitingkuo (the "second question"), but slightly cleaner, is to groupby the level:
```
In [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max()
Out[12]:
col2
A 3
B 2
C 1
D 3
dtype: int64
``` | Pandas DataFrame Groupby two columns and get counts | [
"",
"python",
"pandas",
"dataframe",
"group-by",
"pivot-table",
""
] |
I have a pandas dataframe like this:
```
Balance Jan Feb Mar Apr
0 9.724135 0.389376 0.464451 0.229964 0.691504
1 1.114782 0.838406 0.679096 0.185135 0.143883
2 7.613946 0.960876 0.220274 0.788265 0.606402
3 0.144517 0.800086 0.287874 0.223539 0.206002
4 1.332838 0.430812 0.939402 0.045262 0.388466
```
I would like to group the rows by figuring out if the values from Jan through to Apr are monotonically decreasing (as in rows indexed 1 and 3) or not, and then add up the balances for each group, i.e. in the end I would like to end up with two numbers (1.259299 for the decreasing time series, and 18.670919 for the others).
I think if I could add a column "is decreasing" containing booleans I could do the sums using pandas' groupby, but how would I create this column?
Thanks,
Anne | You could use one of the `is_monotonic` functions from algos:
```
In [10]: months = ['Jan', 'Feb', 'Mar', 'Apr']
In [11]: df.loc[:, months].apply(lambda x: pd.algos.is_monotonic_float64(-x)[0],
axis=1)
Out[11]:
0 False
1 True
2 False
3 True
4 False
dtype: bool
```
The `is_monotonic` function checks whether an array is monotonically *increasing*, hence the `-x` to test for decreasing instead.
(This seems substantially faster than Tom's solution, even using the small DataFrame provided.) | ```
months = ['Jan', 'Feb', 'Mar', 'Apr']
```
Transpose so that we can use the `diff` method (which doesn't take an axis argument).
We fill in the first row (January) with 0. Otherwise it's `NaN`.
```
In [77]: df[months].T.diff().fillna(0) <= 0
Out[77]:
0 1 2 3 4
Jan True True True True True
Feb False True True True False
Mar True True False True True
Apr False True True True False
```
To check if it's monotonically decreasing, use the .all() method. By default this goes over axis 0, the rows (months).
```
In [78]: is_decreasing = (df[months].T.diff().fillna(0) <= 0).all()
In [79]: is_decreasing
Out[79]:
0 False
1 True
2 False
3 True
4 False
dtype: bool
In [80]: df['is_decreasing'] = is_decreasing
In [81]: df
Out[81]:
Balance Jan Feb Mar Apr is_decreasing
0 9.724135 0.389376 0.464451 0.229964 0.691504 False
1 1.114782 0.838406 0.679096 0.185135 0.143883 True
2 7.613946 0.960876 0.220274 0.788265 0.606402 False
3 0.144517 0.800086 0.287874 0.223539 0.206002 True
4 1.332838 0.430812 0.939402 0.045262 0.388466 False
```
And like you suggested, we can groupby `is_decreasing` and sum:
```
In [83]: df.groupby('is_decreasing')['Balance'].sum()
Out[83]:
is_decreasing
False 18.670919
True 1.259299
Name: Balance, dtype: float64
```
It's times like these when I love pandas. | Pandas dataframe: Check if data is monotonically decreasing | [
"",
"python",
"pandas",
""
] |
I have a Linux `centos 6.4` server with `mysql5.6`.
I have a cron job for making hourly MySQL database backups, like this (it's in Python):
```
os.popen("mysqldump -u %s --password=%s -h %s -e --opt --skip-lock-tables --skip-extended-insert -c %s | gzip -c > %s.gz" % (username, password, hostname, database, filename))
```
The problem is that very little data gets changed, but I have to back up all databases 24 times a day, which takes too much space.
What is the best alternative to that?
Is there any way to have incremental backups, so that I take one full backup of the database every week and then only incremental backups?
**I want to have separate files for separate databases, not all of MySQL,** so that if I need to restore a single database I can do that as well. | One hacky way may be: once you have dumped the full database to a file, diff it against the weekly backup and store the patch on disk. When you want to retrieve, just apply the diff to the weekly backup and get the full db.
To store
```
mysqldump -u $USERNAME --password=$PASSWORD -h $HOSTNAME -e --opt --skip-lock-tables --skip-extended-insert -c $DATABASE >hourlyFile
diff weeklyFile hourlyFile >hourlyFile.patch
rm hourlyFile
```
To retrieve:
```
cp weeklyFile hourlyFile
patch hourlyFile <hourlyFile.patch
```
I am not really aware of what kind of output mysqldump gives. If it's text, the above would work. Otherwise bsdiff may help you here: <http://www.daemonology.net/bsdiff/> | Since your concern is space rather than speed, you could always go for a pattern something like:
Only keep the hourly backup for the last 24 hours.
Keep the midnight backup for each weekday as that day's backup.
Weekly - archive backups to offline storage.
A lot of this could almost certainly be mostly automated by some careful file name choices - e.g. Midnight backups being called Backup\_Mon, etc., other hours Backup\_00\_00, Backup\_01\_00, etc.
You could go for an even smaller latest\_backup, prev\_backup, prev\_prev\_backup by simply renaming the files at the start of each backup.
You could also consider using zip, (which python can do for you), to reduce the file size. | How can i create in incremental backups of mysql databases | [
"",
"python",
"mysql",
"linux",
"backup",
""
] |
In the book Programming Collective Intelligence there is a regular expression,
```
splitter = re.compile('\\W*')
```
From context it looks like this matches any non-alphanumeric character. But I am confused because it seems like it matches a backslash, then one or more W's. What does it really match? | Your regex is equivalent to `\W*`. It matches 0 or more non-alphanumeric characters.
Actually, you are using a Python string literal instead of a raw string. In a Python string literal, to match a literal backslash you need to escape the backslash - `\\` - as a backslash has a special meaning there. And then for the regex, you need to escape both the backslashes, making it `\\\\`.
So, to match `\` followed by 0 or more `W`, you would need `\\\\W*` in a string literal. You can simplify this by using a raw string. Where a `\\` will match a literal `\`. That's because, backslashes are not handled in any special way when used inside a *raw string*.
The below example will help you understand this:
```
>>> s = "\WWWW$$$$"
# Without raw string
>>> splitter = re.compile('\\W*') # Match non-alphanumeric characters
>>> re.findall(splitter, s)
['\\', '', '', '', '', '$$$$', '']
>>> splitter = re.compile('\\\\W*') # Match `\` followed by 0 or more `W`
>>> re.findall(splitter, s)
['\\WWWW']
# With raw string
>>> splitter = re.compile(r'\W*') # Same as first one. You need a single `\`
>>> re.findall(splitter, s)
['\\', '', '', '', '', '$$$$', '']
>>> splitter = re.compile(r'\\W*') # Same as 2nd. Two `\\` needed.
>>> re.findall(splitter, s)
['\\WWWW']
``` | The first backslash is there just as an escape character, for programming languages that don't have a good string representation of regular expressions (for example: Java). In Python you can do better, this is equivalent:
```
r'\W*'
```
Notice the `r` at the beginning (a [raw string](http://docs.python.org/2/reference/lexical_analysis.html#string-literals)), that renders unnecessary the use of the first `\` escape character. The second `\` is unavoidable, that's part of the character class `\W` | Confused about regular expression | [
"",
"python",
"regex",
""
] |
I have a dataframe in Pandas, I would like to sort its columns (i.e. get a new dataframe, or a view) according to the mean value of its columns (or e.g. by their std value). The documentation talks about [sorting by label or value](http://pandas.pydata.org/pandas-docs/stable/basics.html#sorting-by-index-and-value), but I could not find anything on custom sorting methods.
How can I do this? | You can use the [`mean`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html) DataFrame method and the Series [`sort_values`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sort_values.html) method:
```
In [11]: df = pd.DataFrame(np.random.randn(4,4), columns=list('ABCD'))
In [12]: df
Out[12]:
A B C D
0 0.933069 1.432486 0.288637 -1.867853
1 -0.455952 -0.725268 0.339908 1.318175
2 -0.894331 0.573868 1.116137 0.508845
3 0.661572 0.819360 -0.527327 -0.925478
In [13]: df.mean()
Out[13]:
A 0.061089
B 0.525112
C 0.304339
D -0.241578
dtype: float64
In [14]: df.mean().sort_values()
Out[14]:
D -0.241578
A 0.061089
C 0.304339
B 0.525112
dtype: float64
```
Then you can reorder the columns using [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html):
```
In [15]: df.reindex(df.mean().sort_values().index, axis=1)
Out[15]:
D A C B
0 -1.867853 0.933069 0.288637 1.432486
1 1.318175 -0.455952 0.339908 -0.725268
2 0.508845 -0.894331 1.116137 0.573868
3 -0.925478 0.661572 -0.527327 0.819360
```
---
*Note: In earlier versions of pandas, `sort_values` used to be `order`, but `order` was deprecated as part of 0.17 so to be more consistent with the other sorting methods. Also, in earlier versions, one had to use `reindex_axis` rather than `reindex`.* | You can use [assign](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html) to create a variable, use it to sort values and drop it in the same line of code.
```
df = pd.DataFrame(np.random.randn(4,4), columns=list('ABCD'))
df.assign(m=df.mean(axis=1)).sort_values('m').drop('m', axis=1)
``` | Pandas: Sorting columns by their mean value | [
"",
"python",
"pandas",
""
] |
As I learn Python (Specifically Jython, if the difference is important here) I'm writing a simple terminal game that uses skills and dice rolls based on the level of those skills to determine success/fail at an attempted action. I hope to use this code eventually in a larger game project.
Under a stress test, the code uses .5GB of ram and seems to take quite a while to get a result (~50 seconds). It could just be that the task is really that intensive but as a noob I'm betting I'm just doing things inefficiently. Could anyone give some tips on both:
* how to improve the efficiency of this code
* and how to write this code in a more pythonic way?
```
import random
def DiceRoll(maxNum=100,dice=2,minNum=0):
return sum(random.randint(minNum,maxNum) for i in xrange(dice))
def RollSuccess(max):
x = DiceRoll()
if(x <= (max/10)):
return 2
elif(x <= max):
return 1
elif(x >= 100-(100-max)/10):
return -1
return 0
def RollTesting(skill=50,rolls=10000000):
cfail = 0
fail = 0
success = 0
csuccess = 0
for i in range(rolls+1):
roll = RollSuccess(skill)
if(roll == -1):
cfail = cfail + 1
elif(roll == 0):
fail = fail + 1
elif(roll == 1):
success = success + 1
else:
csuccess = csuccess + 1
print "CFails: %.4f. Fails: %.4f. Successes: %.4f. CSuccesses: %.4f." % (float(cfail)/float(rolls), float(fail)/float(rolls), float(success)/float(rolls), float(csuccess)/float(rolls))
RollTesting()
```
EDIT - Here's my code now:
```
from random import random
def DiceRoll():
return 50 * (random() + random())
def RollSuccess(suclim):
x = DiceRoll()
if(x <= (suclim/10)):
return 2
elif(x <= suclim):
return 1
elif(x >= 90-suclim/10):
return -1
return 0
def RollTesting(skill=50,rolls=10000000):
from time import clock
start = clock()
cfail = fail = success = csuccess = 0.0
for _ in xrange(rolls):
roll = RollSuccess(skill)
if(roll == -1):
cfail += 1
elif(roll == 0):
fail += 1
elif(roll == 1):
success += 1
else:
csuccess += 1
stop = clock()
print "Last time this statement was manually updated, DiceRoll and RollSuccess totaled 12 LOC."
print "It took %.3f seconds to do %d dice rolls and calculate their success." % (stop-start,rolls)
print "At skill level %d, the distribution is as follows" % (skill)
print "CFails: %.4f. Fails: %.4f. Successes: %.4f. CSuccesses: %.4f." % (cfail/rolls, fail/rolls, success/rolls, csuccess/rolls)
RollTesting(50)
```
And the output:
```
Last time this statement was manually updated, DiceRoll and RollSuccess totaled 12 LOC.
It took 6.558 seconds to do 10000000 dice rolls and calculate their success.
At skill level 50, the distribution is as follows
CFails: 0.0450. Fails: 0.4548. Successes: 0.4952. CSuccesses: 0.0050.
```
Noticeably this isn't equivalent because I changed the random calculation enough to be noticeably different output (the original was supposed to be 0-100, but I forgot to divide by the amount of dice). The mem usage looks to be ~.2GB now. Also the previous implementation couldn't do 100mil tests, I've ran this one at up to 1bil tests (it took 8 minutes, and the mem usage doesn't seem significantly different). | You're doing 10 million loops. Just the looping costs are probably 10% of your total time. Then, if the whole loop doesn't fit into cache at once, it may slow things down even more.
Is there a way to avoid doing all those loops in Python? Yes, you can do them in Java.
The obvious way to do that is to actually write and call Java code. But you don't have to do that.
---
A list comprehension, or a generator expression driven by a native builtin, will also do the looping in Java. So, on top of being more compact and simpler, this should also be faster:
```
attempts = (RollSuccess(skill) for i in xrange(rolls))
counts = collections.Counter(attempts)
cfail, fail, success, csuccess = counts[-1], counts[0], counts[1], counts[2]
```
Unfortunately, while this does seem to be faster in Jython 2.7b1, it's actually slower in 2.5.2.
---
Another way to speed up loops is to use a vectorization library. Unfortunately, I don't know what Jython people use for this, but in CPython with `numpy`, it looks something like this:
```
def DiceRolls(count, maxNum=100, dice=2, minNum=0):
return sum(np.random.random_integers(minNum, maxNum, count) for die in range(dice))
def RollTesting(skill=50, rolls=10000000):
dicerolls = DiceRolls(rolls)
csuccess = np.count_nonzero(dicerolls <= skill/10)
success = np.count_nonzero((dicerolls > skill/10) & (dicerolls <= skill))
fail = np.count_nonzero((dicerolls > skill) & (dicerolls <= 100-(100-skill)/10))
cfail = np.count_nonzero((dicerolls > 100-(100-skill)/10)
```
This speeds things up by a factor of about 8.
I suspect that in Jython things aren't nearly as nice as with `numpy`, and you're expected to import Java libraries like the Apache Commons numerics or PColt and figure out the Java-vs.-Python issues yourself… but better to search and/or ask than to assume.
---
Finally, you may want to use a different interpreter. CPython 2.5 or 2.7 doesn't seem to be much different from Jython 2.5 here, but it *does* mean you can use `numpy` to get an 8x improvement. PyPy 2.0, meanwhile, is 11x faster, with no changes.
Even if you need to do your main program in Jython, if you've got something slow enough to dwarf the cost of starting a new process, you can move it to a separate script that you run via `subprocess`. For example:
subscript.py:
```
# ... everything up to the RollTesting's last line
return csuccess, success, fail, cfail
skill = int(sys.argv[1]) if len(sys.argv) > 1 else 50
rolls = int(sys.argv[2]) if len(sys.argv) > 2 else 10000000
csuccess, success, fail, cfail = RollTesting(skill, rolls)
print csuccess
print success
print fail
print cfail
```
mainscript.py:
```
def RollTesting(skill, rolls):
results = subprocess32.check_output(['pypy', 'subscript.py',
str(skill), str(rolls)])
csuccess, success, fail, cfail = (int(line.rstrip()) for line in results.splitlines())
print "CFails: %.4f. Fails: %.4f. Successes: %.4f. CSuccesses: %.4f." % (float(cfail)/float(rolls), float(fail)/float(rolls), float(success)/float(rolls), float(csuccess)/float(rolls))
```
(I used the [`subprocess32`](http://code.google.com/p/python-subprocess32/) module to get the backport of `check_output`, which isn't available in Python 2.5, Jython or otherwise. You could also just [borrow the source](http://hg.python.org/cpython/file/2.7/Lib/subprocess.py#l544) for `check_output` from 2.7's implementation.)
Note that Jython 2.5.2 has some serious bugs in `subprocess` (which will be fixed in 2.5.3 and 2.7.0, but that doesn't help you today). But fortunately, they don't affect this code.
In a quick test, the overhead (mostly spawning a new interpreter process, but there's also marshalling the parameters and results, etc.) added more than 10% to the cost, meaning I only got a 9x improvement instead of 11x. And that will be a little worse on Windows. But not enough to negate the benefits for any script that's taking on the order of a minute to run.
---
Finally, if you're doing more complicated stuff, you can use [execnet](http://codespeak.net/execnet/), which wraps up Jython<->CPython<->PyPy to let you use whatever's best in each part of the code without having to do all that explicit `subprocess` stuff. | Well, one thing, use [`xrange`](http://docs.python.org/2/library/functions.html#xrange) instead of [`range`](http://docs.python.org/2/library/functions.html#range). `range` allocates an array with an element for each of the 10 million digits whereas `xrange` creates a generator. That's going to help out memory a bunch, and probably speed too. | Inefficient Random Dice Roll in Python/Jython | [
"",
"python",
"jython",
""
] |
I am trying to open a log file every 15 minutes and see if the file contains a 'finalize' string. I know how to check the file for the string, but I do not know if there is an easy way to have the script look every 15 minutes.
Is there a module of some sort that will allow me to do this?
**[EDIT]** I want this because I am writing a queue program. That is once one job calls finalize, the script will load up another job and then monitor that until it finishes, etc. | The [`sched`](http://docs.python.org/3/library/sched.html) module will let you do this. So will the [`Timer`](http://docs.python.org/2/library/threading.html#timer-objects) class in the `threading` module.
However, either of those is overkill for what you want.
The easy, and generally best, way to do it is to use your system's scheduler (Scheduled Tasks on Windows, Launch Services on mac, `cron` or a related program on most other systems) to run your program every 15 minutes.
Alternatively, just use `time.sleep(15*60)`, and check whether `time.time()` is actually 15\*60 seconds later than last time through the loop.
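As a rough sketch of that `time.sleep` approach applied to the question (the log path and poll interval are placeholder values, not from the question):

```python
import time

LOG_PATH = 'job.log'       # placeholder path
POLL_SECONDS = 15 * 60     # check every 15 minutes

def job_finished(path=LOG_PATH):
    """Return True once the log contains the 'finalize' marker."""
    try:
        with open(path) as f:
            return 'finalize' in f.read()
    except IOError:        # the log may not exist yet
        return False

def wait_for_job(path=LOG_PATH, poll=POLL_SECONDS):
    while not job_finished(path):
        time.sleep(poll)
    # at this point the job called finalize; load the next job here

# wait_for_job()  # blocks until the current job writes 'finalize'
```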
---
One advantage of triggering your script from outside is that it makes it very easy to change the trigger. For example, you could change it to trigger every time the file has been modified, instead of every 15 minutes, without changing your script at all. (The exact way you do that, of course, depends on your platform; LaunchServices, famd, inotify, FindFirstChangeNotification, etc. all work very differently. But every modern platform has some way to do it.) | Take a look at [python-tail](https://github.com/kasun/python-tail) module:
```
import tail
def print_line(txt):
print 'finalize' in txt
t = tail.Tail('/var/log/syslog')
t.register_callback(print_line)
t.follow(s=60*15)
``` | Check a log file every 15 minutes with Python | [
"",
"python",
"time",
"scheduling",
""
] |
I'm creating this program and one of its functions is to output a list that is initialized in the constructor. But, what's happening is that it is outputting the memory location in hexadecimal or something and I don't know why.
I have two classes and a run class:
```
class Person :
def __init__(self, name, ID, age, location, destination):
self.name = name
self.ID = ID
self.age = age
self.location = location
self.destination = destination
def introduce_myself(self):
print("Hi, my name is " + self.name + " , my ID number is " + str(self.ID) + " I am " + str(self.age) + " years old")
def get_route(self):
return self.location + self.destination
def add2bus(self, Bus):
if Person.get_route(self) == Bus.get_route() :
Bus.get_on(Bus)
else :
print("not compatible")
def get_name(self):
print(self.name)
import People
class Bus :
def __init__(self, name, capacity, location, destination):
self.bus_name = name
self.bus_capacity = capacity
self.bus_location = location
self.bus_destination = destination
self.seats = []
self.people = []
def Bus_properties(self):
print(self.bus_name + ", " + str(self.bus_capacity) + ", " + self.bus_location + ", " + self.bus_destination)
def print_list(self):
a = self.people
print(self.people)
def get_route(self):
return self.bus_location + self.bus_destination
def get_on(self, Person):
if len(self.people) < 20: #Loop through all the seats
self.people.append(Person)
else:
print('fulll')
def get_name(self):
print(self.name)
import People
import Transport
C2C = Transport.Bus("C2C", 30, "Ithaca", "New York")
Fred = People.Person("Fred", 12323, 13, "Ithaca", "New York")
Jennifer = People.Person("Jennifer", 111117, 56, "Ithaca", "New York")
Fred.add2bus(C2C)
Jennifer.add2bus(C2C)
```
I want to create a while loop that takes the length of the peoplelist and with the condition while x < len(C2C.people) then it appends all the names of the people on that bus to a list y
like this...
```
x = 0
y = []
while x < len(C2C.people) :
y.append((C2C.people[x].get_name()))
x = x + 1
print(y)
```
but I'm getting this as a result:
Fred
[None]
Jennifer
[None, None] | First, you're sending the bus as the person in the add2bus method.
```
def add2bus(self, Bus):
if Person.get_route(self) == Bus.get_route() :
Bus.get_on(Bus)
else :
print("not compatible")
```
So this will take C2C as the bus object, and then call C2C.get\_on(C2C)
Instead you want to do:
```
Bus.get_on(self)
```
Then to get the name of the person, you can do it like this.
```
C2C.people[0].get_name().
```
Calling this will print the passenger's name, but what you want to do is get the name of the passenger back as a string, which is what return does.
So in the get\_name() method of people instead of doing print(self.name), return it. Now the statement above will become the string self.name.
Like this:
```
def get_name(self):
return self.name
```
When you want to do your loop, it should work as you expect now.
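For instance, the whole collection loop can be condensed to a list comprehension once `get_name()` returns instead of prints. This is a trimmed-down, self-contained sketch of the same idea, so the class bodies here are only illustrative:

```python
class Person:
    def __init__(self, name):
        self.name = name

    def get_name(self):
        # return, don't print, so callers can collect the value
        return self.name


class Bus:
    def __init__(self):
        self.people = []

    def get_on(self, person):
        if len(self.people) < 20:
            self.people.append(person)


c2c = Bus()
c2c.get_on(Person("Fred"))
c2c.get_on(Person("Jennifer"))

# equivalent to the while-loop version, but shorter
names = [p.get_name() for p in c2c.people]
print(names)  # ['Fred', 'Jennifer']
```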
If you want me to go into any more detail, let me know and I'll update my answer. | When you use the `print()` function (or statement, pre 3.0), Python asks the objects you are printing to convert themselves to strings via the `__str__` method. Since `object` defines this method for you, it always works; but the predefined version is not very helpful (in the way you are seeing).
Provide your own. It takes no arguments and *must* return a string:
```
class Foo:
bar = 'baz'
def __str__(self):
        return "Friendly Foo Description: " + self.bar
``` | How to NOT print out the memory location python | [
"",
"python",
""
] |
I have some code which tots up a set of selected values. I would like to define an empty set and add to it, but `{}` keeps turning into a dictionary. I have found if I populate the set with a dummy value I can use it, but it's not very elegant. Can someone tell me the proper way to do this? Thanks.
```
inversIndex = {'five': {1}, 'ten': {2}, 'twenty': {3},
'two': {0, 1, 2}, 'eight': {2}, 'four': {1},
'six': {1}, 'seven': {1}, 'three': {0, 2},
'nine': {2}, 'twelve': {2}, 'zero': {0, 1, 3},
'eleven': {2}, 'one': {0}}
query = ['four', 'two', 'three']
def orSearch(inverseIndex, query):
b = [ inverseIndex[c] for c in query ]
x = {'dummy'}
for y in b:
{ x.add(z) for z in y }
x.remove('dummy')
return x
orSearch(inverseIndex, query)
```
> {0, 1, 2} | You can just construct a set:
```
>>> s = set()
```
will do the job. | The "proper" way to do it:
```
myset = set()
```
The `{...}` notation cannot be used to initialize an empty set | Creating an empty set | [
"",
"python",
""
] |
I have written a list to a file; how can I get it back as the original list?
list looks like this
> ['82294', 'ABDUL', 'NAVAS', 'B', 'M', 'MSCS', 'CUKE', '30',
> 'Kasargod', 'CU', 'Kerala', 'Online', 'PG-QS-12', '15', 'June,',
> '2013', '12.00', 'Noon', '-', '02.00', 'PM\n', '29']
>
> ['82262', 'ABDUL', 'SHAFWAN', 'T', 'H', 'MSCS', 'CUKE', '30',
> 'Kasargod', 'CU', 'Kerala', 'Online', 'PG-QS-12', '15', 'June,',
> '2013', '12.00', 'Noon', '-', '02.00', 'PM\n', '29']
when I read the file, it comes back as a list of strings,
for eg:
consider first list:
var[0][0] should be 82294 not '
I am a Python noob. | You can read the file line by line. For each line you can then `eval` it (or, more safely, `ast.literal_eval` it), or use `json.loads` if the lines are valid JSON. | Why couldn't you just do it by hand? You'll need to either be sure of the safety of the string or adapt the list comprehension to your needs.
```
foo = "['4','8','15','16','23','42']"
bar = [token.strip("'") for token in foo[1:-1].split(',')]
``` | Get array list from file | [
"",
"python",
"python-2.7",
""
] |
I have been beating my head against a wall over this problem. I create a list and make 4 copies, only one of which shares the same memory index. If I change the original list, it somehow changes 3 of those copies as well, 2 of which have a different memory index. Only if I make a list using the same command as the original, am I able to create a list that is not impacted by changes to the original. How is this possible? Here is the output from my console:
```
>>> orig=[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
>>> id(orig)
151498220
>>> copy1=orig #same index
>>> id(copy1)
151498220
>>> copy2=orig[:] #different index
>>> id(copy2)
151498348
>>> copy3=list(orig) #different index
>>> id(copy3)
151503020
>>> copy4=[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
>>> id(copy4)
151498636
>>> orig[0][1]=34
>>> copy1
[[0, 34, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]] #expected, same memory index
>>> copy2
[[0, 34, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]] #WTF?!?!?
>>> copy3
[[0, 34, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]] #ARGH!!!
>>> copy4
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]] #workaround?
>>> id(orig)
151498220
>>> id(copy1)
151498220
>>> id(copy2)
151498348
>>> id(copy3)
151503020
>>> id(copy4)
151498636
```
The memory indices did not change and yet the lists were altered. Only copy1 should have changed as it has the same memory index as orig. | That's because you are just creating a shallow copy. You need to create a deep copy instead.
As per [copy](http://docs.python.org/2/library/copy.html) module doc:
> * A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in
> the original.
> * A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
You can verify it by comparing the id of inner list:
```
>>> orig=[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
>>> id(orig)
151498220
>>> copy2=orig[:] #different index
>>> id(copy2)
151498348
>>> id(copy2[0]) == id(orig[0]) # inner list have same id
True
```
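The same shallow behaviour is what `copy.copy` from the standard library gives you, so it is no fix either: only the outer list is new, the inner lists stay shared.

```python
import copy

orig = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
shallow = copy.copy(orig)  # equivalent to orig[:] here

assert shallow is not orig    # new outer list...
assert shallow[0] is orig[0]  # ...but the inner lists are shared

orig[0][1] = 34
print(shallow[0])  # [0, 34, 0, 0] -- the change shows through
```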
You can create a `deepcopy` using [`copy.deepcopy(x)`](http://docs.python.org/2/library/copy.html#copy.deepcopy):
```
>>> import copy
>>>
>>> copy3 = copy.deepcopy(orig)
>>>
>>> id(copy3[0]) == id(orig[0]) # inner list have different id
False
>>> orig[0][3] = 34
>>>
>>> orig
[[0, 34, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
>>> copy3
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]]
``` | Your list is a list of [*names*](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables), not a list of lists as you are thinking of it. When you make a copy of the list using any of the methods you list (slicing, creating a new list based on the old one, etc) you make a new outer list, but the names in the new list reference the same internal lists as the names in the old one.
```
# One through three are all examples of:
first_list, second_list, third_list = [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 123, 0]
original = [first_list, second_list, third_list]
another_list = original[:]
# We do indeed have another list
assert id(original) != id(another_list)
# But the *references* in the list are pointing at the same underlying child list
assert id(original[0]) == id(another_list[0])
``` | Changing a value in one list changes the values in another list with a different memory ID | [
"",
"python",
"python-2.7",
""
] |
I'm wondering if there is a simpler, memory efficient way to select a subset of rows and columns from a pandas DataFrame.
For instance, given this dataframe:
```
df = DataFrame(np.random.rand(4,5), columns = list('abcde'))
print df
a b c d e
0 0.945686 0.000710 0.909158 0.892892 0.326670
1 0.919359 0.667057 0.462478 0.008204 0.473096
2 0.976163 0.621712 0.208423 0.980471 0.048334
3 0.459039 0.788318 0.309892 0.100539 0.753992
```
I want only those rows in which the value for column 'c' is greater than 0.5, but I only need columns 'b' and 'e' for those rows.
This is the method that I've come up with - perhaps there is a better "pandas" way?
```
locs = [df.columns.get_loc(_) for _ in ['a', 'd']]
print df[df.c > 0.5][locs]
a d
0 0.945686 0.892892
```
My final goal is to convert the result to a numpy array to pass into an sklearn regression algorithm, so I will use the code above like this:
```
training_set = array(df[df.c > 0.5][locs])
```
... and that peeves me since I end up with a huge array copy in memory. Perhaps there's a better way for that too? | `.loc` accepts row and column selectors simultaneously (as do `.ix`/`.iloc`, FYI)
This is done in a single pass as well.
```
In [1]: df = DataFrame(np.random.rand(4,5), columns = list('abcde'))
In [2]: df
Out[2]:
a b c d e
0 0.669701 0.780497 0.955690 0.451573 0.232194
1 0.952762 0.585579 0.890801 0.643251 0.556220
2 0.900713 0.790938 0.952628 0.505775 0.582365
3 0.994205 0.330560 0.286694 0.125061 0.575153
In [5]: df.loc[df['c']>0.5,['a','d']]
Out[5]:
a d
0 0.669701 0.451573
1 0.952762 0.643251
2 0.900713 0.505775
```
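On newer pandas versions (0.24+), the same selection can also be converted with `.to_numpy()`, which is the recommended spelling today. A small sketch with deterministic data (the column names simply mirror the example above):

```python
import numpy as np
import pandas as pd

# deterministic stand-in for the random example above
df = pd.DataFrame(np.arange(20).reshape(4, 5) / 20.0, columns=list('abcde'))

# rows where c > 0.5, columns a and d, as a NumPy array
subset = df.loc[df['c'] > 0.5, ['a', 'd']].to_numpy()
print(subset.shape)  # (2, 2)
```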
And if you want the values (though this should pass directly to sklearn as is, since frames support the array interface):
```
In [6]: df.loc[df['c']>0.5,['a','d']].values
Out[6]:
array([[ 0.66970138, 0.45157274],
[ 0.95276167, 0.64325143],
[ 0.90071271, 0.50577509]])
``` | Use its value directly:
```
In [79]: df[df.c > 0.5][['b', 'e']].values
Out[79]:
array([[ 0.98836259, 0.82403141],
[ 0.337358 , 0.02054435],
[ 0.29271728, 0.37813099],
[ 0.70033513, 0.69919695]])
``` | How to convert a pandas DataFrame subset of columns AND rows into a numpy array? | [
"",
"python",
"arrays",
"numpy",
"pandas",
"scikit-learn",
""
] |
I've installed python and some other packages using web platform installer, but I was having some issues getting a Django project to work so I uninstalled everything and am trying to get it going from scratch. Web Platform Installer still shows that I have 'Windows Azure SDK for Python" and "Python 2.7 (32-bit)" installed however and I can't mark them as uninstalled.
I don't see where to uninstall from WPI at all, I uninstalled them using control panel. I think I had originally installed python from the python site and had version 3.3 and 2.7 (64-bit), but now there are no entries containing 'python' when I try to uninstall a program from the control panel.
Does anyone know what is going on or can I download the setups from somewhere and try them manually? Is there a way to reset what shows as installed in WPI? I tried uninstalling and reinstalling WPI but that didn't help.
 | I found a PowerShell script in the WPI directory that checked for python installs and I had to delete the registry keys specified in it.
---
Let me add some context:
Open the below path,
```
%LOCALAPPDATA%\Microsoft\Web Platform Installer\installers\PythonDetectInstalls
```
in the sub-folder of the above path, there is a PowerShell script "DetectPythonInstalls.ps1" which contains two script lines for checking if Python is installed:
```
$regKey = "hklm:SOFTWARE\Python\PythonCore\$pythonVersion\InstallPath";
$correctRegKey = "hklm:SOFTWARE\Wow6432Node\Python\PythonCore\$pythonVersion\InstallPath";
```
Uninstall all Python versions you do not need. However, we need to remove some registry keys manually using "regedit".
(Safety Note: Please take a backup of the registry keys before removing the above-mentioned registry key)
Ref: [Social.Tecnet](https://social.technet.microsoft.com/Forums/en-US/ee1d9598-d683-441a-9e26-2cd0b45730c9/uninstall-an-application-installed-with-web-platform-installer-50?forum=windowsazuredevelopment) | For what it's worth, I just deleted the folder containing the installed PHP versions(5.3,5.4,5.5), which for me was \Program Files (x86)\IIS Express\PHP. Also, I removed "\Program Files (x86)\iis express\php\5.3" from the search path.
When I return to the web installer the 'Add' buttons are enabled.
I opened the options, set the Web Server to IIS, then installed PHP v5.5. It was installed into \Program Files (x86)\PHP\v5.5 and added to the search path. | How do I uninstall python from web platform installer? | [
"",
"python",
"web-platform-installer",
""
] |
I'm trying to install PyQt5 on my Ubuntu 12.04 box. So after downloading it from [here](http://www.riverbankcomputing.co.uk/software/pyqt/download5) I untarred it, ran `python configure.py` and `make`. Make however, results in the following:
```
cd qpy/ && ( test -f Makefile || /opt/qt5/bin/qmake /home/kram/Downloads/PyQt-gpl-5.0/qpy/qpy.pro -o Makefile ) && make -f Makefile
make[1]: Entering directory '/home/kram/Downloads/PyQt-gpl-5.0/qpy'
cd QtCore/ && ( test -f Makefile || /opt/qt5/bin/qmake /home/kram/Downloads/PyQt-gpl-5.0/qpy/QtCore/QtCore.pro -o Makefile ) && make -f Makefile
make[2]: Entering directory '/home/kram/Downloads/PyQt-gpl-5.0/qpy/QtCore'
g++ -c -pipe -fno-strict-aliasing -O2 -Wall -W -fPIC -D_REENTRANT -DQT_NO_DEBUG -DQT_CORE_LIB -I/opt/qt5/mkspecs/linux-g++ -I. -I. -I../../QtCore -I/usr/local/include/python2.7 -I/opt/qt5/include -I/opt/qt5/include/QtCore -I. -o qpycore_chimera.o qpycore_chimera.cpp
qpycore_chimera.cpp:21:20: fatal error: Python.h: No such file or directory
compilation terminated.
make[2]: *** [qpycore_chimera.o] Error 1
make[2]: Leaving directory '/home/kram/Downloads/PyQt-gpl-5.0/qpy/QtCore'
make[1]: *** [sub-QtCore-make_first] Error 2
make[1]: Leaving directory '/home/kram/Downloads/PyQt-gpl-5.0/qpy'
make: *** [sub-qpy-make_first-ordered] Error 2
```
(I translated some parts of the error message from Dutch to English, so some words may be a bit off from the normal wording..)
Does anybody know what the problem is here? Where could the relevant Python.h file be? | The problem is that the include path for all Python headers in every Makefile will be pointing to `/usr/local/include/python2.7`, which should have been `/usr/include/python2.7`.
There are 2 possible solutions for this. Either you can change all the occurrence of this in every Makefile or else you can create a symlink to that location
```
sudo ln -s /usr/include/python2.7 /usr/local/include/python2.7
```
Now you can run `make` | ```
sudo apt-get install python-dev
```
You're missing the Python header files. | Fatal error while compiling PyQt5: Python.h does not exist | [
"",
"python",
"qt",
"installation",
"pyqt",
"qt5",
""
] |
A variable is set. Another variable is set to the first. The first changes value. The second does not. This has been the nature of programming since the dawn of time.
```
>>> a = 1
>>> b = a
>>> b = b - 1
>>> b
0
>>> a
1
```
I then extend this to Python lists. A list is declared and appended. Another list is declared to be equal to the first. The values in the second list change. Mysteriously, the values in the first list, though not acted upon directly, also change.
```
>>> alist = list()
>>> blist = list()
>>> alist.append(1)
>>> alist.append(2)
>>> alist
[1, 2]
>>> blist
[]
>>> blist = alist
>>> alist.remove(1)
>>> alist
[2]
>>> blist
[2]
>>>
```
Why is this?
And how do I prevent this from happening -- I want `alist` to be unfazed by changes to `blist` (immutable, if you will)? | Variable binding in Python works this way: you assign an object to a variable.
```
a = 4
b = a
```
Both point to `4`.
```
b = 9
```
Now `b` points to somewhere else.
Exactly the same happens with lists:
```
a = []
b = a
b = [9]
```
Now, `b` has a new value, while `a` has the old one.
Till now, everything is clear and you have the same behaviour with mutable and immutable objects.
Now comes your misunderstanding: it is about *modifying* objects.
`list`s are mutable, so if you mutate a list, the modifications are visible via all variables ("name bindings") which exist:
```
a = []
b = a # the same list
c = [] # another empty one
a.append(3)
print a, b, c # a as well as b = [3], c = [] as it is a different one
d = a[:] # copy it completely
b.append(9)
# now a = b = [3, 9], c = [], d = [3], a copy of the old a resp. b
``` | Python variables are actually **not** variables but **references** to objects (similar to pointers in C). There is a very good explanation of that for beginners in <http://foobarnbaz.com/2012/07/08/understanding-python-variables/>
One way to convince yourself about this is to try this:
```
a=[1,2,3]
b=a
id(a)
68617320
id(b)
68617320
```
id returns the memory address of the given object. Since both are the same for both lists it means that changing one affects the other, because they are, in fact, the same thing. | Why are lists linked in Python in a persistent way? | [
"",
"python",
"list",
"immutability",
""
] |
When I try to receive larger amounts of data it gets cut off and I have to press enter to get the rest of the data. At first I was able to increase it a little bit but it still won't receive all of it. As you can see I have increased the buffer on the conn.recv() but it still doesn't get all of the data. It cuts it off at a certain point. I have to press enter on my raw\_input in order to receive the rest of the data. Is there anyway I can get all of the data at once? Here's the code.
```
port = 7777
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('0.0.0.0', port))
sock.listen(1)
print ("Listening on port: "+str(port))
while 1:
conn, sock_addr = sock.accept()
print "accepted connection from", sock_addr
while 1:
command = raw_input('shell> ')
conn.send(command)
data = conn.recv(8000)
if not data: break
print data,
conn.close()
``` | TCP/IP is a *stream-based* protocol, not a *message-based* protocol. There's no guarantee that every `send()` call by one peer results in a single `recv()` call by the other peer receiving the exact data sent; it might receive the data piece-meal, split across multiple `recv()` calls, due to packet fragmentation.
You need to define your own message-based protocol on top of TCP in order to differentiate message boundaries. Then, to read a message, you continue to call `recv()` until you've read an entire message or an error occurs.
One simple way of sending a message is to prefix each message with its length. Then to read a message, you first read the length, then you read that many bytes. Here's how you might do that:
```
def send_msg(sock, msg):
# Prefix each message with a 4-byte length (network byte order)
msg = struct.pack('>I', len(msg)) + msg
sock.sendall(msg)
def recv_msg(sock):
# Read message length and unpack it into an integer
raw_msglen = recvall(sock, 4)
if not raw_msglen:
return None
msglen = struct.unpack('>I', raw_msglen)[0]
# Read the message data
return recvall(sock, msglen)
def recvall(sock, n):
# Helper function to recv n bytes or return None if EOF is hit
data = bytearray()
while len(data) < n:
packet = sock.recv(n - len(data))
if not packet:
return None
data.extend(packet)
return data
```
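As a quick sanity check, the helpers can be exercised in a single process over `socket.socketpair`. They are repeated here so the snippet runs standalone, and the payload is kept small enough to fit in the socket buffer, since both ends live in one thread:

```python
import socket
import struct

def send_msg(sock, msg):
    # Prefix each message with a 4-byte big-endian length
    sock.sendall(struct.pack('>I', len(msg)) + msg)

def recvall(sock, n):
    # Loop until exactly n bytes have arrived (or EOF)
    data = bytearray()
    while len(data) < n:
        packet = sock.recv(n - len(data))
        if not packet:
            return None
        data.extend(packet)
    return data

def recv_msg(sock):
    raw_msglen = recvall(sock, 4)
    if not raw_msglen:
        return None
    return recvall(sock, struct.unpack('>I', raw_msglen)[0])

# a connected pair of sockets in one process, purely for demonstration
a, b = socket.socketpair()
payload = b'x' * 2000
send_msg(a, payload)
received = recv_msg(b)
print(len(received))  # 2000
a.close()
b.close()
```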
Then you can use the `send_msg` and `recv_msg` functions to send and receive whole messages, and they won't have any problems with packets being split or coalesced on the network level. | You can use it as: `data = recvall(sock)`
```
def recvall(sock):
BUFF_SIZE = 4096 # 4 KiB
data = b''
while True:
part = sock.recv(BUFF_SIZE)
data += part
if len(part) < BUFF_SIZE:
# either 0 or end of data
break
return data
``` | Python Socket Receive Large Amount of Data | [
"",
"python",
"sockets",
""
] |
I want to create a table with a subset of records from a master table.
for example, I have:
```
id name code ref
1 peter 73 2.5
2 carl 84 3.6
3 jack 73 1.1
```
I want to store peter and carl but not jack, because jack has the same code as peter.
I need the max ref!
I try this:
```
SELECT id, name, DISTINCT(code) INTO new_tab
FROM old_tab
WHERE (conditions)
```
but it doesn't work. | You can use window functions for this:
```
select t.id, t.name, t.code, t.ref
from (select t.*,
row_number() over (partition by code order by ref desc) as seqnum
from old_tab t
) t
where seqnum = 1;
```
The insert statement just wraps `insert` around this:
```
insert into new_tab(id, name, code)
select t.id, t.name, t.code
from (select t.*,
row_number() over (partition by code order by ref desc) as seqnum
from old_tab t
) t
where seqnum = 1;
``` | You can try a sub-query like this:
```
SELECT ot.* FROM old_tab ot
JOIN
(
SELECT "code", MAX("ref") AS "MaxRef"
FROM old_tab
GROUP BY "code"
) tbl
ON ot."code" = tbl."code"
AND ot."ref" = tbl."MaxRef"
```
Output:
```
╔════╦═══════╦══════╦═════╗
║ ID ║ NAME  ║ CODE ║ REF ║
╠════╬═══════╬══════╬═════╣
║  1 ║ peter ║   73 ║ 2.5 ║
║  2 ║ carl  ║   84 ║ 3.6 ║
╚════╩═══════╩══════╩═════╝
```
### See [this SQLFiddle](http://sqlfiddle.com/#!1/9307b/1) | How to insert many records excluding some in PostgreSQL | [
"",
"sql",
"database",
"postgresql",
"greatest-n-per-group",
""
] |
I have two tables: `customer` and `mailing` :
```
+==========+ +=============+
| customer | | mailing |
+==========+ +=============+
| id | | id |
+----------+ +-------------+
| name | | customer_id |
+----------+ +-------------+
| mailing_id |
+-------------+
```
Every time I send a mailing to a customer, I add a row in the mailings table with that mailing id. One mailing can be sent to multiple customers.
I want to have a sql call that returns all customers that have not yet received a certain mailing. How to ?
I am using mysql | ```
select * from customer where id not in (
select customer_id from mailing where mailing_id = @mailing_id
)
``` | ```
SELECT * FROM customers c
JOIN mailings m
ON c.id = m.id
WHERE NOT EXISTS (
SELECT id
FROM mailings i
WHERE i.id = c.id
GROUP BY i.id
)
``` | SQL statement for join but not in other table | [
"",
"mysql",
"sql",
"select",
""
] |